New Sinc Methods of Numerical Analysis: Festschrift in Honor of Frank Stenger's 80th Birthday (Trends in Mathematics), 1st ed. 2021. ISBN 3030497151, 9783030497156

This contributed volume honors the 80th birthday of Frank Stenger, who established new Sinc methods in numerical analysis.


Language: English. Pages: 420 [411]. Year: 2021.


Table of contents :
Preface
Contents
Contributors
Part I Applications
1 Sinc-Gaussian Approach for Solving the Inverse Heat Conduction Problem
1.1 Introduction
1.2 Sinc-Type Errors
1.3 Sinc-Gaussian Heat Inversion on R
1.4 Sinc-Gaussian Heat Inversion on R+
1.5 Numerical Examples
Bibliography
2 Poly-Sinc Collocation Method for Solving Coupled Burgers' Equations with a Large Reynolds Number
2.1 Introduction
2.2 Poly-Sinc Approximations
2.3 Collocation of Coupled Equations
2.4 Numerical Results
2.5 Conclusion
Bibliography
3 Sinc Projection Solutions of Fredholm Integral Equations
3.1 Introduction
3.2 The Basics of the Sinc Method
3.3 Sinc-Collocation Method
3.4 Sinc-Nyström Method
3.5 Sinc-Convolution Method
3.5.1 Sinc-Convolution Scheme
3.6 Convergence Analysis
3.6.1 Sinc-Nyström Method
3.6.2 Sinc-Collocation Method
3.6.2.1 Sinc-Convolution
3.7 Numerical Illustrations
3.8 Conclusion
Bibliography
4 Lévy-Schrödinger Equation: Their Eigenvalues and Eigenfunctions Using Sinc Methods
4.1 Introduction
4.2 Approximation Method
4.2.1 Sinc Methods
4.2.2 Discretization Formula
4.2.3 Sinc Collocation of Fractional Sturm-Liouville Problems
4.3 Numerical Results
4.3.1 Harmonic Oscillator
4.3.2 Quarkonium Models
4.3.3 Modified Pöschel-Teller Potential
4.3.4 Bistable Quantum Wells
4.3.5 Finite Quantum Well
4.4 Conclusions
Appendix
Bibliography
5 Application of Sinc on the Multi-Order Fractional Differential Equations
5.1 Introduction
5.2 Sinc Function
5.3 Treatment of the Riemann-Liouville Fractional Derivatives of Order α>0 by Means of Sinc Methods
5.3.1 SE-Sinc and DE-Sinc Methods
5.3.2 SE-DE-Sinc Method
5.4 Sinc Collocation Methods
5.5 Error Analysis
5.5.1 Error Analysis of Sinc Collocation Methods
5.6 Illustration Numerical Results
5.7 Conclusion
Appendix 1
Appendix 2
Bibliography
6 Election Integrity Audits to Ensure Election Outcome Accuracy
6.1 Introduction Definitions, Assumptions, History
6.1.1 Definitions
6.1.2 Assumptions
6.2 Audit Units: Multiple Ballot Versus Single-Ballot
6.2.1 Single Ballot Sampling for Wide-Margin Contests
6.2.2 Multiple-Ballot Sampling: Maximum Level of Undetectability by Observation (MLUO)
6.3 Upper Margin Error Bounds
6.3.1 The Just-Winner/Just-Loser Upper Margin Error Bound
6.4 Post-Election Audit Sampling Methods
6.4.1 Uniform Sampling: Dopp/Stenger's Solution
6.4.1.1 Calculate C, the Minimum Number of Miscounted Audit Units that Could Cause an Incorrect Outcome
6.4.1.2 The Probability Equation for the Sample Size, S
6.4.1.3 Frank Stenger's Numerical Solution for Finding the Sample Size, S
6.4.2 Weighted Sampling: Proportional to Upper Error Bounds
6.4.2.1 The Probability Equation for the Sample Size, t
6.4.3 Timing and Additional Non-randomly Selected Audit Units
6.4.4 Allow the Just-Losing Candidate to Select Additional Audit Units
6.4.5 Select Additional Audit Unit(s) from ``Missed'' Jurisdictions
6.4.6 Timing: Complete Prior to Certification
6.5 Post-Election Audit Expansion
6.5.1 Number of Audit Expansion Rounds
6.5.2 Net Margin Error Threshold to Expand an Audit
6.5.3 Recalculate the Next Audit Round
6.6 Summary and Recommendations
Appendix 1
Proof That the Just-Winning-Losing Candidate Pair Upper Margin Error Bound Produces the Largest Sample Size as Compared to Other Winning-Losing Candidate Pairs
Appendix 2
References
7 Numerical Solution of the Falkner-Skan Equation Arising in Boundary Layer Theory Using the Sinc-Collocation Method
7.1 Introduction
7.2 Properties of the Solution
7.3 The Transformed Problem
7.4 The Sinc-Collocation Method
7.4.1 Preliminaries
7.4.2 Sinc-Collocation Method
7.5 Numerical Results
7.6 Conclusions
Bibliography
8 Sinc Methods on Polyhedra
8.1 Introduction
8.2 Piecewise Complex Structure on a Complex
8.2.1 Complexified Convex Hull
8.3 Quadrature
8.3.1 Holomorphic Area Measure
8.3.2 Sinc Quadrature on Products
8.3.3 Sinc Quadrature on a Face of a Complex
8.4 Approximation
8.4.1 Rectangulation of a Complex
8.4.2 Approximation on Products
8.4.3 Decomposition of Functions
8.4.3.1 Approximation of Hermite Error
8.4.3.2 Approximation on a Cube
Bibliography
Part II New Developments
9 Indefinite Integration Operators, Identities, and Their Approximations
9.1 Introduction and Summary
9.2 The Hilbert Space and the Operators
9.2.1 The Operators J
9.2.2 Numerical Ranges
9.2.3 Indefinite Convolution via Fourier Transforms
9.2.4 Optimal Control
9.2.5 Fourier Transform Inversion
9.3 Connection with Interpolatory Approximation
9.4 Applications
9.4.1 Legendre Polynomial Approximation of a Model
9.4.2 Reconstruction from Statistical Data
9.4.3 Exact Formulas and Their Approximation
Bibliography
10 An Overview of the Computation of the Eigenvalues Using Sinc-Methods
10.1 Introduction
10.2 Classical Sinc-Method
10.2.1 Error Analysis of the Sinc-Method
10.2.2 General Second Order Differential Equations
10.2.3 λ-Type Problems
10.2.4 Discontinuous Problems
10.3 Regularized-Sinc Method
10.3.1 Dirac Systems
10.3.2 λ-Type Problems
10.3.3 Discontinuous Problems
10.4 Hermite Method
10.4.1 Hermite-Type Sampling
10.4.2 Computations of Eigenvalues with Hermite-Type Interpolation
10.5 Sinc-Gaussian Method
10.5.1 Sinc-Gaussian Operator
10.5.2 Computations of Eigenvalues with Sinc-Gaussian Method
10.6 Hermite-Gaussian Method
10.6.1 Hermite-Gauss Operator
10.6.2 Computations of Eigenvalues with Hermite-Gaussian Method
10.7 Generalized Sinc-Gaussian Method
10.7.1 Generalized Sinc-Gaussian Operator
10.7.2 Computations of Eigenvalues with Generalized Sinc-Gaussian Method
10.8 Conclusions
Bibliography
11 Completely Monotonic Fredholm Determinants
11.1 Introduction
11.2 Finite Dimensional Case
11.3 Infinite Dimensional Case
11.4 Examples
Bibliography
12 The Influence of Jumps on the Sinc Interpolant, and Ways to Overcome It
12.1 Introduction
12.2 The Generalized Euler–Maclaurin Formula for Cauchy Integrals
12.3 The Formula for the Influence of Jumps on the Sinc Interpolant
12.4 Fully Discrete Correction with Finite Differences
12.5 Quotients of Sinc Interpolants Corrected with Finite Differences
12.6 Extrapolated Sinc Interpolants
12.7 Final Remarks
Bibliography
13 Construction of Approximation Formulas for Analytic Functions by Mathematical Optimization
13.1 Introduction
13.1.1 Function Approximation
13.1.2 Numerical Integration
13.1.3 Organization of This Paper
13.2 Mathematical Preliminaries
13.2.1 Weight Functions and Weighted Hardy Spaces
13.2.2 Optimal Formula for Approximating Functions
13.2.3 Optimal Numerical Integration Formula
13.3 Accurate Formulas for Approximating Functions
13.3.1 Characterization of Optimal Formulas for Approximating Functions
13.3.2 Basic Idea for Constructing Accurate Formulas for Approximating Functions
13.3.3 Construction of Accurate Formulas for Approximating Functions
13.4 Accurate Numerical Integration Formulas
13.4.1 Characterization of Optimal Numerical Integration Formulas
13.4.2 Construction of Accurate Numerical Integration Formulas
13.4.2.1 Basic Idea
13.4.2.2 Accurate Formulas for Numerical Integration
13.5 Numerical Experiments
13.5.1 Approximation of Functions
13.5.2 Numerical Integration
13.6 Concluding Remarks
Appendix 1: Proof of the Fact That We Have Only to Consider Real Sampling Points in Sect. 13.4.1
Appendix 2: Proof of Inequality (13.35)
Property of the Minimizers of F
Hermite Interpolation
Proof of Inequality (13.35)
Bibliography
14 LU Factorization of Any Matrix
14.1 Introduction
14.2 Calculations
14.2.1 Computations
14.2.1.1 Example Factorization
14.2.2 Applications
Bibliography
Part III Frank Stenger's Work
15 Publications by, and About, Frank Stenger
15.1 Bibliographic Databases
15.2 Stenger Publications
Bibliography
Index


Trends in Mathematics

Gerd Baumann Editor

New Sinc Methods of Numerical Analysis Festschrift in Honor of Frank Stenger’s 80th Birthday

Trends in Mathematics

Trends in Mathematics is a series devoted to the publication of volumes arising from conferences and lecture series focusing on a particular topic from any area of mathematics. Its aim is to make current developments available to the community as rapidly as possible without compromise to quality and to archive these for reference.

Proposals for volumes can be submitted using the Online Book Project Submission Form at our website www.birkhauser-science.com.

Material submitted for publication must be screened and prepared as follows: All contributions should undergo a reviewing process similar to that carried out by journals and be checked for correct use of language which, as a rule, is English. Articles without proofs, or which do not contain any significantly new results, should be rejected. High-quality survey papers, however, are welcome.

We expect the organizers to deliver manuscripts in a form that is essentially ready for direct reproduction. Any version of TeX is acceptable, but the entire collection of files must be in one particular dialect of TeX and unified according to simple instructions available from Birkhäuser. Furthermore, in order to guarantee the timely appearance of the proceedings, it is essential that the final version of the entire material be submitted no later than one year after the conference.

More information about this series at http://www.springer.com/series/4961


Editor Gerd Baumann Mathematics Department German University in Cairo New Cairo City, Egypt University of Ulm Ulm, Germany

ISSN 2297-0215 / ISSN 2297-024X (electronic)
Trends in Mathematics
ISBN 978-3-030-49715-6 / ISBN 978-3-030-49716-3 (eBook)
https://doi.org/10.1007/978-3-030-49716-3
Mathematics Subject Classification: 65-XX, 34A08, 65R20, 33F05

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2021

This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This book is published under the imprint Birkhäuser, www.birkhauser-science.com, by the registered company Springer Nature Switzerland AG. The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland.

Frank Stenger 2018

This book is compiled in honor of Frank Stenger's 80th birthday and collects new developments in the field of Sinc methods.

Preface

Frank Stenger, through his papers published in the 1960s and 1970s, is considered the founder of modern Sinc theory. He was born in Magyarpolány, Hungary, in July 1938. To celebrate his 80th birthday, an international symposium in recognition of Stenger's major contributions to mathematics took place at Rhodes, Greece. The symposium was held from 13 to 18 September 2018 and was attended by participants from all over the world. It was organized by Gerd Baumann of the Mathematics Department of the German University in Cairo.

The symposium was devoted to Sinc theory and its new developments in numerical computation. The impact and development of this theory, from its origin to the present day, was the subject of a series of general presentations by leading experts in the field. The colloquium concluded with a workshop covering recent research in this highly active area.

Frank's work with Sinc methods began in the 1960s, when he substantially revised a paper written by John McNamee and Ian Whitney on Whittaker's cardinal function. The work took off, as Sinc methods turned out to be an excellent tool for approximation. The memorable description of the Sinc function in the original paper (most likely due to McNamee) was ". . . a function of royal blood, whose distinguished properties separate it from its bourgeois brethren." Since then, Sinc methods have developed to the point where they can solve problems in a wide range of areas in mathematics, physics, electrical engineering, and fluid dynamics, and they are the primary tool used in wavelet applications. For Frank, Sinc methods have always been the center of his work; he once remarked ". . . it's been a very, very lucky area in which to work . . ." and he continues to work in this area to this day.

The organizers of the symposium decided not to publish proceedings of the meeting in the usual form. Instead, it was planned to prepare, in conjunction with the symposium, a volume containing a complete bibliography of Stenger's published work, and to present the various aspects of Sinc theory at a rather general level, making it accessible to the nonspecialist.
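For readers meeting the Whittaker cardinal function for the first time, the truncated cardinal (Sinc) series it refers to can be sketched numerically. The following is a generic illustration, not code from any chapter of this volume; the test function f, the step size h, and the truncation index N are arbitrary choices made here for demonstration.

```python
import numpy as np

def cardinal(f, h, N, x):
    """Truncated Whittaker cardinal series
        C_N(f, h)(x) = sum_{k=-N}^{N} f(k h) * sinc((x - k h) / h),
    where np.sinc(t) = sin(pi t) / (pi t) is exactly the Sinc kernel."""
    k = np.arange(-N, N + 1)
    return np.sum(f(k * h) * np.sinc((x - k * h) / h))

# Example: a smooth function decaying rapidly on the real line,
# the setting in which the cardinal series converges very fast.
f = lambda t: np.exp(-t**2)
x = 0.3
approx = cardinal(f, h=0.25, N=40, x=x)
print(f"interpolation error at x = {x}: {abs(approx - f(x)):.2e}")
```

For functions that are analytic in a strip around the real axis and decay at infinity, the error of this approximation decreases exponentially as h shrinks and N grows, which is the basic fact underlying the Sinc methods surveyed in this volume.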


The present volume is a collection of 15 chapters relating to the symposium. It contains, in somewhat extended form, the survey lectures on Sinc theory given by the speakers. The contributions are divided into three parts incorporating applications, new developments, and bibliographic work. To complement the range of topics, the editor invited a few participants and coworkers of Frank Stenger to provide a review or other contributions in an area related to their current work covering some important aspects of current interest. Thus, the first part of the volume contains contributions which are application oriented using Sinc methods. The second part includes contributions which open the horizon to new fields and new developments. The volume ends with a comprehensive bibliography of Frank Stenger's work.

We hope that these articles, besides being a tribute to Frank Stenger, will be a useful resource for researchers, graduate students, and others looking for an overview and new developments in the field of Sinc methods. The articles in this volume can be read essentially independently. The authors have included cross-references to other sources. In order to respect the style of the authors, the editor did not ask them to use a uniform standard for notations and conventions of terminology.

As regards the present volume, we are grateful to our authors for all the efforts they have put into the project, as well as to our referees for generously giving of their time. We thank Nelson Beebe, who undertook the immense task of preparing the bibliography of Frank's work. We are much indebted to Thomas Hempfling from Springer Verlag for continuing support in a fruitful and rewarding partnership.

Ulm, Germany
November 2019

Gerd Baumann

Contents

Part I Applications 1

2

3

Sinc-Gaussian Approach for Solving the Inverse Heat Conduction Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . M. H. Annaby and R. M. Asharabi 1.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.2 Sinc-Type Errors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.3 Sinc-Gaussian Heat Inversion on R . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.4 Sinc-Gaussian Heat Inversion on R+ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.5 Numerical Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Poly-Sinc Collocation Method for Solving Coupled Burgers’ Equations with a Large Reynolds Number . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Maha Youssef 2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.2 Poly-Sinc Approximations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.3 Collocation of Coupled Equations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.4 Numerical Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.5 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Sinc Projection Solutions of Fredholm Integral Equations . . . . . . . . . . . . Khadijeh Nedaiasl 3.1 Introduction . . . . . 
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3.2 The Basics of the Sinc Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3.3 Sinc-Collocation Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3.4 Sinc-Nyström Method. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3.5 Sinc-Convolution Method. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3.5.1 Sinc-Convolution Scheme . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

3 3 6 10 13 15 20 23 23 24 27 28 33 33 35 35 37 39 41 42 45

ix

x

Contents

3.6

Convergence Analysis. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3.6.1 Sinc-Nyström Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3.6.2 Sinc-Collocation Method. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3.7 Numerical Illustrations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3.8 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

4

5

Lévy-Schrödinger Equation: Their Eigenvalues and Eigenfunctions Using Sinc Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Gerd Baumann 4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4.2 Approximation Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4.2.1 Sinc Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4.2.2 Discretization Formula . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4.2.3 Sinc Collocation of Fractional Sturm-Liouville Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4.3 Numerical Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4.3.1 Harmonic Oscillator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4.3.2 Quarkonium Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4.3.3 Modified Pöschel-Teller Potential . . . . . . . . . . . . . . . . . . . . . . . . . 4.3.4 Bistable Quantum Wells . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4.3.5 Finite Quantum Well . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4.4 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Application of Sinc on the Multi-Order Fractional Differential Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . J. Rashidinia, A. Parsa, and R. Salehi 5.1 Introduction . . . . . . . . . . 
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5.2 Sinc Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5.3 Treatment of the Riemann-Liouville Fractional Derivatives of Order α > 0 by Means of Sinc Methods . . . . . . . . . . . . . . . . . . . . . . . . . 5.3.1 SE-Sinc and DE-Sinc Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5.3.2 SE-DE-Sinc Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5.4 Sinc Collocation Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5.5 Error Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5.5.1 Error Analysis of Sinc Collocation Methods . . . . . . . . . . . . . . 5.6 Illustration Numerical Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5.7 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Appendix 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Appendix 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

46 46 48 50 52 52 55 56 59 60 62 67 68 69 82 87 89 91 95 96 99 99 102 103 103 104 105 107 112 113 118 119 120 121

Contents

6

7

Election Integrity Audits to Ensure Election Outcome Accuracy . . . . . Kathy Dopp 6.1 Introduction Definitions, Assumptions, History . . . . . . . . . . . . . . . . . . . . 6.1.1 Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6.1.2 Assumptions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6.2 Audit Units: Multiple Ballot Versus Single-Ballot . . . . . . . . . . . . . . . . . 6.2.1 Single Ballot Sampling for Wide-Margin Contests . . . . . . . 6.2.2 Multiple-Ballot Sampling: Maximum Level of Undetectability by Observation (MLUO) . . . . . . . . . . . . . . 6.3 Upper Margin Error Bounds . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6.3.1 The Just-Winner/Just-Loser Upper Margin Error Bound . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6.4 Post-Election Audit Sampling Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6.4.1 Uniform Sampling: Dopp/Stenger’s Solution . . . . . . . . . . . . . 6.4.2 Weighted Sampling: Proportional to Upper Error Bounds . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6.4.3 Timing and Additional Non-randomly Selected Audit Units . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6.4.4 Allow the Just-Losing Candidate to Select Additional Audit Units . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6.4.5 Select Additional Audit Unit(s) from “Missed” Jurisdictions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6.4.6 Timing: Complete Prior to Certification . . . . . . . . . . . . . . . . . . . 6.5 Post-Election Audit Expansion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
6.5.1 Number of Audit Expansion Rounds . . . . . . . . . . . . . . . . . . . . . . 6.5.2 Net Margin Error Threshold to Expand an Audit . . . . . . . . . 6.5.3 Recalculate the Next Audit Round. . . . . . . . . . . . . . . . . . . . . . . . . 6.6 Summary and Recommendations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Appendix 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Appendix 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Numerical Solution of the Falkner-Skan Equation Arising in Boundary Layer Theory Using the Sinc-Collocation Method . . . . . . . . . Basem Attili 7.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7.2 Properties of the Solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7.3 The Transformed Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7.4 The Sinc-Collocation Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7.4.1 Preliminaries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7.4.2 Sinc-Collocation Method. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7.5 Numerical Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7.6 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
. . . . . . . . . . . .

xi

123 123 124 126 126 127 127 128 129 130 131 134 135 135 136 136 136 137 137 138 138 140 141 143 147 147 149 150 152 152 155 158 160 161

xii

8

Contents

Sinc Methods on Polyhedra . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Marc Stromberg 8.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8.2 Piecewise Complex Structure on a Complex. . . . . . . . . . . . . . . . . . . . . . . . 8.2.1 Complexified Convex Hull . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8.3 Quadrature . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8.3.1 Holomorphic Area Measure. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8.3.2 Sinc Quadrature on Products . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8.3.3 Sinc Quadrature on a Face of a Complex. . . . . . . . . . . . . . . . . . 8.4 Approximation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8.4.1 Rectangulation of a Complex . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8.4.2 Approximation on Products . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8.4.3 Decomposition of Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

163 163 165 170 176 176 178 181 182 182 187 197 224

Part II New Developments 9

10

Indefinite Integration Operators Identities, and Their Approximations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Frank Stenger 9.1 Introduction and Summary. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9.2 The Hilbert Space and the Operators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9.2.1 The Operators J . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9.2.2 Numerical Ranges . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9.2.3 Indefinite Convolution via Fourier Transforms . . . . . . . . . . . 9.2.4 Optimal Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9.2.5 Fourier Transform Inversion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9.3 Connection with Interpolatory Approximation . . . . . . . . . . . . . . . . . . . . . 9.4 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9.4.1 Legendre Polynomial Approximation of a Model . . . . . . . . 9.4.2 Reconstruction from Statistical Data . . . . . . . . . . . . . . . . . . . . . . 9.4.3 Exact Formulas and Their Approximation . . . . . . . . . . . . . . . . Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . An Overview of the Computation of the Eigenvalues Using Sinc-Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . M. H. Annaby, R. M. Asharabi, and M. M. Tharwat 10.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.2 Classical Sinc-Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
. . . . . . . . . 10.2.1 Error Analysis of the Sinc-Method . . . . . . . . . . . . . . . . . . . . . . . . 10.2.2 General Second Order Differential Equations. . . . . . . . . . . . . 10.2.3 λ-Type Problems. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.2.4 Discontinuous Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

227 228 229 229 230 231 231 233 235 237 238 239 239 253 255 256 257 257 260 265 266

Contents

10.3

11

12

xiii

10.3 Regularized-Sinc Method
10.3.1 Dirac Systems
10.3.2 λ-Type Problems
10.3.3 Discontinuous Problems
10.4 Hermite Method
10.4.1 Hermite-Type Sampling
10.4.2 Computations of Eigenvalues with Hermite-Type Interpolation
10.5 Sinc-Gaussian Method
10.5.1 Sinc-Gaussian Operator
10.5.2 Computations of Eigenvalues with Sinc-Gaussian Method
10.6 Hermite-Gaussian Method
10.6.1 Hermite-Gauss Operator
10.6.2 Computations of Eigenvalues with Hermite-Gaussian Method
10.7 Generalized Sinc-Gaussian Method
10.7.1 Generalized Sinc-Gaussian Operator
10.7.2 Computations of Eigenvalues with Generalized Sinc-Gaussian Method
10.8 Conclusions
Bibliography


11 Completely Monotonic Fredholm Determinants
Mourad E. H. Ismail and Ruiming Zhang
11.1 Introduction
11.2 Finite Dimensional Case
11.3 Infinite Dimensional Case
11.4 Examples
Bibliography


12 The Influence of Jumps on the Sinc Interpolant, and Ways to Overcome It
Jean-Paul Berrut
12.1 Introduction
12.2 The Generalized Euler–Maclaurin Formula for Cauchy Integrals
12.3 The Formula for the Influence of Jumps on the Sinc Interpolant
12.4 Fully Discrete Correction with Finite Differences
12.5 Quotients of Sinc Interpolants Corrected with Finite Differences
12.6 Extrapolated Sinc Interpolants
12.7 Final Remarks
Bibliography



13 Construction of Approximation Formulas for Analytic Functions by Mathematical Optimization
Ken'ichiro Tanaka and Masaaki Sugihara
13.1 Introduction
13.1.1 Function Approximation
13.1.2 Numerical Integration
13.1.3 Organization of This Paper
13.2 Mathematical Preliminaries
13.2.1 Weight Functions and Weighted Hardy Spaces
13.2.2 Optimal Formula for Approximating Functions
13.2.3 Optimal Numerical Integration Formula
13.3 Accurate Formulas for Approximating Functions
13.3.1 Characterization of Optimal Formulas for Approximating Functions
13.3.2 Basic Idea for Constructing Accurate Formulas for Approximating Functions
13.3.3 Construction of Accurate Formulas for Approximating Functions
13.4 Accurate Numerical Integration Formulas
13.4.1 Characterization of Optimal Numerical Integration Formulas
13.4.2 Construction of Accurate Numerical Integration Formulas
13.5 Numerical Experiments
13.5.1 Approximation of Functions
13.5.2 Numerical Integration
13.6 Concluding Remarks
Bibliography
14 LU Factorization of Any Matrix
Marc Stromberg
14.1 Introduction
14.2 Calculations
14.2.1 Computations
14.2.2 Applications
Bibliography


Part III Frank Stenger's Work
15 Publications by, and About, Frank Stenger
Nelson H. F. Beebe
15.1 Bibliographic Databases
15.2 Stenger Publications
Bibliography


Index

Contributors

M. H. Annaby Department of Mathematics, Faculty of Science, Cairo University, Giza, Egypt
R. M. Asharabi Department of Mathematics, College of Arts and Sciences, Najran University, Najran, Saudi Arabia
Basem Attili University of Sharjah, Sharjah, United Arab Emirates
Gerd Baumann Mathematics Department, German University in Cairo, New Cairo City, Egypt; University of Ulm, Ulm, Germany
Nelson H. F. Beebe University of Utah, Department of Mathematics, Salt Lake City, UT, USA
Jean-Paul Berrut Département de Mathématiques, Université de Fribourg, Fribourg, Switzerland
Kathy Dopp University of Utah, Salt Lake City, UT, USA
Mourad E. H. Ismail College of Science, Northwest A&F University, Yangling, Shaanxi, P. R. China; Department of Mathematics, University of Central Florida, Orlando, FL, USA
Khadijeh Nedaiasl Institute for Advanced Studies in Basic Sciences, Zanjan, Iran
A. Parsa School of Mathematics, Iran University of Science and Technology, Narmak, Tehran, Iran
J. Rashidinia School of Mathematics, Iran University of Science and Technology, Narmak, Tehran, Iran
R. Salehi School of Mathematics, Iran University of Science and Technology, Narmak, Tehran, Iran
Frank Stenger SINC, LLC, School of Computing, Department of Mathematics, University of Utah, Salt Lake City, UT, USA


Marc Stromberg Pacific States Marine Fisheries Commission, Portland, OR, USA
Ken'ichiro Tanaka Department of Mathematical Informatics, Graduate School of Information Science and Technology, The University of Tokyo, Tokyo, Japan
M. M. Tharwat Department of Mathematics, Faculty of Science, Beni-Suef University, Beni-Suef, Egypt
Maha Youssef Institute of Mathematics and Computer Science, University of Greifswald, Greifswald, Germany
Ruiming Zhang College of Science, Northwest A&F University, Yangling, Shaanxi, P. R. China

Part I

Applications

The first part of the book contains eight contributions which are mainly applications of Sinc methods. The chapters demonstrate the variety of fields in which Sinc methods can be applied to problems in mathematics, physics, engineering, and sociology. This part thus demonstrates the strength of Sinc methods through the variety of these computations.

Chapter 1

Sinc-Gaussian Approach for Solving the Inverse Heat Conduction Problem

M. H. Annaby and R. M. Asharabi

Abstract We introduce a new numerical method based on the sinc-Gaussian operator for solving the inverse heat equation. We establish rigorous proofs of the error estimates for both truncation and aliasing errors. The effect of the amplitude error, which has not been considered before, is also investigated theoretically and numerically for the first time in inverse heat problems. The domain of solvability of the inverse heat problem is enlarged, and numerical examples show the superiority of the technique over the classical sinc-method. The power of the method is exhibited through several examples.

Keywords Inverse heat equation · Sinc-Gaussian sampling · Gaussian convergence factor · Amplitude and truncation errors

1.1 Introduction

The direct heat conduction problem consists in finding the temperature u(x, t) which satisfies

$$\partial_t u(x,t) = \partial_{xx} u(x,t), \quad x \in J,\ t > 0, \qquad u(x,0) = f(x), \quad x \in J, \tag{1.1}$$

M. H. Annaby () Department of Mathematics, Faculty of Science, Cairo University, Giza, Egypt e-mail: [email protected] R. M. Asharabi Department of Mathematics, College of Arts and Sciences, Najran University, Najran, Saudi Arabia e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 G. Baumann (ed.), New Sinc Methods of Numerical Analysis, Trends in Mathematics, https://doi.org/10.1007/978-3-030-49716-3_1


where f is a given function. Here J = (0, ∞) or J = ℝ. The problem may be solved analytically via (in the case J = ℝ)

$$u(x,t) = \frac{1}{\sqrt{4\pi t}} \int_{-\infty}^{\infty} \exp\left(\frac{-(x-y)^2}{4t}\right) f(y)\, dy, \tag{1.2}$$
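As a quick illustration of the forward map, (1.2) can be evaluated by straightforward quadrature of the heat kernel. The sketch below is not taken from the paper; it uses a plain equispaced rule (the truncation interval and node count are illustrative choices) and checks the result against the known closed-form solution for Gaussian initial data, the pair that appears later in Example 1.2.

```python
import numpy as np

def heat_solution(f, x, t, y_max=40.0, n_nodes=4001):
    """Evaluate u(x, t) from (1.2) by quadrature on a truncated interval.
    An equispaced rule is effectively spectrally accurate here because the
    integrand is smooth and decays like a Gaussian."""
    y, dy = np.linspace(-y_max, y_max, n_nodes, retstep=True)
    kernel = np.exp(-(x[:, None] - y[None, :]) ** 2 / (4.0 * t)) / np.sqrt(4.0 * np.pi * t)
    return (kernel * f(y)).sum(axis=1) * dy

# Check against the exact solution for f(x) = exp(-x^2/4):
# u(x, t) = exp(-x^2/(4(t+1))) / sqrt(t+1)
x = np.linspace(-1.0, 1.0, 9)
t = 0.5
u_num = heat_solution(lambda y: np.exp(-y ** 2 / 4.0), x, t)
u_exact = np.exp(-x ** 2 / (4.0 * (t + 1.0))) / np.sqrt(t + 1.0)
assert np.max(np.abs(u_num - u_exact)) < 1e-8
```

The inverse problem studied in this chapter is the much harder task of recovering f from such values u(x, t).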

provided that f is well behaved. For instance, (1.2) is well defined if f ∈ L²(ℝ), cf. [11, 22]. The inverse heat problem, which we are considering in this paper, is to determine f from a known solution u(x, t). The authors of [8] proposed a sinc-interpolation method to solve the inverse problem. Their procedure can be outlined as follows. For d > 0 and S_d ⊂ ℂ being the infinite strip

$$S_d := \{z \in \mathbb{C} : |\operatorname{Im} z| < d\},$$

we define H^p(S_d), 1 ≤ p < ∞, as the set of holomorphic functions on S_d such that, with D_d(ε) defined for 0 < ε < 1 by

$$D_d(\varepsilon) = \{z \in \mathbb{C} : |\operatorname{Re} z| < 1/\varepsilon,\ |\operatorname{Im} z| < d(1-\varepsilon)\},$$

we have N^p(f, S_d) < ∞, 1 ≤ p < ∞, where

$$N^p(f, S_d) := \lim_{\varepsilon \to 0} \left( \int_{\partial D_d(\varepsilon)} |f(z)|^p \, |dz| \right)^{1/p}. \tag{1.3}$$

Gilliam et al. [8] proposed a solution to the inverse heat problem (1.1) under the restriction f ∈ H¹(S_d). This restriction is relaxed below. The solution of [8] is based on the expandability of f via the sinc-interpolation series

$$f(y) \approx \sum_{n=-\infty}^{\infty} f(nh)\, \operatorname{sinc}\left(\frac{y - nh}{h}\right), \tag{1.4}$$

(1.4)

where h > 0 is a fixed step-size and the sinc function is defined by ⎧ ⎨ sin π t , t = 0, sinc(t) := πt ⎩ 1, t = 0.

(1.5)

If we define the aliasing error E [f ](y) via E [f ](y) = f (y) −

∞  n=−∞

 f (nh)sinc

 y − nh , h

(1.6)


then [18, Theorem 3.1.3] for f ∈ H¹(S_d), E[f](y) is bounded via

$$\|E[f]\|_\infty \le \frac{N^1(f, S_d)}{2\pi d\, \sinh(\pi d/h)} = O\left(\exp\left(-\frac{\pi d}{h}\right)\right), \qquad \text{as } h \to 0^+, \tag{1.7}$$

where the infinity norm is defined by ‖f‖_∞ = sup_{x∈ℝ} |f(x)|. Moreover, if the function f decays as

$$|f(x)| \le c\, \exp(-a|x|), \qquad x \in \mathbb{R}, \tag{1.8}$$

for some positive constants c and a, and one selects h = √(πd/(aN)), then the combined aliasing and truncation error, given by

$$E_N[f](y) = f(y) - \sum_{|n| \le N} f(nh)\, \operatorname{sinc}\left(\frac{y - nh}{h}\right), \tag{1.9}$$

for some N ∈ ℕ, is estimated in [18, Theorem 3.1.7] to be

$$\|E_N[f]\|_\infty \le C \sqrt{N}\, \exp\left(-\sqrt{\pi a d N}\right), \qquad \text{as } N \to \infty, \tag{1.10}$$

where the constant C depends only on f, a and d.

The technique of [8], see also [10, pp. 27–31], rests on approximating {f(nh)} for −N ≤ n ≤ N by solving the truncated linear system of equations

$$B_N \mathbf{f} = 2\pi \mathbf{u} \iff \sum_{n=-N}^{N} f(nh)\, \beta_{n-k} = 2\pi\, u(kh, t), \qquad -N \le k \le N, \tag{1.11}$$

where B_N = (b_{jk})_{−N≤j,k≤N} = (β_{j−k})_{−N≤j,k≤N},

$$\mathbf{f} = \big(f(-Nh),\, f((-N+1)h),\, \ldots,\, f(0),\, \ldots,\, f((N-1)h),\, f(Nh)\big)^\top, \tag{1.12}$$

$$\mathbf{u} = \big(u(-Nh, t),\, u((-N+1)h, t),\, \ldots,\, u(0, t),\, \ldots,\, u((N-1)h, t),\, u(Nh, t)\big)^\top, \tag{1.13}$$

$$\beta_l := \int_{-\pi}^{\pi} \exp(il\tau)\, \exp\left(-\frac{\tau^2}{4\pi^2}\right) d\tau, \qquad -2N \le l \le 2N, \tag{1.14}$$

and t := h²/(2π)². Hereafter, A^⊤ denotes the transpose of a matrix A.

While the paper [8] nicely connects the sinc-method to the inverse heat problem, the authors did not investigate the amplitude error resulting from approximating the samples {f(nh)}_{n=−N}^{N} by {f̃(nh)}_{n=−N}^{N}. Moreover, in the analysis of the stability of (1.11), it is proved that |β_l| ≤ 1/l, l ≥ 1. However, this


estimate does not imply the boundedness of the operator B = lim_{N→∞} B_N on ℓ². Thus neither the existence of B⁻¹ nor the stability of the system is theoretically guaranteed. However, in [8] the authors introduced satisfactory computations of numerical condition numbers. On the other hand, the connection between sinc-interpolations and inverse problems is rarely investigated, cf. e.g. [16, 23]. As far as we know, no studies have been performed on the connection between the sinc-Gaussian method and inverse heat conduction. Therefore one may ask about the possibility of applying recent advances in sinc-methods to the inverse heat problem, to improve the error analysis as well as the convergence rates. We aim in this paper to use the sinc-Gaussian operator, defined by Wei et al. [20, 21], and developed by Qian et al. [12–14], Schmeisser and Stenger [15], Tanaka et al. [19], and the authors [2, 4, 5], to solve the inverse heat problem. As expected, this technique has the following advantages:

• No need for infinite systems.
• Acceleration of the rate of convergence to O(exp(−πN/2)/√N), which, unlike the bound (1.7) of the sinc-interpolation, is independent of d and h.
• The localization property, which allows one to approximate f on appropriately chosen domains.

The other task of this paper is to investigate the effect of the amplitude error, due to the use of approximating samples, on the solution of the inverse problem. We investigate the amplitude error for the method of [8] and for the sinc-Gaussian interpolation established here. Numerical examples and illustrations are introduced in the last section.

1.2 Sinc-Type Errors

In this section we demonstrate the types of error associated with the use of the sinc-method in the approximation of the inverse heat problem. Let

$$\widetilde{E}[f](y) := f(y) - \sum_{n=-\infty}^{\infty} \widetilde{f}(nh)\, \operatorname{sinc}\left(\frac{y - nh}{h}\right), \tag{1.15}$$

where the f̃(nh) are approximations of the samples f(nh), n ∈ ℤ, such that there is a sufficiently small ε which satisfies ε_n := |f̃(nh) − f(nh)| < ε for all n ∈ ℤ and

$$\varepsilon_n \le |f(nh)|, \qquad n \in \mathbb{Z}. \tag{1.16}$$

In the following we assume that the decay condition (1.16) is fulfilled and εn < ε, for all n ∈ Z.


Theorem 1.1 Let f ∈ H¹(S_d) satisfy the decay condition

$$|f(t)| \le \frac{C_f}{|t|^{\alpha+1}}, \qquad \alpha \in\, ]0, 1],\ |t| > 1, \tag{1.17}$$

where C_f is a positive constant. Then we have, for 0 < ε < min{h, h⁻¹, 1/√e} and y ∈ ℝ,

$$\big|\widetilde{E}[f](y) - E[f](y)\big| \le \left(\frac{4}{\alpha+1}\, e^{3(\alpha+1)/2} + C_f\, 2^{(\alpha+1)/2} e^{1/4}\right) \varepsilon \log(1/\varepsilon), \tag{1.18}$$

where α ∈ ]0, 1[ and the constant C_f depends only on f.

Proof Let p, q > 1, 1/p + 1/q = 1. For y ∈ ℝ, we apply Hölder's inequality and then use an inequality of Splettstößer et al. [17],

$$\left( \sum_{n=-\infty}^{\infty} \left| \operatorname{sinc}\left(\frac{y - nh}{h}\right) \right|^q \right)^{1/q} < p, \tag{1.19}$$

to obtain

$$\big|\widetilde{E}[f](y) - E[f](y)\big| \le p \left( \sum_{n=-\infty}^{\infty} \varepsilon_n^p \right)^{1/p}. \tag{1.20}$$

Using the conditions (1.16) and (1.17) for a sufficiently small ε, together with the technique of [6], see also [1, 3], we obtain the estimate

$$p \left( \sum_{n=-\infty}^{\infty} \varepsilon_n^p \right)^{1/p} \le \left(\frac{4}{\alpha+1}\, e^{3(\alpha+1)/2} + C_f\, 2^{(\alpha+1)/2} e^{1/4}\right) \varepsilon \log(1/\varepsilon). \tag{1.21}$$

Combining (1.21) and (1.20) immediately implies (1.18).

For convenience, we define A_ε[f] to be

$$A_\varepsilon[f] := \left(\frac{4}{\alpha+1}\, e^{3(\alpha+1)/2} + C_f\, 2^{(\alpha+1)/2} e^{1/4}\right) \varepsilon \log(1/\varepsilon). \tag{1.22}$$

Having derived an estimate for the amplitude error (1.18), we can now estimate the error that arises from applying the sinc-method to the inverse problem. We consider both aliasing and amplitude errors.


Corollary 1.1 Let f ∈ H¹(S_d). Then we have, for 0 < ε < min{h, h⁻¹, 1/√e},

$$\|\widetilde{E}[f]\|_\infty \le \frac{N^1(f, S_d)}{2\pi d\, \sinh(\pi d/h)} + A_\varepsilon[f], \tag{1.23}$$

where N¹(f, S_d) is defined in (1.3).

Proof The estimate (1.23) follows directly from combining the estimates (1.18) and (1.7).

To apply the sinc-method to the inverse heat problem, we have only a finite number of observations. Therefore a truncation error arises. In the following we estimate Ẽ_N[f](y) for a positive integer N, where

$$\widetilde{E}_N[f](y) := f(y) - \sum_{|n| \le N} \widetilde{f}(nh)\, \operatorname{sinc}\left(\frac{y - nh}{h}\right). \tag{1.24}$$

Corollary 1.2 Let f ∈ H¹(S_d) obey the decay (1.8). Then for 0 < ε < min{h, h⁻¹, 1/√e} we have

$$\|\widetilde{E}_N[f]\|_\infty \le C \sqrt{N}\, \exp\left(-\sqrt{\pi a d N}\right) + A_\varepsilon[f], \tag{1.25}$$

where C is a positive constant that depends only on f, α and d.

Proof From the triangle inequality, we obtain

$$\|\widetilde{E}_N[f]\|_\infty \le \|E_N[f]\|_\infty + \|\widetilde{E}[f] - E[f]\|_\infty. \tag{1.26}$$

Combining (1.18), (1.10) and (1.26) implies (1.25).

A more general case, in which the decay condition (1.8) is replaced by the relaxed one (1.17), is treated in the following theorem.

Theorem 1.2 Let f ∈ H¹(S_d) be such that (1.17) is fulfilled. Then we have, for h = √(πd/N),

$$\|E_N[f]\|_\infty \le N^1(f, S_d)\, e^{-\sqrt{\pi d N}} + \frac{2}{\alpha\, (\pi d N)^{\alpha/2}}. \tag{1.27}$$

Proof From the definition of E_N[f], we obtain

$$\|E_N[f]\|_\infty \le \|E[f]\|_\infty + \left\| \sum_{|n| > N} f(nh)\, \operatorname{sinc}\left(\frac{\cdot\, - nh}{h}\right) \right\|_\infty. \tag{1.28}$$


The first term on the right-hand side of (1.28) has been estimated in (1.7). Letting h = √(πd/N) in (1.7) implies

$$\|E[f]\|_\infty \le N^1(f, S_d)\, e^{-\sqrt{\pi d N}}. \tag{1.29}$$

We now estimate the second term on the right-hand side of (1.28). Since f obeys the decay condition (1.17), we obtain

$$\left\| \sum_{|n| > N} f(nh)\, \operatorname{sinc}\left(\frac{y - nh}{h}\right) \right\| \le \sum_{|n| > N} |f(nh)| \le \frac{2}{h} \int_{Nh}^{\infty} \frac{dt}{t^{\alpha+1}} = \frac{2}{\alpha\, (\pi d N)^{\alpha/2}}, \tag{1.30}$$

where we have used in the last step that h = √(πd/N). Combining (1.30) and (1.29) implies (1.27).

Considering the amplitude error leads to the following corollary.

Corollary 1.3 Let f ∈ H¹(S_d) for which the decay condition (1.17) is satisfied. Then we have, for h = √(πd/N),

$$\|\widetilde{E}_N[f]\|_\infty \le N^1(f, S_d)\, e^{-\sqrt{\pi d N}} + \frac{2}{\alpha\, (\pi d N)^{\alpha/2}} + A_\varepsilon[f]. \tag{1.31}$$

Proof Results directly from combining (1.27), (1.26) and (1.18).

The following theorem estimates the contribution of the amplitude error, which can be made as small as wished. It is derived under the assumption that B_N is invertible.

Theorem 1.3 Let B_N be invertible and let B̃_N be a perturbed matrix for which

$$\|B_N - \widetilde{B}_N\| < \delta < \frac{1}{\|B_N^{-1}\|}, \tag{1.32}$$

where ‖·‖ is the Euclidean norm. Then B̃_N is also invertible, and if

$$B_N \mathbf{f} = 2\pi \mathbf{u}, \qquad \widetilde{B}_N \widetilde{\mathbf{f}} = 2\pi \mathbf{u},$$

then

$$\|\mathbf{f} - \widetilde{\mathbf{f}}\| \le \frac{2\pi \delta\, \|B_N^{-1}\|\, \|\mathbf{u}\|}{1 - \|B_N^{-1}\|\, \delta}, \tag{1.33}$$

which goes to zero as δ → 0.

Proof Results directly from [9, p. 71].

1.3 Sinc-Gaussian Heat Inversion on ℝ

Let B(S_d) be the class of holomorphic functions on S_d which are bounded on ℝ. Let E₂ be the class of all entire functions that belong to L²(ℝ) when restricted to the real axis. On the class B(S_d), Schmeisser and Stenger defined in [15] a sinc-Gaussian sampling operator G_{h,N} : B(S_d) → E₂ by

$$\mathcal{G}_{h,N}[f](z) := \sum_{n \in \mathbb{Z}_N(z)} f(nh)\, \operatorname{sinc}\left(\frac{z - nh}{h}\right) \exp\left(-\frac{\pi}{2N} \left(\frac{z - nh}{h}\right)^2\right), \qquad z \in \mathbb{C}, \tag{1.34}$$

where N is a positive integer, Z_N(z) := {n ∈ ℤ : |⌊h⁻¹ Re z + 1/2⌋ − n| ≤ N}, ⌊·⌋ denotes the floor function, and h = d/N. The authors of [15] have bounded the truncation error |f(z) − G_{h,N}[f](z)| when f ∈ B(S_d) and z ∈ S_{d/4}. Here we state only the real version of this bound, because our technique works entirely on ℝ. If f ∈ B(S_d), then we have, cf. [15, Theorem 3.1],

$$\|f - \mathcal{G}_{h,N}[f]\|_\infty \le 4\sqrt{2}\, \|f\|_\infty\, \frac{e^{-\frac{\pi}{2} N}}{\pi \sqrt{N}}. \tag{1.35}$$

Since the samples {f(nh)}_{n∈Z_N(z)} cannot be measured explicitly in most applied problems, and alternative approximate samples {f̃(nh)}_{n∈Z_N(z)} are measured instead, an amplitude error appears. Let

$$\widetilde{\mathcal{G}}_{h,N}[f](x) := \sum_{n \in \mathbb{Z}_N(x)} \widetilde{f}(nh)\, \operatorname{sinc}\left(\frac{x - nh}{h}\right) \exp\left(-\frac{\pi}{2N} \left(\frac{x - nh}{h}\right)^2\right), \qquad x \in \mathbb{R}. \tag{1.36}$$



  ≤ 2 ε e−π/8N 1 + 2N/π 2 ,

(1.37)

where ε is sufficiently small that satisfies |f (nh) − f (nh)| < ε for all n ∈ ZN (z). In the following, we introduce a new technique based on sin-Gaussian interpolation to solve the inverse heat problem. Let the initial function f of (1.1) belongs to B(Sd ). Then f ∈ C ∞ (R) ∩ L∞ (R) when the domain of f is restricted on the real line. Therefore the solution (1.2) is well defined and u ∈ C ∞ (R × (0, ∞)),

1 Sinc-Gaussian Approach for Solving the Inverse Heat Conduction Problem

11

cf. [7, p. 47]. We assume that the solution of the inverse heat problem is based on approximating f via the sinc-Gaussian interpolation (1.34). Therefore f (y)

Gh,N [f ](y), and consequently as in [8, 10] 

 ∞





−(x − y)2 u(x, t) √ f (nh) exp 4t 4π t n∈Z (y) −∞ N 1

sinc

y h

−1 2 π − n e− 2N h y−n dy.

(1.38) Letting x = kh, s = u(kh, t) √

y−kh h

and l = n − k in (1.38) yields 



h 4π t

f (nh)

∞ −∞

n∈ZN (y)

e−

(hs)2 4t

sinc (s − l) e−

π (s−l)2 2N

(1.39)

ds.

The Sinc function is merely the Fourier coefficient 1 sinc (s − l) = 2π



π

−π

e−isτ eilτ dτ.

(1.40)

Combining (1.40) and (1.39) implies h u(kh, t)

√ 2π 4π t





f (nh)

n∈ZN (y)

π −π

 eilτ



−∞

e

  2 2 2 − isτ + h 4ts + π (s−l) 2N

dsdτ, (1.41)

where l = n − k. Calculating the infinite integral in (1.41) and letting t₀ := d²/(2πN)², we obtain the system of equations

$$u(kh, t_0) = \frac{1}{2\sqrt{\pi^2 + \pi/2N}} \sum_{n \in \mathbb{Z}_N(y)} f(nh)\, B_{n-k}, \qquad k \in \mathbb{Z}_N(y), \tag{1.42}$$

where

$$B_l = e^{-\frac{l^2 \pi^2}{2\pi N + 1}} \int_{-\pi}^{\pi} e^{il\tau} \exp\left(-\frac{2i\pi l \tau + N\tau^2}{2\pi(2\pi N + 1)}\right) d\tau. \tag{1.43}$$

The system (1.42) can be written in the more compact form

$$B_N \mathbf{f} = a_N \mathbf{u}, \tag{1.44}$$

where a_N := 2√(π² + π/2N) and B_N is the (2N+1) × (2N+1) symmetric Toeplitz matrix

$$B_N = \begin{pmatrix} B_0 & B_1 & B_2 & \cdots & B_{2N} \\ B_1 & B_0 & B_1 & \cdots & B_{2N-1} \\ B_2 & B_1 & B_0 & \cdots & B_{2N-2} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ B_{2N} & B_{2N-1} & B_{2N-2} & \cdots & B_0 \end{pmatrix}. \tag{1.45}$$

The symmetry of the Toeplitz matrix comes from the property B_{−l} = B_l, for all 0 ≤ l ≤ 2N. With N_y := ⌊h⁻¹y + 1/2⌋, the (2N+1)-vectors f and u are given by

$$\mathbf{f} = \big(f((-N + N_y)h),\, \ldots,\, f((N + N_y)h)\big)^\top, \tag{1.46}$$

$$\mathbf{u} = \big(u((-N + N_y)h, t_0),\, \ldots,\, u((N + N_y)h, t_0)\big)^\top. \tag{1.47}$$
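Since (1.44) is a symmetric Toeplitz system, it can be solved with a Levinson-type solver in O(N²) operations instead of general Gaussian elimination. The sketch below uses SciPy; the first column is only a placeholder stand-in, since the actual entries B_l come from evaluating the integrals (1.43) by quadrature.

```python
import numpy as np
from scipy.linalg import solve_toeplitz, toeplitz

N = 10
# Placeholder first column (B_0, ..., B_{2N}); a rapidly decaying, positive
# definite sequence, NOT the true quadrature values from (1.43).
col = np.exp(-0.3 * np.arange(2 * N + 1) ** 2)
rhs = np.random.default_rng(1).standard_normal(2 * N + 1)

f_vec = solve_toeplitz(col, rhs)          # exploits the Toeplitz structure
assert np.allclose(toeplitz(col) @ f_vec, rhs)
```

Passing only the first column to solve_toeplitz is exactly the symmetric case (1.45), where the column determines the whole matrix.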

Assume that u is known and that we want to determine f from (1.44). We compute the integrals B_l, −2N ≤ l ≤ 2N, which are the elements of the matrix B_N, numerically, because they cannot be computed exactly. Again the amplitude error appears. In this setting, we do not need to consider the case of the infinite series (1.4). However, as in the previous section, we assume the invertibility of B_N. We then have the following theorem.

Theorem 1.4 Let f ∈ B(S_d), and let B̃_N be a perturbed matrix from B_N such that

$$\|B_N - \widetilde{B}_N\| < \delta < \frac{1}{\|B_N^{-1}\|}, \tag{1.48}$$

where ‖·‖ is the matrix norm. Then

$$\|f - \widetilde{\mathcal{G}}_{h,N}[f]\|_\infty \le 4\sqrt{2}\, \|f\|_\infty\, \frac{e^{-\frac{\pi}{2}N}}{\pi\sqrt{N}} + \frac{2\delta\, a_N \|B_N^{-1}\|\, \|\mathbf{u}\|}{1 - \|B_N^{-1}\|\, \delta}\, e^{-\pi/(8N)} \left(1 + \sqrt{2N/\pi^2}\right). \tag{1.49}$$

Proof Since B_N is invertible and the condition (1.48) is satisfied,

$$\|\mathbf{f} - \widetilde{\mathbf{f}}\| \le \frac{\delta\, a_N \|B_N^{-1}\|\, \|\mathbf{u}\|}{1 - \|B_N^{-1}\|\, \delta}. \tag{1.50}$$

Hence, applying the triangle inequality, we obtain

$$\|f - \widetilde{\mathcal{G}}_{h,N}[f]\|_\infty \le \|f - \mathcal{G}_{h,N}[f]\|_\infty + \|\mathcal{G}_{h,N}[f] - \widetilde{\mathcal{G}}_{h,N}[f]\|_\infty. \tag{1.51}$$


Bounds on the first and second terms of (1.51) are given in (1.35) and (1.37), respectively. Combining (1.35), (1.37) and (1.51) with ε := δ a_N ‖B_N⁻¹‖ ‖u‖ / (1 − ‖B_N⁻¹‖ δ) leads to (1.49).

1.4 Sinc-Gaussian Heat Inversion on ℝ⁺

In this section we treat the problem (1.1) with J = (0, ∞). The solution of (1.1) is given by, cf. e.g. [8],

$$u(x,t) = \frac{1}{\sqrt{4\pi t}} \int_0^\infty \left[ \exp\left(\frac{-(x-y)^2}{4t}\right) - \exp\left(\frac{-(x+y)^2}{4t}\right) \right] f(y)\, dy. \tag{1.52}$$

Let F be the odd extension of f to ℝ,

$$F(y) = \begin{cases} f(y), & \text{if } y \ge 0, \\ -f(-y), & \text{if } y < 0. \end{cases} \tag{1.53}$$
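The odd extension is a one-liner in code; a tiny sketch (with Example 1.4's initial data as an illustrative choice) confirms both properties used below, namely that F is odd and that it agrees with f on [0, ∞):

```python
import numpy as np

def odd_extension(f):
    """Return F, the odd extension (1.53) of f, as a vectorized function."""
    return lambda y: np.where(y >= 0, f(np.abs(y)), -f(np.abs(y)))

f = lambda x: 1.0 - np.sinc(x)          # Example 1.4's initial data; f(0) = 0
F = odd_extension(f)
y = np.linspace(-2.0, 2.0, 11)
assert np.allclose(F(y), -F(-y))        # F is odd
assert np.allclose(F(y[y >= 0]), f(y[y >= 0]))
```

Note that f(0) = 0 is exactly the compatibility condition required for F to be continuous at the origin.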

For continuous initial data f, it is necessary that f(0) = 0. The solution (1.52) becomes

$$u(x,t) = \frac{1}{\sqrt{4\pi t}} \int_{-\infty}^{\infty} \exp\left(\frac{-(x-y)^2}{4t}\right) F(y)\, dy. \tag{1.54}$$

If F belongs to the class C^∞(ℝ) ∩ L^∞(ℝ), then the solution in (1.54) is well defined and F(y) ≈ G_{h,N}[F](y), i.e.

$$F(y) \approx \sum_{n = N_y - N}^{N_y + N} F(nh)\, \operatorname{sinc}\left(\frac{y - nh}{h}\right) e^{-\frac{\pi}{2N}\left(\frac{y - nh}{h}\right)^2}. \tag{1.55}$$

Substituting (1.55) into (1.54) and using the same technique as before, we obtain the following system of equations

$$\sum_{n = N_y - N}^{N_y + N} F(nh)\, B_{n-k} = a_N\, u(kh, t_1), \qquad N_y - N \le k \le N_y + N, \tag{1.56}$$

where t₁ := d²/(2πN)², a_N := 2√(π² + π/2N) and B_l is defined in (1.43). In the case N_y > N, the (2N+1) × (2N+1) system of equations (1.56) becomes (1.44), because F(y) = f(y) on [0, ∞). Therefore we solve this system in the same way as before to find the vector f. If N_y = N, the system (1.56) reduces to a 2N × 2N system of


equations because f (0) = 0 and u(0, t1 ) = 0 which comes from the odd extension of f . When 0 ≤ Ny < N , the order of a matrix in the system (1.56) reduces to N + Ny . Recall that B−l = Bl for all −N ≤ l ≤ N, F (−nh) = −f (nh) and u(−nh, t1 ) = −u(nh, t1 ) for all −(N − Ny ) ≤ n ≤ N − Ny . Hence the system (1.56) has the block form ⎞⎛ ⎞ ⎛ ⎞ −f1 −u1 A1 A2 A3 ⎝ A A1 A4 ⎠ ⎝ f2 ⎠ = aN ⎝ u2 ⎠ , 2  A f3 u3 3 A4 A5 ⎛

(1.57)

where the matrices Aj , j = 1, . . . , 5 are defined by ⎛ ⎜ A1 := ⎝

B0 .. .

⎞ . . . BN −Ny −1 ⎟ .. ⎠, . B0

BN −Ny −1 . . . ⎛

⎞ BN −Ny +1 . . . B2(N −Ny ) ⎟ ⎜ .. .. A2 := ⎝ ⎠, . . ⎛

B2(N −Ny )+1 . . . ⎜ .. A3 := ⎝ .

B2 ⎞

B2N .. .

⎞ BN −Ny . . . BN +Ny −1) ⎟ ⎜ .. .. A4 := ⎝ ⎠, . . ⎛

⎟ ⎠,

BN −Ny +2 . . . BN +Ny +1 ⎛ ⎜ A5 := ⎝

B0 .. .

. . . BN −Ny +1

B1

...

B2Ny

⎞ . . . B2Ny −1 ⎟ .. ⎠. .

B2Ny −1 . . .

B0

The matrices Aj , j = 1, 2 are of order N − Ny while Aj , j = 3, 4 are of order (N − Ny ) × 2Ny and A5 has order 2Ny . The vectors fj and uj , j = 1, 2, 3 are defined by

  f1 = f N − Ny )h , . . . , f (h) ,

 f2 = f (h), . . . , f N − Ny )h ,



 f3 = f N − Ny + 1)h , . . . , f N + Ny )h , and

  u1 = u N − Ny )h, t0 , . . . , u(h, t0 ) ,

 u2 = u(h, t0 ), . . . , u N − Ny )h, t0 ,



 u3 = u N − Ny + 1)h, t0 , . . . , u N + Ny )h, t0 .


The vectors f_j, u_j, j = 1, 2 are of dimension N − N_y, while f_3 and u_3 are of dimension 2N_y. Thus the matrices A_j, j = 3, 4, 5 and the vectors f_3 and u_3 disappear from the system (1.57) when N_y = 0. Likewise, A_j, j = 1, 2, f_1 and u_1 disappear from the system when N_y = N. Now, it is easy to see that

$$\mathbf{u}_2 = J_{N-N_y} \mathbf{u}_1, \qquad \mathbf{f}_2 = J_{N-N_y} \mathbf{f}_1, \qquad J_{N-N_y} A_2^\top = A_2 J_{N-N_y}, \qquad J_{N-N_y} A_1 J_{N-N_y} = A_1, \tag{1.58}$$

where J_{N−N_y} is the (N − N_y) × (N − N_y) matrix defined as

$$J_{N-N_y} := \begin{pmatrix} 0 & 0 & \cdots & 0 & 1 \\ 0 & 0 & \cdots & 1 & 0 \\ \vdots & \vdots & & \vdots & \vdots \\ 0 & 1 & \cdots & 0 & 0 \\ 1 & 0 & \cdots & 0 & 0 \end{pmatrix}.$$

Using the relations (1.58), the system (1.57) reduces to

$$2\big(A_1 - A_2 J_{N-N_y}\big) \mathbf{f}_1 - \big(A_3 - J_{N-N_y} A_4\big)\, \mathbf{f}_3 = 2 a_N \mathbf{u}_1,$$
$$-\big(A_3^\top - A_4^\top J_{N-N_y}\big) \mathbf{f}_1 + A_5 \mathbf{f}_3 = a_N \mathbf{u}_3, \tag{1.59}$$

which is of order (N + N_y) × (N + N_y). In the special case N_y = 0, the system (1.59) becomes (A_1 − A_2 J_N) f_1 = a_N u_1.

1.5 Numerical Examples

We work out four numerical examples in this section. Examples 1.1–1.3 are considered in [8, 10]. The fourth example is devoted to an inverse heat problem on (0, ∞). In all examples, we compare the results of both the sinc and the sinc-Gaussian methods. Let S_N(x) be

$$S_N(x) := \sum_{|n| \le N} \widetilde{f}(nh)\, \operatorname{sinc}\left(\frac{x - nh}{h}\right),$$

and x_k := d(k − 1/2)/N. The bound for the classical technique is calculated using (1.7) with h = √(πd/N). In all examples, with both techniques, we choose d = 1. The condition numbers of the matrix B_N are given below in Table 1.1. They are very close to those computed for B_N in [8]. This indicates that both systems, (1.44) and (1.11), have similar stability properties.


Table 1.1 Condition numbers κ for B_N

N    5        10       15       20       25       30
κ    1.33788  1.35343  1.35865  1.36127  1.36285  1.36390
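Condition numbers of this kind are cheap to reproduce for any symmetric Toeplitz matrix; the snippet below shows the mechanics with a placeholder first column (it is a stand-in, not the true B_l values from (1.43)).

```python
import numpy as np
from scipy.linalg import toeplitz

# Placeholder first column, NOT the true quadrature values B_l from (1.43).
col = np.exp(-0.5 * np.arange(21) ** 2)
kappa = np.linalg.cond(toeplitz(col), 2)   # 2-norm condition number
assert kappa >= 1.0
```

A condition number close to 1, as in Table 1.1, means the linear system (1.44) is numerically very stable: sample perturbations are barely amplified.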

Fig. 1.1 The absolute error |f(x) − S_N(x)| for N = 8 in Example 1.1

Fig. 1.2 The absolute error |f(x) − G_{h,N}[f](x)| for h = 1 and N = 8 in Example 1.1

Example 1.1 Consider problem (1.1) with f(x) = sech(πx/2). The solution (1.2) is given explicitly by

$$u(x,t) = \frac{1}{\pi} \int_{-\infty}^{\infty} \frac{\cos(sx)}{\cosh(s)}\, e^{-s^2 t}\, ds. \tag{1.60}$$

In Fig. 1.1, we exhibit the error |f(x) − S_8(x)|, while in Fig. 1.2 the error |f(x) − G_{1,8}[f](x)| is demonstrated. In Table 1.2 we show several comparisons. The table first compares the actual absolute errors with the error bounds obtained theoretically for the two methods; in both cases the actual error and the error bound are close to each other. On the other hand, the efficiency of the two techniques is compared, with

Table 1.2 Approximation of the initial function f(x) = sech(πx/2) at the points x_k using the classical and the sinc-Gaussian techniques

x_k    |f(x_k) − S_8(x_k)|   Uniform bound   |f(x_k) − G_{d,8}[f](x_k)|   Uniform bound
x_0    1.05062×10⁻³          0.00846642      3.2755×10⁻⁶                  6.9747×10⁻⁶
x_2    1.34792×10⁻³                          2.8946×10⁻⁶
x_4    4.88234×10⁻³                          2.9456×10⁻⁶
x_6    1.68542×10⁻³                          4.0285×10⁻⁷
x_8    7.64581×10⁻⁴                          2.8916×10⁻⁷

Table 1.3 Approximations for the initial function f(x) = exp(−x²/4) at the points x_k using the classical and the sinc-Gaussian techniques

x_k    |f(x_k) − S_10(x_k)|  Uniform bound   |f(x_k) − G_{1,10}[f](x_k)|  Uniform bound
x_0    3.82311×10⁻⁶          0.0193795       3.74791×10⁻⁸                 2.69583×10⁻⁷
x_2    2.28450×10⁻⁶                          3.72522×10⁻⁸
x_4    4.32722×10⁻⁶                          3.57704×10⁻⁸
x_6    1.22093×10⁻⁵                          3.13246×10⁻⁸
x_8    1.63798×10⁻⁵                          3.21477×10⁻⁹

Table 1.4 Approximation of the initial function f(x) = 1/(1 + x²) at the points x_k using the classical and the sinc-Gaussian techniques

x_k    |f(x_k) − S_8(x_k)|   Uniform bound   |f(x_k) − G_{d,8}[f](x_k)|   Uniform bound
x_0    8.05126×10⁻⁴          0.0066495       2.91271×10⁻⁶                 6.97468×10⁻⁶
x_2    1.03418×10⁻³                          1.81178×10⁻⁶
x_4    3.72423×10⁻³                          2.20509×10⁻⁷
x_6    1.29465×10⁻³                          8.15885×10⁻⁸
x_8    5.74966×10⁻³                          5.85657×10⁻⁷

Table 1.5 Approximation for the initial function f(x) = 1 − sinc(x) at the points x_k using the classical and the sinc-Gaussian techniques

x_k    |f(x_k) − S_8(x_k)|   Uniform bound   |f(x_k) − G_{d,8}[f](x_k)|   Uniform bound
x_1    8.9096×10⁻⁴           0.011704        7.12755×10⁻⁷                 1.1417×10⁻⁵
x_2    1.4244×10⁻³                           6.7841×10⁻⁷
x_3    4.2408×10⁻³                           5.8274×10⁻⁷
x_4    5.0428×10⁻³                           5.7280×10⁻⁷
x_5    2.2315×10⁻³                           5.2750×10⁻⁷

a remarkable enhancement made by implementing the sinc-Gaussian technique. Tables 1.3, 1.4, and 1.5 confirm these findings.

Example 1.2 Let the initial data of problem (1.1) be f(x) = e^{−x²/4}. The solution (1.2) for this initial data turns out to be

$$u(x,t) = \frac{e^{-x^2/4(t+1)}}{\sqrt{t+1}}. \tag{1.61}$$
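The closed form (1.61) is easy to double-check: it should satisfy the heat equation u_t = u_xx. A quick verification with centered finite differences (the evaluation point and step sizes below are arbitrary illustrative values):

```python
import numpy as np

u = lambda x, t: np.exp(-x ** 2 / (4.0 * (t + 1.0))) / np.sqrt(t + 1.0)

x, t, dx, dt = 0.7, 0.5, 1e-3, 1e-3
u_t = (u(x, t + dt) - u(x, t - dt)) / (2.0 * dt)
u_xx = (u(x + dx, t) - 2.0 * u(x, t) + u(x - dx, t)) / dx ** 2
assert abs(u_t - u_xx) < 1e-6
```

At t = 0 the formula reduces to u(x, 0) = e^{−x²/4}, so the initial condition is satisfied as well.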

Fig. 1.3 The absolute error |f(x) − S_N(x)| for N = 10 in Example 1.2

7

1. 10

7

5. 10

8

0

5. 10

8

1. 10

7

1.0

0.5

0.0

0.5

1.0

  Fig. 1.4 The absolute error f (x) − Gh,N [f ](x) for h = 1 and N = 10 in Example 1.2

    In Fig.  1.3, we show the error f (x) − G1,10 [f ](x) while Fig. 1.4 illustrates the S10 (x) with α = 1/4. In Table 1.3, we show some numerical results error f (x) − with both techniques. 2 Example 1.3 In this example we let  f (x) = 1/(1 + x ), cf. [10]. In Fig. 1.5, we    S8 (x). show the error f (x) − G1,8 [f ](x) while Fig. 1.6 show the error f (x) − Table 1.4 shows the comparison between the results of the classical sinc and sincGaussian techniques.

Example 1.4 In this example, we consider problem (1.1) on (0, ∞) with f(x) = 1 − sinc(x). Figures 1.7 and 1.8 illustrate the absolute errors |f(x) − G_{1,8}[f](x)| and |f(x) − S̃_8(x)|, respectively. Table 1.5 shows some numerical results obtained with both techniques.

1 Sinc-Gaussian Approach for Solving the Inverse Heat Conduction Problem

Fig. 1.5 The absolute error |f(x) − S̃_N(x)| for N = 8 in Example 1.3

Fig. 1.6 The absolute error |f(x) − G_{h,N}[f](x)| for h = 1 and N = 8 in Example 1.3

Fig. 1.7 The absolute error |f(x) − S̃_N(x)| for N = 8 in Example 1.4

Fig. 1.8 The absolute error |f(x) − G_{h,N}[f](x)| for h = 1 and N = 8 in Example 1.4

Acknowledgments The authors wish to thank the Alexander von Humboldt Foundation for Grants 3.4-EGY/1039259 and 3.4-JEM/1142916.

Bibliography

1. Annaby, M.A., Asharabi, R.M.: On sinc-based method in computing eigenvalues of boundary-value problems. SIAM J. Numer. Anal. 64, 671–690 (2008)
2. Annaby, M.A., Asharabi, R.M.: Computing eigenvalues of boundary-value problems using sinc-Gaussian method. Sampl. Theory Signal Image Process. 7, 293–311 (2008)
3. Annaby, M.A., Asharabi, R.M.: Truncation, amplitude and jitter errors on R for sampling series derivatives. J. Approx. Theory 163, 336–362 (2011)
4. Annaby, M.A., Tharwat, M.M.: A sinc-Gaussian technique for computing eigenvalues of second-order linear pencils. Appl. Numer. Math. 63, 129–137 (2013)
5. Asharabi, R.M.: Generalized sinc-Gaussian sampling involving derivatives. Numer. Algor. 73, 1055–1072 (2016)
6. Butzer, P.L., Splettstösser, W.: On quantization, truncation and jitter errors in the sampling theorem and its generalizations. Signal Process. 2, 101–112 (1980)
7. Evans, L.C.: Partial Differential Equations. American Mathematical Society, New York (1998)
8. Gilliam, D.S., Lund, J.R., Martin, C.F.: A discrete sampling inversion scheme for the heat equation. Numer. Math. 45, 493–506 (1989)
9. Gohberg, I., Goldberg, S., Kaashoek, M.A.: Basic Classes of Linear Operators. Birkhäuser, Berlin (2003)
10. Lund, J.R., Bowers, K.L.: Sinc Methods for Quadrature and Differential Equations. SIAM, Philadelphia (1992)
11. Ouhabaz, E.: Analysis of the Heat Equations on Domains. Princeton University Press, Princeton (2005)
12. Qian, L.: On the regularized Whittaker-Kotel'nikov-Shannon sampling formula. Proc. Am. Math. Soc. 131, 1169–1176 (2002)
13. Qian, L., Creamer, D.B.: A modification of the sampling series with a Gaussian multiplier. Sampl. Theory Signal Image Process. 5, 1–19 (2006)
14. Qian, L., Creamer, D.B.: Localized sampling in the presence of noise. Appl. Math. Lett. 19, 351–355 (2006)


15. Schmeisser, G., Stenger, F.: Sinc approximation with a Gaussian multiplier. Sampl. Theory Signal Image Process. 6, 199–221 (2007)
16. Shidfar, A., Zolfaghari, R., Damirchi, J.: Application of Sinc-collocation method for solving an inverse problem. J. Comput. Appl. Math. 233, 545–554 (2009)
17. Splettstößer, W., Stens, R.L., Wilmes, G.: On approximation by the interpolation series of G. Valiron. Funct. Approx. Comment. Math. 11, 39–56 (1981)
18. Stenger, F.: Numerical Methods Based on Sinc and Analytic Functions. Springer, New York (1993)
19. Tanaka, K., Sugihara, M., Murota, K.: Complex analytic approach to the sinc-Gauss sampling formula. Jpn. J. Ind. Appl. Math. 25, 209–231 (2008)
20. Wei, G.W.: Quasi wavelets and quasi interpolating wavelets. Chem. Phys. Lett. 296, 215–222 (1998)
21. Wei, G.W., Zhang, D.S., Kouri, D.J., Hoffman, D.K.: Lagrange distributed approximating functionals. Phys. Rev. Lett. 79, 775–779 (1997)
22. Widder, D.V.: The Heat Equation. Academic, New York (1975)
23. Zolfaghari, R., Shidfar, A.: Restoration of the heat transfer coefficient from boundary measurements using the sinc method. Comput. Appl. Math. 34, 29–44 (2015)

Chapter 2

Poly-Sinc Collocation Method for Solving Coupled Burgers' Equations with a Large Reynolds Number

Maha Youssef

Abstract A collocation algorithm is presented for the solution of a coupled system of Burgers' equations with a large Reynolds number. The algorithm uses polynomial approximation with Sinc points as interpolation points. In addition, Sinc points are used as a 2D grid of collocation points. The scheme is tested on a coupled system of nonlinear Burgers' equations with an analytic solution obtained from the Hopf-Cole transformation. The numerical results are compared with those obtained by different numerical techniques, for example, the Chebyshev spectral collocation method and a Fourier pseudo-spectral method.

Keywords Poly-Sinc methods · Collocation methods · Burgers' equations · Coupled system · Reynolds number

2.1 Introduction

In this article we focus on the two-dimensional steady Burgers' equations,

u(x, y) u_x(x, y) + v(x, y) u_y(x, y) = μ ∇²u(x, y),
u(x, y) v_x(x, y) + v(x, y) v_y(x, y) = μ ∇²v(x, y),   (2.1)

with the following boundary conditions

u(x, y) = g_1(x, y) and v(x, y) = g_2(x, y) on ∂Ω,   (2.2)

where Ω = {(x, y), a ≤ x ≤ b, c ≤ y ≤ d} is the computational domain and ∂Ω is its boundary, u(x, y) and v(x, y) are the velocity components and the functions g1

M. Youssef () Institute of Mathematics and Computer Science, University of Greifswald, Greifswald, Germany e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 G. Baumann (ed.), New Sinc Methods of Numerical Analysis, Trends in Mathematics, https://doi.org/10.1007/978-3-030-49716-3_2


and g_2 are chosen to ensure the well-posedness of problem (2.1). Finally, μ = 1/R is the coefficient of viscosity and R is the Reynolds number.

Burgers' equation appears in the mathematical modeling of hydrodynamic turbulence, shock wave theory and traffic flow problems [1–3]. It also describes the sedimentation of two kinds of particles in fluid suspensions under the effect of gravity, and the transport and dispersion of pollutants in rivers or sediment transport [4, 5]. For large values of the Reynolds number R, one of the major difficulties is due to inviscid boundary layers produced by the steepening effect of the nonlinear advection term in Burgers' equation.

On the analytical side, several methods have been used to derive solutions of the coupled Burgers' equations, such as the modified extended tanh-function method [6] or the Cole-Hopf transformation [7, 8]. On the numerical side, various methods have been developed to solve the coupled Burgers' equations, for example, the finite difference method (FD) [9, 10] and the finite element method (FE) [8]. A method based on the two-dimensional Hopf-Cole transformation and a local discontinuous Galerkin finite element method is analyzed in [11]. The local radial basis function collocation method for the numerical solution of the transient coupled Burgers' equations is examined in [12]. The Lie symmetry analysis and ansatz method to obtain new solutions of the coupled Burgers' equations are accomplished in [13], and in [14] Tikhonov regularization is used to stabilize problems with a large Reynolds number. Recently, in [15] an integral equation method is used for the numerical solution of the Burgers' equation, and in [16] lattice Boltzmann models are used to solve two-dimensional coupled Burgers' equations.

The rest of the paper is organized as follows. In Sect. 2.2, the Poly-Sinc approximation method is presented. In Sect. 2.3, the Poly-Sinc collocation technique for the coupled Burgers' equations is defined. In Sect. 2.4, some simulations are performed to test the present model, and finally, a brief summary is given in Sect. 2.5.

2.2 Poly-Sinc Approximations

The use of Sinc points as interpolation points for polynomial approximation was introduced in [17]. It was proved that such interpolation points deliver an accuracy similar to the classical Sinc approximation. The stability of Lagrange approximation at Sinc points is studied in [18]. The sequence of Sinc points is generated using a conformal map that redistributes the infinite number of equidistant points on the real line to a finite interval. Such a redistribution by conformal maps locates most of the points near the end-points of the interval. The choice of the basis is also of importance if we are interested in highly accurate results. Besides these two major factors affecting an approximation, the method itself affects the results. We shall use here a collocation method based on the same discrete set of approximation points, the Sinc points. In addition, the formulation will be given for a set of polynomials as basis functions for collocation.


To start with, let us define the fundamentals of the poly-Sinc method:

(i) Sinc points. We define the Sinc points over a finite interval [19].
(ii) Let h denote a positive parameter and let k ∈ Z.
(iii) For the finite interval (a, b), the conformal map φ(x) = ln((x − a)/(b − x)) maps the arc Γ, with a, b finite, to R.
(iv) Now, Γ = φ⁻¹(R), with a = φ⁻¹(−∞) and b = φ⁻¹(∞) denoting the end points of Γ.
(v) Then the set of Sinc points x_k = φ⁻¹(kh) becomes

x_k = φ⁻¹(kh) = (a + b exp(kh)) / (1 + exp(kh)),   k = −M, . . . , N.   (2.3)
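As a concrete illustration (the code here is ours, not part of the original text), the Sinc points of Eq. (2.3) can be generated in a few lines; the function name and parameters are our own choices.

```python
import numpy as np

def sinc_points(a, b, M, N, h):
    """Sinc points x_k = (a + b*exp(k*h)) / (1 + exp(k*h)), k = -M, ..., N (Eq. (2.3))."""
    k = np.arange(-M, N + 1)
    e = np.exp(k * h)
    return (a + b * e) / (1.0 + e)

# The points cluster near the endpoints of (a, b), as produced by the conformal map.
pts = sinc_points(0.0, 1.0, 5, 5, 1.0)
```

For a = 0, b = 1 and M = N the points are symmetric about the midpoint, since x_{−k} + x_k = a + b.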

Note that some other forms of the conformal map φ(x) can be used, see [19]. These maps are originally used in composition with Sinc functions to map a simply connected real domain to a strip parallel to the real line R. That is how these points get their name, Sinc points.

Let us define an approximation ũ(x, y) of u(x, y) as the interpolation of u(x, y) at the Sinc points (x_k, y_j) defined on (a, b) × (c, d) with k = −M_x, . . . , N_x and j = −M_y, . . . , N_y. Let n_x = M_x + N_x + 1 and n_y = M_y + N_y + 1 be the numbers of Sinc points on the rectangle [a, b] × [c, d]. We then get an approximation of u(x, y) of the form

u(x, y) ≈ ũ(x, y) = Σ_{k=−M_x}^{N_x} Σ_{j=−M_y}^{N_y} u_{kj} B(x; k, h) B(y; j, h).   (2.4)

The basis functions B are one-dimensional. They can be polynomials, rational functions, Sinc functions, wavelets, splines, etc. We shall use Lagrange polynomials B(·; k, h) at Sinc points defined using the step length h. In this case, we achieve exponential convergence of the collocation approximation [17]. The collocation approach uses Lagrange polynomials of the form

B(x; k, h) = g(x) / ((x − x_k) g'(x_k)),   (2.5)

with

g(x) = ∏_{k=−M}^{N} (x − x_k).

Note that the functions g(x) can be a set of orthogonal polynomials, see [20]. Recently, the matrices for the Lagrange approximation using orthogonal polynomials have been discussed by Gautschi. In addition, more numerical investigations of these matrices along with error analysis are given by the author of this paper and G. Baumann.

Corresponding to (2.5), we define a matrix B of basis functions in two dimensions and an operator U that maps a function u(x, y) onto a matrix of size n_x × n_y by

B(x, y) = B_i(x) B_j(y),   (2.6a)
U = [u(x_i, y_j)],   i = −M_x, . . . , N_x,  j = −M_y, . . . , N_y.   (2.6b)

This notation enables us to write the above interpolation scheme in the simple operator form

ũ(x, y) = B(x, y) U.   (2.7)

Without any loss of generality, let us assume that M_x = N_x = M_y = N_y, so that n_x = n_y = n = 2N + 1.

Theorem 2.1 Let h = π/√N. Then there exist two sets of constants C_i > 0, γ_i > 1, i = 0, 1, independent of N, such that

E_n = sup_{(x,y)∈Ω} |u(x, y) − B(x, y)U| ≤ C N^{1/2} log N exp(−γ N − π² N^{1/2}/2),   (2.8)

where C = C_0 + C_1 and γ = 2 min{log γ_0, log γ_1}. For the proof of (2.8), see [21]. The theorem shows that the 2D Poly-Sinc approximation has an exponentially decaying error. This behavior of the error is similar to Sinc approximation. The logarithmic function on the right-hand side of (2.8) represents the growth rate of the Lebesgue constant for such an approximation [21].

If we use the approximation (2.7) to approximate a differential equation of type (2.1), we need to approximate the unknown functions as well as their derivatives. To define approximations for the first and second order derivatives, we have to differentiate formula (2.7) twice with respect to x and y. This results in two n × n matrices A = [a_{j,k}] and C = [c_{j,k}], j, k = −N, . . . , N, representing the first order and second order derivatives for each spatial variable. The matrix A = [a_{k,j}] = [B'(x_j; k, h)], where B'(x_j; k, h) is the first derivative of the basis functions calculated at the Sinc points, is

a_{k,j} = { g'(x_j) / ((x_j − x_k) g'(x_k)),        k ≠ j,
          { Σ_{l=−N, l≠j}^{N} 1/(x_j − x_l),        k = j.      (2.9)


In a similar way, the matrix C = [c_{k,j}] = [B''(x_j; k, h)], where B''(x_j; k, h) is the second derivative of the basis functions calculated at the Sinc points, is

c_{k,j} = { −2g'(x_j) / ((x_j − x_k)² g'(x_k)) + g''(x_j) / ((x_j − x_k) g'(x_k)),   k ≠ j,
          { Σ_{l=−N, l≠j}^{N} Σ_{r=−N, r≠j, r≠l}^{N} 1/((x_j − x_l)(x_j − x_r)),     k = j.      (2.10)
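As a sketch (our own code, not the authors' implementation), the first-derivative collocation matrix of Eq. (2.9) can be assembled directly from the nodes; since polynomial interpolation reproduces derivatives of polynomials in the space exactly, the second-derivative matrix of Eq. (2.10) equals the square of the first.

```python
import numpy as np

def diff_matrices(x):
    """Entry [j, k] of A approximates B'(x_j; k, h) as in Eq. (2.9); C = A @ A
    plays the role of the second-derivative matrix of Eq. (2.10)."""
    n = len(x)
    A = np.zeros((n, n))
    # g'(x_j) = prod_{l != j} (x_j - x_l) for the node polynomial g
    gp = np.array([np.prod([x[j] - x[l] for l in range(n) if l != j])
                   for j in range(n)])
    for j in range(n):
        for k in range(n):
            if j != k:
                A[j, k] = gp[j] / ((x[j] - x[k]) * gp[k])
        A[j, j] = sum(1.0 / (x[j] - x[l]) for l in range(n) if l != j)
    return A, A @ A

# Verify on a cubic, which the degree-(n-1) interpolant reproduces exactly.
x = np.linspace(-1.0, 1.0, 6)
A, C = diff_matrices(x)
```

Any distinct nodes work here; in the Poly-Sinc method the nodes would be the Sinc points of (2.3).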

2.3 Collocation of Coupled Equations

In [22], a collocation method based on the bivariate Poly-Sinc interpolation defined in (2.7) is introduced to solve elliptic equations on rectangular domains. In [23], a Poly-Sinc collocation domain decomposition method for elliptic boundary value problems is investigated on complicated domains. The idea of the collocation method is to reduce the boundary value problem to a nonlinear system of algebraic equations, which then has to be solved. To start, let us introduce the following collocation theorem.

Theorem 2.2 If u(x, y) is an analytic bounded function in a domain Ω and

‖u_{jk} − ũ_{jk}‖ = max_{j,k} |u_{jk} − ũ_{jk}| < δ,

then for (x, y) ∈ Ω

|u(x, y) − B(x, y)Ũ| < E_n + δ Λ_{n,2},   (2.11)

where j, k = −N, . . . , N, n = 2N + 1, and Λ_{n,2} is the Lebesgue constant for Lagrange approximation using Sinc points in 2D. This Lebesgue constant is bounded by

Λ_{n,2} ≤ ((1/π) log(n + 1) + 1.07618)².   (2.12)

For the proof of (2.11), see [24], and for the proof of (2.12), see [21]. This theorem guarantees an accurate final approximation of u on its domain of definition provided that we know a good approximation to u at the Sinc points.

The first step in the collocation algorithm is to replace u(x, y) and v(x, y) by ũ(x, y) and ṽ(x, y) defined in (2.5)–(2.7). In this case, we have

(B U)(B^(1,0) U) + (B V)(B^(0,1) U) = μ (B^(2,0) U + B^(0,2) U),   (2.13a)
(B U)(B^(1,0) V) + (B V)(B^(0,1) V) = μ (B^(2,0) V + B^(0,2) V),   (2.13b)

where B^(j,0) stands for the j-th derivative of the basis function B with respect to x, and similarly for B^(0,k). Next, we collocate the equations by replacing x and y by the corresponding Sinc points

x_i = φ_x⁻¹(ih) = (a + b exp(ih)) / (1 + exp(ih)),   i = −N, . . . , N,

and

y_q = φ_y⁻¹(qh) = (c + d exp(qh)) / (1 + exp(qh)),   q = −N, . . . , N,

to have

(B_iq U)(B_iq^(1,0) U) + (B_iq V)(B_iq^(0,1) U) = μ (B_iq^(2,0) U + B_iq^(0,2) U),   (2.14a)
(B_iq U)(B_iq^(1,0) V) + (B_iq V)(B_iq^(0,1) V) = μ (B_iq^(2,0) V + B_iq^(0,2) V),   (2.14b)

where u(x_i, y_q) = B_iq U and

u_x(x_i, y_q) = B_iq^(1,0) U,   u_xx(x_i, y_q) = B_iq^(2,0) U,   u_y(x_i, y_q) = B_iq^(0,1) U,   u_yy(x_i, y_q) = B_iq^(0,2) U.

The matrices B_iq^(l,0) and B_iq^(0,l) with l = 1, 2 are defined explicitly using the matrices A and C of Sect. 2.2. Equation (2.14) represents a nonlinear algebraic system with two vectors of unknowns, U and V. The boundary conditions are transformed into algebraic equations when (2.7) is evaluated at ∂Ω and Sinc points are used in the collocation. Finally, applying a root-finding technique (with a suitable starting point for the iterative method) to the nonlinear algebraic system yields the solution.
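The 2D collocation matrices above are Kronecker products of a 1D differentiation matrix with the identity. A minimal sketch of this structure (our own construction; for brevity the 1D matrix is built from a Vandermonde system rather than from Eq. (2.9), and an equispaced stand-in grid replaces the Sinc points):

```python
import numpy as np

def diff_matrix(x):
    """1D differentiation matrix: (D u)_i = p'(x_i) for the polynomial
    interpolant p of the nodal values u (built via a Vandermonde system)."""
    n = len(x)
    V = np.vander(x, n, increasing=True)     # V[i, j] = x_i**j
    Vd = np.zeros((n, n))
    for j in range(1, n):
        Vd[:, j] = j * x ** (j - 1)          # d/dx of x**j at the nodes
    return Vd @ np.linalg.inv(V)

n = 5
x = np.linspace(0.0, 2.0, n)                 # stand-in grid; Sinc points work the same way
y = np.linspace(-0.5, 0.5, n)
Dx1, Dy1 = diff_matrix(x), diff_matrix(y)
I = np.eye(n)
# Flatten u as u[i*n + q] = u(x_i, y_q); the partial derivatives then act as:
Dx = np.kron(Dx1, I)          # plays the role of B^(1,0)
Dy = np.kron(I, Dy1)          # plays the role of B^(0,1)
Dxx, Dyy = Dx @ Dx, Dy @ Dy   # B^(2,0) and B^(0,2)

X, Y = np.meshgrid(x, y, indexing="ij")
u = (X ** 2 * Y).ravel()
```

With these operators, the left- and right-hand sides of (2.14) become plain matrix-vector products, and the resulting nonlinear system can be handed to any root finder.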

2.4 Numerical Results

In this section, we solve the problem in (2.1) and (2.2) on the domain Ω = [0, 2] × [−0.5, 0.5].


The boundary conditions are compatible with the analytic solution obtained from the Hopf-Cole transformation [25]:

u(x, y) = −2 w_x(x, y) / (R w(x, y)),   v(x, y) = −2 w_y(x, y) / (R w(x, y)),   (2.15)

where w(x, y) is a function with Δw = 0 that gives considerable control over the velocity field [u, v], and R is the Reynolds number. For more details about the structure of the function w(x, y), see [26].

Experiment 1 (Poly-Sinc Solution) In this experiment, we demonstrate the efficiency of the Poly-Sinc algorithm of Sect. 2.3 for solving the coupled Burgers' equations (2.1) with (2.2), and we illustrate the error between the exact and the approximate solution. For the Reynolds number, we use R = 10³. In the Poly-Sinc computation, we use N = 5, i.e. an 11 × 11 2D grid of Sinc points defined using the conformal map on the domain Ω. In Figs. 2.1 and 2.2, the absolute errors between the Poly-Sinc solutions and the exact solutions u(x, y) and v(x, y) are presented. As a reference, we used the exact solution obtained by the Hopf-Cole transformation in (2.15). Using the L2 norm, errors of order 10⁻⁹ and 10⁻¹⁰ are obtained in u(x, y) and v(x, y), respectively. This experiment shows that a highly accurate solution is obtained with a small number of collocation Sinc points, even with the large Reynolds number R = 10³.
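The Hopf-Cole construction (2.15) can be checked numerically: for any harmonic w, the resulting pair (u, v) satisfies (2.1) exactly. A small sketch with a hypothetical choice of w (our own example; central finite differences approximate the PDE residual):

```python
import numpy as np

R = 1.0e3
w  = lambda x, y: 2.0 + 0.5 * np.exp(x) * np.cos(y)    # harmonic: w_xx + w_yy = 0
wx = lambda x, y: 0.5 * np.exp(x) * np.cos(y)
wy = lambda x, y: -0.5 * np.exp(x) * np.sin(y)
u  = lambda x, y: -2.0 * wx(x, y) / (R * w(x, y))      # Eq. (2.15)
v  = lambda x, y: -2.0 * wy(x, y) / (R * w(x, y))

def residual(f, x0, y0, h=1e-4):
    """Residual u f_x + v f_y - (1/R)(f_xx + f_yy) at (x0, y0), via central differences."""
    fx  = (f(x0 + h, y0) - f(x0 - h, y0)) / (2 * h)
    fy  = (f(x0, y0 + h) - f(x0, y0 - h)) / (2 * h)
    fxx = (f(x0 + h, y0) - 2 * f(x0, y0) + f(x0 - h, y0)) / h ** 2
    fyy = (f(x0, y0 + h) - 2 * f(x0, y0) + f(x0, y0 - h)) / h ** 2
    return u(x0, y0) * fx + v(x0, y0) * fy - (fxx + fyy) / R

r = abs(residual(u, 0.3, 0.2))   # should vanish up to finite-difference error
```

The same check with residual(v, ...) verifies the second equation of (2.1).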

Fig. 2.1 Absolute error in u(x, y) using Poly-Sinc algorithm with N = 5 and R = 103


Fig. 2.2 Absolute error in v(x, y) using Poly-Sinc algorithm with N = 5 and R = 103

Experiment 2 (Decaying Rate of Error) It has been shown in [17, 21] that Poly-Sinc methods converge exponentially, see Theorem 2.1. To demonstrate this numerically, let us examine the solution u(x, y) of (2.1) with (2.2) for R = 10³. For this case, we find the Poly-Sinc solution for different numbers of Sinc points, n. For each n, we compute the L2 norm error ϵ. As a result, a table of each n and the corresponding error has been collected. We then use this table in a least-squares estimation to find the coefficients of the logarithm of the error function, log(ϵ) = f(n) = γ − αn, where γ and α are constants. A least-squares fit to the collected data delivers the constants γ = −4.66878 and α = 1.45523. Dots in Fig. 2.3 represent the logarithm of the discrete norm error, while the continuous curve represents the logarithm of the fitted error function. Figure 2.3 demonstrates that the error of the Poly-Sinc method for the solution of (2.1) follows an exponentially decaying relation.

Experiment 3 (Different R) As mentioned in Sect. 2.1, for large values of the Reynolds number R it is difficult to find the numerical solution. This difficulty is also encountered in the inviscid Navier-Stokes equations for a convection dominated flow; in fact, Burgers' equation is one of the principal equations used to test new numerical methods for the Navier-Stokes equations. In this experiment, we show that the Poly-Sinc method can overcome this difficulty without any modifications of the collocation technique. Figure 2.4 represents the Poly-Sinc approximate solution

Fig. 2.3 Logarithmic plot of the error fitted to the function f(n) = −4.66878 − 1.45523n
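The least-squares estimation used in Experiment 2 can be sketched as follows; the (n, ϵ) data below are synthetic, generated from the fitted model log(ϵ) = γ − αn itself rather than taken from the paper's computations.

```python
import numpy as np

n = np.array([5, 7, 9, 11, 13], dtype=float)
eps = np.exp(-4.66878 - 1.45523 * n)      # synthetic errors following the reported fit

slope, intercept = np.polyfit(n, np.log(eps), 1)
gamma, alpha = intercept, -slope           # model: log(eps) = gamma - alpha * n
```

On real data the fit would return gamma and alpha up to the scatter of the measured errors.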


Fig. 2.4 Absolute error in u(x, y) for R = 100, 200, . . . , 1000

of u(x, y) in (2.1) for R = 100, 200, . . . , 1000, and Fig. 2.5 is for v(x, y). We can observe that the error for large Reynolds numbers is still of order 10⁻⁹ or 10⁻¹⁰.

Experiment 4 (Comparison) This section introduces a comparison between the approximate solutions obtained in the present paper and other results in the

Fig. 2.5 Absolute error in v(x, y) for R = 100, 200, . . . , 1000

Table 2.1 Comparison for different values of R

Methods   | R = 100 | R = 500
Poly-Sinc | 10⁻⁹    | 10⁻¹⁰
Sinc      |         |
[27]      | 10⁻³    | –
[11]      | 10⁻⁵    | –
[28]      | 10⁻⁵    | –
[29]      | 10⁻²    | –
[26]      | –       | 10⁻⁵

literature. We compare the approximate solutions obtained by the Poly-Sinc collocation method with some other methods in the literature [11, 26–29]. In Table 2.1, we compare the error obtained by the Poly-Sinc method introduced in this paper with the errors reported in the mentioned literature. The results in Table 2.1 show that most of the techniques in the literature deliver poor accuracy for large R. Moreover, there is a lack of literature for R > 500. On the other hand, the Poly-Sinc method introduced in this paper yields highly accurate and rapidly convergent approximations of the solution of Burgers' equation. In addition, the Poly-Sinc method gives, without any modifications, highly accurate solutions for large values of R.


2.5 Conclusion

In this paper, we approximate the solutions of the two-dimensional coupled Burgers' equations by the Poly-Sinc collocation method. The accuracy of the proposed method is discussed using the decay rate of the error as a function of the number of Sinc points used. It is shown that the Poly-Sinc method is highly efficient for large values of the Reynolds number, and that the error decays exponentially. In comparison with other numerical techniques, we showed that the collocation method based on Sinc points is more accurate than those based on other meshing points.

Bibliography

1. Burgers, J.M.: A mathematical model illustrating the theory of turbulence. Adv. Appl. Mech. 1, 171–199 (1948)
2. Whitham, G.B.: Linear and Nonlinear Waves. Wiley, New York (1974)
3. Ma, W.X.: A hierarchy of coupled Burgers systems possessing a hereditary structure. J. Phys. A 26, 1169–1174 (1993)
4. Esipov, S.E.: Coupled Burgers' equations: a model of polydispersive sedimentation. Phys. Rev. E 52, 3711–3718 (1995)
5. Nee, J., Duan, J.: Limit set of trajectories of the coupled viscous Burgers equations. Appl. Math. Lett. 11, 57–61 (1998)
6. Soliman, A.A.: The modified extended tanh-function method for solving Burgers-type equations. Physica A 361, 394–404 (2006)
7. Wazwaz, A.M.: Multiple-front solutions for the Burgers equation and the coupled Burgers equations. Appl. Math. Comput. 190, 1198–1206 (2007)
8. Fletcher, C.A.J.: A comparison of finite element and finite difference solutions of the one- and two-dimensional Burgers' equations. J. Comput. Methods 51(1), 159–188 (1983)
9. Huang, P.Z., Abduwali, A.: The modified local Crank-Nicolson method for one- and two-dimensional Burgers' equations. Comput. Math. Appl. 59, 2452–2463 (2010)
10. Bahadir, A.R.: A fully implicit finite-difference scheme for two-dimensional Burgers' equation. Appl. Math. Comput. 137, 131–137 (2003)
11. Zhao, G., Yu, X., Zhang, R.: The new numerical method for solving the system of two-dimensional Burgers' equations. Comput. Math. Appl. 62, 3279–3291 (2011)
12. Siraj-ul-Islam, Šarler, B., Vertnik, R., Kosec, G.: Radial basis function collocation method for the numerical solution of the two-dimensional transient nonlinear coupled Burgers' equations. Appl. Math. Modell. 36, 1148–1160 (2012)
13. Wang, G.W., Xu, T.Z., Biswas, A.: Topological solitons and conservation laws of the coupled Burgers' equations. Rom. Rep. Phys. 66, 274–285 (2014)
14. Wang, Y., Navon, I.M., Wang, X., Cheng, Y.: 2D Burgers equation with large Reynolds number using POD/DEIM and calibration. Int. J. Numer. Methods Fluids 82(12), 909–931 (2016)
15. Egidi, N., et al.: An integral equation method for the numerical solution of the Burgers equation. Comput. Math. Appl. (2018). https://doi.org/10.1016/j.camwa.2018.04.002
16. Li, Q., Chai, Z., Shi, B.: Lattice Boltzmann models for two-dimensional coupled Burgers' equations. Comput. Math. Appl. 75(3), 64–875 (2018). https://doi.org/10.1016/j.camwa.2017.10.013
17. Stenger, F., Youssef, M., Niebsch, J.: Improved approximation via use of transformations. In: Shen, X., Zayed, A.I. (eds.) Multiscale Signal Analysis and Modeling, pp. 25–49. Springer, New York (2013)


18. Youssef, M., El-Sharkawy, H.A., Baumann, G.: Lebesgue constant using sinc points. Adv. Numer. Anal. (2016). http://dx.doi.org/10.1155/2016/6758283
19. Stenger, F.: Handbook of Sinc Numerical Methods. CRC Press, Boca Raton (2011)
20. Stenger, F., Baumann, G., Koures, V.G.: Computational methods for chemistry and physics, and Schrödinger in 3 + 1. In: Concepts of Mathematical Physics in Chemistry: A Tribute to Frank E. Harris - Part A, vol. 71, pp. 265–298. Elsevier, Amsterdam (2015)
21. Youssef, M., El-Sharkawy, H.A., Baumann, G.: Multivariate poly-sinc approximation, error estimation and Lebesgue constant. J. Math. Res. 8(4) (2016). http://dx.doi.org/10.5539/jmr.v8n4p118
22. Youssef, M., Baumann, G.: Collocation method to solve elliptic equations, bivariate poly-sinc approximation. J. Prog. Res. Math. 7(3), 1079–1091 (2016). ISSN: 2395-0218
23. Youssef, M., Baumann, G.: On bivariate poly-sinc collocation applied to patching domain decomposition. Appl. Math. Sci. 11(5), 209–226 (2017)
24. Youssef, M., Pulch, R.: Poly-sinc solution of stochastic elliptic differential equations (2019). http://arxiv.org/abs/1904.02017
25. Fletcher, C.A.J.: Computational Techniques for Fluid Dynamics, Part I. Springer, Berlin (1988)
26. Cristescu, I.A.: Numerical resolution of coupled two-dimensional Burgers' equation. Rom. J. Phys. 62, 103 (2017)
27. Pandey, K., Verma, L., Verma, A.K.: On a finite difference scheme for Burgers' equation. Appl. Math. Comput. 215, 2206–2214 (2009)
28. Zhang, D.S., Wei, G.W., Kouri, D.J., Hoffman, D.K.: Burgers' equation with high Reynolds number. Phys. Fluids 9(6), 1853–1855 (1997)
29. Kakuda, K., Tosaka, N.: The generalized boundary element approach to Burgers' equation. Int. J. Numer. Methods Eng. 29, 245 (1990)

Chapter 3

Sinc Projection Solutions of Fredholm Integral Equations

Khadijeh Nedaiasl

Abstract We present numerical schemes based on sinc approximations for some classes of nonlinear integral equations of Hammerstein and Urysohn types. To this end, using the sinc interpolation operator, projection-type methods such as the sinc-collocation, sinc-Nyström and sinc-convolution methods are introduced. These methods are analyzed by the collectively compact operator theory. Some numerical examples are given to confirm the accuracy and the ease of implementation of the methods.

Keywords Fredholm integral equation · Collocation method · Convolution method · Nyström method · Nonlinear operators

3.1 Introduction

In this paper, we consider the numerical solution of the Urysohn integral equation

u(t) − ∫_a^b f(|t − s|) k(t, s, u(s)) ds = g(t),   −∞ < a ≤ t ≤ b < ∞,   (3.1)

where u(t) is an unknown function to be determined and k(t, s, u) and g(t) are given functions. For the smooth and weakly singular integral equations, we take f (t) = 1 and f (t) = t −λ for 0 < λ < 1, respectively. This equation appeared when Pavel Urysohn rewrote a nonlinear boundary value problem of order two into an integral equation [27]. Urysohn equations are a general formulation for a class of nonlinear

K. Nedaiasl () Institute for Advanced Studies in Basic Sciences, Zanjan, Iran e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 G. Baumann (ed.), New Sinc Methods of Numerical Analysis, Trends in Mathematics, https://doi.org/10.1007/978-3-030-49716-3_3


Fredholm integral equations called Hammerstein equations,

u(t) − ∫_a^b f(|t − s|) k(t, s) ψ(s, u(s)) ds = g(t),   −∞ < a ≤ t ≤ b < ∞,   (3.2)

for known functions k(t, s) and ψ(s, u). These equations could be written in operator notation as

(I − K)u = g,   (3.3)

where

(K u)(t) = ∫_a^b k(t, s, u(s)) ds,   (3.4)

or

(K u)(t) = ∫_a^b k(t, s) ψ(s, u(s)) ds.   (3.5)

These operators are defined on the Banach space X = H∞(D) ∩ C(D̄). In this notation, D ⊂ C is a simply connected domain with (a, b) ⊂ D, and H∞(D) denotes the family of analytic functions on the domain D. In Eq. (3.3), the kernels could be smooth, weakly singular or strongly singular. In this paper, we investigate the smooth and weakly singular kernels.

It is well known that the integral representation of the Laplace equation with nonlinear integral boundary conditions leads to a nonlinear integral equation of Hammerstein type. Also, the Nekrasov equation, which arises in the study of water waves, can be categorized as a nonlinear Fredholm integral equation [4]. Furthermore, the boundary integral equations of the Laplace and Helmholtz operators are described as linear combinations of weakly singular operators [22, 24]. So, nonlinear integral equations of Fredholm type have a key role in the theory and application of integral equations.

The numerical solution of Hammerstein and Urysohn integral equations has been considered by many researchers. Krasnosel'skiĭ et al. [11, 12] have studied the Galerkin and Petrov-Galerkin solutions of nonlinear equations with compact operators and obtained a general upper bound that depends on the orthogonal projection operator. Inspired by the mentioned monographs, Kendall Atkinson et al. have developed the numerical analysis of projection methods based on piecewise polynomials for Fredholm integral equations [4, 8]. His excellent monograph on the numerical solution of linear Fredholm integral equations [5] has been an inspiration for much research in this field. In this direction, we could mention Linear Integral Equations by Rainer Kress [13] and its chapter on projection methods. Thus, numerical methods for nonlinear Fredholm integral equations have been in the spotlight of research in this field. Applying the sinc method to obtain an approximation of the solution of


integral equations has been utilized for Volterra and Fredholm equations with smooth and singular kernels; for more details see [1, 9, 18, 20]. In this paper, we study some numerical methods based on sinc approximations for nonlinear Fredholm integral equations with smooth and weakly singular kernels. We provide a rigorous convergence analysis for the proposed methods. We assume that Eq. (3.3) has at least one solution and that the nonlinear operator K is completely continuous. Sufficient conditions to have such a solution will be introduced. Additionally, suppose that the exact solution u(t) to be determined is geometrically isolated [10], which means that there is some ball

B(u, r) = {v ∈ X : ‖u − v‖ ≤ r},

with r > 0 and ‖u‖ = sup{|u(t)| : t ∈ [0, 1]}, that contains no solution of (3.1) other than u. It is assumed that 1 is not an eigenvalue of the linear operator K'(u), so the existence of the geometrically isolated solution is guaranteed [15].

3.2 The Basics of the Sinc Method

The sinc function is defined on R by

sinc(t) = sin(πt)/(πt) for t ≠ 0,   and   sinc(0) = 1.

The sinc numerical methods are based on approximation over the infinite interval (−∞, ∞), written as

f(t) ≈ Σ_{j=−N}^{N} f(jh) S(j, h)(t),   t ∈ R,

where the basis functions S(j, h)(t) are defined by

S(j, h)(t) = sinc(t/h − j),

and h is a step size appropriately chosen depending on a given positive integer N, and j is an integer. It is known that the sinc approximation and numerical integration are closely related through the identity

∫_{−∞}^{∞} ( Σ_{j=−N}^{N} f(jh) S(j, h)(t) − f(t) ) dt = h Σ_{j=−N}^{N} f(jh) − ∫_{−∞}^{∞} f(t) dt.   (3.6)
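A small numerical check of the sinc expansion above (the example is ours; note that np.sinc implements exactly the normalized sinc, sin(πx)/(πx)):

```python
import numpy as np

def sinc_approx(f, t, N, h):
    """(2N+1)-term sinc expansion of f, evaluated at the scalar t."""
    j = np.arange(-N, N + 1)
    # S(j, h)(t) = sinc(t/h - j); np.sinc(x) = sin(pi x)/(pi x)
    return float(np.sum(f(j * h) * np.sinc(t / h - j)))

f = lambda t: np.exp(-t ** 2)                # rapidly decaying, analytic on a strip
err = abs(sinc_approx(f, 0.3, 40, 0.25) - f(0.3))
```

For such a function the error decays extremely fast in N, consistent with the exponential rates discussed below.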


On the other hand, this is a relation between the approximation error of the sinc approximation and that of integration by the trapezoidal rule [19]. Equation (3.6) can be adapted to approximation on general intervals with the aid of appropriate variable transformations t = ϕ(x). For this aim it is enough for a function on a compact interval [a, b] to be transformed by the invertible mapping [25]

ϕ : (−∞, ∞) → (a, b),   ϕ(x) = ((b − a)/2) tanh(x/2) + (b + a)/2.   (3.7)

The inverse transform of ϕ is given as φ(t) = log(

t −a ). b−t
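The pair ϕ, φ can be checked numerically; the sketch below (the interval (0, 1) and the sample points are arbitrary choices) verifies that φ inverts ϕ.

```python
import math

def phi(x, a, b):
    # phi : R -> (a,b), phi(x) = (b-a)/2 * tanh(x/2) + (b+a)/2, per (3.7)
    return (b - a) / 2 * math.tanh(x / 2) + (b + a) / 2

def phi_inv(t, a, b):
    # the inverse transform: log((t-a)/(b-t))
    return math.log((t - a) / (b - t))

a, b = 0.0, 1.0
xs = [-3.0, -1.0, 0.0, 2.5]
roundtrip = max(abs(phi_inv(phi(x, a, b), a, b) - x) for x in xs)
```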

To define a convenient function space, the strip domain

Dd = {z ∈ C : |Im z| < d},  for some d > 0,

is introduced; under the transformation (3.7) it is mapped onto the domain

ϕ(Dd) = {z ∈ C : |arg((z − a)/(b − z))| < d}.

The following definitions and theorems describe the procedure in more detail.

Definition 3.1 Let D be a simply connected domain which satisfies (a, b) ⊂ D, and let α and C be positive constants. Then Lα(D) denotes the family of all functions f ∈ H∞(D) which satisfy

|f(z)| ≤ C|Q(z)|^α   (3.8)

for all z in D, where Q(z) = (z − a)(b − z).

The next theorem shows the exponential convergence of the sinc interpolation.

Theorem 3.1 ([25]) Let f ∈ Lα(ϕ(Dd)) for d with 0 < d < π. Let N be a positive integer, and let h be given by the formula h = √(πd/(αN)). Then there exists a constant C independent of N such that

‖ f − Σ_{j=−N}^{N} f(ϕ(jh)) S(j, h)(φ(·)) ‖ ≤ C √N exp(−√(πdαN)).

3 Sinc Projection Solutions of Fredholm Integral Equations

39

The following theorem bounds the error of the (2N + 1)-point quadrature rule for f on (a, b). When incorporated with the transformation (3.7), the rule is designated as the sinc quadrature.

Theorem 3.2 ([25]) Let (fQ) ∈ Lα(ϕ(Dd)) for d with 0 < d < π. Let N be a positive integer and let h be selected by the formula

h = √(πd/(αN)).

Then there exists a constant C which is independent of N such that

| ∫_a^b f(t) dt − h Σ_{j=−N}^{N} f(ϕ(jh)) ϕ′(jh) | ≤ C exp(−√(πdαN)).   (3.9)

According to Theorems 3.1 and 3.2, the function to be approximated should belong to Lα(D) in order to achieve exponential convergence. By condition (3.8), such a function is required to vanish at the endpoints, an assumption that is rarely satisfied in practice. However, it can be relaxed to the following function space Mα(D) with 0 < α ≤ 1 and 0 < d < π.

Definition 3.2 Let D be a simply connected and bounded domain which contains (a, b). The family Mα(D) consists of all functions f analytic in D and continuous on D̄ such that the translated function

G[f](t) = f(t) − [ ((b − t)/(b − a)) f(a) + ((t − a)/(b − a)) f(b) ]

belongs to Lα(D).

3.3 Sinc-Collocation Method

A sinc approximation ucN to the solution u ∈ Mα(ϕ(Dd)) of Eq. (3.1) is described in this part. Let us define the operator PN : Mα → X by

PN[u](t) = L[u](t) + Σ_{j=−N}^{N} [u(tj) − L[u](tj)] S(j, h)(φ(t)),

where

L[u](t) = ((b − t)/(b − a)) u(a) + ((t − a)/(b − a)) u(b),


and the points tj are defined by

tj = a for j = −N − 1,  tj = ϕ(jh) for j = −N, …, N,  tj = b for j = N + 1.   (3.10)

It should be noticed that PN u interpolates u by sinc functions at the above points, and PN is called the collocation operator. The approximate solution ucN is assumed to have the form

ucN(t) = c_{−N−1} (b − t)/(b − a) + Σ_{j=−N}^{N} cj S(j, h)(φ(t)) + c_{N+1} (t − a)/(b − a).   (3.11)
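The ansatz (3.11) can be evaluated directly. In the sketch below the coefficient vector and the parameters N, h are arbitrary illustrative choices; at the interior sinc points ti = ϕ(ih), where S(j, h)(φ(ti)) = δ_{ij}, the value reduces to the boundary terms plus the single coefficient ci, which is used as a sanity check.

```python
import math

def sinc(x):
    # sinc(x) = sin(pi*x)/(pi*x), sinc(0) = 1
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def u_cN(t, c, N, h, a, b):
    # ansatz (3.11); c has length 2N+3, c[0] = c_{-N-1}, c[-1] = c_{N+1}
    val = c[0] * (b - t) / (b - a) + c[-1] * (t - a) / (b - a)
    x = math.log((t - a) / (b - t))          # phi^{-1}(t)
    for j in range(-N, N + 1):
        val += c[j + N + 1] * sinc(x / h - j)
    return val

N, h, a, b = 3, 0.7, 0.0, 1.0
c = [float(k) for k in range(2 * N + 3)]     # arbitrary coefficients
i = 1
ti = (b - a) / 2 * math.tanh(i * h / 2) + (b + a) / 2     # sinc point phi(ih)
expected = (c[0] * (b - ti) / (b - a) + c[i + N + 1]
            + c[-1] * (ti - a) / (b - a))
```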

Applying the operator PN to both sides of Eq. (3.3) gives the approximate equation in operator form

zN = PN K zN + PN g,   (3.12)

so the collocation method for solving Eq. (3.3) amounts to solving (3.12) for N sufficiently large. We are interested in approximating the integral operator in (3.12) by the quadrature formula (3.9). The following discrete single exponential operator can thus be defined:

KN(u)(t) = h Σ_{j=−N}^{N} k(t, tj, u(tj)) ϕ′(jh).   (3.13)

This numerical procedure leads us to replace (3.12) with

ucN = PN KN ucN + PN g.   (3.14)

By substituting ucN into Eq. (3.1), approximating the integral by the sinc quadrature formula, and collocating at the 2N + 3 sampling points t = ti, the nonlinear system of equations

ucN(ti) = h Σ_{j=−N}^{N} k(ti, tj, ucN(tj)) ϕ′(jh) + g(ti),  i = −N − 1, …, N + 1,   (3.15)

is obtained, which is equivalent to (3.14). By solving this system, the unknown coefficients in ucN(t) are determined.


3.4 Sinc-Nyström Method

In the sinc-Nyström method, we approximate the integral operator in (3.1) by the quadrature formula (3.9). Let u ∈ H∞(ϕ(Dd)) and k(t, ·, u(·))Q(·) ∈ Lα(ϕ(Dd)) for all t ∈ [a, b] and u ∈ B. Then the integral in (3.1) can be approximated by Theorem 3.2 and the discrete operator KN(u) can be defined. The Nyström method applied to (3.1) is to find unN such that

unN(t) − h Σ_{j=−N}^{N} k(t, tj, unN(tj)) ϕ′(jh) = g(t),   (3.16)

where the sinc points tj are defined by

tj = ϕ(jh),  j = −N, …, N.   (3.17)

Solving (3.16) reduces to solving a finite dimensional nonlinear system. For any solution of (3.16), the values unN(tj) at the quadrature points satisfy the system

unN(ti) − h Σ_{j=−N}^{N} k(ti, tj, unN(tj)) ϕ′(jh) = g(ti),  i = −N, …, N.   (3.18)
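To illustrate the scheme (3.16)–(3.18), the sketch below solves a manufactured Urysohn equation u(t) − ∫_0^1 ½ t s u(s)² ds = 7t/8, whose exact solution is u(t) = t, by simple fixed-point iteration on the discrete system (3.18). The kernel, right-hand side, parameters, and the use of fixed-point iteration instead of Newton's method are illustrative assumptions, not taken from the text.

```python
import math

def solve_nystrom(kernel, g, a, b, N, alpha, d, iters=100):
    # sinc points t_j = phi(jh) and weights h*phi'(jh), as in (3.16)-(3.18)
    h = math.sqrt(math.pi * d / (alpha * N))
    pts, wts = [], []
    for j in range(-N, N + 1):
        x = j * h
        pts.append((b - a) / 2 * math.tanh(x / 2) + (b + a) / 2)
        wts.append(h * (b - a) / 4 * (1 - math.tanh(x / 2) ** 2))
    u = [g(t) for t in pts]                  # initial guess u0 = g
    for _ in range(iters):                   # fixed-point iteration on (3.18)
        u = [g(t) + sum(w * kernel(t, s, us)
                        for s, us, w in zip(pts, u, wts))
             for t in pts]
    return pts, u

kernel = lambda t, s, u: 0.5 * t * s * u * u   # manufactured Urysohn kernel
g = lambda t: 7 * t / 8                        # chosen so that u(t) = t is exact
pts, u = solve_nystrom(kernel, g, 0.0, 1.0, N=16, alpha=1.0, d=math.pi / 2)
err = max(abs(ui - t) for ui, t in zip(u, pts))
```

The iteration is a contraction for this weakly nonlinear kernel; for stronger nonlinearities a Newton solve of (3.18), as used later in the numerical section, would be preferable.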

Conversely, given a solution {unN(ti)}_{i=−N}^{N} of the system (3.18), the function unN defined by

unN(t) = h Σ_{j=−N}^{N} k(t, tj, unN(tj)) ϕ′(jh) + g(t)

is readily seen to satisfy (3.16). We rewrite (3.16) in operator notation as

(I − KN) unN = g.   (3.19)

Using the Leray-Schauder theorem, Atkinson [3] proved that, under certain differentiability assumptions on K and KN, (3.19) has a unique solution in a neighborhood of an isolated solution of (3.1), and these approximate solutions converge to the isolated solution for sufficiently large N. We assume that ku(t, s, u) ≡ ∂k(t, s, u)/∂u is continuous for all t, s ∈ [a, b] and u ∈ B. This assumption implies that K is Fréchet differentiable [3] with

K′(u)x(t) = ∫_a^b ku(t, s, u(s)) x(s) ds,  t ∈ [a, b],  x ∈ X.


Furthermore, the second partial derivative of the kernel, kuu(t, s, u), is assumed to be continuous, which yields the existence and boundedness of the second Fréchet derivative

K″(u)(x, y)(t) = ∫_a^b kuu(t, s, u(s)) x(s) y(s) ds,  t ∈ [a, b],  x, y ∈ X.

Similarly to KN, the derivatives (KN)′ and (KN)″ can be defined by the sinc quadrature formula as follows:

(KN)′(u)x(t) = h Σ_{j=−N}^{N} ku(t, tj, u(tj)) ϕ′(jh) x(tj),

and

(KN)″(u)(x, y)(t) = h Σ_{j=−N}^{N} kuu(t, tj, u(tj)) ϕ′(jh) x(tj) y(tj).   (3.20)

The above operators are utilized to prove the convergence of the method.

3.5 Sinc-Convolution Method

Let f(t) be a function with a singularity at the origin and g(t) be a function with singularities at both endpoints. The sinc-convolution method is based on an accurate approximation of the integrals

p(s) = ∫_a^s f(s − t) g(t) dt,  s ∈ (a, b),
q(s) = ∫_s^b f(t − s) g(t) dt,  s ∈ (a, b),   (3.21)

which can then be used to approximate the definite convolution integral

∫_a^b f(|s − t|) g(t) dt.   (3.22)

In order to construct such an approximation, the following notation is introduced.


Definition 3.3 For a given positive integer N, let DN and VN denote the linear operators acting on a function u by

DN u = diag[u(t−N), …, u(tN)],  VN u = (u(t−N), …, u(tN))^T,   (3.23)

where the superscript T denotes the transpose and diag symbolizes the diagonal matrix. Set the basis functions as follows:

γj(t) = S(j, h)(φ(t)),  j = −N, …, N,
ωj(t) = γj(t),  j = −N + 1, …, N − 1,
ω−N(t) = (b − t)/(b − a) − Σ_{j=−N+1}^{N} (1/(1 + e^{jh})) γj(t),   (3.24)
ωN(t) = (t − a)/(b − a) − Σ_{j=−N}^{N−1} (e^{jh}/(1 + e^{jh})) γj(t).

With the aid of these basis functions, for a given vector c = (c−N, …, cN)^T we consider the linear combination, symbolized by ΠN,

(ΠN c)(t) = Σ_{j=−N}^{N} cj ωj(t).   (3.25)

Let us define the interpolation operator PNco : Mα(D) → XN = span{ωj(t)}_{j=−N}^{N} by

PNco f(t) = Σ_{j=−N}^{N} f(tj) ωj(t),

where the tj are the sinc points. The numbers σk and ek are determined by

σk = ∫_0^k sinc(t) dt,  k ∈ Z,
ek = 1/2 + σk.   (3.26)

A (2N + 1) × (2N + 1) Toeplitz matrix I^(−1) is defined by I^(−1) = [e_{i−j}], with e_{i−j} representing the (i, j)th element of I^(−1). In addition, the operators I+ and


I− are specified as follows:

(I+ g)(t) = ∫_a^t g(s) ds,  (I− g)(t) = ∫_t^b g(s) ds.   (3.27)

The following discrete operators IN+ and IN− approximate the operators I+ and I−, respectively:

(IN+ g)(t) = ΠN A^(1) VN g(t),  A^(1) = h I^(−1) DN(1/φ′),
(IN− g)(t) = ΠN A^(2) VN g(t),  A^(2) = h (I^(−1))^T DN(1/φ′).   (3.28)

For a function f, the operator F[f](s) is defined by

F[f](s) = ∫_0^c e^{−t/s} f(t) dt,   (3.29)
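The quantities σk, ek and the Toeplitz matrix I^(−1) of (3.26) can be computed directly. A minimal sketch follows (composite Simpson's rule for the sinc integral and the node count are illustrative implementation choices):

```python
import math

def sigma(k, n=2000):
    # sigma_k = int_0^k sinc(t) dt, via composite Simpson's rule (2n panels)
    if k == 0:
        return 0.0
    sgn, k = (1, k) if k > 0 else (-1, -k)   # sinc is even, so sigma(-k) = -sigma(k)
    h = k / (2 * n)
    f = lambda t: 1.0 if t == 0 else math.sin(math.pi * t) / (math.pi * t)
    s = f(0) + f(k) + 4 * sum(f((2 * i + 1) * h) for i in range(n)) \
        + 2 * sum(f(2 * i * h) for i in range(1, n))
    return sgn * s * h / 3

def e(k):
    # e_k = 1/2 + sigma_k, per (3.26)
    return 0.5 + sigma(k)

def I_minus1(N):
    # (2N+1) x (2N+1) Toeplitz matrix with (i,j) entry e_{i-j}
    return [[e(i - j) for j in range(-N, N + 1)] for i in range(-N, N + 1)]
```

The diagonal of I^(−1) is e_0 = 1/2, and σ_1 = Si(π)/π ≈ 0.58949.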

and it is assumed that Eq. (3.29) is well-defined for some c ∈ [b − a, ∞] and for all s in the right half of the complex plane, Ω+ = {z ∈ C : Re(z) > 0}. The sinc-convolution method provides formulae of high accuracy and allows f to have an integrable singularity at s = b − a and g to have singularities at both endpoints of (a, b) [26]. This property makes the sinc-convolution method suitable for approximating weakly singular integral equations. For convenience, some useful results on the sinc-convolution method are now recalled; the following theorem predicts its convergence rate.

Theorem 3.3 ([26])

(a) Suppose that the integrals p(t) and q(t) in (3.21) exist and are uniformly bounded on (a, b), and let F be defined by (3.29). Then the following operator identities hold:

p = F(I+)g,  q = F(I−)g.   (3.30)

(b) Assume that g/φ′ ∈ Lα(D). If for some positive C′ independent of N we have |F(s)| ≤ C′ for all Re(s) ≥ 0, then there exists a constant C, independent of N, such that

‖p − F(IN+)g‖ ≤ C √N exp(−√(παdN)),
‖q − F(IN−)g‖ ≤ C √N exp(−√(παdN)).   (3.31)


3.5.1 Sinc-Convolution Scheme

For practical use of the convolution method, it is assumed that the matrices A^(1) and A^(2), of dimension 2N + 1, are diagonalizable [26],

A^(j) = X^(j) S (X^(j))^{−1},  j = 1, 2,   (3.32)

where S = diag(s−N, …, sN) and

X^(1) = [x_{k,l}],  (X^(1))^{−1} = [x̄_{k,l}],  X^(2) = [ξ_{k,l}],  (X^(2))^{−1} = [ξ̄_{k,l}].   (3.33)

The integral in Eq. (3.1) is split into the two integrals

∫_a^b |t − s|^{−λ} k(t, s, u(s)) ds = ∫_a^t |t − s|^{−λ} k(t, s, u(s)) ds + ∫_t^b |t − s|^{−λ} k(t, s, u(s)) ds.   (3.34)

Based on the formulae (3.28), two discrete nonlinear operators are defined:

(KN1 u)(t) = ΠN A^(1) VN k(t, ·, u(·)),  (KN2 u)(t) = ΠN A^(2) VN k(t, ·, u(·)).   (3.35)

The approximate solution takes the form

uco_N(t) = Σ_{j=−N}^{N} cj ωj(t),

where the cj are unknown coefficients to be determined. The integrals on the right-hand side of (3.34) are approximated by the formulae (3.28), (3.30) and (3.32). Substituting these approximations into (3.1) and collocating the resulting equation at the sinc points reduces (3.1) to the finite dimensional system of equations

cj − Σ_{k=−N}^{N} x_{j,k} Σ_{l=−N}^{N} x̄_{k,l} F(sk) k(zj, zl, cl) − Σ_{k=−N}^{N} ξ_{j,k} Σ_{l=−N}^{N} ξ̄_{k,l} F(sk) k(zj, zl, cl) = y(zj),   (3.36)


for j = −N, …, N. Equation (3.36) can be expressed in operator notation as

uco_N − PNco KN1 uco_N − PNco KN2 uco_N = PNco y.   (3.37)

3.6 Convergence Analysis

3.6.1 Sinc-Nyström Method

The convergence of the sinc-Nyström method is discussed in this section. In the following lemma, D represents ϕ(Dd); the lemma gives sufficient conditions for K to be completely continuous.

Lemma 3.1 ([11]) Let the kernel k(t, s, u) be continuous and have a continuous partial derivative ∂k(t, s, u)/∂u for all t, s ∈ D and u ∈ B. Then K : X → X is a completely continuous operator and is differentiable at each point of B.

Our basic assumption is that (3.3) has an analytic solution; sufficient conditions for the existence of such a solution are given in [11, p. 83], and we suppose that those conditions are satisfied here. Our derivation of the order of convergence is based on collectively compact operator theory [2]. For ease of reference, the following conditions are quoted from [3, 28].

C1. {KN : N ≥ 1} is a collectively compact family on X.
C2. KN is pointwise convergent to K on X.
C3. For N ≥ 1, KN possesses continuous first and bounded second Fréchet derivatives on B. Moreover, ‖(KN)″‖ ≤ α < ∞, where α is a constant.

It is convenient to rewrite the quadrature rule of Theorem 3.2 in the following notation. Let QN : X → R be the discrete operator defined by

QN f = h Σ_{j=−N}^{N} f(tj) ϕ′(jh),   (3.38)

and let Q : X → R be the integral operator defined by Qf = ∫_a^b f(t) dt. Stenger et al. [14] concluded from Steklov's theorem that QN f → Qf for all f ∈ C[a, b]. Additionally, it follows easily from the Banach-Steinhaus theorem that QN is


uniformly bounded [17]. The following theorem states that KN satisfies the conditions C1–C3.

Theorem 3.4 Assume that k(t, ·, u(·))Q(·) ∈ Lα(ϕ(Dd)) for 0 < d < π and that kuu(t, s, u) is continuous for all t, s ∈ [a, b] and u ∈ B. Then the conditions C1–C3 are fulfilled.

Proof From the continuity of the kernel and the above discussion, the family S = {KN u | N ≥ 1, u ∈ B} is uniformly bounded. Furthermore, the function k(t, s, u) is uniformly continuous on [a, b] × [a, b] × B, and therefore the uniform boundedness of QN implies that S is a family of equicontinuous functions. Hence C1 follows from the Arzelà-Ascoli theorem. By Theorem 3.2 and the discussion of (3.38), condition C2 holds. Considering (3.20) on B together with the continuity of kuu(t, s, u), C3 follows easily.

Lemma 3.2 Let I − K′(u) be nonsingular and let the assumptions of Theorem 3.4 be fulfilled. Then for sufficiently large N, the linear operators I − (KN)′(u) are nonsingular; furthermore, ‖(I − (KN)′(u))^{−1}‖ ≤ M, where M is a constant independent of N.

Proof Condition C1 is satisfied and the family {KN | N ≥ 1} is equi-differentiable. Therefore, according to Theorem 6.10 in [2], the set {(KN)′(u) | N ≥ 1} is a collectively compact family of operators. Moreover, from condition C3 and Theorem 6.11 of [2], we conclude that (KN)′(u) converges pointwise to K′(u) for all u ∈ B. The final result then follows from the existence of (I − K′(u))^{−1} and the theory of collectively compact operators.

Now we are ready to formulate the main result.

Theorem 3.5 Suppose that the assumptions of Lemma 3.2 hold. Then there exists a positive integer N1 such that for all N ≥ N1, (3.19) has a unique solution unN. Furthermore, there exists a constant C independent of N such that

‖u − unN‖ ≤ C exp(−√(πdαN)).

Proof Subtracting (3.19) from (3.1) and adding the term (KN)′(u)(u − unN) to both sides gives

(I − (KN)′(u))(u − unN) = K(u) − KN(u) − {KN(unN) − KN(u) − (KN)′(u)(unN − u)}.   (3.39)


So Lemma 3.2 gives

‖u − unN‖ ≤ M ( ‖K(u) − KN(u)‖ + ‖KN(unN) − KN(u) − (KN)′(u)(unN − u)‖ ).

By condition C3, the second term on the right-hand side is bounded by ½ α ‖u − unN‖², which for sufficiently large N can be absorbed into the left-hand side; the final result is then obtained from Theorem 3.2.

3.6.2 Sinc-Collocation Method

The application of the sinc-collocation method to nonlinear Fredholm integral equations has been discussed in [16]. Applying such an algorithm to nonlinear integral equations leads to a discrete collocation method. In this section we give an error bound for this method based on the sinc-Nyström method. In [5, subsection 4.3] the iterated discrete collocation method is discussed for the case in which the integration nodes belong to the set of collocation points, as happens in the discrete sinc-collocation method. As the following theorem shows, in this case the iterated discrete collocation method is the Nyström method.

Theorem 3.6 ([6]) Suppose that the hypotheses of Lemma 3.2 are fulfilled. Furthermore, let u be an isolated solution of (3.3). Then the iterated discrete sinc-collocation method coincides with the sinc-Nyström method u − KN u = g.

It is convenient to write zN for the discrete sinc-collocation solution and z̃N for the iterated discrete sinc-collocation solution. In the following we mention some results related to the sinc-collocation method. Firstly, we state a lemma which is used subsequently.

Lemma 3.3 ([25]) Let h > 0. Then it holds that

sup_{x∈R} Σ_{j=−N−1}^{N+1} |S(j, h)(x)| ≤ (2/π)(3 + log(N + 1)).
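The bound of Lemma 3.3 can be probed numerically. The sketch below samples the sum of |S(j, h)(x)| on a grid (the step size h = 0.5 and the grid resolution are arbitrary choices); since the sampled maximum is a lower bound for the supremum, it must in particular stay below (2/π)(3 + log(N + 1)).

```python
import math

def sinc(x):
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def lebesgue(N, h=0.5, grid=2000):
    # sampled maximum of sum_{j=-N-1}^{N+1} |S(j,h)(x)| over x in [-(N+2)h, (N+2)h]
    best = 0.0
    for i in range(grid + 1):
        x = -(N + 2) * h + i * (2 * (N + 2) * h) / grid
        best = max(best, sum(abs(sinc(x / h - j))
                             for j in range(-N - 1, N + 2)))
    return best
```

For N = 4, 8, 16 the sampled maxima come out near 2.3–3.1, comfortably under the respective bounds 2.93, 3.31, 3.71.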

Based on Lemma 3.3, it follows that ‖PN‖ ≤ C log(N + 1), where C is a constant independent of N. We consider the sinc interpolation operator PN utilized in [16]. The following relation is easily proven:

u − zN = [u − PN u] + PN [u − z̃N].   (3.40)

Now we are ready to state the main result of this section.


Theorem 3.7 Let the hypotheses of Theorem 3.1 be fulfilled. Then the following error bound holds for the sinc-collocation method:

‖u − zN‖ ≤ C √N log(N + 1) exp(−√(πdαN)).

Proof Applying ‖·‖ to both sides of (3.40) gives

‖u − zN‖ ≤ ‖u − PN u‖ + ‖PN‖ ‖u − z̃N‖;

the first term on the right-hand side and the norm of the interpolation operator are bounded by Theorem 3.1 and Lemma 3.3, respectively. The final result then follows from Theorems 3.6 and 3.5.

3.6.2.1 Sinc-Convolution

The application of the sinc-convolution method to weakly singular integral equations of Fredholm type is studied in [21]. The convergence analysis of the sinc-convolution method is discussed in this section. The main result is formulated in the following theorem.

Theorem 3.8 Suppose that u(t) is an exact solution of (3.3) and that the kernel k satisfies a Lipschitz condition with respect to its third variable with Lipschitz constant L. Furthermore, let the assumptions of Theorem 3.1 be fulfilled. Then there exists a constant C independent of N such that

‖u − uco_N‖ ≤ C √N log(N) exp(−√(πdλN)).

Proof Subtracting (3.37) from (3.3) and applying ‖·‖ to both sides gives

‖u − uco_N‖ ≤ ‖K1 u − PNco KN1 uco_N‖ + ‖K2 u − PNco KN2 uco_N‖ + ‖y − PNco y‖,

where K1 and K2 denote the two integral operators arising from the splitting (3.34). Finding upper bounds for the first and second terms proceeds in the same way. For this aim the first term is rewritten as

K1 u − PNco KN1 uco_N = K1 u − PNco K1 uco_N + PNco K1 uco_N − PNco KN1 uco_N,

so we have

‖K1 u − PNco KN1 uco_N‖ ≤ ‖K1 u − K1 uco_N‖ + ‖K1 uco_N − PNco K1 uco_N‖ + ‖PNco (K1 uco_N − KN1 uco_N)‖;   (3.41)


the second term in (3.41) is bounded by Theorem 3.1. Due to the Lipschitz condition, the first term is bounded by

‖K1 u − K1 uco_N‖ ≤ C1 ‖u − uco_N‖,

where C1 is a suitable constant. In addition, Theorem 3.3 and the consequence of Lemma 3.3 yield the upper bound

‖PNco (K1 uco_N − KN1 uco_N)‖ ≤ C2 √N log(N) exp(−√(πdλN)).   (3.42)

Finally, we get

‖u − uco_N‖ ≤ C √N log(N) exp(−√(πdλN)).   (3.43)

3.7 Numerical Illustrations

In this section, some numerical experiments concerning the accuracy and rate of convergence of the discussed methods are presented. The proposed algorithms are implemented in Mathematica on a personal computer. The nonlinear systems of equations are solved by Newton's iteration, with the initial guess selected by the steepest descent method. The convergence rate of the sinc-collocation, sinc-Nyström and sinc-convolution methods depends on the two parameters α and d: the parameter d indicates the size of the holomorphic domain of u, and the parameter α is determined by Theorem 3.1. Throughout, d is taken to be 3.14. For all examples, c in formula (3.29) is chosen (if needed) to be infinity.

Example 3.1 Consider the nonlinear Fredholm integral equation

u(t) − t ∫_0^1 (4s + π sin(πs)) / ([u(s)]² + s² + 1) ds = g(t),  0 < t < 1,

where g(t) is chosen such that u(t) = sin(πt/2) is the exact solution [17]. This equation is solved by the sinc-collocation and sinc-Nyström methods; Table 3.1 presents the numerical results.

Table 3.1 Errors of the sinc-Nyström and sinc-collocation methods for Example 3.1

Method             N = 5      N = 10     N = 15     N = 20     N = 25     N = 30
Sinc-collocation   2.43E−03   2.42E−04   3.13E−05   7.28E−06   8.21E−07   5.81E−08
Sinc-Nyström       3.21E−04   3.21E−05   2.21E−06   4.32E−07   7.21E−08   1.21E−08
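The entries of Table 3.1 are consistent with an error decay of the form C exp(−c√N): for the sinc-Nyström row, the quantity −log(err)/√N should level off at a roughly constant value. This is a quick consistency check, not part of the original experiments.

```python
import math

# reported sup-norm errors from Table 3.1 (sinc-Nystrom row)
Ns   = [5, 10, 15, 20, 25, 30]
errs = [3.21e-4, 3.21e-5, 2.21e-6, 4.32e-7, 7.21e-8, 1.21e-8]

# if err ~ C*exp(-c*sqrt(N)), then -log(err)/sqrt(N) levels off near c
rates = [-math.log(e) / math.sqrt(n) for n, e in zip(Ns, errs)]
```

The computed ratios stay within a narrow band (roughly 3.27 to 3.60), as expected for the exponential rate predicted by Theorem 3.5.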

Fig. 3.1 Absolute error ‖u − uN‖ of the sinc-Nyström method versus the partition size N (logarithmic scale).

Example 3.2 Consider the Urysohn integral equation

u(t) − ∫_0^1 ds / (t + s + u(s)) = g(t),  0 ≤ t ≤ 1,   (3.44)

with g(t) chosen so that u(t) = 1/(1 + t) is the exact solution. This equation was introduced and solved in [7] by projection and iterated projection methods based on piecewise polynomials; Tables 1 and 2 in [7] report the Galerkin and iterated Galerkin solutions. Figure 3.1 shows the error of the sinc-Nyström approximation for Example 3.2.

Example 3.3 Consider the integral equation

u(t) − ∫_0^1 |t − s|^{−1/2} u²(s) ds = g(t),  t ∈ (0, 1),   (3.45)

where

g(t) = [t(1 − t)]^{1/2} + (16/15) t^{5/2} + 2 t² (1 − t)^{1/2} + (4/3) t (1 − t)^{3/2} + (2/5)(1 − t)^{5/2} − (4/3) t^{3/2} − 2 t (1 − t)^{1/2} − (2/3)(1 − t)^{3/2},

with the exact solution u(t) = √(t(1 − t)) [23]. The numerical results are shown in Fig. 3.2.


Fig. 3.2 Absolute errors ‖u − uN‖ of the sinc-convolution and sinc-collocation methods versus the partition size N for Example 3.3 (logarithmic scale).

3.8 Conclusion

Three numerical methods for the solution of nonlinear integral equations of Fredholm type have been presented. These schemes are projection methods built on sinc basis functions. Methods designed around the sinc function have many applications in theory and practice, from the theory of holomorphic functions to signal processing. Here, we have used it to construct basis functions suitable for the numerical solution of smooth and weakly singular integral equations of Fredholm type. The theory of collectively compact operators provides a framework for the error analysis of the proposed methods.

Bibliography

1. Akel, M.S., Hussein, S.H.: Numerical treatment of solving singular integral equations by using Sinc approximations. Appl. Math. Comput. 218, 3565–3573 (2011)
2. Anselone, P.M.: Collectively Compact Operator Approximation Theory and Applications to Integral Equations. Prentice Hall, Englewood Cliffs (1971)
3. Atkinson, K.: The numerical evaluation of fixed points for completely continuous operators. SIAM J. Numer. Anal. 10, 799–807 (1973)
4. Atkinson, K.E.: A survey of numerical methods for solving nonlinear integral equations. J. Integr. Equ. Appl. 4, 15–46 (1992)
5. Atkinson, K.E.: The Numerical Solution of Integral Equations of the Second Kind. Cambridge University Press, Cambridge (2009)
6. Atkinson, K., Flores, J.: The discrete collocation method for nonlinear integral equations. IMA J. Numer. Anal. 13, 195–213 (1993)
7. Atkinson, K.E., Potra, F.A.: Projection and iterated projection methods for nonlinear integral equations. SIAM J. Numer. Anal. 24, 1352–1373 (1987)


8. Han, W., Atkinson, K.E.: Theoretical Numerical Analysis: A Functional Analysis Framework. Springer, New York (2001)
9. Hashemizadeh, E., Rostami, M.: Numerical solution of Hammerstein integral equations of mixed type using the Sinc-collocation method. J. Comput. Appl. Math. 279, 31–39 (2015)
10. Keller, H.: Geometrically isolated nonisolated solutions and their approximation. SIAM J. Numer. Anal. 18, 822–838 (1981)
11. Krasnosel'skii, M.A., Vainikko, G.M., Zabreiko, P.P., Rutitskii, Ya.B., Stetsenko, V.Ya.: Approximate Solution of Operator Equations. Wolters-Noordhoff Publishing, Groningen (1972)
12. Krasnosel'skii, M.A., Vainikko, G.M., Zabreiko, P.P., Rutitskii, Ya.B., Stetsenko, V.Ya.: Geometrical Methods of Nonlinear Analysis. Springer, Boston (1992)
13. Kress, R.: Linear Integral Equations. Springer, New York (1999)
14. Kress, R., Sloan, I.H., Stenger, F.: A sinc quadrature method for the double-layer integral equation in planar domains with corners. J. Integr. Equ. Appl. 10, 291–317 (1998)
15. Kumar, S., Sloan, I.H.: A new collocation-type method for Hammerstein integral equations. Math. Comput. 48, 585–593 (1987)
16. Maleknejad, K., Nedaiasl, K.: Application of sinc-collocation method for solving a class of nonlinear Fredholm integral equations. Comput. Math. Appl. 62, 3292–3303 (2011)
17. Maleknejad, K., Nedaiasl, K.: A sinc quadrature method for the Urysohn integral equation. J. Integr. Equ. Appl. 25, 407–429 (2013)
18. Maleknejad, K., Mollapourasl, R., Ostadi, A.: Convergence analysis of Sinc-collocation methods for nonlinear Fredholm integral equations with a weakly singular kernel. J. Comput. Appl. Math. 278, 1–11 (2015)
19. Mori, M., Sugihara, M.: The double-exponential transformation in numerical analysis. J. Comput. Appl. Math. 127, 287–296 (2001)
20. Mori, M., Nurmuhammad, A., Murai, T.: Numerical solution of Volterra integral equations with weakly singular kernel based on the DE-Sinc method. Jpn. J. Ind. Appl. Math. 25, 165–183 (2008)
21. Nedaiasl, K.: Approximation of weakly singular integral equations by sinc projection methods. Electron. Trans. Numer. Anal. 52, 416–430 (2020)
22. Nedelec, J.C.: Integral equations with non integrable kernels. Integr. Equ. Oper. Theory 5, 562–572 (1982)
23. Pedas, A., Vainikko, G.: Superconvergence of piecewise polynomial collocations for nonlinear weakly singular integral equations. J. Integr. Equ. Appl. 9, 379–406 (1997)
24. Rjasanow, S., Steinbach, O.: The Fast Solution of Boundary Integral Equations. Springer, New York (2007)
25. Stenger, F.: Numerical Methods Based on Sinc and Analytic Functions. Springer, Berlin (1992)
26. Stenger, F.: Handbook of Sinc Numerical Methods. CRC Press, New York (2010)
27. Urysohn, P.S.: On a type of nonlinear integral equation. Mat. Sb. 31, 236–255 (1924)
28. Weiss, R.: On the approximation of fixed points of nonlinear compact operators. SIAM J. Numer. Anal. 11, 550–553 (1974)

Chapter 4

Lévy-Schrödinger Equation: Their Eigenvalues and Eigenfunctions Using Sinc Methods

Gerd Baumann

Dedicated to Frank Stenger, who advanced and distributed Sinc methods on the occasion of his 80th birthday

Abstract We shall examine the fractional generalization of the eigenvalue problem of Schrödinger's equation for one-dimensional problems in connection with Lévy stable probability distributions. The corresponding Sturm-Liouville problem for the fractional Schrödinger equation is formulated and solved on R subject to natural Dirichlet boundary conditions. The eigenvalues and eigenfunctions are computed in a numerical Sinc Approximation basis for the Riesz-Feller representation of Schrödinger's generalized equation. We demonstrate that the eigenvalues of the fractional operator approach the well-known eigenvalues of the integer-order Schrödinger equation and are consistent with analytic WKB estimations by Laskin (Chaos 10:780–790, 2000) and Jeng et al. (J Math Phys 51:062102, 2010). We can also confirm the conjecture by Luchko (J Math Phys 54:12111, 2013) that only for skewness parameter θ = 0 are the eigenvalues real quantities and thus relevant in quantum mechanics. For skewness parameters θ ≠ 0, the Sinc approach yields complex eigenvalues with related complex eigenfunctions, which nevertheless produce real probability densities. Keywords Lévy-Schrödinger equation · Sturm-Liouville problem · Riesz-Feller derivative · Fractional operator · Sinc approximation · Fractional Schrödinger equation · Sinc collocation · Sinc convolution · Quarkonium model · Harmonic oscillator · Pöschel-Teller model · Bistable quantum well · Finite quantum well

G. Baumann () Mathematics Department, German University in Cairo, New Cairo City, Egypt University of Ulm, Ulm, Germany e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 G. Baumann (ed.), New Sinc Methods of Numerical Analysis, Trends in Mathematics, https://doi.org/10.1007/978-3-030-49716-3_4


4.1 Introduction Schrödinger’s equation is one of the central equations of quantum mechanics using a probability approach for its interpretation [1]. Based on probability, Feynman and Hibbs reformulated Schrödinger’s equation using the celebrated path integral approach based on the Gaussian probability distribution. Kac in his 1951 lecture pointed out that a Lévy path integral generates the functional measure in the space of left (or right) continued functions having only discontinuities of the first kind, and thus may lead to a generalization of Feynman’s path integrals to Lévy path integrals [2]. These ideas were also examined by Montroll [3] at that time using only basic concepts of quantum mechanics in order to generalize the Gaussian picture to the exotic nature of the statistical processes of Paul Lévy and the incredibly complex physical phenomena that these statistics promised to explain. A summary of the ideas was given recently by Bruce West [4] leading to a differential free formulation of a generalized Schrödinger equation based on Lévy processes. The assumption of scaling for the propagator directly results in the Riesz representation of Schrödinger’s equation based on Lévy stable processes [4]. Laskin took up these ideas, extended Feynman’s path integral to Lévy path integrals, and developed a space-fractional Schrödinger equation (SFSE) containing the Riesz-Feller fractional derivative [5, 6], as already conjectured by Kac and Montroll [2, 3]. Naber in 2004 introduced—in analogy to the fractional diffusion equation—a temporal fractional derivative which he called “time fractional Schrödinger equation (TFSE)” [7]. The corresponding Green’s functions for the fractional Schrödinger equation were derived by Dong and Xu [8]. This allowed the use of integral equations to formulate the quantum mechanical problems [9] which was also Montroll’s earlier aim. A practical application of the fractional Schrödinger equation was proposed recently by Longhi [10]. 
In this optical realization, the transverse modes and resonance frequencies of a resonator correspond to the eigenfunctions and energies of the stationary fractional Schrödinger equation with Lévy index α in an external potential V(x). No numerical verification of Laskin's eigenvalue and eigenfunction approach has been given to date. The results we shall present are new in the sense that we are able to verify numerically the suggested eigenvalue relations and also to give a general approach to eigenvalue approximations based on the Riesz-Feller operator. We note that for the special Lévy index α = 1, Jeng et al. [11] presented an asymptotic approach for the harmonic oscillator which is in agreement with our findings. However, we shall show numerically that the constraints introduced by Laskin in [6] for the potential V(x) ∼ |x|^β with 1 ≤ β ≤ 2 are not real constraints, based on our numerical computations. It turned out that for β > 0, as Laskin also mentioned in [12], we are able to determine the eigenvalues and eigenfunctions accurately and use the formula given in [6] for the eigenvalues of the quarkonium potentials. This allows us to compute eigenvalues for the quarkonium problem of QCD. Moreover, the proposed numerical approach is able to deal with a large variety of potential functions V(x) and to detect bound states as well as free quantum states. The access to eigenvalues and analytically defined eigenfunctions opens a


broad field of applications in quantum mechanics which is no longer restricted to Gaussian processes. Introducing Lévy processes into the interpretation of quantum mechanics yields novel insights as well as novel phenomena that may be accessible for future research, especially for applications with known eigenvalues and eigenfunctions for a given potential V(x); see for example the recent discussion in [13, 14]. Although the 1+1 dimensional formulation has in the past been applied to higher dimensional problems [9], we shall constrain our discussion to the one dimensional case

i ∂t u(x, t) = −(1/2) ∂x,x u(x, t) + V(x) u(x, t),  −∞ < x < ∞,  t ≥ 0,   (4.1)

where V(x) is the potential of the quantum mechanical problem. There is an ongoing discussion in the literature regarding the existence of a fractional generalization of (4.1) to the case of a potential with finite support [6, 11, 15, 16]. This discussion is motivated, in part, by the lack of known methods for dealing with problems that have a potential with infinite support. Our approach does not suffer from such a restriction, and we shall thus take the classical route in this paper by assuming that the Sturm-Liouville problem satisfies Dirichlet conditions at x = ±∞. This assumption reflects the classical quantum mechanical properties that a wave function has to satisfy based on a probability interpretation. In addition let us assume that the solution of (4.1) is separable as u(x, t) = v(x) exp(−iλt), which allows us to rewrite (4.1) as

−(1/2) ∂x,x v(x) + V(x) v(x) = λ v(x),  −∞ < x < ∞,   (4.2)

with λ = E/(ħω) the eigenvalues measured in units of ħω, and the boundary conditions for the eigenfunctions v(±∞) = 0. The potential V(x) is assumed to be real-valued and regular in the sense that

$$\int_{-\infty}^{\infty} (1+|x|)\,|V(x)|\,dx < \infty, \qquad (4.3)$$

and satisfies the minimal requirements for Sturm-Liouville boundary value problems (for details see [17, 18]). Note that we have used scaled units x = √(ħ/(mω)) ξ in the representation of (4.1) and (4.2) and have retained the original symbol for the spatial coordinate. In 2000, Laskin introduced the fractional representation of (4.2) using the Riesz-Feller potential to represent the Laplacian in Schrödinger's equation [5]. The corresponding Sturm-Liouville problem on R is given as

$$-D_\alpha\; {}_{-\infty}^{\ \ \infty}D^{\alpha}_{x;\theta}\, v(x) + V(x)\,v(x) = \lambda\, v(x) \quad\text{with } -\infty < x < \infty \text{ and } v(\pm\infty) = 0, \qquad (4.4)$$

58

G. Baumann

where D_α is an appropriate constant and ${}_{-\infty}^{\ \ \infty}D^{\alpha}_{x;\theta}$ represents the Riesz-Feller pseudo-differential operator (see Appendix). Here V(x) is a potential with support R. The problems discussed in connection with (4.4) are how the eigenvalues λ are related to the fractional order α and how the eigenvalues are separated in terms of the quantum number n. There are only a few analytic results available, based on WKB approximations, for testing these results [6, 11]. The analytic results are mainly related to the classical model of a harmonic oscillator, and we extend these numerically to other types of oscillators. Another important question discussed in connection with (4.4) is the behavior of the eigenvalues and eigenfunctions if the potential V(x) is defined on a finite support of R. This question touches the open problem of how to define the boundary conditions for this infinite integral eigenvalue problem. The core problem is that the Riesz-Feller potential incorporates all influences on the entire real line, so that the introduction of finite boundaries will dismiss a large contribution of these interactions. We shall introduce finite boundaries and at the same time keep the influences of the Riesz-Feller potential for the rest of the space. The separation of the finite and infinite contributions can be formally achieved by using the integral properties of the Riesz-Feller potential as follows

$$-D_\alpha\left({}_{-\infty}^{\ \ a}D^{\alpha}_{x;\theta}\, v(x) + {}_{a}^{\ b}D^{\alpha}_{x;\theta}\, v(x) + {}_{b}^{\ \infty}D^{\alpha}_{x;\theta}\, v(x)\right) + V(x)\,v(x) = \lambda\, v(x), \qquad (4.5)$$

with −∞ < x < ∞ and v(a) = v(b) = v(±∞) = 0, where a and b are finite real values and the notation ${}_{c}^{\ d}D^{\alpha}_{x;\theta}$ takes into account the actual interval of integration. If we rearrange terms in Eq. (4.5) as follows, we are able to write

$$-D_\alpha\; {}_{a}^{\ b}D^{\alpha}_{x;\theta}\, v(x) + V(x)\,v(x) - D_\alpha\left({}_{-\infty}^{\ \ a}D^{\alpha}_{x;\theta}\, v(x) + {}_{b}^{\ \infty}D^{\alpha}_{x;\theta}\, v(x)\right) = \lambda\, v(x), \qquad (4.6)$$

which can be written as

$$-D_\alpha\; {}_{a}^{\ b}D^{\alpha}_{x;\theta}\, v(x) + V_{\mathrm{eff}}(x)\,v(x) = \lambda\, v(x), \qquad (4.7)$$

with −∞ < x < ∞ and v(a) = v(b) = v(±∞) = 0. Here V_eff(x) is an effective potential consisting of the "stripped" potential V(x) defined on a finite support and the confining potential $W(x) = -D_\alpha\left({}_{-\infty}^{\ \ a}D^{\alpha}_{x;\theta}\,\cdot\; + {}_{b}^{\ \infty}D^{\alpha}_{x;\theta}\,\cdot\;\right)$ taking into account all influences outside the support [a, b]. The effective potential V_eff(x) = V(x) + W(x) thus keeps the interactions of the Riesz-Feller potential on R with the stripped potential V(x). This separation of the potentials also allows the interpretation that the Riesz-Feller derivative of the wave function evaluated at x outside the interval [a, b] is determined by the values of the wave function inside the support, where the stripped potential governs the equation embedded in the


confinement potential W. This fact is due to the nonlocal nature of the Riesz-Feller potential, which differs from the behavior of a local Laplacian. In other words, if we confine the stripped potential between the left- and right-sided parts of the Riesz-Feller potential, we will not lose any nonlocal information but are able to deal with the problem on a finite support. In addition, such a division of the integral domain allows us to introduce local properties for the function, which also divides the solution structure inside and outside the finite support. However, for practical applications we only consider the finite part of the solution. Another approach discussed in the literature assumes that the integral behavior outside the finite support vanishes but keeps the interaction of the Riesz-Feller potential on the whole real line [16]. Using the notation from above, we can state the problem as follows

$$-D_\alpha\; {}_{-\infty}^{\ \ a}D^{\alpha}_{x;\theta}\, v(x) = 0,$$
$$-D_\alpha\; {}_{a}^{\ b}D^{\alpha}_{x;\theta}\, v(x) + V(x)\,v(x) = \lambda\, v(x), \qquad (4.8)$$
$$-D_\alpha\; {}_{b}^{\ \infty}D^{\alpha}_{x;\theta}\, v(x) = 0,$$

with v(a) = v(b) = 0 and the condition that outside the interval no integral contribution to the wave function v(x) is generated. The set of three equations (4.8) has to be solved simultaneously and consistently with the boundary conditions on the finite support. The difference between the two approaches for the finite support problem is that in the first case the wave function need not vanish outside the finite support but vanishes at the boundaries and at infinity. The boundaries are transparent or semipermeable in this case. Such boundaries may be possible in view of the tunneling of particles. In the second case the boundary conditions are hard, so that the integral value outside the finite support vanishes. In both cases we are able to approximate the eigenvalues and eigenfunctions, which differ only slightly from each other. However, we restrict our presentation to the second case with hard boundary conditions, in order to connect to the current discussion in the literature [16]. The paper is organized as follows: in Sect. 4.2 we briefly present the approximation method, Sect. 4.3 discusses numerical examples, and in Sect. 4.4 we give some concluding remarks.

4.2 Approximation Method

The current section introduces and summarizes ideas for fractional operator approximations already available in the literature [19–22]. We use the properties of sinc functions, which allow a stable and accurate approximation based on Sinc points [23].


The following subsections introduce the basic ideas and concepts; for a detailed presentation we refer to [24, 25]. Note that in this section we will use parameters α and β which are not related to the fractional order of the differential operators used above. This will become evident in the discussion of the convergence of the method.

4.2.1 Sinc Methods

This section introduces the basic ideas of Sinc methods [26]. We will discuss only the main ideas, as a collection of recipes to set up a Sinc approximation. We omit most of the proofs of the different important theorems because these proofs are available in the literature [24, 25, 27, 28]. The following subsections collect information on the basic mathematical functions used in Sinc approximation. We introduce Sinc methods to represent indefinite integrations and convolution integrals. These types of integrals are essential for representing the fractional operators of differentiation and integration [25]. To start with, we first introduce some definitions and theorems allowing us to specify the space of functions, domains, and arcs for a Sinc approximation.

Definition 4.2.1 (Domain and Conditions) Let D be a simply connected domain in the complex plane, z ∈ C, having a boundary ∂D. Let a and b denote two distinct points of ∂D and φ denote a conformal map of D onto D_d, where D_d = {z ∈ C : |ℑ(z)| < d}, such that φ(a) = −∞ and φ(b) = ∞. Let ψ = φ⁻¹ denote the inverse conformal map, and let Γ be an arc defined by Γ = {z ∈ C : z = ψ(x), x ∈ R}. Given φ, ψ, and a positive number h, let us set z_k = ψ(kh), k ∈ Z, to be the Sinc points, and let us also define ρ(z) = e^{φ(z)}. Note that the Sinc points are an optimal choice of approximation points, in the sense of Lebesgue measures, for Sinc approximations [23].

Definition 4.2.2 (Function Space) Let d ∈ (0, π), and let the domains D and D_d be given as in Definition 4.2.1. If d′ is a number such that d′ > d, and if the function φ provides a conformal map of D′ onto D_{d′}, then D ⊂ D′. Let μ and γ denote positive numbers, and let L_{μ,γ}(D) denote the family of analytic functions u ∈ Hol(D) for which there exists a positive constant c₁ such that, for all z ∈ D,

$$|u(z)| \le c_1\, \frac{|\rho(z)|^{\mu}}{(1+|\rho(z)|)^{\mu+\gamma}}. \qquad (4.9)$$

Now let the positive numbers μ and γ belong to (0, 1], and let M_{μ,γ}(D) denote the family of all functions g ∈ Hol(D) such that g(a) and g(b) are finite numbers, where g(a) = lim_{z→a} g(z) and g(b) = lim_{z→b} g(z), and such that u ∈ L_{μ,γ}(D), where

$$u(z) = g(z) - \frac{g(a) + \rho(z)\,g(b)}{1+\rho(z)}. \qquad (4.10)$$


The two definitions allow us to formulate the following theorem for Sinc approximations.

Theorem 4.2.3 (Sinc Approximation [27]) Let u ∈ L_{μ,γ}(D) for μ > 0 and γ > 0, take M = [γN/μ], where [x] denotes the greatest integer in x, and then set m = M + N + 1. If u ∈ M_{μ,γ}(D), and if h = (πd/(γN))^{1/2}, then there exists a positive constant c₂, independent of N, such that

$$\left| u(z) - \sum_{k=-M}^{N} u(z_k)\, w_k \right| \le c_2\, N^{1/2}\, e^{-(\pi d \gamma N)^{1/2}}, \qquad (4.11)$$

with w_k the base functions (see Eq. (4.14)). The proof of this theorem is given in [27]. Note that the choice h = (πd/(γN))^{1/2} is close to optimal for an approximation in the space M_{μ,γ}(D), in the sense that the error bound in Theorem 4.2.3 cannot be appreciably improved regardless of the basis [27]. It is also optimal in the sense of the Lebesgue measure, achieving an optimal value smaller than that of Chebyshev approximations [23]. These definitions and the theorem directly allow the formulation of an algorithm for a Sinc approximation, which follows next. The basis of a Sinc approximation is defined as

$$\operatorname{Sinc}(z) = \frac{\sin(\pi z)}{\pi z}. \qquad (4.12)$$

The shifted Sinc is derived from relation (4.12) by translating the argument in integer steps of length h and applying the conformal map to the independent variable:

$$S(j,h)\circ\phi(z) = \operatorname{Sinc}\left([\phi(z) - jh]/h\right), \quad j = -M, \ldots, N. \qquad (4.13)$$

The discrete shifting allows us to cover the approximation interval (a, b) in a dense way, while the conformal map is used to map the interval of approximation from an infinite range of values to a finite one. Using the Sinc basis we are able to represent the basis functions as a piecewise defined function w_j(z) by

$$w_j(z) = \begin{cases} \dfrac{1}{1+\rho(z)} - \displaystyle\sum_{k=-M+1}^{N} \dfrac{1}{1+e^{kh}}\, S(k,h)\circ\phi(z) & j = -M, \\[6pt] S(j,h)\circ\phi(z) & j = -M+1, \ldots, N-1, \\[6pt] \dfrac{\rho(z)}{1+\rho(z)} - \displaystyle\sum_{k=-M}^{N-1} \dfrac{e^{kh}}{1+e^{kh}}\, S(k,h)\circ\phi(z) & j = N. \end{cases} \qquad (4.14)$$

This form of the Sinc basis is chosen so as to satisfy the interpolation conditions at the boundaries. The basis functions defined in (4.14) suffice for purposes of uniform-norm approximation over (a, b).


This notation allows us to define a row vector V_m(S) of basis functions

$$V_m(S) = (w_{-M}, \ldots, w_N), \qquad (4.15)$$

with w_j defined as in (4.14). For a given vector V_m(u) = (u_{-M}, \ldots, u_N)^T we now introduce the dot product as an approximation of the function u(z) by

$$u(z) \approx V_m(S)\cdot V_m(u) = \sum_{k=-M}^{N} u_k\, w_k. \qquad (4.16)$$

Based on this notation, we will introduce in the next few subsections the different integrals we need [25].
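As a concrete illustration of the approximation (4.16), the following sketch (our own, not from the chapter; the interval (0,1), the map φ(x) = log(x/(1−x)), and the test function are our choices) interpolates a function vanishing at the boundaries, so that only the interior basis terms S(k,h)∘φ of (4.14) are needed:

```python
import numpy as np

# Sinc approximation on (0,1) with phi(x) = log(x/(1-x)), psi(w) = 1/(1+e^{-w}).
# For u with u(0) = u(1) = 0 the boundary terms of the basis (4.14) vanish.
M = N = 32
h = np.pi / np.sqrt(2 * N)                 # h ~ (pi d/(gamma N))^(1/2)
k = np.arange(-M, N + 1)
zk = 1.0 / (1.0 + np.exp(-k * h))          # Sinc points psi(kh)

u = lambda x: x * (1.0 - x)                # hypothetical test function
phi = lambda x: np.log(x / (1.0 - x))

x = np.linspace(0.01, 0.99, 197)           # off-grid evaluation points
# np.sinc(t) = sin(pi t)/(pi t) matches the definition (4.12)
S = np.sinc((phi(x)[:, None] - k[None, :] * h) / h)
approx = S @ u(zk)
err = np.max(np.abs(approx - u(x)))
print(err)                                 # exponentially small, as in (4.11)
```

The observed error decays like N^{1/2} exp(−(πdγN)^{1/2}), consistent with Theorem 4.2.3.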

4.2.2 Discretization Formula

The errors of approximating the eigenvalues of the SL problem were introduced in [29, 30], based on conformal mappings which are also used to symmetrize the SL problem. The authors in [29, 30] derive an error estimation resulting from a Sinc collocation method, delivering a dependency of order O(N^{3/2} exp(−cN^{1/2})) for some c and N → ∞, where m = 2N + 1 is the dimension of the resulting discrete eigenvalue system. The basis of this relation is the corresponding Sturm-Liouville problem given by

$$\mathcal{L}\,v(x) = -v''(x) + q(x)\,v(x) = \lambda\, p(x)\,v(x), \qquad (4.17)$$

with a < x < b and v(a) = v(b) = 0. Here, q(x) and p(x) are known functions and λ represents the eigenvalues of the problem. The bounds (a, b) define either a finite, semi-infinite, or infinite interval. Thus our aim is not only related to regular SL problems but also includes singular ones, where one or both of the boundaries are infinite [31]. The SL equation can be transformed to an equivalent Schrödinger equation (SE) with a potential function defined by the functions q(x) and p(x) of Eq. (4.17). A very special—but practically very important—case is q(x) = V(x) and p(x) = 1; here (4.17) reduces to the Schrödinger equation (4.1)

$$-\frac{d^2 v(x)}{dx^2} + V(x)\,v(x) = \lambda\, v(x), \qquad (4.18)$$

with vanishing boundary conditions at x = a and x = b. If a and/or b are infinite we call the SL problem singular. An eigenvalue of the problem is a value λn for which a nontrivial solution vn , the eigenfunction, exists which satisfies the boundary


conditions. For a SE problem it is easy to show that the operator on the left-hand side of Eq. (4.18) is self-adjoint and hence the eigenvalues are real. Using the basis (4.14), any function v can be represented by a cardinal representation, which in its related Sinc approximation C_{M,N} for analytic functions is defined as

$$v(z) \approx C_{M,N}(v,h) = \sum_{k=-M}^{N} v_k\, S(k,h)\circ\phi(z). \qquad (4.19)$$

Inserting this representation into (4.18) and collocating the resulting expression at Sinc points, we obtain the following system of equations for the expansion coefficients

$$\mathcal{L}\,C_{M,N}(v,h) = \sum_{j=-M}^{N}\left[\frac{d^2}{dx^2}\,S(j,h)\circ\phi\,(x_k) + \tilde q(x_k)\,S(j,h)\circ\phi\,(x_k)\right]v_j = \mu \sum_{j=-M}^{N} \rho(\psi(x_k))\left(\psi'(x_k)\right)^2 S(j,h)\circ\phi\,(x_k)\, v_j, \qquad (4.20)$$

where x_k = kh for k = −M, . . . , N, v_j = v(x_j), and μ is the approximation of the eigenvalue λ. The function q̃(x) is related to the symmetry transform v(x) = w(φ(x))(φ′(x))^{−1/2} and, after rewriting the transformed SL problem in terms of the inverse conformal map ψ, results in

$$\tilde q(x) = -\sqrt{\psi'(x)}\;\frac{d}{dx}\left[\frac{1}{\psi'(x)}\,\frac{d}{dx}\sqrt{\psi'(x)}\right] + \left(\psi'(x)\right)^2 q(\psi(x)). \qquad (4.21)$$

To symmetrize the final matrix representation, Eggert et al. [30] use the transform v(x) = w(φ(x))(φ′(x))^{−1/2} and rewrite the resulting equation with respect to the inverse conformal map. This rewriting is possible if we use the relation φ(ψ(x)) = x and the differential relations related to this identity. Let us formulate their result in the following theorem.

Theorem 4.2.4 (Errors of Eigenvalues [30]) Let λ₀, w₀ be an eigenpair of the transformed differential equation (4.20). Assume that w₀ ∈ B(S_d) and there are positive constants μ, γ, and C so that

$$|w(x)| \le C \begin{cases} \exp(\mu x) & x \in (-\infty, 0], \\ \exp(-\gamma x) & x \in (0, \infty). \end{cases} \qquad (4.22)$$


If there is a constant δ > 0 so that |q̃| ≥ 1/δ, and the selections h = (πd/(μM))^{1/2} and N = [[(μ/γ)M]] are made, then there is an eigenvalue μ_p of the generalized eigenvalue problem satisfying

$$\left|\mu_p - \lambda_0\right| \le K\,\delta\,\sqrt{\lambda_0}\; M^{3/2} \exp\left(-(\pi d \mu M)^{1/2}\right). \qquad (4.23)$$

Note that the matrix eigenvalue problem has 2M + 1 eigenvalues μ_i, while the continuous problem has, in general, an infinite number of eigenvalues {λ_i}_{i=0}^{∞}. The inequality (4.23) holds for arbitrary λ₀. The decay rate O(e^{−cM^{1/2}}) becomes of order O(e^{−cM}) if the function w is an entire function [32]. The factor √λ₀ on the right side of relation (4.23) suggests that the lower end of the spectrum is more accurate than the larger spectral data. This behavior is a common observation in all the eigenvalue calculations which follow.

 

Proof For the proof of the theorem see the paper of Eggert et al. [30].
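To make Theorem 4.2.4 concrete, here is a small numerical sketch (our own illustration, not from the chapter) of Sinc collocation for Eq. (4.18) on R with V(x) = x², where φ is the identity, q̃ = q, ψ′ = 1, and the exact eigenvalues are λ_n = 2n + 1. It uses the standard sinc second-derivative collocation matrix:

```python
import numpy as np

# Sinc collocation for -v'' + x^2 v = lambda v on R; exact lambda_n = 2n + 1.
N = 40
h = np.pi / np.sqrt(2 * N)
k = np.arange(-N, N + 1)
x = k * h                                    # Sinc points (phi = identity on R)

# Collocated second derivative: (S(j,h))''(x_i) = delta2[i-j] / h^2,
# with delta2[0] = -pi^2/3 and delta2[m] = -2(-1)^m/m^2 for m != 0
d = k[:, None] - k[None, :]
off = -2.0 * np.power(-1.0, d) / np.where(d == 0, 1, d) ** 2
delta2 = np.where(d == 0, -np.pi**2 / 3.0, off)

H = -delta2 / h**2 + np.diag(x**2)           # discrete Hamiltonian
lam = np.sort(np.linalg.eigvalsh(H))
print(lam[:4])                               # close to [1, 3, 5, 7]
```

Since the oscillator eigenfunctions are entire, the fast O(e^{−cM}) regime mentioned after (4.23) applies, and the lowest eigenvalues converge rapidly as N grows.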

The generalization of Eq. (4.18) to its fractional form is discussed in the literature [6, 16, 33] and stated as

$$-D_\alpha\; {}_{-\infty}^{\ \ \infty}D^{\alpha}_{x;\theta}\, v(x) + V(x)\,v(x) = \lambda\, v(x), \qquad (4.24)$$

with ${}_{-\infty}^{\ \ \infty}D^{\alpha}_{x;\theta}$ the Riesz-Feller operator and V(x) a potential function. For potentials of the form V(x) ∼ |x|^β with 1 < β < 2, Laskin [6] derived the eigenvalues in the representation

$$\lambda_n = \left(\frac{2\pi\hbar\, D_\alpha^{1/\alpha}}{4B(1/\beta,\, 1/\alpha+1)}\right)^{\beta\alpha/(\alpha+\beta)} \left(n + \frac{1}{2}\right)^{\beta\alpha/(\alpha+\beta)} = \mathcal{A}(\beta,\alpha)\left(n + \frac{1}{2}\right)^{\frac{\beta\alpha}{\alpha+\beta}}, \qquad (4.25)$$

where n denotes the eigenvalue order. Since (4.24) includes a non-local operator in the form of a convolution integral, we first have to discuss how such integrals can be represented in terms of Sinc approximations. There is a special approach to evaluate the convolution integrals by using a Laplace transform. Lubich [34, 35] introduced this way of calculation by the following idea

$$p(x) = \int_a^x f(x-t)\,g(t)\,dt = \frac{1}{2\pi i}\int_C F_+(s) \int_a^x e^{st}\, g(x-t)\,dt\,ds, \qquad (4.26)$$


for which the inner integral solves the initial value problem y′ = sy + g with y(a) = 0. In a similar way we can define

$$q(x) = \int_x^b f(x-t)\,g(t)\,dt, \qquad (4.27)$$

so that

$$p(x) + q(x) = \int_a^x f(x-t)\,g(t)\,dt + \int_x^b f(x-t)\,g(t)\,dt = \int_a^b f(x-t)\,g(t)\,dt. \qquad (4.28)$$

We assume that the Laplace transform (Stenger-Laplace transform)

$$F_+(s) = \int_E f(t)\,e^{-t/s}\,dt, \qquad (4.29)$$

with E any subset of R such that E ⊇ (0, b − a), exists for all s ∈ Ω⁺ = {s ∈ C : ℜ(s) > 0}. The indefinite integrals in (4.26) and (4.27) can be approximated by Sinc integration as defined in [27]. For collocating an indefinite integral, and for obtaining explicit approximations of the function (J u)(x) defined by

$$(J u)(x) = \int_a^x u(t)\,dt \quad\text{with } x \in (a,b), \qquad (4.30)$$

we use the following basic relations [27]. Let Sinc(x) be given by (4.12), and let e_k be defined using the integral σ_k as

$$\sigma_k = \int_0^k \operatorname{Sinc}(x)\,dx = \frac{1}{\pi}\,\operatorname{Si}(\pi k), \qquad (4.31)$$

with Si(x) the sine integral. This puts us into position to write

$$e_k = \frac{1}{2} + \sigma_k, \quad k \in \mathbb{Z}. \qquad (4.32)$$

Let M and N be positive integers, set m = M + N + 1, and for a given function u defined on (a, b), define a diagonal matrix D(u) by D(u) = diag[u(z_{−M}), . . . , u(z_N)]. Let I^{(−1)} be a square Toeplitz matrix of order m having e_{i−j} as its (i, j)-th entry, i, j = −M, . . . , N:

$$\left[I^{(-1)}\right]_{i,j} = e_{i-j} \quad\text{with } i, j = -M, \ldots, N. \qquad (4.33)$$
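For orientation, the transform (4.29) can be checked numerically for a concrete kernel. For the Riemann-Liouville kernel f(t) = t^{α−1}/Γ(α) (our choice of example; the chapter does not single out this kernel here), the definition with e^{−t/s} gives F₊(s) = s^α:

```python
import numpy as np
from math import gamma
from scipy.integrate import quad

# Stenger-Laplace transform F_+(s) = int_0^inf f(t) e^{-t/s} dt applied to
# the Riemann-Liouville kernel f(t) = t^(alpha-1)/Gamma(alpha); then
# F_+(s) = s^alpha, since int_0^inf t^(a-1) e^{-t/s} dt = Gamma(a) s^a.
alpha, s = 1.5, 2.0
F, _ = quad(lambda t: t ** (alpha - 1) / gamma(alpha) * np.exp(-t / s), 0, np.inf)
print(F, s ** alpha)
```

This is the algebraic fact that makes the transform route attractive for fractional operators: the convolution kernel turns into a power of s.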


Define square matrices A_m and B_m by

$$A_m = h\, I^{(-1)} D(1/\phi'), \qquad B_m = h \left(I^{(-1)}\right)^{T} D(1/\phi'), \qquad (4.34)$$

where the superscript "T" denotes the transpose. The collocated representation of the indefinite integral is thus given by

$$J_m u = V_m(S)\cdot A_m\cdot V_m(u) = h\, V_m(S)\cdot I^{(-1)}\cdot D(1/\phi')\cdot V_m(u). \qquad (4.35)$$

These are collocated representations of the indefinite integrals defined in (4.30) (for details see [24]). The eigenvalues of A_m and B_m all lie in the right half plane, which was a 20-year-old conjecture of Frank Stenger; this conjecture was recently proved by Han and Xu [36]. In the notation introduced above we get for p and q

$$p = F_+(J)\,g \approx F_+(J_m)\,g, \qquad (4.36)$$

and

$$q = F_+(J')\,g \approx F_+(J'_m)\,g, \qquad (4.37)$$

which are accurate approximations, at least for g in a certain space [24]. The procedure to calculate the convolution integrals is now as follows. The collocated integrals are J_m = V_m(S)·A_m·V_m and J′_m = V_m(S)·B_m·V_m, upon diagonalization of A_m and B_m in the form

$$A_m = X_m\,\operatorname{diag}\!\left[s_{m,-M}, \ldots, s_{m,N}\right] X_m^{-1}, \qquad (4.38)$$

$$B_m = Y_m\,\operatorname{diag}\!\left[s_{m,-M}, \ldots, s_{m,N}\right] Y_m^{-1}, \qquad (4.39)$$

with Σ = diag[s_{−M}, . . . , s_N] the eigenvalues arranged in a diagonal matrix for each of the matrices A_m and B_m. Then the Stenger-Laplace transform (4.29) delivers the square matrices F_+(A_m) and F_+(B_m) defined via the equations

$$F_+(A_m) = X_m\,\operatorname{diag}\!\left[F_+(s_{m,-M}), \ldots, F_+(s_{m,N})\right] X_m^{-1} = X_m\, F_+(\Sigma)\, X_m^{-1}, \qquad (4.40)$$

$$F_+(B_m) = Y_m\,\operatorname{diag}\!\left[F_+(s_{m,-M}), \ldots, F_+(s_{m,N})\right] Y_m^{-1} = Y_m\, F_+(\Sigma)\, Y_m^{-1}. \qquad (4.41)$$
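The step (4.38)–(4.41) is an ordinary matrix-function evaluation via eigendecomposition. A generic sketch of the pattern F₊(A) = X F₊(Σ) X⁻¹ (our illustration; F₊ = exp serves as a stand-in transform and the test matrix is random, not a Sinc integration matrix):

```python
import numpy as np
from scipy.linalg import expm

# Evaluate F_+(A) = X F_+(Sigma) X^{-1} by diagonalization, F_+ = exp here.
rng = np.random.default_rng(1)
A = rng.standard_normal((6, 6))

s, X = np.linalg.eig(A)                      # A = X diag(s) X^{-1}
FA = (X * np.exp(s)) @ np.linalg.inv(X)      # columns of X scaled by F_+(s_k)
err = np.max(np.abs(FA.real - expm(A)))
print(err)                                   # tiny; matches scipy's expm
```

In the Sinc setting, F₊ is evaluated only at the m scalar eigenvalues s_{m,k}, so the cost of applying the transform to the whole matrix is one diagonalization.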


We can get the approximations of (4.36) and (4.37) by

$$F_+(J)\,g \approx F_+(J_m)\,g = V_m(S)\cdot F_+(A_m)\, V_m(g) = V_m(S)\cdot X_m F_+(\Sigma)\, X_m^{-1}\cdot V_m(g), \qquad (4.42)$$

$$F_+(J')\,g \approx F_+(J'_m)\,g = V_m(S)\cdot F_+(B_m)\, V_m(g) = V_m(S)\cdot Y_m F_+(\Sigma)\, Y_m^{-1}\cdot V_m(g). \qquad (4.43)$$

These two formulas deliver a finite approximation of the convolution integrals p and q. The convergence of the method is exponential, as was proved in [27].
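Before applying F₊, the underlying indefinite integration matrix A_m = h I^{(−1)} D(1/φ′) from (4.34) can be sanity-checked directly at the Sinc points. The following sketch (our illustration; the interval (0,1), the map φ(x) = log(x/(1−x)), and the integrand are our choices) compares it against a known integral:

```python
import numpy as np
from scipy.special import sici

# Indefinite Sinc integration on (0,1) with phi(x) = log(x/(1-x)).
N = 32
h = np.pi / np.sqrt(2 * N)
k = np.arange(-N, N + 1)
z = 1.0 / (1.0 + np.exp(-k * h))             # Sinc points
dphi = 1.0 / (z * (1.0 - z))                 # phi'(z)

# Toeplitz matrix I^(-1): entries e_{i-j} = 1/2 + Si(pi(i-j))/pi (Si is odd)
d = k[:, None] - k[None, :]
e = 0.5 + np.sign(d) * sici(np.pi * np.abs(d))[0] / np.pi
Am = h * e / dphi[None, :]                   # A_m = h I^(-1) D(1/phi')

u = z * (1.0 - z)
approx = Am @ u                              # ~ int_0^{z_i} t(1-t) dt
exact = z**2 / 2 - z**3 / 3
err = np.max(np.abs(approx - exact))
print(err)                                   # exponentially small
```

The values Am @ u approximate (J u) at the Sinc points; the full function is recovered by combining them with the basis V_m(S) as in (4.35).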

4.2.3 Sinc Collocation of Fractional Sturm-Liouville Problems

Using the notation and expressions introduced in the previous section, we are now in a position to discretize Eq. (4.24). Setting the prefactor D_α = 1, we get

$$-\left(F_+(J) + F_+(J')\right)v + V(x)\,v = \lambda\, v \approx -c_+ F_+(A_m)\,V_m(v) - c_- F_+(B_m)\,V_m(v) + D(V_m(V))\,V_m(v) = \lambda\, I\, V_m(v). \qquad (4.44)$$

Thus the discrete version of (4.24) becomes

$$-c_+ F_+(A_m)_{-\infty}^{\infty}\,V_m(v) - c_- F_+(B_m)_{-\infty}^{\infty}\,V_m(v) + D(V_m(V))\,V_m(v) = \lambda\, I\, V_m(v); \qquad (4.45)$$

with F_+(A_m, B_m) = c_+ F_+(A_m) − c_- F_+(B_m), we write the discrete eigenvalue problem as

$$-F_+(A_m, B_m)_{-\infty}^{\infty} + D(V_m(V)) = \lambda\, I. \qquad (4.46)$$

Note that D(V_m(V)) represents a diagonal matrix and I a unit matrix of dimension m × m. For the finite support problems we apply the same collocation procedure by separating the different parts of the convolution integral. This results in the following representation

$$-c_+ F_+(A_m)_a^b\,V_m(v) - c_- F_+(B_m)_a^b\,V_m(v) + D(V(x))\,V_m(v)$$
$$-\,c_+ F_+(A_m)_{-\infty}^a\,V_m(v) - c_- F_+(B_m)_{-\infty}^a\,V_m(v)$$
$$-\,c_+ F_+(A_m)_b^{\infty}\,V_m(v) - c_- F_+(B_m)_b^{\infty}\,V_m(v) = \lambda\, V_m(v). \qquad (4.47)$$


Separating the confining part from the stripped part, we get

$$-c_+ F_+(A_m)_a^b - c_- F_+(B_m)_a^b + D(V(x)) - c_+ F_+(A_m)_{-\infty}^a - c_- F_+(B_m)_{-\infty}^a - c_+ F_+(A_m)_b^{\infty} - c_- F_+(B_m)_b^{\infty} = \lambda\, I, \qquad (4.48)$$

which finally can be written as

$$-F_+(A_m, B_m)_a^b + D(V(x)) - F_+(A_m, B_m)_{-\infty}^a - F_+(A_m, B_m)_b^{\infty} = \lambda\, I, \qquad (4.49)$$

$$-F_+(A_m, B_m)_a^b + D\!\left(V_{\mathrm{eff}}(x)\right) = \lambda\, I, \qquad (4.50)$$

with D(V_eff(x)) = D(V(x)) − F_+(A_m, B_m)_{−∞}^{a} − F_+(A_m, B_m)_{b}^{∞}. The condition

$$\det\left(-F_+(A_m, B_m)_a^b + D\!\left(V_{\mathrm{eff}}(x)\right) - \lambda\, I\right) = 0 \qquad (4.51)$$

will deliver the needed eigenvalues λ_n. To each λ_n there exists an eigenfunction v_n used in the approximations. Here, c_+ and c_- are factors independent of x, related to the Riesz-Feller operators (see Appendix). Solving for the different eigenvalues λ_n, we also find the expansion coefficients V_m^n(v) of the eigenfunctions for each eigenvalue n, allowing us to approximate the eigenfunction using the Sinc basis V_m(S) by

$$v_n(x) \approx V_m(S)\cdot V_m^n(v). \qquad (4.52)$$

These basis functions finally can be used to approximate any function u(x) ∈ L² as follows

$$u(x) \approx \sum_{k=0}^{n} a_k\, v_k(x), \qquad (4.53)$$

with the expansion coefficients given as

$$a_k = \int_a^b u(x)\, v_k(x)\,dx. \qquad (4.54)$$
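The expansion (4.53)–(4.54) is an ordinary eigenfunction projection. A minimal sketch of the pattern (ours; it uses the analytically known eigenfunctions √2 sin(nπx) of the infinite well on (0,1) in place of numerically computed v_n):

```python
import numpy as np

# Project u onto the eigenfunctions v_n(x) = sqrt(2) sin(n pi x) on (0,1)
m = 4000
x = (np.arange(m) + 0.5) / m                 # midpoint grid for the integrals
u = x * (1.0 - x)

recon = np.zeros_like(x)
for n in range(1, 51):
    vn = np.sqrt(2.0) * np.sin(n * np.pi * x)
    a_n = np.mean(u * vn)                    # ~ int_0^1 u v_n dx, cf. (4.54)
    recon += a_n * vn                        # partial sum, cf. (4.53)

err = np.max(np.abs(recon - u))
print(err)                                   # small truncation error
```

With numerically computed eigenpairs the same two lines apply, except that the integrals (4.54) are evaluated with Sinc quadrature over (a, b).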

4.3 Numerical Results

In this section we examine central models in quantum mechanics like the harmonic oscillator, some potentials in relative coordinates useful for quarkonium models in QCD, the modified Pöschel-Teller potential useful in nonlinear generalizations, bi-harmonic quantum wells used in nano material models, and quantum


mechanical models on a finite support. The collection of models is a selection of standard model systems with a variety of applications in quantum mechanics. We will demonstrate that for these models the eigenvalues and the eigenfunctions are accessible for Lévy governed processes, and we expect that many more of the standard models can be solved with our approach of approximation.

4.3.1 Harmonic Oscillator

The first experiment we performed is related to the examination of the harmonic oscillator with the standard potential V(x) = x²/2. We chose this classical model due to its importance in the development of quantum mechanics. We will demonstrate that the harmonic oscillator also plays a prominent role in generalized Lévy quantum mechanics. The wave function v(x) is determined on R satisfying the boundary conditions v(±∞) = 0. Note that in our approximation there is no need to approximate ±∞ by a large numeric value; thus the computed eigenvalues and eigenfunctions are based on the whole real line R. The computations were carried out for a fixed number of Sinc points N = 96 to reach an accurate eigenvalue for α → 2 (see Table 4.1). We first checked the convergence of the lower eigenvalues to a stable value and observed that we need at least N = 60 Sinc points to get convergence to an asymptotic eigenvalue, which is always positive and real if the skewness parameter θ = 0. The results of these computations are collected in Fig. 4.1. The figure displays the four lowest eigenvalues of these computations for different fractional orders α (α-value on top of the plots). Our observation for the first six eigenvalues (four of them are shown in the graph) is that they are reproducible and converge to a fixed value if the number of Sinc points is sufficiently large. Moreover, as we approach the Lévy index α → 2, we are able to reproduce the classical eigenvalues λ_n = E_n/(ħω) = n + 1/2 for a harmonic oscillator (see Table 4.1). Knowing that the eigenvalues converge to a fixed value allowed us to vary the fractional order in the SL problem to get the first six smallest real eigenvalues for the harmonic oscillator. The variation of the six smallest eigenvalues with α is

Table 4.1 First four normalized eigenvalues λ_n ≈ (n + 1/2) for α = 1.98. The number of Sinc points is N = 92. Numbers are truncated to 7 digits

n    λ_n
0    0.496556
1    1.484733
2    2.467141
3    3.451038


Fig. 4.1 Convergence to a stable value of the first four eigenvalues λ_n as a function of the number of Sinc points N. The fractional orders α are given on top of the graphs. For numbers of Sinc points larger than N = 60 the eigenvalues are stable

shown in Fig. 4.4. The dependence of the eigenvalues λ_n follows a relation derived by Laskin in 2002 [6], given by

$$\lambda_n = \left(\frac{2\pi\hbar\, D_\alpha^{1/\alpha}}{4B(1/2,\, 1/\alpha+1)}\right)^{2\alpha/(\alpha+2)} \left(n + \frac{1}{2}\right)^{2\alpha/(\alpha+2)} = \mathcal{A}(\alpha)\left(n + \frac{1}{2}\right)^{2\alpha/(\alpha+2)}, \qquad (4.55)$$

where $B(a,b) = \int_0^1 x^{a-1}(1-x)^{b-1}\,dx$ is Euler's Beta integral and A(α) is the shape function of the eigenvalues, depending essentially on the fractional order α. The shape function also depends parametrically on fundamental quantities like the Bohr radius a₀, the elementary charge e, the atomic number Z, and the reduced Planck constant ħ in the frame of the used WKB approximation. The functional relation between the shape function and the fractional order α is represented as a power relation which can be derived from [6] in detail as follows

$$\mathcal{A}(\alpha) = \left(\frac{\alpha^{1/\alpha}\left(a_0^{-1+\alpha}\, e^2\, Z\, \hbar^{-\alpha}\right)^{1/\alpha}}{B(1/2,\, 1/\alpha+1)}\right)^{\frac{2\alpha}{2+\alpha}}. \qquad (4.56)$$
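Two limiting checks on (4.55) are easy to automate (our sketch, not from the chapter): the exponent 2α/(2+α) reduces to 1 at α = 2, recovering the equidistant spacing λ_n ∝ (n + 1/2), and the Beta factor B(1/2, 1/α + 1) reduces to B(1/2, 3/2) = π/2 there:

```python
import numpy as np
from scipy.special import beta

p = lambda a: 2 * a / (2 + a)        # exponent in (4.55)
print(p(2.0))                        # 1.0 -> classical (n + 1/2) spacing
print(beta(0.5, 1.5), np.pi / 2)     # B(1/2, 3/2) = pi/2 at alpha = 2
```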


Fig. 4.2 Universal shape function A(α) of the eigenvalues extracted from the numerical values of λ_n. For n ≥ 1 the shape function is unique, while for n = 0 the shape function is smaller than the function for higher quantum numbers. Right panel: eigenvalue part (n + 1/2)^{2α/(2+α)} as a function of α. The quantum numbers n = 0, 1, 2 are shown from bottom to top

However, for practical applications we used the distribution of eigenvalues and determined a Hermite interpolation to get the shape structure in a simplified numeric way. This allows us to predict the eigenvalues as a continuous function of α, at least for the first six eigenvalues. The shape functions for the different eigenvalue orders are shown in Fig. 4.2. The function is unique for eigenvalue orders n ≥ 1, while for n = 0 there is a deviation from this universality. This behavior is expected because the eigenvalue function for the ground state is a continuously decaying function in α, while for the higher states the function is continuously increasing (see Fig. 4.2). Thus for the ground state we expect a different shape function A(α) than for higher quantum states (see Fig. 4.2). According to the estimation of (4.23), the error for eigenvalues follows the relation

$$E_N = \left|\mathcal{A}(\alpha)\left(n+\frac{1}{2}\right)^{\frac{2\alpha}{2+\alpha}} - \lambda_n\right| \sim N^{3/2}\, e^{-cN^{1/2}}. \qquad (4.57)$$

We examined this relation numerically and found that, if we normalize the eigenvalues by the shape function, we can use the following relation to check the error decay for the eigenvalues λ_n

$$\left|\mathcal{A}(\alpha)\right|\left|\left(n+\frac{1}{2}\right)^{\frac{2\alpha}{2+\alpha}} - \frac{1}{\mathcal{A}(\alpha)}\,\lambda_n\right| \lesssim N^{3/2}\, e^{-cN^{1/2}}, \qquad (4.58)$$

which is

$$\left|\left(n+\frac{1}{2}\right)^{\frac{2\alpha}{2+\alpha}} - \frac{1}{\mathcal{A}(\alpha)}\,\lambda_n\right| \lesssim \frac{1}{\mathcal{A}(\alpha)}\, N^{3/2}\, e^{-cN^{1/2}} \approx c_1\, N^{3/2}\, e^{-c_2 N^{1/2}}. \qquad (4.59)$$


Fig. 4.3 Error estimation for the quantum states with n = 0, 1 (top, bottom) for different values of α. The error decays as given in (4.57) for larger numbers of Sinc points N

The last relation was used to estimate the values of c₁ and c₂ in a least squares fit for the eigenvalues of the ground state λ₀ and the first quantum state λ₁. The results of these estimations are shown in Fig. 4.3. The estimated values for c₂ are always greater than one, so that the convergence to the eigenvalue occurs exponentially. For larger values of α we observe that the accuracy of the eigenvalues increases and reaches a level of 10⁻⁶, which is an acceptable error level. For smaller α values (top curve, dots) the error is larger because the resolution of the eigenvalues is poorer due to the close values of λ_n, as mentioned above. In Fig. 4.4 the first six eigenvalues are shown as a function of α. The numerically determined values show a maximum at a certain value of α which moves from left to right as the eigenvalue order is increased (bottom to top in Fig. 4.4). For each quantum order n an α exists where the energies (eigenvalues λ_n) become maximal, so that a maximal exchange in a quantum transition can be reached. The maxima of the eigenvalues are listed in Table 4.2 and are depicted in Fig. 4.4 as large dots.
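The fit of (4.59) is log-linear: taking logarithms gives log E − (3/2) log N = log c₁ − c₂ √N, a straight line in √N. A small sketch of the recovery (ours, with synthetic error data in place of the computed eigenvalue errors):

```python
import numpy as np

# Recover c1, c2 in E(N) = c1 N^(3/2) exp(-c2 sqrt(N)) by a log-linear fit
c1_true, c2_true = 0.8, 1.6
N = np.arange(20, 101, 10).astype(float)
E = c1_true * N**1.5 * np.exp(-c2_true * np.sqrt(N))   # synthetic error data

# log E - 1.5 log N = log c1 - c2 sqrt(N): linear least squares
A = np.column_stack([np.ones_like(N), -np.sqrt(N)])
coef, *_ = np.linalg.lstsq(A, np.log(E) - 1.5 * np.log(N), rcond=None)
print(np.exp(coef[0]), coef[1])                        # ~ (0.8, 1.6)
```

With measured errors the same fit yields the c₂ > 1 values reported in Fig. 4.3.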


Fig. 4.4 Variation of eigenvalues λn for different fractional orders α for a harmonic oscillator. The quantum number n starts at n = 0 and ends at n = 5 from bottom to top. The small dots are the computed eigenvalues for a specific fractional order α. The solid lines are Hermite interpolation functions. The larger dots represent the maxima of the eigenvalues (see Table 4.2 for numeric values) Table 4.2 Maxima of eigenvalues. The number of Sinc points N = 92. Numbers are truncated to 6 digits

n    α         λ_n
0    0.6769    3.47105
1    1.0213    7.42513
2    1.1912    10.5797
3    1.2746    13.7338
4    1.3325    16.7244
5    1.3716    19.6713

To each eigenvalue λ_{n,α} there corresponds a wave function which depends, besides on the quantum number n, also on the fractional order α, and can thus be written as v(x) = v_{n,α}(x). Samples of eigenfunctions are shown in Fig. 4.5 for different fractional orders. Note that the amplitude of the probability distribution decreases up to a value α ≈ 1.24 and then increases again as α → 2. The change of the amplitude for the ground state is shown in Fig. 4.6. In addition to the amplitude, the width or lateral extension of the wave functions is affected by the fractional order (see Figs. 4.5 and 4.6). For small values of α the wave functions are centered around the origin with a small lateral extension; i.e., they are localized. The decay of the probability density |v_n|² is very rapid for such small α values. If α increases toward α ≈ 3/2, the extension of the wave function becomes nearly six times



Fig. 4.5 Samples of probability distributions for the first four eigenvalues at different fractional orders α. The probability distributions are localized and symmetric with respect to the origin. The ground state is a single-humped distribution, while the higher order states show characteristic variations with minima and maxima. Note the overall amplitude decreases with increasing fractional order α in the interval 0 < α ≲ 1.24 and increases again in the interval 1.24 ≲ α < 2

larger and shrinks by a factor of two as we approach α → 2. This behavior is observed for all eigenfunctions of the harmonic oscillator. We note that the broadening of the wave function is also observed for other potentials of the type V(x) ∼ |x|^β. The variation of the maximal amplitude can be easily examined for the ground state displayed in Fig. 4.6. The minimum of the maximal amplitude occurs at α = 1.2449 for the ground state. This decrease and increase of the amplitude means that the probability density necessarily must broaden, because the total amount is a conserved quantity. The spreading and subsequent re-localization is a characteristic behavior of the density occurring in each state and for different versions of potentials. In the two papers by Luchko et al. [15, 16], doubts about the validity of the eigenvalue relations of Laskin [6] and Jeng [11] are acknowledged. Figures 4.7 and 4.8 compare these eigenvalue relations with our numerical results. Figure 4.7 examines the special case α = 1, which was solved by Jeng [11] using a WKB approximation and the asymptotic representation of the Airy function, delivering the root distribution as eigenvalues for the harmonic oscillator (dashed line in Fig. 4.7). The solid line in Fig. 4.7 was gained as a least squares fit to Laskin's formula (4.25), keeping the amplitude and the exponent factor variable. The least squares fit to the eigenvalues using λ_n = a(n + 1/2)^b with

4 Lévy-Schrödinger Equation: Their Eigenvalues and Eigenfunctions Using. . .



Fig. 4.6 Ground state of the harmonic oscillator as a function of α (top left panel). The overall amplitude decreases with increasing fractional order in the interval 0 < α ≲ 1.24 and increases again for 1.24 ≲ α ≤ 2. Maximum value of the probability density of the ground state as a function of α (bottom panel). The minimum of the maximum value for the ground state occurs at α = 1.2449

Fig. 4.7 The dots represent the computed eigenvalues for α = 1. The solid line represents the least-squares fit to the computed eigenvalues with λn = a(n + 1/2)^b, where a = 5.519 and b = 0.6794. The value for b is close to the WKB value predicted by Laskin [6] and Jeng [11]. The dashed line uses λn = a(n + 1/2)^{2/3}, representing the root distribution of the asymptotic expansion of Airy's function [11]


Fig. 4.8 Increase of the eigenvalues as a function of the quantum number n. The eigenvalues follow the relation λn = a(n + 1/2)^{2α/(2+α)}, shown as dots. The prefactor a depends on α and is given on top of each graph. The solid line represents a least-squares fit in which only a was estimated. The fractional eigenvalue relation for λn follows from a WKB approximation [6, 11]
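The one-parameter fit used for Fig. 4.8 has a closed-form least-squares solution. A minimal sketch with synthetic data (the array `lam` and the helper name are our own illustration, not the chapter's code):

```python
import numpy as np

def fit_prefactor(lam, alpha):
    """Least-squares estimate of a in lam_n = a*(n + 1/2)**p with
    p = 2*alpha/(2 + alpha), the harmonic-oscillator case beta = 2."""
    n = np.arange(len(lam))
    p = 2.0 * alpha / (2.0 + alpha)
    f = (n + 0.5) ** p
    # minimizing sum((lam - a*f)**2) over a gives a = <lam, f>/<f, f>
    return np.dot(lam, f) / np.dot(f, f), p

# synthetic eigenvalues generated from the relation itself (illustration only)
alpha = 1.0
lam = 5.519 * (np.arange(8) + 0.5) ** (2.0 * alpha / (2.0 + alpha))
a, p = fit_prefactor(lam, alpha)
print(a, p)   # recovers a = 5.519 with exponent p = 2/3
```

For α = 1 the exponent 2α/(2 + α) equals 2/3, matching the Airy-root asymptotics quoted above.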

a = 5.519 and b = 0.6794 delivers numerical agreement with Jeng's result; he estimated the exponent by his asymptotic approach as b = 2/3, in agreement with the results derived by Laskin. The absolute error of our estimate is 0.012733, corresponding to a 2% relative error, which is acceptable. In Fig. 4.8 we show some results of the same approach for different α-values, using the same dependence of the eigenvalues as given by Laskin [6] but now fitting directly λn = a(n + 1/2)^{2α/(2+α)}, which uses only one parameter, the shape parameter a. It turns out that the least-squares fits are excellent for nearly all α-values, except for very small values α < 0.1. The reason for this is that we did not use a sufficiently large number of Sinc points N to resolve the stable distribution of eigenvalues for this range of α. For α-values less than 1/10 the eigenvalues of the different quantum numbers are very close to each other and cannot be resolved in a reliable way with the number of Sinc points used. This refinement remains to be resolved in an additional approach using high-precision computing with a large number of Sinc points. The conclusion from these numerical examinations is that the WKB approximations used by Laskin and by Jeng et al. [6, 11] are highly accurate in their description of the eigenvalues and are reproducible by Sinc approximations.

In another computation we examined the structural changes of the density functions and the wave functions as the fractional parameter α is varied. The results are shown in Figs. 4.9 and 4.10. Our observation is that the density |un|^2 is initially localized for α ≈ 0, changes its width up to a certain value of α, and becomes narrower again as α approaches the value α = 2. The eigenfunctions, on the other hand, show a pattern like a tiger fur: there exists a non-equidistant pattern of positive and negative values occurring in a banded pattern if maxima and minima are plotted

Fig. 4.9 Structural change of the density functions |un|^2 of the first four eigenstates varying α


Fig. 4.10 Structural change of the eigenfunctions un of the first four eigenstates varying α. The top four graphs use a resolution Δα = 0.05 while the bottom four were generated with an α step of length Δα = 0.01. The increase of the resolution in α uncovers a detailed structure of amplitude switching from positive (bright) to negative (dark) values. The flipping of amplitudes seems to generate a banded self-similar pattern


as magnitude values in a contour plot. The pattern is shown in Fig. 4.10 for two resolutions Δα. For a physical application the sign change of the eigenfunctions does not have any impact, because the physically important quantities are based on the densities of the eigenfunctions. From a mathematical point of view the pattern is quite interesting due to this tiger-stripe pattern, which shows up in an irregular way. The cumulative counts of positive and negative maxima for n = 0 and n = 1 are shown in Fig. 4.11. The structure of the increase of the counts resembles a devil's staircase.

Another important quantity in quantum mechanics is the transition matrix of a two-level system. If these quantities are known, we are able to assess the emission or absorption of quanta. Moreover, we now have the Lévy parameter α at hand, allowing us to tune in on a specific transition if we are able to control the fractional order in a physical sense. The transition matrix between the different eigenstates now becomes a function of α, which allows us to use the fractional order to find an optimal transition, i.e., either a maximal or a minimal transition probability, by tuning α. In Fig. 4.12 we plot the variation of different transition matrix elements (4.60) to demonstrate their variation with α:

⟨i|x|k⟩_α = ∫_{−∞}^{+∞} v*_{i,α}(x) x v_{k,α}(x) dx with i, k = 0, 1, 2, 3, 4, . . .   (4.60)

We interpret these transition values as follows. If a matrix element is positive, energy may be released from one state to another. If the value is negative, absorption of quantized energy may be possible, with a rate depending on the magnitude of the matrix element. Since the matrix elements now depend on the fractional order, we are able to use this parameter to tune or enforce a certain type of emission or absorption of quanta (see Figs. 4.12 and 4.13). Concerning the behavior of the eigenvalues λn with a finite skewness parameter θ, we found numerical evidence that all the eigenvalues become complex for 0 < α < 2. The real and imaginary parts of the eigenvalues are shown in Fig. 4.14, where the top panel shows the real part and the bottom panel the imaginary part of λn for different values of α. Although the imaginary parts are small for the first few quantum numbers, this indicates that the eigenvalue problem for θ ≠ 0 becomes non-Hermitian and thus falls outside the framework of standard quantum mechanics. This behavior was conjectured in [16]. However, if we examine the behavior for α → 2, the imaginary parts of the eigenvalues vanish. Thus it is apparent that standard quantum mechanics is consistently incorporated in the Lévy-based quantum mechanics. The conclusion at this point is that a more detailed examination is needed for non-vanishing skewness parameters to get a physical interpretation of the spectral properties.
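At α = 2 the eigenfunctions vn,α reduce to the ordinary harmonic-oscillator states, so a matrix element of the form (4.60) can be checked by direct quadrature. A minimal sketch (with ℏ = m = ω = 1; the grid and function names are our own choices, not the chapter's code):

```python
import numpy as np

# harmonic-oscillator eigenfunctions: the alpha = 2 limit of v_{n,alpha}
x = np.linspace(-10.0, 10.0, 4001)
dx = x[1] - x[0]
psi0 = np.pi**(-0.25) * np.exp(-x**2 / 2)   # ground state
psi1 = np.sqrt(2.0) * x * psi0              # first excited state

def element(psi_i, psi_k):
    """Dipole matrix element <i|x|k>, cf. (4.60), by simple quadrature."""
    return np.sum(psi_i * x * psi_k) * dx

print(element(psi0, psi1))  # analytic value 1/sqrt(2) ≈ 0.70711
print(element(psi0, psi0))  # even-even element vanishes by symmetry
```

The vanishing even-even element illustrates the selection-rule behavior visible in Fig. 4.13.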


Fig. 4.11 Cumulative counts of positive (Ci+) and negative (Ci−) maxima and minima for the first two eigenfunctions n = 0, 1. The steps of increase are irregular and resemble a devil's staircase


Fig. 4.12 Transition matrix element for a two-level system ⟨i| and |k⟩. Shown is the dipole moment ⟨i|x|k⟩ for different values of α. We used the first four eigenfunctions to compute the corresponding transition matrix element for each α

Fig. 4.13 Transition matrix elements for different combinations of ⟨i| and |k⟩ states for the first four eigenfunctions. The transition rates for even-to-odd or odd-to-even transitions are different from zero, while the odd-odd and even-even transitions are almost zero. However, there are specific α values where this transition avoidance is broken. The transition rates shown represent the upper triangular part of the transition matrices, which are symmetric in their structure


Fig. 4.14 Real and imaginary parts of the first six eigenvalues for the harmonic oscillator with θ = 0.8 min(α, 2 − α) in the Riesz-Feller potential

4.3.2 Quarkonium Models

Following Laskin in his proposal for a quarkonium model [12], we assume that a quark-antiquark (qq̄) bound system in a non-relativistic potential can be modeled by a relation of the form

V(|r_i − r_j|) = q_i q_j |r_i − r_j|^β,   (4.61)

where q_i and q_j are the color charges of the i-th and j-th quarks, respectively, and the power β > 0. Using a single relative coordinate, this potential reduces to the simple model V(x) = q^2 |x|^β, where x denotes the distance between the two quarks. To keep the system thermodynamically stable the power β should be taken from the interval 0 < β < 2 [12]. However, we will also examine a case where β exceeds this


Fig. 4.15 Different potential versions V(x) ∼ |x|^β for QCD quarkonium models. The horizontal lines represent the energy levels for α = 1.499

physically reasonable bound. Examples of different potential versions are shown in Fig. 4.15; the case of panel (c) was already examined in the previous section. In Fig. 4.15 the horizontal lines in the graphs represent the first four energy levels (eigenvalues) of each model. Specifically, we examine models with β = {1/2, 1, 2, 5}, denoted in the following by (a), (b), (c), and (d), respectively. The potentials for these cases form a cusp, a triangular well, a parabolic well, and a deep well model (see Fig. 4.15). The type of potential (4.61) is in line with the QCD requirements that at short distances the quarks and gluons appear to be weakly coupled, while at large distances the effective coupling becomes strong, resulting in the phenomenon of quark confinement. The first four eigenfunctions for a specific Lévy index α = 1.499 are shown in Fig. 4.16 for each of the sample models from above. The colors used in the graphs of the probability densities correspond to the magnitudes of the eigenvalues, indicated as eigenvalue levels in Fig. 4.15 using the same colors. The panels (a), (b), (c), and (d) of Fig. 4.16 are related to the eigenvalues given in Fig. 4.15 for each panel, respectively. Taking into account the scales on the x-axis, we observe that the four ground states are localized in the center of the potential. However, the localization is quite different for the different models. The largest extension is observed in model (a), while the smallest width of the density is observed in model (d). The amplitudes and the structure of the different states are similar to each other, with some small variations in the amplitude. The width of the localization is the most prominent property distinguishing the models from each other. The models (b) and (c) show nearly the same width, but due to the potential structure, triangular in (b) and parabolic in (c), the ground state in (c) is smaller in width than in (b). This behavior


Fig. 4.16 The first four density functions for: (a) the potential V(x) = |x|^{1/2}, (b) the potential V(x) = |x|, (c) the potential V(x) = x^2/2, and (d) the potential V(x) = |x|^5. The Lévy parameter was α = 1.499. Note that the x scales are different for each potential model; i.e., the localisation of the density varies from model to model. We observe again a stronger localization of the densities for smaller values of α, while for α → 2 a concentration of the densities sets in again; for α values approaching α ≈ 1.2 a spreading is observed

is also true for the higher states. This property is shown for a fixed Lévy index α in Fig. 4.16. We also examined the variation of the energy (eigenvalues) as the Lévy index is varied. The results are shown for each model in Fig. 4.17. The observation is that for each of the six eigenstates there exists a Lévy index α where the energy becomes maximal (dots in the graph). This is the case for all eigenstates and for all models. We observe that these maxima are located toward the left side of the interval 0 < α < 2 if β < 1 and move to the right side as β increases (β > 1). It is also clear that the magnitude of the eigenvalues in general increases if β is increased. Since the eigenvalue of a specific state increases or decreases as α is varied, we expect that the width of the corresponding density also varies. Such a variation of the width of the density function is shown in Fig. 4.18 for model (b), where α is varied. The originally tightly localized density functions (α ≈ 0) broaden as α is increased up to a critical value of α. If this critical α is exceeded, the width of the density functions starts to shrink to the final level at α = 2. This behavior is shown for model (b) in Fig. 4.18. However, it is a general observation in all potentials examined that such a critical α exists and that a broadening followed by a shrinking occurs when α is varied from a lower to a higher value.
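For model (b) the α = 2 limit is the ordinary Schrödinger problem −u'' + |x| u = λu, whose eigenvalues are given by zeros of the Airy function Ai and its derivative Ai'. A plain finite-difference discretization (our own sanity check, not the Sinc scheme used in the chapter; grid parameters are arbitrary choices) reproduces them:

```python
import numpy as np

# alpha = 2 limit of model (b): -u'' + |x| u = lambda u on a large interval,
# discretized with second-order central differences
N, L = 1500, 10.0
x = np.linspace(-L, L, N)
h = x[1] - x[0]
H = (np.diag(2.0 / h**2 + np.abs(x))
     - np.diag(np.full(N - 1, 1.0 / h**2), 1)
     - np.diag(np.full(N - 1, 1.0 / h**2), -1))
lam = np.linalg.eigvalsh(H)[:2]
print(lam)  # lowest eigenvalues: first zeros of Ai'(-x) and Ai(-x)
```

The computed values approach 1.01879 and 2.33811, the first zeros of Ai' and Ai, consistent with the Airy-root eigenvalue distribution cited for the WKB analysis [11].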


Fig. 4.17 Distribution of eigenvalues as a function of α for four QCD quarkonium models. (a) β = 1/2, (b) β = 1, (c) β = 2, and (d) β = 5. Shown are the first six eigenvalues depending on α

We also examined properties of the eigenvalues for all the models (a)–(d) and found some common behavior; we therefore select only model (b) to discuss these properties in detail. According to formula (4.25) the eigenvalues should satisfy a certain relation depending on the Lévy index α and the exponent β of the potential under discussion. We examined the computed eigenvalues by using relation (4.25) for all models. The outcome of the calculations is that the derived formula is very accurate for all models examined and for nearly all values of α. Deviations exist for small values α < 1/10, which is a numerical problem already mentioned above. An example of this examination is the one-parameter fitting of the computed eigenvalues using (4.25) to determine the screening coefficient a in the eigenvalue relation λn = a(n + 1/2)^{βα/(β+α)}. Results are shown in Fig. 4.19 for β = 1, demonstrating the agreement between the theoretical prediction and the numerical approximation. It is quite interesting that a single-parameter least-squares fit for the screening parameter a delivers an accurate agreement. We note that similar results with the same accuracy were found for the three other models (not shown). Since the screening parameter a varies under the change of α, we examined the behavior of the shape function A(β, α) in (4.25) for the different models. The result is that for higher states there exists a unique function A(α) for each of the models. An example for β = 1 is shown in Fig. 4.20, left panel. Since the function (n + 1/2)^{βα/(β+α)} is increasing in α for β > 1 and n ≥ 1, while for n = 0 the function is decreasing in α (see Fig. 4.20, right panel), we will find two different


Fig. 4.18 The first four density functions for the potential V(x) = |x| for different values of α (see top of figures). We observe again a delocalization as α increases but a slight localization when α → 2

shape functions, one for the ground state and one for the higher quantum states. This structure is also observed in the other models. Another common property of the models is the reduction of the amplitude of the ground-state density function and its increase as α is increased further. This behavior is shown for β = 1 in Fig. 4.21 as an example. The location where the minimum value is reached varies from model to model and lies in the interval 1 < α < 3/2. This range is a rough estimate over the four examined models. The value seems to be related to the maximal width of the ground state, which occurs at different α values for the different models. Finally, we examined the first and second moments of the quantum states. These quantities are shown in Fig. 4.22. A detailed examination reveals that for all models there exist selection rules allowing the transition between two independent states.


Fig. 4.19 Eigenvalue relation for different values of α (see top of graph). The eigenvalues follow a relation derived by Laskin [6] of the form λn = a(n + 1/2)^{α/(1+α)}. Here a single parameter a was estimated from the numerical data using a least-squares fit (values are given on top of the plots)

Fig. 4.20 The graph (left panel) shows the shape function A(α) of the eigenvalues extracted from the numerical values of λn. For n ≥ 1 the shape function is nearly unique, while for n = 0 the shape function is smaller than the function for higher quantum numbers. The reason is that (n + 1/2)^{α/(1+α)} is an increasing function of α for n ≥ 1 and a decreasing function for n = 0 (right panel). This difference causes a change in the shape function

4.3.3 Modified Pöschel-Teller Potential

The modified Pöschel-Teller (mPT) potential is widely used in chemistry to describe the vibrational properties of molecules. Originally, Pöschel and Teller used the potential to examine the anharmonic properties of exactly solvable quantum systems and to study the influence of anharmonicity on the solution [37]. We shall examine the mPT potential as an example with negative and positive eigenvalues to demonstrate the difference between bound and free quantum states in the Lévy-Schrödinger equation. The potential function for this model is given as V(x) = −b/cosh^2(ax)


Fig. 4.21 Change of the maximal amplitude of the ground state density as a function of α. We observe that the maximum takes a minimum value at α = 1.23

Fig. 4.22 Matrix elements ⟨i|x|k⟩ and ⟨i|x^2|k⟩ for two-state transitions as a function of α

with b the depth of the potential and a a scaling parameter determining the width of the potential well. Note that the mPT potential can be used as an analytic representation of a rectangular hole if a is chosen appropriately. We will not discuss this behavior here, but we know from our computations that the results are similar to those presented here. In our examinations we will mainly vary the potential depth and thus the number of captured eigenstates. A sample of the potential with some eigenvalues is shown in Fig. 4.23. The corresponding eigenfunctions are shown in the bottom panel of this figure (colors indicate the correspondence between eigenvalues and eigenfunctions). We observe that the eigenfunctions are localized for negative eigenvalues. We also see that the lowest eigenvalue is a single value, while the higher eigenvalues are given by close doublets. A doublet separates if α is varied. The largest separations occur in the range 1 < α < 1.9. However, an approach of the doublet eigenvalues is also observed when α approaches a certain value depending on the potential depth, but this only occurs for the third and fourth eigenvalue (see Fig. 4.24). Accordingly, we have only a single eigenfunction for the ground state and an even and odd pair of eigenfunctions for the higher quantum states. The distribution of eigenvalues is shown in Fig. 4.24. The distribution shows that at the limiting points α ≈ 0 and α → 2 smaller eigenvalues exist, while in between a maximum of the eigenvalues is present. Characteristic for the mPT potential is that the ground state is not degenerate and that for α → 2 a steep decline in the eigenvalues sets in. In addition, we observe that for larger potential depths the eigenvalues are different from zero. However, if the potential depth becomes smaller and smaller, some of the eigenvalues are captured at the top level of the potential, which is nearly zero. In the cases where this capturing sets


Fig. 4.23 First five negative eigenvalues and the corresponding densities of the eigenfunctions for α = 0.45. The eigenvalues are λn = {−25.6975, −18.3356, −17.3409, −4.5639, −3.5965} from bottom to top (see top panel)

in, the probability density nearly becomes zero; i.e., the eigenfunctions become very small. This capturing behavior is a function of the Lévy index and shows up first in the higher part of the α domain. The phenomenon allows one to remove the influence of some eigenstates on the physics, while they can be tuned in again for smaller or higher values of α.
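At α = 2 the mPT potential is exactly solvable: for V(x) = −s(s+1)/cosh^2(x) the bound-state energies are λn = −(s − n)^2. The following finite-difference sketch checks this for s = 2, i.e. b = 6 and a = 1 (our own check, not the chapter's Sinc discretization; grid parameters are arbitrary):

```python
import numpy as np

# alpha = 2 check: V(x) = -6/cosh(x)^2 has exact bound states -4 and -1
N, L = 1500, 10.0
x = np.linspace(-L, L, N)
h = x[1] - x[0]
V = -6.0 / np.cosh(x)**2
H = (np.diag(2.0 / h**2 + V)
     - np.diag(np.full(N - 1, 1.0 / h**2), 1)
     - np.diag(np.full(N - 1, 1.0 / h**2), -1))
bound = np.linalg.eigvalsh(H)[:2]
print(bound)  # approximately [-4, -1]
```

Only the negative eigenvalues correspond to bound states; the positive part of the spectrum obtained this way is an artifact of the finite box and represents the free states discussed above.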

4.3.4 Bistable Quantum Wells

Bistable quantum wells occur specifically in connection with the tunneling effect in quantum mechanics. Quantum tunneling is a microscopic phenomenon whereby a particle can penetrate, and in most cases pass through, a potential barrier. The


Fig. 4.24 Distribution of the eigenvalues for bound states as a function of α. The top row shows eigenvalues for potential depths b = 64 and b = 32, from left to right. The bottom row shows eigenvalues for b = 16 and b = 8, also from left to right. The maximum of the ground state moves from right to left as the depth is decreased. For λ1 and λ2 we observe a separation as well as a close approach of the eigenvalues as we vary α

maximum height of the barrier is assumed to be higher than the kinetic energy of the particle; therefore such a motion is not allowed in classical mechanics. In Fig. 4.25 we show the bistable potential with a central barrier which has its maximum at zero. In such a bistable potential configuration, negative energy values are related to a split particle located partially to the left and partially to the right of the barrier. Positive eigenvalues are bound states where the particle is centered in the well. The variation of the Lévy index 0 < α < 2 results in a variation of the eigenvalues, which is shown in Fig. 4.26 for two different potentials. The top panel uses a potential strength a = 8 and the bottom panel in Fig. 4.26 uses a = 4. The larger value of a results in deeper potential wells. The difference between the two potentials is that in the first case the first two quantum states have negative eigenvalues over the whole range of α, while in the second case some of the eigenvalues are positive. We observe in both cases that the eigenstates separate from each other as α is varied. For small values α ≲ 0.1 the states seem to be degenerate. The same behavior occurs for α close to two. In both potentials we examine four states. Two of them are even and the other two are odd functions. The even and odd eigenfunctions are energetically close to each other for α → 2. The observation is that the odd eigenfunctions are zero in the center of the potential for all values of α (see Fig. 4.27, left panel). The even eigenfunctions show a positive value of the probability density in the center of the potential. This probability density reaches a maximum, specifically for the ground state of the first model. In the second model the ground state is partially above the maximum of the barrier, and thus the particle moves in this state as a whole. Left and right wings with negative energies exist


Fig. 4.25 Bistable potential V(x) = x^4 − ax^2 with a = 8. The horizontal line shows the energy value λ0 where the probability in the center at x = 0 is largest for the ground state; i.e., part of the particle in the ground state is located left and right of the barrier, but there is a finite value in the center for α = 1.448. The first quantum state density |u1(x)|^2 is also shown. Since u1 is an odd function, the center value of the density vanishes. The shaded rectangle indicates the variation of the eigenvalues as α varies over 0.05 ≤ α < 2. The variation of the local density |u0(0)|^2 with α is shown in Fig. 4.27 (note that the amplitude of the density is scaled by a factor of 50)

where the second quantum state shows tunneling. The tunneling probabilities below the barrier are shown in Fig. 4.27. The left panel collects the local probability in the center of the potential. The right panel uses the classical turning points of the barrier to obtain the integral values for different α values. The local and the integrated density each show a maximum at a certain α value, and these values differ from each other: the local maximum is located at α ≈ 1.448, while the integrated one shows up at α ≈ 1.587. We also examined different variations of bistable potentials, such as Merzbacher's and Risken's potentials, delivering similar results for tunneling processes. The results demonstrate that for a variety of model potentials the quantum mechanical characteristics are accessible.
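The integrated tunneling weight used for the right panel of Fig. 4.27 can be sketched as follows: locate the classical turning points of the barrier V(x) = x^4 − 8x^2 at a given energy and integrate a density between them. The Gaussian density below is only an illustrative stand-in for |u0|^2, and the grid is our own choice:

```python
import numpy as np

def inner_turning_points(E, a=8.0):
    """Real roots of x**4 - a*x**2 = E that bracket the central barrier."""
    roots = np.roots([1.0, 0.0, -a, 0.0, -E])
    real = np.sort(roots[np.abs(roots.imag) < 1e-9].real)
    mid = len(real) // 2
    return real[mid - 1], real[mid]

# illustrative density: a Gaussian stand-in for the ground-state |u_0|^2
x = np.linspace(-4.0, 4.0, 8001)
dx = x[1] - x[0]
rho = np.exp(-x**2) / np.sqrt(np.pi)

xl, xr = inner_turning_points(-4.0)            # turning points at E = -4
weight = np.sum(rho[(x >= xl) & (x <= xr)]) * dx
print(xl, xr, weight)
```

For E = −4 the inner turning points come out at ±(√3 − 1), since x^4 − 8x^2 + 4 = 0 gives x^2 = 4 ± 2√3.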

4.3.5 Finite Quantum Well

As a finite interval example, we examined numerically the problem where on the interval x ∈ [−4, 4] the potential is given by

V(x) = x^2/200 if |x| < 4, and V(x) = 0 if |x| ≥ 4.   (4.62)


Fig. 4.26 Distribution of the eigenvalues for V(x) = x^4 − 8x^2 (top) and V(x) = x^4 − 4x^2 (bottom) as a function of α

The potential was chosen as a flat parabola at the bottom to avoid numerical problems in the determination of eigenvalues. The eigenfunctions for the first three quantum states are shown in Fig. 4.28. The graphs show that the boundary values are satisfied and that the structure of the density distribution delivers the expected behavior; i.e., a single peak for the ground state, a double peak for the first quantum state, etc. In Fig. 4.29 we show the dependence of the real eigenvalues on α. For the eigenvalue distribution as a function of α, we observe that the ground state is a single-valued state, while the first and second states are degenerate and separate over the α-interval. Compared with the infinite problems, the distribution shows a steep increase at the right end of the interval. As for the infinite examples, the distribution also shows a maximum, which decreases to small values as α → 0. The graphs in Fig. 4.28 show that the boundary conditions for the wave functions at the potential limits are satisfied. It also becomes apparent that the ground state is nearly stable above a


Fig. 4.27 Probability density |un(0)|^2 and the integral of the ground state on the domain D over all values of α for V(x) = x^4 − 8x^2. The left panel shows the central probability values for the four quantum states, for even and odd functions. The values for odd functions are zero, while the ground state and the second state assume finite values. The ground state shows a maximum at α = 1.448. The eigenvalues of the ground state are all negative (see Fig. 4.26, top panel). The second state also shows finite values, where the center values are related to positive energies. The values left and right of the indicated vertical bars belong to negative eigenvalues, where tunneling occurs. The right panel shows the integral value of the ground state density over the WKB intervals using the classical turning points of the barrier

Fig. 4.28 Probability densities for eigenstates of the potential (4.62) on a finite interval x ∈ [−4, 4]

value α > 1, while the other two states vary in their amplitude as α varies (see also Fig. 4.30). In Fig. 4.30 we plot the probability density as a function of α for the first three quantum states. It is apparent from the figures that for α < 1 there is a wavy structure in the density, whereas for 1 < α < 2 the probability density shows a smooth behavior. The wavy structure may indicate that the first moments of the Lévy process may not exist [12]. It is also obvious from the figure that the even states grow in their amplitudes for α > 1/2, while the odd state is completely present but varies in its


Fig. 4.29 The first three eigenvalues as a function of α on the interval x ∈ [−4, 4] using N = 72 Sinc points (colors correspond to the eigenstates in Fig. 4.28, respectively)

Fig. 4.30 Density plots for the first three probability densities |un|^2


amplitude. What is also remarkable for the finite-support spectrum is that there is no broadening of the wave function on the support interval when α is changed (compare with Fig. 4.9). We only observe a change in the amplitude, not in the width of the probability density.

4.4 Conclusions

We demonstrated for the first time numerically that the conjecture by Laskin on a Lévy-based quantum mechanics, i.e., a Lévy-Schrödinger description, can be numerically realized. The analytic results derived by Laskin for the harmonic potential and the quarkonium models were verified and confirmed by our Sinc approximation. The approximation approach delivered new results and effects which are applicable to other quantum mechanical systems with known analytic potentials, too. We also demonstrated that the Riesz-Feller representation delivers real-valued eigenvalues if the skewness parameter equals zero. If the skewness parameter is different from zero, the eigenvalues are complex but become real for α → 2, consistent with the standard quantum interpretation. At the moment it is not clear whether the complex eigenvalues have any physical meaning at all. The related probability densities are real-valued functions, which is the basis of a quantum mechanical interpretation. However, we recall that Gamow's theory for the α-decay of atomic nuclei used complex eigenvalues which finally turned out to have a real physical meaning. We have not yet examined in detail where such results may have meaningful applications. However, we can definitely state for the real-valued eigenvalues that the well-known models of quantum mechanics are now accessible in a quantum mechanics based on Lévy stable probability distributions. We also presented two solution approaches for finite boundary value problems in an environment with infinite-range interactions. We approximated some eigenvalues for these finite-support problems by a reinterpretation of the Riesz-Feller potential in connection with the potential term. The new results presented may open a gateway to new interpretations and applications in quantum mechanics, which will be seen in future works.

Appendix

This appendix collects additional relations and definitions used in the formulation and computation of fractional derivatives. The Riesz-Feller fractional derivative is defined as follows [38]:

D^α_{x;θ} f(x) = −I^{−α}_{x;θ} f(x) = c_−(α, θ) I^{−α}_{+;x;θ} f(x) + c_+(α, θ) I^{−α}_{−;x;θ} f(x)   (4.63)


where the following relations hold:

c_+(α, θ) = sin((α − θ)π/2)/sin(απ) and c_−(α, θ) = sin((α + θ)π/2)/sin(απ)   (4.64)

The corresponding fractional integral operators I^α_{x;θ} (Weyl integrals, see e.g. [38]) are given as

I^α_{+;x;θ} f(x) = (1/Γ(α)) ∫_{−∞}^{x} (x − ξ)^{α−1} f(ξ) dξ   (4.65)

and

I^α_{−;x;θ} f(x) = (1/Γ(α)) ∫_{x}^{∞} (ξ − x)^{α−1} f(ξ) dξ.   (4.66)

Note that the negative sign in the Riesz-Feller operator means that we are dealing with fractional derivatives, which are defined in terms of integral operators as follows:

I^{−α}_{+;x;θ} f(x) = (1/Γ(n − α)) ∫_{−∞}^{x} (x − ξ)^{n−α−1} f(ξ) dξ   (4.67)

and

I^{−α}_{−;x;θ} f(x) = (1/Γ(n − α)) ∫_{x}^{∞} (ξ − x)^{n−α−1} f(ξ) dξ,   (4.68)

with n = [Re(α)] + 1 and Re(α) > 0; here [α] represents the integer part of α.
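The left-sided Weyl integral (4.65) maps f(ξ) = e^ξ to itself for every α > 0, since substituting t = x − ξ yields e^x Γ(α)/Γ(α). This gives a convenient numerical check; the sketch below evaluates the integral with a trapezoidal rule after the further substitution t = e^y (the same idea behind Sinc quadrature; grid limits are our own choice):

```python
import math
import numpy as np

def weyl_left_exp(x, alpha):
    """Left Weyl integral (4.65) applied to f = exp. With t = x - xi and
    t = e^y this is e^x / Gamma(alpha) * int exp(alpha*y - e^y) dy."""
    y = np.linspace(-30.0, 6.0, 2001)
    h = y[1] - y[0]
    integral = np.sum(np.exp(alpha * y - np.exp(y))) * h
    return math.exp(x) * integral / math.gamma(alpha)

# I_+^alpha e^x = e^x for every alpha > 0
print(weyl_left_exp(0.5, 0.75))   # close to exp(0.5)
```

The rapid decay of the transformed integrand makes the trapezoidal rule converge extremely fast here, which is precisely the mechanism exploited by Sinc quadrature.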

Bibliography

1. Schrödinger, E.: An undulatory theory of the mechanics of atoms and molecules. Phys. Rev. 28, 1049–1070 (1926)
2. Kac, M.: In: Neyman, J. (ed.) Second Berkeley Symposium on Mathematical Statistics and Probability. University of California Press, Berkeley (1951)
3. Montroll, E.W.: On the quantum analogue of the Lévy distribution. In: Enz, C., Mehra, M. (eds.) Physical Reality and Mathematical Description, pp. 501–508. Reidel, Dordrecht (1974)
4. West, B.J.: Quantum Lévy propagators. J. Phys. Chem. B 104, 3830–3832 (2000)
5. Laskin, N.: Fractional quantum mechanics. Phys. Rev. E 62, 3135–3145 (2000)
6. Laskin, N.: Fractional Schrödinger equation. Phys. Rev. E 66, 56108 (2002)
7. Naber, M.: Time fractional Schrödinger equation. J. Math. Phys. 45, 3339 (2004)
8. Dong, J., Xu, M.: Space-time fractional Schrödinger equation with time-independent potentials. J. Math. Anal. Appl. 344, 1005–1017 (2008)


9. Stenger, F., Baumann, G., Koures, V.G.: Computational methods for chemistry and physics, and Schrödinger in 3+1. In: Concepts of Mathematical Physics in Chemistry: A Tribute to Frank E. Harris - Part A, vol. 71, pp. 265-298. Elsevier, Amsterdam (2015)
10. Longhi, S.: Fractional Schrödinger equation in optics. Opt. Lett. 40, 1117-1120 (2015)
11. Jeng, M., Xu, S.-L.-Y., Hawkins, E., Schwarz, J.M.: On the nonlocality of the fractional Schrödinger equation. J. Math. Phys. 51, 62102 (2010)
12. Laskin, N.: Fractional quantum mechanics and Lévy path integrals. Phys. Lett. A 268, 298-305 (2000)
13. Wei, Y.: Comment on "Fractional quantum mechanics" and "Fractional Schrödinger equation". Phys. Rev. E 93, 66103 (2016)
14. Laskin, N.: Reply to "Comment on 'Fractional quantum mechanics' and 'Fractional Schrödinger equation'". Phys. Rev. E 93, 66104 (2016)
15. Al-Saqabi, B., Boyadjiev, L., Luchko, Y.: Comments on employing the Riesz-Feller derivative in the Schrödinger equation. Eur. Phys. J. Spec. Top. 222, 1779-1794 (2013)
16. Luchko, Y.: Fractional Schrödinger equation for a particle moving in a potential well. J. Math. Phys. 54, 12111 (2013)
17. Titchmarsh, E.C.: Eigenfunction Expansions Associated with Second-Order Differential Equations. Oxford University Press, Oxford (1962)
18. Weyl, H.: Über gewöhnliche Differentialgleichungen mit Singularitäten und die zugehörigen Entwicklungen willkürlicher Funktionen. Math. Ann. 68, 220-269 (1910)
19. Baumann, G., Stenger, F.: Fractional calculus and Sinc methods. Fract. Calc. Appl. Anal. 14, 568-622 (2011)
20. Baumann, G., Stenger, F.: Fractional adsorption diffusion. Fract. Calc. Appl. Anal. 16, 737-764 (2013)
21. Baumann, G., Stenger, F.: Fractional Fokker-Planck equation. Mathematics 5, 1-19 (2017)
22. Baumann, G., Stenger, F.: Sinc-approximations of fractional operators: a computing approach. Mathematics 3, 444-480 (2015)
23. Stenger, F., El-Sharkawy, H.A.M., Baumann, G.: The Lebesgue constant for sinc approximations. In: Zayed, A., Schmeisser, G. (eds.) New Perspectives on Approximation and Sampling Theory - Festschrift in Honor of Paul Butzer's 85th Birthday. Birkhäuser, Basel (2014)
24. Stenger, F.: Handbook of Sinc Numerical Methods. CRC Press, Boca Raton (2011)
25. Stenger, F.: Numerical Methods Based on Sinc and Analytic Functions. Springer, New York (1993)
26. Stenger, F.: Summary of Sinc numerical methods. J. Comput. Appl. Math. 121, 379-420 (2000)
27. Stenger, F.: Collocating convolutions. Math. Comput. 64, 211-235 (1995)
28. Kowalski, M.A., Sikorski, K.A., Stenger, F.: Selected Topics in Approximation and Computation. Oxford University Press, New York (1995)
29. Lund, J.R., Riley, B.V.: A Sinc-collocation method for the computation of the eigenvalues of the radial Schrödinger equation. IMA J. Numer. Anal. 4, 83-98 (1984)
30. Eggert, N., Jarratt, M., Lund, J.: Sinc function computation of the eigenvalues of Sturm-Liouville problems. J. Comput. Phys. 69, 209-229 (1987)
31. Lundin, L., Stenger, F.: Cardinal-type approximations of a function and its derivatives. SIAM J. Math. Anal. 10, 139-160 (1979)
32. Jarratt, M.: Eigenvalue approximations on the entire real line. In: Bowers, K., Lund, J. (eds.) Computation and Control. Proceedings of the Bozeman Conference, Bozeman, Montana, August 1-11, 1988, pp. 133-144. Birkhäuser, Boston (1989)
33. Wang, S., Xu, M., Li, X.: Green's function of time fractional diffusion equation and its applications in fractional quantum mechanics. Nonlinear Anal. Real World Appl. 10, 1081-1086 (2009)
34. Lubich, C.: Convolution quadrature and discretized operational calculus. I. Numer. Math. 52, 129-145 (1988)
35. Lubich, C.: Convolution quadrature and discretized operational calculus. II. Numer. Math. 52, 413-425 (1988)


36. Han, L., Xu, J.: Proof of Stenger's conjecture on matrix of Sinc methods. J. Comput. Appl. Math. 255, 805-811 (2014)
37. Pöschl, G., Teller, E.: Bemerkungen zur Quantenmechanik des anharmonischen Oszillators. Z. Phys. 83, 143-151 (1933)
38. Kilbas, A.A., Srivastava, H.M., Trujillo, J.J.: Theory and Applications of Fractional Differential Equations. Elsevier, Amsterdam (2006)

Chapter 5

Application of Sinc on the Multi-Order Fractional Differential Equations J. Rashidinia, A. Parsa, and R. Salehi

Dedicated to Professor Frank Stenger on the occasion of his 80th birthday

Abstract We develop three classes of Sinc methods, based on the single exponential (SE), double exponential (DE), and combined single-double exponential (SE-DE) transformations, to approximate the solution of linear and nonlinear fractional differential equations of Riemann-Liouville and Caputo type. These methods are applicable for any fractional order. The exponential convergence rates of these methods are discussed analytically. Various illustrative examples are presented to examine our approaches and to verify the accuracy, applicability, and exponential convergence rate of the modified Sinc methods.

Keywords Fractional calculus · Fractional derivatives · Sinc method

Mathematics Subject Classification (2000) 65-XX, 34A08, 26A33, 97N50

5.1 Introduction

In recent years, models based on fractional equations have been studied in many fields of science and engineering [1]. The historical and theoretical background of fractional calculus is covered in [2, 3], along with the large number of references therein. Generally, there are two types of derivatives, the Riemann-Liouville type and the Caputo type, which are defined as follows.

J. Rashidinia · A. Parsa · R. Salehi
School of Mathematics, Iran University of Science and Technology, Narmak, Tehran, Iran
e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021
G. Baumann (ed.), New Sinc Methods of Numerical Analysis, Trends in Mathematics, https://doi.org/10.1007/978-3-030-49716-3_5


Definition 5.1 Let $\alpha > 0$ and $n = \lceil \alpha \rceil$. The operator ${}^{RL}_{\;a}D^{\alpha}_{t}$ defined by
$$ {}^{RL}_{\;a}D^{\alpha}_{t}\, y(t) = \begin{cases} \dfrac{1}{\Gamma(n-\alpha)} \dfrac{d^{n}}{dt^{n}} \displaystyle\int_{a}^{t} \dfrac{y(s)}{(t-s)^{\alpha-n+1}}\, ds = \dfrac{d^{n}}{dt^{n}} \bigl( {}_{a}I_{t}^{\,n-\alpha}[y](t) \bigr), & n-1 < \alpha < n, \\[2mm] \dfrac{d^{n}}{dt^{n}}\, y(t), & \alpha = n, \end{cases} \qquad (5.1) $$
is called the Riemann-Liouville fractional differential operator of order $\alpha$.

Definition 5.2 Let $\alpha > 0$ and $n = \lceil \alpha \rceil$. The operator ${}^{C}_{\,a}D^{\alpha}_{t}$ defined by
$$ {}^{C}_{\,a}D^{\alpha}_{t}\, y(t) = \begin{cases} \dfrac{1}{\Gamma(n-\alpha)} \displaystyle\int_{a}^{t} \dfrac{y^{(n)}(s)}{(t-s)^{\alpha-n+1}}\, ds = {}_{a}I_{t}^{\,n-\alpha}\Bigl[ \dfrac{d^{n}}{dt^{n}} y \Bigr](t), & n-1 < \alpha < n, \\[2mm] \dfrac{d^{n}}{dt^{n}}\, y(t), & \alpha = n, \end{cases} \qquad (5.2) $$
is called the Caputo fractional differential operator of order $\alpha$.

Adhering to [1], the following theorem explains the conditions on $y(t)$ that make the fractional derivatives well defined.

Theorem 5.1 Let $\alpha > 0$, $n = \lceil \alpha \rceil$, and let $y(t)$ be a function with an absolutely continuous $n$th derivative on the interval $[a,b]$. Then the Riemann-Liouville and Caputo derivatives of $y(t)$ exist on the interval $[a,b]$.

Lemma 5.1 Let $\alpha > 0$ and $n = \lceil \alpha \rceil$. Assume that $y$ is such that both ${}^{C}_{\,a}D^{\alpha}_{t} y$ and ${}^{RL}_{\;a}D^{\alpha}_{t} y$ exist. Then
$$ {}^{C}_{\,a}D^{\alpha}_{t}\, y(t) = {}^{RL}_{\;a}D^{\alpha}_{t}\, y(t) - \sum_{k=0}^{n-1} \frac{y^{(k)}(a)}{\Gamma(k-\alpha+1)}\, (t-a)^{k-\alpha}. \qquad (5.3) $$

In particular, ${}^{C}_{\,a}D^{\alpha}_{t}\, y(t) = {}^{RL}_{\;a}D^{\alpha}_{t}\, y(t)$ if and only if $y^{(k)}(a) = 0$ for $k = 0, 1, \dots, n-1$. In the remainder of the paper we consider Riemann-Liouville derivatives; whenever the Caputo sense is needed, we apply relation (5.3). For simplicity we denote ${}^{RL}_{\;a}D^{\alpha}_{t}\, y(t)$ by ${}_{a}D^{\alpha}_{t}\, y(t)$. Now we consider the following multi-order fractional differential equation
$$ \sum_{i=1}^{\nu} p_{i}(t)\, {}_{a}D_{t}^{\alpha_{i}} y(t) = q(t, y(t)), \qquad t \in (a, b], \qquad (5.4) $$
with initial conditions
$$ y^{(k)}(a) = y_{a}^{k}, \qquad k = 0, \dots, n_{\nu}-1, \qquad (5.5) $$
where $y$, $q$, and $p_{i}(t)$, $i = 1, \dots, \nu$, are continuous and bounded on the interval $(a, b]$, and $\alpha_{\nu} > \alpha_{\nu-1} > \alpha_{\nu-2} > \dots > \alpha_{1} > 0$, with $n_{i} = \lceil \alpha_{i} \rceil$ the first integer not less than $\alpha_{i}$. Existence and uniqueness of the solution of problem (5.4)-(5.5) are elaborated in Appendix 1.


Some convergence rates of polynomial methods were established by Diethelm [4, 5], who converted the original problem to a system of first-order initial value problems. Changpin Li et al. [6] discussed the fractional Euler, fractional backward Euler, and high-order methods for the solution of (5.4)-(5.5), and compared the presented methods with the fractional Adams method. Saadatmandi and Dehghan used a truncated Legendre series together with the Legendre operational matrix of fractional derivatives for the numerical solution of fractional differential and fractional partial differential equations [7]. Nonlinear fractional-order ordinary differential equations were considered in [8], where a high-order fractional linear multistep method was used to approximate the solution of this type of equation. A generalized pseudo-spectral method was used for solving a class of fractional differential equations with initial conditions, and the pseudo-spectral differentiation matrix of fractional order was derived in [9]. Kazem [10] generalized the Jacobi integral operational matrix to the fractional calculus and combined it with the Tau method, converting the solution of this problem to the solution of a system of algebraic equations. Many authors have considered multi-order fractional differential equations and attempted to approximate the solution by various methods, such as the B-spline method [11], the spectral Tau method [12], Adomian's decomposition method [13, 14], the variational iteration method [15], and the Galerkin finite element method [16].

Fractional derivatives and fractional integrals are special forms of Abel-type integrals. Their kernels are therefore singular at the endpoints, a weak singularity for which Sinc methods are quite effective. Baumann and Stenger [17] provided a survey of the application of Sinc methods to fractional integrals, fractional derivatives, and fractional differential equations. In [18], the authors proposed two new approximation formulas for Caputo fractional derivatives of order 0 < α < 1 of known functions, based on the single and double exponential transformations by means of the Sinc method. In fact, the authors in [18-20] followed Riley [21], who employed Sinc techniques to approximate the solution of weakly singular linear Volterra integral equations of the second kind. In this study, our main purpose is to extend the methods of [18] to the general form of Riemann-Liouville and Caputo derivatives for any α > 0, and to solve the multi-order fractional differential equations (5.4)-(5.5).

This paper is organized as follows: in Sect. 5.2 we present some properties of the Sinc function. Treatments of the Riemann-Liouville fractional derivatives by means of the SE-Sinc, DE-Sinc, and SE-DE-Sinc methods are presented in Sect. 5.3. In Sect. 5.4 we present the Sinc collocation methods. The error analysis of the proposed methods is given in Sect. 5.5. In Sect. 5.6, four illustrative examples are given. Some concluding remarks are given in Sect. 5.7. Finally, some preliminary theorems and lemmas required in Sects. 5.1 and 5.5 are given in Appendices 1 and 2, respectively.


5.2 Sinc Function

In this section, we review some properties of the Sinc function, Sinc interpolation, and Sinc quadrature [22-26]. The Sinc function is defined by
$$ \mathrm{Sinc}(t) = \begin{cases} \dfrac{\sin(\pi t)}{\pi t}, & t \neq 0, \\[1mm] 1, & t = 0. \end{cases} $$
Let $j$ be an integer and $h$ a positive number; the $j$th shifted Sinc function is defined by
$$ S(j,h)(t) = \mathrm{Sinc}\Bigl( \frac{t - jh}{h} \Bigr). $$

Since the problem (5.4) is defined over a finite interval, we need a transformation that maps the interval $(-\infty, \infty)$ onto the finite interval $(a, b)$. Two transformations are used for our problem on $(a, b)$: the single exponential transformation [22-24] and the double exponential transformation [19, 25-28], given by
$$ t = \psi^{SE}_{a,b}(z) = \frac{b-a}{2} \tanh\Bigl( \frac{z}{2} \Bigr) + \frac{b+a}{2}, $$
$$ t = \psi^{DE}_{a,b}(z) = \frac{b-a}{2} \tanh\Bigl( \frac{\pi}{2} \sinh(z) \Bigr) + \frac{b+a}{2}, $$
and their inverse functions are defined as
$$ z = (\psi^{SE}_{a,b})^{-1}(t) = \phi^{SE}_{a,b}(t) = \log\Bigl( \frac{t-a}{b-t} \Bigr), $$
$$ z = (\psi^{DE}_{a,b})^{-1}(t) = \phi^{DE}_{a,b}(t) = \log\Biggl[ \frac{1}{\pi} \log\Bigl( \frac{t-a}{b-t} \Bigr) + \sqrt{1 + \Bigl( \frac{1}{\pi} \log\Bigl( \frac{t-a}{b-t} \Bigr) \Bigr)^{2}} \,\Biggr]. $$

With these maps we define the Sinc points as $t_{k}^{SE} = \psi^{SE}_{a,b}(kh)$ and $t_{k}^{DE} = \psi^{DE}_{a,b}(kh)$. The truncated Sinc quadrature rule is defined by
$$ \int_{a}^{b} y(t)\, dt \approx h \sum_{j=-M}^{M} y(\psi(jh))\, \psi'(jh). \qquad (5.6) $$
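As an illustration of the quadrature rule (5.6), the sketch below applies the truncated SE-Sinc rule to $\int_{0}^{1} t^{-1/2}\, dt = 2$, whose endpoint singularity is exactly the weak singularity that Sinc quadrature handles well. The step size $h = \pi/\sqrt{M}$ is our own illustrative choice for this demonstration, not a prescription from the text.

```python
import math

# Truncated SE-Sinc quadrature (5.6) on (a, b):
#   int_a^b y(t) dt ~ h * sum_{k=-M}^{M} y(psi(k h)) * psi'(k h),
# with psi = psi^{SE}_{a,b}.

def psi_se(a, b, z):
    return 0.5 * (b - a) * math.tanh(z / 2.0) + 0.5 * (b + a)

def dpsi_se(a, b, z):
    c = math.cosh(z / 2.0)
    return (b - a) / (4.0 * c * c)

def sinc_quad(y, a, b, M):
    h = math.pi / math.sqrt(M)          # illustrative step-size choice
    return h * sum(y(psi_se(a, b, k * h)) * dpsi_se(a, b, k * h)
                   for k in range(-M, M + 1))

val = sinc_quad(lambda t: 1.0 / math.sqrt(t), 0.0, 1.0, 64)
print(val)   # close to the exact value 2
```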

In general, the function $y(t)$ satisfying problem (5.4) with its initial conditions may not vanish at the endpoints. Using the Sinc basis, we therefore represent the basis functions as
$$ \chi_{j}(\phi(t)) = \begin{cases} \dfrac{1}{1+\rho(t)}, & j = -M, \\[1mm] S(j,h) \circ \phi(t), & j = -M+1, \dots, M-1, \\[1mm] \dfrac{\rho(t)}{1+\rho(t)}, & j = M, \end{cases} $$


where $\rho(t) = e^{\phi(t)}$. This form of the Sinc basis is chosen to satisfy the interpolation at the boundaries. For a given vector $c = [c_{-M}, \dots, c_{M}]^{T}$, we approximate the function $y(t)$ by
$$ y(t) \approx C_{M}[y](t) = \sum_{j=-M}^{M} c_{j}\, \chi_{j}(\phi(t)). \qquad (5.7) $$
It is to be noted that, in Eqs. (5.6) and (5.7), we can use $\phi(t) = \phi^{SE}_{a,b}(t)$ and $\psi(t) = \psi^{SE}_{a,b}(t)$ for the single exponential transformation, giving $y^{SE}(t) = C_{M}^{SE}[y](t)$, and likewise $\phi(t) = \phi^{DE}_{a,b}(t)$ and $\psi(t) = \psi^{DE}_{a,b}(t)$ for the double exponential transformation, giving $y^{DE}(t) = C_{M}^{DE}[y](t)$.

5.3 Treatment of the Riemann-Liouville Fractional Derivatives of Order α > 0 by Means of Sinc Methods

5.3.1 SE-Sinc and DE-Sinc Methods

We now approximate the integral part of the fractional derivative in Eq. (5.4). First, we transform the interval $(a, t)$ to $(-\infty, \infty)$ by the change of variable $s = \psi_{a,t}(\tau)$, obtaining
$$ {}_{a}I_{t}^{\,n-\alpha}[y](t) = \frac{1}{\Gamma(n-\alpha)} \int_{a}^{t} \frac{y(s)}{(t-s)^{\alpha-n+1}}\, ds = \frac{1}{\Gamma(n-\alpha)} \int_{-\infty}^{\infty} \frac{y(\psi_{a,t}(\tau))\, \psi_{a,t}'(\tau)}{(t - \psi_{a,t}(\tau))^{\alpha-n+1}}\, d\tau = \frac{(t-a)^{n-\alpha}}{\Gamma(n-\alpha)} \int_{-\infty}^{\infty} \Theta(\tau)\, y(\psi_{a,t}(\tau))\, d\tau, \qquad (5.8) $$
where
$$ \Theta^{SE}(\tau) = \frac{1}{(1+e^{-\tau})(1+e^{\tau})^{n-\alpha}}, \qquad \psi^{SE}_{a,t}(\tau) = \frac{t-a}{2} \tanh\Bigl( \frac{\tau}{2} \Bigr) + \frac{t+a}{2}, $$
$$ \Theta^{DE}(\tau) = \frac{\pi \cosh(\tau)}{(1+e^{-\pi\sinh(\tau)})(1+e^{\pi\sinh(\tau)})^{n-\alpha}}, \qquad \psi^{DE}_{a,t}(\tau) = \frac{t-a}{2} \tanh\Bigl( \frac{\pi}{2} \sinh(\tau) \Bigr) + \frac{t+a}{2}. $$
Applying the Sinc quadrature rule (5.6), we obtain
$$ {}_{a}I_{t}^{\,n-\alpha}[y](t) \approx I_{M}[y](t) = \frac{(t-a)^{n-\alpha}\, h}{\Gamma(n-\alpha)} \sum_{k=-M}^{M} \Theta(kh)\, y(\psi_{a,t}(kh)), \qquad (5.9) $$
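The quadrature (5.9) can be sketched directly in code. Below we check the SE variant against the closed form ${}_{0}I_{t}^{1/2}[1](t) = t^{1/2}/\Gamma(3/2)$ and ${}_{0}I_{t}^{1/2}[s](t) = t^{3/2}/\Gamma(5/2)$. The step-size choice $h = \sqrt{\pi d/(\mu M)}$ with $d = \pi/2$ and $\mu = n-\alpha$ anticipates the error analysis of Sect. 5.5; treat these parameter choices as illustrative assumptions.

```python
import math

# SE-Sinc quadrature (5.9) for the fractional integral aI_t^{n-alpha}[y](t).

def frac_int_se(y, a, t, n_minus_alpha, M):
    d = math.pi / 2.0
    h = math.sqrt(math.pi * d / (n_minus_alpha * M))   # assumed step size
    s = 0.0
    for k in range(-M, M + 1):
        tau = k * h
        theta = 1.0 / ((1.0 + math.exp(-tau))
                       * (1.0 + math.exp(tau)) ** n_minus_alpha)
        psi = 0.5 * (t - a) * math.tanh(tau / 2.0) + 0.5 * (t + a)
        s += theta * y(psi)
    return (t - a) ** n_minus_alpha * h * s / math.gamma(n_minus_alpha)

# Closed forms: 0I_t^{1/2}[1](1) = 1/Gamma(3/2), 0I_t^{1/2}[s](1) = 1/Gamma(5/2).
print(frac_int_se(lambda s: 1.0, 0.0, 1.0, 0.5, 64), 1.0 / math.gamma(1.5))
print(frac_int_se(lambda s: s, 0.0, 1.0, 0.5, 64), 1.0 / math.gamma(2.5))
```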

where $h$ is a mesh size depending on $M$. Substituting for $y(t)$ the cardinal series $C_{M}[y](t)$ defined in (5.7), denoting $\phi_{a,b}(\psi_{a,t}(kh))$ by $\varphi_{a,k}(t)$, and using Faà di Bruno's formula [29] for the $n$th derivative of a composite function, we obtain the following extended formula for the Riemann-Liouville fractional derivative:
$$ {}_{a}D_{t}^{\alpha}\, y(t) \approx \frac{h}{\Gamma(n-\alpha)} \sum_{k=-M}^{M} \sum_{j=-M}^{M} \sum_{\ell=0}^{n} \Theta(kh)\, c_{j}\, \frac{\Gamma(n+1)}{\Gamma(\ell+1)\Gamma(n-\ell+1)} \Bigl[ \frac{d^{\ell}}{dt^{\ell}} \chi_{j}(\varphi_{a,k}(t)) \Bigr] \frac{d^{n-\ell}}{dt^{n-\ell}} (t-a)^{n-\alpha} $$
$$ = \frac{h}{\Gamma(n-\alpha)} \sum_{k=-M}^{M} \sum_{j=-M}^{M} \sum_{\ell=0}^{n} \Theta(kh)\, c_{j}\, \frac{\Gamma(n+1)\Gamma(n-\alpha+1)}{\Gamma(\ell+1)\Gamma(n-\ell+1)\Gamma(\ell-\alpha+1)}\, (t-a)^{\ell-\alpha} \times \sum \frac{\ell!}{b_{1}!\, b_{2}! \cdots b_{\ell}!} \Bigl[ \frac{d^{p}}{d\varphi^{p}} \chi_{j}(\varphi_{a,k}(t)) \Bigr] \Bigl( \frac{\varphi_{a,k}'(t)}{1!} \Bigr)^{b_{1}} \Bigl( \frac{\varphi_{a,k}''(t)}{2!} \Bigr)^{b_{2}} \cdots \Bigl( \frac{\varphi_{a,k}^{(\ell)}(t)}{\ell!} \Bigr)^{b_{\ell}}, \qquad (5.10) $$
where the inner sum runs over all solutions in nonnegative integers $b_{1}, b_{2}, \dots, b_{\ell}$ of $b_{1} + 2b_{2} + \dots + \ell b_{\ell} = \ell$, and $p = b_{1} + b_{2} + \dots + b_{\ell}$.

Remark 5.1 In Eqs. (5.8)-(5.10), if we use $\phi^{SE}_{a,b}$ and $\psi^{SE}_{a,t}$ the method is called the SE-Sinc method, and if we use $\phi^{DE}_{a,b}$ and $\psi^{DE}_{a,t}$ the method is called the DE-Sinc method.
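The inner sum of (5.10) runs over the same index tuples as Faà di Bruno's formula. The short sketch below (our own illustration) enumerates the tuples $(b_{1}, \dots, b_{\ell})$ with $b_{1} + 2b_{2} + \dots + \ell b_{\ell} = \ell$ and the corresponding integer weights $\ell!/(b_{1}! \cdots b_{\ell}!\, (1!)^{b_{1}} \cdots (\ell!)^{b_{\ell}})$.

```python
import math

# Enumerate the Faa di Bruno partitions and weights that appear in the
# inner sum of (5.10): all (b_1,...,b_l) >= 0 with sum_i i*b_i = l.

def faa_di_bruno_terms(l):
    terms = {}
    def rec(i, remaining, b):
        if i > l:
            if remaining == 0:
                denom = 1
                for idx, bi in enumerate(b, start=1):
                    denom *= math.factorial(bi) * math.factorial(idx) ** bi
                terms[tuple(b)] = math.factorial(l) // denom
            return
        for bi in range(remaining // i + 1):
            rec(i + 1, remaining - i * bi, b + [bi])
    rec(1, l, [])
    return terms

# l = 3 reproduces (f o g)''' = f'''(g) g'^3 + 3 f''(g) g' g'' + f'(g) g'''.
print(faa_di_bruno_terms(3))
```

The weights sum to the Bell numbers (e.g. 5 for $\ell = 3$ and 15 for $\ell = 4$), a convenient consistency check.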

5.3.2 SE-DE-Sinc Method

In this new approach, we use $\Theta^{DE}(kh)$ and $\psi^{DE}_{a,t}(kh)$ in the DE quadrature rule (5.9). Since in the DE transformation only a limited class of functions satisfies the required conditions, we approximate $y(t)$ by the SE cardinal series $C_{M}^{SE}[y](t)$ defined in (5.7). Denoting $\phi^{SE}_{a,b}(\psi^{DE}_{a,t}(kh^{DE}))$ by $\varphi^{SE\text{-}DE}_{a,k}(t)$, we obtain the following extended formula for the Riemann-Liouville fractional derivative:
$$ {}_{a}D_{t}^{\alpha}\, y(t) \approx \frac{h^{DE}}{\Gamma(n-\alpha)} \sum_{k=-M}^{M} \sum_{j=-M}^{M} \sum_{\ell=0}^{n} \Theta^{DE}(kh^{DE})\, c_{j}^{SE}\, \frac{\Gamma(n+1)}{\Gamma(\ell+1)\Gamma(n-\ell+1)} \Bigl[ \frac{d^{\ell}}{dt^{\ell}} \chi_{j}(\varphi^{SE\text{-}DE}_{a,k}(t)) \Bigr] \frac{d^{n-\ell}}{dt^{n-\ell}} (t-a)^{n-\alpha} $$
$$ = \frac{h^{DE}}{\Gamma(n-\alpha)} \sum_{k=-M}^{M} \sum_{j=-M}^{M} \sum_{\ell=0}^{n} \Theta^{DE}(kh^{DE})\, c_{j}^{SE}\, \frac{\Gamma(n+1)\Gamma(n-\alpha+1)}{\Gamma(\ell+1)\Gamma(n-\ell+1)\Gamma(\ell-\alpha+1)}\, (t-a)^{\ell-\alpha} \times \sum \frac{\ell!}{b_{1}!\, b_{2}! \cdots b_{\ell}!} \Bigl[ \frac{d^{p}}{d\varphi^{p}} \chi_{j}(\varphi^{SE\text{-}DE}_{a,k}(t)) \Bigr] \Bigl( \frac{(\varphi^{SE\text{-}DE}_{a,k})'(t)}{1!} \Bigr)^{b_{1}} \cdots \Bigl( \frac{(\varphi^{SE\text{-}DE}_{a,k})^{(\ell)}(t)}{\ell!} \Bigr)^{b_{\ell}}, \qquad (5.11) $$
where
$$ \chi_{j}(\varphi^{SE\text{-}DE}_{a,k}(t)) = \begin{cases} \dfrac{1}{1+e^{\varphi^{SE\text{-}DE}_{a,k}(t)}}, & j = -M, \\[1mm] S(j, h^{SE}) \circ \varphi^{SE\text{-}DE}_{a,k}(t), & j = -M+1, \dots, M-1, \\[1mm] \dfrac{e^{\varphi^{SE\text{-}DE}_{a,k}(t)}}{1+e^{\varphi^{SE\text{-}DE}_{a,k}(t)}}, & j = M. \end{cases} $$


5.4 Sinc Collocation Methods

The Sinc collocation method has so far been studied for the Caputo type of fractional derivatives. In this paper, we study the Sinc collocation method for the Riemann-Liouville type of fractional derivatives, and we develop a new approach, the SE-DE-Sinc collocation method, for the solution of the given problem (5.4) with initial conditions (5.5). If the fractional derivatives in the multi-order fractional differential equation (5.4) are of Caputo type, then by using relation (5.3) the equation can be converted to the following Riemann-Liouville fractional differential equation:
$$ \sum_{\zeta=1}^{\nu} p_{\zeta}(t)\, {}^{C}_{\,a}D_{t}^{\alpha_{\zeta}} y(t) = \sum_{\zeta=1}^{\nu} p_{\zeta}(t) \Biggl( {}^{RL}_{\;a}D_{t}^{\alpha_{\zeta}} y(t) - \sum_{k=0}^{n_{\zeta}-1} \frac{y^{(k)}(a)}{\Gamma(k-\alpha_{\zeta}+1)}\, (t-a)^{k-\alpha_{\zeta}} \Biggr) = q(t, y(t)), $$
where $n_{\zeta} = \lceil \alpha_{\zeta} \rceil$. The above equation can be simplified as
$$ \sum_{\zeta=1}^{\nu} p_{\zeta}(t)\, {}_{a}D_{t}^{\alpha_{\zeta}} y(t) = Q(t, y(t)), \qquad (5.12) $$
where ${}_{a}D_{t}^{\alpha_{i}} y(t) = {}^{RL}_{\;a}D_{t}^{\alpha_{i}} y(t)$ and
$$ Q(t, y(t)) = q(t, y(t)) + \sum_{\zeta=1}^{\nu} \sum_{k=0}^{n_{\zeta}-1} \frac{p_{\zeta}(t)\, y^{(k)}(a)}{\Gamma(k-\alpha_{\zeta}+1)}\, (t-a)^{k-\alpha_{\zeta}}. $$

We now describe the SE-Sinc, DE-Sinc and SE-DE-Sinc collocation methods. In Sect. 5.1 we assumed $\alpha_{\nu} > \alpha_{\nu-1} > \dots > \alpha_{1} > 0$ with $n_{i} = \lceil \alpha_{i} \rceil$; hence the largest power of $\phi'(t)$ appearing in Eq. (5.12) is $n_{\nu}$. Since $\phi'(t)$ takes large values near the endpoints, we divide both sides of problem (5.12) by $(\phi'(t))^{n_{\nu}}$, denote $g(t) = 1/(\phi'_{a,b}(t))^{n_{\nu}}$, and also multiply both sides of Eq. (5.12) by $(t-a)^{\alpha_{\nu}}$. Let us introduce the abbreviations
$$ g^{SE}(t) = \frac{1}{\bigl( \phi^{SE\,\prime}_{a,b}(t) \bigr)^{n_{\nu}}}, \qquad g^{DE}(t) = \frac{1}{\bigl( \phi^{DE\,\prime}_{a,b}(t) \bigr)^{n_{\nu}}}, $$
$$ H^{SE}(t) = g^{SE}(t)\, (t-a)^{\alpha_{\nu}}, \qquad H^{DE}(t) = g^{DE}(t)\, (t-a)^{\alpha_{\nu}}. $$
Multiplying both sides of Eq. (5.12) by $H(t)$, we get
$$ \sum_{\zeta=1}^{\nu} H(t)\, p_{\zeta}(t)\, {}_{a}D_{t}^{\alpha_{\zeta}} y(t) = H(t)\, Q(t, y(t)). \qquad (5.13) $$


Approximating the function $y(t)$ on the interval $[a, b]$ by the series (5.7), we write
$$ y_{M}(t) = \sum_{j=-M}^{M} y_{j}\, \chi_{j}(\phi_{a,b}(t)) = \mathbf{X}^{T}(t)\, \mathbf{Y}, \qquad (5.14) $$
where $\mathbf{Y} = [y_{-M}, \dots, y_{M}]^{T}$ is the unknown vector to be determined. Substituting (5.14) into (5.13), the fractional derivatives in (5.13) become
$$ H(t)\, p_{\zeta}(t)\, {}_{a}D_{t}^{\alpha_{\zeta}}(y_{M}(t)) = {}^{\zeta}\mathbf{E}^{T}(t)\; {}^{\zeta}\mathbf{J}(t)\; {}^{\zeta}\mathbf{X}(t)\, \mathbf{Y}, \qquad (5.15) $$
where ${}^{\zeta}\mathbf{E}^{T}(t)$ is a vector of $2M+1$ components, ${}^{\zeta}\mathbf{J}(t)$ is a $(2M+1) \times (n_{\zeta} \times (2M+1))$ matrix, and ${}^{\zeta}\mathbf{X}(t)$ is a $(n_{\zeta} \times (2M+1)) \times (2M+1)$ matrix. Substituting (5.14) and (5.15) into (5.13) and collocating at the points $t = t_{i} = \psi_{a,b}(ih)$, we obtain
$$ \sum_{\zeta=1}^{\nu} {}^{\zeta}\mathbf{E}^{T}(t_{i})\; {}^{\zeta}\mathbf{J}(t_{i})\; {}^{\zeta}\mathbf{X}(t_{i})\, \mathbf{Y} = H(t_{i})\, Q(t_{i}, \mathbf{X}^{T}(t_{i})\mathbf{Y}), \qquad i = -M, \dots, M. \qquad (5.16) $$

Equation (5.16) is a system of $2M+1$ algebraic equations which can be solved for the unknowns $\{y_{k}\}_{k=-M}^{M}$; accordingly, the unknown function $y_{M}(t)$ is determined by Eq. (5.14). The matrix elements and vector components for the SE, DE and SE-DE Sinc collocation methods are as follows:
$$ \text{SE:} \quad \begin{cases} {}^{\zeta}E^{SE}_{s}(t) = h^{SE}\, g^{SE}(t)\, \Theta^{SE}(-(s-M-1)h^{SE})\, p_{\zeta}(t), \\[1mm] {}^{\zeta}X^{SE}_{kj}(t) = \dfrac{d^{\ell}}{dt^{\ell}} \chi_{j-M-1}\bigl( \varphi_{a,\,k-\ell(2M+1)-M-1}(t) \bigr), \\[1mm] \mathbf{Y}^{SE} = [y^{SE}_{-M}, \dots, y^{SE}_{M}]^{T}, \\[1mm] t_{i} = \psi^{SE}_{a,b}(ih^{SE}), \end{cases} $$
$$ \text{DE:} \quad \begin{cases} {}^{\zeta}E^{DE}_{s}(t) = h^{DE}\, g^{DE}(t)\, \Theta^{DE}(-(s-M-1)h^{DE})\, p_{\zeta}(t), \\[1mm] {}^{\zeta}X^{DE}_{kj}(t) = \dfrac{d^{\ell}}{dt^{\ell}} \chi_{j-M-1}\bigl( \varphi_{a,\,k-\ell(2M+1)-M-1}(t) \bigr), \\[1mm] \mathbf{Y}^{DE} = [y^{DE}_{-M}, \dots, y^{DE}_{M}]^{T}, \\[1mm] t_{i} = \psi^{DE}_{a,b}(ih^{DE}), \end{cases} $$
$$ \text{SE-DE:} \quad \begin{cases} {}^{\zeta}E^{SE\text{-}DE}_{s}(t) = h^{DE}\, g^{SE}(t)\, \Theta^{DE}(-(s-M-1)h^{DE})\, p_{\zeta}(t), \\[1mm] {}^{\zeta}X^{SE\text{-}DE}_{kj}(t) = \dfrac{d^{\ell}}{dt^{\ell}} \chi_{j-M-1}\bigl( \varphi^{SE\text{-}DE}_{a,\,k-\ell(2M+1)-M-1}(t) \bigr), \\[1mm] \mathbf{Y}^{SE} = [y^{SE}_{-M}, \dots, y^{SE}_{M}]^{T}, \\[1mm] t_{i} = \psi^{SE}_{a,b}(ih^{SE}), \end{cases} $$
$$ {}^{\zeta}J_{pq}(t) = \begin{cases} \dfrac{\Gamma(n_{\zeta}+1)\Gamma(n_{\zeta}-\alpha_{\zeta}+1)}{\Gamma(\ell+1)\Gamma(n_{\zeta}-\ell+1)\Gamma(\ell-\alpha_{\zeta}+1)}\, (t-a)^{\ell-\alpha_{\zeta}+\alpha_{\nu}}, & \text{if } (p-q) = \ell(2M+1), \\[1mm] 0, & \text{otherwise}, \end{cases} $$
where $s = 1, \dots, 2M+1$, $\ell(2M+1) < k \leq (\ell+1)(2M+1)$, $j = 1, \dots, 2M+1$, $-M \leq i \leq M$, and $\ell = 0, 1, \dots, n_{\zeta}$.

Remark 5.2 If the system (5.16) is nonlinear, it can be solved by Newton's method with initial guess $\mathbf{Y}^{(0)} = \mathbf{0}$, stopping the iteration whenever the difference between two consecutive iterates is less than a given tolerance, i.e. $\|\mathbf{Y}^{(k+1)} - \mathbf{Y}^{(k)}\| < \varepsilon$.
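The iteration of Remark 5.2 can be sketched generically. The 2×2 system below is a stand-in for the collocation system (5.16) (solving the actual system would require assembling the matrices above); the finite-difference Jacobian, tolerance, and test system are our own illustrative choices.

```python
import numpy as np

# Newton's method with zero initial guess and stopping rule
# ||Y^(k+1) - Y^(k)|| < eps, as in Remark 5.2. The Jacobian is
# approximated here by forward finite differences.

def newton(F, y0, eps=1e-12, max_iter=50, fd=1e-7):
    y = np.asarray(y0, dtype=float)
    for _ in range(max_iter):
        Fy = F(y)
        n = y.size
        J = np.empty((n, n))
        for j in range(n):                     # finite-difference Jacobian
            e = np.zeros(n)
            e[j] = fd
            J[:, j] = (F(y + e) - Fy) / fd
        step = np.linalg.solve(J, -Fy)
        y = y + step
        if np.linalg.norm(step) < eps:
            break
    return y

# Toy nonlinear system standing in for (5.16).
F = lambda y: np.array([y[0] + 0.1 * y[1] ** 2 - 1.0,
                        y[1] + 0.1 * y[0] ** 2 - 2.0])
sol = newton(F, np.zeros(2))
print(sol, F(sol))   # F(sol) should be near zero
```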

5.5 Error Analysis

In this section we analyze the errors that arise from the approximation of the derivatives and the integrand in Eq. (5.12), and from the approximation of Eq. (5.12) by the collocation methods. The function $y(t)$ to be approximated must be analytic on a strip domain $D_{d} = \{ z \in \mathbb{C} : |\mathrm{Im}(z)| < d \}$ for some $d > 0$, and bounded in a suitable sense. First we recall the following definitions.

Definition 5.3 Let $D$ be a simply connected domain satisfying $(a, b) \subset D$, and let $\beta$ be a positive constant. Then $L_{\beta}(D)$ denotes the family of all functions $y$ that satisfy
$$ |y(z)| \leq c\, |g(z)|^{\beta} $$
for a positive constant $c$ and all $z \in D$, where $g(t) = \dfrac{1}{(\phi'_{a,b}(t))^{n_{\nu}}}$.

Remark 5.3 When incorporating the SE and DE transformations, the condition should be considered on the translated domains
$$ \psi^{SE}_{a,b}(D_{d}) = \Bigl\{ z \in \mathbb{C} : \Bigl| \arg\Bigl( \frac{z-a}{b-z} \Bigr) \Bigr| < d \Bigr\}, $$
$$ \psi^{DE}_{a,b}(D_{d}) = \Biggl\{ z \in \mathbb{C} : \Biggl| \arg\Biggl( \frac{1}{\pi} \log\Bigl( \frac{z-a}{b-z} \Bigr) + \sqrt{1 + \Bigl( \frac{1}{\pi} \log\Bigl( \frac{z-a}{b-z} \Bigr) \Bigr)^{2}} \Biggr) \Biggr| < d \Biggr\}. $$

Lemma 5.2 Let $y$ be analytic and bounded in $D$, where $D$ is one of $\psi^{SE}_{a,b}(D_{d})$ or $\psi^{DE}_{a,b}(D_{d})$, and suppose that for $n \geq 0$ there exists a constant $K_{1} > 0$ such that
$$ \Bigl| g(t)\, \frac{d^{n}}{dt^{n}} \frac{\sin(\pi\phi(t)/h)}{\phi(w) - \phi(t)} \Bigr| \leq \frac{K_{1}}{h^{n}}. $$
Then there exists a constant $K_{2} > 0$ such that
$$ \Bigl\| g(t)\, \frac{d^{n}}{dt^{n}} \mathcal{E}(\phi, y)(t) \Bigr\|_{\infty} = \Bigl\| g(t)\, \frac{d^{n}}{dt^{n}} \Bigl( y(t) - \sum_{k=-\infty}^{\infty} y(t_{k})\, S(k,h) \circ \phi(t) \Bigr) \Bigr\|_{\infty} \leq K_{2}\, \frac{e^{-\pi d/h}}{h^{n}}. $$

Proof Following Theorem 3.2 of [24, p. 59], we know that
$$ g(t)\, \frac{d^{n}}{dt^{n}} \mathcal{E}(\phi, y)(t) = g(t)\, \frac{d^{n}}{dt^{n}} \lim_{\gamma \to \partial D} \int_{\gamma} \frac{\sin(\pi\phi(t)/h)\, y(w)\, \phi'(w)\, dw}{2\pi i\, (\phi(w) - \phi(t)) \sin(\pi\phi(w)/h)} = \lim_{\gamma \to \partial D} \int_{\gamma} g(t)\, \frac{d^{n}}{dt^{n}} \Bigl[ \frac{\sin(\pi\phi(t)/h)}{\phi(w) - \phi(t)} \Bigr] \frac{y(w)\, \phi'(w)\, dw}{2\pi i\, \sin(\pi\phi(w)/h)}, $$
thus
$$ \Bigl| g(t)\, \frac{d^{n}}{dt^{n}} \mathcal{E}(\phi, y)(t) \Bigr| \leq \frac{K_{1}}{2\pi h^{n} \sinh(\pi d/h)} \lim_{\gamma \to \partial D} \int_{\gamma} |y(w)\, \phi'(w)|\, dw \leq \frac{4 K_{1}\, N(\phi, y)}{2\pi h^{n}}\, e^{-\pi d/h} = K_{2}\, \frac{e^{-\pi d/h}}{h^{n}}, $$
where $D$ can be either $\psi^{SE}_{a,b}(D_{d})$ or $\psi^{DE}_{a,b}(D_{d})$.

Lemma 5.3 Suppose that all conditions of Lemma 5.2 are fulfilled, that $y \in L_{\beta}(D)$ for a positive constant $\beta$, and that $g(t) = 1/(\phi'(t))^{n}$. Let there exist a constant $l > 0$ such that for all $t \in \psi_{a,b}((-\infty, \infty))$
$$ \Bigl| g(t)\, \frac{d^{n}}{dt^{n}} \mathrm{Sinc}\Bigl( \frac{\phi_{a,b}(t) - kh}{h} \Bigr) \Bigr| \leq \frac{l}{h^{n}}. \qquad (5.17) $$
Then it follows that
$$ \Bigl\| g(t)\, \frac{d^{n}}{dt^{n}} \mathcal{E}_{M}(\phi_{a,b}, y)(t) \Bigr\|_{\infty} = \Bigl\| g(t)\, \frac{d^{n}}{dt^{n}} y(t) - \sum_{k=-M}^{M} y(t_{k})\, g(t)\, \frac{d^{n}}{dt^{n}} S(k,h) \circ \phi_{a,b}(t) \Bigr\|_{\infty} \leq k(M)\, e^{-\tilde{M}}. \qquad (5.18) $$


(I) Using the SE transformation and taking $h = \sqrt{\dfrac{\pi d}{\beta M}}$ and $t_{k}^{SE} = \psi^{SE}_{a,b}(kh)$, we obtain $k(M) = K_{3}\, M^{\frac{n+1}{2}}$ and $\tilde{M} = \sqrt{\pi d \beta M}$.
(II) Using the DE transformation and taking $h = \dfrac{\log(2dM/\beta)}{M}$ and $t_{k}^{DE} = \psi^{DE}_{a,b}(kh)$, we obtain $k(M) = K_{4}\, \dfrac{M^{n+1}}{\log(2dM/\beta)^{n+1}}$ and $\tilde{M} = \dfrac{\pi d M}{\log(2dM/\beta)}$,
where $K_{3}$ and $K_{4}$ are positive constants.

Proof To prove the inequality (5.18) for the SE-Sinc method, using the SE transformation, Lemma 5.2 and inequality (5.17), we get
$$ \Bigl\| g^{SE}(t)\, \frac{d^{n}}{dt^{n}} \mathcal{E}_{M}(\phi^{SE}_{a,b}, y)(t) \Bigr\|_{\infty} \leq \sum_{|k|>M} \bigl| y(t_{k}^{SE}) \bigr|\, \Bigl| g^{SE}(t)\, \frac{d^{n}}{dt^{n}} S(k,h) \circ \phi^{SE}_{a,b}(t) \Bigr| + \frac{K_{2}}{h^{n}}\, e^{-\pi d/h} $$
$$ \leq \frac{l}{h^{n}}\, 2c \sum_{k=M+1}^{\infty} e^{-\beta k h} + \frac{K_{2}}{h^{n}}\, e^{-\pi d/h} \leq \frac{2cl}{\beta h^{n+1}}\, e^{-\beta M h} + \frac{K_{2}}{h^{n}}\, e^{-\pi d/h}, $$
and taking $h = \sqrt{\pi d/(\beta M)}$ we get inequality (5.18). Likewise, using the DE transformation, Lemma 5.2 and inequality (5.17), we get
$$ \Bigl\| g^{DE}(t)\, \frac{d^{n}}{dt^{n}} \mathcal{E}_{M}(\phi^{DE}_{a,b}, y)(t) \Bigr\|_{\infty} \leq \sum_{|k|>M} \bigl| y(t_{k}^{DE}) \bigr|\, \Bigl| g^{DE}(t)\, \frac{d^{n}}{dt^{n}} S(k,h) \circ \phi^{DE}_{a,b}(t) \Bigr| + \frac{K_{2}}{h^{n}}\, e^{-\pi d/h} $$
$$ \leq \frac{l}{h^{n}}\, 2c \sum_{k=M+1}^{\infty} e^{-\frac{\pi}{2}\beta \exp(kh)} + \frac{K_{2}}{h^{n}}\, e^{-\pi d/h} \leq \frac{4cl}{\pi \beta h^{n+1} e^{Mh}}\, e^{-\frac{\pi}{2}\beta \exp(Mh)} + \frac{K_{2}}{h^{n}}\, e^{-\pi d/h}; $$
inequality (5.18) is then established by taking $h = \log(2dM/\beta)/M$, and the proof is completed.

To prove the next theorems, we consider the following identities:
(i) For $t \in [a, b]$,
$$ \Bigl| \frac{\Gamma(n+1)}{\Gamma(n-\alpha)\Gamma(\ell+1)\Gamma(n-\ell+1)}\, \frac{d^{n-\ell}}{dt^{n-\ell}} (t-a)^{n-\alpha} \Bigr|\, \bigl| (t-a)^{\alpha_{\nu}} \bigr| = \Bigl| \frac{\Gamma(n+1)\Gamma(n-\alpha+1)}{\Gamma(n-\alpha)\Gamma(\ell+1)\Gamma(n-\ell+1)\Gamma(\ell-\alpha+1)}\, (t-a)^{\ell-\alpha+\alpha_{\nu}} \Bigr| \leq c_{\ell}\, (b-a)^{\ell-\alpha+\alpha_{\nu}}, $$
where $\ell - \alpha + \alpha_{\nu} > 0$.
(ii) By the definition of the fractional derivative we have $n-1 < \alpha < n$, hence $0 < n-\alpha < 1$, and thus
$$ \int_{-\infty}^{\infty} \Theta(\tau)\, d\tau = \int_{-\infty}^{\infty} \frac{\pi \cosh(\tau)\, d\tau}{(1+e^{-\pi\sinh(\tau)})(1+e^{\pi\sinh(\tau)})^{n-\alpha}} = \int_{-\infty}^{\infty} \frac{d\tau}{(1+e^{-\tau})(1+e^{\tau})^{n-\alpha}} = \frac{1}{n-\alpha}. $$
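Identity (ii) is easy to check numerically: both integrands decay (double-)exponentially, so a plain trapezoidal sum on a finite window is already extremely accurate. The sketch below (our own illustration, with $n - \alpha = 1/2$, so the exact value is $2$) confirms that the SE and DE forms of the integral agree with $1/(n-\alpha)$.

```python
import math

# Numerical check of identity (ii) for n - alpha = 1/2: both forms of
# int Theta(tau) dtau should equal 1/(n - alpha) = 2.

def trapz(f, lo, hi, n):
    h = (hi - lo) / n
    s = 0.5 * (f(lo) + f(hi)) + sum(f(lo + i * h) for i in range(1, n))
    return s * h

na = 0.5
theta_se = lambda t: 1.0 / ((1.0 + math.exp(-t)) * (1.0 + math.exp(t)) ** na)

def theta_de(t):
    s = math.pi * math.sinh(t)
    return math.pi * math.cosh(t) / ((1.0 + math.exp(-s))
                                     * (1.0 + math.exp(s)) ** na)

print(trapz(theta_se, -60.0, 60.0, 24000))   # close to 2
print(trapz(theta_de, -4.0, 4.0, 8000))      # close to 2
```

The trapezoidal rule applied to an analytic, exponentially decaying integrand on the real line converges exponentially in the step size, which is exactly the mechanism behind Sinc quadrature.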


We now state and prove the following theorem; its proof relies on Appendix 2 and on (5.18).

Theorem 5.2 Assume that $y \in L_{\beta}(D)$ for all $t \in [a, b]$ and $0 < d < \frac{\pi}{2}$. Let $n = \lceil \alpha \rceil$, $g(t) = 1/(\phi'_{a,b}(t))^{n}$, $H(t) = (t-a)^{\alpha} g(t)$, $\mu = \min\{n-\alpha, \beta\}$, let $M$ be a positive integer, and let $h^{SE}$ and $h^{DE}$ be selected as $h^{SE} = \sqrt{\dfrac{\pi d}{\mu M}}$ and $h^{DE} = \dfrac{\log(2dM/\mu)}{M}$, respectively. Then
$$ \max_{a \leq t \leq b} \Bigl| H(t)\, {}_{a}D_{t}^{\alpha}(y(t)) - H(t)\, \frac{d^{n}}{dt^{n}} I_{M}(C_{M}[y](t)) \Bigr| \leq k^{*}(M)\, \exp(-\tilde{M}), $$
where for the SE-DE-Sinc method $k^{*}(M) = K_{5}\, M^{\frac{n+1}{2}}$ and $\tilde{M} = \sqrt{\pi d \mu M}$; for the SE-Sinc method $k^{*}(M) = K_{6}\, M^{\frac{n+1}{2}}$ and $\tilde{M} = \sqrt{\pi d \mu M}$; and for the DE-Sinc method $k^{*}(M) = K_{7}\, \dfrac{M^{n+1}}{\log(2dM/\mu)^{n+1}}$ and $\tilde{M} = \dfrac{\pi d M}{\log(2dM/\mu)}$.

Proof We prove Theorem 5.2 for the SE-DE-Sinc method only; the error bounds for the SE-Sinc and DE-Sinc methods follow in a similar manner. Consider
$$ |(t-a)^{\alpha}|\, \Bigl| g^{SE}(t)\, {}_{a}D_{t}^{\alpha}(y(t)) - g^{SE}(t)\, \frac{d^{n}}{dt^{n}} I_{M}^{DE}(C_{M}^{SE}[y](t)) \Bigr|. $$
Using identities (i) and (ii) and proceeding in a similar manner as in Sect. 5.3, we have
$$ |(t-a)^{\alpha}|\, \Bigl| g^{SE}(t)\, \frac{d^{n}}{dt^{n}}\, {}_{a}I_{t}^{\,n-\alpha}[y](t) - g^{SE}(t)\, \frac{d^{n}}{dt^{n}} I_{M}^{DE}(C_{M}^{SE}[y](t)) \Bigr| $$
$$ = |(t-a)^{\alpha}|\, \Biggl| \frac{g^{SE}(t)}{\Gamma(n-\alpha)}\, \frac{d^{n}}{dt^{n}} \int_{-\infty}^{\infty} \Theta^{DE}(\tau)\, (t-a)^{n-\alpha}\, y(\psi^{DE}_{a,t}(\tau))\, d\tau - h^{DE} \sum_{k=-M}^{M} \sum_{j=-M}^{M} c_{j}^{SE}\, \Theta^{DE}(kh^{DE})\, g^{SE}(t)\, \frac{d^{n}}{dt^{n}} \Bigl[ (t-a)^{n-\alpha}\, S(j, h^{SE}) \circ \varphi^{SE\text{-}DE}_{a,k}(t) \Bigr] \Biggr| $$
$$ \leq \sum_{\ell=0}^{n} c_{\ell}\, (b-a)^{\ell-\alpha+\alpha_{\nu}} \Biggl[ \Biggl| \int_{-\infty}^{\infty} \Theta^{DE}(\tau)\, g^{SE}(t)\, \frac{d^{\ell}}{dt^{\ell}} \Bigl( y(\psi^{DE}_{a,t}(\tau)) - \sum_{j=-M}^{M} c_{j}^{SE}\, S(j, h^{SE}) \circ \varphi^{SE\text{-}DE}_{a,\tau}(t) \Bigr)\, d\tau \Biggr| $$
$$ \qquad + \Biggl| \int_{-\infty}^{\infty} \Theta^{DE}(\tau)\, g^{SE}(t) \sum_{j=-M}^{M} c_{j}^{SE}\, \frac{d^{\ell}}{dt^{\ell}} S(j, h^{SE}) \circ \varphi^{SE\text{-}DE}_{a,\tau}(t)\, d\tau - h^{DE} \sum_{k=-M}^{M} \sum_{j=-M}^{M} c_{j}^{SE}\, \Theta^{DE}(kh^{DE})\, g^{SE}(t)\, \frac{d^{\ell}}{dt^{\ell}} S(j, h^{SE}) \circ \varphi^{SE\text{-}DE}_{a,k}(t) \Biggr| \Biggr], $$
where $\varphi^{SE\text{-}DE}_{a,\tau}(t) = \phi^{SE}_{a,b}(\psi^{DE}_{a,t}(\tau))$ and $\varphi^{SE\text{-}DE}_{a,k}(t) = \phi^{SE}_{a,b}(\psi^{DE}_{a,t}(kh^{DE}))$. Applying (5.18) and (5.32), we bound the above expression as follows:
$$ \leq \sum_{\ell=0}^{n} c_{\ell}\, (b-a)^{\ell-\alpha+\alpha_{\nu}} \Biggl[ K_{3}\, M^{\frac{\ell+1}{2}}\, e^{-\sqrt{\pi d \mu M}} \Biggl| \int_{-\infty}^{\infty} \Theta^{DE}(\tau)\, d\tau \Biggr| + K_{13}\, \exp\Bigl( \frac{-\pi d M}{\log(2dM/\mu)} \Bigr) \Biggr] $$
$$ \leq \sum_{\ell=0}^{n} c_{\ell}\, (b-a)^{\ell-\alpha+\alpha_{\nu}} \Biggl[ \frac{K_{3}}{n-\alpha}\, M^{\frac{\ell+1}{2}}\, e^{-\sqrt{\pi d \mu M}} + K_{13}\, \exp\Bigl( \frac{-\pi d M}{\log(2dM/\mu)} \Bigr) \Biggr] \leq K_{5}\, M^{\frac{n+1}{2}}\, e^{-\sqrt{\pi d \mu M}}, $$
where
$$ K_{5} = \Biggl( \frac{K_{3}}{n-\alpha} + K_{13}\, \frac{\exp\bigl( -\pi d M / \log(2dM/\mu) \bigr)}{\exp\bigl( -\sqrt{\pi d \mu M} \bigr)\, M^{\frac{n+1}{2}}} \Biggr) \sum_{\ell=0}^{n} c_{\ell}\, (b-a)^{\ell-\alpha+\alpha_{\nu}}. $$
Similarly, we can obtain the constants $K_{6}$ and $K_{7}$ as follows:
$$ K_{6} = \Biggl( \frac{K_{3}}{n-\alpha} + \frac{K_{11}}{M^{\frac{n+1}{2}}} \Biggr) \sum_{\ell=0}^{n} c_{\ell}\, (b-a)^{\ell-\alpha+\alpha_{\nu}}, \qquad K_{7} = \Biggl( \frac{K_{4}}{n-\alpha} + K_{13}\, \frac{\log(2dM/\mu)^{n+1}}{M^{n+1}} \Biggr) \sum_{\ell=0}^{n} c_{\ell}\, (b-a)^{\ell-\alpha+\alpha_{\nu}}. $$

Note that $K_{5} < K_{6}$, so we can conclude that the SE-DE-Sinc method is better than the SE-Sinc method, and the proof is completed.


5.5.1 Error Analysis of Sinc Collocation Methods

Here we study the error analysis of the Sinc collocation methods.

Theorem 5.3 Let $y_{M}(t)$ be the approximate solution of Eq. (5.12). Suppose that all conditions of Theorem 5.2 are fulfilled, and that $Q(t, y)$ satisfies a Lipschitz condition with respect to the second variable, i.e. there exists a constant $L > 0$ such that
$$ |Q(t, y_{1}) - Q(t, y_{2})| \leq L\, |y_{1} - y_{2}|. $$
Then
$$ \max_{a \leq t \leq b} |y(t) - y_{M}(t)| \leq c(M)\, \exp(-\tilde{M}), $$
where for the SE-Sinc and SE-DE-Sinc collocation methods $c(M) = C_{1}\, M^{\frac{n_{\nu}+1}{2}}$ and $\tilde{M} = \sqrt{\pi d \mu M}$, and for the DE-Sinc collocation method $c(M) = C_{2}\, \dfrac{M^{n_{\nu}+1}}{\log(2dM/\mu)^{n_{\nu}+1}}$ and $\tilde{M} = \dfrac{\pi d M}{\log(2dM/\mu)}$, with positive constants $C_{1}, C_{2} > 0$.

Proof Assume that $Q(t, y(t))$ is a nonlinear function of $y(t)$; then we can rewrite Eq. (5.12) in the form
$$ {}_{a}D_{t}^{\alpha_{\nu}} y(t) = \sum_{i=1}^{\nu-1} p_{i}(t)\, {}_{a}D_{t}^{\alpha_{i}} y(t) + Q(t, y(t)). \qquad (5.19) $$
Following [1, Lemma 5.2], Eq. (5.19) is equivalent to the Volterra integral equation
$$ y(t) = \frac{1}{\Gamma(\alpha_{\nu})} \int_{a}^{t} (t-s)^{\alpha_{\nu}-1} \sum_{i=1}^{\nu-1} \frac{p_{i}(s)}{H(s)}\, H(s)\, {}_{a}D_{s}^{\alpha_{i}} y(s)\, ds + \frac{1}{\Gamma(\alpha_{\nu})} \int_{a}^{t} (t-s)^{\alpha_{\nu}-1}\, Q(s, y(s))\, ds + \sum_{j=1}^{n_{\nu}} \frac{y^{(\alpha_{\nu}-j)}(a)}{\Gamma(\alpha_{\nu}-j+1)}\, (t-a)^{\alpha_{\nu}-j}, $$
and we have
$$ y_{M}(t) = \frac{1}{\Gamma(\alpha_{\nu})} \int_{a}^{t} (t-s)^{\alpha_{\nu}-1} \sum_{i=1}^{\nu-1} \frac{p_{i}(s)}{H(s)}\, H(s)\, \frac{d^{n_{i}}}{ds^{n_{i}}} I_{M}(C_{M}[y](s))\, ds + \frac{1}{\Gamma(\alpha_{\nu})} \int_{a}^{t} (t-s)^{\alpha_{\nu}-1}\, Q(s, y_{M})\, ds + \sum_{j=1}^{n_{\nu}} \frac{y^{(\alpha_{\nu}-j)}(a)}{\Gamma(\alpha_{\nu}-j+1)}\, (t-a)^{\alpha_{\nu}-j}. $$
Then
$$ e_{M}(t) = \max_{a \leq t \leq b} |y(t) - y_{M}(t)| \leq \frac{1}{\Gamma(\alpha_{\nu})} \Biggl| \int_{a}^{t} (t-s)^{\alpha_{\nu}-1} \sum_{i=1}^{\nu-1} \frac{p_{i}(s)}{H(s)} \Bigl( H(s)\, {}_{a}D_{s}^{\alpha_{i}} y(s) - H(s)\, \frac{d^{n_{i}}}{ds^{n_{i}}} I_{M}(C_{M}[y](s)) \Bigr)\, ds \Biggr| + \frac{1}{\Gamma(\alpha_{\nu})} \Biggl| \int_{a}^{t} (t-s)^{\alpha_{\nu}-1} \bigl( Q(s, y(s)) - Q(s, y_{M}(s)) \bigr)\, ds \Biggr|. \qquad (5.20) $$


By applying Theorem 5.2 and the Lipschitz condition on $Q(t, y(t))$ with respect to the second variable in (5.20), it follows that
$$ e_{M}(t) = \max_{a \leq t \leq b} |y(t) - y_{M}(t)| \leq \frac{\gamma}{\Gamma(\alpha_{\nu})} \sum_{i=1}^{\nu-1} k_{i}(M)\, e^{-\tilde{M}} + \frac{L K_{8}}{\Gamma(\alpha_{\nu})} \int_{a}^{t} e_{M}(s)\, ds \leq \frac{\gamma}{\Gamma(\alpha_{\nu})}\, (\nu-1)\, k_{\nu}(M)\, e^{-\tilde{M}} + K_{9} \int_{a}^{t} e_{M}(s)\, ds, \qquad (5.21) $$
where $\gamma_{i} = \sup_{a \leq t \leq b} \Bigl| \dfrac{p_{i}(t)}{H^{SE}(t)} \Bigr|$, $i = 1, \dots, \nu$, and $\gamma = \max\{\gamma_{1}, \dots, \gamma_{\nu}\}$. Now, by Gronwall's Lemma 5.6 we can rewrite relation (5.21) as
$$ e_{M}(t) \leq \exp\Bigl( \int_{a}^{t} K_{9}\, ds \Bigr)\, \frac{\gamma}{\Gamma(\alpha_{\nu})}\, (\nu-1)\, k_{\nu}(M)\, e^{-\tilde{M}} \leq c(M)\, \exp(-\tilde{M}), $$
where $c(M) = \dfrac{\gamma}{\Gamma(\alpha_{\nu})}\, (\nu-1)\, k_{\nu}(M)\, e^{K_{9}(b-a)}$. According to Theorem 5.2, in the cases of the SE-DE-Sinc and SE-Sinc collocation methods, setting $K' = K_{5}$ and $K' = K_{6}$, respectively, we have $\tilde{M} = \sqrt{\pi d \mu M}$ and
$$ c(M) = \frac{\gamma}{\Gamma(\alpha_{\nu})}\, (\nu-1)\, e^{K_{9}(b-a)}\, K'\, M^{\frac{n_{\nu}+1}{2}} = C_{1}\, M^{\frac{n_{\nu}+1}{2}}, $$
and in the case of the DE-Sinc collocation method we have $\tilde{M} = \dfrac{\pi d M}{\log(2dM/\mu)}$ and
$$ c(M) = \frac{\gamma}{\Gamma(\alpha_{\nu})}\, (\nu-1)\, e^{K_{9}(b-a)}\, K_{7}\, \frac{M^{n_{\nu}+1}}{\log(2dM/\mu)^{n_{\nu}+1}} = C_{2}\, \frac{M^{n_{\nu}+1}}{\log(2dM/\mu)^{n_{\nu}+1}}, $$
which completes the proof.

5.6 Illustrative Numerical Results

In this section we test the numerical methods on different types of examples. Throughout this section we take $d_{0} = 3.14$.

Example 5.1 Consider the following linear initial value problem [7, 10, 11]:
$$ D^{\alpha} y(x) + y(x) = 0, \qquad 0 < \alpha < 2, \qquad y(0) = 1, \quad y'(0) = 0. \qquad (5.22) $$


The second initial condition applies for $\alpha > 1$ only. The exact solution of this problem is
$$ y(x) = \sum_{k=0}^{\infty} \frac{(-x^{\alpha})^{k}}{\Gamma(\alpha k + 1)}. $$
Considering the relationship between the Caputo and Riemann-Liouville senses of the fractional derivative presented in Eq. (5.3), and applying the initial conditions, we have
$$ {}^{C}_{\,0}D_{t}^{\alpha} y(t) = {}^{RL}_{\;0}D_{t}^{\alpha} y(t) - \frac{t^{-\alpha}}{\Gamma(1-\alpha)}, \qquad 0 < \alpha < 2, $$
so we can rewrite Eq. (5.22) as
$$ {}^{RL}_{\;0}D_{t}^{\alpha} y(t) + y(t) = \frac{t^{-\alpha}}{\Gamma(1-\alpha)}, \qquad 0 < \alpha < 2, \qquad (5.23) $$
where ${}^{RL}_{\;0}D_{t}^{\alpha} y(t)$ and ${}^{C}_{\,0}D_{t}^{\alpha} y(t)$ are defined in (5.1) and (5.2), respectively. We solve problem (5.23) by applying the SE-Sinc collocation method described above, taking $\mu = 1-\alpha$ for $0 < \alpha < 1$, $\mu = 2-\alpha$ for $1 < \alpha < 2$, and $d = d_{0}/4$. As shown in Table 5.1, the absolute errors for different values of $M$ are compared with the methods presented in [7, 10, 11]. Figure 5.1 depicts the comparison of the SE-Sinc and SE-DE-Sinc collocation methods for $\alpha = 0.5, 0.8, 1.4$ and different values of $M$, with $d = d_{0}/2$ for $h^{SE}$ and $h^{DE}$. The numerical results reveal good agreement with the exact solution.
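The exact solution series of Example 5.1 is the Mittag-Leffler function $E_{\alpha}(-x^{\alpha})$, and it can be evaluated directly. The sketch below (our own illustration) truncates the series at a fixed number of terms; the classical limiting cases $\alpha = 1$ (giving $e^{-x}$) and $\alpha = 2$ (giving $\cos x$) serve as sanity checks.

```python
import math

# Exact solution of Example 5.1:
#   y(x) = sum_{k>=0} (-x^alpha)^k / Gamma(alpha*k + 1) = E_alpha(-x^alpha).
# The series converges rapidly for moderate x; 60 terms are more than enough.

def y_exact(alpha, x, terms=60):
    return sum((-x ** alpha) ** k / math.gamma(alpha * k + 1.0)
               for k in range(terms))

print(y_exact(1.0, 0.7), math.exp(-0.7))   # alpha = 1: matches e^{-x}
print(y_exact(2.0, 0.7), math.cos(0.7))    # alpha = 2: matches cos(x)
```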

Table 5.1 Comparison of the maximum absolute error for Example 5.1

  α      SE-Sinc (M = 32)   SE-Sinc (M = 128)   Ref. [7]       Ref. [10]       Ref. [11]
  0.2    1.587 × 10^-4      1.571 × 10^-7       7.4 × 10^-1    6.14 × 10^-2    5.3 × 10^-3
  0.4    5.956 × 10^-4      6.796 × 10^-7       7.3 × 10^-1    −               1.9 × 10^-3
  0.5    1.304 × 10^-3      2.588 × 10^-6       −              5.60 × 10^-3    −
  0.6    2.839 × 10^-3      1.075 × 10^-5       6.7 × 10^-3    −               1.5 × 10^-3
  0.8    2.798 × 10^-2      3.238 × 10^-4       1.2 × 10^-3    8.00 × 10^-4    1.0 × 10^-3
  1.2    5.567 × 10^-4      2.602 × 10^-7       4.5 × 10^-3    3.09 × 10^-3    2.5 × 10^-3
  1.4    1.378 × 10^-3      1.502 × 10^-6       1.3 × 10^-3    −               2.4 × 10^-3
  1.5    2.024 × 10^-3      4.358 × 10^-6       −              2.03 × 10^-3    −
  1.6    3.971 × 10^-3      1.345 × 10^-5       3.1 × 10^-4    −               −

[Figure 5.1 shows log-scale plots of the absolute error versus M for the SE-Sinc and SE-DE-Sinc collocation methods; the figure itself is not reproduced here.]

Fig. 5.1 A comparison of the SE-Sinc and SE-DE-Sinc collocation methods for Example 5.1 and α = 0.5 (top left), α = 0.8 (top right) and α = 1.4 (bottom)

Example 5.2 Consider the equation [12]

$${}^{C}_{0}D_t^{\alpha} y(t) + 2y(t) = 2\cos(\pi t) + \frac{t^{-\alpha}}{2\Gamma(1-\alpha)}\left({}_1F_1(1; 1-\alpha; i\pi t) + {}_1F_1(1; 1-\alpha; -i\pi t) - 2\right), \tag{5.24}$$

where 0 < α < 1, 0 < t < 1, y(0) = 1, and ${}_pF_q(a; b; z)$ is the generalized hypergeometric function; the exact solution is y(t) = cos(πt). From the initial condition and Eq. (5.3) we have

$${}^{C}_{0}D_t^{\alpha} y(t) = {}^{RL}_{0}D_t^{\alpha} y(t) - \frac{t^{-\alpha}}{\Gamma(1-\alpha)},$$

so we can rewrite Eq. (5.24) as

$${}^{RL}_{0}D_t^{\alpha} y(t) + 2y(t) = 2\cos(\pi t) + \frac{t^{-\alpha}}{2\Gamma(1-\alpha)}\left({}_1F_1(1; 1-\alpha; i\pi t) + {}_1F_1(1; 1-\alpha; -i\pi t)\right),$$
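The right-hand side of (5.24) involves the confluent hypergeometric function ${}_1F_1$. A minimal power-series sketch (the function name and truncation length are illustrative; for complex arguments such as iπt the same loop works unchanged with a complex z):

```python
import math

def hyp1f1(a, b, z, terms=60):
    # 1F1(a; b; z) = sum_k (a)_k / (b)_k * z**k / k!,
    # accumulated with a running term: term *= (a+k)/(b+k) * z/(k+1).
    total, term = 0.0, 1.0
    for k in range(terms):
        total += term
        term *= (a + k) / (b + k) * z / (k + 1)
    return total
```

Sanity checks: ${}_1F_1(1; 1; z) = e^z$ and ${}_1F_1(1; 2; z) = (e^z - 1)/z$.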

[Figure 5.2: semi-log plots of the absolute error versus M for the SE-Sinc, DE-Sinc, and SE-DE-Sinc collocation methods.]

Fig. 5.2 A comparison of the SE-Sinc, DE-Sinc, and SE-DE-Sinc collocation methods for Example 5.2 and α = 1/2 (top left), α = 3/4 (top right), and α = 9/10 (bottom)

where 0 < α < 1. The results obtained by the SE-Sinc, DE-Sinc, and SE-DE-Sinc collocation methods are tabulated in Table 5.2, with M = 64, d = d20 for hSE, d = d40 for hDE, μ = 1 − α, and α = 1/4, 1/2, 3/4, 9/10. Figure 5.2 depicts a comparison of the absolute errors for α = 1/2, 3/4, and 9/10 obtained by the SE-, DE-, and SE-DE-Sinc collocation methods for different values of M.

Table 5.2 Maximum absolute errors for Example 5.2

| α | SE-Sinc collocation | DE-Sinc collocation | SE-DE-Sinc collocation |
|---|---|---|---|
| 1/4 | 1.499 × 10^-7 | 1.702 × 10^-5 | 1.169 × 10^-10 |
| 1/2 | 2.611 × 10^-6 | 1.743 × 10^-5 | 1.227 × 10^-7 |
| 3/4 | 1.180 × 10^-4 | 1.192 × 10^-5 | 2.242 × 10^-6 |
| 9/10 | 3.665 × 10^-3 | 8.432 × 10^-6 | 7.586 × 10^-5 |


Example 5.3 Consider the following nonlinear initial value problem [7, 11, 17]:

$${}^{C}_{0}D_t^{\alpha} y(t) = \frac{40320}{\Gamma(9-\alpha)}\, t^{8-\alpha} - 3\,\frac{\Gamma(5+\alpha/2)}{\Gamma(5-\alpha/2)}\, t^{4-\alpha/2} + \frac{9}{4}\,\Gamma(\alpha+1) + \left(\tfrac{3}{2}\, t^{\alpha/2} - t^{4}\right)^{3} - y^{3/2}(t),$$

$$y(0) = 0, \quad y'(0) = 0, \qquad 0 < \alpha < 2,$$

where the second initial condition applies for α > 1 only. The exact solution of this model is

$$y(t) = t^{8} - 3\, t^{4+\alpha/2} + \frac{9}{4}\, t^{\alpha}.$$

According to the initial values and Eq. (5.3), we have ${}^{C}_{0}D_t^{\alpha} y(t) = {}^{RL}_{0}D_t^{\alpha} y(t)$. We applied the SE-Sinc, DE-Sinc, and SE-DE-Sinc collocation methods for different values of M, with these parameters: d = d40 for hSE, d = d80 for hDE, μ = 1 − α for 0 < α < 1, and μ = 2 − α for 1 < α < 2. The results obtained by the presented methods are tabulated and compared with the results of [11] in Table 5.3.

Example 5.4 In the last example, we consider the following initial value problem, the inhomogeneous Bagley–Torvik equation [7]:

$$y''(t) + {}^{C}_{0}D_t^{3/2} y(t) + y(t) = 1 + t, \qquad y(0) = 1, \quad y'(0) = 1, \tag{5.25}$$

which arises in modeling the motion of a rigid plate immersed in a Newtonian fluid. The exact solution of this problem is y(t) = 1 + t.

Table 5.3 A comparison of the maximum absolute error for Example 5.3

| M | SE-DE-Sinc (α = 0.5) | [11] (α = 0.5) | DE-Sinc (α = 0.75) | [11] (α = 0.75) | SE-Sinc (α = 1.5) | [11] (α = 1.5) |
|---|---|---|---|---|---|---|
| 4 | 1.241 × 10^-1 | − | 9.550 × 10^-2 | − | − | − |
| 8 | 1.742 × 10^-2 | − | 9.327 × 10^-3 | − | − | − |
| 16 | 1.333 × 10^-3 | − | 1.748 × 10^-4 | − | 1.751 × 10^-1 | − |
| 32 | 3.417 × 10^-5 | 9.1 × 10^-2 | 1.419 × 10^-5 | 2.8 × 10^-2 | 4.218 × 10^-2 | 6.5 × 10^-2 |
| 64 | 1.942 × 10^-7 | 6.3 × 10^-2 | 2.308 × 10^-6 | 9.2 × 10^-3 | 3.625 × 10^-4 | 3.2 × 10^-2 |
| 128 | 9.686 × 10^-11 | 3.2 × 10^-2 | 4.506 × 10^-10 | 6.5 × 10^-3 | 5.517 × 10^-6 | 9.9 × 10^-3 |


Table 5.4 Maximum absolute errors of Example 5.4

| Method | M = 8 | M = 16 | M = 32 | M = 64 |
|---|---|---|---|---|
| SE-Sinc collocation | 2.441 × 10^-3 | 4.368 × 10^-5 | 3.254 × 10^-6 | 1.711 × 10^-8 |
| SE-DE-Sinc collocation | 1.373 × 10^-6 | 9.279 × 10^-12 | 2.973 × 10^-22 | − |

[Figure 5.3: semi-log plot of the absolute error versus M for the SE-Sinc and SE-DE-Sinc collocation methods.]

Fig. 5.3 A comparison of the SE-Sinc and SE-DE-Sinc collocation methods for Example 5.4

According to the initial values and relation (5.3), we obtain

$$y''(t) + {}^{RL}_{0}D_t^{3/2} y(t) + y(t) = 1 + t - \frac{1}{2\sqrt{\pi t^{3}}} + \frac{1}{\sqrt{\pi t}}, \tag{5.26}$$

For the numerical solution of Eq. (5.26), we use the SE-Sinc and SE-DE-Sinc collocation methods with the parameters d = d0 for hSE, d = d20 for hDE, and μ = 1/2. The absolute errors are shown in Table 5.4, and a comparison of the SE-Sinc and SE-DE-Sinc collocation methods is shown in Fig. 5.3. Table 5.4 shows that the SE-Sinc and SE-DE-Sinc collocation methods recover the exact solution of problem (5.26) to high accuracy.

5.7 Conclusion

In this study, we investigated and developed numerical solutions of multi-order fractional differential equations. Convergence analyses of the SE-Sinc, DE-Sinc, and SE-DE-Sinc methods were presented; in all methods the convergence is exponential. Furthermore, we defined and proposed a new Sinc method, named the SE-DE-Sinc method, and in practice this study reveals that it is more accurate than the SE-Sinc and DE-Sinc methods, and in particular than the existing methods in the literature.

Appendix 1

Here we mention some theorems from [1] that establish the existence and uniqueness of the solution of problem (5.4)–(5.5).

Theorem 5.4 Consider the equation

$${}^{C}_{0}D_t^{\alpha_v} y(t) = f\left(t, y(t), {}^{C}_{0}D_t^{\alpha_1} y(t), \ldots, {}^{C}_{0}D_t^{\alpha_{v-1}} y(t)\right) \tag{5.27}$$

subject to the initial conditions

$$y^{(k)}(0) = y_0^{(k)}, \qquad k = 0, 1, \ldots, \lceil \alpha_v \rceil - 1, \tag{5.28}$$

where $\alpha_v > \alpha_{v-1} > \cdots > \alpha_1 > 0$, $\alpha_k - \alpha_{k-1} \le 1$ for all $k = 2, 3, \ldots, v$, and $0 < \alpha_1 \le 1$. Assume that $\alpha_k \in \mathbb{Q}$ for all $k = 1, 2, \ldots, v$, define $J$ to be the least common multiple of the denominators of $\alpha_1, \alpha_2, \ldots, \alpha_v$, and set $\gamma = 1/J$ and $L = J\alpha_v$. Then this initial value problem is equivalent to the system of equations

$$\begin{aligned}
{}^{C}_{0}D_t^{\gamma} y_0(t) &= y_1(t),\\
{}^{C}_{0}D_t^{\gamma} y_1(t) &= y_2(t),\\
&\;\;\vdots\\
{}^{C}_{0}D_t^{\gamma} y_{L-2}(t) &= y_{L-1}(t),\\
{}^{C}_{0}D_t^{\gamma} y_{L-1}(t) &= f\left(t, y_0(t), y_{\alpha_1/\gamma}(t), \ldots, y_{\alpha_{v-1}/\gamma}(t)\right),
\end{aligned} \tag{5.29}$$

together with the initial conditions

$$y_k(0) = \begin{cases} y_0^{(k/J)}, & \text{if } k/J \in \mathbb{N}_0,\\ 0, & \text{else,} \end{cases} \tag{5.30}$$

in the following sense:

1. Whenever $Y = (y_0, \ldots, y_{L-1})^T$ with $y_0 \in C^{\lceil \alpha_v \rceil}[0, b]$ for some $b > 0$ is the solution of the system (5.29)–(5.30), the function $y := y_0$ solves the multi-term initial value problem (5.27)–(5.28).
2. Whenever $y \in C^{\lceil \alpha_v \rceil}[0, b]$ is a solution of the multi-term initial value problem (5.27)–(5.28), the vector function $Y = (y_0, \ldots, y_{L-1})^T = \left(y, {}^{C}_{0}D_t^{\gamma} y, {}^{C}_{0}D_t^{2\gamma} y, \ldots, {}^{C}_{0}D_t^{(L-1)\gamma} y\right)^T$ solves the multidimensional initial value problem (5.29)–(5.30).

Theorem 5.5 Assume the hypotheses of Theorem 5.4. Moreover, let $K > 0$, $h^* > 0$, and $G = [0, h^*] \times [y_0^{(0)} - K, y_0^{(0)} + K] \times \prod_{k=1}^{v-1} T_k$, where $T_k = [-K, K]$ if $\alpha_k \notin \mathbb{N}_0$ and $T_k = [y_0^{(\alpha_k)} - K, y_0^{(\alpha_k)} + K]$ else. If $f : G \to (-\infty, \infty)$ is continuous, then the multi-term initial value problem (5.27)–(5.28) has a solution on the interval $[0, h]$ for some $h > 0$.

Theorem 5.6 Assume the hypotheses of Theorem 5.4, and define the set $G$ as in Theorem 5.5. If $f : G \to (-\infty, \infty)$ is continuous and satisfies a Lipschitz condition with respect to all variables except the first, then there exists some $h > 0$ such that the multi-term initial value problem (5.27)–(5.28) has a unique solution on the interval $[0, h]$.

Appendix 2

Preliminary theorems and lemmas required in Sect. 5.5.

Lemma 5.4 ([18, Lemmas 4.5 and 4.17]) Let f be analytic and bounded on $\psi^{SE}_{a,b}(D_d)$ and $\psi^{DE}_{a,b}(D_d)$ for d with $0 < d < \pi$ for the SE transformation and $0 < d < \pi/2$ for the DE transformation. Then f is analytic and bounded uniformly on $\psi^{SE}_{a,t}(D_d)$ and $\psi^{DE}_{a,t}(D_d)$ for all $t \in [a, b]$.

Theorem 5.7 ([18]) Let $(f/\phi^{SE}) \in L_\beta(\psi^{SE}_{a,b}(D_d))$ for d with $0 < d < \pi$. Let M be a positive integer, and let h be selected by $h = \sqrt{2\pi d/(\beta M)}$. Then there exists a constant $K_{10} > 0$ independent of M such that

$$\left| \int_a^b f(s)\,ds - h \sum_{k=-M}^{M} f\left(\psi^{SE}_{a,b}(kh)\right)\left(\psi^{SE}_{a,b}\right)'(kh) \right| \le K_{10}\, e^{-\sqrt{2\pi d \beta M}}.$$

Now put $f(s) = \dfrac{y(s)}{(t-s)^{\alpha-n+1}}$ and assume that y is analytic and bounded uniformly on $\psi^{SE}_{a,t}(D_d)$ for all $t \in [a, b]$; then $(f/\phi^{SE}_{a,t}) \in L_{n-\alpha}(\psi^{SE}_{a,t}(D_d))$. Furthermore, taking $\mu = \min\{n-\alpha, \beta\}$, we have $(f/\phi^{SE}_{a,t}) \in L_\mu(\psi^{SE}_{a,t}(D_d))$. Then there exist constants $0 < d < \pi$ and $K_{11} > 0$ independent of M such that

$$\max_{a \le t \le b} \left| {}_a I_t^{n-\alpha}(y(t)) - I_M^{SE}[y](t) \right| \le K_{11}\, e^{-\sqrt{\pi d \mu M}}, \tag{5.31}$$

where h is selected by $h = \sqrt{\pi d/(\mu M)}$.
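The quadrature rule of Theorem 5.7 can be exercised numerically. The sketch below uses the standard tanh (SE) map of the real line onto (a, b); the function name, the test integrand, and the particular choices of d and β are illustrative assumptions, not from the text:

```python
import math

def se_quad(f, a, b, M, d, beta):
    # SE (single-exponential, tanh) Sinc quadrature of f over (a, b):
    # psi maps the real line onto (a, b); h as in Theorem 5.7.
    h = math.sqrt(2.0 * math.pi * d / (beta * M))
    total = 0.0
    for k in range(-M, M + 1):
        x = k * h
        t = 1.0 / (1.0 + math.exp(-x))   # logistic form of (tanh(x/2) + 1)/2
        psi = a + (b - a) * t
        dpsi = (b - a) * t * (1.0 - t)   # psi'(x)
        total += f(psi) * dpsi
    return h * total
```

For example, with f(s) = s(1 − s) on (0, 1), β = 2, d = π/2, and M = 40, the rule reproduces ∫₀¹ s(1 − s) ds = 1/6 to at least nine digits.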

Theorem 5.8 ([18]) If $(f/\phi^{DE}) \in L_\beta(\psi^{DE}_{a,b}(D_d))$ for d with $0 < d < \pi/2$, M is a positive integer, and h is selected by $h = \dfrac{\log(2dM/\beta)}{M}$, then there exists a constant $K_{12} > 0$ independent of M such that

$$\left| \int_a^b f(s)\,ds - h \sum_{k=-M}^{M} f\left(\psi^{DE}_{a,b}(kh)\right)\left(\psi^{DE}_{a,b}\right)'(kh) \right| \le K_{12} \exp\!\left( \frac{-\pi d M}{\log(2dM/\beta)} \right).$$

Analogously to the SE-Sinc quadrature method, we have the following result.

Lemma 5.5 Assume that y is analytic and bounded uniformly on $\psi^{DE}_{a,t}(D_d)$ for d with $0 < d < \pi/2$ and for all $t \in [a, b]$. Let $\mu = \min\{n-\alpha, \beta\}$, let M be a positive integer, and let h be selected by $h = \dfrac{\log(2dM/\mu)}{M}$. Then there exists a constant $K_{13} > 0$ independent of M such that

$$\max_{a \le t \le b} \left| {}_a I_t^{n-\alpha}(y(t)) - I_M^{DE}[y](t) \right| \le K_{13} \exp\!\left( \frac{-\pi d M}{\log(2dM/\mu)} \right). \tag{5.32}$$
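The DE (tanh-sinh) rule of Theorem 5.8 can be sketched the same way on (0, 1); the parameter choices below are illustrative, and a logistic form of $\psi^{DE}$ is used so the integrand is never evaluated exactly at the endpoints:

```python
import math

def de_quad(f, M, d, beta):
    # DE (double-exponential, tanh-sinh) Sinc quadrature of f over (0, 1),
    # with h selected as in Theorem 5.8.
    h = math.log(2.0 * d * M / beta) / M
    total = 0.0
    for k in range(-M, M + 1):
        x = k * h
        u = math.pi * math.sinh(x)
        t = 1.0 / (1.0 + math.exp(-u))            # psi_DE(x) on (0, 1)
        one_minus_t = 1.0 / (1.0 + math.exp(u))   # 1 - psi_DE(x), computed stably
        dpsi = math.pi * math.cosh(x) * t * one_minus_t
        total += f(t) * dpsi
    return h * total
```

With the endpoint-singular integrand f(s) = 1/√s, M = 30, d = 1, and β = 1/2, the rule recovers ∫₀¹ s^{−1/2} ds = 2 to roughly the accuracy the theorem predicts.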

Lemma 5.6 (Gronwall's Lemma) Assume that $u, v, w \in C(I)$ and $w \ge 0$. If u satisfies the inequality

$$u(t) \le v(t) + \int_a^t w(s)\,u(s)\,ds, \qquad t \in I,$$

then

$$u(t) \le v(t) + \int_a^t w(s)\,v(s)\exp\!\left(\int_s^t w(r)\,dr\right) ds, \qquad t \in I.$$

In addition, if the function v is non-decreasing, then

$$u(t) \le v(t)\exp\!\left(\int_a^t w(s)\,ds\right), \qquad t \in I.$$
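A quick numerical sanity check of Gronwall's Lemma for the concrete (purely illustrative, assumed) choices w(s) = 1 and v(t) = 1 + t, taking the integral inequality with equality so that u(t) = 2e^t − 1:

```python
import math

def gronwall_demo(T=1.0, n=10000):
    # Forward-Euler integration of u(t) = v(t) + int_0^t u(s) ds with
    # v(t) = 1 + t; at every step the non-decreasing-v Gronwall bound
    # u(t) <= v(t) * exp(t) must hold.
    h = T / n
    u = 1.0          # u(0) = v(0)
    integral = 0.0
    for k in range(1, n + 1):
        integral += h * u
        t = k * h
        u = (1.0 + t) + integral
        assert u <= (1.0 + t) * math.exp(t)
    return u
```

The exact solution of the equality case is u(t) = 2e^t − 1, which the discretization reproduces to the order of the step size.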

Bibliography 1. Diethelm, K.: The Analysis of Fractional Differential Equations: An Application-Oriented Exposition Using Differential Operators of Caputo Type. Springer, Heidelberg (2010) 2. Kilbas, A.A., Srivastava, H.M., Trujillo, J.J.: Theory and Applications of Fractional Differential Equations. Elsevier, Amsterdam (2006) 3. Debnath, L.: A brief historical introduction to fractional calculus. Int. J. Math. Educ. Sci. Technol. 35(4), 487–501 (2004) 4. Diethelm, K.: An investigation of some nonclassical methods for the numerical approximation of Caputo-type fractional derivatives. Numer. Algorithms 47, 361–390 (2008) 5. Diethelm, K., Ford, N.J., Freed, A.D., Luchko, Yu.: Algorithms for the fractional calculus: a selection of numerical methods. Comput. Methods Appl. Mech. Eng. 194, 743–773 (2005)


6. Li, C., Zeng, F.: The finite difference methods for fractional ordinary differential equations. Numer. Funct. Anal. Optim. 34(2), 149–179 (2013)
7. Saadatmandi, A., Dehghan, M.: A new operational matrix for solving fractional-order differential equations. Comput. Math. Appl. 59, 1326–1336 (2010)
8. Lin, R., Liu, F.: Fractional high order methods for the nonlinear fractional ordinary differential equation. Nonlinear Anal. 66, 856–869 (2007)
9. Esmaeili, S., Shamsi, M.: A pseudo-spectral scheme for the approximate solution of a family of fractional differential equations. Commun. Nonlinear Sci. Numer. Simul. 16, 3646–3654 (2011)
10. Kazem, S.: An integral operational matrix based on Jacobi polynomials for solving fractional-order differential equations. Appl. Math. Model. 37, 1126–1136 (2013)
11. Lakestani, M., Dehghan, M., Irandoust-pakchin, S.: The construction of operational matrix of fractional derivatives using B-spline functions. Commun. Nonlinear Sci. Numer. Simul. 17, 1149–1162 (2012)
12. Ghoreishi, F., Yazdani, S.: An extension of the spectral Tau method for numerical solution of multi-order fractional differential equations with convergence analysis. Comput. Math. Appl. 61, 30–43 (2011)
13. Momani, S.: A numerical scheme for the solution of multi-order fractional differential equations. Appl. Math. Comput. 182, 761–770 (2006)
14. Daftardar-Gejji, V., Jafari, H.: Solving a multi-order fractional differential equation using Adomian decomposition. Appl. Math. Comput. 189, 541–548 (2007)
15. Yang, S., Xiao, A., Su, H.: Convergence of the variational iteration method for solving multi-order fractional differential equations. Comput. Math. Appl. 60, 2871–2879 (2010)
16. Erturk, V.S., Momani, S., Odibat, Z.: Application of generalized differential transform method to multi-order fractional differential equations. Commun. Nonlinear Sci. Numer. Simul. 13, 1642–1654 (2008)
17. Baumann, G., Stenger, F.: Fractional calculus and Sinc methods. Fract. Calc. Appl. Anal. 14, 568–622 (2011)
18. Okayama, T., Matsuo, T., Sugihara, M.: Approximate formulae for fractional derivatives by means of sinc methods. J. Concrete Appl. Math. 8, 470 (2010)
19. Zakeri, G.A., Navab, M.: Sinc collocation approximation of non-smooth solution of a nonlinear weakly singular Volterra integral equation. J. Comput. Phys. 229, 6548–6557 (2010)
20. Okayama, T., Matsuo, T., Sugihara, M.: Sinc-collocation methods for weakly singular Fredholm integral equations of the second kind. J. Comput. Appl. Math. 234, 1211–1227 (2010)
21. Riley, B.V.: The numerical solution of Volterra integral equations with nonsmooth solutions based on sinc approximation. Appl. Numer. Math. 9, 249–257 (1992)
22. Stenger, F.: Numerical Methods Based on Sinc and Analytic Functions. Springer, New York (1993)
23. Stenger, F.: Handbook of Sinc Numerical Methods. CRC Press, Boca Raton (2011)
24. Lund, J., Bowers, K.L.: Sinc Method for Quadrature and Differential Equations. SIAM, Philadelphia (1992)
25. Tanaka, K., Sugihara, M., Murota, K.: Function classes for successful DE-Sinc approximations. Math. Comput. 78, 1553–1571 (2009)
26. Tanaka, K., Sugihara, M., Murota, K., Mori, M.: Function classes for double exponential integration formulas. Numer. Math. 111, 631–655 (2009)
27. Sugihara, M., Matsuo, T.: Recent developments of the Sinc numerical methods. J. Comput. Appl. Math. 164/165, 673–689 (2004)
28. Mori, M., Sugihara, M.: The double-exponential transformation in numerical analysis. J. Comput. Appl. Math. 127, 287–296 (2001)
29. Johnson, W.P.: The curious history of Faà di Bruno's formula. Am. Math. Mon. 109(3), 217–234 (2002)

Chapter 6

Election Integrity Audits to Ensure Election Outcome Accuracy

Kathy Dopp

Abstract This chapter concerns post-election manual audits designed to limit the risk of certifying an incorrect election outcome to any desired low probability, say α = 0.05. The contributions of this article are to: (1) provide upper margin error bounds for single- or multi-ballot audit-unit election audit sampling; (2) describe Stenger's numerical method for precisely calculating uniform-probability sample sizes; (3) revise Aslam et al.'s weighted-by-error-bound sampling method; (4) provide a threshold for the amount of net margin error that triggers an expanded audit; and (5) explain why the maximum-level-of-undetectability-by-observation assumption for multi-ballot audit units requires that the runner-up be allowed to select one or more additional precincts to be audited.

Keywords Post-election audits · Election auditing · Risk-limiting election audits · Ensuring accurate election outcomes

See Appendix 2.

K. Dopp, University of Utah, Salt Lake City, UT, USA

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021. G. Baumann (ed.), New Sinc Methods of Numerical Analysis, Trends in Mathematics, https://doi.org/10.1007/978-3-030-49716-3_6

6.1 Introduction

Definitions, Assumptions, History

Risk-limiting post-election audits are vital to restoring public oversight over the integrity of a US vote-counting process that has been largely privatized. Such audits limit the risk of certifying any incorrect election outcome to a desired low probability. Risk-limiting post-election audits need not be, overall, more costly than a fixed 3% audit, because fixed-rate audits often sample more than necessary to confirm wide-margin outcomes. For example, for 99% risk-limiting audits of US House contests, the total amount of vote counts audited nationwide would be roughly equal to that of a 3% nationwide fixed-rate audit.

One of the first attempts to conduct a risk-limiting post-election audit in the US was in Cuyahoga County, Ohio in 2006.1 Since that time, scholars and citizen groups have had limited success convincing election officials and legislators to implement effective post-election audits. Early forms of risk-limiting audits have been conducted in states such as Colorado, Rhode Island, and New Mexico [23, 25, 40]. Recently, Ohio, California, and Washington provided options for counties to conduct risk-limiting audits. In 2006, the US League of Women Voters membership voted to recommend post-election auditing, and the National Institute of Standards and Technology and the US Election Assistance Commission's Technical Guidelines Development Committee recommended a variety of safeguards for voting systems in the 2007 Voluntary Voting Systems Guidelines [31, pp. 18–20, 40–41], [6, 7, 28, 35]. In January 2009, the US League of Women Voters endorsed risk-limiting audits, stating, "The number of audit units . . . should be chosen so as to ensure there is only a small predetermined chance of confirming an incorrect outcome." [44].

6.1.1 Definitions

See Table 6.1 for a list of variables used to calculate risk-limiting election audits.

An audit unit or auditable vote count is a tally of one or more votes that is publicly reported for an election contest, e.g., precinct vote counts or batches of ballots. Audit units may be single ballots if cast vote images or records (CVRs) are publicly published in a way that individual ballots may be located, identified, and compared with CVRs that anyone in the public may tally independently.

The audit sample size is the number of audit units randomly drawn for manual counting to compare with the initial publicly reported, usually electronically tallied, audit units.

Under- and over-votes are cast ballots eligible to vote in the contest that have no valid vote counted on them for a particular issue or candidate.

The margin between two candidates is the difference in the number of total votes each candidate receives.

The upper margin error bound is the largest possible margin error between a particular winning and losing candidate pair that could exist in each precinct or other audit unit, given the number of ballots cast and votes counted for each candidate. The total upper margin error bound is the sum of the upper margin error bounds over all audit units.

1 However, the Cuyahoga County auditors did not correctly analyze the discrepancies found by the audit based on their sample size design [13, 43].


Table 6.1 Variables used to calculate risk-limiting election audit sample sizes and weights

| Variable name | Variable letter | Formula |
|---|---|---|
| Ballots | b_i in each audit unit i | b = Σ_i b_i |
| Winner's reported votes | w_i in each audit unit i | w = Σ_{i=1}^{N} w_i |
| Runner-up's reported votes | r_i in each audit unit i | r = Σ_i r_i |
| Other reported votes or non-votes | o_i in each audit unit i | o = Σ_i o_i |
| Margin in number of votes | M between winner and runner-up | M = w − r |
| Percentage margin | m | |
| Margin error upper bounds for the winner/runner-up pair | u_i for audit unit i; U for the contest | u_i = b_i + w_i − r_i = 2w_i + o_i; total contest error bound U = Σ_{i=1}^{N} u_i |
| Total number of audit units (e.g., precincts) | N | |
| Maximum level of undetectability by observation | d or MLUO | a constant d, 0 < d ≤ 1 |

A fragment of the bisection routine for computing the uniform sample size S:

```
    if (cc > 0)
        A = S; a = cc;
    else
        B = S; % b = cc;
    end
end
S0 = ceil(S);
c0 = dopp(P, N, C, S0);
if (c0 >= 0)
    S = S0;
else
    S = S0 + 1;
end
```

C is the minimum number of corrupt vote counts that could cause an incorrect election outcome, where d is the assumed MLUO. This value of C is used to calculate the audit sample size, S.
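The routine above relies on a probability function dopp(P, N, C, S) whose definition falls outside this excerpt. A minimal Python sketch of the same uniform-sampling calculation, assuming the standard hypergeometric "miss" probability for sampling without replacement (the helper names below are hypothetical, not the author's):

```python
import math

def min_corrupt_units(margin, error_bounds, d):
    # Smallest number C of audit units whose d-scaled upper margin error
    # bounds could, together, overcome the reported margin.
    total = 0.0
    for count, u in enumerate(sorted(error_bounds, reverse=True), start=1):
        total += d * u
        if total >= margin:
            return count
    return len(error_bounds) + 1   # no such set exists

def uniform_sample_size(N, C, P):
    # Smallest S such that a uniform random sample (without replacement)
    # of S of the N audit units catches at least one of C corrupted
    # units with probability >= P, i.e. miss probability <= 1 - P.
    alpha = 1.0 - P
    for S in range(N + 1):
        miss = 1.0
        for j in range(S):
            miss *= (N - C - j) / (N - j)
        if miss <= alpha:
            return S
    return N
```

The miss probability here is C(N−C, S)/C(N, S), the chance that none of the C corrupted units lands in the sample.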


6.4.2 Weighted Sampling: Proportional to Upper Error Bounds

Computer scientists developed weighted sampling methods to sample fewer ballots yet achieve the same probability of detecting incorrect outcomes, by targeting ballots having more potential for producing margin error [4, 9]. This section revises the weighted sampling method developed by Aslam, Popa, and Rivest, which they called the "PPEBWR" method [4, p. 16].16 This approach weights the selection of audit units by the relative weight each audit unit could contribute overall to causing an incorrect election outcome. Using upper margin error bounds, the probability p_i of selecting each audit unit is

p_i = d(b_i + w_i − r_i) / (dU) = (b_i + w_i − r_i) / U,

where U = Σ_i (b_i + w_i − r_i). The d's cancel, so the MLUO is not used for the weights.

6.4.2.1 The Probability Equation for the Sample Size, t

Any set of corrupted audit units that could have altered the election results has total margin error of at least the margin M, and hence total probability selection weight of at least M/U. For any set of t audit units whose total error bound is at least equal to the margin between the winning and runner-up candidates, we want the probability that all t escape detection to be less than α = 1 − P. Thus, for sampling with replacement, we want

(1 − M/U)^t ≤ α.

Taking the natural logarithm of both sides and solving for t, we get

t ≥ ln(α) / ln(1 − M/U).    (6.8)

So let t, the smallest whole integer equal to or greater than expression (6.8), be the minimum sample size [4, p. 10]:

t = ⌈ ln(α) / ln(1 − M/U) ⌉.    (6.9)

16 The probability-proportional-to-error-bound-with-replacement method. We do not recommend the other method, called NEGEXP, in the same article.


It is important to notice that formula (6.9) for t does not employ the maximum-level-of-undetectability-by-observation (MLUO) assumption. Employing this method with the MLUO assumption, d < 1, reduces the sample size by assuming margin error is spread among more precincts. The value of the sample size t then becomes

t = ⌈ ln(α) / ln(1 − M/(dU)) ⌉.    (6.10)

Using formula (6.10) requires that the just-losing candidate be permitted to add one or more precincts to the audit after the random sampling takes place. Both formulas for t may be calculated and checked using a spreadsheet.17 This description differs in several respects from Aslam et al.'s method described in [4, p. 10]. First, we recommend using actual upper margin error bounds so that the sampling weights are ordered relative to these bounds; second, we conservatively recommend using t as the calculated sample size rather than as the number of random draws with replacement, which could possibly under-sample; and third, if the MLUO assumption is used in the calculation of t as in Eq. (6.10), then we suggest it is imperative that the just-losing candidate be permitted to choose one or more precincts or audit units to add to the sample after the randomly selected sample is made public. However, as Aslam et al. caution, using the just-winning/just-losing candidate-pair upper margin bounds as sampling weights could increase the chance for certain election-rigging strategies to prevail under some circumstances. We leave the details of how to program this method to the reader.
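Formulas (6.9) and (6.10) and the PPEBWR weights can be sketched directly (the function names are illustrative; this is not the authors' spreadsheet):

```python
import math
import random

def ppebwr_sample_size(M, U, alpha, d=1.0):
    # Eq. (6.9) with d = 1, or Eq. (6.10) for an MLUO d < 1:
    # t = ceil( ln(alpha) / ln(1 - M/(d*U)) )
    return math.ceil(math.log(alpha) / math.log(1.0 - M / (d * U)))

def ppebwr_draws(ballots, winner, runner_up, t, rng=random):
    # Draw t audit-unit indices with replacement, each unit i selected
    # with probability p_i = (b_i + w_i - r_i) / U (the d's cancel).
    u = [b + w - r for b, w, r in zip(ballots, winner, runner_up)]
    return rng.choices(range(len(u)), weights=u, k=t)
```

For example, a contest with M/U = 0.05 and α = 0.05 needs t = 59 draws, and t = 29 if an MLUO of d = 0.5 is assumed (which is why the discretionary extra precincts discussed above become necessary).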

6.4.3 Timing and Additional Non-randomly Selected Audit Units

Selecting audit units in addition to those initially randomly selected may be necessary to cover all jurisdictions, to handle multi-ballot audit units, and to accommodate the timing of late-reported absentee and provisional ballots.

6.4.4 Allow the Just-Losing Candidate to Select Additional Audit Units

When the maximum level of undetectability (MLUO) is less than one, d < 1, additional audit units chosen by the candidate are a logical necessity, because the random sample sizes are then based on the assumption that suspicious-looking audit units with more than a fixed level of margin error d, say 30%, would be investigated without the necessity of an audit [19, p. 14], [15, 20], [42, p. 7], [2, 22]. We believe discretionary audit units should therefore be included during the initial manual audit without cost to candidates, or a risk-limiting audit could fail to achieve its stated minimum probability of detecting erroneous initial outcomes.18 If a maximum level of undetectability of one is assumed, then, using the sampling methods recommended herein, it would not be necessary to allow candidates to select discretionary audit units in order to limit the risk to a desired level.

17 Caution: Some calculators may calculate the values incorrectly due to rounding error.

6.4.5 Select Additional Audit Unit(s) from "Missed" Jurisdictions

Some state legislatures require that at least one audit unit be sampled from each separately administered election jurisdiction where an election contest occurs, in order to detect innocent ballot-programming errors, voting system problems, or fraud peculiar to one jurisdiction. Such additional random selections should be made after, and in addition to, the initial random selections; otherwise audits would insufficiently sample high-population areas, providing a map to potential perpetrators of which areas to target to avoid detection.19

6.4.6 Timing: Complete Prior to Certification

Auditing needs to be completed prior to certifying elections. Election officials can either begin a post-election audit after all the provisional and mail-in ballots have been counted in precincts and publicly reported, or use less convenient ways to count and publicly report mail-in and provisional ballots in batches that are, ideally, roughly equal in size to the median- or average-sized audit units.

6.5 Post-Election Audit Expansion

The net margin error found in the sample is calculated from the total net sum of the margin errors found in all audit units in the sample. If the audit units are individual ballots, Table 6.2 shows how error found on any ballot in the sample contributes to margin error. For multi-ballot audit units, such as precincts, the reported vote margin is the number of votes for the reported winner minus the number of votes for the reported loser. The net margin error is calculated either per vote, as in Table 6.2, or by subtracting the audited margin in the sample from the initially reported margin for the same sample.

Table 6.2 Per-vote margin error found during a post-election audit, relative to the margin between the winner and runner-up candidate pair

| A miscounted vote wrongly credited to | That should have been credited to | Margin error result |
|---|---|---|
| Winner | Runner-up | 2 |
| Runner-up | Winner | −2 |
| Winner | Under-/over-vote or other candidates | 1 |
| Runner-up | Under-/over-vote or other candidates | −1 |
| Under-/over-vote or other candidates | Runner-up | 1 |
| Under-/over-vote or other candidates | Winner | −1 |

18 Some have suggested that losing candidates should not be permitted to select any additional audit units, or should be required to pay the costs of auditing any such discretionary units. However, charging candidates would discourage auditing additional discretionary audit units and thus would result in over-stating the probability of detecting incorrect outcomes, because there would be no upper limit on the margin error that could occur within unchecked audit units, negating the assumptions used to calculate risk-limiting sample sizes.
19 Attorney Paul Lehto pointed this out in emails.

6.5.1 Number of Audit Expansion Rounds

In order to limit the overall risk of an audit failing to detect outcome-altering vote miscounts, it must be decided in advance how many audit expansion rounds there will be before expanding to a full manual recount of all ballots cast in an election contest. By the multiplication rule of probability, the probability that both A and B occur equals P(AB) = P(A)P(B|A). Because these probabilities are less than one, multiplying them together yields a lower probability than either one alone; thus, the chance that two or more audit samples in a row all detect a miscount when it occurs is less than the probability that any single sample does so. For example, if the overall probability of detection is to be at least 95%, then each sampling probability in a two-round audit must be √0.95 ≈ 0.975. If the first round of an audit is designed to limit the risk to α = 0.05 and the second round to α = 0.01, the overall risk limit is α = 1 − 0.95 × 0.99 ≈ 0.06. To limit the risk to the limit used to calculate the first sample, the audit would have to be immediately expanded to a full manual recount. To preserve a desired confidence level of, say, 0.95 over m audit rounds, the confidence level for each round must equal the m-th root of 0.95, i.e., 0.95^(1/m).
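The round arithmetic above can be sketched directly (the helper names are illustrative):

```python
def per_round_confidence(overall, rounds):
    # Confidence each of `rounds` audit rounds must individually achieve
    # so that the product of per-round detection probabilities is at
    # least `overall`: the m-th root of the overall confidence.
    return overall ** (1.0 / rounds)

def overall_risk(round_risks):
    # Overall risk limit when round i is designed with risk alpha_i:
    # alpha = 1 - prod(1 - alpha_i).
    p = 1.0
    for a in round_risks:
        p *= 1.0 - a
    return 1.0 - p
```

For a two-round audit targeting 0.95 overall confidence, each round needs about 0.975; rounds designed at α = 0.05 and α = 0.01 only limit the overall risk to 0.0595.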

6.5.2 Net Margin Error Threshold to Expand an Audit

The ability of a post-election audit to limit the risk of certifying an incorrect election outcome depends on analyzing any discrepancies found between the initial and manual audit tallies in light of the audit sampling design, and requires a procedure to increase the audit sample size when significant discrepancies arise. Both methods discussed herein are designed to find at least one member of any set of audit units that was corrupted sufficiently, altogether, to overcome the reported margin, if such a set exists. Thus, the threshold of margin error numerically calculated in Eq. 6.4 should be used for the decision rule. To be conservative, it is assumed that the audit unit containing the smallest such margin error possible in such a set would be found in the sample; however, the same amount of margin error could be more widely distributed amongst audit units in the population or in the sample. Table 6.3 provides the thresholds for expanding a post-election audit for multi-ballot and single-ballot audit units.

Table 6.3 Error thresholds for expanding post-election audits, for both uniform and weighted sampling

| Type of audit | Minimum net margin error found |
|---|---|
| Multi-ballot audit units with MLUO < 1 | Expand the audit if the entire sample contains net margin error equal to or greater than the threshold calculated in Eq. 6.4 |
| Single-ballot audit units | Expand if the net margin error is 2 or more votes |

6.5.3 Recalculate the Next Audit Round

Once the decision is made to conduct an expanded post-election audit round, its sample size may be calculated using the same method as above. First, the overall reported vote margin M is adjusted by the net margin error, in votes, found in the prior audit round, and the total number of audit units N is reduced to the number of remaining, as-yet-unaudited audit units. Next, the same calculations as shown in Sects. 6.4.1 or 6.4.2 are redone on the remaining audit units to determine the size of the uniform sample, or the weights and size of the weighted sample, and to determine the threshold of margin error for expanding to another audit round. If a subsequent audit round would come close to expanding the sample size to a full manual recount of the remaining ballots, an auditor may decide simply to conduct a full recount.

6.6 Summary and Recommendations

Risk-limiting post-election audits limit the risk of certifying an incorrect election outcome to a desired small maximum probability α. This paper recommends two methods for calculating election audit sample sizes that ensure audits achieve their stated minimum risk. Both the uniform and the weighted sampling methods can be employed for audit units of any size, from one ballot to many ballots, using the methods suggested in this paper. Correct upper margin error bounds ensure the adequacy of the sample size and sampling weight calculations.

6 Election Integrity Audits to Ensure Election Outcome Accuracy


Methods and materials for auditing elections and training auditors need development. Election officials, vote count auditors, election integrity advocates, and voting system vendors could benefit from:

• Voting system design specifications to improve audit-ability and accountability, to determine how much, how, when, and where errors occurred.20
• Manuals with procedures for conducting effective, efficient post-election audits, and user-friendly computer programs for inputting data to calculate risk-limiting sample sizes, for sampling, and for analyzing post-election audit discrepancies to decide when to certify the election or expand the audit.21
• Uniform standards adopted by all election districts to provide consistent access to electronic ballot records and to make voting system components, including auditing devices, inter-operable.22
• Methods to generate vote fraud and discrepancy test data to test the ability of various audit methods to detect various vote fraud strategies.
• Ways to assist losing candidates to select discretionary "suspicious-looking" audit units to add to the randomly selected sample.

Understandable, effective, routine post-election risk-limiting audits could provide high confidence in the accuracy of final election outcomes.

Acknowledgments Thanks to Ron Baiman for co-authoring and contributing to several papers I have written on post-election auditing in the past. Thanks to David Webber of Open Voting Solutions23 for reviewing a previous version of this article and making helpful suggestions, and to the other authors cited herein who developed or disseminated approaches for risk-limiting post-election audits. Thanks to Nelson Beebe and to many other dedicated election integrity advocates, such as Ray Lutz of CitizensOversight.org, who motivated and helped me to keep working on this topic.
Frank Stenger was born in Hungary. After WWII, when he was nine, his family was forcibly moved to East Germany but was able to escape, after a year, to West Germany and, a year later, to emigrate to Canada, where his father tried farming and failed after three years in a row of disastrous weather for crops. After years of family hardship, with the help of his teachers and by winning scholarships, Frank obtained an undergraduate degree in engineering physics, obtained

20 E.g., most current voting systems are not designed to conveniently produce auditable reports or to allow convenient sampling of smaller-sized vote counts, such as precincts or individual ballots [17]. Neff and Wand [1, 47] showed that the smaller the size of reported vote counts (in number of ballots), the fewer the total number of ballots that need to be manually audited to achieve a particular risk limit, and some computer scientists have proposed sampling individual ballots to make risk-limiting audits more efficient [9, 34, 46]. In large margin contests, risk-limiting election audits may require less work for the same benefit when a larger number of smaller-sized audit units are publicly reported and sampled [5, 46, 47].
21 So that election jurisdictions do not have to hire statisticians to plan every post-election risk-limiting audit, such a project might require professional manual writers and open-source computer program developers, in collaboration with election officials, security experts, and election auditing experts.
22 The Secretary of State office in California has reported precinct-level election results using international recording standards.
23 http://openvotingsolutions.com.


K. Dopp

Professor Emeritus Frank Stenger, who motivated and contributed to this chapter

a Master’s degree in mathematics, and, after spending a year at the National Bureau of Standards in Washington, D.C., obtained a Ph.D. in mathematics at the University of Alberta, afterwards teaching at the University of Michigan for three years and then moving to the University of Utah. Frank’s work is in applied mathematics and numerical methods, specifically in developing sinc functions that provide fast-running computer algorithms for solving, to tight tolerances, engineering equations that lack analytical solutions and arise in many scientific and engineering applications. Frank has authored or co-authored over 250 published academic articles and, including this volume, is an author or co-author of a total of six textbooks, and he has written MATLAB packages on numerical analysis methods, sinc functions, and approximation computation. Frank is a captivating writer and is well known throughout the world in his fields. His work is well above my level, but I know whom to ask if I can’t figure out an analytical solution to a mathematics problem! I met Frank when he taught two courses I took in applied math and numerical methods at the University of Utah, and I have kept in touch with him ever since, including when he taught for the University of Utah computer science department. Besides being one of the smartest, if not the smartest, professors I was lucky enough to take courses from, he is one of the nicest, friendliest, and most encouraging persons I have ever met. I will never forget that, when I went to his office for help on a step I didn’t understand during class, several mathematics professors popped their heads into his office during the half hour I was there to ask Frank for help with mathematics questions. Frank has a wonderful, gracious wife and grown children, and I have had the pleasure of learning what terrific and generous cooks they are.
I thank him very much for his encouragement of and invaluable contributions to my work on risk-limiting post-election auditing.

Appendix 1 Proof That the Just-Winning–Losing Candidate Pair Upper Margin Error Bound Produces the Largest Sample Size as Compared to Other Winning–Losing Candidate Pairs

The smaller the ratio M/U of the margin in number of ballots M to the sum of upper margin error bounds U, the larger the sample size will be when any of the risk-limiting post-election audit sampling methods discussed in this article are used. So we show that M/U is smallest for the just-winning/just-losing candidate pair.


Clearly (w − r) ≤ (w − l), where l is the number of votes for any initial losing candidate. So take any losing candidate with a greater margin than the runner-up; his margin can be expressed as w − r + y, where y > 0. Now compare the ratio Mr/Ur for the runner-up with the ratio for any other initial losing candidate. We want to compare

Mr/Ur = (w − r)/(b + w − r)   and   Ml/Ul = (w − (r − y))/(b + w − (r − y))

to see which is bigger. For simplicity, let w − r = x, so

Mr/Ur = x/(b + x)   and   Ml/Ul = (x + y)/(b + x + y).

Multiplying both fractions top and bottom by the necessary factors to get a common denominator and comparing numerators,

xb + x² + xy ≤ xb + x² + xy + by,

so that

Mr/Ur ≤ Ml/Ul.

Q.E.D.
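A quick numeric check of this inequality (the vote totals below are made up for illustration, not data from the chapter):

```python
# Hypothetical totals: b ballots, winner w votes, runner-up r votes,
# and another loser trailing the runner-up by y votes.
b, w, r, y = 10_000, 5_200, 4_800, 300

ratio_runner_up = (w - r) / (b + w - r)          # Mr / Ur
ratio_other     = (w - r + y) / (b + w - r + y)  # Ml / Ul

# The runner-up pair always yields the smallest ratio M/U,
# and hence the largest audit sample size.
assert ratio_runner_up <= ratio_other
print(ratio_runner_up, ratio_other)
```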

Appendix 2

I met Frank Stenger while obtaining my master’s degree in mathematics, with an emphasis in computer science, from the University of Utah. Since that time I have, for a while, operated one of the first Internet Service Provider (ISP) businesses in Utah, begun a non-profit devoted to election integrity, and taken graduate courses in various disciplines such as biology, math, economics, government, and city and municipal planning. I have learned that social science methods could be improved by the application of simple algebra, probability, calculus, logic, and combinatoric methods (Table 6.4).


Table 6.4 Some of my research interests

Issue: Environment — global temperature, atmospheric CO2 levels, and human CO2 emissions.
Methodological contribution: Conjecture improving the fit of first-differenced time series models, based on integral calculus concepts.
Content finding: Human CO2 emissions Granger-cause atmospheric CO2 levels, which Granger-cause global warming. Modeling is accurate with one prior time period, suggesting reduction in human emissions may possibly quickly reduce global temperature.

Issue: Natural/methane gas hydraulic fracturing.
Methodological contribution: None (reading, including the fine print of industry documents, arithmetic, and interviews).
Content finding: Most of the methane gas released in deep-well hydro-fracking is not recovered up well bores. Now buoyant, it rises through natural fissures into underground aquifers and then into the atmosphere, where methane causes at least 40 times the greenhouse gas effect of CO2.

Issue: Preserving wildlife and climate via wildlife road crossings and fencing.
Methodological contribution: None (literature review, logic, and arithmetic).
Content finding: Building wildlife road crossings that allow ungulates to migrate preserves biodiversity and forest health, reduces forest fire rates, and costs far less over 25 years than the personal injury and property damages caused by wildlife–vehicle collisions.

Issue: Elections — post-election audit sampling and procedures.
Methodological contribution: Various concepts and methods.
Content finding: Currently no US state effectively limits the risk of certifying incorrect election outcomes.

Issue: Post-election auditing obstacles.
Methodological contribution: None (logic; talking to computer technicians, scientists, election officials, and election integrity advocates).
Content finding: Most US voting system technology is unaccountable and nonuniform, makes vote rigging easy for persons having the skills and/or access, and makes effective post-election auditing unnecessarily difficult.

Issue: Statistical methods for detecting vote miscount.
Methodological contribution: Various methods since 2004.
Content finding: E.g., vote–poll discrepancies in the 2016 US presidential election reveal two states consistent with vote miscount sufficient to have altered the 2016 US presidential election outcome, even after adjusting for systematic partisan exit poll response bias and false discovery rate.

Issue: Redistricting algorithms and measures in the US.
Methodological contribution: An algorithm for achieving close to proportional representation for people living in regions of various population densities.
Content finding: A commonly required criterion of "compactness", the isoperimetric quotient, disadvantages political groups tending to live in more densely populated urban areas.

Issue: Statistics — applying logic, to determine necessary and/or sufficient conditions, to statistics.
Methodological contribution: Extending Qualitative Comparative Analysis (QCA), a combinatorial method, to any shape of statistical function.
Content finding: E.g., black population majorities are a necessary but not sufficient condition for blacks to obtain proportionately fair representation in US local legislatures; and nonexistent or insufficient post-election audits are a necessary condition for high levels of discrepancy between exit poll and reported vote shares.

Working papers on the above subjects are posted on http://ssrn.com


References

1. Andrew Neff, C.: Election confidence—A comparison of methodologies and their relative effectiveness at achieving it (2003). http://www.electionmathematics.org/em-audits/US/NeffElectionConfidence.pdf
2. Appel, A.W.: Effective audit policy for voter-verified paper ballots in New Jersey. Center for Information Technology Policy & Department of Computer Science, Princeton University, Princeton (2007). http://www.cs.princeton.edu/%7Eappel/papers/appel-audits.pdf
3. Aslam, J.A., Popa, R.A., Rivest, R.L.: On estimating the size and confidence of a statistical audit. In: Proceedings of the Electronic Voting Technology Workshop (EVT’07), Boston, MA, August 6, 2007. USENIX Association, Berkeley, CA (2007). http://people.csail.mit.edu/rivest/AslamPopaRivest-OnEstimatingTheSizeAndConfidenceOfAStatisticalAudit.pdf
4. Aslam, J.A., Popa, R.A., Rivest, R.L.: On Auditing Elections When Precincts Have Different Sizes. Massachusetts Institute of Technology, Cambridge (2008). http://people.csail.mit.edu/rivest/AslamPopaRivest-OnAuditingElectionsWhenPrecinctsHaveDifferentSizes.pdf
5. Atkeson, L.R., Alvarez, R.M., Hall, T.E.: The New Mexico 2006 Post Election Audit Report. Pew Center on the States (2009). http://www.unm.edu/%7Eatkeson/assets/new_mexico_report_revised_oct09.pdf
6. Burr, B.: Introduction & VVPR. In: Technical Guidelines Development Committee Plenary Meeting, May 21–22, 2007. Computer Security Division of NIST (2007). https://www.nist.gov/document-13916
7. Burr, W., Kelsey, J., Peralta, R., Wack, J.: Requiring Software Independence in VVSG 2007: STS Recommendations for the TGDC Technical Guidelines Development Committee for the EAC. National Institute of Standards and Technology, Gaithersburg (2006). http://vote.nist.gov/DraftWhitePaperOnSIinVVSG2007-20061120.pdf
8. Calandrino, J.A., Halderman, J.A., Felten, E.W.: In defense of pseudorandom sample selection. Center for Information Technology Policy and Department of Computer Science, Princeton University, Woodrow Wilson School of Public and International Affairs (2007). http://www.usenix.org/event/evt08/tech/full_papers/calandrino/calandrino_html/
9. Calandrino, J.A., Halderman, J.A., Felten, E.W.: Machine-assisted election auditing. Center for Information Technology Policy and Department of Computer Science, Princeton University, Woodrow Wilson School of Public and International Affairs (2007). https://www.usenix.org/event/evt07/tech/full_papers/calandrino/calandrino.pdf
10. Cordero, A., Wagner, D., Dill, D.: The role of dice in election audits. University of California at Berkeley (2006). http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.188.7507
11. Center for Democracy and Election Management: Building Confidence in US Elections—Report of the Commission on Federal Election Reform. American University, Washington (2005). https://www.eac.gov/assets/1/6/Exhibit%20M.PDF
12. Dill, D.: David Dill’s Testimony Before the Commission on Federal Election Reform (The Carter–Baker Commission), American University, Washington. Verified Voting Foundation Web Site (2005). https://www.verifiedvoting.org/downloads/carter-baker-testimony3.pdf
13. Dopp, K.: First US Scientific Election Audit Reveals Voting System Flaws But Questions Remain Unanswered—Critique of the Collaborative Public Audit of Cuyahoga County Ohio, November 2006 Election. ElectionMathematics Web Site (2007). http://electionmathematics.org/ucvAnalysis/OH/CuyahogaElectionAudit.pdf
14. Dopp, K.: Derivation of the formula for the number of selection rounds for the probability proportional to margin error bound (PPMEB) method for determining samples for vote count audits. ElectionMathematics Web Site (2007–2008). http://electionmathematics.org/ucvAnalysis/US/paper-audits/PPMEB-Auditing.pdf
15. Dopp, K.: The History of Confidence Election Auditing Development (1975 to 2008) & Overview of Election Auditing Fundamentals. ElectionMathematics Web Site (2007–2008). http://electionmathematics.org/ucvAnalysis/US/paper-audits/History-of-Election-Auditing-Development.pdf


16. Dopp, K.: Post-Election Vote Count Audits—Probability Proportional to Margin Error Bound (PPMEB) Method. ElectionMathematics Web Site (2007–2008). http://www.electionmathematics.org/ucvAnalysis/US/paper-audits/VoteCountAudits-PPMEB.pdf
17. Dopp, K.: Checking Election Outcome Accuracy—Post-Election Auditing Procedures (2009). https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1574536
18. Dopp, K., Baiman, R.: How Can Independent Paper Audits Ensure Election Integrity? ElectionMathematics Web Site (2005–2006). http://electionmathematics.org/ucvAnalysis/US/paper-audits/Paper_Audits.pdf
19. Dopp, K., Stenger, F.: The Election Integrity Audit. ElectionMathematics Web Site (2006). http://electionmathematics.org/ucvAnalysis/US/paper-audits/ElectionIntegrityAudit.pdf
20. Dopp, K., Straight, J.: Mandatory vote count audit—A legislative & administrative proposal. US Count Votes and Utah Count Votes Web Site (2006–2008). http://electionmathematics.org//ucvAnalysis/US/paper-audits/legislative/Vote-Count-Audit-Bill-2009.pdf
21. Hall, J.: Dice binning calculator for post-election audits (2008). http://www.josephhall.org/dicebins.php
22. Hall, J.: Policy mechanisms for increasing transparency in electronic voting. PhD Thesis, University of California at Berkeley (2008). http://josephhall.org/papers/jhall-phd.pdf
23. Hall, J.L., Miratrix, L.W., Stark, P.B., Briones, M., Ginnold, E., Oakley, F., Peaden, M., Pellerin, G., Stanionis, T., Webber, T.: Implementing Risk-Limiting Post-Election Audits in California (2009). https://www.usenix.org/event/evtwote09/tech/full_papers/hall.pdf
24. Jones, D.W.: Auditing elections. Commun. ACM 47(10), 46–50 (2004). http://homepage.divms.uiowa.edu/~jones/voting/cacm2004.shtml
25. McBurnett, N.: Obtaining Batch Reports for Audits from Election Management Systems: ElectionAudits and the Boulder 2008 Election. http://www.nist.gov/itl/vote/upload/neal-mcburnettboulder-paper.pdf
26. McCarthy, J., Stanislevic, H., Lindeman, M., Ash, A., Addona, V., Batcher, M.: Percentage-based vs. SAFE Vote Tabulation Auditing: A Graphic Comparison (2007). http://electionaudits.org/files/Percentage-based%20vs%20SAFE%20Vote%20Tabulation%20Auditing%20Paper.pdf
27. Mebane, W.R., Jr., Sekhon, J.S., Wand, J.: Detecting and correcting election irregularities (2003). http://sekhon.berkeley.edu/papers/detecting.pdf
28. NIST staff: Voluntary voting system guidelines. Recommendations to the Election Assistance Commission, prepared at the request of the Technical Guidelines Development Committee (2007). https://www.eac.gov/assets/1/28/TGDC_Draft_Guidelines.2007.pdf
29. Norden, L., Burstein, A., Hall, J.L., Chen, M.: Post-election audits: Restoring trust in elections. Brennan Center for Justice with the Samuelson Law, Technology & Public Policy Clinic (2007). https://www.brennancenter.org/publication/post-election-audits-restoring-trust-elections
30. Norden, L., Burstein, A., Hall, J.L., Dill, D., Hoke, C., Mebane, W., Jr., Oakley, F., Rivest, R.L., Wagner, D.: Thoughts on mandatory audits. The Brennan Center Web Site (2007). http://www.brennancenter.org/analysis/testimony-lawrence-norden-house-subcommittee-elections-hearing-election-audits
31. Project ACCURATE: Public comment on the 2005 voluntary voting system guidelines, submitted to the United States Election Assistance Commission. Samuelson Law, Technology & Public Policy Clinic, University of California, Berkeley (2005). https://www.eac.gov/assets/1/28/VVSG.1.0.publiccomment.PDF
32. Rivest, R.L.: How Big Should a Statistical Audit Be? MIT Web Site (2006). http://theory.csail.mit.edu/%7Erivest/Rivest-OnEstimatingTheSizeOfAStatisticalAudit.pdf
33. Rivest, R.L.: Sum of Square Roots (SSR) pseudorandom sampling method for election audits. MIT (2008). http://people.csail.mit.edu/rivest/RivestASumOfSquareRootsSSRPseudorandomSamplingMethodForElectionAudits.pdf
34. Rivest, R.L.: Bayesian Tabulation Audits Explained and Extended (2018). https://people.csail.mit.edu/rivest/pubs/Riv18a.pdf


35. Rivest, R.L., Wack, J.P.: On the notion of “software independence” in voting systems. NIST (2006). https://people.csail.mit.edu/rivest/RivestWackOnTheNotionOfSoftwareIndependenceInVotingSystems.pdf
36. Saltman, R.G.: Effective use of computing technology in vote-tallying. Technical Report NBSIR 75-687, National Bureau of Standards, Information Technology Division (1975). http://csrc.nist.gov/publications/nistpubs/NBS_SP_500-30.pdf
37. Gardner, W.M. (Secretary of State), Stevens, A. (Assistant Secretary of State), Scanlan, D.M. (Deputy Secretary of State): New Hampshire Election Procedure Manual: 2016–2017. Office of the Secretary of State of New Hampshire (2007). http://sos.nh.gov/WorkArea/DownloadAsset.aspx?id=8589967075
38. Stanislevic, H.: Random auditing of E-voting systems: How much is enough? (2006). http://www.votetrustusa.org/pdfs/VTTF/EVEPAuditing.pdf
39. Stark, P.B.: CAST: Canvass Audit by Sampling and Testing. Department of Statistics, University of California, Berkeley (2008). https://www.researchgate.net/publication/224602251_CAST_Canvass_Audits_by_Sampling_and_Testing
40. Stark, P.B.: Conservative statistical post-election audits. Ann. Appl. Stat. 2(2), 550–581 (2008). https://projecteuclid.org/euclid.aoas/1215118528
41. Stark, P.B.: Election audits by sampling with probability proportional to an error bound: Dealing with discrepancies. Department of Statistics, University of California, Berkeley (2008). https://www.researchgate.net/publication/248684153_Election_audits_by_sampling_with_probability_proportional_to_an_error_bound_dealing_with_discrepancies
42. Stark, P.B.: A sharper discrepancy measure for post-election audits. Ann. Appl. Stat. 2(3), 982–985 (2008). https://projecteuclid.org/euclid.aoas/1223908048
43. The Collaborative Audit Committee: Collaborative public audit of the November 2006 General Election. Center for Election Integrity for the Cuyahoga County Board of Elections (2007). http://electionmathematics.org/em-audits/OH/2006Audit/cuyahoga_audit_report.pdf
44. The Election Audits Task Force: Report on Election Auditing (2009). https://verifiedvoting.org/downloads/Report_ElectionAudits.pdf
45. US General Accounting Office: Federal efforts to improve security and reliability of electronic voting systems are under way, but key activities need to be completed (2005). http://www.gao.gov/new.items/d05956.pdf
46. Walmsley, P.: Requirements for statistical live auditing of optical-scan and VVPAT records—and for live auditing for vote tabulation (2005). http://bcn.boulder.co.us/~neal/elections/corla/statistical-method-for-auditing-vote-counting-paul-walmsley-2005.pdf
47. Wand, J.: Auditing an election using sampling: The impact of bin size on the probability of detecting manipulation. Stanford Web Site, no longer available (2004)

Chapter 7

Numerical Solution of the Falkner-Skan Equation Arising in Boundary Layer Theory Using the Sinc-Collocation Method

Basem Attili

Abstract We consider the numerical solution of the well-known Falkner-Skan problem, which is a third-order nonlinear boundary value problem. Our approach is to first transform the third-order boundary value problem on the semi-infinite domain into a second-order nonlinear boundary value problem on a finite domain by introducing a special transformation. The resulting two-point boundary value problem is then treated numerically using the sinc-collocation method, which is known to converge exponentially. Numerical results will be presented for various values of the parameters representing various types of flows. Comparison with the work of others will also be made to show the accuracy of the Sinc method. Keywords Falkner-Skan Equation · Boundary Layer · Sinc-Collocation Method · Boundary Value Problems.

7.1 Introduction

Consider the Falkner-Skan third-order nonlinear boundary value problem [1], arising in laminar boundary layer flow with a stream-wise pressure gradient, of the form

f‴(η) + β0 f(η) f″(η) + β1 [1 − (f′(η))²] = 0,  0 ≤ η < ∞,   (7.1)

subject to

f(0) = 0,  f′(0) = γ,  f′(∞) = 1,   (7.2)

B. Attili () University of Sharjah, Sharjah, United Arab Emirates e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 G. Baumann (ed.), New Sinc Methods of Numerical Analysis, Trends in Mathematics, https://doi.org/10.1007/978-3-030-49716-3_7


where f is the dimensionless stream function, η is the dimensionless normal coordinate, γ ≥ 0 is the ratio of the velocity of the plate to that of the mainstream, f′(∞) := lim_{η→∞} f′(η), and β1 is the dimensionless pressure gradient parameter. The range of values for which β1 is physically meaningful plays a great role in the interpretation of the solution. If β1 = 0 and γ = 0, the problem reduces to the famous Blasius equation, which describes the viscous flow in a laminar boundary layer over a flat plate. Notice that for 0 < γ < 1, the speed of the oncoming fluid is larger than that of the plate, while if γ > 1, the speed of the moving plate is more than that of the oncoming fluid. They move in opposite directions if γ < 0. Due to the importance of the Falkner-Skan equation given by (7.1)–(7.2) in applied sciences and fluid dynamics, it has attracted the attention of many researchers. Since the problem is nonlinear and defined on an infinite domain, few researchers have considered the problem analytically, and only under certain conditions. For example, analytical solution under appropriate boundary conditions was considered by Fang et al. [6], Fang and Zhang [5], and Sachdev et al. [14]; their approach was mainly the homotopy asymptotic expansion method. Numerically, many authors considered the problem using different approaches and numerical methods. Taylor’s and Runge-Kutta methods were used by Asaithambi [2] and Cortell [3], respectively. An iterative method was used by Fazio [7]. A third-order and higher-order compact finite difference scheme was used by Cortell [3]. A comparative study of both the Falkner-Skan and Blasius equations was done by Raju and Sandeep [12]. A shooting method under suction-injection conditions was considered by Liu and Chang [10]. A fifth-order extended one-step method was proposed by Salama [15]. Two iterative methods based on the quasilinearization method were proposed by Zhu et al. [23].
An iterative finite difference scheme was proposed by Temimi and Ben-Romdhane [20]. We propose the use of the sinc-collocation method to numerically approximate the solution of (7.1)–(7.2). We first transform the equation into a two-point boundary value problem defined on a finite interval, then apply the sinc-collocation method. The sinc-collocation method was used by Parand et al. [11], but on the semi-infinite domain without any transformation. More on the sinc-collocation method applied to linear and nonlinear systems can be found in [4]. In general, accounts of the Sinc method, which is suitable for approximating classes of computational problems including problems over infinite domains and boundary layer problems, can be found in [16–18]. The outline of the paper is as follows. In Sect. 7.2, we give some properties of the solution of the Falkner-Skan equation. In Sect. 7.3, we present the transformation details that allow us to consider a second-order system equivalent to (7.1)–(7.2) defined on a finite interval; that is, defined on the interval [0,1). We present the sinc-collocation method with the needed preliminaries in Sect. 7.4. Finally, in Sect. 7.5, we present the numerical testing and results.
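For orientation, a minimal sketch of the classical shooting approach mentioned above (not the sinc-collocation method developed in this chapter) applied to the Blasius case β0 = 1, β1 = 0, γ = 0, for which the classical wall-shear value is f″(0) ≈ 0.4696:

```python
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

def shooting_residual(s, eta_max=10.0):
    """Integrate f''' = -f f'' (beta0 = 1, beta1 = 0) with f(0) = 0,
    f'(0) = 0, f''(0) = s, and return f'(eta_max) - 1."""
    def rhs(eta, y):                       # y = (f, f', f'')
        return [y[1], y[2], -y[0] * y[2]]
    sol = solve_ivp(rhs, (0.0, eta_max), [0.0, 0.0, s],
                    rtol=1e-10, atol=1e-10)
    return sol.y[1, -1] - 1.0

# Find the slope f''(0) satisfying the far-field condition f'(inf) = 1.
shear = brentq(shooting_residual, 0.1, 1.0)
print(shear)   # approximately 0.4696, the classical Blasius value
```

The bracket (0.1, 1.0) is an assumption that works for this case; an undershooting slope leaves f′ below 1 and an overshooting slope pushes it above 1, so the residual changes sign.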


7.2 Properties of the Solution

Consider the system given by (7.1)–(7.2):

f‴(η) + β0 f(η) f″(η) + β1 [1 − (f′(η))²] = 0,  0 ≤ η < ∞,   (7.3)

subject to

f(0) = 0,  f′(0) = γ,  f′(∞) = 1.   (7.4)

We present some properties of the solution of the system.

Lemma 7.1
1. If 0 < f′(η) < 1, then the system (7.3)–(7.4) has a unique solution for β1 ≥ 0. Moreover, the solution satisfies f″(η) > 0 for 0 < η < ∞.
2. There exists β∗ ∈ (−∞, 0) such that (7.3)–(7.4) with 0 < f′(η) < 1 has at least one solution for β1 ∈ (β∗, 0); the solutions are not unique and satisfy f″(η) > 0 for 0 < η < ∞.
3. If 0 < f′(η) < 1, then the system (7.3)–(7.4) has a unique solution for β1 = β∗, and the solution satisfies f″(η) > 0 for 0 < η < ∞.
4. The system (7.3)–(7.4) with 0 < f′(η) < 1 has no solution for β1 < β∗.

Proof The proof of the first part follows directly from the application of Theorems 6.1 and 8.1 of [8] by setting α = β = 0. The other parts follow from Theorem 7.1 of the same reference with α = β = 0. □

Regarding β∗, it was shown in [22] that there exists a β ∈ (−1/2, 0) such that (7.3)–(7.4) with f″(η) > 0 has one positive solution. Numerically, [13] obtained the value β∗ = −0.1988, while [9] gave upper and lower limits β∗ ∈ (−0.4, −0.12). With this, one can prove the following result.

Theorem 7.1 Assume that (β, f) ∈ [β∗, 0) × [0, ∞) satisfies (7.3)–(7.4) and 0 < f′(η) < 1. Then f has the following properties:
1. f(η) < η for η ∈ (0, ∞).
2. lim_{η→∞} f(η)/η = 1.
3. If β ∈ [β∗, 0), then 2/27 ≤ f″(0) ≤ 1.
4. If β ≥ 0, then (1 + 4β)/6 ≤ f″(0) ≤ (1 + 4β)/3.

Proof
1. For η ∈ (0, ∞), consider x(η) = η − f(η). Differentiating with respect to η gives x′(η) = 1 − f′(η). Since 0 < f′(η) < 1, we have x′(η) > 0, which means that x(η) is strictly increasing. Hence x(η) > x(0) = 0, since f(0) = 0, leading to x(η) = η − f(η) > 0, or f(η) < η for η ∈ (0, ∞).


2. Using L’Hôpital’s rule, we have lim_{η→∞} f(η)/η = lim_{η→∞} f′(η)/1 = 1, since lim_{η→∞} f′(η) = 1.
3. and 4. follow directly from the proof of Theorem 3.2 of [9]. □

These properties are of good use when it comes to numerical computations.
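As an informal numerical illustration of properties 1 and 2 in the Blasius case β0 = 1, β1 = 0, γ = 0 (the theorem itself is stated for other parameter ranges; the wall-shear value 0.4696 is the classical Blasius value, taken as given here):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Blasius case of (7.3): f''' = -f f'', f(0) = 0, f'(0) = 0, f''(0) = 0.4696.
def rhs(eta, y):                          # y = (f, f', f'')
    return [y[1], y[2], -y[0] * y[2]]

eta = np.linspace(0.0, 40.0, 40001)
sol = solve_ivp(rhs, (0.0, 40.0), [0.0, 0.0, 0.4696], t_eval=eta,
                rtol=1e-10, atol=1e-12)
f = sol.y[0]

# Property 1: f(eta) stays below eta away from the origin.
assert np.all(f[1:] < eta[1:])

# Property 2: f(eta)/eta approaches 1 as eta grows.
print(f[-1] / eta[-1])
```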

7.3 The Transformed Problem

Again, let us consider the problem given by (7.1)–(7.2); that is,

f‴(η) + β0 f(η) f″(η) + β1 [1 − (f′(η))²] = 0,  0 ≤ η < ∞,   (7.5)

subject to

f(0) = 0,  f′(0) = γ,  f′(∞) = 1.   (7.6)

To solve the system numerically, one faces two difficulties; namely, the domain is semi-infinite and one condition is given at ∞. The author in [9] transformed the Falkner-Skan equation and established an equivalent singular integral equation. In our case, we will transform the Falkner-Skan equation to obtain a second-order singular two-point boundary value problem. We assume γ ∈ (0, 1) and f″(η) > 0. Now since η ≥ 0, f′(η) is strictly increasing and hence has an inverse (f′)⁻¹ on the interval [0, ∞). If we let z = f′(η), then η = (f′)⁻¹(z) ≡ k(z). Reflecting back into the conditions given by (7.6), we have k(γ) = 0 and k(1) = +∞. For γ ∈ (0, 1), we have

z = f′(η) ≡ f′(k(z)).   (7.7)

Multiplying both sides by k′(z), we obtain

z k′(z) = f′(k(z)) k′(z).   (7.8)

Differentiating (7.7) with respect to z, we obtain

1 = f″(k(z)) k′(z),   (7.9)

from which we obtain

f″(k(z)) = 1/k′(z).   (7.10)


Using the notation

w(z) ≡ f″(k(z)) = 1/k′(z),   (7.11)

this means (7.8) becomes

z/w(z) = f′(k(z)) k′(z).   (7.12)

Integrating from γ to z, and since f(k(γ)) = f(0) = 0, we have

∫_γ^z t/w(t) dt = ∫_γ^z f′(k(t)) k′(t) dt = f(k(z)).   (7.13)

The integral is improper since k(z) is continuous on [γ, 1) but not defined at z = 1; hence we have

lim_{R→1⁻} ∫_γ^R t/w(t) dt = lim_{R→1⁻} ∫_γ^R f′(k(t)) k′(t) dt.   (7.14)

To obtain a formula for f‴(η), we differentiate (7.11) to obtain

w′(z) = f‴(k(z)) k′(z),   (7.15)

leading to

f‴(k(z)) = w′(z)/k′(z) = w′(z) w(z),

since w(z) = 1/k′(z). Now substituting the formulas we obtained for f, f′, f″ and f‴ into (7.5)–(7.6), we obtain

w(z) w′(z) + β0 w(z) ∫_γ^z t/w(t) dt + β1 (1 − z²) = 0,  γ ≤ z < 1,   (7.16)

with w(1) = 0, a first-order integro-differential equation defined on a finite interval. To obtain a two-point boundary value problem we eliminate the integral; to do so, divide equation (7.16) by β0 w(z):

w′(z) + β0 ∫_γ^z t/w(t) dt + β1 (1 − z²)/(β0 w(z)) = 0,  γ ≤ z < 1.   (7.17)


Differentiating (7.17) with respect to z and simplifying, we have

w″(z) + (1 − 2β1) β0 z / w(z) − β1 (1 − z²) w′(z) / w²(z) = 0,   (7.18)

or

w″(z) w²(z) − β1 (1 − z²) w′(z) + (1 − 2β1) β0 z w(z) = 0.   (7.19)

Notice that w(1) = f″(k(1)) = f″(∞) = 0, and to obtain the second boundary condition, we evaluate (7.17) at z = γ to obtain

w(γ) w′(γ) + β1 (1 − γ²) = 0,   (7.20)

or

w′(γ) = −β1 (1 − γ²) / w(γ).   (7.21)

Finally, the resulting second-order boundary value problem is

w″(z) w²(z) − β1 (1 − z²) w′(z) + (1 − 2β1) β0 z w(z) = 0,   (7.22)

subject to the boundary conditions

w(1) = 0,  w′(γ) = −β1 (1 − γ²) / w(γ).   (7.23)
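As a sanity check on the transformation, consider the Blasius case β0 = 1, β1 = 0 (with the classical value f″(0) ≈ 0.4696 taken as given — a numerical sketch, not part of the chapter's development): along the trajectory, w = f″ viewed as a function of z = f′ satisfies dw/dz = f‴/f″ = −f by the chain rule, and one more differentiation gives w″ = −f′/f″ = −z/w, i.e., w″ w² + z w = 0, which is (7.22) for these parameters.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Blasius case of (7.5): f''' = -f f''  (beta0 = 1, beta1 = 0, gamma = 0).
def rhs(eta, y):                          # y = (f, f', f'')
    return [y[1], y[2], -y[0] * y[2]]

eta = np.linspace(0.0, 6.0, 6001)
sol = solve_ivp(rhs, (0.0, 6.0), [0.0, 0.0, 0.4696], t_eval=eta,
                rtol=1e-10, atol=1e-12)
f, z, w = sol.y                           # f, z = f' (new variable), w = f''

# Chain rule check: dw/dz should equal -f along the trajectory.
mask = z < 0.9                            # avoid z -> 1 where dz shrinks to 0
dw_dz = np.gradient(w[mask], z[mask])     # nonuniform-grid derivative
residual = np.max(np.abs(dw_dz[1:-1] + f[mask][1:-1]))
print(residual)                           # small residual: consistent
```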

7.4 The Sinc-Collocation Method

Before presenting the details of the sinc-collocation method applied to (7.22)–(7.23), we present some needed definitions and properties of the Sinc method.

7.4.1 Preliminaries We present here a brief review of the sinc functions with some of their needed basic properties. A more detailed and comprehensive reviews can be found in [16–18]. Definition 7.1 For x ∈ R, the Sinc function is defined as Sinc(x) =

 sin π x πx

1

; x = 0 ; x = 0.

(7.24)
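As a quick numerical illustration (ours, not from the text), the definition in (7.24) can be implemented directly; the removable singularity at x = 0 is handled explicitly:

```python
import math

def sinc(x: float) -> float:
    """Sinc(x) = sin(pi x)/(pi x), with the removable value Sinc(0) = 1."""
    if x == 0.0:
        return 1.0
    return math.sin(math.pi * x) / (math.pi * x)

# Sinc equals 1 at the origin and vanishes at every nonzero integer,
# which is what makes its translates S(m, h) an interpolating basis.
```
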

7 Numerical Solution of the Falkner-Skan Equation

153

We now define the Whittaker interpolation.

Definition 7.2 (Whittaker Cardinal Expansion) If f(x) is a function defined for x ∈ R, the series

C(f, h)(x) = \sum_{m=-\infty}^{\infty} f(mh)\, S(m, h)(x),  (7.25)

where h > 0 is a step size and

S(m, h)(x) = \mathrm{Sinc}\!\left(\frac{x - mh}{h}\right),

is called the Whittaker cardinal expansion of f(x), provided the series (7.25) converges.

Notice that in practice, instead of using the infinite series (7.25), one uses a truncated series of the form

C_N(f, h)(x) = \sum_{m=-N}^{N} f(mh)\, S(m, h)(x)  (7.26)

for a given N. Notice also that at each Sinc grid point x_m = mh we have C_N(f, h)(x_m) = f(x_m), since the series is an interpolation of f(x) at the grid points.

For the space of approximation and rates of convergence, if we consider a class of functions that are analytic on an infinite strip

D_d = \left\{ z = x + iy \; ; \; |y| < d < \frac{\pi}{2} \right\},  (7.27)

then the sinc interpolation provides an approximation whose absolute error decays exponentially; see [19]. The space of approximation is the Hardy space, as in the following definition.

Definition 7.3 Let D_d be as defined in (7.27). Then the Hardy space H^1(D_d) is defined as the class of functions f(x) that are analytic in D_d such that

\lim_{\varepsilon \to 0} \int_{\partial D_d(\varepsilon)} |f(z)|\, |dz| < \infty.  (7.28)

If the function to be approximated belongs to H^1(D_d) and decays exponentially on the real line, then the following theorem presents the convergence result for the Sinc approximation.
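A small numerical sketch of the truncated cardinal series (7.26) and its interpolation property (our own example; the sample function f(x) = 1/cosh x is analytic in a strip and decays exponentially, and the values of h and N are illustrative choices, not prescriptions from the text):

```python
import math

def sinc(x):
    return 1.0 if x == 0.0 else math.sin(math.pi * x) / (math.pi * x)

def cardinal(f, h, N, x):
    """Truncated Whittaker cardinal expansion C_N(f, h)(x) of Eq. (7.26)."""
    return sum(f(m * h) * sinc((x - m * h) / h) for m in range(-N, N + 1))

f = lambda x: 1.0 / math.cosh(x)   # analytic in a strip, exponential decay
h, N = 0.5, 30

# Interpolation property at a grid point x_m = mh:
node_err = abs(cardinal(f, h, N, 3 * h) - f(3 * h))

# Accuracy at an off-grid point (error decays exponentially in 1/h and N):
err = abs(cardinal(f, h, N, 0.3) - f(0.3))
```

At the grid points the truncated series reproduces f exactly (up to rounding), while off the grid the error is governed by the strip width d and the decay rate of f, as Theorem 7.2 makes precise.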


Theorem 7.2 (Stenger [16]) Assume, with α, β and γ positive constants, that

1. f(x) ∈ H^1(D_d);
2. f(x) decays exponentially for x ∈ R, such that |f(x)| ≤ α exp[−β exp(γ|x|)].

Then the error of the Sinc approximation is bounded by

\sup_{-\infty < x < \infty} \left| f(x) - \sum_{m=-N}^{N} f(mh)\, S(m, h)(x) \right| \le C\, E(h).

Assume next that U_s = σ_r × (−s, s)^{n−r} for some s > 0 and set Ũ_s = f(U_s). The set U_s is open in R^n, so Ũ_s is also. Now Ũ_s ∩ H_p = W, since if p' ∈ Ũ_s ∩ H_p then

p' = v_r + \sum_{i=0}^{r-1} t_i (v_i - v_r) + \sum_{i=1}^{n-r} t_{i+r-1} u_i = v_r + \sum_{i=0}^{r-1} y_i (v_i - v_r),

which forces t_{i+r−1} = 0 for 1 ≤ i ≤ n − r.

We show next that sets of the form W are open in K^{(r)}. Let p ∈ K^{(r)} and set L = H_p ∩ K. Let L° be the interior of L in H_p and define ∂L = L \ L°. Then p ∈ W ⊂ L°, where W is defined as above, and the following fact applies.

Claim If p' ∈ L° then δ_K(p') = r, and if p' ∈ ∂L then δ_K(p') < r.

Proof of Claim Let p' ∈ L° and p' ≠ p. Then there are open segments (p_0, p') and (p_1, p) in K with p ∈ (p_0, p') and p' ∈ (p_1, p), which shows that δ_K is constant on L° by Lemma 8.3. Let p_1 ∈ ∂L = L \ L°. Since p ∈ L° we can find a segment (p_0, p_1) containing p, so δ_K(p) ≥ δ_K(p_1). Suppose δ_K(p_1) = δ_K(p) = r. Then there are affinely independent v_0', …, v_r' ∈ K so that p_1 = \sum_{i=0}^{r} t_i v_i' is an open convex combination, and p_1 ∈ W' where W' is defined via the v_i' as above. If p = \sum_{i=0}^{r} x_i v_i is an open convex combination, then some v_j must be affinely independent of the v_i', or we have H_p ⊂ H_{p_1}, so H_p = H_{p_1}, and then p_1 ∈ W' ⊂ H_p implies p_1 ∈ L°, which is a contradiction. Find a point p'' ∈ W' independent of the set {v_0', …, v_r'} as follows. If p' is independent of this set, let p'' = p'. Otherwise p' = \sum_{i=0}^{r} s_i v_i' where \sum_{i=0}^{r} s_i = 1, so let p'' = (1 − t)v_j + t p' for some t ∈ (0, 1). If p'' = \sum_{i=0}^{r} y_i v_i' with \sum_{i=0}^{r} y_i = 1, then v_j = \sum_{i=0}^{r} x_i' v_i' where x_i' = (y_i − t s_i)/(1 − t) for each i, which is another contradiction, so p'' is the required point. Now set p''' = (1 − τ)p'' + τ p_1 = (1 − τ)p'' + \sum_{i=0}^{r} τ t_i v_i', which is an open convex combination, so δ_K(p''') ≥ r + 1. But if τ is sufficiently small, p''' ∈ W', which is a contradiction, and therefore in fact δ_K(p_1) < δ_K(p). □

8 Sinc Methods on Polyhedra

173

The claim implies that L° is convex, since given p_0, p_1 ∈ L°, if p' is on the segment (p_0, p_1) then δ_K(p') ≥ δ_K(p_0) = r and therefore p' ∈ L°. The claim also shows that the closure of L° is L, since given p_1 ∈ ∂L, choose p_0 ∈ L°. Then the segment (p_1, p_0) ⊂ L, and for every point p' on this segment δ_K(p') ≥ δ_K(p_0) = r, so p' ∈ L° and p_1 is therefore a limit point of L°.

Next we show that L° is a component of K^{(r)}. We claim that L is the convex hull of L ∩ A. Let p' ∈ L° with p' = \sum_{i=0}^{r} y_i v_i' an open convex combination. Since p' ∈ K, we have p' = \sum_{j=0}^{J} t_j w_j where t_j ≥ 0 and \sum_{j=0}^{J} t_j = 1, and where w_j ∈ A for each j. Let p' = \sum_{j=1}^{m} t_j' w_j' be an expression of this form of minimal length m, so t_j' > 0 for each j. If m = 1 then p' = w_1' ∈ L. Otherwise, t_j' < 1 for j = 1, …, m. If some w_j' is independent of the set {v_0', …, v_r'}, then letting p'' = (1 − t)w_j' + t p' for t ∈ (0, 1) we have p'' = (1 − t)w_j' + \sum_{i=0}^{r} t y_i v_i' as an open convex combination, so δ_K(p'') = r + 1. Let τ ∈ (0, 1) and set p_0 = \sum_{k=1}^{m} s_k w_k' where s_k = t_k'(1 − tτ)/(1 − τ) if k ≠ j and s_j = (t_j'(1 − tτ) − τ(1 − t))/(1 − τ). Then p' = (1 − τ)p_0 + τ p'', which shows that δ_K(p') ≥ r + 1 and contradicts p' ∈ L°. Thus w_j' ∈ H_{p'} ∩ K, and since j is arbitrary, as is p', the set L° is contained in the convex hull of L ∩ A, which gives the claim since the closure of L° is L. Since the set A is finite, there are at most finitely many distinct sets of the form L = H_p ∩ K where δ_K(p) = r, for any r.

Assume next that p ∈ W ⊂ L° (this is possible since if W ⊂ L and W = {v_r + \sum_{i=0}^{r-1} t_i(v_i − v_r) | (t_0, …, t_{r-1}) ∈ σ_r} and p = \sum_{i=0}^{r} x_i v_i, then p ∈ W' ⊂ L° where W' is defined analogously using v_i' = (1 − t)v_i + t p for each i = 0, …, r, for some 0 < t < 1). Then for some set of the form Ũ_s we have Ũ_s ∩ K^{(r)} = W. Suppose not, so that for each s > 0 there is p_s ∈ Ũ_s ∩ K^{(r)} \ W. Write p_s = ρ_s + w_s, where ρ_s = v_r + \sum_{i=0}^{r-1} y_i(v_i − v_r) ∈ W and w_s = \sum_{i=1}^{n-r} t_{i+r-1} u_i, and where p_s = v_r' + \sum_{i=0}^{r-1} z_i(v_i' − v_r') for some affinely independent set {v_0', …, v_r'}. Since p_s ∉ H_p, we have p_s ∈ L' = H_{p_s} ∩ K where L' ≠ L. There are finitely many possibilities for L', so some fixed L' contains infinitely many p_{s_j} for any sequence of s_j → 0. The sets L' and the closure of W are compact, so we can assume, by extracting subsequences, that p_{s_j} → p' ∈ L' and ρ_{s_j} → ρ in the closure of W, which lies in the closure of L°. Then p' = ρ, since |p_s − ρ_s| ≤ s \sum_{i=1}^{n-r} |u_i| for any s. If p' ∈ L'° then p' = \sum_{i=0}^{r} y_i' v_i' for some open convex combination of v_i' ∈ H_{p'}. But since p' lies in the closure of L° we must have H_{p'} = H_p, which contradicts L' ≠ L. Therefore p' ∈ ∂L', so by earlier arguments δ_K(p') < r, which then contradicts δ_K(p') = δ_K(ρ) = r. Thus we have W = Ũ_s ∩ K^{(r)} for some s > 0. Since every set W is a union of W' for which W' ⊂ L°, sets of the form W are open in K^{(r)}. Sets of the form L°, where L = H_p ∩ K and p ∈ K^{(r)}, are therefore open in K^{(r)}, are finite in number, and are connected since they are convex, and pairwise are either equal or disjoint, so they are components of K^{(r)}.

We use the sets W to construct charts in an atlas for K^{(r)}. Given W = {v_r + \sum_{i=0}^{r-1} y_i(v_i − v_r) | (y_0, …, y_{r-1}) ∈ σ_r}, define φ : W → σ_r by φ(p') = (Q^T Q)^{-1} Q^T (p' − v_r), where Q is the n × r matrix of rank r with columns the vectors v_i − v_r for i = 0, …, r − 1. If W' = {v_r' + \sum_{i=0}^{r-1} y_i(v_i' − v_r') | (y_0, …, y_{r-1}) ∈ σ_r} is another such set containing p' and φ_1 : W' → σ_r is the corresponding

174

M. Stromberg

map defined by φ_1(p') = (R^T R)^{-1} R^T (p' − v_r'), where R has columns v_i' − v_r', then the Jacobian of the map φ_1 ∘ φ^{-1} : φ(W ∩ W') → φ_1(W ∩ W') is the matrix J = (R^T R)^{-1} R^T Q. The inverse maps φ^{-1} and φ_1^{-1} have differentials the matrices Q and R respectively, so by the chain rule Q = RJ. This shows that J has rank r and is therefore invertible with inverse (Q^T Q)^{-1} Q^T R, so the charts (W, φ) and (W', φ_1) are compatible.

Given p ∈ K^{(r)} such that p = \sum_{i=0}^{r} x_i v_i is an open convex combination, let ψ be inverse to the map f defined above, so that ψ(p') = B^{-1}(p' − v_r) for any p' ∈ R^n, where B is the n × n matrix with ith column v_{i-1} − v_r if 1 ≤ i ≤ r and u_{i-r} if r + 1 ≤ i ≤ n. Then (Ũ_s, ψ) determines a chart for R^n at p ∈ Ũ_s for any s > 0. Let s and W be such that Ũ_s ∩ K^{(r)} = W. If p' ∈ W then Bt = (p' − v_r) for some t ∈ U_s = ψ(Ũ_s) and t_{i+r-1} = 0 for i = 1, …, n − r, so Bt = Qπ_{n,r}(t). Then π_{n,r}(t) = (Q^T Q)^{-1} Q^T (p' − v_r), and therefore π_{n,r} ∘ ψ | Ũ_s ∩ K^{(r)} = φ, which shows that (Ũ_s ∩ K^{(r)}, π_{n,r} ∘ ψ) is a chart for K^{(r)} at p and K^{(r)} is a regular submanifold of R^n. This completes the proof of the theorem. □

Next we will show that if K is the q-dimensional convex hull of a set A = {w_0, …, w_J} in R^n, then K has regular semi-real extensions K_θ. For 0 < θ ≤ π/2, let D(θ) be the lens

D(\theta) = \left\{ z \in \mathbf{C} \; : \; \left| \arg\!\left( \frac{z}{1 - z} \right) \right| < \theta \right\}.

If θ is restricted in this way, then D(θ) is the intersection of two disks, so it is convex. If θ = 0, we identify D(θ) with the open unit interval. If q ≥ 1, let Σ_q : C^q → C be given by summation, Σ_q(x_0, …, x_{q-1}) = \sum_{i=0}^{q-1} x_i, and define the standard open (q, θ)-simplex as

σ_q^θ = Σ_q^{-1}(D(θ)) ∩ D(θ)^q.

The proof of the next lemma is an application of the definitions and of convexity of D(θ) and is omitted.

Lemma 8.4 The set σ_q^θ is convex for all q ≥ 1, and σ_q^θ ∩ R^q = σ_q. □
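A small numerical sketch of these definitions (the function names are ours): membership in the lens D(θ) and in σ_q^θ can be tested directly from the formulas above.

```python
import cmath

def in_lens(z: complex, theta: float) -> bool:
    """z is in D(theta) iff |arg(z / (1 - z))| < theta (z distinct from 0, 1)."""
    if z == 0 or z == 1:
        return False
    return abs(cmath.phase(z / (1 - z))) < theta

def in_simplex(zs, theta: float) -> bool:
    """(z_0, ..., z_{q-1}) lies in sigma_q^theta iff every coordinate is in
    D(theta) and the sum Sigma_q(z) is in D(theta)."""
    return all(in_lens(z, theta) for z in zs) and in_lens(sum(zs), theta)

# For real arguments the lens condition reduces to 0 < x < 1, so the real
# points of sigma_q^theta are exactly the standard open simplex sigma_q:
# positive coordinates with sum less than 1, consistent with Lemma 8.4.
```
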

Set K_θ^{(0)} = K^{(0)}, viewed as a subset of C^n. If 1 ≤ r ≤ q, then K^{(r)} is a union of sets of the form

W = \left\{ v_r + \sum_{i=0}^{r-1} y_i (v_i - v_r) \; \middle| \; (y_0, \dots, y_{r-1}) \in \sigma_r \right\}  (8.3)

for which δ_K(W) = r, by the proof of Theorem 8.1. If W has the form (8.3), define W_θ as in (8.3) replacing σ_r by σ_r^θ, define K_θ^{(r)} as the union of the sets W_θ, and finally define K_θ = ∪_{r=0}^{q} K_θ^{(r)}.


Lemma 8.5 The set K_θ is a regular semi-real extension of the complex K for each θ ∈ (0, π/2].

Proof We show first that K_θ^{(r)} is a regular r-submanifold of C^n for r ≥ 1. Fix an affinely independent set {v_0, …, v_r} ⊂ K and let W be of the form (8.3) for which δ_K(W) = r. Define F : C^n → C^n by

F(t_0, \dots, t_{n-1}) = v_r + \sum_{i=0}^{r-1} t_i (v_i - v_r) + \sum_{i=1}^{n-r} t_{i+r-1} u_i,

where u_1, …, u_{n-r} ∈ R^n are independent vectors independent of the vectors v_i − v_r and where (t_0, …, t_{n-1}) ∈ C^n. Let U_s' = σ_r^θ × D_s^{n-r} for s > 0, where D_s = {z ∈ C | |z| < s}, and set Ũ_s' = F(U_s').

Assume that W ⊂ L°, where L = H_p ∩ K for any p ∈ W. By the proof of Theorem 8.1 we have Ũ_s ∩ K^{(r)} = W, where Ũ_s = F(U_s' ∩ R^n) for some s > 0. Let p' ∈ Ũ_s' ∩ K_θ^{(r)}, so p' = v_r' + \sum_{i=0}^{r-1} s_i (v_i' − v_r') = x + y, where x = v_r + \sum_{i=0}^{r-1} t_i (v_i − v_r) and y = \sum_{i=1}^{n-r} t_{i+r-1} u_i, and where v_0', …, v_r' are affinely independent vectors in K and (s_0, …, s_{r-1}) ∈ σ_r^θ. Set p_0 = Re p' = Re x + Re y. Since p_0 ∈ Ũ_s ∩ K^{(r)}, we have p_0 ∈ W, so v_r' + \sum_{i=0}^{r-1} Re(s_i)(v_i' − v_r') ∈ H_{p_0} and the v_i' satisfy v_i' = \sum_{j=0}^{r} μ_{ij} v_j, where \sum_{j=0}^{r} μ_{ij} = 1 for each i = 0, …, r. Then p' = v_r + \sum_{i=0}^{r-1} ν_i (v_i − v_r), where ν_i = (1 − \sum_{j=0}^{r-1} s_j) μ_{ri} + \sum_{j=0}^{r-1} s_j μ_{ji}. By independence of the vectors {v_i − v_r} and {u_i} it follows that ν_i = t_i for 0 ≤ i ≤ r − 1 and that t_{i+r-1} = 0 for i = 1, …, n − r, so p' = x ∈ W_θ. The set W_θ = Ũ_s' ∩ K_θ^{(r)} is therefore open in K_θ^{(r)}.

Now let W be an arbitrary set of the form (8.3). If p = v_r + \sum_{i=0}^{r-1} x_i (v_i − v_r) ∈ W_θ for x = (x_0, …, x_{r-1}) ∈ σ_r^θ, let z = (z_0, …, z_{r-1}) = Re x ∈ σ_r. Since σ_r^θ is open and convex there is x' = (x_0', …, x_{r-1}') ∈ σ_r^θ so that x = tz + (1 − t)x' for some t > 0. Set p' = v_r + \sum_{i=0}^{r-1} z_i (v_i − v_r) and v_i' = t p' + (1 − t) v_i for 0 ≤ i ≤ r, and define W' = {v_r' + \sum_{i=0}^{r-1} y_i (v_i' − v_r') | (y_0, …, y_{r-1}) ∈ σ_r}. Then W' ⊂ W and W' ⊂ L°, where L = H_{p'} ∩ K. Let W_θ' be the extension of W'. Then if p'' ∈ W_θ' we have p'' = v_r + \sum_{i=0}^{r-1} [t z_i + (1 − t) y_i](v_i − v_r) ∈ W_θ, since the vector (t z_i + (1 − t) y_i) ∈ σ_r^θ. On the other hand, we have p = v_r + \sum_{i=0}^{r-1} [t z_i + (1 − t) x_i'](v_i − v_r) ∈ W_θ', so W_θ is a union of sets open in the subspace topology on K_θ^{(r)}, of the form W_θ' = Ũ_s' ∩ K_θ^{(r)}.

Now define φ̃ : W_θ → σ_r^θ by φ̃(p') = (Q^T Q)^{-1} Q^T (p' − v_r) for p' ∈ W_θ if W_θ = {v_r + \sum_{i=0}^{r-1} y_i (v_i − v_r) | (y_0, …, y_{r-1}) ∈ σ_r^θ}, where Q is the matrix with columns {v_i − v_r}. As in the proof of Theorem 8.1, K_θ^{(r)} is a complex r-manifold with charts (W_θ, φ̃). Similar arguments also show that (Ũ_s', ψ̃) are coordinate charts for C^n at p ∈ Ũ_s', where ψ̃ is inverse to the map F, and that π_{n,r} ∘ ψ̃ | Ũ_s' ∩ K_θ^{(r)} = φ̃ if W_θ = Ũ_s' ∩ K_θ^{(r)}, so K_θ^{(r)} is a regular submanifold of C^n. Furthermore, if φ̃ : W_θ → σ_r^θ then φ̃ | W_θ ∩ K^{(r)} = φ | W. Since K_θ^{(r)} ∩ R^n = K^{(r)},


the manifold K_θ^{(r)} is a regular extension of K^{(r)}. Since the differential of the map φ̃^{-1} : σ_r^θ → C^n is the matrix Q, the manifold K_θ^{(r)} is also semi-real. □

8.3 Quadrature

The idea of integration on a set such as a polyhedron requires that the notion of area measure be well-defined, and on regular C^1 submanifolds of R^n it is, in the sense that various alternative definitions [1] of area measure then agree. However, techniques of numerical integration, such as sinc quadrature, that rely on methods from the theory of analytic functions require something more, namely that area measure be represented by a holomorphic function. For a set X ⊂ R^n, this holds if X is a version of a complex K via a pair (E, g) that is essentially real.

8.3.1 Holomorphic Area Measure

Let X ⊂ C^n and (E, K, g) ∈ R_q(X), and assume that E_g^{(r)} is a semi-real extension of X^{(r)}; by Lemmas 8.1 and 8.2 this can be arranged by taking E^{(r)} to be the inverse image of the appropriate submanifold of E_g and re-naming. Define a regular chart in E^{(r)} as a chart (V, ψ) for which V ⊂ F_E for some F ≤ K^{(r)} and (V', ψ) is a chart in K^{(r)}, where V' = V ∩ K^{(r)} and ψ(V') = ψ(V) ∩ R^r. Let (g(V'), ḡ^{-1}) be a chart at x ∈ X^{(r)}, where ḡ = (g | V') ∘ ψ^{-1} and where (V, ψ) is a regular chart in E^{(r)}. The function g̃ = (g | V) ∘ ψ^{-1} is holomorphic, and if dg̃ is the matrix with columns ∂g̃/∂z_1, …, ∂g̃/∂z_r, then the matrix dg̃^T dg̃ is invertible on ψ(V). If ψ(V) is simply-connected, then det(dg̃^T dg̃) has a holomorphic log, and either of the possible definitions of [det(dg̃^T dg̃)]^{1/2} is holomorphic on ψ(V). Since ∂g̃_k/∂z_j = ∂g̃_k/∂x_j (where as usual z_j = x_j + iy_j and the g̃_k are the holomorphic coordinate functions of g̃), and in particular ∂ḡ_k/∂x_j = ∂g̃_k/∂z_j | ψ(V'), the positive function [det(dḡ^T dḡ)]^{1/2} is the restriction of [det(dg̃^T dg̃)]^{1/2} to ψ(V'), and we fix the definition of the function [det(dg̃^T dg̃)]^{1/2} by requiring it to be positive on ψ(V'). Then the set function μ defined by

\mu(L) = \int_{\bar g^{-1}(L)} [\det(d\bar g^{\,T} d\bar g)]^{1/2}\, dx,

where dx is Lebesgue measure on R^r, is a positive measure on Borel sets in g(V'), and is represented by the real restriction of a holomorphic function. Area measure on g(V') will be termed holomorphic if it coincides with μ on compact subsets of g(V').
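As a concrete real instance of this construction (our own example, not from the text): the kernel [det(dḡ^T dḡ)]^{1/2} can be computed numerically for a parametrized patch. For a planar patch in R^3 the kernel is constant:

```python
def area_kernel(jac):
    """[det(J^T J)]^(1/2) for an n x 2 Jacobian J of a two-parameter chart."""
    # Gram matrix G = J^T J, assembled entry by entry.
    g00 = sum(row[0] * row[0] for row in jac)
    g01 = sum(row[0] * row[1] for row in jac)
    g11 = sum(row[1] * row[1] for row in jac)
    return (g00 * g11 - g01 * g01) ** 0.5

# g(u, v) = (u, v, u + v): a planar patch in R^3 with constant Jacobian.
J = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]

# G = [[2, 1], [1, 2]], det G = 3, so the kernel is sqrt(3); the image of
# the unit square under g therefore has area sqrt(3).
```
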


Theorem 8.2 Let X ⊂ C^n and (E, K, g) ∈ P_q(X). If (V, ψ) is a regular chart in E^{(r)} and V' = V ∩ K^{(r)}, then area measure μ̃ is given by

\tilde\mu(L) = \int_{\bar G^{-1}(L)} [\det(d\tilde g^{\,*} d\tilde g)]^{1/2}\, dx  (8.4)

on compact sets L ⊂ I_n ∘ g(V'), where dx is Lebesgue measure on R^r and * represents conjugate transpose. If ψ(V) is simply connected and E_g^{(r)} is a semi-real extension of X^{(r)}, then area measure is given on compact sets L ⊂ g(V') by

\mu(L) = \int_{\bar g^{-1}(L)} [\det(d\bar g^{\,T} d\bar g)]^{1/2}\, dx,  (8.5)

area measure μ is holomorphic, and the area measures of the sets L ⊂ g(V') and I_n(L) ⊂ I_n ∘ g(V') coincide.

Proof If M is a regular C^1 m-submanifold of R^n and (U, ψ) is a chart for R^n for which ψ(U ∩ M) = ψ(U) ∩ {x ∈ R^n | x = j_{m,n} ∘ π_{n,m}(x)} and (U ∩ M, π_{n,m} ∘ ψ) is a chart for M, then area measure μ is defined on compact sets L ⊂ U ∩ M by

\mu(L) = \int_{\tilde\psi(L)} \mathrm{Vol}\!\left( \frac{\partial \tilde\psi^{-1}}{\partial x_1}, \dots, \frac{\partial \tilde\psi^{-1}}{\partial x_m} \right)\!(x)\, dx,

where ψ̃ = π_{n,m} ∘ ψ with inverse ψ̃^{-1} = ψ^{-1} ∘ j_{m,n}, dx is Lebesgue measure on R^m, and the integrand represents the volume of the parallelepiped with edges the vectors ∂ψ̃^{-1}/∂x_i = (∂ψ̃_1^{-1}/∂x_i, …, ∂ψ̃_n^{-1}/∂x_i)^T. The measure μ is typically extended to noncompact sets by inner regularity.

For the case of the regular submanifold I_n(K_g^{(r)}) of R^{2n}, the relevant charts are of the form (I_n ∘ g(V'), G̅^{-1}), where V' = V ∩ K^{(r)} and (V, ψ) is a regular chart in E^{(r)}. The required parallelepiped has edges the vectors ∂G̅/∂x_j = (∂u_1/∂x_j, ∂v_1/∂x_j, …, ∂u_n/∂x_j, ∂v_n/∂x_j)^T, which are columns of the 2n × r real Jacobian J of the map G̅ : ψ(V') → R^{2n} by the proof of Lemma 8.1, and the required volume is then [det(J^T J)]^{1/2}. Let Q be the 2n × 2n block diagonal unitary matrix with diagonal entries the matrices

\frac{1}{\sqrt 2} \begin{pmatrix} 1 & i \\ i & 1 \end{pmatrix}.

Since the functions g̃_k are holomorphic, we have ∂g̃_k/∂z_j = ∂g̃_k/∂x_j = ∂u_k/∂x_j + i ∂v_k/∂x_j. Then QJ = A + B, where A and B have jth columns

\frac{1}{\sqrt 2} \left( \frac{\partial \tilde g_1}{\partial x_j}, 0, \dots, \frac{\partial \tilde g_n}{\partial x_j}, 0 \right)^{\!T} \quad\text{and}\quad \frac{i}{\sqrt 2} \left( 0, \frac{\partial \tilde g_1}{\partial x_j}, \dots, 0, \frac{\partial \tilde g_n}{\partial x_j} \right)^{\!T}

respectively, and J^T J = A^* A + B^* B = (d\tilde g^{\,*} d\tilde g).


For the submanifold K_g^{(r)} of R^n, the relevant charts are (g(V'), ḡ^{-1}), where V' = V ∩ K^{(r)} and (V, ψ) is a regular chart in E^{(r)}, and by hypothesis and the proof of Lemma 8.2 the required parallelepiped has edges the vectors

\partial \bar g/\partial x_j = (\partial u_1/\partial x_j, \dots, \partial u_n/\partial x_j)^T,

which are the restrictions of (∂g̃_1/∂z_j, …, ∂g̃_n/∂z_j)^T to ψ(V'). Therefore μ and μ̃ coincide in this case. Since dg̃ | ψ(V') is real, the area measures of L ⊂ g(V') and of I_n(L) ⊂ I_n ∘ g(V') are the same. □

8.3.2 Sinc Quadrature on Products

The conditions under which various integrals associated with a function f : X → C can be approximated, with explicit error bounds, reduce to those for approximation of integrals on cartesian products. For this purpose, and later for approximation of f, we apply the notation of [9] (see also [6]).

If d > 0, let D_d be the strip D_d = {z ∈ C | |ℑz| < d}. If q ≥ 1 and d = (d_0, …, d_{q-1}) is a vector with positive components, set D_d = ×_{i=0}^{q-1} D_{d_i}. If ψ_i : D_{d_i} → D_i is an onto injective conformal map, where D_i is a simply connected domain in C for each i = 0, …, q − 1, define ψ = ×_{i=0}^{q-1} ψ_i : D_d → D coordinatewise, where D = ×_{i=0}^{q-1} D_i. Set Γ = ψ(R^q) = ×_{i=0}^{q-1} Γ_i, where Γ_i = ψ_i(R), and set φ = ψ^{-1} = ×_{i=0}^{q-1} ψ_i^{-1} = ×_{i=0}^{q-1} φ_i : D → D_d. If h = (h_0, …, h_{q-1}) is a vector with positive components (stepsizes) and k = (k_0, …, k_{q-1}) ∈ Z^q is an integer multiindex, define the lattice point kh = (k_0 h_0, …, k_{q-1} h_{q-1}) and the sinc node z_k = ψ(kh) = (ψ_i(k_i h_i)) = (z_{k_i}^{(i)}).

Define the sets D̃_j = ×_{i=0}^{j-1} Γ_i × D_j × ×_{i=j+1}^{q-1} Γ_i (omitting empty products) for j = 0, …, q − 1, and put D̃ = ∪_{j=0}^{q-1} D̃_j; if q = 1, set D̃_0 = D_0. We will identify D̃_j with D_j × Γ̃_j, where Γ̃_j = ×_{i=0, i≠j}^{q-1} Γ_i, and abbreviate the vector (x_0, …, x_{j-1}, z, x_{j+1}, …, x_{q-1}) as (z, x̃_j), where x̃_j consists of the remaining coordinates.

Define the class A(D̃) to consist of vectors F = (F_0, …, F_{q-1}) of continuous functions F_j : D̃_j → C such that, for fixed arbitrary x̃_j ∈ Γ̃_j, the function F_j(z, x̃_j) is holomorphic in D_j. Let H(D̃) be the collection of functions F : D̃ → C such that the vector (F | D̃_j) ∈ A(D̃). It is clear that H(D̃) consists of the elements of A(D̃) all of whose components agree on Γ.


We define functionals on the set A(D̃) as follows. Let N = (N_j) be a vector of seminorms, with N_j being a seminorm on continuous functions f : Γ̃_j → C. If q = 1, then N has a single component, which we always interpret as complex modulus. If γ is a path (nonconstant piecewise differentiable continuous map of the unit interval) in D_j, set

F_{j,\gamma}(\tilde x_j) = \int_{\gamma} \left| F_j(z, \tilde x_j) \right| |dz|

for x̃_j ∈ Γ̃_j, and define the seminorm N_j(γ; F_j) = N_j(F_{j,γ}). If 0 < δ ≤ d_j and β > 0, define the paths γ_{δ,β} = {ψ_j(x + iδ) | |x| ≤ β} and γ^{β,δ} = {ψ_j(β + iy) | |y| ≤ δ}. Set

\nu_j(F_j) = \liminf_{\delta \to d_j^-, \; \beta \to \infty} \{ N_j(\gamma_{\delta,\beta}; F_j) + N_j(\gamma_{-\delta,\beta}; F_j) \}

and finally set N(F, N, D̃) = \max_{0 \le j \le q-1} \nu_j(F_j).

Definition 8.1 The class B(N, D̃) is the set of functions F ∈ A(D̃) that satisfy the following conditions.

(i) For j = 0, …, q − 1, the seminorm N_j(γ; F_j) < ∞ for every path γ ⊂ D_j.
(ii) The functional N(F, N, D̃) < ∞.
(iii) For j = 0, …, q − 1, the limit lim_{|β|→∞} F_{j,γ^{β,d_j}}(x̃_j) = 0 for every x̃_j ∈ Γ̃_j.

Define the class B̄(N, D̃) similarly, by replacing (iii) with

(iii') For each j = 0, …, q − 1, the limit lim_{|β|→∞} N_j(γ^{β,d_j}; F_j) = 0.

If F ∈ A(D̃), we will say that F satisfies a decay condition of type α > 0 with respect to N = (N_j) if there exist positive constants C and α such that for each j = 0, …, q − 1 we have N_j(F_j(x, ·)) ≤ C e^{-α|φ_j(x)|} for all x ∈ Γ_j. If F, G ∈ A(D̃), then the component-wise product FG ∈ A(D̃), by taking (FG)_j = F_j G_j, and if each G_j is nonzero then F/G ∈ A(D̃) similarly.

The symbols φ' and 1/φ' will represent the elements (φ_j' ∘ p_j) and ((1/φ_j') ∘ p_j) of A(D̃), where φ_j' is the derivative of φ_j = ψ_j^{-1} : D_j → D_{d_j} and p_j : D̃_j → D_j is projection onto the jth coordinate. For purposes of numerical integration, the family B(N_Q^{(q)}, D̃) is defined via the following set of norms. Define N_Q^{(q)} as the vector of


norms (N_j), where for f : Γ̃_j → C we have

N_j(f) = \sup_{(x_0, \dots, x_{j-1}) \in \times_{i=0}^{j-1} \Gamma_i} \; \int_{\times_{i=j+1}^{q-1} \Gamma_i} \; \prod_{i=0}^{j-1} \frac{1}{|\varphi_i'(x_i)|} \; |f(\tilde x_j)| \; dx_{j+1} \cdots dx_{q-1},

interpreting this as a sup norm if j = q − 1 and as an L^1 norm if j = 0. If N = (N_0, …, N_{q-1}) is an integer multiindex with positive components N_i, set \tilde N = max{N_i} and \underline N = min{N_i}. Define also the set of multiindices μ(N) = {k ∈ Z^q : |k_i| ≤ N_i for i = 0, …, q − 1} if q ≥ 1, and if N is a vector with no components (q = 0), set μ(N) = {∅}, where ∅ is the empty set. Finally, if β = (β_0, …, β_{q-1}) ∈ C^q, set π(β) = β_0 β_1 ⋯ β_{q-1}.

For quadrature on products we have the following, which is proved in [9]. See related earlier results for quadrature in §5.2 of [6] and in [5].

Theorem 8.3 Let F ∈ H(D̃) ∩ B(N_Q^{(q)}, D̃), and suppose F/φ' satisfies a decay condition with respect to N_Q^{(q)} of type α > 0. Then there is a constant K, which is independent of the vector N provided h = ((2π d_i/(α N_i))^{1/2}), such that

\left| \int_{\Gamma} F(x)\, dx - \pi(h) \sum_{k \in \mu(N)} \frac{F(z_k)}{\pi(\varphi'(z_k))} \right| \le K \sum_{i=0}^{q-1} e^{-(2\pi d_i \alpha N_i)^{1/2}} \prod_{j=0}^{i-1} \left( 2 N_j^{1/2} + 1 \right).

In particular, there is K' such that for each N with positive components the quadrature error is bounded by K' \tilde N^{(q-1)/2} e^{-(2\pi d \alpha \underline N)^{1/2}}, where d = \min_{0 \le i \le q-1} \{d_i\}. □

The norms Nj will not necessarily give the same results if the coordinates in Γ are reordered. This can affect the constant K, but not the form of the error term.
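A hedged numerical sketch of this product rule (our own construction, not from the text): on Γ = (0, 1)^2 with the conformal maps ψ_i(w) = e^w/(1 + e^w) one has 1/φ_i'(x) = x(1 − x), and the rule becomes a finite double sum. The stepsize choice h = (2πd/(αN))^{1/2} with d = π/2 and α = 1, and the test integrand, are illustrative assumptions.

```python
import math

def sinc_quad_2d(F, N, d=math.pi / 2, alpha=1.0):
    """Tensor-product sinc quadrature for integrals over (0,1) x (0,1).

    Nodes z_k = psi(k h) with psi(w) = e^w / (1 + e^w); the weight per
    coordinate is h / phi'(z_k) = h * z_k * (1 - z_k)."""
    h = math.sqrt(2 * math.pi * d / (alpha * N))
    psi = lambda t: 1.0 / (1.0 + math.exp(-t))
    nodes = [psi(k * h) for k in range(-N, N + 1)]
    total = 0.0
    for x in nodes:
        for y in nodes:
            total += F(x, y) * x * (1 - x) * y * (1 - y)
    return h * h * total

# Example: the integral of x*y over the unit square is exactly 1/4, and the
# quadrature error decays like exp(-(2*pi*d*alpha*N)**0.5).
approx = sinc_quad_2d(lambda x, y: x * y, N=32)
```

Note that the per-coordinate weight x(1 − x) supplies the decay condition of Theorem 8.3 for this integrand, since x(1 − x) behaves like e^{−|φ(x)|} near the endpoints.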


8.3.3 Sinc Quadrature on a Face of a Complex

Let 1 ≤ r ≤ q, set D̃ = ∪_{j=0}^{r-1} D̃_j and Γ = ×_{j=0}^{r-1} Γ_j for some choice of D_j and Γ_j ⊂ D_j as above for each j = 0, …, r − 1, and assume that Γ_j is an open segment in R for each j. Let K be a q-dimensional complex in R^m with extension E, and let F ≤ K^{(r)}. We will say that F has a rectangulation by Γ over D̃ if there are an open set O ⊂ C^r for which D̃ ⊂ O and an injective holomorphic map ρ : O → F_E such that ρ(Γ) = F. By the theory of several complex variables, the map ρ is then actually biholomorphic onto its image. We assume after this that for every rectangulation ρ : O → F_E the set O is simply connected, by the following lemma.

Lemma 8.6 If O ⊂ C^r is open and contains D̃, then there is a simply connected open set O' such that D̃ ⊂ O' ⊂ O.

Proof For each i the set D_i is contractible by a homotopy H_i : I × D_i → D_i given by H_i(t, z) = ψ_i(t ψ_i^{-1}(z)) for z ∈ D_i and t ∈ [0, 1]. Given β > 0 and 0 < δ < d_i, define the set L_{β,δ} = {z ∈ C | |ℜz| ≤ β and |ℑz| ≤ δ}. Then L_{β,δ} and L°_{β,δ} are compact and open contractible subsets of D_{d_i}, respectively. Let Γ_i(β) = ψ_i([−β, β]) and D_j(β, δ) = ψ_j(L_{β,δ}) and define

\tilde D_j(\beta, \delta) = \times_{i=0}^{j-1} \Gamma_i(\beta) \times D_j(\beta, \delta) \times \times_{i=j+1}^{r-1} \Gamma_i(\beta).

Since Γ_i(β) and D_j(β, δ) are compact, there are open sets V_0, …, V_{r-1} so that Γ_i(β) ⊂ V_i if i ≠ j and D_j(β, δ) ⊂ V_j and D̃_j(β, δ) ⊂ ×_{i=0}^{r-1} V_i ⊂ O. Let ε_i be the distance between the sets [−β, β] and ψ_i^{-1}(V_i ∩ D_i)^c for i ≠ j, and let ε_j be the distance between L_{β,δ} and ψ_j^{-1}(V_j ∩ D_j)^c. Set W_i(β) = ψ_i(L°_{β+ε_i/2, ε_i/2}) for i ≠ j and W_j(β, δ) = ψ_j(L°_{β+ε_j/2, δ+ε_j/2}) and define

\tilde W_j(\beta, \delta) = \times_{i=0}^{j-1} W_i(\beta) \times W_j(\beta, \delta) \times \times_{i=j+1}^{r-1} W_i(\beta).

Then D̃_j(β, δ) ⊂ W̃_j(β, δ). Let W̃_j be the union of the sets W̃_j(β, δ) for β > 0 and 0 < δ < d_j, and set O' = ∪_{j=0}^{r-1} W̃_j. Then D̃ ⊂ O' ⊂ O, and H : I × O' → O' is a contraction homotopy, where

H(t, \zeta_0, \dots, \zeta_{r-1}) = (\psi_0(t \psi_0^{-1}(\zeta_0)), \dots, \psi_{r-1}(t \psi_{r-1}^{-1}(\zeta_{r-1})))

for (ζ_0, …, ζ_{r-1}) ∈ O', so O' is simply connected. □

Let (E, K, g) ∈ R_q(X) and suppose that E_g is a semi-real extension of X. If F ≤ K^{(r)} and ρ : O → F_E is a rectangulation of F over D̃, then (ρ(O), ρ^{-1}) is a regular chart in E for which F ⊂ ρ(O). Therefore area measure on compact sets L ⊂ F_g


is given by

\mu(L) = \int_{(g \circ \rho)^{-1}(L)} \kappa_{g\rho}(\zeta)\, d\zeta,

where for fixed g and ρ the area kernel κ_{gρ} = [det(d(g ∘ ρ)^T d(g ∘ ρ))]^{1/2} : O → C is holomorphic. If f : F_g → C is in L^1(μ), then it is clear that for the function f_{gρ} = f ∘ g ∘ ρ we have

\int_{F_g} f\, d\mu = \int_{\Gamma} \kappa_{g\rho}(\zeta)\, f_{g\rho}(\zeta)\, d\zeta.

For numerical evaluation of these integrals it suffices that f : g ∘ ρ(D̃) → C satisfy f_{gρ} ∈ H(D̃), and the following theorem is an immediate result of the preceding one.

Theorem 8.4 Suppose (E, K, g) ∈ R_q(X), that E_g is a semi-real extension of X, and that ρ : O → F_E is a rectangulation of F ≤ K^{(r)} over D̃. If f : g ∘ ρ(D̃) → C satisfies f_{gρ} ∈ H(D̃), and if κ_{gρ} f_{gρ} ∈ B(N_Q^{(r)}, D̃) and κ_{gρ} f_{gρ}/φ' satisfies a decay condition with respect to N_Q^{(r)} of type α, then there is a constant C, which is independent of the vector N provided h = ((2π d_i/(α N))^{1/2}), such that

\left| \int_{F_g} f\, d\mu - \pi(h) \sum_{k \in \mu(N)} \frac{\kappa_{g\rho}(z_k)\, f_{g\rho}(z_k)}{\pi(\varphi'(z_k))} \right| \le C\, \tilde N^{(q-1)/2} e^{-(2\pi d \alpha \underline N)^{1/2}}.
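As an illustration of quadrature on a face (our own construction and naming, not from the text): take the 2-face F = σ_2 ⊂ R^2, rectangulated by Γ = (0, 1)^2 via the Duffy-type map ρ(x, y) = (x, y(1 − x)), whose area kernel is κ(x, y) = 1 − x. Tensor sinc quadrature with ψ_i(w) = e^w/(1 + e^w) then integrates over the triangle.

```python
import math

def simplex_sinc_quad(f, N, d=math.pi / 2, alpha=1.0):
    """Sinc quadrature over the triangle {(u, v): u, v > 0, u + v < 1},
    pulled back to (0,1)^2 by rho(x, y) = (x, y*(1-x)) with |det d rho| = 1-x."""
    h = math.sqrt(2 * math.pi * d / (alpha * N))
    psi = lambda t: 1.0 / (1.0 + math.exp(-t))
    nodes = [psi(k * h) for k in range(-N, N + 1)]
    total = 0.0
    for x in nodes:
        for y in nodes:
            u, v = x, y * (1.0 - x)   # rho maps the square onto the triangle
            kernel = 1.0 - x          # area kernel of this rectangulation
            total += f(u, v) * kernel * x * (1 - x) * y * (1 - y)
    return h * h * total

# The area of the triangle (f = 1) is 1/2.
area = simplex_sinc_quad(lambda u, v: 1.0, N=32)
```

The kernel and the per-coordinate weights together play the role of κ_{gρ}/φ' in Theorem 8.4; both decay at the boundary of the square, so the decay condition holds for bounded f.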

8.4 Approximation

Functions defined on a rectangulated complex can be approximated in a basis that has the essential features of the Sinc basis in one dimension [8], in the sense that it is an interpolating basis and is typified by errors of the form (8.2), since in this case the entire problem is referred to the corresponding one on a cartesian product.

8.4.1 Rectangulation of a Complex

In the following we assume, as in Sect. 8.3.3, that Γ is a product of (finite or infinite) segments ×_{i=0}^{q-1}(a_i, b_i). The symbol Ċ will denote the Riemann sphere C ∪ {∞}, and subsets of Ċ^n have the subspace topology from Ċ^n. If S ⊂ C^n, then S̄ is the closure of S in C^n and Ṡ is the closure of S in Ċ^n.

If γ is a set of q elements {e_0, …, e_{q-1}} ordered as indexed, list the elements of any nonempty subset τ ⊂ γ in order as τ = {e_0^{(τ)}, …, e_{|τ|-1}^{(τ)}}, by defining e_0^{(τ)} = min{e_j ∈ τ} and e_i^{(τ)} = min{e_j ∈ τ | e_{i-1}^{(τ)} < e_j} for i = 1, …, |τ| − 1, where |τ| represents the cardinality of τ. If ∅ ≠ τ ⊂ σ ⊂ γ, define i_{τ,σ} : {0, …, |τ| − 1} → {0, …, |σ| − 1} by the requirement that the jth element of τ be the i_{τ,σ}(j)th


element of σ, so

e_j^{(\tau)} = e_{i_{\tau,\sigma}(j)}^{(\sigma)}

for j = 0, …, |τ| − 1. The map i_{τ,σ} is one-to-one and order preserving, and if τ ⊂ σ ⊂ ω then i_{σ,ω} ∘ i_{τ,σ} = i_{τ,ω}, since e_j^{(τ)} = e_{i_{τ,ω}(j)}^{(ω)} = e_{i_{τ,σ}(j)}^{(σ)} = e_{i_{σ,ω}(i_{τ,σ}(j))}^{(ω)}.

If W^{(γ)} = (W_0^{(γ)}, …, W_{q-1}^{(γ)}) is any vector associated with γ and if ∅ ≠ τ ⊂ γ, define the generic projection π_{γ,τ} and an associated vector W^{(τ)} = π_{γ,τ}(W^{(γ)}) = (W_0^{(τ)}, …, W_{|τ|-1}^{(τ)}) by W_j^{(τ)} = W_{i_{τ,γ}(j)}^{(γ)} for j = 0, …, |τ| − 1, and define W^{(τ)} = π_{γ,τ}(W^{(γ)}) = ∅ if τ = ∅.

We require some maps on cartesian products. If the vectors (a_0^{(γ)}, …, a_{q-1}^{(γ)}) and

(b_0^{(γ)}, …, b_{q-1}^{(γ)}) determine Γ = Γ^{(γ)} = ×_{i=0}^{q-1}(a_i^{(γ)}, b_i^{(γ)}), set Γ^{(τ)} = ×_{i=0}^{|τ|-1} Γ_i^{(τ)}, where Γ_i^{(τ)} = (a_i^{(τ)}, b_i^{(τ)}) and a^{(τ)} = π_{γ,τ}(a^{(γ)}), and similarly for b^{(τ)}, if ∅ ≠ τ ⊂ γ. Given τ ⊂ ω ⊂ γ, define π_{ω,τ} : Γ^{(ω)} → Γ^{(τ)} by π_{ω,τ}(ζ) = ζ', where ζ_j' = ζ_{i_{τ,ω}(j)}, and the sets ω_{-∞} = {e_j^{(ω)} | a_j^{(ω)} = −∞} and ω_∞ = {e_j^{(ω)} | b_j^{(ω)} = ∞}. If τ and σ are subsets of ω that satisfy

(\omega_\infty \cup \tau) \cap \sigma = \emptyset \quad\text{and}\quad \omega_{-\infty} \subset (\sigma \cup \tau),  (8.6)

if ej if

(ω) ej

∈σ

(8.7)

∈ / τ ∪ σ.

The maps ιτ,σ ;ω and πω,τ are considered to exist in the case τ = ∅ also, with the image of ιτ,σ ;ω being a vertex of Γ (ω) by applying (8.6) and (8.7) and the image of πω,τ being viewed as an empty vector. The same names will refer to the ˙ |τ | → C ˙ q and πγ ,τ : C ˙q → C ˙ |τ | , also defined by (8.7). We will maps ιτ,σ ;γ : C use the convention that vectors and sets associated with γ will not (usually) be (γ ) (γ ) superscripted, so that Γ (γ ) = Γ and (a0 , . . . , aq−1 ) = (a0 , . . . , aq−1 ), etc. The set Γ will be viewed as a complex in Rq for which the r-dimensional faces are the sets Γ (τ,σ ;γ ) = ιτ,σ ;γ (Γ (τ ) ) whenever τ, σ ⊂ γ satisfy (8.6) and |τ | = r, with atlas containing the charts (Γ (τ,σ ;γ ) , πγ ,τ ), and the complex (τ ) Γ has a natural product extension. Let Dj = Diτ,γ (j ) and define the sets

184

M. Stromberg |τ |−1

(τ )

D (τ ) = ×j =0 Dj |τ |−1

k−1

|τ |−1

j =0

j =k+1

:(τ ) = × Γ (τ ) × D (τ ) × × Γ (τ ) , and finally D :(τ ) = and D k j k j

(r) where the r: . The complex Γ has the product extension Ep = ∪ ∪k=0 D k r=0 E dimensional components of E (r) are the sets D (τ,σ ;γ ) = ιτ,σ ;γ (D (τ ) ) whenever τ, σ satisfy (8.6), with atlas determined by charts (D (τ,σ ;γ ) , πγ ,τ ). This extension has the :(τ,σ ;γ ) ⊂ D (τ,σ ;γ ) , where D :(τ,σ ;γ ) = ιτ,σ ;γ (D :(τ ) ). The property that Γ (τ,σ ;γ ) ⊂ D (∗;γ ) (τ,σ ;γ ) : : = ∪{D | τ, σ ⊂ γ , (γ∞ ∪ τ )∩σ = ∅, γ−∞ ⊂ (σ ∪ τ )} consists set D :(γ ) together with points obtained by replacing one or more coordinates ζi of of D :(γ ) with one of the finite endpoints of Γi . ζ ∈D Suppose K is a q-complex in Rm with extension E. If Γ is also q-dimensional :(τ,σ ;γ ) ⊂ Γ (τ,σ ;γ ) for as above and E0 is an extension of Γ for which Γ (τ,σ ;γ ) ⊂ D E0 all τ, σ ∈ γ that satisfy (8.6), a piecewise invertible pair morphism ρ : (E0 , Γ ) → :(∗;γ ) . If K has such a (E, K) will be called a rectangulation of K by Γ over D rectangulation it will be called a q-cell. For purposes of applications it is easier to construct rectangulations of sets X ⊂ Rn by composing functions ρ and g where ρ is a rectangulation of some standard cell K and where (E, K, g) ∈ Rq (X) and g : (E, K) → (Eg , X) is a piecewise complex structure for X that is also piecewise invertible. We give an example of such a standard cell, the closed standard q-simplex K = σq . Let Γ = I q where I = [0, 1], and let 0 < θ ≤ π/2 and Di = D(θ ) and ψi : Dθ → D(θ ) where ψi (w) = ew /(1 + ew ) for each i and therefore 1 − D(θ ) = D(θ ). If K is any cell rectangulated by Γ that is the convex hull of a finite set in Rq then K is a quotient space of I q , suggesting that K can be obtained by identifying sets of vertices in I q with vertices of K. If usual set and if τ ⊂ γ then the function ϕτ : I q → I q mv } and we have τv ⊂ τ and σv ⊂ σ and emv ∈ / σ ∪ τ . Let p = |τ | and let v have dimension p = ∅ if p = 0 and otherwise let W p = {ζ ∈ d = |τv | and d + 1 vertices. 
Set W (τ ) p ) and Cp | πτ,τv (ζ ) ∈ Ud and ζj ∈ Dj if ej ∈ / τv }. Define E (τ,σ ;γ ) = ιτ,σ ;γ (W (τ,σ ;γ ) | τ, σ ⊂ γ , τ ∩σ = ∅}. E0 = ∪{E Lemma 8.7 If v is the set of vertices of F ≤ K and if τv ⊂ τ , σv ⊂ σ and emv ∈ / σ ∪ τ then ρ : E (τ,σ ;γ ) → FKθ is holomorphic and onto, and if τ = τv is also injective.

186

M. Stromberg

 ) where Proof The set E (τ,σ ;γ ) consists of points ζ  = (ζ0 , . . . , ζq−1

ζj =

⎧ ⎪ ⎪ ⎨ζr ,

if p > 0 and ej = er(τ ) for some 0 ≤ r ≤ p − 1

1, ⎪ ⎪ ⎩0,

if ej ∈ / σ ∪τ

if ej ∈ σ

;p if p > 0. The ith coordinate of ρ(ζ  ) is given where ζ = (ζ0 , . . . , ζp−1 ) ∈ W by (8.9) and is therefore zero if i ≤ mv and in particular ρ(ζ  ) = wq if mv = q − 1. If q − 1 ≥ j > mv then ej ∈ τv ∪ σv and ζj is 1 if ej ∈ σv and is ζr if d > 0 and ej = er(τ ) and r = iτv ,τ (s) for some s = 0, . . . , d − 1. Therefore if i = mv + 1 < q <    then ρi (ζ  ) = d−1 s=0 ζs where ζ = πτ,τv (ζ ) if d > 0 and ρi (ζ ) = 1 if d = 0, by (8.9). If q > i > mv + 1 then i > 0 so , 

ρi (ζ ) =

0, (1 − ζs )

s ζt ,

if ei−1 ∈ σv if ei−1 ∈ τv

where in the second case d > 0 and i − 1 = iτv ,γ (s) and wi ∈ v since iτv ,γ (s) = iv,w (s + 1) − 1 for s = 0, . . . , d − 1. Then ρi (ζ  ) = 0 if wi ∈ / v and otherwise ρi (ζ  ) = ξi where ,< ξi =

d−1  s=0 ζs , <  (1 − ζs ) d−1 t>s ζt ,

if i = mv + 1 if i = iv,w (s + 1) and 0 ≤ s ≤ d − 1,

#  (d) then wi ∈v, i 0 and is zero if d = 0. Then ρ(ζ ) = wμ + wi ∈v, i =μ ξi (wi − wμ ), therefore p , and the map is clearly ρ(ζ  ) ∈ FKθ and in fact ρ(E (τ,σ ;γ ) ) = FKθ by choice of W injective on E (τ,σ ;γ ) if d = p.   If f : K → C continuous on a q-cell K with rectangulation ρ : (E0 , Γ ) → (E, K) then in order to approximate f uniformly on K it is clearly sufficient to approximate the function f ◦ ρ on Γ , since there is an injective (not necessarily unique) function η : K → Γ for which ρ ◦ η is the identity on K, so that if f ρ is a uniform approximation to f ◦ ρ, then f ρ ◦ η approximates f uniformly on K. If :(∗;γ ) ) → C, then the results of the next section f extends to a function f : ρ(D make it feasible to bound the error of the approximation f ρ ◦ η especially if :(∗;γ ) , ρ), where the latter set consists of functions f : ρ(D :(∗;γ ) ) → C f ∈ H (D (τ ) : for which f ◦ ρ ◦ ιτ,σ ;γ ∈ H (D ) for all τ, σ that satisfy (8.6), and includes all piecewise holomorphic functions on E. Finally if Ω is a closed set in Rn and f : Ω → C then we can attempt to approximate f on Ω provided Ω has a decomposition of the form Ω = ∪m i=1 Xi :(∗;γ ) , ρi ). It is where each Xi is a qi -cell with rectangulation ρi and f ∈ ∩m H ( D i i=1

8 Sinc Methods on Polyhedra

187

useful to have uniform approximation of derivatives also, and this is taken up in the next section.
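The identification of the cube I^q with a simplex through products of coordinates is, in the simplest case q = 2, a Duffy-type map of the square onto a triangle, collapsing one edge to a vertex. A minimal sketch (with illustrative coordinates, not the chapter's exact ρ):

```python
# Duffy-type map of the unit square onto the standard triangle
# {x >= 0, y >= 0, x + y <= 1}; the edge z1 = 0 collapses to the origin.
def rho(z0, z1):
    return (z0 * z1, (1.0 - z0) * z1)

# Corners of the square go to vertices of the triangle.
assert rho(1.0, 1.0) == (1.0, 0.0)
assert rho(0.0, 1.0) == (0.0, 1.0)
assert rho(0.5, 0.0) == (0.0, 0.0)   # collapsed edge

# Interior points land inside the simplex (x + y = z1 <= 1).
for i in range(1, 10):
    for j in range(1, 10):
        x, y = rho(i / 10.0, j / 10.0)
        assert x >= 0 and y >= 0 and x + y <= 1.0
```

The map is onto but not injective on the collapsed edge, which is exactly the vertex-identification behavior described for the quotient I^q → K.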

8.4.2 Approximation on Products

Since approximation on a rectangulated complex reduces to a problem on rectangles, we deal with this situation next. If h > 0 is a stepsize, k is an integer and φ = ψ^{−1} : D → D_d is a conformal map of ψ(D_d) = D onto D_d, define

s(k, h) ∘ φ(x) = sin[(π/h)(φ(x) − kh)] / [(π/h)(φ(x) − kh)]

for x ∈ D. If h = (h_0, …, h_{q−1}) is a vector of positive reals (stepsizes), k = (k_0, …, k_{q−1}) ∈ Z^q is an integer multiindex, and if φ = (φ_0, …, φ_{q−1}) is a vector of conformal maps φ_i : D_i → D_{d_i}, so φ : D → D_d as in Sect. 8.3.2, define

s(k, h) ∘ φ(x) = ∏_{i=0}^{q−1} s(k_i, h_i) ∘ φ_i(x_i)

for x = (x_0, …, x_{q−1}) ∈ D. Defining D̃_i and D̃ = ∪_{i=0}^{q−1} D̃_i as in Sect. 8.3.2, we have the following results for approximation on Γ. Theorems 8.5 and 8.6 below show that even though approximants are constructed as interpolants, their derivatives approximate derivatives of the interpolated function on compact subsets of the domain of approximation. Both are higher dimensional analogues of results found in [2, 7], although even in one dimension Theorem 8.6 extends those results.

If n = (n_0, …, n_{q−1}) is a multiindex with nonnegative integer components, set |n| = n_0 + · · · + n_{q−1} and set ∂_j^i f = ∂^i f/∂x_j^i and ∂^n f = ∂^{|n|} f/∂x_0^{n_0} · · · ∂x_{q−1}^{n_{q−1}} whenever the derivatives exist unambiguously. If n and m are multiindices, write n ≤ m if n_i ≤ m_i for each i. Given n, set n(q − 1) = (0, …, 0) and let n(i) be the multiindex that agrees with n in positions i + 1 through q − 1 and is otherwise zero, for i = 0, …, q − 2. Suppose that for a given multiindex n and 0 ≤ i ≤ q − 1 there is an associated set A_{i,n_i} ⊂ Γ_i. Set A^(n) = ×_{j=0}^{q−1} A_{j,n_j}, and if q > 1 set P_i^(n) = ×_{j=0}^{i−1} Γ_j × ×_{j=i+1}^{q−1} A_{j,n_j} for each i, where P_0^(n) = ×_{j=1}^{q−1} A_{j,n_j} and P_{q−1}^(n) = ×_{j=0}^{q−2} Γ_j.

Given n and A^(n), define a vector of seminorms N^(n) = (N_i^(n)) by N_i^(n)(f) = sup_{x∈P_i^(n)} |f(x)| for f : Γ̃_i → C. If n = 0 and A_{i,n_i} = Γ_i for each i, write N_Γ^(0) for this vector.
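The tensor-product basis functions defined above inherit the cardinal (interpolation) property from the one-dimensional case: each product basis is 1 at its own grid point ψ(kh) and 0 at every other grid point. A small sketch on the unit square (illustrative, with the map φ(x) = log(x/(1 − x)) as an assumed choice):

```python
import math

def phi(x):
    return math.log(x / (1.0 - x))          # phi = psi^(-1) on (0, 1)

def s(k, h, x):
    # one-dimensional basis s(k, h) o phi at x
    t = (math.pi / h) * (phi(x) - k * h)
    return 1.0 if abs(t) < 1e-14 else math.sin(t) / t

def s_prod(k, h, x):
    # product basis: s(k, h) o phi(x) = prod_i s(k_i, h_i) o phi_i(x_i)
    return math.prod(s(ki, hi, xi) for ki, hi, xi in zip(k, h, x))

z = lambda k, h: math.exp(k * h) / (1.0 + math.exp(k * h))  # sinc points psi(kh)

h = (0.5, 0.5)
# cardinality: 1 at its own grid point, 0 at every other grid point
assert abs(s_prod((1, -2), h, (z(1, 0.5), z(-2, 0.5))) - 1.0) < 1e-12
assert abs(s_prod((1, -2), h, (z(3, 0.5), z(-2, 0.5)))) < 1e-12
```

This is what makes the approximants below interpolants: evaluating the sum at a grid point returns the data value there.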


If g_i : D_i → C is holomorphic on D_i for each i = 0, …, q − 1, define G(x) = ∏_{i=0}^{q−1} g_i(x_i).

Theorem 8.5 Suppose F ∈ A(D̃) satisfies a decay condition of type α > 0. Then there is a constant K, which is independent of the vector N = (N_0, …, N_{q−1}) provided h = ((π d_i/αN_i)^{1/2}), such that

|F(x) − Σ_{k∈μ(N)} F(z_k) s(k, h) ∘ φ(x)| ≤ K Σ_{i=0}^{q−1} N_i^{1/2} e^{−(π d_i α N_i)^{1/2}} ∏_{j=0}^{i−1} log(N_j + 1)

for all x ∈ Γ. In particular, there is K′ such that for each Ñ with positive components the approximation error is bounded by

K′ Ñ^{1/2} (log(Ñ + 1))^{q−1} e^{−(π d α Ñ)^{1/2}}.
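In one dimension the root-exponential decay of this error can be observed numerically. A minimal sketch (not the chapter's code; the interval (0, 1), the map φ(x) = log(x/(1 − x)), the stepsize h = (πd/αN)^{1/2} with d = π/2, α = 1, and the test function are all illustrative choices):

```python
import math

def phi(x):
    return math.log(x / (1.0 - x))               # conformal map of (0,1) onto R

def z(k, h):
    return math.exp(k * h) / (1.0 + math.exp(k * h))   # sinc points psi(kh)

def sinc_interp(F, N, x, d=math.pi / 2, alpha=1.0):
    h = math.sqrt(math.pi * d / (alpha * N))     # stepsize from the theorem
    total = 0.0
    for k in range(-N, N + 1):
        t = (math.pi / h) * (phi(x) - k * h)
        total += F(z(k, h)) * (1.0 if abs(t) < 1e-14 else math.sin(t) / t)
    return total

F = lambda x: x * (1.0 - x)   # analytic on the lens, vanishes at both endpoints
pts = [0.21, 0.43, 0.77]
err = lambda N: max(abs(F(x) - sinc_interp(F, N, x)) for x in pts)

assert err(32) < err(8)       # error decays as N grows
assert err(32) < 1e-3         # consistent with K*N**0.5*exp(-(pi*d*alpha*N)**0.5)
```

Doubling N roughly squares the accuracy, the hallmark of the e^{−cÑ^{1/2}} rate.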

Proof Use Theorem 8.6 below with n = 0 and g_i ≡ 1 for i = 0, …, q − 1. ∎

Theorem 8.6 Let n be a multiindex and suppose that F ∈ A^n(D̃) and g_i : D_i → C for i = 0, …, q − 1 are functions satisfying the following conditions.

(i) φ′F^(n)/g̃ ∈ B(N^(n), D̃).
(ii) F^(n)/g̃ satisfies a decay condition with respect to N^(n) of type α > 0.
(iii) On bounded sets of positive h there are constants C_{1,i,n_i} and C_{2,i,n_i} for i = 0, …, q − 1 such that

(a) sup_{x_i∈A_{i,n_i}, z∈∂D_i} |(∂^m/∂x_i^m)[g_i(x_i) sin[π φ_i(x_i)/h] / (φ_i(z) − φ_i(x_i))]| ≤ C_{1,i,n_i} h^{−m} for 0 ≤ m ≤ n_i,

(b) sup_{x_i∈A_{i,n_i}, j=0,±1,±2,…} |(∂^{n_i}/∂x_i^{n_i})[g_i(x_i) s(j, h) ∘ φ_i(x_i)]| ≤ C_{2,i,n_i} h^{−n_i}.


Then there are constants K and K_j for j = 0, …, q − 1 which are independent of N, provided h = ((π d_j/αN_j)^{1/2}), such that the quantity

|∂^n {F(x) − Σ_{k∈μ(N)} (F/G)(z_k) G(x) s(k, h) ∘ φ(x)}|

is bounded by

Σ_{j=0}^{q−1} K_j e^{−(π d_j α N_j)^{1/2}} N_j^{(n_j+1)/2} ∏_{i=0}^{j−1} N_i^{(n_i+2)/2}

and also by

K N^{(|n|+2q−1)/2} e^{−(π d α N)^{1/2}}

for all x ∈ A(n) . Proof For purposes of the proof, if 0 ≤ j ≤ q − 1 let μj (N ) be the restriction of μ(N) to the first j + 1 coordinates, so we have μj (N ) = {k ∈ Zj +1 | |kr | ≤ Nr for 0 ≤ r ≤ j }. Define μ−1 (N ) = {∅} and construct the following functions. If j = −1 and k ∈ μj (N ) set Hk (x) =H∅ (x) = N0 

(F /G)(x) −

k0 =−N0

(F /G)(zk(0) , x1 , . . . , xq−1 ) s(k0 , h0 ) ◦ φ0 (x0 ) 0

and if 0 ≤ j ≤ q − 2 and k ∈ μj (N ), set (0)

(j )

Hk (x) = (F /G)(zk0 , . . . , zkj , xj +1 , . . . , xq−1 )− Nj +1



kj +1 =−Nj +1

(j +1)

(0)

(F /G)(zk0 , . . . , zkj +1 , xj +2 , . . . , xq−1 )s(kj +1 , hj +1 ) ◦ φj +1 (xj +1 ),

(r)

where zkr = ψr (kr hr ) for 0 ≤ r ≤ q − 1. For x ∈ Γ define , Nj be an integer and 0 < δ < dj and let CJ,δ be the positively oriented contour in Dj consisting of paths γ±δ,β and γ ±β,δ defined via ψj as in Sect. 8.3.2, where β = (2J + 1)hj /2. (j −1) For fixed j let zk = (zk(0) , . . . , zkj −1 ) if j ≥ 1 and : xj = (xj +1 , . . . , xq−1 ) if 0 j ≤ q − 2, omitting these in the following if j = 0 or j = q − 1 respectively. Then Hk (x)G(x)s(k, h) ◦ φ(x) = gj (xj ) 

j −1

! gi (xi )s(ki , hi ) ◦ φi (xi ) ×

i=0

sin[π φj (xj )/ hj ] 2π i



(F / gj )(zk , z, : xj )φj (z) CJ,δ

[φj (z) − φj (xj )] sin[π φj (z)/ hj ]

 −N j −1 

 9 J  (j ) (F / gj )(zk , zl , : + xj )s(l, hj ) ◦ φj (xj )

l=−J (j )

by residues, where zl ∂

n



l=Nj +1

= ψj (lhj ). Therefore, 



l=−J

1 2π i

 CJ,δ

j −1

! ∂ ni ni gi (xi )s(ki , hi ) ◦ φi (xi ) × ∂x i i=0

 gj (zk , z, : ∂ n(j ) F / xj )φj (z) Δnj (xj , z) dz− sin[π φj (z)/ hj ]

Hk (x)G(x)s(k, h) ◦ φ(x) =

 −N j −1 

dz−

 9 J 

 ! ∂ nj (j ) n(j ) ∂ F / gj (zk , zl , : + xj ) nj gj (xj )s(l, hj ) ◦ φj (xj ) ∂xj l=Nj +1 (8.11)


where

Δ_{n_j}(x_j, z) = (∂^{n_j}/∂x_j^{n_j})[g_j(x_j) sin[π φ_j(x_j)/h_j] / (φ_j(z) − φ_j(x_j))].

If M > 0, set Γ_j(M) = {x_j ∈ Γ_j | |φ_j(x_j)| ≤ M} = ψ_j([−M, M]). By construction of C_{J,δ}, we can select and fix ρ > 0 so that for x_j ∈ Γ_j(M) and |ζ − x_j| < ρ we have |φ_j(z) − φ_j(ζ)| ≥ d_j/2 for z ∈ γ_{±δ,β} and |φ_j(z) − φ_j(ζ)| ≥ (β − M)/2 for z ∈ γ̄_{±β,δ}, for sufficiently large J and δ sufficiently near d_j. Then Cauchy's estimates applied to the functions f_z(ζ) = 1/(φ_j(z) − φ_j(ζ)) give

|(∂^i/∂x_j^i)[1/(φ_j(z) − φ_j(x_j))]| ≤ i! ρ^{−i} τ_z

for z ∈ C_{J,δ} and x_j ∈ Γ_j(M), where

τ_z = 2/d_j  if z ∈ γ_{±δ,β},
τ_z = 2/(β − M)  if z ∈ γ̄_{±β,δ}.

Now apply Leibniz' formula to Δ_{n_j}(x_j, z) to get

|Δ_{n_j}(x_j, z)| ≤ |Δ_{n_j}(x_j, w)| + |φ_j(w) − φ_j(z)| Σ_{i=0}^{n_j} \binom{n_j}{i} |Δ_{n_j−i}(x_j, w)| |(∂^i/∂x_j^i)[1/(φ_j(z) − φ_j(x_j))]|

for z ∈ C_{J,δ}, x_j ∈ Γ_j and w ∈ ∂D_j. Then

|Δ_{n_j}(x_j, z)| ≤ C_{1,j,n_j} h_j^{−n_j} + n_j!(ρ^{−1} + h_j^{−1})^{n_j} C_{1,j,n_j} τ_z inf_{w∈∂D_j} |φ_j(w) − φ_j(z)|

for x_j ∈ A_{j,n_j} ∩ Γ_j(M) by (iii).
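The Cauchy estimates invoked here are the classical derivative bounds |f^(i)(ζ_0)| ≤ i! ρ^{−i} sup_{|ζ−ζ_0|=ρ} |f|. A quick numerical sanity check on a concrete function (an illustrative sketch, not tied to the maps φ_j of the proof):

```python
import math

# Cauchy's estimate: |f^(i)(0)| <= i! * rho^(-i) * max_{|z| = rho} |f(z)|.
# For f(z) = 1/(a - z) with a > rho > 0: f^(i)(0) = i!/a^(i+1), and the
# maximum of |f| on the circle |z| = rho is attained at z = rho: 1/(a - rho).
a, rho = 2.0, 1.5
for i in range(6):
    deriv = math.factorial(i) / a ** (i + 1)
    bound = math.factorial(i) * rho ** (-i) / (a - rho)
    assert deriv <= bound + 1e-12
```

In the proof the same device converts a sup bound on 1/(φ_j(z) − φ_j(ζ)) over a ρ-neighborhood into bounds on all its x_j-derivatives.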

Given ε > 0, we have |Δ_{n_j}(x_j, z)| ≤ C_{1,j,n_j} h_j^{−n_j} + ε for z ∈ C_{J,δ} and x_j ∈ A_{j,n_j} ∩ Γ_j(M), provided J is sufficiently large and d_j − δ is sufficiently small. Now for z ∈ γ_{±δ,β} we have |sin[π φ_j(z)/h_j]| ≥ sinh(π δ/h_j), and for z ∈ γ̄_{±β,δ} we have |sin[π φ_j(z)/h_j]| ≥ 1. Then

|∫_{γ̄_{±β,δ}} (∂^{n(j)} F/g̃_j)(z_k, z, x̃_j) φ′_j(z) / sin[π φ_j(z)/h_j] · Δ_{n_j}(x_j, z) dz| ≤ (C_{1,j,n_j} h_j^{−n_j} + ε) ∫_{γ̄_{±β,δ}} |(∂^{n(j)} F/g̃_j)(z_k, z, x̃_j) φ′_j(z)| |dz| ≤ ε

for all sufficiently large J, by condition (iii) of Definition 8.1. Since F^(n)/g̃ satisfies a decay condition, we have

|(∂^{n(j)} F/g̃_j)(z_k, z_l^(j), x̃_j)| ≤ N_j^(n)((∂^{n(j)} F/g̃_j)(·, z_l^(j), ·)) ≤ C e^{−α|l| h_j}

for x̃_j ∈ ×_{i=j+1}^{q−1} A_{i,n_i}, so

(Σ_{l=−J}^{−N_j−1} + Σ_{l=N_j+1}^{J}) |(∂^{n(j)} F/g̃_j)(z_k, z_l^(j), x̃_j)| ≤ 2C e^{−αN_j h_j}/(α h_j).

Therefore we can choose J sufficiently large and δ sufficiently near d_j that

|∂^n [H_k(x) G(x) s(k, h) ∘ φ(x)]| ≤ ∏_{i=0}^{j−1} (C_{2,i,n_i} h_i^{−n_i}) × { ((C_{1,j,n_j} h_j^{−n_j} + ε)(1 + ε)/(2π sinh(π d_j/h_j))) [N_j^(n)(γ_{−δ,β}; (φ′F^(n)/g̃)_j) + N_j^(n)(γ_{δ,β}; (φ′F^(n)/g̃)_j)] + 2C(C_{2,j,n_j} h_j^{−n_j} + ε) e^{−αN_j h_j}/(α h_j) }

for x ∈ A^(n) with x_j ∈ Γ_j(M), by (iii). Applying the definition of N(φ′F^(n)/g̃, N^(n), D̃) and the fact that ε and M are arbitrary, we have

|∂^n [H_k(x) G(x) s(k, h) ∘ φ(x)]| ≤ ∏_{i=0}^{j} h_i^{−n_i} { C_{1,j,n_j} N(φ′F^(n)/g̃, N^(n), D̃)/(2π sinh(π d_j/h_j)) + 2C C_{2,j,n_j} e^{−αN_j h_j}/(α h_j) } ∏_{i=0}^{j−1} C_{2,i,n_i}

for x ∈ A^(n). If h_i = (π d_i/αN_i)^{1/2} we have

|∂^n [H_k(x) G(x) s(k, h) ∘ φ(x)]| ≤ C̃_j e^{−(π d_j α N_j)^{1/2}} N_j^{(n_j+1)/2} ∏_{i=0}^{j−1} N_i^{n_i/2}

where

C̃_j = { C_{1,j,n_j} N(φ′F^(n)/g̃, N^(n), D̃)/(π(1 − e^{−2(π d_j α)^{1/2}})) + 2C C_{2,j,n_j}/(π d_j α)^{1/2} } ∏_{i=0}^{j−1} C_{2,i,n_i} ∏_{i=0}^{j} (α/π d_i)^{n_i/2}


provided N_i ≥ 1 for each i. Since |μ_{j−1}(N)| = ∏_{i=0}^{j−1}(2N_i + 1), summing these bounds over k ∈ μ_{j−1}(N) and then over j yields the stated estimates. ∎

If Γ_i = (a, b) is a finite interval and K > 0 is an arbitrary parameter, we can take φ_i(x) = log[K(x − a)/(b − x)], and it is readily shown that |φ_i^(m)(x)| ≤ (m − 1)!(b − a)^m |(x − a)(b − x)|^{−m} for m ≥ 1. If Γ_i is a semi-infinite interval such as (a, ∞) or (−∞, b), then φ_i(x) = log K(x − a) or φ_i(x) = −log K(b − x), respectively, satisfies (iii′). The following is a corollary to Theorem 8.6.

Theorem 8.7 Let F ∈ A^n(D̃) and suppose that for each i = 0, …, q − 1, (iii′) is satisfied. Suppose the functions g_i satisfy

|d^m/dx^m g_i(x)| ≤ D_{m,i} |w_i(x)||v_i(x)|^{−m}

for m ≥ 0, in which w_i is a fixed function, D_{m,i} is constant and v_i is as in (iii′). Let A_{i,n_i} be a set on which sup_{x∈A_{i,n_i}} |w_i(x)||v_i(x)|^{−k} exists for 0 ≤ k ≤ n_i, and suppose that for this choice of g_i and A_{i,n_i} the function F satisfies (i) and (ii) of Theorem 8.6. Then there are constants K and K_j for j = 0, …, q − 1 which are independent of N, provided h = ((π d_j/αN_j)^{1/2}), such that the quantity

|∂^n {F(x) − Σ_{k∈μ(N)} (F/G)(z_k) G(x) s(k, h) ∘ φ(x)}|


is bounded by

Σ_{j=0}^{q−1} K_j e^{−(π d_j α N_j)^{1/2}} N_j^{(n_j+1)/2} ∏_{i=0}^{j−1} [N_i^{n_i/2} log(N_i + 1)]

and also by

K N^{(|n|+1)/2} [log(N + 1)]^{q−1} e^{−(π d α N)^{1/2}}

for all x ∈ A^(n).

Proof We first verify (iii)(a) and (iii)(b) of Theorem 8.6. For fixed i consider the functions σ ∘ η(x) and σ_j ∘ η(x) for j = 0, ±1, ±2, …, where σ(t) = sin t/(φ_i(z) − (h/π)t) for fixed but arbitrary z ∈ ∂D_i, where σ_j(t) = sin(t − jπ)/(t − jπ), and where η(x) = (π/h)φ_i(x). Suppose 0 < h ≤ B and let t be in the strip |t| ≤ d in C. Then |σ_j(t)| ≤ sinh d/d for every j, and since |ht/π| ≤ Bd/π and |φ_i(z)| = d_i we have |σ(t)| ≤ cosh d/(d_i − Bd/π) provided d < π d_i/B, which we assume. By the Cauchy inequalities the mth derivatives σ^(m)(t) and σ_j^(m)(t) satisfy uniform bounds on R of the form

|σ^(m)(t)| ≤ m! d^{−m} cosh d/(d_i − Bd/π)

and

|σ_j^(m)(t)| ≤ m! d^{−m−1} sinh d

for all m ≥ 0. Let σ_*(t) represent either σ or σ_j and let K_{m,0} represent the corresponding bound on |σ_*^(m)|. By (iii′) the mth derivatives of η satisfy a bound of the form |η^(m)(x)| ≤ C_m h^{−1}|v_i(x)|^{−m} for x ∈ Γ_i and m ≥ 1. Apply Leibniz' formula to get

d^n/dx^n [σ_*^(m) ∘ η(x)] = Σ_{k=0}^{n−1} \binom{n−1}{k} η^{(n−k)}(x) · d^k/dx^k [σ_*^{(m+1)} ∘ η(x)]


for n ≥ 1 and all m ≥ 0. By induction on n we have

|d^n/dx^n [σ_*^(m) ∘ η(x)]| ≤ K_{m,n} h^{−n} |v_i(x)|^{−n}

for x ∈ Γ_i for all m, n ≥ 0, where K_{m,n} = Σ_{k=0}^{n−1} \binom{n−1}{k} C_{n−k} K_{m+1,k} B^{n−1−k} if n ≥ 1. Apply the assumption on g_i and Leibniz again to get

|d^m/dx^m [g_i(x) σ_* ∘ η(x)]| ≤ E_{i,m} |w_i(x)||v_i(x)|^{−m} h^{−m}

where E_{i,m} = Σ_{k=0}^{m} \binom{m}{k} D_{k,i} K_{0,m−k} B^k. Now set

C_{i,m} = E_{i,m} sup_{x∈A_{i,n_i}} |w_i(x)||v_i(x)|^{−m}

for 0 ≤ m ≤ n_i, set C_{1,i,n_i} = max_{0≤m≤n_i} C_{i,m}, and C_{2,i,n_i} = C_{i,n_i}, choosing K_{m,n} according to the form of σ_*. Thus (iii)(a), (b) of Theorem 8.6 are satisfied. The remainder of the proof is the same, except as follows. Apply (8.11) and subsequent calculations to get

|∂^n {H_k(x) G(x) s(k, h) ∘ φ(x)}| ≤ C_j e^{−(π d_j α N_j)^{1/2}} N_j^{(n_j+1)/2} ∏_{i=0}^{j−1} |(∂^{n_i}/∂x_i^{n_i})[g_i(x_i) s(k_i, h_i) ∘ φ_i(x_i)]|

where

C_j = { C_{1,j,n_j} N(φ′F^(n)/g̃, N^(n), D̃)/(π(1 − e^{−2(π d_j α)^{1/2}})) + 2C C_{2,j,n_j}/(π d_j α)^{1/2} } (α/π d_j)^{n_j/2}

for k ∈ μ_{j−1}(N). We will show that given an integer N ≥ 1 we have

Σ_{j=−N}^{N} |σ_j^(m)(t)| ≤ K′_{m,0} log(N + 1)   (8.12)

for all t ∈ R, where K′_{m,0} is a constant independent of N, for all m ≥ 0. The usual Leibniz arguments then show that

Σ_{j=−N}^{N} |d^m/dx^m [σ_j ∘ η(x)]| ≤ K′_{0,m} h^{−m} |v_i(x)|^{−m} log(N + 1),
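The logarithmic growth asserted in (8.12) is easy to observe numerically: the sum of |sinc| values over 2N + 1 translates grows, but only like log(N + 1). A small sketch (illustrative, for m = 0 and a fixed t between nodes):

```python
import math

def cardinal_sum(N, t):
    # sum_{j = -N..N} |sinc(t - j*pi)|, the quantity bounded in (8.12) for m = 0
    total = 0.0
    for j in range(-N, N + 1):
        u = t - j * math.pi
        total += 1.0 if abs(u) < 1e-14 else abs(math.sin(u) / u)
    return total

t = math.pi / 2          # midway between nodes, near the worst case
S10, S1000 = cardinal_sum(10, t), cardinal_sum(1000, t)
assert S10 < S1000                        # the sum grows with N ...
assert S1000 < 3.0 * math.log(1001.0)     # ... but only like log(N + 1)
```

At t = π/2 each term is 1/(π|j − 1/2|), so the sum behaves like (2/π) Σ 1/(m − 1/2) ≈ (2/π) log N, well inside the asserted bound.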


that

Σ_{j=−N}^{N} |d^m/dx^m [g_i(x) σ_j ∘ η(x)]| ≤ E_{i,m} h^{−m} |w_i(x)||v_i(x)|^{−m} log(N + 1),

and therefore that

Σ_{k_i=−N_i}^{N_i} |d^{n_i}/dx_i^{n_i} [g_i(x_i) s(k_i, h_i) ∘ φ_i(x_i)]| ≤ C_{i,n_i} N_i^{n_i/2} log(N_i + 1)

on A_{i,n_i}, so finally we have the bound

Σ_{k∈μ_{j−1}(N)} |∂^n {H_k(x) G(x) s(k, h) ∘ φ(x)}| ≤ K_j e^{−(π d_j α N_j)^{1/2}} N_j^{(n_j+1)/2} ∏_{i=0}^{j−1} [N_i^{n_i/2} log(N_i + 1)]

λ_i, and n̄_i = n_i otherwise. For any n ≤ ℓ define

P_n(ζ) = ∏_{n_i=λ_i>ρ_i} (1 − ζ_i)^{λ_i} ∏_{n_i=ρ_i>λ_i} ζ_i^{ρ_i},

and finally define the following sets of functions.


Definition 8.4 Given λ^(γ) and ρ^(γ) and ℓ ≥ ν^(γ) and 0 < α ≤ 1 and ω ⊂ γ, the class H^ω_{ℓ,α}(D̃^(γ); λ^(γ), ρ^(γ)) consists of those f that satisfy the following for each n ≤ ℓ^ω.

(1) The function f ∈ H^n(D̃^(γ)), and P_n ∂^n f has a continuous extension to D̃^(γ,n).
(2) For each compact subset L of D̃^(γ,n) there are constants K_i = K_i(L, n) such that for ζ ∈ L
 (i) if n_i ≤ λ_i then |∂^n f(ζ) − (∂^n f) ∘ π̂_{γ\{e_i},∅;γ}(ζ)| ≤ K_i |ζ_i|^α, and
 (ii) if n_i ≤ ρ_i then |∂^n f(ζ) − (∂^n f) ∘ π̂_{γ\{e_i},{e_i};γ}(ζ)| ≤ K_i |1 − ζ_i|^α.
(3) For each compact subset L of D̃^(γ,n) there are constants K̄_i = K̄_i(L, n) such that for ζ ∈ L
 (i) if n_i = λ_i > ρ_i then |P_n(ζ) ∂^n f(ζ)| ≤ K̄_i |1 − ζ_i|^α, and
 (ii) if n_i = ρ_i > λ_i then |P_n(ζ) ∂^n f(ζ)| ≤ K̄_i |ζ_i|^α.

Denote this class by H_{ℓ,α} if ω = ∅. By condition (1), ∂^n f has a continuous extension to D̃^(γ,n) for each n ≤ ℓ^ω, since P_n(ζ) = 0 only if ζ_i = 1 for some n_i = λ_i > ρ_i or ζ_i = 0 for some n_i = ρ_i > λ_i; thus P_n ≠ 0 on D̃^(γ,n).

For the main approximation result we need the next two lemmas, for which it is assumed that the functions H_{j,i}^(r) are polynomials constructed as above.

Lemma 8.13 If the function f ∈ H_{ℓ,α}(D̃^(γ); λ^(γ), ρ^(γ)), then we also have (I − L_j)f ∈ H_{ℓ,α}(D̃^(γ); λ^(γ), ρ^(γ)) for each j.

Proof If n_k ≤ λ_k consider one term of L_j f and write

∂^n {H_{j,i}^(r)(ζ_j) ∂_j^i f|_{ζ_j=r}} − ∂^n {H_{j,i}^(r)(ζ_j) ∂_j^i f|_{ζ_j=r}} ∘ π̂_{γ\{e_k},∅;γ} = F_1 + F_2

where

F_1 = [∂_j^{n_j} H_{j,i}^(r)(ζ_j) − ∂_j^{n_j} H_{j,i}^(r)(ζ_j) ∘ π̂_{γ\{e_k},∅;γ}] ∂^{n[j,i]} f ∘ π̂_{γ\{e_j},σ̃;γ}

and

F_2 = ∂_j^{n_j} H_{j,i}^(r)(ζ_j) ∘ π̂_{γ\{e_k},∅;γ} G

where

G = ∂^{n[j,i]} f − ∂^{n[j,i]} f ∘ π̂_{γ\{e_k},∅;γ} ∘ π̂_{γ\{e_j},σ̃;γ},

where σ̃ = ∅ if r = 0 and σ̃ = {e_j} if r = 1, and where n[j, i] has mth entry n_m if m ≠ j and i if m = j. Now if σ̃ = ∅ and i ≤ λ_j, or if σ̃ = {e_j} and i ≤ ρ_j, then π̂_{γ\{e_j},σ̃;γ} : D̃^(γ,n) → D̃^(γ,n[j,i]) and π̂_{γ\{e_j},σ̃;γ}(L) is a compact subset of D̃^(γ,n[j,i]) if L is a compact subset of D̃^(γ,n). Therefore by Definition 8.4, |G(ζ)| ≤ K|ζ_k|^α if n_k ≤ λ_k, for some K. Now |F_1(ζ)| ≤ C|ζ_k|^α for some C

if k = j, because ∂_j^{n_j} H_{j,i}^(r)(ζ_j) is Lipschitz of order α and ∂^{n[j,i]} f is bounded on π̂_{γ\{e_j},σ̃;γ}(L), and otherwise F_1 = 0. In either case |F_1 + F_2| ≤ K_0|ζ_k|^α for some K_0 if n_k ≤ λ_k. We get a similar result for (2)(ii) if n_k ≤ ρ_k.

If either i ≠ λ_j or i = λ_j ≤ ρ_j, and either i ≠ ρ_j or i = ρ_j ≤ λ_j, then n̄[j, i] = n[j, i], so if L is a compact subset of D̃^(γ,n) then π̂_{γ\{e_j},σ̃;γ}(L) is a compact subset of D̃^(γ,n̄[j,i]) = D̃^(γ,n[j,i]). If either i = λ_j > ρ_j or i = ρ_j > λ_j, then n̄[j, i]_k = 0 if k = j and is n_k if k ≠ j, so n̄[j, i] ≤ n[j, i]. By Lemma 8.10, D̃^(γ,n[j,i]) ⊂ D̃^(γ,n̄[j,i]), and again π̂_{γ\{e_j},σ̃;γ}(L) is a compact subset of D̃^(γ,n̄[j,i]). Therefore |P_{n̄[j,i]}(ζ) ∂^{n̄[j,i]} f(ζ)| ≤ K̄_k|1 − ζ_k|^α by Definition 8.4 (3)(i) for some K̄_k if n_k = λ_k > ρ_k and k ≠ j, since n̄[j, i]_k = n_k in this case, for any ζ ∈ π̂_{γ\{e_j},σ̃;γ}(L). If ζ = π̂_{γ\{e_j},σ̃;γ}(ζ′) for some ζ′ ∈ L, then P_{n̄[j,i]}(ζ) = P_n(ζ) since ζ_j = 0 if σ̃ = ∅ and ζ_j = 1 if σ̃ = {e_j}. If ζ′ ∈ L then P_n(ζ′) = (1 − ζ_j)^{λ_j} P_n(ζ) if n_j = λ_j > ρ_j, is (ζ_j)^{ρ_j} P_n(ζ) if n_j = ρ_j > λ_j, and is P_n(ζ) otherwise. Thus in any case |P_n(ζ′) ∂^{n̄[j,i]} f|_{ζ_j=r}(ζ′)| ≤ K̄_k|1 − ζ_k|^α on L whenever n_k =

λ_k > ρ_k for any k, and similarly for (3)(ii). Since ∂_j^{n_j} H_{j,i}^(r)(ζ_j) is bounded on any compact set, we have (3)(i) and (3)(ii) for the functions H_{j,i}^(r)(ζ_j) ∂_j^i f|_{ζ_j=r}, thus for each term of L_j f. Thus L_j f satisfies Definition 8.4 for any j. ∎

If δ = (δ_0, …, δ_{q−1}) and r is a multiindex, set δ^r = ∏_{i=0}^{q−1} δ_i^{r_i}. If δ_i ∈ (0, 1) for each i, define A^(γ,n)(δ) as A^(γ,n) where

A_{i,n_i}^(γ) = (0, 1)  if n_i ≤ min{λ_i, ρ_i},
A_{i,n_i}^(γ) = (0, 1 − δ_i)  if ρ_i < n_i ≤ λ_i,
A_{i,n_i}^(γ) = (δ_i, 1)  if λ_i < n_i ≤ ρ_i,
A_{i,n_i}^(γ) = (δ_i, 1 − δ_i)  if max{λ_i, ρ_i} < n_i,

and let

A^(γ,n)[D_j^(γ)(θ)] = ×_{i=0}^{j−1} A_{i,n_i}^(γ) × D_j^(γ)(θ) × ×_{i=j+1}^{|γ|−1} A_{i,n_i}^(γ)

for n ≤ ℓ.

Lemma 8.14 If f ∈ H^ω_{ℓ,α}(D̃^(γ)(θ); λ^(γ), ρ^(γ)) and e_j ∉ ω, then (I − L_j)f = ζ_j^{λ_j}(1 − ζ_j)^{ρ_j} f^(σ), where f^(σ) ∈ H^σ_{ℓ,α}(D̃^(γ)(θ); λ^(γ), ρ^(γ)) and σ = ω ∪ {e_j}, and if ℓ ≥ ν_σ^(γ) then ζ_i^{−α}(1 − ζ_i)^{−α} ∂^n f^(σ) is bounded on A^(γ,n)[D_i^(γ)(θ_0)] for each e_i ∈ σ, n ≤ ℓ^σ and θ_0 < θ.

Proof Suppose f ∈ H^ω_{ℓ,α}(D̃^(γ); λ^(γ), ρ^(γ)) where D̃^(γ) = D̃^(γ)(θ) and e_j ∉ ω; set σ = ω ∪ {e_j} and let n ≤ ℓ^σ. Let 0 ≤ r ≤ λ_j, so that n[j, r] ≤ ℓ^ω and ∂^{n[j,r]} f has a continuous extension to D̃^(γ,n[j,r]). Now γ_{n[j,r],λ} = γ_{n,λ}, and γ_{n[j,r],ρ} = γ_{n,ρ} \ {e_j} if r > ρ_j and is


γ_{n,ρ} otherwise. The set D̃_{j,1}^(γ,n) = {ζ ∈ D̃^(γ,n) | ζ_j = 1} consists of the union of the sets π̂_{τ,σ;γ}(D̃^(γ)) for σ ⊂ γ_{n,ρ}, (σ ∪ τ)^c ⊂ γ_{n,λ}, σ ∩ τ = ∅ and e_j ∈ σ, so D̃_{j,1}^(γ,n) ⊂ D̃^(γ,n[j,r]) for each r. Also, π̂_{γ\{e_j},{e_j};γ} ∘ π̂_{τ,σ;γ} = π̂_{τ\{e_j},{e_j}∪σ\{e_j};γ}, so

π̂_{γ\{e_j},{e_j};γ}(D̃^(γ,n)) = π̂_{γ\{e_j},{e_j};γ}(D̃_{j,1}^(γ,n))

and therefore

D̃^(γ,n) = D̃_{j,1}^(γ,n) ∪ π̂_{γ\{e_j},{e_j};γ}(D̃_{j,1}^(γ,n)).

Fix ζ ∈ D̃_{j,1}^(γ,n) and assume first that ζ = (ζ_j, ζ̃_j), where ζ_j ∈ D_j and ζ̃_j ∈ Γ̃_j. Set F(z) = ∂^n f(z, ζ̃_j) for z ∈ D_j and note that for all i ≥ 1, since f ∈ H^n(D̃^(γ)), the function (ζ_j − z)^i ∂_z^i F(z)/i! is a primitive of the function (ζ_j − z)^i ∂_z^{i+1} F(z)/i! − (ζ_j − z)^{i−1} ∂_z^i F(z)/(i − 1)!.

For δ ≥ 0 the segment between ζ and the point (δζ_j, ζ̃_j) is in D̃_{j,1}^(γ,n), and for δ > 0 we have by a finite induction that

∫_{δζ_j}^{ζ_j} ((ζ_j − z)^i/i!) ∂_z^{i+1} F(z) dz = F(ζ_j) − Σ_{r=0}^{i} ((1 − δ)^r ζ_j^r/r!) ∂_z^r F(δζ_j).

Assume first that λ_j ≥ 1. Then by the continuity assumptions on f, the derivatives ∂_z^r F(z) are continuous on the segment [0, ζ_j] for 0 ≤ r ≤ λ_j, so we have the representation

F(ζ_j) = ∂^n f(ζ) = Σ_{i=0}^{λ_j} (ζ_j^i/i!) ∂^{n[j,i]} f|_{ζ_j=0} + (ζ_j^{λ_j}/(λ_j − 1)!) ∫_0^1 (1 − t)^{λ_j−1} [∂^{n[j,λ_j]} f(ζ_j t, ζ̃_j) − ∂^{n[j,λ_j]} f(0, ζ̃_j)] dt.   (8.26)

On the other hand, if ζ = (ζ_k, ζ̃_k), where ζ_k ∈ D_k and ζ̃_k ∈ Γ̃_k, the function f_{ζ_k}(ζ̃_k) is differentiable on Γ̃_k. Let n′_i = n_i if i ≠ k and n′_i = 0 if i = k, and let F(z) = ∂^{n′} f(ζ_k, ζ̃_k)|_{ζ_j=z}, that is, fixing coordinates other than the jth at those of ζ. Then (ζ_j − z)^i ∂_z^i F(z)/i! is a primitive of the function (ζ_j − z)^i ∂_z^{i+1} F(z)/i! − (ζ_j − z)^{i−1} ∂_z^i F(z)/(i − 1)!


for i ≥ λ_j − 1, and we now get the representation

∂^{n′} f_{ζ_k}(ζ̃_k) = Σ_{i=0}^{λ_j} (ζ_j^i/i!) ∂^{n′[j,i]} f_{ζ_k}(ζ̃_k)|_{ζ_j=0} + (ζ_j^{λ_j}/(λ_j − 1)!) × ∫_0^1 (1 − t)^{λ_j−1} [∂^{n′[j,λ_j]} f_{ζ_k}(ζ_j t; ζ̃_k) − ∂^{n′[j,λ_j]} f_{ζ_k}(ζ̃_k)|_{ζ_j=0}] dt   (8.27)

where ∂^{n′[j,λ_j]} f_{ζ_k}(ζ_j t; ζ̃_k) = ∂_z^{λ_j} F(z)|_{z=ζ_j t}. In this case the left side of (8.27) and each term in the sum is holomorphic as a function of ζ_k, as is the integrand if coordinates other than the kth are fixed, so the integral is holomorphic as a function of ζ_k. Differentiating (8.27) n_k times with respect to ζ_k, we get the representation (8.26), which is therefore valid on all of D̃_{j,1}^(γ,n), provided we now interpret ζ̃_j to mean only coordinates other than the jth.

Set f^(n,0) = ∂^n {H_{j,0}^(0) f − L^(0) f}, which is the same as H_{j,0}^(0) ∂^n f − ∂^n L^(0) f if n ≤ ℓ^σ. If λ_j ≥ 1 then

f^(n,0)(ζ) = Σ_{i=0}^{λ_j} Q_{j,i}^(0)(ζ_j) ∂^{n[j,i]} f|_{ζ_j=0} + (ζ_j^{λ_j} H_{j,0}^(0)(ζ_j)/(λ_j − 1)!) ∫_0^1 (1 − t)^{λ_j−1} [∂^{n[j,λ_j]} f(ζ_j t, ζ̃_j) − ∂^{n[j,λ_j]} f(0, ζ̃_j)] dt   (8.28)

where Q_{j,i}^(0)(ζ) = ζ^i H_{j,0}^(0)(ζ)/i! − H_{j,i}^(0)(ζ).

(0)

By properties of Hj,i we have d m /dζ m Qj,i (ζ )|ζj =0 = 0 for m = 0, . . . , sρj + (1−s)λj for i = 0, . . . , λj and s = 0, 1 so Qj,i (ζ ) is divisible by ζ λj +1 (1−ζ )ρj +1 , (0)

λ

(0) and by construction (1−ζ )ρj +1 divides Hj,0 (ζ ). Then f (n,0) = ζj j (1−ζj )ρj F (n,0) where

F

(n,0)

(ζ ) =

λj 

(0) (ζj )∂ n[j,i] f |ζj =0 + G(n,0) (ζ ), ζj (1 − ζj )Q j,i

i=0

where G(n,0) (ζ ) = (ζj ) (1 − ζj ) H j,0 (0)

 1 0

2 3 (1 − t)λj −1 ∂ n[j,λj ] f (ζj t, : ζj ) − ∂ n[j,λj ] f (0, : ζj ) dt

(ζ ) and Q (ζ ) = where we assume that Hj,0 (ζ ) = (λj − 1)! (1 − ζj )ρj +1 H j,i j,0 (0) λ +1 ρ +1 (ζ ). ζ j (1 − ζ ) j Q (0)

j,i

(0)

(0)


Since f ∈ H^{n[j,i]}(D̃^(γ)) for each i = 0, …, λ_j, it is clear by the arguments of Lemma 8.11 that F^(0,0) ∈ H^n(D̃^(γ)) if n ≤ ℓ^σ. The functions ∂^{n[j,i]} f|_{ζ_j=0} are continuous on D̃_j^(γ,n), so ζ_j(1 − ζ_j) Q̃_{j,i}^(0)(ζ_j) ∂_j^i f|_{ζ_j=0} satisfies Definition 8.4 for each i for n ≤ ℓ^σ, by arguments of Lemmas 8.11 and 8.14. Next write

G^(n,0)(ζ) = H̃_{j,0}^(0)(ζ_j) ∫_0^1 { [(1 − t)/(1 − ζ_j t)]^{λ_j−1} [(1 − ζ_j)/(1 − ζ_j t)] (1 − ζ_j t)^{λ_j} ∂^{n[j,λ_j]} f(ζ_j t, ζ̃_j) − (1 − ζ_j)(1 − t)^{λ_j−1} ∂^{n[j,λ_j]} f(0, ζ̃_j) } dt

and note that both (1 − t)/(1 − ζ_j t) and (1 − ζ_j)/(1 − ζ_j t) are bounded by 1 for ζ_j ∈ D_j and t ∈ [0, 1]. If ζ = (ζ_j, ζ̃_j), set ζ_t = (ζ_j t, ζ̃_j). Then if λ_j > ρ_j, we have P_{n[j,λ_j]}(ζ_t) = (1 − ζ_j t)^{λ_j} P_n(ζ), so by (3)(i) of Definition 8.4 we have

|P_n(ζ)(1 − ζ_j t)^{λ_j} ∂^{n[j,λ_j]} f(ζ_j t, ζ̃_j)| ≤ K̄_j |1 − ζ_j t|^α   (8.29)

for some K̄_j on the compact set L̃ = {ζ_t | ζ ∈ L, t ∈ [0, 1]}, where L is any compact subset of D̃^(γ,n) = D̃^(γ,n[j,λ_j]), and we also have (8.29) for ζ_j = 0. Therefore we have

|P_n(ζ) G^(n,0)(ζ)| ≤ K̄_j |1 − ζ_j|^α   (8.30)

on compact subsets L of D̃^(γ,n), by |1 − ζ_j|^α ≥ |1 − ζ_j t|^α |(1 − ζ_j)/(1 − ζ_j t)|. Similarly, if k ≠ j and n_k = λ_k > ρ_k or n_k = ρ_k > λ_k, the left side of (8.29) is bounded by K̄_k|1 − ζ_k|^α or K̄_k|ζ_k|^α respectively on L̃ for some K̄_k, so |P_n(ζ) G^(n,0)(ζ)| is bounded as in (3)(i) or (3)(ii) in these cases on compact subsets L ⊂ D̃^(γ,n); thus G^(0,0) satisfies (3)(i) and (3)(ii).

Now P_n ≠ 0 on D̃^(γ,n), so if L ⊂ D̃^(γ,n) then P_n is bounded below on L, and by (8.30) we have |G^(n,0)(ζ)| ≤ K|1 − ζ_j|^α; if we define G^(n,0)(ζ)|_{ζ_j=1} = 0 then G^(0,0) satisfies (2)(ii). Given this L, set L_1 = L ∩ {ζ ∈ C^q | |ζ_j − 1| ≥ ε} and L_2 = L ∩ {ζ ∈ C^q | |ζ_j| ≥ ε} for some ε < 1/2. Then L = L_1 ∪ L_2, and L_1 is a compact subset of D̃^(γ,n[j,λ_j]) if λ_j > ρ_j, so there are constants K_1 and K_2 such that |G^(n,0)(ζ)| ≤ K_1|ζ_j|^α on L_1 by (2)(i) applied to ∂^{n[j,λ_j]} f, and |G^(n,0)(ζ)| ≤ K_2|ζ_j|^α on L_2 since G^(n,0) is bounded on L_2, and therefore G^(0,0) satisfies (2)(i).


In addition, preceding arguments show that for n ≤ ℓ^σ we have

(1 − ζ_j) H̃_{j,0}^(0)(ζ_j) ∫_0^1 (1 − t)^{λ_j−1} ∂^{n[j,λ_j]} f(ζ_j t, ζ̃_j) dt = (1 − ζ_j) H̃_{j,0}^(0)(ζ_j)(λ_j − 1)! ζ_j^{−λ_j} [∂^n f(ζ) − Σ_{i=0}^{λ_j−1} (ζ_j^i/i!) ∂^{n[j,i]} f|_{ζ_j=0}]

on the set D̃_{j,0}^(γ,n) = {ζ ∈ D̃^(γ,n) | ζ_j ≠ 0}. Therefore if k ≠ j, then on L_2 we have |G^(n,0)(ζ) − G^(n,0) ∘ π̂_{γ\{e_k},∅;γ}(ζ)| ≤ K_2|ζ_k|^α, since f and ∂_j^i f|_{ζ_j=0} satisfy Definition 8.4 by the proof of Lemma 8.13. Applying the fact that ∂^{n[j,λ_j]} f satisfies (2)(i), we get a similar inequality |G^(n,0)(ζ) − G^(n,0) ∘ π̂_{γ\{e_k},∅;γ}(ζ)| ≤ K_1|ζ_k|^α on L_1 for k ≠ j. Thus G^(0,0) satisfies Definition 8.4 (2)(i) for k ≠ j, and therefore G^(0,0) satisfies the definition if λ_j > ρ_j. If λ_j ≤ ρ_j, then D̃^(γ,n) = D̃^(γ,n[j,λ_j]) and D̃_{j,1}^(γ,n) = D̃_{j,1}^(γ,n[j,λ_j]), so G^(0,0) satisfies Definition 8.4 because f does. Thus if λ_j ≥ 1, then F^(n,0) satisfies Definition 8.4. If λ_j = 0, set f^(n,0) = H_{j,0}^(0)(ζ_j)(∂^n f − ∂^n f|_{ζ_j=0}) = (1 − ζ_j)^{ρ_j} F^(n,0), where F^(n,0) = H̃_{j,0}^(0)(ζ_j)(∂^n f − ∂^n f|_{ζ_j=0}); in this case H_{j,0}^(0)(ζ) = (1 − ζ)^{ρ_j+1} = (1 − ζ_j)^{ρ_j} H̃_{j,0}^(0)(ζ), so F^(n,0) = (1 − ζ_j)(∂^n f − ∂^n f|_{ζ_j=0}), which satisfies Definition 8.4.

Now set f^(n,1) = ∂^n {H_{j,0}^(1) f − L^(1) f} and apply arguments similar to the preceding to get f^(n,1) = ζ_j^{λ_j}(1 − ζ_j)^{ρ_j} F^(n,1), where F^(0,1) satisfies Definition 8.4. Then f^(0,0) + f^(0,1) = (I − L_j)f = ζ_j^{λ_j}(1 − ζ_j)^{ρ_j} f^(σ), where f^(σ) = F^(0,0) + F^(0,1) therefore satisfies Definition 8.4, and therefore f^(σ) ∈ H^σ_{ℓ,α}(D̃^(γ); λ^(γ), ρ^(γ)).

We have F^(n,0)|_{ζ_j=0} = F^(n,0)|_{ζ_j=1} = 0 by construction, so |F^(n,0)| ≤ K̃_k|ζ_k|^α|1 − ζ_k|^α for all k such that n_k = 0, for some K̃_k, on any compact subset of D̃^(γ,n), and this is similarly true of F^(n,1). Thus we have

|∂^n f^(σ)(ζ)| ≤ K_k|ζ_k|^α|1 − ζ_k|^α

for all k with n_k = 0 on any compact subset of D̃^(γ,n). But if θ_0 < θ and n_i = 0, then A^(γ,n)[D_i^(γ)(θ_0)] is a compact subset of D̃^(γ,n), and this completes the proof. ∎

We can now present the approximation result.


Theorem 8.9 Let f ∈ H_{ℓ,α}(D̃^(γ)(θ); λ^(γ), ρ^(γ)) and A^(γ,n) = A^(γ,n)(δ), and suppose 0 < θ_0 < θ. Then if ∅ ≠ τ ⊂ γ and σ ⊂ γ \ τ and n ≤ m_{τ,σ;γ}, and if p ≤ ℓ^(τ) and p_i = 0, then there is a constant C = C_{τ,σ;γ}^(p,n) such that

|∂^p [∏_{e_j∈τ}(I − L_j)](∂^n f) ∘ ι_{τ,σ;γ}(ζ)| / ∏_{p_j=0} ζ_j^{λ_j^(τ)}(1 − ζ_j)^{ρ_j^(τ)} ≤ C|ζ_i|^α |1 − ζ_i|^α   (8.31)

for all ζ ∈ A^(τ,p)[D_i^(γ)(θ_0)]. Moreover, if g_j^(γ) = ζ_j^{λ_j^(γ)}(1 − ζ_j)^{ρ_j^(γ)} for each j, then g^(γ) is a mollifier on A^(γ,∗) and f ∈ A_ℓ(D̃^(γ)(θ_0), λ^(γ), ρ^(γ); α, g^(γ), A^(γ,∗)), and there is a constant K that is independent of r ≤ ℓ and δ, and of N provided h = ((π θ_0/αN_i)^{1/2}), such that

|∂^r {f − Σ_{k∈μ_γ} Σ_{n≤m_k} ∂^n f(z_k) s(k, n; h) ∘ φ}| ≤ K δ^{−r̄} N^{(|r|+1)/2} [log(N + 1)]^{q−1} e^{−(π θ_0 α N)^{1/2}}

on A^(γ,r) for each r ≤ ℓ, where r̄_i = 0 if r_i ≤ min{λ_i, ρ_i} and r̄_i = r_i − min{λ_i, ρ_i} otherwise.

Proof Let ∅ ≠ τ ⊂ γ and σ ⊂ γ \ τ and n ≤ m_{τ,σ;γ} and p ≤ ℓ^(τ). Define n′ by n′_j = n_j if e_j ∉ τ and n′_j = p_r if j = i_{τ,γ}(r) for some r = 0, …, |τ| − 1. Then π_{γ,τ}(n′) = p. Let D^(ω) = D^(ω)(θ) for any ω ⊂ γ. Then n′ ≤ ℓ, and we claim that (∂^n f) ∘ ι_{τ,σ;γ} ∈ H_{ℓ^(τ),α}(D̃^(τ); λ^(τ), ρ^(τ)). Let ζ ∈ D̃^(τ,p), so that ζ = π̂_{ω,σ̃;τ}(ζ′) for some ζ′ ∈ D̃^(τ) = π_{γ,τ}(D̃^(γ)), where σ̃ ⊂ τ_{p,ρ} and (ω ∪ σ̃)^c ⊂ τ_{p,λ}. Then

ι_{τ,σ;γ}(ζ) = ι_{τ,σ;γ} ∘ ι_{ω,σ̃;τ} ∘ π_{τ,ω}(ζ′) = ι_{ω,σ∪σ̃;γ} ∘ π_{γ,ω}(ζ″) = π̂_{ω,σ∪σ̃;γ}(ζ″),

where ζ″ ∈ D̃^(γ) and π_{γ,τ}(ζ″) = ζ′. Since n ≤ m_{τ,σ;γ} we have n_i ≤ ρ_i^(γ) if e_i ∈ σ, and σ̃ ⊂ τ_{p,ρ}, so p_r ≤ ρ_r^(τ) if e_r^(τ) ∈ σ̃, that is, n′_j ≤ ρ_j^(γ) if e_j ∈ σ̃ where j = i_{τ,γ}(r). Therefore σ ∪ σ̃ ⊂ γ_{n′,ρ}. On the other hand, if e_j ∉ γ_{n′,λ} then n′_j > λ_j^(γ), so e_j ∈ σ ∪ τ. If e_j ∈ τ then e_j ∈ τ ∩ γ_{n′,λ}^c = τ_{p,λ}^c ⊂ ω ∪ σ̃, and if not, then e_j ∈ σ. Thus γ_{n′,λ}^c ⊂ ω ∪ σ̃ ∪ σ, and therefore ι_{τ,σ;γ}(D̃^(τ,p)) ⊂ D̃^(γ,n′). Since by assumption ∂^{n′} f has


a continuous extension to D̃^(γ,n′), the derivatives ∂^p(∂^n f) ∘ ι_{τ,σ;γ} exist and are continuous on D̃^(τ,p). Furthermore π_{γ,τ}(n′) = p and ι_{τ,σ;γ}(D̃^(τ,p)) ⊂ D̃^(γ,n′). Since n ≤ m_{τ,σ;γ}, we have n′_i = ρ_i^(γ) > λ_i^(γ) only if e_i ∈ σ, and n′_i = λ_i^(γ) > ρ_i^(γ) only if e_i ∉ (σ ∪ τ). Then for ζ ∈ ι_{τ,σ;γ}(D̃^(τ,p)) we





(1 − ζi )λi

ni =λi >ρi ei ∈τ

(γ )

> ρi

and the factors of

ρ

ζi i = Pp ◦ πγ ,τ (ζ ),

ni =ρi >λi ei ∈τ 

and the function P_p ∂^p(∂^n f) ∘ ι_{τ,σ;γ} = (P_{n′} ∂^{n′} f) ∘ ι_{τ,σ;γ} has a continuous extension to D̃^(τ,p), since P_{n′} ∂^{n′} f has a continuous extension to D̃^(γ,n′). If L is a compact subset of D̃^(τ,p), then ι_{τ,σ;γ}(L) is a compact subset of D̃^(γ,n′), and by applications of Lemma 8.8, if e_i ∈ τ then π̂_{γ\{e_i},∅;γ} ∘ ι_{τ,σ;γ} = ι_{τ,σ;γ} ∘ π̂_{τ\{e_i},∅;τ}. If p_r ≤ λ_r^(τ) then n′_i ≤ λ_i^(γ), where i = i_{τ,γ}(r), and the ith coordinate of ι_{τ,σ;γ}(ζ) is ζ_r for ζ ∈ L. Therefore by (2)(i) of Definition 8.4 we have

|∂^p(∂^n f) ∘ ι_{τ,σ;γ}(ζ) − ∂^p(∂^n f) ∘ π̂_{γ\{e_i},∅;γ} ∘ ι_{τ,σ;γ}(ζ)| = |∂^p(∂^n f) ∘ ι_{τ,σ;γ}(ζ) − ∂^p(∂^n f) ∘ ι_{τ,σ;γ} ∘ π̂_{τ\{e_i},∅;τ}(ζ)| ≤ K_i|ζ_r|^α.

The other inequalities of Definition 8.4 are satisfied similarly, so we obtain (∂^n f) ∘ ι_{τ,σ;γ} ∈ H_{ℓ^(τ),α}(D̃^(τ); λ^(τ), ρ^(τ)). By Lemma 8.13 and a finite induction, if p ≤ ℓ^(τ) then [∏_{p_j=0}(I − L_j)](∂^n f) ∘ ι_{τ,σ;γ} ∈ H_{ℓ^(τ),α}(D̃^(τ); λ^(τ), ρ^(τ)), and then by Lemma 8.14 and induction we have

[∏_{e_j∈τ}(I − L_j)](∂^n f) ∘ ι_{τ,σ;γ} = ∏_{p_j=0} ζ_j^{λ_j^(τ)}(1 − ζ_j)^{ρ_j^(τ)} f^(σ̃),

where σ̃ = {e_j^(τ) | p_j = 0} so p ≤ ℓ^σ̃, and where for each p_i = 0, |∂^p f^(σ̃)(ζ)| ≤ C_i|ζ_i|^α|1 − ζ_i|^α for ζ ∈ A^(τ,p)[D^(τ)(θ_0)]. This shows in particular that if F is given by (8.21), then |∂^{p(i)} F/g̃_i| ≤ C_i|ζ_i|^α|1 − ζ_i|^α on A^(τ,p(i))[D^(τ)(θ_0)] for i = 0, …, |τ| − 1, and, since φ′_i(ζ) = ζ^{−1}(1 − ζ)^{−1}, that |G_i(ζ)| ≤ C_i|ζ_i|^{α−1}|1 − ζ_i|^{α−1} on this set, where G_i = φ′_i ∂^{p(i)} F/g̃_i. If γ is a path in D_i^(τ)(θ_0) = D(θ_0), then

N_i^(p)(γ, G_i) = sup_{ζ̃_i∈P_i^(p)} ∫_γ |G_i(ζ_i, ζ̃_i) dζ_i| ≤ C_i ∫_γ |ζ_i|^{α−1}|1 − ζ_i|^{α−1}|dζ_i|,

which is finite on paths γ ⊂ D(θ), since ζ̃_i ∈ P_i^(p) if ζ ∈ A^(τ,p(i))[D^(τ)(θ_0)].


To show that ν_i(G_i) < ∞ for the functionals ν_i of Sect. 8.3.2, it suffices to show that

∫_{∂D(θ_0)} |z|^{α−1}|1 − z|^{α−1} |dz| < ∞.   (8.32)

The part γ_+ of ∂D(θ_0) in the upper half plane has the representation γ_+(t) = te^{iθ}/(1 − t + te^{iθ}) for t ∈ [0, 1], and similarly in the lower half plane. We have |(1 − t) + te^{iθ}| ≥ cos(θ/2) for 0 ≤ θ ≤ π and t ∈ [0, 1], so the integral (8.32) is bounded by

2 ∫_0^1 |γ_+(t)|^{α−1} |1 − γ_+(t)|^{α−1} |γ′_+(t)| dt ≤ 2[sec(θ_0/2)]^{2α} B(α, α),

where B is the beta function. Now

G_{i,γ̄_{b,θ}}(ζ̃_i) ≤ C_i ∫_{γ̄_{b,θ}} |ζ_i|^{α−1}|1 − ζ_i|^{α−1} |dζ_i|

with γ̄_{b,θ}(t) = e^{b+it}/(1 + e^{b+it}) if |t| ≤ θ, and |γ̄_{b,θ}(t)| ≤ e^b sec(t/2)/(1 + e^b) and similarly for |1 − γ̄_{b,θ}(t)|, so using this parameterization we get

G_{i,γ̄_{b,θ_0}} ≤ 2θ_0 [sec(θ_0/2)]^{2α} e^{αb}/(1 + e^b)^{2α},

which shows (iii) of Definition 8.1; thus φ′^(τ) F^(p)/g̃^(τ) ∈ B(N^(τ,p), D̃^(τ)). As for the decay condition, we have

N_i^(p)(G_i(ζ_i, ζ̃_i)) ≤ C_i|ζ_i|^α|1 − ζ_i|^α ≤ C_i e^{−α|φ_i(ζ_i)|},

where C_i is independent of ζ_i. Thus f ∈ A_ℓ(D̃^(γ)(θ_0), λ^(γ), ρ^(γ); α, g^(γ), A^(γ,∗)), because g^(γ) is a mollifier on A^(γ,∗), which we show as follows. By Leibniz' formula we get |d^m/dζ^m g_i^(γ)(ζ)| ≤ K_{i,m}|g_i^(γ)(ζ)||v_i(ζ)|^{−m} on D(θ) for m ≥ 0, where K_{i,m} is constant and v_i(ζ) = ζ(1 − ζ), so the requirements of Theorem 8.7 are satisfied, with w_i = g_i^(γ). Therefore we have

sup_{ζ∈A_{i,n_i}(δ)} |w_i(ζ)||v_i(ζ)|^{−n_i} ≤ 1  if n_i ≤ min{λ_i, ρ_i},  and ≤ δ_i^{min{λ_i,ρ_i}−n_i}  otherwise,

and we get the required error bound by applying the estimates of the proofs of Theorems 8.6 and 8.7. ∎
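The endpoint bound (8.32) reduces to the beta integral B(α, α) = Γ(α)²/Γ(2α). A quick numerical confirmation of that identity (an illustrative sketch; the substitution t = sin²u removes the endpoint singularities of the integrand):

```python
import math

# B(a, a) = integral_0^1 t^(a-1) (1-t)^(a-1) dt = Gamma(a)^2 / Gamma(2a).
a = 0.75
exact = math.gamma(a) ** 2 / math.gamma(2 * a)

# Substitute t = sin(u)^2: dt = 2 sin(u) cos(u) du, so the integrand becomes
# 2 (sin u cos u)^(2a - 1), which is continuous on [0, pi/2] for a > 0.
n = 200000
h = (math.pi / 2) / n
num = 0.0
for i in range(n):
    u = (i + 0.5) * h                      # midpoint rule
    num += 2.0 * (math.sin(u) * math.cos(u)) ** (2 * a - 1) * h

assert abs(num - exact) < 1e-5
```

For α = 3/4 the value is B(3/4, 3/4) = Γ(3/4)²/Γ(3/2) ≈ 1.694, finite exactly as the proof requires for any 0 < α ≤ 1.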

M. Stromberg

We get the following useful corollary, which discloses the nature of interpolation of derivatives of ordinary sinc approximation.

Theorem 8.10 If $f \in H^{\ell,\alpha}(\widehat D^{(\gamma)}(\theta); 0, 0)$ and $0 < \theta_0 < \theta$, then there is a constant $K$, independent of $r \le \ell$, of $\zeta$, and of $N$, provided $h = (\pi\theta_0/(\alpha N_i))^{1/2}$, such that
\[
\Bigl|\partial^r\Bigl\{f - \sum_{k \in \mu_\gamma} f(z_k)\, s(k,0;h) \circ \phi\Bigr\}(\zeta)\Bigr|
\;\le\; K \prod_{i=0}^{q-1} [\zeta_i(1-\zeta_i)]^{-r_i}\; N^{(|r|+1)/2}\, [\log(N+1)]^{q-1}\, e^{-(\pi\theta_0\alpha N)^{1/2}}
\]
on $\Gamma$ for each $r \le \ell$.

Proof Any point $\zeta = (\zeta_0, \dots, \zeta_{q-1})$ is in the set $A = \times_{i=0}^{q-1}[\delta_i, 1-\delta_i]$, where $\delta_i = \min\{\zeta_i, 1-\zeta_i\}$, thus $\delta_i^{-1} \le [\zeta_i(1-\zeta_i)]^{-1}$, and $A \subset A^{(\gamma,r)}(\delta)$ for each $r \le \ell$. $\square$

Bibliography

1. Federer, H.: Geometric Measure Theory. Springer, Berlin (1969)
2. Lundin, L., Stenger, F.: Cardinal-type approximation of a function and its derivatives. SIAM J. Numer. Anal. 10, 139–160 (1979)
3. Schmeisser, G.: Numerical differentiation inspired by a formula of R.P. Boas. J. Approx. Theory 160, 202–222 (2009)
4. Spanier, E.H.: Algebraic Topology. Springer, New York (1966)
5. Stenger, F.: Error bounds for the evaluation of integrals by repeated Gauss-type formulas. Numer. Math. 9, 200–213 (1966)
6. Stenger, F.: Kronecker product extensions of linear operators. SIAM J. Numer. Anal. 5, 422–435 (1968)
7. Stenger, F.: Approximations via Whittaker's cardinal function. J. Approx. Theory 17, 222–240 (1976)
8. Stenger, F.: Numerical methods based on Whittaker cardinal, or sinc functions. SIAM Rev. 23, 165–224 (1981)
9. Stromberg, M.: Solution of shock problems by methods using sinc functions. Ph.D. Thesis, University of Utah (1987)

Part II

New Developments

The second part of the book presents new developments in Sinc methods which are closely related to applications. The six chapters introduce new approaches to polynomial approximation, yielding new identities for indefinite convolutions, control theory, Laplace and Fourier transform inversion, the solution of differential equations, and the solution of the classical Wiener–Hopf integral equations. Moreover, sinc-Gaussian approaches for eigenvalue problems, Fredholm determinants together with orthogonal polynomials, the impact of an arbitrary number of jumps on finite sinc interpolants, mathematical optimization, and the LU factorization of an arbitrary matrix are examined in connection with Sinc methods.

Chapter 9

Indefinite Integration Operators Identities, and Their Approximations Frank Stenger

Abstract The integration operators (*) $(J^+g)(x) = \int_a^x g(t)\,dt$ and (**) $(J^-g)(x) = \int_x^b g(t)\,dt$, defined on an interval $(a,b) \subseteq \mathbb{R}$, yield new identities for indefinite convolutions, control theory, Laplace and Fourier transform inversion, solution of differential equations, and solution of the classical Wiener–Hopf integral equations. These identities are expressed in terms of $J^\pm$ and they are thus esoteric. However, the integrals (*) and (**) can be approximated in many ways, yielding novel and very accurate methods of approximating all of the above listed relations. Several examples are presented, mainly using Legendre polynomials as approximations, and references are given for approximation of some of the operations using Sinc methods. These examples illustrate, for a class of sampled statistical models, the possibility of reconstructing models much more efficiently than at the usual slow Monte–Carlo ($O(N^{-1/2})$) rate. Our examples illustrate that we need only sample at 5 points to get a representation of a model that is uniformly accurate to nearly 3 significant figures.

Keywords Indefinite integration · Indefinite convolution · Fourier transform inversion · Laplace transform inversion · Wiener–Hopf · Differential equations · Approximations

AMS Subject Classification 47A57, 47A58, 65D05, 65L05, 65M70, 65R10, 65T99, 93C05

F. Stenger () SINC, LLC, School of Computing, Department of Mathematics, University of Utah, Salt Lake City, UT, USA e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 G. Baumann (ed.), New Sinc Methods of Numerical Analysis, Trends in Mathematics, https://doi.org/10.1007/978-3-030-49716-3_9


9.1 Introduction and Summary

This paper presents some symbolic-like approximations gotten from identities—some previously known, and some new—of indefinite integration operators. These operators are defined for an interval $(a,b) \subseteq \mathbb{R}$ by the equations
\[
(J^+g)(x) = \int_a^x g(t)\,dt , \qquad (J^-g)(x) = \int_x^b g(t)\,dt . \tag{9.1}
\]

Here $g$ is a function defined on $(a,b)$, and we assume, of course, that the integrals exist for all $x \in (a,b)$. Combined with convolution and with Laplace and Fourier transform inversion, these operators enable many new expressions for one-dimensional models: of control theory, for Laplace and Fourier transform inversion, for solving ODE and PDE [14, 15], and for solving Wiener–Hopf integral equations. These operations can be readily combined to get multidimensional approximations. And while these formulas, expressed in terms of $J^\pm$, are esoteric and seemingly devoid of any practical value, they yield, when $J^\pm$ is replaced with certain types of approximation in terms of interpolation on $(a,b)$, novel, accurate and efficient methods of approximation. The mode of operation of this paper also enables defining models accurately in terms of statistical samples; but instead of sampling over the whole interval, it suffices to sample at just a small number of points on the interval. Thus, with respect to our illustrative one-dimensional examples in this paper, for which we sample at just 5 points, we can recover the model using $5^2 = 25$ points in 2 dimensions, $5^3 = 125$ points in 3 dimensions, etc. A formula for the approximation of the indefinite integral was first published in [12], albeit without proof. The first proof was published by Kearfott [8] and later, two proofs were published by Haber [6]. The author later presented a proof of this result in §4.5 of [13]. These approximation formulas were applied in [13] and [15] to solve partial differential equations (PDE), via use of explicit Laplace transforms, derived by the author, of the Green's functions of the Poisson, the heat, and the wave equations, in one and in more than one dimension.
In [18], the Laplace transform of the heat equation Green’s function in R3 × (0, ∞) was used to obtain a numerical solution of the Navier–Stokes equations, and in [17] the Fourier transform of this same Green’s function was used to obtain a numerical solution of the Schrödinger equation in R3 × (0, T ). In Sect. 9.2 we present the operators J ± , along with some of their properties, as well as methods of approximating the operations of these operators. In addition, we present identities for optimal control, Laplace transform inversion, and the solution of Wiener–Hopf integral equations, as well as identities that are based on Fourier


transforms for optimal control, for Fourier transform inversion, for the solution of ordinary differential equations, and for the solution of Wiener–Hopf integral equations. Some of these formulas—the ones involving Laplace transforms—were previously known [13], whereas those related to Fourier transforms are new. We have also omitted the relation of these operators with the solution of PDE, since this aspect was covered extensively in [15]. In Sect. 9.3 we illustrate explicitly the application of the esoteric formulas developed in §9.2; the replacement of the operators $J^\pm$ in the formulas of §9.2 with explicitly defined matrices $A^\mp$ transforms these esoteric identities into accurate and efficient novel methods of approximation. The matrices that have been used to date are based on either Sinc or Fourier polynomials. Sinc methods have previously been used to define these matrices (see e.g., [15, 17]); we have restricted our examples to using Legendre polynomials to define and use the matrices $A^\pm$, since the use of other methods of approximation is similar. An important property of the matrices $A^\pm$—the property which enables the functions $F(A^\pm)$ of the matrices, gotten by replacement of $J^\pm$ with $A^\pm$ in the operator expression $F(J^\pm)$, to be well defined—is that the eigenvalues of $A^\pm$ are located in the right half of the complex plane $\mathbb{C}$. This was a 20-year conjecture for Sinc methods; a proof of this conjecture was first achieved by Han and Xu [7]. A proof was obtained by Gautschi and Hairer [4] for Legendre polynomials, but a proof for the case of polynomials that are orthogonal over an interval with respect to a positive weight function is still an open problem; this author offers \$300 for the first proof or disproof that all of the eigenvalues of the corresponding integration matrices defined in Definition 9.2 of this paper have positive real parts.

9.2 The Hilbert Space and the Operators It is most convenient to work with operators in the setting of a Hilbert space. Let (a, b) ⊆ R . Our Hilbert space is just the well–known space H = L2 (0, b) , with b ∈ R+ .

9.2.1 The Operators J

Let the operators $J^\pm$ be defined for $H$. The inverses of these operators have the property: if $G = J^\pm g$, then $g = (J^\pm)^{-1}G$, i.e., $g(t) = \pm \frac{d}{dt} G(t)$, whenever the derivatives exist.

230

F. Stenger

9.2.2 Numerical Ranges

We mention here some properties of numerical ranges for the operators $J^\pm$.

Definition 9.1 Let $H$ be defined as above, and let the operators $J^\pm$ be defined as in (9.1). The numerical range $W$ of $J^\pm$ in $H$ is defined as
\[
W = \bigl\{\,(J^\pm f, f) : f \in H ,\ \text{with } \|f\| = (f,f)^{1/2} = 1\,\bigr\} . \tag{9.2}
\]
"Numerical range" is synonymous with "field of values", with the latter being used more often for matrices. The closure of the numerical range of $J^\pm$ contains the spectrum of $J^\pm$. Other properties of the numerical range can be found, for example, in [5] or [9].

Theorem 9.1 Let $J^\pm$ be defined as in Definition 9.1. Then the numerical ranges of $J^\pm$ are contained in the closed right half plane, $\{z \in \mathbb{C} : \Re z \ge 0\}$.

Proof Part (i.) We give a proof of the (i.)-part of this theorem only for the case of $J^+$, inasmuch as the proof for $J^-$ is similar. Let $g \in H$ denote a complex valued function. Then
\[
(J^+g, g) = \int_a^b \Bigl(\int_a^x g(t)\,dt\Bigr)\,\overline{g(x)}\,dx , \qquad
\Re\,(J^+g, g) = \frac{1}{2}\,\Bigl|\int_a^b g(x)\,dx\Bigr|^2 \;\ge\; 0 , \tag{9.3}
\]
so that the inner product $(J^+g, g)$ is contained in the closed right half plane.

Part (ii.) We also omit the straight-forward proof of this part of the theorem. $\square$

Theorem 9.2 Let $(a,b) \subset \mathbb{R}$ be a finite interval, and let the Hilbert space $H$ be $L^2(a,b)$. Then
\[
\|J^\mp\| \;\le\; \frac{b-a}{\sqrt{2}} . \tag{9.4}
\]


Proof We prove this theorem only for the case of $J^+$, since the proof for $J^-$ is similar. Let $g \in L^2(a,b)$. Then, by the Schwarz inequality, we have
\[
\|J^+g\|^2 = \int_a^b \Bigl|\int_a^x g(y)\,dy\Bigr|^2 dx
\;\le\; \int_a^b (x-a) \int_a^x |g(y)|^2\,dy\,dx
\;\le\; \int_a^b (x-a)\,dx\; \|g\|^2
\;=\; \frac{(b-a)^2}{2}\,\|g\|^2 , \tag{9.5}
\]
which¹ yields (9.4). $\square$
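The bound (9.4) can also be probed numerically. The sketch below is ours, not the paper's: it discretizes $J^+$ on $L^2(0,1)$ by the trapezoidal rule and computes the largest singular value of the resulting matrix, which should stay below $(b-a)/\sqrt{2} \approx 0.707$ and, as the grid is refined, approach the exact operator norm $(2/\pi)(b-a)$ quoted in the footnote.

```python
import numpy as np

# Discretize (J+ g)(x) = int_0^x g(t) dt on (0, 1) with n subintervals.
# On a uniform grid the L2 operator norm of the discretization equals the
# spectral norm of the quadrature matrix M (the h-weights in the discrete
# norms on both sides cancel).
n = 400
h = 1.0 / n
M = np.tril(np.full((n + 1, n + 1), h))          # interior trapezoid weights
M[:, 0] = h / 2.0                                # half weight at t = 0
M[np.arange(n + 1), np.arange(n + 1)] = h / 2.0  # half weight at t = x_i
M[0, 0] = 0.0                                    # (J+ g)(0) = 0
norm = np.linalg.norm(M, 2)                      # largest singular value
print(norm)   # below 1/sqrt(2), and close to 2/pi for large n
```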

9.2.3 Indefinite Convolution via Fourier Transforms

We now take $(0,b) = \mathbb{R}^+$, where $\mathbb{R}^+$ denotes the interval $(0,\infty)$, we take $f^\pm \in H = L^2(\mathbb{R}^+)$, and we assume the usual Fourier and inverse Fourier transforms defined by
\[
\hat f^{\mp}(y) = \int_{\mathbb{R}^+} f^{\mp}(x)\, e^{\mp i x y}\,dx , \qquad
f^{\mp}(x) = \frac{1}{2\pi} \int_{\mathbb{R}} e^{\pm i x y}\, \hat f^{\mp}(y)\,dy . \tag{9.6}
\]

9.2.4 Optimal Control

Our model indefinite integrals for $x \in (0,b) \subseteq (0,\infty)$, corresponding to given functions $f^\pm$ and $g$, take the form
\[
q^+(x) = \int_0^x f^+(x-t)\, g(t)\,dt , \qquad
q^-(x) = \int_x^b f^-(x-t)\, g(t)\,dt . \tag{9.7}
\]

¹ It can in fact be shown, by a more lengthy proof, that (9.4) can be replaced with the exact result, $\|J^\mp\| = (2/\pi)(b-a)$.


Given one or both of the functions $\hat f^{\mp}$ in (9.7), we shall obtain a formula for determining $q^\pm$ on $(0,b)$, under the assumption that the indefinite integration operators $J^\pm$ are supported on $(0,b)$. Novel explicit evaluations of the integrals (9.7) were first obtained in [13], §4.6, by use of Laplace transforms. Indeed, many new results were obtained using those formulas, including novel explicit formulas for Laplace transform inversion, novel explicit formulas for evaluating Hilbert transforms (discovered independently by Yamamoto [19] and Stenger [15], §1.5.12), and novel formulas for solving partial differential and convolution-type integral equations [15]. Included with each of these formulas are very efficient and accurate methods of approximation—most of which are given in [15]; these usually are orders of magnitude more efficient than the currently popular methods of solving such equations. We shall now derive similar one-dimensional novel convolution formulas based on Fourier transforms.

Theorem 9.3 Let $J^\pm$ be defined as above and have support on $(0,b) \subseteq \mathbb{R}^+$, and let the functions $f^\pm$ of Eq. (9.7) belong to $H$. Let $q^\pm$ be defined as in (9.7). Then
\[
q^\pm = \hat f^{\pm}\bigl(\pm i/J^\pm\bigr)\, g . \tag{9.8}
\]

Proof The proof resembles that given in §4.6 of [13] and §1.5.9 of [15] for the case of Laplace transforms. We consider only the case of $f^+$, since the proof for the case of $f^-$ is similar. The proof makes use of the following:

• $f^+$ has compact support $[0,b]$—inasmuch as extension to $(0,\beta)$, or to an infinite interval, can be carried out via the usual well-known procedures of analysis; moreover, our assumptions are consistent with this possibility;
• By Theorem 9.2, the norm of $J^+$ on $(0,b)$ is bounded by $b/\sqrt{2}$;
• By inspection of (9.7), $\hat f^{+}$ is analytic in the upper half of the complex plane, and $\hat f^{-}$ is analytic in the lower half; and
• The well-known formulas (9.7), as well as
\[
\bigl((J^+)^n g\bigr)(x) = \int_0^x \frac{(x-t)^{n-1}}{(n-1)!}\, g(t)\,dt , \quad n = 1, 2, \dots ,
\qquad
\hat f^{\mp}(z) = \pm \frac{1}{2\pi i} \int_{\mathbb{R}} \frac{\hat f^{\mp}(t)}{t - z}\,dt , \quad \mp\,\Im z > 0 . \tag{9.9}
\]


Hence, applying the above points, for $x \in (0,b)$, and with $I$ denoting the identity operator, we get
\[
\begin{aligned}
q^+(x) &= \int_0^x f^+(x-\xi)\, g(\xi)\,d\xi \\
&= \int_0^x \Bigl[\frac{1}{2\pi}\int_{\mathbb{R}} \hat f^{+}(t)\, e^{-i t (x-\xi)}\,dt\Bigr]\, g(\xi)\,d\xi \\
&= \frac{1}{2\pi} \int_{\mathbb{R}} \Bigl[\int_0^x \sum_{n=0}^{\infty} \frac{(-i t (x-\xi))^n}{n!}\, g(\xi)\,d\xi\Bigr]\, \hat f^{+}(t)\,dt \\
&= \frac{1}{2\pi} \int_{\mathbb{R}} \hat f^{+}(t)\,\Bigl[\sum_{n=0}^{\infty} (-i t)^n \bigl(J^+\bigr)^{n+1} g\Bigr](x)\,dt \\
&= \Bigl[\frac{1}{2\pi} \int_{\mathbb{R}} \hat f^{+}(t)\, \frac{J^+}{1 + i t J^+}\,dt\; g\Bigr](x) \\
&= \lim_{\delta \to 0^+} \Bigl[\frac{1}{2\pi i} \int_{\mathbb{R}} \hat f^{+}(t)\,\bigl(t I - i/J^+ - \delta I\bigr)^{-1}\,dt\; g\Bigr](x) \\
&= \lim_{\delta \to 0^+} \Bigl[\hat f^{+}\bigl(i/J^+ + \delta I\bigr)\, g\Bigr](x) \;=\; \Bigl[\hat f^{+}\bigl(i/J^+\bigr)\, g\Bigr](x) . \tag{9.10}
\end{aligned}
\]
Similarly, $q^-(x) = \bigl[\hat f^{-}\bigl(-i/J^-\bigr)\, g\bigr](x)$. $\square$



The above identities (9.8) are esoteric. They do, however, readily enable applications. See §4.6 of [13], or [15].
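A small numerical check (ours, not from the paper) of the iterated-indefinite-integral formula in (9.9): applying a cumulative trapezoidal approximation of $J^+$ three times to $g \equiv 1$ on $(0,1)$ must reproduce $x^3/3! = x^3/6$.

```python
import numpy as np

def cumtrapz(y, h):
    """Cumulative trapezoidal approximation of int_0^x y(t) dt."""
    out = np.zeros_like(y)
    out[1:] = np.cumsum(0.5 * h * (y[1:] + y[:-1]))
    return out

h = 1e-3
x = np.arange(0.0, 1.0 + h / 2, h)
g = np.ones_like(x)
for _ in range(3):          # apply J+ three times: 1 -> x -> x^2/2 -> x^3/6
    g = cumtrapz(g, h)
err = np.max(np.abs(g - x**3 / 6.0))
print(err)                  # O(h^2) trapezoidal error
```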

9.2.5 Fourier Transform Inversion

We describe here an explicit novel formula for the inversion of Fourier integrals, namely for the determination of $f^\mp$ given $\hat f^{\mp}$, where $\hat f^{\mp}$ are the Fourier transforms of $f^\mp$ as defined in (9.6).

Theorem 9.4 Let $\mathbb{R}^+$ denote the interval $(0,\infty)$, and let us assume that we are given one or both of the functions $\hat f^{\pm}$, where
\[
\hat f^{\pm}(x) = \int_{\mathbb{R}^+} e^{\pm i x y}\, f^{\pm}(y)\,dy . \tag{9.11}
\]


Let the operators $J^\pm$ have support on $(0,b) \subseteq \mathbb{R}^+$. Then, for given $\hat f^{\pm}$ on $(0,b)$,
\[
f^\pm = \bigl(1/J^\pm\bigr)\, \hat f^{\pm}\bigl(\pm i/J^\pm\bigr)\, \mathbf{1} , \tag{9.12}
\]

where the "1" on the right hand side of (9.12) denotes the function that is identically 1 at all points of $(0,b)$.

Proof We shall only prove this theorem for the case of $f^+$, inasmuch as the proof for the case of $f^-$ is similar. Let us first recall, by assumption, that $f^\mp$ is differentiable. Then
\[
f^+(x) - f^+(0) = \int_0^x (f^+)'(y)\,dy = \int_0^x (f^+)'(x-y)\,dy = \int_0^x (f^+)'(x-y)\cdot 1\,dy . \tag{9.13}
\]
This equation is now in the form of the equation for $q^+$ in (9.7), except that $f^+$ in (9.7) is here replaced with the derivative $(f^+)'$, and $g$ with the function that has value 1 on $(0,b)$. Hence, using the Fourier transform $-i y\,\hat f^{+}(y) - f^+(0)$ of $(f^+)'$ and substituting into (9.10), we get the Eq. (9.12) for the case of $f^+$. $\square$

The identities (9.8) and (9.12), while esoteric, can nevertheless yield applicable approximations in suitable analytic function settings, as they did for the case of Laplace transforms in [13] and in [15].

Remark 9.1 If we substitute $y = \pm i\eta$ in (9.6), we are then back to the Laplace transform cases already covered extensively, starting with [13], §4.5–4.6, and then followed up with all of the text [15]. These sources cover the operator results of this section, elucidating them for approximation via use of Cauchy sequences of analytic functions based on Sinc methods of approximation. We should add that the two main theorems of this section have applications not only to approximation via Sinc methods, but also to any other method of approximation, including methods that use orthogonal polynomials, as is illustrated in the next sections of this paper.


9.3 Connection with Interpolatory Approximation

The formulas derived in the previous section are esoteric, but they have many applications when connected with interpolatory approximation.² In this paper we shall primarily be interested in approximation via interpolation at the zeros $x_j$ of orthogonal polynomials.

(i.) For the case of Legendre polynomials, the $x_j$ are the $n$ zeros of the Legendre polynomial $P_n(x)$; these polynomials are orthogonal on the interval $(a,b) = (-1,1)$ with respect to the weight function $w$ which is identically 1 on $(-1,1)$;
(ii.) For the case of Hermite polynomial interpolation, using the Hermite polynomials $H_n(x)$ that are orthogonal over $\mathbb{R}$ with respect to the weight function $w$, with $w(x) = \exp(-x^2)$, and with $H_n(x_j) = 0$ for $j = 1, \dots, n$; and
(iii.) Other polynomials that are orthogonal with respect to a weight function, such as Jacobi polynomials, Gegenbauer polynomials, etc.

Definition 9.2 Consider standard Lagrange interpolation at distinct points $x_j$ on $(a,b)$, with $a < x_1 < x_2 < \dots < x_n < b$, which takes the form
\[
f(x) \approx \sum_{k=1}^{n} \ell_k(x)\, f(x_k) , \qquad
\ell_k(x) = \prod_{\substack{j=1,\dots,n \\ j \ne k}} \frac{x - x_j}{x_k - x_j} . \tag{9.14}
\]
Let the entries $A^{\pm}_{j,k}$ of the matrices $A^{\pm} = \bigl[A^{\pm}_{j,k}\bigr]$ be defined by
\[
A^{+}_{j,k} = \int_a^{x_j} \ell_k(x)\, w(x)\,dx , \qquad
A^{-}_{j,k} = \int_{x_j}^{b} \ell_k(x)\, w(x)\,dx , \tag{9.15}
\]
where $w$ is the weight function referred to above, that is positive a.e. on $(a,b)$, and is such that the moments $\int_a^b w(x)\,x^j\,dx$ exist for every non-negative integer $j$. The function $w$, where $w(x) > 0$ a.e. on $(a,b)$, defines a sequence of orthogonal polynomials $\{P_n\}_{n=2}^{\infty}$, for which all $n$ zeros of $P_n$ are located on $(a,b)$.

² We could also include trigonometric polynomials in the examples which follow, e.g., those of [15], §1.4, which interpolate at points $x_j$ that are interior points of the interval of interpolation. Such formulas are effective for approximation of periodic functions. The weight function for these is $w(x) = 1$ for all $x$ on the interval.
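Definition 9.2 is directly computable. The sketch below is our code (function and variable names are ours): it builds $A^+$ and $A^-$ for the Legendre case on $(-1,1)$, where $w \equiv 1$. Row $j$ integrates the degree $n-1$ Lagrange basis $\ell_k$ over $(-1, x_j)$ with an $n$-point Gauss rule, which is exact here, and $A^-$ follows from $\int_{x_j}^{1} \ell_k = \int_{-1}^{1} \ell_k - \int_{-1}^{x_j} \ell_k$, with $\int_{-1}^{1} \ell_k = w_k$, the $k$-th Gauss weight.

```python
import numpy as np

def legendre_indef_matrices(n):
    """Matrices A+ and A- of Definition 9.2 for Legendre points, w = 1."""
    x, w = np.polynomial.legendre.leggauss(n)   # zeros of P_n, Gauss weights

    def ell(k, y):                              # Lagrange basis l_k(y)
        out = np.ones_like(y)
        for j in range(n):
            if j != k:
                out = out * (y - x[j]) / (x[k] - x[j])
        return out

    Ap = np.empty((n, n))
    for i in range(n):                          # row i: integral over (-1, x_i)
        a, b = -1.0, x[i]
        y = 0.5 * (b - a) * x + 0.5 * (a + b)   # Gauss rule mapped to (-1, x_i)
        for k in range(n):
            Ap[i, k] = 0.5 * (b - a) * np.dot(w, ell(k, y))
    Am = w[np.newaxis, :] - Ap                  # complementary integrals
    return x, Ap, Am

x, Ap, Am = legendre_indef_matrices(5)
# A+ applied to g(x) = x^2 must reproduce int_{-1}^{x_i} t^2 dt exactly
print(np.max(np.abs(Ap @ x**2 - (x**3 + 1.0) / 3.0)))
```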


Setting
\[
V f = (f(x_1), \dots, f(x_n))^T , \qquad L(x) = (\ell_1(x), \dots, \ell_n(x)) , \tag{9.16}
\]
and defining $J_n^\pm$ by
\[
(J_n^\pm f)(x) = L(x)\, A^{\pm}\, V f , \tag{9.17}
\]
so that if $D = \{z : \pm\Im z > 0\}$, and if $\hat f$ is analytic in $D$, then the eigenvalues of $\pm i\,(A^{\pm})^{-1}$ are distinct and lie in $D$, and the matrix $\hat f\bigl(\pm i\,(A^{\pm})^{-1}\bigr)$ is then well defined, as also is the approximation³
\[
V \hat f\bigl(\pm i/J^\pm\bigr)\, g \;\approx\; \hat f\bigl(\pm i\,(A^{\pm})^{-1}\bigr)\, V g . \tag{9.18}
\]

Remark 9.2

(i.) The last line of (9.18) can be explicitly evaluated. If, for the case of $A^+$ (resp., for the case of $A^-$), we have $A^+ = X \Lambda X^{-1}$ (resp., $A^- = Y \Lambda Y^{-1}$), where $\Lambda = \mathrm{diag}(\lambda_1, \dots, \lambda_n)$ is the diagonal matrix of eigenvalues, which is the same for $A^+$ and $A^-$, and where $X$ (resp., $Y$) is the corresponding matrix of eigenvectors, then
\[
\hat f^{+}\bigl(i\,(A^{+})^{-1}\bigr)\, V g = X\, \mathrm{diag}\bigl(\hat f^{+}(i/\lambda_1), \dots, \hat f^{+}(i/\lambda_n)\bigr)\, X^{-1}\, V g , \tag{9.19}
\]
and similarly for the term involving $\hat f^{-}$, with $A^-$ in place of $A^+$ and $Y$ in place of $X$.

(ii.) If, for (i.) above, the matrices $A^{\mp}$ are defined on $(-1,1)$, then for any other interval $(a,b)$, $A^{\pm}$ needs to be replaced with $C^{\pm} = \frac{b-a}{2}\,A^{\pm}$; this means that the eigenvalues $\lambda_j$ for $(-1,1)$ also get replaced with $\lambda_j\,(b-a)/2$. However, the matrix of eigenvectors is independent of $a$ and $b$.

Conjecture We state the following conjecture, for which the author of this paper offers \$300 for the first proof or disproof: All of the eigenvalues of every $n \times n$ matrix $A^{\pm}$ defined as in Definition 9.2, for all polynomials $\{P_n\}_{n=0}^{\infty}$ which are orthogonal over $(a,b)$ with respect to the weight function $w$, lie in the open right half of the complex plane. This conjecture has been shown to be true for Sinc interpolation by Han and Xu [7]; it has also been shown to be true for Legendre and for Chebyshev polynomial interpolation by Gautschi and Hairer [4]. However, the conjecture as stated for Definition 9.2 is still unproved for arbitrary weight functions $w$ that are positive a.e. on $(a,b)$.
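The conjecture is easy to test empirically (a test, of course, not a proof). The sketch below is ours: it rebuilds the Legendre $A^+$ of Definition 9.2 and checks that every eigenvalue has positive real part for a range of orders, as the Gautschi–Hairer result guarantees in this particular case.

```python
import numpy as np

def legendre_Ap(n):
    """A+ of Definition 9.2 for Legendre points on (-1, 1), w = 1."""
    x, w = np.polynomial.legendre.leggauss(n)
    def ell(k, y):
        out = np.ones_like(y)
        for j in range(n):
            if j != k:
                out = out * (y - x[j]) / (x[k] - x[j])
        return out
    Ap = np.empty((n, n))
    for i in range(n):
        a, b = -1.0, x[i]
        y = 0.5 * (b - a) * x + 0.5 * (a + b)   # exact Gauss rule on (-1, x_i)
        for k in range(n):
            Ap[i, k] = 0.5 * (b - a) * np.dot(w, ell(k, y))
    return Ap

for n in range(2, 13):
    lam = np.linalg.eigvals(legendre_Ap(n))
    print(n, lam.real.min())      # smallest real part of the spectrum
```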

³ Note that $V L = I$, with $I$ the unit matrix, and also that $V\bigl(L\,\hat f(\pm i\,(A^{\pm})^{-1})\,V\bigr) = \hat f\bigl(\pm i\,(A^{\pm})^{-1}\bigr)\,V$.


The following result is of use in applications. We select for this theorem the weighted Hilbert space $H_w$ of all functions $f, g, \dots$, with inner product
\[
(f, g) = \int_{-1}^{1} w(x)\, f(x)\, \overline{g(x)}\,dx , \tag{9.20}
\]
where $w$ is defined in Definition 9.2. The transformation of the formula (9.20) to the interval $(a,b)$, via use of the transformation $x = t(y) := (a+b)/2 + y\,(b-a)/2$, takes the form
\[
(f, g) = (F, G)_{(a,b)} = \int_a^b W(y)\, F(y)\, \overline{G(y)}\;\frac{b-a}{2}\,dy , \tag{9.21}
\]
where $W(y) = w(t(y))$, $F(y) = f(t(y))$, and $G(y) = g(t(y))$. The following theorem follows from Theorem 9.2.

Theorem 9.5 Let the operators $J^\pm$ be defined as in Eq. (9.1). If $f \in H_w$, then
\[
\Bigl|\int_a^b W(y)\, F(y) \int_a^y W(t)\, F(t)\,dt\,dy\Bigr|
\;=\; \frac{(b-a)^2}{2}\,\Bigl|\int_{-1}^{1} f(\eta) \int_{-1}^{\eta} f(\xi)\,d\xi\,d\eta\Bigr|
\;\le\; \frac{(b-a)^2}{2}\,\Bigl(\int_{-1}^{1} w(x)\,|f(x)|\,dx\Bigr)^2 . \tag{9.22}
\]

9.4 Applications

We illustrate in this section several examples that ensue by approximation of the integration operators $J^\pm$. Such approximations were first stated in [12], then proved by Kearfott [8], Haber [6], and the author ([13], §4.5). The use of these operators for obtaining numerical solutions of differential and integral equations was first discovered by the author in [13], and these were combined extensively in [15] with formulas for approximating the indefinite integral. The present section illustrates applications by combining indefinite integration with Lagrange polynomial approximation.


9.4.1 Legendre Polynomial Approximation of a Model

For sake of simplicity, we use only Legendre polynomials as approximations, since applications using other bases are dealt with in exactly the same way. We also use polynomials of degree at most 4, in view of the present wide-spread interest in problems arising from using large sets of data values. Under the assumption of analyticity of the reconstruction (see e.g., §2 of [15]), we get effective answers to problems whose solutions are smooth, using a very small number of points to construct a solution. That is, the explicit numerical examples of this section are most effective for cases when the model to be approximated is smooth.

It is well known that if a function $f$ is analytic in a simply connected domain $D$ in the complex plane, and if a closed interval $[a,b]$ is in the interior of $D$, then we can approximate $f$ on $[a,b]$ via a polynomial of degree $n$ for which the error approaches zero at a rate of $O(\exp(-c\,n))$, where $c$ is a positive constant. To be more specific, if $(a,b) = (-1,1)$, and if we let $\varphi_j$ denote the normalized Legendre polynomial of degree $j$, so that $\int_{-1}^{1} \varphi_j(x)\,\varphi_k(x)\,dx = \delta_{j,k}$, where $\delta_{j,k}$ denotes the Kronecker delta, then the numbers $c_k = \int_{-1}^{1} f(x)\,\varphi_k(x)\,dx$ approach zero exponentially, and the approximation
\[
f(x) \approx \sum_{k=0}^{n} c_k\, \varphi_k(x) \tag{9.23}
\]
will then be accurate for a relatively small value of $n$. To this end, we could start with a positive integer $m$ (with $m = 5$ in this paper); then, letting $x_1, \dots, x_m$ denote the distinct zeros of $\varphi_m$, and letting $w_1, \dots, w_m$ denote the corresponding Gaussian integration weights, we could evaluate numbers $c_1, \dots, c_n$ with the formula
\[
c_k \approx \sum_{\ell=1}^{m} w_\ell\, f(x_\ell)\, \varphi_k(x_\ell) \tag{9.24}
\]
and stop the process when $c_k$ ($k \le n$) is sufficiently small, since $c_n$ is then of the order of the error of approximation. The above process can also be used when $f$ is approximated via statistical sampling. In this case, it would be necessary to get a good approximation at the above defined zeros $x_j$, and this can be done in many ways, one of which is $\ell_1$-averaging. Note, also, the well-known $m$-point Lagrange interpolation formula, which is used extensively in this section, for evaluating the polynomial $P_{m-1}$ which interpolates a function $f$ at the $m$ points $x_1, \dots, x_m$:
\[
P_{m-1}(x) = \sum_{k=1}^{m} \ell_k(x)\, f(x_k) , \qquad
\ell_k(x) = \prod_{\substack{j=1,\dots,m \\ j \ne k}} \frac{x - x_j}{x_k - x_j} . \tag{9.25}
\]
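The coefficient recipe (9.23)–(9.24) can be sketched as follows for the illustrative choice $f(x) = e^x$ (our example, not the paper's), with $m = 5$ Gauss points and normalized Legendre polynomials $\varphi_k = \sqrt{(2k+1)/2}\,P_k$:

```python
import numpy as np
from numpy.polynomial import legendre as L

m, nmax = 5, 4
x, w = L.leggauss(m)                     # zeros of P_5 and Gauss weights
f = np.exp                               # the model to be reconstructed

def phi(k, y):                           # normalized Legendre polynomial
    c = np.zeros(k + 1)
    c[k] = 1.0
    return np.sqrt((2 * k + 1) / 2.0) * L.legval(y, c)

# coefficients via the m-point Gauss rule, as in (9.24)
c = np.array([np.dot(w, f(x) * phi(k, x)) for k in range(nmax + 1)])

# reconstruct f on a fine grid, as in (9.23)
yy = np.linspace(-1.0, 1.0, 201)
approx = sum(c[k] * phi(k, yy) for k in range(nmax + 1))
print(np.max(np.abs(approx - f(yy))))    # small: e^x is entire, so c_k decay fast
```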

9.4.2 Reconstruction from Statistical Data

Statistical models can take on many different forms. Typically, these can take the form of a system of nonlinear equations, a system of ordinary differential equations (ODE), systems of partial differential equations (PDE), or systems of integral equations, all in the possible presence of noise. The variety of such equations means that many different methods are in use for solving such equations. Included among these are Fokker–Planck models [2], Navier–Stokes equations [18], Schrödinger equations [17], and the methods used in [15] for solving partial differential and integral equations. However, for sake of simplicity we shall restrict our presentation to one-dimensional models, with approximation using Legendre polynomials.

9.4.3 Exact Formulas and Their Approximation

The formulas which we shall describe below will initially be defined in terms of the operators $J^\pm$, and they thus appear esoteric. However, these operations can readily be approximated via use of computable basis functions. We shall only use approximation via Legendre polynomial interpolation of degree 5 in this section, although the programs can easily be altered to work for arbitrary degrees and with other bases; for example, the text [15] mainly uses sinc functions as bases, although methods based on Fourier polynomial bases are given in §1.4.7 and §3.10.2 of [15]. Explicit Matlab programs are available for all of the examples of this section. We construct 3 plots on the interval $(a,b)$ on which the particular example is solved, for each of the five examples which follow:

(i.) Single-point plots of the five exact ('x') and computed ('o') values at the five translated points $\xi_j = (a+b)/2 + (b-a)\,x_j/2$ on $(a,b)$, where the $x_j$ are the zeros on $(-1,1)$ of the Legendre polynomial of degree 5;
(ii.) Continuous curves on $(a,b)$ of the exact ('-') solution, and of the computed one (also '-'); and
(iii.) A plot of the difference between the exact and the computed solution.

Fig. 9.1 Coarse mesh plot of exact and computed FT inversion

(i.) Fourier transform inversion

Visual modeling via use of Fourier transforms has at times been shown to be effective [11]. We shall use the formula of Theorem 9.4. Consider the trivial example for the recovery of $f$ on $(0,4)$ given⁴ $F^+(y) = \int_0^\infty f(t)\,e^{i y t}\,dt = 1/(1 - i y)$. The exact solution is (*) $f(t) = e^{-t}$. We use formulas (9.12) and (9.25); then, by taking a Legendre polynomial of degree 5, we get a matrix $A^+$ of order 5, which we multiply by 2 to get $C = 2 A^+$, for approximation on $(0,4)$ (twice the length of the interval $(-1,1)$). By Theorem 9.4 and Eq. (9.17), $y$ in the above Fourier transform is replaced with $i/C = i\,C^{-1}$, and also, selecting the new interpolation points $\xi_j = 2(1 + x_j)$, we get
\[
(f(\xi_1), \dots, f(\xi_n))^T \;\approx\; (I + C)^{-1}\, \mathbf{1} , \tag{9.26}
\]
where $I$ denotes the unit matrix of order $n$, $\mathbf{1}$ denotes a vector of $n$ ones, and where the matrix $(I + C)$ is non-singular, since all of the eigenvalues of $A^+$ have positive real parts.⁵ The plots appear in Figs. 9.1, 9.2, and 9.3.

⁴ We would get the same answer if we knew $F$ only as a Fourier transform of $f$ taken over an interval $(0,b)$, for any $b \ge 5$.
⁵ Since $F$ is analytic in the upper half plane, and since the real parts of the eigenvalues of $A^\pm$ are positive, $F(i/A^+) = F(i\,(A^+)^{-1})$ is well defined for all such $A^+$, and so $(I + C)^{-1}$ is also well defined.
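This example can be reproduced in a few lines. The sketch below is ours: it rebuilds $A^+$ as in Definition 9.2, forms $C = 2A^+$, and evaluates (9.26) against the exact solution $f(t) = e^{-t}$ at the translated nodes.

```python
import numpy as np

def legendre_Ap(n):
    """A+ of Definition 9.2 for Legendre points on (-1, 1), w = 1."""
    x, w = np.polynomial.legendre.leggauss(n)
    def ell(k, y):
        out = np.ones_like(y)
        for j in range(n):
            if j != k:
                out = out * (y - x[j]) / (x[k] - x[j])
        return out
    Ap = np.empty((n, n))
    for i in range(n):
        a, b = -1.0, x[i]
        y = 0.5 * (b - a) * x + 0.5 * (a + b)
        for k in range(n):
            Ap[i, k] = 0.5 * (b - a) * np.dot(w, ell(k, y))
    return x, Ap

x, Ap = legendre_Ap(5)
C = 2.0 * Ap                             # A+ rescaled from (-1, 1) to (0, 4)
xi = 2.0 * (1.0 + x)                     # translated interpolation points
fc = np.linalg.solve(np.eye(5) + C, np.ones(5))   # formula (9.26)
print(np.max(np.abs(fc - np.exp(-xi))))  # small, consistent with Fig. 9.3
```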

Fig. 9.2 Fine mesh plot of exact and computed FT inversion

Fig. 9.3 Fine mesh plot of error of FT inversion (vertical scale 10⁻³)

(ii.) Laplace transform inversion

The inversion formula for Laplace transform inversion was originally discovered by Stenger in [15]. The exact formula presented here is only the third known exact formula for inverting the Laplace transform, the other two being due to Post [10] and Bromwich [1], although we believe that practical implementation of the Post formula has never been achieved, while the evaluation of the vertical line formula of Bromwich is both far more difficult and less accurate than our method, which follows.


Consider the case of recovering the following function $f$ on $(0,2)$ given $F$, where
\[
F(s) = \int_0^\infty f(t)\, e^{-s t}\,dt = \frac{1}{2} - \frac{1}{\pi}\tan^{-1}\Bigl(\frac{s}{\pi}\Bigr) , \qquad
f(t) = \frac{\sin(\pi t)}{\pi t} . \tag{9.27}
\]
We again use Lagrange polynomial approximation via interpolation at the zeros of the Legendre polynomial of degree 5. The length of the interval $(0,2)$ is the same as the length of $(-1,1)$, for which the matrix $A^+$ is defined. The new points of interpolation are on the interval $(0,2)$, which shifts them by 1 from the original interval $(-1,1)$ for Legendre polynomials, so that the new points of interpolation are $\xi_j = 1 + x_j$. The exact inversion formula is $f = (1/J^+)\, F(1/J^+)\, \mathbf{1}$, with $F$ given on the right hand side of (9.27). Hence, replacing $J^+$ with $A^+$, we get the following exact and approximate solutions:
\[
f(\xi_j) = \frac{\sin(\pi\xi_j)}{\pi\xi_j} , \quad j = 1, \dots, n , \qquad
(f(\xi_1), \dots, f(\xi_n))^T \;\approx\; (A^+)^{-1}\, F\bigl((A^+)^{-1}\bigr)\, \mathbf{1} , \tag{9.28}
\]
where $\mathbf{1}$ denotes a column vector of $n$ ones.⁶ In this case we need the eigenvalue representation $A^+ = X \Lambda X^{-1}$ to evaluate the last equation of (9.28). We get
\[
(f(\xi_1), \dots, f(\xi_n))^T \;\approx\; X\, \Lambda^{-1}\, F(\Lambda^{-1})\, X^{-1}\, \mathbf{1} . \tag{9.29}
\]
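The Laplace inversion recipe (9.28)–(9.29) can be sketched numerically as follows (our code, with $A^+$ rebuilt as in Definition 9.2, and NumPy's complex-capable `arctan` used to evaluate $F$ at the eigenvalues):

```python
import numpy as np

def legendre_Ap(n):
    """A+ of Definition 9.2 for Legendre points on (-1, 1), w = 1."""
    x, w = np.polynomial.legendre.leggauss(n)
    def ell(k, y):
        out = np.ones_like(y)
        for j in range(n):
            if j != k:
                out = out * (y - x[j]) / (x[k] - x[j])
        return out
    Ap = np.empty((n, n))
    for i in range(n):
        a, b = -1.0, x[i]
        y = 0.5 * (b - a) * x + 0.5 * (a + b)
        for k in range(n):
            Ap[i, k] = 0.5 * (b - a) * np.dot(w, ell(k, y))
    return x, Ap

F = lambda s: 0.5 - np.arctan(s / np.pi) / np.pi    # the transform in (9.27)
x, Ap = legendre_Ap(5)       # (0, 2) has the same length as (-1, 1)
xi = 1.0 + x                 # shifted interpolation points
lam, X = np.linalg.eig(Ap)
fc = X @ np.diag(F(1.0 / lam) / lam) @ np.linalg.solve(X, np.ones(5))  # (9.29)
err = np.max(np.abs(fc.real - np.sin(np.pi * xi) / (np.pi * xi)))
print(err)
```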

We next plot the fine mesh solution, which is obtained by evaluating the degree 4 polynomial interpolant of the computed solution at 100 equi-spaced points on $(0,2)$. Finally, we also plot the fine-mesh error at the same 100 points. The slight error at the end-points is due to the jump singularity of the Fourier transform at the origin. The plots are contained in Figs. 9.4, 9.5, and 9.6.

(iii.) Optimal control via Fourier transforms

We illustrate here an application of Theorem 9.4. Such an example may arise in the simple design of a control, or in the statistical determination of a feedback control, etc. We wish to evaluate the integral
\[
p(t) = \int_t^3 f(t-\tau)\, g(\tau)\,d\tau , \quad t \in (0,3) , \tag{9.30}
\]

⁶ Note that the Laplace transform $F$ is analytic on the right half plane, and since the real parts of the eigenvalues of $A^+$ (and hence of $(A^+)^{-1}$) are positive, the matrix $F\bigl((A^+)^{-1}\bigr)$ is well defined.

Fig. 9.4 Coarse mesh plot of exact (dashed line) and computed (cross circle) LT inversion

Fig. 9.5 Fine mesh plot of exact (dashed line) and computed (cross circle) LT inversion

using the formula
\[
p = \hat f\bigl(-i/J^-\bigr)\, g , \tag{9.31}
\]
where $\hat f(y) = \int_{-\infty}^{0} f(t)\, e^{i y t}\,dt = \int_0^{\infty} f(-t)\, e^{-i y t}\,dt$. Here we shall use the matrix $A^-$ defined as in Definition 9.2 above, which must be replaced by $C = (3/2)\, A^-$ for use on the interval $(0,3)$; in addition, the points of

Fig. 9.6 Fine mesh plot of error of LT inversion

interpolation are ξj = (3/2) (1 + xj ) . Thus the approximation formula is V p ≈ f:(−i C −1 ) V g .

(9.32)

We consider as an example the evaluation of the convolution integral

\[ \int_t^3 \exp\bigl(\alpha\,(t-\tau)\bigr)\, J_0(t-\tau)\, e^{-\beta \tau}\, d\tau , \quad t \in (0,3) , \tag{9.33} \]

where α and β are positive. In this case, we have the Fourier transform

\[ \hat f(y) = \int_0^{\infty} e^{-(\alpha + i y)\,t}\, J_0(t)\, dt = \bigl(1 + (\alpha + i y)^2\bigr)^{-1/2} , \tag{9.34} \]

and upon replacing y with −i C⁻¹, we get the approximation

\[ p := (p(t_1),\ldots,p(t_n))^T \approx C\,\bigl((1+\alpha^2)\,C^2 + 2\,\alpha\, C + I\bigr)^{-1/2} g , \tag{9.35} \]

where g = (e^{−β t₁}, …, e^{−β tₙ})ᵀ. The selection of several values of β could be used to model a given output p. In Examples 7, 8, and 9 below we have used β = 0.7. But more directly, since p and g are related by Eq. (9.34), we could also determine an accurate control vector g to compute an accurate approximation


to the response:

\[ g \approx \Bigl[\, C\,\bigl((1+\alpha^2)\,C^2 + 2\,\alpha\,C + I\bigr)^{-1/2} \Bigr]^{-1} p . \tag{9.36} \]

Note here that the matrix multiplying g in (9.35) can be determined explicitly: setting C = X Λ X⁻¹, where Λ = diag(λ₁, …, λₙ), we have

\[ D := X^{-1}\bigl((1+\alpha^2)\,C^2 + 2\,\alpha\,C + I\bigr)^{1/2} X ; \quad D = \mathrm{diag}(d_1,\ldots,d_n) , \quad d_j = \bigl((1+\alpha^2)\,\lambda_j^2 + 2\,\alpha\,\lambda_j + 1\bigr)^{1/2} . \tag{9.37} \]

This matrix is non-singular, and we can therefore use it to compute an approximation to the function g in (9.30), in order to achieve any particular response. If we again use the same Legendre polynomial of degree 5, we get a polynomial solution, but unfortunately the exact solution is not explicitly known. To this end, we can use the same Eq. (9.36), but with a larger value of n, to get a more accurate solution; e.g., by taking n = 11, we get at least 8 places of accuracy, which can be taken to be the exact answer for our purposes, and which we use to compute the "exact" answer at the 5 points ξⱼ. We thus again get 3 plots as above: a coarse mesh plot of the "exact" solution and our degree 4 approximation in Fig. 9.7, a fine mesh plot of the "exact" solution and of our degree 4 approximation in Fig. 9.8, and a fine mesh plot of the difference between these two quantities in Fig. 9.9.
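As a small numerical sketch of (9.37), with a hypothetical 2×2 stand-in for C, the inverse square root appearing in (9.35) can be assembled from the eigendecomposition:

```python
import numpy as np

# Forming ((1+a^2)C^2 + 2aC + I)^{-1/2} via C = X Lam X^{-1}, as in (9.37).
# The 2x2 matrix C is a hypothetical stand-in for the integration matrix.
alpha = 1.0
C = np.array([[0.6, 0.2], [0.1, 0.5]])
lam, X = np.linalg.eig(C)
d = ((1 + alpha**2) * lam**2 + 2 * alpha * lam + 1) ** 0.5    # the d_j of (9.37)
P = (1 + alpha**2) * C @ C + 2 * alpha * C + np.eye(2)
P_inv_sqrt = (X * (1.0 / d)) @ np.linalg.inv(X)               # X diag(1/d_j) X^{-1}

# Sanity check: (P^{-1/2})^2 P = I; the matrix multiplying g in (9.35) is then
# C @ P_inv_sqrt.
print(np.allclose(P_inv_sqrt @ P_inv_sqrt @ P, np.eye(2)))    # True
```

Since P is a polynomial in C, it shares the eigenvector matrix X, which is why a diagonal scaling of X suffices.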

Fig. 9.7 Coarse mesh plot of exact and computed opt. control

Fig. 9.8 Fine mesh plot of exact and computed opt. control

Fig. 9.9 Fine mesh plot of error of opt. control (vertical scale 10⁻⁴)

(iv.) Modeling via ordinary differential equations

Most ODE solvers in use today are one-step methods, and as such their use is restricted by considerations of stability, convergence, stiffness, and accuracy; moreover, they are restricted to obtaining a solution on a finite interval. Not so with the present method [3], which extends to polynomials the method of [16] that was designed for Sinc approximation.


The most common model for constructing methods of approximate solution of ODE on an interval (a, b) is

\[ y' = f(x, y) , \quad y(a) = y_a \ \text{(a constant)} . \tag{9.38} \]

Transforming to an equivalent integral equation, we get

\[ y(x) = y_a + \int_a^x f(t, y(t))\, dt , \quad \text{or} \quad y = y_a\,\mathbf{1} + J^{+} f(\cdot\,, y(\cdot)) . \tag{9.39} \]

Upon applying the approximation procedure of Sect. 9.3, we can immediately convert the IE (9.39) to a system of algebraic equations

\[ Y = y_a\,\mathbf{1} + A^{+} f , \tag{9.40} \]

where Y = (y₁, …, yₙ)ᵀ, A⁺ is as defined in §3.2, 1 is a column vector of ones, and

\[ f := (f_1, f_2, \ldots, f_n)^T , \quad f_j = f(x_j, y_j) . \tag{9.41} \]

Some notes:

(1.) Convergence will always occur for b − a sufficiently small, since by Theorem 9.2 the eigenvalues of A are bounded by (b − a)/√2.

(2.) Under a convergence criterion such as ‖Y⁽ᵐ⁾ − Y⁽ᵐ⁻¹⁾‖ < ε, the resulting solution will have polynomial degree n accuracy at each of the points z₁, …, zₙ, where the interpolation is exact. Under mild assumptions on f, we can then achieve similar accuracy at any set of points, e.g., at equi-spaced points.

(3.) Knowing accurate values of both y and y' = f at n points, we can get polynomial precision of degree 2n − 1 using Hermite interpolation.

Consider the case of recovering the following function y on (0, 1/2), where

\[ y' = 1 + y^2 , \quad y(0) = 0 . \tag{9.42} \]

The exact solution is y = tan(t). Since the interval is (0, 1/2), the points of interpolation are ξⱼ = (1 + xⱼ)/2, and the corresponding indefinite integration matrix is C = (1/2) A⁺. By applying the above-outlined method of approximation, we replace the IE of (9.39) by the system of algebraic equations

\[ Y = X + C\, Y^2 , \tag{9.43} \]

where X = (t₁, …, tₙ)ᵀ and Y² = (y₁², …, yₙ²)ᵀ. The interval (0, 1/2) was selected here since, for example, if the matrix C in (9.43) is replaced with

Fig. 9.10 Coarse mesh plot of exact and computed ODE

Fig. 9.11 Fine mesh plot of exact & computed ODE

A⁺, then Picard iteration does not converge to a solution of (9.42). We could, of course, extend the solution further by restarting it at t = 1/2. The computed solution and plots are as in the previous cases, namely: in Fig. 9.10, a coarse mesh plot of the exact and computed solutions at the points ξⱼ, j = 1, …, 5; in Fig. 9.11, a fine mesh plot at 100 equi-spaced points of the exact and the computed polynomial solution on the interval (0, 1/2); and in Fig. 9.12, a fine mesh plot of the difference between these functions on the same interval.
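The fixed-point solution of (9.43) can be sketched as follows; the nodes and the integration matrix C are built here directly on (0, 1/2) from Lagrange interpolation on Legendre points (a stand-in for the scaled matrix (1/2)A⁺ of the text):

```python
import numpy as np

# Picard iteration for Y = X + C Y^2 (Eq. (9.43)), i.e. y' = 1 + y^2, y(0) = 0,
# on (0, 1/2).  C is the polynomial indefinite-integration matrix on the nodes:
# C[i, j] = integral from 0 to t_i of the j-th Lagrange basis polynomial.
n = 5
x = np.polynomial.legendre.leggauss(n)[0]       # Legendre points in (-1, 1)
t = 0.25 * (1.0 + x)                            # mapped into (0, 1/2)

C = np.zeros((n, n))
for j in range(n):
    e = np.zeros(n); e[j] = 1.0
    p = np.polyfit(t, e, n - 1)                 # j-th Lagrange basis polynomial
    C[:, j] = np.polyval(np.polyint(p), t)      # its antiderivative at the nodes

Y = np.zeros(n)
for _ in range(200):                            # Picard iteration
    Y_new = t + C @ Y**2
    if np.max(np.abs(Y_new - Y)) < 1e-14:
        break
    Y = Y_new
Y = Y_new

print(np.max(np.abs(Y - np.tan(t))))            # small: Y approximates tan(t)
```

On (0, 1/2) the iteration is a contraction, consistent with the text's remark that on a longer interval Picard iteration need not converge.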

Fig. 9.12 Fine mesh plot of error of ODE (vertical scale 10⁻³)

(v.) Modeling via Wiener–Hopf equations

The classical Wiener–Hopf integral equation, with solution f, for given k defined on ℝ and g defined on (0, ∞), takes the form

\[ f(x) - \int_0^{\infty} k(x-t)\, f(t)\, dt = g(x) , \quad x \in (0, \infty) . \tag{9.44} \]

Many thousands of papers have been written on the solution of this equation, particularly with reference to the mathematically beautiful factorization procedure originally discovered by Wiener and Hopf in 1931 for solving it. Unfortunately, such a factorization cannot be determined for nearly all problems⁷ of the type (9.44). A revision of this mathematics among the top 10 mathematics departments occurred from about 1960 until 1970, but neither this pure mathematics activity nor the accompanying applied mathematics activity provided any insight into the solution of (9.44). We illustrate here a method of solving this problem, and while our illustration is almost trivial, in that the exact equation is easier to solve than our approximating equation, our approximating method nevertheless applies to all equations of the type (9.44), provided that the Fourier (or Laplace) transform of k is explicitly known, or that it can easily be approximated. Moreover, the solution we present in what follows is both efficient and accurate.

7 An explicit factorization process is, in fact, known for the case when k ∈ L¹(ℝ) and g ∈ L¹(0, ∞), but this does not lend itself to a practically efficient method.


We split the convolution integral in (9.44) into an integral from 0 to x plus an integral from x to ∞, obtaining two indefinite integrals, so that (9.44) can be rewritten as

\[ f(x) - \left[ \int_0^{x} k(x-t)\, f(t)\, dt + \int_x^{\infty} k(x-t)\, f(t)\, dt \right] = g(x) , \quad x > 0 . \tag{9.45} \]

At this point we can invoke Theorem 9.4, which enables us to replace the convolution integrals. Letting \(\hat k^{+}\) denote the Fourier transform of k taken over (0, ∞) (resp., letting \(\hat k^{-}\) denote the Fourier transform of k taken over (−∞, 0)), we get

\[ f - \Bigl[ \hat k^{+}\bigl(i\,(J^{+})^{-1}\bigr) + \hat k^{-}\bigl(-i\,(J^{-})^{-1}\bigr) \Bigr] f = g , \tag{9.46} \]

or, in collocated form, and now using both matrices A∓ defined above, we get

\[ (f_1,\ldots,f_n)^T = \Bigl[ I - \hat k^{+}\bigl(i\,(A^{+})^{-1}\bigr) - \hat k^{-}\bigl(-i\,(A^{-})^{-1}\bigr) \Bigr]^{-1} (g_1,\ldots,g_n)^T . \tag{9.47} \]

In (9.47), I denotes the unit matrix, while the subscripts "(·)ⱼ" of f and g denote values to be computed and exact values, respectively, at xⱼ. It should be observed that this procedure also yields accurate approximate solutions in cases when (9.44) has non-unique solutions; such solutions can be obtained by using singular value decomposition to solve (9.47). For example, consider the (rather trivial to solve) equation

\[ f(x) - \int_0^{1} k(x-t)\, f(t)\, dt = g(x) , \quad 0 < x < 1 , \tag{9.48} \]

where

\[ k(x) = \begin{cases} -e^{-x} & \text{if } -1 < x < 0 , \\ -e^{-x} & \text{if } x > 0 , \\ 0 & \text{if } -\infty < x < -1 . \end{cases} \tag{9.49} \]

This equation is easier to solve analytically than numerically, although this is not the case for most functions k. We could just as easily solve (9.44) for more complicated kernels, such as k(x) = log(x) e⁻ˣ/(1 + x²)⁰·³ on (0, ∞), with k(x) having a different, but similarly complicated, expression for x < 0; the procedure is the same in the more complicated case as for this one. The equation has the unique solution f(t) = g(t) − c e⁻ᵗ, with c = (1/2) ∫₀¹ eᵗ g(t) dt, for any g defined on (0, 1). In particular, if

\[ g(t) = 2\, e^{-1/2}\, t\, e^{\,t^2 - t} , \]

then c = sinh(1/2). In this case, we have⁸
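A quick numerical check (midpoint rule) that the choice g(t) = 2 e⁻¹ᐟ² t e^{t²−t} indeed gives c = sinh(1/2):

```python
import numpy as np

# Midpoint-rule check that c = (1/2) * int_0^1 e^t g(t) dt = sinh(1/2)
# for g(t) = 2 e^{-1/2} t e^{t^2 - t}.
h = 1.0e-5
t = (np.arange(100000) + 0.5) * h               # midpoints of (0, 1)
g = 2.0 * np.exp(-0.5) * t * np.exp(t**2 - t)
c = 0.5 * np.sum(np.exp(t) * g) * h
print(abs(c - np.sinh(0.5)))                    # essentially zero
```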

\[ \hat k^{+}(x) = -\int_0^{\infty} e^{i x y}\, e^{-y}\, dy = -\,\frac{1}{1 - i x} , \qquad \hat k^{-}(x) = -\int_{-2}^{0} e^{i x t}\, e^{-t}\, dt = 2\, e^{\,1 - i x}\, \frac{\sinh(1 - i x)}{1 - i x} . \tag{9.50} \]

Hence the operators \(\hat k^{+}(i\,(J^{+})^{-1})\) and \(\hat k^{-}(-i\,(J^{-})^{-1})\) can be explicitly expressed, and replacing J⁺ with A⁺ and J⁻ with A⁻, we get

\[ \hat k^{+}\bigl(i\,(A^{+})^{-1}\bigr) = -A^{+}\,(I + A^{+})^{-1} , \qquad \hat k^{-}\bigl(-i\,(A^{-})^{-1}\bigr) = 2\,\exp\bigl(I - (A^{-})^{-1}\bigr)\, \sinh\bigl(I - (A^{-})^{-1}\bigr)\, \bigl(I - (A^{-})^{-1}\bigr)^{-1} . \tag{9.51} \]

These matrices can be readily computed: e.g., if λⱼ (j = 1, …, n) are the eigenvalues of A±, then setting uⱼ = −(1 + 1/λⱼ)⁻¹, vⱼ = −(1 − 1/λⱼ), wⱼ = exp(−vⱼ) sinh(vⱼ)/vⱼ, U = diag(u₁, …, uₙ) and W = diag(w₁, …, wₙ), we get \(\hat k^{+}\bigl(i\,(A^{+})^{-1}\bigr) = X\, U\, X^{-1}\) and \(\hat k^{-}\bigl(-i\,(A^{-})^{-1}\bigr) = Y\, W\, Y^{-1}\), where X and Y diagonalize A⁺ and A⁻, respectively.

We have again plotted the exact and approximate solutions, where to get the approximate solution we have used a degree 4 polynomial approximation on (0, 1). We could, of course, easily have gotten greater accuracy using a higher degree Legendre polynomial. As above, we have again plotted the coarse mesh exact and approximate solutions in Fig. 9.13, the fine mesh approximate and the fine mesh exact solutions in Fig. 9.14, and the difference between the exact and the computed solutions in Fig. 9.15.

Acknowledgments The author is grateful to the editor and referee for valuable criticisms.
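A minimal sketch of this eigendecomposition route, using a hypothetical 2×2 stand-in for A⁺ and checking the closed form of \(\hat k^{+}\) in (9.51):

```python
import numpy as np

# Evaluating the matrix functions of (9.51) through an eigendecomposition,
# as described in the text.  A is a hypothetical stand-in for A+.
def matfun(A, f):
    lam, X = np.linalg.eig(A)                # A = X diag(lam) X^{-1}
    return (X * f(lam)) @ np.linalg.inv(X)

A = np.array([[0.5, 0.1], [0.0, 0.4]])
# k+ at i(A+)^{-1}: scalar formula -1/(1 + 1/lam) = -lam/(1 + lam)
lhs = matfun(A, lambda lam: -lam / (1.0 + lam))
rhs = -A @ np.linalg.inv(np.eye(2) + A)      # the closed form -A(I + A)^{-1}
print(np.allclose(lhs, rhs))                 # True
```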

8 Notice, the integral for \(\hat k^{-}\) must be truncated, since it will not converge otherwise. In particular, the integration from −2 to 0 instead of from −1 to 0 served to avoid the singularity due to the truncation of \(\hat f^{-}\) at −1.

Fig. 9.13 Coarse mesh plot of exact and computed Wiener–Hopf

Fig. 9.14 Fine mesh plot of exact (–) and approximate (·) Wiener–Hopf solution

Fig. 9.15 Fine mesh plot of error of Wiener–Hopf (vertical scale 10⁻³)

Bibliography

1. Bromwich, T.: Normal coordinates in dynamical systems. Proc. London Math. Soc. 15, 401–448 (1916)
2. Baumann, G., Stenger, F.: Fractional Fokker–Planck equation. Mathematics 5, 1–19 (2017)
3. Dahlquist, G., Stenger, F.: Approximate solution of ODE via approximate indefinite integration, to appear
4. Gautschi, W., Hairer, E.: On conjectures of Stenger in the theory of orthogonal polynomials. J. Inequal. Appl., Article Number 159 (2019)
5. Gustafson, K.E., Rao, D.K.M.: Numerical Range: The Field of Values of Linear Operators and Matrices. Springer, New York (1996)
6. Haber, S.: Two formulas for numerical indefinite integration. Math. Comput. 60, 279–296 (1993)
7. Han, L., Xu, J.: Proof of Stenger's conjecture on matrix I⁽⁻¹⁾ of Sinc methods. J. Comput. Appl. Math. 255, 805–811 (2014)
8. Kearfott, R.B.: A Sinc approximation for the indefinite integral. Math. Comput. 41, 559–572 (1983)
9. Notes on the numerical range. http://www.joelshapiro.org/Pubvit/
10. Post, E.: Generalized differentiation. Trans. AMS 32, 723–781 (1930)
11. Royer, F.L., Rzeszotarski, M.S., Gilmore, G.C.: Application of two-dimensional Fourier transforms to problems of visual perception. 15, 319–326 (1983)
12. Stenger, F.: Numerical methods based on the Whittaker cardinal, or Sinc functions. SIAM Rev. 23, 165–224 (1981)
13. Stenger, F.: Numerical Methods Based on Sinc and Analytic Functions. Springer, New York (1993)
14. Stenger, F.: Collocating convolutions. Math. Comput. 64, 211–235 (1995)
15. Stenger, F.: Handbook of Sinc Numerical Methods. CRC Press, Boca Raton (2011)
16. Stenger, F., Gustafson, S.Å., Keyes, B., O'Reilly, M., Parker, K.: ODE-IVP-PACK via Sinc indefinite integration and Newton's method. Numer. Algorithms 20, 241–268 (1999)
17. Stenger, F., Koures, V., Baumann, G.: Computational methods for chemistry and physics, and Schrödinger 3+1. In: Sabin, J.R., Cabrera-Trujillo, R. (eds.) Advances in Quantum Chemistry, vol. 71, pp. 265–298. Academic Press, Burlington (2015)
18. Stenger, F., Tucker, D., Baumann, G.: Solution of Navier–Stokes on ℝ³ × (0, T). Springer, Cham (2016)
19. Yamamoto, T.: Approximation of the Hilbert transform via use of Sinc convolution. Electron. Trans. Numer. Anal. 23, 320–328 (2006)

Chapter 10

An Overview of the Computation of the Eigenvalues Using Sinc-Methods M. H. Annaby, R. M. Asharabi, and M. M. Tharwat

Abstract In this chapter we give a survey of the use of sinc methods in computing eigenvalues of various types of boundary value problems. The techniques cover the classical sinc-method, the regularized sinc-method, Hermite interpolations and the associated regularized technique, and the sinc-Gaussian, Hermite-Gauss and generalized sinc-Gaussian methods. The application of these methods covers a very wide class of problems, including, but not limited to, second order differential operators, λ-type problems in L²(a, b) ⊕ ℂʳ spaces, discontinuous problems, and multiparameter problems, in self-adjoint and non-self-adjoint settings, regular and singular. Both horizontal and vertical extensions of the application of the technique are still open and under consideration.

Keywords Sinc approximations · Approximating eigenvalues · Sampling theorems · Eigenvalue problems · Gaussian convergence factor · Truncation and amplitude errors

M. H. Annaby () Department of Mathematics, Faculty of Science, Cairo University, Giza, Egypt e-mail: [email protected] R. M. Asharabi Department of Mathematics, College of Arts and Sciences, Najran University, Najran, Saudi Arabia e-mail: [email protected] M. M. Tharwat Department of Mathematics and Computer Science, Faculty of Science, Beni-Suef University, Beni-Suef, Egypt e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 G. Baumann (ed.), New Sinc Methods of Numerical Analysis, Trends in Mathematics, https://doi.org/10.1007/978-3-030-49716-3_10


M. H. Annaby et al.

10.1 Introduction

As in every historical question, it is not easy to find out who first investigated the application of the sinc-method in eigenvalue computations. Thanks are due to the analysis carried out by Stenger [70], which led to tremendous work in the field, including the computation of eigenvalues of boundary value problems. However, the attempts at implementing cardinal-type interpolations attracted authors seeking solutions for applied problems, see e.g. [37]. To the best of our knowledge, the work of Lund and Riley [62] is the first to compute eigenvalues of a singular Sturm-Liouville problem using a sinc technique. Then came the works of Eggert et al. [46] and Jarratt et al. [54], refining and improving the techniques. In these works the authors derived the sinc-computations of eigenvalues of singular problems, implementing the analysis of [61, 70] for various eigenvalue problems. They used approximations of analytic functions in domains of the complex plane ℂ. The spectral classification of singular problems, i.e. the limit-point and limit-circle classifications, is not considered; however, several examples are computed. In a different approach, which is the one we summarize in this survey, there have been several studies that apply the sinc-method associated with the Whittaker sampling theorem to compute the eigenvalues of differential operators numerically, see e.g. [1, 3, 9, 15, 26, 29, 32]. In these studies attention is given to the spectral quantities and the distribution of the eigenvalues. This survey is designed as follows. Section 10.2 is devoted to summarizing the classical sinc-method and its applications to various boundary value problems. In Sect. 10.3, we introduce the use of the regularized-sinc method in computing the eigenvalues numerically. Section 10.4 considers the use of Hermite interpolations and their implementation in eigenvalue computations. Section 10.5 briefly introduces the sinc-Gaussian method and its applications in the eigenvalue computations.
Section 10.6 is devoted to the Hermite-Gaussian method. The use of the generalized sinc-Gaussian method is introduced in the last section. Each section starts with the basic error analysis formulas, followed by a selection of implementations of the method for a certain problem, accompanied by an illustrative example. Other applications will also be mentioned briefly. Thus, for space limitations, we will demonstrate only one case and one example of each method. We end this introduction by admitting that we have done our best to mention the major works and publications on the problems under consideration. Nevertheless a good work or more might be missed, due to human shortcoming, and undoubtedly not intentionally.

10 Sinc-Computation of Eigenvalues


10.2 Classical Sinc-Method

In this section we introduce the use of the sinc-method in computing eigenvalues of eigenvalue problems. We start by stating the convergence and error analysis of the Whittaker-Kotel'nikov-Shannon sampling theorem. This involves the estimates of the truncation and amplitude errors. The problems discussed here are: general second order problems, λ-type problems, singular problems and discontinuous problems.

10.2.1 Error Analysis of the Sinc-Method

By the classical sinc method, see [70], we mean the use of the Whittaker-Kotel'nikov-Shannon (WKS) sampling theorem, which states that if f(μ) is entire in μ of exponential type σ, σ > 0, and belongs to L²(ℝ) when restricted to ℝ, then f(μ) can be reconstructed via the sampling (interpolation) representation

\[ f(\mu) = \sum_{n=-\infty}^{\infty} f\Bigl(\frac{n\pi}{\sigma}\Bigr)\, \mathrm{sinc}(\sigma\mu - n\pi) , \quad \mu \in \mathbb{C} . \tag{10.1} \]

Series (10.1) converges absolutely on ℂ and uniformly on ℝ and on compact subsets of ℂ, see [36, 49, 70, 80, 81]. The space of all such f is the Paley-Wiener space of band-limited functions with band width σ, which will be denoted by B²_σ, see [25]. The nodes (nπ/σ)_{n∈ℤ} are called the sampling points, and the building sinc functions are

\[ \mathrm{sinc}(\sigma\mu - n\pi) := \begin{cases} \dfrac{\sin(\sigma\mu - n\pi)}{\sigma\mu - n\pi} , & \mu \ne \dfrac{n\pi}{\sigma} , \\[2mm] 1 , & \mu = \dfrac{n\pi}{\sigma} , \end{cases} \qquad n \in \mathbb{Z} . \tag{10.2} \]

Series (10.1) is used in approximating solutions and eigenvalues of boundary value problems, see [3, 9, 30, 61, 69, 70]. Accordingly many types of errors arise when applying the WKS sampling theorem. In the present setting we consider two types of errors, namely truncation and amplitude errors. For N ∈ ℕ and f(μ) ∈ B²_σ, let f_N(μ) be the truncated cardinal series

\[ f_N(\mu) := \sum_{n=-N}^{N} f\Bigl(\frac{n\pi}{\sigma}\Bigr)\, \mathrm{sinc}(\sigma\mu - n\pi) , \quad \mu \in \mathbb{C} . \tag{10.3} \]
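A minimal numerical sketch of the truncated series (10.3); the example function below is our own choice of a member of B²₁:

```python
import numpy as np

# Truncated WKS cardinal series (10.3): reconstruct a function in the
# Paley-Wiener space B^2_sigma from the samples f(n*pi/sigma), |n| <= N.
def f_trunc(f, sigma, N, mu):
    n = np.arange(-N, N + 1)
    # np.sinc(x) = sin(pi*x)/(pi*x), so sinc(sigma*mu - n*pi) = np.sinc(sigma*mu/pi - n)
    return np.sum(f(n * np.pi / sigma) * np.sinc(sigma * mu / np.pi - n))

# Example: f(mu) = (sin(mu/2)/(mu/2))^2 is entire of exponential type 1 and
# lies in L^2(R), i.e. f belongs to B^2_1.
f = lambda mu: np.sinc(mu / (2 * np.pi)) ** 2
mu = 0.7
approx = f_trunc(f, 1.0, 200, mu)
print(abs(approx - f(mu)))        # the truncation error T_N(mu), small here
```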

By the truncation error, we mean

\[ T_N(\mu) := |f(\mu) - f_N(\mu)| , \quad \mu \in \mathbb{C} . \tag{10.4} \]

In most approximation problems, the samples {f(nπ/σ)} cannot be concretely computed. Instead, other approximate ones {f̃(nπ/σ)} are adopted. This produces the amplitude error. Let ε > 0 be fixed and |f(nπ/σ) − f̃(nπ/σ)| ≤ ε for all n ∈ ℤ. The amplitude error is defined to be

\[ \mathcal{A}_{\varepsilon}[f](\mu) = \sum_{n=-\infty}^{\infty} \Bigl( f\Bigl(\frac{n\pi}{\sigma}\Bigr) - \tilde f\Bigl(\frac{n\pi}{\sigma}\Bigr) \Bigr)\, \mathrm{sinc}(\sigma\mu - n\pi) , \quad \mu \in \mathbb{C} . \tag{10.5} \]

Since we are considering problems where the eigenvalues are not necessarily real, we will state the estimates of both truncation and amplitude errors on ℂ. The case μ ∈ ℝ, as treated in [5, 35, 52], results directly by taking ℑμ = 0, where ℑμ denotes the imaginary part of μ. Letting Ω_{σ,N} be the disk

\[ \Omega_{\sigma,N} := \Bigl\{ \mu \in \mathbb{C} : |\mu| < \frac{N\pi}{\sigma} \Bigr\} , \quad N \in \mathbb{Z}^{+} , \tag{10.6} \]

it is proved in [3] that if f(μ) ∈ B²_σ and μ f(μ) ∈ L²(ℝ) when restricted to ℝ, where f(μ) is given in (10.1), then we have the following estimate for the truncation error on ℂ:

\[ |f(\mu) - f_N(\mu)| \le B_N(\mu) , \quad \mu \in \Omega_{\sigma,N} ,\ N \in \mathbb{N} , \tag{10.7} \]

where

\[ B_N(\mu) := \frac{2E\,\sigma^{3/2}\,|\sin\sigma\mu|}{\pi^2\sqrt{3}\,\sqrt{N+1}} \left( \frac{1}{\sqrt{N\pi - \sigma\,\Re\mu}} + \frac{1}{\sqrt{N\pi + \sigma\,\Re\mu}} \right) , \tag{10.8} \]

\[ E := \left( \int_{-\infty}^{\infty} |\mu f(\mu)|^2\, d\mu \right)^{1/2} , \tag{10.9} \]

and ℜμ denotes the real part of μ. If, in addition, εₙ := f(nπ/σ) − f̃(nπ/σ) and f(μ) ∈ B²_σ satisfy the decay conditions

\[ |\varepsilon_n| \le |f(n\pi/\sigma)| , \qquad |f(\mu)| \le \frac{M_f}{|\mu|} , \quad |\mu| \ge 1 ,\ \mu \in \mathbb{R} , \tag{10.10} \]

where M_f is a positive constant, then for 0 < ε ≤ min{σ/π, π/σ, 1/√e} we have the estimate for the amplitude error [3]:

\[ |\mathcal{A}_{\varepsilon}[f](\mu)| \le A(\varepsilon, \mu) , \tag{10.11} \]

where

\[ A(\varepsilon, \mu) := 4 \bigl( \sqrt{3e} + 2 M_f \exp(1/4) \bigr)\, \varepsilon \log(1/\varepsilon)\, \exp(\sigma\,|\Im\mu|) , \quad \mu \in \mathbb{C} . \tag{10.12} \]
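The effect bounded by (10.12) can be sketched by perturbing the samples of a truncated cardinal sum; the perturbations below are artificial random data:

```python
import numpy as np

# Perturbing each sample of a truncated cardinal sum by at most eps changes
# the sum by at most eps * sum of |sinc| values, in the spirit of the
# amplitude-error bound.
rng = np.random.default_rng(0)
sigma, N, mu, eps = 1.0, 200, 0.7, 1e-6
n = np.arange(-N, N + 1)
s = np.sinc(sigma * mu / np.pi - n)             # sinc(sigma*mu - n*pi)
amp = np.sum(rng.uniform(-eps, eps, n.size) * s)
print(abs(amp), eps * np.sum(np.abs(s)))        # |amplitude error| <= eps * sum|s|
```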

The previous forms work fine if all eigenvalues are algebraically simple. If an eigenvalue is double from the algebraic point of view, one needs to consider the termwise derivative sampling theorem

\[ f'(\mu) := \sum_{n=-\infty}^{\infty} f\Bigl(\frac{n\pi}{\sigma}\Bigr)\, \mathrm{sinc}'(\sigma\mu - n\pi) , \quad \mu \in \mathbb{C} . \tag{10.13} \]

It is also proved in [3] that if f(μ) ∈ B²_σ and μf(μ) ∈ L²(ℝ) when restricted to ℝ, then we have

\[ |f'(\mu) - f_N'(\mu)| \le B_{1,N}(\mu) , \quad \mu \in \Omega_{\sigma,N} ,\ N \in \mathbb{N} , \tag{10.14} \]

where

\[ B_{1,N}(\mu) := \frac{2E\,\sigma^{5/2}\,|\cos\sigma\mu|}{\pi^2\sqrt{3}\,\sqrt{N+1}} \left( \frac{1}{\sqrt{N\pi - \sigma\,\Re\mu}} + \frac{1}{\sqrt{N\pi + \sigma\,\Re\mu}} \right) + \frac{2E\,\sigma^{5/2}\,|\sin\sigma\mu|}{3\pi^2\,\sqrt{N+1}} \left( \frac{1}{(N\pi - \sigma\,\Re\mu)^{3/2}} + \frac{1}{(N\pi + \sigma\,\Re\mu)^{3/2}} \right) , \tag{10.15} \]

\[ f_N'(\mu) := \sum_{n=-N}^{N} f\Bigl(\frac{n\pi}{\sigma}\Bigr)\, \mathrm{sinc}'(\sigma\mu - n\pi) , \quad \mu \in \Omega_{\sigma,N} . \tag{10.16} \]

Under conditions (10.10) we also have

\[ \bigl| \mathcal{A}_{\varepsilon}[f'](\mu) \bigr| \le 2\sigma\, A(\varepsilon, \mu) , \tag{10.17} \]

provided that 0 < ε ≤ min{σ/π, π/σ, 1/√e}, cf. [3]. Notice that series (10.16) also enjoys convergence properties similar to those of the WKS series, cf. [3] and the references cited therein.


10.2.2 General Second Order Differential Equations

In the following we introduce briefly the implementation of the sinc-method to compute the eigenvalues of the problem

\[ \ell(y) := -y''(x, \mu) + q(x)\, y(x, \mu) = \mu^2\, y(x, \mu) , \quad 0 \le x \le b , \tag{10.18} \]

\[ U_1(y) := \alpha_{11}\, y(0) + \alpha_{12}\, y'(0) + \beta_{11}\, y(b) + \beta_{12}\, y'(b) = 0 , \tag{10.19} \]

\[ U_2(y) := \alpha_{21}\, y(0) + \alpha_{22}\, y'(0) + \beta_{21}\, y(b) + \beta_{22}\, y'(b) = 0 , \tag{10.20} \]

where q(·) ∈ L¹(0, b), μ ∈ ℂ and (αᵢ₁, αᵢ₂, βᵢ₁, βᵢ₂) ∈ ℂ⁴, i = 1, 2, are linearly independent. To guarantee the existence of a sequence of eigenvalues, it is assumed that the boundary conditions are strongly regular, cf. [63]. Therefore, the problem has a denumerable set of eigenvalues, which are geometrically simple and at most algebraically double. Let y₁(·, μ) and y₂(·, μ) denote the solutions of (10.18) satisfying the following initial conditions

\[ y_j^{(i-1)}(0, \mu) = \delta_{ij} , \quad i, j = 1, 2 , \quad \mu \in \mathbb{C} . \tag{10.21} \]

The solutions yᵢ(x, μ), i = 1, 2, are entire in μ for x ∈ [0, b], and the eigenvalues of the problem (10.18)–(10.20) are the zeros of the characteristic determinant

\[ \Delta(\mu) := \begin{vmatrix} U_1(y_1) & U_1(y_2) \\ U_2(y_1) & U_2(y_2) \end{vmatrix} . \tag{10.22} \]
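As a toy illustration of locating zeros of a characteristic function: for Dirichlet conditions the determinant (10.22) reduces to a single solution value, and choosing q ≡ 0 (a hypothetical choice) makes the exact answer known.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Dirichlet toy problem: -y'' + q y = mu^2 y, y(0) = y(b) = 0, with q = 0 and
# b = 1, so Delta(mu) = y2(b, mu) with y2(0) = 0, y2'(0) = 1, and the smallest
# positive zero of Delta is mu = pi.
b = 1.0
q = lambda x: 0.0

def Delta(mu):
    sol = solve_ivp(lambda x, y: [y[1], (q(x) - mu**2) * y[0]],
                    (0.0, b), [0.0, 1.0], rtol=1e-10, atol=1e-12)
    return sol.y[0, -1]

lo, hi = 2.0, 4.0                  # brackets the zero: Delta(2) > 0 > Delta(4)
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if Delta(lo) * Delta(mid) <= 0:
        hi = mid
    else:
        lo = mid
root = 0.5 * (lo + hi)
print(root)                        # close to pi
```

The sinc-based methods surveyed below replace this brute-force root search by an analytic splitting of Δ plus a sampled approximation of its unknown part.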

The characteristic determinant Δ(μ), which is also an entire function of μ, does not necessarily belong to B²_σ. So the method is based on dividing the characteristic function Δ(μ) into two parts, i.e.

\[ \Delta(\mu) := \mathscr{K}(\mu) + \mathscr{U}(\mu) . \tag{10.23} \]


The first, K(μ), is the known part

\[
\begin{aligned}
\mathscr{K}(\mu) :={}& C_1 + C_2 \cos[b\mu] + C_3\,\mu \sin[b\mu] + (C_4 + C_5 \cos[b\mu]) \sum_{n=0}^{1} T^n[\cos(\mu x)](b) \\
&+ (C_6 + C_5\,\mu \sin[b\mu]) \sum_{n=0}^{1} T^n\Bigl[\frac{\sin(\mu x)}{\mu}\Bigr](b) + C_3\,\mu \sum_{n=0}^{1} \overline{T}\,T^n[\cos(\mu x)](b) \\
&+ C_2 \sum_{n=0}^{1} \overline{T}\,T^n\Bigl[\frac{\sin(\mu x)}{\mu}\Bigr](b) + C_5\,\frac{\sin[b\mu]}{\mu} \sum_{n=0}^{1} \overline{T}\,T^n[\cos(\mu x)](b) \\
&+ C_5\,\overline{T}\Bigl[\frac{\sin[x\mu]}{\mu}\Bigr](b) \sum_{n=0}^{1} T^n[\cos(\mu x)](b) ,
\end{aligned} \tag{10.24}
\]

and the second, U(μ), is the unknown part

\[
\begin{aligned}
\mathscr{U}(\mu) :={}& \Bigl( C_4 + C_5 \cos[b\mu] + C_5\,\overline{T}\Bigl[\frac{\sin(\mu x)}{\mu}\Bigr](b) \Bigr) \sum_{n=2}^{\infty} T^n[\cos(\mu x)](b) + C_2 \sum_{n=1}^{\infty} T^n\Bigl[\frac{\sin(\mu x)}{\mu}\Bigr](b) \\
&+ \Bigl( C_3 + C_5\,\frac{\sin(\mu b)}{\mu} \Bigr) \sum_{n=2}^{\infty} \overline{T}\,T^n[\cos(\mu x)](b) + (C_6 + C_5\,\mu \sin[b\mu]) \sum_{n=2}^{\infty} T^n\Bigl[\frac{\sin(\mu x)}{\mu}\Bigr](b) \\
&+ C_5 \sum_{n=0}^{\infty} \overline{T}\,T^n[\cos(\mu x)](b) \sum_{n=1}^{\infty} T^n\Bigl[\frac{\sin(\mu x)}{\mu}\Bigr](b) + C_5 \sum_{n=0}^{\infty} T^n[\cos(\mu x)](b) \sum_{n=1}^{\infty} \overline{T}\,T^n\Bigl[\frac{\sin(\mu x)}{\mu}\Bigr](b) ,
\end{aligned} \tag{10.25}
\]

where, see [3] for details,

\[
\begin{aligned}
&C_1 := \alpha_{11}\alpha_{22} - \alpha_{12}\alpha_{21} , \quad C_2 := \alpha_{11}\beta_{22} - \alpha_{21}\beta_{12} , \quad C_3 := \alpha_{12}\beta_{22} - \alpha_{22}\beta_{12} , \\
&C_4 := \alpha_{22}\beta_{11} - \alpha_{12}\beta_{21} , \quad C_5 := \beta_{22}\beta_{11} - \beta_{12}\beta_{21} , \quad C_6 := \alpha_{11}\beta_{21} - \alpha_{21}\beta_{11} ,
\end{aligned} \tag{10.26}
\]

and T, T̄ are the Volterra operators defined by

\[ (T y)(x, \mu) := \int_0^x \frac{\sin \mu(x-t)}{\mu}\, q(t)\, y(t, \mu)\, dt , \tag{10.27} \]

\[ (\overline{T} y)(x, \mu) := \int_0^x \cos(\mu(x-t))\, q(t)\, y(t, \mu)\, dt . \tag{10.28} \]

Then we approximate U(μ) using the expansion (10.1), after proving that U(μ) lies in an appropriate Paley-Wiener space. In [3], Annaby and Asharabi proved that U(μ) lies in the Paley-Wiener space B²₂ᵦ, with σ = 2b. Then

\[ \mathscr{U}(\mu) = \sum_{n=-\infty}^{\infty} \mathscr{U}\Bigl(\frac{n\pi}{2b}\Bigr)\, \mathrm{sinc}(2b\mu - n\pi) , \quad \mu \in \mathbb{C} . \tag{10.29} \]

 and (10.21)

 to obtain

 nπ − K nπ the approximate values U (nπ/2b), |n| ≤ N , i.e. U N nπ = Δ 2b 2b 2b and therefore we define U (μ) to be U N (μ) :=

N  n=−N

nπ sinc(2bμ − nπ ). U 2b

(10.30)

It is also proved that U (μ) satisfies the decay behavior, see [3], |U (μ)| ≤ MU |μ|−1 ,

MU > 0,

|μ| ≥ 1,

μ ∈ R.

(10.31)

N (μ) := K (μ) + U N (μ), we obtain, cf. [3], Letting Δ N (μ)| ≤ BN (μ) + A(ε, μ) |Δ(μ) − Δ

μ ∈ Ωσ,N ,

 N (μ)| ≤ B1,N (μ) + 4bA(ε, μ) |Δ (μ) − Δ

μ ∈ Ωσ,N .

(10.32) (10.33)

Let (μ∗ )2 be an eigenvalue and (μN )2 be its desired approximation, i.e. Δ(μ∗ ) = 0 N (μN ) = 0. From (10.32) we have |Δ N (μ∗ )| ≤ BN (μ∗ ) + A(ε, μ∗ ). Let ΠN and Δ be the region     N (μ) ≤ BN (μ) + A(ε, μ) . μ∗ ∈ ΠN := μ ∈ Ωσ,N : Δ

(10.34)

From ΠN we determine an enclosure annulus where μN lies in it. We solve the N (μ) = ±(BN (μ) + A(ε, μ)). Let AN be the set of all these solutions. equation Δ ∗ Then μ ∈ AN . Let μ0 , μ1 ∈ Ωσ,N be μ0 := inf{|μ| : μ ∈ AN },

μ1 := sup{|μ| : μ ∈ AN }.

(10.35)

The enclosure annulus where μN lies in is AN := {μ ∈ Ωσ,N : μ0 ≤ μ ≤ μ1 }. If μ∗ ∈ R, the enclosure annulus becomes an interval IN . The computation of the

10 Sinc-Computation of Eigenvalues

263

error bounds of |μ∗ − μN | depends on whether μ∗ is simple or double. Then we have the following two theorems, see [3]. Theorem 10.1 Let (μ∗ )2 be a simple eigenvalue of (10.18)–(10.20). Then for all  > 0, there exists N0 ∈ N such that for any positive integer N , N ≥ N0 , there exists μN such that |μ∗ − μN | < . Furthermore, we have the following estimate |μ∗ − μN |
0, there exists N0 ∈ N such that for any positive integer N , N ≥ N0 , there exists μN such that |μ∗ − μN | < . Furthermore, we have the following estimate |μ∗ − μN |
1, cf. [3].

10 Sinc-Computation of Eigenvalues

265

10.2.3 λ-Type Problems In this subsection we consider another type of problems treated by the use of the classical sinc-method. As the reader got the spirit of the method from the previous subsection, we are not filling the details. An interested reader may refer to the references cited here. By a λ-type problem, we mean a problem of the form My = λNy,

(10.45)

where M, N are differential operators and the eigenvalue parameter may appear in the boundary conditions. In most cases treated N = I , where I is the identity operator. The first treatment in this direction was given by Boumenir in [30]. He applied the sinc-method to compute the eigenvalues of the problem which consists of differential equation (10.18) and boundary conditions y  (0, μ) − dy(0, μ) = 0,

(10.46)

 a1 y(b, μ) − a2 y  (b, μ) = μ2 b1 y(b, μ) − b2 y  (b, μ) ,

(10.47)

where d ∈ R and ai , bi , i = 1, 2 are real numbers satisfy a1 b2 − a2 b1 > 0. In [30], the potential of (10.18) is assumed to be real-valued. The spectral analysis of this problem is treated in [47]. The existence of a sequence of real simple eigenvalues is guaranteed. Only truncation error is considered, while also amplitude error arises. Annaby and Tharwat, [9], used the classical sinc method to compute the eigenvalues of a second-order operator pencil generated by the problem − y  (x, λ) + q(x)y(x, λ) = λ(2iy  (x, λ) + y(x, λ)),

0 ≤ x ≤ 1,

(10.48)

cos θ1 y(0, λ) − sin θ1 (y  (0, λ) + iλy(0, λ)) = 0,

(10.49)

cos θ2 y(1, λ) + sin θ2 (y  (1, λ) + iλy(1, λ)) = 0,

(10.50)

where q ∈ L1 (0, 1) and θ1 , θ2 ∈ [0, π2 ]. The error analysis is established considering both truncation and amplitude errors associated with the sampling theorem. In [15], Annaby and Tharwat implemented the sinc method to compute the eigenvalues of a second order boundary value problem with mixed type boundary conditions − y  (x, μ) + q(x)y(x, μ) = μ2 y(x, μ),

0 ≤ x ≤ 1,

(10.51)

y(0, μ) − μ2 [ay  (0, μ) − by  (1, μ)] = 0,

(10.52)

¯  (0, μ) − cy  (1, μ)] = 0, y(1, μ) − μ2 [by

(10.53)

266

M. H. Annaby et al.

where q(·) is real-valued function, q(·) ∈ L1 (0, 1), a, c ∈ R, b ∈ C satisfy ac − bb¯ > 0. In this work, two types of error appear, namely truncation and amplitude errors. This problem, introduced in [55], is defined in L2 (0, 1)⊕C2 . It is proved that it is self–adjoint and consequently it has a denumerable set of real and algebraically simple eigenvalues, see also [56].

10.2.4 Discontinuous Problems The authors of [1] used the classical sinc method to compute eigenvalues of discontinuous second order boundary value problem which consists of the discontinuous differential equation −y1 (x, μ) + q1 (x)y1 (x, μ) = μ2 y1 (x, μ), −y2 (x, μ) + q2 (x)y2 (x, μ)

=

μ2 y2 (x, μ),

⎫ −1 ≤ x < 0, ⎬ 0 < x ≤ 1,



(10.54)

and the boundary and compatibility conditions Uj (y) := Uj,−1 (y1 ) + Uj,1 (y2 ) = 0,

j = 1, 2,

(10.55)

Vj (y) := Vj,0− (y1 ) + Vj,0+ (y2 ) = 0,

j = 1, 2,

(10.56)

respectively. Here Uj,−1 (y1 ) := αj 1 y1 (−1) + αj 2 y1 (−1), Uj,1 (y2 ) := βj 1 y2 (1) + βj 2 y2 (1), Vj,0− (y1 ) := γj 1 y1 (0− ) + γj 2 y1 (0− ), Vj,0+ (y2 ) := δj 1 y2 (0+ ) + δj 2 y2 (0+ ). (10.57) We assume that q1 (·) ∈ L1 (−1, 0), q2 (·) ∈ L1 (0, 1) and αij , βij , γij , δij ∈ C, i, j = 1, 2. Moreover the numbers αij , βij , γij and δij satisfy |αij | + |βij | > 0,

|γij | + |δij | > 0,

1 ≤ i, j ≤ 2.

(10.58)

The eigenvalues are in general complex numbers and not necessarily simple. Therefore error estimates for the truncation and amplitude errors on C of the sampling expansion and its termwise derivative are used. Moreover the problem under question is defined in two ways throughout [−1, 1] and is not in general self adjoint. Like the previous problem, the boundary and compatibility conditions are assumed to be regular in the sense of Birkhoff to guarantee the existence of eigenvalues.

10 Sinc-Computation of Eigenvalues

267

10.3 Regularized-Sinc Method Let f ∈ Ba∞ . Then f is an entire function of exponential type a but it is not necessarily in L2 (R). In this case, we can not expansion f as the classical sampling theorem of Whittaker-Kotel’nikov-Shannon. So, we multiply the function f by an appropriate regularized kernel. For any fixed ζ ∈ C, h ∈ (0, π/σ ) and m ∈ Z+ , the function sincm (h−1 (ζ − z))f ∈ Bσ2 with σ := a + mh−1 . Applying the classical sampling for this function and substituting ζ = z, we obtain, see e.g. [67], f (z) =

∞ 

f (nh) sincm (h−1 z − nπ )sinc(π h−1 z − nπ ).

n=−∞

This type of sampling is called regularized sampling. The regularized-sinc method is a method based on the WKS sampling theorem acquainted with a regularization kernel. It is applied either to accelerate the technique, improve the error, or when the required approximated function does not lie in an appropriate Paley-Weiner space. It also removes the difficulty of the need to compute integrations that cannot be computed explicitly and it keeps the number of terms in the cardinal series moderate. It has been proved that the regularized technique is capable of delivering higher order estimates of the eigenvalues at a very low cost, see e.g. [10, 39]. The problems discussed here are general second order problems, λ-type problems, Dirac systems, singular problems and discontinuous problems. Emphasis will be given to the computation of the eigenvalues of the Dirac systems.

10.3.1 Dirac Systems

In [12], the authors used the regularized-sinc method to compute the eigenvalues of Dirac systems

$$y_2'(x, \lambda) - q_1(x)\, y_1(x, \lambda) = \lambda\, y_1(x, \lambda), \qquad y_1'(x, \lambda) + q_2(x)\, y_2(x, \lambda) = -\lambda\, y_2(x, \lambda), \tag{10.59}$$

$$\sin\alpha\, y_1(0, \lambda) + \cos\alpha\, y_2(0, \lambda) = 0, \tag{10.60}$$

$$\sin\beta\, y_1(1, \lambda) + \cos\beta\, y_2(1, \lambda) = 0, \tag{10.61}$$

where x ∈ [0, 1], α, β ∈ [0, π) and q1, q2 ∈ L1(0, 1). The operator considered here is the massless Dirac operator. The regularized sinc helps by inserting a regularization function into the cardinal series, strengthening the existing technique and avoiding the aliasing error. The spectral analysis of problem (10.59)–(10.61) was investigated in [58, 59]. Among the celebrated spectral properties, problem (10.59)–(10.61) has a countable set of real and simple eigenvalues with ±∞ as the only limit points.

M. H. Annaby et al.

Let y(·, λ) = (y1(·, λ), y2(·, λ)) be the fundamental solution of the system (10.59), determined by the initial conditions

$$y_1(0, \lambda) = \cos\alpha, \qquad y_2(0, \lambda) = -\sin\alpha. \tag{10.62}$$

Obviously y(·, λ) satisfies the boundary condition (10.60). Therefore, the eigenvalues of problem (10.59)–(10.61) are exactly the zeros of the function

$$\Delta(\lambda) = \sin\beta\, y_1(1, \lambda) + \cos\beta\, y_2(1, \lambda). \tag{10.63}$$

Notice that both y(·, λ) and Δ(λ) are entire functions of λ, cf. [59]. As in the previous section, the basic idea in using the regularized-sinc technique to compute the eigenvalues is to split Δ(λ) into two parts, a known one and an unknown one, and then to prove that the latter belongs to an appropriate Paley–Wiener space after multiplication by a suitable regularization factor. Thus Δ(λ) := K(λ) + U(λ), where the known part is

$$\mathcal{K}(\lambda) := \sin\beta \cos(\alpha - \lambda) - \cos\beta \sin(\alpha - \lambda) \tag{10.64}$$

and U(λ) is the unknown one,

$$\mathcal{U}(\lambda) := \sin\beta\, v_1(1, \lambda) + \cos\beta\, v_2(1, \lambda), \tag{10.65}$$

with

$$v_1(x, \lambda) := -T_1 y_1(x, \lambda) - \overline{T}_2 y_2(x, \lambda), \qquad v_2(x, \lambda) := \overline{T}_1 y_1(x, \lambda) - T_2 y_2(x, \lambda), \tag{10.66}$$

and the operators $T_i$, $\overline{T}_i$, $i = 1, 2$, are Volterra-type operators defined by

$$T_i y(x, \lambda) := \int_0^x \sin\lambda(x - t)\, q_i(t)\, y(t, \lambda)\, dt, \qquad \overline{T}_i y(x, \lambda) := \int_0^x \cos\lambda(x - t)\, q_i(t)\, y(t, \lambda)\, dt, \quad i = 1, 2.$$
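The eigenvalue condition Δ(λ) = 0 can be checked numerically by shooting: integrate (10.59) from the initial data (10.62) and locate the sign changes of Δ. The sketch below is our own illustration, not the algorithm of [12]; it uses a plain Runge–Kutta integrator and bisection, and it tests the unperturbed case q1 = q2 = 0, where (10.64) gives K(λ) = sin(λ − α + β) and hence the exact zeros λ_k = α − β + kπ.

```python
import math

def char_delta(lam, q1, q2, alpha, beta, steps=200):
    """Evaluate Delta(lam) of (10.63) by shooting: RK4-integrate the Dirac
    system (10.59) on [0, 1] from the initial values (10.62)."""
    def rhs(x, y1, y2):
        # y1' = -(lam + q2) y2,  y2' = (lam + q1) y1, from (10.59)
        return (-(lam + q2(x)) * y2, (lam + q1(x)) * y1)
    y1, y2 = math.cos(alpha), -math.sin(alpha)
    h = 1.0 / steps
    for i in range(steps):
        x = i * h
        k1 = rhs(x, y1, y2)
        k2 = rhs(x + h/2, y1 + h/2 * k1[0], y2 + h/2 * k1[1])
        k3 = rhs(x + h/2, y1 + h/2 * k2[0], y2 + h/2 * k2[1])
        k4 = rhs(x + h, y1 + h * k3[0], y2 + h * k3[1])
        y1 += h/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        y2 += h/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
    return math.sin(beta) * y1 + math.cos(beta) * y2

def eigenvalues(q1, q2, alpha, beta, lo=-1.0, hi=10.0, grid=120):
    """Bracket sign changes of Delta on a grid, then refine by bisection."""
    pts = [lo + (hi - lo) * i / grid for i in range(grid + 1)]
    vals = [char_delta(p, q1, q2, alpha, beta) for p in pts]
    roots = []
    for a, b, fa, fb in zip(pts, pts[1:], vals, vals[1:]):
        if fa * fb < 0:
            for _ in range(50):
                c = 0.5 * (a + b)
                fc = char_delta(c, q1, q2, alpha, beta)
                if fa * fc <= 0:
                    b = c
                else:
                    a, fa = c, fc
            roots.append(0.5 * (a + b))
    return roots

zero = lambda x: 0.0
roots = eigenvalues(zero, zero, alpha=0.0, beta=math.pi / 4)
print(roots)   # for q1 = q2 = 0 the zeros are k*pi - pi/4
```

This brute-force evaluation of Δ is exactly what the sampling methods below avoid: they reconstruct the unknown part of Δ from a moderate number of such shooting evaluations.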

The function U(λ) is entire in λ for each x ∈ [0, 1] and satisfies the growth condition, cf. [12],

$$|\mathcal{U}(\lambda)| \le 4 c_2\, e^{|\lambda| x}, \qquad \lambda \in \mathbb{C}, \tag{10.67}$$

where

$$c_2 = c_1 \exp(c_1), \qquad c_1 = \int_0^1 \big[\, |q_1(t)| + |q_2(t)| \,\big]\, dt.$$

Although U(λ) is entire and of exponential type, as estimate (10.67) indicates, it is not necessarily square summable, so U(λ) does not necessarily lie in a Paley–Wiener space. To overcome this problem, we multiply U(λ) by a regularization function. Indeed, fix θ ∈ (0, 1) and m ∈ Z+, m ≥ 1, and let

$$G_{\theta,m}(\lambda) := \left( \frac{\sin\theta\lambda}{\theta\lambda} \right)^{m} \mathcal{U}(\lambda), \qquad \lambda \in \mathbb{C}. \tag{10.68}$$

More conditions on m, θ will be discussed later on. Then Gθ,m(λ) belongs to the Paley–Wiener space Bσ² with band-width σ = 1 + mθ, and λ^{m−1} Gθ,m(λ) ∈ L²(R) for m ≥ 1, cf. [12]. Hence Gθ,m(λ) can be recovered from its values at the points λn = nπ/σ, n ∈ Z, via the sampling expansion

$$G_{\theta,m}(\lambda) := \sum_{n=-\infty}^{\infty} G_{\theta,m}\!\left( \frac{n\pi}{\sigma} \right) \mathrm{sinc}(\sigma\lambda - n\pi). \tag{10.69}$$

Let N ∈ Z+, N > m, and approximate Gθ,m(λ) by its truncated series

$$G_{\theta,m,N}(\lambda) := \sum_{n=-N}^{N} G_{\theta,m}\!\left( \frac{n\pi}{\sigma} \right) \mathrm{sinc}(\sigma\lambda - n\pi). \tag{10.70}$$

Since λ^{m−1} Gθ,m(λ) ∈ L²(R), the truncation error is bounded for |λ| < Nπ/σ by

$$\big| G_{\theta,m}(\lambda) - G_{\theta,m,N}(\lambda) \big| \le T_{N,m}(\lambda). \tag{10.71}$$

Here T_{N,m}(λ) is given by

$$T_{N,m}(\lambda) = \frac{|\sin\sigma\lambda|\, E_{m-1}(G_{\theta,m})}{\sqrt{1 - 4^{-m+1}}\; \pi (\pi/\sigma)^{m-1} (N+1)^{m-1}} \left[ \frac{1}{\sqrt{N\pi/\sigma - \lambda}} + \frac{1}{\sqrt{N\pi/\sigma + \lambda}} \right], \tag{10.72}$$

where

$$E_{m-1}(G_{\theta,m}) = \left( \int_{-\infty}^{\infty} \big| \lambda^{m-1} G_{\theta,m}(\lambda) \big|^2\, d\lambda \right)^{1/2}. \tag{10.73}$$
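To see the truncated cardinal series (10.70) behave as (10.71)–(10.72) predicts, the short Python sketch below (an illustration of ours, not code from [12]) samples the Paley–Wiener function g(λ) = (sin λ/λ)³ ∈ B₃² at the nodes nπ/σ with σ = 3 and measures the truncation error at an off-node point for increasing N.

```python
import numpy as np

def sinc(x):
    # sin(x)/x with the singularity removed (np.sinc is sin(pi x)/(pi x))
    return np.sinc(x / np.pi)

def truncated_series(g, lam, sigma, N):
    # G_N(lambda) = sum_{n=-N}^{N} g(n pi / sigma) sinc(sigma lambda - n pi)
    n = np.arange(-N, N + 1)
    return float(np.sum(g(n * np.pi / sigma) * sinc(sigma * lam - n * np.pi)))

g = lambda lam: sinc(lam) ** 3        # lies in the Paley-Wiener space B_3^2
lam, sigma = 0.4, 3.0
errs = [abs(g(lam) - truncated_series(g, lam, sigma, N)) for N in (5, 20, 80)]
print(errs)                            # truncation errors shrink as N grows
```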


Let $\widetilde{G}_{\theta,m,N}(\lambda)$ approximate $G_{\theta,m,N}(\lambda)$ via the approximate samples,

$$\widetilde{G}_{\theta,m,N}(\lambda) := \sum_{n=-N}^{N} \widetilde{G}_{\theta,m}\!\left( \frac{n\pi}{\sigma} \right) \mathrm{sinc}(\sigma\lambda - n\pi), \qquad N > m. \tag{10.74}$$

Since Gθ,m(λ) ∈ Bσ² satisfies the condition (10.10), then

$$\big| G_{\theta,m,N}(\lambda) - \widetilde{G}_{\theta,m,N}(\lambda) \big| \le \mathcal{A}(\varepsilon), \tag{10.75}$$

where

$$\mathcal{A}(\varepsilon) := 4 \left( \sqrt{3e} + 2 M_{G_{\theta,m}} \exp(1/4) \right) \sqrt{\varepsilon \log(1/\varepsilon)}.$$

Since A(ε) → 0 as ε → 0, it is guaranteed that $\widetilde{G}_{\theta,m,N}(\lambda)$ converges uniformly and absolutely on R and on compact subsets of C. Let

$$\widetilde{\Delta}_N(\lambda) := \mathcal{K}(\lambda) + \left( \frac{\sin\theta\lambda}{\theta\lambda} \right)^{-m} \widetilde{G}_{\theta,m,N}(\lambda).$$

Then (10.71) and (10.75) imply

$$\big| \Delta(\lambda) - \widetilde{\Delta}_N(\lambda) \big| \le \left| \left( \frac{\sin\theta\lambda}{\theta\lambda} \right)^{-m} \right| \big( T_{N,m}(\lambda) + \mathcal{A}(\varepsilon) \big), \qquad |\lambda| < \frac{N\pi}{\sigma}.$$

10.3.2 λ-Type Problems

$$\big( (\beta_1, \beta_2) \ne (0, 0) \ \text{or} \ \beta_1' \beta_2 - \beta_1 \beta_2' > 0 \big). \tag{10.86}$$

The eigenvalue problem (10.83)–(10.85) will be denoted by Π(q, α, β, α′, β′) when (α1, α2) ≠ (0, 0) ≠ (β1, β2). It is a Dirac system when the eigenparameter λ appears linearly in both boundary conditions. Annaby and Tharwat, in [10], used the regularized-sinc method to compute approximate values of the eigenvalues of the problems Π(q, α, β, α′, β′) and Π(q, α, β, 0, β′). The error analysis is established considering both the truncation and amplitude errors associated with the sampling theorem. In [57], Kerimov proved that Π(q, α, β, α′, β′) has a denumerable set of real and simple eigenvalues with ±∞ as the limit points. Similar results are established in [11] for the problem when the eigenparameter appears in one condition, i.e. when α1 = α2 = 0 and (β1, β2) ≠ (0, 0), or equivalently when (α1, α2) ≠ (0, 0) and β1 = β2 = 0.

The regularized-sinc method is also implemented in [16] to compute the eigenvalues of problem (10.51)–(10.53). Here, the authors derived a rigorous error analysis that takes into account both truncation and perturbation errors. In [39], the author implemented a sinc-regularized method to compute the eigenvalues of the problem that consists of Eq. (10.18) and the boundary conditions

$$\alpha_{11}(\mu)\, y(0, \mu) + \alpha_{12}(\mu)\, y'(0, \mu) = 0, \tag{10.87}$$

$$\alpha_{21}(\mu)\, y(b, \mu) + \alpha_{22}(\mu)\, y'(b, \mu) = 0, \tag{10.88}$$

where αij(·), i, j = 1, 2, are entire functions satisfying the growth conditions

$$|\alpha_{ij}(\mu)| \le c_{ij} (1 + |\mu|)^{m_0}\, e^{L_{ij} |\mu|}, \qquad i, j = 1, 2, \quad m_0 \in \mathbb{N}, \tag{10.89}$$

and $\alpha_{11}^2(\mu) + \alpha_{12}^2(\mu) \ne 0$, $\alpha_{21}^2(\mu) + \alpha_{22}^2(\mu) \ne 0$. However, no spectral analysis guarantees the existence of eigenvalues, and the error analysis neglects the amplitude error. More remarks can be found in [15] and the references cited therein. Similar arguments hold for the work of [40].

10.3.3 Discontinuous Problems

In [75], the authors applied a regularized-sinc method to compute the eigenvalues of discontinuous regular Dirac systems with transmission conditions at the point of discontinuity,

$$\begin{pmatrix} u_2'(x, \lambda) - p_1(x)\, u_1(x, \lambda) \\ u_1'(x, \lambda) + p_2(x)\, u_2(x, \lambda) \end{pmatrix} = \lambda \begin{pmatrix} u_1(x, \lambda) \\ -u_2(x, \lambda) \end{pmatrix}, \qquad x \in [-1, 0) \cup (0, 1], \tag{10.90}$$

with boundary conditions

$$\sin\alpha\, u_1(-1, \lambda) + \cos\alpha\, u_2(-1, \lambda) = 0, \tag{10.91}$$

$$\sin\beta\, u_1(1, \lambda) + \cos\beta\, u_2(1, \lambda) = 0, \tag{10.92}$$

and transmission conditions

$$u_1(0^-, \lambda) - \delta\, u_1(0^+, \lambda) = 0, \tag{10.93}$$

$$u_2(0^-, \lambda) - \delta\, u_2(0^+, \lambda) = 0, \tag{10.94}$$

where λ ∈ C; the real-valued functions p1(·) and p2(·) are continuous in [−1, 0) and (0, 1] and have finite limits

$$p_1(0^{\pm}) := \lim_{x \to 0^{\pm}} p_1(x), \qquad p_2(0^{\pm}) := \lim_{x \to 0^{\pm}} p_2(x);$$

δ ∈ R, δ ≠ 0, and α, β ∈ [0, π). The error analysis is established considering both the truncation and amplitude errors associated with the sampling theorem. In [78], the authors computed the eigenvalues of a Sturm–Liouville problem which contains an eigenparameter appearing linearly in two boundary conditions, in addition to an internal point of discontinuity,

$$- r(x)\, y''(x, \mu) + q(x)\, y(x, \mu) = \mu^2 y(x, \mu), \qquad x \in [-1, 0) \cup (0, 1], \tag{10.95}$$

with boundary conditions

$$(\alpha_1 \mu^2 - \alpha_1')\, y(-1, \mu) - (\alpha_2 \mu^2 - \alpha_2')\, y'(-1, \mu) = 0, \tag{10.96}$$

$$(\beta_1 \mu^2 + \beta_1')\, y(1, \mu) - (\beta_2 \mu^2 + \beta_2')\, y'(1, \mu) = 0, \tag{10.97}$$

and transmission conditions

$$\gamma_1\, y(0^-, \mu) - \delta_1\, y(0^+, \mu) = 0, \tag{10.98}$$

$$\gamma_2\, y'(0^-, \mu) - \delta_2\, y'(0^+, \mu) = 0, \tag{10.99}$$

where μ is a complex spectral parameter; r(x) = r₁² for x ∈ [−1, 0) and r(x) = r₂² for x ∈ (0, 1], with given real numbers r1 > 0 and r2 > 0; q(x) is a given real-valued function which is continuous in [−1, 0) and (0, 1] and has finite limits q(0^±) = lim_{x→0^±} q(x); γi, δi, αi, βi, αi′, βi′ (i = 1, 2) are real numbers with γi ≠ 0, δi ≠ 0 (i = 1, 2), γ1γ2 = δ1δ2, and

$$\det\begin{pmatrix} \alpha_1 & \alpha_1' \\ \alpha_2 & \alpha_2' \end{pmatrix} > 0, \qquad \det\begin{pmatrix} \beta_1 & \beta_1' \\ \beta_2 & \beta_2' \end{pmatrix} > 0.$$

The error analysis is established considering both the truncation and amplitude errors associated with the sampling theorem. Other works in this direction are [42] and [43].

10.4 Hermite Method

In this section we use the derivative sampling theorem ("Hermite interpolation") to compute approximations of the eigenvalues of eigenvalue problems, using the estimates for the truncation and amplitude errors to compute error bounds. The problems discussed here are general second-order problems, λ-type problems, Dirac systems and discontinuous problems.


10.4.1 Hermite-Type Sampling

Let f(t) ∈ B_σ² ⊂ B_{2σ}². The space Bσ² is defined in Sect. 10.2.1. Then f(t) can be reconstructed via the Hermite-type sampling series

$$f(t) = \sum_{n=-\infty}^{\infty} \left[ f\!\left( \frac{n\pi}{\sigma} \right) S_n^2(t) + f'\!\left( \frac{n\pi}{\sigma} \right) \frac{\sin(\sigma t - n\pi)}{\sigma}\, S_n(t) \right], \tag{10.100}$$

where S_n(t) is the sequence of sinc functions

$$S_n(t) := \begin{cases} \dfrac{\sin(\sigma t - n\pi)}{\sigma t - n\pi}, & t \ne \dfrac{n\pi}{\sigma}, \\[2mm] 1, & t = \dfrac{n\pi}{\sigma}. \end{cases} \tag{10.101}$$
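A direct numerical check of (10.100)–(10.101) is easy to run. The Python sketch below (our own illustration) truncates the series for f(t) = sin t / t ∈ B₁², whose samples f(nπ) and derivative samples f′(nπ) are available in closed form, and compares the result with the exact value at an off-node point.

```python
import math

sigma = 1.0   # f below lies in the Paley-Wiener space B_sigma^2

def f(t):
    return 1.0 if t == 0 else math.sin(t) / t

def fprime(t):
    # derivative of sin(t)/t, with f'(0) = 0
    return 0.0 if t == 0 else (t * math.cos(t) - math.sin(t)) / t**2

def S(n, t):
    # the sinc functions S_n of (10.101)
    x = sigma * t - n * math.pi
    return 1.0 if x == 0 else math.sin(x) / x

def hermite_sampling(t, N=30):
    # truncated derivative (Hermite-type) sampling series (10.100)
    total = 0.0
    for n in range(-N, N + 1):
        node = n * math.pi / sigma
        total += f(node) * S(n, t) ** 2 \
               + fprime(node) * math.sin(sigma * t - n * math.pi) / sigma * S(n, t)
    return total

t = 0.7
print(abs(hermite_sampling(t) - f(t)))   # small truncation error
```

At the sampling nodes themselves the series interpolates both f and f′ exactly, which is what makes it a Hermite-type interpolation.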

Series (10.100) converges absolutely and uniformly on R, cf. [48, 50, 51, 53]; it is sometimes called the derivative sampling theorem. Our task is to use formula (10.100) to compute the eigenvalues of boundary value problems numerically. This approach is a fully new technique that uses the estimates for the truncation and amplitude errors associated with (10.100), which were obtained in [4]. Both types of errors normally appear in numerical techniques that use interpolation procedures. In the following we summarize these estimates. The truncation error associated with (10.100) is defined to be

$$R_N(f)(t) := f(t) - f_N(t), \qquad N \in \mathbb{Z}^+, \ t \in \mathbb{R}, \tag{10.102}$$

where f_N(t) is the truncated series

$$f_N(t) = \sum_{|n| \le N} \left[ f\!\left( \frac{n\pi}{\sigma} \right) S_n^2(t) + f'\!\left( \frac{n\pi}{\sigma} \right) \frac{\sin(\sigma t - n\pi)}{\sigma}\, S_n(t) \right]. \tag{10.103}$$

It is proved in [4] that if f(t) ∈ Bσ² and f(t) is sufficiently smooth, in the sense that there exists k ∈ Z+ such that t^k f(t) ∈ L²(R), then, for t ∈ R, |t| < Nπ/σ, we have

$$|R_N(f)(t)| \le T_{N,k,\sigma}(t) := \frac{\xi_{k,\sigma} E_k |\sin\sigma t|^2}{\sqrt{3}\,(N+1)^k} \left[ \frac{1}{(N\pi - \sigma t)^{3/2}} + \frac{1}{(N\pi + \sigma t)^{3/2}} \right] + \frac{\xi_{k,\sigma} (\sigma E_k + k E_{k-1}) |\sin\sigma t|^2}{\sigma (N+1)^k} \left[ \frac{1}{\sqrt{N\pi - \sigma t}} + \frac{1}{\sqrt{N\pi + \sigma t}} \right], \tag{10.104}$$

where the constants E_k and ξ_{k,σ} are given by

$$E_k := \left( \int_{-\infty}^{\infty} \big| t^k f(t) \big|^2\, dt \right)^{1/2}, \qquad \xi_{k,\sigma} := \frac{\sigma^{k+1/2}}{\pi^{k+1} \sqrt{1 - 4^{-k}}}. \tag{10.105}$$

The amplitude error occurs when approximate samples are used instead of the exact ones, which we cannot compute. It is defined to be

$$\mathcal{A}(\varepsilon, f)(t) = \sum_{n=-\infty}^{\infty} \left[ \left( \widetilde{f}\!\left( \frac{n\pi}{\sigma} \right) - f\!\left( \frac{n\pi}{\sigma} \right) \right) S_n^2(t) + \left( \widetilde{f'}\!\left( \frac{n\pi}{\sigma} \right) - f'\!\left( \frac{n\pi}{\sigma} \right) \right) \frac{\sin(\sigma t - n\pi)}{\sigma}\, S_n(t) \right], \quad t \in \mathbb{R}, \tag{10.106}$$

where $\widetilde{f}(n\pi/\sigma)$ and $\widetilde{f'}(n\pi/\sigma)$ are approximate samples of $f(n\pi/\sigma)$ and $f'(n\pi/\sigma)$, respectively. Let us assume that the differences $\varepsilon_n := \widetilde{f}(n\pi/\sigma) - f(n\pi/\sigma)$ and $\varepsilon_n' := \widetilde{f'}(n\pi/\sigma) - f'(n\pi/\sigma)$, $n \in \mathbb{Z}$, are bounded by a positive number ε, i.e. $|\varepsilon_n|, |\varepsilon_n'| \le \varepsilon$. If f(t) ∈ Bσ² satisfies the natural decay conditions

$$|\varepsilon_n| \le \left| f\!\left( \frac{n\pi}{\sigma} \right) \right|, \qquad |\varepsilon_n'| \le \left| f'\!\left( \frac{n\pi}{\sigma} \right) \right|, \tag{10.107}$$

$$|f(t)| \le \frac{M_f}{|t|^{\upsilon+1}}, \qquad t \in \mathbb{R} \setminus \{0\}, \tag{10.108}$$

where $M_f$ is a positive constant and 0 < υ ≤ 1, then for $0 < \varepsilon \le \min\{\pi/\sigma, \sigma/\pi, 1/\sqrt{e}\}$ we have, [4],

$$\|\mathcal{A}(\varepsilon, f)\|_\infty \le \frac{4 e^{1/4}}{\sigma(\upsilon+1)} \Big( \sqrt{3e}\,(1+\sigma) + (A + M_f)\rho(\varepsilon) + \sigma + 2 + (\rho(\varepsilon) + \log 2) M_f \Big) \sqrt{\varepsilon \log(1/\varepsilon)}, \tag{10.109}$$

where

$$A := 3\left( |f(0)| + \left( \frac{\sigma}{\pi} \right)^{\upsilon} M_f \right), \qquad \rho(\varepsilon) := \gamma + 10 \log(1/\varepsilon), \tag{10.110}$$

and $\gamma := \lim_{n \to \infty} \left( \sum_{k=1}^{n} \frac{1}{k} - \log n \right) \cong 0.577216$ is the Euler–Mascheroni constant.


10.4.2 Computations of Eigenvalues with Hermite-Type Interpolation

Consider the Sturm–Liouville problem

$$- y''(x, \mu) + q(x)\, y(x, \mu) = \mu^2 y(x, \mu), \qquad x \in [0, b], \ \mu \in \mathbb{C}, \tag{10.111}$$

$$a_{11}\, y(0, \mu) + a_{12}\, y'(0, \mu) = 0, \tag{10.112}$$

$$a_{21}\, y(b, \mu) + a_{22}\, y'(b, \mu) = 0, \tag{10.113}$$

where $a_{ij}$, $1 \le i, j \le 2$, are real numbers such that $|a_{k1}| + |a_{k2}| \ne 0$, $k = 1, 2$, and $q \in L^1[0, b]$ is a real-valued function. In [6], Annaby and Asharabi introduced a new method to compute the eigenvalues of problem (10.111)–(10.113) by the use of Hermite interpolation at equidistant nodes. Here, the authors gave estimates for the error by considering both the truncation and amplitude errors. Problem (10.111)–(10.113) has a countable set of real and simple eigenvalues with ∞ as the only possible limit point, [44, 58]. Let y(x, μ) be the solution of (10.111) determined by the initial conditions

$$y(0, \mu) := a_{12}, \qquad y'(0, \mu) := -a_{11}. \tag{10.114}$$

The eigenvalues are the squares of the zeros of the characteristic function

$$\Delta(\mu) := a_{21}\, y(b, \mu) + a_{22}\, y'(b, \mu). \tag{10.115}$$

Now let us divide Δ(μ) into two functions, a known one K_k(μ) and an unknown one U_k(μ), i.e.

$$\Delta(\mu) := \mathcal{K}_k(\mu) + \mathcal{U}_k(\mu), \tag{10.116}$$

where k ∈ Z+ is fixed and the known part is

$$\mathcal{K}_k(\mu) := -a_{12} a_{22}\, \mu \sin(\mu b) - a_{11} a_{22} \cos(\mu b) + a_{21} a_{12} \sum_{n=0}^{k} \overline{T}^{\,n} \cos(\mu b) - a_{21} a_{11} \sum_{n=0}^{k-1} T^{n}\, \frac{\sin(\mu b)}{\mu} \tag{10.117}$$

$$\qquad + a_{22} a_{12}\, \mu \sum_{n=0}^{k} \overline{T}\, T^{n} \cos(\mu b) - a_{22} a_{11} \sum_{n=0}^{k-1} \overline{T}\, T^{n}\, \frac{\sin(\mu b)}{\mu}, \tag{10.118}$$

and the unknown part is the infinite sum

$$\mathcal{U}_k(\mu) = a_{21} a_{12} \sum_{n=k+1}^{\infty} \overline{T}^{\,n} \cos(\mu b) - a_{21} a_{11} \sum_{n=k}^{\infty} T^{n}\, \frac{\sin(\mu b)}{\mu} + a_{22} a_{12}\, \mu \sum_{n=k+1}^{\infty} \overline{T}\, T^{n} \cos(\mu b) - a_{22} a_{11} \sum_{n=k}^{\infty} \overline{T}\, T^{n}\, \frac{\sin(\mu b)}{\mu}, \tag{10.119}$$

which we want to approximate via (10.100), cf. [6]. Since U_k(μ) ∈ B_b² ⊂ B_{2b}², see [6], we can reconstruct U_k(μ) via the sampling formula

$$\mathcal{U}_k(\mu) = \sum_{n=-\infty}^{\infty} \left[ \mathcal{U}_k\!\left( \frac{n\pi}{b} \right) S_n^2(\mu) + \mathcal{U}_k'\!\left( \frac{n\pi}{b} \right) \frac{\sin(b\mu - n\pi)}{b}\, S_n(\mu) \right]. \tag{10.120}$$

Let N ∈ Z+ and let U_{N,k}(μ) be the truncated series of (10.120),

$$\mathcal{U}_{N,k}(\mu) = \sum_{n=-N}^{N} \left[ \mathcal{U}_k\!\left( \frac{n\pi}{b} \right) S_n^2(\mu) + \mathcal{U}_k'\!\left( \frac{n\pi}{b} \right) \frac{\sin(b\mu - n\pi)}{b}\, S_n(\mu) \right]; \tag{10.121}$$

then we have, cf. (10.104),

$$\big| \mathcal{U}_k(\mu) - \mathcal{U}_{N,k}(\mu) \big| \le T_{N,k,b}(\mu), \qquad |b\mu| < N\pi, \ \mu \in \mathbb{R}. \tag{10.122}$$

Now we define $\widetilde{\mathcal{U}}_{N,k}(\mu)$, which approximates $\mathcal{U}_{N,k}(\mu)$,

$$\widetilde{\mathcal{U}}_{N,k}(\mu) = \sum_{n=-N}^{N} \left[ \widetilde{\mathcal{U}}_k\!\left( \frac{n\pi}{b} \right) S_n^2(\mu) + \widetilde{\mathcal{U}_k'}\!\left( \frac{n\pi}{b} \right) \frac{\sin(b\mu - n\pi)}{b}\, S_n(\mu) \right]. \tag{10.123}$$

Then for $0 < \varepsilon \le \min\{\pi/b, b/\pi, 1/\sqrt{e}\}$, cf. [6], we have

$$\big| \mathcal{U}_{N,k}(\mu) - \widetilde{\mathcal{U}}_{N,k}(\mu) \big| \le A_k(\varepsilon), \qquad \mu \in \mathbb{R}, \tag{10.124}$$

where, cf. (10.109),

$$A_k(\varepsilon) := \frac{4 e^{1/4}}{(k+1)\, b} \left[ (\sqrt{3e} + 1)(1 + b) + 3\left( |\mathcal{U}_k(0)| + \left( \frac{b}{\pi} \right)^{k} M_{\mathcal{U}_k} \right) \rho(\varepsilon) + (\rho(\varepsilon) + \log 2)\, M_{\mathcal{U}_k} + 1 \right] \sqrt{\varepsilon \log(1/\varepsilon)},$$

$$M_{\mathcal{U}_k} := \Gamma \exp(c\, b\, \|q\|_1)\, (c\, b\, \|q\|_1)^{k+1}, \qquad \|q\|_1 := \int_0^b |q(t)|\, dt, \qquad c := 1.709,$$

$$\Gamma := |a_{21} a_{12}| + \|q\|_1 |a_{21} a_{11}| + |a_{22} a_{12}| / \|q\|_1 + |a_{22} a_{11}|.$$

Now let $\widetilde{\Delta}_{N,k}(\mu) := \mathcal{K}_k(\mu) + \widetilde{\mathcal{U}}_{N,k}(\mu)$. From (10.116), (10.122) and (10.124), we obtain

$$\big| \Delta(\mu) - \widetilde{\Delta}_{N,k}(\mu) \big| \le T_{N,k,b}(\mu) + A_k(\varepsilon), \qquad |b\mu| < N\pi, \ \mu \in \mathbb{R}. \tag{10.125}$$

Let $(\mu^*)^2$ be an eigenvalue, i.e. $\Delta(\mu^*) = 0$, and let $(\mu_{N,k})^2$ be its approximation, $\widetilde{\Delta}_{N,k}(\mu_{N,k}) = 0$. Then $|\widetilde{\Delta}_{N,k}(\mu^*)| \le T_{N,k,b}(\mu^*) + A_k(\varepsilon)$. From the simplicity of the eigenvalues [44, 58], we can define the enclosure interval $I_{N,k,\varepsilon}$ for which, cf. also [29],

$$\mu^* \in I_{N,k,\varepsilon} := \left\{ \mu : \big| \widetilde{\Delta}_{N,k}(\mu) \big| \le T_{N,k,b}(\mu) + A_k(\varepsilon) \right\}. \tag{10.126}$$

Then the error $|\mu^* - \mu_{N,k}|$ for an eigenvalue $(\mu^*)^2$ can be estimated via the enclosure interval (10.126); see [6].

10.5 Sinc-Gaussian Method

10.5.1 Sinc-Gaussian Operator

Let $E_\sigma(\varphi)$, $\sigma > 0$, be the class of entire functions satisfying the condition

$$|f(z)| \le \varphi(|z|)\, e^{\sigma |z|}, \qquad z \in \mathbb{C}, \tag{10.131}$$

where ϕ is a continuous, non-decreasing, non-negative function on [0, ∞). Clearly, the class $E_\sigma(\varphi)$ is larger than the Bernstein space $B_\sigma^p$, $p > 1$, and it includes entire functions which are unbounded on R. Let $E L^p(\mathbb{R})$ be the class of all entire functions which belong to $L^p(\mathbb{R})$ when restricted to R. Schmeisser and Stenger defined the sinc-Gaussian operator $\mathcal{G}_{h,N} : E_\sigma(\varphi) \to E L^p(\mathbb{R})$ as follows, cf. [67],

$$\mathcal{G}_{h,N}[f](z) := \sum_{n \in \mathbb{Z}_N(z)} f(nh)\, \mathrm{sinc}(\pi h^{-1} z - n\pi)\, \exp\!\left( -\frac{\alpha}{N} \left( h^{-1} z - n \right)^2 \right), \tag{10.132}$$

where σ > 0, h ∈ (0, π/σ], α = (π − hσ)/2 and N ∈ Z+. The index set of the summation in (10.132) depends on the real part of z and is defined by

$$\mathbb{Z}_N(z) := \left\{ n \in \mathbb{Z} : \big| [h^{-1} \Re z + 1/2] - n \big| \le N \right\}. \tag{10.133}$$

An estimate for the error $|f(z) - \mathcal{G}_{h,N}[f](z)|$ was given by Schmeisser and Stenger: for $f \in E_\sigma(\varphi)$ and $|z| < N$ we have, cf. [67],

$$\big| f(z) - \mathcal{G}_{h,N}[f](z) \big| \le 2 \big| \sin(\pi h^{-1} z) \big|\, \varphi\big( |z| + h(N+1) \big)\, \beta_N(h^{-1} z)\, \frac{e^{-\alpha N}}{\sqrt{\pi \alpha N}}, \tag{10.134}$$

where

$$\beta_N(t) := \cosh(2\alpha t) + \frac{2 e^{\alpha t^2 / N}}{\sqrt{\pi \alpha N}\, \big[ 1 - (t/N)^2 \big]} + \frac{1}{2} \left[ \frac{e^{2\alpha t}}{e^{2\pi(N - t)} - 1} + \frac{e^{-2\alpha t}}{e^{2\pi(N + t)} - 1} \right] = \cosh(2\alpha t) + O(N^{-1/2}), \quad N \to \infty. \tag{10.135}$$

It is clear that the convergence rate of the sinc-Gaussian operator depends on the behaviour of the function ϕ. In the special case where ϕ is a constant function, which means that $E_\sigma(\varphi)$ coincides with the Bernstein space $B_\sigma^\infty$, the convergence rate is of order $O\big( e^{-\alpha N} / \sqrt{N} \big)$, where α is defined above.
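The operator (10.132) amounts to only a few lines of code. The Python sketch below (our own illustration, not code from [67] or [2]) applies it to f(t) = cos t ∈ B₁^∞ with h = 0.5, so α = (π − 0.5)/2 ≈ 1.32, and N = 10; by (10.134) the error should then be of the order e^{−αN}/√(παN), i.e. around 10⁻⁶.

```python
import math

def sinc(x):
    return 1.0 if x == 0 else math.sin(x) / x

def sinc_gauss(f, t, h, sigma, N):
    """Sinc-Gaussian operator (10.132): a Gaussian-localized, truncated
    cardinal series using only the 2N + 1 samples nearest to t."""
    alpha = (math.pi - h * sigma) / 2.0
    n0 = int(math.floor(t / h + 0.5))         # center of the window Z_N(t)
    total = 0.0
    for n in range(n0 - N, n0 + N + 1):
        total += f(n * h) * sinc(math.pi * t / h - n * math.pi) \
                          * math.exp(-(alpha / N) * (t / h - n) ** 2)
    return total

t, h, sigma, N = 0.3, 0.5, 1.0, 10
err = abs(sinc_gauss(math.cos, t, h, sigma, N) - math.cos(t))
print(err)   # roughly of size e^{-alpha N} / sqrt(pi alpha N)
```

The Gaussian multiplier is what turns the slowly decaying cardinal series into a short, exponentially accurate sum.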


By the use of the technique established by Schmeisser and Stenger in [67], it is proved in [2, 20] that if $f \in E_\sigma(\varphi)$, then for $|z| < N$ we have

$$\big| f'(z) - \mathcal{G}_{h,N}'[f](z) \big| \le \vartheta(z)\, \varphi\big( |z| + h(N+1) \big)\, \beta_N(h^{-1} z)\, \frac{e^{-\alpha N}}{\sqrt{\pi \alpha N}}, \tag{10.136}$$

where $\beta_N$ is defined in (10.135) and

$$\vartheta(z) := 2 \Big[ (2\alpha + \pi) \big| \cos(\pi h^{-1} z) \big| + \big| \sin(\pi h^{-1} z) \big| \Big].$$

Also here the bound in (10.136) depends on the behaviour of the function ϕ. If $f \in B_\sigma^\infty$, the bound is of order $O(e^{-\alpha N}/\sqrt{N})$. We would like to mention here that the authors of [2] established the bound (10.136) only for $f \in B_\sigma^\infty$. The amplitude error associated with the sinc-Gaussian operator and with its first derivative has been studied in [2]. It arises when the exact values f(nh) in (10.132) are replaced by approximations $\widetilde{f}(nh)$. In this case, the amplitude error is given by

$$\mathcal{A}_{h,N}[f](z) := \mathcal{G}_{h,N}[f](z) - \mathcal{G}_{h,N}[\widetilde{f}](z). \tag{10.137}$$

Under the condition that the $\widetilde{f}(nh)$ are close enough to f(nh), i.e. there is ε > 0 sufficiently small such that

$$\sup_{n \in \mathbb{Z}_N(z)} \big| f(nh) - \widetilde{f}(nh) \big| < \varepsilon, \tag{10.138}$$

the authors of [2] proved that for |z| < N we have

$$\big| \mathcal{A}_{h,N}[f](z) \big| \le 2 \varepsilon\, e^{-\alpha/(4N)} \left( 1 + \sqrt{N / (\alpha\pi)} \right) \exp\big( (\alpha + \pi)\, h^{-1} |z| \big). \tag{10.139}$$

Furthermore, under the condition (10.138) we have the following bound for the amplitude error associated with the first derivative of the sinc-Gaussian operator:

$$\big| \mathcal{A}_{h,N}'[f](z) \big| \le 2 \varepsilon\, e^{-\alpha/(4N)} \left( \frac{2\alpha}{\pi h N} + \frac{\pi}{h} \right) \left( 1 + \sqrt{N / (\alpha\pi)} \right) \exp\big( (\alpha + \pi)\, h^{-1} |z| \big). \tag{10.140}$$

The sinc-Gaussian operator (10.132) was used in [2] to construct a sinc-Gaussian method for approximating the eigenvalues of second-order boundary value problems. The bounds in (10.134), (10.136), (10.139) and (10.140) were used to establish the error analysis of the method.


10.5.2 Computations of Eigenvalues with Sinc-Gaussian Method

The sinc-Gaussian method was constructed in [2] to compute the eigenvalues of the second-order Birkhoff-regular eigenvalue problems (10.18)–(10.20). The method is based on dividing the characteristic function Δ(μ) in (10.23) into two parts: a known part K(μ) and an unknown part U(μ). The authors of [2] proved that U(μ) ∈ E_{2b}(ϕ) with ϕ a constant function. They then approximate U(μ) using the sinc-Gaussian operator (10.132):

$$\mathcal{U}(\mu) \approx \mathcal{G}_{h,N}[\mathcal{U}](\mu) = \sum_{n \in \mathbb{Z}_N(\mu)} \mathcal{U}(nh)\, \mathrm{sinc}(\pi h^{-1} \mu - n\pi)\, \exp\!\left( -\frac{\alpha}{N} \left( h^{-1} \mu - n \right)^2 \right), \tag{10.141}$$

where σ = 2b. Unfortunately, the samples U(nh) = Δ(nh) − K(nh), n ∈ Z_N(μ), cannot in general be computed explicitly; this is why the amplitude error usually appears. The authors approximate these samples numerically by solving the initial-value problems defined by (10.21), obtaining the approximate values $\widetilde{\mathcal{U}}(nh) = \widetilde{\Delta}(nh) - \mathcal{K}(nh)$, $n \in \mathbb{Z}_N(\mu)$. Therefore

$$\mathcal{G}_{h,N}[\widetilde{\mathcal{U}}](\mu) = \sum_{n \in \mathbb{Z}_N(\mu)} \widetilde{\mathcal{U}}(nh)\, \mathrm{sinc}(\pi h^{-1} \mu - n\pi)\, \exp\!\left( -\frac{\alpha}{N} \left( h^{-1} \mu - n \right)^2 \right). \tag{10.142}$$

The error analysis has been rigorously investigated using both truncation and amplitude error bounds. Let (μ*)² be an eigenvalue of (10.18)–(10.20) and μ_N its approximation. A bound for |μ* − μ_N| has been estimated for both simple and double eigenvalues, see [2, Theorems 3.3, 3.4]. The accuracy of this method is higher than that of the classical sinc method. The following example is given in [2].

Example 10.4 Consider the non-self-adjoint problem

$$- y''(x) + q(x)\, y(x) = \mu^2 y(x), \qquad 0 \le x \le 1, \tag{10.143}$$

$$2 y(0) + y(1) = 0, \tag{10.144}$$

$$y'(0) + y'(1) = 0. \tag{10.145}$$

This is a special case of problem (10.18)–(10.20) with α12 = α21 = β12 = β21 = 0, β11 = β22 = α22 = 1, α11 = 2, and

$$q(x) := \begin{cases} -1, & 0 \le x < 1/2, \\ 0, & 1/2 \le x \le 1. \end{cases} \tag{10.146}$$


Table 10.6 The eigenvalues with N = 20, h = 0.5

      Sinc-Gaussian           Absolute error
μ1    2.9781881070699154      5.587×10−13
μ2    9.37157615398148        3.739×10−12
μ3    15.67609996227684       1.977×10−12
μ4    21.96840038904169       2.991×10−12

The characteristic determinant is

$$\Delta(\mu) = 3 + 3 \cos\sqrt{\mu^2 + 1}. \tag{10.147}$$
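A quick consistency check (our own snippet, not part of [2]): the closed-form eigenvalues μ_k² = ((2k − 1)π)² − 1 annihilate the characteristic determinant (10.147), and μ1 = √(π² − 1) reproduces the first entry of Table 10.6 to full precision.

```python
import math

def delta(mu):
    # characteristic determinant (10.147)
    return 3.0 + 3.0 * math.cos(math.sqrt(mu * mu + 1.0))

exact = [math.sqrt(((2 * k - 1) * math.pi) ** 2 - 1.0) for k in (1, 2, 3, 4)]
print(exact[0])                        # matches the first row of Table 10.6
print([delta(mu) for mu in exact])     # all numerically zero
```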

Thus the exact eigenvalues are $\mu_k^2 = ((2k-1)\pi)^2 - 1$, $k \in \mathbb{Z}$. All eigenvalues are algebraically double; see Table 10.6.

After this study, further works have appeared; we summarize them as follows. In [14], Annaby and Tharwat applied a sinc-Gaussian technique to compute the eigenvalues of problem (10.48)–(10.50), where, in addition, the eigenparameter appears linearly in the boundary conditions. The authors of [24] used the sinc-Gaussian method to compute numerically the eigenvalues of discontinuous regular Dirac systems with transmission conditions at the point of discontinuity, (10.90)–(10.94). In [77], the authors applied the sinc-Gaussian method to compute the eigenvalues of the discontinuous Sturm–Liouville problems (10.95)–(10.99), which contain an eigenparameter appearing linearly in two boundary conditions as well as an internal point of discontinuity. The author of [72] computed, by the sinc-Gaussian method, the eigenvalues of the Sturm–Liouville problem defined by (10.18) with the boundary conditions

$$a_1\, y(0, \mu) + a_2\, y'(0, \mu) = \mu^2 \big( a_1'\, y(0, \mu) + a_2'\, y'(0, \mu) \big), \tag{10.148}$$

$$b_1\, y(1, \mu) + b_2\, y'(1, \mu) = \mu^2 \big( b_1'\, y(1, \mu) + b_2'\, y'(1, \mu) \big), \tag{10.149}$$

where μ is a complex spectral parameter, q(·) is assumed to be real-valued and continuous on [0, 1], and $a_i, b_i, a_i', b_i' \in \mathbb{R}$, $i = 1, 2$, satisfy

$$\big( (a_1, a_2) \ne (0, 0) \ \text{or} \ a_1' a_2 - a_1 a_2' > 0 \big) \ \text{and} \ \big( (b_1, b_2) \ne (0, 0) \ \text{or} \ b_1' b_2 - b_1 b_2' > 0 \big). \tag{10.150}$$

In [73], the authors used the sinc-Gaussian method to compute the eigenvalues of the problems Π(q, α, β, 0, 0), Π(q, α, β, 0, β′) and Π(q, α, β, α′, β′) defined in Sect. 10.3.2. Annaby and Tharwat [16] applied a sinc-Gaussian technique to compute the eigenvalues of problem (10.51)–(10.53); here the authors derived a rigorous error analysis that takes into account both truncation and perturbation errors.


In [7], Annaby and Asharabi computed the eigenvalues of the discontinuous second-order boundary-value problems (10.54)–(10.57) using the sinc-Gaussian sampling technique. Their problem is defined piecewise on [−1, 1] and is not in general self-adjoint. The boundary and compatibility conditions are assumed to be regular in the sense of Birkhoff to guarantee the existence and discreteness of the eigenvalues, which are in general complex and not necessarily simple. In [8], Annaby and Asharabi considered the computation of the bound states of the quantum scattering problem

$$- \psi_l''(k, r) + \left( k^2 - \frac{l(l+1)}{r^2} \right) \psi_l(k, r) = V(r)\, \psi_l(k, r), \qquad r \in [0, \infty), \tag{10.151}$$

$$\int_0^\infty (r + 1)\, |V(r)|\, dr < \infty, \tag{10.152}$$

where the potential V(r) is assumed to be real-valued. They applied not only a sinc-Gaussian technique to solve the problem, but also a regularized-sinc method as well as a Hermite interpolation method. It is worth mentioning that Boumenir and Tuan [34] have also computed the eigenvalues using a sampling theorem in Hardy-type spaces. In [8], several examples and comparisons have been carried out; see also [33] and [41].

10.6 Hermite-Gaussian Method

The Hermite sampling series, which uses samples of a bandlimited function f and of its first derivative, was given in Sect. 10.4. Asharabi and Prestin [21] modified Hermite sampling with a Gaussian multiplier. The author of [17] used this modification to construct a new sampling method to approximate the eigenvalues of boundary value problems. This method is called the Hermite-Gauss method, and its convergence rate is of exponential order.

10.6.1 Hermite-Gauss Operator

The Hermite-Gauss operator $\mathcal{H}_{h,N} : E_\sigma(\varphi) \to E L^p(\mathbb{R})$ is defined as follows, cf. [21],

$$\mathcal{H}_{h,N}[f](z) := \sum_{n \in \mathbb{Z}_N(z)} \left[ \left( 1 + \frac{2\beta (z - nh)^2}{h^2 N} \right) f(nh) + (z - nh)\, f'(nh) \right] \mathrm{sinc}^2(\pi h^{-1} z - n\pi)\, e^{-\frac{\beta}{N} (h^{-1} z - n)^2}, \tag{10.153}$$

where Z_N(z) is defined in (10.133), N ∈ Z+, h ∈ (0, 2π/σ] and β := (2π − hσ)/2. The classes of functions $E_\sigma(\varphi)$ and $E L^p(\mathbb{R})$ are defined in Sect. 10.5.1. The summation in (10.153) depends on the real part of z. An estimate for the error $|f(z) - \mathcal{H}_{h,N}[f](z)|$ was investigated in [21]: for $f \in E_\sigma(\varphi)$ and |z| < N we have, cf. [21],

$$\big| f(z) - \mathcal{H}_{h,N}[f](z) \big| \le 2 \big| \sin^2(\pi h^{-1} z) \big|\, \varphi\big( |z| + h(N+1) \big)\, \chi_N(h^{-1} z)\, \frac{e^{-\beta N}}{\sqrt{\pi \beta N}}, \tag{10.154}$$

where

$$\chi_N(t) := \frac{4 e^{\beta t^2 / N}}{\sqrt{\pi \beta N}\, \big[ 1 - (t/N)^2 \big]} + \frac{e^{2\beta t}}{\big( 1 - e^{-2\pi(N + t)} \big)^2} + \frac{e^{-2\beta t}}{\big( 1 - e^{-2\pi(N - t)} \big)^2} = 2 \cosh(2\beta t) + O(N^{-1/2}), \quad N \to \infty. \tag{10.155}$$
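The following Python sketch (our own, not code from [21]) implements (10.153) for f(t) = cos t ∈ B₁^∞ with h = 1, so β = (2π − 1)/2 ≈ 2.64; already with N = 7 samples on each side the error predicted by (10.154) is of order e^{−βN}/√(πβN), far below single precision.

```python
import math

def sinc(x):
    return 1.0 if x == 0 else math.sin(x) / x

def hermite_gauss(f, fprime, t, h, sigma, N):
    """Hermite-Gauss operator (10.153): samples of f and f' combined with a
    squared-sinc kernel and a Gaussian multiplier."""
    beta = (2.0 * math.pi - h * sigma) / 2.0
    n0 = int(math.floor(t / h + 0.5))
    total = 0.0
    for n in range(n0 - N, n0 + N + 1):
        d = t - n * h
        total += ((1.0 + 2.0 * beta * d * d / (h * h * N)) * f(n * h)
                  + d * fprime(n * h)) \
                 * sinc(math.pi * t / h - n * math.pi) ** 2 \
                 * math.exp(-(beta / N) * (t / h - n) ** 2)
    return total

t = 0.3
err = abs(hermite_gauss(math.cos, lambda x: -math.sin(x), t, 1.0, 1.0, 7)
          - math.cos(t))
print(err)
```

At the nodes t = nh the operator reproduces f(nh) and f′(nh) exactly, which is the Hermite interpolation property mentioned above.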

The convergence rate of the Hermite-Gauss operator depends on the behaviour of the function ϕ. In the special case where ϕ is a constant function, the convergence rate is of order $O(e^{-\beta N}/\sqrt{N})$, where β is defined above. In [19], the author estimates a bound for the error associated with the first derivative of the Hermite-Gauss operator, i.e. $|f'(z) - \mathcal{H}_{h,N}'[f](z)|$. He proved that if $f \in E_\sigma(\varphi)$, σ > 0, then for z ∈ C, |z| < N, we have

$$\big| f'(z) - \mathcal{H}_{h,N}'[f](z) \big| \le 2\, \varphi\big( |z| + h(N+1) \big)\, \psi_N(z)\, \chi_N(h^{-1} z)\, \frac{e^{-\beta N}}{h \sqrt{\pi \beta N}}, \tag{10.156}$$

where χ_N is defined in (10.155) and ψ_N is defined by

$$\psi_N(z) := \Big[ 2\pi \big| \cos(\pi h^{-1} z) \big| + \big( N^{-1} + 2\beta \big) \big| \sin(\pi h^{-1} z) \big| \Big] \big| \sin(\pi h^{-1} z) \big|. \tag{10.157}$$

The bound in (10.156) depends on the behaviour of the function ϕ. If ϕ is a constant function, then the bound (10.156) is of order $O(e^{-\beta N}/\sqrt{N})$. The amplitude error associated with the Hermite-Gauss operator has been investigated in [17]. The amplitude error associated with the operator (10.153) is given by

$$\mathcal{A}_{h,N}[f](z) := \mathcal{H}_{h,N}[f](z) - \mathcal{H}_{h,N}[\widetilde{f}](z). \tag{10.158}$$


Assume that $\widetilde{f}(nh)$ and $\widetilde{f'}(nh)$ are close to f(nh) and f′(nh), respectively, i.e. there is ε > 0 sufficiently small such that

$$\sup_{n \in \mathbb{Z}_N(z)} \big| f(nh) - \widetilde{f}(nh) \big| < \varepsilon, \qquad \sup_{n \in \mathbb{Z}_N(z)} \big| f'(nh) - \widetilde{f'}(nh) \big| < \varepsilon. \tag{10.159}$$

If (10.159) holds, then for z ∈ C with |z| < N we have, [17],

$$\big| \mathcal{A}_{h,N}[f](z) \big| \le 2 \varepsilon \left( 1 + \frac{2\beta}{\pi^2} + \frac{h}{\pi N} \right) \left( 1 + \sqrt{N / (\beta\pi)} \right) e^{(2\pi + \beta)\, h^{-1} |z|}\, e^{-\beta/(4N)}. \tag{10.160}$$

Moreover, under the condition (10.159) we have the following bound for the amplitude error associated with the first derivative of the Hermite-Gauss operator (10.153), cf. [19]:

$$\big| \mathcal{A}_{h,N}'[f](z) \big| \le 2 \varepsilon\, C_{h,N} \left( 1 + \sqrt{N / (\beta\pi)} \right) e^{(2\pi + \beta)\, h^{-1} |z|}\, e^{-\beta/(4N)}, \tag{10.161}$$

where $C_{h,N}$ is defined by

$$C_{h,N} := 1 + \frac{\beta}{h} + \frac{1}{\pi} + \frac{2}{\pi N} + \frac{2\beta(h+1)}{N\pi h} + 1 + h. \tag{10.162}$$

The Hermite-Gauss operator (10.153) was used in [17] to construct a Hermite-Gauss method for approximating the eigenvalues of second-order boundary value problems with separated boundary conditions. The bounds in (10.154), (10.156), (10.160) and (10.161) were used to establish the error analysis of this method for both simple and double eigenvalues.

10.6.2 Computations of Eigenvalues with Hermite-Gaussian Method

The Hermite-Gauss method was derived in [17] to compute the eigenvalues of the second-order eigenvalue problems (10.18)–(10.20) with separated boundary conditions only. The method is based on dividing the characteristic function into two parts: a known part K(μ) and an unknown part U(μ). The author of [17] proved that U(μ) ∈ E_b(ϕ) with ϕ a constant function, and then approximates U(μ) using the Hermite-Gauss operator (10.153):

$$\mathcal{U}(\mu) \approx \mathcal{H}_{h,N}[\mathcal{U}](\mu) = \sum_{n \in \mathbb{Z}_N(\mu)} \left[ \left( 1 + \frac{2\beta(\mu - nh)^2}{h^2 N} \right) \mathcal{U}(nh) + (\mu - nh)\, \mathcal{U}'(nh) \right] \mathrm{sinc}^2(\pi h^{-1} \mu - n\pi)\, e^{-\frac{\beta}{N}(h^{-1}\mu - n)^2}, \tag{10.163}$$

where σ = b. Unfortunately, the samples U(nh) = Δ(nh) − K(nh) and U′(nh) = Δ′(nh) − K′(nh), n ∈ Z_N(μ), cannot in general be computed explicitly. The author gives a technique to approximate these samples numerically, obtaining approximate values $\widetilde{\mathcal{U}}(nh)$ and $\widetilde{\mathcal{U}'}(nh)$, cf. [17]. In this method, too, the amplitude error usually appears, and therefore

$$\mathcal{H}_{h,N}[\widetilde{\mathcal{U}}](\mu) = \sum_{n \in \mathbb{Z}_N(\mu)} \left[ \left( 1 + \frac{2\beta(\mu - nh)^2}{h^2 N} \right) \widetilde{\mathcal{U}}(nh) + (\mu - nh)\, \widetilde{\mathcal{U}'}(nh) \right] \mathrm{sinc}^2(\pi h^{-1} \mu - n\pi)\, e^{-\frac{\beta}{N}(h^{-1}\mu - n)^2}. \tag{10.164}$$

The error analysis has been rigorously investigated using both truncation and amplitude error bounds. In this problem all eigenvalues are simple, and a bound for |μ* − μ_N| has been estimated, where (μ*)² is a simple eigenvalue of (10.18)–(10.20) and μ_N its approximation, see [17, Theorem 3.2]. The accuracy of this method is higher than that of the sinc-Gaussian method. The following example is given in [17].

Example 10.5 Consider the Sturm–Liouville problem

$$- y''(t) - y(t) = \mu^2 y(t), \qquad t \in [0, 1], \tag{10.165}$$

with separated boundary conditions of the form

$$y'(0, \mu) = y(1, \mu) = 0. \tag{10.166}$$

The characteristic function is

$$\Delta(\mu) = \cos\sqrt{1 + \mu^2}, \tag{10.167}$$
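As a sanity check (our own snippet), the closed-form eigenvalues μ_l² = (2l + 1)²π²/4 − 1 are exactly the squared zeros of (10.167), and the first of them matches the μ1 entry of Table 10.7.

```python
import math

def delta(mu):
    return math.cos(math.sqrt(1.0 + mu * mu))   # characteristic function (10.167)

mus = [math.sqrt((2 * l + 1) ** 2 * math.pi ** 2 / 4.0 - 1.0) for l in range(5)]
print(mus[0])                               # matches mu_1 in Table 10.7
print(max(abs(delta(mu)) for mu in mus))    # numerically zero
```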

Table 10.7 The eigenvalues with N = 7 and h = 1

      Hermite-Gauss            Absolute error
μ1    1.211363322984587        3.242×10−14
μ2    4.605063506885792        2.309×10−14
μ3    7.790059531660102        5.329×10−15
μ4    10.950007028004358       8.882×10−15
μ5    14.101754824207502       1.776×10−15

and the exact eigenvalues are $\mu_l^2 = (2l+1)^2 \pi^2/4 - 1$, $l \in \mathbb{Z}$. Table 10.7 shows the first five approximate eigenvalues of problem (10.165)–(10.166) using our techniques with N = 7 and h = 1.

After the above study, further works have appeared; we summarize them as follows:

• The authors of [22] applied the Hermite-Gauss sampling method to approximate the eigenvalues of the Dirac system

$$\begin{pmatrix} u_2'(x) - q_1(x)\, u_1(x) \\ u_1'(x) + q_2(x)\, u_2(x) \end{pmatrix} = \lambda \begin{pmatrix} u_1(x) \\ -u_2(x) \end{pmatrix}, \qquad x \in [a, c_1) \cup_{k=2}^{m} (c_{k-1}, c_k) \cup (c_m, b], \tag{10.168}$$

$$\begin{pmatrix} \sin\alpha\, u_1(a) + \cos\alpha\, u_2(a) \\ \sin\beta\, u_1(b) + \cos\beta\, u_2(b) \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}, \tag{10.169}$$

with the transmission conditions at the discontinuity points $\{c_k\}_{k=1}^{m}$

$$\begin{pmatrix} u_1(c_k^-) \\ u_2(c_k^-) \end{pmatrix} - \gamma_k \begin{pmatrix} u_1(c_k^+) \\ u_2(c_k^+) \end{pmatrix} = 0, \qquad k = 1, 2, \ldots, m, \tag{10.170}$$

where λ ∈ C; γ0 = 1; γk ≠ 0, k = 1, 2, . . . , m; the real-valued functions qj(x), j = 1, 2, are continuous in [a, c1) ∪_{k=2}^{m} (c_{k−1}, ck) ∪ (cm, b] and have finite limits at {c_k^±}_{k=1}^{m}; and α, β ∈ [0, π].

• The author of [19] used the Hermite-Gauss sampling method to approximate non-real and non-simple eigenvalues of the boundary value problems (10.18)–(10.20) with α12 = α21 = β12 = β21 = 0, where q(·) is a complex-valued function satisfying q ∈ L1(0, b) and α11, α22, β11, β22 are complex numbers satisfying |α11| > 0, |α22| > 0, |β11| > 0, |β22| > 0.

10.7 Generalized Sinc-Gaussian Method

The generalized sampling expansion, which uses samples of a bandlimited function f and of its first r derivatives, was introduced by several authors, see e.g. [60, 68, 79]. The use of the generalized sampling series in approximation theory is limited because of its slow convergence. The generalized sinc-Gaussian sampling operator was established recently in [18] to approximate wider classes of analytic functions. The author of [18] used this operator to construct a new sampling method to approximate the eigenvalues of boundary value problems. This method is called the generalized sinc-Gaussian method, and its convergence rate is of exponential order.

10.7.1 Generalized Sinc-Gaussian Operator

The author of [18] modified the generalized sampling expansion using a Gaussian function for wider classes of analytic functions. The generalized sinc-Gaussian operator $\mathcal{G}_{r,h,N} : E_\sigma(\varphi) \to E L^p(\mathbb{R})$ is defined as follows, cf. [18],

$$\mathcal{G}_{r,h,N}[f](z) := \sum_{n \in \mathbb{Z}_N(z)} \sum_{i+j+k+l=r} f^{(i)}(nh)\, p_{r+k-j,\,l}(z)\, \mathrm{sinc}^{r+1}(\pi h^{-1} z - n\pi)\, e^{-\frac{\alpha_r}{N}(h^{-1} z - n)^2}, \tag{10.171}$$

where Z_N(z) is defined in (10.133), h ∈ (0, (r+1)π/σ], α_r := ((r+1)π − hσ)/2, N ∈ Z+, z ∈ C, and r ∈ N₀. The classes of functions $E_\sigma(\varphi)$ and $E L^p(\mathbb{R})$ are defined in Sect. 10.5.1. Here $p_{r+k-j,\,l}(z)$ is a polynomial of degree r + k − j defined by

$$p_{r+k-j,\,l}(z) := \frac{\pi^{r+1} (-1)^{(r+1)n+k}\, h^{i}\, \delta_{r,l}}{i!\, k!\, l!}\, (z - nh)^{r-j}\, H_k\!\left( \frac{\sqrt{\alpha_r}\, (z - nh)}{\sqrt{N}\, h} \right), \tag{10.172}$$

where $H_k(z)$ is the k-th degree Hermite polynomial

$$H_k(z) := (-1)^k \exp(z^2)\, \frac{d^k}{dz^k} \exp(-z^2) = \sum_{m=0}^{\lfloor k/2 \rfloor} \frac{(-1)^m\, k!\, (2z)^{k-2m}}{m!\, (k-2m)!}, \tag{10.173}$$

and the constant $\delta_{r,l}$ is given by

$$\delta_{r,l} := \left[ \frac{d^l}{d\zeta^l} \left( \frac{\zeta - n}{\sin(\pi\zeta)} \right)^{r+1} \right]_{\zeta = n}. \tag{10.174}$$
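The Hermite polynomials entering (10.172)–(10.173) are cheap to evaluate; for larger k the three-term recurrence H_{k+1}(z) = 2z H_k(z) − 2k H_{k−1}(z) is the numerically preferable route, and it agrees with the explicit sum in (10.173). A small Python check (our own):

```python
import math

def hermite_sum(k, z):
    # explicit finite sum in (10.173)
    return sum((-1) ** m * math.factorial(k) * (2 * z) ** (k - 2 * m)
               / (math.factorial(m) * math.factorial(k - 2 * m))
               for m in range(k // 2 + 1))

def hermite_rec(k, z):
    # three-term recurrence H_{k+1}(z) = 2 z H_k(z) - 2 k H_{k-1}(z)
    h_prev, h_cur = 1.0, 2.0 * z
    if k == 0:
        return h_prev
    for j in range(1, k):
        h_prev, h_cur = h_cur, 2.0 * z * h_cur - 2.0 * j * h_prev
    return h_cur

vals = [(hermite_sum(k, 0.7), hermite_rec(k, 0.7)) for k in range(7)]
print(vals[2])   # H_2(0.7) = 4*0.7**2 - 2 = -0.04
```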

The sinc-Gaussian operator (10.132) and the Hermite-Gauss operator (10.153) are included in the operator (10.171) as special cases when r = 0, 1, respectively.  An estimation for the error f (z) − Gr,h,N [f ](z) is investigated in [18]. Let

10 Sinc-Computation of Eigenvalues


f ∈ E_σ(ϕ); then for |z| < N we have

|f(z) − G_{r,h,N}[f](z)| ≤ 2^r |sin^{r+1}(π h^{−1} z)| ϕ(|z| + h(N+1)) β_{r,N}(h^{−1}z) e^{−α_r N} / √(π α_r N),   (10.175)

where the function β_{r,N} is defined by

β_{r,N}(t) := e^{2α_r t}/(1 − e^{−2π(N+t)})^{r+1} + e^{−2α_r t}/(1 − e^{−2π(N−t)})^{r+1} + 2e^{α_r t²/N}/(√(π α_r N)(1 − t²/N²))
            = 2 cosh(2α_r t) + O(N^{−1/2}),  as N → ∞.   (10.176)
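For orientation, the r = 0 member of this family (the sinc-Gaussian operator) is easy to implement. The sketch below is illustrative only: it assumes a concrete truncated window for Z_N(z) (the 2N+2 integers nearest z/h), since (10.133) is not reproduced in this excerpt, and it tests the operator on the bandlimited function f(x) = sin(πx)/(πx) ∈ B_π.

```python
import numpy as np

def sinc_gauss(f, x, h, sigma, N):
    """Sinc-Gaussian operator, the r = 0 case of (10.171).
    Z_N(x) is taken here as the 2N+2 integers nearest x/h -- a concrete
    stand-in for (10.133), which is defined outside this excerpt."""
    alpha = (np.pi - h * sigma) / 2.0          # alpha_0 = (pi - h*sigma)/2
    n0 = int(np.floor(x / h))
    ns = np.arange(n0 - N, n0 + N + 2)         # truncated sampling window
    t = x / h - ns
    # np.sinc(t) = sin(pi t)/(pi t), i.e. sinc(pi h^{-1} x - n pi)
    return np.sum(f(ns * h) * np.sinc(t) * np.exp(-(alpha / N) * t**2))

f = lambda x: np.sinc(x)                       # f in B_pi, so sigma = pi
x, h, N = 0.37, 0.5, 10
err = abs(sinc_gauss(f, x, h, np.pi, N) - f(x))
print(err)                                     # decays roughly like e^{-alpha N}/sqrt(N)
```

With h = 0.5 the Gaussian parameter is α = π/4, so the bound (10.175) already predicts an error of order 10^{-4} at N = 10, which the sketch confirms.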

Also here, the convergence rate of the generalized sinc-Gaussian operator depends on the behaviour of the function ϕ. In the special case f ∈ B_σ^∞, the convergence rate of the generalized sinc-Gaussian operator is of order O(e^{−α_r N}/√N), where α_r is defined above and depends on r. The amplitude error associated with the generalized sinc-Gaussian operator has been investigated in [23]. The amplitude error associated with the operator (10.171) is given by

A_{r,h,N}[f](z) := G_{r,h,N}[f̃](z) − G_{r,h,N}[f](z),   (10.177)

where f̃ denotes the perturbed data actually used in place of f.

Assume that the perturbed samples f̃^{(i)}(nh) are close to f^{(i)}(nh), i.e. there is a sufficiently small ε > 0 such that

sup_{n∈Z_N(z)} |f̃^{(i)}(nh) − f^{(i)}(nh)| < ε,  i = 0, …, r.   (10.178)

Under the assumption (10.178) and for z ∈ C, |z| < N, we have [23]

|A_{r,h,N}[f](z)| ≤ ε C_{r,h,N} (1 + √(N/α_r)) e^{−α_r/(4N)} e^{((r+1)h + α_r) h^{−1} |z|},   (10.179)

where α_r is defined above and C_{r,h,N} is defined by

C_{r,h,N} := (2/π^{r+1}) Σ_{i+j+k+l=r} h^{j+1} |β_{i,j,k,l}| Σ_{m=0}^{⌊k/2⌋} (k!/(m! (k−2m)!)) (2√α_r h^{−1} + 1)^{k−2m} N^{(k−2m)/2 − j − 1},   (10.180)

and the constant β_{i,j,k,l} is given by

β_{i,j,k,l} := (π^{r+1} (−1)^{(r+1)n+k} h^i / (i! k! l!)) [ (d^l/dζ^l) ((ζ − n)/sin(πζ))^{r+1} ]_{ζ=n}.   (10.181)
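The essential content of (10.179) is that the amplitude error is proportional to ε. This is immediate from the linearity of the operator in its samples, and a small numerical illustration (our own, for r = 0, with the sampling window again taken as an assumption) makes it concrete:

```python
import numpy as np

# Illustration (r = 0) that the amplitude error scales linearly with the
# sample perturbation eps, consistent with the eps factor in (10.179).
# The window Z_N is taken as the 2N+1 integers nearest x/h (an assumption;
# (10.133) is not reproduced in this excerpt).
def G(samples, ns, x, h, alpha, N):
    t = x / h - ns
    return np.sum(samples * np.sinc(t) * np.exp(-(alpha / N) * t**2))

h, sigma, N, x = 0.5, np.pi, 10, 0.37
alpha = (np.pi - h * sigma) / 2
n0 = int(np.floor(x / h))
ns = np.arange(n0 - N, n0 + N + 1)
exact = np.sinc(ns * h)                  # samples of f(x) = sin(pi x)/(pi x)
ratios = []
for eps in (1e-3, 1e-6):
    noisy = exact + eps * np.cos(ns)     # |f~(nh) - f(nh)| <= eps, cf. (10.178)
    ratios.append(abs(G(noisy, ns, x, h, alpha, N)
                      - G(exact, ns, x, h, alpha, N)) / eps)
print(ratios)                            # the two ratios agree: error is O(eps)
```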


10.7.2 Computations of Eigenvalues with the Generalized Sinc-Gaussian Method

The generalized sinc-Gaussian operator has been used in [23] to approximate eigenvalues of the Dirac system, which consists of the system of differential equations

u_2′(x, λ) + p_1(x) u_1(x, λ) = λ u_1(x, λ),
−u_1′(x, λ) + p_2(x) u_2(x, λ) = λ u_2(x, λ),   x ∈ [0, a],   (10.182)

with boundary conditions

u_1(0, λ) = (−1)^{ℓ−1} u_1(a, λ),  u_2(0, λ) = (−1)^{ℓ−1} u_2(a, λ),  ℓ = 1, 2,   (10.183)

where λ ∈ C and p_1(·), p_2(·) are real-valued smooth periodic functions with period a. For ℓ = 1 the boundary value problem (10.182)–(10.183) is said to be periodic, while for ℓ = 2 it is said to be semi-periodic. Problem (10.182)–(10.183) has a denumerable set of real eigenvalues, and the eigenvalues can repeat with multiplicity not exceeding two. Let ϕ_ℓ(·, λ) = (ϕ_{ℓ1}(·, λ), ϕ_{ℓ2}(·, λ)) and ϑ_ℓ(·, λ) = (ϑ_{ℓ1}(·, λ), ϑ_{ℓ2}(·, λ)) be two solutions of (10.182) satisfying the initial conditions

ϕ_{ℓ1}(0, λ) = ϑ_{ℓ2}(0, λ) = 0,  ϕ_{ℓ2}(0, λ) = ϑ_{ℓ1}(0, λ) = (−1)^{ℓ−1},  ℓ = 1, 2.   (10.184)

The eigenvalues of the problem (10.182)–(10.183) are the zeros of the characteristic function

Δ_ℓ(λ) = ϕ_{ℓ2}(a, λ) + ϑ_{ℓ1}(a, λ) + 2(−1)^ℓ,   (10.185)

which is an entire function of λ. In this method, the authors split the characteristic function Δ_ℓ(λ) into two parts, one known and the other unknown but belonging to the class E_a(ϕ) with ϕ a constant function, i.e.

Δ_ℓ(λ) := K_ℓ(λ) + U_ℓ(λ),   (10.186)

where U_ℓ(λ) is the unknown part, involving some integral operators, cf. [23]. Therefore, the authors approximate the unknown part U_ℓ(λ) using the generalized sinc-Gaussian operator (10.171) as follows:

U_ℓ(λ) ≈ G_{r,h,N}[U_ℓ](λ) := Σ_{n∈Z_N(λ)} Σ_{i+j+k+l=r} U_ℓ^{(i)}(nh) p_{r+k−j,l}(λ) sinc^{r+1}(π h^{−1} λ − nπ) e^{−(α_r/N)(h^{−1}λ − n)²}.   (10.187)

Unfortunately, the samples U_ℓ^{(i)}(nh) = Δ_ℓ^{(i)}(nh) − K_ℓ^{(i)}(nh), i = 1, …, r, n ∈ Z_N(λ), cannot be computed explicitly in the general case. The authors devised a technique to approximate these samples numerically, cf. [23]. In this method the amplitude error also usually appears. Let λ* be an eigenvalue of (10.182)–(10.183) of multiplicity ν (ν = 1 or 2), and denote by λ_{ℓ,N} the corresponding approximation. A bound for the error |λ* − λ_{ℓ,N}| has been estimated in [23, Theorem 3.3] using the bounds in (10.175) and (10.179). The following example is given in [23].

Example 10.6 Consider the semi-periodic Dirac system

u_2′(x, λ) + x u_1(x, λ) = λ u_1(x, λ),  −u_1′(x, λ) + x u_2(x, λ) = λ u_2(x, λ),  x ∈ [0, 4],   (10.188)

u_1(0, λ) = −u_1(4, λ),  u_2(0, λ) = −u_2(4, λ).   (10.189)

Here a = 4, ℓ = 2 and p_1(x) = p_2(x) = x. The characteristic function is

Δ_2(λ) = 2 − 2 cos(2(4 − 2λ)),   (10.190)

and then the exact eigenvalues are λ_k = (4 − πk)/2, k ∈ Z. All eigenvalues of this problem are double (Tables 10.8 and 10.9).
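The exact eigenvalues can be verified directly against the characteristic function, and the G_{3,h,N} column of Table 10.9 can be reproduced from the reported approximation of λ_0:

```python
import numpy as np

# Delta_2(lambda) = 2 - 2*cos(2*(4 - 2*lambda)) from (10.190); its (double)
# zeros are lambda_k = (4 - pi*k)/2.
Delta = lambda lam: 2 - 2 * np.cos(2 * (4 - 2 * lam))
for k in range(-2, 3):
    lam_k = (4 - np.pi * k) / 2
    assert abs(Delta(lam_k)) < 1e-12

# The G_{3,h,N} approximation of lambda_0 = 2 reported in Table 10.8:
lam0_approx = 2.0000000000000044
print(abs(lam0_approx - 2.0))    # matches the Table 10.9 entry ~4.44e-15
```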

Table 10.8 The approximate eigenvalues with N = 7, h = 0.7

λ_k    G_{1,h,N}               G_{2,h,N}               G_{3,h,N}
λ_−2   5.14159196109544        5.141592653560071       5.141592653589792
λ_−1   3.570796345992298       3.5707963267993956      3.570796326794903
λ_0    1.9999995037636424      1.999999999992704       2.0000000000000044
λ_1    0.42920436334952355     0.42920367323034775     0.4292036732051043
λ_2    −1.1415932196009233     −1.141592653617637      −1.1415926535897942

Table 10.9 The absolute error |λ_k − λ_{k,N}| with N = 7, h = 0.7

λ_k    G_{1,h,N}        G_{2,h,N}         G_{3,h,N}
λ_−2   6.92494×10^−7    2.9722×10^−11     8.88178×10^−16
λ_−1   1.91974×10^−8    4.49907×10^−12    6.66134×10^−15
λ_0    4.96236×10^−7    7.29594×10^−12    4.44089×10^−15
λ_1    6.90144×10^−7    2.52441×10^−11    6.10623×10^−16
λ_2    5.66011×10^−7    2.78437×10^−11    8.88178×10^−16


Table 10.10 Comparisons

Method               Space     Samples                                  Convergence rate
Classical sinc       B_σ^p     {f(nh)}_{n=−N}^{N}                       1/N^{m+1/2}
Regularized sinc     B_σ^p     {f(nh)}_{n=−N}^{N}                       1/N^{m+k+1/2}
Hermite              B_σ^p     {f(nh), f′(nh)}_{n=−N}^{N}               1/N^{m+1/2}
Sinc-Gaussian        E_σ(ϕ)    {f(nh)}_{n∈Z_N(z)}                       e^{−α_1 N}/√N
Hermite-Gauss        E_σ(ϕ)    {f(nh), f′(nh)}_{n∈Z_N(z)}               e^{−α_2 N}/√N
Gen. sinc-Gaussian   E_σ(ϕ)    {f(nh), …, f^{(r)}(nh)}_{n∈Z_N(z)}       e^{−α_r N}/√N

10.8 Conclusions

During the period 1996–2018, six sampling methods were developed to approximate the eigenvalues of boundary value problems of various types. They are called classical sinc (1996), regularized sinc (2005), Hermite (2012), sinc-Gaussian (2008), Hermite-Gauss (2016) and generalized sinc-Gaussian (2018); see [2, 6, 17, 23, 32, 39]. We can classify these sampling methods under the following two categories:
• Methods with a polynomial rate of convergence (classical sinc, regularized sinc and Hermite).
• Methods with an exponential rate of convergence (sinc-Gaussian, Hermite-Gauss, generalized sinc-Gaussian).
In Table 10.10 we give important comparisons of the presented methods, where p > 1, m, k ∈ N, α_r = ((r+1)π − hσ)/2, h ∈ (0, (r+1)π/σ], and σ is a positive number. The space E_σ(ϕ) is defined in Sect. 10.5.1, and B_σ^p is the Bernstein space. It is worthwhile to mention that, except for the sinc method of [70], the other generalized sinc methods have not been implemented in physical and engineering problems. We expect that the implementation of the regularized and generalized methods in applied problems will remarkably enhance convergence rates, as in the case of the eigenvalue computations. These are open directions in this interesting field of approximation theory.

Bibliography

1. Annaby, M.H., Asharabi, R.M.: Approximating eigenvalues of discontinuous problems by sampling theorems. J. Numer. Math. 3, 163–183 (2008)
2. Annaby, M.H., Asharabi, R.M.: Computing eigenvalues of boundary-value problems using sinc-Gaussian method. Sampl. Theory Signal Image Proc. 7, 293–311 (2008)
3. Annaby, M.H., Asharabi, R.M.: On sinc-based method in computing eigenvalues of boundary-value problems. SIAM J. Numer. Anal. 46, 671–690 (2008)


4. Annaby, M.H., Asharabi, R.M.: Error analysis associated with uniform Hermite interpolations of bandlimited functions. J. Korean Math. Soc. 47, 1299–1316 (2010)
5. Annaby, M.H., Asharabi, R.M.: Truncation, amplitude, and jitter errors on R for sampling series derivatives. J. Approx. Theory 163, 336–362 (2011)
6. Annaby, M.H., Asharabi, R.M.: Computing eigenvalues of Sturm-Liouville problems by Hermite interpolations. Numer. Algor. 60, 355–367 (2012)
7. Annaby, M.H., Asharabi, R.M.: A sinc-Gaussian solver for general second order discontinuous problems. Jpn. J. Ind. Appl. Math. 35, 653–668 (2018)
8. Annaby, M.H., Asharabi, R.M.: Sinc-interpolants in the energy plane for regular solution, Jost function, and its zeros of quantum scattering. J. Math. Phys. 59, 013502 (2018)
9. Annaby, M.H., Tharwat, M.M.: On computing eigenvalues of second-order linear pencils. IMA J. Numer. Anal. 27, 366–380 (2007)
10. Annaby, M.H., Tharwat, M.M.: Sinc-based computations of eigenvalues of Dirac systems. BIT Numer. Math. 47, 699–713 (2007)
11. Annaby, M.H., Tharwat, M.M.: On sampling and Dirac systems with eigenparameter in the boundary conditions. J. Appl. Math. Comput. 36, 291–317 (2011)
12. Annaby, M.H., Tharwat, M.M.: On the computation of the eigenvalues of Dirac systems. Calcolo 49, 221–240 (2012)
13. Annaby, M.H., Tharwat, M.M.: The Hermite interpolation approach for computing eigenvalues of Dirac systems. Math. Comput. Model. 57, 2459–2472 (2013)
14. Annaby, M.H., Tharwat, M.M.: A sinc-Gaussian technique for computing eigenvalues of second-order linear pencils. Appl. Numer. Math. 63, 129–137 (2013)
15. Annaby, M.H., Tharwat, M.M.: A sinc-method computation for eigenvalues of Schrödinger operators with eigenparameter-dependent boundary conditions. Calcolo 54, 23–41 (2017)
16. Annaby, M.H., Tharwat, M.M.: Sinc-regularized techniques to compute eigenvalues of Schrödinger operators on L2(I) ⊕ C2. Numer. Algor. 80, 795–817 (2019)
17. Asharabi, R.M.: A Hermite-Gauss technique for approximating eigenvalues of regular Sturm-Liouville problems. J. Inequal. Appl. 2016, 154 (2016). https://doi.org/10.1186/s13660-016-1098-9
18. Asharabi, R.M.: Generalized sinc-Gaussian sampling involving derivatives. Numer. Algor. 73, 1055–1072 (2016)
19. Asharabi, R.M.: Approximating eigenvalues of boundary value problems by using the Hermite-Gauss sampling method. Electron. Trans. Numer. Anal. 46, 359–374 (2017)
20. Asharabi, R.M.: The use of the sinc-Gaussian sampling formula for approximating the derivatives of analytic functions. Numer. Algor. 81, 293–312 (2019). https://doi.org/10.1007/s11075-018-0548-5
21. Asharabi, R.M., Prestin, J.: A modification of Hermite sampling with a Gaussian multiplier. Numer. Funct. Anal. Optim. 36, 419–437 (2015)
22. Asharabi, R.M., Tharwat, M.M.: Approximating eigenvalues of Dirac system with discontinuities at several points using Hermite-Gauss method. Numer. Algor. 76, 655–673 (2017)
23. Asharabi, R.M., Tharwat, M.M.: The use of the generalized sinc-Gaussian sampling for numerically computing eigenvalues of periodic Dirac system. Electron. Trans. Numer. Anal. 48, 373–386 (2018)
24. Bhrawy, A.H., Tharwat, M.M., Al-Fhaid, A.: Numerical algorithms for computing eigenvalues of discontinuous Dirac system using sinc-Gaussian method. Abstr. Appl. Anal. 2012, Article ID 925134 (2012). https://doi.org/10.1155/2012/925134
25. Boas, R.P.: Entire Functions. Academic Press, New York (1954)
26. Boumenir, A.: Computing eigenvalues of a periodic Sturm-Liouville problem by the Shannon-Whittaker sampling theorem. Math. Comp. 68, 1057–1066 (1999)
27. Boumenir, A.: Computing eigenvalues of Lommel-type equations by the sampling method. J. Comput. Anal. Appl. 2, 323–332 (2000)
28. Boumenir, A.: Computing eigenvalues of the string by sampling. Appl. Math. Lett. 13, 29–36 (2000)


29. Boumenir, A.: Higher approximation of eigenvalues by sampling. BIT Numer. Math. 40, 215–225 (2000)
30. Boumenir, A.: The sampling method for Sturm-Liouville problems with the eigenvalue parameter in the boundary condition. Numer. Funct. Anal. Optim. 21, 67–75 (2000)
31. Boumenir, A.: Sampling and eigenvalues of non-self-adjoint Sturm-Liouville problems. SIAM J. Sci. Comput. 23, 219–229 (2001)
32. Boumenir, A., Chanane, B.: Eigenvalues of Sturm-Liouville systems using sampling theory. Appl. Anal. 62, 323–334 (1996)
33. Boumenir, A., Chanane, B.: The computation of negative eigenvalues of singular Sturm-Liouville problems. IMA J. Numer. Anal. 21, 489–501 (2001)
34. Boumenir, A., Tuan, V.K.: Sampling eigenvalues in Hardy spaces. SIAM J. Numer. Anal. 45, 473–483 (2007)
35. Butzer, P.L., Splettstösser, W., Stens, R.L.: The sampling theorem and linear prediction in signal analysis. Jahresber. Deutsch. Math.-Verein. 90, 1–70 (1988)
36. Butzer, P.L., Schmeisser, G., Stens, R.L.: An introduction to sampling analysis. In: Marvasti, F. (ed.) Nonuniform Sampling: Theory and Practice, pp. 17–121. Kluwer, New York (2001)
37. Chadan, K.: The interpolation of the wave function and the Jost functions in the energy plane. Il Nuovo Cimento 39, 697–703 (1965)
38. Chanane, B.: Computing eigenvalues of regular Sturm-Liouville problems. Appl. Math. Lett. 12, 119–125 (1999)
39. Chanane, B.: Computation of eigenvalues of Sturm-Liouville problems with parameter dependent boundary conditions using regularized sampling method. Math. Comput. 74, 1793–1801 (2005)
40. Chanane, B.: Computing the spectrum of non-self-adjoint Sturm-Liouville problems with parameter dependent boundary conditions. J. Comput. Appl. Math. 206, 229–237 (2007)
41. Chanane, B.: Computing the eigenvalues of singular Sturm-Liouville problems using the regularized sampling method. Appl. Math. Comput. 184, 972–978 (2007)
42. Chanane, B.: Eigenvalues of Sturm-Liouville problems with discontinuity conditions inside a finite interval. Appl. Math. Comput. 188, 1725–1732 (2007)
43. Chanane, B.: Sturm-Liouville problems with impulse effects. Appl. Math. Comput. 190, 610–626 (2007)
44. Eastham, M.S.P.: Theory of Ordinary Differential Equations. Van Nostrand Reinhold, London (1970)
45. Eastham, M.S.P.: The Spectral Theory of Periodic Differential Equations. Scottish Academic Press, London (1973)
46. Eggert, N., Jarratt, M., Lund, J.: Sinc function computation of the eigenvalues of Sturm-Liouville problems. J. Comput. Phys. 69, 209–229 (1987)
47. Fulton, C.T.: Two-point boundary value problems with eigenvalue parameter contained in the boundary conditions. Proc. Roy. Soc. Edin. A 77, 293–308 (1977)
48. Grozev, G.R., Rahman, Q.I.: Reconstruction of entire functions from irregularly spaced sample points. Canad. J. Math. 48, 777–793 (1996)
49. Higgins, J.R.: Sampling Theory in Fourier and Signal Analysis: Foundations. Oxford University Press, Oxford (1996)
50. Higgins, J.R., Schmeisser, G., Voss, J.J.: The sampling theorem and several equivalent results in analysis. J. Comput. Anal. Appl. 2, 333–371 (2000)
51. Hinsen, G.: Irregular sampling of bandlimited Lp-functions. J. Approx. Theory 72, 346–364 (1993)
52. Jagerman, D.: Bounds for truncation error of the sampling expansion. SIAM J. Appl. Math. 14, 714–723 (1966)
53. Jagerman, D., Fogel, L.: Some general aspects of the sampling theorem. IRE Trans. Inform. Theory 2, 139–146 (1956)
54. Jarratt, M., Lund, J., Bowers, K.L.: Galerkin schemes and the sinc-Galerkin method for singular Sturm-Liouville problems. J. Comput. Phys. 89, 41–62 (1990)
55. Kemp, R.R.D.: Operators on L2 ⊕ Cr. Can. J. Math. 39, 33–53 (1987)


56. Kemp, R.R.D., Lee, S.J.: Finite dimensional perturbations of differential expressions. Can. J. Math. 28, 1082–1104 (1976)
57. Kerimov, N.B.: A boundary value problem for the Dirac system with a spectral parameter in the boundary conditions. Differ. Eq. 38, 164–174 (2002)
58. Levitan, B.M., Sargsjan, I.S.: Introduction to Spectral Theory: Selfadjoint Ordinary Differential Operators. Translations of Mathematical Monographs, vol. 39. American Mathematical Society, Providence (1975)
59. Levitan, B.M., Sargsjan, I.S.: Sturm-Liouville and Dirac Operators. Kluwer Academic, Dordrecht (1991)
60. Linden, D.A., Abramson, N.M.: A generalization of the sampling theorem. Inform. Contr. 3, 26–31 (1960). (See also vol. 4, pp. 95–96, 1961, for a correction of eq. (1))
61. Lund, J., Bowers, K.: Sinc Methods for Quadrature and Differential Equations. SIAM, Philadelphia (1992)
62. Lund, J., Riley, B.V.: A sinc-collocation method for the computation of the eigenvalues of the radial Schrödinger equation. IMA J. Numer. Anal. 4, 83–98 (1984)
63. Naimark, M.A.: Linear Differential Operators. George Harrap, London (1967)
64. Qian, L.: On the regularized Whittaker-Kotel'nikov-Shannon sampling formula. Proc. Amer. Math. Soc. 131, 1169–1176 (2002)
65. Qian, L., Creamer, D.B.: A modification of the sampling series with a Gaussian multiplier. Sampl. Theory Signal Image Process. 5, 1–20 (2006)
66. Qian, L., Creamer, D.B.: Localized sampling in the presence of noise. Appl. Math. Lett. 19, 351–355 (2006)
67. Schmeisser, G., Stenger, F.: Sinc approximation with a Gaussian multiplier. Sampl. Theory Signal Image Process. 6, 199–221 (2007)
68. Shin, C.E.: Generalized Hermite interpolation and sampling theorem involving derivatives. Commun. Korean Math. Soc. 17, 731–740 (2002)
69. Stenger, F.: Numerical methods based on Whittaker cardinal, or sinc functions. SIAM Rev. 23, 165–224 (1981)
70. Stenger, F.: Numerical Methods Based on Sinc and Analytic Functions. Springer, New York (1993)
71. Tharwat, M.M.: Computing eigenvalues and Hermite interpolation for Dirac systems with eigenparameter in boundary conditions. Bound. Value Probl. 2013, 36 (2013). https://doi.org/10.1186/1687-2770-2013-36
72. Tharwat, M.M.: Sinc approximation of eigenvalues of Sturm-Liouville problems with a Gaussian multiplier. Calcolo 51, 465–484 (2014)
73. Tharwat, M.M.: Approximation of eigenvalues of Dirac systems with eigenparameter in all boundary conditions by sinc-Gaussian method. Appl. Math. Comput. 262, 113–127 (2015)
74. Tharwat, M.M., Bhrawy, A.H.: Computation of eigenvalues of discontinuous Dirac system using Hermite interpolation technique. Adv. Differ. Equ. 2012, 59 (2012). https://doi.org/10.1186/1687-1847-2012-59
75. Tharwat, M.M., Bhrawy, A.H., Yildirim, A.: Numerical computation of eigenvalues of discontinuous Dirac system using sinc method with error analysis. Int. J. Comput. Math. 89, 2061–2080 (2012)
76. Tharwat, M.M., Bhrawy, A.H., Alofi, A.S.: Computing eigenvalues of discontinuous Sturm-Liouville problems with eigenparameter in all boundary conditions using Hermite approximation. Abstr. Appl. Anal. 2013, Article ID 498457 (2013). http://dx.doi.org/10.1155/2013/498457
77. Tharwat, M.M., Bhrawy, A.H., Alofi, A.S.: Approximation of eigenvalues of discontinuous Sturm-Liouville problems with eigenparameter in all boundary conditions. Bound. Value Probl. 2013, 132 (2013). https://doi.org/10.1186/1687-2770-2013-132


78. Tharwat, M.M., Bhrawy, A.H., Yildirim, A.: Numerical computation of eigenvalues of discontinuous Sturm-Liouville problems with parameter dependent boundary conditions using sinc method. Numer. Algor. 63, 27–48 (2013)
79. Voss, J.: Irregular sampling: error analysis, applications and extensions. Mitt. Math. Sem. Giessen 238, 1–86 (1999)
80. Whittaker, E.: On the functions which are represented by the expansion of the interpolation theory. Proc. Roy. Soc. Edin. Sec. A 35, 181–194 (1915)
81. Zayed, A.I.: Advances in Shannon's Sampling Theory. CRC Press, Boca Raton (1993)

Chapter 11

Completely Monotonic Fredholm Determinants

Mourad E. H. Ismail and Ruiming Zhang

This work is dedicated to Professor Frank Stenger for his 80th birthday.

Abstract A function f(x) is called completely monotonic if (−1)^m f^{(m)}(x) ≥ 0 for m = 0, 1, …. In random matrix theory, when the associated orthogonal polynomials have Freud weights, it is known that the expectation of having m eigenvalues of a random Hermitian matrix in an interval is a multiple of (−1)^m times the m-th derivative of a Fredholm determinant at λ = 1. In this work we extend these results in two directions: (1) from λ = 1 to λ ∈ (−∞, 1] for general orthogonal weights; (2) from matrices to trace class operators. We also provide many special function examples that are Fredholm determinants of trace class operators in disguise.

Keywords Orthogonal polynomials · Freud weights · Random matrices · Integral equations

11.1 Introduction

Let {p_n(x)}_{n=0}^{∞} be a sequence of orthonormal polynomials associated with a weight function w(x) ≥ 0,

∫_A^B p_m(x) p_n(x) w(x) dx = δ_{m,n},  with  ∫_A^B w(x) dx = 1.   (11.1)

M. E. H. Ismail
Department of Mathematics, University of Central Florida, Orlando, FL, USA
College of Science, Northwest A&F University, Yangling, Shaanxi, P. R. China
R. Zhang
College of Science, Northwest A&F University, Yangling, Shaanxi, P. R. China
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021
G. Baumann (ed.), New Sinc Methods of Numerical Analysis, Trends in Mathematics, https://doi.org/10.1007/978-3-030-49716-3_11


Define the Christoffel-Darboux kernel by

K_n(x, y) = √(w(x) w(y)) Σ_{k=0}^{n−1} p_k(x) p_k(y).   (11.2)

Following the literature on integral equations [19], we consider the integral equation

φ(x) − λ ∫_a^b K_n(x, y) φ(y) dy = f(x),   (11.3)

where (a, b) ⊂ (A, B) and ∫_A^a w(x) dx + ∫_b^B w(x) dx > 0. In other words, the set (A, B)\(a, b) has positive μ-measure, where μ(E) = ∫_E w(x) dx. Let

J_n = (a_{j,k})_{j,k=0}^{n−1},   a_{j,k} = ∫_a^b p_j(x) p_k(x) w(x) dx.   (11.4)

In this finite case, solving the integral equation (11.3) can be reduced to solving a system of linear equations, and the determinant det(I − λJ_n) of the coefficient matrix is called the Fredholm determinant associated with the integral equation (11.3). Fredholm determinants associated with more general operators can also be defined. They play an essential role in the theory of integral equations [17, 19] and in the theory of random matrices [5, 16]. In random matrix theory, when the weight function is a Freud weight w(x) = e^{−Q(x)} such that Q is a polynomial, (A, B) = R, and b = −a = θ > 0, it is known that the quantities

(i) det(I − J_n),   (ii) ((−1)^m/m!) (d^m/dλ^m) det(I − λJ_n) |_{λ=1},   (11.5)

belong to [0, 1]. The reason is that the quantity in (i) is the expectation that an n × n Hermitian random matrix has no eigenvalues in (−θ, θ), while the quantity in (ii) is the expectation that an n × n Hermitian random matrix has m eigenvalues in (−θ, θ). One purpose of this paper is to generalize these results from random matrix theory in three ways. First, we extend the inequalities from λ = 1 to λ ≤ 1 for general orthogonal weights; then we generalize the matrices J_n to certain trace class operators; and finally we generalize to entire special functions that generalize the Fredholm determinants. Recall that a function f(x) is called completely monotonic on an interval (a, b) if (−1)^m f^{(m)}(x) ≥ 0 for all x ∈ (a, b) and m = 0, 1, …, [22]. Our results in Sect. 11.2 establish the complete monotonicity of certain Fredholm determinants. Applications of completely monotonic functions to analysis are in [22], and many applications to probability theory are in [8]. Section 11.3 treats the infinite dimensional case of Fredholm determinants of certain trace class operators on a Hilbert space. In the final section, Sect. 11.4, we present examples of completely monotonic special functions that generalize the Fredholm determinants.


11.2 Finite Dimensional Case

Theorem 11.1 Let μ be a probability measure that has moments ∫_R x^j dμ(x) for j = 0, 1, …, 2n, and let {p_k(x)}_{k=0}^{n} be the first orthonormal polynomials. Let B be the support of μ and E ⊂ B. Then for each positive integer s ≤ n+1, the symmetric s × s matrix with entries α_{j,k} = δ_{j,k} − λ ∫_E p_j(x) p_k(x) dμ(x), 0 ≤ j, k ≤ s−1, has only nonnegative eigenvalues for λ ∈ (−∞, 1], and all its eigenvalues lie in [0, 1] for λ ∈ [0, 1]. Moreover, if λ ≤ 1 and μ(B\E) > 0, then the eigenvalues are strictly positive; if 0 < λ ≤ 1 and μ(E) > 0, then the eigenvalues are strictly less than 1.

Proof It is clear that

α_{j,k} = δ_{j,k} − λ ∫_E p_j(x) p_k(x) dμ(x) = ∫_B (1 − λχ_E(x)) p_j(x) p_k(x) dμ(x).   (11.6)

Hence the symmetric quadratic form Σ_{0≤j,k≤s−1} α_{j,k} x_j x_k is positive semi-definite for λ ≤ 1 because it equals ∫_B |Σ_{k=0}^{s−1} x_k p_k(x)|² (1 − λχ_E(x)) dμ(x), and

(1 − λ) ∫_B |Σ_{k=0}^{s−1} x_k p_k(x)|² dμ(x) ≤ ∫_B |Σ_{k=0}^{s−1} x_k p_k(x)|² (1 − λχ_E(x)) dμ(x) ≤ max{1, 1 − λ} ∫_B |Σ_{k=0}^{s−1} x_k p_k(x)|² dμ(x).

The last inequality also shows that for λ ≤ 1 all the eigenvalues are nonnegative, and they are in the interval [0, 1] if λ ∈ [0, 1]. In the case λ ≤ 1 and μ(B\E) > 0 we have

∫_B |Σ_{k=0}^{s−1} x_k p_k(x)|² (1 − λχ_E(x)) dμ(x) ≥ ∫_{B\E} |Σ_{k=0}^{s−1} x_k p_k(x)|² dμ(x) > 0   (11.7)

unless x_k = 0 for all k = 0, …, s−1. Hence the symmetric quadratic form is positive definite and its eigenvalues are positive. On the other hand, for μ(E) > 0 and 0 < λ ≤ 1 the symmetric quadratic form satisfies

Σ_{0≤j,k≤s−1} α_{j,k} x_j x_k = ∫_B |Σ_{k=0}^{s−1} x_k p_k(x)|² (1 − λχ_E(x)) dμ(x) = Σ_{k=0}^{s−1} |x_k|² − λ ∫_E |Σ_{k=0}^{s−1} x_k p_k(x)|² dμ(x) < Σ_{k=0}^{s−1} |x_k|²

unless x_k = 0 for all k = 0, …, s−1, which proves that all the eigenvalues are strictly less than 1.

Corollary 11.1 Let n, s, λ, B, E and α_{j,k} be defined as in Theorem 11.1. If 0 < λ ≤ 1, μ(E) > 0 and μ(B\E) > 0, then all the eigenvalues of the matrix (α_{j,k})_{j,k=0}^{s−1} are in (0, 1). In particular, we have

0 < det(α_{j,k})_{j,k=0}^{s−1} < 1,   s = 1, 2, …, n + 1.   (11.8)
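As a concrete (illustrative) check of Corollary 11.1, one can take the probability measure dμ = dx/2 on B = (−1, 1) with E = (−1/2, 1/2), for which the orthonormal polynomials are the normalized Legendre polynomials p_j = √(2j+1) P_j. The sketch below (our own example, not from the paper) verifies numerically that the eigenvalues of the matrix (∫_E p_j p_k dμ) lie in (0, 1), so the determinant bound (11.8) holds:

```python
import numpy as np
from numpy.polynomial.legendre import Legendre, leggauss

# Illustration of Corollary 11.1: dmu = dx/2 on B = (-1, 1), E = (-1/2, 1/2),
# orthonormal polynomials p_j = sqrt(2j+1) * P_j (Legendre).
n = 5
nodes, wts = leggauss(50)                      # exact for our low-degree integrands
a, b = -0.5, 0.5
xe = 0.5 * (b - a) * nodes + 0.5 * (a + b)     # Gauss nodes mapped to E
we = 0.5 * (b - a) * wts * 0.5                 # quadrature weights for dmu = dx/2
P = np.stack([np.sqrt(2 * j + 1) * Legendre.basis(j)(xe) for j in range(n)])
Jn = (P * we) @ P.T                            # entries int_E p_j p_k dmu

mu = np.linalg.eigvalsh(Jn)
assert np.all(mu > 0) and np.all(mu < 1)       # eigenvalues in (0, 1)
det_I_minus_J = np.prod(1 - mu)                # det(I - Jn), the lambda = 1 case
print(mu, det_I_minus_J)
```

With λ ∈ (0, 1] the eigenvalues of (α_{j,k}) are 1 − λμ_k, so the computed μ_k ∈ (0, 1) reproduce exactly the conclusion of the corollary.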

Theorem 11.2 Under the same assumptions as in Corollary 11.1, let J_n = (β_{j,k})_{j,k=0}^{n−1}, where

β_{j,k} = ∫_E p_j(x) p_k(x) dμ(x).   (11.9)

Then the polynomial f(λ) := det(I − λJ_n) is completely monotonic on (−∞, 1]. Moreover, for m = 0, 1, …, n and λ ∈ (0, 1) we have

0 < ((−1)^m/m!) f^{(m)}(λ) < 1.   (11.10)

Proof Let μ_k > 0, k = 0, …, n−1, be the eigenvalues of J_n. Evidently, J_n is independent of the parameter λ, and it shares the same eigenvectors with (α_{j,k})_{j,k=0}^{n−1} defined in Theorem 11.1. Let us first restrict the parameter λ to the interval (0, 1]; then from Corollary 11.1 we have 0 < 1 − λμ_k < 1, k = 0, …, n−1, i.e. 0 < μ_k < 1/λ, k = 0, …, n−1, for all λ ∈ (0, 1]. Since the μ_k are independent of λ, we must have 0 < μ_k < 1, k = 0, …, n−1, which can also be proved directly as in the proof of Theorem 11.1. Let

f(λ) = det(I − λJ_n) = Π_{j=0}^{n−1} (1 − λμ_j).   (11.11)

It is clear that f(λ) > 0 for λ ≤ 1. We note that

((−1)^m f^{(m)}(λ))/(m! f(λ)) = Σ μ_{k_1} μ_{k_2} ⋯ μ_{k_m} / ((1 − μ_{k_1}λ)(1 − μ_{k_2}λ) ⋯ (1 − μ_{k_m}λ)),   (11.12)

where the sum is over the m-tuples (μ_{k_1}, μ_{k_2}, …, μ_{k_m}) such that k_1 < k_2 < ⋯ < k_m. Using the above identity we conclude that (−1)^m f^{(m)}(λ) ≥ 0 for λ ≤ 1.
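The m = 1 instance of the identity (11.12), −f′(λ)/f(λ) = Σ_k μ_k/(1 − μ_k λ), is the logarithmic derivative of the product (11.11) and is easy to confirm numerically (with illustrative eigenvalues of our own choosing):

```python
import numpy as np

# Check the m = 1 case of (11.12): -f'(lam)/f(lam) = sum_k mu_k/(1 - mu_k*lam)
# for f(lam) = prod_j (1 - lam*mu_j), with illustrative mu_j in (0, 1).
mu = np.array([0.8, 0.4, 0.1])
f = lambda lam: np.prod(1 - lam * mu)
lam, eps = 0.5, 1e-6
fprime = (f(lam + eps) - f(lam - eps)) / (2 * eps)   # central difference
lhs = -fprime / f(lam)
rhs = np.sum(mu / (1 - lam * mu))
print(lhs, rhs)                                      # the two sides agree
```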


To prove the second part we recall the definition of the elementary symmetric functions

e_m(μ_0, …, μ_{n−1}) = Σ_{0≤j_1<⋯<j_m≤n−1} μ_{j_1} ⋯ μ_{j_m}.   (11.13)

For ℜ(λ) > −1/λ_1, the formulas (11.57) to (11.60) all converge absolutely and uniformly. Then for ℜ(λ) > −1/λ_1 we get

−(d/dλ) log(det_2(I + λK)) = λ Σ_{j=1}^{∞} λ_j²/(1 + λλ_j),   (11.61)

−(d²/dλ²) log(det_2(I + λK)) = Σ_{j=1}^{∞} λ_j²/(1 + λλ_j)²,   (11.62)

and for m ∈ N,

((−1)^{m+1}/(m+1)!) (d^{m+2}/dλ^{m+2}) log(det_2(I + λK)) = Σ_{j=1}^{∞} λ_j^{m+2}/(1 + λλ_j)^{m+2}.   (11.63)
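For a finite-rank surrogate with eigenvalues λ_j, the regularized determinant is det_2(I + λK) = Π_j (1 + λλ_j) e^{−λλ_j}, and (11.61) can be checked numerically (illustrative eigenvalues only):

```python
import numpy as np

# Finite-rank check of (11.61): with det2(I + lam*K) = prod_j (1+lam*l_j)*exp(-lam*l_j),
# the identity -d/dlam log det2 = lam * sum_j l_j^2/(1 + lam*l_j) must hold.
lj = np.array([0.9, 0.5, 0.25, 0.1])                  # illustrative eigenvalues
log_det2 = lambda lam: np.sum(np.log1p(lam * lj) - lam * lj)
lam, eps = 0.7, 1e-5
lhs = -(log_det2(lam + eps) - log_det2(lam - eps)) / (2 * eps)
rhs = lam * np.sum(lj**2 / (1 + lam * lj))
print(lhs, rhs)                                       # the two sides agree
```

Differentiating once more reproduces (11.62), whose right-hand side is manifestly positive; this is the observation behind Theorem 11.4 below.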

The formulas (11.62) and (11.63) also prove the following:

Theorem 11.4 Let K be an integral operator as in (11.42), (11.43) and (11.44). Additionally, let det_2(I + λK) be defined as in (11.55). Then for λ ∈ [0, ∞) the function

−(d²/dλ²) log(det_2(I + λK))   (11.64)

is completely monotonic. Furthermore, for λ ≥ 0,

(11.73)


The expansion (11.73) shows that the corresponding function is completely monotonic for x ∈ (−∞, j_{ν,1}^{2m}). For ν > −1 and q ∈ (0, 1), Jackson's q-Bessel function of the second kind J_ν^{(2)}(z; q) satisfies [23]

((q; q)_∞/(q^{ν+1}; q)_∞) · 2^ν J_ν^{(2)}(z^{1/2}; q)/z^{ν/2} = Π_{n=1}^{∞} (1 − z/j_{ν,n}²(q)),   (11.77)

where j_{ν,n}(q) is the n-th positive zero of J_ν^{(2)}(z; q). Then (11.77) implies that z^{−ν/2} J_ν^{(2)}(z^{1/2}; q) is completely monotonic for z ∈ (−∞, j_{ν,1}²(q)). The expansion

(4^ν (q; q)_∞²/(q^{ν+1}; q)_∞²) · I_ν^{(2)}(z^{1/4}; q) J_ν^{(2)}(z^{1/4}; q)/z^{ν/2} = Π_{n=1}^{∞} (1 − z/j_{ν,n}⁴(q)),   ν > −1, q ∈ (0, 1),   (11.78)

implies that z^{−ν/2} I_ν^{(2)}(z^{1/4}; q) J_ν^{(2)}(z^{1/4}; q) is completely monotonic for z ∈ (−∞, j_{ν,1}⁴(q)). More generally, for any positive integer m ≥ 2 and ρ a primitive m-th root of unity, we have

((q; q)_∞/(q^{ν+1}; q)_∞)^m Π_{ℓ=1}^{m} 2^ν J_ν^{(2)}(ρ^ℓ z^{1/m}; q)/(ρ^ℓ z^{1/m})^ν = Π_{n=1}^{∞} (1 − z²/j_{ν,n}^{2m}(q)),   ν > −1, q ∈ (0, 1).   (11.79)

5. Applying this to the q-Bessel function J_ν^{(2)}(x; q) we get

((x_1 + x_2)^{2ν}/(4x_1x_2)^ν) [J_ν^{(2)}((x_1 + x_2)/2; q)]² ≥ J_ν^{(2)}(x_1; q) J_ν^{(2)}(x_2; q),   (11.100)

where q ∈ (0, 1), ν > −1, x_1, x_2 ∈ R.
6. Applying this to A_q(x) we get

A_q²((x_1 + x_2)/2) A_q²(−(x_1 + x_2)/2) ≥ A_q(x_1) A_q(x_2) A_q(−x_1) A_q(−x_2),   (11.101)

where q ∈ (0, 1), x_1, x_2 ∈ R.
7. For any a > 0, it is known that the modified Bessel function K_{ix}(a), in the variable x, is an entire function of our type. Then

K²_{i(x_1+x_2)/2}(a) ≥ K_{ix_1}(a) K_{ix_2}(a),   x_1, x_2 ∈ R.   (11.102)

Evidently, if f(z) is an even entire function of genus at most 1, then f(√z) satisfies condition (11.88), and we have the following examples associated with this condition.

1. For the sinc function we have

(sin√((x_1+x_2)/2) / √((x_1+x_2)/2))² ≥ (sin√x_1/√x_1)(sin√x_2/√x_2),   x_1, x_2 ∈ R.   (11.103)


This is particularly true for x_1 = −y_1², x_2 = −y_2², where y_1, y_2 > 0. Then

sinh²(√((y_1² + y_2²)/2)) ≥ ((y_1² + y_2²)/(2y_1y_2)) sinh(y_1) sinh(y_2).   (11.104)
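A quick random spot-check of the reconstructed inequality (11.104) over a range of positive arguments:

```python
import numpy as np

# Spot-check of (11.104) over random y1, y2 > 0:
# sinh^2(sqrt((y1^2+y2^2)/2)) >= ((y1^2+y2^2)/(2*y1*y2)) * sinh(y1) * sinh(y2).
rng = np.random.default_rng(0)
for y1, y2 in rng.uniform(0.01, 5.0, size=(1000, 2)):
    s = np.sqrt((y1**2 + y2**2) / 2)
    lhs = np.sinh(s)**2
    rhs = (y1**2 + y2**2) / (2 * y1 * y2) * np.sinh(y1) * np.sinh(y2)
    assert lhs >= rhs * (1 - 1e-12)   # small tolerance for rounding
print("inequality (11.104) holds on all samples")
```

Equality occurs at y_1 = y_2, consistent with the midpoint form of the underlying log-concavity statement.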

2. For the Bessel function J_ν(z) with ν > −1 we have

(J_ν(√((x_1+x_2)/2)) / (√((x_1+x_2)/2))^ν)² ≥ (J_ν(√x_1)/(√x_1)^ν)(J_ν(√x_2)/(√x_2)^ν),   x_1, x_2 ∈ R.   (11.105)

Let x_1 = −y_1², x_2 = −y_2², where y_1, y_2 > 0; then

I_ν²(√((y_1² + y_2²)/2)) ≥ ((y_1² + y_2²)/(2y_1y_2))^ν I_ν(y_1) I_ν(y_2).   (11.106)

3. For the Airy function we have

{A(√((x_1+x_2)/2)) A(−√((x_1+x_2)/2))}² ≥ A(√x_1) A(−√x_1) A(√x_2) A(−√x_2),   x_1, x_2 ∈ R.   (11.107)

Then for y_1, y_2 > 0 we have

|A(i√((y_1² + y_2²)/2))|² ≥ |A(iy_1) A(iy_2)|.   (11.108)

(2)

4. For q-Bessel function Jν (z; q) with q ∈ (0, 1), ν ∈ (−1, ∞) we have  ⎫2 ⎧  x1 +x2 ⎪ ⎪

√  √  ⎪ ⎪ J ; q ⎨ ν ⎬ 2 x1 ; q Jν x2 ; q Jν  ν

√ ν , ≥ √ ν ⎪ ⎪ x1 x2 ⎪ ⎪ x1 +x2 ⎩ ⎭

z1 , z2 ∈ R.

(11.109)

2

Then for y1 , y2 > 0 we have ⎧ ⎨

⎞ ⎫2  ν ⎬ 2 + y2 y12 + y22 y (2) ⎝ 1 2 I ≥ Iν(2) (y1 ; q) Iν(2) (y2 ; q) . ; q⎠ ⎩ ν ⎭ 2 2y1 y2 ⎛/

(11.110)


In particular, for m, n ∈ N_0 we have

{I_ν^{(2)}(√(2(q^{−n} + q^{−m})); q)}² ≥ ((q^m + q^n)/2)^ν · S_n(−q^{−ν−n}; q) S_m(−q^{−ν−m}; q) / (q^{n+1}, q^{m+1}; q)_∞,   (11.111)

where S_n(x; q) are the Stieltjes-Wigert polynomials. We have applied the fact that, for all nonnegative integers n [15],

I_ν^{(2)}(2q^{−n/2}; q) = S_n(−q^{−ν−n}; q)/(q^{−νn/2} (q^{n+1}; q)_∞) = S_n(−q^{ν−n}; q)/(q^{νn/2} (q^{n+1}; q)_∞).   (11.112)

5. For the Ramanujan entire function A_q(z) we have

A_q²((x_1 + x_2)/2) ≥ A_q(x_1) A_q(x_2),   x_1, x_2 ∈ R.   (11.113)

In particular, we have

A_q²(−(q^m + q^n)/2) ≥ A_q(−q^m) A_q(−q^n),   (11.114)

or

A_q²(−(q^m + q^n)/2) ≥ ((−1)^{m+n}/q^{m(m−1)/2 + n(n−1)/2}) (a_m(q)/(q², q³; q⁵)_∞ − b_m(q)/(q, q⁴; q⁵)_∞)(a_n(q)/(q², q³; q⁵)_∞ − b_n(q)/(q, q⁴; q⁵)_∞),   (11.115)

where we have applied the Garrett-Ismail-Stanton generalization of the Rogers-Ramanujan identities [12, 14].
6. If b > 0 and x_1, x_2 ∈ R, then

1/(Γ(b − √((x_1+x_2)/2)) Γ(b + √((x_1+x_2)/2))) ≥ 1/√(Γ(b − √x_1) Γ(b + √x_1) Γ(b − √x_2) Γ(b + √x_2)).   (11.116)

For b, y_1, y_2 > 0,

|Γ(b + i√((y_1² + y_2²)/2))|² ≤ |Γ(b + iy_1) Γ(b + iy_2)|.   (11.117)


7. For any a > 0, the modified Bessel function K_{i√z}(a) is an entire function of genus 0 in the variable z. Then

K²_{i√((x_1+x_2)/2)}(a) ≥ K_{i√x_1}(a) K_{i√x_2}(a),   x_1, x_2 ∈ R,   (11.118)

and

K²_{√((y_1²+y_2²)/2)}(a) ≥ K_{y_1}(a) K_{y_2}(a),   y_1, y_2 > 0.   (11.119)

Using the Lemma stated in [4, 9, 10]. Given an even entire function G(z) of genus 0 or 1, then the functions G(z − ic) + G(z + ic), c ∈ R are of the same type entire functions. We can apply this transformation as many times as we want to get very complicated inequalities. 1. Apply to sinc function. For x1 , x2 , c ∈ R we have ,

 42 

9 9 2 − ic sin x1 +x sin (x2 − ic) sin (x1 − ic) 2   . ≥   x1 +x2 (x1 − ic) (x2 − ic) − ic 2 (11.120)

2. For x_1, x_2, c ∈ ℝ and ν > −1 we get

\[
\left\{\Re\,\frac{J_\nu\!\bigl(\frac{x_1+x_2}{2}-ic\bigr)}{\bigl(\frac{x_1+x_2}{2}-ic\bigr)^{\nu}}\right\}^2 \;\ge\; \Re\!\left\{\frac{J_\nu(x_1-ic)}{(x_1-ic)^{\nu}}\right\}\, \Re\!\left\{\frac{J_\nu(x_2-ic)}{(x_2-ic)^{\nu}}\right\}. \qquad (11.121)
\]

3. For x_1, x_2, c ∈ ℝ, ν > −1 and 0 < q < 1 we get

\[
\left\{\Re\,\frac{J_\nu^{(2)}\!\bigl(\frac{x_1+x_2}{2}-ic;q\bigr)}{\bigl(\frac{x_1+x_2}{2}-ic\bigr)^{\nu}}\right\}^2 \;\ge\; \Re\!\left\{\frac{J_\nu^{(2)}(x_1-ic;q)}{(x_1-ic)^{\nu}}\right\}\, \Re\!\left\{\frac{J_\nu^{(2)}(x_2-ic;q)}{(x_2-ic)^{\nu}}\right\}. \qquad (11.122)
\]

4. For x_1, x_2, c ∈ ℝ we get

\[
\left\{A\!\Bigl(\frac{x_1+x_2}{2}-ic\Bigr)\,A\!\Bigl(-\frac{x_1+x_2}{2}+ic\Bigr)\right\}^2 \;\ge\; \bigl\{A(x_1-ic)\,A(-x_1+ic)\bigr\}\,\bigl\{A(x_2-ic)\,A(-x_2+ic)\bigr\}. \qquad (11.123)
\]


5. For x_1, x_2, c ∈ ℝ and 0 < q < 1 we get

\[
\left\{A_q\!\Bigl(\frac{x_1+x_2}{2}-ic\Bigr)\,A_q\!\Bigl(-\frac{x_1+x_2}{2}+ic\Bigr)\right\}^2 \;\ge\; \bigl\{A_q(x_1-ic)\,A_q(-x_1+ic)\bigr\}\,\bigl\{A_q(x_2-ic)\,A_q(-x_2+ic)\bigr\}. \qquad (11.124)
\]

6. For x_1, x_2, c ∈ ℝ and b > 0 we get

\[
\left\{\Gamma\!\Bigl(b-\frac{x_1+x_2}{2}+ic\Bigr)\,\Gamma\!\Bigl(b+\frac{x_1+x_2}{2}-ic\Bigr)\right\}^{-2} \;\ge\; \frac{1}{\bigl(\Gamma(b-x_1+ic)\,\Gamma(b+x_1-ic)\bigr)\bigl(\Gamma(b-x_2+ic)\,\Gamma(b+x_2-ic)\bigr)}. \qquad (11.125)
\]



7. For x_1, x_2, c ∈ ℝ and a > 0 we get

\[
\left\{K_{c-\frac{i(x_1+x_2)}{2}}(a) + K_{c+\frac{i(x_1+x_2)}{2}}(a)\right\}^2 \;\ge\; \bigl\{K_{c-ix_1}(a)+K_{c+ix_1}(a)\bigr\}\,\bigl\{K_{c-ix_2}(a)+K_{c+ix_2}(a)\bigr\}. \qquad (11.126)
\]

Recall that

\[
K_{iz}(a) = \frac{1}{2}\int_{-\infty}^{\infty} e^{-a\cosh t}\, e^{-izt}\, dt; \qquad (11.127)
\]

then

\[
K_{i(z-i)}(a) + K_{i(z+i)}(a) = \int_{-\infty}^{\infty} e^{-a\cosh t}\cosh t\; e^{-izt}\, dt = -2\,\partial_a K_{iz}(a), \qquad (11.128)
\]

thus

\[
\bigl\{\partial_a K_{\frac{i(x_1+x_2)}{2}}(a)\bigr\}^2 \;\ge\; \partial_a K_{ix_1}(a)\,\partial_a K_{ix_2}(a). \qquad (11.129)
\]

Acknowledgments R. Zhang (corresponding author): research supported by National Science Foundation of China grant Nos. 11771355 and 11371294.


Bibliography

1. Al-Salam, W., Ismail, M.E.H.: Orthogonal polynomials associated with the Rogers–Ramanujan continued fraction. Pac. J. Math. 105, 269–283 (1983)
2. Andrews, G.E.: Ramanujan's lost notebook VIII: the entire Rogers–Ramanujan function. Adv. Math. 191, 393–407 (2005)
3. Andrews, G.E.: Ramanujan's lost notebook IX: the entire Rogers–Ramanujan function. Adv. Math. 191, 408–422 (2005)
4. Boas, R.P.: Entire Functions. Academic Press, New York (1954)
5. Deift, P.: Orthogonal Polynomials and Random Matrices: A Riemann–Hilbert Approach. Courant Lecture Notes in Mathematics, vol. 3. New York University Courant Institute of Mathematical Sciences, New York (1999)
6. Dunford, N., Schwartz, J.T.: Linear Operators, vol. 2. Wiley, New York (1963)
7. Erdélyi, A., Magnus, W., Oberhettinger, F., Tricomi, F.G.: Higher Transcendental Functions, vols. 1–3. McGraw-Hill, New York (1953)
8. Feller, W.: An Introduction to Probability Theory and Its Applications, vol. 2. Academic Press, New York (1966)
9. Gasper, G.: Using sums of squares to prove that certain entire functions have only real zeros. Dedicated to the memory of Ralph P. Boas, Jr. (1912–1992). In: Bray, W.O., Milojević, P.S., Stanojević, Č.V. (eds.) Fourier Analysis: Analytic and Geometric Aspects, pp. 171–186. Marcel Dekker (1994)
10. Gasper, G.: Using integrals of squares to prove that the Pólya Ξ*(z) function, the functions K_{iz}(a), a > 0, and some other entire functions have only real zeros. Dedicated to Dan Waterman on the occasion of his 80th birthday.
11. Gradshteyn, I.S., Ryzhik, I.M.: Table of Integrals, Series, and Products, 8th edn. Academic Press (2015)
12. Ismail, M.E.H.: Classical and Quantum Orthogonal Polynomials in One Variable. Encyclopedia of Mathematics and Its Applications, paperback edn. Cambridge University Press, Cambridge (2009)
13. Ismail, M.E.H., Zhang, C.: Zeros of entire functions and a problem of Ramanujan. Adv. Math. 209, 363–380 (2007)
14. Ismail, M.E.H., Zhang, R.: Integral and series representations of q-polynomials and functions: part II. Proc. Am. Math. Soc. 145(9), 3717–3733 (2017)
15. Ismail, M.E.H., Zhang, R.: q-Bessel functions and Rogers–Ramanujan type identities. Proc. Am. Math. Soc. 146(9), 3633–3646 (2018)
16. Mehta, M.L.: Random Matrices, 3rd edn. Elsevier, San Diego (2004)
17. Mikhlin, S.G.: Integral Equations and Their Applications to Certain Problems in Mechanics, Mathematical Physics, and Technology, 2nd edn. Pergamon Press, New York (1957)
18. Simon, B.: Trace Ideals and Their Applications. Mathematical Surveys and Monographs, vol. 120. American Mathematical Society (2005)
19. Tricomi, F.G.: Integral Equations. Interscience Publishers, New York (1957). Reprinted, Dover Publications, New York (1985)
20. Watson, G.N.: A Treatise on the Theory of Bessel Functions, 2nd edn. Cambridge University Press, Cambridge (1944)
21. Whittaker, E.T., Watson, G.N.: A Course of Modern Analysis, 4th edn. Cambridge University Press, Cambridge (1927)
22. Widder, D.V.: The Laplace Transform, vol. 6. Princeton University Press, Princeton, NJ (1941)
23. Zhang, R.: Sums of zeros for certain special functions. Integral Transforms Spec. Funct. 21(5), 1–15 (2009)

Chapter 12

The Influence of Jumps on the Sinc Interpolant, and Ways to Overcome It Jean-Paul Berrut

To Frank Stenger, who has done so much for the advancement and dissemination of sinc methods, on his 80th birthday

Abstract In (Berrut, Numer. Algorithms 45, 369–374 (2007)) we discovered that a sinc interpolant on a finite interval is essentially the product of a sine value and the difference of two quadrature formulae for a Cauchy principal value integral. In (Berrut, Numer. Algorithms 56, 143–157 (2011)) we proved an Euler–Maclaurin formula for the error of such quadratures in the presence of jumps of the integrand. The present work combines the two results to obtain a formula that quantifies the impact of an arbitrary number of jumps on finite sinc interpolants. Numerical results show that the application of the formula, together with a quotienting method against the Gibbs phenomenon, allows for a spectacular damping, and often even a total elimination, of this impact.

Keywords Sinc interpolant · Jump singularities · Error formula · Finite differences · Rational sinc interpolants

12.1 Introduction

Thanks mainly to Frank Stenger, sinc methods are by now a well-known way of solving many analysis problems; see his books, [16] for the theory and [17] for applications. Among the many references, [12] is a comprehensive description of the sinc-Galerkin method, and [15] a sinc-based analogue of (part of) Chebfun, the software for working numerically with functions [8].

J.-P. Berrut () Department of Mathematics, University of Fribourg, Fribourg, Switzerland e-mail: [email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 G. Baumann (ed.), New Sinc Methods of Numerical Analysis, Trends in Mathematics, https://doi.org/10.1007/978-3-030-49716-3_12


Some years ago, we discovered [3] that the usual finite sinc interpolant may be interpreted as the product of a value of the sine function times the difference of two quadratures, the trapezoid and the midpoint rules, approximating the same Cauchy principal value integral. We have taken advantage of this observation to quantify the effect on its accuracy of truncating the interpolant from \(\sum_{n=-\infty}^{\infty}\) to \(\sum_{n=-N}^{N}\). To be specific, recall that the sinc interpolant of a function defined everywhere on ℝ is given by

\[
C(f,h)(x) := \sum_{n=-\infty}^{\infty} \operatorname{sinc}\!\Bigl(\frac{\pi}{h}(x-x_n)\Bigr) f_n, \qquad (12.1)
\]

where the x_n := nh are the sample abscissae, the f_n := f(x_n) the sample values, and the sinc function is given by sinc(x) := sin x / x. Well-known theorems guarantee that, for f sufficiently smooth and decreasing rapidly enough, C(f, h) converges very rapidly toward f; see the references above. The series in (12.1) will have to be truncated when evaluating C(f, h)(x). As in [3], we shall consider the finite interpolant

\[
C_N(f,h)(x) = {\sum_{n=-N}^{N}}{}'' \, f(x_n)\,\operatorname{sinc}\!\Bigl(\frac{\pi}{h}(x-x_n)\Bigr), \qquad x_n = nh, \quad h = X/N, \qquad (12.2)
\]

where the double prime signals that the first and last terms are halved. This h-value limits the interpolant to the fixed finite interval [−X, X], independently of the chosen N. Note that C_N may be constructed for a function merely defined on this interval. In [3], we have given a formula for the error C_N(f, h) − f as a function of h for f ∈ C^q[−X, X] for some q ∈ ℕ. The number of nodes is automatically odd in (12.2); in [5], we have treated the more involved error formula for a total number 2N of nodes on [−X, X], so that 0 is none of them. All these results are based on the observation that truncating a quadrature rule is equivalent to considering it on a circle, where the extremities of the interval become the same node, at which a jump shows up in the generic case (see Figure 3 in [2]). The Euler–Maclaurin formula for equidistant Riemann sums may then be viewed as one for the effect of that jump on the convergence of the quadrature. As further noticed in [2], this jump then is no different from any other f may have on [−X, X], an observation which leads to a generalization of the Euler–Maclaurin formula to functions with jumps. It is our intent in the present article to use these observations and formulae to extend the expression for C_N(f, h) − f given in [3] and completed with the error term in [5] to accommodate functions with jumps in the interior of [−X, X]. In Sect. 12.2, we recall a special case, which we shall use here, of the generalized


Euler–Maclaurin formula for Cauchy integrals given in [5]. Section 12.3 derives the formula for the influence of jumps on the (finite) sinc interpolant, the main result of this paper; it involves one-sided derivatives of f at the jumps. We also give a first example of improvement of the accuracy with a correction of the interpolant. In Sect. 12.4, we repeat the experiment with finite differences instead of derivatives. Section 12.5 applies a quotienting idea used in former work to still improve accuracy and eliminate the Gibbs phenomenon at the jumps. The paper concludes with several comments, in particular on the possibility of using the formula for extrapolation.
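The chapter's experiments were run in MATLAB; as a point of reference for what follows, the truncated interpolant (12.2) can be sketched in a few lines of Python (a hypothetical minimal implementation; the test function and parameters below are our own choices, not the chapter's):

```python
import numpy as np

def sinc_interpolant(f, X, N):
    """Finite sinc interpolant C_N(f, h) of (12.2) on [-X, X]:
    nodes x_n = n*h with h = X/N, the first and last samples halved
    (the double prime on the sum)."""
    h = X / N
    xn = np.arange(-N, N + 1) * h
    fn = f(xn)
    fn[0] *= 0.5
    fn[-1] *= 0.5
    def CN(x):
        x = np.atleast_1d(np.asarray(x, dtype=float))
        # np.sinc(u) = sin(pi*u)/(pi*u), hence sinc((pi/h)(x - x_n)) = np.sinc((x - x_n)/h)
        return np.sinc((x[:, None] - xn[None, :]) / h) @ fn
    return CN

# a smooth function that is tiny at +-X, so the boundary jump on the circle is negligible
f = lambda x: np.exp(-16 * x**2)
CN = sinc_interpolant(f, X=1.0, N=40)
err = abs(CN(0.3141)[0] - f(0.3141))
```

For such a function the interpolant is very accurate away from the nodes and exact (up to rounding) at them; the interesting case of the chapter is precisely when f does have jumps and this accuracy breaks down.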

12.2 The Generalized Euler–Maclaurin Formula for Cauchy Integrals

In [3] it was observed that, at the x which do not coincide with an interpolation node, the finite sinc interpolant (12.2) may be written as

\[
C_N(f,h)(x) = \frac{1}{2\pi}\sin\!\Bigl(\frac{\pi}{h}x\Bigr)(-1)^N \bigl[\,T_{2h}(I_x) - M_{2h}(I_x)\,\bigr], \qquad (12.3)
\]

where T_{2h}(I_x) and M_{2h}(I_x) denote the trapezoid, respectively midpoint, approximations to

\[
I_x := \mathrm{PV}\!\int_{-X}^{X} g(y)\,dy \qquad\text{with}\qquad g(y) := \frac{f(y)}{x-y}.
\]

As in [2], we consider the integral on the circle obtained from joining the extremities −X and X of the interval, making of these the same point (modulo 2X), a standard way of periodizing a function; see, e.g., [10]. The trapezoid rule T_h is the particular case with t = 0, and the midpoint rule M_h that with t = 1/2, of the Riemann sum

\[
R_t(h) := h\sum_{n=0}^{N-1} g\bigl(-X+(n+t)h\bigr) = h\sum_{n=1}^{N} g\bigl(-X+(n-1+t)h\bigr), \qquad 0 \le t < 1,\quad h := \frac{2X}{N}, \qquad (12.4)
\]

of the Cauchy integral I_x.

Let now f be piecewise C^{q−1}[−X, X], i.e., (q − 1)-times continuously differentiable on [−X, X] except at interior jumps, at which the limits of f and of its q − 1 derivatives exist on both sides. Denote by c_0 the point −X ≡ X on the circle and by c_j, j = 1, …, J, the other jumps, and let f be redefined at c_j as the middle of the jump,

\[
f(c_j) := \frac{f(c_j-)+f(c_j+)}{2}, \qquad j = 0, \dots, J, \qquad (12.5)
\]


where, as usual, f(x±) := lim_{ε→0} f(x ± ε), and f(c_0±) := lim_{ε→0} f(∓X ± ε). In (12.4), t_0 := t obviously is the relative distance of the jump −X ≡ X (in the generic case) to the following integration node. Similarly, we determine for every other jump c_j the interval that contains it, i.e., the index n_j, 0 ≤ n_j ≤ N − 1, such that c_j ∈ [−X + (n_j + t)h, −X + (n_j + 1 + t)h); this determines

\[
t_j := \frac{-X+(n_j+1+t)h - c_j}{h} = t - \frac{c_j}{h} \bmod 1, \qquad 0 \le t_j < 1, \qquad (12.6)
\]

the relative location of c_j with respect to the following node (t was forgotten in [2] and [5]). (Caution with n_j seems necessary when c_j ∈ [−X, −X + th], but the difficulty vanishes as n_j disappears from t_j because of the modulo.) These notations allow us to state the following special case of Theorem 4.1 in [5].

Theorem 12.1 (Generalized Euler–Maclaurin Formula for Cauchy Integrals) Let f be piecewise C^{q−1}[−X, X], q ∈ ℕ, q ≥ 2, let c_j, j = 0, …, J, denote its jumps and let f(c_j) be defined as in (12.5) and t_j as in (12.6). Suppose that f^{(q)} is integrable on the intervals between two jumps. Let R_t(h) be any Riemann sum (12.4) of I_x. Then the integration error may be written as

\[
R_t(h) - I_x = (-1)^N \pi f(x)\,\operatorname{cta}\!\Bigl(\pi\Bigl(\frac{x}{h}-t\Bigr)\Bigr) + \sum_{k=1}^{q} a_k(x)\,h^k + \frac{h^q}{q!}\int_{-X}^{X} F_2^{(q)}(y)\,\sum_{j=0}^{J} \widetilde B_q\!\Bigl(t_j - \frac{y+X}{h}\Bigr)\,dy \qquad (12.7)
\]

with

\[
\operatorname{cta} := \begin{cases}\cot, & N \text{ even},\\ \tan, & N \text{ odd},\end{cases} \qquad\qquad \theta_j := \begin{cases}0, & t_j = 0,\\ 1, & t_j \ne 0,\end{cases}
\]

\[
a_1(x) := \sum_{j=0}^{J} \theta_j\, B_1(t_j)\bigl[g(c_j-) - g(c_j+)\bigr],
\]

\[
a_k(x) := \frac{1}{k!}\sum_{j=0}^{J} B_k(t_j)\bigl[g^{(k-1)}(c_j-) - g^{(k-1)}(c_j+)\bigr], \qquad 2 \le k \le q, \qquad (12.8)
\]

\[
F_2(y) := g(y) - \frac{\pi}{2X}\, f(x)\cot\!\Bigl(\pi\,\frac{x-y}{2X}\Bigr),
\]

and where B_k denotes the Bernoulli polynomial of degree k on [0, 1) and \(\widetilde B_k\) its 1-periodic extension.


When all jumps coincide with nodes (tj = 0, all j > 0), then for the trapezoid rule (t = 0) all the coefficients ak with odd k vanish: for k = 1, θj = 0 for all j , and Bk (0) = 0 for odd k ≥ 3. This is the case with the midpoint rule (t = 1/2) as well, since Bk (1/2) = 0 for every odd k. When q is even and f ∈ C q , the last two terms of (12.7) may be combined into a single O(hq )-term.

12.3 The Formula for the Influence of Jumps on the Sinc Interpolant

As in [3], we set h̃ := 2h. We start our study of the right-hand side of (12.3) with the trapezoid rule, i.e., t_0 = t = 0. For c_j, j = 1, …, J, let t_j, resp. t̃_j, be the values of (12.6) (c_0 = 0) corresponding to h, resp. h̃; then

\[
t_j = t - \frac{c_j}{h} = -\frac{c_j}{h} = 2\Bigl(-\frac{c_j}{2h}\Bigr) = 2\,\tilde t_j \bmod 1. \qquad (12.9)
\]

Using Formula (12.7), we compute

\[
T_{\tilde h}(I_x) - I_x = R_0(\tilde h) - I_x = (-1)^N \pi f(x)\,\operatorname{cta}\!\Bigl(\pi\frac{x}{\tilde h}\Bigr) + \sum_{k=1}^{q} a_k(x)\,\tilde h^k + O(\tilde h^q)
\]

with

\[
a_1(x) := \sum_{j=1}^{J} B_1(\tilde t_j)\bigl[g(c_j-) - g(c_j+)\bigr]
\]

and a_k(x) as in (12.8) for the t̃_j. (From here on, we do not carry the integral rest term any longer.) Similarly, for the midpoint rule,

\[
M_{\tilde h}(I_x) - I_x = R_{1/2}(\tilde h) - I_x = (-1)^N \pi f(x)\,\operatorname{cta}\!\Bigl(\pi\Bigl(\frac{x}{\tilde h}-\frac12\Bigr)\Bigr) + \sum_{k=1}^{q} a_k^M(x)\,\tilde h^k + O(\tilde h^q)
\]

with

\[
a_1^M(x) := \sum_{j=1}^{J} B_1(\tilde t_j^M)\bigl[g(c_j-) - g(c_j+)\bigr],
\]

\[
a_k^M(x) := \frac{1}{k!}\sum_{j=0}^{J} B_k(\tilde t_j^M)\bigl[g^{(k-1)}(c_j-) - g^{(k-1)}(c_j+)\bigr], \qquad 2 \le k \le q,
\]

where t̃_j^M denotes the value of t̃_j corresponding to t^M = 1/2 of the midpoint rule. Equation (12.3) thus yields

\[
\begin{aligned}
C_N(f,h)(x) &= \frac{1}{2\pi}\sin\!\Bigl(\frac{\pi}{h}x\Bigr)(-1)^N \Bigl[\bigl(T_{\tilde h}(I_x)-I_x\bigr) - \bigl(M_{\tilde h}(I_x)-I_x\bigr)\Bigr]\\[2pt]
&= \frac{1}{2\pi}\sin\!\Bigl(\frac{\pi}{h}x\Bigr)\Bigl\{\pi f(x)\Bigl[\operatorname{cta}\!\Bigl(\pi\frac{x}{\tilde h}\Bigr) - \operatorname{cta}\!\Bigl(\pi\Bigl(\frac{x}{\tilde h}-\frac12\Bigr)\Bigr)\Bigr]\\
&\qquad\qquad + (-1)^N \sum_{k=1}^{q}\bigl[a_k(x)-a_k^M(x)\bigr]\tilde h^k\Bigr\} + O(\tilde h^q). \qquad (12.10)
\end{aligned}
\]

But

\[
\operatorname{cta}\!\Bigl(\pi\frac{x}{\tilde h}\Bigr) - \operatorname{cta}\!\Bigl(\pi\Bigl(\frac{x}{\tilde h}-\frac12\Bigr)\Bigr) = \tan\!\Bigl(\pi\frac{x}{\tilde h}\Bigr) + \cot\!\Bigl(\pi\frac{x}{\tilde h}\Bigr) = \frac{2}{\sin\bigl(2\pi x/\tilde h\bigr)},
\]

independently of the parity of N. The terms with j = 0 in a_1 and a_1^M vanish since θ_0 = 0 and B_1(1/2) = 0. To avoid further treating separately the case k = 1 in a_k and a_k^M, we shall define the value of B_1 at the integers as 0, the average of the left and right limits of \(\widetilde B_1\) there. Then

\[
a_k(x) - a_k^M(x) = \frac{1}{k!}\sum_{j=0}^{J}\bigl[B_k(\tilde t_j) - B_k(\tilde t_j^M)\bigr]\bigl[g^{(k-1)}(c_j-) - g^{(k-1)}(c_j+)\bigr], \qquad k = 1, \dots, q. \qquad (12.11)
\]

How do t̃_j^M and t̃_j relate to one another? When c_j lies on the left of −X + (n_j + 1/2)h̃, t̃_j − 1/2 > 0 and t̃_j^M equals this value. When c_j lies on the right, then t̃_j − 1/2 < 0 and the next node of the midpoint rule on the right of c_j is at t̃_j + 1/2 = (t̃_j − 1/2) mod 1. The formula

\[
\tilde t_j^M = \Bigl(\tilde t_j - \frac12\Bigr) \bmod 1 \qquad (12.12)
\]

thus holds in every case, also for t̃_j = 0 and t̃_j = 1/2. But, according to [13, formula (30) p. 24 with m = 2], a difference of values of a Bernoulli polynomial at arguments differing by 1/2, as in (12.11), may be written as a single value of the


Euler polynomial of degree one less: indeed,

\[
B_\nu\Bigl(x+\frac12\Bigr) - B_\nu(x) = \frac{\nu}{2^\nu}\,E_{\nu-1}(2x),
\]

where E_ℓ denotes the Euler polynomial of degree ℓ. Here, for 0 ≤ t̃_j < 1/2 one has, according to (12.12), t̃_j^M = t̃_j + 1/2 and

\[
B_k(\tilde t_j) - B_k(\tilde t_j^M) = -\frac{k}{2^k}\,E_{k-1}(2\tilde t_j);
\]

for 1/2 ≤ t̃_j < 1, t̃_j^M = t̃_j − 1/2 and

\[
B_k(\tilde t_j) - B_k(\tilde t_j^M) = \frac{k}{2^k}\,E_{k-1}(2\tilde t_j^M) = \frac{k}{2^k}\,\widetilde E_{k-1}(2\tilde t_j),
\]

where \(\widetilde E_\ell\) denotes the 1-periodic extension of E_ℓ|_{[0,1)}. On the whole, with (12.9),

\[
B_k(\tilde t_j) - B_k(\tilde t_j^M) = \sigma_j\,\frac{k}{2^k}\,\widetilde E_{k-1}(t_j), \qquad\text{where}\quad \sigma_j := \begin{cases}-1, & 0 \le \tilde t_j < \tfrac12,\\ \phantom{-}1, & \tfrac12 \le \tilde t_j < 1.\end{cases} \qquad (12.13)
\]

(In other words, σ_j equals −1 whenever the next node on the right of c_j belongs to the trapezoid rule, 1 otherwise.) In order for this formula to hold also for k = 1 and t̃_j = 0, we set \(\widetilde E_0(\ell) := 0\) for all ℓ ∈ ℤ.

j =0

which leads us to the simple formula which follows.

330

J.-P. Berrut

Theorem 12.2 Let f be piecewise C q−1 [−X, X], q ∈ N, q ≥ 2, let cj , j = 0, . . . , J , denote its jumps and f (cj ) be defined as in (12.5) and tj as in (12.6). Suppose that f (q) is integrable on the intervals between two jumps. Then CN (f, h) and f are related by the formula π (−1)N sin( x) · CN (f, h)(x) = f (x) + 2π h ⎧ ' (⎫  (k) (k) q−1 J ⎬ hk+1  ⎨ f (y) f (y) 0k (tj ) σj E (cj −) − (cj +) ⎭ k! ⎩ x−y x−y j =0

k=0

+ O(h ), q

(12.14)

0 is the where the derivatives are taken with respect to y, σj is given in (12.13), E 00 ( ) := 0, ∈ Z. 1-periodic extension of the Euler polynomial of degree and E 0k vanishes at 0 for all even When all jumps coincide with nodes, tj = 0. Since E k, only even powers of h then appear in the sum in (12.14). Formula (12.14) may be used to successively eliminate the first terms of the sum in the hope of improving accuracy. Lemma 2.1 of [4] shows how to easily compute the derivatives of g when an expression for those of f is known. In the computations documented in the first examples we have taken the coefficients of the polynomials Ek from [1, p. 809]. Example 12.1 Let

 2 sb (x) := sinh(5x) − cos x e−b(x+0.1) and ⎧ ⎪ s (x), −X ≤ x < c1 , ⎪ ⎨5 −s (x), c1 < x ≤ X, f (x) := 7 ⎪ ⎪ ⎩ s5 (x) − s7 (x) , x = c1 . 2

(12.15)

To demonstrate the correctness of Formula (12.14), we have interpolated f on the interval√ [−1, 1], i.e., with c0 = X = 1, and with an irrational interior jump abscissa c1 = − 2/12 (see Fig. 12.1 ), which cannot coincide with a node for any N . All the computations were performed with MATLAB. In this first example, we have computed exact values (in machine arithmetic) of the derivatives on both sides of the jumps with MATLAB’s Symbolic Toolbox. We have chosen N = 400, i.e., 801 samples. The results are given in Table 12.1. The first column displays the values x at which the function and its interpolant have been evaluated, the second the error CN (f, h)(x) − f (x), the third the error with the CN (f, h) corrected with the first term of the sum in (12.14) and the remaining columns the error with the number K of terms given in the column heading.

12 The Influence of Jumps on the Sinc Interpolant, and Ways to Overcome It

331

3 2 1 0 −1 −2 −3 −1

−0.8 −0.6 −0.4 −0.2

0

0.2

0.4

0.6

0.8

1

Fig. 12.1 Function f in (12.15); the vertical lines show the location of the jumps

Table 12.1 Interpolation errors for f(x) in Example 12.1 on [−1, 1] with N = 400

x              K=0           K=1           K=2           K=3           K=14
−8.9474e−01    5.6606e−04    2.9282e−05   −9.2875e−09   −7.7670e−09    3.2419e−14
−7.8947e−01    1.1962e−03    2.1705e−05   −5.6445e−09   −1.2210e−09    1.4433e−13
−6.8421e−01    1.9195e−03    2.1076e−05   −1.0210e−08   −4.3919e−10    8.7486e−14
−5.7895e−01    2.7231e−03    2.2917e−05   −2.0194e−08   −3.0861e−10    1.2257e−13
−4.7368e−01    3.6242e−03    2.7137e−05   −4.1537e−08   −5.2994e−10   −3.5083e−14
−3.6842e−01    4.7300e−03    3.6138e−05   −9.7685e−08   −1.6948e−09    1.9273e−13
−2.6316e−01    6.5658e−03    6.2953e−05   −3.4526e−07   −1.0413e−08    1.2790e−13
−1.5789e−01    1.5647e−02    3.8160e−04   −9.0182e−06   −8.9491e−07   −1.5987e−14
−5.2632e−02   −3.2054e−03    3.5861e−05    4.1147e−07   −3.2136e−08    5.9730e−14
 5.2632e−02    1.2367e−03   −3.2975e−06   −1.1338e−08    3.4376e−10    4.2188e−14
 1.5789e−01    2.2150e−03   −1.7907e−06    2.0970e−09    5.8904e−12    3.5388e−14
 2.6316e−01    2.4799e−03   −9.3469e−08    7.7121e−09   −5.1995e−11    4.7240e−14
 3.6842e−01    2.4196e−03    9.0012e−07    8.8961e−09   −4.8160e−11    6.1728e−14
 4.7368e−01    2.1651e−03    1.3593e−06    8.1970e−09   −3.6991e−11    4.9849e−14
 5.7895e−01    1.7882e−03    1.4586e−06    6.6895e−09   −3.0072e−11   −1.4488e−14
 6.8421e−01    1.3419e−03    1.3298e−06    4.8816e−09   −3.1012e−11   −2.0900e−14
 7.8947e−01    8.7050e−04    1.0803e−06    3.0417e−09   −4.8731e−11   −2.1719e−14
 8.9474e−01    4.1268e−04    8.4127e−07    1.2454e−09   −1.7333e−10   −3.2620e−14

The numbers show that the formula is very efficient with this relatively high number of nodes. In every column, the maximum error is larger the closer one is to the interior jump, but never by more than about three digits. The results are that good because, with this large number of nodes, h is relatively small, and because the jump has been chosen far enough from any node (see the discussion in the next section).


12.4 Fully Discrete Correction with Finite Differences

As impressive as the numbers in Table 12.1 may be, they are not fully satisfactory, since they were obtained under the assumption that all one-sided derivatives can be computed with high precision at the jumps. We have already demonstrated in [4] that this limitation can in many cases be lifted by using finite differences instead of derivatives. In this prospect, we now merely assume that the limits f(c_j−) and f(c_j+) are given at every jump c_j, so that the derivatives g^{(k−1)}(c_j−) and g^{(k−1)}(c_j+) can be computed via non-equidistant one-sided finite differences and Lemma 2.1 of [4]. For instance, a quantity like f^{(ℓ)}(c_j−) will be computed from the values f(c_j−), f(x_m), f(x_{m−1}), …, f(x_{m−p+1}), where x_m is the first node on the left of c_j, therefore using p values on top of f(c_j−). The one-sided derivative then is obtained as the exact derivative at c_j of the polynomial of degree at most p interpolating f at these p + 1 nodes. Since we had already programmed Fornberg's method given on page 168 of [9], and since this method allows for the approximation of the derivative at any point from arbitrarily placed nodes, we have used the same program here. Moreover, we used the same observation as in [4] for choosing the number p + 1 of nodes. To obtain an error O(h^{K+1}), K terms should be eliminated in the development (12.14). The term in h^k is multiplied by derivatives of order k − 1; to have an O(h^{K+1}) error, they must therefore be computed with a precision of O(h^{K−k+1}). But Fornberg's tables in [9] show that the number of abscissae to be taken in one-sided finite differences equals the sum of the order of the derivative and the order of accuracy to be attained, which for order k − 1 precisely yields the order K of the last eliminated term, and this independently of k.
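For reference, Fornberg's recursive computation of finite-difference weights on arbitrarily placed nodes, the method of [9] referred to above, can be sketched as follows in Python (a generic textbook version of the algorithm, not the chapter's MATLAB program):

```python
import numpy as np

def fd_weights(z, x, m):
    """Fornberg's algorithm: weights w such that sum_j w[j]*f(x[j])
    approximates f^(m)(z) for arbitrarily placed, distinct nodes x."""
    n = len(x)
    c = np.zeros((n, m + 1))
    c1, c4 = 1.0, x[0] - z
    c[0, 0] = 1.0
    for i in range(1, n):
        mn = min(i, m)
        c2, c5, c4 = 1.0, c4, x[i] - z
        for j in range(i):
            c3 = x[i] - x[j]
            c2 *= c3
            if j == i - 1:           # update weights of the newest node
                for k in range(mn, 0, -1):
                    c[i, k] = c1 * (k * c[i-1, k-1] - c5 * c[i-1, k]) / c2
                c[i, 0] = -c1 * c5 * c[i-1, 0] / c2
            for k in range(mn, 0, -1):  # update weights of the old nodes
                c[j, k] = (c4 * c[j, k] - k * c[j, k-1]) / c3
            c[j, 0] = c4 * c[j, 0] / c3
        c1 = c2
    return c[:, m]

# one-sided first derivative of sin at 0 from nodes on the left of 0
x = np.array([0.0, -0.05, -0.1, -0.15, -0.2])
w = fd_weights(0.0, x, 1)
d1 = w @ np.sin(x)   # approximates cos(0) = 1 to high order
```

The weights reproduce derivatives of polynomials up to degree p exactly, which is precisely the property exploited for the one-sided limits at the jumps.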
Example 12.2 We have repeated the computations of Example 12.1, just replacing the one-sided derivatives by one-sided finite differences with p + 1 = 15 nodes for all eliminated terms. The resulting numbers were exactly the same as in Table 12.1, up to machine precision (i.e., all columns but the last were exactly the same, and the latter was also the same up to machine precision). □

We shall now address two weaknesses of the computations in Examples 12.1 and 12.2: the sample size in the two examples so far is quite large with 801 nodes and yields a small h, and the small number of evaluation points makes the distance of all of these to the jumps large as compared with h. As explained in Section 3 of [4], however, the use of formulae such as (12.14) merely makes sense if their terms do not dominate f(x), which however will happen if the evaluation point x is too close to some jump c_j and the derivatives consequently become too large. The way suggested in [4] to avoid this is to require that h be small enough for the factors

\[
\frac{1}{k!}\left[\Bigl(\frac{f(y)}{x-y}\Bigr)^{(k)}(c_j-) - \Bigl(\frac{f(y)}{x-y}\Bigr)^{(k)}(c_j+)\right] h^{k+1} \qquad (12.16)
\]


Table 12.2 Interpolation errors for f(x) as in Example 12.1 on [−1, 1] with N = 25

x              K=0           K=1           K=2           K=3           K=14
−8.9474e−01    4.0576e−02    1.6354e−02   −1.1684e−03   −1.1610e−03    2.7928e−04
−7.8947e−01   −2.5553e−02   −3.0428e−03    9.7960e−05    8.8821e−05   −1.0038e−07
−6.8421e−01   −1.1914e−02   −1.3296e−04    1.5198e−05    8.6612e−06   −3.0960e−10
−5.7895e−01    4.2503e−02   −1.9104e−03   −3.5379e−05   −1.1890e−07    1.0701e−11
−4.7368e−01   −2.5324e−02    2.1619e−03    2.0161e−05   −1.3618e−05    1.8742e−10
−3.6842e−01   −4.4629e−02    5.7421e−03    3.2357e−05   −7.8685e−05    2.0696e−08
−2.6316e−01    1.1336e−01   −2.3727e−02    1.6975e−04    9.3072e−04   −2.4699e−05
−1.5789e−01   −4.9499e−02    3.4966e−02   −7.5837e−03   −1.2429e−02   −1.3118e+02
−5.2632e−02    3.0889e−01    4.5119e−02   −1.0537e−02   −6.6456e−03   −1.4752e+00
 5.2632e−02   −1.0458e−01   −3.6674e−03    2.1110e−04    1.0863e−04    1.5603e−06
 1.5789e−01   −1.2214e−02    5.1502e−05   −9.4524e−07    3.0204e−07    5.9527e−11
 2.6316e−01    5.1035e−02   −1.2486e−03    2.1037e−05    3.3922e−06    2.2104e−11
 3.6842e−01   −2.5032e−02    9.2412e−04   −1.1544e−05   −1.1973e−06   −8.1357e−13
 4.7368e−01   −1.5817e−02    7.1635e−04   −6.9833e−06   −2.0086e−07    1.2890e−13
 5.7895e−01    2.7962e−02   −1.4281e−03    1.1124e−05   −7.9021e−07   −3.0161e−12
 6.8421e−01   −7.8870e−03    4.3190e−04   −2.5909e−06    6.9542e−07    2.4580e−11
 7.8947e−01   −1.5784e−02    8.7860e−04   −2.5156e−06    3.8691e−06    6.0883e−09
 8.9474e−01    1.7751e−02   −8.3295e−04   −1.9650e−05   −2.6551e−05   −3.7883e−05

to decrease with k. A calculation identical to that in [4] shows that a sufficient condition for this decay when evaluating at x is that the interval length h be at most the distance from x to the next jump.

Example 12.3 What happens if the sample size is too small and this condition is not fulfilled? Table 12.2 collects the numerical results of the same experiment as in Example 12.2, merely with 2N + 1 = 51 nodes instead of 801. The errors stay very good far from the jumps, but cannot be made smaller than about 10^{−2} in the vicinity of the interior jump. (The fact that the error increases when K becomes too large comes from the fact that, with such a large h, Formula (12.14) loses its meaning as the last term of the sum and the O(h^{K+1}) terms dominate it, see [4, p. 344].) This becomes even more dramatic with a fivefold increase of the number of evaluation points to 98, as pictured for K = 3 in Fig. 12.2 (the maximal error then is 1.389). □

The bad results with K = 14 are at least partly due to the one-sided polynomial differences. Indeed, a small N implies a large h, thus a relatively large interval of interpolation for the divided differences for large K. The likelihood of occurrence of Runge's phenomenon increases, as the latter often occurs already with N about 7. In his master's thesis, Privitera [14] has repeated some experiments of the present work for functions with larger gradients at the jumps with the linear barycentric rational finite differences of [11], and obtained better results in some cases.


Fig. 12.2 Approximation of f with h too small (Example 12.3)

12.5 Quotients of Sinc Interpolants Corrected with Finite Differences

The practical utility of (12.14) would be limited if restricted to small enough h (i.e., large enough N) relatively to the distance of x to all jumps. Fortunately, as noticed in [4, Section 5], when x comes so close to a jump that the terms in the sum of (12.14) grow with k, the extremely large corrected sinc interpolant documented in Table 12.2 is about the last term of that sum, which approaches f(x) times a function of x that becomes extremely large but does not depend on f. This naturally led us to suggest to approach f not simply by the corrected sinc interpolant as above, but by the quotient of this corrected sinc interpolant and that of the function 1, also truncated at the extremities ±X of the interval; as x tends toward such a boundary point, the corrected sinc interpolant of 1 will be about the same function independent of f just mentioned, times 1, and the quotient will tend to f(x)/1 = f(x). The use of the same idea here requires an important change: the function 1 trivially does not show any interior jump and should be replaced by one that does duplicate all jumps of f. We have already done that in [7], where we have also solved the problem of the parity of the number of jumps by choosing a piecewise linear denominator function d. Here we have first tested the idea with examples of the same kind as above, i.e., displaying one interior jump on top of the one at the extremities. In that case there are two intervals between jumps and we may choose

\[
d(x) = \begin{cases} \phantom{-}1, & -X \equiv c_0 < x < c_1,\\ -1, & c_1 < x < X,\\ \phantom{-}0, & x = \pm X \text{ and } x = c_1, \end{cases}
\]


Table 12.3 Interpolation errors with the quotient of corrected interpolants, with N = 25

x              K=0           K=1           K=2           K=3           K=14
−8.9474e−01    4.0576e−02   −1.7995e−03   −6.5397e−06   −2.9684e−06   −4.7911e−12
−7.8947e−01   −2.5553e−02    1.2376e−03    5.2988e−06    2.3693e−06    7.8870e−13
−6.8421e−01   −1.1914e−02    4.3423e−04    2.2411e−06    1.0999e−06    2.7844e−13
−5.7895e−01    4.2503e−02   −9.4656e−04   −6.3370e−06   −4.0081e−06    1.2652e−12
−4.7368e−01   −2.5324e−02    2.7539e−04    2.8387e−06    2.7131e−06   −2.4254e−11
−3.6842e−01   −4.4629e−02    1.6234e−04    4.2129e−06    6.2194e−06   −1.4810e−09
−2.6316e−01    1.1336e−01   −9.1750e−06   −1.4323e−05   −2.5406e−05    6.5723e−07
−1.5789e−01   −4.9499e−02   −5.2272e−05    2.6979e−05    4.1478e−05    6.0317e−03
−5.2632e−02    3.0889e−01    2.0382e−04   −1.4817e−05   −1.4429e−05    1.8173e−02
 5.2632e−02   −1.0458e−01    5.7040e−05   −4.4160e−06    1.9709e−06    3.6344e−08
 1.5789e−01   −1.2214e−02    8.2416e−05   −1.8998e−06   −7.3349e−08    4.0556e−12
 2.6316e−01    5.1035e−02   −8.8737e−04    1.2964e−05    8.2883e−07    2.6052e−12
 3.6842e−01   −2.5032e−02    7.6511e−04   −8.1968e−06   −4.0162e−07   −6.0729e−14
 4.7368e−01   −1.5817e−02    6.9001e−04   −5.8808e−06   −1.4568e−07    7.2331e−14
 5.7895e−01    2.7962e−02   −1.5262e−03    1.0900e−05   −2.4929e−08   −1.7969e−13
 6.8421e−01   −7.8870e−03    4.9287e−04   −3.0657e−06    9.2993e−08    7.1387e−14
 7.8947e−01   −1.5784e−02    1.0653e−03    5.9348e−06    3.4911e−07    3.3515e−13
 8.9474e−01    1.7751e−02   −1.2583e−03    6.3056e−06   −5.5291e−07   −4.1410e−12

as the denominator function. The quotient approximation to f now is defined as \((\widetilde f/\widetilde d)\,d\), where the tilde denotes the corrected sinc approximation to the function (see [7] for more details).

Example 12.4 We have repeated Example 12.3, replacing the direct approximation of f with the result of the above quotient method, first with 18 evaluation points as in Tables 12.1 and 12.2; the results are collected in Table 12.3. As the extremely large values of the right-hand side of (12.14) arise in both numerator and denominator, they disappear in the quotient. The maximal error decreases to about 10^{−5}; not all numbers are better than without quotient, but the error is more regularly distributed, with a quotient of the smallest to the largest (for K = 3) of about 10^{−3} as compared with 10^{−5} in Table 12.2. Also as in Example 12.3, we have then increased the number of interior evaluation points to 98; the resulting approximation, plotted in Fig. 12.3, is virtually indistinguishable from the function f itself as shown in Fig. 12.1 (the only difference being that the jump is not quite vertical anymore; the number of f-values was 1999 in Fig. 12.1). The maximal error (still with K = 3) increases from 4.1478 · 10^{−5} (see Table 12.3) to 7.4707 · 10^{−4}; reaching such an accuracy without quotienting (i.e., as in Table 12.2) requires about 1000 nodes. With K = 14, however, the stepsize is too large (see the beginning of the section) for the corrections to be accurate enough. For that reason, and to show that the method is very accurate as well for interior jumps, we have repeated Table 12.3


Fig. 12.3 Example 12.4: quotient of corrected interpolants with 51 points

Table 12.4 Interpolation errors with the quotient of corrected interpolants, with N = 400

x              K=0           K=1           K=2           K=14
−8.9474e−01    5.6606e−04    2.9282e−05   −9.2875e−09    3.2419e−14
−7.8947e−01    1.1962e−03    2.1705e−05   −5.6445e−09    1.4433e−13
−6.8421e−01    1.9195e−03    2.1076e−05   −1.0210e−08    8.7486e−14
−5.7895e−01    2.7231e−03    2.2917e−05   −2.0194e−08    1.2257e−13
−4.7368e−01    3.6242e−03    2.7137e−05   −4.1537e−08   −3.5083e−14
−3.6842e−01    4.7300e−03    3.6138e−05   −9.7685e−08    1.9273e−13
−2.6316e−01    6.5658e−03    6.2953e−05   −3.4526e−07    1.2790e−13
−1.5789e−01    1.5647e−02    3.8160e−04   −9.0182e−06   −1.5987e−14
−5.2632e−02   −3.2054e−03    3.5861e−05    4.1147e−07    5.9730e−14
 5.2632e−02    1.2367e−03   −3.2975e−06   −1.1338e−08    4.2188e−14
 1.5789e−01    2.2150e−03   −1.7907e−06    2.0970e−09    3.5388e−14
 2.6316e−01    2.4799e−03   −9.3469e−08    7.7121e−09    4.7240e−14
 3.6842e−01    2.4196e−03    9.0012e−07    8.8961e−09    6.1728e−14
 4.7368e−01    2.1651e−03    1.3593e−06    8.1970e−09    4.9849e−14
 5.7895e−01    1.7882e−03    1.4586e−06    6.6895e−09   −1.4488e−14
 6.8421e−01    1.3419e−03    1.3298e−06    4.8816e−09   −2.0900e−14
 7.8947e−01    8.7050e−04    1.0803e−06    3.0417e−09   −2.1719e−14
 8.9474e−01    4.1268e−04    8.4127e−07    1.2454e−09   −3.2620e−14

with larger N. The results with N = 400 are given in Table 12.4: the final column is extremely accurate. When the evaluation point x approaches the interior jump, the Gibbs phenomenon appears in the method without quotienting. We have computed the same example as above, with N = 500 and about 1000 values of x. The directly corrected sinc interpolant is plotted in Fig. 12.4: Gibbs clearly shows up, and the maximal

12 The Influence of Jumps on the Sinc Interpolant, and Ways to Overcome It


Fig. 12.4 Example 12.4: Gibbs’ phenomenon with the directly corrected sinc interpolant and 1001 points

error is 4.3303 · 10−1. With the quotient, the plot is perfect (i.e., indistinguishable from Fig. 12.1) and the maximal error is 9.9984 · 10−5. This is just one more application of the quotienting method for fighting Gibbs' phenomenon introduced in [7].
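For readers who want to reproduce the basic phenomenon, the following is a minimal sketch of the plain (uncorrected) sinc interpolant of a function with an interior jump; the function sign(x)·exp(−x²), the step size, and the node count are illustrative choices unrelated to the computations above, and no corrections or quotienting are applied.

```python
import numpy as np

# Illustrative jump function: a node sits exactly at the jump (f(0) = 1).
h, N = 0.1, 300
k = np.arange(-N, N + 1)
f = lambda x: np.where(x >= 0, 1.0, -1.0) * np.exp(-x**2)

# Plain (uncorrected) sinc interpolant S(x) = sum_k f(kh) sinc(x/h - k).
x = np.linspace(-2.0, 2.0, 4001)
S = f(k * h) @ np.sinc(x[None, :] / h - k[:, None])

# Gibbs oscillations: S overshoots the function's maximum near the jump.
print(S.max() - np.abs(f(x)).max())
```

The overshoot is the classical Gibbs effect (roughly 9% of the jump height), which is what the corrections and the quotienting method above are designed to suppress.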

12.6 Extrapolated Sinc Interpolants

As impressive as the results with the above corrections of the effect of jumps on the sinc interpolant may be, they have a big drawback: the computation of the one-sided derivatives requires the knowledge of the one-sided limits of the function on both sides of the jumps. In most cases, however, these values will not be known at the interior jumps. (We have discovered, however, that they may be computed by extrapolation with Boole's summation formula, see [6].) Moreover, solving equations with these corrected interpolants will require the determination of two values at each jump, a task that may be difficult.

For this reason, we turned our attention to extrapolated sinc interpolants, as in [4]. Indeed, a formula such as (12.14), which is reminiscent of Euler–Maclaurin formulas (in fact more of a Boole summation formula), calls for extrapolation. In some cases, the situation is relatively simple. For instance, we have already noticed that, when all jumps coincide with nodes, the sum in (12.14) contains only even powers of h. But then B̃k(tj) − B̃k(tM_j) = Bk(0) − Bk(1/2), independently of j, and this expression may be brought in front of the j-sum; then the k-sum is very similar to that corresponding to a lone jump at the extremities, and the extrapolation process coincides with that given in [4]. (In fact, one may split the interval into subintervals separated by the jumps and apply the above methods on each subinterval.) We are currently working on the general case and hope to report on results in the future.

12.7 Final Remarks

If the interval [−X, X] is so large that f and its derivatives vanish up to machine precision outside it, then there is no jump at the extremities; only the interior jumps remain. And if there is only one of the latter, only the term with j = 1 remains in the interior sum in Formula (12.14).

The alert reader might have noticed that the numerical results in our tables are not as good as the corresponding ones in [4]. This is not surprising, as Formula (12.14) contains all powers of h, whereas (1.3) in [4] contains only even powers. The elimination of one term in (12.14) therefore advances the accuracy by only one power, against two with (1.3) in [4].

As in the case in [4] with a jump located at a sample point (±X), the corrections introduced in this paper all maintain the interpolation property up to O(h^q), for the sine factor in (12.14) causes the k-sum to vanish at the nodes x = nh. To obtain with finite differences a convergence order 15, as in our tables of Sect. 12.4, the set of samples must be large enough for a computation of derivatives of order 15, and therefore contain at least 15 values; there is no such limitation when using the true derivatives as in Sect. 12.3.

Finally, all the above methods require the knowledge of the locations c_j of the jumps. If merely the samples {(x_n, f_n) : n = −N, . . . , N} are known, then an approximation of the c_j's must be determined first. Methods for doing this exist; we are working on such a method as well, and hope to ultimately combine it with the algorithms presented in the present work.

Acknowledgments This work has been supported in part by the Swiss National Science Foundation, grant # 200020–124779. The author thanks Georges Klein for making available to Aurelio Privitera his programs for computing the weights of the linear rational finite differences used in Sect. 12.4. He is also indebted to a referee for several useful comments.

Bibliography

1. Abramowitz, M., Stegun, I.A.: Handbook of Mathematical Functions. Dover, New York (1965)
2. Berrut, J.-P.: A circular interpretation of the Euler–Maclaurin formula. J. Comput. Appl. Math. 189, 375–386 (2006)
3. Berrut, J.-P.: A formula for the error of finite sinc-interpolation over a finite interval. Numer. Algorithms 45, 369–374 (2007)
4. Berrut, J.-P.: First applications of a formula for the error of finite sinc interpolation. Numer. Math. 112, 341–361 (2009)
5. Berrut, J.-P.: A formula for the error of finite sinc-interpolation with an even number of nodes. Numer. Algorithms 56, 143–157 (2011)
6. Berrut, J.-P.: Boole's summation formula and features of jump singularities. In preparation
7. Berrut, J.-P.: Fighting Gibbs' phenomenon through quotienting. In preparation
8. Driscoll, T.A., Hale, N., Trefethen, L.N. (eds.): Chebfun Guide. Pafnuty Publications, Oxford (2014)
9. Fornberg, B.: A Practical Guide to Pseudospectral Methods. Cambridge Univ. Press, Cambridge (1996)
10. Katznelson, Y.: Harmonic Analysis. Wiley, New York (1968); Dover, New York (1976)
11. Klein, G., Berrut, J.-P.: Linear rational finite differences from derivatives of barycentric rational interpolants. SIAM J. Numer. Anal. 50, 643–656 (2012)
12. Lund, J., Bowers, K.L.: Sinc Methods for Quadrature and Differential Equations. SIAM, Philadelphia (1992)
13. Nörlund, N.E.: Vorlesungen über Differenzenrechnung. Springer, Berlin (1924)
14. Privitera, A.: Correction par différences finies rationnelles de l'interpolant sinc fini avec sauts. Master's thesis, University of Fribourg (Switzerland) (2019)
15. Richardson, M., Trefethen, L.N.: A sinc function analogue of Chebfun. SIAM J. Sci. Comput. 33, 2519–2535 (2011)
16. Stenger, F.: Numerical Methods Based on Sinc and Analytic Functions. Springer, New York (1993)
17. Stenger, F.: Handbook of Sinc Numerical Methods. Chapman and Hall, Boca Raton (2010)

Chapter 13

Construction of Approximation Formulas for Analytic Functions by Mathematical Optimization

Ken'ichiro Tanaka and Masaaki Sugihara

Abstract We describe our methods developed recently to construct formulas for approximating functions and for numerical integration. Our methods are developed basically for improving the numerical methods with variable transformations known as the double-exponential (DE) Sinc methods for function approximation and the DE formulas for numerical integration, which are based on the sinc interpolation and the trapezoidal rule on the real axis, respectively. It has been known that these are nearly optimal on Hardy spaces with single- or double-exponential weights, which are regarded as spaces of functions transformed by the variable transformations. With a view to an optimal formula in the weighted Hardy space for a general weight, the authors propose a simple method for obtaining sampling points for approximating functions and for numerical integration. This method is based on a minimization problem for a discrete energy. By solving the problem with a standard optimization technique, we obtain sampling points that realize accurate formulas for approximating functions and for numerical integration. From some numerical examples, we can observe that the constructed formulas outperform the existing formulas.

Keywords Weighted Hardy space · Function approximation · Numerical integration · Potential theory · Green potential · Discrete energy minimization

M. Sugihara (deceased) Professor M. Sugihara wrote this chapter while at Aoyama Gakuin University, Tokyo, Japan

K. Tanaka, Department of Mathematical Informatics, Graduate School of Information Science and Technology, The University of Tokyo, Tokyo, Japan. e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021
G. Baumann (ed.), New Sinc Methods of Numerical Analysis, Trends in Mathematics, https://doi.org/10.1007/978-3-030-49716-3_13


13.1 Introduction

In this paper, we describe our methods developed recently to construct formulas for approximating functions and numerical integration. We deal with analytic functions on a strip region including the real axis and construct approximation formulas for them by solving convex optimization problems yielding sampling points for the formulas. This paper is based on our previous papers [19] and [15].

Our methods are developed basically for improving the numerical methods with variable transformations known as the double-exponential (DE) Sinc methods for function approximation [10, 16] and DE formulas for numerical integration [9, 14, 17]. They are based on variable transformations that transform analytic functions on a complex domain to those with rapid decay at infinity on the strip region

    D_d := { z ∈ C : |Im z| < d },    (13.1)

where d > 0. Because rapidly decaying functions on D_d are easily tractable for approximation and numerical integration, such transformations are valuable. A typical example of such transformations is the double-exponential (DE) transformation [14] given by

    ψ_DE(x) = tanh((π/2) sinh x),    (13.2)

which transforms analytic functions on a domain including the interval (−1, 1) to those on D_d for some d. For other DE transformations, see [16, 17] and the references therein. Besides the DE transformations, there are single-exponential (SE) transformations. A typical example is

    ψ_SE(x) = tanh(x/2),    (13.3)

which is also known as the TANH transformation. For SE transformations, see [3, 8, 11–13] and the references therein.

13.1.1 Function Approximation

For approximating a function g by the Sinc methods [11, 12], we use a transformation ψ to obtain f(x) = g(ψ(x)) and apply the sinc interpolation to the transformed function f:

    f(x) ≈ Σ_{k=−N_−}^{N_+} f(kh) sinc(x/h − k),    (13.4)


where sinc(x) = (sin πx)/(πx). If g has a decay property such that f decays rapidly at infinity on D_d, the sinc interpolation provides an accurate approximation of f. Actually, a precise error estimate is provided in [11, 12, 16]. A deeper result is presented in [10]: the sinc interpolation provides a "nearly optimal" approximation in some spaces of analytic functions with a specific decay property. More precisely, the space H^∞(D_d, w) defined below in (13.6) is considered as a set of the transformed functions f for several weight functions w representing the decay properties. Then, it is shown that the error of the sinc interpolation for the functions in H^∞(D_d, w) is close to the minimum worst-case error in this space in the case where w decays single-exponentially or double-exponentially. However, an explicit optimal formula, i.e., a formula attaining the minimum worst-case error in H^∞(D_d, w), is known only in the restricted case where w decays single-exponentially [20]. In the papers [15, 18], with a view to obtaining an optimal formula in H^∞(D_d, w) for a general weight function w, the authors propose methods for constructing accurate formulas for approximating functions by making the worst-case error as small as possible. In particular, a simple method for obtaining sampling points for approximating functions in H^∞(D_d, w) is proposed in [15]. This method is based on a minimization problem for a discrete energy, which is an analogue of a fundamental tool in potential theory [7].

13.1.2 Numerical Integration

The DE formulas for numerical integration are based on integration by substitution and the trapezoidal rule on R:

    ∫_I g(t) dt = ∫_{−∞}^{∞} g(ψ(x)) ψ′(x) dx ≈ h Σ_{k=−N_−}^{N_+} g(ψ(kh)) ψ′(kh),    (13.5)

where ψ is a DE transformation. Under some assumptions on the integrand g, the transformed function f = g(ψ(·)) ψ′(·) decays double-exponentially, which contributes to the accuracy of the formula in (13.5). Actually, a precise error estimate is provided in [9, 17]. Besides the DE formulas, there are several formulas given by other transformations, including the SE transformations. We can find related studies about how the decay rate affects the performance of the trapezoidal rule [6, 17, 21]. However, it is not obvious that the trapezoidal rule is an optimal formula among the possible formulas with a fixed number of sampling points. It is shown in [9] that the trapezoidal rule is "near-optimal" in H^∞(D_d, w) in the case where w decays single-exponentially or double-exponentially. A similar approach to that for function approximation is employed: the worst-case error of a numerical integration formula in H^∞(D_d, w) is considered as a criterion of its accuracy.
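Formula (13.5) is easy to sketch in code. The following minimal example applies the trapezoidal rule after the DE transformation (13.2) to the integral of √(1 − t²) over (−1, 1), whose value is π/2; the step size h and truncation index N are illustrative choices, not the tuned parameters of the references.

```python
import math

def de_quadrature(g, h=0.1, N=60):
    """Formula (13.5) with psi = psi_DE of (13.2):
    t = tanh((pi/2) sinh x),  psi'(x) = (pi/2) cosh(x) / cosh((pi/2) sinh x)**2."""
    total = 0.0
    for k in range(-N, N + 1):
        x = k * h
        u = (math.pi / 2) * math.sinh(x)
        if abs(u) > 300:          # term is numerically zero; avoid overflow in cosh
            continue
        total += g(math.tanh(u)) * (math.pi / 2) * math.cosh(x) / math.cosh(u) ** 2
    return h * total

# integral of sqrt(1 - t^2) over (-1, 1) equals pi/2
val = de_quadrature(lambda t: math.sqrt(1.0 - t * t))
print(val)
```

The double-exponential decay of the transformed integrand is what makes such a small N sufficient despite the square-root singularities at t = ±1.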


It should be emphasized here that the trapezoidal rule is not necessarily optimal, and a better numerical integration formula may exist. However, there are no general methods for constructing optimal formulas except in the case where w decays single-exponentially [1, 2]. Recently, an attempt to establish such methods is made in the paper [19] with the aid of the same principle for generating sampling points as that for function approximation.

13.1.3 Organization of This Paper

The rest of this paper is organized as follows. In Sect. 13.2, we present some mathematical preliminaries concerning assumptions for weight functions w and the formulation of the optimality of formulas for approximating functions and numerical integration formulas in H^∞(D_d, w). In Sect. 13.3, according to the paper [15], we deal with formulas for approximating functions. After characterizing their optimality, we describe the method for constructing accurate formulas. This method is based on a minimization problem of a convex discrete energy, which can be easily solved by a standard algorithm. In Sect. 13.4, according to the paper [19], we deal with numerical integration formulas. We also describe the characterization of their optimality and provide the method for constructing accurate formulas. In fact, we modify the method in [19] by using the method for function approximation in Sect. 13.3. In Sect. 13.5, we present some numerical results showing the performance of the proposed methods. Finally, we conclude this paper in Sect. 13.6.

13.2 Mathematical Preliminaries

13.2.1 Weight Functions and Weighted Hardy Spaces

Let d be a positive real number, and let D_d be the strip region defined by (13.1). In order to specify the analytical property of a weight function w on D_d, we use the function space B(D_d) of all functions ζ that are analytic on D_d such that

    lim_{x→±∞} ∫_{−d}^{d} |ζ(x + i y)| dy = 0

and

    lim_{y→d−0} ∫_{−∞}^{∞} ( |ζ(x + i y)| + |ζ(x − i y)| ) dx < ∞.


Let w be a complex-valued function on D_d. We regard w as a weight function on D_d if w satisfies the following assumption.

Assumption 13.2.1 The function w belongs to B(D_d), does not vanish at any point in D_d, and takes positive real values in (0, 1] on the real axis.

In this paper, we restrict ourselves to weight functions with the following additional assumption.

Assumption 13.2.2 The function log w is strictly concave on R.

For a weight function w that satisfies Assumptions 13.2.1 and 13.2.2, we define a weighted Hardy space on D_d by

    H^∞(D_d, w) := { f : D_d → C | f is analytic on D_d and ‖f‖ < ∞ },    (13.6)

where

    ‖f‖ := sup_{z∈D_d} | f(z)/w(z) |.

13.2.2 Optimal Formula for Approximating Functions

For a given positive integer n, we first consider all the possible n-point interpolation formulas on R that can be applied to any function f ∈ H^∞(D_d, w). Subsequently, we choose a criterion that determines optimality of a formula in H^∞(D_d, w). Based on [10], we adopt the minimum worst error E_n^min(H^∞(D_d, w)) given by

    E_n^min(H^∞(D_d, w)) := inf_{1≤l≤n} inf_{m_1+···+m_l=n} inf_{a_j∈D_d distinct} inf_{φ_jk} [ sup_{‖f‖≤1} sup_{x∈R} | f(x) − Σ_{j=1}^{l} Σ_{k=0}^{m_j−1} f^{(k)}(a_j) φ_jk(x) | ],    (13.7)

where the φ_jk are analytic functions on D_d. We regard a formula that attains this value as optimal.

13.2.3 Optimal Numerical Integration Formula

In a similar manner to that in Sect. 13.2.2, we provide a mathematical formulation for optimal numerical integration formulas. For a given positive integer n, we first consider all possible n-point numerical integration formulas on R that can be applied to any function f ∈ H^∞(D_d, w). Then, we choose one of the formulas such that it gives the minimum worst error in H^∞(D_d, w). The precise definition of the minimum worst error, denoted by E_n^min(H^∞(D_d, w)), is given by

    E_n^min(H^∞(D_d, w)) := inf_{1≤ℓ≤n} inf_{m_1+···+m_ℓ=n} inf_{a_p∈D_d} inf_{c_pq∈C} ‖ D^{ℓ,m_p}_{c_pq, a_p} ‖,    (13.8)

where the error operator D^{ℓ,m_p}_{c_pq, a_p} : H^∞(D_d, w) → C is defined by

    D^{ℓ,m_p}_{c_pq, a_p} f := ∫_{−∞}^{∞} f(x) dx − Σ_{p=1}^{ℓ} Σ_{q=0}^{m_p−1} c_pq f^{(q)}(a_p).    (13.9)

That is, E_n^min(H^∞(D_d, w)) is the minimum norm of the error operator D^{ℓ,m_p}_{c_pq, a_p}; therefore, we call it the minimum error norm. We regard a formula that attains this value as optimal.

13.3 Accurate Formulas for Approximating Functions

13.3.1 Characterization of Optimal Formulas for Approximating Functions

For a mutually distinct n-sequence a = (a_1, . . . , a_n) ∈ R^n, we introduce the following functions¹:

    T_d(x) := tanh(πx/(4d)),    (13.10)

    B_n(x; a, D_d) := ∏_{j=1}^{n} ( T_d(x) − T_d(a_j) ) / ( 1 − T_d(a_j) T_d(x) ),    (13.11)

    B_{n;k}(x; a, D_d) := ∏_{1≤j≤n, j≠k} ( T_d(x) − T_d(a_j) ) / ( 1 − T_d(a_j) T_d(x) ),    (13.12)

and the Lagrange-type n-point interpolation formula

    L_n[a; f](x) := Σ_{k=1}^{n} f(a_k) · ( B_{n;k}(x; a, D_d)/B_{n;k}(a_k; a, D_d) ) · ( w(x)/w(a_k) ) · ( T_d′(x − a_k)/T_d′(0) ).    (13.13)

¹ The function given by (13.11) is called the transformed Blaschke product.


Then, we describe characterizations of E_n^min(H^∞(D_d, w)) in (13.7) by the following proposition.

Proposition 13.1 ([10, Lemma 4.3 and its proof]) Let a = (a_1, . . . , a_n) ∈ R^n be a mutually distinct sequence. Then, we have the following error estimate of the formula in (13.13):

    E_n^min(H^∞(D_d, w)) ≤ sup_{f∈H^∞(D_d,w), ‖f‖≤1} sup_{x∈R} | f(x) − L_n[a; f](x) |    (13.14)
                         ≤ sup_{x∈R} | B_n(x; a, D_d) w(x) |.    (13.15)

Furthermore, if we take the infimum over all the n-sequences a in the above inequalities, then each of them becomes an equality:

    E_n^min(H^∞(D_d, w)) = inf_{a_j∈R} [ sup_{f∈H^∞(D_d,w), ‖f‖≤1} sup_{x∈R} | f(x) − L_n[a; f](x) | ]    (13.16)
                         = inf_{a_j∈R} sup_{x∈R} | B_n(x; a, D_d) w(x) |.    (13.17)

Proposition 13.1 indicates that the interpolation formula L_n[a; f](x) provides an explicit form of an optimal approximation formula if there exists an n-sequence a = a* that attains the infimum in (13.17). By noting that

    ( T_d(x) − T_d(a_j) ) / ( 1 − T_d(a_j) T_d(x) ) = T_d(x − a_j)

for a_j ∈ R and x ∈ R, and taking the logarithm of (13.17), we can consider the following equivalent alternative:

    inf_{a_j∈R} [ sup_{x∈R} ( Σ_{j=1}^{n} log |T_d(x − a_j)| + log w(x) ) ].    (13.18)

To deal with the optimization problem corresponding to (13.18), we introduce the following notation:

    K(x) = − log |T_d(x)| = − log |tanh(πx/(4d))|,    (13.19)
    Q(x) = − log w(x).    (13.20)


Furthermore, for an integer n ≥ 2, let

    R_n = { (a_1, . . . , a_n) ∈ R^n | a_1 < · · · < a_n }    (13.21)

be the set of mutually distinct n-point configurations in R. Then, by using the function defined by

    U_n^D(a; x) = Σ_{i=1}^{n} K(x − a_i),    x ∈ R,    (13.22)

for a = (a_1, . . . , a_n) ∈ R_n, we can formulate the optimization problem corresponding to (13.18) as follows:

    (D)    maximize  inf_{x∈R} [ U_n^D(a; x) + Q(x) ]    subject to  a ∈ R_n.    (13.23)

Problem (D) in (13.23) is closely related to potential theory. In fact, the function K(x − y) of (x, y) ∈ R² is the kernel function derived from the Green function of D_d:

    g_{D_d}(z_1, z_2) = − log | ( T_d(z_1) − T_d(z_2) ) / ( 1 − conj(T_d(z_2)) T_d(z_1) ) |    (13.24)

in the special case where (z_1, z_2) = (x, y) ∈ R². Therefore, the function U_n^D(a; x) is the Green potential for the discrete measure Σ_{i=1}^{n} δ_{a_i}, where δ_{a_i} is the Dirac measure centered at a_i.

13.3.2 Basic Idea for Constructing Accurate Formulas for Approximating Functions

For a positive integer n, let M(R, n) be the set of all Borel measures μ on R with μ(R) = n, and let M_c(R, n) be the set of measures μ ∈ M(R, n) with compact support. In particular, for a sequence a ∈ R_n, the discrete measure Σ_{i=1}^{n} δ_{a_i} belongs to M_c(R, n). For μ ∈ M(R, n), we define the potential U_n^C(μ; x) and the energy I_n^C(μ) by

    U_n^C(μ; x) = ∫_R K(x − y) dμ(y),    (13.25)

    I_n^C(μ) = ∫_R ∫_R K(x − y) dμ(y) dμ(x) + 2 ∫_R Q(x) dμ(x),    (13.26)


respectively. According to (13.19) and (13.24), these are the Green potential and energy in the case where the domain of the Green function is D_d and that of the external field Q is R. Then, by using fundamental facts in potential theory, we can show the following theorem.

Theorem 13.1 ([15, Theorem 2.6]) Under Assumptions 13.2.1 and 13.2.2, the minimizer μ_n* of I_n^C yields a solution of the optimization problem

    (C)    maximize  inf_{x∈R} ( U_n^C(μ; x) + Q(x) )    subject to  μ ∈ M_c(R, n).    (13.27)

Our ideal goal is to find an optimal solution a† ∈ R_n of Problem (D) defined in (13.23) and to propose an optimal interpolation formula L_n[a†; f]. However, it is difficult to solve Problem (D) directly. Therefore, with a view to a discrete analogue of Theorem 13.1, we define the discrete energy I_n^D(a) as

    I_n^D(a) := Σ_{i=1}^{n} Σ_{j=1, j≠i}^{n} K(a_i − a_j) + (2(n − 1)/n) Σ_{i=1}^{n} Q(a_i)    (13.28)

for a = (a_1, . . . , a_n) ∈ R_n, and consider its minimization. We can show the solvability of the minimization of I_n^D.

Theorem 13.2 ([15, Theorem 3.3]) Under Assumptions 13.2.1 and 13.2.2, the energy I_n^D is convex in R_n, and there is a unique minimizer of I_n^D in R_n.

Let a* = (a_1*, . . . , a_n*) ∈ R_n be the minimizer of I_n^D, and let F_{K,Q}^D(n) be the number defined by

    F_{K,Q}^D(n) = I_n^D(a*) − ((n − 1)/n) Σ_{i=1}^{n} Q(a_i*).    (13.29)

Then, we can show an inequality which indicates that a* is an approximate solution of Problem (D), although it does not quantify the accuracy of that solution.

Theorem 13.3 ([15, Theorem 3.4]) Let a* ∈ R_n be the minimizer of I_n^D. Under Assumptions 13.2.1 and 13.2.2, we have

    U_n^D(a*; x) + Q(x) ≥ F_{K,Q}^D(n)/(n − 1)    for any x ∈ R.    (13.30)

Based on this theorem, we use the minimizer a* ∈ R_n of I_n^D as the set of sampling points for the formula L_n[a; f] in (13.13).
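The minimization of the convex discrete energy (13.28) can be sketched with a damped Newton iteration. In the following minimal example, the strip width d, the number of points n, the weight w(x) = sech(x/2) (the weight w_1 of Sect. 13.5), and the starting configuration are illustrative assumptions, not the chapter's exact experimental setup.

```python
import numpy as np

d = np.pi                    # illustrative strip width
b = np.pi / (4 * d)          # so that T_d(x) = tanh(b x)
n = 9                        # illustrative number of sampling points

Q   = lambda x: np.log(np.cosh(x / 2))            # Q = -log w for w(x) = sech(x/2)
dQ  = lambda x: 0.5 * np.tanh(x / 2)
d2Q = lambda x: 0.25 / np.cosh(x / 2) ** 2

K   = lambda u: -np.log(np.abs(np.tanh(b * u)))   # kernel (13.19)
dK  = lambda u: -2 * b / np.sinh(2 * b * u)
d2K = lambda u: 4 * b**2 * np.cosh(2 * b * u) / np.sinh(2 * b * u) ** 2

c = 2 * (n - 1) / n          # factor in front of the external field in (13.28)
off = ~np.eye(n, dtype=bool)

def energy(a):               # discrete energy I_n^D(a) of (13.28)
    return K((a[:, None] - a[None, :])[off]).sum() + c * Q(a).sum()

def grad_hess(a):
    diff = np.where(off, a[:, None] - a[None, :], 1.0)   # dummy 1.0 on the diagonal
    g = np.where(off, dK(diff), 0.0)
    h = np.where(off, d2K(diff), 0.0)
    grad = 2 * g.sum(axis=1) + c * dQ(a)
    hess = -2 * h
    np.fill_diagonal(hess, 2 * h.sum(axis=1) + c * d2Q(a))
    return grad, hess

a = np.linspace(-4.0, 4.0, n)            # ordered initial configuration
for _ in range(60):                      # damped Newton iteration
    grad, hess = grad_hess(a)
    if np.max(np.abs(grad)) < 1e-12:
        break
    step = np.linalg.solve(hess, grad)
    t = 1.0
    while t > 1e-12 and (np.any(np.diff(a - t * step) <= 0)
                         or energy(a - t * step) >= energy(a)):
        t /= 2                           # keep a_1 < ... < a_n and keep descending
    a = a - t * step

print(a)
```

Because the energy is convex on R_n (Theorem 13.2), the damped Newton iteration converges rapidly to the unique minimizer; since K and Q here are even, the computed points come out symmetric about the origin.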


13.3.3 Construction of Accurate Formulas for Approximating Functions

By using the minimizer a* ∈ R_n of I_n^D, we propose the approximation formula L_n[a*; f] for f ∈ H^∞(D_d, w), where L_n[a; f] is defined by (13.13). We can provide an error estimate of this formula.

Theorem 13.4 ([15, Theorem 4.1]) Let a* ∈ R_n be the minimizer of the discrete energy I_n^D, and let L_n[a*; f] be the approximation formula for f ∈ H^∞(D_d, w) given by (13.13) with a = a*. Under Assumptions 13.2.1 and 13.2.2, we have

    sup_{f∈H^∞(D_d,w), ‖f‖≤1} sup_{x∈R} | f(x) − L_n[a*; f](x) | ≤ exp( − F_{K,Q}^D(n)/n ).    (13.31)

Proof From Inequality (13.15) in Proposition 13.1 and Theorem 13.3, we have

    sup_{‖f‖≤1} sup_{x∈R} | f(x) − L_n[a*; f](x) | ≤ sup_{x∈R} | B_n(x; a*, D_d) w(x) |
        = sup_{x∈R} exp( −U_n^D(a*; x) − Q(x) )
        ≤ exp( − F_{K,Q}^D(n)/n ).

We compute a numerical approximation of the minimizer a* of I_n^D by Newton's method. For the output ã* = (ã_1*, . . . , ã_n*) ∈ R_n of this algorithm, we use the formula L_n[ã*; f] for approximating functions in H^∞(D_d, w). Furthermore, we can derive barycentric forms corresponding to this formula. Their explicit forms are given by

    (I)   L_n[ã*; f](x) = w(x) [ ∏_{j=1}^{n} tanh(π(x − ã_j*)/(4d)) ] Σ_{k=1}^{n} ( 2 λ̃_k*/sinh(π(x − ã_k*)/(2d)) ) · f(ã_k*)/w(ã_k*),

    (II)  L̃_n[ã*; f](x) = w(x) [ Σ_{k=1}^{n} ( 2 λ̃_k*/sinh(π(x − ã_k*)/(2d)) ) · f(ã_k*)/w(ã_k*) ] / [ Σ_{j=1}^{n} 2 λ̃_j*/sinh(π(x − ã_j*)/(2d)) ],

where

    λ̃_k* = ∏_{j≠k} 1/tanh(π(ã_k* − ã_j*)/(4d))    (k = 1, . . . , n).

Formula (I) is derived just by deforming the expression of L_n[ã*; f], whereas Formula (II) is an approximation of L_n[ã*; f]. These barycentric formulas are useful for fast computation.
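The equivalence of Formula (I) with the Lagrange form (13.13) can be checked numerically. In the following sketch, the strip width d, the weight w(x) = sech(x/2), the points a (standing in for the Newton output ã*), and the data values are all illustrative assumptions.

```python
import numpy as np

d = np.pi
b = np.pi / (4 * d)                      # T_d(x) = tanh(b x), so pi/(2d) = 2b
w = lambda x: 1 / np.cosh(x / 2)
a = np.array([-2.17, -1.33, -0.41, 0.38, 1.12, 2.06])   # illustrative points
n = len(a)
fv = np.sin(a)                           # arbitrary data values f(a_k)

def L_lagrange(x):
    """Direct evaluation of (13.13); on R, B_{n;k}(x) = prod_{j != k} T_d(x - a_j)
    and T_d'(x - a_k)/T_d'(0) = sech(b (x - a_k))**2."""
    out = np.zeros_like(x)
    for k in range(n):
        Bx = np.prod([np.tanh(b * (x - a[j])) for j in range(n) if j != k], axis=0)
        Ba = np.prod([np.tanh(b * (a[k] - a[j])) for j in range(n) if j != k])
        out += fv[k] * (Bx / Ba) * (w(x) / w(a[k])) / np.cosh(b * (x - a[k])) ** 2
    return out

def L_barycentric(x):
    """Formula (I)."""
    lam = np.array([np.prod([1 / np.tanh(b * (a[k] - a[j]))
                             for j in range(n) if j != k]) for k in range(n)])
    prod_all = np.prod([np.tanh(b * (x - a[j])) for j in range(n)], axis=0)
    s = sum(2 * lam[k] / np.sinh(2 * b * (x - a[k])) * fv[k] / w(a[k])
            for k in range(n))
    return w(x) * prod_all * s

x = np.linspace(-3.0, 3.0, 101)          # grid avoiding the points a
print(np.max(np.abs(L_lagrange(x) - L_barycentric(x))))
```

The two evaluations agree to machine precision; Formula (I) merely regroups the factors of (13.13) via 2 tanh(t)/sinh(2t) = sech²(t), which is why it is preferable for fast evaluation.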

13.4 Accurate Numerical Integration Formulas

13.4.1 Characterization of Optimal Numerical Integration Formulas

The minimum error norm E_n^min(H^∞(D_d, w)) in (13.8) can be expressed by the norm of a certain integral operator. According to Sugihara [9], we have

    E_n^min(H^∞(D_d, w)) = inf_{a_i∈D_d} [ sup_{g∈H^∞(D_d,w), ‖g‖≤1} | ∫_{−∞}^{∞} g(x) B_n(x; a, D_d) dx | ].    (13.32)

In fact, we can obtain a simpler expression for it, provided by the following theorem.

Theorem 13.5 ([19, Theorem 1]) We have

    E_n^min(H^∞(D_d, w)) = inf_{a_i∈R} ∫_{−∞}^{∞} w(x) |B_n(x; a, D_d)|² dx,    (13.33)

where inf denotes the infimum over any n sampling points a_i ∈ R.

Theorem 13.5 follows from the inequalities

    E_n^min(H^∞(D_d, w)) ≥ inf_{a_i∈R} ∫_{−∞}^{∞} w(x) |B_n(x; a, D_d)|² dx,    (13.34)

    E_n^min(H^∞(D_d, w)) ≤ inf_{a_i∈R} ∫_{−∞}^{∞} w(x) |B_n(x; a, D_d)|² dx.    (13.35)

Inequality (13.34) can be shown by specializing g in (13.32). As shown in [9], by setting g(z) = w(z) conj(B_n(z̄; a, D_d)) in (13.32), we can obtain a lower bound of E_n^min(H^∞(D_d, w)):

    E_n^min(H^∞(D_d, w)) ≥ inf_{a_i∈D_d} ∫_{−∞}^{∞} w(x) |B_n(x; a, D_d)|² dx.    (13.36)


In fact, by using the inequality in [10, Proof of Lemma 4.3], we can restrict the range of the sampling points a_i to R in (13.36) to deduce Inequality (13.34). The details of this argument² are presented in Appendix 1.

To show Inequality (13.35), we choose a numerical integration formula (i.e., the c_ij and a_i) and show that the RHS of (13.35) is an upper bound of the operator norm of D^{ℓ,m_p}_{c_pq, a_p} given by (13.9). As such an n-point numerical integration formula, we consider the formula yielded by integrating the Hermite interpolation on R by the transformed Blaschke product B_n(x; a, D_d):

    H_n[a*; f](x) = Σ_{i=1}^{n} f(a_i*) u_i(x; a*) + Σ_{i=1}^{n} f′(a_i*) v_i(x; a*),    (13.37)

where a* is a sequence that attains the infimum of the RHS in (13.35), and

    v_i(x; a*) = ( w(x) B_n(x; a*, D_d)² / ( T_d′(0)² B_{n;i}(a_i*; a*, D_d)² w(a_i*) ) ) · T_d′(x − a_i*)/T_d(x − a_i*),    (13.38)

    u_i(x; a*) = −( 2 B_{n;i}′(a_i*; a*, D_d)/B_{n;i}(a_i*; a*, D_d) + w′(a_i*)/w(a_i*) ) v_i(x; a*)
                 + ( w(x) B_{n;i}(x; a*, D_d)² / ( w(a_i*) B_{n;i}(a_i*; a*, D_d)² ) ) · 1/cosh³(π(x − a_i*)/(4d)).    (13.39)

In fact, we can show that the integrals of the functions v_i(· ; a*) vanish:

    ∫_{−∞}^{∞} v_i(x; a*) dx = 0.    (13.40)

Then, by integrating H_n[a*; f] we have

    ∫_{−∞}^{∞} H_n[a*; f](x) dx = Σ_{i=1}^{n} c_i^(0) f(a_i*),    (13.41)

where

    c_i^(0) = ∫_{−∞}^{∞} u_i(x; a*) dx.    (13.42)

² We correct the flaw in the inequality after (10) in [19].


Because of (13.39) and (13.40), we can use the formula³

    c_i^(0) = ∫_{−∞}^{∞} ( w(x) B_{n;i}(x; a*, D_d)² / ( w(a_i*) B_{n;i}(a_i*; a*, D_d)² ) ) · 1/cosh³(π(x − a_i*)/(4d)) dx.    (13.43)

Therefore, we can regard the RHS of (13.41) as an n-point integration formula. Then, we can prove Inequality (13.35) by using the error of the Hermite interpolation. We describe the details of the above argument⁴ in Appendix 2.

13.4.2 Construction of Accurate Numerical Integration Formulas

13.4.2.1 Basic Idea

We consider the optimization problem given by (13.33) in order to obtain a sequence of sampling points a. However, because it may be difficult to obtain an exact solution of the problem, we consider the following approximate solution. By noting that

    ∫_{−∞}^{∞} w(x) |B_n(x; a, D_d)|² dx ≤ ∫_{−α}^{α} w(x) |B_n(x; a, D_d)|² dx + ∫_{|x|>α} w(x) dx
        ≤ 2α sup_{x∈R} [ w(x) |B_n(x; a, D_d)|² ] + ∫_{|x|>α} w(x) dx    (13.44)

for a sufficiently large α > 0, we consider the minimization of the value sup_{x∈R} [ w(x) |B_n(x; a, D_d)|² ] in (13.44) instead of the original problem. Because this minimization problem is equivalent to that given by

    inf_{a_i∈R} [ sup_{x∈R} { (1/2) log w(x) + Σ_{j=1}^{n} log |T_d(x − a_j)| } ],    (13.45)

we obtain a by solving this problem. The expression in (13.45) is obtained by replacing log w(x) in (13.18) by (1/2) log w(x). Therefore, an approximate solution of (13.45) can be given by the method presented in Sects. 13.3.2 and 13.3.3. More precisely, we can obtain sampling points if we minimize the modified discrete energy Î_n^D given by replacing Q(x) by Q̂(x) = −(1/2) log w(x) in (13.28):

    Î_n^D(a) := Σ_{i=1}^{n} Σ_{j=1, j≠i}^{n} K(a_i − a_j) + (2(n − 1)/n) Σ_{i=1}^{n} Q̂(a_i).    (13.46)

³ This formula guarantees c_i^(0) ≥ 0.
⁴ We correct the flaw in the form of u_i in [19], which does not influence the conclusion of Theorem 1 in [19].

After obtaining the sampling points, we compute approximate values of the coefficients c_j^(0) in (13.42).

13.4.2.2 Accurate Formulas for Numerical Integration

We begin by minimizing the modified discrete energy Î_n^D to obtain the sequence â* as a set of sampling points. Then, we compute approximate values c̃_j^(0) of the c_j^(0) in (13.43) by the trapezoidal rule. Finally, we obtain the numerical integration formula:

    ∫_{−∞}^{∞} f(x) dx ≈ Σ_{i=1}^{n} c̃_i^(0) f(â_i*).    (13.47)

However, we need to modify the method to compute the coefficients c̃_i^(0). This is because the sequence â* does not necessarily attain the infimum in (13.35), so we may only have ∫_{−∞}^{∞} v_i(x; â*) dx ≈ 0. Then, we employ the Lagrange interpolation L_n[â*; f](x) to generate coefficients. That is, we use

    c_i := ∫_{−∞}^{∞} ( w(x)/w(â_i*) ) · ( B_{n;i}(x; â*, D_d)/B_{n;i}(â_i*; â*, D_d) ) · ( T_d′(x − â_i*)/T_d′(0) ) dx    (13.48)

as a coefficient for f(â_i*). We compute approximate values c̃_i of the c_i by the trapezoidal rule⁵

    c̃_i := h̃ Σ_{k=−ñ}^{ñ} ( w(kh̃)/w(â_i*) ) · ( B_{n;i}(kh̃; â*, D_d)/B_{n;i}(â_i*; â*, D_d) ) · ( T_d′(kh̃ − â_i*)/T_d′(0) )    (13.49)

and obtain the numerical integration formula:

    ∫_{−∞}^{∞} f(x) dx ≈ Σ_{i=1}^{n} c̃_i f(â_i*).    (13.50)

⁵ It is better to compute approximations of the c_i without using a numerical integration formula. Finding an efficient and accurate method for this computation is a theme of future work.


It has not been shown that c_i = c_i^(0) for exactly optimal sampling points. However, we observe that they are close in an example shown in Sect. 13.5.2. In fact, we also observe that the coefficients c_i provide accurate numerical integration.
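The weight computation (13.49) and the resulting quadrature (13.50) can be sketched as follows. In this minimal example, d, the weight w(x) = sech(x/2), the equispaced points a (a stand-in for the minimizer â* of (13.46), not the optimized points), and the test integrand are all illustrative assumptions.

```python
import numpy as np

d = np.pi
b = np.pi / (4 * d)                        # T_d(x) = tanh(b x)
w = lambda x: 1 / np.cosh(x / 2)
a = np.linspace(-12.0, 12.0, 13)           # illustrative (non-optimized) points
n = len(a)

ht = 0.05                                  # step of the quadrature grid in (13.49)
xg = np.arange(-40.0, 40.0, ht)

def basis(i, x):
    """i-th Lagrange basis of (13.13):
    (w(x)/w(a_i)) * B_{n;i}(x)/B_{n;i}(a_i) * T_d'(x - a_i)/T_d'(0)."""
    Bx = np.prod([np.tanh(b * (x - a[j])) for j in range(n) if j != i], axis=0)
    Ba = np.prod([np.tanh(b * (a[i] - a[j])) for j in range(n) if j != i])
    return (w(x) / w(a[i])) * (Bx / Ba) / np.cosh(b * (x - a[i])) ** 2

c = np.array([ht * basis(i, xg).sum() for i in range(n)])   # weights as in (13.49)

f = lambda x: 1 / np.cosh(x / 2)           # integral of sech(x/2) over R is 2*pi
print(c @ f(a))                            # formula (13.50)
```

By linearity, applying the weights to f is identical to integrating the interpolant L_n[a; f] on the same grid; with these crude equispaced points the result already lands near 2π, and the optimized points of (13.46) are designed to make such errors much smaller.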

13.5 Numerical Experiments

13.5.1 Approximation of Functions

We consider the function g given by
\[
g(t) = t\,\sqrt{1 - t^2} \tag{13.51}
\]
and its approximation for \(t \in (-1, 1)\). The function g has singularities at the endpoints \(t = \pm 1\). In order to mitigate the difficulty of approximating g in their neighborhoods, we employ the variable transformations \(\psi_{\mathrm{SE}}\) and \(\psi_{\mathrm{DE}}\) in (13.3) and (13.2), respectively. Then, we consider approximations of the transformed functions
\[
f_1(x) = g(\psi_{\mathrm{SE}}(x)) = w_1(x)\,\tanh\frac{x}{2}, \tag{13.52}
\]
\[
f_2(x) = g(\psi_{\mathrm{DE}}(x)) = w_2(x)\,\tanh\Bigl(\frac{\pi}{2}\sinh x\Bigr) \tag{13.53}
\]
for \(x \in \mathbb{R}\), where
\[
w_1(x) = \operatorname{sech}\frac{x}{2}, \tag{13.54}
\]
\[
w_2(x) = \operatorname{sech}\Bigl(\frac{\pi}{2}\sinh x\Bigr). \tag{13.55}
\]
By letting
\[
d_1 = \pi - \varepsilon, \qquad d_2 = \frac{\pi}{2} - \varepsilon \tag{13.56}
\]
with \(0 < \varepsilon \ll 1\), we can confirm that, for i = 1, 2, the weight function \(w_i\) satisfies Assumptions 13.2.1 and 13.2.2 for \(d = d_i\). Furthermore, the assertion \(f_i \in H^\infty(D_{d_i}, w_i)\) holds true for i = 1, 2. In the following, we choose \(\varepsilon = 10^{-10}\).

For the functions \(f_1\) and \(f_2\), we compare Formulas (I), (II), and the sinc interpolation formula in (13.4). We need to determine the parameters \(N_\pm\) and h in the latter. Since the weights \(w_i\) are even, we consider odd n as the number of sampling points and set \(N_- = N_+ = (n-1)/2\). Furthermore, we choose the width \(h > 0\) so that the orders of the sampling error and truncation error (almost) coincide [10, 16]. Actually, the former is \(O(\exp(-\pi d_i/h))\) \((h \to 0)\) and the latter is estimated, depending on the weight \(w_i\), as
\[
\sum_{|k| > (n-1)/2} |f_i(kh)| =
\begin{cases}
O(\exp(-nh/4)) & (i = 1),\\[2pt]
O\bigl(\exp(-(\pi/4)\exp(nh/2))\bigr) & (i = 2),
\end{cases}
\qquad (n \to \infty).
\]
Then, we set
\[
h =
\begin{cases}
\bigl(4\pi(\pi - \varepsilon)/n\bigr)^{1/2} & (i = 1),\\[2pt]
2n^{-1}\log\bigl((\pi - 2\varepsilon)\,n\bigr) & (i = 2).
\end{cases}
\]
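For comparison purposes, the sinc interpolation formula (13.4) with the SE-type mesh above can be sketched as follows. The test function is \(f_1\); the value of n and the grid of evaluation points are arbitrary choices for this sketch.

```python
import math

def sinc(t):
    """Cardinal sinc S(t) = sin(pi t) / (pi t)."""
    return 1.0 if t == 0.0 else math.sin(math.pi * t) / (math.pi * t)

def sinc_interp(f, h, N, x):
    """Symmetric sinc interpolation with N_- = N_+ = N, as in (13.4)."""
    return sum(f(k * h) * sinc(x / h - k) for k in range(-N, N + 1))

# SE-transformed test function f_1 and the mesh h = (4 pi (pi - eps) / n)^(1/2)
eps = 1e-10
f1 = lambda x: math.tanh(x / 2) / math.cosh(x / 2)
n = 101
N = (n - 1) // 2
h = math.sqrt(4 * math.pi * (math.pi - eps) / n)
err = max(abs(f1(x) - sinc_interp(f1, h, N, x)) for x in [0.37 * j for j in range(-40, 41)])
```

At the sample points themselves the interpolant reproduces \(f_1\) exactly, and between them the error decays at the exponential rate discussed above.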

We choose the evaluation points \(x_\ell\) in the following manner. We find a value \(x_1 \le 0\) satisfying
\[
w_i(x_1) \le
\begin{cases}
10^{-20} & (i = 1),\\
10^{-75} & (i = 2),
\end{cases}
\]
and set \(x_{1001} = -x_1\). Then, we choose \(x_\ell\) for \(2 \le \ell \le 1000\) such that they divide the interval \([x_1, x_{1001}]\) into equal segments. We choose \(x_1 = -100\) and \(-6\) for the computations of \(f_1\) and \(f_2\), respectively. We show the errors of Formulas (I), (II) and the sinc formula for \(f_1\) and \(f_2\) in Figs. 13.1 and 13.2, respectively. We can observe that Formulas (I) and (II) have approximately the same accuracy and that they outperform the sinc formula in each case.

Fig. 13.1 Errors for function 1 (f₁): log10 of the maximum error versus n for Proposed Formulas (I) and (II) and the sinc formula

Fig. 13.2 Errors for function 2 (f₂): log10 of the maximum error versus n for Proposed Formulas (I) and (II) and the sinc formula

13.5.2 Numerical Integration

We compute the integral
\[
\int_{-1}^{1} \frac{1}{\sqrt{1 - x^2}}\,dx = \pi
\]
by the following formulas:
• SE (TANH) transformation \(\psi_{\mathrm{SE}}\) + the trapezoidal formula,
• SE (TANH) transformation \(\psi_{\mathrm{SE}}\) + the proposed formula,
• DE transformation \(\psi_{\mathrm{DE}}\) + the trapezoidal formula (i.e., the DE formula),
• DE transformation \(\psi_{\mathrm{DE}}\) + the proposed formula.
Then, we observe the relative errors of the values computed by these formulas. The width d of the strip and the weight functions are determined as follows:
• SE: \(d = \pi - 10^{-15}\), \(\displaystyle w(t) = \frac{1}{2\cosh(t/2)}\),
• DE: \(d = \pi/2 - 10^{-15}\), \(\displaystyle w(t) = \frac{(\pi/2)\cosh t}{\cosh((\pi/2)\sinh t)}\).

Furthermore, in a manner similar to the sinc interpolation, we choose the width h for the trapezoidal formulas so that the orders of the sampling error and truncation error (almost) coincide [9, 17]. We set \(h = \sqrt{2\pi d/N}\) and \(h = \log(2\pi dN/(\pi/4))/N\) for the SE and DE transformations, respectively, where \(N = n/2\).

We show the results in Figs. 13.3, 13.4, and 13.5. Figure 13.3 displays the computed coefficients \(\tilde c_j^{(0)}\) and \(\tilde c_j\), which appear to be close. However, their differences may not be negligible, because the integrals of the functions \(v_i\) are not sufficiently close to zero, as shown in Fig. 13.4. Figure 13.5 is the graph of the relative errors. We can observe that the proposed formula outperforms the trapezoidal formula.
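The DE formula used as a baseline here (the trapezoidal rule applied after the transformation \(\psi_{\mathrm{DE}}\), with the mesh width \(h = \log(2\pi dN/(\pi/4))/N\) stated above) can be sketched as follows for the test integral; after the transformation the integrand is exactly the DE weight function given above.

```python
import math

def de_trapezoid(N):
    """Trapezoidal rule after the DE transformation x = tanh((pi/2) sinh t)
    applied to the integral of (1 - x^2)^(-1/2) over (-1, 1).  The
    transformed integrand equals w(t) = (pi/2) cosh t / cosh((pi/2) sinh t),
    and the mesh is h = log(2 pi d N / (pi/4)) / N with d = pi/2."""
    d = math.pi / 2
    h = math.log(2 * math.pi * d * N / (math.pi / 4)) / N
    s = 0.0
    for k in range(-N, N + 1):
        t = k * h
        s += (math.pi / 2) * math.cosh(t) / math.cosh((math.pi / 2) * math.sinh(t))
    return h * s

val = de_trapezoid(25)   # converges rapidly toward pi
```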


Fig. 13.3 Coefficients by the Lagrange and Hermite interpolation (DE, n = 30)

Fig. 13.4 Integrals of the functions vi (DE, n = 30)

13.6 Concluding Remarks

In this paper, we first described the method proposed in [15] for constructing accurate formulas for approximating functions in the weighted Hardy spaces \(H^\infty(D_d, w)\). The method is based on minimizing the discrete energy \(I_n^D\) in (13.28). Because \(I_n^D\) is convex under Assumption 13.2.2 (the assumption that the weight w is log-concave), we can find the minimizer of \(I_n^D\) by the standard Newton method and use it as a set of sampling points for interpolation. Furthermore, we derived the barycentric formulas, Formulas (I) and (II), for fast computation of the interpolation. We observed that they outperformed the existing sinc interpolation formulas.

Next, we showed the explicit form of the minimum error norm \(E_n^{\min}(H^\infty(D_d, w))\) for numerical integration formulas according to [19]. Then, to determine


Fig. 13.5 Relative errors of the numerical integration of \(F(x) = (1 - x^2)^{-1/2}\)

sampling points, we minimize the modified discrete energy \(\hat I_n^D\) in (13.46) to obtain approximate solutions of the optimization problem corresponding to (13.33). Then, we obtained Formula (13.50) by using those sampling points and the coefficients corresponding to them. We also observed that this formula outperformed the existing formulas based on the trapezoidal rule. As future work, we will work on improvements and/or precise error estimates for the constructed formulas. Some results of ongoing work in this direction are presented in [4] and [5].

Acknowledgments K. Tanaka is supported by the grant-in-aid of the Japan Society for the Promotion of Science (KAKENHI Grant Number JP17K14241). The authors thank the anonymous reviewers for their helpful comments on this article.

Appendix 1: Proof of the Fact That We Have Only to Consider Real Sampling Points in Sect. 13.4.1

It suffices to show the assertion that for any \(z \in D_d\) there exists \(a \in \mathbb{R}\) such that
\[
\left| \frac{T_d(x) - T_d(z)}{1 - \overline{T_d(z)}\,T_d(x)} \right|
\;\ge\;
\left| \frac{T_d(x) - T_d(a)}{1 - T_d(a)\,T_d(x)} \right| \tag{13.57}
\]
for any \(x \in \mathbb{R}\). To prove this assertion, we first show the following lemma.

Lemma 13.1 ([10, Proof of Lemma 4.3]) For any \(t \in (-1, 1)\) and \(s \in \mathbb{D} = \{\zeta \in \mathbb{C} \mid |\zeta| < 1\}\), we have
\[
\left| \frac{t - s}{1 - \bar s\, t} \right| \ge \left| \frac{t - \operatorname{Re} s}{1 - (\operatorname{Re} s)\, t} \right|.
\]

Proof For a real number t, we have
\[
\left| \frac{t - s}{1 - \bar s\, t} \right|^2
= \frac{t^2 + |s|^2 - 2(\operatorname{Re} s)t}{1 - 2(\operatorname{Re} s)t + |s|^2 t^2}
= \frac{(1 - t^2)(|s|^2 - 1)}{1 - 2(\operatorname{Re} s)t + |s|^2 t^2} + 1
= (1 - t^2)\,\frac{\alpha - 1}{\beta + \alpha t^2} + 1, \tag{13.58}
\]
where \(\alpha = |s|^2\) and \(\beta = 1 - 2(\operatorname{Re} s)t\). If \(-1 < t < 1\) and \(|s| < 1\), we have
\[
\frac{\partial}{\partial \alpha'}\!\left( \frac{\alpha' - 1}{\beta + \alpha' t^2} \right)
= \frac{(\beta + \alpha' t^2) - (\alpha' - 1)t^2}{(\beta + \alpha' t^2)^2}
= \frac{1 - 2(\operatorname{Re} s)t + t^2}{(\beta + \alpha' t^2)^2}
\ge \frac{1 - 2|t| + t^2}{(\beta + \alpha' t^2)^2}
= \frac{(1 - |t|)^2}{(\beta + \alpha' t^2)^2} \ge 0,
\]
which implies that the function \(\dfrac{\alpha' - 1}{\beta + \alpha' t^2}\) is increasing with respect to \(\alpha'\). Since \(\alpha = |s|^2 \ge (\operatorname{Re} s)^2\), we have
\[
(1 - t^2)\,\frac{\alpha - 1}{\beta + \alpha t^2} + 1
\ge (1 - t^2)\,\frac{(\operatorname{Re} s)^2 - 1}{\beta + (\operatorname{Re} s)^2 t^2} + 1
= \frac{(1 - t^2)((\operatorname{Re} s)^2 - 1)}{1 - 2(\operatorname{Re} s)t + (\operatorname{Re} s)^2 t^2} + 1
= \left| \frac{t - \operatorname{Re} s}{1 - (\operatorname{Re} s)\, t} \right|^2. \tag{13.59}
\]
Therefore the conclusion follows from (13.58) and (13.59). ∎

Second, we show the following lemma to derive Inequality (13.57) from Lemma 13.1.

Lemma 13.2 For any \(z \in D_d\) there exists \(a \in \mathbb{R}\) such that \(\operatorname{Re} T_d(z) = T_d(a)\).

Proof Let \(u = v + wi\) \((v, w \in \mathbb{R})\). Then, we have
\[
2\operatorname{Re}(\tanh u)
= \tanh(v + wi) + \overline{\tanh(v + wi)}
= \tanh(v + wi) + \tanh(v - wi)
= \frac{\tanh v + i\tan w}{1 + i\tanh v\tan w} + \frac{\tanh v - i\tan w}{1 - i\tanh v\tan w}
= \frac{2\tanh v + 2\tanh v\tan^2 w}{1 + \tanh^2 v\tan^2 w}.
\]

Therefore we have
\[
1 - \operatorname{Re}(\tanh u)
= \frac{1 + \tanh^2 v\tan^2 w - \tanh v - \tanh v\tan^2 w}{1 + \tanh^2 v\tan^2 w}
= \frac{1 - \tanh v + (\tanh v - 1)\tanh v\tan^2 w}{1 + \tanh^2 v\tan^2 w}
= \frac{(1 - \tanh v)(1 - \tanh v\tan^2 w)}{1 + \tanh^2 v\tan^2 w}. \tag{13.60}
\]
If we set \(u = \frac{\pi}{4d}(v' + w'i)\) \((v', w' \in \mathbb{R},\ |w'| < d)\), then \(v = \frac{\pi}{4d}v'\), \(w = \frac{\pi}{4d}w'\), and
\[
1 - \tanh v\tan^2 w \ge 1 - |\tanh v|\tan^2\frac{\pi}{4} = 1 - |\tanh v| > 0
\]
hold. Therefore it follows from (13.60) that \(1 - \operatorname{Re}(T_d(z)) > 0\) for \(z \in D_d\). In a similar manner, we can show that \(-1 - \operatorname{Re}(T_d(z)) < 0\). Consequently, there exists a real number a such that \(\operatorname{Re}(T_d(z)) = T_d(a)\). ∎

Based on Lemmas 13.1 and 13.2, we can show Inequality (13.57) by taking a real number a with \(\operatorname{Re}(T_d(z)) = T_d(a)\).

Appendix 2: Proof of Inequality (13.35)

Let \(F(a)\) denote the objective function of \(a = (a_1, \ldots, a_n)\) on the right-hand side of (13.35), i.e.,
\[
F(a) := \int_{-\infty}^{\infty} w(x)\,|B_n(x; a, D_d)|^2\,dx. \tag{13.61}
\]

Property of the Minimizers of F

To show Inequality (13.35), we use the properties of F shown below.

Lemma 13.3 ([19, Proposition 6]) The function F is differentiable with respect to each variable \(a_i\) in \(\mathbb{R}\) and
\[
\frac{\partial F}{\partial a_i}(a) = -2 \int_{-\infty}^{\infty} w(x)\,B_n(x; a, D_d)^2\,\frac{T_d'(x - a_i)}{T_d(x - a_i)}\,dx. \tag{13.62}
\]

Lemma 13.4 ([19, Proposition 5]) The function F is continuous in \(\mathbb{R}^n\) and has a minimizer in \(\mathbb{R}^n\).

Lemma 13.5 The minimizer \(a^*\) of F satisfies \(a_i^* \ne a_j^*\) \((i \ne j)\).

Proof We show the conclusion by contradiction. To this end, we can assume that \(a_1^* = a_2^*\) without loss of generality. Then, using \(\varepsilon > 0\), we perturb these points as \(a_1 = a_1^* - \varepsilon\), \(a_2 = a_1^* + \varepsilon\) and observe how the value \(F(a^*)\) changes. For simplicity, we set \(\gamma_d := \pi/(4d)\), \(C_d(x) := \cosh(\gamma_d x)\), \(S_d(x) := \sinh(\gamma_d x)\), and \(y = x - a_1^*\). Then, we have
\[
\frac{\partial}{\partial\varepsilon}\Bigl[T_d(y-\varepsilon)^2\,T_d(y+\varepsilon)^2\Bigr]
= 2\gamma_d\,T_d(y-\varepsilon)\,T_d(y+\varepsilon)\left(\frac{T_d(y-\varepsilon)}{C_d(y+\varepsilon)^2} - \frac{T_d(y+\varepsilon)}{C_d(y-\varepsilon)^2}\right)
= \frac{2\gamma_d\,T_d(\varepsilon-y)\,T_d(\varepsilon+y)\,S_d(2\varepsilon)\,C_d(2y)}{C_d(\varepsilon-y)^2\,C_d(\varepsilon+y)^2}
\;\to\; 0 \quad (\varepsilon \to +0),
\]
\[
\frac{\partial^2}{\partial\varepsilon^2}\Bigl[T_d(y-\varepsilon)^2\,T_d(y+\varepsilon)^2\Bigr]
= 2\gamma_d^2\,C_d(2y)\left(\frac{S_d(2\varepsilon)^2\,\bigl(C_d(2y) - 2S_d(\varepsilon-y)S_d(\varepsilon+y)\bigr)}{C_d(\varepsilon-y)^4\,C_d(\varepsilon+y)^4}
+ \frac{2\,C_d(2\varepsilon)\,T_d(\varepsilon-y)\,T_d(\varepsilon+y)}{C_d(\varepsilon-y)^2\,C_d(\varepsilon+y)^2}\right)
\;\to\; -\frac{4\gamma_d^2\,C_d(2y)\,T_d(y)^2}{C_d(y)^4} \quad (\varepsilon \to +0).
\]
Because \(-4\gamma_d^2\,C_d(2y)\,T_d(y)^2/C_d(y)^4 < 0\) holds if \(y \ne 0\), we have
\[
\frac{\partial}{\partial\varepsilon}\left[\int_{-\infty}^{\infty} w(x)\,T_d(x-a_1^*-\varepsilon)^2\,T_d(x-a_1^*+\varepsilon)^2\prod_{j=3}^{n}T_d(x-a_j^*)^2\,dx\right]_{\varepsilon\to+0} = 0,
\]
\[
\frac{\partial^2}{\partial\varepsilon^2}\left[\int_{-\infty}^{\infty} w(x)\,T_d(x-a_1^*-\varepsilon)^2\,T_d(x-a_1^*+\varepsilon)^2\prod_{j=3}^{n}T_d(x-a_j^*)^2\,dx\right]_{\varepsilon\to+0} < 0.
\]
Therefore a sufficiently small \(\varepsilon > 0\) yields the inequality
\[
\int_{-\infty}^{\infty} w(x)\,T_d(x-a_1^*)^4 \prod_{j=3}^{n} T_d(x-a_j^*)^2\,dx
> \int_{-\infty}^{\infty} w(x)\,T_d(x-a_1^*-\varepsilon)^2\,T_d(x-a_1^*+\varepsilon)^2 \prod_{j=3}^{n} T_d(x-a_j^*)^2\,dx, \tag{13.63}
\]
which contradicts the assumption that \(a^*\) is a minimizer of F. Consequently the components of \(a^*\) are distinct. ∎
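The perturbation argument in the proof of Lemma 13.5 can be checked numerically: splitting a coincident pair of sampling points by a small \(\varepsilon\) decreases the energy-type integral of (13.61). The weight \(w(x) = \operatorname{sech}(x/2)\), the value \(d = \pi\), and the mesh parameters below are arbitrary choices for this check.

```python
import math

def Td(x, d):
    """T_d(x) = tanh(pi x / (4 d))."""
    return math.tanh(math.pi * x / (4 * d))

def energy(shifts, d, h=0.05, half_range=60.0):
    """Trapezoidal value of the integral of w(x) * prod_j T_d(x - a_j)^2
    with w(x) = sech(x / 2), i.e. the objective F(a) of (13.61)."""
    n = int(half_range / h)
    s = 0.0
    for k in range(-n, n + 1):
        x = k * h
        prod = 1.0
        for a in shifts:
            prod *= Td(x - a, d) ** 2
        s += prod / math.cosh(x / 2)
    return h * s

d = math.pi
coincident = energy([0.0, 0.0], d)   # a_1 = a_2: the T_d(x)^4 configuration
split = energy([-0.3, 0.3], d)       # the perturbed pair a_1* -/+ eps
```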

Hermite Interpolation

Based on Lemma 13.5, we derive the explicit form of the Hermite interpolation on \(\mathbb{R}\) by the transformed Blaschke product for a sequence \(a = (a_1, \ldots, a_n)\) whose components are distinct. To this end, we start with the 2n-point Lagrange interpolation
\[
L_{2n}[(a;b); f](x)
= \sum_{i=1}^{n} f(a_i)\,\frac{w(x)\,B_{n;i}(x;a)\,B_n(x;b)\,T_d'(x-a_i)}{w(a_i)\,B_{n;i}(a_i;a)\,B_n(a_i;b)\,T_d'(0)}
+ \sum_{i=1}^{n} f(b_i)\,\frac{w(x)\,B_n(x;a)\,B_{n;i}(x;b)\,T_d'(x-b_i)}{w(b_i)\,B_n(b_i;a)\,B_{n;i}(b_i;b)\,T_d'(0)},
\]
where we employ the simplified notation
\[
B_n(x;a) := B_n(x;a,D_d) = \prod_{j=1}^{n} T_d(x-a_j), \qquad
B_{n;i}(x;a) := B_{n;i}(x;a,D_d) = \prod_{\substack{1\le j\le n\\ j\ne i}} T_d(x-a_j),
\]
derived for \(x, a_i \in \mathbb{R}\) from (13.11) and (13.12), respectively, and choose a sequence \(b = (b_1, \ldots, b_n)\) such that the components of \((a;b) = (a_1, \ldots, a_n; b_1, \ldots, b_n)\) are distinct. Then, we take the limit
\[
b_j \to a_j \quad (j = 1, \ldots, n) \tag{13.64}
\]
to obtain the Hermite interpolation. With \(V := B_{n;i}(x;a)\,B_{n;i}(x;b)\), \(W := B_{n;i}(a_i;a)\,B_{n;i}(a_i;b)\), and \(Y := B_{n;i}(b_i;a)\,B_{n;i}(b_i;b)\), the sum of the terms corresponding to \(f(a_i)\) and \(f(b_i)\) is written in the form
\[
f(a_i)\,\frac{w(x)\,B_{n;i}(x;a)\,B_n(x;b)\,T_d'(x-a_i)}{w(a_i)\,B_{n;i}(a_i;a)\,B_n(a_i;b)\,T_d'(0)}
+ f(b_i)\,\frac{w(x)\,B_n(x;a)\,B_{n;i}(x;b)\,T_d'(x-b_i)}{w(b_i)\,B_n(b_i;a)\,B_{n;i}(b_i;b)\,T_d'(0)}
\]
\[
= \frac{w(x)\,V\,T_d(x-a_i)\,T_d(x-b_i)}{T_d'(0)}\cdot\frac{a_i-b_i}{T_d(a_i-b_i)}\cdot
\frac{1}{a_i-b_i}\left(
\frac{f(a_i)}{w(a_i)\,W}\,\frac{T_d'(x-a_i)}{T_d(x-a_i)}
- \frac{f(b_i)}{w(b_i)\,Y}\,\frac{T_d'(x-b_i)}{T_d(x-b_i)}
\right). \tag{13.65}
\]
Furthermore, we have
\[
\bigl(\text{the inside of } (\;) \text{ in (13.65)}\bigr)
= \frac{f(a_i)}{w(a_i)}\,\frac{T_d'(x-a_i)}{T_d(x-a_i)}\,
\underbrace{\left(\frac{1}{W}-\frac{1}{Y}\right)}_{=:P}
+ \frac{1}{Y}\,
\underbrace{\left(\frac{f(a_i)}{w(a_i)}\,\frac{T_d'(x-a_i)}{T_d(x-a_i)}
- \frac{f(b_i)}{w(b_i)}\,\frac{T_d'(x-b_i)}{T_d(x-b_i)}\right)}_{=:Q}. \tag{13.66}
\]
If we first take the limit \(b_i \to a_i\) and then take \(b_j \to a_j\) \((j \ne i)\), we have
\[
\frac{1}{Y} \to \frac{1}{B_{n;i}(a_i;a)^2},
\]
\[
\frac{P}{a_i-b_i} \to \frac{\partial}{\partial t}\!\left[\frac{1}{B_{n;i}(t;a)\,B_{n;i}(t;b)}\right]_{t=a_i}
= -\frac{B_{n;i}'(a_i;a)}{B_{n;i}(a_i;a)^2\,B_{n;i}(a_i;b)}
 - \frac{B_{n;i}'(a_i;b)}{B_{n;i}(a_i;a)\,B_{n;i}(a_i;b)^2}
\quad (b_i \to a_i)
\]
\[
\to -\frac{2\,B_{n;i}'(a_i;a)}{B_{n;i}(a_i;a)^3} \quad (b_j \to a_j\ (j \ne i)),
\]
\[
\frac{Q}{a_i-b_i} \to \frac{\partial}{\partial t}\!\left[\frac{f(t)}{w(t)}\,\frac{T_d'(x-t)}{T_d(x-t)}\right]_{t=a_i}
= f(a_i)\,\frac{\partial}{\partial t}\!\left[\frac{1}{w(t)}\,\frac{T_d'(x-t)}{T_d(x-t)}\right]_{t=a_i}
+ f'(a_i)\,\frac{1}{w(a_i)}\,\frac{T_d'(x-a_i)}{T_d(x-a_i)}.
\]
Since \((a_i-b_i)/T_d(a_i-b_i) \to 1/T_d'(0)\) and \(V\,T_d(x-a_i)\,T_d(x-b_i) \to B_n(x;a)^2\), the expression in (13.65) tends to
\[
\frac{w(x)\,B_n(x;a)^2}{T_d'(0)^2}
\left\{ \frac{f(a_i)}{w(a_i)}\,\frac{T_d'(x-a_i)}{T_d(x-a_i)}
\left(-\frac{2\,B_{n;i}'(a_i;a)}{B_{n;i}(a_i;a)^3}\right)
+ \frac{1}{B_{n;i}(a_i;a)^2}
\left( f(a_i)\,\frac{\partial}{\partial t}\!\left[\frac{1}{w(t)}\,\frac{T_d'(x-t)}{T_d(x-t)}\right]_{t=a_i}
+ f'(a_i)\,\frac{1}{w(a_i)}\,\frac{T_d'(x-a_i)}{T_d(x-a_i)} \right) \right\}.
\]
Consequently, the Hermite interpolation \(H_n[a;f](x)\) is given by
\[
H_n[a;f](x) := \sum_{i=1}^{n} f(a_i)\,u_i(x;a) + \sum_{i=1}^{n} f'(a_i)\,v_i(x;a), \tag{13.67}
\]
where
\[
v_i(x;a) := \frac{w(x)\,B_n(x;a)^2}{T_d'(0)^2\,B_{n;i}(a_i;a)^2\,w(a_i)}\,\frac{T_d'(x-a_i)}{T_d(x-a_i)}, \qquad
u_i(x;a) := \frac{w(x)\,B_n(x;a)^2}{T_d'(0)^2\,B_{n;i}(a_i;a)^2}
\left( -\frac{2\,B_{n;i}'(a_i;a)}{B_{n;i}(a_i;a)}\,\frac{1}{w(a_i)}\,\frac{T_d'(x-a_i)}{T_d(x-a_i)}
+ \frac{\partial}{\partial t}\!\left[\frac{1}{w(t)}\,\frac{T_d'(x-t)}{T_d(x-t)}\right]_{t=a_i} \right). \tag{13.68}
\]
In particular, \(u_i\) can be rewritten as
\[
u_i(x;a) = -\left(\frac{2\,B_{n;i}'(a_i;a)}{B_{n;i}(a_i;a)} + \frac{w'(a_i)}{w(a_i)}\right) v_i(x;a)
+ \frac{w(x)\,B_n(x;a)^2}{T_d'(0)^2\,B_{n;i}(a_i;a)^2\,w(a_i)}\,
\frac{\partial}{\partial t}\!\left[\frac{T_d'(x-t)}{T_d(x-t)}\right]_{t=a_i},
\]
and \(v_i\) as
\[
v_i(x;a) = \frac{4d}{\pi}\,\frac{w(x)\,B_{n;i}(x;a)^2}{w(a_i)\,B_{n;i}(a_i;a)^2}\,
\frac{\sinh\!\bigl(\frac{\pi}{4d}(x-a_i)\bigr)}{\cosh^3\!\bigl(\frac{\pi}{4d}(x-a_i)\bigr)}. \tag{13.69}
\]
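The Hermite property \(v_i'(a_i) = 1\) implied by (13.69) can be checked numerically with a central difference; the weight, the value of d, and the sampling points below are arbitrary choices for this check.

```python
import math

def Td(x, d):
    """T_d(x) = tanh(pi x / (4 d))."""
    return math.tanh(math.pi * x / (4 * d))

def v_i(x, i, a, w, d):
    """v_i(x; a) in the sinh/cosh^3 form of (13.69); note 4d/pi = 1/g."""
    g = math.pi / (4 * d)
    B = lambda t: math.prod(Td(t - aj, d) for j, aj in enumerate(a) if j != i)
    y = x - a[i]
    return (w(x) * B(x) ** 2 / (g * w(a[i]) * B(a[i]) ** 2)
            * math.sinh(g * y) / math.cosh(g * y) ** 3)

d = math.pi
w = lambda x: 1.0 / math.cosh(x / 2)
a = [-1.0, 0.0, 1.5]
h = 1e-5
deriv = (v_i(a[1] + h, 1, a, w, d) - v_i(a[1] - h, 1, a, w, d)) / (2 * h)
```

The function also vanishes at the other sampling points, as the factor \(B_{n;i}(x;a)^2\) requires.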


Proof of Inequality (13.35)

It follows from Lemma 13.4 that the function F has a minimizer \(a^* \in \mathbb{R}^n\). Owing to Lemma 13.3, it satisfies
\[
\int_{-\infty}^{\infty} w(x)\,B_n(x;a^*)^2\,\frac{T_d'(x-a_i^*)}{T_d(x-a_i^*)}\,dx = 0
\]
for \(i = 1, \ldots, n\). Then, because of Lemma 13.5 and Formula (13.68), this is equivalent to
\[
\int_{-\infty}^{\infty} v_i(x;a^*)\,dx = 0 \tag{13.70}
\]
for \(i = 1, \ldots, n\). Then, by integrating \(H_n[a^*; f]\), we obtain the n-point numerical integration formula
\[
\int_{-\infty}^{\infty} H_n[a^*; f](x)\,dx = \sum_{i=1}^{n} c_i^{(0)}\,f(a_i^*), \tag{13.71}
\]
where
\[
c_i^{(0)} = \int_{-\infty}^{\infty} u_i(x;a^*)\,dx. \tag{13.72}
\]
Therefore we can provide an upper bound of the minimum worst-case error as follows:
\[
E_n^{\min}(H^\infty(D_d, w))
\le \sup_{\substack{f \in H^\infty(D_d, w)\\ \|f\| \le 1}}
\left| \int_{-\infty}^{\infty} f(x)\,dx - \int_{-\infty}^{\infty} H_n[a^*; f](x)\,dx \right|
\le \sup_{\substack{f \in H^\infty(D_d, w)\\ \|f\| \le 1}}
\int_{-\infty}^{\infty} \bigl| f(x) - H_n[a^*; f](x) \bigr|\,dx. \tag{13.73}
\]

Here, we employ the fact that the error of the Hermite interpolation is bounded as follows:
\[
\bigl| f(x) - H_n[a^*; f](x) \bigr| \le w(x)\,B_n(x;a^*, D_d)^2 \tag{13.74}
\]
for any \(f \in H^\infty(D_d, w)\) with \(\|f\| \le 1\). Inequality (13.74) follows from the error estimate of the Lagrange interpolation
\[
\bigl| f(x) - L_{2n}[(a^*;b); f](x) \bigr| \le w(x)\,\bigl| B_n(x;a^*, D_d)\,B_n(x;b, D_d) \bigr|, \tag{13.75}
\]
which is shown in [10, Proof of Lemma 4.3]. Actually, Inequality (13.74) follows from Inequality (13.75) in the limit \(b_i \to a_i^*\). Consequently, it follows from Inequalities (13.73) and (13.74) that
\[
E_n^{\min}(H^\infty(D_d, w))
\le \int_{-\infty}^{\infty} w(x)\,B_n(x;a^*, D_d)^2\,dx
= \inf_{a_i \in \mathbb{R}} \int_{-\infty}^{\infty} w(x)\,B_n(x;a, D_d)^2\,dx,
\]
which is Inequality (13.35).

Bibliography

1. Andersson, J.-E.: Optimal quadrature of H^p functions. Math. Z. 172, 55–62 (1980)
2. Andersson, J.-E., Bojanov, B.D.: A note on the optimal quadrature in H^p. Numer. Math. 44, 301–308 (1984)
3. Haber, S.: The tanh rule for numerical integration. SIAM J. Numer. Anal. 14, 668–685 (1977)
4. Hayakawa, S., Tanaka, K.: Convergence analysis of approximation formulas for analytic functions via duality for potential energy minimization. arXiv:1906.03133 (2019)
5. Hayakawa, S., Tanaka, K.: Error bounds of potential theoretic numerical integration formulas in weighted Hardy spaces. JSIAM Letters 12, 21–24 (2020). https://doi.org/10.14495/jsiaml.12.21
6. Mori, M.: Discovery of the double exponential transformation and its developments. Publ. RIMS Kyoto Univ. 41, 897–935 (2005)
7. Saff, E.B., Totik, V.: Logarithmic Potentials with External Fields. Springer, Berlin, Heidelberg (1997)
8. Schwartz, C.: Numerical integration of analytic functions. J. Comput. Phys. 4, 19–29 (1969)
9. Sugihara, M.: Optimality of the double exponential formula—functional analysis approach. Numer. Math. 75, 379–395 (1997)
10. Sugihara, M.: Near optimality of the sinc approximation. Math. Comput. 72, 767–786 (2003)
11. Stenger, F.: Numerical Methods Based on Sinc and Analytic Functions. Springer, New York (1993)
12. Stenger, F.: Handbook of Sinc Numerical Methods. CRC Press, Boca Raton (2011)
13. Takahasi, H., Mori, M.: Quadrature formulas obtained by variable transformation. Numer. Math. 21, 206–219 (1973)
14. Takahasi, H., Mori, M.: Double exponential formulas for numerical integration. Publ. RIMS Kyoto Univ. 9, 721–741 (1974)
15. Tanaka, K., Sugihara, M.: Design of accurate formulas for approximating functions in weighted Hardy spaces by discrete energy minimization. IMA J. Numer. Anal. 39, 1957–1984 (2019)
16. Tanaka, K., Sugihara, M., Murota, K.: Function classes for successful DE-Sinc approximations. Math. Comput. 78, 1553–1571 (2009)
17. Tanaka, K., Sugihara, M., Murota, K., Mori, M.: Function classes for double exponential integration formulas. Numer. Math. 111, 631–655 (2009)
18. Tanaka, K., Okayama, T., Sugihara, M.: Potential theoretic approach to design of accurate formulas for function approximation in symmetric weighted Hardy spaces. IMA J. Numer. Anal. 37, 861–904 (2017). https://doi.org/10.1093/imanum/drw022
19. Tanaka, K., Okayama, T., Sugihara, M.: Potential theoretic approach to design of accurate numerical integration formulas in weighted Hardy spaces. In: Fasshauer, G., Schumaker, L. (eds.), Approximation Theory XV: San Antonio 2016, Springer Proceedings in Mathematics & Statistics, vol. 201, pp. 335–360. Springer, Cham (2017)
20. Tanaka, K., Okayama, T., Sugihara, M.: An optimal approximation formula for functions with singularities. J. Approx. Theory 234, 82–107 (2018)
21. Trefethen, L.N., Weideman, J.A.C.: The exponentially convergent trapezoidal rule. SIAM Rev. 56, 385–458 (2014)

Chapter 14
LU Factorization of Any Matrix

Marc Stromberg

Abstract The usual methods of factoring a matrix depend on its being square and nonsingular. We show that neither of these properties is required for factorization and that any matrix has a factorization of the form PA = LU, and then present a few consequences of the factorization.

Keywords Factorization · LU · Matrix · Pseudoinverse · Singular

14.1 Introduction

Well-known algorithms for factorization (see, e.g., [1]) of a square matrix A, such as Gaussian elimination and variants like the Crout methods, either produce a factorization of the form PA = LU, in which P is a permutation matrix and L and U are respectively lower and upper triangular matrices, or halt without producing the factorization when a zero pivot is encountered, in which case the matrix A is singular. We will show that neither squareness nor nonsingularity is essential to factorization, with a suitable generalization of L and U. The factorization leads to simple calculations for quantities like A⁺, A⁺A, and AA⁺, where A⁺ is the Moore–Penrose pseudoinverse of A.

Theorem 14.1 Let A_{m×n} be a nonzero matrix of rank r. Then there are a permutation matrix P_{m×m}, a lower trapezoidal matrix L_{m×r} of rank r and an upper echelon matrix U_{r×n} of rank r such that PA = LU.

The proof of Theorem 14.1 will depend on a number of lemmas to be established concerning the construction of P, L and U in Algorithm 14.1.1. The array ρ will contain the permutation information, so that (PA)_{i,j} = A_{ρ_i,j}. The Algorithm is typical of so-called compact schemes, in that the entries of the factors are stored by modifying those of A in place. For this it is useful to be aware that the changes to A (the assignments (14.3), (14.6)) take place only once for any position.

M. Stromberg (✉) Pacific States Marine Fisheries Commission, Portland, OR, USA
e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021
G. Baumann (ed.), New Sinc Methods of Numerical Analysis, Trends in Mathematics, https://doi.org/10.1007/978-3-030-49716-3_14

Algorithm 14.1.1 Let ρ and γ be integer arrays of length at least m and n respectively, and let D be a real array of length m. Let r and p be integers and set r ← 0 and p ← 0. For 0 ≤ i < m set ρ_i ← i and D_i ← ‖A_{i,·}‖, where ‖·‖ is a norm on the rows of A. Let R be an array of integers of length at least n.

for 0 ≤ ℓ < n {    (14.1)
    for r ≤ i < m {    (14.2)
        set
        A_{ρ_i,ℓ} ← A_{ρ_i,ℓ} − Σ_{0≤k<r} A_{ρ_i,γ_k} A_{ρ_k,ℓ}    (14.3)
    }
    if max_{r≤i<m} |A_{ρ_i,ℓ}| / D_{ρ_i} > 0, the maximum being attained at i = p, {    (14.4)
        set
        γ_r ← ℓ,    ρ_p ↔ ρ_r
        for ℓ + 1 ≤ j < n {    (14.5)
            set
            A_{ρ_r,j} ← ( A_{ρ_r,j} − Σ_{0≤k<r} A_{ρ_r,γ_k} A_{ρ_k,j} ) / A_{ρ_r,ℓ}    (14.6)
        }
        set
        r ← r + 1    (14.7)
    }
    set R_ℓ ← r − 1
}

for all ℓ > ℓ₀, where r̂ is the eventual value of r. Also
\[
\tilde A_{\rho_i,\gamma_0} = L_{i,0} = L_{i,0}\,U_{0,\ell_0} \tag{14.16}
\]

for 0 ≤ i < m. But then
\[
\tilde A_{\rho_i,\ell_0} = L_{i,0} = \sum_{k=0}^{\hat r} L_{i,k}\,U_{k,\ell_0} \tag{14.17}
\]
for 0 ≤ i < m, because U_{k,ℓ₀} = 0 for k > 0, since ℓ₀ < γ_k for k > 0 because γ_k is increasing by construction. Therefore we have
\[
(PA)_{i,\ell_0} = \tilde A_{\rho_i,\ell_0} = \sum_{k=0}^{\hat r} L_{i,k}\,U_{k,\ell_0} = (LU)_{i,\ell_0} \tag{14.18}
\]

for 0 ≤ i < m, which establishes (14.11) and (14.12) for ℓ₀. Now if γ_{R_{ℓ₁}} = ℓ₁ for some ℓ₁ > ℓ₀, we have R_{ℓ₁} = r₀ for some r₀ > 0, and the preceding argument is the same, without vacuous sums. Then (14.4) is satisfied, since otherwise we would have γ_{R_{ℓ₁}} < ℓ₁ by an earlier assignment. By (14.6), writing things in terms of the original matrix,
\[
\tilde A_{\rho_{r_0},j} = \sum_{k=0}^{r_0} A_{\rho_{r_0},\gamma_k}\,A_{\rho_k,j} \tag{14.19}
\]
for ℓ₁ + 1 ≤ j < n. But for k ≤ r₀ we have γ_k ≤ ℓ₁ < j for ℓ₁ + 1 ≤ j, so
\[
\tilde A_{\rho_{r_0},j} = \sum_{k=0}^{r_0} L_{r_0,k}\,U_{k,j}. \tag{14.20}
\]


Again, if k > r₀ then L_{r₀,k} = 0, so we have
\[
\tilde A_{\rho_{r_0},j} = \sum_{k=0}^{\hat r} L_{r_0,k}\,U_{k,j} \tag{14.21}
\]
and then
\[
(PA)_{i,j} = (LU)_{i,j} \tag{14.22}
\]

for i = r₀ = R_{ℓ₁} and ℓ₁ + 1 ≤ j < n, as required for (14.12). Now in terms of the original matrix, in (14.3) we have
\[
\tilde A_{\rho_i,\ell_1} = A_{\rho_i,\ell_1} + \sum_{k=0}^{r_0-1} A_{\rho_i,\gamma_k}\,A_{\rho_k,\ell_1} \tag{14.23}
\]
for r₀ ≤ i < m. Note that this is true even after the interchange ρ_p ↔ ρ_{r₀}, because the interchange does not affect the value of ρ_k for k < r₀ and affects only the order of the assignments in (14.3). We have γ_{r₀} = ℓ₁, so U_{r₀,ℓ₁} = 1, and therefore we can write (14.23) as
\[
\tilde A_{\rho_i,\ell_1} = A_{\rho_i,\ell_1}\,U_{r_0,\ell_1} + \sum_{k=0}^{r_0-1} A_{\rho_i,\gamma_k}\,A_{\rho_k,\ell_1}. \tag{14.24}
\]
Since we have ℓ₁ > γ_k for k < r₀, we can write this as
\[
\tilde A_{\rho_i,\ell_1} = A_{\rho_i,\ell_1}\,U_{r_0,\ell_1} + \sum_{k=0}^{r_0-1} A_{\rho_i,\gamma_k}\,U_{k,\ell_1}. \tag{14.25}
\]
Since we have i ≥ k for k < r₀ and r₀ ≤ i, we can write this as
\[
\tilde A_{\rho_i,\ell_1} = A_{\rho_i,\ell_1}\,U_{r_0,\ell_1} + \sum_{k=0}^{r_0-1} L_{i,k}\,U_{k,\ell_1}, \tag{14.26}
\]
and again i ≥ r₀, so A_{ρ_i,ℓ₁} = A_{ρ_i,γ_{r₀}} = L_{i,r₀}, so
\[
\tilde A_{\rho_i,\ell_1} = L_{i,r_0}\,U_{r_0,\ell_1} + \sum_{k=0}^{r_0-1} L_{i,k}\,U_{k,\ell_1} = \sum_{k=0}^{r_0} L_{i,k}\,U_{k,\ell_1}. \tag{14.27}
\]


Once again, if k > r₀ then ℓ₁ < γ_k, so U_{k,ℓ₁} = 0 for k > r₀ and we have
\[
\tilde A_{\rho_i,\ell_1} = \sum_{k=0}^{\hat r} L_{i,k}\,U_{k,\ell_1}, \tag{14.28}
\]
so
\[
(PA)_{i,\ell_1} = (LU)_{i,\ell_1} \tag{14.29}
\]
for i = R_{ℓ₁}, …, m − 1, as required for (14.11).

Next let γ_{R_{ℓ₁}} < ℓ₁ for some ℓ₁ > ℓ₀, and let R_{ℓ₁} = r₀ for some r₀ > 0. These conditions require that the comparison (14.4) has failed, so the left side of (14.3) is zero for r₀ ≤ i < m. Note in particular that the current value of r is then r = r₀ + 1 by (14.7). In terms of Ã we have in (14.3)
\[
\tilde A_{\rho_i,\ell_1} = \sum_{k=0}^{r_0} A_{\rho_i,\gamma_k}\,A_{\rho_k,\ell_1} = \sum_{k=0}^{r_0} L_{i,k}\,U_{k,\ell_1} \tag{14.30}
\]
for r₀ ≤ i < m. Now if k > r₀ we will necessarily have γ_k > ℓ₁, since (14.4) failed and ℓ₁ was not assigned to any γ_k. Thus for k > r₀ we have U_{k,ℓ₁} = 0, and we can write
\[
\tilde A_{\rho_i,\ell_1} = \sum_{k=0}^{r_0} L_{i,k}\,U_{k,\ell_1} = \sum_{k=0}^{\hat r} L_{i,k}\,U_{k,\ell_1} \tag{14.31}
\]
for r₀ + 1 ≤ i < m, as required for (14.10).

For 0 ≤ ℓ < ℓ₀ the vacuous sums in (14.3) show that Ã_{i,ℓ} = 0 for i = 0, …, m − 1. We also have U_{p,ℓ} = 0 for ℓ < ℓ₀ = γ₀, since then ℓ < γ_p for p = 0, …, r̂ − 1, regardless of the eventual value r̂ of r, by definition and the fact that the contents of γ are increasing. Thus, regardless of the contents of L, we have (PA)_{i,j} = (LU)_{i,j} for i = 0, …, m − 1 and j = 0, …, ℓ₀ − 1, since the quantities being compared are all zero.

For the first part of the lemma, by construction ℓ + 1 is the largest possible value of r at (14.7), so R_ℓ ≤ ℓ for all ℓ. If (14.4) is not satisfied for ℓ then γ_{r−1}, whatever the value of r, will have been assigned a smaller value of ℓ, so γ_{R_ℓ} < ℓ. If (14.4) is satisfied we will have γ_{r−1} = ℓ for the value of r at (14.7), so γ_{R_ℓ} = ℓ. Thus γ_{R_ℓ} ≤ ℓ for all ℓ. Suppose ℓ′ = ℓ + 1. If R_ℓ = R_{ℓ′} then r will not have changed ((14.4) was not satisfied), so γ_{R_{ℓ′}} < ℓ′. If R_ℓ < R_{ℓ′} then R_{ℓ′} = R_ℓ + 1, since the difference cannot be any larger than 1. It must be true that if R_{ℓ′} = r − 1 for some r, then r must have just been incremented, and we will have γ_{R_{ℓ′}} = ℓ′. This takes care of the initial part of the lemma. ∎


Now we propose the following by induction.

Lemma 14.2 For each ℓ = 0, …, n − 1 we have
\[
(PA)_{i,q} = (LU)_{i,q} \tag{14.32}
\]
for i = 0, …, m − 1 and q = 0, …, ℓ. If ℓ ≥ ℓ₀ of Lemma 14.1, then
\[
(PA)_{p,j} = (LU)_{p,j} \tag{14.33}
\]
for p = 0, …, R_ℓ and j = ℓ, …, n − 1.

Proof We have (14.32) for ℓ < ℓ₀ by Lemma 14.1, so suppose ℓ ≥ ℓ₀. Then R_ℓ ≤ ℓ and γ_{R_ℓ} ≤ ℓ, also by Lemma 14.1. Suppose the present lemma is true for some ℓ ≥ ℓ₀ and let ℓ′ = ℓ + 1. If R_ℓ = R_{ℓ′} then γ_{R_{ℓ′}} < ℓ′. By the previous lemma, (PA)_{i,ℓ′} = (LU)_{i,ℓ′} for i = R_{ℓ′} + 1, …, m − 1. By assumption we have (PA)_{p,j} = (LU)_{p,j} for p = 0, …, R_ℓ and j = ℓ, …, n − 1, which shows that (PA)_{i,ℓ′} = (LU)_{i,ℓ′} for i = 0, …, R_{ℓ′}; thus (PA)_{i,ℓ′} = (LU)_{i,ℓ′} for i = 0, …, m − 1, and (PA)_{p,j} = (LU)_{p,j} for p = 0, …, R_{ℓ′}, which is the induction hypothesis for ℓ′.

On the other hand, if R_ℓ < R_{ℓ′} then R_{ℓ′} = R_ℓ + 1 and γ_{R_{ℓ′}} = ℓ′. Then (PA)_{i,ℓ′} = (LU)_{i,ℓ′} for i = R_{ℓ′}, …, m − 1. Again by assumption we have (PA)_{p,j} = (LU)_{p,j} for p = 0, …, R_ℓ and j = ℓ, …, n − 1, which shows that (PA)_{i,ℓ′} = (LU)_{i,ℓ′} for i = 0, …, R_{ℓ′} − 1; thus (PA)_{i,ℓ′} = (LU)_{i,ℓ′} for i = 0, …, m − 1. We have (PA)_{p,j} = (LU)_{p,j} for p = 0, …, R_ℓ and j = ℓ, …, n − 1 by assumption, and (PA)_{p,j} = (LU)_{p,j} for p = R_{ℓ′} and j = ℓ′, …, n − 1 by Lemma 14.1, so that (PA)_{p,j} = (LU)_{p,j} for p = 0, …, R_{ℓ′} and j = ℓ′, …, n − 1. This satisfies the inductive hypothesis.

Now for ℓ₀, (14.11) and (14.12) hold, and we note that R_{ℓ₀} = 0. Then (14.32) is true for q = ℓ₀, and we have (14.32) for q < ℓ₀ by (14.13), so (14.32) holds for i = 0, …, m − 1 and q = 0, …, ℓ₀. By (14.12) we have (14.33) for j > ℓ₀. By (14.11) we have (14.33) for j = ℓ₀. This instance completes the induction and the proof of Lemma 14.2. ∎

Proof of Theorem 14.1 By Lemma 14.2 it is clear that PA = LU. That L is lower trapezoidal (that is, a truncated lower triangular matrix) and U is upper echelon, and that both have rank r, follows by construction. ∎
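The factorization of Theorem 14.1 can be sketched with a standard outer-product elimination with row pivoting. This is not the compact in-place scheme of Algorithm 14.1.1 (in particular, L is taken with unit diagonal here, whereas the scheme above keeps the pivots in L and unit pivots in U), but it produces the same kind of PA = LU factorization for a rectangular, rank-deficient matrix.

```python
import numpy as np

def plu_rect(A, tol=1e-12):
    """Rectangular LU with row pivoting and rank detection.

    Returns perm, L (m x r, unit lower trapezoidal), U (r x n, upper
    echelon) with A[perm] = L @ U, where r is the detected rank.
    """
    A = np.array(A, dtype=float)         # work on a copy
    m, n = A.shape
    perm = np.arange(m)
    pivcols = []
    r = 0
    for col in range(n):
        if r == m:
            break
        p = r + int(np.argmax(np.abs(A[r:, col])))
        if abs(A[p, col]) <= tol:
            continue                     # no pivot here: rank does not grow
        A[[r, p]] = A[[p, r]]            # physical row interchange
        perm[[r, p]] = perm[[p, r]]
        A[r + 1:, col] /= A[r, col]      # multipliers stored below the pivot
        A[r + 1:, col + 1:] -= np.outer(A[r + 1:, col], A[r, col + 1:])
        pivcols.append(col)
        r += 1
    L = np.zeros((m, r))
    U = np.zeros((r, n))
    for k, col in enumerate(pivcols):    # unpack the in-place storage
        L[k, k] = 1.0
        L[k + 1:, k] = A[k + 1:, col]
        U[k, col:] = A[k, col:]
    return perm, L, U

# a 4 x 3 matrix of rank 2
A0 = np.array([[1.0, 2.0, 3.0],
               [2.0, 4.0, 6.0],
               [1.0, 0.0, 1.0],
               [0.0, 2.0, 2.0]])
perm, L, U = plu_rect(A0)
```

For this example the factors have shapes (4, 2) and (2, 3), matching the L_{m×r} and U_{r×n} of the theorem.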

14.2 Calculations

Given the factorization of Theorem 14.1, it is easily shown by well-known properties of the pseudoinverse that
\[
A^+ = U^*(UU^*)^{-1}(L^*L)^{-1}L^*P \tag{14.34}
\]
and the associated projections
\[
AA^+ = P^*L(L^*L)^{-1}L^*P \tag{14.35}
\]
and
\[
A^+A = U^*(UU^*)^{-1}U \tag{14.36}
\]

for any nonzero A. Associated with the factorization of the theorem, given a system Ax = b

(14.37)

we can always carry out a scheme of forward and then back substitution. This may or may not produce an actual solution of the system (14.37), which of course may or may not have solutions. The following is stated in terms of the values of L and U as they are stored in place in A, since this type of scheme often follows factorization.

Algorithm 14.2.1 Let ρ and γ be the integer arrays of length m and n respectively created in Algorithm 14.1.1, let x = x₀, …, x_{n−1} be an arbitrary vector of length n, let y be a real temporary vector of length r, where r is the rank found in Algorithm 14.1.1, let b be the right-hand side of (14.37), and let A be the result of application of that Algorithm to the m × n matrix A.

for 0 ≤ i < r {    (14.38)
    set
    y_i ← ( b_{ρ_i} − Σ_{k=0}^{i−1} A_{ρ_i,γ_k} y_k ) / A_{ρ_i,γ_i}    (14.39)
}
set
S ← max_{r≤i<m} | b_{ρ_i} − Σ_{k=0}^{r−1} A_{ρ_i,γ_k} y_k |    (14.40)
for r − 1 ≥ i ≥ 0 {    (14.41)
    set
    x_{γ_i} ← y_i − Σ_{j>γ_i} A_{ρ_i,j} x_j    (14.42)

}

Theorem 14.2 The system (14.37) has a solution if and only if the norm S = 0 in (14.40). Given x = x₀, …, x_{n−1}, let x̂ be the result of applying Algorithm 14.2.1 to x. Let ν = ν₀, …, ν_{n−r−1} consist of the complement (if any) of γ = γ₀, …, γ_{r−1} in the set 0, …, n − 1. Then the substitution scheme can be carried out leaving the values x̂_{ν_i} = x_{ν_i} unchanged for each i. In particular, if we set b = 0 and define x_k^{(i)} = δ_{k,ν_i} for each i = 0, …, n − r − 1, then application of the back substitution


scheme to each x^{(i)} produces a basis x̂^{(i)}, i = 0, …, n − r − 1, for the null space of A.

Proof It is clear that the equation
\[
Ux = y \tag{14.43}
\]
always has a solution, since (14.42) can always be carried out, and we also always have x = U*(UU*)⁻¹y as an explicit solution (usually different from the result of (14.42)). Similarly, (14.39) can also always be carried out; however, the vector y is clearly a solution of
\[
Ly = P^*b \tag{14.44}
\]
if and only if the norm S in (14.40) of the residual of the non-pivot rows of L is zero. But (14.44) is guaranteed to have a solution if b = 0. The only calculation that is relevant in that case is (14.42), since y = 0, but then (14.42) produces elements of the null space of A, from which we can get a basis through the right choices of the initial entries of x. Defining these in terms of the Kronecker delta as in the theorem ensures that they are linearly independent. ∎
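The b = 0 construction in the proof can be sketched directly from an upper-echelon factor U: set each free (non-pivot) variable to a Kronecker delta and back-substitute for the pivot variables, as in (14.42). The matrix U and the pivot-column list below are arbitrary illustrative values, here matching a rank-2 factorization of a 4 × 3 matrix.

```python
import numpy as np

def nullspace_basis(U, pivcols, n):
    """Null-space basis by back substitution on an upper-echelon U (r x n).
    Each free column gets a unit entry (x_k = delta_{k, nu_i}), and the
    pivot variables are then solved from U x = 0, row by row, bottom up."""
    r = U.shape[0]
    free = [j for j in range(n) if j not in pivcols]
    basis = []
    for f in free:
        x = np.zeros(n)
        x[f] = 1.0
        for i in range(r - 1, -1, -1):       # back substitution over pivot rows
            c = pivcols[i]
            x[c] = -(U[i, c + 1:] @ x[c + 1:]) / U[i, c]
        basis.append(x)
    return basis

U = np.array([[2.0, 4.0, 6.0],
              [0.0, -2.0, -2.0]])
basis = nullspace_basis(U, [0, 1], 3)
```

The resulting vectors are linearly independent by construction, since each carries a unit entry in a distinct free position.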

(14.44)

if and only if the norm S in (14.40) of the residual of the non pivot rows of L is zero. But (14.44) is guaranteed to have a solution if b = 0. The only calculation that is relevant in that case is (14.42) since y = 0, but then (14.42) produces elements of the null space of A, from which we can get a basis through the right choices of the initial entries of x. Defining these in terms of the Kronecker delta as in the theorem ensures that they are linearly independent.  

14.2.1 Computations As Algorithm 14.1.1 proceeds, we get matrices approaching the final values of L and U , consisting of the contents of A at any given point (value of ). For a particular value of , we either already have the final L and U or r will increase again and we will get a new column of L and new row of U . This is determined by (14.4). The computation of loop (14.2) is done in any case, but the result only becomes marked as a column of L if we add γr ← after (14.4). Applications will use machine numbers, for which it is necessary to replace mere comparison to zero in (14.4) by a predicate (or, semi predicate) P such that if P(C) is true then C > 0 is definitely true, but if P(C) is false we know only that C is either zero or very small. This is the case for example if P is simple comparison to a tolerance  such that P(C) is true iff C > , where  is chosen as somewhat larger than the machine unit. If  is too small, then there will be doubt about whether C actually should be treated as nonzero even when P is true. Unfortunately, this produces doubt at (14.4) as to how to proceed if P(C) is false. This lack of perfection makes it tempting to consider overruling P. As long as r > 0, then the state of computation at (14.7) is that both L and U are defined, and LU is an m × n matrix (whether it is yet P A or not). We will have differences in the values of LU before and after (14.4) if that comparison is satisfied. Specifically, suppose that for one pass of loop (14.1) the value of is 0 and that r = r0 > 0 at the top of the loop.

378

M. Stromberg

If we proceed through to (14.7) and (14.4) is satisfied, we will have just added γr0 = 0 and will have increased the current value of r to r0 + 1. We will have added column r0 to L and row r0 to U (or changed the definitions of L and U to include these). In any case, the computed values of Ai,j via (LU )i,j will have now changed, by the addition of the term Li,k Uk,j to the computed product LU for k = r0 . This is zero for i < r0 or j < γr0 = 0 so the differences between the computed values of A will differ only in the lower right corner of P A. Moreover, later passes through the loop will affect only a smaller lower right corner of P A. In other words, at (14.7) we know the final value of the differences in computed values of Ai,j for j = 0 and i ≥ r0 and for j ≥ 0 and i = r0 . This will allow the computation to consider #r0 −1 Li,k Uk,j and di,j = Li,r0 Ur0 ,j overruling a false result from P. Put σi,j = k=0 for i ≥ r0 and j = 0 and for i = r0 and j ≥ 0 , the values of di,j being those that would be if (14.4) is satisfied. Suppose we enter the outer loop (14.2) with r = r0 and = 0 . Suppose also that after the loop (14.2) we have chosen p as corresponding to the largest value of |Aρi , |/Dρi in (14.3) assuming at least one of these is actually nonzero, however small. If P is wrong then we should enter loop (14.5), etc. and compute the next row of a new version of U . Anticipating that we will assign γr0 = 0 and the interchange ρp ↔ ρr0 we have , di,j = Li,r0 Ur0 ,j = Aρi ,γr0

1

if j = γr0 = 0

Tρp ,j

if j > γr0

(14.45)

for i ≥ r0 and j = 0 and for i = r0 and j ≥ 0 , so , di,j =

Aρi , 0

if i ≥ r0 ,

Aρp , 0 Tρp ,j

if j > 0

j = 0

(14.46)

where T are temporary values given by (14.6) Tρp ,j ← (Aρp ,j −



Aρp ,γk Aρk ,j )/Aρp , 0

(14.47)

(4) Compute

(U U∗)_{i,j} = { 1 if j = i;  A_{ρ_i,γ_j} if j > i } + Σ_{k=1+γ_j}^{n−1} A_{ρ_j,k} A_{ρ_i,k}

for j = i, …, r − 1 and i = 0, …, r − 1.

(5) Apply the procedure HFS(A, y, ρ, γ, m, n, r) to y.

(6) Set x ← U∗ y.

The method outlined can be carried out without disturbing the contents of U as stored in A. The first r rows of L are overwritten, so to find (14.54) for multiple b it is necessary to compute all of the required L∗ P b ahead of time. Variations on this method find (14.35) and (14.36), but with the use of more storage. The schemes presented here reduce to standard methods in the case of a square nonsingular matrix, and are of course similarly subject to the reality of machine computation.

The condition number K_{LU} of L∗ L U U∗, and in particular K_L of L∗ L, indicates the potential for inaccuracy. It is easy to construct instances for which K_L is arbitrarily large even with exact arithmetic, but for machine arithmetic, if the matrix L∗ L is ill conditioned, numerical end results can be adversely affected. A related indicator is the ratio of the smallest to the largest (in absolute value) pivot elements of L. If this ratio is sufficiently small, or on the order of the machine unit, it is an indication of potential numerical difficulty. The presence of either of these indicators might then cause a solver to seek enhanced methods of solution, such as the use of greater precision or alternate methods, or to simply issue a warning.

On the other hand, if the goal is merely to produce a factorization P∗ L U of A in the sense that A − P∗ L U is small and L∗ L is well conditioned, then it may be possible to do so through use of a relaxed P (such as by using the single-precision machine unit in (14.49)), dropping the dimension and reducing the rank of L and U. This approach has not been fully explored, but may be of use in the case that the entries of A contain noise.
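In the storage scheme of this chapter, row i of U has a unit entry at pivot column γ_i (cf. (14.45)) and its remaining nonzero entries to the right of it, kept in row ρ_i of A. Under that convention the Gram matrix U U∗ can be formed entrywise from the stored rows. The following numpy sketch checks, in real arithmetic and with illustrative sizes and pivot columns (not taken from the text), the identity (U U∗)_{i,j} = (1 if j = i, else U_{i,γ_j}) + Σ_{k>γ_j} U_{i,k} U_{j,k} for j ≥ i.

```python
import numpy as np

# Illustrative sizes and strictly increasing pivot columns (not from the text).
rng = np.random.default_rng(1)
r, n = 4, 7
gamma = [0, 2, 3, 5]

# Row i of U: zeros left of gamma[i], a unit pivot at gamma[i],
# arbitrary entries to the right of the pivot.
U = np.zeros((r, n))
for i in range(r):
    U[i, gamma[i]] = 1.0
    U[i, gamma[i] + 1:] = rng.standard_normal(n - gamma[i] - 1)

G = U @ U.T  # real case, so U* is just the transpose
for i in range(r):
    for j in range(i, r):
        # Row j of U vanishes left of gamma[j] and is 1 at gamma[j], so the
        # inner product collapses to a "head" term plus a short tail sum.
        head = 1.0 if j == i else U[i, gamma[j]]
        tail = U[i, gamma[j] + 1:] @ U[j, gamma[j] + 1:]
        assert np.isclose(G[i, j], head + tail)
print("entrywise identity for U U* verified on the upper triangle")
```

Only the upper triangle j ≥ i needs to be formed, matching the loop ranges of the procedure above.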

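The pivot-ratio indicator just described is cheap to compute. A minimal sketch, in which the warning threshold (a modest multiple of the machine unit) is an illustrative choice rather than one fixed by the text:

```python
import sys

def pivot_ratio(pivots):
    """Ratio of the smallest to the largest pivot of L in absolute value;
    a ratio on the order of the machine unit signals potential difficulty."""
    mags = [abs(p) for p in pivots]
    return min(mags) / max(mags)

# Illustrative threshold: 1000 times the machine unit.
THRESHOLD = 1000 * sys.float_info.epsilon

for pivots in ([4.0, 2.5, 1.0], [4.0, 2.5, 1e-15]):
    ratio = pivot_ratio(pivots)
    print(ratio, "suspect" if ratio < THRESHOLD else "ok")
```

A solver might react to a flagged ratio as suggested above: retry in greater precision, switch methods, or simply warn.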

Bibliography

1. Dahlquist, G., Björck, Å.: Numerical Methods. Prentice-Hall (1974)
2. Jeannerod, C.-P., Rump, S.M.: Improved error bounds for inner products in floating-point arithmetic. SIAM J. Matrix Anal. Appl. 34(2), 338–344 (2013)

Part III

Frank Stenger’s Work

The third part of the book consists of a single chapter, which collects bibliographic information on the work done by Frank Stenger and about his work. The bibliography is ordered alphabetically by the first author's name.

Chapter 15

Publications by, and About, Frank Stenger

Nelson H. F. Beebe

Abstract This chapter supplies what is believed to be a complete list of the publications of Frank Stenger, as well as some that refer to him or his works.

Keywords Sinc methods · Sinc integration · Laplace transforms · Wiener-Hopf equation · Ordinary differential equations · Integral equations · Partial differential equations · Boundary value problems · Sinc discretization

15.1 Bibliographic Databases

The reference list in this chapter is a snapshot of a BibTeX bibliography database in the frequently updated BibNet Project archives, whose master site is maintained at the University of Utah at http://www.math.utah.edu/pub/bibnet and ftp://www.math.utah.edu/pub/bibnet. The BibNet Project was founded in 1994 to provide freely available reusable bibliographic data for distinguished numerical analysts and pioneers of quantum theory, along with a few subject-specific bibliographies in those areas.

Bibliographic sources on the Internet are far too often abbreviated, inaccurate, incomplete, unreliable, unreusable, and sloppily formatted. In the Utah archives, we strive to provide accurate and rich data in BibTeX format, enhanced with Digital Object Identifier (DOI) and Uniform Resource Locator (URL) values for easy access to electronic forms of publications, as well as ORCID unique identifiers for authors (see https://orcid.org), Chemical Abstracts journal codes (CODEN) (see http://cassi.cas.org), International Standard Serial Numbers (ISSN) and linking ISSN-L unique identifiers for periodicals (see https://www.issn.org/), and 10- and 13-digit International Standard Book Numbers (ISBN and ISBN-13) (see https://

N. H. F. Beebe () Department of Mathematics, University of Utah, Salt Lake City, UT, USA e-mail: [email protected],[email protected],[email protected] © The Author(s), under exclusive license to Springer Nature Switzerland AG 2021 G. Baumann (ed.), New Sinc Methods of Numerical Analysis, Trends in Mathematics, https://doi.org/10.1007/978-3-030-49716-3_15



www.issn.org/). Where table of contents data can be located, they too are included in book and proceedings entries.

Each BibTeX entry contains a bibsource key value that relates an entry back to its master bibliographic sources, and a bibtimestamp key value that records the creation date, or last major modification, of the entry. Those time stamps are critical components for nightly SQL database mining jobs that identify new material for author- and subject-specific bibliographies in the growing combined holdings of the BibNet Project and the TeX User Group bibliography archive (http://www.math.utah.edu/pub/tex/bib). At the time of writing, they held almost 1.51 million entries. Journal article entries generally include abbreviated and full names of the periodical, as well as a journal-URL key value that provides links to the journal's Web site(s). Those additional data ensure that each entry contains its own provenance, which is helpful for later data verification when entries are copied into other bibliography files.

BibTeX entries are sorted according to various schemes documented in comments in their files, and each entry is prettyprinted and syntax checked to ensure strict conformance to a rigorous grammar (TUGboat 14(3), 222, October 1993 and 14(4), 395–419, December 1993). In addition, numerous heuristic, pattern-matching, and database-consistency checks are applied to the data to identify common errors. Spelling is verified with at least four independent spell checkers, and each .bib file has a companion .sok file that provides a customized spelling exception dictionary. Data are often merged from multiple sources, which helps to detect errors and omissions. Numerous portable software tools have been developed at Utah to support BibTeX data creation and curation, and the more general ones are made publicly available under open-source licenses.
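The key values described above can be illustrated with a skeleton entry. Everything in this sketch (citation key, title, journal, and all numbers) is a placeholder invented for illustration; only the key names are taken from the discussion:

```bibtex
@Article{Stenger:1993:ABC,
  author =       "Frank Stenger",
  title =        "Placeholder title",
  journal =      "Ex. J. Num.",
  fjournal =     "Example Journal of Numerics",
  journal-URL =  "https://example.org/ejn",
  volume =       "1",
  pages =        "1--10",
  year =         "1993",
  DOI =          "https://doi.org/10.0000/placeholder",
  ISSN =         "0000-0000",
  bibsource =    "http://www.math.utah.edu/pub/bibnet",
  bibtimestamp = "Mon Jan 01 00:00:00 2001",
}
```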
Each BibTeX file in the archives is accompanied by a LaTeX wrapper file that can be used to demonstrate successful typesetting of the complete bibliography, and each of those files contains a version number, time stamp, and portable checksum that can be used to detect data corruption or other unexpected modifications.

15.2 Stenger Publications

In common with many other author bibliographies in the BibNet Project, the master file, stenger-frank.bib, consists of a comment preamble, definitions of journal and publisher string abbreviations, Part I (works by Frank Stenger), Part II (publications about Frank Stenger and his works), and a final cross-reference section that contains entries referenced by earlier entries in Parts I and II.

The following reference list is produced with a LaTeX \nocite{*} command, which requests BibTeX to generate entries for every item in the bibliography file, without a corresponding list of citation tags at the point of its issue. Entries are formatted automatically by BibTeX using the Springer bibliography style


spmpsci, which sorts the reference list by first author family name, and then by publication year. That style, like the hundreds of other BibTeX styles, recognizes only a subset of the keys present in the BibTeX entries; in particular, it does not capture CODEN, ISSN, ISSN-L, ISBN, or ISBN-13 values, but it does include document addresses from DOI and URL keys.
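A minimal LaTeX wrapper of the kind the chapter describes might look as follows; the \nocite{*} command, the spmpsci style, and the stenger-frank database name are from the text, and the rest is ordinary boilerplate:

```latex
\documentclass{article}
\begin{document}
% Request entries for every item in the database, with no
% corresponding citation list at the point of issue.
\nocite{*}
\bibliographystyle{spmpsci}   % Springer style: sorts by author, then year
\bibliography{stenger-frank}  % the master file stenger-frank.bib
\end{document}
```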

Bibliography 1. Allgower, E.L., Glashoff, K., Peitgen, H.O. (eds.): Numerical Solution of Nonlinear Equations: Proceedings, Bremen 1980. Lecture Notes in Mathematics, vol. 878. Springer, Berlin, Germany / Heidelberg, Germany / London, UK / etc. (1981). https://doi.org/10.1007/ BFb0090674. http://link.springer.com/book/10.1007/BFb0090674; http://www.springerlink. com/content/978-3-540-38781-7 2. Áng, –D.–D., Folias, T., Keinert, F., Stenger, F.: Viscoplastic flow due to penetration: a free boundary value problem. Int. J. Fract. 39(1–3), 121–127 (1989). https://doi.org/10.1007/97894-009-0927-4_11; https://doi.org/10.1007/bf00047445 3. Áng, –D.–D., Keinert, F., Stenger, F.: A nonlinear two-phase Stefan problem with melting point gradient: a constructive approach. J. Comput. Appl. Math. 23(2), 245–255 (1988). https:// doi.org/10.1016/0377-0427(88)90283-X. http://www.sciencedirect.com/science/article/pii/ 037704278890283X 4. Áng, –D.–D., Lund, J., Stenger, F.: Complex variable and regularization methods of inversion of the Laplace transform. Math. Comput. 53(188), 589–608 (1989). https://doi.org/10. 1090/s0025-5718-1989-0983558-7; https://doi.org/10.2307/2008722. http://links.jstor.org/ sici?sici=0025-5718(198910)53:1882.0.CO%3B2-H 5. Áng, –D.–D., et al. (eds.): In: Inverse Problems and Applications to Geophysics, Industry, Medicine and Technology: Proceedings of the International Workshop on Inverse Problems, 17–19 January 1995, Ho Chi Minh City. Publications of the Ho Chi Minh City Mathematical Society, vol. 2. Vietnam Mathematical Society, Ho Chi Minh City, Vietnam (1995) 6. Ball, J., Johnson, S.A., Stenger, F.: Explicit inversion of the Helmholtz equation for ultrasound insonification and spherical detection. In: Wang [201], pp. 451–461. https://doi.org/ 10.1007/978-1-4684-3755-3_26. http://www.springerlink.com/content/978-1-4684-3755-3 7. Baumann, G., Stenger, F.: Fractional calculus and Sinc methods. Fract. Calc. Appl. Anal. 14(4), 568–622 (2011). 
https://doi.org/10.2478/s13540-011-0035-3 8. Baumann, G., Stenger, F.: Fractional adsorption diffusion. Fract. Calc. Appl. Anal. 16(3), 737–764 (2013). https://doi.org/10.2478/s13540-013-0046-3 9. Baumann, G., Stenger, F.: Sinc-approximations of fractional operators: a computing approach. Mathematics 3(2), 444–480 (2015). https://doi.org/10.3390/math3020444 10. Baumann, G., Stenger, F.: Fractional Fokker–Planck equation. Mathematics 5(1), 19 (2017). https://doi.org/10.3390/math5010012 11. Beighton, S., Noble, B.: An error estimate for Stenger’s quadrature formula. Math. Comput. 38(158), 539–545 (1982). https://doi.org/10.1090/S0025-5718-1982-0645669-9; https://doi. org/10.2307/2007288. http://www.ams.org/journals/mcom/1982-38-158/S0025-5718-19820645669-9/. See [120] 12. Berggren, M.J., Johnson, S.A., Carruth, B.L., Kim, W.W., Stenger, F., Kuhn, P.K.: Ultrasound inverse scattering solutions from transmission and/or reflection data. In: Nalcioglu et al. [72], pp. 114–121 13. Berggren, M.J., Johnson, S.A., Carruth, B.L., Kim, W.W., Stenger, F., Kuhn, P.L.: Performance of fast inverse scattering solutions for the exact Helmholtz equation using multiple frequencies and limited views. In: Jones [48], pp. 193–201


14. Bettis, D.G. (ed.): Proceedings of the Conference on the Numerical Solution of Ordinary Differential Equations: 19,20 October 1972, The University of Texas at Austin. Lecture Notes in Mathematics, vol. 362. Springer, Berlin, Germany / Heidelberg, Germany / London, UK / etc. (1974). https://doi.org/10.1007/BFb0066582. http://link.springer.com/book/10.1007/ BFb0066582;http://www.springerlink.com/content/978-3-540-37911-9 15. Bialecki, B., Kearfott, B.R., Sikorski, K.A., Sugihara, M.: Guest Editors’ preface: Issue dedicated to Professor Frank Stenger. J. Complex. 25(3), 233–236 (2009). https://doi.org/10.1016/ j.jco.2009.02.002. http://www.sciencedirect.com/science/article/pii/S0885064X09000053 16. Bialecki, B., Stenger, F.: Sinc–Nyström method for numerical solution of one-dimensional Cauchy singular integral equation given on a smooth arc in the complex plane. Math. Comput. 51(183), 133–165 (1988). https://doi.org/10.1090/s0025-5718-1988-0942147x; https://doi.org/10.2307/2008583. http://links.jstor.org/sici?sici=0025-5718(198807)51: 1832.0.CO%3B2-7 17. Bojanov, B.: Book review: selected topics in approximation and computation, by Marek A. Kowalski, Krzysztof A. Sikorski, Frank Stenger. SIAM Rev. 39(2), 333–334 (1997). https://doi.org/10.1137/SIREAD000039000002000333000001. http://links.jstor.org/sici? sici=0036-1445(199706)39:22.0.CO%3B2-Y;https://epubs.siam.org/doi/ abs/10.1137/SIREAD000039000002000333000001 18. Bowers, K., Lund, J., Bowers, K.L.K.L., Lund, J.J. (eds.): In: Computation and Control: Proceedings of the Bozeman Conference, Bozeman, Montana, August 1–11, 1988. Progress in Systems and Control Theory, vol. 1. Birkhäuser, Cambridge, MA, USA; Berlin, Germany; Basel, Switzerland (1989) 19. Bowers, K.L.K.L., Lund, J.J. (eds.): In: Computation and Control II: Proceedings of the Second Bozeman Conference, Bozeman, Montana, August 1–7, 1990. Progress in Systems and Control Theory, vol. 11. 
Birkhäuser, Cambridge, MA, USA; Berlin, Germany; Basel, Switzerland (1991) 20. Chambers, J.J., Almudhafar, S., Stenger, F.: Effect of reduced beam section frame elements on stiffness of moment frames. J. Struct. Eng. 129(3), 383–393 (2003). https://doi.org/10. 1061/(ASCE)0733-9445(2003)129:3(383) 21. Chan, K.Y., Henderson, D., Stenger, F.: Nonlinear Poisson–Boltzmann equation in a model of a scanning tunneling microscope. Numer. Methods Partial Differ. Equ. 10(6), 689–702 (1994). https://doi.org/10.1002/num.1690100605 22. Chauvette, J., Stenger, F.: The approximate solution of the nonlinear equation Δu = u − u3 . J. Math. Anal. Appl. 51(1), 229–242 (1975). https://doi.org/10.1016/0022-247x(75)90155-9 23. Chen, S.N.K., Stenger, F.: A harmonic-sinc solution of the Laplace equation for problems with singularities and semi-infinite domains. Numer. Heat Transfer B (Fundam.) 33(4), 433– 450 (1998). https://doi.org/10.1080/10407799808915042 24. Davis, P.: An interview with Frank Stenger. Interview conducted by the Society for Industrial and Applied Mathematics, as part of grant # DE-FG02-01ER25547 awarded by the US Department of Energy (2004). http://history.siam.org/oralhistories/stenger.htm 25. Dikshit, H.P., Sharma, A., Singh, V., Stenger, F.: Rivlin’s theorem on Walsh equiconvergence. J. Approx. Theory 52(3), 339–349 (1988). https://doi.org/10.1016/0021-9045(88)90047-0 26. Dolph, C.L., Stenger, F., Wiin-Nielsen, A.: On the stability problems of the Helmholtz– Kelvin–Rayleigh type. Technical Report 08759-3-T, Department of Meteorology and Oceanography, University of Michigan, Ann Arbor, MI, USA (1968) 27. Eiger, A., Sikorski, K., Stenger, F.: A bisection method for systems of nonlinear equations. ACM Trans. Math. Softw. 10(4), 367–377 (1984). https://doi.org/10.1145/2701.2705 28. Espelid, T.O., Genz, A. (eds.): Numerical integration: recent developments, software and applications. NATO ASI Series. Series C, Mathematical and Physical Sciences, vol. 357. 
Kluwer Academic Publishers, Norwell, MA, USA, and Dordrecht, The Netherlands (1992). https://doi.org/10.1007/978-94-011-2646-5 29. G., W.: Review: tabulation of certain fully symmetric numerical integration formulas of degree 7, 9 and 11, by Frank Stenger. Math. Comput. 25(116), 935–935 (1971). https:// doi.org/10.1090/S0025-5718-71-99709-2. http://www.jstor.org/stable/2004361


30. Gautschi, W., Hairer, E.: On conjectures of Stenger in the theory of orthogonal polynomials. J. Inequal. Appl., 27 (2019). https://doi.org/10.1186/s13660-019-21076. https://journalofinequalitiesandapplications.springeropen.com/articles/10.1186/s13660019-2107-6. Paper no. 159 31. Gearhart, W.B., Stenger, F.: An approximate convolution equation of a given response. In: Kirby [53], pp. 168–196 32. Goodrich, R.F., Stenger, F.: Movable singularities and quadrature. Math. Comput. 24(110), 283–300 (1970). https://doi.org/10.2307/2004478. http://links.jstor.org/sici?sici=00255718(197004)24:1102.0.CO%3B2-S 33. Govil, N.K., Mohapatra, R., Qazi, M.A., Schmeisser, G. (eds.): Progress in Approximation Theory and Applicable Complex Analysis: In Memory of Q. I. Rahman. Springer Optimization and Its Applications, vol. 117. Springer, Berlin, Germany / Heidelberg, Germany / London, UK / etc. (2017). https://doi.org/10.1007/978-3-319-49242-1. https://link.springer. com/chapter/10.1007/978-3-319-49242-1 34. Graves-Morris, P.R., Saff, E.B., Varga, R.S. (eds.): In: Rational Approximation and Interpolation: Proceedings of the United Kingdom — United States Conference Held at Tampa, Florida, December 12–16, 1983. Lecture Notes in Mathematics, vol. 1105. Springer, Berlin, Germany / Heidelberg, Germany / London, UK / etc. (1984). https://doi.org/10.1007/ BFb0072395. http://link.springer.com/book/10.1007/BFb0072395;http://www.springerlink. com/content/978-3-540-39113-5 35. Gustafson, S.Å., Stenger, F.: Convergence acceleration applied to Sinc approximation with application to approximation of |x|α . In: Bowers and Lund [19], pp. 161–171. https://doi.org/ 10.1007/978-1-4612-0427-5_12 36. Hagmann, M.J., Stenger, F.S., Yarotski, D.A.: Linewidth of the harmonics in a microwave frequency comb generated by focusing a mode-locked ultrafast laser on a tunneling junction. J. Appl. Phys. 114(22), 223,107 (2013). https://doi.org/10.1063/1.4831952. http://scitation. 
aip.org/content/aip/journal/jap/114/22/10.1063/1.4831952 37. Han, L., Xu, J.: Proof of Stenger’s conjecture on matrix I (−1) of Sinc methods. J. Comput. Appl. Math. 255, 805–811 (2014). https://doi.org/10.1016/j.cam.2013.07.001. http://www. sciencedirect.com/science/article/pii/S0377042713003452. See [160] 38. Hanson, F., Steele, J.M., Stenger, F.: Distinct sums over subsets. Proc. Am. Math. Soc. 66(1), 179–180 (1977). https://doi.org/10.1090/S0002-9939-1977-04471674. http://links.jstor.org/sici?sici=0002-9939(197709)66:12.0.CO%3B2-9; http://www.ams.org/journals/proc/1977-066-01/S0002-9939-1977-0447167-4/ 39. Harvey, C., Stenger, F.: A two-dimensional analogue to the method of bisections for solving nonlinear equations. Q. Appl. Math. 33, 351–368 (1976). https://doi.org/10.1090/qam/ 455361. http://www.ams.org/journals/qam/1976-33-04/S0033-569X-1976-0455361-7/ 40. Ikebe, Y., Kowalski, M., Stenger, F.: Rational approximation of the step, filter, and impulse functions. In: Wong [202], pp. 441–454. http://www.loc.gov/catdir/enhancements/fy0647/ 90002810-d.html 41. Ikebe, Y., Li, T.Y., Stenger, F.: The numerical solution of the Hilbert problem. In: Theory of Approximation, with Applications (Proc. Conf., Univ. Calgary, Calgary, Alta., 1975; Dedicated to the Memory of Eckard Schmidt), pp. 338–358. Academic Press, New York, USA (1976) 42. Ismail, M., et al. (eds.): In: Mathematical Analysis, Wavelets, and Signal Processing: An International Conference on Mathematical Analysis and Signal Processing, January 3– 9, 1994, Cairo University, Cairo, Egypt. Contemporary mathematics, vol. 190. American Mathematical Society, Providence, RI, USA (1995) 43. Johnson, S.A., Berggren, M.J., Stenger, F., Wilcox, C.H., Jensen, E.: Recent developments in solving the exact acoustic inverse scattering problem. In: Anonymous (ed.) 
Proceedings of the 29th Annual Meeting of the American Institute of Ultrasound in Medicine, and the 13th Annual Meeting of the Society of Diagnostic Medical Sonographers, 16–19 September 1984, Kansas City, MO, USA, pp. 126–. American Institute of Ultrasound in Medicine, Bethesda, MD, USA (1984)


44. Johnson, S.A., Stenger, F.: Ultrasound tomography by Galerkin or moment methods. In: Nalcio˘glu and Cho [71], pp. 254–276. https://doi.org/10.1007/978-3-642-93253-3_10 45. Johnson, S.A., Stenger, F., Wilcox, C., Ball, J., Berggren, M.J.: Wave equations and inverse solutions for soft tissue. In: Acoustical Imaging, Vol. 11 (Monterey, Calif., 1981), pp. 409– 424. Plenum, New York (1982). https://doi.org/10.1007/978-1-4684-1137-9_27 46. Johnson, S.A., Zhou, Y., Tracey, M.L., Berggren, M.J., Stenger, F.: Inverse scattering solutions by a sinc basis, multiple source, moment method. Part III: Fast algorithms. Ultrasonic Imaging 6(1), 103–116 (1984). https://doi.org/10.1016/0161-7346(84)90010-5 47. Johnson, S.A., Zhou, Y., Tracy, M.K., Berggren, M.J., Stenger, F.: Fast iterative algorithms for inverse scattering solutions of the Helmholtz and Riccati wave equations. In: Kaveh et al. [49], pp. 75–87 48. Jones, H.W. (ed.): Acoustical Imaging: Proceedings of the International Symposium, July 14– 16, 1986, Halifax, NS, Canada. Acoustical Imaging, vol. 15. Plenum Press, New York, NY, USA; London, UK (1987) 49. Kaveh, M., Mueller, R.K., Greenleaf, J.F. (eds.): In: Acoustical Imaging: Proceedings of the Thirteenth International Symposium on Acoustical Imaging, Held October 26–28, 1983, in Minneapolis, MN. Acoustical Imaging, vol. 13. Plenum Press, New York, NY, USA; London, UK (1984) 50. Kearfott, R.B., Sikorski, K., Stenger, F.: A Sinc function fast Poisson solver (1987) 51. Kim, W.W., Berggren, M.J., Johnson, S.A., Stenger, F., Wilcox, C.H.: Inverse scattering solutions to the exact Riccati wave equations by iterative RYTOV approximations and internal field calculations. In: McAvoy [64], pp. 878–882. https://doi.org/10.1109/ULTSYM. 1985.198638. http://ieeexplore.ieee.org/iel5/10285/32718/01535578.pdf?tp=&arnumber= 1535578&isnumber=32718. Two volumes. IEEE catalog number 85CH2209-5 52. 
Kim, W.W., Johnson, S.A., Berggren, M.J., Stenger, F., Wilcox, C.H.: Analysis of inverse scattering solutions from single frequency, combined transmission and reflection data for the Helmholtz and Riccati exact wave equations. In: Jones [48], pp. 359–369 53. Kirby, B.J. (ed.): In: Optimal Control Theory and Its Applications: Proceedings of the 14th Biennial Seminar of the Canadian Mathematical Congress, University of Western Ontario, Aug. 12–25, 1973. Lecture Notes in Economics and Mathematical Systems, vol. 106. Springer, Berlin, Germany / Heidelberg, Germany / London, UK / etc. (1974) 54. Kowalski, M.A., Sikorski, K.A., Stenger, F.: Selected Topics in Approximation and Computation. Oxford University Press, Oxford, UK (1995). http://site.ebrary.com/lib/utah/Doc?id= 10087215 55. Kowalski, M.A., Stenger, F.: Optimal complexity recovery of band- and energy-limited signals. II. J. Complex. 5(1), 45–59 (1989). https://doi.org/10.1016/0885-064x(89)900125 56. Kress, R., Sloan, I.H., Stenger, F.: A sinc quadrature method for the double-layer integral equation in planar domains with corners. J. Integral Equ. Appl. 10(3), 291–317 (1998). https:// doi.org/10.1216/jiea/1181074232. http://projecteuclid.org/euclid.jiea/1181074232 57. Kromann, G.B., Culham, J.R., Ramakrishna, K. (eds.): In: ITherm 2000: the Seventh Intersociety Conference on Thermal and Thermomechanical Phenomena in Electronic Systems, presented at Las Vegas, Nevada, USA, May 23–26, 2000. IEEE Computer Society Press, 1109 Spring Street, Suite 300, Silver Spring, MD 20910, USA (2000). Two volumes. IEEE catalog number 00CH37069 58. Lipow, P.R., Stenger, F.: How slowly can quadrature formulas converge? Math. Comput. 26(120), 917–922 (1972). https://doi.org/10.1090/s0025-5718-1972-0319356-4; https:// doi.org/10.2307/2005875. http://links.jstor.org/sici?sici=0025-5718(197210)26:1202.0.CO%3B2-R 59. Lundin, L., Stenger, F.: Cardinal-type approximations of a function and its derivatives. SIAM J. Math. Anal. 
10(1), 139–160 (1979). https://doi.org/10.1137/0510016. https://epubs.siam. org/doi/abs/10.1137/0510016


60. Martin, C., White, J. (eds.): Visiting Scholars’ Lectures 1989, Texas Tech University, Lubbock, TX (USA). Mathematics Series, vol. 16. Department of Mathematics, Texas Tech University, Lubbock, TX, USA (1990) 61. McArthur, K.M.: Book review: Numerical methods based on sinc and analytic functions, by Frank Stenger. SIAM Rev. 36(4), 673–674 (1994). https://doi.org/10.1137/1036167. See [149] 62. McArthur, K.M.: Review: numerical methods based on sinc and analytic functions (Frank Stenger). SIAM Rev. 36(4), 673–674 (1994). https://doi.org/10.1137/ 1036167. http://links.jstor.org/sici?sici=0036-1445(199412)36:42.0.CO %3B2-J;https://epubs.siam.org/doi/abs/10.1137/1036167 63. McAvoy, B.R. (ed.): IEEE 1983 Ultrasonics Symposium: October 31, November 1-2, 1983, Atlanta Marriott Hotel, Atlanta, Georgia. IEEE Computer Society Press, Silver Spring, MD, USA (1983). Two volumes. IEEE catalog number 83CH1947-1 64. McAvoy, B.R. (ed.): IEEE 1985 Ultrasonics Symposium: October 16–18, 1985, Cathedral Hill Hotel, San Francisco, California. IEEE Computer Society Press, Silver Spring, MD, USA (1985). Two volumes. IEEE catalog number 85CH2209-5 65. McNamee, J., Stenger, F.: Construction of fully symmetric numerical integration formulas. Technical Report 4, Department of Computing Science, University of Alberta, Edmonton, AB, Canada (1966) 66. McNamee, J., Stenger, F.: Construction of fully symmetric numerical integration formulas. Numerische Mathematik 10(4), 327–344 (1967). https://doi.org/10.1007/BF02162032 67. McNamee, J., Stenger, F., Whitney, E.L.: Whittaker’s cardinal function in retrospect. Math. Comput. 25(113), 141–154 (1971). https://doi.org/10.1090/S0025-5718-1971-0301428-0. http://links.jstor.org/sici?sici=0025-5718(197101)25:1132.0.CO%3B2-6; http://www.ams.org/journals/mcom/1971-25-113/S0025-5718-1971-0301428-0/ 68. Miller, A. (ed.): Contributions of mathematical analysis to the numerical solution of partial differential equations.
In: Proceedings of the Centre for Mathematical Analysis, Australian National University, vol. 7. Centre for Mathematical Analysis, Australian National University, Canberra, Australia (1984) 69. Moler, C.B.: More on the sphere in the corner. ACM SIGNUM Newsl. 4(1), 7–7 (1969). https://doi.org/10.1145/1198450.1198451. See [105] 70. Morlet, A.C., Stenger, F.: Sinc approximation of solution of heat equation with discontinuous initial condition. In: Computation and Control, IV (Bozeman, MT, 1994), Progr. Systems Control Theory, vol. 20, pp. 289–303. Birkhäuser Boston, Cambridge, MA, USA (1995). https://doi.org/10.1007/978-1-4612-2574-4_19 71. Nalcio˘glu, O., Cho, Z.H. (eds.): Selected Topics in Image Science. Lecture Notes in Medical Informatics, vol. 23. Springer, Berlin, Germany / Heidelberg, Germany / London, UK / etc. (1984). https://doi.org/10.1007/978-3-642-93253-3 72. Nalcioglu, O.O., Cho, Z.H.Z.H., Budinger, T.F.T.F., et al. (eds.): In: International Workshop on Physics and Engineering of Computerized Multidimensional Imaging and Processing: 2– 4 April 1986, Newport Beach, California. SPIE, vol. 671. SPIE Optical Engineering Press, Bellingham, WA, USA (1986) 73. Narasimhan, S., Chen, K., Stenger, F.: A harmonic-sinc solution of the Laplace equation for problems with singularities and semi-infinite domains. Numer. Heat Transfer B (Fundam.) 33(4), 433–450 (1998). https://doi.org/10.1080/10407799808915042. https://www. tandfonline.com/doi/abs/10.1080/10407799808915042 74. Narasimhan, S., Chen, K., Stenger, F.: The solution of incompressible Navier–Stokes equations using the sine collocation method. In: Kromann et al. [57], pp. 199–214. https:// doi.org/10.1109/ITHERM.2000.866827. Two volumes. IEEE catalog number 00CH37069 75. Narasimhan, S., Chen, K., Stenger, F.: A first step in applying the Sinc collocation method to the nonlinear Navier–Stokes equations. Numer. Heat Transfer B (Fundam.) 41(5), 447–462 (2002). https://doi.org/10.1080/104077902753725902 76. 
Nashed, M.Z., Scherzer, O. (eds.): In: Inverse Problems, Image Analysis, and Medical Imaging: AMS Special Session on Interaction of Inverse Problems and Image Analysis, January


10–13, 2001, New Orleans, Louisiana. Contemporary Mathematics, vol. 313. American Mathematical Society, Providence, RI, USA (2002) 77. Nickel, K.: Error bounds and uniqueness for the solutions of nonlinear, strongly coupled, parabolic systems of differential equations. MRC Technical Summary Report 1596, Mathematics Research Center, US Department of the Army, Madison, WI, USA (1976) 78. Olver, F.W.J., Stenger, F.: Error bounds for asymptotic solutions of second-order differential equations having an irregular singularity of arbitrary rank. J. Soc. Ind. Appl. Math. B Numer. Anal. 2(2), 244–249 (1965). https://doi.org/10.1137/0702018. http://links.jstor.org/sici?sici= 0887-459X(1965)2:22.0.CO%3B2-N 79. Papamichael, N.N., Ruscheweyh, S., Saff, E.B. (eds.): In: Computational Methods and Function Theory 1997: Proceedings of the Third CMFT Conference, 13–17 October 1997, Nicosia, Cyprus. Series in Approximations and Decompositions, vol. 11. World Scientific Publishing, Singapore (1999) 80. Quak, E.: Book reviews: Selected topics in approximation and computation, by Marek A. Kowalski, Krzysztof A. Sikorski and Frank Stenger. Math. Comput. 66(219), 1374–1374 (1997). https://doi.org/10.1090/S0025-5718-97-00877-6. http://links.jstor.org/sici?sici= 0025-5718(199707)66:2192.0.CO%3B2-L;http://www.ams.org/journals/ mcom/1997-66-219/S0025-5718-97-00877-6/ 81. Rahman, Q.I., Stenger, F.: An extremal problem for polynomials with a prescribed zero. Proc. Am. Math. Soc. 43(1), 84–90 (1974). https://doi.org/10.1090/s0002-9939-1974-0333123-0; https://doi.org/10.2307/2039331. http://links.jstor.org/sici?sici=0002-9939(197403)43:12.0.CO%3B2-2 82. Resch, R., Stenger, F., Waldvogel, J.: Functional equations related to the iteration of functions. Aequationes Mathematicae 60(1–2), 25–37 (2000). https://doi.org/10.1007/s000100050133. https://link.springer.com/article/10.1007/s000100050133 83. 
Rosenberg, I.G., Stenger, F.: A lower bound on the angles of triangles constructed by bisecting the longest side. Math. Comput. 29(130), 390–395 (1975). https://doi.org/10. 1090/s0025-5718-1975-0375068-5; https://doi.org/10.2307/2005558. http://links.jstor.org/ sici?sici=0025-5718(197504)29:1302.0.CO%3B2-H 84. Sabin, J.R., Cabrera-Trujillo, R. (eds.): Concepts of Mathematical Physics in Chemistry: A Tribute to Frank E. Harris, vol. 71. Academic Press, New York, USA (2015) 85. Sabin, J.R., Cabrera-Trujillo, R. (eds.): Concepts of Mathematical Physics in Chemistry: A Tribute to Frank E. Harris. Part B. Advances in Quantum Chemistry, vol. 72. Academic Press, New York, USA (2016) 86. Sack, R.A.: Comments on some quadrature formulas by F. Stenger. J. Inst. Math. Appl. 21(3), 359–361 (1978). https://doi.org/10.1093/imamat/21.3.359. https://academic.oup.com/ imamat/article/21/3/359/690581. See [120, 129] 87. Schaffer, S., Stenger, F.: Multigrid-sinc methods. Appl. Math. Comput. 19(1–4), 311– 319 (1986). https://doi.org/10.1016/0096-3003(86)90110-4. Second Copper Mountain conference on multigrid methods (Copper Mountain, Colo., 1985) 88. Schmeisser, G.: Review: Numerical methods based on sinc and analytic functions, by Frank Stenger. Math. Comput. 63(208), 817–819 (1994). https://doi.org/10.2307/2153301. http:// links.jstor.org/sici?sici=0025-5718(199410)63:2082.0.CO%3B2-K 89. Schmeisser, G., Stenger, F.: Sinc approximation with a Gaussian multiplier. Sampling Theory Signal Image Process. 6(2), 199–221 (2007). http://stsip.org/pdf_campaign/vol06/ no2/vol06no2pp199-221.pdf 90. Shen, X., Zayed, A.I. (eds.): Multiscale Signal Analysis and Modeling. Springer, Berlin, Germany / Heidelberg, Germany / London, UK / etc. (2013). https://doi.org/10.1007/978-14614-4145-8 91. Sikorski, K.: Optimal quadrature algorithms in Hp spaces. Numerische Mathematik 39(3), 405–410 (1982). https://doi.org/10.1007/BF01407871. https://link.springer.com/article/10. 1007/BF01407871 92. 
Sikorski, K., Stenger, F.: Optimal quadratures in Hp spaces. ACM Trans. Math. Softw. 10(2), 140–151 (1984). https://doi.org/10.1145/399.448

15 Publications by, and About, Frank Stenger


93. Sikorski, K., Stenger, F., Schwing, J.: Algorithm 614: A FORTRAN subroutine for numerical integration in Hp . ACM Trans. Math. Softw. 10(2), 152–160 (1984). https://doi.org/10.1145/ 399.449 94. Sikorski, K., Stenger, F., Schwing, J.: A Fortran subroutine for integration in Hp spaces. ACM Trans. Math. Softw. 10(2), 152–157 (1984). https://doi.org/10.1145/399.449. https://dl.acm. org/citation.cfm?doid=399.449 95. Stenger, F.: Error bounds for asymptotic solutions of differential equations. 1, The distinct eigenvalue case. Technical Report 2, Department of Computing Science, University of Alberta, Edmonton, AB, Canada (1965). http://web.sirsitest.library.ualberta.ca/uhtbin/ cgisirsi/QWzmKzdaVV/UAARCHIVES/295120075/9?first_hit=1&last_hit=20&form_ type=&VIEW%5E3.x=49&VIEW%5E1.y=10 96. Stenger, F.: Error bounds for asymptotic solutions of differential equations. 2, The general case. Technical Report 3, Department of Computing Science, University of Alberta, Edmonton, AB, Canada (1965). http://web.sirsitest.library.ualberta.ca/uhtbin/cgisirsi/ 3DeRFr4iWs/UAARCHIVES/295120075/9?first_hit=1&last_hit=20&form_type=&VIEW %5E4.x=49&VIEW%5E1.y=10 97. Stenger, F.: Bounds on the error of Gauss-type quadratures. Numerische Mathematik 8(2), 150–160 (1966). https://doi.org/10.1007/BF02163184 98. Stenger, F.: Error bounds for asymptotic solutions of differential equations. I: The distinct eigenvalue case. J. Res. Natl. Bur. Stand. Sect. B Math. Math. Phys. 70(3), 167–186 (1966). https://doi.org/10.6028/jres.070b.017 99. Stenger, F.: Error bounds for asymptotic solutions of differential equations. II: The general case. J. Res. Natl. Bur. Stand. Sect. B Math. Math. Phys. 70(3), 187–210 (1966). https://doi. org/10.6028/jres.070b.018 100. Stenger, F.: Error bounds for solutions of differential equations. Ph.D. thesis, Department of Computing Science, University of Alberta, Edmonton, AB, Canada (1966) 101. 
Stenger, F.: Error bounds for the evaluation of integrals by repeated Gauss-type formulae. Numerische Mathematik 9(3), 200–213 (1966). https://doi.org/10.1007/BF02162084 102. Stenger, F.: Book review: Numerical integration (Philip J. Davis and Philip Rabinowitz). SIAM Rev. 10(2), 239–240 (1968). https://doi.org/10.1137/1010051 103. Stenger, F.: Kronecker product extensions of linear operators. SIAM J. Numer. Anal. 5(2), 422–435 (1968). https://doi.org/10.1137/0705033. http://links.jstor.org/sici?sici=00361429(196806)5:22.0.CO%3B2-7 104. Stenger, F.: Review: Numerical integration, by Philip J. Davis and Philip Rabinowitz. SIAM Rev. 10(2), 239–240 (1968). https://doi.org/10.1137/1010051. http://links.jstor.org/sici?sici= 0036-1445(196804)10:22.0.CO%3B2-Y 105. Stenger, F.: A trap in computations. ACM SIGNUM Newsl. 3(3), 2–2 (1968). https://doi.org/ 10.1145/1198460.1198462. See [69] 106. Stenger, F.: The asymptotic approximation of certain integrals. SIAM J. Math. Anal. 1(3), 392–404 (1970). https://doi.org/10.1137/0501036 107. Stenger, F.: On the asymptotic solution of two first order linear differential equations with large parameter. Funkcialaj Ekvacioj. Serio Internacia 13, 1–18 (1970). http://fe.math.kobeu.ac.jp/FE/Free/vol13/fe13-1.pdf 108. Stenger, F.: Book review: Integrals and sums (P. C. Chakravarti). SIAM Rev. 13(4), 582–583 (1971). https://doi.org/10.1137/1013113 109. Stenger, F.: Constructive proofs for approximation by inner functions. J. Approx. Theory 4(4), 372–386 (1971). https://doi.org/10.1016/0021-9045(71)90004-9 110. Stenger, F.: Erratum: The reduction of two dimensional integrals into a finite number of onedimensional integrals. Aequationes Mathematicae 6(2–3), 316–317 (1971). https://doi.org/ 10.1007/bf01819776. See [111] 111. Stenger, F.: The reduction of two dimensional integrals into a finite number of one dimensional integrals. Aequationes Mathematicae 6(2–3), 278–287 (1971). https://doi.org/10.1007/ bf01819765. See erratum [110]


N. H. F. Beebe

112. Stenger, F.: Review: Integrals and sums, by P. C. Chakravarti. SIAM Rev. 13(4), 582–583 (1971). https://doi.org/10.1137/1013113. http://links.jstor.org/sici?sici=00361445(197110)13:42.0.CO%3B2-1 113. Stenger, F.: The approximate solution of Wiener–Hopf integral equations. J. Math. Anal. Appl. 37(3), 687–724 (1972). https://doi.org/10.1016/0022-247X(72)90251-X 114. Stenger, F.: Book review: Quadrature formulae (A. Ghizetti and A. Ossicini). SIAM Rev. 14(4), 662–662 (1972). https://doi.org/10.1137/1014118 115. Stenger, F.: Review: Quadrature formulae, by A. Ghizetti and A. Ossicini. SIAM Rev. 14(4), 662–662 (1972). https://doi.org/10.1137/1014118. http://links.jstor.org/sici?sici= 0036-1445(197210)14:42.0.CO%3B2-J 116. Stenger, F.: Transform methods for obtaining asymptotic expansions of definite integrals. SIAM J. Math. Anal. 3(1), 20–30 (1972). https://doi.org/10.1137/0503003 117. Stenger, F.: An algorithm for the approximate solution of Wiener–Hopf integral equations. Commun. ACM 16(11), 708–710 (1973). https://doi.org/10.1145/355611.362549 118. Stenger, F.: The approximate solution of convolution-type integral equations. SIAM J. Math. Anal. 4(3), 536–555 (1973). https://doi.org/10.1137/0504047 119. Stenger, F.: Book review: Approximate calculation of multiple integrals (A. H. Stroud). SIAM Rev. 15(1), 234–235 (1973). https://doi.org/10.1137/1015023 120. Stenger, F.: Integration formulae based on the trapezoidal formula. J. Inst. Math. Appl. 12(1), 103–114 (1973). https://doi.org/10.1093/imamat/12.1.103. See remarks [86, 129] 121. Stenger, F.: Review: approximate calculation of multiple integrals, by A. H. Stroud. SIAM Rev. 15(1), 234–235 (1973). https://doi.org/10.1137/1015023. http://links.jstor.org/sici?sici= 0036-1445(197301)15:12.0.CO%3B2-7;https://epubs.siam.org/doi/abs/10. 1137/1015023 122. Stenger, F.: Computing the topological degree of a mapping in R n . Report, National Oceanic and Atmospheric Administration, Washington, DC, USA (1974) 123. 
Stenger, F.: On the convergence and error of the Bubnov–Galerkin method. Lect. Notes Math. 362, 434–450 (1974). https://doi.org/10.1007/bfb0066604 124. Stenger, F.: An algorithm for the topological degree of a mapping in n-space. Bull. Am. Math. Soc. 81(1), 179–182 (1975). https://doi.org/10.1090/s0002-9904-1975-13698-6. http:// projecteuclid.org/euclid.bams/1183536266 125. Stenger, F.: An analytic function which is an approximate characteristic function. SIAM J. Numer. Anal. 12(2), 239–254 (1975). https://doi.org/10.1137/0712022. http://links.jstor.org/ sici?sici=0036-1429(197504)12:22.0.CO%3B2-V 126. Stenger, F.: Computing the topological degree of a mapping in R n . Numerische Mathematik 25(1), 23–38 (1975). https://doi.org/10.1007/bf01419526 127. Stenger, F.: Connection between a Cauchy system representation of Kalaba and Fourier transforms. Appl. Math. Comput. 1(1), 83–91 (1975). https://doi.org/10.1016/00963003(75)90032-6 128. Stenger, F.: Approximations via Whittaker’s cardinal function. J. Approx. Theory 17(3), 222– 240 (1976). https://doi.org/10.1016/0021-9045(76)90086-1 129. Stenger, F.: Remarks on “Integration formulae based on the trapezoidal formula” (J. Inst. Math. Appl. 12 (1973), 103–114). J. Inst. Math. Appl. 19(2), 145–147 (1977). https://doi. org/10.1093/imamat/19.2.145. See [86, 120] 130. Stenger, F.: Optimal convergence of minimum norm approximations in Hp . Numerische Mathematik 29(4), 345–362 (1978). https://doi.org/10.1007/bf01432874 131. Stenger, F.: Upper and lower estimates on the rate of convergence of approximations in Hp . Bull. Am. Math. Soc. 84(1), 145–148 (1978). https://doi.org/10.1090/s0002-9904-197814446-2. http://projecteuclid.org/euclid.bams/1183540395 132. Stenger, F.: A “Sinc–Galerkin” method of solution of boundary value problems. Math. Comput. 33(145), 85–109 (1979). https://doi.org/10.1090/s0025-5718-1979-0514812-4; https:// doi.org/10.2307/2006029. http://links.jstor.org/sici?sici=0025-5718(197901)33:1452.0.CO%3B2-S


133. Stenger, F.: An algorithm for ultrasonic tomography based on inversion of the Helmholtz equation. In: Allgower et al. [1], pp. 371–406. https://doi.org/10.1007/ BFb0090689. http://link.springer.com/book/10.1007/BFb0090674;http://www.springerlink. com/content/978-3-540-38781-7 134. Stenger, F.: Numerical methods based on Whittaker cardinal, or sinc functions. SIAM Rev. 23(2), 165–224 (1981). https://doi.org/10.1137/1023037. http://links.jstor.org/sici?sici= 0036-1445(198104)23:22.0.CO%3B2-S 135. Stenger, F.: Asymptotic ultrasonic inversion based on using more than one frequency. In: J.P. Powers (ed.) Acoustical Imaging, Vol. 11 (Monterey, Calif., 1981), pp. 425–444. Plenum, New York, NY, USA (1982). https://doi.org/10.1007/978-1-4684-1137-9_28 136. Stenger, F.: Polynomial, sinc and rational function methods for approximating analytic functions. Lect. Notes Math. 1105, 49–72 (1984). https://doi.org/10.1007/BFb0072399. http://link.springer.com/chapter/10.1007/BFb0072399/ 137. Stenger, F.: Sinc methods of approximate solution of partial differential equations. In: Miller [68], pp. 40–64 138. Stenger, F.: Explicit, nearly optimal, linear rational approximation with preassigned poles. Math. Comput. 47(175), 225–252 (1986). https://doi.org/10.1090/s0025-57181986-0842132-0; https://doi.org/10.2307/2008091. http://links.jstor.org/sici?sici=00255718(198607)47:1752.0.CO%3B2-P 139. Stenger, F.: Book review: computational complexity (K. Wagner and G. Wechsung). SIAM Rev. 30(2), 353–354 (1988). https://doi.org/10.1137/1030086. http://links.jstor.org/sici?sici= 0036-1445(198806)30:22.0.CO%3B2-B 140. Stenger, F.: Review: computational complexity, by K. Wagner and G. Wechsung. SIAM Rev. 30(2), 353–354 (1988). https://doi.org/10.1137/1030086. http://links.jstor.org/sici?sici= 0036-1445(198806)30:22.0.CO%3B2-B;https://epubs.siam.org/doi/abs/10.1137/ 1030086 141. Stenger, F.: Explicit approximate methods for computational control theory. In: Bowers et al. [18], pp. 299–316. 
https://doi.org/10.1007/978-1-4612-3704-4_21 142. Stenger, F.: Book review: rational approximation of real functions (P. P. Petrushev and V. I. Popov). SIAM Rev. 32(1), 187–188 (1990). https://doi.org/10.1137/1032034. http://links. jstor.org/sici?sici=0036-1445(199003)32:12.0.CO%3B2-R 143. Stenger, F.: Review: rational approximation of real functions by P. P. Petrushev and V. I. Popov. SIAM Rev. 32(1), 187–188 (1990). https://doi.org/10. 1137/1032034. http://links.jstor.org/sici?sici=0036-1445(199003)32:12.0. CO%3B2-R;https://epubs.siam.org/doi/abs/10.1137/1032034 144. Stenger, F.: Some open research problems in sonic and electromagnetic inversion. In: Martin and White [60], pp. 73–89 145. Stenger, F.: Book review: sinc methods for quadrature and differential equations (J. Lund and K. L. Bowers). SIAM Rev. 35(4), 682–683 (1993). https://doi.org/10.1137/1035172. http:// links.jstor.org/sici?sici=0036-1445(199312)35:42.0.CO%3B2-F 146. Stenger, F.: Differential equations. In: Numerical Methods Based on Sinc and Analytic Functions [149], chap. 7, pp. 441–532. https://doi.org/10.1007/978-1-4612-2706-9_7 147. Stenger, F.: Integral equations. In: Numerical Methods Based on Sinc and Analytic Functions [149], chap. 6, pp. 311–440. https://doi.org/10.1007/978-1-4612-2706-9_6 148. Stenger, F.: Mathematical preliminaries. In: Numerical Methods Based on Sinc and Analytic Functions [149], chap. 1, pp. 1–103. https://doi.org/10.1007/978-1-4612-2706-9_1 149. Stenger, F.: Numerical methods based on Sinc and analytic functions. Springer Series in Computational Mathematics, vol. 20. Springer, Berlin, Germany / Heidelberg, Germany / London, UK / etc. (1993). https://doi.org/10.1007/978-1-4612-2706-9 150. Stenger, F.: Polynomial approximation. In: Numerical Methods Based on Sinc and Analytic Functions [149], chap. 2, pp. 105–130. https://doi.org/10.1007/978-1-4612-2706-9_2


151. Stenger, F.: Review: sinc methods for quadrature and differential equations, by J. Lund and K. L. Bowers. SIAM Rev. 35(4), 682–683 (1993). https://doi.org/10.1137/ 1035172. http://links.jstor.org/sici?sici=0036-1445(199312)35:42.0.CO %3B2-F;https://epubs.siam.org/doi/abs/10.1137/1035172 152. Stenger, F.: Sinc approximation in strip. In: Numerical Methods Based on Sinc and Analytic Functions [149], chap. 3, pp. 131–178. https://doi.org/10.1007/978-1-4612-2706-9_3 153. Stenger, F.: Sinc approximation on Γ . In: Numerical Methods Based on Sinc and Analytic Functions [149], chap. 4, pp. 179–242. https://doi.org/10.1007/978-1-4612-2706-9_4 154. Stenger, F.: Sinc-related methods. In: Numerical Methods Based on Sinc and Analytic Functions [149], chap. 5, pp. 243–310. https://doi.org/10.1007/978-1-4612-2706-9_5 155. Stenger, F.: Numerical methods via transformations. In: Zahar [203], pp. 543–550. https:// doi.org/10.1007/978-1-4684-7415-2_36 156. Stenger, F.: Collocating convolutions. Math. Comput. 64(209), 211–235 (1995). https:// doi.org/10.1090/s0025-5718-1995-1270624-7; https://doi.org/10.2307/2153330. http://links. jstor.org/sici?sici=0025-5718(199501)64:2092.0.CO%3B2-M 157. Stenger, F.: Sinc convolution — a tool for circumventing some limitations of classical signal processing. In: Ismail et al. [42], pp. 227–240. https://doi.org/10.1090/conm/190/02305 158. Stenger, F.: Sinc inversion of the Helmholtz equation without computing the forward solution. In: Áng et al. [5], pp. 149–157 159. Stenger, F.: Book reviews: solving problems in scientific computing using MAPLE and MATLAB, by Walter Gander and Jírí Hrebícek. Math. Comput. 65(214), 880–882 (1996). https:// doi.org/10.1090/s0025-5718-96-00724-7. http://www.ams.org/jourcgi/jour-pbprocess?fn= 110&arg1=S0025-5718-96-00700-4&u=/mcom/1996-65-214/ 160. Stenger, F.: Matrices of Sinc methods. J. Comput. Appl. Math. 86(1), 297–310 (1997). https:// doi.org/10.1016/s0377-0427(97)00163-5. 
http://www.sciencedirect.com/science/article/pii/ S0377042797001635. Special issue dedicated to William B. Gragg (Monterey, CA, 1996) 161. Stenger, F.: Review: integral equations: theory and numerical treatment, by Wolfgang Hackbusch. Math. Comput. 66(220), 1756–1758 (1997). https://doi.org/10.1090/ S0025-5718-97-00910-1. http://links.jstor.org/sici?sici=0025-5718(199710)66:2202.0.CO%3B2-V;http://www.ams.org/journals/mcom/1997-66-220/S0025-571897-00910-1/;https://www.jstor.org/stable/2153702 162. Stenger, F.: Reviews and descriptions of tables and books: 22. Integral equations: Theory and numerical treatment, by Wolfgang Hackbusch. Math. Comput. 66(220), 1756–1758 (1997). https://doi.org/10.1090/s0025-5718-97-00910-1. http://www.ams.org/jourcgi/jourpbprocess?fn=110&arg1=S0025-5718-97-00843-0&u=/mcom/1997-66-220/ 163. Stenger, F.: Book reviews: boundary element method, fundamentals and applications, by Frederico Paris and Jose Canas. Math. Comput. 68(225), 457–459 (1999). https://doi.org/ 10.1090/s0025-5718-99-01069-8. http://links.jstor.org/sici?sici=0025-5718(199901)68: 2252.0.CO%3B2-K;http://www.ams.org/jourcgi/jour-pbprocess?fn=110& arg1=S0025-5718-99-00992-8&u=/mcom/1999-68-225/ 164. Stenger, F.: Sinc approximation of Cauchy-type integrals over arcs. ANZIAM J. 42(1), 87–97 (2000). https://doi.org/10.1017/S1446181100011627. https://www.cambridge.org/ core/journals/anziam-journal/article/sinc-approximation-of-cauchytype-integrals-over-arcs/ F3E8E58B66C08F142BB6C63D61317703. Papers in honour of David Elliott on the occasion of his sixty-fifth birthday 165. Stenger, F.: Summary of sinc numerical methods. J. Comput. Appl. Math. 121(1–2), 379–420 (2000). https://doi.org/10.1016/s0377-0427(00)00348-4; https://doi.org/10.1137/0712022. http://www.sciencedirect.com/science/article/pii/S0377042700003484. Numerical analysis in the 20th century, Vol. I, Approximation theory 166. 
Stenger, F.: Three dimensional hybrid BEM–Sinc analysis of bonded/bolted composite joints with discrete cracks. Technical Report AD-a376 152, SIN-0005, Sinc. Inc., Salt Lake City, UT, USA (2000) 167. Stenger, F.: A unified approach to the approximate solution of PDE. Berichte aus der Technomathematik 00-17, Zentrum für Technomathematik, Bremen, Germany (2000)


168. Stenger, F.: Sinc convolution solution of laminated and anisotropic composite joints with discrete cracks. Technical Report AD-b302 193, Sinc. Inc., Salt Lake City, UT, USA (2004) 169. Stenger, F.: Separation of variables solution of PDE via sinc methods. Int. J. Appl. Math. Stat. 10(SO7), 98–115 (2007). http://www.ceser.in/ceserp/index.php/ijamas/article/view/305 170. Stenger, F.: Fourier series for zeta function via Sinc. Linear Algebra Appl. 429(10), 2636– 2639 (2008). https://doi.org/10.1016/j.laa.2008.01.037 171. Stenger, F.: Polynomial function and derivative approximation of sinc data. J. Complex. 25(3), 292–302 (2009). https://doi.org/10.1016/j.jco.2009.02.010 172. Stenger, F.: Handbook of Sinc Numerical Methods. Chapman and Hall/CRC Numerical Analysis and Scientific Computation Series. CRC Press, Boca Raton, FL, USA (2011). https://doi.org/10.1201/b10375. http://www.crcpress.com/product/isbn/9781439821589 173. Stenger, F.: Approximating the Riemann zeta and related functions. In: Govil et al. [33], pp. 363–373. https://doi.org/10.1007/978-3-319-49242-1_17. https://link.springer.com/chapter/ 10.1007/978-3-319-49242-1_17 174. Stenger, F.: Approximate solution of numerical problems via approximate indefinite integration. Talk dedicated to Walter Gautschi on the occasion of his 90th birthday. (2018). PDF file not yet released 175. Stenger, F.: Indefinite integration operators identities and their approximations, pp. 1–36 (2018). https://arxiv.org/abs/1809.05607 176. Stenger, F.: A proof of the Riemann hypothesis, pp. 1–26 (2018). https://arxiv.org/abs/1708. 01209v4 177. Stenger, F., Baker, B., Brewer, C., Hunter, G., Kaputerko, S., Shepherd, J.: Periodic approximations based on sinc. Int. J. Pure Appl. Math. 49(1), 63–72 (2008). http://ijpam. eu/contents/2008-49-1/8/index.html 178. Stenger, F., Barkey, B., Vakili, R.: Sinc convolution approximate solution of Burgers’ equation. In: Computation and Control, III (Bozeman, MT, 1992), Progr. 
Systems Control Theory, vol. 15, pp. 341–354. Birkhäuser Boston, Cambridge, MA, USA (1993) 179. Stenger, F., Baumann, G., Koures, V.G.: Computational methods for chemistry and physics, and Schrödinger in 3 + 1. Preprint, University of Utah; German University in Cairo; IISAM L3C, Salt Lake City, UT, USA; New Cairo City, Egypt; Oklahoma City, OK, USA (2015). To be published in the proceedings of a conference of December 2014 honoring the 85th birthday of Frank E. Harris 180. Stenger, F., Baumann, G., Koures, V.G.: Computational methods for chemistry and physics, and Schrödinger in 3 + 1. Adv. Quantum Chem. 71, 265–298 (2015). https://doi.org/10.1016/bs.aiq.2015.02.005. http://www.sciencedirect.com/science/article/pii/S0065327615000064 181. Stenger, F., Berggren, M.J., Johnson, S.A., Li, Y.: An adaptive, noise tolerant, frequency extrapolation algorithm for diffraction corrected ultrasound tomography. In: McAvoy [63], pp. 726–731. https://doi.org/10.1109/ULTSYM.1983.198154. http://ieeexplore.ieee.org/iel5/10283/32716/01535094.pdf?tp=&arnumber=1535094&isnumber=32716. Two volumes. IEEE catalog number 83CH1947-1 182. Stenger, F., Berggren, M.J., Johnson, S.A., Wilcox, C.H.: Rational function frequency extrapolation in ultrasonic tomography. In: Wave Phenomena: Modern Theory and Applications (Toronto, 1983), North-Holland Math. Stud., vol. 97, pp. 19–34. North-Holland Publishing, Amsterdam, The Netherlands (1984) 183. Stenger, F., Chaudhuri, R., Chiu, J.: Novel sinc solution of the boundary integral form for two-dimensional bi-material elasticity problems. Compos. Sci. Technol. 60(12–13), 2197–2211 (2000). https://doi.org/10.1016/S0266-3538(00)00015-4 184. Stenger, F., Cohen, E., Riesenfeld, R.: Radial function methods of approximation based on using harmonic Green's functions. Commun. Appl. Anal. 6(1), 1–15 (2002) 185. Stenger, F., Cook, T., Kirby, R.M.: Sinc solution of biharmonic problems. Can. Appl. Math. Q. 12(3), 391–414 (2004).
http://www.math.ualberta.ca/ami/CAMQ/table_of_content/vol_ 12/12_3h.htm


186. Stenger, F., El-Sharkawy, H.A.M., Baumann, G.: The Lebesgue constant for sinc approximations. In: Zayed and Schmeisser [204], pp. 319–335. https://doi.org/10.1007/978-3-31908801-3_13 187. Stenger, F., Gustafson, S.Å., Keyes, B., O’Reilly, M., Parker, K.: ODE-IVP-PACK via Sinc indefinite integration and Newton’s method. Numerical Algorithms 20(2–3), 241–268 (1999). https://doi.org/10.1023/A:1019108002140. http://ipsapp007.kluweronline.com/ content/getfile/5058/18/7/abstract.htm;http://ipsapp007.kluweronline.com/content/getfile/ 5058/18/7/fulltext.pdf;https://link.springer.com/article/10.1023/A%3A1019108002140 188. Stenger, F., Hagmann, M., Schwing, J.: An algorithm for the electromagnetic scattering due to an axially symmetric body with an impedance boundary condition. J. Math. Anal. Appl. 78(2), 531–573 (1980). https://doi.org/10.1016/0022-247X(80)90165-1. https://www.sciencedirect. com/science/article/pii/0022247X80901651 189. Stenger, F., Hall, R.B.: Sinc methods for computing solutions to viscoelastic and related problems. Can. Appl. Math. Q. 21(1), 95–120 (2013) 190. Stenger, F., Johnson, S.A.: Ultrasonic transmission tomography based on the inversion of the Helmholtz wave equation for plane and spherical wave insonification. Appl. Math. Notes 4(3–4), 102–127 (1979) 191. Stenger, F., Keyes, B., O’Reilly, M., Parker, K.: Sinc indefinite integration and initial value problems. In: Espelid and Genz [28], pp. 281–282. https://doi.org/10.1007/978-94-0112646-5_21 192. Stenger, F., Naghsh-Nilchi, A.R., Niebsch, J., Ramlau, R.: Sampling methods for approximate solution of PDE. In: Nashed and Scherzer [76], pp. 199–249. https://doi.org/10.1090/conm/ 313/05377 193. Stenger, F., O’Reilly, M.J.: Computing solutions to medical problems via Sinc convolution. IEEE Trans. Autom. Control 43(6), 843–848 (1998). https://doi.org/10.1109/9.679023 194. 
Stenger, F., Petrick, W., Rotsides, Z.: Algorithm for computing electromagnetic scattered field from an axially-symmetric body with an impedance boundary condition. SIAM Rev. 18(4), 828–829 (1976). https://doi.org/10.1137/1018136. https://epubs.siam.org/doi/abs/10.1137/1018136 195. Stenger, F., Rosenberg, I.: A lower bound on the angles of triangles constructed by bisecting the longest side. Math. Comput. 29(130), 390–395 (1975). https://doi.org/10.1090/S0025-5718-1975-0375068-5. http://www.ams.org/journals/mcom/1975-29-130/S0025-5718-1975-0375068-5/ 196. Stenger, F., Schmidtlein, R.: Conformal maps via Sinc methods. In: Papamichael et al. [79], pp. 505–549 197. Stenger, F., Tucker, D., Baumann, G.: Navier–Stokes Equations on R³ × [0, T]. Springer, Berlin, Germany / Heidelberg, Germany / London, UK / etc. (2016). https://doi.org/10.1007/978-3-319-27526-0. http://www.springer.com/us/book/9783319275246 198. Stenger, F., Youssef, M., Niebsch, J.: Improved approximation via use of transformations. In: Shen and Zayed [90], chap. 2, pp. 25–49. https://doi.org/10.1007/978-1-4614-4145-8_2 199. Stynes, M.: A simplification of Stenger's topological degree formula. Numerische Mathematik 33(2), 147–155 (1979). https://doi.org/10.1007/BF01399550. https://link.springer.com/article/10.1007/BF01399550. See [126] 200. Stynes, M.: A remark on Stenger's topological degree algorithm. Proc. R. Ir. Acad. Sect. A Math. Phys. Sci. 82(2), 163–166 (1982). https://www.jstor.org/stable/20489150 201. Wang, K.Y. (ed.): Acoustical Imaging: Visualization and Characterization, Acoustical Imaging, vol. 9. Springer, Berlin, Germany / Heidelberg, Germany / London, UK / etc. (1980). https://doi.org/10.1007/978-1-4684-3755-3. http://www.springerlink.com/content/978-1-4684-3755-3 202. Wong, R.R. (ed.): Asymptotic and Computational Analysis: Conference in Honor of Frank W. J. Olver's 65th Birthday. Lecture Notes in Pure and Applied Mathematics, vol. 124.
Marcel Dekker, New York, NY, USA (1990). http://www.loc.gov/catdir/enhancements/ fy0647/90002810-d.html


203. Zahar, R.V.M. (ed.): Approximation and Computation: A Festschrift in Honor of Walter Gautschi: Proceedings of the Purdue Conference, December 2–5, 1993. International Series of Numerical Mathematics, vol. 119. Birkhäuser, Cambridge, MA, USA; Berlin, Germany; Basel, Switzerland (1994)
204. Zayed, A.I., Schmeisser, G. (eds.): New Perspectives on Approximation and Sampling Theory: Festschrift in Honor of Paul Butzer's 85th Birthday. Springer, Berlin, Germany / Heidelberg, Germany / London, UK / etc. (2014). https://doi.org/10.1007/978-3-319-08801-3

Index

A
Adomian's decomposition method, 101
Affine subspace, 170, 171
Airy function, 74, 312, 316, 317
Algorithm, 48, 61, 127, 132, 142, 369–371, 376, 377, 379, 380
Analytical solution, 140, 148
Analytic functions, 36, 60, 176, 234, 256, 280, 341–343, 345
Analytic on a strip, 107
Approximated equation, 45
Approximated function, 267
Approximation formula, 244, 347, 350
Approximations, 6, 35, 37, 45, 56, 58–61, 65, 148, 154, 157, 158, 234, 256, 354, 355
Arc, 60

B
Bagley-Torvik equation, 117
Baire measure, 307
Ball, 37, 167, 169
Banach space, 36
Banach-Steinhaus, 46
Barycentric, 333, 350, 351, 358
Basis functions, 37, 43, 52, 61, 68, 102, 154
Bell polynomials, 306
Blaschke, 346, 352, 363
Blasius, 158, 160
Borel measures, 348
Boundary conditions, 36, 57–59, 62, 69, 92, 152, 156, 157, 263, 284
Boundary layer, 147, 148, 160
Boundary value problem, 35, 147, 148, 151, 152, 155, 263, 265, 292
Bounded set, 203
Bounding the error, 39
Bromwich, 241

C
Caputo, 99–101, 105
Cardinal representation, 63
Cartesian products, 163
Cauchy integrals, 325
Cauchy sequences, 234
Characteristic determinant, 260, 263
Characteristic function, 279, 283, 287, 288, 292, 293
Chebyshev, 61, 236
Christoffel-Darboux kernel, 300
Closure, 182, 230
Collectively compact operator, 35, 46
Collocation methods, 40, 48, 62, 105, 106, 113–118
Compact operator, 35, 46, 308
Compact set, 219
Complete monotonicity, 300, 311
Complex domain, 342
Complexification, 163
Complex plane, 60, 256, 315
Complex-valued function, 289
Confinement potential, 59
Conformal maps, 60, 61, 108, 163, 178, 187
Connected, 36, 38, 39, 52, 60, 165, 166, 173, 178, 235, 238
Continuous function, 71, 170
Continuous operator, 46
Contour, 79, 190
Convergence, 6, 37, 38, 42, 44, 46, 49, 50, 60, 67, 69, 72, 99, 101, 118, 155, 247, 259, 280, 281, 286, 290, 291, 294, 324, 338
Converges uniformly, 165
Convexity, 174, 185
Convex optimization problems, 342
Convolution, 42, 45, 60, 64, 66, 67, 231, 232, 244, 250
Coordinate functions, 166–169, 176

D
Decay condition, 6, 8, 9, 180, 182, 188, 203
Decomposition, 101, 186, 250
Definite convolution, 42, 250
Derivative, 42, 46, 56, 58, 95, 100, 103, 179, 234, 259, 266, 274, 275, 282, 285–287, 299, 332
Diagonalizable, 45
Di Bruno's formula, 103
Differential equations, 99, 101, 239
Dirac measure, 348
Dirichlet conditions, 57
Discrete nonlinear operators, 45
Domain, 36, 38, 39, 50, 60, 102, 107, 108, 147, 148, 154, 160, 164, 165, 178, 187, 238, 342, 349
Double-exponential, 99, 101–103, 341, 342

E
Echelon matrix, 369
Effective potential, 58
Eigenfunctions, 55–59, 61, 63, 65, 67–69, 71, 73, 75, 77–79, 81, 83, 85, 87–93, 95, 97
Eigenvalues, 55–59, 61–77, 79, 81–85, 87–95, 97, 229, 236, 240, 242, 247, 251, 255–260, 263–268, 271–275, 277, 279, 280, 282–285, 287–289, 292–294, 299, 301, 302, 305, 307
Eigenvectors, 236, 302
Elementary function, 311
Energy, 79, 83, 84, 90, 91, 348–350, 353, 354, 358, 359
Entire function, 64, 267, 292, 304, 306, 313, 315, 316, 318, 319
Error analysis, 6, 101, 112, 257, 264, 265, 272–274, 282–284, 287, 288
Error estimate, 343, 347, 350, 359, 366
Error operator, 346
Euler–Maclaurin formula, 324–326
Exponential convergence, 38, 99
Exponential transformation, 99, 102

F
Factoring a matrix, 369
Falkner-Skan, 147–151, 153, 155, 157, 159–161
Finite differences, 332, 333, 338
Finite difference scheme, 148
Fokker–Planck models, 239
Fornberg's method, 332
Fourier transform, 227–229, 240, 242
Fractional derivatives, 96, 100, 101, 103, 105, 106
Fractional differential equations, 99, 101
Fractional integrals, 101
Fractional order, 58, 60, 69, 70, 73, 79, 101
Fractional partial differential equations, 101
Fredholm, 35–37, 39, 41, 43, 45, 47–53, 299–301, 303, 305, 307, 309–311, 313, 315, 317, 319, 321
Function, 4, 5, 9–11, 17, 35, 37–39, 41–44, 47, 52, 57–59, 61–65, 68–77, 84–95, 100, 102–104, 106, 107, 112, 115, 119, 121, 133, 142, 152, 153, 155, 164, 165, 167, 170, 172, 176, 178, 182, 184, 186, 188, 191, 193, 199, 200, 203, 206, 214, 215, 218, 228, 234, 235, 238, 239, 242, 245, 247, 260, 267, 268, 279, 281–283, 286–293, 299, 300, 304, 306, 311–313, 315–319, 324, 330, 334, 335, 337, 341–346, 348, 349, 355–357, 360–362, 366
Functions with jumps, 324

G
Galerkin, 36, 51, 101
Gaussian elimination, 369
Gaussian function, 290
Gaussian integration, 238
Generalized sampling, 289, 290
Geometrically isolated solution, 37
Gibbs phenomenon, 336
Gluons, 83
Green potential, 348, 349
Green's functions, 228
Gronwall's lemma, 113
Ground state, 71–75, 86, 88, 90–93

H
Hammerstein, 35, 36
Hard boundary conditions, 59
Hardy space, 153, 341, 345
Harmonic oscillator, 56, 58, 69, 74, 75, 82
Heat equation, 228
Hermite-Gauss operator, 285–288, 290
Hermite polynomial, 235
Hermitian, 299, 300, 302, 380
High accuracy, 44
Hilbert, 229, 230, 237, 300, 305
Holomorphic, 4, 50, 52, 163–168, 170, 176–178, 184, 185, 188, 199, 200, 218
Hopf, 249

I
Indefinite integration, 228, 232, 247
Injective, 167, 168, 178, 185, 186
Inner product, 230
Integral, 11, 35–37, 39–41, 43–53, 56, 58, 59, 65, 66, 70, 91, 93, 96, 101, 103, 112, 142, 150, 151, 218, 223, 227, 228, 232, 237, 239, 247, 249–251, 292, 299, 300, 309, 325, 327, 351, 357
Integration operators, 227, 229, 231, 233, 235, 237, 239, 241, 243, 245, 247, 249, 251, 253
Interpolant, 323–325, 334, 337
Interpolation operator, 43, 49
Interpolation property, 338
Inverse functions, 102
Inverse heat problem, 3, 15

J
Jacobi, 101, 235
Jacobian, 174, 177

K
Kernel, 46, 47, 49, 101, 300, 307, 348

L
Lagrange interpolation, 235, 354
Laminar boundary layer, 147, 148
Laplace transform, 65, 228, 234, 241, 242
Laplace transform inversion, 241
Lebesgue measures, 60, 61, 176, 177
Legendre polynomial, 227, 235, 239, 240
Leray-Schauder theorem, 41
Limit, 123, 129, 136–139, 173, 179, 268, 272, 274, 277, 364, 367
Linear operators, 43, 47
Lipschitz, 49, 50, 113, 120, 214, 216
Localization, 6, 83, 84, 86

M
Mapping, 38
Minimization, 341, 343, 344, 349, 353
Minimum error, 346, 351, 358
Modified Bessel function, 316, 319
Monte–Carlo, 227
Moore-Penrose pseudoinverse, 369
Multi-order fractional differential equations, 101
Multi-term initial value problem, 119, 120

N
Navier–Stokes, 228
Nekrasov equation, 36
Newton iteration method, 50
Nonsingular, 166–169, 381
Novel formula, 233
Numerical integration, 37, 179, 343–345, 351–354, 358, 359, 366
Numerical methods, 36, 37, 52, 113, 140, 341, 342
Numerical ranges, 230
Numerical schemes, 35, 163, 164

O
Operators, 3, 35–37, 40–49, 52, 55, 59, 60, 63, 64, 68, 96, 100, 199, 201, 227–235, 237, 239, 241, 243, 245, 247, 249, 251, 253, 256, 261, 265, 267, 268, 281–283, 285–288, 290–292, 299, 300, 307–309, 346, 352
Optimal approximation formula, 347
Optimal control, 242
Optimality, 344, 345
Optimal sampling, 355
Orthogonal, 36, 234–236, 299, 300

P
Parallelepiped, 177, 178
Permutation matrix, 369
Petrov-Galerkin solutions, 36
Picard iteration, 248
Piecewise holomorphic, 167
Pointwise convergent, 46, 47
Polyhedra, 163–165
Polynomial approximation, 238, 242, 251
Positively oriented, 190
Principal value integral, 325
Pseudo-spectral differentiation, 101

Q
Quadrature, 39–42, 46, 102–104, 121, 163, 164, 178, 180, 181, 323, 324
Quantized energy, 79
Quantum mechanics, 56, 57, 68, 79, 95
Quarkonium, 56, 68, 82, 83, 85, 95

R
Ramanujan entire function, 313
Range, 61, 77, 86, 88, 90, 95, 148, 230, 352
Region, 164, 342, 344
Regular chart, 176–178, 181
Regularization, 267–269
Regular submanifold, 165, 166, 168, 169, 174, 175, 177
Riemann-Liouville, 99–101, 103–105, 114
Riemann sums, 324, 326
Riesz-Feller derivative, 58
Riesz-Feller fractional derivative, 56, 95
Riesz-Feller operators, 64, 68, 96
Riesz-Feller potential, 57–59, 95
Runge's, 333

S
Sampling error, 355, 357
Sampling points, 40, 341–344, 349, 351–353, 355, 358, 359
Schrödinger equation, 228
Second-order operator, 265
Self-adjoint operator, 308
Seminorms, 187
Shifted Sinc, 61, 102
Simplicial complex, 165, 167, 170
Simply connected, 36, 38, 39, 60, 178, 238
Sinc approximation, 37–39, 55, 60, 61, 63, 154, 155, 164, 202
Sinc basis, 61, 68, 103, 163, 182
Sinc-collocation, 48–50, 52, 147, 148, 152, 155, 160
Sinc-collocation method, 62, 105
Sinc-convolution methods, 35, 50
Sinc functions, 4, 11, 37, 52, 59, 102, 140, 152, 163, 239, 257, 324
Sinc-Gaussian interpolation, 10
Sinc-Gaussian operator, 282, 283, 290–292
Sinc-Gaussian sampling, 285, 290
Sinc interpolant, 323–325, 334, 337
Sinc interpolation, 35, 48, 102, 153, 155, 341–343, 355, 358
Sinc methods, 36, 60, 99, 101, 119, 163–165, 234, 265, 266, 273, 285, 294, 323, 341, 342
Sinc nodes, 158
Sinc-Nyström, 35, 41, 46, 48, 50, 51
Sinc points, 41, 59, 60, 69, 70, 72, 73, 76, 102
Sinc quadrature, 39, 40, 42, 102, 103, 164
Single exponential, 40, 99, 102, 103
Singularities, 42, 44, 101, 164, 242, 251, 355
Singular operators, 36
Singular problems, 257
Skewness parameter, 69, 79, 95
Spectral classification, 256
Spectrum, 64, 95, 230, 232
Steklov's theorem, 46
Stenger-Laplace transform, 66
Sturm-Liouville problem, 55, 57, 62, 264
Submanifold, 165, 166, 168, 169, 174–178

T
Tau method, 101
Toeplitz, 12
Transition rates, 81
Trapezoidal formulas, 357
Trapezoidal matrix, 369
Trapezoidal rule, 38, 341, 343, 344
Truncation error, 8, 258, 265, 269, 355, 357
Tunneling effect, 89

U
Uniformly bounded, 44, 47, 165
Urysohn, 35, 36, 51

V
Vertices, 170, 184, 185
Volterra, 37, 101, 112, 261

W
Wave function, 57–59, 69, 73, 95
Weakly singular, 35–37, 44, 49, 52, 101
Weak singularity, 101
Weierstrass factor product, 313
Weight function, 235, 299, 300, 343–345, 355
Wiener, 249
Wiener–Hopf integral, 227, 228, 249
WKB approximations, 58
WKS sampling, 257, 264
Worst case error, 343

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021. G. Baumann (ed.), New Sinc Methods of Numerical Analysis, Trends in Mathematics, https://doi.org/10.1007/978-3-030-49716-3