Difference Matrices for ODE and PDE: A MATLAB Companion (ISBN 9783031119996, 9783031120008)


English, 212 pages, 2023


Table of contents:
Preface
Acknowledgements
Contents
Acronyms
1 Introduction
1.1 A Summary of the Differential Equations We Will Consider
1.2 The Use of MATLAB® and the Student Exercises
1.2.1 Using MATLAB®'s Debugger
1.2.2 Line Numbering in MATLAB® Examples
1.2.3 Reproducing Codes and Exercises
1.2.4 Guidelines to the Homework Exercises
1.3 The Organization of This Text
2 Review of Elementary Numerical Methods and MATLAB®
2.1 Introduction to Basic MATLAB® at the Command Line
2.1.1 MATLAB®: sum, prod, max, min, abs, norm, linspace, for loop, eigs, and sort
2.2 Runge–Kutta Method for Initial Value Problems
2.2.1 The Shooting Method for ODE BVP—an IVP Approach
2.2.2 Comparison of Approximate Solutions to Exact Solutions
2.2.3 MATLAB®: Ones, Zeros, the `:' Iterator, the @ Syntax for Defining Inline Functions, Subfunctions
2.2.4 Rows Versus Columns Part I, Diagnosing Dimension and Size Errors
2.3 Numerical Differentiation and Integration
2.3.1 Higher Order Differences and Symbolic Computation
2.3.2 MATLAB®: kron, spdiags, `backslash' (mldivide), tic, and toc
2.4 Newton's Method for Vector Fields
2.4.1 MATLAB®: if, else, while, fprintf, meshgrid, surf, reshape, find, single indexing, and rows versus columns Part II
2.5 Cubic Spline Solver
2.5.1 Making Animations
2.6 Theory: ODE, Systems, Newton's Method, IVP Solvers, and Difference Formulas
2.6.1 Some ODE Theory and Techniques
2.6.2 Convergence and Order of Newton's Method
2.6.3 First-Order IVP Numerical Solvers: Euler's and Runge–Kutta's
2.6.4 Difference Formulas and Orders of Approximation
Exercises
3 Ordinary Differential Equations
3.1 Second-Order Semilinear Elliptic Boundary Value Problems
Exercises
3.2 Linear Ordinary Second-Order BVP
Exercises
3.3 Eigenvalues of -D2 and Fourier Series
Exercises
3.4 Enforcing Zero Dirichlet, Zero Neumann, and Periodic Boundary Conditions Using Either Point or Cell Grid
Exercises
3.5 First-Order Linear Solvers
Exercises
3.6 Systems of First-Order Linear Equations for Second-Order IVP
Exercises
3.7 First-Order Nonlinear IVP
Exercises
3.8 A Practical Guide to Fourier Series
4 Partial Differential Equations
4.1 The Laplacian on the Unit Square
4.2 Creating the Sparse Laplacian Matrix D2 and Eigenvalues
Exercises
4.3 Semilinear Elliptic BVP on the Square
Exercises
4.4 Laplace's Equation on the Square
Exercises
4.5 The Heat Equation
4.5.1 Explicit Method
4.5.2 Implicit Method
4.5.3 Explicit–Implicit Method
4.5.4 The Method of Lines
4.5.5 Fourier Expansion with Numerical Integration
4.5.6 Block Matrix Systems
Exercises
4.6 The Wave Equation
4.6.1 The Method of Lines
4.6.2 A Good Explicit Method
4.6.3 Block Matrix Systems and D'Alembert Matrices
Exercises
4.7 Tricomi's Equation
Exercises
4.8 General Regions
4.8.1 The Laplacian on the Cube
4.8.2 The Laplacian on the Disk
4.8.3 Accurate Eigenvalues of the Laplacian on Disk, Annulus, and Sections
4.8.4 The Laplace–Beltrami Operator on a Spherical Section
4.8.5 A General Region Code
Exercises
4.9 First-Order PDE and the Method of Characteristics
Exercises
4.10 Theory: Separation of Variables for PDE on Rectangular and Polar Regions
4.10.1 Eigenfunctions of the Laplacian
4.10.2 Laplace's Equation
4.10.3 The Heat Equation
4.10.4 The Wave Equation
5 Advanced Topics in Semilinear Elliptic BVP
5.1 Branch Following and Bifurcation Detection
5.1.1 The Tangent Newton Method for Branch Following
5.1.2 The Secant Method for Bifurcation Detection
5.1.3 Secondary Bifurcations and Branch Switching
Exercises
5.2 Mountain Pass and Modified Mountain Pass Algorithms for Semilinear BVP
5.2.1 The MPA
5.2.2 The MMPA
Exercises
5.3 The p-Laplacian
Exercises
Appendix References


John M. Neuberger

Difference Matrices for ODE and PDE: A MATLAB® Companion

John M. Neuberger
Department of Mathematics and Statistics, Northern Arizona University, Flagstaff, AZ, USA

ISBN 978-3-031-11999-6
ISBN 978-3-031-12000-8 (eBook)
https://doi.org/10.1007/978-3-031-12000-8

MATLAB is a registered trademark of The MathWorks, Inc., Natick, MA, USA

Mathematics Subject Classification: 65-01, 35-01, 34-01, 65L, 65M, 65M06, 65M20, 65N, 65N06, 65N25, 35F, 35G, 35A24, 35J20

© Springer Nature Switzerland AG 2023

This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG. The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland.

Preface

This text will be useful for the four different audiences listed below. It is expected that all readers will have knowledge of basic calculus, linear algebra, and ordinary differential equations, and that the successful student will either already know elementary partial differential equations or be concurrently learning that subject. The material is intended to be accessible to those without expertise in MATLAB®, although a little prior experience with programming is probably required.

1. This text serves as a supplement for the student in an introductory partial differential equations course. A selection of the included exercises can be assigned as projects throughout the semester. Through the use of this text, the student will develop the skills to run simulations corresponding to the primarily theoretical course material covered by the instructor.

2. These notes work well as a standalone graduate course text in introductory scientific computing for partial differential equations. With prerequisite knowledge of ordinary and partial differential equations and elementary numerical analysis, most of the material can be covered and many of the exercises assigned in a one-semester course. Some of the harder exercises make substantial projects, and relate to topics from the other graduate mathematics courses that graduate students typically take, e.g., differential equations and topics in nonlinear functional analysis.

3. Established researchers in theoretical partial differential equations may find these notes useful as well, particularly as an introductory guide for their research students. Those unfamiliar with MATLAB can use the included material as a reference in quickly developing their own applications in that language. A mathematician who is new to the practical implementation of methods for scientific computation in general can, with relative ease, by working through a selection of exercises, learn how to implement and execute numerical simulations of differential equations in MATLAB. These notes can serve as a practical guide in independent study, undergraduate or graduate research experiences, or for help simulating solutions to specific thesis- or dissertation-related experiments. The author hopes that the ease and brevity with which the notes provide solutions to fairly significant problems will serve as inspiration.

4. The text can serve as a supplement for the instructor teaching any course in differential equations. Many of the examples can be easily implemented and the resulting simulations demonstrated by the instructor. If the course has a numerical component, a few exercises of suitable difficulty can be assigned as student projects. Practical assistance in implementing algorithms in MATLAB can be found in this text.

Scientists and engineers have valid motivations to become proficient at implementing numerical algorithms for solving PDE. The text's emphasis on enforcing boundary conditions, eigenfunctions, and general regions will be useful as an introduction to their advanced applications. For the mathematician, accomplished or student, a more powerful benefit can be the tangible, visual realization of the objects of calculus, differential equations, and linear algebra. The high-level programs are developed by the reader from earlier programs and short fragments of relatable code. The resulting simulations are demonstrations of the properties of the underlying mathematical objects, where vectors represent functions, matrix operations represent differentiation and integration, and calculations such as solving linear systems or finding eigenvalues are easily accomplished with one line of code. Even without much prior knowledge of programming or MATLAB, by working through a selection of exercises in this text, the reader will be able to create working programs that simulate many of the classic problems from PDE, while gaining an understanding of the underlying fundamental mathematical principles.

The approach of the text is to first review MATLAB and a small selection of techniques from elementary numerical analysis, and then introduce difference matrices in the context of ordinary differential equations. We then apply these ideas to PDE, including topics from the heat, wave, and Laplace equations, eigenvalue problems, and semilinear boundary value problems. We enforce boundary conditions on regions including the interval, square, disk, and cube, as well as more general domains. We push the general notion that linear problems can be expressed as a single linear system, while many nonlinear problems can be solved via Newton's method.

Flagstaff, USA

John M. Neuberger

Acknowledgements

I would first and foremost like to acknowledge my father, J. W. Neuberger. He taught me much of what I know about differential equations and their numerical solution. His belief in my pedagogical approach and in the practical usefulness of my course notes was my main motivation in publishing them. I also would like to thank my Ph.D. advisor Alfonso Castro, who introduced me to nonlinear functional analysis, particularly as it pertains to semilinear elliptic boundary value problems. The Chapter 5 content of this text reflects a small slice of my lifetime pursuit of understanding and computationally simulating the equations of that subject.

I have been extremely lucky in having a wonderfully supportive family. My wife Dina has been the main source of inspiration over my career as a professor of mathematics. My son Nick has an interest in mathematics as well, and has been insistent that I refine the notes so that others could use them.

I would also like to thank my 20-plus years of undergraduate and graduate partial differential equations and numerical analysis students. Their feedback was invaluable in the process of developing and refining my notes. The much-improved text contains numerous clarifications and hints as a direct result of their suggestions, and the exercises are more robust and complete thanks to their countless hours of coding and grinding out project reports. I learned a few MATLAB® tricks from them too. A special thanks goes to my former and current students Tyler Diggans, Ryan Kelly, and Ian Williams, who each took a turn at proofreading and made some valuable editorial suggestions from the student perspective.



Acronyms

Some frequently used acronyms.

ODE: Ordinary differential equation, ordinary differential equations.
PDE: Partial differential equation, partial differential equations.
BC: Boundary condition.
IVP: Initial value problem.
BVP: Boundary value problem.
0-D BC: Zero Dirichlet boundary condition, i.e., u = 0 on the boundary. For ODE, this is y(a) = 0 and/or y(b) = 0.
0-N BC: Zero Neumann boundary condition, i.e., ∂u/∂η = 0 on the boundary, for the outward unit normal vector η. For ODE, this is y′(a) = 0 and/or y′(b) = 0.
MOL: Method of lines. It refers to discretizing a PDE with differences in all but one variable, usually time, and then solving the resulting system of ODE by any convenient means.
RHS: General term for the right-hand side of an equation, most often a known function or vector defining a linear system.
LHS: General term for the left-hand side of an equation.
RK: Runge–Kutta, an O((Δx)^4) method for solving first-order IVP.
SOV: Separation of variables; it refers to two distinct techniques. 1. For ODE, the equation is put in the form f(y(x))y′(x) = g(x), hence both sides can be integrated with respect to x, yielding an implicit solution F(y) = G(x) + c. 2. For a linear PDE, the substitution u(x1, x2) = p(x1)q(x2) gives F(p(x1)) = c = G(q(x2)), whence the BC can be used to convert the side with boundary values to an eigenvalue problem; the other side is then solved as a family parameterized by the eigenvalues to get a general solution.
HW: Homework. A necessary part of the learning process.
CCN: Usually refers to the minimal-energy sign-changing-exactly-once solution to Δu + f(u) = 0, for a variety of assumptions on f including superlinearity. The letters refer to the authors' names in the article [24].
MPA: Mountain pass algorithm, celebrated result of [25].
MMPA: Modified mountain pass algorithm. It extends the MPA to find the CCN solution.

Chapter 1

Introduction

Summary In this chapter, we summarize the key features and content of the text. We explain the choice of MATLAB® as a programming language, and then briefly emphasize the usefulness of the MATLAB debugger. We discuss the process of converting code fragments and hints from the text into working programs, and give some suggestions to instructors on the assignment of the homework problems which require those working programs. We give more detail concerning each of the specific included topics.

1.1 A Summary of the Differential Equations We Will Consider

Differential equations is a vast subject. We concentrate here primarily on the most fundamental equations encountered in introductory ODE and PDE courses. Many of the included computational homework exercises correspond to typical analytical homework exercises from those courses.

In particular, for ODE we present algorithms for approximating solutions to first- and second-order linear and nonlinear equations and systems of equations, for initial and boundary value problems. Homework exercises include examples requiring separation of variables, integrating factors, constant-coefficient second-order theory, and other standard techniques from an introductory course in ODE. We discuss the enforcement of Dirichlet, Neumann, and periodic boundary conditions. Eigenvalue problems and eigenfunction expansions are featured.

For PDE, we emphasize the eigenfunctions of the Laplacian, Laplace's equation, the heat and wave equations, and nonlinear elliptic problems. We consider the classic regions of the square, disk, and annulus, as well as other more general regions, with various IC/BC. By a variety of methods, we show how to numerically simulate the separation-of-variables solutions to typical homework exercises from an introductory PDE course. For completeness, we include an example code for first-order PDE and the method of characteristics. Included is a small selection of more advanced equations, intended primarily to demonstrate the wide range of possible applications for the text's methods. In particular, we include some experiments with Tricomi's equation, bifurcation for semilinear elliptic BVP, and eigenvalues of the p-Laplacian.

1.2 The Use of MATLAB® and the Student Exercises

The goal of these notes is to make easily accessible the general ability of the reader to use first- and second-difference matrices to set up and solve linear and nonlinear systems which approximate ordinary and partial differential equations. The scope of differential equations is immense, and so this brief exposition will only give the details for the most fundamental types of classic problems listed above. It is our belief that the ease with which the included problems are solved will be encouraging to the reader with more complicated applications in mind, and that the developed arsenal of MATLAB commands and techniques will be useful in their attempts to follow through with their own successful investigations.

The value of the presented methodology for solving the stated types of problems is independent of the choice of programming platform. Generally, we solve linear differential equations with a single sparse-matrix linear system solve, and we solve nonlinear differential equations with Newton's method. The included algorithms could certainly be implemented in languages other than MATLAB, but why would you want to? The developed MATLAB codes are short and easily related to the underlying equations from mathematics, and the heavy lifting is generally done by built-in compiled commands, e.g., the indispensable '\', MATLAB's built-in efficient linear solver; eigs, its nearly unbeatable eigenvalue solver; and ode45, one of its generic ordinary differential equation (ODE) system solvers. One would have to work very hard to improve on the overall speed or any other facets of these built-in functions' implementations in another setting. The sparse-matrix capability of MATLAB means that fairly huge problems can be attacked with a personal computer, and that with not too much more sophistication (but beyond the scope of this document), implementations on supercomputing clusters could be effectively used to attack the most serious and computing-intensive of applications.

In all fairness, it would not be too hard to implement the algorithms presented here in Mathematica, and the experienced C/C++ programmer would not find it difficult to build a library of tools, for example from Netlib [21]. Mathematica is probably a better choice, in fact, for intensive symbolic computations that might be needed for pre- and post-processing in more complicated applications. The intermediate programmer would probably find it not too difficult to translate the code included in this text to Sage. Python could certainly be used, once some of the perhaps missing high-level mathematical functionality built in to MATLAB was otherwise accessed via appropriate libraries of functions. So, many languages would provide a fine platform for writing short, easy programs to solve, by our methods, the example problems included in this document, but we argue that none of these other choices easily yield significant performance increases or are easier to understand or implement than MATLAB. As an educator who regularly teaches sections of numerical analysis and other applied mathematics courses with a computational component to engineering and science majors, the author finds that this student population is currently likely to already know MATLAB and to be using it in their other courses, and those that don't already know it seem to pick it up pretty quickly.

Chapter 2 contains several entire MATLAB programs which demonstrate some useful coding techniques on some familiar problems from calculus and ordinary differential equations. The exercises ask the reader to modify and test these programs. Chapter 3 starts with a complete program, cut into segments and presented in between mathematical definitions and explanations of the algorithm and syntax, and Chaps. 4 and 5 extend the ideas to PDE. Each subsequent section includes enough MATLAB code fragments that it should be possible for the relatively inexperienced programmer to put together working programs demonstrating the relevant methods. Complete programs are only occasionally presented. Each section comes with a few exercises for the reader to further their skills, suggesting ways to apply, generalize, or combine the presented ideas to solve related problems or perform different comparisons. A little familiarity with introductory numerical analysis would certainly be helpful, but the attempt has been made to make the information fairly self-contained in that brief descriptions of some well-known and important elementary numerical methods are included.
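As a small taste of that one-line heavy lifting, the following fragment (with arbitrary illustrative data of our own, not an example from the text) solves a linear system, computes an eigenvalue, and integrates an IVP:

    A = [4 1; 1 3]; b = [1; 2];
    x = A\b;                                % linear system solve with '\'
    lam = eigs(sparse(A), 1);               % largest-magnitude eigenvalue via eigs
    [t, y] = ode45(@(t,y) -2*y, [0 1], 1);  % the IVP y' = -2y, y(0) = 1, via ode45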

1.2.1 Using MATLAB®'s Debugger

It is our opinion that the debugger is an essential tool for developing and understanding programs, and that it should be used early and often by the beginner and expert alike. Getting a red 'dimension mismatch' error? Click on the hyphen to the right of a line number on the left side of the editor. If a red bullet appears, the program will stop the first time it hits that line. Then inspect the variables in question. Placing your cursor over them will at least reveal their dimension. Perhaps the matrix should be transposed before the operation? If the data involved is not too huge, you can open a variable editor and inspect each element. Right-clicking on the breakpoint is one way to select the conditional option, e.g., i==23 will stop on the line if i = 23. The user can descend into a subfunction for inspection of that function's local variables, or skip over the function to the next line in the parent function. Clear breakpoints, set new ones, continue to the next, checking the dimension and, if need be, the contents of any variables whose value is in question.

You can use the debugger to learn MATLAB. For a small n value, run an example program, stepping through the code line by line. Check some calculations by hand and compare. When a loop gets tedious, put a breakpoint after it or use a conditional breakpoint to skip ahead in the action. MATLAB's way of generating graphics can be confusing to the uninitiated. Put a breakpoint on some plot lines and inspect the independent and dependent variables. Whenever a program is paused in the debugger, the command line can be used to examine contents or perform small checking calculations. The 'execute selection' feature can often be used at that time to make a smaller portion of the code run given previously stored values of relevant variables and functions.

The first time one attempts to execute a program with a significant number of complicated, new lines of code, one expects coding errors. A good approach is to step through the program in the debugger either line by line or section by section, generally inspecting the inputs and outputs of each line or related group of lines, e.g., a loop, verifying that dimensions and values appear consistent with what is predicted. Minor mistakes can be corrected as they are encountered, while one may need to pause to consider the best fix for deeper errors. Real learning can occur when errors are encountered that force the coder to determine whether the steps of the algorithm and corresponding variables are properly representing the underlying mathematical constructs.
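The breakpoints described above can also be set from the command line with the dbstop family of commands; a minimal sketch, assuming a script named myprog.m (a hypothetical file name, not one from the text):

    dbstop in myprog at 23           % stop the first time line 23 is reached
    dbstop in myprog at 23 if i==23  % the conditional breakpoint discussed above
    myprog                           % run; execution pauses at the breakpoint
    dbstep                           % advance one line
    dbstep in                        % descend into a subfunction call
    dbcont                           % continue to the next breakpoint
    dbclear all                      % clear all breakpoints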

1.2.2 Line Numbering in MATLAB® Examples

Where it makes sense, the author includes line numbers to the left of the included lines of MATLAB code. In particular, when breaking a program or program fragment into blocks, consecutive line numbering is resumed. Many new examples only require minor modifications of previously developed MATLAB programs. In that case, the line numbering may start at some integer larger than one, indicating that some lines are missing and presumed to be understood based on prior discussion. Of course, differing styles and the addition of comments, blank lines, options, extra loops, or more descriptive output will in practice change the line numbering and format of the code. Our approach in this document is somewhat minimalistic, generally making an effort to set up and solve the problems with few extra bells and whistles. If cutting and pasting from a pdf of this text, it may be necessary to delete the line numbers where present; take the opportunity to reflect on the purpose of each line as you remove its number. If you are typing the example programs into the MATLAB editor by hand, it is wise to do so a block of a few lines at a time, executing (or better yet, stepping through via the debugger) and verifying the resulting state of the partial program before adding another small block.

1.2.3 Reproducing Codes and Exercises

If this document is being used as a textbook, the author hopes that the student will write their own MATLAB programs to recreate the results from many of the included examples. The entire program is included in the text for some of the examples, and enough code fragments and hints are provided that it should be possible in subsequent sections to reproduce the results without external resources. The reader might think of it as a bit of a game to see if they can do so with line numbering that agrees with that provided in the included code fragments, i.e., can you initialize and define the various scalars, vectors, and sparse matrices in five lines as the author did? The included exercises are for the most part intended to be fairly reasonable extensions or combinations of the presented ideas, again doable without outside resources.¹ The progression of examples and exercises is designed to teach MATLAB and demonstrate the utility of difference matrices, at each step moving the reader a little closer toward being able to independently investigate differential equations, perhaps equations of their own choosing which are more complicated and beyond the scope of this text.

Some sections contain short, formally stated examples, typically involving pencil-and-paper calculations to approximate a solution for some reasonable discretization or a small number of iterations, and to compute for comparison the corresponding exact solution by some known technique. Some of the un-starred (non-programming) exercises are similar to these examples, and may be useful in preparing for in-class assessments. ;-)

¹ Some course instructors may consider that extensive sharing of code or student downloading of existing code from the Internet accomplishes little toward course goals, and in some cases may consider such actions to be unethical.

1.2.4 Guidelines to the Homework Exercises

Un-starred problems, i.e., problems which are not indicated by one or more '*', generally do not require programming. These problems are not intended for programming HW assignments, but they may be useful in preparation for in-class assessments. Typically, these problems require a small pencil-and-paper calculation, e.g., for three divisions and one iteration, or involve a theoretical derivation. If worked out by hand prior to the writing and subsequent debugging of MATLAB code, they may serve to aid in the successful implementation and understanding of the underlying algorithms.

Starred problems typically require writing MATLAB code in order to answer the questions. Problems indicated with a single '*' are generally the easiest and have shorter expected student solution times. Some of these problems only require small modifications to provided listings of MATLAB code. Problems marked with two or three '*' generally have increased complexity, require more student programming, and have a correspondingly greater anticipated time required to solve them. They may make suitable projects for the beginning student.

The author uses the following system for assigning and assessing HW. All HW is submitted electronically via the course management system, e.g., BBLearn or Moodle. Each HW submission is first and foremost a short report, stating the problem solved, giving a brief description of the algorithms and methods used to solve the problems, containing graphical or tabular summaries of output, and closing with a brief but convincing analysis of the results. Where possible, comparisons between known and approximate solutions should be included. Each report should be in .pdf format. The upload should be accompanied by one or several .m MATLAB files, capable, with only minor editing, of generating the valid results contained in the report. The report should avoid the inclusion of large quantities of tedious data, graphics, code listings, etc. Summarize!

For rating the point value of individual homework exercises, the author generally uses a scale of 10, 20, and 40 points for single-, double-, and triple-starred problems, respectively. To allow for some student choice and independent investigation of several aspects of a given topic, the author often makes assignments of the form "submit x out of y possible points worth of starred problem solutions from Sections W and Z," e.g., 200 out of 340 possible points worth of starred problems from Chap. 2, as the first assignment.

1.3 The Organization of This Text

The text is organized as follows. Chapter 2 contains a brief introduction to MATLAB, with a few relevant elementary examples and a fairly extensive list of warm-up exercises. In particular, we discuss Runge–Kutta for systems of ODE, numerical differentiation and integration, and Newton's method applied to vector fields. Throughout this chapter, we distribute a discussion of the MATLAB commands needed in all subsequent material, including linspace, kron, spdiags, meshgrid, reshape, eigs, and the powerful linear system solver '\'. From the author's experiences with students while developing these notes, we attempt in this chapter to address many of their 'frequently asked questions' concerning common pitfalls in programming and MATLAB syntax. A subsection on cubic splines demonstrates the uses of most of the above commands in a single application, in particular by building a complicated sparse matrix enforcing the various constraints in order to solve the corresponding system with a single linear solve command. The final section contains a brief overview of some theory for the existence and uniqueness of solutions to ODE and ODE systems, convergence of Newton's method, order of approximation of IVP solvers, and difference formulas.

Chapter 3 contains ODE material, but with an eye toward easy modification for the subsequent application to PDE. The homework exercises assume familiarity with integrating factors, separation of variables, converting second-order problems to first-order systems, constant-coefficient linear theory, some basic existence and uniqueness theory for IVP, eigenvalues, and other topics typically covered in an introductory undergraduate ODE course. It is also assumed that the most elementary numerical topics are understood, e.g., difference approximations for derivatives, Riemann sums, Euler's and Runge–Kutta's methods for IVP, Newton's method for a function of one or several variables, and Taylor's and Lagrange's polynomial approximation theorems with error terms. For a brief overview of these topics, see Sect. 2.6. In the last section of Chap. 3, we include a brief practical guide for the construction of Fourier series, which are used heavily in Chap. 4.

• Section 3.1 starts off with an example: how to solve a nonlinear second-order ordinary elliptic boundary value problem (BVP) using Newton's method. We use this example as a place to introduce and explain several essential MATLAB commands and techniques which will be constantly reused throughout the document. The second difference matrix D2 on the point grid is introduced in this first application and then used in many subsequent applications. This Newton's method approach is more efficient than implementing the shooting method, an alternative ODE-only method briefly discussed in Chap. 2. In Sect. 4.3, only small modifications to the ODE Newton's method code will be required to solve a semilinear elliptic BVP on the square.
• Section 3.2 solves a linear ordinary second-order BVP. By our presented method, not much change will be required in Sect. 4.4 in order to solve Laplace's equation on a square domain in a similar fashion.
• Section 3.3 solves a classic ODE eigenvalue problem with one additional line of MATLAB beyond that already coded in previous sections. We use this example to introduce eigenfunction expansions. We include a proof of the exact eigenvalues of a second difference matrix, which is useful in understanding the convergence of algorithms that use this matrix. Only minor modifications to the developed MATLAB codes will be required to solve a PDE eigenvalue problem on the square in Sect. 4.2.
• Section 3.4 shows how to build the D2 matrix for both the point and cell grids, for Dirichlet, Neumann, mixed, and periodic boundary conditions (BC).
• Section 3.5 solves a first-order linear initial value problem. We feature the first difference matrix D1.
• Section 3.6 applies our first-order method to a system of two first-order initial value problems in order to solve a second-order initial value problem.
• Section 3.7 applies Newton's method to solve a nonlinear first-order initial value problem (IVP), similar to that in Sect. 3.1 but using D1.
• Section 3.8 provides a brief overview of the theory and a practical guide for the construction of Fourier series.

Chapter 4 concerns PDE. The homework exercises assume some familiarity with (or the concurrent learning of) elementary PDE techniques and theory, including separation of variables, applications of Fourier series in PDE, eigenfunction expansions, and Laplace's, heat, and wave equations, for classical regions such as the interval, square, disk, and annular domains, with various boundary and initial conditions. For the convenience of the reader, these topics are briefly summarized in Sect. 4.10. The required numerical techniques are for the most part explained in this text, with some review of classic methods, e.g., the explicit and implicit methods for the heat equation.

• Section 4.1 introduces sparse block matrices for encoding the Laplacian. In particular, we develop the D2 matrix on a square domain, enforcing Dirichlet and/or Neumann BC on the individual four sides, using both point and cell grids.
• Section 4.2 verifies the numerically approximated eigenvalues and eigenvectors of the Laplacian on the square and discusses eigenfunction expansions.
• Section 4.3 generalizes the method from Sect. 3.1 to solve a semilinear elliptic BVP on the square.
• Section 4.4 applies the ideas from Sect. 3.2 to PDE in order to solve nonhomogeneous Laplace's equation on the square with nonhomogeneous mixed BC.
• Section 4.5 contains seven methods for solving the heat equation on the interval and square. We present the classic explicit, implicit, and Crank–Nicolson methods. We use this material as a place to introduce the 'method of lines' (MOL) as a general and powerful PDE technique. We make use of the separation of variables (SOV) eigenfunction expansion solution derivation, implementing Fourier expansions in MATLAB to solve the heat equation, where the coefficients are obtained by either exact formulas or numerical integration. We demonstrate that it is possible to obtain reasonable approximations to solutions of the heat equation by solving one large sparse linear system with a single linear system solver call.
• Section 4.6 contains a brief discussion of the wave equation, with examples on the interval and square. We show how to implement the method of lines. We provide discussion and exercises showing that the obvious explicit and implicit methods experience difficulties with dissipation and phase drift, and present a better alternative explicit method. It is demonstrated that a linear problem such as the wave equation can be effectively set up and solved with a single invocation of a linear solver.
• Section 4.7 considers Tricomi's equation. We are able to solve these boundary value problems by combining the single-linear-system techniques for Laplace's equation and the wave equation.
• In Sect. 4.8, we develop techniques for the creation of D2 matrices for other regions. We consider the cube, disk, annulus, angular sectors of the disk and annulus, and the sphere. Based on MATLAB's built-in delsq function, we provide an elementary code for a first consideration of approximating a Laplacian on arbitrary subdomains of the square. There is a fairly extensive list of corresponding HW exercises which ask for numerical solutions to many of the PDE from this chapter on spatial regions other than intervals and squares.
• In Sect. 4.9, we briefly discuss first-order PDE. Difference matrices are not used. The classic method of characteristics is used to set up and solve systems of ODE to obtain numerical solutions to the PDE. We demonstrate with an example that features a shock.
• In Sect. 4.10, we briefly summarize the separation of variables technique for solving PDE. In particular, we compute eigenvalues and eigenfunctions of the Laplacian, and solutions to Laplace's equation, heat equations, and wave equations. A variety of regions and boundary conditions are considered.

Chapter 5 extends some of the ideas from Chap. 4 to more advanced applications in semilinear and fully nonlinear elliptic BVP.

• Section 5.1 advances our ideas of following bifurcation branches for semilinear elliptic BVP. Section 5.1.1 introduces the tangent gradient Newton algorithm for continuation. In particular, we can follow bifurcation branches of the semilinear elliptic BVP from Sect. 4.3. Section 5.1.2 introduces the secant method for locating bifurcation points. Section 5.1.3 discusses a little further the methods and difficulties in finding new branches bifurcating from these bifurcation points.
• Section 5.2 finds several low-energy solutions of a semilinear elliptic BVP by 'minimax' variational methods. Section 5.2.1 applies the celebrated Mountain Pass Algorithm (MPA) of Choi and McKenna [25] to find an MI = 1 one-sign solution to a superlinear elliptic BVP. Section 5.2.2 applies the Modified Mountain Pass Algorithm (MMPA) to find an MI = 2 sign-changing-exactly-once solution to a superlinear elliptic BVP.
• Section 5.3 briefly introduces the p-Laplacian. We solve fully nonlinear elliptic BVP of the type Δ_p u = f and −Δ_p u = λ u|u|^(p−2) on the interval and square domains.

Chapter 2

Review of Elementary Numerical Methods and MATLAB®

Summary In this chapter, we review the use of MATLAB® in the context of several elementary applications from ordinary differential equations and calculus. First, we practice basic matrix and vector operations at the command line. Next, we write a short code to implement Euler’s method, which we then easily modify to implement the Runge–Kutta method for systems. We recall the shooting method, an ODE-only technique for solving BVP via repeated application of an IVP solver. We further demonstrate MATLAB syntax and programming techniques on several elementary numerical differentiation and integration problems, apply Newton’s method to nonlinear systems, and solve a cubic spline with a single linear system solve step. The included homework exercises require the reader to implement the example programs, and introduce some new concepts. Each section ends with one or more subsections containing a brief discussion of the various aspects of MATLAB which one needs in order to implement and execute the algorithms. This material attempts to address the many frequently asked questions concerning the nuts and bolts of getting a program to produce appropriate output. We introduce some concepts and techniques for the comparison of approximate solutions to exact solutions, a pedagogical theme repeated in the HW exercises throughout this textbook. We present in gray boxes overviews of specific MATLAB commands and syntax used in these programs which might be new to the uninitiated, and which are relied on heavily throughout this text.

This chapter assumes a basic familiarity with calculus and ordinary differential equations. Standard differential equations and calculus texts such as [2] and [19] can be consulted by the reader as needed for the mathematical theory behind the examples. Throughout this book, it is useful to know some linear algebra, such as that covered in [1, 7, 11, 15]. All the algorithms of this chapter can be found in standard introductory numerical analysis texts, such as [3].

Alongside the development of several programs, we explain to the uninitiated a little about graphics and a few other elementary but possibly confusing aspects of MATLAB. The MATLAB help functionality and all manner of online examples can and should be consulted to learn more about the richness of options, commands, data structures, file I/O, and so on. The possibilities of programming and applications for MATLAB are vast.

For the convenience of the reader, we provide in this chapter's brief tutorial examples of most of the commands which are used in the implementation of the algorithms in the subsequent chapters. The reader should take particular note of the useful MATLAB commands linspace, meshgrid, kron, spdiags, and the powerful solvers eigs and '\', used to compute eigenvalues and solve linear systems, respectively. All these commands are used heavily throughout this text. We also give some guidance on a number of other issues concerning the operation of MATLAB, e.g., we briefly discuss using MATLAB at the command line, basic matrix and vector operations, the threading of some algebraic operators with a dot ('.'), logical operations, and so on. We provide a short discussion about row and column vectors, the dimensions and sizes of arrays, and diagnosing the related errors that occur when quantities in a calculation are not of compatible shape. Included is a discussion of some concepts and techniques for the comparison of approximate solutions to exact solutions. The final section of this chapter contains a brief summary of the elementary theory for ODE, systems, order of approximation of ODE solvers, Newton's method, and difference formulas with error terms.

2.1 Introduction to Basic MATLAB® at the Command Line

One can use MATLAB as a calculator, without creating a file. The workspace holds any newly created variables (not local variables declared in functions) and their values for some time, so it is possible in effect to write a program by sequentially executing commands at the command line. One will find it inconvenient to repeat complicated steps if they are only saved in the command history buffer, but it is often convenient to type a few commands at the command line prompt to explore a few MATLAB expressions of particular interest. If a program is paused by the debugger at a particular line, the workspace will contain the state of all current variables at that point of the sequential execution of the program.

The operations +, -, *, /, and ^ denote the usual operations for scalars, vectors, and matrices. For example, if A is an m × n matrix, then A*x makes sense only if x is a scalar or an n × k matrix. If k = 1, then x is a column vector, and this is normal matrix-vector multiplication. Sometimes one wants to perform component-wise operations rather than the usual mathematical operation. For this, one uses a dot (.) before the operation symbol. For example, if A and B are m × n matrices for some natural numbers m and n (including 1, i.e., column or row vectors), then A.*B denotes the m × n matrix with entries a_ij b_ij. If c is a scalar, no dot is needed for the usual multiplication c*A, but we can use the dot to exponentiate component-wise as in A.^c, which gives the m × n matrix with entries (a_ij)^c. The sum, max, and norm commands are useful examples of the many built-in matrix-vector operators.

Try the following matrix operations to get comfortable with the expected results. Repeat with some row and column vectors, and a couple of non-square matrices.

> A=[1 2; 3, 4];
  Creates a 2 × 2 matrix. Separator spaces and commas indicate a new column; a separator semicolon indicates a new row. A terminating semicolon suppresses output to the screen.
> B=[1 2; 2 1]; c = 3;
  Defines a second 2 × 2 matrix and a scalar.
> A*B
  Usual matrix product; result displayed to the screen.
> A.*B
  Component-wise product; result displayed to the screen.
> A./B
  Component-wise division; result displayed to the screen. The operation A/B actually invokes a linear system solver!
> A+B
  No need for a dot; usual addition is already component-wise.
> A.^B
  Component-wise exponentiation. The operation A^B gives an error.
> A.^c
  Component-wise exponentiation. The operation A^c gives the usual power of a square n × n matrix, and gives an error if A is not square.
> x=[1;3;-5]
  Creates a column vector.
> sum(x)
  Sums the column vector. Try prod(x) instead. Try sum(A) also.
> norm(x)
  Returns the usual Euclidean vector norm.
> max(abs(x))
  Equivalent to norm(x,Inf).

In order to plot a typical (non-symbolic) function, it is necessary to understand the way MATLAB's various plotting functions take as input an array of dependent, discrete values and, optionally, arrays of the same size of independent values. A convenient way to discretize a range of values, which we use throughout this text, is via MATLAB's linspace command.

> n = 10; x=linspace(0,1,n+1)';
  Creates the column vector x(i) = (i − 1)/n, i = 1, ..., n + 1. The semicolon suppresses output to the screen; the single quote transposes the row vector to a column vector.
> plot(sin(pi*x));
  Creates a new (by default) figure containing the plot. Many functions like sin thread over vectors, that is, x, sin(x) ∈ R^(n+1).
> f = @(k,x) sin(k*pi*x);
  Defines a function which can later be plotted or passed to other functions.
> figure(10); plot(x, f(1,x));
  Creates the same plot as above, but in a different figure and with independent x-values.
> for k=1:6
  Starts a loop; the command prompt is suppressed until 'end'.
> hold on; plot(x, f(k,x)); end
  With hold on, multiple graphics will appear in the same figure. Once end is entered, the loop executes, and a figure with multiple graphics is created.

We will use the powerful eigs command to solve serious problems in differential equations, but we can immediately and easily use it to compute the eigenvalues or eigenvectors of any small square matrix.

> A=[2 1;1 2]; eigs(A)
  Returns the two eigenvalues of A.
> [V, D] = eigs(A)
  Returns the eigenvectors and eigenvalues. V(:,1) extracts the first eigenvector; diag(D) extracts the eigenvalues; V*D*V' reconstitutes A via orthogonal diagonalization.
> B=diag(1:8)
  Creates a diagonal matrix with entries 1 to 8. eigs(B) returns by default the six largest eigenvalues; [V,D]=eigs(B,6,'sm') returns the six smallest.

> L=[A zeros(2,8); zeros(8,2) B]
  Creates a 10 × 10 block matrix. [V,D]=eigs(L,6,'sm') returns its six smallest eigenvalues.

If you experiment much with the above exercises, you will soon be frustrated with retyping or recalling lines to fix mistakes or try different options. For anything complicated, the commands are generally stored permanently in a file, which is then executed all at once as a program, or stepped through line by line in the debugger. A good time to use MATLAB as a calculator is when a program is paused in the debugger and the workspace contains all current variables. If the command history contains several good lines, they can be copied and pasted into a new file, and presto, a program is born. In the next sections, we present a series of short programs that introduce more MATLAB commands and solve elementary problems from calculus and differential equations.

2.1.1 MATLAB®: sum, prod, max, min, abs, norm, linspace, for loop, eigs, and sort

We briefly discuss here a few of the MATLAB commands and syntaxes used in this section, in particular those which are relied on throughout this text.

The sum, prod, max, min, abs, and norm commands. Given a column or row vector x, the commands sum, prod, max, and min all return a scalar of the intended calculated value. Given a two-dimensional array, these four commands return a row vector where each entry is the result of the calculation along columns. The abs command threads over any array, e.g., abs(A) = (abs(a_{ij})). The norm command returns by default the usual Euclidean length of a vector, e.g., norm(u) = (Σᵢ uᵢ²)^{1/2}, although the command accepts options to enforce other norms.
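As a quick command-line check of the conventions just described (the matrices and vectors here are arbitrary illustrations, not taken from the text):

A = [1 2; 3 4];
sum(A)             % row vector of column sums: [4 6]
max(A)             % column-wise maxima: [3 4]
abs([-1; 2])       % threads over entries: [1; 2]
norm([3; 4])       % Euclidean length: 5
norm([3; 4], inf)  % the sup norm, equal to max(abs(x)): 4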

The linspace command. The built-in MATLAB command linspace is used throughout this text as a convenient way to generate a row vector of regularly spaced values partitioning a given interval. The MATLAB statement x=linspace(a,b,n+1); generates a row vector x of length n+1 such that x(i)=a+(i − 1)(b − a)/n, for i = 1, · · · , n + 1. The author has made a more or less permanent decision that n will always be the number of divisions of a given interval, so that n + 1 is the number of points, including



end points. The issues of including or not including end points when considering boundary or initial conditions, and the choice of the grid type, e.g., interior points (the point grid) or midpoints of intervals (the cell grid), make determining the size and length of arrays a little confusing, and hence something to pay careful attention to. Watch for the 'fence post error' (off by one) here!

The for loop. The most basic control structure is the for loop. The syntax is to have a line such as for k=vector, without terminating semicolon, followed by some or many lines to be repeated, and finally an end, again with no semicolon required. The lines between for and end will be repeated once for each element k of the vector. Since 1:6 is the vector [1,2,3,4,5,6], a loop beginning with for k=1:6 will execute the commands enclosed by the next end six times, with first k = 1, then k = 2, and so on, until on the last iteration, k = 6. The statement for x=a:d:b can be read 'for x going from a to b in steps of d.'
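A minimal sketch contrasting the two grid conventions discussed above, with illustrative values n = 10 on [0, 1] (the variable names follow this text's conventions):

n = 10; a = 0; b = 1; dx = (b-a)/n;
x = linspace(a, b, n+1)';       % n+1 points, including end points
v_point = x(2:end-1);           % point grid: n-1 interior points
v_cell  = x(1:end-1) + dx/2;    % cell grid: n midpoints
[length(x), length(v_point), length(v_cell)]   % returns [11 9 10]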

The following powerful command implements a very sophisticated algorithm to find the desired collection of eigenvalues and eigenvectors of a matrix. When the matrix is an appropriate difference matrix, we are in effect approximating solutions to eigenvalue problems which have many applications in differential equations and are of fundamental interest in their own right.

The eigs command. The eigs command is used throughout this text to very efficiently compute some or all of the eigenvalues and eigenvectors of a matrix. For a small n × n matrix A, eigs(A) will return all of the eigenvalues, real or complex. For larger matrices, the default is to return the six largest in magnitude eigenvalues, but there are options to compute more or fewer eigenvalues, in various ranges. To obtain eigenvectors as well, [V, D] = eigs(A) returns a matrix V with columns of eigenvectors, normalized in Rⁿ, corresponding to eigenvalues on the diagonal of matrix D, ordered by magnitude. For a large n × n matrix L, we commonly use the command [V,D]=eigs(L,k,'sm'); to compute the k ≪ n smallest magnitude eigenvalues and their eigenvectors.

The standard one-dimensional convergence result for Newton's method can be stated as follows: if f is C² on an interval containing p with f(p) = 0 and f′(p) ≠ 0, then there exists δ > 0 so that for p₀ ∈ (p − δ, p + δ), the iteration p_{n+1} = p_n − f(p_n)/f′(p_n) defines a sequence which converges to p.



Under the hypothesis the convergence is quadratic, i.e., order 2. Recall that p_n → p with order α and asymptotic error constant λ if

|p_{n+1} − p| / |p_n − p|^α → λ ∈ (0, ∞).

A brief outline of the proof is as follows. That a continuous function g : [a, b] → [a, b] must have a fixed point (g(p) = p) can be seen by applying the intermediate value theorem to h(x) = x − g(x). If the function g is a contraction, i.e., |g′(x)| ≤ K < 1 on [a, b], then the fixed point is unique and fixed point iteration p₀ ∈ [a, b], p_{k+1} = g(p_k) converges to that unique point. If there were distinct fixed points p and q, then by the mean value theorem (MVT) there exists c between p and q such that |p − q| = |g(p) − g(q)| = |g′(c)||p − q| < |p − q|, a contradiction. Since K < 1, applying the MVT repeatedly gives

|p_{n+1} − p| = |g(p_n) − g(p)| ≤ K|p_n − p| ≤ K²|p_{n−1} − p| ≤ ⋯ ≤ K^{n+1}|p₀ − p| → 0 as n → ∞.

Finally, Newton's method is fixed point iteration, defined by g(x) = x − f(x)/f′(x). By calculus, it is easy to see that g′(p) = 0, so g is clearly a contraction on some interval containing p. Furthermore, in the typical case where g″(p) ≠ 0, it can be shown that the convergence is order 2 with asymptotic error constant |g″(p)|/2 [3]. We will be interested in nonlinear systems. There are a number of different results with varying strengths of hypothesis and conclusion (see, for example, [18]). From [3], if F : Ω → Rⁿ is of class C² with p in the open set Ω ⊂ Rⁿ satisfying F(p) = 0 with an invertible Jacobian J_F(p), then for p₀ sufficiently close to p, Newton's method iteration p_{k+1} = p_k − (J_F(p_k))⁻¹ F(p_k) will converge quadratically to p. The proof again relies on fixed point iteration. In Sect. 3.1, Newton's method is used to find approximate solutions to second-order nonlinear BVP of the form y″ + f(y) = 0, y(0) = 0 = y(1). In particular, solutions to F(u) = 0 for the finite dimensional vector field defined by F(u) = D2 u + f(u) give such approximations, where D2 is a three-point central second difference matrix enforcing the boundary condition (BC).
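A minimal scalar sketch of the iteration discussed above; the function f and the initial guess p0 are arbitrary illustrative choices, not from the text:

f  = @(x) x.^2 - 2;       % zero at p = sqrt(2)
fp = @(x) 2*x;            % derivative f'
p  = 1;                   % initial guess p0 near p
for k = 1:6
  p = p - f(p) / fp(p);   % Newton step
  fprintf('%d  %.16f  %.2e\n', k, p, abs(p - sqrt(2)));
end

The printed errors roughly square at each iteration, consistent with order 2 convergence.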



2.6.3 First-Order IVP Numerical Solvers: Euler's and Runge–Kutta's

There are many algorithms for approximating solutions to first-order ODE and systems of ODE (see, for example, [3] and [18]). Euler's is generally presented first for pedagogical and practical reasons: it is easy to visualize the algorithm as following tangents to the vector field defining the system, and it has a correspondingly short and easy implementation. Runge–Kutta's is an algorithm that gives much better results with only a slightly more complicated code. Briefly, using Taylor's theorem, it is straightforward to see that the error in one step of Euler's is O(h²), where h is the step size t_{i+1} − t_i. That is, assuming that the initial point (t₀, y₀) is known exactly, the local truncation error between the approximation y₁ and the exact IVP solution evaluation y(t₁) is the quadratic error term (h²/2) y″(ξ), for some ξ ∈ (t₀, t₁). At each step these errors compound, as the initial point for the next step is not known exactly. Runge–Kutta's is almost as easy to implement as Euler's, but more complicated to derive and analyze. It requires four times as many floating point operations and function evaluations per step, but requires many times fewer steps. For these two and many other algorithms, there exists a large variety of theorems which provide bounds on the error of the method, which generally refers to the compounding global error over an interval after computing n steps. Essentially, if one knows of the existence of a unique solution to the IVP over some interval and can infer various bounds for the solution and its derivatives, then one can obtain an error bound for the method of one less order than the local error for one step. Euler's method is thus an O(h) method. Runge–Kutta's method can be shown to be an O(h⁴) method. In practical terms, this means that if one doubles the number of steps for a given fixed interval, hence halving h, the error for Euler's will approximately halve, while the error for Runge–Kutta's will reduce by a factor approaching 16.
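These orders can be observed numerically on the test problem y′ = y, y(0) = 1, whose exact solution is eᵗ; the following sketch uses the standard RK4 stages (it is not one of the text's listings) and prints global errors at t = 1 for two step counts:

f = @(t, y) y;                      % test IVP y' = y, y(0) = 1
for n = [20 40]
  h = 1/n;  yE = 1;  yR = 1;  t = 0;
  for i = 1:n
    yE = yE + h*f(t, yE);                            % Euler step
    k1 = f(t, yR);            k2 = f(t + h/2, yR + h/2*k1);
    k3 = f(t + h/2, yR + h/2*k2);  k4 = f(t + h, yR + h*k3);
    yR = yR + h/6*(k1 + 2*k2 + 2*k3 + k4);           % RK4 step
    t = t + h;
  end
  fprintf('n=%d  Euler err %.2e  RK4 err %.2e\n', n, abs(yE-exp(1)), abs(yR-exp(1)));
end

Doubling n roughly halves the Euler error and divides the RK4 error by nearly 16.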

2.6.4 Difference Formulas and Orders of Approximation

We give a brief summary of applications of the Lagrange and Taylor theorems for computing difference approximations and their error terms for approximating derivatives.

Theorem 2.4. (Taylor) Let f : I → R be C^{n+1} for an interval I containing x₀, and define

P_n(x) = Σ_{k=0}^{n} (f^{(k)}(x₀)/k!) (x − x₀)^k.

Then for each x ∈ I there exists ξ(x) which depends continuously on x so that f(x) = P_n(x) + R_n(x), where


R_n(x) = (f^{(n+1)}(ξ(x))/(n+1)!) (x − x₀)^{n+1}.

Proof. Fix n and x ∈ I. Define L(x) = f(x) − P(x) − R(x), with R(x) = M(x − x₀)^{n+1}, where M is chosen so that L(x) = f(x) − P(x) − M(x − x₀)^{n+1} = 0. Since f^{(k)}(x₀) = P^{(k)}(x₀) and R^{(k)}(x₀) = 0 for k ∈ {0, . . . , n}, we have L^{(k)}(x₀) = 0 for k ∈ {0, . . . , n}. We next consecutively apply the Mean Value Theorem (MVT) n + 1 times to obtain the Taylor theorem error formula. Since L(x₀) = 0 = L(x), there must exist c₁ ∈ (x₀, x) so that L′(c₁) = 0. Since L′(x₀) = 0 = L′(c₁), there must exist c₂ ∈ (x₀, c₁) so that L″(c₂) = 0. This process continues until L^{(n+1)}(c_{n+1}) = f^{(n+1)}(c_{n+1}) − 0 − (n+1)! M = 0. With ξ(x) = c_{n+1}, this gives M = f^{(n+1)}(ξ(x))/(n+1)! so that R(x) = (f^{(n+1)}(ξ(x))/(n+1)!) (x − x₀)^{n+1}.

Theorem 2.5. (Lagrange) Let f : I → R be C^{n+1} for an interval I containing the distinct numbers {x_k}_{k=0}^{n}. Define

P_n(x) = Σ_{k=0}^{n} f(x_k) L_{n,k}(x), where L_{n,k}(x) = Π_{j=0, j≠k}^{n} (x − x_j)/(x_k − x_j).

Then for each x ∈ I, there exists ξ(x) which depends continuously on x so that f(x) = P_n(x) + R_n(x), with

R_n(x) = (f^{(n+1)}(ξ(x))/(n+1)!) Π_{k=0}^{n} (x − x_k).

Proof. Fix n and x ∈ I. Define

L(x) = f(x) − P(x) − R(x), with R(x) = M Π_{k=0}^{n} (x − x_k),

where M is chosen so that L(x) = 0. Observe that for k ∈ {0, . . . , n} we have



L_{n,k}(x_j) = δ_{j,k} so that P(x_k) = f(x_k), and that R(x_k) = 0, so that L(x_k) = 0. We can then apply the MVT to a cascade of derivatives of L to obtain the Lagrange theorem error formula. Since L(x_k) = 0 for k ∈ {0, . . . , n} and also L(x) = 0, by n + 1 applications of the MVT, there exist constants c_k^1 so that L′(c_k^1) = 0, for k ∈ {0, . . . , n}. The MVT can be applied n more times between the n + 1 zeros of L′, giving c_k^2 so that L″(c_k^2) = 0, for k ∈ {0, . . . , n − 1}. Repeating, one obtains ξ(x) = c_0^{n+1} so that L^{(n+1)}(ξ(x)) = 0. Then M = f^{(n+1)}(ξ(x))/(n+1)! so that R(x) = (f^{(n+1)}(ξ(x))/(n+1)!) Π_{k=0}^{n} (x − x_k), which completes the proof.

Let {x₀, . . . , x_n} be distinct real numbers in the domain of some function, and {y₀, . . . , y_n} be the corresponding values of the function evaluated at those numbers. Then the Lagrange polynomial P_n interpolating these n + 1 points can be used to generate difference formulas for approximating derivatives, namely f^{(k)}(x) ≈ P_n^{(k)}(x). Both Lagrange's error term and Taylor's error term are useful in computing error terms for difference approximations of derivatives. For example, given {x₀, x₁} and {y₀, y₁}, the Lagrange polynomial

P₁(x) = y₀ (x − x₁)/(x₀ − x₁) + y₁ (x − x₀)/(x₁ − x₀)

gives the two-point forward and backward first difference formulas

f′(x₀) ≈ P₁′(x₀) = (y₁ − y₀)/(x₁ − x₀) = P₁′(x₁) ≈ f′(x₁).

With h = x₁ − x₀, the error term

R₁′(x) = ½ ( f‴(ξ(x)) ξ′(x)(x − x₀)(x − x₁) + f″(ξ(x))(2x − x₀ − x₁) )

gives R₁′(x₀) = −(f″(ξ₀)/2) h and R₁′(x₁) = (f″(ξ₁)/2) h. Similarly, given three equally spaced points {x₀, x₁, x₂} with h = Δx and corresponding function values {y₀, y₁, y₂}, the three-point forward first difference formula is given by

f′(x₀) ≈ P₂′(x₀) = (−3y₀ + 4y₁ − y₂)/(2h), with error R₂′(x₀) = (f‴(ξ₀)/3) h²,

and the three-point central first difference formula is given by

f′(x₁) ≈ P₂′(x₁) = (y₂ − y₀)/(2h), with error R₂′(x₁) = −(f‴(ξ₁)/6) h².

We rely heavily on the three-point central second difference formula for approximating second derivatives. Here,

f″(x₁) ≈ P₂″(x₁) = (y₀ − 2y₁ + y₂)/h²,

but we use Taylor's theorem to compute the error term. Adding


y₂ = y₁ + h f′(x₁) + (h²/2) f″(x₁) + (h³/6) f‴(x₁) + (h⁴/24) f⁽⁴⁾(ξ₁)

and

y₀ = y₁ − h f′(x₁) + (h²/2) f″(x₁) − (h³/6) f‴(x₁) + (h⁴/24) f⁽⁴⁾(ξ₂),

and dividing the result by h² gives

(y₀ − 2y₁ + y₂)/h² = f″(x₁) + ((f⁽⁴⁾(ξ₁) + f⁽⁴⁾(ξ₂))/24) h².

By the intermediate value theorem, we can write this as

f″(x₁) = (y₀ − 2y₁ + y₂)/h² − (f⁽⁴⁾(ξ)/12) h²,

for some ξ ∈ (x₀, x₂).
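The error term can be checked numerically; a small sketch using the arbitrary smooth test function f = sin, approximating f″(1) = −sin(1):

f = @(x) sin(x);  x1 = 1;
for h = [0.1 0.05 0.025]
  d2 = (f(x1-h) - 2*f(x1) + f(x1+h)) / h^2;   % three-point central
  fprintf('h=%g  error %.3e\n', h, abs(d2 + sin(1)));
end

Halving h divides the error by roughly 4, consistent with the −(f⁽⁴⁾(ξ)/12)h² term.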

Exercises

2.1. Consider the IVP y′ = y², y(0) = 1. Take two steps of Euler's method with Δx = 1/2 to approximate a solution, solve the problem exactly, and compare.

2.2. Consider the IVP y′ = y, y(0) = 1. Take one step of the Runge–Kutta method with Δx = 1 to approximate a solution, solve the problem exactly, and compare.

2.3. Consider the IVP y″ = −y, y(0) = 0, y′(0) = 1. Take three steps of Euler's method with Δt = 1 to approximate a solution for t ∈ [0, 3], solve the problem exactly, and compare graphically in the phase plane.

2.4. Find the interval of unique existence for the IVP

y′ − (1/(1 − t)) y = 0, y(0) = 1.

Then, compute the solution using integrating factors. Finally, take two steps of Euler's method with h = 1/2 to compute an approximate solution, and compare the result with the exact solution.

2.5. Let f₁, f₂ : [0, 2] → R be defined by f₁(x) = x² and f₂(x) = x³, respectively. Let x = (x₀, . . . , x₄) be a regular partition of the domain into n = 4 subintervals with Δx = 1/2, and set uᵢ = fᵢ(x) ∈ R⁵. Compute the first difference vectors D1 uᵢ ∈ R⁴ and the approximate antiderivative vectors wᵢ ∈ R⁴ obtained by composite midpoint sums of ∫_{x₀}^{x_j} fᵢ(x) dx. Now, compute the midpoint sum antiderivatives of the first difference vectors, and the first differences of the midpoint sum antiderivative vectors. Compare each result to the exact calculus result.



2.6. Do one iteration of Newton's method to approximate a zero of f : C → C defined by f(z) = z³ − 1, using the initial guess z₀ = i. Sketch the location of the three zeros of f, the initial guess, and your approximation.

2.7. Do one iteration of Newton's method to approximate an intersection of the curves defined by x² + 4y² = 4 and y = 1/(1 + x²), using the initial guess w₀ = (x₀, y₀) = (2, 0). Sketch the curves, the three intersection points, the initial guess, and your approximation.

2.8. Approximate ∫₀¹ ∫₀¹ sin(πx) sin(πy) dy dx using n = 4 = m x and y divisions of composite double Simpson's rule. Compare your approximation to the exact value of the definite integral.

2.9. As in Example 2.1, approximate

∫₀¹ ∫₀ˣ y sin(2πx) dy dx

with the n = 4 double composite Simpson's algorithm. Compare the result to the actual integral value.

2.10. (*) Enter the first Euler program from Listing 2.1 into a file and test. Add the following two commands (say to line 20), to make the plot look a little nicer: axis([a, b, 0, ceil(max(abs(w)))]); title('e^x');

2.11. (*) Verify that our RK implementation from Listings 2.2 and 2.3 solves the example scalar problem y′ = y, y(0) = 1 just fine. Replace the first five lines of the example RK driver with the first five lines of the Euler's program from Listing 2.1, counting comments and blank lines. The plot statements will need to be changed correspondingly.

2.12. (*) Solve y′ + xy = 0, y(0) = 1 exactly by both the separation of variables and integrating factors techniques. Modify the RK example driver from Listing 2.3 to approximate a solution to this problem, and compare the exact solution to the approximation graphically.

2.13. (*) Solve y′ = xy², y(0) = 2 exactly by the separation of variables technique. Modify the RK example driver from Listing 2.3 to approximate a solution to this problem, and compare the exact solution to the approximation graphically.

2.14. (*) For Problems 2.12 and 2.13, observe the order of convergence using both Euler and RK methods.

2.15. (*) Use the RK driver from Listing 2.3 for a first-order system to solve Airy's equation y″ − xy = 0. In particular, use the initial values y(0) = Ai(0) = 1/(3^{2/3} Γ(2/3)) and y′(0) = Ai′(0) = −1/(3^{1/3} Γ(1/3)). To obtain the plot from Wikipedia [22] for Airy's functions, solve forward on the interval [0, 5], and backwards on the interval [−15, 0]. The latter step simply requires a = 0 and b = −15, whence x will be a decreasing sequence and dx will be negative.

2.16. (*) Implement the code in Listing 2.4. Via either automation or trial and error, find two pairs (d₁, d₂), one bracketing a solution which changes sign once, and another bracketing a solution which changes sign twice.

2.17. (**) Implement the shooting method using the secant method instead of bisection. Compare the timing of the two methods with tic and toc. Use a large value of n to get significant observable elapsed times.

2.18. (*) Solve the Euler type equation x²y″ − 3xy′ + 3y = 0 exactly with the initial conditions y(1) = 2, y′(1) = 4 (substitute y = x^r and solve for r to find two linearly independent solutions). Use the RK driver from Listing 2.3 over the interval [1, 2] and compare approximate to exact solution via graphical and L² or sup norm of the error, for various values of n.

2.19. (**) In measuring convergence for the purpose of terminating loops, Riemann sum computations of the form norm(error_vector) * sqrt(dx) are sufficient and efficient. For more accuracy in numerical integration, a more sophisticated quadrature method is required. Implement and compare the order of convergence of composite left hand, right hand, trapezoid, midpoint, and Simpson's methods for computing ∫₀¹ eˣ dx.

2.20. (*) Implement the multiple composite Simpson's algorithm from Listing 2.8 and approximate the following integrals. Observe the order of accuracy by computing with n ∈ {10, 20, 40, 80} divisions and computing the error of the approximations.
1. ∫_{−1}^{1} ∫_{−√(1−x²)}^{√(1−x²)} (1 − (x² + y²)) dy dx.
2. ∫₀^{2π} ∫₀¹ (1 − r²) r dr dθ.
3. ∫₀¹ ∫₀¹ sin(πx) sin(πy) dx dy.
4. ∫₀¹ ∫₀¹ x dx dy.

2.21. (**) Use tic and toc to observe the relative execution times of computing the sum of the vector v ∈ R^{n+1} defined by vᵢ = (i/n)², i = 0, . . . , n, in the following four ways:
1. In a for loop, accumulate s = s + f(i/n), where f has been defined by the inline function f = @(x) x.^2;.
2. Do the same as above, but instead define f as a subfunction, e.g., make your program start with function ..., and end with end, followed by something like
function y = f(x)
y = x.^2;
end



3. Now let x=linspace(0,1,n+1); and compute sum(f(x)) using the inline defined f.
4. Repeat the sum(f(x)) computation using the subfunction defined f.
After verifying that the sums are correct, do not output the sum, as printing to the screen is itself a slow process. In order to generate significant execution times, try both n = 10⁷, and, for n = 10, a loop repeating the summing 10⁶ times.

2.22. (*) Replace the differences afp computed in line 14 in Listing 2.5 with sparse matrix multiplication by the n × (n + 1) first difference matrix D1 presented above.

2.23. (*) Replace the antiderivative aF computed in line 19 in Listing 2.5 with a sparse matrix linear system solve, using the n × n first difference matrix D1 presented above. Compare the two results for accuracy. With a large n, place a loop around each of the two antiderivative computations to perform that same calculation repeatedly. Time with tic and toc. Which method is the fastest?

2.24. (*) Enter the second difference matrix from Sect. 3.1 and use matrix multiplication to compute an O((dx)²) second derivative approximation of several functions f which satisfy the boundary condition f(0) = 0 = f(1), and compare graphically with their exact second derivatives.

2.25. (*) Try the numerical integration and differentiation examples from Exercises 2.22, 2.23, and 2.24 for n = 10, and insert the lines full(A), full(D1), or full(D2) to see these matrices print on the screen. Use the debugger and step through the code until the matrices are populated, and then inspect them with the variable inspector. Note the sparse format.

2.26. (*) From Listing 2.9, write code using symbolic variables to generate the Lagrange polynomial P₂ fitting arbitrary data {y₀, y₁, y₂} at the irregularly spaced points {x, x + h₁, x + h₁ + h₂}. Evaluate the first and second derivatives of P₂ at the three grid points to produce tables similar to those of Tables 2.1 and 2.2.

2.27. (*) Use Newton's method to compute an approximation x* ∈ R² to the intersection of the curves 4x² + y² = 1 and y = eˣ. How many iterations do you observe so that the object function f : R² → R² satisfies |f(x*)| < 10⁻¹⁰?

2.28. (***) Use Newton's method to find the three complex roots of f : C → C defined by f(z) = z³ − 1. Place a call to the Newton code in a double loop to make as many complex initial guesses as there are pixels in the square [−1, 1] × i[−1, 1], and save off in a two-dimensional array which of the three roots each guess converged to. Make a plot using pcolor or contourf with some options to get a picture of the complex basins of attraction for the roots of f.

2.29. (*) Execute the code in Listing 2.14 repeatedly in a for n=10*2.^(0:5) loop and observe the order of the interpolation error sup_err_SPL. Repeat for the derivative and inverse interpolation errors computed in Listings 2.15 and 2.16.



2.30. (*) Implement a natural cubic spline solver and test on f defined by f(x) = sin(πx). Recall that this means that S″(x₁) = 0 = S″(x_{n+1}) instead of the clamped first derivative boundary conditions. The corresponding lines 15, 16, and 22 of Listing 2.13 will have to be modified.

2.31. (***) Modify the code from Listing 2.13 to instead generate a C³ degree four spline. The new matrix L will be of size 5n × 5n, so each row needs an extra block of n columns, there will be an extra continuity block for the C³ condition, the RHS vector will be correspondingly longer by n − 1, there are 5 columns to reshape, and an extra boundary condition may be required. Test convincingly.

2.32. (*) Implement the code in Listing 2.12. Inspect the matrices X, Y, and Z for each of the two cases. Verify that the surface plots are the same.

2.33. (*) Implement the code in Listing 2.17. View the resulting AVI file in a video viewer. Comment out the video lines and run with and without the pause line commented. Change the plot to a surf of time slices of the function defined by z = cos(√2 πt) sin(πx) sin(πy) over the unit square, using t=linspace(0,2,1000);.

2.34. (*) Let f be defined by f(x) = eˣ sin(5πx). Within a for i=2:100 loop, define x=linspace(0,1,n+1)'; to be the breakpoints for a clamped cubic spline fitting f, create the spline by a call to spline3 as found in Listing 2.13, and plot to create an animation showing the convergence of the interpolations.

Chapter 3

Ordinary Differential Equations

Summary In this chapter, we use difference matrices to solve ordinary differential equations. We first apply Newton's method to a second-order elliptic semilinear boundary value problem. Next, we use MATLAB®'s built-in linear system solver 'backslash' to solve a linear ordinary second-order boundary value problem. An eigenvalue problem is solved using second difference matrices, and Fourier sine series are used to introduce eigenfunction expansion. Next, we consider how to enforce 0-Dirichlet, 0-Neumann, and periodic boundary conditions using both point grid and cell grid. We solve first-order initial value problems, systems of first-order linear initial value problems, and first-order nonlinear initial value problems. The difference matrix approach is compared with more traditional ordinary differential equation solvers, e.g., Runge–Kutta and MATLAB's built-in solver ode45.

We use two techniques that will subsequently be applied to PDE:
• Discretizing linear ODE to finite dimensional linear systems, and then solving those systems with a standard linear solver, e.g., MATLAB's '\'.
• Discretizing nonlinear ODE to finite dimensional nonlinear systems, and then solving those systems with Newton's method.

This chapter assumes knowledge of more topics from ordinary differential equations, still within the scope of texts such as [2]. Standard introductory numerical analysis texts like [3] develop first and second differencing schemes, linear solvers, and Newton's method. A reference like [18] can be consulted for more on linear algebra and ODE topics. The text [12] contains more theory corresponding to many of the algorithms found here. An advanced text like [10] contains much more on finite differences. The text [4] and Schaum's Outline [5] by the same authors are references for difference matrices applied to differential equations. They include many good examples and exercises. The research articles [32] and [33] use ghost points on what we call here the cell grid to effectively enforce boundary conditions. The text [14] includes the use of finite differences with ghost points and the cell grid. Linear algebra texts such as [1, 7, 11, 15] can be referred to for background on norms, eigenvalues, inner products, invertibility, and so on. See Sects. 2.6 and 3.8 for brief overviews of some elementary theories for ODE, ODE systems, ODE solvers, Newton's method, finite difference approximations, and Fourier series.




1 n = 100; N = n-1;
2 a = 0; b = 1; dx = (b-a) / n;
3 x = linspace(a, b, n+1)';
4 v = x(2 : end-1);
5 s = 8;
6 f = @(y) s*y + y.^3;
7 fp = @(y) s + 3*y.^2;
8 y = sin(pi*v);

Listing 3.1 First part of nonlinear solver for the BVP y″ + f(y) = 0, y(0) = 0 = y(1). The grid is created, the nonlinearity is defined, and a guess vector for Newton's method is initialized.


3.1 Second-Order Semilinear Elliptic Boundary Value Problems

In this section, we will develop Newton's method MATLAB code to approximate solutions to the nonlinear BVP

y″ + f(y) = 0, y(0) = 0 = y(1),   (3.1)

where f : R → R is a given suitable nonlinear function. It is known, for example, that there are infinitely many solutions when the nonlinearity belongs to the family of functions defined by f(y) = sy + y³ and a real parameter s. There are no closed-form solutions to this equation in terms of elementary functions (see Exercise 3.9). Listings 3.1, 3.2, and 3.3 contain an implementation of Newton's method to solve Eq. 3.1. The code is broken into three parts and explained. Let x ∈ R^{n+1} be a regular discretization of the interval [0, 1], and v ∈ R^{n−1} be the corresponding point grid of interior points of size N = n − 1. Fixing y₀ = 0 and y_n = 0, we will iteratively solve for an approximation y ∈ R^{n−1} to the solution y evaluated at v. In Listing 3.1, we create the grid point column vectors, define the nonlinearity f and its derivative, and populate y with an initial guess. Since the term f(y) denotes the composition of functions, the dot-exponent symbol is required so that f and f′ will "thread" over approximating vectors, as in (f(y))ᵢ = f(yᵢ). The well-known three-point central second difference approximation y″(xᵢ) = (y_{i−1} − 2yᵢ + y_{i+1})/((dx)²) + O((dx)²) can be performed via matrix multiplication with the second difference matrix


9 D = kron([1 -2 1], ones(N,1));
10 D2 = spdiags(D, [-1 0 1], N, N) / dx^2;
11 G = @(y) D2*y + f(y);
12 J = @(y) D2 + spdiags(fp(y), 0, N, N);

Listing 3.2 Second part of nonlinear solver for the BVP y″ + f(y) = 0, y(0) = 0 = y(1). The second difference matrix D2 is created and the object function G and its Jacobian J are defined.



D2 = (1/(dx)²) ·
⎡ −2   1              ⎤
⎢  1  −2   1          ⎥
⎢      ⋱   ⋱   ⋱     ⎥  ∈ M^{(n−1)×(n−1)},   (3.2)
⎢          1  −2   1  ⎥
⎣              1  −2  ⎦

i.e., (D2 * y)ᵢ ≈ y″(vᵢ) (see [BF]). The "missing 1's" in the first and last equations do not have to be accounted for due to the boundary condition. This tridiagonal matrix can be efficiently created using the MATLAB command spdiags. This important command creates banded matrices in sparse format, taking 4 parameters as input. The first parameter is a matrix D whose columns contain the numbers to be placed on the respective diagonals, the second input parameter is a vector of signed integers indicating which diagonals are to be populated with those columns, and the last two parameters specify the row and column dimensions. To form D2, setting the second parameter to [−1, 0, 1] indicates that we want to fill the lower diagonal, diagonal, and upper diagonal. The useful MATLAB command kron can be used to create the first parameter, an (n − 1) × 3 matrix whose three columns of 1's, −2's, and 1's will be used to fill the three diagonals. In Listing 3.2, we define D2, and then define the object function G, an N dimensional vector field, and its Jacobian matrix J. The nonlinear portion of the Jacobian is diagonal since the i-th component function y ↦ f(yᵢ) depends only on the i-th unknown. Hence, J is also tridiagonal and stored in sparse format. Listing 3.3 contains the Newton iteration. Upon convergence, the vector G(y) is approximately zero and the vector y approximates a solution to the BVP. For each iteration, we use the powerful MATLAB command '\' (mldivide) to efficiently solve the sparse linear system for the Newton search direction χ = (J(y))⁻¹G(y). The Riemann sums err1 and err2 approximate the L² norms of the theoretical continuous Newton search direction and the LHS of the differential equation, respectively. Either is a reasonable criterion for termination, whence we can generate formatted output such as the integer number of iterations and the magnitude of the residuals in scientific notation, as well as a graphic of the approximate solution with appended boundary values y₀ and y_n plotted against [a; v; b], which equals x here (see Fig. 3.1).


13 max_num_iterations = 10;
14 tolerance = 10^-10;
15
16 for i = 1:max_num_iterations
17   chi = J(y) \ G(y);
18   y = y - chi;
19   err1 = norm(chi) * sqrt(dx);
20   err2 = norm(G(y)) * sqrt(dx);
21   if err1 < tolerance
22     break;
23   end
24 end
25
26 fprintf('%d its,resid_1=%g,resid_2=%g\n',i,err1,err2);
27
28 plot([a; v; b], [0; y; 0]);

Listing 3.3 Third part of nonlinear solver for the BVP y″ + f(y) = 0, y(0) = 0 = y(1). Inside a loop the system J_G(y_n)χ = G(y_n) is solved for the search direction χ and a Newton step is taken, until the chosen convergence criterion is met.


Fig. 3.1 Plot of the approximate solution vector y to the second-order semilinear elliptic BVP. The text output was: “5 iterations, resid_1 = 2.41759e-16, resid_2 = 2.26001e-12”

Example 3.1. Consider the second-order nonlinear BVP

y″ + y³ = 0, y(0) = 0 = y(1), x ∈ (0, 1).

Write down a 3 × 3 linear system whose solution χ is the first Newton step, given an initial guess defined by y(x) = 16x(1 − x) evaluated at the point grid {1/4, 1/2, 3/4}.

Solution:

G(y) = 16 · [−2 1 0; 1 −2 1; 0 1 −2] (3, 4, 3)ᵀ + (3³, 4³, 3³)ᵀ,

and

J(y) = 16 · [−2 1 0; 1 −2 1; 0 1 −2] + [3·3² 0 0; 0 3·4² 0; 0 0 3·3²].

The corresponding augmented matrix for the Newton search direction χ is

[ −5  16   0 | −5 ]
[ 16  16  16 | 32 ]
[  0  16  −5 | −5 ].

The solution χ = (1, 0, 1) gives y₁ = (2, 4, 2), which approximates the actual solution evaluated at {1/4, 1/2, 3/4}. Iterating to convergence gives y = (2.1291, 3.6550, 2.1291), as compared to the n = 100 approximation evaluated at the same three grid points given by y = (2.3863, 3.7081, 2.3863), which is accurate to the indicated four decimal places.
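The hand computation above can be checked against the code of Listings 3.1–3.3; a sketch with n = 4 (variable names mirror those listings):

N = 3;  dx = 1/4;  v = (1:N)'/4;
f  = @(y) y.^3;  fp = @(y) 3*y.^2;
D2 = spdiags(kron([1 -2 1], ones(N,1)), [-1 0 1], N, N) / dx^2;
y  = 16 * v .* (1 - v);              % initial guess (3, 4, 3)'
G  = D2*y + f(y);                    % returns (-5, 32, -5)'
J  = D2 + spdiags(fp(y), 0, N, N);
chi = J \ G                          % returns (1, 0, 1)'
y1  = y - chi                        % returns (2, 4, 2)'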

Exercises

3.1. Consider the BVP in Example 3.1. Write down the 3 × 3 point-grid linear system in augmented matrix form for the Newton search direction χ, given the initial guess y₀ = (5, 0, −5). Solve the system and take a Newton step.

3.2. Consider the BVP y″ + y³ = 0 with 0-D BC on [0, 1]. Write down the 3 × 3 point-grid linear system for the Newton search direction in augmented matrix form, using the initial guess determined by y₀(xᵢ) = 4xᵢ(1 − xᵢ). Solve the system and take a Newton step.

3.3. Convert y″ = −y³, y(0) = 0, y′(0) = c into a system of two first-order equations, as was done for y″ + y = 0 in Listing 2.3. Approximate the solution to this system over the interval [0, 3] with n = 3 steps of Euler's method for c = 3/2 and c = 2. What can you say about a solution to the BVP y(0) = 0 = y(3)?

3.4. (*) Implement Newton's method from Listings 3.1, 3.2, and 3.3 to solve Eq. 3.1 with f(y) = sy + y³ for some real parameter s. Try initial guesses of the form y = c sin(kπv) for various lower frequencies k ∈ N and amplitudes c. For "superlinear" nonlinearities of the type considered in the example, if the parameter s is chosen just to the left of an eigenvalue k²π² of −y″ = λy with y(0) = 0 = y(1), then small amplitudes c should work.

3.5. (***) Put an s-loop around Newton's solver, using a good initial guess from Exercise 3.4. Decrement or increment s, saving ordered pairs [s, max(abs(y))] in a list. Plot the list, in effect plotting one part of one branch of the bifurcation diagram. Create a diagram depicting the primary branches bifurcating from the first few bifurcation points. Note that these bifurcation points are the eigenvalues of −Δ with 0-D



BC, i.e., at s = k²π² along the trivial branch. The trivial branch can itself be easily followed.

3.6. (*) Add a 'Morse index' (MI) calculation to annotate (by text, line type, or line color) the bifurcation diagram from Problem 3.5. The line MI=sum(eigs(-J(y), 10, 'sm')< 0) will count negative eigenvalues of the negative Jacobian of G. The MI roughly speaking classifies solutions according to the type of 'critical point' they are, according to the curvature of the vector field. See Sect. 5.1 for a little more on this topic. Show by hand that along the trivial branch (s ∈ R and u = 0) the eigenvectors of J(y) are eigenvectors of D2, with eigenvalues shifted by s. In particular, show that there is a zero eigenvalue of J(y) at the observed bifurcation points. What are the corresponding zero eigenvectors?

3.7. (**) Repeat Problem 3.5 but for the sublinear nonlinearity defined by f(u) = s tanh(u). Branches bifurcate to the right from the same bifurcation points as for the superlinear problem.

3.8. (**) Use the shooting method from Listing 2.4 or homework Exercise 2.16 to investigate positive solutions to u″ + su + 4(x − .5)²u³ = 0 with zero Dirichlet BC. Vary the bifurcation parameter s between 0 and π².

3.9. (***) Consider the second-order nonlinear problem y″ = −y³ with 0-D BC.
1. Multiply both sides of the equation by y′, integrate with respect to the independent variable x, solve for y′ to get an equation with an integration constant c, and then solve this equation via separation of variables to get an integral equation for x as a function of y. Take the new integration constant to be 0, so that x(0) = 0 and the boundary condition is satisfied at one end point.
2. Using symbolic software or tables of integrals, solve this integral equation via elliptic integrals of the first kind.
3. Let k ∈ N. Use c = y_max⁴/2 and x(y_max) = 1/(2k) to solve for the integration constant c and the value y_max. A k-hump solution to the BVP is then obtained by flipping the piece defining x ∈ [0, 1/(2k)] about x = 1/(2k) to obtain a single smooth hump, and then reflecting, translating, and stitching together copies of the hump. Compare your results to that obtained in Problem 3.4.

3.10. (***) Let y_max = 2√2 F(π/2, −1), which is accomplished in MATLAB by using ymax = 2*sqrt(2)*ellipticF(pi/2,-1).
1. Use the shooting method in Listing 2.4 (or your similar secant method-based code from Exercise 2.16) to approximate a one-sign solution to the BVP y″ + y³ = 0 with y(0) = 0 = y(1), giving |y_n| < 10⁻¹⁴. Compute the error sup_shoot_err = abs(ymax - max(abs(y(1,:)))).
2. Use a single call to the RK code in Listing 2.2 with the initial values y(0) = 0 and y′(0) = y_max²/√2. Compare y′(0) to the d found by bisection or the secant method in Part 1 above. Compute the error sup_IVP_err = abs(ymax - max(abs(y(1,:)))).


5 BC = [c; sparse(n-3,1); d] / (dx)^2;
6 y = -D2 \ (f(v) + BC);

Listing 3.4 Solving a linear nonhomogeneous second-order BVP.

3. As in Exercise 3.4, use the code for implementing Newton’s method from Listings 3.1, 3.2, and 3.3 to solve the BVP. Compute the error sup_Newt_err = abs(ymax - max(abs(y))). For doubling n values starting with 10, observe the order and asymptotic error constants for the three errors. To derive the formulas for ymax and y  (0), work Exercise 3.9. For some sufficiently large n, use tic and toc to compare execution times. 3.11. (**) Using the five-point difference D2 matrix from Exercise 3.29 instead of the three-point difference matrix, implement the Newton’s algorithm to approximate a one-sign solution to the BVP y  + y 3 = 0 with 0-D BC (See Exercise 3.4 and Listings 3.1, 3.2, and 3.3). Referring to Exercise 3.10, compute sup_Newt_err. Observe the order of convergence.

3.2 Linear Ordinary Second-Order BVP

In this section, we solve

−y″ = f on [a, b], with y(a) = c and y(b) = d,

by setting up a single sparse linear system which is easily solved using MATLAB's linear solver '\'. Consider this a warm up for solving nonhomogeneous Laplace's equation with nonhomogeneous boundary data. To accomplish this, we will solve an N × N linear system of the form −D2 y = f(v) + {BC terms} for y ∈ R^{n−1}, approximating the solution y evaluated at the interior grid points. In a MATLAB program, define n, a, b, c, d, x, v, and a function f, and create the second difference matrix D2, all as in the MATLAB code in Sect. 3.1. If the homogeneous BC c = 0 = d is considered, the approximate solution is simply

y = -D2 \ f(v);

If a nonhomogeneous BC is considered, the RHS must be adjusted. More precisely, the constant values −c/(dx)² and −d/(dx)² are missing in the first and last equations, respectively, on the LHS, and must be moved to the RHS before solving the system. Given that the BVP is defined as discussed, Listing 3.4 will solve it. If known, one can then define the exact solution function g by g = @(x)... and compare that with the approximation via a Riemann sum or graphical comparison.


7 fprintf('L2 norm err = %g\n',norm(g(v)-y)*sqrt(dx));
8 x10 = x(1:10:end);
9 plot([a; v; b], [c; y; d], x10, g(x10), 'r*');

Listing 3.5 Graphically comparing approximation (solid line) with exact solution (several red *'s).

Three more lines provide a plot of the approximation, with several red *'s indicating exact values for comparison (see Listing 3.5 and Fig. 3.2).


Fig. 3.2 The curve represents the approximate solution to −y″ = x, y(0) = 0, y(1) = 1. The red *'s mark the exact solution at several evenly spaced points
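For reference, a minimal complete driver assembling the pieces of this section for the BVP of Fig. 3.2; the first two lines play the role of the setup lines assumed by Listings 3.4 and 3.5:

n = 100;  a = 0;  b = 1;  c = 0;  d = 1;
dx = (b-a)/n;  x = linspace(a, b, n+1)';  v = x(2:end-1);
f = @(x) x;  g = @(x) (7*x - x.^3)/6;       % RHS and exact solution
D2 = spdiags(kron([1 -2 1], ones(n-1,1)), [-1 0 1], n-1, n-1) / dx^2;
BC = [c; sparse(n-3,1); d] / dx^2;
y  = -D2 \ (f(v) + BC);
fprintf('L2 norm err = %g\n', norm(g(v)-y)*sqrt(dx));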

Example 3.2. Write down the 3 × 3 system corresponding to n = 4 for the BVP in Fig. 3.2.

Solution:

16 · [2 −1 0; −1 2 −1; 0 −1 2] (y₁, y₂, y₃)ᵀ = (1/4, 2/4, 3/4)ᵀ + 16 (0, 0, 1)ᵀ,

which translates to the augmented matrix

[ 128  −64    0 |  1 ]
[ −64  128  −64 |  2 ]
[   0  −64  128 | 67 ].

The solution (37, 72, 103)/128 compares to the exact solution (7x − x³)/6 evaluated for x ∈ {1/4, 1/2, 3/4}.
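A quick check of this 3 × 3 system with the linear solver (n = 4, assembled as above):

n = 4;  dx = 1/n;  v = (1:n-1)'/n;
D2 = spdiags(kron([1 -2 1], ones(n-1,1)), [-1 0 1], n-1, n-1) / dx^2;
BC = [0; 0; 1] / dx^2;              % c = 0, d = 1
y  = -D2 \ (v + BC);                % returns (37, 72, 103)'/128
[y, (7*v - v.^3)/6]                 % compare with the exact solution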



Exercises

3.12. As in Example 3.2, write down the 3 × 3 point-grid system for the BVP −y″ = x, y(0) = 0, and y(1) = 0.

3.13. Using n = 4 divisions, write down the 3 × 3 point-grid system whose solution approximates a solution to −y″ = f, y(a) = c, y(b) = d, where f(x) = 4x, a = 0, b = 1, c = 0, and d = 1. Compute the exact solution of the linear BVP and compare it to the solution of the 3 × 3 system.

3.14. Show that one step of Newton's method gives an approximate solution to −y″ = f, y(a) = c, y(b) = d, for arbitrary a, b, c, d, f, and number of divisions n of [a, b], which is the same as the solution to the linear system from this section.

3.15. (*) Use this section's techniques to solve −y″ = x, y(0) = 0, y(1) = 0.

3.16. (*) Use this section's techniques to solve −y″ = x, y(0) = 0, y(1) = 1 and reproduce Fig. 3.2. Plot an appropriate norm of the difference between approximate and actual solutions vs the number of divisions n of [0, 1].

3.17. (*) Use the Newton's program developed for nonlinear problems in Sect. 3.1 to solve the linear problem in Problem 3.15.

3.18. (*) Use the Newton's program developed for nonlinear problems in Sect. 3.1 to solve the linear problem in Problem 3.16.

3.19. (*) Suppose instead we enforce a Neumann BC at an endpoint, e.g., y′(0) = c. Then enforcing (y₁ − y₀)/(dx) = c gives y₀ = y₁ − c·(dx). The missing term −y₀ on the LHS has two effects: −D2(1,1) = (2 − 1)/(dx)² = 1/(dx)², to account for the y₁ part, and −c/(dx) must be added to the first element of the RHS. Solve y″ = 2, y′(0) = 1, y(1) = 1 exactly and approximately via the methods of this section.

3.3 Eigenvalues of −D2 and Fourier Series

In this and the next sections, we demonstrate how the built-in MATLAB function eigs easily solves the eigenvalue problem −y″ = λy on an interval [a, b] with 0-D, zero Neumann (0-N), mixed 0-D and 0-N, or periodic BC. We can find and sort the smallest k eigenvalues, compare with known eigenvalues, plot corresponding eigenfunction approximations, normalize, and perform eigenfunction expansions. See Sect. 3.8 and [8]. The eigs command is used to very efficiently compute some or all of the eigenvalues and eigenvectors of a matrix. Second difference matrices can approximate



well the second derivative operator on subspaces spanned by the lower frequency eigenvectors relative to the fineness of the grid, but since the matrices are finite dimensional, they cannot represent the action of the operator on arbitrarily high frequency eigenfunctions. Hence, for a large n × n matrix L, we commonly use the command [V,D]=eigs(L,k,'sm') to compute the k ≪ n smallest magnitude eigenvalues and corresponding eigenvectors. In Listing 3.6, we use logical masks such as (x<.5) and (x>=.5) to form a piecewise definition of f, numerically integrate to get the Fourier sine series coefficients, form the eigenfunction expansion, and compare that back to f. If one accumulates the approximate eigenvalues from Table 3.1 in a 4 × 6 matrix LAM, the MATLAB fragment in Listing 3.7 generates the numbers in Table 3.2, which demonstrates that the approximations are O(h²), as expected.

n = 10:     9.7887   38.1966   82.4429   138.1966   200.0000   261.8034
n = 100:    9.8688   39.4654   88.7607   157.7060   246.2332   354.2550
n = 1000:   9.8696   39.4783   88.8258   157.9116   246.7350   355.2952
n = 10000:  9.8696   39.4784   88.8264   157.9136   246.7401   355.3057
exact:      9.8696   39.4784   88.8264   157.9137   246.7401   355.3058

Table 3.1 First six eigenvalues of the three-point central difference matrix -D2 with 0-D BC, versus exact eigenvalues k²π² of −y″ = λy, y(0) = 0 = y(1). These O((dx)²) accurate approximations are the result of executing Listing 3.6 with the indicated values of n

1.9986   1.9943   1.9873   1.9774   1.9647   1.9493
2.0000   1.9999   1.9999   1.9998   1.9996   1.9995
1.9995   1.9999   2.0000   2.0000   2.0000   2.0000

Table 3.2 Values of log₁₀( (λᵢ^{(10^j)} − λᵢ) / (λᵢ^{(10^{j+1})} − λᵢ) ), for j ∈ {1, 2, 3}, i ∈ {1, . . . , 6}. This demonstrates O((dx)²) convergence

Figure 3.3 shows reconstituted functions approximating the function f : [0, 1] → R with zero BC defined by f(x) = x, x < 1/2, and f(x) = 1 − x, x ≥ 1/2.


4  k = 25;
5  [V, D] = eigs(-D2, k, 'sm');
6
7  % sort evals, evects, normalize evects
8  [lam, ndx] = sort(diag(D));
9  V = V(:, ndx);
10 V = (diag(sqrt(diag(V' * V * dx))) \ V')';
11
12 fprintf('The smallest 6 approx, exact evals:\n');
13 [lam(1:6)'; (1:6).^2 * (pi / (b-a))^2]
14
15 % Add 0-D BC to plot first 6 eigenvectors vs x
16 Vbc = [zeros(1,6); V(:,1:6); zeros(1,6)];
17 figure(1); plot([a;v;b], Vbc);
18
19 % Calc Fourier sine coeffs w/R-sum, compare
20 f = @(x) x.*(x<.5) + (1-x).*(x>=.5);
21 c = V' * f(v) * dx;
22 w = V*c;
23 x10 = x(1:n/10:end);
24 z10 = f(x10);
25 figure(2); plot([a; v; b], [0; w; 0], x10, z10, 'r*');

Listing 3.6 This code computes the eigenvalues and eigenvectors of the D2 matrix and then demonstrates Fourier series with numerical integration (line 21). Recall that since the code starts on line 4, the author is implying that the grid, parameters, and D2 matrix can all be defined in the first 3 lines (albeit with multiple commands per line, each separated by ';')

28 err = LAM - kron(ones(4,1), (1:6).^2*pi^2);
29 log(err(1:end-1,:) ./ err(2:end,:)) / log(10)

Listing 3.7 Code for observing the order of convergence for the smallest 6 eigenvalues of the −D2 matrix for 0-D BC on the unit interval. Here, LAM is a 4 × 6 matrix of the six approximate eigenvalues corresponding to n ∈ {10, 100, 1000, 10000}, and the kron term gives four copies of the exact eigenvalues in a matrix of the same shape.

Example 3.3. Compare the eigen-pairs of the BVP −y″ = λy, y(0) = 0 = y(1), x ∈ (0, 1) to the exact eigen-pairs of the matrix -D2. Check your answer against the three n = 4 eigen-pairs.



Fig. 3.3 The solid curves are the Fourier sine series approximations for the function f, even about 1/2, defined by f(x) = x, x < 1/2, f(x) = 1 − x, x ≥ 1/2. The curve on the left represents 6 nonzero terms among the first 11 modes, n = 100, while the curve on the right was generated using 500 modes, n = 1001. The '*' mark the exact function f at 11 regularly spaced grid points

Solution: The eigen-pairs of the BVP are given by p_k(x) = sin(kπx), λ_k = k²π², for k = 1, . . . , n − 1. We will show that v_k = (p_k(i/n))_{i=1,...,n−1} and λ̄_k = 2n²(1 − cos(kπ/n)) are the eigen-pairs of −D2. Pick k ∈ {1, . . . , n − 1} and set v = v_k, α = kπ/n. Consider the i-th equation of −D2 v = λv,

−n²(v_{i−1} − 2vᵢ + v_{i+1}) = λvᵢ.

By the sum-to-product formula, v_{i+1} + v_{i−1} = sin((i + 1)α) + sin((i − 1)α) = 2 cos(α) sin(iα) = 2 cos(α)vᵢ. Substituting, we can solve for λ independent of i to obtain λ̄_k = 2n²(1 − cos(α)). Furthermore, the eigenvalues of −D2 converge O((dx)²) to the eigenvalues of the differential operator:

λ̄_k = 2k²π² (1 − cos(α))/α² = k²π² ( 2(1 − (1 − α²/2 + α⁴/4! − ⋯))/α² ) = λ_k + O((dx)²).

If n = 4, the matrix

−D2 = −16 · [−2 1 0; 1 −2 1; 0 1 −2]

has the following eigenvalues and corresponding eigenvectors:

{16(2 − √2), 32, 16(2 + √2)}, and { (1/√2, 1, 1/√2)ᵀ, (1, 0, −1)ᵀ, (1/√2, −1, 1/√2)ᵀ }.
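A sketch checking the closed form λ̄_k = 2n²(1 − cos(kπ/n)) against eigs (0-D BC on the point grid; n = 10 matches the first row of Table 3.1):

n = 10;  N = n-1;  dx = 1/n;
D2 = spdiags(kron([1 -2 1], ones(N,1)), [-1 0 1], N, N) / dx^2;
lam = sort(eigs(-D2, 6, 'sm'));            % six smallest eigenvalues
lam_bar = 2*n^2*(1 - cos((1:6)'*pi/n));    % closed form from this example
[lam, lam_bar, (1:6)'.^2 * pi^2]           % agree, and approximate k^2 pi^2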



Exercise 3.20 asks to verify that enforcing the three-point central scheme to build D2 for 0-Neumann BC yields eigenvectors that agree with cosine (and constant) functions.

Example 3.4. Consider constructing a 0-Dirichlet BC second difference matrix using a five-point scheme from Table 2.2. If one uses the central scheme, the resulting eigenvalues are not only O((dx)⁴) accurate approximations, but as in Example 3.3, the eigenvectors are the corresponding eigenfunctions evaluated at the interior grid points. The first and last two equations involve the boundary condition. We can still apply the central scheme at the near-boundary points v₁, v₂, v_{n−2}, and v_{n−1}, using the point grid in this example. For the 2nd equation, simply use y₀ = 0, to get the row

1/(12(dx)²) (16 −30 16 −1 0 ⋯ 0) ∈ R^{n−1}.

This is the result of spdiags anyway, so nothing need be done. For the 1st equation, we use y₀ = 0 and the 'ghost' value y₋₁ = −y₁ to get the row

1/(12(dx)²) (−29 16 −1 0 ⋯ 0) ∈ R^{n−1}.

This is a reasonable assumption, particularly in light of the odd sine eigenfunctions that the eigenvectors agree with on the point grid. By the same reasoning, the (n − 2)nd equation needs no modification and the last equation is changed only by −30 → −29 in the last diagonal entry. We remark that a skewed five-point scheme can be used for the first and last equations to get O((dx)⁴) accurate eigenvalues, but that these eigenvectors do not exactly agree with eigenfunctions on the grid.

Exercise 3.22 asks to verify that enforcing the five-point central scheme to build D2 does yield eigenvectors that agree with sine functions, and to show that the eigenvalue approximation formula obtained in the process converges O((dx)⁴) to λ_k = k²π².
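A sketch of the construction described in Example 3.4 (point grid, 0-D BC; the corner fix −30 → −29 implements the ghost value y₋₁ = −y₁):

n = 100;  N = n-1;  dx = 1/n;
D  = kron([-1 16 -30 16 -1], ones(N,1));
D2 = spdiags(D, -2:2, N, N) / (12*dx^2);
D2(1,1) = -29/(12*dx^2);  D2(N,N) = -29/(12*dx^2);
lam = sort(eigs(-D2, 6, 'sm'));
[lam, (1:6)'.^2 * pi^2]        % O((dx)^4) accurate approximations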

Exercises

3.20. Modify the part of Example 3.3 which computes the exact eigen-pairs of the 3 × 3, n = 4 point-grid D2 matrix to instead enforce the 0-Neumann BC y′(0) = 0 = y′(1). One sets the first and last diagonal entries of D2 to −n² instead of −2n² (see Sect. 3.4).

3.21. It is a subtle point, but for 0-N BC, the eigenvalue approximation is O((dx)²) for the cell grid and O(dx) for the point grid.
1. Show that the i-th equation evaluated at cosine (or constant) functions evaluated on the grid gives the same eigenvalue approximation λ̄ = 2n²(1 − cos(kπ/n)) as do the sine functions.



2. For the cell grid, show that the 1st equation (with −n² in place of −2n²) gives the same approximation λ̄. The last equation also gives this result.
3. Apply Taylor series for cos(α) = cos(kπ/n) to conclude O((dx)²) convergence of eigenvalues.
4. For the point grid, use the fact that the point-grid matrix is a scalar multiple of the cell-grid matrix for one lower dimension to solve for the point-grid eigenvalues in terms of the cell-grid eigenvalues λ̄. Compute the order of convergence of the resulting point-grid eigenvalue formula.

3.22. Complete Example 3.4, that is, show that eigenfunctions evaluated on the point grid as in Example 3.3 are eigenvectors of the five-point central second difference matrix for 0-D BC. Observe in the process that the eigenvalues of the matrix are given by

λ̄_k = (π²k²/6) · (15 + cos(2α) − 16 cos(α))/α², α = kπ/n,

which converge O((dx)⁴) to the exact corresponding eigenvalues.

3.23. (*) Reproduce Tables 3.1 and 3.2.

3.24. (*) Work out by hand the exact Fourier sine series coefficients cᵢ for the piecewise defined function in Fig. 3.3. Use and observe the even symmetry (about x = 1/2) in your calculations and computations. Add a few lines to your code to compute a vector

u = Σ_{i=1}^{m} cᵢ sin(iπv),

and compare the result to the approximation w and exact values z that can be computed by using the above code fragments. Compare coefficient vectors c and a, taking into account the normalization factor. Plot the series over multiple periods to see the effect of oddness about x = 0.

3.25. (*) Repeat all of Exercise 3.24 for the Fourier sine series using the even symmetry of the function f : [0, 1] → R defined by f ≡ 1. Note the effect of the discontinuities at the end points due to the sine series' inherent satisfaction of a 0-D BC. Note also that f = @(x) ones(size(x)) is a way to code f so that it returns a constant vector of 1's of the appropriate size.

3.26. (*) Repeat all of Exercise 3.24 for the "doubly odd" function f : [0, 1] → R defined piecewise by f(x) = 1, x < 1/2, f(x) = −1, x ≥ 1/2. Obtain a satisfactory formula for the sine series nonzero coefficients.

3.27. (*) Repeat all of Exercise 3.24 for the odd function f defined by f(x) = 1 − 2x.

3.28. (*) Repeat all of Exercise 3.24 for the function f (with no symmetry) defined by f(x) = x.

and compare the result to the approximation w and exact values z that can be computed by using the above code fragments. Compare coefficient vectors c and a, taking into account the normalization factor. Plot the series over multiple periods to see the effect of oddness about x = 0. 3.25. (*) Repeat all of Exercise 3.24 for the Fourier sine series using the even symmetry of the function f : [0, 1] → R defined by f ≡ 1. Note the effect of the discontinuities at the end points due to the sine series’ inherent satisfaction of a 0-D BC. Note also that f = @(x) ones(size(x)) is a way to code f so that it returns a constant vector of 1’s of the appropriate size. 3.26. (*) Repeat all of Exercise 3.24 for the “doubly odd” function f : [0, 1] → R defined piecewise by f (x) = 1, x < 21 , f (x) = −1, x ≥ 21 . Obtain a satisfactory formula for the sine series nonzero coefficients. 3.27. (*) Repeat all of Exercise 3.24 for the odd function f defined f (x) = 1 − 2x. 3.28. (*) Repeat all of Exercise 3.24 for the function f with (no symmetry) defined f (x) = x.



3.29. (*) Perform the experiment in Example 3.4. That is, from the central second difference template in Table 2.2, create a 5-diagonal D2 matrix satisfying 0-D BC and observe the order of accuracy of approximating eigenvalues with eigs. Start with the matrix generated by spdiags according to the template. The middle rows corresponding to grid points with two interior neighbors to each side give good approximations to the second derivative. The second and next to last rows are also correct, with the two truncated coefficients corresponding to y₀ = 0 and y_n = 0, respectively. The only change will be to the first and last diagonal entries: assume that there are 'ghost point' values y₋₁ = −y₁ and y_{n+1} = −y_{n−1}.

3.4 Enforcing Zero Dirichlet, Zero Neumann, and Periodic Boundary Conditions Using Either Point or Cell Grid

We consider here how to enforce 0-D, zero Neumann (0-N), and periodic boundary conditions when building the D2 matrix, using both the point-grid (N = n − 1, the interior grid points used up until now) and the cell-grid (N = n, midpoints) method that uses ghost points to enforce BC. Figure 3.4 gives a graphical representation of each of the possibilities. Consider the eigenvalue problems −y″ = λy on (a, b) with one of the three boundary conditions. We have already seen how to use the point grid to enforce 0-D BC; ignore the missing 1's in the first and last equation. When considering 0-N BC at an end point, say x₀ = a, rather than ignore y₀ = 0, one assumes that y₀ = y₁, so that the two-point difference at the boundary will be zero, and as a consequence y₀ − 2y₁ + y₂ = −y₁ + y₂. Let D̃2 be the matrix with integer entries defined by D2 = (1/(dx)²) D̃2. Thus, to impose the condition at either end point, after defining D̃2 as in the 0-D BC case, set

0-N BC: D̃2(1, 1) = −1 or D̃2(N, N) = −1,

as appropriate. To enforce periodic boundary conditions, one sees that since the two endpoints are viewed as the same point, the matrix D2 should be of dimension N × N with N = n. Since

y₋₁ − 2y₀ + y₁ = −2y₀ + y₁ + y_{n−1} and y_{n−2} − 2y_{n−1} + y_n = y₁ + y_{n−2} − 2y_{n−1},

to impose the condition at the common end point, after defining D̃2 as in the 0-D BC case but with one greater column and row dimension, set

periodic BC: D̃2(1, N) = 1 and D̃2(N, 1) = 1.



Fig. 3.4 The left two graphics represent enforcing the BC on the point grid, while the right two graphics enforce the BC on the cell grid and require a ghost point. The top two graphics enforce y(0) = 0, while the bottom two enforce y′(0) = 0. The blue curve represents a theoretical solution, while the dashed lines connect approximate solution values on the chosen grid

The cell-grid choice leads to slightly different matrices with essentially the same properties. It is at times a more convenient way to enforce BC, and generally slightly more accurate. For the cell grid, instead of using the interior grid points, we use the midpoints v ∈ Rⁿ:

N = n; v = x(1 : end-1) + dx/2;

To impose the BC, we first note that none of the points of v lie on the boundary, so we imagine a ghost point equidistant on the other side of the boundary with the appropriate value. To impose 0-D BC at x₀, the ghost value satisfies y_g = −y₁, and similarly y_g = −y_N at the other end. Then



y_g − 2y₁ + y₂ = −3y₁ + y₂, or y_{N−1} − 2y_N + y_g = y_{N−1} − 3y_N,

hence

0-D BC: D̃2(1, 1) = −3 or D̃2(N, N) = −3,

as appropriate. To impose 0-N BC at x₀, the ghost value satisfies y_g = y₁, and similarly y_g = y_N at the other end. Then

y_g − 2y₁ + y₂ = −y₁ + y₂, or y_{N−1} − 2y_N + y_g = y_{N−1} − y_N,

hence

0-N BC: D̃2(1, 1) = −1 or D̃2(N, N) = −1,

as appropriate. To impose periodic BC at x₀ = x_n, use that y₁ and y_n are neighbors. Then the n × n matrix is exactly the same as that for the point grid; we just interpret the y-values as being at the midpoints. Figure 3.5 contains four D̃2 matrices in R^{N×N}, enforcing mixed Neumann–Dirichlet and periodic BC, via point and cell grid.

periodic, point grid or cell grid (N = n):

⎡ −2   1             1 ⎤
⎢  1  −2   1           ⎥
⎢      ⋱   ⋱   ⋱      ⎥
⎢          1  −2   1   ⎥
⎣  1           1  −2   ⎦

mixed N-D, point grid (N = n − 1):

⎡ −1   1               ⎤
⎢  1  −2   1           ⎥
⎢      ⋱   ⋱   ⋱      ⎥
⎢          1  −2   1   ⎥
⎣              1  −2   ⎦

mixed N-D, cell grid (N = n):

⎡ −1   1               ⎤
⎢  1  −2   1           ⎥
⎢      ⋱   ⋱   ⋱      ⎥
⎢          1  −2   1   ⎥
⎣              1  −3   ⎦

Fig. 3.5 These four D̃2 matrices in R^{N×N} enforce mixed Neumann–Dirichlet and periodic BC, via point and cell grid (the periodic matrix is identical on the two grids). Recall that the (dx)² term is omitted in D̃2
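A sketch building the cell-grid D̃2 matrices of this section from the interior template, with the corner entries following the ghost-point rules above:

n = 10;  N = n;  dx = 1/n;
D2t = spdiags(kron([1 -2 1], ones(N,1)), [-1 0 1], N, N);   % template
D2_dir = D2t;  D2_dir(1,1) = -3;  D2_dir(N,N) = -3;   % 0-D BC
D2_neu = D2t;  D2_neu(1,1) = -1;  D2_neu(N,N) = -1;   % 0-N BC
D2_per = D2t;  D2_per(1,N) = 1;   D2_per(N,1) = 1;    % periodic BC
D2_dir = D2_dir / dx^2;        % divide each by dx^2 before use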



Exercises

3.30. (*) Using the point grid, reproduce Table 3.1 for 0-N and periodic BC.

3.31. (*) Using the cell grid, reproduce Table 3.1 for 0-D, 0-N, and periodic BC.

3.32. (*) Repeat all of Exercise 3.24 with a Fourier cosine series to enforce 0-N BC. Be sure to use and observe symmetry, and plot over several periods.

3.33. (*) Repeat all of Exercise 3.28 of Sect. 3.3 with a full Fourier sine and cosine series to enforce periodic BC. Be sure to use and observe symmetry, and plot over several periods.

3.34. (**) Using the methods from Sect. 3.2, solve a BVP of mixed type −y″ = f, y(0) = 0, and y′(1) = 0.

3.35. (*) Solve a nonhomogeneous mixed type BVP of the form −y″ = f, y(a) = c, and y′(b) = d, using the cell grid and the techniques from Sect. 3.2.

3.36. (***) For the semilinear BVP y″ + sy + y³ = 0, y′(0) = 0 = y′(1), follow the first primary branch of constant solutions bifurcating from λ₁ = 0. Use the fact that y″ = 0 for constant solutions y ≡ c = c(s) to obtain and solve an algebraic equation for s as a function of c. Plug this result into the Jacobian to find the exact values of s at which the Jacobian is noninvertible. There are in fact secondary branches bifurcating at these points (see also Exercise 5.5).

3.5 First-Order Linear Solvers

In this section, we use the first difference matrix D1 on a point grid to solve ordinary first-order IVP in the form y' + py = g, y(a) = c, x ∈ [a, b], where the functions p and g are assumed to be continuous on [a, b] [2, 3]. The well-known two-point first difference approximation $y'(x_i) \approx (y_{i+1} - y_i)/dx$ can be performed via matrix multiplication with a first difference matrix. If one interprets the results of said multiplication to be evaluated at $x_i$ (forward difference) or $x_{i+1}$ (backward difference), then the difference scheme is O(h). If one interprets the result to be evaluated at the midpoints $(x_i + x_{i+1})/2$, then the scheme is O(h²).
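A quick experiment (a sketch, with our own test function and grid) confirms these orders of accuracy for the same difference quotient under the two interpretations:

n = 100;  x = linspace(0,1,n+1)';  dx = 1/n;
y = sin(x);
d = diff(y)/dx;                                % (y_{i+1} - y_i)/dx
errFwd = norm(d - cos(x(1:end-1)), inf)        % O(dx)   when read at x_i
errMid = norm(d - cos(x(1:end-1)+dx/2), inf)   % O(dx^2) when read at midpoints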

Let

$$D_1 = \frac{1}{dx}\begin{pmatrix} 1 & & & \\ -1 & 1 & & \\ & \ddots & \ddots & \\ & & -1 & 1 \end{pmatrix} \in M^{n\times n}, \qquad (3.3)$$

which can be generated in MATLAB via

D1 = spdiags(kron([-1 1],ones(n,1)),-1:0,n,n)/dx;

After discretization, the approximate solution y to our linear differential equation is the solution to the n × n linear system $(D_1 + PI^*)y = z + IC$, where the n × n matrices P and I* and the n × 1 vectors z and IC are defined as in Table 3.3 to implement a forward, backward, or midpoint scheme. Of course spdiags should be used to create the sparse matrix PI*.

Example 3.5. Consider the first-order linear IVP y' + 6xy = 0, y(0) = 1, x ∈ (0, 1). Write down a 3 × 3 linear system whose solution provides an n = 3 midpoint approximation to the IVP. Compare approximate to actual values.

Solution:

$$\left(3\begin{pmatrix}1&&\\-1&1&\\&-1&1\end{pmatrix} + \frac12\begin{pmatrix}1&&\\3&3&\\&5&5\end{pmatrix}\right)\begin{pmatrix}y_1\\y_2\\y_3\end{pmatrix} = \begin{pmatrix}0\\0\\0\end{pmatrix} + \begin{pmatrix}3-\tfrac12\\0\\0\end{pmatrix},$$

which translates to the augmented matrix

$$\left(\begin{array}{ccc|c} 7 & 0 & 0 & 5\\ -3 & 9 & 0 & 0\\ 0 & -1 & 11 & 0\end{array}\right).$$

The solution (5/7, 5/21, 5/231) compares to the exact solution $e^{-3x^2}$ evaluated for $x \in \{\tfrac13, \tfrac23, 1\}$.
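The system is small enough to check directly; the following sketch (our own assembly of the matrices defined in Table 3.3) reproduces the computation:

dx = 1/3;  v = dx/2 + (0:2)'*dx;            % midpoints 1/6, 1/2, 5/6
D1 = spdiags(kron([-1,1],ones(3,1)),[-1,0],3,3)/dx;
P  = spdiags(6*v, 0, 3, 3);                 % p(x) = 6x at the midpoints
Is = spdiags(kron([1,1],ones(3,1)),[-1,0],3,3)/2;
IC = [1/dx - 6*v(1)/2; 0; 0];               % c = 1 moved to the RHS
y  = (D1 + P*Is) \ IC                       % approx (5/7, 5/21, 5/231)
err = y - exp(-3*[1/3;2/3;1].^2)            % exact e^{-3x^2} at the grid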

To implement any of the three schemes, first define n, N, a, b, c, x, v, and the functions p and g. The code in Listing 3.8 implements the O(h) forward scheme, which is exactly Euler's method as far as the approximation is concerned.

                forward                       backward                      midpoint
v               (x_1, ..., x_n)^T             (x_2, ..., x_{n+1})^T         (x_1, ..., x_n)^T + dx/2
z               (g(v_1), ..., g(v_n))^T       same                          same
P               diag(p(v_1), ..., p(v_n))     same                          same
I*              lower shift (ones on the      I_n (identity)                (1/2)(I_n + lower shift),
                subdiagonal, zero diagonal)                                 i.e. bidiagonal halves
1st equation    (y_1 - c)/dx + p(v_1)c        (y_1 - c)/dx + p(v_1)y_1      (y_1 - c)/dx + p(v_1)(y_1 + c)/2
                = g(v_1)                      = g(v_1)                      = g(v_1)
ic              c(1/dx - p(v_1))              c/dx                          c(1/dx - p(v_1)/2)
IC              (ic, 0, ..., 0)^T             same                          same

Table 3.3 Depiction of the component matrices and vectors for the n × n linear system approximating our first-order linear initial value problem. The first equation of the system involves the known initial value c. The vector IC is derived from moving occurrences of this known value to the RHS of the equation

As a small additional efficiency, we shift the diagonal of P to obtain the result of PI*. The code in Listing 3.9 implements the O(h) backward scheme. In this case, PI* = PI = P. The algorithm implemented in Listing 3.10 is an O(h²) midpoint-type scheme (which is not exactly the traditional midpoint method [3]).

Example 3.6. Show that the forward scheme is exactly Euler's method.

Solution: The i-th equation of $(D_1 + PI^*)y = z + IC$ is

$$\frac{y_{i+1} - y_i}{dx} + p(x_i)y_i = g(x_i).$$

D1 = 1/dx*spdiags(kron([-1,1],ones(n,1)),[-1,0],n,n);
P  = spdiags(p(v), -1, n, n);
IC = [c/dx - p(a)*c; sparse(n-1,1)];

y  = (D1 + P) \ (f(v) + IC);

Listing 3.8 The O(dx) accurate forward difference scheme.

P  = spdiags(p(v), 0, n, n);
IC = [c; sparse(n-1,1)] / dx;

y  = (D1 + P) \ (f(v) + IC);

Listing 3.9 The O(dx) accurate backward difference scheme.

P  = spdiags(p(v), 0, n, n);
Is = 1/2 * spdiags(kron([1,1],ones(n,1)),[-1,0],n,n);
IC = [c/dx - p(v(1))*c/2; sparse(n-1,1)];

y  = (D1 + P*Is) \ (f(v) + IC);

Listing 3.10 An O((dx)²) accurate midpoint difference scheme.

Solving for $y_{i+1}$ gives

$$y_{i+1} = y_i + (dx)\bigl(-p(x_i)y_i + g(x_i)\bigr) = y_i + (dx)f(x_i, y_i),$$

where f is the RHS function of the differential equation y' = −py + g. This is an Euler's step. The first equation of our linear system, including the initial condition term, is

$$\frac{y_1}{dx} = g(a) + \frac{c}{dx} - cp(a).$$

Solving for $y_1$ gives

$$y_1 = c + (dx)f(a, c),$$

which is the required first step of Euler's, starting with the initial value y(a) = c.
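The equivalence can also be observed numerically. The following sketch builds the forward-scheme system with an explicit lower-shift matrix I* and compares against an Euler loop; the test problem and variable names are our own:

a = 0; b = 1; c = 1; n = 200; dx = (b-a)/n;
x  = linspace(a,b,n+1)';  v = x(1:end-1);   % forward evaluation points
p  = @(x) 6*x;  g = @(x) cos(x);

D1 = spdiags(kron([-1,1],ones(n,1)),[-1,0],n,n)/dx;
P  = spdiags(p(v), 0, n, n);
Istar = spdiags(ones(n,1), -1, n, n);       % lower-shift identity
IC = [c/dx - p(a)*c; sparse(n-1,1)];
y  = (D1 + P*Istar) \ (g(v) + IC);          % forward-scheme solve

ye = zeros(n,1);
ye(1) = c + dx*(g(a) - p(a)*c);             % first Euler step
for i = 1:n-1                               % remaining Euler steps
    ye(i+1) = ye(i) + dx*(g(v(i+1)) - p(v(i+1))*ye(i));
end
norm(y - ye, inf)                           % agreement near machine precision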

Exercises

3.37. Consider the IVP in Example 3.5. Write down the 3 × 3 linear systems in augmented matrix form for the forward and backward schemes. Compute a step of each method and compare to the exact values and the midpoint results presented in the example.


3.38. Write down the 3 × 3 linear systems in augmented matrix form for the forward, backward, and midpoint schemes for the IVP y' + (1/x)y = 1, y(1) = 1, x ∈ (1, 2). Compare the three approximate solutions to the exact solution evaluated at the appropriate three points.

3.39. Show that the backward scheme is precisely the implicit modified Euler's method, and that the midpoint method is precisely the implicit midpoint modified Euler's method. See Example 3.6.

3.40. (*) Solve the 1st order IVP from Exercise 2.12 with the midpoint method. Compare.

3.41. (*) Solve the IVP y' + (1/t)y = eᵗ, y(1) = 1 exactly and compare to the results using the three schemes developed above.

3.42. (**) For the IVP in Exercise 3.41, compare the Euler's and traditional midpoint methods against the forward and backward schemes developed here.

3.43. (***) Use five-point skewed, central, and backward first differences to build D1 and compare the resulting implementation of the above midpoint scheme to Runge–Kutta. In particular, demonstrate that they are both O(h⁴) schemes. Hint: There are more missing terms involving the initial value that need to be moved to the RHS.

3.6 Systems of First-Order Linear Equations for Second-Order IVP

To solve the second-order IVP y'' + py' + qy = g, y(a) = c, y'(a) = d, x ∈ [a, b], convert the second-order problem to the first-order system [2]

$$\begin{pmatrix} y' - z \\ z' + pz + qy \end{pmatrix} = \begin{pmatrix} 0 \\ g \end{pmatrix}, \qquad \begin{pmatrix} y \\ z\end{pmatrix}(a) = \begin{pmatrix} c \\ d \end{pmatrix}.$$

We will approximate this system with

$$\begin{pmatrix} D_1 & -I^* \\ QI^* & D_1 + PI^* \end{pmatrix}\begin{pmatrix} y \\ z \end{pmatrix} = \begin{pmatrix} 0 \\ g \end{pmatrix} + \begin{pmatrix} IC_y \\ IC_z \end{pmatrix}.$$

To implement the O(h²) midpoint scheme in Listing 3.11, first define n, a, b, c, d, D1, and Is as in the previous section, and define the functions p, q, and g. Then form

A   = [D1, -Is; Q*Is, D1 + P*Is];
ICy = [c/dt + d/2; sparse(n-1,1)];
ICz = [d/dt - (c*q(v(1)) + d*p(v(1)))/2; sparse(n-1,1)];
r   = [ICy; g(v) + ICz];

w   = A \ r;

Listing 3.11 Solving a second order linear IVP as a system of first order linear IVP.

tic; w = A \ r; toc;

F = @(t,s) [s(2); -p(t)*s(2) - q(t)*s(1) + g(t)];
tic; [x, w2] = ode45(F, x, [c;d]); toc;

subplot(2,1,1); plot(w(1:n), w(n+1:2*n));
subplot(2,1,2); plot(w2(:,1), w2(:,2));

Listing 3.12 After solving a second order linear IVP as a system of first order linear IVP as in Listing 3.11, we solve the same system with the built-in solver ode45 and compare the timing (see Table 3.4).

the sparse-format diagonal matrices P and Q, populated with midpoint values of p and q, respectively, as in the previous section's midpoint case. The IC is a little tricky to understand: each occurrence of D1 and Is has a 'missing 1' corresponding to the 1st and (n+1)st equations, which involve the initial values y(0) = c and z(0) = y'(0) = d. For the midpoint scheme, these equations are

$$\frac{y_1 - c}{dt} - \frac{d + z_1}{2} = 0 \quad\text{and}\quad q(v_1)\frac{c + y_1}{2} + \frac{z_1 - d}{dt} + p(v_1)\frac{d + z_1}{2} = g(v_1).$$

In Listing 3.11, the scalar terms $c/dt + d/2$ and $d/dt - (cq(v_1) + dp(v_1))/2$ are added to the 1st and (n+1)st right hand sides. The approximation y is the first n elements of w, and can be adjoined with the IC c and plotted against x, or plotted against z, the last n elements of w, to generate a trajectory in the phase diagram. Since the computationally intensive portion of our implementation is all in '\', it is fair to compare the computation time of our method against the standard method of converting the second-order problem to a first-order linear system and using a generic ODE system solver which is also optimized and compiled. Listing 3.12 uses the built-in MATLAB function ode45 to solve the problem, and uses tic and toc to output the timing information, which is summarized in Table 3.4. The built-in solver returns y and z as the first and second columns of w2 ∈ M^{(n+1)×2}.


n         '\' secs     ode45 secs   (ode45 secs) / ('\' secs)
1000      0.004440     0.082446     18.5689
10000     0.024985     0.166036      6.6454
100000    0.687909     2.026475      2.9458

Table 3.4 Ratios of computing times for '\' and 'ode45'. These numbers were computed using Listings 3.11 and 3.12.

Example 3.7. Consider the second-order linear IVP y'' + y = 1, y(0) = 1, y'(0) = 1, x ∈ (0, 1). Write down a 6 × 6 linear system whose solution provides an n = 3 forward approximation to the IVP. Compare approximate to actual values.

Solution: Since the coefficient functions are defined by p ≡ 0, q ≡ 1, and g ≡ 1, we have

$$\begin{pmatrix} 3 & & & & & \\ -3 & 3 & & -1 & & \\ & -3 & 3 & & -1 & \\ & & & 3 & & \\ 1 & & & -3 & 3 & \\ & 1 & & & -3 & 3 \end{pmatrix}\begin{pmatrix}y_1\\y_2\\y_3\\z_1\\z_2\\z_3\end{pmatrix} = \begin{pmatrix}0\\0\\0\\1\\1\\1\end{pmatrix} + \begin{pmatrix}3\cdot 1 + 1\cdot 1\\0\\0\\-1\cdot 1 + 3\cdot 1\\0\\0\end{pmatrix}.$$

The corresponding augmented matrix for the system is

$$\left(\begin{array}{cccccc|c} 3&0&0&0&0&0&4\\ -3&3&0&-1&0&0&0\\ 0&-3&3&0&-1&0&0\\ 0&0&0&3&0&0&3\\ 1&0&0&-3&3&0&1\\ 0&1&0&0&-3&3&1 \end{array}\right).$$

The solution $\left(\binom{4/3}{1}, \binom{5/3}{8/9}, \binom{53/27}{2/3}\right)$ compares to the exact solution (y, z)(t) = (1 + sin t, cos t) evaluated for $t \in \{\tfrac13, \tfrac23, 1\}$.
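The 6 × 6 system is easy to check numerically; the following sketch (our own assembly, following the block structure above with the forward-scheme lower-shift I*) reproduces the solution:

n = 3; dt = 1/3; c = 1; d = 1;
D1 = spdiags(kron([-1,1],ones(n,1)),[-1,0],n,n)/dt;
Istar = spdiags(ones(n,1),-1,n,n);          % forward scheme I*
Q  = speye(n);  P = sparse(n,n);            % q = 1, p = 0
A  = [D1, -Istar; Q*Istar, D1 + P*Istar];
r  = [c/dt + d; zeros(n-1,1); 1 + d/dt - c; ones(n-1,1)];
w  = A \ r                                  % (4/3, 5/3, 53/27, 1, 8/9, 2/3)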

Exercises

3.44. Consider the IVP in Example 3.7. Write down the 6 × 6 linear systems in augmented matrix form for the backward and midpoint schemes. Solve and compare results to the forward result from the example and to the exact values.


3.45. Show that the forward scheme is precisely Euler's method, the backward scheme is precisely the implicit modified Euler's method, and the midpoint method is precisely the implicit midpoint modified Euler's method. You can do this by starting with the i-th and (n+i)-th equations, and comparing them to the appropriate Euler's formula with RHS function defined by f(t, y, z) = (z, g − qy − pz).

3.46. (*) Using the methods for first-order systems from this section, solve the system from the R–K example in Sect. 2 (see Fig. 2.1). Solve y'' + y' + 9y = 3cos(t), y(0) = 1, y'(0) = 0 for x ∈ [0, 30] using the above '\' midpoint method solver. Plot the resulting trajectory in the phase plane. Compute the exact solution by any means, and add a line or two of code to plot a thick red curve corresponding to the exact steady state solution on top of the same graphic.

3.47. (**) Use Riemann sums to compute a norm of the difference between the approximate solutions obtained by the above '\' and ode45 solvers, respectively, and the exact solution evaluated on the grid (see for example Exercise 2.14). Observe the order of convergence of the two methods as was done in Table 3.2.

3.48. (***) Use five-point skewed, central, and backward first differences to build D1 and compare the resulting implementation of the above midpoint scheme to Runge–Kutta. In particular, demonstrate that they are both O(h⁴) schemes.

3.7 First-Order Nonlinear IVP

To code an O(h²) solver for first-order nonlinear problems of the type y' = f(t, y), y(a) = c, x ∈ [a, b] [2], first define n, a, b, c, x, midpoints v, D1, and I* as in the previous section. When defining the function f : [a, b] × R → R and its y-partial derivative f_y, one must ensure that the functions thread over vectors. The IC is again tricky to enforce: for each occurrence of D1 and I*, a missing initial term is added back to the first equation, but on the LHS, as the RHS is desired to be 0. Listing 3.13 declares the initial condition vectors for modifying the first equation, and uses a Taylor series to initialize a good initial guess. Listing 3.14 implements one step of Newton's method and can be placed in a loop as in Sect. 3.1. The idea can be extended to systems (see Fig. 3.6).

ic1 = [c/2; sparse(n-1,1)];
ic2 = [c/dt; sparse(n-1,1)];
y   = c + f(a,c)*(x(2:end)-a) + fy(a,c)*(x(2:end)-a).^2/2;

Listing 3.13 These 3 lines declare the initial condition vectors for modifying the first equation of the nonlinear system. Taylor series is used to initialize a good initial guess for Newton's method.

G = D1*y - ic2 - f(v, Im*y + ic1);
J = D1 - spdiags(fy(v, Im*y + ic1), 0, n, n) * Im;
X = J \ G;
y = y - X;

Listing 3.14 These 4 lines are the key lines placed inside a loop to implement Newton's method to solve a nonlinear first order IVP.

$$G(y, z) = \begin{pmatrix} D_1 y + IC_1 - I^* z + IC_2 \\ f(I^* y + IC_3) - D_1 z + IC_4 \end{pmatrix}, \qquad J(y, z) = \begin{pmatrix} D_1 & -I^* \\ f'(I^* y + IC_3)\,I^* & -D_1 \end{pmatrix}$$

Fig. 3.6 An object function and corresponding Jacobian for solving the nonlinear first-order system y' = z, z' = f(y), y(a) = c, z(a) = d coming from the second-order IVP y'' = f(y), y(a) = c, y'(a) = d. The terms IC₁ through IC₄ correspond to the missing initial value terms in the first rows of the matrices D₁ and possibly I*, depending on the forward, backward, or midpoint choice of implementation, as in Sect. 3.5. Similar equations can be computed to solve order-k systems of the form yᵢ' = fᵢ(t, y₁, ..., y_k), yᵢ(a) = cᵢ, i = 1, ..., k

Example 3.8. Consider the first-order nonlinear IVP y' = y², y(0) = 1, x ∈ (0, 1). Write down a 3 × 3 linear system whose solution χ is the first midpoint Newton's step, given an initial guess y₀ defined by y(x) = 3x evaluated at the point grid {1/3, 2/3, 1}. Calculate the exact solution and discuss the expected result of applying Newton's method for this nonlinearity in this domain.

Solution: The exact solution is given by y(x) = 1/(1 − x) for x < 1. Since there is a vertical asymptote at x = 1, one expects difficulty accurately approximating a solution on this interval. The object function and Jacobian are computed by



$$G(y_0) = 3\begin{pmatrix}1&&\\-1&1&\\&-1&1\end{pmatrix}\begin{pmatrix}1\\2\\3\end{pmatrix} - \begin{pmatrix}3\\0\\0\end{pmatrix} - \left(\frac12\begin{pmatrix}1&&\\1&1&\\&1&1\end{pmatrix}\begin{pmatrix}1\\2\\3\end{pmatrix} + \begin{pmatrix}\tfrac12\\0\\0\end{pmatrix}\right)^{\!2} = \begin{pmatrix}-1\\ \tfrac34\\ -\tfrac{13}{4}\end{pmatrix}$$

and

$$J(y_0) = \begin{pmatrix}3&&\\-3&3&\\&-3&3\end{pmatrix} - \mathrm{diag}\!\left(2\left(\frac12\begin{pmatrix}1&&\\1&1&\\&1&1\end{pmatrix}\begin{pmatrix}1\\2\\3\end{pmatrix} + \begin{pmatrix}\tfrac12\\0\\0\end{pmatrix}\right)\right)\frac12\begin{pmatrix}1&&\\1&1&\\&1&1\end{pmatrix} = \begin{pmatrix}2&&\\-\tfrac92&\tfrac32&\\&-\tfrac{11}{2}&\tfrac12\end{pmatrix}.$$

The corresponding augmented matrix for the Newton's search direction χ is

$$\left(\begin{array}{ccc|c} 2 & 0 & 0 & -1\\ -\tfrac92 & \tfrac32 & 0 & \tfrac34\\ 0 & -\tfrac{11}{2} & \tfrac12 & -\tfrac{13}{4} \end{array}\right).$$

The solution χ = (−1/2, −1, −35/2) gives y₁ = y₀ − χ = (3/2, 3, 41/2). A subdomain (0, b) with b < 1 is required to find a reasonable approximate solution, with an increasing number of divisions needed as b nears 1 from the left.

Exercises

3.49. Consider the IVP in Example 3.8. Write down the 3 × 3 linear system in augmented matrix form for the forward Newton's search direction χ, given initial guess y₀ = (1, 1, 1). Solve the system and take a Newton's step.

3.50. (*) Use Newton's method to solve the nonlinear 1st order IVP in Exercise 2.13 of Chap. 2. Compare the exact solution to the approximate solution using graphical and Riemann sum norms of differences.

3.51. (*) Use Newton's method to solve the nonlinear initial value problem y' = y³/t², y(1) = 1/2 over the interval [1, 2]. Compare the exact solution graphically and obtain the result in Fig. 3.7.

3.52. (*) Use Newton's method to solve the nonlinear initial value problem y' = y³, y(0) = 1 over the interval [0, b], for b = 1/4, b = 1/2, and b = 1. Compare to the exact solution.

3.53. (**) Compare all of the above exercises' results with the results from using Euler's and ode45 or R–K.

3.54. (***) Implement Newton's method for solving a system of two nonlinear first-order initial value problems. Apply the program to compute and plot enough phase plane trajectories of the system y' = z, z' = y − y³ to visualize the global and local behavior around and away from the three equilibria. See Fig. 3.6.

3.8 A Practical Guide to Fourier Series

From our point of view and for use in the Chap. 4 PDE material, the Fourier series can be relatively simply explained in terms of projections onto the eigenfunctions of

Fig. 3.7 A graphical comparison between using Newton's method to solve the IVP in Exercise 3.51 (blue curve) and the exact solution (red *'s)

the negative second derivative operator with boundary conditions over an interval. Fourier sine series correspond to 0-D BC, Fourier cosine series correspond to 0-N BC, and full Fourier series correspond to periodic BC. Mixed 0-D/0-N conditions are also possible. From constant coefficient second-order linear homogeneous theory (see Sect. 2.6.1), solutions to the 0-D BVP

$$y'' + \lambda y = 0, \quad y(a) = 0 = y(b) \qquad (3.4)$$

are given by $y = c\sin(\sqrt{\lambda}(x - a))$. The condition $y(b) = 0$ gives $\sqrt{\lambda}(b - a) = k\pi$. Thus the eigenvalues and corresponding eigenfunctions are

$$\lambda_k = \left(\frac{k\pi}{b-a}\right)^{\!2}, \quad \psi_k(x) = \sin\!\left(\frac{k\pi}{b-a}(x - a)\right), \quad k = 1, 2, 3, \dots.$$

If instead the 0-N BC y'(a) = 0 = y'(b) are enforced, sine is replaced with cosine and the index k begins at 0, to include the constant function ψ₀ ≡ 1 corresponding to eigenvalue 0. Periodic BC y(a) = y(b), y'(a) = y'(b) give both sines and cosines and the constant function, but with integer multiples of full periods. Mixed BC such as y(a) = 0 = y'(b) will result in sines or cosines, but with an odd number of quarter periods. For convenience, Fig. 3.8 contains the eigenvalues and corresponding eigenfunctions for these cases over the interval [0, 1]. Theory gives that one obtains a complete set of eigenfunctions, that is, given any square integrable function f over the interval, there exists a sequence of Fourier coefficients $a_k$ so that

$$g_n(x) = \sum_{k \le n} a_k \psi_k(x)$$

defines a sequence of functions that converges in the sense that

$$\|g_n - f\|_2^2 = \int (g_n(x) - f(x))^2\,dx \to 0 \quad\text{as } n \to \infty.$$


           BC                              eigenvalue λ_k       eigenfunction ψ_k
0-D        y(0) = 0 = y(1)                 (kπ)²                sin(kπx),        k = 1, 2, 3, ...
0-N        y'(0) = 0 = y'(1)               (kπ)²                cos(kπx),        k = 0, 1, 2, ...
periodic   y(0) = y(1), y'(0) = y'(1)      (2kπ)²               cos(2kπx),       k = 0, 1, 2, ...
                                                                sin(2kπx),       k = 1, 2, 3, ...
mixed 1    y(0) = 0 = y'(1)                ((2k−1)π/2)²         sin((2k−1)πx/2), k = 1, 2, 3, ...
mixed 2    y'(0) = 0 = y(1)                ((2k−1)π/2)²         cos((2k−1)πx/2), k = 1, 2, 3, ...

Fig. 3.8 Eigenvalues and eigenfunctions for −y'' = λy over [0, 1], various boundary conditions

This is convergence in the Hilbert space L² of square integrable functions, using the norm $\|f\|_2 := \left(\int f(x)^2\,dx\right)^{1/2}$. Further, if f is continuous at x₀, we have pointwise convergence $g_n(x_0) \to f(x_0)$. Where $\lim_{x\to x_0^-} f(x) \ne \lim_{x\to x_0^+} f(x)$, we have $g_n(x_0) \to \bigl(\lim_{x\to x_0^-} f(x) + \lim_{x\to x_0^+} f(x)\bigr)/2$.

The eigenfunctions of the negative second derivative operator with any of the above boundary conditions can be chosen to be orthogonal, i.e., the inner product

$$\langle \psi_j, \psi_k \rangle = \int \psi_j(x)\psi_k(x)\,dx$$

equals zero unless j = k. The orthogonality of the eigenfunctions of the Laplacian with homogeneous BC follows from theory, but on the interval it can be verified directly by using trigonometric identities to compute the preceding integrals. For more details, see [8] or any standard introduction to PDE. Let f be square integrable over an interval, so that for some sequence of coefficients $(a_k)$ the Fourier series $\sum a_k\psi_k$ converges to f in the above sense. Then

$$\langle f, \psi_j \rangle = \left\langle \sum_k a_k \psi_k, \psi_j \right\rangle = \sum_k a_k \langle \psi_k, \psi_j \rangle = a_j \langle \psi_j, \psi_j \rangle,$$

so that one has

$$a_j = \frac{\langle f, \psi_j \rangle}{\langle \psi_j, \psi_j \rangle}. \qquad (3.5)$$

Note that this agrees with the orthogonal projection $P_{\psi_k} f = \frac{\langle f, \psi_k\rangle}{\langle \psi_k, \psi_k\rangle}\psi_k = a_k \psi_k$.


Over [0, 1], the Fourier sine series coefficients are computed by

$$a_k = 2\int_0^1 f(x)\sin(k\pi x)\,dx, \qquad \hat f(x) = \sum_{k=1}^{\infty} a_k \sin(k\pi x).$$

The Fourier cosine series coefficients for the same interval are computed by

$$a_0 = \int_0^1 f(x)\,dx \quad\text{and}\quad a_k = 2\int_0^1 f(x)\cos(k\pi x)\,dx, \qquad \hat f(x) = \sum_{k=0}^{\infty} a_k \cos(k\pi x).$$

The full Fourier series coefficients for [0, 1] are given by

$$a_0 = \int_0^1 f(x)\,dx, \quad a_k = 2\int_0^1 f(x)\cos(2k\pi x)\,dx, \quad b_k = 2\int_0^1 f(x)\sin(2k\pi x)\,dx,$$

with

$$\hat f(x) = \sum_{k=0}^{\infty} a_k\cos(2k\pi x) + \sum_{k=1}^{\infty} b_k \sin(2k\pi x).$$

Example 3.9. Compute the Fourier sine series for f ≡ 1 over (0, 1).

Solution: We compute $a_k = 2\int_0^1 \sin(k\pi x)\,dx = -2\frac{\cos(k\pi x)}{k\pi}\Big|_0^1 = \frac{4}{k\pi}$ when k is odd. By the computation (or using symmetry), $a_k = 0$ when k is even. Thus, the Fourier sine series for f ≡ 1 is $\sum_{k=1}^{\infty} \frac{4}{(2k-1)\pi}\sin((2k-1)\pi x)$. The series converges to the "square wave function", defined on the whole real line, periodic with period 2, and odd about 0.

Chapter 4

Partial Differential Equations

Summary We first build a second difference block matrix corresponding to the Laplacian on the square. We use this Laplacian matrix with various enforced boundary conditions to extend the ideas developed in Chap. 3 to partial differential equations. In particular, for the square domain we investigate eigenvalues of the Laplacian, solutions to semilinear elliptic boundary value problems, Laplace’s equation with nonhomogeneous boundary conditions, the heat equation, and the wave equation. We introduce techniques for other domains, including the Laplacian on the cube and in polar coordinates, and a fairly simplistic method for constructing a Laplacian on an arbitrarily bounded two-dimensional domain. We include a section on Tricomi’s equation, and a brief tutorial on solving first-order PDE numerically via the method of characteristics. We conclude with an overview of the method of separation of variables as applied to obtaining theoretical solutions to the fundamental PDE covered in this chapter, with examples. We use many techniques in this chapter, but with a special effort to extend the two ideas from the previous chapter to PDE, namely discretizing linear problems to finite dimensional linear systems to solve with Gaussian elimination, and discretizing nonlinear problems to finite dimensional nonlinear systems to solve with Newton’s method. This chapter assumes knowledge of topics from introductory partial differential equations. All the requisite PDE theory can be found in texts such as [8], e.g., separation of variables, Laplace’s equation, heat and wave equations, eigenvalue problems, Fourier series, and Bessel functions. Standard introductory numerical analysis texts like [3] contain elementary PDE algorithms, some of which we discuss and compare here. Texts such as [4, 5, 12, 13] contain more theoretical details and applications concerning PDE and finite differences. Further treatments of finite difference schemes and the enforcement of boundary conditions can be found in [10, 14, 17]. See also Sect. 4.10 for a brief overview of the method of separation of variables, used in obtaining theoretical solutions to most of the PDE discussed in this text.


D2x = kron(I, T1);
D2y = kron(T2, I);
D2  = D2x + D2y;

Listing 4.1 These three lines build the point-grid Laplacian on the unit square with 0-D BC shown in Fig. 4.2.

4.1 The Laplacian on the Unit Square

In this section, we show how to build a Laplacian second difference block matrix for the unit square, ready for immediate application to second-order PDE. We can enforce any of the previously discussed types of boundary conditions. We use MATLAB®'s kron command to easily build these block matrices in sparse format. We can easily extend the technique used in the ODE solver from Sect. 3.1 to find approximate solutions of the PDE −Δu + f(u) = 0 on (0,1)². We can analogously solve the linear problem −Δu = f on (0,1)², with nonhomogeneous BC, as was done in Sect. 3.2. We can compute and compare eigenvalues, and plot eigenvectors approximating solutions to −Δu = λu on (0,1)², again with any of the previously discussed types of homogeneous BC.

The domain Ω = (0,1)² is the easiest two-dimensional region to consider in the first investigation in PDE, with n divisions of both x and y variables. Let N = n − 1 if the point grid is used, and N = n if the cell grid is used. We take here the somewhat arbitrary but standard choice for discretizing the region that $u_i \approx u(x_j, y_k)$, where i = (k − 1)N + j. This corresponds to the choice presented in Fig. 2.5. Let T1 and T2 be any sparse second difference matrix in M^{N×N} for the interval (0,1), enforcing homogeneous zero Dirichlet or Neumann, or periodic boundary conditions, at one or both end points, as desired, for x and y (four sides: left, right, top, and bottom), as described in Sect. 3.4. Let I be the sparse identity matrix of the same size. Then the second partial difference matrices with respect to x and y, respectively, are given in Fig. 4.1, with the corresponding Laplacian matrix constructed in Listing 4.1, shown in Fig. 4.2. One makes sure the matrix T has the factor of 1/(dx)², or multiplies by the factor n² = 1/(dx)² after the kron commands.

$$D_2^x = \mathrm{kron}(I_N, T) = \begin{pmatrix} T & & & \\ & T & & \\ & & \ddots & \\ & & & T\end{pmatrix}, \qquad D_2^y = \mathrm{kron}(T, I_N) = \frac{1}{(dx)^2}\begin{pmatrix} -2I & I & & \\ I & -2I & I & \\ & \ddots & \ddots & \ddots \\ & & I & -2I \end{pmatrix},$$

both in $M^{N^2\times N^2}$.

Fig. 4.1 Component matrices for the Laplacian on the unit square with 0-D BC, point grid. In this example, there are N = n − 1 divisions in both the x and y directions. The block matrix D2x computes three-point central second differences in x, along columns, while the block matrix D2y computes three-point central second differences in y, along rows. Note that T contains the factor 1/(dx)². Figure 4.2 shows the corresponding sum D2 = D2x + D2y which approximates the Laplacian

bands = kron(ones(N^2,1),[1 1 -4 1 1]);
D2 = spdiags(bands, [-N,-1,0,1,N], N^2, N^2);
% with additional code to set the boundary ones to zeros . . .

Listing 4.2 For the special case of the point-grid Laplacian on the unit square with 0-D BC, one could fill the five diagonals at once, and then change a few ones to zeros as seen in Fig. 4.2.

If we take u ∈ R^{N²} to be u-values over a square grid reshaped to be a vector as in Fig. 2.5, then D2x*u will compute approximate second partial derivatives with respect to x, and D2y*u will compute approximate second partial derivatives with respect to y. The operation D2*u will approximate the Laplacian of u. Each block T1 in the block-diagonal matrix D2x acts on a corresponding block of u, computing three-point central differences down columns, each column corresponding to a different y-value. The matrix D2y has the components of T2 distributed over three block diagonals, so its action computes differences across rows, each row corresponding to a different x-value.
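A quick sanity check of this action (a sketch with our own test function, whose Laplacian is known exactly):

n = 50; N = n-1; dx = 1/n;
x = linspace(0,1,n+1)';  v = x(2:end-1);
T = spdiags(kron([1,-2,1],ones(N,1)),[-1,0,1],N,N)/dx^2;
I = speye(N);
D2x = kron(I,T);  D2y = kron(T,I);  D2 = D2x + D2y;

[Yv,Xv] = meshgrid(v,v);
u  = reshape(sin(pi*Xv).*sin(2*pi*Yv), N^2, 1);  % 0-D test function
Lu = -5*pi^2*u;                                  % exact Laplacian of u
norm(D2*u - Lu, inf)                             % O(dx^2) error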

4.2 Creating the Sparse Laplacian Matrix D2 and Eigenvalues

The author finds it best to check a few eigenvalues and plot an eigenfunction to verify that the Laplacian matrix is coded correctly. For example, to enforce Dirichlet conditions on all four sides, define T = T1 = T2 = D2 from the examples in Sects. 3.1, 3.2, and 3.3.

For N = 3, for instance, the matrix built by Listing 4.1 is

$$D_2 = \frac{1}{(dx)^2}\left(\begin{array}{ccc|ccc|ccc}
-4&1&0&1&0&0&0&0&0\\
1&-4&1&0&1&0&0&0&0\\
0&1&-4&0&0&1&0&0&0\\ \hline
1&0&0&-4&1&0&1&0&0\\
0&1&0&1&-4&1&0&1&0\\
0&0&1&0&1&-4&0&0&1\\ \hline
0&0&0&1&0&0&-4&1&0\\
0&0&0&0&1&0&1&-4&1\\
0&0&0&0&0&1&0&1&-4
\end{array}\right).$$

Fig. 4.2 Listing 4.1 builds this approximating Laplacian matrix on the unit square with 0-D BC, point grid. In this example, there are N = n − 1 divisions in both x and y directions. Note that D2 on the unit square with 0-D BC using the point grid is very nearly a regular banded five-diagonal matrix, except for zeros every N steps along the upper and lower diagonal corresponding to the y = 0 and y = 1 boundaries, noticeable where blocks on the diagonal meet. For other BC, grids, and regions, the Laplacian is not so uniform. Figure 4.1 shows the corresponding component matrices D2x and D2y


% 1st 6 evals of -Lap on (0,1)^2 w/ 0-D, pt/cell-grid
n    = 100;
cell = true;
N    = (n-1)*~cell + n*cell;

T      = spdiags(kron([1,-2,1],ones(N,1)),[-1,0,1],N,N);
T(1,1) = T(1,1) - cell;   % -2 -> -3 for cell=1, 0-D
T(N,N) = T(N,N) - cell;
D2     = n^2 * (kron(speye(N),T) + kron(T,speye(N)));

lam = sort(eigs(-D2,6,'sm'))'

Listing 4.3 These 11 lines build the 0-D BC Laplacian matrix for either point-grid or cell-grid, and then compute the first six eigenvalues.

In Listing 4.3, we build the Laplacian matrix using either the point grid or the cell grid, and check the first six eigenvalues. We demonstrate here how to use a logical variable to effect a 'one line if-statement'; the value of N and the corner elements of T depend on the true-or-false value of cell. This method of coding is much faster than the traditional if-then statement, a relevant fact if the conditional statement is to be executed repeatedly in a loop. Of course, one typically picks the point or cell grid from the start and need not write code that can handle both choices, and there is nothing wrong with a traditional if-then if it is not called too often.

Listing 4.4 compares against the exact eigenvalues and produces a three-dimensional plot. We use meshgrid another way here, namely to generate j² + k² combinations before reshaping and sorting to obtain eigenvalues. The eigenvalues are converted to strings for easy formatted output. Depending on the specific boundary condition and grid choice, and before plotting, one usually pads a reshaped solution vector appropriately and uses meshgrid to generate X and Y coordinate matrices. Listing 4.5 does this, using either the point or cell grid to enforce 0-D BC (and assuming that we have previously defined v accordingly). See Fig. 4.3 for the result of executing Listing 4.5. In this case, the order of x and y in the meshgrid is not important. This choice has X changing by row, which is consistent with the reshape command which goes column by column. Using the no-edgecolor option as in surf(X, Y, u, 'Edgecolor','None') suppresses the many black borders around the N² individual small rectangles which make up the surface plot. If other, mixed BC are used, the padding can be accomplished with a little more care. For example, the following MATLAB fragment pads u to include the boundary, assuming that the BC is 0-N on the top and right sides of (0,1)², and 0-D BC on the bottom and left:

[V, D]   = eigs(-D2, 6, 'sm');
[lam,ndx]= sort(diag(D)');
[JJ, KK] = meshgrid(1:3,1:3);
lam2     = sort(reshape(JJ.^2 + KK.^2, 1, 9) * pi^2);
fprintf('\n%s\n%s\n',num2str(lam), num2str(lam2(1:6)));

u = reshape(V(:,ndx(3)), N, N);   % 3rd sorted efunction
surf(u);

Listing 4.4 A few lines to compute eigenfunctions with a sorted index, compare eigenvalues to known values, and create a surface plot.

19  u  = [zeros(1,N+2);zeros(N,1),u,zeros(N,1);zeros(1,N+2)];
20  x  = linspace(0,1,n+1)';
21  dx = 1/n;
22  if ~cell
23      v = x(2:end-1);
24  else
25      v = x(1:end-1) + dx/2;
26  end
27  [Y, X] = meshgrid([0; v; 1], [0; v; 1]);
28
29  surf(X, Y, u, 'Edgecolor', 'None');

Listing 4.5 Code to pad the solution with 0-D boundary values and add x and y coordinates to the plot, for both point-grid and cell-grid. In most cases x and v have already been defined so only Lines 19, 27, and 29 are needed.

u =[zeros(1,N+2);zeros(N,1),u,u(:,N);0,u(N,:),u(N,N)];

Exercises

4.1. Similar to Example 4.1, write down the n = 3 cell grid 9 × 9 D2 matrix for 0-D BC on (0,1)², and show that the eigenvalues agree with the formula $\bar\lambda_{j,k} = 2n^2(2 - \cos(j\pi/n) - \cos(k\pi/n))$.

4.2. (*) Compare the eigenvalues of the (n−1)² × (n−1)² block D2 matrix for 0-D BC with some exact known eigenvalues. Compute the order of the eigenvalue approximations, as in Tables 3.1 and 3.2.

Fig. 4.3 An eigenfunction corresponding to the third eigenvalue of −Δ with 0-D BC on the square

4.3. (*) Compute the eigenvalues of the (n−1)² × (n−1)² block D2 matrix for mixed 0-D and 0-N BC, on sides determined by x ∈ {0,1} and y ∈ {0,1}, respectively. Compare results with some exact known eigenvalues. Compute the order of the eigenvalue approximations.

4.4. (*) Create a D2 matrix which enforces 0-N BC on all four sides of (0,1)². Compare the result of eigs to the known basis in terms of cosines. One can use eigs(-D_2 + speye((n-1)^2), k, 'sm') to shift the argument matrix of eigs to be invertible, and then subtract 1 to get the correct eigenvalues.

4.5. (**) Let n be an even number, and D2 be the 0-D BC second difference matrix for the Laplacian on the square using the cell grid. Using numerical integration against the subspace basis V provided by [V, D] = eigs(-D2, M, 'sm'), approximate the Fourier sine series coefficients of the function of two variables defined by f(x,y) = x(1−x)y(1−y). Form the eigenfunction expansion, and compare the approximation with a graph of the original function f : [0,1]² → R. Verify the expansion coefficients by doing the integrals by hand.

4.6. (**) Let n be an even number, and D2 be the 0-D BC second difference matrix for the Laplacian on the square using the cell grid. Using numerical integration against the subspace basis V provided by [V, D] = eigs(-D2, M, 'sm'), approximate the Fourier sine series coefficients of the following 'pyramid' function of two variables: f = @(x,y) x.*(y>x).*(y …

4.5 The Heat Equation

… we can revisit the topic. Visualizing solutions to the heat equation can be done via animations, e.g., viewing the temperature profiles as frames in a movie as time varies (see Sect. 2.5.1). Let D2 be a matrix for the region Ω with one of our previously discussed types of boundary conditions enforced. For the interval, D2 ∈ R^{N×N}, and for the square D2 ∈ R^{N²×N²}, where N = n − 1 for the point grid and N = n for the cell grid. Let v be the corresponding interior point-grid elements or midpoint cell-grid elements. Let t be a discretization of the desired time interval [0, t_f] with m divisions (m+1 points), so that Δt = t_f/m and dx = 1/n. Let $u_i^\ell$ approximate $u(t_\ell, v_i)$ in the interval case, and $u(t_\ell, v_j, v_k)$, i = (j−1)N + k, in the square case (with x = y). We use the notation that $u^\ell$ is the vector of approximate u values evaluated at grid points, at the fixed time $t_\ell$. Let z equal f(v) in the interval case, and reshape(f(Xv,Yv),N^2,1) in the square case. We can now present several methods for approximating solutions to the heat equation, assuming 0-D or 0-N BC enforced at either end point (r = 1) or on any side (r = 2), point or cell grid. Periodic BC in one or both independent variables can also be implemented. Nonhomogeneous BC can also be enforced by adding back the 'missing boundary terms', similar to previous examples.

4.5.1 Explicit Method

If we replace the first time derivative and second space derivative(s) with two- and three-point differences, respectively, we obtain the forward time difference equation

$$\frac{u^{\ell+1} - u^\ell}{\Delta t} = cD_2 u^\ell.$$

Let $d = \frac{c\,\Delta t}{(dx)^2}$ and define $A_d = I + d\tilde D_2$, where $\tilde D_2 = (dx)^2 D_2$ is a difference matrix with no factor of n². Then solving the above difference equation for $u^{\ell+1}$, we obtain the recursive definition

$$u^{\ell+1} = A_d u^\ell.$$

After initializing u = z, one can overwrite one time slice with another via iterating the single line found in Listing 4.12. See also [3]. Plot (or save in a list for later) $u^\ell$, as desired. This is the so-called explicit or forward difference method for solving the heat equation. This method is 'easy', as there is no system to solve, but unstable: one finds that $d < 2^{-r}$ is necessary for convergence, and the method is wildly inaccurate otherwise.
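For reference, a complete minimal run of the explicit method on the interval might look as follows (a sketch; the parameter values are our own, with d chosen below the stability threshold):

n = 50; N = n-1; c = 1; d = 0.4;            % stable: d < 1/2 for r = 1
dx = 1/n;  dt = d*dx^2/c;  m = 200;
x  = linspace(0,1,n+1)';  v = x(2:end-1);
D2t = spdiags(kron([1,-2,1],ones(N,1)),[-1,0,1],N,N);  % tilde D2
Ad  = speye(N) + d*D2t;
u   = sin(pi*v);                            % initial temperature
for l = 1:m
    u = Ad*u;                               % one explicit time step
end
plot(v, u, v, exp(-pi^2*m*dt)*sin(pi*v), 'r*');   % compare to exact decay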

This means a very small time step must be taken, so many time steps must be executed to reach a given future time, and the situation gets worse with the square of n, the number of spatial divisions. If the diffusion constant is realistic (c far from 1), the constraint on Δt rescales accordingly, since Δt = d(dx)²/c.

Example 4.2. Determine the values of d for which the explicit method is stable.

Solution: The eigenvalues of $A_d$ satisfy

$$\lambda_{d,k} = 1 - 2n^2\Delta t\left(1 - \cos\frac{k\pi}{n}\right) = 1 - 2d\left(1 - \cos\frac{k\pi}{n}\right) > -1$$

only if $d < \frac{1}{1 - \cos\frac{k\pi}{n}}$. This holds for all k ∈ {1, ..., n−1} and for all n only if d < 1/2. Furthermore,

$$\lambda^A_{d,k} = 1 - 2n^2\Delta t\left(1 - \cos\frac{k\pi}{n}\right) = 1 - \Delta t\cdot 2n^2\left(\frac{k^2\pi^2}{2n^2} + o\!\left(\frac{1}{n^4}\right)\right) = 1 - k^2\pi^2\Delta t + o((\Delta t)^2) = e^{-k^2\pi^2\Delta t} + o((\Delta t)^2).$$

This means that the amplitude of each mode in the Fourier sine series expansion of the solution decays at approximately the correct rate.

Example 4.3. Let $d = \frac{1}{1 - \cos\frac{3\pi}{4}} = \frac{2}{2+\sqrt2}$ be the critical value from Example 4.2, and consider the n = 4 point grid of (0,1) with 0-D BC. Write down a 3 × 3 matrix-vector equation and compute one time step of the explicit method with initial temperature $u_0 = (\frac{\sqrt2}{2}, 1, \frac{\sqrt2}{2})$. Compare the resulting amplitude with the exact known amplitude at time $t = \Delta t = \frac{1}{16+8\sqrt2}$.

Solution:

$$A_d = I + \frac{2}{2+\sqrt2}\begin{pmatrix}-2&1&\\1&-2&1\\&1&-2\end{pmatrix} = \frac{1}{2+\sqrt2}\begin{pmatrix}\sqrt2-2&2&\\2&\sqrt2-2&2\\&2&\sqrt2-2\end{pmatrix}.$$

Then

$$A_d\begin{pmatrix}\frac{\sqrt2}{2}\\[2pt]1\\[2pt]\frac{\sqrt2}{2}\end{pmatrix} = \frac{3\sqrt2-2}{2+\sqrt2}\begin{pmatrix}\frac{\sqrt2}{2}\\[2pt]1\\[2pt]\frac{\sqrt2}{2}\end{pmatrix}.$$

Note that $\lambda^A_{d,1} = \frac{3\sqrt2-2}{2+\sqrt2} \approx 0.6569$ compares to $e^{-\pi^2\Delta t} \approx 0.6967$. Problems 4.19 and 4.20 ask to repeat this 3 × 3 example with a different d value and initial temperature u₀.

4.5.2 Implicit Method

If we consider instead the backward time difference equation

$$\frac{u^{\ell+1} - u^\ell}{\Delta t} = cD_2 u^{\ell+1},$$

then one has a system to solve each time step:

$$B_d u^{\ell+1} = u^\ell,$$

with $B_d = I - d\tilde D_2$. In Listing 4.13, we implement one step of this method:

u = B \ u;

Listing 4.13 One time step of the implicit heat equation solver.

After computing a time slice, plot or save $u^\ell$. This method is known to be stable [3]. Smaller d values may lead to better answers, but larger ones do not lead to catastrophic failure. The method has the reputation of being more difficult, since historically solving systems was more difficult to implement and slow to execute. It is of course now an equally 'easy' method with '\' as a solver executing on a modern computer.

Example 4.4. Let $d = \frac{2}{2+\sqrt2}$, and consider the n = 4 point grid of (0,1) with 0-D BC. Write down a 3 × 3 matrix-vector equation and compute one time step of the implicit method with initial temperature $u_0 = (\frac{\sqrt2}{2}, 1, \frac{\sqrt2}{2})$. Compare the resulting amplitude with the exact known amplitude at time t = Δt, and with the eigenvalue $\lambda^{B^{-1}}_{1,d}$ of $B_d^{-1}$.

Solution:

$$B_d = I - \frac{2}{2+\sqrt2}\begin{pmatrix}-2&1&\\1&-2&1\\&1&-2\end{pmatrix} = \frac{1}{2+\sqrt2}\begin{pmatrix}6+\sqrt2&-2&\\-2&6+\sqrt2&-2\\&-2&6+\sqrt2\end{pmatrix}.$$

Then

$$B_d\begin{pmatrix}\frac{\sqrt2}{2}\\[2pt]1\\[2pt]\frac{\sqrt2}{2}\end{pmatrix} = \frac{6-\sqrt2}{2+\sqrt2}\begin{pmatrix}\frac{\sqrt2}{2}\\[2pt]1\\[2pt]\frac{\sqrt2}{2}\end{pmatrix},$$

which can easily be verified by matrix multiplication on the right. Note that the eigenvalue of $B_d^{-1}$ is $\lambda^{B^{-1}}_{d,1} = \frac{2+\sqrt2}{6-\sqrt2} \approx 0.7445$, which compares to $e^{-\pi^2\Delta t} \approx 0.6967$. Problem 4.23 asks to repeat this 3 × 3 example with a different d value and three different initial temperatures.

4.5.3 Explicit–Implicit Method

Crank–Nicolson [3] is a slightly better solver where one takes half an explicit step, and then half an implicit step. Like the implicit method, it is unconditionally stable (see Example 4.2 and Problem 4.22). Let A2 and B2 equal $A_{d/2}$ and $B_{d/2}$, respectively, which has the effect of halving Δt. One time step is implemented in Listing 4.14:

u = B2 \ (A2 * u);

Listing 4.14 One time step of the explicit–implicit heat equation solver.

The parentheses in Listing 4.14 are important! Otherwise, a system solve might be performed for each column of A2. The matrix $B_{d/2}^{-1}A_{d/2}$ is full, not sparse, and should not be computed for large n and m.

Example 4.5. Let $d = \frac{2}{2+\sqrt2}$, and consider the n = 4 point grid of (0,1) with 0-D BC. Write down the 3 × 3 matrices $A_{d/2}$ and $B_{d/2}$. Verify that the eigenvalue formulas $\lambda^A_{1,d/2}$ and $\lambda^{B^{-1}}_{1,d/2}$ from Examples 4.3 and 4.4 are correct by matrix multiplication times the eigenvector $u_0 = (\frac{\sqrt2}{2}, 1, \frac{\sqrt2}{2})$. Compare the amplitude after taking one step of the Crank–Nicolson method with the exact amplitude at time t = Δt.

Solution:

$$A_{\frac d2}u_0 = \frac{1}{2+\sqrt2}\begin{pmatrix}\sqrt2&1&\\1&\sqrt2&1\\&1&\sqrt2\end{pmatrix}\begin{pmatrix}\frac{\sqrt2}{2}\\[2pt]1\\[2pt]\frac{\sqrt2}{2}\end{pmatrix} = \frac{2\sqrt2}{2+\sqrt2}\begin{pmatrix}\frac{\sqrt2}{2}\\[2pt]1\\[2pt]\frac{\sqrt2}{2}\end{pmatrix}$$

and

$$B_{\frac d2}u_0 = \frac{1}{2+\sqrt2}\begin{pmatrix}4+\sqrt2&-1&\\-1&4+\sqrt2&-1\\&-1&4+\sqrt2\end{pmatrix}\begin{pmatrix}\frac{\sqrt2}{2}\\[2pt]1\\[2pt]\frac{\sqrt2}{2}\end{pmatrix} = \frac{4}{2+\sqrt2}\begin{pmatrix}\frac{\sqrt2}{2}\\[2pt]1\\[2pt]\frac{\sqrt2}{2}\end{pmatrix}.$$

This calculation agrees with

$$\lambda^A_{1,\frac d2} = 1 - \frac{2\cdot 16}{16(2+\sqrt2)}\left(\frac{2-\sqrt2}{2}\right) = \frac{2\sqrt2}{2+\sqrt2}$$

and

$$\lambda^{B^{-1}}_{1,\frac d2} = \left(1 + \frac{2\cdot 16}{16(2+\sqrt2)}\left(\frac{2-\sqrt2}{2}\right)\right)^{-1} = \left(\frac{4}{2+\sqrt2}\right)^{-1} = \frac{2+\sqrt2}{4}.$$

Then one step of Crank–Nicolson gives $u^1 = \lambda^{B^{-1}}_{1,\frac d2}\lambda^A_{1,\frac d2}\,u^0$, where $\lambda^{B^{-1}}_{1,\frac d2}\lambda^A_{1,\frac d2} = \frac{\sqrt2}{2} \approx 0.7071$ compares to $e^{-\pi^2\Delta t} \approx 0.6967$. Problem 4.28 asks to repeat this 3 × 3 example with a different d value.

4.5.4 The Method of Lines

The so-called method of lines (MOL; see [17]) approach is generally pretty good, and widely applicable to other equations. For the heat equation, discretize the space variables only, leaving time continuous. Then one obtains a first-order IVP system of as many equations as grid points, i.e., N or N² for the interval and square. The built-in solver ode45 or any other first-order IVP solver can solve the problem easily after defining n, N, x, t, v (and Xv, Yv in the square case), the initial temperature distribution function f, the discretized initial temperature z, and the D2 matrix of proper boundary condition, as in Listing 4.15:

F = @(t, u) D2*u;
[t, w] = ode45(F, t, z);

Listing 4.15 The method of lines (MOL) heat equation solver. After discretizing in space we have the ODE system u' = D2u, u(0) = z.

The ℓth row of w will contain an approximation to the time slice $u^\ell$. If r = 1, you can plot the slice or surf all of w. If r = 2, you can reshape and surf the slices, but it takes some tricks to visualize the four-dimensional 'surface' including time.

Example 4.6. Show that the MOL with an Euler's solver is precisely the explicit method from Sect. 4.5.1.

Solution: The ODE u' = cD2u has an Euler step of $u^{\ell+1} = u^\ell + \Delta t\cdot cD_2 u^\ell = A_d u^\ell$. Problem 4.27 asks to show that applying the MOL with Modified Implicit Backward Euler's and Modified Implicit Midpoint Euler's solvers yields the implicit and explicit–implicit methods from Sects. 4.5.2 and 4.5.3.

 5  M     = 100;                          % # modes in expansion
 6  [V,D] = eigs(-D2, M, 'sm');
 7  V     = (sqrt(abs(V'*V*dx^r))\V')';   % normalize
 8
 9  [Td, Dt] = meshgrid(t, diag(D));
10  E = exp(-c * Dt .* Td);               % e^{-c lambda_k t}
11  a = V' * z * dx^r;                    % a_k
12  u = V * diag(a) * E;                  % u = sum a_k e^{...} psi_k

Listing 4.16 Solving the heat equation with eigenfunction expansion and numerical integration. Line 10 computes the matrix E = (e^{−λ_j t_k}). Line 11 performs numerical integration to approximate the Fourier sine series coefficients of the initial temperature z. Line 12 multiplies each Fourier sine series coefficient by the appropriate negative exponential term, varying by frequency down columns and by time across rows, and computes the solution to the heat equation as a linear combination of the basis. If we change the D2 matrix to reflect other BC or regions, the code works the same.

4.5.5 Fourier Expansion with Numerical Integration

Define n, N, x, t, v, and the initial temperature distribution function f. If the space dimension is r = 1, populate the initial vector by z = f(v) values. If r = 2, form Xv, Yv with a meshgrid call in order to populate the initial vector z with f values over the grid. Form the D2 matrix with the desired boundary condition. Assuming this has been done, Listing 4.16 creates and normalizes an orthogonal basis of eigenvectors of −D2, computes Riemann sums for the eigenfunction expansion coefficients via the formula $c_j = \int f\psi_j$, and then sums up the approximation to $u(t,x) = \sum c_j e^{-c\lambda_j t}\psi_j(x)$. The meshgrid command is used here to create the matrix E whose columns are negative exponential coefficients $e^{-\Lambda t}$, with time increasing along rows. The code works for interval and square, using a factor of (dx)^r for Riemann sums.

If r = 1, Listing 4.17 shows how to produce the result in Fig. 4.9. If r = 2, Listing 4.18 shows how to reshape the initial temperature before computing u, and reshape and pad a time slice of the solution with zero boundary data before executing a surf. Figure 4.7 displays surf output from solving the heat equation on the unit square with 0-D BC, for four different times. Listing 4.19 demonstrates how, for the r = 1 interval case, one can use Fourier sine series coefficients obtained by symbolic computation and enter the results efficiently in MATLAB. We sum only over nontrivial modes, and compare the expanded result with the original function f. We compute the SOV solution to the heat equation on the unit interval with initial temperature distribution f using the known coefficients, eigenfunctions, and eigenvalues. The results of the Fourier approximation plot for M = 10 and M = 1000 are displayed in Fig. 4.8. The result of the heat equation surf for n = 100, m = 100, and M = 50 is found in Fig. 4.9.

% plot temperature surface with 0-D BC
[T, X] = meshgrid(t, x);
surf(X,T,[zeros(1,nt+1);u;zeros(1,nt+1)],'EdgeColor','None');

Listing 4.17 Creating a time vs space surf plot for r = 1 space dimensions.

f = @(x,y) (sin(pi*x) + .5*sin(3*pi*x)) .* ...
           (sin(pi*y) + .5*sin(3*pi*y));
z = reshape(f(Xv,Yv),N^2,1);
% ...
z1 = zeros(1, N+2);  z2 = zeros(N, 1);
w  = reshape(u(:,5), N, N);    % select 5th time slice
w  = [z1; z2 w z2; z1];
surf(X, Y, w, 'Edgecolor', 'None');
axis([0 1 0 1 0 1.25]);        % fix scale

Listing 4.18 Setting initial temperature and creating a time slice surface plot of temperature vs space variables, for r = 2 space dimensions.

Fig. 4.7 Four time slices of solution to heat equation on square, via numerically computed Fourier coefficients

n = 100; M = 50; m = 100; tf = .2; c = .05;
x = linspace(0,1,n+1)';
t = linspace(0,tf,m+1)';
f = @(x) (1-4*x).*(x<.5) + (3-4*x).*(x>.5);  % u(0,x), period-1/2 sawtooth

ntm = @(k) 4*k;               % nontrivial modes
a   = @(K) (8/pi) ./ (K);     % coefficient formula
k   = ntm(1:M)';              % M nontrivial modes

[Kx, Xk] = meshgrid(k, x);
[Tk, Kt] = meshgrid(t, k);
V = sin(pi*Kx.*Xk);           % basis, R^{N x M}
E = exp(-c*pi^2*Kt.^2.*Tk);   % e^{-c lambda_k t}

u = V * diag(a(k)) * E;       % sum a_k e^{...} psi_k
w = V * a(k);

subplot(2,1,1);
plot(x,w, x(1:10:end),f(x(1:10:end)),'r*');
subplot(2,1,2);
[Tx, Xt] = meshgrid(t, x);    % plot grids
surf(Tx, Xt, u, 'Edgecolor', 'None');

Listing 4.19 Fourier coefficients computed by hand or with symbolic software are entered as functions instead of computed via numerical integration. We efficiently use only the nontrivial modes resulting from symmetry to solve and plot the corresponding exact SOV heat equation solution.

Fig. 4.8 Fourier sine series expansions with M = 10 and M = 1000 nontrivial modes


Fig. 4.9 SOV solution to heat equation with exact Fourier coefficient formulas

Example 4.7. Use the three eigen-pairs for the n = 4 point-grid second difference matrix D2 with 0-D BC on (0,1) to compute the Fourier heat equation solution approximation for $u_t = u_{xx}$ with initial temperature f ≡ 1. Compare to the actual solution.

Solution: The eigenvalues of −D2 are $\{16(2-\sqrt2),\ 32,\ 16(2+\sqrt2)\}$ with corresponding un-normalized eigenvectors $v_k = (\sin\frac{k\pi}{4}, \sin\frac{k\pi}{2}, \sin\frac{3k\pi}{4})$. The computed approximate Fourier sine series coefficients

$$c = \frac12\begin{pmatrix}\frac{\sqrt2}{2}&1&\frac{\sqrt2}{2}\\[2pt]1&0&-1\\[2pt]\frac{\sqrt2}{2}&-1&\frac{\sqrt2}{2}\end{pmatrix}\begin{pmatrix}1\\1\\1\end{pmatrix} = \frac12\begin{pmatrix}\sqrt2+1\\0\\\sqrt2-1\end{pmatrix} \approx \begin{pmatrix}1.2071\\0\\0.2071\end{pmatrix}$$

compare to the actual values $\{4/\pi,\ 0,\ 4/(3\pi)\} \approx \{1.2732, 0, 0.4244\}$. The eigenvalues $\{16(2-\sqrt2), 32, 16(2+\sqrt2)\} \approx \{9.3726, 32, 54.6274\}$ correspond to the actual values $\{\pi^2, 4\pi^2, 9\pi^2\} \approx \{9.8696, 39.4784, 88.8264\}$. The approximate solution is

$$e^{-16(2-\sqrt2)t}\,\frac{\sqrt2+1}{2}\begin{pmatrix}\frac{\sqrt2}{2}\\[2pt]1\\[2pt]\frac{\sqrt2}{2}\end{pmatrix} + e^{-16(2+\sqrt2)t}\,\frac{\sqrt2-1}{2}\begin{pmatrix}\frac{\sqrt2}{2}\\[2pt]-1\\[2pt]\frac{\sqrt2}{2}\end{pmatrix}.$$

Problem 4.29 asks to repeat this example for an even initial temperature.


4.5.6 Block Matrix Systems

We demonstrate how to approximate a solution to the heat equation $u_t = c\Delta u$ on (0,1) and (0,1)² by solving a single linear system, given any initial temperature u(0,x) = f(x), with homogeneous or nonhomogeneous Dirichlet or Neumann boundary conditions. Let $\tilde D_1$ be the m × m first difference matrix (see, e.g., Eq. 3.3) without the 1/Δt factor. Let $\tilde D_2$ be the N × N point- or cell-grid second difference Laplacian matrix for the region, without the 1/(Δx)² factor, enforcing 0-D or 0-N BC at both ends of the interval or on the four sides of the square, as desired. Then we can write the PDE discretely as

$$Lu = \left[\mathrm{kron}(\tilde D_1, I_N) - d\cdot\mathrm{kron}(I_m^*, \tilde D_2)\right]u = IC + BC,$$

where $I_m^*$ is one of the three m × m matrices in Table 3.3, $u \in \mathbb{R}^{Nm}$ is the solution approximation, and R = IC + BC contains initial and boundary condition terms 'moved to the RHS'. If we take $I_m^*$ to be the first, lower diagonal matrix, the scheme is forward in nature and precisely the explicit method. If we take the second choice $I_m^* = I_m$, we get a backward scheme which is precisely the implicit method. We will use in our examples the third 'averaging identity' matrix, which generates precisely the explicit–implicit (Crank–Nicolson) method. In that case, our system becomes

$$Lu = \begin{pmatrix} B_{\frac d2} & & & \\ -A_{\frac d2} & B_{\frac d2} & & \\ & \ddots & \ddots & \\ & & -A_{\frac d2} & B_{\frac d2}\end{pmatrix}\begin{pmatrix}u^1\\u^2\\\vdots\\u^m\end{pmatrix} = \begin{pmatrix}A_{\frac d2}z\\0\\\vdots\\0\end{pmatrix} + [BC],$$

where the matrices $B = B_{\frac d2}$ and $A = A_{\frac d2}$ are as in Sect. 4.5.3, the vector $u^i$ represents the temperatures at all N locations at time t = iΔt, and z is a vector of temperatures at all N locations at initial time t = 0. The matrix L can be easily built using kron, the lower diagonal matrix $I_m^*$, the identity matrix $I_m$, and the matrices B and A.

The vector BC is the zero vector in $\mathbb{R}^{Nm}$ if the BC are homogeneous; all zero Dirichlet and Neumann conditions are entirely handled when building the Laplacian matrix $\tilde D_2$. For nonhomogeneous BC, the vector is constructed similarly as in Sect. 4.4, with an extra factor of $d = c\frac{\Delta t}{(\Delta x)^2}$ coming from the $\frac d2$ terms in the off-diagonals of B and −A. For the interval, the RHS is most effectively constructed by first populating three of the four edge terms of an otherwise zero N × m matrix R with initial and boundary data, and then reshaping to a vector $R \in \mathbb{R}^{Nm}$. Listing 4.20 and Fig. 4.10 demonstrate this for the BC u(t,0) = a, u_x(t,1) = b. The vector $A_{\frac d2}z$ is placed in the time t = 0 edge of R. At x = 0, the cell grid with ghost points gives

$$\frac{g_1 + u_1^i}{2} = a, \quad\text{hence}\quad g_1 = 2a - u_1^i.$$

The 1st equation in the i-th block $-Au^{i-1} + Bu^i = 0$ can be written as

$$-\frac d2 g_1^{i-1} + (d-1)u_1^{i-1} - \frac d2 u_2^{i-1} - \frac d2 g_1^i + (d+1)u_1^i - \frac d2 u_2^i = 0.$$

The known term in $-d\,g_1$ is $-2d\,a$, whence the value $2d\,a$ is moved to the RHS by adding it to the x = 0 edge of R. At x = 1, the cell grid with ghost points gives

$$\frac{g_2 - u_n}{\Delta x} = b, \quad\text{hence}\quad g_2 = u_n + \Delta x\, b.$$

By looking at the n-th equation in the i-th block, we see that the term $-d\,g_2$ contains the known value $-d\,\Delta x\, b$, whence the value $d\,\Delta x\, b$ is added along the x = 1 edge. Reshaping the matrix R to a column vector R, the system is defined and can be solved via a single application of '\'.

Fig. 4.10 The RHS matrix with added initial and mixed nonhomogeneous boundary data for a one space-dimension heat equation, cell grid, before reshaping to a vector. See Listing 4.20
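For the interval case, the matrix L itself can be assembled in a few lines mirroring the kron formula above (a sketch; the names and parameter values are our own, with the cell grid enforcing 0-D at x = 0 and 0-N at x = 1 as in the example):

n = 50; N = n; m = 400; c = 1;
dx = 1/n;  dt = 1e-4;  d = c*dt/dx^2;
D2t = spdiags(kron([1,-2,1],ones(N,1)),[-1,0,1],N,N);
D2t(1,1) = -3;  D2t(N,N) = -1;      % cell grid: 0-D at x=0, 0-N at x=1
B = speye(N) - d/2*D2t;
A = speye(N) + d/2*D2t;
Istar = spdiags(ones(m,1), -1, m, m);
L = kron(speye(m), B) - kron(Istar, A);   % Nm x Nm, block bidiagonal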

Problem 4.24 asks to verify that the other schemes are indeed the explicit and implicit schemes, and are also described by block bidiagonal linear systems in terms of $A_d$ or $B_d$ and the initial temperature. For a one space-dimension heat equation and the Crank–Nicolson method with n = 3 cell grid, Problem 4.25 asks to write down the 6 × 6 system for two time steps. Problem 4.26 asks to implement all three block-solve schemes in MATLAB for a one space-dimension heat equation.


R = zeros(N,m);
R(:,1) = A*z;
R(1,:) = R(1,:) + 2*d*a;
R(N,:) = R(N,:) + d*dx*b;
R = reshape(R,N*m,1);

u = L \ R;

Listing 4.20 Setting up the right-hand-side to solve a one space-dimension heat equation with nonhomogeneous boundary data. The cell-grid is used to enforce u(0,x) = f(x), u(t,0) = a, and u_x(t,1) = b. See Fig. 4.10.

When we implement the method on the unit square, the matrix L is created in a straightforward way. After using the point or cell grid to build an N² × N² two-space dimension Laplacian matrix $\tilde D_2$ which enforces the desired homogeneous Dirichlet or Neumann BC on the four sides, create the block matrices A and B by $I_{N^2} \pm d\tilde D_2$. Then use kron to place −A and B on the lower diagonal and diagonal, respectively, to form the N²m × N²m matrix L.

Fig. 4.11 Three time slices of the heat equation solution on the square. The third slice is nearly at the equilibrium temperature implied by the BC u_x(0,y) = 0 = u_x(1,y), u(x,0) = 0, u(x,1) = 1

The RHS vector can be created by starting with an N × N × m rectangular array of zeros. After adding an N × N initial temperature array to the t = 0 face, the known boundary data corresponding to the four sides of the square are added to the four N × m faces. We demonstrate with the example $u_t = c\Delta u$, t > 0, (x,y) ∈ (0,1)², u(0,x,y) = f(x,y), with the boundary conditions

% define initial temp and four side BC functions
f  = @(x,y) 1/2*cos(pi*x).*sin(pi*y) + sin(pi*y/2);
g1 = @(x) zeros(size(x));   g3 = @(x) ones(size(x));
g2 = @(y) zeros(size(y));   g4 = @(y) zeros(size(y));

% define time grid and space cell-grid
n  = 30;               % cell-grid (0,1) divisions
T  = .15;              % final time
d  = 1;                % d = dt/dx^2, c = 1
dx = 1/n;  dt = dx^2*d;
m  = floor(T/dt);      % m time steps
x  = linspace(0,1,n+1)';
v  = x(1:end-1) + dx/2;
[Yv,Xv] = meshgrid(v,v);

% n^2 x n^2 \tilde Laplacian
T1 = spdiags(kron(ones(n,1),[1 -2 1]),-1:1,n,n);
T2 = T1;
T1(1,1) = -1;  T1(n,n) = -1;   % Neumann x=0, x=1
T2(1,1) = -3;  T2(n,n) = -3;   % Dirichlet y=0, y=1
D2 = kron(speye(n),T1) + kron(T2,speye(n));

% create bi-block diagonal explicit-implicit matrix L
B  = speye(n^2) - d/2*D2;
A  = speye(n^2) + d/2*D2;
I1 = spdiags(ones(m,1),-1,m,m);
L  = kron(speye(m),B) - kron(I1,A);

Listing 4.21 For the heat equation on the square, this code generates the time and space cell-grid, the Laplacian, the Crank–Nicolson matrices A and B, and the linear operator L defining the left-hand-side of a linear system for all space locations and time steps.

Listing 4.21 sets up the cell grid and builds the block matrix L. Listing 4.22 creates the RHS vector R and solves the system. Listing 4.23 extracts three time slices of the solution and plots them. Figure 4.11 shows the results of these plots at the initial and two subsequent time slices, for a specific choice of initial temperature f, diffusion constant c, and boundary functions g1, g2, g3, and g4.


% 3D RHS array
R = zeros(n,n,m);
% add A*IC to first time slice
R(:,:,1) = reshape(A*reshape(f(Xv,Yv),n^2,1),n,n);
% add cols/rows of BC to each time slice
R(:,1,:) = R(:,1,:) + 2*d*g1(v);
R(:,n,:) = R(:,n,:) + 2*d*g3(v);
R(1,:,:) = R(1,:,:) + dx*d*g2(v');
R(n,:,:) = R(n,:,:) + dx*d*g4(v');
% reshape RHS and SOLVE SYSTEM
R = reshape(R,n^2*m,1);
u = L \ R;

Listing 4.22 Create the right-hand-side vector by first populating five of the faces of a three-dimensional array with initial and boundary data, and solve the system.

u  = reshape(u,n^2,m);
Z1 = reshape(u(:,1),n,n);
Z2 = reshape(u(:,20),n,n);
Z3 = reshape(u(:,m),n,n);

subplot(1,3,1); surf(Xv, Yv, Z1,'Edgecolor','None'); axis([0 1 0 1 -.1 1.3]);
subplot(1,3,2); surf(Xv, Yv, Z2,'Edgecolor','None'); axis([0 1 0 1 -.1 1.3]);
subplot(1,3,3); surf(Xv, Yv, Z3,'Edgecolor','None'); axis([0 1 0 1 -.1 1.3]);

Listing 4.23 We select three time slices for surface plots. See Fig. 4.11.

Exercises

4.19. Let d = 1/2 and consider the n = 4 point grid of (0, 1) with 0-D BC. Write down a 3 × 3 matrix-vector equation and compute one time step of the explicit method with initial temperature u^0 = (√2/2, 1, √2/2). Compare the resulting amplitude with the exact known amplitude at time t = Δt = 1/32, and with the eigenvalue λ^A_{1,d}.

4.20. Let d = 2/(2 + √2) and consider the n = 4 point grid of (0, 1) with 0-D BC. Write down a 3 × 3 matrix-vector equation and compute one time step of the explicit method with initial temperature u^0 = (√2/2, −1, √2/2). Compare the resulting amplitude with the exact known amplitude at time t = Δt, and with the eigenvalue λ^A_{3,d}.

4.21. Let d = 1/2 and consider the point grid of (0, 1) with 0-D BC. Show that the explicit method for m = 2n² time steps to compute an approximation at time t = 1 with initial temperature u^0 = (√2/2, 1, √2/2) yields a final amplitude (λ^A_{1,1/2})^{2n²} which converges O(1/n²) to the exact final amplitude e^{−π²}.

4.22. Show that the implicit method is unconditionally stable by showing that the magnitudes of the eigenvalues λ_{d,k} of B_d^{−1} are less than one for all d, for all n, and for all k ∈ {1, …, n − 1}. Using the formula for λ_{d,k}, the first two terms of a Taylor expansion for cos α_k, and the first two terms of a geometric sum, show that

λ^{B⁻¹}_{d,k} = e^{−k²π²Δt} + o((Δt)²).

4.23. Let d = 1, and consider the n = 4 point grid of (0, 1) with 0-D BC. Write down a 3 × 3 matrix-vector equation and compute one time step of the implicit method with each of the initial temperatures u^0 ∈ {(sin(kπ/4), sin(kπ/2), sin(3kπ/4)) : k = 1, 2, 3}. Compare the resulting amplitudes with the exact known amplitudes at time t = Δt, and with the eigenvalues λ^{B⁻¹}_{k,d} of B_d^{−1}.

4.24. Verify that the other two schemes (taking I* to be the lower diagonal identity or identity matrix instead of the averaging matrix) are indeed the explicit and implicit schemes. Write down the linear systems in block bidiagonal form in terms of A_d or B_d, and the initial temperature.

4.25. For a one space-dimension heat equation and the Crank–Nicolson method with n = 3 cell grid and 0-D BC, with d = 2, c = 1, and u^0 = sin(πv), write down and solve the 6 × 6 system for two time steps. Compare amplitudes to correct values at t = Δt and t = 2Δt.

4.26. (**) Implement all three block-solve schemes in MATLAB for the one space-dimension heat equation in Problem 4.25. Compare with the actual solution or other approximations.

4.27. Show that the MOL with Modified Implicit Backward Euler's and Modified Implicit Midpoint Euler's methods are precisely the implicit and explicit–implicit methods, respectively.

4.28. Let d = 1, and consider the n = 4 point grid of (0, 1) with 0-D BC. Write down the 3 × 3 matrices A_d and B_d. Verify that the eigenvalue formulas λ^A_{1,d} and λ^{B⁻¹}_{1,d} from Examples 4.3 and 4.4 are correct by matrix multiplication against the eigenvector u^0 = (√2/2, 1, √2/2). Compare the amplitude after taking one step of the Crank–Nicolson method with the exact amplitude at time t = Δt.

4.29. Repeat Example 4.7 for the case f(x) = 1 − 2x.

4.30. Repeat Example 4.7 with the n = 3 cell grid.

4.31. (*) Consider the one space-dimension heat equation on (0, 1) with 0-D BC, and initial temperature defined by Exercise 3.24. Compare the exact SOV Fourier sine series solutions with the results of Crank–Nicolson and the method of lines (the latter implemented with either Runge–Kutta or ode45).

4.32. (**) Repeat the previous exercise with the block matrix solve method.

4.33. (*) Consider the one space-dimension heat equation on (0, 1) with 0-D BC, and initial temperature defined by Exercise 3.28. Compute the Crank–Nicolson and method of lines solutions, and compare to the actual solution.

4.34. (**) Consider the two space-dimension heat equation on (0, 1)² with 0-D BC, and initial temperature defined by the 'pyramid function' of Exercise 4.6. Compare the results from the method of lines with the method of Fourier expansion by numerical integration. You could even work out the integrals by hand.

4.35. (***) Implement the block matrix solve method with a convection term: u_t = c·u_xx − b·u_x. Use an N × N three-point central first difference matrix T and kron(speye(m), T) to create D1^x.

4.36. (***) Use any of the above methods to solve a two space-dimension heat equation with convection: u_t = cΔu − b·∇u. Here, you will want to create D1^x and D1^y.

4.6 The Wave Equation

We present the method of lines (MOL; see [17]) for solving the wave equation, and a reasonable explicit method. The more obvious explicit and implicit methods do not perform very well, a fact the reader can explore in the exercises. We develop the so-called D'Alembert matrix method for setting up and solving a single linear system, as we did for the heat equation in Sect. 4.5.6. Visualizing solutions to the wave equation is a good opportunity for making animations, e.g., viewing the displacements as frames in a movie as time varies (see Sect. 2.5.1).
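As a concrete illustration, a minimal movie loop in the style of Sect. 2.5.1 (this sketch assumes an n × (m+2) array u of computed time slices over a cell grid v, as produced later in this section; the axis limits are arbitrary choices):

for i = 1:size(u,2)
    plot(v, u(:,i));
    axis([0 1 -1.5 1.5]);
    drawnow;              % render this displacement frame
end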

F      = @(t,uv) [uv(Nn+1:2*Nn); c^2*D2*uv(1:Nn)];
z      = [fx; gx];
[t, w] = ode45(F, t, z);

Listing 4.24 The RHS F is a function of t and w = (u, v). The system u' = v, v' = c²D2u is thus encoded by operating on the first and second halves of w separately.

4.6.1 The Method of Lines

We consider u_tt = c²Δu, t > 0, x ∈ Ω ⊂ R^r, where u : [0, ∞) × Ω → R satisfies initial conditions u(0, x) = f(x) and u_t(0, x) = g(x), with some reasonable boundary conditions, possibly mixed, on ∂Ω [8]. Let D2 ∈ M_{Nn×Nn} be a matrix for our discretization of a region Ω with Nn interior grid points and a choice of boundary conditions enforced, with fixed dx. Let t be a discretization of the desired time interval [0, t_f] with m divisions, so that dt = t_f/m. We again use the notation that u^i is the vector of approximate u values evaluated at grid points, at the fixed time t_i.

We present the method of lines approach to the wave equation. Other methods are mentioned in the exercises. The trick here (and for many other possible methods) is in converting the second-order ODE system to an even bigger first-order system. In Listing 4.24, the vector uv is of length 2Nn, the first half containing u, and the second half containing an approximation to u_t over the grid. The vectors fx and gx contain the initial values of f and g evaluated over the grid, respectively, and are used to build the first-order system's initial value vector z. Remember that only the first half of each row of w contains a displacement u; the second half contains the velocity. If r = 1, you can plot individual time slices, make a movie, or surf all u; a sketch follows below. If r = 2, you can reshape then surf time slices. Figures 4.12 and 4.13 show two time slices for an example on the interval and an example on a general region (using methods from Sect. 4.8).
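For instance, a minimal plotting sketch in the interval case (assuming the ode45 call of Listing 4.24 has run; the grid-vector name xv is an assumption of this sketch):

U = w(:, 1:Nn);                     % rows are time slices; drop velocities
surf(xv, t, U, 'Edgecolor', 'None');
xlabel('x'); ylabel('t');           % displacement surface over space-time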

4.6.2 A Good Explicit Method

Many of the obvious implicit and explicit methods for solving the wave equation have hidden flaws. The solution is ever-changing in time, unlike the equilibrium-seeking heat equation, and so errors can become more apparent over time. The most common numerical errors are dissipation and phase shift. In the first case, the amplitude decays while the quasi-period stays reasonably steady. In the second, the amplitude holds relatively constant while the phase slowly drifts until at some time the simulation is up when it should be down and the other way around.


Fig. 4.12 Solutions of the wave equation on an interval, obtained via the method of lines. The red plots denote the initial displacement, while the blue plots are from two later frames of a movie. A zero Dirichlet BC was enforced at the left end point, and a zero Neumann BC was enforced at the right end point

Fig. 4.13 Solutions of the wave equation on a general region, obtained via the method of lines with a D2 matrix generated using the ideas of Sect. 4.8. The plot on the left is of the initial displacement function f , while the plot on the right is a later frame of a movie. A zero Neumann BC was enforced on the boundary of the smaller interior holes in the region, and a zero Dirichlet BC was enforced on the outer boundary

See Exercises 4.50 and 4.51 for examples of explicit and implicit methods which demonstrate these difficulties. Listing 4.25 gives the key lines inside the time loop for a simple explicit method that does not suffer so much from these issues. It is possible to show that this explicit method generates the same solution approximations as the D'Alembert matrix method in the subsequent Sect. 4.6.3. For example, we show that solution approximations are exact in the one space-dimension case with u_t(0, x) = 0, provided that one chooses d = cΔt/Δx = 1. In this interval case, the exact solutions agree with the familiar D'Alembert principal solutions obtained by employing even and odd periodic extensions of the initial data functions, as appropriate to the choice of BC.

for i = 1:nt
    u1 = u + dt/2 * v;
    v  = v + c2*dt/dx^2*D2*u1;
    u  = u1 + dt/2 * v;
    % (plot/save u time slice here)
end

Listing 4.25 After discretizing in space and turning the resulting ODE into a system of first-order ODE, the above explicit step can be repeated without the excessive dissipation or phase shift problems other, more obvious methods may have. The vectors u and v are first initialized to the initial displacement and initial velocity given by the problem.

4.6.3 Block Matrix Systems and D'Alembert Matrices

Consider the wave equation u_tt = c²Δu, t > 0, x ∈ (0, 1)^{r+1}, r ∈ {0, 1}, with 0-D or 0-N boundary conditions, and initial conditions given by u(0, x) = f(x), u_t(0, x) = g(x). The D'Alembert matrix (as we call it here) is a block lower-triangular matrix used to define a linear system which enforces the implicit equation

(1/(dt)²)(u^{i−1} − 2u^i + u^{i+1}) = (c²/(dx)²) D̃2 u^i,

where u^i ∈ R^N denotes an approximation to the solution evaluated at time t_i over the cell grid of the region, and (1/(dx)²)D̃2 is the corresponding N × N Laplacian matrix. Let d = c²(dt)²/(dx)². We will take d = 1/2^r, so that dt = dx/(c√(2^r)), and dx = 1/n. Then the implicit equation becomes

u^{i−1} − Au^i + u^{i+1} = 0, where A = 2I + d·D̃2.

The initial condition u(0, x) = f(x) gives u^0 to be the values of f evaluated over the cell grid, reshaped in the r = 1 two space-dimensional case. Similarly, v^0 is taken from g. The first idea for approximating u^1 [3] is to use the Taylor expansion u^1 = u^0 + (dt)v^0 + (c²(dt)²/2)D2 u^0, where u_tt(0, x) = c²u_xx(0, x) ≈ c²D2 u^0. This works well for g = 0, but is not the best choice otherwise. It can be shown that enforcing

(1/dt)(u^1 − u^0) = (1/2)(v^1 + v^0)

is equivalent to

u^1 = (dt/2)(I + (1/2)A)v^0.

Combining and simplifying, we will use

u^1 = (1/2)Au^0 + (dt/2)(I + (1/2)A)v^0.

The resulting linear system in block form is

⎛  I                      ⎞ ⎛ u^2 ⎞   ⎛ Au^1 − u^0 ⎞
⎜ −A   I                  ⎟ ⎜ u^3 ⎟   ⎜    −u^1    ⎟
⎜  I  −A   I              ⎟ ⎜ u^4 ⎟ = ⎜      0     ⎟
⎜      ⋱   ⋱   ⋱          ⎟ ⎜  ⋮  ⎟   ⎜      ⋮     ⎟
⎝           I  −A   I     ⎠ ⎝ u^m ⎠   ⎝      0     ⎠

It can be shown that the solution to this linear system is the same as that obtained by the good explicit method described in Sect. 4.6.2. Thus, any observations concerning the accuracy of one method apply equally to the other. For example, when r = 0 and g = 0, the solution is exact! The general equation gives u^{i+1}_j = u^i_{j−1} + u^i_{j+1} − u^{i−1}_j, with similar conditions at j = 1 and j = n, depending on the BC. With u^1 = (1/2)Au^0, an inductive proof shows that

u^i_j = (1/2)(u^0_{j−i} + u^0_{j+i}),

which is D'Alembert's principle exactly. Like that principle, indices j ± i that lie outside {1, …, n} are interpreted via the appropriate periodic extension of f, odd about 0-D end points and even about 0-N end points of (0, 1). When r = 0 and g ≠ 0, one does not in general expect exact solutions. The solution will be as good as the approximation u^1, which in most cases is excellent by our formula. Example 4.8 works out a 6 × 6 example yielding a half-period of oscillations for a choice of initial and boundary conditions. Figure 4.14 shows these time slices, which agree exactly with the exact solution as determined by D'Alembert's principle.


In = speye(n);
I1 = spdiags(ones(m,1),-1,m,m);
A  = 2*In + d*D2;
o2 = kron([1,1],ones(n*m,1));
L  = spdiags(o2,[-2*n,0],n*m,n*m) - kron(I1,A);

% first two time slices from initial conditions
u0 = f(v);
v0 = g(v);
u1 = 1/2*A*u0 + dt/2*(speye(n)+1/2*A)*v0;

% RHS of linear system, from initial data
R = sparse(n*m,1);
R(1:2*n) = [A*u1-u0; -u1];

% Linear System Solve
u = L \ R;

u = [u0, u1, reshape(u,n,m)];    % add IC
u = [lbc*u(1,:); u; rbc*u(n,:)]; % add BC

Listing 4.26 First define the cell-grid v ∈ R^n, the initial displacement and velocity functions f and g, the D̃2 Laplacian matrix on (0, 1) with 0-D or 0-N BC enforced at either or both endpoints, and the number m of time steps beyond u^1, as desired. This code fragment will then build and solve the D'Alembert system, and format the solution for plotting. Since d = 1, column i of the final u array corresponds to time t_i = (i − 1)/(cn), for i ∈ {1, 2, …, m + 2}.
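A minimal sketch of these prerequisites (values chosen to match Example 4.8; taking lbc and rbc to be 0 for a 0-D end and 1 for a 0-N end in the final padding line is an assumption of this sketch):

n = 6; dx = 1/n; c = 1; d = 1; dt = dx/c; m = 10;
v = (dx/2:dx:1-dx/2)';                     % cell grid
D2 = spdiags(ones(n,1)*[1 -2 1],-1:1,n,n);
D2(1,1) = -3; D2(n,n) = -3;                % 0-D BC at both ends
f = @(x) double(x>1/3 & x<2/3);            % step displacement of Example 4.8
g = @(x) zeros(size(x));
lbc = 0; rbc = 0;                          % 0-D boundary padding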

Due to the discontinuous initial displacement, it would take an unreasonable number of Fourier series terms to reproduce this solution even somewhat roughly via the separation of variables solution, an issue we will be forced to deal with in the r = 1 two space-dimensional case. Listing 4.26 gives the MATLAB code for building and solving the system in the one space-dimensional case. It was used to produce the graphics in the figure, although they are precisely the plots of the result which is worked out by hand in the example.

Fig. 4.14 Six time steps obtained by solving the D’Alembert matrix system. This is one half of a period of the exact solution to the wave equation


Example 4.8. Using n = 6 cell-grid points, compute one half of a period of the solution to u_tt = u_xx, 0-D BC, with u^0 = (0, 0, 1, 1, 0, 0) and v^0 = (0, 0, 0, 0, 0, 0).

Solution: The solution is presented in the following table. The first column is u^0. The second column is u^1 = (1/2)Au^0, which can be computed by u^1_j = (1/2)(u^0_{j−1} + u^0_{j+1}) for j ∈ {2, …, 5}, and u^1_1 = (1/2)(u^0_0 + u^0_2) = (1/2)(−u^0_1 + u^0_2) by the 0-D cell-grid ghost point, and similarly at j = n. Subsequent columns are computed in order, using u^{i+1}_j = u^i_{j−1} + u^i_{j+1} − u^{i−1}_j for j ∈ {2, …, 5}, and u^{i+1}_1 = u^i_0 + u^i_2 − u^{i−1}_1 = −u^i_1 + u^i_2 − u^{i−1}_1 by the 0-D cell-grid ghost point, and similarly at j = n.

        u^0   u^1   u^2   u^3   u^4   u^5   u^6
j = 1    0     0    1/2    0   -1/2    0     0
j = 2    0    1/2   1/2    0   -1/2  -1/2    0
j = 3    1    1/2    0     0     0   -1/2   -1
j = 4    1    1/2    0     0     0   -1/2   -1
j = 5    0    1/2   1/2    0   -1/2  -1/2    0
j = 6    0     0    1/2    0   -1/2    0     0

These values are exactly the solution to the D'Alembert matrix system, and agree with D'Alembert's principle exactly. Further experimentation with g ≠ 0 and the different possible boundary conditions results in generally excellent solutions, even for step and delta functions.

The r = 1 two space-dimension implementation is not quite so magical, but retains some of the nice features of the one space-dimensional case. Note that d = 1/2 in this case. It can be shown that initial displacements of the form ψ_{p,q} = sin(pπx) sin(qπy) with p = q do propagate exactly by the solution to the D'Alembert matrix system, but when p ≠ q the results are not exact. These other frequencies propagate generally O((dx)²) accurate over time O(1), but the asymptotic error constants worsen with increasing frequency, and the error increases, oscillating about a linearly increasing mean, as time goes on. Not all is theoretically understood by the author concerning the full properties of this matrix, which appears to be new to the literature, particularly in the two space-dimension case. What has been shown so far generally follows from the substitution of trigonometric quantities into the two space-dimensional version of the D'Alembert principle u^{i+1} = Au^i − u^{i−1}, u^1 = (1/2)Au^0, which by a similar inductive proof gives u^i in terms of u^0.

The behavior of approximate solutions in this case is poor when initial displacements have discontinuities. If the initial displacement is a linear combination of the first N eigenfunctions, perhaps representing a truncated Fourier series of a discontinuous function, the solution is excellent, with the caveat that any Gibbs phenomena present in the initial displacement will be accurately propagated as well.


More comparison with other existing methods is warranted, but the results show a nearly perfect adherence to the proper frequencies, with L² norms of errors of time slices decreasing at least linearly in dx, in even the worst cases. Aliasing effects take over if too many time slices are computed for the frequencies in hand, but even then the frequencies tend to remain in lock-step with what is expected, with amplitudes varying with small, somewhat unpredictable errors about a good, predictable mean. At the time of writing, the author has not implemented this method for two space-dimensional experiments with g ≠ 0, nor on disks and general regions.

Listing 4.27 provides MATLAB code for defining and solving the linear system in two space dimensions. Figures 4.15 and 4.16 show several time slices of a solution, and some plots of errors in two non-exact cases. Example 4.9 shows a 9 × 9 example computing the first time step u^1 for a pure mode case.

Fig. 4.15 Four time slices of the solution to the wave equation with piecewise continuous initial displacement f(x, y) = sin(3πx) sin(3πy) for (x, y) ∈ (1/3, 2/3)², zero otherwise. The first 200 eigenvectors of D2 with n = 400 were used to first form the truncated Fourier sine expansion of f, which was then used as the initial displacement u^0. Figure 4.16 has a plot of the L² norm of the error of the D'Alembert system solution versus the exact propagated initial Fourier approximation, as a function of time, plotted over the first 400 time steps, or until time t = 1/√2

Fig. 4.16 The plot on the left is of the L² norm of the error of the approximate solutions from Fig. 4.15 as a function of time, plotted over the first 400 time steps, with n = 400, or until time t = 1/√2. The plot on the right is of the L² norm of the error of the approximate solutions for initial displacement defined by f(x, y) = sin(2πx) sin(πy), plotted over the first 2400 time steps, with n = 200, or until time t = 12/√2. Since the frequency is not an integer multiple of √2, the solution is not exact, but quasi-periodicity remains perfect

In  = speye(n);
IN  = speye(n^2);
I1  = spdiags(ones(m,1),-1,m,m);
A   = 2*IN + d*D2;
n2m = n^2*m;
o2  = kron([1,1],ones(n2m,1));
L   = spdiags(o2,[-2*n^2,0],n2m,n2m) - kron(I1,A);

[lam, V] = some_eigs(-n^2*D2, n);
u0     = reshape(f(Xv,Yv),N,1);  % init displace
coeffs = (V'*u0)*dx*dx;          % approx Fourier coeffs
u0     = V*coeffs;               % replace u0 w/Fourier approx

% for comparison, exact solution evaluated on grid
[Ytv, Xtv, Tv] = meshgrid(v,v,t);
w = h(Tv,Xtv,Ytv);

% populate RHS, Solve system, format for plot
u1 = 1/2*A*u0;                   % second time slice
R = sparse(N*m,1);
R(1:2*N) = [A*u1-u0; -u1];       % RHS w/initial data
u = full(L \ R);                 % Solve System
u  = reshape(u,n,n,m);
u0 = reshape(u0,n,n);
u1 = reshape(u1,n,n);
v = zeros(n,n,m+2);
v(:,:,1) = u0;
v(:,:,2) = u1;
v(:,:,3:m+2) = u;
u = v;

Listing 4.27 First define the cell-grid v ∈ R^n of (0, 1), the corresponding meshgrid Yv, Xv of (0, 1)², the initial displacement functions f and g, the D̃2 Laplacian matrix on (0, 1)² with 0-D or 0-N BC enforced on the four sides, and the number m of time steps beyond u^1, as desired. With d = 1/2, this code fragment will then build and solve the D'Alembert system, and format the solution for plotting. If the exact solution h has been defined, an array w comparable to u can be created. Column i of the final u and w arrays corresponds to time t_i = (i − 1)/(c√2·n), for i ∈ {1, 2, …, m + 2}.


Example 4.9. Using n = 3 cell-grid points over the unit square with 0-D BC, compute u^1 = (1/2)Au^0 for the initial displacement defined by f(x, y) = 4 sin(πx) sin(πy). Verify that it is exact.

Solution: The initial displacement evaluated on the cell grid and reshaped is u^0 = (1, 2, 1, 2, 4, 2, 1, 2, 1). Since dt = 1/(3√2), we have cos(√2 π (dt)) = 1/2. Then

                    ⎛ -2   1   ·   1   ·   ·   ·   ·   · ⎞ ⎛1⎞
                    ⎜  1  -1   1   ·   1   ·   ·   ·   · ⎟ ⎜2⎟
                    ⎜  ·   1  -2   ·   ·   1   ·   ·   · ⎟ ⎜1⎟
                    ⎜  1   ·   ·  -1   1   ·   1   ·   · ⎟ ⎜2⎟
u^1 = (1/2)Au^0 = ¼ ⎜  ·   1   ·   1   0   1   ·   1   · ⎟ ⎜4⎟ = (1/2)u^0
                    ⎜  ·   ·   1   ·   1  -1   ·   ·   1 ⎟ ⎜2⎟
                    ⎜  ·   ·   ·   1   ·   ·  -2   1   · ⎟ ⎜1⎟
                    ⎜  ·   ·   ·   ·   1   ·   1  -1   1 ⎟ ⎜2⎟
                    ⎝  ·   ·   ·   ·   ·   1   ·   1  -2 ⎠ ⎝1⎠

verifies that the first time step is exact.

Exercises

4.37. Using n = 3 cell-grid points, compute one half of a period of the solution to u_tt = u_xx, 0-D BC, with u^0 = (1, 2, 1) = 2 sin(πv) and v^0 = (0, 0, 0). Write down the 6 × 6 linear system for computing the two time steps u^2 and u^3, and verify that your solution satisfies the system. Compare the result to the exact separation of variables solution.

4.38. Implement the D'Alembert matrix scheme to reproduce the frames from Fig. 4.14 for an arbitrary cell-grid size n. Increase the number of time steps and observe the perfect periodicity. Change the boundary condition to Neumann at the left end point and repeat the experiment. Try to reproduce the same wave profile starting with initial displacement f = 0, and g chosen to be a suitable 'delta function' (or a combination thereof).

4.39. Use the D'Alembert principle u^{i+1} = Au^i − u^{i−1} to take two more time steps from Example 4.9 to compute u^2 and u^3.

4.40. Implement the method to reproduce the plots in Fig. 4.15.


4.41. (*) Implement a method of lines solver for the wave equation on an interval domain with zero Dirichlet BC, initial displacement given by f(x) = sin(πx), and initial velocity given by g(x) ≡ 0. Code and test, comparing against the known solution, for a long time.

4.42. (*) Implement a method of lines solver for the wave equation on an interval domain with zero Dirichlet BC. Code and test, comparing against the known solution for the initial displacement given in Exercise 3.25, with zero initial velocity.

4.43. (*) Implement a method of lines solver for the wave equation on an interval domain with zero Dirichlet BC. Code and test, comparing against the known solution for the initial displacement given in Exercise 3.26, with zero initial velocity.

4.44. (*) Implement a method of lines solver for the wave equation on an interval domain with zero Dirichlet BC. Code and test, comparing against the known solution for the initial displacement given in Exercise 3.27, with zero initial velocity.

4.45. (*) Implement a method of lines solver for the wave equation on an interval domain with zero Dirichlet BC. Code and test, comparing against the known solution for the initial displacement given in Exercise 3.28, with zero initial velocity.

4.46. (*) Implement the explicit method from Listing 4.25 for the wave equation on an interval domain with zero Dirichlet BC. Code and test, comparing against a known solution for a long time.

4.47. (**) Modify the previous code to handle zero Neumann BC at either or both ends. Test with a bump function for initial displacement or velocity, such as

f = @(x) 20*(x-.4).*(.6-x).*(x>.4).*(x<.6);

4.7 Tricomi's Equation

Consider Tricomi's equation u_xx + x·u_yy = 0. Different types of domains with various boundary conditions can be considered, some of which may lead to the non-existence of solutions. We present here the case where Ω = (a, b) × (c, d) with a < 0 < b, u(x, c) = 0 = u(x, d), and u(a, y) = g1(y) and u(b, y) = g2(y) are given (Fig. 4.17).


Fig. 4.17 The Tricomi equation changes classification on either side of the y-axis. In our example, y ∈ (c, d ) increases down columns, and x ∈ (a, b) increases across rows, 0-D BC are enforced for y ∈ {c, d }, and nonhomogeneous BC are enforced for x ∈ {a, b}

Let N be the number of point- or cell-grid points for y, and M be the number of point- or cell-grid points for x. Results for this two-variable case appear to be best when Δx = Δy, although results are not exact as they are for the wave equation with the D'Alembert matrix approach. Let yv and xv be the point- or cell-grid points for y ∈ (c, d) and x ∈ (a, b), respectively, and D2y and D2x be the corresponding N × N and M × M point- or cell-grid second difference matrices enforcing 0-D BC over the respective intervals. With In = speye(N), we can then define a Tricomi system matrix L ∈ M_{NM×NM} by

L = kron(D2x, In) + kron(spdiags(xv,0,M,M), D2y);


The RHS of the system is built in a similar fashion as for solving Laplace's equation, although in our example two of the four BC are assumed to be 0-D. The following MATLAB fragment implements the nonhomogeneous BC assuming the point grid, although this can be modified in an obvious way for the cell grid:

R = zeros(N,M);
R(:,1) = -g1(yv) / dx^2;
R(:,M) = -g2(yv) / dx^2;
R = reshape(R, N*M, 1);

One can then solve the system, and reshape. The BC can be added back, and the meshgrid command used to provide x- and y-coordinates for surf or contourf graphics.

u = L \ R;
u = reshape(u,N,M);
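For instance, a plotting sketch under the same point-grid assumptions (padding with the known boundary values, and the assembled grid vectors, are assumptions of this sketch):

U = [g1(yv), u, g2(yv)];                 % add back the x = a and x = b data
U = [zeros(1,M+2); U; zeros(1,M+2)];     % 0-D data at y = c and y = d
[Xg, Yg] = meshgrid([a; xv; b], [c; yv; d]);
contourf(Xg, Yg, U); xlabel('x'); ylabel('y');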

Boundary value problems of this type can be solved by the separation of variables (SOV). See Sect. 4.10 for examples applying SOV to find eigenvalues of the Laplacian and solve Laplace's equation, the heat equation, and the wave equation. Exercise 4.54 asks to apply that technique to verify that the general solution to Tricomi's equation with 0-D BC for y ∈ {c, d} is given by

Σ_k (a_k Ai(λ_k^{1/3} x) + b_k Bi(λ_k^{1/3} x)) ψ_k(y),

where the pairs {ψ_k, λ_k} solve the eigenvalue problem −ψ_k'' = λ_k ψ_k with 0-D BC over the interval (c, d), and Ai and Bi denote the standard solutions to Airy's equation y'' − xy = 0. Suppose (c_k) and (d_k) are the eigenfunction expansion coefficients of g1 and g2, respectively. Then with

W_k(a, b) = ⎛ Ai(λ_k^{1/3} a)  Bi(λ_k^{1/3} a) ⎞
            ⎝ Ai(λ_k^{1/3} b)  Bi(λ_k^{1/3} b) ⎠,

to solve the BVP one must solve the systems

W_k(a, b) ⎛ a_k ⎞ = ⎛ c_k ⎞
          ⎝ b_k ⎠   ⎝ d_k ⎠.

For fixed b, we define w_{kj} to be the j-th zero of the determinant of W_k(·, b). The zero determinants of W_k are spaced roughly periodically as a varies, b held fixed, with a quasi-period going to zero as k increases. Thus, if there are many high frequencies in the eigenfunction expansions of g1 and g2, there likely will be serious difficulties concerning the existence of solutions, and corresponding difficulties in accurately computing numerical approximations of solutions.
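As an illustration, the singularity w11 of Example 4.10 below can be located by applying a root finder to det(W1(·, 1)); the use of fzero and MATLAB's airy, and the initial guess, are illustrative choices of this sketch:

k = 1; b = 1; lam = k^2;                    % first 0-D y-mode, cos(y)
detW = @(a) airy(0, lam^(1/3)*a).*airy(2, lam^(1/3)*b) ...
          - airy(2, lam^(1/3)*a).*airy(0, lam^(1/3)*b);
w11 = fzero(detW, -2.4)                     % approx -2.40983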


Example 4.10. Let c = −π/2, d = π/2, b = 1, g1(y) = cos y, and g2(y) = cos 3y. Find the exact solution as a function of a, and compare with the numerical system solution for a ∈ {a0*, a1*, a2*}, where a0* = −2.2, and a1*, a2* are just inside the singularities {w11, w32}, defined to be the first zero of det(W1(·, 1)) and the second zero of det(W3(·, 1)), respectively.

Solution: The SOV eigenvalue problem in y gives λ_k ∈ {k²} and ψ_k(y) ∈ {cos((2k − 1)y), sin(2ky)}, so that the solution is obtained when a1, b1, a3, and b3 are found so that

u(x, y) = (a1 Ai(x) + b1 Bi(x)) cos(y) + (a3 Ai(3^{2/3}x) + b3 Bi(3^{2/3}x)) cos(3y)

satisfies the boundary conditions at x = a and x = b. Plotting det(W1(·, 1)) and det(W3(·, 1)) and using a root finder, we find the singularities in question (see Fig. 4.18) and choose {a1*, a2*} somewhat inside those values:

w11 ≈ −2.40983 < a1* = −2.4 < a0* = −2.2 < a2* = −1.97 < w32 ≈ −1.96729.

For each choice of a, we solve the systems

W1(a, 1) [a1; b1] = [1; 0]  and  W3(a, 1) [a3; b3] = [0; 1]

to obtain the exact solution.

Figure 4.19 shows the results for n = 200 and m = ⌈(b − a)/Δy⌉. Here, we reassign b to be the value a + mΔx, so that Δx = (b − a)/m, and comparisons with exact solutions are made at the same x locations. Visually, the graphic is indistinguishable from the SOV solution, with an L² error that converges to zero O((Δx)²). Note that near the singularity w32, the dominant feature is three large-amplitude nodal regions in y with two peaks (one quasi-period) in x, whereas near w11 one sees one large-amplitude nodal region in y with one peak (one half of a quasi-period) in x.

Example 4.11. Let c = −π/2, d = π/2, b = 1, g2(y) = 0. Take g1 to be the projection of the bump function y ↦ χ_{(−π/6,π/6)}(y) cos(3y) onto the span of {cos y, cos 3y, cos 5y, cos 7y}.


Fig. 4.18 Plots of det(Wk (·, 1)) for k = 1 and k = 3. The two circled points are the values w11 and w32 used in Example 4.10 to compute the solutions on the left and right in Fig. 4.19

Fig. 4.19 Solutions to Example 4.10 with three different choices of a (left to right: a = a1*, a = a0*, a = a2*). Note the large amplitudes and dominant frequencies for a1* near w11 and a2* near w32, whereas both frequencies are lower amplitude and present in roughly the same proportion at the value a0* between the two singularities

If more, higher frequencies are added to the expansion, the results become increasingly difficult to interpret. There are more and more values of a that cause a singularity and non-existence, or a near singularity, resulting in large-amplitude, high-frequency solutions which may be difficult to accurately approximate. Compute the four-term eigenfunction expansion g1 with pencil and paper, or approximately via numerical integration and eigenvectors of D2y. For each a considered, find the exact solution by solving the four 2 × 2 linear systems for the eight coefficients {a1, a3, a5, a7} and {b1, b3, b5, b7}. Compare with the numerical system solution. Try a few values of a near singularities, and values in between where the frequencies are more or less equally present.


Fig. 4.20 A solution to Example 4.11 with a = −3.51 ≈ w_{7,10}. Note that the feature of 5 quasi-periods in x of 7 peaks in y dominates the bump function

Solution: Solving for c_{2k−1} in g1(y) = Σ_{k=1}^{4} c_{2k−1} cos((2k − 1)y) gives

c = (3√3/(4π)) (1, 4π/(9√3), 1/2, 1/5).

For each a, solving the systems

W_{2k−1}(a, 1) [a_{2k−1}; b_{2k−1}] = [c_{2k−1}; 0],

for k ∈ {1, 2, 3, 4}, numerically gives us the accurate solution

u(x, y) = Σ_{k=1}^{4} (a_{2k−1} Ai((2k − 1)^{2/3} x) + b_{2k−1} Bi((2k − 1)^{2/3} x)) cos((2k − 1)y),

provided that we are not too close to the singularities w_{2k−1,j}, k ∈ {1, 2, 3, 4}, j ∈ N, so that the four systems are not ill-conditioned. In Fig. 4.20, we show the solution for a near a singularity and observe the predicted large-amplitude frequencies in the x and y directions. In Fig. 4.21, we show four solutions, where a is chosen somewhat experimentally to be far away from singularities and so that some or all of the four frequencies are more or less equally present in the eight coefficients. As a result, one can observe wave-like propagations of the 'initial' bump g1, producing somewhat of a billiards effect due to the 0-D BC in y.


Fig. 4.21 Four solutions to the Tricomi equation stated in Example 4.11. The values a ∈ {−1.69, −2.87, −3.6, −4.55} were chosen experimentally to be far away from singularities, and to give rise to solutions containing the first four frequencies of the bump function in similar proportions. Thus, we can see wave-like propagations of the bump with billiards-like paths due to the 0-D BC. The four values of a, increasing in magnitude, result in an increasing number of quasi-periods in x


At the time of writing of these notes, it remains for the author to try problems where the BC is other than 0-D on the two sides, regions other than rectangles, and generalizations such as u_xx + p(x, y)u_yy = 0 or u_xx + x(u_yy + u_zz) = 0.

Exercises

4.54. Derive the SOV eigenfunction expansion general solution of this section. Hint: The substitution t = λ_k^{1/3} x converts the x portion of the separated equation into Airy's equation.

4.55. Plot the separation of variables solution for the problem stated in Example 4.10 for a near a0* = −4.33. In particular, find the two singularities w_{1j} and w_{3k} nearest to a0*, and observe the transition from j peaks to k peaks in y, and the predicted number of oscillations in x, as a ranges between these two values.

4.56. (*) Approximate a solution to the problem stated in Example 4.10 by setting up and solving the system Lu = R. In particular, reproduce the graphics in Fig. 4.19. Compare approximate and SOV solutions, in particular verifying the O(1/n²) convergence.

4.57. (*) Approximate a solution to the problem stated in Exercise 4.55 by setting up and solving the system Lu = R. Compare approximate and SOV solutions.

4.8 General Regions

Generating difference matrices for arbitrary regions is a very complicated subject. Lots of software exists for generating grids, and there are many ways to form the difference matrices for solving systems on those grids. We give a brief introduction here by working several examples. The author's methods may or may not be optimal, but some of the included MATLAB tricks are not just for show: significant time slowdowns due to repeated function calls within loops can be avoided by using built-in tools such as kron, find, meshgrid, reshape, and algebra with logical variables. Testing code with tic and toc around significant code portions containing loops can help diagnose where too much time is being spent, and motivate one to seek a more efficient solution using built-in, compiled MATLAB functions.

4.8.1 The Laplacian on the Cube

All of the boundary and grid techniques previously discussed apply when enforcing BC on the six faces of the cube (0, 1)³. Listing 4.28 creates the Laplacian matrix with the point grid and with 0-D BC on all six sides.

n    = 10; N = n-1; dx = 1/n;
T    = spdiags(kron([1,-2,1],ones(N,1)),-1:1,N,N)/dx^2;
I    = speye(N);
I2   = speye(N^2);
D2xy = kron(T, I) + kron(I, T);
D2   = kron(I, D2xy) + kron(T, I2);

Listing 4.28 Two levels of kron are used here to build the point-grid block-of-block matrix for the cube domain with 0-D BC. By using different copies of T with desired boundary conditions enforced at each end of the corresponding independent variable's interval domain, it is easy to enforce Dirichlet or Neumann conditions on each of the six faces.

The cell grid is equally easy to construct. One additional line of code beyond Listing 4.28 generates the eigenvalues; another line compares the results to the exact eigenvalues; a sketch follows below. One has choices for graphics: probably the simplest is to surf the function values over two-dimensional slices, e.g., z = constant. Adding independent variables via meshgrid and boundary data follows the previous two-dimensional examples. See [35] for an example where the matrix is used to solve semilinear elliptic boundary value problems on the cube.
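For instance, a minimal sketch of the eigenvalue check (the six-eigenvalue count and the specific display are illustrative choices, not from the text):

lam = sort(eigs(-D2, 6, 'sm'));                 % rough smallest eigenvalues
[p,q,r] = meshgrid(1:3, 1:3, 1:3);
ex = sort(pi^2*(p(:).^2 + q(:).^2 + r(:).^2));  % exact: pi^2(p^2+q^2+r^2)
disp([lam, ex(1:6), lam - ex(1:6)])             % compare the first six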

4.8.2 The Laplacian on the Disk

We show how to build the D2 matrix approximating Δ on the disk B1(0) ⊂ R². This code is more complicated than previous examples, and is included nearly in its entirety. The matrix L yields O(h²) approximations of Δ evaluations. With the matrix in hand, previous techniques apply to approximate solutions of Laplace's equation, semilinear BVP, the heat equation, and the wave equation on the disk.

The difference matrix in polar coordinates L is built by considering the terms in Δu = u_rr + (1/r)u_r + (1/r²)u_θθ. See [8], or more briefly, the summary in Sect. 4.10. Note that the lower right-hand '−2' entry of T1 would change to a '−1' to enforce 0-N BC at r = 1, or a '−3' for the cell grid with 0-D BC at r = 1. The matrix S defined below is used at r = 0 to obtain the neighbor across the origin, 180 degrees away, corresponding to the ±m/2 block diagonals. The polar D2r, D1r, and D2t matrices, which take second differences with respect to r, first differences with respect to r, and second differences with respect to θ, respectively, are given by D2r = (1/(dr)²)D̃2_r, D1r = (1/(2dr))D̃1_r, and D2t = (1/(dt)²)D̃2_t, as below. All matrices are sparse; we use speye(k) for I_k, and spdiags otherwise. We have the difference matrix in polar coordinates

L = D2r + R1·D1r + R2·D2t,


Fig. 4.22 We impose the 0-D or 0-N BC for the cell grid at r = 1 in the usual way using ghost points (hollow dot). There is no boundary at r = 0, hence no boundary condition there. Instead, we use the neighbor 180 degrees away (hollow around solid dot). These polar neighbors correspond to a single one in the upper left-hand corner of the block S, placed on the ±m/2 block diagonals by Q, as in Figs. 4.23 and 4.24


D̃2_r = kron(I_m, T1) + kron(Q, S) ∈ M_{Nm×Nm}, where

T1 ∈ M_{N×N} is tridiag(1, −2, 1) with its lower right entry replaced by −3 (cell grid, 0-D BC at r = 1),
Q ∈ M_{m×m} has ones on the ±m/2 diagonals and zeros elsewhere, and
S ∈ M_{N×N} has a single nonzero entry, S(1, 1) = 1.

Fig. 4.23 Component matrices for the u_rr portion of the Laplacian in polar coordinates on the unit disk

D̃1_r = kron(I_m, T2) − kron(Q, S) ∈ M_{Nm×Nm}, where T2 ∈ M_{N×N} is tridiag(−1, 0, 1) with its lower right entry replaced by −1.

Fig. 4.24 Component matrices for the u_r portion of the Laplacian in polar coordinates on the unit disk


D̃2_t = kron(T3, I_n) ∈ M_{Nm×Nm}, where T3 ∈ M_{m×m} is tridiag(1, −2, 1) with additional 1 entries in the (1, m) and (m, 1) corners, enforcing the periodic BC in θ.

Fig. 4.25 Component matrices for the u_θθ portion of the Laplacian in polar coordinates on the unit disk

R1 = kron(I_m, spdiags(1./rm, 0, N, N)) ∈ M_{Nm×Nm} and R2 = kron(I_m, spdiags(1./rm.^2, 0, N, N)) are the block-diagonal matrices carrying the 1/r and 1/r² factors.

Fig. 4.26 Component matrices for the 1/r and 1/r² portions of the Laplacian, using the cell grid on the disk

which approximates the Laplacian in polar coordinates [8]

Δu = u_rr + (1/r)u_r + (1/r²)u_tt, r ∈ (0, 1), t = θ ∈ [0, 2π).

We use the cell grid, as r = 0 needs to be avoided in the singular polar coordinate representation of the Laplacian. Choose a boundary condition for r = 1. The periodic BC must of course be enforced for θ. Define the number of radial divisions n, the grid r, the midpoints rm, the number m of angular divisions of the interval [0, 2π), and the corresponding cell-grid angles tm. Listing 4.29 creates the ODE n × n second difference matrix T with the appropriate BC at r = 1, and uses kron with Im to get what is almost the correct nm × nm matrix D2r. The matrix U contains a few ones to be added to this matrix, corresponding to the neighbors (r_1, t_{j±m/2}) of vertices (r_1, t_j) which lie across the origin, 180° away. See Figs. 4.22 and 4.23. Listing 4.30 creates the ODE n × n matrix D1 using three-point central first differences, also with appropriate BC at r = 1, and uses kron with Im to get what is almost the correct nm × nm matrix D1r. One must subtract the same U term, corresponding to the neighbors across the origin of the vertices (r_1, t_j).


% create D2r, 0-D BC at r=1, use t\pm\pi nbr at r=0.
T = spdiags(kron([1,-2,1],ones(n,1)),[-1,0,1],n,n);
T(n,n) = -3;               % D-BC at r=1, ghost point
Im = speye(m);
D2r = kron(Im, T);
% no BC at r=0, add a '1' 180-deg across center
Q = spdiags(kron([1,1],ones(m,1)),[-m/2,m/2],m,m);
S = sparse(n,n); S(1,1) = 1;
U = kron(Q,S);
D2r = 1/dr^2 * (D2r + U);

Listing 4.29 The block matrix D2r is nearly the correct second difference matrix for computing u_rr. The matrices Q and S are used to construct the block matrix U containing only 1's corresponding to neighbors across the origin.


% 3 pt central first diff D1r, across center at r=0
T = spdiags(kron([-1,1],ones(n,1)),[-1,1],n,n);
T(n,n) = -1;               % D-BC at r=1, ghost point
R = kron(Im, spdiags(1./rm, 0, n, n));
D1r = 1/(2*dr) * (kron(Im, T) - U);

Listing 4.30 The U term in the D1r three-point first difference matrix again accounts for neighbors across r = 0. The diagonal matrix R contains real numbers, since r_i ≠ 0 on the cell grid.


T      = spdiags(kron([1,-2,1],ones(m,1)),-1:1,m,m);
T(1,m) = 1; T(m,1) = 1;    % periodic in theta
In     = speye(n);
R2     = kron(Im, spdiags(1./rm.^2, 0, n, n));
D2t    = 1/dt^2 * kron(T, In);

Listing 4.31 The construction of D2t for computing u_θθ is straightforward. Periodic boundary conditions are enforced.


% polar Lap: \Delta u = u_rr + 1/r u_r + 1/r^2 u_tt
L = D2r + R * D1r + R2 * D2t;

Listing 4.32 The polar cell-grid block matrix.

Listing 4.31 constructs the three-point second difference matrix D2t, which is periodic in θ. Listing 4.32 assembles the full matrix, multiplying the corresponding rows of the u_r and u_θθ terms by factors of 1/r and 1/r², respectively (Figs. 4.25 and 4.26).


[Tr, Rt] = meshgrid([tm; tm(1)], rm);
[X, Y]   = pol2cart(Tr, Rt);

Listing 4.33 Converting the grid to Cartesian coordinates for use with surface and contour plots. Here we have added the first angle to the end, to cure a visually obvious 'missing slice' at θ = 0. One could also add r = 1 to the end of the r's in order to close up the graphic all the way to the boundary. Of course, one must then pad the corresponding z-coordinates matrix at the boundaries to be a matrix of the same dimensions as X and Y.

The usual reshaping and padding with boundary data is required for graphics of solutions on the disk. For circular graphics, one can use Listing 4.33 to generate proper x- and y-coordinates for use with a surf command. Any of the previous problems for square domains can now be investigated on the disk. In particular, we can solve the nonhomogeneous Laplace's equation, compute eigenfunctions, solve the heat and wave equations, and solve semilinear elliptic BVP on the disk (Fig. 4.27). Without much work, the matrix can be modified in an obvious way for the annulus and for angular sections of the disk and annulus, taking care to enforce the desired boundary conditions on each piece of the boundary. The radially symmetric version of L is an ODE operator, and hence not a block matrix. It can be generated by essentially deleting the kron lines from the full polar code. Radial solutions to the same collection of PDE on radial regions can be very efficiently computed using this matrix [26].
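As a quick sanity check, a minimal sketch (assuming the polar cell-grid matrix L of Listing 4.32 with 0-D BC at r = 1; the comparison values are the squares of the first few Bessel zeros):

lam = sort(eigs(-L, 4, 'sm'));                % rough disk eigenvalues
ex  = [2.40483 3.83171 3.83171 5.13562]'.^2;  % j_{0,1}, j_{1,1} (twice), j_{2,1}
disp([lam, ex, lam - ex])                     % lambda = (zeros of J_k)^2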

4.8.3 Accurate Eigenvalues of the Laplacian on Disk, Annulus, and Sections

Building the matrix for an annulus, a section of a disk, or a section of an annulus is actually easier than the case of the full disk. One does not need to construct the matrices Q, S, or U used to handle neighbors near r = 0. Instead, one enforces the desired 0-D or 0-N boundary condition at r = a ≥ 0 and r = b > a. Either the cell or point grid can be used for the annulus, since avoiding r = 0 is not an issue. For a sector, instead of periodic boundary conditions, one enforces the desired 0-D or 0-N boundary condition at the initial and final angles θ0 < θf. One can use the constructed matrix to investigate any of the previous problems in these domains.

One can obtain accurate eigenvalues and eigenfunctions of −Δ on disk, annulus, or sector domains by first computing rough approximations with

eigs(-Lap, k, 'sm'),


Fig. 4.27 Eigenfunctions of −Δ on the disk with 0-D BC corresponding to λ1 , λ6 , λ15 , and λ2

on a not particularly fine level, for some reasonable number k. The rough eigenvalues are used as initial guesses to accurately find the roots of certain known transcendental expressions [8] involving Bessel functions, depending on the region and boundary conditions. The Bessel function number (determined by k) and Bessel zero number j for the corresponding eigenvector can be determined by observing the number of sign changes in the θ and r directions, respectively. For the convenience of the reader, some of the useful eigenfunction forms and corresponding nonlinear eigenvalue equations are included in Table 4.1. Derivatives of Bessel functions are required in the eigenvalue equation when enforcing a 0-N boundary condition on a radial boundary. Derivatives are also required when using Newton's method to find the zeros (square roots of eigenvalues) of these equations, although the secant method avoids this added headache. We can use well-known calculus identities for Bessel functions to compute these derivatives, or use the symbolic capability of MATLAB with the symbolic toolkit. In general, it may be expedient to simply cut and paste the output of a symbolic calculation in order to define a non-symbolic function that is fast-performing.

Table 4.1 Eigenvalue equations for some combinations of radial regions and boundary conditions. The scalars s_j^k are the j-th zeros of the k-th eigenvalue equation g_k(s) = 0. Since λ_{j,k} = (s_j^k)², the square roots of eigenvalues from eigs(-Lap, ...) make good initial guesses for an iterative method like Newton's to compute these zeros. The number of sign changes in the radial and angular directions, respectively, indicate the j and k indices.

Region            BC    Eigenvalue equations, corresponding eigenfunctions
Disk(R)           0-D   g_k(s) = J_k(sR) = 0, k ≥ 0,
                        ψ_{j,0}(r, θ) = J_0(s_j^0 r),
                        ψ^s_{j,k}(r, θ) = J_k(s_j^k r) sin(kθ),
                        ψ^c_{j,k}(r, θ) = J_k(s_j^k r) cos(kθ).
Disk(R)           0-N   g_k(s) = (∂/∂s)J_k(sR) = 0, k ≥ 0,
                        same as 0-D, different s_j^k, with ψ_0(r, θ) ≡ 1, λ_0 = 0.
Disk(R, 0, θf)    0-D   g_k(s) = J_{q_k}(sR) = 0, q_k = πk/θf, k ≥ 1,
                        ψ_{j,k}(r, θ) = J_{q_k}(s_j^k r) sin(q_k θ).
Disk(R, 0, θf)    0-N   g_k(s) = (∂/∂s)J_{q_k}(sR) = 0, q_k = πk/θf, k ≥ 1,
                        ψ_0(r, θ) ≡ 1, λ_0 = 0, ψ_{j,k}(r, θ) = J_{q_k}(s_j^k r) cos(q_k θ).
Ann(a, b)         0-D   g_k(s) = Y_k(bs) − (Y_k(as)/J_k(as))J_k(bs) = 0, k ≥ 0,
                        ψ_{j,0}(r, θ) = Y_0(s_j^0 r) − (Y_0(s_j^0 a)/J_0(s_j^0 a))J_0(s_j^0 r),
                        ψ^s_{j,k}(r, θ) = (Y_k(s_j^k r) − (Y_k(s_j^k a)/J_k(s_j^k a))J_k(s_j^k r)) sin(kθ),
                        ψ^c_{j,k}(r, θ) = (Y_k(s_j^k r) − (Y_k(s_j^k a)/J_k(s_j^k a))J_k(s_j^k r)) cos(kθ).
Ann(a, b, 0, θf)  0-D   g_k(s) = Y_{q_k}(bs) − (Y_{q_k}(as)/J_{q_k}(as))J_{q_k}(bs) = 0, q_k = πk/θf, k ≥ 1,
                        ψ_{j,k}(r, θ) = (Y_{q_k}(s_j^k r) − (Y_{q_k}(s_j^k a)/J_{q_k}(s_j^k a))J_{q_k}(s_j^k r)) sin(q_k θ).

See Sect. 4.10 for a brief overview of related theory. Figure 4.28 shows that we do not want to recompute the symbolic derivative(s) for each Newton iteration. Using each of the three different function declarations f1, f2, and f3 from Listing 4.34, a root of a Bessel function was found via Newton's method. In order to get a significant timed event, we forced n iterations, for n ∈ {10, 20, 40, 80, 160}. Note the orders of magnitude of speedup when we use the hardcoded function f3. The definition of f2 uses a single symbolic computation,

syms q R s t;

g  = @(q, R, s) besselj(q, s*R);

f1 = @(s) subs(diff(g(q,R,t),t),s);
z1 = f1(s);
f2 = @(q,R,s) eval(z1);
f3 = @(q,R,s) (q*besselj(q, R*s))/s - R*besselj(q+1, R*s);

Listing 4.34 The object function g for Newton's method to find roots of a Bessel function, and three different computations of the derivative of g with respect to s. The first method computes a derivative each time it is called. The second only computes the derivative once, but does an expensive evaluation each time it is called. The third method is hardcoded from a pencil-and-paper calculation. Figure 4.28 shows the extreme difference in computation time for finding roots using the three different methods.

but apparently the cost of the MATLAB command eval is high in this situation. There are surely other (possibly better or more convenient) ways to define fast inline functions from symbolic computations. The important thing to keep in mind is that time costs can vary greatly, and can be extreme. See Listing 2.9 for another example of symbolic computation in MATLAB. See Fig. 4.29 for an example where we use the described approach on a quarter annulus with 0-D BC enforced on all four sides.
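For instance, a minimal sketch of the Newton iteration itself, using the hardcoded derivative f3 of Listing 4.34 (the initial guess and iteration count are illustrative choices):

q = 0; R = 1; s = 2.4;                      % s0 near the first zero of J_0
for it = 1:5
    s = s - besselj(q, R*s) / f3(q, R, s);  % Newton step for g(s) = J_q(sR)
end
s                                           % converges to j_{0,1} approx 2.40483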

Fig. 4.28 The three graphs show the execution time in iterating Newton's method n times to find a root of a Bessel function when the derivative f of the object function g is coded in three different ways. The symbolic function f1 recomputes a derivative in each iteration, a symbolic expression is computed once and then evaluated each time f2 is called, and a hardcoded expression is used to define f3. To make the three graphs visible on the same axis (seconds versus log2(n/10)), three different time scales are used. The red curve for f1 is in seconds, the blue curve for f2 is in 1/10th of seconds, and the green curve for f3 is in 1/1000th of seconds

4.8.4 The Laplace–Beltrami Operator on a Spherical Section

One can build a matrix for the Laplacian in spherical coordinates in a similar way to the cube, using nested kron statements to build a block-of-blocks matrix. As a slightly simpler example, the Laplace–Beltrami operator on the sphere can be obtained by starting with the Laplacian in spherical coordinates on a thin shell with 0-N BC in the radial direction, and taking a limit as the shell goes to the sphere. Thus, by setting to zero the derivatives with respect to r and otherwise taking r = 1 in the well-known spherical coordinates expression

Δu = [u_rr + (2/r)u_r] + (1/(r² sin φ))(sin φ · u_φ)_φ + (1/(r² sin²φ)) u_θθ,

where the bracketed radial portion equals (1/r²)(r² u_r)_r and is the part set to zero, one obtains the expression



Δ_S u = (1/sin φ)(sin φ · u_φ)_φ + (1/sin²φ) u_θθ = cot φ · u_φ + u_φφ + (1/sin²φ) u_θθ

for the Laplace–Beltrami operator on the unit sphere. The first and second derivative matrices can be generated in the same manner as for the disk and annulus; a minimal sketch follows below. Again, midpoints should be used to avoid the singularities at the poles φ = 0 and φ = π. On the entire sphere, periodic BC should be enforced in θ, while values across the poles should be used for φ boundary terms. On sections of the sphere defined by θ ∈ [0, θ̄], either 0-D or 0-N BC can be enforced on either boundary, as desired.

Figure 4.30 shows the first four eigenfunctions of the Laplace–Beltrami operator on a quarter-sphere with 0-N BC. The eigenvalues and eigenfunctions were computed using eigs after building the corresponding matrix as described above. Figure 4.31 shows the MI=2 sign-changing solution to the semilinear elliptic equation Δ_S u + su + u³ = 0 on the quarter-sphere with 0-N BC. The parameter s was chosen just to the left of λ3 = 6. The initial guess was a small scalar multiple of the corresponding eigenfunction Ψ3.
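A minimal sketch of a full-sphere matrix under these conventions (all names and grid sizes here are illustrative assumptions, and the pole coupling across θ ± π, handled as for the disk's origin, is omitted for brevity):

nf = 60; m = 120; df = pi/nf; dth = 2*pi/m;
fm = (df/2:df:pi-df/2)';                              % phi midpoints
T2 = spdiags(ones(nf,1)*[1 -2 1],-1:1,nf,nf)/df^2;    % u_phiphi
T1 = spdiags(ones(nf,1)*[-1 0 1],-1:1,nf,nf)/(2*df);  % u_phi
Tt = spdiags(ones(m,1)*[1 -2 1],-1:1,m,m);
Tt(1,m) = 1; Tt(m,1) = 1; Tt = Tt/dth^2;              % periodic in theta
C  = spdiags(cot(fm),0,nf,nf);                        % cot(phi) factor
S2 = spdiags(1./sin(fm).^2,0,nf,nf);                  % 1/sin^2(phi) factor
LS = kron(speye(m), T2 + C*T1) + kron(Tt, S2);        % Laplace-Beltrami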

(The panels of Fig. 4.29 are titled by the computed eigenvalues; left/right pairs: 185.6494/185.7138, 273.1503/273.3553, 414.4574/415.2572, 660.2034/661.0588, 754.398/755.4063, 911.5558/913.2445, 1446.708/1450.9554, 1541.9321/1546.3337, 1700.9707/1706.0605.)

Fig. 4.29 Eigenfunctions of −Δ on the quarter annulus with r ∈ (1/4, 1/2) and 0-D BC. The eigenvalues and eigenfunctions on the left are taken from the output of eigs(−D2, ...), whereas the eigenfunctions on the right are computed using the separation of variables and Bessel functions: ψ_{j,k}(r, θ) = (Y_{2k}(s_j^k r) − (Y_{2k}(s_j^k a)/J_{2k}(s_j^k a))J_{2k}(s_j^k r)) sin(2kθ), with a = 1/4, b = 1/2. The scalars s_j^k are the j-th zeros of the nonlinear Bessel function expressions g_k(s) = Y_{2k}(bs) − (Y_{2k}(as)/J_{2k}(as))J_{2k}(bs), which follow from enforcing the 0-D boundary condition at a and b. Given an eigenvector approximation from eigs, one can observe the corresponding j and k by counting the number of sign changes in the radial and angular directions, respectively. The corresponding approximate eigenvalue can be used as an initial guess to Newton's method, the secant method, or another iterative zero-finding method, applied to the object function g defined above. The result is a very accurate eigenvalue λ_{j,k} = (s_j^k)², and a correspondingly accurate eigenfunction ψ_{j,k}(r, θ)

eigenfunction Ψ3 . Figure 4.32 shows a MI=2 sign-changing solution on the sphere. It could have been obtained using the appropriate Laplace–Beltrami matrix on the sphere or by stitching together two 0-N solutions on half-spheres, but was in this case stitched together from four MI=2 theta-constant solutions with 0-N BC on four quarter-spheres. Figure 4.33 shows a sign-changing solution on the sphere stitched from four MI=3 non-theta-constant solutions with 0-N BC on quarter-spheres. The same stitched MI=3 solution can be obtained by stitching two MI=3 solutions on the half-sphere or or by solving the periodic problem on the entire sphere.

4.8.5 A General Region Code

We generate the second difference matrix D2 for a subdomain Ω of the unit square with 0-D BC. There is a built-in function delsq that can do this. It takes as input a matrix G ∈ R^{m×m}, with zero entries denoting points external to the domain and distinct natural number entries denoting vertex numbers. The function is written using the built-in function find, for ease of high-level programming and for efficiency. The implementation uses both single and double indexing on the same arrays. A more straightforward brute-force algorithm for building D2 would perhaps not be too difficult for the novice programmer to implement from scratch, but could easily be several orders of magnitude slower in execution.
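For comparison, the built-in pair numgrid and delsq can produce the analogous matrix for several predefined regions. A minimal sketch (these are standard MATLAB functions; the sign and scaling convention in the last line is ours, chosen to match the D2 of this text, and differs from delsq's own):

G  = numgrid('L', 32);   % numbered interior grid points of an L-shaped region
A  = delsq(G);           % sparse 5-point negative Laplacian matrix, 0-D BC
h  = 2/31;               % numgrid works on [-1,1]^2 with 32 points per side
D2 = -A / h^2;           % sign/scale convention matching this text's D2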


Fig. 4.30 First four eigenfunctions and approximate eigenvalues of the Laplace–Beltrami operator on the quarter-sphere with 0-Neumann BC


Fig. 4.31 The MI=2 sign-changing solution to Δ_S u + su + u³ = 0 on the quarter-sphere with 0-Neumann BC

For example, performing a large number of small dynamic memory allocations is often convenient but always expensive, as are some types of function calls within nested loops. We present a moderately long MATLAB code in its entirety. The portion that builds D2 from G is essentially copied from MATLAB®'s program delsq, but modified to use the cell grid to enforce the BC. The code presented can easily be further modified to enforce Neumann BC instead, and can be extended to regions in R³ in a straightforward way. The vector p of interior vertex indices that is computed and used in the D2 generation code is reused in a subsequent code segment, where an eigenfunction of −D2 is formatted for a graphic generated with surf.


Fig. 4.32 This MI=2 sign-changing solution on the sphere was stitched from four solutions on quarter-spheres, one such depicted in Fig. 4.31. The same solution can be found by stitching k solutions on (2π/k)-spheres for k ≥ 1

Fig. 4.33 This MI=3 sign-changing solution on the sphere was stitched from four MI=3 non-theta-constant solutions with 0-N BC on quarter-spheres. The same solution can be obtained by stitching two MI=3 solutions on half-spheres or by solving once on the entire sphere, but cannot be produced by stitching more solutions on smaller sections of the sphere

Listing 4.35 defines the logical membership function Q and initializes the square grid in (0, 1)². The cell-grid elements are tested for membership, and each vertex found is given a sequentially increasing index. The number N of vertices inside Ω is then known. The loop in Listing 4.36 determines the neighbors inside Ω for each vertex inside Ω. The author's first attempt at coding the loop was somewhat more straightforward, but had an execution time several orders of magnitude slower than the provided implementation, which is almost verbatim what one finds in the MATLAB function delsq. Using find provides significant time savings over using additional for loops. For every neighboring vertex k of vertex j, we set D2(j, k) = 1.


Q = @(x,y) ~((abs(x-.5)<.25) & (abs(y-.5)<.25));
     % membership test; the original expression is garbled in this
     % extraction, so this centered-square cutout is only a stand-in
n = 500; m = n+2; dx = 1/n;
x = linspace(0,1,n+1)';
v = x(1:end-1) + dx/2;
va = [0; v; 1];
[Y, X] = meshgrid(va, va);
G = zeros(m,m);
N = 0;
for k = 1:n
    for j = 1:n
        if Q(v(j), v(k))
            N = N+1;
            G(j+1,k+1) = N;
        end
    end
end

Listing 4.35 Cell-grid points in (0, 1)² are tested for membership in the region. A vertex number is assigned to each element found inside the domain.

After the loop, the diagonal entries D2(j, j) are set to 8 − deg(vertex j), as per the cell grid and ghost point method of enforcing 0-D BC. The entries could be set to deg(vertex j) instead to enforce the 0-N BC, again by the cell grid. We now have generated the matrix D2. Calculating and sorting eigenvalues of D2 is no problem with eigs. In Listing 4.37, we extract an eigenvector and pad it with zero boundary data for plotting. Note that we are finding further use for the vector p of single indices into the array Z ∈ R^{m×m} corresponding to interior points of Ω. Generating a nice graphic still requires some work. Listing 4.38 determines which cell-grid points in (0, 1)² are not in Ω and have none of their eight surrounding cell-grid elements inside Ω either. We have adapted the technique from delsq to perform the eight tests in eight computations of size N. This slight headache seems required in the cell-grid case to get a nice-looking graphic, since the boundary zero set falls between interior points and external ghost points, not passing through cell-grid points themselves. We set the Z values corresponding to these external–external points to NaN (not-a-number). This trick causes surf to ignore those points entirely. After executing the code in Listing 4.38, one can surf Z versus X and Y (see, for example, Fig. 4.34). The code in Listing 4.36 can be modified to handle a variety of boundary conditions on different portions of the boundary ∂Ω. As an example, mixed 0-D/0-N BC can be enforced by first defining a Boolean function P (similar to Q) which determines on which portions of the region 0-N will be enforced, and then replacing the line of Listing 4.36 that builds the diagonal matrix O with the four lines in Listing 4.39.

p = find(G);                    % Indices of interior points
i = []; j = []; s = [];
for k = [-1 m 1 -m]             % 4 directions
    S = G(p+k);                 % nbrs in kth direction
    q = find(S);                % ndx of pts w/interior nbrs
    i = [i; G(p(q))];           % col ndx of kth-dir nbr
    j = [j; S(q)];              % row ndx of kth-dir nbr
    s = [s; ones(length(q),1)]; % 1's for nbrs q of p
end
D2 = sparse(i,j,s);             % first create off-diag bands
O  = spdiags(8 - sum(D2)', 0, N, N); % diag vals
D2 = 1/dx^2 * (D2 - O);

Listing 4.36 The algorithm from MATLAB's delsq command adapted to the cell grid. The first use of the find command returns a list of single indices into the square matrix G corresponding to nonzero entries, i.e., vertices inside the domain. For each of the four compass directions, neighbors in that direction which are themselves inside the domain are found, and the column and row indices of such neighbors are accumulated in i and j. These neighbors each contribute a '1' in their respective (i, j) positions of the matrix. The number of neighbors gives the degree, which determines the diagonal values.

[V,lam]   = eigs(-D2, 6, 'sm');
[lam,ndx] = sort(diag(lam));
V         = V(:,ndx);
V         = (diag(1./max(abs(V))) * V')';
kplot     = 4;
V         = V(:, kplot);
Z         = zeros(m);
Z(p)      = V;

Listing 4.37 We calculate sorted and normalized eigenfunctions and extract an eigenvector. The function is padded with zeros to fit the unit square in which its domain is embedded.

p = find(~Q(X, Y));
S = zeros(size(p));
for k = [-m-1 -m -m+1 -1 1 m-1 m m+1]
    pk = max(1, min(p+k, m^2));
    S  = S + G(pk);
end
q = find(S == 0);
Z(p(q)) = NaN;

Listing 4.38 We set each entry of Z to NaN if it corresponds to a point not in the domain whose 8 neighbors (including diagonal neighbors) are also not in the region. This gives a nicer looking graphic when the number of divisions is not large.

Fig. 4.34 The fourth eigenfunction of −Δ with 0-D BC on a non-square subdomain of (0, 1)2

NB     = find(P(X(p),Y(p)));  % find 0-N grid points
BC     = (8 - deg);           % 0-D BC cell-grid vals
BC(NB) = deg(NB);             % 0-N BC vals
O      = spdiags(BC, 0, N, N);% cell-grid diag vals

Listing 4.39 The diagonal values for cell-grid points next to the boundary depend on the choice of 0-D or 0-N on that portion of the boundary. Here, P is a membership function for the 0-N BC. Recall that p already contains the indices of interior points. The list of indices NB gives interior points where the 0-N BC is in effect. If a point is interior and not adjacent to the boundary, it is of degree 4, and so a 4 is placed on the diagonal regardless of BC.
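For concreteness, here is a hypothetical membership function of the kind the garbled listing below appears to define; the center and radius are illustrative only:

P = @(x,y) ((x-.5).^2 + (y-.5).^2) < .1;  % 0-N on the part of the boundary
                                          % meeting this disk, 0-D elsewhere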

[A further code listing is garbled in this extraction: it defines membership functions Q1, ..., Q5, combines them into a composite region function Q, and defines a 0-N membership function P via a disk test of the form (x-.5).^2 + (y-.5).^2 < ..., whose threshold is lost.]

4.9 First-Order PDE and the Method of Characteristics



Fig. 4.36 Characteristic curve solution plots for the nonlinear transport problem in Example 4.13. The plot on the left is just before the shock, the central plot is at t = 1/4 where the shock occurs and the solution is no longer differentiable, and the plot on the right shows the non-existence of a solution after the shock

for x ∈ [0, 1]. Find the characteristic solution and the time t at which a shock develops.

Solution: The characteristic curves determined by the characteristic ODE ẋ = p(u), with x(0) = 1/4 and x(0) = 3/4, are x(t) = t + 1/4 and x(t) = 3/4 − t, respectively. The value u(x(t), t) is constant at 0 and 1 along each of these two curves, respectively. There is a shock at the intersection of these two curves, corresponding to t = 1/4, x = 1/2. Using Listing 4.41, we can obtain the plots in Fig. 4.36 showing the solution profile along characteristics up to and beyond the development of a shock.
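Since Listing 4.41 is referenced throughout the exercises below, here is a minimal sketch of a characteristic-curve plotter in the same spirit. It is not the book's listing; the initial profile f and speed function p below are assumptions chosen only to be consistent with the two curves computed above:

% Method-of-characteristics sketch: u is constant along x(t) = x0 + p(f(x0))*t.
f  = @(x) (x > 1/4) .* min(1, 2*(x - 1/4));  % assumed increasing initial profile
p  = @(u) 1 - 2*u;                           % assumed characteristic speed
x0 = linspace(0, 1, 201)';                   % feet of the characteristics
for t = [0.2, 0.25, 0.3]                     % before, at, and after the shock
    figure; plot(x0 + p(f(x0))*t, f(x0));    % u(x(t),t) = f(x0) along each curve
end

With these choices every characteristic foot x0 ∈ [1/4, 3/4] reaches x = 1/2 at t = 1/4, so the plotted profile folds over after that time, as in Fig. 4.36.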

Exercises

4.92. (*) Reproduce Fig. 4.36 using the method of characteristics solver from Listing 4.41.

4.93. (*) For an arbitrary initial function f, solve u_t + u_x = t by the method of characteristics. Compare the exact solution for the initial f in Example 4.13 for x₀, t ∈ [0, 1] with the numerical solution from Listing 4.41.

4.94. (*) For an arbitrary initial function f, solve u_t + u_x = x by the method of characteristics. Compare the exact solution for the initial f in Example 4.13 with the numerical solution from Listing 4.41 for x₀, t ∈ [0, 1], with appropriately modified p and q definitions.

4.95. (*) For an arbitrary initial function f, solve u_t + t u_x = x by the method of characteristics. Compare the exact solution for the initial f in Example 4.13 with the numerical solution from Listing 4.41, with appropriately modified p and q definitions.


4.96. (*) For an arbitrary initial function f, solve u_t + x u_x = t by the method of characteristics. Compare the exact solution for the initial f in Example 4.13 with the numerical solution from Listing 4.41, with appropriately modified p and q definitions.

4.97. (*) For an arbitrary initial function f, solve u_t + x t u_x = 0 by the method of characteristics. Compare the exact solution for the initial f in Example 4.13 with the numerical solution from Listing 4.41, with appropriately modified p and q definitions.

4.98. (*) For the increasing initial function f₁ from the above problems and the decreasing f₂ defined piecewise by u(x, 0) = 1 near x = 0 [the piecewise definition is garbled in this extraction] for x ∈ [0, 1], solve u_t + u u_x = 0 by the method of characteristics. Determine which case develops a shock and at what time. Verify that the numerical solutions agree with those solutions.

4.10 Theory: Separation of Variables for PDE on Rectangular and Polar Regions

For the convenience of the reader, we present a selection of techniques and examples for some of the most fundamental types of PDE encountered in a typical introductory course on the subject. We apply the separation of variables (SOV) technique to a variety of problems. This brief exposition touches on most if not all of the practical methods by which analytical solutions to this chapter's examples and exercises can be obtained. We compute eigenfunctions of the negative Laplacian for the square, cube, disk, annulus, and sectors. We write Fourier series and eigenfunction expansion solutions for Laplace's equation and for the heat and wave equations. Various boundary conditions are considered.

4.10.1 Eigenfunctions of the Laplacian

For a variety of homogeneous boundary conditions, the equation −Δu = λu on Ω ⊂ Rⁿ can be solved by the separation of variables technique.


Eigenfunctions on the Rectangle, Square, and Cube

We first consider the unit square Ω = (0, 1)², with 0-D BC u = 0 on ∂Ω. In the SOV method as applied to PDE in two variables, one seeks solutions of the form u(x, y) = p(x)q(y). In the case of eigenfunctions of the negative Laplacian on a rectangle, this gives

p''q + pq'' = −λpq  ⟺  p''/p + λ = −q''/q = μ.

The separation constant μ can be introduced because the two separated ratios are functions of x and y alone, respectively, and so both must be constant. Generally, one seeks an ODE eigenvalue problem in one of the two variables, determining μ, and then solves the other ODE. In the present case, the BC give −q'' = μq with q(0) = 0 = q(1), and −p'' = (λ − μ)p with p(0) = 0 = p(1), whence we have a pair of ODE eigenvalue problems. We see that μ = (kπ)² and λ − μ = (jπ)², corresponding to q_k(y) = sin(kπy) and p_j(x) = sin(jπx). Thus, the eigenvalue problem −Δu = λu on (0, 1)² with u(0, y) = u(1, y) = u(x, 0) = u(x, 1) = 0 is solved by the doubly indexed family of eigenvalues and eigenfunctions

λ_{j,k} = (j² + k²)π²,  ψ_{j,k}(x, y) = sin(jπx) sin(kπy),  for j, k = 1, 2, ....

For a general rectangle with homogeneous BC, the eigenfunctions p_j, q_k, and hence ψ_{j,k} change accordingly, as do the corresponding eigenvalues. Example 4.14 gives an example with mixed BC on a rectangle. In considering the Laplacian on the cube or a rectangular solid, one can separate u(x, y, z) = p(x)q(y, z), whence the eigenvalues λ_{j,k,l} are eigenvalues of the ODE −p'' = μp added to eigenvalues of the two-variable PDE −Δq = γq, each with the appropriate inherited BC. The eigenfunctions ψ_{j,k,l} are the corresponding products of p_j with q_{k,l}. Example 4.15 computes normalized, triply indexed eigenvalues and eigenfunctions of the negative Laplacian on the unit cube with 0-D BC. Computing eigenfunction expansion coefficients for functions defined on these types of regions in Cartesian coordinates reduces to Fourier series (see Sect. 3.8). Example 4.15 demonstrates this by further computing the eigenfunction expansion of f ≡ 1 on the unit cube with 0-D BC.

Example 4.14. Find the eigenvalues and eigenfunctions for −Δu = λu on (0, 1) × (0, π)


with the mixed and periodic BC u(0, y) = 0 = u_x(1, y), u(x, 0) = u(x, π), u_y(x, 0) = u_y(x, π).

Solution: We have the mixed BC p(0) = 0 = p'(1), giving p_j(x) = sin((2j − 1)(π/2)x), and the periodic BC q(0) = q(π), q'(0) = q'(π), giving q₀(y) = 1, q_k^c(y) = cos(2ky), and q_k^s(y) = sin(2ky). Thus for j = 1, 2, ... and k = 0, 1, 2, ..., the eigenvalues are

λ_{j,k} = (2j − 1)² π²/4 + 4k².

The corresponding eigenfunctions are products of a p_j with one of q₀, q_k^c, or q_k^s; e.g., ψ^c_{2,3}(x, y) = sin((3π/2)x) cos(6y) is an eigenfunction corresponding to the eigenvalue λ_{2,3} = 9π²/4 + 36.

Example 4.15. Find the eigenvalues and normalized eigenfunctions of the negative Laplacian on the unit cube with 0-D BC, and use the result to compute an eigenfunction expansion of f ≡ 1.

Solution: The separated eigenvalue problems −p'' = μ²p and −Δq = γ²q each inherit 0-D BC, giving μ² = j²π² and γ² = (k² + l²)π², with p_j(x) = sin(jπx) and q_{k,l}(y, z) = sin(kπy) sin(lπz). Thus, the eigenvalues and corresponding eigenfunctions of the negative Laplacian on the cube can be described by the triply indexed collections

λ_{j,k,l} = (j² + k² + l²)π²  and  ψ_{j,k,l} = sin(jπx) sin(kπy) sin(lπz).

To expand f ≡ 1 on the unit cube with 0-D BC, we need to compute

a_{j,k,l} = ⟨1, ψ_{j,k,l}⟩ / ⟨ψ_{j,k,l}, ψ_{j,k,l}⟩ = 8 ∫₀¹∫₀¹∫₀¹ ψ_{j,k,l} dx dy dz
          = 8 ∫₀¹ sin(jπx) dx ∫₀¹ sin(kπy) dy ∫₀¹ sin(lπz) dz.

By symmetry or direct computation, the integrals are zero for even j, k, and l, and computing the odd-indexed integrals gives

a_{2j−1,2k−1,2l−1} = (64/π³) · 1/((2j − 1)(2k − 1)(2l − 1)).

The series Σ a_{2j−1,2k−1,2l−1} ψ_{2j−1,2k−1,2l−1}(x, y, z) converges pointwise to f in (0, 1)³, satisfies the 0-D BC, and defines a triply odd and periodic function on R³.
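Readers can sanity-check λ_{j,k} = (j² + k²)π² against the sparse difference matrices built earlier in this chapter; a minimal sketch (the grid size n is arbitrary):

% Compare the smallest discrete eigenvalues of -Laplacian on (0,1)^2 (0-D BC)
% with the exact values (j^2 + k^2)*pi^2, i.e., approximately [2 5 5 8]*pi^2.
n  = 100; dx = 1/n;
e  = ones(n-1,1);
D2 = spdiags([e -2*e e], -1:1, n-1, n-1) / dx^2; % 1-D second difference, 0-D BC
I  = speye(n-1);
L  = kron(I,D2) + kron(D2,I);                    % 2-D Laplacian via kron
sort(eigs(-L, 4, 'sm')) / pi^2                   % ~ [2 5 5 8], up to O(dx^2)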


The Laplacian in Polar Coordinates: Bessel's Equation and Eigenfunctions

In regions such as disks and annuli, it is convenient to write the Laplacian in polar coordinates. It is an exercise in the chain rule to use x = r cos θ, y = r sin θ, x² + y² = r², and tan θ = y/x to effect a change of coordinates on u_xx + u_yy. Starting with

u_x = u_r r_x + u_θ θ_x = u_r cos θ + u_θ (−sin θ / r),

one computes u_xx and similarly u_yy and simplifies to obtain

Δu = u_rr + (1/r) u_r + (1/r²) u_θθ.    (4.2)

Similar calculations can be done for the sphere and other regions where radial and angular coordinates make sense. Applying separation of variables to −Δu = λ²u on a region described in polar coordinates, with appropriate boundary conditions, to find solutions of the form u(r, θ) = p(r)q(θ) results in

(r²p'' + rp')/p + λ²r² = −q''/q = μ².

Using the square powers in λ² and μ² is a historical convenience, as we know these eigenvalues to be positive. One first solves the eigenvalue problem −q'' = μ²q with inherited BC, periodic for disks and annuli. With the change of variables t = λr, the equation r²p'' + rp' + (λ²r² − μ²)p = 0 can be converted to Bessel's equation

t²z'' + tz' + (t² − μ²)z = 0.

This important second-order linear problem can be solved by using power series [8]. A fundamental set of solutions to Bessel's equation with parameter μ has been named {J_μ, Y_μ}, the so-called Bessel functions of the first and second kind. These functions and their properties have been well studied, analogous to the cosine and sine functions, albeit more complicated. Changing variables back to r, one gets that p_μ^J(r) = J_μ(λr) and p_μ^Y(r) = Y_μ(λr) solve our equation, whence the eigenvalues λ_{μ,j} are determined by the boundary conditions as usual. When 0-D BC on the unit disk are enforced, μ² = k², k = 0, 1, 2, ..., and we have the following eigenfunctions:

ψ_{0,j}(r, θ) = J_0(s_0^j r),
ψ^c_{k,j}(r, θ) = J_k(s_k^j r) cos(kθ), k ∈ N,
ψ^s_{k,j}(r, θ) = J_k(s_k^j r) sin(kθ), k ∈ N,

where s_k^j is the j-th zero of J_k. The Bessel functions of the second kind blow up at r = 0, hence are not eigenfunctions on the disk or any region containing the origin. If instead 0-N BC were enforced on the boundary of the disk, we would take s_k^j to be the j-th zero of the derivative J_k'. In either case, the corresponding eigenvalues are given by λ_{k,j} = (s_k^j)². One can look up the zeros of Bessel functions and their derivatives, or compute them via Newton's method. The first and second kind Bessel functions are defined in MATLAB. Known properties and derivative formulas such as

J_k'(x) = (1/2)(J_{k−1}(x) − J_{k+1}(x))

can be used in implementing Newton's method to find the s_μ^j that enforce the desired boundary condition over the given region. Table 4.1 gives a list of some common polar regions and BC choices, a function whose zeros when squared give the eigenvalues, and a description of the corresponding eigenfunctions. Example 4.16 derives the table entry for the annulus, where both the first and second kinds of Bessel functions are required to satisfy the BC. When considering a sector of a polar region defined by (θ₁, θ₂), the boundary conditions in θ are no longer periodic. Other BC must be specified, and the corresponding eigenvalues μ² of the q equation in θ will change. Example 4.17 gives the eigenvalues and eigenfunctions of the negative Laplacian on a quarter disk with mixed BC.

Example 4.16. Find the eigenvalues and eigenfunctions of the negative Laplacian with 0-D BC on the annulus defined by r ∈ (a, b).

Solution: As for the disk, for the annulus we have the eigenvalues μ² = k² for the angular ODE eigenvalue problem, corresponding to q₀(θ) = 1, q_k^c(θ) = cos(kθ), and q_k^s(θ) = sin(kθ). Since the annulus does not include the origin, both kinds of Bessel functions are required to satisfy the BC for the radial ODE eigenvalue problem. By the homogeneous BC, we can seek solutions of the form p_k(sr) = c J_k(sr) + Y_k(sr). For the 0-D BC, we enforce p_k(sa) = 0, giving c = −Y_k(sa)/J_k(sa). Enforcing p_k(sb) = 0 thus requires s to be a zero of

g_k(s) = Y_k(sb) − (Y_k(sa)/J_k(sa)) J_k(sb).

If s_k^j is the j-th zero of g_k, then λ_{k,j} = (s_k^j)² is an eigenvalue of the negative Laplacian on the annulus. The corresponding eigenfunctions are products of q₀, q_k^c, and q_k^s with

p_k(s_k^j r) = Y_k(s_k^j r) − (Y_k(s_k^j a)/J_k(s_k^j a)) J_k(s_k^j r).


For example, ψ^s_{2,3}(r, θ) = (Y_2(s_2^3 r) − (Y_2(s_2^3 a)/J_2(s_2^3 a)) J_2(s_2^3 r)) sin(2θ) is an eigenfunction corresponding to λ_{2,3} = (s_2^3)², where s_2^3 is the third zero of the function g_2 defined above.

Example 4.17. Find the eigenvalues and eigenfunctions of the negative Laplacian on the quarter disk with BC

u(1, θ) = 0 and u_θ(r, 0) = 0 = u_θ(r, π/2).

Solution: The BC give q'(0) = 0 = q'(π/2), whence q_k(θ) = cos(2kθ) and μ = 2k. The radial ODE has solutions p_k(s_{2k}^j r) = J_{2k}(s_{2k}^j r), with s_{2k}^j the j-th zero of J_{2k}. The eigenvalues of the negative Laplacian are given by

λ_{k,j} = (s_{2k}^j)².

The corresponding eigenfunctions are products of p_k(s_{2k}^j r) with q_k(θ), that is,

ψ_{k,j}(r, θ) = J_{2k}(s_{2k}^j r) cos(2kθ), for k = 0, 1, 2, ....
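The zero-finding step is easy to carry out with MATLAB's built-in Bessel functions; here is a minimal sketch for the disk with 0-D BC. The mode numbers k and j are arbitrary, and the McMahon asymptotic initial guess below is an assumption that is usually, but not always, close enough for fzero to bracket the intended zero:

% Accurate disk eigenvalue lambda_{k,j} = (s_k^j)^2, s_k^j the j-th zero of J_k.
k   = 4; j = 3;                   % angular and radial mode numbers
g   = @(s) besselj(k, s);         % 0-D BC at r = 1 requires J_k(s) = 0
s0  = (j + k/2 - 1/4)*pi;         % McMahon asymptotic initial guess (approx.)
skj = fzero(g, s0);               % refine with a derivative-free iteration
lam = skj^2;                      % eigenvalue of -Laplacian on the unit disk

For the annulus, one replaces g by the function g_k derived in Example 4.16, built from besselj and bessely.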

4.10.2 Laplace’s Equation For a variety of regions Ω and corresponding boundary conditions, we can solve equations of the form Δu = 0, t > 0, x ∈ Ω ⊂ Rn , with boundary condition u(x) = g(x) for x ∈ ∂Ω. This is a homogeneous Laplace’s equation with a nonhomogeneous boundary condition. Generally, we seek SOV solutions of the form u(x, y) = p(x)q(y), where usually one variable is a scalar and the other variable is in Rn−1 (also a scalar when n = 2 as in the examples below). Depending on the boundary condition, we seek a separated equation in one of the two variables to be an eigenvalue problem. Solving the eigenvalue problem then allows one to find solutions of the other separated equation. Laplace’s Equation on the Rectangle, Square, and Cube Let us first consider Ω = (0, 1)2 and nonhomogeneous Dirichlet conditions given by u(x, 0) = g1 (x), u(0, y) = g2 (y), u(x, 1) = g3 (x), and u(1, y) = g4 (y), as in Sect. 4.4 (see in particular Fig. 4.5). We will first find a solution u 1 satisfying the condition u(x, 0) = g1 (x), but which is zero on the other three sides. Then our


separated equations are given by

−p''/p = q''/q = λ.

By u₁(0, y) = 0 = u₁(1, y) we have p(0) = 0 = p(1). Thus, we have identified the eigenvalue problem −p'' = λp, p(0) = 0 = p(1), which has solutions p_k(x) = ψ_k(x) = sin(kπx), λ_k = k²π². Using the Fourier sine series, we can now write the nonhomogeneous boundary function g₁ as g₁(x) = Σ a_k ψ_k(x). We will solve the linear constant-coefficient second-order homogeneous equation q'' = λ_k q, where λ_k is known, with the nonhomogeneous boundary conditions q(0) = a_k, q(1) = 0. This is not an eigenvalue problem. The general solution q_k(y) = c₁ e^{−√λ_k y} + c₂ e^{√λ_k y} can most conveniently be expressed in terms of cosh and sinh, whence one can write

q_k(y) = (a_k / sinh(kπ)) sinh(kπ(1 − y)).

Thus,

u₁(x, y) = Σ (a_k / sinh(kπ)) sinh(kπ(1 − y)) sin(kπx).

The computations involving the other three sides follow similarly, resulting in

u₂(x, y) = Σ (b_k / sinh(kπ)) sinh(kπ(1 − x)) sin(kπy),
u₃(x, y) = Σ (c_k / sinh(kπ)) sinh(kπy) sin(kπx), and
u₄(x, y) = Σ (d_k / sinh(kπ)) sinh(kπx) sin(kπy),

where the Fourier sine coefficients {a_k, b_k, c_k, d_k} expand g₁, g₂, g₃, and g₄, respectively. Then u = Σ_{i=1}^4 u_i solves the PDE with the desired boundary conditions. If some or all of the BC are Neumann, then the corresponding separated equations will change and lead to different solutions, e.g., with cos(kπx) or cosh(kπy) terms.


Example 4.18 solves the homogeneous Laplace equation on the square with a single nonzero g_i given by a single eigenfunction, a so-called 'pure mode' problem. In higher-dimensional rectangular regions, one solves the problem one face at a time, obtaining R^{n−1}-dimensional eigenvalue problems paired with an ODE having cosh and sinh solutions. Example 4.19 solves Laplace's equation on the cube, first with a pure mode BC, and then for a BC requiring Fourier series.

Example 4.18. Solve Δu = 0 on (0, 1)² with u(x, 0) = u(0, y) = u(1, y) = 0 and u(x, 1) = sin(3πx).

Solution: The separated equations are −p''(x) = μp(x) with p(0) = 0 = p(1), and q''(y) = μq(y) with q(0) = 0, q(1) = 1. The first eigenvalue problem is solved by p_k(x) = sin(kπx) and μ = k²π², for k = 1, 2, .... The second problem is not an eigenvalue problem. For the known values of μ and the nonhomogeneous BC, the solutions to the second problem can be written as q_k(y) = sinh(kπy)/sinh(kπ). See Eq. 2.1. The general solution is thus

u(x, y) = Σ a_k p_k(x) q_k(y).

For the homogeneous Laplace equation on the unit square with the pure mode BC u(x, 1) = sin(3πx) on one edge and 0-D BC on the three other edges, a₃ = 1 and a_k = 0 otherwise. The solution is thus u(x, y) = sin(3πx) sinh(3πy)/sinh(3π).

Example 4.19. On the unit cube (0, 1)³, solve the equation Δu = 0, u(x, y, 1) = f(x, y), with u = 0 on all other faces, for the two cases f(x, y) = sin(πx) sin(2πy) and f(x, y) ≡ 1.

Solution: The general solution is given by

u(x, y, z) = Σ a_{j,k} (sinh(√λ_{j,k} z)/sinh(√λ_{j,k})) sin(jπx) sin(kπy),

with λ_{j,k} = (j² + k²)π². By u(x, y, 1) = sin(πx) sin(2πy), we see that a_{1,2} = 1 and a_{j,k} = 0 otherwise, so that the solution in the first case is


u(x, y, z) = (sinh(√5 πz)/sinh(√5 π)) sin(πx) sin(2πy).

In the second case, with u(x, y, 1) = 1 and u = 0 on all other faces, the boundary function is not a pure mode and one obtains a series solution. Similar to Example 4.15, one computes

a_{j,k} = 4 ∫₀¹∫₀¹ sin(jπx) sin(kπy) dx dy.

With γ_{j,k} = √λ_{2j−1,2k−1} = √((2j − 1)² + (2k − 1)²) π, the series solution can be written as

u(x, y, z) = (16/π²) Σ_{j,k} sinh(γ_{j,k} z) sin((2j − 1)πx) sin((2k − 1)πy) / (sinh(γ_{j,k}) (2j − 1)(2k − 1)).

Nonhomogeneous Laplace's Equation

One can solve nonhomogeneous PDE of the form Δu = f with nonhomogeneous BC in two steps. First, solve the homogeneous PDE Δu = 0 with the nonhomogeneous BC, as above. Call that solution u₁. Then, use the method of eigenfunction expansion to solve Δu = f with homogeneous BC of the same type. Call that solution u₂. Then u = u₁ + u₂ solves the nonhomogeneous PDE with the nonhomogeneous BC. The eigenfunctions used in the expansion of f are those of the entire region, i.e., −Δψ_k(x) = λ_k ψ_k(x) for x ∈ Ω, with

f(x) = Σ a_k ψ_k(x).

Example 4.20 solves such a problem on the square, where f is a pure mode of the Laplacian on the square with 0-D BC, and the boundary data is a pure mode of the Laplacian on the interval with 0-D BC.

Example 4.20. Solve Δu = sin(πx) sin(2πy) on the unit square with the BC u(1, y) = sin(3πy) on one edge, and u = 0 on all other edges.

Solution: First solving the homogeneous equation with nonhomogeneous boundary data given by Δu = 0, u(1, y) = sin(3πy), with u = 0 on all other edges,


we obtain u₁(x, y) = (1/sinh(3π)) sinh(3πx) sin(3πy). By considering the eigenfunction expansion

u₂(x, y) = Σ a_{j,k} sin(jπx) sin(kπy),

we obtain Δu₂(x, y) = −Σ a_{j,k} (j² + k²)π² sin(jπx) sin(kπy). Then Δu₂(x, y) = sin(πx) sin(2πy) gives a_{1,2} = −1/(5π²), and a_{j,k} = 0 otherwise. Summing u₁ and u₂ gives the solution

u(x, y) = (1/sinh(3π)) sinh(3πx) sin(3πy) − (1/(5π²)) sin(πx) sin(2πy).

Laplace’s Equation on Polar Regions On polar regions, one uses the Laplacian in polar coordinates given by Eq. 4.2. Applying separation of variables with u(r, θ) = p(r )q(θ), one obtains r 2 p  + r p  q  =− = μ2 . p q The BC allow one to state an eigenvalue problem out of one or the other of the separated ODE, although as in the rectangle it may be necessary to solve several pairs of eigenvalue and non-eigenvalue ODE, in turn assuming homogeneous BC on all but one ‘edge’ of the region, and then to sum the resulting solutions to one that satisfies all of the BC. On disks and annuli, one has periodic BC in θ, whence the eigenvalue problem is in θ and the eigenfunctions are given by qk (x) ∈ {1} ∪ {cos(kx), sin(kx)}, with corresponding eigenvalues μ2k = k 2 , for k = 0, 2, 3, . . .. In this case, the equation in r r 2 p  + r p  − μ2 p = 0

(4.3)

is not an eigenvalue problem. When μ = 0 we have p = 1 as one solution. The substitution w = p  reduces the second-order equation to the first-order equation w + r1 w = 0, whence w = r1 , giving p = ln r . Thus, corresponding to the eigenvalue μ2 = 0, we have the solution u(r, θ) = a0 on regions containing the origin, e.g., disks, and the solution u(r, θ) = a0 + c0 ln r on regions not containing the origin, e.g., annuli. When μ2 > 0, Eq. 4.3 is of Euler type, whence one seeks solutions of the form α r . Here, this gives r α ((α − 1)α + α − μ2 ) = 0, whence α2 − μ2 = 0 gives a fundamental solution set {r μ , r −μ }. Thus corresponding to the eigenvalue μ2 = k 2 > 0, we have the solution u(r, θ) = (ak cos(kθ) + bk sin(kθ))r k


on regions containing the origin, e.g., disks. On regions not containing the origin, e.g., annuli, the general solution includes the two r^{−k} terms as well, i.e.,

u(r, θ) = (a_k cos(kθ) + b_k sin(kθ)) r^k + (c_k cos(kθ) + d_k sin(kθ)) r^{−k}.

Summing over k gives the general solution. Example 4.21 finds solutions of Δu = 0 for a disk and for a half annulus, with BC defined by one or two eigenfunctions. Example 4.22 demonstrates that if the BC is not given by a few pure modes, then Fourier series will be required and the solution will be a series.

Example 4.21. Solve Δu = 0 (a) on the unit disk with the BC u(1, θ) = 1 + sin(3θ), (b) on the half annulus defined by r ∈ (1/2, 1) and θ ∈ (0, π) with BC u(1/2, θ) = u(r, 0) = u(r, π) = 0 and u(1, θ) = sin(θ).

Solution: For the disk case (a), the general solution is

u(r, θ) = a₀ + Σ r^k (a_k cos(kθ) + b_k sin(kθ)).

By the BC u(1, θ) = 1 + sin(3θ), we see that a₀ = 1 and b₃ = 1, with all other coefficients equal to zero. Thus, the solution is u(r, θ) = 1 + r³ sin(3θ). In contrast, in case (b) the equation on the half annulus has 0-D BC in θ, giving q_k(θ) = sin(kθ), and the polar equation solutions include ln r and r^{−k} terms. With the given BC, our particular solution must be of the form (b₁r + d₁r^{−1}) sin(θ), all other coefficients in the eigenfunction expansion being zero. Solving the linear system b₁r + d₁r^{−1} = 0 at r = 1/2 and b₁r + d₁r^{−1} = 1 at r = 1 gives the solution

u(r, θ) = (1/3)(4r − 1/r) sin(θ).

Example 4.22. Solve Δu = 0 on the unit disk with the BC u(1, θ) = g(θ), with g defined by g(θ) = 1 for θ ∈ [0, π], and g(θ) = −1 for θ ∈ (π, 2π).

Solution: The Fourier series for g is computed to be (4/π) Σ sin((2k − 1)θ)/(2k − 1), whence the solution to Laplace's equation with this BC is

u(r, θ) = (4/π) Σ r^{2k−1} sin((2k − 1)θ)/(2k − 1).

4.10.3 The Heat Equation

For a variety of regions Ω and corresponding boundary conditions, we can solve equations of the form

u_t = cΔu, t > 0, x ∈ Ω ⊂ Rⁿ,

with initial condition u(0, x) = f(x). We seek SOV solutions of the form u(t, x) = p(t)q(x), resulting in the eigenvalue problem −Δq = λq on Ω, with the same boundary conditions. After solving the eigenvalue problem by the methods indicated above in this section, we obtain eigenvalues λ_k and corresponding eigenfunctions ψ_k. We note that the most convenient forms of describing collections of eigenfunctions may be multi-indexed. We then solve p' = −cλ_k p to get p_k(t) = a_k e^{−cλ_k t}. This gives the general solution

u(t, x) = Σ a_k e^{−cλ_k t} ψ_k(x).

The a_k are determined to be the eigenfunction expansion coefficients of the initial temperature f, since

u(0, x) = f(x) = Σ a_k ψ_k(x).

For testing purposes, we often consider the heat equation on the unit interval, square, disk, or other region for which we know or can compute the eigenfunctions, for some homogeneous BC, with heat diffusion constant c = 1 and an initial temperature given by a single eigenfunction ψ with corresponding eigenvalue λ. Then, the solution is u(t, x) = e^{−λt} ψ(x). Arbitrary piecewise continuous initial temperature functions f can be considered by computing their eigenfunction expansion coefficients. Example 4.23 computes some such solutions to the heat equation on an interval and a disk. The method of eigenfunction expansion is closely related to SOV, gives the same results for the cases above, and can handle other complexities. Consider the heat equation on a region with homogeneous BC

u_t = Δu + g(x), u(0, x) = f(x),

where the source or sink term g has an eigenfunction expansion Σ b_k ψ_k(x), and the initial value function f has an eigenfunction expansion Σ c_k ψ_k(x). Then seeking

solutions of the form

u(t, x) = Σ a_k(t) ψ_k(x)

leads to the system of ODE

a_k'(t) + λ_k a_k(t) = b_k, a_k(0) = c_k.

Solving the system gives the solution to the heat equation; each such linear ODE has the explicit solution a_k(t) = b_k/λ_k + (c_k − b_k/λ_k) e^{−λ_k t}. Example 4.24 solves such a system in a case where the initial temperature and source terms are each a single eigenfunction.

Example 4.23. Solve the heat equations:
(a) u_t = u_xx, t > 0, x ∈ (0, 1), u(t, 0) = 0 = u(t, 1), with u(0, x) = sin(πx).
(b) u_t = u_xx, t > 0, x ∈ (0, 1), u(t, 0) = 0 = u(t, 1), with u(0, x) = 1.
(c) u_t = Δu, t > 0, r ∈ [0, 1), θ ∈ [0, 2π), u(t, 1, θ) = 0, with u(0, r, θ) = J₄(s₄³ r) cos(4θ), with s₄³ the third zero of J₄ (see Table 4.1).

Solution: Case (a) has an initial temperature defined in terms of the single eigenfunction ψ₁(x) = sin(πx), corresponding to the eigenvalue λ₁ = π²; thus the solution to the heat equation is

u(t, x) = e^{−π²t} sin(πx).

The initial temperature function in (b) has the Fourier sine series (4/π) Σ sin((2k − 1)πx)/(2k − 1), so the solution is

u(t, x) = (4/π) Σ e^{−(2k−1)²π²t} sin((2k − 1)πx)/(2k − 1).

The initial temperature function in (c) is again a single eigenfunction, with corresponding eigenvalue (s₄³)², giving the solution u(t, r, θ) = e^{−(s₄³)²t} J₄(s₄³ r) cos(4θ).

Example 4.24. By the method of eigenfunction expansion, solve the heat equation u_t = u_xx + sin(2πx), t > 0, x ∈ (0, 1), u(t, 0) = 0 = u(t, 1), with u(0, x) = sin(πx).

Solution: We get the system

a₁' + λ₁ a₁ = 0, a₁(0) = 1,
a₂' + λ₂ a₂ = 1, a₂(0) = 0,

and a_k' + λ_k a_k = 0, a_k(0) = 0 otherwise. The solution to this system is given by

a₁(t) = e^{−π²t}, a₂(t) = (1 − e^{−4π²t})/(4π²), and a_k(t) = 0 otherwise,


so that the solution to the heat equation is

u(t, x) = e^{−π²t} sin(πx) + ((1 − e^{−4π²t})/(4π²)) sin(2πx).
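A quick numerical check of these coefficient ODE is possible with a standard IVP solver; a minimal sketch:

% Check a1' = -pi^2*a1, a1(0)=1 and a2' = 1 - 4*pi^2*a2, a2(0)=0 numerically.
rhs = @(t,a) [-pi^2*a(1); 1 - 4*pi^2*a(2)];
[t, a] = ode45(rhs, [0 0.2], [1; 0]);
err = max(max(abs(a - [exp(-pi^2*t), (1-exp(-4*pi^2*t))/(4*pi^2)])))  % small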

4.10.4 The Wave Equation

For a variety of regions Ω and corresponding boundary conditions, we can solve equations of the form

u_tt = c²Δu, t > 0, x ∈ Ω ⊂ Rⁿ,

with initial conditions u(0, x) = f(x) and u_t(0, x) = g(x). We seek SOV solutions of the form u(t, x) = p(t)q(x), resulting in the eigenvalue problem −Δq = λq on Ω, with the same boundary conditions. After solving the eigenvalue problem by the methods indicated above in this section, we obtain eigenvalues λ_k and corresponding eigenfunctions ψ_k. Assume the eigenfunction expansions

f(x) = Σ a_k ψ_k(x) and g(x) = Σ b_k ψ_k(x)

are known. We then solve p'' = −c²λ_k p to get p_k(t) = c_k cos(c√λ_k t) + d_k sin(c√λ_k t). This gives the general solution

u(t, x) = Σ (c_k cos(c√λ_k t) + d_k sin(c√λ_k t)) ψ_k(x).

The initial value functions give that c_k = a_k and d_k = b_k/(c√λ_k), so our solution is

u(t, x) = Σ (a_k cos(c√λ_k t) + (b_k/(c√λ_k)) sin(c√λ_k t)) ψ_k(x).

Examples 4.25 and 4.26 solve several such wave equations on the interval and disk. The method of eigenfunction expansion can again be used to handle a source term. Consider the wave equation on a region with homogeneous BC

u_tt = Δu + h(t, x), u(0, x) = f(x), u_t(0, x) = g(x),

where the source or sink term h has an eigenfunction expansion Σ b_k(t) ψ_k(x), and the initial value functions have eigenfunction expansions f(x) = Σ c_k ψ_k(x) and g(x) = Σ d_k ψ_k(x). Then seeking solutions of the form

u(t, x) = Σ a_k(t) ψ_k(x)

leads to the system of ODE

a_k''(t) + λ_k a_k(t) = b_k(t), a_k(0) = c_k, a_k'(0) = d_k.

Solving the system of ODE gives the solution to the wave equation. Example 4.27 uses eigenfunction expansions to solve a wave equation with a nonzero source term.

Example 4.25. Solve the wave equations:
(a) u_tt = u_xx, t > 0, x ∈ (0, 1), u(t, 0) = 0 = u(t, 1), with u(0, x) = sin(πx) and u_t(0, x) = sin(2πx).
(b) u_tt = u_xx, t > 0, x ∈ (0, 1), u(t, 0) = 0 = u(t, 1), with u(0, x) = 1 and u_t(0, x) = 0.
(c) u_tt = Δu, t > 0, r ∈ [0, 1), θ ∈ [0, 2π), u(t, 1, θ) = 0, with u(0, r, θ) = J₄(s₄³ r) cos(4θ) and u_t(0, r, θ) = J₂(s₂⁵ r) sin(2θ), where s_k^j is the j-th zero of J_k.

Solution: For case (a), setting u(0, x) = sin(πx) and u_t(0, x) = sin(2πx) in the general solution gives a₁ = 1 and b₂ = 1/(2π). Thus, our particular solution is

u(t, x) = cos(πt) sin(πx) + (1/(2π)) sin(2πt) sin(2πx).

The Fourier sine series for the initial displacement in (b) is (4/π) Σ sin((2k − 1)πx)/(2k − 1), so the solution to the wave equation is

u(t, x) = (4/π) Σ cos((2k − 1)πt) sin((2k − 1)πx)/(2k − 1).

Case (c) again involves only a single eigenfunction for each of the initial displacement and velocity, with corresponding eigenvalues (s₄³)² and (s₂⁵)², respectively. Thus, the solution is

u(t, r, θ) = cos(s₄³ t) J₄(s₄³ r) cos(4θ) + (1/s₂⁵) sin(s₂⁵ t) J₂(s₂⁵ r) sin(2θ).

Example 4.26. Solve the wave equation u_tt = Δu, t > 0, r ∈ [0, 1), θ ∈ [0, 2π), u(t, 1, θ) = 0, with u(0, r, θ) = 1 − r² and u_t(0, r, θ) = 0.

Solution: Since the initial displacement is radial and the initial velocity is zero, the solution is of the form

u(t, r, θ) = Σ a_j cos(s₀^j t) J₀(s₀^j r).

By the initial condition,

a_j = ∫₀¹ (1 − r²) J₀(s₀^j r) r dr / ∫₀¹ J₀(s₀^j r)² r dr.

Example 4.27. Solve the wave equation u_tt = u_xx + sin(2πx), t > 0, x ∈ (0, 1), u(t, 0) = 0 = u(t, 1), with u(0, x) = sin(πx) and u_t(0, x) = sin(3πx).

Solution: Seeking solutions of the form u(t, x) = Σ a_k(t) sin(kπx), we get the system

a₁'' + λ₁ a₁ = 0, a₁(0) = 1, a₁'(0) = 0,
a₂'' + λ₂ a₂ = 1, a₂(0) = 0, a₂'(0) = 0,
a₃'' + λ₃ a₃ = 0, a₃(0) = 0, a₃'(0) = 1,

and a_k'' + λ_k a_k = 0, a_k(0) = 0, a_k'(0) = 0 otherwise. The solution to this system gives

a₁(t) = cos(πt), a₂(t) = (1 − cos(2πt))/(4π²), a₃(t) = sin(3πt)/(3π), and a_k(t) = 0 otherwise,

so that the solution to the wave equation is

u(t, x) = cos(πt) sin(πx) + ((1 − cos(2πt))/(4π²)) sin(2πx) + (sin(3πt)/(3π)) sin(3πx).

Chapter 5

Advanced Topics in Semilinear Elliptic BVP

Summary We lightly introduce a few more advanced topics concerning semilinear elliptic BVP. In particular, we introduce four algorithms for finding solutions of semilinear elliptic BVP. The tangent Newton method is used for branch following. The secant method is used for bifurcation detection. The mountain pass and modified mountain pass algorithms are used for finding Morse index one and two solutions. We also compute solutions for fully nonlinear equations involving the p-Laplacian. Our main technique in this chapter is discretizing nonlinear PDE to finite-dimensional nonlinear systems to solve with Newton's method. This chapter assumes that the reader has worked through at least Sects. 3.1, 3.4, and 4.3 of these notes. The full requisite PDE theory background is considerable, generally beyond that found in undergraduate texts such as [8]. The excellent book [6] contains many relevant theoretical PDE facts a reader might want to know, but there is no unifying existence and uniqueness theory for nonlinear problems. By the nature of the subject, such results are found somewhat spread throughout the journal literature. The research articles cited in this chapter themselves cite the literature fairly extensively, and would make a good place to start. In particular, the survey paper [31] has a fairly comprehensive bibliography on semilinear elliptic BVP. The more recently published research articles by the author and his coauthors contain refinements, extensions, and applications of the algorithms and ideas briefly introduced in this chapter. In particular, we can overcome many difficulties in branch switching at bifurcations by knowing the invariant subspaces corresponding to the underlying nonlinear operator.

5.1 Branch Following and Bifurcation Detection

We discuss two algorithms, one for following a given branch of solutions to a semilinear elliptic boundary value problem with parameter, and a second for detecting the location of points along a branch where new branches of solutions may bifurcate.


delta = 1;
smin  = 0; smax = 10; s = pi^2;
pts   = [s, 0];
usold = [zeros(n-1,1); s];
T     = [sin(pi*xs); 0]; T = T/norm(T);

Listing 5.1 The initial tangent vector is an eigenvector padded with a zero to make it vertical. The eigenvector is known to be a good initial search direction at the primary bifurcation point.

5.1.1 The Tangent Newton Method for Branch Following

The ideas in this section were first developed in [30], and refined in other articles, notably [34] and [35]. The branches that can be followed in the way that was done in Exercise 3.5 are monotone in the parameter s. In order to follow branches that are not monotone, e.g., have an s-shaped bend somewhere, it is necessary to treat the parameter s as an additional unknown. Let m equal the number of grid points, e.g., N or N² for Ω = (0, 1) and Ω = (0, 1)², respectively. An elementary continuation method is to use an estimated tangent vector T ∈ R^{m+1} to obtain a better initial guess and to enforce an (m + 1)-st equation κ(u, s) = 0. Given an approximate solution (u, s)_previous and the estimated (normalized) tangent vector T, the initial guess is (u, s)_guess = (u, s)_current + δT. The 'speed' parameter δ > 0 is chosen experimentally or heuristically, not so big as to jump over interesting branch features. The required extra constraining equation is given by κ(u, s) = ((u, s) − (u, s)_guess) · T. Since this is a linear equation, κ will in fact be zero for each iterate. Thus, the augmented object function G̃ is defined by

G̃(u, s) = [ G(u, s) ; κ(u, s) ].

The augmented Jacobian has an extra column ∂G/∂s (u, s) ∈ R^m, which equals u in the case that f(u) = su + u³. It also has an extra row (∇κ)(u, s) = T^T ∈ R^{m+1}, giving

J̃(u, s) = [ J(u, s)  u ; T^T ].

To implement in MATLAB®, first define the grid and the D2 matrix, as in Sect. 3.1. Since s is no longer fixed, the function definitions for f, fp, G, and J should depend on the two independent variables (u, s), e.g., f = @(u,s) s*u + u.^3. In the following code fragment, we start with a known solution on the trivial branch, namely the solution u = 0 at s = λ, an eigenvalue. In Listing 5.1, we start with initializing T = (ψ, 0), a tangent vector pointing in the corresponding eigenfunction direction (Fig. 5.1).
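A minimal sketch of one predictor–corrector step of this method follows. It assumes D2, f, fp, T, u, s, and delta are defined as above with the sign conventions of Sect. 3.1, and is an illustration, not the book's Listing 5.2:

% One tangent-Newton step: predict along T, then correct with Newton's
% method applied to the augmented system [G; kappa] = 0.
m  = numel(u);
us_guess = [u; s] + delta*T;           % predictor
us = us_guess;
for it = 1:20                          % corrector
    u = us(1:m); s = us(m+1);
    Gt = [D2*u + f(u,s); (us - us_guess)'*T];          % augmented G
    Jt = [D2 + spdiags(fp(u,s), 0, m, m), u; T'];      % augmented J
    us = us - Jt \ Gt;
    if norm(Gt) < 1e-10, break; end
end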


Fig. 5.1 The tangent method for branch following and branch switching. On the left, the current solution and a previously found solution are used to compute an approximate unit tangent vector T . On the right, the current solution is a bifurcation point and the only known solution on the branch, and T is chosen to be some eigenvector corresponding to a zero eigenvalue of −J (u, s). If the zero eigenvalue is not simple, in practice one must use knowledge of the invariant subspaces inherent in the problem, e.g., symmetry, in order to make good choices for the eigenvector T , or make many random choices within the eigenspace. In both diagrams, the initial guess for augmented Newton’s method is (u, s)guess = (u, s)current + δT , for an appropriate speed δ > 0. All iterates and hence the next branch solution point to be found are constrained to the hyperplane defined by κ(u, s) = 0

In Listing 5.2, as long as the branch stays within the window [smin , smax ], the current tangent vector T is used to produce an initial guess, and Newton’s method is iterated until convergence to a solution. After each nontrivial solution is found, the estimated tangent vector is recomputed. Here, we are saving the coordinates (s, ||u||∞ ) in the array pts. Then, the command plot(pts(:,1),pts(:,2)) can be used to draw the branch found in Fig. 5.2. The choice of schematic function || · ||∞ used here is convenient, but somewhat arbitrary. Other choices may be preferable in the visual display of some branch features.

5.1.2 The Secant Method for Bifurcation Detection

If the MI changes at two points along a branch, there must be a point in between where the d-th eigenvalue of the Jacobian is zero, where d = max{MI₁, MI₂}. We can use the secant method to find such a zero eigenvalue, and hence find the corresponding bifurcation point (u*, s*). Each iteration of the secant method takes as input a pair of s-values (s_{i−1}, s_i) near s* and returns a closer estimate s_{i+1}. In order to compute the Jacobian and corresponding d-th eigenvalue at this new point for input to the next iteration of the secant method, we must first compute the corresponding solution u_{i+1} on the branch. To do this, we can use Newton's method
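For reference, writing β_d(s) for the d-th eigenvalue of the Jacobian at the branch solution corresponding to s (our notation), the standard secant update reads

s_{i+1} = s_i − β_d(s_i) (s_i − s_{i−1}) / (β_d(s_i) − β_d(s_{i−1})).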


while (s>smin) && (s<smax)
    ...

[The remainder of Listing 5.2, the secant iteration code, and the opening of Sect. 5.2 (Mountain Pass and Modified Mountain Pass Algorithms for Semilinear BVP) are missing from this extraction.]

The modified algorithm works with the positive and negative parts yp = y .* (y>0) and ym = y .* (y<0), respectively. Then replacing the inner maximization-loop update in Listing 5.6 (the lines computing SS and PS and updating y) with the following few lines implements the MMPA in Listing 5.7, to produce the minimal energy sign-changing solution depicted by the blue solution curve on the right in Fig. 5.8.

for j = 1:maxits                       % minimization loop
    for i = 1:maxits                   % maximization loop
        SS = SG(y);
        PS = (y' * SS) / (y' * y) * y;
        y  = y + d1 * PS;
        err1 = norm(PS) * sqrt(dx);
        if err1 < tol1 break; end
    end
    y = y - d2 * SS;
    err2 = norm(SS) * sqrt(dx);
    if err2 < tol2 break; end
end
plot(x, [0; y; 0], 'b', 'LineWidth', 5);

Listing 5.6 The Mountain Pass Algorithm.

yp  = y .* (y>0);
PSp = (yp'*SS) / (yp'*yp) * yp;
yp  = yp + d1 * PSp;
ym  = y .* (y<0);
PSm = (ym'*SS) / (ym'*ym) * ym;  % these last lines are truncated in the
ym  = ym + d1 * PSm;             % extraction; completed here by symmetry

[Listing 5.7 and the remainder of Sect. 5.2 are cut off at this point in the extraction.]

5.3 The p-Laplacian

[The opening of this section is truncated; it introduces the p-Laplacian Δ_p u = ∇ · (|∇u|^{p−2} ∇u) for] p > 0. The p-Laplacian equals the usual Laplacian when p = 2. The theory for the p-Laplacian is not nearly as well developed as for the usual (linear) Laplacian, and in any case is beyond the scope of this text. We do, however, present two examples in both ODE and PDE forms. First, we consider the nonlinear ODE equivalent of the nonhomogeneous Laplace equation with homogeneous 0-D BC given by

−(y'|y'|^{p−2})' = f, y(a) = 0 = y(b),

and then we solve the so-called nonlinear eigenvalue problem

−(y'|y'|^{p−2})' = λ y|y|^{p−2}, y(a) = 0 = y(b).

We then extend to similar PDE examples. Generally, the p-Laplacian operator evaluated at a function u can be approximated by the matrix expression

Δ_p u ≈ D1out h(D1in u),

where D1out and D1in are suitable first difference matrices and h is the so-called odd power function defined by h(t) = t|t|^{p−2}, which threads over vectors to act like the composition of functions. For Newton's method, we will need the Jacobian of this term (and the other terms):

D1out diag(h'(D1in u)) D1in.


% RHS and exact sol
f = @(x) ones(size(x));
g = @(t) (2/3) * (1 - abs(t).^(p/(p-1)));
% odd-power function and derivative
h  = @(t) t .* abs(t).^(p-2);
hp = @(t) (p-1) * abs(t).^(p-2);
% D_1 inner and D_1 outer build
O   = kron(ones(n+1,1), [-1,1]);
D1I = spdiags(O, [-1,0], n+1, n);
D1O = spdiags(O, [0,1], n, n+1);
D1I(1,1)     = 2;    % cell-grid 0-D BC
D1I(end,end) = -2;   % cell-grid 0-D BC
D1I = D1I / dx;
D1O = D1O / dx;
% object function and Jacobian
G = @(u) D1O * h(D1I * u) + f(v);
H = @(u) spdiags(hp(D1I * u), 0, n+1, n+1);
J = @(u) D1O * H(u) * D1I;

Listing 5.8 Defining the object function and Jacobian for applying Newton's method to solve the ODE Δ_p y = f with y(0) = 0 = y(1). See for example Fig. 5.9.

Listing 5.8 builds the object function and Jacobian needed to solve Δ_p y = f in one dimension with Newton's method. We define the RHS function f, the odd-power function h, and the derivative h'; build the first difference matrices; and define the object function and Jacobian for Newton's method applied to Laplace's equation with the p-Laplacian in one dimension with 0-D BC. Note that D1O * D1I returns the usual three-point central second difference matrix, i.e., the usual p = 2 Laplacian. Figure 5.9 shows the approximate and exact solution to a well-known problem [28], −(y'|y'|^{p−2})' = 1 with y(−1) = 0 = y(1). Listing 5.9 builds the object function and Jacobian needed to solve the so-called nonlinear eigenvalue problem. We modify the object function and Jacobian from Listing 5.8. Here, λ is an additional unknown, and so we add an equation enforcing that ||u||_p = 1. By using initial guesses from the truly linear p = 2 case, we can find solutions to the nonlinear eigenvalue problem. In the interval case, the eigenvalues are well known [27]; they are given below, following Listing 5.9.

196

5 Advanced Topics in Semilinear Elliptic BVP 0.7

0.6

0.5

0.4

0.3

0.2

0.1

0 -1

-0.8

-0.6

-0.4

-0.2

0

0.2



Fig. 5.9 The approximate and exact solution to − y  |y  | p=4

0.4

p−2 

0.6

0.8

1

= 1 with y(−1) = 0 = y(1) with

G = @(s,u) [D1Out * h(D1In*u) + s * u .* abs(u).^(p-2); ...
            (1 - sum(abs(u).^p)*dx)/p];
J = @(s,u) [D1Out * spdiags(hp(D1In*u),0,n+1,n+1) * D1In + ...
            s * (p-1) * spdiags(abs(u).^(p-2),0,n,n), ... % (p-1) from d/du [u|u|^(p-2)]
            u.*(abs(u).^(p-2)); ...
            -(u.*(abs(u).^(p-2)))' * dx, 0];

Listing 5.9 Defining the object function and Jacobian for applying Newton's method to solve the ODE −Δ_p y = λy|y|^{p−2} with y(0) = 0 = y(1).
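A usage sketch of Listing 5.9, seeding Newton's method with the linear (p = 2) eigenpair, as the text suggests; it assumes G, J, and the grid variables n, dx, v of Listing 5.8 are in scope:

% Newton loop for the nonlinear eigenvalue problem on (0,1).
u = sin(pi*v);                       % p = 2 first eigenfunction
u = u / (sum(abs(u).^p)*dx)^(1/p);   % enforce ||u||_p = 1 approximately
s = pi^2;                            % p = 2 first eigenvalue as initial guess
w = [u; s];
for it = 1:30
    w = w - J(w(end), w(1:end-1)) \ G(w(end), w(1:end-1));
    if norm(G(w(end), w(1:end-1))) < 1e-10, break; end
end
lambda = w(end)                      % approximates (p-1)*(pi_p/(b-a))^p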

λ_k = (p − 1) (kπ_p/(b − a))^p, where π_p = 2π/(p sin(π/p)).

When p = 2, these are the usual eigenvalues of the negative Laplacian with 0-D BC on the interval [a, b]. The presented method will reproduce these eigenvalues with O((dx)²) accuracy. Our implementation for the two-dimensional problems is considerably more complicated. The inner norm of the gradient term must be approximated so as to respect symmetry, or else spurious features appear, resulting in poor solution approximations. If one uses two-point forward first differences throughout, the odd-power term will have x-differences on vertical edges and y-differences on horizontal edges. We cannot use three-point central first differences throughout, due to a well-known problem with

Z = reshape(f(X,Y), n^2, 1);   % RHS
o = ones(n+1,1);
I = speye(n,n);

% Interior difference matrices
D_int = spdiags([-o,o],[-1,0],n+1,n);
D_int(1,1)   = 2;
D_int(n+1,n) = -2;
D_int = D_int / dx;
D1_x_int = kron(I,D_int);
D1_y_int = kron(D_int,I);
D1_int   = [D1_x_int; D1_y_int];

% Exterior difference matrices
D_ext = spdiags([-o,o],[0,1],n,n+1) / dx;
D1_x_ext = kron(I,D_ext);
D1_y_ext = kron(D_ext,I);
D1_ext   = [D1_x_ext, D1_y_ext];

Listing 5.10 Creating the interior and exterior two-point first difference matrices for approximating x and y first partial derivatives.

spurious features: essentially, the Laplacian decouples and produces checkerboard patterns. Listing 5.10 generates the block inner and outer first difference matrices in a straightforward way using two-point forward first differences. Note that D1_int is stacked vertically to produce a gradient whose first half contains x-differences and whose second half contains y-differences. The D1_ext matrix is side-by-side, so that its action is that of the divergence. Listing 5.11 handles the tougher part of computing the norm of the gradient term. We create three-point first difference and block averaging matrices in order to produce the block average-of-difference matrix D1_avg = [D1_y_avg; D1_x_avg]. This matrix averages central differences in y on cells to get values on vertical edges, and central differences in x on cells to get values on horizontal edges. Ghost points are used to enforce the boundary condition. See Fig. 5.10. Finally, Listing 5.12 contains the core of the Newton iteration, where we repeatedly construct the object function evaluated at the current approximation u, and the corresponding Jacobian matrix. The first half of Grad contains x-differences on vertical edges, and the second half contains y-differences on horizontal edges. The first half of D1_avg * u contains averages of y-differences, also valid on vertical edges; the second half contains x-differences at horizontal edges. Thus, the Grad.*Norm term of the object function is the equivalent of h(D1I*u) in the ODE code found in Listing 5.8, but care has been taken to be consistent in multiplying differences


Fig. 5.10 In order to preserve symmetry, a mixture of two-point first differences and three-point central first differences with averaging is used in the computation of the p-Laplacian. The two-point differences in x and y are approximate derivatives at midpoints of vertical and horizontal edges, respectively, denoted by a circled ‘x’. We average the three-point central differences which evaluate at cell centers to get approximate first differences of y and x at the same vertical and horizontal edge midpoints as their two-point difference counterparts. Ghost points are used to enforce the BC. For 0-D BC, the two three-point differences between ghost points and interior points are taken to be equal and opposite in sign, resulting in an average of zero

evaluated at the same locations. The Norm term of the Jacobian is the equivalent of hp(D1In *u) in that simpler code. With a suitable initial guess, the Newton iteration using the above object function and Jacobian produces O((dx)2 ) accurate approximations to the boundary value problem. Figure 5.11 shows the solution x(1 − x)y(1 − y) obtained by solving Δ p u = f with 0-D BC on the unit square for a particular f and p = 4. The f in this example was reverse-engineered by taking the p = 4 Laplacian of the desired solution. The nonlinear eigenvalue problem on the square can be solved by combining the functionality of the eigenvalue problem on the interval and the block matrices for the square. Doing so on the 2 × 2 square produces eigenvalues that substantially

% Norm difference matrices
D_cent = spdiags([-o,o],[-1,1],n,n);
D_cent(1,1) = 1;   % 0-D BC, 3pt 1st diff, cell-grid
D_cent(n,n) = -1;  % 0-D BC, 3pt 1st diff, cell-grid
D_cent = D_cent / (2*dx);

% Averaging matrices
Avg = spdiags([o o],[-1,0],n+1,n) / 2;
Avg(1,1)   = 0;  % 0-D BC
Avg(n+1,n) = 0;  % 0-D BC
Avg_x = kron(Avg,I);
Avg_y = kron(I,Avg);

% Average of central difference matrices
D1_x_cent = kron(I,D_cent);
D1_y_cent = kron(D_cent,I);
D1_x_avg  = Avg_x * D1_x_cent;
D1_y_avg  = Avg_y * D1_y_cent;
D1_avg    = [D1_y_avg; D1_x_avg];

Listing 5.11 Computing averages of 3-point central first differences in y and x gives approximations at locations consistent with the locations of the 2-point first differences in x and y, respectively.

% components of object function and Jacobian
Grad = D1_int*u;
Norm = (Grad.^2 + (D1_avg*u).^2) .^ ((p-2)/2);
LpU  = D1_ext*(Grad.*Norm);
% Object function: Delta_p u - f
G = LpU - Z;
% Jacobian
J = (p-1) * D1_ext * ...
    spdiags(Norm, 0, 2*n*(n+1), 2*n*(n+1)) * ...
    D1_int;

Listing 5.12 Defining the object function and Jacobian for applying Newton's method to solve the PDE −Δ_p u = f with 0-D BC. See Fig. 5.11.
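A usage sketch wrapping Listing 5.12 in the Newton iteration the text describes (it assumes the matrices of Listings 5.10–5.11 and an initial guess u are in scope):

% Newton iteration for the 2-D p-Laplacian problem (sketch).
for it = 1:30
    Grad = D1_int*u;
    Norm = (Grad.^2 + (D1_avg*u).^2).^((p-2)/2);
    G = D1_ext*(Grad.*Norm) - Z;
    J = (p-1) * D1_ext * spdiags(Norm,0,2*n*(n+1),2*n*(n+1)) * D1_int;
    u = u - J\G;
    if norm(G) < 1e-8, break; end
end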


Fig. 5.11 The solution u(x, y) = x(1 − x)y(1 − y) of −Δ_p u = f with 0-D BC on the unit square, with p = 4. The plot title 0.062495 is the computed maximum (the exact maximum is 1/16)


Fig. 5.12 The minimal energy sign-changing nonlinear eigenfunction of −Δ_p u = λu|u|^{p−2} with 0-D BC on the 2 × 2 square, with p = 4. The plot title 32.0823 is the computed eigenvalue λ

agree with those published in [27], but since the actual values are not known, the order of accuracy cannot be confirmed to be O((dx)²). Figure 5.12 shows the second sign-changing eigenfunction, obtained with the initial guess u₀(x, y) = sin(πx) sin(πy/2) + sin(πx/2) sin(πy) and λ = 5π²/4, the corresponding well-known p = 2 values.


Exercises

5.13 (*) Implement Newton's method to reproduce Fig. 5.9.

5.14 (*) Modify the code from Problem 5.13 to solve (y′|y′|²)′ = f with y(0) = 0 = y(1), where f is the negative p-Laplacian of x(1 − x). You can use a computer algebra system to compute f.

5.15 (**) Reproduce Fig. 5.11.

5.16 (**) Reproduce Fig. 5.12.

5.17 (**) Plot the first three eigenvalues of −Δ_p on the unit square with 0-D BC as a function of p ∈ (1/2, 3).

References

1. Anton, H., Rorres, C.: Elementary Linear Algebra: Applications Version, 9th edn. Wiley, New Jersey (2005)
2. Boyce, W.E., DiPrima, R.C.: Elementary Differential Equations and Boundary Value Problems. Wiley (2012)
3. Burden, R.L., Faires, J.D., Burden, A.M.: Numerical Analysis. Brooks Cole (2015)
4. DuChateau, P., Zachman, D.: Applied Partial Differential Equations. Harper and Row, New York (1989)
5. DuChateau, P., Zachman, D.: Schaum's Outline of Partial Differential Equations. McGraw-Hill (2011)
6. Evans, L.: Partial Differential Equations, 2nd edn. Graduate Studies in Mathematics, vol. 19. American Mathematical Society, Providence (2010)
7. Friedberg, S.H., Insel, A.J., Spence, L.E.: Linear Algebra, 3rd edn. Prentice Hall Inc., Upper Saddle River (1997)
8. Haberman, R.: Applied Partial Differential Equations with Fourier Series and Boundary Value Problems. Pearson (2012)
9. Hale, J.K.: Ordinary Differential Equations. Wiley (1969)
10. Langtangen, H.P., Linge, S.: Finite Difference Computing with PDEs. A Modern Software Approach. Springer (2017)
11. Larson, R., Edwards, B.H.: Elementary Linear Algebra, 4th edn. Cengage (2017)
12. Larsson, S., Thomee, V.: Partial Differential Equations with Numerical Methods (Texts in Applied Mathematics). Springer (2009)
13. Le Dret, H., Lucquin, B.: Partial Differential Equations: Modeling, Analysis and Numerical Approximation. International Series of Numerical Mathematics, vol. 168. Birkhäuser/Springer, Cham (2016)
14. Li, Z., Qiao, Z., Tang, T.: Numerical Solution of Differential Equations. Introduction to Finite Difference and Finite Element Methods. Cambridge University Press, Cambridge (2018)
15. Neri, F.: Linear Algebra for Computational Sciences and Engineering. Springer (2016)
16. Neuberger, J.W.: Sobolev Gradients and Differential Equations. Lecture Notes in Mathematics, vol. 1670. Springer, Berlin (1997)
17. Schiesser, W.E.: The Numerical Method of Lines: Integration of Partial Differential Equations. Academic Press, San Diego (1991)
18. Stoer, J., Bulirsch, R.: Introduction to Numerical Analysis (Texts in Applied Mathematics). Springer (2010)
19. Strang, G.: Calculus. Wellesley Cambridge Press (1992)
20. GAP – Groups, Algorithms, and Programming. http://www.gap-system.org. Cited 2018
21. Netlib. http://netlib.org. Cited 2018
22. Wikipedia. https://www.wikipedia.org. Cited 2018
23. Ambrosetti, A., Rabinowitz, P.H.: Dual variational methods in critical point theory and applications. J. Funct. Anal. 14, 349–381 (1973)
24. Castro, A., Cossio, J., Neuberger, J.M.: A sign-changing solution for a superlinear Dirichlet problem. Rocky Mt. J. Math. 27(4), 1041–1053 (1997)
25. Choi, Y.S., McKenna, P.J.: A mountain pass method for the numerical solution of semilinear elliptic problems. Nonlinear Anal. 20(4), 417–437 (1993)
26. Degbe, E.: Radial and nonradial solutions on the disk. M.S. thesis, Northern Arizona University (2018)
27. Horak, J.: Numerical investigation of the smallest eigenvalues of the p-Laplace operator on planar domains. Electron. J. Differ. Equ. 132, 1–30 (2011)
28. Girg, P., Kotrla, L.: Differentiability properties of p-trigonometric functions. Proceedings of the conference on variational and topological methods: theory, applications, numerical simulations, and open problems. Electron. J. Differ. Equ. Conf. 21, 101–127 (2014)
29. Neuberger, J.M.: A numerical method for finding sign-changing solutions of superlinear Dirichlet problems. Nonlinear World 4(1), 73–83 (1997)
30. Neuberger, J.M., Swift, J.W.: Newton's method and Morse index for semilinear elliptic PDEs. Int. J. Bifurc. Chaos Appl. Sci. Eng. 11(3), 801–820 (2001)
31. Neuberger, J.M.: GNGA: recent progress and open problems for semilinear elliptic PDE. Variational methods: open problems, recent progress, and numerical algorithms. Contemp. Math., vol. 357. American Mathematical Society, Providence (2004)
32. Neuberger, J.M., Sieben, N., Swift, J.W.: Computing eigenfunctions on the Koch snowflake: a new grid and symmetry. J. Comput. Appl. Math. 191 (2006)
33. Neuberger, J.M., Sieben, N., Swift, J.W.: Symmetry and automated branch following for a semilinear elliptic PDE on a fractal region. SIAM J. Appl. Dyn. Syst. 5 (2006)
34. Neuberger, J.M., Sieben, N., Swift, J.W.: Automated bifurcation analysis for nonlinear elliptic partial difference equations on graphs. Int. J. Bifurc. Chaos 19(8), 2531–2556 (2009)
35. Neuberger, J.M., Sieben, N., Swift, J.W.: Newton's method and symmetry for semilinear elliptic PDE on the cube. SIAM J. Appl. Dyn. Syst. 12(3), 1237–1279 (2013)