
Synthesis Lectures on Mathematics & Statistics

Daniel Arrigo

An Introduction to Partial Differential Equations Second Edition

Synthesis Lectures on Mathematics & Statistics Series Editor Steven G. Krantz, Department of Mathematics, Washington University, Saint Louis, MO, USA

This series includes titles in applied mathematics and statistics for cross-disciplinary STEM professionals, educators, researchers, and students. The series focuses on new and traditional techniques to develop mathematical knowledge and skills, an understanding of core mathematical reasoning, and the ability to utilize data in specific applications.


Daniel Arrigo University of Central Arkansas Conway, AR, USA

ISSN 1938-1743 ISSN 1938-1751 (electronic) Synthesis Lectures on Mathematics & Statistics ISBN 978-3-031-22086-9 ISBN 978-3-031-22087-6 (eBook) https://doi.org/10.1007/978-3-031-22087-6 1st edition: © Morgan & Claypool Publishers 2018 2nd edition: © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. This Springer imprint is published by the registered company Springer Nature Switzerland AG The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland

Preface

This is an introductory book about obtaining exact solutions to partial differential equations (PDEs). It is based on my lecture notes from a course I have taught almost every year since 2001 at the University of Central Arkansas (UCA). When I began teaching the course, I tried several textbooks. There are many fine textbooks on the market, but they just seemed to miss the mark for students at UCA. Even though the average ACT scores of incoming freshmen at UCA are among the highest in the state, the available textbooks were too sophisticated for my students. I also felt that the way most books taught the subject could be improved upon. For example, many books start with separation of variables, a technique used for solving second-order linear PDEs. Since most books on ordinary differential equations (ODEs) start with solving first-order ODEs before considering second-order ODEs, I felt the same order would be beneficial in solving PDEs. This naturally led to the presentation in this book.

In Chap. 1, I introduce four basic PDEs which some would consider the cornerstone of applied mathematics: the advection equation, the heat equation, Laplace's equation, and the wave equation. After this, I list 12 PDEs (and systems of PDEs) that appear in science and engineering and provide a springboard into the subject matter. Most of these PDEs are nonlinear, since our world is inherently nonlinear; however, one must first know how to solve linear PDEs before entering the nonlinear world.

In Chap. 2, I introduce the student to first-order PDEs. Through a change of variables, we solve constant coefficient and linear PDEs and are led to the method of characteristics. We continue with solving quasilinear and higher dimensional PDEs, and then progress to fully nonlinear first-order PDEs. The chapter ends with Charpit's method, a method that seeks compatibility between two first-order PDEs.

In Chap. 3, we focus on second-order PDEs and, in particular, three standard forms: (i) parabolic standard form, (ii) hyperbolic standard form, and (iii) elliptic standard form. Students learn how to transform a given equation to each standard form. The chapter ends with a derivation of the classic d'Alembert solution.

In Chap. 4, after a brief introduction to separation of variables for the heat equation, I introduce the Fourier series: both the regular Fourier series and the Fourier sine and cosine series. Several examples are considered showing various standard functions and their Fourier series representations.

At this point, I return the students to solving PDEs. In Chap. 5, we continue our discussion of separation of variables, where we consider the heat equation, Laplace's equation, and the wave equation. We start with the heat equation and consider several types of problems: fixed homogeneous boundary conditions, no-flux boundary conditions, and radiating boundary conditions; then we consider nonhomogeneous boundary conditions. Next, we consider nonhomogeneous equations, equations with solution-dependent source terms, and then solution-dependent convective terms. We move on to Laplace's equation and, finally, to the wave equation.

Chapter 6 involves the Fourier (sine/cosine) transform. It is a generalization of the Fourier series in which the length of the interval approaches infinity. It is through these transforms that we are able to solve a variety of PDEs on infinite and semi-infinite domains.

The final chapter, Chap. 7, extends these results to more spatial dimensions: in particular, the heat equation in 2 + 1 dimensions and Laplace's equation in three spatial dimensions, in Cartesian, cylindrical polar, and spherical polar coordinates. A variety of examples are used to illustrate these results.

The book is self-contained; the only requirements are a solid foundation in calculus and elementary differential equations. Chapters 1–5 have been the basis of a one-semester course at the University of Central Arkansas for over a decade. The material in Chaps. 6 and 7 could certainly be included, but each time I have taught the course I have omitted these chapters because of time constraints and in favor of student seminars. For these, I ask students to pick topics (extensions or applications of the material covered in class) and present oral seminars to the class with formal write-ups on their topic. My goal is that at the end of the course students understand why studying this subject is important.

Preface to the Second Edition

The second edition primarily mirrors the first edition. Several typos have been corrected and new material has been added: in particular, incorporating boundary conditions into Charpit's method, Laplace's equation on a disc, and an entire chapter on PDEs in higher dimensions. New material on Bessel functions and Legendre functions has been added, along with several exercises to supplement this new material.

Conway, USA
December 2022

Daniel Arrigo

Acknowledgements

I first would like to thank my wife Peggy. She once again became a book widow. My love and thanks. Second, I would like to thank all of my students who, over the past 15+ years, volunteered to read the book and gave much-needed input on both the presentation of material and the complexity of the examples given. Finally, I would like to thank Susanne Filler of Springer Nature Publishers. Once again, she made the process a simple and straightforward one.


Contents

1 Introduction
  1.1 Model Equations
    1.1.1 Advection Equation
    1.1.2 Diffusion Equation
    1.1.3 Laplace's Equation
    1.1.4 Wave Equation
  1.2 PDEs Are Everywhere
  References

2 First Order PDEs
  2.1 Constant Coefficient Equations
  2.2 Linear Equations
  2.3 Method of Characteristics
  2.4 Quasilinear Equations
  2.5 Higher Dimensional Equations
  2.6 Fully Nonlinear First Order Equations
    2.6.1 Method of Characteristics
    2.6.2 Charpit's Method
  Reference

3 Second Order Linear PDEs
  3.1 Introduction
  3.2 Standard Forms
    3.2.1 Parabolic Standard Form
    3.2.2 Hyperbolic Standard Form
    3.2.3 Modified Hyperbolic Form
    3.2.4 Regular Hyperbolic Form
    3.2.5 Elliptic Standard Form
  3.3 The Wave Equation

4 Fourier Series
  4.1 Fourier Series
  4.2 Fourier Series on [−π, π]
  4.3 Fourier Series on [−L, L]
  4.4 Odd and Even Extensions
    4.4.1 Sine Series
    4.4.2 Cosine Series

5 Separation of Variables
  5.1 The Heat Equation
    5.1.1 Nonhomogeneous Boundary Conditions
    5.1.2 Nonhomogeneous Equations
    5.1.3 Equations with a Solution Dependent Source Term
    5.1.4 Equations with a Solution Dependent Convective Term
  5.2 Laplace's Equation
    5.2.1 Laplace's Equation on an Arbitrary Rectangular Domain
    5.2.2 Laplace's Equation on a Disc
  5.3 The Wave Equation

6 Fourier Transform
  6.1 Fourier Transform
  6.2 Fourier Sine and Cosine Transforms

7 Higher Dimensional Problems
  7.1 2 + 1 Dimensional Heat Equation
  7.2 2 + 1 Dimensional Diffusion Equation in a Disc
    7.2.1 With Angular Symmetry
    7.2.2 Without Angular Symmetry
  7.3 Laplace's Equation
    7.3.1 Cartesian Coordinates
    7.3.2 Cylindrical Polar Coordinates
    7.3.3 Spherical Polar Coordinates
  7.4 Special Functions
    7.4.1 Bessel Functions
    7.4.2 Legendre Functions

Solutions

Index

1 Introduction

An ordinary differential equation (ODE) is an equation which involves ordinary derivatives. For example, if y = y(x), then the following are ODEs:

y′ = 0,   y′ = y,   y″ − y = 0,   y′ − y² = 0.   (1.1)

In an introductory course in ODEs, one learns techniques to solve such equations. The first in (1.1) is probably the simplest, as

y′ = 0   (1.2)

gives rise to the solution

y = c,   (1.3)

where c is an arbitrary constant. The second,

y′ = y,   (1.4)

gives rise to the solution

y = ce^x,   (1.5)

where again c is an arbitrary constant. Figure 1.1 shows solution curves as we vary the constant c. In order to pick out a particular curve we would have to add to the ODE some sort of initial condition. For example, if we were to say that y(0) = 1, then the single curve highlighted in blue would be our solution.

Partial differential equations (PDEs), on the other hand, are equations which involve partial derivatives. For example, if u = u(x, y), then

∂u/∂x = 0,   ∂u/∂y = 0,   and   ∂u/∂x + ∂u/∂y = 0   (1.6)

Fig. 1.1 The solutions of (1.2) and (1.4)

are all PDEs. The first in (1.6),

∂u/∂x = 0,   (1.7)

is one of the easiest to solve, and one can verify that

u = c,   u = y,   u = 3 sin y,   and   u = y² + ln |y|   (1.8)

are all solutions of (1.7). In fact, the most general solution of (1.7) is

u = f(y),   (1.9)

where f(y) is an arbitrary function of y. Similarly, the most general solution of

∂u/∂y = 0   (1.10)

is

u = g(x),   (1.11)

where g(x) is an arbitrary function of x. One of the subtle differences between the solutions of ODEs and the solutions of PDEs is that the solutions of ODEs contain constants of integration, whereas the solutions of PDEs contain functions of integration. But what about the third PDE in (1.6)? As it is just the sum of (1.7) and (1.10), one might think that the solution would be

u = f(y) + g(x).   (1.12)


However, substitution into

∂u/∂x + ∂u/∂y = 0   (1.13)

shows it is not identically satisfied! The reader might want to show that the actual solution is u = f(x − y), where f is an arbitrary function of its argument. So, a natural question is: how did this solution come about? This is a good question, and one that will be answered in this book. To motivate the study of PDEs, we begin with a standard formulation of the four basic models which lie at the cornerstone of applied mathematics: the advection equation, the diffusion equation, Laplace's equation and the wave equation.
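Both claims, that u = f(x − y) satisfies (1.13) while u = f(y) + g(x) in general does not, are easy to verify symbolically. The following is a minimal sketch using Python's sympy library (my illustration; the book itself uses no software):

```python
import sympy as sp

x, y = sp.symbols('x y')
f = sp.Function('f')

# u = f(x - y) satisfies u_x + u_y = 0 for any differentiable f
u = f(x - y)
print(sp.simplify(sp.diff(u, x) + sp.diff(u, y)))  # 0

# The naive guess u = f(y) + g(x) fails in general; try f(y) = y**2, g(x) = x
u_bad = y**2 + x
print(sp.diff(u_bad, x) + sp.diff(u_bad, y))  # 2*y + 1, not identically zero
```

The chain rule gives ∂u/∂x = f′(x − y) and ∂u/∂y = −f′(x − y), so the two terms cancel for every differentiable f.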

1.1 Model Equations

1.1.1 Advection Equation

Suppose a certain amount of a chemical spills into a river and flows downstream. Let us suppose that the concentration is given by u = u(x, t). Then the total amount of chemical is given by

∫_{0}^{b} u(x, t) dx.   (1.14)

If the river flows with constant speed c, then this spill will flow downstream, and if we assume that the chemical does not diffuse, then at time t + h the same amount of chemical is

∫_{ch}^{b+ch} u(x, t + h) dx,   (1.15)

and these two are equal (Fig. 1.2).

Fig. 1.2 Chemical spill

Thus,

∫_{ch}^{b+ch} u(x, t + h) dx = ∫_{0}^{b} u(x, t) dx.   (1.16)


Differentiating with respect to h gives

∫_{ch}^{b+ch} u_t(x, t + h) dx + c u(b + ch, t + h) − c u(ch, t + h) = 0,   (1.17)

and using the fundamental theorem of calculus, the last two terms can be written as

∫_{ch}^{b+ch} u_t(x, t + h) dx + c ∫_{ch}^{b+ch} u_x(x, t + h) dx = 0,   (1.18)

or

∫_{ch}^{b+ch} [ u_t(x, t + h) + c u_x(x, t + h) ] dx = 0.   (1.19)

Since b is arbitrary, the integrand is zero, and taking the limit as h → 0 gives

u_t + c u_x = 0.   (1.20)

If we knew the initial concentration at t = 0, say

u(x, 0) = f(x),   (1.21)

we would want to solve (1.20) subject to the initial condition (1.21).
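As a preview of Chap. 2, the solution of (1.20) subject to (1.21) turns out to be the initial profile carried downstream unchanged, u(x, t) = f(x − ct). A short sympy sketch (my own check, not part of the text) confirms that this candidate satisfies both the PDE and the initial condition:

```python
import sympy as sp

x, t, c = sp.symbols('x t c')
f = sp.Function('f')

# Candidate solution of u_t + c u_x = 0 with u(x, 0) = f(x)
u = f(x - c*t)

print(sp.simplify(sp.diff(u, t) + c*sp.diff(u, x)))  # 0: the PDE holds
print(u.subs(t, 0))                                  # f(x): the IC holds
```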

1.1.2 Diffusion Equation

The diffusion equation is used to describe the flow of heat and is often called the heat equation. The flow of heat is due to a transfer of thermal energy caused by an agitation of molecular matter. The two basic processes by which thermal energy moves are (1) conduction, collisions of neighboring molecules in which the molecules themselves do not move appreciably, and (2) convection, in which vibrating molecules change locations. We consider diffusion in a one-dimensional rod of length L, cross-sectional area A and density ρ(x) (Fig. 1.3).

Fig. 1.3 The typical cross-section

We introduce the thermal energy density function e(x, t), defined by

e(x, t) = c(x)ρ(x)u(x, t),   (1.22)

Fig. 1.4 Change in the flux in a rod of length L

where c(x) is the specific heat, ρ(x) is the density along the rod and u(x, t) is the temperature in the rod at location x and time t. We define the total heat H(t) as

H(t) = ∫_{0}^{L} c(x)ρ(x)u(x, t) A dx.   (1.23)

The amount of thermal energy per unit time flowing to the right per unit surface area is called the "heat flux" and is denoted by φ(x, t) (Fig. 1.4). If heat is generated within the rod and its heat density Q(x, t) is given, then the total heat generated is

∫_{0}^{L} Q(x, t) A dx.

The conservation of heat energy states:

rate of change of total heat = heat flowing across boundaries + heat energy generated inside,

or, mathematically,

dH/dt = ( φ(0, t) − φ(L, t) ) A + ∫_{0}^{L} Q(x, t) A dx,

so

d/dt ∫_{0}^{L} c(x)ρ(x)u(x, t) A dx = ( φ(0, t) − φ(L, t) ) A + ∫_{0}^{L} Q(x, t) A dx.   (1.24)

Using Leibniz's rule and the fundamental theorem of calculus, Eq. (1.24) becomes

∫_{0}^{L} c(x)ρ(x) ∂u(x, t)/∂t A dx = −∫_{0}^{L} ∂φ/∂x A dx + ∫_{0}^{L} Q(x, t) A dx,

which gives

∫_{0}^{L} [ c(x)ρ(x) ∂u(x, t)/∂t + ∂φ/∂x − Q(x, t) ] A dx = 0.


Since this applies to any length L, the integrand must vanish:

c(x)ρ(x) ∂u(x, t)/∂t + ∂φ/∂x − Q(x, t) = 0,

or

c(x)ρ(x) ∂u/∂t = −∂φ/∂x + Q(x, t).   (1.25)

Flux and Fourier's Law

Fourier's law is a description of how heat flows in a temperature field and is based on the following:

1. If the temperature is constant, there is no heat flow.
2. If there is a temperature difference, there will be flow, and heat will flow from hot to cold.
3. The greater the temperature difference, the greater the heat flow.
4. Heat flows differently in different materials.

With these in mind, Fourier suggested the following form for the heat flux:

φ = −k ∂u/∂x,   (1.26)

where k is the thermal conductivity. This then gives (1.25) as

c(x)ρ(x) ∂u/∂t = ∂/∂x ( k(x, t, u) ∂u/∂x ) + Q(x, t).   (1.27)

In the case where the density, the specific heat and the thermal conductivity are all constant, (1.27) becomes

∂u/∂t = D ∂²u/∂x² + f(x, t),   (1.28)

where D = k/(ρc) is the coefficient of diffusion and f = Q/(ρc) is the source term. Equation (1.28) is often referred to as the 1-D diffusion or heat equation with constant diffusion D and source f.

Boundary and Initial Conditions

In solving Eq. (1.28) for the temperature u(x, t), it is necessary to prescribe information at the endpoints of the rod as well as an initial temperature distribution. These are referred to as boundary conditions (BCs) and initial conditions (ICs).

Boundary Conditions

These are conditions at the ends of the rod. There are several types; the following are the most common.


(i) Prescribed Temperature. Usually given as

u(0, t) = u_l,   u(L, t) = u_r,

where u_l and u_r are constants, or

u(0, t) = u_l(t),   u(L, t) = u_r(t),

with u_l and u_r varying with respect to time.

(ii) Prescribed Heat Flux. If we were to prescribe the heat flow at the boundary, we would impose −k₀ ∂u/∂x = φ(x, t), so

−k₀ ∂u/∂x (0, t) = φ(0, t),   −k₀ ∂u/∂x (L, t) = φ(L, t),

where φ(0, t) and φ(L, t) are given functions of t. In the case of insulated boundaries, i.e. zero flux, these become

∂u/∂x (0, t) = 0,   ∂u/∂x (L, t) = 0.

(iii) Radiating Boundary Conditions. In the case where one or both ends are such that the flux is proportional to the temperature itself, we would use

∂u/∂x (0, t) = k₁ u(0, t),   ∂u/∂x (L, t) = k₂ u(L, t),

where k_i > 0 represents a gain and k_i < 0 a loss.

Initial Conditions

In addition to prescribing the temperature at the boundary, it is also necessary to prescribe the temperature initially (at t = 0). This is typically given as

u(x, 0) = f(x).

1.1.3 Laplace's Equation

The solutions of Laplace’s equation are used in many important fields of science, notably the fields of electromagnetism, astronomy, and fluid dynamics, because they can be used to accurately describe the behavior of electric, gravitational, and fluid potentials. In the study of heat conduction, the Laplace equation is the steady-state heat equation.


Here we will derive the equation in the context of 2D irrotational fluid flow. We consider a small rectangular control element immersed in a fluid, and denote the sides of this region by dx and dy. The flow through the region is given by v = ⟨u, v⟩, where u and v are the velocities in the x and y directions, respectively. We also assume the density ρ of the fluid is constant throughout the entire fluid and, for the sake of discussion, that the flow is left to right and bottom to top. The mass flow in through the left and bottom of the control element is

ṁ_in = ρu|_x dy + ρv|_y dx,   (1.29)

and the mass flow out through the right and top is

ṁ_out = ρu|_{x+dx} dy + ρv|_{y+dy} dx.   (1.30)

If M = ρ dA = ρ dx dy is the overall mass, which is constant (we are assuming the density is constant), then the change in mass is dM/dt = ṁ_in − ṁ_out = 0. So from (1.29) and (1.30),

ρu|_x dy + ρv|_y dx − ( ρu|_{x+dx} dy + ρv|_{y+dy} dx ) = 0.   (1.31)

Dividing through by dx dy gives

−( ρu|_{x+dx} − ρu|_x )/dx − ( ρv|_{y+dy} − ρv|_y )/dy = 0,   (1.32)

and in the limit as dx, dy → 0 this gives

(ρu)_x + (ρv)_y = 0,   (1.33)

which is known as the continuity equation. When ρ is constant, (1.33) becomes

u_x + v_y = 0.   (1.34)

We now consider the deformation of the rectangular element given in Fig. 1.5. At time t, the velocity in the x direction at the point O is u, and at the point B it is u + (∂u/∂y) dy. Also, the velocity in the y direction at the point O is v, and at the point A it is v + (∂v/∂x) dx. So at time t + dt, the rectangle has deformed to that shown in Fig. 1.6.


Fig. 1.5 A control element

Fig. 1.6 Deformed control element

For small angles,

dα ≈ tan dα = (∂v/∂x) dt,   dβ ≈ tan dβ = (∂u/∂y) dt,

from which we obtain the angular velocities dα/dt and dβ/dt as

dα/dt = ∂v/∂x,   (1.35)

dβ/dt = ∂u/∂y.   (1.36)

If we denote by ω the average angular velocity of the rectangle, then we obtain

ω = (1/2) ( ∂v/∂x − ∂u/∂y ).   (1.37)

If the fluid is irrotational, then ω = 0, giving

∂v/∂x − ∂u/∂y = 0.   (1.38)

If we introduce a potential φ such that

u = ∂φ/∂x,   v = ∂φ/∂y,   (1.39)

then we see that (1.38) is automatically satisfied, whereas (1.34) becomes

∂²φ/∂x² + ∂²φ/∂y² = 0,   (1.40)

which is known as Laplace's equation.

1.1.4 Wave Equation

Consider the motion of a perfectly elastic string in which the horizontal motion is negligible and the vertical motion is small. Let us represent this vertical displacement by y = u(x, t). The string will move according to a change in tension throughout the string and, in particular, on [x, x + Δx], where the tension at the endpoints is T(x, t) and T(x + Δx, t), respectively (see Figs. 1.7 and 1.8). If the mass density throughout the string is denoted by ρ = ρ(x), then using Newton's second law F = ma gives

F = ma = ρ(x)Δx ∂²u/∂t².   (1.41)

Fig. 1.7 A plucked string

Fig. 1.8 Tension in a small length of string

This must be balanced by the resultant tension, namely, T↑ − T↓. At the endpoints we have

T↓ = T(x, t) sin θ(x, t),   (1.42a)

T↑ = T(x + Δx, t) sin θ(x + Δx, t),   (1.42b)

and the difference gives

F = T(x + Δx, t) sin θ(x + Δx, t) − T(x, t) sin θ(x, t).   (1.43)

The balance of forces, namely Eqs. (1.41) and (1.43), gives rise to

ρ(x)Δx ∂²u/∂t² = T(x + Δx, t) sin θ(x + Δx, t) − T(x, t) sin θ(x, t),

and in the limit as Δx → 0,

ρ(x) ∂²u/∂t² = lim_{Δx→0} [ T(x + Δx, t) sin θ(x + Δx, t) − T(x, t) sin θ(x, t) ] / Δx,

from which we obtain

ρ(x) ∂²u/∂t² = ∂/∂x [ T(x, t) sin θ(x, t) ].   (1.44)

Since we are assuming that displacements are small, then θ(x, t) ≈ 0, so that

sin θ(x, t) ≈ tan θ(x, t)


leads us to replace Eq. (1.44) with

ρ(x) ∂²u/∂t² = ∂/∂x [ T(x, t) tan θ(x, t) ].   (1.45)

Since

tan θ(x, t) = ∂u/∂x,   (1.46)

this gives Eq. (1.45) as

ρ(x) ∂²u/∂t² = ∂/∂x [ T(x, t) ∂u/∂x ].   (1.47)

This partial differential equation is commonly known as the "wave equation". If we assume that the string is homogeneous and perfectly elastic, then ρ(x) = ρ₀ and T(x, t) = T₀, both constant. This, in turn, gives the wave equation as

∂²u/∂t² = (T₀/ρ₀) ∂²u/∂x².   (1.48)

We note that the units of T₀ are kg·m/s² and the units of ρ₀ are kg/m, giving the units of T₀/ρ₀ as m²/s², the square of a speed. Thus, we introduce the wave speed c, where c² = T₀/ρ₀. This gives the wave equation as

∂²u/∂t² = c² ∂²u/∂x².   (1.49)
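Chapter 3 derives d'Alembert's solution of (1.49); even before that derivation, one can verify symbolically that any superposition of a left-moving and a right-moving wave satisfies the wave equation. A sympy sketch (mine, not the text's):

```python
import sympy as sp

x, t, c = sp.symbols('x t c')
f, g = sp.Function('f'), sp.Function('g')

# Superposition of right- and left-moving waves
u = f(x - c*t) + g(x + c*t)

residual = sp.diff(u, t, 2) - c**2 * sp.diff(u, x, 2)
print(sp.simplify(residual))  # 0 for any twice-differentiable f and g
```

Each profile contributes c² times its second derivative to both u_tt and c²u_xx, so the residual cancels identically.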

Boundary Conditions

As in the case of the heat equation, it is necessary that boundary conditions be prescribed, i.e., conditions at the endpoints of the string. For example, if the endpoints are fixed at zero, then the following BCs would be used:

u(0, t) = 0,   u(L, t) = 0.   (1.50)

If the ends were forced to move, then we would impose the following BCs:

u(0, t) = f(t),   u(L, t) = g(t),   (1.51)

where f and g would be specified. If the ends were free to move, then we would impose the following BCs:

u_x(0, t) = 0,   u_x(L, t) = 0.   (1.52)

Similarly, as with the heat equation, we must also impose initial conditions; however, unlike the heat equation, we must prescribe two initial conditions: both position and speed. For example, we might impose


u(x, 0) = f(x)   (position),   (1.53a)

∂u/∂t (x, 0) = g(x)   (velocity).   (1.53b)

1.2 PDEs Are Everywhere

In the previous section we derived the advection equation, heat equation, Laplace’s equation and the wave equation. The list does not stop there. In fact, there are literally hundred’s of PDEs that can be found in various areas of science and engineering. Here I list just a few. It is my intent to attribute motivation to why one would construct methods to solve PDEs. 1. Burgers’ Equation u t + uu x = νu x x

(1.54)

is a fundamental partial differential equation that incorporates both nonlinearity and diffusion. It was first introduced as a simplified model for turbulence [1] and appears in various areas of applied mathematics, such as soil-water flow [2], nonlinear acoustics [3], and traffic flow [4]. 2. Fisher’s equation u t = u x x + u(1 − u)

(1.55)

is a model proposed for the wave of advance of advantageous genes [5] and also has applications in early farming [6], chemical wave propagation [7], nuclear reactors [8], chemical kinetics [9], and the theory of combustion [10].

3. The FitzHugh-Nagumo equation

   u_t = u_xx + u(1 − u)(u + k)  (1.56)

models the transmission of nerve impulses [11, 12], and arises in population genetics models [13].

4. The Korteweg-deVries (KdV) equation

   u_t + 6u u_x + u_xxx = 0  (1.57)

describes the evolution of long water waves down a canal of rectangular cross section. It has also been shown to model longitudinal waves propagating in a one-dimensional lattice, ion-acoustic waves in a cold plasma, waves in elastic rods, and the axial component of velocity in a rotating fluid flow down a tube [14].


5. The Eikonal equation

   |∇u| = F(x),   x ∈ Rⁿ,  (1.58)

appears in ray optics [15].

6. Schrödinger's equation

   iℏ ψ_t = −(ℏ²/2m) ∇²ψ + V(x) ψ  (1.59)

is an equation at the cornerstone of quantum mechanics. Here i = √−1 is the imaginary unit, ψ is the time-dependent wave function, ℏ is the reduced Planck constant, and V(x) is the potential [16].

7. The Gross-Pitaevskii equation

   iψ_t = −∇²ψ + (V(x) + |ψ|²) ψ

(1.60)

is a model for the single-particle wavefunction in a Bose-Einstein condensate [17, 18].

8. Plateau's equation

   (1 + u_y²) u_xx − 2u_x u_y u_xy + (1 + u_x²) u_yy = 0

(1.61)

arises in the study of minimal surfaces [19].

9. The Sine-Gordon equation

   u_xy = sin u

(1.62)

arises in the study of surfaces of constant negative curvature [20], and in the study of crystal dislocations [21].

10. The equilibrium equations

   ∂σ_xx/∂x + ∂σ_xy/∂y + F_x = 0,
   ∂σ_xy/∂x + ∂σ_yy/∂y + F_y = 0,

(1.63)

arise in linear elasticity. Here, σ_xx, σ_xy and σ_yy are the normal and shear stresses, and F_x and F_y are body forces [22].

11. The Navier-Stokes equations

   ∇ · u = 0,
   u_t + u · ∇u = −∇P/ρ + ν ∇²u

(1.64)


describe the velocity field and pressure of incompressible fluids. Here ν is the kinematic viscosity, u is the velocity of the fluid parcel, P is the pressure, and ρ is the fluid density [23].

12. Maxwell's Equations

   ∇ · E = ρ/ε₀,
   ∇ · B = 0,  (1.65)
   ∇ × E = −∂B/∂t,
   ∇ × B = μ₀(J + ε₀ ∂E/∂t),

which appear in electricity and magnetism. Here E denotes the electric field, B denotes the magnetic field, D denotes the electric displacement field, J denotes the free current density, ρ denotes the free electric charge density, ε₀ the permittivity of free space, and μ₀ the permeability of free space [24].

Exercises

1. Match each solution to its corresponding PDE:

   (i) u = x² + y²,          (a) 2u_x − u_y = 1,
   (ii) u = x + y,           (b) x u_x + y u_y = 2u,
   (iii) u = √(x² + y²),     (c) u_x + u u_y = u + 2x,
   (iv) u = x² + y,          (d) u_x² + u_y² = 1.

2. Which of the following is not a solution of u_tt = 4u_xx?

   (i) u = x − 2t,                   (ii) u = 1/(x + 2t),
   (iii) u = (x − 2t)/(x + 2t),      (iv) u = (x − 2t)(x + 2t).

3. Show that

   u = eˣ f(2x − y),

   where f is an arbitrary function of its argument, is a solution of u_x + 2u_y = u.

4. Show that

   u = f(x) + g(y),


where f and g are arbitrary functions, is a solution of u_xy = 0.

5. Find constants a and b such that u = e^{at} sin bx and u = e^{at} cos bx are solutions of u_t = u_xx.

6. Show that

   u = e^{−x²/4t} / √t

   is also a solution of u_t = u_xx.

7.

Consider the PDE

   u_t = u_xx + 2 sech²x u.  (E1)

   Show that if v = e^{k²t} sinh kx or v = e^{k²t} cosh kx (k is an arbitrary constant), then u = v_x − tanh x · v satisfies (E1).

8.

Show that

   u = ln( 2 f′(x) g′(y) / (f(x) + g(y))² ),

   where f(x) and g(y) are arbitrary functions, satisfies Liouville's equation u_xy = e^u.

9.

Show that

   u = 4 tan⁻¹( e^{ax + a⁻¹y} ),

   where a is an arbitrary nonzero constant, satisfies the Sine-Gordon equation u_xy = sin u.

10.

Show that if

   u = f(x + ct)

   satisfies the KdV equation (1.57), then f satisfies

   c f′ + 6 f f′ + f′′′ = 0,  (E2)

   where prime denotes differentiation with respect to the argument of f. Show there is one value of c such that f(r) = 2 sech²r is a solution of (E2).

11.

The PDE

   v_t − 6v² v_x + v_xxx = 0

   is known as the modified Korteweg-deVries (mKdV) equation. Show that if v is a solution of the mKdV equation, then u = v_x − v² is a solution of the KdV equation (1.57).

References

1. J.M. Burgers, Mathematical examples illustrating relations occurring in the theory of turbulent fluid motion. Akademie van Wetenschappen, Amsterdam, Eerste Sectie, Deel XVII 2, 1–53 (1939)
2. P. Broadbridge, Burgers' equation and layered media: exact solutions and applications to soil-water flow. Math. Comput. Model. 16(11), 163–169 (1992)
3. K. Naugolnykh, L. Ostrovsky, Nonlinear Wave Processes in Acoustics (Cambridge University Press, 1998)
4. R. Haberman, Mathematical Models: Mechanical Vibrations, Population Dynamics, and Traffic Flow: An Introduction to Applied Mathematics (Prentice-Hall, Englewood Cliffs, New Jersey, 1977)
5. R.A. Fisher, The wave of advance of advantageous genes. Ann. Eugenics 7, 353–369 (1937)
6. A.J. Ammerman, L.L. Cavalli-Sforza, Measuring the rate of spread of early farming. Man 6, 674–688 (1971)
7. R. Arnold, K. Showalter, J.J. Tyson, Propagation of chemical reactions in space. J. Chem. Educ. 64, 740–742 (1987)
8. J. Canosa, Diffusion in nonlinear multiplicative media. J. Math. Phys. 10, 1862–1868 (1969)
9. P.C. Fife, Mathematical Aspects of Reacting and Diffusing Systems, Lecture Notes in Biomathematics 28 (Springer, Berlin, 1979)
10. A. Kolmogorov, I. Petrovsky, N. Piscunov, A study of the equation of diffusion with increase in the quantity of matter and its application to a biological problem. Bull. Univ. Moscow, Ser. Internat. Sec. A 1, 1–25 (1937)
11. R. FitzHugh, Impulses and physiological states in theoretical models of nerve membrane. Biophys. J. 1, 445–466 (1961)
12. J.S. Nagumo, S. Arimoto, S. Yoshizawa, An active pulse transmission line simulating nerve axon. Proc. IRE 50, 2061–2070 (1962)
13. D.G. Aronson, H.F. Weinberger, in Lecture Notes in Mathematics, Vol. 446, Partial Differential Equations and Related Topics, ed. by J.A. Goldstein (Springer, Berlin, 1975)
14. R. Miura, The Korteweg-deVries equation: a survey of results. SIAM Rev. 18(3), 412–459 (1978)
15. D.D. Holm, Geometric Mechanics Part 1: Dynamics and Symmetry (Imperial College Press, 2011)
16. D. Griffiths, Introduction to Quantum Mechanics, 2nd edn. (Pearson Prentice Hall, 2004)


17. E.P. Gross, Structure of a quantized vortex in boson systems. Il Nuovo Cimento 20(3), 454–457 (1961)
18. L.P. Pitaevskii, Vortex lines in an imperfect Bose gas. Sov. Phys. JETP 13(2), 451–454 (1961)
19. J.C.C. Nitsche, On new results in the theory of minimal surfaces. Bull. Amer. Math. Soc. 71, 195–270 (1965)
20. L.P. Eisenhart, A Treatise on the Differential Geometry of Curves and Surfaces (Dover Publishing, New York, 1960)
21. F.C. Frank, J.H. van der Merwe, One-dimensional dislocations. I. Static theory. Proc. R. Soc. Lond. A 198, 205–216 (1949)
22. S. Timoshenko, J.N. Goodier, Theory of Elasticity (McGraw-Hill, 1951)
23. G.K. Batchelor, An Introduction to Fluid Mechanics (Cambridge University Press, 2000)
24. D.J. Griffiths, Introduction to Electrodynamics, 4th edn. (Pearson, 2012)

2

First Order PDEs

Partial differential equations of the form

   u_x − 2u_y = 0,   u_x + y u_y = 0,   u_x + u u_y = 1,   u_x² + u_y² = 1,  (2.1)

are all examples of first order partial differential equations. The first equation is constant coefficient, the second is linear, the third is quasilinear and the last is fully nonlinear. In general, equations of the form

   F(x, y, u, u_x, u_y) = 0,

(2.2)

are first order partial differential equations. This chapter deals with techniques to construct general solutions of (2.2) when they are constant coefficient, linear, quasilinear and fully nonlinear equations.

2.1

Constant Coefficient Equations

Partial differential equations (PDEs) of the form

   a u_x + b u_y = c u

(2.3)

where a, b and c are constants, are called constant coefficient PDEs. For example, consider

   u_x − u_y = 0.

(2.4)

Our goal is to find a solution u = u(x, y) of this equation. If we introduce the change of variables

   r = x + y,   s = x − y,  (2.5)

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
D. Arrigo, An Introduction to Partial Differential Equations, Synthesis Lectures on Mathematics & Statistics, https://doi.org/10.1007/978-3-031-22087-6_2


then the derivatives u_x and u_y transform as

   u_x = u_r + u_s,   u_y = u_r − u_s,

and Eq. (2.4) becomes (u_r + u_s) − (u_r − u_s) = 2u_s = 0, or

   u_s = 0.

(2.6)

Integration of (2.6) gives u = f(r), or

   u = f(x + y).

(2.7)

Substituting (2.7) into Eq. (2.4) shows that it is satisfied, and thus it is the solution. Let us introduce a new set of variables, say

   r = x + y,   s = x² − y².

(2.8)

The derivatives transform as

   u_x = u_r + (s/r + r) u_s,   u_y = u_r + (s/r − r) u_s,

and Eq. (2.4) becomes

   2r u_s = 0   ⇒   u_s = 0.

(2.9)

As (2.9) is the same as (2.6), integration again yields u = f(r), or u = f(x + y), the same solution as given in (2.7). Finally, consider a third change of variables

   r = x − y,   s = x² − y².  (2.10)

The derivatives transform as

   u_x = u_r + (r + s/r) u_s,   u_y = −u_r + (r − s/r) u_s,

and Eq. (2.4) becomes, after simplification,

   r u_r + s u_s = 0,


a new first order PDE. This equation is actually more complicated than the one we started with! As the changes of variables (2.5) and (2.8) transformed the original PDE to a simple equation, while the change of variables (2.10) transformed it to one that is more complicated, a natural question is: "What is common to the changes of variables (2.5) and (2.8) that is absent from (2.10)?" The answer is that one of the variables is r = x + y. In fact, if we choose r = x + y and s = s(x, y) arbitrary, then

   u_x = u_r + s_x u_s,   u_y = u_r + s_y u_s,

and Eq. (2.4) becomes, after simplification,

   (s_x − s_y) u_s = 0,

(2.11)

from which we deduce that u_s = 0, recovering the solution found in (2.7). We note that s_x − s_y ≠ 0, since s_x − s_y = 0 would make the Jacobian of the transformation, r_x s_y − r_y s_x, vanish (a nonzero Jacobian ensures that we can transform (x, y) ←→ (r, s)). The question is, how did we know to choose r = x + y as one of the new variables? For example, suppose we consider

   2u_x − u_y = 0,

(2.12)

   u_x + 5u_y = 0,

(2.13)

or (2.13): what would be the right choice of r(x, y) that leads to a simple equation like u_s = 0? In an attempt to answer this, let us introduce the change of variables r = r(x, y) and s = s(x, y) and try to find r so that the original PDE becomes u_s = 0. Under a general change of variables,

   u_x = r_x u_r + s_x u_s,   u_y = r_y u_r + s_y u_s,

Eq. (2.4) becomes

   (r_x − r_y) u_r + (s_x − s_y) u_s = 0

(2.14)

and in order to obtain our target PDE, it is necessary to choose

   r_x − r_y = 0.

(2.15)

However, to solve (2.15) is to solve (2.4)! In fact, to solve any PDE of the form (2.3) using a general change of variables, it would be necessary to have one solution of

   a r_x + b r_y = 0.

(2.16)

So without knowing one of the variables is r = x + y, our first attempt at trying to solve (2.4) would have failed!


For our second attempt, we will work backwards. We will start with the answer, u_s = 0, and try to target our original PDE, Eq. (2.4). Therefore, if we start with

   u_s = 0,

(2.17)

and use the general chain rule u_s = u_x x_s + u_y y_s, then (2.17) becomes

   u_x x_s + u_y y_s = 0.

(2.18)

Choosing

   x_s = 1,   y_s = −1,

(2.19)

gives the original Eq. (2.4). Integrating (2.19) gives

   x = s + a(r),   y = −s + b(r),

(2.20)

where a(r) and b(r) are arbitrary functions of r. From (2.17), we obtain u = c(r), and eliminating s from (2.20) gives

   x + y = a(r) + b(r) = d(r)   ⇒   r = d⁻¹(x + y),  (2.21)

so

   u = c(d⁻¹(x + y))   ⇒   u = f(x + y),  (2.22)

since sums, inverses and compositions of arbitrary functions are again arbitrary. The next two examples illustrate this technique further.

Example 2.1 Consider

   2u_x + u_y = 0.

(2.23)

If

   u_s = u_x x_s + u_y y_s,  (2.24)

choosing

   x_s = 2,   y_s = 1  (2.25)

gives (2.24) as

   u_s = 2u_x + u_y,  (2.26)

and via (2.23) gives

   u_s = 0.  (2.27)

Integrating (2.25) and (2.27) gives

   x = 2s + a(r),   y = s + b(r),   u = c(r).  (2.28)


Eliminating s from (2.28) gives x − 2y = a(r) − 2b(r) = d(r) (some arbitrary function), and solving for r gives r = d⁻¹(x − 2y). Using this, from (2.28) we obtain

   u = f(x − 2y)

(2.29)

as the solution of (2.23), noting that c ∘ d⁻¹ = f.

Example 2.2 Consider

   u_x + 4u_y = u.

(2.30)

If

   u_s = u_x x_s + u_y y_s,  (2.31)

and we choose

   x_s = 1,   y_s = 4,  (2.32)

then (2.31) becomes

   u_s = u_x + 4u_y,  (2.33)

and via (2.30) becomes

   u_s = u.  (2.34)

Integrating (2.32) and (2.34) gives

   x = s + a(r),   y = 4s + b(r),   u = c(r) eˢ.  (2.35)

Eliminating s from (2.35) gives

   4x − y = 4a(r) − b(r) = d(r)  (2.36)

and

   u = c(r) eˢ = c(r) e^{x−a(r)} = c(r) e^{−a(r)} eˣ = e(r) eˣ  (2.37)

(where c(r) e^{−a(r)} = e(r)), and eliminating r between (2.36) and (2.37) gives

   u = eˣ f(4x − y),

(2.38)

the solution of the PDE (2.30).
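Solutions containing an arbitrary function are easy to verify symbolically. As a quick check (using sympy, our own choice of tool here and not part of the text), the following confirms that (2.38) satisfies (2.30) for any f:

```python
import sympy as sp

x, y = sp.symbols('x y')
f = sp.Function('f')

# General solution (2.38) of u_x + 4u_y = u, with f arbitrary
u = sp.exp(x) * f(4*x - y)

# Residual of the PDE: u_x + 4u_y - u should vanish identically
residual = sp.diff(u, x) + 4*sp.diff(u, y) - u
print(sp.simplify(residual))  # 0
```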

2.2

Linear Equations

We now turn our attention to first order PDEs of the form

   a(x, y) u_x + b(x, y) u_y = c(x, y) u,

(2.39)

² Please note, these are functions of integration and are not the same as a, b and c as given in (2.39).


where a, b and c are now functions of the variables x and y. These types of equations are called linear PDEs.

Example 2.3 Consider

   x u_x − y u_y = 0.

(2.40)

Does the method from the preceding section work here? The answer is yes! If we let u_s = u_x x_s + u_y y_s and choose

   x_s = x,   y_s = −y,

(2.41)

then (2.40) becomes

   u_s = 0.

(2.42)

We are required to solve (2.41) and (2.42). The solutions of these are

   x = a(r) eˢ,   y = b(r) e⁻ˢ,   u = c(r),

(2.43)

where a, b and c² are arbitrary functions of integration. Eliminating s from the first two of (2.43) gives

   x y = a(r) b(r)   ⇒   r = A(x y),  (2.44)

and eliminating r from the third of (2.43) and (2.44) gives

   u = c(A(x y))   ⇒   u = f(x y).

(2.45)
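As before, the solution (2.45) can be checked symbolically; this short sympy sketch (our own verification, not part of the text) confirms that u = f(xy) satisfies (2.40) for arbitrary f:

```python
import sympy as sp

x, y = sp.symbols('x y')
f = sp.Function('f')

# General solution (2.45) of x u_x - y u_y = 0
u = f(x*y)

# x u_x - y u_y = x*y*f' - y*x*f' = 0 identically
residual = x*sp.diff(u, x) - y*sp.diff(u, y)
print(sp.simplify(residual))  # 0
```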

In general, for linear equations of the form (2.39), if we introduce the general chain rule u_s = u_x x_s + u_y y_s, then we can target this PDE by choosing

   x_s = a(x, y),   y_s = b(x, y),   u_s = c(x, y) u.

(2.46)

Example 2.4 Consider

   x u_x + y u_y = u.

(2.47)

Here, we must solve

   x_s = x,   y_s = y,   u_s = u.

(2.48)


The solutions of these are

   x = a(r) eˢ,   y = b(r) eˢ,   u = c(r) eˢ.

(2.49)

Eliminating s from the first and second, and the first and third, of (2.49) gives

   y/x = b(r)/a(r),   u/x = c(r)/a(r),  (2.50)

and further elimination of r gives

   u/x = f(y/x),   or   u = x f(y/x).  (2.51)

Example 2.5 Consider

   u_t + x u_x = 1,   u(x, 0) = −x².  (2.52)

We note that in this example the independent variables have changed from (x, y) to (x, t). It is also an initial value problem, meaning that the arbitrary function that appears in the final solution will have a prescribed form given by the initial condition. Here we must solve

   t_s = 1,   x_s = x,   u_s = 1.  (2.53)

The solutions of these are

   t = s + a(r),   x = b(r) eˢ,   u = s + c(r).  (2.54)

Eliminating the variable s in (2.54) gives

   x e⁻ᵗ = b(r) e^{−a(r)},   u − t = c(r) − a(r)

(2.55)

and further elimination of r gives

   u = t + f(x e⁻ᵗ).

(2.56)

Now we impose the initial condition u(x, 0) = −x². In doing so, f(x) = −x², and thus the final solution is (Fig. 2.1)

   u = t − x² e⁻²ᵗ.  (2.57)

We now want to solve this problem in a slightly different way. As we are essentially changing from (x, t) to (r, s), we wish to create a boundary condition in the (r, s) plane that corresponds to the boundary condition u(x, 0) = −x² in the (x, t) plane. In the (x, t) plane, the line t = 0 is the boundary where u is defined. We must associate with it a curve in the (r, s) plane. Here, we choose s = 0 (most choices will work) and connect the two boundaries by x = r.

Fig. 2.1 Change in the boundary from the (x, t) plane to the (r, s) plane

Thus, the new boundary conditions are

   t = 0,   x = r,   u = −r²,   when s = 0.

(2.58)

Solving (2.53) still gives (2.54). However, when we impose the new boundary conditions (2.58), we find that a(r) = 0, b(r) = r and c(r) = −r², giving

   t = s,   x = r eˢ,   u = s − r²,

(2.59)

Elimination of r and s in (2.59) gives

   u = t − x² e⁻²ᵗ,

(2.60)

the solution presented in (2.57).

Example 2.6 Solve

   x u_x + 2y u_y = u + x²,   u(x, x) = 0.

(2.61)

Here, the characteristic equations are

   x_s = x,   y_s = 2y,   u_s = u + x².

(2.62)

In the (x, y) plane, the boundary is y = x. In the (r, s) plane, we will choose it to be s = 0. Again, we will connect the boundaries via x = r (Fig. 2.2), giving the boundary conditions

   x = r,   y = r,   u = 0,   when s = 0.

(2.63)

Fig. 2.2 Change in the boundary from the (x, y) plane to the (r, s) plane

It is important to note that in order to solve for u in (2.62), we will need x first. The solution of the first two of (2.62) is

   x = a(r) eˢ,   y = b(r) e²ˢ.

(2.64)

The boundary conditions in (2.63) give a(r) = r and b(r) = r, so that

   x = r eˢ,   y = r e²ˢ.

(2.65)

Using x from (2.65) enables us to solve the remaining equation in (2.62) for u. This gives

   u = r² e²ˢ + c(r) eˢ.

(2.66)

Imposing the last boundary condition from (2.63) gives c(r) = −r², and thus

   u = r² e²ˢ − r² eˢ.

(2.67)

Eliminating r and s from (2.65) and (2.67) gives

   u = x² − x³/y.

(2.68)
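A quick symbolic check of Example 2.6 (a sympy sketch of our own, not part of the text) confirms that (2.68) satisfies both the PDE (2.61) and the boundary condition u(x, x) = 0:

```python
import sympy as sp

x, y = sp.symbols('x y')

# Solution (2.68) of x u_x + 2y u_y = u + x^2 with u(x, x) = 0
u = x**2 - x**3/y

# PDE residual and boundary value should both vanish
residual = x*sp.diff(u, x) + 2*y*sp.diff(u, y) - u - x**2
print(sp.simplify(residual))       # 0
print(sp.simplify(u.subs(y, x)))   # 0  (boundary condition)
```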

Example 2.7 Consider

   y u_x + x u_y = u.

(2.69)

Here, we must solve

   x_s = y,   y_s = x,   u_s = u.

(2.70)


As the first two of (2.70) form a coupled system, their solutions will be coupled. The solutions are

   x = a(r) eˢ + b(r) e⁻ˢ,   y = a(r) eˢ − b(r) e⁻ˢ,   u = c(r) eˢ.

(2.71)

In the previous examples, eliminating s was easy. Here this is not the case. Noting that

   x + y = 2a(r) eˢ   and   x − y = 2b(r) e⁻ˢ,

(2.72)

and multiplying these gives

   (x + y)(x − y) = 4a(r) b(r)   ⇒   r = g(x² − y²),  (2.73)

and further, using (2.73) in conjunction with (2.71) leads finally to the solution

   u = (x + y) f(x² − y²).

(2.74)

This example clearly shows that even though this technique works, trying to eliminate the variables r and s can be quite tricky. In the next section we will bypass the introduction of the variables r and s.

2.3

Method of Characteristics

In solving first order PDEs of the form

   a(x, y) u_x + b(x, y) u_y = c(x, y) u,

(2.75)

it was necessary to solve the system of equations

   x_s = a(x, y),   y_s = b(x, y),   u_s = c(x, y) u.  (2.76)

As seen in Example 2.2, in solving

   u_x + 4u_y = u  (2.77)

we associated the system

   x_s = 1,   y_s = 4,   u_s = u,  (2.78)

or

   ∂x/∂s = 1,   ∂y/∂s = 4,   ∂u/∂s = u.  (2.79)

If we separate (2.79), then

   ∂x = ∂s,   ∂y/4 = ∂s,   ∂u/u = ∂s.  (2.80)


We can rewrite system (2.80) as

   ∂x − ∂y/4 = 0,   ∂u/u = ∂x,

(2.81)

thereby eliminating the variable s. Further, integrating leads to

   4x − y = A(r),   ln |u| − x = B(r),

(2.82)

and elimination of r leads to

   ln |u| − x = f(4x − y)

(2.83)

or

   u = eˣ f(4x − y),

(2.84)

or noting that after exponentiation, we replaced e f with f . If we treat r in (2.82) as constant, then we can treat the partial derivatives as ordinary derivatives in (2.81), then 4 d x − dy = 0,

du = d x. u

(2.85)

If we integrate (2.85) we obtain

   4x − y = c₁,   ln |u| − x = c₂.

(2.86)

Comparing (2.86) and (2.82) shows that the constants c₁ and c₂ play the role of A(r) and B(r), and since we wish to eliminate r, this is equivalent to setting c₂ = f(c₁). This gives the solution of the PDE. We typically write Eq. (2.85) as

   dx/1 = dy/4 = du/u.

(2.87)

These are called the "characteristic equations", and the method is called the "method of characteristics". In general, for linear PDEs of the form (2.75), the method of characteristics requires us to solve

   dx/a(x, y) = dy/b(x, y) = du/(c(x, y) u).  (2.88)

Example 2.8 Here, we revisit Eq. (2.69), considered earlier in Example 2.7. The characteristic equations are

   dx/y = dy/x = du/u.  (2.89)

Solving the first pair in (2.89) we obtain

   x² − y² = c₁.

(2.90)


For the second pair in (2.89), we use a result derived in Exercise 5 that

   d(x + y)/(x + y) = du/u,

(2.91)

which leads to

   u/(x + y) = c₂,  (2.92)

and the general solution is c₂ = f(c₁), which leads to

   u = (x + y) f(x² − y²).

(2.93)
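Since u in (2.93) contains an arbitrary function of the characteristic invariant, the check is again mechanical; the following sympy sketch (our own verification, not part of the text) confirms (2.93) satisfies (2.69):

```python
import sympy as sp

x, y = sp.symbols('x y')
f = sp.Function('f')

# General solution (2.93) of y u_x + x u_y = u
u = (x + y) * f(x**2 - y**2)

# The f' terms carry factors 2xy and -2xy and cancel
residual = y*sp.diff(u, x) + x*sp.diff(u, y) - u
print(sp.simplify(residual))  # 0
```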

Example 2.9 Consider

   2x u_x + y u_y = 2x,   u(x, x) = x² + x.  (2.94)

The characteristic equations are

   dx/(2x) = dy/y = du/(2x).

(2.95)

Solving the first and second, and the first and third, in (2.95) gives

   y²/x = c₁,   u − x = c₂,  (2.96)

giving the general solution as

   u = x + f(y²/x).  (2.97)

Using the initial condition gives

   x + f(x) = x + x²   ⇒   f(x) = x²,  (2.98)

thus giving the solution

   u = x + (y²/x)².  (2.99)

2.4  Quasilinear Equations

First order PDEs of the form

   a(x, y, u) u_x + b(x, y, u) u_y = c(x, y, u),

(2.100)

where a, b and c are all functions of the variables x, y and u, are called quasilinear PDEs. The method of characteristics can also be used here giving


   dx/a(x, y, u) = dy/b(x, y, u) = du/c(x, y, u),

(2.101)

noting that the difference between this and the linear case is that the resulting ODEs are fully coupled. The following examples illustrate this case.

Example 2.10 Consider

   y u_x + (x − u) u_y = y.

(2.102)

The characteristic equations are

   dx/y = dy/(x − u) = du/y.  (2.103)

From the first and third of (2.103), we obtain u − x = c₁; using this in the first and second of (2.103), we obtain

   dx/y = −dy/c₁,  (2.104)

which yields

   c₁ x + ½ y² = c₂,

or

   (u − x) x + ½ y² = c₂.

Eliminating the constants gives rise to the solution

   (u − x) x + ½ y² = f(u − x).

Example 2.11 Consider

   (x + u) u_x + (y + u) u_y = x − y.

(2.105)

The characteristic equations are

   dx/(x + u) = dy/(y + u) = du/(x − y).

(2.106)

To solve these, we rewrite them as

   dx/du = (x + u)/(x − y),   dy/du = (y + u)/(x − y).  (2.107)

Subtracting gives

   d(x − y)/du = (x − y)/(x − y) = 1,  (2.108)


which integrates, giving

   x − y = u + c₁.

(2.109)

Using this in the first and third of (2.106) gives

   dx/du = (x + u)/(u + c₁),

(2.110)

which is a linear ODE. It has as its solution

   x/(u + c₁) − ln |u + c₁| − c₁/(u + c₁) = c₂,  (2.111)

and using c₁ above gives

   (y + u)/(x − y) − ln |x − y| = c₂.

(2.112)

Therefore, the solution of the original PDE is given by

   (y + u)/(x − y) − ln |x − y| = f(x − y − u).

(2.113)

As the final example in this section, we consider the PDE

   u_t + c(x, t, u) u_x = 0.

(2.114)

Equation (2.114) is commonly referred to as the first order wave equation, and c = c(x, t, u) is usually referred to as the "wave speed." In particular, we will consider the following three equations

   u_t + 2u_x = 0,   u_t + 2x u_x = 0,   u_t + 2u u_x = 0,  (2.115)

all subject to the initial condition u(x, 0) = sech x. Each can be solved using the method of characteristics giving

   u = f(x − 2t),   u = f(x e⁻²ᵗ),   u = f(x − 2tu),

(2.116)

respectively, and imposing the initial condition gives the solutions

   u = sech(x − 2t),   u = sech(x e⁻²ᵗ),   u = sech(x − 2tu).

(2.117)

Figures 2.3, 2.4 and 2.5 show the respective solutions. It is interesting to note that in Fig. 2.3 the wave moves to the right without changing its shape, in Fig. 2.4 the wave spreads out and, in Fig. 2.5, the wave moves to the right with its speed varying according to its height.
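All three solutions in (2.117) can be checked symbolically; for the third, which is implicit, we solve for u_t and u_x by implicit differentiation. This sympy sketch is our own verification, not part of the text:

```python
import sympy as sp

x, t = sp.symbols('x t')

# Explicit solutions (2.117)(i) and (ii)
u1 = sp.sech(x - 2*t)
u2 = sp.sech(x*sp.exp(-2*t))
print(sp.simplify(sp.diff(u1, t) + 2*sp.diff(u1, x)))    # 0
print(sp.simplify(sp.diff(u2, t) + 2*x*sp.diff(u2, x)))  # 0

# Implicit solution (2.117)(iii): u = sech(x - 2 t u)
u = sp.Function('u')(x, t)
phi = u - sp.sech(x - 2*t*u)
ux, ut = sp.symbols('u_x u_t')
dx_eq = phi.diff(x).subs(sp.Derivative(u, x), ux)
dt_eq = phi.diff(t).subs(sp.Derivative(u, t), ut)
sol = sp.solve([dx_eq, dt_eq], [ux, ut], dict=True)[0]
# u_t + 2 u u_x should vanish on the implicit solution
print(sp.simplify(sol[ut] + 2*u*sol[ux]))                # 0
```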


Fig. 2.3 The solution (2.117)(i) at times t = 0, 10 and 20

Fig. 2.4 The solution (2.117)(ii) at times t = 0, 2 and 4

Fig. 2.5 The solution (2.117)(iii) at times t = 0, 4 and 8

2.5  Higher Dimensional Equations

The method of characteristics easily extends to PDEs with more than two independent variables. The following examples demonstrate this.

Example 2.12 Consider

   x u_x + y u_y − z u_z = u.

(2.118)

The characteristic equations for this would be

   dx/x = dy/y = dz/(−z) = du/u.

(2.119)

We now pick in pairs:

   (i) dx/x = dy/y,   (ii) dx/x = dz/(−z),   (iii) dx/x = du/u.  (2.120)

Integrating each, we obtain

   y/x = c₁,   x z = c₂,   u/x = c₃.

Extending the result for two independent variables, the general solution is c₃ = f(c₁, c₂). For this problem, the solution is

   u = x f(y/x, x z).

Example 2.13 Consider

   u_t + u_x − x u_y = 0,   u(x, y, 0) = (x² + 2y) e⁻ˣ.

(2.121)

The characteristic equations are

   dt/1 = dx/1 = dy/(−x),   du = 0.

(2.122)

Solving the first and second, the second and third, and the last of (2.122) gives

   x − t = c₁,   x²/2 + y = c₂,   u = c₃.

Thus, we have so far the solution as

   u = f(x − t, x²/2 + y).

(2.123)


Using the initial condition gives

   f(x, x²/2 + y) = (x² + 2y) e⁻ˣ,

from which we identify f(a, b) = 2b e⁻ᵃ, giving the final solution

   u = (x² + 2y) e^{−(x−t)}.

2.6

Fully Nonlinear First Order Equations

In this section we introduce two methods to obtain exact solutions to first order fully nonlinear PDEs—the method of characteristics and Charpit’s method.

2.6.1

Method of Characteristics

In order to develop the method of characteristics for a fully nonlinear first order equation

   F(x, y, u, u_x, u_y) = 0,

(2.124)

we return to the case of quasilinear equations

   a(x, y, u) u_x + b(x, y, u) u_y = c(x, y, u)

(2.125)

and define F as

   F(x, y, u, u_x, u_y) = a(x, y, u) u_x + b(x, y, u) u_y − c(x, y, u) = 0.

(2.126)

Recall that the characteristic equations for Eq. (2.125) are

   dx/ds = a(x, y, u),   dy/ds = b(x, y, u),   du/ds = c(x, y, u).

(2.127)

If we treat x, y, u, p and q as independent variables (where p = u_x and q = u_y), we see that (2.127) can also be obtained from

   dx/ds = F_p,   dy/ds = F_q,   du/ds = p F_p + q F_q,

(2.128)

where, as usual, subscripts refer to partial differentiation. The system (2.128) is three equations in five unknowns. To complete the system, we will need two more equations, one for dp/ds and one for dq/ds. Differentiating (2.124) with respect to x and y gives


   F_x + p F_u + p_x F_p + q_x F_q = 0,  (2.129a)
   F_y + q F_u + p_y F_p + q_y F_q = 0.  (2.129b)

Using the fact that

   q_x = p_y,   p_y = q_x,

(2.130)

gives

   F_x + p F_u + p_x F_p + p_y F_q = 0,  (2.131a)
   F_y + q F_u + q_x F_p + q_y F_q = 0.  (2.131b)

If we consider dp/ds, then from the chain rule (and (2.131a))

   dp/ds = (∂p/∂x)(dx/ds) + (∂p/∂y)(dy/ds)
         = p_x F_p + p_y F_q  (2.132)
         = −F_x − p F_u.

Similarly, if we consider dq/ds, then from the chain rule (and (2.131b))

   dq/ds = (∂q/∂x)(dx/ds) + (∂q/∂y)(dy/ds)
         = q_x F_p + q_y F_q  (2.133)
         = −F_y − q F_u.

Thus, we have the following characteristic equations

   dx/ds = F_p,   dy/ds = F_q,   du/ds = p F_p + q F_q,
   dp/ds = −F_x − p F_u,   dq/ds = −F_y − q F_u.

(2.134)
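The system (2.134) is mechanical enough to generate automatically. The sketch below (sympy, our own illustration; the helper name `characteristic_rhs` is ours, not the book's) builds the five right-hand sides from a given F and reproduces the system used in the next example:

```python
import sympy as sp

x, y, u, p, q = sp.symbols('x y u p q')

def characteristic_rhs(F):
    """Right-hand sides of the characteristic system (2.134) for F(x,y,u,p,q) = 0."""
    return {
        'dx/ds': sp.diff(F, p),
        'dy/ds': sp.diff(F, q),
        'du/ds': p*sp.diff(F, p) + q*sp.diff(F, q),
        'dp/ds': -sp.diff(F, x) - p*sp.diff(F, u),
        'dq/ds': -sp.diff(F, y) - q*sp.diff(F, u),
    }

# For F = p - q^2 (Example 2.14) this reproduces the system (2.137)
print(characteristic_rhs(p - q**2))
```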

We will now consider two examples.

Example 2.14 Solve

   u_x = u_y²,   u(0, y) = −y²/2.  (2.135)

Here, we define F as

   F = p − q²,

(2.136)

so that the characteristic equations (2.134) become

   dx/ds = 1,   dy/ds = −2q,   du/ds = p − 2q²,   dp/ds = 0,   dq/ds = 0.

(2.137)


Trying to solve (2.137), eliminate the variables r and s, and then impose the initial condition is quite a task. As we did previously in this chapter, we will identify a boundary and boundary conditions in the (r, s) plane, solve (2.137) subject to these new conditions, and then eliminate the variables (r, s). In the (x, y) plane, the line x = 0 is the boundary where u is defined. To this, we associate a boundary in the (r, s) plane. Given the flexibility, we can choose s = 0 and connect the two boundaries via r = y. Therefore, we have

   x = 0,   y = r,   u = −r²/2   when s = 0.

(2.138)

To determine p and q on s = 0, it is necessary to consider the initial condition u(0, y) = −y²/2. Differentiating with respect to y gives u_y(0, y) = −y, and from the original PDE (2.135), u_x(0, y) = u_y²(0, y). As we know u_y on the boundary, this then gives

   p = r²,   q = −r   when s = 0.

(2.139)

We now solve (2.137). From the last two equations of (2.137) we obtain

   p = a(r),   q = b(r),

(2.140)

where a and b are arbitrary functions. From the initial condition (2.139), we find that

   p = r²,   q = −r,

(2.141)

for all s. Further, using these we have from (2.137)

   dx/ds = 1,   dy/ds = 2r,   du/ds = −r².

(2.142)

These integrate to give

   x = s + c(r),   y = 2rs + d(r),   u = −r²s + e(r).

(2.143)

Using the initial conditions (2.138) gives

   x = s,   y = 2rs + r,   u = −r²s − r²/2.

(2.144)

On elimination of r and s in (2.144), we obtain

   u = −y² / (2(2x + 1)).

(2.145)
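A direct substitution confirms Example 2.14; the following sympy sketch (our own check, not part of the text) verifies both the nonlinear PDE and the initial condition:

```python
import sympy as sp

x, y = sp.symbols('x y')

# Solution (2.145) of u_x = u_y^2 with u(0, y) = -y^2/2
u = -y**2 / (2*(2*x + 1))

print(sp.simplify(sp.diff(u, x) - sp.diff(u, y)**2))  # 0
print(u.subs(x, 0))                                   # -y**2/2
```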

Example 2.15 Solve

   u_x u_y = u,   u(x, 1 − x) = 1.

(2.146)


Here, we define F as

   F = pq − u,

(2.147)

so that the characteristic equations (2.134) become

   dx/ds = q,   dy/ds = p,   du/ds = 2pq,   dp/ds = p,   dq/ds = q.

(2.148)

As in the previous example, we will choose a new boundary in the (r, s) plane. For convenience, we will choose s = 0, along which we identify

   x = r,   y = 1 − r,   u = 1.

(2.149)

As we will need two more boundary conditions, one for p and one for q, we will use both the original PDE and the boundary condition given in (2.146). Differentiating the boundary condition with respect to x gives

   u_x(x, 1 − x) − u_y(x, 1 − x) = 0,  (2.150)

while on the boundary, the original PDE gives

   u_x(x, 1 − x) u_y(x, 1 − x) = 1.

(2.151)

If we denote u_x = p and u_y = q, then from (2.150) and (2.151), we have the following conditions on the boundary

   p − q = 0,  (2.152a)
   pq = 1.  (2.152b)

From these we find that p = ±1, q = ±1, giving two cases. Each will be considered separately.

Case 1: p = q = 1. We first solve the last two equations in (2.148), as the first three equations need p and q. Both are easily solved, giving

   p = a(r) eˢ,   q = b(r) eˢ.

(2.153)

Imposing the boundary conditions shows that a(r) = 1 and b(r) = 1, giving

   p = eˢ,   q = eˢ.

(2.154)

From the first two equations in (2.148), we now have

   dx/ds = q = eˢ,   dy/ds = p = eˢ.

(2.155)


Again, these are easily solved, giving

   x = eˢ + c(r),   y = eˢ + d(r).

(2.156)

Imposing the boundary conditions in (2.149) gives c(r) = r − 1 and d(r) = −r, leading to

   x = eˢ + r − 1,   y = eˢ − r.

(2.157)

The final equation to be solved is du/ds = 2pq. At this point we have two options: first, bring in the solutions from (2.154); or second, use the original PDE pq = u. We often find using the PDE simplifies the calculations, so we will do this and solve du/ds = 2u. Again, this easily integrates, giving u = e(r) e²ˢ, and the boundary condition in (2.149) gives e(r) = 1, leading to

   x = eˢ + r − 1,   y = eˢ − r,   u = e²ˢ.  (2.158)

Eliminating the parameters r and s in (2.158) gives the exact solution

   u = ((x + y + 1)/2)².

(2.159)

Case 2: p = q = −1. As the procedure is identical to that presented in Case 1, we simply give the results. The solutions to the characteristic equations subject to the boundary conditions are

   x = r + 1 − eˢ,   y = 2 − r − eˢ,   u = e²ˢ,   p = −eˢ,   q = −eˢ.

(2.160)

Eliminating the parameters r and s in (2.160) gives the exact solution

   u = ((3 − x − y)/2)².  (2.161)

It is interesting to note that the same PDE subject to the same initial conditions gives rise to two independent solutions. This sometimes happens with nonlinear PDEs: we lose uniqueness of solutions!
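Both solutions of Example 2.15 can be checked at once; this sympy sketch (our own verification, not part of the text) confirms that each satisfies u_x u_y = u and u(x, 1 − x) = 1:

```python
import sympy as sp

x, y = sp.symbols('x y')

# The two solutions (2.159) and (2.161) of u_x u_y = u with u(x, 1-x) = 1
for u in [((x + y + 1)/2)**2, ((3 - x - y)/2)**2]:
    pde = sp.simplify(sp.diff(u, x)*sp.diff(u, y) - u)
    bc = sp.simplify(u.subs(y, 1 - x) - 1)
    print(pde, bc)  # 0 0
```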

2.6.2

Charpit’s Method

An alternate method for deriving exact solutions of first order nonlinear partial differential equations is known as Charpit's method. Consider the nonlinear PDE

   u_x = u_y²

(2.162)


and the linear PDE

   u_x − 2x u_y = y.

(2.163)

The solution of (2.163) is found to be

   u = xy + (2/3)x³ + f(x² + y).

(2.164)

Substitution of the solution (2.164) into the nonlinear PDE (2.162) and simplifying gives

   f′²(λ) = λ,

(2.165)

where λ = x² + y. We solve (2.165), giving f(λ) = c ± (2/3)λ^{3/2}, which leads to the exact solution

   u = c + xy + (2/3)x³ ± (2/3)(x² + y)^{3/2}.

Consider the first order PDE

   y u_x − 2u u_y = 0,  (2.166)

whose solution is found to be

   4xu + y² − f(u) = 0.

(2.167)

Substitution of the solution (2.167) into the nonlinear PDE (2.162) gives

   u f′(u) − f(u) = 0,

(2.168)

whose general solution is f = cu, where c is an arbitrary constant, giving an exact solution to the PDE (2.162) as 4xu + y 2 = cu, or

y2 . 4x − c A natural question is: How did we know how to pick a second PDE whose solution would lead to a solution of the original nonlinear PDE? Before we answer this question it is interesting to consider the following pairs of equations u=−

(i) u_x = u_y²,  u_x − 2x u_y = y,  (2.169a)
(ii) u_x = u_y²,  y u_x − 2u u_y = 0.  (2.169b)

In the first pair, we augmented the original nonlinear PDE with one that is linear (and easily solvable). We substituted the solution of the linear equation into the nonlinear PDE and obtained an ODE, which we solved. This gave rise to an exact solution to the original

2.6

Fully Nonlinear First Order Equations

41

equation. We also did this for the second pair. Thus, we were able to show that each pair of PDEs shared a common solution. If two PDEs share a common solution, they are said to be compatible. However, it should be noted that not all first order PDEs will be compatible. Consider, for example, u_y = 2y. Integrating gives

u = y² + f(x),  (2.170)

and substituting (2.170) into (2.162) gives

f′(x) = 4y²,  (2.171)

and clearly no function f(x) will work. So we ask: Is it possible to determine whether two PDEs are compatible before trying to find their common solution? We consider the first pair in (2.169a) and construct higher order PDEs by differentiating with respect to x and y. This leads to the following

u_xx = 2u_y u_xy,  (2.172a)
u_xy = 2u_y u_yy,  (2.172b)
u_xx − 2x u_xy − 2u_y = 0,  (2.172c)
u_xy − 2x u_yy = 1.  (2.172d)

Solving the first three of (2.172) gives

u_xx = 2u_y²/(u_y − x),  u_xy = u_y/(u_y − x),  u_yy = 1/(2(u_y − x)),

and substitution in the last of (2.172) shows it is identically satisfied. Thus, we have a way to check whether two equations are compatible. This method is known as Charpit's method. Consider the compatibility of the following first order PDEs

F(x, y, u, p, q) = 0,  G(x, y, u, p, q) = 0,  (2.173)

where p = u_x and q = u_y. Calculating the x and y derivatives of (2.173) gives

F_x + p F_u + u_xx F_p + u_xy F_q = 0,
F_y + q F_u + u_xy F_p + u_yy F_q = 0,
G_x + p G_u + u_xx G_p + u_xy G_q = 0,
G_y + q G_u + u_xy G_p + u_yy G_q = 0.  (2.174)


Solving the first three of (2.174) for u_xx, u_xy and u_yy gives

u_xx = (−F_x G_q − p F_u G_q + F_q G_x + p F_q G_u)/(F_p G_q − F_q G_p),

u_xy = (−F_p G_x − p F_p G_u + F_x G_p + p F_u G_p)/(F_p G_q − F_q G_p),

u_yy = (F_p² G_x + p F_p² G_u − F_y F_p G_q − q F_u F_p G_q + q F_u F_q G_p − F_x F_p G_p − p F_u F_p G_p + F_y F_q G_p)/((F_p G_q − F_q G_p) F_q)

(assuming that F_q ≠ 0 and F_p G_q − F_q G_p ≠ 0). Substitution into the last of (2.174) gives

F_p G_x + F_q G_y + (p F_p + q F_q) G_u − (F_x + p F_u) G_p − (F_y + q F_u) G_q = 0,

conveniently written as

| D_x F  F_p |   | D_y F  F_q |
| D_x G  G_p | + | D_y G  G_q | = 0,  (2.175)

where D_x F = F_x + p F_u, D_y F = F_y + q F_u, and | · | is the usual determinant.
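The determinant condition is mechanical enough to automate. The helper below is my own sketch (sympy assumed; the name charpit_bracket is not from the text). It encodes (2.175) and confirms that u_x = u_y² is compatible with u_x − 2x u_y = y, but not with u_y = 2y:

```python
import sympy as sp

x, y, u, p, q = sp.symbols('x y u p q')

def charpit_bracket(F, G):
    """Left-hand side of the compatibility condition (2.175):
    |D_x F  F_p|   |D_y F  F_q|
    |D_x G  G_p| + |D_y G  G_q|,  with D_x = d/dx + p d/du, D_y = d/dy + q d/du.
    Vanishing (on solutions of F = G = 0) indicates compatibility."""
    DxF = sp.diff(F, x) + p*sp.diff(F, u)
    DyF = sp.diff(F, y) + q*sp.diff(F, u)
    DxG = sp.diff(G, x) + p*sp.diff(G, u)
    DyG = sp.diff(G, y) + q*sp.diff(G, u)
    det1 = sp.Matrix([[DxF, sp.diff(F, p)], [DxG, sp.diff(G, p)]]).det()
    det2 = sp.Matrix([[DyF, sp.diff(F, q)], [DyG, sp.diff(G, q)]]).det()
    return sp.simplify(det1 + det2)

# Pair (i): u_x = u_y^2 together with u_x - 2x u_y = y
F = p - q**2
G = p - 2*x*q - y
assert charpit_bracket(F, G) == 0          # identically zero: compatible

# The incompatible pair from the text: u_x = u_y^2 with u_y = 2y
assert charpit_bracket(F, q - 2*y) != 0    # bracket is -4q, not reducible to zero
```

In general the bracket only needs to vanish modulo F = G = 0; for pair (i) it happens to vanish identically.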

Example 2.16 Consider

u_x = u_y².  (2.176)

This is the example we considered already; now we will determine all classes of equation that are compatible with this one. Denoting G = u_x − u_y² = p − q², where p = u_x and q = u_y, then G_x = 0, G_y = 0, G_u = 0, G_p = 1, G_q = −2q, and the Charpit equations are

| D_x F  F_p |   | D_y F  F_q |
|   0     1  | + |   0    −2q | = 0,

or, after expansion,

F_x − 2q F_y + (p − 2q²) F_u = 0,

noting that the third term can be replaced by −p F_u due to the original equation. Solving this linear PDE by the method of characteristics gives the solution as

F = F(2x u_y + y, x u_x + u, u_x, u_y).  (2.177)

If we set F in (2.177) to F(a, b, c, d) = c − a, we obtain (2.163), whereas if we choose F(a, b, c, d) = ac − 2bd, we obtain (2.166). Other choices would lead to new compatible equations which could give rise to new exact solutions.

Example 2.17 Consider

u_x² + u_y² = u².  (2.178)

Denoting p = u_x and q = u_y, then G = u_x² + u_y² − u² = p² + q² − u². Thus G_x = 0, G_y = 0, G_u = −2u, G_p = 2p, G_q = 2q, and the Charpit equations are

| D_x F   F_p |   | D_y F   F_q |
| −2pu    2p  | + | −2qu    2q  | = 0,

or, after expansion,

p F_x + q F_y + (p² + q²) F_u + pu F_p + qu F_q = 0,  (2.179)

noting that the third term can be replaced by u² F_u due to the original equation. Solving (2.179), a linear PDE, by the method of characteristics gives the solution as

F = F(x − (p/u) ln u,  y − (q/u) ln u,  p/u,  q/u).

Consider the following particular example

(x − (p/u) ln u) + (y − (q/u) ln u) = 0,

or

u_x + u_y = (x + y) u / ln u.

If we let u = e^{√v}, then this becomes

v_x + v_y = 2(x + y),


which, by the method of characteristics, has the solution v = 2xy + f(x − y). This, in turn, gives the solution for u as

u = e^{√(2xy + f(x−y))}.  (2.180)

Substitution into the original equation (2.178) gives the following ODE

(f′)² − 2λ f′ − 2f + 2λ² = 0,

where f = f(λ) and λ = x − y. If we let f = g + ½λ², then we obtain

(g′)² − 2g = 0,  (2.181)

whose solution is given by

g = (λ + c)²/2,  g = 0,

where c is an arbitrary constant of integration. This, in turn, gives

f = λ² + cλ + ½c²,  (2.182)
f = ½λ²,  (2.183)

and substitution into (2.180) gives

u = e^{√(x² + y² + c(x−y) + ½c²)},  u = e^{√(2xy + (x−y)²/2)}

as exact solutions to the original PDE. It is interesting to note that when we substitute the solution of the compatible equation into the original, it reduces to an ODE. A natural question is: Does this always happen? This was proven to be true in two independent variables by the author [1]. So now we ask: of the infinite possibilities here, can we choose the right one(s) to give rise to the solution that also satisfies the given boundary condition? The following example illustrates.

Example 2.18 Solve

3u_x² − u_y + x = 0  (2.184)

subject to the boundary condition

u(x, 0) = 0.  (2.185)

Denoting G = 3p² − q + x, where p = u_x and q = u_y, then


G_x = 1, G_y = 0, G_u = 0, G_p = 6p, G_q = −1, and the Charpit equations are

| D_x F  F_p |   | D_y F  F_q |
|   1     6p | + |   0    −1  | = 0,

or, after expansion,

6p F_x − F_y + (6p² − q) F_u − F_p = 0.

Solving this linear PDE by the method of characteristics gives the solution as

F = F(x + 3p², y − p, u − pq + 2p³, q).  (2.186)

So how do we incorporate the boundary condition? Since u(x, 0) = 0, differentiating with respect to x gives u_x(x, 0) = 0, or p = 0 on the boundary y = 0. From the original PDE we then have q − x = 0 on the boundary y = 0. Substituting these into (2.186) gives

F = F(x, 0, 0, x).  (2.187)

Any combination of these arguments giving zero would give rise to a PDE whose solution would satisfy the given BCs. For example, we could choose the difference of the first and last arguments. This would give

x + 3p² − q = 0;  (2.188)

however, this is just the original PDE. Instead we choose the second argument set to zero, so

p − y = 0.  (2.189)

We solve, giving

u = xy + f(y),  (2.190)

where f is a function of integration. Substituting into the original PDE gives

f′(y) = 3y²,  (2.191)

which we integrate, giving

f(y) = y³ + c,  (2.192)

and thus

u = xy + y³ + c.  (2.193)

Imposing the BC gives c = 0, and thus we have the exact solution

u = xy + y³.  (2.194)
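Solution (2.194) is easy to confirm symbolically; the following short sympy check is my addition, not part of the text:

```python
import sympy as sp

x, y = sp.symbols('x y')

# Exact solution (2.194) of 3 u_x^2 - u_y + x = 0 with u(x, 0) = 0
u = x*y + y**3

# PDE residual: u_x = y, u_y = x + 3y^2, so 3y^2 - (x + 3y^2) + x = 0
assert sp.simplify(3*sp.diff(u, x)**2 - sp.diff(u, y) + x) == 0
# Boundary condition u(x, 0) = 0
assert u.subs(y, 0) == 0
```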


Example 2.19 Solve

u_x u_y − x u_x − y u_y = 0  (2.195)

subject to the boundary condition

u(x, x) = 2x².  (2.196)

Denoting G = pq − xp − yq, where p = u_x and q = u_y, then G_x = −p, G_y = −q, G_u = 0, G_p = q − x, G_q = p − y, and the Charpit equations are

| D_x F  F_p  |   | D_y F  F_q  |
|  −p    q−x  | + |  −q    p−y  | = 0,

or, after expansion,

(q − x) F_x + (p − y) F_y + (2pq − xp − yq) F_u + p F_p + q F_q = 0,

noting that the third term can be replaced by pq F_u due to the original equation. Solving this linear PDE by the method of characteristics gives the solution as

F = F(q² − 2xq, p² − 2yp, p/q, pq − 2u).  (2.197)

So how do we incorporate the boundary condition? Since u(x, x) = 2x², differentiating with respect to x gives u_x(x, x) + u_y(x, x) = 4x, or p + q = 4x (on the boundary). From the original PDE we have pq − xp − xq = 0 on the boundary y = x, and solving for p and q gives p = q = 2x. Substituting these into (2.197) gives

F = F(0, 0, 1, 0).  (2.198)

Again, choosing any argument that is zero, or any combination of the arguments that is zero, would give rise to a PDE whose solution would satisfy the given BC. For example, we could choose any of these:

(a) u_y² − 2x u_y = 0,
(b) u_x² − 2y u_x = 0,
(c) u_x/u_y − 1 = 0,
(d) u_x u_y − 2u = 0,  (2.199)

all of which lead to solutions that satisfy the BC. For example, (a) and (b) lead to u = 2xy, while (c) leads to u = ½(x + y)².


We make one final comment here. As any combination of (2.199) will work, we consider u_y·(b) − u_x·(d) = 0, leading to

u_x (y u_y − u) = 0,  (2.200)

or

y u_y = u,  (2.201)

as u_x = 0 gives rise to the trivial solution. Equation (2.201) is easily solved, giving u = y f(x), and substitution into the original PDE gives

f′ f − x f′ − f = 0.  (2.202)

Now (2.202) is a nonlinear ODE, but sometimes the solution form and BC give us a hint of what solution to look for. Since u(x, x) = 2x², using u = y f(x) gives x f(x) = 2x², or f(x) = 2x, which in fact does satisfy the ODE (2.202)! This ultimately leads to u = 2xy, which was already obtained.
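Both solutions found in this example can again be confirmed with a short sympy check (my addition, not in the text):

```python
import sympy as sp

x, y = sp.symbols('x y')

# Two exact solutions of u_x u_y - x u_x - y u_y = 0 with u(x, x) = 2 x^2
candidates = [2*x*y, sp.Rational(1, 2)*(x + y)**2]

for u in candidates:
    ux, uy = sp.diff(u, x), sp.diff(u, y)
    assert sp.simplify(ux*uy - x*ux - y*uy) == 0    # PDE residual vanishes
    assert sp.simplify(u.subs(y, x) - 2*x**2) == 0  # boundary condition holds
```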

Exercises

1. Solve the following first order PDEs using a change of coordinates (x, y) → (r, s):

(i) u_x − 2u_y = −u,
(ii) 2x u_x + 3y u_y = x,  u(x, x) = 1,
(iii) 2u_t − u_x = 4,
(iv) u_x + u_y = 6y,  u(x, 1) = 2 + x,
(v) x u_x − 2u u_y = x,
(vi) u_t − x u_x = 2t,  u(x, 0) = sin x.

2. Solve the following first order PDEs using the method of characteristics:

(i) xy u_x + (x² + y²) u_y = yu,
(ii) u_x + (y + 1) u_y = u + x,
(iii) x² u_x − y² u_y = u²,
(iv) x u_x + (x + y) u_y = x,
(v) y u_x + x u_y = xy,  u(x, 0) = x².

3. Solve the following first order PDEs subject to the initial condition u(x, 0) = sech(x). Explain the behavior of each solution for t > 0:

(i) u_t + (x + 1) u_x = 0,
(ii) u_t + (u − 2) u_x = 0,
(iii) u_t + (u − 3x) u_x = 0,
(iv) u_t + (1 + 2x + 3u) u_x = 0.

4. Use the method of characteristics to solve the following:

(i) u_x² − 3u_y² − u = 0,  u(x, 0) = x²,
(ii) u_t + u_x² + u = 0,  u(x, 0) = √x,
(iii) u_x² + u_y² = 1,  u(x, 1) = x² + 1,
(iv) u_x² − u u_y = 0,  u(x, y) = 1 along y = 1 − x.

5. Use Charpit's method to find compatible first order equations to the one given. Use any one of your equations to find an exact solution.

(i) u_x² + u_y² = x²,
(ii) u_t + u u_x² = 0,
(iii) u_t + u u_x² = 0.

6. Solve the following PDEs by Charpit's method:

(i) u_x² − 3u_y² − u = 0,  u(x, 0) = x²,
(ii) u_x² + u_y + u = 0,  u(x, 0) = x,
(iii) u_x² − u u_y = 0,  u(x, y) = 1 along y = 1 − x.

7. If the characteristic equations are

dx/a(x, y, u) = dy/b(x, y, u) = du/c(x, y, u),

show, for some constants α, β and γ, that

dx/a(x, y, u) = dy/b(x, y, u) = du/c(x, y, u) = d(αx + βy + γu)/(αa + βb + γc).

8. Generalize Charpit's method for nonlinear first order PDEs of the form

F(x₁, x₂, …, xₙ, u_{x₁}, u_{x₂}, …, u_{xₙ}) = 0.

Reference

1. D.J. Arrigo, Nonclassical contact symmetries and Charpit's method of compatibility. J. Nonlinear Math. Phys. 12(3), 321–329 (2005)

3

Second Order Linear PDEs

3.1

Introduction

The general class of second order linear PDEs is of the form

a(x, y) u_xx + b(x, y) u_xy + c(x, y) u_yy + d(x, y) u_x + e(x, y) u_y + f(x, y) u = g(x, y).  (3.1)

The three PDEs that lie at the cornerstone of applied mathematics are the heat equation, the wave equation, and Laplace's equation, i.e.,

(i) u_t = u_xx,  the heat equation,
(ii) u_tt = u_xx,  the wave equation,
(iii) u_xx + u_yy = 0,  Laplace's equation,

or, using the same independent variables x and y,

(i) u_xx − u_y = 0,  the heat equation,  (3.2a)
(ii) u_xx − u_yy = 0,  the wave equation,  (3.2b)
(iii) u_xx + u_yy = 0,  Laplace's equation.  (3.2c)

Analogous to characterizing quadratic equations ax² + bxy + cy² + dx + ey + f = 0 as either hyperbolic, parabolic or elliptic, determined by

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 D. Arrigo, An Introduction to Partial Differential Equations, Synthesis Lectures on Mathematics & Statistics, https://doi.org/10.1007/978-3-031-22087-6_3

49


b² − 4ac > 0,  hyperbolic,
b² − 4ac = 0,  parabolic,
b² − 4ac < 0,  elliptic,

we do the same for PDEs. So, for the heat equation (3.2a), a = 1, b = 0, c = 0, so b² − 4ac = 0 and the heat equation is parabolic. Similarly, the wave equation (3.2b) is hyperbolic and Laplace's equation (3.2c) is elliptic. This leads to a natural question. Is it possible to transform one PDE to another where the new PDE is simpler? Namely, under a change of variable r = r(x, y), s = s(x, y), can we transform to one of the following standard forms:

u_rr − u_ss + l.o.t.s. = 0,  hyperbolic,  (3.4a)
u_ss + l.o.t.s. = 0,  parabolic,  (3.4b)
u_rr + u_ss + l.o.t.s. = 0,  elliptic,  (3.4c)

where the term "l.o.t.s." stands for lower order terms. For example, consider the PDE

2u_xx − 2u_xy + 5u_yy = 0.  (3.5)

This equation is elliptic since b² − 4ac = 4 − 4(2)(5) = −36 < 0. If we introduce new coordinates

r = 2x + y,  s = x − y,  (3.6)

then by a change of variable using the chain rule

u_x = u_r r_x + u_s s_x,
u_y = u_r r_y + u_s s_y,
u_xx = u_rr r_x² + 2u_rs r_x s_x + u_ss s_x² + u_r r_xx + u_s s_xx,
u_xy = u_rr r_x r_y + u_rs (r_x s_y + r_y s_x) + u_ss s_x s_y + u_r r_xy + u_s s_xy,
u_yy = u_rr r_y² + 2u_rs r_y s_y + u_ss s_y² + u_r r_yy + u_s s_yy,  (3.7)

gives

u_xx = 4u_rr + 4u_rs + u_ss,  u_xy = 2u_rr − u_rs − u_ss,  u_yy = u_rr − 2u_rs + u_ss.

Under (3.6), Eq. (3.5) becomes

2(4u_rr + 4u_rs + u_ss) − 2(2u_rr − u_rs − u_ss) + 5(u_rr − 2u_rs + u_ss) = 0,

which simplifies to u_rr + u_ss = 0, which is Laplace's equation (also elliptic).

Before we consider transformations for PDEs in general, it is important to determine whether the equation type could change under transformation. Consider the general class of PDEs

a u_xx + b u_xy + c u_yy + lots = 0,  (3.8)

where a, b, and c are functions of x and y. Note that we have grouped the lower order terms into the term "lots" as they will not affect the type. Under a change of variable (x, y) → (r, s) with the change of variable formulas (3.7), we obtain

a (u_rr r_x² + 2u_rs r_x s_x + u_ss s_x² + u_r r_xx + u_s s_xx)
+ b (u_rr r_x r_y + u_rs (r_x s_y + r_y s_x) + u_ss s_x s_y + u_r r_xy + u_s s_xy)
+ c (u_rr r_y² + 2u_rs r_y s_y + u_ss s_y² + u_r r_yy + u_s s_yy) = 0.  (3.9)

Rearranging (3.9), and neglecting lower order terms, gives

(a r_x² + b r_x r_y + c r_y²) u_rr + (2a r_x s_x + b(r_x s_y + r_y s_x) + 2c r_y s_y) u_rs + (a s_x² + b s_x s_y + c s_y²) u_ss = 0.  (3.10)

Setting

A = a r_x² + b r_x r_y + c r_y²,
B = 2a r_x s_x + b(r_x s_y + r_y s_x) + 2c r_y s_y,  (3.11)
C = a s_x² + b s_x s_y + c s_y²,

gives from (3.10)

A u_rr + B u_rs + C u_ss = 0,

whose type is given by

B² − 4AC = (b² − 4ac)(r_x s_y − r_y s_x)²,

from which we deduce that

b² − 4ac > 0  ⇒  B² − 4AC > 0,
b² − 4ac = 0  ⇒  B² − 4AC = 0,
b² − 4ac < 0  ⇒  B² − 4AC < 0,

showing that the equation type is unchanged under transformation. We note that r_x s_y − r_y s_x ≠ 0 since the Jacobian of the transformation is nonzero.
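The identity B² − 4AC = (b² − 4ac)(r_x s_y − r_y s_x)² is purely algebraic in the first derivatives of r and s, so it can be confirmed by a one-shot symbolic expansion (my addition; sympy assumed, with the derivatives treated as free symbols):

```python
import sympy as sp

# Treat the coefficients and first derivatives as free symbols;
# the identity is pointwise, so this loses no generality.
a, b, c, rx, ry, sx, sy = sp.symbols('a b c r_x r_y s_x s_y')

A = a*rx**2 + b*rx*ry + c*ry**2
B = 2*a*rx*sx + b*(rx*sy + ry*sx) + 2*c*ry*sy
C = a*sx**2 + b*sx*sy + c*sy**2

jacobian = rx*sy - ry*sx
# B^2 - 4AC - (b^2 - 4ac) J^2 must expand to zero identically
assert sp.expand(B**2 - 4*A*C - (b**2 - 4*a*c)*jacobian**2) == 0
```

This is the classical fact that the discriminant of a binary quadratic form picks up the square of the Jacobian under a change of variables.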


We now consider transformations to standard form. As there are three standard forms, hyperbolic, parabolic and elliptic, we will deal with each type separately.

3.2

Standard Forms

If we introduce the change of coordinates

r = r(x, y),  s = s(x, y),  (3.12)

with the derivatives changing as in (3.7), we can substitute (3.7) into the general linear equation (3.1) and rearrange, obtaining

(a r_x² + b r_x r_y + c r_y²) u_rr + (2a r_x s_x + b(r_x s_y + r_y s_x) + 2c r_y s_y) u_rs + (a s_x² + b s_x s_y + c s_y²) u_ss + lots = 0.  (3.13)

Our goal now is to target a given standard form and solve a set of equations for the new variables r and s.

3.2.1

Parabolic Standard Form

Comparing (3.13) with the parabolic standard form (3.4b) leads to choosing

a r_x² + b r_x r_y + c r_y² = 0,  (3.14a)
2a r_x s_x + b(r_x s_y + r_y s_x) + 2c r_y s_y = 0.  (3.14b)

Since in the parabolic case b² − 4ac = 0, then substituting c = b²/(4a),¹ we find both equations of (3.14) are satisfied if

2a r_x + b r_y = 0,  (3.15)

with the choice of s(x, y) arbitrary. The following examples demonstrate this.

Example 3.1 Consider

u_xx + 6u_xy + 9u_yy = 0.  (3.16)

Here, a = 1, b = 6 and c = 9, showing that b² − 4ac = 0, so the PDE is parabolic. Solving

r_x + 3r_y = 0

¹ If c = 0, then b = 0 and the PDE can be brought into standard form by dividing by a.


gives r = R(3x − y). As we wish to find new coordinates so as to transform the original equation to standard form, we choose

r = 3x − y,  s = y.

We calculate second derivatives

u_xx = 9u_rr,  u_xy = −3u_rr + 3u_rs,  u_yy = u_rr − 2u_rs + u_ss.  (3.17)

Substituting (3.17) into (3.16) gives

u_ss = 0.  (3.18)

As it turns out, we can explicitly solve (3.18), giving u = f(r) s + g(r), where f and g are arbitrary functions of r. In terms of the original variables, we obtain

u = y f(3x − y) + g(3x − y),

the general solution of (3.16).

Example 3.2 Consider

x² u_xx − 4xy u_xy + 4y² u_yy + x u_x = 0.  (3.19)

Here, a = x², b = −4xy and c = 4y², showing that b² − 4ac = 0, so the PDE is parabolic. Solving

x² r_x − 2xy r_y = 0,  or  x r_x − 2y r_y = 0,

gives r = R(x²y). As we wish to find new variables r and s, we choose

r = x² y,  s = y.


Calculating the first derivative gives

u_x = 2xy u_r,  (3.20)

and second derivatives

u_xx = 4x²y² u_rr + 2y u_r,
u_xy = 2x³y u_rr + 2xy u_rs + 2x u_r,  (3.21)
u_yy = x⁴ u_rr + 2x² u_rs + u_ss.

Substituting (3.20) and (3.21) into (3.19) gives

4y² u_ss − 4x²y u_r = 0,

or, in terms of the new variables r and s,

u_ss − (r/s²) u_r = 0.  (3.22)

An interesting question is whether different choices of the arbitrary function R and the variable s would lead to a different standard form. For example, if we chose

r = 2 ln x + ln y,  s = ln y,

then (3.19) would become

u_ss − u_r − u_s = 0,  (3.23)

a constant coefficient parabolic equation, whereas, choosing

r = 2 ln x + ln y,  s = 2 ln x,

then (3.19) would become

u_ss − u_r = 0,  (3.24)

the heat equation.
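One way to test this reduction without redoing the chain rule is to pull heat-equation solutions back to (x, y). The sympy sketch below (my addition, not from the text) does this for three particular solutions of u_ss = u_r, written in terms of r = 2 ln x + ln y and s = 2 ln x:

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)

def pde(u):
    """Left-hand side of (3.19): x^2 u_xx - 4xy u_xy + 4y^2 u_yy + x u_x."""
    return (x**2*sp.diff(u, x, 2) - 4*x*y*sp.diff(u, x, y)
            + 4*y**2*sp.diff(u, y, 2) + x*sp.diff(u, x))

# With r = 2 ln x + ln y, s = 2 ln x, (3.19) becomes u_ss = u_r, so
# heat-equation solutions pulled back to (x, y) must solve (3.19):
#   e^{r+s} = x^4 y,   e^{4r+2s} = x^12 y^4,   r + s^2/2 = 2 ln x + ln y + 2 ln^2 x
solutions = [x**4*y, x**12*y**4, 2*sp.log(x) + sp.log(y) + 2*sp.log(x)**2]
for u in solutions:
    assert sp.simplify(pde(u)) == 0
```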

3.2.2

Hyperbolic Standard Form

In order to obtain the standard form for the hyperbolic type,

u_rr − u_ss + lots = 0,  (3.25)

from (3.13) we find it is necessary to choose

a r_x² + b r_x r_y + c r_y² = −(a s_x² + b s_x s_y + c s_y²),
2a r_x s_x + b(r_x s_y + r_y s_x) + 2c r_y s_y = 0.  (3.26)

The problem is that this system, (3.26), is still a very hard problem to solve (both PDEs are nonlinear and coupled!). Therefore, we introduce a modified hyperbolic form that is much easier to work with before returning to this case.

3.2.3

Modified Hyperbolic Form

The modified hyperbolic standard form is defined as

u_rs + lots = 0,  (3.27)

noting that a = 0, b = 1, and c = 0, so that b² − 4ac > 0. In order to target the modified hyperbolic form, from (3.13) it is now necessary to choose

a r_x² + b r_x r_y + c r_y² = 0,  a s_x² + b s_x s_y + c s_y² = 0.  (3.28)

If we rewrite (3.28) as

a (r_x/r_y)² + b (r_x/r_y) + c = 0,  a (s_x/s_y)² + b (s_x/s_y) + c = 0,  (3.29)

(assuming that r_y s_y ≠ 0), we can solve equations (3.29) for r_x/r_y and s_x/s_y. This leads to two first order linear PDEs for r and s. The solutions of these then give rise to the correct standard variables. The following examples demonstrate this.

Example 3.3 Consider

u_xx − 5u_xy + 6u_yy = 0.  (3.30)

Here, a = 1, b = −5, and c = 6, showing that b² − 4ac = 1 > 0, so the PDE is hyperbolic. Thus, (3.29) becomes

r_x² − 5r_x r_y + 6r_y² = 0,  s_x² − 5s_x s_y + 6s_y² = 0,

and factoring gives

(r_x − 2r_y)(r_x − 3r_y) = 0,  (s_x − 2s_y)(s_x − 3s_y) = 0,


from which we choose

r_x − 2r_y = 0,  s_x − 3s_y = 0,

giving rise to solutions

r = R(2x + y),  s = S(3x + y),

where R and S are arbitrary functions of their arguments. As we wish to find new coordinates to transform the original equation to standard form, we choose

r = 2x + y,  s = 3x + y.

Calculating second derivatives gives

u_xx = 4u_rr + 12u_rs + 9u_ss,
u_xy = 2u_rr + 5u_rs + 3u_ss,  (3.31)
u_yy = u_rr + 2u_rs + u_ss.

Substituting (3.31) into (3.30) gives

u_rs = 0.  (3.32)

Solving (3.32) gives u = f(r) + g(s), where f and g are arbitrary functions. In terms of the original variables, we obtain the solution to (3.30) as

u = f(2x + y) + g(3x + y).

Example 3.4 Consider

x u_xx − (x + y) u_xy + y u_yy = 0.  (3.33)

Here, a = x, b = −(x + y), and c = y, showing that b² − 4ac = (x − y)² > 0, so the PDE is hyperbolic. We thus need to solve

x r_x² − (x + y) r_x r_y + y r_y² = 0,

or, upon factoring,

(x r_x − y r_y)(r_x − r_y) = 0.

As s satisfies the same equation, we choose the first factor for r and the second for s:

x r_x − y r_y = 0,  s_x − s_y = 0.  (3.34)

Upon solving (3.34), we obtain

r = f(xy),  s = g(x + y).


The simplest choice for the new variables r and s is

r = xy,  s = x + y.

Calculating second derivatives gives

u_xx = y² u_rr + 2y u_rs + u_ss,
u_xy = xy u_rr + (x + y) u_rs + u_ss + u_r,  (3.35)
u_yy = x² u_rr + 2x u_rs + u_ss,

and substituting (3.35) into (3.33) gives

(2xy − x² − y²) u_rs − (x + y) u_r = 0,

or, in terms of the new variables r and s, gives the standard form

u_rs + s/(s² − 4r) u_r = 0.

3.2.4

Regular Hyperbolic Form

We now wish to transform a given hyperbolic PDE to its regular standard form

u_rr − u_ss + lots = 0.  (3.36)

First, let us consider the following example:

x² u_xx − y² u_yy = 0.  (3.37)

If we were to transform to modified standard form, we would solve

x r_x − y r_y = 0,  x s_x + y s_y = 0,

which gives

r = f(xy),  s = g(x/y).

If we choose

r = ln x + ln y,  s = ln x − ln y,

then the original PDE becomes

u_rs − u_s = 0.  (3.38)

However, if we introduce new variables α and β such that

α = (r + s)/2,  β = (r − s)/2,


then

α = ln x,  β = ln y,

and the PDE (3.37) becomes

u_αα − u_ββ − u_α + u_β = 0,

a PDE in regular hyperbolic form. In fact, one can show (see the exercises) that if

α = (r + s)/2,  β = (r − s)/2,

where r and s satisfy (3.29), then α and β satisfy

a α_x² + b α_x α_y + c α_y² = −(a β_x² + b β_x β_y + c β_y²),  (3.39a)
2a α_x β_x + b(α_x β_y + α_y β_x) + 2c α_y β_y = 0,  (3.39b)

which is essentially (3.26). This gives a convenient way to go directly to the variables that lead to the regular hyperbolic form. We note that

α, β = (r ± s)/2,  (3.40)

so we can consider (3.29) again, but instead of factoring, we treat each as a quadratic equation in r_x/r_y and s_x/s_y and solve accordingly. It is important to note the placing of the ±. We demonstrate with the following examples.

Example 3.5 Consider

8u_xx − 6u_xy + u_yy = 0.  (3.41)

The corresponding equations for r and s are

8r_x² − 6r_x r_y + r_y² = 0,  8s_x² − 6s_x s_y + s_y² = 0,  (3.42)

but as they are identical it suffices to consider only one. Dividing the first of (3.42) by r_y² gives

8(r_x/r_y)² − 6(r_x/r_y) + 1 = 0.

Solving by the quadratic formula gives

r_x/r_y = (6 ± 2)/16,

or

8r_x − (3 ± 1)r_y = 0.


The method of characteristics gives

r = R((3 ± 1)x + 8y) = R(3x + 8y ± x),

which leads to the choice

r = 3x + 8y,  s = x.

Under this transformation, the original equation (3.41) becomes

u_rr − u_ss = 0,

the desired standard form.

Example 3.6 Consider

x y³ u_xx − x² y² u_xy − 2x³ y u_yy − y³ u_x + 2x³ u_y = 0.  (3.43)

The corresponding equations for r and s are

x y³ r_x² − x² y² r_x r_y − 2x³ y r_y² = 0,  x y³ s_x² − x² y² s_x s_y − 2x³ y s_y² = 0,

and choosing the first gives

y² (r_x/r_y)² − xy (r_x/r_y) − 2x² = 0.

Solving by the quadratic formula gives

r_x/r_y = (1 ± 3)x/(2y),

or

2y r_x − (1 ± 3) x r_y = 0.

Solving gives

r = R(x² + 2y² ± 3x²).

If we choose R to be simple and split according to the ±, we obtain

r = x² + 2y²,  s = 3x².

Under this transformation, the original equation (3.43) becomes

u_rr − u_ss = 0,  (3.44)

the desired standard form.
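Since (3.44) is the wave equation in (r, s), its general solution u = F(r + s) + G(r − s) should, pulled back to (x, y), solve (3.43) for arbitrary F and G. A sympy check of this (my addition, with F and G left as undefined functions):

```python
import sympy as sp

x, y = sp.symbols('x y')
F, G = sp.Function('F'), sp.Function('G')

# r = x^2 + 2y^2, s = 3x^2, so r + s = 4x^2 + 2y^2 and r - s = 2y^2 - 2x^2
u = F(4*x**2 + 2*y**2) + G(2*y**2 - 2*x**2)

# Left-hand side of (3.43); it must vanish identically for any F, G
pde = (x*y**3*sp.diff(u, x, 2) - x**2*y**2*sp.diff(u, x, y)
       - 2*x**3*y*sp.diff(u, y, 2) - y**3*sp.diff(u, x) + 2*x**3*sp.diff(u, y))
assert sp.simplify(pde) == 0
```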


3.2.5


Elliptic Standard Form

In order to obtain the standard form for the elliptic type, i.e.,

u_rr + u_ss + lots = 0,

from (3.13) it is necessary to choose

a r_x² + b r_x r_y + c r_y² = a s_x² + b s_x s_y + c s_y²,
2a r_x s_x + b(r_x s_y + r_y s_x) + 2c r_y s_y = 0.  (3.45)

This is almost identical to the regular hyperbolic case, but instead of choosing

α, β = (r ± s)/2,  (3.46)

we choose

α, β = (r ± is)/2,  (3.47)

and the procedure is the same. The next few examples illustrate this.

Example 3.7 Consider

u_xx − 4u_xy + 5u_yy = 0.  (3.48)

The corresponding equations for r and s are

r_x² − 4r_x r_y + 5r_y² = 0,  s_x² − 4s_x s_y + 5s_y² = 0,  (3.49)

but as they are identical it suffices to consider only one. Dividing (3.49) by r_y² gives

(r_x/r_y)² − 4(r_x/r_y) + 5 = 0.

Solving by the quadratic formula gives

r_x/r_y = 2 ± i,

or

r_x − (2 ± i) r_y = 0.

The method of characteristics gives

r = R(2x + y ± ix).

We choose R(λ) = λ, and taking the real and imaginary parts leads to the choice

r = 2x + y,  s = x.

Under this transformation, the original equation (3.48) becomes

u_rr + u_ss = 0,

the desired standard form.
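The reduction can be spot-checked by pulling harmonic functions of (r, s) back to (x, y); since the transformation turns (3.48) into Laplace's equation, every harmonic U(r, s) must solve (3.48). A sympy sketch (my addition):

```python
import sympy as sp

x, y = sp.symbols('x y')

# r = 2x + y, s = x turn u_xx - 4 u_xy + 5 u_yy = 0 into u_rr + u_ss = 0,
# so harmonic functions of (r, s) pull back to solutions.
r, s = 2*x + y, x

def pde(u):
    return sp.diff(u, x, 2) - 4*sp.diff(u, x, y) + 5*sp.diff(u, y, 2)

# Try three harmonic choices: Re/Im of (r + i s)^2 and e^r cos s
for U in (r**2 - s**2, r*s, sp.exp(r)*sp.cos(s)):
    assert sp.simplify(pde(U)) == 0
```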

Example 3.8 Consider

2(1 + x²)² u_xx − 2(1 + x²)(1 + y²) u_xy + (1 + y²)² u_yy + 4x(1 + x²) u_x = 0.  (3.50)

The corresponding equations for r and s are

2(1 + x²)² r_x² − 2(1 + x²)(1 + y²) r_x r_y + (1 + y²)² r_y² = 0,  (3.51a)
2(1 + x²)² s_x² − 2(1 + x²)(1 + y²) s_x s_y + (1 + y²)² s_y² = 0,  (3.51b)

but as they are identical it suffices to consider only one. Solving by the quadratic formula gives

r_x/r_y = (1 ± i)(1 + y²)/(2(1 + x²)),

or

2(1 + x²) r_x − (1 ± i)(1 + y²) r_y = 0.

The method of characteristics gives the solution as

r = R(tan⁻¹x + 2 tan⁻¹y ± i tan⁻¹x).

We choose R(λ) = λ, and taking the real and imaginary parts gives r and s as

r = tan⁻¹x + 2 tan⁻¹y,  s = tan⁻¹x.

Under this transformation, the original equation (3.50) becomes

u_rr + u_ss − 2y u_r = 0,

and upon using the original transformation gives

u_rr + u_ss − 2 tan((r − s)/2) u_r = 0,

the desired standard form.

3.3

The Wave Equation

We now revisit the one-dimensional wave equation

u_tt = c² u_xx,  −∞ < x < ∞,  (3.52)

subject to the initial conditions

u(x, 0) = f(x),  −∞ < x < ∞,
∂u/∂t (x, 0) = g(x),  −∞ < x < ∞.  (3.53)

As seen previously, we can target the modified hyperbolic equation by making the change of variables r = x − ct and s = x + ct. Equation (3.52) becomes

u_rs = 0,  (3.54)

which was solved in Example 3.3. There, we obtained the solution

u = F(r) + G(s),  (3.55)

or, in terms of x and t,

u = F(x − ct) + G(x + ct).  (3.56)

Before we impose the initial conditions, we want to understand what this solution means. We look at the following examples: Let u = F(x − ct) (that is, G = 0) and let F(x − ct) = sech(x − ct). When c = 1, this simplifies to u = sech(x − t). This is just a wave traveling to the right. Similarly, if we look at u = sech(x + ct), this is a wave traveling to the left. Graphs of this at various times t are given below (Figs. 3.1 and 3.2). Thus, in general,

u = F(x − ct) + G(x + ct)  (3.57)

consists of two waves, one traveling right and one traveling left. We now incorporate the initial conditions (3.53) on the solution (3.57). In doing so, we obtain

F(x) + G(x) = f(x),  −cF′(x) + cG′(x) = g(x).  (3.58)

Fig. 3.1 a Traveling wave solutions u = sech(x − t) at times t = 0, 10 and 20. b Traveling wave solutions u = sech(x + t) at times t = 0, 10 and 20

From (3.58), we obtain

F′(x) = ½ f′(x) − (1/2c) g(x),  G′(x) = ½ f′(x) + (1/2c) g(x),  (3.59)

and upon integration

F(x) = ½ f(x) − (1/2c) ∫_{x₀}^{x} g(s) ds,  (3.60a)
G(x) = ½ f(x) + (1/2c) ∫_{x₀}^{x} g(s) ds,  (3.60b)

where x₀ is an arbitrary constant. Thus, we obtain

u(x, t) = ½ [f(x − ct) + f(x + ct)] + (1/2c) [∫_{x₀}^{x+ct} g(s) ds − ∫_{x₀}^{x−ct} g(s) ds],


Fig. 3.2 a The initial condition at t = 0. b Traveling wave solutions at times t = 0.5 and 1. c Traveling wave solutions at times t = 1.5 and 2

which simplifies to

u(x, t) = ½ [f(x − ct) + f(x + ct)] + (1/2c) ∫_{x−ct}^{x+ct} g(s) ds.  (3.61)

This is known as the d'Alembert solution.


Example 3.9 Consider u_tt = u_xx, subject to the initial conditions:

u(x, 0) = { 2 if −1 < x < 1, 0 otherwise },  u_t(x, 0) = 0.  (3.62)

If we let

H(x) = { 0 if x < 0, 1 if x ≥ 0 },  (3.63)

then (3.62) can be rewritten as

u(x, 0) = 2 (H(x + 1) − H(x − 1)),  (3.64)

so the solution for this problem is given by

u(x, t) = H(x + 1 + t) − H(x − 1 + t) + H(x + 1 − t) − H(x − 1 − t).  (3.65)

Figure 3.2 illustrates the solution at times t = 0, 0.5, 1, 1.5 and 2.
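Solution (3.65) is d'Alembert's formula (3.61) with c = 1 and g = 0. The sketch below (my own naming of H, f and u; numpy assumed) evaluates it at a few points of the square pulse:

```python
import numpy as np

def H(x):
    # Heaviside step (3.63): 0 for x < 0, 1 for x >= 0
    return np.where(x >= 0, 1.0, 0.0)

def f(x):
    # Initial displacement (3.64): 2 (H(x+1) - H(x-1))
    return 2.0*(H(x + 1) - H(x - 1))

def u(x, t, c=1.0):
    # d'Alembert solution (3.61) with zero initial velocity (g = 0)
    return 0.5*(f(x - c*t) + f(x + c*t))

assert u(np.array([0.0]), 0.0)[0] == 2.0  # full height at the center initially
assert u(np.array([0.0]), 2.0)[0] == 0.0  # both half-pulses have left x = 0
assert u(np.array([2.0]), 2.0)[0] == 1.0  # right-moving pulse, half height
```

The two half-height pulses moving left and right are exactly what Fig. 3.2 depicts.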

Exercises

1. Determine the type of the following second order PDEs:

(i) (1 + x²) u_xx − (1 + y²) u_yy = u_x + u_y,
(ii) u_xx + 2u_xy + u_yy = 0,
(iii) y² u_xx + 2y u_xy + u_yy = 0,
(iv) xy u_xy + u = u_x + u_y,  xy ≠ 0,
(v) 2u_xx − 5u_xy − 3u_yy = 1,
(vi) u_xx − 4y u_xy + 13u_yy = u,
(vii) (1 + y²) u_xx + 2xy u_xy + (1 + x²) u_yy = 1.

2. Transform the following PDEs to standard form. Find the general solution if possible.

(i) u_xx + 2u_xy + u_yy = 0,
(ii) y² u_xx − 2xy u_xy + x² u_yy = 0,
(iii) y² u_xx + 2xy u_xy + x² u_yy + x u_x + y u_y = 0,
(iv) 2u_xx − 3u_xy + u_yy = u_x + u_y,
(v) x² u_xx − 3xy u_xy + 2y² u_yy = 0,
(vi) 2y² u_xx − 3y u_xy + u_yy = 0,
(vii) 13u_xx − 6u_xy + u_yy = 0,
(viii) u_xx − 2 cos x u_xy + 2 cos² x u_yy = 0,
(ix) 8y² u_xx + 4y u_xy + 5u_yy − 5y⁻¹ u_y = 0.

3. Earlier in this chapter, we saw that in order to target hyperbolic form it was necessary to solve the system:

a r_x² + b r_x r_y + c r_y² = −(a s_x² + b s_x s_y + c s_y²),
2a r_x s_x + b(r_x s_y + r_y s_x) + 2c r_y s_y = 0.  (3.66)

Show that if r and s are defined as

r = α − β,  s = α + β,

then (3.66) is satisfied if α and β satisfy

a α_x² + b α_x α_y + c α_y² = 0,  (3.67)
a β_x² + b β_x β_y + c β_y² = 0.  (3.68)

Furthermore, to target elliptic form it was necessary to solve the system:

a r_x² + b r_x r_y + c r_y² = a s_x² + b s_x s_y + c s_y²,
2a r_x s_x + b(r_x s_y + r_y s_x) + 2c r_y s_y = 0.  (3.69)

Show that if r and s are defined as

r = α − iβ,  s = α + iβ,

where i = √−1, then (3.69) is satisfied if α and β again satisfy the system (3.67)–(3.68).

4. The PDE

x² u_xx − 4xy u_xy + 4y² u_yy + x u_x = 0

is parabolic. Introducing new coordinates

r = x² y,  s = y,

reduces the PDE to

u_ss − (r/s²) u_r = 0.

In fact, any choice of

r = R(x² y),  s = S(x, y)

will transform the original PDE to one that is in parabolic standard form (provided the Jacobian of the transformation is not zero). Can the choice of R and S be made such that we can transform to u_ss = 0 or u_ss = u_r?

4

Fourier Series

At this point, we will consider the major technique for solving standard boundary value problems. Consider, for example, the heat equation

u_t = u_xx,  0 < x < π,  t > 0,  (4.1)

subject to

u(0, t) = u(π, t) = 0,  (4.2a)
u(x, 0) = 2 sin x.  (4.2b)

Here, we will assume that solutions are of the form

u(x, t) = X(x) T(t).  (4.3)

Substituting into the heat equation (4.1) gives X T′ = X″ T, or, after dividing by T X,

T′/T = X″/X.  (4.4)

Since each side is a function of a different variable, they therefore must be independent of each other, and thus

T′/T = λ = X″/X,  (4.5)

for some constant λ. The boundary conditions in (4.2a) become, accordingly,

X(0) = X(π) = 0.  (4.6)

70

4 Fourier Series

From (4.5) we have X″ = λX, which we integrate. In doing so, we have three cases, depending on the sign of λ:

X(x) = c_1 e^{nx} + c_2 e^{−nx} if λ = n²,
X(x) = c_1 x + c_2 if λ = 0,
X(x) = c_1 sin nx + c_2 cos nx if λ = −n²,

where n is a constant and c_1 and c_2 are arbitrary constants. We will consider each case separately. In the first case, where λ = n², imposing the boundary conditions (4.6) gives

c_1 e⁰ + c_2 e⁰ = 0, c_1 e^{nπ} + c_2 e^{−nπ} = 0,  (4.7)

and solving for c_1 and c_2 shows that both are identically zero. This shows that u = 0 identically, and the initial condition (4.2b) cannot be satisfied. In the second case, where λ = 0, imposing the boundary conditions (4.6) gives

c_1 · 0 + c_2 = 0, c_1 π + c_2 = 0,  (4.8)

and solving for c_1 and c_2 shows again that both are identically zero, and again u = 0. In the third case, λ = −n², using the boundary conditions (4.6) gives

c_1 sin 0 + c_2 cos 0 = 0, c_1 sin nπ + c_2 cos nπ = 0,  (4.9)

which leads to

c_2 = 0, sin nπ = 0 ⇒ n = 0, ±1, ±2, . . . .  (4.10)

However, it suffices to consider only the positive values of n. From (4.5) with λ = −n², we have

T′ = −n²T,  (4.11)

from which we find that

T(t) = c_3 e^{−n²t},  (4.12)

where c_3 is a constant of integration. This gives a solution of the original PDE as

u = XT = c e^{−n²t} sin nx,  (4.13)

where we have set c = c_1 c_3. Finally, imposing the initial condition u(x, 0) = 2 sin x gives

u(x, 0) = c e⁰ sin nx = 2 sin x,  (4.14)

from which we obtain c = 2 and n = 1. Therefore, the solution to the PDE subject to the initial and boundary conditions is

u(x, t) = 2e^{−t} sin x.  (4.15)


If the initial condition was different, say u(x, 0) = 4 sin 3x, then c = 4 and n = 3 and the solution would be

u(x, t) = 4e^{−9t} sin 3x.  (4.16)

However, if the initial condition was u(x, 0) = 2 sin x + 4 sin 3x, it would be impossible to choose n and c in (4.13) to satisfy both terms. However, if we were to solve the heat equation with each initial condition separately, we can simply add the solutions together. In this case, this would be

u(x, t) = 2e^{−t} sin x + 4e^{−9t} sin 3x.

This is called the principle of superposition.

Theorem (Principle of Superposition) If u_1 and u_2 are two solutions to the heat equation, then u = c_1 u_1 + c_2 u_2 is also a solution.

The principle of superposition easily extends to more than two solutions. Thus, if

u_n(x, t) = (a_n cos nx + b_n sin nx) e^{−n²t}, n = 0, 1, 2, 3, 4, . . .

are solutions to the heat equation (a_n and b_n are constants), then so is

u(x, t) = Σ_{n=0}^∞ (a_n cos nx + b_n sin nx) e^{−n²t}.  (4.17)

If the initial conditions only involved sine and cosine functions, we could choose the integers n and constants a_n and b_n accordingly to match the terms in the initial condition. However, if the initial condition was, for example, u(x, 0) = πx − x², then it is not obvious how to proceed, as (4.17) contains no x or x² terms. However, consider the following:

u_1 = (8/π) e^{−t} sin x,
u_2 = (8/π) (e^{−t} sin x + (1/27) e^{−9t} sin 3x),
u_3 = (8/π) (e^{−t} sin x + (1/27) e^{−9t} sin 3x + (1/125) e^{−25t} sin 5x).  (4.18)

From Fig. 4.1, one will notice that with each additional term added in (4.18), the solution is a better match to the initial condition. In fact, if we consider

u = (8/π) Σ_{n=1}^∞ (1/(2n − 1)³) e^{−(2n−1)²t} sin((2n − 1)x),  (4.19)

we get an excellent match to the initial condition. Thus, we are led to ask: how are the integers n and constants a_n and b_n chosen so as to match the initial condition?


Fig. 4.1 The solutions (4.18) with one and two terms at t = 0

4.1 Fourier Series

It is well known that many functions can be represented by a power series

f(x) = Σ_{n=0}^∞ a_n (x − x_0)^n,

where x_0 is the center of the series and the a_n are constants determined by

a_n = f^{(n)}(x_0)/n!, n = 0, 1, 2, 3, . . . .


For functions that require different properties, say, for example, fixed values at the endpoints of an interval, a different type of series is required. An example of such a series is the Fourier series. For example, suppose that f(x) = πx − x² has a Fourier series

πx − x² = b_1 sin x + b_2 sin 2x + b_3 sin 3x + b_4 sin 4x + · · · .  (4.20)

How do we choose b_1, b_2, b_3, etc., such that the Fourier series looks like the function? Notice that if we multiply (4.20) by sin x and integrate from 0 to π,

∫_0^π (πx − x²) sin x dx = b_1 ∫_0^π sin²x dx + b_2 ∫_0^π sin x sin 2x dx + · · · ,

then we obtain

4 = b_1 (π/2) ⇒ b_1 = 8/π,  (4.21)

since

∫_0^π (πx − x²) sin x dx = 4, ∫_0^π sin²x dx = π/2, and ∫_0^π sin x sin nx dx = 0, n = 2, 3, 4, . . . .

Similarly, if we multiply (4.20) by sin 2x and integrate from 0 to π,

∫_0^π (πx − x²) sin 2x dx = b_1 ∫_0^π sin x sin 2x dx + b_2 ∫_0^π sin² 2x dx + · · · ,

then we obtain

0 = b_2 (π/2) ⇒ b_2 = 0.  (4.22)

Multiplying (4.20) by sin 3x and integrating from 0 to π,

∫_0^π (πx − x²) sin 3x dx = b_1 ∫_0^π sin x sin 3x dx + b_2 ∫_0^π sin 2x sin 3x dx + · · · ,

then we obtain

4/27 = b_3 (π/2) ⇒ b_3 = 8/(27π).  (4.23)

Continuing in this fashion, we would obtain

b_4 = 0, b_5 = 8/(125π), b_6 = 0, b_7 = 8/(343π).  (4.24)

Substitution of (4.21)–(4.24) into (4.20) gives (4.18).
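The projections just carried out by hand can be reproduced by quadrature. A sketch (the `quad` helper is my own midpoint rule, not a library call):

```python
import math

def quad(f, a, b, m=20000):
    # Midpoint rule; plenty accurate for these smooth integrands
    h = (b - a) / m
    return h * sum(f(a + (i + 0.5) * h) for i in range(m))

def b(n):
    # b_n = (2/pi) * integral_0^pi (pi*x - x^2) sin(nx) dx, from (4.20) and orthogonality
    return (2 / math.pi) * quad(lambda x: (math.pi * x - x * x) * math.sin(n * x),
                                0, math.pi)

print(b(1), b(2), b(3))  # should lie close to 8/pi, 0, 8/(27*pi)
```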


4.2 Fourier Series on [−π, π]

Consider the series

f(x) = a_0 + Σ_{n=1}^∞ (a_n cos nx + b_n sin nx),  (4.25)

where a_0, a_n and b_n are constant coefficients. The question is: how do we choose the coefficients so as to give an accurate representation of f(x)? We use the following integral identities for cos nx and sin nx:

∫_{−π}^{π} cos nx dx = 0, ∫_{−π}^{π} sin nx dx = 0, ∫_{−π}^{π} sin nx cos mx dx = 0 for all m, n,

∫_{−π}^{π} cos nx cos mx dx = 0 if m ≠ n, π if m = n,

∫_{−π}^{π} sin nx sin mx dx = 0 if m ≠ n, π if m = n.  (4.26)

First, if we integrate (4.25) from −π to π, then by the properties in (4.26), we are left with

∫_{−π}^{π} f(x) dx = ∫_{−π}^{π} a_0 dx = 2πa_0,

from which we deduce

a_0 = (1/(2π)) ∫_{−π}^{π} f(x) dx.

Next we multiply the series (4.25) by cos mx, giving

f(x) cos mx = a_0 cos mx + Σ_{n=1}^∞ (a_n cos nx cos mx + b_n sin nx cos mx).

Again, integrate from −π to π. From (4.26), the integral of a_0 cos mx is zero; the integral of cos nx cos mx is zero except when n = m; and, further, the integral of sin nx cos mx is zero for all m and n. This leaves

∫_{−π}^{π} f(x) cos nx dx = a_n ∫_{−π}^{π} cos² nx dx = πa_n,

or

a_n = (1/π) ∫_{−π}^{π} f(x) cos nx dx.


Similarly, if we multiply the series (4.25) by sin mx, then we obtain

f(x) sin mx = a_0 sin mx + Σ_{n=1}^∞ (a_n cos nx sin mx + b_n sin nx sin mx),

which we integrate from −π to π. From (4.26), the integral of a_0 sin mx is zero; the integral of cos nx sin mx is zero for all m and n; and, further, the integral of sin nx sin mx is zero except when n = m. This leaves

∫_{−π}^{π} f(x) sin nx dx = b_n ∫_{−π}^{π} sin² nx dx = πb_n,

or

b_n = (1/π) ∫_{−π}^{π} f(x) sin nx dx.

Therefore, the Fourier series representation of a function f(x) is given by

f(x) = a_0 + Σ_{n=1}^∞ (a_n cos nx + b_n sin nx),

where the coefficients a_0, a_n and b_n are chosen such that

a_0 = (1/(2π)) ∫_{−π}^{π} f(x) dx, a_n = (1/π) ∫_{−π}^{π} f(x) cos nx dx,
b_n = (1/π) ∫_{−π}^{π} f(x) sin nx dx for n = 1, 2, . . . .  (4.27)

As the integral formulas for a_0 and a_n are so similar, it is more convenient for us to shift the factor of 1/2 in a_0 into the Fourier series, thus

f(x) = (1/2)a_0 + Σ_{n=1}^∞ (a_n cos nx + b_n sin nx),

where all of the a_n's are given by

a_n = (1/π) ∫_{−π}^{π} f(x) cos nx dx.  (4.28)

Example 4.1 Consider

f(x) = x², [−π, π].  (4.29)

From (4.28) we have


a_0 = (1/π) ∫_{−π}^{π} x² dx = (1/π) [x³/3]_{−π}^{π} = 2π²/3,

a_n = (1/π) ∫_{−π}^{π} x² cos nx dx
    = (1/π) [x² sin nx/n + 2x cos nx/n² − 2 sin nx/n³]_{−π}^{π}
    = 4(−1)^n/n²,

and from (4.27)

b_n = (1/π) ∫_{−π}^{π} x² sin nx dx
    = (1/π) [−x² cos nx/n + 2x sin nx/n² + 2 cos nx/n³]_{−π}^{π}
    = 0.

Thus, the Fourier series for f(x) = x² on [−π, π] is

f(x) = π²/3 + 4 Σ_{n=1}^∞ ((−1)^n/n²) cos nx.  (4.30)

Figure 4.2 shows consecutive plots of the Fourier series (4.30) with 5 and 10 terms on the interval [−π , π ] and [−3π , 3π ].
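Partial sums of (4.30) can be evaluated directly; a quick numerical sketch (not in the text) also recovers the classical value Σ 1/n² = π²/6 by setting x = π:

```python
import math

def fourier_x2(x, terms):
    # Partial sum of (4.30): pi^2/3 + 4 * sum_{n=1}^{terms} (-1)^n cos(nx)/n^2
    return math.pi ** 2 / 3 + 4 * sum((-1) ** n * math.cos(n * x) / n ** 2
                                      for n in range(1, terms + 1))

# Interior point: the series converges to x^2
err_interior = abs(fourier_x2(1.0, 2000) - 1.0)
# Endpoint x = pi: pi^2/3 + 4 * sum 1/n^2 -> pi^2, i.e. sum 1/n^2 = pi^2/6
err_endpoint = abs(fourier_x2(math.pi, 2000) - math.pi ** 2)
print(err_interior, err_endpoint)
```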


Fig. 4.2 The solutions (4.30) with five and ten terms



Example 4.2 Consider

f(x) = x, [−π, π].  (4.31)

From (4.28) we have

a_0 = (1/π) ∫_{−π}^{π} x dx = (1/π) [x²/2]_{−π}^{π} = 0,

a_n = (1/π) ∫_{−π}^{π} x cos nx dx
    = (1/π) [x sin nx/n + cos nx/n²]_{−π}^{π}
    = 0,

and from (4.27)

b_n = (1/π) ∫_{−π}^{π} x sin nx dx
    = (1/π) [−x cos nx/n + sin nx/n²]_{−π}^{π}
    = 2(−1)^{n+1}/n.

Thus, the Fourier series for f(x) = x on [−π, π] is

f(x) = 2 Σ_{n=1}^∞ ((−1)^{n+1}/n) sin nx.  (4.32)

Figure 4.3 shows consecutive plots of the Fourier series (4.32) with 5 and 50 terms on the interval [−π , π ] and [−3π , 3π ].

Example 4.3 Consider

f(x) =
  1 if −π < x < 0,
  x + 1 if 0 < x < π.

From (4.28) we have



Fig. 4.3 The solutions (4.32) with five and fifty terms

a_0 = (1/π) ∫_{−π}^{0} 1 dx + (1/π) ∫_{0}^{π} (x + 1) dx = (1/π) [x]_{−π}^{0} + (1/π) [x²/2 + x]_{0}^{π} = π/2 + 2,

a_n = (1/π) ∫_{−π}^{0} cos nx dx + (1/π) ∫_{0}^{π} (x + 1) cos nx dx
    = (1/π) [sin nx/n]_{−π}^{0} + (1/π) [(x + 1) sin nx/n + cos nx/n²]_{0}^{π}
    = (1/π) ((−1)^n − 1)/n²,

and from (4.27)

b_n = (1/π) ∫_{−π}^{0} sin nx dx + (1/π) ∫_{0}^{π} (x + 1) sin nx dx
    = (1/π) [−cos nx/n]_{−π}^{0} + (1/π) [−(x + 1) cos nx/n + sin nx/n²]_{0}^{π}
    = (−1)^{n+1}/n.

Thus, the Fourier series is

f(x) = π/4 + 1 + Σ_{n=1}^∞ ((1/π) ((−1)^n − 1)/n² cos nx + ((−1)^{n+1}/n) sin nx).  (4.33)

Figure 4.4 shows a plot of the Fourier series (4.33) with 10 terms on the interval [−π , π ]. As many problems are on intervals more general than [−π, π ], it is natural for us to extend to more general intervals.



Fig. 4.4 The solutions (4.33) with 10 terms

4.3 Fourier Series on [−L, L]

Consider the series

f(x) = (1/2)a_0 + Σ_{n=1}^∞ (a_n cos(nπx/L) + b_n sin(nπx/L)),  (4.34)

where L is a positive number and a_0, a_n, and b_n constant coefficients. The question is: how are the coefficients chosen so as to give an accurate representation of f(x)? Here, we use the following properties of cos(nπx/L) and sin(nπx/L):

∫_{−L}^{L} cos(nπx/L) dx = 0, ∫_{−L}^{L} sin(nπx/L) dx = 0, ∫_{−L}^{L} sin(nπx/L) cos(mπx/L) dx = 0 for all m, n,

∫_{−L}^{L} cos(nπx/L) cos(mπx/L) dx = 0 if m ≠ n, L if m = n,

∫_{−L}^{L} sin(nπx/L) sin(mπx/L) dx = 0 if m ≠ n, L if m = n.  (4.35)

If we integrate (4.34) from −L to L, then by the identities in (4.35), we are left with


∫_{−L}^{L} f(x) dx = (1/2) ∫_{−L}^{L} a_0 dx = La_0,

from which we deduce

a_0 = (1/L) ∫_{−L}^{L} f(x) dx.

Next we multiply the series (4.34) by cos(mπx/L), giving

f(x) cos(mπx/L) = (1/2)a_0 cos(mπx/L) + Σ_{n=1}^∞ (a_n cos(nπx/L) cos(mπx/L) + b_n sin(nπx/L) cos(mπx/L)).

Again, we integrate from −L to L and use the identities in (4.35). This gives

∫_{−L}^{L} f(x) cos(nπx/L) dx = a_n ∫_{−L}^{L} cos²(nπx/L) dx = La_n,

or

a_n = (1/L) ∫_{−L}^{L} f(x) cos(nπx/L) dx.

Similarly, if we multiply the series (4.34) by sin(mπx/L), then we obtain

f(x) sin(mπx/L) = (1/2)a_0 sin(mπx/L) + Σ_{n=1}^∞ (a_n cos(nπx/L) sin(mπx/L) + b_n sin(nπx/L) sin(mπx/L)),

which we integrate from −L to L. Using the identities in (4.35) gives

∫_{−L}^{L} f(x) sin(nπx/L) dx = b_n ∫_{−L}^{L} sin²(nπx/L) dx = Lb_n,

or

b_n = (1/L) ∫_{−L}^{L} f(x) sin(nπx/L) dx.

Therefore, the Fourier series representation of a function f(x) is given by

f(x) = (1/2)a_0 + Σ_{n=1}^∞ (a_n cos(nπx/L) + b_n sin(nπx/L)),

where the coefficients a_n and b_n are chosen such that

a_n = (1/L) ∫_{−L}^{L} f(x) cos(nπx/L) dx, b_n = (1/L) ∫_{−L}^{L} f(x) sin(nπx/L) dx,  (4.36)

for n = 0, 1, 2, . . . .


Example 4.4 Consider

f(x) = 9 − x², [−3, 3].  (4.37)

In this case, L = 3, so from (4.36) we have

a_0 = (1/3) ∫_{−3}^{3} (9 − x²) dx = (1/3) [9x − x³/3]_{−3}^{3} = 12,

a_n = (1/3) ∫_{−3}^{3} (9 − x²) cos(nπx/3) dx
    = (1/3) [((27 − 3x²)/(nπ) + 54/(n³π³)) sin(nπx/3) − (18x/(n²π²)) cos(nπx/3)]_{−3}^{3}
    = 36(−1)^{n+1}/(n²π²),

and

b_n = (1/3) ∫_{−3}^{3} (9 − x²) sin(nπx/3) dx
    = (1/3) [−((27 − 3x²)/(nπ) + 54/(n³π³)) cos(nπx/3) − (18x/(n²π²)) sin(nπx/3)]_{−3}^{3}
    = 0.

Thus, the Fourier series for (4.37) on [−3, 3] is

f(x) = 6 + Σ_{n=1}^∞ (36(−1)^{n+1}/(n²π²)) cos(nπx/3).  (4.38)

Figure 4.5 shows the graph of this Fourier series (4.38) with the first 20 terms.

Example 4.5 Consider

f(x) =
  −2 − x if −2 < x < −1,
  x if −1 < x < 1,
  2 − x if 1 < x < 2.  (4.39)

In this case, L = 2, so from (4.36) we have

a_0 = (1/2) (∫_{−2}^{−1} (−2 − x) dx + ∫_{−1}^{1} x dx + ∫_{1}^{2} (2 − x) dx)
    = (1/2) [−2x − x²/2]_{−2}^{−1} + (1/2) [x²/2]_{−1}^{1} + (1/2) [2x − x²/2]_{1}^{2} = 0,


Fig. 4.5 The solutions (4.38) using the first 20 terms

a_n = (1/2) ∫_{−2}^{−1} (−2 − x) cos(nπx/2) dx + (1/2) ∫_{−1}^{1} x cos(nπx/2) dx + (1/2) ∫_{1}^{2} (2 − x) cos(nπx/2) dx
    = (1/2) [−(2(x + 2)/(nπ)) sin(nπx/2) − (4/(n²π²)) cos(nπx/2)]_{−2}^{−1}
    + (1/2) [(2x/(nπ)) sin(nπx/2) + (4/(n²π²)) cos(nπx/2)]_{−1}^{1}
    + (1/2) [−(2(x − 2)/(nπ)) sin(nπx/2) − (4/(n²π²)) cos(nπx/2)]_{1}^{2}
    = 0,

and

b_n = (1/2) ∫_{−2}^{−1} (−2 − x) sin(nπx/2) dx + (1/2) ∫_{−1}^{1} x sin(nπx/2) dx + (1/2) ∫_{1}^{2} (2 − x) sin(nπx/2) dx
    = (1/2) [(2(x + 2)/(nπ)) cos(nπx/2) − (4/(n²π²)) sin(nπx/2)]_{−2}^{−1}
    + (1/2) [−(2x/(nπ)) cos(nπx/2) + (4/(n²π²)) sin(nπx/2)]_{−1}^{1}
    + (1/2) [(2(x − 2)/(nπ)) cos(nπx/2) − (4/(n²π²)) sin(nπx/2)]_{1}^{2}
    = (8/(n²π²)) sin(nπ/2).  (4.40)

Thus, the Fourier series for (4.39) on [−2, 2] is

f(x) = (8/π²) (sin(πx/2) − (1/3²) sin(3πx/2) + (1/5²) sin(5πx/2) − · · ·)
     = (8/π²) Σ_{n=1}^∞ ((−1)^{n+1}/(2n − 1)²) sin((2n − 1)πx/2).  (4.41)

Figure 4.6 shows the graph of this Fourier series (4.41) with the first 5 terms.

Example 4.6 Consider

f(x) =
  0 if −1 < x < 0,
  1 if 0 < x < 1.

In this case, L = 1, so from (4.36) we have

a_0 = ∫_0^1 dx = 1,

a_n = ∫_0^1 cos nπx dx = [sin nπx/(nπ)]_0^1 = 0,

and

Fig. 4.6 The solutions (4.41) with 5 terms


Fig. 4.7 The solutions (4.42) with 10 and 50 terms

b_n = ∫_0^1 sin nπx dx = [−cos nπx/(nπ)]_0^1 = (1 − (−1)^n)/(nπ).

Thus, the Fourier series for this f(x) on [−1, 1] is

f(x) = 1/2 + (1/π) Σ_{n=1}^∞ ((1 − (−1)^n)/n) sin nπx
     = 1/2 + (2/π) Σ_{n=1}^∞ sin((2n − 1)πx)/(2n − 1).  (4.42)

Figure 4.7 shows the graph of this Fourier series (4.42) with the first 10 and 50 terms. It is interesting to note that regardless of the number of terms we take in the Fourier series, we cannot eliminate the spikes at x = −1, 0, 1, etc. This phenomenon is known as Gibbs' phenomenon.
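The size of those spikes can be measured. A numerical sketch (grid and names are my own): the peak of the partial sums of (4.42) just to the right of the jump at x = 0 stays near 1.09 — the classical Gibbs overshoot of roughly 9% of the jump — no matter how many terms are taken.

```python
import math

def partial(x, N):
    # Partial sum of (4.42): 1/2 + (2/pi) * sum_{n=1}^{N} sin((2n-1)*pi*x)/(2n-1)
    return 0.5 + (2 / math.pi) * sum(math.sin((2 * n - 1) * math.pi * x) / (2 * n - 1)
                                     for n in range(1, N + 1))

peaks = []
for N in (10, 50, 200):
    # scan a fine grid just to the right of the jump at x = 0
    peaks.append(max(partial(i / 10000, N) for i in range(1, 2000)))
print(peaks)  # all stay near 1.09 rather than approaching 1
```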

4.4 Odd and Even Extensions

Consider f(x) = x on [0, π]. Here the interval is half the interval [−π, π]. Can we still construct a Fourier series for this? It really depends on what f(x) looks like on the interval [−π, 0]. For example, if f(x) = x on [−π, 0], then the answer is yes. If f(x) = −x on [−π, 0], then the answer is also yes. In either case, as long as we are given f(x) on [−π, 0], the answer is yes. If we are just given f(x) on [0, π], then it is natural to extend f(x) to [−π, 0] as either an odd extension or an even extension. Recall that a function is even if f(−x) = f(x) and odd if f(−x) = −f(x). For example, if


f(x) = x, then f(−x) = −x = −f(x), so f(x) = x is odd. Similarly, if f(x) = x², then f(−x) = (−x)² = x² = f(x), so f(x) = x² is even. For each extension, the Fourier series will contain only sine terms or only cosine terms; these series are called Sine series and Cosine series, respectively. Before we consider each series separately, it is necessary to establish the following lemma.

Lemma 4.1 If f(x) is an odd function, then

∫_{−l}^{l} f(x) dx = 0,

and if f(x) is an even function, then

∫_{−l}^{l} f(x) dx = 2 ∫_0^l f(x) dx.

The proofs are left as an exercise for the reader. At this point, we are ready to consider each series separately.

4.4.1 Sine Series

If f(x) is given on [0, L] and we assume that f(x) is an odd function, that gives us f(x) on the interval [−L, 0]. We now consider the Fourier coefficients a_n and b_n:

a_n = (1/L) ∫_{−L}^{L} f(x) cos(nπx/L) dx, b_n = (1/L) ∫_{−L}^{L} f(x) sin(nπx/L) dx.  (4.43)

Since f(x) is odd and cos(nπx/L) is even, their product is odd, and by Lemma 4.1

a_n = 0 for all n.

Similarly, since f(x) is odd and sin(nπx/L) is odd, their product is even, and by Lemma 4.1

b_n = (2/L) ∫_0^L f(x) sin(nπx/L) dx.  (4.44)

The Fourier series is therefore


f(x) = Σ_{n=1}^∞ b_n sin(nπx/L),  (4.45)

where b_n is given in (4.44).

Example 4.7 Find a Fourier Sine series for

f(x) = x², [0, 1].  (4.46)

The coefficient b_n is given by

b_n = 2 ∫_0^1 x² sin nπx dx
    = 2 [−x² cos nπx/(nπ) + 2x sin nπx/(n²π²) + 2 cos nπx/(n³π³)]_0^1
    = 2 (2((−1)^n − 1)/(n³π³) − (−1)^n/(nπ)),

giving the Fourier Sine series as

f = 2 Σ_{n=1}^∞ (2((−1)^n − 1)/(n³π³) − (−1)^n/(nπ)) sin nπx.  (4.47)

Example 4.8 Find a Fourier Sine series for

f(x) = cos x, [0, π].  (4.48)

In calculating the coefficient b_n, the case n = 1 is special and must be handled separately. When n = 1 (Fig. 4.8),

b_1 = (2/π) ∫_0^π cos x sin x dx = 0.

When n > 1,

b_n = (2/π) ∫_0^π cos x sin nx dx
    = (2/π) [−(1/2) cos((n + 1)x)/(n + 1) − (1/2) cos((n − 1)x)/(n − 1)]_0^π
    = 2n(1 + (−1)^n)/(π(n² − 1)).

The Fourier Sine series is then given by (Fig. 4.9)



Fig. 4.8 The function (4.46) with its odd extension and its Fourier Sine series (4.47) with 10 terms


Fig. 4.9 The function (4.48) with its odd extension and its Fourier Sine series (4.49) with 20 terms

f = (2/π) Σ_{n=2}^∞ (n(1 + (−1)^n)/(n² − 1)) sin nx
  = (8/π) Σ_{n=1}^∞ (n/(4n² − 1)) sin 2nx.  (4.49)

4.4.2 Cosine Series

If f(x) is given on [0, L], we assume that f(x) is an even function, which gives us f(x) on the interval [−L, 0]. We now consider the Fourier coefficients a_n and b_n:

a_n = (1/L) ∫_{−L}^{L} f(x) cos(nπx/L) dx.  (4.50)

Since f(x) is even and cos(nπx/L) is even, their product is even, and by Lemma 4.1

a_n = (2/L) ∫_0^L f(x) cos(nπx/L) dx.  (4.51)

Similarly, for

b_n = (1/L) ∫_{−L}^{L} f(x) sin(nπx/L) dx,  (4.52)

since f(x) is even and sin(nπx/L) is odd, their product is odd, and by Lemma 4.1

b_n = 0 for all n.  (4.53)

The Fourier series is therefore

f(x) = (1/2)a_0 + Σ_{n=1}^∞ a_n cos(nπx/L),  (4.54)

where a_n is given in (4.51).

Example 4.9 Find a Fourier Cosine series for

f(x) = x, [0, 2].  (4.55)

The coefficient a_0 is given by

a_0 = (2/2) ∫_0^2 x dx = [x²/2]_0^2 = 2.

The coefficient a_n is given by


a_n = ∫_0^2 x cos(nπx/2) dx
    = [(2x/(nπ)) sin(nπx/2) + (4/(n²π²)) cos(nπx/2)]_0^2
    = (4/(n²π²)) ((−1)^n − 1),

giving the Fourier Cosine series as (Fig. 4.10)

f = 1 + (4/π²) Σ_{n=1}^∞ (((−1)^n − 1)/n²) cos(nπx/2).  (4.56)

Example 4.10 Find a Fourier Cosine series for

f(x) = sin x, [0, π].  (4.57)

The coefficient a_0 is given by


Fig. 4.10 The function (4.55) with its even extension and its Fourier Cosine series (4.56) with 5 terms


a_0 = (2/π) ∫_0^π sin x dx = (2/π) [−cos x]_0^π = 4/π.  (4.58)

For the remaining coefficients a_n, the case a_1 again needs to be considered separately. For a_1,

a_1 = (2/π) ∫_0^π sin x cos x dx = 0,

and the coefficient a_n, n ≥ 2, is given by

a_n = (2/π) ∫_0^π sin x cos nx dx
    = (2/π) [(1/2) cos((n − 1)x)/(n − 1) − (1/2) cos((n + 1)x)/(n + 1)]_0^π
    = −(2/π) (1 + (−1)^n)/(n² − 1).

Thus, the Fourier Cosine series is (Fig. 4.11)

f = 2/π − (2/π) Σ_{n=2}^∞ ((1 + (−1)^n)/(n² − 1)) cos nx.  (4.59)

Example 4.11 Find a Fourier Sine and Cosine series for

f(x) =
  4x − x² for 0 ≤ x ≤ 2,
  8 − 2x for 2 < x < 4.

For the Fourier Sine series, b_n is obtained by

b_n = (2/4) ∫_0^2 (4x − x²) sin(nπx/4) dx + (2/4) ∫_2^4 (8 − 2x) sin(nπx/4) dx
    = [((2x² − 8x)/(nπ)) cos(nπx/4) + ((32 − 16x)/(n²π²)) sin(nπx/4) − (64/(n³π³)) cos(nπx/4)]_0^2
    + [((4x − 16)/(nπ)) cos(nπx/4) − (16/(n²π²)) sin(nπx/4)]_2^4
    = (16/(n²π²)) sin(nπ/2) + (64/(n³π³)) (1 − cos(nπ/2)).

Thus, the Fourier Sine series is given by


Fig. 4.11 The function (4.57) with its even extension and its Fourier Cosine series (4.59) with 5 terms

f = Σ_{n=1}^∞ ((16/(n²π²)) sin(nπ/2) + (64/(n³π³)) (1 − cos(nπ/2))) sin(nπx/4).  (4.60)

For the Fourier Cosine series, a_0 and a_n are given by

a_0 = (2/4) ∫_0^2 (4x − x²) dx + (2/4) ∫_2^4 (8 − 2x) dx
    = [x² − x³/6]_0^2 + [4x − x²/2]_2^4
    = 8/3 + 2 = 14/3,

and

a_n = (1/2) ∫_0^2 (4x − x²) cos(nπx/4) dx + (1/2) ∫_2^4 (8 − 2x) cos(nπx/4) dx
    = [((8x − 2x²)/(nπ)) sin(nπx/4) + ((32 − 16x)/(n²π²)) cos(nπx/4) + (64/(n³π³)) sin(nπx/4)]_0^2
    + [((16 − 4x)/(nπ)) sin(nπx/4) − (16/(n²π²)) cos(nπx/4)]_2^4
    = (16/(n²π²)) (cos(nπ/2) − cos nπ − 2) + (64/(n³π³)) sin(nπ/2).

Thus, the Fourier Cosine series is given by (Fig. 4.12)


Fig. 4.12 Fourier Sine and Cosine series (with 5 terms) for the function (4.59)

f = 7/3 + (16/π²) Σ_{n=1}^∞ (1/n²) (cos(nπ/2) − cos nπ − 2 + (4/(nπ)) sin(nπ/2)) cos(nπx/4).  (4.61)

Example 4.12 Calculate a Fourier series for f(x) = √(1 − x²) for [−1, 1]. As this function is even, we will have a Fourier Cosine series, which is given by

f(x) = (1/2)a_0 + Σ_{n=1}^∞ a_n cos nπx,  (4.62)

where

a_0 = 2 ∫_0^1 √(1 − x²) dx = π/2,  (4.63)

a_n = 2 ∫_0^1 √(1 − x²) cos nπx dx.

As it turns out (see the exercise section), we have a closed-form expression for a_n, n = 1, 2, . . . :

a_n = 2 ∫_0^1 √(1 − x²) cos nπx dx = (1/n) J_1(nπ),  (4.64)

where J_1(x) is the Bessel function that satisfies

x²y″ + xy′ + (x² − 1)y = 0.  (4.65)

Thus, we have

(4.65)


Fig. 4.13 Fourier Cosine series for f(x) = √(1 − x²) with 10 and 50 terms

f(x) = π/4 + Σ_{n=1}^∞ (J_1(nπ)/n) cos(nπx).  (4.66)

The solution is shown in Fig. 4.13 with 10 and 50 terms. As shown in the examples in this chapter, often only a few terms are needed to obtain a fairly good representation of the function. It is interesting to note that if a discontinuity is encountered on the extension, Gibbs' phenomenon occurs. In the next chapter, we return to solving the heat equation, Laplace's equation, and the wave equation, using separation of variables as introduced at the beginning of this chapter.
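Relation (4.64) can be checked numerically with nothing but the integral formula (4.75) for J_1; a sketch (the midpoint-rule helper is my own):

```python
import math

def quad(f, a, b, m=20000):
    # Midpoint rule
    h = (b - a) / m
    return h * sum(f(a + (i + 0.5) * h) for i in range(m))

def J1(x):
    # Bessel integral formula (4.75) with n = 1
    return quad(lambda t: math.cos(t - x * math.sin(t)), 0, math.pi) / math.pi

def a_n(n):
    # a_n = 2 * integral_0^1 sqrt(1 - x^2) cos(n*pi*x) dx, as in (4.64)
    return 2 * quad(lambda x: math.sqrt(1 - x * x) * math.cos(n * math.pi * x), 0, 1)

for n in (1, 2, 3):
    print(n, a_n(n), J1(n * math.pi) / n)  # the two columns should agree
```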

Exercises

1. Find Fourier series for the following:
(i) f(x) = e^{−x} on [−1, 1]
(ii) f(x) = |x| on [−2, 2]
(iii) f(x) = 1 if −1 < x < 0, 1 − x if 0 < x < 1
(iv) f(x) = (4 − |x|)² on [−2, 2]

2. Find Fourier Cosine and Sine series for the following and illustrate the function and its corresponding series on [−L, L]:
(i) f(x) = e^x on [0, 1]
(ii) f(x) = x − x² on [0, 2]
(iii) f(x) = x + 1 if 0 < x < 1, 4 − 2x if 1 < x < 2
(iv) f(x) = 2x² if 0 < x < 1, 3 − x if 1 < x < 3

3. Find the first 10 terms numerically of the Fourier (Sine) series of the following:
(i) f(x) = e^{−x²} on [−1, 1]
(ii) f(x) = √x on [0, 4] (Sine)
(iii) f(x) = −x ln x on [0, 1] (Sine)

4. In Example 4.12 we saw the relation

a_n = 2 ∫_0^1 √(1 − x²) cos nπx dx = (1/n) J_1(nπ),  (4.67)

where J_1(x) is a Bessel function that satisfies (4.65).

(i) Show, under the substitution x = sin θ,

a_n = 2 ∫_0^1 √(1 − x²) cos nπx dx = 2 ∫_0^{π/2} cos²θ cos(nπ sin θ) dθ.  (4.68)

Since cos x is symmetric about the y-axis,

a_n = ∫_0^π cos²θ cos(nπ sin θ) dθ.  (4.69)

Using integration by parts, show

∫_0^π cos²θ cos(nπ sin θ) dθ = (1/(nπ)) ∫_0^π sin θ sin(nπ sin θ) dθ.  (4.70)

Using some basic trig identities, show

∫_0^π sin θ sin(nπ sin θ) dθ = (1/2) ∫_0^π (cos(θ − nπ sin θ) − cos(θ + nπ sin θ)) dθ.  (4.71)

Further show

−∫_0^π cos(θ + nπ sin θ) dθ = ∫_0^π cos(θ − nπ sin θ) dθ,  (4.72)

so that

∫_0^π sin θ sin(nπ sin θ) dθ = ∫_0^π cos(θ − nπ sin θ) dθ.  (4.73)

Bessel's equation is

x²y″ + xy′ + (x² − n²)y = 0,  (4.74)


and for integer values of n, we have the integral formula

J_n(x) = (1/π) ∫_0^π cos(nθ − x sin θ) dθ.  (4.75)

Combining (4.68) through (4.75), show

a_n = (1/n) J_1(nπ).  (4.76)

5 Separation of Variables

We are now ready to resume our work on solving the three main equations: the heat equation, Laplace’s equation, and the wave equation, using the method of separation of variables.

5.1 The Heat Equation

Consider the heat equation

u_t = u_{xx}, 0 < x < 1, t > 0,  (5.1)

subject to the initial and boundary conditions

u(0, t) = u(1, t) = 0,  (5.2a)
u(x, 0) = x − x².  (5.2b)

Assuming separable solutions

u(x, t) = X(x)T(t),  (5.3)

the heat equation (5.1) becomes

XT′ = X″T,

which, after dividing by XT and expanding, gives

T′/T = X″/X.

As T is a function of t only and X a function of x only, this implies that

T′/T = X″/X = λ,

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
D. Arrigo, An Introduction to Partial Differential Equations, Synthesis Lectures on Mathematics & Statistics, https://doi.org/10.1007/978-3-031-22087-6_5


where λ is a constant. This gives

T′ = λT, X″ = λX.  (5.4)

From (5.2a) and (5.3), the boundary conditions become

X(0) = X(1) = 0.  (5.5)

Integrating the X equation in (5.4) gives rise to three cases, depending on the sign of λ. As seen at the beginning of the last chapter, only the case where λ = −k² for some constant k is applicable. This has the solution

X(x) = c_1 sin kx + c_2 cos kx.  (5.6)

Imposing the boundary conditions (5.5) shows that (5.6) becomes

c_1 sin 0 + c_2 cos 0 = 0, c_1 sin k + c_2 cos k = 0,  (5.7)

which leads to

c_2 = 0, c_1 sin k = 0 ⇒ k = 0, π, 2π, . . . , nπ, . . . ,  (5.8)

where n is an integer. From (5.4), we further deduce that

T(t) = c_3 e^{−n²π²t},

giving the solution

u(x, t) = Σ_{n=1}^∞ b_n e^{−n²π²t} sin nπx,

where we have set c_1 c_3 = b_n. Using the initial condition (5.2b) gives

u(x, 0) = x − x² = Σ_{n=1}^∞ b_n sin nπx.

From Chap. 4, we recognize this as a Fourier Sine series and know that the coefficients b_n are chosen such that

b_n = 2 ∫_0^1 (x − x²) sin nπx dx
    = 2 [((x² − x)/(nπ) − 2/(n³π³)) cos nπx + ((1 − 2x)/(n²π²)) sin nπx]_0^1
    = (4/(n³π³)) (1 − (−1)^n).

Thus, the solution of the PDE is


Fig. 5.1 The solution of the heat equation with fixed boundary conditions

u(x, t) = (4/π³) Σ_{n=1}^∞ ((1 − (−1)^n)/n³) e^{−n²π²t} sin nπx.  (5.9)

Figure 5.1 shows the solution at times t = 0, 0.1, and 0.2.

Example 5.1 Solve

u_t = u_{xx}, 0 < x < 1, t > 0,  (5.10)

subject to

u_x(0, t) = u_x(1, t) = 0,  (5.11a)
u(x, 0) = x − x².  (5.11b)

This problem is similar to the preceding problem, except the boundary conditions are different. The last problem had the boundaries fixed at zero, whereas in this problem the boundaries are insulated (no flux across the boundary). Again, assuming that the solutions are separable,

u(x, t) = X(x)T(t),  (5.12)

we obtain the following from the heat equation:

T′ = λT, X″ = λX,  (5.13)

where λ is a constant. The boundary conditions in (5.11a) become


X′(0) = X′(1) = 0.  (5.14)

Integrating the X equation in (5.13) again gives rise to three cases, depending on the sign of λ. As seen in Chap. 4, only the case where λ = −k² for some constant k is relevant. Thus, we have

X(x) = c_1 sin kx + c_2 cos kx.  (5.15)

Imposing the boundary conditions (5.11a) shows that

c_1 k cos 0 − c_2 k sin 0 = 0, c_1 k cos k − c_2 k sin k = 0,  (5.16)

which leads to

c_1 = 0, c_2 sin k = 0 ⇒ k = 0, π, 2π, . . . , nπ, . . . ,  (5.17)

where n is an integer. From (5.13), we also deduce that

T(t) = c_3 e^{−n²π²t},

giving the solution

u(x, t) = Σ_{n=0}^∞ a_n e^{−n²π²t} cos nπx,

where we have set c_2 c_3 = a_n. Using the initial condition gives

u(x, 0) = x − x² = Σ_{n=0}^∞ a_n cos nπx = a_0/2 + Σ_{n=1}^∞ a_n cos nπx.

From Chap. 4, we recognize this as a Fourier Cosine series, and the coefficients a_n are chosen such that

a_0 = 2 ∫_0^1 (x − x²) dx = 2 [x²/2 − x³/3]_0^1 = 1/3,

a_n = 2 ∫_0^1 (x − x²) cos nπx dx
    = 2 [((x − x²)/(nπ) + 2/(n³π³)) sin nπx + ((1 − 2x)/(n²π²)) cos nπx]_0^1
    = −(2/(n²π²)) (1 + (−1)^n).

Thus, the solution of the PDE is


Fig. 5.2 The solution of the heat equation with no flux boundary conditions

u(x, t) = 1/6 + (2/π²) Σ_{n=1}^∞ (((−1)^{n+1} − 1)/n²) e^{−n²π²t} cos nπx.  (5.18)

Figure 5.2 shows the solution at times t = 0, 0.025, and 0.05. It is interesting to note that even though the same initial condition is used for each of the two problems, fixing the boundaries and insulating them give rise to two totally different behaviors for t > 0.

Example 5.2 Solve

u_t = u_{xx}, 0 < x < 2, t > 0,  (5.19)

subject to

u(0, t) = u_x(2, t) = 0,  (5.20a)
u(x, 0) = x if 0 < x < 1, 2 − x if 1 < x < 2.  (5.20b)

In this problem, we have a mixture of both fixed and no-flux boundary conditions. Again, assuming separable solutions

u(x, t) = X(x)T(t),  (5.21)

gives rise to

T′ = λT, X″ = λX,  (5.22)

where λ is a constant. The boundary conditions in (5.20a) accordingly become

X(0) = X′(2) = 0.  (5.23)


Integrating the X equation in (5.22) with λ = −k² for some constant k gives

X(x) = c_1 sin kx + c_2 cos kx.  (5.24)

Imposing the boundary conditions (5.23) shows that

c_1 sin 0 + c_2 cos 0 = 0, c_1 k cos 2k − c_2 k sin 2k = 0,

which leads to

c_2 = 0, cos 2k = 0 ⇒ k = π/4, 3π/4, 5π/4, . . . , (2n − 1)π/4, . . . ,

for integer n. From (5.22), we then deduce that

T(t) = c_3 e^{−(2n−1)²π²t/16},

giving the solution

u(x, t) = Σ_{n=1}^∞ b_n e^{−(2n−1)²π²t/16} sin((2n − 1)πx/4),

where we have set c_1 c_3 = b_n. Using the initial condition (5.20b) gives

u(x, 0) = Σ_{n=1}^∞ b_n sin((2n − 1)πx/4).

Recognizing that we have a Fourier Sine series in the eigenfunctions sin((2n − 1)πx/4), we obtain the coefficients b_n as

b_n = (2/2) (∫_0^1 x sin((2n − 1)πx/4) dx + ∫_1^2 (2 − x) sin((2n − 1)πx/4) dx)
    = [−(4x/((2n − 1)π)) cos((2n − 1)πx/4) + (16/((2n − 1)²π²)) sin((2n − 1)πx/4)]_0^1
    + [(4(x − 2)/((2n − 1)π)) cos((2n − 1)πx/4) − (16/((2n − 1)²π²)) sin((2n − 1)πx/4)]_1^2
    = (16/((2n − 1)²π²)) (2 sin((2n − 1)π/4) + cos nπ).

Hence, the solution of the PDE is

u(x, t) = (16/π²) Σ_{n=1}^∞ ((2 sin((2n − 1)π/4) + cos nπ)/(2n − 1)²) e^{−(2n−1)²π²t/16} sin((2n − 1)πx/4).  (5.25)

n=1

Figure 5.3 shows the solution both in short time t = 0, 0.1, 0.2 and later times t = 1, 2, 3.


Fig. 5.3 Short and later time behavior of the solution (5.25)

Example 5.3 Solve

u_t = u_{xx}, 0 < x < 2, t > 0,  (5.26)

subject to

u(0, t) = 0, u_x(2, t) = −u(2, t),  (5.27a)
u(x, 0) = 2x − x².  (5.27b)

In this problem, we have a fixed left endpoint and a radiating right endpoint. Assuming separable solutions

u(x, t) = X(x)T(t),  (5.28)

gives rise to

T′ = λT, X″ = λX,  (5.29)

where λ is a constant. The boundary conditions in (5.27a) accordingly become

X(0) = 0, X′(2) = −X(2).  (5.30)

Integrating the X equation in (5.29) with λ = −k² for some constant k gives

X(x) = c_1 sin kx + c_2 cos kx.  (5.31)

Imposing the first boundary condition of (5.30) shows that

c_1 sin 0 + c_2 cos 0 = 0 ⇒ c_2 = 0,

and the second boundary condition of (5.30) gives

tan 2k = −k.  (5.32)


Fig. 5.4 The graph of y = tan 2k and y = −k

It is important to recognize that the solutions of (5.32) are not equally spaced, as they were in earlier problems. In fact, there are an infinite number of solutions of this equation. Figure 5.4 graphically shows the curves y = −k and y = tan 2k; the first three intersection points are the first three solutions of (5.32). Thus, it is necessary that we solve for k numerically. The first 10 solutions are given in Table 5.1. Therefore, we have

X(x) = c_1 sin k_n x.  (5.33)

Further, integrating (5.29) for T gives

T(t) = c_3 e^{−k_n²t},  (5.34)

and together with X, we have the solution to the PDE as

u(x, t) = Σ_{n=1}^∞ c_n e^{−k_n²t} sin k_n x.  (5.35)

Table 5.1 The first ten solutions of tan 2k = −k

n    k_n          n     k_n
1    1.144465     6     8.696622
2    2.54349      7     10.258761
3    4.048082     8     11.823162
4    5.586353     9     13.389044
5    7.138177     10    14.955947


Imposing the initial condition (5.27b) gives

u(x, 0) = 2x − x² = Σ_{n=1}^∞ c_n sin k_n x.  (5.36)

It is important to know that the cn ’s are not given by the formula cn =

2 2

2

(2x − x 2 ) sin kn x d x

0

because the kn ’s are not equally spaced. So it is necessary to examine (5.36) on its own. Multiplying by sin km x and integrating over [0, 2] gives

∫₀² (2x − x²) sin kₘx dx = ∑_{n=1}^∞ cₙ ∫₀² sin kₘx sin kₙx dx.

For n ≠ m, we have

∫₀² sin kₘx sin kₙx dx = (kₘ sin 2kₙ cos 2kₘ − kₙ sin 2kₘ cos 2kₙ)/(kₙ² − kₘ²)   (5.37)
= (sin 2kₙ sin 2kₘ)/(kₙ² − kₘ²) · (kₘ cos 2kₘ/sin 2kₘ − kₙ cos 2kₙ/sin 2kₙ),

and imposing (5.32) for each of kₘ and kₙ shows (5.37) to be identically zero. Therefore, we obtain the following when n = m:

∫₀² (2x − x²) sin kₙx dx = cₙ ∫₀² sin² kₙx dx,

or

cₙ = ∫₀² (2x − x²) sin kₙx dx / ∫₀² sin² kₙx dx.   (5.38)

Table 5.2 gives the first ten cₙ's that correspond to each kₙ. Having obtained kₙ and cₙ, the solution to the problem is given by (5.35). Figure 5.5 shows plots at times t = 0, 1, and 2 when 20 terms are used.
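The coefficients (5.38) can likewise be checked numerically. The sketch below (Python; the hand-rolled Simpson rule and the function names are our own implementation choices) reproduces the first entry of Table 5.2 from the first eigenvalue of Table 5.1.

```python
import math

K1 = 1.144465  # first root of tan 2k = -k, from Table 5.1

def simpson(g, a, b, n=2000):
    """Composite Simpson's rule (n must be even)."""
    h = (b - a) / n
    s = g(a) + g(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(a + i * h)
    return s * h / 3

def c_n(k):
    """Generalized Fourier coefficient (5.38) for an eigenvalue k."""
    num = simpson(lambda x: (2 * x - x * x) * math.sin(k * x), 0.0, 2.0)
    den = simpson(lambda x: math.sin(k * x) ** 2, 0.0, 2.0)
    return num / den

print(round(c_n(K1), 6))  # Table 5.2 lists c1 = 0.873214
```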

5.1.1 Nonhomogeneous Boundary Conditions

In the preceding examples, the boundary conditions were either fixed to zero, insulated, or radiating. Often, we encounter boundary conditions which are nonstandard or nonhomogeneous. For example, the boundary may be fixed to a particular constant, or the flux may be maintained at a constant value. The following examples illustrate this.

Table 5.2 The coefficients cₙ from (5.38)

n    cₙ           n    cₙ
1    0.873214     6    0.028777
2    0.341898     7    −0.016803
3    −0.078839    8    0.015310
4    0.071427     9    −0.010202
5    −0.032299    10   0.009458

Fig. 5.5 The solution (5.35) at times t = 0, 1, and 2

Example 5.4 Solve

u_t = u_xx, 0 < x < 3, t > 0,   (5.39)

subject to

u(0, t) = 0, u(3, t) = 3,   (5.40a)

u(x, 0) = 4x − x².   (5.40b)

If we seek separable solutions u(x, t) = X(x)T(t), then from (5.40a) we have

X(0)T(t) = 0,   X(3)T(t) = 3,   (5.41)


and we have a problem! The second boundary condition doesn't separate. To overcome this, we try to transform this problem into one that we know how to solve. As the right boundary condition is not zero, we try a transformation to fix the boundary to zero. If we try u = v + 3, this cures the right boundary condition; however, in the process it transforms the left boundary condition to −3. The next simplest transformation is to introduce u = v + ax + b and ask: Can we choose the constants a and b so that both boundary conditions are zero? Upon substitution into both boundary conditions (5.40a), we obtain

0 = v(0, t) + a · 0 + b,   3 = v(3, t) + 3a + b.   (5.42)

Now we require that v(0, t) = 0 and v(3, t) = 0, which implies that we must choose a = 1 and b = 0. Therefore, we have

u = v + x.   (5.43)

We notice that under the transformation (5.43), the original PDE (5.39) doesn't change form,

u_t = u_xx ⇒ v_t = v_xx;

however, the initial condition does change. At t = 0,

u(x, 0) = v(x, 0) + x ⇒ 4x − x² = v(x, 0) + x ⇒ v(x, 0) = 3x − x².   (5.44)

Thus, we have the new problem to solve

v_t = v_xx, 0 < x < 3, t > 0,   (5.45)

subject to

v(x, 0) = 3x − x², v(0, t) = 0, v(3, t) = 0.   (5.46)

As usual, we seek separable solutions v(x, t) = X(x)T(t), which lead to the systems X″ = −k²X and T′ = −k²T with the boundary conditions X(0) = 0 and X(3) = 0. Solving for X gives

X(x) = c₁ sin kx + c₂ cos kx,   (5.47)

and imposing both boundary conditions gives

X(x) = c₁ sin(nπx/3),

and

T(t) = c₃ e^{−n²π²t/9},   (5.48)

where n is an integer. Therefore, we have the solution of (5.45) as


v(x, t) = ∑_{n=1}^∞ bₙ e^{−n²π²t/9} sin(nπx/3).

Recognizing that we have a Fourier sine series, we obtain the coefficients bₙ as

bₙ = (2/3) ∫₀³ (3x − x²) sin(nπx/3) dx
   = [ −(6(2x − 3)/(n²π²)) sin(nπx/3) + (2(n²π²x² − 3n²π²x − 18)/(n³π³)) cos(nπx/3) ]₀³
   = 36(1 − (−1)ⁿ)/(n³π³).

This gives

v(x, t) = (36/π³) ∑_{n=1}^∞ ((1 − (−1)ⁿ)/n³) e^{−n²π²t/9} sin(nπx/3),

and since u = v + x, we obtain the solution for u as (see Fig. 5.6)

u(x, t) = x + (36/π³) ∑_{n=1}^∞ ((1 − (−1)ⁿ)/n³) e^{−n²π²t/9} sin(nπx/3).   (5.49)
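A truncated form of (5.49) is easy to evaluate numerically; at t = 0 it should reproduce the initial condition 4x − x², and as t → ∞ it should settle onto the steady state u = x fixed by the boundary values. The helper name below is ours, not from the text.

```python
import math

def u_series(x, t, terms=200):
    """Partial sum of (5.49):
    u = x + (36/pi^3) * sum ((1-(-1)^n)/n^3) e^{-n^2 pi^2 t/9} sin(n pi x/3)."""
    s = sum((1 - (-1) ** n) / n ** 3
            * math.exp(-(n * math.pi / 3) ** 2 * t)
            * math.sin(n * math.pi * x / 3)
            for n in range(1, terms + 1))
    return x + 36 / math.pi ** 3 * s

# at t = 0 the series reproduces 4x - x^2; for large t only u = x remains
print(u_series(1.2, 0.0), u_series(1.2, 50.0))
```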

Fig. 5.6 The solution (5.49) at times t = 0, 1, and 2

Example 5.5 Solve

u_t = u_xx, 0 < x < 1, t > 0,   (5.50)


subject to

u_x(0, t) = −1, u_x(1, t) = 0,   (5.51a)

u(x, 0) = 0.   (5.51b)

Unfortunately, the trick u = v + ax + b won't work, since u_x = v_x + a and choosing a to fix the right boundary condition to zero only makes the left boundary condition nonzero. To overcome this, we might try u = v + ax² + bx, but then the original equation (5.50) changes:

u_t = u_xx ⇒ v_t = v_xx + 2a.   (5.52)

As a second attempt, we try

u = v + a(x² + 2t) + bx,   (5.53)

so now

u_t = u_xx ⇒ v_t = v_xx.   (5.54)

Since u_x = v_x + 2ax + b, choosing a = 1/2 and b = −1 gives the new boundary conditions v_x(0, t) = 0 and v_x(1, t) = 0, and the transformation becomes

u = v + (x² + 2t)/2 − x.   (5.55)

Finally, we consider the initial condition (5.51b). From (5.55), we have

v(x, 0) = u(x, 0) − x²/2 + x = x − x²/2,

and our problem is transformed to the new problem

v_t = v_xx, 0 < x < 1, t > 0,   (5.56)

subject to

v_x(0, t) = v_x(1, t) = 0,   (5.57a)
v(x, 0) = x − x²/2.   (5.57b)

A separation of variables v = XT leads to X″ = −k²X and T′ = −k²T, from which we obtain

X = c₁ sin kx + c₂ cos kx,   X′ = c₁k cos kx − c₂k sin kx,   (5.58)

and imposing the boundary conditions (5.57a) gives

c₁k cos 0 − c₂k sin 0 = 0,   c₁k cos k − c₂k sin k = 0,

from which we obtain


c₁ = 0,   k = nπ,   (5.59)

where n is an integer. This then leads to

X(x) = c₂ cos nπx,   (5.60)

and further

T(t) = c₃ e^{−n²π²t}.   (5.61)

Finally, we arrive at

v(x, t) = a₀/2 + ∑_{n=1}^∞ aₙ e^{−n²π²t} cos nπx,

noting that we have chosen aₙ = c₂c₃. Upon substitution of t = 0 and using the initial condition (5.57b), we have

x − x²/2 = a₀/2 + ∑_{n=1}^∞ aₙ cos nπx,

a Fourier cosine series. The coefficients are obtained by

a₀ = (2/1) ∫₀¹ (x − x²/2) dx = [x² − x³/3]₀¹ = 2/3,

aₙ = (2/1) ∫₀¹ (x − x²/2) cos nπx dx
   = [ ((2x − x²)/(nπ)) sin nπx + (2(1 − x)/(n²π²)) cos nπx + (2/(n³π³)) sin nπx ]₀¹
   = −2/(n²π²).

Thus, we obtain the solution for v as

v(x, t) = 1/3 − (2/π²) ∑_{n=1}^∞ (1/n²) e^{−n²π²t} cos nπx,

and this, together with the transformation (5.55), gives

u(x, t) = (x² + 2t)/2 − x + 1/3 − (2/π²) ∑_{n=1}^∞ (1/n²) e^{−n²π²t} cos nπx.   (5.62)

Figure 5.7 shows plots at times t = 0.1, 0.5, 1.0 and 1.5. It is interesting to note that at the left boundary u_x = −1, and since the flux is φ = −ku_x, we have φ = k > 0; that means heat is being added at the left boundary. Hence the profile increases at the left, while the right boundary remains insulated (i.e., no flux).
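This flux behavior can be confirmed directly from a partial sum of (5.62): differencing across the boundaries recovers u_x(0, t) = −1 and u_x(1, t) = 0 from (5.51a). The finite-difference check below is our own device, not from the text.

```python
import math

def u(x, t, terms=200):
    """Partial sum of (5.62)."""
    s = sum(math.exp(-(n * math.pi) ** 2 * t) * math.cos(n * math.pi * x) / n ** 2
            for n in range(1, terms + 1))
    return (x * x + 2 * t) / 2 - x + 1 / 3 - 2 / math.pi ** 2 * s

# central differences across each boundary recover the prescribed fluxes
h, t0 = 1e-5, 0.5
ux_left = (u(h, t0) - u(-h, t0)) / (2 * h)
ux_right = (u(1 + h, t0) - u(1 - h, t0)) / (2 * h)
print(abs(ux_left + 1.0) < 1e-6, abs(ux_right) < 1e-6)
```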

Fig. 5.7 The solution (5.62) at times t = 0.1, 0.5, 1.0 and 1.5

5.1.2 Nonhomogeneous Equations

We now focus our attention on solving the heat equation with a source term,

u_t = u_xx + Q(x), 0 < x < L, t > 0,   (5.63)

subject to

u(0, t) = 0, u(L, t) = 0,   (5.64a)

u(x, 0) = f(x).   (5.64b)

To investigate this problem, we will consider a particular example where L = 2, f(x) = 2x − x², and Q(x) = 1 − |x − 1|. If we were to consider this problem without a source term and use the separation of variables technique, we would obtain the solution

u(x, t) = (16/π³) ∑_{n=1}^∞ ((1 − (−1)ⁿ)/n³) e^{−n²π²t/4} sin(nπx/2).   (5.65)

Suppose that we looked for solutions of (5.63) with Q = 0 in the form

u(x, t) = ∑_{n=1}^∞ Tₙ(t) sin(nπx/2),   (5.66)

noting that both boundary conditions in (5.64a) are satisfied. Substituting into the heat equation and isolating the coefficients of sin(nπx/2) would lead to

Tₙ′(t) = −(n²π²/4) Tₙ(t).


Solving this would give

Tₙ(t) = cₙ e^{−n²π²t/4}   (5.67)

for some constant cₙ, and (5.66), (5.67), and the initial condition (5.64b) would lead to (5.65). With this idea, we try to solve the heat equation with a source term (5.63) by looking for solutions of the form (5.66). However, in order for this technique to work, it is also necessary to expand the source term in terms of a Fourier sine series,

Q(x) = ∑_{n=1}^∞ qₙ sin(nπx/2).   (5.68)

For

Q(x) = { x if 0 < x < 1,   2 − x if 1 < x < 2,   (5.69)

we have

qₙ = (2/2) ∫₀² Q(x) sin(nπx/2) dx
   = ∫₀¹ x sin(nπx/2) dx + ∫₁² (2 − x) sin(nπx/2) dx
   = [ (4/(n²π²)) sin(nπx/2) − (2x/(nπ)) cos(nπx/2) ]₀¹ + [ ((2x − 4)/(nπ)) cos(nπx/2) − (4/(n²π²)) sin(nπx/2) ]₁²
   = (8/(n²π²)) sin(nπ/2).   (5.70)

Substituting both (5.66) and (5.68) into (5.63) gives

∑_{n=1}^∞ Tₙ′(t) sin(nπx/2) = −∑_{n=1}^∞ (nπ/2)² Tₙ(t) sin(nπx/2) + ∑_{n=1}^∞ qₙ sin(nπx/2),

and re-grouping and isolating the coefficients of sin(nπx/2) gives

Tₙ′(t) + (n²π²/4) Tₙ(t) = qₙ,   (5.71)

a linear ODE in Tₙ(t)! On solving (5.71), we obtain

Tₙ(t) = 4qₙ/(n²π²) + bₙ e^{−(nπ/2)²t},

where bₙ is a constant of integration, giving the final solution


u(x, t) = ∑_{n=1}^∞ ( 4qₙ/(n²π²) + bₙ e^{−(nπ/2)²t} ) sin(nπx/2).   (5.72)

Imposing the initial condition (5.64b) with f(x) = 2x − x² gives

2x − x² = ∑_{n=1}^∞ ( 4qₙ/(n²π²) + bₙ ) sin(nπx/2).

If we set

cₙ = 4qₙ/(n²π²) + bₙ,

then we have

2x − x² = ∑_{n=1}^∞ cₙ sin(nπx/2),

a regular Fourier sine series. Therefore,

cₙ = (2/2) ∫₀² (2x − x²) sin(nπx/2) dx = (16/(n³π³)) (1 − cos nπ),   (5.73)

which, in turn, gives

bₙ = cₙ − 4qₙ/(n²π²),

and finally, the solution is

u(x, t) = ∑_{n=1}^∞ ( 4qₙ/(n²π²) + (cₙ − 4qₙ/(n²π²)) e^{−(nπ/2)²t} ) sin(nπx/2),   (5.74)

where qₙ and cₙ are given in (5.70) and (5.73), respectively. Plots are given in Fig. 5.8 at times t = 0, 1, 2 and 3. It is interesting to note that if we let t → ∞, the solution approaches a single curve. This is what is called a steady state (no changes in time). It is natural to ask: Can we find this steady state solution? The answer is yes. For the steady state, u_t → 0 as t → ∞, and the original PDE becomes

u_xx + Q(x) = 0.   (5.75)

Integrating twice with Q(x) given in (5.69) gives

u = { −x³/6 + c₁x + c₂ if 0 < x < 1,   x³/6 − x² + k₁x + k₂ if 1 < x < 2,   (5.76)

Fig. 5.8 The solution (5.74) at times t = 0, 1, 2 and 3

where c₁, c₂, k₁ and k₂ are constants of integration. Imposing that the solution and its first derivative are continuous at x = 1 and that the solution is zero at the endpoints gives

c₁ = 1/2, c₂ = 0, k₁ = 3/2, k₂ = −1/3.   (5.77)

This, in turn, gives the steady state solution

u = { −x³/6 + x/2 if 0 < x < 1,   x³/6 − x² + 3x/2 − 1/3 if 1 < x < 2.   (5.78)
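The steady state (5.78) can be cross-checked against the t → ∞ limit of the series (5.74), in which only the 4qₙ/(n²π²) terms survive. The sketch below (Python; the function names are ours) compares the two at interior points of each piece.

```python
import math

def q(n):
    """Fourier sine coefficient (5.70) of Q(x) = 1 - |x - 1| on [0, 2]."""
    return 8 / (n * math.pi) ** 2 * math.sin(n * math.pi / 2)

def u_steady_series(x, terms=400):
    """t -> infinity limit of (5.74): only the 4 q_n/(n^2 pi^2) terms survive."""
    return sum(4 * q(n) / (n * math.pi) ** 2 * math.sin(n * math.pi * x / 2)
               for n in range(1, terms + 1))

def u_steady_exact(x):
    """Piecewise steady state (5.78)."""
    if x < 1:
        return -x ** 3 / 6 + x / 2
    return x ** 3 / 6 - x ** 2 + 3 * x / 2 - 1 / 3

print(abs(u_steady_series(0.7) - u_steady_exact(0.7)) < 1e-6,
      abs(u_steady_series(1.6) - u_steady_exact(1.6)) < 1e-6)
```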

5.1.3 Equations with a Solution Dependent Source Term

We now consider the heat equation with a solution dependent source term. For simplicity, we will consider a source term that is linear. Take, for example,

u_t = u_xx + αu, 0 < x < 1, t > 0,   (5.79)

subject to

u(0, t) = 0, u(1, t) = 0, u(x, 0) = x − x²,   (5.80)

where α is some constant. We could try a separation of variables to obtain solutions for this problem, but we will try a different technique: we will try to transform the PDE to one that has no source term. In attempting to do so, we seek a transformation of the form

u(x, t) = A(x, t)v(x, t),   (5.81)


and ask: Is it possible to find A such that the source term in (5.79) can be eliminated? Substituting (5.81) in (5.79) gives

Av_t + A_t v = Av_xx + 2A_x v_x + A_xx v + αAv.

Dividing by A and expanding and regrouping gives

v_t = v_xx + 2(A_x/A) v_x + ((A_xx − A_t)/A + α) v.

In order to target the standard heat equation v_t = v_xx, we choose

A_x = 0,   (A_xx − A_t)/A + α = 0.   (5.82)

From the first equation in (5.82), we see that A = A(t), and from the second we obtain A′ = αA, which has the solution A(t) = A₀e^{αt} for some constant A₀, leading to u = A₀e^{αt}v. The boundary conditions become

u(0, t) = 0 ⇒ A₀e^{αt}v(0, t) = 0 ⇒ v(0, t) = 0,
u(1, t) = 0 ⇒ A₀e^{αt}v(1, t) = 0 ⇒ v(1, t) = 0,

so the boundary conditions are unchanged. Next, we consider the initial condition:

u(x, 0) = x − x² ⇒ A₀e^{α·0}v(x, 0) = x − x² ⇒ A₀v(x, 0) = x − x².

To leave the initial condition unchanged, we choose A₀ = 1. Thus, under the transformation

u = e^{αt}v,   (5.83)

the problem (5.79) and (5.80) becomes

v_t = v_xx, 0 < x < 1, t > 0,
v(0, t) = 0, v(1, t) = 0, v(x, 0) = x − x².

This particular problem was considered at the beginning of this chapter in example (5.10), where the solution was given in (5.9) by

v(x, t) = (4/π³) ∑_{n=1}^∞ ((1 − (−1)ⁿ)/n³) e^{−n²π²t} sin nπx,

and so, from (5.83), we obtain the solution of (5.79) subject to (5.80) as

u(x, t) = (4/π³) e^{αt} ∑_{n=1}^∞ ((1 − (−1)ⁿ)/n³) e^{−n²π²t} sin nπx.   (5.84)
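Since (5.84) is obtained by a transformation rather than a direct separation, it is worth verifying that it actually satisfies u_t = u_xx + αu. The sketch below (Python, with α = 5 as in Fig. 5.9; the central-difference test is our own device, not from the text) does this at an interior point.

```python
import math

ALPHA = 5.0

def u(x, t, terms=100):
    """Series (5.84): e^{alpha t} times the plain heat-equation series."""
    s = sum((1 - (-1) ** n) / n ** 3
            * math.exp(-(n * math.pi) ** 2 * t)
            * math.sin(n * math.pi * x)
            for n in range(1, terms + 1))
    return 4 / math.pi ** 3 * math.exp(ALPHA * t) * s

# verify u_t = u_xx + alpha*u at an interior point with central differences
x0, t0, h = 0.3, 0.2, 1e-4
ut = (u(x0, t0 + h) - u(x0, t0 - h)) / (2 * h)
uxx = (u(x0 + h, t0) - 2 * u(x0, t0) + u(x0 - h, t0)) / h ** 2
print(abs(ut - (uxx + ALPHA * u(x0, t0))) < 1e-5)
```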

Fig. 5.9 The solution of the heat equation with a source (5.79) with α = 5 and 12

Figure 5.9 shows plots at times t = 0, 0.1 and 0.2 when α = 5 and 12. It is interesting to note that in the case where α = 5, the decay is slower in comparison with no source term (for α = 0, see Fig. 5.1), whereas the solution no longer decays at all when α = 12. We may ask: For what value of α do we achieve a steady state solution? To answer this, consider the first few terms of the solution (5.84),

u = (8/π³) e^{αt} ( e^{−π²t} sin πx + (1/27) e^{−9π²t} sin 3πx + … ).   (5.85)

Clearly the exponential terms in the brackets in (5.85) will decay to zero, with the first term decaying the slowest. Therefore, it is the balance between e^{αt} and e^{−π²t} which determines whether the solution will decay to zero or not. It is the equality α = π² that leads to the steady state solution

u∞ = (8/π³) sin πx.   (5.86)

Example 5.6 Solve

u_t = u_xx + αu, 0 < x < 2, t > 0,   (5.87)

subject to

u_x(0, t) = 0, u_x(2, t) = 0,   (5.88a)

u(x, 0) = 2x − x².   (5.88b)

It was already established that the transformation (5.83) will transform the heat equation with a solution dependent source term to the plain heat equation and will leave the initial condition and fixed boundary conditions unchanged. It is now necessary to determine what happens to the no-flux boundary conditions (5.88a). Using (5.83), we have


u_x(0, t) = 0 ⇒ A₀e^{αt}v_x(0, t) = 0 ⇒ v_x(0, t) = 0,   (5.89)
u_x(2, t) = 0 ⇒ A₀e^{αt}v_x(2, t) = 0 ⇒ v_x(2, t) = 0,   (5.90)

and so the boundary conditions remain the same! Thus, problem (5.87) reduces to

v_t = v_xx, 0 < x < 2, t > 0,   v(x, 0) = 2x − x²,   v_x(0, t) = 0,   v_x(2, t) = 0.   (5.91)

Using a separation of variables and imposing the boundary conditions gives (see Example 5.2)

v = 2/3 − (8/π²) ∑_{n=1}^∞ ((1 + (−1)ⁿ)/n²) e^{−n²π²t/4} cos(nπx/2),   (5.92)

and together with the transformation (5.83), the solution of (5.87) and (5.88) is

u = e^{αt} ( 2/3 − (8/π²) ∑_{n=1}^∞ ((1 + (−1)ⁿ)/n²) e^{−n²π²t/4} cos(nπx/2) ).   (5.93)

Figure 5.10 shows plots at t = 0, 0.2, 0.4 and 0.6 when α = −2 and 2. It is interesting to note that the sign of α will determine whether the solution will grow or decay exponentially.

Fig. 5.10 The solution of the heat equation with a source (5.87) with no flux boundary conditions with α = −2 and 2


5.1.4 Equations with a Solution Dependent Convective Term

We now consider the heat equation with a solution dependent linear convection term,

u_t = u_xx + βu_x, 0 < x < 1, t > 0,   (5.94)

subject to

u(0, t) = 0, u(1, t) = 0,   (5.95a)

u(x, 0) = x − x²,   (5.95b)

where β is some constant. We consider the same initial and boundary conditions as in the previous section, as this provides a means of comparing the two respective problems. Again, we could try a separation of variables to obtain solutions for this problem, but again we try to transform the PDE to one that has no convection term, using a transformation of the form

u(x, t) = A(x, t)v.   (5.96)

Substituting (5.96) in (5.94) gives

Av_t + A_t v = Av_xx + 2A_x v_x + A_xx v + β(Av_x + A_x v).

Dividing by A and expanding and regrouping gives

v_t = v_xx + ((2A_x + βA)/A) v_x + ((A_xx − A_t + βA_x)/A) v.   (5.97)

In order to obtain the standard heat equation, we choose

2A_x + βA = 0,   A_xx − A_t + βA_x = 0.   (5.98)

From the first of (5.98), we obtain A(x, t) = C(t)e^{−βx/2}, where C(t) is an arbitrary function of integration; from the second of (5.98), we obtain C′ + (β²/4)C = 0, which has the solution C(t) = C₀e^{−β²t/4} for some constant C₀. This then gives

A(x, t) = C₀ e^{−βx/2 − β²t/4},   (5.99)

which in turn gives

u(x, t) = C₀ e^{−βx/2 − β²t/4} v.   (5.100)

The boundary conditions from (5.95a) become

u(0, t) = 0 ⇒ C₀e^{−β²t/4} v(0, t) = 0 ⇒ v(0, t) = 0,   (5.101a)
u(1, t) = 0 ⇒ C₀e^{−β/2 − β²t/4} v(1, t) = 0 ⇒ v(1, t) = 0,   (5.101b)


so the boundary conditions are unchanged. Next, we consider the initial condition:

u(x, 0) = x − x² ⇒ v(x, 0) = (x − x²)e^{βx/2},   (5.102)

where we have chosen C₀ = 1. Here, the initial condition actually changes. Thus, under the transformation

u = e^{−βx/2 − β²t/4} v,   (5.103)

the problem (5.94) and (5.95) becomes

v_t = v_xx, 0 < x < 1, t > 0,   (5.104)

subject to

v(0, t) = 0, v(1, t) = 0,   (5.105a)
v(x, 0) = (x − x²)e^{βx/2}.   (5.105b)

Following the solution procedure outlined in the previous section, the solution of (5.104) and (5.105) is given by

v(x, t) = ∑_{n=1}^∞ bₙ e^{−n²π²t} sin nπx,   (5.106)

where bₙ is now given by

bₙ = (2/1) ∫₀¹ (x − x²)e^{βx/2} sin nπx dx,   (5.107)

and from (5.103),

u(x, t) = e^{−βx/2 − β²t/4} ∑_{n=1}^∞ bₙ e^{−n²π²t} sin nπx.   (5.108)

At this point, we will consider two particular examples: β = 6 and β = −12. For β = 6,

bₙ = 2 ∫₀¹ (x − x²)e^{3x} sin nπx dx = −8nπ (27 + n²π² + 2n²π²e³ cos nπ)/(9 + n²π²)³.   (5.109)

For β = −12,

bₙ = 2 ∫₀¹ (x − x²)e^{−6x} sin nπx dx = 4nπ (7n²π² + 108 + (5n²π² + 324)e^{−6} cos nπ)/(36 + n²π²)³.   (5.110)

The respective solutions for each are

u(x, t) = −8π e^{−3x − 9t} ∑_{n=1}^∞ n ((27 + n²π² + 2n²π²e³ cos nπ)/(9 + n²π²)³) e^{−n²π²t} sin nπx,   (5.111)

u(x, t) = 4π e^{6x − 36t} ∑_{n=1}^∞ n ((7n²π² + 108 + (5n²π² + 324)e^{−6} cos nπ)/(36 + n²π²)³) e^{−n²π²t} sin nπx.   (5.112)

Figure 5.11 shows graphs at a variety of times for β = 6 and β = −12.

Fig. 5.11 The solution of the heat equation with convection with fixed boundary conditions with β = 6 and −12

Figure 5.11 shows graphs at a variety of times for β = 6 and β = −12.

5.2

Laplace’s Equation

The two dimensional Laplace’s equation is u x x + u yy = 0.

(5.113)

To this we attach the boundary conditions u(x, 0) = 0, u(x, 1) = x − x 2

(5.114a)

u(0, y) = 0, u(1, 0) = 0.

(5.114b)

We will show that separation of variables also works for this equation. If we assume solutions of the form

5.2

Laplace’s Equation

121

u(x, y) = X (x)Y (y),

(5.115)

then substituting this into (5.113) gives X Y + X Y = 0.

(5.116)

Dividing by X Y and expanding gives X Y + = 0, X Y

(5.117)

and since each term is only a function of x or y, then each must be constant giving X = λ, X

Y = −λ. Y

(5.118)

From the first of (5.114a) and both of (5.114b) we deduce the boundary conditions Y (0) = 0,

X (0) = 0,

X (1) = 0.

(5.119)

The remaining boundary condition in (5.114a) will be used later. As we saw in a previous section, in order to solve the X equation in (5.118) subject to the boundary conditions in (5.119), it is necessary to set λ = −k 2 . The X equation (5.118) has the general solution X = c1 sin kx + c2 cos kx.

(5.120)

To satisfy the boundary conditions in (5.119) it is necessary to have c2 = 0 and k = nπ, k ∈ Z+ so (5.121) X (x) = c1 sin nπ x. From (5.118), we obtain the solution to the Y equation Y (y) = c3 sinh nπ y + c4 cosh nπ y.

(5.122)

Since Y (0) = 0, this implies c4 = 0 so X (x)Y (y) = cn sin nπ x sinh nπ y

(5.123)

where we have chosen cn = c1 c3 . Therefore, we obtain the solution to (5.113) subject to three of the four boundary conditions in (5.114) u=

∞

cn sin nπ x sinh nπ y.

n=1

The remaining boundary condition in (5.114a) now needs to be satisfied, thus

(5.124)

122

5 Separation of Variables

u(x, 1) = x − x = 2

∞

cn sin nπ x sinh nπ.

(5.125)

n=1

This looks like a Fourier Sine series and if we let bn = cn sinh nπ , this becomes ∞

bn sin nπ x = x − x 2 ,

(5.126)

n=1

which is precisely a Fourier sine series. The coefficients bn are given by

2 1 bn = x − x 2 sin nπ x d x 1 0 4 = 3 3 (1 − cos nπ ), n π

(5.127)

and since bn = cn sinh nπ , this gives cn =

4(1 − (−1)n ) . n 3 π 3 sinh nπ

(5.128)

Thus, the solution to Laplace’s equation (5.113) with the boundary conditions given in (5.114) is ∞ sinh nπ y 4 1 − (−1)n sin nπ x . (5.129) u(x, y) = 3 π n3 sinh nπ n=1

Figure 5.12 show both a top view and a 3D view of the solution.

u

y 1.0

0.2 u = 0.2

u = 0.1

0.1

0.5 u = 0.05

0.0

u = 0.025

0.5

u = 0.001 0.0

x 0.0

0.5

1.0

1.0

x

Fig. 5.12 The solution of (5.113) with the boundary conditions (5.114)

1.0

y

5.2

Laplace’s Equation

123

Example 5.7 Solve u x x + u yy = 0,

(5.130)

subject to u(x, 0) = 0, u(x, 1) = 0,

(5.131a)

u(0, y) = 0, u(1, y) = y − y . 2

(5.131b)

Assume separable solutions of the form u(x, y) = X (x)Y (y).

(5.132)

Then substituting this into (5.130) gives X Y + X Y = 0.

(5.133)

Dividing by X Y and expanding gives X Y + = 0, X Y

(5.134)

from which we obtain

X Y = λ, = −λ. X Y From (5.131) we deduce the boundary conditions X (0) = 0,

Y (0) = 0,

Y (1) = 0.

(5.135)

(5.136)

The remaining boundary condition in (5.131b) will be used later. As seen in the previous problem, in order to solve the Y equation in (5.135) subject to the boundary conditions in (5.136), it is necessary to set λ = k 2 . The Y equation (5.135) has the general solution Y = c1 sin ky + c2 cos ky

(5.137)

To satisfy the boundary conditions in (5.136), it is necessary to have c2 = 0 and k = nπ so Y (y) = c1 sin nπ y.

(5.138)

From (5.135), we obtain the solution to the X equation X (x) = c3 sinh nπ x + c4 cosh nπ x.

(5.139)

Since X (0) = 0, this implies c4 = 0. This gives X (x)Y (y) = cn sinh nπ x sin nπ y,

(5.140)

124

5 Separation of Variables 0.2 1

u = 0.001 0.1

u = 0.025 u = 0.05

u = 0.1

0.0 0.0 0.0

u = 0.2 x

0.5

0.5 y

1.0

1.0

0 0

1

Fig. 5.13 The solution of (5.130) with the boundary conditions (5.131)

where we have chosen cn = c1 c3 . Therefore, we obtain u=

∞

cn sinh nπ x sin nπ y.

(5.141)

n=1

The remaining boundary condition in (5.131b) now needs to be satisfied, thus u(1, y) = y − y 2 =

∞

cn sinh nπ sin nπ y.

(5.142)

n=1

If we let bn = cn sinh nπ , this becomes ∞

bn sin nπ y = y − y 2 .

(5.143)

n=1

Comparing with the previous problem, we find that interchanging x and y interchanges the two problems, and so we conclude that bn =

16 n3π 3

(1 − cos nπ ).

(5.144)

Therefore, the solution to Laplace’s equation (5.130) subject to (5.131) is ∞ 4 1 − (−1)n sinh nπ x sin nπ y. u= 3 π n3 sinh nπ n=1

Figure 5.13 shows both a top view and a 3D view of the solution.

(5.145)

5.2

Laplace’s Equation

125

In comparing the solutions (5.129) and (5.145) we find that if we interchange x and y, they are the same. This should not be surprising because if we consider Laplace’s equation with the boundary conditions given in (5.114) and (5.131), that if we interchange x and y, the problems are transformed to each other.

Example 5.8 Solve u x x + u yy = 0

(5.146)

u(x, 0) = x − x 2 , u(x, 1) = 0

(5.147a)

u(0, y) = 0, u(1, y) = 0.

(5.147b)

subject to

Again, assuming separable solutions of the form

leads to

u(x, y) = X (x)Y (y)

(5.148)

X Y + X Y = 0,

(5.149)

when substituted into (5.146). Dividing by X Y and expanding gives X Y + = 0. X Y

(5.150)

Since each term is only a function of x or y then each must be constant giving X = λ, X

Y = −λ. Y

(5.151)

From the second of (5.147a) and both of (5.147b), we deduce the boundary conditions X (0) = 0,

X (1) = 0,

Y (1) = 0,

(5.152)

noting that the last boundary condition is different than the boundary condition considered at the beginning of this section (i.e. Example 5.7 where Y (0) = 0). The remaining boundary condition in (5.147a) will be used later. In order to solve the X equation in (5.151) subject to the boundary conditions (5.152), it is necessary to set λ = −k 2 . The X equation (5.151) has the general solution (5.153) Y = c1 sin kx + c2 cos kx. To satisfy the boundary conditions in (5.152), it is necessary to have c2 = 0 and k = nπ (n is still a positive integer), so (5.154) X (x) = c1 sin nπ x.

126

5 Separation of Variables

From (5.151), we obtain the solution to the Y equation Y (y) = c3 sinh nπ y + c4 cosh nπ y.

(5.155)

Since Y (1) = 0, this implies c3 sinh nπ + c4 cosh nπ = 0

⇒

c4 = −c3

sinh nπ . cosh nπ

(5.156)

From (5.155) we have Y (y) = c3 sinh nπ y − c3 cosh nπ y = −c3

sinh nπ(1 − y) , cosh nπ

sinh nπ cosh nπ (5.157)

so X (x)Y (y) = cn sin nπ x sinh nπ(1 − y)

(5.158)

where we have chosen cn = −c1 c3 / cosh nπ . Therefore, we obtain u=

∞

cn sin nπ x sinh nπ(1 − y).

(5.159)

n=1

The remaining boundary condition is (5.147a) now needs to be satisfied, thus u(x, 0) = x − x 2 =

∞

cn sin nπ x sinh nπ.

(5.160)

n=1

At this point, we recognize that this problem is now identical to the first problem in this section where we obtained cn =

4(1 − (−1)n ) , n 3 π 3 sinh nπ

(5.161)

so the solution to Laplace’s equation with the boundary conditions given in (5.147) is u(x, y) =

∞ sinh nπ(1 − y) 4 1 − (−1)n sin nπ x . 3 3 π n sinh nπ

(5.162)

n=1

Figure 5.14 shows the solution. Example 5.9 As the final example, we consider u x x + u yy = 0

(5.163)

5.2

Laplace’s Equation

127

0.2

0.1

0.0 0.0

0.0 y 0.5

x

0.5 1.0

1.0

Fig. 5.14 The solution of Laplace’s equation with the boundary conditions (5.147)

subject to u(x, 0) = 0, u(x, 1) = 0

(5.164a)

u(0, y) = y − y , u(1, y) = 0.

(5.164b)

2

We could go through a separation of variables to obtain the solution but we can avoid many of the steps by considering the previous three problems and using symmetry arguments. Since interchanging the variables x and y transforms (5.129) to (5.145), then interchanging x and y in (5.162) will give the solution to this problem, namely u(x, y) =

∞ 4 1 − (−1)n sinh nπ(1 − x) sin nπ y. π3 n3 sinh nπ

(5.165)

n=1

Figure 5.15 shows the solution.

5.2.1

Laplace’s Equation on an Arbitrary Rectangular Domain

In this section, we solve Laplace’s equation on an arbitrary domain [0, L x ] × [0, L y ] when all the boundaries are zero except one. Thus, we will solve each of the 4 different problems

128

5 Separation of Variables

0.2

0.1

0.0 0.0

0.0

x

y 0.5

0.5

1.0

1.0

Fig. 5.15 The solution of Laplace’s equation with the boundary conditions (5.164)

depending on whether the top, bottom, right or left boundary is nonzero. We consider each separately. Top Boundary In general, using separation of variables, the solution of u x x + u yy = 0, 0 < x < L x , 0 < y < L y ,

(5.166)

subject to u(x, 0) = 0, u(x, L y ) = f (x), u(0, y) = 0, u(L x , y) = 0, is

nπ y nπ x Lx u= bn sin nπ L y Lx n=1 sinh Lx ∞

where bn =

2 Lx

sinh

Lx 0

f (x) sin

nπ x d x. Lx

(5.168)

(5.169)

Bottom Boundary In general, using separation of variables, the solution of u x x + u yy = 0, 0 < x < L x , 0 < y < L y , subject to

(5.170)

5.2

Laplace’s Equation

129

u(x, 0) = f (x), u(x, L y ) = 0 u(0, y) = 0, u(L x , y) = 0, is u=

∞

bn sin

n=1

where 2 Lx

bn =

nπ x Lx

Lx

nπ(L y − y) Lx nπ L y sinh Lx

(5.172)

nπ x d x. Lx

(5.173)

sinh

f (x) sin

0

Right Boundary In general, using separation of variables, the solution of u x x + u yy = 0, 0 < x < L x , 0 < y < L y ,

(5.174)

subject to u(x, 0) = 0, u(x, L y ) = 0 u(0, y) = 0, u(L x , y) = g(y), is u=

∞ n=1

where bn =

2 Ly

nπ y bn sin Ly

nπ x Ly nπ L x sinh Ly sinh

Ly

g(y) sin 0

nπ y dy. Ly

(5.176)

(5.177)

Left Boundary In general, using separation of variables, the solution of u x x + u yy = 0, 0 < x < L x , 0 < y < L y , subject to u(x, 0) = 0, u(x, L y ) = 0 u(0, y) = g(y), u(L x , y) = 0, is

(5.178)

130

5 Separation of Variables

u=

∞

bn sin

n=1

where 2 Ly

bn =

nπ y Ly

nπ(L x − x) Ly . nπ L x sinh Ly

(5.180)

nπ y dy. Ly

(5.181)

sinh

Ly

g(y) sin 0

Often, we are asked to solve Laplace’s equation subject to 4 different nonzero boundary condition. The following example illustrates. Example 5.10 u x x + u yy = 0, 0 < x < 1, 0 < y < 2 subject to u(x, 2) = x − x , 2

u(x, 0) =

u(0, y) = 2y − y 2 ,

4x 2 ,

u(1, y) =

0 0, subject to the following boundary conditions u(x, 0) = f (x), u t (x, 0) = 0.

7 Higher Dimensional Problems

In this chapter we extend our results from previous chapters to problems involving more spatial dimensions. In particular, we consider

1. the 2 + 1 diffusion equation,
2. the 2 + 1 diffusion equation in a disc, with and without angular symmetry,
3. 3 dimensional Laplace's equation in rectangular, cylindrical and spherical coordinates.

In our consideration of these topics, we will consider particular problems instead of trying to consider problems in full generality.

7.1 2 + 1 Dimensional Heat Equation

The diffusion equation in the time variable t and spatial variables x and y is

u_t = u_xx + u_yy.   (7.1)

We consider heat diffusion on the unit square, 0 ≤ x, y ≤ 1, with zero boundary conditions, i.e.

u(t, 0, y) = 0, u(t, 1, y) = 0, u(t, x, 0) = 0, u(t, x, 1) = 0,   (7.2)

and at t = 0 some initial temperature

u(0, x, y) = f(x, y) on [0, 1] × [0, 1],

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 D. Arrigo, An Introduction to Partial Differential Equations, Synthesis Lectures on Mathematics & Statistics, https://doi.org/10.1007/978-3-031-22087-6_7

(7.3)


which will be specified in our example. Our goal is to solve (7.1) subject to (7.2) and (7.3). As we did previously, we assume separable solutions of the form

u = T(t)X(x)Y(y).   (7.4)

Substituting (7.4) into (7.1) gives

T′XY = TX″Y + TXY″,   (7.5)

or, in separated form,

T′/T = X″/X + Y″/Y,   (7.6)

which leads to

X″/X = λ₁,   Y″/Y = λ₂,   (7.7)

and

T′/T = λ₁ + λ₂,   (7.8)

where λ₁ and λ₂ are separation constants. The boundary conditions (7.2) become

X(0) = 0,   X(1) = 0,   Y(0) = 0,   Y(1) = 0,   (7.9)

respectively. The only solutions to (7.7) with boundary conditions like those given in (7.9) occur when

λ₁ = −ω₁²,   λ₂ = −ω₂²,   ω₁, ω₂ ∈ R,   (7.10)

and are of the form

X = c₁ sin ω₁x + c₂ cos ω₁x,   Y = c₃ sin ω₂y + c₄ cos ω₂y.   (7.11)

However, the BCs given in (7.9) require ω₁ = mπ and ω₂ = nπ, m, n ∈ Z⁺, and c₂ = c₄ = 0, thus giving

X = c₁ sin mπx,   Y = c₃ sin nπy.   (7.12)

From (7.8) we have, upon integrating,

T = c₅ e^{−(n² + m²)π²t},   (7.13)

where c₅ is an additional constant of integration. Combining our results so far, we have

u = c₁c₃c₅ e^{−(n² + m²)π²t} sin mπx sin nπy,   (7.14)

and, as we did in previous chapters, the principle of superposition gives rise to

u = ∑_{m=1}^∞ ∑_{n=1}^∞ a_mn e^{−(n² + m²)π²t} sin mπx sin nπy,   (7.15)

where we have denoted c₁c₃c₅ = a_mn. A quick check shows that u = 0 on the boundary of the square. We now impose the initial condition (7.3), giving

∑_{m=1}^∞ ∑_{n=1}^∞ a_mn sin mπx sin nπy = f(x, y).   (7.16)

This is a double Fourier series whose coefficients are given by

a_mn = 4 ∫₀¹ ∫₀¹ f(x, y) sin mπx sin nπy dx dy.   (7.17)

Example 7.1 For example, if

f(x, y) = xy(1 − x)(1 − y),   (7.18)

then (7.17) gives

a_mn = 4 ∫₀¹ ∫₀¹ xy(1 − x)(1 − y) sin mπx sin nπy dx dy = 16(1 − cos mπ)(1 − cos nπ)/(m³n³π⁶),   (7.19)

giving the final solution as

u = (16/π⁶) ∑_{m=1}^∞ ∑_{n=1}^∞ ((1 − cos mπ)(1 − cos nπ)/(m³n³)) e^{−(n² + m²)π²t} sin mπx sin nπy.   (7.20)

The first few terms are given by

u = (64/π⁶) e^{−2π²t} sin πx sin πy + (64/(27π⁶)) e^{−10π²t} (sin πx sin 3πy + sin 3πx sin πy)
  + (64/(729π⁶)) e^{−18π²t} sin 3πx sin 3πy + (64/(125π⁶)) e^{−26π²t} (sin πx sin 5πy + sin 5πx sin πy) + ⋯ .   (7.21)

The solutions are depicted in Fig. 7.1 at t = 0 and then at some later time t > 0.
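Since the data (7.18) is separable, the double integral (7.17) factors into two one-dimensional integrals, which gives a cheap numerical check of (7.19). The sketch below (Python; the Simpson rule is our own implementation choice) compares the two.

```python
import math

def simpson(g, a, b, n=1000):
    """Composite Simpson's rule."""
    h = (b - a) / n
    s = g(a) + g(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(a + i * h)
    return s * h / 3

def a_mn(m, n):
    """(7.17) for the separable data (7.18): the double integral factors."""
    ix = simpson(lambda x: x * (1 - x) * math.sin(m * math.pi * x), 0.0, 1.0)
    iy = simpson(lambda y: y * (1 - y) * math.sin(n * math.pi * y), 0.0, 1.0)
    return 4 * ix * iy

def a_mn_closed(m, n):
    """Closed form (7.19)."""
    return (16 * (1 - math.cos(m * math.pi)) * (1 - math.cos(n * math.pi))
            / (m ** 3 * n ** 3 * math.pi ** 6))

print(all(abs(a_mn(m, n) - a_mn_closed(m, n)) < 1e-10
          for m in (1, 2, 3) for n in (1, 2, 3)))
```

Note that all coefficients with an even m or n vanish, as (7.19) predicts and as the terms retained in (7.21) reflect.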


Fig. 7.1 Solution of the 2 + 1 heat equation (7.20)

7.2 2 + 1 Dimensional Diffusion Equation in a Disc

We again consider the diffusion equation in two space dimensions, but now the spatial region is a disc of radius 1. The model equation is

u_t = u_xx + u_yy,  (7.22)

which, in polar coordinates, becomes

u_t = u_rr + (1/r) u_r + (1/r²) u_θθ,  (7.23)

and so the solution would be of the form u = u(t, r, θ). Suppose we wish to solve this equation subject to some BCs/ICs. For example, if we have a unit disc with the boundary fixed to zero, the boundary condition would be

u(t, 1, θ) = 0,  (7.24)

and furthermore, some prescribed temperature throughout the disc at t = 0, say

u(0, r, θ) = f(r, θ) on [0, 1] × [0, 2π].  (7.25)

Here we will consider two distinct problems:

1. with angular symmetry,
2. without angular symmetry.

7.2.1 With Angular Symmetry

With angular symmetry, we assume u_θ = 0, and so the model equation (7.23) becomes

u_t = u_rr + (1/r) u_r.  (7.26)

As we did previously, we assume separable solutions of the form

u = T(t) R(r).  (7.27)

Equation (7.26) becomes

T′/T = (R″ + (1/r) R′)/R,  (7.28)

which leads to

T′/T = λ = (R″ + (1/r) R′)/R,  (7.29)

or

T′ − λT = 0,  (7.30a)
r R″ + R′ − λ r R = 0,  (7.30b)

where λ is the separation constant. As we expect the solutions to decay in time, we choose λ = −ω², ω ∈ ℝ, so that (7.30a) integrates to

T = c1 e^{−ω² t},  (7.31)

where c1 is an arbitrary constant of integration. Equation (7.30b) becomes

r R″ + R′ + ω² r R = 0,  (7.32)

which is Bessel's equation of order zero, with solutions

R = c2 J0(ωr) + c3 Y0(ωr)  (7.33)

(see Sect. 7.4 on special functions for further details). As Y0 is undefined at r = 0, we choose c3 = 0, and thus

R = c2 J0(ωr).  (7.34)

The boundary condition (7.24) via (7.27) becomes

R(1) = 0,  (7.35)

so that (7.34) gives

J0(ω) = 0,  (7.36)

whose roots are the zeros of the Bessel function J0(x). These we denote by ω_n, so J0(ω_n) = 0, n = 1, 2, 3, …. Combining (7.31), (7.34), and the principle of superposition gives the solution as

u(t, r) = ∑_{n=1}^∞ c_n e^{−ω_n² t} J0(ω_n r).  (7.37)

In order to complete the solution, we need some sort of initial condition (IC). If we were given the initial condition

u(0, r) = f(r),  (7.38)

then the coefficients c_n are chosen such that

c_n = [2/J1(ω_n)²] ∫_0^1 r f(r) J0(ω_n r) dr  (7.39)

(see the special functions Sect. 7.4).

Example 7.2 For example, if

f(r) = 1 − r²,  (7.40)

then (7.39) gives

c_n = [2/J1(ω_n)²] ∫_0^1 r(1 − r²) J0(ω_n r) dr = 8/(ω_n³ J1(ω_n)),  (7.41)

and thus we have the solution

u(t, r) = 8 ∑_{n=1}^∞ e^{−ω_n² t} J0(ω_n r) / (ω_n³ J1(ω_n)).  (7.42)

This is depicted in Fig. 7.2 at t = 0 and subsequent times.
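The coefficient formula (7.41) can be verified numerically against the defining integral (7.39). This sketch is ours (not the book's) and assumes SciPy:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j0, j1, jn_zeros

omega = jn_zeros(0, 5)  # first five zeros of J0: 2.4048, 5.5201, ...

def c_n_numeric(n):
    # c_n from (7.39) with f(r) = 1 - r^2
    w = omega[n - 1]
    val, _ = quad(lambda r: r*(1 - r**2)*j0(w*r), 0, 1)
    return 2*val / j1(w)**2

def c_n_closed(n):
    # closed form (7.41)
    w = omega[n - 1]
    return 8 / (w**3 * j1(w))

for n in range(1, 6):
    assert abs(c_n_numeric(n) - c_n_closed(n)) < 1e-10
```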

7.2.2 Without Angular Symmetry

Without angular symmetry, the model equation becomes

u_t = u_rr + (1/r) u_r + (1/r²) u_θθ,  (7.43)

where now the solution is u = u(t, r, θ).

Fig. 7.2 The solution (7.42) at t = 0 and later times

As we did previously, we assume separable solutions of the form

u = T(t) R(r) Θ(θ).  (7.44)

Equation (7.43) becomes

T′/T = (R″ + (1/r) R′)/R + (1/r²) Θ″/Θ,  (7.45)

which leads to the separation equations

T′/T = λ1,  (7.46a)
Θ″/Θ = λ2,  (7.46b)
R″/R + (1/r) R′/R − λ1 + λ2/r² = 0,  (7.46c)

where λ1 and λ2 are separation constants. As we expect the solutions to be periodic in θ, with the property that

u(t, r, θ + 2π) = u(t, r, θ),  u_θ(t, r, θ + 2π) = u_θ(t, r, θ)  (7.47)

for all θ, and decaying in time, we choose λ2 = −m², m ∈ ℤ⁺, and λ1 = −ω², ω ∈ ℝ, so that (7.46) become

Θ″ + m² Θ = 0,  (7.48a)
T′ + ω² T = 0,  (7.48b)
r² R″ + r R′ + (ω² r² − m²) R = 0.  (7.48c)

The solutions of these are:

Θ = c1 sin mθ + c2 cos mθ,  (7.49a)
T = c3 e^{−ω² t},  (7.49b)
R = c4 Jm(ωr) + c5 Ym(ωr),  (7.49c)

where Jm(x) and Ym(x) are Bessel functions of the first and second kind and c1, c2, c3, c4 and c5 are arbitrary constants. Since Ym is undefined at r = 0, we choose c5 = 0, and thus

R = c4 Jm(ωr).  (7.50)

As the boundary condition (7.24) becomes

R(1) = 0,  (7.51)

from (7.50) we obtain

Jm(ω) = 0,  (7.52)

whose roots are the zeros of the Bessel function Jm(x). As these vary as m varies, we will denote these zeros by ω_mn, where n represents the nth zero of the Bessel function of order m, i.e. Jm(ω_mn) = 0. With the principle of superposition, we have

u(t, r, θ) = ∑_{m=0}^∞ ∑_{n=1}^∞ e^{−ω_mn² t} Jm(ω_mn r) (a_mn cos mθ + b_mn sin mθ),  (7.53)

where a_mn and b_mn are constants to be determined.

Example 7.3 Suppose the initial condition was

u(0, r, θ) = 1 − r² + r(1 − r²) cos θ.  (7.54)

Substituting this into (7.53) gives

∑_{m=0}^∞ ∑_{n=1}^∞ Jm(ω_mn r) (a_mn cos mθ + b_mn sin mθ) = 1 − r² + r(1 − r²) cos θ,  (7.55)

and equating the appropriate trig terms gives

∑_{n=1}^∞ J0(ω_0n r) a_0n = 1 − r²,  (7.56a)
∑_{n=1}^∞ J1(ω_1n r) a_1n = r(1 − r²),  (7.56b)

with a_mn = 0 for m ≥ 2, n ≥ 1, and b_mn = 0 for all m and n. These (Eqs. (7.56a), (7.56b)) are two Bessel series, from which we find (see the section on special functions for further details)

a_0n = [2/J1(ω_0n)²] ∫_0^1 r(1 − r²) J0(ω_0n r) dr = 8/(ω_0n³ J1(ω_0n)),

a_1n = [2/J2(ω_1n)²] ∫_0^1 r²(1 − r²) J1(ω_1n r) dr = −16 J0(ω_1n)/(ω_1n³ J2(ω_1n)²) = 16/(ω_1n³ J2(ω_1n)),  (7.57)

using J2(ω_1n) = −J0(ω_1n) at the zeros of J1.

Fig. 7.3 The functions f(r) = 1 − r² and f(r) = r(1 − r²) and their Bessel function approximations

Graphs of f(r) = 1 − r² and f(r) = r(1 − r²) and their appropriate Bessel series (with 3 terms) are presented in Fig. 7.3. Thus, our solution is

u(t, r, θ) = 8 ∑_{n=1}^∞ e^{−ω_0n² t} J0(ω_0n r)/(ω_0n³ J1(ω_0n)) + 16 ∑_{n=1}^∞ e^{−ω_1n² t} J1(ω_1n r)/(ω_1n³ J2(ω_1n)) cos θ.  (7.58)

7.3 Laplace's Equation

In this section we consider Laplace's equation in three dimensions in

1. Cartesian coordinates,
2. cylindrical polar coordinates,
3. spherical polar coordinates.

7.3.1 Cartesian Coordinates

Laplace's equation in three dimensions with coordinates (x, y, z) is given by

u_xx + u_yy + u_zz = 0.  (7.59)

We start again by assuming a separation of variables

u = X(x) Y(y) Z(z),  (7.60)

which leads to

X″/X + Y″/Y + Z″/Z = 0,  (7.61)

and the separation equations

X″/X = λ1,  Y″/Y = λ2,  Z″/Z = −λ1 − λ2,  (7.62)

or

X″ = λ1 X,  (7.63a)
Y″ = λ2 Y,  (7.63b)
Z″ = −(λ1 + λ2) Z,  (7.63c)

where λ1 and λ2 are separation constants. As the choice of these separation constants depends on the boundary conditions given, we consider a particular example: finding the solution of (7.59) on the unit cube 0 < x, y, z < 1 subject to

u(0, y, z) = 0, u(1, y, z) = 0, u(x, 0, z) = 0, u(x, 1, z) = 0, u(x, y, 0) = 0,  (7.64)

and, on the top of the cube,

u(x, y, 1) = f(x, y).  (7.65)

In terms of our separated variables, the boundary conditions in (7.64) are

X(0) = 0, X(1) = 0, Y(0) = 0, Y(1) = 0, Z(0) = 0.  (7.66)

Solving the first two equations in (7.63) gives

X = c_1m sin mπx,  Y = c_2n sin nπy,  (7.67)

where λ1 = −m²π², λ2 = −n²π², and c_1m and c_2n are arbitrary constants. Solving (7.63c) gives

Z = c_3mn sinh(√(m² + n²) πz) + c_4mn cosh(√(m² + n²) πz),  (7.68)

where c_3mn and c_4mn are arbitrary constants; however, the remaining BC in (7.66) gives c_4mn = 0. Combining our results and the principle of superposition gives the solution as

u = ∑_{m=1}^∞ ∑_{n=1}^∞ c_mn sin mπx sin nπy sinh(√(m² + n²) πz),  (7.69)

noting that we have combined all the constants c_1m, c_2n and c_3mn into the term c_mn. With the final BC (7.65) we have

∑_{m=1}^∞ ∑_{n=1}^∞ c_mn sin mπx sin nπy sinh(√(m² + n²) π) = f(x, y),  (7.70)

and if we let a_mn = c_mn sinh(√(m² + n²) π) we obtain the double Fourier series

∑_{m=1}^∞ ∑_{n=1}^∞ a_mn sin mπx sin nπy = f(x, y),  (7.71)

so that

a_mn = 4 ∫_0^1 ∫_0^1 f(x, y) sin mπx sin nπy dx dy.  (7.72)

Combining our results, we obtain the solution

u = ∑_{m=1}^∞ ∑_{n=1}^∞ a_mn sin mπx sin nπy sinh(√(m² + n²) πz) / sinh(√(m² + n²) π).  (7.73)
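Each separated mode in (7.69) can be checked symbolically. The following SymPy sketch is ours, not the book's; the sample mode numbers are arbitrary:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
m, n = 2, 3  # sample mode numbers; any positive integers work
w = sp.pi*sp.sqrt(m**2 + n**2)
u = sp.sin(m*sp.pi*x)*sp.sin(n*sp.pi*y)*sp.sinh(w*z)

# the mode satisfies Laplace's equation (7.59) ...
laplacian = sp.diff(u, x, 2) + sp.diff(u, y, 2) + sp.diff(u, z, 2)
assert sp.simplify(laplacian) == 0

# ... and the homogeneous boundary conditions in (7.64)
assert u.subs(x, 0) == 0 and u.subs(x, 1) == 0
assert u.subs(y, 0) == 0 and u.subs(y, 1) == 0
assert u.subs(z, 0) == 0
```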

7.3.2 Cylindrical Polar Coordinates

If we introduce cylindrical coordinates (r, θ, z), where

x = r cos θ,  y = r sin θ,  z = z,  (7.74)

Laplace's equation (7.59) becomes

u_rr + (1/r) u_r + (1/r²) u_θθ + u_zz = 0.  (7.75)

Here we will discuss two distinct problems. We solve (7.75) subject to the boundary conditions

(i) u(r, 0) = 0, u(r, 1) = 0, u(1, z) = f(z),  (7.76a)
(ii) u(r, 0) = 0, u(1, z) = 0, u(r, 1) = g(r).  (7.76b)

In both problems we will assume axial symmetry (i.e. u_θ = 0), and in both we start by assuming separation of variables

u = R(r) Z(z),  (7.77)

so that (7.75) becomes

r² R″ + r R′ − k r² R = 0,  (7.78a)
Z″ + k Z = 0.  (7.78b)

Problem (i) In this case, the first two boundary conditions in (7.76a) via (7.77) become

Z(0) = Z(1) = 0.  (7.79)

The only solutions of (7.78b) subject to these BCs occur when k = (nπ)², n = 1, 2, …, and are given by

Z = c1 sin nπz.  (7.80)

The corresponding solution to (7.78a) with k = (nπ)² is

R = c2 I0(nπr) + c3 K0(nπr),  (7.81)

where I0 and K0 are modified Bessel functions; however, as K0 is unbounded at r = 0, we choose c3 = 0 to avoid the singularity. Thus, the solution thus far to (7.75), subject to the first two boundary conditions in (7.76a), is

u(r, z) = ∑_{n=1}^∞ c_n I0(nπr) sin nπz.  (7.82)

Imposing the final BC in (7.76a) on (7.82) gives

u(1, z) = ∑_{n=1}^∞ c_n I0(nπ) sin nπz = f(z),  (7.83)

and if we denote b_n = c_n I0(nπ), then

∑_{n=1}^∞ b_n sin nπz = f(z),  (7.84)

which we recognize as a Fourier sine series, and thus

b_n = 2 ∫_0^1 f(z) sin nπz dz,  (7.85)

which gives

c_n = [2/I0(nπ)] ∫_0^1 f(z) sin nπz dz,  (7.86)

and this, together with (7.82), gives the solution to problem (i).

Example 7.4 If

f(z) = 1,  (7.87)

then from (7.86) we have

c_n = [2/I0(nπ)] ∫_0^1 sin nπz dz = 2(1 − cos nπ)/(nπ I0(nπ)),  (7.88)

so that c_n = 4/(nπ I0(nπ)) for odd n and c_n = 0 for even n, giving from (7.82)

u(r, z) = (4/π) ∑_{n odd} I0(nπr) sin nπz / (n I0(nπ)).  (7.89)
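Example 7.4's coefficients can be cross-checked with SciPy's modified Bessel function `i0`. This is a check we add (not from the text):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import i0

def c_n(n):
    # c_n from (7.86) with boundary data f(z) = 1
    val, _ = quad(lambda z: np.sin(n*np.pi*z), 0, 1)
    return 2*val / i0(n*np.pi)

# odd modes match 4/(n*pi*I0(n*pi)); even modes vanish
assert abs(c_n(1) - 4/(np.pi*i0(np.pi))) < 1e-12
assert abs(c_n(2)) < 1e-12
```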

Problem (ii) The solutions to (7.78) are either

R = c1 J0(ωr) + c2 Y0(ωr),  Z = c3 sinh ωz + c4 cosh ωz,  (7.90)

or

R = c1 I0(ωr) + c2 K0(ωr),  Z = c3 sin ωz + c4 cos ωz,  (7.91)

(c1–c4 arbitrary constants), depending on whether k = −ω² or k = ω². However, subject to the first two BCs in (7.76b), we obtain

R = c1 J0(ωr),  Z = c3 sinh ωz,  (7.92)

with J0(ω_n) = 0, and via the superposition principle

u(r, z) = ∑_{n=1}^∞ c_n J0(ω_n r) sinh ω_n z.  (7.93)

Furthermore, the third BC in (7.76b) gives

c_n = [2/(sinh ω_n J1²(ω_n))] ∫_0^1 r g(r) J0(ω_n r) dr.  (7.94)

Example 7.5 If

g(r) = 1 − r²,  (7.95)

then from (7.94) we have

c_n = [2/(sinh ω_n J1²(ω_n))] ∫_0^1 r(1 − r²) J0(ω_n r) dr = 8/(ω_n³ J1(ω_n) sinh ω_n),  (7.96)

giving the solution

u(r, z) = 8 ∑_{n=1}^∞ J0(ω_n r) sinh ω_n z / (ω_n³ J1(ω_n) sinh ω_n).  (7.97)
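The coefficients in Example 7.5 can be spot-checked numerically against (7.94). A minimal sketch of ours, assuming SciPy:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j0, j1, jn_zeros

omega = jn_zeros(0, 4)  # zeros of J0

def c_n_numeric(k):
    # c_n from (7.94) with boundary data 1 - r^2 (Example 7.5)
    w = omega[k - 1]
    val, _ = quad(lambda r: r*(1 - r**2)*j0(w*r), 0, 1)
    return 2*val / (np.sinh(w)*j1(w)**2)

def c_n_closed(k):
    w = omega[k - 1]
    return 8 / (w**3 * j1(w) * np.sinh(w))

for k in range(1, 5):
    assert abs(c_n_numeric(k) - c_n_closed(k)) < 1e-12
```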

7.3.3 Spherical Polar Coordinates

If we introduce the spherical polar coordinates (ρ, θ, φ) with

x = ρ cos θ sin φ,  y = ρ sin θ sin φ,  z = ρ cos φ,  (7.98)

then in these coordinates Laplace's equation (7.59) becomes

u_ρρ + (2/ρ) u_ρ + (1/ρ²)(u_φφ + cot φ u_φ) + (1/(ρ² sin²φ)) u_θθ = 0.  (7.99)

In what follows, we will assume that u_θ = 0 (the case where u_θ ≠ 0 is left as an exercise to the reader), so (7.99) becomes

u_ρρ + (2/ρ) u_ρ + (1/ρ²)(u_φφ + cot φ u_φ) = 0.  (7.100)

Assuming separation of variables

u = P(ρ) Φ(φ)  (7.101)

gives

(P″ + (2/ρ) P′)/P + (1/ρ²)(Φ″ + cot φ Φ′)/Φ = 0,  (7.102)

which leads to the separated equations

P″/P + (2/ρ) P′/P + k/ρ² = 0,  (Φ″ + cot φ Φ′)/Φ = k,  (7.103)

or

ρ² P″ + 2ρ P′ + k P = 0,
Φ″ + cot φ Φ′ − k Φ = 0.  (7.104)

As k is an arbitrary constant, we choose it to be −n(n + 1), where n is another arbitrary constant. In doing so, (7.104) becomes

ρ² P″ + 2ρ P′ − n(n + 1) P = 0,  (7.105a)
Φ″ + cot φ Φ′ + n(n + 1) Φ = 0.  (7.105b)

Equation (7.105a) is a Cauchy–Euler equation and, with this convenient choice of k, has solutions

P = c1 ρⁿ + c2 ρ^{−n−1},  (7.106)

where c1 and c2 are arbitrary constants. If we let t = cos φ, (7.105b) becomes

(1 − t²) Φ″ − 2t Φ′ + n(n + 1) Φ = 0,  (7.107)

known as Legendre's equation. The only solutions to (7.107) that remain bounded at t = ±1 occur when n ∈ ℤ⁺ and are polynomial solutions known as Legendre polynomials Pn(t) (see Sect. 7.4.2), so for (7.105b) the solutions are

Φ = Pn(cos φ).  (7.108)

Since we wish to have the solution bounded in most problems that include ρ = 0, we choose c2 = 0 and thus, with superposition, we have

u(ρ, φ) = ∑_{n=0}^∞ c_n ρⁿ Pn(cos φ).  (7.109)

If we were given some boundary condition such as

u(a, φ) = f(φ),  (7.110)

then

∑_{n=0}^∞ c_n aⁿ Pn(cos φ) = f(φ),  (7.111)

so

∑_{n=0}^∞ c_n aⁿ ∫_0^π Pn(cos φ) Pm(cos φ) sin φ dφ = ∫_0^π f(φ) Pm(cos φ) sin φ dφ,  (7.112)

and using the orthogonality relations (7.160) gives c_n as

c_n = [(2n + 1)/(2aⁿ)] ∫_0^π f(φ) Pn(cos φ) sin φ dφ,  (7.113)

and (7.113) and (7.109) constitute the solution.

Example 7.6 Consider

u(1, φ) = cos²φ.  (7.114)

In this example a = 1 and so (7.113) gives

c_n = [(2n + 1)/2] ∫_0^π cos²φ Pn(cos φ) sin φ dφ
    = [(2n + 1)/2] ∫_{−1}^1 t² Pn(t) dt  (t = cos φ)
    = [(2n + 1)/2] ∫_{−1}^1 [(2P2(t) + 1)/3] Pn(t) dt,  (7.115)

where we have used the fact that P2(t) = ½(3t² − 1), i.e. t² = (2P2(t) + 1)/3. From the orthogonality relations (7.160) we obtain

c0 = 1/3, c1 = 0, c2 = 2/3, c_n = 0 for all n ≥ 3,  (7.116)

giving the final solution as

u = (1/3) P0(cos φ) + (2/3) ρ² P2(cos φ) = (1 − ρ²)/3 + ρ² cos²φ.  (7.117)

7.4 Special Functions

In this section we consider two special functions: Bessel functions and Legendre functions.

7.4.1 Bessel Functions

The ODE

x² y″ + x y′ + (x² − n²) y = 0,  n ∈ ℝ,  (7.118)

is known as Bessel's equation. Its solution is given by

y = c1 Jn(x) + c2 J−n(x)  (n ∉ ℕ),
y = c1 Jn(x) + c2 Yn(x)  (n ∈ ℕ),  (7.119)

where Jn(x) and J−n(x) are known as Bessel functions of the first kind and Yn(x) as a Bessel function of the second kind. The Bessel function Jn(x) has the known power series

Jn(x) = ∑_{i=0}^∞ [(−1)ⁱ / (i! Γ(n + i + 1))] (x/2)^{2i+n},  (7.120)

and, for noninteger n, Yn(x) is defined as

Yn(x) = [Jn(x) cos nπ − J−n(x)] / sin nπ.  (7.121)

The Gamma function is defined as

Γ(n) = ∫_0^∞ x^{n−1} e^{−x} dx,  (7.122)

with the fundamental relation Γ(n + 1) = nΓ(n). For integer values of n we obtain Γ(n + 1) = n!, so that

Jn(x) = ∑_{i=0}^∞ [(−1)ⁱ / (i! (n + i)!)] (x/2)^{2i+n}.  (7.123)

Figure 7.4 illustrates the Bessel function J0(x), whereas Fig. 7.5 illustrates the Bessel functions J0(x), J1(x), J2(x) and Y0(x), Y1(x), Y2(x).

Fig. 7.4 Bessel function J0(x)

Fig. 7.5 Bessel functions J0(x), J1(x), J2(x) and Y0(x), Y1(x), Y2(x)
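The series (7.123) converges rapidly for moderate x. A quick comparison of a truncated sum against SciPy's `jv` (a sketch we add, not from the text):

```python
import math
from scipy.special import jv

def jn_series(n, x, terms=30):
    # truncated power series (7.123) for integer order n
    return sum((-1)**i / (math.factorial(i)*math.factorial(n + i)) * (x/2)**(2*i + n)
               for i in range(terms))

assert abs(jn_series(0, 2.0) - jv(0, 2.0)) < 1e-12
assert abs(jn_series(1, 5.0) - jv(1, 5.0)) < 1e-10
assert abs(jn_series(2, 1.5) - jv(2, 1.5)) < 1e-12
```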

Table 7.1 First 12 zeros of the Bessel function J0(x)

n   zero      n   zero       n   zero       n    zero
1   2.4048    4   11.7915    7   21.2116    10   30.6346
2   5.5201    5   14.9309    8   24.3525    11   33.7758
3   8.6537    6   18.0711    9   27.4935    12   36.9171

Table 7.2 First 5 zeros of the Bessel functions J0(x)–J4(x)

n   J0        J1        J2        J3        J4
1   2.4048    3.8317    5.1356    6.3802    7.5883
2   5.5201    7.0156    8.4172    9.7610    11.0647
3   8.6537    10.1735   11.6198   13.0152   14.3725
4   11.7915   13.3237   14.7960   16.2235   17.6160
5   14.9309   16.4706   17.9598   19.4094   20.8269

As the reader can see, Y0, Y1 and Y2 become unbounded near x = 0, and thus we typically do not use these in our calculations. Also, Bessel functions have an infinite number of zeros. In Table 7.1 we list the first 12 zeros of the Bessel function J0(x), and in Table 7.2 the first 5 zeros of each of the Bessel functions J0(x) through J4(x).

Derivatives and Integrals of Bessel Functions

As Bessel functions satisfy Bessel's equation, clearly we must be able to take derivatives of Bessel functions. The following recursion relations allow us to do this very easily (see the exercise section):

Jn′(x) = −(n/x) Jn(x) + Jn−1(x),
Jn′(x) = (n/x) Jn(x) − Jn+1(x),  (7.124)

noting that from (7.124) we have the following identity:

Jn+1(x) = (2n/x) Jn(x) − Jn−1(x).  (7.125)

For example, from (7.124) we can obtain

J0′(x) = −J1(x),
J1′(x) = −(1/x) J1(x) + J0(x),
J2′(x) = −(2/x) J2(x) + J1(x),  (7.126)
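The tabulated zeros and the recursion relations (7.124) can be spot-checked with SciPy (`jn_zeros`, `jv`, and the derivative helper `jvp`). This check is ours, not the book's:

```python
from scipy.special import jn_zeros, jv, jvp

# first zeros of J0 and J1 (cf. Tables 7.1 and 7.2; the first zero of J1 is 3.8317)
assert abs(jn_zeros(0, 1)[0] - 2.4048) < 1e-4
assert abs(jn_zeros(1, 1)[0] - 3.8317) < 1e-4

# recursion relations (7.124) at an arbitrary sample point
x, n = 1.7, 3
assert abs(jvp(n, x) - (-n/x*jv(n, x) + jv(n - 1, x))) < 1e-12
assert abs(jvp(n, x) - (n/x*jv(n, x) - jv(n + 1, x))) < 1e-12
```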

which, via (7.125), gives

J2′(x) = (1 − 4/x²) J1(x) + (2/x) J0(x).  (7.127)

If we continue,

J2″(x) = (8/x³) J1(x) + (1 − 4/x²) J1′(x) − (2/x²) J0(x) + (2/x) J0′(x)
       = (8/x³) J1(x) + (1 − 4/x²) [−(1/x) J1(x) + J0(x)] − (2/x²) J0(x) + (2/x) [−J1(x)]
       = (12/x³ − 3/x) J1(x) + (1 − 6/x²) J0(x),  (7.128)

and the reader can verify that substitution of (7.127) and (7.128) into

x² y″ + x y′ + (x² − 4) y = 0  (7.129)

shows it is identically satisfied. From the recursion relations (7.124) we can also obtain

(xⁿ Jn(x))′ = xⁿ Jn−1(x),  (Jn(x)/xⁿ)′ = −Jn+1(x)/xⁿ,  (7.130)

which lead us to the following integral formulas:

∫ xⁿ Jn−1(x) dx = xⁿ Jn(x) + c,  ∫ x⁻ⁿ Jn+1(x) dx = −x⁻ⁿ Jn(x) + c.  (7.131)

Using these we are able to derive the following recursive formulas:

∫ xⁿ J0(x) dx = xⁿ J1(x) − (n − 1) ∫ xⁿ⁻¹ J1(x) dx,
∫ xⁿ J1(x) dx = −xⁿ J0(x) + n ∫ xⁿ⁻¹ J0(x) dx,  (7.132)

noting that

∫ J0(x) dx  (7.133)

does not have a closed-form solution. The following tables cover low powers of x; for higher powers xⁿ (n ≥ 3) we simply use a combination of (7.132) and Tables 7.3 and 7.4.

Table 7.3 Integrals involving J0(x)

∫ x J0(x) dx = x J1(x) + c
∫ x² J0(x) dx = x² J1(x) + x J0(x) − ∫ J0(x) dx + c
∫ x³ J0(x) dx = (x³ − 4x) J1(x) + 2x² J0(x) + c

Table 7.4 Integrals involving J1(x)

∫ J1(x) dx = −J0(x) + c
∫ x J1(x) dx = −x J0(x) + ∫ J0(x) dx + c
∫ x² J1(x) dx = 2x J1(x) − x² J0(x) + c
∫ x³ J1(x) dx = 3x² J1(x) − (x³ − 3x) J0(x) − 3 ∫ J0(x) dx + c

We illustrate by finding ∫ x⁴ J1(x) dx. Using (7.132) first, we have

∫ x⁴ J1(x) dx = −x⁴ J0(x) + 4 ∫ x³ J0(x) dx,  (7.134)

and then, using Table 7.3,

∫ x⁴ J1(x) dx = −x⁴ J0(x) + 4 [(x³ − 4x) J1(x) + 2x² J0(x)] + c  (7.135)
             = (8x² − x⁴) J0(x) + 4(x³ − 4x) J1(x) + c.  (7.136)

For higher order Bessel functions we make use of the identity (7.125), namely

Jn+1(x) = (2n/x) Jn(x) − Jn−1(x),  (7.137)

and the preceding material. As an example, consider

∫ x J2(x) dx.  (7.138)

Using (7.137),

∫ x J2(x) dx = ∫ [2 J1(x) − x J0(x)] dx
            = 2 [−J0(x)] − [x J1(x)] + c  (7.139)
            = −x J1(x) − 2 J0(x) + c.

The same process can be used to derive the integrals in Table 7.5.

Table 7.5 Integrals involving J2(x)

∫ J2(x) dx = −2 J1(x) + ∫ J0(x) dx + c
∫ x J2(x) dx = −x J1(x) − 2 J0(x) + c
∫ x² J2(x) dx = −x² J1(x) − 3x J0(x) + 3 ∫ J0(x) dx + c
∫ x³ J2(x) dx = x³ J3(x) + c = (8x − x³) J1(x) − 4x² J0(x) + c

Bessel Series

As we did with Fourier series, we can approximate functions using Bessel functions. In particular,

f(x) = ∑_{i=1}^∞ c_i Jn(α_i x),  (7.140)

where Jn is the Bessel function of order n and α_i is the ith zero of that Bessel function. In order to do so, we must make use of the following:

∫_0^1 x Jn(α_i x) Jn(α_j x) dx = 0,  α_i ≠ α_j,  (7.141)

∫_0^1 x Jn(α_i x)² dx = ½ Jn′(α_i)² = ½ Jn+1(α_i)² = ½ Jn−1(α_i)²,  (7.142)

where α_i and α_j are zeros of the Bessel function Jn(x). Multiplying the Bessel series (7.140) by x Jn(α_j x) and integrating over the interval [0, 1] gives

c_i = ∫_0^1 x f(x) Jn(α_i x) dx ⁄ ∫_0^1 x Jn(α_i x)² dx = ∫_0^1 x f(x) Jn(α_i x) dx ⁄ (½ Jn+1(α_i)²).  (7.143)

Example 7.7 Find a J0 Bessel series for f(x) = 1 − x² on [0, 1]. From (7.143) we have

c_i = ∫_0^1 x(1 − x²) J0(α_i x) dx ⁄ (½ J1(α_i)²),  (7.144)

where the α_i are the zeros of J0(x). We calculate the integral in (7.144) in a number of steps. First, introducing the substitution t = α_i x,

∫_0^1 x(1 − x²) J0(α_i x) dx = (1/α_i²) ∫_0^{α_i} t J0(t) dt − (1/α_i⁴) ∫_0^{α_i} t³ J0(t) dt
 = (1/α_i²) [t J1(t)]_0^{α_i} − (1/α_i⁴) [(t³ − 4t) J1(t) + 2t² J0(t)]_0^{α_i}
 = (1/α_i²) α_i J1(α_i) − (1/α_i⁴) [(α_i³ − 4α_i) J1(α_i) + 2α_i² J0(α_i)]
 = (4/α_i³) J1(α_i),  (7.145)

since J0(α_i) = 0, so that (7.144) becomes

c_i = (4/α_i³) J1(α_i) ⁄ (½ J1²(α_i)) = 8 ⁄ (α_i³ J1(α_i)),  (7.146)

and the series is given by

f(x) = ∑_{i=1}^∞ [8 ⁄ (α_i³ J1(α_i))] J0(α_i x).  (7.147)

The first 5 terms are given by

f ≈ 1.1080 J0(2.4048x) − 0.1398 J0(5.5201x) + 0.0455 J0(8.6537x) − 0.0210 J0(11.7915x) + 0.0116 J0(14.9309x).  (7.148)

This is shown in Fig. 7.6.

Fig. 7.6 Bessel function approximations to f(x) = 1 − x² and f(x) = x

Example 7.8 Find a J1 Bessel series for f(x) = x on [0, 1]. From (7.143) we have

c_i = ∫_0^1 x² J1(α_i x) dx ⁄ (½ J2(α_i)²),  (7.149)

where the α_i are zeros of J1(x). If we introduce the substitution t = α_i x, then

c_i = (1/α_i³) ∫_0^{α_i} t² J1(t) dt ⁄ (½ J2²(α_i)) = 2 [2α_i J1(α_i) − α_i² J0(α_i)] ⁄ (α_i³ J2²(α_i)) = −2 J0(α_i) ⁄ (α_i J2²(α_i)),  (7.150)

since J1(α_i) = 0; moreover, by (7.137), J2(α_i) = −J0(α_i) at these zeros, so c_i = 2 ⁄ (α_i J2(α_i)) and the Bessel series is given by

f(x) = ∑_{i=1}^∞ [2 ⁄ (α_i J2(α_i))] J1(α_i x).  (7.151)

The first 5 terms are

f ≈ 1.2960 J1(3.8317x) − 0.9499 J1(7.0156x) + 0.7873 J1(10.1735x) − 0.6874 J1(13.3237x) + 0.6181 J1(16.4706x).  (7.152)

This is shown in Fig. 7.6.
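The leading coefficients in Examples 7.7 and 7.8 are easy to reproduce numerically via (7.143). A hedged sketch of ours, assuming SciPy:

```python
from scipy.integrate import quad
from scipy.special import j0, j1, jv, jn_zeros

# Example 7.7: J0 series for f(x) = 1 - x^2
a0 = jn_zeros(0, 3)
c0 = [quad(lambda x, a=a: x*(1 - x**2)*j0(a*x), 0, 1)[0] / (0.5*j1(a)**2)
      for a in a0]
assert abs(c0[0] - 1.1080) < 5e-4                       # leading term of (7.148)
assert abs(c0[0] - 8/(a0[0]**3*j1(a0[0]))) < 1e-10      # closed form (7.146)

# Example 7.8: J1 series for f(x) = x
a1 = jn_zeros(1, 2)
c1 = [quad(lambda x, a=a: x*x*j1(a*x), 0, 1)[0] / (0.5*jv(2, a)**2)
      for a in a1]
assert abs(c1[0] - 1.2960) < 5e-4                       # leading term of (7.152)
```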

7.4.2 Legendre Functions

The ODE

(1 − x²) y″ − 2x y′ + n(n + 1) y = 0,  n ∈ ℝ,  (7.153)

is known as Legendre's equation. Legendre's equation is known to have the power series solution

y = ∑_{i=0}^∞ c_i xⁱ  (7.154)

with the recurrence relation

c_{i+2} = [(i − n)(i + n + 1) ⁄ ((i + 1)(i + 2))] c_i,  i = 0, 1, 2, 3, …,  (7.155)

from which we see we have two independent solutions

y1 = 1 + ∑_{i=1}^∞ (−1)ⁱ [n(n − 2)⋯(n − 2i + 2)] [(n + 1)(n + 3)⋯(n + 2i − 1)] x^{2i} ⁄ (2i)!,

y2 = x + ∑_{i=1}^∞ (−1)ⁱ [(n − 1)(n − 3)⋯(n − 2i + 1)] [(n + 2)(n + 4)⋯(n + 2i)] x^{2i+1} ⁄ (2i + 1)!.  (7.156)

These are referred to as the "even" and "odd" solutions, respectively. Both series converge for |x| < 1 for all values of n. However, for noninteger values of n, both series diverge at the endpoints x = ±1. For integer values of n, one of these two series terminates, depending on whether n is even or odd, while the other still diverges at the endpoints x = ±1. As we require our solutions to be defined on all of [−1, 1], we only consider the terminating series, which results in a polynomial. We consider these polynomials with the additional requirement that y = 1 when x = 1. They are referred to as Legendre polynomials, denoted by Pn(x). The first four are given by

P0(x) = 1,
P1(x) = x,
P2(x) = ½ (3x² − 1),
P3(x) = ½ (5x³ − 3x),  (7.157)

which are shown graphically in Fig. 7.7. We also have a convenient formula, Rodrigues' formula, to generate these. It is given by

Pn(x) = [1 ⁄ (n! 2ⁿ)] dⁿ/dxⁿ (x² − 1)ⁿ.  (7.158)

Fig. 7.7 Legendre functions P0(x), P1(x), P2(x) and P3(x)

Derivatives and Integrals of Legendre Functions

As Legendre functions are simply polynomials, their derivatives and integrals are easily calculated. We do have the following special relations:

(n + 1) Pn+1(x) = (2n + 1) x Pn(x) − n Pn−1(x),
Pn+1′(x) = x Pn′(x) + (n + 1) Pn(x),
(2n + 1) Pn(x) = Pn+1′(x) − Pn−1′(x).

Legendre Series

As we have seen with Fourier series and Bessel series, we can also have a Legendre series. It is given by

f(x) = ∑_{n=0}^∞ c_n Pn(x),  (7.159)

where the Pn(x) are the Legendre polynomials. Using the orthogonality condition

∫_{−1}^1 Pm(x) Pn(x) dx = 0 if m ≠ n,  and 2/(2n + 1) if m = n,  (7.160)

we are able to extract the appropriate coefficients c_n in (7.159). They are given by

c_n = [(2n + 1)/2] ∫_{−1}^1 f(x) Pn(x) dx.  (7.161)

Example 7.9 Find a Legendre series for

f(x) = x² on [−1, 1].  (7.162)

We use (7.161) to calculate the c_n for this particular f(x). We obtain

c0 = ½ ∫_{−1}^1 1 · x² dx = 1/3,
c1 = (3/2) ∫_{−1}^1 x · x² dx = 0,
c2 = (5/2) ∫_{−1}^1 ½(3x² − 1) · x² dx = 2/3,  (7.163)

with the remaining constants zero. Denoting the two-term approximation by y2, we have

y2 = c0 P0(x) + c2 P2(x) = (1/3) · 1 + (2/3) · ½ (3x² − 1) = x².  (7.164)

It should be no surprise that we recover the actual function here.

Example 7.10 Find a Legendre series for

f(x) = sin πx on [−1, 1].  (7.165)

We use (7.161) to calculate the c_n for this particular f(x). We obtain

c1 = (3/2) ∫_{−1}^1 x sin πx dx = 3/π,
c3 = (7/2) ∫_{−1}^1 ½ (5x³ − 3x) sin πx dx = 7(π² − 15)/π³,
c5 = (11/2) ∫_{−1}^1 ⅛ x(63x⁴ − 70x² + 15) sin πx dx = 11(π⁴ − 105π² + 945)/π⁵,  (7.166)

with the even constants zero. The successive approximations y1, y2 and y3 are

y1 = c1 P1(x),
y2 = c1 P1(x) + c3 P3(x),
y3 = c1 P1(x) + c3 P3(x) + c5 P5(x).  (7.167)

The graphs of y2 and y3, plotted against the actual function f(x) = sin πx, are shown in Fig. 7.8. As the reader will notice, the approximations are quite good.

Fig. 7.8 Legendre approximations versus f(x) = sin πx
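Rodrigues' formula (7.158) and the coefficient formula (7.161) can both be checked by machine. The sketch below is our addition (not from the text), using SymPy for the symbolic part and SciPy for the integrals:

```python
import numpy as np
import sympy as sp
from scipy.integrate import quad
from scipy.special import eval_legendre

# Rodrigues' formula (7.158) reproduces P2(x) = (3x^2 - 1)/2
x = sp.symbols('x')
P2 = sp.diff((x**2 - 1)**2, x, 2) / (sp.factorial(2) * 2**2)
assert sp.expand(P2 - (3*x**2 - 1)/2) == 0

# Legendre coefficients (7.161)
def c_n(n, f):
    val, _ = quad(lambda t: f(t)*eval_legendre(n, t), -1, 1)
    return (2*n + 1)/2 * val

# Example 7.9: f(x) = x^2 gives c0 = 1/3, c1 = 0, c2 = 2/3
assert abs(c_n(0, lambda t: t**2) - 1/3) < 1e-12
assert abs(c_n(1, lambda t: t**2)) < 1e-12
assert abs(c_n(2, lambda t: t**2) - 2/3) < 1e-12

# Example 7.10: f(x) = sin(pi x) gives c1 = 3/pi
assert abs(c_n(1, lambda t: np.sin(np.pi*t)) - 3/np.pi) < 1e-10
```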

Exercises

1. Prove the following Bessel recurrence relations for integer values of n:

Jn′(x) = −(n/x) Jn(x) + Jn−1(x),
Jn′(x) = (n/x) Jn(x) − Jn+1(x),  (7.168)

using: (1) the series solution

Jn(x) = ∑_{i=0}^∞ [(−1)ⁱ / (i! (n + i)!)] (x/2)^{2i+n};  (7.169)

(2) the integral formula

Jn(x) = (1/π) ∫_0^π cos(nθ − x sin θ) dθ;  (7.170)

(3) the differential equation

x² y″ + x y′ + (x² − n²) y = 0,  n ∈ ℕ.  (7.171)

2. Closed-form solutions of Bessel's equation are rare but do exist. Verify that

J_{1/2}(x) = √(2/(πx)) sin x,  J_{−1/2}(x) = √(2/(πx)) cos x,  (7.172)

satisfy the Bessel equation

x² y″ + x y′ + (x² − ¼) y = 0.  (7.173)

Using (7.168), deduce

J_{3/2}(x) = √(2/π) (sin x / x^{3/2} − cos x / x^{1/2}),
J_{−3/2}(x) = −√(2/π) (cos x / x^{3/2} + sin x / x^{1/2}),  (7.174)

and

J_{5/2}(x) = √(2/π) ((3 − x²) sin x / x^{5/2} − 3 cos x / x^{3/2}),
J_{−5/2}(x) = √(2/π) ((3 − x²) cos x / x^{5/2} + 3 sin x / x^{3/2}),  (7.175)

and show they satisfy

x² y″ + x y′ + (x² − 9/4) y = 0,  x² y″ + x y′ + (x² − 25/4) y = 0,  (7.176)

respectively.

3. Prove the following integration formulas:

∫ xⁿ J0(x) dx = xⁿ J1(x) − (n − 1) ∫ xⁿ⁻¹ J1(x) dx,
∫ xⁿ J1(x) dx = −xⁿ J0(x) + n ∫ xⁿ⁻¹ J0(x) dx.  (7.177)

4. Prove

∫_0^1 x Jn(α_i x) Jn(α_j x) dx = 0,  α_i ≠ α_j,  (7.178)

∫_0^1 x Jn(α_i x)² dx = ½ Jn′(α_i)² = ½ Jn+1²(α_i) = ½ Jn−1²(α_i),  (7.179)

where α_i and α_j are zeros of the Bessel function Jn(x).

5. Find Bessel J0 and J1 function approximations to f(x) = x(1 − x²) on [0, 1] and compare your answers.

6. Consider Legendre's equation

(1 − x²) y″ − 2x y′ + l(l + 1) y = 0  (7.180)

of orders 0, 1 and 2.

(i) For order 0 (l = 0), Eq. (7.180) becomes

(1 − x²) y″ − 2x y′ = 0.  (7.181)

Show the exact solution is

y = c1 + c2 (ln|x − 1| − ln|x + 1|),  (7.182)

where c1 and c2 are arbitrary constants. Thus, we have two independent solutions

y1 = 1,  y2 = ln|x − 1| − ln|x + 1|,  (7.183)

noting the second solution becomes unbounded at x = ±1.

(ii) For order 1 (l = 1), Eq. (7.180) becomes

(1 − x²) y″ − 2x y′ + 2y = 0.  (7.184)

Show one exact solution is

y = x,  (7.185)

and show the second independent solution is

y2 = 2 + x (ln|x − 1| − ln|x + 1|),  (7.186)

noting the second solution becomes unbounded at x = ±1.

(iii) For order 2 (l = 2), Eq. (7.180) becomes

(1 − x²) y″ − 2x y′ + 6y = 0.  (7.187)

Show that if one seeks a solution of the form

y = ax² + bx + c,  (7.188)

then

a + 3c = 0,  b = 0.  (7.189)

Further show that if y(1) = 1 we obtain

y1 = ½ (3x² − 1).  (7.190)

Show the second independent solution is

y2 = 3x + ½ (3x² − 1)(ln|x − 1| − ln|x + 1|).  (7.191)

Note again that this second solution becomes unbounded as x → ±1.

7. Show for the next two orders l = 3, 4 that, if one assumes

y = ax³ + bx² + cx + d,  y = ax⁴ + bx³ + cx² + dx + e,  (7.192)

then

y = ax³ − (3a/5) x  (l = 3),
y = ax⁴ − (6a/7) x² + (3a/35)  (l = 4),  (7.193)

and imposing y(1) = 1 gives

y = ½ (5x³ − 3x)  (l = 3),
y = ⅛ (35x⁴ − 30x² + 3)  (l = 4).  (7.194)

These are the Legendre polynomials P0(x)–P4(x), given by

P0(x) = 1,
P1(x) = x,
P2(x) = ½ (3x² − 1),
P3(x) = ½ (5x³ − 3x),
P4(x) = ⅛ (35x⁴ − 30x² + 3).  (7.195)

8. Show the Legendre polynomials satisfy the recursion relations

(n + 1) Pn+1(x) = (2n + 1) x Pn(x) − n Pn−1(x),
Pn+1′(x) = x Pn′(x) + (n + 1) Pn(x),
(2n + 1) Pn(x) = Pn+1′(x) − Pn−1′(x).

9. Verify the orthogonality condition

∫_{−1}^1 Pm(x) Pn(x) dx = 0 if m ≠ n,  and 2/(2n + 1) if m = n,  (7.196)

for P0(x), P1(x), P2(x) and P3(x).

10. Find a Legendre series for the following on [−1, 1]:

(i) f(x) = cos(πx/2),  (ii) f(x) = eˣ.

11. Solve

u_t = u_rr + (1/r) u_r + (1/r²) u_θθ

subject to the BC u(t, 1, θ) = 0 and initial conditions

(i) u(0, r, θ) = 1 − r² + r sin θ,
(ii) u(0, r, θ) = 1 − r² + r² sin 2θ,

and compare your answers.

12. Solve

u_ρρ + (2/ρ) u_ρ + (1/ρ²)(u_φφ + cot φ u_φ) = 0

Solutions

Chapter 1 1. (i) (b), (ii) (c), (iii) (d), (iv) (a). 2. (i) yes, (ii) yes, (iii) no, (iv) no. 5. a = −b2 . 9. ab = 1. 10. c = −4.

Chapter 2 1. (i) (iv) (vi)

x x3 − 2 + 1, (iii) u = 2t + f (t + 2x), 2 2y u = 3y 2 + f (x − y), (v) y + 2x + 2(u − x) ln x + f (u − x) = 0, u = t 2 + sin xet u = e−x f (2x + y) , (ii) u =

2. (i) (iv)

3. (i) (iii) (iv) 4. (i) (iv)

2x y y2 u = xf − ln x , (ii) u = −x − 1 + e x f (y + 1)e−x , (iii) u = , 2x 2 x + 3y 2 y u = x + f (x y − x 2 ). (iv) u = −x 2 − 2

u = sech (x + 1)e−t , (ii) u = sech (x − (u − 2)t) , (u − 3x)e3t = u − 3sech −1 u, (1 + 2x + 3y)e−2t = 1 + 2sech −1 u + 3u

u = xe y + e2y , (ii) u = x 2 + y 2 and u = x 2 + (2 − y)2 , (iii) u = e x+y−1 , u = x 2 + x y and u = x 2 − x y + 2x

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 D. Arrigo, An Introduction to Partial Differential Equations, Synthesis Lectures on Mathematics & Statistics, https://doi.org/10.1007/978-3-031-22087-6



5. (i) (ii) (iii)


F = F 2xq + q 2 , 2yp + p 2 , 2u + pq, p/q , F = F 2xuq − y, uq, x − 2q1 2 , x − 2q1 2 , p/q , F = F (up − 2y, uq − 2x, p, q)

Chapter 3 1. (i) H (ii) P (iii) P (iv) H (v) H (vi) E (vii) E. 2. (i)

r = x − y,

(ii)

r

(iii)

r

(iv) (v)

u r r

(vi)

r

(vii)

r

(viii)

r

(i x)

r

4. (i) (ii)

s = y,

u ss = 0, u = f (x − y)y + g(x − y), 2r s = y, u ss + u r = 0, = + r − s2 s = x 2 − y 2 , s = x 2 + y 2 , u ss + 2 u s = 0, s − r2 2 2 2 2 = f (x − y ) ln |x + y| + g(x − y ), = x + y, s = x + 2y, u r s + 2u r + 3u s = 0, = ln x + ln y, s = 2 ln x + ln y, u r s + 3u r + 4u s = 0, ur + u s = 2x + y 2 , s = x + y 2 , u r s − = 0, 2s − r = x + 3y, s = 2y, u rr + u ss = 0, r = sin x, s = sin x + y, u rr + u ss − (u r + u s ) = 0, 1 − r2 2 = 3x, s = x − 2y , u rr + u ss = 0. x2

y2,

It is not possible. r = 2 ln x + ln y,

s = 2 ln x.

Chapter 4 1. (i) (ii) (iii) (iv)

∞ 1 cos nπ (cos nπ x + nπ sin nπ x) e − e−1 + e − e−1 2 1 + n2π 2 n=1 ∞ cos nπ − 1 nπ x f (x) = 1 + 4 cos n2π 2 2 n=1 ∞ cos nπ 3 1 − cos nπ cos nπ x + f (x) = + sin nπ x 2 2 4 n π nπ n=1 ∞ cos nπ − 2 nπ x 28 cos f (x) = − 16 3 n2π 2 2

f (x) =

n=1

Solutions

2. (i)

(ii)

(iii)

(iv)

195 ∞ e cos nπ − 1 f c (x) = e − 1 + 2 cos nπ x 1 + n2π 2 n=1 ∞ n(e cos nπ − 1) f s (x) = −2π sin nπ x 1 + n2π 2 n=1 ∞ 3 cos nπ + 1 1 nπ x cos f c (x) = − − 4 2 2 3 n π 2 n=1 ∞ 2 2 n π cos nπ − 4 cos nπ + 4 nπ x sin f s (x) = 4 n3π 3 2 n=1 nπ ∞ 2 cos nπ − 3 cos +1 5 nπ x 2 f c (x) = − 4 cos 2 2 4 n π 2 n=1 nπ ∞ 6 sin + nπ nπ x 2 f s (x) = 2 sin 2 2 n π 2 n=1 nπ nπ ∞ nπ cos nπ − 5nπ cos − 12 sin 8 3 3 cos nπ x f c (x) = − 6 9 n3π 3 3 n=1 nπ nπ ∞ 5nπ sin − 12 cos − 12 nπ x 3 3 f s (x) = 6 sin 3 3 n π 3 n=1

∞

3. (i)

f (x) =

a0 an cos nπ x + 2 n=1

a0 a3 a6 a9

3. (ii)

= 1.493648266 = 0.01692345510 = −0.004164711740 = 0.001845282826

f (x) =

∞ n=1

bn sin

a1 = 0.2807893413 a4 = 0.009434378340 a7 = 0.003055298928 a10 = −0.001493973933

a2 = −0.03869202628 a5 = 0.006011684435 a8 = −0.002336974260

nπ x 4

b1 = 0.8747046386 b4 = −0.1312665368 b7 = 0.1031454701 b10 = −0.05656041726

b2 = −0.2406019431 b5 = 0.1475827953 b8 = −0.06966160069

b3 = 0.2560984582 b6 = −0.09086250412 b9 = 0.07909381104


3. (iii)


f (x) =

∞

bn sin nπ x

n=1

b1 = 0.3752809082 b4 = 0.01889844269 b7 = 0.006683416220 b10 = 0.003118724960

b2 = 0.07184439810 b3 = 0.03770863282 b5 = 0.01324442018 b6 = 0.008544944317 b8 = 0.004848001062 b9 = 0.004018015103

Chapter 5
1. (i) $u = 16\sum_{n=1}^{\infty} \frac{1 - \cos n\pi}{n^3\pi^3}\,e^{-n^2\pi^2 t/4}\,\sin\frac{n\pi x}{2}$
(ii) $u = \frac{2}{3} - 8\sum_{n=1}^{\infty} \frac{1 + \cos n\pi}{n^2\pi^2}\,e^{-n^2\pi^2 t/4}\,\cos\frac{n\pi x}{2}$
(iii) $u = 32\sum_{n=1}^{\infty} \frac{(2n-1)\pi\cos n\pi + 4}{(2n-1)^3\pi^3}\,e^{-(2n-1)^2\pi^2 t/4}\,\sin\frac{(2n-1)\pi x}{4}$
(iv) $u = -32\sum_{n=1}^{\infty} \frac{(2n-1)\pi + 4\cos n\pi}{(2n-1)^3\pi^3}\,e^{-(2n-1)^2\pi^2 t/4}\,\cos\frac{(2n-1)\pi x}{4}$
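The coefficients in 1(i), together with the steady value $2/3$ in 1(ii), are exactly what separation of variables produces for the initial profile $f(x) = x(2 - x)$ on $[0, 2]$ — an inference, since the problem statements are not reproduced here. Under that reading the series at $t = 0$, $x = 1$ must sum to $f(1) = 1$:

```python
import math

def u(x, t, N=2000):
    # series from 1(i): 16 * sum (1 - cos n*pi)/(n^3 pi^3) e^{-n^2 pi^2 t/4} sin(n*pi*x/2)
    s = 0.0
    for n in range(1, N + 1):
        s += 16 * (1 - math.cos(n * math.pi)) / (n * math.pi) ** 3 \
             * math.exp(-(n * math.pi) ** 2 * t / 4) \
             * math.sin(n * math.pi * x / 2)
    return s

print(u(1.0, 0.0))  # tends to 1 = f(1)
print(u(1.0, 1.0))  # decays as t grows, as a heat-equation solution should
```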

2. (i) $u = -2x + 7 + 36\sum_{n=1}^{\infty} \frac{\cos n\pi - 1}{n^3\pi^3}\,e^{-n^2\pi^2 t/9}\,\sin\frac{n\pi x}{3}$
(ii) $u = x^2 + 2t - x - \frac{3}{2} - 18\sum_{n=1}^{\infty} \frac{1 + \cos n\pi}{n^2\pi^2}\,e^{-n^2\pi^2 t/9}\,\cos\frac{n\pi x}{3}$

3. (i) $u = -2\sum_{n=1}^{\infty} \frac{\cos n\pi}{n\pi}\,\frac{\sinh n\pi y}{\sinh n\pi}\,\sin n\pi x$
(ii) $u = 2\sum_{n=1}^{\infty} \frac{1 - \cos n\pi}{n\pi}\,\frac{\sinh n\pi x}{\sinh n\pi}\,\sin n\pi y$

4. (i) $P_t - P P_x - P_{xx} + 2Q_x = 0$
(ii) $u = \frac{v}{x^2 + 1}$, $u = \frac{4}{x^2 + 1}\sum_{n=1}^{\infty} \frac{\cos n\pi - 1}{n^3\pi^3}\,e^{-n^2\pi^2 t/4}\,\sin\frac{n\pi x}{2}$

6. $u = \exp\left(-\frac{\beta}{2}x - \frac{\beta^2}{4}t\right)v$

7. $u(r, \theta) = \frac{2u_0}{\pi}\sum_{n=1}^{\infty} \frac{1 - \cos n\pi}{n}\cdot\frac{r^n - r^{-n}}{a^n - a^{-n}}\,\sin n\theta$

8. $u(r, \theta) = \frac{a_0}{2}\cdot\frac{\ln r}{\ln 2} + \sum_{n=1}^{\infty} a_n\,\frac{r^{2n} - r^{-2n}}{2^{2n} - 2^{-2n}}\,\cos 2n\theta$

Chapter 6
1. (i) $\frac{1}{\sqrt{2\pi}}\cdot\frac{4}{\omega^2 + 4}$ (ii) $\frac{1}{2}\,e^{-i\omega - \omega^2/8}$ (iii) $\frac{1}{\sqrt{2}}\,e^{1 - i\omega - \omega^2/4}$
(iv) $\frac{2 - \omega^2}{4\sqrt{2}}\,e^{-\omega^2/4}$ (v) $-\frac{4i}{\sqrt{2\pi}}\cdot\frac{\omega}{(\omega^2 + 1)^2}$ (vi) $\frac{2}{\sqrt{2\pi}}\cdot\frac{e^{i\omega}}{\omega^2 + 1}$
(vii) $\frac{2i}{\sqrt{2\pi}\,\omega^3}\left[(i\omega - 1) + (i\omega + 1)e^{-2i\omega}\right]$ (viii) $\frac{1}{\sqrt{2\pi}\,\omega}\left(-1 + 2e^{-i\omega} - e^{-2i\omega}\right)$

2. (i) $\frac{4}{\sqrt{2\pi}}\cdot\frac{\omega^2 + 13}{(\omega^2 - 6\omega + 13)(\omega^2 + 6\omega + 13)}$ (ii) $-\frac{i}{2\sqrt{2}}\,e^{-(\omega^2 + 25)/16}\,\sinh\frac{5\omega}{8}$

3. (i) $\frac{1}{\sqrt{2}}\,e^{-x^2/4}$ (ii) $\sqrt{\frac{\pi}{2}}\,\big(H(x + a) - H(x - a)\big)$ (iii) $\frac{\sqrt{2\pi}}{4}\,e^{-2|x|}$
(iv) $\frac{ix}{2\sqrt{2}}\,e^{-x^2/4}$ (v) $-\frac{\sqrt{2}\,i}{4}\left(e^{-(x-1)^2/4} - e^{-(x+1)^2/4}\right)$

4. (i) $\sqrt{\frac{2}{\pi}}\cdot\frac{\omega}{\omega^2 + 1}$ (ii) $\frac{1}{\sqrt{\omega}}$ (iii) $\sqrt{\frac{\pi}{2}}\,e^{-3\omega}$ (iv) $\sqrt{\frac{2}{\pi}}\cdot\frac{1 - \cos\omega a}{\omega}$ (v) $\sqrt{\frac{2}{\pi}}\cdot\frac{2\sin\omega\,(1 - \cos\omega)}{\omega^2}$
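Answer 1(i) is the transform of $e^{-2|x|}$ (inferred from the form $4/(\omega^2 + 4)$; the problem statements are not shown). With the symmetric convention $\hat f(\omega) = \frac{1}{\sqrt{2\pi}}\int f(x)\,e^{-i\omega x}\,dx$ used in this chapter, the check reduces to a real cosine integral:

```python
import math

def ft_numeric(omega, cutoff=20.0, m=40000):
    # (1/sqrt(2*pi)) * integral of e^{-2|x|} e^{-i*omega*x}
    #   = (2/sqrt(2*pi)) * integral_0^inf e^{-2x} cos(omega*x) dx   (even integrand)
    h = cutoff / m
    s = sum(math.exp(-2 * (k + 0.5) * h) * math.cos(omega * (k + 0.5) * h)
            for k in range(m))
    return 2 * h * s / math.sqrt(2 * math.pi)

def ft_closed(omega):
    # printed answer 1(i)
    return 4 / ((omega ** 2 + 4) * math.sqrt(2 * math.pi))

for w in (0.0, 1.0, 3.0):
    print(w, ft_numeric(w), ft_closed(w))
```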


5. (i) $\sqrt{\frac{2}{\pi}}\cdot\frac{1}{\omega^2 + 1}$ (ii) $\frac{1}{\sqrt{\omega}}$ (iii) $\frac{1}{3}\sqrt{\frac{\pi}{2}}\,e^{-3\omega}$ (iv) $\sqrt{\frac{2}{\pi}}\cdot\frac{\sin\omega a}{\omega}$ (v) $\sqrt{\frac{2}{\pi}}\cdot\frac{2\cos\omega\,(1 - \cos\omega)}{\omega^2}$

6. (i) $\frac{1}{\sqrt{2\pi}}\left[\frac{\omega - 3}{(\omega - 3)^2 + 1} + \frac{\omega + 3}{(\omega + 3)^2 + 1}\right]$, $\frac{1}{\sqrt{2\pi}}\left[\frac{1}{(\omega - 3)^2 + 1} + \frac{1}{(\omega + 3)^2 + 1}\right]$
(ii) $\frac{1}{\sqrt{2\pi}}\left(\frac{1}{\sqrt{\omega - 5}} - \frac{1}{\sqrt{\omega + 5}}\right)$, $\frac{1}{\sqrt{2\pi}}\left(\frac{1}{\sqrt{\omega + 5}} - \frac{1}{\sqrt{\omega - 5}}\right)$

7. (i) $u = \frac{1}{\sqrt{4\pi t}}\int_{-\infty}^{\infty} \frac{e^{-(x - s)^2/4t}}{1 + s^2}\,ds$
(ii) $u = \frac{1}{2}\left[e^{t + x}\,\mathrm{erfc}\!\left(\frac{2t + x}{2\sqrt{t}}\right) + e^{t - x}\,\mathrm{erfc}\!\left(\frac{2t - x}{2\sqrt{t}}\right)\right]$
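The erfc solution in 7(ii) can be validated directly: each term has the classical heat-kernel form $e^{k^2 t + kx}\,\mathrm{erfc}\big(x/(2\sqrt{t}) + k\sqrt{t}\big)$, so $u_t = u_{xx}$ should hold to truncation error, and as $t \to 0^+$ the formula reduces to $e^{-|x|}$ — presumably the initial condition (inferred, not stated on this page). A finite-difference check with the standard library's `math.erfc`:

```python
import math

def u(x, t):
    # answer 7(ii): 1/2 [ e^{t+x} erfc((2t+x)/(2*sqrt(t))) + e^{t-x} erfc((2t-x)/(2*sqrt(t))) ]
    d = 2 * math.sqrt(t)
    return 0.5 * (math.exp(t + x) * math.erfc((2 * t + x) / d)
                  + math.exp(t - x) * math.erfc((2 * t - x) / d))

def heat_residual(x, t, h=1e-3):
    # central differences for u_t - u_xx
    ut = (u(x, t + h) - u(x, t - h)) / (2 * h)
    uxx = (u(x + h, t) - 2 * u(x, t) + u(x - h, t)) / h**2
    return ut - uxx

print(abs(heat_residual(0.5, 0.5)))  # truncation-level, effectively zero
print(u(1.0, 1e-8), math.exp(-1))    # small-t profile approaches e^{-|x|}
```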

8. (i) $u = \frac{1}{\pi}\int_{-\infty}^{\infty} \frac{y\,e^{-s^2}}{y^2 + (x - s)^2}\,ds$ (ii) $u = \frac{1}{2}\big(f(x + ct) + f(x - ct)\big)$

9. (i) $u = A\,\mathrm{erfc}\!\left(\frac{x}{2\sqrt{t}}\right)$
(ii) $u = \frac{e^{a^2 t}}{2}\left[e^{-ax}\,\mathrm{erfc}\!\left(a\sqrt{t} - \frac{x}{2\sqrt{t}}\right) - e^{ax}\,\mathrm{erfc}\!\left(a\sqrt{t} + \frac{x}{2\sqrt{t}}\right)\right]$

10. (i) $u = \frac{x}{x^2 + (y + 1)^2}$ (ii) $u = \frac{2}{\pi}\int_0^{\infty} \frac{e^{-\omega x}\cos\omega y}{1 + \omega^2}\,d\omega$

11. $u = \frac{(x^2 + y^2 - 1)y + x^2 - y^2 + 1}{(x^2 + y^2)^2 + 2(x^2 - y^2) + 1}$
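Answer 10(i) should be harmonic in the upper half plane with boundary value $u(x, 0) = x/(x^2 + 1)$ — indeed $x/(x^2 + (y + 1)^2) = \mathrm{Re}\,\frac{1}{x + i(y + 1)}$. A quick check:

```python
def u(x, y):
    # answer 10(i)
    return x / (x**2 + (y + 1)**2)

def laplacian(x, y, h=1e-3):
    # five-point finite-difference Laplacian
    return (u(x + h, y) + u(x - h, y) + u(x, y + h) + u(x, y - h)
            - 4 * u(x, y)) / h**2

print(abs(laplacian(0.5, 0.5)))  # effectively zero: u is harmonic
print(u(2.0, 0.0), 2.0 / 5.0)    # boundary value x/(x^2 + 1) at x = 2
```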

Chapter 7
5. (i) $c_i = \frac{2(\alpha_i^2 + 9)}{\alpha_i^3 J_1(\alpha_i)} - \frac{18\,J_{0i}^*}{\alpha_i J_1(\alpha_i)}$, where $J_{0i}^* = \int_0^{\alpha_i} J_0(t)\,dt$
(ii) $c_i = -\frac{16 J_0(\beta_i)}{\beta_i^3 J_2^2(\beta_i)}$
where $\alpha_i$ and $\beta_i$ are the zeros of $J_0(x)$ and $J_1(x)$ respectively.

10. (i) $c_0 = \frac{2}{\pi}$, $c_1 = 0$, $c_2 = \frac{10(\pi^2 - 12)}{\pi^3}$, $c_3 = 0$, $c_4 = \frac{18(\pi^4 - 180\pi^2 + 1680)}{\pi^5}$, $\dots$
(ii) $c_0 = \frac{e - e^{-1}}{2}$, $c_1 = 3e^{-1}$, $c_2 = \frac{5(e - 7e^{-1})}{2}$, $c_3 = \frac{259e^{-1} - 35e}{2}$, $c_4 = 162e - 1197e^{-1}$, $\dots$
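The zeros $\alpha_i$ used in problem 5 have no closed form, but they are easy to generate without a special-function library. A minimal sketch using the integral representation $J_0(x) = \frac{1}{\pi}\int_0^{\pi}\cos(x\sin\theta)\,d\theta$ and bisection (the first zero is $\alpha_1 \approx 2.404826$):

```python
import math

def J0(x, m=2000):
    # Bessel J0 via its integral representation, midpoint rule
    h = math.pi / m
    return h / math.pi * sum(math.cos(x * math.sin((k + 0.5) * h))
                             for k in range(m))

def bisect(f, a, b, tol=1e-10):
    # simple bisection; assumes a sign change on [a, b]
    fa = f(a)
    while b - a > tol:
        c = 0.5 * (a + b)
        fc = f(c)
        if fa * fc <= 0:
            b = c
        else:
            a, fa = c, fc
    return 0.5 * (a + b)

alpha1 = bisect(J0, 2.0, 3.0)
print(alpha1)  # about 2.404826
```

Later zeros are spaced roughly $\pi$ apart, so shifting the bracket locates $\alpha_2, \alpha_3, \dots$ the same way.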

11. (i) $u(t, r, \theta) = 8\sum_{n=1}^{\infty} e^{-\omega_{0n}^2 t}\,\frac{J_0(\omega_{0n} r)}{\omega_{0n}^3 J_1(\omega_{0n})} - 2\sum_{n=1}^{\infty} e^{-\omega_{1n}^2 t}\,\frac{J_1(\omega_{1n} r)}{\omega_{1n}^2 J_2(\omega_{1n})}\,\sin\theta$
(ii) $u(t, r, \theta) = 8\sum_{n=1}^{\infty} e^{-\omega_{0n}^2 t}\,\frac{J_0(\omega_{0n} r)}{\omega_{0n}^3 J_1(\omega_{0n})} + 2\sum_{n=1}^{\infty} e^{-\omega_{2n}^2 t}\,\frac{J_2(\omega_{2n} r)}{\omega_{2n}^2 J_3(\omega_{2n})}\,\sin 2\theta$
where $\omega_{0n}$, $\omega_{1n}$ and $\omega_{2n}$ are the zeros of $J_0(x)$, $J_1(x)$ and $J_2(x)$ respectively.

12. (i) $u = 1 + 2\rho\cos\phi$
(ii) $u = \frac{\pi}{4} - \frac{5\pi}{32}\,\rho^2 P_2(\cos\phi) - \frac{9\pi}{256}\,\rho^4 P_4(\cos\phi) - \frac{65\pi}{4096}\,\rho^6 P_6(\cos\phi) - \cdots$
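The coefficients in 10(ii) match the Legendre expansion of $f(x) = e^x$, i.e. $c_n = \frac{2n+1}{2}\int_{-1}^{1} e^x P_n(x)\,dx$ (inferred from $c_0 = (e - e^{-1})/2 = \sinh 1$ and $c_1 = 3e^{-1}$). A numerical cross-check using the three-term recurrence for $P_n$:

```python
import math

def legendre_coeff(f, n, m=20000):
    # c_n = (2n+1)/2 * integral_{-1}^{1} f(x) P_n(x) dx, midpoint rule;
    # P_n(x) from the recurrence j*P_j = (2j-1)*x*P_{j-1} - (j-1)*P_{j-2}
    h = 2.0 / m
    total = 0.0
    for k in range(m):
        x = -1.0 + (k + 0.5) * h
        p_prev, p = 1.0, x          # P_0 and P_1
        if n == 0:
            p = p_prev
        else:
            for j in range(2, n + 1):
                p_prev, p = p, ((2 * j - 1) * x * p - (j - 1) * p_prev) / j
        total += f(x) * p
    return (2 * n + 1) / 2 * h * total

print(legendre_coeff(math.exp, 0), math.sinh(1))                  # c_0
print(legendre_coeff(math.exp, 1), 3 / math.e)                    # c_1
print(legendre_coeff(math.exp, 4), 162 * math.e - 1197 / math.e)  # c_4
```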

Index

A
Advection equation, 3

B
Bessel functions, 176–183
Boundary conditions, 6, 12
  prescribed temperature, 7
  prescribed temperature flux, 7
  radiating, 7
Burgers’ equation, 13

C
Charpit’s method, 39
  with BCs, 44
Compatibility, 41
Continuity equation, 8

D
D’Alembert solution, 64
Diffusion equation, 4
  on a disc, 164
  with symmetry, 165
  without symmetry, 166

E
Eikonal equation, 14
Equation
  advection, 3
  Burgers’, 13
  continuity, 8
  diffusion, 4
  Eikonal, 14
  equilibrium, 14
  Fisher’s, 13
  Fitzhugh-Nagumo, 13
  Gross-Pitaevskii, 14
  heat, 97
    nonhomogeneous boundary conditions, 105
    nonhomogeneous equation, 111
    solution dependent convective term, 118
    solution dependent source term, 114
  Korteweg deVries (KdV), 13
  Laplace’s, 7, 120
  Laplace’s - arbitrary rectangular domain, 127
  Liouville’s, 16
  Maxwell’s, 15
  modified Korteweg deVries (mKdV), 17
  Navier-Stokes, 14
  Plateau, 14
  Schrödinger, 14
  Sine-Gordon, 14, 16
  wave, 10, 62, 137
Equilibrium equations, 14
Extensions, odd and even, 84

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 D. Arrigo, An Introduction to Partial Differential Equations, Synthesis Lectures on Mathematics & Statistics, https://doi.org/10.1007/978-3-031-22087-6


F
Fisher’s equation, 13
Fitzhugh-Nagumo equation, 13
Fourier Cosine series, 88
Fourier Cosine transform, 152
  operational properties, 153
Fourier series, 69–84
  double, 163
  on [−π, π], 74
  on [−L, L], 79
Fourier Sine series, 85
Fourier Sine transform, 152
  operational properties, 153
Fourier’s law, 6
Fourier transform, 145
  operational properties, 147

G
Gross-Pitaevskii equation, 14

H
Heat equation, 97
  nonhomogeneous boundary conditions, 105
  nonhomogeneous equation, 111
  on a disc, 164
  solution dependent convective term, 118
  solution dependent source term, 114
  2 + 1 dimensional, 161

I
Initial conditions, 6
Inverse Fourier Cosine transform, 152
Inverse Fourier Sine transform, 152
Inverse Fourier transform, 145

K
Korteweg deVries (KdV) equation, 13

L
Laplace’s equation, 7, 120
  arbitrary rectangular domain, 127
  Cylindrical Polar, 171
  Disc, 132
  Spherical Polar, 174
  3D Cartesian, 169

Legendre functions, 183–186
Liouville’s equation, 16

M
Maxwell’s equation, 15
Method of characteristics, 28
Modified Korteweg deVries (mKdV) equation, 17

N
Navier-Stokes equations, 14

P
Partial differential equation
  first order
    constant coefficient, 19
    higher dimensional, 34
    linear, 23
    nonlinear, 35
    quasilinear, 30
    method of characteristics, 28, 35
  second order, 49
    elliptic, 60
    hyperbolic, 54
    modified hyperbolic, 55
    parabolic, 52
    regular hyperbolic, 57
Plateau equation, 14
Poisson’s integral formula, 136
Principle of superposition, 71

S
Schrödinger equation, 14
Series
  Bessel, 181
  Fourier, 69
  Fourier Cosine, 88
  Fourier Sine, 85
  Legendre, 185
Sine-Gordon equation, 14, 16
Standard forms, 50
  elliptic, 60
  hyperbolic, 50, 54
  modified hyperbolic, 55
  parabolic, 50, 52
  regular hyperbolic, 57

T
Transform
  Fourier, 145
  Fourier Cosine, 152
  Fourier Sine, 152
  Inverse Fourier, 145
  Inverse Fourier Cosine, 152
  Inverse Fourier Sine, 152

W
Wave equation, 10, 62, 137
  first order, 32