Partial Differential Equations


Niels Walet, University of Manchester

This text is disseminated via the Open Education Resource (OER) LibreTexts Project (https://LibreTexts.org) and like the hundreds of other texts available within this powerful platform, it is freely available for reading, printing and "consuming." Most, but not all, pages in the library have licenses that may allow individuals to make changes, save, and print this book. Carefully consult the applicable license(s) before pursuing such effects. Instructors can adopt existing LibreTexts texts or Remix them to quickly build course-specific resources to meet the needs of their students. Unlike traditional textbooks, LibreTexts’ web based origins allow powerful integration of advanced features and new technologies to support learning.

The LibreTexts mission is to unite students, faculty and scholars in a cooperative effort to develop an easy-to-use online platform for the construction, customization, and dissemination of OER content to reduce the burdens of unreasonable textbook costs to our students and society. The LibreTexts project is a multi-institutional collaborative venture to develop the next generation of open-access texts to improve postsecondary education at all levels of higher learning by developing an Open Access Resource environment. The project currently consists of 14 independently operating and interconnected libraries that are constantly being optimized by students, faculty, and outside experts to supplant conventional paper-based books. These free textbook alternatives are organized within a central environment that is both vertically (from advanced to basic level) and horizontally (across different fields) integrated. The LibreTexts libraries are Powered by NICE CXOne and are supported by the Department of Education Open Textbook Pilot Project, the UC Davis Office of the Provost, the UC Davis Library, the California State University Affordable Learning Solutions Program, and Merlot. This material is based upon work supported by the National Science Foundation under Grant Nos. 1246120, 1525057, and 1413739. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation or the US Department of Education. Have questions or comments? For information about adoptions or adaptations contact [email protected]. More information on our activities can be found via Facebook (https://facebook.com/Libretexts), Twitter (https://twitter.com/libretexts), or our blog (http://Blog.Libretexts.org). This text was compiled on 10/01/2023

TABLE OF CONTENTS Licensing

1: Introduction to Partial Differential Equations 1.1: Ordinary Differential Equations 1.2: PDE’s

2: Classication of Partial Differential Equations 2.1: Examples of PDE 2.2: Second Order PDE 2.3: More than 2D 2.E: Classication of Partial Differential Equations (Exercises)

3: Boundary and Initial Conditions 3.1: Introduction to Boundary and Initial Conditions 3.2: Explicit Boundary Conditions 3.3: Implicit Boundary Conditions 3.4: More Realistic Examples of Boundary and Initial Conditions

4: Fourier Series 4.1: Taylor Series 4.2: Introduction to Fourier Series 4.3: Periodic Functions 4.4: Orthogonality and Normalization 4.5: When is it a Fourier Series? 4.6: Fourier series for even and odd functions 4.7: Convergence of Fourier series

5: Separation of Variables on Rectangular Domains 5.1: Cookbook 5.2: Parabolic Equation 5.3: Hyperbolic Equation 5.4: Laplace’s Equation 5.5: More Complex Initial/Boundary Conditions 5.6: Inhomogeneous Equations

6: D’Alembert’s Solution to the Wave Equation 6.1: Background to D’Alembert’s Solution 6.2: New Variables 6.3: Examples

7: Polar and Spherical Coordinate Systems 7.1: Polar Coordinates 7.2: Spherical Coordinates

1

https://math.libretexts.org/@go/page/24038

8: Separation of Variables in Polar Coordinates 8.1: Example 8.2: Three cases for λ 8.3: Putting it all together

9: Series Solutions of ODEs (Frobenius’ Method) 9.1: Frobenius’ Method 9.2: Singular Points 9.3: Special Cases

10: Bessel Functions and Two-Dimensional Problems 10.1: Temperature on a Disk 10.2: Bessel’s Equation 10.3: Gamma Function 10.4: Bessel Functions of General Order 10.5: Properties of Bessel functions 10.6: Sturm-Liouville Theory 10.7: Our Initial Problem and Bessel Functions 10.8: Fourier-Bessel Series 10.9: Back to our Initial Problem

11: Separation of Variables in Three Dimensions 11.1: Modelling the Eye 11.2: Properties of Legendre Polynomials 11.3: Fourier-Legendre Series 11.4: Modelling the eye–revisited

Index Detailed Licensing


Licensing A detailed breakdown of this resource's licensing can be found in Back Matter/Detailed Licensing.


CHAPTER OVERVIEW 1: Introduction to Partial Differential Equations In this course we shall consider so-called linear Partial Differential Equations (PDE's). This chapter is intended to give a short definition of such equations, and a few of their properties. However, before introducing a new set of definitions, let me remind you of the so-called ordinary differential equations (ODE's) you have encountered in many physical problems. 1.1: Ordinary Differential Equations 1.2: PDE's Thumbnail: Visualization of heat transfer in a pump casing, created by solving the heat equation. Heat is being generated internally in the casing and being cooled at the boundary, providing a steady state temperature distribution. (CC-BY-SA-3.0; Wikipedia A1). This page titled 1: Introduction to Partial Differential Equations is shared under a CC BY-NC-SA 2.0 license and was authored, remixed, and/or curated by Niels Walet via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.


1.1: Ordinary Differential Equations ODE's are equations involving an unknown function and its derivatives, where the function depends on a single variable, e.g., the equation for a particle moving at constant velocity,

$$\frac{dx(t)}{dt} = v,$$

which has the well-known solution

$$x(t) = vt + x_0.$$

The unknown constant $x_0$ is called an integration constant, and can be determined if we know where the particle is located at time $t = 0$. If we go to a second order equation (i.e., one containing the second derivative of the unknown function), we find more integration constants: the harmonic oscillator equation

$$\frac{d^2 x(t)}{dt^2} = -\omega^2 x(t)$$

has as solution

$$x = A\cos \omega t + B \sin \omega t,$$

which contains two constants. As we can see from these simple examples, and as you well know from experience, such equations are relatively straightforward to solve in general form. We need to know only the position and velocity at one time to fix all constants. This page titled 1.1: Ordinary Differential Equations is shared under a CC BY-NC-SA 2.0 license and was authored, remixed, and/or curated by Niels Walet via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
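The claim that the two-constant expression solves the oscillator equation can be checked numerically. The sketch below compares a finite-difference second derivative against $-\omega^2 x(t)$; the values of omega, A and B are illustrative free choices, not fixed by the text.

```python
import math

# Illustrative parameters: omega, A, B are free choices, not from the text.
omega, A, B = 2.0, 1.0, 0.5

def x(t):
    # General solution x(t) = A cos(omega t) + B sin(omega t)
    return A * math.cos(omega * t) + B * math.sin(omega * t)

def second_derivative(f, t, h=1e-4):
    # Central finite-difference approximation of f''(t)
    return (f(t + h) - 2 * f(t) + f(t - h)) / h**2

# Check x''(t) ≈ -omega^2 x(t) at a few sample times
residuals = [abs(second_derivative(x, t) + omega**2 * x(t)) for t in (0.0, 0.7, 1.9)]
```

Any other choice of A and B works equally well, which is exactly the statement that two integration constants remain free.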


1.2: PDE's Rather than giving a strict mathematical definition, let us look at an example of a PDE, the heat equation in 1 space dimension:

$$\frac{\partial^2 u(x,t)}{\partial x^2} = \frac{1}{k}\frac{\partial u(x,t)}{\partial t}. \tag{1.2.1}$$

It is a PDE since partial derivatives are involved.

Review

To remind you of what that means: $\frac{\partial u(x,t)}{\partial x}$ denotes the differentiation of $u(x,t)$ w.r.t. $x$ keeping $t$ fixed, e.g.,

$$\frac{\partial}{\partial x}\left(x^2 t + x t^2\right) = 2xt + t^2.$$

Equation 1.2.1 is called linear since $u$ and its derivatives appear linearly, i.e., once per term. No functions of $u$ are allowed. Terms like $u^2$, $\sin(u)$, $u\frac{\partial u}{\partial x}$, etc., break this rule, and lead to non-linear equations. These are interesting and important in their own right, but outside the scope of this course. Equation 1.2.1 is also homogeneous (which just means that every term involves either $u$ or one of its derivatives; there is no term that does not contain $u$). The equation

$$\frac{\partial^2}{\partial x^2}u(x,t) = \frac{1}{k}\frac{\partial}{\partial t}u(x,t) + \sin(x)$$

is called inhomogeneous, due to the $\sin(x)$ term on the right, which is independent of $u$.

Linearity

Why is all that so important? A linear homogeneous equation allows superposition of solutions. If $u_1$ and $u_2$ are both solutions to the heat equation,

$$\frac{\partial^2}{\partial x^2}u_1(x,t) - \frac{1}{k}\frac{\partial}{\partial t}u_1(x,t) = \frac{\partial^2}{\partial x^2}u_2(x,t) - \frac{1}{k}\frac{\partial}{\partial t}u_2(x,t) = 0, \tag{1.2.2}$$

any linear combination is also a solution,

$$\frac{\partial^2}{\partial x^2}\left[a u_1(x,t) + b u_2(x,t)\right] - \frac{1}{k}\frac{\partial}{\partial t}\left[a u_1(x,t) + b u_2(x,t)\right] = 0.$$

For a linear inhomogeneous equation this gets somewhat modified. Let $v$ be any solution to the heat equation with a $\sin(x)$ inhomogeneity,

$$\frac{\partial^2}{\partial x^2}v(x,t) - \frac{1}{k}\frac{\partial}{\partial t}v(x,t) = \sin(x).$$

In that case $v + a u_1$, with $u_1$ a solution to the homogeneous equation, see Equation 1.2.2, is also a solution,

$$\frac{\partial^2}{\partial x^2}\left[v(x,t) + a u_1(x,t)\right] - \frac{1}{k}\frac{\partial}{\partial t}\left[v(x,t) + a u_1(x,t)\right] = \frac{\partial^2}{\partial x^2}v(x,t) - \frac{1}{k}\frac{\partial}{\partial t}v(x,t) + a\left(\frac{\partial^2}{\partial x^2}u_1(x,t) - \frac{1}{k}\frac{\partial}{\partial t}u_1(x,t)\right) = \sin(x).$$

Finally, we would like to define the order of a PDE as the order of the highest derivative, even if it is a mixed derivative (w.r.t. more than one variable).
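Superposition can be verified numerically. In the sketch below, the diffusion constant k and the two decaying sine modes are assumed example solutions of the heat equation, chosen for illustration rather than taken from the text.

```python
import math

k = 0.5  # an assumed diffusion constant for illustration

def mode(n):
    # u_n(x, t) = exp(-k n^2 t) sin(n x) solves u_xx - (1/k) u_t = 0
    return lambda x, t: math.exp(-k * n**2 * t) * math.sin(n * x)

def heat_residual(u, x, t, h=1e-4):
    # Finite-difference estimate of  u_xx - (1/k) u_t
    u_xx = (u(x + h, t) - 2 * u(x, t) + u(x - h, t)) / h**2
    u_t = (u(x, t + h) - u(x, t - h)) / (2 * h)
    return u_xx - u_t / k

u1, u2 = mode(1), mode(2)
a, b = 3.0, -1.5
combo = lambda x, t: a * u1(x, t) + b * u2(x, t)   # a u1 + b u2
res = abs(heat_residual(combo, x=0.3, t=0.2))
```

The residual stays at the level of the finite-difference error for any choice of a and b, which is the content of Equation 1.2.2 and the line following it.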


Exercise 1.2.1

Which of these equations is linear? And which is homogeneous?

a. $\frac{\partial^2 u}{\partial x^2} + x\frac{\partial u}{\partial y} = x^2 + y^2$

b. $y^2\frac{\partial^2 u}{\partial x^2} + u\frac{\partial u}{\partial x} + \frac{\partial^2 u}{\partial y^2} = 0$

c. $\frac{\partial^2 u}{\partial x^2} + x\frac{\partial u}{\partial x} = 0$

Answer

TBA

Exercise 1.2.2

What is the order of the following equations?

a. $\frac{\partial^2 u}{\partial x^2} + \frac{\partial u}{\partial y} = 0$

b. $\frac{\partial^2 u}{\partial x^2} - 2\frac{\partial^4 u}{\partial x^3 \partial y} + \frac{\partial^2 u}{\partial y^2} = 0$

Answer

TBA

This page titled 1.2: PDE’s is shared under a CC BY-NC-SA 2.0 license and was authored, remixed, and/or curated by Niels Walet via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.


CHAPTER OVERVIEW 2: Classification of Partial Differential Equations 2.1: Examples of PDE 2.2: Second Order PDE 2.3: More than 2D 2.E: Classification of Partial Differential Equations (Exercises)

This page titled 2: Classification of Partial Differential Equations is shared under a CC BY-NC-SA 2.0 license and was authored, remixed, and/or curated by Niels Walet via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.


2.1: Examples of PDE Partial differential equations occur in many different areas of physics, chemistry and engineering. Let me give a few examples, with their physical context. Here, as is common practice, I shall write $\nabla^2$ to denote the sum

$$\nabla^2 = \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} + \ldots$$

The wave equation:

$$\nabla^2 u = \frac{1}{c^2}\frac{\partial^2 u}{\partial t^2}$$

This can be used to describe the motion of a string or drumhead ($u$ is vertical displacement), as well as a variety of other waves (sound, light, ...). The quantity $c$ is the speed of wave propagation.

The heat or diffusion equation:

$$\nabla^2 u = \frac{1}{k}\frac{\partial u}{\partial t}$$

This can be used to describe the change in temperature ($u$) in a system conducting heat, or the diffusion of one substance in another ($u$ is concentration). The quantity $k$, sometimes replaced by $a^2$, is the diffusion constant, or the heat capacity. Notice the irreversible nature: if $t \to -t$ the wave equation turns into itself, but not the diffusion equation.

Laplace's equation:

$$\nabla^2 u = 0$$

Helmholtz's equation:

$$\nabla^2 u + \lambda u = 0$$

This occurs for waves in wave guides, when searching for eigenmodes (resonances).

Poisson's equation:

$$\nabla^2 u = f(x, y, \ldots)$$

The equation for the gravitational field inside a gravitating body, or the electric field inside a charged sphere.

Time-independent Schrödinger equation:

$$\nabla^2 u + \frac{2m}{\hbar^2}\left[E - V(x, y, \ldots)\right]u = 0$$

Here $|u|^2$ has a probability interpretation.

Klein-Gordon equation:

$$\nabla^2 u - \frac{1}{c^2}\frac{\partial^2 u}{\partial t^2} + \lambda^2 u = 0$$

Relativistic quantum particles; $|u|^2$ has a probability interpretation.

These are all second order differential equations. (Remember that the order is defined as the order of the highest derivative appearing in the equation.) This page titled 2.1: Examples of PDE is shared under a CC BY-NC-SA 2.0 license and was authored, remixed, and/or curated by Niels Walet via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
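As a quick numerical illustration of the wave equation: in one dimension any smooth profile travelling at speed c solves it. The Gaussian pulse and the value of c below are assumptions made for the check, not taken from the text.

```python
import math

c = 2.0  # an assumed propagation speed for illustration

def pulse(x, t):
    # Any twice-differentiable profile f(x - c t) solves the 1D wave equation
    return math.exp(-(x - c * t)**2)

def wave_residual(u, x, t, h=1e-4):
    # Finite-difference estimate of  u_xx - (1/c^2) u_tt
    u_xx = (u(x + h, t) - 2 * u(x, t) + u(x - h, t)) / h**2
    u_tt = (u(x, t + h) - 2 * u(x, t) + u(x, t - h)) / h**2
    return u_xx - u_tt / c**2

res = abs(wave_residual(pulse, x=0.4, t=0.1))
```

The same profile does not solve the diffusion equation, and reversing t preserves the wave equation but not the diffusion equation, matching the irreversibility remark above.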


2.2: Second Order PDE Second order PDE's are usually divided into three types. Let me demonstrate this for a general two-dimensional PDE:

$$a\frac{\partial^2 u}{\partial x^2} + 2c\frac{\partial^2 u}{\partial x \partial y} + b\frac{\partial^2 u}{\partial y^2} + d\frac{\partial u}{\partial x} + e\frac{\partial u}{\partial y} + fu + g = 0,$$

where the coefficients (i.e., $a, \ldots, g$) can either be constants or given functions of $x, y$. If $g$ is 0 the system is called homogeneous, otherwise it is called inhomogeneous. Now the differential equation is said to be elliptic, hyperbolic or parabolic if

$$\Delta(x, y) = ab - c^2$$

is positive, negative or zero, respectively.

Reason Behind Names

Why do we use these names? The idea is most easily explained for a case with constant coefficients, and corresponds to a classification of the associated quadratic form (replace the derivatives w.r.t. $x$ and $y$ with $\xi$ and $\eta$):

$$a\xi^2 + b\eta^2 + 2c\xi\eta + f = 0.$$

We neglect $d$ and $e$ since they only describe a shift of the origin. Such a quadratic equation can describe any of the geometrical figures discussed above. Let me show an example: $a = 3$, $b = 3$, $c = 1$ and $f = -3$. Since $ab - c^2 = 8$, this should describe an ellipse. We can write

$$3\xi^2 + 3\eta^2 + 2\xi\eta = 4\left(\frac{\xi + \eta}{\sqrt{2}}\right)^2 + 2\left(\frac{\xi - \eta}{\sqrt{2}}\right)^2 = 3, \tag{2.2.1}$$

which is indeed the equation of an ellipse, with rotated axes, as can be seen in Figure 2.2.1.

Figure 2.2.1: The ellipse corresponding to Equation 2.2.1.

We should also realize that Equation 2.2.1 can be written in the vector-matrix-vector form

$$(\xi, \eta)\begin{pmatrix} 3 & 1 \\ 1 & 3 \end{pmatrix}\begin{pmatrix} \xi \\ \eta \end{pmatrix} = 3.$$

We now recognize that $\Delta$ is nothing more than the determinant of this matrix, and it is positive if both eigenvalues have the same sign, negative if they differ in sign, and zero if one of them is zero. (Note: the simplest ellipse corresponds to $x^2 + y^2 = 1$, a parabola to $y = x^2$, and a hyperbola to $x^2 - y^2 = 1$.)
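The classification rule fits in a few lines of code; `classify` below is a hypothetical helper (not from the text) that applies $\Delta = ab - c^2$ to the leading coefficients of the general second-order PDE above.

```python
def classify(a, b, c):
    """Classify a u_xx + 2 c u_xy + b u_yy + (lower order) = 0
    via the discriminant Delta = a b - c^2."""
    delta = a * b - c * c
    if delta > 0:
        return "elliptic"
    if delta < 0:
        return "hyperbolic"
    return "parabolic"
```

For instance, the example above gives `classify(3, 3, 1)` = "elliptic" (Δ = 8), the 1D wave equation (a = 1, b = −1, c = 0) is hyperbolic, and the heat equation (a = 1, b = 0, c = 0) is parabolic.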

This page titled 2.2: Second Order PDE is shared under a CC BY-NC-SA 2.0 license and was authored, remixed, and/or curated by Niels Walet via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.


2.3: More than 2D In more than two dimensions we use a similar definition, based on the fact that all eigenvalues of the coefficient matrix have the same sign (for an elliptic equation), have different signs (hyperbolic) or one of them is zero (parabolic). This has to do with the behavior along the characteristics, as discussed below. Let me give a slightly more complex example:

$$x^2\frac{\partial^2 u}{\partial x^2} + y^2\frac{\partial^2 u}{\partial y^2} + z^2\frac{\partial^2 u}{\partial z^2} + 2xy\frac{\partial^2 u}{\partial x \partial y} + 2xz\frac{\partial^2 u}{\partial x \partial z} + 2yz\frac{\partial^2 u}{\partial y \partial z} = 0.$$

The matrix associated with this equation is

$$\begin{pmatrix} x^2 & xy & xz \\ xy & y^2 & yz \\ xz & yz & z^2 \end{pmatrix}.$$

If we evaluate its characteristic polynomial we find that it is

$$\lambda^2\left(x^2 + y^2 + z^2 - \lambda\right) = 0.$$

Since this always (for all $x, y, z$) has two zero eigenvalues, this is a parabolic differential equation.

Characteristics and Classification

A key point for classifying equations this way is not that we like the conic sections so much, but that the equations behave in very different ways if we look at the three different cases. Pick the simplest representative case for each class, and look at the lines of propagation.
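A quick numerical sanity check of the eigenvalue claim: the coefficient matrix is the outer product of (x, y, z) with itself, so it maps (x, y, z) to (x² + y² + z²)(x, y, z) and annihilates any orthogonal vector. The sample point below is an arbitrary choice for illustration.

```python
# For v = (x, y, z) the coefficient matrix is the outer product v v^T,
# so its eigenvalues are x^2 + y^2 + z^2, 0, 0: always parabolic.
x, y, z = 1.0, 2.0, -0.5   # arbitrary sample point

M = [[x*x, x*y, x*z],
     [x*y, y*y, y*z],
     [x*z, y*z, z*z]]

def matvec(M, v):
    # Multiply the 3x3 matrix M by the vector v
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

norm2 = x*x + y*y + z*z
Mv = matvec(M, [x, y, z])   # should equal norm2 * (x, y, z)
w = [y, -x, 0.0]            # any vector orthogonal to (x, y, z)
Mw = matvec(M, w)           # should be the zero vector
```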

 Characteristics and Classification A key point for classifying equations this way is not that we like the conic sections so much, but that the equations behave in very different ways if we look at the three different cases. Pick the simplest representative case for each class, and look at the lines of propagation. This page titled 2.3: More than 2D is shared under a CC BY-NC-SA 2.0 license and was authored, remixed, and/or curated by Niels Walet via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.


2.E: Classication of Partial Differential Equations (Exercises)  Exercise 2.E. 1 What is the order of the following equations 1.



3

u



3

∂x

2.



2

u

∂y

u



4

−2

2

2

+

u



2

u

+

3

∂x

=0

2

∂x u

∂y

=0

2

Answer TBA

 Exercise 2.E. 2 Classify the following differential equations (as elliptic, etc.) 1.



2

u



2



2

u



2

u





2

u 2

∂ y

=0

=0 ∂x ∂u =0 ∂x

∂u

∂u +2

=0

∂x u 2

2

∂u

+2

2

+

2

u

+

2

u

∂y

∂x

5.

2



2

2

∂y

u

∂y

∂x

4.

2

+

2





∂x∂y

∂x

3.

u +

∂x

2.

2

−2

∂y ∂

2

u

+x

∂x

∂y

2

=0

Answer TBA

This page titled 2.E: Classification of Partial Differential Equations (Exercises) is shared under a CC BY-NC-SA 2.0 license and was authored, remixed, and/or curated by Niels Walet via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.


CHAPTER OVERVIEW 3: Boundary and Initial Conditions As you all know, solutions to ordinary differential equations are usually not unique (integration constants appear in many places). This is of course equally a problem for PDE's. PDE's are usually specified through a set of boundary or initial conditions. A boundary condition expresses the behavior of a function on the boundary (border) of its area of definition. An initial condition is like a boundary condition, but for the time direction. 3.1: Introduction to Boundary and Initial Conditions 3.2: Explicit Boundary Conditions 3.3: Implicit Boundary Conditions 3.4: More Realistic Examples of Boundary and Initial Conditions

This page titled 3: Boundary and Initial Conditions is shared under a CC BY-NC-SA 2.0 license and was authored, remixed, and/or curated by Niels Walet via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.


3.1: Introduction to Boundary and Initial Conditions As you all know, solutions to ordinary differential equations are usually not unique (integration constants appear in many places). This is of course equally a problem for PDE's. PDE's are usually specified through a set of boundary or initial conditions. A boundary condition expresses the behavior of a function on the boundary (border) of its area of definition. An initial condition is like a boundary condition, but for the time direction. Not all boundary conditions allow for solutions, but usually the physics suggests what makes sense. Let me remind you of the situation for ordinary differential equations, one you should all be familiar with: a particle under the influence of a constant force,

$$\frac{\partial^2 x}{\partial t^2} = a, \tag{3.1.1}$$

which leads to

$$\frac{\partial x}{\partial t} = at + v_0, \tag{3.1.2}$$

and

$$x = \frac{1}{2}at^2 + v_0 t + x_0. \tag{3.1.3}$$

This contains two integration constants. Standard practice would be to specify $\frac{\partial x}{\partial t}(t = 0) = v_0$ and $x(t = 0) = x_0$. These are linear initial conditions (linear since they only involve $x$ and its derivatives linearly), which have at most a first derivative in them. This one order difference between boundary condition and equation persists to PDE's. It is fairly obvious that, since the equation already involves that derivative, we cannot specify the same derivative in a different equation.

The important difference between the arbitrariness of integration constants in ODE's and PDE's is that whereas solutions of ODE's contain arbitrary constants, solutions of PDE's contain arbitrary functions. Let me give an example. Take

$$u = y f(x),$$

then

$$\frac{\partial u}{\partial y} = f(x).$$

This can be used to eliminate $f$ from the first of the equations, giving

$$u = y\frac{\partial u}{\partial y},$$

which has the general solution $u = y f(x)$. One can construct more complicated examples. Consider

$$u(x, y) = f(x + y) + g(x - y),$$

which gives on double differentiation

$$\frac{\partial^2 u}{\partial x^2} - \frac{\partial^2 u}{\partial y^2} = 0.$$

The problem is that without additional conditions the arbitrariness in the solutions makes it almost useless (if possible) to write down the general solution. We need additional conditions that reduce this freedom. In most physical problems these are boundary conditions, which describe how the system behaves on its boundaries (for all times), and initial conditions, which specify the state of the system at an initial time $t = 0$. In the ODE problem discussed before we have two initial conditions (velocity and position at time $t = 0$). This page titled 3.1: Introduction to Boundary and Initial Conditions is shared under a CC BY-NC-SA 2.0 license and was authored, remixed, and/or curated by Niels Walet via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
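The statement that $u = f(x+y) + g(x-y)$ solves the equation for arbitrary $f$ and $g$ can be confirmed numerically; in the sketch below f and g are arbitrary smooth choices made only for the check, not taken from the text.

```python
import math

# Arbitrary smooth choices for the free functions f and g
f = lambda s: math.sin(s)
g = lambda s: s**3

def u(x, y):
    # u(x, y) = f(x + y) + g(x - y) for any twice-differentiable f, g
    return f(x + y) + g(x - y)

def residual(x, y, h=1e-4):
    # Finite-difference estimate of  u_xx - u_yy
    u_xx = (u(x + h, y) - 2 * u(x, y) + u(x - h, y)) / h**2
    u_yy = (u(x, y + h) - 2 * u(x, y) + u(x, y - h)) / h**2
    return u_xx - u_yy

res = abs(residual(0.3, 0.8))
```

Swapping in any other pair of smooth functions leaves the residual at round-off level, which is exactly the "arbitrary functions" freedom described above.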


3.2: Explicit Boundary Conditions For the problems of interest here we shall only consider linear boundary conditions, which express a linear relation between the function and its partial derivatives, e.g.,

$$u(x, y = 0) + x\frac{\partial u}{\partial x}(x, y = 0) = 0.$$

As before, the maximal order of the derivative in the boundary condition is one order lower than the order of the PDE. For a second order differential equation we have three possible types of boundary condition.

Dirichlet Boundary Condition

When we specify the value of $u$ on the boundary, we speak of Dirichlet boundary conditions. An example for a vibrating string with its ends, at $x = 0$ and $x = L$, fixed would be

$$u(0, t) = u(L, t) = 0.$$

Neumann Boundary Conditions

In multidimensional problems the derivative of a function w.r.t. each of the variables forms a vector field (i.e., a function that takes a vector value at each point of space), usually called the gradient. For three variables this takes the form

$$\operatorname{grad} f(x, y, z) = \nabla f(x, y, z) = \left(\frac{\partial f}{\partial x}(x, y, z), \frac{\partial f}{\partial y}(x, y, z), \frac{\partial f}{\partial z}(x, y, z)\right).$$

Figure 3.2.1: A sketch of the normal derivatives used in the Neumann boundary conditions.

Typically we cannot specify the gradient at the boundary, since that is too restrictive to allow for solutions. We can – and in physical problems often need to – specify the component normal to the boundary; see Figure 3.2.1 for an example. When this normal derivative is specified we speak of Neumann boundary conditions. In the case of an insulated (infinitely thin) rod of length $a$, we cannot have a heat flux beyond the ends, so the gradient of the temperature must vanish there (heat can only flow where a difference in temperature exists). This leads to the BC

$$\frac{\partial u}{\partial x}(0, t) = \frac{\partial u}{\partial x}(a, t) = 0.$$

Mixed (Robin's) Boundary Conditions

We can of course mix Dirichlet and Neumann boundary conditions. For the thin rod example given above we could require

$$u(0, t) + \frac{\partial u}{\partial x}(0, t) = u(a, t) + \frac{\partial u}{\partial x}(a, t) = 0.$$
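The difference between the two basic boundary-condition types shows up clearly in a finite-difference solver. The following is a minimal sketch of the heat equation on a rod (the grid size, diffusion constant and time step are illustrative choices, not from the text): ends held at zero drain heat away, while insulated ends preserve a uniform temperature.

```python
# Explicit finite-difference sketch of u_t = k u_xx on a rod of n grid points.
k, n, dx, dt = 1.0, 21, 0.05, 0.001   # illustrative; dt < dx^2/(2k) for stability

def step(u, bc):
    # One explicit Euler step with the chosen boundary-condition type
    new = u[:]
    for i in range(1, n - 1):
        new[i] = u[i] + k * dt / dx**2 * (u[i+1] - 2 * u[i] + u[i-1])
    if bc == "dirichlet":              # value fixed: u = 0 at both ends
        new[0] = new[-1] = 0.0
    else:                              # Neumann: du/dx = 0 (insulated ends)
        new[0], new[-1] = new[1], new[-2]
    return new

u_dir = u_neu = [1.0] * n              # same uniform initial temperature
for _ in range(200):
    u_dir, u_neu = step(u_dir, "dirichlet"), step(u_neu, "neumann")
```

After 200 steps the Dirichlet rod has cooled everywhere in the interior, while the insulated (Neumann) rod is still exactly at its initial temperature, as it must be when no heat can cross the ends.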

This page titled 3.2: Explicit Boundary Conditions is shared under a CC BY-NC-SA 2.0 license and was authored, remixed, and/or curated by Niels Walet via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.


3.3: Implicit Boundary Conditions In many physical problems we have implicit boundary conditions, which just means that there are certain conditions we wish to be satisfied. This is usually the case for systems defined on an infinite domain. For the case of the Schrödinger equation this usually means that we require the wave function to be normalizable. We thus have to disallow the wave function blowing up at infinity. Sometimes we implicitly assume continuity or differentiability. In general one should be careful about such implicit BC's, which may be extremely important. This page titled 3.3: Implicit Boundary Conditions is shared under a CC BY-NC-SA 2.0 license and was authored, remixed, and/or curated by Niels Walet via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.


3.4: More Realistic Examples of Boundary and Initial Conditions A String with Fixed Endpoints Consider a string fixed at x = 0 and x = a , as in Figure 3.4.1

Figure 3.4.1 : A string with fixed endpoints.

It satisfies the wave equation

$$\frac{1}{c^2}\frac{\partial^2 u}{\partial t^2} = \frac{\partial^2 u}{\partial x^2}, \qquad 0 < x < a,$$

with boundary conditions

$$u(0, t) = u(a, t) = 0, \qquad t > 0,$$

and initial conditions

$$u(x, 0) = f(x), \qquad \frac{\partial u}{\partial t}(x, 0) = g(x).$$

A String with Freely Floating Endpoints Consider a string with ends fastened to air bearings that are fixed to a rod orthogonal to the x-axis. Since the bearings float freely there should be no force along the rods, which means that the string is horizontal at the bearings (Figure 3.4.2).

Figure 3.4.2 : A string with floating endpoints.

It satisfies the wave equation with the same initial conditions as above, but the boundary conditions now are

$$\frac{\partial u}{\partial x}(0, t) = \frac{\partial u}{\partial x}(a, t) = 0, \qquad t > 0.$$

These are clearly of Neumann type.

A String with Endpoints Fixed to Strings To illustrate mixed boundary conditions we make an even more complicated contraption where we fix the endpoints of the string to springs, with equilibrium at y = 0 , see Figure 3.4.3 for a sketch.

Figure 3.4.3 : A string with endpoints fixed to springs.

Hooke's law states that the force exerted by the spring (along the $y$ axis) is $F = -ku(0, t)$, where $k$ is the spring constant. This must be balanced by the force the string exerts on the spring, which is equal to the tension $T$ in the string. The component parallel to the $y$ axis is $T\sin\alpha$, where $\alpha$ is the angle with the horizontal, see Figure 3.4.4.


Figure 3.4.4 : the balance of forces at one endpoint of the string of Figure 3.4.3

For small $\alpha$ we have

$$\sin\alpha \approx \tan\alpha = \frac{\partial u}{\partial x}(0, t).$$

Since both forces should cancel we find

$$u(0, t) - \frac{T}{k}\frac{\partial u}{\partial x}(0, t) = 0, \qquad t > 0,$$

and

$$u(a, t) - \frac{T}{k}\frac{\partial u}{\partial x}(a, t) = 0, \qquad t > 0.$$

These are mixed boundary conditions. This page titled 3.4: More Realistic Examples of Boundary and Initial Conditions is shared under a CC BY-NC-SA 2.0 license and was authored, remixed, and/or curated by Niels Walet via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
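The fixed-endpoint string of the first example can be sketched with a standard leapfrog discretization of the wave equation. The initial shape f(x) = sin(πx/a), the choice g(x) = 0 (string released from rest), and the grid parameters are assumptions made for illustration, not part of the text.

```python
import math

# Leapfrog discretization of (1/c^2) u_tt = u_xx with u(0,t) = u(a,t) = 0.
c, a, n = 1.0, 1.0, 51                    # illustrative parameters
dx = a / (n - 1)
dt = 0.5 * dx / c                         # Courant number 0.5 for stability

f = lambda x: math.sin(math.pi * x / a)   # initial shape, zero at both ends
u_prev = [f(i * dx) for i in range(n)]
# g(x) = 0: string starts at rest, so approximate u(x, dt) by u(x, 0)
u_now = u_prev[:]

for _ in range(100):
    u_next = [0.0] * n                    # endpoints stay pinned at zero
    for i in range(1, n - 1):
        u_next[i] = (2 * u_now[i] - u_prev[i]
                     + (c * dt / dx)**2 * (u_now[i+1] - 2 * u_now[i] + u_now[i-1]))
    u_prev, u_now = u_now, u_next
```

The Dirichlet conditions are enforced simply by never updating the end points; the amplitude stays bounded by (roughly) its initial value, as expected for a stable scheme.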


CHAPTER OVERVIEW 4: Fourier Series In this chapter we shall discuss Fourier series. These infinite series occur in many different areas of physics, in electromagnetic theory, electronics, wave phenomena and many others. They have some similarity to – but are very different from – the Taylor series you have encountered before. 4.1: Taylor Series 4.2: Introduction to Fourier Series 4.3: Periodic Functions 4.4: Orthogonality and Normalization 4.5: When is it a Fourier Series? 4.6: Fourier series for even and odd functions 4.7: Convergence of Fourier series

This page titled 4: Fourier Series is shared under a CC BY-NC-SA 2.0 license and was authored, remixed, and/or curated by Niels Walet via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.


4.1: Taylor Series One series you have encountered before is Taylor's series,

$$f(x) = \sum_{n=0}^{\infty} f^{(n)}(a)\frac{(x - a)^n}{n!}, \tag{4.1.1}$$

where $f^{(n)}(x)$ is the $n$th derivative of $f$. An example is the Taylor series of the cosine around $x = 0$ (i.e., $a = 0$):

$$\begin{aligned}
\cos(0) &= 1, \\
\cos'(x) = -\sin(x), \qquad \cos'(0) &= 0, \\
\cos^{(2)}(x) = -\cos(x), \qquad \cos^{(2)}(0) &= -1, \\
\cos^{(3)}(x) = \sin(x), \qquad \cos^{(3)}(0) &= 0, \\
\cos^{(4)}(x) = \cos(x), \qquad \cos^{(4)}(0) &= 1.
\end{aligned}$$

Notice that after four steps we are back where we started. We have thus found (using $n = 2m$ in (4.1.1))

$$\cos x = \sum_{m=0}^{\infty} \frac{(-1)^m}{(2m)!}x^{2m}.$$

Exercise 4.1.1

Show that

$$\sin x = \sum_{m=0}^{\infty} \frac{(-1)^m}{(2m+1)!}x^{2m+1}.$$

Answer

TBA

This page titled 4.1: Taylor Series is shared under a CC BY-NC-SA 2.0 license and was authored, remixed, and/or curated by Niels Walet via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.


4.2: Introduction to Fourier Series Rather than Taylor series, which are supposed to work for "any" function, we shall study periodic functions. For periodic functions the French mathematician Fourier introduced a series in terms of sines and cosines,

$$f(x) = \frac{a_0}{2} + \sum_{n=1}^{\infty}\left[a_n\cos(nx) + b_n\sin(nx)\right].$$

We shall study how and when a function can be described by a Fourier series. One of the very important differences with Taylor series is that they can be used to approximate discontinuous functions as well as continuous ones. This page titled 4.2: Introduction to Fourier Series is shared under a CC BY-NC-SA 2.0 license and was authored, remixed, and/or curated by Niels Walet via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.

4.2.1

https://math.libretexts.org/@go/page/8356

4.3: Periodic Functions We first need to define a periodic function. A function is called periodic with period $p$ if $f(x + p) = f(x)$, for all $x$, even if $f$ is not defined everywhere. A simple example is the function $f(x) = \sin(bx)$, which is periodic with period $2\pi/b$. Of course it is also periodic with period $4\pi/b$. In general a function with period $p$ is also periodic with period $2p, 3p, \ldots$ This can easily be seen using the definition of periodicity, which subtracts $p$ from the argument:

$$f(x + 3p) = f(x + 2p) = f(x + p) = f(x).$$

The smallest positive value of $p$ for which $f$ is periodic is called the (primitive) period of $f$.

Exercise 4.3.1

What is the primitive period of $\sin(4x)$?

Answer

$\pi/2$.
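The answer can be checked numerically: sin(4x) repeats after a shift of π/2 but not after the smaller shift π/4. The sample points below are arbitrary choices for the check.

```python
import math

p = math.pi / 2                      # candidate primitive period of sin(4x)
f = lambda x: math.sin(4 * x)

# f repeats after p at every sample point ...
shifts_match = all(abs(f(x + p) - f(x)) < 1e-12 for x in (0.0, 0.3, 1.1))
# ... but fails to repeat after the smaller shift p / 2 = pi / 4
half_fails = abs(f(0.3 + p / 2) - f(0.3)) > 0.1
```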

This page titled 4.3: Periodic Functions is shared under a CC BY-NC-SA 2.0 license and was authored, remixed, and/or curated by Niels Walet via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.


4.4: Orthogonality and Normalization Consider the series

$$\frac{a_0}{2} + \sum_{n=1}^{\infty}\left[a_n\cos\left(\frac{n\pi x}{L}\right) + b_n\sin\left(\frac{n\pi x}{L}\right)\right], \qquad -L \leq x \leq L.$$

This is called a trigonometric series. If the series approximates a function $f$ (as will be discussed) it is called a Fourier series, and $a_n$ and $b_n$ are the Fourier coefficients of $f$. In order for all of this to make sense we first study the functions

$$\left\{1, \cos\left(\frac{n\pi x}{L}\right), \sin\left(\frac{n\pi x}{L}\right)\right\}, \qquad n = 1, 2, \ldots,$$

and especially their properties under integration. We find that

$$\int_{-L}^{L} 1\cdot 1\,dx = 2L, \tag{4.4.1}$$

$$\int_{-L}^{L} 1\cdot\cos\left(\frac{n\pi x}{L}\right)dx = 0, \tag{4.4.2}$$

$$\int_{-L}^{L} 1\cdot\sin\left(\frac{n\pi x}{L}\right)dx = 0, \tag{4.4.3}$$

$$\int_{-L}^{L} \cos\left(\frac{m\pi x}{L}\right)\cos\left(\frac{n\pi x}{L}\right)dx = \frac{1}{2}\int_{-L}^{L}\left[\cos\left(\frac{(m+n)\pi x}{L}\right) + \cos\left(\frac{(m-n)\pi x}{L}\right)\right]dx = \begin{cases} 0 & \text{if } n \neq m \\ L & \text{if } n = m \end{cases}, \tag{4.4.4}$$

$$\int_{-L}^{L} \sin\left(\frac{m\pi x}{L}\right)\sin\left(\frac{n\pi x}{L}\right)dx = \frac{1}{2}\int_{-L}^{L}\left[\cos\left(\frac{(m-n)\pi x}{L}\right) - \cos\left(\frac{(m+n)\pi x}{L}\right)\right]dx = \begin{cases} 0 & \text{if } n \neq m \\ L & \text{if } n = m \end{cases}, \tag{4.4.5}$$

$$\int_{-L}^{L} \cos\left(\frac{m\pi x}{L}\right)\sin\left(\frac{n\pi x}{L}\right)dx = 0. \tag{4.4.6}$$

If we consider these integrals as some kind of inner product between functions (like the standard vector inner product) we see that we could call these functions orthogonal. This is indeed standard practice, where for functions the general definition of inner product takes the form

$$(f, g) = \int_a^b w(x)f(x)g(x)\,dx.$$

If this is zero we say that the functions $f$ and $g$ are orthogonal on the interval $[a, b]$ with weight function $w$. If this function is 1, as is the case for the trigonometric functions, we just say that the functions are orthogonal on $[a, b]$. The norm of a function is now defined as the square root of the inner product of a function with itself (again, as in the case of vectors),

$$\|f\| = \sqrt{\int_a^b w(x)f(x)^2\,dx}.$$

If we define a normalised form of $f$ (like a unit vector) as $f/\|f\|$, we have

$$\left\|\frac{f}{\|f\|}\right\| = \frac{\sqrt{\int_a^b w(x)f(x)^2\,dx}}{\|f\|} = \frac{\|f\|}{\|f\|} = 1.$$

 Exercise 4.4.1 What is the normalised form of {1, cos (

nπx L

), sin (

nπx L

)}?

Answer {

1 √2L

,(

1 √L

) cos (

nπx L

), (

1 √L

) sin (

nπx L

)}

A set of mutually orthogonal functions that are all normalised is called an orthonormal set.
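As a quick numerical sanity check (a sketch, not part of the original text), the orthonormality of the normalised trigonometric functions can be verified by numerical integration; the half-period $L=2$ and the harmonics used below are arbitrary choices.

```python
import numpy as np

L = 2.0  # arbitrary half-period for this check
x = np.linspace(-L, L, 200001)
dx = x[1] - x[0]

def inner(f, g):
    # trapezoidal approximation of (f, g) = integral of f*g over [-L, L], weight w = 1
    h = f * g
    return (h[0] / 2 + h[1:-1].sum() + h[-1] / 2) * dx

one = np.ones_like(x) / np.sqrt(2 * L)        # normalised constant function
c3 = np.cos(3 * np.pi * x / L) / np.sqrt(L)   # normalised cosine, n = 3
s3 = np.sin(3 * np.pi * x / L) / np.sqrt(L)   # normalised sine, n = 3
c4 = np.cos(4 * np.pi * x / L) / np.sqrt(L)   # normalised cosine, n = 4

norm_one = inner(one, one)  # should be 1
norm_c3 = inner(c3, c3)     # should be 1
cross = inner(c3, s3)       # should be 0
cross2 = inner(c3, c4)      # should be 0
```

Each function has unit norm, and distinct members of the set are orthogonal, exactly as the integrals (4.4.1)–(4.4.6) predict.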


4.5: When is it a Fourier Series?

The series discussed before are only useful if we can associate a function with them. How can we do that? Let us assume that the periodic function $f(x)$ has a Fourier series representation (exchange the summation and integration, and use orthogonality),
$$f(x)=\frac{a_0}{2}+\sum_{n=1}^{\infty}\left[a_n\cos\left(\frac{n\pi x}{L}\right)+b_n\sin\left(\frac{n\pi x}{L}\right)\right].$$
We can now use the orthogonality of the trigonometric functions to find that
$$\frac{1}{L}\int_{-L}^{L}f(x)\cdot 1\,dx=a_0,\qquad
\frac{1}{L}\int_{-L}^{L}f(x)\cos\left(\frac{n\pi x}{L}\right)dx=a_n,\qquad
\frac{1}{L}\int_{-L}^{L}f(x)\sin\left(\frac{n\pi x}{L}\right)dx=b_n.$$
This defines the Fourier coefficients for a given $f(x)$. If these coefficients all exist we have defined a Fourier series, about whose convergence we shall talk in a later lecture.

An important property of Fourier series is given in Parseval’s lemma:
$$\int_{-L}^{L}(f(x))^2\,dx=\frac{La_0^2}{2}+L\sum_{n=1}^{\infty}(a_n^2+b_n^2).$$
This looks like a triviality, until one realises what we have done: we have once again interchanged an infinite summation and an integration. There are many cases where such an interchange fails, and when it does hold it actually makes a strong statement about the orthogonal set. This property is usually referred to as completeness. We shall only discuss complete sets in these lectures.

Now let us study an example. We consider a square wave (this example will return a few times),
$$f(x)=\begin{cases}-3 & \text{if }-5+10n<x<10n\\ 3 & \text{if } 10n<x<5+10n\end{cases},\tag{4.5.1}$$
where $n$ is an integer, as sketched in Fig. 4.5.1.

Figure 4.5.1: The square wave (4.5.1).

This function is not defined at $x=5n$. We easily see that $L=5$. The Fourier coefficients are


$$a_0=\frac{1}{5}\int_{-5}^{0}(-3)\,dx+\frac{1}{5}\int_{0}^{5}3\,dx=0,$$
$$a_n=\frac{1}{5}\int_{-5}^{0}(-3)\cos\left(\frac{n\pi x}{5}\right)dx+\frac{1}{5}\int_{0}^{5}3\cos\left(\frac{n\pi x}{5}\right)dx=0,$$
$$b_n=\frac{1}{5}\int_{-5}^{0}(-3)\sin\left(\frac{n\pi x}{5}\right)dx+\frac{1}{5}\int_{0}^{5}3\sin\left(\frac{n\pi x}{5}\right)dx
=\frac{3}{n\pi}\cos\left(\frac{n\pi x}{5}\right)\Big|_{-5}^{0}-\frac{3}{n\pi}\cos\left(\frac{n\pi x}{5}\right)\Big|_{0}^{5}
=\frac{6}{n\pi}\left[1-\cos(n\pi)\right]=\begin{cases}\dfrac{12}{n\pi} & \text{if } n \text{ odd}\\[4pt] 0 & \text{if } n \text{ even}\end{cases}.$$

And thus ($n=2m+1$)
$$f(x)=\frac{12}{\pi}\sum_{m=0}^{\infty}\frac{1}{2m+1}\sin\left(\frac{(2m+1)\pi x}{5}\right).$$

Exercise 4.5.1
What happens if we apply Parseval’s theorem to this series?

Answer
We find
$$\int_{-5}^{5}9\,dx=5\,\frac{144}{\pi^2}\sum_{m=0}^{\infty}\left(\frac{1}{2m+1}\right)^2,$$
which can be used to show that
$$\sum_{m=0}^{\infty}\left(\frac{1}{2m+1}\right)^2=\frac{\pi^2}{8}.$$
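Both results above can be checked numerically; the following sketch (my addition, with arbitrary truncation levels) evaluates a partial sum of the square-wave series at $x=2.5$, where the series should approach 3, and the Parseval sum, which should approach $\pi^2/8$.

```python
import numpy as np

def square_partial(x, M):
    # partial sum of f(x) = (12/pi) * sum_{m=0}^{M} sin((2m+1) pi x / 5)/(2m+1)
    m = np.arange(M + 1)
    return (12 / np.pi) * np.sum(np.sin((2 * m + 1) * np.pi * x / 5) / (2 * m + 1))

val = square_partial(2.5, 2000)  # should approach f(2.5) = 3

# Parseval: sum of 1/(2m+1)^2 should approach pi^2/8
m = np.arange(1_000_000)
parseval_sum = np.sum(1.0 / (2 * m + 1) ** 2)
```

The first sum converges only like $1/M$ (the function has jumps), while the Parseval sum converges much faster since its terms decay like $1/m^2$.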


4.6: Fourier series for even and odd functions

Notice that in the Fourier series of the square wave (4.5.3) all coefficients $a_n$ vanish, and the series only contains sines. This is a very general phenomenon for so-called even and odd functions.

Even and odd
A function is called even if $f(-x)=f(x)$, e.g. $\cos(x)$. A function is called odd if $f(-x)=-f(x)$, e.g. $\sin(x)$. These have somewhat different properties than the even and odd numbers:
1. The sum of two even functions is even, and of two odd ones odd.
2. The product of two even or two odd functions is even.
3. The product of an even and an odd function is odd.

Exercise 4.6.1
Which of the following functions are even or odd? a) $\sin(2x)$, b) $\sin(x)\cos(x)$, c) $\tan(x)$, d) $x^2$, e) $x^3$, f) $|x|$

Answer
even: d, f; odd: a, b, c, e.

Now if we look at a Fourier series, the Fourier cosine series
$$f(x)=\frac{a_0}{2}+\sum_{n=1}^{\infty}a_n\cos\frac{n\pi}{L}x$$
describes an even function (why?), and the Fourier sine series
$$f(x)=\sum_{n=1}^{\infty}b_n\sin\frac{n\pi}{L}x$$
an odd function. These series are interesting by themselves, but play an especially important rôle for functions defined on half the Fourier interval, i.e., on $[0,L]$ instead of $[-L,L]$. There are three possible ways to define a Fourier series in this way, see Fig. 4.6.1:
1. Continue $f$ as an even function, so that $f'(0)=0$.
2. Continue $f$ as an odd function, so that $f(0)=0$.
3. Neither of the above: continue $f$ without assuming any symmetry.

Figure 4.6.1: A sketch of the possible ways to continue $f$ beyond its definition region for $0<x<L$. From left to right: as an even function, as an odd function, or assuming no symmetry at all.

Of course these all lead to different Fourier series, that represent the same function on $[0,L]$. The usefulness of even and odd Fourier series is related to the imposition of boundary conditions. A Fourier cosine series has $df/dx=0$ at $x=0$, and the Fourier sine series has $f(x=0)=0$. Let me check the first of these statements:


$$\frac{d}{dx}\left[\frac{a_0}{2}+\sum_{n=1}^{\infty}a_n\cos\frac{n\pi}{L}x\right]=-\frac{\pi}{L}\sum_{n=1}^{\infty}na_n\sin\frac{n\pi}{L}x=0\quad\text{at } x=0.$$

Figure 4.6.2: The function $y=1-x$.

As an example look at the function $f(x)=1-x$, $0\le x\le 1$, with an even continuation on the interval $[-1,1]$. We find
$$a_0=\frac{2}{1}\int_0^1(1-x)\,dx=1,$$
$$a_n=2\int_0^1(1-x)\cos n\pi x\,dx
=\left\{\frac{2}{n\pi}\sin n\pi x-\frac{2}{n^2\pi^2}\left[\cos n\pi x+n\pi x\sin n\pi x\right]\right\}\Big|_0^1
=\begin{cases}0 & \text{if } n \text{ even}\\[4pt] \dfrac{4}{n^2\pi^2} & \text{if } n \text{ odd}\end{cases}.$$
So, changing variables by defining $n=2m+1$ so that in a sum over all $m$, $n$ runs over all odd numbers,
$$f(x)=\frac{1}{2}+\frac{4}{\pi^2}\sum_{m=0}^{\infty}\frac{1}{(2m+1)^2}\cos(2m+1)\pi x.$$
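The cosine series just derived can be checked numerically; the sketch below (my addition, with an arbitrary truncation $M$) evaluates the partial sum at an interior point and at $x=0$, where the series should reproduce $1-x$.

```python
import numpy as np

def cosine_series(x, M=5000):
    # partial sum of 1/2 + (4/pi^2) * sum_m cos((2m+1) pi x)/(2m+1)^2
    m = np.arange(M + 1)
    return 0.5 + (4 / np.pi**2) * np.sum(np.cos((2 * m + 1) * np.pi * x) / (2 * m + 1) ** 2)

v_mid = cosine_series(0.3)   # should approach 1 - 0.3 = 0.7
v_zero = cosine_series(0.0)  # should approach f(0) = 1
```

The $1/(2m+1)^2$ decay of the coefficients makes this series converge uniformly, in contrast to the sine series of the square wave.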


4.7: Convergence of Fourier series

The final subject we shall consider is the convergence of Fourier series. I shall show two examples, closely linked, but with radically different behaviour.

Figure 4.7.1: The square and triangular waves on their fundamental domain.

1. A square wave, $f(x)=1$ for $-\pi<x<0$; $f(x)=-1$ for $0<x<\pi$.
2. A triangular wave, $g(x)=\pi/2+x$ for $-\pi<x<0$; $g(x)=\pi/2-x$ for $0<x<\pi$.

Note that $f$ is the derivative of $g$.

Figure 4.7.2: A three-dimensional representation of the Gibbs phenomenon for the square wave. The axis orthogonal to the paper labels the number of Fourier components.


It is not very hard to find the relevant Fourier series,
$$f(x)=-\frac{4}{\pi}\sum_{m=0}^{\infty}\frac{1}{2m+1}\sin(2m+1)x,$$
$$g(x)=\frac{4}{\pi}\sum_{m=0}^{\infty}\frac{1}{(2m+1)^2}\cos(2m+1)x.$$
Let us compare the partial sums, where we let the sum in the Fourier series run from $m=0$ to $m=M$ instead of $m=0,\ldots,\infty$. We note a marked difference between the two cases. The convergence of the Fourier series of $g$ is uneventful, and after a few steps it is hard to see a difference between the partial sums, as well as between the partial sums and $g$. For $f$, the square wave, we see a surprising result: even though the approximation gets better and better in the (flat) middle, there is a finite (and constant!) overshoot near the jump. The area of this overshoot becomes smaller and smaller as we increase $M$. This is called the Gibbs phenomenon (after its discoverer). It can be shown that for any function with a discontinuity such an effect is present, and that the size of the overshoot only depends on the size of the discontinuity! A final, slightly more interesting version of this picture is shown in Fig. 4.7.3.
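The Gibbs overshoot can be exhibited numerically; this sketch (my addition) evaluates partial sums of the square-wave series on $(-\pi,0)$, where $f=1$, and records the maximum. The overshoot stays near a fixed value ($\approx 1.18$, about 9% of the jump) no matter how many terms are kept.

```python
import numpy as np

def partial_sum(x, M):
    # partial sum of f(x) = -(4/pi) * sum_m sin((2m+1)x)/(2m+1)
    m = np.arange(M + 1)
    return -(4 / np.pi) * np.sum(np.sin(np.outer(x, 2 * m + 1)) / (2 * m + 1), axis=1)

x = np.linspace(-np.pi, 0, 20001)  # f = +1 on this interval
over_10 = partial_sum(x, 10).max()
over_100 = partial_sum(x, 100).max()
```

Increasing $M$ narrows the overshoot region (so its area shrinks) but does not reduce its height, which is the hallmark of the Gibbs phenomenon.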

Figure 4.7.3: A three-dimensional representation of the Gibbs phenomenon for the square wave. The axis orthogonal to the paper labels the number of Fourier components.

CHAPTER OVERVIEW
5: Separation of Variables on Rectangular Domains

In this section we shall investigate two-dimensional equations defined on rectangular domains. We shall either look at finite rectangles, when we have two space variables, or at semi-infinite rectangles when one of the variables is time. We shall study all three different types of partial differential equations: parabolic, hyperbolic and elliptic.

5.1: Cookbook
5.2: Parabolic Equation
5.3: Hyperbolic Equation
5.4: Laplace’s Equation
5.5: More Complex Initial/Boundary Conditions
5.6: Inhomogeneous Equations

Thumbnail: A visualization of a solution to the two-dimensional heat equation with temperature represented by the third dimension. Image used with permission (Public Domain; Oleg Alexandrov). The heat equation is a parabolic partial differential equation that describes the distribution of heat (or variation in temperature) in a given region over time.

5.1: Cookbook

Let me start with a recipe that describes the approach to separation of variables, as exemplified in the following sections, and in later chapters. Try to trace the steps for all the examples you encounter in this course.

1. Take care that the boundaries are naturally described in your variables (i.e., at the boundary one of the coordinates is constant)!
2. Write the unknown function as a product of functions in each variable.
3. Divide by the function, so as to have a ratio of functions in one variable equal to a ratio of functions in the other variable.
4. Since these two are equal they must both be equal to a constant.
5. Separate the boundary and initial conditions. Those that are zero can be re-expressed as conditions on one of the unknown functions.
6. Solve the equation for that function where most boundary information is known.
7. This usually determines a discrete set of separation parameters.
8. Solve the remaining equation for each parameter.
9. Use the superposition principle (true for homogeneous and linear equations) to add all these solutions, with an unknown constant multiplying each solution.
10. Determine the constants from the remaining boundary and initial conditions.

5.2: Parabolic Equation

Let us first study the heat equation in 1 space (and, of course, 1 time) dimension. This is the standard example of a parabolic equation,
$$\frac{\partial u}{\partial t}=k\frac{\partial^2 u}{\partial x^2},\qquad 0<x<L,\ t>0,$$
with boundary conditions
$$u(0,t)=0,\quad u(L,t)=0,\qquad t>0,$$
and initial condition
$$u(x,0)=x,\qquad 0<x<L.$$
We shall attack this problem by separation of variables, a technique always worth trying when attempting to solve a PDE,
$$u(x,t)=X(x)T(t).$$
This leads to the differential equation
$$X(x)T'(t)=kX''(x)T(t).\tag{5.2.1}$$
We find, by dividing both sides by $XT$, that
$$\frac{1}{k}\frac{T'(t)}{T(t)}=\frac{X''(x)}{X(x)}.$$
Thus the left-hand side, a function of $t$, equals a function of $x$ on the right-hand side. This is not possible unless both sides are independent of $x$ and $t$, i.e. constant. Let us call this constant $-\lambda$. We obtain two differential equations,
$$T'(t)=-\lambda kT(t),\qquad X''(x)=-\lambda X(x).$$

Exercise 5.2.1
What happens if $X(x)T(t)$ is zero at some point $(x=x_0,\ t=t_0)$?

Answer
Nothing. We can still perform the same trick.

Note
This is not as trivial as I suggest. We either have $X(x_0)=0$ or $T(t_0)=0$. Let me just consider the first case, and assume $T(t_0)\neq 0$. In that case we find (from (5.2.1)), substituting $t=t_0$, that $X''(x_0)=0$.

We now have to distinguish the three cases $\lambda>0$, $\lambda=0$, and $\lambda<0$.

$\lambda>0$
Write $\lambda=\alpha^2$, so that the equation for $X$ becomes
$$X''(x)=-\alpha^2X(x).$$
This has as solution
$$X(x)=A\cos\alpha x+B\sin\alpha x.$$
$X(0)=0$ gives $A\cdot 1+B\cdot 0=0$, or $A=0$. Using $X(L)=0$ we find that
$$B\sin\alpha L=0,$$
which has a nontrivial (i.e., one that is not zero) solution when $\alpha L=n\pi$, with $n$ a positive integer. This leads to
$$\lambda_n=\frac{n^2\pi^2}{L^2}.$$

$\lambda=0$
We find that $X=A+Bx$. The boundary conditions give $A=B=0$, so there is only the trivial (zero) solution.

$\lambda<0$
We write $\lambda=-\alpha^2$, so that the equation for $X$ becomes
$$X''(x)=\alpha^2X(x).$$
The solution is now in terms of exponential, or hyperbolic, functions,
$$X(x)=A\cosh\alpha x+B\sinh\alpha x.$$
The boundary condition at $x=0$ gives $A=0$, and the one at $x=L$ gives $B=0$. Again there is only a trivial solution.

We have thus only found a solution for a discrete set of “eigenvalues” $\lambda_n>0$. Solving the equation for $T$ we find an exponential solution, $T=\exp(-\lambda kt)$. Combining all this information together, we have
$$u_n(x,t)=\exp\left(-k\frac{n^2\pi^2}{L^2}t\right)\sin\left(\frac{n\pi}{L}x\right).$$
The equation we started from was linear and homogeneous, so we can superimpose the solutions for different values of $n$,
$$u(x,t)=\sum_{n=1}^{\infty}c_n\exp\left(-k\frac{n^2\pi^2}{L^2}t\right)\sin\left(\frac{n\pi}{L}x\right).$$
This is a Fourier sine series with time-dependent Fourier coefficients. The initial condition specifies the coefficients $c_n$, which are the Fourier coefficients at time $t=0$. Thus
$$c_n=\frac{2}{L}\int_0^L x\sin\frac{n\pi x}{L}\,dx=(-1)^{n+1}\frac{2L}{n\pi}.$$
The final solution to the PDE + BCs + IC is
$$u(x,t)=\sum_{n=1}^{\infty}(-1)^{n+1}\frac{2L}{n\pi}\exp\left(-k\frac{n^2\pi^2}{L^2}t\right)\sin\frac{n\pi}{L}x.$$

This solution is transient: if time goes to infinity, it goes to zero.
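The Fourier coefficient $c_n=(-1)^{n+1}\,2L/(n\pi)$ can be checked numerically; the sketch below (my addition, with an arbitrary rod length) compares the closed form against a direct numerical evaluation of the defining integral.

```python
import numpy as np

L = 3.0  # arbitrary rod length for this check
x = np.linspace(0, L, 100001)
dx = x[1] - x[0]

def trap(f):
    # trapezoidal rule on the uniform grid
    return (f[0] / 2 + f[1:-1].sum() + f[-1] / 2) * dx

# sine coefficients of the initial condition u(x, 0) = x, computed numerically
c_num = [(2 / L) * trap(x * np.sin(n * np.pi * x / L)) for n in range(1, 6)]
# closed form derived above: c_n = (-1)^{n+1} * 2L/(n pi)
c_exact = [(-1) ** (n + 1) * 2 * L / (n * np.pi) for n in range(1, 6)]
err = max(abs(a - b) for a, b in zip(c_num, c_exact))
```

Agreement to many digits confirms the integration by parts behind the closed form.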


5.3: Hyperbolic Equation

As an example of a hyperbolic equation we study the wave equation. One of the systems it can describe is a transmission line for high-frequency signals, 40 m long,
$$\frac{\partial^2 V}{\partial x^2}=\underbrace{LC}_{\text{imp}\times\text{capac}}\frac{\partial^2 V}{\partial t^2},$$
$$\frac{\partial V}{\partial x}(0,t)=\frac{\partial V}{\partial x}(40,t)=0,$$
$$V(x,0)=f(x),\qquad \frac{\partial V}{\partial t}(x,0)=0.$$
Separate variables,
$$V(x,t)=X(x)T(t).$$
We find
$$\frac{X''}{X}=LC\,\frac{T''}{T}=-\lambda,$$
which in turn shows that
$$X''=-\lambda X,\qquad T''=-\frac{\lambda}{LC}T.$$
We can also separate most of the initial and boundary conditions; we find
$$X'(0)=X'(40)=0,\qquad T'(0)=0.$$

Once again distinguish the three cases $\lambda>0$, $\lambda=0$, and $\lambda<0$:

$\lambda>0$
(almost identical to the previous problem) $\lambda_n=\alpha_n^2$, $\alpha_n=\frac{n\pi}{40}$, $X_n=\cos(\alpha_n x)$. We find that
$$T_n(t)=D_n\cos\left(\frac{n\pi t}{40\sqrt{LC}}\right)+E_n\sin\left(\frac{n\pi t}{40\sqrt{LC}}\right).$$
$T'(0)=0$ implies $E_n=0$, and taking both together we find (for $n\ge 1$)
$$V_n(x,t)=\cos\left(\frac{n\pi t}{40\sqrt{LC}}\right)\cos\left(\frac{n\pi x}{40}\right).$$

$\lambda=0$
$X(x)=A+Bx$. $B=0$ due to the boundary conditions. We find that $T(t)=Dt+E$, and $D$ is 0 due to the initial condition. We conclude that
$$V_0(x,t)=1.$$

$\lambda<0$
No solution.

Taking everything together we find that
$$V(x,t)=\frac{a_0}{2}+\sum_{n=1}^{\infty}a_n\cos\left(\frac{n\pi t}{40\sqrt{LC}}\right)\cos\left(\frac{n\pi x}{40}\right).$$

The one remaining initial condition gives
$$V(x,0)=f(x)=\frac{a_0}{2}+\sum_{n=1}^{\infty}a_n\cos\left(\frac{n\pi x}{40}\right).$$
Use the Fourier cosine series (even continuation of $f$) to find
$$a_0=\frac{1}{20}\int_0^{40}f(x)\,dx,\qquad a_n=\frac{1}{20}\int_0^{40}f(x)\cos\left(\frac{n\pi x}{40}\right)dx.$$
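The cosine-series step can be illustrated for a concrete initial profile; the choice $f(x)=\sin^2(\pi x/40)$ below is my own (it is not from the text, but it satisfies $f'(0)=f'(40)=0$, matching the Neumann boundary conditions). The sketch computes $a_0$ and $a_n$ numerically and reconstructs $V(x,0)$.

```python
import numpy as np

x = np.linspace(0, 40, 80001)
dx = x[1] - x[0]
f = np.sin(np.pi * x / 40) ** 2  # hypothetical initial profile, f'(0) = f'(40) = 0

def trap(g):
    # trapezoidal rule on the uniform grid
    return (g[0] / 2 + g[1:-1].sum() + g[-1] / 2) * dx

a0 = trap(f) / 20
a = [trap(f * np.cos(n * np.pi * x / 40)) / 20 for n in range(1, 40)]

# reconstruct V(x, 0) from the cosine series and compare with f
recon = a0 / 2 + sum(a[n - 1] * np.cos(n * np.pi * x / 40) for n in range(1, 40))
max_err = np.abs(recon - f).max()
```

For this particular $f$ the series is finite ($\sin^2$ is $\tfrac12-\tfrac12\cos(\pi x/20)$), so $a_0=1$, $a_2=-\tfrac12$, and all other coefficients vanish; the reconstruction is essentially exact.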


5.4: Laplace’s Equation

Figure 5.4.1: A conducting sheet insulated from above and below.

In a square, heat-conducting sheet, insulated from above and below,
$$\frac{1}{k}\frac{\partial u}{\partial t}=\frac{\partial^2 u}{\partial x^2}+\frac{\partial^2 u}{\partial y^2}.$$
If we are looking for a steady-state solution, i.e., we take $u(x,y,t)=u(x,y)$, the time derivative does not contribute, and we get Laplace’s equation
$$\frac{\partial^2 u}{\partial x^2}+\frac{\partial^2 u}{\partial y^2}=0,$$
an example of an elliptic equation. Let us once again look at a square plate of size $a\times b$, and impose the boundary conditions
$$u(x,0)=0,\quad u(a,y)=0,\quad u(x,b)=x,\quad u(0,y)=0.\tag{5.4.1}$$

(This choice is made so as to be able to evaluate Fourier series easily. It is not very realistic!) We once again separate variables,
$$u(x,y)=X(x)Y(y),$$
and define
$$\frac{X''}{X}=-\frac{Y''}{Y}=-\lambda,$$
or explicitly
$$X''=-\lambda X,\qquad Y''=\lambda Y,$$
with boundary conditions $X(0)=X(a)=0$, $Y(0)=0$. The third boundary condition remains to be implemented.

Once again distinguish three cases:

$\lambda>0$
$X_n(x)=\sin\alpha_n x$, $\alpha_n=\frac{n\pi}{a}$, $\lambda_n=\alpha_n^2$. We find
$$Y(y)=C_n\sinh\alpha_n y+D_n\cosh\alpha_n y.\tag{5.4.2}$$
Since $Y(0)=0$ we find $D_n=0$ ($\sinh(0)=0$, $\cosh(0)=1$).

$\lambda\le 0$
No solutions.

So we have
$$u(x,y)=\sum_{n=1}^{\infty}b_n\sin\alpha_n x\,\sinh\alpha_n y.$$

The one remaining boundary condition gives
$$u(x,b)=x=\sum_{n=1}^{\infty}b_n\sin\alpha_n x\,\sinh\alpha_n b.$$
This leads to the Fourier series of $x$,
$$b_n\sinh\alpha_n b=\frac{2}{a}\int_0^a x\sin\frac{n\pi x}{a}\,dx=\frac{2a}{n\pi}(-1)^{n+1}.\tag{5.4.3}$$
So, in short, we have
$$u(x,y)=\frac{2a}{\pi}\sum_{n=1}^{\infty}(-1)^{n+1}\frac{1}{n}\sin\frac{n\pi x}{a}\,\frac{\sinh\frac{n\pi y}{a}}{\sinh\frac{n\pi b}{a}}.$$

Exercise 5.4.1
The dependence on $x$ enters through a trigonometric function, and that on $y$ through a hyperbolic function. Yet the differential equation is symmetric under interchange of $x$ and $y$. What happens?

Answer
The symmetry is broken by the boundary conditions.

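The final series can be checked against the boundary conditions numerically. In this sketch (my addition, with arbitrary plate dimensions $a$, $b$) the hyperbolic-sine ratio is rewritten with exponentials so that it stays finite for large $n$ — evaluating $\sinh$ directly overflows.

```python
import numpy as np

a, b = 2.0, 1.5  # plate dimensions chosen for this check

def u(x, y, N=4000):
    n = np.arange(1, N + 1)
    # sinh(n pi y/a)/sinh(n pi b/a), rewritten to avoid overflow for large n
    ratio = (np.exp(n * np.pi * (y - b) / a)
             * (1 - np.exp(-2 * n * np.pi * y / a))
             / (1 - np.exp(-2 * n * np.pi * b / a)))
    return (2 * a / np.pi) * np.sum((-1.0) ** (n + 1) / n
                                    * np.sin(n * np.pi * x / a) * ratio)

top = u(a / 2, b)       # boundary condition u(x, b) = x gives 1.0 here
bottom = u(a / 2, 0.0)  # boundary condition u(x, 0) = 0
```

On $y=b$ the ratio is 1 and the series reduces to the Fourier sine series of $x$; on $y=0$ every term vanishes.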

5.5: More Complex Initial/Boundary Conditions

It is not always possible, when separating variables, to separate the initial or boundary conditions into conditions on one of the two functions. We can either map the problem onto simpler ones by using superposition of boundary conditions, a technique discussed below, or we can carry around additional integration constants.

Let me give an example of these procedures. Consider a vibrating string attached to two air bearings, gliding along rods 4 m apart. You are asked to find the displacement for all times, if the initial displacement, i.e. at $t=0\,\mathrm{s}$, is one meter and the initial velocity is $x/t_0\,\mathrm{m/s}$.

The differential equation and its boundary conditions are easily written down,
$$\frac{\partial^2 u}{\partial x^2}=\frac{1}{c^2}\frac{\partial^2 u}{\partial t^2},$$
$$\frac{\partial u}{\partial x}(0,t)=\frac{\partial u}{\partial x}(4,t)=0,\qquad t>0,$$
$$u(x,0)=1,\qquad \frac{\partial u}{\partial t}(x,0)=x/t_0.$$

Exercise 5.5.1
What happens if I add two solutions $v$ and $w$ of the differential equation that satisfy the same BCs as above but different ICs,
$$v(x,0)=0,\qquad \frac{\partial v}{\partial t}(x,0)=x/t_0,$$
$$w(x,0)=1,\qquad \frac{\partial w}{\partial t}(x,0)=0?$$

Answer
$u=v+w$, so we can add the two sets of initial conditions.

If we separate variables, $u(x,t)=X(x)T(t)$, we find that we obtain easy boundary conditions for $X(x)$,
$$X'(0)=X'(4)=0,$$
but we have no such luck for $T(t)$. As before we solve the eigenvalue equation for $X$, and find solutions for $\lambda_n=\frac{n^2\pi^2}{16}$, $n=0,1,2,\ldots$, and $X_n=\cos(\frac{n\pi}{4}x)$. Since we have no boundary conditions for $T(t)$, we have to take the full solution,
$$T_0(t)=A_0+B_0t,$$
$$T_n(t)=A_n\cos\frac{n\pi}{4}ct+B_n\sin\frac{n\pi}{4}ct,$$
and thus
$$u(x,t)=\frac{1}{2}(A_0+B_0t)+\sum_{n=1}^{\infty}\left(A_n\cos\frac{n\pi}{4}ct+B_n\sin\frac{n\pi}{4}ct\right)\cos\frac{n\pi}{4}x.$$
Now impose the initial conditions

$$u(x,0)=1=\frac{1}{2}A_0+\sum_{n=1}^{\infty}A_n\cos\frac{n\pi}{4}x,$$
which implies $A_0=2$, $A_n=0$ for $n>0$, and
$$\frac{\partial u}{\partial t}(x,0)=x/t_0=\frac{1}{2}B_0+\sum_{n=1}^{\infty}\frac{n\pi c}{4}B_n\cos\frac{n\pi}{4}x.$$
This is the Fourier cosine series of $x/t_0$ on $[0,4]$, which we have encountered before, and leads to the coefficients $B_0=4/t_0$ and $B_n=-\frac{64}{n^3\pi^3ct_0}$ if $n$ is odd, and zero otherwise. So finally
$$u(x,t)=1+\frac{2t}{t_0}-\frac{64}{\pi^3ct_0}\sum_{n\ \text{odd}}\frac{1}{n^3}\sin\frac{n\pi ct}{4}\cos\frac{n\pi x}{4}.$$
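The Fourier cosine expansion underlying the coefficients $B_n$ is $x = 2 - \frac{16}{\pi^2}\sum_{n\ \text{odd}}\cos(n\pi x/4)/n^2$ on $[0,4]$; this sketch (my addition) checks it numerically at two points.

```python
import numpy as np

def cosine_series_x(x, M=20001):
    # cosine series of x on [0, 4]: x = 2 - (16/pi^2) * sum_{n odd} cos(n pi x/4)/n^2
    n = np.arange(1, M, 2)  # odd n only
    return 2 - (16 / np.pi**2) * np.sum(np.cos(n * np.pi * x / 4) / n**2)

v1 = cosine_series_x(1.0)  # should approach 1
v3 = cosine_series_x(3.0)  # should approach 3
```

Dividing this expansion by $t_0$ and matching it term by term against $\frac{1}{2}B_0+\sum_n \frac{n\pi c}{4}B_n\cos\frac{n\pi}{4}x$ reproduces the coefficients quoted above.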


5.6: Inhomogeneous Equations

Consider a rod of length 2 m, laterally insulated (heat only flows inside the rod). Initially the temperature $u$ is
$$\frac{1}{k}\sin\left(\frac{\pi x}{2}\right)+500\ \mathrm{K}.$$
The left and right ends are both attached to a thermostat: the temperature at the left end is fixed at 500 K and at the right end at 100 K. There is also a heater attached to the rod that adds a constant heat of $\sin\left(\frac{\pi x}{2}\right)$ to the rod. The differential equation describing this is inhomogeneous,
$$\frac{\partial u}{\partial t}=k\frac{\partial^2 u}{\partial x^2}+\sin\left(\frac{\pi x}{2}\right),$$
$$u(0,t)=500,\qquad u(2,t)=100,$$
$$u(x,0)=\frac{1}{k}\sin\left(\frac{\pi x}{2}\right)+500.$$

Since the inhomogeneity is time-independent we write
$$u(x,t)=v(x,t)+h(x),$$
where $h$ will be determined so as to make $v$ satisfy a homogeneous equation. Substituting this form, we find
$$\frac{\partial v}{\partial t}=k\frac{\partial^2 v}{\partial x^2}+kh''+\sin\left(\frac{\pi x}{2}\right).$$
To make the equation for $v$ homogeneous we require
$$h''(x)=-\frac{1}{k}\sin\left(\frac{\pi x}{2}\right),$$
which has the solution
$$h(x)=C_1x+C_2+\frac{4}{k\pi^2}\sin\left(\frac{\pi x}{2}\right).$$
At the same time we let $h$ carry the boundary conditions, $h(0)=500$, $h(2)=100$, and thus
$$h(x)=-200x+500+\frac{4}{k\pi^2}\sin\left(\frac{\pi x}{2}\right).$$
The function $v$ satisfies
$$\frac{\partial v}{\partial t}=k\frac{\partial^2 v}{\partial x^2},$$
$$v(0,t)=v(2,t)=0,$$
$$v(x,0)=u(x,0)-h(x)=200x.$$

This is a problem of a type that we have seen before. By separation of variables we find
$$v(x,t)=\sum_{n=1}^{\infty}b_n\exp\left(-\frac{n^2\pi^2}{4}kt\right)\sin\frac{n\pi}{2}x.$$
The initial condition gives
$$\sum_{n=1}^{\infty}b_n\sin\frac{n\pi}{2}x=200x,$$
from which we find
$$b_n=(-1)^{n+1}\frac{800}{n\pi}.$$

And thus
$$u(x,t)=-200x+500+\frac{4}{\pi^2k}\sin\left(\frac{\pi x}{2}\right)+\frac{800}{\pi}\sum_{n=1}^{\infty}\frac{(-1)^{n+1}}{n}\sin\left(\frac{n\pi x}{2}\right)e^{-k(n\pi/2)^2t}.\tag{5.6.1}$$
Note: as $t\to\infty$, $u(x,t)\to-200x+500+\frac{4}{\pi^2k}\sin\left(\frac{\pi x}{2}\right)=h(x)$. As can be seen in Fig. 5.6.1 this approach is quite rapid – we have chosen $k=1/500$ in that figure, and summed over the first 60 solutions.

Figure 5.6.1: Time dependence of the solution to the inhomogeneous Equation 5.6.1.
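The steady-state profile $h(x)$ can be verified directly; this sketch (my addition, using the figure's value $k=1/500$) checks the boundary values and, by finite differences, that $kh''+\sin(\pi x/2)=0$ at an interior point, i.e. that $h$ absorbs the inhomogeneity.

```python
import numpy as np

k = 1 / 500  # the value used in the figure

def h(x):
    # steady-state profile derived above
    return -200 * x + 500 + 4 / (k * np.pi**2) * np.sin(np.pi * x / 2)

h0, h2 = h(0.0), h(2.0)  # boundary values, should be 500 and 100

# residual of k h'' + sin(pi x/2) at an interior point, by central differences
x0, d = 0.7, 1e-4
h_dd = (h(x0 + d) - 2 * h(x0) + h(x0 - d)) / d**2
residual = k * h_dd + np.sin(np.pi * x0 / 2)
```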


CHAPTER OVERVIEW
6: D’Alembert’s Solution to the Wave Equation

It is usually not useful to study the general solution of a partial differential equation. Like any such sweeping statement, this one needs to be qualified, since there are some exceptions. One of these is the one-dimensional wave equation, which has a general solution due to the French mathematician d’Alembert.

6.1: Background to D’Alembert’s Solution
6.2: New Variables
6.3: Examples

6.1: Background to D’Alembert’s Solution

I have argued before that it is usually not useful to study the general solution of a partial differential equation. Like any such sweeping statement, this one needs to be qualified, since there are some exceptions. One of these is the one-dimensional wave equation
$$\frac{\partial^2}{\partial x^2}u(x,t)-\frac{1}{c^2}\frac{\partial^2}{\partial t^2}u(x,t)=0,$$
which has a general solution, due to the French mathematician d’Alembert.

The reason for this solution becomes obvious when we consider the physics of the problem: the wave equation describes waves that propagate with the speed $c$ (the speed of sound, or light, or whatever). Thus any perturbation to the one-dimensional medium will propagate either rightwards or leftwards with such a speed. This means that we would expect the solutions to propagate along the characteristics $x\pm ct=\text{constant}$, as seen in Figure 6.1.1.

Figure 6.1.1: The change of variables from $x$ and $t$ to $w=x+ct$ and $z=x-ct$.


6.2: New Variables

In order to understand the solution in all mathematical details we make a change of variables
$$w=x+ct,\qquad z=x-ct.$$
We write $u(x,t)=\bar u(w,z)$. We find
$$\frac{\partial u}{\partial t}=\frac{\partial \bar u}{\partial w}\frac{\partial w}{\partial t}+\frac{\partial \bar u}{\partial z}\frac{\partial z}{\partial t}=c\left(\frac{\partial \bar u}{\partial w}-\frac{\partial \bar u}{\partial z}\right),$$
$$\frac{\partial^2 u}{\partial t^2}=c^2\left(\frac{\partial^2 \bar u}{\partial w^2}-2\frac{\partial^2 \bar u}{\partial w\,\partial z}+\frac{\partial^2 \bar u}{\partial z^2}\right),$$
$$\frac{\partial u}{\partial x}=\frac{\partial \bar u}{\partial w}\frac{\partial w}{\partial x}+\frac{\partial \bar u}{\partial z}\frac{\partial z}{\partial x}=\frac{\partial \bar u}{\partial w}+\frac{\partial \bar u}{\partial z},$$
$$\frac{\partial^2 u}{\partial x^2}=\frac{\partial^2 \bar u}{\partial w^2}+2\frac{\partial^2 \bar u}{\partial w\,\partial z}+\frac{\partial^2 \bar u}{\partial z^2}.\tag{6.2.1}$$

We thus conclude that
$$\frac{\partial^2}{\partial x^2}u(x,t)-\frac{1}{c^2}\frac{\partial^2}{\partial t^2}u(x,t)=4\frac{\partial^2\bar u}{\partial w\,\partial z}=0.$$
An equation of the type $\frac{\partial^2\bar u}{\partial w\,\partial z}=0$ can easily be solved by subsequent integration with respect to $z$ and $w$. First solve for the $z$ dependence,
$$\frac{\partial\bar u}{\partial w}=\Phi(w),$$
where $\Phi$ is any function of $w$ only. Now solve this equation for the $w$ dependence,
$$\bar u(w,z)=\int\Phi(w)\,dw=F(w)+G(z),$$
in other words
$$u(x,t)=F(x+ct)+G(x-ct),\tag{6.2.2}$$
with $F$ and $G$ arbitrary functions.

6.2.1: Infinite String

Equation (6.2.2) is quite useful in practical applications. Let us first look at how to use this when we have an infinite system (no limits on $x$). Assume that we are treating a problem with initial conditions
$$u(x,0)=f(x),\qquad \frac{\partial u}{\partial t}(x,0)=g(x).$$
Let me assume $f(\pm\infty)=0$ (we don’t have to, but this removes some arbitrary constants that don’t play a rôle in $u$). We find for $F$ and $G$
$$F(x)+G(x)=f(x),$$
$$c\left(F'(x)-G'(x)\right)=g(x).\tag{6.2.3}$$
The last equation can be massaged a bit to give
$$F(x)-G(x)=\underbrace{\frac{1}{c}\int_0^x g(y)\,dy}_{=\Gamma(x)}+C.$$
Note that $\Gamma$ is the integral over $g$, so $\Gamma$ will always be a continuous function, even if $g$ is not! And in the end we have
$$F(x)=\frac{1}{2}\left[f(x)+\Gamma(x)+C\right],$$
$$G(x)=\frac{1}{2}\left[f(x)-\Gamma(x)-C\right].\tag{6.2.4}$$
Suppose we choose (for simplicity we take $c=1\,\mathrm{m/s}$)
$$f(x)=\begin{cases}x+1 & \text{if }-1<x<0\\ 1-x & \text{if } 0<x<1\\ 0 & \text{elsewhere}\end{cases}$$
and $g(x)=0$. The solution is then simply given by
$$u(x,t)=\frac{1}{2}\left[f(x+t)+f(x-t)\right].\tag{6.2.5}$$

This can easily be solved graphically, as shown in Figure 6.2.1.

Figure 6.2.1: The graphical form of (6.2.5), for (from left to right) $t=0\,\mathrm{s}$, $t=0.5\,\mathrm{s}$ and $t=1\,\mathrm{s}$. The dashed lines are $\frac{1}{2}f(x+t)$ (leftward-moving wave) and $\frac{1}{2}f(x-t)$ (rightward-moving wave). The solid line is the sum of these two, and thus the solution $u$.
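The graphical construction can equally well be done numerically; this sketch (my addition) implements (6.2.5) for the triangular profile above and evaluates it at a few sample points.

```python
import numpy as np

def f(x):
    # the triangular initial profile from the example above
    x = np.asarray(x, dtype=float)
    return np.where((x > -1) & (x < 0), x + 1,
                    np.where((x >= 0) & (x < 1), 1 - x, 0.0))

def u(x, t):
    # d'Alembert solution (6.2.5) with g = 0 and c = 1 m/s
    return 0.5 * (f(x + t) + f(x - t))

u_start = float(u(0.0, 0.0))  # = f(0) = 1
u_later = float(u(0.0, 1.0))  # = (f(1) + f(-1))/2 = 0
u_half = float(u(0.5, 0.5))   # = (f(1) + f(0))/2 = 0.5
```

At $t=1$ the two half-height copies of the initial bump have separated completely, so the centre of the string is back at zero.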

6.2.2: Finite String

The case of a finite string is more complex. There we encounter the problem that even though $f$ and $g$ are only known for $0<x<a$, $x\pm ct$ can take any value from $-\infty$ to $\infty$. So we have to figure out a way to continue the function beyond the length of the string. The way to do that depends on the kind of boundary conditions: here we shall only consider a string fixed at its ends,
$$u(0,t)=u(a,t)=0,\tag{6.2.6}$$
$$u(x,0)=f(x),\qquad \frac{\partial u}{\partial t}(x,0)=g(x).\tag{6.2.7}$$
Initially we can follow the approach for the infinite string as sketched above, and we find that
$$F(x)=\frac{1}{2}\left[f(x)+\Gamma(x)+C\right],$$
$$G(x)=\frac{1}{2}\left[f(x)-\Gamma(x)-C\right].\tag{6.2.8}$$
Look at the boundary condition $u(0,t)=0$. It shows that
$$\frac{1}{2}\left[f(ct)+f(-ct)\right]+\frac{1}{2}\left[\Gamma(ct)-\Gamma(-ct)\right]=0.$$
Now remember that $f$ and $\Gamma$ are completely arbitrary functions – we can pick any form for the initial conditions we want. Thus the relation found above can only hold when both terms are zero,
$$f(x)=-f(-x),\qquad \Gamma(x)=\Gamma(-x).\tag{6.2.9}$$
Now apply the other boundary condition, and find
$$f(a+x)=-f(a-x),\qquad \Gamma(a+x)=\Gamma(a-x).\tag{6.2.10}$$

Figure 6.2.2: A schematic representation of the reflection conditions in Equations 6.2.9 and 6.2.10. The dashed line represents $f$ and the dotted line $\Gamma$.

The reflection conditions for $f$ and $\Gamma$ are similar to those for sines and cosines, and as we can see from Figure 6.2.2 both $f$ and $\Gamma$ have period $2a$.


6.3: Examples

Now let me look at two examples.

Example 6.3.1
Find graphically a solution to
$$\frac{\partial^2 u}{\partial t^2}=\frac{\partial^2 u}{\partial x^2}\qquad (c=1\,\mathrm{m/s}),$$
$$u(x,0)=\begin{cases}2x & \text{if } 0\le x\le 2\\ 24/5-2x/5 & \text{if } 2\le x\le 12\end{cases},$$
$$\frac{\partial u}{\partial t}(x,0)=0,$$
$$u(0,t)=u(12,t)=0.$$

Solution
We need to continue $f$ as an odd function, and we can take $\Gamma=0$. We then have to add the left-moving wave $\frac{1}{2}f(x+t)$ and the right-moving wave $\frac{1}{2}f(x-t)$, as in the graphical construction of the previous section.

Example 6.3.2
Find graphically a solution to
$$\frac{\partial^2 u}{\partial t^2}=\frac{\partial^2 u}{\partial x^2}\qquad (c=1\,\mathrm{m/s}),$$
$$u(x,0)=0,$$
$$\frac{\partial u}{\partial t}(x,0)=\begin{cases}1 & \text{if } 4\le x\le 6\\ 0 & \text{elsewhere}\end{cases},$$
$$u(0,t)=u(12,t)=0.$$

Solution
In this case $f=0$. We find
$$\Gamma(x)=\int_0^x g(x')\,dx'=\begin{cases}0 & \text{if } 0<x<4\\ x-4 & \text{if } 4<x<6\\ 2 & \text{if } 6<x<12\end{cases}.$$
This needs to be continued as an even function.
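The piecewise form of $\Gamma$ in Example 6.3.2 can be confirmed by numerical integration of $g$; the sketch below (my addition) checks the three branches.

```python
import numpy as np

def g(x):
    # the initial velocity of Example 6.3.2
    return np.where((x >= 4) & (x <= 6), 1.0, 0.0)

def Gamma(x, N=100001):
    # Gamma(x) = (1/c) * integral of g from 0 to x, with c = 1 m/s
    s = np.linspace(0.0, x, N)
    v = g(s)
    ds = s[1] - s[0]
    return (v[0] / 2 + v[1:-1].sum() + v[-1] / 2) * ds

g3 = Gamma(3.0)    # in the first branch: 0
g5 = Gamma(5.0)    # in the second branch: 5 - 4 = 1
g10 = Gamma(10.0)  # in the third branch: 2
```

Note that $\Gamma$ is continuous even though $g$ jumps at $x=4$ and $x=6$, as remarked in the previous section.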


CHAPTER OVERVIEW
7: Polar and Spherical Coordinate Systems

7.1: Polar Coordinates
7.2: Spherical Coordinates

7.1: Polar Coordinates

Polar coordinates in two dimensions are defined by
$$x=\rho\cos\phi,\qquad y=\rho\sin\phi,$$
$$\rho=\sqrt{x^2+y^2},\qquad \phi=\arctan(y/x),$$
as indicated schematically in Fig. 7.1.1.

Figure 7.1.1: Polar coordinates.

Using the chain rule we find
$$\frac{\partial}{\partial x}=\frac{\partial\rho}{\partial x}\frac{\partial}{\partial\rho}+\frac{\partial\phi}{\partial x}\frac{\partial}{\partial\phi}
=\frac{x}{\rho}\frac{\partial}{\partial\rho}-\frac{y}{\rho^2}\frac{\partial}{\partial\phi}
=\cos\phi\,\frac{\partial}{\partial\rho}-\frac{\sin\phi}{\rho}\frac{\partial}{\partial\phi},$$
$$\frac{\partial}{\partial y}=\frac{\partial\rho}{\partial y}\frac{\partial}{\partial\rho}+\frac{\partial\phi}{\partial y}\frac{\partial}{\partial\phi}
=\frac{y}{\rho}\frac{\partial}{\partial\rho}+\frac{x}{\rho^2}\frac{\partial}{\partial\phi}
=\sin\phi\,\frac{\partial}{\partial\rho}+\frac{\cos\phi}{\rho}\frac{\partial}{\partial\phi}.$$

We can write
$$\nabla = \hat e_\rho\,\frac{\partial}{\partial \rho} + \hat e_\phi\,\frac{1}{\rho}\frac{\partial}{\partial \phi},$$
where the unit vectors
$$\hat e_\rho = (\cos\phi, \sin\phi), \qquad \hat e_\phi = (-\sin\phi, \cos\phi)$$
are an orthonormal set. We say that polar coordinates are orthogonal.

We can now use this to evaluate ∇²,
$$\nabla^2 = \left(\cos\phi\,\frac{\partial}{\partial \rho} - \frac{\sin\phi}{\rho}\frac{\partial}{\partial \phi}\right)^2 + \left(\sin\phi\,\frac{\partial}{\partial \rho} + \frac{\cos\phi}{\rho}\frac{\partial}{\partial \phi}\right)^2.$$
Expanding the squares, the mixed \(\sin\phi\cos\phi\) cross terms cancel between the two contributions, while the \(\cos^2\phi\) and \(\sin^2\phi\) terms combine to 1, and we are left with
$$\nabla^2 = \frac{\partial^2}{\partial \rho^2} + \frac{1}{\rho}\frac{\partial}{\partial \rho} + \frac{1}{\rho^2}\frac{\partial^2}{\partial \phi^2} = \frac{1}{\rho}\frac{\partial}{\partial \rho}\left(\rho\,\frac{\partial}{\partial \rho}\right) + \frac{1}{\rho^2}\frac{\partial^2}{\partial \phi^2}.$$

A final useful relation is the integration over these coordinates.

Figure 7.1.2 : Integration in polar coordinates

As indicated schematically in Fig. 7.1.2, the surface element related to a change ρ → ρ + δρ, ϕ → ϕ + δϕ is ρδρδϕ. This leads us to the conclusion that an integral over x, y can be rewritten as
$$\int_V f(x,y)\,dx\,dy = \int_V f(\rho\cos\phi, \rho\sin\phi)\,\rho\,d\rho\,d\phi.$$
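As a quick numerical sanity check of this change of variables (my sketch, not part of the text; the helper polar_integral is an invented name), one can evaluate two known integrals over the unit disk with a midpoint rule in (ρ, ϕ), remembering the Jacobian factor ρ: the area should come out as π, and ∫∫x² dA as π/4.

```python
import math

def polar_integral(g, n=400):
    """Midpoint rule for the integral of g(rho, phi) * rho drho dphi over the unit disk."""
    dr, dp = 1.0 / n, 2 * math.pi / n
    total = 0.0
    for i in range(n):
        rho = (i + 0.5) * dr
        for j in range(n):
            phi = (j + 0.5) * dp
            total += g(rho, phi) * rho * dr * dp
    return total

area = polar_integral(lambda rho, phi: 1.0)                                   # pi
second_moment = polar_integral(lambda rho, phi: (rho * math.cos(phi)) ** 2)   # pi/4
```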

7.2: Spherical Coordinates

Spherical coordinates are defined from Cartesian coordinates as
$$r = \sqrt{x^2 + y^2 + z^2}, \qquad \phi = \arctan(y/x), \qquad \theta = \arctan\!\left(\frac{\sqrt{x^2+y^2}}{z}\right),$$
or alternatively
$$x = r\cos\phi\sin\theta, \qquad y = r\sin\phi\sin\theta, \qquad z = r\cos\theta,$$
as indicated schematically in Fig. 7.2.1.

Figure 7.2.1 : Spherical coordinates

Using the chain rule we find
$$\frac{\partial}{\partial x} = \frac{\partial r}{\partial x}\frac{\partial}{\partial r} + \frac{\partial \theta}{\partial x}\frac{\partial}{\partial \theta} + \frac{\partial \phi}{\partial x}\frac{\partial}{\partial \phi} = \frac{x}{r}\frac{\partial}{\partial r} + \frac{xz}{r^2\sqrt{x^2+y^2}}\frac{\partial}{\partial \theta} - \frac{y}{x^2+y^2}\frac{\partial}{\partial \phi} = \sin\theta\cos\phi\,\frac{\partial}{\partial r} + \frac{\cos\phi\cos\theta}{r}\frac{\partial}{\partial \theta} - \frac{\sin\phi}{r\sin\theta}\frac{\partial}{\partial \phi},$$
$$\frac{\partial}{\partial y} = \frac{y}{r}\frac{\partial}{\partial r} + \frac{yz}{r^2\sqrt{x^2+y^2}}\frac{\partial}{\partial \theta} + \frac{x}{x^2+y^2}\frac{\partial}{\partial \phi} = \sin\theta\sin\phi\,\frac{\partial}{\partial r} + \frac{\sin\phi\cos\theta}{r}\frac{\partial}{\partial \theta} + \frac{\cos\phi}{r\sin\theta}\frac{\partial}{\partial \phi},$$
$$\frac{\partial}{\partial z} = \frac{z}{r}\frac{\partial}{\partial r} - \frac{\sqrt{x^2+y^2}}{r^2}\frac{\partial}{\partial \theta} = \cos\theta\,\frac{\partial}{\partial r} - \frac{\sin\theta}{r}\frac{\partial}{\partial \theta}.$$
Once again we can write ∇ in terms of these coordinates.


$$\nabla = \hat e_r\,\frac{\partial}{\partial r} + \hat e_\phi\,\frac{1}{r\sin\theta}\frac{\partial}{\partial \phi} + \hat e_\theta\,\frac{1}{r}\frac{\partial}{\partial \theta},$$
where the unit vectors
$$\hat e_r = (\sin\theta\cos\phi, \sin\theta\sin\phi, \cos\theta), \qquad \hat e_\phi = (-\sin\phi, \cos\phi, 0), \qquad \hat e_\theta = (\cos\phi\cos\theta, \sin\phi\cos\theta, -\sin\theta)$$
are an orthonormal set. We say that spherical coordinates are orthogonal. We can use this to evaluate Δ = ∇²,
$$\Delta = \frac{1}{r^2}\frac{\partial}{\partial r}\left(r^2\frac{\partial}{\partial r}\right) + \frac{1}{r^2\sin\theta}\frac{\partial}{\partial \theta}\left(\sin\theta\,\frac{\partial}{\partial \theta}\right) + \frac{1}{r^2\sin^2\theta}\frac{\partial^2}{\partial \phi^2}.$$

Finally, for integration over these variables we need to know the volume of the small cuboid contained between r and r + δr, θ and θ + δθ, and ϕ and ϕ + δϕ.

Figure 7.2.2 : Integration in spherical coordinates

The lengths of the sides due to each of these changes are δr, r δθ and r sin θ δϕ, respectively (their product, r² sin θ, is the Jacobian of the transformation from Cartesian to spherical coordinates). We thus conclude that
$$\int_V f(x,y,z)\,dx\,dy\,dz = \int_V f(r,\theta,\phi)\,r^2\sin\theta\,dr\,d\theta\,d\phi.$$
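The same kind of check works here (again my sketch, not the text's): integrating 1 over a ball of radius R with the element r² sin θ dr dθ dϕ should reproduce the volume 4πR³/3. Since the integrand does not depend on ϕ, the ϕ integral just contributes a factor 2π.

```python
import math

def ball_volume(R=1.0, n=400):
    """Midpoint rule for the volume integral of r^2 sin(theta) over a ball of radius R."""
    dr, dt = R / n, math.pi / n
    total = 0.0
    for i in range(n):
        r = (i + 0.5) * dr
        for j in range(n):
            theta = (j + 0.5) * dt
            total += r * r * math.sin(theta) * dr * dt
    return 2 * math.pi * total   # the phi integral of a phi-independent integrand is 2*pi

volume = ball_volume()           # should be close to 4*pi/3
```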


CHAPTER OVERVIEW

8: Separation of Variables in Polar Coordinates
8.1: Example
8.2: Three cases for λ
8.3: Putting it all together

8.1: Example

Consider a circular plate of radius c m, insulated from above and below. The temperature on the circumference is 100 °C on half the circle, and 0 °C on the other half.

Figure 8.1.1 : The boundary conditions for the temperature on a circular plate.

The differential equation to solve is
$$\frac{\partial^2 u}{\partial \rho^2} + \frac{1}{\rho}\frac{\partial u}{\partial \rho} + \frac{1}{\rho^2}\frac{\partial^2 u}{\partial \phi^2} = 0,$$
with boundary conditions
$$u(c,\phi) = \begin{cases} 100 & \text{if } 0 < \phi < \pi \\ 0 & \text{if } \pi < \phi < 2\pi \end{cases}.$$

8.1.1: Periodic BC

There is no real boundary in the ϕ direction, but we introduce one, since we choose to let ϕ run from 0 to 2π only. So what kind of boundary conditions do we apply? We would like to see "seamless behaviour", which specifies the periodicity of the solution in ϕ,
$$u(\rho, \phi + 2\pi) = u(\rho, \phi), \qquad \frac{\partial u}{\partial \phi}(\rho, \phi + 2\pi) = \frac{\partial u}{\partial \phi}(\rho, \phi).$$
If we choose to put the seam at ϕ = 0 we have the periodic boundary conditions
$$u(\rho, 2\pi) = u(\rho, 0), \qquad \frac{\partial u}{\partial \phi}(\rho, 2\pi) = \frac{\partial u}{\partial \phi}(\rho, 0).$$
We separate variables, and take, as usual,
$$u(\rho, \phi) = R(\rho)\Phi(\phi).$$
This gives the usual differential equations
$$\Phi'' - \lambda\Phi = 0, \qquad \rho^2 R'' + \rho R' + \lambda R = 0.$$
Our periodic boundary conditions give a condition on Φ,
$$\Phi(0) = \Phi(2\pi), \qquad \Phi'(0) = \Phi'(2\pi). \tag{8.1.1}$$
The other boundary condition involves both R and Φ.


8.2: Three cases for λ

As usual we consider the cases λ > 0, λ < 0 and λ = 0 separately. Consider the Φ equation first, since this has the most restrictive explicit boundary conditions (8.1.1).

λ = a² > 0: The solution (hyperbolic sines and cosines) cannot satisfy the boundary conditions.

λ = −a² < 0: The solutions cos aϕ and sin aϕ are periodic with period 2π only if a = n, a positive integer, so λ = −n².

λ = 0: The solution Φ = A + Bϕ is periodic only if B = 0, so the constant solution survives.

Now let me look at the solution of the R equation for each of the two cases (they can be treated as one),
$$\rho^2 R''(\rho) + \rho R'(\rho) - n^2 R(\rho) = 0.$$
Let us attempt a power-series solution (this method will be discussed in great detail in a future lecture)
$$R(\rho) = \rho^\alpha.$$
We find the equation
$$\rho^\alpha\left[\alpha(\alpha - 1) + \alpha - n^2\right] = \rho^\alpha\left[\alpha^2 - n^2\right] = 0.$$
If n ≠ 0 we thus have two independent solutions (as should be)
$$R_n(\rho) = C\rho^{-n} + D\rho^n.$$
The term with the negative power of ρ diverges as ρ goes to zero. This is not acceptable for a physical quantity (like the temperature). We keep the regular solution,
$$R_n(\rho) = \rho^n.$$
For n = 0 we find only one solution, but it is not very hard to show (e.g., by substitution) that the general solution is
$$R_0(\rho) = C_0 + D_0\ln(\rho).$$
We reject the logarithm since it diverges at ρ = 0.


8.3: Putting it all together

In summary, we have
$$u(\rho, \phi) = \frac{A_0}{2} + \sum_{n=1}^{\infty} \rho^n\left(A_n\cos n\phi + B_n\sin n\phi\right).$$
The one remaining boundary condition can now be used to determine the coefficients A_n and B_n,
$$u(c, \phi) = \frac{A_0}{2} + \sum_{n=1}^{\infty} c^n\left(A_n\cos n\phi + B_n\sin n\phi\right) = \begin{cases} 100 & \text{if } 0 < \phi < \pi \\ 0 & \text{if } \pi < \phi < 2\pi \end{cases}.$$

We find
$$A_0 = \frac{1}{\pi}\int_0^\pi 100\,d\phi = 100,$$
$$c^n A_n = \frac{1}{\pi}\int_0^\pi 100\cos n\phi\,d\phi = \frac{100}{n\pi}\left.\sin(n\phi)\right|_0^\pi = 0,$$
$$c^n B_n = \frac{1}{\pi}\int_0^\pi 100\sin n\phi\,d\phi = -\frac{100}{n\pi}\left.\cos(n\phi)\right|_0^\pi = \begin{cases} 200/(n\pi) & \text{if } n \text{ is odd} \\ 0 & \text{if } n \text{ is even} \end{cases}.$$

In summary,
$$u(\rho, \phi) = 50 + \frac{200}{\pi}\sum_{n\ \mathrm{odd}} \frac{1}{n}\left(\frac{\rho}{c}\right)^n \sin n\phi. \tag{8.3.1}$$
We clearly see the dependence of u on the pure number ρ/c, rather than ρ. A three-dimensional plot of the temperature is given in Fig. 8.3.1.

Figure 8.3.1 : The temperature (8.3.1)
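The series (8.3.1) is easy to evaluate numerically. The sketch below is mine, not the text's; it sums the odd-n terms and, as an independent check, compares with the closed form Σ_{n odd} qⁿ sin(nϕ)/n = ½ arctan(2q sin ϕ/(1 − q²)), a standard identity for this Fourier series, valid for q = ρ/c < 1.

```python
import math

def u_series(rho, phi, c=1.0, n_max=999):
    """Partial sum of the temperature series (8.3.1)."""
    q = rho / c
    s = sum(q ** n * math.sin(n * phi) / n for n in range(1, n_max + 1, 2))
    return 50 + 200 / math.pi * s

def u_closed(rho, phi, c=1.0):
    """Closed form of the same sum (valid for rho < c)."""
    q = rho / c
    return 50 + 100 / math.pi * math.atan(2 * q * math.sin(phi) / (1 - q * q))
```

At the centre every term vanishes, so u(0, ϕ) = 50, the average of the two rim temperatures.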


CHAPTER OVERVIEW

9: Series Solutions of ODEs (Frobenius’ Method)
9.1: Frobenius’ Method
9.2: Singular Points
9.3: Special Cases

9.1: Frobenius’ Method

Let us look at a very simple (ordinary) differential equation,
$$y''(t) = t\,y(t),$$
with initial conditions y(0) = a, y'(0) = b. Let us assume that there is a solution that is analytical near t = 0. This means that near t = 0 the function has a Taylor series
$$y(t) = c_0 + c_1 t + \ldots = \sum_{k=0}^{\infty} c_k t^k.$$
(We shall ignore questions of convergence.) Let us proceed:
$$y'(t) = c_1 + 2c_2 t + \ldots = \sum_{k=1}^{\infty} k c_k t^{k-1},$$
$$y''(t) = 2c_2 + 3\cdot 2\,c_3 t + \ldots = \sum_{k=2}^{\infty} k(k-1) c_k t^{k-2},$$
$$t\,y(t) = c_0 t + c_1 t^2 + \ldots = \sum_{k=0}^{\infty} c_k t^{k+1}. \tag{9.1.1}$$
Combining this together we have
$$y'' - ty = \left[2c_2 + 3\cdot 2\,c_3 t + \ldots\right] - \left[c_0 t + c_1 t^2 + \ldots\right] = 2c_2 + (3\cdot 2\,c_3 - c_0)t + \ldots = 2c_2 + \sum_{k=3}^{\infty}\left\{k(k-1)c_k - c_{k-3}\right\} t^{k-2}. \tag{9.1.2}$$
Here we have collected terms of equal power of t. The reason is simple. We are requiring a power series to equal 0. The only way that can work is if each power of t in the power series has zero coefficient. (Compare a finite polynomial....) We thus find
$$c_2 = 0, \qquad k(k-1)c_k = c_{k-3}.$$
The last relation is called a recurrence or recursion relation, which we can use to bootstrap from given values, in this case c_0 and c_1. Once we know these two numbers, we can determine c_3, c_4 and c_5:
$$c_3 = \frac{1}{6} c_0, \qquad c_4 = \frac{1}{12} c_1, \qquad c_5 = \frac{1}{20} c_2 = 0.$$
These in turn can be used to determine c_6, c_7, c_8, etc.

It is not too hard to find an explicit expression for the c’s:
$$c_{3m} = \frac{3m-2}{(3m)(3m-1)(3m-2)}\,c_{3(m-1)} = \frac{3m-2}{(3m)(3m-1)(3m-2)}\,\frac{3m-5}{(3m-3)(3m-4)(3m-5)}\,c_{3(m-2)} = \frac{(3m-2)(3m-5)\cdots 1}{(3m)!}\,c_0,$$
$$c_{3m+1} = \frac{3m-1}{(3m+1)(3m)(3m-1)}\,c_{3(m-1)+1} = \frac{3m-1}{(3m+1)(3m)(3m-1)}\,\frac{3m-4}{(3m-2)(3m-3)(3m-4)}\,c_{3(m-2)+1} = \frac{(3m-1)(3m-4)\cdots 2}{(3m+1)!}\,c_1,$$
$$c_{3m+2} = 0. \tag{9.1.3}$$
The general solution is thus
$$y(t) = a\left[1 + \sum_{m=1}^{\infty} \frac{(3m-2)(3m-5)\cdots 1}{(3m)!}\,t^{3m}\right] + b\left[t + \sum_{m=1}^{\infty} \frac{(3m-1)(3m-4)\cdots 2}{(3m+1)!}\,t^{3m+1}\right].$$
The technique sketched here can be proven to work for any differential equation
$$y''(t) + p(t)y'(t) + q(t)y(t) = f(t)$$
provided that p(t), q(t) and f(t) are analytic at t = 0. Thus if p, q and f have a power series expansion, so has y.
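The recurrence relation k(k−1)c_k = c_{k−3} is easy to implement. The sketch below is my code, not part of the text: it sums the truncated Taylor series and cross-checks it against a direct Runge-Kutta integration of y'' = t y with the same initial conditions.

```python
def series_solution(t, a=1.0, b=0.0, n_terms=60):
    """Sum the truncated Taylor series of y'' = t*y built from k(k-1)*c_k = c_{k-3}."""
    c = [a, b, 0.0] + [0.0] * (n_terms - 3)
    for k in range(3, n_terms):
        c[k] = c[k - 3] / (k * (k - 1))
    return sum(ck * t ** k for k, ck in enumerate(c))

def rk4_solution(t_end, a=1.0, b=0.0, steps=2000):
    """Integrate the equivalent system y' = v, v' = t*y with classical RK4."""
    def deriv(t, y, v):
        return v, t * y
    h = t_end / steps
    t, y, v = 0.0, a, b
    for _ in range(steps):
        k1y, k1v = deriv(t, y, v)
        k2y, k2v = deriv(t + h / 2, y + h / 2 * k1y, v + h / 2 * k1v)
        k3y, k3v = deriv(t + h / 2, y + h / 2 * k2y, v + h / 2 * k2v)
        k4y, k4v = deriv(t + h, y + h * k3y, v + h * k3v)
        y += h / 6 * (k1y + 2 * k2y + 2 * k3y + k4y)
        v += h / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
        t += h
    return y
```

Within the radius of convergence (here: everywhere) the two results should agree to high accuracy.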


9.2: Singular Points

As usual there is a snag. Most equations of interest are of a form where p and/or q are singular at some point t_0 (usually t_0 = 0). Any point t_0 where p(t) and q(t) are singular is called a singular point. Of most interest are a special class of singular points called regular singular points, where the differential equation can be given as
$$(t - t_0)^2 y''(t) + (t - t_0)\alpha(t)y'(t) + \beta(t)y(t) = 0,$$
with α and β analytic at t = t_0. Let us assume that this point is t_0 = 0. Frobenius’ method consists of the following technique: in the equation
$$x^2 y''(x) + x\alpha(x)y'(x) + \beta(x)y(x) = 0,$$
we assume a generalised series solution of the form
$$y(x) = x^\gamma \sum_{n=0}^{\infty} c_n x^n.$$
Equating powers of x we find
$$\gamma(\gamma - 1)c_0 x^\gamma + \alpha_0\gamma c_0 x^\gamma + \beta_0 c_0 x^\gamma = 0,$$
etc. The equation for the lowest power of x can be rewritten as
$$\gamma(\gamma - 1) + \alpha_0\gamma + \beta_0 = 0. \tag{9.2.1}$$
Equation 9.2.1 is called the indicial equation. It is a quadratic equation in γ that usually has two (complex) roots. Let me call these γ_1, γ_2. If γ_1 − γ_2 is not an integer, one can prove that the two series solutions for y with these two values of γ are independent solutions.

Let us look at an example:
$$t^2 y''(t) + \frac{3}{2} t y'(t) + t y = 0.$$
Here α(t) = 3/2, β(t) = t, so t = 0 is indeed a regular singular point. The indicial equation is
$$\gamma(\gamma - 1) + \frac{3}{2}\gamma = \gamma^2 + \frac{\gamma}{2} = 0,$$
which has roots γ_1 = 0, γ_2 = −1/2, which give two independent solutions
$$y_1(t) = \sum_k c_k t^k, \qquad y_2(t) = t^{-1/2}\sum_k d_k t^k.$$

 INDEPENDENT SOLUTIONS

Independent solutions are really very similar to independent vectors: two or more functions are independent if none of them can be written as a combination of the others. Thus x and 1 are independent, and 1 + x and 2 + x are dependent.


9.3: Special Cases

For the two special cases I will just give the solution. It requires a substantial amount of algebra to study these two cases.

Case I: Two equal roots

If the indicial equation has two equal roots, γ_1 = γ_2, we have one solution of the form
$$y_1(t) = t^{\gamma_1}\sum_{n=0}^{\infty} c_n t^n.$$
The other solution takes the form
$$y_2(t) = y_1(t)\ln t + t^{\gamma_1 + 1}\sum_{n=0}^{\infty} d_n t^n.$$
Notice that this last solution is always singular at t = 0, whatever the value of γ_1!

Case II: Two roots differing by an integer

If the indicial equation has two roots that differ by an integer, γ_1 − γ_2 = n > 0, we have one solution of the form
$$y_1(t) = t^{\gamma_1}\sum_{n=0}^{\infty} c_n t^n.$$
The other solution takes the form
$$y_2(t) = a\,y_1(t)\ln t + t^{\gamma_2}\sum_{n=0}^{\infty} d_n t^n.$$
The constant a is determined by substitution, and in a few relevant cases is even 0, so that the solutions can be of the generalised series form.

 Example 9.3.1

Find two independent solutions of
$$t^2 y'' + t y' + t y = 0$$
near t = 0.

Solution

The indicial equation is γ² = 0, so we get one solution of the series form
$$y_1(t) = \sum_n c_n t^n.$$
We find
$$t^2 y_1'' = \sum_n n(n-1) c_n t^n, \qquad t y_1' = \sum_n n c_n t^n, \qquad t y_1 = \sum_n c_n t^{n+1} = \sum_{n'} c_{n'-1} t^{n'}.$$
We add terms of equal power in t,


$$\begin{aligned}
t^2 y_1'' &= 0\cdot t + 2c_2 t^2 + 6c_3 t^3 + \ldots\\
t y_1' &= c_1 t + 2c_2 t^2 + 3c_3 t^3 + \ldots\\
t y_1 &= c_0 t + c_1 t^2 + c_2 t^3 + \ldots\\
t^2 y_1'' + t y_1' + t y_1 &= (c_1 + c_0)t + (4c_2 + c_1)t^2 + (9c_3 + c_2)t^3 + \ldots = 0.
\end{aligned}$$
Both of these ways give
$$t^2 y'' + t y' + t y = \sum_{n=1}^{\infty}\left(c_n n^2 + c_{n-1}\right)t^n,$$
and lead to the recurrence relation
$$c_n = -\frac{1}{n^2}\,c_{n-1},$$
which has the solution
$$c_n = (-1)^n \frac{1}{(n!)^2},$$
and thus
$$y_1(t) = \sum_{n=0}^{\infty} (-1)^n \frac{1}{(n!)^2}\,t^n.$$
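A quick numerical check of this solution (my sketch, not part of the text) is to evaluate the truncated series for y_1 together with its first two derivatives and confirm that the differential equation is satisfied:

```python
from math import factorial

def y1_derivs(t, n_terms=40):
    """Evaluate y1, y1' and y1'' from the series sum (-1)^n t^n / (n!)^2."""
    y = yp = ypp = 0.0
    for n in range(n_terms):
        cn = (-1) ** n / factorial(n) ** 2
        y += cn * t ** n
        if n >= 1:
            yp += cn * n * t ** (n - 1)
        if n >= 2:
            ypp += cn * n * (n - 1) * t ** (n - 2)
    return y, yp, ypp

t = 0.8
y, yp, ypp = y1_derivs(t)
residual = t * t * ypp + t * yp + t * y   # should vanish to machine precision
```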

Let us look at the second solution
$$y_2(t) = \ln(t)\,y_1(t) + \underbrace{t\sum_{n=0}^{\infty} d_n t^n}_{y_3(t)}.$$
Here I replace the power series with a symbol, y_3, for convenience. We find
$$y_2' = \ln(t)\,y_1' + \frac{y_1(t)}{t} + y_3',$$
$$y_2'' = \ln(t)\,y_1'' + \frac{2 y_1'(t)}{t} - \frac{y_1(t)}{t^2} + y_3''.$$
Taking all this together, we have
$$t^2 y_2'' + t y_2' + t y_2 = \ln(t)\left(t^2 y_1'' + t y_1' + t y_1\right) - y_1 + 2t y_1' + y_1 + t^2 y_3'' + t y_3' + t y_3 = 2t y_1' + t^2 y_3'' + t y_3' + t y_3 = 0.$$
If we now substitute the series expansions for y_1 and y_3 we get
$$2(n+1)c_{n+1} + (n+1)^2 d_n + d_{n-1} = 0,$$
which can be manipulated to the form [Here there is some missing material.]

 Example 9.3.2

Find two independent solutions of
$$t^2 y'' + t^2 y' - t y = 0$$
near t = 0.

Solution

The indicial equation is α(α − 1) = 0, so that we have two roots differing by an integer. The solution for α = 1 is y_1 = t, as can be checked by substitution. The other solution should be found in the form
$$y_2(t) = a\,t\ln t + \sum_{k=0}^{\infty} d_k t^k.$$

We find
$$y_2' = a + a\ln t + \sum_{k=0}^{\infty} k d_k t^{k-1},$$
$$y_2'' = \frac{a}{t} + \sum_{k=0}^{\infty} k(k-1) d_k t^{k-2}.$$
We thus find
$$t^2 y_2'' + t^2 y_2' - t y_2 = a(t + t^2) + \sum_{k=1}^{\infty}\left[d_k k(k-1) + d_{k-1}(k-2)\right] t^k = 0.$$
We find
$$d_0 = a, \qquad 2d_2 + a = 0, \qquad d_k = -\frac{k-2}{k(k-1)}\,d_{k-1} \quad (k > 2).$$
On fixing d_0 = 1 we find
$$y_2(t) = 1 + t\ln t + \sum_{k=2}^{\infty} (-1)^{k+1}\frac{1}{(k-1)!\,k!}\,t^k.$$


CHAPTER OVERVIEW

10: Bessel Functions and Two-Dimensional Problems
10.1: Temperature on a Disk
10.2: Bessel’s Equation
10.3: Gamma Function
10.4: Bessel Functions of General Order
10.5: Properties of Bessel functions
10.6: Sturm-Liouville Theory
10.7: Our Initial Problem and Bessel Functions
10.8: Fourier-Bessel Series
10.9: Back to our Initial Problem

10.1: Temperature on a Disk

Let us now turn to a different two-dimensional problem. A circular disk is prepared in such a way that its initial temperature is radially symmetric,
$$u(\rho, \phi, t = 0) = f(\rho).$$
Then it is placed between two perfect insulators and its circumference is connected to a freezer that keeps it at 0 °C, as sketched in Fig. 10.1.2.

Figure 10.1.1 : A circular plate, insulated from above and below.

Since the initial conditions do not depend on ϕ, we expect the solution to be radially symmetric as well, u(ρ, t), which satisfies the equation
$$\frac{\partial u}{\partial t} = k\left[\frac{\partial^2 u}{\partial \rho^2} + \frac{1}{\rho}\frac{\partial u}{\partial \rho}\right], \qquad u(c, t) = 0, \qquad u(\rho, 0) = f(\rho).$$

Figure 10.1.2 : The initial temperature in the disk.

Once again we separate variables, u(ρ, t) = R(ρ)T(t), which leads to the equation
$$\frac{1}{k}\frac{T'}{T} = \frac{R'' + \frac{1}{\rho}R'}{R} = -\lambda.$$
This corresponds to the two equations
$$\rho^2 R'' + \rho R' + \lambda\rho^2 R = 0, \qquad R(c) = 0,$$
$$T' + \lambda k T = 0.$$
The radial equation (which has a regular singular point at ρ = 0) is closely related to one of the most important equations of mathematical physics, Bessel’s equation. This equation can be reached from the substitution ρ = x/√λ, so that with R(ρ) = X(x) we get the equation
$$x^2\frac{d^2}{dx^2}X(x) + x\frac{d}{dx}X(x) + x^2 X(x) = 0, \qquad X(\sqrt{\lambda}\,c) = 0.$$


10.2: Bessel’s Equation

Bessel’s equation of order ν is given by
$$x^2 y'' + x y' + (x^2 - \nu^2)y = 0.$$
Clearly x = 0 is a regular singular point, so we can solve by Frobenius’ method. The indicial equation is obtained from the lowest power after the substitution y = x^γ, and is
$$\gamma^2 - \nu^2 = 0.$$
So a generalised series solution gives two independent solutions if ν ≠ n/2 for integer n. Now let us solve the problem and explicitly substitute the power series,
$$y = x^\nu \sum_n a_n x^n.$$

From Bessel’s equation we find
$$\sum_m (m+\nu)(m+\nu-1)a_m x^{m+\nu} + \sum_m (m+\nu)a_m x^{m+\nu} + \sum_m (x^2 - \nu^2)a_m x^{m+\nu} = 0,$$
which leads to
$$\left[(m+\nu)^2 - \nu^2\right]a_m = -a_{m-2},$$
or
$$a_m = -\frac{1}{m(m+2\nu)}\,a_{m-2}.$$
If we take ν = n > 0, we have
$$a_m = -\frac{1}{m(m+2n)}\,a_{m-2}.$$
This can be solved by iteration,
$$a_{2k} = -\frac{1}{4}\frac{1}{k(k+n)}\,a_{2(k-1)} = \left(-\frac{1}{4}\right)^2 \frac{1}{k(k-1)(k+n)(k+n-1)}\,a_{2(k-2)} = \left(-\frac{1}{4}\right)^k \frac{n!}{k!(k+n)!}\,a_0.$$
If we choose¹ \(a_0 = \frac{1}{n!\,2^n}\) we find the Bessel function of order n,
$$J_n(x) = \sum_{k=0}^{\infty} \frac{(-1)^k}{k!(k+n)!}\left(\frac{x}{2}\right)^{2k+n}.$$
There is another, second independent solution (which should have a logarithm in it), which goes to infinity at x = 0.


Figure 10.2.1 : A plot of the first three Bessel functions J_n and Y_n.

The general solution of Bessel’s equation of order n is a linear combination of J_n and Y_n,
$$y(x) = A J_n(x) + B Y_n(x).$$

1. This can be done since Bessel’s equation is linear, i.e., if g(x) is a solution, C g(x) is also a solution.
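The series definition translates directly into code. This is a sketch of mine, not part of the text; a partial sum with a few dozen terms is accurate for moderate x.

```python
from math import factorial

def bessel_j(n, x, terms=40):
    """Partial sum of J_n(x) = sum_k (-1)^k / (k! (k+n)!) * (x/2)^(2k+n)."""
    return sum((-1) ** k / (factorial(k) * factorial(k + n)) * (x / 2) ** (2 * k + n)
               for k in range(terms))
```

For instance, bessel_j(0, 0.0) returns 1 (only the k = 0 term survives), and the function changes sign near x ≈ 2.40, the first zero of J_0.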


10.3: Gamma Function

For ν not an integer the recursion relation for the Bessel function generates something very similar to factorials. These quantities are most easily expressed in something called a Gamma function, defined as
$$\Gamma(\nu) = \int_0^{\infty} e^{-t} t^{\nu-1}\,dt, \qquad \nu > 0.$$

Some special properties of the Γ function now follow immediately:
$$\Gamma(1) = \int_0^{\infty} e^{-t}\,dt = \left.-e^{-t}\right|_0^{\infty} = 1 - e^{-\infty} = 1,$$
$$\Gamma(\nu) = \int_0^{\infty} e^{-t} t^{\nu-1}\,dt = -\int_0^{\infty} t^{\nu-1}\,de^{-t} = \left.-e^{-t} t^{\nu-1}\right|_0^{\infty} + (\nu-1)\int_0^{\infty} e^{-t} t^{\nu-2}\,dt.$$
The first term is zero, and we obtain
$$\Gamma(\nu) = (\nu-1)\Gamma(\nu-1).$$
From this we conclude that
$$\Gamma(2) = 1\cdot 1 = 1, \quad \Gamma(3) = 2\cdot 1\cdot 1 = 2, \quad \Gamma(4) = 3\cdot 2\cdot 1\cdot 1 = 6, \quad \Gamma(n) = (n-1)!.$$
Thus for integer argument the Γ function is nothing but a factorial, but it is also defined for other arguments. This is the sense in which Γ generalises the factorial to non-integer arguments. One should realize that once one knows the Γ function between the values of its argument of, say, 1 and 2, one can evaluate any value of the Γ function through recursion. Given that Γ(1.65) = 0.9001168163, we find
$$\Gamma(3.65) = 2.65 \times 1.65 \times 0.9001168163 = 3.935760779.$$

 Exercise 10.3.1

Evaluate Γ(3), Γ(11), Γ(2.65).

Answer

Γ(3) = 2! = 2, Γ(11) = 10! = 3628800, Γ(2.65) = 1.65 × 0.9001168163 = 1.485192746.
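The recursion and the worked numbers above can be confirmed with Python's standard-library math.gamma (my check, not part of the text):

```python
import math

# Step the recursion Gamma(nu) = (nu - 1) * Gamma(nu - 1) down to the interval [1, 2]:
g365 = 2.65 * 1.65 * math.gamma(1.65)     # should equal Gamma(3.65)

# For integer arguments the Gamma function is a factorial:
g11 = math.gamma(11)                      # equals 10! = 3628800
```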

We also would like to determine the Γ function for ν < 0.

10.5: Properties of Bessel functions

Bessel functions have many interesting properties. For integer n and real ν ≥ 0:
$$J_{-n}(x) = (-1)^n J_n(x),$$
$$\frac{d}{dx}\left[x^{-\nu} J_\nu(x)\right] = -x^{-\nu} J_{\nu+1}(x),$$
$$\frac{d}{dx}\left[x^{\nu} J_\nu(x)\right] = x^{\nu} J_{\nu-1}(x),$$
$$\frac{d}{dx} J_\nu(x) = \frac{1}{2}\left[J_{\nu-1}(x) - J_{\nu+1}(x)\right],$$
$$x J_{\nu+1}(x) = 2\nu J_\nu(x) - x J_{\nu-1}(x),$$
$$\int x^{-\nu} J_{\nu+1}(x)\,dx = -x^{-\nu} J_\nu(x) + C,$$
$$\int x^{\nu} J_{\nu-1}(x)\,dx = x^{\nu} J_\nu(x) + C.$$
Let me prove a few of these. First notice from the definition
$$J_n(x) = \sum_{k=0}^{\infty} \frac{(-1)^k}{k!(n+k)!}\left(\frac{x}{2}\right)^{n+2k}$$
that J_n(x) is even or odd if n is even or odd. Substituting x = 0 in the definition gives 0 if n > 0, since in that case we have a sum of positive powers of 0, which are all equally zero. Let us look at J_{−n}:
$$J_{-n}(x) = \sum_{k=0}^{\infty} \frac{(-1)^k}{k!\,\Gamma(-n+k+1)}\left(\frac{x}{2}\right)^{-n+2k} = \sum_{k=n}^{\infty} \frac{(-1)^k}{k!\,\Gamma(-n+k+1)}\left(\frac{x}{2}\right)^{-n+2k} = \sum_{l=0}^{\infty} \frac{(-1)^{l+n}}{(l+n)!\,l!}\left(\frac{x}{2}\right)^{n+2l} = (-1)^n J_n(x).$$

Here we have used the fact that since Γ(−l) = ±∞, 1/Γ(−l) = 0 [this can also be proven by defining a recurrence relation for 1/Γ(l)]. Furthermore we changed summation variables to l = −n + k.

The next one:
$$\frac{d}{dx}\left[x^{-\nu}J_\nu(x)\right] = \frac{d}{dx}\sum_{k=0}^{\infty} \frac{(-1)^k}{k!\,\Gamma(\nu+k+1)}\,\frac{x^{2k}}{2^{2k+\nu}} = \sum_{k=1}^{\infty} \frac{(-1)^k}{(k-1)!\,\Gamma(\nu+k+1)}\,\frac{x^{2k-1}}{2^{2k+\nu-1}} = -x^{-\nu}\sum_{l=0}^{\infty} \frac{(-1)^{l}}{l!\,\Gamma(\nu+1+l+1)}\left(\frac{x}{2}\right)^{2l+\nu+1} = -x^{-\nu}J_{\nu+1}(x).$$

Similarly
$$\frac{d}{dx}\left[x^{\nu} J_\nu(x)\right] = x^{\nu} J_{\nu-1}(x).$$
The next relation can be obtained by evaluating the derivatives in the two equations above, and solving for J_ν(x):
$$x^{-\nu}J_\nu'(x) - \nu x^{-\nu-1}J_\nu(x) = -x^{-\nu}J_{\nu+1}(x),$$
$$x^{\nu}J_\nu'(x) + \nu x^{\nu-1}J_\nu(x) = x^{\nu}J_{\nu-1}(x).$$
Multiply the first equation by x^ν and the second one by x^{−ν} and add:
$$2J_\nu'(x) = -J_{\nu+1}(x) + J_{\nu-1}(x).$$
After rearrangement of terms this leads to the desired expression for J_ν'(x). Eliminating J_ν' between the equations gives (same multiplication, take the difference instead)
$$\frac{2\nu}{x}J_\nu(x) = J_{\nu+1}(x) + J_{\nu-1}(x),$$
which rearranges to x J_{ν+1}(x) = 2ν J_ν(x) − x J_{ν−1}(x). Integrating the differential relations leads to the integral relations. Bessel functions are an inexhaustible subject – there are always more useful properties than one knows. In mathematical physics one often uses specialist books.


10.6: Sturm-Liouville Theory

In the end we shall want to write a solution to an equation as a series of Bessel functions. In order to do that we shall need to understand about orthogonality of Bessel functions – just as sines and cosines were orthogonal. This is most easily done by developing a mathematical tool called Sturm-Liouville theory. It starts from an equation in the so-called self-adjoint form
$$[r(x)y'(x)]' + [p(x) + \lambda s(x)]y(x) = 0 \tag{10.6.1}$$
where λ is a number, and r(x) and s(x) are greater than 0 on [a, b]. We apply the boundary conditions
$$a_1 y(a) + a_2 y'(a) = 0, \qquad b_1 y(b) + b_2 y'(b) = 0,$$
with a_1 and a_2 not both zero, and b_1 and b_2 similar.

 Theorem 10.6.1

If there is a solution to (10.6.1) then λ is real.

Proof

Assume that λ is a complex number (λ = α + iβ) with solution Φ. By complex conjugation we find that
$$[r(x)\Phi'(x)]' + [p(x) + \lambda s(x)]\Phi(x) = 0,$$
$$[r(x)(\Phi^*)'(x)]' + [p(x) + \lambda^* s(x)]\Phi^*(x) = 0,$$
where * denotes complex conjugation. Multiply the first equation by Φ*(x) and the second by Φ(x), and subtract the two equations:
$$(\lambda^* - \lambda)s(x)\Phi^*(x)\Phi(x) = \Phi(x)[r(x)(\Phi^*)'(x)]' - \Phi^*(x)[r(x)\Phi'(x)]'.$$
Now integrate over x from a to b and find
$$(\lambda^* - \lambda)\int_a^b s(x)\Phi^*(x)\Phi(x)\,dx = \int_a^b \Phi(x)[r(x)(\Phi^*)'(x)]' - \Phi^*(x)[r(x)\Phi'(x)]'\,dx.$$
The second part can be integrated by parts, and we find
$$(\lambda^* - \lambda)\int_a^b s(x)\Phi^*(x)\Phi(x)\,dx = \left.\Phi(x)r(x)(\Phi^*)'(x) - \Phi^*(x)r(x)\Phi'(x)\right|_a^b = r(b)\left[\Phi(b)(\Phi^*)'(b) - \Phi^*(b)\Phi'(b)\right] - r(a)\left[\Phi(a)(\Phi^*)'(a) - \Phi^*(a)\Phi'(a)\right] = 0,$$
where the last step can be done using the boundary conditions. Since both Φ*(x)Φ(x) and s(x) are greater than zero we conclude that \(\int_a^b s(x)\Phi^*(x)\Phi(x)\,dx > 0\), which can now be divided out of the equation to lead to λ = λ*.

 Theorem 10.6.2

Let Φ_n and Φ_m be two solutions for different values of λ, λ_n ≠ λ_m. Then
$$\int_a^b s(x)\Phi_n(x)\Phi_m(x)\,dx = 0.$$

Proof

The proof is to a large extent identical to the one above: multiply the equation for Φ_n(x) by Φ_m(x) and vice-versa. Subtract and find
$$(\lambda_n - \lambda_m)\int_a^b s(x)\Phi_m(x)\Phi_n(x)\,dx = 0,$$
which leads us to conclude that
$$\int_a^b s(x)\Phi_n(x)\Phi_m(x)\,dx = 0.$$

 Theorem 10.6.3

Under the conditions set out above:
a. There exists a real infinite set of eigenvalues λ_0, …, λ_n, … with lim_{n→∞} λ_n = ∞.
b. If Φ_n is the eigenfunction corresponding to λ_n, it has exactly n zeroes in [a, b].

Proof

No proof shall be given.

Clearly the Bessel equation is of self-adjoint form: rewrite
$$x^2 y'' + x y' + (x^2 - \nu^2)y = 0$$
as (divide by x)
$$[x y']' + \left(x - \frac{\nu^2}{x}\right)y = 0.$$
We cannot identify ν² with λ, and we do not have positive weight functions. It can be proven from properties of the equation that the Bessel functions have an infinite number of zeroes on the interval [0, ∞). A small list of these:

J_0 : 2.40, 5.52, 8.65, 11.79, …
J_{1/2} : π, 2π, 3π, 4π, …
J_8 : 12.23, 16.04, 19.55, 22.95, …




10.7: Our Initial Problem and Bessel Functions

We started the discussion from the problem of the temperature on a circular disk, solved in polar coordinates. Since the initial conditions do not depend on ϕ, we expect the solution to be radially symmetric as well, u(ρ, t), which satisfies the equation
$$\frac{\partial u}{\partial t} = k\left[\frac{\partial^2 u}{\partial \rho^2} + \frac{1}{\rho}\frac{\partial u}{\partial \rho}\right], \qquad u(c, t) = 0, \qquad u(\rho, 0) = f(\rho).$$

With u(ρ, t) = R(ρ)T(t) we found the equations
$$\rho^2 R'' + \rho R' + \lambda\rho^2 R = 0, \qquad R(c) = 0,$$
$$T' + \lambda k T = 0.$$
The equation for R is clearly self-adjoint: it can be written as
$$[\rho R']' + \lambda\rho R = 0.$$
So how does the equation for R relate to Bessel’s equation? Let us make the change of variables x = √λ ρ. We find
$$\frac{d}{d\rho} = \sqrt{\lambda}\,\frac{d}{dx},$$
and we can remove a common factor √λ to obtain (X(x) = R(ρ))
$$[x X']' + x X = 0,$$
which is Bessel’s equation of order 0, i.e.,
$$R(\rho) = J_0(\rho\sqrt{\lambda}).$$
The boundary condition R(c) = 0 shows that
$$c\sqrt{\lambda_n} = x_n,$$
where x_n are the points where J_0(x) = 0. We thus conclude
$$R_n(\rho) = J_0(\rho\sqrt{\lambda_n}).$$
The first five solutions R_n (for c = 1) are shown in Fig. 10.7.1.

Figure 10.7.1 : A graph of the first five functions R_n

From Sturm-Liouville theory we conclude that
$$\int_0^c \rho\,d\rho\,R_n(\rho)R_m(\rho) = 0 \qquad \text{if } n \ne m.$$
Together with the solution for the T equation,
$$T_n(t) = \exp(-\lambda_n k t),$$
we find a Fourier-Bessel series type solution
$$u(\rho, t) = \sum_{n=1}^{\infty} A_n J_0(\rho\sqrt{\lambda_n})\exp(-\lambda_n k t),$$
with λ_n = (x_n/c)².

In order to understand how to determine the coefficients A_n from the initial condition u(ρ, 0) = f(ρ) we need to study Fourier-Bessel series in a little more detail.
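The numbers x_n are easy to generate without special libraries. The sketch below is mine, not the text's: it sums the series for J_0 from Section 10.2 and pins down each sign change by bisection, with bracketing intervals read off from the table of zeroes in Section 10.6.

```python
from math import factorial

def j0(x, terms=60):
    """Series for J_0(x)."""
    return sum((-1) ** k / factorial(k) ** 2 * (x / 2) ** (2 * k) for k in range(terms))

def bisect_zero(f, lo, hi, iters=200):
    """Bisection; assumes f changes sign on [lo, hi]."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# First three zeros x_n of J_0 and the eigenvalues lambda_n = (x_n / c)^2 for c = 1:
zeros = [bisect_zero(j0, a, b) for a, b in [(2, 3), (5, 6), (8, 9)]]
lambdas = [x * x for x in zeros]
```

With the λ_n in hand, each mode decays as exp(−λ_n k t), so higher modes die out rapidly.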


10.8: Fourier-Bessel Series

So how can we determine in general the coefficients in the Fourier-Bessel series
$$f(\rho) = \sum_{j=1}^{\infty} C_j J_\nu(\alpha_j\rho)?$$
The corresponding self-adjoint version of Bessel’s equation is easily found to be (with R_j(ρ) = J_ν(α_jρ))
$$[\rho R_j']' + \left(\alpha_j^2\rho - \frac{\nu^2}{\rho}\right)R_j = 0,$$
where we assume that f and R_j satisfy the boundary conditions
$$b_1 f(c) + b_2 f'(c) = 0, \qquad b_1 R_j(c) + b_2 R_j'(c) = 0.$$
From Sturm-Liouville theory we do know that
$$\int_0^c \rho\,J_\nu(\alpha_i\rho)J_\nu(\alpha_j\rho)\,d\rho = 0 \qquad \text{if } i \ne j,$$

but we shall also need the values when i = j! Let us use the self-adjoint form of the equation, multiply with 2ρR_j', and integrate over ρ from 0 to c,
$$\int_0^c \left[(\rho R_j')' + \left(\alpha_j^2\rho - \frac{\nu^2}{\rho}\right)R_j\right] 2\rho R_j'\,d\rho = 0.$$
This can be brought to the form (integrate the first term by parts, bring the other two terms to the right-hand side)
$$\int_0^c \frac{d}{d\rho}(\rho R_j')^2\,d\rho = 2\nu^2\int_0^c R_j R_j'\,d\rho - 2\alpha_j^2\int_0^c \rho^2 R_j R_j'\,d\rho,$$
$$\left.(\rho R_j')^2\right|_0^c = \left.\nu^2 R_j^2\right|_0^c - 2\alpha_j^2\int_0^c \rho^2 R_j R_j'\,d\rho.$$
The last integral can now be done by parts:
$$2\int_0^c \rho^2 R_j R_j'\,d\rho = -2\int_0^c \rho R_j^2\,d\rho + \left.\rho^2 R_j^2\right|_0^c.$$
So we finally conclude that
$$2\alpha_j^2\int_0^c \rho R_j^2\,d\rho = \left.\left[(\alpha_j^2\rho^2 - \nu^2)R_j^2 + (\rho R_j')^2\right]\right|_0^c.$$

In order to make life not too complicated we shall only look at boundary conditions where $f(c) = R(c) = 0$. The other cases (mixed, or purely $f'(c) = 0$) go very similarly! Using the fact that $R_j(\rho) = J_\nu(\alpha_j \rho)$, we find

$$R_j' = \alpha_j J_\nu'(\alpha_j \rho).$$

We conclude that

$$\begin{aligned}
2\alpha_j^2 \int_0^c \rho R_j^2\, d\rho &= \left.\left(\rho \alpha_j J_\nu'(\alpha_j \rho)\right)^2\right|_0^c \\
&= c^2 \alpha_j^2 \left(J_\nu'(\alpha_j c)\right)^2 \\
&= c^2 \alpha_j^2 \left(\frac{\nu}{\alpha_j c} J_\nu(\alpha_j c) - J_{\nu+1}(\alpha_j c)\right)^2 \\
&= c^2 \alpha_j^2 \left(J_{\nu+1}(\alpha_j c)\right)^2.
\end{aligned}$$

We thus finally have the result

$$\int_0^c \rho R_j^2\, d\rho = \frac{c^2}{2} J_{\nu+1}^2(\alpha_j c).$$

 Example 10.8.1

Consider the function

$$f(x) = \begin{cases} x^3 & 0 < x < 10 \\ 0 & x > 10 \end{cases}$$

Expand this function in a Fourier-Bessel series using $J_3$.

Solution

From our definitions we find that

$$f(x) = \sum_{j=1}^{\infty} A_j J_3(\alpha_j x),$$

with

$$\begin{aligned}
A_j &= \frac{2}{100\, J_4^2(10\alpha_j)} \int_0^{10} x^3 J_3(\alpha_j x)\, x\, dx \\
&= \frac{2}{100\, J_4^2(10\alpha_j)} \frac{1}{\alpha_j^5} \int_0^{10\alpha_j} s^4 J_3(s)\, ds \\
&= \frac{2}{100\, J_4^2(10\alpha_j)} \frac{1}{\alpha_j^5} (10\alpha_j)^4 J_4(10\alpha_j) \\
&= \frac{200}{\alpha_j J_4(10\alpha_j)}.
\end{aligned}$$

Using $\alpha_j = \ldots$, we find that the first five values of $A_j$ are $1050.95$, $-821.503$, $703.991$, $-627.577$, $572.301$. The first five partial sums are plotted in Fig. 10.8.1.

Figure 10.8.1: A graph of the first five partial sums for $x^3$ expressed in $J_3$.
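The closed form $A_j = 200/(\alpha_j J_4(10\alpha_j))$ can be cross-checked against direct quadrature of the coefficient integral. A sketch (not part of the text, assuming scipy is available):

```python
# Verify the closed-form coefficients of Example 10.8.1 against direct quadrature.
# Assumption: scipy is available.
import numpy as np
from scipy.special import jv, jn_zeros
from scipy.integrate import quad

alphas = jn_zeros(3, 5) / 10            # alpha_j such that J_3(10 alpha_j) = 0

def A_quad(j):
    """A_j = 2/(100 J_4^2(10 alpha_j)) * int_0^10 x^3 J_3(alpha_j x) x dx."""
    integral, _ = quad(lambda x: x**4 * jv(3, alphas[j] * x), 0, 10)
    return 2 * integral / (100 * jv(4, 10 * alphas[j])**2)

A_closed = 200 / (alphas * jv(4, 10 * alphas))
print(np.round(A_closed, 3))            # alternating-sign coefficients, cf. the text
```

Both routes give the same alternating-sign coefficients quoted above.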


10.9: Back to our Initial Problem

After all that we have learned, we know that in order to determine the solution of the initial problem in Sec. 10.1 we would have to calculate the integrals

$$A_j = \frac{2}{c^2 J_1^2(c\alpha_j)} \int_0^c f(\rho) J_0(\alpha_j \rho)\, \rho\, d\rho.$$
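As a concrete illustration (not from the text), take $c = 1$ and the hypothetical uniform profile $f(\rho) = 1$; then $\int_0^1 J_0(\alpha_j \rho)\rho\, d\rho = J_1(\alpha_j)/\alpha_j$, so $A_j = 2/(\alpha_j J_1(\alpha_j))$. A sketch assuming scipy:

```python
# Coefficients A_j for the hypothetical choice f(rho) = 1 with c = 1,
# compared against the closed form 2/(alpha_j J_1(alpha_j)).
import numpy as np
from scipy.special import jv, jn_zeros
from scipy.integrate import quad

alphas = jn_zeros(0, 5)                 # with c = 1, alpha_j are the zeros of J_0

def A(j):
    integral, _ = quad(lambda rho: jv(0, alphas[j] * rho) * rho, 0, 1)
    return 2 * integral / jv(1, alphas[j])**2

A_closed = 2 / (alphas * jv(1, alphas))
print([round(A(j), 6) for j in range(5)])
```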


CHAPTER OVERVIEW

11: Separation of Variables in Three Dimensions

We have up to now concentrated on 2D problems, but a lot of physics is three dimensional, and often we have spherical symmetry – that means symmetry for rotation over any angle. In these cases we use spherical coordinates, as indicated in figure 7.2.1.

11.1: Modelling the Eye
11.2: Properties of Legendre Polynomials
11.3: Fourier-Legendre Series
11.4: Modelling the Eye – Revisited


11.1: Modelling the Eye

Let me model the temperature in a simple model of the eye, where the eye is a sphere and the eyelids are circular. In that case we can put the $z$-axis straight through the middle of the eye, and we can assume that the temperature depends only on $r$ and $\theta$, not on $\phi$. We assume that the part of the eye in contact with air is at a temperature of $20^\circ$C, and the part in contact with the body is at $36^\circ$C.

Figure 11.1.1: The temperature on the outside of a simple model of the eye

If we look for the steady-state temperature it is described by Laplace's equation,

$$\nabla^2 u(r, \theta) = 0.$$

Expressing the Laplacian $\nabla^2$ in spherical coordinates (see chapter 7) we find

$$\frac{1}{r^2} \frac{\partial}{\partial r}\left(r^2 \frac{\partial u}{\partial r}\right) + \frac{1}{r^2 \sin\theta} \frac{\partial}{\partial \theta}\left(\sin\theta\, \frac{\partial u}{\partial \theta}\right) = 0.$$

Once again we solve the equation by separation of variables,

$$u(r, \theta) = R(r) T(\theta).$$

After this substitution we realize that

$$\frac{[r^2 R']'}{R} = -\frac{[\sin\theta\, T']'}{T \sin\theta} = \lambda.$$

The equation for $R$ will be shown to be easy to solve (later). The one for $T$ is of much more interest: for 3D problems the angular dependence is more complicated, whereas in 2D the angular functions were just sines and cosines. The equation for $T$ is

$$[\sin\theta\, T']' + \lambda T \sin\theta = 0.$$

This equation is called Legendre's equation, or actually it carries that name after changing variables to $x = \cos\theta$. Since $\theta$ runs from $0$ to $\pi$, we find $\sin\theta > 0$, and we have

$$\sin\theta = \sqrt{1 - x^2}.$$

After making this change of variables (with $y(x) = T(\theta) = T(\arccos x)$, where we now differentiate w.r.t. $x$, using $d/d\theta = -\sqrt{1-x^2}\; d/dx$) we find the equation

$$\frac{d}{dx}\left[(1 - x^2) \frac{dy}{dx}\right] + \lambda y = 0.$$

This equation is easily seen to be self-adjoint. It is not very hard to show that $x = 0$ is a regular (not singular) point – but the equation is singular at $x = \pm 1$. Near $x = 0$ we can solve it by straightforward substitution of a Taylor series,

$$y(x) = \sum_{j=0}^{\infty} a_j x^j.$$

We find the equation

$$\sum_{j=0}^{\infty} j(j-1) a_j x^{j-2} - \sum_{j=0}^{\infty} j(j-1) a_j x^j - 2\sum_{j=0}^{\infty} j a_j x^j + \lambda \sum_{j=0}^{\infty} a_j x^j = 0.$$

After introducing the new variable $i = j - 2$ in the first sum, we have

$$\sum_{i=0}^{\infty} (i+2)(i+1) a_{i+2} x^i - \sum_{j=0}^{\infty} \left[j(j+1) - \lambda\right] a_j x^j = 0.$$

Collecting the terms of order $x^k$, we find the recurrence relation

$$a_{k+2} = \frac{k(k+1) - \lambda}{(k+1)(k+2)}\, a_k.$$

If $\lambda = n(n+1)$ this series terminates – and in fact those are the only acceptable solutions: any solution where $\lambda$ takes a different value diverges at $x = +1$ or $x = -1$, which is not acceptable for a physical quantity – it can't just diverge at the north or south pole ($x = \cos\theta = \pm 1$ are the north and south poles of a sphere). We thus have, for $n$ even,

$$y_n = a_0 + a_2 x^2 + \ldots + a_n x^n.$$

For odd $n$ we find odd polynomials,

$$y_n = a_1 x + a_3 x^3 + \ldots + a_n x^n.$$

One conventionally defines

$$a_n = \frac{(2n)!}{2^n (n!)^2}.$$

With this definition we obtain

$$\begin{aligned}
P_0 &= 1, & P_1 &= x, \\
P_2 &= \tfrac{3}{2} x^2 - \tfrac{1}{2}, & P_3 &= \tfrac{1}{2}\left(5x^3 - 3x\right), \\
P_4 &= \tfrac{1}{8}\left(35x^4 - 30x^2 + 3\right), & P_5 &= \tfrac{1}{8}\left(63x^5 - 70x^3 + 15x\right).
\end{aligned}$$

A graph of these polynomials can be found in figure 11.1.2.

Figure 11.1.2: The first few Legendre polynomials $P_n(x)$.
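The terminating recurrence, together with the conventional normalization of $a_n$, pins these polynomials down completely. The sketch below (not in the original text, assuming numpy is available) builds $P_n$ that way and checks it against numpy's Legendre routines.

```python
# Build P_n from the recurrence a_{k+2} = [k(k+1) - n(n+1)] a_k / ((k+1)(k+2)),
# run downwards from the conventional a_n = (2n)!/(2^n (n!)^2).
from math import factorial
import numpy as np
from numpy.polynomial import legendre

def legendre_coeffs(n):
    """Monomial coefficients [a_0, ..., a_n] of P_n from the terminating series."""
    a = [0.0] * (n + 1)
    a[n] = factorial(2 * n) / (2**n * factorial(n)**2)
    lam = n * (n + 1)
    for k in range(n - 2, -1, -2):      # only every other coefficient is nonzero
        a[k] = a[k + 2] * (k + 1) * (k + 2) / (k * (k + 1) - lam)
    return a

for n in range(6):
    reference = legendre.leg2poly([0] * n + [1])   # P_n in monomial form
    assert np.allclose(legendre_coeffs(n), reference)
print(legendre_coeffs(4))               # coefficients of (35x^4 - 30x^2 + 3)/8
```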


11.2: Properties of Legendre Polynomials

Let $F(x, t)$ be a function of the two variables $x$ and $t$ that can be expressed as a Taylor series in $t$, $\sum_n c_n(x) t^n$. The function $F$ is then called a generating function of the functions $c_n$.

 Exercise 11.2.1

Show that $F(x, t) = \frac{1}{1 - xt}$ is a generating function of the polynomials $x^n$.

Answer

Look at

$$\frac{1}{1 - xt} = \sum_{n=0}^{\infty} x^n t^n \qquad (|xt| < 1).$$

 Exercise 11.2.2

Show that $F(x, t) = \exp\left(\frac{x}{2}\left(t - \frac{1}{t}\right)\right)$ is the generating function for the Bessel functions,

$$F(x, t) = \exp\left(\frac{x}{2}\left(t - \frac{1}{t}\right)\right) = \sum_{n=-\infty}^{\infty} J_n(x) t^n.$$

Answer

TBA

 Exercise 11.2.4

(The case of most interest here)

$$F(x, t) = \frac{1}{\sqrt{1 - 2xt + t^2}} = \sum_{n=0}^{\infty} P_n(x) t^n.$$

Answer

TBA

Rodrigues' Generating Formula

$$P_n(x) = \frac{1}{2^n n!} \frac{d^n}{dx^n} \left(x^2 - 1\right)^n. \tag{11.2.1}$$
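Rodrigues' formula can also be verified symbolically. The following sketch (not part of the text, with sympy as an assumed dependency) expands the $n$-th derivative and compares with the explicit polynomials listed in section 11.1.

```python
# Symbolic check of Rodrigues' formula P_n = (1/(2^n n!)) d^n/dx^n (x^2 - 1)^n.
# Assumption: sympy is available.
import sympy as sp

x = sp.symbols('x')

def rodrigues(n):
    return sp.expand(sp.diff((x**2 - 1)**n, x, n) / (2**n * sp.factorial(n)))

# compare with the explicit polynomials of section 11.1
assert sp.expand(rodrigues(2) - sp.Rational(1, 2) * (3 * x**2 - 1)) == 0
assert sp.expand(rodrigues(5) - sp.Rational(1, 8) * (63 * x**5 - 70 * x**3 + 15 * x)) == 0
print(rodrigues(3))
```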

 Properties of Legendre Polynomials

1. $P_n(x)$ is even or odd if $n$ is even or odd.
2. $P_n(1) = 1$.
3. $P_n(-1) = (-1)^n$.
4. $(2n+1) P_n(x) = P'_{n+1}(x) - P'_{n-1}(x)$.
5. $(2n+1)\, x P_n(x) = (n+1) P_{n+1}(x) + n P_{n-1}(x)$.
6. $\int_{-1}^{x} P_n(x')\, dx' = \frac{1}{2n+1} \left[P_{n+1}(x) - P_{n-1}(x)\right]$.

Let us prove some of these relations, first Rodrigues' formula (Equation 11.2.1). We start from the simple formula

$$(x^2 - 1) \frac{d}{dx}\left(x^2 - 1\right)^n - 2nx \left(x^2 - 1\right)^n = 0,$$

which is easily proven by explicit differentiation. This is then differentiated $n+1$ times,

$$\begin{aligned}
\frac{d^{n+1}}{dx^{n+1}} & \left[(x^2-1)\frac{d}{dx}(x^2-1)^n - 2nx\,(x^2-1)^n\right] \\
&= n(n+1)\frac{d^n}{dx^n}(x^2-1)^n + 2(n+1)x\,\frac{d^{n+1}}{dx^{n+1}}(x^2-1)^n + (x^2-1)\frac{d^{n+2}}{dx^{n+2}}(x^2-1)^n \\
&\qquad - 2nx\,\frac{d^{n+1}}{dx^{n+1}}(x^2-1)^n - 2n(n+1)\frac{d^n}{dx^n}(x^2-1)^n \\
&= -n(n+1)\frac{d^n}{dx^n}(x^2-1)^n + 2x\,\frac{d^{n+1}}{dx^{n+1}}(x^2-1)^n + (x^2-1)\frac{d^{n+2}}{dx^{n+2}}(x^2-1)^n \\
&= -\left[\frac{d}{dx}\left\{(1-x^2)\frac{d}{dx}\left(\frac{d^n}{dx^n}(x^2-1)^n\right)\right\} + n(n+1)\left\{\frac{d^n}{dx^n}(x^2-1)^n\right\}\right] = 0.
\end{aligned}$$

We have thus proven that $\frac{d^n}{dx^n}(x^2-1)^n$ satisfies Legendre's equation. The normalization follows from the evaluation of the highest coefficient,

$$\frac{d^n}{dx^n} x^{2n} = \frac{(2n)!}{n!}\, x^n,$$

and thus we need to multiply the derivative with $\frac{1}{2^n n!}$ to get the properly normalized $P_n$.

Let's use the generating function to prove some of the other properties.

2.:

$$F(1, t) = \frac{1}{1 - t} = \sum_n t^n$$

has all coefficients one, so $P_n(1) = 1$. Similarly for 3.:

$$F(-1, t) = \frac{1}{1 + t} = \sum_n (-1)^n t^n.$$

Property 5. can be found by differentiating the generating function with respect to $t$:

$$\begin{aligned}
\frac{d}{dt} \frac{1}{\sqrt{1 - 2tx + t^2}} &= \frac{d}{dt} \sum_{n=0}^{\infty} t^n P_n(x), \\
\frac{x - t}{(1 - 2tx + t^2)^{3/2}} &= \sum_{n=0}^{\infty} n t^{n-1} P_n(x), \\
\frac{x - t}{1 - 2xt + t^2} \sum_{n=0}^{\infty} t^n P_n(x) &= \sum_{n=0}^{\infty} n t^{n-1} P_n(x), \\
\sum_{n=0}^{\infty} t^n x P_n(x) - \sum_{n=0}^{\infty} t^{n+1} P_n(x) &= \sum_{n=0}^{\infty} n t^{n-1} P_n(x) - 2 \sum_{n=0}^{\infty} n t^n x P_n(x) + \sum_{n=0}^{\infty} n t^{n+1} P_n(x), \\
\sum_{n=0}^{\infty} t^n (2n+1) x P_n(x) &= \sum_{n=0}^{\infty} (n+1) t^n P_{n+1}(x) + \sum_{n=0}^{\infty} n t^n P_{n-1}(x).
\end{aligned}$$

Equating terms with identical powers of $t$ we find

$$(2n+1)\, x P_n(x) = (n+1) P_{n+1}(x) + n P_{n-1}(x).$$

Proofs for the other properties can be found using similar methods.

11.3: Fourier-Legendre Series

Since Legendre's equation is self-adjoint, we can show that the $P_n(x)$ form an orthogonal set of functions. To decompose functions as series in Legendre polynomials we shall need the integrals

$$\int_{-1}^{1} P_n^2(x)\, dx = \frac{2}{2n+1},$$

which can be determined by using relation 5. twice to obtain a recurrence relation,

$$\begin{aligned}
\int_{-1}^{1} P_n^2(x)\, dx &= \int_{-1}^{1} P_n(x)\, \frac{(2n-1)\, x P_{n-1}(x) - (n-1) P_{n-2}(x)}{n}\, dx \\
&= \frac{2n-1}{n} \int_{-1}^{1} x P_n(x) P_{n-1}(x)\, dx \\
&= \frac{2n-1}{n} \int_{-1}^{1} \frac{(n+1) P_{n+1}(x) + n P_{n-1}(x)}{2n+1}\, P_{n-1}(x)\, dx \\
&= \frac{2n-1}{2n+1} \int_{-1}^{1} P_{n-1}^2(x)\, dx,
\end{aligned}$$

and the use of a very simple integral to fix this number for $n = 0$,

$$\int_{-1}^{1} P_0^2(x)\, dx = 2.$$

So we can now develop any function on $[-1, 1]$ in a Fourier-Legendre series,

$$f(x) = \sum_n A_n P_n(x),$$
$$A_n = \frac{2n+1}{2} \int_{-1}^{1} f(x) P_n(x)\, dx.$$
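As a quick illustration (not in the text), applying the coefficient formula to the sample function $f(x) = x^2$ should recover $x^2 = \tfrac{1}{3} P_0 + \tfrac{2}{3} P_2$. A sketch assuming scipy is available:

```python
# Fourier-Legendre coefficients A_n = (2n+1)/2 * int_{-1}^{1} f(x) P_n(x) dx
# for the sample function f(x) = x^2.
import numpy as np
from scipy.special import eval_legendre
from scipy.integrate import quad

def fourier_legendre_coeff(f, n):
    integral, _ = quad(lambda x: f(x) * eval_legendre(n, x), -1, 1)
    return (2 * n + 1) / 2 * integral

A = [fourier_legendre_coeff(lambda x: x**2, n) for n in range(4)]
print(np.round(A, 6))                   # approximately [1/3, 0, 2/3, 0]
```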

 Exercise 11.3.1: Fourier-Legendre series

Find the Fourier-Legendre series for

$$f(x) = \begin{cases} 0, & -1 < x < 0 \\ 1, & 0 < x < 1 \end{cases}$$