Partial Differential Equations


PARTIAL DIFFERENTIAL EQUATIONS

Erich Miersemann Universität Leipzig


This text is disseminated via the Open Education Resource (OER) LibreTexts Project (https://LibreTexts.org) and like the hundreds of other texts available within this powerful platform, it is freely available for reading, printing and "consuming." Most, but not all, pages in the library have licenses that may allow individuals to make changes, save, and print this book. Carefully consult the applicable license(s) before pursuing such effects. Instructors can adopt existing LibreTexts texts or Remix them to quickly build course-specific resources to meet the needs of their students. Unlike traditional textbooks, LibreTexts’ web based origins allow powerful integration of advanced features and new technologies to support learning.

The LibreTexts mission is to unite students, faculty and scholars in a cooperative effort to develop an easy-to-use online platform for the construction, customization, and dissemination of OER content to reduce the burdens of unreasonable textbook costs to our students and society. The LibreTexts project is a multi-institutional collaborative venture to develop the next generation of openaccess texts to improve postsecondary education at all levels of higher learning by developing an Open Access Resource environment. The project currently consists of 14 independently operating and interconnected libraries that are constantly being optimized by students, faculty, and outside experts to supplant conventional paper-based books. These free textbook alternatives are organized within a central environment that is both vertically (from advance to basic level) and horizontally (across different fields) integrated. The LibreTexts libraries are Powered by NICE CXOne and are supported by the Department of Education Open Textbook Pilot Project, the UC Davis Office of the Provost, the UC Davis Library, the California State University Affordable Learning Solutions Program, and Merlot. This material is based upon work supported by the National Science Foundation under Grant No. 1246120, 1525057, and 1413739. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation nor the US Department of Education. Have questions or comments? For information about adoptions or adaptions contact [email protected]. More information on our activities can be found via Facebook (https://facebook.com/Libretexts), Twitter (https://twitter.com/libretexts), or our blog (http://Blog.Libretexts.org). This text was compiled on 10/01/2023

TABLE OF CONTENTS Licensing

1: Introduction 1.0: Introduction 1.1: Examples 1.2: Ordinary Differential Equations 1.3: Partial Differential Equations 1.E: Introduction (Exercises)

2: Equations of First Order 2.0: Prelude to First Order Equations 2.1: Linear Equations 2.2: Quasilinear Equations 2.2.1: A Linearization Method 2.2.2: Initial Value Problem of Cauchy 2.3: Nonlinear Equations in Two Variables 2.3.1: Initial Value Problem of Cauchy 2.4: Nonlinear Equations in \(\mathbb{R}^n\) 2.5: Hamilton-Jacobi Theory 2.E: Equations of First Order (Exercises)

3: Classification 3.1: Linear Equations of Second Order 3.1.1: Normal Form in Two Variables 3.2: Quasilinear Equations of Second Order 3.2.1: Quasilinear Elliptic Equations 3.3: Systems of First Order 3.3.1: Examples 3.4: Systems of Second Order 3.4.1: Examples 3.5: Theorem of Cauchy-Kovalevskaya 3.5.1 Appendix: Real Analytic Functions 3.E: Classification (Exercises)

4: Hyperbolic Equations 4.1: One-Dimensional Wave Equation 4.2: Higher Dimensions 4.2.1: Case n=3 4.2.2: Case n=2 4.3: Inhomogeneous Equations 4.4: A Method of Riemann 4.5: Initial-Boundary Value Problems 4.5.1: Oscillation of a String 4.5.2: Oscillation of a Membrane 4.5.3: Inhomogeneous Wave Equations 4.E: Hyperbolic Equations (Exercises)

5: Fourier Transform 5.1: Definition and Properties 5.1.1: Pseudodifferential Operators 5.E: Fourier Transform (Exercises)

6: Parabolic Equations 6.1: Poisson's Formula 6.2: Inhomogeneous Heat Equation 6.3: Maximum Principle 6.4: Initial-Boundary Value Problems 6.4.1: Fourier's Method 6.4.2: Uniqueness 6.5: Black-Scholes Equation 6.E: Parabolic Equations (Exercises)

7: Elliptic Equations of Second Order 7.1: Fundamental Solution 7.2: Representation Formula 7.2.1: Conclusions from the Representation Formula 7.3.1: Boundary Value Problems: Dirichlet Problem 7.3.2: Boundary Value Problems: Neumann Problem 7.3.3: Boundary Value Problems: Mixed Boundary Value Problem 7.4: Green's Function for \(\Delta\) 7.4.1: Green's Function for a Ball 7.4.2: Green's Function and Conformal Mapping 7.5: Inhomogeneous Equation 7.E: Elliptic Equations of Second Order (Exercises)

Bibliography Index Glossary Detailed Licensing


Licensing A detailed breakdown of this resource's licensing can be found in Back Matter/Detailed Licensing.


CHAPTER OVERVIEW 1: Introduction Topic hierarchy 1.0: Introduction 1.1: Examples 1.2: Ordinary Differential Equations 1.3: Partial Differential Equations 1.E: Introduction (Exercises) Thumbnail: Visualization of heat transfer in a pump casing, created by solving the heat equation. Heat is being generated internally in the casing and being cooled at the boundary, providing a steady state temperature distribution. (CC-BY-SA-3.0; Wikipedia A1). This page titled 1: Introduction is shared under a not declared license and was authored, remixed, and/or curated by Erich Miersemann.


1.0: Introduction Preface These lecture notes are intended as a straightforward introduction to partial differential equations which can serve as a textbook for undergraduate and beginning graduate students. For additional reading we recommend the following books: W. I. Smirnov [21], I. G. Petrowski [17], P. R. Garabedian [8], W. A. Strauss [23], F. John [10], L. C. Evans [5], R. Courant and D. Hilbert [4], and D. Gilbarg and N. S. Trudinger [9]. Some material of these lecture notes was taken from some of these books.

Introduction

Ordinary and partial differential equations occur in many applications. An ordinary differential equation is a special case of a partial differential equation, but the behavior of solutions is quite different in general, caused by the fact that the functions we are looking for are functions of more than one independent variable.

The equation
$$F(x,y(x),y'(x),\ldots,y^{(n)}(x))=0$$
is an ordinary differential equation of $n$-th order for the unknown function $y(x)$, where $F$ is given.

An important problem for ordinary differential equations is the initial value problem
$$y'(x)=f(x,y(x)),\qquad y(x_0)=y_0,$$
where $f$ is a given real function of two variables $x,\ y$ and $x_0,\ y_0$ are given real numbers.

Picard-Lindelöf Theorem. Suppose
(i) $f(x,y)$ is continuous in a rectangle
$$Q=\{(x,y)\in\mathbb{R}^2:\ |x-x_0|<a,\ |y-y_0|<b\}. \tag{1.0.1}$$
(ii) There is a constant $K$ such that $|f(x,y)|\le K$ for all $(x,y)\in Q$.
(iii) Lipschitz condition: There is a constant $L$ such that
$$|f(x,y_2)-f(x,y_1)|\le L|y_2-y_1| \tag{1.0.2}$$
for all $(x,y_1),\ (x,y_2)\in Q$.

Figure 1.0.1: Initial value problem

Then there exists a unique solution $y\in C^1(x_0-\alpha,x_0+\alpha)$ of the above initial value problem, where $\alpha=\min(b/K,a)$.

The linear ordinary differential equation
$$y^{(n)}+a_{n-1}(x)y^{(n-1)}+\ldots+a_1(x)y'+a_0(x)y=0,$$
where the $a_j$ are continuous functions, has exactly $n$ linearly independent solutions. In contrast to this property, the partial differential equation $u_{xx}+u_{yy}=0$ in $\mathbb{R}^2$ has infinitely many linearly independent solutions in the linear space $C^2(\mathbb{R}^2)$.

The ordinary differential equation of second order
$$y''(x)=f(x,y(x),y'(x))$$
has in general a family of solutions with two free parameters. Thus it is natural to consider the associated initial value problem
$$y''(x)=f(x,y(x),y'(x)),\qquad y(x_0)=y_0,\ y'(x_0)=y_1,$$
where $y_0$ and $y_1$ are given, or to consider the boundary value problem
$$y''(x)=f(x,y(x),y'(x)),\qquad y(x_0)=y_0,\ y(x_1)=y_1.$$

Figure 1.0.2: Boundary value problem

Initial and boundary value problems play an important role also in the theory of partial differential equations. A partial differential equation for the unknown function $u(x,y)$ is for example
$$F(x,y,u,u_x,u_y,u_{xx},u_{xy},u_{yy})=0,$$
where the function $F$ is given. This equation is of second order. An equation is said to be of $n$-th order if the highest derivative which occurs is of order $n$. An equation is said to be linear if the unknown function and its derivatives are linear in $F$. For example,
$$a(x,y)u_x+b(x,y)u_y+c(x,y)u=f(x,y),$$
where the functions $a$, $b$, $c$ and $f$ are given, is a linear equation of first order. An equation is said to be quasilinear if it is linear in the highest derivatives. For example,
$$a(x,y,u,u_x,u_y)u_{xx}+b(x,y,u,u_x,u_y)u_{xy}+c(x,y,u,u_x,u_y)u_{yy}=0$$
is a quasilinear equation of second order.
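The existence part of the Picard-Lindelöf theorem rests on the fixed-point iteration $y_{k+1}(x)=y_0+\int_{x_0}^xf(s,y_k(s))\,ds$. The following short Python sketch is not part of the original notes; the concrete right hand side $f(x,y)=y$ with $y(0)=1$ is only an illustrative choice, and the iteration is discretized with the trapezoidal rule.

```python
import numpy as np

# Picard iteration y_{k+1}(x) = y0 + int_{x0}^x f(s, y_k(s)) ds on a grid.
def picard(f, x0, y0, a=0.5, n_iter=8, n_grid=201):
    x = np.linspace(x0, x0 + a, n_grid)
    y = np.full_like(x, y0)                  # start with the constant function y_0
    for _ in range(n_iter):
        integrand = f(x, y)
        integral = np.concatenate(([0.0], np.cumsum(
            0.5 * (integrand[1:] + integrand[:-1]) * np.diff(x))))
        y = y0 + integral                    # next Picard iterate
    return x, y

# Illustrative example: f(x, y) = y, y(0) = 1, exact solution exp(x).
x, y = picard(lambda x, y: y, 0.0, 1.0)
print(np.max(np.abs(y - np.exp(x))))         # small after a few iterations
```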

Contributors and Attributions Prof. Dr. Erich Miersemann (Universität Leipzig) Integrated by Justin Marshall. This page titled 1.0: Introduction is shared under a not declared license and was authored, remixed, and/or curated by Erich Miersemann.


1.1: Examples

Example 1.1.1: $u_y=0$, where $u=u(x,y)$. All functions $u=w(x)$ are solutions.

Example 1.1.2: $u_x=u_y$, where $u=u(x,y)$. A change of coordinates transforms this equation into an equation of the first example. Set $\xi=x+y$, $\eta=x-y$, then
$$u(x,y)=u\left(\frac{\xi+\eta}{2},\frac{\xi-\eta}{2}\right)=:v(\xi,\eta). \tag{1.1.1}$$
Assume $u\in C^1$, then
$$v_\eta=\frac{1}{2}(u_x-u_y). \tag{1.1.2}$$
If $u_x=u_y$, then $v_\eta=0$ and vice versa, thus $v=w(\xi)$ are solutions for arbitrary $C^1$-functions $w(\xi)$. Consequently, we have a large class of solutions of the original partial differential equation: $u=w(x+y)$ with an arbitrary $C^1$-function $w$.

Example 1.1.3: A necessary and sufficient condition such that for given $C^1$-functions $M,\ N$ the integral
$$\int_{P_0}^{P_1}M(x,y)dx+N(x,y)dy \tag{1.1.3}$$
is independent of the curve which connects the points $P_0$ with $P_1$ in a simply connected domain $\Omega\subset\mathbb{R}^2$ is the partial differential equation (condition of integrability)
$$M_y=N_x \tag{1.1.4}$$
in $\Omega$.

Figure 1.1.1: Independence of the path

This is one equation for two functions. A large class of solutions is given by $M=\Phi_x,\ N=\Phi_y$, where $\Phi(x,y)$ is an arbitrary $C^2$-function. It follows from Gauss theorem that these are all $C^1$-solutions of the above differential equation.

Example 1.1.4: Method of an integrating multiplier for an ordinary differential equation. Consider the ordinary differential equation
$$M(x,y)dx+N(x,y)dy=0 \tag{1.1.5}$$
for given $C^1$-functions $M,\ N$. Then we seek a $C^1$-function $\mu(x,y)$ such that $\mu Mdx+\mu Ndy$ is a total differential, i. e., that $(\mu M)_y=(\mu N)_x$ is satisfied. This is a linear partial differential equation of first order for $\mu$:
$$M\mu_y-N\mu_x=\mu(N_x-M_y).$$

Example 1.1.5: Two $C^1$-functions $u(x,y)$ and $v(x,y)$ are said to be functionally dependent if
$$\det\begin{pmatrix}u_x&u_y\\v_x&v_y\end{pmatrix}=0, \tag{1.1.6}$$
which is a linear partial differential equation of first order for $u$ if $v$ is a given $C^1$-function. A large class of solutions is given by
$$u=H(v(x,y)), \tag{1.1.7}$$
where $H$ is an arbitrary $C^1$-function.

Example 1.1.6: Cauchy-Riemann equations. Set $f(z)=u(x,y)+iv(x,y)$, where $z=x+iy$ and $u,\ v$ are given $C^1(\Omega)$-functions. Here $\Omega$ is a domain in $\mathbb{R}^2$. If the function $f(z)$ is differentiable with respect to the complex variable $z$, then $u,\ v$ satisfy the Cauchy-Riemann equations
$$u_x=v_y,\qquad u_y=-v_x. \tag{1.1.8}$$
It is known from the theory of functions of one complex variable that the real part $u$ and the imaginary part $v$ of a differentiable function $f(z)$ are solutions of the Laplace equation
$$\triangle u=0,\qquad\triangle v=0, \tag{1.1.9}$$
where $\triangle u=u_{xx}+u_{yy}$.

Example 1.1.7: Newton potential. The Newton potential
$$u=\frac{1}{\sqrt{x^2+y^2+z^2}} \tag{1.1.10}$$
is a solution of the Laplace equation in $\mathbb{R}^3\setminus(0,0,0)$, that is, of
$$u_{xx}+u_{yy}+u_{zz}=0.$$

Example 1.1.8: Heat equation. Let $u(x,t)$ be the temperature of a point $x\in\Omega$ at time $t$, where $\Omega\subset\mathbb{R}^3$ is a domain. Then $u(x,t)$ satisfies in $\Omega\times[0,\infty)$ the heat equation
$$u_t=k\triangle u, \tag{1.1.11}$$
where $\triangle u=u_{x_1x_1}+u_{x_2x_2}+u_{x_3x_3}$ and $k$ is a positive constant. The condition
$$u(x,0)=u_0(x),\quad x\in\Omega, \tag{1.1.12}$$
where $u_0(x)$ is given, is an initial condition associated to the above heat equation. The condition
$$u(x,t)=h(x,t),\quad x\in\partial\Omega,\ t\ge0, \tag{1.1.13}$$
where $h(x,t)$ is given, is a boundary condition for the heat equation. If $h(x,t)=g(x)$, that is, $h$ is independent of $t$, then one expects that the solution $u(x,t)$ tends to a function $v(x)$ if $t\to\infty$. Moreover, it turns out that $v$ is the solution of the boundary value problem for the Laplace equation
$$\triangle v=0\ \mbox{in}\ \Omega,\qquad v=g(x)\ \mbox{on}\ \partial\Omega.$$

Example 1.1.9: Wave equation.

Figure 1.1.2: Oscillating string

The wave equation
$$u_{tt}=c^2\triangle u, \tag{1.1.14}$$
where $u=u(x,t)$ and $c$ is a positive constant, describes oscillations of membranes or of three dimensional domains, for example. In the one-dimensional case
$$u_{tt}=c^2u_{xx} \tag{1.1.15}$$
describes oscillations of a string. Associated initial conditions are
$$u(x,0)=u_0(x),\quad u_t(x,0)=u_1(x), \tag{1.1.16}$$
where $u_0,\ u_1$ are given functions. Thus the initial position and the initial velocity are prescribed. If the string is finite one describes additionally boundary conditions, for example
$$u(0,t)=0,\quad u(l,t)=0\quad\mbox{for all}\ t\ge0.$$
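A quick symbolic cross-check of Examples 1.1.2 and 1.1.7 (not part of the original notes; it only re-verifies the two closed-form claims with sympy):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

# Example 1.1.2: u = w(x + y) solves u_x = u_y for an arbitrary C^1-function w.
w = sp.Function('w')
u = w(x + y)
print(sp.simplify(sp.diff(u, x) - sp.diff(u, y)))    # 0

# Example 1.1.7: the Newton potential is harmonic away from the origin.
v = 1 / sp.sqrt(x**2 + y**2 + z**2)
laplace_v = sp.diff(v, x, 2) + sp.diff(v, y, 2) + sp.diff(v, z, 2)
print(sp.simplify(laplace_v))                        # 0
```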

Contributors and Attributions Prof. Dr. Erich Miersemann (Universität Leipzig)


Integrated by Justin Marshall. This page titled 1.1: Examples is shared under a not declared license and was authored, remixed, and/or curated by Erich Miersemann.


1.2: Ordinary Differential Equations

Set
$$E(v)=\int_a^bf(x,v(x),v'(x))\ dx$$
and for given $u_a,\ u_b\in\mathbb{R}$
$$V=\{v\in C^2[a,b]:\ v(a)=u_a,\ v(b)=u_b\},$$
where $-\infty<a<b<\infty$ and $f$ is sufficiently regular. One of the basic problems in the calculus of variations is

(P) $\quad\min_{v\in V}E(v)$.

Euler equation. Let $u\in V$ be a solution of (P), then
$$\frac{d}{dx}f_{u'}(x,u(x),u'(x))=f_u(x,u(x),u'(x))$$
in $(a,b)$.

Proof. For fixed $\phi\in C^2[a,b]$ with $\phi(a)=\phi(b)=0$ and real $\epsilon$, $|\epsilon|<\epsilon_0$, set $g(\epsilon)=E(u+\epsilon\phi)$. Since $g(0)\le g(\epsilon)$ it follows $g'(0)=0$. Integration by parts in the formula for $g'(0)$ and the following basic lemma in the calculus of variations imply Euler's equation.

Figure 1.2.1.1: Admissible variations

Basic lemma in the calculus of variations. Let $h\in C(a,b)$ and
$$\int_a^bh(x)\phi(x)\ dx=0 \tag{1.2.1}$$
for all $\phi\in C_0^1(a,b)$. Then $h(x)\equiv0$ on $(a,b)$.

Proof. Assume $h(x_0)>0$ for an $x_0\in(a,b)$, then there is a $\delta>0$ such that $(x_0-\delta,x_0+\delta)\subset(a,b)$ and $h(x)\ge h(x_0)/2$ on $(x_0-\delta,x_0+\delta)$. Set
$$\phi(x)=\left\{\begin{array}{cl}\left(\delta^2-|x-x_0|^2\right)^2&x\in(x_0-\delta,x_0+\delta)\\0&x\in(a,b)\setminus[x_0-\delta,x_0+\delta]\end{array}\right. . \tag{1.2.2}$$
Thus $\phi\in C_0^1(a,b)$ and
$$\int_a^bh(x)\phi(x)\ dx\ge\frac{h(x_0)}{2}\int_{x_0-\delta}^{x_0+\delta}\phi(x)\ dx>0,$$
which is a contradiction to the assumption of the lemma. $\Box$
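As a small illustration of the Euler equation above (not in the original text; the integrand $f=v'^2+v^2$ is an arbitrary choice), the rule $\frac{d}{dx}f_{u'}=f_u$ can be reproduced with sympy for a concrete functional:

```python
import sympy as sp

x = sp.symbols('x')
u = sp.Function('u')

# Integrand f(x, v, v'); the concrete choice f = v'^2 + v^2 is only illustrative.
v, vp = sp.symbols('v vp')
f = vp**2 + v**2

# Euler equation: d/dx f_{u'}(x, u, u') - f_u(x, u, u') = 0
f_u  = sp.diff(f, v).subs({v: u(x), vp: sp.diff(u(x), x)})
f_up = sp.diff(f, vp).subs({v: u(x), vp: sp.diff(u(x), x)})
euler = sp.Eq(sp.diff(f_up, x) - f_u, 0)
print(euler)                      # 2*u''(x) - 2*u(x) = 0
print(sp.dsolve(euler, u(x)))     # u(x) = C1*exp(-x) + C2*exp(x)
```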

Contributors and Attributions Prof. Dr. Erich Miersemann (Universität Leipzig) Integrated by Justin Marshall. This page titled 1.2: Ordinary Differential Equations is shared under a not declared license and was authored, remixed, and/or curated by Erich Miersemann.


1.3: Partial Differential Equations

The same procedure as above applied to the following multiple integral leads to a second-order quasilinear partial differential equation. Set
$$E(v)=\int_\Omega F(x,v,\nabla v)\ dx,$$
where $\Omega\subset\mathbb{R}^n$ is a domain, $x=(x_1,\ldots,x_n)$, $v=v(x):\ \Omega\mapsto\mathbb{R}^1$, and $\nabla v=(v_{x_1},\ldots,v_{x_n})$. Assume that the function $F$ is sufficiently regular in its arguments. For a given function $h$, defined on $\partial\Omega$, set
$$V=\{v\in C^2(\overline{\Omega}):\ v=h\ \mbox{on}\ \partial\Omega\}.$$

Euler equation. Let $u\in V$ be a solution of (P), then
$$\sum_{i=1}^n\frac{\partial}{\partial x_i}F_{u_{x_i}}-F_u=0 \tag{1.3.1}$$
in $\Omega$.

Proof. Exercise. Hint: Extend the above fundamental lemma of the calculus of variations to the case of multiple integrals. The interval $(x_0-\delta,x_0+\delta)$ in the definition of $\phi$ must be replaced by a ball with center at $x_0$ and radius $\delta$.

Example 1.2.2.1: Dirichlet integral. In two dimensions the Dirichlet integral is given by
$$D(v)=\int_\Omega\left(v_x^2+v_y^2\right)\ dxdy$$
and the associated Euler equation is the Laplace equation $\triangle u=0$ in $\Omega$. Thus, there is a natural relationship between the boundary value problem
$$\triangle u=0\ \mbox{in}\ \Omega,\qquad u=h\ \mbox{on}\ \partial\Omega$$
and the variational problem
$$\min_{v\in V}D(v).$$
But these problems are not equivalent in general. It can happen that the boundary value problem has a solution but the variational problem has no solution; for an example see Courant and Hilbert [4], Vol. 1, p. 155, where $h$ is a continuous function and the associated solution $u$ of the boundary value problem has no finite Dirichlet integral. The problems are equivalent, provided the given boundary value function $h$ is in the class $H^{1/2}(\partial\Omega)$, see Lions and Magenes [14].

Example 1.2.2.2: Minimal surface equation. The non-parametric minimal surface problem in two dimensions is to find a minimizer $u=u(x_1,x_2)$ of the problem
$$\min_{v\in V}\int_\Omega\sqrt{1+v_{x_1}^2+v_{x_2}^2}\ dx,$$
where for a given function $h$ defined on the boundary of the domain $\Omega$
$$V=\{v\in C^1(\overline{\Omega}):\ v=h\ \mbox{on}\ \partial\Omega\}.$$

Figure 1.2.2.1: Comparison surface

Suppose that the minimizer satisfies the regularity assumption $u\in C^2(\Omega)$, then $u$ is a solution of the minimal surface equation (Euler equation) in $\Omega$
$$\frac{\partial}{\partial x_1}\left(\frac{u_{x_1}}{\sqrt{1+|\nabla u|^2}}\right)+\frac{\partial}{\partial x_2}\left(\frac{u_{x_2}}{\sqrt{1+|\nabla u|^2}}\right)=0. \tag{1.3.2}$$
In fact, the additional assumption $u\in C^2(\Omega)$ is superfluous since it follows from regularity considerations for quasilinear elliptic equations of second order, see for example Gilbarg and Trudinger [9].

Let $\Omega=\mathbb{R}^2$. Each linear function is a solution of the minimal surface equation (1.3.2). It was shown by Bernstein [2] that there are no other solutions of the minimal surface equation. This is true also for higher dimensions $n\le7$, see Simons [19]. If $n\ge8$, then there exist also other solutions which define cones, see Bombieri, De Giorgi and Giusti [3].

The linearized minimal surface equation over $u\equiv0$ is the Laplace equation $\triangle u=0$. In $\mathbb{R}^2$ linear functions are solutions, but also many other functions, in contrast to the minimal surface equation. This striking difference is caused by the strong nonlinearity of the minimal surface equation.

More general minimal surfaces are described by using parametric representations. An example is shown in Figure 1.2.2.2. See [18], pp. 62, for example, for rotationally symmetric minimal surfaces.

Figure 1.2.2.2: Rotationally symmetric minimal surface (an experiment from Beutelspacher's Mathematikum, Wissenschaftsjahr 2008, Leipzig)

Neumann type boundary value problems. Set $V=C^1(\overline{\Omega})$ and
$$E(v)=\int_\Omega F(x,v,\nabla v)\ dx-\int_{\partial\Omega}g(x,v)\ ds,$$
where $F$ and $g$ are given sufficiently regular functions and $\Omega\subset\mathbb{R}^n$ is a bounded and sufficiently regular domain. Assume $u$ is a minimizer of $E(v)$ in $V$, that is
$$u\in V:\quad E(u)\le E(v)\ \mbox{for all}\ v\in V,$$
then
$$\int_\Omega\left(\sum_{i=1}^nF_{u_{x_i}}(x,u,\nabla u)\phi_{x_i}+F_u(x,u,\nabla u)\phi\right)dx-\int_{\partial\Omega}g_u(x,u)\phi\ ds=0$$
for all $\phi\in C^1(\overline{\Omega})$. Assume additionally $u\in C^2(\Omega)$, then $u$ is a solution of the Neumann type boundary value problem
$$\sum_{i=1}^n\frac{\partial}{\partial x_i}F_{u_{x_i}}-F_u=0\ \mbox{in}\ \Omega$$
$$\sum_{i=1}^nF_{u_{x_i}}\nu_i-g_u=0\ \mbox{on}\ \partial\Omega,$$
where $\nu=(\nu_1,\ldots,\nu_n)$ is the exterior unit normal at the boundary $\partial\Omega$. This follows after integration by parts from the basic lemma of the calculus of variations.

Example 1.2.2.3: Laplace equation. Set
$$E(v)=\frac{1}{2}\int_\Omega|\nabla v|^2\ dx-\int_{\partial\Omega}h(x)v\ ds,$$
then the associated boundary value problem is
$$\triangle u=0\ \mbox{in}\ \Omega,\qquad\frac{\partial u}{\partial\nu}=h\ \mbox{on}\ \partial\Omega.$$

Example 1.2.2.4: Capillary equation. Let $\Omega\subset\mathbb{R}^2$ and set
$$E(v)=\int_\Omega\sqrt{1+|\nabla v|^2}\ dx+\frac{\kappa}{2}\int_\Omega v^2\ dx-\cos\gamma\int_{\partial\Omega}v\ ds.$$
Here $\kappa$ is a positive constant (capillarity constant) and $\gamma$ is the (constant) boundary contact angle, i. e., the angle between the container wall and the capillary surface, defined by $v=v(x_1,x_2)$, at the boundary. Then the related boundary value problem is
$$\mbox{div}(Tu)=\kappa u\ \mbox{in}\ \Omega,\qquad\nu\cdot Tu=\cos\gamma\ \mbox{on}\ \partial\Omega,$$
where we use the abbreviation
$$Tu=\frac{\nabla u}{\sqrt{1+|\nabla u|^2}};$$
$\mbox{div}(Tu)$ is the left hand side of the minimal surface equation (1.3.2) and it is twice the mean curvature of the surface defined by $z=u(x_1,x_2)$, see an exercise.

The above problem describes the ascent of a liquid, water for example, in a vertical cylinder with cross section $\Omega$. Assume the gravity is directed downwards in the direction of the negative $x_3$-axis. Figure 1.2.2.3 shows that liquid can rise along a vertical wedge, which is a consequence of the strong non-linearity of the underlying equations, see Finn [7]. This photo was taken from [15].

Figure 1.2.2.3: Ascent of liquid in a wedge
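A short sympy computation (not part of the original text) expands $\mbox{div}(Tu)$, the left hand side of the minimal surface equation (1.3.2), into the quasilinear second-order form asked for in Exercise Q1.5:

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
u = sp.Function('u')(x1, x2)

W = sp.sqrt(1 + sp.diff(u, x1)**2 + sp.diff(u, x2)**2)
# div(Tu) with Tu = grad(u)/W, i.e. the left hand side of (1.3.2)
lhs = sp.diff(sp.diff(u, x1) / W, x1) + sp.diff(sp.diff(u, x2) / W, x2)

# Multiplying by W^3 exhibits the quasilinear form; mathematically this equals
# (1 + u_{x2}^2) u_{x1 x1} - 2 u_{x1} u_{x2} u_{x1 x2} + (1 + u_{x1}^2) u_{x2 x2}
quasilinear = sp.simplify(sp.expand(lhs * W**3))
print(quasilinear)
```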

Contributors and Attributions Prof. Dr. Erich Miersemann (Universität Leipzig) Integrated by Justin Marshall. This page titled 1.3: Partial Differential Equations is shared under a not declared license and was authored, remixed, and/or curated by Erich Miersemann.


1.E: Introduction (Exercises)

These are homework exercises to accompany Miersemann's "Partial Differential Equations" Textmap. This is a textbook targeted for a one semester first course on differential equations, aimed at engineering students. Partial differential equations are differential equations that contain unknown multivariable functions and their partial derivatives. Prerequisite for the course is the basic calculus sequence.

Q1.1 Find nontrivial solutions $u$ of
$$u_xy-u_yx=0.$$

Q1.2 Prove: In the linear space $C^2(\mathbb{R}^2)$ there are infinitely many linearly independent solutions of $\triangle u=0$ in $\mathbb{R}^2$.
Hint: Real and imaginary part of holomorphic functions are solutions of the Laplace equation.

Q1.3 Find all radially symmetric functions which satisfy the Laplace equation in $\mathbb{R}^n\setminus\{0\}$ for $n\ge2$. A function $u$ is said to be radially symmetric if $u(x)=f(r)$, where $r=\left(\sum_{i=1}^nx_i^2\right)^{1/2}$.
Hint: Show that a radially symmetric $u$ satisfies $\triangle u=r^{1-n}\left(r^{n-1}f'\right)'$ by using $\nabla u(x)=f'(r)\frac{x}{r}$.

Q1.4 Prove the basic lemma in the calculus of variations: Let $\Omega\subset\mathbb{R}^n$ be a domain and $f\in C(\Omega)$ such that
$$\int_\Omega f(x)h(x)\ dx=0$$
for all $h\in C_0^2(\Omega)$. Then $f\equiv0$ in $\Omega$.

Q1.5 Write the minimal surface equation (1.3.2) as a quasilinear equation of second order.

Q1.6 Prove that a sufficiently regular minimizer in $C^1(\overline{\Omega})$ of
$$E(v)=\int_\Omega F(x,v,\nabla v)\ dx-\int_{\partial\Omega}g(x,v)\ ds$$
is a solution of the boundary value problem
$$\sum_{i=1}^n\frac{\partial}{\partial x_i}F_{u_{x_i}}-F_u=0\ \mbox{in}\ \Omega$$
$$\sum_{i=1}^nF_{u_{x_i}}\nu_i-g_u=0\ \mbox{on}\ \partial\Omega,$$
where $\nu=(\nu_1,\ldots,\nu_n)$ is the exterior unit normal at the boundary $\partial\Omega$.

Q1.7 Prove that $\nu\cdot Tu=\cos\gamma$ on $\partial\Omega$, where $\gamma$ is the angle between the container wall, which is here a cylinder, and the surface $S$, defined by $z=u(x_1,x_2)$, at the boundary of $S$, and $\nu$ is the exterior normal at $\partial\Omega$.
Hint: The angle between two surfaces is by definition the angle between the two associated normals at the intersection of the surfaces.

Q1.8 Let $\Omega$ be bounded and assume $u\in C^2(\overline{\Omega})$ is a solution of
$$\mbox{div}\ Tu=C\ \mbox{in}\ \Omega,\qquad\nu\cdot\frac{\nabla u}{\sqrt{1+|\nabla u|^2}}=\cos\gamma\ \mbox{on}\ \partial\Omega,$$
where $C$ is a constant. Prove that
$$C=\frac{|\partial\Omega|}{|\Omega|}\cos\gamma.$$
Hint: Integrate the differential equation over $\Omega$.

Q1.9 Assume $\Omega=B_R(0)$ is a disc with radius $R$ and the center at the origin. Show that radially symmetric solutions $u(x)=w(r)$, $r=\sqrt{x_1^2+x_2^2}$, of the capillary boundary value problem are solutions of
$$\left(\frac{rw'}{\sqrt{1+w'^2}}\right)'=\kappa rw\ \ \mbox{in}\ 0<r<R$$
$$\frac{w'}{\sqrt{1+w'^2}}=\cos\gamma\ \ \mbox{if}\ r=R.$$
Remark. It follows from a maximum principle of Concus and Finn [7] that a solution of the capillary equation over a disc must be radially symmetric.

Q1.10 Find all radially symmetric solutions of
$$\left(\frac{rw'}{\sqrt{1+w'^2}}\right)'=Cr\ \ \mbox{in}\ 0<r<R$$
$$\frac{w'}{\sqrt{1+w'^2}}=\cos\gamma\ \ \mbox{if}\ r=R.$$
Hint: From an exercise above it follows that
$$C=\frac{2}{R}\cos\gamma.$$

Contributors and Attributions Prof. Dr. Erich Miersemann (Universität Leipzig) Integrated by Justin Marshall.


This page titled 1.E: Introduction (Exercises) is shared under a not declared license and was authored, remixed, and/or curated by Erich Miersemann. 1.E: Introduction (Exercises) has no license indicated.


CHAPTER OVERVIEW 2: Equations of First Order The main tool for studying related problems is the theory of ordinary differential equations. This is quite different for systems of partial differential equations of first order. Topic hierarchy 2.0: Prelude to First Order Equations 2.1: Linear Equations 2.2: Quasilinear Equations 2.2.1: A Linearization Method 2.2.2: Initial Value Problem of Cauchy 2.3: Nonlinear Equations in Two Variables 2.3.1: Initial Value Problem of Cauchy 2.4: Nonlinear Equations in \(\mathbb{R}^n\) 2.5: Hamilton-Jacobi Theory 2.E: Equations of First Order (Exercises) Thumbnail: Sir William Rowan Hamilton. Sir William Rowan Hamilton was an Irish physicist, astronomer, and mathematician, who made important contributions to classical mechanics, optics, and algebra. His studies of mechanical and optical systems led him to discover new mathematical concepts and techniques. His best known contribution to mathematical physics is the reformulation of Newtonian mechanics, now called Hamiltonian mechanics.

Contributors and Attributions Prof. Dr. Erich Miersemann (Universität Leipzig) Integrated by Justin Marshall. This page titled 2: Equations of First Order is shared under a not declared license and was authored, remixed, and/or curated by Erich Miersemann.


2.0: Prelude to First Order Equations

For a given sufficiently regular function $F$ the general equation of first order for the unknown function $u(x)$ is
$$F(x,u,\nabla u)=0$$
in $\Omega\subset\mathbb{R}^n$. The main tool for studying related problems is the theory of ordinary differential equations. This is quite different for systems of partial differential equations of first order.

The general linear partial differential equation of first order can be written as
$$\sum_{i=1}^na_i(x)u_{x_i}+c(x)u=f(x)$$
for given functions $a_i,\ c$ and $f$. The general quasilinear partial differential equation of first order is
$$\sum_{i=1}^na_i(x,u)u_{x_i}+c(x,u)=0.$$

Contributors and Attributions Prof. Dr. Erich Miersemann (Universität Leipzig) Integrated by Justin Marshall. This page titled 2.0: Prelude to First Order Equations is shared under a not declared license and was authored, remixed, and/or curated by Erich Miersemann.


2.1: Linear Equations

Let us begin with the linear homogeneous equation
$$a_1(x,y)u_x+a_2(x,y)u_y=0. \tag{2.1.1}$$
Assume there is a $C^1$-solution $z=u(x,y)$. This function defines a surface $S$ which has at $P=(x,y,u(x,y))$ the normal
$$N=\frac{1}{\sqrt{1+|\nabla u|^2}}(-u_x,-u_y,1) \tag{2.1.2}$$
and the tangential plane defined by
$$\zeta-z=u_x(x,y)(\xi-x)+u_y(x,y)(\eta-y). \tag{2.1.3}$$
Set $p=u_x(x,y)$, $q=u_y(x,y)$ and $z=u(x,y)$. The tuple $(x,y,z,p,q)$ is called surface element and the tuple $(x,y,z)$ support of the surface element. The tangential plane is defined by the surface element. On the other hand, differential equation (2.1.1)
$$a_1(x,y)p+a_2(x,y)q=0 \tag{2.1.4}$$
defines at each support $(x,y,z)$ a bundle of planes if we consider all $(p,q)$ satisfying this equation. For fixed $(x,y)$, this family of planes $\Pi(\lambda)=\Pi(\lambda;x,y)$ is defined by a one parameter family of ascents $p(\lambda)=p(\lambda;x,y)$, $q(\lambda)=q(\lambda;x,y)$. The envelope of these planes is a line since
$$a_1(x,y)p(\lambda)+a_2(x,y)q(\lambda)=0,$$
which implies that the normal $N(\lambda)$ on $\Pi(\lambda)$ is perpendicular on $(a_1,a_2,0)$.

Consider a curve $x(\tau)=(x(\tau),y(\tau),z(\tau))$ on $S$, let $T_{x_0}$ be the tangential plane at $x_0=(x(\tau_0),y(\tau_0),z(\tau_0))$ of $S$ and consider on $T_{x_0}$ the line
$$L:\ l(\sigma)=x_0+\sigma x'(\tau_0),\quad\sigma\in\mathbb{R}^1,$$
see Figure 2.1.1.

Figure 2.1.1: Curve on a surface

We assume $L$ coincides with the envelope, which is a line here, of the family of planes $\Pi(\lambda)$ at $(x,y,z)$. Assume that $T_{x_0}=\Pi(\lambda_0)$ and consider the two planes
$$\Pi(\lambda_0):\ z-z_0=(x-x_0)p(\lambda_0)+(y-y_0)q(\lambda_0)$$
$$\Pi(\lambda_0+h):\ z-z_0=(x-x_0)p(\lambda_0+h)+(y-y_0)q(\lambda_0+h).$$
At the intersection $l(\sigma)$ we have
$$(x-x_0)p(\lambda_0)+(y-y_0)q(\lambda_0)=(x-x_0)p(\lambda_0+h)+(y-y_0)q(\lambda_0+h).$$
Thus,
$$x'(\tau_0)p'(\lambda_0)+y'(\tau_0)q'(\lambda_0)=0.$$
From the differential equation
$$a_1(x(\tau_0),y(\tau_0))p(\lambda)+a_2(x(\tau_0),y(\tau_0))q(\lambda)=0$$
it follows
$$a_1p'(\lambda_0)+a_2q'(\lambda_0)=0.$$
Consequently
$$(x'(\tau),y'(\tau))=\frac{x'(\tau)}{a_1(x(\tau),y(\tau))}\left(a_1(x(\tau),y(\tau)),a_2(x(\tau),y(\tau))\right),$$
since $\tau$ was an arbitrary parameter. Here we assume that $x'(\tau)\not=0$ and $a_1(x(\tau),y(\tau))\not=0$.

Then we introduce a new parameter $t$ by the inverse of $\tau=\tau(t)$, where
$$t(\tau)=\int_{\tau_0}^\tau\frac{x'(s)}{a_1(x(s),y(s))}\ ds.$$
It follows $x'(t)=a_1(x,y)$, $y'(t)=a_2(x,y)$. We denote $x(\tau(t))$ by $x(t)$ again.

Now we consider the initial value problem
$$x'(t)=a_1(x,y),\quad y'(t)=a_2(x,y),\quad x(0)=x_0,\ y(0)=y_0. \tag{2.1.13}$$
From the theory of ordinary differential equations it follows (Theorem of Picard-Lindelöf) that there is a unique solution in a neighbourhood of $t=0$ provided the functions $a_1,\ a_2$ are in $C^1$. From this definition of the curves $(x(t),y(t))$ it follows that the field of directions $(a_1(x_0,y_0),a_2(x_0,y_0))$ defines the slope of these curves at $(x(0),y(0))$.

Definition. The differential equations in (2.1.13) are called characteristic equations or characteristic system, and solutions of the associated initial value problem are called characteristic curves.

Definition. A function $\phi(x,y)$ is said to be an integral of the characteristic system if $\phi(x(t),y(t))=const.$ for each characteristic curve. The constant depends on the characteristic curve considered.

Proposition 2.1. Assume $\phi\in C^1$ is an integral, then $u=\phi(x,y)$ is a solution of (2.1.1).

Proof. Consider for given $(x_0,y_0)$ the above initial value problem (2.1.13). Since $\phi(x(t),y(t))=const.$ it follows
$$\phi_xx'+\phi_yy'=0$$
for $|t|<t_0$, $t_0>0$ and sufficiently small. Thus
$$\phi_x(x_0,y_0)a_1(x_0,y_0)+\phi_y(x_0,y_0)a_2(x_0,y_0)=0.$$
$\Box$

Remark. If $\phi(x,y)$ is a solution of equation (2.1.1) then also $H(\phi(x,y))$, where $H(s)$ is a given $C^1$-function.

Examples

Example 2.1.1: Consider
$$a_1u_x+a_2u_y=0,$$
where $a_1,\ a_2$ are constants. The system of characteristic equations is
$$x'=a_1,\quad y'=a_2.$$
Thus the characteristic curves are parallel straight lines defined by
$$x=a_1t+A,\quad y=a_2t+B,$$
where $A,\ B$ are arbitrary constants. From these equations it follows that
$$\phi(x,y):=a_2x-a_1y$$
is constant along each characteristic curve. Consequently, see Proposition 2.1, $u=a_2x-a_1y$ is a solution of the differential equation. From an exercise it follows that
$$u=H(a_2x-a_1y), \tag{2.1.19}$$
where $H(s)$ is an arbitrary $C^1$-function, is also a solution. Since $u$ is constant when $a_2x-a_1y$ is constant, equation (2.1.19) defines cylinder surfaces which are generated by parallel straight lines which are parallel to the $(x,y)$-plane, see Figure 2.1.2.

Figure 2.1.2: Cylinder surfaces

Example 2.1.2: Consider the differential equation
$$xu_x+yu_y=0.$$
The characteristic equations are
$$x'=x,\quad y'=y,$$
and the characteristic curves are given by
$$x=Ae^t,\quad y=Be^t,$$
where $A,\ B$ are arbitrary constants. Then an integral is $y/x$, $x\not=0$, and for a given $C^1$-function $H$ the function $u=H(y/x)$ is a solution of the differential equation. If $y/x=const.$, then $u$ is constant. Suppose that $H'(s)>0$, for example, then $u$ defines right helicoids (in German: Wendelflächen), see Figure 2.1.3.

Figure 2.1.3: Right helicoid

2.2.2: Initial Value Problem of Cauchy

Uniqueness. Suppose that $v(x,y)$ is a second solution of the initial value problem. Consider a point $(x',y')$ in a neighbourhood of the initial curve $(x_0(s),y_0(s))$, with $t'>0$ small. The inverse parameters are $s'=s(x',y')$, $t'=t(x',y')$, see Figure 2.2.2.2.

Figure 2.2.2.2: Uniqueness proof

Let
$$A:\ x(t):=x(s',t),\ y(t):=y(s',t),\ z(t):=z(s',t) \tag{2.2.2.5}$$
be the solution of the above initial value problem for the characteristic differential equations with the initial data
$$x(s',0)=x_0(s'),\ y(s',0)=y_0(s'),\ z(s',0)=z_0(s'). \tag{2.2.2.6}$$
According to its construction this curve is on the surface $S$ defined by $u=u(x,y)$ and $u(x',y')=z(s',t')$. Set
$$\psi(t):=v(x(t),y(t))-z(t), \tag{2.2.2.7}$$
then
$$\psi'(t)=v_xx'+v_yy'-z'=v_xa_1+v_ya_2-a_3=0$$
and
$$\psi(0)=v(x(s',0),y(s',0))-z(s',0)=0 \tag{2.2.2.8}$$
since $v$ is a solution of the differential equation and satisfies the initial condition by assumption. Thus, $\psi(t)\equiv0$, i. e.,
$$v(x(s',t),y(s',t))-z(s',t)=0. \tag{2.2.2.9}$$
Set $t=t'$, then
$$v(x',y')-z(s',t')=0, \tag{2.2.2.10}$$
which shows that $v(x',y')=u(x',y')$ because of $z(s',t')=u(x',y')$.

Remark. In general, there is no uniqueness if the initial curve $\Gamma$ is a characteristic curve, see an exercise and Figure 2.2.2.3, which illustrates this case.

Figure 2.2.2.3: Multiple solutions
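Before the examples, a small numerical illustration of the characteristic system (2.1.13) (not part of the original notes): for the equation $xu_x+yu_y=0$ of Example 2.1.2 the integral $\phi=y/x$ stays constant along each numerically integrated characteristic curve.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Characteristic system (2.1.13) for x u_x + y u_y = 0:  x' = x, y' = y.
def rhs(t, xy):
    x, y = xy
    return [x, y]

for x0, y0 in [(1.0, 2.0), (0.5, -1.5), (2.0, 0.3)]:
    sol = solve_ivp(rhs, (0.0, 2.0), [x0, y0], dense_output=True)
    t = np.linspace(0.0, 2.0, 5)
    x, y = sol.sol(t)
    # The integral phi(x, y) = y/x is constant along each characteristic curve.
    print(np.round(y / x, 4))
```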

Examples

Example 2.2.2.1: Consider the Cauchy initial value problem
$$u_x+u_y=0 \tag{2.2.2.11}$$
with the initial data
$$x_0(s)=s,\ y_0(s)=1,\ z_0(s)\ \mbox{a given}\ C^1\mbox{-function}.$$
These initial data are non-characteristic since $y_0'a_1-x_0'a_2=-1$. The solution of the associated system of characteristic equations
$$x'(t)=1,\ y'(t)=1,\ z'(t)=0$$
with the initial conditions
$$x(s,0)=x_0(s),\ y(s,0)=y_0(s),\ z(s,0)=z_0(s)$$
is given by
$$x=t+x_0(s),\ y=t+y_0(s),\ z=z_0(s),$$
i. e.,
$$x=t+s,\ y=t+1,\ z=z_0(s).$$
It follows $s=x-y+1$, $t=y-1$, and that $u=z_0(x-y+1)$ is the solution of the Cauchy initial value problem.

Example 2.2.2.2: A problem from kinetics in chemistry. Consider for $x\ge0$, $y\ge0$ the problem
$$u_x+u_y=\left(k_0e^{-k_1x}+k_2\right)(1-u) \tag{2.2.2.17}$$
with initial data
$$u(x,0)=0,\ x>0,\quad\mbox{and}\quad u(0,y)=u_0(y),\ y>0.$$
Here the constants $k_j$ are positive, these constants define the velocity of the reactions in consideration, and the function $u_0(y)$ is given. The variable $x$ is the time and $y$ is the height of a tube, for example, in which the chemical reaction takes place, and $u$ is the concentration of the chemical substance.

In contrast to our previous assumptions, the initial data are not in $C^1$. The projection $C_1\cup C_2$ of the initial curve onto the $(x,y)$-plane has a corner at the origin, see Figure 2.2.2.4.

Figure 2.2.2.4: Domains to the chemical kinetics example

The associated system of characteristic equations is
$$x'(t)=1,\ y'(t)=1,\ z'(t)=\left(k_0e^{-k_1x}+k_2\right)(1-z).$$
It follows $x=t+c_1$, $y=t+c_2$ with constants $c_j$. Thus the projections of the characteristic curves on the $(x,y)$-plane are straight lines parallel to $y=x$. We will solve the initial value problems in the domains $\Omega_1$ and $\Omega_2$, see Figure 2.2.2.4, separately.

(i) The initial value problem in $\Omega_1$. The initial data are
$$x_0(s)=s,\ y_0(s)=0,\ z_0(s)=0,\ s\ge0.$$
It follows
$$x=x(s,t)=t+s,\ y=y(s,t)=t.$$
Thus
$$z'(t)=\left(k_0e^{-k_1(t+s)}+k_2\right)(1-z),\ z(0)=0.$$
The solution of this initial value problem is given by
$$z(s,t)=1-\exp\left(\frac{k_0}{k_1}e^{-k_1(s+t)}-k_2t-\frac{k_0}{k_1}e^{-k_1s}\right).$$
Consequently
$$u_1(x,y)=1-\exp\left(\frac{k_0}{k_1}e^{-k_1x}-k_2y-\frac{k_0}{k_1}e^{-k_1(x-y)}\right)$$
is the solution of the Cauchy initial value problem in $\Omega_1$. If time $x$ tends to $\infty$, we get the limit
$$\lim_{x\to\infty}u_1(x,y)=1-e^{-k_2y}.$$

(ii) The initial value problem in $\Omega_2$. The initial data are here
$$x_0(s)=0,\ y_0(s)=s,\ z_0(s)=u_0(s),\ s\ge0.$$
It follows
$$x=x(s,t)=t,\ y=y(s,t)=t+s.$$
Thus
$$z'(t)=\left(k_0e^{-k_1t}+k_2\right)(1-z),\ z(0)=u_0(s).$$
The solution of this initial value problem is given by
$$z(s,t)=1-(1-u_0(s))\exp\left(\frac{k_0}{k_1}e^{-k_1t}-k_2t-\frac{k_0}{k_1}\right).$$
Consequently
$$u_2(x,y)=1-(1-u_0(y-x))\exp\left(\frac{k_0}{k_1}e^{-k_1x}-k_2x-\frac{k_0}{k_1}\right)$$
is the solution in $\Omega_2$.

If $x=y$, then
$$u_1(x,y)=1-\exp\left(\frac{k_0}{k_1}e^{-k_1x}-k_2x-\frac{k_0}{k_1}\right)$$
$$u_2(x,y)=1-(1-u_0(0))\exp\left(\frac{k_0}{k_1}e^{-k_1x}-k_2x-\frac{k_0}{k_1}\right).$$
If $u_0(0)>0$, then $u_1<u_2$ if $x=y$, i. e., there is a jump of the concentration of the substrate along its burning front defined by $x=y$.
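A short numerical cross-check of the formulas above (not in the original text; the constants $k_0=1$, $k_1=2$, $k_2=0.5$ and the point $(s,t)=(0.7,1.3)$ are arbitrary illustrative choices): integrating the characteristic equation of case (i) reproduces the closed-form $z(s,t)$.

```python
import numpy as np
from scipy.integrate import solve_ivp

k0, k1, k2 = 1.0, 2.0, 0.5        # illustrative constants
s, t_end = 0.7, 1.3               # characteristic through (x0, y0) = (s, 0)

# Characteristic ODE of case (i): z'(t) = (k0*exp(-k1*(t+s)) + k2)*(1 - z), z(0) = 0
sol = solve_ivp(lambda t, z: [(k0*np.exp(-k1*(t + s)) + k2)*(1.0 - z[0])],
                (0.0, t_end), [0.0], rtol=1e-10, atol=1e-12)

z_closed = 1.0 - np.exp(k0/k1*np.exp(-k1*(s + t_end)) - k2*t_end
                        - k0/k1*np.exp(-k1*s))
print(sol.y[0, -1], z_closed)     # the two values agree to high accuracy
```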

Remark. Such a problem with discontinuous initial data is called Riemann problem. See an exercise for another Riemann problem.

The case that a solution of the equation is known

Here we will see that we get immediately a solution of the Cauchy initial value problem if a solution of the homogeneous linear equation
$$a_1(x,y)u_x+a_2(x,y)u_y=0 \tag{2.2.2.30}$$
is known. Let
$$x_0(s),\ y_0(s),\ z_0(s),\ s_1<s<s_2$$
be the initial data and let $u=\phi(x,y)$ be a solution of the differential equation. We assume that
$$\phi_x(x_0(s),y_0(s))x_0'(s)+\phi_y(x_0(s),y_0(s))y_0'(s)\not=0$$
is satisfied. Set
$$g(s)=\phi(x_0(s),y_0(s))$$
and let $s=h(g)$ be the inverse function. The solution of the Cauchy initial value problem is given by $u_0\left(h(\phi(x,y))\right)$. This follows since in the problem considered a composition of a solution is a solution again, see an exercise, and since
$$u_0\left(h(\phi(x_0(s),y_0(s)))\right)=u_0(h(g))=u_0(s).$$

Example 2.2.2.3: Consider equation
$$u_x+u_y=0$$
with initial data
$$x_0(s)=s,\ y_0(s)=1,\ u_0(s)\ \mbox{a given function}.$$
A solution of the differential equation is $\phi(x,y)=x-y$. Thus
$$\phi(x_0(s),y_0(s))=s-1$$
and
$$u_0(\phi+1)=u_0(x-y+1)$$
is the solution of the problem.

Contributors and Attributions Prof. Dr. Erich Miersemann (Universität Leipzig) Integrated by Justin Marshall. This page titled 2.2.2: Initial Value Problem of Cauchy is shared under a not declared license and was authored, remixed, and/or curated by Erich Miersemann.


2.3: Nonlinear Equations in Two Variables

Here we consider equation
$$F(x,y,z,p,q)=0, \tag{2.3.1}$$
where $z=u(x,y)$, $p=u_x(x,y)$, $q=u_y(x,y)$ and $F\in C^2$ is given such that $F_p^2+F_q^2\not=0$.

In contrast to the quasilinear case, this general nonlinear equation is more complicated. Together with (2.3.1) we will consider the following system of ordinary equations which follow from considerations below as necessary conditions, in particular from the assumption that there is a solution of (2.3.1):
$$x'(t)=F_p \tag{2.3.2}$$
$$y'(t)=F_q \tag{2.3.3}$$
$$z'(t)=pF_p+qF_q \tag{2.3.4}$$
$$p'(t)=-F_x-F_up \tag{2.3.5}$$
$$q'(t)=-F_y-F_uq. \tag{2.3.6}$$

Definition. Equations (2.3.2)-(2.3.6) are said to be characteristic equations of equation (2.3.1) and a solution
$$(x(t),y(t),z(t),p(t),q(t))$$
of the characteristic equations is called characteristic strip or Monge curve.

Figure 2.3.1: Gaspard Monge (Panthéon, Paris)

We will see, as in the quasilinear case, that the strips defined by the characteristic equations build the solution surface of the Cauchy initial value problem.

Let $z=u(x,y)$ be a solution of the general nonlinear differential equation (2.3.1). Let $(x_0,y_0,z_0)$ be fixed, then equation (2.3.1) defines a set of planes given by $(x_0,y_0,z_0,p,q)$, i. e., planes given by $z=v(x,y)$ which contain the point $(x_0,y_0,z_0)$ and for which $v_x=p$, $v_y=q$ at $(x_0,y_0)$. In the case of quasilinear equations this set of planes is a bundle of planes which all contain a fixed straight line, see Section 2.1. In the general case of this section the situation is more complicated.

Consider the example
$$p^2+q^2=f(x,y,z), \tag{2.3.7}$$
where $f$ is a given positive function. Let $E$ be a plane defined by $z=v(x,y)$ which contains $(x_0,y_0,z_0)$. Then the normal on the plane $E$ directed downward is
$$N=\frac{1}{\sqrt{1+|\nabla v|^2}}(p,q,-1),$$
where $p=v_x(x_0,y_0)$, $q=v_y(x_0,y_0)$. It follows from (2.3.7) that the normal $N$ makes a constant angle with the $z$-axis, and the $z$-coordinate of $N$ is constant, see Figure 2.3.2.

Figure 2.3.2: Monge cone in an example

Thus the endpoints of the normals fixed at $(x_0,y_0,z_0)$ define a circle parallel to the $(x,y)$-plane, i. e., there is a cone which is the envelope of all these planes.

We assume that the general equation (2.3.1) defines such a Monge cone at each point in $\mathbb{R}^3$. Then we seek a surface $S$ which touches at each point its Monge cone, see Figure 2.3.3.

Figure 2.3.3: Monge cones

More precisely, we assume there exists, as in the above example, a one parameter $C^1$-family
$$p(\lambda)=p(\lambda;x,y,z),\ q(\lambda)=q(\lambda;x,y,z)$$
of solutions of (2.3.1). These $(p(\lambda),q(\lambda))$ define a family $\Pi(\lambda)$ of planes.

Let
$$x(\tau)=(x(\tau),y(\tau),z(\tau))$$
be a curve on the surface $S$ which touches at each point its Monge cone, see Figure 2.3.4. Thus we assume that at each point of the surface $S$ the associated tangent plane coincides with a plane from the family $\Pi(\lambda)$ at this point.

Figure 2.3.4: Monge cones along a curve on the surface

Consider the tangential plane $T_{x_0}$ of the surface $S$ at $x_0=(x(\tau_0),y(\tau_0),z(\tau_0))$. The straight line
$$l(\sigma)=x_0+\sigma x'(\tau_0),\quad-\infty<\sigma<\infty,$$
is contained in the tangential plane $T_{x_0}$.

2.E: Equations of First Order (Exercises)

Q2.13 Solve the Riemann problem
$$u_{x_1}+u_{x_2}=0$$
in $\Omega_1=\{(x_1,x_2)\in\mathbb{R}^2:\ x_1>x_2\}$ and in $\Omega_2=\{(x_1,x_2)\in\mathbb{R}^2:\ x_1<x_2\}$, where
$$g(x_1)=\left\{\begin{array}{cl}u_l&x_1<0\\u_r&x_1>0\end{array}\right.$$
with constants $u_l\not=u_r$.

Q2.14 Determine the opening angle of the Monge cone, that is, the angle between the axis and the apothem (in German: Mantellinie) of the cone, for equation
$$u_x^2+u_y^2=f(x,y,u),$$
where $f>0$.

Q2.15 Solve the initial value problem
$$u_x^2+u_y^2=1,$$
where $x_0(\theta)=a\cos\theta$, $y_0(\theta)=a\sin\theta$, $z_0(\theta)=1$, $p_0(\theta)=\cos\theta$, $q_0(\theta)=\sin\theta$ if $0\le\theta<2\pi$, and $a=const.>0$.

Q2.16 Show that the integral $\phi(\alpha,\beta;\theta,r,t)$, see the Kepler problem, is a complete integral.

Q2.17 a) Show that $S=\sqrt{\alpha}\,x+\sqrt{1-\alpha}\,y+\beta$, $\alpha,\ \beta\in\mathbb{R}^1$, $0<\alpha<1$, is a complete integral of $S_x-\sqrt{1-S_y^2}=0$.
b) Find the envelope of this family of solutions.

Q2.18 Determine the length of the half axis of the ellipse
$$r=\frac{p}{1-\varepsilon^2\sin(\theta-\theta_0)},\quad0\le\varepsilon<1.$$

3.3.1: Examples

Assume $p'(\rho)>0$ if $\rho>0$. Then the above system of four equations is
$$v_t+(v\cdot\nabla)v+\frac{1}{\rho}p'(\rho)\nabla\rho=f \tag{3.3.1.5}$$
$$\rho_t+\rho\,\mbox{div}\,v+v\cdot\nabla\rho=0, \tag{3.3.1.6}$$
where $\nabla\equiv\nabla_x$ and $\mbox{div}\equiv\mbox{div}_x$, i. e., these operators apply on the spatial variables only.

The characteristic differential equation is here
$$\left|\begin{array}{cccc}\frac{d\chi}{dt}&0&0&\frac{1}{\rho}p'\chi_{x_1}\\0&\frac{d\chi}{dt}&0&\frac{1}{\rho}p'\chi_{x_2}\\0&0&\frac{d\chi}{dt}&\frac{1}{\rho}p'\chi_{x_3}\\\rho\chi_{x_1}&\rho\chi_{x_2}&\rho\chi_{x_3}&\frac{d\chi}{dt}\end{array}\right|=0,$$
where
$$\frac{d\chi}{dt}:=\chi_t+(\nabla_x\chi)\cdot v.$$
Evaluating the determinant, we get the characteristic differential equation
$$\left(\frac{d\chi}{dt}\right)^2\left(\left(\frac{d\chi}{dt}\right)^2-p'(\rho)|\nabla_x\chi|^2\right)=0. \tag{3.3.1.7}$$
This equation implies consequences for the speed of the characteristic surfaces as the following consideration shows.

Consider a family $S(t)$ of surfaces in $\mathbb{R}^3$ defined by $\chi(x,t)=c$, where $x\in\mathbb{R}^3$ and $c$ is a fixed constant. As usually, we assume that $\nabla_x\chi\not=0$. One of the two normals on $S(t)$ at a point of the surface $S(t)$ is given by, see an exercise,
$$n=\frac{\nabla_x\chi}{|\nabla_x\chi|}. \tag{3.3.1.8}$$
Let $Q_0\in S(t_0)$ and let $Q_1\in S(t_1)$ be a point on the line defined by $Q_0+sn$, where $n$ is the normal (3.3.1.8) on $S(t_0)$ at $Q_0$ and $t_0<t_1$, $t_1-t_0$ small, see Figure 3.3.1.2.

Figure 3.3.1.2: Definition of the speed of a surface

Definition. The limit
$$P=\lim_{t_1\to t_0}\frac{|Q_1-Q_0|}{t_1-t_0}$$
is called speed of the surface $S(t)$.

Proposition 3.2. The speed of the surface $S(t)$ is
$$P=-\frac{\chi_t}{|\nabla_x\chi|}. \tag{3.3.1.11}$$
Proof. The proof follows from $\chi(Q_0,t_0)=0$ and $\chi(Q_0+dn,t_0+\triangle t)=0$, where $d=|Q_1-Q_0|$ and $\triangle t=t_1-t_0$. $\Box$

Set $v_n:=v\cdot n$ which is the component of the velocity vector in direction $n$. From (3.3.1.8) we get
$$v_n=\frac{1}{|\nabla_x\chi|}v\cdot\nabla_x\chi.$$

Definition. $V:=P-v_n$, the difference of the speed of the surface and the speed of liquid particles, is called relative speed.

Figure 3.3.1.3: Definition of relative speed

Using the above formulas for $P$ and $v_n$ it follows
$$V=P-v_n=-\frac{\chi_t}{|\nabla_x\chi|}-\frac{v\cdot\nabla_x\chi}{|\nabla_x\chi|}=-\frac{1}{|\nabla_x\chi|}\frac{d\chi}{dt}. \tag{3.3.1.12}$$
Then, we obtain from the characteristic equation (3.3.1.7) that
$$V^2|\nabla_x\chi|^2\left(V^2|\nabla_x\chi|^2-p'(\rho)|\nabla_x\chi|^2\right)=0. \tag{3.3.1.13}$$
An interesting conclusion is that there are two relative speeds: $V=0$ or $V^2=p'(\rho)$.

Definition. $\sqrt{p'(\rho)}$ is called speed of sound.

Contributors and Attributions Prof. Dr. Erich Miersemann (Universität Leipzig) Integrated by Justin Marshall. This page titled 3.3.1: Examples is shared under a not declared license and was authored, remixed, and/or curated by Erich Miersemann.
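For a concrete equation of state the speed of sound $\sqrt{p'(\rho)}$ can be evaluated directly. The polytropic law $p=A\rho^\gamma$ used below is an illustrative assumption, not taken from the text:

```python
import sympy as sp

rho, A, gamma = sp.symbols('rho A gamma', positive=True)

# Illustrative equation of state (assumption): polytropic gas p = A*rho**gamma
p = A * rho**gamma

speed_of_sound = sp.sqrt(sp.diff(p, rho))                        # sqrt(p'(rho))
print(sp.simplify(speed_of_sound))                               # equivalent to sqrt(gamma*p/rho)
print(sp.simplify(speed_of_sound - sp.sqrt(gamma * p / rho)))    # 0
```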

3.4: Systems of Second Order

Here we consider the system
$$\sum_{k,l=1}^nA^{kl}(x,u,\nabla u)u_{x_kx_l}+\mbox{lower order terms}=0, \tag{3.4.1}$$
where $A^{kl}$ are $(m\times m)$ matrices and $u=(u_1,\ldots,u_m)^T$. We assume $A^{kl}=A^{lk}$, which is no restriction of generality provided $u\in C^2$ is satisfied. As in the previous sections, the classification follows from the question whether or not we can calculate formally the solution from the differential equations, if sufficiently many data are given on an initial manifold. Let the initial manifold $S$ be given by $\chi(x)=0$ and assume that $\nabla\chi\not=0$. The mapping $x=x(\lambda)$, see previous sections, leads to
$$\sum_{k,l=1}^nA^{kl}\chi_{x_k}\chi_{x_l}v_{\lambda_n\lambda_n}=\mbox{terms known on}\ S, \tag{3.4.2}$$
where $v(\lambda)=u(x(\lambda))$. The characteristic equation is here
$$\det\left(\sum_{k,l=1}^nA^{kl}\chi_{x_k}\chi_{x_l}\right)=0. \tag{3.4.3}$$
If there is a solution $\chi$ with $\nabla\chi\not=0$, then it is possible that second derivatives are not continuous in a neighborhood of $S$.

Definition. The system is called elliptic if
$$\det\left(\sum_{k,l=1}^nA^{kl}\zeta_k\zeta_l\right)\not=0 \tag{3.4.4}$$
for all $\zeta\in\mathbb{R}^n$, $\zeta\not=0$.

Contributors and Attributions Prof. Dr. Erich Miersemann (Universität Leipzig) Integrated by Justin Marshall. This page titled 3.4: Systems of Second Order is shared under a not declared license and was authored, remixed, and/or curated by Erich Miersemann.

3.4.1: Examples

Example 3.4.1.1: Navier-Stokes equations. The Navier-Stokes system for a viscous incompressible liquid is
$$v_t+(v\cdot\nabla_x)v=-\frac{1}{\rho}\nabla_xp+\gamma\triangle_xv$$
$$\mbox{div}_x\,v=0,$$
where $\rho$ is the (constant and positive) density of the liquid, $\gamma$ is the (constant and positive) viscosity of the liquid, $v=v(x,t)$ is the velocity vector of the liquid particles, $x\in\mathbb{R}^3$ or in $\mathbb{R}^2$, and $p=p(x,t)$ is the pressure. The problem is to find solutions $v,\ p$ of the above system.

Example 3.4.1.2: Linear elasticity. Consider the system
$$\rho\frac{\partial^2u}{\partial t^2}=\mu\triangle_xu+(\lambda+\mu)\nabla_x(\mbox{div}_x\,u)+f. \tag{3.4.1.1}$$
Here is, in the case of an elastic body in $\mathbb{R}^3$, $u(x,t)=(u_1(x,t),u_2(x,t),u_3(x,t))$ the displacement vector, $f(x,t)$ the density of the external force, $\rho$ the (constant) density, and $\lambda,\ \mu$ the (positive) Lamé constants.

The characteristic equation is $\det C=0$, where the entries of the matrix $C$ are given by
$$c_{ij}=(\lambda+\mu)\chi_{x_i}\chi_{x_j}+\delta_{ij}\left(\mu|\nabla_x\chi|^2-\rho\chi_t^2\right). \tag{3.4.1.2}$$
The characteristic equation is
$$\left((\lambda+2\mu)|\nabla_x\chi|^2-\rho\chi_t^2\right)\left(\mu|\nabla_x\chi|^2-\rho\chi_t^2\right)^2=0. \tag{3.4.1.3}$$
It follows that two different speeds $P$ of characteristic surfaces $S(t)$, defined by $\chi(x,t)=const.$, are possible, namely
$$P_1=\sqrt{\frac{\lambda+2\mu}{\rho}}\quad\mbox{and}\quad P_2=\sqrt{\frac{\mu}{\rho}}. \tag{3.4.1.4}$$
We recall that $P=-\chi_t/|\nabla_x\chi|$.

Contributors and Attributions Prof. Dr. Erich Miersemann (Universität Leipzig) Integrated by Justin Marshall. This page titled 3.4.1: Examples is shared under a not declared license and was authored, remixed, and/or curated by Erich Miersemann.
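The two characteristic speeds $P_1$ and $P_2$ in (3.4.1.4) are easy to evaluate numerically. The material parameters below (roughly those of steel) are an illustrative assumption, not values from the text:

```python
import math

# Illustrative material data (approximately steel): Lame constants in Pa, density in kg/m^3
lam, mu, rho = 1.15e11, 7.7e10, 7850.0

P1 = math.sqrt((lam + 2*mu) / rho)   # speed of longitudinal (pressure) waves
P2 = math.sqrt(mu / rho)             # speed of transversal (shear) waves
print(P1, P2)                        # roughly 5.9e3 m/s and 3.1e3 m/s
```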

3.5: Theorem of Cauchy-Kovalevskaya Consider the quasilinear system of first order (3.3.1) of Section 3.3. Assume an initial manifolds S is given by χ(x) = 0 , ∇χ ≠ 0 , and suppose that χ is not characteristic. Then, see Section 3.3, the system (3.3.1) can be written as n−1

ux

n

i

= ∑ a (x, u)ux + b(x, u)

(3.5.1)

i

i=1

u(x1 , … , xn−1 , 0) = f (x1 , … , xn−1 )

Here is u = (u

1,

T

… , um )

, b = (b

1,

T

… , bn )

(3.5.2)

and a are (m × m) -matrices. i

We assume that a , b and f are in C with respect to their arguments. From (3.5.1) and (3.5.2) it follows that we can calculate formally all derivatives D u in a neighborhood of the plane {x :  x = 0} , in particular in a neighborhood of 0 ∈ R . Thus we have a formal power series of u(x) at x = 0 : i



α

n

$$u(x)\sim \sum\frac{1}{\alpha !}D^\alpha u(0) x^\alpha.\] For notations and definitions used here and in the following see the appendix to this section. Then, as usually, two questions arise: i. Does the power series converge in a neighborhood of 0 ∈ R ? ii. Is a convergent power series a solution of the initial value problem (3.5.1), (3.5.2)? Remark. Quite different to this power series method is the method of asymptotic expansions. Here one is interested in a good approximation of an unknown solution of an equation by a finite sum ∑ ϕ (x) of functions ϕ . In general, the infinite sum ∑ ϕ (x) does not converge, in contrast to the power series method of this section. See [15] for some asymptotic formulas in capillarity. N

i=0

i

i



i=0

i

Theorem 3.1. (Cauchy-Kovalevskaya). There is a neighborhood of 0 ∈ R such there is a real analytic solution of the initial value problem (3.5.1), (3.5.2). This solution is unique in the class of real analytic functions. Proof. The proof is taken from F. John \cite{John}. We introduce u − f as the new solution for which we are looking at and we add a new coordinate u to the solution vector by setting u (x) = x . Then ⋆



n

$$u^\star_{x_n}=1,\ u^\star_{x_k}=0,\ k=1,\ldots,n-1,\ u^\star(x_1,\ldots,x_{n-1},0)=0\] and the extended system (3.5.1), (3.5.2) is $$ \left( u1,xn ⋮

(3.5.3)

um,xn ⋆

ux

n

\right)= \sum_{i=1}^{n-1}\left( i

a

0

0

0

(3.5.4)

\right) \left( u1,x

i



(3.5.5)

um,xi ⋆

ux

i

3.5.1

https://math.libretexts.org/@go/page/2146

\right)+ \left( b1 ⋮

(3.5.6)

bm 1

\right), \] where the associated initial condition is u(x

.

1,

The

new

u

is

T

u = (u1 , … , um ) ⋆

T

b = (x1 , … , xn−1 , u1 , … , um , u )

,

… , xn−1 , 0) = 0

the

new

are

i

a

i

and



a (x1 , … , xn−1 , u1 , … , um , u )

the

new

is

b

.

Thus we are led to an initial value problem of the type n−1

N i

uj,xn = ∑ ∑ a

jk

i=1

(z)uk,xi + bj (z),  j = 1, … , N

(3.5.7)

k=1

uj (x) = 0  if xn = 0,

where j = 1, … , N and z = (x

1,

The point here is that a

i jk

… , xn−1 , u1 , … , uN )

(3.5.8)

.

and b are independent of x . This fact simplifies the proof of the theorem. j

n

From (3.5.7) and (3.5.8) we can calculate formally all D

β

uj

. Then we have formal power series for u : j

$$u_j(x)\sim \sum_\alpha c_\alpha^{(j)}x^\alpha,\] where $$c_\alpha^{(j)}=\frac{1}{\alpha!}D^\alpha u_j(0).\] We will show that these power series are (absolutely) convergent in a neighborhood of 0 ∈ R , i.e., they are real analytic functions, see the appendix for the definition of real analytic functions. Inserting these functions into the left and into the right hand side of ( 3.5.7) we obtain on the right and on the left hand side real analytic functions. This follows since compositions of real analytic functions are real analytic again, see Proposition A7 of the appendix to this section. The resulting power series on the left and on the right have the same coefficients caused by the calculation of the derivatives D u (0) from (3.5.7). It follows that u (x), j = 1, … , n , defined by its formal power series are solutions of the initial value problem (3.5.7), (3.5.8). α

j

j

Set $$d=\left(\frac{\partial}{\partial z_1},\ldots,\frac{\partial}{\partial z_{N+n-1}}\right)\] Lemma A. Assume u ∈ C



in a neighborhood of 0 ∈ R . Then α

D

uj (0) = Pα (d

β

i

a

jk

(0), d

γ

bj (0)) ,

(3.5.9)

where |β|,  |γ| ≤ |α| and P are polynomials in the indicated arguments with non-negative integers as coefficients which are independent of a and of b . α

i

Proof. It follows from equation (3.5.7) that α

Dn D

Here is D

n

= ∂/∂ xn

uj (0) = Pα (d

β

i

a

jk

(0), d

γ

δ

bj (0), D uk (0)).

(3.5.10)

and α,  β,  γ,  δ satisfy the inequalities

$$|\beta|,\ |\gamma|\le|\alpha|,\ \ |\delta|\le|\alpha|+1,\] and, which is essential in the proof, the last coordinates in the multi-indices α = (α , … , α ) , δ = (δ , … , δ ) satisfy δ since the right hand side of (3.5.7) is independent of x . Moreover, it follows from (3.5.7) that the polynomials P have integers as coefficients. The initial condition (3.5.8) implies 1

n

1

n

n

≤ αn

n

α

3.5.2

https://math.libretexts.org/@go/page/2146

α

D

where

uj (0) = 0,

(3.5.11)

, that is, α = 0 . Then, the proof is by induction with respect to α . The induction starts with , then we replace $D^\delta u_k(0)$ in the right hand side of (3.5.10) by (3.5.11), that is by zero. Then it follows from ( 3.5.10) that α = (α1 , … , αn−1 , 0)

n

n

αn = 0

$$D^\alpha u_j(0)=P_\alpha (d^\beta a_{jk}^i(0),d^\gamma b_j(0),D^\delta u_k(0)),\] where α = (α

1,

… , αn−1 , 1)

. □

Definition. Let f

= (f1 , … , fm )

,F

= (F1 , … , Fm ) α

|D

for all α . We write f

0 and 0 < r < ct it follows that M (r, t) is the solution of the initial value problem with given initially data ( 4.2.1.1) since F (s) = F (s) , G (s) = G(s) if s > 0 . 0

0

0

Since for fixed t > 0 $$u(x,t)=\lim_{r\to 0} M_0(r,t),\] it follows from d'Hospital's rule that ′

u(x, t) = ctF (ct) + F (ct) + tG(ct) d =

(tF (ct)) + tG(ct). dt

Theorem 4.2. Assume f ∈ C (R ) and g ∈ C (R ) are given. Then there exists a unique solution initial value problem (4.2.2)-(4.2.3), where n = 3 , and the solution is given by the Poisson's formula 3

3

2

3

1 u(x, t) =

∂ 2

4πc

u ∈ C

2

3

(R

× [0, ∞))

of the

1 (

∂t

∫ t

 f (y) dSy )

(4.2.1.3)

∂ Bct (x)

1 +

2



4π c t

 g(y) dSy .

(4.2.1.4)

∂ Bct (x)

Proof. Above we have shown that a C -solution is given by Poisson's formula. Under the additional assumption f from Poisson's formula that this formula defines a solution which is in C , see F. John [10], p. 129. 2

∈ C

3

it follows

2



Corollary. From Poisson's formula we see that the domain of dependence for u(x, t_0) is the intersection of the cone defined by |y − x| = c|t_0 − t| with the hyperplane defined by t = 0, see Figure 4.2.1.2.

Figure 4.2.1.2: Domain of dependence, case n = 3.
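Poisson's formula can be tested numerically in simple cases. The following sketch is an added illustration, not part of the original text; the data f = 0, g(y) = y_1^2 and the exact solution u = t x_1^2 + c^2 t^3/3 are ad hoc choices for this check, and the spherical mean is estimated by Monte Carlo sampling.

```python
# Numerical check of Poisson's formula (4.2.1.3) for f = 0, g(y) = y_1^2.
# The exact solution of u_tt = c^2*Laplace(u), u(.,0) = 0, u_t(.,0) = y_1^2
# is u(x,t) = t*x_1^2 + c^2*t^3/3; for f = 0 the formula reduces to
# t times the mean value of g over the sphere of radius c*t around x.
import numpy as np

rng = np.random.default_rng(0)
c, t = 2.0, 0.75
x = np.array([0.3, -1.0, 0.5])

# uniformly distributed points on the sphere |y - x| = c*t
v = rng.normal(size=(200000, 3))
y = x + c * t * v / np.linalg.norm(v, axis=1, keepdims=True)

u_formula = t * np.mean(y[:, 0]**2)          # (1/(4*pi*c^2*t)) * surface integral of g
u_exact = t * x[0]**2 + c**2 * t**3 / 3.0
print(u_formula, u_exact)                    # agree up to Monte Carlo error
```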

Contributors and Attributions Prof. Dr. Erich Miersemann (Universität Leipzig) Integrated by Justin Marshall. This page titled 4.2.1: Case n=3 is shared under a not declared license and was authored, remixed, and/or curated by Erich Miersemann.


4.2.2: Case n=2

Consider the initial value problem
$$v_{xx}+v_{yy}=c^{-2}v_{tt}\quad (4.2.2.1)$$
$$v(x,y,0)=f(x,y)\quad (4.2.2.2)$$
$$v_t(x,y,0)=g(x,y),\quad (4.2.2.3)$$
where f ∈ C^3 and g ∈ C^2.

Using the formula for the solution of the three-dimensional initial value problem we will derive a formula for the two-dimensional case. The following consideration is called Hadamard's method of descent.

Let v(x,y,t) be a solution of (4.2.2.1)-(4.2.2.3), then
$$u(x,y,z,t):=v(x,y,t)$$
is a solution of the three-dimensional initial value problem with initial data f(x,y), g(x,y), independent of z, since u satisfies (4.2.2.1)-(4.2.2.3). Hence, since u(x,y,z,t) = u(x,y,0,t) + u_z(x,y,δz,t)z, 0 < δ < 1, and u_z = 0, we have
$$v(x,y,t)=u(x,y,0,t).$$
Poisson's formula in the three-dimensional case implies
$$v(x,y,t)=\frac{1}{4\pi c^2}\,\frac{\partial}{\partial t}\left(\frac{1}{t}\int_{\partial B_{ct}(x,y,0)} f(\xi,\eta)\,dS\right)+\frac{1}{4\pi c^2 t}\int_{\partial B_{ct}(x,y,0)} g(\xi,\eta)\,dS.\quad (4.2.2.4)$$

Figure 4.2.2.1: Domains of integration

The integrands are independent of ζ. The surface S is defined by χ(ξ,η,ζ) := (ξ−x)^2 + (η−y)^2 + ζ^2 − c^2t^2 = 0. Then the exterior normal n at S is n = ∇χ/|∇χ|, and the surface element is given by dS = (1/|n_3|) dξ dη, where the third coordinate of n is
$$n_3=\pm\frac{\sqrt{c^2 t^2-(\xi-x)^2-(\eta-y)^2}}{ct}.$$
The positive sign applies on S^+, where ζ > 0, and the sign is negative on S^-, where ζ < 0. We have S = S^+ ∪ S^-.

Set ρ^2 := (ξ−x)^2 + (η−y)^2. Then (4.2.2.4) yields
$$v(x,y,t)=\frac{1}{2\pi c}\left(\frac{\partial}{\partial t}\int_{B_{ct}(x,y)} \frac{f(\xi,\eta)}{\sqrt{c^2t^2-\rho^2}}\,d\xi d\eta+\int_{B_{ct}(x,y)} \frac{g(\xi,\eta)}{\sqrt{c^2t^2-\rho^2}}\,d\xi d\eta\right).$$
In contrast to the three-dimensional case, the domain of dependence is here the whole disk B_{ct_0}(x_0, y_0) and not only its boundary. In particular, initial data with compact support in R^2 influence the value v(x, y, t) for all t > T, T sufficiently large.

Contributors and Attributions
Prof. Dr. Erich Miersemann (Universität Leipzig) Integrated by Justin Marshall. This page titled 4.2.2: Case n=2 is shared under a not declared license and was authored, remixed, and/or curated by Erich Miersemann.

4.3: Inhomogeneous Equations

Here we consider the initial value problem
$$\Box u=w(x,t)\ \ \mbox{on}\ x\in R^n,\ t\in R^1\quad (4.3.1)$$
$$u(x,0)=f(x)\quad (4.3.2)$$
$$u_t(x,0)=g(x),\quad (4.3.3)$$
where □u := u_{tt} − c^2△u. We assume f ∈ C^3, g ∈ C^2 and w ∈ C^1, which are given.

Set u = u_1 + u_2, where u_1 is a solution of problem (4.3.1)-(4.3.3) with w := 0 and u_2 is the solution where f = 0 and g = 0 in (4.3.1)-(4.3.3). Since we have explicit solutions u_1 in the cases n = 1, n = 2 and n = 3, it remains to solve
$$\Box u=w(x,t)\ \ \mbox{on}\ x\in R^n,\ t\in R^1\quad (4.3.4)$$
$$u(x,0)=0\quad (4.3.5)$$
$$u_t(x,0)=0.\quad (4.3.6)$$

The following method is called Duhamel's principle, which can be considered as a generalization of the method of variation of constants in the theory of ordinary differential equations.

To solve this problem, we make the ansatz
$$u(x,t)=\int_0^t\ v(x,t,s)\ ds,\quad (4.3.7)$$
where v is a function satisfying
$$\Box v=0\ \ \mbox{for all}\ s\quad (4.3.8)$$
and
$$v(x,s,s)=0.\quad (4.3.9)$$

From ansatz (4.3.7) and assumption (4.3.9) we get
$$u_t=v(x,t,t)+\int_0^t\ v_t(x,t,s)\ ds=\int_0^t\ v_t(x,t,s)\ ds.\quad (4.3.10)$$
It follows u_t(x,0) = 0. The initial condition u(x,0) = 0 is satisfied because of the ansatz (4.3.7). From (4.3.10) and ansatz (4.3.7) we see that
$$u_{tt}=v_t(x,t,t)+\int_0^t\ v_{tt}(x,t,s)\ ds,\qquad \triangle_x u=\int_0^t\ \triangle_x v(x,t,s)\ ds.$$
Therefore, since u is an ansatz for (4.3.4)-(4.3.6),
$$u_{tt}-c^2\triangle_x u=v_t(x,t,t)+\int_0^t\ (\Box v)(x,t,s)\ ds=w(x,t).$$
Thus necessarily v_t(x,t,t) = w(x,t), see (4.3.8). We have seen that the ansatz provides a solution of (4.3.4)-(4.3.6) if for all s
$$\Box v=0,\ \ v(x,s,s)=0,\ \ v_t(x,s,s)=w(x,s).\quad (4.3.11)$$
Let v^*(x,t,s) be a solution of
$$\Box v=0,\ \ v(x,0,s)=0,\ \ v_t(x,0,s)=w(x,s),\quad (4.3.12)$$
then
$$v(x,t,s):=v^*(x,t-s,s)$$
is a solution of (4.3.11). In the case n = 3, v^* is given by, see Theorem 4.2,

$$v^*(x,t,s)=\frac{1}{4\pi c^2 t}\int_{\partial B_{ct}(x)}\ w(\xi,s)\ dS_\xi.$$
Then
$$v(x,t,s)=v^*(x,t-s,s)=\frac{1}{4\pi c^2 (t-s)}\int_{\partial B_{c(t-s)}(x)}\ w(\xi,s)\ dS_\xi.$$
From ansatz (4.3.7) it follows
$$u(x,t)=\int_0^t\ v(x,t,s)\ ds=\frac{1}{4\pi c^2}\int_0^t\int_{\partial B_{c(t-s)}(x)}\ \frac{w(\xi,s)}{t-s}\ dS_\xi\, ds.$$
Changing variables by τ = c(t − s) yields
$$u(x,t)=\frac{1}{4\pi c^2}\int_0^{ct}\int_{\partial B_{\tau}(x)}\ \frac{w(\xi,t-\tau/c)}{\tau}\ dS_\xi\, d\tau=\frac{1}{4\pi c^2}\int_{B_{ct}(x)}\ \frac{w(\xi,t-r/c)}{r}\ d\xi,$$
where r = |x − ξ|. Formulas for the cases n = 1 and n = 2 follow from formulas for the associated homogeneous equation with inhomogeneous initial values for these cases.

Theorem 4.4. The solution of
$$\Box u=w(x,t),\ \ u(x,0)=0,\ \ u_t(x,0)=0,\quad (4.3.13)$$
where w ∈ C^1, is given by:

Case n = 3:
$$u(x,t)=\frac{1}{4\pi c^2}\int_{B_{ct}(x)}\ \frac{w(\xi,t-r/c)}{r}\ d\xi,\quad (4.3.14)$$
where r = |x − ξ|, x = (x_1, x_2, x_3), ξ = (ξ_1, ξ_2, ξ_3).

Case n = 2:
$$u(x,t)=\frac{1}{2\pi c}\int_0^t\left(\int_{B_{c(t-\tau)}(x)}\ \frac{w(\xi,\tau)}{\sqrt{c^2(t-\tau)^2-r^2}}\ d\xi\right)d\tau,\quad (4.3.15)$$
where r = |x − ξ|, x = (x_1, x_2), ξ = (ξ_1, ξ_2).

Case n = 1:
$$u(x,t)=\frac{1}{2c}\int_0^t\left(\int_{x-c(t-\tau)}^{x+c(t-\tau)}\ w(\xi,\tau)\ d\xi\right)d\tau.\quad (4.3.16)$$

Remark. The integrand on the right in the formula for n = 3 is called retarded potential. The integrand is not taken at t, it is taken at the earlier time t − r/c.
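For n = 1, formula (4.3.16) is easy to test numerically. The following sketch is an added illustration, not part of the original text; the right hand side w(x,t) = x and the exact solution u = x t^2/2 are ad hoc choices made only for this check.

```python
# Check of Duhamel's formula (4.3.16) in one space dimension for w(x,t) = x.
# For this right hand side the exact solution of u_tt - c^2 u_xx = w with
# zero initial data is u(x,t) = x*t^2/2, independent of c.
import numpy as np

def duhamel_1d(w, x, t, c, n_tau=400, n_xi=400):
    """u(x,t) = 1/(2c) * int_0^t int_{x-c(t-tau)}^{x+c(t-tau)} w(xi,tau) dxi dtau."""
    taus = np.linspace(0.0, t, n_tau)
    inner = np.empty_like(taus)
    for k, tau in enumerate(taus):
        xi = np.linspace(x - c * (t - tau), x + c * (t - tau), n_xi)
        inner[k] = np.trapz(w(xi, tau), xi)
    return np.trapz(inner, taus) / (2.0 * c)

c, x, t = 3.0, 1.5, 2.0
approx = duhamel_1d(lambda xi, tau: xi, x, t, c)
print(approx, x * t**2 / 2.0)   # both values are close to 3.0
```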


Contributors and Attributions Prof. Dr. Erich Miersemann (Universität Leipzig) Integrated by Justin Marshall. This page titled 4.3: Inhomogeneous Equations is shared under a not declared license and was authored, remixed, and/or curated by Erich Miersemann.


4.4: A Method of Riemann

Riemann's method provides a formula for the solution of the following Cauchy initial value problem for a hyperbolic equation of second order in two variables. Let
$${\mathcal S}:\ \ x=x(t),\ y=y(t),\ \ t_1\le t \le t_2,$$
be a regular curve in R^2, that is, we assume x, y ∈ C^1[t_1, t_2] and x'^2 + y'^2 ≠ 0. Set
$$Lu:=u_{xy}+a(x,y)u_x+b(x,y)u_y+c(x,y)u,$$
where a, b ∈ C^1 and c, f ∈ C in a neighborhood of S. Consider the initial value problem
$$Lu=f(x,y)\quad (4.4.1)$$
$$u_0(t)=u(x(t),y(t))\quad (4.4.2)$$
$$p_0(t)=u_x(x(t),y(t))\quad (4.4.3)$$
$$q_0(t)=u_y(x(t),y(t)),\quad (4.4.4)$$
where f ∈ C in a neighbourhood of S and u_0, p_0, q_0 ∈ C^1 are given.

We assume:
i. u_0'(t) = p_0(t)x'(t) + q_0(t)y'(t) (strip condition),
ii. S is not a characteristic curve. Moreover assume that the characteristic curves, which are lines here and are defined by x = const. and y = const., have at most one point of intersection with S, and such a point is not a touching point, i.e., the tangents of the characteristic and of S are different at this point.

We recall that the characteristic equation to (4.4.1) is χ_x χ_y = 0, which is satisfied if χ_x(x,y) = 0 or χ_y(x,y) = 0. One family of characteristics associated to these partial differential equations of first order is defined by x'(t) = 1, y'(t) = 0, see Chapter 2.

Assume u, v ∈ C^1 and that u_{xy}, v_{xy} exist and are continuous. Define the adjoint differential expression by
$$Mv=v_{xy}-(av)_x-(bv)_y+cv.$$
We have
$$2(vLu-uMv)=(u_xv-v_xu+2buv)_y+(u_yv-v_yu+2auv)_x.\quad (4.4.5)$$
Set
$$P=-(u_xv-v_xu+2buv),\qquad Q=u_yv-v_yu+2auv.$$
From (4.4.5) it follows for a domain Ω ⊂ R^2 that
$$2\int_\Omega\ (vLu-uMv)\ dxdy=\int_\Omega\ (-P_y+Q_x)\ dxdy=\oint\ P\,dx+Q\,dy,\quad (4.4.6)$$
where integration in the line integral is anticlockwise. The previous equation follows from the Gauss theorem, or after integration by parts:
$$\int_\Omega\ (-P_y+Q_x)\ dxdy=\int_{\partial\Omega}\ (-Pn_2+Qn_1)\ ds,$$
where n = (dy/ds, −dx/ds), s is the arc length, and (x(s), y(s)) represents ∂Ω.

Assume u is a solution of the initial value problem (4.4.1)-(4.4.4) and suppose that v satisfies
$$Mv=0\ \ \mbox{in}\ \Omega.$$


Figure 4.4.1: Riemann's method, domain of integration

Then, if we integrate over a domain Ω as shown in Figure 4.4.1, it follows from (4.4.6) that
$$2\int_\Omega\ vf\ dxdy=\int_{BA}\ P\,dx+Q\,dy+\int_{AP}\ P\,dx+Q\,dy+\int_{PB}\ P\,dx+Q\,dy.\quad (4.4.7)$$
The line integral from B to A is known from the initial data, see the definition of P and Q. Since
$$u_xv-v_xu+2buv=(uv)_x+2u(bv-v_x),$$
it follows
$$\int_{AP}P\,dx+Q\,dy=-\int_{AP}\left((uv)_x+2u(bv-v_x)\right)dx=-(uv)(P)+(uv)(A)-\int_{AP}\ 2u(bv-v_x)\,dx.$$
By the same reasoning we obtain for the third line integral
$$\int_{PB}P\,dx+Q\,dy=\int_{PB}\left((uv)_y+2u(av-v_y)\right)dy=(uv)(B)-(uv)(P)+\int_{PB}\ 2u(av-v_y)\,dy.$$
Combining these equations with (4.4.6), we get
$$2v(P)u(P)=\int_{BA}\ (u_xv-v_xu+2buv)\,dx-(u_yv-v_yu+2auv)\,dy+u(A)v(A)+u(B)v(B)+2\int_{AP}u(bv-v_x)\,dx+2\int_{PB}u(av-v_y)\,dy-2\int_\Omega f v\ dxdy.\quad (4.4.8)$$

Let v be a solution of the initial value problem, see Figure 4.4.2 for the definition of the domain D(P),

Figure 4.4.2: Definition of Riemann's function

$$Mv=0\ \ \mbox{in}\ D(P)\quad (4.4.9)$$
$$bv-v_x=0\ \ \mbox{on}\ C_1\quad (4.4.10)$$
$$av-v_y=0\ \ \mbox{on}\ C_2\quad (4.4.11)$$
$$v(P)=1.\quad (4.4.12)$$

Assume v satisfies (4.4.9)-(4.4.12), then
$$2u(P)=u(A)v(A)+u(B)v(B)-2\int_\Omega\ f v\ dxdy+\int_{BA}\ (u_xv-v_xu+2buv)\,dx-(u_yv-v_yu+2auv)\,dy,$$
where the right hand side is known from the given data.

A function v = v(x, y; x_0, y_0) satisfying (4.4.9)-(4.4.12) is called Riemann's function.

Remark. Set w(x,y) = v(x, y; x_0, y_0) for fixed x_0, y_0. Then (4.4.9)-(4.4.12) imply
$$w(x,y_0)=\exp\left(\int_{x_0}^x\ b(\tau,y_0)\ d\tau\right)\ \ \mbox{on}\ C_1,$$
$$w(x_0,y)=\exp\left(\int_{y_0}^y\ a(x_0,\tau)\ d\tau\right)\ \ \mbox{on}\ C_2.$$

Example 4.4.1: u_{xy} = f(x,y). Then a Riemann function is v(x,y) ≡ 1.

Example 4.4.2: Consider the telegraph equation of Chapter 3,
$$\varepsilon \mu u_{tt}=c^2\triangle_xu-\lambda\mu u_t,$$
where u stands for one coordinate of the electric or magnetic field. Introducing
$$u=w(x,t)e^{\kappa t},$$
where κ = −λ/(2ε), we arrive at
$$w_{tt}=\frac{c^2}{\varepsilon\mu}\triangle_xw+\frac{\lambda^2}{4\varepsilon^2}w.$$
Stretching the axes and transforming the equation to its normal form, we finally get the following equation, where the new function is denoted by u and the new variables are denoted by x, y again:
$$u_{xy}+cu=0,$$
with a positive constant c. We make the ansatz for a Riemann function
$$v(x,y;x_0,y_0)=w(s),\ \ s=(x-x_0)(y-y_0)$$
and obtain
$$sw''+w'+cw=0.$$
The substitution σ = √(4cs) leads to Bessel's differential equation
$$\sigma^2 z''(\sigma)+\sigma z'(\sigma)+\sigma^2 z(\sigma)=0,$$
where z(σ) = w(σ^2/(4c)). A solution is
$$J_0(\sigma)=J_0\left(\sqrt{4c(x-x_0)(y-y_0)}\right),$$
which defines a Riemann function since J_0(0) = 1.

Remark. Bessel's differential equation is
$$x^2y''(x)+xy'(x)+(x^2-n^2)y(x)=0,$$
where n ∈ R^1. If n ∈ N ∪ {0}, then solutions are given by Bessel functions. One of the two linearly independent solutions is bounded at 0. This bounded solution is the Bessel function J_n(x) of first kind and of order n, see [1], for example.
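The claim of Example 4.4.2 can be checked numerically. The sketch below is an added illustration, not part of the original text; the value c = 0.7, the reference point (x_0, y_0) and the sample points are arbitrary choices, and v_{xy} is approximated by a centered finite difference.

```python
# Numerical check that v = J_0(sqrt(4c(x-x0)(y-y0))) satisfies v_xy + c*v = 0
# and equals 1 on the characteristics x = x0 and y = y0 (Example 4.4.2).
import numpy as np
from scipy.special import j0

c, x0, y0 = 0.7, 0.2, -0.5

def v(x, y):
    return j0(np.sqrt(4.0 * c * (x - x0) * (y - y0)))

h = 1e-4
for (x, y) in [(1.0, 0.3), (2.5, 1.7), (0.9, 2.0)]:
    v_xy = (v(x + h, y + h) - v(x + h, y - h)
            - v(x - h, y + h) + v(x - h, y - h)) / (4.0 * h * h)
    print(abs(v_xy + c * v(x, y)))      # of the order 1e-7 or smaller

print(v(x0, 5.0), v(7.0, y0))           # both equal 1, so (4.4.10)-(4.4.12) hold with a = b = 0
```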

Contributors and Attributions Prof. Dr. Erich Miersemann (Universität Leipzig) Integrated by Justin Marshall. This page titled 4.4: A Method of Riemann is shared under a not declared license and was authored, remixed, and/or curated by Erich Miersemann.


4.5: Initial-Boundary Value Problems

In previous sections we looked at solutions defined for all x ∈ R^n and t ∈ R^1. In this and in the following section we seek solutions u(x,t) defined in a bounded domain Ω ⊂ R^n and for all t ∈ R^1 which satisfy additional boundary conditions on ∂Ω.

Contributors and Attributions Prof. Dr. Erich Miersemann (Universität Leipzig) Integrated by Justin Marshall. This page titled 4.5: Initial-Boundary Value Problems is shared under a not declared license and was authored, remixed, and/or curated by Erich Miersemann.


4.5.1: Oscillation of a String

Let u(x,t), x ∈ [a,b], t ∈ R^1, be the deflection of a string, see Figure 1.1.1 of Chapter 1. Assume the deflection occurs in the (x,u)-plane. This problem is governed by the initial-boundary value problem
$$u_{tt}(x,t)=u_{xx}(x,t)\ \ \mbox{on}\ (0,l)\quad (4.5.1.1)$$
$$u(x,0)=f(x)\quad (4.5.1.2)$$
$$u_t(x,0)=g(x)\quad (4.5.1.3)$$
$$u(0,t)=u(l,t)=0.\quad (4.5.1.4)$$
Assume the initial data f, g are sufficiently regular. This implies the compatibility conditions f(0) = f(l) = 0 and g(0) = g(l) = 0.

Fourier's method

To find solutions of differential equation (4.5.1.1) we make the separation of variables ansatz
$$u(x,t)=v(x)w(t).$$
Inserting the ansatz into (4.5.1.1) we obtain
$$v(x)w''(t)=v''(x)w(t),$$
or, if v(x)w(t) ≠ 0,
$$\frac{w''(t)}{w(t)}=\frac{v''(x)}{v(x)}.$$
It follows, provided v(x)w(t) is a solution of differential equation (4.5.1.1) and v(x)w(t) ≠ 0,
$$\frac{w''(t)}{w(t)}=\mbox{const.}=:-\lambda\qquad\mbox{and}\qquad \frac{v''(x)}{v(x)}=-\lambda,$$
since x, t are independent variables.

Assume v(0) = v(l) = 0, then v(x)w(t) satisfies the boundary condition (4.5.1.4). Thus we look for solutions of the eigenvalue problem
$$-v''(x)=\lambda v(x)\ \ \mbox{in}\ (0,l)\quad (4.5.1.5)$$
$$v(0)=v(l)=0,\quad (4.5.1.6)$$
which has the eigenvalues
$$\lambda_n=\left(\frac{\pi}{l}n\right)^2,\ \ n=1,2,\ldots,$$
and associated eigenfunctions
$$v_n=\sin\left(\frac{\pi}{l}nx\right).$$
Solutions of
$$-w''(t)=\lambda_n w(t)$$
are
$$\sin(\sqrt{\lambda_n}t),\ \ \cos(\sqrt{\lambda_n}t).$$
Set
$$w_n(t)=\alpha_n\cos(\sqrt{\lambda_n}t)+\beta_n\sin(\sqrt{\lambda_n}t),$$
where α_n, β_n ∈ R^1. It is easily seen that w_n(t)v_n(x) is a solution of differential equation (4.5.1.1), and, since (4.5.1.1) is linear and homogeneous, so is (principle of superposition)
$$u_N=\sum_{n=1}^N w_n(t)v_n(x),$$

which satisfies the differential equation (4.5.1.1) and the boundary conditions (4.5.1.4). Consider the formal solution of (4.5.1.1), (4.5.1.4),
$$u(x,t)=\sum_{n=1}^\infty\left(\alpha_n\cos(\sqrt{\lambda_n}t)+\beta_n\sin(\sqrt{\lambda_n}t)\right)\sin(\sqrt{\lambda_n}x).\quad (4.5.1.7)$$
"Formal" means that we know here neither that the right hand side converges nor that it is a solution of the initial-boundary value problem. Formally, the unknown coefficients can be calculated from the initial conditions (4.5.1.2), (4.5.1.3) as follows. We have
$$u(x,0)=\sum_{n=1}^\infty \alpha_n\sin(\sqrt{\lambda_n}x)=f(x).$$
Multiplying this equation by sin(√λ_k x) and integrating over (0,l), we get
$$\alpha_k\int_0^l\ \sin^2(\sqrt{\lambda_k}x)\ dx=\int_0^l\ f(x)\sin(\sqrt{\lambda_k}x)\ dx.$$
We recall that
$$\int_0^l\ \sin(\sqrt{\lambda_n}x)\sin(\sqrt{\lambda_k}x)\ dx=\frac{l}{2}\delta_{nk}.$$
Then
$$\alpha_k=\frac{2}{l}\int_0^l\ f(x)\sin\left(\frac{\pi k}{l}x\right)dx.\quad (4.5.1.8)$$
By the same argument it follows from
$$u_t(x,0)=\sum_{n=1}^\infty \beta_n\sqrt{\lambda_n}\sin(\sqrt{\lambda_n}x)=g(x)$$
that
$$\beta_k=\frac{2}{k\pi}\int_0^l\ g(x)\sin\left(\frac{\pi k}{l}x\right)dx.\quad (4.5.1.9)$$

Under the additional assumptions f ∈ C^4(0,l), g ∈ C^3(0,l) it follows that the right hand side of (4.5.1.7), where α_n, β_n are given by (4.5.1.8) and (4.5.1.9), respectively, defines a classical solution of (4.5.1.1)-(4.5.1.4), since under these assumptions the series for u and the formally differentiated series for u_t, u_tt, u_x, u_xx converge uniformly on 0 ≤ x ≤ l, 0 ≤ t ≤ T, 0 < T < ∞ fixed, see an exercise.
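The coefficient formulas (4.5.1.8), (4.5.1.9) are easy to evaluate numerically. The sketch below is an added illustration, not part of the original text; the plucked-string data f(x) = min(x, l−x), g = 0 and the truncation N = 50 are arbitrary choices.

```python
# Fourier coefficients (4.5.1.8), (4.5.1.9) and a partial sum of (4.5.1.7)
# for the plucked string f(x) = min(x, l-x), g = 0 on (0, l).
import numpy as np

l, N = 1.0, 50
x = np.linspace(0.0, l, 2001)
f = np.minimum(x, l - x)

alpha = np.array([2.0 / l * np.trapz(f * np.sin(np.pi * k * x / l), x)
                  for k in range(1, N + 1)])
beta = np.zeros(N)                       # beta_k = 0 since g = 0

def u(x_val, t):
    k = np.arange(1, N + 1)
    lam = (np.pi * k / l)**2
    return np.sum((alpha * np.cos(np.sqrt(lam) * t)
                   + beta * np.sin(np.sqrt(lam) * t)) * np.sin(np.sqrt(lam) * x_val))

print(u(0.5, 0.0))              # close to f(0.5) = 0.5
print(u(0.0, 0.3), u(l, 0.3))   # boundary condition (4.5.1.4): both vanish up to rounding
```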

Contributors and Attributions Prof. Dr. Erich Miersemann (Universität Leipzig) Integrated by Justin Marshall. This page titled 4.5.1: Oscillation of a String is shared under a not declared license and was authored, remixed, and/or curated by Erich Miersemann.


4.5.2: Oscillation of a Membrane

Let Ω ⊂ R^2 be a bounded domain. We consider the initial-boundary value problem
$$u_{tt}(x,t)=\triangle_x u\ \ \mbox{in}\ \Omega\times R^1,\quad (4.5.2.1)$$
$$u(x,0)=f(x),\ \ x\in\overline{\Omega},\quad (4.5.2.2)$$
$$u_t(x,0)=g(x),\ \ x\in\overline{\Omega},\quad (4.5.2.3)$$
$$u(x,t)=0\ \ \mbox{on}\ \partial\Omega\times R^1.\quad (4.5.2.4)$$
As in the previous subsection for the string, we make the ansatz (separation of variables)
$$u(x,t)=w(t)v(x),$$
which leads to the eigenvalue problem
$$-\triangle v=\lambda v\ \ \mbox{in}\ \Omega,\quad (4.5.2.5)$$
$$v=0\ \ \mbox{on}\ \partial\Omega.\quad (4.5.2.6)$$
Let λ_n be the eigenvalues of (4.5.2.5), (4.5.2.6) and v_n a complete associated orthonormal system of eigenfunctions. We assume Ω is sufficiently regular such that the eigenvalues are countable, which is satisfied in the following examples. Then the formal solution of the above initial-boundary value problem is
$$u(x,t)=\sum_{n=1}^\infty\left(\alpha_n\cos(\sqrt{\lambda_n}t)+\beta_n\sin(\sqrt{\lambda_n}t)\right)v_n(x),$$
where
$$\alpha_n=\int_\Omega\ f(x)v_n(x)\ dx,\qquad \beta_n=\frac{1}{\sqrt{\lambda_n}}\int_\Omega\ g(x)v_n(x)\ dx.$$

Note: In general, the eigenvalues of (4.5.2.5), (4.5.2.6) are not known explicitly. There are numerical methods to calculate these values. In some special cases, see the next examples, these values are known.

Examples

Example 4.5.2.1: Rectangle membrane

Let
$$\Omega=(0,a)\times(0,b).$$
Using the method of separation of variables, we find all eigenvalues of (4.5.2.5), (4.5.2.6), which are given by
$$\lambda_{kl}=\pi^2\left(\frac{k^2}{a^2}+\frac{l^2}{b^2}\right),\ \ k,l=1,2,\ldots,$$
and associated eigenfunctions, not normalized, are
$$u_{kl}(x)=\sin\left(\frac{\pi k}{a}x_1\right)\sin\left(\frac{\pi l}{b}x_2\right).$$
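For the unit square these eigenvalues can be compared with those of a discrete Laplacian. The sketch below is an added illustration, not part of the original text; it assumes a = b = 1, and the grid size m = 60 is an arbitrary choice.

```python
# Compare the smallest Dirichlet eigenvalues of the unit square, pi^2*(k^2 + l^2),
# with those of the standard 5-point finite-difference Laplacian.
import numpy as np

m = 60                                  # interior grid points per direction, h = 1/(m+1)
h = 1.0 / (m + 1)
T = (2.0 * np.eye(m) - np.diag(np.ones(m - 1), 1) - np.diag(np.ones(m - 1), -1)) / h**2
A = np.kron(np.eye(m), T) + np.kron(T, np.eye(m))   # discrete -Laplacian, Dirichlet b.c.

num = np.sort(np.linalg.eigvalsh(A))[:5]
exact = np.sort([np.pi**2 * (k**2 + l**2) for k in range(1, 4) for l in range(1, 4)])[:5]
print(np.round(num, 2))
print(np.round(exact, 2))               # 19.74, 49.35, 49.35, 78.96, 98.7
```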


Example 4.5.2.2: Disk membrane

Set
$$\Omega=\{x\in R^2:\ x_1^2+x_2^2<R^2\}.$$
In polar coordinates, the eigenvalue problem (4.5.2.5), (4.5.2.6) is given by
$$-\frac{1}{r}\left((ru_r)_r+\frac{1}{r}u_{\theta\theta}\right)=\lambda u\quad (4.5.2.6)$$
$$u(R,\theta)=0,\quad (4.5.2.7)$$
where u = u(r,θ) := v(r cos θ, r sin θ). We will find eigenvalues and eigenfunctions by separation of variables,
$$u(r,\theta)=v(r)q(\theta),$$
where v(R) = 0 and q(θ) is periodic with period 2π since u(r,θ) is single valued. This leads to
$$-\frac{1}{r}\left((rv')'q+\frac{1}{r}vq''\right)=\lambda vq.$$
Dividing by vq, provided vq ≠ 0, we obtain
$$-\frac{1}{r}\left(\frac{(rv'(r))'}{v(r)}+\frac{1}{r}\frac{q''(\theta)}{q(\theta)}\right)=\lambda,\quad (4.5.2.8)$$
which implies
$$\frac{q''(\theta)}{q(\theta)}=\mbox{const.}=:-\mu.$$
Thus, we arrive at the eigenvalue problem
$$-q''(\theta)=\mu q(\theta),\qquad q(\theta)=q(\theta+2\pi).$$
It follows that the eigenvalues μ are real and nonnegative. All solutions of the differential equation are given by
$$q(\theta)=A\sin(\sqrt{\mu}\,\theta)+B\cos(\sqrt{\mu}\,\theta),$$
where A, B are arbitrary real constants. From the periodicity requirement
$$A\sin(\sqrt{\mu}\,\theta)+B\cos(\sqrt{\mu}\,\theta)=A\sin(\sqrt{\mu}(\theta+2\pi))+B\cos(\sqrt{\mu}(\theta+2\pi))$$
and the identities
$$\sin x-\sin y=2\cos\frac{x+y}{2}\sin\frac{x-y}{2},\qquad \cos x-\cos y=-2\sin\frac{x+y}{2}\sin\frac{x-y}{2}$$
it follows
$$\sin(\sqrt{\mu}\pi)\left(A\cos(\sqrt{\mu}\,\theta+\sqrt{\mu}\pi)-B\sin(\sqrt{\mu}\,\theta+\sqrt{\mu}\pi)\right)=0,$$
which implies, since A, B are not zero simultaneously, because we are looking for q not identically zero,
$$\sin(\sqrt{\mu}\pi)\sin(\sqrt{\mu}\,\theta+\delta)=0$$
for all θ and a δ = δ(A,B,μ). Consequently the eigenvalues are
$$\mu_n=n^2,\ \ n=0,1,\ldots\ .$$

Inserting q''(θ)/q(θ) = −n^2 into (4.5.2.8), we obtain the boundary value problem
$$r^2v''(r)+rv'(r)+(\lambda r^2-n^2)v=0\ \ \mbox{on}\ (0,R)\quad (4.5.2.9)$$
$$v(R)=0\quad (4.5.2.10)$$
$$\sup_{r\in(0,R)}|v(r)|<\infty.\quad (4.5.2.11)$$
Set z = √λ r and v(r) = v(z/√λ) =: y(z), then, see (4.5.2.9),
$$z^2y''(z)+zy'(z)+(z^2-n^2)y(z)=0,\quad (4.5.2.12)$$
where z > 0. Solutions of this differential equation which are bounded at zero are the Bessel functions of first kind and n-th order, J_n(z). The eigenvalues follow from the boundary condition (4.5.2.10), i.e., from J_n(√λ R) = 0. Denote by τ_{nk} the zeros of J_n(z), then the eigenvalues of (4.5.2.6), (4.5.2.7) are
$$\lambda_{nk}=\left(\frac{\tau_{nk}}{R}\right)^2\quad (4.5.2.13)$$
and the associated eigenfunctions are
$$J_n(\sqrt{\lambda_{nk}}\,r)\sin(n\theta),\ n=1,2,\ldots,\qquad J_n(\sqrt{\lambda_{nk}}\,r)\cos(n\theta),\ n=0,1,2,\ldots\ .$$
Thus the eigenvalues λ_{0k} are simple and λ_{nk}, n ≥ 1, are double eigenvalues.

Remark. For tables with zeros of J_n(x) and for many more properties of Bessel functions see [Watson]. One has, in particular, the asymptotic formula
$$J_n(x)=\left(\frac{2}{\pi x}\right)^{1/2}\left(\cos(x-n\pi/2-\pi/4)+O\left(\frac{1}{x}\right)\right)\quad (4.5.2.14)$$
as x → ∞. It follows from this formula that there are infinitely many zeros of J_n(x).
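The zeros τ_{nk}, and hence the eigenvalues (4.5.2.13), are readily available numerically. The sketch below is an added illustration, not part of the original text; R = 1 and the displayed range are arbitrary choices, and jn_zeros is a SciPy routine for the positive zeros of J_n.

```python
# Smallest Dirichlet eigenvalues lambda_nk = (tau_nk / R)^2 of the unit disk,
# where tau_nk is the k-th positive zero of the Bessel function J_n.
import numpy as np
from scipy.special import jn_zeros

R = 1.0
eigs = []
for n in range(4):
    for k, tau in enumerate(jn_zeros(n, 3), start=1):
        mult = 1 if n == 0 else 2          # lambda_0k simple, lambda_nk (n >= 1) double
        eigs.append((round((tau / R)**2, 4), n, k, mult))

for lam, n, k, mult in sorted(eigs)[:6]:
    print(lam, "n =", n, "k =", k, "multiplicity", mult)
# The smallest eigenvalue is tau_01^2 with tau_01 approximately 2.4048.
```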

Contributors and Attributions Prof. Dr. Erich Miersemann (Universität Leipzig) Integrated by Justin Marshall. This page titled 4.5.2: Oscillation of a Membrane is shared under a not declared license and was authored, remixed, and/or curated by Erich Miersemann.


4.5.3: Inhomogeneous Wave Equations

Let Ω ⊂ R^n be a bounded and sufficiently regular domain. In this section we consider the initial-boundary value problem
$$u_{tt}=Lu+f(x,t)\ \ \mbox{in}\ \Omega\times R^1\quad (4.5.3.1)$$
$$u(x,0)=\phi(x),\ \ x\in\overline{\Omega}\quad (4.5.3.2)$$
$$u_t(x,0)=\psi(x),\ \ x\in\overline{\Omega}\quad (4.5.3.3)$$
$$u(x,t)=0\ \ \mbox{for}\ x\in\partial\Omega\ \mbox{and}\ t\in R^1,\quad (4.5.3.4)$$
where u = u(x,t), x = (x_1, ..., x_n), f, φ, ψ are given and L is an elliptic differential operator. Examples for L are:

1. L = ∂^2/∂x^2, oscillating string.
2. L = △_x, oscillating membrane.
3. $$Lu=\sum_{i,j=1}^n\frac{\partial}{\partial x_j}\left(a^{ij}(x)u_{x_i}\right),$$
where a^{ij} = a^{ji} are given sufficiently regular functions defined on Ω̄. We assume L is uniformly elliptic, that is, there is a constant ν > 0 such that
$$\sum_{i,j=1}^na^{ij}\zeta_i\zeta_j\ge\nu|\zeta|^2$$
for all x ∈ Ω and ζ ∈ R^n.
4. Let u = (u_1, ..., u_m) and
$$Lu=\sum_{i,j=1}^n\frac{\partial}{\partial x_j}\left(A^{ij}(x)u_{x_i}\right),$$
where A^{ij} = A^{ji} are given sufficiently regular (m × m)-matrices on Ω̄. We assume that L defines an elliptic system. An example for this case is linear elasticity.

Consider the eigenvalue problem
$$-Lv=\lambda v\ \ \mbox{in}\ \Omega\quad (4.5.3.5)$$
$$v=0\ \ \mbox{on}\ \partial\Omega.\quad (4.5.3.6)$$

Assume there are infinitely many eigenvalues
$$0<\lambda_1\le\lambda_2\le\ldots\ \to\infty.$$

4.E: Hyperbolic Equations (Exercises)

Solve the initial value problem for t > 0, x ∈ R^1, with initial data u(x,0) = x, u_t(x,0) = 0.
Hint: Find a solution of the differential equation independent of t, and transform the above problem into an initial value problem with a homogeneous differential equation by using this solution.


Q4.11
Find with the method of separation of variables nonzero solutions u(x,t), 0 ≤ x ≤ 1, 0 ≤ t < ∞, of
$$u_{tt}-u_{xx}+u=0,\quad (4.E.13)$$
such that u(0,t) = 0 and u(1,t) = 0 for all t ∈ [0, ∞).

Q4.12
Consider the equation
$$u_{tt}-c^2u_{xx}=\lambda^2 u,\ \ \lambda=\mbox{const.}\quad (4.E.14)$$
Prove that the ansatz
$$u(x,t)=f(x^2-c^2t^2)=f(s),\ \ s:=x^2-c^2t^2,\quad (4.E.15)$$
leads, after a suitable substitution, to Bessel's differential equation
$$z^2f''(z)+zf'(z)+(z^2-n^2)f=0,\ \ z>0,\quad (4.E.16)$$
with n = 0.
Remark. The above differential equation for u is the transformed telegraph equation (see Section 4.4).

Q4.13
Find the formula for the solution of the following Cauchy initial value problem u_{xy} = f(x,y), where S: y = ax + b, a > 0, and the initial conditions on S are given by
$$u=\alpha x+\beta y+\gamma,\ \ u_x=\alpha,\ \ u_y=\beta,$$
with a, b, α, β, γ constants.

Q4.14
Find all eigenvalues μ of
$$-q''(\theta)=\mu q(\theta),\qquad q(\theta)=q(\theta+2\pi).$$

Contributors and Attributions Prof. Dr. Erich Miersemann (Universität Leipzig) Integrated by Justin Marshall. This page titled 4.E: 4: Hyperbolic Equations (Exercises) is shared under a not declared license and was authored, remixed, and/or curated by Erich Miersemann. 4.E: Hyperbolic Equations (Exercises) has no license indicated.


Back Matter

Index

B
Bessel's differential equation
  4.4: A Method of Riemann

C
Capillary equation
  1.3: Partial Differential Equations

D
Diffusion in a tube
  6.4.1: Fourier's Method
Dirichlet integral
  1.3: Partial Differential Equations

F
Fourier's Method
  6.4.1: Fourier's Method

G
Green's Function
  7.4: Green's Function for Δ

I
Inhomogeneous Equations
  4.3: Inhomogeneous Equations

L
Laplace equation
  1.3: Partial Differential Equations

P
Parabolic Equations
  6: Parabolic Equations
Poisson's formula
  6.1: Poisson's Formula


CHAPTER OVERVIEW

5: Fourier Transform

Fourier's transform is an integral transform which can simplify investigations for linear differential or integral equations, since it transforms a differential operator into an algebraic equation.

Topic hierarchy
5.1: Definition and Properties
5.1.1: Pseudodifferential Operators
5.E: Fourier Transform (Exercises)

Thumbnail: The real and imaginary parts of the Fourier transform of a delayed pulse. The Fourier transform decomposes a function into eigenfunctions for the group of translations. (CC-BY-SA-4.0; Sławomir_Biały).

Contributors and Attributions Prof. Dr. Erich Miersemann (Universität Leipzig) Integrated by Justin Marshall. This page titled 5: Fourier Transform is shared under a not declared license and was authored, remixed, and/or curated by Erich Miersemann.


5.1: Definition and Properties

Definition. Let f ∈ C_0^s(R^n), s = 0, 1, .... The function f̂ defined by
$$\widehat{f}(\xi)=(2\pi)^{-n/2}\int_{R^n}\ e^{-i\xi\cdot x}f(x)\ dx,\quad (5.1.1)$$
where ξ ∈ R^n, is called Fourier transform of f, and the function g̃ given by
$$\widetilde{g}(x)=(2\pi)^{-n/2}\int_{R^n}\ e^{i\xi\cdot x}g(\xi)\ d\xi\quad (5.1.2)$$
is called inverse Fourier transform, provided the integrals on the right hand side exist.

From (5.1.1) it follows by integration by parts that differentiation of a function is changed to multiplication of its Fourier transform, or an analytical operation is converted into an algebraic operation. More precisely, we have

Proposition 5.1.
$$\widehat{D^\alpha f}(\xi)=i^{|\alpha|}\xi^\alpha \widehat{f}(\xi),\quad (5.1.3)$$
where |α| ≤ s.

The following proposition shows that the Fourier transform of f decreases rapidly for |ξ| → ∞, provided f ∈ C_0^s(R^n). In particular, the right hand side of (5.1.2) exists for g := f̂ if f ∈ C_0^{n+1}(R^n).

Proposition 5.2. Assume g ∈ C_0^s(R^n), then there is a constant M = M(n,s,g) such that
$$|\widehat{g}(\xi)|\le \frac{M}{(1+|\xi|)^s}.$$

Proof. Let ξ = (ξ_1, ..., ξ_n) be fixed and let j be an index such that |ξ_j| = max_k |ξ_k|. Then
$$|\xi|=\left(\sum_{k=1}^n\xi_k^2\right)^{1/2}\le\sqrt{n}\,|\xi_j|,\quad (5.1.4)$$
which implies
$$(1+|\xi|)^s=\sum_{k=0}^s\binom{s}{k}|\xi|^k\le 2^s\sum_{k=0}^s n^{k/2}|\xi_j|^k\le 2^s n^{s/2}\sum_{|\alpha|\le s}|\xi^\alpha|.$$
This inequality and Proposition 5.1 imply
$$(1+|\xi|)^s|\widehat{g}(\xi)|\le 2^s n^{s/2}\sum_{|\alpha|\le s}|(i\xi)^\alpha\widehat{g}(\xi)|\le 2^s n^{s/2}\sum_{|\alpha|\le s}\int_{R^n}\ |D^\alpha g(x)|\ dx=:M.$$
□

Theorem 5.1. fˆ = f and f˜ = f . Proof. See \cite{Yosida}, for example. We will prove the first assertion −n/2

(2π )



 e

R

ˆ f (ξ) dξ = f (x)

iξ⋅x

(5.1.5)

n

here. The proof of the other relation is left as an exercise. All integrals appearing in the following exist, see Proposition 5.2 and the special choice of g . (i) Formula ix⋅ξ ˆ  g(ξ)f (ξ)e  dξ = ∫

∫ R

n

 g ˆ(y)f (x + y) dy

R

(5.1.6)

n

follows by direct calculation: −n/2

∫ R

 g(ξ) ((2π )



n

R

−n/2

= (2π )



(∫

R

f (y) dy) e

ix⋅ξ

 dξ

 g(ξ)e

R

−iξ⋅(y−x)

 dξ) f (y) dy

n

n

ˆ(y)f (x + y) dy.  g

=∫ R

n

−ix⋅y

ˆ(y − x)f (y) dy  g

=∫ R

 e n

n

(ii) Formula −n/2

(2π )

∫ R

 e

−iy⋅ξ

g(εξ) dξ = ε

−n

ˆ(y/ε) g

(5.1.7)

n

for each ε > 0 follows after substitution z = εξ in the left hand side of (5.1.1). (iii) Equation ix⋅ξ ˆ  g(εξ)f (ξ)e  dξ = ∫

∫ R

n

ˆ(y)f (x + εy) dy  g

R

(5.1.8)

n

follows from (5.1.6) and (5.1.7). Set G(ξ) := g(εξ) , then (5.1.6) implies ix⋅ξ ˆ  G(ξ)f (ξ)e  dξ = ∫

∫ R

n

R

ˆ  G(y)f (x + y) dy.

(5.1.9)

n

Since, see (5.1.7), −n/2 ˆ G(y) = (2π ) ∫ R

= ε

−n

 e

−iy⋅ξ

g(εξ) dξ

n

ˆ(y/ε), g

5.1.2

https://math.libretexts.org/@go/page/2154

we arrive at ˆ  g(εξ)f (ξ) = ∫

∫ R

n

ε

R

−n

ˆ(y/ε)f (x + y) dy g

n

ˆ(z)f (x + εz) dz. g

= ∫ R

n

Letting ε → 0 , we get ix⋅ξ ˆ  f (ξ)e  dξ = f (x) ∫

g(0) ∫ R

n

 g ˆ(y) dy.

R

(5.1.10)

n

Set 2

g(x) := e

−|x| /2

,

(5.1.11)

then n/2

ˆ(y) dy = (2π )  g

∫ R

.

(5.1.12)

n

Since g(0) = 1 , the first assertion of Theorem 5.1 follows from (5.1.10) and (5.1.12). It remains to show (5.1.12). (iv) Proof of (5.1.12). We will show −n/2

g ˆ(y) : = (2π )

2



 e

R

−|x| /2

e

−ix⋅x

 dx

n

2

= e

−|y | /2

.

The proof of 2



 e

R

−|y | /2

n/2

 dy = (2π )

(5.1.13)

n

is left as an exercise. Since 2

|x| y x y – +i –)⋅( – +i – ) = −( 2 √2 √2 √2 √2 x

−(

2

|y| + ix ⋅ y −

)

(5.1.14)

2

it follows 2

∫ R

 e

−|x| /2

e

−ix⋅y

 dx = ∫

n

R

 e

−η

2

2

e

−|y | /2

2

= e

−|y | /2

∫ R

n/2

= 2

 dx

n

 e

−η

2

 dx

n

2

e

−|y | /2

∫ R

 e

−η

2

 dη

n

where η :=

x y – +i –. √2 √2

5.1.3

(5.1.15)

https://math.libretexts.org/@go/page/2154

Consider first the one-dimensional case. According to Cauchy's theorem we have ∮

 e

−η

2

 dη = 0,

(5.1.16)

C

where the integration is along the curve C which is the union of four curves as indicated in Figure ??? .

Figure 5.1.1: Proof of (5.1.12) Consequently ∫ C3

 e

−η

R

1

2

 dη =

2

 e – ∫ √2 −R

−x /2

 dx − ∫

 e

−η

2

 dη − ∫

C2

 e

−η

2

 dη.

(5.1.17)

C4

It follows lim ∫ R→∞

−  dη = √π

(5.1.18)

 dη = 0,   k = 2,  4.

(5.1.19)

 e

−η

2

C3

since lim ∫ R→∞

 e

−η

2

Ck

The case n > 1 can be reduced to the one-dimensional case as follows. Set η =

x y – +i – = (η1 , … , ηn ), √2 √2

(5.1.20)

where xl ηl =

From dη = dη

1

… dηl

yl

– +i –. √2 √2

(5.1.21)

and n

e

−η

n

2

=e

− ∑l=1 η

2

l

= ∏e

−η

2

l

(5.1.22)

l=1

it follows n 2

∫ R

 e

−eta

 dη = ∏ ∫

n

l=1

 e

−η

2

l

 dηl ,

(5.1.23)

Γl

where for fixed y $$


$$\Gamma_l=\left\{z\in{\mathbb C}:\ z=\frac{x_l}{\sqrt{2}}+i\frac{y_l}{\sqrt{2}},\ -\infty<x_l<\infty\right\}.$$

Since φ is continuous, there exists for given ε > 0 a δ = δ(ε) such that |φ(y) − φ(ξ)| < ε if |y − ξ| < 2δ. Set M := sup_{R^n}|φ(y)|. Then, see Proposition 6.1,
$$u(x,t)-\phi(\xi)=\int_{R^n}\ K(x,y,t)\left(\phi(y)-\phi(\xi)\right)dy.\quad (6.1.9)$$
It follows, if |x − ξ| < δ and t > 0, that
$$|u(x,t)-\phi(\xi)|\le\int_{B_\delta(x)}\ K(x,y,t)|\phi(y)-\phi(\xi)|\ dy+\int_{R^n\setminus B_\delta(x)}\ K(x,y,t)|\phi(y)-\phi(\xi)|\ dy$$
$$\le\int_{B_{2\delta}(x)}\ K(x,y,t)|\phi(y)-\phi(\xi)|\ dy+2M\int_{R^n\setminus B_\delta(x)}\ K(x,y,t)\ dy$$
$$\le\varepsilon\int_{R^n}\ K(x,y,t)\ dy+2M\int_{R^n\setminus B_\delta(x)}\ K(x,y,t)\ dy<2\varepsilon$$
if 0 < t ≤ t_0, t_0 sufficiently small. □

Remarks. 1. Uniqueness follows under the additional growth assumption
$$|u(x,t)|\le Me^{a|x|^2}\ \ \mbox{in}\ D_T,\quad (6.1.10)$$
where M and a are positive constants, see Proposition 6.2 below. In the one-dimensional case, one has uniqueness in the class u(x,t) ≥ 0 in D_T, see [10], pp. 222.

2. u(x,t) defined by Poisson's formula depends on all values φ(y), y ∈ R^n. That means, a perturbation of φ, even far from a fixed x, has influence on the value u(x,t). This means that heat travels with infinite speed, in contrast to the experience.

Contributors and Attributions Prof. Dr. Erich Miersemann (Universität Leipzig) Integrated by Justin Marshall. This page titled 6.1: Poisson's Formula is shared under a not declared license and was authored, remixed, and/or curated by Erich Miersemann.

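Poisson's formula for the heat equation and the remark on infinite propagation speed can be illustrated numerically. The sketch below is an added illustration, not part of the original text; it assumes n = 1, the data φ(y) = cos(y) with exact solution e^{−t}cos(x), and an ad hoc compactly supported φ for the second check, with integrals evaluated by the trapezoidal rule.

```python
# Poisson's formula u(x,t) = int K(x,y,t) phi(y) dy for the heat equation, n = 1.
import numpy as np

def heat_kernel(x, y, t):
    return np.exp(-(x - y)**2 / (4.0 * t)) / np.sqrt(4.0 * np.pi * t)

y = np.linspace(-40.0, 40.0, 8001)
x, t = 0.7, 0.5

# phi(y) = cos(y): the exact solution of u_t = u_xx is u = exp(-t)*cos(x)
u = np.trapz(heat_kernel(x, y, t) * np.cos(y), y)
print(u, np.exp(-t) * np.cos(x))        # the two values agree closely

# infinite speed of propagation: compactly supported phi gives u > 0 for every t > 0
phi = np.where(np.abs(y) <= 1.0, 1.0, 0.0)
print(np.trapz(heat_kernel(10.0, y, t) * phi, y))   # tiny, but strictly positive
```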

6.2: Inhomogeneous Heat Equation

Here we consider the initial value problem for u = u(x,t), u ∈ C^∞(R^n × R_+),
$$u_t-\triangle u=f(x,t)\ \ \mbox{in}\ x\in R^n,\ t\ge 0,\qquad u(x,0)=\phi(x),$$
where φ and f are given. From
$$\widehat{u_t-\triangle u}=\widehat{f(x,t)}$$
we obtain an initial value problem for an ordinary differential equation:
$$\frac{d\widehat{u}}{dt}+|\xi|^2\widehat{u}(\xi,t)=\widehat{f}(\xi,t),\qquad \widehat{u}(\xi,0)=\widehat{\phi}(\xi).$$
The solution is given by
$$\widehat{u}(\xi,t)=e^{-|\xi|^2 t}\widehat{\phi}(\xi)+\int_0^t\ e^{-|\xi|^2(t-\tau)}\widehat{f}(\xi,\tau)\ d\tau.$$
Applying the inverse Fourier transform and a calculation as in the proof of Theorem 5.1, step (vi), we get
$$u(x,t)=(2\pi)^{-n/2}\int_{R^n}\ e^{ix\cdot\xi}\left(e^{-|\xi|^2 t}\widehat{\phi}(\xi)+\int_0^t\ e^{-|\xi|^2(t-\tau)}\widehat{f}(\xi,\tau)\ d\tau\right)d\xi.$$
From the above calculation for the homogeneous problem and a calculation as in the proof of Theorem 5.1, step (vi), we obtain the formula
$$u(x,t)=\frac{1}{(2\sqrt{\pi t})^n}\int_{R^n}\ \phi(y)e^{-|y-x|^2/(4t)}\ dy+\int_0^t\frac{1}{(2\sqrt{\pi(t-\tau)})^n}\int_{R^n}\ f(y,\tau)\ e^{-|y-x|^2/(4(t-\tau))}\ dy\, d\tau.$$

This function u(x,t) is a solution of the above inhomogeneous initial value problem provided φ ∈ C(R^n) and sup_{R^n}|φ(x)| < ∞.

$$\Gamma(t)=\int_0^\infty e^{-\rho}\rho^{t-1}\ d\rho,\ \ t>0,$$
is the Gamma function.

Definition. A function
$$\gamma(x,y)=s(r)+\phi(x,y)$$
is called fundamental solution associated to the Laplace equation if φ ∈ C^2(Ω) and △_x φ = 0 for each fixed y ∈ Ω.

Remark. The fundamental solution γ satisfies for each fixed y ∈ Ω the relation
$$-\int_\Omega\ \gamma(x,y)\triangle_x\Phi(x)\ dx=\Phi(y)\ \ \mbox{for all}\ \Phi\in C_0^2(\Omega),$$
see an exercise. This formula follows from considerations similar to those of the next section. In the language of distributions, this relation can be written by definition as
$$-\triangle_x\gamma(x,y)=\delta(x-y),$$
where δ is the Dirac distribution, which is called the δ-function.


Contributors and Attributions Prof. Dr. Erich Miersemann (Universität Leipzig) Integrated by Justin Marshall. This page titled 7.1: Fundamental Solution is shared under a not declared license and was authored, remixed, and/or curated by Erich Miersemann.


7.2: Representation Formula In the following we assume that Ω, the function ϕ which appears in the definition of the fundamental solution and the potential function u considered are sufficiently regular such that the following calculations make sense, see [6] for generalizations. This is the case if Ω is bounded, ∂Ω is in C , ϕ ∈ C (Ω) for each fixed y ∈ Ω and u ∈ C (Ω) . 1

¯ ¯ ¯ ¯

2

¯ ¯ ¯ ¯

2

Figure 7.2.1: Notations to Green's identity Theorem 7.1. Let u be a potential function and γ a fundamental solution, then for each fixed y ∈ Ω ∂u(x) u(y) = ∫

(γ(x, y) ∂nx

∂Ω

Proof. Let B

ρ (y)

⊂Ω

be a ball. Set Ω

ρ (y)

∂γ(x, y) − u(x)

= Ω ∖ Bρ (y)

)  dSx .

∂nx

(7.2.1)

. See Figure 7.2.2 for notations.

Figure 7.2.2: Notations to Theorem 7.1 From Green's formula, for u,  v ∈ C

2

¯ ¯ ¯ ¯

(Ω)

, ∂u



 (v△u − u△v) dx = ∫

Ωρ (y)

  (v

∂v −u

)  dS

∂n

∂ Ωρ (y)

(7.2.2)

∂n

we obtain, if v is a fundamental solution and u a potential function, ∂u ∫

  (v

∂v −u

∂n

∂ Ωρ (y)

)  dS = 0.

(7.2.3)

∂n

Thus we have to consider ∂u ∫

 v

∂u  dS = ∫

∂n

∂ Ωρ (y)

 u

∂ Ωρ (y)

We estimate the integrals over ∂ B

ρ (y)

 dS + ∫ ∂n

∂Ω

∂v ∫

∂u

 v

 v

∂ Bρ (y)

 dS ∂n

∂v  dS = ∫

∂n

∂v

 u

 dS + ∫ ∂n

∂Ω

∂ Bρ (y)

 u

 dS. ∂n

:

(i) ∣

∂u

∣∫ ∣

∂ Bρ (y)

 v

∣  dS ∣ ≤ M ∫

∂n



 |v| dS

∂ Bρ (y)

≤ M (∫

n−1

 s(ρ) dS + C ωn ρ

),

∂ Bρ (y)

7.2.1

https://math.libretexts.org/@go/page/2163

where M = M (y) = sup |∂u/∂n|,   ρ ≤ ρ0 , Bρ (y) 0

C = C (y) =

sup

|ϕ(x, y)|.

x∈Bρ (y) 0

From the definition of s(ρ) we get the estimate as ρ → 0 ∂u ∫

O(ρ| ln ρ|)

 v

∂ Bρ (y)

n =2

 dS = {

(7.2.4)

∂n

O(ρ)

n ≥ 3.

(ii) Consider the case n ≥ 3 , then ∂v ∫

 u

1  dS =

∂n

∂ Bρ (y)

1 ∫

ωn

 u

∂ϕ

n−1

 dS + ∫

ρ

∂ Bρ (y)

 u

1 =

n−1

n−1



ωn ρ

 u dS + O(ρ

n−1

n−1

∈ ∂ Bρ (y)

 dS + O(ρ

),

∂ Bρ (y)

= u(x0 ) + O(ρ

0

n−1

u(x0 ) ∫

ωn ρ

for an x

)

∂ Bρ (y)

1 =

 dS ∂n

∂ Bρ (y)

).

.

Combining this estimate and (7.2.4), we obtain the representation formula of the theorem. □

Corollary. Set ϕ ≡ 0 and r = |x − y| in the representation formula of Theorem 7.1, then 1 u(y) =

∂u ∫



  (ln r

∂(ln r) −u

∂nx

∂Ω

1 u(y) =

1 ∫

(n − 2)ωn

∂Ω

 (

)  dSx ,   n = 2,

(7.2.5)

∂nx

n−2

r

2−n

∂(r

∂u −u ∂nx

∂nx

) )  dSx ,   n ≥ 3.

(7.2.6)

Contributors and Attributions Prof. Dr. Erich Miersemann (Universität Leipzig) Integrated by Justin Marshall. This page titled 7.2: Representation Formula is shared under a not declared license and was authored, remixed, and/or curated by Erich Miersemann.

7.2.2

https://math.libretexts.org/@go/page/2163

7.2.1: Conclusions from the Representation Formula Similar to the theory of functions of one complex variable, we obtain here results for harmonic functions from the representation formula, in particular from (7.2.5), (7.2.6). We recall that a function u is called harmonic if u ∈ C (Ω) and △u = 0 in Ω . 2

Proposition 7.1. Assume u is harmonic in Ω. Then u ∈ C



(Ω)

.}

Proof. Let Ω ⊂⊂ Ω be a domain such that y ∈ Ω . It follows from representation formulas (7.2.5), (7.2.6), where Ω := Ω , that D u(y) exist and are continuous for all l since one can change differentiation with integration in right hand sides of the representation formulas. 0

0

0

l



Remark. In fact, a function which is harmonic in Ω is even real analytic in Ω, see an exercise. Proposition 7.2 (Mean value formula for harmonic functions). Assume u is harmonic in Ω. Then for each B

ρ (x)

⊂⊂ Ω

1 u(x) =



n−1

ωn ρ

 u(y) dSy .

(7.2.1.1)

∂ Bρ (x)

Proof. Consider the case n ≥ 3 . The assertion follows from (7.2.6) where Ω := B

ρ (x)

1 ∫

∂u

n−2

r

∂ Bρ (x)

∂ny

1  dSy =

since r = ρ and

∂u

n−2



ρ

∂ Bρ (x)

∂ny

 dSy

1 =

ρn−2



 △u dy

Bρ (x)

= 0. □

We recall that a domain Ω ∈ R is called connected if Ω is not the union of two nonempty open subsets Ω ∩ Ω = ∅ . A domain in R is connected if and only if its path connected. n

Ω1

,

Ω2

such that

n

1

2

Proposition 7.3 (Maximum principle). Assume u is harmonic in a connected domain and achieves its supremum or infimum in Ω. Then u ≡ const. in Ω. Proof. Consider the case of the supremum. Let x

0

∈ Ω

such that

u(x0 ) = sup u(x) =: M .

(7.2.1.2)

Ω

Set and Ω

Ω1 := {x ∈ Ω :  u(x) = M }

2

:= {x ∈ Ω :  u(x) < M }

. The set Ω is not empty since x 1

0

∈ Ω1

. The set Ω is open since 2

. Consequently, Ω is empty if we can show that Ω is open. Let x ∈ Ω , then there is a ρ > 0 such that and u(x) = M for all x ∈ B (x) . If not, then there exists ρ > 0 and x ˆ such that |x ˆ − x| = ρ , 0 < ρ < ρ and u(x ˆ) < M . From the mean value formula, see Proposition 7.2, it follows u ∈ C

2

(Ω)

2

ρ

¯¯ ¯

1

1

0

¯¯¯¯¯¯¯¯¯¯¯¯¯ ¯ ¯¯ ¯ Bρ (x) 0

⊂Ω

¯¯ ¯

0

¯¯ ¯

0

1 M =

n−1

ωn ρ

M ∫

 u(x) dS
0\}, \] 2

Figure 7.3.1.1: Counterexample Assume u ∈ C

2

¯ ¯ ¯ ¯

(Ω) ∩ C (Ω ∖ {0})

is a solution of △u = 0  in Ω u = 0  on ∂Ω ∖ {0}.

This problem has solutions u ≡ 0 and u = Im(z + z

−1

)

, where z = x

1

+ i x2

. Concerning another example see an exercise.

In contrast to this behavior of the Laplace equation, one has uniqueness if $\triangle u=0$ is replaced by the minimal surface equation $$ \frac{\partial}{\partial x_1}\left(\frac{u_{x_1}}{\sqrt{1+|\nabla u|^2}}\right)+ \frac{\partial}{\partial x_2}\left(\frac{u_{x_2}}{\sqrt{1+|\nabla u|^2}}\right)=0. \]

Contributors and Attributions Prof. Dr. Erich Miersemann (Universität Leipzig) Integrated by Justin Marshall. This page titled 7.3.1: Boundary Value Problems: Dirichlet Problem is shared under a not declared license and was authored, remixed, and/or curated by Erich Miersemann.

7.3.1.1

https://math.libretexts.org/@go/page/2186

7.3.2: Boundary Value Problems: Neumann Problem

The Neumann problem (second boundary value problem) is to find a solution u ∈ C^2(Ω) ∩ C^1(Ω̄) of
$$\triangle u=0\ \ \mbox{in}\ \Omega\quad (7.3.2.1)$$
$$\frac{\partial u}{\partial n}=\Phi\ \ \mbox{on}\ \partial\Omega,\quad (7.3.2.2)$$
where Φ is given and continuous on ∂Ω.

Proposition 7.5. Assume Ω is bounded, then a solution to the Neumann problem in the class u ∈ C^2(Ω̄) is uniquely determined up to a constant.

Proof. Exercise. Hint: Multiply the differential equation △w = 0 by w and integrate the result over Ω.

Another proof under the weaker assumption u ∈ C^1(Ω̄) ∩ C^2(Ω) follows from the Hopf boundary point lemma, see Lecture Notes: Linear Elliptic Equations of Second Order, for instance.

follows from the Hopf boundary point lemma, see Lecture

Contributors and Attributions Prof. Dr. Erich Miersemann (Universität Leipzig) Integrated by Justin Marshall. This page titled 7.3.2: Boundary Value Problems: Neumann Problem is shared under a not declared license and was authored, remixed, and/or curated by Erich Miersemann.

7.3.2.1

https://math.libretexts.org/@go/page/2187

7.3.3: Boundary Value Problems: Mixed Boundary Value Problem

The mixed boundary value problem (third boundary value problem) is to find a solution u ∈ C^2(Ω) ∩ C^1(Ω̄) of
$$\triangle u=0\ \ \mbox{in}\ \Omega\quad (7.3.3.1)$$
$$\frac{\partial u}{\partial n}+hu=\Phi\ \ \mbox{on}\ \partial\Omega,\quad (7.3.3.2)$$
where Φ and h are given and continuous on ∂Ω.

Proposition 7.6. Assume Ω is bounded and sufficiently regular, then a solution to the mixed problem is uniquely determined in the class u ∈ C^2(Ω̄), provided h(x) ≥ 0 on ∂Ω and h(x) > 0 for at least one point x ∈ ∂Ω.

Proof. Exercise. Hint: Multiply the differential equation △w = 0 by w and integrate the result over Ω.

Proof. Exercise. Hint: Multiply the differential equation △w = 0 by w and integrate the result over Ω.

Contributors and Attributions Prof. Dr. Erich Miersemann (Universität Leipzig) Integrated by Justin Marshall. This page titled 7.3.3: Boundary Value Problems: Mixed Boundary Value Problem is shared under a not declared license and was authored, remixed, and/or curated by Erich Miersemann.

7.3.3.1

https://math.libretexts.org/@go/page/2188

7.4: Green's Function for Δ Δ Theorem 7.1 says that each harmonic function satisfies ∂u(y) u(x) = ∫

(γ(y, x)

∂γ(y, x) − u(y)

∂ny

∂Ω

∂ny

)  dSy ,

(7.4.1)

where γ(y, x) is a fundamental solution. In general, u does not satisfies the boundary condition in the above boundary value problems. Since γ = s + ϕ , see Section 7.2, where ϕ is an arbitrary harmonic function for each fixed x, we try to find a ϕ such that u satisfies also the boundary condition. Consider the Dirichlet problem, then we look for a ϕ such that γ(y, x) = 0,   y ∈ ∂Ω,  x ∈ Ω.

(7.4.2)

Then ∂γ(y, x) u(x) = − ∫

 

u(y) dSy ,   x ∈ Ω.

∂ny

∂Ω

(7.4.3)

Suppose that u achieves its boundary values Φ of the Dirichlet problem, then ∂γ(y, x) u(x) = − ∫

  ∂ny

∂Ω

Φ(y) dSy ,

(7.4.4)

We claim that this function solves the Dirichlet problem (7.3.1.1), (7.3.1.2). A function γ(y, x) which satisfies (7.4.2), and some additional assumptions, is called Green's function. More precisely, we define a Green function as follows. ¯ ¯ ¯ ¯

Definition. A function G(y, x), y,  x ∈ Ω , x ≠ y , is called Green function associated to Ω and to the Dirichlet problem (7.3.1.1), (7.3.1.2) if for fixed x ∈ Ω, that is we consider G(y, x) as a function of y , the following properties hold: (i) G(y, x) ∈ C

2

¯ ¯ ¯ ¯

(Ω ∖ {x}) ∩ C (Ω ∖ {x})

(ii) G(y, x) − s(|x − y|) ∈ C

2

,△

¯ ¯ ¯ ¯

(Ω) ∩ C (Ω)

y G(y,

x) = 0,   x ≠ y

.

.

(iii) G(y, x) = 0 if y ∈ ∂Ω , x ≠ y . Remark. We will see in the next section that a Green function exists at least for some domains of simple geometry. Concerning the existence of a Green function for more general domains see [13]. It is an interesting fact that we get from (i)-(iii) of the above definition two further important properties, provided sufficiently regular and connected.

Ω

is bounded,

Proposition 7.7. A Green function has the following properties. In the case n = 2 we assume {\rm diam} Ω < 1 . (A) G(x, y) = G(y, x)\ \ (symmetry). (B) 0 < G(x, y) < s(|x − y|),   x,  y ∈ Ω,  x ≠ y . Proof. (A) Let (2)

G(y, x

)

(1)

x

(2)

,  x

∈ Ω

. Set ¯ ¯¯¯¯ ¯

are harmonic in Ω ∖ (B

1

(i)

Bi = Bρ (x ¯ ¯¯¯¯ ¯

∪ B2 )

)

,

i = 1,  2

. We assume

¯ ¯ ¯ ¯ ¯ ¯

Bi ⊂ Ω

and

B1 ∩ B2 = ∅

. Since

(1)

G(y, x

)

and

we obtain from Green's identity, see Figure 7.4.1 for notations,

7.4.1

https://math.libretexts.org/@go/page/2165

Figure 7.4.1: Proof of Proposition 7.7 (1)

0 = ∫

(G(y, x ¯ ¯ ¯ ¯ ¯ ¯ ¯



(2)

)

G(y, x

)

∂ny

¯ ¯ ¯ ¯ ¯ ¯ ¯

∂(Ω∖( B1 ∪B2 ))

(2)

−G(y, x



(1)

)

G(y, x ∂ny

(1)

= ∫

(G(y, x



(1)



G(y, x

(G(y, x

(G(y, x

(2)

G(y, x

(2)

) − G(y, x



(1)

)

(2)

G(y, x

(2)

) − G(y, x

G(y, x



(1)

)

∂ny

∂B2

)) dSy

∂ny

∂ )

(1)

G(y, x

∂ny (1)



∂ ) ∂ny

∂ )

∂B1

+

(2)

) − G(y, x

∂ny

∂Ω

+

(2)

)

))dSy

G(y, x ∂ny

)) dSy

)) dSy .

The integral over ∂Ω is zero because of property (iii) of a Green function, and ∫

(1)

 (G(y, x

∂ )

(2)

G(y, x

) −

(2)

G(y, x

(1)

(1)

 (G(y, x

∂ )

(2)

G(y, x

) −

(2)

,x (2)

G(y, x

∂ny

∂B2

(1)

G(y, x ∂ny

→ G(x



∂ )

∂ny

∂B1

), ∂

(1)

)

G(y, x ∂ny

(2)

→ −G(x

))dSy

(1)

,x

)) dSy

)

as ρ → 0 . This follows by considerations as in the proof of Theorem 7.1. (B) Since G(y, x) = s(|x − y|) + ϕ(y, x)

(7.4.5)

and G(y, x) = 0 if y ∈ ∂Ω and x ∈ Ω we have for y ∈ ∂Ω ϕ(y, x) = −s(|x − y|).

From the definition of s(|x − y|) it follows that principle implies that ϕ(y, x) < 0 for all y,  x ∈ Ω. Consequently

ϕ(y, x) < 0

if

y ∈ ∂Ω

. Thus, since

(7.4.6)

△y ϕ = 0

in

Ω

, the maximum-minimum

G(y, x) < s(|x − y|),   x,  y ∈ Ω,  x ≠ y.

(7.4.7)

G(y, x) > 0,   x,  y ∈ Ω,  x ≠ y.

(7.4.8)

It remains to show that

Fix x ∈ Ω and let B ρ, 0 < ρ < ρ ,

ρ (x)

be a ball such that B

ρ (x)

⊂Ω

for all 0 < ρ < ρ . There is a sufficiently small ρ 0

0

>0

such that for each

0

7.4.2

https://math.libretexts.org/@go/page/2165

¯ ¯¯¯¯¯¯¯¯¯¯¯ ¯

G(y, x) > 0  for all y ∈ Bρ (x),  x ≠ y,

(7.4.9)

see property (iii) of a Green function. Since ¯ ¯¯¯¯¯¯¯¯¯¯¯ ¯

△y G(y, x) = 0  in Ω ∖ Bρ (x) G(y, x) > 0  if y ∈ ∂ Bρ (x) G(y, x) = 0  if y ∈ ∂Ω

it follows from the maximum-minimum principle that $$ G(y,x)>0\ \ \mbox{on}\ \Omega\setminus\overline{B_\rho(x)}. \] □

Contributors and Attributions Prof. Dr. Erich Miersemann (Universität Leipzig) This page titled 7.4: Green's Function for Miersemann.

Δ

is shared under a not declared license and was authored, remixed, and/or curated by Erich

7.4.3

https://math.libretexts.org/@go/page/2165

7.4.1: Green's Function for a Ball If Ω = B

R (0)

is a ball, then Green's function is explicitly known.

Let Ω = B (0) be a ball in R with radius R and the center at the origin. Let x,  y ∈ B the sphere ∂ B (0) , that is, in particular |y||y | = R , see Figure 7.4.1.1 for notations. n

R (0)

R



and let y the reflected point of y on ′

2

R

Figure 7.4.1.1: Reflection on ∂ B

R (0)

Set $$G(x,y)=s(r)-s\left(\frac{\rho}{R}r_1\right),\] where s is the singularity function of Section 7.1, r = |x − y| and $$\rho^2=\sum_{i=1}^ny_i^2,\ \ \ r_1=\sum_{i=1}^n\left(x_i-\frac{R^2}{\rho^2}y_i\right)^2.\] This function G(x, y) satisfies (i)-(iii) of the definition of a Green function. We claim that $$u(x)=-\int_{\partial B_R(0)} \frac{\partial}{\partial n_y}G(x,y)\Phi\ dS_y\] is a solution of the Dirichlet problem (7.3.1.1), (7.3.1.2). This formula is also true for a large class of domains Ω ⊂ R , see [13]. n

Lemma. $$-\frac{\partial}{\partial n_y}G(x,y)\bigg|_{|y|=R}=\frac{1}{R\omega_n}\frac{R^2-|x|^2}{|y-x|^n}.\] Proof. Exercise. Set 1

2

R

H (x, y) = Rωn

2

− |x |

n

,

(7.4.1.1)

|y − x|

which is called Poisson's kernel. Theorem 7.2. Assume Φ ∈ C (∂Ω) . Then u(x) = ∫

 H (x, y)Φ(y) dSy

(7.4.1.1)

∂ BR (0)

is the solution of the first boundary value problem (7.3.1.1), (7.3.1.2) in the class C

2

¯ ¯ ¯ ¯

(Ω) ∩ C (Ω)

.

Proof. The proof follows from following properties of H (x, y): i. H (x, y) ∈ C ,   |y| = R,  |x| < R,  x ≠ y , ii. △ H (x, y) = 0,   |x| < R,  |y| = R , iii. ∫  H (x, y) dS = 1,   |x| < R , ∞

x

y

∂ BR (0)

iv. H (x, y) > 0,   |y| = R,  |x| < R , v. Fix ζ ∈ ∂ B (0) and δ > 0 , then lim R

x→ζ,|x| δ

.

(i), (iv) and (v) follow from the definition (7.4.1.1) of H and (ii) from (7.4.1.1) or from $$H=-\frac{\partial G(x,y)}{\partial n_y}\bigg|_{y\in\partial B_R(0)},\] G

harmonic and G(x, y) = G(y, x).

Property (iii) is a consequence of formula $$u(x)=\int_{\partial B_R(0)}\ H(x,y)u(y)\ dS_y,\]

7.4.1.1

https://math.libretexts.org/@go/page/2189

for each harmonic function u, see calculations to the representation formula above. We obtain (ii) if we set u ≡ 1 . It remains to show that u, given by Poisson's formula, is in ζ ∈ ∂ B (0) and let x ∈ B (0). Then R

¯¯¯¯¯¯¯¯¯¯¯¯ ¯

C (BR (0))

and that

u

achieves the prescribed boundary values. Fix

R

u(x) − Φ(ζ) = ∫

 H (x, y) (Φ(y) − Φ(ζ))  dSy

∂ BR (0)

= I1 + I2 ,

where I1 = ∫

 H (x, y) (Φ(y) − Φ(ζ))  dSy

∂ BR (0), |y−ζ| 0 there is a δ = δ(ϵ) > 0 such that $$|\Phi(y)-\Phi(\zeta)| 0 such that

R (0) ′

1|

≤ϵ

because of (iii) and (iv). Set M

= max∂ BR (0) |ϕ|

. From (v) we conclude

$$H(x,y) δ

, see Figure 7.4.1.2 for notations.

Figure 7.4.1.2: Proof of Theorem 7.2 Thus |I

2|