A beginner's guide to the theory of viscosity solutions [2 ed.]


A Beginner's Guide to the Theory of Viscosity Solutions

Shigeaki Koike∗

2nd edition

June 28, 2012



Department of Mathematics, Saitama University, 255 Shimo-Okubo, Sakura, Saitama, 338-8570 JAPAN


Preface for the 2nd edition The first version of this text has been out of print since 2009. It seems that there is no hope of publishing the second version through the publisher. Therefore, I have decided to put the files of the second version on my home page. Although I corrected many errors in the first version, there must still be some mistakes in this version. I would be glad if the reader would kindly inform me of any errors and typos.

Acknowledgment for the 2nd edition I would like to thank Professors H. Ishii, K. Ishii, T. Nagasawa, M. Ohta, T. Ohtsuka, and graduate students, T. Imai, K. Kohsaka, H. Mitake, S. Nakagawa, T. Nozokido for pointing out numerous errors in the First edition.

26 August 2010


Shigeaki Koike

Preface This book was originally written in Japanese for undergraduate students in the Department of Mathematics of Saitama University. In fact, the first hand-written draft was prepared for a series of lectures on viscosity solution theory for undergraduate students at Ehime University and Hokkaido University. The aim here is to present a brief introduction to the theory of viscosity solutions for students who have knowledge of Advanced Calculus (i.e. differentiation and integration of functions of several variables) and, hopefully, a little of Lebesgue Integration and Functional Analysis. Since this is written for undergraduate students who are not necessarily excellent, I try to give "easy" proofs throughout this book. Thus, if you do not feel any difficulty in reading the User's Guide [6], you should try to read that one instead. I also try not only to present the viscosity solution theory but also to mention some related "classical" results.

Our plan for this book is as follows: We begin with our motivation in section 1. Section 2 introduces the definition of viscosity solutions and their properties. In section 3, we first show "classical" comparison principles and then extend them to viscosity solutions of first- and second-order PDEs, separately. We establish two kinds of existence results via Perron's method and representation formulas for Bellman and Isaacs equations in section 4. We discuss boundary value problems for viscosity solutions in section 5. Section 6 is a short introduction to the Lp-viscosity solution theory, on which we have an excellent book [4]. In the Appendix, which is the hardest part, we give proofs of fundamental propositions.

In order to learn more on viscosity solutions, I give a list of "books": A popular survey paper [6] by Crandall-Ishii-Lions on the theory of viscosity solutions of second-order, degenerate elliptic PDEs is still a good choice for undergraduate students to learn from first.
However, in my experience, it seems a bit hard for average undergraduate students to understand. Bardi-Capuzzo Dolcetta's book [1] contains lots of information on viscosity solutions for first-order PDEs (Hamilton-Jacobi equations), while Fleming-Soner's [10] complements it with topics on second-order (degenerate) elliptic PDEs with applications to stochastic control problems. Barles' book [2] is also nice for learning his original techniques and the French language simultaneously!

I have been informed that Ishii will write a book [15] in Japanese on viscosity solutions in the near future, which must be more advanced than this one. For an important application of the viscosity solution theory, we refer to Giga's [12] on curvature flow equations. Also, I recommend the reader consult the Lecture Notes [3] (Bardi-Crandall-Evans-Soner-Souganidis), not only for various applications but also for a "friendly" introduction by Crandall, who first introduced the notion of viscosity solutions with P.-L. Lions in the early 80s. If the reader is interested in section 6, I recommend attacking Caffarelli-Cabré's [4]. As for general PDE theory, although there are so many books on PDEs, I only refer to my favorite ones: Gilbarg-Trudinger's [13] and Evans' [8]. Also, as a textbook for undergraduate students, Han-Lin's short lecture notes [14] are a good choice. Since this is a textbook, we refer the reader to original papers only when they are not covered by the books in our references.

Acknowledgment First of all, I would like to express my thanks to Professors H. Morimoto and Y. Tonegawa for giving me the opportunity to deliver a series of lectures at their universities. I would also like to thank Professors K. Ishii, T. Nagasawa, and a graduate student, K. Nakagawa, for their suggestions on the first draft. I wish to express my gratitude to Professor H. Ishii for having given me an enormous supply of ideas since 1980. I also wish to thank the reviewer for several important suggestions. My final thanks go to Professor T. Ozawa for recommending that I publish this manuscript. He kindly suggested that I change the original Japanese title ("A secret club on viscosity solutions").


Contents

1  Introduction
1.1  From classical solutions to weak solutions
1.2  Typical examples of weak solutions
1.2.1  Burgers' equation
1.2.2  Hamilton-Jacobi equation
2  Definition
2.1  Vanishing viscosity method
2.2  Equivalent definitions
3  Comparison principle
3.1  Classical comparison principle
3.1.1  Degenerate elliptic PDEs
3.1.2  Uniformly elliptic PDEs
3.2  Comparison principle for first-order PDEs
3.3  Extension to second-order PDEs
3.3.1  Degenerate elliptic PDEs
3.3.2  Remarks on the structure condition
3.3.3  Uniformly elliptic PDEs
4  Existence results
4.1  Perron's method
4.2  Representation formulas
4.2.1  Bellman equation
4.2.2  Isaacs equation
4.3  Stability
5  Generalized boundary value problems
5.1  Dirichlet problem
5.2  State constraint problem
5.3  Neumann problem
5.4  Growth condition at |x| → ∞
6  Lp-viscosity solutions
6.1  A brief history
6.2  Definition and basic facts
6.3  Harnack inequality
6.3.1  Linear growth
6.3.2  Quadratic growth
6.4  Hölder continuity estimates
6.5  Existence result
7  Appendix
7.1  Proof of Ishii's lemma
7.2  Proof of the ABP maximum principle
7.3  Proof of existence results for Pucci equations
7.4  Proof of the weak Harnack inequality
7.5  Proof of the local maximum principle
References
Notation Index
Index

1  Introduction

Throughout this book, we will work in Ω (except in sections 4.2 and 5.4), where Ω ⊂ R^n is open and bounded. We denote by ⟨·, ·⟩ the standard inner product in R^n, and set |x| = √⟨x, x⟩ for x ∈ R^n. We use the standard notation for open balls: for r > 0 and x ∈ R^n,

B_r(x) := {y ∈ R^n | |x − y| < r},  and  B_r := B_r(0).

For a function u : Ω → R, we denote its gradient and Hessian matrix at x ∈ Ω, respectively, by

$$Du(x) := \left( \frac{\partial u(x)}{\partial x_1}, \ldots, \frac{\partial u(x)}{\partial x_n} \right),$$

and

$$D^2u(x) := \begin{pmatrix}
\dfrac{\partial^2 u(x)}{\partial x_1^2} & \cdots & \dfrac{\partial^2 u(x)}{\partial x_1\,\partial x_n}\\[6pt]
\vdots & \dfrac{\partial^2 u(x)}{\partial x_i\,\partial x_j} & \vdots\\[6pt]
\dfrac{\partial^2 u(x)}{\partial x_n\,\partial x_1} & \cdots & \dfrac{\partial^2 u(x)}{\partial x_n^2}
\end{pmatrix},$$

where the (i, j)-entry of D²u(x) (i-th row, j-th column) is ∂²u(x)/∂xᵢ∂xⱼ.
Also, S^n denotes the set of all real-valued n × n symmetric matrices. Note that if u ∈ C²(Ω), then D²u(x) ∈ S^n for x ∈ Ω. We recall the standard ordering on S^n:

$$X \le Y \iff \langle X\xi, \xi\rangle \le \langle Y\xi, \xi\rangle \quad \text{for all } \xi\in\mathbb{R}^n.$$

We will also use the following notation in sections 6 and 7: for ξ = ᵗ(ξ₁, …, ξₙ), η = ᵗ(η₁, …, ηₙ) ∈ R^n, we denote by ξ ⊗ η the n × n matrix whose (i, j)-entry is ξᵢηⱼ for 1 ≤ i, j ≤ n:

$$\xi\otimes\eta = \begin{pmatrix}
\xi_1\eta_1 & \cdots & \xi_1\eta_n\\
\vdots & \xi_i\eta_j & \vdots\\
\xi_n\eta_1 & \cdots & \xi_n\eta_n
\end{pmatrix}.$$

We are concerned with general second-order partial differential equations (PDEs for short):

F(x, u(x), Du(x), D²u(x)) = 0 in Ω.    (1.1)

We suppose (except in several sections) that F : Ω × R × R^n × S^n → R is continuous with respect to all variables.

1.1  From classical solutions to weak solutions

As the first example of a PDE, we present the Laplace equation:

−Δu = 0 in Ω.    (1.2)

Here, we define Δu := trace(D²u). In the literature of viscosity solution theory, we prefer to have the minus sign in front of Δ. Of course, since we do not require any boundary condition yet, all polynomials of degree one are solutions of (1.2). In many textbooks (particularly those for engineers), under certain boundary conditions, we learn how to solve (1.2) when Ω has some special shape such as a cube, a ball, the half-space, or the whole space R^n. Here, "solve" means that we find an explicit formula for u using elementary functions such as polynomials, trigonometric functions, etc. However, the study of (1.2) in such special domains is not always applicable: for instance, solutions of (1.2) may represent the density of a gas in a bottle, which is neither a ball nor a cube. Unfortunately, in general domains, it seems impossible to find formulas for solutions u in terms of elementary functions. Moreover, in order to cover problems arising in physics, engineering, and finance, we will have to study PDEs more general and complicated than (1.2). Thus, we have to deal with the general PDE (1.1) in general domains. If we give up having formulas for solutions of (1.1), how do we investigate the PDE (1.1)? In other words, what is the right question in the study of PDEs? In the literature of PDE theory, the most basic questions are as follows:


(1) Existence: Does there exist a solution?
(2) Uniqueness: Is it the only solution?
(3) Stability: If the PDE changes a little, does the solution change a little?

The importance of the existence of solutions is clear since, otherwise, the study of the PDE could be useless. To explain the significance of the uniqueness of solutions, let us recall the reason why we study PDEs. Usually, we discuss PDEs or their solutions to understand specific phenomena in nature, engineering, economics, etc. In particular, people working in applications want to know how the solution looks, moves, behaves, etc. For this purpose, it might be powerful to use numerical computations. However, numerical analysis only shows us "approximate" shapes, movements, etc. Thus, if there is more than one solution, we do not know which one is approximated by the numerical solution. Also, if the stability of solutions fails, we cannot predict what will happen from numerical experiments, even when the uniqueness of solutions holds true.

Now, let us come back to the most essential question: What is the "solution" of a PDE? For example, it is natural to call a function u : Ω → R a solution of (1.1) if the first and second derivatives, Du(x) and D²u(x), exist for all x ∈ Ω, and (1.1) is satisfied at each x ∈ Ω when we plug them into the left hand side of (1.1). Such a function u will be called a classical solution of (1.1). Unfortunately, however, it is difficult to find a classical solution directly, because we have to verify that it is sufficiently differentiable and that it satisfies the equality (1.1) simultaneously. Instead of finding a classical solution directly, we choose the following strategy:

(A) Find a candidate for the classical solution,
(B) Check the differentiability of the candidate.

In the standard books, the candidate for a classical solution is called a weak solution; if the weak solution has first and second derivatives, then

it becomes a classical solution. In the literature, showing the differentiability of solutions is called the study of their regularity. Thus, with this terminology, we may rewrite the above in mathematical terms:

(A) Existence of weak solutions,
(B) Regularity of weak solutions.

However, when we cannot expect classical solutions of a PDE to exist, what is the right candidate for a solution? We will call a function the candidate for solutions of a PDE if it is a "unique" and "stable" weak solution under a suitable setting. In section 2, we will define such a candidate, named "viscosity solutions", for a large class of PDEs, and in subsequent sections, we will extend the definition to more general (possibly discontinuous) functions and PDEs. In the next subsection, we give a brief history of "weak solutions" to recall what was known before the birth of viscosity solutions.

1.2  Typical examples of weak solutions

In this subsection, we give two typical examples of PDEs to derive two kinds of weak solutions which are unique and stable.

1.2.1  Burgers' equation

We consider Burgers' equation, which is a model PDE in Fluid Mechanics:

$$\frac{\partial u}{\partial t} + \frac{1}{2}\frac{\partial (u^2)}{\partial x} = 0 \quad \text{in } \mathbb{R}\times(0,\infty) \tag{1.3}$$

under the initial condition

$$u(x,0) = g(x) \quad \text{for } x\in\mathbb{R}, \tag{1.4}$$

where g is a given function. In general, we cannot find classical solutions of (1.3)-(1.4) even if g is smooth enough; see [8] for instance. In order to look for the appropriate notion of weak solutions, we first introduce a function space C₀¹(R × [0, ∞)) as a "test function space":

$$C_0^1(\mathbb{R}\times[0,\infty)) := \left\{ \phi\in C^1(\mathbb{R}\times[0,\infty)) \;\middle|\; \text{there is } K>0 \text{ such that } \operatorname{supp}\phi \subset [-K,K]\times[0,K] \right\}.$$

Here and later, we denote by supp ϕ the following set:

$$\operatorname{supp}\phi := \overline{\{(x,t)\in\mathbb{R}\times[0,\infty) \mid \phi(x,t)\neq 0\}}.$$

Suppose that u satisfies (1.3). Multiplying (1.3) by ϕ ∈ C₀¹(R × [0, ∞)) and then using integration by parts, we have

$$\int_{\mathbb{R}}\int_0^\infty \left( u\frac{\partial\phi}{\partial t} + \frac{u^2}{2}\frac{\partial\phi}{\partial x} \right)(x,t)\,dt\,dx + \int_{\mathbb{R}} u(x,0)\phi(x,0)\,dx = 0.$$

Since there are no derivatives of u in the above, this equality makes sense if u ∈ ∪_{K>0} L¹((−K, K) × (0, K)). Hence, we may adopt the following property as the definition of weak solutions of (1.3)-(1.4):

$$\int_{\mathbb{R}}\int_0^\infty \left( u\frac{\partial\phi}{\partial t} + \frac{u^2}{2}\frac{\partial\phi}{\partial x} \right)(x,t)\,dt\,dx + \int_{\mathbb{R}} g(x)\phi(x,0)\,dx = 0 \quad \text{for all } \phi\in C_0^1(\mathbb{R}\times[0,\infty)).$$

We often call this a weak solution in the distribution sense. As you have noticed, we derived this notion by an essential use of integration by parts. We say that a PDE is in divergence form when we can adopt the notion of weak solutions in the distribution sense. When the PDE is not in divergence form, we say that it is in nondivergence form.

We note that the solution of (1.3) may have singularities even though the initial value g belongs to C^∞, by an observation via the method of characteristics. From the definition of weak solutions, we can derive the so-called Rankine-Hugoniot condition on the set of singularities. On the other hand, unfortunately, we cannot show the uniqueness of weak solutions of (1.3)-(1.4) in general, while we know the famous Lax-Oleinik formula (see [8] for instance), which gives the "expected" solution. In order to obtain the uniqueness of weak solutions, we add to the definition the following property (called the "entropy condition"), which holds for the expected solution given by the Lax-Oleinik formula: there is C > 0 such that

$$u(x+z,t) - u(x,t) \le \frac{Cz}{t} \quad \text{for all } (x,t,z)\in\mathbb{R}\times(0,\infty)\times(0,\infty).$$

We call u an entropy solution of (1.3) if it is a weak solution satisfying this inequality. It is also known that such a weak solution has a certain stability property.
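As a quick illustration (my own sketch, not from the text): for the Riemann data g(x) = 0 for x < 0 and g(x) = 1 for x ≥ 0, the entropy solution of (1.3) is the rarefaction wave u(x, t) = 0 for x ≤ 0, x/t for 0 < x < t, and 1 for x ≥ t, while the "expansion shock" jumping from 0 to 1 along x = t/2 is a weak solution that should fail the entropy condition. A minimal numerical check with C = 1:

```python
# Sketch: entropy condition u(x+z,t) - u(x,t) <= C*z/t for Burgers' equation.
# The rarefaction wave (entropy solution) satisfies it with C = 1;
# the expansion shock (a weak but non-entropy solution) violates it.

def rarefaction(x, t):
    """Entropy solution for g = 0 (x < 0), 1 (x >= 0)."""
    if x <= 0.0:
        return 0.0
    if x >= t:
        return 1.0
    return x / t

def expansion_shock(x, t):
    """A weak solution with the same data that fails the entropy condition."""
    return 0.0 if x < 0.5 * t else 1.0

def max_violation(u, t, C=1.0, xs=None, zs=None):
    """Largest value of u(x+z,t) - u(x,t) - C*z/t over sample points."""
    xs = xs or [i * 0.01 - 1.0 for i in range(201)]
    zs = zs or [j * 0.01 + 0.01 for j in range(100)]
    return max(u(x + z, t) - u(x, t) - C * z / t for x in xs for z in zs)

t = 1.0
print(max_violation(rarefaction, t))      # about 0 (<= 0 up to rounding): holds
print(max_violation(expansion_shock, t))  # about 0.99 > 0: fails
```

Note that the one-sided bound Cz/t only forbids steep upward jumps; downward jumps (genuine shocks) remain admissible.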

We note that this entropy solution satisfies the above-mentioned important properties: existence, uniqueness, and stability. Thus, it must be a right definition of weak solutions of (1.3)-(1.4).

1.2.2  Hamilton-Jacobi equations

Next, we shall consider general Hamilton-Jacobi equations, which arise in Optimal Control and Classical Mechanics:

$$\frac{\partial u}{\partial t} + H(Du) = 0 \quad \text{in } \mathbb{R}^n\times(0,\infty) \tag{1.5}$$

under the same initial condition (1.4). In this example, we suppose that H : R^n → R is convex, i.e.

$$H(\theta p + (1-\theta)q) \le \theta H(p) + (1-\theta)H(q) \tag{1.6}$$

for all p, q ∈ R^n, θ ∈ [0, 1].

Remark. Since a convex function is locally Lipschitz continuous in general, we do not need to assume the continuity of H.

Example. In Classical Mechanics, we often call this H a "Hamiltonian". As a simple example of H, we have H(p) = |p|². Notice that we cannot adopt the weak solution in the distribution sense for (1.5), since we cannot use integration by parts. We next introduce the Lagrangian L : R^n → R defined by

$$L(q) = \sup_{p\in\mathbb{R}^n}\{\langle p, q\rangle - H(p)\}.$$

When H(p) = |p|², it is easy to verify that the supremum on the right hand side above is attained. It is surprising that we have a neat formula for the expected solution (called the Hopf-Lax formula):

$$u(x,t) = \min_{y\in\mathbb{R}^n}\left\{ tL\!\left(\frac{x-y}{t}\right) + g(y) \right\}. \tag{1.7}$$

More precisely, it is shown that the right hand side of (1.7) is differentiable and satisfies (1.5) almost everywhere.

Thus, we could call u a weak solution of (1.5)-(1.4) when u satisfies (1.5) almost everywhere. However, if we decide to use this notion as a weak solution, the uniqueness fails in general. We will see an example in the next section. As was shown for Burgers' equation, in order to say that the "unique weak" solution is given by (1.7), we have to add one more property to the definition of weak solutions: there is C > 0 such that

$$u(x+z,t) - 2u(x,t) + u(x-z,t) \le C|z|^2 \tag{1.8}$$

for all x, z ∈ R^n, t > 0. This is called the "semi-concavity" of u. We note that (1.8) is a hypothesis on the one-sided bound of second derivatives of u. In the 60s, Kruzkov showed that the limit function of approximate solutions obtained by the vanishing viscosity method (see the next section) has this property (1.8) when H is convex. He called u a "generalized" solution of (1.5) when it satisfies (1.5) almost everywhere together with (1.8). To my knowledge, between Kruzkov's works and the birth of viscosity solutions, there was no big progress in the study of first-order PDEs in nondivergence form.

Remark. The convexity (1.6) is a natural hypothesis when we consider only optimal control problems, where one person intends to minimize some "cost" ("energy" in terms of Physics). However, when we treat game problems (one person wants to minimize the cost while the other tries to maximize it), we meet non-convex and non-concave (i.e. "fully nonlinear") Hamiltonians. See section 4.2.

Since in this book we are concerned with viscosity solutions of PDEs in nondivergence form, for which the integration by parts argument cannot be used to define the notion of weak solutions in the distribution sense, we shall give typical examples of such PDEs.

Example. (Bellman and Isaacs equations) We first give Bellman equations and Isaacs equations, which arise in (stochastic) optimal control problems and differential games, respectively. As will be seen, these are extensions of linear PDEs. Let A and B be sets of parameters; for instance, we suppose A and B are (compact) subsets of R^m (for some m ≥ 1). For a ∈ A, b ∈ B, x ∈ Ω, r ∈ R, p = ᵗ(p₁, …, pₙ) ∈ R^n, and X = (X_{ij}) ∈ S^n, we set

$$L^a(x,r,p,X) := -\operatorname{trace}(A(x,a)X) + \langle g(x,a), p\rangle + c(x,a)r,$$
$$L^{a,b}(x,r,p,X) := -\operatorname{trace}(A(x,a,b)X) + \langle g(x,a,b), p\rangle + c(x,a,b)r.$$

Here A(·, a), A(·, a, b), g(·, a), g(·, a, b), c(·, a) and c(·, a, b) are given functions for (a, b) ∈ A × B. For inhomogeneous terms, we consider functions f(·, a) and f(·, a, b) in Ω for a ∈ A and b ∈ B. We call the following PDEs Bellman equations:

$$\sup_{a\in A}\{L^a(x,u(x),Du(x),D^2u(x)) - f(x,a)\} = 0 \quad \text{for } x\in\Omega. \tag{1.9}$$

Notice that the supremum over A is taken at each point x ∈ Ω. Taking account of one more parameter set B, we call the following PDEs Isaacs equations:

$$\sup_{a\in A}\inf_{b\in B}\{L^{a,b}(x,u(x),Du(x),D^2u(x)) - f(x,a,b)\} = 0 \quad \text{for } x\in\Omega \tag{1.10}$$

and

$$\inf_{b\in B}\sup_{a\in A}\{L^{a,b}(x,u(x),Du(x),D^2u(x)) - f(x,a,b)\} = 0 \quad \text{for } x\in\Omega. \tag{1.10'}$$
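The two Isaacs equations (1.10) and (1.10′) differ in general because sup inf ≤ inf sup can be strict. A toy illustration (my own, with a hypothetical payoff on the two-point parameter sets A = B = {0, 1}):

```python
# Sketch: sup_a inf_b f(a,b) <= inf_b sup_a f(a,b), and the gap can be strict.
# Toy payoff f(a,b) = (a - b)**2 on A = B = {0, 1}.

A = [0, 1]
B = [0, 1]
f = lambda a, b: (a - b) ** 2

sup_inf = max(min(f(a, b) for b in B) for a in A)  # minimizing player moves last
inf_sup = min(max(f(a, b) for a in A) for b in B)  # maximizing player moves last

print(sup_inf, inf_sup)   # 0 1 -> the order of sup and inf genuinely matters
```

Whoever moves last has the advantage here, which is exactly why (1.10) and (1.10′) are two different equations.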

Example. ("Quasi-linear" equations) We say that a PDE is quasi-linear if the coefficients of D²u contain u or Du. Although we will not study quasi-linear PDEs in this book, we give some examples which are in nondivergence form. We first give the PDE of mean curvature type:

$$F(x,p,X) := -\left( |p|^2\operatorname{trace}(X) - \langle Xp, p\rangle \right).$$

Notice that this F is independent of the x-variables. We refer to [12] for applications where this kind of operator appears. Next, we show a relatively "new" one, called the L^∞-Laplacian:

$$F(x,p,X) := -\langle Xp, p\rangle.$$

Again, this F does not contain the x-variables. We refer to Jensen's work [16], where he first studied the PDE "−⟨D²uDu, Du⟩ = 0 in Ω" via the viscosity solution approach.

2  Definition

In this section, we derive the definition of viscosity solutions of (1.1) via the vanishing viscosity method. We also give some basic properties of viscosity solutions and equivalent definitions using "semi-jets".

2.1  Vanishing viscosity method

When the notion of viscosity solutions was born, in order to explain the reason why we need it, many speakers started their talks by giving the following typical example, called the eikonal equation:

$$|Du|^2 = 1 \quad \text{in } \Omega. \tag{2.1}$$

We seek C¹ functions satisfying (2.1) under the Dirichlet condition

$$u(x) = 0 \quad \text{for } x\in\partial\Omega. \tag{2.2}$$

However, since there is no classical solution of (2.1)-(2.2) (showing the nonexistence of classical solutions is a good exercise), we intend to derive a reasonable definition of weak solutions of (2.1). In fact, we expect that the following function (the distance from ∂Ω) would be the unique solution of this problem (see Fig 2.1):

$$u(x) = \operatorname{dist}(x,\partial\Omega) := \inf_{y\in\partial\Omega}|x-y|.$$

[Fig 2.1: the graph of y = u(x) = dist(x, ∂Ω) for Ω = (−1, 1).]

If we consider the case when n = 1 and Ω = (−1, 1), then the expected solution is given by

u(x) = 1 − |x| for x ∈ [−1, 1].    (2.3)

Since this function is C^∞ except at x = 0, we could decide to call u a weak solution of (2.1) if it satisfies (2.1) in Ω except at finitely many points.

[Fig 2.2: graphs of several "weak" solutions of (2.1)-(2.2) for Ω = (−1, 1).]

However, even in the above simple case of (2.1), we know that there are infinitely many such weak solutions of (2.1) (see Fig 2.2); for example, −u is a weak solution, and so is

$$u(x) = \begin{cases} x+1 & \text{for } x\in[-1,-\tfrac12),\\ -x & \text{for } x\in[-\tfrac12,\tfrac12),\\ x-1 & \text{for } x\in[\tfrac12,1], \end{cases}$$

etc. Now, in order to look for an appropriate notion of weak solutions, we introduce the so-called vanishing viscosity method; for ε > 0, we consider the following PDE as an approximate equation of (2.1) when n = 1 and Ω = (−1, 1):

$$\begin{cases} -\varepsilon u_\varepsilon'' + (u_\varepsilon')^2 = 1 & \text{in } (-1,1),\\ u_\varepsilon(\pm 1) = 0. \end{cases} \tag{2.4}$$

The first term, −εu″_ε, in the left hand side of (2.4) is called the vanishing viscosity term (when n = 1), as ε → 0. By an elementary calculation, we can find the unique smooth function u_ε in the following manner: We first note that if a classical solution of (2.4) exists, then it is unique. Thus, we may suppose that u′_ε(0) = 0 by symmetry. Setting v_ε = u′_ε, we first solve the ODE:

$$\begin{cases} -\varepsilon v_\varepsilon' + v_\varepsilon^2 = 1 & \text{in } (-1,1),\\ v_\varepsilon(0) = 0. \end{cases} \tag{2.5}$$

It is easy to see that the solution of (2.5) is given by

$$v_\varepsilon(x) = -\tanh\left(\frac{x}{\varepsilon}\right).$$

Hence, we can find u_ε:

$$u_\varepsilon(x) = -\varepsilon\log\frac{\cosh(x/\varepsilon)}{\cosh(1/\varepsilon)} = -\varepsilon\log\frac{e^{x/\varepsilon}+e^{-x/\varepsilon}}{e^{1/\varepsilon}+e^{-1/\varepsilon}}.$$

It is a good exercise to show that u_ε converges to the function in (2.3) uniformly in [−1, 1].

Remark. Since û_ε(x) := −u_ε(x) is the solution of

$$\begin{cases} \varepsilon u'' + (u')^2 = 1 & \text{in } (-1,1),\\ u(\pm 1) = 0, \end{cases}$$

we have û(x) := lim_{ε→0} û_ε(x) = −u(x). Thus, if we replace −εu″ by +εu″, then the limit function would be different in general. To define weak solutions, we adopt the properties which hold for the (uniform) limit of approximate solutions of PDEs with the "minus" vanishing viscosity term.

Let us come back to general second-order PDEs:

F(x, u, Du, D²u) = 0 in Ω.    (2.6)
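The uniform convergence u_ε → 1 − |x| claimed in the exercise above is easy to observe numerically. A minimal sketch (my own; it uses a numerically stable form of log cosh to avoid overflow for small ε):

```python
import math

# Sketch: check numerically that u_eps(x) = -eps*log(cosh(x/eps)/cosh(1/eps))
# converges uniformly on [-1, 1] to u(x) = 1 - |x| as eps -> 0.

def log_cosh(z):
    """Overflow-safe log(cosh(z)) = |z| + log(1 + exp(-2|z|)) - log(2)."""
    z = abs(z)
    return z + math.log1p(math.exp(-2.0 * z)) - math.log(2.0)

def u_eps(x, eps):
    return -eps * (log_cosh(x / eps) - log_cosh(1.0 / eps))

def sup_error(eps, m=400):
    xs = [-1.0 + 2.0 * i / m for i in range(m + 1)]
    return max(abs(u_eps(x, eps) - (1.0 - abs(x))) for x in xs)

for eps in (0.2, 0.1, 0.05):
    print(eps, sup_error(eps))   # the error shrinks, roughly like eps*log(2)
```

The worst error occurs at the kink x = 0, where the viscous solution rounds off the corner; this is exactly the smoothing effect of the −εu″ term.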

We shall use the following definition of classical solutions:

Definition. We call u : Ω → R a classical subsolution (resp., supersolution, solution) of (2.6) if u ∈ C²(Ω) and

F(x, u(x), Du(x), D²u(x)) ≤ 0 (resp., ≥ 0, = 0) in Ω.

Remark. If F does not depend on the X-variables (i.e. F(x, u, Du) = 0; first-order PDEs), we only suppose u ∈ C¹(Ω) in the above, in place of u ∈ C²(Ω).

Throughout this text, we also suppose the following monotonicity condition with respect to the X-variables:

Definition. We say that F is (degenerate) elliptic if

$$F(x,r,p,X) \le F(x,r,p,Y) \quad \text{for all } x\in\Omega,\ r\in\mathbb{R},\ p\in\mathbb{R}^n,\ X,Y\in S^n \text{ with } X\ge Y. \tag{2.7}$$

We notice that if F does not depend on the X-variables (i.e. F = 0 is a first-order PDE), then F is automatically elliptic. We also note that the left hand side F(x, r, p, X) = −trace(X) of the Laplace equation (1.2) is elliptic. We will derive properties which hold true for the (uniform) limit (as ε → +0) of solutions of

−εΔu + F(x, u, Du, D²u) = 0 in Ω (ε > 0).    (2.8)

Note that since −ε trace(X) + F(x, r, p, X) is "uniformly" elliptic (see section 3 for the definition) provided that F is elliptic, it is easier in practice to solve (2.8) than (2.6). See [13] for instance.

Proposition 2.1. Assume that F is elliptic. Let u_ε ∈ C²(Ω) ∩ C(Ω̄) be a classical subsolution (resp., supersolution) of (2.8). If u_ε converges to u ∈ C(Ω̄) (as ε → 0) uniformly on every compact set K ⊂ Ω, then, for any ϕ ∈ C²(Ω), we have

F(x, u(x), Dϕ(x), D²ϕ(x)) ≤ 0 (resp., ≥ 0)

provided that u − ϕ attains its maximum (resp., minimum) at x ∈ Ω.

Remark. When F does not depend on the X-variables, we only need to suppose ϕ and u_ε to be in C¹(Ω), as before.

Proof. We only give a proof of the assertion for subsolutions, since the other one can be shown in a symmetric way.


Suppose that u − ϕ attains its maximum at x̂ ∈ Ω for ϕ ∈ C²(Ω). Setting ϕ_δ(y) := ϕ(y) + δ|y − x̂|⁴ for small δ > 0, we see that (u − ϕ_δ)(x̂) > (u − ϕ_δ)(y) for y ∈ Ω \ {x̂}. (This tiny technique of replacing a maximum point by a "strict" one will appear again in Proposition 2.2.)

Let x_ε ∈ Ω̄ be a point such that (u_ε − ϕ_δ)(x_ε) = max_Ω̄ (u_ε − ϕ_δ). Note that x_ε also depends on δ > 0. Since u_ε converges to u uniformly in B̄_r(x̂) and x̂ is the unique maximum point of u − ϕ_δ, we note that lim_{ε→0} x_ε = x̂. Thus, we see that x_ε ∈ Ω for small ε > 0. Notice that if we argued with ϕ instead of ϕ_δ, the limit of x_ε might differ from x̂. Thus, at x_ε ∈ Ω, we have

−εΔu_ε(x_ε) + F(x_ε, u_ε(x_ε), Du_ε(x_ε), D²u_ε(x_ε)) ≤ 0.

Since D(u_ε − ϕ_δ)(x_ε) = 0 and D²(u_ε − ϕ_δ)(x_ε) ≤ 0, in view of ellipticity, we have

−εΔϕ_δ(x_ε) + F(x_ε, u_ε(x_ε), Dϕ_δ(x_ε), D²ϕ_δ(x_ε)) ≤ 0.

Sending ε → 0 in the above, we have

F(x̂, u(x̂), Dϕ_δ(x̂), D²ϕ_δ(x̂)) ≤ 0.

Since Dϕ_δ(x̂) = Dϕ(x̂) and D²ϕ_δ(x̂) = D²ϕ(x̂), we conclude the proof. □

Definition. We call u : Ω → R a viscosity subsolution (resp., supersolution) of (2.6) if, for any ϕ ∈ C²(Ω),

F(x, u(x), Dϕ(x), D²ϕ(x)) ≤ 0 (resp., ≥ 0)

provided that u − ϕ attains its maximum (resp., minimum) at x ∈ Ω. We call u : Ω → R a viscosity solution of (2.6) if it is both a viscosity sub- and supersolution of (2.6).

Remark. Here, we have given the definition for "general" functions, but we will often suppose that they are (semi-)continuous in theorems etc. In fact, in the propositions in section 2.1, we will suppose that viscosity sub- and supersolutions are continuous.
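To see the definition at work, here is a standard worked example (not spelled out in the text at this point): for the one-dimensional eikonal equation F(u′) = |u′|² − 1 = 0 on Ω = (−1, 1), the function u(x) = 1 − |x| from (2.3) is a viscosity solution, while v = −u is not.

```latex
% Worked example: u(x) = 1 - |x| is a viscosity solution of |u'|^2 - 1 = 0 in (-1,1).
\[
u(x) = 1 - |x|:\quad
\begin{cases}
|u'(x)|^2 - 1 = 0 & \text{classically for } x \neq 0,\\[2pt]
u-\phi \text{ max at } 0 \ \Rightarrow\ \phi'(0)\in[-1,1]
  \ \Rightarrow\ |\phi'(0)|^2 - 1 \le 0,\\[2pt]
\text{no } \phi\in C^1 \text{ with } u-\phi \text{ min at } 0
  \quad (\phi'(0)\le -1 \text{ and } \phi'(0)\ge 1 \text{ impossible}),
\end{cases}
\]
so the supersolution condition at the kink holds vacuously. By contrast, for
\[
v(x) = |x| - 1:\quad \phi \equiv -1 \ \Rightarrow\ v-\phi \text{ attains its minimum at } 0,
\quad\text{but}\quad |\phi'(0)|^2 - 1 = -1 < 0,
\]
so $v$ fails to be a viscosity supersolution, although it satisfies the equation a.e.
```

This is exactly the mechanism that singles out the distance function among the infinitely many a.e. solutions of (2.1)-(2.2).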

However, all the propositions in section 2.1 can be proved by assuming only upper and lower semi-continuity for viscosity subsolutions and supersolutions, respectively. We will introduce general viscosity solutions in section 3.3.

Notation. In order to memorize the correct inequality, we will often say that u is a viscosity subsolution (resp., supersolution) of F(x, u, Du, D²u) ≤ 0 (resp., ≥ 0) in Ω if it is a viscosity subsolution (resp., supersolution) of (2.6).

Proposition 2.2. For u : Ω → R, the following (1) and (2) are equivalent:
(1) u is a viscosity subsolution (resp., supersolution) of (2.6);
(2) if 0 = (u − ϕ)(x̂) > (u − ϕ)(x) (resp., < (u − ϕ)(x)) for ϕ ∈ C²(Ω), x̂ ∈ Ω and x ∈ Ω \ {x̂}, then F(x̂, ϕ(x̂), Dϕ(x̂), D²ϕ(x̂)) ≤ 0 (resp., ≥ 0).

[Fig 2.3: graphs of u, ϕ, and ϕ_δ(·) = ϕ(·) + δ|· − x̂|⁴ + (u − ϕ)(x̂), the last touching the graph of u from above only at x̂.]

Proof. The implication (1) ⇒ (2) is trivial. For the opposite implication in the subsolution case, suppose that u − ϕ attains a maximum at x̂ ∈ Ω. Set

ϕ_δ(x) = ϕ(x) + δ|x − x̂|⁴ + (u − ϕ)(x̂).

See Fig 2.3. Since 0 = (u − ϕ_δ)(x̂) > (u − ϕ_δ)(x) for x ∈ Ω \ {x̂}, (2) gives

F(x̂, ϕ_δ(x̂), Dϕ_δ(x̂), D²ϕ_δ(x̂)) ≤ 0,

which implies the assertion. □

By the next proposition, we recognize that viscosity solutions are the right candidates for weak solutions when F is elliptic.

Proposition 2.3. Assume that F is elliptic. A function u : Ω → R is a classical subsolution (resp., supersolution) of (2.6) if and only if it is a viscosity subsolution (resp., supersolution) of (2.6) and u ∈ C²(Ω).

Proof. Suppose that u is a viscosity subsolution of (2.6) and u ∈ C²(Ω). Taking ϕ ≡ u, we see that u − ϕ attains its maximum at every point x ∈ Ω. Thus, the definition of viscosity subsolutions yields

F(x, u(x), Du(x), D²u(x)) ≤ 0 for x ∈ Ω.

On the contrary, suppose that u ∈ C²(Ω) is a classical subsolution of (2.6). Fix any ϕ ∈ C²(Ω). Assuming that u − ϕ takes its maximum at x ∈ Ω, we have D(u − ϕ)(x) = 0 and D²(u − ϕ)(x) ≤ 0. Hence, in view of ellipticity, we have

0 ≥ F(x, u(x), Du(x), D²u(x)) ≥ F(x, u(x), Dϕ(x), D²ϕ(x)). □

We introduce the sets of upper and lower semi-continuous functions: for K ⊂ R^n,

USC(K) := {u : K → R | u is upper semi-continuous in K},

and

LSC(K) := {u : K → R | u is lower semi-continuous in K}.

Remark. Throughout this book, we use the following maximum principle for semi-continuous functions:

An upper semi-continuous function on a compact set attains its maximum.

We give the following lemma, which will be used without further mention. Since the proof is a bit technical, the reader may skip it at first reading.

Proposition 2.4. Assume that u ∈ USC(Ω) (resp., u ∈ LSC(Ω)) is a viscosity subsolution (resp., supersolution) of (2.6) in Ω. Then, for any open set Ω′ ⊂ Ω, u is a viscosity subsolution (resp., supersolution) of (2.6) in Ω′.

Proof. We only show the assertion for subsolutions, since the other can be shown similarly. For ϕ ∈ C²(Ω′), by Proposition 2.2, we may suppose that for some x̂ ∈ Ω′,

0 = (u − ϕ)(x̂) > (u − ϕ)(y) for all y ∈ Ω′ \ {x̂}.

For simplicity, we shall suppose x̂ = 0. Choose r > 0 such that B_{2r} ⊂ Ω′. We then choose ξ_k ∈ C^∞(R^n) (k = 1, 2) such that 0 ≤ ξ_k ≤ 1 in R^n, ξ₁ + ξ₂ = 1 in R^n,

ξ₁ = 1 in B_r,  and  ξ₂ = 1 in R^n \ B_{2r}.

We define ψ = ξ₁ϕ + Mξ₂, where M = sup_Ω u + 1. Since it is easy to verify that ψ ∈ C²(R^n) and 0 = (u − ψ)(0) > (u − ψ)(x) for x ∈ Ω \ {0}, we leave the details to the reader. This concludes the proof. □

2.2  Equivalent definitions

We present equivalent definitions of viscosity solutions. However, since we will need these in the proof of uniqueness for second-order PDEs, the reader may postpone this subsection until section 3.3.

First, we introduce "semi-jets" of functions u : Ω → R at x ∈ Ω by

$$J^{2,+}u(x) := \left\{ (p,X)\in\mathbb{R}^n\times S^n \;\middle|\;
u(y) \le u(x) + \langle p,\,y-x\rangle + \tfrac12\langle X(y-x),\,y-x\rangle + o(|y-x|^2)
\ \text{as}\ \Omega\ni y\to x \right\}$$

and

$$J^{2,-}u(x) := \left\{ (p,X)\in\mathbb{R}^n\times S^n \;\middle|\;
u(y) \ge u(x) + \langle p,\,y-x\rangle + \tfrac12\langle X(y-x),\,y-x\rangle + o(|y-x|^2)
\ \text{as}\ \Omega\ni y\to x \right\}.$$
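As a concrete illustration (my own example, for n = 1): the semi-jets of u(x) = |x| at x = 0 can be computed directly from the definition; the kink leaves the super-jet empty while the sub-jet is large:

```latex
% Semi-jets of u(x) = |x| at x = 0 (n = 1):
\[
J^{2,+}u(0) = \emptyset,
\qquad
J^{2,-}u(0) = \bigl( (-1,1)\times\mathbb{R} \bigr)
  \cup \bigl( \{-1,1\}\times(-\infty,0] \bigr).
\]
% Indeed, |y| \le py + \tfrac{X}{2}y^2 + o(y^2) would force p \ge 1 (from y > 0)
% and p \le -1 (from y < 0), which is impossible, so J^{2,+}u(0) is empty;
% while |y| \ge py + \tfrac{X}{2}y^2 + o(y^2) holds for every X when |p| < 1,
% and for p = \pm 1 exactly when X \le 0.
```

This matches the picture from section 2.1: convex kinks obstruct touching from above, concave kinks obstruct touching from below.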

Note that J^{2,−}u(x) = −J^{2,+}(−u)(x).

Remark. We do not impose any continuity on u in these definitions. We recall the notion of the "small order o" in the above: for k ≥ 1,

$$f(x) \le o(|x|^k)\ \bigl(\text{resp.},\ \ge o(|x|^k)\bigr)\ \text{as } x\to 0
\iff
\begin{cases}
\text{there is } \omega\in C([0,\infty),[0,\infty)) \text{ with } \omega(0)=0 \text{ such that}\\[4pt]
\displaystyle \sup_{x\in B_r\setminus\{0\}}\frac{f(x)}{|x|^k} \le \omega(r)
\quad \left(\text{resp., } \inf_{x\in B_r\setminus\{0\}}\frac{f(x)}{|x|^k} \ge -\omega(r)\right).
\end{cases}$$

In the next proposition, we give some basic properties of semi-jets: (1) is a relation between semi-jets and classical derivatives, and (2) means that semi-jets are "defined" on dense subsets of Ω.

Proposition 2.5. For u : Ω → R, we have the following:
(1) If J^{2,+}u(x) ∩ J^{2,−}u(x) ≠ ∅, then Du(x) and D²u(x) exist, and

J^{2,+}u(x) ∩ J^{2,−}u(x) = {(Du(x), D²u(x))}.

(2) If u ∈ USC(Ω) (resp., u ∈ LSC(Ω)), then

Ω = { x ∈ Ω | ∃ x_k ∈ Ω such that J^{2,+}u(x_k) ≠ ∅ and lim_{k→∞} x_k = x }

(resp., Ω = { x ∈ Ω | ∃ x_k ∈ Ω such that J^{2,−}u(x_k) ≠ ∅ and lim_{k→∞} x_k = x }).

Proof. The proof of (1) is a direct consequence of the definition. We give a proof of assertion (2) only for J^{2,+}. Fix x ∈ Ω and choose r > 0 so that B̄_r(x) ⊂ Ω. For ε > 0, we can choose x_ε ∈ B̄_r(x) such that

u(x_ε) − ε⁻¹|x_ε − x|² = max_{y∈B̄_r(x)} ( u(y) − ε⁻¹|y − x|² ).

Since |x_ε − x|² ≤ ε(max_{B̄_r(x)} u − u(x)), we see that x_ε converges to x ∈ B̄_r(x)

as ε → 0. Thus, we may suppose that x_ε ∈ B_r(x) for small ε > 0. Hence, we have

u(y) ≤ u(x_ε) + (1/ε)(|y − x|² − |x_ε − x|²) for all y ∈ B̄_r(x).

It is easy to check that (2(x_ε − x)/ε, 2ε^{−1}I) ∈ J^{2,+}u(x_ε). □

We next introduce a sort of closure of the semi-jets:

J̄^{2,±}u(x) := { (p, X) ∈ R^n × S^n | ∃x_k ∈ Ω and ∃(p_k, X_k) ∈ J^{2,±}u(x_k) such that (x_k, u(x_k), p_k, X_k) → (x, u(x), p, X) as k → ∞ }.

Proposition 2.6. For u : Ω → R, the following (1), (2), (3) are equivalent:

(1) u is a viscosity subsolution (resp., supersolution) of (2.6).

(2) For x ∈ Ω and (p, X) ∈ J^{2,+}u(x) (resp., J^{2,−}u(x)), we have F(x, u(x), p, X) ≤ 0 (resp., ≥ 0).

(3) For x ∈ Ω and (p, X) ∈ J̄^{2,+}u(x) (resp., J̄^{2,−}u(x)), we have F(x, u(x), p, X) ≤ 0 (resp., ≥ 0).

Proof. Again, we give a proof of the assertion only for subsolutions.

Step 1: (2) ⟹ (3). For x ∈ Ω and (p, X) ∈ J̄^{2,+}u(x), we can find (p_k, X_k) ∈ J^{2,+}u(x_k) with x_k ∈ Ω such that lim_{k→∞}(x_k, u(x_k), p_k, X_k) = (x, u(x), p, X) and F(x_k, u(x_k), p_k, X_k) ≤ 0, which implies (3) by sending k → ∞.

Step 2: (3) ⟹ (1). For ϕ ∈ C²(Ω), suppose that (u − ϕ)(x) = max(u − ϕ). Then the Taylor expansion of ϕ at x gives

u(y) ≤ u(x) + ⟨Dϕ(x), y − x⟩ + (1/2)⟨D²ϕ(x)(y − x), y − x⟩ + o(|x − y|²) as y → x.

Thus, we have (Dϕ(x), D²ϕ(x)) ∈ J^{2,+}u(x) ⊂ J̄^{2,+}u(x).

Step 3: (1) ⟹ (2). For (p, X) ∈ J^{2,+}u(x) (x ∈ Ω), we can find a nondecreasing, continuous ω : [0, ∞) → [0, ∞) such that ω(0) = 0 and

u(y) ≤ u(x) + ⟨p, y − x⟩ + (1/2)⟨X(y − x), y − x⟩ + |y − x|² ω(|y − x|)   (2.9)

as y → x. In fact, by the definition of o, we find ω₀ ∈ C([0, ∞), [0, ∞)) such that ω₀(0) = 0 and

ω₀(r) ≥ sup_{y∈B_r(x)\{x}} (1/|x − y|²) { u(y) − u(x) − ⟨p, y − x⟩ − (1/2)⟨X(y − x), y − x⟩ },

and we verify that ω(r) := sup_{0≤t≤r} ω₀(t) satisfies (2.9). Now, we define ϕ by

ϕ(y) := ⟨p, y − x⟩ + (1/2)⟨X(y − x), y − x⟩ + ψ(|x − y|),

where

ψ(t) := ∫_t^{√3 t} ( ∫_s^{2s} ω(r) dr ) ds ≥ t² ω(t).

It is easy to check that (Dϕ(x), D²ϕ(x)) = (p, X) and (u − ϕ)(x) ≥ (u − ϕ)(y) for y ∈ Ω. Therefore, we conclude the proof. □

Remark. In view of the proof of Step 3, we verify that for x ∈ Ω,

J^{2,+}u(x) = { (Dϕ(x), D²ϕ(x)) ∈ R^n × S^n | ∃ϕ ∈ C²(Ω) such that u − ϕ attains its maximum at x },

J^{2,−}u(x) = { (Dϕ(x), D²ϕ(x)) ∈ R^n × S^n | ∃ϕ ∈ C²(Ω) such that u − ϕ attains its minimum at x }.

Thus, we can intuitively read off J^{2,±}u(x) from the graph of u.

Example. Consider the function u ∈ C([−1, 1]) in (2.3). From the graph below, we may conclude that

J^{2,−}u(0) = ∅ and J^{2,+}u(0) = ({1} × [0, ∞)) ∪ ({−1} × [0, ∞)) ∪ ((−1, 1) × R).

See Fig 2.4.1 and 2.4.2. We omit how to obtain J^{2,±}u(0) for this and the next example.

We shall examine J^{2,±} for discontinuous functions. For instance, consider the Heaviside function:

u(x) := 1 for x ≥ 0,  u(x) := 0 for x < 0.

[Fig 2.4.1, 2.4.2: graphs of u from (2.3) with test lines y = x + 1 and y = ax + 1 (|a| < 1) touching from above at 0.]

[Fig 2.5: graph of the Heaviside function with test lines y = ax + 1 (a > 0).]

We see that J^{2,−}u(0) = ∅ and J^{2,+}u(0) = ({0} × [0, ∞)) ∪ ((0, ∞) × R). See Fig 2.5.

In order to deal with "boundary value problems" in section 5, we prepare some notation: for a set K ⊂ R^n, which is not necessarily open, we define the semi-jets of u : K → R at x ∈ K by

J_K^{2,+}u(x) := { (p, X) ∈ R^n × S^n | u(y) ≤ u(x) + ⟨p, y − x⟩ + (1/2)⟨X(y − x), y − x⟩ + o(|y − x|²) as K ∋ y → x },

J_K^{2,−}u(x) := { (p, X) ∈ R^n × S^n | u(y) ≥ u(x) + ⟨p, y − x⟩ + (1/2)⟨X(y − x), y − x⟩ + o(|y − x|²) as K ∋ y → x },

and

J̄_K^{2,±}u(x) := { (p, X) ∈ R^n × S^n | ∃x_k ∈ K and ∃(p_k, X_k) ∈ J_K^{2,±}u(x_k) such that (x_k, u(x_k), p_k, X_k) → (x, u(x), p, X) as k → ∞ }.

Remark. It is easy to verify that

x ∈ Ω ⟹ J_Ω^{2,±}u(x) = J_{Ω̄}^{2,±}u(x) and J̄_Ω^{2,±}u(x) = J̄_{Ω̄}^{2,±}u(x).

For x ∈ Ω, we shall simply write J^{2,±}u(x) (resp., J̄^{2,±}u(x)) for J_Ω^{2,±}u(x) = J_{Ω̄}^{2,±}u(x) (resp., J̄_Ω^{2,±}u(x) = J̄_{Ω̄}^{2,±}u(x)).

Example. Consider u(x) ≡ 0 on K := [0, 1]. It is easy to observe that J^{2,+}u(x) = J_K^{2,+}u(x) = {0} × [0, ∞) provided x ∈ (0, 1). It is also easy to verify that

J_K^{2,+}u(0) = ({0} × [0, ∞)) ∪ ((0, ∞) × R)

and

J_K^{2,−}u(0) = ({0} × (−∞, 0]) ∪ ((−∞, 0) × R).
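The jet memberships claimed for the Heaviside function and for u ≡ 0 above can be probed numerically. The sketch below (an illustration, not part of the text; the helper name, grid sizes, and tolerance are our own choices) tests whether a candidate (p, X) satisfies the defining inequality of J^{2,+}u(x₀) on shrinking neighborhoods.

```python
import numpy as np

def in_superjet(u, x0, p, X, radii=(1e-1, 1e-2, 1e-3, 1e-4)):
    """Test (p, X) in J^{2,+}u(x0) for u : R -> R by checking that
    sup_{0<|y-x0|<r} [u(y) - u(x0) - p(y-x0) - X(y-x0)^2/2] / (y-x0)^2
    becomes (nearly) non-positive as r -> 0."""
    sups = []
    for r in radii:
        h = np.linspace(-r, r, 2001)
        h = h[np.abs(h) > r / 100]          # stay away from y = x0
        q = (u(x0 + h) - u(x0) - p * h - 0.5 * X * h ** 2) / h ** 2
        sups.append(q.max())
    return sups[-1] <= 1e-6                 # tolerance for the o(|y-x0|^2) term

heaviside = lambda y: np.where(np.asarray(y) >= 0, 1.0, 0.0)

# J^{2,+}u(0) = ({0} x [0,inf)) U ((0,inf) x R) for the Heaviside function:
assert in_superjet(heaviside, 0.0, p=1.0, X=-5.0)      # p > 0, any X
assert in_superjet(heaviside, 0.0, p=0.0, X=2.0)       # p = 0 needs X >= 0
assert not in_superjet(heaviside, 0.0, p=0.0, X=-1.0)
assert not in_superjet(heaviside, 0.0, p=-1.0, X=0.0)
```

Restricting the sample points y to K = [0, 1] instead of a full neighborhood gives the analogous check for J_K^{2,+}.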

We finally give some properties of J_Ω^{2,±} and J̄_Ω^{2,±}. Since the proof is easy, we omit it.


Proposition 2.7. For u : Ω → R, ψ ∈ C²(Ω) and x ∈ Ω, we have

J_Ω^{2,±}(u + ψ)(x) = (Dψ(x), D²ψ(x)) + J_Ω^{2,±}u(x)

and

J̄_Ω^{2,±}(u + ψ)(x) = (Dψ(x), D²ψ(x)) + J̄_Ω^{2,±}u(x).

3 Comparison principle

In this section, we discuss the comparison principle, which implies the uniqueness of viscosity solutions when their values on ∂Ω coincide (i.e. under the Dirichlet boundary condition). In the study of the viscosity solution theory, the comparison principle has been the main issue because the uniqueness of viscosity solutions is harder to prove than their existence and stability.

First, we recall some "classical" comparison principles and then show how to modify the proof to a modern "viscosity" version. In this section, the comparison principle roughly means the following:

"Comparison principle"
If u is a viscosity subsolution and v a viscosity supersolution, and u ≤ v on ∂Ω, then u ≤ v in Ω.

Modifying our proofs of the comparison theorems below, we obtain a slightly stronger assertion than the above:

If u is a viscosity subsolution and v a viscosity supersolution, then max_Ω(u − v) = max_∂Ω(u − v).

We remark that the comparison principle implies the uniqueness of (continuous) viscosity solutions under the Dirichlet boundary condition:

"Uniqueness for the Dirichlet problem"
If u and v are viscosity solutions with u = v on ∂Ω, then u = v in Ω.

Proof that the comparison principle implies uniqueness. Since u (resp., v) is a viscosity subsolution and v (resp., u) a viscosity supersolution, and u = v on ∂Ω, the comparison principle yields u ≤ v (resp., v ≤ u) in Ω. □

In this section, we mainly deal with the following PDE instead of (2.6):

νu + F(x, Du, D²u) = 0 in Ω,   (3.1)

where we suppose that

ν ≥ 0   (3.2)

and

F : Ω × R^n × S^n → R is continuous.   (3.3)

3.1 Classical comparison principle

In this subsection, we show that if one of the viscosity sub- and supersolutions is a classical one, then the comparison principle holds true. We call this the "classical" comparison principle.

3.1.1 Degenerate elliptic PDEs

We first consider the case when F is (degenerate) elliptic and ν > 0.

Proposition 3.1. Assume that ν > 0 and (3.3) hold. Assume also that F is elliptic. Let u ∈ USC(Ω) (resp., v ∈ LSC(Ω)) be a viscosity subsolution (resp., supersolution) of (3.1) and v ∈ LSC(Ω) ∩ C²(Ω) (resp., u ∈ USC(Ω) ∩ C²(Ω)) a classical supersolution (resp., subsolution) of (3.1). If u ≤ v on ∂Ω, then u ≤ v in Ω.

Proof. We only prove the assertion when u is a viscosity subsolution of (3.1), since the other case can be shown similarly. Set max_Ω(u − v) =: θ and choose x̂ ∈ Ω such that (u − v)(x̂) = θ. Suppose that θ > 0; we will derive a contradiction. We note that x̂ ∈ Ω because u ≤ v on ∂Ω. Thus, the definitions of u and v respectively yield

νu(x̂) + F(x̂, Dv(x̂), D²v(x̂)) ≤ 0 ≤ νv(x̂) + F(x̂, Dv(x̂), D²v(x̂)).

Hence, by these inequalities, we have νθ = ν(u − v)(x̂) ≤ 0, which contradicts θ > 0. □

3.1.2 Uniformly elliptic PDEs

Next, we present the comparison principle when ν = 0 but F is uniformly elliptic in the following sense. Notice that if ν > 0 and F is uniformly elliptic, then Proposition 3.1 yields Proposition 3.3 below because our uniform ellipticity implies (degenerate) ellipticity.

Throughout this book, we fix the "uniform ellipticity" constants 0 < λ ≤ Λ.

With these constants, we introduce Pucci's operators: for X ∈ S^n,

P⁺(X) := max{ −trace(AX) | A ∈ S^n, λI ≤ A ≤ ΛI },
P⁻(X) := min{ −trace(AX) | A ∈ S^n, λI ≤ A ≤ ΛI }.

We give some properties of P^±. We omit the proof since it is elementary.

Proposition 3.2. For X, Y ∈ S^n, we have the following:

(1) P⁺(X) = −P⁻(−X),
(2) P^±(θX) = θP^±(X) for θ ≥ 0,
(3) P⁺ is convex, P⁻ is concave,
(4) P⁻(X) + P⁻(Y) ≤ P⁻(X + Y) ≤ P⁻(X) + P⁺(Y) ≤ P⁺(X + Y) ≤ P⁺(X) + P⁺(Y).
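For intuition, P^± admit an eigenvalue formula: the extremal A in the definition shares the eigenvectors of X and weights its positive and negative eigenvalues by λ or Λ. The following sketch (an illustration, not from the text; the function names and the sample constants are ours) computes P^± this way and spot-checks property (4) of Proposition 3.2 on random symmetric matrices.

```python
import numpy as np

lam, Lam = 0.5, 2.0    # uniform ellipticity constants 0 < lambda <= Lambda

def pucci_plus(X):
    """P+(X) = max{-trace(AX) | lam*I <= A <= Lam*I}: weight positive
    eigenvalues of X by lam and negative ones by Lam."""
    e = np.linalg.eigvalsh(X)
    return -lam * e[e > 0].sum() - Lam * e[e < 0].sum()

def pucci_minus(X):
    return -pucci_plus(-X)   # Proposition 3.2 (1)

rng = np.random.default_rng(0)
for _ in range(100):
    B = rng.standard_normal((4, 4)); X = (B + B.T) / 2
    C = rng.standard_normal((4, 4)); Y = (C + C.T) / 2
    # Proposition 3.2 (4), up to rounding error:
    assert pucci_minus(X) + pucci_minus(Y) <= pucci_minus(X + Y) + 1e-9
    assert pucci_minus(X + Y) <= pucci_minus(X) + pucci_plus(Y) + 1e-9
    assert pucci_minus(X) + pucci_plus(Y) <= pucci_plus(X + Y) + 1e-9
    assert pucci_plus(X + Y) <= pucci_plus(X) + pucci_plus(Y) + 1e-9
```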

Definition. We say that F : Ω × R^n × S^n → R is uniformly elliptic (with the uniform ellipticity constants 0 < λ ≤ Λ) if

P⁻(X − Y) ≤ F(x, p, X) − F(x, p, Y) ≤ P⁺(X − Y)

for x ∈ Ω, p ∈ R^n, and X, Y ∈ S^n.

We also suppose the following continuity of F with respect to p ∈ R^n: there is µ > 0 such that

|F(x, p, X) − F(x, p′, X)| ≤ µ|p − p′|   (3.4)

for x ∈ Ω, p, p′ ∈ R^n, and X ∈ S^n.

Proposition 3.3. Assume that (3.2), (3.3) and (3.4) hold. Assume also that F is uniformly elliptic. Let u ∈ USC(Ω) (resp., v ∈ LSC(Ω)) be a viscosity subsolution (resp., supersolution) of (3.1) and v ∈ LSC(Ω) ∩ C²(Ω) (resp., u ∈ USC(Ω) ∩ C²(Ω)) a classical supersolution (resp., subsolution) of (3.1). If u ≤ v on ∂Ω, then u ≤ v in Ω.

Proof. We give a proof only when u is a viscosity subsolution and v a classical supersolution of (3.1).

Suppose that max_Ω(u − v) =: θ > 0; we will again derive a contradiction. For ε > 0, we set ϕ_ε(x) := εe^{δx₁}, where δ := max{(µ + 1)/λ, ν + 1} > 0. We next choose ε > 0 so small that

ε max_{x∈Ω} e^{δx₁} ≤ θ/2.

Let x̂ ∈ Ω be a point such that (u − v + ϕ_ε)(x̂) = max_Ω(u − v + ϕ_ε) ≥ θ. By the choice of ε > 0, since u ≤ v on ∂Ω, we see that x̂ ∈ Ω. From the definition of viscosity subsolutions, we have

νu(x̂) + F(x̂, D(v − ϕ_ε)(x̂), D²(v − ϕ_ε)(x̂)) ≤ 0.

By the uniform ellipticity and (3.4), we have

νu(x̂) + F(x̂, Dv(x̂), D²v(x̂)) + P⁻(−D²ϕ_ε(x̂)) − µ|Dϕ_ε(x̂)| ≤ 0.

Noting that |Dϕ_ε(x̂)| ≤ δεe^{δx̂₁} and P⁻(−D²ϕ_ε(x̂)) ≥ δ²ελe^{δx̂₁}, we have

νu(x̂) + F(x̂, Dv(x̂), D²v(x̂)) + δε(λδ − µ)e^{δx̂₁} ≤ 0.   (3.5)

Since v is a classical supersolution of (3.1), by (3.5) and δ ≥ (µ + 1)/λ, we have

ν(u − v)(x̂) + δεe^{δx̂₁} ≤ 0.

Hence, we have ν(θ − ϕ_ε(x̂)) ≤ −δεe^{δx̂₁}, which gives a contradiction because δ ≥ ν + 1. □

3.2 Comparison principle for first-order PDEs

In this subsection, without assuming that one of the viscosity sub- and supersolutions is a classical one, we establish the comparison principle when F in (3.1) does not depend on D²u, i.e. for first-order PDEs. We will study the comparison principle for second-order ones in the next subsection. In the viscosity solution theory, Theorem 3.4 below was the first surprising result.

Here, instead of (3.1), we shall consider the following PDE:

νu + H(x, Du) = 0 in Ω.   (3.6)

We shall suppose that

ν > 0,   (3.7)

and that there is a continuous function ω_H : [0, ∞) → [0, ∞) such that ω_H(0) = 0 and

|H(x, p) − H(y, p)| ≤ ω_H(|x − y|(1 + |p|)) for x, y ∈ Ω and p ∈ R^n.   (3.8)

In what follows, we will call ω_H in (3.8) a modulus of continuity. For notational simplicity, we use the following notation:

M := { ω : [0, ∞) → [0, ∞) | ω(·) is continuous, ω(0) = 0 }.

Theorem 3.4. Assume that (3.7) and (3.8) hold. Let u ∈ USC(Ω) and v ∈ LSC(Ω) be a viscosity sub- and supersolution of (3.6), respectively. If u ≤ v on ∂Ω, then u ≤ v in Ω.

Proof. Suppose max_Ω(u − v) =: θ > 0 as usual; we will derive a contradiction. Notice that since neither u nor v need be differentiable, we cannot use the same argument as in Proposition 3.1.

Now, we present the most important idea in the theory of viscosity solutions to overcome this difficulty. Setting

Φ_ε(x, y) := u(x) − v(y) − (2ε)^{−1}|x − y|² for ε > 0,

we choose (x_ε, y_ε) ∈ Ω × Ω such that Φ_ε(x_ε, y_ε) = max_{x,y∈Ω} Φ_ε(x, y). Noting that Φ_ε(x_ε, y_ε) ≥ max_{x∈Ω} Φ_ε(x, x) = θ, we have

|x_ε − y_ε|²/(2ε) ≤ u(x_ε) − v(y_ε) − θ.   (3.9)

Since Ω is compact, we can find x̂, ŷ ∈ Ω and ε_k > 0 such that lim_{k→∞} ε_k = 0 and lim_{k→∞}(x_{ε_k}, y_{ε_k}) = (x̂, ŷ). We shall simply write ε for ε_k (i.e. in what follows, "ε → 0" means ε_k → 0 as k → ∞). Setting M := max_Ω u − min_Ω v, by (3.9), we have

|x_ε − y_ε|² ≤ 2εM → 0 (as ε → 0).

Thus, we have x̂ = ŷ. Since (3.9) again implies

0 ≤ liminf_{ε→0} |x_ε − y_ε|²/(2ε) ≤ limsup_{ε→0} |x_ε − y_ε|²/(2ε) ≤ limsup_{ε→0} (u(x_ε) − v(y_ε)) − θ ≤ (u − v)(x̂) − θ ≤ 0,

we have

lim_{ε→0} |x_ε − y_ε|²/ε = 0.   (3.10)

Moreover, since (u − v)(x̂) = θ > 0, we have x̂ ∈ Ω from the assumption u ≤ v on ∂Ω. Thus, for small ε > 0, we may suppose that (x_ε, y_ε) ∈ Ω × Ω. Furthermore, ignoring the left hand side of (3.9), we have

θ ≤ liminf_{ε→0} (u(x_ε) − v(y_ε)).   (3.11)

Taking ϕ(x) := v(y_ε) + (2ε)^{−1}|x − y_ε|², we see that u − ϕ attains its maximum at x_ε ∈ Ω. Hence, from the definition of viscosity subsolutions, we have

νu(x_ε) + H(x_ε, (x_ε − y_ε)/ε) ≤ 0.

On the other hand, taking ψ(y) := u(x_ε) − (2ε)^{−1}|y − x_ε|², we see that v − ψ attains its minimum at y_ε ∈ Ω. Thus, from the definition of viscosity supersolutions, we have

νv(y_ε) + H(y_ε, (x_ε − y_ε)/ε) ≥ 0.

The above two inequalities yield

ν(u(x_ε) − v(y_ε)) ≤ ω_H(|x_ε − y_ε| + |x_ε − y_ε|²/ε).

Sending ε → 0 in the above together with (3.10) and (3.11), we have νθ ≤ 0, which is a contradiction. □

Remark. In the above proof, we could also show that lim_{ε→0} u(x_ε) = u(x̂) and lim_{ε→0} v(y_ε) = v(x̂), although we do not need this fact. In fact, by (3.9), we have

v(y_ε) ≤ u(x_ε) − θ,

which implies that

v(x̂) ≤ liminf_{ε→0} v(y_ε) ≤ liminf_{ε→0} u(x_ε) − θ ≤ limsup_{ε→0} u(x_ε) − θ ≤ u(x̂) − θ

and

v(x̂) ≤ liminf_{ε→0} v(y_ε) ≤ limsup_{ε→0} v(y_ε) ≤ limsup_{ε→0} u(x_ε) − θ ≤ u(x̂) − θ.

Hence, since all the inequalities become equalities, we have

u(x̂) = liminf_{ε→0} u(x_ε) = limsup_{ε→0} u(x_ε) and v(x̂) = liminf_{ε→0} v(y_ε) = limsup_{ε→0} v(y_ε).
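The doubling-of-variables device above is easy to observe numerically. The sketch below (an illustration with functions of our own choosing, not from the text) maximizes Φ_ε(x, y) = u(x) − v(y) − |x − y|²/(2ε) on a grid for a non-smooth u and a smooth v, and watches x_ε and y_ε merge with |x_ε − y_ε|²/ε → 0, as in (3.10).

```python
import numpy as np

x = np.linspace(0.0, 1.0, 801)
u = lambda s: 1.0 - np.abs(s - 0.5)   # kink at 0.5: not differentiable there
v = lambda s: 0.3 * s                 # smooth

for eps in (1e-1, 1e-2, 1e-3):
    X, Y = np.meshgrid(x, x, indexing="ij")
    Phi = u(X) - v(Y) - (X - Y) ** 2 / (2 * eps)
    i, j = np.unravel_index(np.argmax(Phi), Phi.shape)
    xe, ye = x[i], x[j]
    print(f"eps={eps:.0e}  x_eps={xe:.4f}  y_eps={ye:.4f}  "
          f"|x-y|^2/eps={(xe - ye) ** 2 / eps:.2e}")
```

Both maximizers approach the kink at 0.5, where max(u − v) is attained, and the penalty quotient |x_ε − y_ε|²/ε shrinks with ε.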

We remark here that we cannot apply Theorem 3.4 to the eikonal equation (2.1) because we had to suppose ν > 0 in the above proof. We shall modify the above proof so that the comparison principle holds for viscosity solutions of (2.1). To simplify our hypotheses, we shall consider the following PDE:

H(x, Du) − f(x) = 0 in Ω.   (3.12)

Here, we suppose that H is positively homogeneous of degree α > 0 in the second variable; that is, there is α > 0 such that

H(x, µp) = µ^α H(x, p) for x ∈ Ω, p ∈ R^n and µ > 0.   (3.13)

To compensate for the lack of the assumption ν > 0, we suppose the positivity of f ∈ C(Ω):

min_{x∈Ω} f(x) =: σ > 0.   (3.14)

Example. When H(x, p) = |p|² (i.e. α = 2) and f(x) ≡ 1 (i.e. σ = 1), equation (3.12) becomes (2.1).

The second comparison principle for first-order PDEs is as follows:

Theorem 3.5. Assume that (3.8), (3.13) and (3.14) hold. Let u ∈ USC(Ω) and v ∈ LSC(Ω) be a viscosity sub- and supersolution of (3.12), respectively. If u ≤ v on ∂Ω, then u ≤ v in Ω.

Proof. Suppose that max_Ω(u − v) =: θ > 0 as usual; we will derive a contradiction. If we choose µ ∈ (0, 1) so that

(1 − µ) max_Ω u ≤ θ/2,

then we easily verify that

max_Ω(µu − v) =: τ ≥ θ/2.

We note that for any z ∈ Ω such that (µu − v)(z) = τ, we may suppose z ∈ Ω. In fact, otherwise (i.e. z ∈ ∂Ω), if we further suppose that µ < 1 is so close to 1 that −(1 − µ) min_∂Ω v ≤ θ/4, then the assumption (u ≤ v on ∂Ω) implies

θ/2 ≤ τ = µu(z) − v(z) ≤ (µ − 1)v(z) ≤ θ/4,

which is a contradiction. For simplicity, we shall omit writing the dependence on µ of τ and (x_ε, y_ε) below.

At this stage, we use the idea of the proof of Theorem 3.4: consider the mapping Φ_ε : Ω × Ω → R defined by

Φ_ε(x, y) := µu(x) − v(y) − |x − y|²/(2ε).

Choose (x_ε, y_ε) ∈ Ω × Ω such that max_{x,y∈Ω} Φ_ε(x, y) = Φ_ε(x_ε, y_ε). Note that Φ_ε(x_ε, y_ε) ≥ τ ≥ θ/2. As in the proof of Theorem 3.4, we may suppose that lim_{ε→0}(x_ε, y_ε) = (x̂, ŷ) for some (x̂, ŷ) ∈ Ω × Ω (taking a subsequence if necessary). Also, we easily see that

|x_ε − y_ε|²/(2ε) ≤ µu(x_ε) − v(y_ε) − τ ≤ M_µ := µ max_Ω u − min_Ω v.   (3.15)

Thus, sending ε → 0, we have x̂ = ŷ. Hence, (3.15) implies that µu(x̂) − v(x̂) = τ, which yields x̂ ∈ Ω because of the choice of µ. Thus, we see that (x_ε, y_ε) ∈ Ω × Ω for small ε > 0. Moreover, (3.15) again implies

lim_{ε→0} |x_ε − y_ε|²/ε = 0.   (3.16)

Now, taking ϕ(x) := (v(y_ε) + (2ε)^{−1}|x − y_ε|²)/µ, we see that u − ϕ attains its maximum at x_ε ∈ Ω. Thus, we have

H(x_ε, (x_ε − y_ε)/(µε)) ≤ f(x_ε).

Hence, by (3.13), we have

H(x_ε, (x_ε − y_ε)/ε) ≤ µ^α f(x_ε).   (3.17)

On the other hand, taking ψ(y) := µu(x_ε) − (2ε)^{−1}|y − x_ε|², we see that v − ψ attains its minimum at y_ε ∈ Ω. Thus, we have

H(y_ε, (x_ε − y_ε)/ε) ≥ f(y_ε).   (3.18)

Combining (3.18) with (3.17), we have

f(y_ε) − µ^α f(x_ε) ≤ H(y_ε, (x_ε − y_ε)/ε) − H(x_ε, (x_ε − y_ε)/ε) ≤ ω_H(|x_ε − y_ε|(1 + |x_ε − y_ε|/ε)).

Sending ε → 0 in the above with (3.16), we have (1 − µ^α)f(x̂) ≤ 0, which contradicts (3.14). □
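Uniqueness for the eikonal equation has a concrete numerical payoff: a monotone scheme selects the viscosity solution among the infinitely many a.e. solutions of |u′| = 1. The sketch below (an illustration, not from the text; the grid size and sweep count are ad hoc) solves |u′| = 1 on (0, 1) with u(0) = u(1) = 0 by Gauss–Seidel sweeps of the upwind update u_i = min(u_{i−1}, u_{i+1}) + h, and recovers the distance function min(x, 1 − x).

```python
import numpy as np

n = 101
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
u = np.zeros(n)                        # boundary data u(0) = u(1) = 0

# Gauss-Seidel sweeps of the monotone upwind scheme for |u'| = 1.
for _ in range(2 * n):
    for i in range(1, n - 1):
        u[i] = min(u[i - 1], u[i + 1]) + h

exact = np.minimum(x, 1.0 - x)         # distance to the boundary
print(np.max(np.abs(u - exact)))       # tiny (rounding error only)
```

Any sawtooth with slopes ±1 vanishing at 0 and 1 solves the PDE almost everywhere, but the distance function is the unique viscosity solution singled out by a comparison principle of the type of Theorem 3.5, and it is the one the monotone scheme computes.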

3.3 Extension to second-order PDEs

In this subsection, assuming a key lemma, we will present the comparison principle for fully nonlinear, second-order, (degenerate) elliptic PDEs (3.1). We first remark that the argument of the proof of the comparison principle for first-order PDEs cannot be applied, at least not immediately. Let us have a look at the difficulty. Consider the following simple PDE:

νu − Δu = 0,   (3.19)

where ν > 0. As one can guess, if the argument does not work for this "easiest" PDE, then it must be hopeless for general PDEs.

However, we emphasize that the same argument as in the proof of Theorem 3.4 does not work. In fact, let u ∈ USC(Ω) and v ∈ LSC(Ω) be a viscosity sub- and supersolution of (3.19), respectively, such that u ≤ v on ∂Ω. Setting Φ_ε(x, y) := u(x) − v(y) − (2ε)^{−1}|x − y|² as usual, we choose (x_ε, y_ε) ∈ Ω × Ω so that max_{x,y∈Ω} Φ_ε(x, y) = Φ_ε(x_ε, y_ε) > 0 as before. We may suppose that (x_ε, y_ε) ∈ Ω × Ω converges to (x̂, x̂) (as ε → 0) for some x̂ ∈ Ω such that (u − v)(x̂) > 0. From the definitions of u and v, we have

νu(x_ε) − n/ε ≤ 0 ≤ νv(y_ε) + n/ε.

Hence, we only have

ν(u(x_ε) − v(y_ε)) ≤ 2n/ε,

which does not give any contradiction as ε → 0.

How can we go beyond this difficulty? In 1983, P.-L. Lions first obtained the uniqueness of viscosity solutions for elliptic PDEs arising in stochastic optimal control problems (i.e. Bellman equations; F is convex in (Du, D²u)). However, his argument heavily depends on the stochastic representation of viscosity solutions as "value functions". Moreover, it seems hard to extend the result to Isaacs equations, where F is fully nonlinear.

The breakthrough was made by Jensen in 1988, in the case when the coefficients of the second derivatives in the PDE are constant. His argument relies purely on "real analysis" and works even for fully nonlinear PDEs. Then, in 1989, Ishii extended Jensen's result so that it applies to elliptic PDEs with variable coefficients.

We present here the so-called Ishii's lemma, which will be proved in the Appendix.

Lemma 3.6. (Ishii's lemma) Let u and w be in USC(Ω). For ϕ ∈ C²(Ω × Ω), let (x̂, ŷ) ∈ Ω × Ω be a point such that

max_{x,y∈Ω} (u(x) + w(y) − ϕ(x, y)) = u(x̂) + w(ŷ) − ϕ(x̂, ŷ).

Then, for each µ > 1, there are X = X(µ), Y = Y(µ) ∈ S^n such that

(D_x ϕ(x̂, ŷ), X) ∈ J̄_Ω^{2,+} u(x̂),  (D_y ϕ(x̂, ŷ), Y) ∈ J̄_Ω^{2,+} w(ŷ),

and

(

−(µ + ∥A∥)

I 0 0 I

)

(



X 0 0 Y

)

≤A+

1 2 A, µ

where A = D2 ϕ(ˆ x, yˆ) ∈ S 2n . Remark. We note that if we suppose that u, w ∈ C 2 (Ω) and (ˆ x, yˆ) ∈ Ω×Ω in the hypothesis, then we easily have (

2

2

X = D u(ˆ x), Y = D w(ˆ y ), and

X 0 0 Y

)

≤ A.

Thus, the last matrix inequality means that when u and w are only continuous, we get some error term µ−1 A2 , where µ > 1 will be large. We also note that for ϕ(x, y) := |x − y|2 /(2ε), we have 1 A := D ϕ(ˆ x, yˆ) = ε

(

2

For the last identity, since

{⟨

∥A∥2 := sup

(

A

x y

I −I −I I

)

(

,A

x y

)

2 and ∥A∥ = . ε

(3.20)

)⟩ } |x|2 + |y|2 = 1 ,

the triangle inequality yields ∥A∥2 = 2ε−2 sup{|x − y|2 | |x|2 + |y|2 = 1} ≤ 4/ε2 . On the other hand, taking x = −y (i.e. |x|2 = 1/2) in the supremum of the definition of ∥A∥2 in the above, we have ∥A∥2 ≥ 4/ε2 . Remark. The other way to show the above identity, we may use the fact that for B ∈ S n , in general, ∥B∥ = max{|λk | | λk is the eigen-value of B}. 3.3.1

Degenerate elliptic PDEs

Now, we give our hypothesis on F, which is called the structure condition.

Structure condition: There is ω_F ∈ M such that if X, Y ∈ S^n and µ > 1 satisfy

−3µ [ I 0 ; 0 I ] ≤ [ X 0 ; 0 −Y ] ≤ 3µ [ I −I ; −I I ],

then

F(y, µ(x − y), Y) − F(x, µ(x − y), X) ≤ ω_F(|x − y|(1 + µ|x − y|)) for x, y ∈ Ω.   (3.21)

In section 3.3.2, we will see that if F satisfies (3.21), then it is elliptic. We first prove the comparison principle when (3.21) holds for F, using this lemma. Afterward, we will explain why assumption (3.21) is reasonable.

Theorem 3.7. Assume that ν > 0 and (3.21) hold. Let u ∈ USC(Ω) and v ∈ LSC(Ω) be a viscosity sub- and supersolution of (3.1), respectively. If u ≤ v on ∂Ω, then u ≤ v in Ω.

Proof. Suppose that max_Ω(u − v) =: θ > 0 as usual; we will derive a contradiction. Again, for ε > 0, consider the mapping Φ_ε : Ω × Ω → R defined by

Φ_ε(x, y) := u(x) − v(y) − |x − y|²/(2ε).

Let (x_ε, y_ε) ∈ Ω × Ω be a point such that max_{x,y∈Ω} Φ_ε(x, y) = Φ_ε(x_ε, y_ε) ≥ θ. As in the proof of Theorem 3.4, we may suppose that lim_{ε→0}(x_ε, y_ε) = (x̂, x̂) for some x̂ ∈ Ω (i.e. x_ε, y_ε ∈ Ω for small ε > 0). Moreover, since (u − v)(x̂) = θ, we have

lim_{ε→0} |x_ε − y_ε|²/ε = 0   (3.22)

and

θ ≤ liminf_{ε→0} (u(x_ε) − v(y_ε)).   (3.23)

In view of Lemma 3.6 (taking w := −v, µ := 1/ε, ϕ(x, y) := |x − y|²/(2ε)) and its Remark, we find X, Y ∈ S^n such that

((x_ε − y_ε)/ε, X) ∈ J̄^{2,+}u(x_ε),  ((x_ε − y_ε)/ε, Y) ∈ J̄^{2,−}v(y_ε),

and

−(3/ε) [ I 0 ; 0 I ] ≤ [ X 0 ; 0 −Y ] ≤ (3/ε) [ I −I ; −I I ].

Thus, the equivalent definition in Proposition 2.6 implies that

νu(x_ε) + F(x_ε, (x_ε − y_ε)/ε, X) ≤ 0 ≤ νv(y_ε) + F(y_ε, (x_ε − y_ε)/ε, Y).

Hence, by virtue of our assumption (3.21), we have

ν(u(x_ε) − v(y_ε)) ≤ ω_F(|x_ε − y_ε| + |x_ε − y_ε|²/ε).   (3.24)

Taking the limit inferior as ε → 0 in the above, together with (3.22) and (3.23), we obtain νθ ≤ 0, which is a contradiction. □

3.3.2 Remarks on the structure condition

In order to ensure that assumption (3.21) is reasonable, we first present some examples. For this purpose, we consider the Isaacs equation as in section 1.2.2:

F(x, p, X) := sup_{a∈A} inf_{b∈B} { L^{a,b}(x, p, X) − f(x, a, b) },

where

L^{a,b}(x, p, X) := −trace(A(x, a, b)X) + ⟨g(x, a, b), p⟩ for (a, b) ∈ A × B.

If we suppose that A and B are compact sets in R^m (for some m ≥ 1), and that the coefficients above and f(·, a, b) satisfy the hypotheses below, then F satisfies (3.21):

(1) there are M₁ > 0 and σ_{ij}(·, a, b) : Ω → R such that A_{ij}(x, a, b) = Σ_{k=1}^m σ_{ik}(x, a, b)σ_{jk}(x, a, b) and |σ_{jk}(x, a, b) − σ_{jk}(y, a, b)| ≤ M₁|x − y| for x, y ∈ Ω, i, j = 1, ..., n, k = 1, ..., m, a ∈ A, b ∈ B;

(2) there is M₂ > 0 such that |g_i(x, a, b) − g_i(y, a, b)| ≤ M₂|x − y| for x, y ∈ Ω, i = 1, ..., n, a ∈ A, b ∈ B;

(3) there is ω_f ∈ M such that |f(x, a, b) − f(y, a, b)| ≤ ω_f(|x − y|) for x, y ∈ Ω, a ∈ A, b ∈ B.

We shall show (3.21) only when

F(x, p, X) := − Σ_{i,j=1}^n Σ_{k=1}^m σ_{ik}(x, a, b)σ_{jk}(x, a, b) X_{ij}

for a fixed (a, b) ∈ A × B, because we can modify the proof below for general F. Thus, we shall omit writing the indices a and b. To verify assumption (3.21), we choose X, Y ∈ S^n such that

[ X 0 ; 0 −Y ] ≤ 3µ [ I −I ; −I I ].

Setting ξ_k := ᵗ(σ_{1k}(x), ..., σ_{nk}(x)) and η_k := ᵗ(σ_{1k}(y), ..., σ_{nk}(y)) for any fixed k ∈ {1, 2, ..., m}, we have

⟨ [ X 0 ; 0 −Y ] (ξ_k, η_k)ᵗ, (ξ_k, η_k)ᵗ ⟩ ≤ 3µ ⟨ [ I −I ; −I I ] (ξ_k, η_k)ᵗ, (ξ_k, η_k)ᵗ ⟩ = 3µ|ξ_k − η_k|² ≤ 3µnM₁²|x − y|².

Therefore, taking the summation over k ∈ {1, ..., m}, we have

F(y, µ(x − y), Y) − F(x, µ(x − y), X) = Σ_{i,j=1}^n (−A_{ij}(y)Y_{ij} + A_{ij}(x)X_{ij}) = Σ_{k=1}^m (−⟨Yη_k, η_k⟩ + ⟨Xξ_k, ξ_k⟩) ≤ 3µmnM₁²|x − y|². □

We next give other reasons why (3.21) is a suitable assumption. The reader can skip the proof of the following proposition if he/she feels that the above reasoning is enough to adopt (3.21).

Proposition 3.8. (1) (3.21) implies ellipticity.
(2) Assume that F is uniformly elliptic. If ω̄ ∈ M satisfies sup_{r≥0} ω̄(r)/(r + 1) < ∞ and

|F(x, p, X) − F(y, p, X)| ≤ ω̄(|x − y|(∥X∥ + |p| + 1))   (3.25)

for x, y ∈ Ω, p ∈ R^n, X ∈ S^n, then (3.21) holds for F.

Proof. For a proof of (1), we refer to Remark 3.4 in [6]. For the reader's convenience, we give a proof of (2), which is essentially used in a paper by Ishii-Lions (1990).

Let X, Y ∈ S^n satisfy the matrix inequality in (3.21). Note that X ≤ Y.


Multiplying the last matrix inequality from both sides by

[ −I −I ; −I I ],

we have

[ X−Y X+Y ; X+Y X−Y ] ≤ 12µ [ 0 0 ; 0 I ].

Thus, multiplying by (ξ, sη)ᵗ for s ∈ R and ξ, η ∈ R^n with |ξ| = |η| = 1, we see that

0 ≤ (12µ − ⟨(X − Y)η, η⟩)s² − 2⟨(X + Y)ξ, η⟩s − ⟨(X − Y)ξ, ξ⟩.

Hence, we have

|⟨(X + Y)ξ, η⟩|² ≤ |⟨(X − Y)ξ, ξ⟩| (12µ + |⟨(X − Y)η, η⟩|),

which implies

∥X + Y∥ ≤ ∥X − Y∥^{1/2}(12µ + ∥X − Y∥)^{1/2}.

Thus, we have

∥X∥ ≤ (1/2)(∥X − Y∥ + ∥X + Y∥) ≤ ∥X − Y∥^{1/2}(6µ + ∥X − Y∥)^{1/2}.

Since X ≤ Y (i.e. the eigenvalues of X − Y are non-positive), we see that

F(y, p, X) − F(y, p, Y) ≥ P⁻(X − Y) ≥ λ∥X − Y∥.   (3.26)

For the last inequality, we recall the Remark after Lemma 3.6.

Since we may suppose ω̄ is concave, for any fixed ε > 0, there is M_ε > 0 such that ω̄(r) ≤ ε + M_ε r and ω̄(r) = inf_{ε>0}(ε + M_ε r) for r ≥ 0. By (3.25) and (3.26), since ∥X∥ ≤ 3µ and ∥Y∥ ≤ 3µ, we have

F(y, p, Y) − F(x, p, X) ≤ ε + M_ε|x − y|(|p| + 1) + sup_{0≤t≤6µ} { M_ε|x − y| t^{1/2}(6µ + t)^{1/2} − λt }.

Noting that

M_ε|x − y| t^{1/2}(6µ + t)^{1/2} − λt ≤ (3/λ) M_ε² µ|x − y|²,

we have

F(y, µ(x − y), Y) − F(x, µ(x − y), X) ≤ ε + M_ε|x − y|(µ|x − y| + 1) + 3λ^{−1}M_ε²µ|x − y|²,

which implies the assertion by taking the infimum over ε > 0. □
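A one-dimensional instance of the Isaacs-type computation above can be checked by brute force. In the sketch below (an illustration with our own sampling choices, not from the text), F(x, X) = −σ(x)²X with a 1-Lipschitz σ; we sample scalar pairs (X, Y) satisfying the matrix inequality of (3.21) and confirm the bound F(y, Y) − F(x, X) ≤ 3µM₁²|x − y|² derived in the example.

```python
import numpy as np

M1 = 1.0
sigma = np.sin                        # |sigma(x) - sigma(y)| <= M1 |x - y|

def satisfies_lmi(X, Y, mu):
    # [X 0; 0 -Y] <= 3*mu*[1 -1; -1 1]  <=>  the difference is PSD.
    D = (3 * mu * np.array([[1.0, -1.0], [-1.0, 1.0]])
         - np.array([[X, 0.0], [0.0, -Y]]))
    return np.all(np.linalg.eigvalsh(D) >= -1e-12)

rng = np.random.default_rng(1)
checked = 0
for _ in range(20000):
    mu = rng.uniform(1.0, 10.0)
    X, Y = rng.uniform(-20.0, 20.0, size=2)
    if not satisfies_lmi(X, Y, mu):
        continue
    x, y = rng.uniform(-3.0, 3.0, size=2)
    lhs = -sigma(y) ** 2 * Y + sigma(x) ** 2 * X       # F(y,Y) - F(x,X)
    assert lhs <= 3 * mu * M1 ** 2 * (x - y) ** 2 + 1e-9
    checked += 1
assert checked > 0     # the sampler does hit admissible pairs
print(checked, "admissible pairs verified")
```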

3.3.3 Uniformly elliptic PDEs

We shall give a comparison result corresponding to Proposition 3.3: F is uniformly elliptic and ν ≥ 0.

Theorem 3.9. Assume that (3.2), (3.3), (3.4) and (3.21) hold. Assume also that F is uniformly elliptic. Let u ∈ USC(Ω) and v ∈ LSC(Ω) be a viscosity sub- and supersolution of (3.1), respectively. If u ≤ v on ∂Ω, then u ≤ v in Ω.

Remark. As in Proposition 3.3, we may suppose ν = 0.

Proof. Suppose that max_Ω(u − v) =: θ > 0. Setting σ := (µ + 1)/λ, we choose δ > 0 so that

δ max_{x∈Ω} e^{σx₁} ≤ θ/2.

We then set τ := max_{x∈Ω}(u(x) − v(x) + δe^{σx₁}) ≥ θ > 0. Putting ϕ(x, y) := (2ε)^{−1}|x − y|² − δe^{σx₁}, we let (x_ε, y_ε) ∈ Ω × Ω be a maximum point of u(x) − v(y) − ϕ(x, y) over Ω × Ω. By the compactness of Ω, we may suppose that (x_ε, y_ε) → (x̂, ŷ) ∈ Ω × Ω as ε → 0 (taking a subsequence if necessary). Since u(x_ε) − v(y_ε) ≥ ϕ(x_ε, y_ε), we have |x_ε − y_ε|² ≤ 2ε(max_Ω u − min_Ω v + 2^{−1}θ), and moreover x̂ = ŷ. Hence, we have

u(x̂) − v(x̂) + δe^{σx̂₁} ≥ τ,

which implies x̂ ∈ Ω because of our choice of δ. Thus, we may suppose that (x_ε, y_ε) ∈ Ω × Ω for small ε > 0. Moreover, as before, we see that

lim_{ε→0} |x_ε − y_ε|²/ε = 0.   (3.27)

Applying Lemma 3.6 to û(x) := u(x) + δe^{σx₁} and −v(y), we find X, Y ∈ S^n such that ((x_ε − y_ε)/ε, X) ∈ J̄^{2,+}û(x_ε), ((x_ε − y_ε)/ε, Y) ∈ J̄^{2,−}v(y_ε), and

−(3/ε) [ I 0 ; 0 I ] ≤ [ X 0 ; 0 −Y ] ≤ (3/ε) [ I −I ; −I I ].

We shall simply write x and y for x_ε and y_ε, respectively.

Note that Proposition 2.7 implies

((x − y)/ε − δσe^{σx₁} e₁, X − δσ²e^{σx₁} I₁) ∈ J̄^{2,+}u(x),

where e₁ := ᵗ(1, 0, ..., 0) ∈ R^n and I₁ := diag(1, 0, ..., 0) ∈ S^n.

Setting r := δσe^{σx₁}, from the definitions of u and v, we have

0 ≤ F(y, (x − y)/ε, Y) − F(x, (x − y)/ε − re₁, X − σrI₁).

In view of the uniform ellipticity and (3.4), we have

0 ≤ rµ + σrP⁺(I₁) + F(y, (x − y)/ε, Y) − F(x, (x − y)/ε, X).

Hence, by (3.21) and the definition of P⁺, we have

0 ≤ r(µ − σλ) + ω_F(|x − y| + |x − y|²/ε),

which together with (3.27) yields

0 ≤ δσe^{σx̂₁}(µ − σλ).

This is a contradiction because of our choice of σ > 0. □

4 Existence results

In this section, we present some existence results for viscosity solutions of second-order (degenerate) elliptic PDEs. We first present a convenient existence result via Perron’s method, which was established by Ishii in 1987. Next, for Bellman and Isaacs equations, we give representation formulas for viscosity solutions. From the dynamic programming principle below, we will realize how natural the definition of viscosity solutions is.

4.1 Perron's method

In order to introduce Perron's method, we need the notion of viscosity solutions for semi-continuous functions.

Definition. For any function u : Ω → R, we denote the upper and lower semi-continuous envelopes of u by u* and u_*, respectively, which are defined by

u*(x) = lim_{ε→0} sup_{y∈B_ε(x)∩Ω} u(y) and u_*(x) = lim_{ε→0} inf_{y∈B_ε(x)∩Ω} u(y).
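These envelopes are easy to approximate numerically. A small sketch (an illustration, not from the text; helper names are ours) evaluates u* and u_* for the Heaviside function at its jump, where the two envelopes differ:

```python
import numpy as np

def usc_env(u, x0, grid, eps_list=(1e-1, 1e-2, 1e-3)):
    """u^*(x0) = lim_{eps->0} sup_{y in B_eps(x0)} u(y), on a sample grid."""
    vals = [u(grid[np.abs(grid - x0) < eps]).max() for eps in eps_list]
    return vals[-1]

def lsc_env(u, x0, grid, eps_list=(1e-1, 1e-2, 1e-3)):
    return -usc_env(lambda y: -u(y), x0, grid, eps_list)   # Prop. 4.1 (2)

grid = np.linspace(-1.0, 1.0, 200001)
heav = lambda y: np.where(y >= 0, 1.0, 0.0)
print(usc_env(heav, 0.0, grid), lsc_env(heav, 0.0, grid))  # 1.0 0.0
```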

We give some elementary properties of u* and u_* without proofs.

Proposition 4.1. For u : Ω → R, we have:

(1) u_*(x) ≤ u(x) ≤ u*(x) for x ∈ Ω,
(2) u_*(x) = −(−u)*(x) for x ∈ Ω,
(3) u* (resp., u_*) is upper (resp., lower) semi-continuous in Ω, i.e.

limsup_{y→x} u*(y) ≤ u*(x) (resp., liminf_{y→x} u_*(y) ≥ u_*(x)) for x ∈ Ω,

(4) if u is upper (resp., lower) semi-continuous in Ω, then u(x) = u*(x) (resp., u(x) = u_*(x)) for x ∈ Ω.

With these notations, we give our definition of viscosity solutions of

F(x, u, Du, D²u) = 0 in Ω.   (4.1)

Definition. We call u : Ω → R a viscosity subsolution (resp., supersolution) of (4.1) if u* (resp., u_*) is a viscosity subsolution (resp., supersolution) of (4.1).

We call u : Ω → R a viscosity solution of (4.1) if it is both a viscosity sub- and supersolution of (4.1).

Remark. We note that we supposed viscosity sub- and supersolutions to be, respectively, upper and lower semi-continuous in our comparison principle in section 3. Adopting the above new definition, we may omit the semi-continuity assumptions on viscosity sub- and supersolutions in Propositions 3.1, 3.3 and Theorems 3.4, 3.5, 3.7, 3.9. In what follows, we use the above definition.

Remark. We also remark that the comparison principle (Theorem 3.7) implies the continuity of viscosity solutions:

"Continuity of viscosity solutions"
If a viscosity solution u satisfies u* = u_* on ∂Ω, then u ∈ C(Ω).

Proof of the continuity of u. Since u* and u_* are, respectively, a viscosity subsolution and a viscosity supersolution, and u* ≤ u_* on ∂Ω, Theorem 3.7 yields u* ≤ u_* in Ω. Because u_* ≤ u ≤ u* in Ω, we have u = u_* = u* in Ω; that is, u ∈ C(Ω). □

We first show that the "point-wise" supremum (resp., infimum) of viscosity subsolutions (resp., supersolutions) is again a viscosity subsolution (resp., supersolution).

Theorem 4.2. Let S be a non-empty set of upper (resp., lower) semi-continuous viscosity subsolutions (resp., supersolutions) of (4.1). Set u(x) := sup_{v∈S} v(x) (resp., u(x) := inf_{v∈S} v(x)). If sup_{x∈K} |u(x)| < ∞ for every compact set K ⊂ Ω, then u is a viscosity subsolution (resp., supersolution) of (4.1).

Proof. We only give a proof for subsolutions since the other can be proved in a symmetric way. For x̂ ∈ Ω and ϕ ∈ C²(Ω), we suppose that

0 = (u* − ϕ)(x̂) > (u* − ϕ)(x) for x ∈ Ω \ {x̂}.

We shall show that

F(x̂, ϕ(x̂), Dϕ(x̂), D²ϕ(x̂)) ≤ 0.   (4.2)

Let r > 0 be such that B_{2r}(x̂) ⊂ Ω. We can find s > 0 such that

max_{∂B_r(x̂)} (u* − ϕ) ≤ −s.   (4.3)

We choose x_k ∈ B_r(x̂) such that lim_{k→∞} x_k = x̂, u*(x̂) − k^{−1} ≤ u(x_k) and |ϕ(x_k) − ϕ(x̂)| < 1/k. Moreover, we select upper semi-continuous u_k ∈ S such that u_k(x_k) + k^{−1} ≥ u(x_k). By (4.3), for 3/k < s, we have

max_{∂B_r(x̂)} (u_k − ϕ) < (u_k − ϕ)(x_k).

Thus, for large k > 3/s, there is y_k ∈ B_r(x̂) such that u_k − ϕ attains its maximum over B̄_r(x̂) at y_k. Hence, we have

F(y_k, u_k(y_k), Dϕ(y_k), D²ϕ(y_k)) ≤ 0.   (4.4)

Taking a subsequence if necessary, we may suppose z := lim_{k→∞} y_k exists. Since

(u* − ϕ)(x̂) ≤ (u_k − ϕ)(x_k) + 3/k ≤ (u_k − ϕ)(y_k) + 3/k ≤ (u* − ϕ)(y_k) + 3/k,

by the upper semi-continuity of u*, we have (u* − ϕ)(x̂) ≤ (u* − ϕ)(z), which yields z = x̂, and moreover, lim_{k→∞} u_k(y_k) = u*(x̂) = ϕ(x̂). Therefore, sending k → ∞ in (4.4), by the continuity of F, we obtain (4.2). □

Our first existence result is as follows.

Theorem 4.3. Assume that F is elliptic. Assume also that there are a viscosity subsolution ξ ∈ USC(Ω) ∩ L^∞_loc(Ω) and a viscosity supersolution η ∈ LSC(Ω) ∩ L^∞_loc(Ω) of (4.1) such that

ξ ≤ η in Ω.

Then, u(x) := sup_{v∈S} v(x) (resp., û(x) := inf_{w∈Ŝ} w(x)) is a viscosity solution of (4.1), where

S := { v ∈ USC(Ω) | v is a viscosity subsolution of (4.1) such that ξ ≤ v ≤ η in Ω }

(resp., Ŝ := { w ∈ LSC(Ω) | w is a viscosity supersolution of (4.1) such that ξ ≤ w ≤ η in Ω }).

Sketch of proof. We only give a proof for u since the other case can be shown in a symmetric way.
First of all, we notice that S ≠ ∅ since ξ ∈ S. Due to Theorem 4.2, we know that u is a viscosity subsolution of (4.1). Thus, we only need to show that it is a viscosity supersolution of (4.1). Assume that u ∈ LSC(Ω). Supposing that 0 = (u − ϕ)(x̂) < (u − ϕ)(x) for x ∈ Ω \ {x̂} and some ϕ ∈ C²(Ω), we shall show that F(x̂, ϕ(x̂), Dϕ(x̂), D²ϕ(x̂)) ≥ 0.
Suppose that this conclusion fails; then there is θ > 0 such that F(x̂, ϕ(x̂), Dϕ(x̂), D²ϕ(x̂)) ≤ −2θ. Hence, there is r > 0 such that

  F(x, ϕ(x) + t, Dϕ(x), D²ϕ(x)) ≤ −θ  for x ∈ B_r(x̂) ⊂ Ω and |t| ≤ r.    (4.5)

First, we claim that ϕ(x̂) < η(x̂). Indeed, otherwise, since ϕ ≤ u ≤ η in Ω, η − ϕ attains its minimum at x̂ ∈ Ω. See Fig 4.1.

[Fig 4.1: the graphs of y = η(x), y = u(x), y = ξ(x) and the test function y = ϕ(x) touching at x̂ ∈ Ω.]

Hence, from the definition of the supersolution η, we get a contradiction to (4.5) for x = x̂ and t = 0.

We may suppose that ξ(x̂) < η(x̂) since, otherwise, ξ = ϕ = η at x̂. Setting 3τ̂ := η(x̂) − u(x̂) > 0, from the lower and upper semi-continuity of η and ξ, respectively, we may choose s ∈ (0, r] such that

  ξ(x) + τ̂ ≤ ϕ(x) + 2τ̂ ≤ η(x)  for x ∈ B_{2s}(x̂).

Moreover, we can choose ε ∈ (0, s) and τ_0 ∈ (0, min{τ̂, r}) such that

  ϕ(x) + 2τ_0 ≤ u(x)  for x ∈ B̄_{s+ε}(x̂) \ B_{s−ε}(x̂).

If we can define a function w ∈ S such that w(x̂) > u(x̂), then we finish our proof because of the maximality of u at each point. Now, we set

  w(x) := max{u(x), ϕ(x) + τ_0}  in B_s(x̂),  and  w(x) := u(x)  in Ω \ B_s(x̂).

See Fig 4.2.

[Fig 4.2: the graphs of y = η(x), y = u(x), y = ξ(x), y = ϕ(x) + τ_0 and the modified function y = w(x) on B_s(x̂).]

It suffices to show that w ∈ S. Because of our choice of τ_0, s > 0, it is easy to see ξ ≤ w ≤ η in Ω. Thus, we only need to show that w is a viscosity subsolution of (4.1). To this end, we suppose that (w^* − ψ)(x) ≤ (w^* − ψ)(z) = 0 for x ∈ Ω, and then we will get

  F(z, w^*(z), Dψ(z), D²ψ(z)) ≤ 0.    (4.6)

If z ∈ Ω \ B̄_s(x̂) =: Ω′, then u^* − ψ attains its maximum at z ∈ Ω′, and by Proposition 2.4 we get (4.6). If z ∈ ∂B_s(x̂), then (4.6) holds again since w = u in B_{s+ε}(x̂) \ B̄_{s−ε}(x̂). It remains to show (4.6) when z ∈ B_s(x̂). Since ϕ + τ_0 is a viscosity subsolution of (4.1) in B_s(x̂), Theorem 4.2 with Ω := B_s(x̂) yields (4.6). □

Correct proof, which the reader may skip on first reading. Since we do not suppose that u ∈ LSC(Ω) here, we have to work with u_*. Suppose that 0 = (u_* − ϕ)(x̂) < (u_* − ϕ)(x) for x ∈ Ω \ {x̂} for some ϕ ∈ C²(Ω) and x̂ ∈ Ω, and that F(x̂, ϕ(x̂), Dϕ(x̂), D²ϕ(x̂)) ≤ −2θ for some θ > 0. Hence, we get (4.5) even in this case. We can also show that the w defined above is a viscosity subsolution of (4.1). It only remains to check that sup_Ω (w − u) > 0. In fact, choosing x_k ∈ B_{1/k}(x̂) such that

  u_*(x̂) + 1/k ≥ u(x_k),

we easily verify that if 1/k ≤ min{τ_0/2, s} and |ϕ(x̂) − ϕ(x_k)| < τ_0/2, then we have

  w(x_k) ≥ ϕ(x_k) + τ_0 > ϕ(x̂) + τ_0/2 = u_*(x̂) + τ_0/2 ≥ u(x_k). □

4.2 Representation formula

In this subsection, for given Bellman and Isaacs equations, we present the expected solutions, which are called “value functions”. In fact, via the dynamic programming principle for the value functions, we verify that they are viscosity solutions of the corresponding PDEs. Although this subsection is very important for learning why the notion of viscosity solutions is the right one from the viewpoint of applications in optimal control and games, the reader who is more interested in the PDE theory than in these applications may skip this subsection.
We shall restrict ourselves to the formulas for first-order PDEs, because in order to extend the results below to second-order ones, we would need to introduce some terminology from stochastic analysis, which is too much for this thin book. As will be seen, we study the minimization of functionals associated with ordinary differential equations (ODEs for short), which is called a “deterministic” optimal control problem. When we adopt “stochastic” differential equations instead of ODEs, the problems are called “stochastic” optimal control problems. We refer to [10] for the latter. Moreover, to avoid mentioning the boundary condition, we will work in the whole space R^n. Throughout this subsection, we also suppose (3.7); ν > 0.

4.2.1 Bellman equation

We fix a control set A ⊂ R^m for some m ∈ N. We define A by

  A := {α : [0, ∞) → A | α(·) is measurable}.

For x ∈ R^n and α ∈ A, we denote by X(·; x, α) the solution of

  X′(t) = g(X(t), α(t)) for t > 0,  X(0) = x,    (4.7)

where we will impose a sufficient condition on the continuous function g : R^n × A → R^n so that (4.7) is uniquely solvable. For given f : R^n × A → R, under suitable assumptions (see (4.8) below), we define the cost functional for X(·; x, α):

  J(x, α) := ∫_0^∞ e^{−νt} f(X(t; x, α), α(t)) dt.

Here, ν > 0 is called a discount factor, which guarantees that the right hand side of the above is finite. Now, we shall consider the optimal cost functional, which is called the value function in the optimal control problem;

  u(x) := inf_{α∈A} J(x, α)  for x ∈ R^n.

Theorem 4.4. (Dynamic Programming Principle) Assume that

  (1) sup_{a∈A} ( ∥f(·, a)∥_{L^∞(R^n)} + ∥g(·, a)∥_{W^{1,∞}(R^n)} ) < ∞,
  (2) sup_{a∈A} |f(x, a) − f(y, a)| ≤ ω_f(|x − y|)  for x, y ∈ R^n,    (4.8)

where ω_f ∈ M. For any T > 0, we have

  u(x) = inf_{α∈A} ( ∫_0^T e^{−νt} f(X(t; x, α), α(t)) dt + e^{−νT} u(X(T; x, α)) ).
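The identity in Theorem 4.4 can be sanity-checked numerically on a control problem that is solvable by hand. The following Python sketch uses data of our own choosing (not from the text): n = 1, A = {−1, +1}, g(x, a) = a, f(x, a) = |x| and ν = 1, for which one can compute u(x) = |x| − 1 + e^{−|x|} directly. We verify the DPP with the optimal constant control a = −1 (for x > T > 0) and observe that a suboptimal control gives a strictly larger right hand side.

```python
import math

# Illustrative deterministic control problem (our own choice, not from
# the text): n = 1, control set A = {-1, +1}, dynamics g(x, a) = a,
# running cost f(x, a) = |x|, discount factor nu = 1.  For this data the
# value function can be computed by hand: u(x) = |x| - 1 + exp(-|x|).
def u(x):
    return abs(x) - 1.0 + math.exp(-abs(x))

def dpp_rhs(x, T, a, n=100_000):
    """Right-hand side of the dynamic programming principle when the
    constant control alpha(t) = a is used on [0, T]:
        int_0^T e^{-t} |x + a*t| dt + e^{-T} u(x + a*T)."""
    h = T / n
    integral = sum(math.exp(-(k + 0.5) * h) * abs(x + a * (k + 0.5) * h) * h
                   for k in range(n))  # midpoint quadrature
    return integral + math.exp(-T) * u(x + a * T)

x, T = 2.0, 1.0
err_optimal = abs(u(x) - dpp_rhs(x, T, -1.0))  # a = -1 is optimal while X(t) > 0
gap_suboptimal = dpp_rhs(x, T, +1.0) - u(x)    # a = +1 is strictly worse
```

The infimum over all measurable controls is attained here by the constant control steering the state toward the origin, so the DPP reduces to an equality for that single control.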

Proof. For fixed T > 0, we denote by v(x) the right hand side of the above.
Step 1: u(x) ≥ v(x). Fix any ε > 0, and choose α_ε ∈ A such that

  u(x) + ε ≥ ∫_0^∞ e^{−νt} f(X(t; x, α_ε), α_ε(t)) dt.

Setting x̂ = X(T; x, α_ε) and α̂_ε ∈ A by α̂_ε(t) = α_ε(T + t) for t ≥ 0, we have

  ∫_0^∞ e^{−νt} f(X(t; x, α_ε), α_ε(t)) dt
    = ∫_0^T e^{−νt} f(X(t; x, α_ε), α_ε(t)) dt + e^{−νT} ∫_0^∞ e^{−νt} f(X(t; x̂, α̂_ε), α̂_ε(t)) dt.

Here and later, without mentioning it, we use the fact that X(t + T; x, α) = X(t; x̂, α̂) for T > 0, t ≥ 0 and α ∈ A, where α̂(t) := α(t + T) (t ≥ 0) and x̂ := X(T; x, α). Indeed, the above relation holds true because of the uniqueness of solutions of (4.7) under assumptions (4.8). See Fig 4.3.

[Fig 4.3: the trajectory X(t; x, α) starting at x at t = 0, reaching x̂ = X(T; x, α) at t = T, and continuing as X(t + T; x, α) = X(t; x̂, α̂).]

Thus, taking the infimum in the second term of the right hand side of the above over A, we have

  u(x) + ε ≥ ∫_0^T e^{−νt} f(X(t; x, α_ε), α_ε(t)) dt + e^{−νT} u(x̂),

which implies the one-sided inequality by taking the infimum over A since ε > 0 is arbitrary.
Step 2: u(x) ≤ v(x). Fix ε > 0 again, and choose α_ε ∈ A such that

  v(x) + ε ≥ ∫_0^T e^{−νt} f(X(t; x, α_ε), α_ε(t)) dt + e^{−νT} u(x̂),

where x̂ := X(T; x, α_ε). We next choose α_1 ∈ A such that

  u(x̂) + ε ≥ ∫_0^∞ e^{−νt} f(X(t; x̂, α_1), α_1(t)) dt.

Now, setting

  α_0(t) := α_ε(t) for t ∈ [0, T),  and  α_0(t) := α_1(t − T) for t ≥ T,

we see that

  v(x) + 2ε ≥ ∫_0^∞ e^{−νt} f(X(t; x, α_0), α_0(t)) dt,

which gives the opposite inequality by taking the infimum over α_0 ∈ A since ε > 0 is arbitrary again. □

Now, we give an existence result for Bellman equations.

Theorem 4.5. Assume that (4.8) holds. Then, u is a viscosity solution of

  sup_{a∈A} {νu − ⟨g(x, a), Du⟩ − f(x, a)} = 0  in R^n.    (4.9)
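Before the proof, equation (4.9) can also be checked numerically on explicit data. The following sketch again uses an example of our own choosing (not from the text): A = {−1, +1}, g(x, a) = a, f(x, a) = |x|, ν = 1, so that (4.9) reads u + |Du| − |x| = 0 in R, with candidate value function u(x) = |x| − 1 + e^{−|x|}; finite differences confirm the equation pointwise away from the origin.

```python
import math

# Illustrative data (our own choice): A = {-1, +1}, g(x, a) = a,
# f(x, a) = |x|, nu = 1.  Then (4.9) becomes  u + |u'| - |x| = 0  in R,
# and the candidate value function is u(x) = |x| - 1 + exp(-|x|).
def u(x):
    return abs(x) - 1.0 + math.exp(-abs(x))

def bellman_residual(x, h=1e-6):
    """sup_{a in {-1,+1}} { u - a*u' - |x| } = u(x) + |u'(x)| - |x|,
    with u' approximated by a central difference."""
    du = (u(x + h) - u(x - h)) / (2.0 * h)
    return u(x) + abs(du) - abs(x)

# sample points away from x = 0 so the difference quotient is reliable
residuals = [abs(bellman_residual(x)) for x in (-3.0, -1.2, 0.5, 2.0)]
```

For x > 0 one has u′(x) = 1 − e^{−x}, so u + |u′| − x = 0 exactly; the numeric residual only reflects quadrature and rounding error.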

Sketch of proof. In Steps 1 and 2, we give a proof assuming u ∈ USC(R^n) and u ∈ LSC(R^n), respectively.
Step 1: Subsolution property. Fix ϕ ∈ C¹(R^n), and suppose that 0 = (u − ϕ)(x̂) ≥ (u − ϕ)(x) for some x̂ ∈ R^n and any x ∈ R^n. Fix any a_0 ∈ A, and set α_0(t) := a_0 for t ≥ 0 so that α_0 ∈ A.
For small s > 0, in view of Theorem 4.4, we have

  ϕ(x̂) − e^{−νs} ϕ(X(s; x̂, α_0)) ≤ u(x̂) − e^{−νs} u(X(s; x̂, α_0)) ≤ ∫_0^s e^{−νt} f(X(t; x̂, α_0), a_0) dt.

Setting X(t) := X(t; x̂, α_0) for simplicity, by (4.7), we see that

  e^{−νt} {νϕ(X(t)) − ⟨g(X(t), a_0), Dϕ(X(t))⟩} = − (d/dt)( e^{−νt} ϕ(X(t)) ).    (4.10)

Hence, we have

  0 ≥ ∫_0^s e^{−νt} {νϕ(X(t)) − ⟨g(X(t), a_0), Dϕ(X(t))⟩ − f(X(t), a_0)} dt.

Therefore, dividing the above by s > 0, and then sending s → 0, we have

  0 ≥ νϕ(x̂) − ⟨g(x̂, a_0), Dϕ(x̂)⟩ − f(x̂, a_0),

which implies the desired inequality in the definition by taking the supremum over A.
Step 2: Supersolution property. To show that u is a viscosity supersolution, we argue by contradiction. Suppose that there are x̂ ∈ R^n, θ > 0 and ϕ ∈ C¹(R^n) such that 0 = (u − ϕ)(x̂) ≤ (u − ϕ)(x) for x ∈ R^n, and that

  sup_{a∈A} {νϕ(x̂) − ⟨g(x̂, a), Dϕ(x̂)⟩ − f(x̂, a)} ≤ −2θ.

Thus, we can find ε > 0 such that

  sup_{a∈A} {νϕ(x) − ⟨g(x, a), Dϕ(x)⟩ − f(x, a)} ≤ −θ  for x ∈ B_ε(x̂).    (4.11)

By assumption (4.8) on g, setting t_0 := ε/(sup_{a∈A} ∥g(·, a)∥_{L^∞(R^n)} + 1) > 0, we easily see that

  |X(t; x̂, α) − x̂| ≤ ∫_0^t |X′(s; x̂, α)| ds ≤ ε  for t ∈ [0, t_0] and α ∈ A.

Hence, setting X(t) := X(t; x̂, α) for any fixed α ∈ A, (4.11) yields

  νϕ(X(t)) − ⟨g(X(t), α(t)), Dϕ(X(t))⟩ − f(X(t), α(t)) ≤ −θ  for t ∈ [0, t_0].    (4.12)

Since (4.10) holds with α in place of α_0, multiplying (4.12) by e^{−νt} and then integrating over [0, t_0], we obtain

  ϕ(x̂) − e^{−νt_0} ϕ(X(t_0)) − ∫_0^{t_0} e^{−νt} f(X(t), α(t)) dt ≤ −(θ/ν)(1 − e^{−νt_0}).

Thus, setting θ_0 := θ(1 − e^{−νt_0})/ν > 0, which is independent of α ∈ A, we have

  u(x̂) ≤ ∫_0^{t_0} e^{−νt} f(X(t), α(t)) dt + e^{−νt_0} u(X(t_0)) − θ_0.

Therefore, taking the infimum over A, we get a contradiction to Theorem 4.4. □

Correct proof, which the reader may skip on first reading.
Step 1: Subsolution property. Assume that there are x̂ ∈ R^n, θ > 0 and ϕ ∈ C¹(R^n) such that 0 = (u^* − ϕ)(x̂) ≥ (u^* − ϕ)(x) for x ∈ R^n and that

  sup_{a∈A} {νϕ(x̂) − ⟨g(x̂, a), Dϕ(x̂)⟩ − f(x̂, a)} ≥ 2θ.

In view of (4.8), there are a_0 ∈ A and r > 0 such that

  νϕ(x) − ⟨g(x, a_0), Dϕ(x)⟩ − f(x, a_0) ≥ θ  for x ∈ B_{2r}(x̂).    (4.13)

For large k ≥ 1, we can choose x_k ∈ B_{1/k}(x̂) such that u^*(x̂) ≤ u(x_k) + k^{−1} and |ϕ(x̂) − ϕ(x_k)| < 1/k. We will only use k such that 1/k ≤ r. Setting α_0(t) := a_0, we note that X_k(t) := X(t; x_k, α_0) ∈ B_{2r}(x̂) for t ∈ [0, t_0] with some t_0 > 0 and for large k. On the other hand, by Theorem 4.4, we have

  u(x_k) ≤ ∫_0^{t_0} e^{−νt} f(X_k(t), a_0) dt + e^{−νt_0} u(X_k(t_0)).

Thus, we have

  ϕ(x_k) − 2/k ≤ ϕ(x̂) − 1/k ≤ u(x_k) ≤ ∫_0^{t_0} e^{−νt} f(X_k(t), a_0) dt + e^{−νt_0} ϕ(X_k(t_0)).

Hence, by (4.13), as in Step 1 of the Sketch of proof, we see that

  −2/k ≤ ∫_0^{t_0} e^{−νt} {f(X_k(t), a_0) + ⟨g(X_k(t), a_0), Dϕ(X_k(t))⟩ − νϕ(X_k(t))} dt ≤ −(θ/ν)(1 − e^{−νt_0}),

which is a contradiction for large k.
Step 2: Supersolution property. Assume that there are x̂ ∈ R^n, θ > 0 and ϕ ∈ C¹(R^n) such that 0 = (u_* − ϕ)(x̂) ≤ (u_* − ϕ)(x) for x ∈ R^n and that

  sup_{a∈A} {νϕ(x̂) − ⟨g(x̂, a), Dϕ(x̂)⟩ − f(x̂, a)} ≤ −2θ.

In view of (4.8), there is r > 0 such that

  νϕ(x) − ⟨g(x, a), Dϕ(x)⟩ − f(x, a) ≤ −θ  for x ∈ B_{2r}(x̂) and a ∈ A.    (4.14)

For large k ≥ 1, we can choose x_k ∈ B_{1/k}(x̂) such that u_*(x̂) ≥ u(x_k) − k^{−1} and |ϕ(x̂) − ϕ(x_k)| < 1/k. In view of (4.8), there is t_0 > 0 such that

  X(t; x_k, α) ∈ B_{2r}(x̂)  for all k ≥ 1/r, α ∈ A and t ∈ [0, t_0].

Now, we select α_k ∈ A such that

  u(x_k) + 1/k ≥ ∫_0^{t_0} e^{−νt} f(X(t; x_k, α_k), α_k(t)) dt + e^{−νt_0} u(X(t_0; x_k, α_k)).

Setting X_k(t) := X(t; x_k, α_k), we have

  ϕ(x_k) + 3/k ≥ ϕ(x̂) + 2/k ≥ u(x_k) + 1/k ≥ ∫_0^{t_0} e^{−νt} f(X_k(t), α_k(t)) dt + e^{−νt_0} ϕ(X_k(t_0)).

Hence, we have

  3/k ≥ ∫_0^{t_0} e^{−νt} {⟨g(X_k(t), α_k(t)), Dϕ(X_k(t))⟩ + f(X_k(t), α_k(t)) − νϕ(X_k(t))} dt.

Applying (4.14) along X_k with α_k in the above, we have

  3/k ≥ θ ∫_0^{t_0} e^{−νt} dt,

which is a contradiction for large k ≥ 1. □

4.2.2 Isaacs equation

In this subsection, we study fully nonlinear PDEs (i.e. those for which p ∈ R^n → F(x, p) is neither convex nor concave) arising in differential games.

We are given continuous functions f : R^n × A × B → R and g : R^n × A × B → R^n such that

  (1) sup_{(a,b)∈A×B} { ∥f(·, a, b)∥_{L^∞(R^n)} + ∥g(·, a, b)∥_{W^{1,∞}(R^n)} } < ∞,
  (2) sup_{(a,b)∈A×B} |f(x, a, b) − f(y, a, b)| ≤ ω_f(|x − y|)  for x, y ∈ R^n,    (4.15)

where ω_f ∈ M. Under (4.15), we shall consider Isaacs equations:

  sup_{a∈A} inf_{b∈B} {νu − ⟨g(x, a, b), Du⟩ − f(x, a, b)} = 0  in R^n,    (4.16)

and

  inf_{b∈B} sup_{a∈A} {νu − ⟨g(x, a, b), Du⟩ − f(x, a, b)} = 0  in R^n.    (4.16′)

As in the previous subsection, we shall derive the expected solution. We first introduce some notations: While we will use the same notation A as before, we set

  B := {β : [0, ∞) → B | β(·) is measurable}.

Next, we introduce the so-called sets of “non-anticipating strategies”:

  Γ := { γ : A → B | for any T > 0, if α_1, α_2 ∈ A satisfy α_1(t) = α_2(t) for a.a. t ∈ (0, T), then γ[α_1](t) = γ[α_2](t) for a.a. t ∈ (0, T) }

and

  ∆ := { δ : B → A | for any T > 0, if β_1, β_2 ∈ B satisfy β_1(t) = β_2(t) for a.a. t ∈ (0, T), then δ[β_1](t) = δ[β_2](t) for a.a. t ∈ (0, T) }.

Using these notations, we will consider maximizing-minimizing problems of the following cost functional: For α ∈ A, β ∈ B, and x ∈ R^n,

  J(x, α, β) := ∫_0^∞ e^{−νt} f(X(t; x, α, β), α(t), β(t)) dt,

where X(·; x, α, β) is the (unique) solution of

  X′(t) = g(X(t), α(t), β(t)) for t > 0,  X(0) = x.    (4.17)

The expected solutions of (4.16) and (4.16′), respectively, are given by

  u(x) = sup_{γ∈Γ} inf_{α∈A} ∫_0^∞ e^{−νt} f(X(t; x, α, γ[α]), α(t), γ[α](t)) dt,

and

  v(x) = inf_{δ∈∆} sup_{β∈B} ∫_0^∞ e^{−νt} f(X(t; x, δ[β], β), δ[β](t), β(t)) dt.

We call u and v the upper and lower value functions of this differential game, respectively. In fact, under appropriate hypotheses, we expect that v ≤ u, which cannot be proved easily. To show v ≤ u, we first observe that u and v are, respectively, viscosity solutions of (4.16) and (4.16′). Noting that

  sup_{a∈A} inf_{b∈B} {νr − ⟨g(x, a, b), p⟩ − f(x, a, b)} ≤ inf_{b∈B} sup_{a∈A} {νr − ⟨g(x, a, b), p⟩ − f(x, a, b)}

for (x, r, p) ∈ R^n × R × R^n, we see that u (resp., v) is a viscosity supersolution (resp., subsolution) of (4.16′) (resp., (4.16)). Thus, the standard comparison principle implies v ≤ u in R^n (under a suitable growth condition on u and v as |x| → ∞).
We shall only deal with u since the corresponding results for v can be obtained in a symmetric way. To show that u is a viscosity solution of the Isaacs equation (4.16), we first establish the dynamic programming principle as in the previous subsection:

Theorem 4.6. (Dynamic Programming Principle) Assume that (4.15) holds. Then, for T > 0, we have

  u(x) = sup_{γ∈Γ} inf_{α∈A} ( ∫_0^T e^{−νt} f(X(t; x, α, γ[α]), α(t), γ[α](t)) dt + e^{−νT} u(X(T; x, α, γ[α])) ).
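Before the proof, recall the elementary inequality “sup inf ≤ inf sup” invoked above; it can already be strict for finite games. A minimal Python illustration, with a payoff matrix of our own choosing:

```python
# Toy two-player game (payoffs chosen by us for illustration): the row
# player picks a in {0, 1} and maximizes, the column player picks b in
# {0, 1} and minimizes the payoff below.
payoff = [[0.0, 1.0],
          [1.0, 0.0]]

# sup_a inf_b: the maximizer moves first and the minimizer reacts.
sup_inf = max(min(row) for row in payoff)
# inf_b sup_a: the minimizer moves first and the maximizer reacts.
inf_sup = min(max(payoff[a][b] for a in range(2)) for b in range(2))
```

Here whoever moves second has the advantage: sup_inf = 0 while inf_sup = 1, so the inequality is strict; strategies (the sets Γ and ∆ above) are what restore a meaningful value in the continuous-time game.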

Proof. For a fixed T > 0, we denote by w(x) the right hand side of the above.
Step 1: u(x) ≤ w(x). For any ε > 0, we choose γ_ε ∈ Γ such that

  u(x) − ε ≤ inf_{α∈A} ∫_0^∞ e^{−νt} f(X(t; x, α, γ_ε[α]), α(t), γ_ε[α](t)) dt =: I_ε.

For any fixed α_0 ∈ A, we define the mapping T_0 : A → A by

  T_0[α](t) := α_0(t) for t ∈ [0, T),  and  T_0[α](t) := α(t − T) for t ∈ [T, ∞),  for α ∈ A.

Thus, for any α ∈ A, we have

  I_ε ≤ ∫_0^T e^{−νt} f(X(t; x, α_0, γ_ε[α_0]), α_0(t), γ_ε[α_0](t)) dt
     + ∫_T^∞ e^{−νt} f(X(t; x, T_0[α], γ_ε[T_0[α]]), T_0[α](t), γ_ε[T_0[α]](t)) dt
   =: I_ε^1 + I_ε^2.

We next define γ̂ ∈ Γ by γ̂[α](t) := γ_ε[T_0[α]](t + T) for t ≥ 0 and α ∈ A. Note that γ̂ belongs to Γ. Setting x̂ := X(T; x, α_0, γ_ε[α_0]), we have

  I_ε^2 = e^{−νT} ∫_0^∞ e^{−νt} f(X(t; x̂, α, γ̂[α]), α(t), γ̂[α](t)) dt.

Taking the infimum over α ∈ A, we have

  u(x) − ε ≤ I_ε^1 + e^{−νT} inf_{α∈A} ∫_0^∞ e^{−νt} f(X(t; x̂, α, γ̂[α]), α(t), γ̂[α](t)) dt =: I_ε^1 + Î_ε^2.

w(x) − ε ≤ inf  α∈A

0

T



e−νt f (X(t; x, α, γε1 [α]), α(t), γε1 [α](t))dt  . +e−νT u(X(T ; x, α, γε1 [α]))

54

For any fixed α0 ∈ A, setting xˆ = X(T ; x, α0 , γε1 [α0 ]), we have w(x) − ε ≤



T

0

e−νt f (X(t; x, α0 , γε1 [α0 ]), α0 (t), γε1 [α0 ](t))dt + e−νT u(ˆ x).

Next, we choose γε2 ∈ Γ such that u(ˆ x) − ε ≤ inf





α∈A 0

e−νt f (X(t; xˆ, α, γε2 [α]), α(t), γε2 [α](t))dt. =: I.

For α ∈ A, we define the mapping T1 : A → A by T1 [α](t) := α(t + T ) for t ≥ 0. Thus, we have I≤



∞ 0

ˆ e−νt f (X(t; xˆ, T1 [α0 ], γε2 [T1 [α0 ]]), T1 [α0 ](t), γε2 [T1 [α0 ]](t))dt =: I.

Now, for α ∈ A, setting {

γˆ [α](t) :=

γε1 [α](t) for t ∈ [0, T ), 2 γε [T1 [α]](t − T ) for t ∈ [T, ∞),

ˆ and X(t) := X(t; xˆ, T1 [α0 ], γε2 [T1 [α0 ]]), we have Iˆ =



=e



T νT

ˆ − T ), T1 [α0 ](t − T ), γ 2 [T1 [α0 ]](t − T ))dt e−ν(t−T ) f (X(t ε





T

ˆ − T ), α0 (t), γˆ [α0 ](t))dt. e−νt f (X(t

Since {

X(t; x, α0 , γˆ [α0 ]) =

X(t; x, α0 , γε1 [α0 ]) for t ∈ [0, T ), ˆ − T) X(t for t ∈ [T, ∞),

we have w(x) − 2ε ≤



∞ 0

e−νt f (X(t; x, α0 , γˆ [α0 ]), α0 (t), γˆ [α0 ](t))dt.

Since α0 is arbitrary, we have w(x) − 2ε ≤ inf



α∈A 0



e−νt f (X(t; x, α, γˆ [α]), α(t), γˆ [α](t))dt, 55

which yields the assertion by taking the supremum over Γ and then, by sending ε → 0. 2 Now, we shall verify that the value function u is a viscosity solution of (4.16). Since we only give a sketch of proofs, one can skip the following theorem. For a correct proof, we refer to [1], originally by Evans-Souganidis (1984). Theorem 4.7. Assume that (4.15) holds. (1) Then, u is a viscosity subsolution of (4.16). (2) Assume also the following properties:   (i) A ⊂ Rm is compact for some integer m ≥ 1.    (ii) there is an ω ∈ M such that A

   

|f (x, a, b) − f (x, a′ , b)| + |g(x, a, b) − g(x, a′ , b)| ≤ ωA (|a − a′ |) for x ∈ Rn , a, a′ ∈ A and b ∈ B.

(4.18)

Then, u is a viscosity supersolution of (4.16). Remark. To show that v is a viscosity subsolution of (4.16′ ), instead of (4.18), we need to suppose the following hypotheses:   (i)   

B ⊂ Rm is compact for some integer m ≥ 1. (ii) there is an ωB ∈ M such that  |f (x, a, b) − f (x, a, b′ )| + |g(x, a, b) − g(x, a, b′ )| ≤ ωB (|b − b′ |)    for x ∈ Rn , b, b′ ∈ B and a ∈ A,

(4.18′ )

while to verify that v is a viscosity supersolution of (4.16′ ), we only need (4.15). Sketch of proof. We shall only prove the assertion assuming that u ∈ U SC(Rn ) and u ∈ LSC(Rn ) in Step 1 and 2, respectively. To give a correct proof without the semi-continuity assumption, we need a bit careful analysis similar to the proof for Bellman equations. We omit the correct proof here. Step 1: Subsolution property. Suppose that the subsolution property fails; there are x ∈ Rn , θ > 0 and ϕ ∈ C 1 (Rn ) such that 0 = (u − ϕ)(x) ≥ (u − ϕ)(y) (for all y ∈ Rn ) and sup inf {νu(x) − ⟨g(x, a, b), Dϕ(x)⟩ − f (x, a, b)} ≥ 3θ.

a∈A b∈B

We note that X(·; x, α, γ[α]) are uniformly continuous for any (α, γ) ∈ A × Γ in view of (4.15).

56

Thus, we can choose that a0 ∈ A such that inf {νϕ(x) − ⟨g(x, a0 , b), Dϕ(x)⟩ − f (x, a0 , b)} ≥ 2θ.

b∈B

For any γ ∈ Γ, setting α0 (t) = a0 for t ≥ 0, we simply write X(·) for X(·; x, α0 , γ[α0 ]). Thus, we find small t0 > 0 such that νϕ(X(t)) − ⟨g(X(t), a0 , γ[α0 ](t)), Dϕ(X(t))⟩ − f (X(t), a0 , γ[α0 ](t)) ≥ θ for t ∈ [0, t0 ]. Multiplying e−νt in the above and then, integrating it over [0, t0 ], we have θ (1 − e−νt0 ) ≤ − ν



{

t0

0

}

) d ( −νt e ϕ(X(t)) + e−νt f (X(t), a0 , γ[α0 ](t)) dt dt ∫

= ϕ(x) − e−νt0 ϕ(X(t0 )) −

t0

e−νt f (X(t), a0 , γ[α0 ](t))dt.

0

Hence, we have θ u(x) − (1 − e−νt0 ) ≥ ν



t0

ˆ e−νt f (X(t), a0 , γ[α0 ](t))dt + e−νt0 u(X(t0 )) =: I.

0

Taking the infimum over A, we have  ∫

Iˆ ≥ inf  α∈A

0

t0



e−νt f (X(t; x, α, γ[α]), α(t), γ[α](t))dt  . +e−νt0 u(X(t0 ; x, α, γ[α]))

Therefore, since γ ∈ Γ is arbitrary, we have  ∫

θ u(x) − (1 − e−νt0 ) ≥ sup inf  ν γ∈Γ α∈A

t0 0



e−νt f (X(t; x, α, γ[α]), α(t), γ[α](t))dt  , +e−νt0 u(X(t0 ; x, α, γ[α]))

which contradicts Theorem 4.6. Step 2: Supersolution property. Suppose that the supersolution property fails; there are x ∈ Rn , θ > 0 and ϕ ∈ C 1 (Rn ) such that 0 = (u − ϕ)(x) ≤ (u − ϕ)(y) for y ∈ Rn , and sup inf {νu(x) − ⟨g(x, a, b), Dϕ(x)⟩ − f (x, a, b)} ≤ −3θ.

a∈A b∈B

For any a ∈ A, there is b(a) ∈ B such that νu(x) − ⟨g(x, a, b(a)), Dϕ(x)⟩ − f (x, a, b(a)) ≤ −2θ.

57

In view of (4.18), there is ε(a) > 0 such that if |a − a′ | < ε(a) and |x − y| < ε(a), then we have νϕ(y) − ⟨g(y, a′ , b(a)), Dϕ(y)⟩ − f (y, a′ , b(a)) ≤ −θ. From the compactness of A, we may select {ak }M k=1 such that A=

M ∪

Ak ,

k=1

where Ak := {a ∈ A | |a − ak | < ε(ak )}. ˆ ˆ Furthermore, we set Aˆ1 = A1 , and inductively, Aˆk := Ak \ ∪k−1 j=1 Aj ; Ak ∩ Aj = ∅ for k ̸= j. We may also suppose that Aˆk ̸= ∅ for k = 1, . . . , M . For α ∈ A, we define γ0 [α](t) := b(ak )

provided α(t) ∈ Aˆk .

Now, setting X(t) := X(t; x, α, γ0 [α]), we find t0 > 0 such that νϕ(X(t)) − ⟨g(X(t), α(t), γ0 [α](t)), Dϕ(X(t))⟩ − f (X(t), α(t), γ0 [α](t)) ≤ −θ for t ∈ [0, t0 ]. Multiplying e−νt in the above and then, integrating it, we obtain ϕ(x) − e−νt0 ϕ(X(t0 )) −



t0

0

θ e−νt f (X(t), α(t), γ0 [α](t))dt ≤ − (1 − e−νt0 ). ν

Since α ∈ A is arbitrary, we have  ∫

θ u(x) + (1 − e−νt0 ) ≤ inf  α∈A ν

0

t0



e−νt f (X(t; x, α, γ0 [α]), α(t), γ0 [α](t))dt  , +e−νt0 u(X(t0 ; x, α, γ0 [α]))

which contradicts Theorem 4.6 by taking the supremum over Γ.

4.3

2

Stability

In this subsection, we present a stability result for viscosity solutions, which is one of the most important properties for “solutions” as noted in section 1. Thus, this result justifies our notion of viscosity solutions. However, since we will only use Proposition 4.8 below in section 7.3, the reader may skip the proof. 58

First of all, for possibly discontinuous F : Ω × R × Rn × S n → R, we are concerned with F (x, u, Du, D2 u) = 0 in Ω. (4.19) We introduce the following notation: {

} y ∈ Ω ∩ B (x), |s − r| < ε, ε F (y, s, q, Y ) , |q − p| < ε, ∥Y − X∥ < ε

{

} y ∈ Ω ∩ B (x), |s − r| < ε, ε F (y, s, q, Y ) . |q − p| < ε, ∥Y − X∥ < ε

F∗ (x, r, p, X) := lim inf ε→0



F (x, r, p, X) := lim sup ε→0

Definition. We call u : Ω → R a viscosity subsolution (resp., supersolution) of (4.19) if u∗ (resp., u∗ ) is a viscosity subsolution (resp., supersoluion) of F∗ (x, u, Du, D2 u) ≤ 0

(

)

resp., F ∗ (x, u, Du, D2 u) ≥ 0

in Ω.

We call u : Ω → R a viscosity solution of (4.19) if it is both a viscosity sub- and supersolution of (4.19). Now, for given continuous functions Fk : Ω × R × Rn × S n → R, we set F (x, r, p, X) 

|y − x| < 1/k, |s − r| < 1/k, := lim inf Fj (y, s, q, Y ) |q − p| < 1/k, ∥Y − X∥ < 1/k  k→∞  and j ≥ k

  

F (x, r, p, X)

  

 

|y − x| < 1/k, |s − r| < 1/k, := lim sup Fj (y, s, q, Y ) |q − p| < 1/k, ∥Y − X∥ < 1/k  k→∞  and j ≥ k  

 

 

,

.

Our stability result is as follows. Proposition 4.8. Let Fk : Ω × R × Rn × S n → R be continuous functions. Let uk : Ω → R be a viscosity subsolution (resp., supersolution) of Fk (x, uk , Duk , D2 uk ) = 0 in Ω.

59

Setting u (resp., u) by u(x) := lim sup{(uj )∗ (y) | y ∈ B1/k (x) ∩ Ω, j ≥ k} k→∞

(

resp., u(x) := lim inf{(uj )∗ (y) | y ∈ B1/k (x) ∩ Ω, j ≥ k}

)

k→∞

for x ∈ Ω, then u (resp., u) is a viscosity subsolution (resp., supersolution) of F (x, u, Du, D2 u) ≤ 0 (resp., F (x, u, Du, D2 u) ≥ 0) in Ω. Remark. We note that u ∈ U SC(Ω), u ∈ LSC(Ω), F ∈ LSC(Ω × R × R × S n ) and F ∈ U SC(Ω × R × Rn × S n ). n

Proof. We only give a proof for subsolutions since the other can be shown similarly. Given ϕ ∈ C 2 (Ω), we let x0 ∈ Ω be such that 0 = (u−ϕ)(x0 ) > (u−ϕ)(x) for x ∈ Ω \ {x0 }. We shall show that F (x0 , u(x0 ), Dϕ(x0 ), D2 ϕ(x0 )) ≤ 0. We may choose xk ∈ Br (x0 ) (for a subsequence if necessary), where r ∈ (0,dist(x0 , ∂Ω)), such that lim xk = x0

k→∞

and

lim (uk )∗ (xk ) = u(x0 ).

k→∞

(4.20)

We select yk ∈ B r (x0 ) such that ((uk )∗ − ϕ)(yk ) = supBr (x0 ) ((uk )∗ − ϕ). We may also suppose that limk→∞ yk = z for some z ∈ B r (x0 ) (taking a subsequence if necessary). Since ((uk )∗ − ϕ)(yk ) ≥ ((uk )∗ − ϕ)(xk ), (4.20) implies 0 = lim inf ((uk )∗ − ϕ)(xk ) ≤ lim inf ((uk )∗ − ϕ)(yk ) k→∞

k→∞

≤ lim inf (uk )∗ (yk ) − ϕ(z) k→∞

≤ lim sup(uk )∗ (yk ) − ϕ(z) ≤ (u − ϕ)(z). k→∞

Thus, this yields z = x0 and limk→∞ (uk )∗ (yk ) = u(x0 ). Hence, we see that yk ∈ Br (x0 ) for large k ≥ 1. Since (uk )∗ − ϕ attains a maximum over Br (x0 ) at yk ∈ Br (x0 ), by the definition of uk (with Proposition 2.4 for Ω′ = Br (x0 )), we have Fk (yk , (uk )∗ (yk ), Dϕ(yk ), D2 ϕ(yk )) ≤ 0, which concludes the proof by taking the limit infimum with the definition of F. 2 60

5

Generalized boundary value problems

In order to obtain the uniqueness of solutions of an ODE, we have to suppose certain initial or boundary condition. In the study of PDEs, we need to impose appropriate conditions on ∂Ω for the uniqueness of solutions. Following the standard PDE theory, we shall treat a few typical boundary conditions in this section. Since we are mainly interested in degenerate elliptic PDEs, we cannot expect “solutions” to satisfy the given boundary condition on the whole of ∂Ω. The simplest example is as follows: For Ω := (0, 1), consider the “degenerate” elliptic PDE −

du + u = 0 in (0, 1). dx

Note that it is impossible to find a solution u of the above such that u(0) = u(1) = 1. Our plan is to propose a definition of “generalized” solutions for boundary value problems. For this purpose, we extend the notion of viscosity solutions to possibly discontinuous PDEs on Ω while we normally consider those in Ω. For general G : Ω × R × Rn × S n → R, we are concerned with G(x, u, Du, D2 u) = 0 in Ω.

(5.1)

As in section 4.3, we define {

} y ∈ Ω ∩ B (x), |s − r| < ε, ε G(y, s, q, Y ) , |q − p| < ε, ∥Y − X∥ < ε

{

} y ∈ Ω ∩ B (x), |s − r| < ε, ε G(y, s, q, Y ) . |q − p| < ε, ∥Y − X∥ < ε

G∗ (x, r, p, X) := lim inf ε→0

G∗ (x, r, p, X) := lim sup ε→0

Definition. We call u : Ω → R a viscosity subsolution (resp., supersolution) of (5.1) if, for any ϕ ∈ C 2 (Ω), G∗ (x, u∗ (x), Dϕ(x), D2 ϕ(x)) ≤ 0 (

)

resp., G∗ (x, u∗ (x), Dϕ(x), D2 ϕ(x)) ≥ 0

provided that u∗ − ϕ (resp., u∗ − ϕ) attains its maximum (resp., minimum) at x ∈ Ω. 61

We call u : Ω → R a viscosity solution of (5.1) if it is both a viscosity sub- and supersolution of (5.1). Our comparison principle in this setting is as follows: “Comparison principle in this setting” viscosity subsolution u of (5.1) viscosity supersolution v of (5.1)

}

=⇒ u ≤ v in Ω

Note that the boundary condition is contained in the definition. Using the above new definition, we shall formulate the boundary value problems in the viscosity sense. Given F : Ω × R × Rn × S n → R and B : ∂Ω × R × Rn × S n → R, we investigate general boundary value problems {

F (x, u, Du, D2 u) = 0 in Ω, B(x, u, Du, D2 u) = 0 on ∂Ω.

(5.2)

Setting G by {

G(x, r, p, X) :=

F (x, r, p, X) for x ∈ Ω, B(x, r, p, X) for x ∈ ∂Ω,

we give the definition of boundary value problems (5.2) in the viscosity sense. Definition. We call u : Ω → R a viscosity subsolution (resp., supersolution) of (5.2) if it is a viscosity subsolution (resp., supersolution) of (5.1), where G is defined in the above. We call u : Ω → R a viscosity solution of (5.2) if it is both a viscosity sub- and supersolution of (5.2). Remark. When F and B are continuous and G is given as above, G∗ and G can be expressed in the following manner: ∗

{

F (x, r, p, X) min{F (x, r, p, X), B(x, r, p, X)} { F (x, r, p, X) G∗ (x, r, p, X) = max{F (x, r, p, X), B(x, r, p, X)} G∗ (x, r, p, X) =

62

for x ∈ Ω, for x ∈ ∂Ω, for x ∈ Ω, for x ∈ ∂Ω.

It is not hard to extend the existence and stability results corresponding to Theorem 4.3 and Proposition 4.8, respectively, to viscosity solutions in the above sense. However, it is not straightforward to show the comparison principle in this new setting. Thus, we shall concentrate our attention to the comparison principle, which implies the uniqueness (and continuity) of viscosity solutions. The main difficulty to prove the comparison principle is that we have to “avoid” the boundary conditions for both of viscosity sub- and supersolutions. To explain this, let us consider the case when G is given by (5.2). Let u and v be, respectively, a viscosity sub- and supersolution of (5.1). We shall observe that the standard argument in Theorem 3.7 does not work. For ε > 0, suppose that (x, y) → u(x) − v(y) − (2ε)−1 |x − y|2 attains its maximum at (xε , yε ) ∈ Ω × Ω. Notice that there is NO reason to verify that (xε , yε ) ∈ Ω × Ω. The worst case is that (xε , yε ) ∈ ∂Ω × ∂Ω. In fact, in view of Lemma 3.6, 2,+ we find X, Y ∈ S n such that ((xε − yε )/ε, X) ∈ J Ω u(xε ), ((xε − yε )/ε, Y ) ∈ 2,− J Ω v(yε ), the matrix inequalities in Lemma 3.6 hold for X, Y . Hence, we have {

(

min F xε , u(xε ),

)

(

)

(

xε − yε xε − yε , X , B xε , u(xε ), ,X ε ε

)}

≤0

and {

(

xε − yε xε − yε max F yε , v(yε ), , Y , B yε , v(yε ), ,Y ε ε

)}

≥ 0.

However, even if we suppose that (3.21) holds for F and B “in Ω”, we cannot get any contradiction when (

F xε , u(xε ), or

(

)

(

)

)

(

)

xε − yε xε − yε , X ≤ 0 ≤ B yε , v(yε ), ,Y ε ε

xε − yε xε − yε B xε , u(xε ), , X ≤ 0 ≤ F yε , v(yε ), ,Y . ε ε It seems impossible to avoid this difficulty as long as we use |x − y|2 /(2ε) as “test functions”. 63

Our plan to go beyond this difficulty is to find new test functions ϕ_ε(x, y) (instead of |x − y|²/(2ε)) so that the function (x, y) → u(x) − v(y) − ϕ_ε(x, y) attains its maximum over Ω̄ × Ω̄ at an interior point (x_ε, y_ε) ∈ Ω × Ω. To this end, since we will use several “perturbation” techniques, we suppose two hypotheses on F: First, we shall suppose the following continuity of F with respect to the (p, X)-variables:

  There is an ω_0 ∈ M such that
    |F(x, p, X) − F(x, q, Y)| ≤ ω_0(|p − q| + ∥X − Y∥)
  for x ∈ Ω̄, p, q ∈ R^n, X, Y ∈ S^n.    (5.3)

The next assumption is a bit stronger than the structure condition (3.21):

  There is ω̂_F ∈ M such that if X, Y ∈ S^n and µ > 1 satisfy
    −3µ [ I 0; 0 I ] ≤ [ X 0; 0 −Y ] ≤ 3µ [ I −I; −I I ]
  (block matrices in S^{2n}), then
    F(y, p, Y) − F(x, p, X) ≤ ω̂_F(|x − y|(1 + |p| + µ|x − y|))
  for x, y ∈ Ω̄, p ∈ R^n, X, Y ∈ S^n.    (5.4)

5.1 Dirichlet problem

First, we consider Dirichlet boundary value problems (Dirichlet problems for short) in the above sense. Assuming that viscosity sub- and supersolutions are continuous on ∂Ω, we will obtain the comparison principle for them. We now recall the classical Dirichlet problem

  νu + F(x, Du, D²u) = 0 in Ω,  u − g = 0 on ∂Ω.    (5.5)

Note that the Dirichlet problem for (5.5) in the viscosity sense is as follows:

  subsolution ⇐⇒ νu + F(x, Du, D²u) ≤ 0 in Ω, and min{νu + F(x, Du, D²u), u − g} ≤ 0 on ∂Ω,

and

  supersolution ⇐⇒ νu + F(x, Du, D²u) ≥ 0 in Ω, and max{νu + F(x, Du, D²u), u − g} ≥ 0 on ∂Ω.

We shall suppose the following property on the shape of Ω, which may be called an “interior cone condition” (see Fig 5.1):

  For each z ∈ ∂Ω, there are r̂, ŝ ∈ (0, 1) such that
    x − rn(z) + rξ ∈ Ω  for x ∈ Ω̄ ∩ B_{r̂}(z), r ∈ (0, r̂) and ξ ∈ B_{ŝ}(0).    (5.6)

Here and later, we denote by n(z) the unit outward normal vector at z ∈ ∂Ω.

[Fig 5.1: the interior cone condition; starting from x ∈ Ω̄ ∩ B_{r̂}(z) near z ∈ ∂Ω, the point x − rn(z) together with the ball of radius ŝr around it lies inside Ω.]

Theorem 5.1. Assume that ν > 0, (5.3), (5.4) and (5.6) hold. For g ∈ C(∂Ω), let u and v : Ω̄ → R be, respectively, a viscosity sub- and supersolution of (5.5) such that

  lim inf_{Ω∋x→z} u^*(x) ≥ u^*(z)  and  lim sup_{Ω∋x→z} v_*(x) ≤ v_*(z)  for z ∈ ∂Ω.    (5.7)

Then, u^* ≤ v_* in Ω̄.

Remark. Notice that (5.7) implies the continuity of u^* and v_* on ∂Ω.

Proof. Suppose that max_{Ω̄}(u^* − v_*) =: θ > 0. We simply write u and v for u^* and v_*, respectively.
Case 1: max_{∂Ω}(u − v) = θ. We choose z ∈ ∂Ω such that (u − v)(z) = θ. We shall divide into three cases:
Case 1-1: u(z) > g(z). For ε, δ ∈ (0, 1), where δ > 0 will be fixed later, setting

  ϕ(x, y) := (2ε²)^{−1}|x − y − εδn(z)|² + δ|x − z|²,

we let (x_ε, y_ε) ∈ Ω̄ × Ω̄ be a maximum point of Φ(x, y) := u(x) − v(y) − ϕ(x, y) over Ω̄ × Ω̄.

Since z − εδn(z) ∈ Ω for small ε > 0 by (5.6), Φ(xε, yε) ≥ Φ(z, z − εδn(z)) implies that

    |xε − yε − εδn(z)|²/(2ε²) ≤ u(xε) − v(yε) − u(z) + v(z − εδn(z)) − δ|xε − z|².          (5.8)

Since |xε − yε| ≤ Mε, where M := 2(max_Ω u − min_Ω v − u(z) + v(z) + 1)^{1/2}, for small ε > 0, we may suppose that (xε, yε) → (x̂, x̂) and (xε − yε)/ε → ẑ for some x̂ ∈ Ω and ẑ ∈ R^n as ε → 0 along a subsequence (denoted by ε again). Thus, from the continuity (5.7) of v at z ∈ ∂Ω, (5.8) implies that

    θ ≤ u(x̂) − v(x̂) − δ|x̂ − z|²,

which yields x̂ = z. Moreover, we have

    lim_{ε→0} |xε − yε − εδn(z)|²/ε² = 0,

which implies that

    lim_{ε→0} |xε − yε|/ε = δ.          (5.9)

Furthermore, we note that yε = xε − εδn(z) + o(ε) ∈ Ω because of (5.6). Applying Lemma 3.6 with Proposition 2.7 to u(x) + ε^{−1}δ⟨n(z), x⟩ − δ|x − z|² − 2^{−1}δ² and v(y) + ε^{−1}δ⟨n(z), y⟩, we find X, Y ∈ S^n such that

    ((xε − yε)/ε² − (δ/ε)n(z) + 2δ(xε − z), X + 2δI) ∈ J̄^{2,+}_Ω u(xε),          (5.10)

    ((xε − yε)/ε² − (δ/ε)n(z), Y) ∈ J̄^{2,−}_Ω v(yε),          (5.11)

and

    −(3/ε²)(I O; O I) ≤ (X O; O −Y) ≤ (3/ε²)(I −I; −I I).

Putting pε := ε^{−2}(xε − yε) − δε^{−1}n(z), by (5.3), we have

    F(xε, pε, X) − F(xε, pε + 2δ(xε − z), X + 2δI) ≤ ω0(2δ|xε − z| + 2δ).          (5.12)

Since yε ∈ Ω, and u(xε) > g(xε) for small ε > 0 provided xε ∈ ∂Ω, in view of (5.10) and (5.11), we have

    ν(u(xε) − v(yε)) ≤ F(yε, pε, Y) − F(xε, pε + 2δ(xε − z), X + 2δI).

Combining this with (5.12), by (5.4), we have

    ν(u(xε) − v(yε)) ≤ ω0(2δ|xε − z| + 2δ) + ω̂F(|xε − yε|(1 + |pε| + |xε − yε|/ε²)).

Sending ε → 0 together with (5.9) in the above, we have

    νθ ≤ ω0(2δ) + ω̂F(2δ²),

which is a contradiction for small δ > 0, depending only on θ and ν.

Case 1-2: v(z) < g(z). To get a contradiction, we argue as above, replacing ϕ(x, y) by ψ(x, y) := (2ε²)^{−1}|x − y + εδn(z)|² + δ|x − z|², so that xε = yε − εδn(z) + o(ε) ∈ Ω for small ε > 0. Note that here we need the continuity of u on ∂Ω in (5.7), while the other half of (5.7) was needed in Case 1-1. (See also the proof of Theorem 5.3 below.)

Case 1-3: u(z) ≤ g(z) and v(z) ≥ g(z). This does not occur because 0 < θ = (u − v)(z) ≤ 0.

Case 2: sup_∂Ω(u − v) < θ. In this case, using the standard test function |x − y|²/(2ε) (without the δ|x − z|² term), we can follow the same argument as in the proof of Theorem 3.7.   □

Remark. Unfortunately, without assuming the continuity of viscosity solutions on ∂Ω, the comparison principle fails in general. In fact, setting F(x, r, p, X) ≡ r and g(x) ≡ −1, consider the function

    u(x) := 0 for x ∈ Ω,  and  u(x) := −1 for x ∈ ∂Ω.

Note that u^* ≡ 0 and u_* ≡ u in Ω, which are, respectively, a viscosity sub- and supersolution of the associated G(x, u, Du, D²u) = 0 in Ω. Therefore, this example shows that the comparison principle fails in general without assumption (5.7).

5.2  State constraint problem

The state constraint boundary condition arises in a typical optimal control problem. Thus, if the reader is more interested in the PDE theory, he/she may skip Proposition 5.2 below, which explains why we adopt the "state constraint boundary condition" in Theorem 5.3.


To explain our motivation, we shall consider Bellman equations of first order:

    sup_{a∈A} {νu − ⟨g(x, a), Du⟩ − f(x, a)} = 0  in Ω.

Here, we use the notations in section 4.2.1. We introduce the following set of controls: for x ∈ Ω,

    A(x) := {α(·) ∈ A | X(t; x, α) ∈ Ω for t ≥ 0}.

We shall suppose that

    A(x) ≠ ∅ for all x ∈ Ω.          (5.13)

Also, we suppose that

    (1) sup_{a∈A} (∥f(·, a)∥_{L^∞(Ω)} + ∥g(·, a)∥_{W^{1,∞}(Ω)}) < ∞,
    (2) sup_{a∈A} |f(x, a) − f(y, a)| ≤ ωf(|x − y|)  for x, y ∈ Ω,          (5.14)

where ωf ∈ M. We are now interested in the following optimal cost functional:

    u(x) := inf_{α∈A(x)} ∫₀^∞ e^{−νt} f(X(t; x, α), α(t)) dt.
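For intuition, this discounted cost can be computed explicitly in simple cases. The following sketch uses hypothetical data not taken from the text: a single (trivial) control, dynamics g(x, a) = −x and running cost f(x, a) = x², so that X(t; x) = x e^{−t} and, for ν = 1, u(x) = ∫₀^∞ e^{−t}(x e^{−t})² dt = x²/3; the integral is approximated by a Riemann sum.

```python
# Numerical sketch of the discounted cost functional
#   u(x) = inf_alpha  ∫_0^∞ e^{-nu t} f(X(t; x, alpha), alpha(t)) dt
# for the HYPOTHETICAL uncontrolled data g(x) = -x, f(x) = x^2, nu = 1,
# where X(t; x) = x e^{-t} and the exact value is u(x) = x^2 / 3.
import math

def cost(x, nu=1.0, T=30.0, dt=1e-4):
    """Riemann-sum approximation of the truncated integral over (0, T)."""
    total, t = 0.0, 0.0
    while t < T:
        X = x * math.exp(-t)                  # trajectory of X' = -X, X(0) = x
        total += math.exp(-nu * t) * X * X * dt
        t += dt
    return total

print(cost(2.0), 2.0 ** 2 / 3)  # the two values should nearly agree
```

The truncation at T and the step size dt control the error; the exact value x²/3 serves as a sanity check of the dynamic programming setup.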

Proposition 5.2. Assume that ν > 0, (5.13) and (5.14) hold. Then, we have:
(1) u is a viscosity subsolution of sup_{a∈A}{νu − ⟨g(x, a), Du⟩ − f(x, a)} ≤ 0 in Ω;
(2) u is a viscosity supersolution of sup_{a∈A}{νu − ⟨g(x, a), Du⟩ − f(x, a)} ≥ 0 in Ω.

Remark. We often say that u satisfies the state constraint boundary condition when it is a viscosity supersolution of "F(x, u, Du, D²u) ≥ 0 on ∂Ω".

Proof. In fact, at x ∈ Ω, it is easy to verify that the dynamic programming principle (Theorem 4.4) holds for small T > 0. Thus, we may argue as in Theorem 4.5, replacing R^n by Ω.


Hence, it only remains to show (2) on ∂Ω. Suppose that there are x̂ ∈ ∂Ω, θ > 0 and ϕ ∈ C¹(Ω) such that

    (u_* − ϕ)(x̂) = 0 ≤ (u_* − ϕ)(x) for x ∈ Ω,

and

    sup_{a∈A} {νϕ(x̂) − ⟨g(x̂, a), Dϕ(x̂)⟩ − f(x̂, a)} ≤ −2θ.

Then, we will get a contradiction. Choose xk ∈ Ω ∩ B_{1/k}(x̂) such that u_*(x̂) + k^{−1} ≥ u(xk) and |ϕ(x̂) − ϕ(xk)| < 1/k. In view of (5.14), there is t0 > 0 such that for any α ∈ A(xk) and large k ≥ 1, we have

    νϕ(Xk(t)) − ⟨g(Xk(t), α(t)), Dϕ(Xk(t))⟩ − f(Xk(t), α(t)) ≤ −θ  for t ∈ (0, t0),

where Xk(t) := X(t; xk, α). Thus, multiplying by e^{−νt} and then integrating over (0, t0), we have

    ϕ(xk) ≤ e^{−νt0}ϕ(Xk(t0)) + ∫₀^{t0} e^{−νt} f(Xk(t), α(t)) dt − (θ/ν)(1 − e^{−νt0}).

Since we have

    u(xk) ≤ 2/k + e^{−νt0}u(Xk(t0)) + ∫₀^{t0} e^{−νt} f(Xk(t), α(t)) dt − (θ/ν)(1 − e^{−νt0}),

taking the infimum over A(xk), we apply Theorem 4.4 to get

    0 ≤ 2/k − (θ/ν)(1 − e^{−νt0}),

which is a contradiction for large k.   □

Motivated by this proposition, we shall consider more general second-order elliptic PDEs.

Theorem 5.3. Assume that ν > 0, (5.3), (5.4) and (5.6) hold. Let u and v : Ω → R be, respectively, a viscosity sub- and supersolution of

    νu + F(x, Du, D²u) ≤ 0  in Ω,

and

    νv + F(x, Dv, D²v) ≥ 0  in Ω.

Assume also that

    lim inf_{Ω∋x→z} u^*(x) ≥ u^*(z)  for z ∈ ∂Ω.          (5.15)

Then, u^* ≤ v_* in Ω.

Remark. In 1986, Soner first treated state constraint problems for deterministic optimal control (i.e. first-order PDEs) by the viscosity solution approach. We note that here we do not need the continuity of v on ∂Ω, while we need it for Dirichlet problems. For further discussion on state constraint problems, we refer to Ishii-Koike (1996). We also note that the proof below is easier than that for Dirichlet problems in section 5.1, because we only need to avoid the boundary condition for viscosity subsolutions.

Proof. Suppose that max_Ω(u^* − v_*) =: θ > 0. We shall again write u and v for u^* and v_*, respectively. We may suppose that max_∂Ω(u − v) = θ, since otherwise we can use the standard procedure to get a contradiction. Let z ∈ ∂Ω be such that (u − v)(z) = θ.

Now, we proceed with the same argument as in Case 1-2 in the proof of Theorem 5.1 (although it was not written out precisely there). For ε, δ > 0, setting

    ϕ(x, y) := (2ε²)^{−1}|x − y + εδn(z)|² + δ|x − z|²,

where n(z) is the unit outward normal vector at z ∈ ∂Ω, we let (xε, yε) ∈ Ω × Ω be the maximum point of u(x) − v(y) − ϕ(x, y) over Ω × Ω. As in the proof of Theorem 3.4, we have

    lim_{ε→0} (xε, yε) = (z, z)  and  lim_{ε→0} |xε − yε|/ε = δ.          (5.16)

Since xε = yε − εδn(z) + o(ε) ∈ Ω for small ε > 0, in view of Lemma 3.6 with Proposition 2.7, we can find X, Y ∈ S^n such that

    ((xε − yε)/ε² + (δ/ε)n(z) + 2δ(xε − z), X + 2δI) ∈ J̄^{2,+}_Ω u(xε),

    ((xε − yε)/ε² + (δ/ε)n(z), Y) ∈ J̄^{2,−}_Ω v(yε),

and

    −(3/ε²)(I O; O I) ≤ (X O; O −Y) ≤ (3/ε²)(I −I; −I I).

Setting pε := ε^{−2}(xε − yε) + δε^{−1}n(z), we have

    ν(u(xε) − v(yε)) ≤ F(yε, pε, Y) − F(xε, pε + 2δ(xε − z), X + 2δI)
                     ≤ ω0(2δ|xε − z| + 2δ) + ω̂F(|xε − yε|(1 + |pε| + |xε − yε|/ε²)).

Hence, sending ε → 0 with (5.16), we have

    νθ ≤ ω0(2δ) + ω̂F(2δ²),

which is a contradiction for small δ > 0.   □

5.3  Neumann problem

In the classical theory, and the modern theory of weak solutions in the distribution sense, the (inhomogeneous) Neumann condition is given by

    ⟨n(x), Du(x)⟩ − g(x) = 0  on ∂Ω,

where n(x) denotes the unit outward normal vector at x ∈ ∂Ω.
In the Dirichlet and state constraint problems, we used a test function which forces one of xε and yε to be in Ω. However, in the Neumann boundary value problem (Neumann problem for short), we have to avoid the boundary condition for viscosity sub- and supersolutions simultaneously. Thus, we need a new test function, different from those in sections 5.1 and 5.2.
We first define the signed distance function from Ω by

    ρ(x) := inf{|x − y| | y ∈ ∂Ω}    for x ∈ Ω^c,
    ρ(x) := −inf{|x − y| | y ∈ ∂Ω}   for x ∈ Ω.
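As an illustration (not from the text): for the hypothetical example Ω = B1, the unit ball, the signed distance reduces to ρ(x) = |x| − 1, negative inside Ω and positive outside.

```python
# Signed distance function for the HYPOTHETICAL example Omega = B_1 (unit ball):
# rho(x) = |x| - 1 is negative inside Omega, zero on the boundary, and
# positive outside, matching the sign convention in the text.
import math

def rho_ball(x):
    """Signed distance of the point x (a tuple) from the unit sphere."""
    return math.sqrt(sum(c * c for c in x)) - 1.0

print(rho_ball((0.5, 0.0)), rho_ball((2.0, 0.0)))  # -0.5 1.0
```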

In order to obtain the comparison principle for the Neumann problem, we shall impose a hypothesis on Ω (see Fig 5.2):  (1)    

There is rˆ > 0 such that Ω ⊂ (Brˆ(z + rˆn(z)))c for z ∈ ∂Ω.  (2) There is a neighborhood N of ∂Ω such that    ρ ∈ C 2 (N ), and Dρ(x) = n(x) for x ∈ ∂Ω.

(5.17)

Remark. This assumption (1) is called the “uniform exterior sphere condition”. Since |x − z − rˆn(z)| ≥ rˆ for z ∈ ∂Ω and x ∈ Ω, we have |x − z|2 ⟨n(z), x − z⟩ ≤ 2ˆ r

for z ∈ ∂Ω and x ∈ Ω.

(5.18)

It is known that when ∂Ω is “smooth” enough, (2) of (5.17) holds true. 71
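The inequality (5.18) follows by simply expanding the square in the exterior sphere condition:

```latex
% Derivation of (5.18) from |x - z - \hat{r}\,n(z)| \ge \hat{r}:
\hat{r}^2 \le |x - z - \hat{r}\,n(z)|^2
          = |x - z|^2 - 2\hat{r}\,\langle n(z),\, x - z\rangle + \hat{r}^2,
\quad\text{hence}\quad
\langle n(z),\, x - z\rangle \le \frac{|x - z|^2}{2\hat{r}}.
```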

Fig 5.2 (the uniform exterior sphere condition: B_{r̂}(z + r̂n(z)) lies outside Ω)

We shall consider the inhomogeneous Neumann problem:

    νu + F(x, Du, D²u) = 0   in Ω,
    ⟨n(x), Du⟩ − g(x) = 0    on ∂Ω.          (5.19)

Remember that we adopt the definition of viscosity solutions of (5.19) for the corresponding G in (5.2).

Theorem 5.4. Assume that ν > 0, (5.3), (5.4) and (5.17) hold. For g ∈ C(∂Ω), let u and v : Ω → R be a viscosity sub- and supersolution of (5.19), respectively. Then, u^* ≤ v_* in Ω.

Remark. We note that we do not need any continuity of u and v on ∂Ω.

Proof. As before, we write u and v for u^* and v_*, respectively. As in the proof of Theorem 3.7, we suppose that max_Ω(u − v) =: θ > 0. Also, we may suppose that max_∂Ω(u − v) = θ. Let z ∈ ∂Ω be a point such that (u − v)(z) = θ. For small δ > 0, we see that the mapping x ∈ Ω → u(x) − v(x) − δ|x − z|² takes its strict maximum at z.
For small ε, δ > 0, where δ will be fixed later, setting

    ϕ(x, y) := (2ε)^{−1}|x − y|² − g(z)⟨n(z), x − y⟩ + δ(ρ(x) + ρ(y) + 2) + δ|x − z|²,

we let (xε, yε) ∈ Ω × Ω be the maximum point of Φ(x, y) := u(x) − v(y) − ϕ(x, y) over (Ω ∩ N) × (Ω ∩ N), where N is from (5.17). Since Φ(xε, yε) ≥ Φ(z, z), as before, we may extract a subsequence, denoted by (xε, yε) again, such that (xε, yε) → (x̂, x̂). We may suppose x̂ ∈ ∂Ω. Since Φ(x̂, x̂) ≥ lim sup_{ε→0} Φ(xε, yε), we have

    u(x̂) − v(x̂) − δ|x̂ − z|² ≥ θ,

which yields x̂ = z. Moreover, we have

    lim_{ε→0} |xε − yε|²/ε = 0.          (5.20)

Applying Lemma 3.6 to u(x) − δ(ρ(x) + 1) − g(z)⟨n(z), x⟩ − δ|x − z|² and −v(y) − δ(ρ(y) + 1) + g(z)⟨n(z), y⟩, we find X, Y ∈ S^n such that

    (pε + δn(xε) + 2δ(xε − z), X + δD²ρ(xε) + 2δI) ∈ J̄^{2,+}_Ω u(xε),          (5.21)

    (pε − δn(yε), Y − δD²ρ(yε)) ∈ J̄^{2,−}_Ω v(yε),          (5.22)

where pε := ε^{−1}(xε − yε) + g(z)n(z), and

    −(3/ε)(I O; O I) ≤ (X O; O −Y) ≤ (3/ε)(I −I; −I I).

When xε ∈ ∂Ω, by (5.18), we calculate in the following manner:

    ⟨n(xε), Dxϕ(xε, yε)⟩ = ⟨n(xε), pε + δn(xε) + 2δ(xε − z)⟩
                         ≥ −|xε − yε|²/(2r̂ε) + g(z)⟨n(xε), n(z)⟩ + δ − 2δ|xε − z|.

Hence, given δ > 0, we see that

    ⟨n(xε), Dxϕ(xε, yε)⟩ − g(xε) ≥ δ/2  for small ε > 0.

Thus, by (5.21), this yields

    νu(xε) + F(xε, pε + δn(xε) + 2δ(xε − z), X + δD²ρ(xε) + 2δI) ≤ 0.          (5.23)

Of course, if xε ∈ Ω, then the above inequality holds from the definition. On the other hand, similarly, if yε ∈ ∂Ω, then

    ⟨n(yε), −Dyϕ(xε, yε)⟩ − g(yε) ≤ −δ/2  for small ε > 0.

Hence, by (5.22), we have

    νv(yε) + F(yε, pε − δn(yε), Y − δD²ρ(yε)) ≥ 0.          (5.24)

Using (5.3) and (5.4), by (5.23) and (5.24), we have

    ν(u(xε) − v(yε)) ≤ F(yε, pε, Y) − F(xε, pε, X) + 2ω0(δM)
                     ≤ ω̂F(|xε − yε|(1 + |pε| + |xε − yε|/ε)) + 2ω0(δM),

where M := 3 + sup_{x∈N∩Ω}(2|x − z| + ∥D²ρ(x)∥). Sending ε → 0 with (5.20) in the above, we have

    νθ ≤ 2ω0(δM),

which is a contradiction for small δ > 0.   □

5.4  Growth condition at |x| → ∞

In the standard PDE theory, we often consider PDEs in unbounded domains, typically in R^n. In this subsection, we present a technique to establish the comparison principle for viscosity solutions of

    νu + F(x, Du, D²u) = 0  in R^n.          (5.25)

We remind the reader that in the proofs of comparison results we always suppose max_Ω(u − v) > 0, where u and v are, respectively, a viscosity sub- and supersolution. However, for Ω := R^n, u − v might attain its supremum only as "|x| → ∞". Thus, we have to choose a test function ϕ(x, y) which forces u(x) − v(y) − ϕ(x, y) to take its maximum at a point in a compact set. For this purpose, we will suppose a linear growth condition (for simplicity) on the viscosity solutions.
We rewrite the structure condition (3.21) for R^n:

    There is ωF ∈ M such that if X, Y ∈ S^n and µ > 1 satisfy
        −3µ (I O; O I) ≤ (X O; O −Y) ≤ 3µ (I −I; −I I),
    then F(y, µ(x − y), Y) − F(x, µ(x − y), X)          (5.26)
         ≤ ωF(|x − y|(1 + µ|x − y|))  for x, y ∈ R^n.

We will also need the Lipschitz continuity of (p, X) → F(x, p, X), which is stronger than (5.3):

    There is µ0 > 0 such that
    |F(x, p, X) − F(x, q, Y)| ≤ µ0(|p − q| + ∥X − Y∥)          (5.27)
    for x ∈ R^n, p, q ∈ R^n, X, Y ∈ S^n.

Proposition 5.5. Assume that ν > 0, (5.26) and (5.27) hold. Let u and v : R^n → R be, respectively, a viscosity sub- and supersolution of (5.25). Assume also that there is C0 > 0 such that

    u^*(x) ≤ C0(1 + |x|)  and  v_*(x) ≥ −C0(1 + |x|)  for x ∈ R^n.          (5.28)

Then, u^* ≤ v_* in R^n.

Proof. We shall simply write u and v for u^* and v_*, respectively.
For δ > 0, we set θδ := sup_{x∈R^n}(u(x) − v(x) − 2δ(1 + |x|²)). We note that (5.28) implies that there is zδ ∈ R^n such that θδ = u(zδ) − v(zδ) − 2δ(1 + |zδ|²). Set θ := lim sup_{δ→0} θδ ∈ R ∪ {∞}. When θ ≤ 0, since

    (u − v)(x) ≤ 2δ(1 + |x|²) + θδ  for δ > 0 and x ∈ R^n,

we have u ≤ v in R^n. Thus, we may suppose θ ∈ (0, ∞].
Setting Φδ(x, y) := u(x) − v(y) − (2ε)^{−1}|x − y|² − δ(1 + |x|²) − δ(1 + |y|²) for ε, δ > 0, where δ will be fixed later, in view of (5.28), we can choose (xε, yε) ∈ R^n × R^n such that Φδ(xε, yε) = max_{(x,y)∈R^n×R^n} Φδ(x, y) ≥ θδ. As before, extracting a subsequence if necessary, we may suppose that

    lim_{ε→0} |xε − yε|²/ε = 0.          (5.29)

By Lemma 3.6 with Proposition 2.7, putting pε := (xε − yε)/ε, we find X, Y ∈ S^n such that

    (pε + 2δxε, X + 2δI) ∈ J̄^{2,+} u(xε),
    (pε − 2δyε, Y − 2δI) ∈ J̄^{2,−} v(yε),

and

    −(3/ε)(I O; O I) ≤ (X O; O −Y) ≤ (3/ε)(I −I; −I I).

Hence, we have

    ν(u(xε) − v(yε)) ≤ F(yε, pε − 2δyε, Y − 2δI) − F(xε, pε + 2δxε, X + 2δI)
                     ≤ F(yε, pε, Y) − F(xε, pε, X) + 2δµ0(2 + |xε| + |yε|)
                     ≤ ωF(|xε − yε|(1 + |xε − yε|/ε)) + νδ(2 + |xε|² + |yε|²) + Cδ,

where C = C(µ0, ν) > 0 is independent of ε, δ > 0. For the last inequality, we used "2ab ≤ τa² + τ^{−1}b² for τ > 0". Therefore, we have

    νθδ ≤ ωF(|xε − yε|(1 + |xε − yε|/ε)) + Cδ.

Sending ε → 0 in the above together with (5.29), we get νθδ ≤ Cδ, which is a contradiction for small δ > 0 since lim sup_{δ→0} θδ = θ > 0.   □


6  L^p-viscosity solutions

In this section, we discuss the L^p-viscosity solution theory for uniformly elliptic PDEs:

    F(x, Du, D²u) = f(x)  in Ω,          (6.1)

where F : Ω × R^n × S^n → R and f : Ω → R are given. Since we will use the fact that u + C (for a constant C ∈ R) satisfies the same (6.1), we suppose that F does not depend on u itself. Furthermore, to compare with classical results, we prefer to keep the inhomogeneous term (the right-hand side of (6.1)).
The aim of this section is to obtain a priori estimates for L^p-viscosity solutions without assuming any continuity of the mapping x → F(x, q, X), and then to establish an existence result for L^p-viscosity solutions of Dirichlet problems.

Remark. In general, without the continuity assumption on x → F(x, p, X), even if X → F(x, p, X) is uniformly elliptic, we cannot expect the uniqueness of L^p-viscosity solutions; Nadirashvili (1997) gave a counterexample to uniqueness.

6.1  A brief history

Let us simply consider the Poisson equation in a "smooth" domain Ω with zero Dirichlet boundary condition:

    −Δu = f  in Ω,
    u = 0    on ∂Ω.          (6.2)

In the literature on the regularity theory for uniformly elliptic PDEs of second order, it is well known that

    "if f ∈ C^σ(Ω) for some σ ∈ (0, 1), then u ∈ C^{2,σ}(Ω)".          (6.3)

Here, C^σ(U) (for a set U ⊂ R^n) denotes the set of functions f : U → R such that

    sup_{x∈U} |f(x)| + sup_{x,y∈U, x≠y} |f(x) − f(y)|/|x − y|^σ < ∞.
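For example, f(x) = √x belongs to C^{1/2}([0, 1]): |√x − √y| ≤ |x − y|^{1/2}, and the Hölder seminorm equals 1 (attained by the pairs (x, 0)). The following discrete check of the seminorm on a grid is illustrative only; the grid size and the choice of f are not from the text.

```python
# Discrete Hoelder seminorm  sup |f(x)-f(y)| / |x-y|^sigma  on a grid in [0, 1],
# for the illustrative choice f(x) = sqrt(x), sigma = 1/2; the exact value is 1.
def holder_seminorm(f, xs, sigma):
    return max(abs(f(x) - f(y)) / abs(x - y) ** sigma
               for i, x in enumerate(xs) for y in xs[:i])

xs = [i / 200 for i in range(201)]
s = holder_seminorm(lambda x: x ** 0.5, xs, 0.5)
print(s)  # 1.0
```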

Also, C^{k,σ}(U), for an integer k ≥ 1, denotes the set of functions f : U → R such that for any multi-index α = (α1, . . . , αn) ∈ {0, 1, 2, . . .}^n with |α| := α1 + · · · + αn ≤ k, we have D^α f ∈ C^σ(U), where

    D^α f := ∂^{|α|} f / (∂x1^{α1} · · · ∂xn^{αn}).

These function spaces are called Hölder continuous spaces, and the implication in (6.3) is called the Schauder regularity (estimates). Since the PDE in (6.2) is linear, the regularity result (6.3) may be extended to

    "if f ∈ C^{k,σ}(Ω) for some σ ∈ (0, 1), then u ∈ C^{k+2,σ}(Ω)".          (6.4)

Moreover, we obtain that (6.4) holds for the following PDE:

    −trace(A(x)D²u(x)) = f(x)  in Ω,          (6.5)

where the coefficient A(·) ∈ C^∞(Ω, S^n) satisfies

    λ|ξ|² ≤ ⟨A(x)ξ, ξ⟩ ≤ Λ|ξ|²  for ξ ∈ R^n and x ∈ Ω.

Furthermore, we can obtain (6.4) even for linear second-order uniformly elliptic PDEs if the coefficients are smooth enough.
Besides the Schauder estimates, we know a different kind of regularity result: for a solution u of (6.5) and an integer k ∈ {0, 1, 2, . . .},

    "if f ∈ W^{k,p}(Ω) for some p > 1, then u ∈ W^{k+2,p}(Ω)".          (6.6)

Here, for an open set O ⊂ R^n, we say f ∈ L^p(O) if |f|^p is integrable in O, and f ∈ W^{k,p}(O) if for any multi-index α with |α| ≤ k, D^α f ∈ L^p(O). Notice that L^p(Ω) = W^{0,p}(Ω). This (6.6) is called the L^p regularity (estimates).
For later convenience, for p ≥ 1, we recall the standard norms of L^p(O) and W^{k,p}(O), respectively:

    ∥u∥_{L^p(O)} := (∫_O |u(x)|^p dx)^{1/p},

and

    ∥u∥_{W^{k,p}(O)} := Σ_{|α|≤k} ∥D^α u∥_{L^p(O)}.

In Appendix, we will use the quantity ∥u∥_{L^p(Ω)} even for p ∈ (0, 1), although this is not a "norm" (i.e. the triangle inequality does not hold).

We refer to [13] for the details on the Schauder and L^p regularity theory for second-order uniformly elliptic PDEs.
As is known, a difficulty occurs when we drop the smoothness of Aij. An extreme case is that we only suppose that the Aij are bounded (possibly discontinuous, but still satisfying the uniform ellipticity). In this case, what can we say about the regularity of "solutions" of (6.5)?
The extreme case for PDEs in divergence form is the following:

    −Σ_{i,j=1}^n ∂/∂xi (Aij(x) ∂u/∂xj (x)) = f(x)  in Ω.          (6.7)

De Giorgi (1957) first obtained Hölder continuity estimates on weak solutions of (6.7) in the distribution sense; that is, for any ϕ ∈ C0^∞(Ω),

    ∫_Ω (⟨A(x)Du(x), Dϕ(x)⟩ − f(x)ϕ(x)) dx = 0.

Here, we set

    C0^∞(Ω) := {ϕ : Ω → R | ϕ(·) is infinitely differentiable, and supp ϕ is compact in Ω}.

We refer to [14] for the details of De Giorgi's proof and for a different proof by Moser (1960).
Concerning the corresponding PDE in nondivergence form, by a stochastic approach, Krylov-Safonov (1979) first showed the Hölder continuity estimates on "strong" solutions of

    −trace(A(x)D²u(x)) = f(x)  in Ω.          (6.8)

Afterward, Trudinger (1980) (see [13]) gave a purely analytic proof of it. Since these results appeared before the viscosity solution was born, they could only deal with strong solutions, which satisfy PDEs in the a.e. sense. In 1989, Caffarelli proved the same Hölder estimate for viscosity solutions of fully nonlinear second-order uniformly elliptic PDEs.
To show the Hölder continuity of solutions, it is essential to prove the following "Harnack inequality" for nonnegative solutions. In fact, we split the proof of the Harnack inequality into two parts:

    weak Harnack inequality for "super"solutions
                                                   }  =⇒  Harnack inequality for "solutions"
    local maximum principle for "sub"solutions

In section 6.4, we will show that L^p-viscosity solutions satisfy the (interior) Hölder continuity estimates.

6.2  Definition and basic facts

We first recall the definition of L^p-strong solutions of general PDEs:

    F(x, u, Du, D²u) = f(x)  in Ω.          (6.9)

We will use the following function space:

    W^{2,p}_loc(Ω) := {u : Ω → R | ζu ∈ W^{2,p}(Ω) for all ζ ∈ C0^∞(Ω)}.

Throughout this section, we suppose at least

    p > n/2,

so that u ∈ W^{2,p}_loc(Ω) has a second-order Taylor expansion at almost all points in Ω, and so that u ∈ C(Ω).

Definition. We call u ∈ C(Ω) an L^p-strong subsolution (resp., supersolution, solution) of (6.9) if u ∈ W^{2,p}_loc(Ω) and

    F(x, u(x), Du(x), D²u(x)) ≤ f(x)  (resp., ≥ f(x), = f(x))  a.e. in Ω.

Now, we present the definition of L^p-viscosity solutions of (6.9).

Definition. We call u ∈ C(Ω) an L^p-viscosity subsolution (resp., supersolution) of (6.9) if for ϕ ∈ W^{2,p}_loc(Ω), we have

    lim_{ε→0} ess inf_{Bε(x)} (F(y, u(y), Dϕ(y), D²ϕ(y)) − f(y)) ≤ 0

    (resp., lim_{ε→0} ess sup_{Bε(x)} (F(y, u(y), Dϕ(y), D²ϕ(y)) − f(y)) ≥ 0)

provided that u − ϕ takes its local maximum (resp., minimum) at x ∈ Ω.
We call u ∈ C(Ω) an L^p-viscosity solution of (6.9) if it is both an L^p-viscosity sub- and supersolution of (6.9).

Remark. Although we will not explicitly utilize the above definition, we recall the definitions of ess sup_A and ess inf_A of h : A → R, where A ⊂ R^n is a measurable set:

    ess sup_A h := inf{M ∈ R | h ≤ M a.e. in A},

and

    ess inf_A h := sup{M ∈ R | h ≥ M a.e. in A}.

Coming back to (6.1), we give a list of assumptions on F : Ω × R^n × S^n → R:

    (1) F(x, 0, O) = 0 for x ∈ Ω,
    (2) x → F(x, q, X) is measurable for (q, X) ∈ R^n × S^n,          (6.10)
    (3) F is uniformly elliptic.

We recall the uniform ellipticity condition of X → F(x, q, X) with the constants 0 < λ ≤ Λ from section 3.1.2. For the right-hand side f : Ω → R, we suppose that

    f ∈ L^p(Ω) for p ≥ n.          (6.11)

We will often suppose the Lipschitz continuity of F with respect to q ∈ R^n:

    there is µ ≥ 0 such that |F(x, q, X) − F(x, q′, X)| ≤ µ|q − q′|          (6.12)
    for (x, q, q′, X) ∈ Ω × R^n × R^n × S^n.

Remark. We note that (1) of (6.10) and (6.12) imply that F has linear growth in Du:

    |F(x, q, O)| ≤ µ|q|  for x ∈ Ω and q ∈ R^n.

Remark. We note that when x → F(x, q, X) and x → f(x) are continuous, the definition of an L^p-viscosity subsolution (resp., supersolution) of (6.1) coincides with the standard one under assumptions (6.10) and (6.12). For a proof, we refer to a paper by Caffarelli-Crandall-Kocan-Święch [5].
In this book, we only study the case of (6.11), but most of the results can be extended to the case when p > p⋆ = p⋆(Λ, λ, n) ∈ (n/2, n), where p⋆ is the so-called Escauriaza constant (see the references in [4]).
The following proposition is obvious, but it will be very convenient for the study of L^p-viscosity solutions of (6.1) under assumptions (6.10), (6.11) and (6.12).

Proposition 6.1. Assume that (6.10), (6.11) and (6.12) hold. If u ∈ C(Ω) is an L^p-viscosity subsolution (resp., supersolution) of (6.1), then it is an L^p-viscosity subsolution (resp., supersolution) of

    P⁻(D²u) − µ|Du| ≤ f  in Ω
    (resp., P⁺(D²u) + µ|Du| ≥ f  in Ω).

We recall the Aleksandrov-Bakelman-Pucci (ABP for short) maximum principle, which will play an essential role in this section (and also in Appendix). To this end, we introduce the notion of "upper contact sets": for u : O → R, we set

    Γ[u, O] := {x ∈ O | there is p ∈ R^n such that u(y) ≤ u(x) + ⟨p, y − x⟩ for all y ∈ O}.

Proposition 6.2. (ABP maximum principle) For µ ≥ 0, there is C0 := C0(Λ, λ, n, µ, diam(Ω)) > 0 such that if, for f ∈ L^n(Ω), u ∈ C(Ω) is an L^n-viscosity subsolution (resp., supersolution) of

    P⁻(D²u) − µ|Du| ≤ f  in Ω⁺[u]
    (resp., P⁺(D²u) + µ|Du| ≥ f  in Ω⁺[−u]),

then

    max_Ω u ≤ max_∂Ω u⁺ + diam(Ω) C0 ∥f⁺∥_{L^n(Γ[u,Ω]∩Ω⁺[u])}

    (resp., max_Ω(−u) ≤ max_∂Ω(−u)⁺ + diam(Ω) C0 ∥f⁻∥_{L^n(Γ[−u,Ω]∩Ω⁺[−u])}),

Fig 6.1 (the upper contact set Γ[u, Ω] of the graph y = u(x))

where Ω⁺[u] := {x ∈ Ω | u(x) > 0}.

The next proposition is a key tool in the study of L^p-viscosity solutions, particularly when f is not supposed to be continuous. The proof will be given in Appendix.

Proposition 6.3. Assume that (6.11) holds for p ≥ n. For any µ ≥ 0, there are an L^p-strong subsolution u and an L^p-strong supersolution v ∈ C(B̄1) ∩ W^{2,p}_loc(B1), respectively, of

    P⁺(D²u) + µ|Du| ≤ f in B1,   u = 0 on ∂B1,

and

    P⁻(D²v) − µ|Dv| ≥ f in B1,   v = 0 on ∂B1.

Moreover, we have the following estimates: for w = u or w = v, and small δ ∈ (0, 1), there is Ĉδ = Ĉ(Λ, λ, n, µ, δ) > 0 such that

    ∥w∥_{W^{2,p}(Bδ)} ≤ Ĉδ ∥f∥_{L^p(B1)}.

Remark. In view of the proof (Step 2) of Proposition 6.2, we see that

    −C∥f⁻∥_{L^n(B1)} ≤ w ≤ C∥f⁺∥_{L^n(B1)}  in B1, where w = u, v.

6.3  Harnack inequality

In this subsection, we often use the cube Qr(x) for r > 0 and x = ᵗ(x1, . . . , xn) ∈ R^n:

    Qr(x) := {y = ᵗ(y1, . . . , yn) | |xi − yi| < r/2 for i = 1, . . . , n},

and Qr := Qr(0). Notice that B_{r/2}(x) ⊂ Qr(x) ⊂ B_{r√n/2}(x) for r > 0. We will prove the next two propositions in Appendix.

Proposition 6.4. (Weak Harnack inequality) For µ ≥ 0, there are p0 = p0(Λ, λ, n, µ) > 0 and C1 := C1(Λ, λ, n, µ) > 0 such that if u ∈ C(B̄_{2√n}) is a nonnegative L^p-viscosity supersolution of

    P⁺(D²u) + µ|Du| ≥ 0  in B_{2√n},

then we have

    ∥u∥_{L^{p0}(Q1)} ≤ C1 inf_{Q1/2} u.
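The cube/ball inclusions B_{r/2}(x) ⊂ Qr(x) ⊂ B_{r√n/2}(x) noted above are exactly the norm inequalities ∥y∥∞ ≤ ∥y∥₂ ≤ √n ∥y∥∞ applied to y − x. A quick randomized check (the dimension and the sampling are illustrative choices):

```python
# Check  max_i |y_i| <= |y|_2 <= sqrt(n) * max_i |y_i|  on random vectors;
# these inequalities are equivalent to B_{r/2}(x) ⊂ Q_r(x) ⊂ B_{r*sqrt(n)/2}(x).
import math, random

random.seed(0)
n = 5
for _ in range(1000):
    y = [random.uniform(-1, 1) for _ in range(n)]
    sup = max(abs(c) for c in y)              # l-infinity norm (cube)
    euc = math.sqrt(sum(c * c for c in y))    # Euclidean norm (ball)
    assert sup <= euc + 1e-12
    assert euc <= math.sqrt(n) * sup + 1e-12
print("inclusions verified")
```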

Remark. Notice that p0 might be smaller than 1.

Proposition 6.5. (Local maximum principle) For µ ≥ 0 and q > 0, there is C2 = C2(Λ, λ, n, µ, q) > 0 such that if u ∈ C(B̄_{2√n}) is an L^p-viscosity subsolution of

    P⁻(D²u) − µ|Du| ≤ 0  in B_{2√n},

then we have

    sup_{Q1} u ≤ C2 ∥u⁺∥_{L^q(Q2)}.

Remark. Notice that we do not suppose that u ≥ 0 in Proposition 6.5.

6.3.1  Linear growth

The next corollary is a direct consequence of Propositions 6.4 and 6.5.

Corollary 6.6. For µ ≥ 0, there is C3 = C3(Λ, λ, n, µ) > 0 such that if u ∈ C(B̄_{2√n}) is a nonnegative L^p-viscosity sub- and supersolution of

    P⁻(D²u) − µ|Du| ≤ 0  and  P⁺(D²u) + µ|Du| ≥ 0  in B_{2√n},

respectively, then we have

    sup_{Q1} u ≤ C3 inf_{Q1} u.

In order to treat inhomogeneous PDEs, we will need the following corollary.

Corollary 6.7. For µ ≥ 0 and f ∈ L^p(B_{3√n}) with p ≥ n, there is C4 = C4(Λ, λ, n, µ) > 0 such that if u ∈ C(B̄_{3√n}) is a nonnegative L^p-viscosity sub- and supersolution of

    P⁻(D²u) − µ|Du| ≤ f  and  P⁺(D²u) + µ|Du| ≥ f  in B_{2√n},

respectively, then we have

    sup_{Q1} u ≤ C4 (inf_{Q1} u + ∥f∥_{L^p(B_{3√n})}).

Proof. According to Proposition 6.3, we find v, w ∈ C(B̄_{3√n}) ∩ W^{2,p}_loc(B_{3√n}) such that

    P⁺(D²v) + µ|Dv| ≤ −f⁺ a.e. in B_{3√n},   v = 0 on ∂B_{3√n},

and

    P⁻(D²w) − µ|Dw| ≥ f⁻ a.e. in B_{3√n},   w = 0 on ∂B_{3√n}.

In view of Proposition 6.3 and its Remark, we can choose Ĉ = Ĉ(Λ, λ, n, µ) > 0 such that

    0 ≤ −v ≤ Ĉ∥f⁺∥_{L^p(B_{3√n})} in B_{3√n},   ∥v∥_{W^{2,p}(B_{2√n})} ≤ Ĉ∥f⁺∥_{L^p(B_{3√n})},

and

    0 ≤ w ≤ Ĉ∥f⁻∥_{L^p(B_{3√n})} in B_{3√n},   ∥w∥_{W^{2,p}(B_{2√n})} ≤ Ĉ∥f⁻∥_{L^p(B_{3√n})}.

Since v, w ∈ W^{2,p}(B_{2√n}), it is easy to verify that u1 := u + v and u2 := u + w are, respectively, an L^p-viscosity sub- and supersolution of

    P⁻(D²u1) − µ|Du1| ≤ 0  and  P⁺(D²u2) + µ|Du2| ≥ 0  in B_{2√n}.

Since v ≤ 0 in B_{3√n}, applying Proposition 6.5 to u1, for any q > 0, we find C2(q) > 0 such that

    sup_{Q1} u ≤ sup_{Q1} u1 + Ĉ∥f⁺∥_{L^p(B_{3√n})}
              ≤ C2(q)∥(u1)⁺∥_{L^q(Q2)} + Ĉ∥f⁺∥_{L^p(B_{3√n})}
              ≤ C2(q)∥u∥_{L^q(Q2)} + Ĉ∥f⁺∥_{L^p(B_{3√n})}.          (6.13)

On the other hand, applying Proposition 6.4 to u2, there is p0 > 0 such that

    ∥u∥_{L^{p0}(Q2)} ≤ ∥u2∥_{L^{p0}(Q2)} ≤ C1 inf_{Q1} u2 ≤ C1 (inf_{Q1} u + Ĉ∥f⁻∥_{L^p(B_{3√n})}).          (6.14)

Therefore, combining (6.14) with (6.13) for q = p0, we can find C4 > 0 such that the assertion holds.   □

Corollary 6.8. (Harnack inequality, final version) Assume that (6.10), (6.11) and (6.12) hold. If u ∈ C(Ω) is an L^p-viscosity solution of (6.1), and if B_{3√n r}(x) ⊂ Ω for r ∈ (0, 1], then

    sup_{Qr(x)} u ≤ C4 (inf_{Qr(x)} u + r^{2−n/p} ∥f∥_{L^p(Ω)}),

where C4 > 0 is the constant in Corollary 6.7.

Proof. By translation, we may suppose that x = 0. Setting v(x) := u(rx) for x ∈ B_{3√n}, we easily see that v is an L^p-viscosity sub- and supersolution of

    P⁻(D²v) − µ|Dv| ≤ r²f̂  and  P⁺(D²v) + µ|Dv| ≥ −r²f̂  in B_{3√n},

respectively, where f̂(x) := f(rx). Note that ∥f̂∥_{L^p(B_{3√n})} = r^{−n/p}∥f∥_{L^p(B_{3√n r})}. Applying Corollary 6.7 to v and then rescaling v to u, we conclude the assertion.   □
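The scaling identity ∥f̂∥_{L^p} = r^{−n/p}∥f∥_{L^p} used above is just the change of variables x → rx in the integral. A one-dimensional (n = 1) numerical check, with an illustrative f, p and r not taken from the text:

```python
# Check  || f(r .) ||_{L^p(-R, R)} = r^{-1/p} || f ||_{L^p(-rR, rR)}  for n = 1
# by midpoint-rule quadrature, with the illustrative data f(x) = x^2, p = 2, r = 0.5.
def lp_norm(f, a, b, p, N=200000):
    h = (b - a) / N
    s = sum(abs(f(a + (i + 0.5) * h)) ** p for i in range(N)) * h
    return s ** (1.0 / p)

f = lambda x: x * x
p, r, R = 2.0, 0.5, 1.0
lhs = lp_norm(lambda x: f(r * x), -R, R, p)        # norm of the rescaled function
rhs = r ** (-1.0 / p) * lp_norm(f, -r * R, r * R, p)
print(lhs, rhs)  # the two values should nearly agree
```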

6.3.2  Quadratic growth

Here, we consider the case when q → F(x, q, X) has quadratic growth. We refer to [10] for applications where such quadratic nonlinearity appears.
We present a version of the Harnack inequality when F has quadratic growth in Du; in place of (6.12), we suppose:

    there is µ ≥ 0 such that |F(x, q, X) − F(x, q′, X)| ≤ µ(|q| + |q′|)|q − q′|          (6.15)
    for (x, q, q′, X) ∈ Ω × R^n × R^n × S^n,

which together with (1) of (6.10) implies that

    |F(x, q, O)| ≤ µ|q|²  for (x, q) ∈ Ω × R^n.

The associated Harnack inequality is as follows.

Theorem 6.9. For µ ≥ 0 and f ∈ L^p(B_{3√n}) with p ≥ n, there is C5 = C5(Λ, λ, n, µ) > 0 such that if u ∈ C(B̄_{3√n}) is a nonnegative L^p-viscosity sub- and supersolution of

    P⁻(D²u) − µ|Du|² ≤ f  and  P⁺(D²u) + µ|Du|² ≥ f  in B_{3√n},

respectively, then we have

    sup_{Q1} u ≤ C5 e^{2µM/λ} (inf_{Q1} u + ∥f∥_{L^p(B_{3√n})}),

where M := sup_{B_{3√n}} u.

Proof. Set α := µ/λ. Fix any δ ∈ (0, 1). We claim that v := e^{αu} − 1 and w := 1 − e^{−αu} are, respectively, a nonnegative L^p-viscosity sub- and supersolution of

    P⁻(D²v) ≤ α(e^{αM} + δ)f⁺  and  P⁺(D²w) ≥ −α(1 + δ)f⁻  in B_{3√n}.

We shall only prove this claim for v, since the claim for w can be obtained similarly.
Choose ϕ ∈ W^{2,p}_loc(B_{3√n}) and suppose that v − ϕ attains its local maximum at x ∈ B_{3√n}. Thus, we may suppose that v(x) = ϕ(x) and v ≤ ϕ in Br(x), where B_{2r}(x) ⊂ B_{3√n}. Note that 0 ≤ v ≤ e^{αM} − 1 in B_{3√n}. For any δ ∈ (0, 1), in view of W^{2,p}(Br(x)) ⊂ C^{σ0}(B̄r(x)) with some σ0 ∈ (0, 1), we can choose ε0 ∈ (0, r) such that

    −δ ≤ ϕ ≤ v + δ  in B_{ε0}(x).

Setting ψ(y) := α^{−1} log(ϕ(y) + 1) for y ∈ B_{ε0}(x) (extending ψ ∈ W^{2,p} in B_{3√n} \ B_{ε0}(x) if necessary), we have

    lim_{ε→0} ess inf_{Bε(x)} (P⁻(D²ψ) − µ|Dψ|² − f⁺) ≤ 0.

Since

    Dψ = Dϕ/(α(ϕ + 1))  and  D²ψ = D²ϕ/(α(ϕ + 1)) − (Dϕ ⊗ Dϕ)/(α(ϕ + 1)²),

the above inequality yields

    lim_{ε→0} ess inf_{Bε(x)} (P⁻(D²ϕ)/(α(ϕ + 1)) − f⁺) ≤ 0.

Since 0 < 1 − δ ≤ ϕ + 1 ≤ e^{αM} + δ in B_{ε0}(x), we have

    lim_{ε→0} ess inf_{Bε(x)} (P⁻(D²ϕ) − α(e^{αM} + δ)f⁺) ≤ 0.

Since αu ≤ v ≤ αu e^{αM} and αu e^{−αM} ≤ w ≤ αu, using the same argument as for (6.13) and (6.14), we have

    sup_{Q1} u ≤ (1/α) sup_{Q1} v
               ≤ C6 {∥v∥_{L^{p0}(Q2)} + (e^{αM} + δ)∥f⁺∥_{L^p(B_{3√n})}}
               ≤ C7 e^{2αM} {∥w∥_{L^{p0}(Q2)} + (e^{αM} + δ)∥f⁺∥_{L^p(B_{3√n})}}
               ≤ C8 {e^{2αM} inf_{Q1} w + (e^{2αM} + δ)∥f∥_{L^p(B_{3√n})}}
               ≤ C9 e^{2αM} {inf_{Q1} u + (1 + δ)∥f∥_{L^p(B_{3√n})}}.

Since the Ck (k = 6, . . . , 9) are independent of δ > 0, sending δ → 0, we conclude the proof.   □

Remark. We note that the same argument, using two different transformations for sub- and supersolutions as above, can be found in [14] for uniformly elliptic PDEs in divergence form with quadratic nonlinearity.

6.4  Hölder continuity estimates

In this subsection, we show how the Harnack inequality implies Hölder continuity.

Theorem 6.10. Assume that (6.10), (6.11) and (6.12) hold. For each compact set K ⊂ Ω, there is σ = σ(Λ, λ, n, µ, p, dist(K, ∂Ω), ∥f∥_{L^p(Ω)}) ∈ (0, 1) such that if u ∈ C(Ω) is an L^p-viscosity solution of (6.1), then there is Ĉ = Ĉ(Λ, λ, n, µ, p, dist(K, ∂Ω), max_Ω |u|, ∥f∥_{L^p(Ω)}) > 0 such that

    |u(x) − u(y)| ≤ Ĉ|x − y|^σ  for x, y ∈ K.

Remark. We notice that σ is independent of sup_Ω |u|.

In our proof below, we may relax the dependence on max_Ω |u| in Ĉ to sup{|u(x)| | dist(x, K) < ε} for small ε > 0.

Proof. Setting r0 := min{1, dist(K, ∂Ω)/(3√n)} > 0, we may suppose that there is C4 > 1 such that if w ∈ C(Ω) is a nonnegative L^p-viscosity sub- and supersolution of

    P⁻(D²w) − µ|Dw| ≤ f  and  P⁺(D²w) + µ|Dw| ≥ f  in Ω,

respectively, then for any r ∈ (0, r0] and x ∈ K (i.e. B_{3√n r}(x) ⊂ Ω),

    sup_{Qr(x)} w ≤ C4 (inf_{Qr(x)} w + r^{2−n/p} ∥f∥_{L^p(Ω)}).

in Ω,

respectively, then we see that for any r ∈ (0, r0 ] and x ∈ K (i.e. B3√nr (x) ⊂ Ω), ( ) 2− n sup w ≤ C4 inf w + r p ∥f ∥Lp (Ω) . Qr (x)

Qr (x)

For simplicity, we may suppose x = 0 ∈ K. Now, we set

M(r) := sup_{Q_r} u,  m(r) := inf_{Q_r} u  and  osc(r) := M(r) − m(r).

It is sufficient to find C > 0 and σ ∈ (0, 1) such that

M(r) − m(r) ≤ C r^σ  for small r > 0.

We denote by S(r) the set of all nonnegative w ∈ C(B̄_{3√n r}) which are, respectively, Lp-viscosity sub- and supersolutions of

P−(D²w) − µ|Dw| ≤ |f|  and  P+(D²w) + µ|Dw| ≥ −|f|  in B_{3√n r}.

Setting v1 := u − m(r) and w1 := M(r) − u, we see that v1 and w1 belong to S(r). Hence, setting C10 := max{C4∥f∥_{Lp(Ω)}, C4, 4} > 3, we have

sup_{Q_{r/2}} v1 ≤ C10 ( inf_{Q_{r/2}} v1 + r^{2−n/p} )  and  sup_{Q_{r/2}} w1 ≤ C10 ( inf_{Q_{r/2}} w1 + r^{2−n/p} ).

Thus, setting β := 2 − n/p > 0, we have

M(r/2) − m(r) ≤ C10 ( m(r/2) − m(r) + (r/2)^β ),
M(r) − m(r/2) ≤ C10 ( M(r) − M(r/2) + (r/2)^β ).

Hence, adding these inequalities, we have

(C10 + 1)(M(r/2) − m(r/2)) ≤ (C10 − 1)(M(r) − m(r)) + 2C10 (r/2)^β.

Therefore, setting θ := (C10 − 1)/(C10 + 1) ∈ (1/2, 1) and C11 := 2C10/(C10 + 1), we see that

osc(r/2) ≤ θ osc(r) + C11 (r/2)^β.

Moreover, iterating this with r/2^{k−1} in place of r for integers k ≥ 2, we have

osc(r/2^k) ≤ θ^k osc(r) + C11 r^β Σ_{j=1}^{k} 2^{−βj} ≤ θ^k osc(r0) + (C11/(2^β − 1)) r^β ≤ C12 (θ^k + r^β),

where C12 := max{osc(r0), C11/(2^β − 1)}. For r ∈ (0, r0), by setting s = r^α, where α = log θ/(log θ − β log 2) ∈ (0, 1), there is a unique integer k ≥ 1 such that

s/2^k ≤ r < s/2^{k−1},

which yields

log(s/r)/log 2 ≤ k < log(s/r)/log 2 + 1.

Hence, recalling θ ∈ (1/2, 1), we have

osc(r) ≤ osc(s/2^{k−1}) ≤ C12 (θ^k + (2s)^β) ≤ 2^β C12 ( θ^{(α−1) log r/log 2} + r^{βα} ).

Setting σ := (α − 1) log θ/log 2 ∈ (0, 1) (because θ ∈ (1/2, 1)), we have

θ^{(α−1) log r/log 2} = r^σ  and  r^{βα} = r^σ.

Thus, setting C13 := 2^β C12, we have

osc(r) ≤ C13 r^σ.   (6.16)
□
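The dyadic iteration above is easy to test numerically: starting from osc(r0) and applying osc(r/2) ≤ θ osc(r) + C11 (r/2)^β at each scale, the resulting sequence is dominated by a constant multiple of r^σ with σ = (α − 1) log θ / log 2. A minimal sketch, with arbitrary sample values of θ ∈ (1/2, 1), β and C11:

```python
import math

# Dyadic oscillation iteration: o_k = theta*o_{k-1} + C11*(r0/2**k)**beta,
# mimicking osc(r/2) <= theta*osc(r) + C11*(r/2)**beta.  The constants
# theta, beta, C11, r0 below are arbitrary sample values, not from the text.
theta, beta, C11, r0 = 0.6, 1.0, 1.0, 1.0

alpha = math.log(theta) / (math.log(theta) - beta * math.log(2.0))
sigma = (alpha - 1.0) * math.log(theta) / math.log(2.0)   # Hoelder exponent

o = 1.0                      # osc(r0), normalized
ratios = []
for k in range(1, 41):
    r = r0 / 2**k
    o = theta * o + C11 * r**beta
    ratios.append(o / r**sigma)

# the decay osc(r) <= C * r**sigma holds with a uniform constant C
assert 0.0 < sigma < 1.0
assert max(ratios) < 5.0
print(f"sigma = {sigma:.4f}, sup_k osc(r_k)/r_k^sigma = {max(ratios):.3f}")
```

The bounded ratio confirms the power decay; changing θ or β moves σ exactly as the formula predicts.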

Remark. We note that we may derive (6.16) when p > n/2 by taking β = 2 − n/p > 0.

We shall give the corresponding Hölder continuity for PDEs with the quadratic nonlinearity (6.15). Since we can use the same argument as in the proof of Theorem 6.10, using Theorem 6.9 instead of Corollaries 6.7 and 6.8, we omit the proof of the following:

Corollary 6.11. Assume that (6.10), (6.11) and (6.15) hold. For each compact set K ⊂ Ω, there are Ĉ = Ĉ(Λ, λ, n, µ, p, dist(K, ∂Ω), sup_Ω |u|) > 0 and σ = σ(Λ, λ, n, µ, p, dist(K, ∂Ω), sup_Ω |u|) ∈ (0, 1) such that if u ∈ C(Ω) is an Lp-viscosity solution of (6.1), then we have

|u(x) − u(y)| ≤ Ĉ|x − y|^σ  for x, y ∈ K.

Remark. Note that both σ and Ĉ depend on sup_Ω |u| in this quadratic case.

6.5 Existence result

For the existence of Lp-viscosity solutions of (6.1) under the Dirichlet condition, we only give an outline of the proof, which was first shown in the paper [7] (1999) by Crandall-Kocan-Lions-Święch.

Theorem 6.12. Assume that (6.10), (6.11) and (6.12) hold. Assume also that (1) of (5.17) holds. For given g ∈ C(∂Ω), there is an Lp-viscosity solution u ∈ C(Ω̄) of (6.1) such that

u(x) = g(x)  for x ∈ ∂Ω.   (6.17)

Remark. We may relax assumption (1) of (5.17) so that the assertion holds for Ω which may have some "concave" corners. Such a condition is called the "uniform exterior cone condition".

Sketch of proof.
Step 1: We first solve approximate PDEs, which have to satisfy a sufficient condition in Step 3; instead of (6.1), under (6.17), we consider

Fk(x, Du, D²u) = fk  in Ω,   (6.18)

where "smooth" Fk and fk approximate F and f, respectively. In fact, Fk and fk are given by F ∗ ρ_{1/k} and f ∗ ρ_{1/k}, where ρ_{1/k} is the standard mollifier with respect to the x-variables. We remark that F ∗ ρ_{1/k} means the convolution of F(·, p, X) and ρ_{1/k}.

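The smoothing step can be illustrated in one dimension: convolving f with the standard mollifier ρ_ε (a normalized smooth bump supported in [−ε, ε]) produces a smooth approximation with sup-error at most Lip(f)·ε when f is Lipschitz. A minimal numerical sketch (the grid, the test function f(x) = |x| and the values of ε are arbitrary choices):

```python
import numpy as np

# Discrete standard mollifier: rho(t) ~ exp(-1/(1-t^2)) on (-1,1), normalized.
# We smooth f(x) = |x| (Lipschitz constant 1) and check the classical bound
# sup |f*rho_eps - f| <= eps away from the boundary of the grid.
def mollify(f_vals, dx, eps):
    s = np.arange(-eps, eps + dx / 2, dx)
    t = s / eps
    kern = np.where(np.abs(t) < 1.0,
                    np.exp(-1.0 / np.maximum(1.0 - t**2, 1e-12)), 0.0)
    kern /= kern.sum()                      # discrete normalization of rho_eps
    return np.convolve(f_vals, kern, mode="same")

dx = 0.005
x = np.arange(-2.0, 2.0 + dx / 2, dx)
f = np.abs(x)

for eps in (0.2, 0.1, 0.05):
    interior = np.abs(x) < 1.0              # avoid convolution edge effects
    err = np.max(np.abs(mollify(f, dx, eps) - f)[interior])
    assert err <= eps + 1e-9                # Lipschitz-1 gives error at most eps
    print(f"eps = {eps:>4}: sup-error = {err:.4f}")
```

Mollifying F(·, p, X) in x, as in the proof sketch, is this same construction applied for each fixed (p, X).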

We find a viscosity solution uk ∈ C(Ω̄) of (6.18) under (6.17) via Perron's method, for instance. At this stage, we need to suppose the smoothness of ∂Ω to construct viscosity sub- and supersolutions of (6.18) with (6.17). Remember that if F and f are continuous, then the notion of Lp-viscosity solutions coincides with that of the standard ones (see Proposition 2.9 in [5]). In view of (1) of (5.17) (i.e. the uniform exterior sphere condition), we can construct viscosity sub- and supersolutions of (6.18), denoted by ξ ∈ USC(Ω̄) and η ∈ LSC(Ω̄), such that ξ = η = g on ∂Ω. To show this fact, we only note that we can modify the argument in Step 1 of section 7.3.

Step 2: We next obtain a priori estimates for uk so that they converge to a continuous function u ∈ C(Ω̄), which is the candidate for a solution of the original PDE. For this purpose, after having established the L^∞ estimates via Proposition 6.2, we apply Theorem 6.10 (interior Hölder continuity) to the uk from Step 1, because (6.10)-(6.12) hold for the approximate PDEs with the same constants λ, Λ, µ. We need a careful analysis to get the equi-continuity up to the boundary ∂Ω. See Step 1 in section 7.3 again.

Step 3: Finally, we verify that the limit function u is the Lp-viscosity solution via the following stability result, which is an Lp-viscosity version of Proposition 4.8. To state the result, we introduce some notation: for B_{2r}(x) ⊂ Ω with r > 0 and x ∈ Ω, and ϕ ∈ W^{2,p}(B_r(x)), we set

Gk[ϕ](y) := Fk(y, Dϕ(y), D²ϕ(y)) − fk(y)  and  G[ϕ](y) := F(y, Dϕ(y), D²ϕ(y)) − f(y)  for y ∈ B_r(x).

Proposition 6.13. Assume that Fk and F satisfy (6.10) and (6.12) with λ, Λ > 0 and µ ≥ 0. For f, fk ∈ Lp(Ω) with p ≥ n, let uk ∈ C(Ω) be an Lp-viscosity subsolution (resp., supersolution) of (6.18). Assume also that uk converges to u uniformly on any compact subset of Ω as k → ∞, and that for any B_{2r}(x) ⊂ Ω with r > 0 and x ∈ Ω, and ϕ ∈ W^{2,p}(B_r(x)),

lim_{k→∞} ∥(G[ϕ] − Gk[ϕ])⁺∥_{Lp(B_r(x))} = 0
( resp., lim_{k→∞} ∥(G[ϕ] − Gk[ϕ])⁻∥_{Lp(B_r(x))} = 0 ).

Then, u ∈ C(Ω) is an Lp-viscosity subsolution (resp., supersolution) of (6.1).


Proof of Proposition 6.13. We only give a proof of the assertion for subsolutions. Suppose the contrary: there are r > 0, ε > 0, x ∈ Ω and ϕ ∈ W^{2,p}(B_{2r}(x)) such that B_{3r}(x) ⊂ Ω, 0 = (u − ϕ)(x) ≥ (u − ϕ)(y) for y ∈ B_{2r}(x), and

u − ϕ ≤ −ε  in B_{2r}(x) \ B_r(x),   (6.19)

and

G[ϕ](y) ≥ ε  a.e. in B_{2r}(x).   (6.20)

For simplicity, we shall suppose that r = 1 and x = 0. It is sufficient to find ϕk ∈ W^{2,p}(B2) such that lim_{k→∞} sup_{B2} |ϕk| = 0, and

Gk[ϕ + ϕk](y) ≥ ε  a.e. in B2.

Indeed, since uk − (ϕ + ϕk) attains its maximum over B̄2 at an interior point z ∈ B2 by (6.19), the above inequality contradicts the fact that uk is an Lp-viscosity subsolution of (6.18).

Setting h(x) := G[ϕ](x) and hk(x) := Gk[ϕ](x), in view of Proposition 6.3, we can find ϕk ∈ C(B̄2) ∩ W^{2,p}_{loc}(B2) such that

P−(D²ϕk) − µ|Dϕk| ≥ (h − hk)⁺  a.e. in B2,
ϕk = 0  on ∂B2,
0 ≤ ϕk ≤ C∥(h − hk)⁺∥_{Lp(B2)}  in B2,
∥ϕk∥_{W^{2,p}(B1)} ≤ C∥(h − hk)⁺∥_{Lp(B2)}.

We note that our assumption, together with the third inequality above, yields lim_{k→∞} sup_{B2} |ϕk| = 0. Using (6.10), (6.12) and (6.20), we have

Gk[ϕ + ϕk] ≥ P−(D²ϕk) − µ|Dϕk| + hk ≥ (h − hk)⁺ + ε − (h − hk) ≥ ε  a.e. in B2. □


7 Appendix

In this appendix, we present proofs of the propositions which appeared in the previous sections. However, to prove them, we often need more fundamental results, for which we only give references. One of such results is the following "Area formula", which will be employed in sections 7.1 and 7.2. We refer to [9] for a proof of a more general Area formula.

Area formula: if ξ ∈ C¹(Rⁿ, Rⁿ), g ∈ L¹(Rⁿ) and A ⊂ Rⁿ is measurable, then

∫_{ξ(A)} |g(y)| dy ≤ ∫_A |g(ξ(x))| |det(Dξ(x))| dx.

We note that the Area formula is a change of variables formula when |det(Dξ)| may vanish. In fact, the equality holds if |det(Dξ)| > 0 and ξ is injective.
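The inequality, and the way failure of injectivity makes it strict, can be seen in one dimension with ξ(x) = x², which folds A = [−1, 1] two-to-one onto ξ(A) = [0, 1]; the right-hand side then counts every point of ξ(A) twice. A quick numerical check (the choices of ξ and g are arbitrary):

```python
import numpy as np

# 1-D check of the Area formula with xi(x) = x^2 on A = [-1, 1]:
# xi(A) = [0, 1] and xi is 2-to-1 there, so we expect rhs = 2 * lhs.
N = 400000

dy = 1.0 / N
y = (np.arange(N) + 0.5) * dy                  # midpoints of xi(A) = [0, 1]
lhs = np.sum(np.abs(np.cos(y))) * dy           # integral of |g| over xi(A)

dx = 2.0 / N
x = -1.0 + (np.arange(N) + 0.5) * dx           # midpoints of A = [-1, 1]
rhs = np.sum(np.abs(np.cos(x**2)) * np.abs(2 * x)) * dx  # |g(xi)| |det D xi|

assert lhs <= rhs + 1e-9            # the Area formula inequality
assert abs(rhs - 2.0 * lhs) < 1e-6  # strict here: multiplicity exactly 2
print(f"lhs = {lhs:.6f}, rhs = {rhs:.6f}")
```

With an injective ξ whose derivative never vanishes, the same computation returns equality, matching the remark above.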

7.1 Proof of Ishii's lemma

First of all, we recall an important result by Aleksandrov. We refer to the Appendix of [6] and to [10] for a "functional analytic" proof, and to [9] for a "measure theoretic" proof.

Lemma 7.1. (Theorem A.2 in [6]) If f : Rⁿ → R is convex, then for a.a. x ∈ Rⁿ, there is (p, X) ∈ Rⁿ × Sⁿ such that

f(x + h) = f(x) + ⟨p, h⟩ + (1/2)⟨Xh, h⟩ + o(|h|²)  as |h| → 0.

(I.e., f is twice differentiable at a.a. x ∈ Rⁿ.)

We next recall Jensen's lemma, which is a version of the ABP maximum principle in section 7.2 below.

Lemma 7.2. (Lemma A.3 in [6]) Let f : Rⁿ → R be semi-convex (i.e. x → f(x) + C0|x|² is convex for some C0 ∈ R). Let x̂ ∈ Rⁿ be a strict maximum point of f. Set f_p(x) := f(x) − ⟨p, x⟩ for x, p ∈ Rⁿ. Then, for r > 0, there are C1, δ0 > 0 such that

|Γ_{r,δ}| ≥ C1 δⁿ  for δ ∈ (0, δ0],

where

Γ_{r,δ} := { x ∈ B_r(x̂) | ∃p ∈ B̄_δ such that f_p(y) ≤ f_p(x) for y ∈ B_r(x̂) }.

Proof. By translation, we may suppose x̂ = 0. For integers m, we set f^m(x) = f ∗ ρ_{1/m}(x), where ρ_{1/m} is the mollifier. Note that x → f^m(x) + C0|x|² is convex. Setting

Γ^m_{r,δ} := { x ∈ B̄_r | ∃p ∈ B̄_δ such that f^m_p(y) ≤ f^m_p(x) for y ∈ B̄_r },

where f^m_p(x) = f^m(x) − ⟨p, x⟩, we claim that there are C1, δ0 > 0, independent of large integers m, such that

|Γ^m_{r,δ}| ≥ C1 δⁿ  for δ ∈ (0, δ0].

We remark that this concludes the assertion. In fact, setting A_m := ∪_{k=m}^∞ Γ^k_{r,δ}, we have ∩_{m=1}^∞ A_m ⊂ Γ_{r,δ}. Indeed, for x ∈ ∩_{m=1}^∞ A_m, we can select p_k ∈ B̄_δ and m_k such that lim_{k→∞} m_k = ∞, and

max_{B̄_r} f^{m_k}_{p_k} = f^{m_k}_{p_k}(x).

Hence, sending k → ∞ (along a subsequence if necessary), we find p̂ ∈ B̄_δ such that max_{B̄_r} f_{p̂} = f_{p̂}(x), which yields x ∈ Γ_{r,δ}. Therefore, we have

C1 δⁿ ≤ lim_{m→∞} |A_m| = |∩_{m=1}^∞ A_m| ≤ |Γ_{r,δ}|.

Now we shall prove our claim. First of all, we notice that x → f^m(x) + C0|x|² is convex. Since 0 is the strict maximum point of f, we find ε0 > 0 such that

ε0 = f(0) − max_{B̄_{4r/3} \ B_{r/3}} f.

Fix p ∈ B̄_{δ0}, where δ0 = ε0/(3r). For m ≥ 3/r, we note that

f^m(x) − ⟨p, x⟩ ≤ f(0) − ε0 + δ0 r ≤ f(0) − (2/3)ε0  in B̄_r \ B_{2r/3}.

On the other hand, for large m, we verify that

f^m(0) ≥ f(0) − ω_f(m^{−1}) > f(0) − ε0/3,

where ω_f denotes the modulus of continuity of f. Hence, in view of these observations, for any p ∈ B̄_{δ0}, if max_{B̄_r} f^m_p = f^m_p(x) for x ∈ B̄_r, then x ∈ B_r. In other words, we see that

B̄_δ = Df^m(Γ^m_{r,δ})  for δ ∈ (0, δ0].

Thanks to the Area formula, we have

|B_δ| = ∫_{Df^m(Γ^m_{r,δ})} dy ≤ ∫_{Γ^m_{r,δ}} |det D²f^m| dx ≤ (2C0)ⁿ |Γ^m_{r,δ}|.

Here, we have employed that −2C0 I ≤ D²f^m ≤ O in Γ^m_{r,δ}. □

Although we can find a proof of the next proposition in [6], we recall the proof with a minor change for the reader's convenience.

Proposition 7.3. (Lemma A.4 in [6]) If f ∈ C(R^m), B ∈ S^m, ξ → f(ξ) + (λ/2)|ξ|² is convex and max_{ξ∈R^m}{f(ξ) − 2^{−1}⟨Bξ, ξ⟩} = f(0), then there is X ∈ S^m such that

(0, X) ∈ J̄^{2,+} f(0) ∩ J̄^{2,−} f(0)  and  −λI ≤ X ≤ B.

Proof. For any δ > 0, setting f_δ(ξ) := f(ξ) − 2^{−1}⟨Bξ, ξ⟩ − δ|ξ|², we notice that the semi-convex f_δ attains its strict maximum at ξ = 0. In view of Lemmas 7.1 and 7.2, there are ξ_δ, q_δ ∈ B_δ such that ξ → f_δ(ξ) + ⟨q_δ, ξ⟩ has a maximum at ξ_δ, at which f is twice differentiable. It is easy to see that Df(ξ_δ) → 0 (as δ → 0) and, moreover, from the convexity of ξ → f(ξ) + (λ/2)|ξ|²,

−λI ≤ D²f(ξ_δ) ≤ B + 2δI.

Noting (Df(ξ_δ), D²f(ξ_δ)) ∈ J^{2,+} f(ξ_δ) ∩ J^{2,−} f(ξ_δ), we conclude the assertion by taking the limit as δ → 0. □

We next give a "magic" property of sup-convolutions. For the reader's convenience, we reproduce the proof of [6].

Lemma 7.4. (Lemma A.5 in [6]) For v ∈ USC(Rⁿ) with sup_{Rⁿ} v < ∞ and λ > 0, we set

v̂(ξ) := sup_{x∈Rⁿ} ( v(x) − (λ/2)|x − ξ|² ).

For η, q ∈ Rⁿ and Y ∈ Sⁿ with (q, Y) ∈ J^{2,+} v̂(η), we have

(q, Y) ∈ J^{2,+} v(η + λ^{−1}q)  and  v̂(η) + |q|²/(2λ) = v(η + λ^{−1}q).

In particular, if (0, Y) ∈ J̄^{2,+} v̂(0), then (0, Y) ∈ J̄^{2,+} v(0).

Proof. For (q, Y) ∈ J^{2,+} v̂(η), we choose y ∈ Rⁿ such that

v̂(η) = v(y) − (λ/2)|y − η|².

Thus, from the definition, we see that for any x, ξ ∈ Rⁿ,

v(x) − (λ/2)|ξ − x|² ≤ v̂(ξ) ≤ v̂(η) + ⟨q, ξ − η⟩ + (1/2)⟨Y(ξ − η), ξ − η⟩ + o(|ξ − η|²)
  = v(y) − (λ/2)|y − η|² + ⟨q, ξ − η⟩ + (1/2)⟨Y(ξ − η), ξ − η⟩ + o(|ξ − η|²).

Taking ξ = x − y + η in the above, we have (q, Y) ∈ J^{2,+} v(y). To verify that y = η + λ^{−1}q, putting x = y and ξ = η − ε(λ(η − y) + q) for ε > 0 in the above again, we have

ε|λ(η − y) + q|² ≤ o(ε),

which concludes the proof. □
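The sup-convolution is easy to experiment with numerically. For v(x) = −|x| and a fixed λ > 0, v̂ has the closed form v̂(ξ) = −λξ²/2 for |ξ| ≤ 1/λ and v̂(ξ) = −|ξ| + 1/(2λ) otherwise, and ξ → v̂(ξ) + (λ/2)|ξ|² is convex (the semi-convexity of v̂). A minimal sketch checking both facts on a grid (the choices of v and λ are arbitrary):

```python
import numpy as np

# Sup-convolution of v(x) = -|x|: vhat(xi) = sup_x ( v(x) - (lam/2)|x - xi|^2 ).
# Closed form: vhat(xi) = -lam*xi^2/2 for |xi| <= 1/lam,
#              vhat(xi) = -|xi| + 1/(2*lam) otherwise.  (lam = 2 is arbitrary.)
lam, dx = 2.0, 0.01
x = np.arange(-2.0, 2.0 + dx / 2, dx)
v = -np.abs(x)

# brute-force sup over the grid (O(N^2), fine at this size)
vhat = np.max(v[None, :] - 0.5 * lam * (x[None, :] - x[:, None]) ** 2, axis=1)

exact = np.where(np.abs(x) <= 1.0 / lam,
                 -0.5 * lam * x**2, -np.abs(x) + 0.5 / lam)
assert np.max(np.abs(vhat - exact)) < 1e-3   # matches the closed form
assert np.all(vhat >= v)                     # vhat >= v everywhere

# semi-convexity: vhat + (lam/2)|xi|^2 has nonnegative second differences
a = vhat + 0.5 * lam * x**2
d2 = a[2:] - 2.0 * a[1:-1] + a[:-2]
assert d2.min() > -1e-4
print("sup-convolution checks passed")
```

Note how the kink of v at 0 is replaced by a parabolic cap of opening λ, which is exactly the mechanism the "magic" property exploits.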

Proof of Lemma 3.6. First of all, extending the upper semi-continuous functions u, w on Ω to Rⁿ by setting them to −∞ in Rⁿ \ Ω, we shall work in Rⁿ × Rⁿ instead of Ω × Ω. By translation, we may suppose that x̂ = ŷ = 0, at which u(x) + w(y) − ϕ(x, y) attains its maximum.

Furthermore, replacing u(x), w(y) and ϕ(x, y), respectively, by u(x) − u(0) − ⟨Dx ϕ(0, 0), x⟩,

w(y) − w(0) − ⟨Dy ϕ(0, 0), y⟩

and ϕ(x, y) − ϕ(0, 0) − ⟨D_xϕ(0, 0), x⟩ − ⟨D_yϕ(0, 0), y⟩, we may also suppose that ϕ(0, 0) = u(0) = w(0) = 0 and Dϕ(0, 0) = (0, 0) ∈ Rⁿ × Rⁿ.

Since ϕ(x, y) = (1/2)⟨A(x, y), (x, y)⟩ + o(|x|² + |y|²), where A := D²ϕ(0, 0) ∈ S^{2n}, for each η > 0 we see that the mapping

(x, y) → u(x) + w(y) − (1/2)⟨(A + ηI)(x, y), (x, y)⟩

attains its (strict) maximum at 0 ∈ R^{2n}. We will show the assertion for A + ηI in place of A. Then, sending η → 0, we can conclude the proof. Therefore, we need to prove the following:

Simplified version of Ishii's lemma. For upper semi-continuous functions u and w in Rⁿ, we suppose that

u(x) + w(y) − (1/2)⟨A(x, y), (x, y)⟩ ≤ u(0) + w(0) = 0  in Rⁿ × Rⁿ.

Then, for each µ > 1, there are X, Y ∈ Sⁿ such that (0, X) ∈ J̄^{2,+} u(0), (0, Y) ∈ J̄^{2,+} w(0) and

−(µ + ∥A∥) ( I O ; O I ) ≤ ( X O ; O Y ) ≤ A + (1/µ)A².

Proof of the simplified version of Lemma 3.6. Since Hölder's inequality implies

⟨A(x, y), (x, y)⟩ ≤ ⟨(A + (1/µ)A²)(ξ, η), (ξ, η)⟩ + (µ + ∥A∥)(|x − ξ|² + |y − η|²)

for x, y, ξ, η ∈ Rⁿ and µ > 0, setting λ = µ + ∥A∥, we have

u(x) − (λ/2)|x − ξ|² + w(y) − (λ/2)|y − η|² ≤ (1/2)⟨(A + (1/µ)A²)(ξ, η), (ξ, η)⟩.

Using the notation in Lemma 7.4, we denote by û and ŵ the sup-convolutions of u and w, respectively, with the above λ > 0. Thus, we have

û(ξ) + ŵ(η) ≤ (1/2)⟨(A + (1/µ)A²)(ξ, η), (ξ, η)⟩  for all ξ, η ∈ Rⁿ.

Since û(0) ≥ u(0) = 0 and ŵ(0) ≥ w(0) = 0, the above inequality implies û(0) = ŵ(0) = 0.

In view of Proposition 7.3 with m = 2n, f(ξ, η) = û(ξ) + ŵ(η) and B = A + µ^{−1}A², there is Z ∈ S^{2n} such that (0, Z) ∈ J̄^{2,+} f(0, 0) ∩ J̄^{2,−} f(0, 0) and

−λI ≤ Z ≤ B.

Hence, from the definition of J̄^{2,±}, it is easy to verify that there are X, Y ∈ Sⁿ such that (0, X) ∈ J̄^{2,+} û(0) ∩ J̄^{2,−} û(0), (0, Y) ∈ J̄^{2,+} ŵ(0) ∩ J̄^{2,−} ŵ(0), and

Z = ( X O ; O Y ).

Applying the last property in Lemma 7.4 to û and ŵ, we see that (0, X) ∈ J̄^{2,+} u(0) and (0, Y) ∈ J̄^{2,+} w(0). □

7.2 Proof of the ABP maximum principle

First of all, we remind the reader of our strategy in this and the next subsections. We first show that the ABP maximum principle holds under f ∈ Lⁿ(Ω) ∩ C(Ω) in Steps 1 and 2 of this subsection. Next, using this fact, we establish the existence of Lp-strong solutions of "Pucci" equations in the next subsection when f ∈ Lp(Ω). Employing this existence result, in Step 3, we finally prove Proposition 6.2, the ABP maximum principle when f ∈ Lⁿ(Ω).

ABP maximum principle for f ∈ Lⁿ(Ω) ∩ C(Ω) (section 7.2)
⇓
Existence of Lp-strong solutions of Pucci equations (section 7.3)
⇓
ABP maximum principle for f ∈ Lⁿ(Ω) (section 7.2)

Proof of Proposition 6.2. We give the proof from [5] of the subsolution assertion of Proposition 6.2. By scaling, we may suppose that diam(Ω) ≤ 1. Setting

r0 := max_{Ω̄} u − max_{∂Ω} u⁺,

we may also suppose that r0 > 0 since otherwise the conclusion is obvious.

We first introduce the following notation: for u : Ω̄ → R and r ≥ 0,

Γ_r := { x ∈ Ω | ∃p ∈ B̄_r such that u(y) ≤ u(x) + ⟨p, y − x⟩ for y ∈ Ω }.

Recalling the upper contact set in section 6.2, we note that

Γ[u, Ω] = ∪_{r>0} Γ_r.

Step 1: u ∈ C²(Ω) ∩ C(Ω̄). We first claim that for r ∈ (0, r0),

(i) B̄_r = Du(Γ_r),  (ii) D²u ≤ O in Γ_r.   (7.1)

To show (i), for p ∈ B̄_r, we take x̂ ∈ Ω̄ such that u(x̂) − ⟨p, x̂⟩ = max_{x∈Ω̄}(u(x) − ⟨p, x⟩). Since u(x) − u(x̂) ≤ r < r0 for x ∈ Ω̄, taking the maximum over Ω̄, we have x̂ ∈ Ω. Hence, we see p = Du(x̂), which concludes (i). For x ∈ Γ_r, Taylor's formula yields

u(y) = u(x) + ⟨Du(x), y − x⟩ + (1/2)⟨D²u(x)(y − x), y − x⟩ + o(|y − x|²).

Hence, we have 0 ≥ ⟨D²u(x)(y − x), y − x⟩ + o(|y − x|²), which shows (ii).

Now, we introduce the functions g_κ(p) := (|p|^{n/(n−1)} + κ^{n/(n−1)})^{1−n} for κ > 0. We shall simply write g for g_κ. Thus, for r ∈ (0, r0), we see that

∫_{Du(Γ_r)} g(p) dp ≤ ∫_{Γ_r} g(Du(x)) |det(D²u(x))| dx = ∫_{Γ_r} (|Du|^{n/(n−1)} + κ^{n/(n−1)})^{1−n} |det D²u(x)| dx.

Recalling (7.1), we utilize |det D²u| ≤ (−trace(D²u)/n)ⁿ in Γ_r to find C > 0 such that

∫_{B_r} g(p) dp ≤ C ∫_{Γ_r} (|Du|^{n/(n−1)} + κ^{n/(n−1)})^{1−n} (−trace(D²u))ⁿ dx.   (7.2)

Thus, since (µ|Du| + f⁺)ⁿ ≤ g(Du)^{−1}(µⁿ + κ^{−n}(f⁺)ⁿ) by Hölder's inequality, we have

∫_{B_r} g(p) dp ≤ C ∫_{Γ_r} ( µⁿ + (f⁺/κ)ⁿ ) dx.   (7.3)

On the other hand, since (|p|ⁿ + κⁿ)^{−1} ≤ g(p), we have

log( (r/κ)ⁿ + 1 ) ≤ C ∫_{B_r} dp/(|p|ⁿ + κⁿ) ≤ C ∫_{B_r} g(p) dp.

Hence, noting Γ_r ⊂ Ω⁺[u] for r ∈ (0, r0), by (7.3), we have

r ≤ κ [ exp{ C ∫_{Γ[u,Ω]∩Ω⁺[u]} ( µⁿ + (f⁺/κ)ⁿ ) dx } − 1 ]^{1/n}.   (7.4)

When ∥f⁺∥_{Lⁿ(Γ[u,Ω]∩Ω⁺[u])} = 0, sending κ → 0, we get a contradiction. Thus, we may suppose that ∥f⁺∥_{Lⁿ(Γ[u,Ω]∩Ω⁺[u])} > 0. Setting κ := ∥f⁺∥_{Lⁿ(Γ[u,Ω]∩Ω⁺[u])} and r := r0/2, we can find C > 0, independent of u, such that

r0 ≤ C∥f⁺∥_{Lⁿ(Γ[u,Ω]∩Ω⁺[u])}.

Remark. We note that we do not need to suppose f to be continuous in Step 1, while we need it in the next step.

Step 2: u ∈ C(Ω̄) and f ∈ Lⁿ(Ω) ∩ C(Ω). First of all, because f ∈ C(Ω), we remark that u is a "standard" viscosity subsolution of

P−(D²u) − µ|Du| ≤ f  in Ω⁺[u].

(See Proposition 2.9 in [5].) Let u^ε be the sup-convolution of u for ε > 0:

u^ε(x) := sup_{y∈Ω} { u(y) − |x − y|²/(2ε) }.

Note that u^ε is semi-convex and thus twice differentiable a.e. in Rⁿ. We claim that for small ε > 0, u^ε is a viscosity subsolution of

P−(D²u^ε) − µ|Du^ε| ≤ f^ε  in Ω_ε,   (7.5)

where f^ε(x) := sup{ f⁺(y) | |x − y| ≤ 2(∥u∥_{L^∞(Ω)} ε)^{1/2} } and Ω_ε := { x ∈ Ω⁺[u] | dist(x, ∂Ω⁺[u]) > 2(∥u∥_{L^∞(Ω)} ε)^{1/2} }. Indeed, for x ∈ Ω_ε and (q, X) ∈ J^{2,+} u^ε(x), choosing x̂ ∈ Ω such that u^ε(x) = u(x̂) − (2ε)^{−1}|x − x̂|², we easily verify that |q| = ε^{−1}|x̂ − x| ≤ 2√(∥u∥_{L^∞(Ω)}/ε). Thus, by Lemma 7.4, we see that (q, X) ∈ J^{2,+} u(x + εq). Hence, we have

P−(X) − µ|q| ≤ f⁺(x + εq) ≤ f^ε(x).

We note that for small ε > 0, we may suppose that

r_ε := max_{Ω̄_ε} u^ε − max_{∂Ω_ε}(u^ε)⁺ > 0.   (7.6)

Here, we list some properties of upper contact sets: for small δ > 0, we set Ω^δ := {x ∈ Ω | dist(x, ∂Ω) > δ}.

Lemma 7.5. Let v_δ ∈ C(Ω̄^δ) and v ∈ C(Ω̄) satisfy that v_δ → v uniformly on any compact set in Ω as δ → 0. Assume that r̂ := max_{Ω̄} v − max_{∂Ω} v⁺ > 0. Then, for r ∈ (0, r̂), we have the following properties:

(1) Γ_r[v, Ω] is a compact set in Ω⁺[v],
(2) lim sup_{δ→0} Γ_r[v_δ, Ω^δ] ⊂ Γ_r[v, Ω],
(3) for small α > 0, there is δ_α > 0 such that ∪_{0≤δ<δ_α} Γ_r[v_δ, Ω^δ] ⊂ Γ̂^α_r, where Γ̂^α_r denotes the α-neighborhood of Γ_r[v, Ω] in Ω.

Proof. We first show (1). Suppose the contrary: if there are x_k ∈ Γ_r[v, Ω] such that x_k → x̂ ∈ ∂Ω, then there are p_k ∈ B̄_r such that v(y) ≤ v(x_k) + ⟨p_k, y − x_k⟩ for y ∈ Ω. Hence, sending k → ∞, we have

max_{Ω̄} v − max_{∂Ω} v⁺ ≤ r < r̂,

which is a contradiction. Thus, we can find a compact set K ⊂ Ω such that Γ_r[v, Ω] ⊂ K. Moreover, if v(x) ≤ 0 for some x ∈ Γ_r[v, Ω], then we get a contradiction: r̂ ≤ max_{Ω̄} v ≤ r < r̂. This shows (1).

Next, choose x ∈ lim sup_{δ→0} Γ_r[v_δ, Ω^δ]. Then, for any k ≥ 1, there are δ_k ∈ (0, 1/k) and p_k ∈ B̄_r such that v_{δ_k}(y) ≤ v_{δ_k}(x) + ⟨p_k, y − x⟩ for y ∈ Ω^{δ_k}. We may suppose p_k → p for some p ∈ B̄_r, taking a subsequence if necessary. Sending k → ∞ in the above, we see that x ∈ Γ_r[v, Ω], which shows (2).

If (3) does not hold, then there are α0 > 0, δ_k ∈ (0, 1/k) and x_k ∈ Γ_r[v_{δ_k}, Ω^{δ_k}] \ Γ̂^{α0}_r. We may suppose again that lim_{k→∞} x_k = x̂ for some x̂ ∈ Ω̄. When x̂ ∈ ∂Ω, since there is p_k ∈ B̄_r such that v_{δ_k}(y) ≤ v_{δ_k}(x_k) + ⟨p_k, y − x_k⟩ for y ∈ Ω^{δ_k}, we obtain r̂ ≤ r < r̂ as before, which is a contradiction. Thus, we may suppose that x̂ ∈ Ω, and then x̂ ∈ Γ_r[v, Ω]. Thus, there is k0 ≥ 1 such that x_k ∈ Γ̂^{α0}_r for k ≥ k0, which is a contradiction. □

For δ > 0, we set u^ε_δ := u^ε ∗ ρ_δ, where ρ_δ is the standard mollifier. We set Γ̃^{ε,δ}_r := Γ_r[u^ε_δ, Ω_ε] for r ∈ (0, r^ε_δ), where r^ε_δ := max_{Ω̄_ε} u^ε_δ − max_{∂Ω_ε}(u^ε_δ)⁺. Notice that for small δ > 0, r^ε_δ > 0. In view of the argument to derive (7.2) in Step 1, we have

∫_{B_r} g(p) dp ≤ C ∫_{Γ̃^{ε,δ}_r} (|Du^ε_δ|^{n/(n−1)} + κ^{n/(n−1)})^{1−n} (−trace(D²u^ε_δ))ⁿ dx

for small r > 0. Also, by the same argument as for (ii) in (7.1), we can show that D²u^ε_δ(x) ≤ O in Γ̃^{ε,δ}_r. Furthermore, from the definition of u^ε, we verify that −ε^{−1}I ≤ D²u^ε_δ(x) in Ω_ε. Hence, sending δ → 0 with Lemma 7.5 (3), we have

∫_{B_r} g(p) dp ≤ C ∫_{Γ_r[u^ε,Ω_ε]} (|Du^ε|^{n/(n−1)} + κ^{n/(n−1)})^{1−n} (−trace(D²u^ε))ⁿ dx
  ≤ C ∫_{Γ_r[u^ε,Ω_ε]} ( µⁿ + (f^ε/κ)ⁿ ) dx.

Therefore, sending ε → 0 (again with Lemma 7.5 (3)), we obtain (7.4), which implies the conclusion.

Remark. Using the ABP maximum principle in Step 2 (i.e. f ∈ C(Ω)), we can give a proof of Proposition 6.3, which will be seen in section 7.3. Thus, in Step 3 below, we will use Proposition 6.3.

Step 3: u ∈ C(Ω̄) and f ∈ Lⁿ(Ω). Let f_k ∈ C(Ω) be nonnegative functions such that ∥f_k − f⁺∥_{Lⁿ(Ω)} → 0 as k → ∞. In view of Proposition 6.3, we choose ϕ_k ∈ C(Ω̄) ∩ W^{2,n}_{loc}(Ω) such that

P+(D²ϕ_k) + µ|Dϕ_k| = f_k − f⁺  a.e. in Ω,
ϕ_k = 0  on ∂Ω,
∥ϕ_k∥_{L^∞(Ω)} ≤ C∥f_k − f⁺∥_{Lⁿ(Ω)}.

Setting w_k := u + ϕ_k − ∥ϕ_k∥_{L^∞(Ω)}, we easily verify that w_k is an Lⁿ-viscosity subsolution of P−(D²w_k) − µ|Dw_k| ≤ f_k in Ω. Note that Ω⁺[w_k] ⊂ Ω⁺[u]. Thus, by Step 2, we have

max_{Ω̄} w_k ≤ max_{∂Ω} w_k + C∥(f_k)⁺∥_{Lⁿ(Γ_r[w_k,Ω]∩Ω⁺[u])}.

Therefore, sending k → ∞ with Lemma 7.5 (2), we finish the proof. □

7.3 Proof of existence results for Pucci equations

We shall solve Pucci equations under the Dirichlet condition in Ω. For simplicity of statements, we shall treat the case when Ω is a ball, though we will need the existence result in smooth domains later. To extend the result to general Ω with smooth boundary, we only need to modify the function v^z in the argument below. For µ ≥ 0 and f ∈ Lp(Ω) with p ≥ n, we consider

P−(D²u) − µ|Du| ≥ f in B1,  u = 0 on ∂B1,   (7.7)

and

P+(D²u) + µ|Du| ≤ f in B1,  u = 0 on ∂B1.   (7.8)

Sketch of proof. We only show the assertion for (7.8).

Step 1: f ∈ C^∞(B̄1). We shall consider the case when f ∈ C^∞(B̄1). Set S_{λ,Λ} := {A = (A_{ij}) ∈ Sⁿ | λI ≤ A ≤ ΛI}. We can choose a countable set S_0 := {A^k = (A^k_{ij}) ∈ S_{λ,Λ}}_{k=1}^∞ such that S̄_0 = S_{λ,Λ}. Noting that µ|q| = max{⟨b, q⟩ | b ∈ ∂B_µ} for q ∈ Rⁿ, we choose B_0 := {b^k ∈ ∂B_µ}_{k=1}^∞ such that B̄_0 = ∂B_µ. According to Evans' result in 1983, we can find classical solutions u_N ∈ C(Ω̄) ∩ C²(Ω) of

max_{k=1,...,N} { −trace(A^k D²u) + ⟨b^k, Du⟩ } = f  in B1,  u = 0 on ∂B1.   (7.9)

Moreover, we find σ = σ(ε) ∈ (0, 1), C_ε > 0 (for each ε ∈ (0, 1)) and C1 > 0, which are independent of N ≥ 1, such that

∥u_N∥_{L^∞(B1)} ≤ C1∥f∥_{Lⁿ(B1)}  and  ∥u_N∥_{C^{2,σ}(B_{1−ε})} ≤ C_ε.   (7.10)

Note that the first estimate of (7.10) is valid by Proposition 6.2 when the inhomogeneous term is continuous. More precisely, by the classical comparison principle, Proposition 3.3, we have

u_N ≤ u_1  in B̄1.   (7.11)

Furthermore, we can construct a subsolution of (7.9) for any N ≥ 1 in the following manner: Fix z ∈ ∂B1. Set v^z(x) := α(e^{−β|x−2z|²} − e^{−β}), where α, β > 0 (independent of z ∈ ∂B1) will be chosen later. We first note that v^z(z) = 0 and v^z(x) ≤ 0 for x ∈ B̄1. Setting L_k w(x) := −trace(A^k D²w(x)) + ⟨b^k, Dw(x)⟩, we verify that

L_k v^z(x) ≤ 2αβ e^{−β|x−2z|²} (Λn − 2βλ|x − 2z|² + µ|x − 2z|) ≤ 2αβ e^{−9β} (Λn − 2βλ + 3µ).

Thus, fixing β := (Λn + 3µ + 1)/(2λ), we have L_k v^z(x) ≤ −2αβ e^{−9β}. Hence, taking α > 0 large enough so that 2αβ e^{−9β} ≥ ∥f∥_{L^∞(B1)}, we have

max_{k=1,...,N} L_k v^z(x) ≤ f(x)  in B1.

Now, putting V(x) := sup_{z∈∂B1} v^z(x), in view of Theorem 4.2, we see that V is a viscosity subsolution of

max_{k=1,...,N} L_k u(x) − f(x) ≤ 0  in B1.

Moreover, it is easy to check that V*(x) = 0 for x ∈ ∂B1. Thus, by Proposition 3.3 again, we obtain that

V ≤ u_N  in B̄1.   (7.12)

Therefore, in view of (7.10)-(7.12), we can choose a sequence N_k and u ∈ C²(B1) such that lim_{k→∞} N_k = ∞,

(u_{N_k}, Du_{N_k}, D²u_{N_k}) → (u, Du, D²u)  uniformly in B_{1−ε}

for each ε ∈ (0, 1), and

V ≤ u ≤ u_1  in B̄1.   (7.13)

We note that (7.13) implies that u_* = u^* on ∂B1. By virtue of the stability result (Proposition 4.8), we see that u is a viscosity solution of P+(D²u) + µ|Du| − f = 0 in B1 since sup_{k≥1}{−trace(A^k X) + ⟨b^k, p⟩} = P+(X) + µ|p|. Hence, Theorem 3.9 yields u ∈ C(B̄1). Therefore, by Proposition 2.3, we see that u ∈ C(B̄1) ∩ C²(B1) is a classical solution of (7.8).

Step 2: f ∈ Lp(B1). (Lemma 3.1 in [5]) Choose f_k ∈ C^∞(B̄1) such that ∥f_k − f∥_{Lp(Ω)} → 0 as k → ∞. Let u_k ∈ C(B̄1) ∩ C²(B1) be a classical solution of P+(D²u) + µ|Du| − f_k = 0 in B1 such that u_k = 0 on ∂B1. We first claim that {u_k}_{k=1}^∞ is a Cauchy sequence in L^∞(B1). Indeed, since (1) and (4) of Proposition 3.2 imply that

P−(D²(u_j − u_k)) − µ|D(u_j − u_k)| ≤ P+(D²u_j) + P−(−D²u_k) + µ|Du_j| − µ|Du_k|
  = f_j − f_k
  ≤ P+(D²u_j) − P+(D²u_k) + µ|D(u_j − u_k)|
  ≤ P+(D²(u_j − u_k)) + µ|D(u_j − u_k)|,

using Proposition 6.2 when the inhomogeneous term is continuous, we have

max_{B̄1} |u_j − u_k| ≤ C∥f_j − f_k∥_{Lⁿ(B1)}.

Recalling p ≥ n, we thus have ∥uj − uk ∥L∞ (B1 ) ≤ C∥fj − fk ∥Lp (B1 ) . Hence, we find u ∈ C(B 1 ) such that uk converges to u uniformly in B 1 as k → ∞.
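The identity sup_k {−trace(A^k X) + ⟨b^k, p⟩} = P+(X) + µ|p| used above rests on the eigenvalue formula for the Pucci extremal operator: maximizing −trace(AX) over λI ≤ A ≤ ΛI weights the positive eigenvalues of X by λ and the negative ones by Λ. A minimal numerical sketch of this sup formula (random test matrices; sign convention as in this text):

```python
import numpy as np

# P+(X) = sup_{lam*I <= A <= Lam*I} (-trace(A X))
#       = -lam * sum_{e > 0} e - Lam * sum_{e < 0} e,  e = eigenvalues of X.
# (Sign convention of this text, where -trace(A D^2 u) is the elliptic term.)
rng = np.random.default_rng(0)
lam, Lam, n = 0.5, 2.0, 4

def pucci_plus(X, lam, Lam):
    e = np.linalg.eigvalsh(X)
    return -lam * e[e > 0].sum() - Lam * e[e < 0].sum()

for _ in range(50):
    M = rng.standard_normal((n, n))
    X = (M + M.T) / 2.0                       # symmetric test matrix
    p_plus = pucci_plus(X, lam, Lam)
    # every admissible A = Q diag(a) Q^T with a in [lam, Lam] must satisfy
    # -trace(A X) <= P+(X); random sampling checks the sup property
    for _ in range(200):
        Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
        a = rng.uniform(lam, Lam, size=n)
        A = (Q * a) @ Q.T
        assert -np.trace(A @ X) <= p_plus + 1e-9
    # the maximizer built from the eigenvectors of X attains the sup
    e, V = np.linalg.eigh(X)
    A_opt = (V * np.where(e > 0, lam, Lam)) @ V.T
    assert abs(-np.trace(A_opt @ X) - p_plus) < 1e-9

print("Pucci sup formula verified")
```

Adding the gradient term is immediate, since sup_{|b|≤µ}⟨b, p⟩ = µ|p| is attained at b = µp/|p|.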


Therefore, by the standard covering and limiting arguments with weak convergence in W^{2,p} locally, it suffices to find C > 0, independent of k ≥ 1, such that ∥u_k∥_{W^{2,p}(B_{1/2})} ≤ C. Moreover, we see that −C∥f⁻∥_{Lp(B1)} ≤ u ≤ C∥f⁺∥_{Lp(B1)} in B1.

For ε ∈ (0, 1/2), we select η := η_ε ∈ C²(B̄1) such that

(i) 0 ≤ η ≤ 1 in B1,
(ii) η = 0 in B1 \ B_{1−ε},
(iii) η = 1 in B_{1−2ε},
(iv) |Dη| ≤ C0 ε^{−1}, |D²η| ≤ C0 ε^{−2} in B1,

where C0 > 0 is independent of ε ∈ (0, 1/2). Now, we recall Caffarelli's result (1989) (see also [4]): there is a universal constant Ĉ > 0 such that

∥D²(ηu_k)∥_{Lp(B_{1−ε})} ≤ Ĉ∥P+(D²(ηu_k))∥_{Lp(B_{1−ε})}.

Hence, we find C1 > 0 such that for 0 < ε < 1/4,

∥D²u_k∥_{Lp(B_{1−2ε})} ≤ ∥D²(ηu_k)∥_{Lp(B_{1−ε})} ≤ Ĉ∥P+(D²(ηu_k))∥_{Lp(B_{1−ε})}
  ≤ C1 ( ∥f_k∥_{Lp(B_{1−ε})} + ε^{−1}∥Du_k∥_{Lp(B_{1−ε})} + ε^{−2}∥u_k∥_{Lp(B_{1−ε})} ).

Multiplying by ε² > 0 in the above, we get

ε² ∥D²u_k∥_{Lp(B_{1−2ε})} ≤ C1 ( ∥f_k∥_{Lp(B1)} + ϕ1(u_k) + ϕ0(u_k) ),

where ϕ_j(u_k) := sup_{0<ε<1/4} ε^j ∥D^j u_k∥_{Lp(B_{1−2ε})} (j = 0, 1, 2). Since the interpolation inequality gives, for any δ > 0, a constant C_δ > 0 such that ϕ1(u_k) ≤ δϕ2(u_k) + C_δ ϕ0(u_k), we find C3 > 0 such that

ϕ2(u_k) ≤ C3 ( ∥f_k∥_{Lp(B1)} + ϕ0(u_k) ).

On the other hand, since we have the L^∞-estimates for u_k, we conclude the proof. □

Remark. It is possible to show that the uniform limit u in Step 2 is an Lp-viscosity solution of (7.8) by Proposition 6.13. Moreover, since it is known that if an Lp-viscosity supersolution of (7.8) belongs to W^{2,p}_{loc}(B1), then it is an Lp-strong supersolution (see [5]), u satisfies P+(D²u) + µ|Du| = f(x) a.e. in B1.

7.4 Proof of the weak Harnack inequality

We need a modification of Lemma 4.1 in [4] since our PDE (7.14) below has the first derivative term.

Lemma 7.6. (cf. Lemma 4.1 in [4]) There are ϕ ∈ C²(B̄_{2√n}) and ξ ∈ C(B_{2√n}) such that

(1) P−(D²ϕ) − µ|Dϕ| ≥ −ξ in B_{2√n},
(2) ϕ(x) ≤ −2 for x ∈ Q3,
(3) ϕ(x) = 0 for x ∈ ∂B_{2√n},
(4) ξ(x) = 0 for x ∈ B_{2√n} \ B_{1/2}.

Proof. Set ϕ0(r) := A{1 − (2√n/r)^α} for A, α > 0, so that ϕ0(2√n) = 0.

[Fig 7.1: the graphs of y = ϕ(x) and y = ϕ0(|x|) on [−2√n, 2√n].]

Since

Dϕ0(|x|) = A(2√n)^α α|x|^{−α−2} x,
D²ϕ0(|x|) = A(2√n)^α α|x|^{−α−4} {|x|²I − (α + 2)x ⊗ x},

we calculate in the following way: at x ≠ 0, we have

P−(D²ϕ0(|x|)) − µ|Dϕ0(|x|)| ≥ A(2√n)^α α|x|^{−α−2} {(α + 2)λ − nΛ − µ|x|}.

Setting α := λ^{−1}(nΛ + 2µ√n) − 2, so that α > 0 for n ≥ 2, we see that the right-hand side of the above is nonnegative for x ∈ B_{2√n} \ {0}. Thus, taking ϕ ∈ C²(B̄_{2√n}) such that ϕ(x) = ϕ0(|x|) for x ∈ B_{2√n} \ B_{1/2} and ϕ(x) ≤ ϕ0(3√n/2) for x ∈ B_{3√n/2}, we can choose a continuous function ξ satisfying (1) and (4). See Fig 7.1. Moreover, taking A := 2/{(4/3)^α − 1}, so that ϕ0(3√n/2) = −2, we see that (2) holds. □

We now present an important "cube decomposition lemma". We shall first explain a terminology for the lemma: for a cube Q̃ := Q_r(x) with r > 0 and x ∈ Rⁿ, we call Q a dyadic cube of Q̃ if it is one of the cubes {Q_k}_{k=1}^{2ⁿ}, where Q_k := Q_{r/2}(x_k) for some x_k ∈ Q̃, ∪_{k=1}^{2ⁿ} Q_k ⊂ Q̃ and Q̃ ⊂ ∪_{k=1}^{2ⁿ} Q̄_k.

Lemma 7.7. (Lemma 4.2 in [4]) Let A ⊂ B ⊂ Q1 be measurable sets and 0 < δ < 1 be such that

(a) |A| ≤ δ,
(b) if a dyadic cube Q of Q̃ ⊂ Q1 satisfies |A ∩ Q| > δ|Q|, then Q̃ ⊂ B.

Then, |A| ≤ δ|B|.

Proof of Proposition 6.4. Assuming that u ∈ C(B̄_{2√n}) is a nonnegative viscosity supersolution of

P+(D²u) + µ|Du| ≥ 0  in B_{2√n},   (7.14)

we shall show that for some constants p0 > 0 and C1 > 0,

∥u∥_{L^{p0}(Q1)} ≤ C1 inf_{Q_{1/2}} u.

To this end, it is sufficient to show that if u ∈ C(B̄_{2√n}) satisfies inf_{Q_{1/2}} u ≤ 1, then we have ∥u∥_{L^{p0}(Q1)} ≤ C1 for some constants p0, C1 > 0. Indeed, by taking v(x) := u(x)(inf_{Q_{1/2}} u + δ)^{−1} for any δ > 0 in place of u, we have ∥v∥_{L^{p0}(Q1)} ≤ C1, which implies the assertion by sending δ → 0.

Lemma 7.8. There are θ > 0 and M > 1 such that if u ∈ C(B̄_{2√n}) is a nonnegative Lp-viscosity supersolution of (7.14) such that

inf_{Q3} u ≤ 1,   (7.15)

then we have |{x ∈ Q1 | u(x) ≤ M}| ≥ θ.

Remark. In our setting of the proof of Proposition 6.4, assumption (7.15) is automatically satisfied.

Proof of Lemma 7.8. Choose ϕ ∈ C²(B̄_{2√n}) and ξ ∈ C(B_{2√n}) from Lemma 7.6. Using (4) of Proposition 3.2, we easily see that w := u + ϕ is an Lⁿ-viscosity supersolution of

P+(D²w) + µ|Dw| ≥ −ξ  in B_{2√n}.

Since inf_{Q3} w ≤ −1 and w ≥ 0 on ∂B_{2√n} by (2) and (3) in Lemma 7.6, respectively, by Proposition 6.2, we find Ĉ > 0 such that

1 ≤ sup_{Q3}(−w) ≤ sup_{B_{2√n}}(−w) ≤ Ĉ∥ξ∥_{Lⁿ(Γ[−w,B_{2√n}]∩B⁺_{2√n}[−w])}.   (7.16)

In view of (4) of Lemma 7.6, (7.16) implies that

1 ≤ Ĉ max_{B̄_{1/2}} |ξ| · |{x ∈ Q1 | (u + ϕ)(x) < 0}|.

Since u(x) ≤ −ϕ(x) ≤ max_{B̄_{2√n}}(−ϕ) =: M whenever (u + ϕ)(x) < 0, setting θ := (Ĉ sup_{Q1} |ξ|)^{−1} > 0 and noting M = sup_{B_{2√n}}(−ϕ) ≥ 2, we have

θ ≤ |{x ∈ Q1 | u(x) ≤ M}|. □

We next show the following:

Lemma 7.9. Under the same assumptions as in Lemma 7.8, we have

|{x ∈ Q1 | u(x) > M^k}| ≤ (1 − θ)^k  for all k = 1, 2, …

Proof. Lemma 7.8 yields the assertion for k = 1. Suppose that it holds for k − 1. Setting A := {x ∈ Q1 | u(x) > M^k} and B := {x ∈ Q1 | u(x) > M^{k−1}}, we shall show |A| ≤ (1 − θ)|B|.

Since A ⊂ B ⊂ Q1 and, in view of Lemma 7.8, |A| ≤ |{x ∈ Q1 | u(x) > M}| ≤ δ := 1 − θ, it is enough to check that property (b) in Lemma 7.7 holds. To this end, let Q := Q_{1/2^j}(z) be a dyadic cube of Q̃ := Q_{1/2^{j−1}}(ẑ) (for some z, ẑ ∈ Q1 and j ≥ 1) such that

|A ∩ Q| > δ|Q| = (1 − θ)/2^{jn}.   (7.17)

It remains to show Q̃ ⊂ B. Assume that there is x̃ ∈ Q̃ such that x̃ ∉ B, i.e. u(x̃) ≤ M^{k−1}. Set v(x) := u(z + 2^{−j}x)/M^{k−1} for x ∈ B̄_{2√n}. Since |x̃_i − z_i| ≤ 3/2^{j+1}, we see that inf_{Q3} v ≤ u(x̃)/M^{k−1} ≤ 1. Furthermore, since z ∈ Q1, we have z + 2^{−j}x ∈ B_{2√n} for x ∈ B_{2√n}.

[Fig 7.2: the dyadic cube Q = Q_{1/2^j}(z) inside Q̃ = Q_{1/2^{j−1}}(ẑ), the point x̃ ∈ Q̃, and the rescaled cube z + 2^{−j}Q3.]

Thus, since v is an Lp-viscosity supersolution of P+(D²v) + µ|Dv| ≥ 0, Lemma 7.8 yields |{x ∈ Q1 | v(x) ≤ M}| ≥ θ. Therefore, we have

|{x ∈ Q | u(x) ≤ M^k}| ≥ θ/2^{jn} = θ|Q|.

Thus, we have |Q \ A| ≥ θ|Q|. Hence, in view of (7.17), we have

|Q| = |A ∩ Q| + |Q \ A| > δ|Q| + θ|Q| = |Q|,

which is a contradiction. □

Back to the proof of Proposition 6.4. A direct consequence of Lemma 7.9 is that there are C̃, ε > 0 such that

    |{x ∈ Q_1 | u(x) ≥ t}| ≤ C̃ t^{-ε}   for t > 0.   (7.18)

Indeed, for t > M, we choose an integer k ≥ 1 so that M^{k+1} ≥ t > M^k. Thus, we have

    |{x ∈ Q_1 | u(x) ≥ t}| ≤ |{x ∈ Q_1 | u(x) > M^k}| ≤ (1 - θ)^k ≤ C̃_0 t^{-ε},

where C̃_0 := (1 - θ)^{-1} and ε := -log(1 - θ)/log M > 0. Since 1 ≤ M^ε t^{-ε} for 0 < t ≤ M, taking C̃ := max{C̃_0, M^ε}, we obtain (7.18).

Now, recalling Fubini's theorem (see Lemma 9.7 in [13] for instance), we have

    ∫_{Q_1} u^{p_0}(x) dx ≤ ∫_{{x ∈ Q_1 | u(x) ≥ 1}} u^{p_0}(x) dx + 1
                          ≤ p_0 ∫_1^∞ t^{p_0 - 1} |{x ∈ Q_1 | u(x) ≥ t}| dt + 2.

Hence, in view of (7.18), for any p_0 ∈ (0, ε), we can find C(p_0) > 0 such that ∥u∥_{L^{p_0}(Q_1)} ≤ C(p_0).  □
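The integrability claim at the end — (7.18) gives u ∈ L^{p_0}(Q_1) for p_0 < ε — rests on the convergence of ∫_1^∞ p_0 t^{p_0-1} · t^{-ε} dt. A small numerical sanity check (with hypothetical values θ = 0.3, M = 2, which are not from the text) compares a quadrature of this integral with its closed form p_0/(ε - p_0):

```python
import math

# Hypothetical constants for illustration; the text only asserts their existence.
theta, M = 0.3, 2.0
eps = -math.log(1.0 - theta) / math.log(M)   # epsilon from the proof of (7.18)

def tail_integral(p0, eps, t_max=1e8, n_steps=4000):
    """Midpoint rule for ∫_1^{t_max} p0 t^{p0-1} t^{-eps} dt via s = log t."""
    s_max = math.log(t_max)
    ds = s_max / n_steps
    total = 0.0
    for i in range(n_steps):
        s = (i + 0.5) * ds
        total += p0 * math.exp(s * (p0 - eps)) * ds   # p0 e^{s(p0-eps)} ds
    return total

p0 = eps / 2.0                 # any p0 in (0, eps) works
exact = p0 / (eps - p0)        # closed form of the improper integral
approx = tail_integral(p0, eps)
```

Note also that M^{-ε} = 1 - θ by the definition of ε, which is exactly the identity used to absorb (1 - θ)^k into C̃_0 t^{-ε}.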

7.5 Proof of the local maximum principle

Although our proof is a bit technical, we give a modification of Trudinger's proof in [13] (Theorem 9.20), in which he obtained a precise estimate for "strong" subsolutions on the upper contact set. Fok in [11] (1996) gave a proof similar to ours. We note that a different proof of the local maximum principle can be found in [4] (Theorem 4.8 (2)).

Proof of Proposition 6.5. We give a proof only for q ∈ (0, 1], because the assertion for q > 1 then follows immediately by Hölder's inequality. Let x_0 ∈ Q_1 be such that max_{Q_1} u = u(x_0). Since B_{1/2}(x_0) ⊂ Q_2, it is sufficient to show that

    max_{B̄_{1/4}(x_0)} u ≤ C_2 ∥u^+∥_{L^q(B_{1/2}(x_0))}.

Thus, by considering u(x_0 + x/2) instead of u(x), it is enough to find C_2 > 0 such that

    max_{B̄_{1/2}} u ≤ C_2 ∥u^+∥_{L^q(B_1)}.

We may suppose that

    max_{B̄_1} u > 0   (7.19)

since otherwise the conclusion is trivial. Furthermore, by the continuity of u, we can choose τ ∈ (0, 1/4) such that 1 - 2τ ≥ 1/2 and

    max_{B̄_{1-2τ}} u > 0.

We shall consider the sup-convolution of u again: for ε ∈ (0, τ),

    u^ε(x) := sup_{y ∈ B̄_1} { u(y) - |x - y|^2/(2ε) }.

By the uniform convergence of u^ε to u, (7.19) yields

    max_{B̄_{1-τ}} u^ε > 0   for small ε > 0.   (7.20)
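The sup-convolution u^ε is easy to experiment with on a grid. The sketch below is a toy illustration only (the sample function u(y) = -|y| and all grids are our own choices, not from the text); it checks two basic properties used here: u^ε ≥ u, and u^ε decreases towards u as ε ↓ 0.

```python
# Discrete sup-convolution: u^eps(x) = sup_y { u(y) - |x-y|^2 / (2 eps) },
# with the sup taken over grid points ys (illustration only).

def sup_convolution(u, xs, ys, eps):
    return [max(u(y) - (x - y) ** 2 / (2.0 * eps) for y in ys) for x in xs]

u = lambda y: -abs(y)                          # hypothetical sample function
ys = [i / 200.0 for i in range(-200, 201)]     # fine grid on [-1, 1]
xs = [i / 20.0 for i in range(-20, 21)]        # evaluation points (subset of ys)

u_eps = sup_convolution(u, xs, ys, eps=0.1)
u_eps_small = sup_convolution(u, xs, ys, eps=0.01)
```

Since u here is 1-Lipschitz, one can check by hand that u ≤ u^ε ≤ u + ε/2, which is what the assertions below test on the grid.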

For small ε > 0, we can choose δ := δ(ε) ∈ (0, τ) such that lim_{ε→0} δ = 0 and

    P^-(D^2 u^ε) - µ|Du^ε| ≤ 0   a.e. in B_{1-δ}.

Putting η^ε(x) := {(1 - δ)^2 - |x|^2}^β for β := 2n/q ≥ 2, we define v^ε(x) := η^ε(x) u^ε(x). We note that

    r^ε := max_{B̄_{1-δ}} v^ε > 0.

Fix r ∈ (0, r^ε) and set Γ^ε_r := Γ_r[v^ε, B_{1-δ}]. By (1) in Lemma 7.5, we see that Γ^ε_r ⊂ B^+_{1-δ}[v^ε]. For later convenience, we observe that (writing η for η^ε)

    Dv^ε(x) = -2βx η(x)^{(β-1)/β} u^ε(x) + η(x) Du^ε(x),   (7.21)

    D^2 v^ε(x) = -2βη(x)^{(β-1)/β} {u^ε(x) I + x ⊗ Du^ε(x) + Du^ε(x) ⊗ x}
                 + 4β(β - 1) η(x)^{(β-2)/β} u^ε(x) x ⊗ x + η(x) D^2 u^ε(x).   (7.22)
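Formulas (7.21) and (7.22) are simply the product rule applied to v^ε = η u^ε. The following verification sketch (our own, again writing η for η^ε) records the derivatives of η and the product rule to substitute:

```latex
% Derivatives of \eta(x) = \{(1-\delta)^2 - |x|^2\}^{\beta}:
D\eta(x) = -2\beta x \,\{(1-\delta)^2 - |x|^2\}^{\beta-1}
         = -2\beta x \,\eta(x)^{(\beta-1)/\beta},
\qquad
D^2\eta(x) = -2\beta \,\eta(x)^{(\beta-1)/\beta} I
           + 4\beta(\beta-1)\,\eta(x)^{(\beta-2)/\beta}\, x \otimes x.
% Product rule for v^\varepsilon = \eta u^\varepsilon:
Dv^\varepsilon = u^\varepsilon D\eta + \eta\, Du^\varepsilon,
\qquad
D^2 v^\varepsilon = u^\varepsilon D^2\eta
  + D\eta \otimes Du^\varepsilon + Du^\varepsilon \otimes D\eta
  + \eta\, D^2 u^\varepsilon.
```

Substituting the first line into the second recovers (7.21) and (7.22) term by term.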

Since u^ε is twice differentiable almost everywhere, we can choose a measurable set N_ε ⊂ B_{1-δ} such that |N_ε| = 0 and u^ε is twice differentiable at each x ∈ B_{1-δ} \ N_ε. Of course, v^ε is also twice differentiable at x ∈ B_{1-δ} \ N_ε. By (7.22), we have

    P^-(D^2 v^ε) ≤ η P^-(D^2 u^ε) + 2βη^{(β-1)/β} {Λn u^ε - P^-(x ⊗ Du^ε + Du^ε ⊗ x)}

in B^+_{1-δ}[v^ε]. By using (7.21), the last term in the above can be estimated from above by

    C {η^{-2/β} (v^ε)^+ + η^{-1/β} |Dv^ε|}.

Moreover, using (7.21) again, we have

    P^-(D^2 u^ε) ≤ µ|Du^ε| ≤ µη^{-1}|Dv^ε| + Cη^{-1/β}(u^ε)^+.

Hence, we find C > 0 such that

    P^-(D^2 v^ε) ≤ Cη^{-1/β}|Dv^ε| + Cη^{-2/β}(v^ε)^+ =: g^ε   in B_{1-δ} \ N_ε.   (7.23)

We next claim that there is C > 0 such that

    |Dv^ε(x)| ≤ Cη^{-1/β}(x) v^ε(x)   for x ∈ Γ^ε_r \ N_ε.   (7.24)

First, we note that at x ∈ Γ^ε_r \ N_ε,

    v^ε(y) ≤ v^ε(x) + ⟨Dv^ε(x), y - x⟩   for y ∈ B_{1-δ}.

To show the claim, we may suppose |Dv^ε(x)| > 0. Choose t ∈ [1 - δ - |x|, 1 - δ + |x|] so that y := x - tDv^ε(x)|Dv^ε(x)|^{-1} ∈ ∂B_{1-δ}. Since v^ε vanishes on ∂B_{1-δ}, we see that

    0 = v^ε(y) ≤ v^ε(x) - t|Dv^ε(x)|.

Since t ≥ 1 - δ - |x| ≥ η^{1/β}(x)/2, this implies

    |Dv^ε(x)| ≤ Cv^ε(x)η^{-1/β}(x)   in Γ^ε_r \ N_ε.   (7.25)

Here, we use Lemma 2.8 in [5], which will be proved at the end of this subsection for the reader's convenience:

Lemma 7.10. Let w ∈ C(Ω) be twice differentiable a.e. in Ω, and satisfy

    P^-(D^2 w) ≤ g   a.e. in Ω,

where g ∈ L^p(Ω) with p ≥ n. If -C_1 I ≤ D^2 w(x) ≤ O a.e. in Ω for some C_1 > 0, then w is an L^p-viscosity subsolution of

    P^-(D^2 w) ≤ g   in Ω.   (7.26)

Since u^ε is Lipschitz continuous in B_{1-δ}, by (7.22) and Lemma 7.10, we see that v^ε is an L^n-viscosity subsolution of

    P^-(D^2 v^ε) ≤ g^ε   in B_{1-δ}.

Noting (7.25), in view of Proposition 6.2, we have

    max_{B̄_{1-δ}} v^ε ≤ C ∥η^{-2/β}(v^ε)^+∥_{L^n(Γ^ε_r)}
                      ≤ C (max_{B̄_{1-δ}} (v^ε)^+)^{(β-2)/β} ∥((u^ε)^+)^{2/β}∥_{L^n(B_{1-δ})},

which together with our choice of β yields

    max_{B̄_{1-δ}} v^ε ≤ C ∥(u^ε)^+∥_{L^q(B_{1-δ})}.

Therefore, by (7.20), we have

    max_{B̄_{1/2}} u^ε ≤ C max_{B̄_{1-δ}} v^ε ≤ C ∥(u^ε)^+∥_{L^q(B_{1-δ})}.

Sending ε → 0 in the above, we finish the proof.  □
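The exponent bookkeeping behind "our choice of β" can be made explicit. The lines below are a verification sketch (our own; all norms and maxima are taken over B_{1-δ}): since β = 2n/q, the L^n-norm of ((u^ε)^+)^{2/β} is a power of the L^q-norm of (u^ε)^+, and the implicit inequality for max v^ε can be solved.

```latex
% With \beta = 2n/q, so that (2/\beta)n = q and q\beta/(2n) = 1:
\big\| ((u^\varepsilon)^+)^{2/\beta} \big\|_{L^n}
  = \Big( \int ((u^\varepsilon)^+)^{q}\, dx \Big)^{1/n}
  = \| (u^\varepsilon)^+ \|_{L^q}^{q/n}.
% Hence \max v^\varepsilon \le
%   C (\max v^\varepsilon)^{(\beta-2)/\beta} \|(u^\varepsilon)^+\|_{L^q}^{q/n},
% and dividing by (\max v^\varepsilon)^{(\beta-2)/\beta} > 0:
(\max v^\varepsilon)^{2/\beta} \le C \,\| (u^\varepsilon)^+ \|_{L^q}^{q/n}
\;\Longrightarrow\;
\max v^\varepsilon \le C^{\beta/2}\,\| (u^\varepsilon)^+ \|_{L^q}^{q\beta/(2n)}
  = C^{\beta/2}\,\| (u^\varepsilon)^+ \|_{L^q}.
```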

Proof of Lemma 7.10. In order to show that w ∈ C(Ω) is an L^p-viscosity subsolution of (7.26), we suppose the contrary: there are ε, r > 0, x̂ ∈ Ω and ϕ ∈ W^{2,p}_{loc}(Ω) such that 0 = (w - ϕ)(x̂) = max_Ω(w - ϕ), B_{2r}(x̂) ⊂ Ω, and

    P^-(D^2 ϕ) - g ≥ 2ε   a.e. in B_r(x̂).

We may suppose that x̂ = 0 ∈ Ω. Setting ψ(x) := ϕ(x) + τ|x|^4 for small τ > 0, we observe that

    h := P^-(D^2 ψ) - g ≥ ε   a.e. in B_r.

Notice that 0 = (w - ψ)(0) > (w - ψ)(x) for x ∈ B_r \ {0}. Moreover, we observe that

    P^-(D^2(w - ψ)) ≤ -ε   a.e. in B_r.   (7.27)

Consider w_δ := w ∗ ρ_δ, where ρ_δ is the standard mollifier for δ > 0. From our assumption, we see that, as δ → 0,

    (1) w_δ → w uniformly in B_r,
    (2) D^2 w_δ → D^2 w a.e. in B_r.

By Lusin's theorem, for any α > 0, we find E_α ⊂ B_r such that |B_r \ E_α| < α,

    ∫_{B_r \ E_α} (1 + |P^-(-D^2 ψ)|)^p dx < α,

and

    D^2 w_δ → D^2 w   uniformly in E_α (as δ → 0).

Setting h_δ := P^-(D^2(w_δ - ψ)), we find C > 0 such that h_δ ≤ C + P^-(-D^2 ψ) because of our hypothesis. Hence, we have

    ∥(h_δ)^+∥^p_{L^p(B_r)} ≤ C ∫_{B_r \ E_α} (1 + |P^-(-D^2 ψ)|)^p dx + ∫_{E_α} |(h_δ)^+|^p dx.

Sending δ → 0 in the above, by (7.27), we have

    limsup_{δ→0} ∥(h_δ)^+∥_{L^p(B_r)} ≤ C ∥1 + |P^-(-D^2 ψ)|∥_{L^p(B_r \ E_α)} ≤ Cα.   (7.28)

On the other hand, in view of Proposition 6.2, we see that

    max_{B̄_r}(w_δ - ψ) ≤ max_{∂B_r}(w_δ - ψ) + C ∥(h_δ)^+∥_{L^p(B_r)}.

Hence, by sending δ → 0, this inequality together with (7.28) implies that

    0 = max_{B̄_r}(w - ψ) ≤ max_{∂B_r}(w - ψ) + Cα   for any α > 0.

This is a contradiction since max_{∂B_r}(w - ψ) < 0.  □
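The regularization w_δ = w ∗ ρ_δ used above is the standard mollification. As a one-dimensional numerical illustration (the function `mollify`, the sample w, and all parameters below are our own, hypothetical choices), the sketch discretizes the standard bump kernel, normalizes it to unit mass, and checks the uniform convergence in (1).

```python
import math

def mollify(w, x, delta, h=1e-3):
    """(w * rho_delta)(x): discretized convolution with the standard bump kernel."""
    n = int(delta / h)
    ts = [i * h for i in range(-n + 1, n)]          # sample points with |t| < delta
    vals = [math.exp(-1.0 / (1.0 - (t / delta) ** 2)) for t in ts]
    mass = sum(vals) * h                            # normalize to unit mass
    return sum(w(x - t) * v for t, v in zip(ts, vals)) * h / mass

w = lambda x: abs(math.sin(3.0 * x))    # hypothetical continuous sample function
xs = [i / 50.0 for i in range(-50, 51)]
err_coarse = max(abs(mollify(w, x, 0.05) - w(x)) for x in xs)
err_fine = max(abs(mollify(w, x, 0.01) - w(x)) for x in xs)
```

Since w here is 3-Lipschitz, the mollified value at x is a weighted average of values of w within distance delta of x, so the uniform error is at most 3·delta; shrinking delta shrinks the error, which is the content of (1).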

References

[1] M. Bardi & I. Capuzzo Dolcetta, Optimal Control and Viscosity Solutions of Hamilton-Jacobi-Bellman Equations, Systems & Control, Birkhäuser, 1997.
[2] G. Barles, Solutions de Viscosité des Équations de Hamilton-Jacobi, Mathématiques et Applications 17, Springer, 1994.
[3] M. Bardi, M. G. Crandall, L. C. Evans, H. M. Soner & P. E. Souganidis, Viscosity Solutions and Applications, Lecture Notes in Mathematics 1660, Springer, 1997.
[4] L. A. Caffarelli & X. Cabré, Fully Nonlinear Elliptic Equations, Colloquium Publications 43, AMS, 1995.
[5] L. A. Caffarelli, M. G. Crandall, M. Kocan & A. Święch, On viscosity solutions of fully nonlinear equations with measurable ingredients, Comm. Pure Appl. Math. 49 (1996), 365-397.
[6] M. G. Crandall, H. Ishii & P.-L. Lions, User's guide to viscosity solutions of second order partial differential equations, Bull. Amer. Math. Soc. 27 (1992), 1-67.
[7] M. G. Crandall, M. Kocan, P.-L. Lions & A. Święch, Existence results for boundary problems for uniformly elliptic and parabolic fully nonlinear equations, Electron. J. Differential Equations 1999 (1999), 1-22.
[8] L. C. Evans, Partial Differential Equations, GSM 19, Amer. Math. Soc., 1998.
[9] L. C. Evans & R. F. Gariepy, Measure Theory and Fine Properties of Functions, Studies in Advanced Mathematics, CRC Press, 1992.
[10] W. H. Fleming & H. M. Soner, Controlled Markov Processes and Viscosity Solutions, Applications of Mathematics 25, Springer, 1992.
[11] P.-K. Fok, Some maximum principles and continuity estimates for fully nonlinear elliptic equations of second order, Dissertation, University of California at Santa Barbara, 1996.
[12] Y. Giga, Surface Evolution Equations – a level set method, Preprint, 2002.
[13] D. Gilbarg & N. S. Trudinger, Elliptic Partial Differential Equations of Second Order, 2nd edition, GMW 224, Springer, 1983.
[14] Q. Han & F. H. Lin, Elliptic Partial Differential Equations, Courant Institute Lecture Notes 1, Amer. Math. Soc., 1997.
[15] H. Ishii, Viscosity Solutions, (in Japanese).
[16] R. Jensen, Uniqueness of Lipschitz extensions: Minimizing the sup norm of the gradient, Arch. Rational Mech. Anal. 123 (1993), 51-74.

Notation Index

A, 46
B, 52
C^{k,σ}, 79
C^σ, 78
C_0^∞, 80
∆, 52
ess inf, 80
ess sup, 80
G^*, G_*, 62
Γ, 52
Γ[u, O], 83
J^{2,±}, 17, 22
J̄^{2,±}, 18, 22
△, 2
L^p, 79
LSC, 15
M, 27
n, 66
o, 17
P^±, 25
supp, 5
USC, 15
u^*, u_*, 40
W^{2,p}, 79
W^{2,p}_{loc}, 81