Systems with Non-Smooth Inputs: Mathematical Models of Hysteresis Phenomena, Biological Systems, and Electric Circuits 9783110709865, 9783110706307


Jürgen Appell, Nguyen Thi Hien, Lyubov Petrova, Irina Pryadko

Systems with Non-Smooth Inputs

Also of Interest

Variational Methods in Nonlinear Analysis. With Applications in Optimization and Partial Differential Equations
Dimitrios C. Kravvaritis, Athanasios N. Yannacopoulos, 2020
ISBN 978-3-11-064736-5, e-ISBN (PDF) 978-3-11-064738-9, e-ISBN (EPUB) 978-3-11-064745-7

Relaxation in Optimization Theory and Variational Calculus
Tomáš Roubíček, 2020
ISBN 978-3-11-058962-7, e-ISBN (PDF) 978-3-11-059085-2, e-ISBN (EPUB) 978-3-11-058974-0

Time-Frequency Analysis of Operators
Elena Cordero, Luigi Rodino, 2020
ISBN 978-3-11-053035-3, e-ISBN (PDF) 978-3-11-053245-6, e-ISBN (EPUB) 978-3-11-053060-5

Approximation Methods in Optimization of Nonlinear Systems
Peter I. Kogut, Olga P. Kupenko, 2019
ISBN 978-3-11-066843-8, e-ISBN (PDF) 978-3-11-066852-0, e-ISBN (EPUB) 978-3-11-066859-9

Fixed Points of Nonlinear Operators. Iterative Methods
Haiyun Zhou, Xiaolong Qin, 2020
ISBN 978-3-11-066397-6, e-ISBN (PDF) 978-3-11-066401-0, e-ISBN (EPUB) 978-3-11-066709-7

Jürgen Appell, Nguyen Thi Hien, Lyubov Petrova, Irina Pryadko

Systems with Non-Smooth Inputs

Mathematical Models of Hysteresis Phenomena, Biological Systems, and Electric Circuits

Mathematics Subject Classification 2020
Primary: 93-02, 34A60, 46B40, 47J40, 47L07, 93A10; Secondary: 15B48, 34C05, 34C55, 34D08, 34D30, 37G15, 47H04, 47H30, 49M37, 74N30

Authors
Jürgen Appell
University of Würzburg, Department of Mathematics
Emil-Fischer-Str. 30, Campus Hubland Nord, D-97074 Würzburg, Germany
[email protected]

Nguyen Thi Hien
Hanoi University of Industry, Faculty of Fundamental Sciences
298 Cau Dien, Ha Noi, Vietnam
[email protected]

Lyubov Petrova
Voronezh State University, Faculty of Mechanics and Mathematics
Universitetskaya pl. 1, RU-394006 Voronezh, Russian Federation
[email protected]

Irina Pryadko
Voronezh State University, Faculty of Mechanics and Mathematics
Universitetskaya pl. 1, RU-394006 Voronezh, Russian Federation
[email protected]

ISBN 978-3-11-070630-7
e-ISBN (PDF) 978-3-11-070986-5
e-ISBN (EPUB) 978-3-11-070993-3

Library of Congress Control Number: 2020952295

Bibliographic information published by the Deutsche Nationalbibliothek
The Deutsche Nationalbibliothek lists this publication in the Deutsche Nationalbibliografie; detailed bibliographic data are available on the Internet at http://dnb.dnb.de.

© 2021 Walter de Gruyter GmbH, Berlin/Boston
Cover image: vchal / Gettyimages
Typesetting: VTeX UAB, Lithuania
Printing and binding: CPI books GmbH, Leck
www.degruyter.com


To the loving memory of our teacher and friend Boris Nikolaevich Sadovsky (1937–2013) without whom this book and many other things could not exist

Introduction

This book consists of two parts which may be summarized under the title Systems with non-smooth inputs. The first part (Chapters 1 and 2) is concerned with so-called systems with diode nonlinearities (or DN-systems, for short). This terminology was created by a group of teachers and collaborators of Voronezh State University during the 1980s who worked on the mathematical modelling and numerical investigation of electric circuits. To explain the theory, methods, and applications of this topic, let us briefly recall the structure of a simple electrical network. Consider the series circuit shown in Figure 1. The letter E represents a source of electromotive force governed by some function e = e(t). This may be a battery or a generator which produces a voltage (potential difference) u = u(t) that in turn causes a current i = i(t) to flow through the circuit when the switch S is closed. The symbol R represents a resistance to the flow of current such as that produced by a lightbulb or a toaster. When current flows through a coil of wire L, a magnetic field is produced which opposes any change in the current through the coil. The change in voltage produced by the coil is proportional to the rate of change of the current, and the constant of proportionality is called the inductance L of the coil. Finally, a capacitor (or condenser), indicated by C, usually consists of two metal plates separated by a material through which very little current can flow. A capacitor has the effect of reversing the flow of current as one plate or the other becomes charged.

Figure 1

Let q = q(t) be the charge on the capacitor at time t. To derive a differential equation for q we recall Kirchhoff's second law: in a closed circuit, the impressed voltage equals the sum of the voltage drops in the rest of the circuit. Now, the voltage drop across a resistance R (measured in Ohm) at time t equals Ri(t), the voltage drop across an inductance L (measured in Henry) at time t equals L di/dt, and the voltage drop across a capacitance C (measured in Farad) at time t equals q(t)/C. Consequently,

$$L \frac{di}{dt} + R i(t) + \frac{1}{C}\, q(t) = e(t). \tag{1}$$

This is a linear inhomogeneous first-order differential equation for i which may be solved very easily by standard methods. Taking into account that i = dq/dt, one may transform (1) into the equation

$$L \frac{d^2 q}{dt^2} + R \frac{dq}{dt} + \frac{1}{C}\, q(t) = e(t) \tag{2}$$

which is a linear second-order equation for q and may also be solved by standard methods, such as variation of constants. Now suppose that the capacitor is replaced by an ideal diode D, i.e., an element which leads the current in the direction of the arrow (i.e., from the anode to the cathode), but not in the reverse direction.
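Before turning to the diode case, note that (1) and (2) are classical linear ODEs. The following minimal numerical sketch (ours, not from the book) integrates equation (2) with scipy; the parameter values L = 1 H, R = 0.5 Ω, C = 1 F and e(t) = sin t are arbitrary illustrative assumptions.

```python
# Numerical sketch for equation (2): L q'' + R q' + q/C = e(t).
# All parameter values below are illustrative assumptions, not taken from the book.
import numpy as np
from scipy.integrate import solve_ivp

L, R, C = 1.0, 0.5, 1.0          # inductance, resistance, capacitance
e = lambda t: np.sin(t)          # impressed electromotive force

def rhs(t, y):
    q, i = y                     # charge q and current i = dq/dt
    return [i, (e(t) - R * i - q / C) / L]

sol = solve_ivp(rhs, (0.0, 20.0), [0.0, 0.0], dense_output=True)
print(sol.y[0][-1], sol.y[1][-1])  # charge and current at t = 20
```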

Figure 2

In this case one is led to a differential inclusion (also called a multivalued differential equation by some authors) of the form

$$\frac{di}{dt} \in E(t) - \delta i(t) - N_Q(i), \tag{3}$$

with E(t) = e(t)/L and δ = R/L. The important point is that the inclusion (3) contains the so-called normal cone NQ(i) of the set Q = [0, ∞) at the point i. Since the theory of cones is fundamental in the mathematical modelling of DN-nonlinearities, we take the liberty to recall the basic notions. Given a closed convex set Q ⊆ ℝm and a point x ∈ Q, the normal cone of Q at x is defined by

$$N_Q(x) := \{y \in \mathbb{R}^m : \langle y, z - x \rangle \le 0 \text{ for all } z \in Q\}, \tag{4}$$

where ⟨⋅, ⋅⟩ denotes the usual scalar product in ℝm. Similarly, the tangent cone of Q at x is defined by

$$T_Q(x) := \{z \in \mathbb{R}^m : \langle z, y \rangle \le 0 \text{ for all } y \in N_Q(x)\}. \tag{5}$$
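The following small Python sketch (ours, not the authors') illustrates the definitions (4) and (5) for Q = ℝ²₊: a vector y lies in NQ(x) exactly when the metric projection of x + y onto Q returns x, and membership in TQ(x) can then be probed against a generating set of normals.

```python
# Illustration of (4) and (5) for Q = R^2_+ (first quadrant); our own sketch.
import numpy as np

def proj_Q(y):
    # Metric projection onto Q = R^2_+ is the componentwise positive part.
    return np.maximum(y, 0.0)

def in_normal_cone(y, x):
    # y in N_Q(x)  <=>  P(x + y, Q) = x  (projection characterization).
    return np.allclose(proj_Q(x + y), x)

def in_tangent_cone(z, x, normals):
    # z in T_Q(x)  <=>  <z, y> <= 0 for all y in N_Q(x); we test a generating set.
    return all(np.dot(z, y) <= 1e-12 for y in normals)

x = np.array([0.0, 2.0])                         # boundary point on the x2-axis
print(in_normal_cone(np.array([-1.0, 0.0]), x))  # True: (-1, 0) is an outward normal
print(in_normal_cone(np.array([0.0, 1.0]), x))   # False
print(in_tangent_cone(np.array([3.0, -1.0]), x, [np.array([-1.0, 0.0])]))  # True
```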


The sets NQ(x) and TQ(x) then form a pair of mutually adjoint cones. It is illuminating to study the analytical properties of the multivalued maps x ↦ NQ(x) and x ↦ TQ(x); for example, the map NQ is closed and maximal monotone, but the map TQ is in general not closed. We will study these and related questions in some detail in Section 1.2; thus, this section may be considered as a friendly introduction to the theory of Hilbert spaces with cones for non-specialists in the field. Now, the connection with the problem stated above is as follows. In the most general setting, a differential inclusion of the form

$$\dot{x}(t) \in f(t, x(t)) - N_Q(x(t)), \tag{6}$$

where f : [a, b] × Q → ℝm is a given function with certain regularity properties, is then called a system with diode nonlinearity (DN-system). Interestingly, the inclusion (6) may be equivalently reformulated in the form

$$\dot{x}(t) = \tau_{x(t)} f(t, x(t)), \tag{7}$$

where τx(y) denotes the metric projection of y ∈ ℝm onto the tangent cone TQ(x), i.e., the unique element of best approximation to y in the closed convex set TQ(x). Equation (7) is much more complicated than the familiar differential equation

$$\dot{x}(t) = f(t, x(t)), \tag{8}$$

because it contains the projection operator τ which depends on the position x(t) at time t. A large part of Chapter 1 will in fact be devoted to the question which results from the theory of the ordinary differential equation (8) carry over to the DN-problem (7). It turns out that one has to develop considerable machinery of sophisticated constructions to answer this question. One of the most important results is an existence and uniqueness theorem for (7), subject to a suitable initial condition, which is similar to, but not a consequence of, the classical Cauchy–Peano theorem. The theory gives particularly interesting and precise results under two restrictions: the dimension is m = 2 (i.e., we consider only planar DN-systems), and the nonlinearity f in (7) does not depend on t (i.e., we consider only the autonomous case). Under these and some additional hypotheses (too technical to be stated here), one may then prove that equation (7) (or, equivalently, the inclusion (6)) has a unique closed trajectory which is orbitally stable in a sense to be made precise. This result may be considered as an analogue of the well-known Poincaré–Bendixson theorem on ω-limit sets of planar dynamical systems. While Chapter 1 is basically concerned with the theoretical background of DN-systems, Chapter 2 deals with applications. The most important application refers of course to electric networks. In Section 2.1 we will show that, under certain natural physical hypotheses on the paths joining two inputs of a diodic converter, the circuit may be represented as a DN-system, and so treated by means of the theory developed in the first chapter.


Figure 3

A typical application is, for example, the double-phase semi-periodic rectifier with circuit feed and load sketched above. However, applications of DN-systems go much further. In Section 2.2 we show that also the standard problem of convex programming for a function f : Q → ℝ (Q ⊆ ℝm closed and convex) may be reduced to the differential inclusion

$$\dot{x} \in -\nabla f(x) - N_Q(x) \tag{9}$$

which is a DN-system, since it contains the normal cone NQ(x). In particular, using classical differential inequalities on circular domains we may derive upper estimates for the distance of the solution x = x(t) of (9) from Q for small time intervals. Interestingly, biological systems with constraints may also be viewed as special DN-systems. In Section 2.3 we show this for the classical predator-prey system

$$\begin{cases} \dot{N}_1 = \varepsilon_1 N_1 - \gamma_1 N_1 N_2, \\ \dot{N}_2 = \gamma_2 N_1 N_2 - \varepsilon_2 N_2, \end{cases} \tag{10}$$

which is also known under the name Volterra–Lotka model in the literature. Here N1 is the number of prey (e.g., rabbits), N2 is the number of some predator (e.g., foxes), and ε1, ε2, γ1 and γ2 are parameters describing the interaction of the two species. The study of auto-oscillations of (10) then leads to the DN-system

$$\dot{N} = \tau_N [F(N) + \delta (N - \bar{N})], \tag{11}$$

where N = (N1, N2) is considered on a closed convex domain Q ⊆ ℝ²₊, F : Q → ℝ² is a nonlinearity determined by the right-hand side of (10), N̄ is the (unique) equilibrium


of (10) in the classical model, and δ(N − N̄) is a term which takes into account exterior effects on the population of the two species. Note that (11) is a generalized differential equation of type (7), and therefore accessible to the techniques developed in Chapter 1. As a result, one may prove that, under suitable additional hypotheses, the system (11) admits a unique closed trajectory, and all other trajectories different from the equilibrium N̄ converge to this closed trajectory.
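System (11) is easy to explore numerically. The rough sketch below (ours, not the authors' procedure) uses a projected Euler scheme on the box Q of the figure that follows; the value of δ is an arbitrary illustrative assumption, and clipping back onto Q after each step mimics, for small step sizes, the projection of the vector field onto the tangent cone TQ(N).

```python
# Rough numerical sketch of the constrained predator-prey system (11) on a box Q.
# Projected Euler scheme; delta = 0.5 is our illustrative assumption.
import numpy as np

eps1, eps2, g1, g2, delta = 2.0, 3.0, 1.0, 1.0, 0.5
lo, hi = np.array([1.0, 1.0]), np.array([4.0, 3.0])   # Q = [1, 4] x [1, 3]
N_bar = np.array([eps2 / g2, eps1 / g1])              # equilibrium (3, 2)

def F(N):
    return np.array([eps1 * N[0] - g1 * N[0] * N[1],
                     g2 * N[0] * N[1] - eps2 * N[1]])

N, h = np.array([1.5, 1.2]), 1e-3
for _ in range(50_000):
    v = F(N) + delta * (N - N_bar)
    # One Euler step followed by projection back onto the box Q.
    N = np.clip(N + h * v, lo, hi)
print(N)  # point on (or near) the closed trajectory
```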

Figure 4

In the above figure we have sketched the phase portrait for Q = [1, 4] × [1, 3], γ1 = γ2 = 1, ε1 = 2, and ε2 = 3, hence N̄ = (3, 2). The closed trajectory is the boundary of the dinosaurian egg in Q, and the equilibrium N̄ is marked in the interior of the egg. The second part (Chapters 3 and 4) of the book is concerned with so-called equations with nonlinear differentials. Let us explain what we mean by this weird and cryptic name. The classical differential equation (8) may be rewritten "in differentials" in the form dx = f(t, x)dt or, using increments, in the form

$$x(t + dt) - x(t) = D(t, x, dt) + o(dt), \tag{12}$$

where D(t, x, τ) = f(t, x)τ. However, the relation (12) also makes sense if the dependence of D on the third argument τ is nonlinear. One of the first applications of this observation in the theory of quasi-differential systems referred to problems for integral funnels, where solutions do not take values in ℝm, but in the power set P(ℝm), namely in sections of integral funnels. For our purposes it is more important that we can use equations like (12) for studying essentially non-smooth problems in spaces with linear structure. One of the most successful applications refers to hysteresis relays, whose heuristic description reads as follows. A (non-ideal) relay is a transducer with a continuous input x(t) and an output y(t) which assumes only the values 0 ("off") and 1 ("on"). More precisely, for given numbers α and β > α one has y(t) = 0 for x(t) < α and y(t) = 1 for x(t) > β, while for α ≤ x(t) ≤ β both values y(t) = 0 and y(t) = 1 are possible. The numbers α and β are

called the lower resp. upper threshold value of the relay. For obvious reasons, any point of discontinuity of the relay output y is called a switching point. So the output jumps up from 0 to 1 if the input reaches the threshold value β, while it jumps down from 1 to 0 if the input reaches the threshold value α. In other words, the domain of admissible values of a relay with threshold values α and β has the form

$$\Omega(\alpha, \beta) = \{(x, 0) : x \le \beta\} \cup \{(x, 1) : x \ge \alpha\},$$

i.e., it consists of two horizontal half-lines in the (x, y)-plane as sketched below.

Figure 5

This phenomenological description has been made formally more precise for a rigorous mathematical treatment by several authors, e.g., by Krasnosel'sky and Pokrovsky in their pioneering monograph [24], as follows. For any initial state (x0, y0) ∈ Ω(α, β) at time t = t0 all continuous inputs x(t) are admissible which satisfy x(t0) = x0. The output y(t) associated to such an input x(t) for t ≥ t0 is then defined in the following way. Let

$$\Omega_0(\alpha, \beta) = \{x : x(t_1) = \alpha \text{ for some } t_1 \ge t_0\} \cap \{x : x(\tau) < \beta \text{ for all } \tau \in (t_1, t]\},$$
$$\Omega_1(\alpha, \beta) = \{x : x(t_1) = \beta \text{ for some } t_1 \ge t_0\} \cap \{x : x(\tau) > \alpha \text{ for all } \tau \in (t_1, t]\},$$

and

$$y(t) = \begin{cases} 0 & \text{if } x(t) \le \alpha \text{ or } x \in \Omega_0(\alpha, \beta), \\ 1 & \text{if } x(t) \ge \beta \text{ or } x \in \Omega_1(\alpha, \beta), \\ y_0 & \text{if } \alpha < x(\tau) < \beta \text{ for all } \tau \in [t_0, t]. \end{cases} \tag{13}$$

It is not hard to see that in this description the output function changes its value precisely when the input function reaches one of the threshold values α or β; in particular, the output is right-continuous.
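As a concrete illustration of the switching rule (13), here is a small stateful relay in Python (our own sketch, for sampled inputs); between the thresholds the output simply keeps its previous value.

```python
# A non-ideal relay with thresholds alpha < beta, following the switching rule (13).
# Our illustrative sketch of the heuristic description above, for sampled inputs.
def relay(xs, alpha, beta, y0):
    """Given samples xs of a continuous input, return the relay outputs."""
    y, out = y0, []
    for x in xs:
        if x <= alpha:
            y = 0          # input at or below the lower threshold: switch off
        elif x >= beta:
            y = 1          # input at or above the upper threshold: switch on
        # for alpha < x < beta the output keeps its previous value (memory)
        out.append(y)
    return out

import math
xs = [2 * math.sin(0.05 * k) for k in range(200)]
print(relay(xs, alpha=-1.0, beta=1.0, y0=0))
```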


However, we will consider in Chapter 3 a different approach building on the theory of nonlinear differentials. To this end, note that the differential of the output signal of the relay can be expressed locally, i.e., for small dt > 0, in the form

$$D(t, y(t), dt) = \begin{cases} 1 & \text{if } x(t) = \beta \text{ and } y(t) = 0, \\ -1 & \text{if } x(t) = \alpha \text{ and } y(t) = 1, \\ 0 & \text{otherwise.} \end{cases} \tag{14}$$
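To contrast this with the memory-based description (13), the next sketch (again ours) advances the output one small step at a time via the differential (14); only the current values of x and y are needed, and a crude threshold test replaces the exact equality x(t) = β for sampled inputs.

```python
# Locally explicit update of the relay output via the differential (14):
# y(t + dt) = y(t) + D(t, y(t), dt). Our illustrative sketch for sampled inputs.
def step(x_t, y):
    if x_t >= 1.0 and y == 0:      # input reaches beta = 1: jump up (D = +1)
        return y + 1
    if x_t <= -1.0 and y == 1:     # input reaches alpha = -1: jump down (D = -1)
        return y - 1
    return y                       # D = 0 otherwise

import math
y, ys = 0, []
for k in range(200):
    y = step(2 * math.sin(0.05 * k), y)
    ys.append(y)
print(ys)
```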

For this reason, the global input-output dependence can be interpreted as a solution of equation (12), where D(t, y, dt) is given by (14), and the domain of definition of the map D(⋅, ⋅, τ) consists precisely of all points (t, y) ∈ ℝ × ℝm such that either x(t) ≤ β and y = 0, or x(t) ≥ α and y = 1. In contrast to the Krasnosel'sky–Pokrovsky model, the differential of the output function D in (14) is left-continuous, not right-continuous. However, the basic properties of the relay are not affected by this difference. More important is the fact that, to recover the output function y of a relay in the Krasnosel'sky–Pokrovsky approach, one has to know the input function x "in the past", i.e., right from the starting point. For this reason some authors call this phenomenon a hysteresis problem with memory. In our approach, however, it suffices to know the functions x and y "at one moment" to predict the development of the output function y in the near future. Particularly important examples of hysteresis systems are stops and plays, whose precise definition will be given in Section 3.3. Here the difference between the two models becomes even more apparent from the mathematical viewpoint: Krasnosel'sky and Pokrovsky first consider only piecewise monotone inputs, and then pass to continuous inputs by a limiting process, while building on nonlinear differentials one may consider arbitrary continuous input functions right from the beginning. Apart from stops and plays, other systems which may be treated as problems involving nonlinear differentials are discussed in Section 3.3.5 from the viewpoint of stability. As an application, we consider the stability (w.r.t. small perturbations) of a temperature regulator with heating and cooling elements. In the final section of Chapter 3 we study linear systems with non-smooth action, which allows us to present a refinement of the previous results. We point out that many problems treated in this book cannot be solved explicitly, but only numerically. In fact, approaches are often at odds, not only in assumptions and conclusions, but also in techniques. The numerical study is sometimes carried out by so-called smooth models which are discussed in detail in Chapter 4. So let us now describe the contents of Chapter 4. Given two models for a hysteresis problem, in Section 4.1 we introduce characteristics which enable us to "measure the nearness" of these models, with a particular emphasis on planar relays. Such characteristics are important if we want to approximate a given system by a similar system with "nicer properties", e.g., with smooth inputs and outputs. The same may be done

for stops and plays; we will carry out this programme in Section 4.3. Here it is of course reasonable to expect that an approximate solution is close to the actual solution, provided that our smooth system is also close to the given one. This may be considered as a weak analogue of the notion of well-posedness in the theory of (both ordinary and partial) differential equations. Moreover, in Section 4.4 we show that the concept of smooth approximations and nearness estimates also makes sense for the various problems with nonlinear differentials we studied in detail in Chapter 3. Finally, in an Appendix we present the Mathematica procedures which were helpful for drawing some of the numerous pictures.

Würzburg, Hanoi, Voronezh

The Authors

Contents

Introduction | VII

1 DN-systems: theory | 1
1.1 Preliminaries | 1
1.1.1 Electrical circuits with ideal diode | 1
1.1.2 A special case | 3
1.1.3 Equations with discontinuous right hand side | 4
1.2 Convex sets, cones, and projections | 5
1.2.1 Definitions and examples | 5
1.2.2 Projections onto closed convex sets | 11
1.2.3 The normal cone | 16
1.2.4 The adjoint cone | 19
1.2.5 The tangent cone | 24
1.2.6 Bound cones and faces | 28
1.3 Existence and uniqueness of solutions | 31
1.3.1 Statement of the problem | 31
1.3.2 Continuation of the right hand side | 34
1.3.3 Euler polygons | 39
1.3.4 Solvability in the original set | 41
1.3.5 Uniqueness and continuous dependence | 41
1.4 Oscillations in DN-systems | 46
1.4.1 Forced oscillations | 46
1.4.2 Closed trajectories of 2D systems | 49
1.4.3 Proof of the main theorem | 56
1.4.4 Conditions for orbital stability | 66
1.4.5 Orbitally stable trajectories | 73

2 DN-systems: applications | 81
2.1 Electrical circuits with diode converters | 81
2.1.1 Examples of diode voltage converters | 81
2.1.2 Diode nonlinearities as operators | 85
2.1.3 The outer characteristic of a diode converter | 86
2.1.4 An example | 88
2.1.5 Numerical experiments | 90
2.1.6 Electrical diode circuits as DN-systems | 92
2.2 Convex programming and DN-systems | 101
2.2.1 Statement of the problem | 101
2.2.2 Some differentiability properties | 102
2.2.3 An upper limit estimate | 104
2.2.4 Numerical experiments | 105
2.3 Biological systems with size constraints | 107
2.3.1 The classical predator-prey model | 107
2.3.2 Autooscillations in a predator-prey model | 111
2.3.3 Two examples | 114

3 Equations with nonlinear differentials | 123
3.1 Locally explicit models of hysteresis elements | 124
3.1.1 Locally explicit equations: basic notions | 124
3.1.2 Locally explicit equations: basic properties | 125
3.1.3 An example | 129
3.2 Equations with relay | 131
3.2.1 Hysteresis relays: heuristic description | 131
3.2.2 Relays as locally explicit models | 133
3.2.3 Hysteresis relays: basic properties | 134
3.2.4 Generalized relays: basic properties | 146
3.3 Stops and plays | 153
3.3.1 The stop operator | 153
3.3.2 The play operator | 156
3.3.3 An alternative model for stop operators | 163
3.3.4 M-switches | 165
3.3.5 A control-correction system | 170
3.4 Systems with locally explicit equations | 172
3.4.1 Closed systems with relay | 172
3.4.2 Systems with M-switch | 175
3.4.3 Closed systems with stop-type hysteresis element | 185
3.4.4 On ψ-stable solutions of dynamical systems | 187
3.4.5 The ψ-stable behaviour of a temperature regulator | 198
3.5 Linear systems with non-smooth action | 209
3.5.1 Statement of the problem | 209
3.5.2 Unique solvability | 210
3.5.3 An example | 213

4 Smooth approximating models | 215
4.1 Smooth relay models with hysteresis | 215
4.1.1 Equivalence of models | 216
4.1.2 A smooth model | 216
4.1.3 The nearness theorem: formulation | 220
4.1.4 Some estimates | 224
4.1.5 The nearness theorem: proof | 226
4.2 Planar systems with one relay | 232
4.2.1 Statement of the problem | 232
4.2.2 A periodicity criterion | 232
4.2.3 Numerical analysis | 235
4.2.4 Nearness estimates | 237
4.3 Smooth description of stops and plays | 238
4.3.1 Statement of the problem | 238
4.3.2 Smooth models | 239
4.3.3 Stops with smooth inputs | 240
4.3.4 Stops with continuous inputs | 241
4.3.5 Plays with continuous inputs | 243
4.3.6 Numerical analysis and nearness estimates | 244
4.4 Smooth description of DN models | 246
4.4.1 Statement of the problem | 246
4.4.2 Exactness of smooth descriptions | 246
4.4.3 A special case | 248
4.4.4 An example | 251
4.4.5 Numerical analysis and nearness estimates | 254

Appendix: Mathematica Procedures | 257
Bibliography | 259
Index | 265

1 DN-systems: theory

This chapter is concerned with so-called systems with diode nonlinearities (or DN-systems, for short). We discuss such systems first heuristically, and then provide the necessary mathematical tools like convex sets and cones. Afterwards we study electrical circuits with diode converters and prove existence and uniqueness theorems for solutions of Cauchy problems involving DN-systems, as well as existence and stability theorems for forced periodic oscillations, with a particular emphasis on planar systems.

1.1 Preliminaries

In this introductory section we collect some notions which will be needed in what follows for stating our main results. We begin by introducing DN-systems on an intuitive level and illustrate our approach by means of simple examples. We also show how the mathematical description leads quite naturally either to differential inclusions involving multivalued maps or to differential equations with discontinuous right hand side.

1.1.1 Electrical circuits with ideal diode

Consider the electrical circuit in Figure 1.1. Recall that the inputs L (the inductance), R (the resistance), and E (the electromotive force source) of the circuit are related by the equations

$$L \frac{di_L}{dt} = u_L, \qquad R i_R = u_R, \qquad u_E = -e(t).$$

The parameters L (measured in Henry) and R (measured in Ohm) are given constants, while the known time-dependent function e(t) is measured in Volt. The functions i (measured in Ampère) and u (measured in Volt) with corresponding subscripts are the current and voltage, respectively; their direction is indicated in Figure 1.1 by the arrow.

Figure 1.1

An ideal diode D is an element which leads the current in the direction of the arrow (i.e., from the anode to the cathode), but not in the reverse direction [17, 33, 34]. For negative voltage (uD < 0), the current is zero (iD = 0), while for vanishing voltage (uD = 0), the current is positive or zero (iD ≥ 0); a positive voltage is not possible. So the dependence of the current on the voltage in an ideal diode may be described formally as the set of three relations

$$i \ge 0, \qquad u \le 0, \qquad iu = 0, \tag{1.1.1}$$

or, graphically, as the broken line in the (u, i)-plane sketched in Figure 1.2.

Figure 1.2

The broken line may be viewed as the graph of the multivalued map D defined by

$$i \in D(u) := \begin{cases} \{0\} & \text{if } u < 0, \\ [0, \infty) & \text{if } u = 0, \\ \emptyset & \text{if } u > 0 \end{cases}$$

or, equivalently, as the graph of the inverse multivalued map

$$u \in D^{-1}(i) := \begin{cases} \{0\} & \text{if } i > 0, \\ (-\infty, 0] & \text{if } i = 0, \\ \emptyset & \text{if } i < 0. \end{cases} \tag{1.1.2}$$
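In code, the ideal diode is most conveniently handled through its graph. The helper below (our own sketch) tests whether a voltage-current pair satisfies the three relations (1.1.1).

```python
# Membership test for the ideal diode characteristic (1.1.1):
# a pair (u, i) is admissible iff i >= 0, u <= 0 and i*u = 0. Our sketch.
def on_diode_graph(u, i, tol=1e-12):
    return i >= -tol and u <= tol and abs(i * u) <= tol

print(on_diode_graph(-2.0, 0.0))   # True: blocking state
print(on_diode_graph(0.0, 3.5))    # True: conducting state
print(on_diode_graph(-1.0, 1.0))   # False: violates i*u = 0
```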

For such a circuit the second Kirchhoff rule [9, 11] states that uL + uR + uD + uE = 0, i.e., the sum of the voltages of all elements is zero. Using the fact that the current through all elements of a non-ramified¹ circuit is the same (iL = iR = iD =: i), we arrive at the

¹ A circuit is called non-ramified (or non-bifurcating) if every node joins exactly two electrical elements.

differential equation

$$L \frac{di}{dt} + Ri = e(t) - u$$

for the unknown functions i and u := uD. For a complete description of the circuit we also have to take into account the relations (1.1.1) between u and i (or, equivalently, the multivalued map D⁻¹ in (1.1.2)). Then the mathematical modelling leads to the differential inclusion²

$$L \frac{di}{dt} \in e(t) - Ri - D^{-1}(i).$$

It is convenient to write this differential inclusion in the form

$$\frac{di}{dt} \in E(t) - \delta i - D^{-1}(i),$$

where³

$$E(t) := \frac{e(t)}{L}, \qquad \delta := \frac{R}{L}.$$

In what follows, we will call the set Q := [0, ∞) the phase space of the circuit and use the shortcut⁴ NQ(i) := D⁻¹(i). With this notation, we end up with

$$\frac{di}{dt} \in E(t) - \delta i - N_Q(i) \qquad (t \ge 0), \tag{1.1.3}$$

which is a differential inclusion in the phase space Q. In this form we will treat the circuit for the remaining part of this section and call it a DN-system in what follows.

² Such inclusions are also called differential equations with multivalued right hand side in the literature.
³ Observe that the last term need not be renormalized, since D⁻¹(i)/L = D⁻¹(i), as (1.1.2) shows.
⁴ The letter N is explained by the fact that NQ(i) is the normal cone to Q at i ∈ Q. We will treat some aspects of normal cones in Subsection 1.2.3 below.

1.1.2 A special case

Let us consider the change of voltage in the circuit sketched in Figure 1.1 in the simple special case e(t) ≡ −1, L = R = 1, and i(0) = 1. Then (1.1.3) becomes

$$\frac{di}{dt} \in -(1 + i + N_Q(i)). \tag{1.1.4}$$

For i > 0 this is equivalent to the (single-valued) differential equation

$$\frac{di}{dt} = -1 - i, \tag{1.1.5}$$

since Ni = {0}, while for i = 0 it is equivalent to the differential inclusion

$$\frac{di}{dt} \in [-1, \infty), \tag{1.1.6}$$

since N0 = (−∞, 0], see Figure 1.2. Subject to the initial condition i(0) = 1, the unique solution of (1.1.5) is i(t) = 2e⁻ᵗ − 1. On the interval [0, log 2) this function is positive, hence a solution of the differential inclusion (1.1.4). Moreover, the function i(t) ≡ 0 obviously solves the inclusion (1.1.6) everywhere. Therefore it is natural to consider the (continuous) function

$$i(t) := \begin{cases} 2e^{-t} - 1 & \text{for } 0 \le t < \log 2, \\ 0 & \text{for } t \ge \log 2 \end{cases}$$

as a solution of the differential inclusion (1.1.4), although it is not differentiable⁵ at the point t = log 2 in the classical sense. We point out that this phenomenon is typical for DN-systems. Therefore it makes sense to admit solutions which are not differentiable at finitely many points; however, solutions must be continuous everywhere. A precise definition of this solution concept will be given in Section 1.3 below.

1.1.3 Equations with discontinuous right hand side

Suppose that a solution i = i(t) of (1.1.3) has a (classical) derivative at t = t1. If in this case i(t1) > 0, we have Ni = {0}, hence i′(t1) = E(t1) − δi(t1). On the other hand, in case i(t1) = 0 we have i′(t1) = 0, because the nonnegative function i then has a minimum at t1. It follows that necessarily E(t1) ≤ 0, since otherwise

$$\frac{di(t_1)}{dt} = 0 \notin [E(t_1), \infty) = E(t_1) - \delta \cdot 0 - N_0,$$

contradicting (1.1.3). Consequently, any solution i = i(t) of (1.1.3) satisfies

$$\frac{di}{dt} = \begin{cases} E(t) - \delta i & \text{for } i > 0, \\ 0 & \text{for } i = 0 \end{cases}$$

at each point t where i has a classical derivative. For every point i in the phase space Q = [0, ∞) we put⁶

$$T_Q(i) := \begin{cases} (-\infty, \infty) & \text{for } i > 0, \\ [0, \infty) & \text{for } i = 0. \end{cases}$$

⁵ In fact, i′₋(log 2) = −1, while i′₊(log 2) = 0.
⁶ Here the letter T stands for the tangent cone to Q at i, see Subsection 1.2.5 below.


By τi we denote the metric projection⁷ which associates to each real number its (unique) point of best approximation in TQ(i), so τi(r) = r if i > 0, and τi(r) = max{0, r} if i = 0. Consequently, for i > 0 we get τi(E(t) − δi) = E(t) − δi, hence i′ = τi(E(t) − δi) if the derivative i′ exists. On the other hand, for i = 0 we get

$$\tau_i(E(t) - \delta i) = \tau_0(E(t)) = \begin{cases} E(t) & \text{if } E(t) \ge 0, \\ 0 & \text{if } E(t) < 0. \end{cases}$$

⁷ We will study metric projections in detail in Subsection 1.2.2 below.

So, if E(t∗) < 0 at some point t∗ where i reaches a zero (which means that i(t∗) = 0, but i(t) > 0 on (t∗ − δ, t∗) for some δ > 0), then the projection operator τi is discontinuous at t∗. We may summarize our discussion as follows: at any point where the derivative i′ exists, a solution of (1.1.3) satisfies

$$\frac{di}{dt} = \tau_i(E(t) - \delta i). \tag{1.1.7}$$

This is a differential equation with discontinuous right hand side; such equations have been extensively studied in the literature, see, e.g., [14]. The differential inclusion (1.1.3) and the differential equation (1.1.7) are two possible ways to describe systems with diode nonlinearities. Although these descriptions look rather different, we will see later (see Theorem 1.3.3 below) that they are actually equivalent in a sense to be made precise.
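The special case of Subsection 1.1.2 can be reproduced numerically from (1.1.7). The following sketch (ours, not the book's code) applies the projection τᵢ within a simple Euler scheme and recovers i(t) = 2e⁻ᵗ − 1 up to t = log 2 and i ≡ 0 afterwards.

```python
# Projected Euler scheme for (1.1.7) with E(t) = -1 and delta = 1 (the special
# case of Subsection 1.1.2). Our illustrative sketch, not the book's code.
import math

h, i, t = 1e-4, 1.0, 0.0
while t < 1.0:
    v = -1.0 - i                # E(t) - delta * i
    if i <= 0.0:
        v = max(v, 0.0)         # tau_0 projects onto the tangent cone [0, oo)
    i = max(i + h * v, 0.0)     # stay in the phase space Q = [0, oo)
    t += h

# numerical i(1) ~ 0, while the unconstrained solution 2e^{-t} - 1 is negative there
print(i, 2 * math.exp(-1.0) - 1)
```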

1.2 Convex sets, cones, and projections

Now we introduce and discuss several topics which turn out to be useful in the study of DN-systems, the most important ones being normal, adjoint, and tangent cones. These topics are intimately related to the Hilbert space structure of the phase space ℝm with the Euclidean norm.

1.2.1 Definitions and examples

In contrast to the very simple scalar example treated in Section 1.1, we now consider higher dimensional phase spaces, typically convex subsets of ℝm like the closed positive octant

$$\mathbb{R}^m_+ := \{(x_1, x_2, \ldots, x_m) \in \mathbb{R}^m : x_1, x_2, \ldots, x_m \ge 0\} \tag{1.2.1}$$

or the closed negative octant

$$\mathbb{R}^m_- := \{(x_1, x_2, \ldots, x_m) \in \mathbb{R}^m : x_1, x_2, \ldots, x_m \le 0\}. \tag{1.2.2}$$

In particular, ℝ+ = [0, ∞) and ℝ− = (−∞, 0]. The notion of convexity is standard in calculus, geometry, functional analysis, variational calculus, and optimization theory.

Definition 1.2.1. A subset Q ⊆ ℝm is called convex if x ∈ Q, y ∈ Q, and 0 ≤ t ≤ 1 imply that tx + (1 − t)y ∈ Q, i.e., together with any two points x and y the set Q contains the straight line segment joining x and y. If M ⊆ ℝm is any nonempty set, a convex combination of elements of M has the form

$$x = \sum_{j=1}^{k} \lambda_j x_j \qquad (x_j \in M,\ \lambda_j \ge 0,\ \lambda_1 + \cdots + \lambda_k = 1).$$

The set of all convex combinations of M is called the convex hull of M and denoted by conv(M). The simplest examples of convex subsets of ℝm are of course the open balls

$$B_r(x_0) := \{x \in \mathbb{R}^m : \|x - x_0\| < r\} \qquad (x_0 \in \mathbb{R}^m,\ r > 0) \tag{1.2.3}$$

and the closed balls⁸

$$\overline{B}_r(x_0) := \{x \in \mathbb{R}^m : \|x - x_0\| \le r\} \qquad (x_0 \in \mathbb{R}^m,\ r > 0) \tag{1.2.4}$$

with centre x0 ∈ ℝm and radius r > 0. By construction, the convex hull of M is the smallest convex set containing M. Definition 1.2.1 is modelled in analogy to the linear hull span(M) of a set M ⊆ ℝm which consists of all elements of the form

$$x = \sum_{j=1}^{k} \lambda_j x_j \qquad (x_j \in M,\ \lambda_j \in \mathbb{R}).$$

Observe that the linear hull span(M) of a set M ⊆ ℝm is always closed, but the convex hull conv(M) need not be closed, even if M is. For example, the set M := (ℝ × {0}) ∪ {(0, 1)} is closed in ℝ2, but conv(M) = (ℝ × [0, 1)) ∪ {(0, 1)} is not.⁹ It therefore makes sense to introduce the closed convex hull of M as the closure of conv(M), i.e.,

$$\overline{\mathrm{conv}}(M) := \overline{\mathrm{conv}(M)}. \tag{1.2.5}$$

⁸ Here it is important that we are in a normed space; recall that in a metric space balls need not be convex, and a closed ball need not coincide with the closure of the corresponding open ball.
⁹ However, a remarkable result called the Mazur lemma in the literature states that the convex hull of a compact subset of Euclidean space is always compact; so it is not accidental that we have chosen M in this counterexample as an unbounded set in the plane.


By construction, the closed convex hull of M is the smallest closed convex set containing M. It is easy to see that a set M is a subspace of ℝm if and only if span(M) = M. Similarly, the following result is well known, but we recall the proof for convenience.

Proposition 1.2.2. A set Q ⊆ ℝm is convex if and only if conv(Q) = Q; it is convex and closed if and only if $\overline{\mathrm{conv}}(Q) = Q$.

Proof. The sufficiency of conv(Q) = Q for the convexity of Q is trivial. To prove necessity suppose that Q is convex. Given elements x1, . . . , xk ∈ Q, we prove by induction over k that any convex combination of these elements also belongs to Q. For k = 1 this is trivial, while for k = 2 it follows from the definition of convexity, since λ2 = 1 − λ1. Suppose we have shown this for some fixed k ≥ 2, and choose x1, . . . , xk+1 ∈ Q and λ1, . . . , λk+1 ≥ 0 satisfying λ1 + ⋅ ⋅ ⋅ + λk+1 = 1 (where we may assume λk+1 < 1, the case λk+1 = 1 being trivial). Then

$$x = \frac{\lambda_1}{1 - \lambda_{k+1}}\, x_1 + \frac{\lambda_2}{1 - \lambda_{k+1}}\, x_2 + \cdots + \frac{\lambda_k}{1 - \lambda_{k+1}}\, x_k$$

is a convex combination of k elements of Q, and so x ∈ Q, by our induction hypothesis. It follows that

$$\sum_{j=1}^{k+1} \lambda_j x_j = (1 - \lambda_{k+1})\, x + \lambda_{k+1} x_{k+1} \in Q,$$

and so Q is convex. The proof for the closed convex hull (1.2.5) is similar.

Let us now state some facts about convex sets. The intersection of convex sets is again convex, but the union in general is not. Trivial examples of convex sets are the empty set, the whole space ℝm, and the balls (1.2.3) and (1.2.4). For further reference, we collect some more interesting convex sets¹⁰ in the following Example 1.2.3. To this end, we denote as usual by

$$M^\perp := \{x \in \mathbb{R}^m : \langle x, y \rangle = 0 \text{ for all } y \in M\} \tag{1.2.6}$$

the orthogonal complement¹¹ of a subset M ⊆ ℝm.

Example 1.2.3. The following subsets Q ⊂ ℝm are convex:
– Given a fixed vector n ∈ ℝm \ {0}, the convex set¹²

$$n^\perp := \{x \in \mathbb{R}^m : \langle x, n \rangle = 0\} \tag{1.2.7}$$

is a hyperplane, i.e., a linear subspace of codimension 1.

¹⁰ The convexity of all sets in Example 1.2.3 is a simple consequence of the linearity of the scalar product in both variables.
¹¹ Here ⟨x, y⟩ = x1y1 + ⋅ ⋅ ⋅ + xmym denotes the usual scalar product of x and y. We point out that M⊥ is always a subspace, even if M is not.
¹² To be compatible with (1.2.6) we should write {n}⊥ rather than n⊥; however, the notation (1.2.7) is common practice.

– Given a fixed vector n ∈ ℝm \ {0} and c ∈ ℝ, the convex set

$$n^\perp(c) := \{x \in \mathbb{R}^m : \langle x, n \rangle = c\} \tag{1.2.8}$$

is an affine hyperplane, i.e., an affine subspace of codimension 1.
– More generally, the intersection of hyperplanes of the form

$$\{n_1, n_2, \ldots, n_k\}^\perp := \{x \in \mathbb{R}^m : \langle x, n_1 \rangle = \langle x, n_2 \rangle = \cdots = \langle x, n_k \rangle = 0\} \tag{1.2.9}$$

is a convex set.
– The shift

$$Q + s = \{x + s : x \in Q\} \tag{1.2.10}$$

of a convex set Q by a fixed vector s ∈ ℝm is again a convex set; in particular, the shift L + s of a subspace L ⊂ ℝm is convex.¹³
– Similarly, a half-space of the form

$$H_n := \{x \in \mathbb{R}^m : \langle x, n \rangle \le 0\} \tag{1.2.11}$$

for some fixed vector n ∈ ℝm \ {0} is a convex set, as well as a shifted half-space of the form

$$H_n(c) := \{x \in \mathbb{R}^m : \langle x, n \rangle \le c\} \tag{1.2.12}$$

(c ∈ ℝ fixed), see Figure 1.3.

Figure 1.3

This set may be written in the form Hn(c) = Hn + s, where c = ⟨s, n⟩. Clearly, Hn = Hn(0).

¹³ Such a shift is called an affine subspace and may be interpreted as the solution set of a system of inhomogeneous linear equations.


Observe that

$$\partial H_n = n^\perp, \qquad \partial H_n(c) = n^\perp(c),$$

i.e., the boundary of a (shifted) half-space is an (affine) hyperplane. In our next example the set Q is somewhat different, because it is defined by means of a convex function.

Example 1.2.4. Let f : ℝm → ℝ be a continuous convex function, i.e.,

$$f\!\left(\frac{x + y}{2}\right) \le \frac{f(x) + f(y)}{2} \qquad (x, y \in \mathbb{R}^m), \tag{1.2.13}$$

and let Q := {x ∈ ℝm : f(x) ≤ 0}. It is well known that (1.2.13) is equivalent to the (apparently more general) condition f((1 − t)x + ty) ≤ (1 − t)f(x) + tf(y) (x, y ∈ ℝm, 0 ≤ t ≤ 1). This implies that Q is a convex subset of ℝm. Many important convex subsets may be constructed in this way. For example, taking f(x) := ⟨x, n⟩ − c (c ∈ ℝ fixed) we get the convex set (1.2.12), while taking f(x) := ‖x‖² − R² (R > 0 fixed) we get the closed ball B̄R(0) with centre at zero¹⁴ and radius R in the Euclidean space ℝm.

x = ∑ λj xj j=1

(xj ∈ M, λj ≥ 0).

The set of all conic combinations of M is called conic hull of M and denoted by cone(M). So, apart from the trivial case K = {0}, a cone is always an unbounded subset of ℝm . By construction, the conic hull of M is the smallest cone containing M. In analogy to (1.2.5), we still introduce the closed conic hull of M as the closure of cone(M), i. e., cone(M) := cone(M).

(1.2.14)

14 Here and in what follows we use the same symbol 0 for the scalar zero and the zero vector, since it is clear from the context what we mean.

10 | 1 DN-systems: theory A comparison with Definition 1.2.1 shows that conv(M) ⊆ cone(M) ⊆ span(M),

conv(M) ⊆ cone(M)

(1.2.15)

for any set M ⊆ ℝm , where examples for strict inclusion are easily found: Example 1.2.6. Consider the plane ℝ2 with the canonical basis M := {e1 , e2 } = {(1, 0), (0, 1)}. Here conv(M) is the segment joining (1, 0) and (0, 1), cone(M) is the closed right upper quadrant ℝ2+ , and span(M) is the whole plane. On the other hand, taking M := {e1 , e2 , −(e1 + e2 )} = {(1, 0), (0, 1), (−1, −1)} we see that conv(M) is the triangle with vertices (−1, −1), (1, 0) and (0, 1), while cone(M) is the whole plane. The last set in (1.2.15) gives rise to a new definition; we will use this notion in Subsection 1.2.6 below. Definition 1.2.7. Given any set M ⊂ ℝm , the closed conic hull K := cone(M)

(1.2.16)

of M is called the tight cone over M. Clearly, the tight cone over M is the smallest closed cone containing M. For example, if {e1 , . . . , em } is any basis in ℝm , then the cone K = ℝm is tight over the set M := {e1 , . . . , em , −(e1 + ⋅ ⋅ ⋅ + em )}; this generalizes Example 1.2.6. The following is parallel to Proposition 1.2.2. Proposition 1.2.8. A set K ⊆ ℝm is a cone if and only if cone(K) = K. Proof. The sufficiency of cone(K) = K for K to be a cone is again trivial. To prove necessity suppose that K is a cone, and choose x1 , . . . , xk ∈ K and λ1 , . . . , λk ≥ 0. Then λ := λ1 + ⋅ ⋅ ⋅ + λk > 0, and the element15 k

k

j=1

j=1

x = ∑ λj xj = λ ∑

λj λ

xj

belongs to K, because K is convex and the last sum is a convex combination of x1 , . . . , xk ∈ K. 15 Here we may suppose that λ > 0, since the case λ = 0 is trivial.

1.2 Convex sets, cones, and projections | 11

Examples of cones are the whole space, any subspace, and any half-space. If K is the intersection of finitely many half-spaces, then K is called bound cone. We will study such cones in more detail in Subsection 1.2.6 below. We will also show a certain converse: every cone K ⊆ ℝm may be represented as intersection of (in general, infinitely many) closed half-spaces in ℝm , see Proposition 1.2.23. We remark that the very general Definition 1.2.5 is somewhat in contrast to our geometric intuition of a cone. In fact, some authors require in addition that K ∩(−K) = {0} which means that the cone has a “peak” and extends only “in one direction”; this excludes, e. g., the whole real line (−∞, ∞) in dimension 1, but includes the half-line ℝ+ . For our purposes, however, it is convenient to drop the assumption16 K ∩ (−K) = {0}. 1.2.2 Projections onto closed convex sets In what follows we will mostly consider sets which are not only convex, but also closed. Definition 1.2.9. Let Q ⊂ ℝ be nonempty and y ∈ ℝm be fixed. An element x ∈ Q is called point of best approximation to y in Q and denoted by x = P(y, Q) if ‖y − x‖ = dist(y, Q) := inf {‖y − z‖ : z ∈ Q}.

(1.2.17)

If x = P(y, Q) exists and is unique for any y ∈ ℝm , then the map P(⋅, Q) : ℝm → Q is called metric projection of Q. We point out that a point of best approximation may not exist, or may exist but not be unique. For example, if Q := {(z1 , z2 ) ∈ ℝ2 : z12 + z22 < 1} and y := (2, 0), then there is no point x ∈ Q satisfying (1.2.17). Furthermore, if Q := {(z1 , z2 ) ∈ ℝ2 : z12 + z22 = 1} and y := (0, 0), then every point x ∈ Q satisfies (1.2.17). Finally, if Q := {(z1 , z2 ) ∈ ℝ2 : max {|z1 |, |z2 |} ≤ 1} and y := (2, 0), then all points x ∈ {1} × [−1, 1] ⊂ Q satisfy (1.2.17), but the other points in Q do not. The reason for these counterexamples is that in the first case the set Q is not closed, in the second case the set Q is not convex, and in the third case the norm is not Euclidean. However, if ‖ ⋅ ‖ denotes the Euclidean norm in Definition 1.2.9, and Q is closed and convex, then we get both existence and uniqueness: Theorem 1.2.10 (Metric projection). If ℝm is equipped with the Euclidean norm, and Q ⊆ ℝm is closed and convex, then for each y ∈ ℝm there exists a unique point x = P(y, Q) satisfying (1.2.17). Proof. The statement is trivial for y ∈ Q. Given y ∈ ℝm \ Q, let d := dist(y, Q), and choose a sequence (zn )n in Q such that ‖y − zn ‖ < d + 1/n. Since the sequence (zn )n is 16 In the older literature, cones in the sense of our Definition 1.2.5 are called wedges.

12 | 1 DN-systems: theory bounded, we find a subsequence (znm )m which convergences to x. Then ‖y − x‖ = d and x ∈ Q, since Q is closed, and so x = P(y, Q) which proves existence. To prove uniqueness, suppose that there exists another element x ∈ Q satisfying ‖y − x‖ = d. Now we use the fact that the Euclidean norm in ℝm satisfies the parallelogramm identity17 ‖u + v‖2 + ‖u − v‖2 = 2‖u‖2 + 2‖v‖2

(u, v ∈ ℝm ).

Putting x̃ := (x + x)/2 and applying the parallelogramm identity to u := y − x and v := y − x yields 1󵄩 1 󵄩2 ‖2y − x − x‖2 = 󵄩󵄩󵄩(y − x) + (y − x)󵄩󵄩󵄩 4 4 1 1 1 1 = − ‖x − x‖2 + ‖y − x‖2 + ‖y − x‖2 = − ‖x − x‖2 + d2 . 4 2 2 4

‖y − x‖̃ 2 =

Now, in case x ≠ x this would imply ‖y − x‖̃ < d, contradicting the fact18 that x̃ ∈ Q and the definition of d. So we have proved that x = x.

Figure 1.4

Figure 1.4 explains the geometrical reason for uniqueness: if x and x are different, their midpoint x̃ belongs to Q but is closer to y than x and x, because the distance ‖y − x‖̃ is the height of the equilateral triangle xyx. Interestingly, the element of best approximation x = P(y, Q) may be characterized in a completely different way which we state for further reference as 17 We remark that the parallelogramm identity holds in every Hilbert space. Moreover, it characterizes Hilbert spaces in the sense that a norm in a Banach space is generated by some scalar product as ‖x‖ = √⟨x, x⟩ if and only if it satisfies the parallelogramm identity. Important examples are the sequence space ℓ2 and the function space L2 . 18 It is here that we use the convexity of Q.

1.2 Convex sets, cones, and projections | 13

Theorem 1.2.11 (Variational inequality). If ℝm is equipped with the Euclidean norm, Q ⊆ ℝm is closed and convex, and y ∈ ℝm is fixed, then we have x = P(y, Q) if and only if x ∈ Q and ⟨y − x, z − x⟩ ≤ 0

(1.2.18)

for each z ∈ Q. Proof. Suppose first that x ∈ Q satisfies (1.2.17), and let z ∈ Q be fixed. Then the point xt := x+t(z−x) = (1−t)x+tz belongs to Q for every t ∈ [0, 1], by convexity. The definition of P(y, Q) implies that the function φ : [0, 1] → ℝ+ defined by φ(t) := ‖y − xt ‖2 satisfies φ(t) ≥ φ(0) = ‖y − x‖2 = dist(y, Q)2

(0 ≤ t ≤ 1),

i. e., attains its minimum on [0, 1] at t = 0, and so φ󸀠+ (0) ≥ 0. But ‖y − xt ‖2 − ‖y − x‖2 t→0+ t = 2 lim ⟨y − x − t(z − x), −(z − x)⟩ = −2⟨y − x, z − x⟩

φ󸀠+ (0) = lim

t→0+

which implies (1.2.18). The converse implication follows from a simple geometrical reasoning. In fact, if x ∈ Q and (1.2.18) holds for every z ∈ Q, then the vectors y − x and z − x form an obtuse angle (see Figure 1.5).

Figure 1.5

But this implies that ‖y − z‖ ≥ ‖y − x‖, because in the triangle xyz the norm ‖y − z‖ denotes the length of the opposite side, while the norm ‖y − x‖ denotes the length of the adjacent side to the obtuse angle.

14 | 1 DN-systems: theory The relation (1.2.18) is often called variational inequality, because it naturally occurs in the calculus of variation.19 An interesting question is to analyze the regularity of the metric projection P(⋅, Q) : m ℝ → Q. The following proposition shows that the projection is Lipschitz continuous with Lipschitz constant 1: Proposition 1.2.12. If ℝm is equipped with the Euclidean norm, and Q ⊆ ℝm is closed and convex, then the map P(⋅, Q) : ℝm → Q is nonexpansive, i. e., satisfies 󵄩󵄩 󵄩 󵄩󵄩P(y, Q) − P(y, Q)󵄩󵄩󵄩 ≤ ‖y − y‖

(1.2.19)

for all y, y ∈ ℝm . Proof. Let x := P(y, Q) and x := P(y, Q), and consider two (m − 1)-dimensional hyperplanes Ly and Ly which pass through x and x and are orthogonal to the line joining these points, see Figure 1.6.

Figure 1.6

From Theorem 1.2.11 we know that ⟨y − x, x − x⟩ ≤ 0, because y lies in the closed set at one side of Ly , and x at the other side. Similarly, we have ⟨y − x, x − x⟩ ≤ 0, 19 However, in calculus of variation the underlying space is usually not the finite dimensional Euclidean space, but some infinite dimensional Hilbert space of functions (e. g., a Sobolev space); this admits important applications to solvability of boundary value problems for both ordinary and partial differential equations. Details and many examples may be found in [8].

1.2 Convex sets, cones, and projections | 15

because y lies in the closed set at one side of Ly , and x at the other side. This implies that the segment [y, y] meets the hyperplanes Ly and Ly in some points z and z, respectively. Therefore, ‖x − x‖ ≤ ‖z − z‖ ≤ ‖y − y‖

(1.2.20)

which proves the statement. Let us give another geometric interpretation of (1.2.19) in terms of quadrangles. In the notation of Proposition 1.2.12, consider the quadrangle xxyy in the threedimensional affine subspace which contains the four vertices of this quadrangle. By properties of the projection P(⋅, Q), each of the angles ∠(xxy) and ∠(yxx) is ≥ π/2, but their sum is ≤ 2π. Denote by z and z the points in the segments [x, y] and [xy], respectively, at which the points of these segments attain their minimal distance,20 see Figure 1.7.

Figure 1.7

Now, if z ≠ x and z ≠ x, then none of the angles ∠(xzz) and ∠(xzz) is acute, because otherwise the vertex at an acute angle may be slightly shifted towards x (or x), diminishing in this way the length of the segment [zz]. This reasoning shows that the sum of the four angles of the quadrangle xzzx is precisely 2π, and so this quadrangle is actually a rectangle. We conclude that (1.2.20) is true, which proves (1.2.19) under the assumption that z ≠ x and z ≠ x. 20 This minimum exists, because we evaluate here the distance as continuous nonnegative function on a compact set.

16 | 1 DN-systems: theory In case z = x, say, we necessarily have z = x, since otherwise in the triangle with angle ≥ π/2 at x the side zz would be strictly longer than the side xx, contradicting the minimality of ‖z − z‖. So we have proved (1.2.20) also in this case. 1.2.3 The normal cone In this and the following subsections we will study cones of special type. The first type is closely related to metric projections. Definition 1.2.13. Let Q ⊆ ℝm be closed and convex and x ∈ Q. The normal cone to Q at x is defined by NQ (x) := {y ∈ ℝm : ⟨y, z − x⟩ ≤ 0 for all z ∈ Q},

(1.2.21)

m

where ⟨⋅, ⋅⟩ denotes, as before, the scalar product in ℝ . Theorem 1.2.11 shows that we may characterize NQ (x) equivalently as follows: NQ (x) consists precisely of all vectors y ∈ ℝm satisfying P(x + y, Q) = x. Conversely, we have P(y, Q) = x if and only if x ∈ Q and y − x ∈ NQ (x). An important property of NQ is described in the following Proposition 1.2.14. The normal cone (1.2.21) is a cone in the sense of Definition 1.2.5, and it is always a closed set. Proof. The simple proof relies on the linearity and continuity of the scalar product. Given z ∈ Q, y1 , y2 ∈ NQ (x), and t1 , t2 ≥ 0 we have ⟨t1 y1 + t2 y2 , z − x) = t1 ⟨y1 , z − x⟩ + t2 ⟨y2 , z − x⟩ ≤ 0, hence t1 y1 + t2 y2 ∈ NQ (x). If (yk )k is a sequence in NQ (x) converging to y ∈ ℝm we get ⟨y, z − x⟩ = lim ⟨yk , z − x⟩ ≤ 0 k→∞

for all z ∈ Q, hence y ∈ NQ (x). It is easy to derive some elementary properties of the normal cone (1.2.21) for special x and Q. For example, we always have 0 ∈ NQ (x), and NQ (x) = ℝm for any singleton Q = {z}. Moreover, if x is an interior point of Q, then NQ (x) = {0}, so the normal cone NQ (x) is interesting only for boundary points x ∈ 𝜕Q. In the next example we calculate normal cones with respect to a convex set Q with empty interior. Example 1.2.15. In the plane ℝ2 , consider the segment Q := [0, 1] × {0} on the x1 -axis. Then ℝ− × ℝ { { { NQ ((x1 , 0)) = { {0} × ℝ { { { ℝ+ × ℝ

if x1 = 0, if 0 < x1 < 1, if x1 = 1.

1.2 Convex sets, cones, and projections | 17

These are all possibilities, because 𝜕Q = Q in this example. We still remark that NQ1 (x) ⊇ NQ2 (x) for x ∈ Q1 ⊆ Q2 . Another two useful properties of the normal cone (1.2.21) are described in the following Proposition 1.2.16. (a) The equality NQ+s (x + s) = NQ (x)

(1.2.22)

holds, where Q + s denotes the shift (1.2.10) of Q. (b) If L ⊆ ℝm is a linear subspace, then NL (x) = L⊥

(x ∈ L),

(1.2.23)

where L⊥ denotes the orthogonal complement (1.2.6) of L. Proof. (a) By definition, every y ∈ NQ (x) satisfies ⟨y, z − x⟩ ≤ 0

(z ∈ Q).

But this is trivially equivalent to ⟨y, (z + s) − (x + s)⟩ ≤ 0

(z + s ∈ Q + s),

hence y ∈ NQ+s (x + s). (b) Every y ∈ L⊥ satisfies ⟨y, z − x⟩ = 0 for all z ∈ L, since also x ∈ L. Conversely, fix y ∈ NL (x); then ⟨y, z⟩ ≤ ⟨y, x⟩

(z ∈ L).

Assume ⟨y, z0 ⟩ ≠ 0 for some z0 ∈ L. Replacing z0 by −z0 , if necessary, we may suppose that ⟨y, z0 ⟩ > 0. But then we can use z := λz0 in (1.2.23) and let λ → ∞ to get a contradiction. Thus, ⟨y, z⟩ = 0 for all z ∈ L which means precisely that y ∈ L⊥ . We illustrate Definition 1.2.13 by a series of examples related to Example 1.2.3. Example 1.2.17. The following normal cones correspond to the convex sets from Example 1.2.3. – Let Q = n⊥ be as in (1.2.7) and x ∈ 𝜕Q = Q; then NQ (x) = span ({n}). – Let Q = n⊥ (c) be as in (1.2.8) and x ∈ 𝜕Q = Q; then NQ (x) = span ({n}). – Let Q = Hn be as in (1.2.11) and x ∈ 𝜕Q; then NQ (x) = cone({n}). – Let Q = Hn (c) be as in (1.2.12) and x ∈ 𝜕Q; then NQ (x) = cone({n}). In fact, the first assertion follows from Proposition 1.2.16 (b) for L := span ({n}), while the second assertion follows from the shift invariance stated in Proposition 1.2.16 (a).

18 | 1 DN-systems: theory Let us prove the third assertion in this list, the fourth relation follows then again from the shift invariance of the normal cone. First we show that cone({n}) ⊆ NHn (x). Indeed, for x ∈ 𝜕Hn we have ⟨x, n⟩ = 0, while every z ∈ Hn satisfies ⟨z, n⟩ ≤ 0. So the elements y ∈ NHn (x) are precisely those which satisfy 0 ≥ ⟨y, z − x⟩ = ⟨y, z⟩ − ⟨y, x⟩.

(1.2.24)

But this is obviously true for y = λn with λ ≥ 0, because in this case ⟨y, z⟩ ≤ λc = ⟨y, x⟩. Conversely, we claim that only elements y of this form may satisfy (1.2.24). To see this, suppose that y ∈ ̸ cone({n}), which means that y and n do not point in the same direction. Then we find a vector z ∈ Hn satisfying ⟨y, z⟩ > 0, i. e., building an acute angle with y. Consequently, y ∈ ̸ Hn , which proves the inclusion NHn (x) ⊆ cone({n}). Here is another example. Let Q = Br (0) be the closed ball (1.2.4) of radius r > 0 centred at zero, and let ‖x‖ = r. Then NQ (x) = cone({x}).

(1.2.25)

In fact, for ‖z‖ = r and z → x the obtuse angle between x and z − x can be chosen arbitrarily close to π/2. The problem of finding the normal cone NQ (x) for the set Q = {x ∈ ℝm : f (x) ≤ 0} constructed in Example 1.2.4 by means of a convex function f is more difficult, because the answer essentially depends on the properties of the function f . If Q1 and Q2 are convex sets, then Q1 ∩ Q2 is again convex, but the normal cone NQ1 ∩Q2 (x) does in general not coincide with the union of NQ1 (x) and NQ2 (x): Example 1.2.18. In the Euclidean space ℝ2 , let x := (0, 0), n1 := (1, 0), n2 := (0, 1), and Q1 := n⊥ 1 = {0} × ℝ,

Q2 := n⊥ 2 = ℝ × {0}.

We know from Example 1.2.17 that NQ1 (x) = span({n1 }) = Q2 ,

NQ2 (x) = span({n1 }) = Q1 ,

and so NQ1 (x) ∪ NQ2 (x) = Q1 ∪ Q2 is the union of the two axes. On the other hand, NQ1 ∩Q2 (x) = N{(0,0)} (x) = ℝ2 . It is not surprising that the “correct” equality reads NQ1 ∩Q2 (x) = cone[NQ1 (x) ∪ NQ2 (x)],

(1.2.26)

where the set on the right hand side of (1.2.26) denotes the conic hull in the sense of Definition 1.2.5. This is analogous to the formula for the linear hull of the union of two subspaces, or for the convex hull of the union of two convex sets.

1.2 Convex sets, cones, and projections | 19

1.2.4 The adjoint cone The adjoint cone of a cone K ⊂ ℝm is a special normal cone and is defined as follows. Definition 1.2.19. Let K ⊆ ℝm be a cone. The adjoint cone to K is defined by K ∗ := NK (0) := {y ∈ ℝm : ⟨y, z⟩ ≤ 0 for all z ∈ K},

(1.2.27)

where ⟨⋅, ⋅⟩ denotes, as before, the scalar product in ℝm . Some elementary properties of the adjoint cone are easily deduced. Clearly, (K)∗ = K and K ∗∗ = K for every closed cone K ⊆ ℝm , see Proposition 1.2.22 below. An adjoint cone is always closed.21 If K is a subspace of ℝm , then K ∗ is also a subspace, and ⟨x, y⟩ = 0 for every x ∈ K and y ∈ K ∗ . If K is a cone and Hn denotes the half-space (1.2.11), then K ∗ may be represented as intersection ∗

K ∗ = ⋂ Hn , n∈K

(1.2.28)

where n runs over all elements of the cone K. The representation K ∗ = cone({n ∈ ℝm : Hn ⊇ K})

(1.2.29)

is also true, where n runs over all elements which are normal to some half-space containing K, and the set on the right hand side of (1.2.29) is the conic hull in the sense of Definition 1.2.5. The following theorem provides an important interconnection between cones, adjoint cones, and metric projections. Theorem 1.2.20 (Orthogonal decomposition). Let K ⊆ ℝm be a closed cone, K ∗ its adjoint cone, and y ∈ ℝm . Denote by P(y, K) the point of best approximation of y in K in the sense of Definition 1.2.9. Then the equalities ⟨y − P(y, K), P(y, K)⟩ = 0

(1.2.30)

y − P(y, K) = P(y, K ∗ )

(1.2.31)

and

hold, i. e., the difference of y and its metric projection to the cone K coincides with its metric projection to the cone K ∗ . 21 This may be regarded as analogue to the well-known fact that the dual space X ∗ of a normed space X is always complete, even if X itself is not.

20 | 1 DN-systems: theory Proof. Let P(y, K) =: x and y − x =: z. Since P(z + x, K) = P(y, K), we have z ∈ NK (x), so from Theorem 1.2.11 we conclude that ⟨z, u − x⟩ ≤ 0 for all u ∈ K. Taking first u := 2x and then u := 0 we obtain ⟨z, 2x − x⟩ = ⟨z, x⟩ ≤ 0 and ⟨z, −x⟩ = −⟨z, x⟩ ≤ 0. This shows that ⟨z, x⟩ = 0 and proves (1.2.30). We claim that z ∈ K ∗ , i. e., ⟨z, v⟩ ≤ 0 for any v ∈ K. Indeed, again from Theorem 1.2.11 it follows that ⟨z, v⟩ = ⟨y − x, v − x⟩ ≤ 0

(v ∈ K).

Moreover, for every w ∈ K ∗ we have ⟨y − z, w − z⟩ = ⟨x, w − z⟩ = ⟨x, w⟩ − ⟨x, z⟩ = ⟨x, w⟩ − 0 ≤ 0. But this means precisely that z = P(y, K ∗ ), by (1.2.18), and the proof is complete.

Figure 1.8

The geometrical meaning of Theorem 1.2.20 is clear (see Figure 1.8) and may be reformulated as follows: every element y ∈ ℝm is the orthogonal sum of its projections onto K and K ∗ . In fact, writing P(y, K) = x, P(y, K ∗ ) = z, and y − x = z,̃ from (1.2.29) and (1.2.31) we conclude that ⟨z,̃ x⟩ = 0 and z̃ = P(y, K ∗ ), hence z̃ = z, by the uniqueness of the metric projection. Interestingly, the converse of this result is also true: if an element y ∈ ℝm is the orthogonal sum y = x + z of two elements x ∈ K and z ∈ K ∗ , then x = P(y, K) and z = P(y, K ∗ ). To see this, it suffices to observe that, for any u ∈ K and v ∈ K ∗ , we have ⟨y − x, u − x⟩ = ⟨z, u − x⟩ = ⟨z, u⟩ − ⟨z, x⟩ = ⟨z, u⟩ − 0 ≤ 0

1.2 Convex sets, cones, and projections | 21

and, similarly, ⟨y − z, v − z⟩ = ⟨x, v − z⟩ = ⟨x, v⟩ − ⟨x, z⟩ = ⟨y, v⟩ − 0 ≤ 0. The assertion follows then again from Theorem 1.2.11. We illustrate Theorem 1.2.20 by means of a very simple example. Example 1.2.21. Let n := (1, 0, . . . , 0) ∈ ℝm be the first normalized basis vector. Consider the hyperplane K := n⊥ = {(x1 , x2 , . . . , xm ) ∈ ℝm : ⟨x, n⟩ = x1 = 0}. From Example 1.2.17 we know that K ∗ = NK (0) = span({n}) = {(λ, 0, . . . , 0) : λ ∈ ℝ}. Now, the element of best approximation of y = (y1 , y2 , . . . , ym ) ∈ ℝm in K is obviously given by P(y, K) = (0, y2 , . . . , ym ), while the element of best approximation of y in K ∗ is given by P(y, K ∗ ) = (y1 , 0, . . . , 0). It is clear that these elements fulfil (1.2.30) and (1.2.31). Now consider the halfspace K := Hn = {(x1 , x2 , . . . , xm ) ∈ ℝm : ⟨x, n⟩ = x1 ≤ 0}. Again from Example 1.2.17 we know that K ∗ = NK (0) = cone({n}) = {(λ, 0, . . . , 0) : λ ∈ ℝ+ }. by

Here the element of best approximation of y = (y1 , y2 , . . . , ym ) ∈ ℝm in K is given

P(y, K) = {

(0, y2 , . . . , ym )

if y1 > 0,

(y1 , y2 , . . . , ym )

if y1 ≤ 0,

while the element of best approximation of y = (y1 , y2 , . . . , ym ) ∈ ℝm in K ∗ is given by P(y, K ∗ ) = {

(y1 , 0, . . . , 0)

if y1 > 0,

(0, 0, . . . , 0)

if y1 ≤ 0,

and it is again obvious that (1.2.30) and (1.2.31) are fulfilled.

22 | 1 DN-systems: theory For further reference we state a corollary on the second adjoint K ∗∗ := (K ∗ )∗ of K in the following Proposition 1.2.22. For a closed cone K ⊆ ℝm , the equality K ∗∗ = K

(1.2.32)

holds. Proof. The inclusion K ⊆ K ∗∗ follows from the symmetry of the definition (1.2.27) in y and z. Suppose that z ∈ ̸ K. By (1.2.31) we have then the orthogonal decomposition z = x + y, where x = P(z, K) ∈ K and y = P(z, K ∗ ) ∈ K ∗ . The closedness of K implies that y ≠ 0, and so ⟨z, y⟩ = ⟨z − x, y⟩ = ⟨y, y⟩ = ‖y‖2 > 0, hence z ∈ ̸ K ∗∗ . The important Theorem 1.2.20 allows us to prove another representation result for cones; compare this with (1.2.28). Proposition 1.2.23. Any closed cone K ⊆ ℝm admits the representation K = ⋂ Hn , Hn ⊇K

(1.2.33)

where the intersection in (1.2.33) is taken over all closed half-spaces (1.2.11) containing K. Proof. Let K be a cone in ℝm , and denote by Q the intersection on the right hand side of (1.2.33) (which is clearly a convex set). The inclusion K ⊆ Q is obvious, so fix y ∈ ℝm \K. By Theorem 1.2.20 we may write y as sum (1.2.31), where P(y, K) ∈ K,

P(y, K ∗ ) ∈ K ∗ ,

⟨P(y, K), P(y, K ∗ )⟩ = 0.

Here z := P(y, K ∗ ) must be different from zero, since otherwise y = P(y, K) ∈ K, contradicting our choice of y. Therefore, ⟨y, z⟩ = ⟨P(y, K) + z, z⟩ = ⟨P(y, K), z⟩ + ‖z‖2 = ‖z‖2 > 0. This shows that the half-space Hz = {x ∈ ℝm : ⟨x, z⟩ ≤ 0} does not contain y. On the other hand, Hz ⊇ K, and so y ∈ ̸ Q as claimed. The construction of the adjoint cone K ∗ of K may remind the reader of the definition of the adjoint A∗ : ℝn → ℝm of an operator A : ℝm → ℝn (matrix) in Linear Algebra. In fact, there is a close connection; to discuss this we briefly recall some facts about linear operators between finite dimensional spaces.

1.2 Convex sets, cones, and projections | 23

Given a linear operator A : ℝm → ℝn (i. e., an (n × m)-matrix), consider the kernel Ker(A) = {x ∈ ℝm : Ax = 0} of A and the image (or range) Im(A) = A(ℝm ) = {Ax : x ∈ ℝm } of A. For any set N ⊆ ℝn we denote by22 A−1 (N) = {x ∈ ℝm : Ax ∈ N} the preimage of N. The adjoint A∗ of A is the uniquely determined operator A∗ : ℝn → ℝm which satisfies ⟨Ax, y⟩ = ⟨x, A∗ y⟩

(1.2.34)

for all x ∈ ℝm and y ∈ ℝn . Of course, if A is represented by its canonical (n × m)-matrix, then A∗ is represented by the transposed (m × n)-matrix. A well-known (and quite important) result in Linear Algebra states that23 Ker(A∗ ) = Im(A)⊥

(1.2.35)

(A∗ ) (E ⊥ ) = A(E)⊥

(1.2.36)

and −1

for every subspace E ⊆ ℝm . (Note that (1.2.35) is a special case of (1.2.36) which may be obtained by putting E := ℝm .) A certain analogue of (1.2.36) for cones is contained in the following Proposition 1.2.24. Let K ⊆ ℝm be a cone, and A : ℝm → ℝn be a linear operator. Then the equality (A∗ ) (K ∗ ) = A(K)∗ −1

(1.2.37)

holds, where A∗ denotes the adjoint operator of A and K ∗ the adjoint cone of K. 22 The notation A−1 does not mean that the operator should be invertible, i. e., that Ker(A) = {0}. 23 The equality (1.2.35) is sometimes called Fredholm alternative; it holds in the much more general setting of bounded linear operators (of a special form) between Banach spaces.

24 | 1 DN-systems: theory Proof. Let first z ∈ (A∗ )−1 (K ∗ ), i. e., A∗ z ∈ K ∗ . By the definition (1.2.27) of the adjoint cone this means that ⟨A∗ z, x⟩ ≤ 0

(x ∈ K).

(1.2.38)

But the relation (1.2.34) shows that this is equivalent to ⟨z, Ax⟩ ≤ 0 for every x ∈ K, and so z ∈ A(K)∗ . Conversely, let z ∈ A(K)∗ . Again by (1.2.27) this means that ⟨z, Ax⟩ ≤ 0 for every x ∈ K, and (1.2.34) shows that this implies (1.2.38). We conclude that A∗ z ∈ K ∗ , i. e., z ∈ (A∗ )−1 (K ∗ ).

1.2.5 The tangent cone For any closed convex set Q ⊆ ℝm and fixed element x ∈ Q, the set K = NQ (x) introduced in (1.2.21) is a closed cone. So one could ask how the adjoint cone K ∗ to K looks like, and how it is related to Q and x. This gives rise to yet another definition. Definition 1.2.25. Let Q ⊆ ℝm be closed and convex and x ∈ Q. The tangent cone to Q at x is defined by TQ (x) := {z ∈ ℝm : ⟨z, y⟩ ≤ 0 for all y ∈ NQ (x)},

(1.2.39)

where ⟨⋅, ⋅⟩ denotes, as before, the scalar product in ℝm . A comparison of (1.2.21) and (1.2.27) shows in fact that TQ (x) = NQ (x)∗ .

(1.2.40)

In Figure 1.9 we have sketched the normal cone NQ (x) and the tangent cone TQ (x) at some boundary point x ∈ 𝜕Q.

Figure 1.9

1.2 Convex sets, cones, and projections | 25

Being an adjoint cone, the tangent cone is always closed. In Figure 1.10 we have shadowed24 the tangent cone to Q at three points x1 , x2 and x3 . In particular, the tangent cone TQ (x1 ) at the interior point x1 is the whole space ℝm , while the tangent cone TQ (x2 ) at the boundary point is a half-space.

Figure 1.10

For calculating tangent cones the following property is useful which is parallel to Proposition 1.2.16. Proposition 1.2.26. (a) The equality TQ+s (x + s) = TQ (x)

(1.2.41)

holds, where Q + s denotes the shift (1.2.10) of Q. (b) If L ⊆ ℝm is a linear subspace, then TL (x) = L

(x ∈ L).

(1.2.42)

Proof. (a) By definition, every z ∈ TQ (x) satisfies ⟨z, y⟩ ≤ 0 for all y ∈ NQ (x). But the condition y ∈ NQ (x) is equivalent to the condition y ∈ NQ+s (x + s), by (1.2.22), hence z ∈ TQ+s (x + s). (b) Every z ∈ TL (x) satisfies ⟨z, y⟩ ≤ 0 for each y ∈ L⊥ , and so also for y replaced by −y. It follows that ⟨z, y⟩ = 0, hence z ∈ (L⊥ )⊥ = L. At this point we repeat the comments we made on normal cones now for tangent cones. For a singleton Q = {x} we get TQ (x) = {0}. Moreover, if x is an interior point of Q, then TQ (x) = ℝm , so also the tangent cone TQ (x) is interesting only for boundary 24 Here the grey region should be extended across the dotted circle lines.

26 | 1 DN-systems: theory points x ∈ 𝜕Q. For the other convex sets Q from Example 1.2.3 we get the following picture. Example 1.2.27. The following tangent cones correspond to the convex sets from Example 1.2.17. – Let Q = n⊥ be as in (1.2.7) and x ∈ 𝜕Q; then TQ (x) = n⊥ . –

– –

Let Q = n⊥ (c) be as in (1.2.8) and x ∈ 𝜕Q; then TQ (x) = n⊥ .

Let Q = Hn be as in (1.2.11) and x ∈ 𝜕Q; then TQ (x) = Hn .

Let Q = Hn (c) be as in (1.2.12) and x ∈ 𝜕Q; then TQ (x) = Hn .

Let us prove again the last assertion in this list. By Proposition 1.2.26, we may suppose without loss of generality that c = 0, hence Hn (c) = Hn . For x ∈ 𝜕Hn we have ⟨x, n⟩ = 0, while every z ∈ THn (x) satisfies ⟨y, z⟩ ≤ 0 for each y ∈ NHn (x). But we already know that NHn (x) = cone({n}) = {λn : λ ≥ 0}. Now, the relation ⟨λn, z⟩ ≤ 0 is equivalent to ⟨n, z⟩ ≤ 0, since λ ≥ 0, and the latter condition means nothing else but z ∈ Hn . The proof of the reverse inclusion Hn ⊆ THn (x) follows the same line as in Example 1.2.17. A straightforward calculation shows that the tangent cone to the closed ball Q = Br (0) with centre 0 and radius r > 0 at x ∈ 𝜕Br (0) is TQ (x) = THx (r) (x) = Hx .

(1.2.43)

The problem of finding the tangent cone TQ (x) for the set Q = {x ∈ ℝm : f (x) ≤ 0} constructed in Example 1.2.4 by means of a convex function f is again difficult, and there is no general recipe to solve this problem. For the reader’s ease we collect our calculations from the preceding examples in a synoptic table. Table 1.1: Normal cone and tangent cone. Q= x∈ NQ (x) TQ (x)

{x}

ℝm

Q

Q

ℝm

𝜕Q

{0}

span({n})



n

n

{0}

m

n⊥



n⊥ (c)

Hn

Hn (c)

𝜕Q

𝜕Q

𝜕Q

span({n})

cone({n})

cone({n})

Hn

Hn



We remark that the tangent cone behaves “more naturally” than the normal cone with respect to intersections. Indeed, if Q1 and Q2 are closed convex sets, then the tangent

1.2 Convex sets, cones, and projections | 27

cone of Q1 ∩ Q2 is simply TQ1 ∩Q2 (x) = TQ1 (x) ∩ TQ2 (x)

(1.2.44)

which is in fact more natural than formula (1.2.26) for the normal cone. As application of (1.2.44) let us calculate the tangent cone of the convex set (1.2.9). Since Tn⊥ (x) = n⊥ , see Table 1.1, we conclude from (1.2.44) that k

k

⊥ T{n1 ,n2 ,...,nk }⊥ (x) = ⋂ Tn⊥ (x) = ⋂ n⊥ j = {n1 , n2 , . . . , nk } . j=1

j

j=1

(1.2.45)

A special case of this calculation is contained in the following Example 1.2.28. In the Euclidean space ℝ2 , let x, n1 , n2 , Q1 , and Q2 be as in Example 1.2.18. Then TQ1 (x) = Q1 and TQ2 (x) = Q2 , hence TQ1 ∩Q2 (x) = T{(0,0)} (x) = {(0, 0)} = Q1 ∩ Q2 = TQ1 (x) ∩ TQ2 (x), in accordance with (1.2.44). The term “tangent cone” suggests a connection of (1.2.39) with derivatives. In fact, the following is true. Proposition 1.2.29. For fixed τ ∈ ℝ and ε > 0, let x : [τ − ε, τ + ε] → ℝm be some function, and let Q ⊆ ℝm be closed and convex. Then the following holds. (a) If x(τ + s) ∈ Q for 0 ≤ s < ε and the right hand derivative x+󸀠 (τ) of x at τ exists, then x+󸀠 (τ) ∈ TQ (x(τ)). (b) If x(τ + s) ∈ Q for −ε < s ≤ 0 and the left hand derivative x−󸀠 (τ) of x at τ exists, then −x−󸀠 (τ) ∈ TQ (x(τ)). Proof. If the right hand derivative x+󸀠 (τ) of x at τ exists, then 1 ⟨x(τ + s) − x(τ), u⟩ ≤ 0 (s > 0) s for every u ∈ NQ (x(τ)), by the definition (1.2.21) of the normal cone. Passing in this inequality to the limit as s → 0+ we obtain ⟨x+󸀠 (τ), u⟩ ≤ 0

(u ∈ NQ (x(τ))).

Thus x+󸀠 (τ) ∈ TQ (x(τ)), by the definition (1.2.39) of the tangent cone, which proves (a). The proof of (b) is similar. We will use Proposition 1.2.29 in Theorem 1.3.3 in the next Section 1.3, where we will briefly discuss two different, though equivalent, solution concepts for DNsystems.

28 | 1 DN-systems: theory 1.2.6 Bound cones and faces A cone K is called bound cone if K may be represented as intersection of finitely many closed halfspaces. Equivalently, this means that the elements x ∈ K are precisely the solutions of a system of inequalities { { { { { { { { {

⟨x, ξ1 ⟩ ≥ 0, ⟨x, ξ2 ⟩ ≥ 0, ... ⟨x, ξk ⟩ ≥ 0,

(1.2.46)

where ξ1 , ξ2 , . . . , ξk ∈ ℝm are fixed. Examples of bound cones are subspaces, closed halfspaces, and rays starting from 0. Clearly, the finite intersection of bound cones is again a bound cone, but the union of two bound cones need not even be a cone. In this section we are going to study special properties of bound cones. To this end, some definitions are in order. Definition 1.2.30. Let K be the bound cone which is characterized by the system (1.2.46). Any set K 󸀠 of elements x ∈ ℝm which solves (1.2.46), with some inequalities replaced by equalities, is called face of K. In particular, we call the solution set of 󸀠 (1.2.46), with all inequalities replaced by equalities, the minimal face Kmin of K. A 󸀠 bound cone K is called pointed if Kmin = {0}. 󸀠 As the name suggests, the minimal face Kmin of a bound cone K is a subspace of ℝ which is contained in any other face of K. Obviously, every face of a bound cone is also a bound cone, since the condition ⟨x, ξj ⟩ = 0 is equivalent to ⟨x, ξj ⟩ ≥ 0 and ⟨x, −ξj ⟩ ≥ 0. Moreover, every face of a face K 󸀠 of K is a face of K. So the family of all faces of a fixed cone K is a finite set with a transitive (but not total) ordering, the 󸀠 minimal element being Kmin , and the maximal element being K. The one-dimensional faces25 of a cone K ⊆ ℝm are called the edges of K. For example, a pointed cone has edges, because the rank of the matrix of the system which describes its minimal face is m. m

Example 1.2.31. Let ξ1 := (1, 0) ∈ ℝ2 and ξ2 := (0, 1) ∈ ℝ2 , and consider the set K := ℝ2+ = {(x1 , x2 ) ∈ ℝ2 : x1 ≥ 0, x2 ≥ 0},

(1.2.47)

which by (1.2.45) is a bound cone. The lower-dimensional faces of this cone are K1󸀠 = ℝ+ × {0},

K2󸀠 = {0} × ℝ+ ,

󸀠 Kmin = {(0, 0)},

where the last face is obviously minimal. So the cone (1.2.47) is pointed. 25 These faces are obtained by replacing k − 1 inequalities in (1.2.46) by equalities.

1.2 Convex sets, cones, and projections | 29

Now let ξ1 := (1, 0, 0) ∈ ℝ3 and ξ2 := (0, 1, 0) ∈ ℝ3 , and consider the set K := ℝ2+ × ℝ = {(x1 , x2 , x3 ) ∈ ℝ3 : x1 ≥ 0, x2 ≥ 0},

(1.2.48)

which by (1.2.45) is a bound cone. The lower-dimensional faces of this cone are K1󸀠 = ℝ+ × {0} × ℝ,

K2󸀠 = {0} × ℝ+ × ℝ,

󸀠 Kmin = {(0, 0)} × ℝ,

󸀠 where the last face is obviously minimal. Since Kmin ≠ {(0, 0, 0)}, the cone (1.2.48) is not pointed.

Now we are going to collect some facts about bound (in particular, pointed) cones, faces, and edges. Our first result uses the notion of tight cones introduced in Definition 1.2.7. 󸀠 Proposition 1.2.32. Let K be a bound cone, and let Kmin be its minimal face. Then one can find a pointed cone K1 ⊆ K such that 󸀠 K = cone(Kmin ∪ K1 ),

(1.2.49)

i. e., K is tight over the union of its minimal face with some pointed cone. 󸀠 Proof. Let K1 := (Kmin )⊥ ∩ K. Being the intersection of two bound cones, K1 is bound. But K1 is also pointed, because the system of equations describing the minimal face 󸀠 󸀠 󸀠 󸀠 of K1 is the same as for Kmin ∩ (Kmin )⊥ , and Kmin ∩ (Kmin )⊥ = {0}. 󸀠 It is clear that cone(Kmin ∪ K1 ) ⊆ K. On the other hand, we may write every x ∈ ℝm 󸀠 as x = y + z with y ∈ Kmin and z ∈ (K 󸀠 )⊥ min . In case x ∈ K we have z = x + (−y), where 󸀠 −y ∈ Kmin ⊂ K, hence z ∈ K. Therefore, z ∈ K1 and 󸀠 󸀠 x ∈ cone(Kmin ∪ K1 ) ⊆ cone(Kmin ∪ K1 )

as claimed. To illustrate Proposition 1.2.32, consider the bound cone (1.2.48) in the second part 󸀠 of Example 1.2.31 which is not pointed. Here Kmin = {(0, 0)} × ℝ is the x3 -axis, so as pointed cone component in the representation (1.2.49) we may take K1 := ℝ2+ × {0}. The next proposition [5] shows that elements in pointed cones may be “glued together” from lower-dimensional faces. Proposition 1.2.33. Let K ⊆ ℝm be a pointed cone of dimension ≥ 2. Then every element x ∈ K may be represented as sum x = x1 +x2 of elements x1 and x2 which belong to lowerdimensional faces of K. Proof. We sketch the geometrical idea. If X belongs to some lower-dimensional face of K, the assertion holds with x1 = x and x2 = 0. Otherwise we fix some y ∈ K \ {λx : λ ≥ 0} and consider the plane E passing through x, y and 0. Then K1 := K ∩ E is a two-dimensional pointed cone.

30 | 1 DN-systems: theory

Figure 1.11

Now we choose x1 and x2 as vertices of the parallelogramm sketched in Figure 1.11, and the claim follows. Our previous results admit some useful corollaries which we state in the next two propositions. Proposition 1.2.34. Let K be a pointed cone, and let M be the family of its edges. Then K = cone(M), i. e., K is tight over its edges. Proof. It is clear that cone(M) ⊆ K, so we have to prove only the reverse inclusion. We do this by induction over the dimension of K. If K is one-dimensional, then K consists only of edges, and the statement is trivial. Suppose we have proved the assertion for r-dimensional cones, and let K be (r + 1)-dimensional, where 2 ≤ r + 1 ≤ m. By Proposition 1.2.33 we may write every x ∈ K as sum x = x1 + x2 , where x1 and x2 belong to faces of dimension ≤ r. By induction hypothesis, we have then x1 , x2 ∈ cone(M), and so also x ∈ cone(M). Proposition 1.2.35. Every bound cone K is tight over a finite set M. Moreover, if K = cone(M) for some finite set M, then the adjoint cone K ∗ of K is bound. 󸀠 ∪ K1 ), where K1 is Proof. In fact, from Proposition 1.2.32 we know that K = cone(Kmin 󸀠 pointed and therefore tight over a finite set M1 . Moreover, the minimal face Kmin of K 󸀠 p 󸀠 is a subspace, say Kmin = ℝ with p ≤ m. Then Kmin is tight over M2 := {e1 , . . . , ep , −(e1 + ⋅ ⋅ ⋅ + ep )}, as we have seen before, and we may take M := M1 ∪ M2 .

1.3 Existence and uniqueness of solutions | 31

To prove the second statement, suppose that K = cone({n1 , n2 , . . . , ns }). Then s

K ∗ = ⋂ Hni , i=1

where Hni = {x ∈ ℝm : ⟨x, ni ⟩ ≤ 0} is the halfspace defined by (1.2.11). From Proposition 1.2.35 it follows that, if one of the cones K or K ∗ is bound, then the other cone is also bound.

1.3 Existence and uniqueness of solutions Now we are going to define precisely what we mean by a DN-system and its solution, whose heuristic description was given in Subsection 1.1.1. This definition builds on the theory of convex sets and cones developed in the preceding section. 1.3.1 Statement of the problem Let Q be a fixed convex closed subset of ℝm , and consider the normal cone NQ (x) (Definition 1.2.13) and the tangent cone TQ (x) (Definition 1.2.25) at some point x ∈ Q, see again Figure 1.9 above. Definition 1.3.1. Given a convex set Q ⊆ ℝm , by νx we denote the metric projection of ℝm onto the normal cone NQ (x), and by τx the metric projection of ℝm onto the tangent cone TQ (x). So by definition we have νx (y) = P(y, NQ (x)),

τx (y) = P(y, TQ (x))

and 󵄩󵄩 󵄩 󵄩󵄩y − νx (y)󵄩󵄩󵄩 = dist(y, NQ (x)),

󵄩󵄩 󵄩 󵄩󵄩y − τx (y)󵄩󵄩󵄩 = dist(y, TQ (x))

(1.3.1)

for all y ∈ ℝm . Moreover, the orthogonal decomposition y = νx (y) + τx (y),

⟨νx (y), τx (y)⟩ = 0

(1.3.2)

follows from Theorem 1.2.20. Recall that a function x : J → ℝm is called absolutely continuous if for each ε > 0 one can find a δ > 0 such that, given any family {[a1 , b1 ], . . . , [ak , bk ]} of nonoverlapping intervals [aj , bj ] ⊂ J, from k

∑(bj − aj ) ≤ δ j=1

32 | 1 DN-systems: theory it follows that k

󵄨 󵄨 ∑󵄨󵄨󵄨x(bj ) − x(aj )󵄨󵄨󵄨 ≤ ε. j=1

Obviously, every Lipschitz continuous function is absolutely continuous, and every absolutely continuous function is continuous. In contrast to continuous functions, an absolutely continuous function is almost everywhere differentiable. Definition 1.3.2. Given a function f : J × Q → ℝm , where J is some real interval and Q ⊆ ℝm is closed and convex, a system with diode nonlinearity (or DN-system, for short) may be written either as differential equation ẋ = τx f (t, x)

(1.3.3)

involving the projection operator τx , or as differential inclusion ẋ ∈ f (t, x) − NQ (x)

(1.3.4)

involving the normal cone NQ (x). A solution of (1.3.3) (respectively, (1.3.4)) is a locally absolutely continuous function x = x(t) satisfying the equation ̇ = τx(t) f (t, x(t)) x(t)

(1.3.5)

̇ ∈ f (t, x(t)) − NQ (x(t)) x(t)

(1.3.6)

respectively the inclusion

for almost all t ∈ J. Definition 1.3.2 is fundamental for the remaining part of this chapter. The next theorem shows that these two approaches are actually equivalent in the following sense: Theorem 1.3.3. The differential equation (1.3.3) and the differential inclusion (1.3.4) are equivalent in the sense that they have the same solutions. Proof. Suppose first that x is an absolutely continuous function on some interval J which solves (1.3.5) almost everywhere. Applying Theorem 1.2.20 to the choice K := NQ (x(t)),

K ∗ := TQ (x(t)),

y := f (t, x(t))

we obtain τx(t) f (t, x(t)) = f (t, x(t)) − νx(t) f (t, x(t)) ∈ f (t, x(t)) − NQ (x(t)), which shows that x satisfies the differential inclusion (1.3.6).

(1.3.7)

1.3 Existence and uniqueness of solutions | 33

Conversely, suppose that x is an absolutely continuous function on some interval ̇ J which solves (1.3.6) almost everywhere. Choose a point u ∈ Nx(t) such that x(t) = ̇ and u are orthogonal, i. e., ⟨x(t), ̇ u⟩ = 0. In fact, from f (t, x(t)) − u. We claim that x(t) ̇ = x+󸀠 (t) = x−󸀠 (t) and Proposition 1.2.29 it follows that x(t) ̇ ∈ TQ (x(t)), x(t)

̇ u⟩ ≤ 0, ⟨x(t),

̇ ∈ TQ (x(t)), −x(t)

̇ u⟩ ≥ 0, ⟨x(t),

as well as

̇ u⟩ = 0. hence ⟨x(t), ̇ and the orthogonality of u and x(t) ̇ Now, the representation f (t, x(t)) = u + x(t) shows that we may apply the remark after Theorem 1.2.20 to the same choice (1.3.7) ̇ of K, K ∗ , and y (for x := u and z := x(t)). As a result, we get u = νx(t) f (t, x(t)) and ̇ = τx(t) f (t, x(t)), and the last equality is precisely (1.3.5). x(t) Fix (t0 , x0 ) ∈ ℝ × Q and H > 0, and let f : [t0 , t0 + H] × Q → ℝm be a continuous function satisfying a Lipschitz condition 󵄩󵄩 󵄩 󵄩󵄩f (t, x) − f (t, y)󵄩󵄩󵄩 ≤ L‖x − y‖

(t0 ≤ t ≤ t0 + H; x, y ∈ Q)

(1.3.8)

with some Lipschitz constant L > 0. Under these hypotheses,26 we are now going to prove existence and uniqueness of a solution of the initial value problem {

ẋ = τx f (t, x), x(t0 ) = x0

(1.3.9)

on an interval [t0 , t0 + h] for some h ∈ (0, H]. Recall that a solution of (1.3.9) is an absolutely continuous function x = x(t) which satisfies (1.3.5) for almost all t ∈ [t0 , t0 + h], as well as the initial condition in (1.3.9). To achieve this goal, we will proceed in several steps which build on the following ideas: – Extension of the function f (t, ⋅) in (1.3.9) from Q to a larger set Q1 ⊃ Q. – Construction of a cylinder which contains the graphs of possible approximate solutions. – Definition of special approximate solutions (“Euler polygons”). – Passing to the limit in the approximation by a compactness argument. – Proof of the fact that the limit function solves (1.3.9) in the set Q1 . 26 These are the typical hypotheses which are imposed for the proof of the classical Picard–Lindelöf theorem; however, our situation is more complicated, since the right hand side of (1.3.9) contains a cone projection which differs from point to point.

34 | 1 DN-systems: theory – –

Proof of the fact that the obtained solution actually lies in Q. Proof of the uniqueness of the solution in Q.

The first two steps of this programme will be carried out in the following Subsection 1.3.2, the remaining steps in subsequent subsections.

1.3.2 Continuation of the right hand side Our goal is to extend the function f in (1.3.9) from [t0 , t0 + H] × Q to the larger set [t0 , t0 + H] × Q1 , where Q1 := {x ∈ ℝm : dist(x, Q) ≤ 1}. To this end, we introduce the notation ex := {

0

for x ∈ Q,

x−P(x,Q) ‖x−P(x,Q)‖

for x ∈ Q1 \ Q

(1.3.10)

and define a map π : [t0 , t0 + H] × Q1 → ℝ+ by π(t, x) := max {⟨f (t, P(x, Q)), ex ⟩, 0}

((t, x) ∈ [t0 , t0 + H] × Q1 ).

(1.3.11)

We collect some properties of this map in the following Lemma 1.3.4. The proof of this lemma is straightforward; its geometric idea is sketched in the following Figure 1.12.

Figure 1.12

1.3 Existence and uniqueness of solutions | 35

Lemma 1.3.4. The map (1.3.11) has the following properties. (a) The equality π(t, x)ex = M(x − P(x, Q)) holds for some M ≥ 0, i. e., the vector π(t, x)ex coincides with the projection of f (t, P(x, Q)) onto the normal cone NQ (x), see (1.2.8). (b) The relation f (t, P(x, Q)) − π(t, x)ex ∈ TQ (x) holds, where TQ (x) denotes the tangent cone (1.2.39). (c) The estimate 󵄩󵄩 󵄩 󵄩 󵄩 󵄩󵄩f (t, P(x, Q)) − π(t, x)ex 󵄩󵄩󵄩 ≤ 󵄩󵄩󵄩f (t, P(x, Q))󵄩󵄩󵄩

(1.3.12)

is true. Using the auxiliary map (1.3.11) we define now f ̃ : [t0 , t0 + H] × Q1 → ℝm by 󵄩 󵄩 f ̃(t, x) := f (t, P(x, Q)) − (󵄩󵄩󵄩x − P(x, Q)󵄩󵄩󵄩 + π(t, x))ex .

(1.3.13)

Since P(x, Q) = x and ex = 0 for x ∈ Q, it is clear that f ̃(t, x) = f (t, x) for (t, x) ∈ [t0 , t0 + H] × Q, so f ̃ extends f . The function f ̃ in (1.3.13) is continuous in the interior of Q, as well as on the set Q1 \Q, but may have points of discontinuity on the boundary 𝜕Q of Q. To overcome this flaw, we associate to the function f ̃ the multivalued map27 F : [t0 , t0 + H] × Q1 → 𝒫 (ℝm ) defined by F(t, x) := {

f (t, x) − NQ (x) {f ̃(t, x)}

for x ∈ Q, for x ∈ Q1 \ Q.

(1.3.14)

In fact, this map has a useful closedness property whose definition [6] we recall for the reader’s ease: Definition 1.3.5. Let D ⊆ ℝn , and let F : D → 𝒫 (ℝm ) be a multivalued map. Then F is said to be closed on D if the following holds for each x ∈ D: if (xk )k is any sequence in D and yk ∈ F(xk ), then the relations lim xk = x,

k→∞

lim yk = y

k→∞

for some y ∈ ℝm imply that y ∈ F(x). 27 As usual, 𝒫(M) denotes the family of all subsets of M.

(1.3.15)

36 | 1 DN-systems: theory Roughly speaking, closedness is a weaker property than continuity of a multivalued map; in many applications it suffices to deduce the desired results. Before returning to the multivalued map (1.3.14), we consider another two important multivalued maps related to convex sets and cones from the viewpoint of closedness. Proposition 1.3.6. Let Q ⊂ ℝm be a closed convex set, and let NQ (x) denote the normal cone (1.2.21) to Q at x. Then the map NQ : x 󳨃→ NQ (x) is closed. Proof. Obviously, it suffices to prove the closedness of NQ at each boundary point of Q. So fix x ∈ 𝜕Q, and choose sequences (xk )k in Q and (yk )k in ℝm such that yk ∈ NQ (xk ),

lim xk = x,

k→∞

lim yk = y

k→∞

(1.3.16)

for some y ∈ ℝm . The first relation in (1.3.16) means that ⟨yk , z − xk ⟩ ≤ 0

(z ∈ Q).

Passing in this inequality to the limit as k → ∞ yields ⟨y, z − x⟩ ≤ 0

(z ∈ Q)

which means that y ∈ NQ (x).

Figure 1.13

The following example shows that the tangent cone does not behave as nicely as the normal cone from the viewpoint of closedness. Example 1.3.7. Let Q ⊂ ℝm be a closed convex set, and let TQ (x) denote the tangent cone (1.2.39) to Q at x. Then the map TQ : x 󳨃→ TQ (x) need not be closed. For example, take as Q the closed unit disc in ℝ2 and xk := (1 − 1/k, 0),

x := (1, 0).

1.3 Existence and uniqueness of solutions | 37

Clearly, xk → x as k → ∞, xk is an interior point of Q, and x is a boundary point of Q. From (1.2.25) and (1.2.42) it follows that NQ (xk ) = {(0, 0)},

NQ (x) = ℝ+ × {0},

TQ (xk ) = ℝ2 ,

TQ (x) = ℝ− × ℝ.

So the map NQ is trivially closed at x (in accordance with the preceding Proposition 1.3.6), but the map TQ is not, as may be seen by choosing yk = y = (1, 0). Now we come back to the announced closedness result for the multivalued map (1.3.14). Proposition 1.3.8. The map F defined in (1.3.14) is closed on [t0 , t0 + H] × Q1 . Proof. Fix (t, x) ∈ [t0 , t0 + H] × Q1 , and let (tk , xk )k be a sequence in [t0 , t0 + H] × Q1 which satisfies (1.3.15), where yk ∈ F(tk , xk ). We distinguish three cases. 1st case: xk ∈ Q for all k, hence x ∈ Q. Then yk ∈ F(tk , xk ) implies that f (tk , xk ) − yk ∈ NQ (xk ) which means that ⟨f (tk , xk ) − yk , z − x⟩ ≤ 0

(z ∈ Q).

Passing to the limit in this estimate we see that ⟨f (t, x) − y, z − x⟩ ≤ 0 (z ∈ Q), and so f (t, x) − y ∈ NQ (x) and y ∈ F(t, x). 2nd case: xk ∈ Q1 \ Q for all k, but x ∈ Q. Using the notation (1.3.11) we have then 󵄩 󵄩 yk = f (tk , P(xk , Q)) − (󵄩󵄩󵄩xk − P(xk , Q)󵄩󵄩󵄩 + π(tk , xk ))exk → y

(k → ∞).

Moreover, f (tk , P(xk , Q)) → f (t, P(x, Q)) = f (t, x) and 󵄩󵄩 󵄩 󵄩󵄩xk − P(xk , Q)󵄩󵄩󵄩 → 0

(k → ∞).

Since, as observed earlier, π(tk , xk )exk is the projection of f (tk , P(xk , Q)) onto the cone NQ (xk ) ⊆ NQ (P(xk , Q)), we further conclude that π(tk , xk )exk ∈ NQ (P(xk , Q)), hence again y ∈ F(t, x). 3rd case: xk ∈ Q1 \ Q for all k and x ∈ Q1 \ Q. In this case the assertion simply follows from the continuity of the function f ̃ defined in (1.3.13) on Q1 \ Q.

38 | 1 DN-systems: theory Let (t0 , x0 ) ∈ [t0 , t0 + H] × Q and a > 0 be fixed, and let28 󵄩 󵄩 M := max {󵄩󵄩󵄩f (t, x)󵄩󵄩󵄩 : t0 ≤ t ≤ t + H, x ∈ Q ∩ Ba (x0 )},

(1.3.17)

where Br (x0 ) denotes the closed ball (1.2.4). Observe that x ∈ Ba (x0 ) implies P(x, Q) ∈ Ba (x0 ), since P(x0 , Q) = x0 and the function x 󳨃→ P(x, Q) is nonexpansive, see Proposition 1.2.12. We claim that this implies 󵄩 󵄩󵄩 ̃ 󵄩󵄩f (t, x)󵄩󵄩󵄩 ≤ M + a

(t0 ≤ t ≤ t + H, x ∈ Ba (x0 )).

Indeed, from (1.3.12) we obtain 󵄩󵄩 ̃ 󵄩 󵄩 󵄩 󵄩 󵄩 󵄩󵄩f (t, x)󵄩󵄩󵄩 = 󵄩󵄩󵄩f (t, P(x, Q)) − (󵄩󵄩󵄩x − P(x, Q)󵄩󵄩󵄩 + π(t, x))ex 󵄩󵄩󵄩 󵄩 󵄩 󵄩 󵄩 ≤ 󵄩󵄩󵄩f (t, P(x, Q)) − π(t, x)ex 󵄩󵄩󵄩 + 󵄩󵄩󵄩x − P(x, Q)󵄩󵄩󵄩 󵄩 󵄩 󵄩 󵄩 ≤ 󵄩󵄩󵄩f (t, P(x, Q))󵄩󵄩󵄩 + 󵄩󵄩󵄩x − P(x, Q)󵄩󵄩󵄩 ≤ M + a,

(1.3.18)

where in the last estimate we have used the fact that P(x, Q) ∈ Q ∩ Ba (x0 ). Now the claim follows from the estimate ‖x − P(x, Q)‖ ≤ ‖x − x0 ‖ ≤ a. Let us now consider all (approximate) solutions on the interval [t0 , t0 + h] with h := min {H,

a }, M+a

(1.3.19)

see Figure 1.14.

Figure 1.14

In other words, the “graphs” of such solutions will be contained in the cylinder Z := [t0 , t0 + h] × Ba (x0 ) (which in the scalar case m = 1, see Figure 1.14, is just a rectangle). 28 Of course, the constant M in (1.3.17) depends on our choice of a, but this is not important for our calculations.

1.3 Existence and uniqueness of solutions | 39

1.3.3 Euler polygons The main idea is now to replace the system (1.3.9) by the system {

ẋ ∈ F(t, x),

(1.3.20)

x(t0 ) = x0 ,

where F is given by (1.3.14). The advantage of (1.3.20) is that the right-hand side F is well-behaved, as Proposition 1.3.8 shows. However, we have to pay a price for this: differential inclusions are usually more difficult to treat than differential equations. The procedure is the familiar one: first we construct a sequence of piecewise linear “brute force approximations” on subintervals, and then we show by means of a compactness argument that we may pass to the limit. So for k ∈ ℕ we consider the equidistant partition Pk := {tk,0 , tk,1 , . . . , tk,k } of the interval [t0 , t0 + h], with h given by (1.3.19), where tk,0 := t0 , tk,1 := t0 +

h h , . . . , tk,k−1 := t0 + (k − 1) , tk,k := t0 + h, k k

and define xk (t) on [t0 , t0 + h] in the following way. First we put xk (t0 ) := x0 . Once we have constructed xk (t) for tk,0 ≤ t ≤ tk,j−1 , we define xk (t) for tk,j−1 ≤ t ≤ tk,j by xk (t) := xk (tk,j−1 ) + f ̃(tk,j−1 , xk (tk,j−1 ))(t − tk,j−1 ),

(1.3.21)

where f ̃ is given by (1.3.13). The functions xk : [t0 , t0 + h] → ℝm constructed in this way satisfy ẋk (t) = f ̃(tk,j−1 , xk (tk,j−1 )) (tk,j−1 ≤ t ≤ tk,j ), 󵄩󵄩 󵄩 󵄩󵄩xk (s) − xk (t)󵄩󵄩󵄩 ≤ (M + a)|s − t|

(t0 ≤ s, t ≤ t0 + h),

(1.3.22) (1.3.23)

with M as in (1.3.17), and xk (t) ∈ Ba (x0 )

(t0 ≤ t ≤ t0 + h).

(1.3.24)

Indeed, (1.3.22) follows from taking derivatives in (1.3.21), while (1.3.23) follows from (1.3.22), (1.3.18), and the mean value theorem, since 󵄩󵄩 󵄩 󵄩 󵄩 󵄩 󵄩 󵄩󵄩xk (s) − xk (t)󵄩󵄩󵄩 ≤ 󵄩󵄩󵄩ẋk (τ)󵄩󵄩󵄩 |s − t| = 󵄩󵄩󵄩f ̃(tk,j−1 , xk (tk,j−1 ))󵄩󵄩󵄩 |s − t| ≤ (M + a)|s − t|, for t0 ≤ s ≤ t ≤ t0 + h and some suitable τ ∈ [s, t] ∩ [tk,j−1 , tk,j ]. Finally, (1.3.24) follows from the estimate a 󵄩󵄩 󵄩 󵄩 󵄩 = a, 󵄩󵄩x0 − xk (t)󵄩󵄩󵄩 = 󵄩󵄩󵄩xk (t0 ) − xk (t)󵄩󵄩󵄩 ≤ (M + a)|t0 − t| ≤ (M + a) M+a

40 | 1 DN-systems: theory where we have used (1.3.19). The important point is now that (1.3.23) and (1.3.24) show that the set {xk : k ∈ ℕ} is equicontinuous and bounded, and so the Arzelà-Ascoli compactness criterion implies that there exists a subsequence of (xk )k (which for simplicity we still denote by (xk )k ) converging uniformly on [t0 , t0 + h] to some continuous function x. Theorem 1.3.9. The limit function x is a solution of the system (1.3.20) in [t0 , t0 + h] × Q1 . Proof. Since the Lipschitz condition (1.3.23) with Lipschitz constant M + a carries over from the functions xk to their uniform limit, the function x is absolutely continuous on [t0 , t0 + h]. We have to show that x solves the differential inclusion and the initial condition in (1.3.20). The fact that x(t0 ) = x0 is clear, since this holds for every function xk . Let t be a ̇ exists. Given any ε > 0, we show that point in [t0 , t0 + h] where x(t) ̇ ∈ F(t, x(t)) + Bε (0); x(t)

(1.3.25)

passing to the limit for ε → 0+ we see then that x satisfies the differential inclusion in (1.3.20). Choose δ > 0 so small that F(t, x) ⊆ F(t, x(t))+Bε (0) for |t −t| < δ and ‖x(t)−x(t)‖ < (M + a + 1)δ. Moreover, we may find a k0 ∈ ℕ such that h≤

kδ , 2

󵄩 󵄩 max 󵄩󵄩󵄩xk (t) − x(t)󵄩󵄩󵄩 ≤ δ

t0 ≤t≤t0 +h

(k ≥ k0 ).

If |t − t| < δ/2 and k ≥ k0 , the partition point tk,j−1 of the polygon xk which lies next to t from the left satisfies |tk,j−1 − t| < δ. Consequently, 󵄩 󵄩 󵄩 󵄩 󵄩 󵄩󵄩 󵄩󵄩xk (tk,j−1 ) − x(t)󵄩󵄩󵄩 ≤ 󵄩󵄩󵄩xk (tk,j−1 ) − xk (t)󵄩󵄩󵄩 + 󵄩󵄩󵄩xk (t) − x(t)󵄩󵄩󵄩 ≤ (M + a + 1)δ and ẋk (t) = f ̃(tk,j−1 , xk (tk,j−1 )) ∈ F(tk,j−1 , xk (tk,j−1 )) ⊆ F(tk,j−1 , xk (tk,j−1 )) + Bε (0). Now we use the identity t

xk (t) − xk (t) 1 = ∫ ẋk (s) ds. t−t t−t

(1.3.26)

t

29

It is not hard to see that the integral mean on the right-hand side of (1.3.26) belongs to the closed convex hull of the set {ẋk (t) : t0 ≤ t ≤ t0 + h}. Therefore, xk (t) − xk (t) ∈ F(t, x(t)) + Bε (0), t−t and passing first to the limit k → ∞ and then to the limit t → t gives (1.3.25). 29 The integral in (1.3.26) may be understood as limit of Riemann sums.

1.3 Existence and uniqueness of solutions | 41

1.3.4 Solvability in the original set Theorem 1.3.9 guarantees the existence of a solution of (1.3.20) in [t0 , t0 + h] × Q1 . Now we show that the solution even takes values in the smaller set Q. We may formulate this fact in the following way. Theorem 1.3.10. The limit function x takes its values in Q, and therefore is a solution of the system (1.3.9). Proof. Suppose that the assertion is false; since x is continuous and Q is closed, we may then find an interval (α, β) ⊂ [t0 , t0 + h] such that x(t) ∈ ̸ Q for α < t < β. Let 󵄩 󵄩2 d(t) := dist2 (x(t), Q) = 󵄩󵄩󵄩x(t) − x(t)󵄩󵄩󵄩 , where we have used the shortcut x := P(x, Q). We show that d(t +Δt) < d(t) for t ∈ (α, β) and sufficiently small Δt > 0. In fact, 󵄩2 󵄩 󵄩2 󵄩 d(t + Δt) = 󵄩󵄩󵄩x(t + Δt) − x(t + Δt)󵄩󵄩󵄩 ≤ 󵄩󵄩󵄩x(t + Δt) − x(t)󵄩󵄩󵄩 = ⟨x(t + Δt) − x(t), x(t + Δt) − x(t)⟩ = ⟨x(t + Δt) − x(t) + x(t) − x(t), x(t + Δt) − x(t) + x(t) − x(t)⟩ 󵄩 󵄩2 󵄩 󵄩2 = 󵄩󵄩󵄩x(t + Δt) − x(t)󵄩󵄩󵄩 + 󵄩󵄩󵄩x(t) − x(t)󵄩󵄩󵄩 + 2⟨x(t + Δt) − x(t), x(t) − x(t)⟩ = o(Δt) + d(t) + 2⟨f ̃(t, x(t))Δt, x(t) − x(t)⟩ = o(Δt) + d(t) + 2⟨f (t, x(t)) − π(t, x(t))ex − d(t)ex , d(t)ex ⟩Δt. Consequently, d(t + Δt) ≤ d(t)(1 − 2Δt) + o(Δt) < d(t). We conclude that the function d is decreasing outside Q, and so the function x(t) cannot leave Q. The proof is complete. 1.3.5 Uniqueness and continuous dependence In the preceding subsections we have proved the existence of solutions of the system (1.3.9); now we are going to prove uniqueness. We do this in four steps. First we show that the multivalued map x 󳨃→ NQ (x) has a certain monotonicity property. Afterwards we prove a differential inequality and a related Gronwall-type lemma which show that solutions continuously depend on the data. This implies, in particular, that there is at most one solution. We begin with the general definition of monotonicity of multivalued maps. This notion is extremely useful in topological degree theory and has important applications in the theory of elliptic equations [4, 8].

42 | 1 DN-systems: theory Definition 1.3.11. Let D ⊆ ℝm , and let F : D → 𝒫 (ℝm ) be a multivalued map. Then F is called monotone if for every x, x̂ ∈ D, y ∈ F(x), and ŷ ∈ F(x)̂ we have ⟨y − y,̂ x − x⟩̂ ≥ 0.

(1.3.27)

If there is some c > 0 such that ⟨y − y,̂ x − x⟩̂ ≥ c‖x − x‖̂ 2

(1.3.28)

̂ then F is called uniformly monotone. Finally, F is called for y ∈ F(x) and ŷ ∈ F(x), maximal monotone if there is no proper monotone extension of F. Definition 1.3.11 may be found, e. g., in [4, 7, 77]. The name is of course motivated by the fact that in case m = 1 and singlevalued F this definition gives precisely the monotonically increasing functions. The following two examples are parallel to Proposition 1.3.6 and Example 1.3.7. Example 1.3.12. Let Q ⊂ ℝm be a closed convex set, and let NQ (x) denote the normal cone (1.2.21) to Q at x. Then the map NQ : x 󳨃→ NQ (x) is maximal monotone. To prove ̂ The last two conditions mean, by monotonicity, fix x, x̂ ∈ Q, y ∈ NQ (x), and ŷ ∈ NQ (x). Definition 1.2.13, that ⟨y, z − x⟩ ≤ 0,

⟨y,̂ z − x⟩̂ ≤ 0

(z ∈ Q).

Putting in the first estimate z := x̂ and in the second estimate z := x we get ⟨y − y,̂ x − x⟩̂ = −⟨y, x̂ − x⟩ − ⟨y,̂ x − x⟩̂ ≥ 0, which is precisely the monotonicity condition (1.3.27) for NQ . To prove that NQ is even maximal monotone, we remark that every point z ∈ ℝm admits a projection z = P(z, Q) onto Q, and therefore z − z ∈ NQ (z). Therefore the map I + NQ is surjective, and by a well-known result30 this is equivalent to the maximal monotonicity of NQ . The example Q := [0, 2]×[0, 2] ⊂ ℝ2 , x = (0, 1), x̂ = (1, 0), y = (−ε, 0), and ŷ = (0, −ε) with ε > 0 shows that the map NQ is in general not uniformly monotone.

One may show that a maximal monotone map is always closed, see [4, p. 369]. This fact allows us to give an interesting geometric interpretation of the closedness of the map NQ : given x0 ∈ 𝜕Q and α > 0, there exists a δ > 0 such that the angle between an arbitrary vector y ∈ NQ (x), for x ∈ 𝜕Q ∩ Bδ (x0 ), and the vector in NQ (x0 ) which is closest to y, is ≤ α, see Figure 1.15. 30 In the literature, see e. g. [7, p. 22], this result is called Minty’s theorem.

1.3 Existence and uniqueness of solutions | 43

Figure 1.15

To see this, assume the contrary. Then we find some α > 0, a sequence (xn )n converging to x0 , and elements yn ∈ NQ (xn ) with ‖yn ‖ = 1 and the property that the angle between yn and the vector in NQ (x0 ) which is closest to yn , is larger than α. Without loss of generality we may assume that the sequence (yn )n converges to some y0 , which implies that y0 ∈ ̸ NQ (x0 ). But this contradicts the closedness of the maximal monotone operator NQ and proves the assertion. Example 1.3.13. Let Q ⊂ ℝm be a closed convex set, and let TQ (x) denote the tangent cone (1.2.39) to Q at x. Then the map TQ : x 󳨃→ TQ (x) is in general not monotone. To see this, choose Q ⊂ ℝ2 as in Example 1.3.7 and x = (1, 0),

x̂ = y = (0, 0),

ŷ = (1, 0).

Then y ∈ TQ (x) = ℝ− × ℝ and ŷ ∈ TQ (x)̂ = ℝ2 . However, ⟨y − y,̂ x − x⟩̂ = −1, and so (1.3.27) is not true. Now we prove the announced differential inequality for the norm of the difference of two hypothetical solutions. Lemma 1.3.14. Suppose that x1 = x1 (t) and x2 = x2 (t) are two solutions of the differential inclusion ẋ ∈ f (t, x) − NQ (x) on some interval [t0 , t1 ]. Then d 󵄩󵄩 󵄩2 󵄩 󵄩2 󵄩x (t) − x2 (t)󵄩󵄩󵄩 ≤ 2L󵄩󵄩󵄩x1 (t) − x2 (t)󵄩󵄩󵄩 dt 󵄩 1 for almost all t ∈ [t0 , t1 ], where L is the Lipschitz constant in (1.3.8).

44 | 1 DN-systems: theory Proof. The function t 󳨃→ z(t) := ‖x1 (t) − x2 (t)‖2 is absolutely continuous. At any point t ∈ [t0 , t1 ] of differentiability of z we get ̇ = 2⟨x1 (t) − x2 (t), ẋ1 (t) − ẋ2 (t)⟩ z(t)

= 2⟨x1 (t) − x2 (t), f (t, x1 (t)) − f (t, x2 (t))⟩ − 2⟨x1 (t) − x2 (t), ξ1 − ξ2 ⟩

for suitable points ξ1 ∈ NQ (x1 (t)) and ξ2 ∈ NQ (x2 (t)). Dropping the last (positive) term ⟨x1 (t) − x2 (t), ξ1 − ξ2 ⟩ and using the Cauchy Schwarz inequality we obtain ̇ ≤ 2󵄩󵄩󵄩󵄩x1 (t) − x2 (t)󵄩󵄩󵄩󵄩 󵄩󵄩󵄩󵄩f (t, x1 (t)) − f (t, x2 (t))󵄩󵄩󵄩󵄩 z(t) 󵄩 󵄩2 ≤ 2L󵄩󵄩󵄩x1 (t) − x2 (t)󵄩󵄩󵄩 = 2Lz(t) which proves the assertion.31 Now we come to the announced Gronwall-type result for solutions of differential inequalities. Although the result is well-known from every calculus course we recall a simple proof. Lemma 1.3.15. Suppose that z : [t0 , t1 ] → ℝ is absolutely continuous and satisfies the differential inequality ̇ ≤ az(t) z(t)

(1.3.29)

for almost all t ∈ [t0 , t1 ] and some a > 0. Then z(t) ≤ z(t0 )ea(t−t0 )

(1.3.30)

for almost all t ∈ [t0 , t1 ]. Proof. From our hypotheses it follows that the function b : [t0 , t1 ] → ℝ defined by ̇ − az(t) belongs to L1 ([t0 , t1 ]) and satisfies b(t) ≤ 0 almost everywhere. By b(t) := z(t) construction, z satisfies the differential equation ̇ = az(t) + b(t). z(t)

(1.3.31)

On the other hand, the function z1 : [t0 , t1 ] → ℝ defined by t

z1 (t) := z(t0 )ea(t−t0 ) + eat ∫ e−as b(s) ds (t0 ≤ t ≤ t1 ) t0

31 Here we have used that f (t, ⋅) satisfies a global Lipschitz condition on any closed set which contains the graphs of the functions x1 and x2 , together with the fact that a local Lipschitz condition on a compact set implies a global Lipschitz condition.

1.3 Existence and uniqueness of solutions | 45

is absolutely continuous and satisfies the differential equation (1.3.31) as well almost everywhere on [t0 , t1 ]. Indeed, t

z1̇ (t) = az(t0 )ea(t−t0 ) + aeat ∫ e−as b(s) ds + eat e−at b(t) = az1 (t) + b(t). t0

But z1 also assumes the same value z(t0 ) at t0 as z, and it is the only solution with this property. To see this, suppose that z2 : [t0 , t1 ] → ℝ is another absolutely continuous function which satisfies (1.3.31) almost everywhere on [t0 , t1 ], as well as the initial condition z2 (t0 ) = z1 (t0 ). Then the difference w := z1 − z2 is absolutely continuous and satisfies both w(t0 ) = 0 and ̇ = aw(t) w(t)

(1.3.32)

almost everywhere on [t0 , t1 ]. Consequently, t

w(t) = a ∫ w(s) ds, t0

which shows that w ∈ C 1 ([t0 , t1 ]) satisfies (1.3.32) everywhere on [t0 , t1 ]. So we get w(t) ≡ 0, hence t

z(t) ≡ z1 (t) ≡ z(t0 )ea(t−t0 ) + eat ∫ e−as b(s) ds ≤ z(t0 )ea(t−t0 ) , t0

since eat > 0, t ≥ t0 , and e−as b(s) ≤ 0. Now we are in the position to prove a uniqueness result for solutions of the system (1.3.9) under the hypotheses formulated at the beginning of this subsection. Theorem 1.3.16. Let Q ⊂ ℝm be a closed convex set, (t0 , x0 ) ∈ ℝ × Q, H > 0, and f : [t0 , t0 + H] × Q → ℝm a continuous function satisfying the Lipschitz condition (1.3.8) ̂ are solutions of with some Lipschitz constant L > 0. Suppose that x = x(t) and x̂ = x(t) the equation ẋ = τx f (t, x)

(1.3.33)

̂ 0 ) = x̂0 , on [t0 , t1 ] ⊆ [t0 , t0 + H] and satisfy the initial condition x(t0 ) = x0 and x(t respectively. Then 󵄩󵄩 ̂ 󵄩󵄩󵄩󵄩 ≤ eL(t−t0 ) ‖x0 − x̂0 ‖ 󵄩󵄩x(t) − x(t)

(t0 ≤ t ≤ t1 ).

Consequently, the Cauchy problem (1.3.9) has at most one solution on [t0 , t1 ].

(1.3.34)

46 | 1 DN-systems: theory 2 ̂ Proof. Applying Lemma 1.3.14 to the function z(t) := ‖x(t) − x(t)‖ , hence z(t0 ) := ‖x0 − 2 x̂0 ‖ , we obtain (1.3.29) with a := 2L. So from Lemma 1.3.15 it follows that

z(t) ≤ z(t0 )e2L(t−t0 ) and taking square roots we obtain the estimate (1.3.34). In particular, in case x0 = x̂0 ̂ for t0 ≤ t ≤ t1 which is the announced uniqueness result. we get x(t) ≡ x(t)

1.4 Oscillations in DN-systems So far we have discussed existence and uniqueness of solutions of (1.3.9) from a theoretical viewpoint, building on the theory of cones in the Euclidean space ℝm . In applications one has often to deal with processes which are periodic or close to be periodic in a sense to be made precise. We discuss this here by means of an important example involving an oscillating electrical contour. Afterwards we prove the existence, under suitable hypotheses, of a so-called transversal segment for the planar DN-system under consideration. 1.4.1 Forced oscillations For the contour which is sketched in Figure 1.16, we may write the second Kirchhoff law in the form Ri + L

di + u = E(t), dt

where u is the voltage at the condensator C, and the notation is the same as in Section 1.1.

Figure 1.16

Replacing the current i by C u,̇ where the dot denotes as usual the time derivative, and dividing by LC, we obtain ü +

R 1 1 u̇ + u= E(t). L LC LC

1.4 Oscillations in DN-systems | 47

Using the shortcut δ :=

R , 2L

ω :=

1 , √LC

(1.4.1)

where32 δ ≥ 0 and ω > 0, we end up with the second order equation ü + 2δu̇ + ω2 u = ω2 E(t).

(1.4.2)

Since (1.4.2) is a linear equation with constant coefficients, the corresponding homogeneous equation ü + 2δu̇ + ω2 u = 0

(1.4.3)

may be solved by the classical Ansatz u(t) = eλt . In this case we get the characteristic equation λ2 + 2δλ + ω2 = 0 whose solutions are λ1,2 = −δ ± √δ2 − ω2 . So the functions φ1 (t) = eλ1 t and φ2 (t) = eλ2 t form a fundamental system for the solutions of (1.4.3). As usual, we have to distinguish three cases for λ1 and λ2 . 1st case: δ > ω. Then λ1 and λ2 are both real and negative, so every solution u(t) = C1 eλ1 t + C1 eλ1 t

(C1 , C2 ∈ ℝ)

of (1.4.3) tends to 0 as t → ∞. Such solutions are called asymptotically damped solutions. 2nd case: δ < ω. Then λ1,2 = −δ ± iω̂ form a pair of conjugate complex numbers, where ω̂ := √ω2 − δ2 , so we get the solutions ̂ u(t) = e−δt (C1 cos ωt̂ + C2 sin ωt)

(C1 , C2 ∈ ℝ).

In case δ > 0 we may write this solution in the form u(t) = e−δt √C12 + C22 cos(ωt̂ − α)

(1.4.4)

where the phase shift α may be determined by the equality cos α =

C1 √C12

+ C22

.

The formula (1.4.4) shows that u(t) → 0 as t → ∞ which means that (1.4.4) represents a damped oscillation. In case δ = 0 we simply get 32 The number ω is the frequency of free oscillations of the contour.

48 | 1 DN-systems: theory u(t) = √C12 + C22 cos(ωt̂ − α) which represents a periodic oscillation.33 3rd case: δ = ω. Then λ = −δ is a double root, and the general solution u(t) = e−δt (C1 + C2 t)

(C1 , C2 ∈ ℝ)

of (1.4.3) represents a non-oscillatory damped state satisfying u(t) → 0 as t → ∞. The solvability of the inhomogeneous equation (1.4.2) (and the behaviour of its solutions) depends of course on the form of the right-hand side E(t). Suppose that the source voltage is a periodic function of time, i. e., E(t) = U cos Ωt, where U and Ω are given parameters. Then (1.4.2) has the form ü + 2δu̇ + ω2 u = ω2 U cos Ωt,

(1.4.5)

and special periodic solutions34 of (1.4.5) are u(t) =

ω2 U

√(ω2 − Ω2 )2 + (2δΩ)2

cos(Ωt − β),

(1.4.6)

where the phase shift β may be determined by the equality cos β =

ω2 − Ω2

√(ω2 − Ω2 )2 + (2δΩ)2

.

By the well-known superposition principle, the general solution of (1.4.5) may be written in case 0 < δ < ω in the form u(t) = e−δt √C12 + C22 cos(ωt̂ − α) +

ω2 U

√(ω2 − Ω2 )2 + (2δΩ)2

cos(Ωt − β).

In any case, all solutions of (1.4.5) asymptotically approach forced oscillations. Now we are interested in the following problem: for what value of the capacity C the amplitude of the forced oscillations of the current become maximal? After differentiating (1.4.6) and multiplying by C we get from (1.4.1)

=

1 L√( LC

Cω2 UΩ

sin(Ωt − β) √(ω2 − Ω2 )2 + (2δΩ)2 UΩ sin(Ωt − β) = A(C) sin(Ωt − β), R − Ω2 )2 + (2 2L Ω)2

̇ =− i(t) = C u(t)

where we have written the amplitude of the oscillating contour 33 We remark that this case does not occur in practice, since it means that R = 0. 34 For δ > 0 the solutions (1.4.6) are called forced oscillations.

1.4 Oscillations in DN-systems | 49

A(C) =

U √( CΩ1 2

− L)2 + R2

,

as function of the capacity C. An easy calculation shows that A(C) becomes maximal for Cmax = 1/LΩ2 with A(Cmax ) = U/R, hence ω2 = Ω2 . In other words, the amplitude is maximal if the free and the forced oscillation have the same frequency; this is of course precisely what one expects for physical reasons. 1.4.2 Closed trajectories of 2D systems In this subsection we follow [30, 31]. Consider the two-dimensional autonomous DNsystem ẋ = τx f (x)

(x ∈ Q)

(1.4.7)

on a closed convex domain Q ⊆ ℝ2 with nonempty interior; here τx denotes as before the metric projection (1.3.1) on the tangent cone TQ (x). Following [31], we will discuss the existence, uniqueness, and stability of closed trajectories of (1.4.7), where we assume throughout that f : Q → ℝ2 is continuous. To this end, some definitions are in order. Definition 1.4.1. Given any solution φ of (1.4.7) on some interval I, the set {φ(t) : t ∈ I} ⊂ ℝ2 is called trajectory of (1.4.7). If I = [0, T] and, in addition, we have φ(0) = φ(T) and φ(t1 ) ≠ φ(t2 ) for 0 ≤ t1 < t2 < T, this set is called a closed trajectory. Definition 1.4.2. We say that an element p ∈ ℝ2 is an ω-limit point of a solution φ of (1.4.7) if there exists a real sequence (tn )n such that tn → ∞ and φ(tn ) → p as n → ∞. The set of all ω-limit points of a solution φ is called ω-limit set of φ and denoted by Ωφ . Definition 1.4.2 can be found, together with some comments and examples, in the textbook [16]. The main result of this section is Theorem 1.4.3 below which gives a sufficient condition for the existence of a closed trajectory in terms of ω-limit sets and normal cones.35 A crucial assumption in this theorem will be the following – Condition on the non-existence of stationary points (NSP): The system (1.4.7) has a bounded solution φ : ℝ+ → ℝ2 with the property that the corresponding ω-limit set Ωφ does not contain stationary points, i. e., f (p) ∈ ̸ NQ (p) for any p ∈ Ωφ , where NQ (p) denotes the normal cone (1.2.21) to Q at p. Theorem 1.4.3 (Existence of closed trajectories). Let Q ⊆ ℝ2 be closed and convex and f : Q → ℝ2 be continuous. Suppose that Condition NSP is fulfilled. Then the system (1.4.7) has a closed trajectory. 35 Theorem 1.4.3 may be considered as a generalization of the classical Poincaré–Bendixon theorem, see [16].

50 | 1 DN-systems: theory The proof of Theorem 1.4.3 is based on a series of auxiliary lemmas. We start by constructing a special function φ which is contained in the ω-limit set Ωφ of a solution φ and is the limit (uniformly on each bounded interval) of solutions of (1.4.7). Afterwards we show that the set of all solutions of (1.4.7) is closed (in a suitably topology) which implies that φ is also a solution. The closedness of the trajectory of the solution φ will then be proved by means of transversality arguments; this means, roughly speaking, that φ intersects certain intervals, for any p ∈ Ωφ , only in p infinitely many times. But the fact that the trajectory of φ consists of ω-limit points of φ implies then that this trajectory coincides with Ωφ , and hence is closed. To carry out this programme, we start by constructing the special function φ. Fix a point p0 ∈ Ωφ , where φ : ℝ+ → ℝ2 is a bounded solution, and choose a sequence (tn )n such that tn → ∞ and φ(tn ) → p0 as n → ∞. For each n ∈ ℕ, let φn : ℝ+ → ℝ2 be the shift of φ defined by φn (t) := φ(t + tn ). Then φn (0) = φ(tn ) → p0

(n → ∞).

Observe that the sequence (φn )n is bounded, since φ is bounded, and equicontinuous, since f , hence also τx f , is bounded on bounded sets. Therefore we may construct a decreasing (with respect to inclusion) sequence (Ak )k of index sets Ak ⊂ ℕ such that the sequence (φn )n∈Ak converges uniformly on the interval [0, k]. Denoting the k-th element of Ak (in increasing order) by nk , we see that the sequence (φnk )k converges on ℝ+ pointwise to some function φ, and the convergence is uniform on each compact subinterval of ℝ+ . It is not hard to see that φ(0) = p0 and p0 ∈ Ωφ . This implies, in particular, that φ : ℝ+ → ℝ2 is bounded, and 0 ≠ Ωφ ⊆ Ωφ .

Figure 1.17

1.4 Oscillations in DN-systems | 51

In Figure 1.17 the trajectory of φ(tn ) = φ(t + tn ) starts at some point φ(0) = φ(tn ) and goes upwards to the left for t ≥ 0. Here the trajectory of φ coincides with Ωφ = Ωφ , and for t → ∞ the point φ(t) winds counterclockwise along a neighbourhood of Ωφ . Now we are going to prove a special property of the set of all solutions of (1.4.7). Lemma 1.4.4. The set of all solutions of the system (1.4.7) is closed in the topology of uniform convergence on compact intervals. More precisely, if (xn )n is a sequence of solutions xn : J → ℝ2 of (1.4.7) on some interval J ⊆ ℝ which converges uniformly on each compact interval I ⊆ J to some function x, then x is also a solution of (1.4.7). Proof. As we have seen, the system (1.4.7) is equivalent to the inclusion f (x) ∈ ẋ + NQ (x),

(1.4.8)

where the operator NQ on the right-hand side of (1.4.8) is maximal monotone, see Example 1.3.12. In particular, putting bn := f ∘ xn , with xn as in the hypothesis, we have bn (t) ∈ ẋn (t) + NQ (xn (t)).

(1.4.9)

By our assumption, the sequence (bn )n converges, uniformly on each compact interval I ⊆ J, to the function b := f ∘ x. Consider the inclusion ̇ + NQ (y(t)), b(t) ∈ y(t)

(1.4.10)

together with the initial condition y(t0 ) = x(t0 ), where t0 ∈ J is arbitrary. As is shown in [7, Prop. 3.13], this problem has a unique solution y : J ∩ [t0 , ∞) → ℝ2 . Choosing z ∈ NQ (xn ) in (1.4.9) and u ∈ NQ (y) in (1.4.10), and taking scalar products with xn − y yields ⟨ẋn − y,̇ xn − y⟩ + ⟨z − u, xn − y⟩ = ⟨bn − b, xn − y⟩. Since the term ⟨z − u, xn − y⟩ is nonnegative, this implies 1 d 󵄩󵄩 󵄩2 󵄩 󵄩 󵄩x (t) − y(t)󵄩󵄩󵄩 ≤ ⟨bn (t) − b(t), xn (t) − y(t)⟩ ≤ k(I, n)󵄩󵄩󵄩xn (t) − y(t)󵄩󵄩󵄩, 2 dt 󵄩 n where 󵄩 󵄩 k(I, n) := max {󵄩󵄩󵄩bn (t) − b(t)󵄩󵄩󵄩 : t ∈ I} → 0

(n → ∞),

by assumption. In other words, the nonnegative scalar function ψ(t) := ‖xn (t) − y(t)‖2 satisfies the differential inequality ψ̇ ≤ 2k(I, n)√ψ, and therefore √ψ(t) ≤ √ψ(t0 ) + k(I, n)(t − t0 ),

52 | 1 DN-systems: theory i. e., 󵄩 󵄩 󵄩 󵄩󵄩 󵄩󵄩xn (t) − y(t)󵄩󵄩󵄩 ≤ 󵄩󵄩󵄩xn (t0 ) − y(t0 )󵄩󵄩󵄩 + k(I, n)(t − t0 )

(t ∈ I ∩ [t0 , ∞)).

Consequently, xn → y on I, and so y = x, which means that x solves (1.4.7) on J ∩ [t0 , ∞). Since t0 ∈ J was chosen arbitrarily, x is a solution of (1.4.7) on the whole interval J, and we are done. Lemma 1.4.4 implies, in particular, that the function φ constructed above as limit of solutions is itself a solution of (1.4.7). In the next lemma we use the tangent cone introduced in Definition 1.2.25. Lemma 1.4.5. If f (p) is an interior point of TQ (p), then also f (x) is an interior point of TQ (x) for all x sufficiently close to p. Proof. If the assertion is false, there exists a sequence (xn )n converging to p such that f (xn ) is not an interior point of TQ (xn ). Then we may find a sequence (yn )n of elements yn ∈ NQ (xn ) satisfying ‖yn ‖ = 1,

⟨f (xn ), yn ⟩ ≥ 0

(1.4.11)

for all n. Without loss of generality we may assume that the sequence (yn )n converges to some point y, and y ∈ NQ (p), since the map NQ is closed, see Proposition 1.3.6. Passing in (1.4.11) to the limit for n → ∞ we get ⟨f (p), y⟩ ≥ 0, contradicting our assumption that f (p) is an interior point of TQ (p). Before we formulate our next lemma, we introduce some terminology. Definition 1.4.6. Given three vectors u, v, w ∈ ℝ2 , where both u and w are different from 0, we say that v lies between u and w if either v = 0 or, when rotating u counterclockwise towards the direction of w, at a certain moment the rotating vector has the same direction as v. For x ∈ 𝜕Q, let ν1 (x) and ν2 (x) be normalized vectors on the boundary of the normal cone NQ (x) with the property that every vector v ∈ NQ (x) lies between ν1 (x) and ν2 (x). We may have ν1 (x) = ν2 (x), but never ν1 (x) = −ν2 (x), because Q has nonempty interior. Given two vectors u, w ∈ ℝ2 \ {0}, we say that v lies between u and NQ (x) if v lies between u and ν1 (x), while v lies between NQ (x) and w if v lies between ν2 (x) and w. For example, every point x ∈ ℝ2+ = {(x1 , x2 ) : x1 ≥ 0, x2 ≥ 0} clearly lies between the basis vectors u = (1, 0) and w = (0, 1) (but not between w and u). In what follows, by C : ℝ2 → ℝ2 we denote the counterclockwise rotation by π/2 in the plane, i. e., C(x1 , x2 ) := (−x2 , x1 )

((x1 , x2 ) ∈ ℝ2 ).

(1.4.12)

1.4 Oscillations in DN-systems | 53

Figure 1.18

By means of the rotation map (1.4.12), we define now ν : Q → ℝ2 as follows. If x is an interior point of Q (see Figure 1.19 a)), we put ν(x) := C −1 f (x).

Figure 1.19

By construction, we have then

󵄩 󵄩2 ⟨f (x), Cν(x)⟩ = ⟨f (x), f (x)⟩ = 󵄩󵄩󵄩f (x)󵄩󵄩󵄩 > 0.

(1.4.13)

If x is a boundary point of Q (see Figure 1.19 b)), we put ν(x) := ν1 (x) + ν2 (x). It is not hard to see that ν(x) belongs to the normal cone NQ (x) and is the bisectrix of the angle of NQ (x), while −ν(x) belongs to the interior of the tangent cone TQ (x) and is the bisectrix of the angle of TQ (x). The next lemma describes a special property of ω-limit points in the boundary of Q. Lemma 1.4.7. If p ∈ 𝜕Q is an ω-limit point of a solution of (1.4.7) satisfying f (p) ≠ 0, then f (p) is not an interior point of TQ (p). Proof. Suppose that the claim is false, i. e., there is some ω-limit point p ∈ 𝜕Q of a solution such that f (p) is an interior point of TQ (p) different from zero. Since p ∈ 𝜕Q, the

54 | 1 DN-systems: theory vectors ν1 (p) and ν2 (p) described above are not zero, and both vectors ν(p) = ν1 (p) + ν2 (p) and −ν(p) = −ν1 (p) − ν2 (p) are not zero either. Moreover, all x ∈ TQ (p) satisfy ⟨x, ν(p)⟩ ≤ 0, because −ν(x) is the bisectrix of the angle of TQ (p) which does not exceed π. Consider the function V : Q × (ℝ2 \ {0}) → ℝ defined by36 V(x, z) := ⟨x − p,

z ⟩. ‖z‖

By Lemma 1.4.5, f (x) is an interior point of TQ (x) for x close to p, and so τx f (x) = f (x). Given ε > 0, let a := ⟨f (p), −ν(p)⟩ − ε.

Figure 1.20

From the continuity of f it follows then that in case a > 0 we find a δ > 0 such that for all x ∈ Q ∩ Bδ (p) and z ∈ Bδ (−ν(p)) we have ⟨f (x), z⟩ > a. Therefore, for every solution x = x(t) of (1.4.7) which lies in Q ∩ Bδ (p) we have d ̇ z⟩ = ⟨f (x(t)), z⟩ > a V(x(t), z) = ⟨x(t), dt for all z ∈ Bδ (−ν(p)). This means that Q∩Bδ (p) is a neighbourhood of p in Q with the following property: for every trajectory contained in this neighbourhood the velocity of the projection of its 36 Geometrically, this function characterizes the z-projection of the distance of x to the fixed point p.

1.4 Oscillations in DN-systems | 55

distance to p onto any vector in a neighbourhood of −ν(p) is bounded away from zero. Consequently, no trajectory of (1.4.7) may enter this neighbourhood from outside, and every trajectory which starts inside this neighbourhood must necessarily leave it. But this contradicts our assumption that p is an ω-limit point and proves the lemma. The next lemma gives a geometrical statement of the disposition of the vectors f (p), τp f (p), and Cν(p), where C denotes the rotation (1.4.12). Lemma 1.4.8. Let p be an ω-limit point of the system (1.4.7). Then the scalar products ⟨τp f (p), Cν(p)⟩ and ⟨f (p), Cν(p)⟩ have the same sign. Proof. For p ∈ Ωϕ , Condition NSP implies that f (p) ≠ 0, and for interior points p of Q we have ⟨f (p), Cν(p)⟩ = ‖f (p)‖2 > 0, see (1.4.13). For p ∈ 𝜕Q, Condition NSP implies that f (p) is not only different from zero, but cannot point to the direction of ν(p). In Lemma 1.4.5 we have shown that it cannot point to the direction of −ν(p) either, since −ν(p) is an interior point of TQ (p). So f (p) and ν(p) are not collinear, which implies that ⟨f (p), Cν(p)⟩ ≠ 0. Now we distinguish two cases for p ∈ 𝜕Q. Let first ⟨f (p), Cν(p)⟩ > 0.

(1.4.14)

Then both vectors f (p) and τp f (p) lie between ν2 (p) and −ν(p). Consequently, τp f (p) lies between the vectors ν(p) and −ν(p), but is different from both of them, and so ⟨τp f (p), Cν(p)⟩ > 0. Now let ⟨f (p), Cν(p)⟩ < 0.

(1.4.15)

Then, conversely, both vectors f (p) and τp f (p) lie between −ν(p) and ν1 (p), and so also between −ν(p) and ν(p), which implies ⟨τp f (p), Cν(p)⟩ < 0. In either case the assertion is true. In the sequel we use the shortcut ηp (x) := ⟨τx f (x), Cν(p)⟩.

(1.4.16)

So ηp (p) = ⟨τp f (p), Cν(p)⟩ is the scalar product we considered in Lemma 1.4.8. Moreover, if φ is a solution of (1.4.7) then ̇ ηp (φ(t)) = ⟨τφ(t) f (φ(t)), Cν(p)⟩ = ⟨φ(t), Cν(p)⟩.

(1.4.17)

We will use the relation (1.4.17) several times in the sequel. Before formulating our next lemma we introduce some notation and recall an important definition. Definition 1.4.9. Given a closed convex set Q ⊆ ℝ2 and two distinct points a, b ∈ Q, we write [[a, b]] := conv({a, a + b}) = {(1 − s)a + sb : 0 ≤ s ≤ 1} ⊂ Q

(1.4.18)

56 | 1 DN-systems: theory for the (relatively closed) segment joining a and b, and ((a, b)) := [[a, b]] \ {a, b} = {(1 − s)a + sb : 0 < s < 1} ⊂ Q

(1.4.19)

for its (relative) interior. The segment (1.4.18) is called transversal for the system (1.4.7) if the scalar product ⟨τx f (x), C(b − a)⟩ does not change sign for all x ∈ [[a, b]], where C denotes the rotation (1.4.12). Example 1.4.10. Let Q := B1 (0) be the closed unit ball in ℝ2 , and let f (x) := Cx be the (linear) rotation (1.4.12). Then every radial segment [[a, b]] ⊂ Q which does not contain (0, 0) as interior point is transversal37 for the system (1.4.7). 1.4.3 Proof of the main theorem In what follows we assume throughout that the Condition NSP stated above is fulfilled. Before proceeding to the proof of Theorem 1.4.3, we need some auxiliary lemmas. Lemma 1.4.11 (Existence of transversal segments). There exist numbers r > 0 and α > 0 such that ηp (x)ηp (p) > α

(1.4.20)

for all x ∈ Q ∩ Br (p). In particular, the set 󵄩 󵄩 L := {p + sν(p) ∈ Q : s ∈ ℝ, 󵄩󵄩󵄩sν(p)󵄩󵄩󵄩 ≤ r}

(1.4.21)

is a transversal segment for the system (1.4.7). Proof. Observe that the set (1.4.21) is in fact the intersection of Q with a set of the form (1.4.18) if we put a := p − r

ν(p) , ‖ν(p)‖

b := p + r

ν(p) . ‖ν(p)‖

If (1.4.20) is false we find a sequence (xk )k which converges to p and satisfies ηp (xk )ηp (p) ≤

1 k

(k = 1, 2, 3, . . .).

We claim that xk ∈ 𝜕Q for sufficiently large k. In fact, if xj were an interior point of Q for infinitely many indices j, for these indices we would obtain ⟨f (p), Cν(p)⟩ηp (p) = lim ⟨f (xj ), Cν(p)⟩ηp (p) = lim ηp (xj )ηp (p) ≤ 0, j→∞

j→∞

contradicting Lemma 1.4.8. So we may assume, without loss of generality, that xk ∈ 𝜕Q for all k, and so p ∈ 𝜕Q. 37 However, these are not the only transversal segments for (1.4.7).

1.4 Oscillations in DN-systems | 57

Suppose first that (1.4.14) holds, and so ηp (p) > 0, by Lemma 1.4.8. Then38 ηp (xk )ηp (p) = ⟨μk , ν(p)⟩ηp (p)
0 and δ > 0 with the following property: whenever ‖ψ(t1 ) − p‖ < δ and ψ is defined on [t1 − T, t1 + T], we can find t2 ∈ [t1 − T, t1 + T] such that 󵄩󵄩 󵄩 󵄩󵄩ψ(t2 ) − p󵄩󵄩󵄩 < ε.

ψ(t2 ) ∈ L,

(1.4.23)

Proof. From p ∈ Lo it follows that ‖p − p‖ < r. Fix ε1 ∈ (0, ε] such that Bε1 (p) ⊆ Br (p), and define 󵄩 󵄩 H := max {󵄩󵄩󵄩f (x)󵄩󵄩󵄩 : x ∈ Bε1 (p) ∩ Q},

T :=

ε1 . H

Choosing 0 < δ < ε1 , we claim that ‖ψ(t1 ) − p‖ < δ implies that ‖ψ(t) − p‖ < ε1 for all t satisfying |t − t1 | < T1 := In fact, for t as in (1.4.24) we have

ε1 − δ . H

(1.4.24)

t 󵄩󵄩 󵄩󵄩 󵄩󵄩 󵄩 󵄩󵄩 ̇ ds − p󵄩󵄩󵄩󵄩 󵄩󵄩ψ(t) − p󵄩󵄩󵄩 = 󵄩󵄩󵄩ψ(t1 ) + ∫ ψ(s) 󵄩󵄩 󵄩󵄩󵄩 󵄩󵄩 t1

󵄨󵄨 t 󵄨󵄨 󵄩 󵄨󵄨 󵄩 󵄩 󵄨󵄨 󵄩 ≤ 󵄩󵄩󵄩ψ(t) − p󵄩󵄩󵄩 + 󵄨󵄨󵄨∫󵄩󵄩󵄩τψ(s) f (ψ(s))󵄩󵄩󵄩 ds󵄨󵄨󵄨 < δ + H|t − t1 | ≤ δ + HT1 = ε1 . 󵄨󵄨 󵄨󵄨 󵄨t1 󵄨 We define now a function w : [t1 − T, t1 + T] → ℝ by and w(t) := ⟨ψ(t) − p, Cν(p)⟩

(|t − t1 | ≤ T)

and show that this function has a zero t2 ∈ [t1 − T1 , t1 + T1 ] ⊆ [t1 − T, t1 + T]; this together with (1.4.24) gives (1.4.23). Note that ‖ψ(t1 ) − p‖ < δ implies |w(t1 )ηp (p)| ≤ δ‖ν(p)‖ |ηp (p)|, where ηp (x) is given by (1.4.16). We distinguish two cases. 1st case: Let w(t1 )ηp (p) > 0. Then w(t1 − T1 )ηp (p) = w(t1 )ηp (p) + [w(t1 − T1 ) − w(t1 )]ηp (p) t1 −T1

󵄩 󵄩󵄨 󵄨 ̇ ≤ δ󵄩󵄩󵄩ν(p)󵄩󵄩󵄩 󵄨󵄨󵄨ηp (p)󵄨󵄨󵄨 + ∫ ⟨ψ(s), Cν(p)⟩ηp (p) ds t1

t1

󵄩 󵄩󵄨 󵄨 󵄩 󵄩󵄨 󵄨 = δ󵄩󵄩󵄩ν(p)󵄩󵄩󵄩 󵄨󵄨󵄨ηp (p)󵄨󵄨󵄨 − ∫ ηp (ψ(s))ηp (p) ds ≤ δ󵄩󵄩󵄩ν(p)󵄩󵄩󵄩 󵄨󵄨󵄨ηp (p)󵄨󵄨󵄨 − T1 α, t1 −T1

1.4 Oscillations in DN-systems | 59

where α is taken from (1.4.20). We conclude that 󵄨 󵄩󵄨 󵄩 w(t1 − T1 )ηp (p) ≤ δ󵄩󵄩󵄩ν(p)󵄩󵄩󵄩 󵄨󵄨󵄨ηp (p)󵄨󵄨󵄨 − T1 α ≤ 0 if we choose39 0 0 : φ(t) ∈ L} ≠ 0

(1.4.26)

Moreover, the set (1.4.26) must be discrete. To see this, observe that the function x 󳨃→ ⟨x − p, Cν(p)⟩ηp (p) is strictly increasing on the r-neighbourhood of p w. r. t. the trajectory of φ: in fact, from Lemma 1.4.11 we deduce that d ̇ ⟨φ(t) − p, Cν(p)⟩ηp (p) = ⟨φ(t), Cν(p)⟩ηp (p) = ηp (φ(t))ηp (p) > α > 0 dt for ‖φ(t) − p‖ < r. Moreover, this function vanishes on the transversal segment L. In particular, the unique accumulation point of the set (1.4.26) may be +∞, so the set is discrete. If we enumerate all elements of the set (1.4.26) in increasing order, we get a strictly increasing unbounded sequence (tn )n in ℝ+ such that φ(tn ) ∈ L and ℝ+ ∩ φ−1 (L) = {(tn , φ(tn )) : n = 1, 2, 3, . . .}. Here all points φ(tn ) are mutually different, because we have assumed that φ has no closed trajectory. We choose φ(t2 )−φ(t1 ) as a positive orientation on the transversal segment L and prove now by induction40 over n that φ(tn+1 ) > φ(tn ) for all n. 39 Observe that this restriction is compatible with the previous condition 0 < δ < ε1 for δ. 40 We may express this geometrically by saying that the sequence (φ(tn ))n is strictly increasing on the trajectory with respect to the chosen orientation.

60 | 1 DN-systems: theory The inequality φ(t1 ) < φ(t2 ) holds by definition of the order.41 Suppose that φ(ti ) < φ(ti+1 ) for i = 2, 3, . . . , m; we show that then also φ(tm+1 ) < φ(tm+2 ). If this is not true, we find some k ≤ m such that φ(tk ) < φ(tm+2 ) < φ(tk+1 ). Let Γ be the curve which consists of the trajectory of φ(t) for tk ≤ t ≤ tk+1 and the straight line joining φ(tk+1 ) and φ(tk ). Then Γ is a Jordan curve, i. e., is closed and continuous without self-intersections, so by Jordan’s theorem [16] the complement ℝ2 \ Γ consists of a bounded (interior) connected component Δ1 and an unbounded (exterior) connected component Δ2 with 𝜕Δ1 = 𝜕Δ2 = Γ.

Figure 1.22

Let ξ ∈ ((φ(tk ), φ(tk+1 ))), with ((a, b)) given by (1.4.19). We show that, for ε > 0 sufficiently small, the set M+ (ε) := {x ∈ ℝ2 : ⟨x − p, Cν(p)⟩ηp (p) > 0, dist(x, [[φ(tk+1 ), ξ ]]) < ε} is entirely contained in one of the components Δ1 or Δ2 . We claim that there exists a ε > 0 with the property that M+ (ε) ∩ φ([tk , tk+1 ]) = 0. In fact, otherwise we may find 41 In fact, in Figure 1.22 the arrow from φ(t1 ) to φ(t2 ) points at the forward direction of the trajectory.

1.4 Oscillations in DN-systems | 61

a sequence (sj )j in [tk , tk+1 ] such that ⟨φ(sj ) − p, Cν(p)⟩ηp (p) > 0, and the sequence (φ(sj ))j converges to some point in [[φ(tk+1 ), ξ ]]. It is not hard to see that in this case we get 0 > ⟨φ(tk+1 ) − p, Cν(p)⟩ηp (p) − ⟨φ(sj ) − p, Cν(p)⟩ηp (p) tk+1

= ηp (p) ∫ ηp (φ(σ)) dσ > α(tk+1 − sj ) sj

for all sufficiently large indices j, where α is the same as in (1.4.20). But this is a contradiction, because α > 0 and tk+1 > sj . So our assertion is proved for the set M+ (ε). Similarly, we may define another set M− (ε) by M− (ε) := {x ∈ ℝ2 : ⟨x − p, Cν(p)⟩ηp (p) < 0, dist(x, [[φ(tk+1 ), ξ ]]) < ε} and prove that also this set is entirely contained in one of the components42 Δ1 or Δ2 for sufficiently small ε > 0. By Lemma 1.4.11, the trajectory φ(t) intersects the transversal segment L in the direction from M− (ε) to M+ (ε). Now we distinguish two cases.43 1st case: Let M+ (ε) ⊆ Δ2 , hence M− (ε) ⊆ Δ1 , see Figure 1.22. Then we find a δ > 0 such that φ((tk+1 , tk+1 + δ)) ⊆ Δ2 ,

φ((tk+2 − δ, tk+2 )) ⊆ Δ1 .

Therefore there exists a point t∗ ∈ (tk+1 + δ, tk+2 − δ) such that φ(t∗ ) ∈ Γ, contradicting our assumptions.44 2nd case: Let M+ (ε) ⊆ Δ1 , hence M− (ε) ⊆ Δ2 . Then a similar reasoning leads to a contradiction, interchanging the role of Δ1 and Δ2 . So we have proved the monotonicity of the sequence (φ(tn ))n on the trajectory in the sense established above. Lemma 1.4.12 shows that the point p and all points in Ωφ ∩ Lo are accumulation points of the sequence (φ(tn ))n . But we have just proved that this sequence is strictly increasing, so it may have only one accumulation point. We conclude that only p can be this point, and the proof is complete. 42 Of course, the sets M+ (ε) and M− (ε) belong to different components. 43 In the first case the set Δ1 = Δ1 ∪ Γ is called expanding snail in the paper [79], in the second case contracting snail. 44 More precisely, we get a contradiction to our induction hypothesis if φ(t∗ ) ∈ [[φ(tk ), φ(tk+1 ]], and a contradiction to our assumption of nonexistence of closed trajectories if φ(t∗ ) = φ(t ∗ ) for some t ∗ ∈ (tk , tk+1 ).

62 | 1 DN-systems: theory Now we are in a position to prove the announced Theorem 1.4.3. Proof of Theorem 1.4.3. From the preceding two lemmas it follows that the trajectory of φ(t) ⊆ Ωφ intersects the segment (1.4.22) more than once in p. Since p ∈ Ωφ was arbitrarily chosen, we see that {(t, φ(t)) : t ≥ 0} = Ωφ is a closed trajectory of the system (1.4.7), and so we are done. Now it’s time to take a deep breath and to illustrate Theorem 1.4.3 and all preparative lemmas by means of some examples. We start with some introductory remarks in the special case when Q := {(x1 , x2 ) ∈ ℝ2 : x12 + x22 ≤ 1} is the closed unit disc in the plane. Since TQ (x) = ℝ2 for interior points x of Q, we consider only points x ∈ 𝜕Q = {(x1 , x2 ) ∈ ℝ2 : x12 + x22 = 1} on the boundary of Q. For such points the normal cone NQ (x) = cone({x}) = {λx : λ ≥ 0} is a ray, while the tangent cone TQ (x) = {z ∈ ℝ2 : ⟨z, x⟩ ≤ 0} is a halfspace. So the right hand side of the system (1.4.7) is here τx f (x) = {

f (x)

if ⟨f (x), x⟩ ≤ 0,

f (x) − ⟨f (x), x⟩x

if ⟨f (x), x⟩ > 0.

Let us calculate the number ηp (x) from (1.4.16). Suppose first that ‖p‖ < 1. By definition of ν(p) we have then Cν(p) = CC −1 f (p), hence ηp (x) = ⟨τx f (x), f (p)⟩

(x ∈ Q).

Suppose now that ‖p‖ = 1 and, without loss of generality, p ∈ ℝ2+ . Then we may take ν1 (p) = (1, 0) and ν2 (p) = (0, 1) and get, by definition of ν(p) ηp (x) = ⟨τx f (x), (−1, 1)⟩

(x ∈ Q).

Now we choose three special nonlinearities f in the right-hand side of (1.4.7) to illustrate Theorem 1.4.3. Example 1.4.14. Let f (x) :=

1 1 1 (1 − ‖x‖)x + Cx = ( (1 − √x12 + x22 )x1 − x2 , (1 − √x12 + x22 )x2 + x1 ). 10 10 10

1.4 Oscillations in DN-systems | 63

Then τx f (x) = f (x) for every point x ∈ Q. By means of a computer programme one may sketch the graph of the solution φ starting from the initial point (1/10, 1/10) which is given in Figure 1.23 below. The trajectory of the solution φ is not closed, its ω-limit set Ωφ is the unit circumference 𝜕Q. It is not hard to see that f (p) ∈ ̸ NQ (p) = {λp : λ ≥ 0} for p ∈ Ωφ . Moreover, φ = Ωφ = Ωφ is a solution with closed trajectory. In Figure 1.23 we have sketched for some ω-limit point the transversal line L, as well as well as the intersection points φ(tk ) of L with the trajectory.

Figure 1.23

From (1.4.16) and our preceding discussion we know that, for p ∈ ℝ2+ , ‖p‖ = 1, ν1 (p) = (1, 0) and ν2 (p) = (0, 1) we have ηp (x) = ⟨τx f (x), Cν(p)⟩ = ⟨f (x), (−1, 1)⟩ = x1 + x2 +

1 (1 − ‖x‖)(x2 − x1 ) 10

in this example. Observe that only the function f in the construction of the closed trajectory, but not the explicit form of the boundary 𝜕Q. So the same qualitative result is true if we replace Q by some closed disc with radius r > 1. On the other hand, if we consider instead the unbounded set Q := ℝ2 , our computer programme gives the two solutions given in Figure 1.24 which correspond to the initial points (1/10, 1/10) and (2, 0):

64 | 1 DN-systems: theory

Figure 1.24

Example 1.4.15. Now let f (x) :=

1 1 1 x + Cx = ( x1 − x2 , x2 + x1 ). 10 10 10

In this case the equality τx f (x) = f (x) holds only for interior points of Q, while for points x ∈ 𝜕Q we have τx f (x) = Cx.

Figure 1.25

1.4 Oscillations in DN-systems | 65

Here we may calculate the solution with initial value (1/10, 1/10) explicitly. This solution is φ(t) =

et/10 (cos t − sin t, cos t + sin t), 10

as soon as it does not touch the unit circumference (i. e., the boundary of Q), which is a cycle of the corresponding system. So in this example not the function, but the boundary of Q is responsible for the occurrence of a cycle. Example 1.4.16. One may also study a mixed problem, where both factors give raise to a cycle. Consider again Example 1.4.14, but now on the closed half-plane Q := {(x1 , x2 ) ∈ ℝ2 : x1 ≥ −4/5}.

Figure 1.26

Here we get the cycle which is sketched in the following Figure 1.26 darker than the other curves. We may summarize our discussion as follows. In all three Examples 1.4.14, 1.4.15, and 1.4.16, the hypotheses of Theorem 1.4.3 are satisfied. In Example 1.4.14 non-closed trajectories of a bounded solution occur, but in the other two examples non-closed trajectories do not exist.

66 | 1 DN-systems: theory 1.4.4 Conditions for orbital stability In this subsection we establish some sufficient conditions for the existence, uniqueness, and so-called orbital stability45 of closed trajectories of the system (1.4.7). Throughout this subsection, we suppose that Q is a (strict) subset of ℝ2 which is closed and convex and contains 0 as interior point, and that f : Q → ℝ2 is Lipschitz continuous. Moreover, we assume that f (x) ∈ ̸ NQ (x)

(x ∈ Q \ {0})

(1.4.27)

and ⟨Bx, f (x)⟩ ≥ μ(‖x‖)

(x ∈ Q),

(1.4.28)

where B is some positive definite symmetric matrix and μ : ℝ+ → ℝ+ some function satisfying μ(s) > 0 for s > 0. Finally, let46 󵄨󵄨 󵄨 2 󵄨󵄨⟨f (x), Cx⟩󵄨󵄨󵄨 ≥ ν‖x‖

(x ∈ Q)

(1.4.29)

for some constant ν > 0, where C denotes the rotation (1.4.12). We point out that without loss of generality we may drop the absolute value in (1.4.29) and assume that ⟨f (x), Cx⟩ ≥ ν‖x‖2

(x ∈ Q).

(1.4.30)

Indeed, this is a consequence of the following Lemma 1.4.17. There exists a surjective isometry D : Q → D(Q) ⊆ ℝ2 such that ẏ = τy g(y)

(y ∈ Q)

(1.4.31)

and ⟨g(y), Cy⟩ ≥ ν‖y‖2

(y ∈ Q),

(1.4.32)

where g := D ∘ f ∘ D−1 . In particular, we have ⟨g(y), Cy⟩ > 0 for all y ∈ Q \ {0}. 45 The precise definition may be found in Definition 1.4.18 below. 46 Note that the condition (1.4.29) and the Lipschitz continuity of f imply that f (0) = 0.

(1.4.33)

1.4 Oscillations in DN-systems | 67

Proof. Let D : Q → D(Q) ⊆ ℝ2 be the linear reflection (y1 , y2 ) = D(x1 , x2 ) := (x1 , −x2 )

(1.4.34)

along the x1 -axis which satisfies D−1 = D and transforms the system (1.4.7) into the system (1.4.31). It is clear that (1.4.31) shares all properties with the original system,47 but has the additional property (1.4.32). This follows from the equality DτQ (x)f (x) = τD(Q) (Dx)Df (x), where τx denotes the projection onto the tangent cone TQ (x), and τD(Q) denotes the projection onto the tangent cone TD(Q) (Dx), see Definition 1.3.1. Furthermore, this implies in turn that the systems (1.4.7) and (1.4.31) are equivalent (in the sense that D establishes a 1-1 correspondence between their solutions in Q respectively D(Q)). Moreover, (1.4.29) is replaced for the new system by the estimate48 ⟨Df (Dy), Cy⟩ = ⟨f (Dy), DCy⟩ = −⟨f (Dy), CDy⟩ ≥ ν‖y‖2

(y ∈ D(Q))

which proves the statement. Of course, (1.4.33) is a direct consequence of (1.4.32). Condition (1.4.28) means, in geometrical terms, that the quadratic form x 󳨃→ ⟨Bx, x⟩ is strictly increasing if x = x(t) moves along a trajectory without touching the boundary. This follows from the estimate d 󵄩 󵄩 ̇ ⟨Bx(t), x(t)⟩ = 2⟨Bx(t), x(t)⟩ = 2⟨Bx(t), f (x(t))⟩ > μ(󵄩󵄩󵄩x(t)󵄩󵄩󵄩) ≥ 0. dt A very simple example of a function which satisfies all conditions given above is f (x) := Ax, where A is a matrix which has a complex eigenvalue with positive real part. As we have seen in Subsection 1.4.2, under the above hypotheses the system (1.4.7), subject to the initial condition x(0) = x0 ∈ Q, has a unique solution which is defined on the whole semi-axis ℝ+ . Our goal is here to study the semigroup {g t : t ≥ 0}, where g t x0 denotes the value of this unique solution at time t; in particular, g 0 x0 = x0 . We give now the precise definition of (asymptotically) orbital stability of a trajectory, and then prove some lemmas which lead to our main result (see Theorem 1.4.24): under the hypotheses on Q and f made above, the system (1.4.7) has a unique closed trajectory which is orbitally stable and attracts all solutions for large time. Definition 1.4.18. A closed trajectory ℓ belonging to a solution φ of (1.4.7) is called orbitally stable if for each ε > 0 there exists some δ > 0 such that ‖x0 −φ(t1 )‖ < δ implies 47 For example, it preserves closedness of trajectories, orbital stability etc. 48 Here we use the fact that the reflection D in (1.4.34) is, as the rotation C in (1.4.12), a surjective isometry of the plane, and that these two operations satisfy C ∘ C = −I, D ∘ D = I, and D ∘ C = −C ∘ D.

68 | 1 DN-systems: theory that ‖g t x0 −φ(t1 +t)‖ < ε for all t > 0. Moreover, ℓ is called asymptotically orbitally stable if there exists some δ > 0 such that ‖x0 − φ(t1 )‖ < δ implies that ‖g t x0 − φ(t1 + t)‖ → 0 as t → ∞. Finally, ℓ is called strongly orbitally stable49 if for any x0 ∈ Q there exists s ≥ 0 such that g s+t x0 ∈ ℓ for t ≥ 0. If the first property given in Definition 1.4.18 are fulfilled only in50 Δ1 [resp. Δ2 ], the trajectory is called orbitally stable from inside [resp. from outside]. Definition 1.4.18 is similar, but not equivalent, to the usual stability properties in Lyapunov’s sense: if a solution is Lyapunov-stable, then it is also orbitally stable, and similarly for asymptotic stability, but not vice versa. Our first geometric lemma illustrates the position of the vector f (x) for x ∈ 𝜕Q. To this end, we recall the terminology of Definition 1.4.6. Lemma 1.4.19. Suppose that the hypotheses on Q and f stated above are fulfilled, and x is a boundary point of Q. Then f (x) lies between NQ (x) and −x, as well as between NQ (x) and τx f (x). Proof. First we prove the first assertion which means, in the terminology of Definition 1.4.6, that f (x) lies between ν2 (x) and −x. Consider a connected component51 Γ of 𝜕Q, and on Γ a point x which is closest to 0, i. e., ‖x‖ = min {‖z‖ : z ∈ Γ}. In such a point the assertion holds, because in this case NQ (x) coincides with the ray cone({x}). Moreover, from (1.4.30) it follows that the vector f (x) lies, for each x ∈ Q \ {0}, between x and −x as claimed. We denote the set of all x ∈ 𝜕Q with the property that f (x) lies between NQ (x) and −x by M1 . By our previous reasoning we know that M1 ≠ 0. We show that both sets M1 and M2 := Γ \ M1 are open;52 since Γ is connected, this implies that M2 = 0, so every point x ∈ 𝜕Q has the property that f (x) lies between NQ (x) and −x, and so we are done. For each x ∈ 𝜕Q we consider the following three mutually disjoint sets: let U1 be a neighbourhood of −x, U2 a neighbourhood of f (x), and U3 an α-angular neighbourhood53 of the normal cone NQ (x). The existence of these neighbourhoods follows from 49 Geometrically, this means that all solutions of (1.4.7) eventually “flow towards” ℓ after a sufficiently large time. 50 Here Δ1 and Δ2 denote, as in the proof of Lemma 1.4.13, the interior resp. exterior component of the trajectory. 51 In most cases Γ is connected, so Γ = 𝜕Q; however, if Q is a strip in the plane, then 𝜕Q consists of two parallel lines, and so Γ ≠ 𝜕Q. 52 Here the term “open” of course refers to the relative topology on Γ. 53 By this we mean that y ∈ U3 if either the angle between y and its metric projection P(y, NQ (x)) onto NQ (x) is less than α, or y = 0.

1.4 Oscillations in DN-systems | 69

(1.4.27) and (1.4.30). Clearly, all elements of each of these neighbourhoods lie in the same position with respect to −x, f (x), and NQ (x). Since the function f is continuous and the multivalued map NQ : x 󳨃→ NQ (x) is closed (see Proposition 1.3.6), we find a neighbourhood U of x such that −x󸀠 ∈ U1 ,

f (x 󸀠 ) ∈ U2 ,

NQ (x 󸀠 ) ⊆ U3

for all x󸀠 ∈ U. This shows that both sets M1 and M2 are open.

Figure 1.27

Since Γ is connected, we conclude that M2 = 0, so f (x) lies between NQ (x) and −x for every x ∈ 𝜕Q. It remains to show that f (x) also lies between NQ (x) and τx f (x). Observe that −x = 0 − x ∈ TQ (x), because 0 ∈ Q. Consequently, one of the edges of TQ (x) is generated by a unit vector u which lies, together with f (x), between NQ (x) and −x. If f (x) ∈ TQ (x), we have τx f (x) = f (x), and the assertion is true. If f (x) ∈ ̸ TQ (x), then f (x) strictly lies between NQ (x) and u. However, since u is orthogonal to NQ (x), the vectors f (x) and u form an acute angle, while the vectors τx f (x) and u are collinear. On the other hand, the angle between the second edge of TQ (x) and f (x) is not acute, because the inner angle of TQ (x) cannot be larger than π, while the outer angle of TQ (x) cannot be smaller than π for x ∈ 𝜕Q. So the proof is complete. In the next lemma we give lower estimates for the norm of the velocity vector τx f (x) and for the angle between NQ (x) and −x for x belonging to a compact portion of the boundary of Q.

70 | 1 DN-systems: theory Lemma 1.4.20. Suppose that the hypotheses on Q and f stated above are fulfilled, and let K ⊂ ℝ2 be compact. Then there exists a = a(K) > 0 such that 󵄩 󵄩 inf {󵄩󵄩󵄩τx f (x)󵄩󵄩󵄩 : x ∈ 𝜕Q ∩ K} ≥ a.

(1.4.35)

Moreover, there exists δ = δ(K) > 0 such that inf {⟨x, n⟩ : x ∈ 𝜕Q ∩ K, n ∈ NQ (x), ‖n‖ = 1} ≥ δ.

(1.4.36)

Proof. If (1.4.35) is false we find a sequence (xn )n in 𝜕Q ∩ K such that ‖τxn f (xn )‖ → 0 as n → ∞. Here we assume without loss of generality that (xn )n converges to some point x0 ∈ 𝜕Q ∩ K, and so (f (xn ))n converges to f (x0 ). Observe that f (xn ) − τxn f (xn ) ∈ NQ (xn ). Since the multivalued map NQ is maximal monotone (see Example 1.3.12), and hence has a closed graph, we see that f (x0 ) ∈ NQ (x0 ). But this contradicts our hypothesis (1.4.27), and so the first assertion is proved. Now we prove (1.4.36). Since 0 is an interior point of Q, by assumption, we have ⟨0 − x, n⟩ = −⟨x, n⟩ < 0 for all x ∈ 𝜕Q and n ∈ NQ (x). The compactness of K implies that δ := inf {⟨x, n⟩ : x ∈ 𝜕Q ∩ K, n ∈ NQ (x)} ≥ 0; we claim that δ > 0. Indeed, in case δ = 0 we find a sequence (xk )k in 𝜕Q ∩ K and corresponding points nk ∈ NQ (xk ) satisfying ‖nk ‖ = 1,

⟨xk , nk ⟩ ≥ 0,

lim ⟨xk , nk ⟩ = 0.

k→∞

Again, we may assume without loss of generality that (xk )k converges to some point x ∈ 𝜕Q ∩ K, and (nk )k converges to some point n ∈ NQ (x) as k → ∞. This yields 0 = lim ⟨xk , nk ⟩ = ⟨x, n⟩, k→∞

contradicting our statement at the beginning of the proof. In the following definition we recall the notion of rotation angle and angular velocity of a vector function. To this end, we associate to each non-zero vector x = (x1 , x2 ) the set Φ(x) of all rotation angles of the positive x1 -axis towards x. For example, Φ((1, 0)) = {2kπ : k ∈ ℤ}. Every angle ψ ∈ Φ(x) satisfies the system of equations cos ψ =

x1 √x12 + x22

,

sin ψ =

x2 √x12 + x22

.

(1.4.37)

For example, for (x1 , x2 ) = (0, 1) we can take ψ = π/2, and so Φ((0, 1)) = { π2 + 2kπ : k ∈ ℤ}. Fix a point x = x0 ∈ ℝ2 \ {0} with corresponding angle ψ = ψ0 according to (1.4.37). Then the system (1.4.37) may be solved, by the implicit function theorem, in a

1.4 Oscillations in DN-systems | 71

neighbourhood U of x0 with respect to ψ, which defines an explicit function ψ = ψ(x) on U. It is not hard to see that ∇ψ(x) = (

x 𝜕 x x x Cx 𝜕 arctan 2 , arctan 2 ) = (− 2 2 2 , 2 1 2 ) = 𝜕x1 x1 𝜕x2 x1 ‖x‖2 x1 + x2 x1 + x2

for x ∈ U, where C denotes the rotation operator (1.4.12). Since this gradient does not depend on x0 or ψ0 , we denote it by ∇Φ(x). Let I be an interval and φ : I → ℝ2 \ {0} an absolutely continuous function. Then the function w(t) :=

d ̇ Φ(φ(t)) = ⟨∇Φ(φ(t)), φ(t)⟩ dt

(1.4.38)

is defined almost everywhere on I. Definition 1.4.21. The function (1.4.38) is called the angular velocity of φ, while the function t

ΔΦ(φ; t0 , t) := ∫ w(s) ds,

(1.4.39)

t0

with t0 ∈ I fixed, is called the rotation angle54 of φ at time t. Observe that, if φ is a solution of the planar system (1.4.7), then w(t) = ⟨∇Φ(φ(t)), τφ(t) f (φ(t))⟩

(1.4.40)

is the angular velocity of this solution. By abuse of notation, we will write ̇ ̇ Φ(x) = ⟨∇Φ(x), x⟩

(1.4.41)

̇ in the sequel, so w(t) = Φ(φ(t)) if φ is a solution of (1.4.7). In the next lemma we give a lower estimate for the scalar product (1.4.41). Lemma 1.4.22. For any compact set K ⊂ ℝ2 there exists a ν = ν(K) > 0 such that ⟨Cx, τx f (x)⟩ ̇ Φ(x) = ≥ν ‖x‖2

(1.4.42)

̇ for all x ∈ (Q ∩ K) \ {0}, where Φ(x) is defined by (1.4.41), and Cx denotes the rotation (1.4.12). 54 Strictly speaking, the rotation angle depends on the initial time t0 , but this is not important in our discussion.

72 | 1 DN-systems: theory Proof. To find lower estimates for the fraction in (1.4.42) we distinguish two cases. 1st case: Let f (x) = τx f (x). Then we have ⟨Cx, τx f (x)⟩ = ⟨Cx, f (x)⟩ ≥ ν‖x‖2 , by (1.4.30), so (1.4.42) is true. 2nd case: Let f (x) ≠ τx f (x). Then the same reasoning as in Lemma 1.4.11 shows that C −1 τx f (x) ∈ NQ (x), since the vector f (x) lies between the set NQ (x) and the vector τx f (x) (in the sense of Definition 1.4.6). From Lemma 1.4.20 it follows that ⟨Cx, τx f (x)⟩ ⟨x, C −1 τx f (x)⟩ ‖τ f (x)‖ a = ≥ δ x 2 ≥ δ 2 =: ν, ‖x‖2 ‖x‖2 ‖x‖ κ where a is taken from (1.4.35), and κ := sup {‖x‖ : x ∈ K}. In either case we have proved (1.4.42). The next lemma shows that every solution which starts from some initial value x0 ≠ 0 must touch the boundary after some time. Lemma 1.4.23. Suppose that the hypotheses (1.4.27), (1.4.28), and (1.4.30) are satisfied. Then for every x0 ∈ Q \ {0} there exists T ≥ 0 such that g T x0 ∈ 𝜕Q. Proof. If the assertion is false we may choose some point x0 ∈ Q \ {0} with the property that the corresponding trajectory {g t x0 : t ≥ 0} with g 0 x0 = x0 is entirely contained in the interior of Q. From (1.4.28) it follows then that the function b(t) := ⟨Bg t x0 , g t x0 ⟩ (t ≥ 0) is strictly increasing in a neighbourhood of any point t satisfying b(t) > 0 (i. e., g t x0 ≠ 0). Since b(0) = ⟨Bx0 , x0 ⟩ > 0, this shows that b(t) ≥ b(0) for all t ≥ 0, which means that 󵄩 󵄩2 ⟨Bx0 , x0 ⟩ ≤ ⟨Bg t x0 , g t x0 ⟩ ≤ ‖B‖ 󵄩󵄩󵄩g t x0 󵄩󵄩󵄩 . We claim that this implies that 󵄩󵄩 t 󵄩󵄩 󵄩󵄩g x0 󵄩󵄩 → ∞

(t → ∞).

(1.4.43)

Indeed, suppose that ‖g tn x0 ‖ ≤ R for some R > 0 and some sequence (tn )n with tn → ∞ as n → ∞. Then tn

󵄩 󵄩2 ̇ ds ≥ b(0) + μ t , ‖B‖R ≥ ‖B‖ 󵄩󵄩󵄩g tn x0 󵄩󵄩󵄩 ≥ b(tn ) = b(0) + ∫ b(s) 0 n 2

0

1.4 Oscillations in DN-systems | 73

where μ0 := min {μ(r) : ⟨Bx0 , x0 ⟩ ≤ ‖B‖r 2 ≤ ‖B‖R2 } > 0, with μ(r) as in (1.4.28). This contradicts the unboundedness of the sequence (tn )n , and so we have proved (1.4.43). Choose x ∈ ℝ2 \ Q and ε ∈ (0, 1) such that55 εx ∈ Q. For sufficiently large t we have ‖g t εx‖ > ‖x‖, by (1.4.43). Moreover, by (1.4.30) we may choose t in such a way that g t εx = ηx for some η > 1. Summarizing we have then εx ∈ Q,

ηx ∈ Q,

x ∈ ̸ Q,

contradicting the convexity of Q. Our results show that our assumptions imply geometrically that the directional field corresponding to the function f in the plane forms some “twister” whose vortices wind counterclockwise around 0.

Figure 1.28

1.4.5 Orbitally stable trajectories Since the set Q has a nonempty boundary, every (nonzero) trajectory snuggles up to the boundary of Q by the twister, as time increases. In particular, the trajectory cannot “escape to infinity”, but is attracted by a periodic solution with closed trajectory. This is the contents of the following important Theorem 1.4.24. Here we need the notion of strong orbitally stability introduced in Definition 1.4.18. 55 This is possible, since we suppose that 0 is an interior point of Q.

74 | 1 DN-systems: theory Theorem 1.4.24 (Existence and uniqueness of orbitally stable trajectories). Suppose that the hypotheses (1.4.27)–(1.4.30) are fulfilled. Then the system (1.4.7) has a unique strongly orbitally stable trajectory. The proof of Theorem 1.4.24 is again based on a series of auxiliary lemmas. More precisely, we will proceed in several steps by proving the following facts: – All trajectories of the system meet after some finite time. – There exists a unique solution in case of a bounded set Q. – There exists a closed trajectory which attracts other trajectories. – Solutions continuously depend on initial data. – Closed trajectories are orbitally stable. – All trajectories admit a priori estimates. – There exists a unique solution also in case of an unbounded set Q. To start this programme, let us prove first that, if we start from two initial values x1 and x2 , the corresponding trajectories g t x1 and g t x2 meet somewhere after a finite time. Lemma 1.4.25. For any initial values x1 and x2 there exist t1 ≥ 0 and t2 ≥ 0 such that g t1 x1 = g t2 x2 . Proof. If the assertion is false we may find points x1 and x2 such that g t1 x1 ≠ g t2 x2 for all t1 ≥ 0 and t2 ≥ 0. Then every ray which starts from 0 meets one of these two trajectories earlier56 than the other. Moreover, this phenomenon holds for all such rays, by our assumptions. But this implies that one of the trajectories is entirely contained in the interior of Q, contradicting Lemma 1.4.23. The next lemma shows the existence of a special neighbourhood of a closed trajectory which we will need in the sequel. Lemma 1.4.26. Let ℓ := {g t x0 : t ≥ 0} be a closed trajectory of system (1.4.7). Then there exist δ0 > 0 and T0 > 0 such that g T0 x0 ∈ ℓ for all x0 ∈ Q which satisfy dist(x0 , ℓ) < δ0 . Proof. We denote the bounded connected component of ℝ2 \ ℓ by S1 , the unbounded connected component by S2 , and construct the pair of numbers (δ0 , T0 ) separately for S1 and S2 and call them (δ1 , T1 ) and (δ2 , T2 ), respectively. By Lemma 1.4.23 we may find t1 > 0 such that x1 := g t1 x 0 ∈ ℓ ∩ 𝜕Q; moreover, we fix the ray r := {λx1 : λ ≥ 0} and choose x0 ∈ Q ∩ S2 . Lemma 1.4.22 implies that the vector g t x0 turns around zero, when the time grows by 2π/ν, at least by an angle of 2π. Consequently, g t x0 intersects the ray r at the point x1 . This shows that we may choose δ2 > 0 arbitrarily and T2 := 2π/ν. 56 By “earlier” we mean here that the intersection point of the ray with one trajectory is closer to zero than the intersection point with the other trajectory.

1.4 Oscillations in DN-systems | 75

Now we construct the number δ1 > 0 for S1 as follows. Choose a vector b in the ray r in such a way that the ellipse {x ∈ ℝ2 : ⟨Bx, x⟩ = ⟨Bb, b⟩} entirely belongs to the component S1 . Then this ellipse has a positive distance to the trajectory ℓ, and we choose δ1 > 0 strictly less than this distance. By Lemma 1.4.25, we find t1 such that g t1 b ∈ ℓ. Fix x0 ∈ S1 with dist(x0 , ℓ) < δ1 . Then the curve g t x0 either intersects the segment [b, x1 ] or meets 𝜕Q, and hence enters ℓ, after a time t ≤ 2π/ν0 . So in this case we may choose T1 := t1 + 2π/ν0 . The next auxiliary lemma of this section is concerned with the continuous dependence of closed trajectories on the data. Lemma 1.4.27 (Continuous dependence). Let x0 , x0 ∈ Q. Then the estimate 󵄩󵄩 t t 󵄩 Lt 󵄩󵄩g x0 − g x0 󵄩󵄩󵄩 ≤ e ‖x0 − x0 ‖

(t ≥ 0)

(1.4.44)

holds. In particular, the estimate 󵄩󵄩 t 󵄩󵄩 Lt 󵄩󵄩g x0 󵄩󵄩 ≤ e ‖x0 ‖

(t ≥ 0)

(1.4.45)

holds for any x0 ∈ Q. Proof. Let g t x0 =: x and g t x0 =: x. Using the notation of Definition 1.3.1 we get then57 1 d ‖x − x‖2 = ⟨τx f (x) − τx f (x), x − x⟩ 2 dt = ⟨f (x) − f (x), x − x⟩ − ⟨νx f (x) − νx f (x), x − x⟩ ≤ L‖x − x‖2 . By well-known arguments, this estimate implies (1.4.44). The estimate (1.4.45) is of course the special case x0 = 0. Consider again the closed trajectory ℓ from Lemma 1.4.26. By definition, the orbital stability of this trajectory means that for each ε > 0 we can find a δ > 0 such that, given any x0 ∈ Q, from dist(x0 , ℓ) < δ it follows that dist(g t x0 , ℓ) < ε for all t ≥ t0 . Fix ε > 0 and choose a corresponding δ > 0 such that δ ≤ δ0 and eLT0 δ < ε, where δ0 and T0 are as in Lemma 1.4.26. Let dist(x0 , ℓ) < δ, i. e., ‖x0 − g t1 x 0 ‖ < δ for some t1 > 0. From the continuous dependence of the solutions of (1.4.7) from initial values it follows then that 󵄩 󵄩 󵄩 󵄩 dist(g t x0 , ℓ) ≤ 󵄩󵄩󵄩g t x0 − g t1 +t x0 󵄩󵄩󵄩 ≤ eLt 󵄩󵄩󵄩x0 − g t1 x 0 󵄩󵄩󵄩 < eLt δ

(t > 0).

Therefore our choice of δ > 0 implies that dist(g t x0 , ℓ) < ε for t ≤ T0 , and dist(g t x0 , ℓ) = 0 for t > T0 , by the confluence of the trajectories for large times. This proves the orbital stability of the closed trajectory. 57 Here we use the monotonicity of the multivalued map NQ , see Example 1.3.12, and the Lipschitz continuity of the function f .

76 | 1 DN-systems: theory Now we define three auxiliary sets Q1 , Q2 , and Q3 in the following way. We set Q1 := {x ∈ Q : Cx ∈ TQ (x)},

Q2 := Q \ Q1 ,

and by Q3 we denote the set of all points x ∈ 𝜕Q of minimal norm. Recall that the ̇ shortcut Φ(x) was defined in (1.4.41). Lemma 1.4.28. With the above notation, the following holds. ̇ (a) For every x ∈ Q1 \ {0} we have Φ(x) ≥ ν, where ν is as in (1.4.30). (b) The set Q2 is contained in 𝜕Q. (c) The set Q3 is nonempty and bounded. Proof. From (1.3.2), (1.4.30) and the definition of Q1 it follows that ⟨Cx, τx f (x)⟩ ⟨Cx, f (x)⟩ ⟨Cx, νx f (x)⟩ ̇ Φ(x) = = − ≥ ν, ‖x‖2 ‖x‖2 ‖x‖2 for x = x(t) = g t x0 which proves (a). It is clear that the interior of Q is included in Q1 , since TQ (x) = ℝ2 for every interior point x of Q. Therefore Q2 is either empty, or contains only boundary points of Q, i. e. (b) is true. To prove (c) fix x0 ∈ 𝜕Q. The map x 󳨃→ ‖x‖ has a global minimum on the set {x ∈ 𝜕Q : ‖x‖ ≤ ‖x0 ‖} which is then a local minimum of this map on 𝜕Q. So Q3 ≠ 0. Suppose that Q3 is unbounded. Then we find an infinite sequence (xn )n in Q3 satisfying ‖xn+1 ‖ ≥ 2‖xn ‖ for all n ∈ ℕ. The straight line {x ∈ ℝ2 : ⟨x, xn ⟩ = ‖xn ‖2 } is then a line of support to Q, which implies that ⟨x, xn ⟩ ≤ ‖xn ‖2 for every x ∈ Q. Consequently, the angle α(m, n) between xm and xn (m ≠ n) satisfies cos α(m, n) ≤ 21 , i. e., α(m, n) ≥ π3 . But the constructed sequence (xn )n cannot contain more than six such elements, and this contradiction proves (c). Lemma 1.4.28 (c) implies that r0 := max {‖x‖ : x ∈ Q3 } < ∞.

(1.4.46)

So the following lemma which gives a norm estimate for trajectories makes sense. Lemma 1.4.29. Fix r ≥ r0 , with r0 as in (1.4.46), and let R > re2πL/ν with ν as in (1.4.30). Then ‖x0 ‖ ≤ r implies ‖g t x0 ‖ < R for t > 0. Proof. Suppose that the assertion is false. Choose t1 and t2 such that 󵄩󵄩 t1 󵄩󵄩 󵄩󵄩g x0 󵄩󵄩 = r,

󵄩󵄩 t2 󵄩󵄩 󵄩󵄩g x0 󵄩󵄩 = R,

󵄩 󵄩 r < 󵄩󵄩󵄩g t x0 󵄩󵄩󵄩 < R

(1.4.47)

1.4 Oscillations in DN-systems | 77

for t1 < t < t2 . Putting x1 := g t1 x0 and T := t2 − t1 we may then reformulate (1.4.47) in the form 󵄩󵄩 0 󵄩󵄩 󵄩󵄩g x1 󵄩󵄩 = r,

󵄩󵄩 T 󵄩󵄩 󵄩󵄩g x1 󵄩󵄩 = R,

󵄩 󵄩 r < 󵄩󵄩󵄩g t x1 󵄩󵄩󵄩 < R

(1.4.48)

for 0 < t < T. Now we distinguish two cases.

Figure 1.29

1st case: g t x1 ∈ Q1 for all t ∈ (0, T). From (1.4.45) and our choice of R it follows then that Tν ≥ 2π, and so the estimate T

̇ s (x1 )) ds ≥ Tν ≥ 2π ΔΦ(g t x1 ; 0, T) = ∫ Φ(g 0

holds for the rotation angle (1.4.39) of the trajectory g t x1 . In other words, this trajectory makes a complete turn around zero during the time interval [0, T], and therefore at some time t ∈ (0, T) enters, by our choice of r, the disc Br , because the set Q3 is entirely contained in this disc. But this contradicts (1.4.48). 2nd case: g t3 x1 =: x3 ∈ Q2 for some t3 ∈ (0, T). From Cx3 ∈ ̸ TQ (xr ) it follows then that Cx3 forms an acute angle with NQ (x3 ), while −x3 forms an obtuse angle58 with NQ (x3 ), by the second statement of Lemma 1.4.20. So Lemma 1.4.19 implies that −x3 forms an acute angle with τx3 f (x3 ). For t3 ≤ t ≤ T we denote by ψ(t) the point where the ray starting from zero and passing through g t x1 hits the boundary 𝜕(Q ∩ BR ), see Figure 1.30. 58 However, this angle cannot be larger than π.

78 | 1 DN-systems: theory

Figure 1.30

Lemma 1.4.22 implies that this ray for increasing time t rotates counterclockwise around the origin. Therefore for times t slightly larger than t3 the vectors g t x1 and ψ(t) remain in the acute angular domain formed by −x3 and τx3 f (x3 ). So for these t we have 󵄩󵄩 󵄩 󵄩 󵄩 󵄩󵄩ψ(t)󵄩󵄩󵄩 < 󵄩󵄩󵄩ψ(t3 )󵄩󵄩󵄩 = ‖x3 ‖ < R; moreover, 󵄩󵄩 󵄩 󵄩 T 󵄩 󵄩󵄩ψ(T)󵄩󵄩󵄩 ≥ 󵄩󵄩󵄩g x1 󵄩󵄩󵄩 = R, by assumption. Comparing these two estimates we conclude that the function t 󳨃→ ‖ψ(t)‖ attains a minimum t4 in the open interval (t3 , T). But the fact that the vector ψ(t) winds counterclockwise around the origin with nonzero velocity implies that ψ(t4 ) is a point of (locally) minimal norm on 𝜕Q, i. e., ψ(t4 ) ∈ BR . However, this immediately yields 󵄩󵄩 t4 󵄩󵄩 󵄩󵄩 󵄩 󵄩󵄩g x1 󵄩󵄩 ≤ 󵄩󵄩ψ(t4 )󵄩󵄩󵄩 ≤ r, contradicting (1.4.48). So we have shown that any trajectory which starts in the ball Br remains in the interior of the ball BR , and so the proof is complete. So we have proved Theorem 1.4.24 in the case of a bounded set Q. The remaining part reads as follows. Proof of Theorem 1.4.24. In this final part we show how the proof of Theorem 1.4.24 can be accomplished by reducing it for general Q to the case of a bounded set Q. So instead of the autonomous planar DN-system (1.4.7) we consider the system ẋ = τx f (x)

(x ∈ QR ),

(1.4.49)

1.4 Oscillations in DN-systems | 79

where we have put QR := Q ∩ BR and R is chosen as in Lemma 1.4.29. The hypotheses (1.4.28), (1.4.29) and (1.4.30) made for the system (1.4.7) on Q obviously carry over to the system (1.4.49) on QR . We show that this is also true for the hypothesis (1.4.27), which means that f (x) ≠ NQr (x)

(x ∈ Qr \ {0}).

(1.4.50)

In fact, for ‖x‖ < R this follows from (1.4.27). If ‖x‖ = R and x is an interior point of Q then NQr (x) = ℝ+ x = {λx : λ ≥ 0}, and so (1.4.50) follows from (1.4.30). Finally, suppose that ‖x‖ = R and x ∈ 𝜕Q. In this case we have NQr (x) = conv(ℝ+ x ∪ NQ (x)). If x ∈ Q1 then NQ (x) lies between −x and x, because Cx ∈ TQ (x) lies between x and −x. Again by (1.4.30), f (x) strictly lies between x and −x, hence f (x) ∈ ̸ NQR (x). If x ∈ Q2 then Cx lies between NQ (x) and −x and forms an acute angle with its element of best approximation in NQ (x), since Q is solid. From the second statement of Lemma 1.4.20 it follows that x lies either in NQ (x) or between −x and NQ (x); therefore f (x) ∈ ̸ conv(ℝ+ x ∪ NQ (x)). So we have shown that in all cases the hypotheses of the corresponding theorem are fulfilled for the bounded domain QR , and so the system (1.4.49) admits a closed trajectory which attracts all trajectories of (1.4.49). This closed trajectory is also a trajectory of the original system (1.4.7), because it is a part of any trajectory which is emitted from the disc Br and therefore is entirely contained in the interior of the disc BR . To show that the closed trajectory attracts all trajectories of the original system (1.4.7) we fix an arbitrary point x0 and choose r > 0 in such a way that the disc Br contains both the point x0 and the closed trajectory of (1.4.7). Then there exists R > 0 such that the trajectory g t x0 enters the closed trajectory at some time t, because g t x 0 also solves the system (1.4.7). We conclude that in fact the closed trajectory attracts all trajectories of (1.4.49), and hence it is unique. Finally, since the closed trajectory is contained, together with some neighbourhood, in QR , the orbital stability follows from the result proved above for bounded Q. So Theorem 1.4.24 is completely proved. Example 1.4.30. Let us return to Examples 1.4.14, 1.4.15, and 1.4.16 which we already used to illustrate Theorem 1.4.3. In Examples 1.4.14 and 1.4.16 the hypotheses of Theorem 1.4.24 are not met. To see this, consider the symmetric positive definite matrix B := (

a c

c ), b

80 | 1 DN-systems: theory where a > 0, b > 0, and ab > c2 , hence det B > 0. In this case we get ⟨Bx, Cx⟩ = −(ax1 + cx2 )x2 + (cx1 + bx2 )x1 = c(x12 − x22 ) + (b − a)x1 x2 . Choosing, in particular, x := (1, 0) we arrive at the condition c > 0, while choosing x := (0, 1) we arrive at the condition c < 0. Consequently, the hypothesis (1.4.29) cannot be satisfied, and so Theorem 1.4.24 does not apply.59 On the other hand, the hypotheses (1.4.27)–(1.4.30) are met in Example 1.4.15; let 1 us show this. Condition (1.4.27) is clearly satisfied for f (x) = 10 x + Cx, because f (x) ∈ NQ (x) = {λx : λ ≥ 0} would mean that Cx = (λ −

1 )x, 10

contradicting the fact that ⟨Cx, x⟩ = 0. Further, since the matrix B is invertible, we certainly have 1 ⟨Bx, f (x)⟩ = ⟨Bx, 10 x + Cx⟩ ≥ μ(‖x‖),

which shows that (1.4.28) is true. Finally, 1 󵄨󵄨 󵄨 ⟨x, Cx⟩ + ‖Cx‖2 = ‖x‖2 , 󵄨󵄨⟨f (x), Cx⟩󵄨󵄨󵄨 = 10 and so (1.4.29) holds with ν = 1. Since τx f (x) = Cx for x ∈ 𝜕Q in this example, taking K := Q one easily verifies that (1.4.35) holds with a = 1, (1.4.36) holds with δ = 1, and (1.4.42) holds with ν = 1.

59 However, the fact that strongly orbitally stable trajectories exist anyway shows that the hypotheses (1.4.27)–(1.4.30) are only sufficient, but not necessary.

2 DN-systems: applications In this chapter we discuss some applications of the theoretical concepts developed in the preceding Chapter 1. First we show how the multivalued operator NQ which associates to each point x in a convex set Q the normal cone NQ (x) may be interpreted as diode nonlinearity, and consider an important example involving electrical circuits. More generally, we give a sufficient condition under which the mathematical model of an electrical circuit leads to a diode nonlinearity (or DN-system). Finally, we show that certain problems from convex programming or biological systems with size constraints may also be treated in the framework of such DN-systems.

2.1 Electrical circuits with diode converters The main purpose of this section is to impose appropriate physical conditions on a the diode converter of an electrical circuit under which its mathematical description leads to so called DN-systems.

2.1.1 Examples of diode voltage converters Now we go back to the formal definition of a DN-system and its solution. Recall that such a system has the form of either a differential equation ẋ = τx f (t, x)

(2.1.1)

involving the projection operator τx , or, equivalently, a differential inclusion ẋ ∈ f (t, x) − NQ (x)

(2.1.2)

involving the normal cone NQ (x). As before, by a solution of (2.1.1) (respectively, (2.1.2)) we mean a locally absolutely continuous function x = x(t) satisfying the equation ̇ = τx(t) f (t, x(t)) x(t)

(2.1.3)

̇ ∈ f (t, x(t)) − NQ (x(t)) x(t)

(2.1.4)

respectively the inclusion

for almost all t ∈ J. We are going to show now how this notion applies to a class of electrical circuits involving diode converters. We start with two examples. Example 2.1.1. Our first example is the single-phase semi-periodic bridge rectifier sketched in Figure 2.1. https://doi.org/10.1515/9783110709865-002

82 | 2 DN-systems: applications

Figure 2.1

In Figure 2.2 we have enumerated the diodes of this converter clockwise by 1, 2, 3, and 4, and denoted the voltage between the zero knot and the k-th knot by uk (k = 1, 2, 3). Then the voltage at the four diodes becomes y1 = u1 ,

y2 = u2 − u1 ,

y3 = u2 − u3 ,

y4 = u3 .

(2.1.5)

Figure 2.2

The fact that the voltage of an ideal diode (see Subsection 1.1.1) attains only nonpositive values implies the system of inequalities { { { { { { { { { { { { {

u1 ≤ 0, u2 ≤ u1 , u3 ≤ 0,

(2.1.6)

u4 ≤ u3 .

Each of these inequalities characterizes a half-space in ℝ3 , so the solution set of all inequalities is a bound cone which is built up by the intersection of all these halfspaces. If we denote the outer current running through the k-th knot by ik , we get the current vector1 i := (i1 , i2 , i3 )T . The first Kirchhoff rule implies then that the current 1 The superscript T means, as usual, the transposed vector which transforms here a row vector into a column vector.

2.1 Electrical circuits with diode converters | 83

arriving at the zero knot is −(i1 + i2 + i3 ). So denoting the anode currents by x1 , . . . , x4 , we obtain the linear system i1 = x1 − x2 , { { { i2 = x2 + x3 , { { { { i3 = x4 − x3 ,

(2.1.7)

or, in matrix form, i = Ax, where 1

−1

0

0

A := ( 0

1

0

0

−1

1

1

0 )

(2.1.8)

and x := (x1 , x2 , x3 , x4 )T . Since the current at each diode is nonnegative, the current vector i = (i1 , i2 , i3 ) belongs to the conic hull (see Definition 1.2.5) of the columns of A; we denote this conic hull by K. Note that the k-th column in this matrix (2.1.7) is precisely the outer normal to the half-space determined by the k-th relation in (2.1.6). Consequently, the solution set of the inequalities (2.1.6) is the adjoint cone K ∗ to K in the sense of Definition 1.2.19. In fact, expressing in (2.1.5) the vector u = (u1 , u2 , u3 ) in terms of y = (y1 , y2 , y3 , y4 ) we get u1 = y1 , { { { u2 = y1 + y2 , { { { { u3 = y1 + y2 − y3 = y4 ,

(2.1.9)

or, in matrix form, u = By, where 1

0

0

0

1

1

−1

0

B := ( 1

1

0

0 ).

Combining (2.1.7) and (2.1.9) gives ⟨i, u⟩ = ⟨Ax, By⟩ = (x1 − x2 )y1 + (x2 + x3 )(y1 + y2 ) + (x4 − x3 )(y1 + y2 − y3 )

= x1 y1 − x2 y1 + x2 y1 + x2 y2 + x3 y1 + x3 y2 + x4 y4 − x3 y1 − x3 y2 + x3 y3 = x1 y1 + x2 y2 + x4 y4 + x3 y3 = ⟨x, y⟩ = 0.

So we see that the three relations i ∈ K,

u ∈ K∗,

⟨i, u⟩ = 0

are fulfilled which characterize elements from a cone and its adjoint cone.

(2.1.10)

84 | 2 DN-systems: applications Example 2.1.2. Our second example is the three-phase six-pulse bridge rectifier2 sketched in Figure 2.3.

Figure 2.3

Here both the current vector i = (i1 , i2 , i3 , i4 ) and the voltage vector u = (u1 , u2 , u3 , u4 ) are four-dimensional, because they correspond to the four knots 1, 2, 3, and 4 in Figure 2.2. Since this example is similar to Example 2.1.1, we only sketch the idea. The relations (2.1.10) hold again, where now K is the conic hull of the 6 columns of the matrix

A := (

1

−1

0

0

0

0

0

0

1

−1

0

0

0

0

0

0

−1

1

0

1

0

1

0

1

),

and its adjoint K ∗ is the intersection of the half-spaces determined by the inequalities { { { { { { { { { { { { { { { { { { { { { { { { {

u1 ≤ 0, u2 ≤ 0, u4 ≤ u1 , u4 ≤ u2 , u4 ≤ u5 , u5 ≤ 0.

So also in this example we get the duality relations between K and K ∗ which we discussed in a general framework in Subsection 1.2.4. 2 The converter in Figure 2.3 is sometimes called Larionov scheme in the literature.

2.1 Electrical circuits with diode converters | 85

2.1.2 Diode nonlinearities as operators In Subsection 2.1.1 we have described DN-systems by means of the differential inclusion (2.1.2) involving the normal cone NQ (x) at x. Now we consider this equation from a slightly different viewpoint; to this end, we introduce a new name for the normal cone. Definition 2.1.3. Given a convex set Q ⊆ ℝm , we call the multivalued map NQ which associates to each x ∈ Q the normal cone NQ (x) ⊆ ℝm the generalized diode nonlinearity operator (or DN-operator, for short) generated by Q. Some important analytical properties of the map NQ have been given in Proposition 1.3.6 and Example 1.3.12 in the first chapter. In this subsection we consider the particularly important case when the set Q is a cone K ⊆ ℝm . So by Definition 1.2.13 we have NK (x) = {y ∈ ℝm : ⟨y, z − x⟩ ≤ 0 for all z ∈ K}.

(2.1.11)

The following proposition provides an explicit description of the elements of NK (x) for x ∈ K in terms of the adjoint cone K ∗ . Proposition 2.1.4 (Characterization of DN-operators). For the generalized diode nonlinearity NK generated by some cone K ⊆ ℝm , the following three conditions are equivalent. (a) y ∈ NK (x); (b) x = P(y + x, K); (c) x ∈ K, y ∈ K ∗ , and ⟨x, y⟩ = 0. Moreover, the equality NK−1 = NK ∗ holds in the sense of multivalued maps. Proof. Every y ∈ NK (x) satisfies, by (2.1.11), the inequality ⟨y + x − x, z − x⟩ = ⟨y, z − x⟩ ≤ 0

(z ∈ K).

But this means precisely that x = P(y + x, K), by Theorem 1.2.11 (with y replaced by y + x), so (a) implies (b). If (b) holds we apply (1.2.30), again with y replaced by y + x, and obtain ⟨y, x⟩ = ⟨y + x − x, x⟩ = ⟨y + x − P(y + x, K), P(y + x)⟩ = 0 which is (c), since x ∈ K and y ∈ K ∗ . Finally, the fact that (c) implies (a) follows directly from the definition of the map NK . The last statement is a consequence of the fact that x ∈ NK−1 (y) is equivalent to (a), while x ∈ NK ∗ (y) is equivalent to (c). We illustrate Proposition 2.1.4 by means of a simple, but typical example.

86 | 2 DN-systems: applications Example 2.1.5. Let K ⊆ ℝm be a bound cone, and let NK be the generalized diode nonlinearity generated by K. We claim that, for x ∈ K, the set NK (x) is precisely the intersection NK (x) = K ∗ ∩ x ⊥ , where K ∗ is the adjoint cone (1.2.27) of K, and x ⊥ is the (m − 1)-dimensional subspace (1.2.7) orthogonal to x. In fact, this follows immediately from the equivalence of (a) and (c) in Proposition 2.1.4. The following theorem establishes a rule how diode nonlinearities are changed by linear transforms. Theorem 2.1.6 (Linear transform). Let K ⊆ ℝm be a cone, x ∈ K, u ∈ ℝn , and A : ℝm → ℝn a linear operator. Then the equivalence u ∈ NA(K) (Ax) ⇐⇒ A∗ u ∈ NK (x)

(2.1.12)

holds, where A∗ denotes the adjoint of A. Proof. The relation on the left hand side of (2.1.12) means, by Proposition 2.1.4, that Ax ∈ A(K),

u ∈ A(K)∗ ,

⟨Ax, u⟩ = 0.

But this is obviously equivalent to x ∈ K,

A∗ u ∈ K ∗ ,

⟨x, A∗ u⟩ = 0

which is precisely the right hand side of (2.1.12), again by Proposition 2.1.4. 2.1.3 The outer characteristic of a diode converter Let us consider a diode converter which represents an electrical circuit consisting of m ideal diods. We will assume that all knots of this circuit are inputs, i. e., contacts through which the diode converter may be joined with other circuits. We enumerate the knots (inputs) in a predetermined way by 0, 1, . . . , n. In the j-th diode, we denote the corresponding current by xj and the corresponding voltage (from the anode to the cathode) by yj . Furthermore, the incoming current (i. e., the current which flows from an outer circuit to the diode converter) through the k-th input will be denoted by ik , the incoming voltage i. e., the voltage between the k-th input and the 0-th input, by uk (k = 1, 2, . . . , n). With this notation, the so-called Volt–Ampère characteristic3 of the 3 The Volt–Ampère characteristic describes the relations between the voltages and currents in the whole circuit or parts of it. The so-called outer characteristic is a particular case of a Volt–Ampère characteristic and describes only the relations between either incoming or outgoing voltages and currents, but does not take into account interior elements.

2.1 Electrical circuits with diode converters | 87

ideal diode may be written in the form x ∈ ℝm +,

y ∈ ℝm −,

⟨x, y⟩ = 0.

(2.1.13)

The relation between the anode current vector x = (x1 , x2 , . . . , xm ) and the input current vector i = (i1 , i2 , . . . , in ) is given by Ax = i, where the (n × m)-matrix A = (akj )k,j with entries 1 { { { akj = { −1 { { { 0

if the j-th diode anode is joined with the k-th knot, if the j-th diode cathode is joined with the k-th knot,

(2.1.14)

otherwise.

An example of such a constallation is given in Figure 2.3. We do not take into account the rule for the 0-th knot, since it follows from the corresponding rules for the other knots and our assumption that the sum of all input currents is zero. In the following theorem, we denote by Aj := (a1j , a2j , . . . , anj )T

(j = 1, 2, . . . , m)

(2.1.15)

the j-th column of the matrix A = (akj )k,j with elements (2.1.14). Theorem 2.1.7. Let K = ℝm + be the cone of all m-tuples with nonnegative coordinates. Then the input voltage vector u and input current vector i are related by the equality u ∈ NA(K) (i),

(2.1.16)

where NA(K) denotes the DN-operator generated by A(K). Proof. Note first that the cone A(K) contains all elements generated by the columns A1 , . . . , Am of the matrix A, i. e. A(K) = {s1 A1 + . . . + sm Am : s1 , . . . , sm ≥ 0}, see (2.1.15). Fix x and y satisfying (2.1.13), which means that y ∈ NK (x), by Proposition 2.1.4. Let k(j, +) be the number of the knot which is joined to the anode of the j-th diode, and let k(j, −) be the analogous number for the cathode. Then yj = uk(j,+) − uk(j,−)

(j = 1, 2, . . . , m).

In the j-th column in (2.1.15), only the entries with index k(j, +) or k(j, −) can be different from zero. If one of these indices is zero, then the corresponding input voltage is also zero. From this we conclude that y = A∗ u, where A∗ is the adjoint matrix to A. So Theorem 2.1.6 implies that u ∈ NA(K) (Ax) = NA(K) (i) as claimed.

88 | 2 DN-systems: applications 2.1.4 An example We illustrate our results obtained in the previous subsection by means of a typical example. Example 2.1.8. Consider the double-phase semi-periodic rectifier with circuit feed and load sketched in Figure 2.4.

Figure 2.4

We show how this circuit chain may be represented as generalized diode nonlinearity in the sense of Definition 2.1.3. To this end, some notation is in order. By I1 we denote the current of the feed chain EL1 R1 , by I2 we denote that of the load chain L2 R2 . The choice of positive directions is indicated by arrows. The input voltage of the feed chain (between the knots 1 and 3) is denoted by U1 , that of the load chain (between the knots 0 and 2) by U2 . Then we arrive at the system of equations {

L1 I1̇ + R1 I1 + U1 = E(t), L2 I2̇ + R2 I2 + U2 = 0.

Furthermore, denote by x = (x1 , x2 , x3 , x4 ) and y = (y1 , y2 , y3 , y4 ) the anode current and voltage vectors, respectively, by i = (i1 , i2 , i3 ) the vector of exterior currents, and by u = (u1 , u2 , u3 ) the vector of exterior voltages. Then formula (2.1.16) from Theorem 2.1.7 holds, where K = A(ℝ4+ ), and A is the matrix (2.1.8). Observe that the input current and voltage I and U of the exterior circuits are related to the input current and voltage i and u of the diode converter by means of the

2.1 Electrical circuits with diode converters | 89

equalities i = B∗ I,

u = BU

with the matrix B := (

1

0

0

−1

1

0

).

So writing K1 := B(K ∗ )∗ , where K = A(ℝ4+ ) as before, from Theorem 2.1.6 we conclude that I ∈ K1 ,

U ∈ K1∗ ,

⟨I, U⟩ = 0.

In the differential equations which describe this system, the coefficients of the derivatives are not normalized, so we have not yet obtained an exact representation as diode nonlinearity. To achieve this, we make the additional transform X := CI,

Y := C −1 U,

K2 := CK1

with C := (

√L1 0

0 √L2

(2.1.17)

).

Then our model may be described by the differential inclusion Ẋ ∈ e(t) − rX − NK2 (X),

(2.1.18)

where e(t) :=

E(t) 1 ( ), √L1 0

r := (

R1 /L1 0

0

R2 /L2

).

Instead of (2.1.18), we may also use the equivalent equation4 Ẋ = τX (e(t) − rX),

(2.1.19)

where τX is the projection onto the tangent cone TK2 (X) at X, see (1.3.1). Of course, in the very simple Example 2.1.8 one may find the cone K1 also by a direct calculation. Using the same notation as above, Figure 2.4 shows that I1 = x1 − x2 ,

I2 = x1 + x4 = x2 + x3 .

4 The equivalence of (2.1.18) and (2.1.19) follows from Theorem 1.3.3, applied to f (t, X(t)) = e(t)−rX(t).

90 | 2 DN-systems: applications So the feed and load chains I1 and I2 satisfy I1 + I2 = x1 + x3 ≥ 0,

I1 − I2 = −(x2 + x4 ) ≤ 0.

Consequently, I ∈ K1 implies that both I2 ≥ −I1 and I2 ≥ I1 . Similarly, Figure 2.4 shows that U1 = y1 − y4 ,

U2 = y1 + y2 = y3 + y4 .

So the input voltages U1 and U2 of the feed and load chains satisfy U1 + U2 = y1 + y3 ≤ 0,

U1 − U2 = −(y2 + y4 ) ≥ 0.

Consequently, U ∈ K1∗ implies that both U2 ≤ −U1 and U2 ≤ U1 . The corresponding cones K1 , K1∗ , K2 , and K2∗ are sketched in Figure 2.5.

Figure 2.5

2.1.5 Numerical experiments In this subsection we use the symbolic programme Mathematica 6 for the numerical implementation and graphical representation of the examples of the preceding Subsection 2.1.4. For the sake of definiteness, we take L1 = R1 = L2 = R2 = 1,

E(t) = 5 cos 6t,

(2.1.20)

and the initial states are supposed to be zero.5 In the first picture (Figure 2.6) we have sketched the phase trajectory of the vector X = I which realizes the beginning of the ascent along the right-hand boundary of the 5 Note that, by our choice (2.1.20), the matrix C in (2.1.17) is the identity matrix, so X = I and Y = U.

2.1 Electrical circuits with diode converters | 91

cone K2 and then passes to periodic oscillations between the right and left rays which form the cone boundary.

Figure 2.6

In the second picture (Figure 2.7) we have sketched the coordinates of the vector X = I which illustrate our results in terms of a rectifier of the current: the first coordinate X1 is drawn with a black line, the second coordinate X2 (which coincides with the feed current) with a broken line.

Figure 2.7

Instead of (2.1.20), we take now L1 = 4, R1 = 1, L2 = 9, R2 = 1, E0 = 5, Q = 6. The third picture (Figure 2.8) and fourth picture (Figure 2.9) illustrate the deformation of the cone K2 under this change of the inductance parameters.

92 | 2 DN-systems: applications

Figure 2.8

The asymmetry of the periodic oscillation is due to the change of the relation between the inductance parameters.

Figure 2.9

The Mathematica programme which generated these pictures can be found in the Appendix. 2.1.6 Electrical diode circuits as DN-systems In this subsection we study an important interconnection between DN-systems and electric circuits. A related discussion can be found in [2, 59, 61]. Consider a connected electrical circuit which contains the usual elements: sources S, resistances R, capacities C, inductances L, and diodes D. In order to illustrate the Kirchhoff rules in the theory of electrical circuits, it is common to draw a certain graph tree which contains all knots, but no contour. The branches (elements)

2.1 Electrical circuits with diode converters | 93

which are not contained in the tree all called connectivity branches; each of them closes precisely one principal contour which includes, apart from the given branch, only branches of the tree. Conversely, every branch of the tree forms precisely one principal cross-section, i. e., a choice of branches which contain, apart from the given branch of the tree, all those connectivity branches whose principal contours include the given branch. We denote by Ū the voltage vector and by I ̄ the current vector in the branches of a circuit, while by U we denote the voltage vector and by I the current vector in the branches of the tree. Then the principal contour equation Ū = MU and the principal cross-section equation I = −M ∗ I ̄ are related by the same matrix M and its adjoint M ∗ (see [28, 80, 82]). We denote by D1 the set of all diodes which are connected to the circuit by capacities (i. e., which form a contour together with one of the involved capacities), and by k1 the number of elements of D1 . The set of all the other diodes is denoted by D2 , the number of its elements by k2 . Enumerating the diodes of D1 and D2 separately in an arbitrary order, we construct a diodic converter D in the following way. All elements of D1 are considered as branches of D and denoted by D1 , while all elements of D2 are considered as branches of D and denoted by D2 . Consider the column vector x := (y1 , y2 , . . . , yk1 , x1 , x2 , . . . , xk2 )T

(2.1.21)

consisting of the k1 anode voltages of the diode D1 and the k2 anode currents of the diode D2 , together with the row vector y := (x1 , x2 , . . . , xk1 , y1 , y2 , . . . , yk2 )

(2.1.22)

consisting of the k1 anode currents of the diode D1 and the k2 anode voltages of the k k diode D2 . The vector (2.1.21) belongs to the cone ℝ−1 × ℝ+2 , the vector (2.1.22) belongs k1 k2 to the cone ℝ+ × ℝ− , and each of these cones is adjoint to the other. In particular, the elements x and y are mutually orthogonal, so they are related by means of a diodic nonlinearity operator. Denote by v := (u1 , u2 , . . . , uk1 , i1 , i2 , . . . , in )T the mixed vector consisting of the anode voltages at the diodic converter D1 and the currents at the inputs of the diodic converter D2 , and by u := (i1 , i2 , . . . , ik1 , u1 , u2 , . . . , un ) the mixed vector consisting of the currents at the diodic converter D1 and the voltages at the branches of the diodic converter D2 . Then the dependence of v on x may be

94 | 2 DN-systems: applications expressed through the equation v=(

E O

O ) x, A

with A denoting the matrix from Subsection 2.1.3, and E the unit matrix of rank k1 , while the dependence of y on u may be expressed through the equation E y=( O

O E ) u=( A O ∗

O ) u. A∗

Consequently, Theorem 2.1.6 implies that u and v are connected through the corresponding DN-operator. In what follows, we will assume the following fundamental – LC-condition: every path of the elements S, R, C, and L which joins two inputs from the part D2 of the diodic converter contains at least one inductivity. Now we split all elements of the circuit into the 6 groups C, D2 , S, R, L and D1 . In the group C we first enumerate the capacities which are parallel-joined to the diodes of D1 , then all the other capacities in arbitrary order. In the groups S, R and L we enumerate all elements in arbitrary order. Finally, in the groups D2 and D1 we keep the original order imposed in the construction of D =D1 ∪D2 . Now we construct the circuit tree by resetting the groups and their elements according to the chosen enumeration, where we connect every time an element to the tree when it does not form a contour for the previously connected elements. From this description and the LC-conditions stated above it follows that all elements of D2 belong to the tree, while all elements of D1 are branches which close a contour with one of the capacities, see Figure 2.10.

Figure 2.10

Once we have constructed the tree in this way, we enumerate separately in every group the tree elements and the connecting elements (not in the tree) in the old order. More-

2.1 Electrical circuits with diode converters | 95

over, we overline the voltages and currents of all connecting branches.6 Then the equations of the principal contours and principal cross-sections of the resulting tree look like as follows. { { { { { { { { { { { { { { { { { { {

ū C = M11 uC , ū S = M31 uC + M33 uS , (2.1.23)

ū R = M41 uC + M43 uS + M44 uR , ū L = M51 uC + M52 uD2 + M53 uS + M54 uR + M55 uL ,

ū D1 = M61 uC ,

and { { { { { { { { { { { { { { { { { { {

∗ ̄ ∗ ̄ ∗ ̄ ∗ ̄ ∗ ̄ iC = −M11 iC − M31 iS − M41 iR − M51 iL − M61 iD1 , ∗ ̄ ∗ ̄ ∗ ̄ iS = −M33 iS − M43 iR − M53 iL ,

(2.1.24)

∗ ̄ ∗ ̄ iR = −M44 iR − M54 iL , ∗ ̄ ∗ ̄ iL = −M55 iL − M61 iD1 ,

∗ ̄ iD2 = −M52 iL .

In matrix form the linear relations (2.1.23) and (2.1.24) read M11

ū C

M31 ū S ) ( ( ( ū R ) = ( M41 ) ( ( ̄uL M51 ( ū D1 )

0

0

0

0

M33

0

0

0

M43

M44

0

0

M53

M54

M55

M52

uS ) )( ) ( uR ) ) )( uL

0

0

0

0

) ( uD2 )

∗ M31

∗ M41

∗ M51

∗ M61

∗ M33

∗ M43

∗ M53

0

0

∗ M44

∗ M54

0

0

0

0

0

0

∗ M55 ∗ M52

iS̄ ) )( ) ( iR̄ ) , ) )( iL̄

0

) ( iD̄ 1 )

( M61

uC

and iC

∗ M11

iS 0 ( ) ( ( iR ) = − ( 0 ( ) ( iL 0 ( iD2 )

( 0

iC̄

respectively. For the sake of completeness of the mathematical description, we also add the equations for the inductivity, capacity, and resistance of the circuit {

L̄ iL󸀠̄ = ū L ,

C̄ ū 󸀠C = iC̄ ,

ū R = R̄ iR̄ ,

LiL󸀠 = uL ,

Cu󸀠C = iC ,

uR = RiR ,

where L,̄ L, C,̄ C, R,̄ and R are diagonal matrices with positive entries. 6 Observe that then the enumeration in the diodic converter does not change.

(2.1.25)

96 | 2 DN-systems: applications We assume that uS and iS̄ are known.7 Let us now study the transformation of the system of the equations described above. To begin with, we first remove uL from the fifth equation in (2.1.23) and iC̄ from the fifth equation in (2.1.24) in the following way. Taking derivatives in the first equation of (2.1.23) and multiplying by C,̄ we obtain −1 󸀠 −1 ̄ ̄ iC̄ = C̄ ū 󸀠C = CM 11 C CuC = CM11 C iC ,

where we have used the fifth and second equality in (2.1.25). Replacing in this relation iC by the right hand side of the second equation in (2.1.24) yields ∗ ̄ ∗ ̄ ∗ ̄ ∗ ̄ ∗ ̄ iC + M11 CM11 C −1 iC = −M31 iS − M41 iR − M51 iL − M61 iD1 , ∗ ̄ since the term M11 iC cancels out. Similarly, taking derivatives in the second equation of (2.1.24) and multiplying by L, we obtain ∗ ̄ −1 ̄ 󸀠̄ ∗ ̄ −1 uL = LiL󸀠 = −LM55 L LiL = −LM55 L ū L ,

where we have used the fourth and first equality in (2.1.25). Replacing in this relation ū L by the right hand side of the fifth equation in (2.1.23) yields ∗ ̄ −1 ū L + M55 LM55 L ū L = M51 uC + M52 uD2 + M53 uS + M54 uR , ∗ ̄ ∗ since the term M55 uL cancels out. Putting A := C + M11 CM11 and B := L̄ + M55 LM55 −1 −1 ̄ and applying the operators AC and BL to the inductivity and capacity equation, respectively, we end up with ∗ ̄ ∗ ̄ ∗ ̄ ∗ ̄ Au󸀠C = −M31 iS − M41 iR − M51 iL − M61 iD1

(2.1.26)

BiL󸀠̄ = M51 uC + M52 uD2 + M53 uS + M54 uR .

(2.1.27)

and

Now we remove uR and iR̄ from these equations. To determine iR̄ we use the fourth equation in (2.1.23) and the second equation in (2.1.24) and obtain ∗ ̄ ∗ ̄ (R̄ + M44 RM44 )iR = M41 uC + M43 uS + M44 RM54 iL . ∗ Being symmetric and positive definite, the matrix S := R̄ + M44 RM44 is invertible, and so ∗ ̄ iR̄ = S−1 M41 uC + S−1 M43 uS + S−1 M44 RM54 iL .

(2.1.28)

7 This means that all voltage sources belong to the tree, while all current sources are connecting branches; in the opposite case the scheme may be contradictory.

2.1 Electrical circuits with diode converters | 97

To determine the voltage and the other currents we put (2.1.28) into the equation for the resistance and get ̄ −1 M41 uC + RS ̄ −1 M43 uS + RS ̄ −1 M44 RM ∗ iL̄ . ū R = RS 54 Likewise, putting (2.1.28) into the second equation from (2.1.24) yields ∗ −1 ∗ −1 ∗ −1 ∗ ̄ iR = −M44 S M41 uC − M44 S M43 uS + (M44 S M44 R − E)M54 iL ,

where E denotes as usual the identity matrix. Finally, applying the matrix R we arrive at ∗ −1 ∗ −1 ∗ −1 ∗ ̄ uR = −RM44 S M41 uC − RM44 S M43 uS + R(M44 S M44 R − E)M54 iL .

Introducing the vectors x := (

iL̄ ), uC

u := (

iD̄ 1 ), uD2

v := (

ū D1 ), iD2

y := (

iS̄ ) uS

and the matrices A1 := ( A3 := (

B

O O

∗ M61

O A

), −M52 O

A2 := ( ),

∗ −1 ∗ M54 R(M44 S M44 R − E)M54 ∗ −1 ∗ ∗ M41 S M44 RM54 − M51

A4 := (

O

∗ −1 M51 − M54 RM44 S M41 ∗ −1 −M41 S M41

∗ −1 M53 − M54 RM44 S M43

∗ −M31

∗ −1 −M41 S M43

),

),

we may write (2.1.26) and (2.1.27) more concisely in the form8 A1 x󸀠 = A2 x − A3 u + A4 y,

(2.1.29)

and the first and third equation from (2.1.23) more concisely in the form v = A∗3 x. Observe that the matrix A1 is symmetric and positive definite, so the matrices A1/2 1 and A−1/2 are well-defined. Putting 1 X := A1/2 1 x,

U := A−1/2 A3 x, 1

f (t, X) := A−1/2 A2 A−1/2 X + A−1/2 A4 y 1 1 1

(2.1.30)

we see that v = A∗3 A−1/2 X. Applying A−1/2 to both sides of the equality (2.1.29) we obtain 1 1 X 󸀠 = f (t, X) − A−1/2 A3 u. 1

(2.1.31)

8 Recall that the function y = y(t) is known; this function defines the action of the voltage and current sources.

98 | 2 DN-systems: applications The diode converter vectors u and v are connected, as we already observed, with the diode nonlinearity operator (see Example 2.1.8). Since v may be expressed through X, from Theorem 2.1.7 we may therefore conclude that the vector A−1/2 A3 u in 1 (2.1.31) is related to X through the diode nonlinearity NK generated by some cone K. Consequently, (2.1.31) may be reformulated as DN-system X 󸀠 ∈ f (t, X) − NK (X),

(2.1.32)

as we have shown in Subsection 2.1.1. We summarize our discussion with the following Theorem 2.1.9. Suppose that the physical model for the diode converter of an electrical circuit may be represented in such a way that it satisfies the LC-conditions. Then the mathematical model for the circuit may be represented as DN-system. Theorem 2.1.9 is not only of theoretical interest. In fact, as soon as we are able to solve the differential inclusion (2.1.32), we may calculate all voltages and currents in the corresponding circuit. To conclude, we give two examples to illustrate the applicability of Theorem 2.1.9. In the first example we verify the hypotheses and calculations preceding Theorem 2.1.9. The second example shows that the conditions we imposed to arrive at the differential inclusion (2.1.32) (in particular, Condition LC), are sufficient, but not necessary. Example 2.1.10. Consider the circuit sketched in Figure 2.11. Since this circuit contains two inductances, two resistances, and one diode, the corresponding vectors occurring in this circuit are ū ī u i ū L = ( L1 ) , iL̄ = ( L̄ 1 ) , uR = ( R1 ) , iR = ( R1 ) . ū i u i L2

Figure 2.11

L2

R2

R2

2.1 Electrical circuits with diode converters | 99

Moreover, u1 uD2 = ( u2 ) , u3

i1 iD2 = ( i2 ) , i3

uS = E(t).

The only principal contour equation occurring here in (2.1.23) is ū L = M52 uD2 + M53 E(t) + M54 uR , where M52 = (

−1 0

0 −1

1 ), 0

M53 = (

−1 ), 0

M54 = (

0 ), −1

−1 0

(2.1.33)

while the three principal cross-section equations in (2.1.24) occurring here have the form ∗ ̄ iS = −M53 iL ,

∗ ̄ iR = −M54 iL ,

∗ ̄ iD2 = −M52 iL .

All other matrices in (2.1.23) or (2.1.24) are zero. Let us now see how the differential equations in (2.1.25) look like in this case. Since L1 iL󸀠̄ 1 = ū L1 ,

L2 iL󸀠̄ 2 = ū L2 ,

uR1 = R1 iR1 ,

uR2 = R2 iR2 ,

the first and last equations in (2.1.25) hold with L L̄ = ( 1 0

0 ), L2

R=(

R1 0

0 ). R2

∗ ̄ ∗ ̄ ̄ The matrix A = C+M11 CM11 does not occur, but B = L+M 55 LM55 = L, since M55 = O. Consequently,

A1 = L,̄

A2 = −R,

A3 = −M52 ,

A4 = M53 ,

by (2.1.33). Taking into account (2.1.29) and the fact that x = iL̄ and y = uS = E(t), the transformation (2.1.30) reads ̄ L1/2 1 iL1

X = A1/2 1 x =(

),

̄ L1/2 2 iL2

U = A−1/2 A3 u = ( 1

L−1/2 (u1 − u3 ) 1 L−1/2 u2 2

),

and f (t, X) = A−1/2 A2 A−1/2 X + A−1/2 A4 y 1 1 1 = (

L−1/2 1

0

0

L−1/2 2

+(

L−1/2 1 0

)(

0

L−1/2 2

−R1

0

0

−R2

)(

)(

L−1/2 1

0

0

L−1/2 2

−R1 L−1 −1 1 ) E(t) = ( 0 0

)( 0

−R2 L−1 2

̄ L1/2 1 iL1

̄ L1/2 2 iL2

)

)X + (

−E(t)L−1/2 1 ). 0

100 | 2 DN-systems: applications So from our main result we conclude that the circuit in Figure 2.11 may be described by the DN-system (2.1.32) for given alimentation inputs E(t). The cone K in (2.1.32) may be described explicitly. Let KD be the cone consisting of all elements of the form 1 z := ( 0 0

−1 1 0

0 1 −1

ξ1 0 ξ1 − ξ2 ξ2 ) = ( ξ2 − ξ3 ) , 0 )( ξ3 1 ξ4 − ξ3 ξ4

where the vector (ξ1 , ξ2 , ξ3 , ξ4 ) runs over the positive octant ℝ4+ . Then K is the adjoint cone to −A−1/2 M52 KD∗ , where M52 is the first matrix in (2.1.33). 1 Example 2.1.11. The circuit sketched in Figure 2.12 below does not satisfy Condition LC, because the three paths which contain the elements J1 and J2 and join the diodic converter from D2 do not include an inductivity.

Figure 2.12

The equations for the voltage drop in the contour {E, R, L, D2 , D1 } has here the form uR + uL + uD2 − uD1 = uE ,

(2.1.34)

where uE = uE (t) is some given time-dependent function. We write i := iR = iL = iD2 pJ2 = −iD1 + J1 , with given constants J1 and J2 . Expressing the voltages uR and uL in (2.1.34) through the relations uR = iR and uL = i󸀠 L, we arrive at iR + i󸀠 L + uD2 − uD1 = uE .

(2.1.35)

Since the anode currents iD1 = J1 − i and iD2 = J2 + i of the diodes D1 and D2 , respectively, assume only nonnegative values, we get −J2 ≤ i ≤ J1 ; so for the correct

2.2 Convex programming and DN-systems | 101

performance of the circuit we require that J2 + J2 ≥ 0 in order guarantee that Q := [−J2 , J1 ] ≠ 0. Now we distinguish three cases for the position of i in Q. If −J2 < i < J1 then iD1 > 0 and iD2 > 0, so uD1 = uD2 = 0. If i = −J2 then iD1 > 0, iD2 = 0, uD1 = 0, and uD2 ≤ 0. Finally, if i = J1 then iD1 = 0, iD2 > 0, uD1 ≤ 0, and uD2 = 0. In any case the vector u := uD2 − uD1 belongs to NQ (i). Writing (2.1.35) in the form i󸀠 (t) = −

uE (t) R(t) u(t) u(t) − i(t) − =: f (t, i) − , L(t) L(t) L(t) L(t)

and observing that also u/L belongs to NQ (i), since L is positive, we end up with the differential inclusion i󸀠 ∈ f (t, i) − NQ (i) which is precisely of the form (2.1.2). We point out, however, that Q in this example is a convex set, but not a cone.

2.2 Convex programming and DN-systems In this section we show how DN-systems may help to solve a standard problem of convex programming.

2.2.1 Statement of the problem The standard problem of convex programming may be formulated as minimization problem for a convex function f on a convex closed set Q ⊆ ℝm . Recall that a function f : Q → ℝ is called convex if f (αx1 + (1 − α)x2 ) ≤ αf (x1 ) + (1 − α)f (x2 )

(x1 , x2 ∈ Q; 0 ≤ α ≤ 1).

Several methods are known for the approximate solution of the convex programming problem. However, these methods are usually quite cumbersome, since one has to ensure at each step that one does not leave the convex set Q. Here we propose an alternative method based on the use of DN-systems, which even works if one leaves the underlying set Q at a certain step. To this end, we suppose that f is differentiable in some open domain D ⊃ Q, and that the gradient ∇f is bounded on this domain. Consider the system ẋ = −∇f (x) − M(x − P(x, Q)) (x ∈ D),

(2.2.1)

102 | 2 DN-systems: applications where M is some positive real parameter (to be specified later), and P(x, Q) denotes the point of best approximation to x in Q, see Definition 1.2.9. In particular, we have M(x − P(x, Q)) = 0 ∈ NQ (x) for x ∈ Q, and therefore any solution x = x(t) of (2.2.1) which remains in Q also solves the DN-system ẋ ∈ −∇f (x) − NQ (x). Since we are interested in extending x also outside the set Q, where the normal cone NQ (x) does not make sense, we consider the term M(x − P(x, Q)) as natural extension of NQ (x) for x ∈ ̸ Q. If the solution x = x(t) of (2.2.1) remains in the set Q, its trajectory moves with velocity ẋ = −∇f (x) towards the minimal value f0 := min{f (x) : x ∈ Q}

(2.2.2)

of f in Q. Suppose that ∇f (x) ≠ 0 in Q. In the following subsection we will show that in this case the solution x = x(t) of (2.2.1) is outside Q and moves towards the set {x ∈ D : f (x) < f0 }. 2.2.2 Some differentiability properties Given a convex closed set Q, consider the function V : ℝm → ℝ+ defined by 󵄩 󵄩2 󵄩 󵄩2 V(x) := 󵄩󵄩󵄩x − P(x, Q)󵄩󵄩󵄩 = ‖x‖2 − 2⟨x, P(x, Q)⟩ + 󵄩󵄩󵄩P(x, Q)󵄩󵄩󵄩 .

(2.2.3)

In the next lemma we prove some natural differentiability property9 of this function. Lemma 2.2.1. The function (2.2.3) is continuously differentiable with ∇V(x) = 2(x − P(x, Q))

(x ∈ ℝm ).

(2.2.4)

Proof. We write Px instead of P(x, Q). To prove the assertion we have to show that V(y) − V(x) − ⟨2(x − Px), y − x⟩ →0 ‖y − x‖

(y → x)

for any x ∈ ℝm . A straightforward calculations shows that V(y) − V(x) − 2⟨x − Px, y − x⟩

= ‖y − Py‖2 − ‖x − Px‖2 − 2⟨x − Px, y − x⟩ = ⟨y − x + Px − Py, y − Py⟩ + ⟨x − Px, y − x + Px − Py⟩ − 2⟨x − Px, y − x⟩

9 Recall that we always consider the Euclidean norm.

2.2 Convex programming and DN-systems | 103

= ⟨y − x + Px − Py, y − x⟩ + ⟨y − x + Px − Py, x − Py⟩ + ⟨x − Px, y − x + Px − Py⟩ − 2⟨x − Px, y − x⟩. Consider first the scalar product ⟨y − x + Px − Py, y − x⟩ in this equality. Since the metric projection operator P is nonexpansive (see Proposition 1.2.12), we have ⟨y − x + Px − Py, y − x⟩ = ‖y − x‖2 + ⟨Px − Py, y − x⟩

≤ ‖y − x‖2 + ‖Px − Py‖‖y − x‖ ≤ 2‖y − x‖2 = o(‖y − x‖). (2.2.5)

For the other three scalar products in the above equality we get ⟨y − x + Px − Py, x − Py⟩ + ⟨y − x + Px − Py, x − Px⟩ − 2⟨x − Px, y − x⟩ = ⟨y − x + Px − Py, x − Py⟩ + ⟨Px − Py, x − Px⟩ − ⟨y − x, x − Px⟩ = ⟨y − x, x − Py⟩ + ⟨Px − Py, 2x − Px − Py⟩ − ⟨y − x, x − Px⟩ = ⟨Px − Py, x − Px⟩ + ⟨Px − Py, y − Py⟩ = ⟨Px − Py, y + x − Px − Py⟩. So it remains to show that ⟨Px − Py, x − Px⟩ + ⟨Px − Py, y − Py⟩ = o(‖y − x‖).

(2.2.6)

By a compactness argument we may find α ∈ ℝ (actually, α ≥ 0) and a sequence (yn )n such that lim y n→∞ n

= x,

lim ⟨

n→∞

Px − Pyn , x − Px⟩ = α. ‖yn − x‖

Recall that the variational inequality (1.2.18) states that ⟨u − Pu, z − Pu⟩ ≤ 0 for all z ∈ Q, and this characterizes the element Pu. Applying this to u := x and z := Pyn yields ⟨Px − Pyn , x − Px⟩ = −⟨x − Px, z − Px⟩ ≥ 0, while applying this to u := yn and z := Px yields ⟨Px − Pyn , yn − Pyn ⟩ = ⟨yn − Pyn , z − Pyn ⟩ ≤ 0. So the continuity of the metric projection P implies lim ⟨

n→∞

Px − Pyn Px − Pyn , x − Px⟩ = lim ⟨ , yn − Pyn ⟩ = 0, n→∞ ‖y − x‖ ‖yn − x‖ n

which together with (2.2.5) proves (2.2.6). Now we prove a differential inequality for the function (2.2.3) in an annular region. To this end, for 0 < r < R we put Xr,R := {x ∈ ℝm : r ≤ dist(x, Q) ≤ R}

(2.2.7)

󵄩 󵄩 S := max{󵄩󵄩󵄩∇f (x)󵄩󵄩󵄩 : x ∈ Xr,R }.

(2.2.8)

and

104 | 2 DN-systems: applications Observe that the maximum in (2.2.8) exists, since Xr,R is compact and f ∈ C 1 (D), by assumption.10 Lemma 2.2.2. The function (2.2.3) satisfies the differential inequality d V(x(t)) ≤ −2MV(x(t)) + 2S√V(x(t)), dt

(2.2.9)

for x(t) ∈ Xr,R , where x = x(t) solves the system (2.2.1). Proof. We use the same notation as in (1.4.41). From (2.2.1), (2.2.3) and (2.2.4) it follows that ̇ ̇ = −⟨2(x − Px), ∇f (x) + M(x − Px)⟩ V(x) = ⟨∇V(x), x⟩ = −2M‖x − Px‖2 + 2⟨x − Px, −∇f (x)⟩ 󵄩 󵄩 ≤ −2M‖x − Px‖2 + 2‖x − Px‖󵄩󵄩󵄩∇f (x)󵄩󵄩󵄩 ≤ −2MV(x) + 2S√V(x), which proves the assertion.

2.2.3 An upper limit estimate Now we come to the main result of this section. Let f0 be the minimum of the function f as in (2.2.2). Suppose that x = x(t) solves (2.2.1), and ∇f (x) ≠ 0 in Q. Theorem 2.2.3. For each ε > 0 there exist constants M > 0 and T > 0 such that f (x(t)) < f0

(t ≥ T)

and dist(x(t), Q) ≤ ε

(t ≥ T),

whenever x(0) = x0 ∈ Xr,R with Xr,R given by (2.2.7). Proof. First of all, we remark that from (2.2.9) and a well-known estimate for differential inequalities (see [25, p. 16]) it follows that dist(x(t), Q) = √V(x(t)) ≤

S S + (‖x0 − Px0 ‖ − )e−Mt , M M

(2.2.10)

and the last term can be made arbitrarily small for t ≥ T if M > 0 and T > 0 are sufficiently large. 10 Here we suppose that R is so small that the Xr,R ⊂ D with D as in (2.2.1).

2.2 Convex programming and DN-systems | 105

By (2.2.4), we may write the system (2.2.1) in the form ẋ = −∇Z(x), where Z(x) := f (x) + 21 MV(x). In fact, 1 ∇Z(x) = ∇f (x) + M∇V(x) = ∇f (x) + M(x − Px). 2 It is not hard to see that Z is a convex function, because f is convex, and that ‖Z(x)‖ → ∞ as ‖x‖ → ∞ and x ∈ ̸ Q. Therefore the minimal points of Z form a nonempty closed convex set, and every trajectory of the system (2.2.1) approaches this set asymptotically. Observe that, if x is a minimal point of Z, then f (x) < f0 . In fact, for every x ∈ Q with f (x) = f0 we have Z(x) = f0 . On the other hand, Z(x) < Z(x), because x cannot be a minimal point11 for Z. But from V(x) > V(x) = 0 we conclude that f (x) < f (x) = f0 , and so f (x(t)) < f0 for sufficiently large t as claimed. The proof is complete. Let us make some comment on the role of the constants M and T in Theorem 2.2.3. Of course, if we are allowed to choose M > 0 arbitrarily large, then the last term in (2.2.10) may be made arbitrarily small for any t > 0, which means that we may take T = 0; however, we often have to adjust M > 0 and T > 0 simultaneously in the sense that, the larger we may choose M, the smaller we may take T, and vice versa. In order to calculate, up to an error ε > 0, the optimal value at time T, one has to choose the constant M which represents the velocity of the approach of the approximation towards the optimum. More precisely, the smaller is the time T > 0, the larger we have to take M > 0. Only if we are lucky and choose as initial value already the optimum, we get T = 0, and then the velocity is also M = 0, because there is no need to change anything. 2.2.4 Numerical experiments To illustrate our results, we consider the problem of finding the minimal value f0 of the C 1 -function f defined by f (x, y) := 2 + (x − 2)2 + (y − 5)2 on the closed convex set Q := {(x, y) ∈ ℝ2 : (x − 2)2 + (y − 2)2 ≤ 4}. Of course, here one may calculate f0 very easily and gets f0 = f (2, 4) = 3. To analyze the system (2.2.1), we make the obvious substitution x − 2 =: ξ ,

y − 2 =: η.

11 This follows from the fact that ∇Z(x) = ∇f (x) ≠ 0, by assumption.

106 | 2 DN-systems: applications Then Q = {(ξ , η) ∈ ℝ2 : ξ 2 + η2 ≤ 4} becomes a closed disc centred at (0, 0), and f (ξ , η) := 2 + ξ 2 + (η − 3)2 . For this set Q we have { 0 (ξ , η) − P(ξ , η) = { ‖(ξ ,η)‖−2 { ‖(ξ ,η)‖ (ξ , η)

for ‖(ξ , η)‖ ≤ 2, for ‖(ξ , η)‖ > 2.

So after some calculation the system (2.2.1) reads ξ̇ = { { { { { { η̇ = {

2Mξ √ξ 2 +η2 2Mη √ξ 2 +η2

− (M + 2)ξ , − (M + 2)η + 6.

From numerical experiments based on Mathematica one sees that here the corresponding DN-operator behaves like a “magnet” which attracts every trajectory towards the closed disc Q, see Figure 2.13.

Figure 2.13

2.3 Biological systems with size constraints | 107

The Mathematica programme which generated the picture in Figure 2.13 can also be found in the Appendix.

2.3 Biological systems with size constraints In this section we sketch some applications of the material discussed in the preceding Section 2.2 to problems arising in biology and life sciences.

2.3.1 The classical predator-prey model A predator-prey system12 is a pair of first order nonlinear differential equations describing the dynamics of biological systems, or eco-systems, in which two species of animals interact: the predator and the prey.13 The populations change through time according to the pair of equations {

Ṅ 1 = ε1 N1 − γ1 N1 N2 , Ṅ 2 = γ2 N1 N2 − ε2 N2 ,

(2.3.1)

where N1 = N1 (t) is the number of prey (for example, rabbits), N2 = N2 (t) is the number of some predator (for example, foxes) at time t, Ṅ 1 = Ṅ 1 (t) and Ṅ 2 = Ṅ 2 (t) represent the growth rates of the two populations over time, and the dot indicates, as usual, the time derivative. The positive real numbers ε1 , γ1 , ε2 , γ2 are parameters describing the interaction of the two species. Let us spend some time on the meaning of these four parameters. If only animals of prey type live in the region (i. e., N2 (t) ≡ 0), their growth rate is a fixed positive number, say Ṅ 1 (t) ≡ ε1 . Conversely, if the predator-type animals which live only (or mostly) on animals of the prey species stay in an isolated part of the region, their growth rate is a fixed negative number, say Ṅ 2 (t) ≡ −ε2 . So in absence of the prey population (i. e., N1 (t) ≡ 0), the predator population would decay to zero due to starvation. Now, if these two species live together in a common region, the number of preys will increase more slowly if the number of predators is large, while the number of predators will increase faster if the number of preys is large. Volterra’s basic assumption was that the growth coefficient of the preys is ε1 − γ1 N2 (t), while the growth 12 A name which is also used in the literature is Lotka–Volterra model; this model originated in the study of fish populations of the Mediterranean [90]. 13 Here the eco-system is supposed to be closed which means that no migration is allowed into or out of the system.

108 | 2 DN-systems: applications coefficient of the predators is −ε2 − γ2 N1 (t). Therefore we arrive at a system of the form {

Ṅ 1 = N1 (ε1 − γ1 N2 ), Ṅ 2 = −N2 (ε2 − γ2 N1 )

(2.3.2)

which is of course the same as (2.3.1). In the following Figure 2.14 we have plotted N1 (t) and N2 (t) separately against the time axis, where the higher curve is the graph of the prey function N1 , and the lower curve is the graph of the predator function N2 .

Figure 2.14

In the example sketched in Figure 2.14 N2 slightly slags N1 , reflecting the fact that the amount of prey has a retarded effect on the number of predators. The main problem connected with the system (2.3.2) is to find its equilibrium states and to analyze its stability. Clearly, the system {

N1 (ε1 − γ1 N2 ) = 0, N2 (ε2 − γ2 N1 ) = 0

(2.3.3)

has the two solutions (N1 , N2 ) = (0, 0) and (N1 , N2 ) = (ε2 /γ2 , ε1 /γ1 ); they represent equilibria of (2.3.2). For the second equilibrium in (2.3.3) we introduce the new co-

2.3 Biological systems with size constraints | 109

ordinates ε2 =: X 1 , γ2

ε1 =: X 2 , γ1

X := (

X1

X2

).

For the stability analysis of (2.3.2) we use Lyapunov’s stability theorem for the first approximation. The Jacobian matrix containing the partial derivatives of the righthand side of (2.3.2) w. r. t. N1 and N2 is A(N1 , N2 ) = (

ε1 − γ1 N2 γ2 N2

−γ1 N1

−ε2 + γ2 N1

).

In particular, A(0, 0) = (

ε1 0

0 −ε2

),

A(ε2 /γ2 , ε1 /γ1 ) = (

0

γ2 ε1 /γ1

−γ1 ε2 /γ2 0

).

An easy calculation shows that A(0, 0) has the eigenvalues ε1 and −ε2 , and A(ε2 /γ2 , ε1 /γ1 ) has the eigenvalues ±ωi, where ω := √ε1 ε2 . This shows that (0, 0) is an unstable saddle point, while (X 1 , X 2 ) is a stable centre.14

The linearization of (2.3.2) near the equilibrium X := (X 1 , X 2 ) leads to the system γ1 ε2 { ẋ = − γ2 y, { ̇ γ2 ε1 { y = γ1 x,

(2.3.4)

where x = x(t) and y = y(t) approximate N1 − X 1 and N2 − X 2 , respectively. Eliminating y from (2.3.4) yields the single second order equation ẍ + ω2 x = 0 for x, with the general solution x(t) = A cos(ωt + φ0 ),

(2.3.5)

where A and φ0 depend of course on the initial conditions for x and y. Similarly, eliminating x from (2.3.4) we obtain the general solution y(t) = B sin(ωt + φ0 ).

(2.3.6)

14 Lyapunov’s theorem on first order approximations does not give any information here, because different behaviour is possible in case of purely imaginary eigenvalues; however, the phase portrait below (Figure 2.15) shows that it is a stable centre.

110 | 2 DN-systems: applications Here the phase shift φ0 is the same as in (2.3.5), and the amplitudes A and B are connected by the formula γ1 ε2 B = γ2 ωA. Geometrically, (2.3.5) and (2.3.6) imply that the phase space trajectories of the linearized system (2.3.4) in the (N1 , N2 )-plane are ellipses with half-axes A and B and centre at X. Figure 2.15 below corresponds to the choice ε1 = γ1 = 1 and ε2 = γ2 = 2. Here the trajectories are oriented counterclockwise.

Figure 2.15

The solutions of the linearized system (2.3.4) are called small fluctuations; their period is given in explicit form by T=

2π 2π = . ω √ε1 ε2

The behaviour of the trajectories of the nonlinear system (2.3.2) close to the equilibrium X is rather similar to that of (2.3.4). Already in Volterra’s book [90] the relation ε

N1 2 eγ2 N1 = CN2 1 e−γ1 N2 −ε

is given. In Figure 2.16 we sketch the trajectories for some initial values. This Figure corresponds, as Figure 2.15, to the choice ε1 = γ1 = 1 and ε2 = γ2 = 2.

2.3 Biological systems with size constraints | 111

Figure 2.16

2.3.2 Autooscillations in a predator-prey model Now we study predator-prey equations from the viewpoint of the theory of DN-systems developed in the Subsections 1.4.3 and 2.1.2. On the convex closed set Q := ℝ2+ we consider the DN-system Ẋ = τX [F(X) + δ(X − X)].

(2.3.7)

Here X = (X1 , X2 ) is the pair of prey population X1 (t) and predator population X2 (t) at time t, and the nonlinearity F(X) denotes the right-hand side of the Volterra–Lotka system (2.3.3), i. e., F(X) := (

(ε1 − γ1 X2 )X1 −(ε2 − γ2 X1 )X2

).

By X we denote the completion of the equilibrium in the classical model, while ε1 , γ1 , ε2 , γ2 and δ are nonnegative parameters. The term δ(X − X) in (2.3.7) takes into account exterior effects on the population of the two species. The constraint X ∈ Q models a special way to regulate the values of X1 and X2 . If δ = 0 then (2.3.7) is a classical predator-prey system; in this case we have F(X) ∈ TX (Q), hence τX F(X) = F(X), for every X ∈ Q. On the other hand, if δ > 0 and ε1 = γ1 = ε2 =

112 | 2 DN-systems: applications γ2 = 0 (hence X = 0), then (2.3.7) splits into the two quite simple separated equations {

Ẋ 1 = δX1 , Ẋ 2 = δX2 .

Let us now establish some criteria for auto-oscillations of (2.3.7), i. e., for the existence of a unique orbitally stable (in the strong sense) limit cycle. To this end, it is useful to introduce the new coordinates x1 := X1 − X 1 and x2 := X2 − X 2 . We get then NQ (X) = NQ−X (Q − X), that

TQ (X) = TQ−X (Q − X).

(2.3.8)

In fact, to see that the first equality in (2.3.8) is true observe that Y ∈ NQ (X) means ⟨Y, ξ − X⟩ ≤ 0

(ξ ∈ Q),

by definition of the normal cone, see Definition 1.2.13. But this condition is equivalent, by a simple shift, to ⟨Y, ξ − (X − X)⟩ ≤ 0

(ξ := ξ − X ∈ Q − X),

and this in turn means nothing else but Y ∈ NX−X (Q − X). The second relation in (2.3.8) follows from the first, by duality. Consequently, in the new coordinates the DN-system (2.3.7) takes the form ẋ = (

ẋ1 ẋ2

) = τx (

−γ1 x2 X1 + δx1 γ2 x1 X2 + δx2

) =: τx f (x),

while the constraint X ∈ Q in the new coordinates takes the form x ∈ Q − X =: Q. Now we verify condition (1.4.28) for the diagonal matrix B := (

γ2 ε1 γ1

0

0 γ1 ε2 γ2

).

By definition of f we obtain for x = (x1 , x2 )T ⟨Bx, f (x)⟩ = ( =(

γ2 ε1 2 γ1 ε2 2 x + x )δ + x1 x2 (γ1 ε2 x2 − γ2 ε1 x1 ) γ1 1 γ2 2 γ2 ε1 δ γεδ − γ2 ε1 x2 )x12 + ( 1 2 + γ1 ε2 x1 )x22 . γ1 γ2

(2.3.9)

2.3 Biological systems with size constraints | 113

So if we choose η > 0 in such a way that δ , γ2

x2 ≤ η +

δ γ1

δ − ε2 , γ2

X2 ≤ η +

δ + ε1 γ1

x1 ≥ η −

((x1 , x2 ) ∈ Q)

(2.3.10)

or, equivalently, X1 ≥ η −

((X1 ,2 ) ∈ Q),

then (1.4.28) is fulfilled with μ(t) := ηt 2 . To verify condition (1.4.30) we have to show that ⟨f (x), Cx⟩ = γ1 x22 X1 + γ2 x12 X2 ≥ ν(x12 + x22 ); this is true if ω := min{γ1 X1 , γ2 X2 } ≥ ν > 0.

(2.3.11)

Of course, (2.3.11) is only a sufficient condition for (1.4.30). Now we are going to formulate and prove the existence of a stable closed trajectory for a generalized predator-prey system. Theorem 2.3.1. Let Q ⊂ ℝ2 be nonempty, convex, and bounded, and suppose that the parameters occurring in the system (2.3.7) are positive and satisfy the estimates (2.3.10) and (2.3.11) for every X = (X1 , X2 ) ∈ Q. Assume that X is an interior point of Q, and that for every X ∈ 𝜕Q there exists ξ ∈ Tx (Q) such that ⟨ξ , F(X) + δ(X − X)⟩ > 0.

(2.3.12)

Then the system (2.3.7) admits a unique closed trajectory, and all other trajectories different from the equilibrium X influence on this trajectory. Proof. We use Theorem 1.4.24. Of course, it suffices to prove the statement for the shifted system (2.3.9). As already observed, from (2.3.10) and (2.3.11) it follows that the conditions (1.4.28) and (1.4.30) are fulfilled for this system. Since Q is bounded and f is continuously differentiable on Q, we conclude that f satisfies a Lipschitz condition on Q. Finally, for verifying (1.4.27) fix x ∈ 𝜕Q, and recall that z ∈ NQ (x) if and only if ⟨y−x, z⟩ ≤ 0 for each y ∈ Q, see Definition 1.2.13. In the original coordinates this means that we have Z := F(X) + δ(X − X) ∈ NQ (X) if and only if ⟨Y − X, Z⟩ ≤ 0 for any Y ∈ Q. Therefore condition (2.3.12) means nothing else but Z ∈ ̸ NQ (X) for all X ∈ 𝜕Q or, in the shifted coordinates, that z = f (x) ∈ ̸ NQ (x) for all x ∈ 𝜕Q.

114 | 2 DN-systems: applications On the other hand, if x is an interior point of Q the relation f (x) ∈ NQ (x) = {0} means that −γ1 x2 X1 + δx1 = 0,

γ2 x1 X2 + δx2 = 0.

(2.3.13)

Multiplying the first equality in (2.3.13) by x2 , the second by x1 , and subtracting the first from the second gives γ2 x12 X2 + γ1 x22 X1 = 0. Since γ1 , γ2 , X1 and X2 are positive, this implies that x1 = x2 = 0. So all hypotheses of Theorem 1.4.24 are satisfied for the system (2.3.9), and we are done.

2.3.3 Two examples To illustrate Theorem 2.3.1 we close with two examples. Example 2.3.2. Consider the predator-prey system ̇ { N1 = 2N1 − N1 N2 , { ̇ { N2 = N1 N2 − 3N2

(2.3.14)

on the rectangle Q := [1, 4] × [1, 3]. So we have γ1 = γ2 = 1,

ε1 = 2,

ε2 = 3,

X = (3, 2),

ω = ν = 1,

(2.3.15)

which shows that (2.3.11) is fulfilled. The conditions (2.3.10) connecting the parameters δ and η read here X1 ≥ η − δ + 3,

X2 ≤ η + δ + 2

((X1 , X2 ) ∈ [1, 4] × [1, 3]),

so we may choose, for example, δ = 3 and η = 1. For this choice of δ and η, the nonlinearity in (2.3.7) becomes F(X) + δ(X − X) = (

5X1 − X1 X2 − 3 −x1 x2 + 3x1 − 3x2 )=( ). X1 X2 − 2 x1 x2 + 2x1 + 3x2

It remains to check hypothesis (2.3.12). Fix X ∈ 𝜕Q; here we have to distinguish points in the (relatively) open boundary segments and in the corner points. Let X be in on the right boundary, say, but not a corner point, so X = (4, X2 ) with 1 < X2 < 3. In this case the tangent cone at X is the halfplane TQ (X) = {(ξ1 , ξ2 ) : ξ1 ≤ 4} = (−∞, 4] × ℝ.

2.3 Biological systems with size constraints | 115

So choosing ξ := (1, 0) we get ⟨ξ , F(X) + δ(X − X)⟩ = (17 − 4X2 )ξ1 + (4X2 − 2)ξ2 = 5 > 0. The other boundary points of Q may be treated analogously to show that (2.3.7) is fulfilled for the system (2.3.14). The situation is different for the corner points A := (1, 1),

B := (4, 1),

C := (4, 3),

D := (1, 3).

A straightforward calculation shows that { { { { { { { { { { { { { { { { { { { { { { { { { { {

−5 ) ∈ NA (Q), −5 7 ) ∈ NB (Q), F(B) + δ(B − X) = ( −2 0 F(C) + δ(C − X) = ( ) ∈ NC (Q), 6 −7 F(D) + δ(D − X) = ( ) ∈ ̸ NA (Q). −3 F(A) + δ(A − X) = (

So the only corner point in which (2.3.19) also holds is the left-upper point D = (1, 3); in the other corner points Theorem 2.3.1 does not apply. In fact, the point A is locally stable and attracts the trajectory which starts near the equilibrium, see Figure 2.17.

Figure 2.17

116 | 2 DN-systems: applications On Figure 2.17 one sees that the point B attracts another trajectory, due to the fact that (2.3.12) is also violated for the corner point B. In Figure 2.18 we have plotted the time evolution of the predators (continuous line) against the preys (broken line).

Figure 2.18

One may sketch many other trajectories which in part are attracted by the corner point A, in part by the corner point B. The points C and D are not equilibria; trajectories which start from these points are attracted by the corner point A. In Figure 2.19 below we have plotted N1 (t) and N2 (t) separately against the time axis after some rescaling. The graph of the predator function N2 slightly slags the graph of the prey function N1 , reflecting the fact that the amount of prey has a retarded effect on the number of predators. Finally, in Figure 2.20 we have sketched a closed trajectory of the system (2.3.14), where the rectangle Q = [1, 4] × [1, 3] is compact, and the parameters of the system are given by (2.3.15). The closed trajectory is the boundary of the dinosaurian egg in Q, and the equilibrium X is marked in the interior of the egg. We remark that the first term F(X) in Example 2.3.2 plays the role of a vortex force (or turbulence force) and guarantees the existence of a cycle. The second term δ(X −X) plays in turn the role of a centrifugal force (or escape force) and drives the solution towards the boundary of the domain. Increasing δ leads to the phenomenon that the

2.3 Biological systems with size constraints | 117

Figure 2.19

Figure 2.20

118 | 2 DN-systems: applications centrifugal force dominates the vortex force and destroys the cycle, with the consequence that new stationary points appear on the boundary. In our example the points A and B are stable, while the point C is unstable. Example 2.3.3. Now we consider the predator-prey system Ẋ = τX [F(X) + 0.26(X − X)],

(2.3.16)

where F(X) = (

X1 − X1 X2

).

X1 X2 − X2

So in this example we have γ1 = γ2 = ε1 = ε2 = 1,

δ = 0.26,

(2.3.17)

and X = (1, 1) is the unique equilibrium. We take the closed bounded rectangle Q := {(X1 , X2 ) : 0.75 ≤ X1 ≤ 1.7, 0.5 ≤ X2 ≤ 1.2}

(2.3.18)

as domain which contains the equilibrium point. We claim that all hypotheses of Theorem 2.3.1 are fulfilled. Indeed, the estimates X1 ≥ 0.75 >

ε2 − δ 1 − 0.26 = = 0.74, γ2 1

X2 ≤ 1.2
0 such that φ(t + dt) − φ(t) − D(t, φ(t), dt) = 0 (0 ≤ dt < δ).

(3.1.2)

Following [73] we call an equations with nonlinear differentials locally explicit (over U) if the Cauchy problem for this equation has a strong solution for every initial value in U. We point out that, when comparing two models describing an equation with nonlinear differentials like (3.0.2), the following simple observation is useful: if two func-

3.1 Locally explicit models of hysteresis elements | 125

̃ y, dt) have the same domain of definition and are tangential tion D(t, y, dt) and D(t, from the right in the sense that lim

dt→0+

̃ y, dt) D(t, y, dt) − D(t, =0 dt

((t, y) ∈ U),

(3.1.3)

then the equation (3.0.2) is equivalent1 to the same equation with D replaced by D.̃ In [72] the following different definition of locally explicit equations has been given. The quasicurrent γ corresponding to equation (3.0.2) is defined in [78, 80] by γtt+dt y = y + D(t, y, dt).

(3.1.4)

If this function is left-continuous with respect to the variable dt and has the local semigroup property (over U), then equation (3.0.2) is called locally explicit in [72]. Recall that here the local semigroup property for γ means that for all (t0 , y0 ) ∈ U there exists a δ > 0 such that for each t1 ∈ [t0 , t0 + δ) we have t

t

t

γt12 γt01 y0 = γt02 y0

(t2 ∈ [t1 , t1 + δ1 )).

(3.1.5)

In what follows we fix, for any pair (t0 , y0 ) ∈ U, a δ > 0 with the indicated property and denote it by Δ(t0 , y0 ). Imposing then the initial condition y(t0 ) = y0

(3.1.6)

we consider in the sequel the Cauchy problem (3.0.2)/(3.1.6). 3.1.2 Locally explicit equations: basic properties The following theorem shows that both definitions for locally explicit equations are actually equivalent. Theorem 3.1.1. The quasicurrent associated to equation (3.0.2) has the local semigroup property over U if and only if the Cauchy problem (3.0.2)/(3.1.6) admits a strong solution for each (t0 , y0 ) ∈ U. Proof. Suppose first that equation (3.0.2) generates a quasicurrent γ which satisfies (3.1.5). We claim that then the function φ(t) = γtt0 y0 is a strong solution of the Cauchy problem (3.0.2)/(3.1.6) on the interval [t0 , t0 + Δ(t0 , y0 )]. The initial condition (3.1.6) trivially follows from (3.1.1). For t0 ≤ t ≤ t0 + Δ(t0 , y0 ) we have φ(t + dt) − φ(t) − D(t, φ(t), dt) = γtt+dt y0 − γtt0 y0 − D(t, φ(t), dt) 0 = γtt+dt y0 − γtt+dt γtt0 y0 . 0 1 Here equivalence of two equations means that they have the same solutions.

126 | 3 Equations with nonlinear differentials But (3.1.5) implies that the last term is zero for sufficiently small dt > 0, and so φ is in fact a strong solution of equation (3.0.2). Conversely, fix an initial value (t0 , y0 ) ∈ U, and suppose that the Cauchy problem (3.0.2)/(3.1.6) has a strong solution φ for this initial value. Then φ(t) = γtt0 y0 = y0 + D(t0 , y0 , t − t0 )

(t0 ≤ t < t0 + δ)

for some δ > 0. We show that condition (3.1.5) holds. Indeed, note that for every t1 ∈ [t0 , t0 + δ) we have t

t

t

φ(t2 ) = γt12 φ(t1 ) = γt12 γt01 y0

(t1 ≤ t2 < t1 + δ1 ) t

if δ1 > 0 is sufficiently small. Therefore φ(t2 ) = γt02 y0 which shows that γ has the local semigroup property (3.1.5). So far we have only considered existence. Now we show that strong solutions of equation (3.0.2) are unique for increasing arguments in the following sense.2 Theorem 3.1.2 (Uniqueness). Let φ and ψ be two strong solutions of equation (3.0.2) satisfying φ(t1 ) = ψ(t1 ). Then φ(t) ≡ ψ(t) for all t ∈ [t1 , ∞) ∩ 𝒟(φ) ∩ 𝒟(ψ). Proof. Suppose that φ(t2 ) ≠ ψ(t2 ) for some t2 > t1 , and let t3 = inf{t : t1 ≤ t ≤ t2 , φ(t) ≠ ψ(t)}. Since φ and ψ are left-continuous, we deduce that φ(t3 ) = ψ(t3 ). Consequently, ψ(t3 + dt) − φ(t3 + dt) = ψ(t3 ) + D(t3 , ψ(t3 ), dt) − φ(t3 ) − D(t3 , φ(t3 ), dt) = 0 for sufficiently small dt > 0, contradicting the definition of t3 as infimum. We remark that for general (not necessarily strong) solutions of equation (3.0.2) one cannot expect an analogue to Theorem 3.1.2 even for locally explicit equations. A corresponding example will be given in Example 3.2.25 below. Sometimes it is useful to write the locally explicit equation (3.0.2) without the term o(dt). If we are interested only in strong solutions we will sometimes write (3.0.2) in the form y(t + dt) − y(t) = D(t, y(t), dt), ∗

(3.1.7)

where the asterisque at the equality sign means that (3.1.7) holds, for any t ∈ 𝒟(y), t ≠ sup 𝒟(y), only for sufficiently small3 dt > 0. In this way, (only) strong solutions of (3.0.2) solve equation (3.1.7); for further reference we state this in the following theorem which is only a reformulation of Theorem 3.1.1. 2 Here and in what follows we denote by 𝒟(φ) the domain of definition of a function φ. 3 Here the degree of smallness may depend on t.

3.1 Locally explicit models of hysteresis elements | 127

Theorem 3.1.3. The quasicurrent (3.1.4) has the local semigroup property (3.1.5) on U if and only if equation (3.1.7) has a solution for each (t0 , y0 ) ∈ U. Apart from existence and uniqueness results for (strong) solutions, continuation theorems for solutions of equation (3.0.2) are also of interest. In the following Theorems 3.1.4 and 3.1.5 we state a maximality and a continuation result for solutions. To this end, we recall some definitions and facts from the theory of ordered sets (see, e. g., [13]). Let M be some family of subsets of a given set, ordered by the usual inclusion. A set m̃ ∈ M is called maximal (in (M, ⊆)) if there is no m̂ ∈ M such that4 m̃ ⊂ m.̂ A subfamily L ⊆ M is called linearly ordered if l1 ⊆ l2 or l1 ⊇ l2 for any l1 , l2 ∈ L. Moreover, m ∈ M is called a majorant for L if l ⊆ m for all l ∈ L. The well-known Zorn lemma asserts that, if every linearly ordered subfamily L ⊆ M admits a majorant, then M has at least one maximal element. Theorem 3.1.4 (Extension). Every solution of equation (3.0.2) admits a maximal extension. Proof. Let φ be a fixed solution of (3.0.2), and denote by F the family of all possible solutions of (3.0.2) which extend φ. We apply Zorn’s lemma to the set M of all graphs of functions in F, i. e., M = {Γ(φ) : φ ∈ F},

Γ(φ) = {(t, φ(t)) : t ∈ 𝒟(φ)},

ordered by the usual inclusion ⊆. Let L ⊆ M be linearly ordered, and put m = ⋃ l. l∈L

By definition, l ⊆ m for any l ∈ L; we claim that m ∈ M. Since L is linearly ordered, we conclude that m = Γ(φ) for some extension φ of φ. It remains to show that φ solves equation (3.0.2). To see this, fix τ ∈ 𝒟(φ), and choose a solution ψ of (3.0.2) satisfying φ(t) = ψ(t) for t ∈ 𝒟(ψ). Now, if τ ≠ inf 𝒟(φ) we may choose ψ in such a way that τ ∈ 𝒟(ψ) and τ ≠ inf 𝒟(ψ), which implies that both φ and ψ are left-continuous at τ. Similarly, if τ ≠ sup 𝒟(φ) we may choose ψ in such a way that τ ≠ sup 𝒟(ψ), which implies that both φ and ψ satisfy (3.0.2) at t = τ. Consequently, m ∈ M, and so m is a majorant for L in M. Zorn’s lemma implies that there is a maximal element in M which is the required maximal extension of φ. Now we give a sufficient condition under which a solution which exists on an open interval may be extended to the right endpoint of this interval. 4 As usual, the sign ⊂ denotes strict inclusion, i. e., A ⊂ B means A ⊆ B and A ≠ B.

128 | 3 Equations with nonlinear differentials Theorem 3.1.5 (Global solvability). Suppose that equation (3.0.2) is locally explicit, and let t0 < T ≤ ∞. Assume that, for every t1 ∈ (t0 , T), any solution φ : 𝒟(φ) → ℝm with [t0 , t1 ) ⊆ 𝒟(φ) of the Cauchy problem (3.0.2)/(3.1.6) has a limit y1 = lim φ(t), t→t1 −

(3.1.8)

where (t1 , y1 ) ∈ U. Then every solution of (3.0.2)/(3.1.6) may be extended to the whole interval [t0 , T). Proof. By Theorem 3.1.4 we may restrict ourselves, without loss of generality, to solutions of (3.0.2)/(3.1.6) which do not admit a proper extension. Let φ be such a solution. We claim that t1 = sup 𝒟(φ) ≥ T which proves the assertion. Supposing that t1 < T, we distinguish two cases for 𝒟(φ). Suppose first that 𝒟(φ) = [t0 , t1 ). Then the function φ : [t0 , t1 ] → ℝm defined by φ(t) = {

φ(t)

for t0 ≤ t < t1 ,

y1

for t = t1

is left-continuous and a proper extension of φ, a contradiction. On the other hand, suppose that 𝒟(φ) = [t0 , t1 ], and consider the solution ψ which satisfies y(t1 ) = y1 . Since equation (3.0.2) is locally explicit, we find some Δ > 0 such that ψ is defined on [t1 , t1 + Δ). But then the function φ : [t0 , t1 + Δ) → ℝm defined by φ(t) = {

φ(t)

for t0 ≤ t < t1 ,

ψ(t)

for t1 ≤ t < t1 + Δ

is left-continuous and a proper extension of φ, again a contradiction. This shows that t1 ≥ T as claimed, and we are done. We point out that a certain variant of Theorem 3.1.5 for strong solutions is also true. In fact, since we proved the local existence for the Cauchy problem (3.0.2)/(3.1.6) in the class of strong solutions, we arrive at the following Theorem 3.1.6. Suppose that equation (3.0.2) is locally explicit, and let t0 < T ≤ ∞. Assume that, for every t1 ∈ (t0 , T), any strong solution φ : [t0 , t1 ) → ℝm of the Cauchy problem (3.0.2)/(3.1.6) has a limit (3.1.8), where (t1 , y1 ) ∈ U. Then every strong solution of (3.0.2)/(3.1.6) may be extended to the whole interval [t0 , T). We close this section with a uniqueness theorem for locally explicit equations. To this end, we need a lemma which may be considered as a generalization of the van Kampen lemma and was proved in [73]. Lemma 3.1.7. Let u : [t0 , T) → ℝm and φs : [s, T) → ℝm , where t0 ≤ s < T, be leftcontinuous functions which satisfy φs (s) = u(s) and lim

t→s+

‖φs (t) − u(t)‖) =0 t−s

(t0 ≤ s < T).

3.1 Locally explicit models of hysteresis elements | 129

Suppose that there exists a function L : [t0 , T) × [t0 , T) → [0, ∞) such that L(t, ) ̇ is bounded on every compact interval and 󵄩 󵄩 󵄩 󵄩󵄩 󵄩󵄩φs (t) − φp (t)󵄩󵄩󵄩 ≤ L(t, p)󵄩󵄩󵄩φs (p) − φp (p)󵄩󵄩󵄩 for t0 ≤ s < p ≤ t < T. Then u = φt0 . Proof. For fixed t ∈ (t0 , T), consider the function δ : (t0 , t) → ℝm defined by δ(s) := φt0 (t) − φs (t)

(t0 < s < t).

For t0 ≤ s < p ≤ t we get then ‖φ (p) + u(p)‖) ‖δ(p) − δ(s)‖) ‖φp (t) − φs (t)‖) = ≤ sup L(t, p) s →0 p−s p−s p−s s≤p≤t as p → s+. Consequently, δ+󸀠 (s) = 0. Since δ is left-continuous, this implies that δ(s) = δ(t0 ) = 0, hence φt0 (t) = u(t). Theorem 3.1.8. Suppose that every strong solution φ of the locally explicit equation (3.0.2) admits a forward extension up to T, i. e., if s < T satisfies s ∈ 𝒟(φ), then φ may be extended as strong solution to [s, T). Assume that the family of all strong solutions of (3.0.2) satisfies a Lipschitz condition in the sense that 󵄩󵄩 󵄩 󵄩 󵄩 󵄩󵄩φ(t) − ψ(t)󵄩󵄩󵄩 ≤ L(t, p)󵄩󵄩󵄩φ(p) − ψ(p)󵄩󵄩󵄩, where the function L : [t0 , T) × [t0 , T) → [0, ∞) meets the hypotheses of Lemma 3.1.7. Then every solution of (3.0.2) is a strong solution, and so every solution of the Cauchy problem (3.0.2)/(3.1.6) with admissible initial value is unique from the right. Proof. Let u : [t0 , T) → ℝm be any solution of the locally explicit equation (3.0.2). Since (s, u(s)) ∈ U for all s ∈ 𝒟(u), then (3.0.2) has a strong solution φs : [s, T) → ℝm starting from the point (s, u(s)). Since both u and φs solve equation (3.0.2), we have u(t) − φs (t) = o(t − s)

(s ≤ t).

But this implies, by Lemma 3.1.7, that u = φt0 , and so u is a strong solution as well. 3.1.3 An example In this section we discuss an example which illustrates our previous results. Example 3.1.9. Consider the equation t+dt

x(t + dt) − x(t) = ∫ max{0, F(τ) − μN} dτ, t

(3.1.9)

130 | 3 Equations with nonlinear differentials where μ and N are positive constants, representing the friction coefficient and the support reaction, respectively, of the underlying system. The function F : [t0 , T) → ℝ is bounded and has at most finitely many points of discontinuity. We show that (3.1.9) is a locally explicit equation. Introducing the quasicurrents t γt02 x0

t2

= x0 + ∫ max{0, F(τ) − μN} dτ t0

and t γt01 x0

t1

= x0 + ∫ max{0, F(τ) − μN} dτ, t0

we get t

t1

t

t2

γt12 γt01 x0 = x0 + ∫ max{0, F(τ) − μN} dτ + ∫ max{0, F(τ) − μN} dτ t0

t1

t2

t

= x0 + ∫ max{0, F(τ) − μN} dτ = γt02 x0 . t0

This shows that the quasicurrent has the required semigroup property. Now we verify the hypotheses of Theorem 3.1.6. The existence of the limit (3.1.8) for every t1 is clear. Since the function F in (3.1.9) is bounded, we get 󵄨󵄨 t 󸀠󸀠 󵄨󵄨 󵄨󵄨 󵄨 󵄨󵄨∫ max{0, F(τ) − μN} dτ󵄨󵄨󵄨 ≤ c󵄨󵄨󵄨t 󸀠 − t 󸀠󸀠 󵄨󵄨󵄨 󵄨 󵄨 󵄨󵄨󵄨 󵄨󵄨󵄨 󵄨t 󸀠 󵄨 with some constant c > 0. For fixed ε > 0, let δ := ε/c. Given t 󸀠 , t 󸀠󸀠 ∈ (t1 − δ, t1 ) we have t󸀠

φ(t ) = φ(t1 − δ) + ∫ max{0, F(τ) − μN} dτ 󸀠

t1 −δ

and t 󸀠󸀠

φ(t ) = φ(t1 − δ) + ∫ max{0, F(τ) − μN} dτ. 󸀠󸀠

t1 −δ

3.2 Equations with relay | 131

Consequently, t 󸀠󸀠 󵄨󵄨 󵄨󵄨 t 󸀠 󵄨󵄨 󵄨󵄨 󵄨󵄨 󸀠 󸀠󸀠 󵄨󵄨 󵄨 󵄨󵄨φ(t ) − φ(t )󵄨󵄨 = 󵄨󵄨 ∫ max{0, F(τ) − μN} dτ − ∫ max{0, F(τ) − μN} dτ󵄨󵄨󵄨 󵄨󵄨 󵄨󵄨 󵄨 󵄨t −δ t −δ 1

1

󵄨󵄨 󵄨󵄨 t 󸀠󸀠 󵄨󵄨 󵄨󵄨 󵄨 󵄨 󵄨 = 󵄨󵄨∫ max{0, F(τ) − μN} dτ󵄨󵄨󵄨 ≤ c󵄨󵄨󵄨t 󸀠 − t 󸀠󸀠 󵄨󵄨󵄨 < cδ = ε 󵄨󵄨 󵄨󵄨 󵄨 󵄨t 󸀠

if |t 󸀠 − t 󸀠󸀠 | < δ. Thus, all hypotheses are fulfilled, and so Theorem 3.1.6 applies to the problem (3.1.9).

3.2 Equations with relay In this section we recall three descriptions of so-called relay nonlinearities due to Krasnosel’sky–Pokrovsky [19–24], Tsypkin [84–88], Yakubovich [91–93], and Zubov [96]. Moreover, we show that relay systems may be interpreted in the framework of locally explicit equations and strong solutions. 3.2.1 Hysteresis relays: heuristic description Now we are going to define what we mean by a relay nonlinearity and establish some connection with locally explicit equations. Definition 3.2.1. A (non-ideal) relay is a transducer with a continuous input x(t) and an output y(t) which assumes only the values 0 and 1. More precisely, for given numbers α and β > α one has y(t) = 0 for x(t) < α and y(t) = 1 for x(t) > β, while for α ≤ x(t) ≤ β both values y(t) = 0 or y(t) = 1 are possible. The numbers α and β are called the lower resp. upper threshold value of the relay. Any point of discontinuity of the relay output y is called switching point.

Figure 3.1

132 | 3 Equations with nonlinear differentials So the output jumps up from 0 to 1 if the input reaches the threshold value β, while it jumps down from 1 to 0 if the input reaches the threshold value α. In other words, the domain of admissible values of a relay with threshold values α and β has the form Ω(α, β) = {(x, 0) : x ≤ β} ∪ {(x, 1) : x ≥ α}, i. e., consists of two horizontal half-lines in the (x, y)-plane (see Figure 3.1). This phenomenological description has been made formally more precise for a rigorous mathematical treatment, e. g., in [23, 24, 88]. Tsypkin [84–88] defines equations with hysteresis elements not by means of functions of the input x, but by an operator which is induced by x and some parameter σ in the form y(t) = Φ(x(s)|tt0 ; σ).

(3.2.1)

This notation means that the output y(t) at time t is not only determined by the input x(t), but also by the input history,5 i. e., by the values of x(s) for s in the whole interval [t0 , t]. An analysis of nonlinear elements with hysteresis in the most general setting may be found in the work of Yakubovich [91–93]. In [88, p. 74], Tsypkin calls the relay whose phenomenological description was given above, an element with positive hysteresis and without zone of non-sensibility. If we denote the output after the last switch of the relay at time t1 by σ = y1 ∈ {0, 1}, the corresponding equation takes the form y(t) = Φ(x, y1 ) = {

1

if y1 = 1 and x(t) > β or α < x(s) ≤ β,

0

if y1 = 0 and x(t) < α or α ≤ x(s) < β.

In the monograph [23] (see also [24]) Krasnosel’sky and Pokrovsky give the following description6 of a such a relay. For any initial state (x0 , y0 ) ∈ Ω(α, β) at time t = t0 all continuous inputs x(t) (t ≥ t0 ) are admissible which satisfy x(t0 ) = x0 . The output y(t) (t ≥ t0 ) associated to such an input x(t) is then defined in the following way. Let Ω0 (α, β) = {x : x(t1 ) = α for some t1 ≥ t0 } ∩ {x : x(τ) < β for all τ ∈ (t1 , t]}, Ω1 (α, β) = {x : x(t1 ) = β for some t1 ≥ t0 } ∩ {x : x(τ) > α for all τ ∈ (t1 , t]},

and 0 { { { y(t) = { 1 { { { y0

if x(t) ≤ α or x ∈ Ω0 (α, β), if x(t) ≥ β or x ∈ Ω1 (α, β), if α < x(τ) < β for all τ ∈ [t0 , t].

5 In a more suggestive language, some authors call this phenomenon output with memory. 6 Here we write the equation in a slightly modified, though equivalent, form.

(3.2.2)

3.2 Equations with relay | 133

It is not hard to see that in this description the output function changes its value precisely when the input function reaches one of the threshold values α or β; in particular, the output function is right-continuous. 3.2.2 Relays as locally explicit models As a consequence of the phenomenological descriptions given in the preceding section, the increment of the output signal of the relay can be expressed locally, i. e., for small dt > 0, in the form 1 { { { D(t, y(t), dt) = { −1 { { { 0

if x(t) = β and y(t) = 0, if x(t) = α and y(t) = 1,

(3.2.3)

otherwise.

For this reason, the global input–output dependence can be interpreted as strong solution of equation (3.0.2), where D(t, y(t), dt) is given by (3.2.3), and the domain U consists precisely of all (t, y) ∈ ℝ × ℝm such that either x(t) ≤ β and y = 0, or x(t) ≥ α and y = 1. In contrast to the model described in [23], the output function D in (3.2.3) is leftcontinuous, not right-continuous.7 However, the basic properties of a relay discussed in [23] are not affected by this difference. Now we state a result on existence and uniqueness of strong solutions for the equation (3.0.2) with initial condition (3.1.6). Theorem 3.2.2 (Existence and uniqueness). For any continuous input σ : [t0 , T) → ℝm (t0 < T ≤ ∞), the problem (3.0.2) with D(t, u(t), dt) given by (3.2.3), has a unique strong solution u : [t0 , T) → ℝm satisfying (3.1.6). Moreover, all solutions of this problem are strong solutions. We do not give the proof of Theorem 3.2.2 here, since in Subsection 3.2.3 we will prove a more general result (Theorem 3.2.24). Instead, we give another result on the local dependence of the solution on the input.8 Theorem 3.2.3 (Input–output dependence). Under the hypotheses of Theorem 3.2.2, ̃ 0 ) = σ0 and u(t0 ) = u(t ̃ 0 ) = u0 . Then there exists a δ > 0 such suppose that σ(t0 ) = σ(t ̃ for t0 < t < t0 + δ. that u(t) ≡ u(t) Proof. For the sake of definiteness, let u0 (t) = 0, hence σ(t0 ) ≤ β. We distinguish two ̃ < β on (t0 , t0 +δ) for some δ > 0, by continuity. cases. If σ(t0 ) < β, then σ(t) < β and σ(t) 7 As pointed out before, the left-continuity of solutions is crucial in the theory of locally explicit solutions. 8 Following [73], in the sequel we will write σ for the input and u for the output function.

134 | 3 Equations with nonlinear differentials ̃ = 0 on [t0 , t0 + δ). On the other hand, in case σ(t0 ) = β we Consequently, u(t) = u(t) ̃ > α on (t0 , t0 + δ) for some δ > 0, and therefore u(t) = u(t) ̃ =1 have σ(t) > α and σ(t) on (t0 , t0 + δ). The case u0 (t) = 1 is treated analogously. In view of Theorem 3.2.3 it seems reasonable to introduce the following notation: we write v(σ(t0 ), u(t0 )) for the value of u(t) on (t0 , t0 + δ) for sufficiently small9 δ > 0 and call v(σ(t0 ), u(t0 )) the output signal after t0 . Of course, this constant may assume only the values 0 or 1.

3.2.3 Hysteresis relays: basic properties In this subsection we derive some properties of the hysteresis relay described above, where we assume throughout that the hypotheses of Theorem 3.2.2 are met. We start with two monotonicity properties. Proposition 3.2.4 (Monotonicity with respect to inputs). Let σ, σ̃ : [t0 , T) → ℝm be ̃ for t0 ≤ t < T, and let u, ũ : [t0 , T) → ℝm be two continuous inputs satisfying σ(t) ≤ σ(t) ̃ 0 ), then u(t) ≤ u(t) ̃ for all t ∈ [t0 , T). the corresponding outputs. If u(t0 ) ≤ u(t ̃ Proof. Suppose that the claim is false, and put t1 := inf{t : t ≥ t0 , u(t) > u(t)}. Then ̃ 1 ), u(t ̃ = v(̃ σ(t ̃ 1 )) v(σ(t1 ), u(t1 )) = u(t) > u(t) where v and ṽ denote the corresponding output signals introduced at the end of the previous subsection. Consequently, u(t) = u(t1 ) + D(t1 , u(t1 ), t − t1 ) = 1

(3.2.4)

̃ 1 , u(t ̃ = u(t ̃ 1 ) + D(t ̃ 1 ), t − t1 ) = 0 u(t)

(3.2.5)

and

on (t1 , t1 + δ) for sufficiently small δ > 0, which is possible in two cases. ̃ 1 ) > α. By leftIn the first case we have u(t1 ) = 1, hence σ(t1 ) > α, and so σ(t ̃ 1 ) > α it follows ̃ 1 ), and so also u(t ̃ 1 ) = 1. But from σ(t continuity, this implies u(t1 ) ≤ u(t ̃ ̃ that D(t1 , u(t1 ), t1 − dt) = 1, contradicting (3.2.5). ̃ 1 ) ≥ β. But σ(t ̃ 1) > β In the second case we have u(t1 ) = 0, hence σ(t1 ) = β, and so σ(t ̃ 1 , u(t ̃ 1 ) = β implies u(t ̃ 1 ) = 1 and so also u(t) ̃ = 1, while σ(t ̃ 1 )+D(t ̃ 1 ), t1 −dt) = 1, implies u(t ̃ 1 ), contradicting again (3.2.5). independently of u(t 9 The smallness of δ depends on the behaviour of σ(t) for t > t0 .

3.2 Equations with relay | 135

Proposition 3.2.5 (Monotonicity with respect to threshold values). Let α, α,̃ β and β̃ be real numbers satisfying α ≤ α̃ and β ≤ β,̃ and let σ : [t0 , T) → ℝm be a continuous input. ̃ respectively. Denote by u, ũ : [t0 , T) → ℝm the outputs corresponding to (α, β) and (α,̃ β), ̃ 0 ), then u(t) ≥ u(t) ̃ for all t ∈ [t0 , T). If u(t0 ) = u(t ̃ Proof. Suppose that the claim is false, and put t1 := inf{t : t ≥ t0 , u(t) < u(t)}. Then ̃ = 1 on (t1 , t1 + δ) for sufficiently small δ > 0, which is possible in two u(t) = 0 and u(t) cases. In the first we have u(t1 ) = 0, hence σ(t1 ) < β ≤ β.̃ By left-continuity, this implies ̃ 1 ), and so also u(t ̃ 1 ) = 0. But from σ(t1 ) < β̃ it follows that u(t) ̃ u(t1 ) ≥ u(t = 0, a contradiction. ̃ 1 ), t − t1 ) = −1 and σ(t1 ) = α ≤ α.̃ In the second case we have u(t1 ) = 1, hence D(t1 , u(t ̃ 1 ) = 0, then σ(t1 ) = β̃ > α,̃ since u(t) ̃ ̃ 1 ) = 1, then σ(t1 ) > α,̃ If u(t = 1. Similarly, if u(t ̃ since u(t) = 1. In either case we have a contradiction which proves the assertion. The next proposition shows that, under suitable hypotheses, small perturbations of the input in the norm of C([t0 , T]) lead to small perturbations of the output in the norm of L1 ([t0 , T]) if we keep the initial value u0 fixed. Proposition 3.2.6 (Continuous dependence on inputs). Let σ : [t0 , T] → ℝm be a continuous input with the property that σ(t) ≠ β in any local maximum t of σ, and σ(t) ≠ α in any local minimum t of σ. Then for every ε > 0 there exists a δ > 0 such that 󵄨 ̃ 󵄨󵄨󵄨󵄨 < δ ‖σ − σ‖̃ C = max 󵄨󵄨󵄨σ(t) − σ(t) t ≤t≤T 0

(3.2.6)

implies T

󵄨 ̃ 󵄨󵄨󵄨󵄨 dt < ε, ‖u − u‖̃ L1 = ∫󵄨󵄨󵄨u(t) − u(t)

(3.2.7)

t0

̃ 0 ). provided that u(t0 ) = u(t Proof. For the sake of definiteness we put u0 = 0, hence σ(t0 ) ≤ β. Assume that σ(t) < β for t0 ≤ t ≤ T. Choose δ := β − ‖σ‖C ; then ‖σ − σ‖̃ C < δ implies ̃ ̃ σ(t) < β on [t0 , T]. Consequently, u(t) ≡ u(t) = 0 on [t0 , T], and so (3.2.6) is trivially satisfied. Assume now that σ(t ∗ ) = β for some t ∗ ∈ [t0 , T]. Then we recursively get finitely many points10 τ1 := inf{t : t0 ≤ t ≤ T, σ(t) = β},

τ3 := inf{t : τ2 ≤ t ≤ T, σ(t) = β}, ...,

τ2 := inf{t : τ1 ≤ t ≤ T, σ(t) = α}, ...,

τk := inf{t : τk−1 ≤ t ≤ T, σ(t) = α resp. σ(t) = β}.

10 The continuity of σ implies that this may indeed happen only for finitely many points τ1 , . . . , τk .

136 | 3 Equations with nonlinear differentials Here τk ≠ T, because σ(T) = β would imply that T is a local maximum of σ, while σ(T) = α would imply that T is a local minimum of σ, contradicting our hypotheses. Consider the intervals [τ1 − ε/3k, τ1 + ε/3k], [τ2 − ε/3k, τ2 + ε/3k], . . . , [τk − ε/3k, τk + ε/3k], where we choose ε > 0 so small that all these intervals are disjoint and (possibly except for the first one in case τ1 = t0 ) contained in [t0 , T]. Now we put δ1󸀠 := max{σ(s) : τ1 ≤ s ≤ τ1 + ε/3k} − β, δ1󸀠󸀠 := β − max{σ(s) : t0 ≤ s ≤ τ1 − ε/3k}, δ2󸀠 := α − min{σ(s) : τ2 ≤ s ≤ τ2 + ε/3k}, δ2󸀠󸀠 := min{σ(s) : τ1 + 3ε/3k ≤ s ≤ τ2 − ε/3k} − α, ... and so on until defining δk󸀠 and δk󸀠󸀠 analogously. Moreover, we define11 δ1 := min{δ1󸀠 , δ1󸀠󸀠 } if τ1 ≠ t0 and δ1 := δ1󸀠 otherwise, δ2 := min{δ2󸀠 , δ2󸀠󸀠 }, and so on until δk := min{δk󸀠 , δk󸀠󸀠 }. Finally, we put δk+1 := {

min{σ(s) : τk + ε/3k ≤ s ≤ T} − α

if σ(τk ) = β,

β − max{σ(s) : τk + ε/3k ≤ s ≤ T}

if σ(τk ) = α

and δ := min{δ1 , δ2 , . . . , δk+1 }. Fix σ̃ ∈ C([t0 , T]) with (3.2.6), and suppose that τ1 ≠ t0 . ̃ < β for all t ∈ [t0 , τ1 − ε/3k], and σ(t ̃ 1 ) > β for some t1 ∈ (τ1 − ε/3k, τ1 + ε/3k]. Then σ(t) ̃ ̃ 2 ) < α for some t2 ∈ (τ2 − Similarly, σ(t) > α for all t ∈ [τ1 + ε/3k, τ2 − ε/3k], and σ(t ε/3k, τ2 + ε/3k]. Applying this reasoning to j = 1, . . . , k we see that u = ũ outside the intervals [τj − ε/3k, τj + ε/3k] for j = 1, . . . , k. Therefore, for calculating the norm ‖u − u‖̃ L1 we have to take into account only integrals over these intervals; as a result we get T

󵄨 ̃ 󵄨󵄨󵄨󵄨 dt = ∫󵄨󵄨󵄨u(t) − u(t)

max{t0 ,τ1 −ε/3k}

τ1 +ε/3k





t0

t0

τ2 −ε/3k

+

󵄨󵄨 ̃ 󵄨󵄨󵄨󵄨 dt + 󵄨󵄨u(t) − u(t)

max{t0 ,τ1 −ε/3k}

󵄨 ̃ 󵄨󵄨󵄨󵄨 dt + ⋅ ⋅ ⋅ + ∫ 󵄨󵄨󵄨u(t) − u(t)

τ1 +ε/3k

󵄨󵄨 ̃ 󵄨󵄨󵄨󵄨 dt 󵄨󵄨u(t) − u(t)

T

2ε 󵄨 ̃ 󵄨󵄨󵄨󵄨 dt ≤ k < ε, ∫ 󵄨󵄨󵄨u(t) − u(t) 3k

τk +ε/3k

and so we are done. 11 Observe that δj󸀠 > 0 for j = 1, . . . , k, since σ has not a local maximum at τj for j odd, and not a local minimum at τj for j even.

3.2 Equations with relay | 137

In some circumstances it is useful to describe how “close” (or “distant”) two inputs or outputs of a system are. Of course, this depends on the metric we use for measuring distances. For relay systems a suitable distance was proposed in [75], where the “closedness” between two functions f and g was measured as Hausdorff distance between their graphs Γf and Γg , i. e., ρΓ (f , g) := inf{ε > 0 : Γf ⊆ Γg + Bε , Γg ⊆ Γf + Bε },

(3.2.8)

with Bε denoting the ball around zero with radius ε.12 Observe that, compared with the supremum metric or integral metric, the distance (3.2.8) has the advantage to make sense also when the functions f and g have different domains of definition. But even in case of a common domain of definition, all these three metrics have in general different values. To show this, it suffices to consider the supremum metric 󵄨 󵄨 ‖f − g‖∞ := sup 󵄨󵄨󵄨f (t) − g(t)󵄨󵄨󵄨 a≤t≤b

(3.2.9)

and the integral metric b

󵄨 󵄨 ‖f − g‖L1 := ∫󵄨󵄨󵄨f (t) − g(t)󵄨󵄨󵄨 dt

(3.2.10)

a

on an interval [a, b]. Here is an example. Example 3.2.7. We take [a, b] = [0, 2π] and consider the relay with threshold values −2 ̃ := 2 sin(t + π/10), and initial data u(0) = u(0) ̃ and 2, inputs σ(t) := 2 sin t and σ(t) = 0, respectively. The output function (3.2.2) corresponding to σ is then 0 { { { u(t) = χ(π/2,3π/2] (t) = { 1 { { { 0

for 0 ≤ t ≤ π/2, for π/2 < t ≤ 3π/2, if 3π/2 < t ≤ 2π,

while the output function (3.2.2) corresponding to σ̃ is 0 { { { ̃ ={ 1 u(t) { { { 0

for 0 ≤ t ≤ 2π/5, for 2π/5 < t ≤ 7π/5, if 7π/5 < t ≤ 2π.

12 Strictly speaking, (3.2.8) is not a metric, because the Hausdorff distance between two sets measures the distance between their closures, and the graphs of two functions may be different, but have the same closure.

138 | 3 Equations with nonlinear differentials ̃ Both outputs satisfy the initial condition u(0) = u(0) = 0. A straightforward calculation shows then that π π ρΓ (u, u)̃ = , ‖u − u‖̃ ∞ = 1, ‖u − u‖̃ L1 = . 10 5 For general f and g, it is clear that ρΓ (f , g) ≤ ‖f − g‖∞ ,

‖f − g‖L1 ≤ (b − a)‖f − g‖∞ .

However, we do not have the converse inequalities ‖f −g‖∞ ≤ cρΓ (f , g) or ‖f −g‖L1 ≤ cρΓ (f , g) for any constant c > 0. For example, for f , g : [1 + ε, 2] → ℝ defined by 1 f (t) := , t we have ρΓ (f , g) = 1,

‖f − g‖∞ =

g(t) := 1 , (1 + ε)ε

1 t−1 ‖f − g‖L1 = log

1+ε . 2ε

So both terms ‖f − g‖∞ and ‖f − g‖L1 may be taken arbitrarily large by choosing ε sufficiently small, while ρΓ (f , g) does not depend on ε. Interestingly, an estimate of the form ρΓ (f , g) ≤ c‖f − g‖L1 is not true either. To see this, fix ε ∈ (0, 1) and consider the functions f (t) ≡ 0 and 1

1 g(t) := χ[0,ε] (t) = { ε ε 0

for 0 ≤ t ≤ ε, for ε < t ≤ 1.

Then ‖f − g‖L1 = ‖g‖L1 = 1 does not depend on ε, while ρΓ (f , g) = 1/ε may be made arbitrarily large by choosing ε sufficiently small. Proposition 3.2.8 (Continuous dependence on inputs). Under the hypotheses of Proposition 3.2.6, the output of a relay function depends continuously on its input, where as before we consider the supremum metric (3.2.9) for inputs and the Hausdorff distance (3.2.8) for outputs. Proposition 3.2.8 is proved in the same way as Proposition 3.2.6, with the only difference that it suffices to consider intervals of the form [τi − ε/2, τi + ε/2] here. We continue with the study of the local description (3.2.3) of a relay function. In view of (3.0.2) and (3.1.7), we may write (3.0.2) and (3.2.3) in the form 0 { { { u(t + dt) = { 1 { { { u(t) ∗

if σ(t) ≤ α, if σ(t) ≥ β,

(3.2.11)

if α < σ(t) < β.

Observe that, for the description of a relay with threshold values α and β in the locally explicit form (3.2.11), the so-called domain of admissible states Ω(α, β) of this relay consists of points (σ, u) ∈ ℝ2 lying on the horizontal rays (−∞, β]×{0} and [α, ∞)× {1}. This gives rise to the following

3.2 Equations with relay | 139

Definition 3.2.9. For (σ, u) ∈ Ω(α, β), we denote by Rtt0 (α, β, σ)(u0 ) the solution of (3.2.11) which satisfies the initial condition u(t0 ) = u0 and call Rtt0 (α, β, σ)(u0 ) the relay resolvent of (σ, u0 ). Recall that a switching point of a relay is a point discontinuity of its output. We have seen that a relay may be described by the explicit method of Krasnosel’sky– Pokrovsky, or as a locally explicit equation. The following theorem shows that these two descriptions are basically equivalent. Theorem 3.2.10. Let u1 and u2 be two solutions of the equations (3.2.2) and (3.2.11) which both satisfy the initial condition u1 (t0 ) = u2 (t0 ) = u0 . Let (σ0 , u0 ) be an admissible state of the corresponding relay in the sense introduced above. Then u1 (t) = u2 (t) for all t ∈ [t0 , T], except for the switching points of the relay. Proof. Let E be the set of all t ∈ [t0 , T] which are not switching points of the relay in the description (3.2.2) and (3.2.11). It is not hard to see that both functions u1 and u2 are continuous on E. We have to show that u1 (t) = u2 (t) for all t ∈ E. If this is not true we find a minimal point at which this fails, i. e., t1 ∈ [t0 , T] satisfying t1 = inf{t ∈ E : u1 (t) ≠ u2 (t)}.

(3.2.12)

Observe that t1 cannot belong to E, but the continuity of u1 and u2 implies that u1 (t1 ) = u2 (t1 ). We distinguish the three cases σ(t1 ) ≤ α, σ(t1 ) ≥ β, and α < σ(t1 ) < β. 1st case: σ(t1 ) ≤ α. Then u2 (t1 +dt) = 0 for all sufficiently small dt > 0, see (3.2.11). Now, in case σ(t1 ) < α we also have σ(t1 + dt) < α, by continuity, and so u1 (t1 + dt) = 0, by (3.2.2). On the other hand, in case σ(t1 ) = α we have σ(t1 + dt) < β, again by continuity, and so u1 (t1 + dt) = 0, by the first line in (3.2.2). Summarizing we have shown that in this case we have u1 (t1 + dt) = u2 (t1 + dt)

(3.2.13)

for all sufficiently small dt > 0. 2nd case: σ(t1 ) ≥ β. This case is treated analogously to the first case, using now the second line in (3.2.2) for proving (3.2.13). 3rd case: α < σ(t1 ) < β. Then our continuity argument shows that also α < σ(t1 +dt) < β for all sufficiently small dt > 0. Since u1 (t1 ) = u2 (t1 ), (3.2.13) follows now from the third line in (3.2.2) and (3.2.11). In any case we have proved (3.2.13), contradicting our definition (3.2.12) of t1 , and so the proof is complete. In the following propositions we prove 5 useful properties of the family of the functions Rtt0 (α, β, σ).

140 | 3 Equations with nonlinear differentials Proposition 3.2.11 (Shift invariance). The equality t Rt−c t0 −c (α, β, Sc σ)(u0 ) = Rt0 (α, β, σ)(u0 )

(3.2.14)

holds for all t ∈ [t0 , T], where Sc denotes the shift operator Sc σ(t) := σ(t + c).

(3.2.15)

Proof. For t0 ≤ t ≤ T we put Rtt0 (α, β, σ)(u0 ) =: u(t). We denote by u : [t0 − c, T − c] → ℝ the solution of (3.2.11) corresponding to the input function Sc σ and the initial condition u(t0 − c) = u0 , i. e., 0 { { { u(t + dt) + o(dt) = { 1 { { { u(t)

if σ(t) ≤ α, if σ(t) ≥ β,

(3.2.16)

if α < σ(t) < β.

Replacing in (3.2.16) t by t − c we see that the function t 󳨃→ u(t − c) solves equation (3.2.11). Moreover, it also satisfies the initial condition u(t0 − c) = u(t0 ). So the existence and uniqueness theorem for solutions of (3.2.11) implies that u(t − c) = u(t) on [t0 , T] which is nothing else but (3.2.13). ̂ on [t0 , T], then also Proposition 3.2.12 (Volterra property). If σ(t) ≡ σ(t) ̂ 0) Rtt0 (α, β, σ)(u0 ) ≡ Rtt0 (α, β, σ)(u on [t0 , T]. Proof. Our assumption on σ and σ̂ implies that both functions t 󳨃→ Rtt0 (α, β, σ)(u0 ) and ̂ 0 ) solve (3.2.11) which proves the statement. t 󳨃→ Rtt0 (α, β, σ)(u Proposition 3.2.13 (Semigroup property). The equality t

Rtt1 (α, β, σ)Rt10 (α, β, σ)(u0 ) = Rtt0 (α, β, σ)(u0 )

(3.2.17)

is true for t0 ≤ t1 ≤ t. Proof. Suppose that the left-hand side of (3.2.17) is defined, and denote it by t

Rtt1 (α, β, σ)Rt10 (α, β, σ)(u0 ) =: u(t). From the existence and uniqueness theorem for solutions of (3.2.11) it follows that there exist uniquely determined functions ϕ and ψ which solve (3.2.11) and satisfy ϕ(t0 ) = u0 ,

ϕ(t1 ) = u1 ,

ψ(t1 ) = u1 ,

ψ(t) = u.

3.2 Equations with relay | 141

This implies in turn that there exist a uniquely determined function φ : [t0 , T] → ℝ which solves (3.2.11) and satisfies φ(t0 ) = u0 ,

φ(t) ≡ u(t).

Consequently, the right-hand side of (3.2.17) is well-defined and coincides with u(t), and so we are done. Proposition 3.2.14 (Statistical property). The equality θt+(1−θ)t0

Rt0

(α, β, σ)(u0 ) = Rtt0 (α, β, σθ )(u0 )

(3.2.18)

holds for all t ∈ [t0 , T], where θ > 0 and σθ (t) := σ(θt + (1 − θ)t0 ).

(3.2.19)

Proof. Denoting by u and uθ the solutions of (3.2.11) which correspond to σ and σθ , respectively, we have to show that u(θt + (1 − θ)t0 ) = uθ (t)

(t0 ≤ t ≤ T).

(3.2.20)

Replacing in (3.2.11) the argument t by the argument θt + (1 − θ)t0 , we see that both functions occurring in (3.2.20) are solutions corresponding to the input σθ . Moreover, uθ (t0 ) = u(θt0 + (1 − θ)t0 ) = u0 . So (3.2.18) follows from our uniqueness result for solutions of (3.2.11). Proposition 3.2.15 (Controllability). Given (σ0 , u0 ), (σ1 , u1 ) ∈ Ω(α, β) there exists a continuous input function σ : [t0 , t1 ] → ℝ such that σ(t0 ) = σ0 ,

σ(t1 ) = σ1 ,

t

Rt10 (α, β, σ)(u0 ) = u1

(3.2.21)

i. e., σ joins the admissible states (σ0 , u0 ) and (σ1 , u1 ). Proof. For the proof we have to distinguish the cases u1 = 1 and u1 = 0. 1st case: u1 = 1. In this case we necessarily have σ1 ≥ α. Then we take as σ a quadratic polynomial whose graph is a concave parabola passing through (t0 , σ0 ) and (t1 , σ1 ), and having its maximum at the point (τ, max{σ0 , σ1 } + β − α), where t0 < τ < t1 . This function has the required properties (3.2.21), since at the moment τ the input value is not less than the upper threshold value β, so for t > τ the relay is switched on (u = 1) until the moment when the input reaches the lower threshold value α. In particular, u(t1 ) = u1 = 1. 2nd case: u1 = 0. In this case we proceed analogously and take as σ a quadratic polynomial whose graph is a convex parabola passing through (t0 , σ0 ) and (t1 , σ1 ), and having its minimum at the point (τ, min{σ0 , σ1 } + α − β), where again t0 < τ < t1 .

142 | 3 Equations with nonlinear differentials Following [89], we consider now an asymptotic version of the relay resolvent introduced in Definition 3.2.9. Definition 3.2.16. We define a function Rt−∞ (α, β, σ)(u0 ) as follows. If there exists some t1 < t such that σ(t1 ) ≤ α or σ(t1 ) ≥ β, we put Rt−∞ (α, β, σ)(u0 ) := {

Rtt1 (α, β, σ)(0) Rtt1 (α, β, σ)(1)

for σ(t1 ) ≤ α, for σ(t1 ) ≥ β.

(3.2.22)

On the other hand, if α < σ(t1 ) < β for all t1 < t, we put Rt−∞ (α, β, σ)(u0 ) := u0 . For completeness, we still introduce the convention R−∞ −∞ (α, β, σ)(u0 ) := u0 . In what follows, we call Rt−∞ (α, β, σ)(u0 ) the asymptotic relay resolvent of (σ, u0 ). Of course, we have to show that (3.2.22) is well-defined, i. e., independent of the choice of t1 . So fix some t2 < t such that σ(t2 ) ≤ α or σ(t2 ) ≥ β. We claim that Rtt2 (α, β, σ)(u2 ) = Rtt1 (α, β, σ)(u1 ),

(3.2.23)

where ui = 0 for σ(ti ) ≤ α and ui = 1 for σ(ti ) ≥ β (i = 1, 2). For definiteness we assume that t2 ≤ t1 . Then Proposition 3.2.13 implies that t

Rtt2 (α, β, σ)(u2 ) = Rtt1 (α, β, σ)Rt12 (α, β, σ)(u2 ).

(3.2.24)

Since σ(t1 ) ≤ α or σ(t1 ) ≥ β at t1 < t, we further obtain13 t

Rtt1 (α, β, σ)Rt12 (α, β, σ)(u2 ) = Rtt1 (α, β, σ)(u1 ).

(3.2.25)

Combining (3.2.24) and (3.2.25) we see that (3.2.23) holds, and so our definition of Rt−∞ (α, β, σ) is correct. In the following propositions we summarize some properties of the family of functions Rt−∞ (α, β, σ), as we have done before for Rtt0 (α, β, σ). Proposition 3.2.17 (Shift invariance). The equality t Rt−c −∞ (α, β, Sc σ)(u0 ) = R−∞ (α, β, σ)(u0 )

holds for all t ≤ T, where Sc denotes the shift operator (3.2.15). 13 We remark that the value u1 on the right-hand side of (3.2.25) does not necessarily coincide with t Rt1 (α, β, σ)(u2 ). 2

3.2 Equations with relay | 143

Proof. In case t = −∞ we have, by definition, −∞ R−∞ −∞ (α, β, Sc σ)(u0 ) = R−∞ (α, β, σ)(u0 ) = u0 .

If α < σ(t1 ) < β for all t1 < t we put t 1 := t1 − c. For any t 1 < t − c we have then α < Sc σ(t 1 ) < β, since Sc σ(t 1 ) = σ(t 1 + c) = σ(t1 ). Consequently, Definition 3.2.16 shows that t Rt−c −∞ (α, β, Sc σ)(u0 ) = R−∞ (α, β, σ)(u0 ) = u0 .

It remains to analyze the case when σ(t1 ) ≤ α or σ(t1 ) ≥ β for some t1 < t. Putting again t 1 := t1 − c, we obtain then t 1 < t − c, as well as Sc σ(t 1 ) ≤ α or Sc σ(t 1 ) ≥ β, since Sc σ(t 1 ) = σ(t1 ). Consequently, Definition 3.2.16 shows that14 t−c t−c Rt−c −∞ (α, β, Sc σ)(u0 ) = Rt (α, β, Sc σ)(u1 ) = Rt1 −c (α, β, Sc σ)(u1 ) 1

=

Rtt1 (α, β, σ)(u1 )

= Rt−∞ (α, β, σ)(u0 ),

where u1 = 0 in case σ(t1 ) ≤ α, and u1 = 1 in case σ(t1 ) ≥ β. ̂ on (−∞, T], then also Proposition 3.2.18 (Volterra property). If σ(t) ≡ σ(t) ̂ 0) Rt−∞ (α, β, σ)(u0 ) ≡ Rt−∞ (α, β, σ)(u on (−∞, T]. Proof. This is an obvious consequence of Proposition 3.2.12. Proposition 3.2.19 (Semigroup property). The equality Rtτ (α, β, σ)Rτ−∞ (α, β, σ)(u0 ) = Rt−∞ (α, β, σ)(u0 )

(3.2.26)

is true for −∞ ≤ τ ≤ t. Proof. In case τ = −∞ the assertion follows from R−∞ −∞ (α, β, σ)(u0 ) = u0 . So let τ > −∞. If there exists t1 < τ such that σ(t1 ) ≤ α or σ(t1 ) ≥ β we get Rτ−∞ (α, β, σ)(u0 ) = Rτt1 (α, β, σ)(u1 ) and Rt−∞ (α, β, σ)(u0 ) = Rtt1 (α, β, σ)(u1 ), where u1 = 0 or u1 = 1. These equalities, together with Proposition 3.2.13, imply then (3.2.26) 14 For the third equality sign we have used (3.2.13).

144 | 3 Equations with nonlinear differentials It remains to prove (3.2.26) in the case that τ > −∞ and α < σ(s) < β for all s ∈ (−∞, τ). Then Rtτ (α, β, σ)Rτ−∞ (α, β, σ)(u0 ) = Rtτ (α, β, σ)(u0 ),

(3.2.27)

since Rτ−∞ (α, β, σ)(u0 ) = u0 . In view of (3.2.27) for the proof of (3.2.26) it suffices to show that Rtτ (α, β, σ)(u0 ) = Rt−∞ (α, β, σ)(u0 ).

(3.2.28)

If, in addition to our hypothesis α < σ(s) < β for all s ∈ (−∞, τ), we also know that α < σ(s) < β for all s ∈ [τ, t), we deduce that Rtτ (α, β, σ)(u0 ) = u0 = Rt−∞ (α, β, σ)(u0 ) which is (3.2.28). Conversely, if there exists t1 ∈ [τ, t) such that σ(t1 ) ≤ α or σ(t1 ) ≥ β we get, by our definition (3.2.22) of Rt−∞ (α, β, σ), Rt−∞ (α, β, σ)(u0 ) = Rtt1 (α, β, σ)(u1 ),

(3.2.29)

where u1 = 0 in case σ(t1 ) ≤ α and u1 = 1 in case σ(t1 ) ≥ β. But again the semigroup property proved in Proposition 3.2.13 shows that Rtτ (α, β, σ)(u0 ) = Rtt1 (α, β, σ)Rtτ1 (α, β, σ)(u0 ). This equality may be rewritten in the form Rtτ (α, β, σ)(u0 ) = Rtt1 (α, β, σ)(u1 ), and this together with (3.2.29) implies (3.2.28). So we have proved (3.2.26) in all possible cases. Proposition 3.2.20 (Statistical property). The equality θt+(1−θ)t0

R−∞

(α, β, σ)(u0 ) = Rt−∞ (α, β, σθ )(u0 )

(3.2.30)

holds for all t ∈ [−∞, T], where θ > 0 and σθ is given by (3.2.19). Proof. In case t = −∞ both sides of (3.2.30) coincide with u0 , so let t > −∞. Then for the point t1 appearing in Definition 3.2.16 we take t1 := θtθ + (1 − θ)t0 for θt+(1−θ)t0 R−∞ (α, β, σ)(u0 ), where tθ is the corresponding point for Rt−∞ (α, β, σθ )(u0 ). So we have θt+(1−θ)t0

R−∞

(α, β, σ)(u0 ) = Rtθtθ +(1−θ)t0 (α, β, σ)(u1 )

and Rt−∞ (α, β, σθ )(u0 ) = Rttθ (α, β, σθ )(u1 ).

3.2 Equations with relay | 145

As before, we distinguish two cases. 1st case: α < σθ (tθ ) < β for all tθ < t, hence15 α < σ(t1 ) < β for all t1 < θt + (1 − θ)t0 . Then both sides of (3.2.30) coincide with u0 , and so (3.2.30) is true. 2nd case: σθ (tθ ) ≤ α or σθ (tθ ) ≥ β for some tθ < t, hence σ(t1 ) ≤ α resp. σ(t1 ) ≥ β for t1 = θtθ + (1 − θ)t0 < θt + (1 − θ)t0 . Then for the proof of (3.2.30) we have to show that θt+(1−θ)t0

Rt1

(α, β, σθ )(u1 ) = Rttθ (α, β, σθ )(u1 ),

(3.2.31)

where u1 = 0 in case σθ (tθ ) ≤ α and u1 = 1 in case σθ (tθ ) ≥ β. Denoting as in the proof of Proposition 3.2.14 by u and uθ the solutions of (3.2.11) which correspond to σ and σθ , respectively, we can prove in the same way as in (3.2.20) that u(θt + (1 − θ)t0 ) = uθ (t)

(−∞ < t ≤ T).

This shows that (3.2.31) is true, and so also (3.2.30). Proposition 3.2.21 (Controllability). Given (σ0 , u0 ), (σ1 , u1 ) ∈ Ω(α, β) there exists a continuous input function σ : (−∞, t1 ] → ℝ such that σ(t0 ) = σ0 ,

σ(t1 ) = σ1 ,

t

1 R−∞ (α, β, σ)(u0 ) = u1

i. e., σ joins the admissible states (σ0 , u0 ) and (σ1 , u1 ). Proof. For the proof we take any continuous function σ : (−∞, t1 ] → ℝ which satisfies σ(t1 ) = σ1 (t1 ) and has the property that there exists some t∗ < t1 such that σ(t∗ ) = {

α

if u1 = 0,

β

if u1 = 1,

and α < σ(t) < β for t∗ < t < t1 . We close this subsection with a statement on the generation of periodic outputs which is particularly useful in applications. Theorem 3.2.22. Suppose that σ : ℝ → ℝ is a T-periodic input, and u0 ∈ {0, 1}. Then the corresponding output u(t) := Rt−∞ (α, β, σ)(u0 ) is also T-periodic. If σ(t∗ ) ∈ (−∞, α] ∪ [β, ∞) for some t∗ , the output u does not depend on u0 . Conversely, if α < t∗ < β for all t∗ , there are two (constant) T-periodic outputs corresponding to σ, viz. u(t) ≡ 0 if u0 = 0, and u(t) ≡ 1 if u0 = 1. 15 This follows from σ(t1 ) = σ(θtθ + (1 − θ)t0 ) = σθ (tθ ).

146 | 3 Equations with nonlinear differentials Proof. The T-periodicity of the output function u follows from the equality t−T t Rt−T −∞ (α, β, σ)(u0 ) = R−∞ (α, β, ST σ)(u0 ) = R−∞ (α, β, σ)(u0 ),

where ST σ(t) = σ(t + T), see (3.2.15). Suppose that σ(t∗ ) ∈ (−∞, α] ∪ [β, ∞) for some t∗ ; for the sake of definiteness let σ(t∗ ) ≥ β. Then u(t) = Rtt∗ (α, β, σ)(1) for all t > t∗ . Since a T-periodic function u is completely determined on the semi-axis (t∗ , ∞), we conclude that u(t) does not depend on u0 . The last assertion is an immediate consequence of the definition of Rt−∞ (α, β, σ).

3.2.4 Generalized relays: basic properties An important generalization of Definition 3.2.1 consists in replacing the “bang-bang” output y which can take only the values 0 and 1 by arbitrary continuous functions. This leads to the following Definition 3.2.23. Let α and β > α, and let f : (−∞, β] → ℝ and g : [α, ∞) → ℝ be continuous functions satisfying f (x) ≠ g(x) on (α, β). A generalized relay is a transducer which is described by equation (3.0.2), where the nonlinear differential D(t, y, dt) has, for dt > 0, the form { { { { { { D(t, y(t), dt) = { { { { { { {

f [σ(t + dt)] − f [σ(t)]

if σ(t) < β and y(t) = f [σ(t)],

g[σ(t + dt)] − f [σ(t)]

if σ(t) ≥ β and y(t) = f [σ(t)],

g[σ(t + dt)] − g[σ(t)]

if σ(t) > α and y(t) = g[σ(t)],

f [σ(t + dt)] − g[σ(t)]

if σ(t) ≤ α and y(t) = g[σ(t)].

(3.2.32)

Moreover, we put D(t, y, 0) := 0. We define U := {(t, u) : σ(t) ≤ β, u = f [σ(t)]} ∪ {(t, u) : σ(t) ≥ α, u = g[σ(t)]}.

(3.2.33)

As before, the numbers α and β are called the lower resp. upper threshold value of the relay. A comparison with Definition 3.2.1 shows that we obtain the function (3.2.3) for f (x) ≡ 0 and g(x) ≡ 1. So the following existence and uniqueness result also covers Theorem 3.2.2. Theorem 3.2.24 (Existence and uniqueness). For any continuous input σ : [t0 , T) → ℝm (t0 < T ≤ ∞), the problem (3.0.2) with D(t, y, dt) given by (3.2.32), has a unique strong solution u : [t0 , T) → ℝm satisfying (3.1.6). Proof. We show that a generalized relay equation is locally explicit. First of all, the function (3.2.32) is left-continuous on (0, T − t), being a composition of continuous

3.2 Equations with relay | 147

functions. Let f [σ(t)] =: u, and consider the first two cases σ(t) < β and σ(t) ≥ β in (3.2.32). In the first case we have γtt+dt = f [σ(t + dt)] for dt > 0, where γ denotes the quasicurrent (3.1.4). Put Δ := {

T −t

if σ(τ) < β for all τ ∈ [t, T),

min{τ ∈ [t, T) : σ(τ) = β} − t

if σ(τ) ≥ β for some τ ∈ [t, T).

Obviously, Δ > 0. Moreover, for t ≤ t1 ≤ t2 < t + Δ we have σ(t1 ) < β, hence t

t

t

t

γt12 γt 1 u = γt12 f [σ(t1 )] = f [σ(t2 )] = γt 2 u. So for the number δ occurring in (3.1.5) we may take here δ = t + Δ − t1 . Similarly, in the second case we have γtt+dt = g[σ(t + dt)] for dt > 0. As before, we put now Δ := {

T −t

if σ(τ) > α for all τ ∈ [t, T),

min{τ ∈ [t, T) : σ(τ) = α} − t

if σ(τ) ≤ α for some τ ∈ [t, T).

Then σ(t1 ) > α for t ≤ t1 ≤ t2 < t + Δ, and so t

t

t

t

γt12 γt 1 u = γt12 g[σ(t1 )] = g[σ(t1 )] + g[σ(t2 )] − g[σ(t1 )] = γt 2 u. In the third and fourth case in (3.2.32), when u = g[σ(t)], the proof follows the same reasoning. Now we prove global solvability, which means that, for any t1 ∈ (t0 , T) and any solution φ of (3.2.32) the left-hand limit u1 := lim φ(t)

(3.2.34)

t→t1 −

exists, where (t1 , u1 ) ∈ U, see (3.2.33). If f [σ(t1 )] = g[σ(t1 )] we may only have φ(t) = f [σ(t)] or φ(t) = g[σ(t)], and in this case the limit (3.2.34) obviously exists and satisfies (t1 , u1 ) ∈ U. On the other hand, in case f [σ(t1 )] ≠ g[σ(t1 )] we also have f [σ(t)] ≠ g[σ(t)] for t1 − δ1 < t ≤ t1 and some δ1 > 0, since f and g are continuous. Clearly, σ(t1 ) ≠ α or σ(t1 ) ≠ β. For definiteness, suppose that σ(t1 ) ≠ α. Then also σ(t) ≠ α for t1 − δ2 < t ≤ t1 and some δ2 > 0, since σ is continuous. To show that the limit (3.2.34) exists we prove that any solution φ : [t0 , t1 ) → ℝm of (3.2.32) satisfies φ(t) = f [σ(t)] or φ(t) = g[σ(t)] for t close to t1 . Suppose that this false. Then we find a solution φ : [t0 , t1 ) → ℝm and points t2 and t3 such that t1 − δ < t2 < t3 < t1 ,

φ(t2 ) = g[σ(t2 )],

φ(t3 ) = f [σ(t3 )],

148 | 3 Equations with nonlinear differentials where we have put δ := min{δ1 , δ2 }. Let t4 := inf{t : t2 ≤ t ≤ t3 , φ(t) = f [σ(t)]}. then φ(t4 ) = g[σ(t4 )], because φ is left-continuous. Consequently, we can find a positive sequence (dtk )k satisfying, on the one hand, φ(t4 + dtk ) − φ(t4 ) = f [σ(t4 + dtk )] − g[σ(t4 )] = D(t4 , g[σ(t4 )], dtk ) + o(dtk ), and D(t4 , g[σ(t4 )], dtk ) = g[σ(t4 + dtk )] − g[σ(t4 )], on the other, since σ(t4 ) ≠ α. Passing to the limit, as k → ∞, we obtain g[σ(t4 )] = f [σ(t4 )], a contradiction. We conclude that the limit (3.2.34) exists and equals either g[σ(t1 )] or f [σ(t1 )], because (t1 , u1 ) ∈ U. Combining now Theorem 3.1.1, Theorem 3.1.2 and Theorem 3.1.6 completes the proof. Theorem 3.2.24 provides an existence and uniqueness result for strong solutions of a generalized relay problem. The following example shows that, apart from a unique strong solution, such a problem may have other solutions. Example 3.2.25. Consider the equation (3.0.2) with D(t, y, dt) given by (3.2.32), for α := −1 and β := 1, where f (x) = 0 for x ≤ 1 and g(x) = (x + 1)2 for x ≥ −1. Let σ(t) = t. We claim that both φ(t) ≡ 0 and ψ(t) = (t + 1)2 are solution, where φ is a strong solution, but ψ is not. Clearly, the zero function φ satisfies (3.1.2) on [−1, 1). On the other hand, (3.0.2) is also fulfilled for y = ψ(t). In fact, for t > −1 the left-hand side of (3.0.2) equals y(t + dt) − y(t) = (t + 1 + dt)2 − (t + 1)2 = 2(t + 1)dt + dt 2 , while the right-hand side of (3.0.2) equals D(t, y(t), dt) + o(dt) = (t + 1 + dt)2 − (t + 1)2 + o(dt) = 2(t + 1)dt + dt 2 + o(dt). For t = −1 in turn, (3.0.2) simply becomes dt 2 = o(dt) which is trivially true. So problem (3.0.2) has two solutions for this choice of D(t, y, dt). The reason for the non-uniqueness result in Example 3.2.25 is explained by the following Theorem 3.2.26 which excludes second solutions. Theorem 3.2.26. Suppose that the functions f and g in (3.2.32) satisfy the relation lim inf dt→0+

|f [σ(t + dt)] − g[σ(t + dt)]| >0 dt

(3.2.35)

for every t. Then each solution of the generalized relay equation (3.0.2)/(3.2.32) is a strong solution. Consequently, the problem (3.0.2)/(3.2.32)/(3.1.6) has a unique solution.

3.2 Equations with relay | 149

Proof. Suppose that there exists a solution ψ which is not a strong solution. Then we find a point t 󸀠 and positive sequence (dtk )k such that dtk → 0 and ψ(t 󸀠 + dtk ) − ψ(t 󸀠 ) − D(t 󸀠 , ψ(t 󸀠 ), dtk ) ≠ 0. Let φ be any strong solution16 of (3.0.2)/(3.2.32) satisfying φ(t 󸀠 ) = ψ(t 󸀠 ). Obviously, φ(t + dtk ) ≠ ψ(t 󸀠 + dtk ) for sufficiently large k, and 󸀠

󵄨 󵄨 󵄨 󵄨󵄨 󸀠 󸀠 󸀠 󸀠 󵄨󵄨ψ(t + dtk ) − φ(t + dtk )󵄨󵄨󵄨 = 󵄨󵄨󵄨f [σ(t + dtk )] − g[σ(t + dtk )]󵄨󵄨󵄨, since any solution can assume only the values f [σ(t)] or g[σ(t)]. On the other hand, ψ(t 󸀠 +dtk )−φ(t 󸀠 +dtk ) = ψ(t 󸀠 )+D(t 󸀠 , ψ(t 󸀠 ), dtk )+o(dtk )−φ(t 󸀠 )−D(t 󸀠 , φ(t 󸀠 ), dtk ) = o(dtk ), hence 󵄨󵄨 󵄨 󸀠 󸀠 󵄨󵄨f [σ(t + dtk )] − g[σ(t + dtk )]󵄨󵄨󵄨 = o(dtk ). But this implies |f [σ(t 󸀠 + dtk )] − g[σ(t 󸀠 + dtk )]| = 0, k→∞ dtk lim

contradicting (3.2.35). Clearly, (3.2.35) cannot hold for our choice of f and g in Example 3.2.25. In fact, at the point t = −1 we get lim inf dt→0+

|g(−1 + dt)| |f [σ(−1 + dt)] − g[σ(−1 + dt)]| = lim = 0, dt→0+ dt dt

since σ(t) = t. One of the most important properties of the relay described in Subsection 3.2.3 was monotonicity. Now we are going to formulate and prove two analogous monotonicity properties of generalized relays. Since the functions f and g in (3.2.32) may be arbitrarily chosen, it is not surprising that we have to impose some hypotheses on these functions to get the desired monotonicity properties of the corresponding relay. This will be done in Theorem 3.2.28 and Theorem 3.2.30 below. For technical reasons, we first reformulate our relay problem in the form f [σ(t + dt)] { { { u(t + dt) + o(dt) = { g[σ(t + dt)] { { { u(t)

if σ(t) ≤ α, if σ(t) ≥ β,

(3.2.36)

if α < σ(t) < β.

which is obviously equivalent to (3.0.2)/(3.2.32). In the sequel we assume throughout that the generalized relay has only strong solutions. 16 The existence of such a solution is guaranteed by Theorem 3.2.24.

150 | 3 Equations with nonlinear differentials Definition 3.2.27. Following [1] we say that a generalized relay is monotone with respect to inputs if the following is true: given two inputs σ and σ,̃ any point t0 in their ̃ 0) common domain of definition, and corresponding outputs u and u,̃ from u(t0 ) ≤ u(t ̃ ̃ for all t ≥ t0 . and σ(t) ≤ σ(t) for all t ≥ t0 it follows that u(t) ≤ u(t) Theorem 3.2.28. A generalized relay is monotone with respect to inputs if and only if the following three conditions on f and g are satisfied: (a) The function f is increasing on (−∞, β]. (b) The function g is increasing on [α, ∞). (c) The estimate f (x1 ) < g(x2 ) holds for any x1 , x2 ∈ (α, β). Proof. First we show that (a)/(b)/(c) are necessary for monotonicity. So assume that the generalized relay is monotone with respect to inputs in the sense of Definition 3.2.27, and suppose first that (a) is not true. Then we find points x1 and x2 such ̃ 0 ) = f (x0 ), and t1 := t0 + h, that x1 < x2 ≤ β and f (x1 ) > f (x2 ). Let x0 < x1 , u(t0 ) = u(t where h > 0. As inputs we choose the two functions x0 { { { σ(t) := { x1 { { { linear

for t = t0 , for t = t1 , otherwise,

x0 { { { ̃ := { x2 σ(t) { { { linear

for t = t0 , for t = t1 , otherwise.

̃ Then σ(t) ≤ σ(t) < β for t0 ≤ t < t1 . Consequently, the relay does not switch for ̃ 1 ). the given inputs on the interval [t0 , t1 ], which implies that u(t1 ) = f (x1 ) > f (x2 ) = u(t But this contradicts the assumed monotonicity of the relay, so we have proved (a). Property (b) for the function g is proved in the same way. Now suppose that (c) is not true. Then we find points x1 and x2 such that α < x1 , x2 < β and f (x1 ) ≥ g(x2 ). Since f (x) ≠ g(x) on (α, β), and both f and g are continuous, only two cases are possible. 1st case: f (x) > g(x) for all x ∈ (α, β). Fix any t0 ∈ ℝ and t1 > t0 , and let t2 := 21 (t0 + t1 ). We define a linear input σ and a piecewise linear input σ̃ on [t0 , t1 ] with peak in t2 such that the conditions ̃ 0 ) < σ(t1 ) = σ(t ̃ 1 ) < σ(t ̃ 2) = β α < σ(t0 ) = σ(t are fulfilled. Moreover, we put ̃ 0 ) = f [(α + β)/2]. u(t0 ) = u(t ̃ 2 ) = β it follows that From σ(t u(t) = f [σ(t)],

̃ ̃ = g[σ(t)] u(t) (t > t2 ).

But this implies that ̃ 1 ) = g[σ(t1 )] < f [σ(t1 )] = u(t1 ), u(t contradicting the assumed monotonicity of the relay.

3.2 Equations with relay | 151

2nd case: f (x) < g(x) for all x ∈ (α, β). Observe that then x1 > x2 , since otherwise the monotonicity of f and g on (α, β) would imply g(x2 ) ≤ f (x1 ) ≤ f (x2 ). Given t0 and t1 as before, we put now u(t0 ) = g(x2 ),

̃ 0 ) = f (x1 ), u(t

and define linear inputs σ and σ̃ on [t0 , t1 ] such that the conditions σ(t0 ) = x2 ,

̃ 0 ) = x1 , σ(t

̃ 1 ) = (x1 + x2 )/2 σ(t1 ) = σ(t

̃ < β for t0 ≤ t ≤ t1 . Consequently, the relay does not are fulfilled. Then α < σ(t) ≤ σ(t) switch for the given inputs on the interval [t0 , t1 ], which implies that ̃ 1 )] = u(t ̃ 1 ), u(t1 ) = g[σ(t1 )] = g[(x1 + x2 )/2] > f [(x1 + x2 )/2] = f [σ(t contradicting again the assumed monotonicity of the relay. So we have proved that (a)/(b)/(c) are necessary conditions for monotonicity. Now we show that the assumptions (a)/(b)/(c) are also sufficient for monotonicity. So assume that (a)/(b)/(c) hold, and suppose that the generalized relay is not monotone with respect to inputs in the sense of Definition 3.2.27. Then we find inputs σ and ̃ ̃ 0 ), and a point t ∗ > t0 such that σ̃ satisfying σ(t) ≤ σ(t), initial values u(t0 ) ≤ u(t ̃ ∗ ). Putting u(t ∗ ) > u(t ̃ t1 := inf{t ≥ t0 : u(t) > u(t)} the left-continuity of the outputs implies that ̃ 1 ). u(t1 ) ≤ u(t

(3.2.37)

Moreover, on a suitable interval (t1 , t1 + δ1 ) we have u(t) = f [σ(t)] or u(t) = g[σ(t)], ̃ ̃ ̃ = f [σ(t)] ̃ = g[σ(t)]. as well as u(t) or u(t) We consider all cases which are theoretically possible, and show that each of them leads to a contradiction. ̃ ̃ ̃ 1st case: u(t) = f [σ(t)] > f [σ(t)] = u(t). This case cannot occur, because σ(t) ≤ σ(t) and both f and g are monotonically increasing. ̃ ̃ 2nd case: u(t) = g[σ(t)] > g[σ(t)] = u(t). This case cannot occur either, for the same reason. ̃ ̃ 1 ) > α and σ(t1 ) < β, see (3.2.36). ̃ 3rd case: u(t) = f [σ(t)] > g[σ(t)] = u(t). Then σ(t Since the inputs σ and σ̃ are continuous, on a suitable interval (t1 , t1 + δ2 ) we have ̃ σ(t) > α and σ(t) < β. We claim that (a)/(b)/(c) implies f (x1 ) < g(x2 ) for x1 < β and x2 > α. In fact, choosing x1∗ > α and x2∗ < β such that x1 < x1∗ < β and α < x2∗ < x2 we obtain f (x1 ) ≤ f (x1∗ ) < g(x2∗ ) ≤ g(x2 ). This shows that also the third case cannot occur.

152 | 3 Equations with nonlinear differentials ̃ ̃ 1 ) < β. From ̃ 4th case: u(t) = g[σ(t)] > f [σ(t)] = u(t). Then σ(t1 ) > α and σ(t ̃ 1) < β α < σ(t1 ) ≤ σ(t it follows that u(t1 ) = g[σ(t1 )],

̃ 1 )]. ̃ 1 ) = f [σ(t u(t

̃ 1 )] < g[σ(t1 )] = u(t1 ), contrã 1 ) = f [σ(t But this together with (c) implies that u(t dicting (3.2.37). We have shown that all 4 cases lead to a contradiction, and so we have proved that (a)/(b)/(c) implies the monotonicity of the generalized relay with respect to inputs. We make some comments on Theorem 3.2.28. Obviously, the functions f (x) ≡ 0 and g(x) ≡ 1 satisfy (a)/(b)/(c), so Proposition 3.2.4 is contained as a special case in Theorem 3.2.28. If f and g are strictly increasing, Condition (c) follows from the simple assumption f (β) ≤ g(α). If f and g are merely increasing, the assumption f (β) ≤ g(α) is necessary for (c), while the assumption f (β) < g(α) is sufficient for (c). Now we are going to establish a parallel monotonicity result with respect to the threshold values of a generalized relay. Throughout the remaining part of this subsection we suppose that f and g are defined on the real line with f (x) ≠ g(x) for all x ∈ ℝ. Definition 3.2.29. We say that a generalized relay is monotone with respect to threshold values if the following is true: given threshold values α, α,̃ β and β̃ with α ≤ α̃ and β ≤ β,̃ any point t0 , a continuous input σ, and corresponding outputs u and u,̃ from ̃ 0 ) it follows that u(t) ≥ u(t) ̃ for all t ≥ t0 . u(t0 ) ≥ u(t Theorem 3.2.30. A generalized relay is monotone with respect to threshold values if and only if f (x) < g(x) for all x ∈ ℝ. Proof. Suppose first that the relay is monotone with respect to threshold values, and assume that there exists some x0 ∈ ℝ such that f (x0 ) ≥ g(x0 ). Since f (x) ≠ g(x), by our general hypothesis, this implies that f (x) > g(x) for all x ∈ ℝ. Let α < α̃ = σ(t0 ) < β = β̃ ̃ 0 ) = g[σ(t0 )]. Then (3.2.36) shows that and u(t0 ) = u(t ̃ u(t) = g[σ(t)] < f [σ(t)] = u(t)

(t0 < t < t0 + δ)

for some δ > 0. But this contradicts the assumed monotonicity of the relay with respect to threshold values. Conversely, suppose that f (x) < g(x) for all x ∈ ℝ, and assume that the relay is not monotone with respect to threshold values. Then we find α, α,̃ β and β̃ with α ≤ α̃ and

3.3 Stops and plays |

153

β ≤ β,̃ a point t0 , a continuous input σ, and corresponding outputs u and ũ such that ̃ 0 ), but u(t) < u(t) ̃ for some t > t0 . Putting u(t0 ) ≥ u(t ̃ t1 := inf{t ≥ t0 : u(t) < u(t)} ̃ 1 ). For t in some interval the left-continuity of the outputs implies that u(t1 ) ≥ u(t ̃ ̃ (t1 , t1 + δ) we have u(t) = f [σ(t)] or u(t) = g[σ(t)], as well as u(t) = f [σ(t)] or u(t) = ̃ = g[σ(t)]. But the fact that f (x) < g(x) implies that necessarily u(t) = f [σ(t)] and u(t) g[σ(t)]. Consequently, α ≤ α̃ < σ(t1 ) < β ≤ β,̃ ̃ 1 ), a contradiction. The proof is complete. hence u(t1 ) = f [σ(t1 )] ≥ g[σ(t1 )] = u(t We illustrate the difference between Theorem 3.2.28 and Theorem 3.2.30 by means of a simple example. Example 3.2.31. Consider a generalized relay with data f (x) := −x,

g(x) := 1 − x,

σ(t) := t,

̃ := 2t, σ(t)

α := 1,

β := 2,

̃ u(0) = u(0) = 0.

For this choice we obtain u(t) = {

−t

for 0 ≤ t ≤ 2,

1−t

for t > 2,

̃ ={ u(t)

−2t

for 0 ≤ t ≤ 1,

1 − 2t

for t > 1,

̃ for all t > 0. Observe that the hypotheses (a) and (b) of So we have u(t) > u(t) Theorem 3.2.28 are not satisfied, but the monotonicity of the threshold values follows from Theorem 3.2.30.

3.3 Stops and plays Stops and plays are special transducers which operate on continuous inputs. In this section we give first a phenomenological description and then a mathematical model of such transducers. 3.3.1 The stop operator From the phenomenological point of view, a stop on the interval17 [0, 1] is a transducer which associates to a monotonically increasing continuous input σ = σ(t) and an initial state u0 ∈ [0, 1] the output u(t) := min{u0 + σ(t) − σ(t0 ), 1}.

(3.3.1)

17 Here we consider only one-dimensional stops, although higher dimensional stops are also important in applications.

154 | 3 Equations with nonlinear differentials Analogously, for a monotonically decreasing continuous input σ = σ(t) and an initial state u0 ∈ [0, 1] the corresponding output is u(t) := max{u0 + σ(t) − σ(t0 ), 0}.

(3.3.2)

The generalization to piecewise monotone inputs σ is straightforward. In [20] it is shown that one may even extend correctly this definition, by means of functionalanalytic techniques, to arbitrary continuous inputs σ. Below we will provide a mathematical model of a stop in the form (3.0.2) which is equivalent to the phenomenological description just given and makes sense for any continuous input.18 To this end, we observe that for monotone inputs σ and small dt ≥ 0 we have σ(t + dt) − σ(t) { { { D(t, u, dt) = { σ(t + dt) − max{σ(s) : t ≤ s ≤ t + dt} { { { σ(t + dt) − min{σ(s) : t ≤ s ≤ t + dt}

if 0 < u < 1, if u = 1,

(3.3.3)

if u = 0.

For piecewise monotone inputs this formula therefore describes the local behaviour of the increment of the output signal. Consequently, the dependence of the output of a stop on a piecewise monotone input is globally described by the strong solutions of (3.0.2), with D(t, u, dt) given by (3.3.3). Here the set U of possible values (t, u) (see Subsection 3.1.1) is U = 𝒟(σ) × [0, 1]. We consider strong solutions of the problem (3.0.2)/(3.3.3) as model for the stop in case of arbitrary continuous inputs. For example, in the following Figure 3.2 the output u = u(t) of the stop, sketched by a black line, corresponds to the continuous input σ = σ(t) := −2 cos t, sketched by a dotted line, and the initial value u0 = 0.

Figure 3.2 18 This is the main advantage of our approach: we do not need a “detour” over monotone inputs, as the Krasnosel’sky–Pokrovsky approach.

3.3 Stops and plays | 155

Of course, a general problem is here again to find conditions for the existence (and possibly uniqueness) of strong solutions. The most general result in this spirit is given in the following Theorem 3.3.1 (Existence and uniqueness for stops). Let 0 ≤ u0 ≤ 1, and let σ : [t0 , T) → ℝ be continuous, where t0 < T ≤ ∞. Then the problem (3.0.2)/(3.3.3) has a unique strong solution u : [t0 , T) → ℝ which satisfies the initial condition (3.1.6). Proof. To simplify the notation, for t1 < t2 we use the shortcut σ[t1 , t2 ] := max{σ(s) : t1 ≤ s ≤ t2 }

(3.3.4)

in the sequel. Given (t0 , u0 ) ∈ U, we show that the function u(t) = γtt0 u0 is a strong solution of (3.0.2)/(3.3.3) on a suitable interval [t0 , t0 + a) (a > 0). To this end, we distinguish three cases. 1st case: 0 < u0 < 1. Then we have γtt0 u0 = u0 + σ(t) − σ(t0 ), so we may choose a > 0 in such a way that 0 < γtt0 u0 < 1 for t0 ≤ t < t0 + a. As a consequence, we obtain γtt+dt u0 − γtt0 u0 = σ(t + dt) − σ(t) = D(t, u(t), dt) (t0 ≤ t < t0 + a), 0 which shows that u(t) = γtt0 u0 is in fact a strong solution of (3.0.2)/(3.3.3) on [t0 , t0 + a). 2nd case: u0 = 1. Here we choose a > 0 in such a way that γtt0 u0 = 1 + σ(t) − σ[t0 , t] > 0

(t0 ≤ t < t0 + a)

If in this case γtt0 u0 = 1 for some t ∈ (t0 , t0 + a), it follows that σ(t) = σ[t0 , t], hence γtt+dt u0 − γtt0 u0 = 1 + σ(t + dt) − σ[t0 , t + dt] − 1 = σ(t + dt) − σ[t0 , t + dt] = D(t, u(t), dt). 0 On the other hand, if 0 < γtt0 u0 < 1 for some t ∈ (t0 , t0 + a), it follows that σ(t) < σ[t0 , t], and so also σ(t + dt) < σ[t0 , t] for sufficiently small dt > 0. This implies that σ[t0 , t + dt] = σ[t0 , t], so we conclude that γtt+dt u0 − γtt0 u0 = σ(t + dt) − σ(t) = D(t, u(t), dt). 0 3rd case: u0 = 0. This case is treated in the same way as the second case. Now we prove our assertion on global solvability; to show the existence of the limit lim u(t) = u1 ,

t→t1

(3.3.5)

we use the classical Cauchy criterion. If 0 < u(t) < 1 for all t belonging to some interval ̃ for a := t1 − t ̃ and obtain [t,̃ t1 ), we may apply the above reasoning to the point (t,̃ u(t)) ̃ ̃ u(t) = u(t) + σ(t) − σ(t), since strong solutions are unique. But this implies that 󵄨󵄨 󸀠 󵄨 󸀠 󸀠󸀠 󵄨 󸀠󸀠 󵄨 󵄨󵄨u(t ) − u(t )󵄨󵄨󵄨 = 󵄨󵄨󵄨σ(t ) − σ(t )󵄨󵄨󵄨 → 0

(t ̃ ≤ t 󸀠 , t 󸀠󸀠 < t1 , t 󸀠 , t 󸀠󸀠 → t1 −).

156 | 3 Equations with nonlinear differentials On the other hand, if such an interval [t,̃ t1 ) does not exist, we find a sequence (tn )n with tn → t1 − such that u(tn ) ∈ {0, 1}, where either u(tmk ) = 0 or u(tmk ) = 1 for some infinite subsequence of indices. For definiteness, suppose that the sequence (tnk )k is infinite. Since the function σ is continuous (actually, uniformly continuous) on [t0 , t1 ], we find some δ > 0 such that 󵄨 󵄨󵄨 󸀠 󸀠 󵄨󵄨σ(t) − σ[t , t]󵄨󵄨󵄨 < 1 (t1 − δ < t, t < t1 ). Now, if t1 − δ < tnp < t1 for some p, then u(t) = 1 + σ(t) − σ[tnp , t] tnp ≤ t < t1 ), as we have seen before. But both functions σ and σ[tnp , ⋅] are uniformly continuous on [tnp , t1 ]. Consequently, 󵄨󵄨 󸀠 󵄨 󸀠 󸀠󸀠 󵄨 󸀠󸀠 󸀠󸀠 󸀠 󵄨 󵄨󵄨u(t ) − u(t )󵄨󵄨󵄨 = 󵄨󵄨󵄨σ(t ) − σ(t ) + σ[tnp , t ] − σ[tnp , t ]󵄨󵄨󵄨 → 0

(t ̃ ≤ t 󸀠 , t 󸀠󸀠 < t1 , t 󸀠 , t 󸀠󸀠 → t1 −).

We conclude that the limit (3.3.5) exists and satisfies u1 ∈ [0, 1], since u(t) ∈ [0, 1]. Using now Theorem 3.1.1, Theorem 3.1.2, and Theorem 3.1.6 we have proved the assertion. 3.3.2 The play operator Apart from stop operators, so-called play operators provide another important example of a transducer which acts on continuous inputs. In this subsection we develop a theory for play operators which is parallel to what we have done in Subsection 3.3.1 for stop operators. A (as before, one-dimensional) play on an interval [0, h] (h > 0) is a transducer which associates to a monotonically increasing continuous input σ = σ(t) and an initial state u0 satisfying u0 − h ≤ σ(t0 ) ≤ u0 the output u(t) := max{u0 , σ(t)}. Analogously, for a monotonically decreasing continuous input σ = σ(t) and an initial state u0 satisfying u0 − h ≤ σ(t0 ) ≤ u0 the corresponding output is u(t) := min{u0 , σ(t) + h}. The generalization to piecewise monotone inputs σ is again straightforward, and also this definition may be generalized, by means of functional-analytic techniques, to arbitrary continuous inputs σ. Below we will provide a mathematical model of a play in the form (3.0.2) which is equivalent to the phenomenological description just given and makes sense for any

3.3 Stops and plays | 157

continuous input. To this end, we observe that for monotone inputs σ and small dt ≥ 0 we have max{σ(s) : t ≤ s ≤ t + dt} − σ(t) { { { D(t, u, dt) = { min{σ(s) : t ≤ s ≤ t + dt} − σ(t) { { { 0

if σ(t) = u, if σ(t) = u − h,

(3.3.6)

otherwise.

For piecewise monotone inputs this formula therefore describes the local behaviour of the increment of the output signal. Consequently, the dependence of the output of a play on a piecewise monotone input is globally described by the strong solutions of (3.0.2), with D(t, u, dt) given by (3.3.6). Here the set U of possible values (t, u) (see Subsection 3.1.1) is U = {(t, u) : u − h ≤ σ(t) ≤ u}. We consider strong solutions of the problem (3.0.2)/(3.3.6) as model for the play in case of arbitrary continuous inputs. For example, in the following Figure 3.3 the output u = u(t) of the play, sketched as a black line, corresponds to the continuous input σ = σ(t) := −2 cos t, sketched as a broken line, h := 1, and the initial value u0 = −1.

Figure 3.3

The following existence and uniqueness theorem for strong solutions is parallel to Theorem 3.3.1. Theorem 3.3.2 (Existence and uniqueness for plays). Let σ : [t0 , T) → ℝ be continuous, where t0 < T ≤ ∞. Then the problem (3.0.2)/(3.3.6) has a unique strong solution u : [t0 , T) → ℝ which satisfies the initial condition (3.1.6).

158 | 3 Equations with nonlinear differentials Proof. Given (t0 , u0 ) ∈ U, we show that the function u(t) = γtt0 u0 is a strong solution of (3.0.2)/(3.3.6) on a suitable interval [t0 , t0 + a) (a > 0). As in Theorem 3.3.1, we distinguish three cases. 1st case: u0 − h < σ(t0 ) < u0 . Then we have γtt0 u0 = u0 , so we may choose a > 0 in such a way that u0 − h < σ(t) < u0 for t0 ≤ t < t0 + a, because σ is continuous. As a consequence, we obtain γtt+dt u0 − γtt0 u0 = u0 − u0 = 0 = D(t, u(t), dt) (t0 ≤ t < t0 + a), 0 which shows that u(t) = γtt0 u0 is in fact a strong solution of (3.0.2)/(3.3.6) on [t0 , t0 + a). 2nd case: σ(t0 ) = u0 . Here we choose a > 0 in such a way that σ(t) > σ[t0 , t] − h (t0 ≤ t < t0 + a), where we have used again the shortcut (3.3.4). We have then γtt0 u0 = u0 + σ[t0 , t] − σ(t0 ) = σ[t0 , t] ≥ σ(t). If in this case σ(t) = γtt0 u0 for some t ∈ (t0 , t0 + a), it follows that σ(t) = σ[t0 , t], hence γtt+dt u0 − γtt0 u0 = σ[t0 , t + dt] − σ[t0 , t] = σ[t, t + dt] − σ(t) = D(t, u(t), dt). 0 On the other hand, if σ(t) < γtt0 u0 for some t ∈ (t0 , t0 + a), it follows that σ(t) < σ[t0 , t], and so also σ(t + dt) < σ[t0 , t] for sufficiently small dt > 0. This implies that σ[t0 , t + dt] = σ[t0 , t], so we conclude that γtt+dt u0 − γtt0 u0 = σ[t0 , t + dt] − σ[t0 , t] = D(t, u(t), dt). 0 3rd case: σ(t0 ) = u0 − h. This case is treated in the same way as the second case. We have shown that in any case the function u(t) = γtt0 u0 is a strong solution of (3.0.2)/(3.3.6); uniqueness follows from Theorem 3.1.2. It remains to prove our assertion on global solvability; this is a consequence of a certain interconnection between stops and plays which we are going to give in the following Theorem 3.3.3. Let σ : [t0 , T) → ℝ be continuous, where t0 < T ≤ ∞. Moreover, let u = u(t) be a strong solution of the stop problem (3.0.2)/(3.3.3), and v = v(t) be the a strong solution of the play problem (3.0.2)/(3.3.6) with h := 1. Then u and v are related by the equality u(t) + v(t) = σ(t) + 1.

(3.3.7)

3.3 Stops and plays |

159

Proof. First of all we remark that 0 ≤ u0 ≤ 1 implies σ(t0 ) ≤ v0 := σ(t0 ) + 1 − u0 ≤ σ(t0 ) + 1, so the initial conditions are compatible. Suppose that (3.3.7) does not hold for all t ∈ [t0 , T), and denote by t1 the infimum of all t for which (3.3.7) fails. Then u(t1 ) + v(t1 ) = σ(t1 ) + 1, by the left-continuity of strong solutions of locally explicit equations. We write u(t) = γtt1 u(t1 ),

v(t) = γtt1 v(t1 )

for the quasicurrent generated by the stop (3.3.3) and the play (3.3.6) which exist one [t1 , t1 + δ) for some δ > 0. We distinguish again the three possible cases. 1st case: u(t1 ) = 1. Then v(t1 ) = σ(t1 ), hence u(t) = 1 + σ(t) − σ[t1 , t],

v(t) = v(t1 ) + σ[t1 , t] − σ(t1 ) = σ[t1 , t],

where we have used (3.3.4). But this implies that u(t) + v(t) = 1 + σ(t)

(t1 ≤ t < t1 + δ),

(3.3.8)

contradicting our choice of t1 . 2nd case: u(t1 ) = 0. Then v(t1 ) = σ(t1 ) + 1, hence u(t) = σ(t) − min{σ(s) : t1 ≤ s ≤ t} and v(t) = σ(t1 ) + 1 + min{σ(s) : t1 ≤ s ≤ t} − σ(t1 ) = 1 + min{σ(s) : t1 ≤ s ≤ t}. This again implies that (3.3.8) is true, contradicting our choice of t1 . 3rd case: 0 < u(t1 ) < 1. Then σ(t1 ) < v(t) = 1 + σ(t1 ) − u(t1 ) < σ(t1 ) + 1, and therefore u(t) = u(t1 ) + σ(t) − σ(t1 ),

v(t) = 1 + σ(t1 ) − u(t1 ).

(3.3.9)

Adding up (3.3.9) yields u(t) + v(t) = 1 + σ(t) which is (3.3.7). The formula (3.3.7) is extremely useful, inasmuch as it allows us prove some property shared by stop and play operators only for one of them. We illustrate Theorem 3.3.3 by means of two examples.

160 | 3 Equations with nonlinear differentials Example 3.3.4. For σ(t) := −2 cos t, consider the stop operator sketched in Figure 3.2 (subject to the initial value u0 := 0). The corresponding operator (3.3.3) becomes here σ(t + dt) − σ(t) { { { D(t, u, dt) = { σ(t + dt) + min{2 cos(s) : t ≤ s ≤ t + dt} { { { σ(t + dt) + max{2 cos(s) : t ≤ s ≤ t + dt}

if 0 < u < 1, if u = 1,

(3.3.10)

if u = 0.

The output function may be given explicitly: it has the form { { { { { { u(t) = { { { { { { {

π 3

2 − 2 cos t

for 2πn ≤ t ≤

1

for

−1 − 2 cos t

for π + 2πn < t ≤

0

for

π 3

+ 2πn,

+ 2πn < t ≤ π + 2πn,

4π 3

4π 3

+ 2πn,

(3.3.11)

+ 2πn < t < 2π + 2πn.

By Theorem 3.3.1, this is the only strong solution of this problem. After illustrating our results for the stop operator in the preceding example, we do now the same for the play operator in the following Example 3.3.5. For σ(t) := −2 cos t, consider the play operator given in Figure 3.3 (subject to the initial value v0 := −1). The corresponding operator (3.3.6) becomes here − min{−σ(s) : t ≤ s ≤ t + dt} − σ(t) { { { D(t, v, dt) = { − max{−σ(s) : t ≤ s ≤ t + dt} − σ(t) { { { 0

if σ(t) = v, if σ(t) = v − h,

(3.3.12)

otherwise,

respectively. Again, the output function may be given here explicitly: it has the form { { { { { { v(t) = { { { { { { {

π 3

−1

for 2πn ≤ t ≤

−2 cos t

for

2

for π + 2πn < t ≤

1 − 2 cos t

for

π 3

+ 2πn,

+ 2πn < t ≤ π + 2πn,

4π 3

4π 3

+ 2πn,

(3.3.13)

+ 2πn < t < 2π + 2πn.

By Theorem 3.3.2, this is the only strong solution of this problem. Observe that, if we take the sum of the output function u from (3.3.11) for the stop (3.3.10) and the output function v from (3.3.13) for the play (3.3.12), we end up with u(t) + v(t) = 1 − 2 cos t = 1 + σ(t) in all cases, in accordance with formula (3.3.7) in Theorem 3.3.3. The next two theorems [69] show that the map which associates to an input function the corresponding solution is Lipschitz continuous for stop equations, and even nonexpansive for play equations.

3.3 Stops and plays | 161

Theorem 3.3.6. Let σ, σ̃ : [t0 , T) → ℝ be continuous, where t0 < T ≤ ∞. Suppose that v and ṽ are solutions of the corresponding play equation which satisfy the same admissible ̃ 0 ) = v0 . Then the Lipschitz condition initial condition v(t0 ) = v(t 󵄨󵄨 ̃ 0 , t] ̃ 󵄨󵄨󵄨󵄨 ≤ |σ − σ|[t 󵄨󵄨v(t) − v(t)

(t0 ≤ t < T)

(3.3.14)

holds, where we have employed again the shortcut (3.3.4). Proof. Assuming the contrary we may find a point t1 ∈ (t0 , T) and a δ1 > 0 such that 󵄨󵄨 ̃ 0 , t1 ] ̃ 1 )󵄨󵄨󵄨󵄨 = |σ − σ|[t 󵄨󵄨v(t1 ) − v(t

(3.3.15)

󵄨󵄨 ̃ 0 , t] (t1 < t < t1 + δ1 ). ̃ 󵄨󵄨󵄨󵄨 > |σ − σ|[t 󵄨󵄨v(t) − v(t)

(3.3.16)

and

Since the play equation is locally explicit, we have v(t) = γtt1 v(t1 ),

̃ = γtt v(t ̃ 1) v(t) 1

for t belonging to some interval [t1 , t1 + δ2 ). Let δ := min{δ1 , δ2 }. Obviously, |v(t) − ̃ ̃ v(t)| > 0 (without loss of generality, v(t) > v(t)) for t1 < t < t1 + δ. Then (3.3.15) and ̃ < v(t ̃ 1 ). (3.3.16) show that for any t ∈ (t1 , t1 + δ) we either have v(t) > v(t1 ) or v(t) In the first case we fix some t2 ∈ (t1 , t1 + δ) with v(t2 ) > v(t1 ). Then from (3.3.6) it follows that v(t1 ) = σ(t1 ), hence v(t) = v(t1 ) + σ[t1 , t] − σ(t1 ) = σ[t1 , t]

(t1 ≤ t < t1 + δ).

So there exists some τ ∈ (t1 , t2 ] satisfying σ[t1 , t2 ] = σ(τ) = σ[t1 , τ] = v(τ). ̃ ≥ σ(τ), this further implies that Since v(τ) ̃ ̃ 0 , τ], ̃ = σ(τ) − v(τ) ̃ ≤ σ(τ) − σ(τ) v(τ) − v(τ) ≤ |σ − σ|[t ̃ 2 ) < v(t ̃ 1 ) is treated analogously. The proof is comcontradicting (3.3.16). The case v(t plete. Theorem 3.3.7. Let σ, σ̃ : [t0 , T) → ℝ be continuous, where t0 < T ≤ ∞. Suppose that u and ũ are solutions of the corresponding stop equation which satisfy the same ̃ 0 ) = u0 . Then the Lipschitz condition admissible initial condition u(t0 ) = u(t 󵄨󵄨 ̃ 0 , t] ̃ 󵄨󵄨󵄨󵄨 ≤ 2|σ − σ|[t 󵄨󵄨u(t) − u(t)

(t0 ≤ t < T)

holds, where we have employed again the shortcut (3.3.4).

162 | 3 Equations with nonlinear differentials Proof. Combining (3.3.14) with the equality (3.3.7) which relates the solutions of stop and play equations we obtain 󵄨󵄨 ̃ + 1 − v(t)) ̃ 󵄨󵄨󵄨󵄨 ̃ 󵄨󵄨󵄨󵄨 = 󵄨󵄨󵄨󵄨(σ(t) + 1 − v(t)) − (σ(t) 󵄨󵄨u(t) − u(t) 󵄨 ̃ 0 , t], ̃ 󵄨󵄨󵄨󵄨 + |σ − σ|[t ̃ 󵄨󵄨󵄨󵄨 + 󵄨󵄨󵄨󵄨v(t) − v(t) ̃ 󵄨󵄨󵄨󵄨 ≤ 󵄨󵄨󵄨󵄨σ(t) − σ(t) ≤ 󵄨󵄨󵄨σ(t) − σ(t) ̃ and passing to the maximum of |σ(t) − σ(t)| on [t0 , t] in the last expression gives the result. So far we have given existence and uniqueness theorems for strong solutions of the stop and play equations. The question arises whether or not there may be other solutions. The above Theorems 3.1.8, 3.3.1 and 3.3.2 allow us to show that this may not occur: Theorem 3.3.8 (Regularity). Any solution of the stop equation (3.0.2)/(3.3.3) is a strong solution. The same is true for the play equation (3.0.2)/(3.3.6). Proof. Consider first the stop equation (3.0.2)/(3.3.3). We have seen in Theorem 3.3.1 that the stop equations is locally explicit, and all its strong solutions may be extended until reaching the right endpoint T of the domain of definition of the input function σ. By Theorem 3.1.8 it is therefore sufficient to verify that a Lipschitz condition of the form 󵄨󵄨 󵄨 󵄨 󵄨 󵄨󵄨φ(t) − ψ(t)󵄨󵄨󵄨 ≤ 󵄨󵄨󵄨φ(τ) − ψ(τ)󵄨󵄨󵄨 (t ≥ τ)

(3.3.17)

holds for strong solutions. Suppose that (3.3.17) is not true for all t ≥ τ, and denote by t1 the infimum of all t ≥ τ for which (3.3.17) fails. Without loss of generality we may assume that φ(t1 ) > ψ(t1 ). If φ(t1 ) = 1 and 0 < ψ(t1 ) < 1, we find a δ > 0 such that φ(t) − ψ(t) = φ(t1 ) − σ[t1 , t] − ψ(t1 ) + σ(t1 )

(t1 < t < t1 + δ).

Since the input σ is continuous, we may here choose δ > 0 in such a way that σ[t1 , t] − σ(t1 ) < φ(t1 ) − ψ(t1 )

(t1 < t < t1 + δ).

But then 󵄨󵄨 󵄨 󵄨 󵄨 󵄨󵄨φ(t1 ) − ψ(t1 ) − σ[t1 , t] + σ(t1 )󵄨󵄨󵄨 ≤ 󵄨󵄨󵄨φ(t1 ) − ψ(t1 )󵄨󵄨󵄨, hence 󵄨󵄨 󵄨 󵄨 󵄨 󵄨 󵄨 󵄨󵄨φ(t) − ψ(t)󵄨󵄨󵄨 ≤ 󵄨󵄨󵄨φ(t1 ) − ψ(t1 )󵄨󵄨󵄨 ≤ 󵄨󵄨󵄨φ(τ) − ψ(τ)󵄨󵄨󵄨, which contradicts our choice of t1 . The remaining cases 1 = φ(t1 ) > ψ(t1 ) = 0,

1 > φ(t1 ) > ψ(t1 ) > 0,

may also occur; the proof in theses cases is the same.

1 > φ(t1 ) > ψ(t1 ) = 0

3.3 Stops and plays | 163

The fact that the play equation (3.0.2)/(3.3.6) is locally explicit, and its solutions may be extended up to T, has been proved in Theorem 3.3.2. The Lipschitz condition (3.3.17) for solutions of the play equation follows from formula (3.3.7). In fact, 󵄨󵄨 󵄨󵄨 ̃ ̃ ̃ 󵄨󵄨󵄨 = 󵄨󵄨󵄨(σ(t) + 1 − φ(t)) ̃ − (σ(t) + 1 − ψ(t)) 󵄨󵄨 󵄨󵄨φ(t) − ψ(t) 󵄨 󵄨 󵄨 󵄨 ̃ 󵄨 󵄨 󵄨 ̃ 󵄨󵄨󵄨, − ψ(τ) ≤ 󵄨󵄨󵄨φ(t) − ψ(t)󵄨󵄨󵄨 ≤ 󵄨󵄨󵄨φ(τ) − ψ(τ)󵄨󵄨󵄨 = 󵄨󵄨󵄨φ(τ) 󵄨 where φ and ψ are solutions of the stop equation, while φ̃ and ψ̃ are solutions of the play equation. In (3.3.3) we have introduced our standard stop model, and in (3.3.6) we have introduced our standard play model. We point out that the original construction of stop and play models from [23] was slightly different. In fact, for any monotone input function σ the authors define in [23, p. 13] the output function u for t ≥ t0 by σ(t) + h { { { u(t) = { u(t0 ) { { { σ(t)

if σ(t) ≤ u(t0 ) − h, if u(t0 ) − h < σ(t) ≤ u(t0 ), if σ(t) ≥ u(t0 ).

One may show, however, that both models are equivalent in the following sense. Suppose that σ = σ(t) is a continuous input function, and let (σn )n be a sequence of piecewise monotone continuous input functions which converges to σ as n → ∞. Moreover, denote by u = u(t) the output function which corresponds to the play equation (3.3.6) and satisfies the initial condition u(t0 ) = u0 . Likewise, denote by un = un (t) the output function which corresponds to the play equation with σ replaced by σn and satisfies the same initial condition un (t0 ) = u0 . In the model proposed in [23], u is then the limit of the sequence (un )n , while in our model (3.3.6) the output function ũ solves the Cauchy problem (3.0.2)/(3.1.6) with D(t, u, dt) given by (3.3.6). But the estimate (3.3.14) in Theorem 3.3.6 implies that 󵄨󵄨 ̃ 󵄨 󵄨󵄨u(t) − un (t)󵄨󵄨󵄨 ≤ (σ − σn )[t0 , t]. ̃ We conclude that |u(t)−u n (t)| → 0 as n → ∞, and so actually ũ = u. The analogous statement for the stop model follows from the linking equation (3.3.7). 3.3.3 An alternative model for stop operators To describe the stop model with a monotone or piecewise monotone input one may also use the formula (3.0.2) with σ(t + dt) − σ(t) { { { D(t, u, dt) = { min{σ(t + dt) − σ(t), 0} { { { max{σ(t + dt) − σ(t), 0}

if 0 < u < 1, if u = 1, if u = 0.

(3.3.18)

164 | 3 Equations with nonlinear differentials Of course, (3.3.18) looks simpler than (3.3.3). However, the advantage of (3.3.3) consists in the fact that with (3.3.18) we get a locally explicit equation for arbitrary continuous inputs, while (3.3.18) may be not locally explicit if the input is not (piecewise) monotone. Moreover, it may happen that the corresponding Cauchy problem does not have a solution, as the following Example 3.3.9 [68] shows. Example 3.3.9. Let εn := 1/n, and consider the input function σ : [0, 1] → ℝ defined by19 0 { { { σ(t) = { −(4n + 1)t + 2 { { { (4n − 1)t − 2

for t = 0, for ε2n+1 < t ≤ ε2n

(3.3.19)

for ε2n < t ≤ ε2n−1 .

We claim that the Cauchy problem (3.0.2)/(3.1.6) with nonlinear differential (3.3.18) and input signal (3.3.19) has a unique solution u on [t0 , 1] for every t0 ∈ (0, 1). To see ̃ u, dt) the function (3.3.18), this, we denote by D(t, u, dt) the function (3.3.3) and by D(t, and show that these two functions are tangential from the right in the sense of (3.1.3). We distinguish the two cases which correspond to (3.3.19) for t > 0. 1st case: ε2n+1 ≤ t < ε2n . Then for small dt > 0 we have σ(t + dt) − max{σ(s) : t ≤ s ≤ t + dt} = min{σ(t + dt) − σ(t), 0} and σ(t + dt) − min{σ(s) : t ≤ s ≤ t + dt} = max{σ(t + dt) − σ(t), 0}. 2nd case: ε2n ≤ t < ε2n−1 . Then for small dt > 0 we have σ(t + dt) − max{σ(s) : t ≤ s ≤ t + dt} = 0 = min{σ(t + dt) − σ(t), 0} and σ(t + dt) − min{σ(s) : t ≤ s ≤ t + dt} = σ(t + dt) − σ(t) = max{σ(t + dt) − σ(t), 0}. ̃ u, dt) = o(dt) as dt → 0+, and so D and D̃ are This shows that D(t, u, dt) − D(t, tangential from the right. Consequently, the corresponding stop equations for D and D̃ have the same solutions. In particular, the unique solvability of the Cauchy problem (3.0.2)/(3.1.6) with nonlinear differential (3.3.18) and input signal (3.3.19) follows from Theorem 3.3.1 and Theorem 3.3.7. 19 Geometrically, σ is a piecewise linear peak function satisfying σ(εk ) = εk for k odd, and σ(εk ) = −εk for k even.

3.3 Stops and plays | 165

On the other hand, one can show that the equation (3.0.2) with nonlinear differential (3.3.18) and input signal (3.3.19) does not have solutions u satisfying the initial condition u(0) = 0. In fact, suppose that there is a solution φ = φ(t) of this problem; then φ(dt) − D(0, 0, dt) = o(dt)

(dt → 0+).

In particular, this implies that [φ(ε2n+1 ) − D(0, 0, ε2n+1 )](2n + 1) → 0

(n → ∞).

On the other hand, we have D(0, 0, ε2n+1 ) = ε2n+1 and φ(ε2n+1 ) = φ(ε2n+2 ) + σ(ε2n+1 ) − σ(ε2n+2 ) ≥ ε2n+1 + ε2n+2 for sufficiently large n, by (3.3.18) and (3.3.19). This contradiction show that our assumption was false, and so the claim ist proved.

3.3.4 M-switches Let K = (K1 , K2 , . . . , Kk ) be an ordered system of mutually disjoint closed subsets Ki ⊂ ℝm , K̃ := K1 ∪ K2 ∪ ⋅ ⋅ ⋅ ∪ Kk , M = (M1 , M2 , . . . , Mk ) an ordered system of functions Mi : {0, 1, . . . , r} → {0, 1, . . . , r}, and a ∈ {0, 1} a parameter. Definition 3.3.10. We call the elements of the set ρ := {0, 1, . . . , r} the states of the Mswitch M [43, 74]. Moreover, assuming that the input function y = y(t) is defined on some interval 𝒟(y) and takes values in ℝm , we call t ∈ 𝒟(y) an input moment of y in the set Ki if y(t) ∈ Ki and one of the following two conditions is fulfilled: – either t is the left endpoint of 𝒟(y) and a = 1; – or y(τ) ∈ ̸ Ki for all τ ∈ (t − δ, t) and some δ > 0. The state change of this M-switch is defined by means of the equation with nonlinear differential s(t + dt) − s(t) = SK,M,a (t, s(t), y, dt) + o(dt),

(3.3.20)

SK,M,a (t, s, y, dt) := Mi (s) − s

(3.3.21)

where

if t is an input of y in Ki and dt > 0, and SK,M,a (t, s, y, dt) := 0 otherwise. The function (3.3.21) is the so-called switch function of the M-switch.

166 | 3 Equations with nonlinear differentials We point out that the variable y in (3.3.20) and (3.3.21) does not only depend on the point t, because the behaviour of t 󳨃→ y(t) is important for S also in a left neighbourhood (t − δ, t) for some δ > 0. This fact is often expressed in the following form as20 – Volterra property: from y(τ) = z(τ) for t − δ < τ ≤ t it follows that SK,M,a (t, s, y, dt) = SK,M,a (t, s, z, dt). We remark that this Volterra property occurs here also in the following stronger sense: if two inputs y and z coincide up to the moment t, they also coincide at the moment t + dt. Let us now return to a relay function with threshold values α and β, input signal σ = σ(t) and output signal u = u(t) (see Subsection 3.2.1). Here we suppose that, possibly, u(t) = 1 for σ(t) < α and u(t) = 0 for σ(t) > β. We show now that such a relay can be considered as an M-switch in the sense of Definition 3.3.10. In fact, let us put K1 := (−∞, α], K2 := [β, ∞), M1 ≡ 0, M2 ≡ 1, and a := 1. Since the function Mi is then constant on the set Ki , the relay may be described by −s(t) { { { s(t + dt) − s(t) = { 1 − s(t) { { { 0 ∗

for y(t) ∈ K1 and dt > 0, for y(t) ∈ K2 and dt > 0, otherwise,

or, equivalently, 0 { { { s(t + dt) = { 1 { { { s(t) ∗

for y(t) ≤ α and dt > 0, for y(t) ≥ β and dt > 0,

(3.3.22)

otherwise,

where the asterisque is defined in (3.1.7). If the initial values belong to the admissible set, the model (3.3.22) is equivalent to the model (3.0.2)/(3.2.3). To see this, let us denote by s = s(t) the output function which corresponds to σ(t) = y(t) in (3.0.2)/(3.2.3). Then s(t) + 1 { { { s(t + dt) = { s(t) − 1 { { { s(t) ∗

for y(t) = β, s(t) = 0, and dt > 0, for y(t) = α, s(t) = 1, and dt > 0,

(3.3.23)

otherwise.

From this we see that in case y(t) ≥ β and dt > 0 the equality s(t + dt) = 1 holds for any value of s(t), which means that s(t + dt) = s(t + dt). Similarly, in case y(t) ≤ α and dt > 0 we have s(t + dt) = 0. So (3.3.23) is equivalent, after some obvious change of notation, to (3.0.2)/(3.2.3) as claimed. 20 A system having this property is sometimes called system with memory [26].

3.3 Stops and plays | 167

If the initial values do not belong to the admissible set,21 they enter this set immediately and remain there in the sequel. It is not hard to see that, after this extension of admissible relay values, the relay keeps all the properties which we studied in Section 3.2. Theorem 3.3.11. Suppose that one of the following two conditions is fulfilled for the equation (3.3.20): (a) The function y is continuous, and each function Mi : ρ → ρ is constant, i. e., Mi ≡ si for i = 1, . . . , k. (b) For every t ∈ 𝒟(y) there exists an interval (t, t + ε) which connects no input moments of y in any of the sets Ki . Then equation (3.3.20) is locally explicit. Proof. Let t0 ∈ 𝒟(y) and s(t0 ) = s0 ∈ ρ.

(3.3.24)

We show that, under the assumption (a), the Cauchy problem (3.3.20)/(3.3.21)/ (3.3.24) has a strong solution φ = φ(t) on some right neighbourhood of t0 . If on every interval [t0 , t0 + η) (η > 0) there exists an input moment of y in some set Ki , then this set Ki is the same for all sufficiently small η. In this case we put φ(t) := {

s0

for t = t0 ,

Mi (s0 ) = si

for t0 < t < t0 + ε.

In the opposite case we simply put φ(t) ≡ s0

(t0 ≤ t < t0 + ε).

Here we choose ε > 0 in such a way that on [t0 , t0 + ε) we have y(t) ∈ ̸ K̃ \ Ki in the first case, and y(t) ∈ ̸ K̃ in the second case.22 We claim that the function φ defined in this way is a strong solution of (3.3.20) on [t0 , t0 + ε). In fact, let t0 < t1 < t2 < t0 + ε. Since in the interval (t0 , t0 + ε) there are no input moments of y in any of the sets K1 , . . . , Kk , with the only possible exclusion Ki in the first case, we have then SK,M,a (t1 , φ(t1 ), y, t2 − t1 ) = 0. This means that any constant function is a solution on (t0 , t0 + ε). For t1 = t0 we have SK,M,a (t1 , φ(t1 ), y, t2 − t1 ) = si − s0 21 This is true if, for example, y0 < α and s0 = 1. 22 Such an ε exists, since y is continuous and all sets K1 , . . . , Kk are closed.

168 | 3 Equations with nonlinear differentials in the first case, and SK,M,a (t1 , φ(t1 ), y, t2 − t1 ) = 0 in the second case. In any case we have shown that φ is a strong solution of (3.3.20)/ (3.3.21). Now we show that, under the assumption (b), the Cauchy problem (3.3.20)/ (3.3.21)/(3.3.24) has also a strong solution φ = φ(t) on some right neighbourhood of t0 . If t0 is an input moment of y in some set Ki , we put φ(t) := {

s0

for t = t0 ,

Mi (s0 )

for t0 < t < t0 + ε.

In the opposite case we simply put23 φ(t) ≡ s0

(t0 ≤ t < t0 + ε).

As in the preceding case, it is not hard to see that φ is a strong solution of (3.3.20)/(3.3.21) on [t0 , t0 + ε). The proof is complete. From Theorem 3.1.1 and Theorem 3.1.2 we thus deduce the following Theorem 3.3.12. If one of the Conditions (a) or (b) in Theorem 3.3.11 is fulfilled, the Cauchy problem (3.3.20)/(3.3.21)/(3.3.24) has a unique strong solution φ = φ(t) on some right neighbourhood of t0 . One could ask whether the two Conditions (a) and (b) from Theorem 3.3.11 are independent, or one of them implies the other. The following two examples show their independence. Example 3.3.13. Consider the relay with switching values α = 0 and β = 1 and the “zigzag” input signal y : [0, 1] → ℝ defined by 1 { { { y(t) := { 2nt { { { 2 − 2nt

for t = 0, for for

1 1 < t ≤ 2n , 2n+1 1 1 < t ≤ 2n−1 . 2n

It is not hard to see that the corresponding M-switch satisfies Condition (a) from Theorem 3.3.11, and so it is described by a locally explicit equation. On the other hand, it does not satisfy Condition (b), because in each right neighbourhood of t0 = 0 we find an input point of y in the set K2 = [β, ∞). 23 Here the interval (t0 , t0 + ε) is taken, according to Condition (b), in such a way that it contains no input moments of y in any of the sets Ki .

3.3 Stops and plays | 169

Example 3.3.14. Consider the M-switch with smooth input signal y(t) := 2 sin t, parameter a ∈ {0, 1}, subsets K1 := {0} and K2 := {1}, and functions M1 and M2 defined by M1 (0) = M1 (1) := 0, M2 (0) := 1, and M2 (1) := 0. This M-witch satisfies Condition (b) from Theorem 3.3.11, because the input y is piecewise monotone, and so it is described by a locally explicit equation. On the other hand, it is easy to see that it does not satisfy Condition (a). Now we turn to the problem of global solvability. To this end, we replace Condition (b) in Theorem 3.3.11 by the following stronger condition, where h > 0 is some fixed number: (c) The length of intervals between different input moments is not smaller than h. Theorem 3.3.15 (Extension of solutions). Suppose that one of the Conditions (a) or (c) is fulfilled for the equation (3.3.20). Let y : [t0 , T) → ℝm , where t0 < T ≤ ∞, any input function. Then every solution of (3.3.20) may be extended to [t0 , T). Proof. We show that, for any t1 ∈ (t0 , T) and every solution φ, the limit lim φ(t) = si

t→t1 −

(si ∈ ρ)

(3.3.25)

exists. The assertion will then follow from Theorem 3.1.5. Suppose that Condition (a) holds. Then the continuity of y implies that ‖y(t)‖ ≤ H for t0 ≤ t ≤ t1 . For i, j = 1, . . . , k, i ≠ j, we denote by dij := dist(B(0, H) ∩ Ki , B(0, H) ∩ Kj ) = min{‖ui − uj ‖ : ‖ui ‖, ‖uj ‖ ≤ H, ui ∈ Ki , uj ∈ Kj } the distance between the compact sets B(0, H) ∩ Ki and B(0, H) ∩ Kj . Since these sets are disjoint for i ≠ j, we know that d := min{dij : i ≠ j} > 0. To prove the existence of the limit (3.3.25), we show that every solution defined on [t0 , t1 ) is identically equal to some si if this solution approaches t1 . Assuming the contrary, for each δ > 0 we find points t2 and t3 such that t1 − δ < t2 < t3 < t1 ,

y(t2 ) ∈ Ki ,

y(t3 ) ∈ Kj ,

where i ≠ j, and so ‖y(t2 ) − y(t3 )‖ ≥ d. On the other hand, by the uniform continuity of y the number δ may of course be chosen in such a way that ‖y(t2 ) − y(t3 )‖ < d, a contradiction. So we have proved the existence of the limit (3.3.25) under Condition (a). Suppose now that Condition (c) holds. Then there may be at most one input point in the interval [t1 − h, t1 ). If there is no input point, we have φ(t) = φ(t1 ) = si for

170 | 3 Equations with nonlinear differentials t1 − h ≤ t < t1 . If there is just one input point, we have φ(t) = Mi (φ(t2 )) = si for t2 < t < t1 , where t2 is an input point of y in Ki . In both cases we have proved the existence of the limit (3.3.25) under Condition (c). Now we are going to prove a uniqueness result for M-switches which are also based on Conditions (a) and (b) from Theorem 3.3.11. Theorem 3.3.16 (Uniqueness of solutions). Suppose that one of the Conditions (a) or (b) is fulfilled for the equation (3.3.20). Then every solution of (3.3.20) is a strong solution. Proof. Suppose that the assertion is not true, which means that there exists a solution ψ of (3.3.20) which is not a strong solution. Then we find a point t 󸀠 and a positive sequence (dtk )k converging to 0 such that ψ(t 󸀠 + dtk ) − ψ(t 󸀠 ) − SK,M,a (t 󸀠 , ψ(t 󸀠 ), y, dtk ) ≠ 0. Let φ be any strong solution of the M-switch (3.3.20) which satisfies φ(t 󸀠 ) = ψ(t 󸀠 ). Obviously, φ(t 󸀠 + dtk ) ≠ ψ(t 󸀠 + dtk ) for sufficiently large k, and therefore 󵄨󵄨 󸀠 󵄨 󸀠 󵄨󵄨ψ(t + dtk ) − φ(t + dtk )󵄨󵄨󵄨 = |si − sj | ≥ 1, since i ≠ j. On the other hand we have ψ(t 󸀠 + dtk ) − φ(t 󸀠 + dtk ) = ψ(t 󸀠 ) + SK,M,a (t 󸀠 , ψ(t 󸀠 ), y, dtk ) + o(dtk ) − φ(t 󸀠 ) − SK,M,a (t 󸀠 , φ(t 󸀠 ), y, dtk ) = o(dtk ). This contradiction shows that our assumption on the existence of ψ was false and proves the assertion.

3.3.5 A control-correction system The idea of the concept of control-correction systems consists in splitting the action of the system into certain intervals, where every control-correction moment is determined by the previous moment and the corresponding state of the process [71]. The output signal x(t) at time t is influenced by input signals in each interval between two successive control-correction moments like a servo, i. e., the increment of outputs is equal to the increment of inputs. The selection of an input during a time interval is determined by the moment which follows the previous control-correction moment. The mathematical model of such a system consists of two equations with nonlinear differential and looks as follows. The first equation is τ(t + dt) − τ(t) + o(dt) = {

0

if t < τ(t) or dt = 0,

a(t, x(t))

if t = τ(t) and dt > 0,

(3.3.26)

3.3 Stops and plays |

171

where a(t, x) > 0. This equation describes the relation between the state of the process x and the control τ. The second equation is x(t + dt) − x(t) + o(dt) ={

f (τ(t), t + dt) − f (τ(t), t)

if t < τ(t),

f (t + a(t, x(t)), t + dt) − f (t + a(t, x(t)), t)

if t = τ(t)

(3.3.27)

and describes the state of the process. Here the function f is supposed to be leftcontinuous in the second argument. The connection with our preceding discussion is given by the following Theorem 3.3.17. The system (3.3.26)/(3.3.27) is locally explicit. Proof. We show that, given any initial point (t0 , τ0 , x0 ) with τ0 ≥ t0 , the system (3.3.26)/(3.3.27) has a strong solution on [t0 , t0 + δ) for some δ > 0. Here we distinguish two possible cases. If τ0 > t0 we set δ := τ0 − t0 . Then the pair of functions τ(t) ≡ τ0 ,

x(t) = x0 + f (τ0 , t) − f (τ0 , t0 )

is a strong solution of the system (3.3.26)/(3.3.27) on the interval [t0 , t0 + δ). On the other hand, in case τ0 = t0 we set δ := a(t0 , x0 ) and τ1 := τ0 + δ, and claim that the pair of functions τ(t) ≡ {

τ0

for t = t0 ,

τ1

for t0 < t < t0 + δ,

x(t) = x0 + f (τ1 , t) − f (τ1 , t0 )

is a strong solution of the system (3.3.26)/(3.3.27) on the interval [t0 , t0 + δ). In fact, for t0 < t1 ≤ t2 < t0 + δ we obtain τ(t2 ) − τ(t1 ) = τ0 + a(t0 , x0 ) − τ0 − a(t0 , x0 ) = 0 and τ(t1 ) − τ(t0 ) = τ0 + a(t0 , x0 ) − τ0 = a(t0 , x0 ). Similarly, x(t2 ) − x(t1 ) = x0 + f (τ1 , t2 ) − f (τ1 , t0 ) − x0 − f (τ1 , t1 ) + f (τ1 , t0 ) = f (τ(t1 ), t2 ) − f (τ(t1 ), t1 ) and x(t1 ) − x(t0 ) = x0 + f (τ1 , t1 ) − f (τ1 , t0 ) − x0 = f (τ0 + a(t0 , x0 ), t1 ) − f (τ0 + a(t0 , x0 ), t0 ). So we see that, for any initial point (t0 , τ0 , x0 ) with τ0 ≥ t0 , the system (3.3.26)/ (3.3.27) has a strong solution on [t0 , t0 + δ), and the proof is complete.

172 | 3 Equations with nonlinear differentials

3.4 Systems with locally explicit equations In this section we consider three types of systems which consist of ordinary differential equations and locally explicit equations with nonlinear differentials. In the general case one may write such systems in the form v(t + dt) − v(t) = V(t, v, xtt+dt , dt) + o(dt)

(3.4.1)

ẋ = f (t, v, x),

(3.4.2)

and

̇ where v = v(t), x = x(t), and ẋ = x(t), and xtt+dt denotes the restriction of x on the interval [t, t+dt]. We call (3.4.2) the equation of a control object, and v a control function. In what follows, we will study the local and global solvability of a Cauchy problem for the equations (3.4.1) and (3.4.2), as well as uniqueness of solutions. 3.4.1 Closed systems with relay Suppose that the action of a control object is described by means of the equation ̇ = f (t, u(t), x(t)), x(t)

(3.4.3)

where f : ℝ × {0, 1} × ℝm → ℝm is continuous in the first and third argument and satisfies a Lipschitz-type condition of the form ⟨f (t, u, x) − f (t, u, y), x − y⟩ ≤ L(t)‖x − y‖2 ,

(3.4.4)

where ⟨⋅, ⋅⟩ denotes the scalar product in ℝm and L : ℝ → ℝ is continuous. The control function u plays the role of the output of a relay which is described by (3.0.2)/(3.2.3), while the input is a continuous scalar function of the state x(t) of the control object x, i. e., σ(t) = p(x(t)).

(3.4.5)

A solution of the system (3.0.2)/(3.2.3)/(3.4.3)/(3.4.5) is, by definition, a pair (x, u) of functions defined on some interval I ⊆ ℝ, with the following properties: – the function x : I → ℝm is continuous and satisfies (3.4.3) at all points of continuity of u; – the function u : I → {0, 1} is left-continuous and satisfies (3.0.2) for all t ∈ I. We will consider this system subject to the initial condition x(t0 ) = x0 for x and the initial condition (3.1.6) for u.

(3.4.6)

3.4 Systems with locally explicit equations | 173

For further reference we recall now without proof a well-known differential inequality (see, e. g., [19, p. 16]). Let f : [t0 , t1 ] × ℝ → ℝ be a given continuous function, and suppose that ψ : [t0 , t1 ] → ℝ solves the differential equation dψ(t) = f (t, ψ(t)) (t0 ≤ t ≤ t1 ) dt and satisfies the initial condition ψ(t0 ) = u0 . Lemma 3.4.1. Suppose that φ : [t0 , t1 ] → ℝ is continuous and satisfies the condition D∗ φ(t) ≤ f (t, φ(t))

(t0 ≤ t ≤ t1 ),

where D∗ φ(t) = lim inf Δt→0+

φ(t + Δt) − φ(t) Δt

denotes the lower Dini derivative of φ. Then φ(t0 ) ≤ u0 implies that φ(t) ≤ ψ(t) for all t ∈ [t0 , t1 ]. Let us go back to our control object (3.4.3). The following theorem gives a global existence and uniqueness result for the Cauchy problem. Theorem 3.4.2. Suppose that the hypotheses stated at the beginning of Subsection 3.4.1 are satisfied. Then the problem (3.0.2)/(3.1.6)/(3.2.3)/(3.4.3)/(3.4.5)/(3.4.6) has a unique solution (x, u) on [t0 , ∞). Moreover, the points of discontinuity of u have no finite accumulation point. Proof. Given the initial data t0 , x0 , u0 , and σ0 := p(x0 ), see (3.4.5), by Theorem 3.2.3 we may find a number u1 := v(σ0 , u0 ) and a function x 0 : [t0 , ∞) → ℝm which solves (3.4.3)/(3.4.6) for u = u1 . The existence and uniqueness of a global solution follows24 from the continuity of the function f in the right-hand side of (3.4.3) and the Lipschitztype condition (3.4.4) for f (t, u, ⋅). Let σ 0 (t) := p(x0 (t)), and consider the solution u0 = u0 (t) of the Cauchy problem (3.0.2)/(3.1.6)/(3.2.3) with σ replaced by σ 0 . Putting t1 := inf{t : t > t0 , u0 (t) ≠ u1 } by Theorem 3.2.3 we have t1 > t0 . If t1 is finite we put x1 := x 0 (t1 ) and σ1 := p(x1 ) and obtain in the same way functions x1 = x1 (t), σ 1 = σ 1 (t), u1 = u1 (t) and numbers u2 = v(σ1 , u1 ) and t2 > t1 . This procedure may be continued finitely or infinitely many times. 24 This result may be found, for example, in [32, 62–64].

174 | 3 Equations with nonlinear differentials By construction, the pair of functions (x, u) with x(t) = xi (t),

u(t) = ui (t)

(ti ≤ t < ti+1 )

is then a solution on the union of all intervals [ti , ti+1 ). Moreover, it is not hard to see that this solution is unique. It remains to show that every bounded interval [t0 , T) contains only finitely many points ti ; this implies then that x is defined on the whole half-line [t0 , ∞). To show this, we claim the upper estimate M M 󵄩󵄩 󵄩 L(T−t0 ) (‖x0 ‖ + ) − 󵄩󵄩x(t)󵄩󵄩󵄩 ≤ e L L

(3.4.7)

is true, where 󵄩 󵄩 M := max{󵄩󵄩󵄩f (t, u, 0)󵄩󵄩󵄩 : t0 ≤ t ≤ T, u ∈ {0, 1}} and L := max{L(t) : t0 ≤ t ≤ T}. Indeed, for ti ≤ t < ti+1 we have 1 d 󵄩󵄩 󵄩2 󵄩x(t)󵄩󵄩󵄩 = ⟨f (t, ui+1 , x(t)), x(t)⟩ 2 dt 󵄩 = ⟨f (t, ui+1 , x(t)) − f (t, ui+1 , 0), x(t)⟩ + ⟨f (t, ui+1 , 0), x(t)⟩ 󵄩 󵄩2 󵄩 󵄩 ≤ L󵄩󵄩󵄩x(t)󵄩󵄩󵄩 + M 󵄩󵄩󵄩x(t)󵄩󵄩󵄩. From Lemma 3.4.1 it follows that M M 󵄩󵄩 󵄩 L(t −t ) 󵄩󵄩x(t)󵄩󵄩󵄩 ≤ e i+1 i (‖xi ‖ + ) − . L L

(3.4.8)

By induction we get from (3.4.8) in turn that M M 󵄩 󵄩󵄩 L(t −t ) 󵄩󵄩x(t)󵄩󵄩󵄩 ≤ e i+1 0 (‖x0 ‖ + ) − . L L Indeed, if (3.4.9) is true for i − 1 the estimate M M 󵄩󵄩 󵄩 L(t −t ) 󵄩󵄩x(t)󵄩󵄩󵄩 ≤ e i+1 i (‖xi ‖ + ) − L L ≤ eL(ti+1 −ti ) [eL(ti −t0 ) (‖x0 ‖ + = eL(ti+1 −t0 ) (‖x0 ‖ +

M M M M )− + ]− L L L L

M M )− , L L

proves (3.4.9) for i. Obviously, (3.4.9) implies (3.4.7).

(3.4.9)

3.4 Systems with locally explicit equations | 175

We have shown that the graph of the function x is contained in the compact cylinder [t0 , T] × B(0, R), with R given by the right-hand side of (3.4.7). So the continuous functions f (⋅, 0, ⋅) and f (⋅, 1, ⋅) are bounded on this cylinder. Consequently, the function d t 󳨃→ ‖ dt x(t)‖ is bounded on [t0 , T), and so, being absolutely continuous, the function x satisfies a Lipschitz condition 󵄩󵄩 󸀠 󵄨 󸀠 󸀠󸀠 󵄨 󸀠 󸀠󸀠 󸀠󸀠 󵄩 󵄩󵄩x(t ) − x(t )󵄩󵄩󵄩 ≤ L1 󵄨󵄨󵄨t − t 󵄨󵄨󵄨 (t0 ≤ t , t < T).

(3.4.10)

We show now that there exists a δ > 0 such that ti+1 − ti ≥ δ for all indices i; from this it obviously follows that the interval [t0 , T) contains only finitely many points ti as claimed. Clearly, 󵄨󵄨 󵄨 󵄨󵄨σ(ti+1 ) − σ(ti )󵄨󵄨󵄨 = β − α.

(3.4.11)

Since the function p is uniformly continuous on the ball B(0, R), with R denoting again the right-hand side of (3.4.7), we can find a δ1 > 0 such that 󵄨󵄨 󸀠 󸀠󸀠 󵄨 󵄨󵄨p(x ) − p(x )󵄨󵄨󵄨 < β − α for x󸀠 , x󸀠󸀠 ∈ B(0, R) with ‖x󸀠 − x󸀠󸀠 ‖ < δ1 . We put δ := δ1 /L1 , where L1 is the Lipschitz constant for x from (3.4.10). So from ti+1 − ti < δ it follows that 󵄩󵄩 󵄩 󵄩󵄩x(ti+1 ) − x(ti )󵄩󵄩󵄩 ≤ L1 |ti+1 − ti | ≤ L1 δ = δ1 , hence 󵄨 󵄨󵄨 󵄨 󵄨 󵄨󵄨σ(ti+1 ) − σ(ti )󵄨󵄨󵄨 = 󵄨󵄨󵄨p(x(ti+1 )) − p(x(ti ))󵄨󵄨󵄨 < β − α, contradicting (3.4.11). So the case ti+1 − ti < δ is not possible, which proves our assertion. 3.4.2 Systems with M-switch Now we consider a system of three equations with three unknown functions: a variable state function s = s(t), a (not necessarily smooth) function u = u(t), and a smooth function x = x(t). The system has the form25 s(t + dt) − s(t) = SK,M,a (t, s(t), y, dt), ∗

(3.4.12)

where SK,M,a denotes the switch function introduced in (3.3.21), u(t + dt) − u(t) = D(t, s(t + dt), u(t), xtt+dt , dt), ∗

25 Here the asterisque above the equality sign has the same meaning as in (3.1.7).

(3.4.13)

176 | 3 Equations with nonlinear differentials and ẋ = f (t, s(t), u(t), x(t)).

(3.4.14)

The function y in (3.4.12) depends in the form y(t) = Y(t, u(t), x(t))

(3.4.15)

on the functions u and x. Equation (3.4.14) may be not fulfilled in points of discontinuity of s; however, the function x is supposed to be continuous everywhere. Together with the system (3.4.12)/(3.4.13)/(3.4.14), we will impose initial conditions of the form s(t0 ) = s0 ,

u(t0 ) = u0 ,

x(t0 ) = x0 .

(3.4.16)

Let V ⊆ ℝm be closed, Ux0 ⊆ ℝn an open neighbourhood of x0 , and Uu0 ⊆ ℝm an open neighbourhood of u0 . In what follows, we impose the following conditions on the function D in (3.4.13), the function f in (3.4.14), and the function Y in (3.4.15). (a) For fixed h > 0, the function D : [t0 , t0 + h) × ρ × V × C([t, t + dt], Ux0 ) × [0, ∞) → ℝm is continuous in the argument dt 󳨃→ D(t, s, u, xtt+dt , dt) if x is continuous. (b) Replacing s(t + dt) by a fixed s1 ∈ ρ, equation (3.4.13) is locally explicit for any x ∈ C([t, t + dt], Ux0 ). (c) The function D satisfies a Lipschitz condition 󵄩󵄩 󵄩 󵄩󵄩D(t, s, u, φ, dt) − D(t, s, u, ψ, dt)󵄩󵄩󵄩 ≤ LD ‖φ − ψ‖ with respect to the fourth argument. (d) The function f : [t0 , t0 + h) × ρ × Uu0 × Ux0 → ℝn is continuous in the first argument and satisfies a Lipschitz condition 󵄩󵄩 󵄩 󵄩󵄩f (t, σ, u, ξ ) − f (t, σ, v, η)󵄩󵄩󵄩 ≤ Lf (‖u − v‖ + ‖ξ − η‖) with respect to the third and fourth argument. (e) The function Y : [t0 , t0 + h) × V × Ux0 → ℝN .

3.4 Systems with locally explicit equations | 177

(3.4.13) and (3.4.14) is a closed system with unknown functions u and x if s(t + dt) and s(t) in (3.4.13) and (3.4.14) are replaced by a fixed s1 ∈ ρ. (f) For any solution (u, x) of a closed system (3.4.13) and (3.4.14), and any function y defined by (3.4.15), the equation (3.4.12) is locally explicit. We point out that the hypothesis (f) is satisfied if, for example, the function Y in (3.4.15) is continuous and the functions Mi which define the M-switch, see Theorem 3.3.11, are constant w. r. t. ρ. Under the hypotheses enumerated above, we may now prove the following Theorem 3.4.3 (Local existence and uniqueness). Suppose that the Conditions (a)–(f) are satisfied. Then the problem (3.4.12)–(3.4.16) has a solution on some interval [t0 , T); moreover, this solution is unique. Proof. We put s1 := Mi (s0 ) if y(t0 ) = Y(t0 , u0 , x0 ) ∈ Ki for some i and a = 1, and s1 := s0 otherwise. Choose r > 0 such that B(x0 , r) ⊆ Ux0 . For any continuous function x : [t0 , T0 ] → B(x0 , r), where t0 < T0 < t0 + h, let u(t) := u0 + D(t0 , s1 , u0 , xtt0 , t − t0 ); then u is continuous, by (a). Moreover, 󵄩󵄩 󵄩 󵄩 󵄩 t 󵄩󵄩u(t) − u0 󵄩󵄩󵄩 ≤ 󵄩󵄩󵄩D(t0 , s1 , u0 , xt0 , t − t0 ) − D(t0 , s1 , u0 , x0 , t − t0 )󵄩󵄩󵄩 󵄩 󵄩 󵄩 󵄩 + 󵄩󵄩󵄩D(t0 , s1 , u0 , x0 , t − t0 )󵄩󵄩󵄩 ≤ LD 󵄩󵄩󵄩xtt0 − x0 󵄩󵄩󵄩 + M ≤ LD r + M =: R,

(3.4.17)

where LD is from Condition (c) and 󵄩 󵄩 M := max{󵄩󵄩󵄩D(t0 , s1 , u0 , x0 , t − t0 )󵄩󵄩󵄩 : t0 ≤ t ≤ T0 }. The estimate (3.4.17) shows that the function u maps [t0 , T0 ] into the ball B(u0 , R). The Uryson–Volterra operator t

Ax(t) = x0 + ∫ f [τ, s1 , u(τ), x(τ)] dτ t0

satisfies t

󵄩󵄩 󵄩 󵄩 󵄩 󵄩󵄩Ax(t) − x0 󵄩󵄩󵄩 ≤ ∫󵄩󵄩󵄩f [τ, s1 , u(τ), x(τ)]󵄩󵄩󵄩 dτ ≤ H(T0 − t0 ), t0

where we have put 󵄩 󵄩 H := max{󵄩󵄩󵄩f (τ, s1 , v, ξ )󵄩󵄩󵄩 : t0 ≤ τ ≤ T0 , ‖v − x0 ‖ ≤ R, ‖ξ − x0 ‖ ≤ r}.

(3.4.18)

178 | 3 Equations with nonlinear differentials Note that the function under the integral in (3.4.18) is continuous, by condition (d) and the continuity of u and x. Taking now T1 < min{T0 , t0 +

r 1 ,t + }, H 0 Lf (LD + 1)

(3.4.19)

where LD is from Condition (c) and Lf from Condition (d), we deduce that the operator (3.4.18) maps the complete metric space X := C([t0 , T1 ], B(x0 , r)) into itself. We claim that the operator (3.4.18) is a contraction on this space. In fact, for any x, x ∈ X and t0 ≤ t ≤ T1 we have 󵄩 󵄩󵄩 󵄩󵄩Ax(t) − Ax(t)󵄩󵄩󵄩 t

󵄩 󵄩 ≤ ∫󵄩󵄩󵄩f [τ, s1 , u(τ), x(τ)] − f [τ, s1 , u(τ), x(τ)]󵄩󵄩󵄩 dτ t0

t

󵄩 󵄩 󵄩 󵄩 ≤ ∫ Lf (󵄩󵄩󵄩u(τ) − u(τ)󵄩󵄩󵄩 + 󵄩󵄩󵄩x(τ) − x(τ)󵄩󵄩󵄩) dτ t0

T1

󵄩 󵄩 󵄩 󵄩 ≤ Lf {∫[󵄩󵄩󵄩D(t0 , s1 , u0 , xtτ0 , t − t0 ) − D(t0 , s1 , u0 , x τt0 , t − t0 )󵄩󵄩󵄩 + 󵄩󵄩󵄩x(τ) − x(τ)󵄩󵄩󵄩] dτ} t0

T1

T1

≤ Lf {∫ LD ‖x − x‖X dτ + ∫ ‖x − x‖X dτ} = Lf (LD + 1)(T1 − t0 )‖x − x‖X . t0

t0

and the statement follows from our estimate (3.4.19) for T1 . So the operator (3.4.18) has a unique fixed point in X which is a solution of (3.4.14). By Condition (b), the function u in turn solves (3.4.13) on some interval [t0 , T2 ) ⊆ [t0 , T1 ). By means of the corresponding x we consider now the function y(t) = Y(t, u(t), x(t)). By Condition (f), we find some interval [t0 , T) ⊆ [t0 , T2 ) such that the function s with s(t0 ) = s0 and s(t) = s1 for t > t0 solves (3.4.12) on this interval. Summarizing we have shown that the triple (s(t), u(t), x(t)) is a solution of (3.4.12)–(3.4.16) on [t0 , T) with s(t) = s1 for t > t0 . ̃ u(t), ̃ ̃ Now we prove uniqueness. Suppose that (s(t), u(t), x(t)) and (s(t), x(t)) are different solutions of (3.4.12)–(3.4.16) on [t0 , T). Let ̃ u(t), ̃ ̃ x(t))}. t1 := inf{t > t0 : (s(t), u(t), x(t)) ≠ (s(t), Since x and u are continuous and s is left-continuous, we know that ̃ 1 ), u(t ̃ 1 ), x(t ̃ 1 )). (s(t1 ), u(t1 ), x(t1 )) = (s(t

(3.4.20)

3.4 Systems with locally explicit equations | 179

Moreover, the Volterra property of the function SK,M,a (see Subsection 3.3.3) im̃ = s1 on some interval (t1 , t2 ). plies that s(t) = s(t) Consider the Uryson–Volterra operator t

̃ Ax(t) = x(t1 ) + ∫ f [τ, s1 , u(τ), x(τ)] dτ t1

which is analogous to (3.4.18) and is defined for continuous functions x : [t1 , t2 ] → B(x0 , r). The function u is given by u(t) = u(t1 ) + D(t1 , s1 , u(t1 ), xtt1 , t − t1 ). In the same way as before, one can show that the operator à maps, for sufficiently small η > 0, the complete metric space X := C([t1 , t1 + η], B(x(t1 ), r)) into itself and is a contraction on this space. Since à has then a unique fixed point in X, we conclude ̃ and that, for t1 ≤ t ≤ t1 + η, we have x(t) ≡ x(t) ̃ + D(t1 , s1 , u(t ̃ 1 ), x̃tt , t − t1 ) = u(t). ̃ u(t) = u(t1 ) + D(t1 , s1 , u(t1 ), xtt1 , t − t1 ) = u(t) 1 But this contradicts our choice (3.4.20) of t1 , and so we are done. Denote by φ(t) := gtt0 (s0 , v0 , x0 ) the value of the solution26 of (3.4.12)–(3.4.15) at time t that starts at (s0 , v0 , x0 ) at time t = t0 . of the problem (3.4.12)–(3.4.15). If τ := sup 𝒟(φ), from Theorem 3.4.3 it follows that τ > t0 and τ ∈ ̸ 𝒟(φ). Theorem 3.4.3 gives conditions for the local solvability of the problem (3.4.12)– (3.4.15). The next theorem is a global solvability result. Theorem 3.4.4 (Global existence). Let Ux0 = ℝn , Uu0 = ℝm , and t0 + h =: T ≤ ∞. Assume that the hypotheses (a), (c), (d), (e), and (f) are satisfied, where the Lipschitz conditions in (c) and (d) are local. Moreover, suppose in addition that (g) For any function x ∈ C([t0 , T1 ], ℝn ), where t0 < T1 ≤ T, equation (3.4.13) with initial condition u0 ∈ V has a bounded solution on [t0 , T1 ). (h) The estimate ⟨f (t, s, v, ξ ), ξ ⟩ ≤ Lr (‖ξ ‖2 )

(‖v‖ ≤ r)

holds for ‖ξ ‖ sufficiently large, where the increasing function Lr satisfies the Osgoodtype condition ∞

∫ 1

dz = ∞. Lr (z)

Then either τ = T, or the function s has on [t0 , τ) infinitely many jumps. 26 In the literature, the function φ is sometimes called the shift operator. Observe that φ is welldefined, by the uniqueness statement in Theorem 3.4.3.

180 | 3 Equations with nonlinear differentials Proof. Suppose that τ < T and s has on [t0 , τ) only a finite number of jumps. Let t1 be the last switching moment; then s(t) ≡ s1 ∈ ρ for t1 < t < τ. Condition (g) implies that u is bounded on [t1 , τ), say ‖u(t)‖ ≤ r. If Condition (h) holds for ‖ξ ‖ ≥ h, we have ⟨f (t, s, v, ξ ), ξ ⟩ ≤ H

(t0 ≤ t ≤ τ, ‖v‖ ≤ r, ‖ξ ‖ ≤ h),

by Condition (d). Now, the function ℒr (z) := max{H, Lr (z)} is increasing and satisfies the same Osgood-type condition as Lr . Moreover, Condition (h) holds, with Lr replaced by ℒr for all ξ ∈ ℝm . Consequently, the estimate d 󵄩󵄩 󵄩2 󵄩 󵄩2 󵄩󵄩x(t)󵄩󵄩󵄩 ≤ 2⟨f (t, s, u(t), x(t)), x(t)⟩ ≤ 2ℒr (󵄩󵄩󵄩x(t)󵄩󵄩󵄩 ) dt shows that the function x is bounded on [t1 , τ). From classical results on differential inequalities it follows that ‖x(t)‖2 ≤ z(t), where z solves the scalar Cauchy problem {

ż = 2ℒr (z)

for t1 ≤ t < τ,

z(t1 ) = z1

with z1 := ‖x(t1 )‖2 . Obviously, the solution z is bounded on [t1 , τ), since otherwise z(t)

lim ∫ t→τ

z1

dζ ℒr (ζ )

= ∞,

contradicting the fact that z(t)



z1

dζ = 2(τ − t1 ) < 2(T − t1 ). ℒr (ζ )

Thus, both u and x are bounded, and therefore the function t 󳨃→ f (t, s1 , u(t)), x(t)) is also bounded on [t1 , τ). But (3.4.14) shows that then the derivative ẋ is bounded as well, and so the limit ̇ =: ξ lim x(t) t→τ

exists. Consequently, we may extend the function x uniquely onto the interval [t0 , τ + 1). By Condition (g), the function u may then also uniquely extended to this interval. Applying Theorem 3.4.3 with initial point (τ, s1 , u(τ), ξ ) we have extended the solution beyond τ, a contradiction. We illustrate our abstract results by means of two examples which are typical in applications to system theory.

3.4 Systems with locally explicit equations | 181

Example 3.4.5. This example describes a so-called temperature regulator.27 The temperature x = x(t) is confined to the interval [α, β], i. e., α ≤ x(t) ≤ β. The relay which here plays the role of the switch, turns on a heating process if the temperature reaches the lower bound α, and turns off this process if the temperature reaches the upper ̇ which is governed by some stop is supposed to bound β. The heating velocity ẋ = x(t) remain in an interval [v0 , v1 ] for 0 < v0 < v1 . The input function for the stop has the form σ(t) = p(t, x(t)),

(3.4.21)

where we assume that p : ℝ2 → ℝ is continuous with respect to the first variable and satisfies a Lipschitz condition 󵄨󵄨 󵄨 󵄨󵄨p(t, u) − p(t, v)󵄨󵄨󵄨 ≤ L|u − v|

(u, v ∈ ℝ)

with some Lipschitz constant L > 0 independent of t. The output function u = u(t) takes its values in [0, 1] and changes with the velocity in such a way that the function (3.4.21) keeps the value 0 or 1, respectively. So the heating process when turning on the relay is described by the formula ẋ = (v1 − v0 )u(t) + v0 ,

(3.4.22)

while the cooling process when turning off the relay is described by the formula ẋ = −f0 (x),

(3.4.23)

where f0 : [α, ∞) → [m, ∞) (m > 0) is some C 1 function. In this example the M-switch is a relay with threshold values α and β > α. The input y = y(t) of the relay describes the regulated temperature, the output s = s(t) assumes the values 0 (relay switched off) and 1 (relay switched on). We take K1 := (−∞, α] and K2 := [β, ∞). In case x(t1 ) ∈ K1 we have s(t) = 0 if t > t1 and the heating source is active, as long as x(t) does not enter K2 . Similarly, in case x(t1 ) ∈ K2 we have s(t) = 1 if t > t1 and the heating source is turned off, as long as x(t) does not enter K1 . Consequently, in the terminology of Definition 3.3.10 we have here a = 1, M1 (s) ≡ 0 on K1 , and M2 (s) ≡ 1 on K2 . For α < x(t) < β the state of the relay does not change until x(t) enters one of the sets K1 or K2 . Summarizing our discussion, we may describe the relay under consideration in the form −s(t) { { { s(t + dt) − s(t) = { 1 − s(t) { { { 0 ∗

if x(t) ≤ α and dt > 0, if x(t) ≥ β and dt > 0, if α < x(t) < β or dt = 0.

27 In Subsection 3.4.5 below we will study certain stability properties of such regulators.

(3.4.24)

182 | 3 Equations with nonlinear differentials Since Condition (a) from Theorem 3.3.11 is satisfied for this problem, Condition (f) also holds. The velocity of heat transfer is governed by the stop whose input function is (3.4.21). So the stop equation with input function u = u(t) which assumes its values in [0, 1] has, in the notation (3.3.3), the form u(t + dt) − u(t) σ(t + dt) { { { = { σ(t + dt) − max{σ(s) : t ≤ s ≤ t + dt} { { { σ(t + dt) − min{σ(s) : t ≤ s ≤ t + dt} ∗

if 0 < u(t) < 1, if u(t) = 1,

(3.4.25)

if u(t) = 0.

The part of (3.4.25) after the curly bracket depends only on u(t) ∈ [0, 1] and σtt+dt ∈ C([t, t + dt], ℝ) and is continuous with respect to dt, which means that Condition (a) holds in the sense of the global Theorem 3.4.4. Moreover, in the variable σtt+dt it satisfies a Lipschitz condition. In fact, for u(t) = 1 we have 󵄨󵄨 ̃ + dt) + σ[t, ̃ t + dt]󵄨󵄨󵄨󵄨 󵄨󵄨σ(t + dt) − σ[t, t + dt] − σ(t 󵄨 ̃ + dt)󵄨󵄨󵄨󵄨 + 󵄨󵄨󵄨󵄨σ[t, t + dt] − σ[t, ̃ t + dt]󵄨󵄨󵄨󵄨, ≤ 󵄨󵄨󵄨σ(t + dt) − σ(t

(3.4.26)

where we have adopted the abbreviation (3.3.4). For the sake of definiteness, assume ̃ t + dt]. Denote by t1 the point, where the continuous function σ that σ[t, t + dt] ≥ σ[t, takes its maximal value on [t, t + dt], i. e., σ(t1 ) = σ[t, t + dt]. Then 󵄨󵄨 ̃ t + dt]󵄨󵄨󵄨󵄨 = σ[t, t + dt] − σ[t, ̃ t + dt] 󵄨󵄨σ[t, t + dt] − σ[t, ̃ t + dt] ≤ σ(t1 ) − σ(t ̃ 1 ) ≤ 󵄩󵄩󵄩󵄩σtt+dt − σ̃ tt+dt 󵄩󵄩󵄩󵄩. = σ(t1 ) − σ[t,

(3.4.27)

So from (3.4.26) and (3.4.27) we conclude that the Lipschitz condition holds with Lipschitz constant 2. The proof is analogous in case u(t) = 0 and obvious in case 0 < u(t) < 1. Taking into account that the map xtt+dt 󳨃→ σtt+dt is also Lipschitz continuous (with Lipschitz constant L, by assumption), we see that (3.4.25) satisfies Condition (c). In Theorem 3.3.1 we have already shown28 that Condition (b) is also satisfied. Finally, the equation for the function x = x(t) has the form ̇ ={ x(t)

(v1 − v0 )u(t) + v0

if s(t) = 0,

−f0 (x(t))

if s(t) = 1,

(3.4.28)

by (3.4.22) and (3.4.23). The right-hand side of (3.4.28) is defined for s(t) ∈ {0, 1}, u(t) ∈ ℝ, and x(t) ∈ ℝ (if 𝒟(f0 ) = ℝ). It satisfies a global Lipschitz condition in u(t) (with Lipschitz constant v1 − v0 ), and a local Lipschitz constant in x(t), since f0 is 28 Indeed, we have shown that even the stronger Condition (g) holds.

3.4 Systems with locally explicit equations | 183

continuously differentiable. In other words, Condition (d) is satisfied in the global variant. To verify Condition (h) we remark that ⟨f (t, s, v, ξ ), ξ ⟩ = {

(v1 − v0 )vξ + v0 ξ

if s = 0,

−f0 (ξ )ξ

if s = 1.

(3.4.29)

Suppose first that ξ < α. Then s = 0, so we have to estimate the first term in the right-hand side of (3.4.29). As mentioned after stating Condition (h), it suffices to require this only for large |ξ |. We may also assume in addition that ξ < −1, because ⟨f (t, s, v, ξ ), ξ ⟩ ≤ (v1 − v0 )rξ 2 + v0 ξ 2

(3.4.30)

for |v| ≤ r. But the function ξ 2 󳨃→ (v1 − v0 )rξ 2 + v0 ξ 2 on the right-hand side of (3.4.30) obviously satisfies an Osgood condition for large values of ξ . Suppose now29 that ξ > β. Then s = 1, hence ⟨f (t, s, v, ξ ), ξ ⟩ = −f0 (ξ )ξ ≤ 0 ≤ ξ 2 ,

(3.4.31)

and also the function on the right-hand side of (3.4.31) satisfies an Osgood condition for large values of ξ . Thus, the system (3.4.24)/(3.4.25)/(3.4.28) satisfies all hypotheses of the global Theorem 3.4.4. Consequently, either τ = sup 𝒟(φ) = ∞, where φ(t) = gtt0 (s0 , u0 , x0 ), or s(t) has infinitely many jumps on [t0 , τ). We show now that the second case for τ < ∞ cannot occur. Indeed, x(t) < α implies s(t) = 0, and the right-hand side of (3.4.28) is not smaller than v0 , since u(t) ∈ [0, 1]. On the other hand, x(t) > β implies s(t) = 1 and ẋ = −f0 (x(t)) ≤ −m < 0. So the values x(t) belong to [x0 , β] for x0 < α, to [α, x0 ] for x0 > β, and to [α, β] for ̇ is bounded on [t0 , τ) which shows that α ≤ x0 ≤ β. In any case, the derivative x(t) x(t) cannot change in finite time infinitely often between K1 and K2 . Consequently, s(t) cannot have infinitely many jumps. Summarizing our discussion, we may assert that the system (3.4.24)/(3.4.25)/ (3.4.28) has, for any initial value, a unique solution on [t0 , ∞). In Example 3.4.5 we have considered a fairly general system with a finite number of switches on a bounded interval. Now we give an example of a system with an infinite number of switches on a bounded interval. Example 3.4.6. Consider a system consisting of an M-switch with input x = x(t) for which ρ = {0, 1}, K1 = {0}, M1 (0) = 1, and M1 (1) = 0. The underlying differential equation is ẋ = f (t, s(t), u(t), x(t)), 29 Without loss of generality we assume in addition that ξ > |β|.

184 | 3 Equations with nonlinear differentials where f (t, 0, u, x) = {

3t 2 sin 1t − t cos 1t

if t < 0,

0

if t ≥ 0

and f (t, 1, u, x) = −f (t, 0, u, x). Since (3.4.13) is missing here, we will assume that u(t) ≡ u0 . The initial conditions we impose are s(−1) = 0,

u(−1) = u0 ,

x(−1) = sin 1.

Let us show that this system satisfies the hypotheses of the global Theorem 3.4.4. Conditions (a), (b) and (c) on D are satisfied, because D(t, s, u, φ, dt) ≡ 0. The function f is obviously continuous in t and does not depend on u and x. Replacing s(t) in the differential equation by 0 or 1 we get solutions of the form (u0 , t 3 sin 1t ) and (u0 , −t 3 sin 1t + c) for t < 0, and (u0 , c) for t ≥ 0. For c ≠ 0, the equation t 3 sin 1t = c has only finitely many solutions30 in the interval [−1, 0). Consequently, every point t ∈ [−1, ∞) is adjacent from the right to some interval which does not contain input points31 of x which enter K1 . On the other hand, in case c = 0 we have the infinitely many input points tn := −1/nπ (n = 1, 2, 3, . . .). But also in this case one may find at every point t ∈ [−1, ∞) adjacent intervals from the right which do not contain input points entering K1 . In this way we have shown that Condition (b) for locally explicit M-switches (Theorem 3.3.11) holds, and so also Condition (f). Condition (g) is trivially satisfied, since u(t) ≡ u0 . Finally, since 󵄨󵄨 1 1 󵄨󵄨󵄨 󵄨 ⟨f (t, s, v, ξ ), ξ ⟩ ≤ 󵄨󵄨󵄨3t 2 sin − cos 󵄨󵄨󵄨|ξ | ≤ 4|ξ | 󵄨󵄨 t t 󵄨󵄨

(−1 ≤ t < 0),

we get again an estimate with a function which satisfies an Osgood condition32 for large values of ξ . By Theorem 3.4.4, we know that either τ = sup 𝒟(φ) = ∞, where φ(t) = gtt0 (s0 , u0 , x0 ), or s(t) has infinitely many jumps on [t0 , τ). We claim that τ = 0 in this example. First of all, it is not hard to see that for −1 ≤ t < 0 the solution of the corresponding Cauchy problem is φ(t) = (s(t), u0 , x(t)), where s(t) := {

0

for − 1 ≤ t ≤ − π1 or −

1

for −

1 (2n−1)π

1 2nπ

1 < t ≤ − 2nπ

1 < t ≤ − (2n+1)π ,

30 or no solution at all. 31 For t ≥ 0 we have no input points at all, because x(t) = c ≠ 0. 32 Clearly, this is also true for t ≥ 0.

3.4 Systems with locally explicit equations | 185

and x(t) := {

t 3 sin 1t

−t 3 sin 1t

for − 1 ≤ t ≤ − π1 or − for −

1 (2n−1)π

1 2nπ

1 < t ≤ − 2nπ .

1 < t ≤ − (2n+1)π ,

This shows that s(t) has infinitely many jumps in the interval [−1, 0) at all points tn = −1/nπ, and so has no left limit for t → 0−. In other words, it is impossible to extend the solution to t = 0, and so τ = 0. 3.4.3 Closed systems with stop-type hysteresis element Consider the problem (3.4.3), (3.4.5), and u(t + dt) − u(t) = E(t, u, σtt+dt , dt) + o(dt),

(3.4.32)

subject to the initial conditions u(t0 ) = u0 and (3.4.6). Following [66], we impose the following conditions for some T > 0. (i) The function f : 𝒟1 × 𝒟2 × 𝒟3 → ℝn is continuous, where 𝒟1 is a neighbourhood of t0 ∈ ℝ, 𝒟3 a neighbourhood of x0 ∈ ℝn , and 𝒟2 ⊂ ℝm . (j) The function p : 𝒟3 → ℝ is continuous. (k) The function E : (t, u, σ, dt) 󳨃→ E(t, u, σtt+dt , dt) is defined for t ∈ 𝒟1 , u ∈ 𝒟2 , σ ∈ C[t0 , T] and dt ∈ [0, ∞), takes values in ℝm , and is continuous with respect to σ and dt. (l) For any continuous function σ : [t0 , T] → ℝ, (3.4.32) is a locally explicit equation with function D(t, u, dt) = E(t, u, σtt+dt , dt). Under these hypotheses, we may prove the following local solvability theorem. Theorem 3.4.7 (Local existence). Suppose that the Conditions (i)–(l) are satisfied. Then the problem (3.4.3)/(3.4.5)/(3.4.6)/(3.4.32) with u(t0 ) = u0 has at least one solution on [t0 , t0 + h] for some h > 0. Proof. Given any continuous function x : [t0 , T] → 𝒟3 , consider the function u defined by u(t) := E(t0 , u0 , σtt0 , t − t0 ), where σ(t) = p(x(t)) as in (3.4.5). The function u solves problem (3.4.32) on some interval [t0 , t0 + Δ(x)), where Δ(x) depends on (t0 , u0 ) and on the choice of the function x. Since u(t) ∈ 𝒟2 , we may rewrite (3.4.3) in the form ẋ = f ̃(t, xtt0 ), where f ̃(t, xtt0 ) = f (t, u0 + E(t0 , u0 , σtt0 , x(t)).

(3.4.33)

186 | 3 Equations with nonlinear differentials We claim that the function (t, x) 󳨃→ f ̃(t, x) is continuous. In fact, suppose that xk → x in C([t0 , T], 𝒟3 ) and tk → t in ℝ, as k → ∞. Then t

t

f ̃(tk , (xk )tk0 ) = f (tk , u0 + E[t0 , u0 , (σk )tk0 , xk (tk )) and t

f ̃(t, x tt0 ) = f (t, u0 + E[t0 , u0 , σ t0 , x(t)). Since f is continuous in all arguments and E is continuous in the last two arguments, by Conditions (i) and (h), it suffices to show that σk → σ in C[0, T] as k → ∞. Observe that the range of the functions xk and x is compact in ℝn . Therefore the modulus of continuity ωp of the continuous function p, see Condition (j), satisfies ωp (δ) → 0 as δ → 0+. Consequently, 󵄨 󵄨 ‖σk − σ‖ = max 󵄨󵄨󵄨p(xk (t)) − p(x(t))󵄨󵄨󵄨 ≤ ωp (‖xk − x‖) → 0 (k → ∞). t ≤t≤T 0

As usual, the problem (3.4.33) with initial condition (3.4.6) is equivalent to the fixed point equation x = Jx,

(3.4.34)

where the integral operator J : C([t0 , T], 𝒟3 ) → C([t0 , T], ℝn ) is given by t

Jx(t) := x0 + ∫ f ̃(s, xts0 ) ds.

(3.4.35)

t0

Since f ̃ is continuous, as we have shown before, the function (t, x) 󳨃→ f ̃(t, xtt0 ) is uniformly continuous on the compact set [t0 , T] × {x, x1 , x2 , . . .}. For every ε > 0 we can therefore find a δ > 0 such that ‖xk − x‖ < δ implies 󵄩󵄩 ̃ t 󵄩 t 󵄩󵄩f (t, (xk )t0 ) − f ̃(t, xt0 )󵄩󵄩󵄩 < ε

(t0 ≤ t ≤ T).

This shows that f ̃(t, (xk )tt0 ) → f ̃(t, xtt0 ) as k → ∞, uniformly in t, and so Jxk → Jx. We conclude that the integral operator (3.4.35) is continuous on C([t0 , T], 𝒟3 ). Denote by x0 the constant function x0 (t) ≡ x0 . Then there exists a δ > 0 such that for all t ∈ ℝ satisfying t0 ≤ t ≤ t0 + δ and all x ∈ C[0, T] satisfying ‖x − x0 ‖ ≤ δ we have t 󵄩 󵄩󵄩 ̃ t 󵄩󵄩f (t, xt0 ) − f ̃(t0 , (x 0 )t00 )󵄩󵄩󵄩 ≤ 1.

3.4 Systems with locally explicit equations | 187

Let B := B(x0 , δ) denote the closed ball of radius δ > 0 around x 0 in C[t0 , T], where T − t0 < δ. For any x ∈ B we have T

t 󵄩 󵄩 󵄩 󵄩 ‖Jx − x0 ‖ ≤ ∫󵄩󵄩󵄩f ̃(s, xts0 )󵄩󵄩󵄩 ds ≤ (1 + 󵄩󵄩󵄩f ̃(t0 , (x 0 )t00 )󵄩󵄩󵄩)(T − t0 ). t0

So if we choose T ≤ t0 +

δ

t

1 + ‖f ̃(t0 , (x0 )t00 )‖

we see that Jx ∈ B, which means that J maps the closed ball B into itself. We conclude the proof by showing that the range J(B) is equicontinuous; the existence of a solution of the fixed point equation (3.4.34) follows then from the classical Arzelà-Ascoli compactness criterion and Schauder’s fixed point theorem. In fact, for x ∈ B and t0 ≤ t1 ≤ t2 ≤ T we have t2

t 󵄩 󵄩󵄩 󵄩 󵄩 󵄩 s 󵄩 󵄩󵄩Jx(t1 ) − Jx(t2 )󵄩󵄩󵄩 ≤ ∫󵄩󵄩󵄩f ̃(s, xt0 )󵄩󵄩󵄩 ds ≤ (1 + 󵄩󵄩󵄩f ̃(t0 , (x 0 )t00 )󵄩󵄩󵄩)(t2 − t1 ) t1

which shows that J(B) is equicontinuous, hence precompact. So there exists a solution x of (3.4.34) which also solves (3.4.33). As indicated at the beginning of the proof, with the fixed point x we associate the number Δ(x) > 0 and a solution u = u(t) of (3.4.32) on the interval [t0 , t0 + Δ(x)]. Then the pair (u, x) solves (3.4.3)/(3.4.5)/(3.4.6)/(3.4.32) with u(t0 ) = u0 on [t0 , t0 + h] for 0 < h < min{Δ(x), T − t0 }. The proof is complete. We point out that some models of automatic control systems with stop-type hysteresis elements may be represented in the form (3.4.3)/(3.4.5)/(3.4.32). Moreover, we have seen that the converter of a stop element may be written, for any continuous input σ = σ(t), as locally explicit equation (3.4.32), where the function D(t, u, dt) occurring in (3.3.3) is nothing else but the function E(t, u, σtt+dt , dt) occurring in the right-hand side of (3.4.32). Theorem 3.4.7 shows that the corresponding Cauchy problem for such a stop-type hysteresis equation is, under some natural assumptions, always locally solvable.

3.4.4 On ψ-stable solutions of dynamical systems In this section we consider examples of so-called generalized dynamical systems and illustrate their connection with locally explicit equations. We start with the corresponding definition.

188 | 3 Equations with nonlinear differentials Definition 3.4.8. A generalized dynamical system is a family A of continuous functions which are defined on some interval I ⊆ ℝ, take their values in ℝn . The functions belonging to A are called solutions. We will be interested, in particular, in the stability of such dynamical systems. Such questions have been studied, for instance, in [42, 65, 66, 95]; we summarize some cases with the following Example 3.4.9. (a) The set of all solutions of the ordinary differential equation ẋ = f (t, x), where f : ℝ × ℝn → ℝn is continuous, obviously forms a generalized dynamical system. (b) More generally, the set of all solutions of the ordinary differential inclusion ẋ ∈ F(t, x), where F(t, x) is a subset of ℝn with suitable additional properties, forms a generalized dynamical system.33 (c) Consider the locally explicit equation (3.0.2), where D(t, y, ⋅) is right-continuous at 0, i. e., D(t, y, dt) → 0 as dt → 0+. Then the solution set of (3.0.2) is a generalized dynamical system. (d) Consider the system with M-switch (3.4.12)/(3.4.13)/(3.4.14), where the initial state s0 of the system is fixed, and the right-hand side of (3.4.13) is also right-continuous in the argument dt at 0. Then the set of solution pairs (u, x) forms a generalized dynamical system. Now we are going to define what we mean some non-standard stability concepts for generalized dynamical systems. Here we distinguish between two types: – stability of solutions for t ≥ t0 , where t0 is a fixed initial moment, called ψ0 -stability, and, more generally, – stability of solutions for t ≥ t1 , uniformly with respect to several initial moments t1 , called ψ-stability. Definition 3.4.10. Let φ : I → ℝn be a fixed solution of a generalized dynamical system A, t0 ∈ I an initial moment, U0 := ([t0 , ∞) ∩ I) × [0, ∞) = {(t, a) : t ∈ I, t ≥ t0 , a ≥ 0},

(3.4.36)

33 This follows from the fact that the solutions of such a differential inclusion are supposed to be absolutely continuous, see [14, p. 40] or [6, p. 120].

3.4 Systems with locally explicit equations | 189

and ψ0 : U0 → [0, ∞) some continuous function which is increasing in the second argument. We say that φ is ψ0 -stable if there exists Δ > 0 such that, for all φ ∈ A and all t ∈ 𝒟(φ) ∩ I ∩ [t0 , ∞), we have 󵄩󵄩 󵄩 󵄩 󵄩 󵄩󵄩φ(t) − φ(t)󵄩󵄩󵄩 ≤ ψ0 (t, 󵄩󵄩󵄩φ(t0 ) − φ(t0 )󵄩󵄩󵄩) provided that ‖φ(t0 ) − φ(t0 )‖ < Δ. The general stability notion in Definition 3.4.10 is related to several classical stability definitions. Thus, for systems of ordinary differential equations, exponential stability is nothing else but ψ0 -stability for ψ0 (t, a) = Me−γ(t−t0 ) a. If I = [t0 , ∞) and ψ0 (t, a) → 0 as a → 0+, uniformly with respect to t, then ψ0 -stability implies Lyapunov-stability. Moreover, if in addition ψ0 (t, a) → 0 as t → ∞, then ψ0 -stability also implies asymptotic stability. The previously defined notions in general depend on the initial moment t0 . In order to define parallel notions, uniformly with respect to initial moments, we replace (3.4.36) by the set U := ([t1 , ∞) ∩ I) × {t1 } × [0, ∞) = {(t, t1 , a) : t, t1 ∈ I, t ≥ t1 , a ≥ 0} and replace ψ0 by a continuous function ψ : U → [0, ∞) which is (not necessarily strictly) increasing in the third argument. The following definition is parallel to Definition 3.4.10 and was given in [63, 65]. Definition 3.4.11. In the above notation, we say that a solution φ of a generalized dynamical system A is ψ-stable, uniformly with respect to initial moments if there exists Δ > 0 such that, for all φ ∈ A and all t, t1 ∈ 𝒟(φ) ∩ I satisfying t ≥ t1 , we have 󵄩 󵄩 󵄩 󵄩󵄩 󵄩󵄩φ(t) − φ(t)󵄩󵄩󵄩 ≤ ψ(t, t1 , 󵄩󵄩󵄩φ(t1 ) − φ(t1 )󵄩󵄩󵄩) provided that ‖φ(t1 ) − φ(t1 )‖ < Δ. Of, course ψ-stability is stronger than ψ0 -stability. We also remark that, if I = [t0 , ∞) and ψ(t, t1 , a) → 0 as a → 0+, uniformly with respect to t, then ψ-stability implies classical stability, uniformly with respect to initial moments. Both the notion of ψ0 -stability and the more general notion of ψ-stability have been introduced and studied in [65]. These concepts are similar to that of measuring asymptotic stability via KL-functions, see [15, 83]. To illustrate the abstract Definitions 3.4.10 and 3.4.11, let us consider now a series of examples. Example 3.4.12. It is well known that the solutions of the linear system with constant coefficients ẋ = Ax

(3.4.37)

190 | 3 Equations with nonlinear differentials satisfy an estimate of the type 󵄩 󵄩 󵄩 󵄩󵄩 α(t−t1 ) (t − t1 )p−1 󵄩󵄩󵄩x(t1 )󵄩󵄩󵄩, 󵄩󵄩x(t)󵄩󵄩󵄩 ≤ Me where α is the largest real part of the eigenvalues of A, and p is the largest dimension of the Jordan blocks corresponding to the eigenvalue with real part α. In the terminology of Definition 3.4.11 this means that, choosing ψ(t, t1 , a) = Meα(t−t1 ) (t − t1 )p−1 a, the dynamical system (3.4.37) is ψ-stable, uniformly with respect to initial moments. Example 3.4.13. Here we consider five specific examples from the viewpoint of ψ0 -stability, where the initial moment t0 = 1 is fixed, and all equations are considered for t ≥ 1. (a) Since the solution of the initial value problem {

ẋ = − xt ,

x(1) = x0

has the unique solution x(t) = x0 /t, the zero solution x(t) ≡ 0 is ψ0 -stable on [1, ∞) for ψ0 (t, a) = a/t. (b) Similarly, the zero solution of the differential equation ẋ = −

2x log t t

(t ≥ 1)

is ψ0 -stable for ψ0 (t, a) =

a . t log t

(c) The zero solution of the differential equation ẋ = −√3 x is ψ0 -stable for 3/2

ψ0 (t, a) = (max{a2/3 + 32 (1 − t), 0}) (d) The zero solution of the differential equation ẋ = −x 3 is ψ0 -stable for ψ0 (t, a) =

a

√1 + 2a2 (t − 1)

.

.

3.4 Systems with locally explicit equations | 191

(e) Finally, the zero solution of the differential equation ẋ = {

− x1

for x ≠ 0,

0

for x = 0

is ψ0 -stable for ψ0 (t, a) = √max{a2 + 2(1 − t), 0}. Since in all these examples we have ψ0 (t, a) → 0 as a → 0+, uniformly with respect to t, as well as ψ0 (t, a) → 0 as t → ∞, the zero solution is also asymptotically stable. Example 3.4.14. Now we consider three more examples from the viewpoint of ψ0 -stability, ψ-stability, and other stability concepts. (a) The zero solution of the differential equation ẋ = x sin t is ψ-stable, uniformly with respect to initial moments, because ψ(t, t1 , a) = aecos t1 −cos t tends to zero as a tends to zero, uniformly in t. (b) The zero solution of the differential equation ẋ = x, subject to the initial condition x(t0 ) = x0 , is ψ0 -stable for ψ0 (t, a) = aet−t0 . However, here we have ψ0 (t, a) → 0 as a → 0+ for fixed t0 , but not uniformly with respect to t. (c) Let b : [t0 , ∞) → ℝ be a continuous function and t

B(t) := ∫ b(s) ds t0

its primitive satisfying B(t0 ) = 0. Suppose that B(t) ≤ 0 for t ≥ t0 . Then every solution of the initial value problem {

ẋ = b(t)x|x|p (t ≥ t0 ), x(t0 ) = x0 ,

where p > 0, satisfies |x0 |p 󵄨󵄨 󵄨p . 󵄨󵄨x(t)󵄨󵄨󵄨 = 1 − p|x0 |p B(t)

(3.4.38)

192 | 3 Equations with nonlinear differentials So the zero solution of (3.4.38) is ψ0 -stable if we choose34 ψ0 (t, a) =

a . (1 − pap B(t))1/p

Moreover, if B(t) → −∞ for t → ∞, the zero solution of (3.4.38) is asymptotically stable. The occurrence of the exponent p > 0 in equation (3.4.38) (and the corresponding expression for ψ0 ) leads to the following special case of Definition 3.4.10 and Definition 3.4.11. Definition 3.4.15. Let φ : I → ℝn be a fixed solution of a generalized dynamical system A, t0 ∈ I an initial moment, and p0 > 0. We say that φ is p0 -stable if there exist Δ > 0 and M > 0 such that, for all φ ∈ A and all t ∈ 𝒟(φ) ∩ I ∩ (t0 , ∞), we have M 󵄩 󵄩󵄩 󵄩󵄩φ(t) − φ(t)󵄩󵄩󵄩 ≤ (t − t0 )p0 provided that ‖φ(t0 ) − φ(t0 )‖ < Δ. Similarly, we say that a solution φ of A is uniformly p-stable if there exist Δ > 0 and M > 0 such that, for all φ ∈ A and all t, t1 ∈ 𝒟(φ) ∩ I satisfying t > t1 , we have M 󵄩 󵄩󵄩 󵄩󵄩φ(t) − φ(t)󵄩󵄩󵄩 ≤ (t − t1 )p provided that ‖φ(t1 ) − φ(t1 )‖ < Δ. Clearly, Definition 3.4.15 is a special case of Definitions 3.4.10 (resp. Definition 3.4.11), because the function ψ0 (resp. the function ψ) does here not depend on a. For example, we have seen in Example 3.4.13 that the zero solution of the differential equation ẋ = −x 3 is ψ0 -stable for ψ0 (t, a) =

a

√1 + 2a2 (t − 1)

.

However, a trivial calculation shows that the unique solution of the initial value problem {

ẋ = −x 3 , x(t0 ) = x0

34 Note that the assumption B(t) ≤ 0 guarantees that ψ0 (t, ⋅) is increasing, as required in Definition 3.4.10.

3.4 Systems with locally explicit equations | 193

has the unique solution x(t) =

x0 √2x02 (t − t0 ) + 1

,

So the zero solution is also p0 -stable for p0 = 1/2. We remark that exponential stability, uniformly with respect to initial moments, implies uniform p-stability for arbitrary p > 0. In fact, the exponential stability of a solution φ means that there exist M > 0, γ > 0, and δ > 0 such that, for all φ ∈ A and every t, t1 ∈ I ∩ 𝒟(φ) satisfying t ≥ t1 we have 󵄩󵄩 󵄩 −γ(t−t1 ) 󵄩 󵄩󵄩φ(t1 ) − φ(t1 )󵄩󵄩󵄩) 󵄩󵄩φ(t) − φ(t)󵄩󵄩󵄩 ≤ Me 󵄩 󵄩 provided that ‖φ(t1 ) − φ(t1 )‖ < Δ. The function g : [t1 , ∞) → [0, ∞) defined by g(t) := (t − t1 )p e−γ(t−t1 ) has a unique maximum on [t1 , ∞), namely, t ̂ = t1 +

p , γ

g(t)̂ =

pp . γ p ep

So putting M1 := MΔpp /γ p ep , we have M1 󵄩󵄩 󵄩 󵄩󵄩φ(t) − φ(t)󵄩󵄩󵄩 ≤ (t − t1 )p every t, t1 ∈ I ∩ 𝒟(φ) satisfying t ≥ t1 , provided that ‖φ(t1 ) − φ(t1 )‖ < Δ. This means precisely that φ is uniformly p-stable as claimed. Given a generalized dynamical system A with a fixed solution φ ∈ A, we call the set Aφ := {φ − φ : φ ∈ A} the reduced system with respect to φ. Here the domain of definition of φ−φ is 𝒟(φ−φ) = I ∩ 𝒟(φ) ∩ 𝒟(φ). So a solution φ of A is stable in one of the senses introduced above if and only if the zero solution of Aφ is stable in the same sense. In what follows, we will use throughout the reduced dynamical system35 which we denote simply by A. The next proposition provides some useful upper estimates for Lyapunov-type functions associated to stable solutions of a generalized dynamical system A. Proposition 3.4.16. Suppose that there exist a continuous function V : I × ℝn → ℝ and numbers γ, r, b, H, α > 0 such that the following three hypotheses are fulfilled. 35 Here and in the sequel we assume that the zero solution is defined on the whole interval I, so we need not consider intersections of domains of definition with I.

194 | 3 Equations with nonlinear differentials (a) For all (t, x) ∈ I × ℝn with ‖x‖ ≤ H we have r‖x‖b ≤ V(t, x). (b) For all φ ∈ A and all t ∈ 𝒟(φ) from ‖φ(t)‖ ≤ H it follows that D∗ V(t, φ(t)) ≤ −γV α (t, φ(t)), where D∗ V denotes the lower Dini derivative of V. (c) The convergence V(t, x) → 0 as x → 0 holds uniformly with respect to t ∈ I. Then there exists Δ > 0 such that, for any φ ∈ A and all t, t1 ∈ 𝒟(φ) satisfying t > t1 and ‖φ(t1 )‖ ≤ H, the following estimates are true. V α−1 (t, φ(t)) ≤

V α−1 (t1 , φ(t1 )) 1 + γ(α − 1)(t − t1 )V α−1 (t1 , φ(t1 ))

(3.4.39)

if α > 1, V(t, φ(t)) ≤ V(t1 , φ(t1 ))e−γ(t−t1 )

(3.4.40)

V 1−α (t, φ(t)) ≤ max{0, V 1−α (t1 , φ(t1 )) − γ(1 − α)(t − t1 )}

(3.4.41)

if α = 1, and

if α < 1. Proof. We claim that we can find Δ1 > 0 such that, for all φ ∈ A and every t, t1 ∈ 𝒟(φ) from ‖φ(t1 )‖ < Δ1 it follows that ‖φ(t)‖ < H for all t ≥ t1 . In fact, assumption (c) implies that there exists Δ1 > 0 such that V(t, x) < rH b for t ∈ I and ‖x‖ < Δ1 . We show that any solution of the generalized dynamical system which enters the Δ1 -neighbourhood of the zero solution, does not leave afterwards its H-neighbourhood. To see this, suppose that φ is a solution for which there exists t2 > t1 with the property that ‖φ(t1 )‖ < Δ1 but ‖φ(t2 )‖ = H. From assumption (b), Lemma 3.4.1, and the fact that the map t 󳨃→ V(t, φ(t)) is decreasing it follows then that rH b > V(t1 , φ(t1 )) ≥ V(t2 , φ(t2 )) ≥ rH b , a contradiction. So we have shown that, if φ(t1 ) at some moment t1 belongs to the Δ1 -neighbourhood of the zero solution, then 󵄩 󵄩b r 󵄩󵄩󵄩φ(t)󵄩󵄩󵄩 ≤ V(t, φ(t)),

D∗ V(t, φ(t)) ≤ −γV α (t, φ(t)) (t ≥ t1 ).

Let φ be some solution of the generalized dynamical system satisfying ‖φ(t1 )‖ < Δ ≤ Δ1 . A straightforward calculation shows that the upper solution of the initial value problem {

v̇ = −γvα , v(t1 ) = v1

3.4 Systems with locally explicit equations | 195

has the form v1 [1 + γ(α − 1)(t − t1 )v1α−1 ]1/(1−α) { { { v(t) = { v1 e−γ(t−t1 ) { { 1−α 1/(1−α) } { max{0, [v1 + γ(α − 1)(t − t1 )]

for α > 1, for α = 1, for α < 1.

Taking v(t) := V(t, φ(t)), assumption (b) implies that, for α > 1, 1/(1−α)

v(t) ≤ v1 [1 + γ(α − 1)(t − t1 )v1α−1 ]

(t ≥ t1 ),

and (3.4.39) follows. Similarly, for α = 1 we get (3.4.40), while for α < 1 we get (3.4.41). If we replace in Proposition 3.4.16 the assumption (c) by the weaker assumption that V(t0 , x) → 0 as x → 0 for just one point t0 ∈ I, the estimates (3.4.39), (3.4.40) and (3.4.41) take the weaker form V α−1 (t, φ(t)) ≤

V α−1 (t0 , φ(t0 )) 1 + γ(α − 1)(t − t0 )V α−1 (t0 , φ(t0 ))

if α > 1, V(t, φ(t)) ≤ V(t0 , φ(t0 ))e−γ(t−t0 ) if α = 1, and V 1−α (t, φ(t)) ≤ max{0, V 1−α (t0 , φ(t0 )) − γ(1 − α)(t − t0 )} if α < 1. In the remaining part of this subsection we use Proposition 3.4.16 to derive conditions for the ψ-stability and uniform p-stability of solutions. We start with the following Theorem 3.4.17. Suppose that there exist a continuous function V : I × ℝn → ℝ and numbers γ, r, R, b, H, α > 0 such that the following two hypotheses are fulfilled. (a) For all (t, x) ∈ I × ℝn with ‖x‖ ≤ H we have r‖x‖b ≤ V(t, x) ≤ R‖x‖b . (b) For all φ ∈ A and all t ∈ 𝒟(φ) from ‖φ(t)‖ ≤ H it follows that D∗ V(t, φ(t)) ≤ −γV α (t, φ(t)). Then the zero solution of the generalized dynamical system A is ψ-stable, uniformly with respect to initial moments, where c0 a[1 + c2 (t − t1 )abc1 ]−1/bc1 { { { ψ(t, t1 , a) = { c0 ae−γ(t−t1 )/b { { −bc 1/bc { max{0, [(c0 a) 1 + γc1 (t − t1 )] 1 }

for α > 1, for α = 1 for α < 1.

(3.4.42)

196 | 3 Equations with nonlinear differentials Here the constants c0 , c1 , c2 are given by c0 =

R1/b , r 1/b

c1 = α − 1,

c2 = γ(α − 1)r α−1 .

(3.4.43)

Proof. Observe that in all three cases the function ψ is increasing in the last argument. We also remark that the two-sided estimate in (a) implies that assumption (c) from Proposition 3.4.16 is satisfied. So we conclude that the three estimates (3.4.39), (3.4.40) and (3.4.41) hold true. This means that in case α > 1 we get for t ≥ t1 the estimate 󵄩 󵄩b(α−1) r α−1 󵄩󵄩󵄩φ(t)󵄩󵄩󵄩 ≤

Rα−1 ‖φ(t1 )‖b(α−1) 1 + γ(α − 1)(t − t1 )r α−1 ‖φ(t1 )‖b(α−1)

or, dividing by r α−1 and taking into account (3.4.43), c0 ‖φ(t1 )‖ 󵄩󵄩 󵄩 󵄩 󵄩 = ψ(t, t1 , 󵄩󵄩󵄩φ(t1 )󵄩󵄩󵄩). 󵄩󵄩φ(t)󵄩󵄩󵄩 ≤ [1 + c2 (t − t1 )‖φ(t1 )‖bc1 ]1/bc1 The cases α = 1 and α < 1 are treated analogously. Note that the function (3.4.42) satisfies ψ(t, t1 , a) → 0 as a → 0+, uniformly with respect to t. According to the remark after Definition 3.4.11, the zero solution is therefore stable, uniformly with respect to initial moments. At this point we can make the same remark as after the proof of Proposition 3.4.16: if we replace the upper estimate in Theorem 3.4.17 (a) by the weaker estimate V(t0 , x) ≤ R‖x‖b for some t0 ∈ I, the function (3.4.42) takes the special form c0 a[1 + c2 (t − t0 )abc1 ]−1/bc1 { { { ψ0 (t, a) = { c0 ae−γ(t−t0 )/b { { −bc 1/bc { max{0, [(c0 a) 1 + γc1 (t − t0 )] 1 }

for α > 1, for α = 1 for α < 1

and may be used for proving ψ0 -stability of the zero solution in the sense of Definition 3.4.10. Now we turn to sufficient criteria for the uniform p-stability of solutions which we introduced in Definition 3.4.15. Theorem 3.4.18. Suppose that the hypotheses of Proposition 3.4.16 are fulfilled, where α > 1. Then the zero solution of the generalized dynamical system A is uniformly p-stable for p := 1/(α − 1)b. Proof. By Proposition 3.4.16, we know that any solution φ which enters the Δ1 -neighbourhood of the zero solution at some moment t1 satisfies (3.4.39) for t > t1 . Consequently, we have V α−1 (t, φ(t)) ≤

V α−1 (t1 , φ(t1 )) 1 < , 1 + γ(α − 1)(t − t1 )V α−1 (t1 , φ(t1 )) γ(α − 1)(t − t1 )

3.4 Systems with locally explicit equations | 197

hence 󵄩b(α−1) 󵄩 < r α−1 󵄩󵄩󵄩φ(t)󵄩󵄩󵄩

1 , γ(α − 1)(t − t1 )

i. e., 󵄩 󵄩󵄩 󵄩󵄩φ(t)󵄩󵄩󵄩
t1 ).

But this means precisely that the zero solution is uniformly p-stable for p = 1/(α − 1)b as claimed. We make some comments on Theorem 3.4.18. First of all, under its hypotheses the zero solution is asymptotically stable if I = [t0 , ∞). This follows from Lyapunov’s theorem36 (see [10, p. 240]). Moreover, the same reduction to p0 -stability with respect to some fixed time moment t0 is true as before. If we replace Condition (c) in Theorem 3.4.18 by the weaker condition that V(t0 , x) → 0 as x → 0 for just one t0 ∈ I, then the zero solution is p0 -stable in the sense of Definition 3.4.15 for p0 = 1/(α − 1)b. We illustrate the Theorems 3.4.17 and 3.4.18 by means of an application to a planar system of differential equations with power-type right-hand side. Example 3.4.19. Consider the system of differential equations {

ẋ = −x + y4 , ẏ = x4 − y3 .

(3.4.44)

We take H := 1/4 and V(x, y) := x2 +y2 . Since all norms in ℝ2 are equivalent, we see that assumption (a) in Theorem 3.4.17 is satisfied for b := 2. Assumption (c) of Proposition 3.4.16 is trivially true. To verify assumption (b), observe that here D∗ V(t, φ(t)) = ̇ V(φ(t)) for any solution φ, where ̇ y) = −2x 2 + 2xy4 − 2y4 + 2x 4 y. V(x, We distinguish several cases. For x ≥ y > 0 we have ̇ y) ≤ −2x 2 − 2y4 + 4x5 ≤ −2y4 − 31 2x 2 ≤ −2y4 − 31 x 4 ≤ −HV 2 (x, y) V(x, 32 16 by our choice of the radius H. Similarly, for y ≥ x > 0 we have ̇ y) ≤ −2x 2 − 2y4 + 4y5 ≤ −2x 2 − 1 2y4 ≤ −2x 4 − y4 ≤ −HV 2 (x, y). V(x, 2 36 More precisely, it follows from the well-known generalization of Lyapunov’s theorem to the lower right derivative as used in Condition (b).

198 | 3 Equations with nonlinear differentials Finally, it is easily seen that the estimate in (b) is also satisfied, with γ = 1/4 and α = 2, if x < 0 or y < 0. From Theorem 3.4.17 and Theorem 3.4.18 we conclude that the zero solution of (3.4.44) is ψ-stable, uniformly with respect to initial moments t1 , for ψ(t, t1 , a) =

2a √4 + (t − t1 )a2

,

and also uniformly p-stable for p = 1/2. 3.4.5 The ψ-stable behaviour of a temperature regulator In what follows, we consider a system of three equations which arise in the mathematical modelling of a temperature regulation device with heating and cooling elements.37 These equations describe the change of the state s ∈ {0, 1} of a so-called non-ideal relay with threshold values α and β > α, as well as of the regulated temperature x as functions of time t. The device associates to a continuous input function x = x(t) a (usually, discontinuous) output function s = s(t) which assumes only the values 0 and 1. As we have seen in Subsection 3.3.3, the analytical description of such a device leads to a locally explicit equation we already studied several times in this chapter, viz. 1 { { { s(t + dt) = { 0 { { { s(t) ∗

if x(t) ≥ β and dt > 0, if x(t) ≤ α and dt > 0,

(3.4.45)

otherwise.

Here the asterisk above the equality has the same meaning as in (3.1.7). By a solution of (3.4.45) we mean a left-continuous function s which satisfies equation (3.4.45) in the sense of (3.1.7). We have already shown that equation (3.4.45) has a unique solution for every initial state s0 and continuous input x. The change in the regulated temperature is described by the heating equation ẋ = f (t, x)

(3.4.46)

ẋ = −h(x)

(3.4.47)

for s = 0, and by the cooling equation

37 We use these technical terms only by analogy to their possible applications in thermodynamics without specifying the impact of our results to such applications.

3.4 Systems with locally explicit equations | 199

for s = 1, respectively. Throughout this subsection we suppose that both functions f : ℝ2 → ℝ and h : ℝ → ℝ are continuous and satisfy the boundedness conditions 0 < m ≤ f (t, x),

h(x) ≤ M

(−∞ < t, x < ∞).

(3.4.48)

We consider the system (3.4.45)–(3.4.47) together with the initial conditions x(t0 ) = x0

(3.4.49)

s(t0 ) = s0 .

(3.4.50)

and

Let us describe the structure of solutions of the system (3.4.45)–(3.4.47) with initial data (3.4.49)/(3.4.50) for four possible combinations of x0 and s0 . Suppose first that x0 < β and s0 = 0. From the classical Cauchy-Peano theorem it follows that there exists a (possibly non-unique) solution x = X0 (t) of (3.4.46) and (3.4.49) for t sufficiently close to t0 . Moreover, by our boundedness assumption (3.4.48) on the function f , any solution may be extended to a strictly increasing solution on the semi-axis [t0 , ∞), since Ẋ 0 ≥ m > 0. Consequently, by the intermediate value theorem there exists some (unique) point t1 > t0 such that X0 (t1 ) = β. Obviously, the pair (S, X) with S(t) ≡ 0 and X(t) = X0 (t) is then a solution of (3.4.45)–(3.4.47) with initial data (3.4.49)/(3.4.50) on the interval [t0 , t1 ]. Now, the equation (3.4.47) with initial condition x(t1 ) = β has a solution x = X1 (t) which may be defined uniquely on the whole real line through the relation x

−∫ β

dξ = t − t1 h(ξ )

(t ∈ ℝ).

Again, we may find a unique point t2 > t1 such that X1 (t2 ) = α. We may then extend the pair (S, X) to the interval (t1 , t2 ] by putting S(t) ≡ 1 and X(t) = X1 (t) for t1 < t ≤ t2 ; this pair obviously solves (3.4.45)–(3.4.47). Afterwards, we again may extend the pair (S, X) to some interval (t2 , t3 ] by putting S(t) ≡ 0 and X(t) = X2 (t), where X2 solves (3.4.46) with initial condition x(t2 ) = α, and the point t3 may be recovered from the condition X2 (t3 ) = β. Continuing this procedure, we may extend the pair (S, X) successively and obtain a solution of (3.4.45)–(3.4.47) on the whole semi-axis [t0 , ∞). Since t2i+1 − t2i is bounded away from zero for all i, the intervals (t2i , t2i+1 ] and (t2i+1 , t2i+2 ] cover the whole semi-axis [t0 , ∞). In case s0 = 1 and x0 > α, on the interval (t2i , t2i+1 ] we have S(t) = 1 and X coincides with the solution X2i of (3.4.47) satisfying X2i (t2i ) = α, while on the interval (t2i+1 , t2i+2 ] we have S(t) = 0 and X coincides with the solution X2i+1 of (3.4.46) satisfying X2i+1 (t2i+1 ) = β.

200 | 3 Equations with nonlinear differentials Now, in case s0 = 0 and x0 ≥ β the relay will switch on immediately, so the solution pair (S, X) differs from that just described only by the condition S(t0 ) = 0. Finally, in case s0 = 1 and x0 ≤ α the solution is everywhere the same as in case s0 = 0 and x0 < β, except for the point t0 . Now we are going to discuss the problem of ψ-stability of the system (3.4.45)– (3.4.47) in the sense of Definition 3.4.10; details may be found in [3]. Let (S, X) be a fixed solution of the system on I := [t0 , ∞), and let ψ0 : [0, ∞) × [0, ∞) → ℝ be some function which is increasing in the second argument. Recall that (S, X) is said to be ψ0 -stable with respect to perturbations of X at time t = t0 if there exists Δ > 0 such that any other solution (S, X) of (3.4.45)–(3.4.47) on I with S(t0 ) = S(t0 ),

󵄨󵄨 󵄨 󵄨󵄨X(t0 ) − X(t0 )󵄨󵄨󵄨 < Δ,

satisfies 󵄨󵄨 󵄨 󵄨 󵄨 󵄨󵄨X(t) − X(t)󵄨󵄨󵄨 ≤ ψ0 (t − t0 , 󵄨󵄨󵄨X(t0 ) − X(t0 )󵄨󵄨󵄨) (t ∈ I).

(3.4.51)

In applications one often has to construct such a function ψ0 in a rather sophisticated way which is suggested by the problem under consideration, as we will do in (3.4.54) below. In what follows, we will suppose for the sake of definiteness that S(t0 ) = S(t0 ) = 0,

X(t0 ) =: x 0 < β,

X(t0 ) =: x0 < β.

(3.4.52)

As observed before, the functions Xk which define the solution X on the subintervals Ik := [tk , tk+1 ] are strictly monotone, hence invertible. Denoting by Tk the inverse of Xk on Ik , from (3.4.46) and (3.4.52) we conclude that the functions T2i solve the equation dt 1 = , dx f (t, x) subject to the initial condition T2i (β) = t2i+1 . Moreover, T2i (α) = t2i for i ≥ 1, and T0 (x0 ) = t0 . Similarly, the functions T2i+1 solve the equation dt 1 =− , dx h(x) subject to the initial conditions T2i+1 (α) = t2i+1 and T2i+1 (α) = t2i+2 . This implies, in particular, that α

T2i+1 (α) − T2i+1 (β) = t2i+2 − t2i+1 = − ∫ β

dξ =: d. h(ξ )

(3.4.53)

3.4 Systems with locally explicit equations | 201

For the fixed solution (S, X) we use the notation I k = [t k , t k+1 ] (k > 0), I 0 = [t0 , t 1 ],

X k , and T k ; in particular, T k = X k as before. Moreover, we use the shortcut t 0 := T 0 (x0 ). If x0 < x0 , the point x0 does not belong to the range of the function X 0 , i. e., to the domain of definition of the function T 0 . In this case we extend X 0 to the left of t0 as solution of the equation ẋ = f (t, x); obviously, then X 0 (t) tends monotonically to −∞ as t → −∞. We are now in a position to prove our main result on the ψ0 -stability of (S, X). Here ψ0 will be a function which depends on a parameter λ > 0, namely −1

ψ0 (τ, a) := {

M ae−γτ m M a[1 + 2(λ m

− 1)γτ( ma )2(λ−1) ]1/2(1−λ) +

for λ = 1, for λ ≠ 1,

(3.4.54)

where M and m are the boundedness constants from (3.4.48), γ > 0 will be specified below, and y+ := max{y, 0}. Observe that the function ψ0 is, for any choice of λ, decreasing with respect to the first and increasing with respect to the second argument; moreover, ψ0 (τ, a) → 0 as τ → ∞. With this choice of ψ0 , we are going to prove now the following Theorem 3.4.20. Suppose that there exist positive constants H, q and λ such that the estimate 󵄨󵄨 󵄨 󵄨󵄨t − T 2i (x)󵄨󵄨󵄨 ≤ H

(t ≥ t0 , −∞ < x < ∞)

(3.4.55)

for i = 0, 1, 2, . . . implies an estimate of the form 󵄨 󵄨2λ [f (t, x) − f (T 2i (x), x)](t − T 2i (x)) ≥ q󵄨󵄨󵄨t − T 2i (x)󵄨󵄨󵄨 .

(3.4.56)

Then the solution (S, X) is ψ0 -stable with respect to perturbations of X at time t = t0 , where the function ψ0 is given by (3.4.54) with γ :=

qmμ . M 2 (μ + 4Md)

(3.4.57)

Here we have used the shortcut μ := min{β − x0 , β − α},

(3.4.58)

and the constant d is given by (3.4.53). Proof. Let (S, X) be an arbitrary solution of the system (3.4.45)–(3.4.47) with satisfies the initial conditions (3.4.49) and (3.4.50). We divide the proof into 5 steps. Step 1. To each t ≥ t0 we associate the index k = k(t) of the uniquely determined interval Ik which contains t; for endpoints of intervals we choose the corresponding even

202 | 3 Equations with nonlinear differentials number k. As before, we extend the function X 0 , if necessary, to the left as solution of equation (3.4.46). Then we obtain 󵄨󵄨 󵄨 󵄨 󵄨 󵄨󵄨X(t) − X(t)󵄨󵄨󵄨 = |X(T k (X(t))) − X(Tk (X(t)))| ≤ M 󵄨󵄨󵄨T k (X(t)) − Tk (X(t))󵄨󵄨󵄨.

(3.4.59)

Let 2i be the largest even number smaller than or equal to k, and let t ̂ denote the point of I2i which is closest to t on the left. If t ̂ = t then k = 2i and 󵄨󵄨 󵄨 󵄨 ̂ − T2i (X(t)) ̂ 󵄨󵄨󵄨. 󵄨󵄨X(t) − X(t)󵄨󵄨󵄨 ≤ M 󵄨󵄨󵄨T 2i (X(t)) 󵄨

(3.4.60)

The same is true if t ̂ < t and k = 2i + 1. In fact, in this case we have X(t)̂ = β and 󵄨 󵄨 󵄨 󵄨 󵄨 󵄨󵄨 󵄨󵄨T 2i+1 (X(t)) − T2i+1 (X(t))󵄨󵄨󵄨 = 󵄨󵄨󵄨T 2i+1 (β) − T2i+1 (β)󵄨󵄨󵄨 = 󵄨󵄨󵄨T 2i (β) − T2i (β)󵄨󵄨󵄨, since all functions Tk and T k with odd index k differ only by a constant. This shows that (3.4.59) implies (3.4.60) also in this case. Step 2. For u ∈ ℝ, we denote as usual by ent(u) := max{k ∈ ℤ : k ≤ u} the integer part of u. Consider the equation 1 dτ = , dy f (τ + id, y − i(β − α))

(3.4.61)

where y−α i := max{ent( β−α ), 0}.

We claim that the function θ(y) := T2i (y − i(β − α)) − id,

(3.4.62)

is a continuous solution of (3.4.61) for y ≥ x0 , where the derivative is meant as lower Dini derivative. In fact, for these y we have d x D∗ θ(y) D∗ T2i (y − i(β − α)) d+ T2i (x) 󵄨󵄨󵄨󵄨 = = ⋅ + 󵄨󵄨 󵄨 dy dy dx 󵄨x=y−i(β−α) dy =

1 1 = . f (T2i (y − i(β − α)), y − i(β − α)) f (θ(y) + id, y − i(β − α))

To show that the function θ is continuous, it suffices to prove its left-continuity, because it is right-differentiable. Fix y0 > x0 ; we have to show that θ(y) → θ(y0 ) as y → y0 −. To this end, we distinguish two cases. y0 −α =: m ∈ ℕ, hence i(y0 ) = m and i(y) = m − 1. In this case we Suppose first that β−α get lim θ(y) = lim T2(m−1) (y − (m − 1)(β − α)) − (m − 1)d

y→y0 −

y→y0 −

= T2(m−1) (y0 − (m − 1)(β − α)) − (m − 1)d

= T2(m−1) (β) − (m − 1)d = T2m−1 (β) − (m − 1)d

= T2m−1 (α) − d − (m − 1)d = T2m−1 (α) − md = θ(y0 ).

3.4 Systems with locally explicit equations | 203

y0 −α β−α

On the other hand, if and so

∈ ̸ ℕ, then i(y) = i(y0 ) for y < y0 sufficiently close to y0 ,

lim θ(y) = lim T2i (y − i(β − α)) − id = T2i (y0 − i(β − α)) − id = θ(y0 ).

y→y0 −

y→y0 −

Since y0 was arbitrary, we have proved the left-continuity of the function θ on (x0 , ∞). Step 3. From (3.4.60) and (3.4.62) we obtain the estimate 󵄨 󵄨 󵄨 󵄨󵄨 󵄨󵄨X(t) − X(t)󵄨󵄨󵄨 ≤ M 󵄨󵄨󵄨θ(X(t)̂ + i(β − α)) − θ(X(t)̂ + i(β − α))󵄨󵄨󵄨.

(3.4.63)

Let us now estimate the term |θ(y) − θ(y)| by means of condition (3.4.56). Let x := y − i(β − α), and consider the function V(y) := (θ(y) − θ(y))2 . From (3.4.61) we get then 1 1 dV = 2(θ(y) − θ(y))[ − ] dy f (θ(y) + id, x) f (θ(y) + id, x) =−

2(θ(y) − θ(y))[f (T2i (x), x) − f (T 2i (x), x)] f (T2i (x), x)f (T 2i (x), x)

.

Let us suppose that |t0 − t 0 | < H (see (3.4.55)). Then on some interval of the form [x0 , x∗ ) we have 󵄨󵄨 󵄨 󵄨 󵄨 󵄨󵄨θ(y) − θ(y)󵄨󵄨󵄨 = 󵄨󵄨󵄨T2i (x) − T 2i (x)󵄨󵄨󵄨 < H. From (3.4.56) it further follows that dV 2 ≤ − 2 (T2i (x) − T 2i (x))[f (T2i (x), x) − f (T 2i (x), x)] dy M ≤−

2q λ 2q 󵄨󵄨 󵄨2λ 󵄨T (x) − T 2i (x)󵄨󵄨󵄨 = − 2 V . M 2 󵄨 2i M

Using a classical theorem on differential inequalities ([19, p. 16], see also [27]) we conclude that V(y) ≤ u(y − x0 , |t0 − t 0 |) (x0 ≤ y < x ∗ ), where the function u solves the initial value problem {

dV dy

2q λ = −M 2V ,

V(x0 ) = (t0 − t 0 )2 .

204 | 3 Equations with nonlinear differentials Consequently, 󵄨󵄨 󵄨 󵄨󵄨θ(y) − θ(y)󵄨󵄨󵄨 ≤ u(y − x0 , |t0 − t 0 |)

(x0 ≤ y < x ∗ ),

(3.4.64)

where for η ≥ 0 and τ ≥ 0 we have put 2q 2(1−λ) 1/2(1−λ) ]+ [− M 2 η(1 − λ) + τ { { { { −qη/M 2 u(η, τ) := { τe { { τ { 2q { [1+ M2 η(λ−1)τ2(λ−1) ]1/2(λ−1)

for λ < 1, for λ = 1, for λ > 1.

Since the function u is, for any choice of λ, decreasing in the first argument, and u(0, |t0 − t 0 |) = |t0 − t 0 | < H, we conclude that we may take x ∗ arbitrarily large. This means that (3.4.64) holds for y belonging to the half-line [x0 , ∞). From (3.4.60) it follows that |t0 − t 0 | < H implies 󵄨 󵄨 󵄨 󵄨󵄨 󵄨󵄨X(t) − X(t)󵄨󵄨󵄨 ≤ M 󵄨󵄨󵄨θ(y)̂ − θ(y)̂ 󵄨󵄨󵄨 ≤ Mu(ŷ − x0 , |t0 − t 0 |); here and in the sequel we use the shortcut ŷ := X(t)̂ + i(β − α)). Since the function u is decreasing in the first and increasing in the second argument, to finish the proof it remains to show that ŷ − x0 ≥ c(t − t0 )

(3.4.65)

for some constant c > 0, and to note that 󵄨 󵄨 1 |t0 − t 0 | = 󵄨󵄨󵄨T 0 (x0 ) − T 0 (x0 )󵄨󵄨󵄨 ≤ |x0 − x 0 | m

(3.4.66)

with m as in (3.4.48). Step 4. We are now going to prove (3.4.65) separately for the three cases i = 0, i = 1, and i ≥ 2. In case i = 0 we simply have ̂ − θ[X(t0 )] ≤ t − t0 = t ̂ − t0 = θ[X(t)]

1 (ŷ − x0 ), m

and so (3.4.65) holds with c := m. Likewise, in case i = 1 we have 1 t − t̂ t − t0 = t ̂ − t0 + t − t ̂ = θ(y)̂ − θ(x0 ) + t − t ̂ ≤ (ŷ − x0 ) + (t − t0 ). m t − t0 Consequently, ŷ − x0 ≥ m(t − t0 )(1 −

t − t̂ ). t − t0

3.4 Systems with locally explicit equations | 205

The difference t − t0 may be estimated from below by t − t0 = θ(β) − θ(x0 ) + t − t ̂ ≥

1 (β − x0 ) + t − t.̂ M

We assume now that the Δ occurring in the definition of ψ0 -stability satisfies Δ≤

β − x0 . 2

We get then β − x0 ≥ β − x0 − |x0 − x0 | ≥

β − x0 , 2

hence 2M(t − t)̂ t − t̂ ≤ . t − t0 β − x0 + 2M(t − t)̂

(3.4.67)

Note that the quotient on the right-hand side of (3.4.67) is monotonically increasing in t − t.̂ Using that t − t ̂ ≤ d, we see that we may choose c := m(1 −

2Md ) β − x0 + 2Md

to prove (3.4.65) in case i = 1. It remains to prove (3.4.65) for i ≥ 2. To get an upper estimate for t − t0 we first observe that i

i−1

j=1

j=1

t − t0 = (t1 − t0 ) + ∑(t2j − t2j−1 ) + ∑(t2j+1 − t2j ) + (t ̂ − t2i ) + (t − t)̂ i−1

= (θ(β) − θ(x0 )) + id + ∑[θ(β(j + 1) − jα) − θ(jβ − (j − 1)α)] j=1

+ [θ(y)̂ − θ(iβ − (i − 1)α)] + (t − t)̂ = id + θ(y)̂ − θ(x0 ) + (t − t)̂ ≤

1 id + t − t ̂ 1 (ŷ − x0 ) + id + (t − t)̂ = (ŷ − x0 ) + (t − t0 ). m m t − t0

Here we have used the fact that tl+1 − tl = d for l odd, but tl+1 − tl = t2j+1 − t2j = T2j (β) − T2j (α) = θ(β(j + 1) − jα) − θ(jβ − (j − 1)α) (j ≥ 1) for l even. On the other hand, we may get a lower estimate for t − t0 in the form i−1

t − t0 = T0 (β) − T0 (x0 ) + id + ∑[T2j (β) − T2j (α)] + (t − t)̂ j=1



1 1 ̂ (β − x0 ) + id + (i − 1) (β − α) + (t − t). M M

206 | 3 Equations with nonlinear differentials Choosing now Δ from the definition of ψ0 -stability as in the preceding calculation, we obtain β − x0 ≥

β − x0 , 2

hence t − t0 ≥

1 1 ̂ (β − x0 ) + id + (i − 1) (β − α) + (t − t). 2M M

This in turn yields id + (t − t)̂ ≤ t − t0

1 (β 2M

id + (t − t)̂

− x0 ) + id + (i − 1) M1 (β − α) + (t − t)̂

.

Now, since for i = 1 we obviously have id + (t − t)̂ ≤ t − t0

1 (β 2M

d + (t − t)̂

− x0 ) + d + (t − t)̂



4Md < 1, (β − x 0 ) + 4Md

we conclude that (3.4.65) holds for i = 1 with 4Md ). β − x 0 + 4Md

c := m(1 − On the other hand, for i ≥ 2 we have

2M(i + 1)d id + (t − t)̂ ≤ t − t0 (β − x0 ) + 2M(i + 1)d + 2(i − 1)(β − α) ≤

2Md +

2Md

2(i−1) (β i+1

− α)



2Md

2Md + 32 (β − α)

< 1,

which shows that we may take c := m(1 −

3Md ) (β − α) + 3Md

in this case. So (3.4.65) holds true in all possible cases with c=

mμ , μ + 4Md

where m and M are given by (3.4.48), μ by (3.4.58), and d by (3.4.53). Step 5. Now we are almost done. Let 1 Δ := min{ (β − x 0 ), mH}. 2

3.4 Systems with locally explicit equations | 207

Then |x0 −x0 | < Δ implies |t0 −t 0 | < H, by (3.4.66), and so (3.4.64) holds. Combining this with (3.4.63), (3.4.65) and (3.4.66) yields 1 󵄨󵄨 󵄨 󵄨󵄨X(t) − X(t)󵄨󵄨󵄨 ≤ Mu(c(t − t0 ), |x0 − x0 |) = ψ0 (t − t0 , |x0 − x 0 |), m where ψ0 is given by (3.4.54). This completes the proof. We point out that the inequality (3.4.56) which was crucial for our proof, is in some sense an analogue to the one-sided Lipschitz conditions considered by Donchev and Farkhi in [12]. We further remark that in our proof we discussed estimates for the approximation rate of x under several initial conditions for x = β. We conclude our discussion with an example which illustrates our abstract Theorem 3.4.20 and shows how the crucial estimate (3.4.56) may be fulfilled under the assumption (3.4.55). Example 3.4.21. Let t0 = 0, α = − π2 , and β = π2 , and let f and h be defined by f (t, x) = 1 +

1 sinκ (t − x), 2

h(x) ≡ 1,

where the exponent κ > 0 is such that the function t 󳨃→ sinκ t is odd. It is not hard to see that (S(t), X(t)) := {

π 2

< t ≤ 2iπ + π2 ,

(0, t − 2iπ)

for 2iπ −

(1, (2i + 1)π − t)

for (2i + 1)π −

π 2

< t ≤ (2i + 1)π +

π 2

is a solution of the corresponding system (3.4.45)–(3.4.47). The inverse function to X 2i on the interval [2iπ − π2 , 2iπ + π2 ] is here T 2i (x) = x + 2iπ. We claim that the solution (S, X) is ψ0 -stable in the sense of Definition 3.4.10, with ψ0 given by (3.4.54). In fact, both functions f and h are continuous and satisfy (3.4.48) with m := 21 and M := 32 . We claim that (3.4.55) (with H := π/2) implies (3.4.56). Indeed, for |τ| ≤ π2 we have | sin τ| ≥ π2 |τ|, and so |t − T 2i (x)| ≤ π2 implies (f (t, x) − f (T 2i (x), x))(t − T 2i (x))

1 = ( sinκ (t − x))(t − x − 2iπ) 2 κ 󵄨󵄨 󵄨󵄨 1 1 2 󵄨 󵄨 = 󵄨󵄨󵄨 sinκ (t − x − 2iπ)󵄨󵄨󵄨|t − x − 2iπ| ≥ ( ) |t − x − 2iπ|κ+1 󵄨󵄨 󵄨󵄨 2 2 π

which is nothing else but (3.4.56) with κ

1 2 q := ( ) , 2 π

1 λ := (κ + 1). 2

(3.4.68)

208 | 3 Equations with nonlinear differentials So all hypotheses of our Theorem are satisfied, and therefore we may find some Δ > 0 such that |X(0) − X(0)| = |X(0)| < Δ and S(0) = 0 imply that 󵄨 󵄨 󵄨 󵄨󵄨 󵄨󵄨X(t) − X(t)󵄨󵄨󵄨 ≤ ψ0 (t, 󵄨󵄨󵄨X(0)󵄨󵄨󵄨)

(t ≥ 0),

where ψ0 is given by (3.4.54). For the sake of definiteness, let us consider the three special cases κ = 31 , κ = 1 and κ = 3. By (3.4.68), this choice leads to the three cases λ = 32 , λ = 1 and λ = 2, respectively. Consequently, taking into account the two cases for ψ0 in (3.4.54), we conclude that our solution is ψ0 -stable with γτ 3/2 3a[1 − 32 (2a) 2/3 ]+ { { { { { −γτ ψ0 (τ, a) = { 3ae { { 3a { { 2 { √1+8a γτ

for κ = 31 , for κ = 1, for κ = 3.

(3.4.69)

We sketch the trajectories for these three cases in the following Figures 3.4, 3.5, and 3.6.

Figure 3.4

Figure 3.5

Figure 3.6

3.5 Linear systems with non-smooth action

| 209

In the first case the solutions X and X eventually coincide, because 3/2

γt 2 󵄨󵄨 󵄨 󵄨 󵄨 ] =0 󵄨󵄨X(t) − X(t)󵄨󵄨󵄨 ≤ 3󵄨󵄨󵄨X(0)󵄨󵄨󵄨[1 − 3 (2|X(0)|)2/3 + for 2γt ≥ 3(2|X(0)|)2/3 , in the other cases they are just close to each other for t ≥ 0 in the sense of (3.4.51). Of course, the constant γ in (3.4.69) may also be calculated. Since d = π and μ = π, from (3.4.57) we get the value γ=

κ

1 2 ( ) 63 π

in this example. Distinguishing between the three values for κ given above, this may be computed explicitly.

3.5 Linear systems with non-smooth action A vast literature is devoted to the study of processes with impulse-type action and to impulse-type control systems (see, e. g., [36–41, 81, 94]). In this section we show that processes involving non-smooth outer actions may be described by means of equations which are very similar to equations with nonlinear differentials.

3.5.1 Statement of the problem Following [75, 76], in this subsection we consider a linear system of type Δx(t) = A(t)x(t) + ΔB(t) + o(dt + JB(t)).

(3.5.1)

Here Δx(t) = x(t + dt) − x(t) denotes the increment of the unknown function x with values in ℝm , corresponding to the increment dt > 0 of the real variable t, A : I → ℝm×m a matrix valued function defined on a real interval I, and B : I → ℝm a left-continuous vector function of bounded variation. Moreover, ΔB(t) := B(t + dt) − B(t),

󵄩 󵄩 JB(t) := lim 󵄩󵄩󵄩ΔB(t)󵄩󵄩󵄩. dt→0+

(3.5.2)

A solution of (3.5.1) is, by definition, a left-continuous vector function x : Ix → ℝm , where Ix ⊆ I, which in every point t ∈ Ix , t ≠ sup Ix satisfies the relation lim

dt→0+

Δx(t) − A(t)x(t)dt − ΔB(t) = 0. dt + JB(t)

(3.5.3)

210 | 3 Equations with nonlinear differentials Observe that every solution of (3.5.1) admits a right limit in each point t, where lim Δx(t) = lim ΔB(t).

dt→0+

dt→0+

(3.5.4)

Indeed, the denominator in (3.5.2) is bounded, and so we have lim [Δx(t) − A(t)x(t)dt − ΔB(t)] = 0,

dt→0+

which implies lim [Δx(t) − ΔB(t)] = 0.

dt→0+

Since the function B has bounded variation, it may have only countably many discontinuities, all of them being of first kind (jumps). But from our hypothesis on the left-continuity of B it follows that jumps may only occur from the right. Consequently, the limit JB(t) in (3.5.2) exists, and (3.5.4) holds. Also, note that in every point of discontinuity of B the relations (3.5.3) and (3.5.4) are equivalent, because the denominator in (3.5.3) has then a limit different from 0. We point out that the problem (3.5.1), subject to an initial condition, may have no solution of we replace o(dt + JB(t)) by just o(dt). This is shown in the following Example 3.5.1. Consider the problem {

Δx(t) = x(t)dt + Δχ(0,∞) (t) + o(dt), x(0) = 0,

(3.5.5)

where χ(0,∞) is the characteristic function of the positive half-line. For t > 0, problem (3.5.5) is equivalent to the equation ẋ = x, with solution x(t) = t ce with c ∈ ℝ. For t = 0, problem (3.5.5) has the form x(dt) = 1 + o(dt), which implies that x(0+) = 1, i. e., c = 1. This means that the only solution is x(t) = et χ(0,∞) (t), which does not satisfy (3.5.5) at 0, because Δx(0) = edt − 0 = 1 + dt + o(dt) ≠ 1 + o(dt). In what follows, we consider equation (3.5.1) together with the initial condition x(t0 ) = x0 .

(3.5.6)

For proving the uniqueness of solutions to the problem (3.5.1)/(3.5.6) we use the so-called Martynenko lemma. 3.5.2 Unique solvability The following theorem, which may be regarded as generalized “variation of constants” formula, gives an existence and uniqueness result for the Cauchy problem of (3.5.1).

3.5 Linear systems with non-smooth action

| 211

Theorem 3.5.2 (Existence and uniqueness). Let Φ(t) denote the fundamental matrix of the homogeneous system ẋ = A(t)x(t). Then the function t

x(t) = Φ(t)[x0 + ∫ Φ−1 (s) dB(s)]

(3.5.7)

t0

is the unique solution of problem (3.5.1)/(3.5.6) on I ∩ [t0 , ∞). Here the integral in (3.5.7) is a Riemann–Stieltjes integral with respect to the BV-function B. Proof. We may write the increment of the function (3.5.7) in the form t+dt

t

Δx(t) = Φ(t + dt)[x0 + ∫ Φ (s) dB(s)] − Φ(t)[x0 + ∫ Φ−1 (s) dB(s)] −1

t0

t0

t

t+dt

= ΔΦ(t)[x0 + ∫ Φ (s) dB(s)] + Φ(t + dt) ∫ Φ−1 (s) dB(s). −1

t0

t0

The relation ̇ ΔΦ(t) = Φ(t)dt + o(dt) = A(t)Φ(t)dt + o(dt) and the fact that all integrals in square brackets are bounded near t imply that t+dt

Δx(t) = A(t)x(t)dt + Φ(t + dt) ∫ Φ−1 (s) dB(s) + o(dt).

(3.5.8)

t0

The second term in the right-hand side of (3.5.8) may be rewritten in the form t+dt

t+dt

Φ(t + dt) ∫ Φ−1 (t + dt) dB(s) − Φ(t + dt) ∫ [Φ−1 (t + dt) − Φ−1 (s)] dB(s) t0

t0

t+dt

= ΔB(t) − Φ(t + dt) ∫ [Φ−1 (t + dt) − Φ−1 (s)] dB(s). t0

Observe that, by well-known properties of the Stieltjes integral, t+dt 󵄩󵄩 󵄩󵄩 󵄩󵄩 󵄩 󵄩󵄩Φ(t + dt) ∫ [Φ−1 (t + dt) − Φ−1 (s)] dB(s)󵄩󵄩󵄩 󵄩󵄩 󵄩󵄩 󵄩󵄩 󵄩󵄩 t0

󵄩 󵄩 󵄩 󵄩 ≤ 󵄩󵄩󵄩Φ(t + dt)󵄩󵄩󵄩 max{󵄩󵄩󵄩Φ−1 (t + dt) − Φ−1 (s)󵄩󵄩󵄩 : t ≤ s ≤ t + dt}Var(B; [t, t + dt]),

212 | 3 Equations with nonlinear differentials where Var(B; [t, t + dt]) denotes the total variation of B on the interval [t, t + dt]. Since both matrix functions Φ and Φ−1 are of class C 1 , there exist constants c1 , c2 > 0 such that 󵄩 󵄩󵄩 󵄩󵄩Φ(t + dt)󵄩󵄩󵄩 ≤ c1 ,

󵄩 󵄩 max{󵄩󵄩󵄩Φ−1 (t + dt) − Φ−1 (s)󵄩󵄩󵄩 : t ≤ s ≤ t + dt} ≤ c2 dt

for sufficiently small dt > 0. Moreover, by (3.5.2) we have Var(B; [t, t + dt]) = JB(t) + Var(B; (t, t + dt]),

lim Var(B; (t, t + dt]) = 0.

dt→0+

Combining these facts we arrive at the estimate 󵄩󵄩 󵄩 󵄩󵄩Δx(t) − A(t)x(t)dt − ΔB(t)󵄩󵄩󵄩 󵄩 󵄩 ≤ c1 c2 dt[JB(t) + Var(B; (t, t + dt])] + 󵄩󵄩󵄩o(dt)󵄩󵄩󵄩 = o(JB(t) + dt). But this means precisely that (3.5.7) is a solution of problem (3.5.1)/(3.5.6), and so we have proved existence. For the uniqueness proof we use the Martynenko lemma [25]. Suppose that x = x(t) and x = x(t) are solutions of problem (3.5.1)/(3.5.6). Since these solutions are leftcontinuous and have the same jumps, if there are any, the function y := x − x is continuous. Obviously, y satisfies the relation Δy(t) = A(t)y(t) + o(dt + JB(t)). In every point t where B is continuous, this relation is equivalent to ̇ = A(t)y(t). y(t)

so

̇ Now, the scalar function φ defined by φ(t) := ‖y(t)‖ satisfies D∗ φ(t) ≤ ‖y(t)‖, and

󵄩 󵄩 D∗ φ(t) ≤ 󵄩󵄩󵄩A(t)󵄩󵄩󵄩φ(t) =: f (t, φ(t)).

(3.5.9)

Note that (3.5.9) may fail only in the countably many points of discontinuity of B. Moreover, the solution ψ of ̇ = f (t, ψ(t)) = 󵄩󵄩󵄩A(t)󵄩󵄩󵄩ψ(t) ψ(t) 󵄩 󵄩 is ψ(t) = ceα(t) , where α is a primitive of the continuous map t 󳨃→ ‖A(t)‖. Consequently, the Martynenko lemma implies that φ(t) ≡ 0 for t ≥ t0 , which proves uniqueness.

3.5 Linear systems with non-smooth action

| 213

3.5.3 An example To conclude, we illustrate our abstract results with an example. Example 3.5.3. Consider the problem {

Δx(t) = −x(t)dt + Δβ(−t/2) + o(Jβ(−t/2) + dt) x(0) = 0,

(3.5.10)

where β(t) := t − ent(t) is the fractional part of t and J is as in Subsection 3.5.1. The function B is a left-continuous function of bounded variation satisfying B(t) ≤ β(−t/2), therefore we may find the solution of (3.5.10) as in (3.5.7). Here we simply have Φ(t) = e−t , so we may express the solution as Stieltjes integral t

x(t) = ∫ es−1 dβ(−s/2).

(3.5.11)

0

This can be made more explicit for t > 0. In case 0 < t ≤ 2 we have B(t) = 1 − t/2, hence t

s

∫ e dB(s) = e

s

0

B(s)|t0

t

− ∫ B(s)es ds = 0

3 − et , 2

where we used integration by parts. In case t > 2 we choose k ∈ ℕ with 2k < t ≤ 2(k +1) and obtain t

∫e 0

s−t

k

2j

dβ(−s/2) = ∑ ∫ e

s−t

j=1 2(j−1)

t

dβ(−s/2) + ∫ es dβ(−s/2) = 2j

Combining these equalities we arrive at the representation x(t) =

ent(t/2)

3 1 − e−t − + ∑ e2j−t 2 2 j=1

for the solution of (3.5.10) for t > 0.

k 3 − et + ∑ e2j . 2 j=1

4 Smooth approximating models In this chapter we associate to each of the problems we considered in previous chapters (hysteresis, relay, stop, play, DN problems) a so-called “smooth model” which has better analytical properties. A particular emphasis is put on upper estimates for the distance of solutions of the original and the related smooth model. Throughout this chapter we followed the papers [45–52].

4.1 Smooth relay models with hysteresis Let us go back to the heuristic description of a relay which we discussed in Subsection 3.2.1. Given α < β, we consider again a transducer continuous input x = x(t) and an output y = y(t) which assumes only the values 0 for x(t) ≤ α and 1 for x(t) ≥ β. So the output jumps up from 0 to 1 if the input reaches the threshold value β, while it jumps down from 1 to 0 if the input reaches the threshold value α. The domain of admissible values of the relay with threshold values α and β has the form Ω(α, β) = {(x, 0) : x ≤ β} ∪ {(x, 1) : x ≥ α}, i. e., consists of two horizontal half-lines in the (x, y)-plane (see Figure 4.1).

Figure 4.1

Defining as in Subsection 3.2.1 two sets Ω0 (α, β) and Ω1 (α, β) by Ω0 (α, β) = {x : x(t1 ) = α for some t1 ≥ t0 } ∩ {x : x(τ) < β for all τ ∈ (t1 , t]}, Ω1 (α, β) = {x : x(t1 ) = β for some t1 ≥ t0 } ∩ {x : x(τ) > α for all τ ∈ (t1 , t]}, https://doi.org/10.1515/9783110709865-004

216 | 4 Smooth approximating models the relay associates to the input function x : [t0 , ∞) → ℝ the output function 0 { { { y(t) = { 1 { { { y0

if x(t) ≤ α or x ∈ Ω0 (α, β), if x(t) ≥ β or x ∈ Ω1 (α, β),

(4.1.1)

if α < x(τ) < β for all τ ∈ [t0 , t],

where y0 := y(t0 ) ∈ {0, 1}. This description is due to Krasnosel’sky and Pokrovsky [24]. Following [73], such a relay may be described as locally explicit problem of the form y(t) { { { y(t + dt) = { 1 { { { 0

if α < x(t) < β, if x(t) ≥ β,

(4.1.2)

if x(t) ≤ α.

A solution of this problem is then a left-continuous function y = y(t), which satisfies (4.1.2) for each t from its domain of definition and each sufficiently small positive dt, i. e., 0 < dt < δ(t), δ(t) > 0. It is not hard to see that in the description (4.1.1) the output function changes its value precisely when the input function reaches one of the threshold values α or β; in particular, the output is right-continuous. On the other hand, the model (4.1.2) requires the left-continuity of the output function; this is due to the specific character of locally explicit equations. 4.1.1 Equivalence of models As we have shown in Theorem 3.2.10, both descriptions, the Krasnosel’sky-Pokrovsky approach and our approach, are basically equivalent. As in Section 3.2, we will denote, for (x, y) ∈ Ω(α, β), by Rtt0 (α, β, x)y0 the solution of (4.1.2) which satisfies the initial condition y(t0 ) = y0 , and call Rtt0 (α, β, x)y0 the relay resolvent of (x, y0 ). Observe that Rtt0 (α, β, x)y0 depends not only on the value of the input x at t, but on the behaviour of x on the whole interval [t0 , t]. For the numerical analysis of relay systems, but also for the qualitative analysis by means of numerical simulations, one may use modern programming techniques. Here the relay models with hysteresis, involving ordinary differential equations are most appropriate. We will discuss such models in the following sections. 4.1.2 A smooth model In this subsection we will study the so-called smooth model {

ẇ = K[(x − β)+ (1 − w) − (α − x)+ w], ỹ = ent(w + 1/2).

(4.1.3)

4.1 Smooth relay models with hysteresis | 217

Here w = w(t) is a smooth output function, K is a (large) parameter, x = x(t) is a ̃ is a discrete output function, x+ := max{x, 0} is the continuous input function, ỹ = y(t) positive part of x, and ent(x) = max{k ∈ ℤ : k ≤ x} is the integer part of x. The representation (4.1.3) illustrates the following behaviour of the system. On time intervals where x(t) < α, the change of the smooth output is described by the equation ẇ = −K(α − x(t))w, which has the unique equilibrium w(t) ≡ 0. Analogously, on time intervals where x(t) > β, the change of the smooth output is described by the equation ẇ = K(x(t) − β)(1 − w), which has the unique equilibrium w(t) ≡ 1. The large parameter K allows us to approximate rapidly the corresponding equilibrium of w(t). In the third case α ≤ x(t) ≤ β ̇ = 0, which means that the smooth output is constant. we get w(t) It is easy to see that the interval [0, 1] is time-invariant in the sense that 0 ≤ w(t0 ) ≤ 1 for some t0 implies that also 0 ≤ w(t) ≤ 1 for all t ≥ t0 . In fact, if w(t1 ) = 1 and ̇ w(t) > 1 for t1 < t ≤ t2 , say, then w(τ) > 0 for some τ ∈ (t1 , t2 ), contradicting (4.1.3). ̃ ̃ = 0 for 0 ≤ w(t) ≤ 1/2, and y(t) ̃ = 1 for The discrete output function y satisfies y(t) t ̃ ̃ which corresponds to the 1/2 < w(t) ≤ 1. By Rt0 (α, β, x)(y0 ) we denote the output y(t) ̃ 0 ) = w(t0 ) = y0 . threshold values α and β, the input x, and the initial condition y(t In Section 3.1 we have compared the approaches of Krasnosel’sky-Pokrovsky [24] and Pryadko-Sadovsky [73] to the description of a relay problem. We also remarked there that these two approaches are essentially equivalent. The last statement has been made more precise in Theorem 3.2.10 (in a slightly different notation). For further reference, we state this again as follows. Theorem 4.1.1. Let y1 = y1 (t) be a solution of problem (4.1.1) and y2 = y2 (t) be a solution of problem (4.1.2) on some interval [t0 , T], subject to the same initial condition y1 (t0 ) = y2 (t0 ) = y0 . Moreover, let (x0 , y0 ) be an admissible state of the relay in both models. Then y1 (t) = y2 (t) for all t ∈ [t0 , T], possibly except for switching points t of the relay. Recall by Rtt0 (α, β, x)y0 the solution of (4.1.2), subject to the initial condition y(t0 ) = ̃ of the smooth model, suby0 , see Definition 3.2.9, and by R̃ tt0 (α, β, x)y0 the output y(t) ̃ ject to the same initial condition y(t0 ) = y0 . Then it is interesting to study the noncoincidence set N(R, R;̃ K) := {t : t0 ≤ t ≤ T, Rtt0 (α, β, x)(y0 ) ≠ R̃ tt0 (α, β, x)(y0 )},

(4.1.4)

where K is the parameter occurring in the description (4.1.3) of the smooth model. The following theorem gives a statement on the asymptotic behaviour of this set as K → ∞.

218 | 4 Smooth approximating models Theorem 4.1.2. The relation lim μ(N(R, R;̃ K)) = 0

K→∞

(4.1.5)

holds true, here μ denotes the Lebesgue measure. Proof. For the proof we distinguish two cases. 1st case: Suppose first that the locally explicit relay does not switch at all on [t0 , T]. Then the output does not change for t0 ≤ t ≤ T, i. e., x(t) < β if y0 = 0, or x(t) > α if y0 = 1. In the first case we have y(t) ≡ 0 and ẇ = −K(α − x)+ w, hence t

w(t) = w0 exp(−K ∫(α − x(s))+ ds) = 0 = y(t), t0

̃ and so y(t) = y(t) ≡ 0 on [t0 , T]. In the second case the argument is similar. Consequently, we deduce that (4.1.5) is true. 2nd case: Suppose now that the locally explicit relay switches in the interval [t0 , T]. Consider a sequence (tj )j of switch points in this interval, which means that x(tj ) = α and x(t) < β for tj ≤ t < tj+1 , and x(tj+1 ) = β and x(t) > α for tj+1 ≤ t < tj+2 . Let Δ := inf{t2 − t1 , t3 − t2 , . . . , tj+1 − tj , . . .}, where we may assume1 that Δ > 0. But this implies that the sequence (T − tj )j is finite, and the number of elements in its range is bounded from above by the number m := ent((T − t0 )/Δ) + 1. From our hypotheses on the input function x it follows that for every switch point tj (j = 1, . . . , m) there exists δj > 0 such that, for all t ∈ (tj , tj + δj ), we have x(t) < α if x(tj ) = α, and x(t) > β if x(tj ) = β. For fixed ε > 0, consider the number δ := min{

ε , δ , . . . , δm }. m+1 1

(4.1.6)

Since the values of the output function y do not change on [t0 , t1 ], by the locally ̃ = y(t) ≡ y0 for explicit nature of (4.1.2), we conclude from the 1st case above that y(t) t0 ≤ t ≤ t1 . If x(tj ) = α, we have ̇ = −K(α − x(t))+ w(t) w(t)

(tj < t ≤ tj+1 )

which shows that w = w(t) is decreasing on (tj , tj+1 ]. In particular, ̇ = −K(α − x(t))w(t) w(t)

(tj < t ≤ tj + δ),

1 Here we use the uniform continuity of the function x = x(t) on [t0 , T].

4.1 Smooth relay models with hysteresis | 219

hence t

w(t) = wj exp(−K ∫(α − x(s)) ds).

(4.1.7)

tj

For this j we put tj +δ

aj := ∫ (α − x(s)) ds

(4.1.8)

tj

and choose Mj so large that Mj aj > log 2. Since 0 ≤ wj ≤ 1, from (4.1.7) and (4.1.8) it follows that 0 ≤ w(tj + δ) = wj e−Kaj
log 2. With the same argument as before, we coñ = y(t) ≡ 1 for tj + δ ≤ t ≤ tj+1 . clude then that y(t) Now, for every j ∈ {1, 2, . . . , m} either Mj or Nj is defined as above, and we take the other to be zero. Letting then K0 := max{M1 , N1 , M2 , N2 , . . . , Mm , Nm }, we see that the estimate ̃ ≠ y(t)}) ≤ mδ < ε μ({t : t0 ≤ t ≤ T, y(t) if K ≥ K0 , by (4.1.6). But this is precisely our claim. Now we are going to formulate some kind of “nearness result” for systems. To this end, we consider simultaneously the two systems { { { { { { { { { { { { {

u̇ = f (t, u, y), x(t) = φ(u(t)), y(t) = Rtt0 (α, β, x)y0 ,

u(t0 ) = u0

(4.1.10)

220 | 4 Smooth approximating models and { { { { { { { { { { { { {

̃ ũ̇ = f (t, u,̃ y), ̃ = φ(u(t)), ̃ x(t) ̃ = R̃ tt (α, β, x)y ̃ 0, y(t) 0

(4.1.11)

̃ 0 ) = u0 . u(t

In (4.1.10) the function y = y(t) is the output of the locally explicit model (4.1.2), ̃ is the output of the smooth model (4.1.3). We while in (4.1.11) the function ỹ = y(t) suppose that the function f : ℝ × ℝn × {0, 1} → ℝn is continuous in the first and continuously differentiable in the second argument. Moreover, the function φ is also continuously differentiable. Finally, we assume that ⟨∇φ(u), f (t, u, 1)⟩ < 0

(4.1.12)

⟨∇φ(u), f (t, u, 0)⟩ > 0

(4.1.13)

for φ(u) = α, and

for φ(u) = β.

4.1.3 The nearness theorem: formulation Under these assumptions, we are now ready to formulate our first nearness theorem. Theorem 4.1.3 (Nearness theorem). Under the above hypotheses, for every solution u of (4.1.10) on [t0 , T] we can find constants C > 0 and K0 > 0 such that, for any solution ũ of (4.1.11), the estimate C 󵄩󵄩 ̃ 󵄩󵄩󵄩󵄩 ≤ 󵄩󵄩u(t) − u(t) √K

(t0 ≤ t ≤ T)

(4.1.14)

holds for K ≥ K0 , where K is the parameter occurring in (4.1.3). Before proving Theorem 4.1.3 we make some comments. Since any solution u of (4.1.10) is continuous, it is bounded on [t0 , T], say 󵄩󵄩 󵄩 󵄩󵄩u(t)󵄩󵄩󵄩 ≤ ρ

(t0 ≤ t ≤ T).

(4.1.15)

Therefore we can find constants f0 > 0, f1 > 0 and M > 0 such that 󵄩󵄩 󵄩 󵄩󵄩f (t, u, y)󵄩󵄩󵄩 ≤ f0 , 󵄩󵄩 𝜕f (t, u, y) 󵄩󵄩 󵄩󵄩 󵄩󵄩 󵄩󵄩 ≤ f1 , 󵄩󵄩 𝜕u 󵄩󵄩 󵄩󵄩

(4.1.16) (4.1.17)

4.1 Smooth relay models with hysteresis | 221

and 󵄩 󵄩󵄩 󵄩󵄩∇φ(u)󵄩󵄩󵄩 ≤ M

(4.1.18)

for t0 ≤ t ≤ T, ‖u‖ ≤ 2ρ, and y ∈ {0, 1}. We can also find constants δ0 > 0 and a > 0 such that ⟨∇φ(u), f (t, u, 1)⟩ ≤ −a

(4.1.19)

⟨∇φ(u), f (t, u, 0)⟩ ≥ a

(4.1.20)

for |φ(u) − α| < δ0 , and

for |φ(u) − β| < δ0 . Indeed, suppose that (4.1.19) is false. Then we find sequences (tn )n and (un )n satisfying t0 ≤ tn ≤ T,

‖un ‖ ≤ 2ρ,

󵄨󵄨 󵄨 1 󵄨󵄨φ(un ) − α󵄨󵄨󵄨 < , n

1 ⟨∇φ(un ), f (tn , un , 1)⟩ > − . n

Passing to subsequences, if necessary, we may assume that tn → t and un → u as n → ∞. But then φ(u) = α,

⟨∇φ(u), f (t, u, 1)⟩ ≥ 0,

contradicting (4.1.12). Let Γη := {u ∈ ℝn : ‖u‖ ≤ 2ρ, φ(u) = η}. From the uniform continuity of φ and (4.1.19) we conclude that there exists ε1 ∈ (0, ρ) such that (4.1.19) holds for dist(u, Γα ) < ε1 , and (4.1.20) holds for dist(u, Γβ ) < ε1 . Using this notation, we will construct the constants C and K0 later in the proof of Theorem 4.1.3. This will be achieved through a series of auxiliary lemmas. Lemma 4.1.4 (Dependence on initial parameters). Let u = u(t) and v = v(t) be solutions of the system u̇ = f (t, u, y1 ), and u = u(t) a solution of the system u̇ = f (t, u, y1 ), where y1 , y1 ∈ {0, 1}. Assume that these solutions are defined on [t0 , t1 ] ⊆ [t0 , T], satisfy the initial conditions u(t0 ) = u0 ,

v(t0 ) = v0 ,

u(t0 ) = u0 ,

and take values in the ball of radius 2ρ with ρ given by (4.1.15). Then the estimates 󵄩󵄩 󵄩 f (t −t ) 󵄩󵄩u(t) − v(t)󵄩󵄩󵄩 ≤ e 1 1 0 ‖u0 − v0 ‖

(t0 ≤ t ≤ t1 )

(4.1.21)

and 󵄩 󵄩󵄩 󵄩󵄩u(t) − u(t)󵄩󵄩󵄩 ≤ 2f0 (t1 − t0 ) + ‖u0 − u0 ‖

(t0 ≤ t ≤ t1 )

are true, where f0 is taken from (4.1.16) and f1 from (4.1.17).

(4.1.22)

222 | 4 Smooth approximating models Proof. From 1 d ‖u − v‖2 = ⟨u̇ − v,̇ u − v⟩ = ⟨f (t, u, y1 ) − f (t, v, y1 ), u − v⟩ 2 dt and (4.1.17) we get 1 d ‖u − v‖2 ≤ f1 ‖u − v‖2 . 2 dt

(4.1.23)

Consequently, 󵄩󵄩 󵄩2 󵄩2 2f (t−t ) 󵄩 2f (t −t ) 2 󵄩󵄩u(t) − v(t)󵄩󵄩󵄩 ≤ e 1 0 󵄩󵄩󵄩u(t0 ) − v(t0 )󵄩󵄩󵄩 = e 1 1 0 ‖u0 − v0 ‖ , which proves (4.1.21). Moreover, from t

󵄩 󵄩 󵄩 󵄩󵄩 󵄩󵄩u(t) − u(t)󵄩󵄩󵄩 ≤ ∫󵄩󵄩󵄩f (s, u, y1 ) − f (s, u, y1 )󵄩󵄩󵄩 ds + ‖u0 − u0 ‖ t0

and (4.1.16) we obtain (4.1.22). ̃ ∗ ) = 0, u(t ̃ =0 ̃ ∗ ) = ũ ∗ , and φ(ũ ∗ ) = β. Then w(t∗ ) ≤ 1/2 and y(t) Suppose that y(t ̃ ∗ ) = 1, u(t ̃ ∗ ) = ũ ∗ , and φ(ũ ∗ ) = α, then the on [t∗ , t∗ + λ] for some λ ≥ 0. Similarly, if y(t ̃ = 1 on [t∗ , t∗ + λ] for some λ ≥ 0, state of the relay in the smooth model remains y(t) because w(t∗ ) > 1/2, hence w(t∗ + dt) > 1/2 for sufficiently small dt > 0. Lemma 4.1.5 (Elaboration time of the smooth model). There exists a constant K1 > 0 such that λ≤

C0 √K

(K ≥ K1 ),

(4.1.24)

2 log 2 , a

(4.1.25)

where C0 := √ and a is given by (4.1.19) resp. (4.1.20). ̃ ∗ ) = 0, u(t ̃ ∗ ) = ũ ∗ , and Proof. We restrict ourselves to proving the claim in case y(t φ(ũ ∗ ) = β; in the other case the proof is similar. ̃ = 0 on [t∗ , t∗ + λ] for some λ ≥ 0. If λ = 0 there is So we have w(t) ≤ 1/2 and y(t) ̃ − ũ ∗ ‖ ≤ ε1 on [t∗ , t∗ + ε1 /f0 ]. By nothing to prove. If 0 < λ < ε1 /f0 it follows that ‖u(t) (4.1.20) this implies that ̃ f (t, u,̃ 0)⟩ ≥ a (t∗ ≤ t ≤ t∗ + λ). x̃̇ (t) = ⟨∇φ(u),

4.1 Smooth relay models with hysteresis | 223

Consequently, ̃ ≥ a(t − t∗ ) + β ≥ β x(t)

(t∗ ≤ t ≤ t∗ + λ),

(4.1.26)

which means that the system is described for the smooth model in the form ̇ = K(x(t) ̃ − β)(1 − w(t)) w(t)

(t∗ ≤ t ≤ t∗ + λ).

Solving this equation we obtain t

̃ − β) ds). w(t) = 1 − (1 − w(t∗ )) exp(−K ∫(x(s) t∗

But (4.1.26) implies that w(t) ≥ 1 − exp(−

Ka (t − t∗ )2 ) 2

(t∗ ≤ t ≤ t∗ + λ),

and so w(t) ≤ 1/2 on [t∗ , t∗ + λ], hence 2

Ka ε1 1 ≥ 1 − exp(− ). 2 2 f02 This shows that, putting K1 :=

2f02 log 2 , ε12 a

(4.1.27)

we have 1 − exp(−

Ka 2 1 λ )≤ 2 2

(K ≥ K1 ),

and after some trivial rearrangement we get (4.1.24). Lemma 4.1.6 (Nearness of level sets). The relations lim dist(Γα+δ , Γα ) = 0,

δ→0

lim dist(Γβ−δ , Γβ ) = 0

δ→0

hold, where Γη = {u ∈ ℝn : ‖u‖ ≤ 2ρ, φ(u) = η} as before.

(4.1.28)

224 | 4 Smooth approximating models Proof. We restrict ourselves to the proof of the first relation in (4.1.28). If the assertion is not true, we find ε0 > 0 and a sequence (un )n such that un ∈ Γα+1/n ,

dist(un , Γα ) ≥ ε0 .

Since (un )n belongs to a compact set, we may assume without loss of generality that un → u; by construction, we have then u ∈ ̸ Γα . On the other hand, the continuity of φ implies that φ(u) = α, a contradiction. Lemma 4.1.6 shows that we may find δ1 > 0 such that dist(Γα+δ , Γα ) < ε1 ,

dist(Γβ−δ , Γβ ) < ε1

for 0 < δ < δ1 . Moreover, since φ is uniformly continuous on Γα , we may further find ε2 ∈ (0, min{ε1 , ρ/C1 }), where C1 := 1 +

2f0 M , a

(4.1.29)

such that φ(u) ≤ α + δ1 for dist(u, Γα ) ≤ ε2 . 4.1.4 Some estimates With this notation, we prove another estimate for the existence interval of two systems. Lemma 4.1.7 (Time estimate for outputs). In the notation introduced above, suppose that the hypotheses ̃ 1 ) = α, x(t

(4.1.30)

̃ 1 ) = 1, y(t1 ) = y(t

x(t) > α

(t1 ≤ t < t 1 ),

(4.1.31) (4.1.32)

and 󵄩󵄩 ̃ 1 )󵄩󵄩󵄩󵄩 ≤ ε2 󵄩󵄩u(t1 ) − u(t

(4.1.33)

are satisfied. Then the estimate t 1 − t1 ≤

M 󵄩󵄩 ̃ 1 )󵄩󵄩󵄩󵄩 󵄩u(t ) − u(t a󵄩 1

(4.1.34)

holds true. Proof. First we show that dist(u(t), Γα ) < ε1

(t1 ≤ t ≤ t 1 ).

(4.1.35)

4.1 Smooth relay models with hysteresis | 225

In fact, otherwise there exists some t2 ∈ [t1 , t 1 ] such that dist(u(t2 ), Γα ) = ε1

(4.1.36)

and dist(u(t), Γα ) < ε1

(t1 ≤ t < t2 ).

(4.1.37)

From (4.1.36) it follows that φ(u(t2 )) ≥ α + δ1 , while from (4.1.37) and (4.1.31) it follows that the map t 󳨃→ φ(u(t)) is strictly decreasing on [t1 , t2 ). Moreover, (4.1.33) and the definition of ε2 imply that φ(u(t2 )) < φ(u(t1 )) ≤ α + δ1 , a contradiction. Since (4.1.35) holds on [t1 , t 1 ], we see that (4.1.19) holds for t1 ≤ t ≤ t 1 . But this implies, together with (4.1.30) and (4.1.32), that t

̇ ds ≤ x(t1 ) − a(t − t1 ), α ≤ x(t) = x(t1 ) + ∫ x(s) t1

hence t 1 − t1 ≤

̃ 1 )) φ(u(t1 )) − φ(u(t . a

Taking into account (4.1.18) we obtain (4.1.34). ̃ 1 ) = β; for the sake of completeOf course, an analogous result is true in case x(t ness, we state this without proof in the following Lemma 4.1.8. Suppose that the hypotheses ̃ 1 ) = β, x(t

̃ 1 ) = 0, y(t1 ) = y(t x(t) < β

(t1 ≤ t < t 1 ),

and (4.1.33) are satisfied. Then the estimate (4.1.34) holds true. If the relay described by the locally explicit equation (4.1.2) does not switch at all, Theorem 4.1.3 is trivially true with C = 0 and arbitrary K0 > 0. So let us analyze the case when the relay switches in the interval [t0 , T]. First we show that the number of switches is then finite. Let τ, τ ∈ [t0 , T] be two consecutive switching moments; then 󵄨 󵄨 β − α = 󵄨󵄨󵄨φ(u(τ)) − φ(u(τ))󵄨󵄨󵄨. By (4.1.16) and (4.1.18), this implies that |τ − τ| ≥

β−α . Mf0

(4.1.38)

226 | 4 Smooth approximating models If {t1 , t2 , . . . , tm } are all switching points of the locally explicit relay in [t0 , T] (in increasing order), we have T − t0 ≥ (tm − tm−1 ) + (tm−1 − tm−2 ) + ⋅ ⋅ ⋅ + (t2 − t1 ), and so T − t0 ≥ (m − 1)

β−α , Mf0

by (4.1.38), hence m≤

Mf0 (T − t0 ) + 1, β−α

(4.1.39)

which gives an upper bound for m. We introduce for every j ∈ {1, 2, . . . , m} the following notation: tj is the j-th switching point of the locally explicit model, sj is the j-th switching point of the smooth model,2 and sj̃ is the closest point left from sj satisfying u(̃ sj̃ ) = u(tj ). For j = 0, 1, . . . , m we still use the shortcut Δj := 2f0 C0 ef1 (tj −t0 ) C1

2j

C1 − 1 C12 − 1

(4.1.40)

,

with f0 given by (4.1.16), f1 by (4.1.17), C0 by (4.1.25), and C1 by (4.1.29). Moreover, we write K 0 := 0,

K j+1 :=

(C1 Δj + 2f0 C0 )2 e2f1 (tj+1 −tj ) ε22

,

(4.1.41)

where ε2 is the same as after Lemma 4.1.6. By construction, we have then Δj+1 > Δj ,

K j+1 > K j ,

K j+1 > max{K1 , Δ2j /ε22 },

with K1 given by (4.1.27).

4.1.5 The nearness theorem: proof Now we are in a position to reformulate and prove the desired nearness theorem in the following form. 2 Here we take tm+1 = T and tj = sj = t0 for j = 0.

4.1 Smooth relay models with hysteresis | 227

Theorem 4.1.9 (Nearness theorem, revisited). For every j ∈ {0, 1, . . . , m + 1} and K ≥ K j , the estimate Δj 󵄩󵄩 ̃ 󵄩󵄩󵄩󵄩 ≤ 󵄩󵄩u(t) − u(t) √K

(t0 ≤ t ≤ tj )

(4.1.42)

holds, where K is the parameter occurring in (4.1.3). Proof. We prove the assertion by induction over j. For j = 0 there is nothing to prove, ̃ 0 ) = u(t0 ) = u0 , so assume that we have proved it for fixed j. If tj ≤ sj we get, since u(t by Lemma 4.1.5, for K ≥ K j+1 the estimate sj − tj ≤

C0 M Δj + . √K a √K

(4.1.43)

But then our induction hypothesis, together with (4.1.22), yields C1 Δj + 2f0 C0 f (t −t ) 󵄩󵄩 ̃ 󵄩󵄩󵄩󵄩 ≤ e 1 j+1 j 󵄩󵄩u(t) − u(t) √K

(t0 ≤ t ≤ tj+1 )

(4.1.44)

for K ≥ K j+1 . It is not hard to see that (4.1.44) is also true in case tj > sj . If tj+1 > sj+1 , the estimate (4.1.44) holds for K ≥ K j+1 and t0 ≤ t ≤ sj+1 . On the other hand, in this case ̃ , tj+1 ]. Therefore, all hypotheses of Lemma 4.1.6 are satisfied on the interval [sj++1 ̃ ≤ tj+1 − sj+1 ≤ tj+1 − sj+1

M 󵄩󵄩 ̃ )󵄩󵄩󵄩󵄩, 󵄩u(s̃ ) − u(̃ sj+1 a 󵄩 j+1

(4.1.45)

as well as 󵄩󵄩 ̃ ) − u(̃ sj+1 ̃ )󵄩󵄩󵄩󵄩 ̃ ) − u(̃ sj+1 ̃ )󵄩󵄩󵄩󵄩 ≤ C1 󵄩󵄩󵄩󵄩u(sj+1 ̃ 󵄩󵄩󵄩󵄩 ≤ 2f0 (tj+1 − sj+1 ) + 󵄩󵄩󵄩󵄩u(sj+1 󵄩󵄩u(t) − u(t)

(4.1.46)

̃ ≤ t ≤ tj+1 . In all cases we have shown that for sj+1 C1 (C1 Δj + 2f0 C0 ) f (t −t ) 󵄩󵄩 ̃ 󵄩󵄩󵄩󵄩 ≤ e 1 j+1 j 󵄩󵄩u(t) − u(t) √K

(t0 ≤ t ≤ tj+1 ),

(4.1.47)

which implies that (4.1.42) holds with j replaced by j + 1. Now the statement follows, since the constants satisfy C ≥ Δm+1 and K0 ≥ K m+1 . The proof is complete. ̃ Note that we proved Theorem 4.1.9 under the additional hypothesis ‖u(t)‖ ≤ 2ρ for t0 ≤ t ≤ T. We claim that 󵄩 ̃ 󵄩󵄩󵄩󵄩 ≤ ρ} = T. T1 := sup{t : t ≥ t0 , 󵄩󵄩󵄩u(t) − u(t) Indeed, otherwise we may apply Theorem 4.1.3 on the interval [t0 , T1 ] ⊂ [t0 , T], and the definition of ε2 in Lemma 4.1.6 shows that K0 > C 2 /ρ2 . Consequently, for K ≥ K0 ̃ 1 )‖ = ρ. we get C < ρ√K, contradicting the fact that ‖u(T1 ) − u(T

228 | 4 Smooth approximating models Now we are going to consider the special case when we may choose the parameter K > 0 in (4.1.3) arbitrarily, but improve the estimate for the constant C > 0 in (4.1.10). To this end, we suppose that our system is scalar (n = 1), φ(u) = u, and the function f : ℝ × ℝ × {0, 1} → ℝ satisfies f (t, u, 0) > 0 and f (t, u, 1) < 0. The last requirement implies that there exists a constant b > 0 such that f (t, u, 0) ≥ b,

f (t, u, 1) ≤ −b

(4.1.48)

for t0 ≤ t ≤ T and |u| ≤ 2ρ. Finally, we assume that there exists a constant k ≥ 0 such that [f (t, u, 0) − f (t, u, 0)](t − t) ≥ k(t − t)2

(t0 ≤ t ≤ T, |u| ≤ 2ρ)

(4.1.49)

[f (t, u, 1) − f (t, u, 1)](t − t) ≤ −k(t − t)2

(t0 ≤ t ≤ T, |u| ≤ 2ρ).

(4.1.50)

and

The conditions (4.1.49) and (4.1.50) mean, roughly speaking, that the absolute value of f is increasing w. r. t. t. In case of a smooth function f , we may express this through the conditions 𝜕f (t, u, 0) ≥ k, 𝜕t

𝜕f (t, u, 1) ≤ −k 𝜕t

(t0 ≤ t ≤ T, |u| ≤ 2ρ).

Proposition 4.1.10. Under the hypotheses (4.1.48)–(4.1.50), in the special case considered above one may take f0 T−t0 2 log 2 { { 2f0 (f0 β−α + 1)( b + 1)√ b C := { f 2f0 { 2 ( 0 + 1)√ 2 log b { 1−e−k(β−α)/f02 b

if k = 0, if k > 0.

(4.1.51)

in Theorem 4.1.3. ̃ j ) = u(tj ). Our Proof. Let sj be the closest point (from the right) to sj which satisfies u(s aim is to estimate the differences sj − sj̃ and sj − sj for j = 1, 2, . . . , m. If u(̃ sj̃ ) = β from (4.1.48) it follows that t

̃ − β = ∫ f (s, u(s), ̃ u(t) 0) ds ≥ b(t − sj̃ )

(sj̃ ≤ t < sj ).

sj̃

So the equation in the smooth model (4.1.3) has the form ̇ = K(u(t) ̃ − β)(1 − w(t)) (sj̃ ≤ t < sj ). w(t)

(4.1.52)

4.1 Smooth relay models with hysteresis | 229

Solving this equation we obtain t

̃ − β) ds). w(t) = 1 − (1 − w(sj̃ )) exp(−K ∫(u(s) sj̃

But (4.1.52) implies that w(t) ≥ 1 − exp(−

Kb (t − sj̃ )2 ) 2

(sj̃ ≤ t ≤ sj ).

On the other hand, 0 ≤ w(t) ≤ 1/2, so exp(−

Kb 1 (s − sj̃ )2 ) ≥ , 2 j 2

and sj − sj̃ ≤ √

2 log 2 1 b √K

(j = 1, 2, . . . , m)

(4.1.53)

which is the desired upper estimate for sj − sj̃ . Let us now estimate sj − sj . It is not hard to see that sj

sj

sj

̃ ̃ ̃ ̃ j ) + ∫ f (s, u(s), ̃ j ) = u(s 1) ds = u(̃ sj̃ ) + ∫ f (s, u(s), 0) ds + ∫ f (s, u(s), 1) ds, u(s sj

sj

sj̃

which implies that sj

sj

̃ ̃ 0) ds = − ∫ f (s, u(s), 1) ds. ∫ f (s, u(s), sj

sj̃

So from (4.1.16) and (4.1.48) it follows that b(sj − sj ) ≤ f0 √

2 log 2 1 , b √K

hence sj − sj ≤

f0 √ 2 log 2 1 b b √K

(j = 1, 2, . . . , m)

(4.1.54)

which is the desired upper estimate for sj − sj . Of course, the same estimates (4.1.53) and (4.1.54) may be deduced in case u(̃ sj̃ ) = α.

230 | 4 Smooth approximating models ̃ ] the differenWe observe that the functions u and ũ satisfy on [tj , tj+1 ] resp. [sj , sj+1 tial equation u̇ = f (t, u, 0). Consequently, the inverse functions satisfy the equation dt 1 = du f (t, u, 0) ξ

and take in α the values tj resp. sj . We use the notation gα s for the value of the solution ξ

of this equation at the point ξ , subject to the initial condition t(α) = s. So ξ 󳨃→ gα tj is

ξ ̃ for the inverse function to u = u(t), and ξ 󳨃→ gα sj is the inverse function to ũ = u(t) α ≤ ξ ≤ β. Similarly, by g ξα s we denote the value of the solution of the equation

dt 1 = du f (t, u, 1) at the point ξ , subject to the initial condition t(α) = s. If u = u(t) is decreasing on ̃ are their ̃ ], then ξ 󳨃→ g ξα tj+1 and ξ 󳨃→ g ξα sj+1 ̃ is decreasing on [sj , sj+1 [tj , tj+1 ], and ũ = u(t) inverse functions on the indicated intervals. ̃ ] be intervals, where our functions are increasing. For s = Let [tj , tj+1 ] and [sj , sj+1 ξ

ξ

gα sj and t = gα tj we get then the differential inequality ̇ − t) = ( (ṡ − t)(s =−

1 1 − )(s − t) f (s, ξ , 0) f (t, ξ , 0) [f (s, ξ , 0) − f (t, ξ , 0)](s − t) k ≤ − 2 (s − t)2 , f (t, ξ , 0)f (s, ξ , 0) f0

by (4.1.16) and (4.1.49). Consequently, 2

|s − t| ≤ e−k(ξ −α)/f0 |sj − tj |

(4.1.55)

and, in particular, 2

̃ − tj+1 | ≤ e−k(β−α)/f0 |sj − tj |. |sj+1

(4.1.56)

̃ ] be intervals, where our functions are On the other hand, let [tj , tj+1 ] and [sj , sj+1 β β ̃ and tj = g α tj+1 . For s = g ξα sj+1 ̃ and t = g ξα tj+1 we get then the decreasing, so sj = g α sj+1 reverse differential inequality ̇ − t) = ( (ṡ − t)(s =−

1 1 − )(s − t) f (s, ξ , 1) f (t, ξ , 1) [f (s, ξ , 1) − f (t, ξ , 1)](s − t) k ≥ 2 (s − t)2 , f (t, ξ , 1)f (s, ξ , 1) f0

by (4.1.16) and (4.1.50). Consequently, 2

̃ − tj+1 |. |sj − tj | ≥ ek(β−α)/f0 |sj+1

4.1 Smooth relay models with hysteresis | 231

Combining this with (4.1.56) and putting Λ := (1 +

f0 √ 2 log 2 1 ) b b √K

we get 2

sj − tj = sj − sj + sj − sj̃ + sj̃ − tj ≤ Λ + e−k(β−α)/f0 (sj−1 − tj−1 ) 2

2

2

2

̃ − tj−1 )) = Λ + e−k(β−α)/f0 (Λ + e−k(β−α)/f0 (sj−1 2

2

≤ Λ + e−k(β−α)/f0 (Λ + e−k(β−α)/f0 (Λ + e−k(β−α)/f0 (Λ + ⋅ ⋅ ⋅ + e−k(β−α)/f0 (s1 − t1 )))) = Λ[1 + exp(−

k(β − α) k(β − α) k(β − α) ) + exp(−2 ) + ⋅ ⋅ ⋅ + exp(−(j − 1) )]. 2 2 f0 f0 f02

We conclude that { jΛ sj − tj ≤ { Λ 2 { 1−e−k(β−α)/f0

if k = 0,

(4.1.57)

if k > 0.

Furthermore, for tj ≤ t ≤ sj we have 󵄩󵄩 ̃ 󵄩 󵄩 ̃ 󵄩 󵄩 󵄩 󵄩 󵄩 − u(̃ sj̃ )󵄩󵄩󵄩 + 󵄩󵄩󵄩u(̃ sj̃ ) − u(tj )󵄩󵄩󵄩 + 󵄩󵄩󵄩u(tj ) − u(t)󵄩󵄩󵄩 ≤ 2f0 (sj − tj ), 󵄩󵄩u(t) − u(t)󵄩󵄩󵄩 ≤ 󵄩󵄩󵄩u(t)

(4.1.58)

and the same estimate is true for sj ≤ t ≤ sj , since 󵄩󵄩 ̃ 󵄩 󵄩 ̃ ̃ j ) − u(tj )󵄩󵄩󵄩󵄩 + 󵄩󵄩󵄩󵄩u(tj ) − u(t)󵄩󵄩󵄩󵄩. ̃ j )󵄩󵄩󵄩󵄩 + 󵄩󵄩󵄩󵄩u(s − u(s 󵄩󵄩u(t) − u(t)󵄩󵄩󵄩 ≤ 󵄩󵄩󵄩u(t) Now, for every t ∈ [sj , tj+1 ] there exists a unique point s ∈ [tj , tj+1 ] such that u(s) = ̃ ξ = u(t). If [tj , tj+1 ] is an interval where our functions are decreasing, then β

β

2

sj − tj = g ξ s − g ξ t ≥ ek(β−ξ )/f0 (s − t), so s − t ≤ sj − tj . On the other hand, if [tj , tj+1 ] is an interval where our functions are increasing, then the same estimate follows immediately from (4.1.55), and 󵄩󵄩 ̃ 󵄩 󵄩 󵄩 󵄩󵄩u(t) − u(t)󵄩󵄩󵄩 = 󵄩󵄩󵄩u(s) − u(t)󵄩󵄩󵄩 ≤ f0 |s − t| ≤ f0 |sj − tj |

(sj ≤ t ≤ tj+1 ).

(4.1.59)

̃ − u(t)‖ ≤ 2f0 |sj − tj | for tj ≤ t ≤ So the estimates (4.1.58) and (4.1.59) imply that ‖u(t) tj+1 . Moreover, using (4.1.57) we get 2jf0 Λ 󵄩󵄩 ̃ 󵄩 { 󵄩󵄩u(t) − u(t)󵄩󵄩󵄩 ≤ { 2f0 Λ { 1−e−k(β−α)/f02

if k = 0, if k > 0.

(4.1.60)

The locally explicit model has on [t0 , T] precisely m switches, and on every subinterval [tj , tj+1 ] the estimate (4.1.60) holds, where tm+1 = T. So (4.1.60) is true on [t0 , T] with j = m. Combining this with (4.1.53) and (4.1.54), we conclude that the desired

232 | 4 Smooth approximating models estimate (4.1.14) holds on the hole interval [t0 , T], where the constant C is given by (4.1.51).

4.2 Planar systems with one relay Now we specialize on solutions with values in the plane ℝ2 . 4.2.1 Statement of the problem Consider the problem { { { { { { { { { { { { { { { { { { {

1 − 2y(t) u̇ 1 )=( u̇ 2 ε 󵄩󵄩 󵄩󵄩 x(t) = 󵄩󵄩u(t)󵄩󵄩, y(t) = Rtt0 (α, β, x)y0 , (

−ε u )( 1 ), 1 − 2y(t) u2 (4.2.1)

u(t0 ) = u0 ∈ ℝ2 \ {(0, 0)}.

Here u = (u1 , u2 ) : [t0 , ∞) → ℝ2 is continuous, and ‖u(t)‖ denotes the Euclidean norm of u(t). The function y = y(t) is the output of the relay, given by the locally explicit equation (4.1.2) with threshold values α > 0 and β > α, which corresponds to the input x = x(t). 4.2.2 A periodicity criterion We are interested in those values of ε in (4.2.1) for which the system has a periodic solution. The following theorem gives a condition which is both necessary and sufficient. Theorem 4.2.1 (Periodicity criterion for solutions). A necessary and sufficient condition under which a solution is eventually periodic is that ε (log β − log α) ∈ ℚ, π where ε is in the matrix in (4.2.1). Proof. First we show that there exists a moment t1 ≥ t0 where the solution u of (4.2.1) satisfies ‖u(t1 )‖ = α. In case y(t) = 1 the differential equation in (4.2.1) has the general solution u(t) = e−t Φε (t) (

c1 ) c2

(c1 , c2 ∈ ℝ),

(4.2.2)

4.2 Planar systems with one relay | 233

where the matrix Φε (t) = (

cos εt sin εt

− sin εt ) cos εt

has the usual properties Φ−1 ε (t) = Φε (−t),

Φε (t1 )Φε (t2 ) = Φε (t1 + t2 ).

(4.2.3)

From (4.2.2) and (4.2.3) we conclude that u(t) = e−(t−t0 ) Φε (t − t0 )u(t0 )

(4.2.4)

solves (4.2.1). Similarly, in case y(t) = 0 we get the solution u(t) = et−t0 Φε (t − t0 )u(t0 ).

(4.2.5)

If ‖u0 ‖ > β from this moment the relay is switched on (y = 1) until the moment when the input x(t) reaches for the first time the lower threshold value α. Therefore the solution of (4.2.1) may be calculated by (4.2.4), and so the input function 󵄩 󵄩 x(t) = 󵄩󵄩󵄩u(t)󵄩󵄩󵄩 = e−(t−t0 ) ‖u0 ‖ is strictly decreasing and tends to 0 as t → ∞. This implies that at some moment t1 this input function assumes the value α, and so the solution of (4.2.1) in this moment crosses the sphere of radius α. Analogously, if ‖u0 ‖ < α the solution of (4.2.1) may be calculated by (4.2.5) and crosses for t = t1 the sphere of radius α, and afterwards leaves it. It remains to analyze the case α ≤ ‖u0 ‖ ≤ β. Here it is not difficult to see in both cases (y(t) = 1 or y(t) = 0) that the norm of the solution of (4.2.1) lies between α and β, and that at some moment t = t1 we have ‖u(t1 )‖ = α. Therefore after some moment the solution u does not leave the region α ≤ ‖u(t)‖ ≤ β. After these preliminary considerations, we show now how to find a value of ε for which the solution of (4.2.1) is periodic. Clearly, after the moment t = t1 the relay is switched off (y = 0), and the solution may be calculated by (4.2.5), with t0 replaced by t1 . This solution touches the larger sphere with radius β at some moment t2 ≥ t1 and satisfies u(t2 ) = et2 −t1 Φε (t2 − t1 )u(t1 ).

(4.2.6)

This implies that t2 −t1 = log β −log α. After the moment t = t2 the relay is switched on (y = 1), and the solution may be calculated by (4.2.5), with t0 replaced by t2 . This solution touches the smaller sphere with radius α at some moment t3 ≥ t2 and satisfies u(t3 ) = e−(t3 −t2 ) Φε (t3 − t2 )u(t2 ),

(4.2.7)

234 | 4 Smooth approximating models which implies that t3 − t2 = log β − log α. We have shown that after time log β − log α the solution u follows a trajectory which starts at the small sphere and ends up at the large sphere, or vice versa. For n ∈ ℕ, we denote by t2n the moments when the relay switches from 0 to 1, and by t2n−1 the moments when the relay switches from 1 to 0. So we have 󵄩 󵄩 x(t2n ) = 󵄩󵄩󵄩u(t2n )󵄩󵄩󵄩 = β,

󵄩 󵄩 x(t2n−1 ) = 󵄩󵄩󵄩u(t2n−1 )󵄩󵄩󵄩 = α.

From the properties (4.2.3) of the fundamental matrix it follows that t2n+1 − t2n = t2n − t2n−1 = ⋅ ⋅ ⋅ = t2 − t1 = log β − log α

(4.2.8)

and u(t2n+1 ) = e−(t2n+1 −t2n ) Φε (t2n+1 − t2n )u(t2n ) = e−(t2n+1 −t2n ) Φε (t2n+1 − t2n )et2n −t2n−1 Φε (t2n − t2n−1 )u(t2n−1 ). By (4.2.8) we may rewrite this in the form u(t2n+1 ) = Φε (2(log β − log α))u(t2n−1 ), and by induction we further obtain u(t2n+1 ) = Φε (2n(log β − log α))u(t1 ).

(4.2.9)

Clearly, the solution u is periodic if and only if u(t2n+1 ) = u(t1 ) for some n ∈ ℕ. This together with the equality (4.2.9) shows that the matrix Φε (2n(log β − log α)) is then the unit matrix for this n. Equivalently, this is true if and only if cos[2nε(log β − log α)] = 1. But this means precisely that either ε = 0 or ε=

kπ n(log β − log α)

for some k ∈ ℤ \ {0}. In the first case the periodic solution has the period T = 2(log β − log α), in the second case the period T = 2n(log β − log α). This completes the proof.

4.2 Planar systems with one relay | 235

4.2.3 Numerical analysis In order carry out a numerical analysis of (4.2.1) we rewrite the system in the form { { { { { { { { { { { { { { { { { { {

̃ ũ1̇ 1 − 2y(t) )=( ̇ ũ2 ε 󵄩 󵄩 󵄩 󵄩 ̃ 󵄩󵄩, ̃ = 󵄩󵄩u(t) x(t) t̃ ̃ = Rt (α, β, x)y ̃ 0, y(t) 0 (

−ε ũ )( 1 ), ̃ 1 − 2y(t) ũ 2 (4.2.10)

̃ 0 ) = u0 ∈ ℝ2 \ {(0, 0)}, u(t

̃ is the output signal of the relay in the setting of the smooth model (4.1.3). where ỹ = y(t) Using the programme Mathematica 7.0 we discuss numerical solutions of (4.2.10) for the initial data α := 1, β := 3, u0 := (1, 2), y0 := 0, and three different choices of the parameter ε. Example 4.2.2. The choice ε := 2π/9 log 3 satisfies the periodicity condition in Theorem 4.2.1. Here the periodic solution of (4.2.10) has for K = K1 = 107 the period T = 18 log 3. Its qualitative behaviour is sketched in Figure 4.2.

Figure 4.2

If we take a different parameter K = K1 /2, the trajectories of the solutions for K = K1 and K = K1 /2 coincide. This means that the smooth model (4.1.3) gives a stable result which does not depend on K if K is sufficiently large.

236 | 4 Smooth approximating models

Figure 4.3

On the other hand, if we take K smaller, say K = 50, see Figure 4.3, the smooth model (4.1.3) does not give a stable result in the sense that the solution essentially depends on K (and need not be periodic). Example 4.2.3. The choice ε := 6/9 log 3 does not satisfy the periodicity condition in Theorem 4.2.1, and so we cannot expect periodic solutions.

Figure 4.4

4.2 Planar systems with one relay | 237

Here the trajectory of the solution of (4.2.10) fills an annular region, as t → ∞ with inner radius α and outer radius β, see Figure 4.4. Example 4.2.4. Let us now consider the particular choice ε := 0. Here the solution of (4.2.10) is periodic with period T = 2 log 3.

Figure 4.5

The trajectory of this periodic solution is sketched in Figure 4.5.

4.2.4 Nearness estimates In the systems (4.2.1) and (4.2.10) we have considered the function φ : ℝ2 → ℝ defined by φ(u) = ‖u‖ = √u21 + u22 which automatically satisfies the conditions (4.1.12) and (4.1.13). Indeed, for φ(u) = α we have ⟨∇φ(u), f (t, u, 1)⟩ = =

u −1 1 ⟨( 1 ) , ( u2 ε ‖u‖ u −u1 1 ⟨( 1 ) , ( u2 εu1 ‖u‖

−ε u ) ( 1 )⟩ −1 u2 −εu2 )⟩ = −‖u‖, −u2

238 | 4 Smooth approximating models and so ⟨∇φ(u), f (t, u, 1)⟩ = −α < 0. Analogously one may show that ⟨∇φ(u), f (t, u, 0)⟩ = β > 0 for φ(u) = β. Let us now calculate the constants ρ, α, f0 , f1 , and M for this system. By our proof of the periodicity criterion in Theorem 4.2.1 we may assume that α ≤ ‖u0 ‖ ≤ β, hence ρ := β for t ∈ [t0 , T]. Moreover, by what we have just proved we may take a := α. Since f (t, u, y) = Au, where A is the matrix of the differential equation in (4.2.1), we get 󵄩󵄩 󵄩 󵄩󵄩f (t, u, y)󵄩󵄩󵄩 ≤ ‖A‖‖u‖ ≤ β√1 + ε2 =: f0 and 󵄩󵄩 𝜕 󵄩󵄩 󵄩󵄩 󵄩 󵄩󵄩 f (t, u, y)󵄩󵄩󵄩 = ‖A‖ = √1 + ε2 =: f1 . 󵄩󵄩 𝜕u 󵄩󵄩 Finally, for the constant M we have 󵄩 󵄩 ⟨u , u ⟩ M = 󵄩󵄩󵄩∇φ(u)󵄩󵄩󵄩 = 1 2 = 1. ‖u‖ Denote by u and ũ the solutions of the systems (4.2.1) and (4.2.10), respectively, where α ≤ ‖u0 ‖ ≤ β. The Nearness Theorem shows that C 󵄩󵄩 ̃ 󵄩󵄩󵄩󵄩 ≤ 󵄩󵄩u(t) − u(t) √K

(t0 ≤ t ≤ T)

for K ≥ K0 , where the constant C may be calculated as in Subsection 4.1.5, taking ρ = β, a = α, f0 = β√1 + ε2 , f1 = √1 + ε2 , and M = 1.

4.3 Smooth description of stops and plays After our discussion of smooth models for relay nonlinearities we pass now to stops and plays which we introduced in Section 3.3.

4.3.1 Statement of the problem Let us briefly recall the model for stops and plays given by Krasnosel’sky and Pokrovsky in the monograph [24]. A piecewise smooth function x = x(t) (t ≥ t0 ) is mapped by

4.3 Smooth description of stops and plays | 239

a stop into a function φ defined by ẋ { { { φ̇ = { max{x,̇ 0} { { { min{x,̇ 0}

if 0 < φ < 1, if φ = 0,

(4.3.1)

if φ = 1,

and by a play into a function ψ defined by 0 { { { ̇ ψ = { max{x,̇ 0} { { { min{x,̇ 0}

if x < ψ < x + 1, if ψ = x,

(4.3.2)

if ψ = x + 1.

In [24, p. 111] it is proved that, under appropriate initial conditions, the differential Equations (4.3.1) and (4.3.2) are uniquely solvable. Here a solution is a locally absolutely continuous function which satisfies the equation almost everywhere. The authors also show in [24, Lemma 2.2] that the map x 󳨃→ φ satisfies in the norm of the space C a Lipschitz condition with Lipschitz constant 1, while the map x 󳨃→ ψ satisfies in the same space a Lipschitz condition with Lipschitz constant 2. It is therefore possible to extend the definition of these operators to arbitrary continuous inputs. Our aim here is to propose, for the numerical study of hysteresis phenomena involving stops and plays, in subsequent subsections a smooth model in the form of an ordinary differential equation with large parameter. 4.3.2 Smooth models Given a continuous function x = x(t) on3 [t0 , t0 +T], we define the “smoothened” input function ξ = ξ (t) which solves the initial value problem {

ξ ̇ = K[x(t) − x(t − 1/K)], ξ (t0 ) = x(t0 ) =: x0 .

(4.3.3)

In contrast to the classical model of a stop and play due to [24] we discuss now another concept of stops and plays which we call smooth model. Given a continuous input x = x(t), we define the corresponding output u = u(t) for a stop by u̇ = ξ ̇ + K[(−u(t)+ − (u(t) − 1)+ ],

(4.3.4)

and the corresponding output v = v(t) for a play by v̇ = K[(ξ (t) − v(t))+ − (v(t) − 1 − ξ (t))+ ]. Here K is a (large) parameter as before. 3 For t < t0 we set x constant as x(t) ≡ x(t0 ).

(4.3.5)

240 | 4 Smooth approximating models 4.3.3 Stops with smooth inputs Our main goal is now to estimate the difference between the output functions in the classical and the smooth model, and to estimate in this way the velocity of convergence of the smooth model towards the classical model for K → ∞. Let y = y(t) be a smooth4 function on [t0 , t0 + T]. Then the equation ̇ = ẏ + K[(−u(t)) − (u(t) − 1) ] u(t) + +

(4.3.6)

defines a smooth output of a stop for a smooth input. Let φ = φ(t) an output of the stop which corresponds to the input y and the initial condition φ(t0 ) = u(t0 ) =: u0 . Theorem 4.3.1 (Nearness estimates for stop outputs). Under the above assumptions, we have for every K > 0 the estimate 󵄨 C 󵄨󵄨 󵄨󵄨u(t) − φ(t)󵄨󵄨󵄨 ≤ K

(t0 ≤ t ≤ t0 + T),

(4.3.7)

̇ where C := sup{|y(t)| : t0 ≤ t ≤ t0 + T}. Proof. For C = 0 the estimate (4.3.7) is clear, since in this case the output functions coincide with the initial data. So suppose that C > 0 and (4.3.7) does not hold for all t. Then we find t1 ∈ [t0 , t0 + T] such that 󵄨󵄨 󵄨 C 󵄨󵄨u(t1 ) − φ(t1 )󵄨󵄨󵄨 = , K

(4.3.8)

but 󵄨 C 󵄨󵄨 󵄨󵄨u(t) − φ(t)󵄨󵄨󵄨 > K

(t1 < t < t1 + δ)

(4.3.9)

for some δ > 0. Moreover, if u(t1 ) − φ(t1 ) =

C , K

then also u(t) − φ(t) >

C K

(t1 < t < t1 + δ),

(4.3.10)

since u and φ are continuous. We claim that in every point t ∈ (t1 , t1 + δ), where φ satisfies (4.3.10), we have d (u(t) − φ(t)) ≤ 0. dt 4 More precisely, it suffices that y is continuous and piecewise continuously differentiable.

(4.3.11)

4.3 Smooth description of stops and plays | 241

From (4.3.10) it follows immediately that u(t) > 0 for t1 < t < t1 + δ. Moreover, if 0 < u(t) ≤ 1 for some t, it also follows from (4.3.10) that 0 ≤ φ(t) < 1. Then (4.3.1) and (4.3.6) imply that if 0 < φ(t) < 1,

0 d (u(t) − φ(t)) = { dt ẏ − max{0, y}̇

if φ(t) = 0.

(4.3.12)

This immediately gives (4.3.11). If u(t) > 1 for some t, (4.3.1) and (4.3.6) imply that { { { d (u(t) − φ(t)) = { { dt { {

−K(u(t) − 1)

if 0 < φ(t) < 1,

ẏ − K(u(t) − 1) − max{0, y}̇

if φ(t) = 0,

ẏ − K(u(t) − 1) − min{0, y}̇

if φ(t) = 1.

(4.3.13)

We have shown that (4.3.11) is true whenever 0 ≤ φ(t) < 1. On the other hand, in case φ(t) = 1 we get from (4.3.10) the estimate u(t) − 1 > C/K, and so ẏ − K(u(t) − 1) − min{0, y}̇ < ẏ − C − min{0, y}̇ ≤ 0. So in all cases (4.3.11) is true. Now, from (4.3.11) and (4.3.8) we conclude that u(t) − φ(t) ≤ u(t1 ) − φ(t1 ) =

C K

(t1 < t < t1 + δ),

which contradicts (4.3.9). If u(t1 ) − φ(t1 ) = −C/K, we may rewrite (4.3.9) in the form φ(t) − u(t) >

C K

(t1 < t < t1 + δ).

As before, we get then from (4.3.1) d (φ(t) − u(t)) ≤ 0, dt again contradicting (4.3.9). 4.3.4 Stops with continuous inputs Now we prove a different nearness result for the outputs of a stop whose inputs are continuous. To this end, we assume that φ = φ(t) and u = u(t) are solutions of (4.3.1) and (4.3.4) which correspond to the initial conditions φ(t0 ) = u(t0 ) = u0 . Theorem 4.3.2 (Nearness estimates for stop outputs). Under the above assumptions, we have for every K > 0 the estimate 󵄨󵄨 󵄨 󵄨󵄨u(t) − φ(t)󵄨󵄨󵄨 ≤ 3ω(x, 1/K)

(t0 ≤ t ≤ t0 + T),

(4.3.14)

242 | 4 Smooth approximating models where 󵄨 󵄨 ω(x, δ) := sup{󵄨󵄨󵄨x(t1 ) − x(t2 )󵄨󵄨󵄨 : t0 ≤ t1 , t2 ≤ t0 + T, |t1 − t2 | ≤ δ}

(4.3.15)

denotes the modulus of continuity of x on [t0 , t0 + T]. Proof. We denote by ξi = ξi (t) (i = 1, 2) the smoothened input functions, see (4.3.3), which correspond to Ki , and by φi = φi (t) the solutions of (4.3.1), with ẋ replaced by ξi̇ , satisfying the initial condition φi (t0 ) = u0 . Our nearness estimates for the outputs of a play with smooth input show that the solutions ui = ui (t) of (4.3.4) with K = Ki and initial condition ui (t0 ) = u0 satisfy the estimate 1 󵄨󵄨 󵄨 󵄨 󵄨 sup{󵄨󵄨󵄨ξi̇ (t)󵄨󵄨󵄨 : t0 ≤ t ≤ t0 + T}. 󵄨󵄨ui (t) − φi (t)󵄨󵄨󵄨 ≤ Ki

(4.3.16)

In terms of the modulus of continuity (4.3.15) this may be rewritten in the form 󵄨󵄨 󵄨 󵄨󵄨ui (t) − φi (t)󵄨󵄨󵄨 ≤ ω(x, 1/Ki )

(t0 ≤ t ≤ t0 + T).

(4.3.17)

As mentioned before, the stop operator (4.3.1) satisfies a Lipschitz condition with Lipschitz constant 2, i. e., 󵄨󵄨 󵄨 󵄨󵄨φ1 (t) − φ2 (t)󵄨󵄨󵄨 ≤ 2‖ξ1 − ξ2 ‖

(t0 ≤ t ≤ t0 + T).

(4.3.18)

But we may also estimate the distance between ξi and x, as a consequence of (4.3.3), because from t−1/Ki

t

ξi (t) = x(t0 ) + Ki ∫ x(s) ds − Ki t0

t

∫ x(s) ds = Ki ∫ x(s) ds t0 −1/Ki

t−1/Ki

it follows that t

ξi (t) − x(t) = Ki ∫ [x(s) − x(t)] ds

(t0 ≤ t ≤ t0 + T),

t−1/Ki

hence 󵄨󵄨 󵄨 󵄨󵄨ξi (t) − x(t)󵄨󵄨󵄨 ≤ ω(x, 1/Ki )

(t0 ≤ t ≤ t0 + T).

(4.3.19)

So from (4.3.18) and (4.3.19) we conclude that 󵄨󵄨 󵄨 󵄨󵄨φ1 (t) − φ2 (t)󵄨󵄨󵄨 ≤ 2ω(x, 1/K1 ) + 2ω(x, 1/K2 ).

(4.3.20)

Combining this with the elementary estimate 󵄨󵄨 󵄨 󵄨 󵄨 󵄨 󵄨 󵄨󵄨u1 (t) − φ2 (t)󵄨󵄨󵄨 ≤ 󵄨󵄨󵄨u1 (t) − φ1 (t)󵄨󵄨󵄨 + 󵄨󵄨󵄨φ1 (t) − φ2 (t)󵄨󵄨󵄨

(4.3.21)

4.3 Smooth description of stops and plays | 243

we finally arrive at the estimate 󵄨 󵄨󵄨 󵄨󵄨u1 (t) − φ2 (t)󵄨󵄨󵄨 ≤ 3ω(x, 1/K1 ) + 2ω(x, 1/K2 ).

(4.3.22)

Now we take K1 = K and let K2 → ∞; then ξ2 tends uniformly on [t0 , t0 + T] to x, while φ2 tends to the output φ of the stop which corresponds to the continuous input x. This proves (4.3.14), and we are done.

4.3.5 Plays with continuous inputs We pass now to a nearness theorem for the play operator. To this end, we assume that ψ = ψ(t) and v = v(t) are solutions of (4.3.2) and (4.3.5) which correspond to the initial conditions ψ(t0 ) = v(t0 ) = v0 . Theorem 4.3.3 (Nearness estimates for play outputs). Under the above assumptions, we have for every K > 0 the estimate 󵄨󵄨 󵄨 󵄨󵄨v(t) − ψ(t)󵄨󵄨󵄨 ≤ 2ω(x, 1/K)

(t0 ≤ t ≤ t0 + T),

(4.3.23)

where ω(x, δ) denotes the modulus of continuity (4.3.15). Proof. We denote by vi = vi (t) (i = 1, 2) the solution of (4.3.5) for K = Ki satisfying the initial condition vi (t0 ) = v0 . Putting zi (t) := vi (t) − ξi (t) we obtain zi̇ = −ξi̇ + Ki [(−zi (t))+ − (zi (t) − 1)+ ].

(4.3.24)

This means that zi is a smooth output of a play with smooth input −ξi , parameter Ki , and initial condition zi (t0 ) = v0 − x0 . Denoting by φi = φi (t) the solution of (4.3.1), with ẋ is replaced by −ξi̇ , which satisfies the initial condition φi (t0 ) = v0 − x0 , we get from (4.3.16) 1 󵄨󵄨 󵄨 󵄨 󵄨 sup{󵄨󵄨󵄨−ξi̇ (t)󵄨󵄨󵄨 : t0 ≤ t ≤ t0 + T}. 󵄨󵄨zi (t) − φi (t)󵄨󵄨󵄨 ≤ Ki

(4.3.25)

In the notation (4.3.15) this implies 󵄨󵄨 󵄨 󵄨󵄨zi (t) − φi (t)󵄨󵄨󵄨 ≤ ω(x, 1/Ki )

(t0 ≤ t ≤ t0 + T).

(4.3.26)

The function ψi (t) := φi (t) + ξi (t) is a solution of (4.3.2), with x replaced by ξi , which satisfies the initial condition ψi (t0 ) = v0 . This means that ψi is the output of a play with input ξi , and together with (4.3.26) we deduce that 󵄨󵄨 󵄨 󵄨 󵄨 󵄨󵄨vi (t) − ψi (t)󵄨󵄨󵄨 = 󵄨󵄨󵄨zi (t) − φi (t)󵄨󵄨󵄨 ≤ ω(x, 1/Ki ).

(4.3.27)

244 | 4 Smooth approximating models But in [24, Lemma 2.2] it was shown that the play operator (4.3.2) satisfies a Lipschitz condition with Lipschitz constant 1, i. e., 󵄨 󵄨󵄨 󵄨󵄨ψ1 (t) − ψ2 (t)󵄨󵄨󵄨 ≤ ‖ξ1 − ξ2 ‖

(t0 ≤ t ≤ t0 + T).

Using a similar argument as in the proof of Theorem 4.3.2 we obtain 󵄨 󵄨󵄨 󵄨󵄨ψ1 (t) − ψ2 (t)󵄨󵄨󵄨 ≤ ω(x, 1/K1 ) + ω(x, 1/K2 ).

(4.3.28)

Combining this with the elementary estimate 󵄨󵄨 󵄨 󵄨 󵄨 󵄨 󵄨 󵄨󵄨v1 (t) − ψ2 (t)󵄨󵄨󵄨 ≤ 󵄨󵄨󵄨v1 (t) − ψ1 (t)󵄨󵄨󵄨 + 󵄨󵄨󵄨ψ1 (t) − ψ2 (t)󵄨󵄨󵄨 we finally arrive at the estimate 󵄨󵄨 󵄨 󵄨󵄨v1 (t) − ψ2 (t)󵄨󵄨󵄨 ≤ 2ω(x, 1/K1 ) + ω(x, 1/K2 ).

(4.3.29)

The remaining part of the proof goes as before: taking K1 = K and letting K2 → ∞ we get (4.3.23). 4.3.6 Numerical analysis and nearness estimates To illustrate our abstract results we present the outcome of some numerical tests. In particular, using Mathematica we illustrate the systems (4.3.1) and (4.3.2) by sketching graphical solutions of (4.3.4) and (4.3.5).

Figure 4.6

For the choice x(t) := −

|t sin t − 1| + 2, 2

u0 := 0.8,

v0 := 1.7,

K = K1 := 106

the output of the corresponding stop is shown in Figure 4.6, while the output of the corresponding play is shown in Figure 4.7.

4.3 Smooth description of stops and plays | 245

Figure 4.7

The Mathematica programme gives in this case the estimate ω(x, 1/K) ≤ 10−5 on the interval [t0 , t0 + T] = [0, 10]. So for this choice (4.3.14) reads 3 󵄨󵄨 󵄨 󵄨󵄨u(t) − φ(t)󵄨󵄨󵄨 ≤ 100.000

(0 ≤ t ≤ 10),

2 󵄨󵄨 󵄨 󵄨󵄨v(t) − ψ(t)󵄨󵄨󵄨 ≤ 100.000

(0 ≤ t ≤ 10).

while (4.3.23) becomes

If we choose another parameter K = K1 /2, the solutions of (4.3.4) and (4.3.5) are essentially the same. This means that the smooth solutions of the stop system and play system are stable and do not depend on K if K is sufficiently large.

Figure 4.8

On the other hand, if we choose a small value for K, say K = 10, the results are not stable any more. To illustrate this phenomenon, we sketch in Figure 4.8 the solution of (4.3.4) (i. e., the stop output u) for K = 10.

246 | 4 Smooth approximating models Figure 4.8 shows that the values of the output function u in this case may leave the interval [0, 1].

4.4 Smooth description of DN models Now we consider again systems with diode nonlinearities (or DN systems, for short) which we discussed in detail in Chapters 1 and 2. Let us briefly recall some of the basic notions. Given a convex closed set Q ⊆ ℝn , by TQ (x) we denote the tangent cone to Q at x ∈ ℝn , and by τx the metric projection of ℝn onto TQ (x). 4.4.1 Statement of the problem Recall that, in this notation, a (generalized) DN system has the form ẋ = τx f (t, x).

(4.4.1)

Here f : [t0 , t0 + T] × Q → ℝn is supposed to be continuous and bounded (by C > 0, say), and f (t, ⋅) satisfies a Lipschitz condition 󵄩󵄩 󵄩 󵄩󵄩f (t, u) − f (t, v)󵄩󵄩󵄩 ≤ L‖u − v‖

(u, v ∈ Q).

(4.4.2)

We remark that the right-hand side of (4.4.1) may have a discontinuity on the boundary of Q. A solution of (4.4.1) is a locally absolutely continuous function x = x(t) ̇ = τx(t) f (t, x(t)) almost everywhere. which satisfies the equality x(t) Together with (4.4.1) we are going to consider a smooth model which is given by ẏ = f (t, y) − K(y − y),

(4.4.3)

where y = P(y, Q) is the element of best approximation to y in Q, see Definition 1.2.9, and K > 0 is a (large) parameter.

4.4.2 Exactness of smooth descriptions The following Theorem 4.4.1 provides an estimate for the distance between solutions of (4.4.1) and (4.4.3) with identical initial condition. Theorem 4.4.1. Let x = x(t) and y = y(t) be solutions of the systems (4.4.1) and (4.4.3), respectively, which both satisfy the initial condition x(t0 ) = y(t0 ) = x0 ∈ Q

(4.4.4)

4.4 Smooth description of DN models | 247

Then the estimate LT 󵄩 Ce 󵄩󵄩 󵄩󵄩x(t) − y(t)󵄩󵄩󵄩 ≤ √KL

(4.4.5)

holds for t0 ≤ t ≤ t0 + T, where C is an upper bound for f , K is from (4.4.3), and L is the Lipschitz constant in (4.4.2). Proof. As in Chapter 1, we denote by NQ (x) we denote the normal cone to Q at x ∈ ℝn , and by νx the metric projection of ℝn onto NQ (x). The relation (1.3.2) between τx and νx allows us then to rewrite (4.4.1) in the form ẋ = f (t, x) − νx f (t, x).

(4.4.6)

At every point x ∈ Q, where ẋ exists, we consider the scalar function5 1 d ‖x − y‖2 = ⟨ẋ − y,̇ x − y⟩ 2 dt = ⟨f (t, x) − f (t, y) − νx f (t, x) + K(y − y), x − y⟩.

p(x, y) :=

(4.4.7)

From the Lipschitz condition (4.4.2) we get ⟨f (t, x) − f (t, y), x − y⟩ ≤ L‖y − y‖‖x − y‖. On the other hand, since the map P(⋅, Q) : ℝn → Q is nonexpansive, see Proposition 1.2.12, we also obtain ⟨f (t, x) − f (t, y), x − y⟩ ≤ L‖x − y‖2 . This together with (4.4.7) implies p(x, y) ≤ L‖x − y‖2 + ⟨νx f (t, x), y − x⟩ + K⟨y − y, x − y⟩ + ⟨y − y, y − y⟩. ity

By Theorem 1.2.11, the equality y = P(y, Q) is equivalent to the variational inequal⟨y − y, x − y⟩ ≤ 0

(x ∈ Q).

(4.4.8)

Therefore p(x, y) ≤ L‖x − y‖2 + ⟨νx f (t, x), y − x⟩, hence p(x, y) ≤ L‖x − y‖2 + ⟨νx f (t, x), y − y⟩ + ⟨νx f (t, x), y − x⟩. 5 As usual, ⟨⋅, ⋅⟩ denotes the scalar product.

(4.4.9)

248 | 4 Smooth approximating models But the definition of NQ (x), see Definition 1.2.13, shows that ⟨νx f (t, x), y − x⟩ ≤ 0. Consequently, we and up with p(x, y) ≤ L‖x − y‖2 + ⟨νx f (t, x), y − y⟩. As was shown in [29], the upper estimate 󵄩 C 󵄩 ‖y − y‖ = 󵄩󵄩󵄩y − P(y, Q)󵄩󵄩󵄩 ≤ K holds for the distance of y to its element of best approximation in Q. The fact that C is an upper bound for the function f further implies p(x, y) ≤ L‖x − y‖2 + tion

C2 . K

Summarizing these estimates we have shown that the absolutely continuous funct

󵄩 󵄩2 u(t) := 󵄩󵄩󵄩x(t) − y(t)󵄩󵄩󵄩 = 2 ∫ p(x, y)(s) ds 0

satisfies the inhomogeneous linear initial value problem 2

{

u̇ = 2Lu + 2 CK − b(t),

u(t0 ) = 0.

where b is some nonnegative L1 -function. We conclude that the solution u satisfies t

u(t) = 2 ∫ e2L(t−s) [ t0

C 2 b(s) − ] ds, K 2

hence t

u(t) ≤ 2

C2 C 2 2Lt e ∫ e2L(t−s) ds ≤ K KL t0

for t0 ≤ t ≤ t0 + T. This finishes the proof. 4.4.3 A special case Now we consider a special case where Theorem 4.4.1 applies, and more explicit computations are possible. Let n1 , n2 ∈ ℝ2 be two fixed normalized vectors in the plane. As convex closed subset Q ⊂ ℝ2 we take the intersection Q = Q1 ∩ Q2 of the two half-

4.4 Smooth description of DN models | 249

planes6 Qi := {z ∈ ℝ2 : ⟨ni , z⟩ ≤ 0} (i = 1, 2). To avoid trivial results, we suppose that the vectors n1 and n2 are not collinear, and denote by ω ∈ (0, π) the smaller angle between them. In this special case the smooth model is defined by the equation ż = f (t, z) − K max{⟨n1 , z⟩+ , ⟨n2 , z⟩+ } ∑ nk , k∈M(z)

(4.4.10)

where as usual u+ := max{u, 0}, and M(z) contains the index k for which max{⟨n1 , z⟩+ , ⟨n2 , z⟩+ } = ⟨nk , z⟩. For this choice of Q, we get the following refinement of Theorem 4.4.1. Theorem 4.4.2. Let x = x(t) and z = z(t) be solutions of the systems (4.4.1) and (4.4.10), respectively, which both satisfy the initial condition x(t0 ) = z(t0 ) = x0 ∈ Q

(4.4.11)

CeLT 󵄩󵄩 󵄩 󵄩󵄩x(t) − z(t)󵄩󵄩󵄩 ≤ √KL min{2 sin ω0 , 1} sin ω0

(4.4.12)

Then the estimate

holds for t0 ≤ t ≤ t0 + T, where C, K, and L are as before, and 2ω0 = min{ω, π − ω}. Proof. Similarly as before, we estimate the term p(x, z) :=

1 d ‖x − z‖2 2 dt

and distinguish two cases. 1st case: The maximum max{⟨n1 , z⟩+ , ⟨n2 , z⟩+ } is realized by only one of the scalar products, say ⟨n1 , z⟩+ . Then (4.4.10) takes the simpler form ż = f (t, z) − K(z − z1 ),

z1 := P(z, Q1 ).

If z1 ∈ Q we get, in analogy to the proof of the preceding Theorem 4.4.1, the estimate p(x, z) ≤ L‖x − z‖2 +

6 Geometrically, ni is the outer normal to the halfplane Qi .

C2 . K

(4.4.13)

250 | 4 Smooth approximating models If z1 ∈ ̸ Q it is not hard to verify that the projection z := P(z, Q) is a corner point of Q. Denoting by γ the angle between the vectors zz and zz1 , it is easy to see that ω0 ≤ γ ≤ π/2. So for the distance between z and Q we get the upper estimate ‖z − z1 ‖ ≤

C , K

hence ‖z − z‖ =

‖z − z1 ‖ C ≤ . sin γ K sin ω0

(4.4.14)

As in the proof of Theorem 4.4.1, this leads to the estimate p(x, z) ≤ L‖x − z‖2 + ⟨νx f (t, x), z − x⟩, which is parallel to (4.4.9) with y replaced by z. Furthermore, p(x, z) ≤ L‖x − z‖2 + C‖z − z‖. Combining this with (4.4.14) we obtain p(x, z) ≤ L‖x − z‖2 +

C2 K sin ω0

(4.4.15)

as claimed. 2nd case: The maximum max{⟨n1 , z⟩+ , ⟨n2 , z⟩+ } is realized by both scalar products, i. e., max{⟨n1 , z⟩+ , ⟨n2 , z⟩+ } = ⟨n1 , z⟩+ = ⟨n2 , z⟩+ . Then (4.4.10) takes the form ż = f (t, z) − K max{⟨n1 , z⟩+ , ⟨n2 , z⟩+ }(n1 + n2 ) = f (t, z) − K[⟨n1 , z⟩+ n1 + ⟨n2 , z⟩+ n2 ], hence ż = f (t, z) − K(z − z1 + z − z2 ).

(4.4.16)

In case z ∈ Q we have z = z1 = z2 , and thus7 p(x, z) ≤ L|x − z‖2 + ⟨νx f (t, x), z − x⟩ ≤ L‖x − z‖2 . 7 Here we again use the fact that ⟨νx f (t, x), z − x⟩ ≤ 0, by definition of the normal cone.

(4.4.17)

4.4 Smooth description of DN models | 251

On the other hand, in case z ∈ ̸ Q the point z lies on the bisectrix of the angle between the vectors zz1 and zz2 , where z1 = P(z, Q1 ),

z2 = P(z, Q2 ),

z = P(z, Q).

But then we get z − z1 + z − z2 = 2

ω z−z ‖z − z1 ‖ cos ‖z − z‖ 2

= 2(z − z) sin(

π ω ω π ω − ) cos = 2(z − z) sin2 ( − ). 2 2 2 2 2

So we may (4.4.16) rewrite in the form ż = f (t, z) − 2K(z − z) sin2 ((π − ω)/2). Now, from Theorem 4.4.1 we deduce that p(x, z) ≤ L‖x − z‖2 +

C2

2K sin2 ((π − ω)/2)

.

Consequently, p(x, z) ≤ L‖x − z‖2 +

C2 . 2K sin ω0

(4.4.18)

So in this case we have obtained (4.4.18) from (4.4.17). Combining (4.4.18) with (4.4.15) we see that the absolutely continuous function t

󵄩 󵄩2 v(t) := 󵄩󵄩󵄩x(t) − z(t)󵄩󵄩󵄩 = 2 ∫ p(x, z)(s) ds 0

satisfies the differential inequality ̇ ≤ 2Lv(t) + v(t)

C2 , K min{2 sin ω0 , 1} sin ω0

and solving this as in the proof of Theorem 4.4.1 we arrive at (4.4.12).

4.4.4 An example Following [44] we illustrate now our abstract results by means of an application to an electric circuit system.

252 | 4 Smooth approximating models Example 4.4.3. Consider the double-face semi-periodic rectifier with circuit feed E(t) and load sketched in Figure 4.9 which contains a resistance R1 , an inductivity L1 , and 4 diodes in a rectangular arrangement. This system is joined to the circuit L2 R2 shown in Figure 2.4.

Figure 4.9

We suppose that we are dealing with ideal diodes which means that the currents ik and the voltages uk (k = 1, 2, 3, 4) between the anodes and cathodes satisfy the system ik ≥ 0,

uk ≤ 0,

ik uk = 0.

The source in this electric diode system generates the voltage E(t), the resistance R1 , and the inductance L1 . The mathematical model leads then to the system of two equations {

L1 I1̇ + R1 I1 + u1 = E(t),

(4.4.19)

L2 I2̇ + R2 I2 + u2 = 0.

After introducing the new coordinates x := (

x1 ), x2

e(t) :=

E(t) 1 ( ), 0 √L1

r := (

R1 /L1 0

0 ), R2 /L2

(4.4.20)

4.4 Smooth description of DN models | 253

where x1 := I1 √L1 and x2 := I2 √L2 , the system (4.4.19) becomes equivalent to the system ẋ = τx (e(t) − rx),

(4.4.21)

which is precisely of the form (2.1.19). As we have seen in Subsection 2.1.4, the vector x lies in the cone8 √L1 /L2 −√L1 /L2 ),( )) , 1 1

Q := cone ((

where cone denotes the conic hull of the indicated vectors, see Definition 1.2.5. Observe that, by construction, the closed convex set Q is of the form Q = Q1 ∩ Q2 we considered in Theorem 4.4.2, where Q1 is the halfplane with outer normal N1 := (

1 ), −√L1 /L2

and Q2 is the halfplane with outer normal N2 := (

−1 ). −√L1 /L2

As before, together with (4.4.21) we consider the corresponding smooth model ż = e(t) − rz − K max{⟨n1 , z⟩+ , ⟨n2 , z⟩+ } ∑ nk , k∈M(z)

(4.4.22)

where ni is the normalized vector Ni , and the notation is the same as in (4.4.10). The angle ω between n1 and n2 may be calculated from the relation cos ω =

⟨N1 , N2 ⟩ L − L2 = 1 . ‖N1 ‖‖N2 ‖ L1 + L2

So we obtain sin

L2 ω =√ 2 L1 + L2

and sin(

L1 π ω − )=√ . 2 2 L1 + L2

This implies that ω0 :=

1 min{ω, π − ω} = arcsin μ(L1 , L2 ), 2

8 The idea how to prove this may be found in [44] and [60].

254 | 4 Smooth approximating models where we have used the shortcut μ(L1 , L2 ) := √

min{L1 , L2 } . L1 + L2

In order to apply Theorem 4.4.2, suppose that the nonlinearity f (t, u) := e(t) − ru in (4.4.22) is continuous and bounded (by C > 0), and satisfies a Lipschitz condition 󵄩 󵄩󵄩 󵄩󵄩f (t, u) − f (t, v)󵄩󵄩󵄩 ≤ L‖u − v‖. We get then the estimate CeLH 󵄩󵄩 󵄩 󵄩󵄩x(t) − z(t)󵄩󵄩󵄩 ≤ √KLμ(L1 , L2 ) min{2μ(L1 , L2 ), 1}

(4.4.23)

for the distance between the solutions of (4.4.21) and (4.4.22), respectively, on the interval [t0 , t0 + H]. 4.4.5 Numerical analysis and nearness estimates In the preceding subsections we have used smooth models several times several times to illustrate the qualitative behaviour of solutions of nonlinear systems. To conclude, we show for the example of the preceding subsection in a quantitative discussion how the choice of the parameter K influences the shape of the solution. The analysis is carried out with the help of the programme Mathematica. Consider the problems (4.4.19) and (4.4.22), where L1 = L2 = 1,

R1 = 0.1,

R2 = 0.2,

E(t) = 3 cos 2t,

t0 = 0,

z(0) = 0,

H = 5.

If we take K = K1 = 108 , the solution of (4.4.22) approximates the solution of (2.1.19), and the trajectory looks like sketched in Figure 4.10.

Figure 4.10

4.4 Smooth description of DN models | 255

To reproduce the estimate (4.4.23) for this choice, note that 󵄩󵄩 󵄩 R /L L = ‖r‖ = 󵄩󵄩󵄩󵄩( 1 1 0 󵄩󵄩

󵄩󵄩 R R 0 1 󵄩 )󵄩󵄩󵄩󵄩 = max{ 1 , 2 } = . R2 /L2 󵄩󵄩 L1 L2 5

Figure 4.10 shows that ‖x‖ ≤ √2, hence √2 󵄩 󵄩󵄩 =: C. 󵄩󵄩e(t) − rx 󵄩󵄩󵄩 ≤ 3 + 5 Moreover, μ(L1 , L2 ) := √

min{L1 , L2 } 1 . = √2 L1 + L2

Putting this into (4.4.23) yields 󵄩󵄩 󵄩 (15 + √2)e 1 . 󵄩󵄩x(t) − z(t)󵄩󵄩󵄩 ≤ √5 10000 On the other hand, if we choose another large value for K, the trajectories of (4.4.22) for K1 = K and K1 = 2K essentially coincide. This means that the smooth approximation gives a stable result which is basically independent of K if K is sufficiently large.

Figure 4.11

Finally, if K is small, say K = 50, the calculation shows that the smooth approximation does not give a stable result. In fact, Figure 4.11 shows that the trajectory depends then strongly on the size of K, and the solution may even leave the cone Q.

Appendix: Mathematica Procedures In this Appendix we collect some Mathematica Procedures which we used for generating pictures in Chapter 2 and 4. The following two procedures refer to Section 2.1.5. L1=1; L2=1; R1=1; R2=1; E0=5; Q=6; x0=0; y0=0; T=5; M=1000; G4=Plot[x*(Sqrt[L2]/Sqrt[L1]),{x,-1,1},PlotStyle◻Blue]; G5=Plot[-(Sqrt[L2]/Sqrt[L1])*x,{x,-1,1},PlotStyle◻Blue]; NDSolve[{x’[t]==-(R1/L1)*x[t]-M*Max[0,x[t]*Sqrt[L2]-y[t]*Sqrt[L1]]M*Min[0,x[t]*Sqrt[L2]-y[t]*Sqrt[L1]]+(E0/Sqrt[L1]*Cos[Q*t], y’[t]==-(R2/L2)*y[t]+M*Max[0,x[t]*Sqrt[L2]-y[t]*Sqrt[L1]](M)*Min[0,y[t]*Sqrt[L1]+x[t]*Sqrt[L1]],x[0]==x0,y[0]==y0},{x,y},{t,0,T}]; G1=Plot[{Evaluate[{x[t]}/.%]},{t,0,T},PlotStyle→RGBColor[1,0,0]]; G2=Plot[{Evaluate[{y[t]}/.%%]},{t,0,T},PlotStyle→RGBColor[0,1,0]]; G3=ParametricPlot[{Evaluate[{x[t],y[t]}/.%%%]},{t,0,T},PlotStyle→ RGBColor[0,1,0]]; Show[G4,G5,G3,PlotRange→{{-1,1},{0,0,4}}] Show[G1,G2,PlotRange→{{0,5},{-0.2,0.4}}]

x0=0; y0=3; T=200; M=1000; G4=Plot[Sqrt[4-(x-2)^2]+2,{x,0,4},PlotStyle◻RGBColor[0,1,0]]; G5=Plot[2-Sqrt[4-(x-2)^2],{x,0,4},PlotStyle◻RGBColor[0,1,0]]; NDSolve[{x’[t]◻-2*(x[t]-2)+(2-x[t])*M*Max[0,(x[t]-2)*(x[t]-2)+(y[t]-2)* (y[t]-2)-4], y’[t]◻-2*(y[t]-5)+(2-y[t])*M*Max[0,(x[t]-2)*(x[t]-2)+(y[t]-2)*(y[t]-2)-4], x[0]◻x0,y[0]◻y0}{x,y}{t,0,T}]; G3=ParametricPlot[{Evaluate[{x[t]}/.%]},{t,0,T},PlotStyle◻RGBColor[1,0,0]]; Show[G4,G5,G3,PlotRange◻{{0,4.5},{0,4.5}}] The following procedure refers to Section 4.4.5. https://doi.org/10.1515/9783110709865-005

258 | Appendix: Mathematica Procedures L1=1; L2=1; R1=0.1; R2=0.2; E0=3; w=2; x0=0; y0=0; T=5; M=100000; G4=Plot[Abs[x]*(Sqrt[L1/L2]),{x,-1,1},PlotStyle→Green; AxesLabel→{Subscript[x,1],Subscript[x,2]}]; NDSolve[{x’[t]==-(R1/L1)*x[t]-M*Max[0,x[t]-y[t]]-M*Min[0,x[t]+y[t]+E0*Cos[w*t], y’[t]==-(R2/L2)*y[t]+M*Max[0,x[t]-y[t]]-M*Min[0,x[t]+y[t]], x[0]==x0,y[0]==y0},{x,y},{t,0,T}]; G1=Plot[{Evaluate[{x[t]}/.%]},{t,0,T}, PlotStyle→RGBColor[1,0,0], AxesLabel→{t,Subscript[I,1],Subscript[I,2]}]; G2=Plot[{Evaluate[{y[t]}/.%%]},{t,0,T}, PlotStyle→RGBColor[0,1,0], AxesLabel→{t,Subscript[I,1],Subscript[I,2]}]; G3=ParametricPlot[{Evaluate[{x[t],y[t]}/.%%%]},{t,0,T}, PlotStyle→Red, AxesLabel→{Subscript[x,1],Subscript[x,2]}]; Show[G4,G3] Show[G3] Show[G1] Show[G2] Show[G1,G2]

Bibliography [1] [2] [3] [4] [5] [6] [7] [8] [9] [10] [11] [12] [13] [14] [15] [16] [17] [18] [19] [20] [21]

[22]

[23]

[24] [25]

Abramova Yo. V., Pryadko I. N.: Monotonicity properties of the locally explicit model of a generalized relay (Russian) Vestnik Voron. Gos. Univ., Ser. Fiz. Mat. 2 (2008), 122–125. Appell J., Petrova L. P.: Systems with diode nonlinearity in the modelling of some electrical circuits, Vestnik Voron. Gos. Univ. 3 (2017), 122–133. Appell J., Pryadko I., Sadovsky B. N.: On the stability of some relay-type regulation systems, Z. Angew. Math. Mech. 88, 10 (2008), 808–816. Aubin J.-P., Ekeland I.: Applied Nonlinear Analysis, John Wiley & Sons, New York 1994. Beklemishev D. B.: Complementary Chapters of Linear Algebra (Russian), Nauka, Moscow 1983. Borisovich Yu. G., Gel’man B. D., Myshkis A. D., Obukhovsky V. V.: Introduction to the Theory of Multivalued Maps and Differential Inclusions (Russian), Kom. Kniga, Moscow 2005. Brézis H.: Opérateurs maximaux monotones et semigroupes de contractions dans les espaces de Hilbert, Noth Holland, Amsterdam 1973. Brézis H.: Functional Analysis, Sobolev Spaces and Partial Differential Rquations, Springer, New York 2011. Danilov L. V., Matkhanov P. N., Filippov E. S.: Theory of Nonlinear Electric Circuits (Russian), Energoatomizdat, Leningrad 1990. Demidovich B. P.: Lectures on Mathmatical Stability Theory (Russian), Nauka, Moscow 1967. Dezoer Ch. A., Ku E. S.: Basic Circuit Theory (Russian), Svyaz’, Moscow 1976. Donchev T., Farkhi E: Stability and Euler approximation of one-sided Lipschitz differential inclusions, SIAM J. Control Optim. 36, 2 (1998), 780–796. Dunford N., Schwartz J. T.: Linear Operators I, Interscience, New York 1958. Filippov A. F.: Differential Equations with Discontinuous Right Hand Side (Russian), Nauka, Moscow 1985; Engl. transl.: Kluwer Acad. Publ., Dordrecht 1988. Hahn W.: Stability of Motion, Springer, Berlin 1967. Hartman F.: Ordinary Differential Equations, John Wiley & Sons, New York 1964. Ionkina P. A. (Ed.): Theoretical Fundations of Electrotechnics. Vol. I: Foundations of the Theory of Linear Circuits (Russian), Vysshaya Shkola, Moscow 1976. Kloeden P. E., Sadovsky B. N., Vasileva I. E.: Quasi-flows and equations with nonlinear differentials, Nonlinear Anal. 51, 7 (2002), 1143–1158. Krasnosel’sky M. A.: The Shift Operator along the Trajectories of Differential Equations (Russian), Nauka, Moscow 1966. Krasnosel’sky M. A., Pokrovsky A. V.: Systems of hysterons (Russian) Dokl. Akad. Nauk SSSR 200 (1971), 286–289; Engl. transl.: Soviet Math. Dokl. 12 (1971), 1388–1391. Krasnosel’sky M. A., Pokrovsky A. V.: Periodic oscillations in systems with relay nonlinearities (Russian), Dokl. Akad. Nauk SSSR 216 (1974), 733–736; Engl. transl.: Soviet Math. Dokl. 15 (1974), 873–877. Krasnosel’sky M. A., Pokrovsky A. V.: Regular solutions of integral equations with discontinuous nonlinearity (Russian), Dokl. Akad. Nauk SSSR 226 (1976), 506–509; Engl. transl.: Soviet Math. Dokl. 17 (1976), 128–132. Krasnosel’sky M. A., Pokrovsky A. V.: Modelling hysteresis transducers by a continuum of relay systems (Russian), Dokl. Akad. Nauk SSSR 227 (1976), 547–550; Engl. transl.: Soviet Math. Dokl. 17 (1976), 447–451. Krasnosel’sky M. A., Pokrovsky A. V.: Systems with Hysteresis (Russian), Nauka, Moscow 1983; Engl. transl.: Springer, Berlin 1989. Krivosheeva O. V.: Some remarks on differential inequalities, Vestnik Voron. Gos. Univ., Ser. Fiz. Mat. 1 (2008), 264–267.

https://doi.org/10.1515/9783110709865-006

260 | Bibliography

[26] Kurbatov V. G.: Functional Differential Operators and Equations, Kluwer Acad. Publ., Dordrecht 1999. [27] Lakshmikantham V., Leela S: Differential and Integral Inequalities: Theory and Applications, Acad. Press, New York 1969. [28] Lisitkaya I. N., Sinitskij L. A., Shumkov Yu. M.: Analysis of Electric Circuits with Magnetic and Semi-Conductive Elements (Russian), Naukova Dumka, Kiev 1969. [29] Lobanova O. A.: On the motion of a point in a constrained phase space (Russian), Sbornik Statej Aspir. Voron. Gos. Univ., Voronezh, 1999, 88–92. [30] Lobanova O. A., Sadovsky B. N.: On the existence of a limit cycle for a linear system with constraint (Russian), Vestnik Voron. Gos. Univ. (Fiz.-Mat.) 2001, 1 (2001), 108–110. [31] Lobanova O. A., Sadovsky B. N.: On two-dimensional dynamical systems with constraint (Russian), Differ. Uravn. 43, 4 (2007), 449–456. [32] Lyubopytnova O. L., Sadovsky B. N.: On the Osgood uniquences theorem for the soolution of the Cauchy problem (Russian), Differ. Uravn. 38, 8 (2002), 1213–1216. [33] Martin T. L.: Electronic Circuits, Prentice-Hall, New York 1955. [34] Matkhanov P. N.: Foundations of the Analysis of Electric Circuits: Nonlinear Circuits (Russian), Vysshaya Shkola, Moscow 1986. [35] Mayergoyz I. D. Mathematical Models of Hysteresis, Springer, New York 1991. [36] Miller B. M.: A nonlinear impulse-control problem (Russian), Avtom. Telemeh. 1976 (1976), 63–72; Engl. transl.: Autom. Remote Control 37 (1976), 865–873. [37] Miller B. M.: Nonlinear sampled-data control of processes described by ordinary differential equations I (Russian), Avtom. Telemeh. 1978, 1 (1978), 75–86; Engl. transl.: Autom. Remote Control 39 (1978), 57–67. [38] Miller B. M.: Nonlinear sampled-data control of processes described by ordinary differential equations II (Russian), Avtom. Telemeh. 1978, 3 (1978), 34–42; Engl. transl.: Autom. Remote Control 39 (1978), 338–344. [39] Miller B. M.: Generalized solutions of nonlinear optimization problems with impulse control I: existence of solutions (Russian), Avtom. Telemeh. 1995, 4 (1995), 62–76; Engl. transl.: Autom. Remote Control 56, 4 (1995), 505–516. [40] Miller B. M.: Generalized solutions of nonlinear optimization problems with impulse control II: Representation of solutions by differential equations with measure (Russian), Avtom. Telemeh. 1995, 5 (1995), 56–70; Engl. transl.: Autom. Remote Control 56, 5 (1995), 657–669. [41] Mil’man B. D., Myshkis A. D.: On the stability of motion in the presence of shocks (Russian), Sib. Mat. Zh. 2, 1 (1960), 233–237. [42] Movchan A. A.: Stability of processes with respect to two metrics (Russian), Prikl. Mat. Meh. 24 (1960), 988–1001. [43] Myshkis A. D., Khokhryakov A. Ya: Turbulent dynamical systems I (Russian), Mat. Sb. 45, 3 (1958), 401–414. [44] Nesterenko R. V., Sadovsky B. N.: On forced oscillations in a two-dimensional cone (Russian), Avtom. Telemeh. 2 (2002), 14–21; Engl. transl.: Autom. Remote Control 63, 2 (2002), 181–188. [45] Nguyen T. H.: Analysis of auto-oscillations in a system with two relays (Russian), Trudy Mat. Fak. Voron. Gos. Univ. (new series) 10 (2006), 112–118. [46] Nguyen T. H.: Analysis of auto-oscillations in a system with two relays (Russian), Zimnyaya Shkola Voron. Gos. Univ., Voronezh, 2006, 69. [47] Nguyen T. H.: Smooth models for a stop and play (Russian), Vestnik Voron. Gos. Univ. (Fiz.-Mat.) 2 (2009), 92–95. [48] Nguyen T. H.: Smooth models for a stop and play (Russian), Zimnyaya Shkola Voron. Gos. Univ., Voronezh, 2010, 108–109.

Bibliography | 261

[49] Nguyen T. H.: On the exactness of a smooth model for a system with diodic nonlinearity (Russian), Vestnik Voron. Gos. Univ. (Fiz.-Mat.) 2 (2010), 240–243. [50] Nguyen T. H.: A smooth model of system with diode nonlinearity, Acta Math. Vietnam. 38, 4 (2013), 607–616. [51] Nguyen T. H., Sadovsky B. N.: A smooth models for a relay with hysteresis (Russian), Zimnyaya Shkola Voron. Gos. Univ., Voronezh, 2010, 109–110. [52] Nguyen T. H., Sadovsky B. N.: A smooth models for a relay with hysteresis (Russian), Avtom. Telemeh. 11 (2010), 100–111; Engl. transl.: Autom. Remote Control 71, 11 (2010), 2320–2330. [53] Panasyuk A. I.: Quasidifferential equations in metric spaces (Russian), Differ. Uravn. 21, 8 (1985), 1344–1353; Engl. transl.: Diff. Equ. 21 (1985), 914–921. [54] Panasyuk A. I.: Properties of solutions of approximate quasidifferential equations and the integral funnel equation (Russian), Differ. Uravn. 28, 9 (1992), 1537–1544. [55] Panasyuk A. I.: Quasidifferential equations in a complete metric space under Carathéodory type conditions I (Russian), Differ. Uravn. 31, 6 (1995), 962–972; Engl. transl.: Diff. Equ. 31, 6 (1995), 901–910. [56] Panasyuk A. I.: Quasidifferential equations in a complete metric space under Carathéodory type conditions II (Russian), Differ. Uravn. 31, 8 (1995), 1361–1369; Engl. transl.: Diff. Equ. 31, 8 (1995), 1308–1317. [57] Panasyuk A. I.: Properties of solutions of quasidifferential equations and the integral funnel equation (Russian), Differ. Uravn. 31, 9 (1995), 1488–1492; Engl. transl.: Diff. Equ. 31, 9 (1995), 1442–1446. [58] Panasyuk A. I., Bentsman G.: Application of quasidifferential to the description of discontinuous processes (Russian), Differ. Uravn. 33, 10 (1997), 1339–1348; Engl. transl.: Diff. Equ. 33, 10 (1997), 1346–1355. [59] Petrova L. P.: On perturbations of systems with diode nonlinearities (Russian), Sb. Stat. Mosk. (1983), 254–260. [60] Petrova L. P.: On a model of an ideal diodic converter (Russian), Trudy Mat. Fak. Voron. Gos. Univ. (new series) 1 (1996), 68–71. [61] Petrova L. P., Sadovsky B. N.: On the mathematical theory of electric circuits with a diodic voltage converter (Russian), Izd. Voron. Gos. Univ., Voronezh, 1982. [62] Pryadko I. N.: On a method for constructing guiding functions (Russian), in: Sbornik Statej Aspir. Stud. Mat. Fak. Voron. Gos. Univ., Voronezh, 1999, 141–144. [63] Pryadko I. N.: Some variants of the Lyapunov-Persidsky theorem (Russian), in: Tezisy Dokladov MNK ADM, Voronezh, 2000, 171–172. [64] Pryadko I. N.: Using the shift operator for constructing guiding functions (Russian), in: Zimnyaya Shkola Voron. Gos. Univ., Voronezh, 2000, 143. [65] Pryadko I. N.: ψ-stability (Russian), Trudy Mat. Fak. Voron. Gos. Univ. (new series) 5 2001, 161. [66] Pryadko I. N.: On some variants of the notion of stability (Russian), in: Zimnyaya Shkola Voron. Gos. Univ., Voronezh, 2002, 71. [67] Pryadko I. N.: On the Cauchy problem for systems containing locally explicit equations, Z. Anal. Anwend. 23, 4 (2004), 819–824. [68] Pryadko I. N.: On locally explicit equations (Russian), Kand. Diss. Voron. Gos. Univ., Voronezh, 2006. [69] Pryadko I. N.: On a locally explicit play model (Russian), Vestnik Voron. Gos. Univ., Ser. Fiz. Mat. 2 (2006), 230–234. [70] Pryadko I. N.: An example for modelling an essentially discontinuous process by means of a locally explicit equation (Russian), in: Zimnyaya Shkola Voron. Gos. Univ., Voronezh, 2006, 84. [71] Pryadko I. 
N.: On a system which may be described by means of locally explicit equations (Russian), Trudy Mat. Fak. Voron. Gos. Univ. (new series) 10 (2006), 131–135.

262 | Bibliography

[72] Pryadko I. N., Sadovsky B. N.: Modelling some hysteresis elements by locally explicit equations (Russian), in: Sovrem. Probl. Funkt. Anal. Differ. Uravn., Voronezh, 2003, 196–197. [73] Pryadko I. N., Sadovsky B. N.: On locally explicit models for some non-smooth systems (Russian), Avtom. Telemeh. 10 (2004), 40–50; Engl. transl.: Autom. Remote Control 65, 10 (2004), 1556–1565. [74] Pryadko I. N., Sadovsky B. N.: On locally explicit equations and systems with switching, Funct. Differ. Equ. 13, 3–4 (2006), 571–584. [75] Pryadko I. N., Sadovsky B. N.: On the graph metric on a set of functions (Russian), Vestnik Voron. Gos. Univ., Ser. Fiz. Mat. 1 (2008), 261–263. [76] Pryadko I. N., Sadovsky B. N.: On the description of nonsmooth actions (Russian), Differ. Uravn. 47, 8 (2011), 1205–1208; Engl. transl.: Diff. Equ. 47, 8 (2011), 1219–1222. [77] Sadovsky B. N.: Systems with diodic nonlinearities and maximal monotone operators (Russian), in: 8th Shkola Teor. Oper. Funkt. Prostr., Riga, 1983. [78] Sadovsky B. N.: On quasicurrents (Russian), in: Konf. Voron. Gos. Univ., Voronezh, 1995, 80. [79] Sadovsky B. N., Lobanova O. A.: On two-dimensional dynamical systems with constraint (Russian), in: Sovrem. Probl. Funkt. Anal. Differ. Uravn., Voronezh, 2003, 170–171. [80] Sadovsky B. N., Sobolevskaya M. P.: On the mathematical theory of circuits and tiristors (Russian), in: Dinam. Inhom. Systems (VNIISI), Moscow, 1984, 178–182. [81] Samojlenko A. M., Perestyuk N. A.: Differential Equations with Impulse Actions (Russian), Vishcha Slkola, Kiev 1987. [82] Sinitsky L. A.: Methods of Analytic Mechanics in the Theory of Electric Circuits (Russian), Vishcha Shkola, L’vov 1978. [83] Sontag E. D.: Input to state stability: Basic concepts and results, in: Nonlin. Optimal Control Theory [Ed.: Nistri P., Stefani G.], Springer, Berlin 2007, 163–220. [84] Tsypkin Ya. Z.: Frequency characteristics of serial relay systems (Russian), Avtom. Telemeh. 20, 12 (1959), 1603–1610. [85] Tsypkin Ya. Z.: The influence of random noise on periodic solutions of automatic relay systems (Russian), Dokl. Akad. Nauk SSSR 139 (1961), 570–573. [86] Tsypkin Ya. Z.: On the stability of automatic relay systems “in the large” (Russian), Izv. Akad. Nauk SSSR 21, 1 (1963), 121–135. [87] Tsypkin Ya. Z.: The frequency method for the analysis of auto-oscillations and forced oscillations in automatic regulation relay systems (Russian), Mashinostroenie 3 (1969), 101–104. [88] Tsypkin Ya. Z.: Automatic Relay Systems (Russian), Nauka, Moscow 1974. [89] Vladimirov A. A., Pokrovsky A. V.: Vector hysteresis nonlinearities of Mises type (Russian), Dokl. Akad. Nauk SSSR 257 (1981), 506–509. [90] Volterra V.: Leçon sur la théorie mathématique de la lutte pour la vie, Gauthier-Villars, Paris 1931. [91] Yakubovich V. A.: Frequency conditions for the absolute stability of control systems with hysteresis-type nonlinearity (Russian), Dokl. Akad. Nauk SSSR 149 (1963), 288–291; Engl. transl.: Soviet Phys. Dokl. 8 (1963), 235–237. [92] Yakubovich V. A.: Frequency conditions for the absolute stability and dissipativity of control systems with a single differentiable nonlinearity (Russian), Dokl. Akad. Nauk SSSR 160 (1965), 298–301; Engl. transl.: Soviet Math. Dokl. 6 (1965), 98–101. [93] Yakubovich V. A.: The method of matrix inequalities in the stability theory of nonlinear control systems. Part II: Absolute stability in a class of nonlinearities with a condition on the derivatives (Russian), Avtom. Telemeh. 26 (1965), 577–590; Engl. transl.: Autom. 
Remote Control 26 (1965), 577–592.

Bibliography | 263

[94] Zavalishchin S. T., Sesekin A. N.: Dynamic Impulse Systems: Theory and Applications, Kluwer, Dordrecht 1997. [95] Zubov S. V.: Stability of Motion (Russian), Vysshaya Shkola, Moscow 1984. [96] Zubov S. V.: Stability of periodic solutions in systems with hysteresis (Russian), in: Proc. Conf. Nonlin. Anal. Appl., Moscow, 1998, 293–307.

Index

admissible state 138, 165
admissible value 132
affine subspace 8
amplitude 48
angular velocity 71
anode 2
– current 83, 87, 93
– voltage 2, 93
autooscillation 111
best approximation 11, 21
biological system 107
bisectrix 54
boundary point 16, 25, 68
bridge rectifier 81, 84
capacity 48
cathode 2, 87
– current 2
– voltage 2
Cauchy problem 45, 125, 128, 168
centre 109
chain
– circuit 88
– feed 88
– load 88
characteristic
– outer 86
– Volt–Ampère 86
circuit 2
– diode 92
– non-bifurcating 2
– non-ramified 2
– oscillating 42
classical model 239
combination
– conic 9
– convex 6
– linear 6
cone 9, 83
– adjoint 19, 30, 83
– bound 28, 86
– closed 22
– normal 16, 81, 101, 250
– pointed 28
– second adjoint 22
– tangent 24
– tight 10, 30
connectivity branch 93
continuation 34
continuous dependence 41, 75, 135, 138
contour 46
– oscillating 46
– principal 93
control-correction system 170
control equation 172
control function 172
controllability 141, 145
control object 172
convex combination 6
convex hull 6
convex programming 101
cooling process 181
cross-section 93
– principal 93
current 1
– exterior 88
– incoming 86
– outgoing 86
– source 97
cycle 65
derivative
– Dini 173, 194
– left hand 27
– right hand 27
differential equation 32
differential inclusion 3, 32
differential inequality 43, 44
Dini derivative 173, 194
diode 2
– circuit 92
– converter 81, 86, 93, 95, 98
– ideal 2, 252
– non-ideal 124
– nonlinearity 1, 32, 85
directional field 73
DN-system 1, 32, 81, 92, 121
dual space 19
dynamical system 188
eco-system 107
eigenvalue 190
elaboration time 222
electromotive force 1
equation
– control 172
– cooling 198
– differential 32
– heating 198
– locally explicit 124, 167
– quasidifferential 123
– Uryson–Volterra 177
– with nonlinear differential 123
– with relay 131, 172
equilibrium 108, 109
Euler polygon 39
existence 31, 74, 133, 146, 155, 157
– global 128, 173, 179
– local 124, 177, 185
extension 127, 169
face 28
– lower dimensional 29
– minimal 28
feed chain 88
force
– centrifugal 116
– electromotive 1
– escape 116
– vortex 118
Fredholm alternative 23
frequency 47
function
– absolutely continuous 31
– BV- 211
– control 172
– convex 9, 101
– KL- 189
– Lipschitz continuous 14, 161
– Lyapunov- 193
– nonexpansive 14
– peak 164
graph tree 92
Gronwall lemma 41, 44
halfspace 8
– shifted 8
Hausdorff distance 137
heating process 181
Hilbert space 12
hull
– closed conic 9
– closed convex 6
– conic 9
– convex 6
– linear 6
hyperplane 7
– affine 8
hysteresis 124
– positive 132
image 23
inductance 1, 252
– parameter 91
input 86
– continuous 243
– current 87
– history 132
– moment 165, 167
– monotone 163
– play 156
– stop 153
– voltage 87
input-output dependence 133
integral funnel 123
interior point 52, 113
isometry 57, 66
kernel 23
Kirchhoff rule 2
– first 82
– second 2, 46
KL-function 189
knot 2, 87
Larionov scheme 84
LC-condition 94
Lemma
– Gronwall 41, 44
– Martynenko 210
– Mazur 6
– van Kampen 128
– Zorn 127
linear transform 86
Lipschitz condition 44, 129, 161, 176, 254
– global 44
– local 44
load chain 88
locally explicit equation 124, 167
locally explicit model 124
majorant 127
map
– multivalued 2
– nonexpansive 14
Martynenko lemma 210, 212
Mazur lemma 6
metric projection 5, 11, 19, 31, 68
modulus of continuity 186
monotonicity
– w. r. t. inputs 134, 150
– w. r. t. outputs 134
– w. r. t. threshold values 135, 152
M-switch 165, 175, 183
multivalued differential equation 3
multivalued map 2
– closed 35, 36
– lower semicontinuous 35
– maximal monotone 42
– monotone 42, 75
– uniformly monotone 42
– upper semicontinuous 35
nearness theorem 220, 221, 237
– for initial parameters 221
– for level sets 223
– for outputs 224
– for play outputs 243
– for solutions 221
– for stop outputs 240, 241
non-coincidence set 217
NSP-condition 49, 55
numerical analysis 90, 105, 235, 244, 254
octant
– negative 6
– positive 5
ω-limit point 53, 55
operator
– adjoint 23, 86
– closed 36
– diode nonlinearity 85
– DN- 85, 87
– integral 186
– linear 23, 86
– lower semicontinuous 42
– maximal monotone 42
– monotone 42
– play 156
– projection 81
– shift 179
– stop 153, 163
– uniformly monotone 42
– upper semicontinuous 42
– Uryson–Volterra 177
orthogonal complement 7, 17
orthogonal decomposition 19
orthogonal sum 20
oscillation 46
– asymptotically damped 47
– damped 47
– forced 46
– free 47
– periodic 48, 91
Osgood condition 179, 183
outer normal 7, 249
output 132
– bang-bang 146
– continuous 138
– left-continuous 133
– monotone 134
– periodic 145
– play 156
– right-continuous 133
– signal 134
– stop 153
– with memory 132
parallelogram identity 12
peak function 164
periodicity criterion 232
phase portrait 109
phase space 3
play 156
– with continuous input 243
predator-prey system 107, 111
preimage 23
projection
– metric 5, 11, 19, 31
– nonexpansive 14
quadrangle 15
quasicurrent 125
quasidifferential equation 123
range 23
rectangle 15
rectifier 88, 91, 252
reflection 67
regularity 162
relay 131, 172, 232
– asymptotic resolvent 142
– generalized 146
– ideal 124
– non-ideal 124, 131, 198
– resolvent 139
resistance 1, 252
Riemann sum 40
Riemann–Stieltjes integral 211
rotation 52, 67
– angle 71
– map 53
saddle point 109
scalar product 7, 57, 247
segment 46, 56
– transversal 56
semigroup property 125, 127, 140, 143
– local 125
set
– closed 6
– compact 6
– convex 6
– maximal 127
– non-coincidence 217
– ordered 127
shift 8, 17, 25
– invariance 17, 140, 142
– operator 179
smooth model 121, 239, 249
– of a DN-system 121, 246
– of a hysteresis 215
– of a play 239
– of a relay 215
– of a stop 239, 241
snail 61
– contracting 61
– expanding 61
Sobolev space 14
solution
– damped 47
– left-continuous 124
– non-oscillatory 48
– oscillatory 48
– p-stable 192, 196
– p0-stable 192
– ψ-stable 189
– ψ0-stable 189, 201
– regular 162
– right-continuous 124
– strong 124, 158
stability 108
– asymptotic 68, 189
– asymptotically orbital 68
– exponential 189, 193
– Lyapunov 109, 189
– orbital 66
– p- 192, 196
– p0- 192
– ψ- 189
– ψ0- 189, 201
– strongly orbital 68
stable centre 109
state change 165
statistical property 141, 144
stop 153, 185
– with continuous input 241
– with smooth input 240
switching point 131, 139
system
– closed 172, 185
– control-correction 170
– DN- 85, 87
– dynamical 188, 196
– linear 209
– locally explicit 124, 167, 171
– planar 232, 248
– predator-prey 107, 111
– reduced 193
– Volterra–Lotka 107
– with memory 166
– with M-switch 166, 175
– with non-smooth action 209
– with relay 172
temperature regulator 181, 198
– ψ-stable 198
theorem
– Arzelà–Ascoli 187
– Cauchy–Peano 199
– Jordan 60
– Lyapunov 109, 197
– Minty 42
– nearness 220, 221, 227
– Picard–Lindelöf 33
– Poincaré–Bendixson 49
– Schauder 187
threshold value 131
– lower 131, 146
– upper 131, 146
trajectory 49, 113
– attracting 67, 115
– closed 49, 59, 113
– orbitally stable 67, 73
transducer 131, 146
transposed matrix 86
transposed vector 82
twister 73
uniqueness 31, 41, 74, 126, 133, 146, 155, 157, 170
– global 173, 210
– local 177
van Kampen lemma 128
variational inequality 13
voltage 1, 252
– converter 81
– exterior 88
– incoming 86
– outgoing 86
Volt–Ampère characteristic 86
Volterra–Lotka model 107
Volterra property 140, 143, 166
vortex 73
wedge 11
zone of non-sensibility 132
Zorn lemma 127