Generalized Ordinary Differential Equations in Abstract Spaces and Applications [1 ed.] 1119654939, 9781119654933


English Pages 512 [515] Year 2021



Generalized Ordinary Differential Equations in Abstract Spaces and Applications

Generalized Ordinary Differential Equations in Abstract Spaces and Applications Edited by

Everaldo M. Bonotto Universidade de São Paulo São Carlos, SP, Brazil

Márcia Federson Universidade de São Paulo São Carlos, SP, Brazil

Jaqueline G. Mesquita Universidade de Brasília Brasília, DF, Brazil

This edition first published 2021 © 2021 John Wiley and Sons, Inc. All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, except as permitted by law. Advice on how to obtain permission to reuse material from this title is available at http://www.wiley.com/go/permissions. The right of Everaldo M. Bonotto, Márcia Federson, and Jaqueline G. Mesquita to be identified as the authors of the editorial material in this work has been asserted in accordance with law. Registered Office John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, USA Editorial Office 111 River Street, Hoboken, NJ 07030, USA For details of our global editorial offices, customer services, and more information about Wiley products visit us at www.wiley.com. Wiley also publishes its books in a variety of electronic formats and by print-on-demand. Some content that appears in standard print versions of this book may not be available in other formats. Limit of Liability/Disclaimer of Warranty The contents of this work are intended to further general scientific research, understanding, and discussion only and are not intended and should not be relied upon as recommending or promoting scientific method, diagnosis, or treatment by physicians for any particular patient. In view of ongoing research, equipment modifications, changes in governmental regulations, and the constant flow of information relating to the use of medicines, equipment, and devices, the reader is urged to review and evaluate the information provided in the package insert or instructions for each medicine, equipment, or device for, among other things, any changes in the instructions or indication of usage and for added warnings and precautions. 
While the publisher and authors have used their best efforts in preparing this work, they make no representations or warranties with respect to the accuracy or completeness of the contents of this work and specifically disclaim all warranties, including without limitation any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives, written sales materials or promotional statements for this work. The fact that an organization, website, or product is referred to in this work as a citation and/or potential source of further information does not mean that the publisher and authors endorse the information or services the organization, website, or product may provide or recommendations it may make. This work is sold with the understanding that the publisher is not engaged in rendering professional services. The advice and strategies contained herein may not be suitable for your situation. You should consult with a specialist where appropriate. Further, readers should be aware that websites listed in this work may have changed or disappeared between when this work was written and when it is read. Neither the publisher nor authors shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages. Library of Congress Cataloging-in-Publication Data applied for ISBN: 9781119654933 Cover design by Wiley Cover images: © piranka/E+/Getty Images Set in 9.5/12.5pt STIXTwoText by Straive, Chennai, India 10 9 8 7 6 5 4 3 2 1

To GOD, VANDA, HUMBERTO, and FILIPE, with deep gratitude. —S. M. Afonso

To my family. —F. G. de Andrade

To my family, TERESINHA, JOSÉ and TIAGO. —F. Andrade da Silva

To GOD. To my parents, GERALDO and ZULMIRA. To my brother, DANIEL. —M. Ap. Silva

To my family. —E. M. Bonotto

To my wife FABIANA and my son ARTHUR. —R. Collegari

To GOD, YAHWEH. —M. Federson

To my parents, LOURIVAL and HELENA. —M. Frasson

To GOD, my family, and my hometown SABANALARGA, with all my love. —R. Grau

To my husband, LUÍS. —J. G. Mesquita

To GOD, for his infinite love, which reached me and changed my life forever. —M. C. Mesquita

To GOD. To my family. —P. H. Tacuri

To my wife, MICHELLE. —E. Toon

And we dedicate this book to the memories of our friends LUCIENE P. GIMENES ARANTES & ŠTEFAN SCHWABIK.

Contents

List of Contributors xi
Foreword xiii
Preface xvii

1 Preliminaries 1
Everaldo M. Bonotto, Rodolfo Collegari, Márcia Federson, Jaqueline G. Mesquita, and Eduard Toon
1.1 Regulated Functions 2
1.1.1 Basic Properties 2
1.1.2 Equiregulated Sets 7
1.1.3 Uniform Convergence 9
1.1.4 Relatively Compact Sets 11
1.2 Functions of Bounded ℬ-Variation 14
1.3 Kurzweil and Henstock Vector Integrals 19
1.3.1 Definitions 20
1.3.2 Basic Properties 25
1.3.3 Integration by Parts and Substitution Formulas 29
1.3.4 The Fundamental Theorem of Calculus 36
1.3.5 A Convergence Theorem 44
Appendix 1.A: The McShane Integral 44

2 The Kurzweil Integral 53
Everaldo M. Bonotto, Rodolfo Collegari, Márcia Federson, and Jaqueline G. Mesquita
2.1 The Main Background 54
2.1.1 Definition and Compatibility 54
2.1.2 Special Integrals 56
2.2 Basic Properties 57
2.3 Notes on Kapitza Pendulum 67

3 Measure Functional Differential Equations 71
Everaldo M. Bonotto, Márcia Federson, Miguel V. S. Frasson, Rogelio Grau, and Jaqueline G. Mesquita
3.1 Measure FDEs 74
3.2 Impulsive Measure FDEs 76
3.3 Functional Dynamic Equations on Time Scales 86
3.3.1 Fundamentals of Time Scales 87
3.3.2 The Perron Δ-integral 89
3.3.3 Perron Δ-integrals and Perron–Stieltjes integrals 90
3.3.4 MDEs and Dynamic Equations on Time Scales 98
3.3.5 Relations with Measure FDEs 99
3.3.6 Impulsive Functional Dynamic Equations on Time Scales 104
3.4 Averaging Methods 106
3.4.1 Periodic Averaging 107
3.4.2 Nonperiodic Averaging 118
3.5 Continuous Dependence on Time Scales 135

4 Generalized Ordinary Differential Equations 145
Everaldo M. Bonotto, Márcia Federson, and Jaqueline G. Mesquita
4.1 Fundamental Properties 146
4.2 Relations with Measure Differential Equations 153
4.3 Relations with Measure FDEs 160

5 Basic Properties of Solutions 173
Everaldo M. Bonotto, Márcia Federson, Luciene P. Gimenes (in memoriam), Rogelio Grau, Jaqueline G. Mesquita, and Eduard Toon
5.1 Local Existence and Uniqueness of Solutions 174
5.1.1 Applications to Other Equations 178
5.2 Prolongation and Maximal Solutions 181
5.2.1 Applications to MDEs 191
5.2.2 Applications to Dynamic Equations on Time Scales 197

6 Linear Generalized Ordinary Differential Equations 205
Everaldo M. Bonotto, Rodolfo Collegari, Márcia Federson, and Miguel V. S. Frasson
6.1 The Fundamental Operator 207
6.2 A Variation-of-Constants Formula 209
6.3 Linear Measure FDEs 216
6.4 A Nonlinear Variation-of-Constants Formula for Measure FDEs 220

7 Continuous Dependence on Parameters 225
Suzete M. Afonso, Everaldo M. Bonotto, Márcia Federson, and Jaqueline G. Mesquita
7.1 Basic Theory for Generalized ODEs 226
7.2 Applications to Measure FDEs 236

8 Stability Theory 241
Suzete M. Afonso, Fernanda Andrade da Silva, Everaldo M. Bonotto, Márcia Federson, Luciene P. Gimenes (in memoriam), Rogelio Grau, Jaqueline G. Mesquita, and Eduard Toon
8.1 Variational Stability for Generalized ODEs 244
8.1.1 Direct Method of Lyapunov 246
8.1.2 Converse Lyapunov Theorems 247
8.2 Lyapunov Stability for Generalized ODEs 256
8.2.1 Direct Method of Lyapunov 257
8.3 Lyapunov Stability for MDEs 261
8.3.1 Direct Method of Lyapunov 263
8.4 Lyapunov Stability for Dynamic Equations on Time Scales 265
8.4.1 Direct Method of Lyapunov 267
8.5 Regular Stability for Generalized ODEs 272
8.5.1 Direct Method of Lyapunov 275
8.5.2 Converse Lyapunov Theorem 282

9 Periodicity 295
Marielle Ap. Silva, Everaldo M. Bonotto, Rodolfo Collegari, Márcia Federson, and Maria Carolina Mesquita
9.1 Periodic Solutions and Floquet's Theorem 297
9.1.1 Linear Differential Systems with Impulses 303
9.2 (θ,T)-Periodic Solutions 307
9.2.1 An Application to MDEs 313

10 Averaging Principles 317
Márcia Federson and Jaqueline G. Mesquita
10.1 Periodic Averaging Principles 320
10.1.1 An Application to IDEs 325
10.2 Nonperiodic Averaging Principles 330

11 Boundedness of Solutions 341
Suzete M. Afonso, Fernanda Andrade da Silva, Everaldo M. Bonotto, Márcia Federson, Rogelio Grau, Jaqueline G. Mesquita, and Eduard Toon
11.1 Bounded Solutions and Lyapunov Functionals 342
11.2 An Application to MDEs 352
11.2.1 An Example 356

12 Control Theory 361
Fernanda Andrade da Silva, Márcia Federson, and Eduard Toon
12.1 Controllability and Observability 362
12.2 Applications to ODEs 365

13 Dichotomies 369
Everaldo M. Bonotto and Márcia Federson
13.1 Basic Theory for Generalized ODEs 370
13.2 Boundedness and Dichotomies 381
13.3 Applications to MDEs 391
13.4 Applications to IDEs 400

14 Topological Dynamics 407
Suzete M. Afonso, Marielle Ap. Silva, Everaldo M. Bonotto, and Márcia Federson
14.1 The Compactness of the Class ℱ0(Ω,h) 408
14.2 Existence of a Local Semidynamical System 411
14.3 Existence of an Impulsive Semidynamical System 418
14.4 LaSalle's Invariance Principle 423
14.5 Recursive Properties 425

15 Applications to Functional Differential Equations of Neutral Type 429
Fernando G. Andrade, Miguel V. S. Frasson, and Patricia H. Tacuri
15.1 Drops of History 429
15.2 FDEs of Neutral Type with Finite Delay 435

References 455
List of Symbols 471
Index 473


List of Contributors

Suzete M. Afonso
Departamento de Matemática – Instituto de Geociências e Ciências Exatas, Universidade Estadual Paulista “Júlio de Mesquita Filho” (UNESP), Rio Claro–SP, Brazil

Everaldo M. Bonotto
Departamento de Matemática Aplicada e Estatística, Instituto de Ciências Matemáticas e de Computação (ICMC), Universidade de São Paulo, São Carlos–SP, Brazil

Fernando G. Andrade
Colégio Técnico de Bom Jesus, Universidade Federal do Piauí, Bom Jesus–PI, Brazil

Rodolfo Collegari
Faculdade de Matemática, Universidade Federal de Uberlândia, Uberlândia–MG, Brazil

Fernanda Andrade da Silva
Departamento de Matemática, Instituto de Ciências Matemáticas e de Computação (ICMC), Universidade de São Paulo, São Carlos–SP, Brazil

Marielle Ap. Silva
Departamento de Matemática, Instituto de Ciências Matemáticas e de Computação (ICMC), Universidade de São Paulo, São Carlos–SP, Brazil

Márcia Federson
Departamento de Matemática, Instituto de Ciências Matemáticas e de Computação (ICMC), Universidade de São Paulo, São Carlos–SP, Brazil

Miguel V. S. Frasson
Departamento de Matemática Aplicada e Estatística, Instituto de Ciências Matemáticas e de Computação (ICMC), Universidade de São Paulo, São Carlos–SP, Brazil


Luciene P. Gimenes (in memoriam)
Departamento de Matemática – Centro de Ciências Exatas, Universidade Estadual de Maringá, Maringá–PR, Brazil

Maria Carolina Mesquita Departamento de Matemática Instituto de Ciências Matemáticas e de Computação (ICMC) Universidade de São Paulo São Carlos–SP, Brazil

Rogelio Grau Departamento de Matemáticas y Estadística División de Ciencias Básicas Universidad del Norte Barranquilla, Colombia

Patricia H. Tacuri Departamento de Matemática – Centro de Ciências Exatas Universidade Estadual de Maringá Maringá–PR, Brazil

Jaqueline G. Mesquita Departamento de Matemática Instituto de Ciências Exatas Universidade de Brasília Brasília–DF, Brazil

Eduard Toon Departamento de Matemática Instituto de Ciências Exatas Universidade Federal de Juiz de Fora Juiz de Fora–MG, Brazil


Foreword

Since the origins of the calculus, the development of ordinary differential equations has always both influenced and followed the development of the integral calculus. Newton’s theory of fluxions was instrumental for solving the equations of mechanics, Leibniz and the Bernoulli brothers explored the differential equations solvable by quadrature, and Euler developed similar numerical methods for computing definite integrals and approximate solutions of differential equations, which Cauchy converted into existence results for the integral of continuous functions and the solutions of initial value problems. Following this distinguished line, Jaroslav Kurzweil solved in 1957 the delicate question of finding optimal conditions for the continuous dependence on parameters of solutions of ordinary differential systems. To this aim, he introduced a new concept of integral generalizing Ward’s extension of the Perron integral, and showed how to define it through a technically minor but basic modification of the definition of the Riemann integral. It happened that, under this type of convergence, the limit of a sequence of ordinary differential equations could be a more general object, a generalized differential equation, defined in terms of the Kurzweil integral. As often happens in mathematics, a similar integral was introduced independently in 1961 by Ralph Henstock during his investigations on the Ward integral. This new theory, besides providing a simple, elegant, and pedagogical approach to the classical concepts of the Lebesgue and Perron integrals, shed new light on many classical questions in differential equations, such as stability and the averaging method. The theory was actively developed in Praha, around Kurzweil, by Jiří Jarník, Štefan Schwabik, Milan Tvrdý, Ivo Vrkoč, Dana Franková, and others.
Their results are beautifully described in the monographs Generalized Ordinary Differential Equations of Schwabik and Generalized Ordinary Differential Equations of Kurzweil. Contributions also came from many other countries. In particular, inspired by the fruitful visits of Márcia Federson in Praha and the enthusiastic lectures


of Štefan Schwabik in São Carlos, a school was created and developed at the University of São Paulo and spread into other Brazilian institutions. These research efforts gave a great impetus to the theory of generalized ordinary differential equations, from the fundamental theory to many new important applications. Because of its definition through suitable limits of Riemann sums, the Kurzweil–Henstock integral directly applies to functions with values in Banach spaces. This is particularly emphasized by the monograph Topics in Banach Space Integration of Schwabik and Ye GuoJu. It was therefore a natural and fruitful idea to consider generalized ordinary differential equations in Banach spaces, not just for the sake of generality, but also because of their specific applications, and in particular their use in reinterpreting the concept of functional differential equation. This is the viewpoint adopted in the present substantial monograph, whose guiding thread is to show that many types of evolution equations in Banach spaces can be treated in a unified and more general way as special cases of Kurzweil’s generalized differential equations. The general ideas are introduced and motivated through an elegant treatment of measure functional differential equations, where, in the integrated form of the differential equation, the Lebesgue measure is replaced by a Stieltjes one. The approach covers differential equations with impulses and dynamic equations on time scales. Generalized differential equations in Banach spaces are then introduced and developed in a systematic way, with most classical problems of the theory of ordinary differential equations extending to this new and general setting.
This includes the existence, continuation, and continuous dependence of solutions, linear equations, Lyapunov stability, periodic and bounded solutions, the averaging method, some control theory, dichotomy theory, questions of topological dynamics, and measure neutral functional differential equations. It is not surprising that the richness and breadth of the content of this substantial monograph is the result of the intensive team activity of 14 contributors: S.M. Afonso, F. Andrade da Silva, M. Ap. Silva, E.M. Bonotto, R. Collegari, F.G. de Andrade, M. Federson, M. Frasson, L.P. Gimenes, R. Grau, J.G. Mesquita, M.C. Mesquita, P.H. Tacuri, and E. Toon. Most of them have already published contributions to the area, and others obtained their PhDs recently in this direction. Each chapter is authored by a selected subgroup of the team, but it appears that a joint final reading took place for the final product. Each chapter provides a state-of-the-art account of its topic, and many classes of readers will find in this book a renewed picture of their favorite area of expertise and an inspiration for further research. In a world where, even for scientific research, competition is too often praised as a necessary driving force, this monograph shows that the true answer is better found in an open, enthusiastic, and unselfish cooperation. In this way, the book not only magnifies the work of Jaroslav Kurzweil and of Štefan Schwabik but also their wonderful spirit.

October 2020

Jean Mawhin Louvain-la-Neuve, Belgium


Preface

It is well known that the remarkable theory of generalized ordinary differential equations (we write generalized ODEs, for short) was born in the Czech Republic in the year 1957 with the brilliant paper [147] by Professor Jaroslav Kurzweil. In Brazil, the theory of generalized ODEs was introduced by Professor Štefan Schwabik during his visit to the Universidade de São Paulo, in the city of São Paulo, in 1989. Nevertheless, it was only in 2002 that the theory really started to be developed here. The article by Professors Márcia Federson and Plácido Táboas, published in the Journal of Differential Equations in 2003 (see [92]), was the first Brazilian publication on the subject. Now, 18 years later, members of the Brazilian research group on Functional Differential Equations and Nonabsolute Integration have decided to gather the results obtained over these years into a comprehensive account of the theory of generalized ODEs in abstract spaces. Originally, this monograph was planned to be organized by Professors Márcia Federson, Everaldo M. Bonotto, and Jaqueline G. Mesquita, with the contribution of the following authors: Suzete M. Afonso, Fernando G. Andrade, Fernanda Andrade da Silva, Marielle Ap. Silva, Rodolfo Collegari, Miguel Frasson, Luciene P. Gimenes, Rogelio Grau, Maria Carolina Mesquita, Patricia H. Tacuri, and Eduard Toon. However, after a while, it became a production of us all, with contributions of everyone to all chapters and to the uniformity, coherence, language, and interrelationship of the results. We, then, present this carefully crafted work to disseminate the theories involved here, especially those on Kurzweil–Henstock nonabsolute integration and on generalized ODEs. In the introductory chapter, named Preliminaries, we bring together two main issues that permeate this book.
The first one concerns the spaces where the functions within the right-hand sides of differential or integral equations live. The other one concerns the theory of nonabsolute vector-valued integrals in the senses of J. Kurzweil and R. Henstock. Sections 1.1 and 1.2 are devoted to properties of


the space of regulated functions and the space of functions of bounded ℬ-variation. Among the main results of Section 1.1, we mention a characterization, based on [96, 97], of relatively compact sets of the space of regulated functions. Section 1.2 deals with properties of functions of bounded ℬ-variation, where Helly’s choice principle for abstract spaces is a spotlight. The book [127] is the main reference for this section. The third section is devoted to nonabsolute vector-valued integrals. The basis of this theory is presented here, and results specialized for Perron–Stieltjes integrals are included. We highlight substitution formulas and an integration by parts formula coming from [212]. Other important references for this section are [72, 73, 210].

The second chapter is devoted to the integral as defined by Jaroslav Kurzweil in [147]. We compiled some historical data on how the idea of the integral came about. Highlights of this chapter include the Saks–Henstock lemma, the Hake-type theorem, and the change of variables theorem. We end this chapter with a brief history of the Kapitza pendulum equation, whose solution is highly oscillating and, therefore, suitable for being treated via the Kurzweil–Henstock nonabsolute integration theory. An important reference for Chapter 2 is [209].

Before entering the theory of generalized ODEs, we take a trip through the theory of measure functional differential equations (we write measure FDEs, for short). The third chapter then appears as a wide-ranging collection of results on measure FDEs for Banach space-valued functions. In particular, we investigate equations of the form

y(t) = y(t₀) + ∫_{t₀}^{t} f(y_s, s) dg(s),    t ∈ [t₀, t₀ + 𝜎],

where y_s is a memory function and the integral on the right-hand side is in the sense of Perron–Stieltjes. We show that these equations encompass not only impulsive functional dynamic equations on time scales but also impulsive measure FDEs. Examples illustrating the relations between any two of these equations are also included. References [85, 86] serve as the foundation for these relations. Among other topics covered by Chapter 3, we mention averaging principles, covering the periodic and nonperiodic cases, and results on continuous dependence of solutions on time scales. References [21, 82, 178] are crucial here.

In Chapter 4, we enter the theory of generalized ODEs itself. We begin by recalling the concept of a nonautonomous generalized ODE of the form

dx/d𝜏 = DF(x, t),

where F takes a pair (x, t) of a regulated function x and a time t to a regulated function. The main reference for this chapter is [209]. Measure FDEs in the integral form described above feature in Chapter 4 as supporting actors, because now their


solutions can be related to solutions of generalized ODEs whose right-hand sides involve functions which look like

             ⎧ 0,                           t₀ − r ≤ 𝜗 ≤ t₀,
F(x, t)(𝜗) = ⎨ ∫_{t₀}^{𝜗} f(x_s, s) dg(s),  t₀ ≤ 𝜗 ≤ t ≤ t₀ + 𝜎,
             ⎩ ∫_{t₀}^{t} f(x_s, s) dg(s),  t ≤ 𝜗 ≤ t₀ + 𝜎.

This characteristic of generalized ODEs plays an important role in the entire manuscript, since it allows one to translate results from generalized ODEs to measure FDEs.

Chapter 5, based on [78], brings together the foundations of the theory of generalized ODEs. Section 5.1 concerns local existence and uniqueness of a solution of a nonautonomous generalized ODE, with applications to measure FDEs and functional dynamic equations on time scales. Section 5.2 is devoted to results on prolongation of solutions of generalized ODEs, measure differential equations, and dynamic equations on time scales.

Chapter 6 deals with a very important class of differential equations, the class of linear generalized ODEs. The origins of linear generalized ODEs go back to the papers [209–211]. Here, we recall the notion of the fundamental operator associated with a linear generalized ODE for Banach space-valued functions, and we travel the same road as the authors of [45] to obtain a variation-of-constants formula for a linearly perturbed generalized ODE. Concerning applications, we extend the class of equations to include linear measure FDEs.

After linear generalized ODEs are investigated, we move to results on continuous dependence of solutions on parameters. This is the core of Chapter 7, which is based on [4, 95, 96, 177]. Given a family of generalized ODEs, we present sufficient conditions so that the family of their corresponding solutions converges uniformly, on compact sets, to the solution of the limiting generalized ODE. We also prove that, given a generalized ODE and its solution x₀ : [a, b] → X, where X is a Banach space, one can obtain a family of generalized ODEs whose solutions converge uniformly to x₀ on [a, b].
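A one-line calculation (our remark; it follows directly from the definition of F displayed above, and is not quoted from the book) shows how this F carries the measure FDE: for t₀ ≤ t₁ ≤ t₂ ≤ t₀ + 𝜎 and t₂ ≤ 𝜗 ≤ t₀ + 𝜎,

```latex
F(x,t_2)(\vartheta) - F(x,t_1)(\vartheta)
  = \int_{t_0}^{t_2} f(x_s,s)\,\mathrm{d}g(s)
  - \int_{t_0}^{t_1} f(x_s,s)\,\mathrm{d}g(s)
  = \int_{t_1}^{t_2} f(x_s,s)\,\mathrm{d}g(s),
```

so the increments of t ↦ F(x, t), evaluated at 𝜗 ≥ t₂, are precisely the Perron–Stieltjes increments appearing in the integral form of the measure FDE. This is the mechanism behind the translation of results between the two settings.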
As we mentioned before, many types of differential equations can be regarded as generalized ODEs. This fact allows us to derive stability results for these equations through the relations between the solutions of a certain equation and the solutions of a generalized ODE. At the present time, the stability theory for generalized ODEs is undergoing a remarkable development. Recent results in this respect are contained in [3, 7, 80, 89, 90] and are gathered in Chapter 8. We also show the effectiveness of Lyapunov’s Direct Method to obtain several stability results, in addition to proving converse Lyapunov theorems for some types of stability. The types of stability explored here are variational stability, Lyapunov stability, regular stability, and many relations permeating these concepts.

The existence of periodic solutions to any kind of equation is also of great interest, especially in applications. Chapter 9 is devoted to this matter in the framework of generalized ODEs, whose results are also specialized to measure differential equations and impulsive differential equations. Section 9.1 brings together a result which provides conditions for the solutions of a linear generalized ODE taking values in ℝⁿ to be periodic and a result which relates periodic solutions of linear nonhomogeneous generalized ODEs to periodic solutions of linear homogeneous generalized ODEs. Still in this section, a characterization of the fundamental matrix of periodic linear generalized ODEs is established. This is the analogue of the Floquet theorem for generalized ODEs involving finite-dimensional space-valued functions. In Section 9.2, inspired by an approach of Jean Mawhin to treat periodic boundary value problems (we write periodic BVPs, for short), we introduce the concept of a (𝜃, T)-periodic solution for a nonlinear homogeneous generalized ODE in Banach spaces, where T > 0 and 𝜃 > 0. A result that ensures a correspondence between solutions of a (𝜃, T)-periodic BVP and (𝜃, T)-periodic solutions of a nonlinear homogeneous generalized ODE is the spotlight here. Then, the existence of a (𝜃, T)-periodic solution is guaranteed.

Averaging methods are used to investigate the solutions of nonautonomous differential equations by means of the solutions of an “averaged” autonomous equation. In Chapter 10, we present a periodic averaging principle as well as a nonperiodic one for generalized ODEs. The main references for this chapter are [83, 178].

Chapter 11 is designed to provide the reader with a systematic account of recent developments in the boundedness theory for generalized ODEs.
The results of this chapter were borrowed from the articles [2, 79].

Chapter 12 is devoted to control theory in the setting of abstract generalized ODEs. In its first section, we introduce the concepts of observability, exact controllability, and approximate controllability, and we give necessary and sufficient conditions for a system of generalized ODEs to be exactly controllable, approximately controllable, or observable. In Section 12.2, we apply the results to classical ODEs.

The study of exponential dichotomy for linear generalized ODEs of the type

dx/d𝜏 = D[A(t)x]

is the heartwood of Chapter 13, where sufficient conditions for the existence of exponential dichotomies are obtained, as well as conditions for the existence of bounded solutions for the nonhomogeneous equation

dx/d𝜏 = D[A(t)x + f(t)].


Using the relations between the solutions of generalized ODEs and the solutions of other types of equations, we translate our results to measure differential equations and impulsive differential equations. The main reference for this chapter is [29].

The aim of Chapter 14 is to bring together the theory of semidynamical systems generated by generalized ODEs. We show the existence of a local semidynamical system generated by a nonautonomous generalized ODE of the form

dx/d𝜏 = DF(x, t),

where F belongs to a compact class of right-hand sides. We construct an impulsive semidynamical system associated with a generalized ODE subject to external impulse effects and, for this class of impulsive systems, we present a LaSalle invariance principle-type result. Still in this chapter, we present some topological properties of impulsive semidynamical systems, such as minimality and recurrence. The main reference here is [4].

Chapter 15 is intended for applications of the theory developed in some of the previous chapters to a class of more general functional differential equations, namely, measure FDEs of neutral type. Section 15.1 gathers some historical notes, ranging from the origins of the term “equation”, passing through “functional differential equation”, and reaching functional differential equations of neutral type. Then, we present a correspondence between solutions of a measure FDE of neutral type with finite delays and solutions of a generalized ODE. Results on existence and uniqueness of a solution, as well as continuous dependence of solutions on parameters, based on [76], are also explored.
We end this preface by expressing our immense gratitude to professors Jaroslav Kurzweil, Štefan Schwabik (in memoriam), and Milan Tvrdý for welcoming several members of our research group at the Institute of Mathematics of the Academy of Sciences of the Czech Republic so many times, for the countably many pieces of good advice and talks, and for the corrections of proofs and theorems during all these years.

October 2020

Everaldo M. Bonotto
Márcia Federson
Jaqueline G. Mesquita
São Carlos, SP, Brazil


1 Preliminaries

Everaldo M. Bonotto¹, Rodolfo Collegari², Márcia Federson³, Jaqueline G. Mesquita⁴, and Eduard Toon⁵

¹ Departamento de Matemática Aplicada e Estatística, Instituto de Ciências Matemáticas e de Computação (ICMC), Universidade de São Paulo, São Carlos, SP, Brazil
² Faculdade de Matemática, Universidade Federal de Uberlândia, Uberlândia, MG, Brazil
³ Departamento de Matemática, Instituto de Ciências Matemáticas e de Computação (ICMC), Universidade de São Paulo, São Carlos, SP, Brazil
⁴ Departamento de Matemática, Instituto de Ciências Exatas, Universidade de Brasília, Brasília, DF, Brazil
⁵ Departamento de Matemática, Instituto de Ciências Exatas, Universidade Federal de Juiz de Fora, Juiz de Fora, MG, Brazil

Generalized Ordinary Differential Equations in Abstract Spaces and Applications, First Edition. Edited by Everaldo M. Bonotto, Márcia Federson, and Jaqueline G. Mesquita. © 2021 John Wiley & Sons, Inc. Published 2021 by John Wiley & Sons, Inc.

This preliminary chapter is devoted to two pillars of the theory of generalized ordinary differential equations, for which we use the short form "generalized ODEs". One of these pillars concerns the spaces in which the solutions of a generalized ODE are generally placed. The other pillar concerns the theory of nonabsolute integration, due to Jaroslav Kurzweil and Ralph Henstock, for integrands taking values in Banach spaces. As a matter of fact, such an integration theory permeates the entire book: it lies within the heartwood of the theory of generalized ODEs, appearing in the integral form of a very special case of nonautonomous generalized ODEs, namely, measure functional differential equations. The solutions of a Cauchy problem for a generalized ODE, with right-hand side in a class of functions introduced by J. Kurzweil in [147–149], usually belong to a certain space of functions of bounded variation (see Lemma 4.9). However, since functions of bounded variation are also regulated functions in the sense described by Jean Dieudonné and, more generally, by the group Nicolas Bourbaki, and because the space of regulated functions is more adequate for dealing with the discontinuous functions appearing naturally in Stieltjes-type integrals, it is important to present a substantial content about this space. Thus, the first section of this chapter describes the main properties of the space of regulated functions


with the icing on the cake being a characterization of its relatively compact subsets due to D. Franková. Regarding functions of bounded variation, which are known to be of bounded semivariation and, hence, of bounded ℬ-variation, we present, in the second section of this chapter, a coherent overview of functions of bounded ℬ-variation over bilinear triples. Among the results involving functions of bounded variation, the theorem of Helly (or Helly's choice principle for Banach space-valued functions) due to C. S. Hönig is a spotlight. On the other hand, functions of bounded semivariation appear, for instance, in the integration by parts formulas for Kurzweil and Henstock integrals of Stieltjes type. In the third section of this chapter, we describe the second pillar and main background of the theory of generalized ODEs, namely, the framework of vector-valued nonabsolute integrals of Kurzweil and Henstock. Here, we call the reader's attention to the fact that we refer to Kurzweil vector integrals as Perron–Stieltjes integrals so that, when a more general definition of the Kurzweil integral is presented in Chapter 2, the reader will not be confused. One of the highlights of the third section is, then, the integration by parts formula for Perron–Stieltjes integrals. An extra section called "Appendix," which can be skipped in a first reading of the book, concerns other types of gauge-based integrals which use the interesting idea of Edward James McShane. The well-known Bochner–Lebesgue integral comes into the scene, and an equivalent definition of it as the limit of Riemann-type sums comes up.

1.1 Regulated Functions

Regulated functions appear in the works by J. Dieudonné [58, p. 139] and N. Bourbaki [32, p. II.4]. The raison d'être of regulated functions lies in the fact that every regulated function f ∶ [0, T] ⊂ ℝ → ℝⁿ has a primitive, that is, there exists a continuous function F ∶ [0, T] ⊂ ℝ → ℝⁿ such that dF/dt(t) = f(t) almost everywhere in [0, T], in the sense of the Lebesgue measure. The interested reader may want to check this fact as described, for instance, by the group N. Bourbaki in [32, Corollaire I, p. II.6].

1.1.1 Basic Properties

Let X be a Banach space with norm ∥⋅∥. Here, we describe regulated functions f ∶ [a, b] → X, where [a, b], with a < b, is a compact interval of the real line ℝ.

Definition 1.1: A function f ∶ [a, b] → X is called regulated, if the lateral limits

lim_{s→t−} f(s) = f(t−), t ∈ (a, b], and lim_{s→t+} f(s) = f(t+), t ∈ [a, b),


exist. The space of all regulated functions f ∶ [a, b] → X will be denoted by G([a, b], X). We denote the subspace of all continuous functions f ∶ [a, b] → X by C([a, b], X) and, by G−([a, b], X), we mean the subspace of regulated functions f ∶ [a, b] → X which are left-continuous on (a, b]. Then, the following inclusions clearly hold: C([a, b], X) ⊂ G−([a, b], X) ⊂ G([a, b], X).

Remark 1.2: Let O ⊂ X. By G([a, b], O), we mean the set of all elements f ∈ G([a, b], X) for which f(t) ∈ O for every t ∈ [a, b]. Thus, it is clear that G([a, b], O) ⊂ G([a, b], X) and the range of f ∈ G([a, b], O) belongs to O. Note that, for a given t ∈ [a, b], f(t+) and f(t−) do not necessarily belong to O.

Any finite set d = {t0, t1, …, tm} of points in the closed interval [a, b] such that a = t0 < t1 < · · · < tm = b is called a division of [a, b]. We write simply d = (ti). Given a division d = (ti) of [a, b], its elements are usually denoted by t0, t1, …, t|d|, where, from now on, |d| denotes the number of intervals in which [a, b] is divided by the division d. The set of all divisions d = (ti) of [a, b] is denoted by D[a,b].

Definition 1.3: A function f ∶ [a, b] → X is called a step function, if there is a division d = (ti) ∈ D[a,b] such that, for each i = 1, …, |d|, f(t) = ci ∈ X, for all t ∈ (ti−1, ti). We denote by E([a, b], X) the set of all step functions f ∶ [a, b] → X.

It is clear that E([a, b], X) ⊂ G([a, b], X). Moreover, we have the following important result, which is a general version of the result presented in [127, Theorem I.3.1].

Theorem 1.4: Let O ⊂ X and consider a function f ∶ [a, b] → O. The assertions below are equivalent:
(i) f ∶ [a, b] → O is the uniform limit of step functions {fn}n∈ℕ, with fn ∶ [a, b] → O;
(ii) f ∈ G([a, b], O);
(iii) given 𝜖 > 0, there exists a division d ∶ a = t0 < t1 < t2 < · · · < t|d| = b such that sup {∥f(t) − f(s)∥ ∶ t, s ∈ (tj−1, tj), j = 1, …, |d|} < 𝜖.

Proof.
We will prove (i) ⇒ (ii), (ii) ⇒ (iii) and, then, (iii) ⇒ (i).


(i) ⇒ (ii) Note that f(t) ∈ O for all t ∈ [a, b]. We need to show that f ∈ G([a, b], X), see Remark 1.2. Let t ∈ [a, b). We will only prove that lim_{s→t+} f(s) exists, because the existence of lim_{s→t−} f(s) follows analogously. Consider a sequence {tn}n∈ℕ in [a, b] such that tn ↘ t, that is, tn ⩾ t, for every n ∈ ℕ, and tn converges to t as n → ∞. Consider the sequence {fn}n∈ℕ of step functions from [a, b] to O such that fn → f uniformly as n → ∞. Then, given 𝜖 > 0, there exists k ∈ ℕ such that ∥f(t) − fk(t)∥ < 𝜖∕4, for all t ∈ [a, b]. In addition, since fk is a step function, there exists N0 ∈ ℕ such that ∥fk(tn) − fk(t+)∥ < 𝜖∕4, for all n ⩾ N0. Therefore, for n, m ⩾ N0, we have

∥f(tn) − f(tm)∥ ⩽ ∥f(tn) − fk(tn)∥ + ∥fk(tn) − fk(t+)∥ + ∥fk(t+) − fk(tm)∥ + ∥fk(tm) − f(tm)∥ < 𝜖.

Then, once X is a Banach space, lim_{s→t+} f(s) exists.

(ii) ⇒ (iii) Let 𝜖 > 0 be given. Since f ∈ G([a, b], O), it follows that f ∈ G([a, b], X) (see Remark 1.2). Thus, for every t ∈ (a, b), there exists 𝛿t > 0 such that

sup_{𝑤,s∈(t−𝛿t,t)} ∥f(𝑤) − f(s)∥ < 𝜖 and sup_{𝑤,s∈(t,t+𝛿t)} ∥f(𝑤) − f(s)∥ < 𝜖.

Similarly, there are 𝛿a, 𝛿b > 0 such that

sup_{𝑤,s∈(a,a+𝛿a)} ∥f(𝑤) − f(s)∥ < 𝜖 and sup_{𝑤,s∈(b−𝛿b,b)} ∥f(𝑤) − f(s)∥ < 𝜖.

Notice that the set A of intervals {[a, a + 𝛿a), (t − 𝛿t, t + 𝛿t), (b − 𝛿b, b] ∶ t ∈ (a, b)} is an open cover of the interval [a, b] and, hence, there is a division d = (ti) of [a, b], with i = 1, 2, …, |d|, such that {[a, a + 𝛿a), (t1 − 𝛿t1, t1 + 𝛿t1), …, (b − 𝛿b, b]} is a finite subcover of A for [a, b] and, moreover, sup {∥f(t) − f(s)∥ ∶ t, s ∈ (ti−1, ti), i = 1, …, |d|} < 𝜖.

(iii) ⇒ (i) Given n ∈ ℕ, let dn = (ti), i = 1, 2, …, |dn|, be a division of [a, b] such that sup {∥f(t) − f(s)∥ ∶ t, s ∈ (ti−1, ti), i = 1, 2, …, |dn|} < 1∕n, and take 𝜏i ∈ (ti−1, ti), i = 1, 2, …, |dn|. Define

fn(t) = ∑_{i=1}^{|dn|} f(𝜏i) 𝜒_{(ti−1,ti)}(t) + ∑_{i=0}^{|dn|} f(ti) 𝜒_{{ti}}(t),

where 𝜒B denotes the characteristic function of a measurable set B ⊂ ℝ. Note that fn(t) ∈ O for all t ∈ [a, b] and all n ∈ ℕ. Moreover, {fn}n∈ℕ is a sequence of step functions which converges uniformly to f, as n → ∞. ◽

It is a consequence of Theorem 1.4 (with O = X) that the closure of E([a, b], X) is G([a, b], X). Therefore, G([a, b], X) is a Banach space when equipped with the


usual supremum norm ∥f∥∞ = sup_{t∈[a,b]} ∥f(t)∥.

See also [127, Theorem I.3.6]. If B([a, b], X) denotes the Banach space of bounded functions from [a, b] to X, equipped with the supremum norm, then the inclusion G([a, b], X) ⊂ B([a, b], X) follows from Theorem 1.4, items (i) and (ii), taking the limit of step functions which are constant on each subinterval of continuity.

Recently, D. Franková established a fourth assertion equivalent to those of Theorem 1.4 in the case where O = X. See [97, Theorem 2.3]. One can note, however, that such a result also holds for any open set O ⊂ X. This is the content of the next lemma.

Lemma 1.5: Let O ⊂ X and f ∶ [a, b] → O be a function. Then the assertions of Theorem 1.4 are also equivalent to the following assertion:
(iv) for every 𝜖 > 0, there is a division a = t0 < t1 < · · · < t|d| = b such that ∥f(t′′) − f(t′)∥ < 𝜖, for all ti−1 < t′ < t′′ < ti and i = 1, 2, …, |d|.

Proof. Note that condition (iii) from Theorem 1.4 implies condition (iv). Now, assume that condition (iv) holds. Given 𝜖 > 0, there is a division a = t0 < t1 < · · · < t|d| = b such that ∥f(t′′) − f(t′)∥ < 𝜖, for all ti−1 < t′ < t′′ < ti and i = 1, 2, …, |d|. According to [97, Theorem 2.3], take 𝜏i ∈ (ti−1, ti) and consider the step function h ∶ [a, b] → O given by h(t) = f(𝜏i), for t ∈ (ti−1, ti), i = 1, 2, …, |d|, and h(t) = f(ti), for t = ti, i = 0, 1, …, |d|. Hence, sup_{t∈[a,b]} ∥h(t) − f(t)∥ ⩽ 𝜖. ◽
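The construction behind Theorem 1.4 (iii) ⇒ (i) and Lemma 1.5 can be tried numerically. The sketch below (our own illustration, not from the text) builds a step function equal to f(τᵢ) on each open subinterval and to f(tᵢ) at the division points, using a uniform division for simplicity (an assumption: in general, the division must be adapted to f), and measures the sampled sup distance as the division is refined.

```python
import math

# Our own numerical sketch of Theorem 1.4 (iii) => (i): build a step
# function equal to f(tau_i) on (t_{i-1}, t_i) and to f(t_i) at the
# division points, then measure the sup distance on a sample grid.

def step_approximation(f, a, b, n):
    ts = [a + (b - a) * i / n for i in range(n + 1)]           # division points
    taus = [(ts[i - 1] + ts[i]) / 2 for i in range(1, n + 1)]  # tags tau_i

    def h(t):
        for ti in ts:                      # value f(t_i) at division points
            if abs(t - ti) < 1e-12:
                return f(ti)
        i = min(int((t - a) * n / (b - a)), n - 1)  # open subinterval index
        return f(taus[i])

    return h

f = lambda t: t * math.sin(1.0 / t) if t > 0 else 0.0  # regulated on [0, 1]

errs = []
for n in (10, 100, 1000):
    h = step_approximation(f, 0.0, 1.0, n)
    sample = [k / 997 for k in range(998)]   # grid avoiding the division points
    errs.append(max(abs(f(t) - h(t)) for t in sample))
# errs shrinks as the division is refined, in line with the theorem
```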

The next result, borrowed from [7, Lemma 2.3], specifies the supremum of a function f ∈ G([a, b], X).

Proposition 1.6: Let f ∈ G([a, b], X). Then sup_{t∈[a,b]} ∥f(t)∥ = c, where either c = ∥f(𝜎)∥, for some 𝜎 ∈ [a, b], or c = ∥f(𝜎−)∥, for some 𝜎 ∈ (a, b], or c = ∥f(𝜎+)∥, for some 𝜎 ∈ [a, b).

Proof. Let c = sup_{t∈[a,b]} ∥f(t)∥. Since G([a, b], X) ⊂ B([a, b], X), c < ∞. By the definition of the supremum, for all n ∈ ℕ, one can choose xn ∈ [a, b] such that 0 ⩽ c − ∥f(xn)∥ < 1∕n, which implies lim_{n→∞} ∥f(xn)∥ = c.
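As a concrete instance of Proposition 1.6 (our own example, not from the text), the regulated function below has a supremum attained at no point of [0, 1]; it equals the norm of the lateral limit f(0+) instead.

```python
# Our own example for Proposition 1.6: f(0) = 0 and f(t) = 1 - t for
# t in (0, 1].  Then sup |f(t)| = 1 is attained by no t in [0, 1], but
# it equals the lateral limit |f(0+)| = 1, as the proposition predicts.

def f(t):
    return 1.0 - t if t > 0 else 0.0

grid = [k / 10000 for k in range(10001)]
sup_sampled = max(abs(f(t)) for t in grid)   # < 1, approaching 1
right_limit_at_0 = 1.0                       # f(0+) = 1
```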


Since {xn}n∈ℕ ⊂ [a, b], there exists a subsequence {xn′}n′∈ℕ ⊂ {xn}n∈ℕ such that xn′ → 𝜎 ∈ [a, b] as n′ → ∞. Since f is regulated, c = lim_{n′→∞} ∥f(xn′)∥ belongs to {∥f(𝜎)∥, ∥f(𝜎−)∥, ∥f(𝜎+)∥}, and the proof is complete. ◽

The composition of regulated functions may not be a regulated function, as shown by the next example, proposed by Dieudonné as an exercise. See, for instance, [58, Problem 2, p. 140].

Example 1.7: Consider, for instance, functions f, g ∶ [0, 1] → ℝ given by f(t) = t sin(1∕t), for t ∈ (0, 1], f(0) = 0, and g(t) = sgn t, that is, g is the sign function. Both f and g belong to G([0, 1], ℝ). However, the composition g ∘ f does not.

The next result, borrowed from [209, Theorem 10.11], gives us an interesting property of left-continuous regulated functions. Such a result will be used in Chapters 8 and 11. We state it here without proof.

Proposition 1.8: Let f, g ∈ G−([a, b], ℝ). If, for every t ∈ [a, b), there exists 𝛿(t) > 0 such that, for every 𝜂 ∈ (0, 𝛿(t)), we have f(t + 𝜂) − f(t) ⩽ g(t + 𝜂) − g(t), then f(s) − f(a) ⩽ g(s) − g(a), for s ∈ [a, b].
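Returning to Example 1.7, the failure of the composition g ∘ f to be regulated can be checked numerically. The sketch below (our own probe, not from the text) evaluates g ∘ f along two sequences tending to 0+ with different limits, so no right limit at 0 can exist.

```python
import math

# Numerical probe of Example 1.7: f(t) = t*sin(1/t), g = sgn.  Along
# t_n = 1/((2n + 1/2)*pi) we have sin(1/t_n) = 1, so (g∘f)(t_n) = 1;
# along s_n = 1/((2n + 3/2)*pi) we have sin(1/s_n) = -1, so
# (g∘f)(s_n) = -1.  Hence g∘f has no right limit at 0.

def f(t):
    return t * math.sin(1.0 / t) if t != 0 else 0.0

def g(x):  # the sign function
    return (x > 0) - (x < 0)

vals_plus = [g(f(1.0 / ((2 * n + 0.5) * math.pi))) for n in range(1, 6)]
vals_minus = [g(f(1.0 / ((2 * n + 1.5) * math.pi))) for n in range(1, 6)]
```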

We end this first section by introducing some notation for certain spaces of regulated functions defined on unbounded intervals of the real line ℝ. Given t0 ∈ ℝ, we denote by G([t0, ∞), X) the space of regulated functions from [t0, ∞) to X. In order to obtain a Banach space, we can intersect the space G([t0, ∞), X) with the space B([t0, ∞), X) of bounded functions from [t0, ∞) to X, in which case we write BG([t0, ∞), X) and equip such space with the supremum norm,

∥f∥∞ = sup_{t∈[t0,∞)} ∥f(t)∥ ∈ ℝ+, f ∈ BG([t0, ∞), X),

where ℝ+ = [0, ∞). Alternatively, we can consider the subspace G0([t0, ∞), X) of G([t0, ∞), X) formed by all functions f ∶ [t0, ∞) → X such that sup_{t∈[t0,∞)} e^{−(t−t0)} ∥f(t)∥ < ∞.

The next result shows that G0([t0, ∞), X) is a Banach space with respect to a special norm. This result, whose proof follows ideas similar to those of [124, 220], will be largely used in Chapters 5 and 8.

Proposition 1.9: The space G0([t0, ∞), X), equipped with the norm

∥f∥[t0,∞) = sup_{t∈[t0,∞)} e^{−(t−t0)} ∥f(t)∥, f ∈ G0([t0, ∞), X),


is a Banach space.

Proof. Let T ∶ G0([t0, ∞), X) → BG([t0, ∞), X) be the linear mapping defined by (Ty)(t) = e^{−(t−t0)} y(t), for all y ∈ G0([t0, ∞), X) and t ∈ [t0, ∞).

Claim. T is an isometric isomorphism. Indeed, T is an isometry because

∥Ty∥_{BG([t0,∞),X)} = sup_{t∈[t0,∞)} ∥(Ty)(t)∥ = sup_{t∈[t0,∞)} ∥y(t)∥ e^{−(t−t0)} = ∥y∥_{G0([t0,∞),X)},

for all y ∈ G0([t0, ∞), X). Moreover, if y ∈ BG([t0, ∞), X), then u ∶ [t0, ∞) → X defined by u(t) = e^{t−t0} y(t), for all t ∈ [t0, ∞), is such that Tu = y and u ∈ G0([t0, ∞), X), since

sup_{s∈[t0,∞)} ∥u(s)∥ e^{−(s−t0)} = sup_{s∈[t0,∞)} ∥y(s)∥ e^{s−t0} e^{−(s−t0)} = sup_{s∈[t0,∞)} ∥y(s)∥ < ∞.

Therefore, T is onto and the Claim is proved.

Once T is an isometric isomorphism and BG([t0, ∞), X) is a Banach space, we conclude that G0([t0, ∞), X) is also a Banach space. ◽
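To see why the weight e^{−(t−t0)} matters, here is a numerical sketch with our own choice of function (not from the text): f(t) = e^{t−t0} is unbounded, yet it lies in G0([t0, ∞), X) with weighted norm 1, and T maps it to the bounded constant function 1.

```python
import math

# Our own numerical sketch around Proposition 1.9: f(t) = e^{t - t0} is
# unbounded on [t0, oo), but its weighted norm sup e^{-(t - t0)}|f(t)|
# equals 1, and (Tf)(t) = e^{-(t - t0)} f(t) is the constant 1.

t0 = 0.0
f = lambda t: math.exp(t - t0)
Tf = lambda t: math.exp(-(t - t0)) * f(t)

grid = [t0 + k * 0.01 for k in range(10001)]   # samples of [t0, t0 + 100]
weighted_norm_f = max(math.exp(-(t - t0)) * abs(f(t)) for t in grid)
sup_norm_Tf = max(abs(Tf(t)) for t in grid)
unweighted_sup = max(abs(f(t)) for t in grid)  # grows with the interval
```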

1.1.2 Equiregulated Sets

In this subsection, our goal is to investigate important properties of equiregulated sets. In addition to [97], the reference [40] also deals with a characterization of sets of equiregulated functions.

Definition 1.10: A set 𝒜 ⊂ G([a, b], X) is called equiregulated, if it has the following property: for every 𝜖 > 0 and every 𝜎 ∈ [a, b], there exists 𝛿 > 0 such that
(i) if f ∈ 𝒜, t′ ∈ [a, b] and 𝜎 − 𝛿 < t′ < 𝜎, then ∥f(𝜎−) − f(t′)∥ < 𝜖;
(ii) if f ∈ 𝒜, t′′ ∈ [a, b] and 𝜎 < t′′ < 𝜎 + 𝛿, then ∥f(t′′) − f(𝜎+)∥ < 𝜖.

The next result, which can be found in [97, Proposition 3.2], gives a characterization of equiregulated sets of functions taking values in a Banach space.

Theorem 1.11: A set 𝒜 ⊂ G([a, b], X) is equiregulated if and only if, for every 𝜖 > 0, there is a division d ∶ a = t0 < t1 < · · · < t|d| = b of [a, b] such that

∥f(t′) − f(t)∥ ⩽ 𝜖,   (1.1)

for every f ∈ 𝒜 and [t, t′] ⊂ (tj−1, tj), for j = 1, 2, …, |d|.


Proof. (⇒) Let 𝜖 > 0 be given and let B be the set of all 𝛾 ∈ (a, b] such that there is a division d′ ∈ D[a,𝛾], that is, d′ ∶ a = t0 < t1 < · · · < t|d′| = 𝛾, for which (1.1) holds with |d′| instead of |d|. Since 𝒜 is equiregulated, there is 𝛿1 ∈ (0, b − a] such that ∥f(t) − f(a+)∥ ⩽ 𝜖∕2, for every f ∈ 𝒜 and t ∈ (a, a + 𝛿1). Let a1 = a + 𝛿1 and a = t0 < t1 = a1. Thus, for [t, t′] ⊂ (a, a1) and f ∈ 𝒜, we have ∥f(t) − f(t′)∥ ⩽ ∥f(t) − f(a+)∥ + ∥f(t′) − f(a+)∥ ⩽ 𝜖. Hence, a1 ∈ B.

Let c̃ be the supremum of the set B. Since each f ∈ 𝒜 is regulated and 𝒜 is equiregulated, there exists 𝛿 > 0 such that ∥f(c̃−) − f(t)∥ ⩽ 𝜖∕2, for every f ∈ 𝒜 and t ∈ (c̃ − 𝛿, c̃) ∩ [a, b]. Take c ∈ B ∩ (c̃ − 𝛿, c̃) and a division d′′ ∈ D[a,c], say, d′′ ∶ a = t0 < t1 < · · · < t|d′′| = c, such that (1.1) holds with |d′′| instead of |d|. Denote t|d′′|+1 = c̃. Then, for [t, t′] ⊂ (t|d′′|, t|d′′|+1) and f ∈ 𝒜, we have ∥f(t) − f(t′)∥ ⩽ ∥f(t) − f(c̃−)∥ + ∥f(t′) − f(c̃−)∥ ⩽ 𝜖, which implies c̃ ∈ B.

Thus, we have two possibilities: either c̃ = b or c̃ < b. In the first case, the proof of this implication is finished. In the second case, one can use an argument similar to the one used before in order to find e ∈ (c̃, b] such that e ∈ B, which contradicts the fact that c̃ = sup B. Thus, c̃ = b.

(⇐) Now, we prove the converse implication. Given 𝜖 > 0, there exists a division d′ ∈ D[a,b], say, d′ ∶ a = t0 < t1 < · · · < t|d′| = b, such that inequality (1.1) is fulfilled, for every f ∈ 𝒜 and every [t, t′] ⊂ (tj−1, tj), with j = 1, 2, …, |d′|. Then, for every j = 1, 2, …, |d′|, take 𝜏j ∈ (tj−1, tj) and 𝛿 > 0 such that (𝜏j − 𝛿, 𝜏j + 𝛿) ⊂ (tj−1, tj). Thus, (1.1) is satisfied for all t, t′ ∈ (𝜏j − 𝛿, 𝜏j + 𝛿). In particular, if either t = 𝜏j and t′ ∈ (𝜏j − 𝛿, 𝜏j], or t = 𝜏j and t′ ∈ [𝜏j, 𝜏j + 𝛿), then inequality (1.1) holds. Thus, 𝒜 is equiregulated.
◽

The next result describes an interesting property of equiregulated sets of G([a, b], X). Such a result can be found in [97, Proposition 3.8].

Theorem 1.12: Assume that a set 𝒜 ⊂ G([a, b], X) is equiregulated and, for each t ∈ [a, b], there is a number 𝛾t such that, for every f ∈ 𝒜,

∥f(t) − f(t−)∥ ⩽ 𝛾t, t ∈ (a, b], and ∥f(t+) − f(t)∥ ⩽ 𝛾t, t ∈ [a, b).

Then, there is a constant K > 0 such that, for every f ∈ 𝒜, ∥f(t) − f(a)∥ ⩽ K, for t ∈ [a, b].

Proof. Take B as the set of all numbers 𝜏 ∈ (a, b] fulfilling the condition that there exists a positive number K𝜏 for which we have ∥f(t) − f(a)∥ ⩽ K𝜏, for every f ∈ 𝒜


and every t ∈ [a, 𝜏]. Since 𝒜 is an equiregulated set, there exists 𝛿 > 0 such that ∥f(t) − f(a+)∥ ⩽ 1, for every f ∈ 𝒜 and every t ∈ (a, a + 𝛿]. From this fact and the hypotheses, we can infer that, for every f ∈ 𝒜 and every t ∈ (a, a + 𝛿], we have ∥f(t) − f(a)∥ ⩽ ∥f(t) − f(a+)∥ + ∥f(a+) − f(a)∥ ⩽ 1 + 𝛾a = K(a+𝛿). Then, (a, a + 𝛿] ⊂ B.

Let 𝜏0 = sup B. The equiregulatedness of 𝒜 implies that there exists 𝛿′ > 0 such that ∥f(t) − f(𝜏0−)∥ ⩽ 1, for every f ∈ 𝒜 and t ∈ [𝜏0 − 𝛿′, 𝜏0). Take 𝜏 ∈ B ∩ [𝜏0 − 𝛿′, 𝜏0). Thus, for every f ∈ 𝒜,

∥f(𝜏0−) − f(a)∥ ⩽ ∥f(𝜏0−) − f(𝜏)∥ + ∥f(𝜏) − f(a)∥ ⩽ 1 + K𝜏,

which, together with the hypotheses, yields

∥f(𝜏0) − f(a)∥ ⩽ ∥f(𝜏0) − f(𝜏0−)∥ + ∥f(𝜏0−) − f(a)∥ ⩽ 𝛾𝜏0 + 1 + K𝜏.

Hence, 𝜏0 ∈ B. Suppose 𝜏0 < b. Since 𝒜 is equiregulated, there exists 𝛿′′ > 0 such that, for every f ∈ 𝒜, ∥f(t) − f(𝜏0+)∥ ⩽ 1, for all t ∈ (𝜏0, 𝜏0 + 𝛿′′]. Therefore, for every f ∈ 𝒜 and t ∈ (𝜏0, 𝜏0 + 𝛿′′], we have

∥f(t) − f(a)∥ ⩽ ∥f(t) − f(𝜏0+)∥ + ∥f(𝜏0+) − f(𝜏0)∥ + ∥f(𝜏0) − f(a)∥ ⩽ 1 + 𝛾𝜏0 + K𝜏0 = K(𝜏0+𝛿′′),

where K𝜏0 = 𝛾𝜏0 + 1 + K𝜏. Note that 𝜏0 + 𝛿′′ ∈ B, which contradicts the fact that 𝜏0 = sup B. Hence, 𝜏0 = b and the statement follows. ◽

1.1.3 Uniform Convergence

This subsection brings a few results borrowed from [177]. In particular, Lemma 1.13 describes an interesting and useful property of equiregulated convergent sequences of Banach space-valued functions, and it is used later in the proof of a version of the Arzelà–Ascoli theorem for Banach space-valued regulated functions.

Lemma 1.13: Let {fk}k∈ℕ be a sequence of functions from [a, b] to X. If the sequence {fk}k∈ℕ converges pointwisely to f0 and is equiregulated, then it converges uniformly to f0.

Proof. By hypothesis, the sequence of functions {fk}k∈ℕ is equiregulated. Then, Theorem 1.11 yields that, for every 𝜖 > 0, there is a division d = (ti) ∈ D[a,b] for which ∥fk(t) − fk(s)∥ < 𝜖∕3, for every k ∈ ℕ and ti−1 < s < t < ti, i = 1, 2, …, |d|.


Take 𝜏i ∈ (ti−1, ti). Because the sequence {fk}k∈ℕ converges pointwisely to f0, we have fk(ti) → f0(ti) and also fk(𝜏i) → f0(𝜏i), for i = 1, 2, …, |d|. Thus, for every 𝜖 > 0, there is k0 ∈ ℕ such that, whenever k > k0, we have ∥fk(ti) − f0(ti)∥ < 𝜖∕3 and ∥fk(𝜏i) − f0(𝜏i)∥ < 𝜖∕3, for i = 1, 2, …, |d|. Take an arbitrary t ∈ [a, b]. Then, either t = ti, for some i, or t ∈ (ti−1, ti), for some i. In the former case, ∥fk(t) − f0(t)∥ < 𝜖∕3. The other case yields

∥fk(t) − f0(t)∥ ⩽ ∥fk(t) − fk(𝜏i)∥ + ∥fk(𝜏i) − f0(𝜏i)∥ + ∥f0(𝜏i) − f0(t)∥ < 𝜖.

Then, ∥fk − f0∥∞ < 𝜖 and, therefore, fk → f0 uniformly on [a, b]. ◽
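A numerical counterpoint to Lemma 1.13, with an example of our own (not from the text): fₖ(t) = tᵏ on [0, 1] converges pointwisely to a discontinuous limit, and the sup-norm distance to the limit does not vanish; by the lemma, the sequence therefore cannot be equiregulated.

```python
# Our own counterexample around Lemma 1.13: f_k(t) = t^k converges
# pointwisely on [0, 1] to f_0, with f_0(t) = 0 for t < 1 and f_0(1) = 1,
# but not uniformly: near t = 1 the distance stays close to 1.  Hence,
# by Lemma 1.13, {f_k} is not equiregulated.

def f_0(t):
    return 1.0 if t == 1.0 else 0.0

dists = []
for k in (1, 10, 100, 1000):
    t_near = 1.0 - 1.0 / (100 * k)      # point close to the jump at t = 1
    dists.append(abs(t_near ** k - f_0(t_near)))

pointwise_at_half = 0.5 ** 1000          # pointwise convergence inside [0, 1)
```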

Lemma 1.14: Let {fk}k∈ℕ be a sequence in G([a, b], X). The following assertions hold:
(i) if the sequence of functions fk converges uniformly to f0 on [a, b] as k → ∞, then fk(t−) → f0(t−), for t ∈ (a, b], and fk(t+) → f0(t+), for t ∈ [a, b);
(ii) if the sequence of functions fk converges pointwisely to f0 on [a, b] as k → ∞ and fk(t−) → f0(t−), for t ∈ (a, b], and fk(t+) → f0(t+), for t ∈ [a, b), where f0 ∈ G([a, b], X), then the sequence fk converges uniformly to f0 as k → ∞.

Proof. We start by proving (i). By hypothesis, the sequence {fk}k∈ℕ converges uniformly to f0. Then, the Moore–Osgood theorem (see, e.g., [19]) implies

lim_{k→∞} lim_{s→t−} fk(s) = lim_{s→t−} lim_{k→∞} fk(s), t ∈ (a, b].

Therefore, fk(t−) → f0(t−), for t ∈ (a, b]. In a similar way, one can show that fk(t+) → f0(t+), for every t ∈ [a, b).

Now, we prove (ii). It suffices to show that {fk ∶ [a, b] → X ∶ k ∈ ℕ} is an equiregulated set since, then, by Lemma 1.13, fk converges uniformly to f0. Since the function f0 is regulated, its lateral limits exist. Then, for every t0 ∈ [a, b] and every 𝜖 > 0, there is 𝛿 > 0 such that

∥f0(t0−) − f0(t)∥ < 𝜖, for t0 − 𝛿 ⩽ t < t0, and ∥f0(t) − f0(t0+)∥ < 𝜖, for t0 < t ⩽ t0 + 𝛿,

with t ∈ [a, b]. But the hypotheses say that we can find n0 ∈ ℕ such that, for n ⩾ n0, we have ∥fn(t0 − 𝛿) − f0(t0 − 𝛿)∥ < 𝜖, ∥fn(t0) − f0(t0)∥ < 𝜖, ∥fn(t0+) − f0(t0+)∥ < 𝜖, ∥fn(t0 + 𝛿) − f0(t0 + 𝛿)∥ < 𝜖 and ∥fn(t0−) − f0(t0−)∥ < 𝜖. When t ∈ [a, b] satisfies t0 − 𝛿 ⩽ t < t0, we have, for every n ⩾ n0,

∥fn(t0−) − fn(t)∥ ⩽ ∥fn(t0−) − f0(t0−)∥ + ∥f0(t0−) − f0(t)∥ + ∥f0(t) − fn(t)∥ < 3𝜖.

On the other hand, when t ∈ [a, b] satisfies t0 < t ⩽ t0 + 𝛿, we get, for n ⩾ n0,

∥fn(t) − fn(t0+)∥ ⩽ ∥fn(t) − f0(t)∥ + ∥f0(t) − f0(t0+)∥ + ∥fn(t0+) − f0(t0+)∥ < 3𝜖.

But this yields the fact that {fn}n∈ℕ is an equiregulated sequence, and the proof is complete. ◽

The next lemma guarantees that, if a sequence of functions {fk}k∈ℕ is bounded by an equiregulated sequence of functions, then {fk}k∈ℕ is also equiregulated.

Lemma 1.15: Let {fk}k∈ℕ be a sequence of functions in G([a, b], X). Suppose, for each k ∈ ℕ, the function fk satisfies


On the other hand, when t ∈ [a, b] satisfies t0 < t ⩽ t0 + 𝛿, we get, for n ⩾ n0 , ∥ fn (t) − fn (t0+ ) ∥⩽∥ fn (t) − f0 (t) ∥ + ∥ f0 (t) − f0 (t0+ ) ∥ + ∥ fn (t0+ ) − f0 (t0+ ) ∥< 3𝜖. But this yields the fact that {fn }n∈ℕ is an equiregulated sequence, and the proof is complete. ◽ The next lemma guarantees that, if a sequence of functions {fk }k∈ℕ is bounded by an equiregulated sequence of functions, then {fk }k∈ℕ is also equiregulated. Lemma 1.15: Let {fk }k∈ℕ be a sequence of functions in G([a, b], X). Suppose, for each k ∈ ℕ, the function fk satisfies ∥ fk (s2 ) − fk (s1 ) ∥⩽ |hk (s2 ) − hk (s1 )|,

(1.2)

for every s1 , s2 ∈ [a, b], where hk ∶ [a, b] → ℝ for each k ∈ ℕ and the sequence {hk }k∈ℕ is equiregulated. Then, the sequence {fk }k∈ℕ is equiregulated. Proof. Let 𝜖 > 0 be given. Since the sequence {hk }k∈ℕ is equiregulated, it follows from Theorem 1.11 that there is a division d = (ti ) ∈ D[a,b] such that |hk (t′ ) − hk (t)| ⩽ 𝜖, for every k ∈ ℕ and [t, t′ ] ⊂ (tj−1 , tj ), for j = 1, 2, … , |d|. Thus, by (1.2), ∥ fk (t′ ) − fk (t) ∥⩽ 𝜖, for every k ∈ ℕ and every interval [t, t′ ] ⊂ (tj−1 , tj ), with j = 1, 2, … , |d|. Finally, Theorem 1.11 ensures the fact that the sequence {fk }k∈ℕ is equiregulated. ◽ A clear outcome of Lemmas 1.13 and 1.15 follows below: Corollary 1.16: Let {fk }k∈ℕ be a sequence of functions from [a, b] to X and suppose the function fk satisfies condition (1.2) for every k ∈ ℕ and s1 , s2 ∈ [a, b], where hk ∶ [a, b] → ℝ and the sequence {hk }k∈ℕ is equiregulated. If the sequence {fk }k∈ℕ converges pointwisely to a function f0 , then it also converges uniformly to f0 .

1.1.4 Relatively Compact Sets

In this subsection, we investigate an extension of the Arzelà–Ascoli theorem for regulated functions taking values in a general Banach space X with norm ∥⋅∥. Unlike the finite dimensional case, when we consider functions taking values in X, the relative compactness of a set 𝒜 ⊂ G([a, b], X) does not follow from the equiregulatedness of the set 𝒜 and the boundedness of the sets {f(t) ∶ f ∈ 𝒜} ⊂ X, for t ∈ [a, b]. In the following lines, we present an example, borrowed from [177], which illustrates this fact.


Example 1.17: Let Z ⊂ X be bounded and suppose Z is not relatively compact in X. Then, there are 𝜖 > 0, a constant K > 0, and a sequence {zn}n∈ℕ in Z for which

∥zn∥ ⩽ K and ∥zn − zm∥ ⩾ 𝜖,

for all n ≠ m. Hence, the set B = {yn ∶ [0, 1] → X ∶ yn(t) = t zn, n ∈ ℕ} is bounded, once {zn}n∈ℕ is bounded. Moreover, B is equiregulated and {yn(0)}n∈ℕ is relatively compact in X. On the other hand, B is not relatively compact in G([0, 1], X).

At this point, it is important to say that, in order to guarantee that a set 𝒜 ⊂ G([a, b], X) is relatively compact, one needs an additional condition. It is clear that, if one assumes, in addition, that, for each t ∈ [a, b], the set {f(t) ∶ f ∈ 𝒜} is relatively compact in X, then 𝒜 becomes relatively compact in G([a, b], X). This is precisely what the next result says, and we refer to it as the Arzelà–Ascoli theorem for Banach space-valued regulated functions. Such an important result can be found in [97] and [177] as well.

Theorem 1.18: Suppose 𝒜 ⊂ G([a, b], X) is equiregulated and, for every t ∈ [a, b], {f(t) ∶ f ∈ 𝒜} is relatively compact in X. Then, 𝒜 is relatively compact in G([a, b], X).

Proof. Take a sequence of functions {fn}n∈ℕ ⊂ 𝒜. The set 𝒜 is equiregulated by hypothesis. Then, for every 𝜖 > 0, there exists a division d = (ti) ∈ D[a,b] fulfilling ∥fn(t′) − fn(t)∥ < 𝜖∕4, for every n ∈ ℕ and [t, t′] ⊂ (ti−1, ti), i = 1, 2, …, |d|. Take 𝜏i ∈ (ti−1, ti) and, using the relative compactness of the sets {fn(t) ∶ n ∈ ℕ}, a subsequence {fnk}k∈ℕ, with nk+1 > nk, for which {fnk(ti)}k∈ℕ and {fnk(𝜏i)}k∈ℕ are also relatively compact sets of X, for every i. This last statement implies that there exist {yi ∶ i = 0, 1, 2, …, |d|} ⊂ X and {zi ∶ i = 1, 2, …, |d|} ⊂ X satisfying yi = lim_{k→∞} fnk(ti) and zi = lim_{k→∞} fnk(𝜏i). Thus, there exists N ∈ ℕ such that

∥fnk(ti) − yi∥ < 𝜖∕4 and ∥fnk(𝜏i) − zi∥ < 𝜖∕4,


provided k > N. In particular, for q > k, we have ∥fnq(ti) − yi∥ < 𝜖∕4 and ∥fnq(𝜏i) − zi∥ < 𝜖∕4, for i = 1, 2, …, |d|. Take t ∈ [a, b] and consider q ∈ ℕ such that q > k. Then, either t = ti, for some i ∈ {1, 2, …, |d|}, in which case we have

∥fnk(t) − fnq(t)∥ ⩽ ∥fnk(ti) − yi∥ + ∥fnq(ti) − yi∥ < 𝜖∕2,

or t ∈ (ti−1, ti), for some i ∈ {1, 2, …, |d|}, in which case we have

∥fnk(t) − fnq(t)∥ ⩽ ∥fnk(t) − fnk(𝜏i)∥ + ∥fnq(t) − fnq(𝜏i)∥ + ∥fnk(𝜏i) − zi∥ + ∥fnq(𝜏i) − zi∥ < 𝜖∕4 + 𝜖∕4 + 𝜖∕4 + 𝜖∕4 = 𝜖.

Hence, for every t ∈ [a, b], {fnk(t)}k∈ℕ ⊂ X satisfies the Cauchy condition. Due to the fact that X is a complete space and {fnk(t)}k∈ℕ is a Cauchy sequence, the limit lim_{k→∞} fnk(t) exists. We conclude by setting f0(t) = lim_{k→∞} fnk(t). Then, fnk → f0 uniformly on [a, b], by Lemma 1.13. Hence, f0 is the uniform limit of the subsequence {fnk}k∈ℕ in G([a, b], X). Finally, any sequence {fn}n∈ℕ ⊂ 𝒜 admits a convergent subsequence which, in turn, implies that 𝒜 is a relatively compact set, and the proof is finished. ◽

We end this subsection by mentioning an Arzelà–Ascoli-type theorem for regulated functions taking values in ℝⁿ. A slightly different version of it can be found in [96].

Corollary 1.19: The following conditions are equivalent:
(i) a set 𝒜 ⊂ G([a, b], ℝⁿ) is relatively compact;
(ii) the set {f(a) ∶ f ∈ 𝒜} is bounded, and there are an increasing continuous function 𝜂 ∶ [0, ∞) → [0, ∞), with 𝜂(0) = 0, and a nondecreasing function 𝑣 ∶ [a, b] → ℝ such that, for every f ∈ 𝒜, |f(t2) − f(t1)| ⩽ 𝜂(𝑣(t2) − 𝑣(t1)), for a ⩽ t1 ⩽ t2 ⩽ b;
(iii) 𝒜 is equiregulated and, for every t ∈ [a, b], the set {f(t) ∶ f ∈ 𝒜} is bounded.

We point out that, in [96, Theorem 2.17], item (ii), it is required that 𝑣 be an increasing function. However, it is not difficult to see that, if 𝑣 is a nondecreasing function, then 𝑣(t) + t defines an increasing function. Therefore, Corollary 1.19 follows as an immediate consequence of [96, Theorem 2.17].


1.2 Functions of Bounded ℬ-Variation

The concept of a function of bounded ℬ-variation generalizes the concepts of a function of bounded semivariation and of a function of bounded variation, as we will see in the sequel.

Definition 1.20: A bilinear triple (we write BT) is a set of three vector spaces E, F, and G, where F and G are normed spaces, together with a bilinear mapping ℬ ∶ E × F → G. For x ∈ E and y ∈ F, we write xy = ℬ(x, y), and we denote the BT by (E, F, G)ℬ or simply by (E, F, G). A topological BT is a BT (E, F, G), where E is also a normed space and ℬ is continuous.

If E and F are normed spaces, then we denote by L(E, F) the space of all linear continuous functions from E to F. We write E′ = L(E, ℝ) and L(E) = L(E, E), where ℝ denotes the real line. Next, we present examples, borrowed from [127], of bilinear triples.

Example 1.21: Let X, Y, and Z denote Banach spaces. The following are BT:
(i) E = L(X, Y), F = L(Z, X), G = L(Z, Y), and ℬ(𝑣, u) = 𝑣 ∘ u;
(ii) E = L(X, Y), F = X, G = Y, and ℬ(u, x) = u(x);
(iii) E = Y, F = Y′, G = ℝ, and ℬ(y, y′) = ⟨y, y′⟩;
(iv) E = G = Y, F = ℝ, and ℬ(y, 𝜆) = 𝜆y.

Given a BT (E, F, G)ℬ, we define, for every x ∈ E, a norm ∥x∥ℬ = sup {∥ℬ(x, y)∥ ∶ ∥y∥ ⩽ 1}, and we set Eℬ = {x ∈ E ∶ ∥x∥ℬ < ∞}. Whenever the space Eℬ is endowed with the norm ∥⋅∥ℬ, we say that the topological BT (Eℬ, F, G) is associated with the BT (E, F, G).

Let E be a vector space and ΓE be a set of seminorms defined on E such that p1, …, pm ∈ ΓE implies sup[p1, …, pm] ∈ ΓE. Then, ΓE defines a topology on E, and the sets Vp,𝜖 = {x ∈ E ∶ p(x) < 𝜖}, for p ∈ ΓE and 𝜖 > 0, form a basis of neighborhoods of 0. The sets x0 + Vp,𝜖 form a basis of neighborhoods of x0 ∈ E. Moreover, when endowed with this topology, E is called a locally convex space (see [127, pp. 3–4]).

Example 1.22: Every normed or seminormed space E is a locally convex space. For other examples of locally convex spaces, we refer to [110].


Definition 1.23: Given a BT (E, F, G)ℬ and a function 𝛼 ∶ [a, b] → E, for every division d = (ti) ∈ D[a,b], we define

SBd(𝛼) = SB[a,b],d(𝛼) = sup {∥∑_{i=1}^{|d|} [𝛼(ti) − 𝛼(ti−1)] yi∥ ∶ yi ∈ F, ∥yi∥ ⩽ 1} and
SB(𝛼) = SB[a,b](𝛼) = sup {SBd(𝛼) ∶ d ∈ D[a,b]}.

Then, SB(𝛼) is the ℬ-variation of 𝛼 on [a, b]. We say that 𝛼 is a function of bounded ℬ-variation, whenever SB(𝛼) < ∞. In this case, we write 𝛼 ∈ SB([a, b], E).

The following properties are not difficult to prove. See, e.g., [127, 4.1 and 4.2].
(SB1) SB([a, b], E) is a vector space and the mapping 𝛼 ∈ SB([a, b], E) → SB(𝛼) ∈ ℝ+ is a seminorm.
(SB2) Given 𝛼 ∈ SB([a, b], E), the function t ∈ [a, b] → SB[a,t](𝛼) ∈ ℝ+ is increasing.
(SB3) Given 𝛼 ∈ SB([a, b], E) and c ∈ (a, b), SB[a,b](𝛼) ⩽ SB[a,c](𝛼) + SB[c,b](𝛼).

Definition 1.24: Consider the BT (L(X, Y), X, Y). Then, instead of SB(𝛼) and SB([a, b], L(X, Y)), we write simply SV(𝛼) and SV([a, b], L(X, Y)), respectively. Hence, SV([a, b], L(X, Y)) = SB([a, b], L(X, Y)), and we call any element of SV([a, b], L(X, Y)) a function of bounded semivariation.

Definition 1.25: Given a function 𝛼 ∶ [a, b] → E, E a normed space, and a division d = (ti) ∈ D[a,b], we define

vard(𝛼) = ∑_{i=1}^{|d|} ∥𝛼(ti) − 𝛼(ti−1)∥,

and the variation of 𝛼 is given by

var(𝛼) = var_a^b(𝛼) = sup {vard(𝛼) ∶ d ∈ D[a,b]}.

If var(𝛼) < ∞, then 𝛼 is called a function of bounded variation, in which case we write 𝛼 ∈ BV([a, b], E).

It is not difficult to prove that BV([a, b], L(E, F)) ⊂ SV([a, b], L(E, F)) and SV([a, b], E′) = BV([a, b], E′). Moreover (see [127, Corollary I.3.4]), BV([a, b], X) ⊂ G([a, b], X). The space BV([a, b], X) is complete when equipped with the variation norm, ∥⋅∥BV, given by

∥f∥BV = ∥f(a)∥ + var_a^b(f),


for f ∈ BV([a, b], X). When there is no room for misunderstanding, we may use the notation ∥⋅∥ instead of ∥⋅∥BV.

Remark 1.26: Consider a BT (E, F, G). The definition of the variation of a function 𝛼 ∶ [a, b] → E, where E is a normed space, can also be considered as a particular case of the ℬ-variation of 𝛼 in two different ways.

(i) Let E = F′, G = ℝ or G = ℂ, and ℬ(x′, x) = ⟨x, x′⟩. By the definition of the norm in E = F′, we have

vard(𝛼) = ∑_{i=1}^{|d|} ‖𝛼(ti) − 𝛼(ti−1)‖ = sup { | ∑_{i=1}^{|d|} ⟨xi, 𝛼(ti) − 𝛼(ti−1)⟩ | ∶ xi ∈ F, ∥xi∥ ⩽ 1 } = SBd(𝛼).

Thus, when we consider the BT (Y′, Y, ℝ), we write BV(𝛼) and BV([a, b], Y′) instead of SB(𝛼) and SB([a, b], Y′), respectively.

(ii) Let F = E′, G = ℝ or G = ℂ, and ℬ(x, x′) = ⟨x, x′⟩. By the Hahn–Banach Theorem, we have

‖𝛼(ti) − 𝛼(ti−1)‖ = sup { |⟨𝛼(ti) − 𝛼(ti−1), x′i⟩| ∶ x′i ∈ E′, ∥x′i∥ ⩽ 1 }

and, hence,

vard(𝛼) = ∑_{i=1}^{|d|} ‖𝛼(ti) − 𝛼(ti−1)‖ = sup { | ∑_{i=1}^{|d|} ⟨𝛼(ti) − 𝛼(ti−1), x′i⟩ | ∶ x′i ∈ E′, ∥x′i∥ ⩽ 1 } = SBd(𝛼).
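The variation just defined is a supremum over divisions, and refining a division never decreases the corresponding sum, so for scalar functions var(f) can be approximated by evaluating var_d over fine divisions. A minimal numerical sketch (the function names are illustrative, not from the text):

```python
import math

def variation_over_division(f, pts):
    # var_d(f) = sum of |f(t_i) - f(t_{i-1})| over the division a = t_0 < ... < t_n = b
    return sum(abs(f(pts[i]) - f(pts[i - 1])) for i in range(1, len(pts)))

# Refining a division never decreases the sum, so fine divisions approximate var(f).
pts = [2 * math.pi * k / 1000 for k in range(1001)]
approx_var = variation_over_division(math.sin, pts)  # var of sin on [0, 2*pi] is 4
```

For a monotone function the sum telescopes to |f(b) − f(a)| on every division, which recovers the familiar fact that var(f) = |f(b) − f(a)| in that case.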

Definition 1.27: Given c ∈ [a, b], we define the spaces

BVc([a, b], X) = {f ∈ BV([a, b], X) ∶ f(c) = 0}

and

SVc([a, b], L(X, Y)) = {𝛼 ∈ SV([a, b], L(X, Y)) ∶ 𝛼(c) = 0}.

Such spaces are complete when endowed, respectively, with the norm given by the variation var_a^b(f) and the norm given by the semivariation

SV(𝛼) = sup_{d ∈ D[a,b]} SVd(𝛼), where SVd(𝛼) = sup_{∥xi∥⩽1} ‖ ∑_{i=1}^{|d|} [𝛼(ti) − 𝛼(ti−1)] xi ‖

and d ∶ a = t0 < t1 < · · · < t|d| = b is a division of [a, b].


The following properties are not difficult to prove:

(V1) Every 𝛼 ∈ BV([a, b], E) is bounded and ∥𝛼(t)∥ ⩽ ∥𝛼(a)∥ + var_a^t(𝛼), t ∈ [a, b].
(V2) Given 𝛼 ∈ BV([a, b], E) and c ∈ (a, b), we have var_a^b(𝛼) = var_a^c(𝛼) + var_c^b(𝛼).

Remark 1.28: Note that property (V1) above implies ∥𝛼∥∞ ⩽ ∥𝛼∥BV for all 𝛼 ∈ BV([a, b], X).

For more details about the spaces in Definition 1.27, the reader may want to consult [127]. The next results are borrowed from [126]. We include the proofs here, since this reference is not easily available. Lemmas 1.29 and 1.30 below are, respectively, Theorems I.2.7 and I.2.8 from [126].

Lemma 1.29: Let 𝛼 ∈ BV([a, b], X). Then,

(i) for all t ∈ (a, b], there exists 𝛼(t−) = lim_{𝜖→0+} 𝛼(t − 𝜖);
(ii) for all t ∈ [a, b), there exists 𝛼(t+) = lim_{𝜖→0+} 𝛼(t + 𝜖).

Proof. We only prove item (i), because item (ii) follows analogously. Consider an increasing sequence {tn}n∈ℕ in [a, t) converging to t. Then,

∑_{i=1}^{n} ∥𝛼(ti) − 𝛼(ti−1)∥ ⩽ var_a^t(𝛼),

for all n ∈ ℕ. Therefore, we have ∑_{i=1}^{∞} ∥𝛼(ti) − 𝛼(ti−1)∥ ⩽ var_a^t(𝛼) and, hence, ∑_{i=j}^{∞} ∥𝛼(ti) − 𝛼(ti−1)∥ → 0 as j → ∞. Thus, {𝛼(tn)}n∈ℕ is a Cauchy sequence, since for any given 𝜖 > 0, we have

∥𝛼(tm) − 𝛼(tn)∥ ⩽ ∑_{i=n+1}^{m} ∥𝛼(ti) − 𝛼(ti−1)∥ ⩽ 𝜖,

for sufficiently large m > n. Finally, note that the limit 𝛼(t−) of {𝛼(tn)}n∈ℕ is independent of the choice of {tn}n∈ℕ, and we finish the proof. ◽

It follows from Lemma 1.29 that all functions x ∶ [a, b] → X of bounded variation are also regulated functions (see, e.g. [127, Corollary 3.4]) which, in turn, are Darboux integrable [127, Theorem 3.6].

Lemma 1.30: Let 𝛼 ∈ BV([a, b], X) and, for every t ∈ [a, b], let 𝑣(t) = var_a^t(𝛼). Then,

(i) 𝑣(t+) − 𝑣(t) = ∥𝛼(t+) − 𝛼(t)∥, t ∈ [a, b);
(ii) 𝑣(t) − 𝑣(t−) = ∥𝛼(t) − 𝛼(t−)∥, t ∈ (a, b].


Proof. By property (SB2), 𝑣 is increasing and, hence, 𝑣(t+) and 𝑣(t−) exist. By Lemma 1.29, 𝛼(t+) and 𝛼(t−) also exist. We prove (i); the proof of (ii) follows analogously.

Suppose s > t. Then, property (V2) implies var_a^s(𝛼) = var_a^t(𝛼) + var_t^s(𝛼). Thus,

∥𝛼(s) − 𝛼(t)∥ ⩽ var_t^s(𝛼) = var_a^s(𝛼) − var_a^t(𝛼)

and, hence, ∥𝛼(t+) − 𝛼(t)∥ ⩽ 𝑣(t+) − 𝑣(t).

Conversely, given 𝜖 > 0, there exists 𝛿 > 0 such that, for all 0 < 𝜎 ⩽ 𝛿,

𝑣(t + 𝜎) − 𝑣(t+) ⩽ 𝜖 and ∥𝛼(t + 𝜎) − 𝛼(t+)∥ ⩽ 𝜖,

and there exists a division d ∶ a = t0 < t1 < · · · < tn = t < tn+1 = t + 𝜎 of [a, t + 𝜎] such that 𝑣(t + 𝜎) − 𝑣d(t + 𝜎) ⩽ 𝜖, where 𝑣d(t + 𝜎) = vard(𝛼). Writing 𝑣d(t) for the sum corresponding to the points of d lying in [a, t], we have 𝑣d(t) ⩽ 𝑣(t) and, hence,

𝑣(t + 𝜎) − 𝑣(t) ⩽ 𝑣d(t + 𝜎) + 𝜖 − 𝑣d(t) = ∥𝛼(t + 𝜎) − 𝛼(t)∥ + 𝜖 ⩽ ∥𝛼(t+) − 𝛼(t)∥ + 2𝜖.

Letting 𝜎 → 0+ and then 𝜖 → 0+, we obtain 𝑣(t+) − 𝑣(t) ⩽ ∥𝛼(t+) − 𝛼(t)∥, which completes the proof. ◽

Using the fact that ∥𝛼(t)∥ ⩽ ∥𝛼(a)∥ + var_a^t(𝛼), the following corollary follows immediately from Lemma 1.30.

Corollary 1.31: Let 𝛼 ∈ BV([a, b], X). Then, the sets

{t ∈ [a, b) ∶ ∥𝛼(t+) − 𝛼(t)∥ ⩾ 𝜖} and {t ∈ (a, b] ∶ ∥𝛼(t) − 𝛼(t−)∥ ⩾ 𝜖}

are finite for every 𝜖 > 0.

Thus, we have the next result, which can be found in [126, Proposition I.2.10].

Proposition 1.32: Let 𝛼 ∈ BV([a, b], X). Then, the set of points of discontinuity of 𝛼 is countable.

Let us define

BVa+([a, b], X) = {𝛼 ∈ BVa([a, b], X) ∶ 𝛼(t+) = 𝛼(t), t ∈ (a, b)}.

A proof that BVa+([a, b], X), equipped with the variation norm ∥⋅∥BV, is complete can be found in [126, Theorem I.2.11]. We reproduce it in the next theorem.

Theorem 1.33: BVa+([a, b], X), equipped with the variation norm, is a Banach space.

Proof. We know that BVa([a, b], X), with the variation norm, is a Banach space. Let {fn}n∈ℕ be a sequence in BVa+([a, b], X) converging to f ∈ BVa([a, b], X) in the


variation norm. Then, since fn(t) = fn(t+) for every n ∈ ℕ and every t ∈ [a, b), we obtain

∥f(t) − f(t+)∥ ⩽ ∥f(t) − fn(t)∥ + ∥fn(t+) − f(t+)∥ ⩽ 2 var_a^b(f − fn),

which tends to zero as n → ∞. Hence, f ∈ BVa+([a, b], X). ◽

We end this section with Helly's choice principle for Banach space-valued functions, due to C. S. Hönig. See [127, Theorem I.5.8].

Theorem 1.34 (Theorem of Helly): Let (E, F, G) be a BT and consider a sequence {𝛼n}n∈ℕ of elements of SB([a, b], E), with SB(𝛼n) ⩽ M for all n ∈ ℕ, such that there exists 𝛼 ∶ [a, b] → E with 𝛼n(t)y → 𝛼(t)y for all t ∈ [a, b] and all y ∈ F. Then, 𝛼 ∈ SB([a, b], E) and SB(𝛼) ⩽ M. Moreover, if 𝛼n ∈ BV([a, b], E), with var_a^b(𝛼n) ⩽ M for all n ∈ ℕ, then 𝛼 ∈ BV([a, b], E) and var_a^b(𝛼) ⩽ M.

Proof. Consider a division d ∶ a = t0 < t1 < · · · < t|d| = b and let yi ∈ F, with ∥yi∥ ⩽ 1, for i = 1, 2, … , |d|. Then, for all n ∈ ℕ, we have

‖ ∑_{i=1}^{|d|} [𝛼(ti) − 𝛼(ti−1)] yi ‖ ⩽ ‖ ∑_{i=1}^{|d|} [𝛼n(ti) − 𝛼n(ti−1)] yi ‖ + ‖ ∑_{i=1}^{|d|} { [𝛼n(ti) − 𝛼(ti)] yi − [𝛼n(ti−1) − 𝛼(ti−1)] yi } ‖,

where the first term on the right-hand side of the inequality is at most M, since SB(𝛼n) ⩽ M. Moreover, by hypothesis, given 𝜖 > 0, there is N > 0 such that, for all n ⩾ N, n ∈ ℕ, ‖[𝛼n(tj) − 𝛼(tj)] yi‖ ⩽ 𝜖∕(2|d|), for j = i, i − 1. Hence, for all n ⩾ N,

‖ ∑_{i=1}^{|d|} [𝛼(ti) − 𝛼(ti−1)] yi ‖ ⩽ M + 𝜖,

and we conclude the proof of the first part. The second part follows analogously. ◽

For more details about functions of bounded variation, the reader may want to consult [68], for instance.

1.3 Kurzweil and Henstock Vector Integrals

Throughout this section, we consider functions 𝛼 ∶ [a, b] → L(X, Y) and f ∶ [a, b] → X, where X and Y are Banach spaces.


1.3.1 Definitions

We start by recalling some definitions of vector integrals in the sense of J. Kurzweil and R. Henstock. First, we need some auxiliary concepts, namely tagged division, gauge, and 𝛿-fine tagged division.

Definition 1.35: Let [a, b] be a compact interval.

(i) Any set of point-interval pairs (𝜉i, [ti−1, ti]) such that d = (ti) ∈ D[a,b] and 𝜉i ∈ [ti−1, ti] for i = 1, 2, … , |d| is called a tagged division of [a, b]. In this case, we write d = (𝜉i, [ti−1, ti]) ∈ TD[a,b], where TD[a,b] denotes the set of all tagged divisions of [a, b].
(ii) Any subset of a tagged division of [a, b] is a tagged partial division of [a, b] and, in this case, we write d ∈ TPD[a,b].
(iii) A gauge on a set A ⊂ [a, b] is any function 𝛿 ∶ A → (0, ∞). Given a gauge 𝛿 on [a, b], we say that d = (𝜉i, [ti−1, ti]) ∈ TPD[a,b] is a 𝛿-fine tagged partial division whenever [ti−1, ti] ⊂ {t ∈ [a, b] ∶ |t − 𝜉i| < 𝛿(𝜉i)} for i = 1, 2, … , |d|, that is,

𝜉i ∈ [ti−1, ti] ⊂ (𝜉i − 𝛿(𝜉i), 𝜉i + 𝛿(𝜉i)), for i = 1, 2, … , |d|.

Before presenting the definition of any integral based on Riemannian sums over 𝛿-fine tagged divisions of an interval [a, b], we bring up an important result which guarantees the existence of a 𝛿-fine tagged division for a given gauge 𝛿. This result is known as the Cousin Lemma, and a proof of it can be found in [120, Theorem 4.1].

Lemma 1.36 (Cousin Lemma): Given a gauge 𝛿 on [a, b], there exists a 𝛿-fine tagged division of [a, b].

Definition 1.37: We say that 𝛼 is Kurzweil f-integrable (or Kurzweil integrable with respect to f), if there exists I ∈ Y such that for every 𝜖 > 0, there is a gauge 𝛿 on [a, b] such that for every 𝛿-fine d = (𝜉i, [ti−1, ti]) ∈ TD[a,b],

‖ ∑_{i=1}^{|d|} 𝛼(𝜉i) [f(ti) − f(ti−1)] − I ‖ < 𝜖.

In this case, we write I = ∫_a^b 𝛼(t) df(t) and 𝛼 ∈ Kf([a, b], L(X, Y)). Analogously, we define the Kurzweil integral of f ∶ [a, b] → X with respect to a function 𝛼 ∶ [a, b] → L(X, Y).
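To make Definitions 1.35 and 1.37 concrete, here is a small numerical sketch (scalar case, with illustrative names): it builds a 𝛿-fine tagged division by bisection, which is the idea behind the Cousin Lemma, and evaluates the corresponding Riemann-type sum. For an arbitrary gauge the bisection need not stop after finitely many steps, so this is only a heuristic for well-behaved gauges.

```python
def delta_fine_division(a, b, delta):
    """Tagged division of [a, b] that is delta-fine, built by bisection.

    Each pair (xi, (t0, t1)) satisfies xi in [t0, t1] subset (xi - delta(xi), xi + delta(xi)).
    Tags are taken at left endpoints; terminates for gauges bounded away from 0."""
    if b - a < delta(a):
        return [(a, (a, b))]
    m = (a + b) / 2.0
    return delta_fine_division(a, m, delta) + delta_fine_division(m, b, delta)

def kurzweil_sum(alpha, f, tagged):
    # Riemann-type sum: sum of alpha(xi) * (f(t1) - f(t0)) over the tagged division
    return sum(alpha(xi) * (f(t1) - f(t0)) for xi, (t0, t1) in tagged)

d = delta_fine_division(0.0, 1.0, lambda t: 0.1)
s = kurzweil_sum(lambda t: t, lambda t: t, d)  # approximates the integral of t dt over [0, 1]
```

A nonconstant gauge is what distinguishes the Kurzweil integral from the Riemann–Stieltjes integral below: it lets the division be forced arbitrarily fine near bad points while staying coarse elsewhere.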


Definition 1.38: We say that f is Kurzweil 𝛼-integrable (or Kurzweil integrable with respect to 𝛼), if there exists I ∈ Y such that given 𝜖 > 0, there is a gauge 𝛿 on [a, b] such that

‖ ∑_{i=1}^{|d|} [𝛼(ti) − 𝛼(ti−1)] f(𝜉i) − I ‖ < 𝜖,

whenever d = (𝜉i, [ti−1, ti]) ∈ TD[a,b] is 𝛿-fine. In this case, we write I = ∫_a^b d𝛼(t)f(t) and f ∈ K𝛼([a, b], X).

Suppose the Kurzweil vector integral ∫_a^b 𝛼(t) df(t) exists. Then, we define

∫_b^a 𝛼(t) df(t) = − ∫_a^b 𝛼(t) df(t).

An analogous convention holds for the Kurzweil vector integral ∫_a^b d𝛼(t)f(t).

If the gauge 𝛿 in the definition of 𝛼 ∈ Kf([a, b], L(X, Y)) is a constant function, then we obtain the Riemann–Stieltjes integral ∫_a^b 𝛼(t) df(t), and we write 𝛼 ∈ Rf([a, b], L(X, Y)). Similarly, when we consider only constant gauges 𝛿 in the definition of f ∈ K𝛼([a, b], X), we obtain the Riemann–Stieltjes integral ∫_a^b d𝛼(t)f(t), and we write f ∈ R𝛼([a, b], X). Hence, if we denote by Rf([a, b], L(X, Y)) the set of all functions 𝛼 ∶ [a, b] → L(X, Y) which are Riemann integrable with respect to f ∶ [a, b] → X, and by R𝛼([a, b], X) the set of all functions f ∶ [a, b] → X which are Riemann integrable with respect to 𝛼 ∶ [a, b] → L(X, Y), then we have

Rf([a, b], L(X, Y)) ⊂ Kf([a, b], L(X, Y)) and R𝛼([a, b], X) ⊂ K𝛼([a, b], X).

The next, very important, remark concerns the terminology we adopt from now on in this book for the Kurzweil vector integrals given by Definitions 1.37 and 1.38.

Remark 1.39: We refer to the vector integrals from Definitions 1.37 and 1.38, namely,

∫_a^b 𝛼(t) df(t) and ∫_a^b d𝛼(t)f(t),

where f ∶ [a, b] → X and 𝛼 ∶ [a, b] → L(X, Y), as Perron–Stieltjes integrals. For the particular case where 𝛼 ∶ [a, b] → L(X) is the identity in L(X) and f ∶ [a, b] → X, we refer to the integral

∫_a^b f(t) dt


simply as a Perron integral. Our choice of this terminology is due to the fact that, in Chapter 2, we deal with a more general definition of the Kurzweil integral which encompasses all integrals presented here. Moreover, since the same notations for the integrals

∫_a^b 𝛼(t) df(t), ∫_a^b d𝛼(t)f(t), and ∫_a^b f(t) dt

are used for Riemann–Stieltjes integrals, we will specify which integral we are dealing with whenever there is possibility of an ambiguous interpretation.

The vector integral of Henstock, which we define in the sequel, is more restrictive than the Kurzweil vector integral for integrands taking values in infinite dimensional Banach spaces. Again, consider functions f ∶ [a, b] → X and 𝛼 ∶ [a, b] → L(X, Y).

Definition 1.40: We say that 𝛼 is Henstock f-integrable (or Henstock variationally integrable with respect to f), if there exists a function Af ∶ [a, b] → Y (called the associate function of 𝛼) such that for every 𝜖 > 0, there is a gauge 𝛿 on [a, b] such that for every 𝛿-fine d = (𝜉i, [ti−1, ti]) ∈ TD[a,b],

∑_{i=1}^{|d|} ‖ 𝛼(𝜉i) [f(ti) − f(ti−1)] − [Af(ti) − Af(ti−1)] ‖ < 𝜖.

We write 𝛼 ∈ Hf([a, b], L(X, Y)) in this case. In an analogous way, we define the Henstock 𝛼-integrability of f ∶ [a, b] → X.

Definition 1.41: We say that f is Henstock 𝛼-integrable (or Henstock variationally integrable with respect to 𝛼), if there exists a function F𝛼 ∶ [a, b] → Y (called the associate function of f) such that for every 𝜖 > 0, there is a gauge 𝛿 on [a, b] such that for every 𝛿-fine d = (𝜉i, [ti−1, ti]) ∈ TD[a,b],

∑_{i=1}^{|d|} ‖ [𝛼(ti) − 𝛼(ti−1)] f(𝜉i) − [F𝛼(ti) − F𝛼(ti−1)] ‖ < 𝜖.

We write f ∈ H𝛼([a, b], X) in this case.

Next, we define indefinite vector integrals.

Definition 1.42 (Indefinite Vector Integrals):

(i) Given f ∶ [a, b] → X and 𝛼 ∈ Kf([a, b], L(X, Y)), we define the indefinite integral 𝛼̃f ∶ [a, b] → Y of 𝛼 with respect to f by

𝛼̃f(t) = ∫_a^t 𝛼(s) df(s), t ∈ [a, b].

If, in addition, 𝛼 ∈ Hf([a, b], L(X, Y)), then 𝛼̃f(t) = Af(t) − Af(a), t ∈ [a, b].

(ii) Given 𝛼 ∶ [a, b] → L(X, Y) and f ∈ K𝛼([a, b], X), we define the indefinite integral f̃𝛼 ∶ [a, b] → Y of f with respect to 𝛼 by

f̃𝛼(t) = ∫_a^t d𝛼(s)f(s), t ∈ [a, b].

If, moreover, f ∈ H𝛼([a, b], X), then f̃𝛼(t) = F𝛼(t) − F𝛼(a), t ∈ [a, b].

Note that Definition 1.42 yields the inclusions

Hf([a, b], L(X, Y)) ⊂ Kf([a, b], L(X, Y)) and H𝛼([a, b], X) ⊂ K𝛼([a, b], X).

If, in item (ii) of Definition 1.42, we consider the particular case where X = Y and, for every t ∈ [a, b], 𝛼(t) is the identity in X, then instead of K𝛼([a, b], X), R𝛼([a, b], X), H𝛼([a, b], X), and f̃𝛼, we write simply K([a, b], X), R([a, b], X), H([a, b], X), and f̃, respectively, where

f̃(t) = ∫_a^t f(s) ds, t ∈ [a, b].

If, in item (i) of Definition 1.42, we have X = Y = ℝ, then one can identify the isomorphic spaces L(ℝ, ℝ) and ℝ and, hence, the spaces Kf([a, b], L(ℝ)), Kf([a, b], ℝ), Hf([a, b], L(ℝ)), and Hf([a, b], ℝ) can also be identified, because Kf([a, b], ℝ) = Hf([a, b], ℝ). Indeed, it is clear that Hf([a, b], ℝ) ⊂ Kf([a, b], ℝ). In order to prove that Kf([a, b], ℝ) ⊂ Hf([a, b], ℝ), it is enough to split the usual Riemannian-type sum into two sums, gathering, respectively, the positive and the negative terms:

∑_{i=1}^{|d|} | 𝛼(𝜏i)[f(ti) − f(ti−1)] − [𝛼̃(ti) − 𝛼̃(ti−1)] |
= ∑_{i=1}^{|d|} { 𝛼(𝜏i)[f(ti) − f(ti−1)] − [𝛼̃(ti) − 𝛼̃(ti−1)] }⁺ + ∑_{i=1}^{|d|} { 𝛼(𝜏i)[f(ti) − f(ti−1)] − [𝛼̃(ti) − 𝛼̃(ti−1)] }⁻
⩽ 𝜖 + 𝜖 = 2𝜖,

for every 𝛿-fine d = (𝜏i, [ti−1, ti]) ∈ TD[a,b] corresponding to a given 𝜖 > 0, since each of the two sums on the right-hand side is taken over a tagged partial division and is, therefore, at most 𝜖 by the Saks–Henstock lemma (Lemma 1.45 below).


As we already mentioned, we will consider a more general definition of the Kurzweil integral in Chapter 2. Thus, in the remainder of this chapter, we refer to the integrals

∫_a^t 𝛼(s) df(s) and ∫_a^t d𝛼(s)f(s), t ∈ [a, b],

where 𝛼 ∶ [a, b] → L(X, Y) and f ∶ [a, b] → X, as Perron–Stieltjes integrals.

As should be expected, the above integrals are linear and additive over nonoverlapping intervals. These facts will be put aside for a while, because in Chapter 2 they will be proved for the more general form of the Kurzweil integral. In the meantime, we present a simple example of a function which is improperly Riemann integrable (and, hence, also Perron integrable, due to Theorem 2.9), but not Lebesgue integrable (because it is not absolutely integrable).

Example 1.43: Let f ∶ [0, ∞) → ℝ be given by f(t) = (sin t)∕t, for t ∈ (0, ∞), and f(0) = L, for some L ∈ ℝ. Then, it is not difficult to prove that lim_{t→∞} ∫_0^t f(s) ds exists, but ∫_0^∞ |sin s ∕ s| ds = ∞, since

∫_0^{n𝜋} | sin s ∕ s | ds = ∑_{k=1}^{n} ∫_{(k−1)𝜋}^{k𝜋} |sin s| ∕ s ds ⩾ ∑_{k=1}^{n} (1∕(k𝜋)) ∫_{(k−1)𝜋}^{k𝜋} |sin s| ds = (2∕𝜋) ∑_{k=1}^{n} 1∕k.

Another example is also needed at this point. Borrowed from [73, Example 2.1], the example below exhibits a function f ∈ R([a, b], X) ⧵ H([a, b], X) (that is, f belongs to R([a, b], X), but not to H([a, b], X)) satisfying f̃ = 0, even though f(t) ≠ 0 for almost every t ∈ [a, b]. Such a function is also an element of K([a, b], X) (because it belongs to R([a, b], X)) which does not belong to H([a, b], X), showing that, in the infinite dimensional-valued case, H([a, b], X) may be a proper subset of K([a, b], X).

Example 1.44: Let I ⊂ ℝ be an arbitrary set and let E be a normed space. A family {xi}i∈I of elements of E is summable with sum x ∈ E (and we write ∑_{i∈I} xi = x), if for every 𝜖 > 0, there is a finite subset F𝜖 ⊂ I such that for every finite subset F ⊂ I with F ⊃ F𝜖, ∥x − ∑_{i∈F} xi∥ < 𝜖.

Let l²(I) denote the set of all families {xi}i∈I, xi ∈ ℝ, such that the family {|xi|²}i∈I is summable, that is,

l²(I) = { x = {xi}i∈I, xi ∈ ℝ ∶ ∑_{i∈I} |xi|² < ∞ }.

It is known that the expression ⟨x, y⟩ = ∑_{i∈I} xi yi defines an inner product and that l²(I), equipped with the norm ∥x∥₂ = (∑_{i∈I} |xi|²)^{1∕2}, is a Hilbert space. As a consequence


of the Basis Theorem, since l²(I) is a Hilbert space, {ei}i∈I is a maximal orthonormal system for l²(I), where ei(j) = ⟨ei, ej⟩ = 𝛿ij and 𝛿ij stands for the Kronecker delta (see [128, Theorem 4.6, item 6, p. 61]). In what follows, we will use the Bessel equality, given as

∥x∥₂² = ∑_{i∈I} |⟨x, ei⟩|² = ∑_{i∈I} |xi|², x ∈ l²(I).

Let [a, b] be a nondegenerate closed interval of ℝ and let X = l²([a, b]) be equipped with the norm

x ↦ ∥x∥₂ = ( ∑_{i∈[a,b]} |xi|² )^{1∕2}.

Consider the function f ∶ [a, b] → X given by f(t) = et, t ∈ [a, b]. Given 𝜖 > 0, take a constant 𝛿 > 0 with 𝛿^{1∕2}(b − a)^{1∕2} < 𝜖. Then, for every (𝛿∕2)-fine d = (𝜉j, [tj−1, tj]) ∈ TD[a,b],

‖ ∑_{j=1}^{|d|} f(𝜉j)(tj − tj−1) − 0 ‖₂ = ‖ ∑_{j=1}^{|d|} e𝜉j (tj − tj−1) ‖₂ ⩽ [ ∑_{j=1}^{|d|} |tj − tj−1|² ]^{1∕2} < 𝛿^{1∕2} [ ∑_{j=1}^{|d|} (tj − tj−1) ]^{1∕2} = 𝛿^{1∕2}(b − a)^{1∕2} < 𝜖,

where we applied the Bessel equality. Thus, f ∈ R([a, b], X) ⊂ K([a, b], X) and f̃ = 0, since ∫_a^t f(s) ds = 0 for every t ∈ [a, b]. On the other hand,

∑_{j=1}^{|d|} ‖ f(𝜉j)(tj − tj−1) − 0 ‖₂ = b − a,

for every (𝜉j, [tj−1, tj]) ∈ TD[a,b]. Hence, f ∉ H([a, b], X).
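With finitely many division points, the computation in Example 1.44 can be carried out exactly: representing an element of l²([a, b]) supported on finitely many indices as a dictionary, the Riemann sum of f(t) = e_t has small l²-norm even though the sum of the norms of its terms is b − a. A sketch (the dictionary representation is ours, not from the text):

```python
import math

def norm2(x):
    # l^2 norm of a finitely supported family {x_i}
    return math.sqrt(sum(c * c for c in x.values()))

def riemann_sum(pts):
    # sum over j of e_{xi_j} * (t_j - t_{j-1}), with tags xi_j = t_{j-1} (all distinct),
    # so each coefficient lands on a distinct coordinate of l^2([0, 1])
    return {pts[j - 1]: pts[j] - pts[j - 1] for j in range(1, len(pts))}

pts = [k / 100 for k in range(101)]
s = riemann_sum(pts)
l2_norm = norm2(s)                              # (sum of squared lengths)^(1/2), small
sum_of_norms = sum(abs(c) for c in s.values())  # b - a = 1, independent of the division
```

Refining the division makes l2_norm shrink like the square root of the mesh (so the Kurzweil/Riemann integral is 0), while sum_of_norms stays equal to b − a, which is exactly why the Henstock (variational) condition fails.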

1.3.2 Basic Properties

The first result we present in this subsection is known as the Saks–Henstock lemma, and it is useful in many situations. For a proof of it, see [210, Lemma 16], for instance. Similar results hold if we replace Kf([a, b], L(X, Y)) by Rf([a, b], L(X, Y)) and K𝛼([a, b], X) by R𝛼([a, b], X).

Lemma 1.45 (Saks–Henstock Lemma): The following assertions hold.

(i) Let 𝛼 ∶ [a, b] → L(X, Y) and f ∈ K𝛼([a, b], X). Given 𝜖 > 0, let 𝛿 be a gauge on [a, b] such that for every 𝛿-fine d = (𝜉i, [ti−1, ti]) ∈ TD[a,b],

‖ ∑_{i=1}^{|d|} [𝛼(ti) − 𝛼(ti−1)] f(𝜉i) − ∫_a^b d𝛼(t)f(t) ‖ < 𝜖.


Then, for every 𝛿-fine d′ = (𝜂j, [sj−1, sj]) ∈ TPD[a,b], we have

‖ ∑_{j=1}^{|d′|} { [𝛼(sj) − 𝛼(sj−1)] f(𝜂j) − ∫_{sj−1}^{sj} d𝛼(t)f(t) } ‖ ⩽ 𝜖.

(ii) Let f ∶ [a, b] → X and 𝛼 ∈ Kf([a, b], L(X, Y)). Given 𝜖 > 0, let 𝛿 be a gauge on [a, b] such that for every 𝛿-fine d = (𝜉i, [ti−1, ti]) ∈ TD[a,b],

‖ ∑_{i=1}^{|d|} 𝛼(𝜉i) [f(ti) − f(ti−1)] − ∫_a^b 𝛼(t) df(t) ‖ < 𝜖.

Then, for every 𝛿-fine d′ = (𝜂j, [sj−1, sj]) ∈ TPD[a,b], we have

‖ ∑_{j=1}^{|d′|} { 𝛼(𝜂j) [f(sj) − f(sj−1)] − ∫_{sj−1}^{sj} 𝛼(t) df(t) } ‖ ⩽ 𝜖.

Now, we define some important sets of functions.

Definition 1.46: Let C𝜎([a, b], L(X, Y)) be the set of all functions 𝛼 ∶ [a, b] → L(X, Y) which are weakly continuous, that is, such that for every x ∈ X, the function t ∈ [a, b] → 𝛼(t)x ∈ Y is continuous, and let G𝜎([a, b], L(X, Y)) denote the set of all weakly regulated functions 𝛼 ∶ [a, b] → L(X, Y), that is, such that for every x ∈ X, the function t ∈ [a, b] → 𝛼(t)x ∈ Y is regulated.

Given 𝛼 ∈ G𝜎([a, b], L(X, Y)) and x ∈ X, let us define

𝛼̂(t+)x = (𝛼x)(t+) = lim_{𝜌→0+} (𝛼x)(t + 𝜌), t ∈ [a, b),
𝛼̂(t−)x = (𝛼x)(t−) = lim_{𝜌→0+} (𝛼x)(t − 𝜌), t ∈ (a, b].

By the Banach–Steinhaus theorem, the limits 𝛼̂(t+) and 𝛼̂(t−) exist and belong to L(X, Y). Moreover, by the Uniform Boundedness Principle, G𝜎([a, b], L(X, Y)) is a Banach space when equipped with the usual supremum norm. It is also clear that C𝜎([a, b], L(X, Y)) ⊂ G𝜎([a, b], L(X, Y)).

The next result concerns the existence of Perron–Stieltjes integrals. A proof of item (i) can be found in [210, Theorem 15]. A proof of item (ii) follows similarly. See also [212, Proposition 7].

Theorem 1.47: The following assertions hold.

(i) If 𝛼 ∈ G([a, b], L(X, Y)) and f ∈ BV([a, b], X), then 𝛼 ∈ Kf([a, b], L(X, Y)).
(ii) If 𝛼 ∈ SV([a, b], L(X, Y)) ∩ G𝜎([a, b], L(X, Y)) and f ∈ G([a, b], X), then f ∈ K𝛼([a, b], X).


The following consequence of Theorem 1.47 will be used later in many chapters. The inequalities follow after some calculations. See, for instance, [210, Proposition 10].

Corollary 1.48: The following assertions hold.

(i) If 𝛼 ∈ G([a, b], L(X)) and f ∈ BV([a, b], X), then the Perron–Stieltjes integral ∫_a^b 𝛼(s) df(s) exists and

‖ ∫_a^b 𝛼(s) df(s) ‖ ⩽ ∫_a^b ∥𝛼(s)∥ d[var_a^s(f)] ⩽ ∥𝛼∥∞ var_a^b(f).

Similarly, if f ∈ G([a, b], X) and g ∶ [a, b] → ℝ is nondecreasing, then

‖ ∫_a^b f(s) dg(s) ‖ ⩽ ∥f∥∞ [g(b) − g(a)].

(ii) If 𝛼 ∈ BV([a, b], L(X)) and f ∈ G([a, b], X), then the Perron–Stieltjes integral ∫_a^b d𝛼(s)f(s) exists and

‖ ∫_a^b d𝛼(s)f(s) ‖ ⩽ ∫_a^b d[var_a^s(𝛼)] ∥f(s)∥ ⩽ var_a^b(𝛼) ∥f∥∞.
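The outer bound in Corollary 1.48 (i) can be sanity-checked at the level of scalar Riemann–Stieltjes sums: every approximating sum for ∫ 𝛼 df is dominated in absolute value by ∥𝛼∥∞ times the division's variation sum for f. A sketch with illustrative choices of 𝛼 and f:

```python
import math

def stieltjes_sum(alpha, f, pts):
    # left-tagged Riemann-Stieltjes sum: sum of alpha(t_{i-1}) * (f(t_i) - f(t_{i-1}))
    return sum(alpha(pts[i - 1]) * (f(pts[i]) - f(pts[i - 1])) for i in range(1, len(pts)))

def variation(f, pts):
    return sum(abs(f(pts[i]) - f(pts[i - 1])) for i in range(1, len(pts)))

alpha = math.cos                 # sup norm 1 on [0, 1]
f = lambda t: t * t              # nondecreasing, var_0^1(f) = 1
pts = [k / 500 for k in range(501)]

s = stieltjes_sum(alpha, f, pts)      # approximates the integral of cos(t) d(t^2)
bound = 1.0 * variation(f, pts)       # ||alpha||_inf * var_d(f)
```

The triangle inequality gives |s| ⩽ bound for every division, which is the discrete shadow of the inequality in the corollary.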

The next result, borrowed from [74, Theorem 1.2], gives conditions for indefinite Perron–Stieltjes integrals to be regulated functions.

Theorem 1.49: The following assertions hold:

(i) if 𝛼 ∈ G𝜎([a, b], L(X, Y)) and f ∈ K𝛼([a, b], X), then f̃𝛼 ∈ G([a, b], Y);
(ii) if f ∈ G([a, b], X) and 𝛼 ∈ Kf([a, b], L(X, Y)), then 𝛼̃f ∈ G([a, b], Y).

Proof. We prove item (i); item (ii) follows similarly. For item (i), it is enough to show that

f̃𝛼(𝜉+) − f̃𝛼(𝜉) = [𝛼̂(𝜉+) − 𝛼(𝜉)] f(𝜉), 𝜉 ∈ [a, b),

because, in this case, the equality

f̃𝛼(𝜉) − f̃𝛼(𝜉−) = [𝛼(𝜉) − 𝛼̂(𝜉−)] f(𝜉), 𝜉 ∈ (a, b]

follows in an analogous way. By hypothesis, f ∈ K𝛼([a, b], X). Hence, given 𝜖 > 0, there is a gauge 𝛿 on [a, b] such that for every 𝛿-fine division d = (𝜉i, [ti−1, ti]) ∈ TD[a,b],

‖ ∑_{i=1}^{|d|} [𝛼(ti) − 𝛼(ti−1)] f(𝜉i) − ∫_a^b d𝛼(t)f(t) ‖ < 𝜖∕2.

Now, let 𝜉 ∈ [a, b). Since 𝛼 ∈ G𝜎([a, b], L(X, Y)), there exists (𝛼x)(𝜉+) for every x ∈ X. In particular, there exists 𝜇 > 0 such that

‖ [𝛼(𝜉 + 𝜌) − 𝛼̂(𝜉+)] f(𝜉) ‖ < 𝜖∕2, for 0 < 𝜌 < 𝜇.

If 𝛿(𝜉) < 𝜇 and 0 < 𝜌 < 𝛿(𝜉), then, by the Saks–Henstock lemma (Lemma 1.45),

‖ [𝛼(𝜉 + 𝜌) − 𝛼(𝜉)] f(𝜉) − ∫_𝜉^{𝜉+𝜌} d𝛼(t)f(t) ‖ ⩽ 𝜖∕2.

Thus, for 0 < 𝜌 < min{𝛿(𝜉), 𝜇},

‖ f̃𝛼(𝜉 + 𝜌) − f̃𝛼(𝜉) − [𝛼̂(𝜉+) − 𝛼(𝜉)] f(𝜉) ‖ = ‖ ∫_𝜉^{𝜉+𝜌} d𝛼(t)f(t) − [𝛼̂(𝜉+) − 𝛼(𝜉)] f(𝜉) ‖
⩽ ‖ ∫_𝜉^{𝜉+𝜌} d𝛼(t)f(t) − [𝛼(𝜉 + 𝜌) − 𝛼(𝜉)] f(𝜉) ‖ + ‖ [𝛼(𝜉 + 𝜌) − 𝛼(𝜉)] f(𝜉) − [𝛼̂(𝜉+) − 𝛼(𝜉)] f(𝜉) ‖ < 𝜖,

and the desired equality follows by letting 𝜌 → 0+. ◽

We end this subsection with a Grönwall-type inequality for Perron–Stieltjes integrals.

Theorem 1.52 (Grönwall's Inequality): Let g ∶ [a, b] → ℝ be a nondecreasing function and consider constants k > 0 and 𝓁 ⩾ 0. Assume that f ∶ [a, b] → [0, ∞) is bounded and satisfies

f(t) ⩽ k + 𝓁 ∫_a^t f(s) dg(s), t ∈ [a, b].

Then,

f(t) ⩽ k e^{𝓁[g(t)−g(a)]}, t ∈ [a, b].
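The Grönwall estimate can be illustrated with a discrete scheme (the discretization is ours): take g(t) = t, build f by turning the integral inequality into an equality with left-tagged Stieltjes sums (the worst case), and compare with k·e^{𝓁[g(t)−g(a)]}.

```python
import math

k, ell = 1.0, 2.0
g = lambda t: t                      # any nondecreasing g would do; g(t) = t for simplicity
pts = [i / 1000 for i in range(1001)]

# f(t_n) = k + ell * sum_{i<n} f(t_i) * (g(t_{i+1}) - g(t_i)), i.e. equality in the hypothesis
f_vals = [k]
acc = 0.0
for i in range(1, len(pts)):
    acc += f_vals[-1] * (g(pts[i]) - g(pts[i - 1]))
    f_vals.append(k + ell * acc)

# the bound k * exp(ell * (g(t) - g(a))) should dominate every f(t_n)
within_bound = all(f_vals[n] <= k * math.exp(ell * (g(pts[n]) - g(pts[0])))
                   for n in range(len(pts)))
```

With g(t) = t this recurrence is just the explicit Euler scheme for f′ = 𝓁f, so f(t_n) = k(1 + 𝓁h)ⁿ ⩽ k·e^{𝓁nh}, matching the exponential bound from below as the mesh shrinks.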

Other properties of Perron–Stieltjes integrals can be found in Chapter 2, where they appear within the consequences of the main results presented there.

1.3.3 Integration by Parts and Substitution Formulas

The first result of this section is an integration by parts formula for Riemann–Stieltjes integrals. It is a particular consequence of Proposition 1.70, presented at the end of this section. A proof of it can be found in [126, Theorem II.1.1].

Theorem 1.53 (Integration by Parts): Let (E, F, G) be a BT. Suppose

(i) either 𝛼 ∈ SB([a, b], E) and f ∈ C([a, b], F);
(ii) or 𝛼 ∈ C([a, b], E) and f ∈ BV([a, b], F).

Then, 𝛼 ∈ Rf([a, b], E) and f ∈ R𝛼([a, b], F), that is, the Riemann–Stieltjes integrals ∫_a^b d𝛼(t)f(t) and ∫_a^b 𝛼(t) df(t) exist and, moreover,

∫_a^b d𝛼(t)f(t) = 𝛼(b)f(b) − 𝛼(a)f(a) − ∫_a^b 𝛼(t) df(t).
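At the level of Riemann–Stieltjes sums, the formula of Theorem 1.53 reduces to Abel summation, which holds exactly for every division before any limit is taken: tagging the sum for ∫ d𝛼 f at right endpoints and the sum for ∫ 𝛼 df at left endpoints makes the two sums telescope to 𝛼(b)f(b) − 𝛼(a)f(a). A scalar sketch:

```python
import math

def parts_identity_gap(alpha, f, pts):
    """|S1 + S2 - (alpha(b) f(b) - alpha(a) f(a))| for the two tagged sums below;
    Abel summation makes this zero (up to rounding) for any division."""
    s1 = sum((alpha(pts[i]) - alpha(pts[i - 1])) * f(pts[i]) for i in range(1, len(pts)))   # right tags
    s2 = sum(alpha(pts[i - 1]) * (f(pts[i]) - f(pts[i - 1])) for i in range(1, len(pts)))   # left tags
    boundary = alpha(pts[-1]) * f(pts[-1]) - alpha(pts[0]) * f(pts[0])
    return abs(s1 + s2 - boundary)

gap = parts_identity_gap(math.exp, math.sin, [k / 37 for k in range(38)])  # arbitrary division
```

This is why integration by parts needs no smoothness at all: the identity is combinatorial, and the analytic hypotheses in Theorem 1.53 are only needed to make each sum converge separately.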

Next, we state a result which is not difficult to prove using the definitions involved in the statement. See [72, Theorem 5]. Recall that the indefinite integral of a function f ∈ K([a, b], X) is denoted by (see Definition 1.42)

f̃(t) = ∫_a^t f(s) ds, t ∈ [a, b].

Theorem 1.54: Suppose f ∈ H([a, b], X) and 𝛼 ∈ Kf̃([a, b], L(X, Y)) is bounded. Then, 𝛼f ∈ K([a, b], Y) and

∫_a^b 𝛼(t)f(t) dt = ∫_a^b 𝛼(t) df̃(t).  (1.3)

If, in addition, 𝛼 ∈ Hf̃([a, b], L(X, Y)), then 𝛼f ∈ H([a, b], Y).


By Theorem 1.47, the Perron–Stieltjes integral ∫_a^b 𝛼(t) df̃(t) exists, and the next corollary follows.

Corollary 1.55: Let 𝛼 ∈ G([a, b], L(X, Y)) and f ∈ H([a, b], X) be such that f̃ ∈ BV([a, b], X). Then, 𝛼f ∈ K([a, b], Y) and (1.3) holds.

A second corollary of Theorem 1.54 follows from the fact that Riemann–Stieltjes integrals are special cases of Perron–Stieltjes integrals. Then, it suffices to apply Theorems 1.49 and 1.53.

Corollary 1.56: Suppose the following conditions hold:

(i) either 𝛼 ∈ C([a, b], L(X, Y)) and f ∈ H([a, b], X), with f̃ ∈ BV([a, b], X);
(ii) or f ∈ H([a, b], X) and 𝛼 ∈ SV([a, b], L(X, Y)).

Then, 𝛼f ∈ K([a, b], Y), equality (1.3) holds, and we have

∫_a^b 𝛼(t) df̃(t) = 𝛼(b)f̃(b) − 𝛼(a)f̃(a) − ∫_a^b d𝛼(t) f̃(t).  (1.4)

The next theorem is due to C. S. Hönig (see [129]), and it concerns multipliers for Perron–Stieltjes integrals.

Theorem 1.57: Suppose f ∈ K([a, b], X) and 𝛼 ∈ SV([a, b], L(X, Y)). Then, 𝛼f ∈ K([a, b], Y) and Eqs. (1.3) and (1.4) hold.

Since H([a, b], X) ⊂ K([a, b], X) and BV([a, b], L(X, Y)) ⊂ SV([a, b], L(X, Y)), it is immediate that if f ∈ H([a, b], X) and 𝛼 ∈ BV([a, b], L(X, Y)), then 𝛼f ∈ K([a, b], Y). As a matter of fact, the next result gives us information about the multipliers for the Henstock vector integral. See [72, Theorem 7].

Theorem 1.58: Assume that f ∈ H([a, b], X) and 𝛼 ∈ BV([a, b], L(X, Y)). Then, 𝛼f ∈ H([a, b], Y) and equalities (1.3) and (1.4) hold.

Proof. Since f ∈ H([a, b], X), f̃ is continuous by Theorem 1.49. Thus, given 𝜖 > 0, there exists 𝛿* > 0 such that

𝜔(f̃, [c, d]) = sup {∥f̃(t) − f̃(s)∥ ∶ t, s ∈ [c, d]} < 𝜖,

whenever 0 < d − c < 𝛿*, where [c, d] ⊂ [a, b]. Moreover, there is a gauge 𝛿 on [a, b], with 𝛿(t) < 𝛿*∕2 for t ∈ [a, b], such that for every 𝛿-fine d = (𝜉i, [ti−1, ti]) ∈ TD[a,b],

∑_{i=1}^{|d|} ‖ f(𝜉i)(ti − ti−1) − ∫_{ti−1}^{ti} f(t) dt ‖ < 𝜖.


Thus,

∑_{i=1}^{|d|} ‖ 𝛼(𝜉i)f(𝜉i)(ti − ti−1) − ∫_{ti−1}^{ti} 𝛼(t)f(t) dt ‖
⩽ ∑_{i=1}^{|d|} ‖ 𝛼(𝜉i) [ f(𝜉i)(ti − ti−1) − ∫_{ti−1}^{ti} f(t) dt ] ‖ + ∑_{i=1}^{|d|} ‖ ∫_{ti−1}^{ti} [𝛼(t) − 𝛼(𝜉i)] f(t) dt ‖
< ‖𝛼‖∞ 𝜖 + ∑_{i=1}^{|d|} ‖ ∫_{ti−1}^{ti} [𝛼(t) − 𝛼(𝜉i)] f(t) dt ‖.

But, by Corollary 1.56, item (ii), 𝛼f ∈ K([a, b], Y) and

∫_a^b 𝛼(t)f(t) dt = ∫_a^b 𝛼(t) df̃(t) = 𝛼(b)f̃(b) − 𝛼(a)f̃(a) − ∫_a^b d𝛼(t)f̃(t),

and a similar formula also holds on every subinterval of [a, b]. Hence, for 𝛽ti = [𝛼(ti) − 𝛼(𝜉i)] f̃(ti) and 𝛽ti−1 = [𝛼(ti−1) − 𝛼(𝜉i)] f̃(ti−1), we have

∑_{i=1}^{|d|} ‖ ∫_{ti−1}^{ti} [𝛼(t) − 𝛼(𝜉i)] f(t) dt ‖ = ∑_{i=1}^{|d|} ‖ 𝛽ti − 𝛽ti−1 − ∫_{ti−1}^{ti} d𝛼(t)f̃(t) ‖
= ∑_{i=1}^{|d|} ‖ [ 𝛽ti − ∫_{𝜉i}^{ti} d𝛼(t)f̃(t) ] − [ 𝛽ti−1 + ∫_{ti−1}^{𝜉i} d𝛼(t)f̃(t) ] ‖
= ∑_{i=1}^{|d|} ‖ ∫_{𝜉i}^{ti} d𝛼(t) [f̃(ti) − f̃(t)] + ∫_{ti−1}^{𝜉i} d𝛼(t) [f̃(ti−1) − f̃(t)] ‖ ⩽ 𝜖 var_a^b(𝛼),

since, for every t ∈ [ti−1, ti], we have

‖f̃(ti) − f̃(t)‖ ⩽ sup {∥f̃(t) − f̃(s)∥ ∶ t, s ∈ [ti−1, ti]} < 𝜖 and ‖f̃(ti−1) − f̃(t)‖ ⩽ sup {∥f̃(t) − f̃(s)∥ ∶ t, s ∈ [ti−1, ti]} < 𝜖.

The proof is then complete. ◽

A proof of the next result, borrowed from [72, Theorem 8], follows from the definitions of the integrals.

Theorem 1.59: Let 𝛼 ∈ H([a, b], L(X, Y)) and f ∈ K𝛼̃([a, b], X). If f is bounded, then 𝛼f ∈ K([a, b], Y) and

∫_a^b 𝛼(t)f(t) dt = ∫_a^b d𝛼̃(t)f(t).  (1.5)

If, in addition, f ∈ H𝛼̃([a, b], X), then 𝛼f ∈ H([a, b], Y).

Corollary 1.60: Suppose 𝛼 ∈ H([a, b], L(X, Y)), with 𝛼̃ ∈ SV([a, b], L(X, Y)), and f ∈ G([a, b], X). Then, 𝛼f ∈ K([a, b], Y) and (1.5) holds.


Proof. By Theorem 1.49, 𝛼̃ ∈ C([a, b], L(X, Y)). Then, the result follows from Theorem 1.47, since C([a, b], L(X, Y)) ⊂ G𝜎([a, b], L(X, Y)). ◽

The next corollaries follow from Theorems 1.49 and 1.53.

Corollary 1.61: Suppose 𝛼 ∈ H([a, b], L(X, Y)), with 𝛼̃ ∈ SV([a, b], L(X, Y)), and f ∈ C([a, b], X). Then, 𝛼f ∈ K([a, b], Y) and we have

∫_a^b 𝛼(t)f(t) dt = ∫_a^b d𝛼̃(t)f(t),  (1.6)

and the following integration by parts formula holds:

∫_a^b d𝛼̃(t)f(t) = 𝛼̃(b)f(b) − 𝛼̃(a)f(a) − ∫_a^b 𝛼̃(t) df(t).  (1.7)

Corollary 1.62: Consider functions 𝛼 ∈ H([a, b], L(X, Y)) and f ∈ BV([a, b], X). Then, 𝛼f ∈ K([a, b], Y) and equalities (1.6) and (1.7) hold.

The next two theorems generalize Corollary 1.62. For their proofs, the reader may want to consult [72].

Theorem 1.63: Consider f ∈ BV([a, b], X). If 𝛼 ∈ K([a, b], L(X, Y)) (respectively, 𝛼 ∈ H([a, b], L(X, Y))), then 𝛼f ∈ K([a, b], Y) (respectively, 𝛼f ∈ H([a, b], Y)) and both (1.6) and (1.7) hold.

The next result is a substitution formula for Perron–Stieltjes integrals. A similar result holds for Riemann–Stieltjes integrals. For a proof of it, see [72, Theorem 11].

Theorem 1.64: Consider functions 𝛼 ∈ SV([a, b], L(X, Y)), f ∶ [a, b] → Z, and 𝛽 ∈ Kf([a, b], L(Z, X)). Let

g(t) = 𝛽̃f(t) = ∫_a^t 𝛽(s) df(s), t ∈ [a, b].

Then, 𝛼 ∈ Kg([a, b], L(X, Y)) if and only if 𝛼𝛽 ∈ Kf([a, b], L(Z, Y)), in which case, we have

∫_a^b 𝛼(t)𝛽(t) df(t) = ∫_a^b 𝛼(t) dg(t) = ∫_a^b 𝛼(t) d[ ∫_a^t 𝛽(s) df(s) ]  (1.8)

and

‖ ∫_a^b 𝛼(t)𝛽(t) df(t) ‖ ⩽ [SV[a,b](𝛼) + ‖𝛼(a)‖] ‖g‖∞.  (1.9)


Using Theorem 1.53, one can prove the next corollary. See [72, Corollary 8]. From now on, X, Y, Z, and W are Banach spaces.

Corollary 1.65: Consider functions 𝛼 ∈ SV([a, b], L(X, Y)), f ∈ C([a, b], W), and 𝛽 ∈ Kf([a, b], L(W, X)), and define

g(t) = 𝛽̃f(t) = ∫_a^t 𝛽(s) df(s), t ∈ [a, b].

Then, 𝛼𝛽 ∈ Kf([a, b], L(W, Y)) and equality (1.8) and inequality (1.9) hold.

Another substitution formula for Perron–Stieltjes integrals is presented next. Its proof uses a very nice trick provided by Professor C. S. Hönig while advising M. Federson's Master Thesis. Such a result is borrowed from [72, Theorem 12].

Theorem 1.66: Consider functions 𝛾 ∶ [a, b] → L(W, Y), 𝛼 ∈ K𝛾([a, b], L(X, W)), f ∈ BV([a, b], X), and 𝛽 = 𝛼̃𝛾 ∶ [a, b] → L(X, Y), that is,

𝛽(t) = ∫_a^t d𝛾(s)𝛼(s), t ∈ [a, b].

Then, f ∈ K𝛽([a, b], X) if and only if 𝛼f ∈ K𝛾([a, b], Y), and

∫_a^t d𝛾(s)𝛼(s)f(s) = ∫_a^t d𝛽(s)f(s), t ∈ [a, b].  (1.10)

Proof. Since α ∈ K^γ([a,b], L(X,W)), given ε > 0, there exists a gauge δ on [a,b] such that, for every δ-fine d = (ξ_i, [t_{i−1}, t_i]) ∈ TD_[a,b],

  ‖Σ_{i=1}^{|d|} {[γ(t_i) − γ(t_{i−1})] α(ξ_i) − ∫_{t_{i−1}}^{t_i} dγ(t) α(t)}‖ < ε.

Taking approximated sums for ∫_a^b dγ(t)α(t)f(t) and ∫_a^b dβ(t)f(t), we obtain

  ‖Σ_{i=1}^{|d|} [γ(t_i) − γ(t_{i−1})] α(ξ_i) f(ξ_i) − Σ_{i=1}^{|d|} [β(t_i) − β(t_{i−1})] f(ξ_i)‖
  = ‖Σ_{i=1}^{|d|} {[γ(t_i) − γ(t_{i−1})] α(ξ_i) − ∫_{t_{i−1}}^{t_i} dγ(t) α(t)} f(ξ_i)‖ = I.

But, if γ_i ∈ L(X,Y) and x_i ∈ X, then

  Σ_{i=1}^n γ_i x_i = (Σ_{i=1}^n γ_i) x_0 + Σ_{j=1}^n (Σ_{i=j}^n γ_i)(x_j − x_{j−1}),  n ∈ ℕ.
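The summation identity just displayed is a discrete summation-by-parts (Abel) formula. The following quick numerical check is not part of the text; it is a sketch in which scalars stand in for the operators γ_i and the vectors x_i:

```python
import random

def lhs(gamma, x):
    # sum_{i=1}^n gamma_i * x_i
    return sum(g * xi for g, xi in zip(gamma, x))

def rhs(gamma, x, x0):
    # (sum_i gamma_i) * x_0 + sum_j (sum_{i>=j} gamma_i) * (x_j - x_{j-1})
    total = sum(gamma) * x0
    prev = x0
    for j in range(len(gamma)):
        total += sum(gamma[j:]) * (x[j] - prev)
        prev = x[j]
    return total

random.seed(0)
gamma = [random.uniform(-1, 1) for _ in range(7)]
x = [random.uniform(-1, 1) for _ in range(7)]
x0 = random.uniform(-1, 1)
assert abs(lhs(gamma, x) - rhs(gamma, x, x0)) < 1e-12
```

The identity holds for arbitrary x_0, which is what lets the proof single out the value f(a).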


1 Preliminaries

Hence, taking γ_i = [γ(t_i) − γ(t_{i−1})] α(ξ_i) − ∫_{t_{i−1}}^{t_i} dγ(t) α(t), x_i = f(ξ_i), x_0 = f(a), and n = |d|, we obtain

  I ⩽ ‖Σ_{i=1}^{|d|} γ_i‖ ‖f(a)‖ + Σ_{j=1}^{|d|} ‖Σ_{i=j}^{|d|} γ_i‖ ‖f(ξ_j) − f(ξ_{j−1})‖ < ε ‖f(a)‖ + ε var(f),

where we applied the Saks–Henstock lemma (Lemma 1.45) to obtain

  ‖Σ_{i=j}^{|d|} γ_i‖ = ‖Σ_{i=j}^{|d|} {[γ(t_i) − γ(t_{i−1})] α(ξ_i) − ∫_{t_{i−1}}^{t_i} dγ(t) α(t)}‖ ⩽ ε,

for every j = 1, 2, …, |d|. ◽

A proof of the next proposition follows similarly to the proof of Theorem 1.66.

Proposition 1.67: Let J be any interval of the real line and a, b ∈ J, with a < b. Consider functions f: J → L(X) and v: J → ℝ of locally bounded variation. Assume that α: J → L(X) is locally Perron–Stieltjes integrable with respect to v, that is, the Perron–Stieltjes integral ∫_I α(t) dv(t) exists for every compact interval I ⊂ J. Assume, further, that β: J → L(X), defined by

  β(t) = ∫_a^t α(s) dv(s),  t ∈ J,

is also of locally bounded variation. Then, the Perron–Stieltjes integrals ∫_a^b dβ(r)f(r) and ∫_a^b α(r)f(r) dv(r) exist and

  ∫_a^b dβ(r) f(r) = ∫_a^b α(r) f(r) dv(r).    (1.11)

Yet another substitution formula for Perron–Stieltjes integrals, borrowed from [72, Theorem 11], is brought up here and, again, an interesting trick provided by Professor Hönig is used in its proof. This substitution formula will be used in Chapter 3 in order to guarantee the existence of some Perron–Stieltjes integrals. As a matter of fact, the corollary following Theorem 1.68 will do the job.

Theorem 1.68: Consider functions f: [a,b] → X, α ∈ K_f([a,b], L(X,W)), and β = α̃_f: [a,b] → W, that is,

  β(t) = ∫_a^t α(s) df(s),  for every t ∈ [a,b],


and assume that γ ∈ SV([a,b], L(W,Y)). Then, γ ∈ K_β([a,b], L(W,Y)) if and only if γα ∈ K_f([a,b], L(X,Y)), in which case, we have

  ∫_a^b γ(t) α(t) df(t) = ∫_a^b γ(t) dβ(t).    (1.12)

Proof. By hypothesis, α ∈ K_f([a,b], L(X,W)). Therefore, for every ε > 0, there is a gauge δ on [a,b] such that for every δ-fine d = (ξ_i, [t_{i−1}, t_i]) ∈ TD_[a,b], we have

  ‖Σ_{i=1}^{|d|} {α(ξ_i)[f(t_i) − f(t_{i−1})] − ∫_{t_{i−1}}^{t_i} α(t) df(t)}‖ < ε.

Taking approximated Riemannian-type sums for the integrals ∫_a^b γ(t)α(t) df(t) and ∫_a^b γ(t) dβ(t), we obtain

  ‖Σ_{i=1}^{|d|} γ(ξ_i)α(ξ_i)[f(t_i) − f(t_{i−1})] − Σ_{i=1}^{|d|} γ(ξ_i)[β(t_i) − β(t_{i−1})]‖
  = ‖Σ_{i=1}^{|d|} γ(ξ_i){α(ξ_i)[f(t_i) − f(t_{i−1})] − ∫_{t_{i−1}}^{t_i} α(t) df(t)}‖ = I.

On the other hand, when γ_i ∈ L(X,Y) and x_i ∈ X, we have

  Σ_{i=1}^n γ_i x_i = Σ_{j=1}^n (γ_j − γ_{j−1})(Σ_{i=j}^n x_i) + γ_0 (Σ_{i=1}^n x_i),  n ∈ ℕ.

Then, taking x_i = α(ξ_i)[f(t_i) − f(t_{i−1})] − ∫_{t_{i−1}}^{t_i} α(t) df(t), γ_i = γ(ξ_i), γ_0 = γ(a), and n = |d| (with the convention ξ_0 = a), we get

  I = ‖Σ_{j=1}^{|d|} [γ(ξ_j) − γ(ξ_{j−1})](Σ_{i=j}^{|d|} x_i) + γ(a) Σ_{i=1}^{|d|} x_i‖ ⩽ SV(γ) ε + ‖γ(a)‖ ε,

because the Saks–Henstock lemma (Lemma 1.45) yields ‖Σ_{i=j}^{|d|} x_i‖ ⩽ ε, for every j ∈ {1, 2, …, |d|}. ◽

Corollary 1.69: Consider functions f: [a,b] → X, α ∈ K_f([a,b], L(X,W)), β = α̃_f ∈ BV([a,b], W), and γ ∈ G([a,b], L(W,Y)) ∩ SV([a,b], L(W,Y)). Then, we have γ ∈ K_β([a,b], L(W,Y)), γα ∈ K_f([a,b], L(X,Y)), and Eq. (1.12) holds.

Proof. Theorem 1.47, item (i), yields γ ∈ K_β([a,b], L(W,Y)). Then, the statement follows from Theorem 1.68. ◽

The next result gives us an integration by parts formula for Perron–Stieltjes integrals. A proof of it can be found in [212, Theorem 13].


Proposition 1.70: Suppose α ∈ G([a,b], L(X)) and f ∈ BV([a,b], X), or α ∈ SV([a,b], L(X)) and f ∈ G([a,b], X). Then, the Perron–Stieltjes integrals ∫_a^b dα(t)f(t) and ∫_a^b α(t) df(t) exist, and the following equality holds:

  ∫_a^b dα(t) f(t) + ∫_a^b α(t) df(t) = α(b)f(b) − α(a)f(a) − Σ_{a⩽τ<b} Δ⁺α(τ) Δ⁺f(τ) + Σ_{a<τ⩽b} Δ⁻α(τ) Δ⁻f(τ).

given ε > 0, there exists δ > 0 such that for every δ-fine d = (ξ_i, [t_{i−1}, t_i]) ∈ STD_[0,1] (the reader may want to check the notation STD_[a,b] in the appendix of this chapter),

  ‖f̃(1) − Σ_{i=1}^{|d|} f(ξ_i)(t_i − t_{i−1})‖ < ε.



Consider 0 < δ < ε and take a δ-fine d = (ξ_i, [t_{i−1}, t_i]) ∈ TPD_[0,1].

If ξ_i ⩽ s and t_i < ξ_i + δ, then t_i < s + δ. Therefore, Σ_{ξ_i⩽s} (t_i − t_{i−1}) < s + δ and, hence,

  −s + Σ_{ξ_i⩽s} (t_i − t_{i−1}) < δ.

If ξ_j > s and t_{j−1} > ξ_j − δ, then t_{j−1} > s − δ. Therefore,

  0 ⩽ Σ_{ξ_j>s} (t_j − t_{j−1}) < 1 − (s − δ) = Σ_{i=1}^{|d|} (t_i − t_{i−1}) − s + δ,

and we obtain

  s − Σ_{ξ_i⩽s} (t_i − t_{i−1}) < δ.

Finally, we get

  ‖f̃(1) − Σ_{i=1}^{|d|} f(ξ_i)(t_i − t_{i−1})‖∞ = sup_{0⩽s⩽1} |f̃(1)(s) − Σ_{i=1}^{|d|} f(ξ_i)(s)(t_i − t_{i−1})| = sup_{0⩽s⩽1} |s − Σ_{ξ_i⩽s} (t_i − t_{i−1})| < δ < ε

and the Claim is proved.

A less restrictive version of the Fundamental Theorem of Calculus is stated next. A proof of it follows as in [108, Theorem 9.6].

Theorem 1.75 (Fundamental Theorem of Calculus): Suppose F: [a,b] → X is a continuous function such that the derivative F′(t) = f(t) exists nearly everywhere on [a,b] (i.e., except for a countable subset of [a,b]). Then, f ∈ H([a,b], X) and

  ∫_a^t f(s) ds = F(t) − F(a),  t ∈ [a,b].

Now, we present a class of functions f: [a,b] → X, lying between absolutely continuous and continuous functions, for which we can obtain a version of the Fundamental Theorem of Calculus for Henstock vector integrals. Let m denote the Lebesgue measure.

Definition 1.76: A function f: [a,b] → X satisfies the strong Lusin condition, and we write f ∈ SL([a,b], X), if given ε > 0 and B ⊂ [a,b] with m(B) = 0, there is a gauge δ on B such that for every δ-fine d = (ξ_i, [t_{i−1}, t_i]) ∈ TPD_[a,b] with ξ_i ∈ B for all i = 1, 2, …, |d|, we have Σ_{i=1}^{|d|} ‖f(t_i) − f(t_{i−1})‖ < ε.

If we denote by AC([a,b], X) the space of all absolutely continuous functions from [a,b] to X, then it is not difficult to prove that

  AC([a,b], X) ⊂ SL([a,b], X) ⊂ C([a,b], X).

In SL([a,b], X), we consider the usual supremum norm, ‖·‖∞, induced by C([a,b], X).

The next two versions of the Fundamental Theorem of Calculus for Henstock vector integrals, as described in Definition 1.41, are borrowed from [70, Theorems 1 and 2]. We use the term almost everywhere in the sense of the Lebesgue measure m.

Theorem 1.77: If f ∈ SL([a,b], X) and A ∈ SL([a,b], Y) are both differentiable and α: [a,b] → L(X,Y) is such that A′(t) = α(t)f′(t) for almost every t ∈ [a,b], then α ∈ H_f([a,b], L(X,Y)) and A = α̃_f, that is,

  ∫_a^t α(s) f′(s) ds = ∫_a^t α(s) df(s),  t ∈ [a,b].

Theorem 1.78: If f ∈ SL([a,b], X) is differentiable and α ∈ H_f([a,b], L(X,Y)) is bounded, then α̃_f ∈ SL([a,b], Y), and the derivative (α̃_f)′(t) = α(t)f′(t) exists for almost every t ∈ [a,b], that is,

  d/dt [∫_a^t α(s) df(s)] = α(t) f′(t), almost everywhere in [a,b].

Corollary 1.79: Suppose f ∈ SL([a,b], X) is differentiable and nonconstant on any nondegenerate subinterval of [a,b], and α ∈ H_f([a,b], L(X,Y)) is bounded and such that α̃_f = 0. Then, α = 0 almost everywhere in [a,b].

From Corollary 1.50, we know that if f ∈ C([a,b], X) and α ∈ K_f([a,b], L(X,Y)), then α̃_f ∈ C([a,b], Y). For the Henstock vector integral, we have the following analogue, whose proof can be found in [70, Theorem 7].

Theorem 1.80: If f ∈ SL([a,b], X) and α ∈ H_f([a,b], L(X,Y)), then α̃_f ∈ SL([a,b], Y).

The next result is borrowed from [70, Theorem 5]. We reproduce its proof here.


Theorem 1.81: Suppose f ∈ SL([a,b], X) and α: [a,b] → L(X,Y) is such that α = 0 almost everywhere on [a,b]. Then, α ∈ H_f([a,b], L(X,Y)) and α̃_f = 0, that is,

  α̃_f(t) = ∫_a^t α(s) df(s) = 0, for every t ∈ [a,b].

Proof. Consider the sets

  E = {t ∈ [a,b]: α(t) ≠ 0} and E_n = {t ∈ E: n − 1 ⩽ ‖α(t)‖ < n}, n ∈ ℕ,

so that m(E_n) = 0 for every n ∈ ℕ. Since f ∈ SL([a,b], X), for each n ∈ ℕ, given ε > 0, there exists a gauge δ_n on E_n such that, for every δ_n-fine tagged partial division d = (ξ_{n_i}, [t_{n_i−1}, t_{n_i}]) ∈ TPD_[a,b], with ξ_{n_i} ∈ E_n, for i = 1, 2, …, |d|, we have

  Σ_{i=1}^{|d|} ‖f(t_{n_i}) − f(t_{n_i−1})‖ < ε/(n 2ⁿ).
for n > m. Thus, {f_n}_{n∈ℕ} is a ‖·‖_A-Cauchy sequence. On the other hand,

  ‖f_n(t)‖₂ = ‖g₁(t) + g₂(t) + ⋯ + g_n(t)‖₂ = √n,

for every t ∈ [0,1]. Hence, there is no function f(t) ∈ l²(ℕ × ℕ), with t ∈ [0,1], such that lim_{n→∞} ‖f_n − f‖_A = 0.

The next result follows from Theorem 1.80. A proof of it can be found in [75, Theorem 5].

Theorem 1.85: Suppose f ∈ SL([a,b], X) is nonconstant on any nondegenerate subinterval of [a,b]. Then, the mapping

  α ∈ H_f([a,b], L(X,Y)) ↦ α̃_f ∈ C_a([a,b], Y)

is an isometry, that is, ‖α̃_f‖∞ = ‖α‖_{A,f}, onto a dense subspace of C_a([a,b], Y).

The next result, known as the Straddle Lemma, will be useful to prove that the space G([a,b], L(X,Y)) of regulated functions from [a,b] to L(X,Y) is dense in K_f([a,b], L(X,Y)) in the Alexiewicz norm ‖·‖_{A,f}. For a proof of the Straddle Lemma, the reader may want to consult [130, 3.4] or [119].

Lemma 1.86 (Straddle Lemma): Suppose f, F: [a,b] → X are functions such that F is differentiable, with F′(ξ) = f(ξ) for all ξ ∈ [a,b]. Then, given ε > 0, there exists δ(ξ) > 0 such that

  ‖F(t) − F(s) − f(ξ)(t − s)‖ < ε(t − s),

whenever ξ − δ(ξ) < s < ξ < t < ξ + δ(ξ).
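The norm computation ‖g₁(t) + ⋯ + g_n(t)‖₂ = √n above relies only on the functions g_i(t) being pairwise orthonormal. Here is a quick check with standard basis vectors (a sketch, not from the text; finitely supported l² elements are represented as plain Python lists):

```python
import math

def norm2(v):
    # l2 norm of a finitely supported sequence given as a list
    return math.sqrt(sum(c * c for c in v))

n = 10
# g_1, ..., g_n: pairwise orthonormal (standard basis vectors)
g = [[1.0 if j == i else 0.0 for j in range(n)] for i in range(n)]
s = [sum(col) for col in zip(*g)]  # the sum g_1 + g_2 + ... + g_n
assert abs(norm2(s) - math.sqrt(n)) < 1e-12
```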


The next result is adapted from [75, Theorem 8].

Proposition 1.87: Suppose f ∈ SL([a,b], X) is differentiable and nonconstant on any nondegenerate subinterval of [a,b]. Then, the Banach space G([a,b], L(X,Y)) is dense in K_f([a,b], L(X,Y)) under the Alexiewicz norm ‖·‖_{A,f}.

Proof. Assume that α ∈ K_f([a,b], L(X,Y)) and let ε > 0 be given. We need to find a function β ∈ G([a,b], L(X,Y)) such that ‖β − α‖_{A,f} < ε or, equivalently,

  ‖∫_a^t β(s) df(s) − ∫_a^t α(s) df(s)‖ < ε,  t ∈ [a,b].    (1.13)

By Corollary 1.50, α̃_f ∈ C_a([a,b], Y) = {x ∈ C([a,b], Y): x(a) = 0}. Let us denote by C_a^(1)([a,b], Y) the subspace of C_a([a,b], Y) of functions which are differentiable with continuous derivative. Hence, there is a function h ∈ C_a^(1)([a,b], Y) such that

  ‖h − α̃_f‖∞ < ε.    (1.14)

Let β: [a,b] → L(X,Y) be defined by β(t)x = h′(t), for all x ∈ X such that x ≠ 0, and by β(t)0 = 0. In particular, β(t)f′(t) = h′(t) whenever f′(t) ≠ 0. Therefore, β(t)f′(t) = h′(t) for almost every t ∈ [a,b], since f: [a,b] → X is differentiable and nonconstant on any nondegenerate subinterval of [a,b]. Hence, β ∈ G([a,b], L(X,Y)). Then, the Riemann integral ∫_a^b β(s)f′(s) ds exists and

  ∫_a^t β(s) f′(s) ds = ∫_a^t h′(s) ds = h(t),  t ∈ [a,b],    (1.15)

where we applied the Fundamental Theorem of Calculus for the Riemann integral in order to obtain the last equality. Thus, replacing (1.15) in (1.14), we obtain

  ‖∫_a^t β(s) f′(s) ds − ∫_a^t α(s) df(s)‖ < ε,  t ∈ [a,b].    (1.16)

Now, in view of (1.13), it remains to prove that the Perron–Stieltjes integral ∫_a^b β(s) df(s) exists and

  ∫_a^t β(s) df(s) = ∫_a^t β(s) f′(s) ds,  t ∈ [a,b].    (1.17)

Let δ₁ be the gauge on [a,b] from the definition of ∫_a^b β(s)f′(s) ds. Take t ∈ [a,b] and, for every ξ ∈ [a,t], let δ₂(ξ) > 0 be such that if ξ − δ₂(ξ) < s < ξ < u < ξ + δ₂(ξ), then, by the Straddle Lemma (Lemma 1.86), we have

  ‖f(u) − f(s) − f′(ξ)(u − s)‖ < ε(u − s).    (1.18)


Fix t ∈ [a,b]. We now define a gauge δ on [a,t] by δ(ξ) = min{δ₁(ξ), δ₂(ξ)}, for every ξ ∈ [a,t]. Hence, for every δ-fine d = (ξ_i, [t_{i−1}, t_i]) ∈ TD_[a,t], we have

  ‖Σ_{i=1}^{|d|} β(ξ_i)[f(t_i) − f(t_{i−1})] − ∫_a^t β(s) f′(s) ds‖
  ⩽ ‖Σ_{i=1}^{|d|} β(ξ_i)[f(t_i) − f(t_{i−1})] − Σ_{i=1}^{|d|} β(ξ_i) f′(ξ_i)(t_i − t_{i−1})‖ + ‖Σ_{i=1}^{|d|} β(ξ_i) f′(ξ_i)(t_i − t_{i−1}) − ∫_a^t β(s) f′(s) ds‖
  < ‖β‖∞ Σ_{i=1}^{|d|} ‖f(t_i) − f(t_{i−1}) − f′(ξ_i)(t_i − t_{i−1})‖ + ε
  < ‖β‖∞ Σ_{i=1}^{|d|} ε(t_i − t_{i−1}) + ε = ‖β‖∞ ε(t − a) + ε,

by (1.18) and by the Riemann integrability of β(·)f′(·). Finally, (1.13) follows from (1.16) and (1.17), and the proof is complete. ◽

1.3.5 A Convergence Theorem

As the last result of this introductory chapter, we mention a convergence theorem for Perron–Stieltjes integrals. This result is used in Chapter 3. A proof of it can be found in [180, Theorem 2.2].

Theorem 1.88: Consider functions f, f_n ∈ G([a,b], X) and α, α_n ∈ BV([a,b], L(X)), for n ∈ ℕ. Suppose

  lim_{n→∞} ‖f_n − f‖∞ = 0,  lim_{n→∞} ‖α_n − α‖∞ = 0 and sup_{n∈ℕ} var_a^b(α_n) < ∞.

Then,

  lim_{n→∞} ( sup_{t∈[a,b]} ‖∫_a^t dα_n(s) f_n(s) − ∫_a^t dα(s) f(s)‖ ) = 0.
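As a toy illustration of Theorem 1.88 (a numerical sketch with scalar data, not from the text): take α_n(s) = (1 + 1/n)s and f_n(s) = sin(s) + 1/n on [0,1], so that f_n → sin and α_n → id uniformly with sup_n var(α_n) < ∞; the Riemann–Stieltjes sums of ∫_0^t dα_n(s) f_n(s) then approach those of ∫_0^t dα(s) f(s), uniformly in t:

```python
import math

def stieltjes(alpha, f, t, m=2000):
    # left-endpoint Riemann-Stieltjes sum approximating integral_0^t f d(alpha)
    s = [t * i / m for i in range(m + 1)]
    return sum(f(s[i]) * (alpha(s[i + 1]) - alpha(s[i])) for i in range(m))

def sup_diff(n):
    # sup over a grid of t of | integral_0^t f_n d(alpha_n) - integral_0^t f d(alpha) |
    a_n = lambda s: (1.0 + 1.0 / n) * s
    f_n = lambda s: math.sin(s) + 1.0 / n
    a = lambda s: s
    f = math.sin
    ts = [k / 50 for k in range(51)]
    return max(abs(stieltjes(a_n, f_n, t) - stieltjes(a, f, t)) for t in ts)

# the uniform distance between the indefinite integrals shrinks as n grows
assert sup_diff(100) < sup_diff(2) / 10
```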

Appendix 1.A: The McShane Integral

The integrals introduced by J. Kurzweil [152] and independently by R. Henstock [118] in the late 1950s are equivalent to the restricted Denjoy integral and the Perron integral for integrands taking values in ℝ (see [108], for instance). In particular, the definitions of the so-called “Kurzweil–Henstock” integrals are based


on Riemannian sums, and are therefore easy to deal with even by undergraduate students. Not only that, but the Kurzweil–Henstock–Denjoy–Perron integral encompasses the integrals of Newton, Riemann, and Lebesgue. In 1969, E. J. McShane (see [173, 174]) showed that a small change in the subdivision process of the domain of integration within the Kurzweil–Henstock (or Perron) integral leads to the Lebesgue integral. This is a very nice finding, since the Lebesgue integral can then be taught by presenting its Riemannian definition straightforwardly and immediately obtaining some very interesting properties, such as the linearity of the Lebesgue integral, which comes directly from the fact that a Riemann sum can be split into two sums. The monotone convergence theorem for the Lebesgue integral is another example of a result which is naturally obtained from its equivalent definition due to McShane. The Kurzweil integral and the variational Henstock integral can be extended to Banach space-valued functions, as well as to the evaluation of integrands over unbounded intervals. The extension of the McShane integral to Banach space-valued functions, proposed by R. A. Gordon (see [107]), gives a more general integral than that of Bochner–Lebesgue. As a matter of fact, incorporating the idea of McShane into the definition due to Kurzweil enlarges the class of Bochner–Lebesgue integrable functions. On the other hand, when the idea of McShane is employed in the variational Henstock integral, one gets precisely the Bochner–Lebesgue integral. This interesting fact was proved by W. Congxin and Y. Xiaobo in [47] and, independently, by C. S. Hönig in [131]. Later, L. Di Piazza and K. Musiał generalized this result (see [55]).
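To make the role of the gauge in these Riemannian-sum definitions concrete, here is a small sketch (not from the text): it constructs a δ-fine tagged division of [0,1] by bisection, tagging each subinterval with its left endpoint, and uses the resulting Riemann sum to approximate the Henstock–Kurzweil integral of the unbounded function f(t) = 1/(2√t) (with f(0) = 0), whose primitive is √t. The particular gauge δ below is an ad hoc choice:

```python
import math

def f(t):
    return 0.0 if t == 0 else 1.0 / (2.0 * math.sqrt(t))

def delta(xi):
    # a gauge: tiny near the singularity handled by the tag 0, and
    # proportional to xi elsewhere so that bisection terminates
    return 1e-6 if xi == 0 else xi / 8.0

def delta_fine_division(a, b):
    # bisect until [a, b] fits inside (a - delta(a), a + delta(a)),
    # so the left endpoint a can serve as the tag of [a, b]
    if b - a < delta(a):
        return [(a, a, b)]
    m = (a + b) / 2.0
    return delta_fine_division(a, m) + delta_fine_division(m, b)

division = delta_fine_division(0.0, 1.0)
riemann_sum = sum(f(xi) * (t1 - t0) for (xi, t0, t1) in division)
# the Riemann sum approximates the integral of f over [0, 1], which is 1,
# even though f is unbounded and not Riemann integrable in the usual sense
assert abs(riemann_sum - 1.0) < 0.05
```

Shrinking the gauge refines the divisions and improves the approximation; this is exactly the mechanism the ε–δ definitions above formalize.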
We clarify here that, unlike the proof by Congxin and Xiaobo, which is based on the Fréchet differentiability of the Bochner–Lebesgue integral, Hönig's idea to prove the equivalence between the Bochner–Lebesgue integral and the integral we refer to as the Henstock–McShane integral uses the fact that the indefinite integral of a Henstock–McShane integrable function is itself a function of bounded variation, together with the fact that absolutely Henstock integrable functions have indefinite integrals of bounded variation. In this way, the proof provided in [131] becomes simpler. We reproduce it in the next lines, since reference [131] is not easily available. We also refer to [73] for some details.

Definition 1.89: We say that a function f: [a,b] → X is Bochner–Lebesgue integrable (we write f ∈ L¹([a,b], X)), if there exists a sequence {f_n}_{n∈ℕ} of simple functions, f_n: [a,b] → X, n ∈ ℕ, such that

(i) f_n → f almost everywhere (i.e., lim_{n→∞} ‖f_n(t) − f(t)‖ = 0 for almost every t ∈ [a,b]), and
(ii) lim_{n,m→∞} ∫_a^b ‖f_n(t) − f_m(t)‖ dt = 0.


With the notation of Definition 1.89, we define

  ∫_a^b f(t) dt = lim_{n→∞} ∫_a^b f_n(t) dt and ‖f‖₁ = ∫_a^b ‖f(t)‖ dt.

Then, the space of all equivalence classes of Bochner–Lebesgue integrable functions, equipped with the norm ‖f‖₁, is complete. The next definition can be found in [239], for instance.

Definition 1.90: We say that a function f: [a,b] → X is measurable whenever there is a sequence of simple functions f_n: [a,b] → X such that f_n → f almost everywhere. When this is the case,

  f ∈ L¹([a,b], X) if and only if ∫_a^b ‖f(t)‖ dt < ∞.    (1.A.1)

Again, we make explicit the “name” of the integral we are dealing with whenever we believe there is room for ambiguity.

As we mentioned earlier, when only real-valued functions are considered, the Lebesgue integral is equivalent to a modified version of the Kurzweil–Henstock (or Perron) integral called the McShane integral. The idea of slightly modifying the definition of the Kurzweil–Henstock integral is due to E. J. McShane [173, 174]. Instead of taking tagged divisions of an interval [a,b], McShane considered what we call semitagged divisions, that is, a = t₀ < t₁ < ⋯ < t_{|d|} = b is a division of [a,b] and, to each subinterval [t_{i−1}, t_i], with i = 1, 2, …, |d|, we associate a point ξ_i ∈ [a,b] called the “tag” of the subinterval [t_{i−1}, t_i]. We denote such a semitagged division by d = (ξ_i, [t_{i−1}, t_i]) and, by STD_[a,b], we mean the set of all semitagged divisions of the interval [a,b].

But what is the difference between a semitagged division and a tagged division? In a semitagged division (ξ_i, [t_{i−1}, t_i]) ∈ STD_[a,b], it is not required that a tag ξ_i belong to its associated subinterval [t_{i−1}, t_i]; the subintervals need not contain their corresponding tags. Nevertheless, just as for tagged divisions, given a gauge δ of [a,b], in order for a semitagged division (ξ_i, [t_{i−1}, t_i]) ∈ STD_[a,b] to be δ-fine, we require that

  [t_{i−1}, t_i] ⊂ {t ∈ [a,b]: |t − ξ_i| < δ(ξ_i)}, for all i = 1, 2, …, |d|.

This simple modification provides an elegant characterization of the Lebesgue integral through Riemann sums (see [174]).

Let us denote by KMS([a,b], ℝ) the space of all real-valued Kurzweil–McShane integrable functions f: [a,b] → ℝ, that is, f ∈ KMS([a,b], ℝ) is integrable in the sense of Kurzweil with the modification of McShane. Formally, we have the


next definition, which can be extended straightforwardly to Banach space-valued functions.

Definition 1.91: We say that f: [a,b] → ℝ is Kurzweil–McShane integrable, and we write f ∈ KMS([a,b], ℝ), if and only if there exists I ∈ ℝ such that for every ε > 0, there is a gauge δ on [a,b] such that

  |I − Σ_{i=1}^{|d|} f(ξ_i)(t_i − t_{i−1})| < ε,

whenever d = (ξ_i, [t_{i−1}, t_i]) ∈ STD_[a,b] is δ-fine. We denote the Kurzweil–McShane integral of a function f: [a,b] → ℝ by (KMS) ∫_a^b f(t) dt.

The following inclusions hold:

  R([a,b], ℝ) ⊂ L¹([a,b], ℝ) = KMS([a,b], ℝ) ⊂ K([a,b], ℝ) = H([a,b], ℝ).

Moreover, K([a,b], ℝ) ⧵ L¹([a,b], ℝ) ≠ ∅, as one can note by the next classical example.

Example 1.92: Let F: [0,1] → ℝ be defined by F(t) = t² sin(t⁻²), if 0 < t ⩽ 1, and F(0) = 0, and consider f = F′. Since f is improper Riemann integrable, f ∈ K([a,b], ℝ) = H([a,b], ℝ), because the Kurzweil–Henstock (or Perron) integral contains its improper integrals (see Theorem 2.9, [158], or [213]). However, f ∉ L¹([a,b], ℝ), since f is not absolutely integrable (see also [227]).

Example 1.92 tells us that the elements of K([a,b], ℝ) = H([a,b], ℝ) need not be absolutely integrable. When McShane's idea is applied to Kurzweil and Henstock vector integrals, the story changes. In fact, the modification of McShane applied to the Kurzweil vector integral originates an integral which encompasses the Bochner–Lebesgue integral (see Example 1.74). On the other hand, when McShane's idea is used to modify the variational Henstock integral, we obtain exactly the Bochner–Lebesgue integral (see [47] and [131]). Thus, if HMS([a,b], X) denotes the space of Henstock–McShane integrable functions f: [a,b] → X, that is, f ∈ HMS([a,b], X) is integrable in the sense of Henstock with the modification of McShane, then HMS([a,b], X) = L¹([a,b], X). We will prove this equality in the sequel.
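The failure of absolute integrability in Example 1.92 can be checked numerically: at the points t_k = (π/2 + kπ)^(−1/2) the primitive F takes the values ±(π/2 + kπ)⁻¹, so the partial sums of Σ_k |F(t_k) − F(t_{k+1})| grow like a harmonic series, showing that F is not of bounded variation and hence that f = F′ cannot be Lebesgue integrable (a sketch, not from the text):

```python
import math

def F(t):
    return 0.0 if t == 0 else t * t * math.sin(t ** -2)

def variation_lower_bound(K):
    # sum |F(t_k) - F(t_{k+1})| over the extremum points t_k = (pi/2 + k*pi)**-0.5,
    # a lower bound for the total variation of F on [0, 1]
    t = [(math.pi / 2 + k * math.pi) ** -0.5 for k in range(K + 1)]
    return sum(abs(F(t[k]) - F(t[k + 1])) for k in range(K))

# the partial sums keep growing without bound (like a harmonic series)
assert variation_lower_bound(100000) > variation_lower_bound(1000) + 1.0
```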
Furthermore, HMS([a,b], X) ⊂ H([a,b], X), KMS([a,b], X) ⊂ K([a,b], X), and RMS([a,b], X) ⊂ R([a,b], X), where we use the notation KMS([a,b], X) and RMS([a,b], X) to denote, respectively, the spaces of Kurzweil–McShane and Riemann–McShane integrable functions from [a,b] to X. For other interesting results, the reader may want to consult [55].


Our aim in the remainder of this chapter is to show that the integrals of Bochner–Lebesgue and Henstock–McShane coincide. See [132, Theorem 10.4]. The next results are due to C. S. Hönig. They belong to a brochure of a series of lectures Professor Hönig gave in Rio de Janeiro in 1993. We include the proofs here, since the brochure is in Portuguese.

Lemma 1.93: Let {f_n}_{n∈ℕ} be a sequence in KMS([a,b], X) and f: [a,b] → X be a function. Suppose that

  lim_{n→∞} (KMS) ∫_a^b ‖f_n(t) − f(t)‖ dt = 0.

Then, f ∈ KMS([a,b], X) and

  lim_{n→∞} (KMS) ∫_a^b f_n(t) dt = (KMS) ∫_a^b f(t) dt.

Proof. Given ε > 0, take n_ε such that for m, n ⩾ n_ε,

  (KMS) ∫_a^b ‖f_n(t) − f_m(t)‖ dt < ε,

and take a gauge δ on [a,b] such that for every δ-fine d = (ξ_i, [t_{i−1}, t_i]) ∈ STD_[a,b],

  Σ_{i=1}^{|d|} ‖f_{n_ε}(ξ_i) − f(ξ_i)‖ (t_i − t_{i−1}) < ε.    (1.A.2)

The limit I = lim_{n→∞} (KMS) ∫_a^b f_n(t) dt exists, since for m, n ⩾ n_ε,

  ‖(KMS) ∫_a^b f_n(t) dt − (KMS) ∫_a^b f_m(t) dt‖ ⩽ (KMS) ∫_a^b ‖f_n(t) − f(t)‖ dt + (KMS) ∫_a^b ‖f(t) − f_m(t)‖ dt ⩽ 2ε.

Hence, if I_n = (KMS) ∫_a^b f_n(t) dt, then

  ‖Σ_{i=1}^{|d|} f(ξ_i)(t_i − t_{i−1}) − I‖ ⩽ Σ_{i=1}^{|d|} ‖f(ξ_i) − f_{n_ε}(ξ_i)‖ (t_i − t_{i−1}) + ‖Σ_{i=1}^{|d|} f_{n_ε}(ξ_i)(t_i − t_{i−1}) − I_{n_ε}‖ + ‖I_{n_ε} − I‖.

Thus, the first summand on the right-hand side of the last inequality is smaller than ε by (1.A.2), the third summand is smaller than ε by the definition of n_ε and, if we refine the gauge δ, we may suppose, by the definition of I_{n_ε}, that the second summand is smaller than ε, and the proof is complete. ◽

Appendix 1.A: The McShane Integral

We show next that Lemma 1.93 remains valid if we replace KMS by HMS, that is, if instead of the space KMS([a, b], X) of Kurzweil–McSchane integrable functions, we consider the space HMS([a, b], X) of Henstock–McSchane integrable functions. { } Lemma 1.94: Let fn n∈ℕ be a sequence in HMS([a, b], X) and f ∶ [a, b] → X be b ‖ a function. If limn→∞ ∫a ‖ ‖fn (t) − f (t)‖ dt = 0, then f ∈ HMS([a, b], X) and b

lim (KMS)

n→∞

∫a

b

fn (t) dt = (KMS)

∫a

f (t) dt.

Proof. By Lemma 1.93, f ∈ KMS([a, b], X), and we have the convergence of the integrals. It remains to prove that f ∈ HMS([a, b], X), that is, for every 𝜖 > 0, there exists a gauge 𝛿 on [a, b] such that for every 𝛿-fine d = (𝜉i , [ti−1 , ti ]) ∈ TPD[a,b] , |d| ‖ ti ‖ ∑ ‖ ‖ f (t) dt − f (𝜉i )(ti − ti−1 )‖ ⩽ 𝜖. ‖(KMS) ‖ ‖ ∫ ti−1 i=1 ‖ ‖ However, |d| ‖ |d| ‖ ti ti [ ‖ ∑ ∑ ] ‖ ‖ ‖ ‖ ‖ f (t) − fn (t) dt‖ f (t) dt − f (𝜉i )(ti − ti−1 )‖ ⩽ ‖(KMS) ‖(KMS) ‖ ‖ ‖ ‖ ∫ti−1 ∫ti−1 i=1 ‖ ‖ i=1 ‖ ‖ |d| ‖ |d| t ‖ i ∑‖ ‖ ∑‖ ‖ + fn (t) dt − fn (𝜉i )(ti − ti−1 )‖ + ‖(KMS) ‖fn (𝜉i ) − f (𝜉i )‖ (ti − ti−1 ). ‖ ‖ ∫ t i−1 i=1 ‖ ‖ i=1 b ‖ Since ∫a ‖ ‖fn (t) − f (t)‖ dt → 0 as n tends to infinity, there exists n𝜖 > 0 such that the first summand in the last inequality is smaller than 3𝜖 for all n ⩾ n𝜖 . Choose an n ⩾ n𝜖 . Then, we can take 𝛿 such that the third summand is smaller than 3𝜖 , b ‖ because it approaches ∫a ‖ ‖fn (t) − f (t)‖ dt. In addition, once fn ∈ HMS([a, b], X), we can refine 𝛿 so that the second summand becomes smaller than 3𝜖 , and we finish the proof. ◽

For a proof of the next lemma, it is enough to adapt the proof found in [107, Theorem 16] to the case of Banach space-valued functions.

Lemma 1.95: L¹([a,b], X) ⊂ KMS([a,b], X).

Now, we are able to prove the next inclusion.

Theorem 1.96: L¹([a,b], X) ⊂ HMS([a,b], X).

Proof. By Lemma 1.95, L¹([a,b], X) ⊂ KMS([a,b], X). Then, following the steps of the proof of Lemma 1.95 and using Lemma 1.94, we obtain the desired result. ◽


For the next result, which says that the indefinite integral of any function of HMS([a,b], X) belongs to BV([a,b], X), we employ a trick based on the fact that if g = f almost everywhere, then g ∈ HMS([a,b], X) and g̃ = f̃, that is, the indefinite integrals of f and g coincide. This fact follows by a straightforward adaptation of [108, Theorem 9.10] to Banach space-valued functions (see also [70]). Thus, if we change a function f ∈ HMS([a,b], X) on a set of Lebesgue measure zero, its indefinite integral does not change. Therefore, we may assume, for instance, that f vanishes at such points.

Lemma 1.97: If f ∈ HMS([a,b], X), then f̃ ∈ BV([a,b], X).

Proof. It is enough to show that every ξ ∈ [a,b] has a neighborhood where f̃ is of bounded variation. By hypothesis, given ε > 0, there exists a gauge δ on [a,b] such that for every δ-fine semitagged division d = (ξ_i, [t_{i−1}, t_i]) of [a,b],

  Σ_{i=1}^{|d|} ‖f̃(t_i) − f̃(t_{i−1}) − f(ξ_i)(t_i − t_{i−1})‖ < ε.    (1.A.3)

Let s₀ < s₁ < ⋯ < s_m be any division of [ξ − δ(ξ), ξ + δ(ξ)]. If we take ξ_j = ξ for j = 1, 2, …, m, then the point-interval pairs (ξ_j, [s_{j−1}, s_j]) form a δ-fine tagged partial division of [ξ − δ(ξ), ξ + δ(ξ)] and, therefore, from (1.A.3) and since we can assume, without loss of generality (see the comments in the paragraph before the statement), that f(ξ_j) = f(ξ) = 0 for j = 1, 2, …, m, we have

  Σ_{j=1}^m ‖f̃(s_j) − f̃(s_{j−1})‖ < ε

and the proof is complete. ◽

Lemma 1.98: Suppose f ∈ H([a,b], X). The following properties are equivalent:

(i) f is absolutely integrable;
(ii) f̃ ∈ BV([a,b], X).

Proof. (i) ⇒ (ii). Suppose f is absolutely integrable. Since the variation of f̃, var_a^b(f̃), is given by

  var_a^b(f̃) = sup { Σ_{i=1}^{|d|} ‖f̃(t_i) − f̃(t_{i−1})‖ : d = (t_i) ∈ D_[a,b] },

we have

  Σ_{i=1}^{|d|} ‖f̃(t_i) − f̃(t_{i−1})‖ = Σ_{i=1}^{|d|} ‖∫_{t_{i−1}}^{t_i} f(t) dt‖ ⩽ Σ_{i=1}^{|d|} ∫_{t_{i−1}}^{t_i} ‖f(t)‖ dt = ∫_a^b ‖f(t)‖ dt.

(ii) ⇒ (i). Suppose f̃ ∈ BV([a,b], X). We prove that the integral ∫_a^b ‖f(t)‖ dt exists and ∫_a^b ‖f(t)‖ dt = var_a^b(f̃). Given ε > 0, we need to find a gauge δ on [a,b] such that

  |Σ_{i=1}^{|d|} ‖f(ξ_i)‖ (t_i − t_{i−1}) − var_a^b(f̃)| < ε,

whenever d = (ξ_i, [t_{i−1}, t_i]) ∈ TD_[a,b] is δ-fine. However,

  |Σ_{i=1}^{|d|} ‖f(ξ_i)‖ (t_i − t_{i−1}) − var_a^b(f̃)|
  ⩽ Σ_{i=1}^{|d|} | ‖f(ξ_i)‖ (t_i − t_{i−1}) − ‖∫_{t_{i−1}}^{t_i} f(t) dt‖ | + |Σ_{i=1}^{|d|} ‖∫_{t_{i−1}}^{t_i} f(t) dt‖ − var_a^b(f̃)|
  ⩽ Σ_{i=1}^{|d|} ‖f(ξ_i)(t_i − t_{i−1}) − ∫_{t_{i−1}}^{t_i} f(t) dt‖ + |Σ_{i=1}^{|d|} ‖f̃(t_i) − f̃(t_{i−1})‖ − var_a^b(f̃)|.    (1.A.4)

By the definition of var_a^b(f̃), we may take (t_i) ∈ D_[a,b] such that the last summand in (1.A.4) is smaller than ε/2. Because f ∈ H([a,b], X), we may take a gauge δ such that for every δ-fine (ξ_i, [t_{i−1}, t_i]) ∈ TD_[a,b], the first summand in (1.A.4) is also smaller than ε/2 (and we may suppose that the points chosen for the second summand are the points of the δ-fine tagged division (ξ_i, [t_{i−1}, t_i])). ◽

The next result is a consequence of the fact that HMS([a,b], X) ⊂ H([a,b], X) and of Lemmas 1.97 and 1.98. A proof of it can be found in [132, Theorem 10.3].

Corollary 1.99: All functions of HMS([a,b], X) are absolutely integrable.

The reader can find a proof of the next lemma in [35, Theorem 9].

Lemma 1.100: All functions of H([a,b], X) are measurable.

Finally, we can prove the following inclusion.

Theorem 1.101: HMS([a,b], X) ⊂ L¹([a,b], X).

Proof. The result follows from the facts that all functions of H([a,b], X) and, hence, of HMS([a,b], X) are measurable (Lemma 1.100), and all functions of HMS([a,b], X) are absolutely integrable by Corollary 1.99. ◽


As we mentioned before, the inclusion L¹([a,b], X) ⊂ KMS([a,b], X) always holds. However, when X is an infinite dimensional Banach space, then KMS([a,b], X) ⧵ L¹([a,b], X) ≠ ∅, as shown by the next result due to C. S. Hönig (a personal communication to his students in 1990 at the University of São Paulo), presented in [73].

Proposition 1.102 (Hönig): If X is an infinite dimensional Banach space, then there exists f ∈ KMS([a,b], X) ⧵ L¹([a,b], X).

Proof. Let dim X denote the dimension of X. If dim X = ∞, then the Dvoretzky–Rogers theorem (see [60] and also [57]) implies there exists a sequence {x_n}_{n∈ℕ} in X which is summable but not absolutely summable. Thus, if we define a function f: [1,∞) → X by f(t) = x_n, for n ⩽ t < n + 1, then (KMS) ∫ f(t) dt = Σ_{n∈ℕ} x_n ∈ X, whenever the integral exists. However, f ∉ L¹([1,∞), X), since

  ∫ ‖f(t)‖ dt = ‖x₁‖ + ‖x₂‖ + ‖x₃‖ + ⋯ = ∞,

and this completes the proof. ◽
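For the concrete case X = l²(ℕ), a sequence that is summable but not absolutely summable is x_n = e_n/n; the following sketch (not from the text; for a general infinite dimensional X the Dvoretzky–Rogers theorem is genuinely needed) checks the two defining features numerically:

```python
import math

N = 10000
# partial sums of x_n = e_n / n in l2: the N-th partial sum is the vector
# (1, 1/2, ..., 1/N, 0, 0, ...); the tails have l2-norm -> 0, so the series
# converges in l2, while sum ||x_n|| = sum 1/n diverges
tail_norm = math.sqrt(sum(1.0 / n ** 2 for n in range(N, 2 * N)))
abs_partial = sum(1.0 / n for n in range(1, N + 1))
assert tail_norm < 0.02      # Cauchy in l2: tails are small
assert abs_partial > 9.0     # harmonic partial sums exceed any fixed bound
```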

In the next example, borrowed from [73, Example 3.4], we exhibit a Banach space-valued function which is integrable in the variational Henstock sense and also in the sense of Kurzweil–McShane but which, nevertheless, is not absolutely integrable.

Example 1.103: Let f: [0,1] → l²(ℕ) be given by

  f(t) = (2ⁱ/i) e_i, for 1/2ⁱ ⩽ t < 1/2^{i−1} and i ∈ ℕ.

Then,

  ∫_{1/2ⁱ}^{1/2^{i−1}} (2ⁱ/i) e_i dt = (1/i) e_i,

and the sequence {(1/i) e_i}_{i∈ℕ} is summable in l²(ℕ). Since the Henstock integral contains its improper integrals (and the same applies to the Kurzweil integral), we have f ∈ H([0,1], l²(ℕ)). However, f ∉ L¹([0,1], l²(ℕ)), because the sequence {(1/i) e_i}_{i∈ℕ} is not absolutely summable. By the Monotone Convergence Theorem for the Kurzweil–McShane integral (which follows the ideas of [71] with obvious adaptations), f ∈ KMS([0,1], l²(ℕ)). But f ∉ RMS([0,1], l²(ℕ)), since f is not bounded, where by RMS([a,b], X) we denote the space of Riemann–McShane integrable functions from [a,b] to X.
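The two series behind Example 1.103 can be checked numerically: ‖Σ_{i⩽M} (1/i)e_i‖₂ = (Σ_{i⩽M} 1/i²)^{1/2} converges (to π/√6), while Σ_{i⩽M} ‖(1/i)e_i‖ = Σ_{i⩽M} 1/i diverges (a sketch, not from the text):

```python
import math

M = 100000
# || sum of (1/i) e_i || in l2 equals sqrt(sum 1/i^2), which converges
l2_norm_partial = math.sqrt(sum(1.0 / i ** 2 for i in range(1, M + 1)))
# sum of || (1/i) e_i || = sum 1/i, which diverges (harmonic series)
abs_sum_partial = sum(1.0 / i for i in range(1, M + 1))
assert abs(l2_norm_partial - math.pi / math.sqrt(6)) < 1e-4
assert abs_sum_partial > 12.0
```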


2 The Kurzweil Integral

Everaldo M. Bonotto 1, Rodolfo Collegari 2, Márcia Federson 3, and Jaqueline G. Mesquita 4

1 Departamento de Matemática Aplicada e Estatística, Instituto de Ciências Matemáticas e de Computação (ICMC), Universidade de São Paulo, São Carlos, SP, Brazil
2 Faculdade de Matemática, Universidade Federal de Uberlândia, Uberlândia, MG, Brazil
3 Departamento de Matemática, Instituto de Ciências Matemáticas e de Computação (ICMC), Universidade de São Paulo, São Carlos, SP, Brazil
4 Departamento de Matemática, Instituto de Ciências Exatas, Universidade de Brasília, Brasília, DF, Brazil

This chapter is devoted to the theory of integration introduced by Jaroslav Kurzweil in the form presented in his articles dated 1957, 1958, 1959, and 1962. See [147–151]. Although very short, this chapter brings the heart of the theory of generalized ordinary differential equations (ODEs), which is precisely the Kurzweil integration theory, presented here in a concise form which includes its most fundamental properties, namely those one usually expects a “good integral” to fulfill.

As pointed out by Š. Schwabik in [209] (see also [136]), it is known that in a paper published in the early 1950s, I. I. Gichman observed that the nonperiodic averaging method for ODEs proposed by Bogolyubov through certain approximations relies heavily on the continuous dependence of the solutions on a parameter (see [103] and also [144], for instance). In 1955, M. A. Krasnosel'skiĭ and S. G. Krein (see [143]), also while investigating averaging methods for ODEs, established a result on the “continuity” of integrals whose integrands f_k(x,t), k ∈ ℕ₀, are also right-hand sides of certain nonautonomous ODEs. Krasnosel'skiĭ and Krein required these integrands to be equicontinuous and uniformly bounded in order to obtain continuous dependence on a parameter.

In 1957, two papers by J. Kurzweil appeared concomitantly, referring to one another: one is a solo article in English and Russian, and the other is coauthored by Z. Vorel and appears only in Russian (see [147, 154]). In [154], Kurzweil and Vorel presented a more general form of the following result:

Generalized Ordinary Differential Equations in Abstract Spaces and Applications, First Edition. Edited by Everaldo M. Bonotto, Márcia Federson, and Jaqueline G. Mesquita. © 2021 John Wiley & Sons, Inc. Published 2021 by John Wiley & Sons, Inc.



Let fk ∶ 𝒪 × [0, T] → ℝⁿ, k ∈ ℕ0, be a sequence of functions, where 𝒪 is an open subset of the real line ℝ. Let xk be a solution of the differential equation

ẋ = fk(x, t),  x(0) = 0,

and let x0 be uniquely defined on [0, T], T > 0. If

Fk(x, t) = ∫_0^t fk(x, s)ds → ∫_0^t f0(x, s)ds = F0(x, t)

uniformly as k → ∞, and if the functions fk(x, t), k ∈ ℕ0, are equicontinuous in x for fixed t, then, for sufficiently large k, the solutions xk are defined on [0, T] and xk(t) → x0(t) as k → ∞, uniformly on [0, T].

Then, in [147], referring to the above result, Kurzweil mentioned that, unlike what was stated by Krasnosel'skiĭ and Krein, the authors of [154] did not require uniform boundedness of the integrands fk(x, t), k ∈ ℕ0. Still concerning the continuous dependence result of [154], it is among the best possible results, according to Z. Artstein in [8] and as quoted in [136]. On the other hand, it raised the question of how to relate the solutions of ODEs with right-hand sides fk(x, t), k ∈ ℕ0 (ℕ0 = ℕ ∪ {0}), to the indefinite integrals

Fk(x, t) = ∫_0^t fk(x, s)ds,  k ∈ ℕ0.

This issue was solved by Kurzweil in [147], when he introduced the concept of generalized ODEs, whose framework is the scope of this book. In this chapter, as we said at the beginning, we focus our attention on the theory of integration introduced in 1957 in the paper [147], where Kurzweil defined what he called the generalized Perron integral, which we call the Kurzweil integral and which appears on the right-hand side of generalized ODEs. Other references for the Kurzweil integration theory are [153, 179, 209].

2.1 The Main Background

2.1.1 Definition and Compatibility

With the terminologies and notations of Chapter 1, we present the definition of a Kurzweil integrable function U defined on a square [a, b] × [a, b], with −∞ < a ⩽ b < ∞, taking values in an arbitrary Banach space X, endowed with a norm denoted by ‖ ⋅ ‖.

Definition 2.1: Let X be a Banach space. A function U ∶ [a, b] × [a, b] → X is called Kurzweil integrable on [a, b] if there is an element I ∈ X such that for each


𝜖 > 0, there is a gauge 𝛿 on [a, b] such that

‖∑_{i=1}^{|d|} [U(𝜏i, ti) − U(𝜏i, ti−1)] − I‖ < 𝜖

for every 𝛿-fine tagged division d = (𝜏i, [ti−1, ti]), i = 1, 2, …, |d|, of [a, b]. In this case, I is called the Kurzweil integral of U over [a, b] and will be denoted by ∫_a^b DU(𝜏, t).

Sometimes we refer to the Riemann-type sum ∑_{i=1}^{|d|} [U(𝜏i, ti) − U(𝜏i, ti−1)] simply by S(U, d). When the Kurzweil integral ∫_a^b DU(𝜏, t) exists, we define ∫_b^a DU(𝜏, t) = −∫_a^b DU(𝜏, t) and ∫_a^a DU(𝜏, t) = 0.

At this point, the reader may have noticed that Definition 2.1 only makes sense if, for a given gauge 𝛿 defined on [a, b], there exists at least one 𝛿-fine tagged division d of [a, b]. This is assured by a result, stated below, known as the Cousin Lemma (or Compatibility Theorem), which can be proved by an application of the Heine–Borel Theorem. See, e.g. [172, Section S1.8].

Lemma 2.2 (Cousin Lemma): Given a gauge 𝛿 on [a, b], there is a 𝛿-fine tagged division d = (𝜏i, [ti−1, ti]), i = 1, 2, …, |d|, of [a, b].

In order to extend the Kurzweil integral to unbounded intervals of ℝ, we need to define 𝛿-neighborhoods of −∞ and ∞. Consider the extended real line ℝ̄ = ℝ ∪ {−∞} ∪ {∞} with the operations: 0 ⋅ (±∞) = 0 = (±∞) ⋅ 0, x + (±∞) = ±∞ = (±∞) + x for x ∈ ℝ, x ⋅ (±∞) = ±∞ = (±∞) ⋅ x for x > 0, and x ⋅ (±∞) = ∓∞ = (±∞) ⋅ x for x < 0. Take 𝛿(−∞) > 0 and 𝛿(∞) > 0 and consider the neighborhoods

[−∞, −1/𝛿(−∞))  and  (1/𝛿(∞), ∞].

Following the ideas of [12], for instance, consider U ∶ ℝ̄ × ℝ̄ → X and set U(−∞, −∞) = U(−∞, t) = U(∞, ∞) = U(∞, t) = 0 for all t ∈ ℝ. Then, given a gauge 𝛿 on ℝ̄, that is, a function 𝛿 ∶ ℝ̄ → (0, ∞), consider a 𝛿-fine tagged division of ℝ̄, say,

d = {(𝜏0, [−∞, t0]), (𝜏1, [t0, t1]), …, (𝜏|d|, [t|d|−1, t|d|]), (𝜏|d|+1, [t|d|, ∞])},

with

𝜏i ∈ [ti−1, ti] ⊂ (𝜏i − 𝛿(𝜏i), 𝜏i + 𝛿(𝜏i)),  i = 1, …, |d|,
[−∞, t0] ⊂ [−∞, −1/𝛿(−∞))  and  [t|d|, ∞] ⊂ (1/𝛿(∞), ∞].    (2.1)




This definition of 𝛿-fineness forces the tags 𝜏0 and 𝜏|d|+1 to satisfy 𝜏0 = −∞ and 𝜏|d|+1 = ∞. Consequently, the corresponding Riemann-type sum becomes

S(U, d) = ∑_{i=1}^{|d|} [U(𝜏i, ti) − U(𝜏i, ti−1)],

since U(−∞, t0 ) − U(−∞, −∞) = 0 and U(∞, ∞) − U(∞, t|d| ) = 0. In this manner, the results involving the Kurzweil integral of a function U ∶ [a, b] × [a, b] → X can be extended or easily adapted to the case where U ∶ J × J → X, with J being any unbounded interval of the real line ℝ (i.e. J = (−∞, ∞), J = [c, ∞) or J = (−∞, c]).
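To make Definition 2.1 and the Cousin Lemma concrete, here is a minimal numerical sketch (ours, not from the book; all function names are illustrative). It builds a 𝛿-fine tagged division of [0, 1] by the bisection argument behind Lemma 2.2 and evaluates the Riemann-type sum S(U, d) for U(𝜏, t) = 𝜏²t, for which S(U, d) approximates ∫_0^1 s² ds = 1/3.

```python
# Illustrative sketch: a delta-fine tagged division via bisection (the idea
# behind the Cousin Lemma) and the Riemann-type sum S(U, d) of Definition 2.1.

def cousin_division(a, b, delta):
    """Return a delta-fine tagged division [(tau, t0, t1), ...] of [a, b].

    Tags are midpoints; (tau, [t0, t1]) is delta-fine when [t0, t1] is
    contained in (tau - delta(tau), tau + delta(tau)).  This simple recursion
    terminates for gauges bounded away from zero; a completely general gauge
    needs the compactness argument of the Cousin Lemma itself.
    """
    tau = (a + b) / 2.0
    if (b - a) / 2.0 < delta(tau):        # [a, b] fits inside the tau-window
        return [(tau, a, b)]
    return cousin_division(a, tau, delta) + cousin_division(tau, b, delta)

def kurzweil_sum(U, division):
    """S(U, d) = sum of U(tau_i, t_i) - U(tau_i, t_{i-1})."""
    return sum(U(tau, t1) - U(tau, t0) for (tau, t0, t1) in division)

U = lambda tau, t: tau ** 2 * t           # S(U, d) approximates ∫ s² ds = 1/3
d = cousin_division(0.0, 1.0, lambda t: 0.01)
print(len(d), kurzweil_sum(U, d))
```

For a constant gauge this reduces to an ordinary midpoint rule; the point of the gauge formulation is that 𝛿 may shrink near problematic points, forcing tags to land exactly there.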

2.1.2 Special Integrals

In the following lines, we specify some particular cases of the Kurzweil integral. In Definition 2.1, when we consider a BT(L(X, Y), X, Y), where X and Y are Banach spaces, and 𝛼 ∶ [a, b] → L(X, Y), f ∶ [a, b] → X, and U ∶ [a, b] × [a, b] → Y are functions such that U(𝜏, t) = 𝛼(𝜏)f(t), we obtain the Kurzweil vector integral as presented in Definition 1.37, that is,

∫_a^b DU(𝜏, t) = ∫_a^b 𝛼(s)df(s).

Similarly, when U(𝜏, t) = 𝛼(t)f(𝜏), we obtain the integral as presented in Definition 1.38, that is,

∫_a^b DU(𝜏, t) = ∫_a^b d𝛼(s)f(s).

In particular, when f ∶ [a, b] → ℝⁿ, g ∶ [a, b] → ℝ, and U(𝜏, t) = f(𝜏)g(t), we have

∫_a^b DU(𝜏, t) = ∫_a^b f(s)dg(s)

and, here, f ∈ Kg([a, b], ℝⁿ) = Hg([a, b], ℝⁿ) (see comments before Example 1.44). When f ∶ [a, b] → ℝⁿ and U(𝜏, t) = f(𝜏)t, we have

∫_a^b DU(𝜏, t) = ∫_a^b f(s)ds

and, in this case, f ∈ K([a, b], ℝⁿ) = H([a, b], ℝⁿ). We refer to the integrals

∫_a^b f(s)ds  and  ∫_a^b f(s)dg(s),

respectively, as the Perron and Perron–Stieltjes integrals. See Remark 1.39. We recall that, in the finite dimensional case, the Perron–Stieltjes and the Kurzweil–Henstock–Stieltjes integrals coincide. See, for instance, [120, 158].
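As a hedged numerical aside (our illustration, not the book's): when the integrator g jumps, the Stieltjes sums ∑ f(𝜏i)[g(ti) − g(ti−1)] pick up the jump multiplied by the value of f at a tag adjacent to the jump point. For a left-continuous integrator with a unit jump at 1/2, the sums reduce to f(1/2) once 1/2 is a division point.

```python
# Illustrative sketch: Perron-Stieltjes-type sums against a left-continuous
# integrator g with a unit jump at 1/2 collapse to f(1/2).

def stieltjes_sum(f, g, points):
    """Left-tagged Riemann-Stieltjes sum over consecutive division points."""
    return sum(f(points[i]) * (g(points[i + 1]) - g(points[i]))
               for i in range(len(points) - 1))

def g(t):                      # left-continuous Heaviside-type jump at 1/2
    return 1.0 if t > 0.5 else 0.0

f = lambda t: 2.0 + t          # any continuous integrand

n = 100_000
points = [i / n for i in range(n + 1)]    # 1/2 is a division point
approx = stieltjes_sum(f, g, points)
print(approx)                  # close to f(0.5) = 2.5
```

With a smooth integrator the same sums recover the classical Stieltjes integral, so one routine covers both the absolutely continuous and the jump parts of g.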


We end this section with an observation concerning other possibilities of more general integrals as, for instance, path integrals. Let [a, b] be a compact interval of ℝ and ℝ^[a,b] be the set of all functions from [a, b] to ℝ. Consider a function f defined for values 𝜏 ∈ ℝ^[a,b], that is, f ∶ ℝ^[a,b] → ℝ. Let I(ℝ^[a,b]) denote the set of all cylindrical intervals of ℝ^[a,b] (see, e.g. [183]) and consider a function 𝜇 of cylindrical intervals J of ℝ^[a,b], that is, 𝜇 ∶ I(ℝ^[a,b]) → ℝ. Then, consider Riemann-type sums of the form

∑ f(𝜏)𝜇(J),  𝜏 ∈ J,

over 𝜏-dependent divisions of the function space ℝ^[a,b]. Sometimes, these sums approximate a path-type integral denoted by

∫ f(𝜏)𝜇(I).

More generally, one can consider a function h of point-interval pairs (𝜏, J) and, in particular, such a function h can be given as h(𝜏, J) = f(𝜏)𝜇(J). An integral of type ∫ h(𝜏, J) is called a Henstock (path) integral. In this respect, the reader may want to consult [121, 122, 181–183].

Coming back to Definition 2.1, it is possible to consider U ∶ [a, b] × [a, b] → X as a particular case of a function h of point-interval pairs (𝜏, J), where 𝜏 ∈ [a, b] and J are subintervals of [a, b]. Take, for instance, J = [c, d] ⊂ [a, b] and h(𝜏, J) = U(𝜏, d) − U(𝜏, c). Then, the Riemann-type sum

∑_{i=1}^{|d|} h(𝜏i, Ji) = ∑_{i=1}^{|d|} [U(𝜏i, ti) − U(𝜏i, ti−1)],

where Ji = [ti−1, ti] and d = (𝜏i, [ti−1, ti]) ∈ TD_[a,b], may approximate an integral of type

∫_[a,b] h(𝜏, J) = ∫_a^b DU(𝜏, t),

whenever the latter exists.

2.2 Basic Properties

Similar to the Riemann and Lebesgue integrals, the Kurzweil integral presented in Definition 2.1 has the usual properties of uniqueness, linearity, and additivity




with respect to adjacent intervals, integrability on subintervals, among other properties. Here, we state those results we need, some of which are given without proofs. The proofs that we omit can be easily adapted from [209] to the case where the integrands take values in a general Banach space. Our first result concerns the linearity of the Kurzweil integral, whose proof can be carried out following [209, Theorem 1.9].

Theorem 2.3 (Linearity): If U, V ∶ [a, b] × [a, b] → X are Kurzweil integrable functions and c1, c2 ∈ ℝ, then c1U + c2V is also Kurzweil integrable and

∫_a^b D[c1U(𝜏, t) + c2V(𝜏, t)] = c1 ∫_a^b DU(𝜏, t) + c2 ∫_a^b DV(𝜏, t).

Proof. Let c1, c2 ∈ ℝ be constants and d be an arbitrary tagged division of [a, b] given by d = (𝜏j, [tj−1, tj]), j = 1, 2, …, |d|, and let S(U, d) and S(V, d) be the Riemann sums corresponding to the functions U, V ∶ [a, b] × [a, b] → X, respectively. Then,

S(c1U + c2V, d) = ∑_{j=1}^{|d|} [(c1U + c2V)(𝜏j, tj) − (c1U + c2V)(𝜏j, tj−1)]
= ∑_{j=1}^{|d|} [(c1U)(𝜏j, tj) + (c2V)(𝜏j, tj) − (c1U)(𝜏j, tj−1) − (c2V)(𝜏j, tj−1)]
= ∑_{j=1}^{|d|} [(c1U)(𝜏j, tj) − (c1U)(𝜏j, tj−1)] + ∑_{j=1}^{|d|} [(c2V)(𝜏j, tj) − (c2V)(𝜏j, tj−1)]
= c1 S(U, d) + c2 S(V, d),

which completes the proof after appropriate choice of gauges. ◽

The next result is known as the Cauchy Criterion for the Kurzweil integral. Its proof follows as in [209, Theorem 1.7], and we omit it here.

Theorem 2.4 (Cauchy Criterion): A function U ∶ [a, b] × [a, b] → X is Kurzweil integrable over [a, b] if and only if, for every 𝜖 > 0, there exists a gauge 𝛿 on [a, b] such that ‖S(U, d1) − S(U, d2)‖ < 𝜖 for all 𝛿-fine tagged divisions d1, d2 of [a, b].

The next result concerns integrability on subintervals of [a, b]. It generalizes [209, Theorem 1.10] to the case of Banach space-valued integrands. The proof follows similarly.


Theorem 2.5 (Integrability on Subintervals): Suppose U ∶ [a, b] × [a, b] → X is Kurzweil integrable over [a, b]. Then, given [c, d] ⊂ [a, b], U is also Kurzweil integrable over [c, d].

Proof. Let 𝜖 > 0 be given. By the Cauchy Criterion (Theorem 2.4), since the integral ∫_a^b DU(𝜏, t) exists, there is a gauge 𝛿 on [a, b] such that

‖S(U, d1) − S(U, d2)‖ < 𝜖    (2.2)

for all 𝛿-fine tagged divisions d1 and d2 of [a, b]. Let d̃1 and d̃2 be arbitrary 𝛿-fine tagged divisions of [c, d] and assume that a < c < d < b. Consider also 𝛿-fine tagged divisions dL of [a, c] and dR of [d, b], whose existence is guaranteed by the Cousin Lemma (Lemma 2.2). Write

d̃1 = (𝜏i, [ti−1, ti]), i = 1, 2, …, |d̃1|,
dL = (𝜏_j^L, [t_{j−1}^L, t_j^L]), j = 1, 2, …, |dL|,
dR = (𝜏_k^R, [t_{k−1}^R, t_k^R]), k = 1, 2, …, |dR|,

and define a 𝛿-fine tagged division d1 of [a, b] as the union of the 𝛿-fine tagged divisions dL, d̃1, and dR. Analogously, the union of the 𝛿-fine tagged divisions dL, d̃2, and dR generates a 𝛿-fine tagged division of [a, b], which we denote by d2. Therefore, by (2.2),

‖S(U, d̃1) − S(U, d̃2)‖ = ‖S(U, d1) − S(U, d2)‖ < 𝜖,

which yields, by the Cauchy Criterion, that the integral ∫_c^d DU(𝜏, t) exists. ◽

The additivity of the integral with respect to adjacent intervals is described in the next result. A proof of it can be carried out as in [209, Theorem 1.11] with obvious adaptations.

Theorem 2.6 (Additivity on Adjacent Intervals): Let c ∈ (a, b) and U ∶ [a, b] × [a, b] → X be a function such that U ∈ K([a, c], X) and U ∈ K([c, b], X). Then U ∈ K([a, b], X) and

∫_a^b DU(𝜏, t) = ∫_a^c DU(𝜏, t) + ∫_c^b DU(𝜏, t).

The next result is known as Saks–Henstock Lemma. See [209, Lemma 1.13], for instance. Lemma 2.7 (Saks–Henstock): Let U ∶ [a, b] × [a, b] → X be Kurzweil integrable over [a, b]. Given 𝜖 > 0, let 𝛿 be a gauge on [a, b] such that




‖∑_{i=1}^{|d|} [U(𝜏i, ti) − U(𝜏i, ti−1)] − ∫_a^b DU(𝜏, t)‖ < 𝜖    (2.3)

for every 𝛿-fine tagged division d = (𝜏i, [ti−1, ti]), i = 1, 2, …, |d|, of [a, b]. If {(𝜉j, [𝛽j, 𝛾j]) ∶ j = 1, 2, …, m} is a 𝛿-fine tagged partial division of [a, b], that is, if

a ⩽ 𝛽1 ⩽ 𝜉1 ⩽ 𝛾1 ⩽ 𝛽2 ⩽ 𝜉2 ⩽ 𝛾2 ⩽ ⋯ ⩽ 𝛽m ⩽ 𝜉m ⩽ 𝛾m ⩽ b

and the point-interval pairs (𝜉j, [𝛽j, 𝛾j]) are 𝛿-fine, that is, [𝛽j, 𝛾j] ⊂ (𝜉j − 𝛿(𝜉j), 𝜉j + 𝛿(𝜉j)) for j = 1, 2, …, m, then

‖∑_{j=1}^{m} [U(𝜉j, 𝛾j) − U(𝜉j, 𝛽j) − ∫_{𝛽j}^{𝛾j} DU(𝜏, t)]‖ ⩽ 𝜖.    (2.4)

Proof. Set 𝛾0 = a and 𝛽m+1 = b. For every j = 0, 1, …, m, it is clear that either 𝛾j = 𝛽j+1 or 𝛾j < 𝛽j+1. Whenever 𝛾j < 𝛽j+1, the Kurzweil integrability of U over [𝛾j, 𝛽j+1] follows from Theorem 2.5. Thus, for every 𝜂 > 0, there exists a gauge 𝛿j on [𝛾j, 𝛽j+1] such that

‖S(U, dj) − ∫_{𝛾j}^{𝛽j+1} DU(𝜏, t)‖ < 𝜂/(m + 1)

for every 𝛿j-fine tagged division dj of [𝛾j, 𝛽j+1]. Note that we can also take 𝛿j as a refinement of 𝛿, that is, 𝛿j satisfies 𝛿j(𝜏) < 𝛿(𝜏) for every 𝜏 ∈ [𝛾j, 𝛽j+1]. On the other hand, if 𝛾j = 𝛽j+1, we set S(U, dj) = 0. The union of the pairs (𝜉j, [𝛽j, 𝛾j]), j = 1, 2, …, m, and all the divisions dj, j = 0, 1, …, m, forms a 𝛿-fine tagged division d of [a, b] whose corresponding Riemann sum is given by

∑_{j=1}^{m} [U(𝜉j, 𝛾j) − U(𝜉j, 𝛽j)] + ∑_{j=0}^{m} S(U, dj).

Then, by (2.3), we obtain

‖∑_{j=1}^{m} [U(𝜉j, 𝛾j) − U(𝜉j, 𝛽j)] + ∑_{j=0}^{m} S(U, dj) − ∫_a^b DU(𝜏, t)‖ < 𝜖.

Note that ∫_a^b DU(𝜏, t) = ∑_{j=1}^{m} ∫_{𝛽j}^{𝛾j} DU(𝜏, t) + ∑_{j=0}^{m} ∫_{𝛾j}^{𝛽j+1} DU(𝜏, t). Hence,

‖∑_{j=1}^{m} [U(𝜉j, 𝛾j) − U(𝜉j, 𝛽j) − ∫_{𝛽j}^{𝛾j} DU(𝜏, t)]‖
= ‖∑_{j=1}^{m} [U(𝜉j, 𝛾j) − U(𝜉j, 𝛽j)] + ∑_{j=0}^{m} ∫_{𝛾j}^{𝛽j+1} DU(𝜏, t) − ∫_a^b DU(𝜏, t)‖
⩽ ‖∑_{j=1}^{m} [U(𝜉j, 𝛾j) − U(𝜉j, 𝛽j)] + ∑_{j=0}^{m} S(U, dj) − ∫_a^b DU(𝜏, t)‖ + ∑_{j=0}^{m} ‖S(U, dj) − ∫_{𝛾j}^{𝛽j+1} DU(𝜏, t)‖
< 𝜖 + (m + 1) 𝜂/(m + 1) = 𝜖 + 𝜂,

which holds for every 𝜂 > 0. Therefore, (2.4) is satisfied, and the proof is complete. ◽

As a consequence of the Saks–Henstock Lemma, we have the next result.

Corollary 2.8: Let U ∶ [a, b] × [a, b] → X be Kurzweil integrable over [a, b]. Given 𝜖 > 0, there exists a gauge 𝛿 on [a, b] such that, if [𝛾, 𝑣] ⊂ [a, b], then

(i) (𝑣 − 𝛾) < 𝛿(𝛾) implies ‖U(𝛾, 𝑣) − U(𝛾, 𝛾) − ∫_𝛾^𝑣 DU(𝜏, t)‖ < 𝜖;
(ii) (𝑣 − 𝛾) < 𝛿(𝑣) implies ‖U(𝑣, 𝑣) − U(𝑣, 𝛾) − ∫_𝛾^𝑣 DU(𝜏, t)‖ < 𝜖.

Proof. Let 𝜖 > 0 be given. Since U ∶ [a, b] × [a, b] → X is Kurzweil integrable over [a, b], there exists a gauge 𝛿 on [a, b] such that (2.3) holds for every 𝛿-fine tagged division d = (𝜏i , [ti−1 , ti ]), i = 1, 2, … , |d|, of [a, b]. For item (i), let [𝛾, 𝑣] ⊂ [a, b] be such that (𝑣 − 𝛾) < 𝛿(𝛾). Then (𝛾, [𝛾, 𝑣]) is a 𝛿-fine tagged partial division of [a, b]. For item (ii), note that if [𝛾, 𝑣] ⊂ [a, b] and (𝑣 − 𝛾) < 𝛿(𝑣), then (𝑣, [𝛾, 𝑣]) is also a 𝛿-fine tagged partial division of [a, b]. Then, the result follows easily from the Saks–Henstock Lemma. ◽ The next theorem concerns the Cauchy extension for the Kurzweil integral. Its statement says that the Kurzweil integral is invariant under Cauchy extensions. The theorem is also known as being a Hake-type theorem for the Kurzweil integral. A proof of it follows as in [209, Theorem 1.14] with obvious changes for the case of Banach space-valued functions. The reader may want to consult the results from [213] as well.




Theorem 2.9 (Cauchy Extension): If U ∶ [a, b] × [a, b] → X is a function such that, for every c ∈ [a, b), U is integrable on [a, c] and the limit

lim_{c→b−} [∫_a^c DU(𝜏, t) − U(b, c) + U(b, b)] = I ∈ X

exists, then the function U is integrable on [a, b] and it satisfies ∫_a^b DU(𝜏, t) = I. Similarly, if the function U is integrable on [c, b] for every c ∈ (a, b] and the limit

lim_{c→a+} [∫_c^b DU(𝜏, t) + U(a, c) − U(a, a)] = I ∈ X

exists, then the function U is integrable on [a, b], and we have ∫_a^b DU(𝜏, t) = I.

In particular, we have the following Hake-type theorem for Perron–Stieltjes integrals.

Corollary 2.10: Consider a pair of functions f ∶ [a, b] → X and g ∶ [a, b] → ℝ.

(i) Suppose the Perron–Stieltjes integral ∫_a^s f(t)dg(t) exists for every s ∈ [a, b), and

lim_{s→b−} [∫_a^s f(t)dg(t) + f(b)[g(b) − g(s)]] = I.

Then ∫_a^b f(t)dg(t) = I.

(ii) Suppose the Perron–Stieltjes integral ∫_s^b f(t)dg(t) exists for every s ∈ (a, b], and

lim_{s→a+} [∫_s^b f(t)dg(t) + f(a)(g(s) − g(a))] = I.

Then ∫_a^b f(t)dg(t) = I.

Proof. We prove item (i) and leave item (ii) to the reader, since it follows similarly. Consider the function U ∶ [a, b] × [a, b] → X defined by U(𝜏, t) = f(𝜏)g(t). Then U is Kurzweil integrable over [a, s] for every s ∈ [a, b), since, by hypothesis, the Perron–Stieltjes integral ∫_a^s f(t)dg(t) exists for every s ∈ [a, b). Moreover,

lim_{s→b−} [∫_a^s DU(𝜏, t) − U(b, s) + U(b, b)] = lim_{s→b−} [∫_a^s f(t)dg(t) + f(b)[g(b) − g(s)]] = I.

Hence, by the Hake-type theorem for the Kurzweil integral (Theorem 2.9), U is Kurzweil integrable over [a, b] and ∫_a^b f(t)dg(t) = ∫_a^b DU(𝜏, t) = I. ◽

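The phenomenon treated by these Hake-type results can be sketched numerically (an illustration of ours, using the classical function F(t) = t² sin(1/t²), a close relative of the function in Example 2.11 below). Its derivative f = F′ oscillates wildly on (0, 1] and is not absolutely integrable near 0, yet the truncated integrals ∫_𝜖^1 f(t)dt = F(1) − F(𝜖) converge to sin 1 as 𝜖 → 0⁺, which Corollary 2.10 (with g(s) = s) then assigns as the value of the Perron integral over [0, 1].

```python
import math

# Illustrative sketch of the Hake-type mechanism with g(s) = s:
# F(t) = t^2 sin(1/t^2) has the highly oscillating derivative f = F' on (0, 1].

def F(t):
    return t * t * math.sin(1.0 / (t * t))

def f(t):
    return 2.0 * t * math.sin(1.0 / t**2) - (2.0 / t) * math.cos(1.0 / t**2)

def midpoint_integral(func, a, b, n):
    """Plain composite midpoint rule on [a, b]."""
    h = (b - a) / n
    return h * sum(func(a + (i + 0.5) * h) for i in range(n))

# The truncated integrals match F(1) - F(eps), and F(eps) -> 0,
# so the limit as eps -> 0+ is F(1) = sin(1).
for eps in (0.5, 0.2, 0.1):
    print(eps, midpoint_integral(f, eps, 1.0, 200_000), F(1.0) - F(eps))
```

The smaller 𝜖 is, the finer the mesh needed to resolve the oscillations, which is exactly why the improper (Hake) limit, rather than a single absolutely convergent integral, is the natural object here.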

The next example, borrowed from [44], illustrates the use of the Hake theorem for Perron integrals (see Corollary 2.10).

Example 2.11: Let f ∶ [0, 1] → ℝ be the function given by f(t) = 2t sin(2/t²) − (4/t) cos(2/t²), if t ∈ (0, 1], and f(0) = 0. Note that f is a highly oscillating function which is not absolutely integrable over [0, 1], that is, f is a function of unbounded variation and, hence, it is not Lebesgue integrable over [0, 1]. Since the improper Riemann integral of f exists, using a Hake-type theorem for the Perron integral (which coincides with the Kurzweil–Henstock integral), it follows that the Perron integral of f also exists and has the same value as the improper Riemann integral of f.

If U ∶ [a, b] × [a, b] → X is a Kurzweil integrable function, its indefinite integral ∫_a^s DU(𝜏, t), s ∈ [a, b], is not always continuous. The next result describes such a behavior. In particular, it implies that the indefinite integral of U is continuous at c ∈ [a, b] if and only if U(c, ⋅) ∶ [a, b] → X is continuous at c. Its proof, which we include here, follows as in [209, Theorem 1.16].

Theorem 2.12: Let U ∶ [a, b] × [a, b] → X be Kurzweil integrable over [a, b] and c ∈ [a, b]. Then

lim_{s→c} [∫_a^s DU(𝜏, t) − U(c, s) + U(c, c)] = ∫_a^c DU(𝜏, t).

Proof. Let 𝜖 > 0 be given. Since U is integrable on [a, b], there exists a gauge 𝛿 on [a, b] such that the inequality

‖∑_{i=1}^{|d|} [U(𝜏i, ti) − U(𝜏i, ti−1)] − ∫_a^b DU(𝜏, t)‖ < 𝜖

holds for every 𝛿-fine tagged division d = (𝜏i, [ti−1, ti]), i = 1, 2, …, |d|, of [a, b].

Given 𝜖 > 0, owing to the fact that both Kurzweil integrals ∫_a^b DU(𝜏, t) and ∫_a^b DV(𝜏, t) exist, one can find a gauge 𝛿 on [a, b], with 𝛿(s) ⩽ 𝜃(s) for all s ∈ [a, b], such that, for every 𝛿-fine tagged division d = (𝜏j, [tj−1, tj]) of [a, b], with j = 1, 2, …, |d|, the following inequalities hold:

‖∑_{j=1}^{|d|} [U(𝜏j, tj) − U(𝜏j, tj−1)] − ∫_a^b DU(𝜏, t)‖ < 𝜖  and  ‖∑_{j=1}^{|d|} [V(𝜏j, tj) − V(𝜏j, tj−1)] − ∫_a^b DV(𝜏, t)‖ < 𝜖.

3 Measure Functional Differential Equations

In this chapter, we consider the measure functional differential equation (measure FDE) in its integral form

x(t) = x(t0) + ∫_{t0}^{t} f(xs, s)dg(s),  t ∈ [t0, t0 + 𝜎],    (3.3)

where 𝜎 > 0 and t0 ∈ ℝ are given, Ω ⊂ G([−r, 0], B) is any open set, with B ⊂ X an open set and r > 0, and f ∶ Ω × [t0, t0 + 𝜎] → X is a function. Besides, g ∶ [t0, t0 + 𝜎] → ℝ is a nondecreasing function and xs ∶ [−r, 0] → B is a function, called "memory function" or "history function", given as in (3.2). This memory function describes the history of events in the lapse of time between cause and effect. The integral on the right-hand side of (3.3) can be understood in the sense of Riemann–Stieltjes, Lebesgue–Stieltjes, or Perron–Stieltjes, for instance. But here we consider it in the sense of Perron–Stieltjes due to its generality. Clearly, when g(s) = s, Eq. (3.3) becomes the usual FDE described by (3.1). However, it is possible to define the function g ∶ [t0, t0 + 𝜎] → ℝ in other ways, so that different types of equations are included in the setting of Eq. (3.3). As we mentioned above, this is covered in this chapter. Moreover, one can also consider Eq. (3.3) in a more general setting, taking infinite delays, time-dependent delays, and state-dependent delays into account. See, for instance, [100, 116, 220].

Even though measure FDEs are the core of this chapter, we are also interested in investigating equations without delays, that is, measure differential equations, which we refer to simply as MDEs and which we consider in the integral form

x(t) = x(t0) + ∫_{t0}^{t} f(x(s), s)dg(s),  t ∈ [t0, t0 + 𝜎],    (3.4)

where B ⊂ X is open, t0 ∈ ℝ and 𝜎 > 0 are given, g ∶ [t0, t0 + 𝜎] → ℝ is a nondecreasing function, and f ∶ B × [t0, t0 + 𝜎] → X is Perron–Stieltjes integrable with respect to g. Considering the theory presented in [85] for the n-dimensional case, an interesting approach is to tackle analogous ideas, but now for the case of Banach space-valued functions. In [29], for instance, the authors established results on dichotomies and boundedness of solutions for abstract MDEs and impulsive differential equations (we write IDEs), using hypotheses similar to those employed in [85]. Thus, the investigation of this type of equation has been shown to be important for applications. In this chapter, we then investigate the following types of equations: measure FDEs, impulsive FDEs, functional dynamic equations on time scales, and impulsive functional dynamic equations on time scales, all of them involving Banach space-valued functions. This chapter is based on the articles [21, 82, 84–86, 178]. In the sequel, we describe how we organized it. In Section 3.1, we define what a solution of an initial value problem (we write IVP for short) for a measure FDE of the form (3.3) is and, in particular, we present a definition of a solution of an IVP concerning Eq. (3.4). In addition, we include a lemma which says that the norm of the memory function of a regulated function is also regulated. In Section 3.2, we describe a relation between the solutions of impulsive measure FDEs and the solutions of measure FDEs. In order to prove this result, we require conditions on the Perron–Stieltjes integral of a function f ∶ [a, b] → X instead of considering conditions on the integrand f itself. This slightly different way of tackling integral equations allows us to consider integrands which behave as "badly" as the Perron–Stieltjes integral can handle, and it is a very well-known fact that Perron–Stieltjes integrals cope well with many jumps and highly oscillating functions (see Chapter 1).
Still in Section 3.2, we present an example of a function whose indefinite integral satisfies a Carathéodory-type condition, showing that indeed our conditions are more general than those found in the literature for classical differential equations. Section 3.3 is based mostly on the papers [85, 86]. In this section, we recall the basic concepts and properties of the theory of dynamic equations on time scales and we prove that functional dynamic equations on time scales can be regarded as measure FDEs. We also present a correspondence between the solutions of impulsive functional dynamic equations on time scales and the solutions of impulsive




measure FDEs. This result yields the fact that measure FDEs are, in addition, a useful tool to explore impulsive functional dynamic equations on time scales. Section 3.4 is dedicated to averaging principles for measure FDEs, as well as for impulsive measure FDEs and impulsive functional dynamic equations on time scales. The main references used in this section are [82, 84–86, 178]. At first, we present periodic averaging principles for various types of equations and, then, we investigate nonperiodic averaging principles. The results concerning averaging principles are very important, since they allow us to understand, for instance, the asymptotic behavior of the solutions of nonlinear systems involving parameters by means of the solutions of "averaged" autonomous equations, which are easier to deal with.

In the last section of this chapter, namely Section 3.5, we investigate continuous dependence on time scales for impulsive functional dynamic equations on time scales. See [21], for instance. The main goal here is to provide sufficient conditions which ensure that the sequence of unique solutions of the impulsive functional dynamic equation on time scales

x(t) = x(t0) + ∫_{t0}^{t} f(xs, s)Δs + ∑_{k∈{1,…,m}, tk<t} Ik(x(tk)),  t ∈ [t0, t0 + 𝜎]_{𝕋n},
x_{t0} = 𝜙,

behaves well with respect to the underlying time scales.

3.1 Measure FDEs

We consider the measure differential equation in the integral form

x(t) = x(t0) + ∫_{t0}^{t} f(x(s), s)dg(s),  t ∈ [t0, t0 + 𝜎],    (3.6)

where t0 ∈ ℝ, 𝜎 > 0, B ⊂ X is open, f ∶ B × [t0, t0 + 𝜎] → X is a function, and g ∶ [t0, t0 + 𝜎] → ℝ is a nondecreasing function. In addition, the integral on the right-hand side of (3.6) is understood in the sense of Perron–Stieltjes. In particular, we present a definition of a solution of the initial value problem (3.6).

Definition 3.2: Let B ⊂ X be open. We say that a function x ∶ [t0, t0 + 𝜎] → X is a solution of the measure differential equation (3.6), with initial condition x(t0) = x0, if it satisfies the following conditions:

(i) x(t0) = x0 ∈ B;
(ii) x ∈ G([t0, t0 + 𝜎], B) and (x(t), t) ∈ B × [t0, t0 + 𝜎];
(iii) the Perron–Stieltjes integral ∫_{t0}^{t0+𝜎} f(x(s), s)dg(s) exists;
(iv) the equality x(t) = x0 + ∫_{t0}^{t} f(x(s), s)dg(s) holds for all t ∈ [t0, t0 + 𝜎].
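Definition 3.2 can be probed with a crude left-tagged Euler–Stieltjes iteration (a sketch of our own; the book does not propose numerical schemes, and all names here are illustrative): x_{i+1} = x_i + f(x_i, s_i)[g(s_{i+1}) − g(s_i)]. With a state-independent f and a nondecreasing integrator g that jumps by c at s = 1/2, the computed solution is regulated rather than continuous, its jump being f(x(1/2), 1/2)·Δ⁺g(1/2), so the final value has a closed form to compare against.

```python
import math

# Illustrative left-tagged Euler-Stieltjes sketch for the MDE of
# Definition 3.2, x(t) = x0 + ∫_0^t f(x(s), s) dg(s), with a jumping g.

def g(s, c=0.5):                        # nondecreasing, left-continuous
    return s + (c if s > 0.5 else 0.0)

def f(x, s):                            # state-independent, for a closed form
    return 2.0 + math.sin(s)

def solve(x0, n):
    x, h = x0, 1.0 / n
    for i in range(n):
        x += f(x, i * h) * (g((i + 1) * h) - g(i * h))   # left-tagged step
    return x

# Closed form: x0 + ∫_0^1 (2 + sin s) ds + (2 + sin(1/2)) * 0.5
exact = 1.0 + (3.0 - math.cos(1.0)) + 0.5 * (2.0 + math.sin(0.5))
print(solve(1.0, 200_000), exact)
```

The g-increments telescope, so the jump of g is captured exactly once no matter how the mesh falls; only the tag at which f multiplies the jump carries an O(1/n) error.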




Recall that any norm in the Banach space X is denoted by ‖⋅‖. We finish this section by presenting an important property of the memory function ys, s ∈ [t0, t0 + 𝜎], coming from a regulated function y ∶ [t0 − r, t0 + 𝜎] → X. This fact, borrowed from [85], is used throughout the entire book.

Lemma 3.3: Let y ∶ [t0 − r, t0 + 𝜎] → X be a regulated function. Then, the norm of the memory function [t0, t0 + 𝜎] ∋ s ↦ ‖ys‖∞ ∈ ℝ₊, where ‖ys‖∞ = sup_{𝜃∈[−r,0]} ‖ys(𝜃)‖, is regulated on [t0, t0 + 𝜎].

Proof. We need to prove that lim_{s→s0−} ‖ys‖∞ exists for every s0 ∈ (t0, t0 + 𝜎], and lim_{s→s0+} ‖ys‖∞ exists for every s0 ∈ [t0, t0 + 𝜎). We will only prove the existence of the first limit, since the second one follows analogously. Let s0 ∈ (t0, t0 + 𝜎]. By hypothesis, y is regulated. Thus, y satisfies the Cauchy condition at s0 − r and s0, which means that, given 𝜖 > 0, there exists 𝛿 ∈ (0, s0 − t0) such that the following inequalities hold:

‖y(u) − y(𝑣)‖ < 𝜖,  u, 𝑣 ∈ (s0 − r − 𝛿, s0 − r),    (3.7)

and

‖y(u) − y(𝑣)‖ < 𝜖,  u, 𝑣 ∈ (s0 − 𝛿, s0).    (3.8)

Let s1, s2 be such that s0 − 𝛿 < s1 < s2 < s0. If s2 − r ⩾ s1, then (3.7) yields ‖y(s)‖ < ‖y(s2 − r)‖ + 𝜖 ⩽ ‖ys2‖∞ + 𝜖 for every s ∈ [s1 − r, s2 − r]. If, on the other hand, s2 − r < s1, then ‖y(s)‖ ⩽ ‖ys2‖∞ for every s ∈ [s2 − r, s1], whence

‖y(s)‖ < ‖ys2‖∞ + 𝜖,  for every s ∈ [s1 − r, s1].

In any case, ‖ys1‖∞ ⩽ ‖ys2‖∞ + 𝜖. Finally, (3.8) yields ‖ys2‖∞ ⩽ ‖ys1‖∞ + 𝜖 and, hence,

| ‖ys1‖∞ − ‖ys2‖∞ | ⩽ 𝜖,  for every s1, s2 ∈ (s0 − 𝛿, s0),

which ensures the existence of lim_{s→s0−} ‖ys‖∞ for s0 ∈ (t0, t0 + 𝜎]. ◽

3.2 Impulsive Measure FDEs

In this section, our goal is to investigate impulsive measure FDEs and to show how to relate these equations to measure FDEs. In fact, we are going to prove that a certain type of impulsive measure FDE can be regarded as a measure FDE. The main reference for this section is [86].

Let {tk}_{k=1}^{m} be moments of impulse effects, which we assume to form an increasing sequence, that is, t0 ⩽ t1 < ⋯ < tm < t0 + 𝜎. Set J0 = [t0, t1] and Jk = (tk, tk+1],


for k ∈ {1, …, m − 1}, and Jm = (tm, t0 + 𝜎]. We consider the impulsive measure FDE given by

x(𝑣) − x(u) = ∫_u^𝑣 f(xs, s)dg(s),  u, 𝑣 ∈ Jk, k ∈ {0, …, m},
Δ⁺x(tk) = Ik(x(tk)),  k ∈ {1, …, m},    (3.9)
x_{t0} = 𝜙,

where Δ⁺x(tk) = x(tk⁺) − x(tk), t0 ∈ ℝ, 𝜎 > 0, B ⊂ X is open, P = G([−r, 0], B), f ∶ P × [t0, t0 + 𝜎] → X is a function, g ∶ [t0, ∞) → ℝ is a left-continuous and nondecreasing function, and, for each k ∈ {1, …, m}, Ik ∶ B → X is an impulse operator.

Let x ∶ [t0 − r, t0 + 𝜎] → X be a solution of system (3.9). In order to have x(t) ∈ B for all t ∈ [t0 − r, t0 + 𝜎], we need to impose that x(tk⁺) ∈ B for every k ∈ {1, …, m}, so that, after the action of the impulse operator at time tk, the solution remains in B. Thus, we shall assume that I + Ik ∶ B → B for every k ∈ {1, …, m}, where I ∶ B → B is the identity operator.

We start by recalling that, from the properties of Perron–Stieltjes integrals, given u, 𝑣 ∈ Jk, we have

∫_u^𝑣 f(xs, s)dg(s) = ∫_u^𝑣 f(xs, s)dg̃(s),

where g̃ ∶ [t0, t0 + 𝜎] → ℝ is such that g̃ − g is constant on each interval Jk. Due to this fact, we can assume, without loss of generality, that g satisfies Δ⁺g(tk) = 0 for every k ∈ {1, …, m}. This assumption plays an important role here, since it implies that g is continuous at the times of impulse effects. This means that the function g and the impulse operators Ik do not "jump" at the same points, having discontinuities at different times. From this fact and by Corollary 2.14, the function

[t0, t0 + 𝜎] ∋ t ↦ ∫_{t0}^{t} f(xs, s)dg(s)

is continuous at t1, …, tm. Thus, we are led to the following equivalent formulation of problem (3.9):

x(t) = x(t0) + ∫_{t0}^{t} f(xs, s)dg(s) + ∑_{k∈{1,…,m}, tk<t} Ik(x(tk)),  t ∈ [t0, t0 + 𝜎],
x_{t0} = 𝜙.

Given T ∈ ℝ, let HT denote the left-continuous Heaviside function concentrated at T, that is, HT(t) = 0 for t ⩽ T and HT(t) = 1 for t > T.




Thus, for t ∈ [t0, t0 + 𝜎], we can rewrite the previous initial value problem in the form

x(t) = x(t0) + ∫_{t0}^{t} f(xs, s)dg(s) + ∑_{k=1}^{m} Ik(x(tk))Htk(t),  t ∈ [t0, t0 + 𝜎],    (3.10)
x_{t0} = 𝜙.    (3.11)

Clearly, when g(s) = s, the initial value problem (3.10) and (3.11) is equivalent to the classic impulsive FDE (see [88, 91, 217] and the references therein) described by

x(t) = x(t0) + ∫_{t0}^{t} f(xs, s)ds + ∑_{k=1}^{m} Ik(x(tk))Htk(t),  t ∈ [t0, t0 + 𝜎],    (3.12)
x_{t0} = 𝜙.

In what follows, we present a definition of a solution of problem (3.10) and (3.11) and, then, an auxiliary result, borrowed from [86], which is essential to achieve the main results of this section, since it tells us how to turn a Perron–Stieltjes integral plus a sum carrying some impulses back into another Perron–Stieltjes integral. This feature will be used later to transform an impulsive system into a measure-type equation.

Definition 3.4: Let B ⊂ X be open and P = G([−r, 0], B). We say that a function x ∶ [t0 − r, t0 + 𝜎] → X is a solution of the impulsive measure FDE (3.10) and (3.11) if it satisfies the following conditions:

(i) x(t) = 𝜙(t − t0) for all t ∈ [t0 − r, t0];
(ii) x ∈ G([t0 − r, t0 + 𝜎], B) and (xt, t) ∈ P × [t0, t0 + 𝜎];
(iii) the Perron–Stieltjes integral ∫_{t0}^{t0+𝜎} f(xs, s)dg(s) exists;
(iv) the equality (3.10) holds for all t ∈ [t0, t0 + 𝜎].
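The mechanism formalized by Lemma 3.5 below can be previewed numerically (our own sketch, with illustrative names and a memoryless right-hand side): applying an impulse I(x) = c·x at t1 = 1/2 to the equation driven by f(x, s) = x and g(s) = s produces, in the limit, the same state as absorbing the impulse into a jump of size c of the integrator g̃ at t1; both runs approach (1 + c)e.

```python
import math

# Illustrative sketch: an impulse I(x) = c*x at t1 = 1/2 versus the same
# effect absorbed into a jump of the integrator (the idea behind Lemma 3.5).

def euler_impulsive(x0, c, n):
    """x' = x on [0, 1] with the impulse x(t1+) = (1 + c) x(t1) at t1 = 1/2."""
    x, h = x0, 1.0 / n
    for i in range(n):
        x = x + x * h
        if abs((i + 1) * h - 0.5) < h / 2:     # the step ending at t1
            x = x + c * x
    return x

def euler_measure(x0, c, n):
    """x(t) = x0 + ∫_0^t x dg~ with g~(s) = s + c*H(s - 1/2), left-continuous."""
    g = lambda s: s + (c if s > 0.5 else 0.0)
    x, h = x0, 1.0 / n
    for i in range(n):
        x = x + x * (g((i + 1) * h) - g(i * h))
    return x

a = euler_impulsive(1.0, 0.5, 200_000)
b = euler_measure(1.0, 0.5, 200_000)
print(a, b, 1.5 * math.e)       # both approach (1 + c) e
```

This is precisely why the chapter can trade the sum of impulses in (3.10) for extra jumps of the integrator: the two descriptions act identically on the solution.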

Lemma 3.5: Let m ∈ ℕ. Suppose, for each k ∈ {1, …, m}, tk ∈ [t0, t0 + 𝜎], t0 ⩽ t1 < t2 < ⋯ < tm < t0 + 𝜎, and g ∶ [t0, t0 + 𝜎] → ℝ is regulated, left-continuous on [t0, t0 + 𝜎], and continuous at tk for each k ∈ {1, …, m}. Let f ∶ [t0, t0 + 𝜎] → X be a function and consider f̃ ∶ [t0, t0 + 𝜎] → X such that f̃(t) = f(t) for every t ∈ [t0, t0 + 𝜎] ⧵ {t1, …, tm}, and take g̃ ∶ [t0, t0 + 𝜎] → ℝ such that g̃ − g is constant on each of the intervals [t0, t1], (t1, t2], …, (tm−1, tm], (tm, t0 + 𝜎]. Then, the Perron–Stieltjes integral ∫_{t0}^{t0+𝜎} f̃(s)dg̃(s) exists if and only if the Perron–Stieltjes integral ∫_{t0}^{t0+𝜎} f(s)dg(s) exists. In this case,

∫_{t0}^{t0+𝜎} f̃(s)dg̃(s) = ∫_{t0}^{t0+𝜎} f(s)dg(s) + ∑_{k∈{1,…,m}, tk<t0+𝜎} f̃(tk)Δ⁺g̃(tk).

Recall that the Perron Δ-integral ∫_a^b f(t)Δt equals I ∈ X if, for every 𝜖 > 0, there is a Δ-gauge 𝛿 = (𝛿L, 𝛿R) on [a, b]𝕋 such that

‖∑_{i=1}^{|d𝕋|} f(𝜏i)(si − si−1) − I‖ < 𝜖

for every 𝛿-fine tagged division d𝕋 = (𝜏i, [si−1, si]𝕋) of [a, b]𝕋, while the Perron–Stieltjes integral ∫_a^b f*(t)dg(t) equals I if, for every 𝜖 > 0, there is a gauge 𝛿̃ ∶ [a, b] → (0, ∞) such that

‖∑_{i=1}^{|d|} f*(𝜏i)[g(si) − g(si−1)] − I‖ < 𝜖

for every 𝛿̃-fine tagged division d = (𝜏i, [si−1, si]) of [a, b]. In this case, it follows that 𝛿R(𝜏i) = 𝜇(𝜏i), the point 𝜏i is right-scattered, and si = 𝜎(𝜏i).

Claim 2. There exists a modified tagged division d̃ = (𝜏i, [si−1, si]) of [a, b] that is 𝛿̃-fine and such that S(d̂𝕋) = S(d̃). Since d̂𝕋 is 𝛿-fine, we obtain by definition that

𝜏i − 𝛿̃(𝜏i) = 𝜏i − 𝛿L(𝜏i) ⩽ si−1 < si ⩽ 𝜏i + 𝛿R(𝜏i).    (3.33)

Therefore, let us divide the interval [si−1, si] into the two subintervals [si−1, 𝜏i + 𝛿̃(𝜏i)] and [𝜏i + 𝛿̃(𝜏i), si]. We proceed as follows: replace the division point si by 𝜏i + 𝛿̃(𝜏i) and maintain 𝜏i as the tag of the interval [si−1, 𝜏i + 𝛿̃(𝜏i)]. Then, we obtain by (3.33) that [si−1, 𝜏i + 𝛿̃(𝜏i)] ⊂ [𝜏i − 𝛿̃(𝜏i), 𝜏i + 𝛿̃(𝜏i)]. On the other hand, cover [𝜏i + 𝛿̃(𝜏i), si] using an arbitrary 𝛿̃-fine division. This construction implies that d̃ is a 𝛿̃-fine division of [a, b].

It remains to show that the equality S(d̂𝕋) = S(d̃) holds in order to conclude that Claim 2 is valid. But this follows immediately from the fact that u* = si for every u ∈ (𝜏i, si], for all i = 1, …, |d̂𝕋|. Notice that

‖∑_{i=1}^{|d̂𝕋|} f(𝜏i)(si − si−1) − ∫_a^b f*(t)dg(t)‖ = ‖S(d̂𝕋) − ∫_a^b f*(t)dg(t)‖ = ‖S(d̃) − ∫_a^b f*(t)dg(t)‖ < 𝜖,

whence the Perron Δ-integral ∫_a^b f(t)Δt exists and equals ∫_a^b f*(t)dg(t). ◽

Theorem 3.22 shows us that Stieltjes-type integrals can be used to investigate Δ-integrals on time scales. Moreover, a careful examination reveals that it is possible to generalize the previous correspondence for Stieltjes-type of Δ-integrals. In order to do this, one needs to extend the definition of the Perron Δ-integral to a Stieltjes-type integral, say, a Perron–Stieltjes Δ-integral of a function f ∶ [a, b]𝕋 → b X with respect to a function g ∶ [a, b]𝕋 → ℝ. Let ∫a f (s)Δg(s) denote such integral,

3.3 Functional Dynamic Equations on Time Scales

which can be obtained by replacing the usual Riemann-type sum in the definition of the Perron Δ-integral by

∑_{i=1}^{|d𝕋|} f(𝜏i)[g(si) − g(si−1)],

where d𝕋 = (𝜏i, [si−1, si]𝕋) is a tagged division of [a, b]𝕋. Then, as in the proof of Theorem 3.22, one can show that the resulting Stieltjes-type Δ-integral satisfies

∫_a^b f(s)Δg(s) = ∫_a^b f*(s)dg*(s),

yielding a more general correspondence than that presented in Theorem 3.22 (see [86, 179]).

In the next two results, we present important properties of the Perron–Stieltjes integral. They are borrowed from [85, 86].

Lemma 3.23: Let a, b ∈ 𝕋, a < b, and consider the function g ∶ [a, b] → ℝ given by g(t) = t* for every t ∈ [a, b]. If f ∶ [a, b] → X is such that the Perron–Stieltjes integral ∫_a^b f(s)dg(s) exists, then, for every c, d ∈ [a, b],

∫_c^d f(s)dg(s) = ∫_{c*}^{d*} f(s)dg(s).

Proof. By definition, the function g is constant on [c, c*] and on [d, d*]. Therefore, ∫_c^{c*} f(s)dg(s) = 0 and ∫_d^{d*} f(s)dg(s) = 0 and, hence,

∫_c^d f(s)dg(s) = ∫_c^{c*} f(s)dg(s) + ∫_{c*}^{d} f(s)dg(s) = ∫_{c*}^{d} f(s)dg(s) + ∫_d^{d*} f(s)dg(s) = ∫_{c*}^{d*} f(s)dg(s),

proving the desired result. ◽
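The mechanism behind Theorem 3.22 and Lemma 3.23 can be made concrete on a toy example. The following Python sketch (the finite time scale, the sample f, and the left-endpoint tags are assumptions chosen for illustration) shows that a Riemann–Stieltjes sum of f* against the step function g(t) = t* only collects the contributions f(𝜏)𝜇(𝜏) at the right-scattered points of 𝕋, which is exactly the value of the Δ-integral.

```python
import bisect

# Hypothetical finite time scale: T = {0, 1, 3, 4, 6}
T_SET = [0, 1, 3, 4, 6]

def t_star(t):
    """t* = inf{s in T : s >= t}."""
    return T_SET[bisect.bisect_left(T_SET, t)]

f = lambda t: t * t                 # sample function on the time scale
f_star = lambda t: f(t_star(t))     # extension of f to the whole interval
g = t_star                          # g(t) = t*, nondecreasing, constant off T

# Delta integral over [0, 6]: sum of f(tau) * mu(tau) over the points of T.
delta_integral = sum(f(tau) * (nxt - tau) for tau, nxt in zip(T_SET, T_SET[1:]))

# Riemann-Stieltjes sum over a fine partition of [0, 6], tags at left endpoints.
# Only subintervals starting at a point of T contribute, since g is constant
# on each gap of T (the content of Lemma 3.23).
points = [k / 10 for k in range(61)]
rs_sum = sum(f_star(points[i]) * (g(points[i + 1]) - g(points[i]))
             for i in range(len(points) - 1))
```

Both quantities agree, and they equal f(0)·1 + f(1)·2 + f(3)·1 + f(4)·2.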

Theorem 3.24: Let 𝕋 be a time scale and [a, b] ⊂ 𝕋*. Consider the function g ∶ [a, b] → ℝ defined by g(s) = s*, for every s ∈ [a, b], and functions f1, f2 ∶ [a, b] → X such that f1(t) = f2(t) for every t ∈ [a, b] ∩ 𝕋. If the Perron–Stieltjes integral ∫_a^b f1(s)dg(s) exists, then the Perron–Stieltjes integral ∫_a^b f2(s)dg(s) also exists and both integrals coincide.

Proof. Set I = ∫_a^b f1(s)dg(s). Since I exists, given an arbitrary 𝜖 > 0, there exists a gauge 𝛿1 ∶ [a, b] → (0, ∞) such that

‖∑_{i=1}^{|d|} f1(𝜏i)[g(si) − g(si−1)] − I‖ < 𝜖,


3 Measure Functional Differential Equations

for every 𝛿1-fine tagged division d = (𝜏i, [si−1, si]) of [a, b]. Now, set

𝛿2(t) = 𝛿1(t), if t ∈ [a, b] ∩ 𝕋,
𝛿2(t) = min( 𝛿1(t), (1/2) inf{|t − s| ∶ s ∈ 𝕋} ), if t ∈ [a, b] ⧵ 𝕋.

By definition, a 𝛿2-fine tagged division is also 𝛿1-fine. Consider an arbitrary 𝛿2-fine tagged division d̃ = (𝜏i, [si−1, si]) of [a, b]. For every i ∈ {1, …, |d̃|}, there are two possibilities: (1) [si−1, si] ∩ 𝕋 = ∅; (2) 𝜏i ∈ 𝕋. In case (1), g(si−1) = g(si), which implies

f2(𝜏i)(g(si) − g(si−1)) = 0 = f1(𝜏i)(g(si) − g(si−1)),

while in case (2), we have f1(𝜏i) = f2(𝜏i) and

f2(𝜏i)(g(si) − g(si−1)) = f1(𝜏i)(g(si) − g(si−1)).

Combining both cases, we obtain

‖∑_{i=1}^{|d̃|} f2(𝜏i)(g(si) − g(si−1)) − I‖ = ‖∑_{i=1}^{|d̃|} f1(𝜏i)(g(si) − g(si−1)) − I‖ < 𝜖.

Finally, by the arbitrariness of 𝜖, ∫_a^b f2(s)dg(s) = I. ◽
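Theorem 3.24 can be probed numerically: since g(t) = t* is constant on each gap of 𝕋, changing a function outside 𝕋 cannot affect its Perron–Stieltjes integral against g. A minimal Python sketch (the time scale ℤ ∩ [0, 4], the partition, and the two sample functions are illustrative assumptions):

```python
import math

# T = Z on [0, 4]; g(t) = t* = ceil(t) is constant on each gap (k, k+1).
g = lambda t: math.ceil(t)

f1 = lambda t: 2 * t                                # reference function
f2 = lambda t: 2 * t if t == int(t) else 999.0      # agrees with f1 only on T

def rs_sum(func, points):
    # Riemann-Stieltjes sum with tags at left endpoints of each subinterval.
    return sum(func(points[i]) * (g(points[i + 1]) - g(points[i]))
               for i in range(len(points) - 1))

points = [i / 5 for i in range(21)]                 # partition of [0, 4]
```

Only subintervals starting at an integer carry a nonzero increment of g, and there the tags lie in 𝕋, where f1 and f2 coincide; hence the two sums are equal.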



The next result, borrowed from [86, Theorem 4.1], follows from Theorem 3.24.

Theorem 3.25: Let f ∶ [a, b]𝕋 → X be a function such that the Perron Δ-integral ∫_a^b f(s)Δs exists for every a, b ∈ 𝕋, a < b. Choose an arbitrary a ∈ 𝕋 and define

F1(t) = ∫_a^t f(s)Δs, for t ∈ [a, b]𝕋,  and  F2(t) = ∫_a^t f*(s)dg(s), for t ∈ [a, b],

where g(s) = s*, for every s ∈ [a, b]. Then, F2 = F1*.

Proof. By Theorem 3.22 and Lemma 3.23, we have

F2(t) = ∫_a^t f*(s)dg(s) = ∫_a^{t*} f*(s)dg(s) = ∫_a^{t*} f(s)Δs = F1(t*) = F1*(t),

and the result follows. ◽

Now, we present an example to illustrate the relation between Perron–Stieltjes integrals and Perron Δ-integrals. Example 3.26: Let 𝕋 = ℤ and a, b ∈ 𝕋 with a < b. Take m ∈ ℕ such that b = a + m. Consider a regulated function f ∶ [a, b]𝕋 → ℝ and define its extension


f* ∶ [a, b]𝕋* → ℝ by f*(t) = f(t*). Suppose, in addition, that g ∶ [a, b] → ℝ is defined by g(t) = t*. By Lemma 3.20, it follows that f* ∶ [a, b]𝕋* → ℝ is regulated and, by Corollary 3.21, the function g is nondecreasing. Therefore, the Perron–Stieltjes integral ∫_a^b f*(s)dg(s) exists and

∫_a^{a+m} f*(s)dg(s) = ∑_{k=0}^{m−1} ∫_{a+k}^{a+k+1} f*(s)dg(s).

Let us show that

∑_{k=0}^{m−1} ∫_{a+k}^{a+k+1} f*(s)dg(s) = ∑_{k=0}^{m−1} f*(a + k)(g(a + k + 1) − g(a + k)).

Since 𝕋 = ℤ and g(t) = t*, for each k ∈ {0, …, m − 1} the restriction g|(a+k, a+k+1] is constant. In particular,

g(s) = g(a + k + 1), for every s ∈ (a + k, a + k + 1].  (3.34)

Given k ∈ {0, …, m − 1}, let us prove that the equality

∫_{a+k}^{a+k+1} f*(s)dg(s) = f*(a + k)(g(a + k + 1) − g(a + k))  (3.35)

holds. Let d = (𝜏i, [si−1, si]) be any tagged division of [a + k, a + k + 1]. Then, a + k = s0 < s1 < · · · < s|d|−1 < s|d| = a + k + 1. Hence,

∑_{i=1}^{|d|} f*(𝜏i)(g(si) − g(si−1)) = f*(𝜏1)(g(s1) − g(s0)) + ∑_{i=2}^{|d|} f*(𝜏i)(g(si) − g(si−1))
= f*(𝜏1)(g(s1) − g(s0)) = f*(𝜏1)(g(s|d|) − g(s0)) = f*(𝜏1)(g(a + k + 1) − g(a + k)),

where the third equality follows from (3.34) and from the facts that s1 ∈ (a + k, a + k + 1] and s|d| = a + k + 1. Now, consider an arbitrary 𝜖 > 0 and define a gauge 𝛿 ∶ [a + k, a + k + 1] → (0, ∞) fulfilling 0 < 𝛿(s) < |s − (a + k)| for s ≠ a + k, and 𝛿(a + k) = 𝜖. If d = (𝜏i, [si−1, si]) is a 𝛿-fine tagged division of the interval [a + k, a + k + 1], then 𝜏1 = a + k. Thus,

| ∑_{i=1}^{|d|} f*(𝜏i)(g(si) − g(si−1)) − f*(a + k)(g(a + k + 1) − g(a + k)) |
= |f*(a + k)(g(a + k + 1) − g(a + k)) − f*(a + k)(g(a + k + 1) − g(a + k))| = 0 < 𝜖,

which implies that the Perron–Stieltjes integral ∫_{a+k}^{a+k+1} f*(s)dg(s) exists and

∫_{a+k}^{a+k+1} f*(s)dg(s) = f*(a + k)(g(a + k + 1) − g(a + k)),


proving equality (3.35). Hence, we obtain

∫_a^b f*(s)dg(s) = ∑_{k=0}^{m−1} ∫_{a+k}^{a+k+1} f*(s)dg(s)
= ∑_{k=0}^{m−1} f*(a + k)(g(a + k + 1) − g(a + k))
= ∑_{k=0}^{m−1} f(a + k) = ∑_{k=a}^{a+m−1} f(k) = ∑_{k=a}^{b−1} f(k) = ∫_a^b f(s)Δs,

showing that the two integrals can be related.
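The computation in Example 3.26 can be replayed numerically: for 𝕋 = ℤ, both integrals reduce to the finite sum ∑_{k=a}^{b−1} f(k). A short Python sketch under these assumptions (the sample f and the mesh are illustrative choices):

```python
import math

a, b = 0, 5                       # time-scale interval [a, b] with T = Z
f = lambda k: k * k               # sample regulated function on the integers

# Delta integral on T = Z: the finite sum f(a) + ... + f(b - 1).
delta_integral = sum(f(k) for k in range(a, b))

# Perron-Stieltjes sum against g(t) = t* = ceil(t): only subintervals that
# start at an integer contribute, since g is constant on each (k, k + 1].
g = lambda t: math.ceil(t)
f_star = lambda t: f(math.ceil(t))

n = 4 * (b - a)                   # partition of [a, b] with mesh 1/4
points = [a + i / 4 for i in range(n + 1)]
stieltjes_sum = sum(f_star(points[i]) * (g(points[i + 1]) - g(points[i]))
                    for i in range(n))
```

Here both quantities equal f(0) + f(1) + f(2) + f(3) + f(4) = 30.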

3.3.4 MDEs and Dynamic Equations on Time Scales

In this subsection, our goal is to present a relation between measure differential equations (or, simply, MDEs) and dynamic equations on time scales. This relation was first presented by A. Slavík in [219] for functions defined on bounded intervals and taking values in ℝn. Then, in [78], the authors extended the correspondence to functions defined on unbounded intervals, also taking values in ℝn. Here, we extend it to Banach space-valued functions defined on bounded intervals; in Chapter 5, the reader can find a version of this relation for functions defined on unbounded intervals.

Let 𝕋 be a time scale such that t0, t0 + 𝜎 ∈ 𝕋, where 𝜎 > 0, and consider a dynamic equation on time scales given by

xΔ(t) = f(x(t), t), t ∈ [t0, t0 + 𝜎]𝕋,  (3.36)

where B ⊂ X is an open set and f ∶ B × [t0, t0 + 𝜎]𝕋 → X is a function such that t → f(x(t), t) is Perron Δ-integrable on [t0, t0 + 𝜎]𝕋. Integrating Eq. (3.36) with respect to t, we obtain

x(t) = x(t0) + ∫_{t0}^t f(x(s), s)Δs, t ∈ [t0, t0 + 𝜎]𝕋,  (3.37)

where the integral on the right-hand side is the Perron Δ-integral. Next, we present a concept of solution of Eq. (3.37), which we refer to as a dynamic equation on time scales.

Definition 3.27: We say that a function x ∶ [t0, t0 + 𝜎]𝕋 → X is a solution of the dynamic equation on time scales (3.37), with initial condition x(s0) = x0, s0 ∈ [t0, t0 + 𝜎]𝕋, provided

(i) x(s0) = x0 ∈ B;
(ii) x ∈ G([t0, t0 + 𝜎]𝕋, B) and (x(t), t) ∈ B × [t0, t0 + 𝜎]𝕋;
(iii) the Perron Δ-integral ∫_{t0}^{t0+𝜎} f(x(s), s)Δs exists;
(iv) equality (3.37) holds for all t ∈ [t0, t0 + 𝜎]𝕋.


The next result states a correspondence between the solutions of an MDE and the solutions of a dynamic equation on time scales; it can be found in [78, 179, 219] for X = ℝn. We omit its proof here, since it follows as in the proof of Theorem 3.30, presented in Subsection 3.3.5, with obvious adaptations.

Theorem 3.28: Let 𝕋 be a time scale such that t0, t0 + 𝜎 ∈ 𝕋. Let B ⊂ X be an open subset and f ∶ B × [t0, t0 + 𝜎]𝕋 → X be a function. Assume that, for every x ∈ G([t0, t0 + 𝜎]𝕋, B), the function t → f(x(t), t) is Perron Δ-integrable on [s1, s2]𝕋, for every s1, s2 ∈ [t0, t0 + 𝜎]𝕋. Define g ∶ [t0, t0 + 𝜎] → ℝ by g(s) = s*, for every s ∈ [t0, t0 + 𝜎]. If x ∶ [t0, t0 + 𝜎]𝕋 → B is a solution of the Δ-integral equation on time scales

x(t) = x0 + ∫_{s0}^t f(x(s), s)Δs, t ∈ [t0, t0 + 𝜎]𝕋,  (3.38)

where x0 ∈ B and s0 ∈ [t0, t0 + 𝜎]𝕋, then x* ∶ [t0, t0 + 𝜎] → B is a solution of the MDE

y(t) = x0 + ∫_{s0}^t f*(y(s), s)dg(s) = x0 + ∫_{s0}^t f(y(s), s*)dg(s).  (3.39)

Conversely, if y ∶ [t0 , t0 + 𝜎] → B satisfies the MDE (3.39), then it must have the form y = x∗ , where x ∶ [t0 , t0 + 𝜎]𝕋 → B is a solution of the dynamic equation on time scales (3.38).
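For 𝕋 = ℤ, Theorem 3.28 becomes very concrete: the Δ-integral equation (3.38) is the recursion x(k + 1) = x(k) + f(x(k), k), and the solution of the associated MDE (3.39) is the step function y = x*, constant between consecutive integers. A Python sketch under these assumptions (the right-hand side f is an illustrative choice):

```python
import math

def f(x, t):                 # sample right-hand side (illustrative)
    return x + t

t0, sigma, x0 = 0, 5, 1

# Solution of the dynamic equation on T = Z: x(t + 1) = x(t) + f(x(t), t).
x_vals = [x0]
for k in range(t0, t0 + sigma):
    x_vals.append(x_vals[-1] + f(x_vals[-1], k))

def y(t):
    """MDE solution with g(s) = s*: x0 plus the jumps accumulated up to t."""
    n = math.ceil(t)         # number of unit jumps of g in [0, t]
    return x0 + sum(f(x_vals[j], j) for j in range(n))

# y agrees with the extension x*(t) = x(t*) = x(ceil(t)).
```

The function y is constant on each interval (k − 1, k] and jumps exactly where g does, matching the right-hand side evaluated at s*.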

3.3.5 Relations with Measure FDEs

In this subsection, our goal is to show that a functional dynamic equation on time scales of the form

xΔ(t) = f(xt*, t), t ∈ [t0, t0 + 𝜎]𝕋,
x(t) = 𝜙(t), t ∈ [t0 − r, t0]𝕋,  (3.40)

where t0, t0 + 𝜎 ∈ 𝕋, B ⊂ X is open, O = G([t0 − r, t0 + 𝜎]𝕋, B), P = {xt* ∶ x ∈ O, t ∈ [t0, t0 + 𝜎]}, f ∶ P × [t0, t0 + 𝜎]𝕋 → X, and 𝜙 ∈ G([t0 − r, t0]𝕋, B), can be regarded as a measure FDE. This correspondence was first presented in [85] for functions taking values in ℝn. Here, we extend it to the abstract case of Banach space-valued functions.

Before we continue, let us make a few comments. We want to investigate dynamic equations on time scales such that the Δ-derivative of the unknown function x ∶ 𝕋 → X at t ∈ 𝕋 depends on the values of x(s) for s ∈ [t − r, t] ∩ 𝕋. But, unlike the classical case, here we have an obstacle to surpass: the function xt is defined only on a subset of [−r, 0]. In order to overcome this problem, we consider the function xt* instead.


Throughout this section and in the remainder of this chapter, xt* stands for (x*)t. Clearly, xt* carries the same information as xt, but xt* is defined on the whole interval [−r, 0]. Thus, it seems reasonable to consider functional dynamic equations of the form xΔ(t) = f(xt*, t). Note that the functional dynamic equation on time scales (3.40) can be rewritten in its integral form as

x(t) = x(t0) + ∫_{t0}^t f(xs*, s)Δs, t ∈ [t0, t0 + 𝜎]𝕋,  (3.41)
x(t) = 𝜙(t), t ∈ [t0 − r, t0]𝕋,  (3.42)

where t0, t0 + 𝜎 ∈ 𝕋, 𝜎 > 0, B ⊂ X is open, O = G([t0 − r, t0 + 𝜎]𝕋, B), P = {xt* ∶ x ∈ O, t ∈ [t0, t0 + 𝜎]}, f ∶ P × [t0, t0 + 𝜎]𝕋 → X is a function, and 𝜙 ∈ G([t0 − r, t0]𝕋, B). What motivates us to deal with the integral form (3.41) instead of the differential form (3.40) is the fact that the hypotheses of our results are imposed on integrals instead of integrands, and this enables our right-hand sides to be "merely" integrable. A concept of solution of the functional dynamic equation on time scales (3.41) follows next.

Definition 3.29: Let B ⊂ X be open, O = G([t0 − r, t0 + 𝜎]𝕋, B), and P = {xt* ∶ x ∈ O, t ∈ [t0, t0 + 𝜎]}. We say that a function x ∶ [t0 − r, t0 + 𝜎]𝕋 → X is a solution of the functional dynamic equation on time scales (3.41), provided

(i) (xt*, t) ∈ P × [t0, t0 + 𝜎]𝕋;
(ii) the Perron Δ-integral ∫_{t0}^{t0+𝜎} f(xs*, s)Δs exists;
(iii) the equalities (3.41) and (3.42) hold for all t ∈ [t0, t0 + 𝜎]𝕋.
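The point of passing from xt to xt* = (x*)t can be visualized numerically: even when [t − r, t] ∩ 𝕋 is sparse, the extended history is defined on the whole window [−r, 0]. A small Python sketch for 𝕋 = ℤ (the sample values of x and the sampling grid are illustrative assumptions):

```python
import math

# T = Z; x is known only at integer times, and x*(t) = x(t*) = x(ceil(t)).
x_on_T = {k: k * k for k in range(-3, 6)}

def x_star(t):
    return x_on_T[math.ceil(t)]

def history(t, r, n):
    """Sample the segment x*_t(theta) = x*(t + theta) for theta in [-r, 0]."""
    return [x_star(t - r + i * r / n) for i in range(n + 1)]
```

The segment history(2, 1.5, 5) is defined at every sampled 𝜃, not just at the points where t + 𝜃 happens to fall in 𝕋.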

Now, we are ready to present the main result of this subsection, which concerns a relation between the solutions of a measure FDE and the solutions of a functional dynamic equation on time scales.

Theorem 3.30: Let [t0 − r, t0 + 𝜎]𝕋 be a time scale interval, with t0 ∈ 𝕋 and 𝜎 > 0. Let B ⊂ X be open, O = G([t0 − r, t0 + 𝜎]𝕋, B), P = {xt* ∶ x ∈ O, t ∈ [t0, t0 + 𝜎]}, f ∶ P × [t0, t0 + 𝜎]𝕋 → X be a function, and 𝜙 ∈ G([t0 − r, t0]𝕋, B). Assume that, for every z ∈ P, the Perron Δ-integral ∫_{u1}^{u2} f(z, s)Δs exists for all u1, u2 ∈ [t0, t0 + 𝜎]𝕋. Define g(s) = s* for every s ∈ [t0, t0 + 𝜎]. If x ∶ [t0 − r, t0 + 𝜎]𝕋 → B is a solution of the functional dynamic equation on time scales (3.41) and (3.42), then x* ∶ [t0 − r, t0 + 𝜎] → B satisfies

x*(t) = x*(t0) + ∫_{t0}^t f(xs*, s*)dg(s), t ∈ [t0, t0 + 𝜎],
x*_{t0} = 𝜙*_{t0}.  (3.43)

Conversely, if y ∶ [t0 − r, t0 + 𝜎] → B is a solution of the measure FDE

y(t) = y(t0) + ∫_{t0}^t f(ys, s*)dg(s), t ∈ [t0, t0 + 𝜎],
y_{t0} = 𝜙*_{t0},

then y = x*, where x ∶ [t0 − r, t0 + 𝜎]𝕋 → B satisfies (3.41) and (3.42).

Proof. Suppose x ∶ [t0 − r, t0 + 𝜎]𝕋 → B is a solution of (3.41) and (3.42). Then,

x(t) = x(t0) + ∫_{t0}^t f(xs*, s)Δs, for t ∈ [t0, t0 + 𝜎]𝕋,

and, by Theorem 3.25,

x*(t) = x*(t0) + ∫_{t0}^t f(x*_{s*}, s*)dg(s), for t ∈ [t0, t0 + 𝜎].

Notice that f(x*_{s*}, s*) = f(x*_s, s*), for s ∈ [t0, t0 + 𝜎]𝕋. Then, Theorem 3.24 implies

x*(t) = x*(t0) + ∫_{t0}^t f(xs*, s*)dg(s), t ∈ [t0, t0 + 𝜎].  (3.44)

In addition, for 𝜃 ∈ [−r, 0], we have

x*_{t0}(𝜃) = x*(t0 + 𝜃) = x((t0 + 𝜃)*) = 𝜙((t0 + 𝜃)*) = 𝜙*_{t0}(𝜃).  (3.45)

Then, (3.44) and (3.45) yield that x* ∶ [t0 − r, t0 + 𝜎] → B is a solution of (3.43). Now, we assume that y satisfies

y(t) = y(t0) + ∫_{t0}^t f(ys, s*)dg(s), for t ∈ [t0, t0 + 𝜎].

By definition, g is constant on every interval (𝛼, 𝛽], where 𝛽 ∈ 𝕋 and 𝛼 = sup{𝜏 ∈ 𝕋 ∶ 𝜏 < 𝛽}. Then, y inherits the same property and, hence, y = x* for some x ∶ [t0 − r, t0 + 𝜎]𝕋 → B. Using the same ideas as in the first part of the proof, one can show that x satisfies the functional dynamic equation on time scales (3.41) and (3.42). ◽

Lemma 3.31: Let [t0, t0 + 𝜎]𝕋 be a time scale interval. If L ∶ [t0, t0 + 𝜎]𝕋 → ℝ is a Perron Δ-integrable function, then ∫_{u1}^{u2} L(s)h*(s)Δs exists in the sense of the Perron Δ-integral for every regulated function h* ∶ [t0, t0 + 𝜎] → ℝ and all u1, u2 ∈ [t0, t0 + 𝜎]𝕋 such that u1 ⩽ u2.


Proof. Since ∫_{u1}^{u2} L(s)Δs exists for every u1, u2 ∈ 𝕋 such that u1 ⩽ u2, it follows from Theorem 3.22 that ∫_{u1}^{u2} L*(s)dg(s) exists in the sense of Perron–Stieltjes for g(s) = s*, where g ∶ [t0, t0 + 𝜎] → ℝ. Also, since g is nondecreasing by Corollary 3.21, the function t → ∫_{t0}^t L*(s)dg(s) is of bounded variation on [t0, t0 + 𝜎]. Therefore, by Corollary 1.69, since h* ∶ [t0, t0 + 𝜎] → ℝ is a regulated function, it follows that ∫_{u1}^{u2} L*(s)h*(s)dg(s) exists in the sense of Perron–Stieltjes for each u1, u2 ∈ [t0, t0 + 𝜎]. Applying Theorems 3.22 and 3.25, it follows that ∫_{u1*}^{u2*} L(s)h*(s)Δs exists and

∫_{u1*}^{u2*} L(s)h*(s)Δs = ∫_{u1}^{u2} L*(s)h*(s)dg(s),

getting the desired result. ◽

We finish this section with a result that is crucial for the application of the correspondence presented in Theorem 3.30. It provides a way of translating results on functional dynamic equations on time scales to their analogues in the framework of measure FDEs. The version presented here is more general than the one from [86], since our conditions involve functions instead of constants.

Lemma 3.32: Let [t0 − r, t0 + 𝜎]𝕋 be a time scale interval such that t0 ∈ 𝕋, B ⊂ X be open, O = G([t0 − r, t0 + 𝜎], B), P = G([−r, 0], B), and f ∶ P × [t0, t0 + 𝜎]𝕋 → X be an arbitrary function. Define g(t) = t* and f*(z, t) = f(z, t*) for every z ∈ P and t ∈ [t0, t0 + 𝜎]. The following statements hold.

(i) If the Perron Δ-integral ∫_{t0}^{t0+𝜎} f(ys, s)Δs exists for every y ∈ O, then the Perron–Stieltjes integral ∫_{t0}^{t0+𝜎} f*(ys, s)dg(s) exists for every y ∈ O.

(ii) Suppose there is a Perron Δ-integrable function M ∶ [t0, t0 + 𝜎]𝕋 → ℝ such that, for every y ∈ O and u1, u2 ∈ [t0, t0 + 𝜎]𝕋 with u1 ⩽ u2, we have

‖∫_{u1}^{u2} f(ys, s)Δs‖ ⩽ ∫_{u1}^{u2} M(s)Δs.

Then, for t0 ⩽ u1 ⩽ u2 ⩽ t0 + 𝜎 and y ∈ O, we get

‖∫_{u1}^{u2} f*(ys, s)dg(s)‖ ⩽ ∫_{u1}^{u2} M*(s)dg(s).

(iii) Suppose there exists a Perron Δ-integrable function L ∶ [t0, t0 + 𝜎]𝕋 → ℝ such that, for y, z ∈ O and u1, u2 ∈ [t0, t0 + 𝜎]𝕋 with u1 ⩽ u2, we have

‖∫_{u1}^{u2} [f(ys, s) − f(zs, s)]Δs‖ ⩽ ∫_{u1}^{u2} L(s)‖ys − zs‖∞ Δs.

Then, for t0 ⩽ u1 ⩽ u2 ⩽ t0 + 𝜎 and y, z ∈ O, we obtain

‖∫_{u1}^{u2} [f*(ys, s) − f*(zs, s)]dg(s)‖ ⩽ ∫_{u1}^{u2} L*(s)‖ys − zs‖∞ dg(s).

Proof. Let y ∈ O. If the Perron Δ-integral ∫_{t0}^{t0+𝜎} f(ys, s)Δs exists, then Theorems 3.22 and 3.24 yield

∫_{t0}^{t0+𝜎} f(ys, s)Δs = ∫_{t0}^{t0+𝜎} f(y_{s*}, s*)dg(s) = ∫_{t0}^{t0+𝜎} f(ys, s*)dg(s) = ∫_{t0}^{t0+𝜎} f*(ys, s)dg(s),

showing that the last integral also exists, which leads to (i).

From Theorems 3.22 and 3.24 and Lemma 3.23, we obtain, for every y ∈ O and u1, u2 ∈ [t0, t0 + 𝜎]𝕋 with u1 ⩽ u2,

‖∫_{u1}^{u2} f*(ys, s)dg(s)‖ = ‖∫_{u1}^{u2} f(y_{s*}, s*)dg(s)‖ = ‖∫_{u1*}^{u2*} f(y_{s*}, s*)dg(s)‖
= ‖∫_{u1*}^{u2*} f(ys, s)Δs‖ ⩽ ∫_{u1*}^{u2*} M(s)Δs = ∫_{u1*}^{u2*} M*(s)dg(s) = ∫_{u1}^{u2} M*(s)dg(s),

which implies (ii).

Now, let us prove item (iii). Notice that ∫_{u1}^{u2} L(s)‖ys − zs‖∞ Δs exists in the sense of the Perron Δ-integral for each u1, u2 ∈ [t0, t0 + 𝜎]𝕋 by Lemma 3.31, since the function s ∈ [t0, t0 + 𝜎] → ‖ys − zs‖∞ is regulated for all y, z ∈ O. On the other hand, for every y, z ∈ O and u1, u2 ∈ [t0, t0 + 𝜎]𝕋 with u1 ⩽ u2, we have

‖∫_{u1}^{u2} [f*(ys, s) − f*(zs, s)]dg(s)‖ = ‖∫_{u1}^{u2} [f(y_{s*}, s*) − f(z_{s*}, s*)]dg(s)‖ = ‖∫_{u1*}^{u2*} [f(y_{s*}, s*) − f(z_{s*}, s*)]dg(s)‖
= ‖∫_{u1*}^{u2*} [f(ys, s) − f(zs, s)]Δs‖ ⩽ ∫_{u1*}^{u2*} L(s)‖ys − zs‖∞ Δs = ∫_{u1*}^{u2*} L*(s)‖ys − zs‖∞ dg(s) = ∫_{u1}^{u2} L*(s)‖ys − zs‖∞ dg(s),

proving (iii). ◽


3.3.6 Impulsive Functional Dynamic Equations on Time Scales

In this subsection, we focus our attention on functional dynamic equations on time scales subject to impulse effects and their relations with impulsive measure FDEs. These types of equations have been investigated by many authors (see [14, 15, 37, 109] and the references therein). Consider moments of impulse {tk}_{k=1}^m ⊂ [t0, t0 + 𝜎]𝕋 such that t0 ⩽ t1 < · · · < tm ⩽ t0 + 𝜎, and impulse operators Ik ∶ B → X, with k ∈ {1, …, m}, where B ⊂ X is open. Usually, an impulse condition is described by

x(tk+) − x(tk−) = Ik(x(tk−)), k ∈ {1, …, m}.  (3.46)

When dealing with the theory of time scales, the following conventions are commonly adopted:

● x(t+) = x(t), whenever t ∈ 𝕋 is a right-scattered point;
● x(t−) = x(t), whenever t ∈ 𝕋 is a left-scattered point.

Notice that, if x ∶ [t0 − r, t0 + 𝜎]𝕋 → B is a left-continuous function, then the impulse condition (3.46) can be rewritten as

x(tk+) − x(tk) = Ik(x(tk)), k ∈ {1, …, m}.  (3.47)
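Condition (3.47) can be pictured with a toy left-continuous trajectory in ℝ: at each moment tk the state gains Ik of its current value, and the jump is visible only in the right limit. A minimal Python sketch (the constant-between-impulses dynamics and the operators Ik(x) = x/2 are illustrative assumptions):

```python
impulse_times = [1.0, 2.0, 3.0]

def I(k, x):                     # impulse operators (hypothetical choice)
    return x / 2

def x(t, x0=8.0):
    """Left-continuous trajectory: only impulses at t_k < t have acted,
    and the state is constant between impulses."""
    val = x0
    for k, tk in enumerate(impulse_times, start=1):
        if tk < t:
            val = val + I(k, val)
    return val
```

At t1 = 1 the value x(t1) = 8 still excludes the jump (left-continuity), while the right limit equals x(t1) + I1(x(t1)) = 12, in agreement with (3.47).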

As commented in Section 3.2, we shall consider that I + Ik ∶ B → B for every k ∈ {1, …, m}, where I ∶ B → B is the identity operator. From (3.47), if tk is right-scattered, then the convention x(tk+) = x(tk) yields Ik(x(tk)) = 0. Hence, we can assume, without loss of generality, that tk is right-dense for every k ∈ {1, …, m}. This leads us to consider the following impulsive functional dynamic equation on time scales

xΔ(t) = f(xt*, t), t ∈ [t0, t0 + 𝜎]𝕋 ⧵ {t1, …, tm},
Δ+x(tk) = Ik(x(tk)), k ∈ {1, …, m},
x(t) = 𝜙(t), t ∈ [t0 − r, t0]𝕋,  (3.48)

where B ⊂ X is open, O = G([t0 − r, t0 + 𝜎]𝕋 , B), P = {xt∗ ∶ x ∈ O, t ∈ [t0 , t0 + 𝜎]}, f ∶ P × [t0 , t0 + 𝜎]𝕋 → X is a function, 𝜙 ∈ G([t0 − r, t0 ]𝕋 , B), t1 , … , tm ∈ 𝕋 are right-dense points of impulse effects satisfying t0 ⩽ t1 < t2 < · · · < tm < t0 + 𝜎, I1 , … , Im ∶ B → X are the impulse operators such that I + Ik ∶ B → B for k ∈ {1, … , m}, and we are assuming here that the solution is left-continuous. Hence, one can rewrite the solution of the system (3.48) in an integral form given


by

x(t) = x(t0) + ∫_{t0}^t f(xs*, s)Δs + ∑_{k∈{1,…,m}, tk<t} Ik(x(tk)), t ∈ [t0, t0 + 𝜎]𝕋,
x(t) = 𝜙(t), t ∈ [t0 − r, t0]𝕋.

Theorem 3.34: Let [t0 − r, t0 + 𝜎]𝕋 be a time scale interval, with t0 ∈ 𝕋 and 𝜎 > 0. Consider an open set B ⊂ X, O = G−([t0 − r, t0 + 𝜎]𝕋, B), P = {xt* ∶ x ∈ O, t ∈ [t0, t0 + 𝜎]}, a function f ∶ P × [t0, t0 + 𝜎]𝕋 → X, and 𝜙 ∈ G([t0 − r, t0]𝕋, B). Define a function g ∶ [t0, t0 + 𝜎] → ℝ by g(s) = s* for every s ∈ [t0, t0 + 𝜎]. Assume that t1, …, tm ∈ 𝕋 are right-dense points of impulse effects satisfying t0 ⩽ t1 < t2 < · · · < tm < t0 + 𝜎 and that I1, …, Im ∶ B → X are the impulse operators such that I + Ik ∶ B → B for each k = 1, …, m, where I ∶ B → B is the identity operator. If x ∶ [t0 − r, t0 + 𝜎]𝕋 → B is a solution of the impulsive functional dynamic equation on time scales

x(t) = x(t0) + ∫_{t0}^t f(xs*, s)Δs + ∑_{k∈{1,…,m}, tk<t} Ik(x(tk)), t ∈ [t0, t0 + 𝜎]𝕋,  (3.51)
x(t) = 𝜙(t), t ∈ [t0 − r, t0]𝕋,  (3.52)

then x* ∶ [t0 − r, t0 + 𝜎] → B is a solution of the corresponding impulsive measure FDE; conversely, every solution of that impulsive measure FDE is of the form x*, where x satisfies (3.51) and (3.52).

3.4 Averaging Methods

3.4.1 Periodic Averaging

Theorem 3.35: Assume that 𝜖0, L, T > 0 and P = G([−r, 0], X). Consider bounded functions f ∶ P × [0, ∞) → X and g ∶ P × [0, ∞) × (0, 𝜖0] → X and a nondecreasing left-continuous function h ∶ [0, ∞) → ℝ fulfilling the following conditions: the Perron–Stieltjes integrals ∫_0^b f(ys, s)dh(s) and ∫_0^b g(ys, s, 𝜖)dh(s) exist for every b > 0, y ∈ G([−r, b], X), and 𝜖 ∈ (0, 𝜖0]; f is Lipschitz-continuous with respect to the first variable, with Lipschitz constant C > 0; f is T-periodic with respect to the second variable; and there is a constant 𝛼 > 0 for which h(t + T) − h(t) = 𝛼, for every t ⩾ 0. Suppose the Perron–Stieltjes integral

f0(x) = (1/T) ∫_0^T f(x, s)dh(s)

exists for every x ∈ P, let 𝜙 ∈ P be bounded and suppose that, for every 𝜖 ∈ (0, 𝜖0], x𝜖 ∶ [−r, L/𝜖] → X is a solution of

x(t) = x(0) + 𝜖 ∫_0^t f(xs, s)dh(s) + 𝜖² ∫_0^t g(xs, s, 𝜖)dh(s), x0 = 𝜖𝜙,

and y𝜖 ∶ [−r, L/𝜖] → X is a solution of the averaged FDE

y(t) = y(0) + 𝜖 ∫_0^t f0(ys)ds, y0 = 𝜖𝜙.

Then, there exists a constant J > 0 such that

‖x𝜖(t) − y𝜖(t)‖ ⩽ J𝜖, for every 𝜖 ∈ (0, 𝜖0] and t ∈ [−r, L/𝜖].

Proof. Since f ∶ P × [0, ∞) → X, g ∶ P × [0, ∞) × (0, 𝜖0] → X, and 𝜙 ∈ P are bounded functions, we can assume, without loss of generality, that there exists a constant M > 0 such that ‖f(z, t)‖ ⩽ M and ‖g(z, t, 𝜖)‖ ⩽ M for all z ∈ P, t ∈ [0, ∞), and 𝜖 ∈ (0, 𝜖0], and also ‖𝜙‖∞ ⩽ M. Then,

‖f0(x)‖ = ‖(1/T) ∫_0^T f(x, s)dh(s)‖ ⩽ (M/T)[h(T) − h(0)] = M𝛼/T.

Thus, for 𝜖 ∈ (0, 𝜖0] and s, t ∈ [0, ∞), with s ⩾ t, we consider the following cases:

(i) if s + 𝜃, t + 𝜃 < 0, then ‖y𝜖(s + 𝜃) − y𝜖(t + 𝜃)‖ = ‖𝜖𝜙(s + 𝜃) − 𝜖𝜙(t + 𝜃)‖ ⩽ 2𝜖M;
(ii) if s + 𝜃 = t + 𝜃 = 0, then ‖y𝜖(s + 𝜃) − y𝜖(t + 𝜃)‖ = 0;
(iii) if s + 𝜃 ⩾ 0 and t + 𝜃 ⩽ 0, then

‖y𝜖(s + 𝜃) − y𝜖(t + 𝜃)‖ ⩽ ‖𝜖 ∫_0^{s+𝜃} f0((y𝜖)𝜎)d𝜎‖ + ‖𝜖𝜙(t + 𝜃)‖ + ‖𝜖𝜙(0)‖
⩽ 𝜖M(s + 𝜃)𝛼/T + 2𝜖M ⩽ 𝜖M(s + 𝜃)𝛼/T − 𝜖M(t + 𝜃)𝛼/T + 2𝜖M = 𝜖M(s − t)𝛼/T + 2𝜖M;


(iv) if s + 𝜃, t + 𝜃 ⩾ 0, then

‖y𝜖(s + 𝜃) − y𝜖(t + 𝜃)‖ = ‖𝜖 ∫_{t+𝜃}^{s+𝜃} f0((y𝜖)𝜎)d𝜎‖ ⩽ 𝜖M(s − t)𝛼/T, 𝜃 ∈ [−r, 0].

Combining all the cases above, we obtain

‖(y𝜖)s − (y𝜖)t‖∞ = sup_{𝜃∈[−r,0]} ‖y𝜖(s + 𝜃) − y𝜖(t + 𝜃)‖ ⩽ 𝜖M(s − t)𝛼/T + 2𝜖M,  (3.58)

for all s, t ∈ [0, ∞), with s ⩾ t.

On the other hand, for every t ∈ [0, L/𝜖], we have

‖x𝜖(t) − y𝜖(t)‖ = ‖𝜖 ∫_0^t f((x𝜖)s, s)dh(s) + 𝜖² ∫_0^t g((x𝜖)s, s, 𝜖)dh(s) − 𝜖 ∫_0^t f0((y𝜖)s)ds‖
⩽ 𝜖 ‖∫_0^t [f((x𝜖)s, s) − f((y𝜖)s, s)]dh(s)‖ + 𝜖 ‖∫_0^t f((y𝜖)s, s)dh(s) − ∫_0^t f0((y𝜖)s)ds‖ + 𝜖² ‖∫_0^t g((x𝜖)s, s, 𝜖)dh(s)‖
⩽ 𝜖 ∫_0^t C‖(x𝜖)s − (y𝜖)s‖∞ dh(s) + 𝜖 ‖∫_0^t f((y𝜖)s, s)dh(s) − ∫_0^t f0((y𝜖)s)ds‖ + 𝜖²M(h(t) − h(0)).  (3.59)

Let us estimate the second term on the right-hand side of (3.59). Given t ∈ [0, L/𝜖], let p be the largest integer such that pT ⩽ t. Therefore,

‖∫_0^t f((y𝜖)s, s)dh(s) − ∫_0^t f0((y𝜖)s)ds‖
⩽ ∑_{i=1}^p ‖∫_{(i−1)T}^{iT} [f((y𝜖)s, s) − f((y𝜖)_{(i−1)T}, s)]dh(s)‖
+ ∑_{i=1}^p ‖∫_{(i−1)T}^{iT} f((y𝜖)_{(i−1)T}, s)dh(s) − ∫_{(i−1)T}^{iT} f0((y𝜖)_{(i−1)T})ds‖
+ ∑_{i=1}^p ‖∫_{(i−1)T}^{iT} [f0((y𝜖)_{(i−1)T}) − f0((y𝜖)s)]ds‖

+ ‖∫_{pT}^t f((y𝜖)s, s)dh(s) − ∫_{pT}^t f0((y𝜖)s)ds‖.

For every i ∈ {1, 2, …, p} and every s ∈ [(i − 1)T, iT], (3.58) gives us

‖(y𝜖)s − (y𝜖)_{(i−1)T}‖∞ ⩽ M𝜖𝛼(s − (i − 1)T)/T + 2𝜖M ⩽ M𝜖(𝛼 + 2),

which, together with the fact that pT ⩽ L/𝜖, implies

∑_{i=1}^p ‖∫_{(i−1)T}^{iT} [f((y𝜖)s, s) − f((y𝜖)_{(i−1)T}, s)]dh(s)‖
⩽ ∑_{i=1}^p CM𝜖(𝛼 + 2)[h(iT) − h((i − 1)T)] = CM𝜖𝛼(𝛼 + 2)p ⩽ CML𝛼(𝛼 + 2)/T.

On the other hand, for every zs, zt ∈ P, with s, t ⩾ 0, the definition of f0 implies

‖f0(zs) − f0(zt)‖ ⩽ (1/T) ‖∫_0^T [f(zs, 𝜎) − f(zt, 𝜎)]dh(𝜎)‖ ⩽ (C/T)‖zs − zt‖∞ [h(T) − h(0)] = (C𝛼/T)‖zs − zt‖∞,  (3.60)

whence, using the fact that (y𝜖)s, (y𝜖)_{(i−1)T} ∈ P for s ∈ [(i − 1)T, iT], we obtain

∑_{i=1}^p ‖∫_{(i−1)T}^{iT} [f0((y𝜖)s) − f0((y𝜖)_{(i−1)T})]ds‖ ⩽ ∑_{i=1}^p ∫_{(i−1)T}^{iT} ‖f0((y𝜖)s) − f0((y𝜖)_{(i−1)T})‖ ds
⩽ (C𝛼/T) ∑_{i=1}^p ∫_{(i−1)T}^{iT} ‖(y𝜖)s − (y𝜖)_{(i−1)T}‖∞ ds ⩽ (C𝛼/T) ∑_{i=1}^p 𝜖M(𝛼 + 2)T = 𝜖MC𝛼(𝛼 + 2)p ⩽ MCL𝛼(𝛼 + 2)/T,

where the second estimate follows from Corollary 1.48, since t → f0(yt) is a regulated function (because f0 is continuous by (3.60) and t → yt is a regulated function). By the T-periodicity of f in the second variable and from the definition of f0, we obtain

∑_{i=1}^p ‖∫_{(i−1)T}^{iT} f((y𝜖)_{(i−1)T}, s)dh(s) − ∫_{(i−1)T}^{iT} f0((y𝜖)_{(i−1)T})ds‖
= ∑_{i=1}^p ‖∫_0^T f((y𝜖)_{(i−1)T}, s)dh(s) − f0((y𝜖)_{(i−1)T})T‖ = 0.


Moreover,

‖∫_{pT}^t f((y𝜖)s, s)dh(s) − ∫_{pT}^t f0((y𝜖)s)ds‖ ⩽ ‖∫_{pT}^t f((y𝜖)s, s)dh(s)‖ + ∫_{pT}^t ‖f0((y𝜖)s)‖ ds
⩽ M[h(t) − h(pT)] + (M𝛼/T)(t − pT) ⩽ M[h((p + 1)T) − h(pT)] + M𝛼T/T = 2M𝛼,

whence we derive the following estimate

‖∫_0^t f((y𝜖)s, s)dh(s) − ∫_0^t f0((y𝜖)s)ds‖ ⩽ 2MCL𝛼(𝛼 + 2)/T + 2M𝛼.

Set K = 2MCL𝛼(𝛼 + 2)/T + 2M𝛼. Then, from (3.59), we obtain

‖x𝜖(t) − y𝜖(t)‖ ⩽ 𝜖 ∫_0^t C‖(x𝜖)s − (y𝜖)s‖∞ dh(s) + 𝜖K + 𝜖²M[h(t) − h(0)]

and, defining 𝜓(s) = sup_{𝜏∈[0,s]} ‖x𝜖(𝜏) − y𝜖(𝜏)‖, we obtain, for every u ∈ [0, t],

‖x𝜖(u) − y𝜖(u)‖ ⩽ 𝜖 ∫_0^u C𝜓(s)dh(s) + 𝜖K + 𝜖²M[h(u) − h(0)] ⩽ 𝜖 ∫_0^t C𝜓(s)dh(s) + 𝜖K + 𝜖²M[h(t) − h(0)].

Therefore,

𝜓(t) ⩽ 𝜖 ∫_0^t C𝜓(s)dh(s) + 𝜖K + 𝜖²M[h(t) − h(0)].  (3.61)

On the other hand, we have

𝜖[h(t) − h(0)] ⩽ 𝜖[h(L/𝜖) − h(0)] ⩽ 𝜖[h(⌈L/(𝜖T)⌉T) − h(0)] ⩽ 𝜖⌈L/(𝜖T)⌉𝛼 ⩽ 𝜖(L/(𝜖T) + 1)𝛼 ⩽ (L/T + 𝜖0)𝛼,  (3.62)

where ⌈q⌉ denotes the smallest integer greater than or equal to the real number q. Then, by (3.61), we obtain

𝜓(t) ⩽ 𝜖 ∫_0^t C𝜓(s)dh(s) + 𝜖K + 𝜖M(L/T + 𝜖0)𝛼

and, hence, the Grönwall inequality (see Theorem 1.52) together with (3.62) yields

𝜓(t) ⩽ e^{𝜖C[h(t)−h(0)]} (K + M(L/T + 𝜖0)𝛼) 𝜖 ⩽ e^{C(L/T + 𝜖0)𝛼} (K + M(L/T + 𝜖0)𝛼) 𝜖.

Define

J = e^{C(L/T + 𝜖0)𝛼} (K + M(L/T + 𝜖0)𝛼).

3.4 Averaging Methods

Thus, clearly, ‖x𝜖(t) − y𝜖(t)‖ ⩽ 𝜓(t) ⩽ J𝜖, for every 𝜖 ∈ (0, 𝜖0] and t ∈ [0, L/𝜖]. Then, the definitions of the initial conditions of both systems imply that, for every 𝜖 ∈ (0, 𝜖0] and t ∈ [−r, 0], we have

‖x𝜖(t) − y𝜖(t)‖ = 0 ⩽ 𝜓(t) ⩽ J𝜖, ◽

finishing the proof.
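The estimate just proved can be tested numerically in its simplest instance: no delay, no g term, and h(s) = s, so that the measure FDE reduces to the ODE x′ = 𝜖f(x, t) with T-periodic f. The Python sketch below (the concrete f and the step size are assumptions for illustration) integrates both systems by Euler's method over [0, L/𝜖] and checks that they stay within O(𝜖) of each other:

```python
import math

eps, L, T = 0.05, 1.0, 1.0
f = lambda x, t: -(1.0 + math.cos(2 * math.pi * t / T)) * x   # T-periodic in t
f0 = lambda x: -x          # average of f over one period

x = y = 1.0                # common initial condition
dt, t = 5e-4, 0.0
max_diff = 0.0
while t < L / eps:         # integrate over the long window [0, L/eps]
    x += dt * eps * f(x, t)
    y += dt * eps * f0(y)
    t += dt
    max_diff = max(max_diff, abs(x - y))
```

With these choices the observed deviation stays below 𝜖 itself, consistent with the bound ‖x𝜖 − y𝜖‖ ⩽ J𝜖.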

In the sequel, using the relations presented previously in this chapter between the solutions of a measure FDE and other types of equations (see Theorems 3.6 and 3.34), one can derive periodic averaging principles for impulsive measure FDEs as well as for impulsive functional dynamic equations on time scales. Slightly different versions can be found in [86]. Here, though, similarly as in Theorem 3.35, we consider Banach space-valued functions and a parameter within the initial conditions.

Theorem 3.36: Assume that 𝜖0, L, T > 0, P = G([−r, 0], X), and there exists m ∈ ℕ such that 0 ⩽ t1 < t2 < · · · < tm < T. Consider bounded functions f ∶ P × [0, ∞) → X and g ∶ P × [0, ∞) × (0, 𝜖0] → X and a nondecreasing left-continuous function h ∶ [0, ∞) → ℝ which is continuous at tk, for every k ∈ ℕ. Let Ik ∶ X → X, with k ∈ ℕ, be bounded and Lipschitz-continuous functions representing the impulse operators. For every integer k > m, define tk and Ik by the recursive formulas tk = tk−m + T and Ik = Ik−m. Suppose the following conditions are satisfied:

(I1) the Perron–Stieltjes integrals

∫_0^b f(ys, s)dh(s) and ∫_0^b g(ys, s, 𝜖)dh(s)

exist for every b > 0, y ∈ G([−r, b], X), and 𝜖 ∈ (0, 𝜖0];
(I2) f is Lipschitz-continuous with respect to the first variable;
(I3) f is T-periodic with respect to the second variable;
(I4) there is a constant 𝛼 > 0 for which h(t + T) − h(t) = 𝛼, for every t ⩾ 0;
(I5) the Perron–Stieltjes integral

f0(x) = (1/T) ∫_0^T f(x, s)dh(s)

exists for every x ∈ P.

Let

I0(z) = (1/T) ∑_{k=1}^m Ik(z), for all z ∈ X,


and assume that 𝜙 ∈ P is bounded. Suppose, in addition, that, for every 𝜖 ∈ (0, 𝜖0], x𝜖 ∶ [−r, L/𝜖] → X is a solution of the nonautonomous impulsive measure FDE

x(t) = x(0) + 𝜖 ∫_0^t f(xs, s)dh(s) + 𝜖² ∫_0^t g(xs, s, 𝜖)dh(s) + 𝜖 ∑_{k∈ℕ, tk<t} Ik(x(tk)), x0 = 𝜖𝜙,  (3.63)

and y𝜖 ∶ [−r, L/𝜖] → X is a solution of the corresponding averaged FDE, with the same initial condition y0 = 𝜖𝜙. Then, there exists a constant J > 0 such that

‖x𝜖(t) − y𝜖(t)‖ ⩽ J𝜖, for all 𝜖 ∈ (0, 𝜖0] and t ∈ [−r, L/𝜖].

Proof. Define a function h̃ ∶ [0, ∞) → ℝ by

h̃(t) = h(t), for t ∈ [0, t1],
h̃(t) = h(t) + ck, for t ∈ (tk, tk+1], k ∈ ℕ,

where the sequence {ck}k∈ℕ is such that 0 ⩽ ck ⩽ ck+1 and Δ+h̃(tk) = 1, for every k ∈ ℕ. Since h is a nondecreasing and left-continuous function, the same applies to h̃. Besides, it is not difficult to see that there exists a constant 𝛼̃ > 0 such that

h̃(t + T) − h̃(t) = 𝛼̃, for all t ⩾ 0.

On the other hand, since x𝜖 ∶ [−r, L/𝜖] → X is a solution of the nonautonomous impulsive measure FDE (3.63), we have

x𝜖(t) = x𝜖(0) + ∫_0^t [𝜖f((x𝜖)s, s) + 𝜖²g((x𝜖)s, s, 𝜖)]dh(s) + ∑_{k∈ℕ, tk<t} 𝜖Ik(x𝜖(tk)).

Rewriting the impulsive part as an integral with respect to h̃ and applying Theorem 3.35 to the resulting (nonimpulsive) measure FDE, we conclude that there exists a constant J > 0 such that

‖x𝜖(t) − y𝜖(t)‖ ⩽ J𝜖, for every 𝜖 ∈ (0, 𝜖0] and t ∈ [0, L/𝜖].

for every 𝜖 ∈ (0, 𝜖0 ] and t ∈ [−r, 0] .

Therefore, combining both estimates, we get the desired result.



Before presenting the last result, let us recall the concept of a T-periodic time scale for a fixed T > 0. A time scale 𝕋 is called T-periodic whenever

t ∈ 𝕋 implies t + T ∈ 𝕋 and 𝜇(t) = 𝜇(t + T).
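For instance, 𝕋 = ⋃_{k∈ℤ}[2k, 2k + 1] is 2-periodic: 𝜇 vanishes inside each interval and equals 1 at the right endpoints 2k + 1. A quick Python check (the membership test encodes this particular time scale, chosen for illustration):

```python
def in_T(t):
    """Membership in T = union of [2k, 2k+1], k integer."""
    return t % 2 <= 1

def mu(t):
    """Graininess sigma(t) - t: 1 exactly at right endpoints 2k+1, else 0."""
    return 1.0 if t % 2 == 1 else 0.0

Tperiod = 2
samples = [0, 0.5, 1, 2.5, 3, 4.75, 5]
```

Shifting any point of 𝕋 by the period preserves both membership and graininess, which is exactly the definition above.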


A periodic averaging principle for impulsive functional dynamic equations on time scales comes next. It slightly differs from the version presented in [86]: here, we consider Banach space-valued functions and a parameter within the initial conditions of both the original and the averaged systems.

Theorem 3.37: Let 𝜖0, T, L, 𝜎 > 0. Suppose 𝕋 is a T-periodic time scale such that t0 ∈ 𝕋, and P = G([−r, 0], X). Suppose, in addition, there exists m ∈ ℕ such that t1, …, tm ∈ 𝕋 are right-dense points satisfying t0 ⩽ t1 < t2 < · · · < tm < t0 + T. For each k ∈ ℕ, let Ik ∶ X → X be a bounded Lipschitz-continuous function. For every integer k > m, define tk and Ik by the recursive formulas tk = tk−m + T and Ik = Ik−m. Consider bounded functions f ∶ P × [t0, ∞)𝕋 → X and g ∶ P × [t0, ∞)𝕋 × (0, 𝜖0] → X fulfilling:

(TS1) the Perron Δ-integrals

∫_{t0}^b f(ys, s)Δs and ∫_{t0}^b g(ys, s, 𝜖)Δs

exist for every b > t0, y ∈ G([t0 − r, b], X), and 𝜖 ∈ (0, 𝜖0];
(TS2) f is Lipschitz-continuous with respect to the first variable;
(TS3) f is T-periodic with respect to the second variable;
(TS4) the Perron Δ-integral

f0(x) = (1/T) ∫_{t0}^{t0+T} f(x, s)Δs

exists, for every x ∈ P.

Let

I0(z) = (1/T) ∑_{k=1}^m Ik(z), for z ∈ X,

and let 𝜙 ∈ G([t0 − r, t0]𝕋, X) be bounded. Furthermore, assume that, for every 𝜖 ∈ (0, 𝜖0], x𝜖 ∶ [t0 − r, t0 + L/𝜖]𝕋 → X is a solution of the impulsive functional dynamic equation on time scales

x(t) = x(t0) + 𝜖 ∫_{t0}^t f(xs*, s)Δs + 𝜖² ∫_{t0}^t g(xs*, s, 𝜖)Δs + 𝜖 ∑_{k∈ℕ, tk<t} Ik(x(tk)),

and y𝜖 is a solution of the corresponding averaged FDE, with the same initial condition. Then, there exists a constant J > 0 such that

‖x𝜖(t) − y𝜖(t)‖ ⩽ J𝜖, for every 𝜖 ∈ (0, 𝜖0] and t ∈ [t0 − r, t0 + L/𝜖]𝕋.

Proof. We can assume, without loss of generality, that t0 = 0; indeed, if t0 ≠ 0, it is enough to consider a shifted problem with the time scale 𝕋̃ = {t − t0 ∶ t ∈ 𝕋} and functions f̃(x, t) = f(x, t + t0) and g̃(x, t, 𝜖) = g(x, t + t0, 𝜖), where f̃ ∶ P × [0, ∞)𝕋̃ → X and g̃ ∶ P × [0, ∞)𝕋̃ × (0, 𝜖0] → X. For every t ∈ [0, ∞), x ∈ P, and 𝜖 ∈ (0, 𝜖0], consider the following extensions of the functions f and g, respectively, given by

f*(x, t) = f(x, t*) and g*(x, t, 𝜖) = g(x, t*, 𝜖).

Define a function h ∶ [0, ∞) → ℝ by h(t) = t*, for all t ∈ [0, ∞). Since 𝕋 is T-periodic, it is not difficult to see that h(t + T) − h(t) = T, for all t ⩾ 0. Then, Theorem 3.25 yields

f0(x) = (1/T) ∫_0^T f(x, s)Δs = (1/T) ∫_0^T f*(x, s)dh(s), for all x ∈ P.

For every b > 0 and every y ∈ G([−r, b], X), the Perron Δ-integral ∫_0^b f(ys, s)Δs exists. Then, from Theorems 3.24 and 3.25, we obtain

∫_0^b f(ys, s)Δs = ∫_0^b f(y_{s*}, s*)dh(s) = ∫_0^b f(ys, s*)dh(s) = ∫_0^b f*(ys, s)dh(s),

ensuring the existence of the last integral.

By Theorem 3.30, for 𝜖 ∈ (0, 𝜖0] and t ∈ [0, L/𝜖], x𝜖 ∶ [−r, L/𝜖] → X satisfies

(x𝜖)*(t) = (x𝜖)*(0) + 𝜖 ∫_0^t f*((x𝜖)s*, s)dh(s) + 𝜖² ∫_0^t g*((x𝜖)s*, s, 𝜖)dh(s) + 𝜖 ∑_{k∈ℕ, tk<t} Ik((x𝜖)*(tk)).

Applying Theorem 3.36, there exists a constant J > 0 such that

‖(x𝜖)*(t) − y𝜖(t)‖ ⩽ J𝜖, for every 𝜖 ∈ (0, 𝜖0] and t ∈ [0, L/𝜖].

Since (x𝜖)*(t) = x𝜖(t) for t ∈ [0, L/𝜖]𝕋 and by the initial condition, we obtain

‖x𝜖(t) − y𝜖(t)‖ ⩽ J𝜖, for every 𝜖 ∈ (0, 𝜖0] and t ∈ [−r, L/𝜖]𝕋,

completing the proof. ◽


3 Measure Functional Differential Equations

3.4.2 Nonperiodic Averaging

In this subsection, our goal is to present nonperiodic averaging principles for measure FDEs, impulsive measure FDEs, and functional dynamic equations on time scales. All the results presented here can be found in [82, 84].

Let 𝜖0 > 0 be given such that 𝜖 ∈ (0, 𝜖0] and let B ⊂ X be open. We focus our attention on a measure FDE of the form

  x(t) = 𝜙(0) + 𝜖 ∫_0^t f(xs, s)dh1(s) + 𝜖^2 ∫_0^t g(xs, s, 𝜖)dh2(s),
  x0 = 𝜖𝜙,   (3.65)

where P ⊂ G([−r, 0], B) is open, r > 0, h1, h2 : [0, ∞) → ℝ are left-continuous and nondecreasing functions, f : P × [0, ∞) → X and g : P × [0, ∞) × (0, 𝜖0] → X are functions, and 𝜙 ∈ P. In order to obtain an averaging principle for (3.65), we consider the following auxiliary initial value problem

  x(t) = 𝜙(0) + 𝜖 ∫_0^t f(x_{s,𝜖}, s/𝜖)dh1(s/𝜖) + 𝜖^2 ∫_0^t g(x_{s,𝜖}, s/𝜖, 𝜖)dh2(s/𝜖), t ∈ [0, M],
  x0 = 𝜖𝜙,   (3.66)

where x_{t,𝜖}(𝜃) = x(t + 𝜖𝜃), for 𝜃 ∈ [−r/𝜖, 0], 𝜙 ∈ P and M > 0.

Let P̃𝜖 ⊂ G([−r/𝜖, 0], B) be an open set and assume that f maps any pair (𝜓, t) ∈ P̃𝜖 × [0, ∞) into X and that the mapping t ↦ f(y_{t,𝜖}, t) is Perron–Stieltjes integrable with respect to h1, for all t ∈ [0, ∞). Suppose, in addition, g maps (𝜓, t, 𝜖) ∈ P̃𝜖 × [0, ∞) × (0, 𝜖0] into X and the mapping t ↦ g(y_{t,𝜖}, t, 𝜖) is Perron–Stieltjes integrable with respect to h2, for all t ∈ [0, ∞). Clearly, if xt ∈ G−([−r, 0], B), then x_{t,𝜖} ∈ G−([−r/𝜖, 0], B). For details, see [84]. We will refer a few times to the next remark.

Remark 3.38: By a change of variables, we can transform system (3.66) into system (3.65). Indeed, if x : [−r, M] → B is a solution of Eq. (3.66), then

  x(t) = 𝜙(0) + 𝜖 ∫_0^t f(x_{s,𝜖}, s/𝜖)dh1(s/𝜖) + 𝜖^2 ∫_0^t g(x_{s,𝜖}, s/𝜖, 𝜖)dh2(s/𝜖).

Define y(t) = x(𝜖t), for t ∈ [0, M/𝜖], 𝜓(s) = s/𝜖, for s ∈ [0, M], and m(𝜏) = f(x_{𝜖𝜏,𝜖}, 𝜏), for 𝜏 ∈ [0, M/𝜖]. Considering 𝜏 = 𝜓(s) = s/𝜖 in the calculations below

  ∫_0^t f(x_{s,𝜖}, s/𝜖)dh1(s/𝜖) = ∫_0^t f(x_{𝜖(s/𝜖),𝜖}, s/𝜖)dh1(s/𝜖)
   = ∫_0^t m(s/𝜖)dh1(s/𝜖) = ∫_0^t m(𝜓(s))dh1(𝜓(s))
   = ∫_0^{t/𝜖} m(𝜏)dh1(𝜏) = ∫_0^{t/𝜖} f(x_{𝜖𝜏,𝜖}, 𝜏)dh1(𝜏)
   = ∫_0^{t/𝜖} f(y𝜏, 𝜏)dh1(𝜏),

we conclude, by Theorem 1.72, that x(𝜖t) = y(t), for all t ∈ [0, M/𝜖].

Define n(𝜏) = g(x_{𝜖𝜏,𝜖}, 𝜏, 𝜖), for 𝜏 ∈ [0, M/𝜖]. Again, using Theorem 1.72, we obtain

  ∫_0^t g(x_{s,𝜖}, s/𝜖, 𝜖)dh2(s/𝜖) = ∫_0^t g(x_{𝜖(s/𝜖),𝜖}, s/𝜖, 𝜖)dh2(s/𝜖)
   = ∫_0^t n(s/𝜖)dh2(s/𝜖) = ∫_0^t n(𝜓(s))dh2(𝜓(s))
   = ∫_0^{t/𝜖} n(𝜏)dh2(𝜏) = ∫_0^{t/𝜖} g(x_{𝜖𝜏,𝜖}, 𝜏, 𝜖)dh2(𝜏)
   = ∫_0^{t/𝜖} g(y𝜏, 𝜏, 𝜖)dh2(𝜏),

since, for every 𝜏 ∈ [0, M/𝜖] and 𝜃 ∈ [−r/𝜖, 0], we have

  x_{𝜖𝜏,𝜖}(𝜃) = x(𝜖(𝜏 + 𝜃)) = y(𝜏 + 𝜃) = y𝜏(𝜃).

Therefore, we get

  y(t) − y(0) = x(𝜖t) − x(0)
   = 𝜖 ∫_0^{𝜖t} f(x_{s,𝜖}, s/𝜖)dh1(s/𝜖) + 𝜖^2 ∫_0^{𝜖t} g(x_{s,𝜖}, s/𝜖, 𝜖)dh2(s/𝜖)
   = 𝜖 ∫_0^{𝜖t/𝜖} f(x_{𝜖s,𝜖}, s)dh1(s) + 𝜖^2 ∫_0^{𝜖t/𝜖} g(x_{𝜖s,𝜖}, s, 𝜖)dh2(s)
   = 𝜖 ∫_0^t f(ys, s)dh1(s) + 𝜖^2 ∫_0^t g(ys, s, 𝜖)dh2(s),

for t ∈ [0, M/𝜖]. Thus, a solution of the measure FDE (3.66) on [0, M] corresponds to a solution of the measure FDE (3.65) on [0, M/𝜖] and vice versa. In view of this, we now focus our attention on Eq. (3.66). Then, an averaging principle for Eq. (3.65) will be obtained naturally as a consequence.
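The substitution 𝜏 = s/𝜖 at the heart of Remark 3.38 can be sanity-checked numerically. The sketch below is our illustration, not part of the book: it approximates both Perron–Stieltjes integrals by midpoint Riemann–Stieltjes sums for an assumed smooth integrand m and an assumed smooth nondecreasing integrator h1, and confirms that ∫_0^t m(s/𝜖)dh1(s/𝜖) and ∫_0^{t/𝜖} m(𝜏)dh1(𝜏) agree.

```python
import math

def stieltjes(f, h, a, b, n=100000):
    """Midpoint Riemann-Stieltjes sum approximating the integral of f dh over [a, b]."""
    total, step = 0.0, (b - a) / n
    for i in range(n):
        s = a + i * step
        total += f(s + step / 2) * (h(s + step) - h(s))
    return total

m  = lambda tau: math.sin(tau) ** 2        # toy integrand (our choice)
h1 = lambda s: s + 0.3 * math.sin(s)       # smooth, nondecreasing integrator (our choice)
eps, t = 0.1, 0.5

lhs = stieltjes(lambda s: m(s / eps), lambda s: h1(s / eps), 0.0, t)
rhs = stieltjes(m, h1, 0.0, t / eps)
print(lhs, rhs)    # both sums approximate the same number, as in Remark 3.38
assert abs(lhs - rhs) < 1e-6
```

Here the two sums use the same partition up to the rescaling s = 𝜖𝜏, which is exactly why the change of variables is exact for the Perron–Stieltjes integral.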


Motivated by Remark 3.38, we restrict ourselves to problem (3.66), where h1, h2 : [0, ∞) → ℝ are left-continuous and nondecreasing functions. Consider open sets B ⊂ X and P̃𝜖 ⊂ G([−r/𝜖, 0], B), and assume that f : P̃𝜖 × [0, ∞) → X satisfies the conditions:

(J1) for all 𝜑 ∈ P̃𝜖 and all t ∈ [0, ∞), the Perron–Stieltjes integral

  ∫_0^t f(𝜑, s)dh1(s)

exists;

(J2) there exists a constant L > 0 such that, for every 𝜑, 𝜓 ∈ P̃𝜖 and every u1, u2 ∈ [0, ∞), with u1 ⩽ u2, we have

  ||∫_{u1}^{u2} [f(𝜑, s) − f(𝜓, s)]dh1(s)|| ⩽ L ∫_{u1}^{u2} ||𝜑 − 𝜓||∞ dh1(s).

Consider the following assumptions on g : P̃𝜖 × [0, ∞) × (0, 𝜖0] → X:

(J3) the Perron–Stieltjes integral

  ∫_0^t g(𝜑, s, 𝜖)dh2(s)

exists, for every 𝜑 ∈ P̃𝜖, t ∈ [0, ∞) and 𝜖 ∈ (0, 𝜖0];

(J4) there is a constant C > 0 such that, for all 𝜑 ∈ P̃𝜖, 𝜖 ∈ (0, 𝜖0] and u1, u2 ∈ [0, ∞), with u1 ⩽ u2, we have

  ||∫_{u1}^{u2} g(𝜑, s, 𝜖)dh2(s)|| ⩽ C ∫_{u1}^{u2} dh2(s);

(J5) there exists K > 0 such that, for all 𝛽 ⩾ 0, we have

  lim sup_{T→∞} [h1(T + 𝛽) − h1(𝛽)]/T ⩽ K;

(J6) there exists N > 0 such that, for all 𝛽 ⩾ 0, we have

  lim sup_{T→∞} [h2(T + 𝛽) − h2(𝛽)]/T ⩽ N;

(J7) there exists 𝛾 > 0 such that ||𝜙||∞ ⩽ 𝛾.

Suppose, for each 𝜑 ∈ P̃𝜖, the limit

  f0(𝜑) = lim_{T→∞} (1/T) ∫_0^T f(𝜑, s)dh1(s)   (3.67)

exists, where the integral is taken in the sense of Perron–Stieltjes, and consider the averaged FDE

  ẏ = f0(y_{t,𝜖}),
  y0 = 𝜖𝜙,   (3.68)

where t ∈ [0, M] and f0 is given by (3.67).
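To build intuition for the limit (3.67), one can approximate f0 numerically for a concrete scalar choice. In the sketch below (our toy example, not from the text), f(𝜑, s) = sin^2(s)·𝜑(0) and h1(s) = 2s, so that dh1 = 2 ds, condition (J5) holds with K = 2, and the limit works out to f0(𝜑) = 𝜑(0):

```python
import math

def f(phi0, s):                 # toy right-hand side f(phi, s) = sin^2(s) * phi(0)
    return math.sin(s) ** 2 * phi0

def f0_estimate(phi0, T, n=100000):
    """(1/T) * integral of f(phi, s) dh1(s) over [0, T], with h1(s) = 2s (midpoint rule)."""
    step, total = T / n, 0.0
    for i in range(n):
        total += f(phi0, (i + 0.5) * step) * 2 * step
    return total / T

phi0 = 0.7
for T in (10.0, 100.0, 1000.0):
    print(T, f0_estimate(phi0, T))    # approaches f0(phi) = phi(0) = 0.7
assert abs(f0_estimate(phi0, 1000.0) - phi0) < 5e-3
```

The slow 1/T decay of the error is typical: the limit in (3.67) is a long-run average, not a pointwise one.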

3.4 Averaging Methods

Claim. There is a relation between the solutions of (3.68) and the solutions of the averaged FDE

  ẏ = 𝜖 f0(yt),
  y0 = 𝜖𝜙,   (3.69)

where t ∈ [0, M/𝜖] and f0 is given by (3.67). Indeed, as in Remark 3.38, denote 𝜓(s) = s/𝜖 = 𝜏, for s ∈ [0, M], and m(𝜏) = f0(y_{𝜖𝜏,𝜖}), for 𝜏 ∈ [0, M/𝜖]. Thus,

  ∫_0^t f0(y_{s,𝜖})ds = ∫_0^t f0(y_{𝜖(s/𝜖),𝜖})ds = ∫_0^t m(𝜓(s))ds
   = 𝜖 ∫_0^{t/𝜖} m(𝜏)d𝜏 = 𝜖 ∫_0^{t/𝜖} f0(y_{𝜖𝜏,𝜖})d𝜏.

Taking u(t) = y(t𝜖) for t ∈ [0, M/𝜖], we obtain

  y_{𝜖𝜏,𝜖}(𝜃) = y(𝜖(𝜏 + 𝜃)) = u(𝜏 + 𝜃) = u𝜏(𝜃),

for 𝜏 ∈ [0, M/𝜖] and 𝜃 ∈ [−r/𝜖, 0]. Therefore,

  u(t) − u(0) = y(𝜖t) − y(0) = ∫_0^{𝜖t} f0(y_{s,𝜖})ds
   = 𝜖 ∫_0^t f0(y_{𝜖s,𝜖})ds = 𝜖 ∫_0^t f0(us)ds,

for every t ∈ [0, M/𝜖], and the Claim is proved.

Notice that, if conditions (J2) and (J5) are satisfied, then

  ||f0(𝜉) − f0(𝜑)|| = ||lim_{T→∞} (1/T) ∫_0^T f(𝜉, s)dh1(s) − lim_{T→∞} (1/T) ∫_0^T f(𝜑, s)dh1(s)||
   ⩽ lim_{T→∞} (1/T) ∫_0^T L ||𝜉 − 𝜑||∞ dh1(s)
   ⩽ L ||𝜉 − 𝜑||∞ lim sup_{T→∞} [h1(T) − h1(0)]/T ⩽ LK ||𝜉 − 𝜑||∞.   (3.70)

In particular, for every yt, ys ∈ P and every t, s ∈ [0, ∞), we have

  ||f0(ys) − f0(yt)|| ⩽ LK ||ys − yt||∞.   (3.71)

Let y ∈ P̃𝜖 be a solution of the averaged equation (3.68), where 𝜙 is bounded by a constant 𝛾 > 0. Let s, t ∈ [0, M], with t ⩽ s, and 𝜃 ∈ [−r/𝜖, 0]. We consider three cases.


(i) If s + 𝜖𝜃 > 0 and t + 𝜖𝜃 > 0, then

  ||y_{s,𝜖}(𝜃) − y_{t,𝜖}(𝜃)|| = ||y(s + 𝜖𝜃) − y(t + 𝜖𝜃)|| = ||∫_{t+𝜖𝜃}^{s+𝜖𝜃} f0(y_{𝜎,𝜖})d𝜎||
   ⩽ ∫_{t+𝜖𝜃}^{s+𝜖𝜃} ||f0(y_{𝜎,𝜖}) − f0(0)|| d𝜎 + ∫_{t+𝜖𝜃}^{s+𝜖𝜃} ||f0(0)|| d𝜎
   ⩽ LK ∫_{t+𝜖𝜃}^{s+𝜖𝜃} sup_{𝜎∈[t−r,s]} ||y(𝜎)|| d𝜎 + (s − t)||f0(0)||,

where this last inequality follows from (3.71). Therefore,

  ||y_{s,𝜖} − y_{t,𝜖}||∞ = sup_{𝜃∈[−r/𝜖,0]} ||y(s + 𝜖𝜃) − y(t + 𝜖𝜃)||
   ⩽ LK(s − t) sup_{𝜎∈[t−r,s]} ||y(𝜎)|| + (s − t)||f0(0)||.

(ii) If s + 𝜖𝜃 ⩽ 0 and t + 𝜖𝜃 ⩽ 0, then

  ||y(s + 𝜖𝜃) − y(t + 𝜖𝜃)|| = ||𝜖𝜙(s + 𝜖𝜃) − 𝜖𝜙(t + 𝜖𝜃)|| ⩽ 2𝜖𝛾.

(iii) If s + 𝜖𝜃 > 0 and t + 𝜖𝜃 ⩽ 0, then

  ||y(s + 𝜖𝜃) − y(t + 𝜖𝜃)|| ⩽ ||𝜖𝜙(0) + ∫_0^{s+𝜖𝜃} f0(y_{𝜎,𝜖})d𝜎|| + ||𝜖𝜙(t + 𝜖𝜃)||
   ⩽ 2𝜖𝛾 + ∫_0^{s+𝜖𝜃} ||f0(y_{𝜎,𝜖})|| d𝜎
   ⩽ 2𝜖𝛾 + LK ∫_0^{s+𝜖𝜃} sup_{𝜎∈[t−r,s]} ||y(𝜎)|| d𝜎 + (s − t)||f0(0)||
   ⩽ 2𝜖𝛾 + LK(s + 𝜖𝜃) sup_{𝜎∈[t−r,s]} ||y(𝜎)|| + (s − t)||f0(0)||
   ⩽ 2𝜖𝛾 + LK[s + 𝜖𝜃 − (t + 𝜖𝜃)] sup_{𝜎∈[t−r,s]} ||y(𝜎)|| + (s − t)||f0(0)||
   = 2𝜖𝛾 + LK(s − t) sup_{𝜎∈[t−r,s]} ||y(𝜎)|| + (s − t)||f0(0)||.

In either case, we have

  ||y_{s,𝜖} − y_{t,𝜖}||∞ ⩽ 2𝜖𝛾 + LK(s − t) sup_{𝜎∈[t−r,s]} ||y(𝜎)|| + (s − t)||f0(0)||,   (3.72)

which implies ||y_{s,𝜖} − y_{t,𝜖}||∞ ⩽ 2𝛾𝜖, as s − t → 0+. Hence, the mapping [0, M] ∋ t ↦ y_{t,𝜖} is continuous, where y is a solution of the averaged FDE (3.68).

Now, we are in position to present a version of [82, Lemma 3.1] for Perron–Stieltjes integrals. This result can also be found in [84] for the case


X = ℝn. It is presented here for Banach space-valued functions, and it is essential to the proofs of the nonperiodic averaging principles coming next.

Lemma 3.39: Let B ⊂ X be open. Suppose the function f : G([−r, 0], B) × [0, ∞) → X is Perron–Stieltjes integrable with respect to a nondecreasing function h1 : [0, ∞) → ℝ. Suppose, further, the limit

  f0(𝜓) = lim_{T→∞} (1/T) ∫_0^T f(𝜓, s)dh1(s), 𝜓 ∈ G([−r, 0], B),   (3.73)

exists and is well defined. Then, for every t, 𝛼 > 0, the equality

  lim_{𝜖→0+} (𝜖/𝛼) ∫_{t/𝜖}^{t/𝜖+𝛼/𝜖} f(𝜓, s)dh1(s) = f0(𝜓), 𝜓 ∈ G([−r, 0], B),

holds, where the Perron–Stieltjes integral on the left-hand side exists and is well defined.

Proof. From (3.73), we derive that, for t, 𝛼 > 0 and 𝜓 ∈ G([−r, 0], B), we have

  lim_{𝜖→0+} [1/(t/𝜖 + 𝛼/𝜖)] ∫_0^{t/𝜖+𝛼/𝜖} f(𝜓, s)dh1(s) = f0(𝜓)   (3.74)

and

  lim_{𝜖→0+} (𝜖/t) ∫_0^{t/𝜖} f(𝜓, s)dh1(s) = f0(𝜓).   (3.75)

Thus,

  lim_{𝜖→0+} [ [1/(t/𝜖 + 𝛼/𝜖)] ∫_0^{t/𝜖+𝛼/𝜖} f(𝜓, s)dh1(s) − [1/(t/𝜖)] ∫_0^{t/𝜖} f(𝜓, s)dh1(s) ]
   = lim_{𝜖→0+} [ [1/(t/𝜖 + 𝛼/𝜖)] ∫_0^{t/𝜖+𝛼/𝜖} f(𝜓, s)dh1(s) − f0(𝜓) ]
   + lim_{𝜖→0+} [ f0(𝜓) − (𝜖/t) ∫_0^{t/𝜖} f(𝜓, s)dh1(s) ] = 0,

whence, for every 𝜖 > 0, we obtain

  (𝜖/𝛼) ∫_{t/𝜖}^{t/𝜖+𝛼/𝜖} f(𝜓, s)dh1(s)
   = [1/(𝛼/𝜖)] ∫_0^{t/𝜖+𝛼/𝜖} f(𝜓, s)dh1(s) − [1/(𝛼/𝜖)] ∫_0^{t/𝜖} f(𝜓, s)dh1(s)
   = [(t/𝜖 + 𝛼/𝜖)/(𝛼/𝜖)]·[1/(t/𝜖 + 𝛼/𝜖)] ∫_0^{t/𝜖+𝛼/𝜖} f(𝜓, s)dh1(s) − [(t/𝜖)/(𝛼/𝜖)]·[1/(t/𝜖)] ∫_0^{t/𝜖} f(𝜓, s)dh1(s)
   = (t/𝛼 + 1)·[1/(t/𝜖 + 𝛼/𝜖)] ∫_0^{t/𝜖+𝛼/𝜖} f(𝜓, s)dh1(s) − (t/𝛼)·[1/(t/𝜖)] ∫_0^{t/𝜖} f(𝜓, s)dh1(s)
   = [1/(t/𝜖 + 𝛼/𝜖)] ∫_0^{t/𝜖+𝛼/𝜖} f(𝜓, s)dh1(s)
   + (t/𝛼) [ [1/(t/𝜖 + 𝛼/𝜖)] ∫_0^{t/𝜖+𝛼/𝜖} f(𝜓, s)dh1(s) − [1/(t/𝜖)] ∫_0^{t/𝜖} f(𝜓, s)dh1(s) ].

Then, combining (3.74) and (3.75), we get

  lim_{𝜖→0+} [ (𝜖/𝛼) ∫_{t/𝜖}^{t/𝜖+𝛼/𝜖} f(𝜓, s)dh1(s) − f0(𝜓) ]
   = lim_{𝜖→0+} [ [1/(t/𝜖 + 𝛼/𝜖)] ∫_0^{t/𝜖+𝛼/𝜖} f(𝜓, s)dh1(s) − f0(𝜓) ]
   + lim_{𝜖→0+} (t/𝛼) [ [1/(t/𝜖 + 𝛼/𝜖)] ∫_0^{t/𝜖+𝛼/𝜖} f(𝜓, s)dh1(s) − [1/(t/𝜖)] ∫_0^{t/𝜖} f(𝜓, s)dh1(s) ] = 0

and hence,

  lim_{𝜖→0+} (𝜖/𝛼) ∫_{t/𝜖}^{t/𝜖+𝛼/𝜖} f(𝜓, s)dh1(s) = f0(𝜓),

concluding the proof. ◽

The next corollary follows easily from Lemma 3.39. See [82, 84].

Corollary 3.40: Consider open sets B ⊂ X and P ⊂ G([−r, 0], B), and assume that the function f : P × [0, ∞) → X is Perron–Stieltjes integrable with respect to a nondecreasing function h1 : [0, ∞) → ℝ. Suppose, in addition, the limit

  f0(𝜑) = lim_{T→∞} (1/T) ∫_0^T f(𝜑, s)dh1(s), for every 𝜑 ∈ P,

exists and is well defined. Then, for every t, 𝛼 > 0 and yt ∈ P, we obtain

  lim_{𝜖→0+} (𝜖/𝛼) ∫_{t/𝜖}^{t/𝜖+𝛼/𝜖} f(yt, s)dh1(s) = f0(yt),

where the Perron–Stieltjes integral on the left-hand side exists and is well defined.

The next corollary follows easily by the steps of the proof of Lemma 3.39.

Corollary 3.41: Let B ⊂ X be open and P̃𝜖 ⊂ G([−r/𝜖, 0], B) be open, and assume that f : P̃𝜖 × [0, ∞) → X is Perron–Stieltjes integrable with respect to a nondecreasing function h1 : [0, ∞) → ℝ. Suppose, further, the limit

  f0(𝜑) = lim_{T→∞} (1/T) ∫_0^T f(𝜑, s)dh1(s), for every 𝜑 ∈ P̃𝜖,

exists and is well defined. Then, for every t, 𝛼 > 0 and y_{t,𝜖} ∈ P̃𝜖, we obtain

  lim_{𝜂→0+} (𝜂/𝛼) ∫_{t/𝜂}^{t/𝜂+𝛼/𝜂} f(y_{t,𝜖}, s)dh1(s) = f0(y_{t,𝜖}),

where the Perron–Stieltjes integral on the left-hand side exists and is well defined.

The next lemma can be found in [84, Lemma 3.2] for the finite dimensional case. We adapt it here for Banach space-valued functions.

Lemma 3.42: Let B ⊂ X and P̃𝜖 ⊂ G([−r/𝜖, 0], B) be open sets. Assume that f : P̃𝜖 × [0, ∞) → X satisfies conditions (J1) and (J2) and h1 : [0, ∞) → ℝ satisfies condition (J5). Suppose, for each 𝜑 ∈ P̃𝜖, the limit

  f0(𝜑) = lim_{T→∞} (1/T) ∫_0^T f(𝜑, s)dh1(s)

exists. Assume that 0 < M < ∞ and y : [−r, M] → B is a maximal solution of the autonomous FDE

  ẏ = f0(y_{t,𝜖}),
  y0 = 𝜖𝜙,   (3.76)

with maximal interval of existence [−r, M]. Then, given 𝜖 > 0, there exists 𝜉(𝜖) > 0 such that

  ||𝜖 ∫_0^t f(y_{s,𝜖}, s/𝜖)dh1(s/𝜖) − ∫_0^t f0(y_{s,𝜖})ds|| < 𝜉(𝜖), t ∈ [0, M],

and 𝜉(𝜖) tends to zero, as 𝜖 → 0+.

Proof. Given 𝜖 > 0 and t ∈ [0, ∞), let 𝛿 be a gauge of [0, t] which corresponds to 𝜖 > 0 in the definition of the Perron–Stieltjes integral ∫_0^t f(y_{𝜎,𝜖}, 𝜎/𝜖)dh1(𝜎/𝜖). Take a 𝛿-fine tagged division d = (𝜏i, [s_{i−1}, si]), i = 1, 2, …, |d|, of [0, t]. Thus,

  ||𝜖 ∫_0^t f(y_{s,𝜖}, s/𝜖)dh1(s/𝜖) − ∫_0^t f0(y_{s,𝜖})ds||
   ⩽ ∑_{i=1}^{|d|} ||𝜖 ∫_{s_{i−1}}^{si} [f(y_{s,𝜖}, s/𝜖) − f(y_{s_{i−1},𝜖}, s/𝜖)]dh1(s/𝜖)||
   + ∑_{i=1}^{|d|} ||∫_{s_{i−1}}^{si} [f0(y_{s,𝜖}) − f0(y_{s_{i−1},𝜖})]ds||
   + ∑_{i=1}^{|d|} ||𝜖 ∫_{s_{i−1}}^{si} f(y_{s_{i−1},𝜖}, s/𝜖)dh1(s/𝜖) − ∫_{s_{i−1}}^{si} f0(y_{s_{i−1},𝜖})ds||.   (3.77)


Assume, without loss of generality, that the gauge 𝛿 satisfies 𝛿(𝜏i) < 𝜖/2, for every 𝜏i ∈ [s_{i−1}, si] and i = 1, 2, …, |d|. By (3.72), we obtain

  ||y_{s,𝜖} − y_{s_{i−1},𝜖}||∞ ⩽ LK(s − s_{i−1}) sup_{𝜎∈[s_{i−1}−r,s]} ||y(𝜎)|| + (s − s_{i−1})||f0(0)|| + 2𝜖𝛾
   < LK·2𝛿(𝜏i) sup_{𝜎∈[s_{i−1}−r,si]} ||y(𝜎)|| + 2𝛿(𝜏i)||f0(0)|| + 2𝜖𝛾
   < LK𝜖 sup_{𝜎∈[s_{i−1}−r,si]} ||y(𝜎)|| + 𝜖||f0(0)|| + 2𝜖𝛾,

for i = 1, 2, …, |d| and s ∈ [s_{i−1}, si]. Then, taking

  D = LK sup_{𝜎∈[−r,M]} ||y(𝜎)|| + ||f0(0)|| + 2𝛾,

we get sup_{s∈[s_{i−1},si]} ||y_{s,𝜖} − y_{s_{i−1},𝜖}||∞ ⩽ 𝜖D, for i = 1, 2, …, |d|, which together with conditions (J2) and (J5) imply

  ∑_{i=1}^{|d|} ||𝜖 ∫_{s_{i−1}}^{si} [f(y_{s,𝜖}, s/𝜖) − f(y_{s_{i−1},𝜖}, s/𝜖)]dh1(s/𝜖)||
   ⩽ 𝜖L ∑_{i=1}^{|d|} sup_{𝜎∈[s_{i−1},si]} ||y_{𝜎,𝜖} − y_{s_{i−1},𝜖}||∞ ∫_{s_{i−1}}^{si} dh1(s/𝜖)
   ⩽ 𝜖^2 DL ∑_{i=1}^{|d|} [h1(si/𝜖) − h1(s_{i−1}/𝜖)]
   = 𝜖^2 DL [h1(t/𝜖) − h1(0)] = 𝜖DLt · [h1(t/𝜖) − h1(0)]/(t/𝜖).

By hypothesis (J5), we can choose 𝜖 > 0 sufficiently small such that

  [h1(t/𝜖) − h1(0)]/(t/𝜖) ⩽ K, for each t ∈ [0, M].

Then,

  ∑_{i=1}^{|d|} ||𝜖 ∫_{s_{i−1}}^{si} [f(y_{s,𝜖}, s/𝜖) − f(y_{s_{i−1},𝜖}, s/𝜖)]dh1(s/𝜖)|| ⩽ 𝜖DLtK ⩽ 𝜖DLMK.

On the other hand, for i = 1, 2, …, |d| and s ∈ [s_{i−1}, si], by (3.70), we have

  ∑_{i=1}^{|d|} ||∫_{s_{i−1}}^{si} [f0(y_{s,𝜖}) − f0(y_{s_{i−1},𝜖})]ds||
   ⩽ LK ∑_{i=1}^{|d|} sup_{𝜎∈[s_{i−1},si]} ||y_{𝜎,𝜖} − y_{s_{i−1},𝜖}||∞ (si − s_{i−1}) < 𝜖DLKM.


Claim. The sum

  ∑_{i=1}^{|d|} ||𝜖 ∫_{s_{i−1}}^{si} f(y_{s_{i−1},𝜖}, s/𝜖)dh1(s/𝜖) − ∫_{s_{i−1}}^{si} f0(y_{s_{i−1},𝜖})ds||

can be made arbitrarily small by Corollary 3.40. Indeed, for each i = 1, 2, …, |d| and 𝛼i = si − s_{i−1}, we have

  ∑_{i=1}^{|d|} ||𝜖 ∫_{s_{i−1}}^{si} f(y_{s_{i−1},𝜖}, s/𝜖)dh1(s/𝜖) − ∫_{s_{i−1}}^{si} f0(y_{s_{i−1},𝜖})ds||
   = ∑_{i=1}^{|d|} ||𝜖 ∫_{s_{i−1}}^{s_{i−1}+𝛼i} f(y_{s_{i−1},𝜖}, s/𝜖)dh1(s/𝜖) − ∫_{s_{i−1}}^{s_{i−1}+𝛼i} f0(y_{s_{i−1},𝜖})ds||
   = ∑_{i=1}^{|d|} 𝛼i ||(𝜖/𝛼i) ∫_{s_{i−1}/𝜖}^{s_{i−1}/𝜖+𝛼i/𝜖} f(y_{s_{i−1},𝜖}, s)dh1(s) − f0(y_{s_{i−1},𝜖})||.

Define, for each i = 1, 2, …, |d|,

  𝛽i(𝜖) = (𝜖/𝛼i) ∫_{s_{i−1}/𝜖}^{s_{i−1}/𝜖+𝛼i/𝜖} f(y_{s_{i−1},𝜖}, s)dh1(s) − f0(y_{s_{i−1},𝜖})

and set 𝛽(𝜖) = max{||𝛽i(𝜖)|| : i = 1, 2, …, |d|}. Then,

  ∑_{i=1}^{|d|} 𝛼i ||𝛽i(𝜖)|| ⩽ 𝛽(𝜖) ∑_{i=1}^{|d|} (si − s_{i−1}) = 𝛽(𝜖)t < 𝛽(𝜖)M,

proving the Claim. Now, by Corollary 3.40, it is clear that 𝛽(𝜖) → 0 as 𝜖 → 0+. Therefore,

  ||𝜖 ∫_0^t f(y_{s,𝜖}, s/𝜖)dh1(s/𝜖) − ∫_0^t f0(y_{s,𝜖})ds|| < 2𝜖DLKM + 𝛽(𝜖)M.

Then, setting 𝜉(𝜖) = 2𝜖DLKM + 𝛽(𝜖)M, it follows that 𝜉(𝜖) tends to zero as 𝜖 → 0+ and the inequality

  ||𝜖 ∫_0^t f(y_{s,𝜖}, s/𝜖)dh1(s/𝜖) − ∫_0^t f0(y_{s,𝜖})ds|| < 𝜉(𝜖)

holds, completing the proof. ◽



The main result of this section follows next. It is a nonperiodic averaging principle for measure FDEs, and it slightly differs from the version found in [84, Theorem 3.1] with respect to initial conditions and codomains.

Theorem 3.43: Consider open sets B ⊂ X and P̃𝜖 ⊂ G([−r/𝜖, 0], B), and assume that f : P̃𝜖 × [0, ∞) → X satisfies conditions (J1) and (J2) and g : P̃𝜖 × [0, ∞) × (0, 𝜖0] → X satisfies conditions (J3) and (J4). Suppose conditions (J5)–(J7) are fulfilled, where h1 and h2 are nondecreasing functions and 𝜙 ∈ P̃𝜖. Suppose, further, for each 𝜑 ∈ P̃𝜖, the limit

  f0(𝜑) = lim_{T→∞} (1/T) ∫_0^T f(𝜑, s)dh1(s)

exists and is well defined. Let M > 0 and consider the nonautonomous measure FDE

  x(t) = 𝜙(0) + 𝜖 ∫_0^t f(x_{s,𝜖}, s/𝜖)dh1(s/𝜖) + 𝜖^2 ∫_0^t g(x_{s,𝜖}, s/𝜖, 𝜖)dh2(s/𝜖), t ∈ [0, M],
  x0 = 𝜖𝜙,   (3.78)

where x_{t,𝜖}(𝜃) = x(t + 𝜖𝜃), for 𝜃 ∈ [−r/𝜖, 0], and the averaged autonomous FDE

  ẏ = f0(y_{t,𝜖}),
  y0 = 𝜖𝜙,   (3.79)

where t ∈ [0, M]. Suppose [0, b) is the maximal interval of existence of the measure FDE (3.78) and [0, b̄) is the maximal interval of existence of the FDE (3.79). Likewise, assume that x𝜖 : [−r, M] → B is a maximal solution of the measure FDE (3.78) and y : [−r, M] → B is a maximal solution of the averaged FDE (3.79), where M > 0 is such that M < min(b, b̄). Then, for every 𝜂 > 0, there exists 𝜖0 > 0 such that, for 𝜖 ∈ (0, 𝜖0],

  ||x𝜖(t) − y(t)|| < 𝜂, for every t ∈ [0, M].

Proof. Notice that, if t = 0, then ||x𝜖(0) − y(0)|| = 0. Moreover, given t > 0, conditions (J2), (J4), (J5), and Lemma 3.42 yield

  ||x𝜖(t) − y(t)||
   = ||𝜖 ∫_0^t f((x𝜖)_{s,𝜖}, s/𝜖)dh1(s/𝜖) + 𝜖^2 ∫_0^t g((x𝜖)_{s,𝜖}, s/𝜖, 𝜖)dh2(s/𝜖) − ∫_0^t f0(y_{s,𝜖})ds||
   ⩽ ||𝜖 ∫_0^t [f((x𝜖)_{s,𝜖}, s/𝜖) − f(y_{s,𝜖}, s/𝜖)]dh1(s/𝜖)||
   + ||𝜖 ∫_0^t f(y_{s,𝜖}, s/𝜖)dh1(s/𝜖) − ∫_0^t f0(y_{s,𝜖})ds|| + ||𝜖^2 ∫_0^t g((x𝜖)_{s,𝜖}, s/𝜖, 𝜖)dh2(s/𝜖)||
   ⩽ L𝜖 ∫_0^t ||(x𝜖)_{s,𝜖} − y_{s,𝜖}||∞ dh1(s/𝜖) + ||𝜖 ∫_0^t f(y_{s,𝜖}, s/𝜖)dh1(s/𝜖) − ∫_0^t f0(y_{s,𝜖})ds||
   + 𝜖^2 C [h2(t/𝜖) − h2(0)]
   < L𝜖 ∫_0^t ||(x𝜖)_{s,𝜖} − y_{s,𝜖}||∞ dh1(s/𝜖) + 𝜉(𝜖) + 𝜖Ct · [h2(t/𝜖) − h2(0)]/(t/𝜖),

where 𝜉(𝜖) is given by Lemma 3.42.


By condition (J6), we can choose 𝜖 > 0 sufficiently small such that

  [h2(t/𝜖) − h2(0)]/(t/𝜖) ⩽ N, for every t > 0.

Therefore, for 0 < t ⩽ M, we obtain

  ||x𝜖(t) − y(t)|| ⩽ L𝜖 ∫_0^t ||(x𝜖)_{s,𝜖} − y_{s,𝜖}||∞ dh1(s/𝜖) + 𝜉(𝜖) + 𝜖CMN.

From the fact that (x𝜖)0 = 𝜖𝜙 = y0, we obtain sup_{𝜎∈[−r,0]} ||x𝜖(𝜎) − y(𝜎)|| = 0, whence, for s ∈ [0, t], we have

  ||(x𝜖)_{s,𝜖} − y_{s,𝜖}||∞ = sup_{𝜃∈[−r/𝜖,0]} ||x𝜖(s + 𝜖𝜃) − y(s + 𝜖𝜃)||
   = sup_{𝜎∈[s−r,s]} ||x𝜖(𝜎) − y(𝜎)|| ⩽ sup_{𝜎∈[−r,s]} ||x𝜖(𝜎) − y(𝜎)|| = sup_{𝜎∈[0,s]} ||x𝜖(𝜎) − y(𝜎)||.

Therefore, we obtain

  ||x𝜖(t) − y(t)|| ⩽ L𝜖 ∫_0^t sup_{𝜎∈[0,s]} ||x𝜖(𝜎) − y(𝜎)|| dh1(s/𝜖) + 𝜉(𝜖) + 𝜖CMN

and hence, the Grönwall inequality for the Perron–Stieltjes integral (Theorem 1.52) implies that

  ||x𝜖(t) − y(t)|| ⩽ sup_{𝜏∈[0,t]} ||x𝜖(𝜏) − y(𝜏)|| ⩽ e^{𝜖L(h1(t/𝜖)−h1(0))} [𝜉(𝜖) + 𝜖CMN].

Finally, by hypothesis (J5), it is possible to choose 𝜖 > 0 sufficiently small such that

  𝜖L [h1(t/𝜖) − h1(0)] = tL · [h1(t/𝜖) − h1(0)]/(t/𝜖) ⩽ tLK ⩽ MLK.

Then, taking 𝜂 = e^{KLM} [𝜉(𝜖) + 𝜖CMN], we get

  ||x𝜖(t) − y(t)|| ⩽ 𝜂, for all t ∈ [0, M],

proving the result. ◽
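In view of Remark 3.38, the estimate of Theorem 3.43 can be visualized in the equivalent slow-time form on [0, M/𝜖]. The sketch below is a deliberately simplified scalar instance, entirely our own assumptions (no delay, h1(s) = s, g ≡ 0, X = ℝ): it integrates x' = 𝜖 sin^2(t)x together with its average y' = (𝜖/2)y by Euler steps and reports the sup-distance over [0, M/𝜖], which shrinks with 𝜖:

```python
import math

def euler(rhs, x0, t_end, n):
    """Explicit Euler integration, returning the whole trajectory."""
    x, t, step = x0, 0.0, t_end / n
    traj = [x]
    for _ in range(n):
        x += step * rhs(x, t)
        t += step
        traj.append(x)
    return traj

def sup_diff(eps, M=1.0, n=200000):
    """sup_t |x_eps(t) - y(t)| on [0, M/eps] for x' = eps*sin^2(t)*x vs y' = (eps/2)*y."""
    t_end = M / eps
    orig = euler(lambda x, t: eps * math.sin(t) ** 2 * x, 1.0, t_end, n)
    avg  = euler(lambda y, t: eps * 0.5 * y,              1.0, t_end, n)
    return max(abs(a - b) for a, b in zip(orig, avg))

for eps in (0.2, 0.1, 0.05):
    print(eps, sup_diff(eps))   # the sup-distance decreases as eps decreases
assert sup_diff(0.05) < sup_diff(0.2)
```

Here f0 is computed exactly (the mean of sin^2 is 1/2), so the entire discrepancy is the averaging error that the theorem controls.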

In view of Remark 3.38, another averaging method follows next as an immediate consequence of Theorem 3.43. Compare with the initial conditions and codomains used in [84, Corollary 3.2].

Corollary 3.44: Let B ⊂ X and P ⊂ G([−r, 0], B) be open sets, and assume that f : P × [0, ∞) → X satisfies conditions (J1) and (J2), with P instead of P̃𝜖, and that g : P × [0, ∞) × (0, 𝜖0] → X satisfies conditions (J3) and (J4), with P instead


of P̃𝜖. Suppose conditions (J5)–(J7) are fulfilled, where h1, h2 : [0, ∞) → ℝ are nondecreasing functions and 𝜙 ∈ P. Consider the nonautonomous measure FDE

  y(t) = y(0) + 𝜖 ∫_0^t f(ys, s)dh1(s) + 𝜖^2 ∫_0^t g(ys, s, 𝜖)dh2(s),
  y0 = 𝜖𝜙,   (3.80)

and the averaged autonomous FDE

  ẋ = 𝜖 f0(xt),
  x0 = 𝜖𝜙,   (3.81)

where f0 is given by

  f0(𝜓) = lim_{T→∞} (1/T) ∫_0^T f(𝜓, s)dh1(s), for every 𝜓 ∈ P.

Let M > 0 and x𝜖, y : [−r, M/𝜖] → B be solutions of (3.80) and (3.81), respectively. Then, for every 𝜂 > 0, there exists 𝜖0 > 0 such that

  ||x𝜖(t) − y(t)|| < 𝜂, for every t ∈ [−r, M/𝜖] and 𝜖 ∈ (0, 𝜖0].

Now, we are able to present a nonperiodic averaging principle for impulsive measure FDEs, using the relations, provided by Theorem 3.6, between the solutions of these equations and the solutions of measure FDEs. A slightly different version can be found in [84, Theorem 4.2].

Theorem 3.45: Consider P = G([−r, 0], X) and the nonautonomous impulsive measure FDE

  x𝜖(t) = x𝜖(0) + 𝜖 ∫_0^t f((x𝜖)s, s)dh(s) + 𝜖^2 ∫_0^t g((x𝜖)s, s, 𝜖)dh(s) + 𝜖 ∑_{k∈ℕ, tk<t} Ik(x𝜖(tk)),
  (x𝜖)0 = 𝜖𝜙,

whose solution is x𝜖 : [−r, M/𝜖] → X, and let y𝜖 be a solution of the corresponding averaged equation. Then, for every 𝜂 > 0, there exists 𝜖0 > 0 such that

  ||x𝜖(t) − y𝜖(t)|| < 𝜂, for every t ∈ [−r, M/𝜖] and 𝜖 ∈ (0, 𝜖0].

Proof. Define a function h̃ : [0, ∞) → ℝ by

  h̃(t) = h(t), for t ∈ [0, t1], and h̃(t) = h(t) + ck, for t ∈ (tk, tk+1], k ∈ ℕ,

in such a way that, for all k ∈ ℕ, the sequence {ck}_{k=1}^{∞} satisfies 0 ⩽ ck ⩽ ck+1 and Δ+h̃(tk) = 1. By definition, the function h̃ is nondecreasing and left-continuous. On the other hand, as a consequence of (J5) and the definition of h̃, there exists a constant C > 0 such that, for all 𝛽 ⩾ 0, we have

  lim sup_{T→∞} [h̃(T + 𝛽) − h̃(𝛽)]/T ⩽ C.

Then, from the hypotheses, we obtain

  x𝜖(t) = x𝜖(0) + ∫_0^t (𝜖f((x𝜖)s, s) + 𝜖^2 g((x𝜖)s, s, 𝜖))dh(s) + ∑_{k∈ℕ, 0⩽tk<t} 𝜖Ik(x𝜖(tk)),

for all 𝜖 ∈ (0, 𝜖0] and all t ∈ [0, M/𝜖]. Define a function

  F𝜖(z, t) = 𝜖f(z, t) + 𝜖^2 g(z, t, 𝜖), for t ≠ tk, and F𝜖(z, t) = 𝜖Ik(z(0)), for t = tk, k ∈ ℕ.

(NT1) for b > t0 and y ∈ G([t0 − r, b], B), the functions t ↦ f(yt, t) and t ↦ g(yt, t, 𝜖) are regulated on [t0, b]𝕋;

(NT2) there exists a constant C > 0 such that, for x, y ∈ P and u1, u2 ∈ [t0, ∞)𝕋 with u1 ⩽ u2,

  ||∫_{u1}^{u2} [f(x, s) − f(y, s)]Δs|| ⩽ C ∫_{u1}^{u2} ||x − y||∞ Δs;

(NT3) if z : [−r, 0] → B is a regulated function, then

  f0(z) = lim_{T→∞, t0+T∈𝕋} (1/T) ∫_{t0}^{t0+T} f(z, s)Δs

exists and is well defined.

Let 𝜙 ∈ G([t0 − r, t0]𝕋, B) be bounded. Suppose that, for every 𝜖 ∈ (0, 𝜖0], the functional dynamic equation on time scales

  x(t) = x(t0) + 𝜖 ∫_{t0}^{t} f(x*_s, s)Δs + 𝜖^2 ∫_{t0}^{t} g(x*_s, s, 𝜖)Δs,
  x(t) = 𝜖𝜙(t), t ∈ [t0 − r, t0]𝕋,   (3.88)


has a solution x𝜖 : [t0 − r, t0 + M/𝜖]𝕋 → B, and the averaged FDE

  ẏ = 𝜖 f0(y*_t),
  y(t) = 𝜖𝜙(t), t ∈ [t0 − r, t0]𝕋,   (3.89)

has a solution y𝜖 : [t0 − r, t0 + M/𝜖]𝕋 → B. Then, for every 𝜂 > 0, there exists 𝜖0 > 0 such that, for 𝜖 ∈ (0, 𝜖0],

  ||x𝜖(t) − y𝜖(t)|| < 𝜂, for every t ∈ [t0 − r, t0 + M/𝜖]𝕋.

Proof. Note that we can consider t0 = 0 without loss of generality. Otherwise, we can deal with the shifted problem with the time scale 𝕋̃ = {t − t0 : t ∈ 𝕋} and the functions

  f̃(x, t) = f(x, t + t0) and g̃(x, t, 𝜖) = g(x, t + t0, 𝜖),

where f̃ : P × [0, ∞)𝕋̃ → X and g̃ : P × [0, ∞)𝕋̃ × (0, 𝜖0] → X. For t ∈ [0, ∞), z ∈ P and 𝜖 ∈ (0, 𝜖0], define the extensions

  f*(z, t) = f(z, t*) and g*(z, t, 𝜖) = g(z, t*, 𝜖).

Since lim_{t→∞} 𝜇(t)/t = 0, where 𝜇 is the graininess function, there are numbers D > 0 and 𝜏 ∈ 𝕋 such that 𝜇(t)/t ⩽ D, for every t ∈ [𝜏, ∞)𝕋. If t ∈ ℝ is such that 𝜌(t*) ⩽ t, then

  t* − t ⩽ 𝜎(𝜌(t*)) − 𝜌(t*),

since the backward and forward jump operators (see Definition 3.11) satisfy t* ⩽ 𝜎(𝜌(t*)) and t ⩾ 𝜌(t*), by the definition of t*. Therefore,

  t* ⩽ t + 𝜇(𝜌(t*)) ⩽ t + D𝜌(t*) ⩽ t + Dt = t(D + 1).

For all t ∈ [0, ∞), set h(t) = t*. Then, for sufficiently large T and for all a ⩾ 0, we have

  [h(a + T) − h(a)]/T = [(a + T)* − a*]/T ⩽ [(a + T)(D + 1) − a*]/T,

whence

  lim sup_{T→∞} [h(a + T) − h(a)]/T ⩽ lim sup_{T→∞} [(a + T)(D + 1) − a*]/T = D + 1,

which, in turn, shows that conditions (J5) and (J6) are fulfilled. As a consequence of Theorems 3.25 and 3.46, we conclude that

  f0(z) = lim_{T→∞} (1/T) ∫_0^T f(z, s)Δs = lim_{T→∞} (1/T) ∫_0^T f(z, s*)dh(s) = lim_{T→∞} (1/T) ∫_0^T f*(z, s)dh(s),

for every z ∈ P. Then, using the fact that x𝜖 : [−r, M/𝜖]𝕋 → B is a solution of the nonautonomous functional dynamic equation on time scales (3.88), Theorem 3.34 yields that x*𝜖 : [−r, M/𝜖] → B is a solution, on [−r, M/𝜖], of the measure FDE

  x*(t) = x*(0) + 𝜖 ∫_0^t f*(x*_s, s)dh(s) + 𝜖^2 ∫_0^t g*(x*_s, s, 𝜖)dh(s),
  x*_0 = 𝜖𝜙*.   (3.90)

Finally, Lemma 3.32 implies that conditions (J1), (J2), (J3), and (J4) are also fulfilled. Therefore, Corollary 3.44 implies that, for every 𝜂 > 0, there exists 𝜖0 > 0 such that, for all 𝜖 ∈ (0, 𝜖0] and t ∈ [0, M/𝜖], the inequality

  ||x*𝜖(t) − y𝜖(t)|| < 𝜂

holds, where y𝜖 is a solution of the averaged autonomous functional dynamic equation on time scales (3.89). Noticing that x*𝜖(t) = x𝜖(t) for t ∈ [0, M/𝜖]𝕋 and using the initial condition, the statement follows. ◽

3.5 Continuous Dependence on Time Scales

This section is devoted to results on continuous dependence on time scales of solutions of dynamic equations on time scales. This type of result has been investigated by several researchers [1, 62, 157], because it plays an important role in applications. Let X be a Banach space with norm || ⋅ ||. The idea behind this type of result is to prove that the solution of the initial value problem

  xΔ(t) = f(x(t), t), t ∈ 𝕋n,
  x(t0) = x0, t0 ∈ 𝕋n,

where f : X × 𝕋n → X and x0 ∈ X, converges uniformly, as n → ∞, to the solution of the problem

  xΔ(t) = f(x(t), t), t ∈ 𝕋,
  x(t0) = x0, t0 ∈ 𝕋,

where f : X × 𝕋 → X and x0 ∈ X, whenever dH(𝕋n, 𝕋) → 0, as n → ∞, with dH denoting the Hausdorff metric (or 𝕋n → 𝕋, as n → ∞, using the induced metric from the Fell topology; see [190] for details). We consider the Hausdorff topology and the Hausdorff metric, in which the distance between two sets is defined by

  dH(A, B) = max{ sup_{a∈A} inf_{b∈B} ||a − b||, sup_{b∈B} inf_{a∈A} ||a − b|| }.
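For finite sets, the Hausdorff distance defined above is straightforward to compute. The sketch below (our illustration, with an arbitrary discretization) evaluates dH between finite versions of 𝕋n = (1/n)ℤ ∩ [0, 3] and a fine grid standing in for 𝕋 = [0, 3]:

```python
def hausdorff(A, B):
    """dH(A, B) = max( sup_a inf_b |a-b| , sup_b inf_a |a-b| ) for finite sets of reals."""
    d_ab = max(min(abs(a - b) for b in B) for a in A)
    d_ba = max(min(abs(a - b) for a in A) for b in B)
    return max(d_ab, d_ba)

# fine grid standing in for the interval T = [0, 3]
T = [k / 1000 for k in range(3001)]
for n in (2, 5, 10, 50):
    Tn = [k / n for k in range(3 * n + 1)]     # (1/n)Z intersected with [0, 3]
    print(n, hausdorff(Tn, T))                 # roughly 1/(2n), tending to 0 as n grows
```

The asymmetric quantities d_ab and d_ba can differ greatly (here one of them is 0), which is why the definition takes the maximum of both.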

Our goal, here, is to generalize these results. More precisely, assuming that B ⊂ X is open, On ⊂ G([t0 − r, t0 + 𝜎]𝕋n , B) is open, Pn = {yt ∶ y ∈ On , t ∈ [t0 , t0 + 𝜎]},


f : Pn × [t0, t0 + 𝜎]𝕋n → X is a function. As commented in Section 3.2, we shall consider that I + Ik : B → B for every k ∈ {1, …, m}, where I : B → B is the identity operator and Ik : B → X denotes the impulse operator. Our aim is to provide sufficient conditions to ensure that the sequence of solutions of the impulsive functional dynamic equations on time scales

  x(t) = x(t0) + ∫_{t0}^{t} f(x^{*n}_s, s)Δs + ∑_{k∈{1,…,m}, tk<t} Ik(x(tk)), t ∈ [t0, t0 + 𝜎]𝕋n,

converges to the solution of the corresponding equation on the limit time scale 𝕋 (Theorem 3.48). Under the hypotheses of that theorem, given 𝜖 > 0, there exists N > 0 sufficiently large such that, for every n > N, we have

  ||x_n^{*n}(t) − x*(t)|| < 𝜖, for all t ∈ [t0, t0 + 𝜎].

Proof. Because the sequence of functions {gn}_{n∈ℕ} converges uniformly to g, given 𝜖 > 0, there exists N1 > 0 sufficiently large such that, for every n > N1,

  |gn(t) − g(t)| < 𝜖, for all t ∈ [t0, t0 + 𝜎].

Then, ||gn − g||∞ < 𝜖 for sufficiently large n. On the other hand, since the sequence of functions 𝜙*_n converges uniformly to 𝜙* as n → ∞, there exists N2 > 0 sufficiently large such that, for every n > N2,

  ||𝜙*_n(t) − 𝜙*(t)|| < 𝜖, for all t ∈ [−r, 0].

Now, due to the fact that lim_{n→∞} gn(t0) = g(t0) and lim_{n→∞} gn(t0 + 𝜎) = g(t0 + 𝜎), the sequences {gn(t0)}_{n∈ℕ} and {gn(t0 + 𝜎)}_{n∈ℕ} are necessarily bounded. Thus, there exists a constant M1 > 0 such that

  var_{t0}^{t0+𝜎}(gn) = gn(t0 + 𝜎) − gn(t0) ⩽ M1, n ∈ ℕ.   (3.97)

Hence, by condition (CD1) and the facts that gn − g is a function of bounded variation and s ↦ f(xs, s) is a regulated function on [t0, t0 + 𝜎] for every x ∈ O, it follows that the Perron–Stieltjes integral ∫_{t0}^{t0+𝜎} f(x*_s, s)d(gn − g)(s) exists. Since gn − g converges uniformly to 0 as n → ∞, Theorem 1.88 yields

  lim_{n→∞} ∫_{t0}^{t} f(xs, s)d(gn − g)(s) = 0

uniformly with respect to t ∈ [t0, t0 + 𝜎]. Therefore, for the given 𝜖 > 0, there exists N3 ∈ ℕ such that

  ||∫_{t0}^{t} f(xs, s)d(gn − g)(s)|| ⩽ 𝜖, for all n ⩾ N3 and t ∈ [t0, t0 + 𝜎].   (3.98)

For t ∈ [t0, t0 + 𝜎] and n > max{N1, N2, N3}, we also have

  ||x_n^{*n}(t) − x*(t)||
   = ||x_n^{*n}(t0) − x*(t0) + ∫_{t0}^{t} f((x_n^{*n})_s, s)dgn(s) − ∫_{t0}^{t} f((x*)_s, s)dg(s)||
   ⩽ ||x_n^{*n}(t0) − x*(t0)|| + ||∫_{t0}^{t} f((x_n^{*n})_s, s)dgn(s) − ∫_{t0}^{t} f((x*)_s, s)dg(s)||
   ⩽ ||𝜙*_n(t0) − 𝜙*(t0)|| + ||∫_{t0}^{t} f((x_n^{*n})_s, s)dgn(s) − ∫_{t0}^{t} f((x*)_s, s)dg(s)||

   ⩽ ||∫_{t0}^{t} f((x_n^{*n})_s, s)dgn(s) − ∫_{t0}^{t} f((x*)_s, s)dgn(s)||
   + ||∫_{t0}^{t} f((x*)_s, s)dgn(s) − ∫_{t0}^{t} f((x*)_s, s)dg(s)||
   ⩽ 𝜖 + ∫_{t0}^{t} L(s) ||(x_n^{*n})_s − (x*)_s||∞ dgn(s),

where we used condition (CD3) and (3.98) to obtain the last inequality. Thus,

  ||x_n^{*n}(t) − x*(t)|| ⩽ 𝜖 + ∫_{t0}^{t} L(s) ||(x_n^{*n})_s − (x*)_s||∞ dgn(s).

Using the facts that (x_n^{*n})_{t0} = 𝜙*_n and (x*)_{t0} = 𝜙*, and the uniform convergence 𝜙*_n → 𝜙*, it follows that, for sufficiently large n,

  ||(x_n^{*n})_s − x*_s||∞ = sup_{𝜃∈[−r,0]} ||x_n^{*n}(s + 𝜃) − x*(s + 𝜃)|| ⩽ 𝜖 + sup_{𝜂∈[0,s]} ||x_n^{*n}(𝜂) − x*(𝜂)||,

whence

  ||x_n^{*n}(t) − x*(t)|| ⩽ 𝜖 + ∫_{t0}^{t} L(s) [𝜖 + sup_{𝜂∈[0,s]} ||x_n^{*n}(𝜂) − x*(𝜂)||] dgn(s).

Since L is a regulated function, there exists 𝛾 > 0 such that 𝛾 = sup_{s∈[t0,t0+𝜎]} L(s). Hence, for every t ∈ [t0, t0 + 𝜎], we have

  ||x_n^{*n}(t) − x*(t)||
   ⩽ 𝜖 + 𝜖 ∫_{t0}^{t0+𝜎} L(s)dgn(s) + ∫_{t0}^{t} L(s) sup_{𝜂∈[0,s]} ||x_n^{*n}(𝜂) − x*(𝜂)|| dgn(s)
   ⩽ 𝜖 + 𝜖 ∫_{t0}^{t0+𝜎} 𝛾 dgn(s) + ∫_{t0}^{t} 𝛾 sup_{𝜂∈[0,s]} ||x_n^{*n}(𝜂) − x*(𝜂)|| dgn(s).

Finally, the Grönwall inequality (Theorem 1.52) yields

  ||x_n^{*n}(t) − x*(t)|| ⩽ 𝜖 [1 + ∫_{t0}^{t0+𝜎} 𝛾 dgn(s)] e^{∫_{t0}^{t0+𝜎} 𝛾 dgn(s)} ⩽ 𝜖(1 + 𝛾M1)e^{𝛾M1},

where in the last inequality we used (3.97). Since 𝜖 > 0 is arbitrarily small, the statement follows. ◽


It is worth highlighting that one cannot suppress, from Theorem 3.48, the hypothesis on the uniform convergence of the sequence of functions {gn}_{n∈ℕ} to g, as n → ∞, because this cannot be ensured only by using dH(𝕋n, 𝕋) → 0, as n → ∞. Below, we present an example, borrowed from [21], which illustrates this situation.

Example 3.49: Define the time scales

  𝕋 = [0, a] ∪ [a + 1, b] and 𝕋n = [0, a + 1/n] ∪ [a + 1, b], for every n ∈ ℕ.

Then, dH(𝕋, 𝕋n) = 1/n → 0, as n → ∞. However, gn(a + 1/n) = a + 1/n, for every n ∈ ℕ, while g(a + 1/n) = a + 1. Thus, for every n ⩾ 2, there exists t ∈ [0, b] such that g(t) − gn(t) ⩾ 1/2, implying that the sequence {gn}_{n∈ℕ} does not converge uniformly to g.

Even if we consider the Fell topology instead of the Hausdorff topology, the assumption on the uniform convergence of the sequence of functions {gn}_{n∈ℕ} is needed. The next example, also borrowed from [21], illustrates this fact.

Example 3.50: Consider ℝ with the usual metric. Let CL(ℝ) denote the set of all closed and nonempty subsets of ℝ endowed with the Fell topology. Then, 𝕋n = {z + 1/n : z ∈ ℤ} converges to ℤ, as n → ∞ (see [62, Lemma 4]). Besides, for each n ∈ ℕ, gn(z + 1/n) = z + 1/n, while g(z + 1/n) = z + 1. Therefore, the sequence {gn}_{n∈ℕ} does not converge uniformly to g.

The next theorem, which appears in [21], is the main result of this section, and it concerns continuous dependence on time scales of solutions of impulsive functional dynamic equations on time scales.

Theorem 3.51: Let t0, t0 + 𝜎 ∈ 𝕋 ∩ 𝕋n for each n ∈ ℕ, with 𝜎 > 0, and consider an open set B ⊂ X. Suppose xn : [t0 − r, t0 + 𝜎]𝕋n → B is a solution of the impulsive functional dynamic equation on time scales

  xn(t) = xn(t0) + ∫_{t0}^{t} fn((x_n^{*n})_s, s)Δs + ∑_{k∈{1,…,m}, tk<t} Ik(xn(tk)), t ∈ [t0, t0 + 𝜎]𝕋n,

where, for each n ∈ ℕ, fn satisfies conditions (CDT1)–(CDT3), and x is a solution of the corresponding equation on the limit time scale 𝕋. Then, given 𝜖 > 0, there exists N > 0 sufficiently large such that, for n > N, we have

  ||xn(t) − x(t)|| < 𝜖, for t ∈ [t0 − r, t0 + 𝜎]𝕋n∩𝕋.

Proof. Since for each n ∈ ℕ, the function fn ∶ P × [t0 , t0 + 𝜎]𝕋 → X satisfies conditions (CDT1), (CDT2), and (CDT3), Lemma 3.32 yields that the corresponding conditions (CD1), (CD2), and (CD3) are fulfilled for the extension of fn . Therefore, all hypotheses of Theorem 3.48 are satisfied, and hence, the desired result follows immediately by applying the relations between the solutions of impulsive measure FDEs and the solutions of impulsive functional dynamic equations on time scales as described in Theorem 3.34. ◽


Criteria ensuring continuous dependence of solutions of dynamic equations on variable time scales have several applications in numerical analysis. It is a well-known fact that many differential equations cannot be solved analytically, however a numerical approximation to a certain solution is usually good enough to solve a problem described by models in engineering and sciences. In order to do this, it is possible to build up algorithms to compute such an approximation. Therefore, results as the ones presented in this chapter are very useful to the study of solutions of ODEs or dynamic equations on time scales, depending on the chosen time scale, without the necessity to solve the equations analytically. In what follows, we present some examples which illustrate this fact and show the effectiveness of our results. Such examples can be found in [21, 59, 157, 190]. Example 3.52: Consider a simple autonomous linear dynamic equation given by { xΔ (t) = ax(t), (3.101) x(0) = x0 . If we solve Eq. (3.101) for the time scale 𝕋 = ℝ, we clearly obtain x(t) = x0 eat . On the other hand, solving Eq. (3.101) for the time scale 𝕋n = n1 ℤ, where n ∈ ℕ, we obtain ) ( a nt for every t ∈ 𝕋n . yn (t) = x0 1 + n Therefore, dH (𝕋n , ℝ) → 0, as n → ∞, and moreover gn converges to g uniformly as n → ∞, where gn (t) = inf {s ∈ 𝕋n ∶ s ⩾ t} and g(t) = inf {s ∈ ℝ ∶ s ⩾ t}. Indeed, notice that if t ∈ 𝕋n , then gn (t) = g(t), while if t ∉ 𝕋n , then |gn (t) − g(t)| ⩽ n1 , for each n ∈ ℕ. Hence, for every t ∈ ℝ and every n ∈ ℕ, we have |gn (t) − g(t)| < n1 . It is also true that limn→∞ yn (t) = x(t). Example 3.53: Consider a particular (logistic) initial value problem given by ( ) { xΔ (t) = 4x 34 − x , (3.102) x(0) = x0 . Suppose 𝕋n = n1 ℤ+ , for n ∈ ℕ. Then, evaluating the solution, one obtains ( ) 1 x t + − x(t) ] [ n x(𝜎(t)) − x(t) 3 = − x(t) , = 4x(t) xΔ (t) = 1 𝜇(t) 4 n

which implies that, for each n ∈ ℕ,

x(t + 1/n) = x(t) + (4/n) x(t) [3/4 − x(t)] = x(t) [(3 + n)/n − (4/n) x(t)],


which is obtained by iterating the following equation (see [190] for details):

x_n(t) = x(t) [(3 + n)/n − (4/n) x(t)],

for each n ∈ ℕ. Finally, taking n → ∞, the solutions tend to the solution of the logistic differential equation on ℝ+, and d_H((1/n)ℤ+, ℝ+) → 0 as n → ∞ (see [190]). Similarly, one can show that gn → g uniformly as n → ∞, where gn(t) = inf{s ∈ 𝕋n : s ⩾ t} and g(t) = inf{s ∈ ℝ : s ⩾ t}.
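The convergence in Example 3.52 can be checked numerically. The following sketch (function names and the choice a = 1, x0 = 1 are ours, purely for illustration) evaluates the time-scale solution yn on 𝕋n = (1/n)ℤ and compares it with x(t) = x0 e^{at}:

```python
import math

def y_n(t, n, a=1.0, x0=1.0):
    # Solution of x^Delta = a*x on T_n = (1/n)Z: each step of length 1/n
    # multiplies the value by (1 + a/n), so y_n(t) = x0 * (1 + a/n)**(n*t).
    return x0 * (1.0 + a / n) ** (n * t)

def x(t, a=1.0, x0=1.0):
    # Solution of x' = a*x on T = R.
    return x0 * math.exp(a * t)

# As n grows, y_n(1) approaches x(1) = e, mirroring d_H(T_n, R) -> 0.
errors = [abs(y_n(1.0, n) - x(1.0)) for n in (10, 100, 1000)]
assert errors[0] > errors[1] > errors[2]
assert errors[2] < 2e-3
```

The same loop with the logistic update of Example 3.53 exhibits the analogous convergence to the logistic solution on ℝ+.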


4 Generalized Ordinary Differential Equations

Everaldo M. Bonotto¹, Márcia Federson², and Jaqueline G. Mesquita³

¹ Departamento de Matemática Aplicada e Estatística, Instituto de Ciências Matemáticas e de Computação (ICMC), Universidade de São Paulo, São Carlos, SP, Brazil
² Departamento de Matemática, Instituto de Ciências Matemáticas e de Computação (ICMC), Universidade de São Paulo, São Carlos, SP, Brazil
³ Departamento de Matemática, Instituto de Ciências Exatas, Universidade de Brasília, Brasília, DF, Brazil

The goal of this chapter is to introduce a new class of integral equations known as generalized ordinary differential equations for functions taking values in a Banach space. Throughout the book, we use the short form “generalized ODEs” to refer to these equations.

It is well known that the theory of generalized ordinary differential equations (ODEs) goes back to 1957, with the Czech mathematician Jaroslav Kurzweil; see his papers [147–149] from 1957 and 1958. Kurzweil's main initial idea was to generalize some results on continuous dependence, with respect to initial conditions, of solutions of classic ODEs in order to obtain averaging principles. To this end, in [147], Kurzweil introduced the concept of generalized ODEs for vector-valued and Banach space-valued functions. Generalized ODEs are heavily based on the nonabsolute integration theory of J. Kurzweil and R. Henstock, as presented in Chapters 1 and 2. Recall that the main feature of the Kurzweil–Henstock integral is to cope with highly oscillating functions, and not only with functions with many jumps.

One of the main advantages of working with generalized ODEs lies in the fact that such equations contain “limiting equations” of ODEs and other equations, as pointed out by Z. Artstein in [9], for instance. In that paper, Artstein proved that, when the right-hand side of an ODE satisfies some Carathéodory- and Lipschitz-type conditions, the “limiting equation” may no longer be viewed as “an ODE.” This inconvenient situation does not occur when we work in the framework of generalized ODEs. The reader may want to check both papers [4, 9] for more details.

Generalized Ordinary Differential Equations in Abstract Spaces and Applications, First Edition. Edited by Everaldo M. Bonotto, Márcia Federson, and Jaqueline G. Mesquita. © 2021 John Wiley & Sons, Inc. Published 2021 by John Wiley & Sons, Inc.

As a matter of fact, it has been observed, over the years,


that different types of differential equations can be identified with generalized ODEs, that is, they can be treated via the theory of these equations. Such is the case, for example, of classic ODEs, impulsive differential equations (IDEs), and measure differential equations (MDEs), as mentioned in the book [209] by Štefan Schwabik. Not only can these equations be regarded as generalized ODEs, but so can functional differential equations (FDEs) and, more generally, measure FDEs of neutral type, for instance.

The aim of the present chapter is to detail the relation between generalized ODEs and measure FDEs. Then, by means of the results of Chapter 3, relations between generalized ODEs and dynamic equations on time scales can be derived, as well as relations between generalized ODEs and impulsive FDEs. The relations between generalized ODEs and measure FDEs of neutral type are left to Chapter 15. It is also possible to generalize this correspondence in order to establish a relation between generalized ODEs and other types of measure FDEs, such as measure FDEs with infinite delays, measure FDEs with time-dependent delays, and measure FDEs with state-dependent delays (see [100, 116, 220]). Through these relations, we are able to obtain analogues of the results for generalized ODEs for all these types of special equations.

We begin this chapter with a section where we recall the basic concepts and results of the theory of generalized ODEs. Such results are borrowed from [209] and adapted to the Banach space-valued case. They are essential to the proofs of the correspondences between the solutions of generalized ODEs and the solutions of measure FDEs.

4.1 Fundamental Properties

Let X be a Banach space endowed with the norm ‖⋅‖, O ⊂ X be an open set, I ⊂ ℝ be an interval, and F : O × I → X be a function.

Definition 4.1: We say that x : I → X is a solution of the generalized ODE

dx/d𝜏 = DF(x, t)   (4.1)

on the interval I, whenever (x(t), t) ∈ O × I and

x(b) − x(a) = ∫_{a}^{b} DF(x(𝜏), t), for all a, b ∈ I,   (4.2)

where the integral is understood in the sense of Definition 2.1, with U(𝜏, t) = F(x(𝜏), t). It is important to notice that any generalized ODE is a type of integral equation, and the notation in the integrand of (4.1) does not mean that x is differentiable with respect to 𝜏. It is merely symbolic and follows the notation of the pioneering papers by Kurzweil (see [147–150], for instance).
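The integral of Definition 2.1 is a limit of Riemann-type sums over tagged divisions. As a rough numerical illustration (ours; ordinary Riemann–Stieltjes sums over a uniform tagged division, not the full gauge machinery), the sums below converge to a Perron–Stieltjes integral ∫ f dh, with a jump of the left-continuous integrator h contributing f(c) · Δ⁺h(c) in the limit:

```python
def stieltjes_sum(f, h, points, tags):
    # sum of f(tag_i) * (h(s_i) - h(s_{i-1})) over a tagged division
    return sum(f(tau) * (h(b) - h(a))
               for (a, b), tau in zip(zip(points, points[1:]), tags))

def h(t, c=0.5):
    # nondecreasing, left-continuous integrator with a unit jump at t = c
    return t + (1.0 if t > c else 0.0)

f = lambda t: t * t
n = 10_000                                  # uniform division of [0, 1]
points = [i / n for i in range(n + 1)]
tags = [(a + b) / 2 for a, b in zip(points, points[1:])]

s = stieltjes_sum(f, h, points, tags)
# limit value: integral of t^2 dt on [0, 1] plus the jump term f(0.5) * 1
assert abs(s - (1/3 + 0.25)) < 1e-3
# with f = 1 the sum telescopes to h(1) - h(0), as in Lemma 4.5 below
assert abs(stieltjes_sum(lambda t: 1.0, h, points, tags) - (h(1.0) - h(0.0))) < 1e-9
```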


The first result of this chapter is, therefore, an immediate consequence of the previous definition (Definition 4.1). An interested reader can check its proof in [209, Proposition 3.5] for the case where X = ℝⁿ. We omit its proof for the Banach space-valued case here, since it follows similarly as in [209, Proposition 3.5], with obvious adaptations.

Theorem 4.2: If x : I → X is a solution of the generalized ODE

dx/d𝜏 = DF(x, t)   (4.3)

on I, then for every fixed 𝛾 ∈ I, we have

x(s) = x(𝛾) + ∫_{𝛾}^{s} DF(x(𝜏), t), s ∈ I.   (4.4)

Reciprocally, if x : I → X satisfies the integral equation (4.4) for some 𝛾 ∈ I and every s ∈ I, and (x(t), t) ∈ O × I, then x is a solution of the generalized ODE (4.3).

Throughout this chapter, we assume that Ω = O × I. Next, we present a class of right-hand sides F which plays an important role within the theory of generalized ODEs.

Definition 4.3: Assume that h : I → ℝ is a nondecreasing function and 𝜔 : [0, ∞) → [0, ∞) is a continuous and increasing function such that 𝜔(0) = 0. We say that a function F : Ω → X belongs to the class ℱ(Ω, h, 𝜔), whenever, for all x, y ∈ O and all s1, s2 ∈ I, we have

‖F(x, s2) − F(x, s1)‖ ⩽ |h(s2) − h(s1)|,   (4.5)

‖F(x, s2) − F(x, s1) − F(y, s2) + F(y, s1)‖ ⩽ 𝜔(‖x − y‖) |h(s2) − h(s1)|.   (4.6)

When 𝜔 : [0, ∞) → [0, ∞) is the identity function, we write simply ℱ(Ω, h) instead of ℱ(Ω, h, 𝜔).

The next result, adapted from [209, Proposition 3.6] to Banach space-valued functions, describes how the solutions of (4.1) inherit the properties of the right-hand side F : Ω → X. In particular, if F is continuous with respect to the second variable, then any solution x : I → O of the generalized ODE (4.1) is a continuous function.

Theorem 4.4: Let O ⊂ X be open and x : [a, b] → X be a solution of the generalized ODE (4.1) on [a, b] ⊂ I. Then, for every 𝜎 ∈ [a, b],

lim_{s→𝜎} [x(s) − F(x(𝜎), s) + F(x(𝜎), 𝜎)] = x(𝜎).

Proof. Since x : [a, b] → X is a solution of the generalized ODE (4.1), we have x(t) ∈ O for every t ∈ [a, b]. Let 𝜎 ∈ [a, b] be fixed. According to Theorem 4.2,

x(s) − ∫_{𝜎}^{s} DF(x(𝜏), t) = x(𝜎),

which implies

x(s) − F(x(𝜎), s) + F(x(𝜎), 𝜎) − x(𝜎) − ∫_{𝜎}^{s} DF(x(𝜏), t) + F(x(𝜎), s) − F(x(𝜎), 𝜎) = 0,

for every s ∈ [a, b]. On the other hand, by Theorem 2.12, we obtain

lim_{s→𝜎} [∫_{𝜎}^{s} DF(x(𝜏), t) − F(x(𝜎), s) + F(x(𝜎), 𝜎)] = 0,

ensuring the existence of the limit lim_{s→𝜎} [x(s) − F(x(𝜎), s) + F(x(𝜎), 𝜎) − x(𝜎)], which, in turn, yields

lim_{s→𝜎} [x(s) − F(x(𝜎), s) + F(x(𝜎), 𝜎) − x(𝜎)] = 0,

concluding the proof. ◽

A proof of the next result can be carried out by a straightforward adaptation of [209, Lemma 3.9] to the Banach space-valued case.

Lemma 4.5: Assume that F : Ω → X satisfies condition (4.5) and let [a, b] ⊂ I. If x : [a, b] → X is such that x(t) ∈ O for every t ∈ [a, b], and if the integral ∫_{a}^{b} DF(x(𝜏), t) exists, then, for every s1, s2 ∈ [a, b], we have

‖∫_{s1}^{s2} DF(x(𝜏), t)‖ ⩽ |h(s1) − h(s2)|.

Proof. By condition (4.5), we obtain

|t − 𝜏| ‖F(x(𝜏), t) − F(x(𝜏), 𝜏)‖ ⩽ |t − 𝜏| |h(t) − h(𝜏)|,

for every 𝜏, t ∈ [a, b]. In addition, the integral ∫_{a}^{b} dh(s) exists and

∫_{s1}^{s2} dh(t) = h(s2) − h(s1), for s1, s2 ∈ [a, b].

Then, the statement follows from Theorem 2.15. ◽

The next lemma gives us another useful estimate. Its proof for the Banach space-valued case follows the same ideas as [178, Lemma 5].

Lemma 4.6: Assume that F : Ω → X belongs to the class ℱ(Ω, h, 𝜔). If the functions x, y : [a, b] ⊂ I → O are regulated, then

‖∫_{a}^{b} D[F(x(𝜏), s) − F(y(𝜏), s)]‖ ⩽ ∫_{a}^{b} 𝜔(‖x(s) − y(s)‖) dh(s).


Proof. We start by pointing out that the function 𝜔(‖x(⋅) − y(⋅)‖) is regulated, since it is a composition of a continuous function with a regulated function. From this fact, and using that h is a nondecreasing function, the Perron–Stieltjes integral on the right-hand side exists. Then, choosing an arbitrary tagged division d = (𝜏i, [si−1, si]) of [a, b], we get the following estimates:

‖∑_{i=1}^{|d|} [F(x(𝜏i), si) − F(x(𝜏i), si−1) − F(y(𝜏i), si) + F(y(𝜏i), si−1)]‖
 ⩽ ∑_{i=1}^{|d|} ‖F(x(𝜏i), si) − F(x(𝜏i), si−1) − F(y(𝜏i), si) + F(y(𝜏i), si−1)‖
 ⩽ ∑_{i=1}^{|d|} 𝜔(‖x(𝜏i) − y(𝜏i)‖)(h(si) − h(si−1)),

where we also used the fact that F ∈ ℱ(Ω, h, 𝜔). On the other hand, given 𝜖 > 0, there is a tagged division d = (𝜏i, [si−1, si]) ∈ TD_{[a,b]} such that

‖∫_{a}^{b} D[F(x(𝜏), s) − F(y(𝜏), s)] − ∑_{i=1}^{|d|} [F(x(𝜏i), si) − F(x(𝜏i), si−1) − F(y(𝜏i), si) + F(y(𝜏i), si−1)]‖ < 𝜖   (4.7)

and

|∫_{a}^{b} 𝜔(‖x(s) − y(s)‖) dh(s) − ∑_{i=1}^{|d|} 𝜔(‖x(𝜏i) − y(𝜏i)‖)(h(si) − h(si−1))| < 𝜖.   (4.8)

As a consequence of (4.7) and (4.8), we obtain

‖∫_{a}^{b} D[F(x(𝜏), s) − F(y(𝜏), s)]‖
 ⩽ ‖∑_{i=1}^{|d|} [F(x(𝜏i), si) − F(x(𝜏i), si−1) − F(y(𝜏i), si) + F(y(𝜏i), si−1)]‖
  + ‖∫_{a}^{b} D[F(x(𝜏), s) − F(y(𝜏), s)] − ∑_{i=1}^{|d|} [F(x(𝜏i), si) − F(x(𝜏i), si−1) − F(y(𝜏i), si) + F(y(𝜏i), si−1)]‖
 ⩽ ∑_{i=1}^{|d|} 𝜔(‖x(𝜏i) − y(𝜏i)‖)(h(si) − h(si−1)) + 𝜖
 ⩽ ∫_{a}^{b} 𝜔(‖x(s) − y(s)‖) dh(s) + 2𝜖.

Since 𝜖 > 0 is arbitrary, the desired inequality follows. ◽


0, let k0 ∈ ℕ be such that for k ≥ k0 , we have ||xk (s) − x(s)||
Given 𝜖 > 0, there is a gauge 𝛿 on [t0, 𝑣] such that, if d = (𝜏i, [si−1, si]) is a 𝛿-fine tagged division of [t0, 𝑣], then

‖∑_{i=1}^{|d|} [F(x(𝜏i), si) − F(x(𝜏i), si−1)] − ∫_{t0}^{𝑣} DF(x(𝜏), t)‖_∞ < 𝜖.   (4.29)

Combining (4.28) and (4.29), we obtain

‖x(𝑣)(𝜗) − x(𝑣)(𝑣)‖ < 2𝜖 + ‖∑_{i=1}^{|d|} [F(x(𝜏i), si) − F(x(𝜏i), si−1)](𝜗) − ∑_{i=1}^{|d|} [F(x(𝜏i), si) − F(x(𝜏i), si−1)](𝑣)‖.

On the other hand, using the definition of F, we conclude that

F(x(𝜏i), si)(𝜗) − F(x(𝜏i), si−1)(𝜗) = F(x(𝜏i), si)(𝑣) − F(x(𝜏i), si−1)(𝑣),

for every i ∈ {1, …, |d|}. Therefore, ‖x(𝑣)(𝜗) − x(𝑣)(𝑣)‖ < 2𝜖 and, hence, by the arbitrariness of 𝜖 > 0, we get the first equality of the statement.


Now, suppose 𝜗 ⩽ 𝑣. Similarly as before, we obtain

x(𝑣)(𝜗) = x(t0)(𝜗) + ∫_{t0}^{𝑣} DF(x(𝜏), t)(𝜗)

and

x(𝜗)(𝜗) = x(t0)(𝜗) + ∫_{t0}^{𝜗} DF(x(𝜏), t)(𝜗),

whence

x(𝑣)(𝜗) − x(𝜗)(𝜗) = ∫_{𝜗}^{𝑣} DF(x(𝜏), t)(𝜗).

Suppose d = (𝜏i, [si−1, si]) is an arbitrary tagged division of [𝜗, 𝑣]. By the definition of F, for each i = 1, 2, …, |d|,

F(x(𝜏i), si)(𝜗) − F(x(𝜏i), si−1)(𝜗) = 0,

which yields ∫_{𝜗}^{𝑣} DF(x(𝜏), t)(𝜗) = 0. Thus, since x is a solution of the generalized ODE (4.25), x(𝑣)(𝜗) = x(𝜗)(𝜗), concluding the proof. ◽

The next two theorems are the main results of this chapter, and they concern the correspondence between the solutions of certain measure FDEs and the solutions of nonautonomous generalized ODEs. The versions presented here are more general than those from [85], since conditions (B) and (C) on the function f involve functions M and L instead of constants. Besides, we consider Banach space-valued functions here.

Theorem 4.18: Assume that O is an open subset of G([t0 − r, t0 + 𝜎], X) with the prolongation property, P = {yt : y ∈ O, t ∈ [t0, t0 + 𝜎]}, 𝜙 ∈ P, g : [t0, t0 + 𝜎] → ℝ is a nondecreasing function, and f : P × [t0, t0 + 𝜎] → X satisfies conditions (A)–(C). Let F : O × [t0, t0 + 𝜎] → G([t0 − r, t0 + 𝜎], X) be given by (4.24). Let y ∈ O be a solution of the measure FDE

y(t) = y(t0) + ∫_{t0}^{t} f(ys, s) dg(s), t ∈ [t0, t0 + 𝜎],
y_{t0} = 𝜙.

For every t ∈ [t0, t0 + 𝜎], let

x(t)(𝜗) = y(𝜗), for 𝜗 ∈ [t0 − r, t], and x(t)(𝜗) = y(t), for 𝜗 ∈ [t, t0 + 𝜎].

Then, the function x : [t0, t0 + 𝜎] → O is a solution of the generalized ODE

dx/d𝜏 = DF(x, t).   (4.30)
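The passage from y to x in Theorem 4.18 has a simple mechanical description: x(t) copies y up to time t and is frozen at the value y(t) afterwards, so the "diagonal" x(t)(t) recovers y(t). A small sketch on a finite grid (the grid and the sample y are ours, purely for illustration):

```python
def make_x(y, grid):
    # x(t)(theta) = y(theta) for theta <= t, and y(t) for theta >= t:
    # the function x(t) follows y up to time t and is constant afterwards
    def x(t):
        return {theta: (y[theta] if theta <= t else y[t]) for theta in grid}
    return x

grid = [0.0, 0.5, 1.0, 1.5, 2.0]
y = {0.0: 1.0, 0.5: 2.0, 1.0: -1.0, 1.5: 0.0, 2.0: 3.0}
x = make_x(y, grid)

assert x(1.0)[0.5] == 2.0                    # below t, x(t) agrees with y
assert x(1.0)[2.0] == -1.0                   # above t, x(t) is frozen at y(t)
assert all(x(t)[t] == y[t] for t in grid)    # the diagonal recovers y
```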


Proof. Let 𝜖 > 0 and define a function h : [t0, t0 + 𝜎] → ℝ by

h(t) = ∫_{t0}^{t} M(s) dg(s), for t ∈ [t0, t0 + 𝜎].

Since g is nondecreasing, h is also a nondecreasing function. Then, given 𝑣 ∈ [t0, t0 + 𝜎], the function h admits only a finite number of points t ∈ [t0, 𝑣] such that Δ⁺h(t) ≥ 𝜖 (see Theorem 1.4). Let us denote these points by t1, …, tm. Consider a gauge 𝛿 on [t0, 𝑣] such that

𝛿(𝜏) < min{(tk − tk−1)/2 : k = 2, …, m}, for 𝜏 ∈ [t0, 𝑣], and
𝛿(𝜏) < min{|𝜏 − tk| : k = 1, …, m}, for 𝜏 ∈ [t0, 𝑣] ⧵ {tk}_{k=1}^{m}.

If a point–interval pair (𝜏, [c, d]) is 𝛿-fine, then [c, d] contains at most one of the points t1, …, tm, and if tk ∈ [c, d], then 𝜏 = tk. Using the equality y_{tk} = x(tk)_{tk}, for k ∈ {1, …, m}, by Corollary 2.14, we obtain

lim_{t→tk⁺} ∫_{tk}^{t} L(s) ‖ys − x(tk)s‖∞ dg(s) = L(tk) ‖y_{tk} − x(tk)_{tk}‖∞ Δ⁺g(tk) = 0,

for every k ∈ {1, …, m}. Therefore, we can pick a gauge 𝛿 on [t0, 𝑣] such that

∫_{tk}^{tk+𝛿(tk)} L(s) ‖ys − x(tk)s‖∞ dg(s) < 𝜖/2^{m+1}, for k ∈ {1, …, m}.
Given 𝜖 > 0, there is only a finite number of points t ∈ [t0, 𝑣] for which Δ⁺h(t) ≥ 𝜖. As in the previous theorem, we denote such points by t1, …, tm, and we consider a gauge 𝛿 : [t0, 𝑣] → (0, ∞) on [t0, 𝑣] satisfying

(i) 𝛿(𝜏) < min{(tk − tk−1)/2 : k = 2, …, m}, for 𝜏 ∈ [t0, 𝑣];
(ii) 𝛿(𝜏) < min{|𝜏 − tk| : k = 1, …, m}, for 𝜏 ∈ [t0, 𝑣] ⧵ {t1, …, tm};
(iii) ∫_{tk}^{tk+𝛿(tk)} L(s) ‖ys − x(tk)s‖∞ dg(s) < 𝜖/2^{m+1}, for k ∈ {1, …, m}.

Because conditions (A)–(C) are fulfilled, Lemma 4.16 implies that the function F given by (4.24) belongs to the class ℱ(O × [t0, t0 + 𝜎], h), where h is defined as earlier. Arguing as in the proof of the previous theorem, we can assume that the gauge 𝛿 satisfies

|h(𝜌) − h(𝜏)| ⩽ 𝜖, for every 𝜌 ∈ [𝜏, 𝜏 + 𝛿(𝜏)) and 𝜏 ∈ [t0, 𝑣] ⧵ {tk}_{k=1}^{m}.

By the existence of the Kurzweil integral ∫_{t0}^{𝑣} DF(x(𝜏), t), the gauge 𝛿 can be chosen such that

‖∑_{i=1}^{|d|} [F(x(𝜏i), si) − F(x(𝜏i), si−1)] − ∫_{t0}^{𝑣} DF(x(𝜏), t)‖ < 𝜖.   (4.39)

Let Δ > 0 be such that [t0, t0 + Δ] ⊂ (a, b), h(t0 + Δ) − h(t0) < 1/2, and also x ∈ O whenever ‖x − x̃‖ ⩽ h(t0 + Δ) − h(t0). Our goal is to apply the Banach Fixed Point Theorem to show the existence and uniqueness of a solution to (5.1). In order to do that, consider Q as the set of all functions z : [t0, t0 + Δ] → O such that z ∈ G([t0, t0 + Δ], O) and

‖z(t) − x̃‖ ⩽ h(t) − h(t0),

for every t ∈ [t0 , t0 + Δ].


One can easily check that Q ⊂ G([t0, t0 + Δ], O) is closed (recall that G([t0, t0 + Δ], O) = {y ∈ G([t0, t0 + Δ], X) : y(t) ∈ O for all t ∈ [t0, t0 + Δ]}; see Remark 1.2). For all s ∈ [t0, t0 + Δ] and z ∈ Q, define an operator T : Q → G([t0, t0 + Δ], X) by

Tz(s) = x̃ + ∫_{t0}^{s} DF(z(𝜏), t),

where the integral on the right-hand side exists due to Corollary 4.8. By Lemma 4.5, for s ∈ [t0, t0 + Δ], we have

‖Tz(s) − x̃‖ = ‖∫_{t0}^{s} DF(z(𝜏), t)‖ ⩽ h(s) − h(t0).

Therefore, T maps Q into itself. Consider t0 ⩽ s1 ⩽ t0 + Δ and z1, z2 ∈ Q. By Theorem 2.15 or Lemma 4.6,

‖Tz2(s1) − Tz1(s1)‖ = ‖∫_{t0}^{s1} D[F(z2(𝜏), t) − F(z1(𝜏), t)]‖
 ⩽ ‖∫_{t0}^{s1} D[‖z2(𝜏) − z1(𝜏)‖ h(t)]‖
 ⩽ sup_{𝜏∈[t0, t0+Δ]} ‖z2(𝜏) − z1(𝜏)‖ ⋅ [h(s1) − h(t0)]

 = ‖z2 − z1‖∞ ⋅ [h(s1) − h(t0)].

Thus,

‖Tz2 − Tz1‖∞ ⩽ ‖z2 − z1‖∞ [h(t0 + Δ) − h(t0)] < (1/2) ‖z2 − z1‖∞,

and, hence, T is a contraction. Then, the Banach Fixed Point Theorem yields the result.

In the sequel, we need to analyze the case where t0 is not a point of continuity of the function h. We define an auxiliary function h̃ : [a, b] → ℝ by

h̃(t) = h(t), for t ⩽ t0, and h̃(t) = h(t) − h(t0⁺) + h(t0), for t > t0.

Notice that h̃ is nondecreasing, left-continuous, and continuous at t0. For all x ∈ O, define

F̃(x, t) = F(x, t), for t ⩽ t0, and F̃(x, t) = F(x, t) − [F(x̃, t0⁺) − F(x̃, t0)], for t > t0.


It is not difficult to prove that F̃ ∈ ℱ(Ω, h̃). Thus, a solution z of the generalized ODE

dz/d𝜏 = DF̃(z, t), z(t0) = x̃⁺,

exists by the first part of the proof. Then, defining x(t0) = x̃ and x(t) = z(t) for t > t0, we obtain a solution of the generalized ODE (5.1) for which x(t0) = x̃. ◽

At this point, it is worth noticing that one can describe precisely the type of discontinuity of a solution of the generalized ODE (5.1). Indeed, the assumption on the left-continuity of the function h in Theorem 5.1 implies that the solution of (5.1) is left-continuous as well (see Lemma 4.9). This means that, given a solution x of (5.1), the left limit x(𝜎⁻) exists for every 𝜎 in the domain of x. In addition, the number Δ > 0 depends on the function h.

The second fact that we can infer from Theorem 5.1 is that the condition x̃ + F(x̃, t0⁺) − F(x̃, t0) ∈ O is sufficient, but not necessary. The next example, borrowed from [78, Example 2.18], shows us this fact.

Example 5.2: Consider X = ℝ with norm |⋅| (absolute value) and Ω = (−1, 1) × [0, 1]. Let 𝜑 : [0, 1] → ℝ be a function defined by 𝜑(t) = t − 1, for 0 < t ⩽ 1, and 𝜑(0) = 0, and let F : Ω → ℝ be defined by F(x, t) = 𝜑(t) for all (x, t) ∈ Ω. Note that F is constant with respect to the first variable. Consider a function h : [0, 1] → ℝ given by

h(t) = 2t + 1, for 0 < t ⩽ 1, and h(0) = 0.

By definition, the function h is left-continuous on (0, 1] and increasing on [0, 1]. Consider the generalized ODE given by

dx/d𝜏 = DF(x, t) = D[𝜑(t)], x(0) = 0.   (5.2)

We claim that F ∈ ℱ(Ω, h); we divide the proof of this claim into two parts, (i) and (ii).

(i) We assert that, for all (x, t2), (x, t1) ∈ Ω,

|F(x, t2) − F(x, t1)| ⩽ |h(t2) − h(t1)|.


Indeed, let 0 < t1 ⩽ t2 ⩽ 1 and x ∈ (−1, 1). Then,

|F(x, t2) − F(x, t1)| = |𝜑(t2) − 𝜑(t1)| = |t2 − 1 − (t1 − 1)| = |t2 − t1|
 ⩽ |2(t2 − t1)| = |2t2 + 1 − (2t1 + 1)| = |h(t2) − h(t1)|.

Now, let t1 = 0, 0 < t2 ⩽ 1, and x ∈ (−1, 1). In this case, we obtain

|F(x, t2) − F(x, t1)| = |𝜑(t2) − 𝜑(t1)| = |t2 − 1| = 1 − t2 < 1 < 2t2 + 1 = h(t2) − h(0) = |h(t2) − h(t1)|.

Notice that the case t1 = t2 = 0 is trivial. Hence, the assertion follows.

(ii) We assert that, for all (x, t2), (x, t1), (y, t2), (y, t1) ∈ Ω,

|F(x, t2) − F(x, t1) − F(y, t2) + F(y, t1)| ⩽ |h(t2) − h(t1)| |x − y|,

which comes directly from the fact that

|F(x, t2) − F(x, t1) − F(y, t2) + F(y, t1)| = |𝜑(t2) − 𝜑(t1) − 𝜑(t2) + 𝜑(t1)| = 0 ⩽ |h(t2) − h(t1)| |x − y|.

This concludes the proof of the claim that F ∈ ℱ(Ω, h).

Now, we prove that 0 + F(0, 0⁺) − F(0, 0) ∉ (−1, 1). In fact, by the definition of F,

F(0, 0⁺) − F(0, 0) = lim_{t→0⁺} F(0, t) − F(0, 0) = lim_{t→0⁺} 𝜑(t) = −1.

Hence, 0 + F(0, 0⁺) − F(0, 0) = −1 ∉ (−1, 1).

We proceed so as to show that the function 𝜑, defined at the beginning of this example, is the unique solution of the generalized ODE (5.2) on [0, 1]. At first, let us prove that 𝜑 is a solution of the generalized ODE (5.2) on [0, 1]. If s ∈ [0, 1], then

𝜑(s) − 𝜑(0) = ∫_{0}^{s} D[𝜑(t)] = ∫_{0}^{s} DF(𝜑(𝜏), t),

which proves that 𝜑 is a solution of (5.2) on [0, 1]. Suppose x : [0, 1] → ℝ is also a solution of (5.2) on [0, 1]. Then, given s ∈ [0, 1], we obtain

x(s) = 0 + ∫_{0}^{s} DF(x(𝜏), t) = ∫_{0}^{s} D[𝜑(t)] = 𝜑(s) − 𝜑(0) = 𝜑(s)

(since F(x(𝜏), t) = 𝜑(t) and 𝜑(0) = 0),


that is, x(s) = 𝜑(s) for all s ∈ [0, 1]. By the definition of the Kurzweil integral, we have 𝜑(s) − 𝜑(0) = ∫_{0}^{s} D[𝜑(t)]. This completes our example.
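Conditions (4.5) and (4.6) for the pair (F, h) of Example 5.2 can also be sampled numerically. The brute-force check below is our own illustration (grid sampling is evidence, not a proof) and tests the two inequalities on a grid of points of Ω = (−1, 1) × [0, 1]:

```python
def phi(t):
    return t - 1.0 if t > 0 else 0.0

def F(x, t):                 # constant with respect to x
    return phi(t)

def h(t):
    return 2.0 * t + 1.0 if t > 0 else 0.0

ts = [i / 20 for i in range(21)]       # grid on [0, 1]
xs = [-0.9, -0.5, 0.0, 0.5, 0.9]       # grid on (-1, 1)

# condition (4.5): |F(x,t2) - F(x,t1)| <= |h(t2) - h(t1)|
assert all(abs(F(x, t2) - F(x, t1)) <= abs(h(t2) - h(t1))
           for x in xs for t1 in ts for t2 in ts)
# condition (4.6): since F is constant in x, its left-hand side vanishes
assert all(F(x, t) == F(y, t) for x in xs for y in xs for t in ts)
```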

5.1.1 Applications to Other Equations

In this section, we apply Theorems 4.18, 4.19, and 5.1 in order to prove an existence and uniqueness result for measure FDEs. Then, as a direct consequence of Theorems 3.30 and 5.3, we prove an analogue for functional dynamic equations on time scales.

The following result is a generalization of [85, Theorem 5.3], since here we consider conditions (A)–(C) from Chapter 4, which are more general than the conditions considered in [85]. Moreover, we prove this result for functions taking values in an infinite dimensional Banach space. Recall that the prolongation property is given in Definition 4.15.

Theorem 5.3: Let r, 𝜎 > 0 and t0 ∈ ℝ. Assume that O ⊂ G([t0 − r, t0 + 𝜎], X) is an open set with the prolongation property and consider the set P = {xt : x ∈ O, t ∈ [t0, t0 + 𝜎]}. Assume, further, that the function g : [t0, t0 + 𝜎] → ℝ is nondecreasing and left-continuous, and that f : P × [t0, t0 + 𝜎] → X satisfies conditions (A)–(C) from Chapter 4. Let F : O × [t0, t0 + 𝜎] → G([t0 − r, t0 + 𝜎], X) be given by (4.24). If 𝜙 ∈ P is such that the function

z(t) = 𝜙(t − t0), for t ∈ [t0 − r, t0], and z(t) = 𝜙(0) + f(𝜙, t0) Δ⁺g(t0), for t ∈ (t0, t0 + 𝜎],

belongs to O, then there exist 𝛿 > 0 and a function y : [t0 − r, t0 + 𝛿] → X, which is the unique solution of the measure FDE

y(t) = y(t0) + ∫_{t0}^{t} f(ys, s) dg(s),
y_{t0} = 𝜙.

Proof. By Lemma 4.16, the function F belongs to the class ℱ(O × [t0, t0 + 𝜎], h), with

h(t) = ∫_{t0}^{t} (M(s) + L(s)) dg(s), for t ∈ [t0, t0 + 𝜎].

Define

x0(𝜗) = 𝜙(𝜗 − t0), for 𝜗 ∈ [t0 − r, t0], and x0(𝜗) = 𝜙(0), for 𝜗 ∈ [t0, t0 + 𝜎].

Then, it is not difficult to see that x0 ∈ O.


We proceed to prove that x0 + F(x0, t0⁺) − F(x0, t0) ∈ O. Clearly, F(x0, t0) = 0. Note that the limit F(x0, t0⁺) is taken with respect to the supremum norm, and we know that this limit exists because F ∈ ℱ(O × [t0, t0 + 𝜎], h). Therefore, F is regulated with respect to the second variable and, hence, we can calculate the pointwise limit F(x0, t0⁺)(𝜗) for every 𝜗 ∈ [t0 − r, t0 + 𝜎]. Then, Corollary 2.14 yields

F(x0, t0⁺)(𝜗) = 0, for 𝜗 ∈ [t0 − r, t0], and F(x0, t0⁺)(𝜗) = f(𝜙, t0) Δ⁺g(t0), for 𝜗 ∈ (t0, t0 + 𝜎].

This shows that x0 + F(x0, t0⁺) − F(x0, t0) = z ∈ O. Thus, all hypotheses of Theorem 5.1 are satisfied. Hence, there exist 𝛿 > 0 and a unique solution x : [t0, t0 + 𝛿] → X satisfying the generalized ODE

dx/d𝜏 = DF(x, t), x(t0) = x0.   (5.3)

Recall, by Definition 4.1, that (x(t), t) ∈ O × [t0, t0 + 𝛿]. Finally, Theorem 4.19 implies that the function y : [t0 − r, t0 + 𝛿] → X given by

y(𝜗) = x(t0)(𝜗), for t0 − r ⩽ 𝜗 ⩽ t0, and y(𝜗) = x(𝜗)(𝜗), for t0 ⩽ 𝜗 ⩽ t0 + 𝛿,

is a solution of the measure FDE

y(t) = y(t0) + ∫_{t0}^{t} f(ys, s) dg(s),
y_{t0} = 𝜙.

Note that this solution is unique. Otherwise, by Theorem 4.18, x would not be the only solution of the generalized ODE (5.3), which would be a contradiction. ◽

The next existence and uniqueness result is specialized for functional dynamic equations on time scales. The version presented here is more general than the one presented in [85, Theorem 5.5], because we deal with arbitrary functions M and L instead of constants (see hypotheses (ii) and (iii) below). Nevertheless, the proof follows similarly as in [85, Theorem 5.5].

Theorem 5.4: Let [t0 − r, t0 + 𝜎]𝕋 be a time scale interval, with t0, t0 + 𝜎 ∈ 𝕋 and 𝜎 > 0, let B ⊂ X be open, O = G([t0 − r, t0 + 𝜎], B), and P = G([−r, 0], B). Consider a function f : P × [t0, t0 + 𝜎]𝕋 → X satisfying the following conditions:

(i) the Perron Δ-integral ∫_{t0}^{t0+𝜎} f(ys, s) Δs exists for every y ∈ O;


(ii) there exists a Perron Δ-integrable function M : [t0, t0 + 𝜎]𝕋 → ℝ such that, for every y ∈ O and u1, u2 ∈ [t0, t0 + 𝜎]𝕋 with u1 ⩽ u2, we have

‖∫_{u1}^{u2} f(ys, s) Δs‖ ⩽ ∫_{u1}^{u2} M(s) Δs;

(iii) there exists a Perron Δ-integrable function L : [t0, t0 + 𝜎]𝕋 → ℝ such that, for every y, z ∈ O and u1, u2 ∈ [t0, t0 + 𝜎]𝕋 with u1 ⩽ u2, we have

‖∫_{u1}^{u2} [f(ys, s) − f(zs, s)] Δs‖ ⩽ ∫_{u1}^{u2} L(s) ‖ys − zs‖∞ Δs.

If 𝜙 : [t0 − r, t0]𝕋 → B is a regulated function for which 𝜙(t0) + f(𝜙*_{t0}, t0) 𝜇(t0) ∈ B (where, as usual, 𝜇 : 𝕋 → [0, ∞) is the graininess function), then there exist 𝛿 > 0, satisfying 𝛿 ⩾ 𝜇(t0) and t0 + 𝛿 ∈ 𝕋, and a function y : [t0 − r, t0 + 𝛿]𝕋 → B, which is the unique solution of the functional dynamic equation on time scales

y(t) = y(t0) + ∫_{t0}^{t} f(y*_s, s) Δs, t ∈ [t0, t0 + 𝛿]𝕋,
y(t) = 𝜙(t), t ∈ [t0 − r, t0]𝕋.

Proof. Let g(t) = t* and f*(y, t) = f(y, t*), for every t ∈ [t0, t0 + 𝜎] and y ∈ P. By the definition of g, Δ⁺g(t0) = 𝜇(t0). Then, conditions (i), (ii), and (iii) and Lemma 3.32 yield that the function f* fulfills conditions (A)–(C), and g : [t0, t0 + 𝜎] → ℝ is nondecreasing and left-continuous. By Lemma 3.20, since 𝜙 : [t0 − r, t0]𝕋 → B is a regulated function, 𝜙* : [t0 − r, t0] → B is also a regulated function. Define a function z : [t0 − r, t0 + 𝜎] → X by

z(t) = 𝜙*(t), for t ∈ [t0 − r, t0], and z(t) = 𝜙*(t0) + f(𝜙*_{t0}, t0) Δ⁺g(t0), for t ∈ (t0, t0 + 𝜎].   (5.4)

If t ∈ [t0 − r, t0], then, by hypothesis, 𝜙*(t) = 𝜙(t*) ∈ B. If t ∈ (t0, t0 + 𝜎], then 𝜙*(t0) + f(𝜙*_{t0}, t0) Δ⁺g(t0) = 𝜙*(t0) + f(𝜙*_{t0}, t0) 𝜇(t0) ∈ B. Therefore, z defined by (5.4) takes values in B. In this way, all the assumptions of Theorem 5.3 are satisfied. Thus, there exist 𝛿1 > 0 and a function y : [t0 − r, t0 + 𝛿1] → X, which is the unique solution of the measure FDE

y(t) = y(t0) + ∫_{t0}^{t} f*(ys, s) dg(s), t ∈ [t0, t0 + 𝛿1],
y_{t0} = 𝜙*_{t0}.   (5.5)


If t0 is right-dense, then there exists 𝜏 ∈ 𝕋 such that t0 < 𝜏 < t0 + 𝛿1. Take 𝛿 = 𝜏 − t0. Notice that 𝛿 > 0 and t0 + 𝛿 = 𝜏 ∈ 𝕋. Since [t0 − r, t0 + 𝛿] ⊂ [t0 − r, t0 + 𝛿1], y|[t0−r, t0+𝛿] is also a solution of (5.5) on [t0 − r, t0 + 𝛿]. Then, Theorem 3.30 yields y|[t0−r, t0+𝛿] = x*, where x : [t0 − r, t0 + 𝛿]𝕋 → B is a solution of the functional dynamic equation on time scales

x(t) = x(t0) + ∫_{t0}^{t} f(x*_s, s) Δs, t ∈ [t0, t0 + 𝛿]𝕋,
x(t) = 𝜙(t), t ∈ [t0 − r, t0]𝕋.   (5.6)

Again by Theorem 3.30, we obtain the uniqueness of the solution x. Now, assume that t0 is right-scattered. Without loss of generality, one can take 𝛿 ⩾ 𝜇(t0). Otherwise, set y(𝜎(t0)) = 𝜙(t0) + f(𝜙*_{t0}, t0) 𝜇(t0) to get a solution defined on [t0 − r, t0 + 𝜇(t0)]𝕋. Arguing as before, we get y|[t0−r, t0+𝛿] = x*, where x : [t0 − r, t0 + 𝛿]𝕋 → B is a solution of Eq. (5.6). Using Theorem 3.30 once again, we obtain the uniqueness of the solution x, and the statement holds. ◽
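At a right-scattered point, the dynamic equation advances by the explicit step y(𝜎(t0)) = y(t0) + f(…) 𝜇(t0) used at the end of the proof. On 𝕋 = hℤ, every point is right-scattered with 𝜇 ≡ h, so iterating this step solves the equation on the whole time scale. A minimal sketch (the scalar right-hand side f(y, t) = a·y and the constants are ours, purely for illustration):

```python
def step(y, f_value, mu):
    # one step across a right-scattered point:
    # y(sigma(t)) = y(t) + f(...) * mu(t)
    return y + f_value * mu

a, h = 1.0, 0.25          # x^Delta = a*x on the time scale T = h*Z
y, t = 1.0, 0.0
while t < 1.0 - 1e-12:    # advance from t = 0 to t = 1
    y = step(y, a * y, h)
    t += h
assert abs(y - (1.0 + a * h) ** 4) < 1e-12
```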

5.2 Prolongation and Maximal Solutions

In this section, we bring up results concerning the prolongation of solutions of generalized ODEs for Banach space-valued functions. These results are fundamental in the study of asymptotic properties of solutions of generalized ODEs, such as stability, boundedness, and controllability, as one can check in Chapters 8, 11, and 12. We also collect results on the prolongation of solutions specialized for MDEs and for dynamic equations on time scales.

Let O ⊂ X be an open set, [t0, ∞) ⊂ ℝ, and Ω = O × [t0, ∞). Consider the generalized ODE

dx/d𝜏 = DF(x, t),   (5.7)

where F ∈ ℱ(Ω, h) and h : [t0, ∞) → ℝ is nondecreasing and left-continuous. Throughout this section, for t0 < 𝛽 < 𝜗 < ∞, define

Γ∞_{𝛽,𝜗} = {[𝛽, 𝜗], [𝛽, 𝜗), [𝛽, ∞)}.

The next result presents conditions that guarantee the prolongation of a solution of the nonautonomous generalized ODE (5.7). Its proof is inspired by [209, Proposition 4.15]. Such a result is crucial in extending solutions to unbounded intervals. We point out that the version we present for Banach space-valued functions was firstly proved in [78, Theorem 3.1]. We also notice that a version for bounded intervals and finite dimensional space-valued functions can be found in [209, Lemma 4.4].
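The mechanism behind prolongation is concatenation: a solution x on [𝛾, 𝛽) and a solution y on I whose value at 𝛽 matches the left limit of x glue into a single solution z. The bookkeeping is simple enough to sketch (the concrete x, y, and 𝛽 below are ours, purely for illustration):

```python
def glue(x, y, beta):
    # z agrees with x on [gamma, beta) and with y on I = [beta, ...)
    def z(t):
        return x(t) if t < beta else y(t)
    return z

beta = 1.0
x = lambda t: t * t              # on [0, 1); its left limit at beta is 1
y = lambda t: 2.0 * t - 1.0      # on [1, 2]; y(beta) = 1 matches that limit
z = glue(x, y, beta)

assert z(0.5) == 0.25 and z(1.0) == 1.0 and z(2.0) == 3.0
# the matching condition lim_{t -> beta-} x(t) = y(beta) keeps z
# left-continuous at beta
assert abs(x(beta - 1e-8) - y(beta)) < 1e-7
```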


Theorem 5.5: Let F ∈ ℱ(Ω, h), where the function h : [t0, ∞) → ℝ is nondecreasing and left-continuous. Let x : [𝛾, 𝛽) → X and y : I → X, I ∈ Γ∞_{𝛽,𝜗}, be solutions of the generalized ODE (5.7) on [𝛾, 𝛽) and on I, respectively, where t0 ⩽ 𝛾 < 𝛽 < 𝜗 < ∞. If the limit lim_{t→𝛽⁻} x(t) exists and lim_{t→𝛽⁻} x(t) = y(𝛽), then the function z : [𝛾, 𝛽) ∪ I → X, defined by z(t) = x(t), for t ∈ [𝛾, 𝛽), and z(t) = y(t), for t ∈ I, is a solution of the generalized ODE (5.7) on [𝛾, 𝛽) ∪ I, where [𝛾, 𝛽) ∪ I = [𝛾, 𝜗], for I = [𝛽, 𝜗]; [𝛾, 𝛽) ∪ I = [𝛾, 𝜗), for I = [𝛽, 𝜗); and [𝛾, 𝛽) ∪ I = [𝛾, ∞), for I = [𝛽, ∞).

Proof. Assume that the limit lim_{t→𝛽⁻} x(t) exists and lim_{t→𝛽⁻} x(t) = y(𝛽). Define a function z : [𝛾, 𝛽) ∪ I → X by z(t) = x(t), for t ∈ [𝛾, 𝛽), and z(t) = y(t), for t ∈ I. Then, z is well defined and is a regulated function, since x and y are regulated functions by Lemma 4.9. Now, we show that z is, in fact, a solution of the generalized ODE (5.7) on [𝛾, 𝛽) ∪ I. Note that z(t) ∈ O for all t ∈ [𝛾, 𝛽) ∪ I, since x and y are solutions of the generalized ODE (5.7) (see Definition 4.1).

Because 𝛽 is an accumulation point of the set [𝛾, 𝛽), there is a sequence {tn}_{n∈ℕ} ⊂ [𝛾, 𝛽) such that tn → 𝛽 as n → ∞. Thus, once lim_{t→𝛽⁻} x(t) = y(𝛽), we obtain

lim_{n→∞} x(tn) = y(𝛽).   (5.8)

Let s1, s2 ∈ [𝛾, 𝛽) ∪ I. The first possibility we consider is s1 ∈ [𝛾, 𝛽) and s2 ∈ I. Then, for n ∈ ℕ sufficiently large, we conclude that tn ∈ (s1, 𝛽) and

∫_{s1}^{s2} DF(z(𝜏), t) = ∫_{s1}^{𝛽} DF(z(𝜏), t) + ∫_{𝛽}^{s2} DF(z(𝜏), t)
 = ∫_{s1}^{tn} DF(x(𝜏), t) + ∫_{tn}^{𝛽} DF(z(𝜏), t) + ∫_{𝛽}^{s2} DF(y(𝜏), t)
 = x(tn) − x(s1) + ∫_{tn}^{𝛽} DF(z(𝜏), t) + y(s2) − y(𝛽).

Using the definition of the function z, we obtain

∫_{s1}^{s2} DF(z(𝜏), t) = x(tn) − y(𝛽) + z(s2) − z(s1) + ∫_{tn}^{𝛽} DF(z(𝜏), t).   (5.9)

Then, by Lemma 4.5,

‖∫_{tn}^{𝛽} DF(z(𝜏), t)‖ ⩽ |h(𝛽) − h(tn)|,

for sufficiently large n. Furthermore, since h is left-continuous and tn → 𝛽 as n → ∞, with tn < 𝛽 for all n ∈ ℕ, we obtain lim_{n→∞} |h(𝛽) − h(tn)| = 0. Thus,

lim_{n→∞} ∫_{tn}^{𝛽} DF(z(𝜏), t) = 0.   (5.10)


Finally, taking the limit as n → ∞ in (5.9) and using equalities (5.8) and (5.10), we conclude that

∫_{s1}^{s2} DF(z(𝜏), t) = z(s2) − z(s1),   (5.11)

for every s1, s2 ∈ [𝛾, 𝛽) ∪ I such that s1 ∈ [𝛾, 𝛽) and s2 ∈ I. In the other cases, for which s1, s2 ∈ [𝛾, 𝛽) or s1, s2 ∈ I, Eq. (5.11) is easily verified. Hence, z is a solution of the generalized ODE (5.7) on [𝛾, 𝛽) ∪ I. ◽

At this point, we highlight the fact that [209, Lemma 4.4] can be derived as a consequence of our Theorem 5.5. This fact is shown in Corollary 5.6 below, which is a version of [209, Lemma 4.4] for I = [𝛽, 𝜗]. The proof presented here, borrowed from [78, Corollary 3.2], is slightly different, though.

Corollary 5.6: Let F ∈ ℱ(Ω, h), where the function h : [t0, ∞) → ℝ is nondecreasing and left-continuous. If x : [𝛾, 𝛽] → X and y : I → X, I ∈ Γ∞_{𝛽,𝜗}, are solutions of the generalized ODE (5.7) on [𝛾, 𝛽] and on I, respectively, where t0 ⩽ 𝛾 < 𝛽 < 𝜗 < ∞ and x(𝛽) = y(𝛽), then the function z : [𝛾, 𝛽] ∪ I → X, given by z(t) = x(t), for t ∈ [𝛾, 𝛽], and z(t) = y(t), for t ∈ I, is a solution of the generalized ODE (5.7) on [𝛾, 𝛽] ∪ I.

Proof. Suppose x(𝛽) = y(𝛽). Due to the fact that F ∈ ℱ(Ω, h) and h is a left-continuous function on [t0, ∞), Lemma 4.9 implies that x is left-continuous on (𝛾, 𝛽]. Thus, lim_{t→𝛽⁻} x(t) = x(𝛽) and, hence, lim_{t→𝛽⁻} x(t) = y(𝛽), once x(𝛽) = y(𝛽). Moreover, because x : [𝛾, 𝛽] → X is a solution of the generalized ODE (5.7) on [𝛾, 𝛽], x|[𝛾, 𝛽) is a solution of the generalized ODE (5.7) on [𝛾, 𝛽). Then, all hypotheses of Theorem 5.5 are fulfilled and, therefore, the function z defined above is a solution of the generalized ODE (5.7) on [𝛾, 𝛽] ∪ I. ◽

In the sequel, we intend to prove that, under certain conditions, the unique solution x : [𝜏0, 𝜏0 + Δ] → X of the generalized ODE (5.7) satisfying x(𝜏0) = x0, which is ensured by Theorem 5.1, can be extended to intervals containing [𝜏0, 𝜏0 + Δ], up to a maximal interval of existence, at least while the graph of the solution does not reach the boundary of Ω. Consider a pair (x0, 𝜏0) ∈ Ω satisfying

x0 + F(x0, 𝜏0⁺) − F(x0, 𝜏0) ∈ O.

(5.12)
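Condition (5.12) is a pointwise membership test: the one-sided jump of F at (x0, 𝜏0) must not push the state out of O. As a minimal numerical sketch (our own illustration, not part of the text's development), take X = ℝ, O = (0, ∞), and a hypothetical right-hand side whose only jump, of size −x, occurs at t = 1:

```python
# Sketch: checking condition (5.12) for a hypothetical right-hand side F.
# X = R, O = (0, inf); the jump F(x, t+) - F(x, t) equals -x at t = 1 and
# vanishes elsewhere, so at t = 1 every state x > 0 is sent to 0, outside O.

def in_O(x):
    # O = (0, inf), an open subset of X = R
    return x > 0.0

def jump_F(x, t):
    # hypothetical one-sided jump F(x, t+) - F(x, t)
    return -x if t == 1.0 else 0.0

def satisfies_5_12(x, t):
    """True iff x + F(x, t+) - F(x, t) belongs to O, i.e. (x, t) passes (5.12)."""
    return in_O(x + jump_F(x, t))
```

Away from t = 1 every (x, t) with x > 0 passes the test; at t = 1 no point does, so a solution reaching t = 1 would escape O there.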

For a fixed pair (x0, 𝜏0) ∈ Ω, define the set S𝜏0,x0 of all functions x ∶ Ix ⊂ [t0, ∞) → X, where Ix is an interval with 𝜏0 = min Ix, such that x is a solution on Ix of the generalized ODE (5.7) with initial condition x(𝜏0) = x0. Given two elements x ∶ Ix → X and z ∶ Iz → X of S𝜏0,x0, we say that x is smaller than or equal to z (x ≼ z) if and only if Ix ⊂ Iz and z|Ix = x. The fact that the relation ≼ defines a partial order


5 Basic Properties of Solutions

relation in S𝜏0,x0 follows straightforwardly. See, for instance, [78, Proposition 3.4 and Remark 3.5]. As a matter of fact, the relation ≼ defines a total order in S𝜏0,x0. In order to investigate the forward maximal solution, one needs to require condition (5.12). Otherwise, x(t) ∉ O for t > 𝜏0, contradicting the definition of a solution of a generalized ODE. Therefore, from now on, we assume that (5.12) is fulfilled for every x ∈ O and every t ∈ [t0, ∞), which we summarize in the following definition: Ω = ΩF = {(x, t) ∈ Ω ∶ x + F(x, t+) − F(x, t) ∈ O}, (5.13) meaning that there are no points of Ω at which the solution of the generalized ODE (5.7) escapes from the set O. Now we recall the definitions, introduced in [78], of a prolongation to the right of a solution (proper or not) and, then, of a maximal forward solution of a nonautonomous generalized ODE. Definition 5.7: Let 𝜏0 ⩾ t0, I ⊂ [t0, ∞) and x ∶ I → X be a solution of (5.7) on the interval I, with 𝜏0 = min I. A solution y ∶ J → X, J ⊂ [t0, ∞), with 𝜏0 = min J, of the generalized ODE (5.7) is called a prolongation to the right of x if I ⊂ J and x(t) = y(t) for all t ∈ I. If I ⫋ J, then y is called a proper prolongation of x to the right, or a proper right prolongation of x. Similarly, we define a proper prolongation of a solution to the left. Definition 5.8: Let (x0, 𝜏0) ∈ Ω. We say that x ∶ J → X is a maximal forward solution, or simply maximal solution, of the generalized ODE dx/d𝜏 = DF(x, t), x(𝜏0) = x0,

(5.14)

if x ∈ S𝜏0,x0 and, for every z ∶ I → X in S𝜏0,x0 such that x ≼ z, we have x = z. This means that x is a maximal solution of the generalized ODE (5.14) whenever x ∈ S𝜏0,x0 and there is no proper right prolongation of x. Remark 5.9: When x ∶ J → X is a maximal solution of the generalized ODE (5.14) and sup J = ∞, x is called a global forward solution on J. The next auxiliary result is important for the proof of the existence and uniqueness of a maximal solution of the generalized ODE (5.14) for the case where X is a Banach space and Ω = O × [t0, ∞), with O being an open subset of X. We call the reader's attention to the fact that, in the classical theory of ODEs, the Grönwall inequality is a powerful tool to prove this type of result. Here, since up to now there is no Grönwall inequality available for Kurzweil integrals of the form ∫ DU(𝜏, t), the proof of the following result, borrowed from [78], is arduous.

5.2 Prolongation and Maximal Solutions

Lemma 5.10: Let F ∈ ℱ(Ω, h), where the function h ∶ [t0, ∞) → ℝ is nondecreasing and left-continuous, and Ω = ΩF, where ΩF is given by (5.13). Let (x0, 𝜏0) ∈ Ω and consider the generalized ODE (5.14). If y ∶ Jy → X and z ∶ Jz → X are solutions of the generalized ODE (5.14), where Jy and Jz are intervals such that 𝜏0 = min Jy = min Jz, then y(t) = z(t), for all t ∈ Jy ∩ Jz. Proof. Fix (x0, 𝜏0) ∈ Ω and assume that y ∶ Jy → X and z ∶ Jz → X are solutions of the generalized ODE (5.14), where Jy and Jz are intervals such that 𝜏0 = min Jy = min Jz. Therefore, y(𝜏0) = z(𝜏0) = x0.

(5.15)

We want to show that y(t) = z(t), for all t ∈ Jy ∩ Jz. Notice that Jy ∩ Jz is an interval of the form [𝜏0, d], with d < ∞, or [𝜏0, d), with d ⩽ ∞. Thus, we need to consider two situations. The first situation occurs when Jy ∩ Jz = [𝜏0, d]. Let Λ be the set of all t ∈ [𝜏0, d] such that y(s) = z(s), for every s ∈ [𝜏0, t]. Since 𝜏0 ∈ Λ, we have Λ ≠ Ø. Then, taking 𝜆 = sup Λ, we have 𝜆 ⩽ d and [𝜏0, 𝜆) ⊂ Λ. Therefore, y(t) = z(t),

t ∈ [𝜏0 , 𝜆).

(5.16)

Owing to the facts that F belongs to the class ℱ(Ω, h), with h being a left-continuous function, and that the functions y and z are solutions of the generalized ODE (5.14) on Jy ∩ Jz, Lemma 4.9 implies that y and z are left-continuous on (𝜏0, 𝜆]. Thus, y(𝜆) = limt→𝜆− y(t) = limt→𝜆− z(t) = z(𝜆), whence

y(𝜆) = z(𝜆). (5.17)

In view of (5.16) and (5.17), we have 𝜆 ∈ Λ and [𝜏0, 𝜆] ⊂ Λ.

(5.18)

Now, we want to prove that 𝜆 = d. In order to do so, we suppose, by contradiction, that 𝜆 < d. Because (y(𝜆), 𝜆) ∈ ΩF = Ω, Theorem 5.1 yields that there are 𝛿 > 0 (we can consider 𝜆 + 𝛿 < d, for instance) and a unique solution x ∶ [𝜆, 𝜆 + 𝛿] → X of the generalized ODE dx/d𝜏 = DF(x, t), x(𝜆) = y(𝜆) = z(𝜆).

(5.19)

On the other hand, y|[𝜆,𝜆+𝛿] and z|[𝜆,𝜆+𝛿] are solutions of (5.19). Thus, by the uniqueness of a solution, x(t) = z(t) = y(t),

t ∈ [𝜆, 𝜆 + 𝛿],

(5.20)


and, hence, (5.18) and (5.20) imply 𝜆 + 𝛿 ∈ Λ, which contradicts the definition of 𝜆. As a matter of fact, the contradiction came from the assumption that 𝜆 < d. This means d = 𝜆 and y(t) = z(t), for all t ∈ Jy ∩ Jz. Let us now consider the second situation, where Jy ∩ Jz = [𝜏0, d), with d ⩽ ∞. At first, we prove that, for every 𝜏 ∈ (𝜏0, d), we have y(s) = z(s), for every s ∈ [𝜏0, 𝜏). Define the set Λ of all t ∈ (𝜏0, d) such that y(s) = z(s), for all s ∈ [𝜏0, t). We proceed so as to show that Λ is not empty. Notice that (x0, 𝜏0) ∈ Ω = ΩF and, therefore, we can apply Theorem 5.1 to obtain 𝜂 > 0 (we can take 𝜏0 + 𝜂 < d, for instance) and a unique solution x ∶ [𝜏0, 𝜏0 + 𝜂] → X of the generalized ODE dx/d𝜏 = DF(x, t), x(𝜏0) = x0 = y(𝜏0) = z(𝜏0).

(5.21)

However, since y|[𝜏0,𝜏0+𝜂] and z|[𝜏0,𝜏0+𝜂] are solutions of (5.21), we conclude that x(t) = y(t) = z(t),

t ∈ [𝜏0 , 𝜏0 + 𝜂].

(5.22)

Furthermore, y(t) = z(t) for all t ∈ [𝜏0, 𝜏0 + 𝜂), which means that 𝜏0 + 𝜂 ∈ Λ. Therefore, Λ is not empty. Consider 𝜆 = sup Λ. In such a case, 𝜆 ⩽ d and (𝜏0, 𝜆) ⊂ Λ. Let us prove that 𝜆 = d. Once again, we suppose, to the contrary, that 𝜆 < d. We want to show that y(t) = z(t), for t ∈ [𝜏0, 𝜆), and y(𝜆) = z(𝜆). Let t ∈ [𝜏0, 𝜆). If t = 𝜏0, the conclusion comes straightforwardly as a consequence of (5.15). However, if t ∈ (𝜏0, 𝜆), then t ∈ Λ (since (𝜏0, 𝜆) ⊂ Λ). Thus, y(s) = z(s), for all s ∈ [𝜏0, t). Knowing that y|(𝜏0,t] and z|(𝜏0,t] are left-continuous functions, one gets y(t) = lims→t− y(s) = lims→t− z(s) = z(t). Therefore, y(t) = z(t), for all t ∈ [𝜏0, 𝜆) and, hence, by the left-continuity of the functions y|(𝜏0,𝜆] and z|(𝜏0,𝜆], we have y(𝜆) = z(𝜆). Then, y(t) = z(t),

t ∈ [𝜏0 , 𝜆].

(5.23)

Since (y(𝜆), 𝜆) ∈ ΩF = Ω, Theorem 5.1 yields that there are 𝛿 > 0 (we can take 𝜆 + 𝛿 < d, for instance) and a unique solution x ∶ [𝜆, 𝜆 + 𝛿] → X of the generalized ODE dx/d𝜏 = DF(x, t), x(𝜆) = y(𝜆) = z(𝜆).

(5.24)


In addition, because z|[𝜆,𝜆+𝛿] and y|[𝜆,𝜆+𝛿] are solutions of (5.24), the uniqueness of solutions implies x(t) = z(t) = y(t),

t ∈ [𝜆, 𝜆 + 𝛿].

(5.25)

Then, (5.23) and (5.25) imply 𝜆 + 𝛿 ∈ Λ, which contradicts the definition of 𝜆. This contradiction implies that d = 𝜆 and Λ = (𝜏0, d). Thus, y(t) = z(t), for all t ∈ [𝜏0, d) = Jy ∩ Jz, and the proof is complete. ◽ In view of the previous lemma, our job becomes easier. The next result brings up sufficient conditions that guarantee existence and uniqueness of a maximal solution of the generalized ODE (5.7) with initial condition x(𝜏0) = x0. A version of this result for the finite dimensional case, that is, X = ℝn and, moreover, Ω = O × (a, b), with O = Bc = {x ∈ ℝn ∶ ‖x‖ < c} and (a, b) ⊂ ℝ, can be found in [209, Proposition 4.13]. Here, we present a version for the infinite dimensional case, that is, X is any Banach space and Ω = O × [t0, ∞), where O ⊂ X is open. Such a version can be found in [78, Theorem 3.9]. Theorem 5.11: Let F ∈ ℱ(Ω, h), where the function h ∶ [t0, ∞) → ℝ is nondecreasing and left-continuous. If Ω = ΩF, then, for every (x0, 𝜏0) ∈ Ω, there exists a unique maximal solution x ∶ J → X of the generalized ODE (5.7), where x(𝜏0) = x0 and J is an interval such that 𝜏0 = min J. Proof. Consider Ω = ΩF and let (x0, 𝜏0) ∈ Ω be fixed. At first, we prove the existence of a maximal solution. In order to do that, consider the set S of all functions x ∶ Jx ⊂ [t0, ∞) → X such that Jx is an interval with 𝜏0 = min Jx and x is a solution of the generalized ODE (5.7) with x(𝜏0) = x0. It is clear that S is nonempty by the local existence and uniqueness of solutions guaranteed by Theorem 5.1. Let J = ⋃y∈S Jy and define a function x ∶ J → X by the relation x(t) = y(t), where y ∈ S and t ∈ Jy. Note that if y, z ∈ S, then y(s) = z(s), for all s ∈ Jy ∩ Jz, by Lemma 5.10. Therefore, x is well defined. Note, in addition, that (x(t), t) ∈ O × J and that J is an interval with 𝜏0 = min J.
This follows from the fact that J is a union of intervals sharing the common point 𝜏0. By construction, x admits no proper right prolongation, that is, x is a maximal solution of the generalized ODE (5.7), proving the existence of a maximal solution. Now, we prove the uniqueness of such a maximal solution. Suppose x1 ∶ J1 → X and x2 ∶ J2 → X are two maximal solutions of the generalized ODE (5.7) with x1(𝜏0) = x2(𝜏0) = x0 and J1, J2 are intervals satisfying 𝜏0 = min J1 = min J2. By Lemma 5.10, x1(t) = x2(t), for all t ∈ J1 ∩ J2. Thus, we may define a function x3 ∶ J3 → X, J3 = J1 ∪ J2, by x3(t) = x1(t), for t ∈ J1, and


x3(t) = x2(t), for t ∈ J2. Clearly, x3 is a solution of the generalized ODE (5.7) with initial condition x3(𝜏0) = x0, J1, J2 ⊂ J3 and x3|J1 = x1, x3|J2 = x2. Finally, because the solutions x1 and x2 are assumed to be maximal, we have J3 = J2 = J1 and x3(t) = x2(t) = x1(t), for all t ∈ J3. Therefore, x1 = x2. ◽ Next, we present a result which says that the maximal interval J of existence and uniqueness of a maximal solution is half-open. For a finite dimensional version of this fact, see [209, Proposition 4.14], where the author considered X = ℝn and Ω = O × (a, b), where O = Bc ⊂ ℝn and (a, b) ⊂ ℝ. The version presented here, borrowed from [78, Theorem 3.10], holds for the infinite dimensional case. Theorem 5.12: Consider F ∈ ℱ(Ω, h), where the function h ∶ [t0, ∞) → ℝ is nondecreasing and left-continuous, and Ω = ΩF. Suppose (x0, 𝜏0) ∈ Ω and x ∶ J → X is the maximal solution of the generalized ODE (5.7), with x(𝜏0) = x0, and J is an interval for which 𝜏0 = min J. Then, J = [𝜏0, 𝜔) with 𝜔 ⩽ ∞. Proof. It is clear that J ⊂ [t0, ∞). Let 𝜔 = sup J. Then, 𝜔 ⩽ ∞. If 𝜔 = ∞, then the result follows. Consider 𝜔 < ∞. We want to prove that 𝜔 ∉ J. Suppose, to the contrary, that 𝜔 ∈ J. Therefore, J = [𝜏0, 𝜔]. By the definition of a solution, (x(𝜔), 𝜔) ∈ Ω = ΩF. Thus, Theorem 5.1 yields that there exist 𝜂 > 0 and a unique solution z ∶ [𝜔, 𝜔 + 𝜂] → X of the generalized ODE (5.7) on [𝜔, 𝜔 + 𝜂] such that z(𝜔) = x(𝜔). Therefore, Corollary 5.6 implies that y(t) = x(t), for t ∈ [𝜏0, 𝜔], and y(t) = z(t), for t ∈ [𝜔, 𝜔 + 𝜂], defines a solution y of the generalized ODE (5.7) satisfying y(𝜏0) = x0. Notice that y is a proper right prolongation of x. But x is assumed to be maximal and, hence, we have a contradiction. Therefore, 𝜔 ∉ J and J = [𝜏0, 𝜔). ◽ A version of [209, Proposition 4.15] for functions taking values in an infinite dimensional Banach space X is presented next. This result can also be found in [78].
Theorem 5.13: Let F ∈ ℱ(Ω, h), where h ∶ [t0, ∞) → ℝ is a nondecreasing and left-continuous function and Ω = ΩF. Assume that (x0, 𝜏0) ∈ Ω and x ∶ [𝜏0, 𝜔) → X is the maximal solution of the generalized ODE (5.7) with x(𝜏0) = x0. Then, for every compact set K ⊂ Ω, there exists tK ∈ [𝜏0, 𝜔) such that (x(t), t) ∉ K, for all t ∈ (tK, 𝜔). Proof. We proceed with the proof by contradiction. Assume that there exist a compact set K ⊂ Ω and a sequence {tn}n∈ℕ ⊂ [𝜏0, 𝜔) such that limn→∞ tn = 𝜔 and


(x(tn), tn) ∈ K, for all n ∈ ℕ. We divide the proof into two cases, namely, 𝜔 = ∞ and 𝜔 < ∞. Case 1: 𝜔 = ∞. Since K is compact, the sequence {(x(tn), tn)}n∈ℕ admits a convergent subsequence, which we denote, again, by {(x(tn), tn)}n∈ℕ. Thus, there exists (y, 𝜏) ∈ K such that limn→∞ (x(tn), tn) = (y, 𝜏). In particular, limn→∞ tn = 𝜏, which contradicts the fact that limn→∞ tn = 𝜔 = ∞. Case 2: 𝜔 < ∞. By the compactness of K, the sequence {(x(tn), tn)}n∈ℕ has a convergent subsequence that we still denote by {(x(tn), tn)}n∈ℕ. Therefore, there exists (y, 𝜏) ∈ K ⊂ Ω such that limn→∞ (x(tn), tn) = (y, 𝜏) and, hence, limn→∞ tn = 𝜏.

By the uniqueness of the limit, 𝜏 = 𝜔. Since (y, 𝜔) = (y, 𝜏) ∈ Ω = ΩF, we obtain y + F(y, 𝜔+) − F(y, 𝜔) ∈ O. Then, Theorem 5.1 implies there exist 𝜂 > 0 and a unique solution z ∶ [𝜔, 𝜔 + 𝜂] → X of the generalized ODE (5.7) for which z(𝜔) = y. Consider a function u ∶ [𝜏0, 𝜔 + 𝜂] → X given by u(t) = x(t), for t ∈ [𝜏0, 𝜔), and u(t) = z(t), for t ∈ [𝜔, 𝜔 + 𝜂]. Then, Theorem 5.5 yields that u ∶ [𝜏0, 𝜔 + 𝜂] → X is a solution of the generalized ODE (5.7), with initial condition u(𝜏0) = x0. This leads us to a contradiction, because u is a proper right prolongation of the solution x, which, in turn, is assumed to be maximal. The statement then follows. ◽ The previous theorem enables us to present two consequences. The first one follows immediately and, hence, we omit its proof. The reader may want to consult [78] though. Corollary 5.14: Consider F ∈ ℱ(Ω, h), where h ∶ [t0, ∞) → ℝ is a nondecreasing and left-continuous function and Ω = ΩF. Take (x0, 𝜏0) ∈ Ω and let x ∶ [𝜏0, 𝜔) → X be the maximal solution of the generalized ODE (5.7) with x(𝜏0) = x0. If x(t) belongs to a compact set N ⊂ O, for all t ∈ [𝜏0, 𝜔), then 𝜔 = ∞. Corollary 5.15: Assume that F ∈ ℱ(Ω, h), where the function h ∶ [t0, ∞) → ℝ is nondecreasing and left-continuous and Ω = ΩF. Take (x0, 𝜏0) ∈ Ω and let x ∶ [𝜏0, 𝜔) → X be the maximal solution of the generalized ODE (5.7) with initial condition x(𝜏0) = x0. If 𝜔 < ∞, then the following statements hold:


(i) the limit limt→𝜔− x(t) exists; (ii) (y, 𝜔) ∈ 𝜕Ω and limt→𝜔− (x(t), t) = (y, 𝜔), where y = limt→𝜔− x(t). Proof. Consider 𝜔 < ∞ and let 𝜖 > 0 be given. Since 𝜔 ∈ (𝜏0 , ∞) and h is left-continuous on (𝜏0 , ∞), the limit limt→𝜔− h(t) exists. Thus, by the Cauchy condition, there exists 𝛿 > 0 (we can take 𝜏0 < 𝜔 − 𝛿) such that |h(t) − h(s)| < 𝜖, for all t, s ∈ (𝜔 − 𝛿, 𝜔). Thus, by Lemma 4.9, ‖x(t) − x(s)‖ ⩽ |h(t) − h(s)| < 𝜖,

for every t, s ∈ (𝜔 − 𝛿, 𝜔).

Again, by the Cauchy condition, limt→𝜔− x(t) exists. Therefore, there exists y ∈ X such that y = limt→𝜔− x(t)

(5.26)

which proves item (i). Now, let us prove (ii). Notice that limt→𝜔− (x(t), t) = (y, 𝜔). Since 𝜔 is an accumulation point of the set [𝜏0, 𝜔), there is a sequence {tn}n∈ℕ ⊂ [𝜏0, 𝜔) such that limn→∞ tn = 𝜔. This fact together with (5.26) implies limn→∞ x(tn) = y. Then, since {(x(tn), tn)}n∈ℕ ⊂ Ω and limn→∞ (x(tn), tn) = (y, 𝜔), it follows that (y, 𝜔) ∈ cl Ω, the closure of Ω.

(5.27)

We want to show that (y, 𝜔) ∉ Ω. Suppose, to the contrary, that (y, 𝜔) ∈ Ω. By Theorem 5.1, there exist Δ > 0 and a unique solution z ∶ [𝜔, 𝜔 + Δ] → X of the generalized ODE (5.7) satisfying z(𝜔) = y. Define a function u ∶ [𝜏0, 𝜔 + Δ] → X by u(t) = x(t), for t ∈ [𝜏0, 𝜔), and u(t) = z(t), for t ∈ [𝜔, 𝜔 + Δ]. Then, Theorem 5.5 implies that u is a solution of the generalized ODE (5.7) with initial condition u(𝜏0) = x(𝜏0) = x0. But this leads us to a contradiction, since u is a proper right prolongation of the solution x, which, in turn, is assumed to be maximal. Therefore, (y, 𝜔) ∉ Ω, that is, (y, 𝜔) ∈ Ωᶜ, and this means that

(y, 𝜔) ∈ cl(Ωᶜ). (5.28)

Finally, (5.27) and (5.28) yield (y, 𝜔) ∈ 𝜕Ω, which concludes the proof. ◽

Another consequence of Theorem 5.13 says that, when O = X, the maximal solution satisfying x(𝜏0) = x0 is defined on the whole interval [𝜏0, ∞), that is, it is a global forward solution. This result is of extreme importance for the study of stability, boundedness, and control theory in the setting of generalized ODEs. Additionally, if Ω = N × [t0, ∞) and F ∈ ℱ(Ω, h),


where N is a compact subset of X, then, by the definition of a solution, (x(t), t) ∈ Ω, for every t ∈ [𝜏0, 𝜔). In particular, x(t) ∈ N, for every t ∈ [𝜏0, 𝜔) and, hence, 𝜔 = ∞. It is enough to follow the ideas of Corollary 5.14. See also [78]. Corollary 5.16: If Ω = X × [t0, ∞) and F ∈ ℱ(Ω, h), where h ∶ [t0, ∞) → ℝ is a nondecreasing and left-continuous function, then for every (x0, 𝜏0) ∈ Ω, there exists a unique global forward solution on [𝜏0, ∞) of the generalized ODE (5.7) with initial condition x(𝜏0) = x0. Proof. We start by claiming that Ω = ΩF. In order to prove our claim, it is enough to show that Ω ⊂ ΩF. Take an arbitrary pair (z0, s0) ∈ Ω. Since h is nondecreasing, the limit lims→s0+ h(s) exists. Then, the Cauchy condition implies that, for every 𝜖 > 0, there exists 𝛿 > 0 such that if s, t ∈ (s0, s0 + 𝛿), then |h(t) − h(s)| < 𝜖. Since F ∈ ℱ(Ω, h), we obtain ‖F(z0, t) − F(z0, s)‖ ⩽ |h(t) − h(s)| < 𝜖,

for every t, s ∈ (s0 , s0 + 𝛿).

Therefore, the limit lims→s0+ F(z0, s) exists. Denote such limit by F(z0, s0+). Then, z0 + F(z0, s0+) − F(z0, s0) ∈ X and, hence, (z0, s0) ∈ ΩF. The claim is, therefore, true. Now, take a pair (x0, 𝜏0) ∈ Ω and assume that x ∶ [𝜏0, 𝜔) → X is the maximal solution of the generalized ODE (5.7) satisfying x(𝜏0) = x0. The existence of such a solution is guaranteed by Theorems 5.11 and 5.12. We want to show that 𝜔 = ∞. Assume, to the contrary, that 𝜔 < ∞. Then, Corollary 5.15 implies that the limit limt→𝜔− x(t) exists. Set

y = limt→𝜔− x(t) ∈ X. (5.29)

Since 𝜔 is an accumulation point of the set [𝜏0, 𝜔), there is a sequence {tn}n∈ℕ ⊂ [𝜏0, 𝜔) such that limn→∞ tn = 𝜔. Then, (5.29) yields limn→∞ x(tn) = y. Due to the fact that N = {x(tn)}n∈ℕ ∪ {y} is a compact subset of X, the set N × [𝜏0, 𝜔] is also a compact set in Ω. Thus, by Theorem 5.13, there exists t̂ ∈ [𝜏0, 𝜔) such that (x(t), t) ∉ N × [𝜏0, 𝜔], for all t ∈ (t̂, 𝜔), which contradicts the fact that x(tn) ∈ N, for all n ∈ ℕ, and limn→∞ tn = 𝜔. Hence, 𝜔 = ∞ necessarily. ◽
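Corollaries 5.15 and 5.16 can be seen numerically in the simplest scalar case. The sketch below is our own illustration, not part of the text's development: with X = ℝ, g(s) = s, and f(x, s) = −1, the MDE degenerates to the classical ODE x′ = −1; for O = (0, ∞) and x(0) = 1, the solution x(t) = 1 − t exists only up to 𝜔 = 1, and its left limit 0 lies on the boundary of O, as in Corollary 5.15, while for O = X the same data give a global forward solution, as in Corollary 5.16. The helper `maximal_interval` and all tolerances are hypothetical:

```python
# Sketch: approximate maximal interval of existence for x' = f(x, t) on an
# open set O, via forward Euler; illustrative stand-in for the setting of
# (5.30) with the trivial integrator g(s) = s.

def maximal_interval(x0, f, in_O, t_end=10.0, dt=1e-3):
    """March forward until the next state would leave O, or until t_end.

    Returns (omega, last_x): the approximate right endpoint of the maximal
    interval and the last state still inside O.
    """
    t, x = 0.0, x0
    while t < t_end:
        x_next = x + f(x, t) * dt
        if not in_O(x_next):      # the solution would escape O: stop here
            return t, x
        t, x = t + dt, x_next
    return t_end, x               # global forward solution up to t_end

f = lambda x, t: -1.0             # x' = -1, so x(t) = x0 - t

# O = (0, inf): the maximal solution 1 - t lives only on [0, 1).
omega, y = maximal_interval(1.0, f, lambda x: x > 0.0)
# O = X = R: the same data give a global forward solution.
omega_global, _ = maximal_interval(1.0, f, lambda x: True)
```

The stopping state `y` approximates the boundary value limt→𝜔− x(t) = 0 of Corollary 5.15(ii).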

5.2.1 Applications to MDEs In this section, we apply the results on prolongation and maximal solutions of generalized ODEs to MDEs. This is possible because of the relations between the solutions of one type of equation and the solutions of the other, set out in detail in Chapter 4.


The results presented here are based on [78], where the authors considered the case X = ℝn. Here, we present them for the case of an arbitrary Banach space X. Thus, as in the previous chapters, X denotes a Banach space equipped with norm ‖ ⋅ ‖ and O ⊂ X denotes an arbitrary open set. We also consider the integral form of a measure differential equation (MDE, for short) as follows:

x(t) = x(𝜏0) + ∫_{𝜏0}^{t} f(x(s), s)dg(s), t ⩾ 𝜏0,

(5.30)

where 𝜏0 ⩾ t0, f ∶ O × [t0, ∞) → X and g ∶ [t0, ∞) → ℝ are functions and the integral on the right-hand side is understood in the Perron–Stieltjes sense as presented in Chapter 1. We remind the reader that G([t0, ∞), X) denotes the vector space of all functions x ∶ [t0, ∞) → X such that x|[𝛼,𝛽] belongs to the space G([𝛼, 𝛽], X), for all [𝛼, 𝛽] ⊂ [t0, ∞). We use the symbol G0([t0, ∞), X) to denote the vector space of all functions x ∈ G([t0, ∞), X) such that sup_{s∈[t0,∞)} e^{−(s−t0)}‖x(s)‖ < ∞. When this space is endowed with the norm

‖x‖[t0,∞) = sup_{s∈[t0,∞)} e^{−(s−t0)}‖x(s)‖, x ∈ G0([t0, ∞), X),

it becomes a Banach space. See the comments at the end of Subsection 1.1.1. Given x, z ∈ G0([t0, ∞), O) and s1, s2 ∈ [t0, ∞), with s1 ⩽ s2, we shall consider that the functions f ∶ O × [t0, ∞) → X and g ∶ [t0, ∞) → ℝ satisfy the following conditions:

(A1) g ∶ [t0, ∞) → ℝ is nondecreasing and left-continuous on (t0, ∞);
(A2) the Perron–Stieltjes integral ∫_{s1}^{s2} f(x(s), s)dg(s) exists;
(A3) there exists a locally Perron–Stieltjes integrable function M ∶ [t0, ∞) → ℝ with respect to g such that ‖∫_{s1}^{s2} f(x(s), s)dg(s)‖ ⩽ ∫_{s1}^{s2} M(s)dg(s);
(A4) there exists a locally Perron–Stieltjes integrable function L ∶ [t0, ∞) → ℝ with respect to g such that ‖∫_{s1}^{s2} [f(x(s), s) − f(z(s), s)] dg(s)‖ ⩽ ‖x − z‖[t0,∞) ∫_{s1}^{s2} L(s)dg(s).
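Conditions (A1)–(A4) govern the Perron–Stieltjes integral in (5.30). The role of a jump of the integrator can be seen with a small numerical sketch; all choices below, including the integrator g, the right-hand side f, and the step count, are our own illustrative assumptions. For a left-continuous nondecreasing g with a unit jump at s = 1 and f(x, s) = −x, a partition-sum approximation of the integral reproduces both the continuous decay and the jump relation x(1+) = x(1) + f(x(1), 1)Δ+g(1) = 0:

```python
# Sketch: Euler-type partition sums for x(t) = x0 + int_0^t f(x(s), s) dg(s),
# with a left-continuous nondecreasing integrator g jumping by 1 at s = 1.

def g(s):
    # left-continuous at the jump: the extra unit appears only for s > 1
    return s + (1.0 if s > 1.0 else 0.0)

def solve_mde(x0, f, t_end, n=200_000):
    """x_{k+1} = x_k + f(x_k, s_k) * (g(s_{k+1}) - g(s_k)) over a uniform partition."""
    x, dt = x0, t_end / n
    for k in range(n):
        s, s_next = k * dt, (k + 1) * dt
        x += f(x, s) * (g(s_next) - g(s))
    return x

f = lambda x, s: -x
# On [0, 1) the jump is not yet seen and x(t) ~ exp(-t), about 0.368 near t = 1.
x_before = solve_mde(1.0, f, 0.999)
# Crossing s = 1 applies x(1+) = x(1) + (-x(1)) * 1 = 0, and x then stays near 0.
x_after = solve_mde(1.0, f, 2.0)
```

The jump step also shows why the hypothesis z0 + f(z0, s0)Δ+g(s0) ∈ O appears in the results below: the state after a jump must remain admissible.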

The first result we present describes some properties of the function that will later be used as the right-hand side of the generalized ODE corresponding to the MDE (5.30).


Lemma 5.17: Let f ∶ O × [t0, ∞) → X satisfy conditions (A2)–(A4) and g satisfy condition (A1). Then, the function F ∶ O × [t0, ∞) → X defined by

F(x, t) = ∫_{t0}^{t} f(x, s)dg(s),

for (x, t) ∈ O × [t0, ∞), is an element of the class ℱ(Ω, h), where Ω = O × [t0, ∞) and h(t) = ∫_{t0}^{t} [M(s) + L(s)] dg(s), t ∈ [t0, ∞), is a nondecreasing and left-continuous function. In addition, Ω = ΩF. Proof. The proof that F ∈ ℱ(Ω, h) is analogous to the proof of [78, Theorem 4.2]. On the other hand, Corollary 2.14 yields z0 + F(z0, s0+) − F(z0, s0) = z0 + f(z0, s0)Δ+g(s0) ∈ O, for all (z0, s0) ∈ Ω. Then, z0 + F(z0, s0+) − F(z0, s0) ∈ O, for all (z0, s0) ∈ Ω, whence Ω = ΩF. ◽ The next result is similar to Theorem 4.14 with obvious adaptations. Recall the concepts of solutions of MDEs and generalized ODEs in Definitions 3.2 and 4.1. Theorem 5.18: Assume that f ∶ O × [t0, ∞) → X satisfies conditions (A2)–(A4), and g ∶ [t0, ∞) → ℝ satisfies condition (A1). Then, x ∶ I → X is a solution of the MDE (5.30) on I ⊂ [t0, ∞) if and only if x is a solution on I of the generalized ODE dx/d𝜏 = DF(x, t), where F is given as in Lemma 5.17. The next two definitions bring up the concepts of prolongation to the right and maximal solution in the context of MDEs. Definition 5.19: Let 𝜏0 ⩾ t0 and x ∶ J → X be a solution of the MDE (5.30) on the interval J ⊂ [t0, ∞), with 𝜏0 = min J. A solution y ∶ Ĵ → X, with Ĵ ⊂ [t0, ∞) and 𝜏0 = min Ĵ, of the MDE (5.30) is called a prolongation to the right of x whenever J ⊂ Ĵ and x(t) = y(t), for all t ∈ J. In particular, when J ⫋ Ĵ, y is called a proper prolongation of x to the right or proper right prolongation. Analogously, one can define a proper prolongation of a solution to the left. Definition 5.20: Let (x0, 𝜏0) ∈ O × [t0, ∞) and I ⊂ [t0, ∞) be an interval for which 𝜏0 = min I. A solution y ∶ I → X of the MDE

y(t) = x0 + ∫_{𝜏0}^{t} f(y(s), s)dg(s)


is called a maximal solution if there exists no proper prolongation of y to the right. If, moreover, sup I = ∞, then y is called a global forward solution on I. The next result ensures the existence and uniqueness of the maximal solution of the MDE (5.30). Theorem 5.21: Suppose f ∶ O × [t0, ∞) → X satisfies conditions (A2)–(A4), and g ∶ [t0, ∞) → ℝ satisfies condition (A1). Assume, in addition, that for all (z0, s0) ∈ O × [t0, ∞), we have z0 + f(z0, s0)Δ+g(s0) ∈ O. Then, for every (x0, 𝜏0) ∈ O × [t0, ∞), there exists a unique maximal solution x ∶ J → X of the MDE (5.30), with initial condition x(𝜏0) = x0, and J an interval with 𝜏0 = min J. Proof. Consider the generalized ODE dx/d𝜏 = DF(x, t), x(𝜏0) = x0,

(5.31)

where F is given as in Lemma 5.17 and, by the same lemma, F ∈ ℱ(Ω, h) and Ω = ΩF, where Ω = O × [t0, ∞) and h is a nondecreasing and left-continuous function. Since all the hypotheses of Theorem 5.11 are satisfied, there exists a unique maximal solution x ∶ J → X of the generalized ODE (5.31), where J is an interval such that 𝜏0 = min J. Then, Theorem 5.18 implies x ∶ J → X is also a solution of the MDE (5.30) with x(𝜏0) = x0. We proceed so as to prove that the solution x of the MDE (5.30) does not have a proper prolongation to the right. Suppose, to the contrary, that there exists a solution x̂ ∶ Ĵ → X of the MDE (5.30) satisfying x̂(𝜏0) = x0, where Ĵ is an interval such that 𝜏0 = min Ĵ and J ⫋ Ĵ, that is, x̂ extends x properly. Then, Theorem 4.18 yields that x̂ is a solution of the generalized ODE (5.31), with x̂(𝜏0) = x0, contradicting the maximality of the solution x. Hence, x does not admit a proper extension to the right, that is, x is a maximal solution of the MDE (5.30) satisfying x(𝜏0) = x0. Finally, Theorems 5.11 and 5.18 yield the uniqueness of the maximal solution x of the MDE (5.30). ◽ Remark 5.22: Notice that, by the proof of Theorem 5.21, x ∶ J → X is the maximal solution of the MDE (5.30) with x(𝜏0) = x0 if and only if x ∶ J → X is the maximal solution of the generalized ODE (5.31), where F is given by Lemma 5.17. Similarly to what we did for generalized ODEs, the next result says that the maximal interval J of the existence and uniqueness of a maximal solution of the MDE (5.30), with right-hand side given as in Lemma 5.17, is half-open.


Theorem 5.23: Suppose f ∶ O × [t0, ∞) → X satisfies conditions (A2)–(A4), and g ∶ [t0, ∞) → ℝ satisfies condition (A1). Assume, further, that for every pair (z0, s0) ∈ O × [t0, ∞), z0 + f(z0, s0)Δ+g(s0) ∈ O. Moreover, suppose (x0, 𝜏0) ∈ O × [t0, ∞) and x ∶ J → X is the maximal solution of the MDE (5.30), with initial condition x(𝜏0) = x0, and J is an interval for which 𝜏0 = min J. Then, J = [𝜏0, 𝜔), with 𝜔 ⩽ ∞. Proof. Consider (x0, 𝜏0) ∈ O × [t0, ∞) and suppose x ∶ J → X is the maximal solution of the MDE (5.30), with initial condition x(𝜏0) = x0, where J is an interval with 𝜏0 = min J. By Remark 5.22, x ∶ J → X is the maximal solution of the generalized ODE (5.31), where F is given by Lemma 5.17, Ω = ΩF and F ∈ ℱ(Ω, h), with Ω = O × [t0, ∞). Moreover, the function h ∶ [t0, ∞) → ℝ is nondecreasing and left-continuous. Thus, all hypotheses of Theorem 5.12 are fulfilled and, hence, J = [𝜏0, 𝜔), where 𝜔 ⩽ ∞. ◽ The next result guarantees that the graph of the maximal solution eventually escapes from every compact set. Theorem 5.24: Suppose f ∶ O × [t0, ∞) → X satisfies conditions (A2)–(A4), and g ∶ [t0, ∞) → ℝ satisfies condition (A1). Assume, further, that for all (z0, s0) ∈ O × [t0, ∞), z0 + f(z0, s0)Δ+g(s0) ∈ O. Let (x0, 𝜏0) ∈ O × [t0, ∞) and x ∶ [𝜏0, 𝜔) → X be the maximal solution of the MDE (5.30) with initial condition x(𝜏0) = x0. Then, for every compact set K ⊂ O × [t0, ∞), there exists tK ∈ [𝜏0, 𝜔) such that (x(t), t) ∉ K, for all t ∈ (tK, 𝜔). Proof. Let K be a compact subset of O × [t0, ∞). Consider the function F ∶ O × [t0, ∞) → X defined in Lemma 5.17, which implies Ω = ΩF and F ∈ ℱ(Ω, h), where Ω = O × [t0, ∞) and h ∶ [t0, ∞) → ℝ is a nondecreasing and left-continuous function. By Remark 5.22, x ∶ [𝜏0, 𝜔) → X is the maximal solution of the generalized ODE (5.31). Therefore, once all hypotheses of Theorem 5.13 are fulfilled, there exists tK ∈ [𝜏0, 𝜔) such that (x(t), t) ∉ K, for all t ∈ (tK, 𝜔).
◽ As a consequence of Theorem 5.24, we have the following result, which gives conditions for the existence of a global forward solution of the MDE (5.30) with right-hand side given as in Lemma 5.17 and initial condition x(𝜏0 ) = x0 . Corollary 5.25: Suppose f ∶ O × [t0 , ∞) → X satisfies conditions (A2)–(A4), and g ∶ [t0 , ∞) → ℝ satisfies condition (A1). Assume that, for all (z0 , s0 ) ∈ O × [t0 , ∞), we have z0 + f (z0 , s0 )Δ+ g(s0 ) ∈ O. Suppose (x0 , 𝜏0 ) ∈ O × [t0 , ∞) and x ∶ [𝜏0 , 𝜔) → X is the maximal solution of the MDE (5.30) with x(𝜏0 ) = x0 . If x(t) ∈ N for all t ∈ [𝜏0 , 𝜔), where N is a compact subset of O, then 𝜔 = ∞ and we have a global forward solution of the MDE (5.30) with initial condition x(𝜏0 ) = x0 .


Proof. Let x(t) ∈ N, for all t ∈ [𝜏0, 𝜔), where N is a compact subset of O. Consider the function F ∶ O × [t0, ∞) → X defined in Lemma 5.17, which satisfies Ω = ΩF and F ∈ ℱ(Ω, h), where Ω = O × [t0, ∞) and h ∶ [t0, ∞) → ℝ is a nondecreasing and left-continuous function. By Remark 5.22, x ∶ [𝜏0, 𝜔) → X is the maximal solution of the generalized ODE (5.31). Thus, by Corollary 5.14, 𝜔 = ∞. ◽ The next corollary shows that, if 𝜔 is finite, then the limit y = limt→𝜔− x(t) exists and (y, 𝜔) ∈ 𝜕Ω. Corollary 5.26: Suppose f ∶ O × [t0, ∞) → X satisfies conditions (A2)–(A4), and g ∶ [t0, ∞) → ℝ satisfies condition (A1). Suppose, in addition, for all (z0, s0) ∈ O × [t0, ∞), z0 + f(z0, s0)Δ+g(s0) ∈ O. Take (x0, 𝜏0) ∈ O × [t0, ∞) and let x ∶ [𝜏0, 𝜔) → X be the maximal solution of the MDE (5.30), with x(𝜏0) = x0. If 𝜔 < ∞, then the following conditions hold. (i) The limit limt→𝜔− x(t) exists; (ii) (y, 𝜔) ∈ 𝜕Ω and limt→𝜔− (x(t), t) = (y, 𝜔), where y = limt→𝜔− x(t).

Proof. Take 𝜔 < ∞ and a pair (x0, 𝜏0) ∈ O × [t0, ∞). Let x ∶ [𝜏0, 𝜔) → X be the maximal solution of (5.30) with x(𝜏0) = x0. Remark 5.22 implies that x ∶ [𝜏0, 𝜔) → X is the maximal solution of the generalized ODE (5.31), where F ∶ O × [t0, ∞) → X, defined in Lemma 5.17, satisfies Ω = ΩF and F ∈ ℱ(Ω, h), where Ω = O × [t0, ∞) and h ∶ [t0, ∞) → ℝ is a nondecreasing and left-continuous function. Therefore, all the hypotheses of Corollary 5.15 are satisfied and the proof is complete. ◽ Corollary 5.27: Suppose f ∶ X × [t0, ∞) → X satisfies conditions (A2)–(A4) (with O = X), and g ∶ [t0, ∞) → ℝ satisfies condition (A1). Then, for every (x0, 𝜏0) ∈ X × [t0, ∞), the maximal solution of the MDE (5.30), with initial condition x(𝜏0) = x0, is defined on [𝜏0, ∞), that is, the maximal solution is, in fact, a global forward solution. Proof. Consider a pair (x0, 𝜏0) ∈ X × [t0, ∞) and let F ∶ X × [t0, ∞) → X be defined as in Lemma 5.17, which implies Ω = ΩF and F ∈ ℱ(Ω, h), with Ω = X × [t0, ∞) and h ∶ [t0, ∞) → ℝ being a nondecreasing and left-continuous function. Hence, Corollary 5.16 implies there exists a unique global forward solution x ∶ [𝜏0, ∞) → X of the generalized ODE (5.31). Then, by Remark 5.22, x ∶ [𝜏0, ∞) → X is also a global forward solution of the MDE (5.30), with initial condition x(𝜏0) = x0. ◽


5.2.2 Applications to Dynamic Equations on Time Scales In this section, we make use of the relation, detailed in Chapter 3, between the solutions of MDEs and the solutions of dynamic equations on time scales, and we present results on prolongation and maximal solutions in the framework of dynamic equations on time scales. The theory presented here is based on [78], but we deal with functions taking values in an infinite dimensional space. Let 𝕋 be a time scale such that t0 ∈ 𝕋 and consider the dynamic equation on time scales given by

x(t) = x(t0) + ∫_{t0}^{t} f(x∗(s), s)Δs, t ∈ [t0, ∞)𝕋,

(5.32)

where f ∶ O × [t0, ∞)𝕋 → X is a function, O ⊂ X is an open subset and, as usual, x∗ ∶ 𝕋∗ → X is defined by x∗(t) = x(t∗). For all y, 𝑤 ∈ G0([t0, ∞)𝕋, O) and all s1, s2 ∈ [t0, ∞)𝕋, with s1 ⩽ s2, we assume that the function f ∶ O × [t0, ∞)𝕋 → X satisfies the conditions:

(B1) the Perron Δ-integral ∫_{s1}^{s2} f(y(s), s)Δs exists;
(B2) there exists a locally Perron Δ-integrable function M ∶ [t0, ∞)𝕋 → ℝ such that ‖∫_{s1}^{s2} f(y(s), s)Δs‖ ⩽ ∫_{s1}^{s2} M(s)Δs;
(B3) there exists a locally Perron Δ-integrable function L ∶ [t0, ∞)𝕋 → ℝ such that ‖∫_{s1}^{s2} [f(y(s), s) − f(𝑤(s), s)] Δs‖ ⩽ ‖y − 𝑤‖[t0,∞) ∫_{s1}^{s2} L(s)Δs.

In the following result, we present a correspondence between the solutions of an MDE and the solutions of a dynamic equation on time scales. The reader may consult [78] for a proof of this result in the setting of ℝn-valued functions. The proof for the infinite dimensional case follows similarly. Theorem 5.28: Let 𝕋 be a time scale such that sup 𝕋 = ∞ and let [t0, ∞)𝕋 be a time scale interval. Let O ⊂ X be an open subset and f ∶ O × [t0, ∞)𝕋 → X be a function. Assume that, for every x ∈ G([t0, ∞)𝕋, X), the function t → f(x(t), t) is Perron Δ-integrable on [s1, s2]𝕋, for every s1, s2 ∈ [t0, ∞)𝕋. Define g ∶ [t0, ∞) → ℝ by g(s) = s∗, for every s ∈ [t0, ∞). Let J ⊂ [t0, ∞) be a nondegenerate interval such that J ∩ 𝕋 is nonempty and, for each t ∈ J, t∗ ∈ J ∩ 𝕋. If x ∶ J ∩ 𝕋 → X is a solution of the dynamic equation on time scales

x(t) = x(𝜏0) + ∫_{𝜏0}^{t} f(x∗(s), s)Δs, t ∈ J ∩ 𝕋, x(𝜏0) = x0,

(5.33)


where x0 ∈ O and 𝜏0 ∈ J ∩ 𝕋, then x∗ ∶ J → X is a solution of the MDE

y(t) = x0 + ∫_{𝜏0}^{t} f∗(y(s), s)dg(s) = x0 + ∫_{𝜏0}^{t} f(y(s), s∗)dg(s).   (5.34)

Conversely, if y ∶ J → X satisfies the MDE (5.34), then it must have the form y = x∗, where x ∶ J ∩ 𝕋 → X is a solution of the dynamic equation on time scales (5.33).

The concepts of prolongation to the right and maximal solution in the setting of dynamic equations on time scales are described in the next two definitions.

Definition 5.29: Let 𝕋 be a time scale such that sup 𝕋 = ∞. Consider a solution x ∶ I𝕋 → X, with I𝕋 ⊂ [t0, ∞)𝕋, of the dynamic equation on time scales (5.32) on the interval I𝕋, with 𝜏0 = min I𝕋. The solution y ∶ J𝕋 → X, with J𝕋 ⊂ [t0, ∞)𝕋 and 𝜏0 = min J𝕋, of the dynamic equation on time scales (5.32) is called a prolongation of x to the right, if I𝕋 ⊂ J𝕋 and x(t) = y(t) for all t ∈ I𝕋. If I𝕋 ⫋ J𝕋, then y is called a proper prolongation of x to the right. In an analogous way, one can define a proper prolongation of a solution to the left.

Definition 5.30: Let 𝕋 be a time scale such that sup 𝕋 = ∞. Consider a pair (x0, 𝜏0) ∈ O × [t0, ∞)𝕋. The solution y ∶ I𝕋 → X, with I𝕋 ⊂ [t0, ∞)𝕋 and 𝜏0 = min I𝕋, of the dynamic equation on time scales

x(t) = x(𝜏0) + ∫_{𝜏0}^{t} f(x∗(s), s)Δs,   x(𝜏0) = x0,

is called a maximal solution, if there is no proper prolongation of y to the right. If, in addition, sup I = ∞, then y is called a global forward solution on I.

Before proving the first result on prolongation of solutions of dynamic equations on time scales, we present a lemma that relates conditions (A2)–(A4) from Subsection 5.2.1 to conditions (B1)–(B3) from the beginning of this subsection. The lemma implies that, if conditions (B1)–(B3) are fulfilled for f, M, and L, then conditions (A2)–(A4) are fulfilled for f∗, M∗, and L∗. It also generalizes Lemma 3.32 to unbounded intervals and functions without delays. Moreover, it is essential in the remainder of this section. We omit its proof here in order not to be repetitive, because it is essentially the proof of Lemma 3.32. The reader may find a proof in [78, Theorem 5.8].

Lemma 5.31: Let 𝕋 be a time scale such that sup 𝕋 = ∞ and t0 ∈ 𝕋, and let f ∶ O × [t0, ∞)𝕋 → X be a function. Define g(t) = t∗ and f∗(y, t) = f(y, t∗) for every y ∈ O and t ∈ [t0, ∞). Then, the following assertions hold.


(i) If f ∶ O × [t0, ∞)𝕋 → X satisfies condition (B1), then the Perron–Stieltjes integral ∫_{t1}^{t2} f∗(x(s), s)dg(s) exists, for all x ∈ G([t0, ∞), O) and for all t1, t2 ∈ [t0, ∞).
(ii) If f ∶ O × [t0, ∞)𝕋 → X satisfies condition (B2), then f∗ ∶ O × [t0, ∞) → X satisfies

‖∫_{t1}^{t2} f∗(x(s), s)dg(s)‖ ⩽ ∫_{t1}^{t2} M∗(s)dg(s),

for all t1, t2 ∈ [t0, ∞), t1 ⩽ t2, and for all x ∈ G([t0, ∞), O), where g(t) = t∗ and M∗ ∶ [t0, ∞) → ℝ is given by M∗(s) = M(s∗).
(iii) If f ∶ O × [t0, ∞)𝕋 → X satisfies condition (B3), then f∗ ∶ O × [t0, ∞) → X satisfies

‖∫_{t1}^{t2} [f∗(x(s), s) − f∗(z(s), s)]dg(s)‖ ⩽ ‖x − z‖[t0,∞) ∫_{t1}^{t2} L∗(s)dg(s),

for all t1, t2 ∈ [t0, ∞), t1 ⩽ t2, and for all x, z ∈ G0([t0, ∞), O), where g(t) = t∗ and L∗ ∶ [t0, ∞) → ℝ is given by L∗(s) = L(s∗).

Theorem 5.32: Consider a time scale 𝕋 with sup 𝕋 = ∞ and t0 ∈ 𝕋. Suppose f ∶ O × [t0, ∞)𝕋 → X satisfies conditions (B1)–(B3) and assume that, for all (z0, s0) ∈ O × [t0, ∞)𝕋, z0 + f(z0, s0)𝜇(s0) ∈ O (where, as usual, 𝜇 ∶ 𝕋 → [0, ∞) is the graininess function). Then, for all (x0, 𝜏0) ∈ O × [t0, ∞)𝕋, there exists a unique maximal solution x ∶ [𝜏0, 𝜔)𝕋 → X, 𝜔 ⩽ ∞, of the dynamic equation on time scales given by

x(t) = x(𝜏0) + ∫_{𝜏0}^{t} f(x∗(s), s)Δs,   x(𝜏0) = x0.   (5.35)

When 𝜔 < ∞, we have 𝜔 ∈ 𝕋 and 𝜔 is left-dense.

Proof. Fix a pair (x0, 𝜏0) ∈ O × [t0, ∞)𝕋.

Existence. Define f∗ ∶ O × [t0, ∞) → X by f∗(x, t) = f(x, t∗), for all (x, t) ∈ O × [t0, ∞), and g(t) = t∗, for every t ∈ [t0, ∞). Because f satisfies conditions (B1)–(B3), Lemma 5.31 implies that f∗ satisfies conditions (A2)–(A4). By definition, g is a nondecreasing and left-continuous function on (t0, ∞). Notice that, for every (z0, s0) ∈ O × [t0, ∞), we have

z0 + f∗(z0, s0)Δ+g(s0) = z0 + f(z0, s0)𝜇(s0) ∈ O, if s0 ∈ 𝕋,   and   z0 + f∗(z0, s0)Δ+g(s0) = z0 ∈ O, if s0 ∉ 𝕋.   (5.36)

Then, for each (z0, s0) ∈ O × [t0, ∞), z0 + f∗(z0, s0)Δ+g(s0) ∈ O. Therefore, f∗ and g satisfy all the hypotheses of Theorems 5.21 and 5.23 and, hence, there exists a


unique maximal solution y ∶ [𝜏0, 𝜔) → X, 𝜔 ⩽ ∞, of the MDE

y(t) = x0 + ∫_{𝜏0}^{t} f∗(y(s), s)dg(s).   (5.37)

Now, we need to analyze two cases with respect to 𝜔.

Case 1: 𝜔 = ∞. According to Theorem 5.28, y ∶ [𝜏0, ∞) → X must have the form y = x∗, where x ∶ [𝜏0, ∞)𝕋 → X is a solution of the dynamic equation on time scales (5.35). Note that, in fact, x is a global forward solution.

Case 2: 𝜔 < ∞. In order to prove that 𝜔 ∈ 𝕋 and 𝜔 is left-dense, we need to show that two claims are true.

Claim 1. 𝜔 ∈ 𝕋. Suppose the contrary, that is, 𝜔 ∉ 𝕋, and define the set H = {s ∈ 𝕋 ∶ s < 𝜔}, which is nonempty, since 𝜏0 ∈ H. Since 𝜔 ∉ 𝕋, H = 𝕋 ∩ (−∞, 𝜔] and, hence, H is a closed subset of ℝ. Let 𝛽 = sup H. Since H is closed, 𝛽 ∈ H. In addition, 𝛽 ⩽ 𝜔. However, 𝜔 ∉ 𝕋. Therefore, 𝛽 < 𝜔. Because g is constant on (𝛽, 𝜔], we have ∫_{𝜏}^{t} f∗(y(s), s)dg(s) = 0, for all 𝜏, t ∈ (𝛽, 𝜔]. Let 𝜂 ∈ (𝛽, 𝜔) be fixed and define a function u ∶ [𝜏0, 𝜔] → X by u(t) = y(t), for t ∈ [𝜏0, 𝜔), and u(𝜔) = y(𝜂). Note that u is well-defined and u|[𝜏0,𝜔) = y. We want to prove that u is a solution of the MDE (5.37) on [𝜏0, 𝜔]. Let s1, s2 ∈ [𝜏0, 𝜔] be such that s1 ∈ [𝜏0, 𝜔) and s2 = 𝜔. Thus,

u(s2) − u(s1) = ∫_{s1}^{𝜂} f∗(y(s), s)dg(s) + ∫_{𝜂}^{𝜔} f∗(y(s), s)dg(s)
= ∫_{s1}^{𝜔} f∗(y(s), s)dg(s) = ∫_{s1}^{s2} f∗(u(s), s)dg(s),

where the second integral vanishes because g is constant on (𝛽, 𝜔].

The case where s1, s2 ∈ [𝜏0, 𝜔) follows from the definition of u. Thus, u is a solution of the MDE (5.37) on [𝜏0, 𝜔]. Note that u is a proper right prolongation of y ∶ [𝜏0, 𝜔) → X, contradicting the fact that y is the maximal solution of the MDE (5.37). Thus, 𝜔 ∈ 𝕋 and Claim 1 holds.

Claim 2. 𝜔 is left-dense. We need to prove that 𝜌(𝜔) = 𝜔. Assume that 𝜌(𝜔) < 𝜔. Then, g is constant on (𝜌(𝜔), 𝜔] and, hence, as in Claim 1 (using 𝛽 = 𝜌(𝜔)), one can conclude that there exists a proper right prolongation of y ∶ [𝜏0, 𝜔) → X, contradicting the fact that y is the maximal solution of the MDE (5.37). Therefore, 𝜔 is left-dense and Claim 2 holds.

By Theorem 5.28, y ∶ [𝜏0, 𝜔) → X is of the form y = x∗, where x ∶ [𝜏0, 𝜔)𝕋 → X is a solution of the dynamic equation on time scales (5.35). We point out that x ∶ [𝜏0, 𝜔)𝕋 → X is a maximal solution of the dynamic equation on time scales (5.35). Indeed, assume that z ∶ J𝕋 → X is a proper right prolongation of x ∶ [𝜏0, 𝜔)𝕋 → X.


Without loss of generality, we can take J𝕋 = [𝜏0, 𝜔]𝕋. Because z ∶ [𝜏0, 𝜔]𝕋 → X is a solution of the dynamic equation on time scales (5.35), Theorem 5.28 implies that z∗ ∶ [𝜏0, 𝜔] → X is a solution of the MDE (5.37). Notice that z∗|[𝜏0,𝜔) = y. Therefore, z∗ ∶ [𝜏0, 𝜔] → X is a proper right prolongation of y ∶ [𝜏0, 𝜔) → X, in contradiction to the fact that y is the maximal solution of the MDE (5.37). Therefore, x ∶ [𝜏0, 𝜔)𝕋 → X is a maximal solution of the dynamic equation on time scales (5.35).

Uniqueness. Assume that 𝜓 ∶ [𝜏0, 𝜔̃)𝕋 → X is also a maximal solution of the dynamic equation on time scales (5.35). We need to prove that

x(t) = 𝜓(t),   for all t ∈ [𝜏0, 𝜔)𝕋 ∩ [𝜏0, 𝜔̃)𝕋.   (5.38)

By Theorem 5.28, 𝜓∗ ∶ [𝜏0, 𝜔̃) → X is a solution of the MDE (5.37). On the other hand, y ∶ [𝜏0, 𝜔) → X is the unique maximal solution of (5.37). Therefore, y(t) = 𝜓∗(t), for every t ∈ [𝜏0, 𝜔) ∩ [𝜏0, 𝜔̃) and, hence, y(t) = 𝜓∗(t), for all t ∈ [𝜏0, 𝜔)𝕋 ∩ [𝜏0, 𝜔̃)𝕋. Then, for t ∈ [𝜏0, 𝜔)𝕋 ∩ [𝜏0, 𝜔̃)𝕋,

x(t) = x∗(t) = y(t) = 𝜓∗(t) = 𝜓(t)

and (5.38) holds. Now, define 𝛾 ∶ I𝕋 → X, with I = [𝜏0, 𝜔) ∪ [𝜏0, 𝜔̃), by 𝛾(t) = x(t), for t ∈ [𝜏0, 𝜔)𝕋, and 𝛾(t) = 𝜓(t), for t ∈ [𝜏0, 𝜔̃)𝕋. According to (5.38), 𝛾 is well-defined. Notice that 𝛾 is a solution of (5.35) and 𝛾|[𝜏0,𝜔)𝕋 = x, 𝛾|[𝜏0,𝜔̃)𝕋 = 𝜓. Because x and 𝜓 are maximal solutions of (5.35), we have I𝕋 = [𝜏0, 𝜔)𝕋 = [𝜏0, 𝜔̃)𝕋 and, hence, 𝛾(t) = x(t) = 𝜓(t), for all t ∈ I𝕋, that is, x = 𝜓, proving the uniqueness of x. ◽

The next corollary follows straightforwardly from Theorem 5.32.

Corollary 5.33: Consider a time scale 𝕋 for which sup 𝕋 = ∞ and t0 ∈ 𝕋. Assume that f ∶ O × [t0, ∞)𝕋 → X satisfies conditions (B1)–(B3). Assume, further, that for all (z0, s0) ∈ O × [t0, ∞)𝕋, we have z0 + f(z0, s0)𝜇(s0) ∈ O. Let (x0, 𝜏0) ∈ O × [t0, ∞)𝕋 and x ∶ [𝜏0, 𝜔)𝕋 → X be the maximal solution of the dynamic equation on time scales (5.35) (which is ensured by Theorem 5.32). If each point of 𝕋 is left-scattered, then 𝜔 = ∞ and we have a global forward solution of Eq. (5.35).

Remark 5.34: In the proof of Theorem 5.32, y = x∗ ∶ [𝜏0, 𝜔) → X is the maximal solution of the MDE (5.37) if and only if x ∶ [𝜏0, 𝜔)𝕋 → X is the maximal solution of the dynamic equation on time scales (5.35), where (x0, 𝜏0) ∈ O × [t0, ∞)𝕋.

The theorem below shows that the graph of the maximal solution of the dynamic equation on time scales (5.35) fulfills the following property: for every compact K ⊂ O × [t0, ∞)𝕋, there exists tK such that if t ∈ (tK, 𝜔)𝕋, then (x(t), t) ∉ K.


Theorem 5.35: Consider a time scale 𝕋 for which sup 𝕋 = ∞ and t0 ∈ 𝕋. Assume that f ∶ O × [t0, ∞)𝕋 → X fulfills conditions (B1)–(B3) and, for all (z0, s0) ∈ O × [t0, ∞)𝕋, we have z0 + f(z0, s0)𝜇(s0) ∈ O. Take (x0, 𝜏0) ∈ O × [t0, ∞)𝕋 and let x ∶ [𝜏0, 𝜔)𝕋 → X be the maximal solution of the dynamic equation on time scales (5.35). Then, for every compact set K ⊂ O × [t0, ∞)𝕋, there exists tK ∈ [𝜏0, 𝜔) such that (x(t), t) ∉ K, for t ∈ (tK, 𝜔) ∩ 𝕋.

Proof. Suppose K is a compact subset of O × [t0, ∞)𝕋. In particular, K ⊂ O × [t0, ∞). Let (x0, 𝜏0) ∈ O × [t0, ∞)𝕋 and x ∶ [𝜏0, 𝜔)𝕋 → X be the maximal solution of the dynamic equation on time scales (5.35). By Remark 5.34, x∗ ∶ [𝜏0, 𝜔) → X is the maximal solution of the MDE

y(t) = x0 + ∫_{𝜏0}^{t} f∗(y(s), s)dg(s),   t ⩾ 𝜏0,

where f∗ ∶ O × [t0, ∞) → X is given by f∗(x, t) = f(x, t∗), for all (x, t) ∈ O × [t0, ∞), and g(t) = t∗, for all t ∈ [t0, ∞). Since f satisfies conditions (B1)–(B3), by Lemma 5.31, f∗ satisfies conditions (A2)–(A4). By definition, the function g is nondecreasing and left-continuous on (t0, ∞). Using the same arguments as in the proof of Theorem 5.32, one can show that z0 + f∗(z0, s0)Δ+g(s0) ∈ O, for all (z0, s0) ∈ O × [t0, ∞). Thus, f∗, g, and x∗ satisfy the hypotheses of Theorem 5.24 and, hence, there exists tK ∈ [𝜏0, 𝜔) such that (x∗(t), t) ∉ K, for all t ∈ (tK, 𝜔). In particular, (x(t), t) = (x(t∗), t) = (x∗(t), t) ∉ K, for all t ∈ (tK, 𝜔) ∩ 𝕋, and this completes the proof. ◽

Notice that the set (tK, 𝜔) ∩ 𝕋 from the proof of Theorem 5.35 is nonempty. In fact, if 𝜔 < ∞, then Theorem 5.32 implies that 𝜔 ∈ 𝕋 and 𝜔 is left-dense. Therefore, (tK, 𝜔) ∩ 𝕋 ≠ ∅. On the other hand, if 𝜔 = ∞, then (tK, ∞) ∩ 𝕋 ≠ ∅, since sup 𝕋 = ∞.

Theorem 5.36: Let 𝕋 be a time scale such that sup 𝕋 = ∞ and t0 ∈ 𝕋. Suppose f ∶ O × [t0, ∞)𝕋 → X satisfies conditions (B1)–(B3). Moreover, assume that, for all (z0, s0) ∈ O × [t0, ∞)𝕋, z0 + f(z0, s0)𝜇(s0) ∈ O. Take (x0, 𝜏0) ∈ O × [t0, ∞)𝕋 and suppose x ∶ [𝜏0, 𝜔)𝕋 → X is the unique maximal solution of the dynamic equation on time scales (5.35) with initial condition x(𝜏0) = x0. If x(t) ∈ N, for all t ∈ [𝜏0, 𝜔)𝕋, where N is a compact subset of O, then 𝜔 = ∞ and we have a global forward solution.

Proof. Let (x0, 𝜏0) ∈ O × [t0, ∞)𝕋 and x ∶ [𝜏0, 𝜔)𝕋 → X be the unique maximal solution of (5.35) satisfying x(𝜏0) = x0. Assume, in addition, that x(t) ∈ N, for any t ∈ [𝜏0, 𝜔)𝕋, where N is a compact subset of O. By Remark 5.34, x∗ ∶ [𝜏0, 𝜔) → X is the unique maximal solution of

y(t) = x0 + ∫_{𝜏0}^{t} f∗(y(s), s)dg(s),


where f∗ ∶ O × [t0, ∞) → X is given by f∗(x, t) = f(x, t∗) for all (x, t) ∈ O × [t0, ∞) and g(t) = t∗ for all t ∈ [t0, ∞). Note that x∗(s) ∈ N, for all s ∈ [𝜏0, 𝜔), since x(t) ∈ N, for all t ∈ [𝜏0, 𝜔)𝕋. On the other hand, one can prove, similarly as in the proof of Theorem 5.32, that f∗ satisfies conditions (A2)–(A4), g is a nondecreasing and left-continuous function on (𝜏0, ∞) and, moreover, z0 + f∗(z0, 𝜏0)Δ+g(𝜏0) ∈ O, for all (z0, 𝜏0) ∈ O × [t0, ∞). Thus, the functions f∗ ∶ O × [t0, ∞) → X, g ∶ [t0, ∞) → ℝ and x∗ ∶ [𝜏0, 𝜔) → X satisfy all hypotheses of Corollary 5.25. Then 𝜔 = ∞ and the desired result follows. ◽

Next, we present a result that guarantees that the graph of the maximal solution of the dynamic equation on time scales (5.35) converges to a point of the boundary of Ω𝕋, as t → 𝜔−.

Theorem 5.37: Consider a time scale 𝕋 for which sup 𝕋 = ∞ and t0 ∈ 𝕋. Suppose f ∶ O × [t0, ∞)𝕋 → X satisfies conditions (B1)–(B3) and, for all (z0, s0) ∈ O × [t0, ∞)𝕋, we have z0 + f(z0, s0)𝜇(s0) ∈ O. Take (x0, 𝜏0) ∈ O × [t0, ∞)𝕋 and let x ∶ [𝜏0, 𝜔)𝕋 → X be the maximal solution of the dynamic equation on time scales (5.35). If 𝜔 < ∞, then the following statements hold:

(i) the limit limt→𝜔− x(t) exists;
(ii) (y, 𝜔) ∈ 𝜕Ω𝕋 and limt→𝜔−(x(t), t) = (y, 𝜔), where y = limt→𝜔− x(t) and Ω𝕋 = O × [t0, ∞)𝕋.

Proof. At first, let Ω𝕋 = O × [t0, ∞)𝕋 and x ∶ [𝜏0, 𝜔)𝕋 → X be the maximal solution of the dynamic equation on time scales (5.35). Suppose, further, that 𝜔 < ∞. Let f∗ ∶ O × [t0, ∞) → X be defined by f∗(x, t) = f(x, t∗), for all (x, t) ∈ O × [t0, ∞), and g ∶ [t0, ∞) → ℝ be given by g(t) = t∗, for all t ∈ [t0, ∞). By the same arguments as those used in the proof of Theorem 5.32, one can show that f∗ fulfills conditions (A2)–(A4) and g fulfills condition (A1). In addition,

z0 + f∗(z0, s0)Δ+g(s0) ∈ O,   for all (z0, s0) ∈ O × [t0, ∞).

On the other hand, Remark 5.34 yields that x∗ ∶ [𝜏0, 𝜔) → X is the maximal solution of

y(t) = x0 + ∫_{𝜏0}^{t} f∗(y(s), s)dg(s),   t ⩾ 𝜏0.

Since all hypotheses of Corollary 5.26 are fulfilled, the limit limt→𝜔− x∗(t) exists and, moreover,

(y, 𝜔) ∈ 𝜕Ω   and   limt→𝜔−(x∗(t), t) = (y, 𝜔),   (5.39)


where y = limt→𝜔− x∗(t) and Ω = O × [t0, ∞). By Theorem 5.32, 𝜔 ∈ 𝕋 and 𝜔 is left-dense.

Now, we prove item (i). Let {tk}k∈ℕ be a sequence in [𝜏0, 𝜔)𝕋 such that limk→∞ tk = 𝜔. Then, since tk ∈ 𝕋 implies tk∗ = tk,

limk→∞ x(tk) = limk→∞ x(tk∗) = limk→∞ x∗(tk) = y,

that is, the limit limt→𝜔− x(t) exists and limt→𝜔− x(t) = y.

Now, let us prove item (ii). Since 𝜔 is left-dense, there exists a sequence {sk}k∈ℕ in [𝜏0, 𝜔)𝕋 satisfying limk→∞ sk = 𝜔. Because {(x(sk), sk)}k∈ℕ ⊂ Ω𝕋 and limk→∞(x(sk), sk) = (y, 𝜔), the point (y, 𝜔) belongs to the closure of Ω𝕋.   (5.40)

On the other hand, by (5.39), (y, 𝜔) ∈ 𝜕Ω, so (y, 𝜔) belongs to the closure of Ωc. But the closure of Ωc is contained in the closure of Ω𝕋c, because Ω𝕋 ⊂ Ω. Thus, (y, 𝜔) belongs to the closure of Ω𝕋c.   (5.41)

At last, item (ii) follows from (5.40) and (5.41).


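The function g(t) = t∗ that drives all of these proofs can be visualized numerically. The following Python sketch is ours, not from [78] or the book; the time scale 𝕋 = [0, 1] ∪ [2, 3] is an arbitrary illustrative choice. It computes t∗ = inf{s ∈ 𝕋 ∶ s ⩾ t}, the graininess 𝜇, and the right-hand jump Δ+g, and confirms the dichotomy recorded in (5.36): Δ+g(s0) = 𝜇(s0) for s0 ∈ 𝕋, while Δ+g(s0) = 0 for s0 ∉ 𝕋.

```python
# A minimal numeric sketch of g(t) = t* on the concrete time scale
# T = [0,1] ∪ [2,3]: t* = inf{s ∈ T : s ⩾ t}, σ is the forward jump
# operator and μ(s) = σ(s) − s is the graininess.

INTERVALS = [(0.0, 1.0), (2.0, 3.0)]  # the time scale T as a union of intervals

def t_star(t):
    """g(t) = t* = inf{s in T : s >= t}."""
    for a, b in INTERVALS:
        if t <= b:
            return max(t, a)
    raise ValueError("t exceeds max T")

def sigma(s):
    """Forward jump operator on T."""
    for i, (a, b) in enumerate(INTERVALS):
        if a <= s <= b:
            if s < b:
                return s  # interior point: right-dense
            # right endpoint of a sub-interval: jump to the next one
            return INTERVALS[i + 1][0] if i + 1 < len(INTERVALS) else s

def mu(s):
    return sigma(s) - s

def delta_plus_g(s, h=1e-9):
    """Right-hand jump Δ+g(s) = lim_{t→s+} g(t) − g(s), approximated."""
    return t_star(s + h) - t_star(s)

# Right-scattered point s = 1 ∈ T: Δ+g(1) = μ(1) = 1, as in (5.36).
# Point s = 1.5 ∉ T: g is constant near s, so Δ+g(1.5) = 0.
```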

When O = X, it is possible to ensure that the domain of the maximal solution of the dynamic equation on time scales (5.35) equals [𝜏0, ∞)𝕋. This is the content of the next theorem.

Theorem 5.38: Let 𝕋 be a time scale such that sup 𝕋 = ∞ and t0 ∈ 𝕋. Suppose f ∶ X × [t0, ∞)𝕋 → X satisfies conditions (B1)–(B3) (with O = X). Then, for every (x0, 𝜏0) ∈ X × [t0, ∞)𝕋, there exists a unique maximal solution on [𝜏0, ∞)𝕋 of the dynamic equation on time scales (5.35) fulfilling x(𝜏0) = x0.

Proof. Consider a given fixed pair (x0, 𝜏0) ∈ X × [t0, ∞)𝕋. Thus, (x0, 𝜏0) ∈ X × [t0, ∞). Define f∗ ∶ X × [t0, ∞) → X by f∗(x, t) = f(x, t∗), for all (x, t) ∈ X × [t0, ∞), and g(t) = t∗, for all t ∈ [t0, ∞). Similarly as in the proof of Theorem 5.32, one can show that f∗ satisfies conditions (A2)–(A4) and g satisfies condition (A1). Furthermore, z0 + f∗(z0, s0)Δ+g(s0) ∈ X, for all (z0, s0) ∈ X × [t0, ∞). Then, since all hypotheses of Corollary 5.27 are satisfied, there exists a unique maximal solution y ∶ [𝜏0, ∞) → X of

y(t) = x0 + ∫_{𝜏0}^{t} f∗(y(s), s)dg(s).

According to Theorem 5.28, y must have the form y = x∗ ∶ [𝜏0 , ∞) → X, where x ∶ [𝜏0 , ∞)𝕋 → X is a solution of (5.35), with initial condition x(𝜏0 ) = x0 . Then, Remark 5.34 yields the result. ◽
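On a time scale in which every point is left-scattered, Corollary 5.33 and Theorem 5.38 are easy to see concretely. On 𝕋 = hℤ, the Δ-integral in (5.35) is a finite sum and the dynamic equation collapses to the explicit recursion x(t + h) = x(t) + h f(x(t), t), which can always be continued one step further, so the maximal solution is global forward. The Python sketch below is our illustration (f(x, t) = x and h = 0.01 are arbitrary choices); as h → 0 the recursion approximates the classical solution e^t.

```python
# On T = hZ every point is right- and left-scattered with graininess μ ≡ h,
# and (5.35) becomes the recursion x(t+h) = x(t) + h·f(x(t), t): each step
# is defined whenever the previous one is, so the solution is global forward.

import math

def solve_on_hZ(f, x0, t0, n_steps, h):
    """Iterate x(t+h) = x(t) + h*f(x(t), t) on the time scale hZ."""
    xs, t, x = [x0], t0, x0
    for _ in range(n_steps):
        x = x + h * f(x, t)
        t = t + h
        xs.append(x)
    return xs

h = 0.01
xs = solve_on_hZ(lambda x, t: x, x0=1.0, t0=0.0, n_steps=100, h=h)
# At t = 1 the iterate equals (1 + h)^(1/h), which is close to e.
```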


6 Linear Generalized Ordinary Differential Equations

Everaldo M. Bonotto 1, Rodolfo Collegari 2, Márcia Federson 3, and Miguel V. S. Frasson 1

1 Departamento de Matemática Aplicada e Estatística, Instituto de Ciências Matemáticas e de Computação (ICMC), Universidade de São Paulo, São Carlos, SP, Brazil
2 Faculdade de Matemática, Universidade Federal de Uberlândia, Uberlândia, MG, Brazil
3 Departamento de Matemática, Instituto de Ciências Matemáticas e de Computação (ICMC), Universidade de São Paulo, São Carlos, SP, Brazil

The investigation of linear equations in the framework of generalized ordinary differential equations (ODEs), which we refer to as linear generalized ODEs, is as important as it is in the setting of classical ODEs. Linear equations are easier to deal with, and one usually obtains a wider knowledge about them: for instance, every linear ODE with constant coefficients can be solved explicitly, and every maximal solution of such an equation is defined on the entire real line. The same applies to linear generalized ODEs.

This chapter, whose main references are [45, 209–211], deals with linear generalized ODEs, presenting an appropriate environment where any initial value problem (IVP) for linear generalized ODEs admits a unique global solution. We also recall the notion of the fundamental operator associated with a linear generalized ODE, and we establish a variation-of-constants formula for linear perturbed generalized ODEs, extending the result from [45]. Lastly, we apply the results to measure functional differential equations (as before, we write measure FDEs, for short).

Let X be a Banach space and L(X) be the Banach space of bounded linear operators from X into itself, and equip these spaces with the usual operator norm. As described in Chapter 1, BV([a, b], X) denotes the Banach space of functions x ∶ [a, b] → X of bounded variation endowed with the variation norm.

Generalized Ordinary Differential Equations in Abstract Spaces and Applications, First Edition. Edited by Everaldo M. Bonotto, Márcia Federson, and Jaqueline G. Mesquita. © 2021 John Wiley & Sons, Inc. Published 2021 by John Wiley & Sons, Inc.
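The classical claim recalled above, that a linear ODE with constant coefficients is solved globally by the exponential, can be checked numerically. The following Python sketch is ours (the 2×2 rotation generator is an arbitrary example): it computes e^{tA} by a truncated power series and verifies, by a finite difference, that x(t) = e^{tA}x0 satisfies x′ = Ax.

```python
# Minimal sketch: x(t) = exp(tA) x0 solves x' = Ax globally.
# exp(tA) is computed by the truncated series sum_k (tA)^k / k!.

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_vec(A, v):
    return [sum(A[i][j] * v[j] for j in range(2)) for i in range(2)]

def expm(A, t, terms=30):
    """Truncated power series for exp(tA), A a 2x2 matrix."""
    tA = [[t * a for a in row] for row in A]
    result = [[1.0, 0.0], [0.0, 1.0]]   # identity
    power = [[1.0, 0.0], [0.0, 1.0]]
    fact = 1.0
    for k in range(1, terms):
        power = mat_mul(power, tA)      # (tA)^k
        fact *= k                       # k!
        result = [[result[i][j] + power[i][j] / fact for j in range(2)]
                  for i in range(2)]
    return result

A = [[0.0, 1.0], [-1.0, 0.0]]           # rotation generator, arbitrary example
x0 = [1.0, 0.0]
t, h = 0.7, 1e-6
x = lambda s: mat_vec(expm(A, s), x0)

# finite-difference derivative of x at t versus A x(t): both ≈ x'(t)
deriv = [(x(t + h)[i] - x(t - h)[i]) / (2 * h) for i in range(2)]
rhs = mat_vec(A, x(t))
```

Since A is skew-symmetric here, e^{tA} is a rotation and the Euclidean norm of x(t) stays equal to 1, which gives an extra sanity check.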


Given a compact interval [a, b] ⊂ ℝ and a function F ∶ X × [a, b] → X, a generalized ODE

dx/d𝜏 = DF(x, t)

is called a linear generalized ODE, if there exist functions A ∶ [a, b] → L(X) and B ∶ [a, b] → X such that F(x, t) = A(t)x + B(t). When B ≡ 0, the generalized ODE

dx/d𝜏 = D[A(t)x]   (6.1)

is called a linear homogeneous generalized ODE and, in case B ≢ 0, the generalized ODE

dx/d𝜏 = D[A(t)x + B(t)]   (6.2)

is called a linear nonhomogeneous generalized ODE. Recall that, according to Definition 4.1, a function x ∶ [a, b] → X is a solution of (6.2) on [a, b], whenever

x(s2) − x(s1) = ∫_{s1}^{s2} D[A(t)x(𝜏)] + B(s2) − B(s1),   s1, s2 ∈ [a, b],   (6.3)

where the integral is in the sense of Kurzweil, as in Definition 2.1. Note that the integral term on the right-hand side of the last equality coincides with the abstract Perron–Stieltjes integral, since the Riemann-type sums for ∫_a^b D[A(t)x(𝜏)] have the form ∑[A(si) − A(si−1)]x(𝜏i). Then we shall use a more conventional notation for the integral, namely, ∫_a^b d[A(t)]x(t), and, hence, (6.3) becomes

x(s2) − x(s1) = ∫_{s1}^{s2} d[A(s)]x(s) + B(s2) − B(s1),   s1, s2 ∈ [a, b].
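The Riemann-type sums just mentioned can be computed directly. The small Python sketch below is ours, not from the book; the scalar choices A(t) = t² and x(t) = t are arbitrary. It approximates ∫_0^1 d[A(t)]x(t) by ∑[A(si) − A(si−1)]x(𝜏i) over a fine tagged partition; for this smooth A, the sums converge to ∫_0^1 A′(t)x(t)dt = ∫_0^1 2t·t dt = 2/3.

```python
# Riemann–Stieltjes sum of the form  sum [A(s_i) - A(s_{i-1})] x(tau_i)
# approximating the Perron–Stieltjes integral  ∫_0^1 d[A(t)] x(t)
# in the smooth scalar case A(t) = t^2, x(t) = t.

def stieltjes_sum(A, x, a, b, n):
    """Tagged-partition sum with a uniform partition and left-endpoint tags."""
    h = (b - a) / n
    total = 0.0
    for i in range(n):
        s_prev, s_next = a + i * h, a + (i + 1) * h
        tau = s_prev                      # tag of the subinterval
        total += (A(s_next) - A(s_prev)) * x(tau)
    return total

approx = stieltjes_sum(lambda t: t * t, lambda t: t, 0.0, 1.0, 20000)
# For smooth A this converges to ∫_0^1 A'(t) x(t) dt = 2/3.
```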

Proposition 6.1: Suppose A ∈ BV([a, b], L(X)) and B ∈ BV([a, b], X). If x ∶ [a, b] → X is a solution of (6.2) on [a, b], then x ∈ BV([a, b], X).

A proof of Proposition 6.1 follows the steps of [209, Lemma 6.1] with obvious adaptations to the Banach space setting. Next, we present a result on the existence and uniqueness of a global solution of an initial value problem (we write IVP, for short) for the linear generalized ODE (6.1). In order to do that, we introduce some conditions through the next definition.

Definition 6.2: Let J ⊂ ℝ be an interval, A ∶ J → L(X) be a function, Δ+A(t) = lims→t+ A(s) − A(t), Δ−A(t) = A(t) − lims→t− A(s), and I ∈ L(X) be the identity operator. We say that

(i) A satisfies condition (Δ+) on J, if [I + Δ+A(t)]−1 ∈ L(X), for all t ∈ J∖{sup J};


(ii) A satisfies condition (Δ−) on J, if [I − Δ−A(t)]−1 ∈ L(X), for all t ∈ J∖{inf J};
(iii) A satisfies condition (Δ), whenever A satisfies both conditions (Δ+) and (Δ−).

The next theorem concerns the backward and forward solutions of a linear nonhomogeneous generalized ODE.

Theorem 6.3: Let (t0, x0) ∈ [a, b] × X, B ∈ BV([a, b], X) and A ∈ BV([a, b], L(X)). Consider the IVP

dx/d𝜏 = D[A(t)x + B(t)],   x(t0) = x0.   (6.4)

(i) If A satisfies (Δ+) on [a, t0], then the IVP (6.4) has a unique solution on [a, t0].
(ii) If A satisfies (Δ−) on [t0, b], then the IVP (6.4) has a unique solution on [t0, b].
(iii) If A satisfies (Δ) on [a, b], then the IVP (6.4) has a unique solution on [a, b].

For a proof, the reader may consult [211, Propositions 2.8, 2.9, and Theorem 2.10].

Remark 6.4: Let J ⊂ ℝ be an interval and denote by BVloc(J, X) the set of all functions F ∶ J → X of locally bounded variation, that is, F is of bounded variation on each compact interval [a, b] ⊂ J. If A ∈ BVloc(ℝ, L(X)) satisfies condition (Δ) on ℝ and B ∈ BVloc(ℝ, X), then Theorem 6.3 guarantees the existence and uniqueness of a global forward solution of the IVP (6.4), with (t0, x0) ∈ ℝ × X.

Remark 6.5: If A ∈ BV([a, b], L(X)), then, by Theorem 1.4, the sets {t ∈ [a, b) ∶ ‖Δ+A(t)‖ ⩾ 1} and {t ∈ (a, b] ∶ ‖Δ−A(t)‖ ⩾ 1} are finite. Therefore, there exists a finite set F ⊂ [a, b] such that the operators [I + Δ+A(t)] and [I − Δ−A(t)] are invertible for every t ∈ [a, b] ⧵ F. This means that, when A ∈ BV([a, b], L(X)), condition (Δ) is satisfied except possibly on a finite subset of [a, b].
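In the scalar case X = ℝ, the operator [I + Δ+A(t)] is just the number 1 + Δ+A(t), so condition (Δ+) fails exactly at jumps equal to −1, and Remark 6.5 says that only finitely many jumps can have absolute value ⩾ 1. The small Python sketch below is ours; the step function and its jump sizes are an arbitrary example.

```python
# Scalar illustration of condition (Δ+) for a right-continuous step function
# A with jumps c_k at points t_k. Here Δ+A(t_k) = c_k, and the operator
# [1 + Δ+A(t_k)] is invertible iff c_k != -1.

JUMPS = {0.25: 0.5, 0.5: -1.0, 0.75: 2.0, 0.9: 0.3}   # arbitrary example

def delta_plus_A(t):
    return JUMPS.get(t, 0.0)

# Points with a "large" jump (finitely many, in line with Remark 6.5):
big_jumps = sorted(t for t, c in JUMPS.items() if abs(c) >= 1.0)

# Points where condition (Δ+) actually fails, i.e. 1 + Δ+A(t) = 0:
bad_points = sorted(t for t, c in JUMPS.items() if 1.0 + c == 0.0)
```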

6.1 The Fundamental Operator

As we look at Eq. (6.1), we can also consider the equation

dz/d𝜏 = D[A(t)z],   (6.5)


with z ∈ L(X). A solution of (6.5) on the interval [a, b] is an operator-valued function z ∶ [a, b] → L(X) such that

z(s2) − z(s1) = ∫_{s1}^{s2} d[A(s)]z(s),   s1, s2 ∈ [a, b].

Thus, if we fix (t0, z0) ∈ [a, b] × L(X) and we assume that A ∈ BV([a, b], L(X)) satisfies condition (Δ), then Theorem 6.3 implies that the IVP

dz/d𝜏 = D[A(t)z],   z(t0) = z0   (6.6)

has a unique solution Φ ∶ [a, b] → L(X). Furthermore, Φ ∶ [a, b] → L(X) is a solution of the IVP (6.6) if and only if it satisfies the integral equation

Φ(t) = z0 + ∫_{t0}^{t} d[A(s)]Φ(s),   t ∈ [a, b].   (6.7)

This leads us to the following result. See [45, Theorem 4.3].

Theorem 6.6: Assume that A ∈ BV([a, b], L(X)) satisfies condition (Δ). Then there exists a uniquely determined operator U ∶ [a, b] × [a, b] → L(X), called the fundamental operator of the linear generalized ODE (6.1), such that

U(t, s) = I + ∫_{s}^{t} d[A(r)]U(r, s),   t, s ∈ [a, b].   (6.8)

Note that, if U ∶ [a, b] × [a, b] → L(X) is the fundamental operator given by (6.8), then, for each s ∈ [a, b], the operator U(⋅, s) ∶ [a, b] → L(X) is the unique solution of the IVP

dz/d𝜏 = D[A(t)z],   z(s) = I.

Moreover, U(⋅, s) ∶ [a, b] → L(X) is of bounded variation on [a, b], for each s ∈ [a, b]. As a consequence, we have the following straightforward result.

Proposition 6.7: Let x̃ ∈ X, s ∈ [a, b] and suppose A ∈ BV([a, b], L(X)) satisfies condition (Δ). Then the unique solution x ∶ [a, b] → X of the IVP

dx/d𝜏 = D[A(t)x],   x(s) = x̃,


is given by x(t) = U(t, s)x̃, for all t ∈ [a, b], where U ∶ [a, b] × [a, b] → L(X) is given by (6.8).

The next proposition gives us some basic properties of the fundamental operator. The proof follows the steps of [209, Theorem 6.15] with obvious adaptations for the Banach space-valued case.

Proposition 6.8: Suppose A ∈ BV([a, b], L(X)) satisfies condition (Δ). Then the operator U ∶ [a, b] × [a, b] → L(X) given by (6.8) satisfies the following properties:

(i) U(t, t) = I, for all t ∈ [a, b];
(ii) There exists a constant M > 0 such that, for all t, s ∈ [a, b], we have
‖U(t, s)‖ ⩽ M,   var_a^b U(t, ⋅) ⩽ M,   and   var_a^b U(⋅, s) ⩽ M;
(iii) U(t, s) = U(t, r)U(r, s), for all t, r, s ∈ [a, b];
(iv) [U(t, s)]−1 exists in L(X) and [U(t, s)]−1 = U(s, t), for all t, s ∈ [a, b];
(v) For all t, s ∈ [a, b],
U(t+, s) = [I + Δ+A(t)]U(t, s)   and   U(t−, s) = [I − Δ−A(t)]U(t, s),
U(t, s+) = U(t, s)[I + Δ+A(s)]−1   and   U(t, s−) = U(t, s)[I − Δ−A(s)]−1.
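In the scalar case X = ℝ with a continuous A of bounded variation, the fundamental operator is available in closed form: U(t, s) = exp(A(t) − A(s)) satisfies the integral equation (6.8) and properties (i), (iii), and (iv) of Proposition 6.8. The Python check below is ours; A(t) = sin t is an arbitrary smooth example, and the Perron–Stieltjes integral is approximated by a fine Riemann–Stieltjes sum.

```python
# Scalar sanity check: with continuous A, U(t, s) = exp(A(t) - A(s)) solves
# U(t, s) = 1 + ∫_s^t d[A(r)] U(r, s) and obeys Proposition 6.8 (i),(iii),(iv).

import math

A = math.sin                          # an arbitrary continuous BV function

def U(t, s):
    return math.exp(A(t) - A(s))

def stieltjes(F, s, t, n=20000):
    """Riemann–Stieltjes sum of ∫_s^t F(r) d[A(r)] with left-endpoint tags."""
    h = (t - s) / n
    return sum((A(s + (i + 1) * h) - A(s + i * h)) * F(s + i * h)
               for i in range(n))

s, r, t = 0.2, 0.9, 1.7
integral = stieltjes(lambda u: U(u, s), s, t)   # ≈ U(t, s) - 1, by (6.8)
cocycle_gap = U(t, s) - U(t, r) * U(r, s)       # property (iii): ≈ 0
inverse_gap = U(t, s) * U(s, t) - 1.0           # property (iv): ≈ 0
```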

6.2 A Variation-of-Constants Formula

In this section, we present a variation-of-constants formula for a linear perturbed generalized ODE of the type

dx/d𝜏 = D[A(t)x + F(x, t)],

and, in order to do that, we need some auxiliary results, which we present in the sequel. The next three lemmas guarantee that the integrals involved in the variation-of-constants formula are well defined. The first lemma can be found in [45, Lemma 4.5].

Lemma 6.9: Suppose A ∈ BV([a, b], L(X)) satisfies condition (Δ) and let U ∶ [a, b] × [a, b] → L(X) be given by (6.8). Then, for 𝜑 ∈ G([a, b], X), the function 𝜑̂ ∶ [a, b] → X given by 𝜑̂(r) = ∫_a^r ds[U(r, s)]𝜑(s) is regulated.

Proof. Fix 𝜑 ∈ G([a, b], X), r ∈ [a, b) and let {rn}n∈ℕ ⊂ [r, b] be a decreasing sequence that converges to r. Note that the integral ∫_a^r ds[U(r+, s)]𝜑(s) makes sense since, by Proposition 6.8 and the Theorem of Helly (Theorem 1.34), U(r+, ⋅) ∈ BV([a, b], L(X)).


Defining Ũ ∶ [a, b] × [a, b] → L(X) by Ũ(𝜎, s) = U(𝜎, s), for a ⩽ s ⩽ 𝜎 ⩽ b, and Ũ(𝜎, s) = I, for a ⩽ 𝜎 < s ⩽ b, let M > 0 be such that ‖U(r, s)‖ ⩽ M, for all s ∈ [a, b]. By Proposition 6.8, we have

sup_{s∈[a,b]} ‖Ũ(rn, s) − Ũ(r+, s)‖ = sup_{s∈[a,r]} ‖U(rn, s) − U(r+, s)‖
= sup_{s∈[a,r]} ‖U(rn, r)U(r, s) − [I + Δ+A(r)]U(r, s)‖
⩽ sup_{s∈[a,r]} ‖U(rn, r) − I − Δ+A(r)‖ ‖U(r, s)‖
⩽ M ‖∫_{r}^{rn} d[A(𝜏)]U(𝜏, r) − Δ+A(r)‖ → 0,

as n → ∞. Using the Uniform Convergence Theorem (see Theorem 1.51), we obtain

lim_{n→∞} ∫_a^b ds[Ũ(rn, s) − Ũ(r+, s)]𝜑(s) = 0,

which establishes the existence of the right-hand limit lims→r+ 𝜑̂(s) for all r ∈ [a, b). Analogously, one proves that the left-hand limit lims→r− 𝜑̂(s) exists for all r ∈ (a, b]. ◽

As a consequence of Theorem 2.9, we have the following useful result.

Lemma 6.10: Let K ∈ BV([a, b], L(X)) and x0 ∈ X. Then

∫_a^b d[K(r)]𝜒{a}(r)x0 = [K(a+) − K(a)]x0   and   ∫_a^b d[K(r)]𝜒{b}(r)x0 = [K(b) − K(b−)]x0.


Moreover, for every c ∈ (a, b) and every (𝛼, 𝛽) ⊂ [a, b], we have

∫_a^b d[K(r)]𝜒{c}(r)x0 = [K(c+) − K(c−)]x0   and   ∫_a^b d[K(r)]𝜒(𝛼,𝛽)(r)x0 = [K(𝛽−) − K(𝛼+)]x0.
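For a continuous K, the one-sided limits in Lemma 6.10 coincide with the values of K, so the last identity reads ∫_a^b d[K(r)]𝜒(𝛼,𝛽)(r)x0 = [K(𝛽) − K(𝛼)]x0. The Python check below is ours (K(t) = t² and the interval (0.2, 0.7) are arbitrary choices); the Perron–Stieltjes integral is approximated by a fine tagged Riemann–Stieltjes sum.

```python
# Check of Lemma 6.10 in the continuous scalar case: integrating the
# indicator of (alpha, beta) against d[K] picks up K(beta) - K(alpha).

def K(t):
    return t * t                       # arbitrary continuous BV function

def chi(alpha, beta, t):
    return 1.0 if alpha < t < beta else 0.0

def stieltjes_indicator(alpha, beta, a, b, n=100000):
    """Tagged sum  sum chi(tau_i) [K(s_i) - K(s_{i-1})]  with midpoint tags."""
    h = (b - a) / n
    total = 0.0
    for i in range(n):
        s0, s1 = a + i * h, a + (i + 1) * h
        tau = 0.5 * (s0 + s1)
        total += chi(alpha, beta, tau) * (K(s1) - K(s0))
    return total

x0 = 1.0
approx = stieltjes_indicator(0.2, 0.7, 0.0, 1.0) * x0
expected = (K(0.7) - K(0.2)) * x0      # = 0.49 - 0.04 = 0.45
```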

The next result can be found in [45, Lemma 4.6].

Lemma 6.11: Let A ∈ BV([a, b], L(X)) satisfy condition (Δ), K ∈ BV([a, b], L(X)), and U ∶ [a, b] × [a, b] → L(X) be given by (6.8). Then the function Û ∶ [a, b] → L(X) defined by

Û(s) = ∫_{s}^{b} d[K(r)]U(r, s)

is of bounded variation on [a, b]. Moreover, for every c ∈ (a, b], we have

lim_{s→c−} Û(s) = [K(c) − K(c−)]U(c, c−) + ∫_{c}^{b} d[K(r)]U(r, c−),   (6.9)

and, for every c ∈ [a, b), we have

lim_{s→c+} Û(s) = ∫_{c}^{b} d[K(r)]U(r, c+) − [K(c+) − K(c)]U(c, c+).   (6.10)

Proof. At first, note that Û is well defined, since K, U(⋅, s) ∈ BV([a, b], L(X)) for all s ∈ [a, b]. Similarly as in the proof of Lemma 6.9, define Ũ ∶ [a, b] × [a, b] → L(X) by Ũ(𝜎, s) = U(𝜎, s), for a ⩽ s ⩽ 𝜎 ⩽ b, and Ũ(𝜎, s) = I, for a ⩽ 𝜎 < s ⩽ b. Note that, for each 𝜎 ∈ [a, b], var_a^b(Ũ(𝜎, ⋅)) = var_a^𝜎(U(𝜎, ⋅)). Also, for every s ∈ [a, b],

∫_a^b d[K(r)]Ũ(r, s) = ∫_a^s d[K(r)]I + ∫_s^b d[K(r)]U(r, s) = K(s) − K(a) + Û(s).   (6.11)

Then, if d = (𝛼j) ∈ D[a,b], we obtain

∑_{j=1}^{|d|} ‖Û(𝛼j) − Û(𝛼j−1)‖ ⩽ ∑_{j=1}^{|d|} ‖∫_a^b d[K(r)](Ũ(r, 𝛼j) − Ũ(r, 𝛼j−1))‖ + var_a^b(K)
⩽ var_a^b(K) sup_{r∈[a,b]} ∑_{j=1}^{|d|} ‖Ũ(r, 𝛼j) − Ũ(r, 𝛼j−1)‖ + var_a^b(K)
⩽ var_a^b(K) sup_{r∈[a,b]} var_a^b(Ũ(r, ⋅)) + var_a^b(K)
⩽ var_a^b(K) sup_{r∈[a,b]} var_a^b(U(r, ⋅)) + var_a^b(K)
= var_a^b(K) (sup_{r∈[a,b]} var_a^b(U(r, ⋅)) + 1),

which implies that Û ∈ BV([a, b], L(X)).

Now, let c ∈ (a, b] be fixed. By (6.11) and Theorem 1.51, we obtain

lim_{s→c−} Û(s) = K(a) − K(c−) + lim_{s→c−} ∫_a^b d[K(r)]Ũ(r, s)
= K(a) − K(c−) + ∫_a^b d[K(r)]Ũ(r, c−),   (6.12)

where Ũ(⋅, c−) ∶ [a, b] → L(X) is given by Ũ(r, c−) = U(r, c−), for a < c ⩽ r ⩽ b, and Ũ(r, c−) = I, for a ⩽ r < c ⩽ b. By Theorem 2.9, we have

∫_a^b d[K(r)]Ũ(r, c−) = ∫_a^c d[K(r)]Ũ(r, c−) + ∫_c^b d[K(r)]U(r, c−)
= lim_{s→c−} [∫_a^s d[K(r)]I + [K(c) − K(s)]U(c, c−)] + ∫_c^b d[K(r)]U(r, c−)
= K(c−) − K(a) + [K(c) − K(c−)]U(c, c−) + ∫_c^b d[K(r)]U(r, c−).   (6.13)

=

∫a

b

d[K(s)]𝜑(s) +

∫a

b

ds

∫s

dr [K(r)]U(r, s) 𝜑(s).

(6.14)

6.2 A Variation-of-Constants Formula

Proof. At first, note that Lemmas 6.9 and 6.11 guarantee that all the integrals involved in (6.14) are well defined. Let (𝛼, 𝛽) ⊂ [a, b] and x0 ∈ X be fixed. According to Theorem 2.9 and Lemma 6.10, we obtain r

∫a

⎧ 0, a ⩽ r ⩽ 𝛼, ⎪ ds [U(r, s)]𝜒(𝛼,𝛽) (s)x0 = ⎨ x0 − U(r, 𝛼 + )x0 , 𝛼 < r < 𝛽, ⎪ U(r, 𝛽 − )x0 − U(r, 𝛼 + )x0 , 𝛽 ⩽ r ⩽ b, ⎩

that is, r

∫a

ds [U(r, s)]𝜒(𝛼,𝛽) (s)x0 = 𝜒(𝛼,𝛽) (r)x0 + 𝜒[𝛽,b] (r)U(r, 𝛽 − )x0 − 𝜒(𝛼,b] (r)U(r, 𝛼 + )x0 .

Then, by Lemma 6.10, we have ) ( r b d[K(r)] d [U(r, s)]𝜒(𝛼,𝛽) (s)x0 ∫a ∫a s b

=

∫a

) ( d[K(r)] 𝜒(𝛼,𝛽) (r)x0 + 𝜒[𝛽,b] (r)U(r, 𝛽 − )x0 − 𝜒(𝛼,b] (r)U(r, 𝛼 + )x0 b

= [K(𝛽 − ) − K(𝛼 + )]x0 +

∫a

d[K(r)]𝜒[𝛽,b] (r)U(r, 𝛽 − )x0

b



∫a

d[K(r)]𝜒(𝛼,b] (r)U(r, 𝛼 + )x0 .

(6.15)

Let us calculate the integrals in (6.15). Using Lemma 6.10, we have b

d[K(r)]𝜒[𝛽,b] (r)U(r, 𝛽 − )x0

∫a

𝛽

=

∫a

b

d[K(r)]𝜒{𝛽} (r)U(r, 𝛽 − )x0 +

∫𝛽

d[K(r)]U(r, 𝛽 − )x0

b

= [K(𝛽) − K(𝛽 − )]U(𝛽, 𝛽 − )x0 +

∫𝛽

d[K(r)]U(r, 𝛽 − )x0

and also b

∫a

d[K(r)]𝜒(𝛼,b] (r)U(r, 𝛼 + )x0 b

=

d[K(r)]𝜒(𝛼,b] (r)U(r, 𝛼 + )x0

∫𝛼 b

=

b

d[K(r)]U(r, 𝛼 + )x0 −

∫𝛼

∫𝛼

d[K(r)]𝜒{𝛼} (r)U(r, 𝛼 + )x0

b

=

∫𝛼

d[K(r)]U(r, 𝛼 + )x0 − [K(𝛼 + ) − K(𝛼)]U(𝛼, 𝛼 + )x0 .

213

214

6 Linear Generalized Ordinary Differential Equations

Thus,

(

b

d[K(r)]

∫a

)

r

∫a

ds [U(r, s)]𝜒(𝛼,𝛽) (s)x0

= [K(𝛽 − ) − K(𝛼 + )]x0 + [K(𝛽) − K(𝛽 − )]U(𝛽, 𝛽 − )x0 b

+

d[K(r)]U(r, 𝛽 − )x0

∫𝛽 (



)

b

d[K(r)]U(r, 𝛼 )x0 − [K(𝛼 ) − K(𝛼)]U(𝛼, 𝛼 )x0 +

∫𝛼

+

.

+

(6.16)

On the other hand, by Lemma 6.10, we obtain b

d[K(s)]𝜒(𝛼,𝛽) (s)x0 = [K(𝛽 − ) − K(𝛼 + )]x0

∫a

(6.17)

and, by Lemma 6.11, we obtain ] [ b

∫a

b

ds

∫s [

= lim− s→𝛽

d[K(r)]U(r, s) 𝜒(𝛼,𝛽) (s)x0 b

∫s

]

[

d[K(r)]U(r, s) x0 − lim+ s→𝛼

∫s

b

= [K(𝛽) − K(𝛽 − )]U(𝛽, 𝛽 − )x0 + d[K(r)] ∫𝛽 ( b



∫𝛼

]

b

d[K(r)]U(r, s) x0 (

) lim− U(r, s)x0 s→𝛽 )

d[K(r)]U(r, 𝛼 + )x0 − [K(𝛼 + ) − K(𝛼)]U(𝛼, 𝛼 + )x0

.

(6.18)

Thus, equalities (6.16)–(6.18) yield (6.14), for every 𝜑(s) = 𝜒(𝛼,𝛽)(s)x0. Following analogous steps, one can prove that equality (6.14) holds for every 𝜑(s) = 𝜒{𝛾}(s)x0, for each 𝛾 ∈ [a, b]. Finally, given 𝜑 ∈ G([a, b], X), let {𝜑n}n∈ℕ be a sequence of step functions which converges uniformly on [a, b] to 𝜑. Since each 𝜑n is a step function, equality (6.14) is satisfied for all n ∈ ℕ and, by the Uniform Convergence Theorem (Theorem 1.51), (6.14) is also fulfilled for 𝜑 ∈ G([a, b], X). ◽

Corollary 6.13: Let A ∈ BV([a, b], L(X)) satisfy condition (Δ) and U ∶ [a, b] × [a, b] → L(X) be given by (6.8). Then, for 𝜑 ∈ G([a, b], X), we have

∫_a^b d[A(r)](∫_a^r ds[U(r, s)]𝜑(s)) = ∫_a^b d[A(s)]𝜑(s) + ∫_a^b ds[U(b, s)]𝜑(s).

Proof. The proof follows by considering K = A in Proposition 6.12. ◽
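In the smooth scalar case, Corollary 6.13 can be verified numerically. Take A(t) = t on [0, 1], so that U(r, s) = e^{r−s}, and 𝜑 ≡ 1; a direct computation shows that both sides of the identity equal 2 − e. The Python check below is ours (all concrete choices are arbitrary), with the Stieltjes integrals reduced to ordinary Riemann sums since every integrator here is smooth.

```python
# Numeric check of Corollary 6.13 in the smooth scalar case A(t) = t,
# phi ≡ 1 on [0, 1], where U(r, s) = exp(r - s). Both sides equal 2 - e.

import math

n = 1000
h = 1.0 / n

def dU_ds(r, s):                      # d/ds of U(r, s) = exp(r - s)
    return -math.exp(r - s)

def inner(r):                         # ∫_0^r d_s[U(r, s)] phi(s)  (midpoint rule)
    m = max(1, int(r / h))
    hh = r / m
    return sum(dU_ds(r, (j + 0.5) * hh) * hh for j in range(m))

# Left-hand side: ∫_0^1 d[A(r)] (inner integral), with d[A(r)] = dr here.
lhs = sum(inner((i + 0.5) * h) * h for i in range(n))

# Right-hand side: ∫_0^1 d[A(s)] phi(s) + ∫_0^1 d_s[U(1, s)] phi(s).
rhs = 1.0 + sum(dU_ds(1.0, (i + 0.5) * h) * h for i in range(n))

expected = 2.0 - math.e
```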



6.2 A Variation-of-Constants Formula

A variation-of-constants formula for linear perturbed generalized ODEs follows next.

Theorem 6.14: Let A ∈ BV([a, b], L(X)) satisfy condition (Δ) and consider a function F: X × [a, b] → X fulfilling
\[
\|F(x,t_2)-F(x,t_1)\| \leq |h(t_2)-h(t_1)|,
\]
for all (x, t₂), (x, t₁) ∈ X × [a, b] and for some nondecreasing function h: [a, b] → ℝ. Let U: [a, b] × [a, b] → L(X) be given by (6.8). Then, x ∈ G([a, b], X) is a solution of the linear perturbed generalized ODE
\[
\begin{cases}
\dfrac{dx}{d\tau} = D[A(t)x + F(x,t)],\\[4pt]
x(t_0) = \tilde x,
\end{cases}\tag{6.19}
\]
if and only if it is a solution of the integral equation
\[
x(t) = U(t,t_0)\tilde x + \int_{t_0}^t DF(x(\tau),s) - \int_{t_0}^t d_\sigma[U(t,\sigma)]\left(\int_{t_0}^\sigma DF(x(\tau),s)\right), \tag{6.20}
\]
for all t ∈ [a, b].

Proof. Let x ∈ G([a, b], X) be a solution of the IVP (6.19) and define y: [a, b] → X by
\[
y(t) = U(t,t_0)\tilde x + \int_{t_0}^t DF(x(\tau),s) - \int_{t_0}^t d_\sigma[U(t,\sigma)]\left(\int_{t_0}^\sigma DF(x(\tau),s)\right), \quad t \in [a,b].
\]
Using (6.8) and Corollary 6.13, for all t ∈ [a, b], we have

\[
\begin{aligned}
\int_{t_0}^t d[A(r)]\,y(r)
&= \int_{t_0}^t d[A(r)]\,U(r,t_0)\tilde x + \int_{t_0}^t d[A(r)]\int_{t_0}^r DF(x(\tau),s)\\
&\quad - \int_{t_0}^t d[A(r)]\left(\int_{t_0}^r d_\sigma[U(r,\sigma)]\int_{t_0}^\sigma DF(x(\tau),s)\right)\\
&= \int_{t_0}^t d[A(r)]\,U(r,t_0)\tilde x + \int_{t_0}^t d[A(r)]\int_{t_0}^r DF(x(\tau),s)\\
&\quad - \int_{t_0}^t d[A(r)]\int_{t_0}^r DF(x(\tau),s) - \int_{t_0}^t d_\sigma[U(t,\sigma)]\left(\int_{t_0}^\sigma DF(x(\tau),s)\right)\\
&= U(t,t_0)\tilde x - \tilde x - \int_{t_0}^t d_\sigma[U(t,\sigma)]\left(\int_{t_0}^\sigma DF(x(\tau),s)\right)\\
&= y(t) - \tilde x - \int_{t_0}^t DF(x(\tau),s),
\end{aligned}
\]


and, hence,
\[
x(t) - y(t) = \int_{t_0}^t d[A(r)]\,(x(r)-y(r)), \quad t \in [a,b].
\]

Since the unique solution of the IVP
\[
\begin{cases}
\dfrac{dz}{d\tau} = D[A(t)z],\\[4pt]
z(t_0) = 0
\end{cases}
\]
is the trivial solution, x(t) = y(t) for all t ∈ [a, b], which implies that x is a solution of (6.20).

On the other hand, let x ∈ G([a, b], X) be a solution of (6.20) and define φ(σ) = ∫_{t₀}^σ DF(x(τ), s) for σ ∈ [a, b]. Note that, by Corollary 1.48, φ ∈ BV([a, b], X). Then, using Corollary 6.13, we obtain
\[
\int_{t_0}^t d[A(r)]\,x(r) = U(t,t_0)\tilde x - \tilde x - \int_{t_0}^t d_\sigma[U(t,\sigma)]\,\varphi(\sigma) = x(t) - \tilde x - \int_{t_0}^t DF(x(\tau),s),
\]
for all t ∈ [a, b], which means x is a solution of (6.19). ◽

As a consequence, we have a variation-of-constants formula for a linear nonhomogeneous generalized ODE.

Corollary 6.15: Let A ∈ BV([a, b], L(X)) satisfy condition (Δ) and consider a function b ∈ BV([a, b], X). Let U: [a, b] × [a, b] → L(X) be given by (6.8). Then, the unique solution x ∈ BV([a, b], X) of the IVP
\[
\begin{cases}
\dfrac{dx}{d\tau} = D[A(t)x + b(t)],\\[4pt]
x(t_0) = \tilde x
\end{cases}\tag{6.21}
\]
is given by
\[
x(t) = U(t,t_0)\tilde x + b(t) - b(t_0) - \int_{t_0}^t d_\sigma[U(t,\sigma)]\,\bigl(b(\sigma)-b(t_0)\bigr), \quad t \in [a,b]. \tag{6.22}
\]

6.3 Linear Measure FDEs

Chapter 4 gives a complete description of the correspondence between generalized ODEs and measure FDEs. Here, we describe the correspondence between linear generalized ODEs and linear measure FDEs. The main reference for this section is [45, Section 5], which deals with the case X = ℝⁿ.

Let r, σ > 0, t₀ ∈ ℝ and X be a Banach space. Recall that, given a function y: [t₀ − r, t₀ + σ] → X, the memory of y at a point t ∈ [t₀, t₀ + σ] can be described by a function yₜ: [−r, 0] → X defined by yₜ(θ) = y(t + θ), for all θ ∈ [−r, 0], and denote by 𝔏(G([−r, 0], X), X) the space of all linear functions from G([−r, 0], X) to X. Consider the linear measure FDE
\[
y(t) = y(t_0) + \int_{t_0}^t \ell(s)y_s\,dg(s), \tag{6.23}
\]
and the IVP
\[
\begin{cases}
y(t) = \phi(0) + \displaystyle\int_{t_0}^t \ell(s)y_s\,dg(s),\\
y_{t_0} = \phi,
\end{cases}\tag{6.24}
\]
where g: [t₀, t₀ + σ] → ℝ is nondecreasing, ℓ: [t₀, t₀ + σ] → 𝔏(G([−r, 0], X), X), φ ∈ G([−r, 0], X), and the integral on the right-hand side of (6.23), when it exists, is considered in the sense of Perron–Stieltjes.

Recall that, according to Definition 3.1, a function y: [t₀ − r, t₀ + σ] → X is a solution of the IVP (6.24) if y(t) = φ(t − t₀) for every t ∈ [t₀ − r, t₀], the Perron–Stieltjes integral ∫_{t₀}^{t₀+σ} ℓ(s)yₛ dg(s) exists, and equality (6.23) holds for every t ∈ [t₀, t₀ + σ].

Next, we introduce the following conditions on the function ℓ. We say that ℓ: [t₀, t₀ + σ] → 𝔏(G([−r, 0], X), X) satisfies condition

(L1) if the Perron–Stieltjes integral ∫_{t₀}^{t₀+σ} ℓ(s)yₛ dg(s) exists for every y ∈ G([t₀ − r, t₀ + σ], X);
(L2) if there exists a Perron–Stieltjes integrable function M: [t₀, t₀ + σ] → ℝ such that
\[
\left\|\int_{s_1}^{s_2} \ell(s)y_s\,dg(s)\right\| \leq \int_{s_1}^{s_2} M(s)\,\|y_s\|_\infty\,dg(s)
\]
for all s₁, s₂ ∈ [t₀, t₀ + σ] with s₁ ⩽ s₂ and y ∈ G([t₀ − r, t₀ + σ], X).

Similarly as defined in Section 4.3, for all y ∈ G([t₀ − r, t₀ + σ], X) and t ∈ [t₀, t₀ + σ], define F(y, t): [t₀ − r, t₀ + σ] → X by
\[
F(y,t)(\vartheta) = \begin{cases}
0, & t_0 - r \leq \vartheta \leq t_0,\\
\displaystyle\int_{t_0}^{\vartheta} \ell(s)y_s\,dg(s), & t_0 \leq \vartheta \leq t \leq t_0+\sigma,\\
\displaystyle\int_{t_0}^{t} \ell(s)y_s\,dg(s), & t_0 \leq t \leq \vartheta \leq t_0+\sigma.
\end{cases}\tag{6.25}
\]
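To see how the Stieltjes integrals above weight discontinuities of the integrator g, consider a toy scalar computation (illustrative names, not from the text): when a nondecreasing g jumps by J at a point c, a continuous integrand f picks up the extra contribution J·f(c) on top of the absolutely continuous part.

```python
import math

J = 2.0     # jump size of the nondecreasing integrator g (assumed value)
c = 0.3     # jump location (assumed value)


def g(s):
    return s + (J if s >= c else 0.0)


def f(s):
    return math.cos(s)


def stieltjes_sum(f, g, lo, hi, n):
    # left-tagged Riemann-Stieltjes sum over a uniform partition
    h = (hi - lo) / n
    total = 0.0
    for i in range(n):
        s0, s1 = lo + i * h, lo + (i + 1) * h
        total += f(s0) * (g(s1) - g(s0))
    return total


approx = stieltjes_sum(f, g, 0.0, 1.0, 20000)
# continuous part \int_0^1 cos = sin(1), plus the jump term J*f(c)
exact = math.sin(1.0) + J * f(c)
```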

Proposition 6.16: Suppose ℓ: [t₀, t₀ + σ] → 𝔏(G([−r, 0], X), X) satisfies conditions (L1) and (L2) and let F(y, t): [t₀ − r, t₀ + σ] → X, for all y ∈ G([t₀ − r, t₀ + σ], X) and all t ∈ [t₀, t₀ + σ], be given by (6.25). Then, for each t ∈ [t₀, t₀ + σ], the function A(t): G([t₀ − r, t₀ + σ], X) → G([t₀ − r, t₀ + σ], X) given by [A(t)y](𝜗) = F(y, t)(𝜗) is a bounded linear operator on G([t₀ − r, t₀ + σ], X). Moreover, A: [t₀, t₀ + σ] → L(G([t₀ − r, t₀ + σ], X)) is of bounded variation.

Proof. For each t ∈ [t₀, t₀ + σ], the linearity of both the integral and the operator ℓ(t) involved in (6.25) implies the linearity of A(t). For t ∈ [t₀, t₀ + σ] and y ∈ G([t₀ − r, t₀ + σ], X), we have
\[
\|A(t)y\|_\infty = \sup_{\vartheta\in[t_0-r,t_0+\sigma]}\|[A(t)y](\vartheta)\| \leq \left\|\int_{t_0}^t \ell(s)y_s\,dg(s)\right\| \leq \left(\int_{t_0}^t M(s)\,dg(s)\right)\|y\|_\infty,
\]
that is, A(t) is a bounded linear operator on G([t₀ − r, t₀ + σ], X). Besides, for all y ∈ G([t₀ − r, t₀ + σ], X) and t₀ ⩽ s₁ ⩽ s₂ ⩽ t₀ + σ,
\[
\begin{aligned}
\|[A(s_2)-A(s_1)]y\|_\infty &= \sup_{\vartheta\in[t_0-r,t_0+\sigma]}\|[A(s_2)y](\vartheta)-[A(s_1)y](\vartheta)\|\\
&= \sup_{\vartheta\in[s_1,s_2]}\|[A(s_2)y](\vartheta)-[A(s_1)y](\vartheta)\|\\
&= \sup_{\vartheta\in[s_1,s_2]}\left\|\int_{s_1}^{\vartheta} \ell(s)y_s\,dg(s)\right\| \leq \left(\int_{s_1}^{s_2} M(s)\,dg(s)\right)\|y\|_\infty,
\end{aligned}
\]
which implies
\[
\|A(s_2)-A(s_1)\| \leq \int_{s_1}^{s_2} M(s)\,dg(s). \tag{6.26}
\]
Thus, \(\operatorname{var}_{t_0}^{t_0+\sigma}(A) \leq \int_{t_0}^{t_0+\sigma} M(s)\,dg(s)\), which completes the proof. ◽

Consider, again, the linear generalized ODE
\[
\begin{cases}
\dfrac{dx}{d\tau} = D[A(t)x],\\[4pt]
x(t_0) = \tilde x,
\end{cases}\tag{6.27}
\]
where A: [t₀, t₀ + σ] → L(G([t₀ − r, t₀ + σ], X)) is given by
\[
[A(t)y](\vartheta) = \begin{cases}
0, & t_0 - r \leq \vartheta \leq t_0,\\
\displaystyle\int_{t_0}^{\vartheta} \ell(s)y_s\,dg(s), & t_0 \leq \vartheta \leq t \leq t_0+\sigma,\\
\displaystyle\int_{t_0}^{t} \ell(s)y_s\,dg(s), & t_0 \leq t \leq \vartheta \leq t_0+\sigma,
\end{cases}\tag{6.28}
\]
ℓ: [t₀, t₀ + σ] → 𝔏(G([−r, 0], X), X) satisfies conditions (L1) and (L2), and x̃ ∈ G([t₀ − r, t₀ + σ], X) is given by
\[
\tilde x(\vartheta) = \begin{cases}
\phi(\vartheta-t_0), & t_0 - r \leq \vartheta \leq t_0,\\
\phi(0), & t_0 \leq \vartheta \leq t_0+\sigma.
\end{cases}
\]


Note that a solution of the IVP (6.27) is a function x: [t₀, t₀ + σ] → G([t₀ − r, t₀ + σ], X) which satisfies the integral equation
\[
x(t) = \tilde x + \int_{t_0}^t d[A(s)]\,x(s), \quad t \in [t_0,t_0+\sigma].
\]
We refer to the IVP (6.27) as the linear generalized ODE associated to the linear measure FDE (6.24). The next auxiliary lemma follows as in Lemma 4.17.

Lemma 6.17: If x: [t₀, t₀ + σ] → G([t₀ − r, t₀ + σ], X) is a solution of the linear generalized ODE (6.27) then, for v ∈ [t₀, t₀ + σ], we have
\[
x(v)(\vartheta) = \begin{cases}
x(v)(v), & v \leq \vartheta,\\
x(\vartheta)(\vartheta), & \vartheta \leq v.
\end{cases}
\]

The following result establishes a correspondence between a solution of the linear measure FDE (6.24) and a solution of the linear generalized ODE (6.27). The proof follows the same steps as those of Theorems 4.18 and 4.19. A similar result for the case of infinite delays can be found in [179, Theorems 4.4 and 4.5] for X = ℝⁿ.

Theorem 6.18: Suppose ℓ: [t₀, t₀ + σ] → 𝔏(G([−r, 0], X), X) satisfies conditions (L1) and (L2).

(i) If y: [t₀ − r, t₀ + σ] → X is a solution of the linear measure FDE (6.24), then the function x: [t₀, t₀ + σ] → G([t₀ − r, t₀ + σ], X), given by
\[
x(t)(\vartheta) = \begin{cases}
y(\vartheta), & \vartheta \in [t_0-r,t],\\
y(t), & \vartheta \in [t,t_0+\sigma],
\end{cases}
\]
is a solution of the linear generalized ODE (6.27).
(ii) If x: [t₀, t₀ + σ] → G([t₀ − r, t₀ + σ], X) is a solution of the linear generalized ODE (6.27), then the function y: [t₀ − r, t₀ + σ] → X, given by
\[
y(\vartheta) = \begin{cases}
x(t_0)(\vartheta), & t_0 - r \leq \vartheta \leq t_0,\\
x(\vartheta)(\vartheta), & t_0 \leq \vartheta \leq t_0+\sigma,
\end{cases}
\]
is a solution of the linear measure FDE (6.24).

We finish this section by presenting a result on the existence and uniqueness of a solution of the linear generalized ODE (6.27). In order to obtain it, we require a condition stronger than (L2), since (L2) does not imply condition (Δ). We say that ℓ: [t₀, t₀ + σ] → 𝔏(G([−r, 0], X), X) satisfies condition


(L2*) if there exists a Perron integrable function M: [t₀, t₀ + σ] → ℝ such that
\[
\left\|\int_{s_1}^{s_2} \ell(s)y_s\,dg(s)\right\| \leq \int_{s_1}^{s_2} M(s)\,\|y_s\|_\infty\,dg(s)
\]
for all s₁, s₂ ∈ [t₀, t₀ + σ], with s₁ ⩽ s₂, and all y ∈ G([t₀ − r, t₀ + σ], X) and, for all t ∈ [t₀, t₀ + σ], there exists ξ = ξ(t) > 0 such that
\[
\int_{t_0}^{t_0+\xi} M(s)\,dg(s) < 1, \qquad \int_{t_0+\sigma-\xi}^{t_0+\sigma} M(s)\,dg(s) < 1
\]
and
\[
\int_{t-\xi}^{t+\xi} M(s)\,dg(s) < 1, \quad \text{for } t \in (t_0,t_0+\sigma).
\]
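For concrete data, the window inequalities in (L2*) are easy to verify numerically. The following is a small illustrative checker (all names and the sampling strategy are our assumptions; it uses left-tagged Stieltjes sums on a uniform grid, adequate for continuous M and g).

```python
def window_integral(M, g, lo, hi, n=1000):
    # left-tagged Stieltjes sum approximating \int_lo^hi M(s) dg(s)
    h = (hi - lo) / n
    return sum(M(lo + i * h) * (g(lo + (i + 1) * h) - g(lo + i * h))
               for i in range(n))


def check_L2_star(M, g, t0, sigma, xi):
    # checks the three kinds of smallness windows required by (L2*),
    # sampling interior points t on a coarse grid (an approximation only)
    left = window_integral(M, g, t0, t0 + xi)
    right = window_integral(M, g, t0 + sigma - xi, t0 + sigma)
    interior = all(
        window_integral(M, g, t - xi, t + xi) < 1.0
        for t in [t0 + sigma * j / 20.0 for j in range(1, 20)]
    )
    return left < 1.0 and right < 1.0 and interior


# Example: constant M = 0.4, g(s) = s, sigma = 1, window radius xi = 0.5
ok = check_L2_star(lambda s: 0.4, lambda s: s, 0.0, 1.0, 0.5)
```

Sampling t on a grid only approximates the "for all t" quantifier, of course; the sketch is meant as a plausibility check, not a proof.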

Theorem 6.19: Assume that ℓ: [t₀, t₀ + σ] → 𝔏(G([−r, 0], X), X) satisfies conditions (L2) and (L2*). Then the linear generalized ODE (6.27) has a unique solution on [t₀, t₀ + σ].

Proof. Consider t ∈ [t₀, t₀ + σ] and ξ > 0 according to (L2*). Using (6.26), we get
\[
\|A(t+\xi)-A(t-\xi)\| \leq \int_{t-\xi}^{t+\xi} M(s)\,dg(s) < 1.
\]
Thus, ‖Δ⁺A(t)‖, ‖Δ⁻A(t)‖ < 1 for all t ∈ [t₀, t₀ + σ], which implies that (Δ) is satisfied, and the result follows straightforwardly from Proposition 6.16 and Theorem 6.3. ◽

6.4 A Nonlinear Variation-of-Constants Formula for Measure FDEs

Let r, σ > 0, t₀ ∈ ℝ and X be a Banach space. Consider the linear measure FDE
\[
y(t) = y(t_0) + \int_{t_0}^t \ell(s)y_s\,dg(s), \tag{6.29}
\]
the associated IVP
\[
\begin{cases}
y(t) = \phi(0) + \displaystyle\int_{t_0}^t \ell(s)y_s\,dg(s),\\
y_{t_0} = \phi
\end{cases}\tag{6.30}
\]
and the linear perturbed measure FDE
\[
\begin{cases}
y(t) = \phi(0) + \displaystyle\int_{t_0}^t \ell(s)y_s\,dg(s) + \int_{t_0}^t f(y_s,s)\,dg(s), \quad t \in [t_0,t_0+\sigma],\\
y_{t_0} = \phi,
\end{cases}\tag{6.31}
\]


where φ ∈ G([−r, 0], X), ℓ: [t₀, t₀ + σ] → 𝔏(G([−r, 0], X), X), f: G([−r, 0], X) × [t₀, t₀ + σ] → X, g: [t₀, t₀ + σ] → ℝ is nondecreasing, and we consider Perron–Stieltjes integration. Moreover, define, for y ∈ G([t₀ − r, t₀ + σ], X) and t ∈ [t₀, t₀ + σ],
\[
F(y,t)(\vartheta) = \begin{cases}
0, & t_0 - r \leq \vartheta \leq t_0,\\
\displaystyle\int_{t_0}^{\vartheta} f(y_s,s)\,dg(s), & t_0 \leq \vartheta \leq t,\\
\displaystyle\int_{t_0}^{t} f(y_s,s)\,dg(s), & t \leq \vartheta \leq t_0+\sigma.
\end{cases}
\]
Throughout this section, we assume that conditions (L2) and (L2*) are satisfied for ℓ: [t₀, t₀ + σ] → 𝔏(G([−r, 0], X), X) and that conditions (A)–(C) from Section 4.3 are satisfied for f: G([−r, 0], X) × [t₀, t₀ + σ] → X.

By Theorem 6.18, there is a correspondence between the linear measure FDE (6.30) and the linear generalized ODE (6.27). On the other hand, by Theorems 4.18 and 4.19, there is a correspondence between Eq. (6.31) and the following linear perturbed generalized ODE
\[
\begin{cases}
\dfrac{dx}{d\tau} = D[A(t)x + F(x,t)],\\[4pt]
x(t_0) = \tilde x,
\end{cases}\tag{6.32}
\]
where A: [t₀, t₀ + σ] → L(G([t₀ − r, t₀ + σ], X)) is given by (6.28) and x̃ is given by
\[
\tilde x(\vartheta) = \begin{cases}
\phi(\vartheta-t_0), & t_0 - r \leq \vartheta \leq t_0,\\
\phi(0), & t_0 \leq \vartheta \leq t_0+\sigma.
\end{cases}\tag{6.33}
\]
Next, we define the fundamental operator for the linear measure FDE (6.29). In order to do that, we will also assume, throughout this section, the following condition:

(E) For all t₀ ∈ ℝ, φ ∈ G([−r, 0], X) and ℓ: [t₀, t₀ + σ] → 𝔏(G([−r, 0], X), X), the IVP
\[
\begin{cases}
y(t) = \phi(0) + \displaystyle\int_{t_0}^t \ell(s)y_s\,dg(s),\\
y_{t_0} = \phi,
\end{cases}
\]
has a unique solution on [t₀ − r, t₀ + σ].

Definition 6.20: For each s ∈ ℝ and each t ∈ [s, s + σ], let T(t, s): G([−r, 0], X) → G([−r, 0], X) be defined by T(t, s)φ = yₜ, where y: [s − r, s + σ] → X is the solution of the linear measure FDE
\[
\begin{cases}
y(t) = \phi(0) + \displaystyle\int_{s}^t \ell(\xi)y_\xi\,dg(\xi),\\
y_{s} = \phi.
\end{cases}
\]


The family of operators {T(t, s): t ⩾ s} is called the fundamental operator of the linear measure FDE (6.29). The next proposition gives some basic properties of the fundamental operator introduced in Definition 6.20. The result follows straightforwardly from the definition of the fundamental operator and condition (E).

Proposition 6.21: The fundamental operator {T(t, s): t ⩾ s} of the linear measure FDE (6.29) satisfies the following properties:

(i) T(t, t) = I, for all t ∈ ℝ;
(ii) T(t, s) = T(t, r)T(r, s), for all t ⩾ r ⩾ s;
(iii) if y: [t₀ − r, t₀ + σ] → X is the solution of (6.30), then T(t, s)yₛ = yₜ and y(t) = (T(t, s)φ)(0), for all t, s ∈ [t₀, t₀ + σ] with t ⩾ s.

In the sequel, we present some auxiliary results that will be useful to obtain a variation-of-constants formula for a linear perturbed measure FDE.

Lemma 6.22: Let y: [t₀ − r, t₀ + σ] → X and x: [t₀, t₀ + σ] → G([t₀ − r, t₀ + σ], X) be corresponding solutions of (6.31) and (6.32), respectively. Then
\[
F(y,t)(\vartheta) = \int_{t_0}^t DF(x(\tau),s)(\vartheta), \quad \text{for all } t_0 \leq t \leq t_0+\sigma.
\]
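Properties (i) and (ii) are the identity and cocycle laws familiar from fundamental solutions of classical linear ODEs. In the delay-free scalar case (a degenerate illustration we assume here, not the operator of Definition 6.20), y′ = a(t)y gives T(t, s) = exp(∫ₛᵗ a(u) du), and both laws can be checked numerically:

```python
import math


def a(u):
    # assumed smooth coefficient for the toy scalar equation y' = a(t) y
    return math.cos(u)


def T(t, s, n=2000):
    # fundamental solution T(t, s) = exp(\int_s^t a(u) du), midpoint rule
    h = (t - s) / n
    integral = sum(a(s + (i + 0.5) * h) * h for i in range(n))
    return math.exp(integral)


identity_gap = abs(T(1.0, 1.0) - 1.0)                       # property (i)
cocycle_gap = abs(T(2.0, 0.5) - T(2.0, 1.2) * T(1.2, 0.5))  # property (ii)
```

Both gaps vanish up to quadrature error, mirroring (i) and (ii); in the delay case the same laws hold at the level of operators acting on memory segments.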

A proof of Lemma 6.22 can be carried out as in [45, Lemma 6.1]. Note that, for all y ∈ G([t₀ − r, t₀ + σ], X) and t ∈ [t₀, t₀ + σ], F(y, t) is a function from [t₀ − r, t₀ + σ] to X and, for each s ∈ [t₀, t₀ + σ], we denote by F(y, t)ₛ: [−r, 0] → X the function F(y, t)ₛ(θ) = F(y, t)(s + θ). The next two lemmas can be found in [45, Lemmas 6.4 and 6.5], respectively, for the case X = ℝⁿ.

Lemma 6.23: Let y: [t₀ − r, t₀ + σ] → X and x: [t₀, t₀ + σ] → G([t₀ − r, t₀ + σ], X) be corresponding solutions of (6.31) and (6.32), respectively. Then, for t₀ ⩽ s ⩽ t ⩽ t₀ + σ and t₀ ⩽ w ⩽ t₀ + σ, we have
\[
U(t,s)F(y,w)(t) = \tilde T(t,s)F(y,w)(0),
\]
where T̃(t, s)F(y, w) = T(t, s)F(y, w)ₛ for all t, s, w ∈ [t₀, t₀ + σ] with t ⩾ s, {T(t, s): t ⩾ s} is the fundamental operator of the linear measure FDE (6.23), and U(t, s) is the fundamental operator of the corresponding linear generalized ODE (6.27).

Proof. Note that, by Proposition 6.21, T(t, s)(F(y, w)ₛ)(0) describes the solution of the linear measure FDE
\[
\begin{cases}
y(t) = \phi(0) + \displaystyle\int_{s}^t \ell(\xi)y_\xi\,dg(\xi),\\
y_{s} = F(y,w)_s.
\end{cases}
\]


In addition, using Proposition 6.7, we have U(t, s)(F(y, w)), which describes the solution of the corresponding linear generalized ODE
\[
\begin{cases}
\dfrac{dx}{d\tau} = D[A(t)x],\\[4pt]
x(s) = F(y,w).
\end{cases}
\]
By Theorem 6.18, U(t, s)F(y, w)(t) = T(t, s)(F(y, w)ₛ)(0) for all t₀ ⩽ s ⩽ t ⩽ t₀ + σ and t₀ ⩽ w ⩽ t₀ + σ, and the proof is finished. ◽

Lemma 6.24: Let y: [t₀ − r, t₀ + σ] → X and x: [t₀, t₀ + σ] → G([t₀ − r, t₀ + σ], X) be corresponding solutions of (6.31) and (6.32), respectively. Then, for t₀ ⩽ t ⩽ t₀ + σ,
\[
\int_{t_0}^t d_s[U(t,s)]F(y,s)(t) = \int_{t_0}^t d_s[\tilde T(t,s)]F(y,s)(0),
\]
where T̃(t, s)F(y, w) = T(t, s)F(y, w)ₛ for all t, s, w ∈ [t₀, t₀ + σ] with t ⩾ s, {T(t, s): t ⩾ s} is the fundamental operator of the linear measure FDE (6.23), and U(t, s) is the fundamental operator of the corresponding linear generalized ODE (6.27).

Proof. Let ε > 0 be given. Then, by the definition of the Kurzweil integral, there exists a gauge δ on [t₀, t] such that, for every δ-fine tagged division d = (τᵢ, [sᵢ₋₁, sᵢ]) of [t₀, t], we have
\[
\left\|\sum_{i=1}^{|d|} [U(t,s_i)-U(t,s_{i-1})]F(y,\tau_i) - \int_{t_0}^t d_s[U(t,s)]F(y,s)\right\|_\infty < \epsilon.
\]
By Lemma 6.23,
\[
\begin{aligned}
\sum_{i=1}^{|d|} [U(t,s_i)-U(t,s_{i-1})]F(y,\tau_i)(t)
&= \sum_{i=1}^{|d|} \bigl[T(t,s_i)(F(y,\tau_i)_{s_i}) - T(t,s_{i-1})(F(y,\tau_i)_{s_{i-1}})\bigr](0)\\
&= \sum_{i=1}^{|d|} \bigl[\tilde T(t,s_i) - \tilde T(t,s_{i-1})\bigr]F(y,\tau_i)(0),
\end{aligned}
\]
which implies
\[
\int_{t_0}^t d_s[U(t,s)]F(y,s)(t) = \int_{t_0}^t d_s\bigl[\tilde T(t,s)\bigr]F(y,s)(0)
\]
and completes the proof. ◽


The next result, which gives us a variation-of-constants formula for the linear perturbed measure FDE (6.31), follows from Theorem 6.14 and the correspondence of equations described in Theorem 6.18.

Theorem 6.25: Let y: [t₀ − r, t₀ + σ] → X be a solution of the linear perturbed measure FDE (6.31). Then, for t₀ ⩽ t ⩽ t₀ + σ, we have
\[
y(t) = T(t,t_0)\phi(0) + \int_{t_0}^t f(y_s,s)\,dg(s) - \int_{t_0}^t d_s[\tilde T(t,s)]F(y,s)(0),
\]
where T̃(t, s)F(y, w) = T(t, s)F(y, w)ₛ for all t, s, w ∈ [t₀, t₀ + σ] with t ⩾ s, and {T(t, s): t ⩾ s} is the fundamental operator of the linear measure FDE (6.29).

Proof. Given t ∈ [t₀, t₀ + σ], Theorem 6.14 implies
\[
x(t)(t) = U(t,t_0)\tilde x(t) + \int_{t_0}^t DF(x(\tau),s)(t) - \int_{t_0}^t d_s[U(t,s)]\left(\int_{t_0}^s DF(x(\tau),u)\right)(t),
\]

where U(t, s) is the fundamental operator of the linear generalized ODE (6.27), which corresponds to the linear measure FDE (6.29). By Proposition 6.7, Theorem 6.18 and Proposition 6.21, we get
\[
U(t,t_0)\tilde x(t) = T(t,t_0)\phi(0).
\]
Also, Lemmas 6.22 and 6.24 imply
\[
\int_{t_0}^t DF(x(\tau),s)(t) = \int_{t_0}^t f(y_s,s)\,dg(s)
\]
and
\[
\int_{t_0}^t d_s[U(t,s)]\left(\int_{t_0}^s DF(x(\tau),u)\right)(t) = \int_{t_0}^t d_s[\tilde T(t,s)]F(y,s)(0).
\]
Finally, by Theorem 4.19, y(t) = x(t)(t) for all t ∈ [t₀, t₀ + σ] and thus
\[
y(t) = T(t,t_0)\phi(0) + \int_{t_0}^t f(y_s,s)\,dg(s) - \int_{t_0}^t d_s[\tilde T(t,s)]F(y,s)(0)
\]
for all t ∈ [t₀, t₀ + σ], which finishes the proof. ◽


7 Continuous Dependence on Parameters

Suzete M. Afonso¹, Everaldo M. Bonotto², Márcia Federson³, and Jaqueline G. Mesquita⁴

¹ Departamento de Matemática, Instituto de Geociências e Ciências Exatas, Universidade Estadual Paulista “Júlio de Mesquita Filho” (UNESP), Rio Claro, SP, Brazil
² Departamento de Matemática Aplicada e Estatística, Instituto de Ciências Matemáticas e de Computação (ICMC), Universidade de São Paulo, São Carlos, SP, Brazil
³ Departamento de Matemática, Instituto de Ciências Matemáticas e de Computação (ICMC), Universidade de São Paulo, São Carlos, SP, Brazil
⁴ Departamento de Matemática, Instituto de Ciências Exatas, Universidade de Brasília, Brasília, DF, Brazil

In this chapter, our goal is to investigate results on continuous dependence on parameters for generalized ordinary differential equations (ODEs) taking values in a Banach space. Combining these results with the relations between generalized ODEs and measure functional differential equations (FDEs) described in Chapter 4, we are able to translate the obtained results to measure FDEs.

Most of the results presented in this chapter are generalizations of those found in [209], and they were presented in [4, 85, 86, 95, 177]. However, we also include a new result, namely, Theorem 7.4, on the convergence of solutions of a nonautonomous generalized ODE. This result ensures that, under certain conditions, if a sequence of solutions xₖ: [a, b] → X of the nonautonomous generalized ODE
\[
\frac{dx}{d\tau} = DF_k(x,t), \quad k \in \mathbb{N},
\]
converges to a function x₀, where [a, b] is a compact interval of the real line and X is an arbitrary Banach space, then this convergence is uniform and x₀: [a, b] → X is a solution of the “limiting equation.” In order to obtain such a result, we assume some regularity on the sequence of right-hand sides Fₖ, k ∈ ℕ.

Another interesting result we would like to highlight in this chapter is Theorem 7.7, which ensures that, under certain conditions, if x₀: [a, b] → X is a solution on [a, b] of the nonautonomous generalized ODE
\[
\frac{dx}{d\tau} = DF_0(x,t),
\]
then, for all sufficiently large k ∈ ℕ, the generalized ODE
\[
\frac{dx}{d\tau} = DF_k(x,t)
\]
possesses a solution xₖ: [a, b] → X, and {xₖ} converges uniformly to x₀ on [a, b] as k → ∞.

The previous results are stated not only for generalized ODEs, but also for measure FDEs. As we mentioned, this is done by means of the correspondence, presented in Chapter 4, between the solutions of these two types of equations. It is important to note that, whenever we translate a result for generalized ODEs to some other type of equation, we get a very general result involving nonabsolutely integrable functions. This is due to the fact that, in the process of transferring the results from a nonautonomous generalized ODE to some other nonautonomous equation, the main properties of the Kurzweil integration theory are preserved and inherited by the other equation. For example, since the right-hand sides of generalized ODEs are Kurzweil integrable functions, as presented in Chapter 2, the translation of results allows the right-hand sides of, say, measure FDEs to be Perron–Stieltjes integrable functions, meaning that they may be of unbounded variation and, hence, highly oscillating (see Remark 1.39).

Generalized Ordinary Differential Equations in Abstract Spaces and Applications, First Edition. Edited by Everaldo M. Bonotto, Márcia Federson, and Jaqueline G. Mesquita. © 2021 John Wiley & Sons, Inc. Published 2021 by John Wiley & Sons, Inc.

7.1 Basic Theory for Generalized ODEs

In this section, our goal is to investigate results on continuous dependence on parameters for generalized ODEs in a Banach space X, equipped with norm ‖·‖. Throughout this section, we assume Ω = O × [a, b], where O ⊂ X is an open set. We start by presenting some auxiliary results.

Lemma 7.1: Consider a sequence of functions Fₖ: Ω → X such that Fₖ ∈ ℱ(Ω, hₖ), where {hₖ}ₖ∈ℕ is an equiregulated sequence of nondecreasing and left-continuous functions hₖ: [a, b] → ℝ such that hₖ(b) − hₖ(a) ⩽ c, for some c > 0, for all k ∈ ℕ₀. If {Fₖ}ₖ∈ℕ converges pointwise to F₀ on Ω, then
\[
\lim_{s\to t^+} F_k(x,s) = F_k(x,t^+)
\]
uniformly, for all x ∈ O and k ∈ ℕ₀, and
\[
\lim_{k\to\infty} F_k(x,t^+) = F_0(x,t^+),
\]
for all (x, t) ∈ Ω.


Proof. Fix t ∈ [a, b). Since Fₖ ∈ ℱ(Ω, hₖ), we have, for all (x, s), (x, t) ∈ Ω and k ∈ ℕ₀,
\[
\|F_k(x,s)-F_k(x,t)\| \leq |h_k(s)-h_k(t)|.
\]
The fact that the sequence {hₖ}ₖ∈ℕ is equiregulated implies that, for all ε > 0, there exists δ > 0 such that if t < s < t + δ < b, then
\[
\|F_k(x,s)-F_k(x,t)\| \leq |h_k(s)-h_k(t)| < \epsilon,
\]
for all x ∈ O and k ∈ ℕ₀. This means that the convergence lim_{s→t⁺} Fₖ(x, s) = Fₖ(x, t⁺) is uniform for every x ∈ O and k ∈ ℕ₀. Thus, by the Moore–Osgood theorem (see [19], for instance), we get
\[
\lim_{k\to\infty} F_k(x,t^+) = \lim_{k\to\infty}\lim_{s\to t^+} F_k(x,s) = \lim_{s\to t^+}\lim_{k\to\infty} F_k(x,s) = \lim_{s\to t^+} F_0(x,s) = F_0(x,t^+),
\]
obtaining the desired result. ◽

The next result is a modified version of [2, Lemma A1] and can be found in [177].

Theorem 7.2: Consider a sequence of functions Fₖ: Ω → X such that Fₖ ∈ ℱ(Ω, hₖ) for every k ∈ ℕ₀, {Fₖ}ₖ∈ℕ converges pointwise to F₀ on Ω, and
\[
\lim_{k\to\infty} F_k(x,t^+) = F_0(x,t^+), \quad t \in [a,b),
\]
holds for all x ∈ O. Moreover, assume that hₖ: [a, b] → ℝ is a nondecreasing and left-continuous function such that hₖ(b) − hₖ(a) ⩽ c, for some c > 0 and every k ∈ ℕ₀. Let ψₖ ∈ G([a, b], X), k ∈ ℕ, be such that limₖ→∞ ‖ψₖ − ψ₀‖∞ = 0. Assume that (ψₖ(s), s) ∈ C × [a, b] for all k ∈ ℕ, where C is a closed subset of X such that C ⊂ O. Then, for every s₁, s₂ ∈ [a, b], we have
\[
\lim_{k\to\infty}\left\|\int_{s_1}^{s_2} DF_k(\psi_k(\tau),s) - \int_{s_1}^{s_2} DF_0(\psi_0(\tau),s)\right\| = 0. \tag{7.1}
\]

Proof. We will prove that estimate (7.1) holds for s₁ = a and s₂ = b. The case s₁, s₂ ∈ (a, b) follows similarly and, thus, we omit its proof here. From the fact that ψ₀ is the uniform limit of regulated functions on [a, b], it follows that ψ₀ ∈ G([a, b], X), since G([a, b], X) is complete. In addition, the existence of the Kurzweil integral ∫ₐᵇ DFₖ(ψₖ(τ), s), for every k ∈ ℕ₀, is ensured by Corollary 4.8, because (ψₖ(s), s) ∈ C × [a, b] ⊂ O × [a, b] for all k ∈ ℕ. Note, also, that (ψ₀(s), s) ∈ C × [a, b].


Let ε > 0 be given. Since C is complete, ψ₀ ∈ G([a, b], C). Theorem 1.4 yields a step function y: [a, b] → C such that
\[
\|y-\psi_0\|_\infty = \sup_{a\leq t\leq b}\|y(t)-\psi_0(t)\| < \frac{\epsilon}{3c}.
\]
Moreover, since limₖ→∞ ‖ψₖ − ψ₀‖∞ = 0, there exists N₀ ∈ ℕ such that, for k > N₀,
\[
\|\psi_k-\psi_0\|_\infty < \frac{\epsilon}{3c},
\]
whence
\[
\begin{aligned}
&\left\|\int_a^b DF_k(\psi_k(\tau),s) - \int_a^b DF_0(\psi_0(\tau),s)\right\|\\
&\quad\leq \left\|\int_a^b D[F_k(\psi_k(\tau),s)-F_k(\psi_0(\tau),s)]\right\| + \left\|\int_a^b D[F_k(\psi_0(\tau),s)-F_k(y(\tau),s)]\right\|\\
&\qquad + \left\|\int_a^b D[F_k(y(\tau),s)-F_0(y(\tau),s)]\right\| + \left\|\int_a^b D[F_0(y(\tau),s)-F_0(\psi_0(\tau),s)]\right\|.
\end{aligned}\tag{7.2}
\]
Now, let us consider the first summand on the right-hand side of the inequality in (7.2). By Lemma 4.6, we have
\[
\left\|\int_a^b D[F_k(\psi_k(\tau),s)-F_k(\psi_0(\tau),s)]\right\| \leq \int_a^b \|\psi_k(\tau)-\psi_0(\tau)\|\,dh_k(\tau) \leq \|\psi_k-\psi_0\|_\infty\,(h_k(b)-h_k(a)) < \frac{\epsilon}{3}.
\]
Similarly, for the second and fourth summands on the right-hand side of (7.2), we have the estimates
\[
\left\|\int_a^b D[F_k(\psi_0(\tau),s)-F_k(y(\tau),s)]\right\| < \frac{\epsilon}{3c}\,(h_k(b)-h_k(a)) \leq \frac{\epsilon}{3}, \qquad
\left\|\int_a^b D[F_0(y(\tau),s)-F_0(\psi_0(\tau),s)]\right\| < \frac{\epsilon}{3c}\,(h_0(b)-h_0(a)) \leq \frac{\epsilon}{3}.
\]
In consequence, we obtain
\[
\left\|\int_a^b DF_k(\psi_k(\tau),s) - \int_a^b DF_0(\psi_0(\tau),s)\right\| < \epsilon + \left\|\int_a^b D[F_k(y(\tau),s)-F_0(y(\tau),s)]\right\|.
\]
Now, let us consider the Kurzweil integral ∫ₐᵇ D[Fₖ(y(τ), s) − F₀(y(τ), s)]. From the fact that y: [a, b] → C is a step function, there exists a finite number of points a = r₀ < r₁ < r₂ < ⋯ < r_{p−1} < r_p = b such that, for τ ∈ (r_{j−1}, r_j), j ∈ {1, 2, …, p}, the equality y(τ) = c_j ∈ C holds. Thus,
\[
\int_a^b DF_k(y(\tau),s) = \sum_{j=1}^p \int_{r_{j-1}}^{r_j} DF_k(c_j,s).
\]


By Theorem 2.12, we get
\[
\int_{r_{j-1}}^{r_j} DF_k(y(\tau),t) = F_k(c_j,r_j^-) - F_k(c_j,r_{j-1}^+) + F_k(y(r_{j-1}),r_{j-1}^+) - F_k(y(r_{j-1}),r_{j-1}) - F_k(y(r_j),r_j^-) + F_k(y(r_j),r_j),
\]
which implies, by the hypotheses, that
\[
\lim_{k\to\infty}\int_{r_{j-1}}^{r_j} D[F_k(y(\tau),t) - F_0(y(\tau),t)] = 0.
\]

Combining the previous estimates, the statement follows from the arbitrariness of ε > 0. ◽

The following result is an immediate consequence of Theorem 7.2. It can be found in [4].

Corollary 7.3: Assume that, for each k ∈ ℕ₀, Fₖ: Ω → X belongs to the class ℱ(Ω, h), where h: [a, b] → ℝ is a nondecreasing and left-continuous function. Assume, further, that {Fₖ}ₖ∈ℕ converges uniformly to F₀ on Ω. Let C be a closed subset of X such that C ⊂ O. Then, for every s₁, s₂ ∈ [a, b], we have
\[
\lim_{k\to\infty}\left\|\int_{s_1}^{s_2} DF_k(\psi_k(\tau),s) - \int_{s_1}^{s_2} DF_0(\psi_0(\tau),s)\right\| = 0, \tag{7.3}
\]
provided one of the following conditions holds:

(i) ψₖ ∈ G([a, b], X) for all k ∈ ℕ, (ψₖ(s), s) ∈ C × [a, b] for all k ∈ ℕ, and limₖ→∞ ‖ψₖ − ψ₀‖∞ = 0;
(ii) ψₖ ∈ BV([a, b], X) for all k ∈ ℕ, (ψₖ(s), s) ∈ C × [a, b] for all k ∈ ℕ, and limₖ→∞ ‖ψₖ − ψ₀‖_BV = 0.

Proof. If item (i) holds, then the result follows from Lemma 7.1 and Theorem 7.2 for the case where hₖ = h for all k ∈ ℕ₀. Now, suppose item (ii) holds. We first recall that BV([a, b], X) ⊂ G([a, b], X). Thus, for every t ∈ [a, b], we have
\[
\begin{aligned}
\|\psi_k(t)-\psi_0(t)\| &\leq \|\psi_k(a)-\psi_0(a)\| + \|\psi_k(t)-\psi_0(t)-(\psi_k(a)-\psi_0(a))\|\\
&\leq \|\psi_k(a)-\psi_0(a)\| + \operatorname{var}_a^t(\psi_k-\psi_0)\\
&\leq \|\psi_k(a)-\psi_0(a)\| + \operatorname{var}_a^b(\psi_k-\psi_0) = \|\psi_k-\psi_0\|_{BV},
\end{aligned}
\]
which yields limₖ→∞ ‖ψₖ − ψ₀‖∞ = 0. Hence, condition (i) is valid and the result follows. ◽
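The key inequality behind item (ii) is the elementary bound ‖ψ‖∞ ⩽ ‖ψ(a)‖ + var_a^b(ψ). A discrete sketch (with made-up scalar sample values, purely illustrative) shows the bound in action:

```python
def total_variation(samples):
    # variation of a sampled scalar function over its sample points
    return sum(abs(samples[i + 1] - samples[i]) for i in range(len(samples) - 1))


# illustrative samples of psi = psi_k - psi_0 at increasing times
samples = [0.2, 1.3, -0.7, 0.9, 0.4, -1.1]
sup_norm = max(abs(v) for v in samples)
bound = abs(samples[0]) + total_variation(samples)
```

Each sampled value satisfies |ψ(tᵢ)| ⩽ |ψ(t₀)| + Σⱼ|ψ(tⱼ₊₁) − ψ(tⱼ)|, so `sup_norm <= bound` always holds; this is exactly why BV-convergence forces uniform convergence in the proof above.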


Now, we present a result on continuous dependence of solutions on parameters for generalized ODEs. This result, borrowed from [177], is a generalization of [95, Theorem 2.4]. Although the proof follows the ideas presented in [95], it is somewhat different because it uses Theorem 7.2.

Theorem 7.4: Let {hₖ}ₖ∈ℕ be a sequence of nondecreasing and left-continuous functions hₖ: [a, b] → ℝ such that hₖ(b) − hₖ(a) ⩽ c, for some c > 0 and every k ∈ ℕ₀. Assume that, for every k ∈ ℕ, Fₖ: Ω → X belongs to the class ℱ(Ω, hₖ) and {Fₖ}ₖ∈ℕ converges pointwise to F₀ on Ω. Moreover, suppose that, for all x ∈ O, we have
\[
\lim_{k\to\infty} F_k(x,t^+) = F_0(x,t^+), \quad t \in [a,b).
\]
For every k ∈ ℕ, let xₖ: [a, b] → X be a solution of the generalized ODE
\[
\frac{dx}{d\tau} = DF_k(x,t). \tag{7.4}
\]
Assume that there exists a closed subset C of X, C ⊂ O, such that xₖ(t) ∈ C for all t ∈ [a, b] and all k ∈ ℕ. If there exists a function x₀: [a, b] → X such that {xₖ}ₖ∈ℕ converges uniformly to x₀ on [a, b], then x₀ is a solution on [a, b] of the generalized ODE
\[
\frac{dx}{d\tau} = DF_0(x,t). \tag{7.5}
\]

Proof. Since the functions xₖ: [a, b] → X are regulated and {xₖ}ₖ∈ℕ converges uniformly to x₀ on [a, b], x₀: [a, b] → X is a regulated function on [a, b] and (x₀(s), s) ∈ C × [a, b]. Hence, Corollary 4.8 implies that the Kurzweil integral ∫_{s₁}^{s₂} DF₀(x₀(τ), t) exists for every s₁, s₂ ∈ [a, b].

Using the definition of a solution of the generalized ODE (7.4), for each k ∈ ℕ and for every s₁, s₂ ∈ [a, b],
\[
x_k(s_2) - x_k(s_1) = \int_{s_1}^{s_2} DF_k(x_k(\tau),t).
\]
Thus, by Theorem 7.2, we obtain
\[
\lim_{k\to\infty}\left\|\int_{s_1}^{s_2} DF_k(x_k(\tau),t) - \int_{s_1}^{s_2} DF_0(x_0(\tau),t)\right\| = 0,
\]
for all s₁, s₂ ∈ [a, b]. Therefore,
\[
x_0(s_2) - x_0(s_1) = \int_{s_1}^{s_2} DF_0(x_0(\tau),t),
\]
for all s₁, s₂ ∈ [a, b] and, hence, x₀ is a solution of the generalized ODE (7.5) on [a, b]. ◽
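The conclusion of Theorem 7.4 can be illustrated in the classical smooth setting (our assumption, far simpler than the theorem's scope): right-hand sides corresponding to x′ = aₖ·x + cos t with aₖ → 1. Solutions for nearby parameters, computed by explicit Euler, approach the solution of the limiting equation uniformly on [0, 1]:

```python
import math


def euler(a_coef, t_end=1.0, n=4000):
    # explicit Euler for x' = a_coef*x + cos(t), x(0) = 1, on [0, t_end]
    h = t_end / n
    x = 1.0
    path = [x]
    for i in range(n):
        t = i * h
        x = x + h * (a_coef * x + math.cos(t))
        path.append(x)
    return path


limit_path = euler(1.0)  # the "limiting equation" with a = 1
gap_10 = max(abs(u - v) for u, v in zip(euler(1.0 + 1.0 / 10), limit_path))
gap_1000 = max(abs(u - v) for u, v in zip(euler(1.0 + 1.0 / 1000), limit_path))
```

The uniform gap shrinks roughly proportionally to |aₖ − 1|, mirroring the uniform convergence xₖ → x₀ asserted by the theorem.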


The next result is an immediate consequence of Theorem 7.4, which concerns continuous dependence on parameters for solutions of nonautonomous generalized ODEs. This result, borrowed from [177], ensures that, under certain conditions, if a sequence of solutions xₖ: [a, b] → X of the generalized ODE
\[
\frac{dx}{d\tau} = DF_k(x,t)
\]
converges to a function x₀, then this convergence is uniform and x₀: [a, b] → X is a solution of the “limiting equation.”

Corollary 7.5: Consider a sequence of functions Fₖ: Ω → X such that Fₖ ∈ ℱ(Ω, hₖ), k ∈ ℕ. Suppose the following conditions are fulfilled:

(a) the sequence of functions hₖ: [a, b] → ℝ, k ∈ ℕ, is equiregulated;
(b) the sequence {hₖ(a)}ₖ∈ℕ is bounded;
(c) for every k ∈ ℕ, the function hₖ is nondecreasing and left-continuous, satisfying hₖ(b) − hₖ(a) ⩽ c for some c > 0;
(d) {Fₖ}ₖ∈ℕ converges pointwise to F₀ on Ω;
(e) for each k ∈ ℕ, xₖ: [a, b] → X is a solution of the generalized ODE
\[
\frac{dx}{d\tau} = DF_k(x,t) \tag{7.6}
\]
on the interval [a, b], there exists a closed subset C of X, C ⊂ O, such that xₖ(t) ∈ C for all t ∈ [a, b] and all k ∈ ℕ, and {xₖ}ₖ∈ℕ converges pointwise to x₀: [a, b] → X.

Then, the following assertions hold:

(i) the sequence {hₖ}ₖ∈ℕ has a subsequence that converges uniformly to a nondecreasing function h₀;
(ii) for every s₁, s₂ ∈ [a, b], ‖x₀(s₂) − x₀(s₁)‖ ⩽ |h₀(s₂) − h₀(s₁)|, where h₀ is the function described in (i);
(iii) limₖ→∞ xₖ(t) = x₀(t) uniformly on [a, b];
(iv) x₀ is a solution on [a, b] of the generalized ODE
\[
\frac{dx}{d\tau} = DF_0(x,t).
\]

Proof. Conditions (b) and (c) imply that, for every k ∈ ℕ, hₖ is a nondecreasing function and the sequence {hₖ(a)}ₖ∈ℕ is bounded. Hence, the set {hₖ(t): t ∈ [a, b]} is also bounded for every k ∈ ℕ and t ∈ [a, b]. Since the sequence {hₖ}ₖ∈ℕ is equiregulated, by Corollary 1.19, there exists a uniformly convergent subsequence {h_{n_k}}ₖ∈ℕ. For the sake of simplicity, we denote this


subsequence by {hk }k∈ℕ . Let h0 ∶ [a, b] → ℝ be a function for which the sequence {hk }k∈ℕ converges uniformly. Since {hk }k∈ℕ is a sequence of nondecreasing functions, h0 is also a nondecreasing function and, hence, item (i) follows. Suppose a ⩽ s1 < s2 ⩽ b. For every k ∈ ℕ, Lemma 4.9 yields the estimate ‖xk (s2 ) − xk (s1 )‖ ⩽ |hk (s2 ) − hk (s1 )|. Applying the limit as k → ∞ in the previous inequality, we get by (i), the following estimate ‖x0 (s2 ) − x0 (s1 )‖ ⩽ |h0 (s2 ) − h0 (s1 )|, for s1 , s2 ∈ [a, b] and, therefore, (ii) follows. In sequel, our goal is to prove (iii). Using Lemma 4.9, we get ‖xk (s2 ) − xk (s1 )‖ ⩽ |hk (s2 ) − hk (s1 )|, for s1 , s2 ∈ [a, b]. Since the sequence of functions {hk }k∈ℕ is equiregulated, the previous inequality, together with Corollary 1.1 and the fact that limk→∞ xk (t) = x0 (t) pointwisely for every t ∈ [a, b], imply that {xk }k∈ℕ converges uniformly to x0 on [a, b]. In order to prove (iv), note that, by the definition of a solution of the generalized ODE (7.6), we have, for each k ∈ ℕ, (xk (t), t) ∈ C × [a, b] ⊂ O × [a, b] and s2

xk (s2 ) − xk (s1 ) =

DFk (xk (𝜏), s),

∫s1

(7.7)

for every s1, s2 ∈ [a, b]. Because Fk ∈ ℱ(Ω, hk) for each k ∈ ℕ, the sequence {hk}k∈ℕ is equiregulated, and {Fk}k∈ℕ converges pointwise to F0, Corollary 1.1 implies that the sequence {Fk}k∈ℕ of right-hand sides converges uniformly to F0. Thus, Lemma 7.1 yields limk→∞ Fk(x, t+) = F0(x, t+). Therefore, all hypotheses of Theorem 7.2 are fulfilled and, hence, for all s1, s2 ∈ [a, b], we have

lim_{k→∞} ‖∫_{s1}^{s2} DFk(xk(𝜏), s) − ∫_{s1}^{s2} DF0(x0(𝜏), s)‖ = 0.    (7.8)

Finally, from (7.7), we obtain

‖x0(s2) − x0(s1) − ∫_{s1}^{s2} DF0(x0(𝜏), s)‖ ⩽ ‖xk(s2) − x0(s2)‖ + ‖xk(s1) − x0(s1)‖ + ‖∫_{s1}^{s2} DFk(xk(𝜏), s) − ∫_{s1}^{s2} DF0(x0(𝜏), s)‖.

Then, taking k → ∞, it follows from (iii) and (7.8) that

x0(s2) − x0(s1) = ∫_{s1}^{s2} DF0(x0(𝜏), s),

for all s1, s2 ∈ [a, b], proving the result. ◽
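In the special case of classical ODEs, where F(x, t) accumulates an ordinary right-hand side f(x, s), the conclusion of Corollary 7.5 can be observed numerically: pointwise convergence of the right-hand sides yields uniform convergence of the solutions. The sketch below is illustrative only; the right-hand sides fk, the Euler discretization, and the grid are our own choices, not part of the text.

```python
import math

def euler(f, x0, a, b, n=2000):
    # Forward Euler approximation of x' = f(t, x) on [a, b].
    h = (b - a) / n
    t, x = a, x0
    xs = [x]
    for _ in range(n):
        x = x + h * f(t, x)
        t += h
        xs.append(x)
    return xs

# f_k(t, x) = -x + 1/k converges pointwise (indeed uniformly) to f_0(t, x) = -x.
def f_k(k):
    return lambda t, x: -x + 1.0 / k

f_0 = lambda t, x: -x

x0_traj = euler(f_0, 1.0, 0.0, 1.0)
sup_dists = []
for k in (1, 10, 100, 1000):
    xk_traj = euler(f_k(k), 1.0, 0.0, 1.0)
    sup_dists.append(max(abs(u - v) for u, v in zip(xk_traj, x0_traj)))

# The uniform distance sup_t |x_k(t) - x_0(t)| shrinks as k grows.
assert all(d2 < d1 for d1, d2 in zip(sup_dists, sup_dists[1:]))
```

Here the closed set C of hypothesis (e) can be taken as a ball containing all trajectories, and the role of hk is played by a common Lipschitz bound for the fk.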



7.1 Basic Theory for Generalized ODEs

The next result is a consequence of Corollary 7.5 for the case where hk = h for all k ∈ ℕ0. In this case, the equiregulatedness of {hk}k∈ℕ follows from Corollary 1.19. This result can be found in [4].

Corollary 7.6: Assume that, for each k ∈ ℕ0, Fk ∶ Ω → X belongs to the class ℱ(Ω, h), where h ∶ [a, b] → ℝ is a nondecreasing and left-continuous function, and {Fk}k∈ℕ converges pointwise to F0 on Ω. Let xk ∶ [a, b] → X, k ∈ ℕ, be solutions of the generalized ODE

dx/d𝜏 = DFk(x, t)

on [a, b] such that {xk}k∈ℕ converges pointwise to x0 ∶ [a, b] → X. Assume that there exists a closed subset C of X, with C ⊂ O, such that xk(t) ∈ C for all t ∈ [a, b] and all k ∈ ℕ. Then, x0 ∶ [a, b] → X satisfies:

(i) ‖x0(s2) − x0(s1)‖ ⩽ h(s2) − h(s1), for s1 ⩽ s2, with s1, s2 ∈ [a, b];
(ii) limk→∞ xk(s) = x0(s) uniformly on [a, b];
(iii) x0 is a solution on [a, b] of the generalized ODE dx/d𝜏 = DF0(x, t).

The next result is a generalization of [209, Theorem 8.6] to the case of Banach space-valued functions. Here, arguments different from those of [209] are used.

Theorem 7.7: Assume that, for each k ∈ ℕ, the function Fk ∶ Ω → X belongs to the class ℱ(Ω, h), where h ∶ [a, b] → ℝ is a nondecreasing and left-continuous function, and {Fk}k∈ℕ converges pointwise to F0 on Ω. Let x0 ∶ [a, b] → X be a solution of the generalized ODE

dx/d𝜏 = DF0(x, t)    (7.9)

on [a, b], satisfying the following uniqueness property:

(U) if z ∶ [a, 𝛾] → X, [a, 𝛾] ⊂ [a, b], is a solution of (7.9) such that z(a) = x0(a), then z(t) = x0(t) for every t ∈ [a, 𝛾].

Assume, further, that there exists 𝜌 > 0 such that if s ∈ [a, b] and ‖y − x0(s)‖ < 𝜌, then (y, s) ∈ Ω. Let {yk}k∈ℕ ⊂ O satisfy limk→∞ yk = x0(a). Then, there exists a positive integer k0 such that, for all k > k0, there exists a solution xk ∶ [a, b] → X of the generalized ODE

dx/d𝜏 = DFk(x, t),    (7.10)

with xk(a) = yk, such that {xk}k∈ℕ converges uniformly to x0 on [a, b].


Proof. Let y ∈ O be such that ‖y − x0(a)‖ < 𝜌/2 (or ‖y − x0(a+)‖ < 𝜌/2, where x0(a+) = x0(a) + F0(x0(a), a+) − F0(x0(a), a)). By hypothesis, (y, a) ∈ Ω. Since limk→∞ yk = x0(a), the pointwise convergence of {Fk}k∈ℕ yields

lim_{k→∞} [yk + Fk(yk, a+) − Fk(yk, a)] = x0(a) + F0(x0(a), a+) − F0(x0(a), a).    (7.11)

Indeed, it is enough to observe that, applying (4.6) to Fk and using the pointwise convergence of Fk to F0, for each 𝜖 > 0,

‖[yk + Fk(yk, a+) − Fk(yk, a)] − [x0(a) + F0(x0(a), a+) − F0(x0(a), a)]‖ ⩽ ‖yk − x0(a)‖ + ‖Fk(yk, a+) − Fk(yk, a) − F0(x0(a), a+) + F0(x0(a), a)‖ ⩽ ‖yk − x0(a)‖ + ‖yk − x0(a)‖(h(a+) − h(a)) + 𝜖,

for sufficiently large k.

The convergence in (7.11) implies the existence of k1 ∈ ℕ such that, for k > k1, we have (yk, a) ∈ Ω and (yk + Fk(yk, a+) − Fk(yk, a), a) ∈ Ω. Since O is open, one can obtain d > a such that, if t ∈ [a, d] and

‖z − [yk + Fk(yk, a+) − Fk(yk, a)]‖ ⩽ h(t) − h(a+),

then (z, t) ∈ Ω for all k > k1. By Theorem 5.1, for each k > k1, there is a unique solution xk ∶ [a, d] → X of the generalized ODE (7.10) on [a, d] for which xk(a) = yk. As mentioned in [209, Theorem 8.6], the solutions xk of the generalized ODE (7.10) exist on the same interval [a, d] for all k > k1, since the choice of d depends only on the function h.

Now, we claim that limk→∞ xk(t) = x0(t) for every t ∈ [a, d]. Indeed, fix t ∈ [a, d]. By Theorem 7.2,

𝜂k = ‖∫_{a}^{t} DFk(x0(𝜏), s) − ∫_{a}^{t} DF0(x0(𝜏), s)‖ → 0, as k → ∞.    (7.12)

Thus, for k > k1, we have

‖xk(t) − x0(t)‖ = ‖xk(a) + ∫_{a}^{t} DFk(xk(𝜏), s) − x0(a) − ∫_{a}^{t} DF0(x0(𝜏), s)‖
⩽ ‖xk(a) − x0(a)‖ + ‖∫_{a}^{t} DFk(xk(𝜏), s) − ∫_{a}^{t} DFk(x0(𝜏), s)‖ + ‖∫_{a}^{t} DFk(x0(𝜏), s) − ∫_{a}^{t} DF0(x0(𝜏), s)‖
⩽ ‖yk − x0(a)‖ + ∫_{a}^{t} ‖xk(s) − x0(s)‖ dh(s) + 𝜂k,

where we used Lemma 4.6 and (7.12) to obtain the last inequality. Then, for k > k1, by the Grönwall inequality (see Theorem 1.52), we obtain

‖xk(t) − x0(t)‖ ⩽ (‖yk − x0(a)‖ + 𝜂k) e^{h(t)−h(a)} → 0 as k → ∞,    (7.13)


where we used (7.12) and the fact that limk→∞ yk = x0(a) to obtain (7.13). Therefore, limk→∞ xk(t) = x0(t) for each t ∈ [a, d].

Let us verify that limk→∞ xk(t) = x0(t) on [a, b]. Suppose, to the contrary, that there exists d∗ ∈ (a, b) satisfying the following property: for every d < d∗, there is a solution xk of the generalized ODE (7.10) with xk(a) = yk, defined on [a, d], for k ∈ ℕ sufficiently large, such that limk→∞ xk(t) = x0(t) for t ∈ [a, d], but this convergence does not hold on [a, d′] for d′ > d∗. According to Lemma 4.9, the inequality ‖xk(s2) − xk(s1)‖ ⩽ |h(s2) − h(s1)| holds for s1, s2 ∈ [a, d∗) and k ∈ ℕ sufficiently large. Therefore, for k ∈ ℕ sufficiently large, the limit xk(d∗−) exists. Using the fact that the solution x0 is left-continuous, we get

lim_{k→∞} xk(d∗−) = x0(d∗−) = x0(d∗).

Set xk(d∗) = xk(d∗−). Then, limk→∞ xk(d∗) = x0(d∗), which implies that the statement holds on [a, d∗]. Since d∗ < b, one can use the same argument, with initial condition limk→∞ xk(d∗) = x0(d∗) and the local uniqueness given by condition (U), to conclude that the statement is also valid on [d∗, d∗ + 𝛿], for some 𝛿 > 0, which contradicts the definition of d∗. Hence, there is k0 ∈ ℕ such that xk is defined on [a, b] for all k > k0, and limk→∞ xk(t) = x0(t) on [a, b].

Finally, set C = {z ∈ X ∶ ‖z − x0(s)‖ ⩽ 𝜌/2, s ∈ [a, b]} ⊂ O. By (7.13), we have xk(t) ∈ C for every t ∈ [a, b] and k sufficiently large. Then, Corollary 7.6 yields that this convergence is uniform on [a, b]. ◽

The next result is a generalization of [209, Theorem 8.8] to Banach spaces, whose proof follows the same steps as in the finite dimensional case. Thus, we provide only a few comments here.

Theorem 7.8: Assume that, for each k ∈ ℕ0, the function Fk ∶ Ω → X belongs to the class ℱ(Ω, hk), where hk ∶ [a, b] → ℝ is nondecreasing and left-continuous. Suppose h0 ∶ [a, b] → ℝ is nondecreasing and continuous on [a, b], for every a ⩽ t1 ⩽ t2 ⩽ b, we have

lim sup_{k→∞} (hk(t2) − hk(t1)) ⩽ h0(t2) − h0(t1),

and {Fk}k∈ℕ converges pointwise to F0 on Ω. Let x0 ∶ [a, b] → X be a solution on [a, b] of the generalized ODE

dx/d𝜏 = DF0(x, t),    (7.14)

which has the uniqueness property (U) described in Theorem 7.7. Assume, further, that there is 𝜌 > 0 such that if s ∈ [a, b] and ‖y − x0(s)‖ < 𝜌, then (y, s) ∈ Ω. Let


{yk}k∈ℕ ⊂ O satisfy limk→∞ yk = x0(a). Then, for every 𝜇 > 0, there exists k∗ ∈ ℕ such that, for k > k∗, there exists a solution xk on [a, b] of the generalized ODE

dx/d𝜏 = DFk(x, t),    (7.15)

for which xk(a) = yk and

‖xk(s) − x0(s)‖ < 𝜇, for all s ∈ [a, b].

Proof. The existence of the solutions xk of the generalized ODE (7.15) for sufficiently large k ∈ ℕ and the pointwise convergence limk→∞ xk(s) = x0(s), for s ∈ [a, b], follow, with minor changes, as in the proof of Theorem 7.7. The remaining part follows exactly as in the proof of [209, Theorem 8.8]. ◽
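The key estimate in the proofs above is the Grönwall-type inequality for Stieltjes integrals (Theorem 1.52), which produces the bound (7.13) with the factor e^{h(t)−h(a)}. A discrete toy analogue can be checked directly: if u satisfies the extremal recursion u = c + Σ u·Δh with nonnegative increments Δh of a nondecreasing h, then u ⩽ c·e^{h−h(a)}. The increments below are our own illustrative choices, one of them large to mimic a jump of h.

```python
import math

# Discrete Grönwall check: if u_{j+1} = c + sum_{i<=j} u_i * dh_i with dh_i >= 0,
# then u_j <= c * exp(H_j), where H_j = dh_0 + ... + dh_{j-1}.
c = 0.5
dh = [0.1, 0.0, 0.3, 1.2, 0.05, 0.7]   # illustrative increments of h, 1.2 mimics a jump

u = [c]
for j in range(len(dh)):
    u.append(c + sum(u[i] * dh[i] for i in range(j + 1)))

H = [0.0]
for step in dh:
    H.append(H[-1] + step)

# The exponential bound of Grönwall type holds at every index.
for ui, hi in zip(u, H):
    assert ui <= c * math.exp(hi) + 1e-12
```

The recursion satisfies u_{j+1} = u_j(1 + dh_j), so the bound reduces to the elementary inequality 1 + x ⩽ e^x, which is exactly the mechanism behind the Stieltjes Grönwall lemma.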

7.2 Applications to Measure FDEs

In this section, our goal is to prove some results on continuous dependence on parameters for measure FDEs, using what we obtained in Section 7.1 and the correspondence between generalized ODEs and measure FDEs presented in Chapter 4. Let r, 𝜎 > 0. As in Chapter 4, we assume that O ⊂ G([t0 − r, t0 + 𝜎], ℝn) has the prolongation property, and we borrow the same conditions (A)–(C) for the next results of this section. The first result we present is borrowed from [85].

Theorem 7.9: Assume that O = G([t0 − r, t0 + 𝜎], ℝn), P = {yt ∶ y ∈ O, t ∈ [t0, t0 + 𝜎]}, g ∶ [t0, t0 + 𝜎] → ℝ is a nondecreasing left-continuous function, and fk ∶ P × [t0, t0 + 𝜎] → ℝn, k ∈ ℕ0, are functions that satisfy conditions (A)–(C) for the same functions M and L. Suppose, in addition, that for every y ∈ O,

lim_{k→∞} ∫_{t0}^{t} fk(ys, s) dg(s) = ∫_{t0}^{t} f0(ys, s) dg(s)

uniformly with respect to t ∈ [t0, t0 + 𝜎]. Let yk ∈ O, k ∈ ℕ, be solutions of the measure FDEs

yk(t) = yk(t0) + ∫_{t0}^{t} fk((yk)s, s) dg(s), t ∈ [t0, t0 + 𝜎],
(yk)t0 = 𝜙k.


Consider a sequence of functions 𝜙k ∈ P, k ∈ ℕ, such that {𝜙k}k∈ℕ converges to 𝜙0 uniformly on [−r, 0]. If there exists a function y0 such that {yk}k∈ℕ converges pointwise to y0 on [t0, t0 + 𝜎], then y0 is a solution of the measure FDE

y0(t) = y0(t0) + ∫_{t0}^{t} f0((y0)s, s) dg(s), t ∈ [t0, t0 + 𝜎],
(y0)t0 = 𝜙0.

Proof. For each k ∈ ℕ, define an auxiliary function Fk ∶ O × [t0, t0 + 𝜎] → G([t0 − r, t0 + 𝜎], ℝn) by

Fk(x, t)(𝜗) = 0, for t0 − r ⩽ 𝜗 ⩽ t0;
Fk(x, t)(𝜗) = ∫_{t0}^{𝜗} fk(xs, s) dg(s), for t0 ⩽ 𝜗 ⩽ t ⩽ t0 + 𝜎;    (7.16)
Fk(x, t)(𝜗) = ∫_{t0}^{t} fk(xs, s) dg(s), for t ⩽ 𝜗 ⩽ t0 + 𝜎.

By the hypotheses, {Fk}k∈ℕ converges uniformly to F0 on O × [t0, t0 + 𝜎], where

F0(x, t)(𝜗) = 0, for t0 − r ⩽ 𝜗 ⩽ t0;
F0(x, t)(𝜗) = ∫_{t0}^{𝜗} f0(xs, s) dg(s), for t0 ⩽ 𝜗 ⩽ t ⩽ t0 + 𝜎;    (7.17)
F0(x, t)(𝜗) = ∫_{t0}^{t} f0(xs, s) dg(s), for t ⩽ 𝜗 ⩽ t0 + 𝜎,

whence F0 ∈ ℱ(O × [t0, t0 + 𝜎], h). Then, the Moore–Osgood theorem (see [19]) yields

lim_{k→∞} Fk(x, t+) = F0(x, t+),

for every (x, t) ∈ O × [t0, t0 + 𝜎). For every k ∈ ℕ0 and t ∈ [t0, t0 + 𝜎], define

xk(t)(𝜗) = yk(𝜗), for 𝜗 ∈ [t0 − r, t];  xk(t)(𝜗) = yk(t), for 𝜗 ∈ [t, t0 + 𝜎].    (7.18)

Applying Theorem 4.18, one concludes that, for each k ∈ ℕ, xk ∶ [t0, t0 + 𝜎] → O is a solution of the generalized ODE

dx/d𝜏 = DFk(x, t).

On the other hand, for each k ∈ ℕ and t0 ⩽ t1 ⩽ t2 ⩽ t0 + 𝜎,

‖yk(t2) − yk(t1)‖ = ‖∫_{t1}^{t2} fk((yk)s, s) dg(s)‖ ⩽ ∫_{t1}^{t2} M(s) dg(s) ⩽ 𝜂(K(t2) − K(t1)),


where 𝜂(t) = t, for all t ∈ [0, ∞), and K(t) = ∫_{t0}^{t} M(s) dg(s) + t, for all t ∈ [t0, t0 + 𝜎]. One can easily check that K and 𝜂 are increasing functions fulfilling 𝜂(0) = 0. Moreover, the sequence {yk(t0)}k∈ℕ is bounded, since yk(t0) = 𝜙k(0) and 𝜙k → 𝜙0 uniformly on [−r, 0]. Hence, Corollary 1.19 implies that {yk}k∈ℕ admits a subsequence that converges uniformly on [t0, t0 + 𝜎]. For simplicity, let us denote this subsequence again by {yk}k∈ℕ. By the property of the initial condition, this uniform convergence can be extended to the entire interval [t0 − r, t0 + 𝜎]. From this fact and by (7.18), we conclude that {xk}k∈ℕ converges to a certain x0 uniformly on [t0, t0 + 𝜎], where x0 ∶ [t0, t0 + 𝜎] → O is given by

x0(t)(𝜗) = y0(𝜗), for 𝜗 ∈ [t0 − r, t];  x0(t)(𝜗) = y0(t), for 𝜗 ∈ [t, t0 + 𝜎].    (7.19)

Finally, by Theorem 7.4, x0 is a solution of

dx/d𝜏 = DF0(x, t)

on [t0, t0 + 𝜎], whence, by Theorem 4.19, y0 ∶ [t0 − r, t0 + 𝜎] → ℝn satisfies

y0(t) = y0(t0) + ∫_{t0}^{t} f0((y0)s, s) dg(s), t ∈ [t0, t0 + 𝜎],
(y0)t0 = 𝜙0,

and the statement follows. ◽
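The auxiliary function (7.16) turns a measure FDE into a generalized ODE by accumulating the Stieltjes integral of fk up to 𝜗 and freezing the value after t. The following numerical sketch of this construction is scalar, suppresses the x-dependence of f, and uses an illustrative integrator g with a unit jump; all concrete choices are ours, not part of the text.

```python
# A left-continuous, nondecreasing integrator with a unit jump at s = 0.5.
def g(s):
    return s + (1.0 if s > 0.5 else 0.0)

def stieltjes(f, a, b, n=1000):
    # Riemann-Stieltjes sum of  ∫_a^b f(s) dg(s)  with left endpoints,
    # matching the left-continuity convention for g.
    total, h = 0.0, (b - a) / n
    for i in range(n):
        s0, s1 = a + i * h, a + (i + 1) * h
        total += f(s0) * (g(s1) - g(s0))
    return total

def F(t, theta, t0=0.0):
    # Shape of F(x, t)(theta) from (7.16), with the x-dependence suppressed
    # (f = 1): zero before t0, accumulated integral up to min(theta, t) after.
    if theta <= t0:
        return 0.0
    return stieltjes(lambda s: 1.0, t0, min(theta, t))

# Beyond t, the value is frozen at F(x, t)(t): the "prolonged" shape.
assert F(0.6, 0.9) == F(0.6, 0.6)
```

With f ≡ 1, the accumulated value is g(𝜗) − g(t0), so the jump of g at 0.5 appears as a jump of F in 𝜗, which is exactly how impulsive behavior of measure FDEs is encoded in the generalized-ODE setting.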

In the next result, we present the analogous version of Theorem 7.7 for measure FDEs. This formulation is presented here for the first time. Its proof follows directly from the correspondence between generalized ODEs and measure FDEs described by Theorems 4.18 and 4.19, and from Theorem 7.7.

Theorem 7.10: Suppose O = G([t0 − r, t0 + 𝜎], ℝn), P = {yt ∶ y ∈ O, t ∈ [t0, t0 + 𝜎]}, g ∶ [t0, t0 + 𝜎] → ℝ is a nondecreasing and left-continuous function, and, for each k ∈ ℕ, fk ∶ P × [t0, t0 + 𝜎] → ℝn satisfies conditions (A)–(C) for the same functions M and L. Suppose, further, that for every y ∈ O,

lim_{k→∞} ∫_{t0}^{t} [fk(ys, s) − f0(ys, s)] dg(s) = 0, t ∈ [t0, t0 + 𝜎].    (7.20)

Let y0 ∶ [t0 − r, t0 + 𝜎] → ℝn be the unique solution of

y0(t) = y0(t0) + ∫_{t0}^{t} f0((y0)s, s) dg(s), t ∈ [t0, t0 + 𝜎],    (7.21)
(y0)t0 = 𝜙0,

where 𝜙0 ∈ P. Consider a sequence {𝜙k}k∈ℕ of regulated and left-continuous functions in P that converges uniformly to 𝜙0 on [−r, 0]. Let {zk}k∈ℕ ⊂ O satisfy


limk→∞ zk(t0 + 𝜃) = 𝜙0(𝜃) on [−r, 0]. Then, for sufficiently large k ∈ ℕ, there exists a solution yk of

yk(t) = yk(t0) + ∫_{t0}^{t} fk((yk)s, s) dg(s), t ∈ [t0, t0 + 𝜎],    (7.22)
(yk)t0 = 𝜙k.

Moreover, the sequence {yk}k∈ℕ converges uniformly to y0 on [t0 − r, t0 + 𝜎].

Proof. The main idea behind this proof is to apply the correspondence between measure FDEs and generalized ODEs, together with Theorem 7.7. For each k ∈ ℕ and each function fk, consider the function Fk ∶ O × [t0, t0 + 𝜎] → G([t0 − r, t0 + 𝜎], ℝn) given by (7.16). Clearly, conditions (A)–(C), together with Lemma 4.16, imply that Fk ∈ ℱ(O × [t0, t0 + 𝜎], h) for every k ∈ ℕ, with

h(t) = ∫_{t0}^{t} (L(s) + M(s)) dg(s), for t ∈ [t0, t0 + 𝜎].

By the hypotheses and by definition, {Fk}k∈ℕ converges uniformly to F0, given by (7.17), on O × [t0, t0 + 𝜎]. Thus, F0 ∈ ℱ(O × [t0, t0 + 𝜎], h). Applying the Moore–Osgood theorem (see [19]), we obtain

lim_{k→∞} Fk(x, t+) = F0(x, t+), for every (x, t) ∈ O × [t0, t0 + 𝜎).

Let y0 be the unique solution of the measure FDE (7.21). Define x0 ∶ [t0, t0 + 𝜎] → O by

x0(t)(𝜗) = y0(𝜗), for 𝜗 ∈ [t0 − r, t];  x0(t)(𝜗) = y0(t), for 𝜗 ∈ [t, t0 + 𝜎].

Then, Theorem 4.18 yields that x0 is the unique solution of the generalized ODE (7.9) on [t0, t0 + 𝜎]. Consider the generalized ODE

dx/d𝜏 = DFk(x, t)    (7.23)

on [t0, t0 + 𝜎], with initial condition

zk(t0)(𝜗) = 𝜙k(𝜗 − t0), for 𝜗 ∈ [t0 − r, t0];  zk(t0)(𝜗) = 𝜙k(0), for 𝜗 ∈ [t0, t0 + 𝜎].

Notice that, for 𝜗 ∈ [t0 − r, t0], we have

lim_{k→∞} zk(t0)(𝜗) = lim_{k→∞} 𝜙k(𝜗 − t0) = 𝜙0(𝜗 − t0) = x0(t0)(𝜗),

and, for 𝜗 ∈ [t0, t0 + 𝜎], we obtain

lim_{k→∞} zk(t0)(𝜗) = lim_{k→∞} 𝜙k(0) = 𝜙0(0) = x0(t0)(𝜗).


Hence, for 𝜗 ∈ [t0 − r, t0 + 𝜎], we have

lim_{k→∞} zk(t0)(𝜗) = x0(t0)(𝜗),

where x0 is the solution of (7.9), by uniqueness. Therefore, applying Theorem 7.7, there exists a positive integer k0 such that, for all k > k0, there exists a solution xk of the generalized ODE (7.23) satisfying limk→∞ xk(s) = x0(s) for every s ∈ [t0, t0 + 𝜎]. Define, for each k > k0, the function

yk(𝜗) = xk(t0)(𝜗), for 𝜗 ∈ [t0 − r, t0];  yk(𝜗) = xk(𝜗)(𝜗), for 𝜗 ∈ [t0, t0 + 𝜎].    (7.24)

Using Theorem 4.19 again, we conclude that yk is a solution of the measure FDE (7.22) on [t0 − r, t0 + 𝜎]. Then, by (7.24) and using the assumptions on the initial condition, we obtain

lim_{k→∞} (yk)t0(𝜃) = lim_{k→∞} 𝜙k(𝜃) = 𝜙0(𝜃) = (y0)t0(𝜃),

for 𝜃 ∈ [−r, 0]. Hence,

lim_{k→∞} yk(s) = y0(s), for s ∈ [t0 − r, t0].

Finally, combining this with the definition of yk and the fact that limk→∞ xk(s) = x0(s), for every s ∈ [t0, t0 + 𝜎], we conclude that

lim_{k→∞} yk(t) = y0(t), for every t ∈ [t0 − r, t0 + 𝜎],

and the proof is finished. ◽
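The correspondence used throughout this section rests on the prolongation maps (7.18) and (7.24): a function y on [t0 − r, t0 + 𝜎] is encoded as a family of slices x(t), each frozen after time t, and recovered via y(t) = x(t)(t). A toy sketch of this round trip follows; the sample function y and the grid are our own illustrative choices.

```python
# Toy version of the correspondence (7.18)/(7.24): from a function y on
# [t0 - r, t0 + sigma], build the slices x(t)(theta) and recover y(t) = x(t)(t).
t0, r, sigma = 0.0, 1.0, 2.0

def y(theta):
    return theta ** 2  # any sample function on [t0 - r, t0 + sigma]

def x(t):
    def slice_(theta):
        # history up to t, constant ("prolonged") afterwards
        return y(theta) if theta <= t else y(t)
    return slice_

for t in [0.0, 0.5, 1.3, 2.0]:
    assert x(t)(t) == y(t)             # recovery rule, as in (7.24)
    assert x(t)(t0 + sigma) == y(t)    # values beyond t are frozen at y(t)
```

Freezing the slice after t is what makes each x(t) an element of G([t0 − r, t0 + 𝜎], ℝn) carrying exactly the information available at time t, so that convergence of the slices xk is equivalent to convergence of the original functions yk.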


8 Stability Theory

Suzete M. Afonso¹, Fernanda Andrade da Silva², Everaldo M. Bonotto³, Márcia Federson², Luciene P. Gimenes (in memoriam)⁴, Rogelio Grau⁵, Jaqueline G. Mesquita⁶, and Eduard Toon⁷

1 Departamento de Matemática, Instituto de Geociências e Ciências Exatas, Universidade Estadual Paulista “Júlio de Mesquita Filho” (UNESP), Rio Claro, SP, Brazil
2 Departamento de Matemática, Instituto de Ciências Matemáticas e de Computação (ICMC), Universidade de São Paulo, São Carlos, SP, Brazil
3 Departamento de Matemática Aplicada e Estatística, Instituto de Ciências Matemáticas e de Computação (ICMC), Universidade de São Paulo, São Carlos, SP, Brazil
4 Departamento de Matemática, Centro de Ciências Exatas, Universidade Estadual de Maringá, Maringá, PR, Brazil
5 Departamento de Matemáticas y Estadística, División de Ciencias Básicas, Universidad del Norte, Barranquilla, Colombia
6 Departamento de Matemática, Instituto de Ciências Exatas, Universidade de Brasília, Brasília, DF, Brazil
7 Departamento de Matemática, Instituto de Ciências Exatas, Universidade Federal de Juiz de Fora, Juiz de Fora, MG, Brazil

This chapter is dedicated to the study of the stability theory for generalized ODEs. Here, we consider X a Banach space, equipped with norm ‖ ⋅ ‖, and we assume that Ω = O × [t0, ∞), where t0 ⩾ 0 and O is an open subset of X such that 0 ∈ O (the neutral element of X). We also assume that F ∶ Ω → X belongs to the class ℱ(Ω, h) introduced in Definition 4.3, where h ∶ [t0, ∞) → ℝ is a left-continuous and nondecreasing function. Throughout this chapter, we deal with stability issues concerning nonautonomous generalized ODEs of type

dx/d𝜏 = DF(x, t),    (8.1)

in the aforementioned setting. We focus our attention on the study of the stability of the trivial solution of (8.1), for which we consider that the following condition is fulfilled:

(ZS) F(0, t2) − F(0, t1) = 0 for all t1, t2 ∈ [t0, ∞).

Generalized Ordinary Differential Equations in Abstract Spaces and Applications, First Edition. Edited by Everaldo M. Bonotto, Márcia Federson, and Jaqueline G. Mesquita. © 2021 John Wiley & Sons, Inc. Published 2021 by John Wiley & Sons, Inc.


Condition (ZS) guarantees that indeed x ≡ 0 is a solution of (8.1). In order to check this fact, it is enough to take U(𝜏, t) = F(x(𝜏), t) in the definition of the Kurzweil integral ∫_{t1}^{t2} DU(𝜏, t) (see Definition 2.1), with t1, t2 ∈ [t0, ∞), t1 < t2, and to verify that the Riemann-type sum which approximates the integral is formed only by differences of the form

U(𝜏i, si) − U(𝜏i, si−1) = F(x(𝜏i), si) − F(x(𝜏i), si−1).

Therefore, assumption (ZS) implies that

∫_{t1}^{t2} DF(0, t) = F(0, t2) − F(0, t1) = 0, t1, t2 ∈ [t0, ∞),

which, in turn, implies that x ≡ 0 satisfies the generalized ODE (8.1) on [t0, ∞).

Notice that if we add to F(x(𝜏), t) a function that depends only on the variable x, then the solutions of (8.1) do not change. In particular, if we subtract F(x(𝜏), t0) from F(x(𝜏), t), then we obtain a normalized representation F1 of F fulfilling F1(z, t0) = 0 for every z ∈ O. This fact shows us that, in order to get stability results for any solution of the generalized ODE (8.1), it is sufficient to obtain the same results for the trivial solution of (8.1).

The results on the stability of the trivial solution in the framework of the generalized ODE (8.1) are inspired by the theory developed by Aleksandr M. Lyapunov on the stability of solutions of classic ODEs. See [166, 167, 170], for instance. In his famous PhD thesis [170], Lyapunov proposed two methods to establish stability of solutions of ODEs. The first method, also known as the Indirect Method of Lyapunov, consists of studying the stability of equilibrium points of a nonlinear system by analyzing the stability of the trivial solution of the corresponding linearized system. The second method, which is now referred to as the Lyapunov Stability Criterion or Direct Method of Lyapunov, makes use of a Lyapunov functional to determine criteria for stability. The results that guarantee that “the existence of a Lyapunov functional implies stability” are known as Lyapunov-type theorems. On the other hand, the results that show that “stability implies the existence of a Lyapunov functional” are known as converse Lyapunov theorems. Converse Lyapunov theorems confirm the effectiveness of the Direct Method of Lyapunov: if the stability of the trivial solution of the generalized ODE (8.1) guarantees the existence of a Lyapunov functional, we can be assured that it is always possible to find such a functional, although this may be an arduous task.
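The telescoping of the Riemann-type sums described above can be mimicked numerically with a toy F satisfying (ZS); the choice of F and of the tagged division below is ours, purely for illustration.

```python
# Riemann-type sums for the Kurzweil integral of U(tau, t) = F(x(tau), t)
# telescope in the second variable, so along the trivial solution x = 0
# the integral reduces to F(0, t2) - F(0, t1), which vanishes under (ZS).
def F(x, t):
    return x * t + 3.0  # toy choice; satisfies (ZS): F(0, t2) - F(0, t1) = 0

def kurzweil_sum(x, pts):
    # Sum of U(tau_i, s_i) - U(tau_i, s_{i-1}) with tags tau_i in [s_{i-1}, s_i].
    total = 0.0
    for s0, s1 in zip(pts, pts[1:]):
        tau = 0.5 * (s0 + s1)
        total += F(x(tau), s1) - F(x(tau), s0)
    return total

pts = [0.0, 0.3, 0.7, 1.1, 2.0]
zero = lambda tau: 0.0
assert kurzweil_sum(zero, pts) == F(0.0, pts[-1]) - F(0.0, pts[0]) == 0.0
```

Note that along x ≡ 0 the sum telescopes exactly, for every division and every choice of tags, which is the content of the displayed identity above.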
The study of converse Lyapunov theorems for generalized ODEs started in 1984, when Štefan Schwabik introduced the concept of variational stability, which generalizes Lyapunov stability and integral stability, and he established converse Lyapunov theorems for generalized ODEs in the finite dimensional case (see [208]). Later, the authors of [89] established converse Lyapunov theorems for a


class of functional differential equations (we write FDEs, for short) with Perron integrable right-hand sides. Recently, motivated by the works [89, 208], the authors of [7] stated converse Lyapunov theorems on regular stability and integral stability for measure FDEs. This was done via generalized ODEs.

In what follows, we present the concept of a Lyapunov functional for generalized ODEs, which we use throughout this chapter. Recall that O is an open subset of the Banach space X.

Definition 8.1: Let B ⊆ O be any subset. We say that a function V ∶ [t0, ∞) × B → ℝ+ is a Lyapunov functional with respect to the generalized ODE (8.1) if the following conditions are satisfied:

(LF1) V(⋅, x) ∶ [t0, ∞) → ℝ+ is left-continuous on (t0, ∞), for all x ∈ B;
(LF2) there exists a continuous strictly increasing function b ∶ ℝ+ → ℝ+ satisfying b(0) = 0 such that V(t, x) ⩾ b(‖x‖), for all t ∈ [t0, ∞) and all x ∈ B;
(LF3) for every solution x ∶ I → B of the generalized ODE (8.1), defined on an interval I ⊂ [t0, ∞), the derivative

D⁺V(t, x(t)) = lim sup_{𝜂→0+} [V(t + 𝜂, x(t + 𝜂)) − V(t, x(t))]/𝜂 ⩽ 0

holds for all t ∈ I ⧵ sup I, that is, the right derivative of V along every solution of the generalized ODE (8.1) is nonpositive.

We point out that if x ∶ I ⊂ [t0, ∞) → B is a solution of the generalized ODE (8.1), condition (LF3) of Definition 8.1 holds, and the function I ∋ t ↦ V(t, x(t)) is left-continuous, then

(R1) V(t2, x(t2)) ⩽ V(t1, x(t1)) for all t1, t2 ∈ I, with t1 ⩽ t2.

On the other hand, it is clear that if the solution x ∶ I ⊂ [t0, ∞) → B of the generalized ODE (8.1) satisfies condition (R1), then it satisfies condition (LF3) as well. Therefore, conditions (LF3) and (R1) are somehow related, and this allows us to work with the most convenient condition for each type of stability.

This chapter deals with three notions of stability in the framework of generalized ODEs: variational stability, due to Schwabik; Lyapunov stability; and regular stability as introduced in [87]. With respect to Lyapunov stability (also known as uniform stability), we also present results for measure differential equations (as before, we write simply MDEs) and for dynamic equations on time scales. We organized this chapter so that its first section contains results on


variational stability and variational asymptotic stability for generalized ODEs. Still in this section, we give an account of Lyapunov-type theorems and converse Lyapunov theorems. In Section 8.2, we deal with uniform stability and uniform asymptotic stability of the trivial solution in the setting of the generalized ODE (8.1). Then, using the correspondence between the solutions of generalized ODEs and MDEs (respectively, dynamic equations on time scales) and the results from Section 8.2, we describe Lyapunov-type theorems for MDEs in Section 8.3 (respectively, for dynamic equations on time scales in Section 8.4). Last but not least, the results concerning regular stability for generalized ODEs, including a Lyapunov-type theorem and a converse Lyapunov theorem, are brought together in Section 8.5.

It is worth mentioning that the concepts of variational and regular stability concern, respectively, the local behavior of a function of bounded variation and of a regulated function which is initially close to the trivial solution x ≡ 0, while the concepts of Lyapunov stability give us the asymptotic behavior of solutions of generalized ODEs.
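Before turning to variational stability, conditions (LF2) and (R1) of Definition 8.1 can be checked on a classical toy example, taking V(t, x) = ‖x‖ along the decaying solution x(t) = e^{−t} of the scalar ODE x′ = −x. All concrete choices below are ours; the text works in the abstract generalized-ODE setting.

```python
import math

# Candidate Lyapunov functional and comparison function b from (LF2).
def V(t, x):
    return abs(x)

def b(s):
    # Continuous, strictly increasing, b(0) = 0, and s/(1+s) <= s.
    return s / (1.0 + s)

# Sampled decaying solution x(t) = e^{-t} of x' = -x on [0, 5].
ts = [0.1 * i for i in range(51)]
xs = [math.exp(-t) for t in ts]

# (LF2): V(t, x) >= b(|x|) at every sample point.
assert all(V(t, x) >= b(abs(x)) for t, x in zip(ts, xs))

# (R1): t -> V(t, x(t)) is nonincreasing along the solution.
vals = [V(t, x) for t, x in zip(ts, xs)]
assert all(v2 <= v1 for v1, v2 in zip(vals, vals[1:]))
```

Here (R1) holds because V(t, x(t)) = e^{−t} decreases, which in turn gives (LF3); for genuinely discontinuous solutions of generalized ODEs, the left-continuity demanded in (LF1) is what keeps this equivalence working.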

8.1 Variational Stability for Generalized ODEs

This section is devoted to presenting results on the variational stability of the trivial solution of the generalized ODE (8.1). The concept of variational stability for ordinary differential equations was introduced by H. Okamura in [191] in 1943, who called it strong stability; see [204, 238]. Later, in 1959, I. Vrkoč considered, in [234], Carathéodory equations and proved that Okamura's strong stability is equivalent to his concept of integral stability. For generalized ODEs, the concept of variational stability was introduced by Š. Schwabik in the work [208] from 1984. The starting point of Schwabik's approach was the paper [234] of I. Vrkoč on integral stability, besides the paper [39] by S.-N. Chow and J. A. Yorke from 1974, in which there is an improvement of Vrkoč's results.

As we mentioned in Lemma 4.9, every solution x ∶ [𝛼, 𝛽] → O of the generalized ODE (8.1), with t0 ⩽ 𝛼 < 𝛽 < ∞, is of bounded variation. For this reason, it is natural to measure the distance between two solutions of (8.1) using the variation norm, as Schwabik pointed out in [208], and it is very plausible to use the concept of variational stability (strong stability) introduced by Okamura and handled by Vrkoč and others.

We recall that the set BV([𝛼, 𝛽], X), with −∞ < 𝛼 < 𝛽 < ∞, denotes the Banach space of all functions x ∶ [𝛼, 𝛽] → X of bounded variation, endowed with the


standard variation norm ‖x‖BV = ‖x(𝛼)‖ + var_𝛼^𝛽(x), for all x ∈ BV([𝛼, 𝛽], X), where var_𝛼^𝛽(x) denotes the variation of the function x on [𝛼, 𝛽]. See Chapter 1 for more details.

The next concepts of stability are borrowed from [208]. See, also, [209]. As previously mentioned, these concepts take into consideration the variation of solutions of Eq. (8.1) around the trivial solution x ≡ 0.

Definition 8.2: The trivial solution x ≡ 0 of the generalized ODE (8.1) is

(i) variationally stable, if for every 𝜖 > 0, there exists 𝛿 = 𝛿(𝜖) > 0 such that if x ∶ [𝛾, 𝑣] → O, t0 ⩽ 𝛾 < 𝑣 < ∞, is a function of bounded variation in [𝛾, 𝑣] and

‖x(𝛾)‖ < 𝛿 and var_𝛾^𝑣 (x(s) − ∫_𝛾^s DF(x(𝜏), t)) < 𝛿,

then ‖x(t)‖ < 𝜖, for every t ∈ [𝛾, 𝑣];

(ii) variationally attracting, if there exists 𝛿0 > 0 and, for every 𝜖 > 0, there exist T = T(𝜖) ⩾ 0 and 𝜌 = 𝜌(𝜖) > 0 such that if x ∶ [𝛾, 𝑣] → O, with t0 ⩽ 𝛾 < 𝑣 < ∞, is a function of bounded variation in [𝛾, 𝑣] and

‖x(𝛾)‖ < 𝛿0 and var_𝛾^𝑣 (x(s) − ∫_𝛾^s DF(x(𝜏), t)) < 𝜌,

then ‖x(t)‖ < 𝜖, for every t ∈ [𝛾, 𝑣] ∩ [𝛾 + T, ∞);

(iii) variationally asymptotically stable, if it is both variationally stable and variationally attracting.

Note that if x ∶ [𝛾, 𝑣] → O, with t0 ⩽ 𝛾 < 𝑣 < ∞, is a solution of the generalized ODE (8.1), then

x(s) − x(𝛾) = ∫_𝛾^s DF(x(𝜏), t), for s ∈ [𝛾, 𝑣],

and, hence,

var_𝛾^𝑣 (x(s) − ∫_𝛾^s DF(x(𝜏), t)) = var_𝛾^𝑣 (x(𝛾)) = 0.


8.1.1 Direct Method of Lyapunov

We turn our attention to Lyapunov-type theorems for the generalized ODE (8.1). We start by presenting an auxiliary result, namely, Lemma 8.3, which describes some properties of a possible candidate for a Lyapunov functional. This result can be found in [208, 209] for the case where the function V is defined on [0, ∞) × ℝn. Here, we consider V defined on the set [t0, ∞) × X, where X is a Banach space. Moreover, we consider Ω = O × [t0, ∞), where O is an open subset of X such that 0 ∈ O and t0 ⩾ 0, F ∈ ℱ(Ω, h), and F satisfies (ZS). We omit its proof, since it follows similarly as in [209, Lemma 10.12] (see also [208, Lemma 1]).

Lemma 8.3: Let F ∈ ℱ(Ω, h). Assume that V ∶ [t0, ∞) × X → ℝ is such that V(⋅, x) ∶ [t0, ∞) → ℝ is left-continuous on (t0, ∞), for all x ∈ X, and satisfies

|V(t, z) − V(t, y)| ⩽ K‖z − y‖, z, y ∈ X, t ∈ [t0, ∞),    (8.2)

where K > 0 is a constant. Suppose, in addition, there exists a function Φ ∶ X → ℝ such that, for every solution x ∶ I → O, with I ⊂ [t0, ∞), of the generalized ODE (8.1), we have

D⁺V(t, x(t)) = lim sup_{𝜂→0+} [V(t + 𝜂, x(t + 𝜂)) − V(t, x(t))]/𝜂 ⩽ Φ(x(t)), t ∈ I ⧵ sup I.    (8.3)

If x ∶ [𝛾, 𝑣] → O, t0 ⩽ 𝛾 < 𝑣 < ∞, is left-continuous on (𝛾, 𝑣] and of bounded variation in [𝛾, 𝑣], then

V(𝑣, x(𝑣)) − V(𝛾, x(𝛾)) ⩽ K var_𝛾^𝑣 (x(s) − ∫_𝛾^s DF(x(𝜏), t)) + M(𝑣 − 𝛾),    (8.4)

where M = sup_{t∈[𝛾,𝑣]} Φ(x(t)).

Sufficient conditions for the trivial solution of the generalized ODE (8.1) to be variationally stable (see Definition 8.2(i)) are provided in the next result. Š. Schwabik proved this criterion for the case where X = ℝn. Due to the similarity with that result, we omit its proof here and refer the interested reader to [208, Theorem 1] or [209, Theorem 10.13] for more details.

Theorem 8.4: Let F ∈ ℱ(Ω, h) satisfy condition (ZS) and consider a Lyapunov functional V ∶ [t0, ∞) × B𝜌 → ℝ, where B𝜌 = {y ∈ X ∶ ‖y‖ ⩽ 𝜌} ⊂ O. Assume that V satisfies the additional conditions:

(i) V(t, 0) = 0, t ∈ [t0, ∞);


(ii) there is a constant K > 0 such that, for every t ∈ [t0, ∞) and every z, y ∈ B𝜌,

|V(t, z) − V(t, y)| ⩽ K‖z − y‖.

Then, the trivial solution x ≡ 0 of the generalized ODE (8.1) is variationally stable.

Under certain additional conditions, the trivial solution of the generalized ODE (8.1) is variationally asymptotically stable. This result was also stated by Š. Schwabik in [208, Theorem 2] for the case where X = ℝn. The reader can come up with a proof analogous to that of [208, Theorem 2] for the next result (see also [209, Theorem 10.14]).

Theorem 8.5: Assume that F ∈ ℱ(Ω, h) satisfies (ZS) and V ∶ [t0, ∞) × B𝜌 → ℝ is a Lyapunov functional, where B𝜌 = {y ∈ X ∶ ‖y‖ ⩽ 𝜌} ⊂ O. Assume, in addition, that V satisfies conditions (i) and (ii) from Theorem 8.4, and that there is a continuous function Φ ∶ X → ℝ, with Φ(0) = 0 and Φ(x) > 0 for x ≠ 0, such that, for every solution x ∶ I → B𝜌 of the generalized ODE (8.1), with I ⊂ [t0, ∞), we have

D⁺V(t, x(t)) = lim sup_{𝜂→0+} [V(t + 𝜂, x(t + 𝜂)) − V(t, x(t))]/𝜂 ⩽ −Φ(x(t)), t ∈ I ⧵ sup I.    (8.5)

Then, the trivial solution x ≡ 0 of the generalized ODE (8.1) is variationally asymptotically stable.

8.1.2 Converse Lyapunov Theorems

In this subsection, we prove that variational stability and variational asymptotic stability imply the existence of a Lyapunov functional with the properties described in Theorems 8.4 and 8.5, respectively. The results described in this subsection are borrowed from [208].

Throughout this subsection, we consider Ω = O × [t0, ∞), where O is an open subset of X such that 0 ∈ O and t0 ⩾ 0, F ∈ ℱ(Ω, h) satisfies (ZS), and we assume the existence of a local solution of the generalized ODE (8.1) on [s, s + 𝛿(s)] for all s ⩾ t0, where 𝛿(s) > 0 depends on s. The reader may want to consult Theorem 5.1 for a sufficient condition for this existence.

First, we introduce a modified notion of the variation of a given function.

Definition 8.6: Let −∞ < a < b < ∞ and f ∶ [a, b] → X be a given function. For a given division d = (ti) ∈ D[a,b] and for every 𝜆 ⩾ 0, we define

𝑣𝜆(f, d) = Σ_{i=1}^{|d|} e^{−𝜆(ti − ti−1)} ‖f(ti) − f(ti−1)‖

and

e𝜆 var_a^b(f) = sup_{d∈D[a,b]} 𝑣𝜆(f, d).
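For a fixed division d, the quantity 𝑣𝜆(f, d) is a variation sum with exponential weights e^{−𝜆(ti − ti−1)} ∈ [e^{−𝜆(b−a)}, 1], which is precisely what drives the comparison of the e𝜆-variation with the ordinary variation. A numerical sketch at the level of a single division, with an illustrative f and d of our choosing:

```python
import math

def v_lambda(f, d, lam):
    # v_lambda(f, d) from Definition 8.6 for a fixed division d of [a, b].
    return sum(math.exp(-lam * (t1 - t0)) * abs(f(t1) - f(t0))
               for t0, t1 in zip(d, d[1:]))

f = lambda t: math.sin(3.0 * t)         # illustrative scalar function
d = [0.0, 0.4, 0.9, 1.5, 2.0]           # a division of [a, b] = [0, 2]
a, b_, lam = d[0], d[-1], 1.3

var_d = v_lambda(f, d, 0.0)             # lambda = 0 gives the plain variation sum

# Division-level analogue of the bounds relating e_lambda-variation and variation:
# each weight lies between exp(-lam * (b - a)) and 1.
assert math.exp(-lam * (b_ - a)) * var_d <= v_lambda(f, d, lam) <= var_d
```

Taking the supremum over all divisions d ∈ D[a,b] turns these division-level bounds into the corresponding statement for e𝜆 var_a^b(f).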

The number e𝜆 var_a^b(f) is called the e𝜆-variation of the function f over [a, b].

The next result relates the concept of e𝜆-variation to the concept of variation of a given function. For a proof of this fact, we refer the interested reader to [209, Lemma 10.16] for the case where X is the n-dimensional Euclidean space. The proof for the case where X is an infinite dimensional Banach space is analogous and, therefore, will be omitted here.

Lemma 8.7: Let f ∶ [a, b] → X be a given function with −∞ < a < b < ∞. Then,

e^{−𝜆(b−a)} var_a^b(f) ⩽ e𝜆 var_a^b(f) ⩽ var_a^b(f),    (8.6)

for all 𝜆 ⩾ 0. Moreover, if a ⩽ c ⩽ b, then the identity

e𝜆 var_a^b(f) = e^{−𝜆(b−c)} e𝜆 var_a^c(f) + e𝜆 var_c^b(f)    (8.7)

holds for all 𝜆 ⩾ 0.

The following result gives us an important property of the e𝜆-variation and follows directly from Lemma 8.7.

Corollary 8.8: If a ⩽ c ⩽ b and 𝜆 ⩾ 0, then e𝜆 var_a^c(f) ⩽ e𝜆 var_a^b(f).

Now, let us consider a special set of functions of locally bounded variation. Let a > 0 be such that Ba = {x ∈ X ∶ ‖x‖ < a} ⊂ O. For t > t0 and x ∈ Ba, we consider the set

(SA) Aa(t, x) of all functions 𝜑 ∶ [t0, ∞) → X of locally bounded variation in [t0, ∞) such that 𝜑(t0) = 0, 𝜑(t) = x, sup_{s∈[t0,t]} ‖𝜑(s)‖ < a, and 𝜑 is left-continuous on (t0, ∞).

Moreover, for all 𝜆 ⩾ 0, s ⩾ t0, and x ∈ Ba, define V𝜆 ∶ [t0, ∞) × Ba → ℝ by

V𝜆(s, x) = inf_{𝜑∈Aa(s,x)} { e𝜆 var_{t0}^{s} (𝜑(𝜎) − ∫_{t0}^{𝜎} DF(𝜑(𝜏), t)) }, for s > t0,    (8.8)
V𝜆(s, x) = ‖x‖, for s = t0.

Since 𝜑 is of locally bounded variation, the Kurzweil integral ∫_{t0}^{𝜎} DF(𝜑(𝜏), t) exists for all 𝜎 ∈ [t0, s] and the function

[t0, s] ∋ 𝜎 ↦ 𝜑(𝜎) − ∫_{t0}^{𝜎} DF(𝜑(𝜏), t)

8.1 Variational Stability for Generalized ODEs

is also of bounded variation in [t0 , s] for all s ∈ [t0 , ∞), by Corollary 4.8. Furthermore, the e𝜆 -variation of this function is bounded. Therefore, V𝜆 is well-defined for all a > 0 and 𝜆 ⩾ 0. Additionally, the zero function 𝜑 ≡ 0 belongs to the set Aa (s, 0) and, hence, V𝜆 (s, 0) = 0,

for all s ⩾ t0 and 𝜆 ⩾ 0.

(8.9)

On the other hand, for all s ⩾ t0 and x ∈ Ba , we have V𝜆 (s, x) ⩾ 0, as ( ) 𝜎 s DF(𝜑(𝜏), t) ⩾ 0, for all 𝜑 ∈ Aa (s, x). e𝜆 vart0 𝜑(𝜎) − ∫t0 Lemma 8.9: Let F ∈  (Ω, h). For x, y ∈ Ba , s ∈ [t0 , ∞) and 𝜆 ⩾ 0, the functional V𝜆 given by (8.8) satisfies the inequality |V𝜆 (s, x) − V𝜆 (s, y)| ⩽∥ x − y ∥ . Proof. If s = t0 , then V𝜆 (t0 , y) − V𝜆 (t0 , x) =∥ y ∥ − ∥ x ∥⩽∥ y − x ∥ and the proof is complete. Now, assume that s > t0 . Let 𝜂 > 0 be such that 0 < 𝜂 < s − t0 and let 𝜑 ∈ Aa (s, x) be an arbitrary function, where Aa (s, x) is defined in (SA). Define a function 𝜑𝜂 ∶ [t0 , ∞) → X by if 𝜎 ∈ [t0 , s − 𝜂], ⎧𝜑(𝜎), ⎪ y − 𝜑(s − 𝜂) (𝜎 − s + 𝜂), if 𝜎 ∈ [s − 𝜂, s], 𝜑𝜂 (𝜎) = ⎨𝜑(s − 𝜂) + 𝜂 ⎪ if 𝜎 ∈ (s, ∞). ⎩y, Clearly, 𝜑𝜂 ∈ Aa (s, y). Then, by (8.7), we have ( ) 𝜎 DF(𝜑𝜂 (𝜏), t) V𝜆 (s, y) ⩽e𝜆 varst0 𝜑𝜂 (𝜎) − ∫t0 ( ) 𝜎 𝜑 =e−𝜆𝜂 e𝜆 vars−𝜂 (𝜎) − DF(𝜑 (𝜏), t) 𝜂 𝜂 t0 ∫t0 ( ) 𝜎 DF(𝜑𝜂 (𝜏), t) + e𝜆 varss−𝜂 𝜑𝜂 (𝜎) − ∫t0 ( ) 𝜎 𝜑(𝜎) − =e−𝜆𝜂 e𝜆 vars−𝜂 DF(𝜑(𝜏), t) t0 ∫t0 ( ) 𝜎 DF(𝜑𝜂 (𝜏), t) + e𝜆 varss−𝜂 𝜑𝜂 (𝜎) − ∫t0 ( ) 𝜎 𝜑(𝜎) − ⩽e−𝜆𝜂 e𝜆 vars−𝜂 DF(𝜑(𝜏), t) t0 ∫t0

249

250

8 Stability Theory

( + varss−𝜂 (𝜑𝜂 ) + varss−𝜂 ( 𝜑(𝜎) − ⩽e−𝜆𝜂 e𝜆 vars−𝜂 t 0

𝜎

∫t0 𝜎

∫t0

) DF(𝜑𝜂 (𝜏), t)

) DF(𝜑(𝜏), t)

+ ∥ y − 𝜑(s − 𝜂) ∥ +h(s) − h(s − 𝜂), where the last inequality follows from Lemma 4.5. By the fact that 𝜂 > 0, we obtain ( ( ) ) 𝜎 𝜎 s 𝜑(𝜎) − 𝜑(𝜎) − e−𝜆𝜂 e𝜆 vars−𝜂 DF(𝜑(𝜏), t) ⩽ e var DF(𝜑(𝜏), t) . 𝜆 t0 t0 ∫t0 ∫t0 Therefore, we conclude that ( V𝜆 (s, y) ⩽ e𝜆 varst0 𝜑(𝜎) −

𝜎

∫t0

Then, as 𝜂 → 0 , we obtain ( s V𝜆 (s, y) ⩽ e𝜆 vart0 𝜑(𝜎) −

) DF(𝜑(𝜏), t) + ∥ y − 𝜑(s − 𝜂) ∥ +h(s) − h(s − 𝜂).

+

𝜎

∫t0

) DF(𝜑(𝜏), t) + ∥ y − x ∥ .

Since the choice of 𝜑 ∈ Aa (s, x) is arbitrary, we can take the infimum for all 𝜑 ∈ Aa (s, x) on the right-hand side of the last inequality, obtaining V𝜆 (s, y) ⩽ V𝜆 (s, x)+ ∥ y − x ∥ . Analogously, we can prove that V𝜆 (s, x) ⩽ V𝜆 (s, y)+ ∥ y − x ∥, whence it follows ◽ that |V𝜆 (s, x) − V𝜆 (s, y)| ⩽∥ x − y ∥ . The proof of the next result can be found in [209, Lemma 10.20] for the case where X is the n-dimensional Euclidean space. The proof for the case where X is an arbitrary infinite dimensional Banach space is similar. For this reason, we omit it here. Lemma 8.10: Let F ∈  (Ω, h), y ∈ Ba , s, r ∈ [t0 , ∞), 𝜆 ⩾ 0 and V𝜆 be defined by (8.8). Then, the following inequality holds |V𝜆 (r, y) − V𝜆 (s, y)| ⩽ (1 − e−𝜆|r−s| )a + |h(r) − h(s)|. Corollary 8.11 in the sequel is an immediate consequence of Lemmas 8.9 and 8.10. Corollary 8.11: Let F ∈  (Ω, h), 𝜆 ⩾ 0 and V𝜆 be defined by (8.8). Then, for all x, y ∈ Ba and all r, s ∈ [t0 , ∞), we have |V𝜆 (s, x) − V𝜆 (r, y)| ⩽∥ x − y ∥ +(1 − e−𝜆|r−s| )a + |h(r) − h(s)|. The next result deals with the right derivative of V𝜆 .

Lemma 8.12: Let F ∈ 𝓕(Ω, h). If x : [s, s + δ(s)] → O is a solution of the generalized ODE (8.1), with s ⩾ t₀ and δ(s) > 0, then the function V_λ defined by (8.8) satisfies, for all λ ⩾ 0,

$$\limsup_{\eta \to 0^+} \frac{V_\lambda(s+\eta, x(s+\eta)) - V_\lambda(s, x(s))}{\eta} \leqslant -\lambda\, V_\lambda(s, x(s)).$$

Proof. Let x₀ = x(s) and let a > 0 be such that a > ‖x₀‖ + h(s+1) − h(s). Now, let φ ∈ A_a(s, x₀) be arbitrary. Let 0 < η < min{δ(s), 1} and define

$$\varphi_\eta(\sigma) = \begin{cases} \varphi(\sigma), & \text{if } \sigma \in [t_0, s], \\ x(\sigma), & \text{if } \sigma \in [s, s+\eta], \\ x(s+\eta), & \text{if } \sigma \in (s+\eta, \infty). \end{cases}$$

Then, φ(s) = x(s) = φ_η(s) = x₀ and φ_η ∈ A_a(s+η, x(s+η)). Indeed, using Lemma 4.5 and the fact that x is a solution of the generalized ODE (8.1), we obtain

$$\| x(\sigma) \| = \left\| x_0 + \int_s^\sigma DF(x(\tau), t) \right\| \leqslant \| x_0 \| + h(\sigma) - h(s) \leqslant \| x_0 \| + h(s+1) - h(s) < a,$$

for σ ∈ [s, s+η], since h is nondecreasing. Hence, by (8.7), we have

$$\begin{aligned} V_\lambda(s+\eta, x(s+\eta)) &\leqslant e_\lambda \operatorname{var}_{t_0}^{s+\eta}\left(\varphi_\eta(\sigma) - \int_{t_0}^{\sigma} DF(\varphi_\eta(\tau), t)\right) \\ &= e^{-\lambda\eta}\, e_\lambda \operatorname{var}_{t_0}^{s}\left(\varphi(\sigma) - \int_{t_0}^{\sigma} DF(\varphi(\tau), t)\right) + e_\lambda \operatorname{var}_{s}^{s+\eta}\left(x(\sigma) - \int_{t_0}^{s} DF(\varphi(\tau), t) - \int_{s}^{\sigma} DF(x(\tau), t)\right) \\ &= e^{-\lambda\eta}\, e_\lambda \operatorname{var}_{t_0}^{s}\left(\varphi(\sigma) - \int_{t_0}^{\sigma} DF(\varphi(\tau), t)\right) + e_\lambda \operatorname{var}_{s}^{s+\eta}\left(x_0 - \int_{t_0}^{s} DF(\varphi(\tau), t)\right) \\ &= e^{-\lambda\eta}\, e_\lambda \operatorname{var}_{t_0}^{s}\left(\varphi(\sigma) - \int_{t_0}^{\sigma} DF(\varphi(\tau), t)\right), \end{aligned}$$

since $x(\sigma) - \int_s^\sigma DF(x(\tau), t) = x_0$ for σ ∈ [s, s+η] and

$$\operatorname{var}_{s}^{s+\eta}\left(x_0 - \int_{t_0}^{s} DF(\varphi(\tau), t)\right) = 0,$$

the argument being constant in σ. Taking the infimum over all φ ∈ A_a(s, x₀) on the right-hand side of the inequality obtained above, we get

$$V_\lambda(s+\eta, x(s+\eta)) \leqslant e^{-\lambda\eta}\, V_\lambda(s, x_0) = e^{-\lambda\eta}\, V_\lambda(s, x(s));$$

consequently,

$$V_\lambda(s+\eta, x(s+\eta)) - V_\lambda(s, x(s)) \leqslant (e^{-\lambda\eta} - 1)\, V_\lambda(s, x(s)),$$

and, because lim_{η→0⁺} (e^{−λη} − 1)/η = −λ, we finally obtain

$$\limsup_{\eta \to 0^+} \frac{V_\lambda(s+\eta, x(s+\eta)) - V_\lambda(s, x(s))}{\eta} \leqslant -\lambda\, V_\lambda(s, x(s)). \qquad \square$$
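The key mechanism in the proof above is the one-step bound V_λ(s+η, x(s+η)) ⩽ e^{−λη} V_λ(s, x(s)); the stated limsup estimate then follows from the elementary limit (e^{−λη} − 1)/η → −λ as η → 0⁺, and iterating the one-step bound yields exponential decay. A quick numerical sanity check of these two facts (illustrative only):

```python
import math

lam = 2.0

# (e^{-lam*eta} - 1)/eta -> -lam as eta -> 0+; the error is O(eta)
for eta in (1e-2, 1e-4, 1e-6):
    q = (math.exp(-lam * eta) - 1.0) / eta
    assert abs(q + lam) < 3 * eta + 1e-9   # |q + lam| ~ lam^2 * eta / 2

# iterating V(s + eta) <= e^{-lam*eta} V(s) over n steps of size eta
# gives V(s + n*eta) <= e^{-lam*n*eta} V(s)
V, eta, n = 1.0, 0.01, 500
for _ in range(n):
    V *= math.exp(-lam * eta)   # worst case allowed by the one-step bound
assert abs(V - math.exp(-lam * n * eta)) < 1e-9
```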

In order to establish the converse theorems, we need a different concept of variational stability and an auxiliary result. For this purpose, consider the following nonhomogeneous generalized ODE

$$\frac{dx}{d\tau} = D[F(x, t) + P(t)], \tag{8.10}$$

where F ∈ 𝓕(Ω, h) satisfies condition (ZS) and P : [t₀, ∞) → X is a Kurzweil integrable function.

Definition 8.13: The trivial solution x ≡ 0 of the generalized ODE (8.1) is

(i) variationally stable with respect to perturbations, if for every ε > 0, there exists δ = δ(ε) > 0 such that if ‖x₀‖ < δ and P ∈ G⁻([γ, v], X), with var_γ^v(P) < δ, then
‖x(t, γ, x₀)‖ = ‖x(t)‖ < ε, for every t ∈ [γ, v],
where x(·, γ, x₀) is the solution of the nonhomogeneous generalized ODE (8.10), with x(γ, γ, x₀) = x₀ and [γ, v] ⊂ [t₀, ∞);

(ii) variationally attracting with respect to perturbations, if there is a δ̃ > 0 and, for every ε > 0, there exist T = T(ε) ⩾ 0 and ρ = ρ(ε) > 0 such that if ‖x₀‖ < δ̃ and P ∈ G⁻([γ, v], X), with var_γ^v(P) < ρ, then
‖x(t, γ, x₀)‖ = ‖x(t)‖ < ε, for every t ⩾ γ + T, t ∈ [γ, v],
where x(·, γ, x₀) is the solution of the nonhomogeneous generalized ODE (8.10) with x(γ, γ, x₀) = x₀ and [γ, v] ⊂ [t₀, ∞);

(iii) variationally asymptotically stable with respect to perturbations, if it is both variationally stable and variationally attracting with respect to perturbations.

The proof of the next result can be found in [89, Proposition 3.1].

Proposition 8.14: Let F ∈ 𝓕(Ω, h) satisfy condition (ZS). The following statements hold:

(i) the trivial solution x ≡ 0 of the generalized ODE (8.1) is variationally stable if and only if it is variationally stable with respect to perturbations;
(ii) the trivial solution x ≡ 0 of the generalized ODE (8.1) is variationally attracting if and only if it is variationally attracting with respect to perturbations;
(iii) the trivial solution x ≡ 0 of the generalized ODE (8.1) is variationally asymptotically stable if and only if it is variationally asymptotically stable with respect to perturbations.

Now, we are able to prove the converse theorems for variational stability and variational asymptotic stability of the trivial solution x ≡ 0 of the generalized ODE (8.1). The next results are borrowed from [89].

Theorem 8.15: Let F ∈ 𝓕(Ω, h) satisfy condition (ZS). If the trivial solution x ≡ 0 of the generalized ODE (8.1) is variationally stable, then, for every a > 0 such that B_a = {x ∈ X : ‖x‖ < a} ⊂ O, there exists a function V : [t₀, ∞) × B_a → ℝ satisfying the following conditions:

(i) for every x ∈ B_a, the function V(·, x) is left-continuous on (t₀, ∞);
(ii) V(t, 0) = 0 and |V(t, x) − V(t, y)| ⩽ ‖x − y‖ for all x, y ∈ B_a and t ∈ [t₀, ∞);
(iii) for every solution x : [s, s + δ(s)] → B_a, with δ(s) > 0 and s ⩾ t₀, of the generalized ODE (8.1), we have
$$\limsup_{\eta \to 0^+} \frac{V(s+\eta, x(s+\eta)) - V(s, x(s))}{\eta} \leqslant 0,$$
that is, the right derivative of V along every solution x(·) of (8.1) is non-positive;
(iv) there exists a continuous increasing function b : ℝ₊ → ℝ₊ such that b(0) = 0 and b(‖x‖) ⩽ V(t, x), for all x ∈ B_a and t ∈ [t₀, ∞).

Proof. Let V₀(s, x) be defined by (8.8) with λ = 0, and define V(s, x) = V₀(s, x) for all s ⩾ t₀ and x ∈ B_a. Item (i) is a consequence of Corollary 8.11. By Lemma 8.9 and Eq. (8.9), we obtain item (ii). By Lemma 8.12, item (iii) is also satisfied.

Lastly, we need to verify that condition (iv) holds. In order to do that, we use the variational stability of the trivial solution x ≡ 0 of the generalized ODE (8.1). Assume, to the contrary, that item (iv) does not hold. Then, there exist ε, with 0 < ε < a, and a sequence {(t_k, x_k)}_{k∈ℕ} in [t₀, ∞) × B_a such that ε ⩽ ‖x_k‖ < a and V(t_k, x_k) → 0 as k → ∞. By Proposition 8.14(i), the trivial solution is variationally stable with respect to perturbations; take δ = δ(ε) > 0 as in Definition 8.13(i). Let k₀ ∈ ℕ be such that V(t_k, x_k) < δ for all k > k₀. Hence, for each such k, there exists φ_k ∈ A_a(t_k, x_k) satisfying

$$\operatorname{var}_{t_0}^{t_k}\left(\varphi_k(\sigma) - \int_{t_0}^{\sigma} DF(\varphi_k(\tau), t)\right) < \delta.$$

Define a function P : [t₀, ∞) → X by

$$P(\sigma) = \begin{cases} \varphi_k(\sigma) - \displaystyle\int_{t_0}^{\sigma} DF(\varphi_k(\tau), t), & \text{for } \sigma \in [t_0, t_k], \\[6pt] x_k - \displaystyle\int_{t_0}^{t_k} DF(\varphi_k(\tau), t), & \text{for } \sigma \in [t_k, \infty). \end{cases}$$

Hence,

$$\operatorname{var}_{t_0}^{\infty}(P) = \operatorname{var}_{t_0}^{t_k}\left(\varphi_k(\sigma) - \int_{t_0}^{\sigma} DF(\varphi_k(\tau), t)\right) < \delta$$

and the function P is left-continuous. For σ ∈ [t₀, t_k], we have

$$\begin{aligned} \varphi_k(\sigma) &= \varphi_k(\sigma) + \int_{t_0}^{\sigma} DF(\varphi_k(\tau), t) - \int_{t_0}^{\sigma} DF(\varphi_k(\tau), t) \\ &= \varphi_k(t_0) + \int_{t_0}^{\sigma} DF(\varphi_k(\tau), t) + P(\sigma) \\ &= \varphi_k(t_0) + \int_{t_0}^{\sigma} DF(\varphi_k(\tau), t) + P(\sigma) - P(t_0) \\ &= \varphi_k(t_0) + \int_{t_0}^{\sigma} D[F(\varphi_k(\tau), t) + P(t)], \end{aligned}$$

since φ_k ∈ A_a(t_k, x_k) and P(t₀) = φ_k(t₀) = 0. Therefore, φ_k is a solution of the nonhomogeneous generalized ODE

$$\frac{dy}{d\tau} = D[F(y, t) + P(t)].$$

Then, since the trivial solution x ≡ 0 of the generalized ODE (8.1) is variationally stable with respect to perturbations and var_{t₀}^{t_k}(P) < δ, we conclude that ‖φ_k(s)‖ < ε for all s ∈ [t₀, t_k]. In particular, we have ‖φ_k(t_k)‖ = ‖x_k‖ < ε, which contradicts the assumption that ‖x_k‖ ⩾ ε. ◽

Theorem 8.16: Let F ∈ 𝓕(Ω, h) satisfy condition (ZS). If the trivial solution x ≡ 0 of the generalized ODE (8.1) is variationally asymptotically stable, then there exist

a > 0 such that B_a = {x ∈ X : ‖x‖ < a} ⊂ O and a function V : [t₀, ∞) × B_a → ℝ satisfying the following conditions:

(i) for every x ∈ B_a, the function V(·, x) is left-continuous on (t₀, ∞);
(ii) V(t, 0) = 0 and |V(t, x) − V(t, y)| ⩽ ‖x − y‖ for all x, y ∈ B_a and t ∈ [t₀, ∞);
(iii) for every solution x : [s, s + δ(s)] → B_a, with δ(s) > 0 and s ⩾ t₀, of the generalized ODE (8.1), we have
$$\limsup_{\eta \to 0^+} \frac{V(s+\eta, x(s+\eta)) - V(s, x(s))}{\eta} \leqslant -t_0\, V(s, x(s));$$
(iv) there exists a continuous increasing function b : ℝ₊ → ℝ₊ such that b(0) = 0 and b(‖x‖) ⩽ V(t, x) for every x ∈ B_a and t ∈ [t₀, ∞).

Proof. By Proposition 8.14(ii), the trivial solution x ≡ 0 of the generalized ODE (8.1) is variationally attracting with respect to perturbations, since it is variationally asymptotically stable. Take δ̃ as in Definition 8.13(ii) and 0 < a < δ̃. Let V_{t₀}(s, x) be defined by (8.8) with λ = t₀, and define

$$V(s, x) = V_{t_0}(s, x), \quad \text{for all } s \geqslant t_0 \text{ and } x \in B_a. \tag{8.11}$$

Items (i), (ii), and (iii) follow in the same way as in the proof of Theorem 8.15. Assume, to the contrary, that condition (iv) does not hold. Then, there exist ε, with 0 < ε < a, and a sequence {(t_k, x_k)}_{k∈ℕ} such that ε ⩽ ‖x_k‖ < a, t_k → ∞, and V(t_k, x_k) → 0 as k → ∞. Since the trivial solution x ≡ 0 of the generalized ODE (8.1) is variationally attracting with respect to perturbations, there are T = T(ε) ⩾ 0 and ρ(ε) > 0 such that if ‖y₀‖ < δ̃ and P ∈ G⁻([α, β], X), with var_α^β(P) < ρ(ε), then

$$\| y(t, \alpha, y_0) \| < \epsilon, \quad \text{for all } t \in [\alpha, \beta] \cap [\alpha + T, \infty) \text{ and } \alpha \geqslant t_0,$$

where y(·, α, y₀) is a solution of the nonhomogeneous generalized ODE

$$\frac{dy}{d\tau} = D[F(y, t) + P(t)]$$

satisfying y(α, α, y₀) = y₀.

Choose k₀ ∈ ℕ such that t_k > T + t₀ and V(t_k, x_k) < ρ(ε) e^{−t₀ T} for all k ∈ ℕ, k ⩾ k₀, and fix k ⩾ k₀. By the definition of V given by (8.11), we can select φ ∈ A_a(t_k, x_k) such that

$$e_{t_0} \operatorname{var}_{t_0}^{t_k}\left(\varphi(\sigma) - \int_{t_0}^{\sigma} DF(\varphi(\tau), t)\right) < \rho(\epsilon)\, e^{-t_0 T}.$$

Defining α = t_k − T, we have α > t₀, since t_k > T + t₀. Hence, it follows from (8.7) that

$$e_{t_0} \operatorname{var}_{\alpha}^{t_k}\left(\varphi(\sigma) - \int_{t_0}^{\sigma} DF(\varphi(\tau), t)\right) < \rho(\epsilon)\, e^{-t_0 T}.$$

Moreover, by (8.6) applied on [α, t_k] with λ = t₀, and since t_k − α = T, we also have

$$e^{-t_0 T} \operatorname{var}_{\alpha}^{t_k}\left(\varphi(\sigma) - \int_{t_0}^{\sigma} DF(\varphi(\tau), t)\right) = e^{-t_0 (t_k - \alpha)} \operatorname{var}_{\alpha}^{t_k}\left(\varphi(\sigma) - \int_{t_0}^{\sigma} DF(\varphi(\tau), t)\right) \leqslant e_{t_0} \operatorname{var}_{\alpha}^{t_k}\left(\varphi(\sigma) - \int_{t_0}^{\sigma} DF(\varphi(\tau), t)\right) < \rho(\epsilon)\, e^{-t_0 T}.$$

Therefore,

$$\operatorname{var}_{\alpha}^{t_k}\left(\varphi(\sigma) - \int_{t_0}^{\sigma} DF(\varphi(\tau), t)\right) < \rho(\epsilon). \tag{8.12}$$

For σ ∈ [α, t_k], define

$$P(\sigma) = \varphi(\sigma) - \int_{t_0}^{\sigma} DF(\varphi(\tau), t).$$

Then, P ∈ G⁻([α, t_k], X) and, by (8.12), var_α^{t_k}(P) < ρ(ε). Furthermore, notice that φ : [α, t_k] → X is a solution of the nonhomogeneous generalized ODE

$$\frac{dy}{d\tau} = D[F(y, t) + P(t)].$$

Indeed, we have

$$\begin{aligned} \varphi(\sigma) &= \varphi(\sigma) + \int_{t_0}^{\sigma} DF(\varphi(\tau), t) - \int_{t_0}^{\sigma} DF(\varphi(\tau), t) \\ &= \int_{t_0}^{\sigma} DF(\varphi(\tau), t) + P(\sigma) \\ &= P(\alpha) + \int_{t_0}^{\alpha} DF(\varphi(\tau), t) + \int_{\alpha}^{\sigma} DF(\varphi(\tau), t) + P(\sigma) - P(\alpha) \\ &= \varphi(\alpha) + \int_{\alpha}^{\sigma} D[F(\varphi(\tau), t) + P(t)]. \end{aligned}$$

Since φ ∈ A_a(t_k, x_k) and a < δ̃, we get ‖φ(α)‖ < δ̃ and, by the variational attractivity of the trivial solution with respect to perturbations, the inequality ‖φ(t)‖ < ε is satisfied for every t ∈ [α, t_k] with t ⩾ α + T. This is valid, in particular, for t = t_k = α + T, that is, ‖φ(t_k)‖ = ‖x_k‖ < ε, which contradicts the fact that ‖x_k‖ ⩾ ε. ◽

8.2 Lyapunov Stability for Generalized ODEs

This section is devoted to the study of Lyapunov (also known as uniform) stability theory in the context of generalized ODEs, and it is divided into two subsections.

Recall that the function F : Ω → X belongs to the class 𝓕(Ω, h), where Ω = O × [t₀, ∞), O is an open subset of X such that 0 ∈ O, h : [t₀, ∞) → ℝ is a left-continuous and nondecreasing function, and F satisfies (ZS). Denote by x(·) = x(·, s₀, x₀) the maximal solution x : [s₀, ω(s₀, x₀)) → O of the generalized ODE (8.1), with initial condition x(s₀) = x₀, where t₀ ⩽ s₀ < ω(s₀, x₀) ⩽ ∞. The existence of such a solution is assumed throughout this section and can be guaranteed by Theorem 5.11, if we consider Ω = Ω_F, where Ω_F is given by (5.13).

In the following lines, we present the classical definitions of stability, uniform stability, and uniform asymptotic stability for the trivial solution of the generalized ODE (8.1), which were first introduced in [80].

Definition 8.17: The trivial solution of the generalized ODE (8.1) is

(i) stable, if for any s₀ ⩾ t₀ and ε > 0, there exists δ = δ(ε, s₀) > 0 such that if x₀ ∈ O with ‖x₀‖ < δ, then
‖x(t, s₀, x₀)‖ = ‖x(t)‖ < ε, for all t ∈ [s₀, ω(s₀, x₀)),
where x(·, s₀, x₀) is the solution of the generalized ODE (8.1) with x(s₀, s₀, x₀) = x₀;

(ii) uniformly stable (Lyapunov stable), if it is stable with δ independent of s₀;

(iii) uniformly asymptotically stable, if there exists δ₀ > 0 such that for every ε > 0, there exists T = T(ε) ⩾ 0 such that if s₀ ⩾ t₀ and x₀ ∈ O, with ‖x₀‖ < δ₀, then
‖x(t, s₀, x₀)‖ < ε, for all t ∈ [s₀, ω(s₀, x₀)) ∩ [s₀ + T(ε), ∞),
where x(·, s₀, x₀) is the solution of the generalized ODE (8.1) with x(s₀, s₀, x₀) = x₀.
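For instance, for the classical ODE ẋ = −x (viewed as a generalized ODE), the solution through (s₀, x₀) is x(t) = x₀ e^{−(t−s₀)}, so ‖x(t)‖ ⩽ ‖x₀‖ for all t ⩾ s₀ and the choice δ(ε) = ε works for every initial time s₀, giving uniform stability in the sense of item (ii). A numerical illustration of this δ-choice (the equation and values are hypothetical, chosen only to illustrate the definition):

```python
import math

def sol(t, s0, x0):
    # solution of x' = -x with x(s0) = x0
    return x0 * math.exp(-(t - s0))

eps = 0.1
delta = eps  # delta independent of s0: this is uniform stability

for s0 in (0.0, 5.0, 100.0):        # several initial times
    x0 = 0.99 * delta                # any |x0| < delta
    for k in range(200):             # sample t in [s0, s0 + 20]
        t = s0 + 0.1 * k
        assert abs(sol(t, s0, x0)) < eps
```

The same δ passes the test for every s₀, which is exactly the difference between item (i), where δ may depend on s₀, and item (ii), where it may not.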

8.2.1 Direct Method of Lyapunov

Using the Lyapunov direct method, we present criteria for uniform stability and uniform asymptotic stability for generalized ODEs. The results described here do not require any Lipschitz-type condition on the Lyapunov functional. See [80] for more details.

Theorem 8.18: Let F ∈ 𝓕(Ω, h) satisfy condition (ZS) and let V : [t₀, ∞) × B_ρ → ℝ be a Lyapunov functional with respect to the generalized ODE (8.1), where ρ > 0, B_ρ = {x ∈ X : ‖x‖ ⩽ ρ}, and B_ρ ⊂ O. Assume that:

(i) there exists a continuous increasing function a : ℝ₊ → ℝ₊ such that a(0) = 0 and V(t, x) ⩽ a(‖x‖), for all t ∈ [t₀, ∞) and x ∈ B_ρ;
(ii) the function I ∋ t ↦ V(t, x(t)), I ⊂ [t₀, ∞), is nonincreasing along every solution x : I → B_ρ of the generalized ODE (8.1).

Then, the trivial solution x ≡ 0 of the generalized ODE (8.1) is uniformly stable.


Proof. By the definition of a Lyapunov functional with respect to the generalized ODE (8.1) (see Definition 8.1, condition (LF2)), there exists a continuous strictly increasing function b : ℝ₊ → ℝ₊ such that

$$V(t, x) \geqslant b(\| x \|), \quad \text{for all } (t, x) \in [t_0, \infty) \times B_\rho. \tag{8.13}$$

Given ε > 0, we have b(ε) > 0; since a is continuous with a(0) = 0, there exists δ = δ(ε) > 0 such that a(δ) < b(ε). Let x₀ ∈ B_ρ and let x(t) = x(t, s₀, x₀), t ∈ [s₀, ω(s₀, x₀)), be the maximal solution of the generalized ODE (8.1), and assume that ‖x(s₀)‖ = ‖x₀‖ < δ. Condition (ii) and (8.13) imply that

$$b(\| x(t) \|) \leqslant V(t, x(t)) \leqslant V(s_0, x_0) \leqslant a(\| x_0 \|) \leqslant a(\delta) < b(\epsilon),$$

for all t ∈ [s₀, ω(s₀, x₀)). Then, because b is an increasing function, we conclude that ‖x(t)‖ < ε for all t ∈ [s₀, ω(s₀, x₀)), whence it follows that the trivial solution of the generalized ODE (8.1) is uniformly stable. ◽

In the sequel, we present an example of a Lyapunov functional that does not satisfy a Lipschitz-type condition. See [80, Example 2.6].

Example 8.19: Consider Ω = (−1, 1) × [0, ∞) and let F : Ω → ℝ be given by F(x, t) = −ktx, where k > 0 is fixed. Define h : [0, ∞) → ℝ by h(t) = kt, for all t ∈ [0, ∞). Note that, by definition, the function h is left-continuous on (0, ∞) and increasing on [0, ∞). Also, it is easy to prove that F ∈ 𝓕(Ω, h). Consider the generalized ODE

$$\frac{dx}{d\tau} = DF(x, t) = D[-ktx]. \tag{8.14}$$

The generalized ODE (8.14) is associated with the autonomous ODE

$$\frac{dx}{dt} = -kx. \tag{8.15}$$

Notice that x is a solution of the generalized ODE (8.14) if and only if x is a solution of the autonomous ODE (8.15) (see [80, Remark 2.7]). Therefore, x(t) = x₀ e^{−k(t−s₀)} is the solution of the generalized ODE (8.14) with initial condition x(s₀) = x₀ ∈ (−1, 1).

Define V : [0, ∞) × ℝ → ℝ by V(t, x) = x². Since V does not depend on the first variable, the function V(·, x) : [0, ∞) → ℝ is left-continuous on (0, ∞), for all x ∈ ℝ.

In addition, if x : [α, β] → (−1, 1), [α, β] ⊂ ℝ₊, is a solution of the generalized ODE (8.14), then

$$D^+ V(t, x(t)) = \limsup_{\eta \to 0^+} \frac{x_0^2 e^{-2k(t+\eta-s_0)} - x_0^2 e^{-2k(t-s_0)}}{\eta} = -2k\, x_0^2 e^{-2k(t-s_0)} \leqslant 0,$$

for all t ∈ [α, β), that is, the right derivative of V is non-positive along every solution of the generalized ODE (8.14). Hence, condition (LF3) of Definition 8.1 is satisfied. Moreover, the function [α, β] ∋ t ↦ V(t, x(t)) = x²(t) ∈ [0, ∞) is left-continuous on [α, β). Then, V satisfies condition (R1) and, therefore, the function [α, β] ∋ t ↦ V(t, x(t)) ∈ ℝ is nonincreasing, which gives condition (ii) of Theorem 8.18.

Now, consider the continuous increasing function b : ℝ₊ → ℝ₊ given by

$$b(s) = \frac{1}{2} s^2, \quad \text{for all } s \in \mathbb{R}_+.$$

Note that condition (LF2) is satisfied, since

$$V(t, x) = x^2 = |x^2| \geqslant \frac{1}{2} |x^2| = b(|x|),$$

for all (t, x) ∈ ℝ₊ × ℝ. Therefore, V(t, x) = x² is a Lyapunov functional with respect to the generalized ODE (8.14).

Let a : ℝ₊ → ℝ₊ be given by

$$a(s) = \frac{3}{2} s^2, \quad \text{for all } s \in \mathbb{R}_+.$$

Then, a is a continuous increasing function, a(0) = 0, and for all x ∈ ℝ, we have

$$V(t, x) = x^2 = |x|^2 \leqslant \frac{3}{2} |x|^2 = a(|x|),$$

whence it follows that condition (i) of Theorem 8.18 holds.

In contrast, V(t, x) = x² is not a Lipschitz function. Indeed, assume that there is L > 0 such that |V(t, x) − V(t, y)| ⩽ L|x − y| for all (t, x) ∈ [0, ∞) × ℝ. Taking x = L + 1 and y = 0, we conclude that L + 1 ⩽ L, which does not occur, since L > 0.
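The claims of Example 8.19 can be checked numerically: along the solution x(t) = x₀ e^{−k(t−s₀)}, the functional V(t, x) = x² is nonincreasing, its right derivative equals −2kV ⩽ 0, and the bounds b(|x|) ⩽ V(t, x) ⩽ a(|x|) hold. A small sketch with hypothetical parameter values:

```python
import math

k, s0, x0 = 0.5, 0.0, 0.8   # hypothetical values with x0 in (-1, 1)

def x(t):
    # solution of (8.15), hence of (8.14), with x(s0) = x0
    return x0 * math.exp(-k * (t - s0))

def V(t, xi):
    return xi * xi

ts = [s0 + 0.05 * i for i in range(400)]
vals = [V(t, x(t)) for t in ts]

# V(t, x(t)) is nonincreasing along the solution (condition (ii))
assert all(v1 >= v2 for v1, v2 in zip(vals, vals[1:]))

# b(|x|) = x^2/2 <= V(t, x) <= 3 x^2/2 = a(|x|) (conditions (LF2) and (i))
assert all(0.5 * x(t) ** 2 <= V(t, x(t)) <= 1.5 * x(t) ** 2 for t in ts)

# the right derivative of V along the solution is -2k V <= 0 (condition (LF3))
eta, t = 1e-6, 1.0
dV = (V(t + eta, x(t + eta)) - V(t, x(t))) / eta
assert dV <= 0 and abs(dV - (-2 * k * V(t, x(t)))) < 1e-4
```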

In the sequel, we present a Lyapunov-type theorem on uniform asymptotic stability of the trivial solution of the generalized ODE (8.1). Such a result can be found in [80, Theorem 3.6].

Theorem 8.20: Let F ∈ 𝓕(Ω, h) satisfy condition (ZS) and let V : [t₀, ∞) × B_ρ → ℝ be a Lyapunov functional with respect to the generalized ODE (8.1), where ρ > 0, B_ρ = {x ∈ X : ‖x‖ ⩽ ρ}, and B_ρ ⊂ O. Assume that the following conditions hold:

(i) there exists a continuous increasing function a : ℝ₊ → ℝ₊ such that a(0) = 0 and, for every solution x : I → B_ρ of the generalized ODE (8.1), with I ⊂ [t₀, ∞), we have
V(t, x(t)) ⩽ a(‖x(t)‖), for all t ∈ I;

(ii) there exists a continuous function Φ : X → ℝ satisfying Φ(0) = 0 and Φ(x) > 0 for x ≠ 0, such that, for every solution x : I → B_ρ of the generalized ODE (8.1), with I ⊂ [t₀, ∞), we have
V(s, x(s)) − V(t, x(t)) ⩽ (s − t)(−Φ(x(t))), for all t, s ∈ I, t ⩽ s.

Then, the trivial solution x ≡ 0 of the generalized ODE (8.1) is uniformly asymptotically stable.

Proof. Let δ₀ = ρ/2 and ε > 0. Since all hypotheses of Theorem 8.18 are satisfied, the trivial solution x ≡ 0 of the generalized ODE (8.1) is uniformly stable and, therefore, there exists δ(ε) = δ ∈ (0, ρ) such that if τ₀ ⩾ t₀ and y₀ ∈ B_ρ with ‖y₀‖ < δ, then

$$\| x(t, \tau_0, y_0) \| < \epsilon, \quad \text{for all } t \in [\tau_0, \omega(\tau_0, y_0)). \tag{8.16}$$

Define

$$N = \sup\,\{-\Phi(y) : \delta(\epsilon) \leqslant \| y \| < \rho\} < 0 \quad \text{and} \quad T(\epsilon) = -\frac{a(\delta_0)}{N} > 0.$$

Let s₀ ⩾ t₀, x₀ ∈ B_ρ with ‖x₀‖ < δ₀, and let x : [s₀, ω(s₀, x₀)) → O be the maximal solution of the generalized ODE (8.1) with initial condition x(s₀) = x₀. We need to show that ‖x(t)‖ < ε for all t ∈ [s₀, ω(s₀, x₀)) ∩ [s₀ + T(ε), ∞). First, notice that if T(ε) ⩾ ω(s₀, x₀) − s₀, then [s₀, ω(s₀, x₀)) ∩ [s₀ + T(ε), ∞) = ∅ and there is nothing to prove. Therefore, we may assume that T(ε) < ω(s₀, x₀) − s₀.

Claim. There exists t̄ ∈ [s₀, s₀ + T(ε)] such that ‖x(t̄)‖ < δ(ε).

Indeed, assume that ‖x(s)‖ ⩾ δ(ε) for all s ∈ [s₀, s₀ + T(ε)]. By hypotheses (i) and (ii), we have

$$V(s_0 + T(\epsilon), x(s_0 + T(\epsilon))) \leqslant V(s_0, x(s_0)) + T(\epsilon)\,(-\Phi(x(s_0))) \leqslant a(\| x(s_0) \|) + T(\epsilon)\, N \leqslant a(\delta_0) + \left(-\frac{a(\delta_0)}{N}\right) N = 0. \tag{8.17}$$

On the other hand, condition (LF2) implies that

$$V(s_0 + T(\epsilon), x(s_0 + T(\epsilon))) \geqslant b(\| x(s_0 + T(\epsilon)) \|) \geqslant b(\delta(\epsilon)) > 0,$$

which contradicts (8.17) and proves the Claim.

Finally, set y₀ = x(t̄). By (8.16), we have ‖x(t, t̄, y₀)‖ < ε for all t ∈ [t̄, ω(t̄, y₀)). Consequently, ‖x(t)‖ < ε for all t ∈ [s₀ + T(ε), ω(s₀, x₀)), since t̄ ∈ [s₀, s₀ + T(ε)] and T(ε) < ω(s₀, x₀) − s₀. This shows that the trivial solution x ≡ 0 of the generalized ODE (8.1) is uniformly asymptotically stable. ◽
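The bookkeeping in the proof above can be made concrete. With the hypothetical choices ρ = 1, a(s) = 3/2 s², Φ(y) = y², and δ(ε) = ε (all of these function choices are illustrative assumptions, not from the text), the supremum N = sup{−Φ(y) : δ(ε) ⩽ |y| < ρ} equals −δ(ε)², and T(ε) = −a(δ₀)/N with δ₀ = ρ/2. A short sketch of this arithmetic:

```python
rho = 1.0
delta0 = rho / 2

def a(s):
    # hypothetical bound from condition (i)
    return 1.5 * s * s

def Phi(y):
    # hypothetical decay rate function from condition (ii)
    return y * y

def T(eps):
    delta = eps  # hypothetical delta(eps) supplied by uniform stability
    # N = sup{-Phi(y) : delta <= |y| < rho}; approximated on a grid,
    # and attained at |y| = delta for this Phi
    N = max(-Phi(delta + (rho - delta) * j / 1000) for j in range(1000))
    return -a(delta0) / N

assert T(0.5) > 0
assert T(0.1) > T(0.5)   # a smaller eps forces a longer waiting time T(eps)
```

The second assertion reflects the qualitative content of the proof: the closer to the origin the solution must end up, the longer one must wait before the decay condition (ii) forces it below δ(ε).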

8.3 Lyapunov Stability for MDEs

In this section, we use the results established in the previous sections to obtain Lyapunov stability results for MDEs. The results described in this section are borrowed from [80] and adapted to the Banach space-valued case. As before, we consider a Banach space X with norm ‖·‖, and an open set O ⊂ X containing the origin.

Now, let us consider an MDE in the integral form

$$x(t) = x(\tau_0) + \int_{\tau_0}^{t} f(x(s), s)\, dg(s), \quad t \geqslant \tau_0, \tag{8.18}$$

where τ₀ ⩾ t₀ ⩾ 0, f : O × [t₀, ∞) → X, and g : [t₀, ∞) → ℝ.

We remind the reader that G([t₀, ∞), X) denotes the vector space of all functions x : [t₀, ∞) → X such that x ∈ G([α, β], X) for all [α, β] ⊂ [t₀, ∞). Moreover, the set G₀([t₀, ∞), X) denotes the vector space of all functions x ∈ G([t₀, ∞), X) such that

$$\sup_{s \in [t_0, \infty)} e^{-(s-t_0)} \| x(s) \| < \infty.$$

This space, equipped with the norm

$$\| x \|_{[t_0, \infty)} = \sup_{s \in [t_0, \infty)} e^{-(s-t_0)} \| x(s) \|,$$

for x ∈ G₀([t₀, ∞), X), becomes a Banach space (see Proposition 1.9).

We also assume the following conditions:

(A1) the function g : [t₀, ∞) → ℝ is nondecreasing and left-continuous on (t₀, ∞);

(A2) the Perron–Stieltjes integral $\int_{s_1}^{s_2} f(x(s), s)\, dg(s)$ exists, for all x ∈ G([t₀, ∞), O) and all s₁, s₂ ∈ [t₀, ∞);

(A3) there exists a locally Perron–Stieltjes integrable function M : [t₀, ∞) → ℝ with respect to g such that
$$\left\| \int_{s_1}^{s_2} f(x(s), s)\, dg(s) \right\| \leqslant \int_{s_1}^{s_2} M(s)\, dg(s),$$
for all x ∈ G([t₀, ∞), O) and all s₁, s₂ ∈ [t₀, ∞), with s₁ ⩽ s₂;


(A4) there exists a locally Perron–Stieltjes integrable function L : [t₀, ∞) → ℝ with respect to g such that
$$\left\| \int_{s_1}^{s_2} [f(x(s), s) - f(z(s), s)]\, dg(s) \right\| \leqslant \| x - z \|_{[t_0, \infty)} \int_{s_1}^{s_2} L(s)\, dg(s),$$
for all x, z ∈ G₀([t₀, ∞), O) and all s₁, s₂ ∈ [t₀, ∞), with s₁ ⩽ s₂.

Under the aforementioned conditions, it is known that x : I → O is a solution of the MDE (8.18) on I ⊂ [t₀, ∞) if and only if x is a solution of the generalized ODE

$$\frac{dx}{d\tau} = DF(x, t) \tag{8.19}$$

on I, where the function F : O × [t₀, ∞) → X is given by

$$F(x, t) = \int_{\tau_0}^{t} f(x, s)\, dg(s), \tag{8.20}$$

for all (x, t) ∈ O × [t₀, ∞); see Theorem 5.18.

Throughout this section, we assume that f(0, t) = 0 for all t ⩾ t₀. This implies that the function x ≡ 0 is a solution of the MDE (8.18). Moreover, we denote by x(·) = x(·, s₀, x₀) the unique maximal solution x : [s₀, ω(s₀, x₀)) → O of the MDE (8.18) with initial condition x(s₀) = x₀. A sufficient condition for the existence and uniqueness of this maximal solution can be found in Theorem 5.21. For simplicity of notation, when it is clear, we write only ω instead of ω(s₀, x₀).

Next, we present the definitions of Lyapunov stability, uniform stability, and uniform asymptotic stability of the trivial solution of the MDE (8.18).

Definition 8.21: The trivial solution of the MDE (8.18) is

(i) stable, if for every s₀ ⩾ t₀ and ε > 0, there exists δ = δ(ε, s₀) > 0 such that if x₀ ∈ O with ‖x₀‖ < δ, then
‖x(t, s₀, x₀)‖ = ‖x(t)‖ < ε, for all t ∈ [s₀, ω(s₀, x₀)),
where x(·, s₀, x₀) is the solution of the MDE (8.18) with x(s₀, s₀, x₀) = x₀;

(ii) uniformly stable (Lyapunov stable), if it is stable and δ is independent of s₀;

(iii) uniformly asymptotically stable, if there exists δ₀ > 0 and, for every ε > 0, there exists T = T(ε) ⩾ 0 such that if s₀ ⩾ t₀ and x₀ ∈ O, with ‖x₀‖ < δ₀, then
‖x(t, s₀, x₀)‖ = ‖x(t)‖ < ε, for all t ∈ [s₀, ω(s₀, x₀)) ∩ [s₀ + T, ∞),
where x(·, s₀, x₀) is the solution of the MDE (8.18) such that x(s₀, s₀, x₀) = x₀.

In the sequel, we present the concept of a Lyapunov functional with respect to the MDE (8.18).

Definition 8.22: We say that U : [t₀, ∞) × B_ρ → ℝ, where ρ > 0 and B_ρ = {x ∈ X : ‖x‖ ⩽ ρ} ⊂ O, is a Lyapunov functional with respect to the MDE (8.18), if the following conditions are satisfied:

(LFM1) for all x ∈ B_ρ, the function U(·, x) : [t₀, ∞) → ℝ is left-continuous on (t₀, ∞);

(LFM2) there exists a continuous increasing function b : ℝ₊ → ℝ₊ satisfying b(0) = 0 such that
U(t, x) ⩾ b(‖x‖), for every (t, x) ∈ [t₀, ∞) × B_ρ;

(LFM3) for every solution x : I ⊂ [t₀, ∞) → B_ρ of the MDE (8.18),
$$D^+ U(t, x(t)) = \limsup_{\eta \to 0^+} \frac{U(t+\eta, x(t+\eta)) - U(t, x(t))}{\eta} \leqslant 0$$
holds for all t ∈ I ⧵ {sup I}, that is, the right derivative of U along every solution of the MDE (8.18) is non-positive.
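To see how the Stieltjes driver g produces jumps in solutions of (8.18), and how a functional like U(t, x) = x² (a hypothetical choice) behaves along them, the sketch below approximates a scalar MDE with f(x, s) = −x and g(s) = s plus a jump of 0.5 at s = 1, using a simple Euler-type scheme in dg. This discretization is an illustration, not a method from the text:

```python
import math

def g(s):
    # nondecreasing, left-continuous driver: g(s) = s plus a jump of 0.5 at s = 1
    return s + (0.5 if s > 1.0 else 0.0)

def f(x, s):
    return -x

def solve(x0, s_end, n=20000):
    # Euler-type scheme in dg: x_{i+1} = x_i + f(x_i, s_i) * (g(s_{i+1}) - g(s_i))
    x, s = x0, 0.0
    h = s_end / n
    traj = [(s, x)]
    for i in range(n):
        s_next = (i + 1) * h
        x = x + f(x, s) * (g(s_next) - g(s))
        s = s_next
        traj.append((s, x))
    return traj

traj = solve(1.0, 2.0)
x_end = traj[-1][1]

# away from the jump, x follows x' = -x; the jump multiplies x by (1 - 0.5)
expected = 1.0 * math.exp(-1.0) * 0.5 * math.exp(-1.0)
assert abs(x_end - expected) < 1e-2

# U(t, x) = x^2 is nonincreasing along the computed solution,
# in the spirit of condition (LFM3)
U_vals = [x * x for _, x in traj]
assert all(u1 >= u2 - 1e-12 for u1, u2 in zip(U_vals, U_vals[1:]))
```

Note that the computed solution is left-continuous at the jump point, mirroring the left-continuity built into condition (A1) on g.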

8.3.1 Direct Method of Lyapunov

In this subsection, we present direct methods of Lyapunov for uniform stability and for uniform asymptotic stability of the trivial solution of the MDE (8.18). These results are borrowed from [80].

Theorem 8.23: Assume that f : O × [t₀, ∞) → X satisfies conditions (A2), (A3), and (A4), g : [t₀, ∞) → ℝ satisfies condition (A1), and f(0, t) = 0 for every t ∈ [t₀, ∞). Furthermore, let U : [t₀, ∞) × B_ρ → ℝ be a Lyapunov functional with respect to the MDE (8.18), where ρ > 0 and B_ρ = {x ∈ X : ‖x‖ ⩽ ρ} ⊂ O. Assume, in addition, that U satisfies the following conditions:

(i) there exists a continuous increasing function a : ℝ₊ → ℝ₊ such that a(0) = 0 and
U(t, x) ⩽ a(‖x‖), for every (t, x) ∈ [t₀, ∞) × B_ρ;

(ii) the function I ∋ t ↦ U(t, x(t)), where I ⊂ [t₀, ∞) is an interval, is nonincreasing along every solution x : I → B_ρ of the MDE (8.18).

Then, the trivial solution x ≡ 0 of the MDE (8.18) is uniformly stable.

Proof. Let F : O × [t₀, ∞) → X be defined by (8.20). Since f satisfies conditions (A2), (A3), and (A4), and g satisfies condition (A1), it follows from Lemma 5.17 that F ∈ 𝓕(Ω, h), where Ω = O × [t₀, ∞) and the function h : [t₀, ∞) → ℝ is given by

$$h(t) = \int_{t_0}^{t} (M(s) + L(s))\, dg(s).$$

Note that h is left-continuous on (t₀, ∞), since g is left-continuous on (t₀, ∞). In addition, as f(0, t) = 0 for all t ∈ [t₀, ∞), we have F(0, t₂) − F(0, t₁) = 0 for all t₁, t₂ ⩾ t₀, that is, F satisfies condition (ZS).

Using the correspondence between the solutions of the generalized ODE (8.19) and the solutions of the MDE (8.18) (see Theorem 5.18), we infer that U : [t₀, ∞) × B_ρ → ℝ is a Lyapunov functional with respect to the generalized ODE (8.19), where F is given by (8.20). Moreover, conditions (i) and (ii) allow us to conclude that U also fulfills all conditions of Theorem 8.18. In this way, since all hypotheses of Theorem 8.18 are satisfied, the trivial solution x ≡ 0 of the generalized ODE (8.19) is uniformly stable. Finally, using again the correspondence between the solutions of the generalized ODE (8.19) and the solutions of the MDE (8.18) (see Theorem 5.18), we conclude that the trivial solution x ≡ 0 of the MDE (8.18) is also uniformly stable. ◽

The following result ensures that the trivial solution x ≡ 0 of the MDE (8.18) is uniformly asymptotically stable under certain conditions.

Theorem 8.24: Suppose f : O × [t₀, ∞) → X satisfies conditions (A2), (A3), and (A4), g : [t₀, ∞) → ℝ satisfies condition (A1), and f(0, t) = 0 for every t ∈ [t₀, ∞). Suppose U : [t₀, ∞) × B_ρ → ℝ, with ρ > 0 and B_ρ = {x ∈ X : ‖x‖ ⩽ ρ} ⊂ O, satisfies conditions (LFM1) and (LFM2) from Definition 8.22 and condition (i) from Theorem 8.23. Moreover, suppose there exists a continuous function Φ : X → ℝ satisfying Φ(0) = 0 and Φ(z) > 0 for z ≠ 0, such that, for every maximal solution x(·) = x(·, s₀, x₀), (x₀, s₀) ∈ B_ρ × [t₀, ∞), of the MDE (8.18), we have

$$U(s, x(s)) - U(t, x(t)) \leqslant (s - t)(-\Phi(x(t))), \tag{8.21}$$

for all t, s ∈ [s₀, ω) with t ⩽ s. Then, the trivial solution x ≡ 0 of the MDE (8.18) is uniformly asymptotically stable.

Proof. Let F : O × [t₀, ∞) → X be the function defined by (8.20). Using the same arguments as in the proof of Theorem 8.23, we have F ∈ 𝓕(Ω, h), where Ω = O × [t₀, ∞) and the function h : [t₀, ∞) → ℝ is given by

$$h(t) = \int_{t_0}^{t} (M(s) + L(s))\, dg(s).$$

By the correspondence between the solutions of the generalized ODE (8.19) and the solutions of the MDE (8.18) (Theorem 5.18), we can verify that U : [t₀, ∞) × B_ρ → ℝ is a Lyapunov functional with respect to the generalized ODE (8.19) and that it satisfies all hypotheses of Theorem 8.20, whence it follows that the trivial solution x ≡ 0 of the generalized ODE (8.19) is uniformly asymptotically stable. Again, the correspondence between the solutions of the generalized ODE (8.19) and the solutions of the MDE (8.18) (Theorem 5.18) allows us to conclude that the trivial solution x ≡ 0 of the MDE (8.18) is uniformly asymptotically stable. ◽

8.4 Lyapunov Stability for Dynamic Equations on Time Scales

In this section, we recall the concepts of Lyapunov stability in the framework of dynamic equations on time scales. These concepts were first dealt with in such an environment in [80].

Let 𝕋 be a time scale and X be a Banach space. We recall that if f : 𝕋 → X is a function, then f* : 𝕋* → X is defined by f*(t) = f(t*), for all t ∈ 𝕋*, where 𝕋* is given by

$$\mathbb{T}^* = \begin{cases} (-\infty, \sup \mathbb{T}), & \text{if } \sup \mathbb{T} < \infty, \\ (-\infty, \infty), & \text{otherwise}, \end{cases}$$

and t* = inf{s ∈ 𝕋 : s ⩾ t}, for all t ∈ ℝ such that t ⩽ sup 𝕋.

Throughout this section, we consider a time scale 𝕋 such that sup 𝕋 = ∞, t₀ ∈ 𝕋, t₀ ⩾ 0, and a dynamic equation on time scales of the type

$$x(t) = x(t_0) + \int_{t_0}^{t} f(x(s), s)\, \Delta s, \quad t \in [t_0, \infty)_{\mathbb{T}}, \tag{8.22}$$

where the integral on the right-hand side is in the sense of the Perron Δ-integral, f : B_c × [t₀, ∞)_𝕋 → X, B_c = {x ∈ X : ‖x‖ < c}, and ‖·‖ is a norm in X.

We recall that G([t₀, ∞)_𝕋, B_c) denotes the space of all functions x : [t₀, ∞)_𝕋 → B_c which are regulated on [t₀, ∞)_𝕋 (the concept of a regulated function defined on a time scale can be found in Definition 3.14). Moreover, the set G₀([t₀, ∞)_𝕋, X) denotes the vector space of all functions x ∈ G([t₀, ∞)_𝕋, X) such that

$$\sup_{s \in [t_0, \infty)_{\mathbb{T}}} e^{-(s-t_0)} \| x(s) \| < \infty.$$
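For a concrete time scale, say 𝕋 = {0, 1, 2, …} (so that sup 𝕋 = ∞), the operator t* = inf{s ∈ 𝕋 : s ⩾ t} is the ceiling function, and the induced function g(s) = s*, used in the correspondence with MDEs below, is nondecreasing and left-continuous. A small sketch (the choice of 𝕋 and of the sampled function f are illustrative assumptions):

```python
import math

def tstar(t):
    # t* = inf{s in T : s >= t} for T = {0, 1, 2, ...}: the ceiling of t
    return math.ceil(t) if t > 0 else 0

# g(s) = s* is nondecreasing ...
samples = [0.1 * i for i in range(50)]
assert all(tstar(s) <= tstar(t) for s, t in zip(samples, samples[1:]))

# ... and left-continuous at each point n of T: lim_{s -> n-} s* = n = n*
for n in (1, 2, 3):
    assert tstar(n - 1e-9) == n == tstar(n)

# f*(t) = f(t*) extends a function defined on T to the whole of T*
f = {0: 1.0, 1: 0.5, 2: 0.25, 3: 0.125, 4: 0.0625, 5: 0.03125}

def fstar(t):
    return f[tstar(t)]

assert fstar(1.2) == f[2] and fstar(2.0) == f[2]
```

In particular, f* is a step function that is constant on each interval (n − 1, n], which is why solutions of the corresponding MDE can only jump at points of 𝕋.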

From now on, we consider that the function f : B_c × [t₀, ∞)_𝕋 → X fulfills the following conditions:

(C1) the Perron Δ-integral $\int_{s_1}^{s_2} f(y(s), s)\, \Delta s$ exists, for all y ∈ G([t₀, ∞)_𝕋, B_c) and all s₁, s₂ ∈ [t₀, ∞)_𝕋;

(C2) there is a locally Perron Δ-integrable function M : [t₀, ∞)_𝕋 → ℝ such that
$$\left\| \int_{s_1}^{s_2} f(y(s), s)\, \Delta s \right\| \leqslant \int_{s_1}^{s_2} M(s)\, \Delta s,$$
for all y ∈ G([t₀, ∞)_𝕋, B_c) and all s₁, s₂ ∈ [t₀, ∞)_𝕋, s₁ ⩽ s₂;

(C3) there is a locally Perron Δ-integrable function L : [t₀, ∞)_𝕋 → ℝ such that
$$\left\| \int_{s_1}^{s_2} [f(y(s), s) - f(w(s), s)]\, \Delta s \right\| \leqslant \| y - w \|_{[t_0, \infty)} \int_{s_1}^{s_2} L(s)\, \Delta s,$$
for all y, w ∈ G₀([t₀, ∞)_𝕋, B_c) and all s₁, s₂ ∈ [t₀, ∞)_𝕋, s₁ ⩽ s₂.

Under the previous conditions, we recall a correspondence between dynamic equations on time scales and MDEs (see Theorem 3.30 for more details). Let I ⊂ [t₀, ∞) be a nondegenerate interval such that I ∩ 𝕋 is nonempty and, for each t ∈ I, we have t* ∈ I ∩ 𝕋. If x : I ∩ 𝕋 → X is a solution of the dynamic equation on time scales (8.22), then x* : I → X is a solution of the MDE

$$y(t_2) - y(t_1) = \int_{t_1}^{t_2} f^*(y(s), s)\, dg(s), \quad t_1, t_2 \in I. \tag{8.23}$$

Conversely, if y ∶ I → X satisfies the MDE (8.23), then it must have the form y = x∗, where x ∶ I ∩ 𝕋 → X is a solution of the dynamic equation on time scales (8.22). Here, f∗ ∶ Bc × [t0, ∞) → X and g ∶ [t0, ∞) → ℝ are given by f∗(x, s) = f(x, s∗) and g(s) = s∗, respectively.

In what follows, we assume that f(0, t) = 0 for every t ∈ [t0, ∞)𝕋. This condition implies that x ≡ 0 is a solution of the dynamic equation on time scales (8.22). Moreover, we denote by x(⋅) = x(⋅, s0, x0) the unique maximal solution x ∶ [s0, 𝜔(s0, x0))𝕋 → X of the dynamic equation on time scales (8.22) with x(s0) = x0. The Lyapunov stability concepts for the trivial solution x ≡ 0 of the dynamic equation on time scales (8.22), introduced in [80], are presented in the sequel.

Definition 8.25: Let 𝕋 be a time scale such that sup 𝕋 = ∞. The trivial solution of the dynamic equation on time scales (8.22) is

(i) Stable, if for every s0 ∈ 𝕋 with s0 ⩾ t0 and every 𝜖 > 0, there exists 𝛿 = 𝛿(𝜖, s0) > 0 such that for x0 ∈ Bc with ∥x0∥ < 𝛿, we have

∥x(t, s0, x0)∥ = ∥x(t)∥ < 𝜖, for all t ∈ [s0, 𝜔(s0, x0))𝕋,

where x(⋅, s0, x0) is the solution of the dynamic equation on time scales (8.22) with x(s0, s0, x0) = x0;

(ii) Uniformly stable (Lyapunov stable), if it is stable with 𝛿 independent of s0;


(iii) Uniformly asymptotically stable, if there exists 𝛿0 > 0 and, for every 𝜖 > 0, there exists T = T(𝜖) ⩾ 0 such that if s0 ∈ 𝕋, s0 ⩾ t0 and x0 ∈ Bc, with ∥x0∥ < 𝛿0, then

∥x(t, s0, x0)∥ = ∥x(t)∥ < 𝜖, for all t ∈ [s0, 𝜔(s0, x0)) ∩ [s0 + T, ∞) ∩ 𝕋,

where x(⋅, s0, x0) is the solution of the dynamic equation on time scales (8.22) with x(s0, s0, x0) = x0.

Definition 8.26: We say that U ∶ [t0, ∞)𝕋 × B𝜌 → ℝ, where 0 < 𝜌 < c and B𝜌 = {x ∈ X ∶ ∥x∥ ⩽ 𝜌}, is a Lyapunov functional with respect to the dynamic equation on time scales (8.22), if the following conditions are satisfied:

(LFD1) The function t → U(t, x) is left-continuous on (t0, ∞)𝕋, for all x ∈ B𝜌;

(LFD2) There exists a continuous increasing function b ∶ ℝ+ → ℝ+ satisfying b(0) = 0 such that

U(t, x) ⩾ b(∥x∥), for every (t, x) ∈ [t0, ∞)𝕋 × B𝜌;

(LFD3) For every solution z ∶ [s0, ∞)𝕋 → B𝜌, s0 ⩾ t0, of the dynamic equation on time scales (8.22), we have

U(s, z(s)) − U(t, z(t)) ⩽ 0, for every s, t ∈ [s0, ∞)𝕋 with t ⩽ s.
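Conditions (LFD2) and (LFD3) lend themselves to a quick numerical sanity check. The sketch below (our own illustration, not from the text) tests a hypothetical candidate U(t, x) = |x| on X = ℝ against a lower bound b and against monotonicity along one sampled trajectory; passing such a finite check is evidence only, not a proof:

```python
def satisfies_lfd2(U, b, samples):
    """Check U(t, x) >= b(|x|) on a finite sample grid (condition (LFD2))."""
    return all(U(t, x) >= b(abs(x)) for t, x in samples)

def nonincreasing_along(U, ts, zs):
    """Check the monotonicity condition (LFD3) along one sampled solution z."""
    vals = [U(t, z) for t, z in zip(ts, zs)]
    return all(v2 <= v1 for v1, v2 in zip(vals, vals[1:]))

# Hypothetical candidate U(t, x) = |x| with b(r) = r, on X = R.
U = lambda t, x: abs(x)
b = lambda r: r
ts = [0, 1, 2, 3]
zs = [1.0, 0.5, 0.25, 0.125]   # a sampled decaying trajectory

print(satisfies_lfd2(U, b, zip(ts, zs)))  # True
print(nonincreasing_along(U, ts, zs))     # True
```

The same helper immediately rejects a trajectory along which U increases, which is the behavior (LFD3) rules out.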

8.4.1 Direct Method of Lyapunov

In this subsection, we estimate the region of uniform stability using Lyapunov's direct method. The results described in this subsection are presented in [80]. The following result concerns the uniform stability of the trivial solution of the dynamic equation on time scales (8.22).

Theorem 8.27: Let 𝕋 be a time scale such that sup 𝕋 = ∞ and [t0, ∞)𝕋 be a time scale interval. Assume that f ∶ Bc × [t0, ∞)𝕋 → X satisfies conditions (C1), (C2), (C3), and f(0, t) = 0 for every t ∈ [t0, ∞)𝕋. Let U ∶ [t0, ∞)𝕋 × B𝜌 → ℝ, 0 < 𝜌 < c, be a Lyapunov functional with respect to the dynamic equation on time scales (8.22), where B𝜌 = {x ∈ X ∶ ∥x∥ ⩽ 𝜌}. Moreover, assume that U satisfies the following conditions:

(i) There exists a continuous increasing function a ∶ ℝ+ → ℝ+ such that a(0) = 0 and, for all (t, x) ∈ [t0, ∞)𝕋 × B𝜌, we have

U(t, x) ⩽ a(∥x∥); (8.24)


(ii) The function [s0, 𝜔)𝕋 ∋ t → U(t, x(t)) is nonincreasing, for every solution x ∶ [s0, 𝜔)𝕋 → X of the dynamic equation on time scales (8.22) with s0 ∈ [t0, ∞)𝕋.

Then, the trivial solution x ≡ 0 of the dynamic equation on time scales (8.22) is uniformly stable.

Proof. Let f∗ ∶ Bc × [t0, ∞) → X and g ∶ [t0, ∞) → [t0, ∞)𝕋 be the functions defined by f∗(x, t) = f(x, t∗), for all (x, t) ∈ Bc × [t0, ∞), and g(t) = t∗, for all t ∈ [t0, ∞). Since f satisfies conditions (C1), (C2), and (C3), it follows that f∗ satisfies conditions (A2), (A3), and (A4) described in Section 8.3. In addition, by the definition of g, it follows that g satisfies (A1). Since f(0, t) = 0, for all t ∈ [t0, ∞)𝕋, we get f∗(0, t) = 0, for all t ∈ [t0, ∞). Define

U∗(t, x) = U(t∗, x), for every (t, x) ∈ [t0, ∞) × B𝜌.

We will show that U∗ ∶ [t0, ∞) × B𝜌 → ℝ is a Lyapunov functional with respect to the MDE

y(t) = y(𝜏0) + ∫_{𝜏0}^{t} f∗(y(s), s) dg(s), for all t, 𝜏0 ∈ [t0, ∞), (8.25)

and that U∗ satisfies conditions (i) and (ii) of Theorem 8.23. Indeed, since U∗(t, x) = U(g(t), x) and the functions U(⋅, x) and g(t) = t∗ are left-continuous on (t0, ∞), the function U∗(⋅, x) ∶ [t0, ∞) → ℝ is also left-continuous on (t0, ∞), for all x ∈ B𝜌. Moreover, there exists a continuous and increasing function b ∶ ℝ+ → ℝ+ with b(0) = 0 such that

U(s, x) ⩾ b(∥x∥), for every (s, x) ∈ [t0, ∞)𝕋 × B𝜌.

Thus, for every (t, x) ∈ [t0, ∞) × B𝜌, we conclude U∗(t, x) = U(t∗, x) ⩾ b(∥x∥).

Let (x0, s0) ∈ B𝜌 × [t0, ∞) and let y = y(⋅, s0, x0) ∶ [s0, 𝜔) → B𝜌 be the solution of the MDE (8.25). By the fact that y(t) ∈ B𝜌 ⊂ Bc, for t ∈ [s0, 𝜔), we have 𝜔 = ∞. Now, as y ∶ [s0, ∞) → B𝜌 is a global forward solution of (8.25), we have

y(𝜏) = y(s0) + ∫_{s0}^{𝜏} f(y(s), s∗) dg(s), 𝜏 ∈ [s0, ∞),

with g(s) = s∗.

Claim. U∗(s, y(s)) − U∗(t, y(t)) ⩽ 0 for all s0 ⩽ t < s < ∞.

In order to prove the Claim, let us consider two cases.


At first, we assume that s0 ∈ [t0, ∞) ⧵ 𝕋. Thus, g is constant on [s0, s0∗], whence it follows that t∗ = s0∗ for all t ∈ [s0, s0∗] and, consequently, y(𝜏) = y(s0∗) for all 𝜏 ∈ [s0, s0∗]. On the other hand, using the correspondence between solutions of the dynamic equation on time scales (8.22) and solutions of the MDE (8.25), y|_{[s0∗,∞)} must possess the form

y|_{[s0∗,∞)} = x∗, (8.26)

where x ∶ [s0∗, ∞)𝕋 → X is a solution of the dynamic equation on time scales (8.22). Besides, x is also the global forward solution of the dynamic equation on time scales (8.22) through (x(s0∗), s0∗), and y(s0∗) = x∗(s0∗) = x(s0∗).

When s0 ⩽ t < s ⩽ s0∗, we obtain

U∗(s, y(s)) − U∗(t, y(t)) = U(s∗, y(s)) − U(t∗, y(t)) = U(s∗, x(s∗)) − U(t∗, x(t∗)) = 0.

When s0 ⩽ t ⩽ s0∗ < s, we have

U∗(s, y(s)) − U∗(t, y(t)) = U(s∗, y(s)) − U(t∗, y(t)) = U(s∗, x∗(s)) − U(s0∗, y(s0∗)) = U(s∗, x(s∗)) − U(s0∗, x(s0∗)) = U(s∗, x(s∗)) − U(t∗, x(t∗)) ⩽ 0.

When s0∗ ⩽ t < s, then, by (8.26), we derive

U∗(s, y(s)) − U∗(t, y(t)) = U(s∗, y(s)) − U(t∗, y(t)) = U(s∗, x(s∗)) − U(t∗, x(t∗)) ⩽ 0,

which proves the Claim for the case where s0 ∈ [t0, ∞) ⧵ 𝕋.

Now, suppose s0 ∈ 𝕋. Hence, s0 = s0∗ ∈ 𝕋 and, consequently, [s0, ∞)𝕋 = [s0∗, ∞)𝕋. Also, y ∶ [s0, ∞) → X is such that y = x∗, where x ∶ [s0, ∞)𝕋 → X is a global forward solution of the dynamic equation on time scales (8.22) through (y(s0), s0). Thus,

U∗(s, y(s)) − U∗(t, y(t)) = U∗(s, x∗(s)) − U∗(t, x∗(t)) = U(s∗, x(s∗)) − U(t∗, x(t∗)) ⩽ 0,

for every s0 ⩽ t < s < ∞, which proves the Claim for the case where s0 ∈ 𝕋. Hence, U∗ satisfies condition (ii) of Theorem 8.23. Analogously, one can prove that

lim sup_{𝜂→0+} (U∗(t + 𝜂, y(t + 𝜂)) − U∗(t, y(t)))/𝜂 ⩽ 0, for every s0 ⩽ t < ∞.


Thus, U∗ is a Lyapunov functional with respect to the MDE (8.25). Now, by hypothesis (i), there exists a continuous and increasing function a ∶ ℝ+ → ℝ+ for which a(0) = 0 and condition (8.24) is fulfilled. Then, for all solutions y ∶ I → B𝜌, I ⊂ [t0, ∞), of the MDE (8.25), we have

U∗(t, y(t)) = U(t∗, y(t)) ⩽ a(∥y(t)∥), for all t ∈ I.

Once all conditions of Theorem 8.23 are satisfied, the trivial solution x∗ ≡ 0 of the MDE (8.25) is uniformly stable. Using the correspondence between solutions of the dynamic equation on time scales (8.22) and solutions of the MDE (8.25), the trivial solution x ≡ 0 of the dynamic equation on time scales (8.22) is uniformly stable. ◽

Theorem 8.28: Let 𝕋 be a time scale such that sup 𝕋 = ∞ and [t0, ∞)𝕋 be a time scale interval. Assume that f ∶ Bc × [t0, ∞)𝕋 → X satisfies conditions (C1), (C2), (C3), and f(0, t) = 0 for every t ∈ [t0, ∞)𝕋. Suppose U ∶ [t0, ∞)𝕋 × B𝜌 → ℝ, with 0 < 𝜌 < c and B𝜌 = {x ∈ X ∶ ∥x∥ ⩽ 𝜌}, satisfies conditions (LFD1) and (LFD2) from Definition 8.26, and condition (i) from Theorem 8.27. Moreover, suppose there exists a continuous function 𝜙 ∶ X → ℝ satisfying 𝜙(0) = 0 and 𝜙(𝜐) > 0 for 𝜐 ≠ 0, such that for every maximal solution x(⋅) = x(⋅, s0, x0), (x0, s0) ∈ B𝜌 × [t0, ∞)𝕋, of the dynamic equation on time scales (8.22), we have

U(s∗, x(s∗)) − U(t∗, x(t∗)) ⩽ (s − t)∗ (−𝜙(x(t∗))), (8.27)

for all t, s ∈ [s0, 𝜔)𝕋 with t ⩽ s. Then, the trivial solution x ≡ 0 of the dynamic equation on time scales (8.22) is uniformly asymptotically stable.

Proof. Let f∗ ∶ Bc × [t0, ∞) → X and g ∶ [t0, ∞) → [t0, ∞)𝕋 be the functions defined by f∗(x, t) = f(x, t∗), for all (x, t) ∈ Bc × [t0, ∞), and g(t) = t∗, for all t ∈ [t0, ∞). Using the same arguments as in the proof of Theorem 8.27, we can prove that f∗ satisfies conditions (A2)–(A4) and f∗(0, t) = 0 for every t ∈ [t0, ∞). Let us define

U∗(t, x) = U(t∗, x), for every (t, x) ∈ [t0, ∞) × B𝜌.

Then, we can show that U∗ satisfies conditions (LFM1) and (LFM2) from Definition 8.22, using similar arguments as those employed in the proof of Theorem 8.27. According to hypothesis (i) from Theorem 8.27, we can also prove that U∗ satisfies hypothesis (i) from Theorem 8.23, as in the proof of Theorem 8.27.

Now, we will show that U∗ satisfies condition (8.21). Consider (x0, s0) ∈ B𝜌 × [t0, ∞) and let y = y(⋅, s0, x0) ∶ [s0, 𝜔) → B𝜌 be the maximal solution of the MDE (8.25). Since y(t) ∈ B𝜌 ⊂ Bc, we have 𝜔 = ∞. Let us consider two cases.

At first, we assume that s0 ∈ [t0, ∞) ⧵ 𝕋. By arguments similar to those used in the proof of Theorem 8.27, we can prove that y|_{[s0∗,∞)} = x∗, where


x ∶ [s0∗, ∞)𝕋 → X is the global forward solution of the dynamic equation on time scales (8.22) through (x(s0∗), s0∗).

If s0 ⩽ t < s ⩽ s0∗, we get

U∗(s, y(s)) − U∗(t, y(t)) = U(s∗, y(s)) − U(t∗, y(t)) = U(s∗, x(s∗)) − U(t∗, x(t∗)) = 0.

Using this fact together with (8.27), we obtain −𝜙(x(t∗)) = 0 and, hence, x(t∗) = 0, whence y(t) = 0, since x(t∗) = x(s0∗) = y(s0∗) = y(t). Therefore,

U∗(s, y(s)) − U∗(t, y(t)) = (s − t)(−𝜙(y(t))) = 0.

If s0 ⩽ t ⩽ s0∗ < s, we have

U∗(s, y(s)) − U∗(t, y(t)) = U(s∗, x(s∗)) − U(t∗, x(t∗)) ⩽ (s − t)∗ (−𝜙(x(t∗))) = (s − t)∗ (−𝜙(y(t))) ⩽ (s − t)(−𝜙(y(t))).

Besides, if s0∗ ⩽ t < s, we obtain

U∗(s, y(s)) − U∗(t, y(t)) = U(s∗, x(s∗)) − U(t∗, x(t∗)) ⩽ (s − t)∗ (−𝜙(x(t∗))) = (s − t)∗ (−𝜙(y(t))) ⩽ (s − t)(−𝜙(y(t))).

Thus, U∗(s, y(s)) − U∗(t, y(t)) ⩽ (s − t)(−𝜙(y(t))), for all s0 ⩽ t < s < ∞.

Let us now suppose s0 ∈ 𝕋. Thus, [s0, ∞)𝕋 = [s0∗, ∞)𝕋. Repeating arguments used in the proof of Theorem 8.27, we can show that y ∶ [s0, ∞) → X is such that y = x∗, where x ∶ [s0, ∞)𝕋 → X is the global forward solution of the dynamic equation on time scales (8.22) through (x(s0), s0). Hence, we obtain

U∗(s, y(s)) − U∗(t, y(t)) = U∗(s, x∗(s)) − U∗(t, x∗(t)) = U(s∗, x(s∗)) − U(t∗, x(t∗)) ⩽ (s − t)∗ (−𝜙(y(t))) ⩽ (s − t)(−𝜙(y(t))).

Therefore, U∗(s, y(s)) − U∗(t, y(t)) ⩽ (s − t)(−𝜙(y(t))), for all s0 ⩽ t < s < ∞.


Since all hypotheses of Theorem 8.24 are satisfied, the trivial solution x∗ ≡ 0 of the MDE (8.25) is uniformly asymptotically stable. Using the correspondence between solutions of the dynamic equation on time scales (8.22) and solutions of the MDE (8.25) (see Theorem 3.30), the trivial solution x ≡ 0 of the dynamic equation on time scales (8.22) is uniformly asymptotically stable. ◽

8.5 Regular Stability for Generalized ODEs

This section concerns some results on regular stability for the trivial solution of generalized ODEs. The concepts of regular stability for generalized ODEs were introduced by the authors of [87]. Throughout this section, we assume that the function F ∶ Ω → X belongs to the class ℱ(Ω, h), where Ω = O × [t0, ∞), O is an open subset of X such that 0 ∈ O, and h ∶ [t0, ∞) → ℝ is a left-continuous and nondecreasing function. Moreover, we suppose F satisfies condition (ZS), that is,

F(0, t2) − F(0, t1) = 0, for all t2, t1 ∈ [t0, ∞).

Consider the generalized ODE (8.1) and the nonhomogeneous generalized ODE

dx/d𝜏 = D[F(x, t) + P(t)], (8.28)

where P ∶ [t0, ∞) → X is a Kurzweil integrable function. We also assume that, for all 𝛼 ⩾ t0 and for all x0 ∈ X, there exist local solutions x of the generalized ODE (8.1) and x̄ of the nonhomogeneous generalized ODE (8.28) on [𝛼, 𝛽] ⊂ [t0, ∞), with x(𝛼) = x0 = x̄(𝛼). Sufficient conditions for the existence and uniqueness of these solutions are described in Theorem 5.1. Now, we recall concepts concerning regular stability of the trivial solution of the generalized ODE (8.1).

Definition 8.29: The trivial solution x ≡ 0 of the generalized ODE (8.1) is

(i) Regularly stable, if for every 𝜖 > 0, there exists 𝛿 = 𝛿(𝜖) ∈ (0, 𝜖) such that if the function x ∶ [𝛼, 𝛽] ⊂ [t0, ∞) → X is regulated, left-continuous on (𝛼, 𝛽], and satisfies

∥x(𝛼)∥ < 𝛿 and sup_{s∈[𝛼,𝛽]} ‖x(s) − x(𝛼) − ∫_{𝛼}^{s} DF(x(𝜏), t)‖ < 𝛿,

then

∥x(t)∥ < 𝜖, for all t ∈ [𝛼, 𝛽];


(ii) Regularly attracting, if there exists 𝛿0 > 0 and, for every 𝜖 > 0, there exist T = T(𝜖) ⩾ 0 and 𝜌 = 𝜌(𝜖) > 0 such that if x ∶ [𝛼, 𝛽] ⊂ [t0, ∞) → X is a regulated function, left-continuous on (𝛼, 𝛽], and satisfies

∥x(𝛼)∥ < 𝛿0 and sup_{s∈[𝛼,𝛽]} ‖x(s) − x(𝛼) − ∫_{𝛼}^{s} DF(x(𝜏), t)‖ < 𝜌,

then

∥x(t)∥ < 𝜖, for all t ∈ [𝛼, 𝛽] ∩ [𝛼 + T, ∞);

(iii) Regularly asymptotically stable, if it is regularly stable and regularly attracting.

For regular stability with respect to perturbations, we have the following concepts.

Definition 8.30: The trivial solution x ≡ 0 of the generalized ODE (8.1) is

(i) Regularly stable with respect to perturbations, if for every 𝜖 > 0, there exists 𝛿 = 𝛿(𝜖) > 0 such that, if ∥x0∥ < 𝛿 and P ∈ G−([𝛼, 𝛽], X), with sup_{s∈[𝛼,𝛽]} ∥P(s) − P(𝛼)∥ < 𝛿, then

∥x(t, 𝛼, x0)∥ < 𝜖, for all t ∈ [𝛼, 𝛽],

where x(⋅, 𝛼, x0) is the solution of the nonhomogeneous generalized ODE (8.28), with initial condition x(𝛼, 𝛼, x0) = x0 and [𝛼, 𝛽] ⊂ [t0, ∞);

(ii) Regularly attracting with respect to perturbations, if there exists 𝛿̃ > 0 and, for every 𝜖 > 0, there exist T = T(𝜖) ⩾ 0 and 𝜌 = 𝜌(𝜖) > 0 such that, if ∥x0∥ < 𝛿̃ and P ∈ G−([𝛼, 𝛽], X), with sup_{s∈[𝛼,𝛽]} ∥P(s) − P(𝛼)∥ < 𝜌, then

∥x(t, 𝛼, x0)∥ < 𝜖, for all t ∈ [𝛼, 𝛽] ∩ [𝛼 + T, ∞),

where x(⋅, 𝛼, x0) is the solution of the nonhomogeneous generalized ODE (8.28), with initial condition x(𝛼, 𝛼, x0) = x0 and [𝛼, 𝛽] ⊂ [t0, ∞);

(iii) Regularly asymptotically stable with respect to perturbations, if it is regularly stable with respect to perturbations and regularly attracting with respect to perturbations.

Note that the concept of regular stability is more general than the concept of variational stability defined in Section 8.1. Similar statements hold for regular attractivity and regular asymptotic stability. The following result shows that there is an equivalence between the concepts of stability given in Definitions 8.29 and 8.30. It can be found in [87, Theorem 4.7].

Theorem 8.31: Let F ∈ ℱ(Ω, h) satisfy condition (ZS). The following statements are true.


(i) The trivial solution x ≡ 0 of the generalized ODE (8.1) is regularly stable if and only if it is regularly stable with respect to perturbations.

(ii) The trivial solution x ≡ 0 of the generalized ODE (8.1) is regularly attracting if and only if it is regularly attracting with respect to perturbations.

(iii) The trivial solution x ≡ 0 of the generalized ODE (8.1) is regularly asymptotically stable if and only if it is regularly asymptotically stable with respect to perturbations.

Proof. We start by proving (i). At first, suppose that the trivial solution x ≡ 0 of the generalized ODE (8.1) is regularly stable. Let 𝜖 > 0 and 𝛿 = 𝛿(𝜖) > 0 be as in Definition 8.29(i) and let x ∶ [𝛼, 𝛽] → X be a solution of the nonhomogeneous generalized ODE (8.28), with P ∈ G−([𝛼, 𝛽], X). Thus, by the definition of solution of (8.28), we have

x(s) − x(𝛼) = ∫_{𝛼}^{s} DF(x(𝜏), t) + P(s) − P(𝛼),

for all s ∈ [𝛼, 𝛽]. Notice that, for all s1, s2 ∈ [𝛼, 𝛽], we have

∥x(s2) − x(s1)∥ ⩽ ‖∫_{s1}^{s2} DF(x(𝜏), t)‖ + ∥P(s2) − P(s1)∥ ⩽ |h(s2) − h(s1)| + ∥P(s2) − P(s1)∥,

where the last inequality follows from Lemma 4.5. Thus, every point in [𝛼, 𝛽] at which h and P are continuous is a point of continuity of the solution x. Hence, since h and P are left-continuous on (𝛼, 𝛽], x is also left-continuous on (𝛼, 𝛽]. Assume that ∥x(𝛼)∥ < 𝛿 and sup_{s∈[𝛼,𝛽]} ∥P(s) − P(𝛼)∥ < 𝛿. Then,

sup_{s∈[𝛼,𝛽]} ‖x(s) − x(𝛼) − ∫_{𝛼}^{s} DF(x(𝜏), t)‖ = sup_{s∈[𝛼,𝛽]} ∥P(s) − P(𝛼)∥ < 𝛿,

and, since the trivial solution of the generalized ODE (8.1) is regularly stable, we conclude that ∥x(t)∥ < 𝜖 for every t ∈ [𝛼, 𝛽], which implies that the trivial solution x ≡ 0 of (8.1) is regularly stable with respect to perturbations.

Reciprocally, assume that the trivial solution x ≡ 0 is regularly stable with respect to perturbations. Let 𝜖 > 0 and 𝛿 = 𝛿(𝜖) > 0 be as in Definition 8.30(i) and let x ∶ [𝛼, 𝛽] ⊂ [t0, ∞) → X be a regulated function, left-continuous on (𝛼, 𝛽], such that

∥x(𝛼)∥ < 𝛿 and sup_{s∈[𝛼,𝛽]} ‖x(s) − x(𝛼) − ∫_{𝛼}^{s} DF(x(𝜏), t)‖ < 𝛿.

Let P ∶ [𝛼, 𝛽] → X be defined by

P(s) = P(𝛼) + x(s) − x(𝛼) − ∫_{𝛼}^{s} DF(x(𝜏), t), for all s ∈ [𝛼, 𝛽].


Once F ∈ ℱ(Ω, h) and x, h are left-continuous regulated functions on (𝛼, 𝛽], we have P ∈ G−([𝛼, 𝛽], X). Moreover, for all s ∈ [𝛼, 𝛽],

x(s) = x(𝛼) + ∫_{𝛼}^{s} DF(x(𝜏), t) + P(s) − P(𝛼),

and, therefore, x is a solution of the nonhomogeneous generalized ODE (8.28). On the other hand,

sup_{s∈[𝛼,𝛽]} ∥P(s) − P(𝛼)∥ = sup_{s∈[𝛼,𝛽]} ‖x(s) − x(𝛼) − ∫_{𝛼}^{s} DF(x(𝜏), t)‖ < 𝛿.

Since the trivial solution x ≡ 0 of the generalized ODE (8.1) is regularly stable with respect to perturbations, we have ∥x(t)∥ < 𝜖, for all t ∈ [𝛼, 𝛽], which allows us to conclude that the trivial solution x ≡ 0 of the generalized ODE (8.1) is regularly stable.

The proof of item (ii) is analogous to that of item (i) and we omit it. Finally, item (iii) follows from items (i) and (ii). ◽

8.5.1 Direct Method of Lyapunov

This subsection is devoted to presenting Lyapunov-type theorems for the generalized ODE (8.1), using the concepts introduced in Section 8.5. These results can be found in [7] and [87]. Recall that we are considering F ∈ ℱ(Ω, h) satisfying condition (ZS), where Ω = O × [t0, ∞), O ⊂ X is an open subset containing the origin, and h ∶ [t0, ∞) → ℝ is a nondecreasing and left-continuous function.

In what follows, we present a result that will be essential for our purposes; it can be found in [7, Lemma 3.7].

Lemma 8.32: Let F ∈ ℱ(Ω, h), where h ∶ [t0, ∞) → ℝ is a nondecreasing and left-continuous function, and let V ∶ [t0, ∞) × X → ℝ be a functional. Assume that:

(i) for each function z ∶ [𝛼, 𝛽] ⊂ [t0, ∞) → X which is left-continuous on (𝛼, 𝛽], the function [𝛼, 𝛽] ∋ t → V(t, z(t)) is left-continuous on (𝛼, 𝛽];

(ii) for all regulated functions x, y ∶ [𝛼, 𝛽] ⊂ [t0, ∞) → X, the inequality

|V(t, x(t)) − V(t, y(t)) − V(s, x(s)) + V(s, y(s))| ⩽ sup_{𝜉∈[s,t]} ∥x(𝜉) − y(𝜉)∥ (8.29)

holds for all 𝛼 ⩽ s < t ⩽ 𝛽;

(iii) given (s0, x0) ∈ [t0, ∞) × X, the function [s0, 𝜔(s0, x0)) ∋ t → V(t, x(t)) is nonincreasing along every solution x(⋅, s0, x0) ∶ [s0, 𝜔(s0, x0)) → X of the generalized ODE (8.1) with x(s0) = x0.


If, moreover, x ∶ [𝛼, 𝛽] ⊂ [t0, ∞) → X is regulated and left-continuous on (𝛼, 𝛽], then

V(𝑣, x(𝑣)) − V(𝛾, x(𝛾)) ⩽ 2 sup_{𝜉∈[𝛼,𝛽]} ‖x(𝜉) − x(𝛼) − ∫_{𝛼}^{𝜉} DF(x(𝜏), t)‖

for all 𝑣, 𝛾 ∈ [𝛼, 𝛽] with 𝛾 ⩽ 𝑣.

Proof. Let x ∶ [𝛼, 𝛽] → X be a regulated function, left-continuous on (𝛼, 𝛽]. By Corollary 4.8, the integral ∫_{𝛼}^{𝛽} DF(x(𝜏), t) exists. Fix 𝜎 ∈ [𝛼, 𝛽]. By Theorem 5.1, there exists a local solution x̄ ∶ [𝜎, 𝜎 + 𝜂1(𝜎)] → X of the generalized ODE (8.1), with initial condition x̄(𝜎) = x(𝜎). Consider 𝜂 > 0 such that 𝜂 ⩽ 𝜂1(𝜎) and 𝜎 + 𝜂 ⩽ 𝛽. Hence, the Kurzweil integral ∫_{𝜎}^{𝜎+𝜂} D[F(x̄(𝜏), t) − F(x(𝜏), t)] exists by Theorem 2.5. By assumption (iii), we obtain

V(𝜎 + 𝜂, x̄(𝜎 + 𝜂)) − V(𝜎, x̄(𝜎)) ⩽ 0. (8.30)

Besides, given 𝜖 > 0, by Corollary 2.8, we have

‖F(x(𝜎), s) − F(x(𝜎), 𝜎) − ∫_{𝜎}^{s} DF(x(𝜏), t)‖ < 𝜂𝜖/2 (8.31)

and

‖F(x̄(𝜎), s) − F(x̄(𝜎), 𝜎) − ∫_{𝜎}^{s} DF(x̄(𝜏), t)‖ < 𝜂𝜖/2, (8.32)

for all s ∈ [𝜎, 𝜎 + 𝜂]. Furthermore, by (8.31) and (8.32), we obtain

sup_{s∈[𝜎,𝜎+𝜂]} ‖∫_{𝜎}^{s} D[F(x̄(𝜏), t) − F(x(𝜏), t)]‖ − sup_{s∈[𝜎,𝜎+𝜂]} ∥F(x̄(𝜎), s) − F(x̄(𝜎), 𝜎) − F(x(𝜎), s) + F(x(𝜎), 𝜎)∥
⩽ sup_{s∈[𝜎,𝜎+𝜂]} ‖∫_{𝜎}^{s} D[F(x̄(𝜏), t) − F(x(𝜏), t)] − (F(x̄(𝜎), s) − F(x̄(𝜎), 𝜎) − F(x(𝜎), s) + F(x(𝜎), 𝜎))‖
⩽ sup_{s∈[𝜎,𝜎+𝜂]} ‖F(x̄(𝜎), s) − F(x̄(𝜎), 𝜎) − ∫_{𝜎}^{s} DF(x̄(𝜏), t)‖ + sup_{s∈[𝜎,𝜎+𝜂]} ‖F(x(𝜎), s) − F(x(𝜎), 𝜎) − ∫_{𝜎}^{s} DF(x(𝜏), t)‖ < 𝜂𝜖. (8.33)

Moreover, as x̄(𝜎) = x(𝜎) and F ∈ ℱ(Ω, h), we get

sup_{s∈[𝜎,𝜎+𝜂]} ∥F(x̄(𝜎), s) − F(x̄(𝜎), 𝜎) − F(x(𝜎), s) + F(x(𝜎), 𝜎)∥ ⩽ ∥x̄(𝜎) − x(𝜎)∥ sup_{s∈[𝜎,𝜎+𝜂]} |h(s) − h(𝜎)| = 0. (8.34)

Replacing (8.34) in (8.33), we obtain

sup_{s∈[𝜎,𝜎+𝜂]} ‖∫_{𝜎}^{s} D[F(x̄(𝜏), t) − F(x(𝜏), t)]‖ < 𝜖𝜂. (8.35)

Since x̄(𝜎) = x(𝜎), using (8.29), we obtain

V(𝜎 + 𝜂, x(𝜎 + 𝜂)) − V(𝜎 + 𝜂, x̄(𝜎 + 𝜂)) = V(𝜎 + 𝜂, x(𝜎 + 𝜂)) − V(𝜎 + 𝜂, x̄(𝜎 + 𝜂)) − V(𝜎, x(𝜎)) + V(𝜎, x̄(𝜎))
⩽ |V(𝜎 + 𝜂, x(𝜎 + 𝜂)) − V(𝜎 + 𝜂, x̄(𝜎 + 𝜂)) − V(𝜎, x(𝜎)) + V(𝜎, x̄(𝜎))|
⩽ sup_{𝜉∈[𝜎,𝜎+𝜂]} ∥x(𝜉) − x̄(𝜉)∥. (8.36)

Now, by Eqs. (8.30), (8.35), (8.36), and noticing that x̄ is a solution of the generalized ODE (8.1), we have

V(𝜎 + 𝜂, x(𝜎 + 𝜂)) − V(𝜎, x(𝜎)) = V(𝜎 + 𝜂, x(𝜎 + 𝜂)) − V(𝜎 + 𝜂, x̄(𝜎 + 𝜂)) + V(𝜎 + 𝜂, x̄(𝜎 + 𝜂)) − V(𝜎, x̄(𝜎))
⩽ sup_{𝜉∈[𝜎,𝜎+𝜂]} ∥x(𝜉) − x̄(𝜉)∥ = sup_{𝜉∈[𝜎,𝜎+𝜂]} ∥x(𝜉) − x(𝜎) + x̄(𝜎) − x̄(𝜉)∥
= sup_{𝜉∈[𝜎,𝜎+𝜂]} ‖x(𝜉) − x(𝜎) − ∫_{𝜎}^{𝜉} DF(x̄(𝜏), t)‖
= sup_{𝜉∈[𝜎,𝜎+𝜂]} ‖x(𝜉) − x(𝜎) − ∫_{𝜎}^{𝜉} DF(x(𝜏), t) + ∫_{𝜎}^{𝜉} D[F(x(𝜏), t) − F(x̄(𝜏), t)]‖
⩽ sup_{𝜉∈[𝜎,𝜎+𝜂]} ‖x(𝜉) − x(𝜎) − ∫_{𝜎}^{𝜉} DF(x(𝜏), t)‖ + sup_{𝜉∈[𝜎,𝜎+𝜂]} ‖∫_{𝜎}^{𝜉} D[F(x(𝜏), t) − F(x̄(𝜏), t)]‖
⩽ sup_{𝜉∈[𝜎,𝜎+𝜂]} ‖x(𝜉) − x(𝜎) − ∫_{𝜎}^{𝜉} DF(x(𝜏), t)‖ + 𝜖𝜂.

Making 𝜖 → 0, we derive that

V(𝜎 + 𝜂, x(𝜎 + 𝜂)) − V(𝜎, x(𝜎)) ⩽ sup_{𝜉∈[𝜎,𝜎+𝜂]} ‖x(𝜉) − x(𝜎) − ∫_{𝜎}^{𝜉} DF(x(𝜏), t)‖. (8.37)

Let P ∶ [𝛼, 𝛽] → X be defined by

P(s) = x(s) − x(𝛼) − ∫_{𝛼}^{s} DF(x(𝜏), t), for all s ∈ [𝛼, 𝛽].

Note that

∥P(s2) − P(s1)∥ ⩽ ∥x(s2) − x(s1)∥ + ‖∫_{s1}^{s2} DF(x(𝜏), t)‖ ⩽ ∥x(s2) − x(s1)∥ + |h(s2) − h(s1)|,

for all s1, s2 ∈ [𝛼, 𝛽], where the last inequality follows from Lemma 4.5. Thus, every point in [𝛼, 𝛽] at which h and x are continuous is a point of continuity of the function P and, because h and x are left-continuous on (𝛼, 𝛽], P is also left-continuous on (𝛼, 𝛽]. In addition, once x is regulated, the function s → ∫_{𝛼}^{s} DF(x(𝜏), t) is also regulated and, hence, P is regulated. Note, as well, that

∥P(𝜉) − P(𝜎)∥ = ‖x(𝜉) − x(𝛼) − ∫_{𝛼}^{𝜉} DF(x(𝜏), t) − (x(𝜎) − x(𝛼) − ∫_{𝛼}^{𝜎} DF(x(𝜏), t))‖
⩽ ‖x(𝜉) − x(𝛼) − ∫_{𝛼}^{𝜉} DF(x(𝜏), t)‖ + ‖x(𝜎) − x(𝛼) − ∫_{𝛼}^{𝜎} DF(x(𝜏), t)‖. (8.38)

For 𝜎 ∈ [𝛼, 𝛽] fixed, define f ∶ [𝛼, 𝛽] → ℝ by

f(t) = sup_{𝜉∈[t,𝜎]} ∥P(𝜉) − P(𝜎)∥, if t ∈ [𝛼, 𝜎], and f(t) = sup_{𝜉∈[𝜎,t]} ∥P(𝜉) − P(𝜎)∥, if t ∈ [𝜎, 𝛽].

In this way, f is well-defined and left-continuous on (𝛼, 𝛽], provided P is left-continuous on (𝛼, 𝛽]. Furthermore,

f(𝜎 + 𝜂) − f(𝜎) = sup_{𝜉∈[𝜎,𝜎+𝜂]} ‖x(𝜉) − x(𝜎) − ∫_{𝜎}^{𝜉} DF(x(𝜏), t)‖.

By the last equality and (8.37), we obtain

V(𝜎 + 𝜂, x(𝜎 + 𝜂)) − V(𝜎, x(𝜎)) ⩽ f(𝜎 + 𝜂) − f(𝜎).

Using Proposition 1.8, we conclude that V(𝑣, x(𝑣)) − V(𝛾, x(𝛾)) ⩽ f(𝑣) − f(𝛾), for all 𝛾, 𝑣 ∈ [𝛼, 𝛽] with 𝛾 ⩽ 𝑣. To finalize the proof, we need to show that

f(𝑣) − f(𝛾) ⩽ 2 sup_{𝜉∈[𝛼,𝛽]} ‖x(𝜉) − x(𝛼) − ∫_{𝛼}^{𝜉} DF(x(𝜏), t)‖, (8.39)

for all 𝛾, 𝑣 ∈ [𝛼, 𝛽] with 𝛾 ⩽ 𝑣. Let us consider three cases.

Case 1: If 𝛾 < 𝑣 ⩽ 𝜎, then

f(𝑣) − f(𝛾) = sup_{𝜉∈[𝑣,𝜎]} ∥P(𝜉) − P(𝜎)∥ − sup_{𝜉∈[𝛾,𝜎]} ∥P(𝜉) − P(𝜎)∥
⩽ sup_{𝜉∈[𝛾,𝜎]} ∥P(𝜉) − P(𝜎)∥ − sup_{𝜉∈[𝛾,𝜎]} ∥P(𝜉) − P(𝜎)∥ = 0 ⩽ 2 sup_{𝜉∈[𝛼,𝛽]} ‖x(𝜉) − x(𝛼) − ∫_{𝛼}^{𝜉} DF(x(𝜏), t)‖.

Case 2: If 𝛾 < 𝜎 ⩽ 𝑣, then

f(𝑣) − f(𝛾) = sup_{𝜉∈[𝜎,𝑣]} ∥P(𝜉) − P(𝜎)∥ − sup_{𝜉∈[𝛾,𝜎]} ∥P(𝜉) − P(𝜎)∥ ⩽ sup_{𝜉∈[𝜎,𝑣]} ∥P(𝜉) − P(𝜎)∥
⩽ sup_{𝜉∈[𝜎,𝑣]} (‖x(𝜉) − x(𝛼) − ∫_{𝛼}^{𝜉} DF(x(𝜏), t)‖ + ‖x(𝜎) − x(𝛼) − ∫_{𝛼}^{𝜎} DF(x(𝜏), t)‖) [by (8.38)]
⩽ 2 sup_{𝜉∈[𝛼,𝛽]} ‖x(𝜉) − x(𝛼) − ∫_{𝛼}^{𝜉} DF(x(𝜏), t)‖.

Case 3: If 𝜎 ⩽ 𝛾 < 𝑣, then

f(𝑣) − f(𝛾) = sup_{𝜉∈[𝜎,𝑣]} ∥P(𝜉) − P(𝜎)∥ − sup_{𝜉∈[𝜎,𝛾]} ∥P(𝜉) − P(𝜎)∥
⩽ sup_{𝜉∈[𝜎,𝛾]} ∥P(𝜉) − P(𝜎)∥ + sup_{𝜉∈[𝛾,𝑣]} ∥P(𝜉) − P(𝜎)∥ − sup_{𝜉∈[𝜎,𝛾]} ∥P(𝜉) − P(𝜎)∥
= sup_{𝜉∈[𝛾,𝑣]} ∥P(𝜉) − P(𝜎)∥
⩽ sup_{𝜉∈[𝛾,𝑣]} (‖x(𝜉) − x(𝛼) − ∫_{𝛼}^{𝜉} DF(x(𝜏), t)‖ + ‖x(𝜎) − x(𝛼) − ∫_{𝛼}^{𝜎} DF(x(𝜏), t)‖) [by (8.38)]
⩽ 2 sup_{𝜉∈[𝛼,𝛽]} ‖x(𝜉) − x(𝛼) − ∫_{𝛼}^{𝜉} DF(x(𝜏), t)‖,

and the proof is complete. ◽



The next result can be found in [7, Theorem 3.8] and is known as a Lyapunov-type theorem for regular stability. It shows that the existence of a Lyapunov functional with respect to the generalized ODE (8.1) satisfying certain properties implies that the trivial solution x ≡ 0 of the generalized ODE (8.1) is regularly stable.

Theorem 8.33: Let F ∈ ℱ(Ω, h) satisfy condition (ZS) and let V ∶ [t0, ∞) × X → ℝ be a Lyapunov functional with respect to the generalized ODE (8.1) satisfying the conditions of Lemma 8.32. Moreover, assume that:

(i) V(t, 0) = 0, for all t ∈ [t0, ∞);


(ii) there exists a continuous increasing function a ∶ ℝ+ → ℝ+ satisfying a(0) = 0, such that

V(t, z) ⩽ a(∥z∥), for all z ∈ X and t ∈ [t0, ∞).

Then, the trivial solution x ≡ 0 of the generalized ODE (8.1) is regularly stable.

Proof. By condition (LF2) of Definition 8.1, there exists an increasing function b ∶ ℝ+ → ℝ+ satisfying

b(∥x∥) ⩽ V(t, x), for every (t, x) ∈ [t0, ∞) × X.

Let s0 ⩾ t0 and 𝜖 > 0. Since a(0) = 0, a is continuous and increasing, and b(𝜖) > 0, there exists 𝛿 = 𝛿(𝜖) such that 0 < 𝛿 < b(𝜖)/3 and a(𝛿) < b(𝜖)/3. We want to prove that

∥x(t)∥ < 𝜖, for all t ∈ [𝛼, 𝛽],

where t0 ⩽ 𝛼 < 𝛽 < ∞ and x ∶ [𝛼, 𝛽] → X is a regulated function, left-continuous on (𝛼, 𝛽], satisfying

∥x(𝛼)∥ < 𝛿 and sup_{s∈[𝛼,𝛽]} ‖x(s) − x(𝛼) − ∫_{𝛼}^{s} DF(x(𝜏), t)‖ < 𝛿.

By Lemma 8.32, condition (ii), and the fact that a(𝛿)
0 for x ≠ 0, such that for every solution x ∶ [𝛼, 𝛽] ⊂ [t0, ∞) → X of the generalized ODE (8.1), we have, for every t ∈ [𝛼, 𝛽),

D+V(t, x(t)) = lim sup_{𝜂→0+} (V(t + 𝜂, x(t + 𝜂)) − V(t, x(t)))/𝜂 ⩽ −Φ(x(t)). (8.40)

Then, the trivial solution x ≡ 0 of the generalized ODE (8.1) is regularly asymptotically stable.

Proof. By Theorem 8.33, it remains to prove that the solution x ≡ 0 of the generalized ODE (8.1) is regularly attracting. At first, notice that, using the ideas of Lemma 8.32 and Eq. (8.40), we can prove that, for every regulated function y ∶ [𝛼, 𝛽] ⊂ [t0, ∞) → X left-continuous on (𝛼, 𝛽], we have

V(𝑣, y(𝑣)) − V(𝛾, y(𝛾)) ⩽ sup_{𝜉∈[𝛼,𝛽]} ‖y(𝜉) − y(𝛼) − ∫_{𝛼}^{𝜉} DF(y(𝜏), t)‖ + M(𝑣 − 𝛾), (8.41)

for all 𝑣, 𝛾 ∈ [𝛼, 𝛽] with 𝛾 ⩽ 𝑣, where M = sup_{𝜉∈[𝛼,𝛽]} (−Φ(y(𝜉))).

From the regular stability of the trivial solution x ≡ 0 of the generalized ODE (8.1), it follows that, for every a > 0 with a < 𝜌, there exists 𝛿0 ∈ (0, a) such that if x ∈ G−([s0, s1], X) satisfies

∥x(s0)∥ < 𝛿0 and sup_{s∈[s0,s1]} ‖x(s) − x(s0) − ∫_{s0}^{s} DF(x(𝜏), t)‖ < 𝛿0,

then

∥x(t)∥ < a, for all t ∈ [s0, s1],

that is, x(t) ∈ Ba = {x ∈ X ∶ ∥x∥ < a} ⊂ B𝜌 = {x ∈ X ∶ ∥x∥ < 𝜌} ⊂ B̄𝜌 ⊂ O, for every t ∈ [s0, s1].

Let 𝜖 > 0 be arbitrary. Using again the regular stability of the trivial solution x ≡ 0 of the generalized ODE (8.1), there is 𝛿 = 𝛿(𝜖) > 0 such that, for every function x ∈ G−([s2, s3], X) with

∥x(s2)∥ < 𝛿(𝜖) and sup_{s∈[s2,s3]} ‖x(s) − x(s2) − ∫_{s2}^{s} DF(x(𝜏), t)‖ < 𝛿(𝜖), (8.42)

we have

∥x(t)∥ < 𝜖, (8.43)

for every t ∈ [s2, s3]. Define 𝜌(𝜖) = min{𝛿0, 𝛿(𝜖)} > 0 and T(𝜖) = −(𝛿0 + 𝜌(𝜖))/N, where

N = sup{−Φ(x) ∶ 𝜌(𝜖) ⩽ ∥x∥ < 𝜖} = −inf{Φ(x) ∶ 𝜌(𝜖) ⩽ ∥x∥ < 𝜖} < 0.


Assume that x ∶ [s0, s1] → X is a regulated function on [s0, s1], left-continuous on (s0, s1], such that ∥x(s0)∥ < 𝛿0 and

sup_{s∈[s0,s1]} ‖x(s) − x(s0) − ∫_{s0}^{s} DF(x(𝜏), t)‖ < 𝜌(𝜖). (8.44)

Suppose that T(𝜖) < s1 − s0. We claim that there is t̄ ∈ [s0, s0 + T(𝜖)] such that ∥x(t̄)∥ < 𝜌(𝜖). Assume the contrary, that is, ∥x(s)∥ ⩾ 𝜌(𝜖) for all s ∈ [s0, s0 + T(𝜖)]. By (8.41), we have

V(s0 + T(𝜖), x(s0 + T(𝜖))) ⩽ V(s0, x(s0)) + sup_{s∈[s0,s0+T(𝜖)]} ‖x(s) − x(s0) − ∫_{s0}^{s} DF(x(𝜏), t)‖ + NT(𝜖)
⩽ ∥x(s0)∥ + 𝜌(𝜖) + N (−(𝛿0 + 𝜌(𝜖))/N)
⩽ 𝛿0 + 𝜌(𝜖) + N (−(𝛿0 + 𝜌(𝜖))/N) = 0,

which contradicts the inequality

V(s0 + T(𝜖), x(s0 + T(𝜖))) ⩾ b(∥x(s0 + T(𝜖))∥) ⩾ b(𝜌(𝜖)) > 0.

Therefore, there exists t̄ ∈ [s0, s0 + T(𝜖)] such that ∥x(t̄)∥ < 𝜌(𝜖). Once (8.42) holds in view of the choice of 𝜌(𝜖) and, moreover, (8.43) holds for the case where s2 = t̄ and s3 = s1, by (8.44), we obtain

∥x(t)∥ < 𝜖, for t ∈ [t̄, s1].

Consequently,

∥x(t)∥ < 𝜖, for t > s0 + T(𝜖),

since t̄ ∈ [s0, s0 + T(𝜖)], and this completes the proof. ◽



8.5.2 Converse Lyapunov Theorem

Motivated by Š. Schwabik's work [208], where he proved converse Lyapunov theorems concerning variational stability, the authors of [7] established converse Lyapunov theorems concerning regular stability. In this subsection, we describe such results.

Throughout this subsection, we assume that F ∈ ℱ(Ω, h) satisfies condition (ZS), where Ω = X × [t0, ∞). By Corollary 5.16, for every (x0, s0) ∈ Ω, there exists a unique global forward solution x ∶ [s0, ∞) → X of the generalized ODE (8.1). Given s ⩾ t0 and x ∈ X, define

A(s, x) = {𝜑 ∈ G([t0, ∞), X) ∶ 𝜑(t0) = 0, 𝜑(s) = x, 𝜑 is left-continuous on (t0, ∞)} (8.45)


and V ∶ [t0, ∞) × X → ℝ by

V(s, x) = inf_{𝜑∈A(s,x)} sup_{𝜎∈[t0,s]} ‖𝜑(𝜎) − ∫_{t0}^{𝜎} DF(𝜑(𝜏), t)‖, if s > t0, and V(t0, x) = ∥x∥. (8.46)

Remark 8.35: Since 𝜑 ∈ A(s, x) is a regulated function, by Corollary 4.8, the Kurzweil integral ∫_{t0}^{𝜎} DF(𝜑(𝜏), t) exists, for every 𝜎 ∈ [t0, ∞), and the mapping [t0, ∞) ∋ s → ∫_{t0}^{s} DF(𝜑(𝜏), t) is regulated, as well as the mapping [t0, ∞) ∋ s → 𝜑(s) − ∫_{t0}^{s} DF(𝜑(𝜏), t). On the other hand, since G([a, b], X) ⊂ B([a, b], X), where B([a, b], X) denotes the space of all bounded functions f ∶ [a, b] → X, we have

sup_{𝜎∈[a,b]} ‖𝜑(𝜎) − ∫_{t0}^{𝜎} DF(𝜑(𝜏), t)‖ < ∞, for all [a, b] ⊂ [t0, ∞).

In particular,

sup_{𝜎∈[t0,s]} ‖𝜑(𝜎) − ∫_{t0}^{𝜎} DF(𝜑(𝜏), t)‖ < ∞, for all s ⩾ t0.

Therefore, the function V, given by (8.46), is well-defined for all (s, x) ∈ [t0, ∞) × X.

Our next goal is to show that if the trivial solution of the generalized ODE (8.1) is regularly stable, then V, given by (8.46), is a Lyapunov functional with respect to the generalized ODE (8.1) and satisfies all conditions of Theorem 8.33. The next result shows that V, given by (8.46), satisfies condition (i) of Theorem 8.33.

Lemma 8.36: The function V defined in (8.46) satisfies:

(i) V(s, 0) = 0, for all s ⩾ t0;
(ii) V(s, x) ⩾ 0, for all x ∈ X and all s ⩾ t0.

Proof. Since 𝜑 ≡ 0 ∈ A(s, 0), we have V(s, 0) = 0, for all s ⩾ t0. Item (ii) follows directly from the definition of V. ◽

The next result gives an estimate for the function V defined by (8.46).

Lemma 8.37: Let F ∈ ℱ(Ω, h) and V ∶ [t0, ∞) × X → ℝ be defined by (8.46). Then, for all x, y ∈ X and all s ∈ [t0, ∞), we have

V(s, y) − V(s, x) ⩽ ∥y − x∥.


Proof. If s = t0, then V(t0, y) − V(t0, x) = ‖y‖ − ‖x‖ ⩽ ‖y − x‖ and the proof is complete. Now, assume that s > t0. Let 𝜂 > 0 be such that 0 < 𝜂 < s − t0 and let 𝜑 ∈ A(s, x) be an arbitrary function. Define 𝜑𝜂 ∶ [t0, ∞) → X by

𝜑𝜂(𝜎) = 𝜑(𝜎),  if 𝜎 ∈ [t0, s − 𝜂],
𝜑𝜂(𝜎) = 𝜑(𝜎) + ((𝜎 − s + 𝜂)∕𝜂)(y − x),  if 𝜎 ∈ [s − 𝜂, s],
𝜑𝜂(𝜎) = y,  if 𝜎 ∈ (s, ∞).        (8.47)

It is not difficult to see that 𝜑𝜂 ∈ A(s, y). By the definition of V in (8.46),

V(s, y) ⩽ sup_{𝜎∈[t0,s]} ‖𝜑𝜂(𝜎) − ∫_{t0}^{𝜎} DF(𝜑𝜂(𝜏), t)‖.        (8.48)

Moreover,

sup_{𝜎∈[t0,s]} ‖𝜑𝜂(𝜎) − ∫_{t0}^{𝜎} DF(𝜑𝜂(𝜏), t)‖ − sup_{𝜎∈[t0,s]} ‖𝜑(𝜎) − ∫_{t0}^{𝜎} DF(𝜑(𝜏), t)‖
⩽ sup_{𝜎∈[t0,s]} ‖𝜑𝜂(𝜎) − ∫_{t0}^{𝜎} DF(𝜑𝜂(𝜏), t) − 𝜑(𝜎) + ∫_{t0}^{𝜎} DF(𝜑(𝜏), t)‖
⩽ sup_{𝜎∈[t0,s−𝜂]} ‖𝜑𝜂(𝜎) − ∫_{t0}^{𝜎} DF(𝜑𝜂(𝜏), t) − 𝜑(𝜎) + ∫_{t0}^{𝜎} DF(𝜑(𝜏), t)‖
  + sup_{𝜎∈[s−𝜂,s]} ‖𝜑𝜂(𝜎) − ∫_{t0}^{𝜎} DF(𝜑𝜂(𝜏), t) − 𝜑(𝜎) + ∫_{t0}^{𝜎} DF(𝜑(𝜏), t)‖.        (8.49)

By (8.47), (8.48), and (8.49), we obtain

V(s, y) ⩽ sup_{𝜎∈[t0,s−𝜂]} ‖𝜑(𝜎) − ∫_{t0}^{𝜎} DF(𝜑(𝜏), t) − 𝜑(𝜎) + ∫_{t0}^{𝜎} DF(𝜑(𝜏), t)‖
  + sup_{𝜎∈[s−𝜂,s]} ‖𝜑(𝜎) + ((𝜎 − s + 𝜂)∕𝜂)(y − x) − ∫_{t0}^{𝜎} DF(𝜑𝜂(𝜏), t) − 𝜑(𝜎) + ∫_{t0}^{𝜎} DF(𝜑(𝜏), t)‖
  + sup_{𝜎∈[t0,s]} ‖𝜑(𝜎) − ∫_{t0}^{𝜎} DF(𝜑(𝜏), t)‖
= sup_{𝜎∈[s−𝜂,s]} ‖((𝜎 − s + 𝜂)∕𝜂)(y − x) − ∫_{s−𝜂}^{𝜎} DF(𝜑𝜂(𝜏), t) + ∫_{s−𝜂}^{𝜎} DF(𝜑(𝜏), t)‖
  + sup_{𝜎∈[t0,s]} ‖𝜑(𝜎) − ∫_{t0}^{𝜎} DF(𝜑(𝜏), t)‖
⩽ sup_{𝜎∈[s−𝜂,s]} ‖((𝜎 − s + 𝜂)∕𝜂)(y − x)‖ + sup_{𝜎∈[s−𝜂,s]} ‖∫_{s−𝜂}^{𝜎} DF(𝜑𝜂(𝜏), t)‖
  + sup_{𝜎∈[s−𝜂,s]} ‖∫_{s−𝜂}^{𝜎} DF(𝜑(𝜏), t)‖ + sup_{𝜎∈[t0,s]} ‖𝜑(𝜎) − ∫_{t0}^{𝜎} DF(𝜑(𝜏), t)‖
⩽ ‖y − x‖ + 2 sup_{𝜎∈[s−𝜂,s]} |h(𝜎) − h(s − 𝜂)| + sup_{𝜎∈[t0,s]} ‖𝜑(𝜎) − ∫_{t0}^{𝜎} DF(𝜑(𝜏), t)‖
= ‖y − x‖ + 2|h(s) − h(s − 𝜂)| + sup_{𝜎∈[t0,s]} ‖𝜑(𝜎) − ∫_{t0}^{𝜎} DF(𝜑(𝜏), t)‖,

where the inequality

‖∫_{s−𝜂}^{𝜎} DF(𝜑(𝜏), t)‖ ⩽ |h(𝜎) − h(s − 𝜂)|

follows from Lemma 4.5. Making 𝜂 → 0⁺, we obtain

V(s, y) ⩽ ‖y − x‖ + sup_{𝜎∈[t0,s]} ‖𝜑(𝜎) − ∫_{t0}^{𝜎} DF(𝜑(𝜏), t)‖.

Then, taking the infimum over all 𝜑 ∈ A(s, x) on the right-hand side of the last inequality, we conclude that

V(s, y) ⩽ ‖y − x‖ + V(s, x),

and the proof is complete. ◽



The proof of the next result follows from Lemma 8.37 and from the fact that V, defined by (8.46), satisfies V(s, 0) = 0 for all s ⩾ t0.

Corollary 8.38: If F ∈ ℱ(Ω, h), then V ∶ [t0, ∞) × X → ℝ defined by (8.46) satisfies

V(s, x) ⩽ ‖x‖,  for all x ∈ X and s ∈ [t0, ∞).

The next result will be of interest in the forthcoming considerations.

Lemma 8.39: Let F ∈ ℱ(Ω, h) and V ∶ [t0, ∞) × X → ℝ be defined by (8.46). Then, the function t ↦ V(t, x(t)) is nonincreasing along every global forward


solution x ∶ [t0, ∞) → X of the generalized ODE (8.1), denoted by x(⋅, s0, x0) for (s0, x0) ∈ [t0, ∞) × X, with initial condition x(s0) = x0.

Proof. Let x ∶ [s0, ∞) → X be the global forward solution of the generalized ODE (8.1) with initial condition x(s0) = x0. Let t1, t2 ∈ [s0, ∞) be such that t2 > t1 and consider 𝜑 ∈ A(t1, x(t1)). Let 𝜙 ∶ [t0, ∞) → X be defined by

𝜙(𝜎) = 𝜑(𝜎),  if 𝜎 ∈ [t0, t1],
𝜙(𝜎) = x(𝜎),  if 𝜎 ∈ [t1, t2],
𝜙(𝜎) = x(t2),  if 𝜎 ∈ (t2, ∞).

By Lemma 4.9, x is left-continuous on (t1, t2] and, by the definition of the set A(t1, x(t1)), 𝜑 is left-continuous on (t0, ∞) (see (8.45)). Moreover, since 𝜙(t2) = 𝜙(t) = x(t2) for all t ⩾ t2, we conclude that 𝜙 is left-continuous on (t0, ∞) and, therefore, 𝜙 ∈ A(t2, x(t2)). Thus, V given by (8.46) satisfies

V(t2, x(t2)) ⩽ sup_{𝜎∈[t0,t2]} ‖𝜙(𝜎) − ∫_{t0}^{𝜎} DF(𝜙(𝜏), s)‖.        (8.50)

Now, define f ∶ [t0, t2] → X by

f(𝜎) = 𝜙(𝜎) − ∫_{t0}^{𝜎} DF(𝜙(𝜏), t),  for all 𝜎 ∈ [t0, t2].

Notice that, for all t, s ∈ [t0, t2] with s < t, we have

‖f(t) − f(s)‖ = ‖𝜙(t) − ∫_{t0}^{t} DF(𝜙(𝜏), 𝑣) − (𝜙(s) − ∫_{t0}^{s} DF(𝜙(𝜏), 𝑣))‖
= ‖𝜙(t) − 𝜙(s) − ∫_{s}^{t} DF(𝜙(𝜏), 𝑣)‖
⩽ ‖𝜙(t) − 𝜙(s)‖ + ‖∫_{s}^{t} DF(𝜙(𝜏), 𝑣)‖
⩽ ‖𝜙(t) − 𝜙(s)‖ + |h(t) − h(s)|,        (8.51)

where the last inequality follows from Lemma 4.5. Therefore, every point in [t0, t2] at which h and 𝜑 are continuous is a point of continuity of the function f and, because h and 𝜑 are left-continuous on (t0, t2], the function f is also left-continuous on (t0, t2]. Besides, f is regulated, since 𝜙 is regulated. Therefore, by Proposition 1.6, we can consider two cases with respect to

sup_{𝜎∈[t0,t2]} ‖𝜙(𝜎) − ∫_{t0}^{𝜎} DF(𝜙(𝜏), s)‖.


Case 1: Suppose, for some 𝑣 ∈ [t0, t2], we have

sup_{𝜎∈[t0,t2]} ‖𝜙(𝜎) − ∫_{t0}^{𝜎} DF(𝜙(𝜏), s)‖ = ‖𝜙(𝑣) − ∫_{t0}^{𝑣} DF(𝜙(𝜏), s)‖.

Then, either 𝑣 ∈ [t0, t1] or 𝑣 ∈ [t1, t2]. Firstly, consider 𝑣 ∈ [t0, t1]. Then

sup_{𝜎∈[t0,t2]} ‖𝜙(𝜎) − ∫_{t0}^{𝜎} DF(𝜙(𝜏), s)‖ = sup_{𝜎∈[t0,t1]} ‖𝜙(𝜎) − ∫_{t0}^{𝜎} DF(𝜙(𝜏), s)‖.

By the fact that 𝜙(t) = 𝜑(t) for all t ∈ [t0, t1], we get

sup_{𝜎∈[t0,t2]} ‖𝜙(𝜎) − ∫_{t0}^{𝜎} DF(𝜙(𝜏), s)‖ = sup_{𝜎∈[t0,t1]} ‖𝜑(𝜎) − ∫_{t0}^{𝜎} DF(𝜑(𝜏), s)‖.

From this and (8.50), we derive that

V(t2, x(t2)) ⩽ sup_{𝜎∈[t0,t1]} ‖𝜑(𝜎) − ∫_{t0}^{𝜎} DF(𝜑(𝜏), s)‖.

Then, taking the infimum over all 𝜑 ∈ A(t1, x(t1)) on the right-hand side of the last inequality, we have V(t2, x(t2)) ⩽ V(t1, x(t1)).

On the other hand, since 𝜙(t) = x(t) for all t ∈ [t1, t2], if 𝑣 ∈ [t1, t2], it follows that

‖𝜙(𝑣) − ∫_{t0}^{𝑣} DF(𝜙(𝜏), s)‖ = ‖𝜙(𝑣) − ∫_{t0}^{t1} DF(𝜙(𝜏), s) − ∫_{t1}^{𝑣} DF(𝜙(𝜏), s)‖
= ‖x(𝑣) − ∫_{t0}^{t1} DF(𝜑(𝜏), s) − ∫_{t1}^{𝑣} DF(x(𝜏), s)‖.        (8.52)

By the definition of solution of the generalized ODE (8.1), we have

x(𝑣) − ∫_{t1}^{𝑣} DF(x(𝜏), s) = x(t1) = 𝜑(t1).        (8.53)

Replacing (8.53) in (8.52), we get

‖𝜙(𝑣) − ∫_{t0}^{𝑣} DF(𝜙(𝜏), s)‖ = ‖𝜑(t1) − ∫_{t0}^{t1} DF(𝜑(𝜏), s)‖
⩽ sup_{𝜎∈[t0,t1]} ‖𝜑(𝜎) − ∫_{t0}^{𝜎} DF(𝜑(𝜏), s)‖.        (8.54)

Now, (8.54) implies the following inequality:

V(t2, x(t2)) ⩽ sup_{𝜎∈[t0,t1]} ‖𝜑(𝜎) − ∫_{t0}^{𝜎} DF(𝜑(𝜏), s)‖.


Taking the infimum over all 𝜑 ∈ A(t1, x(t1)) on the right-hand side of the inequality above, we obtain V(t2, x(t2)) ⩽ V(t1, x(t1)).

Case 2: Suppose, for some 𝑣 ∈ [t0, t2), we have

sup_{𝜎∈[t0,t2]} ‖𝜙(𝜎) − ∫_{t0}^{𝜎} DF(𝜙(𝜏), s)‖ = ‖𝜙(𝑣⁺) − lim_{𝜎→𝑣⁺} ∫_{t0}^{𝜎} DF(𝜙(𝜏), s)‖.

Hence, either 𝑣 ∈ [t0, t1) or 𝑣 ∈ [t1, t2). If 𝑣 ∈ [t0, t1), then the proof follows as in Case 1. By Theorem 4.4 and Lemma 4.10, if x ∶ [t0, ∞) → X is the solution of the generalized ODE (8.1), then, for all 𝑣 ∈ [t0, ∞), we have

x(𝑣⁺) − lim_{𝜎→𝑣⁺} ∫_{t1}^{𝜎} DF(x(𝜏), s) = x(𝑣) − ∫_{t1}^{𝑣} DF(x(𝜏), s).        (8.55)

Now, since 𝑣 ∈ [t1, t2) and 𝜙|[t1,t2) = x, then

sup_{𝜎∈[t0,t2]} ‖𝜙(𝜎) − ∫_{t0}^{𝜎} DF(𝜙(𝜏), s)‖
= ‖𝜙(𝑣⁺) − lim_{𝜎→𝑣⁺} ∫_{t0}^{𝜎} DF(𝜙(𝜏), s)‖
= ‖x(𝑣⁺) − lim_{𝜎→𝑣⁺} ∫_{t0}^{𝜎} DF(𝜙(𝜏), s)‖
= ‖x(𝑣⁺) − ∫_{t0}^{t1} DF(𝜑(𝜏), s) − lim_{𝜎→𝑣⁺} ∫_{t1}^{𝜎} DF(x(𝜏), s)‖
= ‖x(𝑣) − ∫_{t0}^{t1} DF(𝜑(𝜏), s) − ∫_{t1}^{𝑣} DF(x(𝜏), s)‖   [by (8.55)]
= ‖x(t1) − ∫_{t0}^{t1} DF(𝜑(𝜏), s)‖   [by (8.53)]
= ‖𝜑(t1) − ∫_{t0}^{t1} DF(𝜑(𝜏), s)‖
⩽ sup_{𝜎∈[t0,t1]} ‖𝜑(𝜎) − ∫_{t0}^{𝜎} DF(𝜑(𝜏), s)‖.

By (8.50),

V(t2, x(t2)) ⩽ sup_{𝜎∈[t0,t1]} ‖𝜑(𝜎) − ∫_{t0}^{𝜎} DF(𝜑(𝜏), s)‖.

Thus, if we take the infimum over all 𝜑 ∈ A(t1, x(t1)) on the right-hand side of the last inequality, we conclude that V(t2, x(t2)) ⩽ V(t1, x(t1)). ◽

In what follows, we present a lemma which proves to be essential in the proofs of Lemmas 8.41 and 8.42.

Lemma 8.40: For all s ⩾ t0 and all x ∈ X, the set A(s, x) defined by (8.45) is closed.


Proof. Consider a sequence {𝜑n}n∈ℕ in A(s, x) such that {𝜑n}n∈ℕ converges to the function 𝜑 in G⁻([t0, ∞), X) with the topology of locally uniform convergence. We want to show that 𝜑 ∈ A(s, x). Notice that 𝜑 satisfies

𝜑(t0) = lim_{n→∞} 𝜑n(t0) = lim_{n→∞} 0 = 0  and  𝜑(s) = lim_{n→∞} 𝜑n(s) = lim_{n→∞} x = x.

We want to prove that 𝜑 is left-continuous on (t0, ∞). In order to do that, we show that 𝜑 is left-continuous on (𝛼, 𝛽], for each (𝛼, 𝛽] ⊂ (t0, ∞). Indeed, given t ∈ (𝛼, 𝛽] ⊂ (t0, ∞), we apply the Moore–Osgood theorem (see, e.g., [19]) to obtain

lim_{𝜉→t⁻} 𝜑(𝜉) = lim_{𝜉→t⁻} lim_{n→∞} 𝜑n(𝜉) = lim_{n→∞} lim_{𝜉→t⁻} 𝜑n(𝜉) = lim_{n→∞} 𝜑n(t⁻) = lim_{n→∞} 𝜑n(t) = 𝜑(t),

for all t ∈ (𝛼, 𝛽]. Therefore, 𝜑(t0) = 0, 𝜑(s) = x, and 𝜑 is left-continuous on (t0, ∞), which implies that 𝜑 ∈ A(s, x). Hence, A(s, x) is closed. ◽

The next result shows that V given by (8.46) satisfies condition (LF1) of Definition 8.1.

Lemma 8.41: Let F ∈ ℱ(Ω, h) and V ∶ [t0, ∞) × X → ℝ be defined by (8.46). Then, the function V(⋅, y) ∶ [t0, ∞) → ℝ is left-continuous on (t0, ∞), for all y ∈ X.

Proof. Let y ∈ X and 𝜎0 ∈ (t0, ∞) be given. By Lemma 8.40, there exists 𝜓 ∈ A(𝜎0, y) satisfying

V(𝜎0, y) = sup_{𝜎∈[t0,𝜎0]} ‖𝜓(𝜎) − ∫_{t0}^{𝜎} DF(𝜓(𝜏), t)‖.

Since h and 𝜓 are left-continuous on (t0, ∞), for all 𝜖 > 0, there exists 𝛿0 > 0 such that if t ∈ [𝜎0 − 𝛿0, 𝜎0), then

|h(t) − h(𝜎0)| < 𝜖  and  ‖𝜓(t) − 𝜓(𝜎0)‖ < 𝜖.        (8.56)

We need to prove that |V(t, y) − V(𝜎0, y)| < 𝜖 for all t ∈ [𝜎0 − 𝛿0, 𝜎0). At first, we observe that

V(𝜎0, y) = sup_{𝜎∈[t0,𝜎0]} ‖𝜓(𝜎) − ∫_{t0}^{𝜎} DF(𝜓(𝜏), s)‖ ⩾ sup_{𝜎∈[t0,t]} ‖𝜓(𝜎) − ∫_{t0}^{𝜎} DF(𝜓(𝜏), s)‖ ⩾ V(t, 𝜓(t)).

On the other hand, by Lemma 8.37 and by (8.56), we have

V(t, y) − V(𝜎0, y) ⩽ V(t, y) − V(t, 𝜓(t)) ⩽ ‖y − 𝜓(t)‖ = ‖𝜓(𝜎0) − 𝜓(t)‖ < 𝜖

for all t ∈ [𝜎0 − 𝛿0, 𝜎0).


It remains to show that V(𝜎0, y) − V(t, y) < 𝜖 for all t ∈ [𝜎0 − 𝛿0, 𝜎0). Let x ∶ [t, ∞) → X be the global forward solution of the generalized ODE (8.1) with initial condition x(t) = y. Then, by Lemma 8.39, V(𝜎0, x(𝜎0)) − V(t, x(t)) ⩽ 0, since t < 𝜎0. Then,

V(𝜎0, y) − V(t, y) = V(𝜎0, y) − V(𝜎0, x(𝜎0)) + V(𝜎0, x(𝜎0)) − V(t, x(t)) ⩽ V(𝜎0, y) − V(𝜎0, x(𝜎0)) ⩽ ‖x(𝜎0) − y‖,

where the last inequality follows from Lemma 8.37. By the fact that x is the solution of the generalized ODE (8.1) with x(t) = y, it follows that

‖x(𝜎0) − y‖ = ‖∫_{t}^{𝜎0} DF(x(𝜏), s)‖ ⩽ |h(𝜎0) − h(t)|,

where the last inequality follows from Lemma 4.5. Therefore, by (8.56), we conclude that

V(𝜎0, y) − V(t, y) ⩽ ‖x(𝜎0) − y‖ ⩽ |h(𝜎0) − h(t)| < 𝜖,

for all t ∈ [𝜎0 − 𝛿0, 𝜎0), which completes the proof. ◽



In the sequel, we prove that if the trivial solution of the generalized ODE (8.1) is regularly stable, then V, defined by (8.46), satisfies condition (LF2) of Definition 8.1.

Lemma 8.42: Let F ∈ ℱ(Ω, h) satisfy condition (ZS). If the trivial solution x ≡ 0 of the generalized ODE (8.1) is regularly stable, then V defined by (8.46) satisfies:

(i) There exists a continuous, strictly increasing function b ∶ ℝ⁺ → ℝ⁺ satisfying b(0) = 0 such that

V(t, x) ⩾ b(‖x‖),  for every (t, x) ∈ [t0, ∞) × X.

Proof. Suppose that (i) does not hold. Then, there are 𝜖 > 0 and a sequence of pairs {(tk, xk)}k∈ℕ in [t0, ∞) × X such that

𝜖 ⩽ ‖xk‖,        (8.57)

tk → ∞ and V(tk, xk) → 0 as k → ∞. On the other hand, since the trivial solution x ≡ 0 of the generalized ODE (8.1) is regularly stable, it follows from Theorem 8.31 that x ≡ 0 is regularly stable with respect to perturbations.


Take 𝛿 = 𝛿(𝜖) > 0 given in Definition 8.30(i). Since V(tk, xk) → 0 as k → ∞, there exists k0 ∈ ℕ such that V(tk, xk) < 𝛿 for all k > k0. Moreover, since A(tk, xk) is a closed set, for each k > k0, there exists 𝜑k ∈ A(tk, xk) such that

V(tk, xk) = sup_{𝜎∈[t0,tk]} ‖𝜑k(𝜎) − ∫_{t0}^{𝜎} DF(𝜑k(𝜏), t)‖ < 𝛿.        (8.58)

For each k > k0, define Pk ∶ [t0, tk] → X by

Pk(𝜎) = 𝜑k(𝜎) − ∫_{t0}^{𝜎} DF(𝜑k(𝜏), t),  for 𝜎 ∈ [t0, tk].

Then, Pk(t0) = 𝜑k(t0) = 0, since 𝜑k ∈ A(tk, xk), and

sup_{𝜎∈[t0,tk]} ‖Pk(𝜎) − Pk(t0)‖ = sup_{𝜎∈[t0,tk]} ‖Pk(𝜎)‖ = sup_{𝜎∈[t0,tk]} ‖𝜑k(𝜎) − ∫_{t0}^{𝜎} DF(𝜑k(𝜏), t)‖ < 𝛿.

Moreover, it is clear that Pk ∈ G⁻([t0, tk], X). Now, note, in addition, that for 𝜎 ∈ [t0, tk], we have

𝜑k(𝜎) = 𝜑k(t0) + ∫_{t0}^{𝜎} DF(𝜑k(𝜏), t) + (𝜑k(𝜎) − ∫_{t0}^{𝜎} DF(𝜑k(𝜏), t) − 𝜑k(t0))
= 𝜑k(t0) + ∫_{t0}^{𝜎} DF(𝜑k(𝜏), t) + Pk(𝜎) − Pk(t0)
= 𝜑k(t0) + ∫_{t0}^{𝜎} D[F(𝜑k(𝜏), t) + Pk(t)],

which implies that 𝜑k ∶ [t0, tk] → X is the solution of the nonhomogeneous generalized ODE

dx∕d𝜏 = D[F(x, t) + Pk(t)],

with initial condition 𝜑k(t0) = 0. Since the trivial solution of the generalized ODE is regularly stable with respect to perturbations, Pk ∈ G⁻([t0, tk], X) and ‖𝜑k(t0)‖ = 0 < 𝛿, we have ‖𝜑k(t)‖ < 𝜖 for all t ∈ [t0, tk]. In particular, ‖𝜑k(tk)‖ = ‖xk‖ < 𝜖 (since 𝜑k ∈ A(tk, xk)), which contradicts (8.57). Therefore, condition (i) is satisfied. ◽

The next theorem, established in [7], is known as a converse Lyapunov theorem on regular stability for generalized ODEs, because it shows that regular stability implies the existence of a Lyapunov functional with the properties described in Theorem 8.33.

Theorem 8.43: Let F ∈ ℱ(Ω, h) satisfy condition (ZS). If the trivial solution x ≡ 0 of the generalized ODE (8.1) is regularly stable, then there exists a functional V ∶ [t0, ∞) × X → ℝ satisfying the following conditions:


(i) V(⋅, x) ∶ [t0, ∞) → ℝ is left-continuous on (t0, ∞) for all x ∈ X;
(ii) there exists a continuous, strictly increasing function b ∶ ℝ⁺ → ℝ⁺ satisfying b(0) = 0 such that

V(t, x) ⩾ b(‖x‖),  for every (t, x) ∈ [t0, ∞) × X;

(iii) the function t ↦ V(t, x(t)) is nonincreasing along every global forward solution x ∶ [t0, ∞) → X of the generalized ODE (8.1) with x(s0) = x0;
(iv) V(t, 0) = 0 for all t ∈ [t0, ∞);
(v) there exists a continuous increasing function a ∶ ℝ⁺ → ℝ⁺ satisfying a(0) = 0 such that

V(t, z) ⩽ a(‖z‖),  for all z ∈ X and t ∈ [t0, ∞);

(vi) for every solution x ∶ [𝛼, 𝛽] ⊂ [t0, ∞) → X of the generalized ODE (8.1),

D⁺V(t, x(t)) = lim sup_{𝜂→0⁺} [V(t + 𝜂, x(t + 𝜂)) − V(t, x(t))]∕𝜂 ⩽ 0,  t ∈ [𝛼, 𝛽),

holds, that is, the right derivative of V along every solution of the generalized ODE (8.1) is nonpositive.

Proof. By Lemmas 8.41, 8.42, and 8.39, the function V ∶ [t0, ∞) × X → ℝ defined by (8.46) satisfies items (i), (ii), and (iii), respectively. By Lemma 8.36 and Corollary 8.38, V satisfies items (iv) and (v). Finally, it is clear that if V satisfies (iii), then it satisfies (vi). ◽

The next statement is a converse Lyapunov theorem on regular asymptotic stability of the trivial solution x ≡ 0 of the generalized ODE (8.1), and it was also proved in [7].

Theorem 8.44: Let F ∈ ℱ(Ω, h) satisfy condition (ZS). If the trivial solution x ≡ 0 of the generalized ODE (8.1) is regularly asymptotically stable, then there exists a functional V ∶ [t0, ∞) × X → ℝ satisfying the following conditions:

(i) V(t, 0) = 0 for all t ∈ [t0, ∞);
(ii) there exists a continuous increasing function a ∶ ℝ⁺ → ℝ⁺ satisfying a(0) = 0 such that

V(t, y) ⩽ a(‖y‖),  for all y ∈ X and t ∈ [t0, ∞);

(iii) the function t ↦ V(t, x(t)) is nonincreasing along every global forward solution x ∶ [t0, ∞) → X of the generalized ODE (8.1) with x(s0) = x0;
(iv) V(⋅, x) ∶ [t0, ∞) → ℝ is left-continuous on (t0, ∞) for all x ∈ X;


(v) for every solution x ∶ [𝛼, 𝛽] ⊂ [t0, ∞) → X of (8.1), we have

D⁺V(t, x(t)) = lim sup_{𝜂→0⁺} [V(t + 𝜂, x(t + 𝜂)) − V(t, x(t))]∕𝜂 ⩽ −V(t, x(t)),  t ∈ [𝛼, 𝛽);

(vi) there exists a continuous, strictly increasing function b ∶ ℝ⁺ → ℝ⁺ satisfying b(0) = 0 such that

V(t, x) ⩾ b(‖x‖),  for every (t, x) ∈ [t0, ∞) × X.

Proof. For s ⩾ t0 and x ∈ X, define V ∶ [t0, ∞) × X → ℝ by

V(s, x) = inf_{𝜑∈A(s,x)} sup_{𝜎∈[t0,s]} ‖𝜑(𝜎) − ∫_{t0}^{𝜎} DF(𝜑(𝜏), t)‖ e^{−s},  if s > t0,
V(s, x) = ‖x‖,  if s = t0.        (8.59)

Notice that V is well defined (see Remark 8.35). Similarly as in the proofs of Lemmas 8.36, 8.37, 8.39, and 8.41, one can prove that the function V, defined by (8.59), satisfies items (i), (ii), (iii), and (iv), respectively.

Although the proof of item (v) is similar to the proof of Lemma 8.39, it shows the importance of the exponential function in the definition of the Lyapunov functional. For this reason, it is relevant to bring it up here. In order to prove item (v), let x ∶ [𝛼, 𝛽] ⊂ [t0, ∞) → X be a solution of the generalized ODE (8.1) and t ∈ [𝛼, 𝛽). Take 𝜑 ∈ A(t, x(t)) and consider 𝜂 > 0 such that t + 𝜂 < 𝛽. Define 𝜙𝜂 ∶ [t0, ∞) → X by

𝜙𝜂(𝜎) = 𝜑(𝜎),  if 𝜎 ∈ [t0, t],
𝜙𝜂(𝜎) = x(𝜎),  if 𝜎 ∈ [t, t + 𝜂],
𝜙𝜂(𝜎) = x(t + 𝜂),  if 𝜎 ∈ (t + 𝜂, ∞).

Therefore, 𝜙𝜂 ∈ A(t + 𝜂, x(t + 𝜂)) and

V(t + 𝜂, x(t + 𝜂)) ⩽ sup_{𝜎∈[t0,t+𝜂]} ‖𝜙𝜂(𝜎) − ∫_{t0}^{𝜎} DF(𝜙𝜂(𝜏), s)‖ e^{−(t+𝜂)}.

As in the proof of Lemma 8.39, we consider two cases with respect to

sup_{𝜎∈[t0,t+𝜂]} ‖𝜙𝜂(𝜎) − ∫_{t0}^{𝜎} DF(𝜙𝜂(𝜏), s)‖ e^{−(t+𝜂)}.

Case 1: Suppose, for some 𝑣 ∈ [t0, t + 𝜂], we have

sup_{𝜎∈[t0,t+𝜂]} ‖𝜙𝜂(𝜎) − ∫_{t0}^{𝜎} DF(𝜙𝜂(𝜏), s)‖ e^{−(t+𝜂)} = ‖𝜙𝜂(𝑣) − ∫_{t0}^{𝑣} DF(𝜙𝜂(𝜏), s)‖ e^{−(t+𝜂)}.

At first, assume that 𝑣 ∈ [t0, t]. Then, 𝜙𝜂(s) = 𝜑(s) for all s ∈ [t0, t], and

V(t + 𝜂, x(t + 𝜂)) ⩽ ‖𝜑(𝑣) − ∫_{t0}^{𝑣} DF(𝜑(𝜏), s)‖ e^{−t} e^{−𝜂}
⩽ ( sup_{𝜎∈[t0,t]} ‖𝜑(𝜎) − ∫_{t0}^{𝜎} DF(𝜑(𝜏), s)‖ e^{−t} ) e^{−𝜂}.

Taking the infimum over all 𝜑 ∈ A(t, x(t)) on the right-hand side of the inequality above, we obtain

V(t + 𝜂, x(t + 𝜂)) ⩽ V(t, x(t)) e^{−𝜂}.        (8.60)

If 𝑣 ∈ [t, t + 𝜂], then the proof of (8.60) is analogous to the proof of Case 1 in Lemma 8.39, with the same adaptations that we did here. The same statement holds for the proof of Case 2.

Case 2: Suppose, for some 𝑣 ∈ [t0, t + 𝜂), we have

sup_{𝜎∈[t0,t+𝜂]} ‖𝜙𝜂(𝜎) − ∫_{t0}^{𝜎} DF(𝜙𝜂(𝜏), s)‖ e^{−(t+𝜂)} = ‖𝜙𝜂(𝑣⁺) − lim_{𝜎→𝑣⁺} ∫_{t0}^{𝜎} DF(𝜙𝜂(𝜏), s)‖ e^{−(t+𝜂)}.

By (8.60), we obtain

V(t + 𝜂, x(t + 𝜂)) − V(t, x(t)) ⩽ V(t, x(t))(e^{−𝜂} − 1)

and, moreover,

lim sup_{𝜂→0⁺} [V(t + 𝜂, x(t + 𝜂)) − V(t, x(t))]∕𝜂 ⩽ lim sup_{𝜂→0⁺} V(t, x(t))(e^{−𝜂} − 1)∕𝜂 = −V(t, x(t)),

which completes the proof of item (v). The proof of item (vi) is quite similar to the proof of Lemma 8.42. We leave the details to the interested reader; item (vi) is proved in [7]. ◽


9 Periodicity

Marielle Ap. Silva¹, Everaldo M. Bonotto², Rodolfo Collegari³, Márcia Federson¹, and Maria Carolina Mesquita¹

¹ Departamento de Matemática, Instituto de Ciências Matemáticas e de Computação (ICMC), Universidade de São Paulo, São Carlos, SP, Brazil
² Departamento de Matemática Aplicada e Estatística, Instituto de Ciências Matemáticas e de Computação (ICMC), Universidade de São Paulo, São Carlos, SP, Brazil
³ Faculdade de Matemática, Universidade Federal de Uberlândia, Uberlândia, MG, Brazil

The study of periodic solutions is an important and well-known branch of the theory of differential equations, related, in a broad sense, to the study of periodic phenomena that arise in applied problems in technology, biology, and economics. For example, in order to investigate the impact of environmental factors, the requirement of periodic parameters is more realistic, owing to the many periodic factors of the real world. We can mention, for instance, seasonal effects of the climate and mating and harvesting habits, among others.

Let T > 0. We know that a function f ∶ ℝ → ℝ is called T-periodic if it satisfies

f(t + T) = f(t),  for every t ∈ ℝ.

For example, the trigonometric functions sine and cosine are 2𝜋-periodic. But what can we say about the periodicity of a function which takes values in a finite-dimensional space of higher dimension? The next example can motivate possible answers.

Example 9.1: Consider the functions g, h ∶ ℝ → ℝ × ℝ given by

g(t) = (cos(t), sin(t))  and  h(t) = (t², sin(t)),

whose trajectories are presented in Figures 9.1 and 9.2, respectively.

Figure 9.1 Trajectory of g.

Figure 9.2 Trajectory of h.

Generalized Ordinary Differential Equations in Abstract Spaces and Applications, First Edition. Edited by Everaldo M. Bonotto, Márcia Federson, and Jaqueline G. Mesquita. © 2021 John Wiley & Sons, Inc. Published 2021 by John Wiley & Sons, Inc.

What can we say about the periodicity of such functions? Note that each coordinate of g is 2𝜋-periodic, while the first coordinate of h is nonperiodic. If we extend the notion of periodicity to ℝⁿ by requiring that f ∶ ℝ → ℝⁿ is T-periodic whenever fᵢ ∶ ℝ → ℝ is T-periodic for every i = 1, … , n, then g is 2𝜋-periodic, while h is nonperiodic.

There are many works concerning periodicity of solutions in the framework of classic ordinary differential equations (ODEs) and impulsive differential equations. We can mention, for instance, [99, 165, 188, 199, 228, 235, 236]. However, in these works, usual hypotheses on continuity or piecewise continuity of the right-hand sides of the equations are required. Here, on the contrary, it is enough to consider functions which are integrable in the sense of Perron or Perron–Stieltjes, allowing the right-hand sides to have many discontinuities and to be highly oscillating.

While, on the one hand, there are many works on periodic solutions of ODEs and impulsive differential equations, on the other hand, there are few studies on the periodic behavior of solutions of generalized ODEs. In the framework of generalized ODEs, periodic boundary value problems in the finite dimensional case were investigated in [81] and [230], and the theory of affine-periodic solutions, which encompasses the notions of periodicity, quasiperiodicity, and antiperiodicity, was introduced in [77] for the space ℂⁿ.

In this chapter, we start by investigating periodicity of solutions of linear generalized ODEs for functions taking values in ℝⁿ. This is the content of Section 9.1, where we prove a result which relates the periodicity of solutions of linear nonhomogeneous generalized ODEs to the periodicity of solutions


of linear homogeneous generalized ODEs. Not only that, but we also present a Floquet-type theorem which provides a characterization of the fundamental matrix of periodic linear generalized ODEs. This characterization, which we refer to as the Floquet Theorem for generalized ODEs, is due to Š. Schwabik (see [207]). Here, we redeem the results of [207] and present them in English with a slightly different guise. We also include some applications to linear ODEs with impulses.

In the second part of this chapter, we focus our attention on the infinite dimensional case. Thus, in Section 9.2, we present a result on the existence of what we call a (𝜃, T)-periodic solution of a linear nonhomogeneous generalized ODE, with T > 0 and 𝜃 > 0. Roughly speaking, a solution x ∶ [0, ∞) → X of a linear nonhomogeneous generalized ODE is (𝜃, T)-periodic whenever x(t + T) = 𝜃x(t) for all t ∈ [0, ∞). In particular, for 𝜃 = 1, we recover the concept of periodic solutions and, for 𝜃 = −1, the concept of antiperiodic solutions. Here, we obtain an existence result for the case where 𝜃 is a positive number. The main tool used here is the Banach Fixed Point Theorem. An auxiliary result, namely Lemma 9.21, with some ideas coming from [77], relates a (𝜃, T)-periodic solution of a linear nonhomogeneous generalized ODE to a solution of a boundary value problem. Using such a lemma and the Banach Fixed Point Theorem, we obtain conditions for the existence of a (𝜃, T)-periodic solution of linear generalized ODEs. We present an example to illustrate this result. Still in this section, we apply the results to measure differential equations (we write MDEs, as in Chapter 3), which are known to be special cases of generalized ODEs (see Chapter 4). Recall that the reason for applying the results to MDEs is that they encompass many types of equations, such as classic ODEs, impulsive differential equations, and dynamic equations on time scales.

9.1 Periodic Solutions and Floquet's Theorem

The aim of this section is to present a study of periodic solutions of the following linear nonhomogeneous generalized ODE:

dx∕d𝜏 = D[A(t)x + g(t)],        (9.1)

where A ∶ [0, ∞) → L(ℝⁿ) is a T-periodic operator and g ∶ [0, ∞) → ℝⁿ is a T-periodic function. In addition, we prove a Floquet-type theorem for the linear homogeneous generalized ODE

dx∕d𝜏 = D[A(t)x],        (9.2)


which provides a characterization for its fundamental matrices. A first investigation of the Floquet Theorem for linear generalized ODEs was made by Š. Schwabik in [207].

Next, we present a concept of periodicity for functions defined on [0, ∞) taking values in ℝⁿ. This concept is similar to the concept already known for real-valued functions defined on ℝ.

Definition 9.2: A function x ∶ [0, ∞) → ℝⁿ is said to be periodic if there exists T > 0 such that x(t + T) = x(t) for all t ∈ [0, ∞). In this case, T is called a period of x, and x is called T-periodic. Moreover, we say that A ∶ [0, ∞) → L(ℝⁿ) is a T-periodic operator if A(t) = A(t + T) for all t ∈ [0, ∞).

Now, we provide necessary and sufficient conditions for a solution of the linear nonhomogeneous generalized ODE (9.1) to be periodic. Theorem 9.3 is a well-known result for classical ODEs, and a more general version of it for generalized ODEs can be found in [93]. Recall that A ∈ BVloc([0, ∞), L(ℝⁿ)), whenever A ∶ [0, ∞) → L(ℝⁿ) is of locally bounded variation in [0, ∞), that is, A ∈ BV([a, b], L(ℝⁿ)) for every compact interval [a, b] ⊂ [0, ∞). Throughout this section, we assume that A ∈ BVloc([0, ∞), L(ℝⁿ)) satisfies condition (Δ) presented in Chapter 6. We draw the reader's attention to the fact that, under these conditions, the existence and uniqueness of a solution x ∶ [0, ∞) → ℝⁿ of the linear nonhomogeneous generalized ODE (9.1) is ensured by Remark 6.4.

Theorem 9.3: Let x ∶ [0, ∞) → ℝⁿ be a solution of the linear nonhomogeneous generalized ODE (9.1). Assume that A ∈ BVloc([0, ∞), L(ℝⁿ)) is a T-periodic operator satisfying condition (Δ) on [0, ∞) and g ∶ [0, ∞) → ℝⁿ is a T-periodic function. Then x ∶ [0, ∞) → ℝⁿ is a T-periodic solution of (9.1) if and only if x(0) = x(T).

Proof. Suppose x ∶ [0, ∞) → ℝⁿ is a T-periodic solution of (9.1). Taking t = 0, we have x(0) = x(T). On the other hand, let us assume that x ∶ [0, ∞) → ℝⁿ is a solution of (9.1) satisfying x(0) = x(T). If y(t) = x(t + T) for all t ∈ [0, ∞), then, by the substitution theorem (Theorem 2.18), we have


y(s2) − y(s1) = x(s2 + T) − x(s1 + T) = ∫_{s1+T}^{s2+T} D[A(s)x(s) + g(s)]
= ∫_{s1}^{s2} D[A(s + T)x(s + T) + g(s + T)]
= ∫_{s1}^{s2} D[A(s)y(s) + g(s)],

for every s1, s2 ∈ [0, ∞). Therefore, y ∶ [0, ∞) → ℝⁿ is a solution of (9.1) and y(0) = x(0 + T) = x(0). From the uniqueness of the solution of the initial value problem (see Remark 6.4), x(t) = y(t) for all t ∈ [0, ∞). Thus,

x(t) = y(t) = x(t + T),  t ∈ [0, ∞),

that is, x ∶ [0, ∞) → ℝⁿ is T-periodic. ◽


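Theorem 9.3 reduces the search for a T-periodic solution to the boundary condition x(0) = x(T). The following numerical sketch checks this criterion for a smooth special case of (9.1), assuming NumPy and SciPy are available; the matrix A, the forcing g, and the period T are illustrative choices, not data from the text:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Smooth special case of (9.1): x'(t) = A x + g(t) in R^2 with
# T-periodic coefficients (illustrative choices).
T = 2.0 * np.pi
A = np.array([[-0.5, -1.0], [1.0, -0.5]])   # constant, hence T-periodic

def g(t):
    return np.array([np.cos(t), 0.0])       # T-periodic forcing

def rhs(t, x):
    return A @ x + g(t)

def x_at_T(x0):
    """Value at time T of the solution with initial condition x(0) = x0."""
    sol = solve_ivp(rhs, (0.0, T), x0, rtol=1e-10, atol=1e-12)
    return sol.y[:, -1]

# The time-T map is affine: x(T) = M x0 + b.  Solving x0 = M x0 + b yields
# the unique initial value with x(0) = x(T), i.e. the T-periodic solution
# (here I - M = I - e^{AT} is nonsingular, cf. Theorem 9.10).
b = x_at_T(np.zeros(2))
M = np.column_stack([x_at_T(col) - b for col in np.eye(2)])
x0 = np.linalg.solve(np.eye(2) - M, b)

print(np.allclose(x_at_T(x0), x0, atol=1e-6))
```

The design choice mirrors the proof above: periodicity is decided entirely by the fixed-point equation for the time-T map, not by inspecting the whole trajectory.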

In what follows, we present some results and definitions on fundamental matrices. These notions are analogous to the classical theory of linear ODEs. The next result is presented in [209, Theorem 6.11].

Theorem 9.4: Assume that A ∈ BVloc([0, ∞), L(ℝⁿ)) satisfies condition (Δ) on [0, ∞). If t0 ∈ [0, ∞), then, for every matrix X̃ ∈ L(ℝⁿ), there exists a uniquely determined matrix-valued function X ∶ [0, ∞) → L(ℝⁿ) such that

X(t) = X̃ + ∫_{t0}^{t} d[A(s)]X(s),  for t ∈ [0, ∞).

We say that a matrix-valued function X ∶ [0, ∞) → L(ℝⁿ) is a solution of the matrix equation

dX∕d𝜏 = D[A(t)X]        (9.3)

if, for every s1, s2 ∈ [0, ∞), the identity

X(s2) − X(s1) = ∫_{s1}^{s2} d[A(s)]X(s)

holds. Furthermore, a matrix-valued function X ∶ [0, ∞) → L(ℝⁿ) is said to be a fundamental matrix of the linear generalized ODE (9.2), if X is a solution of the matrix equation (9.3) and the matrix X(t) is nonsingular for at least one value t ∈ [0, ∞). A proof of the next result can be found in [209, Theorem 6.12].


Theorem 9.5: Assume that A ∈ BVloc([0, ∞), L(ℝⁿ)) satisfies condition (Δ) on [0, ∞). Then every fundamental matrix X ∶ [0, ∞) → L(ℝⁿ) of the linear generalized ODE (9.2) is nonsingular for all t ∈ [0, ∞).

Remark 9.6: Let X, Y ∶ [0, ∞) → L(ℝⁿ) be fundamental matrices of the linear generalized ODE (9.2). According to [209, Theorem 6.13] and [209, Lemma 6.18], X(t)X⁻¹(s) = Y(t)Y⁻¹(s) for all t, s ∈ [0, ∞).

Theorem 9.7 gives us a variation-of-constants formula for a solution of the linear nonhomogeneous generalized ODE (9.1). For a proof of this fact, the reader may want to consult [209, Corollary 6.19].

Theorem 9.7: Assume that A ∈ BVloc([0, ∞), L(ℝⁿ)) satisfies condition (Δ) on [0, ∞), t0 ∈ [0, ∞), X̃ ∈ ℝⁿ, and g ∈ BVloc([0, ∞), ℝⁿ). If X ∶ [0, ∞) → L(ℝⁿ) is an arbitrary fundamental matrix of the linear generalized ODE (9.2), then the unique solution of the initial value problem

dx∕d𝜏 = D[A(t)x + g(t)],  x(t0) = X̃,

can be represented by

x(t) = g(t) − g(t0) + X(t) ( X⁻¹(t0)X̃ − ∫_{t0}^{t} d[X⁻¹(s)](g(s) − g(t0)) ),

for all t ∈ [0, ∞).

The next result gives us necessary and sufficient conditions for the linear generalized ODE (9.2) to have a T-periodic nontrivial solution.

Proposition 9.8: Let A ∈ BVloc([0, ∞), L(ℝⁿ)) be a T-periodic operator satisfying condition (Δ) on [0, ∞) and let X ∶ [0, ∞) → L(ℝⁿ) be a fundamental matrix of the linear generalized ODE (9.2). Then (9.2) has a T-periodic nontrivial solution if and only if det[X(T) − X(0)] = 0.

Proof. Let x ∶ [0, ∞) → ℝⁿ be a T-periodic nontrivial solution of (9.2). Then, there exists s ∈ [0, ∞) such that x(s) = X̃ ≠ 0.


Using Theorem 9.7 with g ≡ 0, we may write x(t) = X(t)X⁻¹(s)X̃ for all t ∈ [0, ∞). Set K = X⁻¹(s)X̃. Then,

x(t) = X(t)X⁻¹(s)X̃ = X(t)K

for all t ∈ [0, ∞). Using Theorem 9.3, we obtain X(T)K = x(T) = x(0) = X(0)K, that is, [X(T) − X(0)]K = 0. Thus, since K ≠ 0, we conclude that det[X(T) − X(0)] = 0.

Now, assume that det[X(T) − X(0)] = 0. Then, one can obtain z ∈ ℝⁿ, z ≠ 0, such that [X(T) − X(0)]z = 0. Let 𝑤 = X(0)z. Note that x(t) = X(t)X⁻¹(0)𝑤 is a nontrivial solution of (9.2) defined on [0, ∞). Besides, we have

x(T) = X(T)X⁻¹(0)𝑤 = X(T)z = X(0)z = 𝑤 = x(0).

Then, Theorem 9.3 assures that x is T-periodic. ◽
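When A(t) ≡ A is a constant matrix, X(t) = e^{At} is a fundamental matrix, so the criterion of Proposition 9.8 becomes det(e^{AT} − I) = 0. A minimal numerical check, assuming NumPy and SciPy; both test matrices below are illustrative choices:

```python
import numpy as np
from scipy.linalg import expm

# Proposition 9.8 for a constant operator: (9.2) has a T-periodic
# nontrivial solution iff det[X(T) - X(0)] = det(expm(A*T) - I) = 0.
def has_periodic_solution(A, T, tol=1e-9):
    XT = expm(A * T)
    return abs(np.linalg.det(XT - np.eye(A.shape[0]))) < tol

rot = np.array([[0.0, -1.0], [1.0, 0.0]])        # circular orbits: all solutions 2*pi-periodic
spiral = np.array([[-0.5, -1.0], [1.0, -0.5]])   # damped spiral: only x = 0 is periodic

print(has_periodic_solution(rot, 2 * np.pi))     # expm(2*pi*rot) = I, determinant vanishes
print(has_periodic_solution(spiral, 2 * np.pi))
```

For the rotation generator the monodromy matrix is the identity, so the determinant vanishes; for the damped spiral it is e^{−𝜋}I, and the criterion fails.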

Corollary 9.9: Consider the linear generalized ODE (9.2) and suppose A(t) = A for all t ∈ [0, ∞). Then, (9.2) has a T-periodic nontrivial solution if and only if the matrix I − e^{AT} is singular.

Proof. It is enough to note that X(t) = e^{At} is a fundamental matrix of Eq. (9.2), where A(t) = A is a constant matrix. ◽

Theorem 9.10 below exhibits sufficient conditions for Eq. (9.1) to admit a unique T-periodic solution.

Theorem 9.10: Assume that A ∈ BVloc([0, ∞), L(ℝⁿ)) is a T-periodic operator satisfying condition (Δ) on [0, ∞) and g ∈ BVloc([0, ∞), ℝⁿ) is a T-periodic function. If x ≡ 0 is the unique T-periodic solution of the linear generalized ODE (9.2), then the linear nonhomogeneous generalized ODE (9.1) has a unique T-periodic solution.

Proof. Let X ∶ [0, ∞) → L(ℝⁿ) be a fundamental matrix of the linear generalized ODE (9.2). By Proposition 9.8, we have det[X(T) − X(0)] ≠ 0. Define

z = X(0)[X(T) − X(0)]⁻¹ ∫_{0}^{T} d_s[X(T)X⁻¹(s)](g(s) − g(0)).

By the variation-of-constants formula in Theorem 9.7, the function

𝑤(t) = X(t)X⁻¹(0)z + g(t) − g(0) − ∫_{0}^{t} d_s[X(t)X⁻¹(s)](g(s) − g(0))


is a solution of (9.1) on [0, ∞) such that 𝑤(0) = z. Besides,

𝑤(T) = X(T)X⁻¹(0)z + g(T) − g(0) − ∫_{0}^{T} d_s[X(T)X⁻¹(s)](g(s) − g(0))

= X(T)X −1 (0)z − [X(T) − X(0)]X −1 (0)z = 𝑤(0). Thus, by Theorem 9.3, 𝑤 is a T-periodic solution of (9.1). In order to show the uniqueness, suppose 𝑤1 and 𝑤2 are T-periodic solutions of (9.1). Consequently, 𝑤1 − 𝑤2 is a T-periodic solution of (9.2). By hypothesis, we ◽ have x = 𝑤1 − 𝑤2 ≡ 0, that is, 𝑤1 ≡ 𝑤2 . We are now in a position to present a characterization of fundamental matrices of linear homogeneous generalized ODE (9.2). But before this, we need an auxiliary result, whose proof can be found [38]. Lemma 9.11: If C is a n × n matrix with det C ≠ 0, then there exists a matrix B such that eB = C. Theorem 9.12 in the sequel, known as the Floquet Theorem, can also be found in [207]. Theorem 9.12: Consider the linear generalized ODE (9.2), where the operator A ∈ BVloc ([0, ∞), L(ℝn )) is a T-periodic operator satisfying condition (Δ) on [0, ∞). Let X ∶ [0, ∞) → L(ℝn ) be a fundamental matrix of (9.2). Then, (i) X(t + T) is also a fundamental matrix of (9.2); (ii) there exist a T-periodic matrix-valued function P ∶ [0, ∞) → L(ℝn ) and a constant matrix B such that X(t) = P(t)eBt , for all t ∈ [0, ∞). Proof. Let us prove (i). Consider X(t) a fundamental matrix of (9.2). Then, for all s1 , s2 ∈ [0, ∞), we have s2

X(s2 ) − X(s1 ) =

∫s1

d[A(s)]X(s).

According to Theorem 2.18, we obtain

X(s₂ + T) − X(s₁ + T) = ∫_{s₁+T}^{s₂+T} d[A(s)]X(s) = ∫_{s₁}^{s₂} d[A(s + T)]X(s + T) = ∫_{s₁}^{s₂} d[A(s)]X(s + T),

since A is T-periodic by hypothesis. Thus, X(t + T) is a fundamental matrix of (9.2).

Now, we prove (ii). By item (i) and Remark 9.6, X(t)X⁻¹(0) = X(t + T)X⁻¹(T), for all t ∈ [0, ∞). Thus, the nonsingular matrix C = X⁻¹(0)X(T) satisfies X(t + T) = X(t)C, for all t ∈ [0, ∞). By Lemma 9.11, one can find a matrix B such that e^{BT} = C. Define P(t) = X(t)e^{−Bt} for all t ∈ [0, ∞). Thus,

P(t + T) = X(t + T)e^{−B(t+T)} = X(t)C C⁻¹e^{−Bt} = X(t)e^{−Bt} = P(t)

holds for all t ∈ [0, ∞), which completes the proof. ◽



Remark 9.13: To prove the Floquet Theorem (Theorem 9.12) in the classical case, derivative properties, such as the chain rule, are strongly used (see [38], for instance). In the context of generalized ODEs, we only use properties of the Perron–Stieltjes integral, and this is the main difference between the proofs of the Floquet Theorem for ODEs and its version for generalized ODEs. Not only that, but the right-hand sides or coefficients of linear homogeneous generalized ODEs may be nonabsolutely integrable.
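In the classical smooth specialization x′ = A(t)x with T-periodic A(t), the Floquet decomposition of Theorem 9.12 can be sketched numerically. The code below is an illustrative sketch, not the text's method: the 2 × 2 coefficient matrix is an assumption, the fundamental matrix is built with `scipy`, B is extracted from the monodromy matrix via a matrix logarithm (as in Lemma 9.11), and the T-periodicity of P(t) = X(t)e^{−Bt} is checked.

```python
# A numerical sketch of the Floquet decomposition X(t) = P(t) e^{Bt} of
# Theorem 9.12, specialized to a smooth linear ODE x' = A(t)x with
# T-periodic A(t); the coefficient matrix below is purely illustrative.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.linalg import expm, logm

T = 2 * np.pi  # the period

def A(t):
    # an illustrative T-periodic 2x2 coefficient matrix (an assumption)
    return np.array([[0.0, 1.0], [-1.0 - 0.3 * np.cos(t), -0.1]])

def X(t):
    # fundamental matrix with X(0) = I, obtained by integrating X' = A(t) X
    def rhs(s, y):
        return (A(s) @ y.reshape(2, 2)).ravel()
    sol = solve_ivp(rhs, (0.0, t), np.eye(2).ravel(), rtol=1e-10, atol=1e-12)
    return sol.y[:, -1].reshape(2, 2)

C = X(T)              # monodromy matrix: C = X(0)^{-1} X(T) = X(T) here
B = logm(C) / T       # Lemma 9.11: a matrix B with e^{BT} = C

def P(t):
    return X(t) @ expm(-B * t)

# P is T-periodic, as Theorem 9.12(ii) asserts
assert np.allclose(P(1.3 + T), P(1.3), atol=1e-5)
```

Note that `logm` may return a complex matrix when the monodromy matrix has negative real eigenvalues; the periodicity check goes through either way.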

9.1.1 Linear Differential Systems with Impulses

Let F ∶ [0, ∞) → L(ℝⁿ) be a locally Perron integrable n × n matrix-valued function. Consider a sequence {tᵢ}_{i∈ℕ} ⊂ [0, ∞) such that t₁ < t₂ < · · · < tᵢ < · · · and lim_{i→∞} tᵢ = ∞. For each i ∈ ℕ, let Bᵢ ∈ L(ℝⁿ) be a matrix such that I + Bᵢ is nonsingular, where I denotes the identity matrix. We consider the following linear impulsive differential system:

ẋ(t) = F(t)x,  t ≠ tᵢ,      (9.4)
Δx(tᵢ) = x(tᵢ⁺) − x(tᵢ) = Bᵢx(tᵢ),  i ∈ ℕ.      (9.5)

We shall assume that there exists k ∈ ℕ such that the following conditions hold:
(C1) 0 < t₁ < · · · < t_k < T and t_{i+k} = tᵢ + T for all i ∈ ℕ;
(C2) F(t + T) = F(t) for all t ∈ [0, ∞) and B_{i+k} = Bᵢ for all i ∈ ℕ;
(C3) ∫_0^T F(s)ds + ∑_{i=1}^k Bᵢ = 0.

Now, define

A(t) = ∫_0^t F(s)ds + ∑_{i=1}^∞ Bᵢ H_{tᵢ}(t),  t ∈ [0, ∞),      (9.6)

where, for each i ∈ ℕ, H_{tᵢ}(t) = 0 for 0 ⩽ t ⩽ tᵢ and H_{tᵢ}(t) = 1 for t > tᵢ.


Definition 9.14: Let t₀ ∈ [0, ∞) and x₀ ∈ ℝⁿ. A function x ∶ [0, ∞) → ℝⁿ is said to be a solution of the initial value problem

ẋ(t) = F(t)x,  t ≠ tᵢ,
Δx(tᵢ) = x(tᵢ⁺) − x(tᵢ) = Bᵢx(tᵢ),  i ∈ ℕ,      (9.7)
x(t₀) = x₀,

if x(t₀) = x₀, ẋ(t) = F(t)x(t) for almost all t ∈ [0, ∞) ⧵ {tᵢ ∶ i ∈ ℕ}, and

x(tᵢ⁺) = lim_{t→tᵢ⁺} x(t) = x(tᵢ) + Bᵢx(tᵢ),  for i ∈ ℕ.

According to [209, Example 6.20], we can state the following result.

Lemma 9.15: The operator A ∶ [0, ∞) → L(ℝⁿ) given by (9.6) is of locally bounded variation and satisfies condition (Δ) on [0, ∞). Moreover, x ∶ [0, ∞) → ℝⁿ is a solution of the impulsive system (9.4) and (9.5) if and only if x is a solution of the linear homogeneous generalized ODE

dx/d𝜏 = D[A(t)x],      (9.8)

where A is given by (9.6).
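In the scalar case, the system (9.4) and (9.5) can be solved in closed form, which gives a concrete handle on the correspondence in Lemma 9.15: the solution through x(0) = x₀ is x(t) = x₀ exp(∫_0^t F(s)ds) ∏_{tᵢ<t}(1 + Bᵢ). The sketch below uses illustrative data (F(t) = cos t and two impulses per period, not taken from the text), chosen so that x(0) = x(T):

```python
# Scalar sanity check for the impulsive system (9.4)-(9.5): the solution is
# x(t) = x0 · exp(∫_0^t F(s) ds) · Π_{t_i < t} (1 + B_i), and choosing the
# illustrative data so that x(0) = x(T) yields a T-periodic solution.
import numpy as np

T = 2 * np.pi
t_imp = [np.pi / 2, 3 * np.pi / 2]   # impulse times in (0, T), extended T-periodically
B = [0.25, -0.2]                     # (1 + 0.25)(1 - 0.2) = 1 and ∫_0^T cos = 0, so x(T) = x(0)

def x(t, x0=1.0):
    val = x0 * np.exp(np.sin(t))     # exp(∫_0^t cos(s) ds) = exp(sin t)
    i = 0
    while True:
        ti = t_imp[i % 2] + (i // 2) * T   # t_{i+2} = t_i + T
        if ti >= t:
            break
        val *= 1.0 + B[i % 2]
        i += 1
    return val

assert abs(x(T) - x(0.0)) < 1e-12          # x(0) = x(T)
for s in [0.7, 2.5, 5.0]:
    assert abs(x(s + T) - x(s)) < 1e-12    # hence x is T-periodic
```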

The next result gives us sufficient conditions for the operator A to be periodic.

Proposition 9.16: Assume that conditions (C1)–(C3) are satisfied. Then, the operator A ∶ [0, ∞) → L(ℝⁿ) given by (9.6) is T-periodic.

Proof. Let t ∈ [0, ∞). Then, either t ∈ [0, t₁] or there exists a natural number j ∈ ℕ such that t_j < t ⩽ t_{j+1}. Let us consider the second case. Thus,

t_{j+k} = t_j + T < t + T ⩽ t_{j+1} + T = t_{j+k+1},

by virtue of (C1). Consequently, using conditions (C2) and (C3), we have

A(t + T) = ∫_0^{t+T} F(s)ds + ∑_{i=1}^∞ Bᵢ H_{tᵢ}(t + T)
         = ∫_0^T F(s)ds + ∑_{i=1}^k Bᵢ H_{tᵢ}(t + T) + ∫_T^{t+T} F(s)ds + ∑_{i=k+1}^∞ Bᵢ H_{tᵢ}(t + T)
         = ∫_0^T F(s)ds + ∑_{i=1}^k Bᵢ + ∫_0^t F(s + T)ds + ∑_{i=k+1}^{j+k} Bᵢ
         = ∫_0^t F(s + T)ds + ∑_{i=1}^j B_{i+k} = ∫_0^t F(s)ds + ∑_{i=1}^j Bᵢ
         = ∫_0^t F(s)ds + ∑_{i=1}^∞ Bᵢ H_{tᵢ}(t) = A(t),

which completes the proof. ◽
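The periodicity just established can be checked numerically in the scalar case. The data below are illustrative assumptions satisfying (C1)–(C3) with k = 2 and T = 2π: F(t) = cos t integrates to zero over a period, and the impulse values cancel.

```python
# Numerical check of Proposition 9.16 in the scalar case: under (C1)-(C3),
# A(t) = ∫_0^t F(s) ds + Σ_i B_i H_{t_i}(t) from (9.6) is T-periodic.
import numpy as np

T = 2 * np.pi
k = 2
t_imp = [np.pi / 2, 3 * np.pi / 2]   # (C1): 0 < t_1 < t_2 < T, t_{i+k} = t_i + T
B = [0.3, -0.3]                      # (C3): ∫_0^T cos(s) ds + B_1 + B_2 = 0

def A(t):
    jumps, i = 0.0, 0
    while True:
        ti = t_imp[i % k] + (i // k) * T
        if ti >= t:                  # H_{t_i}(t) = 1 only for t > t_i
            break
        jumps += B[i % k]
        i += 1
    return np.sin(t) + jumps         # ∫_0^t cos(s) ds = sin(t)

for s in [0.3, 1.9, 4.0, 5.5]:
    assert abs(A(s + T) - A(s)) < 1e-12
```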

Next, we provide conditions for a solution of the impulsive system (9.4) and (9.5) to be periodic.

Theorem 9.17: Let x ∶ [0, ∞) → ℝⁿ be a solution of the impulsive differential system (9.4) and (9.5). If conditions (C1)–(C3) are satisfied, then x ∶ [0, ∞) → ℝⁿ is T-periodic if and only if x(0) = x(T).

Proof. Let x ∶ [0, ∞) → ℝⁿ be a solution of the system (9.4) and (9.5). It is enough to prove the sufficiency. Let us assume that x(0) = x(T). Since F ∶ [0, ∞) → L(ℝⁿ) is a T-periodic function, using Lemma 9.15 and Proposition 9.16, A ∈ BV_loc([0, ∞), L(ℝⁿ)) is a T-periodic operator which satisfies condition (Δ) on [0, ∞). Thus, by Theorem 9.3, we conclude that x ∶ [0, ∞) → ℝⁿ is a T-periodic solution of (9.8) and, consequently, by Lemma 9.15, x ∶ [0, ∞) → ℝⁿ is a T-periodic solution of the system (9.4) and (9.5). ◽

In what follows, we describe a Floquet theory for the impulsive differential system (9.4) and (9.5). Let Z ∶ [0, ∞) → L(ℝⁿ) be an arbitrary fundamental matrix of the impulsive system (9.4) and (9.5). Then,

Z(t) = Z(0) + ∫_0^t F(r)Z(r)dr + ∑_{0<tᵢ<t} Bᵢ Z(tᵢ).

9.2 (𝜃, T)-Periodic Solutions

Lemma 9.21: Let 𝜃 > 0 be such that

f(t + T) = 𝜃f(t),  for all t ∈ [0, ∞).

Then, the existence of a (𝜃, T)-periodic solution x ∶ [t₀, ∞) → X of (9.10) is equivalent to the existence of a solution z ∶ [t₀, t₀ + T] → X of the boundary value problem

dx/d𝜏 = D[A(t)x + f(t)],
x(t₀ + T) = 𝜃x(t₀).      (9.11)

Proof. Assume that x ∶ [t₀, ∞) → X is a (𝜃, T)-periodic solution of (9.10). Then,

x(t₂) − x(t₁) = ∫_{t₁}^{t₂} d[A(s)]x(s) + f(t₂) − f(t₁),  for all t₁, t₂ ∈ [t₀, ∞),      (9.12)

and

x(t + T) = 𝜃x(t),  for all t ∈ [t₀, ∞).      (9.13)

Now, we introduce an auxiliary function z ∶ [t₀, t₀ + T] → X defined by

z(t) = 𝜃⁻¹x(t + T),  t ∈ [t₀, t₀ + T].      (9.14)

We assert that z is a solution of the boundary value problem (9.11). Indeed, for every t₁, t₂ ∈ [t₀, t₀ + T], we have

z(t₂) − z(t₁) = 𝜃⁻¹x(t₂ + T) − 𝜃⁻¹x(t₁ + T) = 𝜃⁻¹ ∫_{t₁+T}^{t₂+T} d[A(s)]x(s) + 𝜃⁻¹f(t₂ + T) − 𝜃⁻¹f(t₁ + T).


By Theorem 2.18, with 𝜓(𝜉) = 𝜉 + T, 𝜉 ∈ [t₀, t₀ + T], we obtain

∫_{t₁+T}^{t₂+T} d[A(s)]x(s) = ∫_{𝜓(t₁)}^{𝜓(t₂)} d[A(s)]x(s) = ∫_{t₁}^{t₂} d[A(s + T)]x(s + T),      (9.15)

for all t₁, t₂ ∈ [t₀, t₀ + T]. Using condition (A1), (9.15), and the hypothesis on f, we conclude that

z(t₂) − z(t₁) = ∫_{t₁}^{t₂} d[A(s)]𝜃⁻¹x(s + T) + 𝜃⁻¹𝜃f(t₂) − 𝜃⁻¹𝜃f(t₁) = ∫_{t₁}^{t₂} d[A(s)]z(s) + f(t₂) − f(t₁),

for all t₁, t₂ ∈ [t₀, t₀ + T]. Moreover, by (9.13) and (9.14), we have

z(t₀ + T) = 𝜃⁻¹x(t₀ + 2T) = 𝜃⁻¹𝜃x(t₀ + T) = x(t₀ + T) = 𝜃z(t₀).

Thus, z is a solution of the boundary value problem (9.11).

On the other hand, suppose z ∶ [t₀, t₀ + T] → X is a solution of the boundary value problem (9.11). Then, z is a solution of the linear nonhomogeneous generalized ODE (9.10) satisfying z(t₀ + T) = 𝜃z(t₀). By Remark 6.4, there exists a unique global forward solution x ∶ [t₀, ∞) → X of (9.10) with x(t₀) = 𝜃⁻¹z(t₀ + T). Then, the uniqueness implies x|_{[t₀,t₀+T]} = z, that is, x is an extension of z. We need to prove that x(t + T) = 𝜃x(t) for all t ∈ [t₀, ∞). Using the same steps as before, it is possible to prove that the function 𝜑(t) = 𝜃⁻¹x(t + T) is a solution of the linear nonhomogeneous generalized ODE (9.10) satisfying

𝜑(t₀) = 𝜃⁻¹x(t₀ + T) = 𝜃⁻¹z(t₀ + T) = x(t₀).

Notice that 𝜑 and x are solutions of the linear nonhomogeneous generalized ODE (9.10) with 𝜑(t₀) = x(t₀). Then, by the uniqueness of solutions of the initial value problem formed by Eq. (9.10) and the initial condition 𝜑(t₀) = x(t₀), we conclude that 𝜑(t) = x(t) for all t ∈ [t₀, ∞), that is,

x(t + T) = 𝜃x(t),  for all t ∈ [t₀, ∞),

and the proof is complete. ◽



It is important to mention that, due to Lemma 9.21, it is possible to exchange the problem of finding a (𝜃, T)-periodic solution of the linear nonhomogeneous generalized ODE (9.10) for the problem of finding a solution of the boundary value problem (9.11).

The next result ensures, under some conditions, the existence of a (𝜃, T)-periodic solution of the linear nonhomogeneous generalized ODE (9.10).

Theorem 9.22: Suppose conditions (A1)–(A3) are satisfied. Let 𝜃 > 0 be such that

f(t + T) = 𝜃f(t),  for all t ∈ [0, ∞).

Moreover, assume that, for a given t₀ ∈ [0, ∞), we have var_{t₀}^{t₀+T}(A) < 1 and

∫_{t₀}^{t₀+T} d[A(t)]z = 0,  for all z ∈ X.

Then, the linear nonhomogeneous generalized ODE (9.10) admits a (𝜃, T)-periodic solution defined on [t₀, ∞).

Proof. Define an operator Φ ∶ BV([t₀, t₀ + T], X) → BV([t₀, t₀ + T], X) by

Φ(x)(t) = ∫_{t₀}^t d[A(s)]x(s) + f(t),  t ∈ [t₀, t₀ + T].

Notice that the operator Φ is well-defined. Indeed, since (A2) holds, by Corollary 1.48, the integral ∫_{t₀}^t d[A(s)]x(s) exists for all t ∈ [t₀, t₀ + T]. Moreover, by Corollary 4.8, Φ(x) ∈ BV([t₀, t₀ + T], X). By Corollary 1.48, we conclude that

‖∫_{t₀}^t d[A(s)]x(s)‖ ⩽ var_{t₀}^{t₀+T}(A) ∥x∥_∞,  for every t ∈ [t₀, t₀ + T].

Consequently,

var_{t₀}^{t₀+T}[Φ(x) − Φ(y)] ⩽ var_{t₀}^{t₀+T}(A) ∥x − y∥_∞,

for all x, y ∈ BV([t₀, t₀ + T], X). Thus, for any x, y ∈ BV([t₀, t₀ + T], X), we have

∥Φ(x) − Φ(y)∥_BV = ∥Φ(x)(t₀) − Φ(y)(t₀)∥ + var_{t₀}^{t₀+T}[Φ(x) − Φ(y)] ⩽ var_{t₀}^{t₀+T}(A) ∥x − y∥_∞ ⩽ K ∥x − y∥_BV,

where K = var_{t₀}^{t₀+T}(A) < 1 by hypothesis. Thus, Φ is a contraction on the Banach space BV([t₀, t₀ + T], X). Therefore, by the Banach Fixed Point Theorem, Φ has a unique fixed point X̃ in BV([t₀, t₀ + T], X).

Claim. X̃ ∶ [t₀, t₀ + T] → X is a solution of the boundary value problem (9.11).

In fact, since X̃ ∈ BV([t₀, t₀ + T], X) is a fixed point of the operator Φ, we have

X̃(t) = ∫_{t₀}^t d[A(s)]X̃(s) + f(t),  t ∈ [t₀, t₀ + T].      (9.16)

Taking t = t₀, we obtain

X̃(t₀) = f(t₀).      (9.17)

By (9.16) and (9.17), we have

X̃(t) = X̃(t₀) + ∫_{t₀}^t d[A(s)]X̃(s) + f(t) − f(t₀),  for all t ∈ [t₀, t₀ + T],

that is, X̃ ∶ [t₀, t₀ + T] → X is a solution of the linear nonhomogeneous generalized ODE (9.10). Now, taking t = t₀ + T in (9.16) and using (9.17) together with the hypotheses f(t₀ + T) = 𝜃f(t₀) and ∫_{t₀}^t d[A(s)]X̃(s) = 0 for all t ∈ [t₀, t₀ + T], we get

X̃(t₀ + T) = f(t₀ + T) = 𝜃f(t₀) = 𝜃X̃(t₀).

Thus, X̃ ∶ [t₀, t₀ + T] → X is a solution of the boundary value problem (9.11). Then, by Lemma 9.21, there exists a (𝜃, T)-periodic solution of the linear nonhomogeneous generalized ODE (9.10) defined on [t₀, ∞). ◽

In the sequel, we present an example which illustrates Theorem 9.22.

Example 9.23: Consider the linear nonhomogeneous generalized ODE

dx/d𝜏 = D[𝛼 cos(t)x + eᵗ cos(t)],      (9.18)

defined on [0, ∞), where 0 < 𝛼 < 1/(2𝜋). For each t ∈ [0, ∞), define an operator A(t) ∶ ℝ → ℝ by A(t)x = 𝛼 cos(t)x, x ∈ ℝ, and a function f ∶ [0, ∞) → ℝ by f(t) = eᵗ cos(t). It is clear that, for all t ∈ (0, ∞), we have

A(t + 2𝜋) = A(t)  and  [I + lim_{s→t⁺} A(s) − A(t)]⁻¹ = I ∈ L(ℝ).

Note also that

∫_0^{2𝜋} d[𝛼 cos(t)]z = 𝛼[cos(2𝜋) − cos(0)]z = 0,  for all z ∈ ℝ.

We assert that A ∈ BV_loc([0, ∞), L(ℝ)) and f ∈ BV_loc([0, ∞), ℝ). Indeed, let a, b ∈ ℝ, a < b, and let d = (tᵢ), i = 1, 2, …, |d|, be an arbitrary division of [a, b]. Then,

∥A(tᵢ) − A(t_{i−1})∥ = sup_{|x|⩽1} |A(tᵢ)x − A(t_{i−1})x| = sup_{|x|⩽1} |𝛼 cos(tᵢ)x − 𝛼 cos(t_{i−1})x| ⩽ 𝛼|tᵢ − t_{i−1}| = 𝛼(tᵢ − t_{i−1}),

which implies that

∑_{i=1}^{|d|} ∥A(tᵢ) − A(t_{i−1})∥ ⩽ ∑_{i=1}^{|d|} 𝛼(tᵢ − t_{i−1}) = 𝛼(b − a),

that is, A ∈ BV([a, b], L(ℝ)). Hence, A ∈ BV_loc([0, ∞), L(ℝ)). On the other hand, since f is locally Lipschitz, it follows that f ∈ BV_loc([0, ∞), ℝ). We conclude that conditions (A1)–(A3) are satisfied. Now, set 𝜃 = e^{2𝜋}. Since f(t + 2𝜋) = e^{2𝜋}f(t) for all t ∈ [0, ∞) and

var_0^{2𝜋}(A) ⩽ 2𝜋𝛼 < 1,

it follows from Theorem 9.22 that the linear nonhomogeneous generalized ODE (9.18) admits a (𝜃, 2𝜋)-periodic solution defined on [0, ∞).
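The contraction argument of Theorem 9.22 can be sketched numerically for Example 9.23. Here d[A(s)] = d[𝛼 cos(s)] = −𝛼 sin(s) ds, so the operator Φ of the proof reduces to Φ(x)(t) = −𝛼∫_0^t sin(s)x(s)ds + eᵗ cos(t); the grid and the value 𝛼 = 0.1 below are illustrative choices.

```python
# Banach fixed-point iteration for the operator Φ of Theorem 9.22 applied to
# Example 9.23: Φ(x)(t) = -α ∫_0^t sin(s) x(s) ds + e^t cos(t) on [0, 2π].
import numpy as np

alpha = 0.1                                  # 0 < α < 1/(2π), so Φ is a contraction
t = np.linspace(0.0, 2 * np.pi, 4001)
f = np.exp(t) * np.cos(t)

def Phi(x):
    integrand = -alpha * np.sin(t) * x
    # cumulative trapezoidal rule for ∫_0^t (-α sin s) x(s) ds
    cumulative = np.concatenate(
        ([0.0], np.cumsum((integrand[1:] + integrand[:-1]) / 2 * np.diff(t))))
    return cumulative + f

x = np.zeros_like(t)
for _ in range(60):                          # Picard iterates converge geometrically
    x = Phi(x)

assert np.max(np.abs(Phi(x) - x)) < 1e-8     # x is (numerically) the fixed point X̃
assert abs(x[0] - f[0]) < 1e-12              # X̃(t₀) = f(t₀), as in (9.17)
```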

9.2.1 An Application to MDEs

In this subsection, we present an application of the results of Section 9.2 to MDEs. In order to do that, let us consider the following integral form of an MDE:

x(t) = x(t₀) + ∫_{t₀}^t 𝒜(s)x(s)ds + ∫_{t₀}^t ℬ(s)x(s)du(s) + h(t) − h(t₀),      (9.19)

where t₀ ⩾ 0, t ∈ [t₀, ∞), and 𝒜 ∶ [0, ∞) → L(X), ℬ ∶ [0, ∞) → L(X), and u ∶ [0, ∞) → [0, ∞) are functions. We also assume the following conditions:

(E1) 𝒜(⋅) is a T-periodic locally Perron integrable function over [0, ∞);
(E2) u is of locally bounded variation in [0, ∞), left-continuous on (0, ∞) and, for all t ∈ (0, ∞) and for some 𝛽 > 0, u(t + T) = u(t) + 𝛽;
(E3) ℬ(⋅) is a T-periodic locally Perron–Stieltjes integrable function with respect to u over [0, ∞);
(E4) there exists a locally Perron integrable function M₁ ∶ [0, ∞) → ℝ such that, for each a, b ∈ [0, ∞), a ⩽ b, we have ‖∫_a^b 𝒜(s)ds‖ ⩽ ∫_a^b M₁(s)ds;
(E5) there exists a locally Perron–Stieltjes integrable function M₂ ∶ [0, ∞) → ℝ such that, for each a, b ∈ [0, ∞), a ⩽ b, we have ‖∫_a^b ℬ(s)du(s)‖ ⩽ ∫_a^b M₂(s)du(s);
(E6) h ∈ BV_loc([0, ∞), X) and there is 𝜃 > 0 such that, for all t ∈ [0, ∞), h(t + T) = 𝜃h(t);
(E7) for all t ∈ [0, ∞), there exists 𝜉 = 𝜉(t) > 0 such that ∫_t^{t+𝜉} M₁(s)ds + ∫_t^{t+𝜉} M₂(s)du(s) < 1;
(E8) the following equalities hold: ∫_{t₀}^{t₀+T} 𝒜(s)ds = ∫_{t₀}^{t₀+T} ℬ(s)du(s) = 0.


Let t₀ ⩾ 0. According to Definition 3.2, a function x ∶ [t₀, ∞) → X is called a solution of the initial value problem

x(t) = x(t₀) + ∫_{t₀}^t 𝒜(s)x(s)ds + ∫_{t₀}^t ℬ(s)x(s)du(s) + h(t) − h(t₀),
x(t₀) = x₀,      (9.20)

on [t₀, ∞), if for all t ∈ [t₀, ∞), the Perron integral ∫_{t₀}^t 𝒜(s)x(s)ds and the Perron–Stieltjes integral ∫_{t₀}^t ℬ(s)x(s)du(s) exist and x satisfies the following integral equation:

x(t) = x₀ + ∫_{t₀}^t 𝒜(s)x(s)ds + ∫_{t₀}^t ℬ(s)x(s)du(s) + h(t) − h(t₀),  t ∈ [t₀, ∞).

The next result exhibits a relation between the MDE (9.19) and its corresponding linear nonhomogeneous generalized ODE; see Theorem 4.14. Such a result can also be found in [209] for the finite dimensional case, but the proof for functions taking values in a general Banach space X follows analogously.

Theorem 9.24: Let t₀ ∈ [0, ∞). The function x ∶ [t₀, ∞) → X is a solution of the MDE (9.19) with initial condition x(t₀) = x₀ if and only if x ∶ [t₀, ∞) → X is a solution of the linear nonhomogeneous generalized ODE

dx/d𝜏 = D[A(t)x + h(t)],
x(t₀) = x₀,      (9.21)

where, for all t ∈ [0, ∞),

A(t) = ∫_{t₀}^t 𝒜(s)ds + ∫_{t₀}^t ℬ(s)du(s).      (9.22)

Definition 9.25: Given T > 0, 𝜃 > 0, and t₀ ∈ [0, ∞), we say that a function x ∶ [t₀, ∞) → X is a (𝜃, T)-periodic solution of the MDE (9.19) if x is a solution of (9.19) and, for all t ∈ [t₀, ∞), x(t + T) = 𝜃x(t).

The next lemma exhibits some properties of the operator A ∶ [0, ∞) → L(X) defined in (9.22).

Lemma 9.26: Let A ∶ [0, ∞) → L(X) be the operator given by (9.22). Then, A is T-periodic in [0, ∞), A ∈ BV_loc([0, ∞), L(X)), and A satisfies the condition (Δ⁺) on [0, ∞) (see p. 207). Moreover,

∫_{t₀}^{t₀+T} d[A(s)]z = 0,  for all z ∈ X.

Proof. Let [a, b] ⊂ [0, ∞) be given and consider a division d = (tᵢ), i = 1, 2, …, |d|, of [a, b]. By conditions (E4) and (E5), we have

∑_{i=1}^{|d|} ∥A(tᵢ) − A(t_{i−1})∥ ⩽ ∑_{i=1}^{|d|} ‖∫_{t_{i−1}}^{tᵢ} 𝒜(r)dr + ∫_{t_{i−1}}^{tᵢ} ℬ(r)du(r)‖
⩽ ∑_{i=1}^{|d|} (∫_{t_{i−1}}^{tᵢ} M₁(r)dr + ∫_{t_{i−1}}^{tᵢ} M₂(r)du(r)) ⩽ ∫_a^b M₁(r)dr + ∫_a^b M₂(r)du(r),

which is finite by hypothesis. Thus, A ∈ BV_loc([0, ∞), L(X)). By condition (E7), for each t ∈ [0, ∞), there exists 𝜉 = 𝜉(t) > 0 such that ∥A(t + 𝜉) − A(t)∥ < 1. Consequently, ∥Δ⁺A(t)∥ < 1 and, hence, A satisfies condition (Δ⁺), presented in Chapter 6, on [0, ∞). Moreover, by conditions (E1)–(E5) and (E8), we have

A(t + T) = ∫_{t₀}^{t+T} 𝒜(s)ds + ∫_{t₀}^{t+T} ℬ(s)du(s)
         = ∫_{t₀}^{t₀+T} 𝒜(s)ds + ∫_{t₀+T}^{t+T} 𝒜(s)ds + ∫_{t₀}^{t₀+T} ℬ(s)du(s) + ∫_{t₀+T}^{t+T} ℬ(s)du(s)
         = ∫_{t₀}^t 𝒜(s + T)ds + ∫_{t₀}^t ℬ(s + T)du(s + T)
         = ∫_{t₀}^t 𝒜(s)ds + ∫_{t₀}^t ℬ(s)du(s) = A(t),

where, in the last equality, we used the fact that u(s + T) = u(s) + 𝛽 for all s ∈ [0, ∞), and, by the definition of the integral, we have

∫_{t₀}^t ℬ(s)du₁(s) = 0,

with u₁(s) = 𝛽 for all s ∈ [0, ∞). Thus, we conclude that A is T-periodic on [0, ∞). At last, by Proposition 1.67, we obtain

∫_{t₀}^{t₀+T} d[A(s)]z = ∫_{t₀}^{t₀+T} 𝒜(s) z ds + ∫_{t₀}^{t₀+T} ℬ(s) z du(s)

for every z ∈ X. Thus, using condition (E8), we conclude that

∫_{t₀}^{t₀+T} d[A(s)]z = 0

for all z ∈ X, and the proof is complete. ◽
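A scalar illustration of Lemma 9.26 with assumed data (not from the text): take 𝒜(s) = cos s, ℬ(s) = sin s, and u(s) = s, so that u(s + T) = u(s) + 𝛽 with 𝛽 = T = 2𝜋 and condition (E8) holds for t₀ = 0; then A(t) in (9.22) evaluates in closed form and is T-periodic.

```python
# Scalar illustration of Lemma 9.26: with 𝒜(s) = cos s, ℬ(s) = sin s and
# u(s) = s, the operator A(t) = ∫_0^t 𝒜(s) ds + ∫_0^t ℬ(s) du(s)
# evaluates to sin t + (1 - cos t), which is T-periodic with T = 2π.
import numpy as np

T = 2 * np.pi

def A(t):
    return np.sin(t) + (1.0 - np.cos(t))

for s in [0.0, 0.9, 3.3, 5.1]:
    assert abs(A(s + T) - A(s)) < 1e-12
```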

In view of the relations between the MDE (9.19) and the linear nonhomogeneous generalized ODE (9.21), and due to Lemma 9.21, we conclude that the existence of a (𝜃, T)-periodic solution x ∶ [t₀, ∞) → X of the MDE (9.19) is equivalent to the existence of a solution of the boundary value problem

x(t) = x(t₀) + ∫_{t₀}^t 𝒜(s)x(s)ds + ∫_{t₀}^t ℬ(s)x(s)du(s) + h(t) − h(t₀),  t ∈ [t₀, t₀ + T],
x(t₀ + T) = 𝜃x(t₀).      (9.23)

In the next result, we consider the constant 𝛽 given in condition (E2) and the functions M₁ and M₂ given by conditions (E4) and (E5), respectively.

Theorem 9.27: Assume that conditions (E1)–(E8) are satisfied. If

M₁(s) + M₂(s) ⩽ 1/(𝜂 max{T, 𝛽}),  for all s ∈ [0, ∞),

where 𝜂 > 2, then the MDE (9.19) admits a (𝜃, T)-periodic solution on [t₀, ∞).

Proof. We are going to use Theorem 9.22 to conclude the result. By Lemma 9.26, conditions (A1)–(A3) from the previous section are satisfied. In addition, condition (E6) says that there exists 𝜃 > 0 such that h(t + T) = 𝜃h(t) for all t ∈ [0, ∞). Now, let t₀ ∈ [0, ∞). Then,

var_{t₀}^{t₀+T}(A) ⩽ ∫_{t₀}^{t₀+T} M₁(s)ds + ∫_{t₀}^{t₀+T} M₂(s)du(s) ⩽ T/(𝜂 max{T, 𝛽}) + [u(t₀ + T) − u(t₀)]/(𝜂 max{T, 𝛽}) ⩽ 2/𝜂 < 1,

where we used the facts that u(t₀ + T) − u(t₀) = 𝛽 and 𝜂 > 2. Thus, by Theorems 9.22 and 9.24, the MDE (9.19) admits a (𝜃, T)-periodic solution on [t₀, ∞). ◽


10 Averaging Principles

Márcia Federson¹ and Jaqueline G. Mesquita²
¹ Departamento de Matemática, Instituto de Ciências Matemáticas e de Computação (ICMC), Universidade de São Paulo, São Carlos, SP, Brazil
² Departamento de Matemática, Instituto de Ciências Exatas, Universidade de Brasília, Brasília, DF, Brazil

Averaging methods have the purpose of simplifying the analysis of nonautonomous differential systems through simpler autonomous differential systems, obtained as an "averaged" equation of the original (or exact) equation. Roughly speaking, averaging methods enable one to replace a time-varying small perturbation, acting on a long time interval, by a time-invariant perturbation and, in this process, only a small error is introduced.

As we mentioned in Section 2.3, although the idea of averaging came first with Andrew Stephenson in his studies from the early 1900s (see [224, 225]), the term "averaging" goes back to the works of Pjotr Kapitza (see [139, 140]). The work by N. N. Krylov and N. N. Bogolyubov in [145], published in 1934, as well as the paper [20] from 1955 by N. N. Bogolyubov and A. Mitropols'ki˘ı, are also considered pioneering in averaging methods (also referred to as averaging principles).

When dealing with averaging methods, one is expected to describe the difference between the solutions of the exact system and the averaged system for a sufficiently small parameter on a finite interval. The following nonlinear system was considered in [20, 145], for instance:

dx/dt = ẋ = 𝜖X(t, x),
x(0) = x₀,      (10.1)

where 𝜖 is a small parameter, x and X are n-dimensional vectors, and X is Lipschitzian in the second variable. The averaged equation for the ordinary differential equation (ODE) (10.1) is

ẏ = 𝜖X₀(y),
y(0) = x₀,      (10.2)

Generalized Ordinary Differential Equations in Abstract Spaces and Applications, First Edition. Edited by Everaldo M. Bonotto, Márcia Federson, and Jaqueline G. Mesquita. © 2021 John Wiley & Sons, Inc. Published 2021 by John Wiley & Sons, Inc.


with

X₀(x) = lim_{T→∞} (1/T) ∫_0^T X(t, x)dt,

that is, the right-hand side of the autonomous equation (10.2) is obtained by taking the average, or mean, of the right-hand side of the nonautonomous equation (10.1).

In the 1960s, V. I. Fodˇcuk [94], A. Halanay [113], J. K. Hale [114], G. N. Medvedev [175], and G. N. Medvedev, B. I. Morgunov, and V. M. Volosov [176] developed methods of averaging for certain functional differential equations (we write, for short, FDEs) with a small parameter. However, the averaged systems they considered were autonomous ODEs. Let us illustrate that situation with an example. Consider the delay differential equation

ẋ = 𝜖f(t, x(t − r)),      (10.3)

where r > 0 and 𝜖 > 0 is a small parameter. Consider the change of variables t → t∕𝜖 and y(t) = x(t∕𝜖). Then,

x(t∕𝜖 − r) = y(t − 𝜖r)

and, therefore, Eq. (10.3) can be rewritten as

ẏ(t) = (1∕𝜖) ẋ(t∕𝜖) = f(t∕𝜖, y(t − 𝜖r)).

Taking 𝜖 → 0⁺, the delay becomes negligible and, hence, the averaged equation is an autonomous ODE given by

ẏ = f₀(y),  with f₀(y) = lim_{T→∞} (1/T) ∫_0^T f(s, y)ds.
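The averaged right-hand side can be approximated numerically by truncating the limit at a large T. The choice X(t, x) = (1 + cos t)x below is an illustrative assumption whose exact average is X₀(x) = x:

```python
# Approximating X₀(x) = lim_{T→∞} (1/T) ∫_0^T X(t, x) dt by a truncated
# time average, for the illustrative right-hand side X(t, x) = (1 + cos t)·x.
import numpy as np

def X(t, x):
    return (1.0 + np.cos(t)) * x

def averaged_rhs(x, T=1000.0, n=200001):
    t = np.linspace(0.0, T, n)
    v = X(t, x)
    dt = t[1] - t[0]
    # composite trapezoidal rule for (1/T) ∫_0^T X(t, x) dt
    return (v.sum() - 0.5 * (v[0] + v[-1])) * dt / T

# the exact average of (1 + cos t) is 1, so X₀(x) = x
assert abs(averaged_rhs(2.0) - 2.0) < 1e-2
```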

In the 1970s, the investigations about averaging methods for FDEs showed that the classic approximation by solutions of an autonomous ODE could be replaced by an approximation by solutions of an autonomous FDE, whenever one could treat separately the limiting process involving the delay and the averaging process. At this moment, it became clear that dealing with these two processes apart would enable one to get an infinite-dimensional phase space. Furthermore, a better approximation of solutions was obtained, and such a fact could be verified by the order of the approximation and computational simulations. In this respect, the reader may want to consult the works of V. V. Strygin in [226] and of B. Lehman and S. P. Weibel in [162]. See also [160, 161, 163], for instance. In these papers, the authors considered a nonautonomous FDE of the form

ẋ = 𝜖f(t, x_t),
x₀ = 𝜙,

where 𝜖 > 0 is a small parameter and, as usual, x_t(𝜃) = x(t + 𝜃), for 𝜃 ∈ [−r, 0], with r ⩾ 0 and t ⩾ 0. The initial function 𝜙 is taken in the Banach space 𝒞 = C([−r, 0], ℝⁿ) of continuous functions from [−r, 0] to ℝⁿ, equipped with the supremum norm. The function f ∶ ℝ × 𝒞 → ℝⁿ is continuous in the first variable and Lipschitzian in the second variable. The averaged system is then given by

ẏ = 𝜖f₀(y_t),
y₀ = 𝜙,

where, for every 𝜑 ∈ 𝒞, the limit

f₀(𝜑) = lim_{T→∞} (1/T) ∫_0^T f(s, 𝜑)ds

f0 (𝜑) = lim

exists. In the late 1980s, D. D. Bainov and S. D. Milusheva (see [11]) considered an impulsive FDE of neutral type given by ⎧ ⎪ ⎪ ⎨ ⎪ ⎪ ⎩

̇ = 𝜖X(t, x(t), x(Δ(t, x(t)), x(Δ(t, ̇ x(t) x(t))))), x(t) = 𝜙(t, 𝜖), ̇ 𝜖), ̇ = 𝜙(t, x(t) xi+ = xi− + 𝜖Ii (xi− ),

t > 0, t ≠ 𝜏i (x), t ∈ [−r, 0], t ∈ [−r, 0], i ∈ ℕ,

(10.4)

where 𝜖 > 0 is a small parameter, r > 0, t − r ⩽ Δ(t, x(t)) ⩽ t, t ⩾ 0, 𝜙(t, 𝜖) is the initial function, the surfaces 𝜏i (x) are such that 𝜏i (x) < 𝜏i+1 (x), for i ∈ ℕ, and all 𝜏i (x) are in the half-space t > 0 for all x in some set D ⊂ ℝn and i ∈ ℕ. The averaged system for (10.4) is given by { ẏ = 𝜖X0 (y) + 𝜖I0 (y), (10.5) y (0) = x0 , whenever the limits 1 T→∞ T ∫t lim

t+T

X(s, x, x, 0)ds = X0 (x)

and

1 ∑ Ii (x) = I0 (x) T→∞ T t 0, there exists 𝜖 ∈ (0, 𝜖0 ], 𝜖0 = 𝜖0 (𝜂, L), such that if 0 < 𝜖 ⩽ 𝜖0 , then [ ] L , ‖x(t) − y(t)‖ < 𝜂, for all t ∈ 0, 𝜖 where x and y are solutions of (10.4) and (10.5), respectively. This means that the solution of the averaged equation (10.5) is close to the solution of the original equation (10.4). Our aim in the next sections is to present periodic and nonperiodic averaging principles for generalized ODEs. We also include one application for impulsive


differential equations (IDEs, for short). We start by presenting periodic averaging principles and, then, we present nonperiodic averaging methods. All the results of this chapter can be found in [83, 178].

10.1 Periodic Averaging Principles

In this section, we present a periodic averaging principle for generalized ODEs described by

dx/d𝜏 = D[𝜖F(x, t) + 𝜖²G(x, t, 𝜖)].

This result can be found in [178] for the case X = ℝⁿ. Here, we present its version for Banach space-valued functions, whose proof is essentially the same. We also add an application to IDEs.

Theorem 10.1: Let B ⊂ X be open, let 𝜖₀ > 0 and L > 0 be given, and consider Ω = B × [0, ∞) and functions F ∶ Ω → X and G ∶ Ω × (0, 𝜖₀] → X fulfilling the following conditions:

(i) there exist nondecreasing left-continuous functions h₁, h₂ ∶ [0, ∞) → [0, ∞) such that F belongs to the class ℱ(Ω, h₁) and, for every fixed 𝜖 ∈ (0, 𝜖₀], the function (x, t) ↦ G(x, t, 𝜖) belongs to the class ℱ(Ω, h₂);
(ii) F(x, 0) = 0 and G(x, 0, 𝜖) = 0 for every x ∈ B, 𝜖 ∈ (0, 𝜖₀];
(iii) there exist T > 0 and a bounded Lipschitz-continuous function M ∶ B → X such that, for every x ∈ B and every t ∈ [0, ∞), F(x, t + T) − F(x, t) = M(x);
(iv) there exists a constant 𝛼 > 0 such that, for every j ∈ ℕ, h₁(jT) − h₁((j − 1)T) ⩽ 𝛼;
(v) there exists a constant 𝛽 > 0 such that |h₂(t)∕t| ⩽ 𝛽, for every t > 0.

Let

F₀(x) = F(x, T)∕T,  for all x ∈ B.

Assume that, for every 𝜖 ∈ (0, 𝜖₀], x_𝜖 ∶ [0, L∕𝜖] → B is a solution of the generalized ODE

dx/d𝜏 = D[𝜖F(x, t) + 𝜖²G(x, t, 𝜖)],
x(0) = x₀(𝜖),      (10.6)

and y_𝜖 ∶ [0, L∕𝜖] → B is a solution of the autonomous ODE

y′(t) = 𝜖F₀(y(t)),
y(0) = y₀(𝜖).      (10.7)

Suppose, in addition, there exists a constant J > 0 such that

‖x₀(𝜖) − y₀(𝜖)‖ ⩽ J𝜖,  for every 𝜖 ∈ (0, 𝜖₀].

Then, there exists a constant K > 0 such that, for every 𝜖 ∈ (0, 𝜖₀] and every t ∈ [0, L∕𝜖], we have

‖x_𝜖(t) − y_𝜖(t)‖ ⩽ K𝜖.

Proof. Since M is a bounded and Lipschitzian function, there are positive constants m and l for which

‖M(x)‖ ⩽ m  and

‖M(x) − M(y)‖ ⩽ l‖x − y‖,

for all x, y ∈ B. Suppose x ∈ B. Then, from the definition of F₀ and by (ii), we have

‖F₀(x)‖ = ‖F(x, T)∕T‖ = ‖[F(x, T) − F(x, 0)]∕T‖ = ‖M(x)‖∕T ⩽ m∕T.      (10.8)

Define a function H ∶ B × [0, ∞) → X by H(x, t) = F₀(x)t. Then, the definition of F₀ yields that H can be written as

H(x, t) = [F(x, T)∕T] t,  for every (x, t) ∈ B × [0, ∞).

Thus,

‖H(x, s₂) − H(x, s₁)‖ = (1∕T)‖F(x, T)s₂ − F(x, T)s₁‖ = (1∕T)‖F(x, T)‖(s₂ − s₁) ⩽ (m∕T)(s₂ − s₁),

and also

‖H(x, s₂) − H(x, s₁) − H(y, s₂) + H(y, s₁)‖ = (1∕T)‖F(x, T)s₂ − F(x, T)s₁ − F(y, T)s₂ + F(y, T)s₁‖
= (1∕T)‖F(x, T) − F(y, T)‖(s₂ − s₁)
= (1∕T)‖F(x, T) − F(x, 0) + F(y, 0) − F(y, T)‖(s₂ − s₁)
= (1∕T)‖M(x) − M(y)‖(s₂ − s₁) ⩽ (l∕T)‖x − y‖(s₂ − s₁),

for all x, y ∈ B and all s₁, s₂ ∈ [0, ∞), with s₁ ⩽ s₂. Therefore, H is an element of the class ℱ(Ω, h₃), where h₃(t) = [(m + l)∕T] t for t ⩾ 0.


Since x_𝜖 is a solution of the nonautonomous generalized ODE (10.6) and y_𝜖 is a solution of the ODE (10.7), we have, for every t ∈ [0, L∕𝜖],

x_𝜖(t) = x₀(𝜖) + 𝜖 ∫_0^t DF(x_𝜖(𝜏), s) + 𝜖² ∫_0^t DG(x_𝜖(𝜏), s, 𝜖)

and

y_𝜖(t) = y₀(𝜖) + 𝜖 ∫_0^t F₀(y_𝜖(𝜏))d𝜏 = y₀(𝜖) + 𝜖 ∫_0^t D[F₀(y_𝜖(𝜏))s],

whence we obtain

‖x_𝜖(t) − y_𝜖(t)‖ = ‖x₀(𝜖) − y₀(𝜖) + 𝜖 ∫_0^t DF(x_𝜖(𝜏), s) + 𝜖² ∫_0^t DG(x_𝜖(𝜏), s, 𝜖) − 𝜖 ∫_0^t D[F₀(y_𝜖(𝜏))s]‖
⩽ J𝜖 + 𝜖 ‖∫_0^t D[F(x_𝜖(𝜏), s) − F(y_𝜖(𝜏), s)]‖ + 𝜖 ‖∫_0^t D[F(y_𝜖(𝜏), s) − F₀(y_𝜖(𝜏))s]‖ + 𝜖² ‖∫_0^t DG(x_𝜖(𝜏), s, 𝜖)‖.      (10.9)

On the other hand, since G belongs to ℱ(Ω, h₂), Lemma 4.5 yields

𝜖² ‖∫_0^t DG(x_𝜖(𝜏), s, 𝜖)‖ ⩽ 𝜖²(h₂(t) − h₂(0)) ⩽ 𝜖²h₂(L∕𝜖) = 𝜖L [h₂(L∕𝜖)∕(L∕𝜖)] ⩽ 𝜖L𝛽,

for all t ∈ [0, L∕𝜖], where this last inequality follows from (v). Now, using the fact that F belongs to the class ℱ(Ω, h₁) and by Lemma 4.6, we obtain

‖∫_0^t D[F(x_𝜖(𝜏), s) − F(y_𝜖(𝜏), s)]‖ ⩽ ∫_0^t ‖x_𝜖(s) − y_𝜖(s)‖ dh₁(s).      (10.10)

Let us estimate the second term on the right-hand side of (10.9). In order to do that, we assume that p is the largest integer for which pT ⩽ t (notice that p varies as t changes). Thus, for every t ∈ [0, L∕𝜖], we have

∫_0^t D[F(y_𝜖(𝜏), s) − F₀(y_𝜖(𝜏))s] = ∑_{j=1}^p ∫_{(j−1)T}^{jT} D[F(y_𝜖(𝜏), s) − F₀(y_𝜖(𝜏))s] + ∫_{pT}^t D[F(y_𝜖(𝜏), s) − F₀(y_𝜖(𝜏))s].

In addition, for every j ∈ {1, … , p}, we have

∫_{(j−1)T}^{jT} D[F(y_𝜖(𝜏), s) − F₀(y_𝜖(𝜏))s] = ∫_{(j−1)T}^{jT} D[F(y_𝜖(𝜏), s) − F(y_𝜖(jT), s)]
  − ∫_{(j−1)T}^{jT} D[F₀(y_𝜖(𝜏))s − F₀(y_𝜖(jT))s] + ∫_{(j−1)T}^{jT} D[F(y_𝜖(jT), s) − F₀(y_𝜖(jT))s].      (10.11)

Applying Lemma 4.5 (see the estimate (10.10)) to the first integral on the right-hand side of (10.11), we get

‖∫_{(j−1)T}^{jT} D[F(y_𝜖(𝜏), s) − F(y_𝜖(jT), s)]‖ ⩽ ∫_{(j−1)T}^{jT} ‖y_𝜖(s) − y_𝜖(jT)‖ dh₁(s).

Moreover, for every x, y ∈ B, we have

‖F₀(x) − F₀(y)‖ = ‖[F(x, T) − F(y, T)]∕T‖ = ‖[F(x, T) − F(x, 0) − F(y, T) + F(y, 0)]∕T‖ ⩽ ‖x − y‖ [h₁(T) − h₁(0)]∕T,      (10.12)

whence F₀ ∶ B → X is continuous. Using the fact that y_𝜖 is a solution of the autonomous ODE (10.7), we obtain, for every s ∈ [(j − 1)T, jT],

‖y_𝜖(s) − y_𝜖(jT)‖ = ‖∫_{jT}^s 𝜖F₀(y_𝜖(r))dr‖ ⩽ 𝜖 ∫_s^{jT} ‖F₀(y_𝜖(r))‖ dr ⩽ 𝜖 ∫_s^{jT} (m∕T) dr = 𝜖(m∕T)(jT − s) ⩽ 𝜖m,

where we used the estimate (10.8), and Corollary 1.48 together with the fact that s ↦ F₀(y_𝜖(s)) is continuous to achieve the second inequality. Therefore,

∫_{(j−1)T}^{jT} ‖y_𝜖(s) − y_𝜖(jT)‖ dh₁(s) ⩽ 𝜖m [h₁(jT) − h₁((j − 1)T)] ⩽ 𝜖m𝛼.

A similar argument applied to the second integral on the right-hand side of (10.11) yields

‖∫_{(j−1)T}^{jT} D[F₀(y_𝜖(𝜏))s − F₀(y_𝜖(jT))s]‖ ⩽ 𝜖m [h₃(jT) − h₃((j − 1)T)] = 𝜖m(m + l).


Now, we want to obtain an estimate for the third integral on the right-hand side of (10.11). Given an arbitrary y ∈ B, we have

∫_{(j−1)T}^{jT} D[F(y, s) − F₀(y)s] = F(y, jT) − F(y, (j − 1)T) − F₀(y)T = M(y) − F(y, T) = −F(y, 0) = 0,

where we applied conditions (ii) and (iii). Taking p as the largest integer for which pT ⩽ t ⩽ L∕𝜖 and combining the previous estimates, we get

‖∑_{j=1}^p ∫_{(j−1)T}^{jT} D[F(y_𝜖(𝜏), s) − F₀(y_𝜖(𝜏))s]‖ ⩽ p𝜖m𝛼 + p𝜖m(m + l) ⩽ Lm𝛼∕T + m(m + l)L∕T.

Then, using Lemma 4.5, we obtain

‖∫_{pT}^t D[F(y_𝜖(𝜏), s) − F₀(y_𝜖(𝜏))s]‖ ⩽ ‖∫_{pT}^t DF(y_𝜖(𝜏), s)‖ + ‖∫_{pT}^t D[F₀(y_𝜖(𝜏))s]‖
⩽ h₁(t) − h₁(pT) + h₃(t) − h₃(pT) ⩽ h₁(pT + T) − h₁(pT) + h₃(pT + T) − h₃(pT) ⩽ 𝛼 + m + l,

since F ∈ ℱ(Ω, h₁) and H(x, t) = F₀(x)t ∈ ℱ(Ω, h₃), whence we conclude that

‖∫_0^t D[F(y_𝜖(𝜏), s) − F₀(y_𝜖(𝜏))s]‖ ⩽ K₁,

for a certain positive constant K₁. Then,

‖x_𝜖(t) − y_𝜖(t)‖ ⩽ 𝜖 ∫_0^t ‖x_𝜖(s) − y_𝜖(s)‖ dh₁(s) + 𝜖(J + K₁ + L𝛽).

By Grönwall's inequality (Theorem 1.52), we have

‖x_𝜖(t) − y_𝜖(t)‖ ⩽ e^{𝜖(h₁(t)−h₁(0))} 𝜖(J + K₁ + L𝛽).

Finally, note that

𝜖[h₁(t) − h₁(0)] ⩽ 𝜖[h₁(L∕𝜖) − h₁(0)] ⩽ 𝜖[h₁(⌈L∕(𝜖T)⌉T) − h₁(0)] ⩽ 𝜖⌈L∕(𝜖T)⌉𝛼 ⩽ 𝜖(L∕(𝜖T) + 1)𝛼 ⩽ (L∕T + 𝜖₀)𝛼.

Then,

‖x_𝜖(t) − y_𝜖(t)‖ ⩽ e^{(L∕T+𝜖₀)𝛼} 𝜖(J + K₁ + L𝛽) = K𝜖,

where K = e^{(L∕T+𝜖₀)𝛼}(J + K₁ + L𝛽), and the statement follows. ◽
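The O(𝜖) conclusion of Theorem 10.1 can be observed on a classical example with closed-form solutions (an illustrative choice, not from the text): x′ = 𝜖(1 + cos t)x has exact solution x(t) = x₀e^{𝜖(t+sin t)}, while the averaged equation y′ = 𝜖y gives y(t) = x₀e^{𝜖t}; on [0, L∕𝜖] the two stay within a constant times 𝜖 of each other.

```python
# Observing the O(ε) estimate of Theorem 10.1: exact vs. averaged solutions
# of x' = ε(1 + cos t)x on the long interval [0, L/ε], in closed form.
import numpy as np

L, x0 = 1.0, 1.0

def sup_error(eps, n=10001):
    t = np.linspace(0.0, L / eps, n)
    x = x0 * np.exp(eps * (t + np.sin(t)))   # exact solution
    y = x0 * np.exp(eps * t)                 # solution of the averaged equation
    return np.max(np.abs(x - y))

e1, e2 = sup_error(0.01), sup_error(0.005)
assert e1 < 0.1            # the discrepancy is already small
assert e2 < 0.6 * e1       # and it roughly halves when ε is halved: O(ε)
```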




10.1.1 An Application to IDEs

This section concerns an application of the previous result to IDEs. We start by presenting two auxiliary lemmas, which relate T-periodic integrands to T-periodic Stieltjes-type integrals. These results are known for non-Stieltjes-type integrals. Here, they are presented within the framework of Perron–Stieltjes integrals.

Lemma 10.2: Let z ∶ [0, ∞) → X be a T-periodic function and g ∶ [0, ∞) → ℝ be a nondecreasing function such that the Perron–Stieltjes integral ∫_0^t z(s)dg(s) exists for every t ∈ [0, ∞). Assume, further, that there exists 𝛼 > 0 such that

g(t + T) − g(t) = 𝛼,  for every t ∈ [0, ∞).

Then, for each n ∈ ℕ₀, we have

∫_{nT}^{(n+1)T} z(s)dg(s) = ∫_0^T z(s)dg(s).

Proof. Let n ∈ ℕ₀ be fixed. Since z is a T-periodic function, z is an nT-periodic function, that is,

z(s + nT) = z(s),  for all s ∈ [0, ∞).

On the other hand, let 𝜙 ∶ [0, T] → ℝ be given by 𝜙(s) = s + nT, for all s ∈ [0, T]. Then,

∫_{nT}^{(n+1)T} z(s)dg(s) = ∫_{nT}^{T+nT} z(s)dg(s) = ∫_{𝜙(0)}^{𝜙(T)} z(s)dg(s) = ∫_0^T z(𝜙(s)) dg(𝜙(s)) = ∫_0^T z(s + nT)dg(s) = ∫_0^T z(s)dg(s),

where we used that dg(s + nT) = dg(s), since g(s + nT) = g(s) + n𝛼, getting the desired result. ◽
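Lemma 10.2 can be checked for a concrete (assumed) pair: z(s) = cos s + 0.3 with T = 2𝜋, and g(s) = s, for which g(t + T) − g(t) = 𝛼 = 2𝜋 and the Perron–Stieltjes integral reduces to an ordinary integral:

```python
# Checking Lemma 10.2 numerically: for z(s) = cos s + 0.3 (T = 2π) and
# g(s) = s, the integral of z dg over [nT, (n+1)T] equals that over [0, T].
import numpy as np

T = 2 * np.pi

def integral(a, b, n=200001):
    s = np.linspace(a, b, n)
    v = np.cos(s) + 0.3
    return np.sum((v[1:] + v[:-1]) / 2 * np.diff(s))   # trapezoidal rule

for k in [1, 2, 5]:
    assert abs(integral(k * T, (k + 1) * T) - integral(0.0, T)) < 1e-9
```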

Lemma 10.3: Given an open set B ⊂ X, let f∶B × [0, ∞) → X be a T-periodic function with respect to the second variable and g∶[0, ∞) → ℝ be a nondecreasing function such that the Perron–Stieltjes integral ∫_{0}^{t} f(x, s)dg(s) exists for every t ∈ [0, ∞) and x ∈ B. Assume, in addition, that there exists 𝛼 > 0 such that

g(t + T) − g(t) = 𝛼, for every t ∈ [0, ∞).

Then, for every t ⩾ 0 and x ∈ B, we have

∫_{t}^{t+T} f(x, s)dg(s) = ∫_{0}^{T} f(x, s)dg(s).

Proof. Let t ⩾ 0 be given. Since [0, ∞) = ∪_{n∈ℕ₀}[nT, (n + 1)T], there exists n ∈ ℕ₀ such that nT ⩽ t ⩽ (n + 1)T. In particular,

t ⩽ (n + 1)T = nT + T ⩽ t + T,

due to the fact that nT ⩽ t. As a consequence, we get

∫_{t}^{t+T} f(x, s)dg(s) = ∫_{t}^{(n+1)T} f(x, s)dg(s) + ∫_{(n+1)T}^{t+T} f(x, s)dg(s)
= ∫_{t}^{(n+1)T} f(x, s)dg(s) + ∫_{nT+T}^{t+T} f(x, s)dg(s).

Then, considering the change of variable 𝜑(𝜉) = 𝜉 + T, we obtain

∫_{t}^{t+T} f(x, s)dg(s) = ∫_{t}^{(n+1)T} f(x, s)dg(s) + ∫_{𝜑(nT)}^{𝜑(t)} f(x, s)dg(s)
= ∫_{t}^{(n+1)T} f(x, s)dg(s) + ∫_{nT}^{t} f(x, 𝜑(s))dg(𝜑(s))
= ∫_{t}^{(n+1)T} f(x, s)dg(s) + ∫_{nT}^{t} f(x, s + T)dg(s)
= ∫_{t}^{(n+1)T} f(x, s)dg(s) + ∫_{nT}^{t} f(x, s)dg(s)
= ∫_{nT}^{(n+1)T} f(x, s)dg(s) = ∫_{0}^{T} f(x, s)dg(s),

where we used the T-periodicity of f and, to obtain the last equality, we employed Lemma 10.2. ◽

Consider a function f∶X × [0, ∞) → X, an increasing sequence of numbers 0 ⩽ t₁ < t₂ < …, and a sequence of mappings Iᵢ∶X → X, with i ∈ ℕ, and consider the IDE

x′(t) = 𝜖f(x(t), t) + 𝜖²g̃(x(t), t, 𝜖), t ≠ tᵢ,
Δ⁺x(tᵢ) = x(tᵢ⁺) − x(tᵢ) = 𝜖Iᵢ(x(tᵢ)), i ∈ ℕ. (10.13)

Throughout this subsection, we assume that f is T-periodic in the second variable and that the impulse times are periodic in the sense that there exists k ∈ ℕ such that, for every integer i > k, we have

tᵢ = tᵢ₋ₖ + T and Iᵢ = Iᵢ₋ₖ.

Let us assume, in addition, that the functions f and g̃ are continuous with respect to the second variable. Moreover, consider that the impulse operators Iᵢ, i ∈ ℕ, are bounded and Lipschitz continuous.

The first result we present is borrowed from [178], where it was proved for the n-dimensional case. Here, we extend it to the case of Banach space-valued functions; the proof, though, remains similar.

Theorem 10.4: Let Ω = X × [0, ∞), T > 0, 𝜖₀ > 0, L > 0, and consider functions f∶Ω → X and g̃∶Ω × (0, 𝜖₀] → X which are continuous in the second variable and satisfy the following conditions:


(H1) for every t ∈ [0, ∞), x ∈ X, and 𝜖 ∈ (0, 𝜖₀], the Perron integrals

∫_{0}^{t} f(x, s)ds and ∫_{0}^{t} g̃(x, s, 𝜖)ds

exist;

(H2) there exists a constant C > 0 such that, for every s₁, s₂ ∈ [0, ∞) with s₁ ⩽ s₂ and every x, y ∈ X,

‖∫_{s₁}^{s₂} f(x, s)ds‖ ⩽ ∫_{s₁}^{s₂} C ds and ‖∫_{s₁}^{s₂} [f(x, s) − f(y, s)] ds‖ ⩽ ∫_{s₁}^{s₂} C ‖x − y‖ ds;

(H3) there exists a constant N > 0 such that, for every s₁, s₂ ∈ [0, ∞) with s₁ ⩽ s₂, every x, y ∈ X, and every 𝜖 ∈ (0, 𝜖₀],

‖∫_{s₁}^{s₂} g̃(x, s, 𝜖)ds‖ ⩽ ∫_{s₁}^{s₂} N ds and ‖∫_{s₁}^{s₂} [g̃(x, s, 𝜖) − g̃(y, s, 𝜖)] ds‖ ⩽ ∫_{s₁}^{s₂} N ‖x − y‖ ds.

Assume, further, that f is T-periodic with respect to the second variable. Take k ∈ ℕ, consider 0 ⩽ t₁ < t₂ < ⋯ < tₖ < T and impulse operators Iᵢ∶X → X, i ∈ {1, 2, …, k}, which are bounded and Lipschitzian functions, and, for every integer i > k, set

tᵢ = tᵢ₋ₖ + T and Iᵢ = Iᵢ₋ₖ.

Define

f₀(x) = (1∕T) ∫_{0}^{T} f(x, s)ds and I₀(x) = (1∕T) ∑_{i=1}^{k} Iᵢ(x)

for every x ∈ X. Moreover, assume that, for every 𝜖 ∈ (0, 𝜖₀], x_𝜖∶[0, L∕𝜖] → X is a solution of the IDE

x′(t) = 𝜖f(x(t), t) + 𝜖²g̃(x(t), t, 𝜖), t ≠ tᵢ,
Δ⁺x(tᵢ) = 𝜖Iᵢ(x(tᵢ)), i ∈ ℕ, (10.14)
x(0) = x₀(𝜖),

and y_𝜖∶[0, L∕𝜖] → X is a solution of the ODE

y′(t) = 𝜖[f₀(y(t)) + I₀(y(t))], y(0) = y₀(𝜖). (10.15)


If there is a constant J > 0 such that ‖x₀(𝜖) − y₀(𝜖)‖ ⩽ J𝜖 for every 𝜖 ∈ (0, 𝜖₀], then there exists a constant K > 0 such that, for every 𝜖 ∈ (0, 𝜖₀] and t ∈ [0, L∕𝜖],

‖x_𝜖(t) − y_𝜖(t)‖ ⩽ K𝜖.

Proof. Given 𝜖 ∈ (0, 𝜖₀], define two functions

F(x, t) = ∫_{0}^{t} f(x, s)ds + ∑_{i=1}^{∞} Iᵢ(x)H_{tᵢ}(t) and G(x, t, 𝜖) = ∫_{0}^{t} g̃(x, s, 𝜖)ds,

where x ∈ X, t ∈ [0, ∞) and, for each i ∈ ℕ, H_{tᵢ} is the Heaviside function concentrated at tᵢ. Suppose x_𝜖 is a solution of the generalized ODE given by

dx_𝜖∕d𝜏 = D[𝜖F(x_𝜖, t) + 𝜖²G(x_𝜖, t, 𝜖)],

which is exactly the generalized ODE corresponding to the integral form of the IDE (10.14). The reader may want to check Chapters 3 and 4 for more details on the relations between several differential equations and generalized ODEs, and also [209] for this specific type of equation.

By the hypotheses, there is a positive constant D such that, for every x, y ∈ X, t ∈ [0, ∞), and i ∈ ℕ, we have

‖Iᵢ(x)‖ ⩽ D and ‖Iᵢ(x) − Iᵢ(y)‖ ⩽ D ‖x − y‖.

Define a function h₁∶[0, ∞) → ℝ by

h₁(t) = Ct + D ∑_{i=1}^{∞} H_{tᵢ}(t).

Thus, h₁ is left-continuous and nondecreasing. Moreover, for all 0 ⩽ u ⩽ t, we get

‖F(x, t) − F(x, u)‖ = ‖∫_{u}^{t} f(x, s)ds + ∑_{i=1}^{∞} Iᵢ(x)[H_{tᵢ}(t) − H_{tᵢ}(u)]‖
⩽ ‖∫_{u}^{t} f(x, s)ds‖ + ∑_{i=1}^{∞} ‖Iᵢ(x)‖ [H_{tᵢ}(t) − H_{tᵢ}(u)]
⩽ C(t − u) + D ∑_{i=1}^{∞} [H_{tᵢ}(t) − H_{tᵢ}(u)]
= h₁(t) − h₁(u)


and, also,

‖F(x, t) − F(x, u) − F(y, t) + F(y, u)‖
= ‖∫_{u}^{t} [f(x, s) − f(y, s)] ds + ∑_{i=1}^{∞} [Iᵢ(x) − Iᵢ(y)][H_{tᵢ}(t) − H_{tᵢ}(u)]‖
⩽ ‖∫_{u}^{t} [f(x, s) − f(y, s)] ds‖ + ∑_{i=1}^{∞} ‖Iᵢ(x) − Iᵢ(y)‖ [H_{tᵢ}(t) − H_{tᵢ}(u)]
⩽ ‖x − y‖ [C(t − u) + D ∑_{i=1}^{∞} [H_{tᵢ}(t) − H_{tᵢ}(u)]]
= ‖x − y‖ (h₁(t) − h₁(u)).

Therefore, F belongs to the class ℱ(Ω, h₁). Now, let us define h₂∶[0, ∞) → ℝ by h₂(t) = Nt. Then, for 0 ⩽ u ⩽ t, we obtain

‖G(x, t, 𝜖) − G(x, u, 𝜖)‖ = ‖∫_{u}^{t} g̃(x, s, 𝜖)ds‖ ⩽ N(t − u) = h₂(t) − h₂(u).

In addition, for 0 ⩽ u ⩽ t and x, y ∈ X, we have

‖G(x, t, 𝜖) − G(x, u, 𝜖) − G(y, t, 𝜖) + G(y, u, 𝜖)‖ = ‖∫_{u}^{t} [g̃(x, s, 𝜖) − g̃(y, s, 𝜖)] ds‖ ⩽ N ‖x − y‖ (t − u) = ‖x − y‖ [h₂(t) − h₂(u)].

Hence, for every fixed 𝜖 ∈ (0, 𝜖₀], the function (x, t) ↦ G(x, t, 𝜖) belongs to the class ℱ(Ω, h₂). Notice that F(x, 0) = 0 and G(x, 0, 𝜖) = 0 for all x ∈ X. Thus, for all t ⩾ 0 and x ∈ X, by Lemma 10.3, taking g(s) = s, we obtain

F(x, t + T) − F(x, t) = ∫_{t}^{t+T} f(x, s)ds + ∑_{i∶ t⩽tᵢ<t+T} Iᵢ(x) = ∫_{0}^{T} f(x, s)ds + ∑_{i=1}^{k} Iᵢ(x) = T[f₀(x) + I₀(x)],

where we used the periodicity of the impulse times; the statement then follows from the periodic averaging result for generalized ODEs proved at the beginning of this section. ◽

10.2 Nonperiodic Averaging Principles

Consider the nonautonomous generalized ODE

dx∕d𝜏 = D[𝜖F(x, t)],

where F ∈ ℱ(Ω, h) and 𝜖 > 0. In the first result, we relate a solution of the above generalized ODE to a solution of the following averaged ODE:

ẏ = F₀(y),

where

F₀(x) = lim_{r→∞} F(x, r)∕r, for all x ∈ B.

Such a result generalizes [209, Theorem 8.12], and it can be found in [83]. The version presented here is more general, since our result holds for any generalized ODE, while in [83], the result is restricted to a specific generalized ODE. It is relevant to bring both results here (see Theorems 10.5 and 10.7), due to the different techniques used in each of the proofs.

Defining a function G∶Ω → X by

G(x, t)(𝜗) = 𝜖F(x, t∕𝜖)(𝜗∕𝜖), for every (x, t) ∈ Ω and 𝜗 ∈ [0, ∞),

we obtain, by a change of variable, that a solution of the above nonautonomous generalized ODE can be related to the solution of the autonomous ODE

ẏ = G₀(y),

where

G₀(x)(𝜗) = lim_{𝜖→0⁺} [G(x, t∕𝜖)(𝜗∕𝜖)]∕(t∕𝜖), for x ∈ B and 𝜗 ∈ [0, ∞).

The next result is a generalization of the result found in [83] for functions taking values in a Banach space. The proof is very similar to the one found in [83], with obvious adaptations, which, in turn, is inspired by the proof of a result found in [209].
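To make the limit defining F₀ concrete: for the classical choice F(x, t) = ∫₀ᵗ f(x, s) ds (an illustrative special case, not the general generalized-ODE setting), F(x, r)∕r is the time average of f(x, ⋅) over [0, r]. The sketch below watches this average converge for the hypothetical integrand f(x, s) = x(1 + e^{−s}), whose nonperiodic average is F₀(x) = x.

```python
import math

def F(x, r, n=2000):
    # F(x, r) = ∫_0^r f(x, s) ds for the illustrative f(x, s) = x*(1 + exp(-s)),
    # approximated by the midpoint rule with n subintervals.
    h = r / n
    return sum(x * (1 + math.exp(-(i + 0.5) * h)) * h for i in range(n))

x = 1.5
for r in (1.0, 10.0, 100.0, 1000.0):
    print(r, F(x, r) / r)   # approaches F0(x) = x = 1.5 as r grows
```

Here F(x, r) = x(r + 1 − e^{−r}), so F(x, r)∕r = x(1 + (1 − e^{−r})∕r) → x, matching the printed averages.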

Theorem 10.5: Consider Ω = B × [0, ∞) and B = {x ∈ X∶ ‖x‖ < c}, with c > 0, and assume that F ∈ ℱ(Ω, h), where h∶[0, ∞) → ℝ is a left-continuous and nondecreasing function, and F∶B × [0, ∞) → X is such that F(y, 0) = 0 for every y ∈ B. Assume, further, that there is a constant C such that, for every 𝛼 ⩾ 0,

lim sup_{r→∞} [h(r + 𝛼) − h(𝛼)]∕r ⩽ C (10.16)

and, for every x ∈ B,

lim_{r→∞} F(x, r)∕r = F₀(x).

Let y∶[0, ∞) → X be the uniquely determined solution of the autonomous ODE

ẏ = F₀(y), y(0) = ỹ, (10.17)

which belongs to B together with its 𝜌-neighborhood for some 𝜌 > 0, that is, there exists 𝜌 > 0 such that

{x̃ ∈ X∶ ‖x̃ − y(t)‖ < 𝜌, for every t ∈ [0, ∞)} ⊂ B.


Then, for every 𝜇 > 0 and L > 0, there exists 𝜖₀ > 0 such that, for all 𝜖 ∈ (0, 𝜖₀),

‖x_𝜖(t) − z_𝜖(t)‖ < 𝜇, for every t ∈ [0, L∕𝜖],

where x_𝜖 is a solution of the generalized ODE

dx∕d𝜏 = D[𝜖F(x, t)], x_𝜖(0) = y(0), (10.18)

on [0, L∕𝜖] and z_𝜖 is a solution of the autonomous ODE

ż = 𝜖F₀(z), z_𝜖(0) = y(0), (10.19)

on [0, L∕𝜖].

Proof. For y ∈ B, t ∈ [0, ∞), and 𝜖 > 0, consider the functions

G_𝜖(y, t) = 𝜖F(y, t∕𝜖) and h_𝜖(t) = 𝜖h(t∕𝜖).

Clearly, the function h_𝜖 is nondecreasing and left-continuous on [0, ∞). Since F ∈ ℱ(Ω, h), we have, for every x, y ∈ B and every t₁, t₂ ∈ [0, ∞) such that t₂ ⩾ t₁,

‖G_𝜖(y, t₂) − G_𝜖(y, t₁)‖ = 𝜖 ‖F(y, t₂∕𝜖) − F(y, t₁∕𝜖)‖ ⩽ 𝜖 |h(t₂∕𝜖) − h(t₁∕𝜖)| = h_𝜖(t₂) − h_𝜖(t₁)

and, also,

‖G_𝜖(y, t₂) − G_𝜖(y, t₁) − G_𝜖(x, t₂) + G_𝜖(x, t₁)‖ = 𝜖 ‖F(y, t₂∕𝜖) − F(y, t₁∕𝜖) − F(x, t₂∕𝜖) + F(x, t₁∕𝜖)‖ ⩽ ‖y − x‖ [h_𝜖(t₂) − h_𝜖(t₁)].

Therefore, G_𝜖 ∈ ℱ(Ω, h_𝜖), for all 𝜖 > 0. Now, consider y ∈ B. By the hypotheses, we have

lim_{r→∞} F(y, r)∕r = lim_{r→∞} [F(y, r) − F(y, 0)]∕r = F₀(y).

Thus, for every 𝜂 > 0, there exists R > 0 such that, for r > R,

‖F₀(y)‖ ⩽ ‖F₀(y) − [F(y, r) − F(y, 0)]∕r‖ + ‖F(y, r) − F(y, 0)‖∕r ⩽ 𝜂 + [h(r) − h(0)]∕r < 2𝜂 + C,

where we used the estimates ‖F(y, r) − F(y, 0)‖ ⩽ h(r) − h(0) (because F ∈ ℱ(Ω, h)) and (10.16). Then, since 𝜂 > 0 can be taken arbitrarily small, we obtain

‖F₀(y)‖ ⩽ C, for all y ∈ B.

In an analogous way, for every x, y ∈ B and every 𝜂 > 0, there exists a positive constant R such that, whenever r > R, we have

‖F₀(x) − F₀(y)‖ < 𝜂 + ‖F(y, r) − F(y, 0) − F(x, r) + F(x, 0)‖∕r ⩽ 𝜂 + ‖y − x‖ [h(r) − h(0)]∕r ⩽ 𝜂 + C ‖y − x‖.

Then, the arbitrariness of 𝜂 > 0 yields

‖F₀(x) − F₀(y)‖ ⩽ C ‖y − x‖, for all x, y ∈ B.

Notice that, for y ∈ B and t > 0, we have

lim_{𝜖→0⁺} G_𝜖(y, t) = lim_{𝜖→0⁺} 𝜖F(y, t∕𝜖) = lim_{𝜖→0⁺} t [F(y, t∕𝜖)∕(t∕𝜖)] = tF₀(y)

and

lim_{𝜖→0⁺} G_𝜖(y, 0) = lim_{𝜖→0⁺} 𝜖F(y, 0) = 0.

Consider, for y ∈ B and t ⩾ 0, a function given by G₀(y, t) = tF₀(y). Then, it is clear that

lim_{𝜖→0⁺} G_𝜖(y, t) = G₀(y, t)

and, moreover, G₀ ∈ ℱ(Ω, h₀), with h₀(t) = Ct, for all t ⩾ 0. For 0 ⩽ t₁ < t₂ < ∞, we have

h_𝜖(t₂) − h_𝜖(t₁) = 𝜖 [h(t₂∕𝜖) − h(t₁∕𝜖)] = (t₂ − t₁) · [𝜖∕(t₂ − t₁)] · [h((t₂ − t₁)∕𝜖 + t₁∕𝜖) − h(t₁∕𝜖)].

Then, the hypotheses yield

lim sup_{𝜖→0⁺} (h_𝜖(t₂) − h_𝜖(t₁)) ⩽ C(t₂ − t₁) = h₀(t₂) − h₀(t₁),

once lim_{𝜖→0⁺} (t₂ − t₁)∕𝜖 = ∞.

From the fact that y∶[0, ∞) → B is a solution of the autonomous ODE (10.17), we obtain, by the properties of the Kurzweil integral, the equalities

y(s₂) − y(s₁) = ∫_{s₁}^{s₂} F₀(y(𝜏))d𝜏 = ∫_{s₁}^{s₂} D[F₀(y(𝜏))t] = ∫_{s₁}^{s₂} DG₀(y(𝜏), t),

for all s₁, s₂ ∈ [0, ∞). Thus, y is also a solution on [0, ∞) of the generalized ODE

dy∕d𝜏 = DG₀(y, t)

and, by the hypotheses, such a solution is uniquely determined. Therefore, all conditions of Theorem 7.8 are satisfied for the parameter 𝜖 → 0⁺. Then, Theorem 7.8 implies that, for every 𝜇 > 0 and L > 0, there exists 𝜖₀ > 0 such that, for 𝜖 ∈ (0, 𝜖₀), there is a solution y_𝜖 on the interval [0, L] of the generalized ODE

dy∕d𝜏 = DG_𝜖(y, t), (10.20)

with initial condition y_𝜖(0) = y(0), and

‖y_𝜖(s) − y(s)‖ ⩽ 𝜇, for all s ∈ [0, L]. (10.21)

For the solution y_𝜖∶[0, L] → B of (10.20), we have

y_𝜖(s₂) − y_𝜖(s₁) = ∫_{s₁}^{s₂} DG_𝜖(y_𝜖(𝜏), t) = 𝜖 ∫_{s₁}^{s₂} DF(y_𝜖(𝜏), t∕𝜖)

whenever s₁, s₂ ∈ [0, L]. For t ∈ [0, L∕𝜖], denote x_𝜖(t) = y_𝜖(𝜖t). Then, we get

x_𝜖(t₂) − x_𝜖(t₁) = y_𝜖(𝜖t₂) − y_𝜖(𝜖t₁) = 𝜖 ∫_{𝜖t₁}^{𝜖t₂} DF(y_𝜖(𝜎), s∕𝜖) = 𝜖 ∫_{𝜖t₁}^{𝜖t₂} DF(x_𝜖(𝜎∕𝜖), s∕𝜖),

for every t₁, t₂ ∈ [0, L∕𝜖]. Then, taking 𝜑(𝜎) = 𝜎∕𝜖 and applying Theorem 2.18, we obtain

∫_{𝜖t₁}^{𝜖t₂} DF(x_𝜖(𝜎∕𝜖), s∕𝜖) = ∫_{𝜑(𝜖t₁)}^{𝜑(𝜖t₂)} DF(x_𝜖(𝜏), t) = ∫_{t₁}^{t₂} DF(x_𝜖(𝜏), t),

for any t₁, t₂ ∈ [0, L∕𝜖], and y_𝜖(0) = x_𝜖(0) = y(0). Therefore, the function x_𝜖∶[0, L∕𝜖] → B is a solution of the generalized ODE (10.18) on [0, L∕𝜖]. Similarly, it can be shown that the function z_𝜖∶[0, L∕𝜖] → B given by z_𝜖(t) = y(𝜖t) is a solution of the autonomous ODE (10.19) on [0, L∕𝜖]. Thus, (10.21) yields

‖x_𝜖(t) − z_𝜖(t)‖ = ‖y_𝜖(𝜖t) − y(𝜖t)‖ < 𝜇

for every t ∈ [0, L∕𝜖], concluding the proof. ◽

As an immediate consequence of Theorem 10.5, we have the following result.

Corollary 10.6: Consider Ω = B × [0, ∞) and B = {x ∈ X∶ ‖x‖ < c}, with c > 0, and assume that F ∈ ℱ(Ω, h), where h∶[0, ∞) → ℝ is a left-continuous and nondecreasing function, and F∶B × [0, ∞) → X is such that F(y, 0) = 0 for every y ∈ B. Assume, further, that there is a constant C such that, for every 𝛼 ⩾ 0,

lim sup_{r→∞} [h(r + 𝛼) − h(𝛼)]∕r ⩽ C (10.22)

and, for every x ∈ B,

lim_{r→∞} F(x, r)∕r = F₀(x).

Let y∶[0, ∞) → X be a uniquely determined solution of the autonomous ODE

ẏ = F₀(y), y(0) = ỹ, (10.23)

which belongs to B together with its 𝜌-neighborhood, with 𝜌 > 0, that is, there exists 𝜌 > 0 such that

{x̃ ∈ X∶ ‖x̃ − y(t)‖ < 𝜌, for every t ∈ [0, ∞)} ⊂ B.

Then, for every 𝜇 > 0 and L > 0, there exists 𝜖₀ > 0 such that, for all 𝜖 ∈ (0, 𝜖₀),

‖x_𝜖(t) − z_𝜖(t)‖ < 𝜇, for every t ∈ [0, L],

where x_𝜖 is a solution of the generalized ODE

dx∕d𝜏 = D[𝜖F(x, t∕𝜖)], x_𝜖(0) = y(0), (10.24)

on [0, L] and z_𝜖 is a solution on [0, L] of the autonomous ODE

ż = F₀(z), z_𝜖(0) = y(0). (10.25)

The next result can be found in [83], but the version presented here is more general, since it holds in a more general setting of generalized ODEs. This type of result can be useful when we want to apply the results from generalized ODEs to measure FDEs, using the correspondences presented in Chapter 4.

Theorem 10.7: Consider a function G∶Ω → X such that G ∈ ℱ(Ω, h), where Ω = B × [0, ∞), B = {x ∈ X∶ ‖x‖ < c}, with c > 0, and h∶[0, ∞) → ℝ is a left-continuous and nondecreasing function. Suppose, for every 𝛼 ⩾ 0, we have

lim sup_{𝜖→0⁺} [h(t∕𝜖 + 𝛼) − h(𝛼)]∕(t∕𝜖) ⩽ C, where C > 0 is a constant, (10.26)

and, for every x ∈ B, we have

lim_{𝜖→0⁺} [G(x, t∕𝜖)(𝜗∕𝜖)]∕(t∕𝜖) = G₀(x)(𝜗), for all 𝜗 ∈ [0, ∞). (10.27)

Suppose, in addition, for each y ∈ B, the function G(y, t)∶[0, ∞) → X, given by 𝜗 ↦ G(y, t)(𝜗), is such that G(y, 0)(𝜗) = 0 for every 𝜗 ⩾ 0 and y ∈ B. Let y∶[0, ∞) → B be the unique solution of the autonomous ODE

ẏ = G₀(y), y(0) = ỹ, (10.28)

and assume that there exists 𝜌 > 0 such that

{x̃ ∈ X∶ ‖x̃ − y(t)‖ < 𝜌, for every t ∈ [0, ∞)} ⊂ B.

Then, for every 𝜇 > 0 and every L > 0, there exists 𝜖₀ > 0 such that, for 𝜖 ∈ (0, 𝜖₀),

‖x_𝜖(t) − z_𝜖(t)‖ < 𝜇, for every t ∈ [0, L∕𝜖],

where x_𝜖 is a solution of the generalized ODE

dx∕d𝜏 = 𝜖D[G(x, t)] (10.29)

on [0, L∕𝜖] such that x_𝜖(0) = y(0), and z_𝜖 is a solution on [0, L∕𝜖] of the autonomous ODE

ż = 𝜖G₀(z), z_𝜖(0) = y(0). (10.30)

Proof. For y ∈ B, t ∈ [0, ∞), and 𝜖 > 0, define the functions

h_𝜖(t) = 𝜖h(t∕𝜖) and H_𝜖(y, t)(𝜗) = 0 for 𝜗 ∈ [−r, 0], H_𝜖(y, t)(𝜗) = 𝜖G(y, t∕𝜖)(𝜗∕𝜖) for 𝜗 ∈ [0, ∞).

It is easy to see that h_𝜖 is nondecreasing and left-continuous on [0, ∞). Moreover, since G ∈ ℱ(Ω, h), we have, for every x, y ∈ B, t₁, t₂ ∈ [0, ∞), and 𝜗 ∈ [0, ∞),

‖H_𝜖(y, t₂)(𝜗) − H_𝜖(y, t₁)(𝜗)‖ = ‖𝜖G(y, t₂∕𝜖)(𝜗∕𝜖) − 𝜖G(y, t₁∕𝜖)(𝜗∕𝜖)‖ ⩽ 𝜖 |h(t₂∕𝜖) − h(t₁∕𝜖)| = |h_𝜖(t₂) − h_𝜖(t₁)|

and, also,

‖H_𝜖(x, t₂)(𝜗) − H_𝜖(x, t₁)(𝜗) − H_𝜖(y, t₂)(𝜗) + H_𝜖(y, t₁)(𝜗)‖
= ‖𝜖G(x, t₂∕𝜖)(𝜗∕𝜖) − 𝜖G(x, t₁∕𝜖)(𝜗∕𝜖) − 𝜖G(y, t₂∕𝜖)(𝜗∕𝜖) + 𝜖G(y, t₁∕𝜖)(𝜗∕𝜖)‖
⩽ ‖x − y‖ 𝜖 |h(t₂∕𝜖) − h(t₁∕𝜖)| = ‖x − y‖ |h_𝜖(t₂) − h_𝜖(t₁)|.


Thus,

‖H_𝜖(y, t₂) − H_𝜖(y, t₁)‖ ⩽ |h_𝜖(t₂) − h_𝜖(t₁)| and
‖H_𝜖(x, t₂) − H_𝜖(x, t₁) − H_𝜖(y, t₂) + H_𝜖(y, t₁)‖ ⩽ ‖x − y‖ |h_𝜖(t₂) − h_𝜖(t₁)|

and, therefore, H_𝜖 ∈ ℱ(Ω, h_𝜖) for 𝜖 > 0. Let y ∈ B and t ∈ [0, ∞). Then, for every 𝜗 ∈ [0, ∞), we obtain

lim_{𝜖→0⁺} [G(y, t∕𝜖)(𝜗∕𝜖)]∕(t∕𝜖) = lim_{𝜖→0⁺} [G(y, t∕𝜖)(𝜗∕𝜖) − G(y, 0)(𝜗∕𝜖)]∕(t∕𝜖) = G₀(y)(𝜗),

where we used (10.27) and the fact that G(y, 0)(𝜗) = 0 for every 𝜗 ∈ [0, ∞). Therefore, in virtue of (10.27), for every 𝜂 > 0, there exists a sufficiently small 𝜖 > 0 such that, for 𝜗 ∈ [0, ∞), we obtain

‖G₀(y)(𝜗)‖ ⩽ ‖G₀(y)(𝜗) − (𝜖∕t)[G(y, t∕𝜖)(𝜗∕𝜖) − G(y, 0)(𝜗∕𝜖)]‖ + (𝜖∕t) ‖G(y, t∕𝜖)(𝜗∕𝜖) − G(y, 0)(𝜗∕𝜖)‖ ⩽ 𝜂 + (𝜖∕t)[h(t∕𝜖) − h(0)] < 2𝜂 + C,

where we also used the fact that

‖G(y, t∕𝜖) − G(y, 0)‖ ⩽ h(t∕𝜖) − h(0),

once G ∈ ℱ(Ω, h). Thus, we have

‖G₀(y)‖ ⩽ C, for every y ∈ B,

because 𝜂 > 0 can be taken arbitrarily small. Similarly, if x, y ∈ B and t ∈ [0, ∞), then for every 𝜂 > 0, there exists a sufficiently small 𝜖 > 0 such that, for all 𝜗 ∈ [0, ∞),

‖G₀(x)(𝜗) − G₀(y)(𝜗)‖ < 𝜂 + (𝜖∕t) ‖G(y, t∕𝜖)(𝜗∕𝜖) − G(y, 0)(𝜗∕𝜖) − G(x, t∕𝜖)(𝜗∕𝜖) + G(x, 0)(𝜗∕𝜖)‖ ⩽ 𝜂 + ‖y − x‖ (𝜖∕t)[h(t∕𝜖) − h(0)] ⩽ 𝜂 + C ‖y − x‖.

Hence, since 𝜂 > 0 can be taken arbitrarily small, we obtain

‖G₀(x) − G₀(y)‖ ⩽ C ‖y − x‖, for all x, y ∈ B. (10.31)


Besides, for y ∈ B, t ∈ (0, ∞), and 𝜗 ∈ [0, ∞), we have

lim_{𝜖→0⁺} H_𝜖(y, t)(𝜗) = lim_{𝜖→0⁺} 𝜖G(y, t∕𝜖)(𝜗∕𝜖) = lim_{𝜖→0⁺} t [G(y, t∕𝜖)(𝜗∕𝜖)]∕(t∕𝜖) = tG₀(y)(𝜗)

and, for t = 0 and 𝜗 ∈ [0, ∞), we get

lim_{𝜖→0⁺} H_𝜖(y, 0)(𝜗) = lim_{𝜖→0⁺} 𝜖G(y, 0)(𝜗∕𝜖) = 0,

where we recall that G(y, 0)(𝜗) = 0 for every 𝜗 ⩾ 0 and y ∈ B. Thus, defining H₀(y, t) = tG₀(y), for y ∈ B and t ⩾ 0, we obtain

lim_{𝜖→0⁺} H_𝜖(y, t) = H₀(y, t).

By (10.31), H₀ ∈ ℱ(Ω, h₀), where h₀(t) = Ct, t ⩾ 0. Furthermore, from the definition of h_𝜖, we have, for 0 ⩽ t₁ < t₂ < ∞,

h_𝜖(t₂) − h_𝜖(t₁) = 𝜖 [h(t₂∕𝜖) − h(t₁∕𝜖)] = (t₂ − t₁) · [𝜖∕(t₂ − t₁)] · [h((t₂ − t₁)∕𝜖 + t₁∕𝜖) − h(t₁∕𝜖)]

and, by condition (10.26), we have

lim sup_{𝜖→0⁺} [h_𝜖(t₂) − h_𝜖(t₁)] ⩽ C(t₂ − t₁) = h₀(t₂) − h₀(t₁),

since lim_{𝜖→0⁺} (t₂ − t₁)∕𝜖 = ∞. Since y is a solution of the autonomous ODE (10.28), using the properties of the Kurzweil integral, we obtain

y(s₂) − y(s₁) = ∫_{s₁}^{s₂} G₀(y(𝜏))d𝜏 = ∫_{s₁}^{s₂} D[G₀(y(𝜏))t] = ∫_{s₁}^{s₂} DH₀(y(𝜏), t),

for every s₁, s₂ ∈ [0, ∞). Thus, y is a solution of the generalized ODE

dy∕d𝜏 = DH₀(y, t)

such that y(0) = ỹ on [0, ∞) and, by the hypotheses and Theorem 5.1, this solution is uniquely determined. Thus, all hypotheses of Theorem 7.8 are fulfilled. Therefore, by Theorem 7.8, for every 𝜇 > 0 and every L > 0, there is an 𝜖₀ > 0 such that, for 𝜖 ∈ (0, 𝜖₀), there exists a solution y_𝜖 on the interval [0, L] of the generalized ODE

dx∕d𝜏 = DH_𝜖(x, t), (10.32)

satisfying y_𝜖(0) = y(0) and

‖y_𝜖(s) − y(s)‖ ⩽ 𝜇, for every s ∈ [0, L] ⊂ [0, ∞),

where y is the solution of the averaged equation (10.28). Then, proceeding as in the previous theorem, we obtain the desired result. ◽

A direct consequence of Theorem 10.7 follows next.

Corollary 10.8: Consider a function G∶Ω → X such that G ∈ ℱ(Ω, h), where Ω = B × [0, ∞), B = {x ∈ X∶ ‖x‖ < c}, with c > 0, and h∶[0, ∞) → ℝ is a left-continuous and nondecreasing function. Assume that, for every 𝛼 ⩾ 0,

lim sup_{𝜖→0⁺} [h(t∕𝜖 + 𝛼) − h(𝛼)]∕(t∕𝜖) ⩽ C, where C > 0 is a constant, (10.33)

and, for every x ∈ B, we have

lim_{𝜖→0⁺} [G(x, t∕𝜖)(𝜗∕𝜖)]∕(t∕𝜖) = G₀(x)(𝜗), for all 𝜗 ∈ [0, ∞). (10.34)

Suppose, in addition, for each y ∈ B, the function G(y, t)∶[0, ∞) → X, given by 𝜗 ↦ G(y, t)(𝜗), is such that G(y, 0)(𝜗) = 0 for every 𝜗 ⩾ 0 and y ∈ B. Let y∶[0, ∞) → B be the unique solution of the autonomous ODE

ẏ = G₀(y)(𝜗), y(0) = ỹ, (10.35)

and assume that there exists 𝜌 > 0 such that

{x̃ ∈ X∶ ‖x̃ − y(t)‖ < 𝜌, for every t ∈ [0, ∞)} ⊂ B.

Then, for every 𝜇 > 0 and every L > 0, there exists 𝜖₀ > 0 such that, for 𝜖 ∈ (0, 𝜖₀),

‖x_𝜖(t) − z_𝜖(t)‖ < 𝜇, for every t ∈ [0, L],

where x_𝜖∶[0, L] → X is a solution of the generalized ODE

dx∕d𝜏 = 𝜖D[G(x, t∕𝜖)(𝜗∕𝜖)], x_𝜖(0) = y(0), (10.36)

on [0, L] and z_𝜖∶[0, L] → X is a solution of the autonomous ODE

ż = G₀(z)(𝜗), z_𝜖(0) = y(0), (10.37)

on [0, L].

We end this chapter by calling the reader's attention to the fact that, in Subsection 3.4.2, there are nonperiodic averaging principles specified for functional MDEs.


11 Boundedness of Solutions

Suzete M. Afonso¹, Fernanda Andrade da Silva², Everaldo M. Bonotto³, Márcia Federson⁴, Rogelio Grau⁵, Jaqueline G. Mesquita⁶, and Eduard Toon⁷

¹ Departamento de Matemática, Instituto de Geociências e Ciências Exatas, Universidade Estadual Paulista "Júlio de Mesquita Filho" (UNESP), Rio Claro, SP, Brazil
² Departamento de Matemática, Instituto de Ciências Matemáticas e de Computação (ICMC), Universidade de São Paulo, São Carlos, SP, Brazil
³ Departamento de Matemática Aplicada e Estatística, Instituto de Ciências Matemáticas e de Computação (ICMC), Universidade de São Paulo, São Carlos, SP, Brazil
⁴ Departamento de Matemática, Instituto de Ciências Matemáticas e de Computação (ICMC), Universidade de São Paulo, São Carlos, SP, Brazil
⁵ Departamento de Matemáticas y Estadística, División de Ciencias Básicas, Universidad del Norte, Barranquilla, Colombia
⁶ Departamento de Matemática, Instituto de Ciências Exatas, Universidade de Brasília, Brasília, DF, Brazil
⁷ Departamento de Matemática, Instituto de Ciências Exatas, Universidade Federal de Juiz de Fora, Juiz de Fora, MG, Brazil

Generalized Ordinary Differential Equations in Abstract Spaces and Applications, First Edition. Edited by Everaldo M. Bonotto, Márcia Federson, and Jaqueline G. Mesquita. © 2021 John Wiley & Sons, Inc. Published 2021 by John Wiley & Sons, Inc.

Concepts of boundedness of solutions in the setting of generalized ODEs were introduced in [2] and, since then, the theory has been explored occasionally; we can mention, for instance, [79]. The concepts we bring up here were inspired by the definitions of boundedness of solutions in the framework of impulsive functional differential equations (we write simply impulsive FDEs) explored by X. Fu and L. Zhang in [98], Z. Luo and J. Shen in [169], and I. Stamova in [223]. In the paper [223], I. Stamova proved several criteria, via Lyapunov's Direct Method, for the boundedness of solutions of a class of FDEs undergoing variable impulse perturbations. Boundedness results for nonautonomous generalized ODEs and the correspondence between the former and FDEs with variable impulses allowed the authors of [2] to obtain similar criteria under weaker conditions. In [79], the authors introduced new concepts of boundedness of solutions in the setting of measure differential equations (we write MDEs) and, motivated by the work [2], they provided results on boundedness of solutions for MDEs using the correspondence between the solutions of a class of these equations and the solutions of a class of generalized ODEs. By virtue of this correspondence, the results obtained in [79] are more general than those found in the literature; the reader may want to consult [2, 33, 69], for instance. The authors of [79] also extended their results to dynamic equations on time scales, using the fact that the latter can be regarded as MDEs (see [219]).

Throughout this chapter, we consider that X is a Banach space with norm ‖⋅‖, t₀ ⩾ 0, and Ω = X × [t₀, ∞). Let F∶Ω → X be a function defined for every (x, t) ∈ Ω and taking values in the Banach space X. We consider F as an element of the class ℱ(Ω, h) (see Definition 4.3), where h∶[t₀, ∞) → ℝ is a left-continuous and nondecreasing function on (t₀, ∞), and the nonautonomous generalized ODE

dx∕d𝜏 = DF(x, t) (11.1)

subject to the initial condition

x(s₀) = x₀, (11.2)

where (x₀, s₀) ∈ Ω. Anchored by Corollary 5.16, we may assume that, for every (x₀, s₀) ∈ Ω, there exists a unique global forward solution x∶[s₀, ∞) → X of the initial value problem (11.1)–(11.2). Then, for every (x₀, s₀) ∈ Ω, we denote by x(⋅, s₀, x₀) the unique (global forward) solution of the generalized ODE (11.1) satisfying x(s₀) = x₀.

This chapter is organized as follows. In Section 11.1, we recall the concepts of uniform boundedness, quasiuniform boundedness, and uniform ultimate boundedness in the scenery of generalized ODEs. Moreover, criteria of uniform boundedness and uniform ultimate boundedness for the generalized ODE (11.1) are also included. In Section 11.2, by using the results established in Section 11.1 and the correspondence between generalized ODEs and MDEs, we exhibit results on boundedness for MDEs. We conclude this chapter by showing the extension of one of these results to a certain impulsive differential equation (IDE) in Subsection 11.2.1.

11.1 Bounded Solutions and Lyapunov Functionals

In this section, we present some results concerning the boundedness of solutions of the generalized ODE (11.1) using Lyapunov functionals. For the reader's convenience, we first recall the concept of Lyapunov functional with respect to the generalized ODE (11.1) presented in Chapter 8.

A function V∶[t₀, ∞) × X → ℝ is said to be a Lyapunov functional with respect to the generalized ODE (11.1) whenever the following conditions are fulfilled:

(LF1) V(⋅, x)∶[t₀, ∞) → ℝ is left-continuous on (t₀, ∞) for every x ∈ X;
(LF2) there exists a continuous strictly increasing function b∶ℝ⁺ → ℝ⁺ satisfying b(0) = 0 such that, for all t ∈ [t₀, ∞) and x ∈ X, we have V(t, x) ⩾ b(‖x‖);
(LF3) for every solution x∶[𝛾, 𝑣) → X of the generalized ODE (11.1), with [𝛾, 𝑣) ⊆ [t₀, ∞), we have

D⁺V(t, x(t)) = lim sup_{𝜂→0⁺} [V(t + 𝜂, x(t + 𝜂)) − V(t, x(t))]∕𝜂 ⩽ 0, t ∈ [𝛾, 𝑣),

that is, the right derivative of V along every solution of the generalized ODE (11.1) is nonpositive.

The concepts of uniform boundedness for generalized ODEs introduced in [2] are presented in the sequel. Recall that by x(⋅, s₀, x₀), we mean the solution of the initial value problem (11.1)–(11.2).

Definition 11.1: The generalized ODE (11.1) is said to be

(i) uniformly bounded, if for every 𝛼 > 0, there exists M = M(𝛼) > 0 such that, for all s₀ ∈ [t₀, ∞) and all x₀ ∈ X with ‖x₀‖ < 𝛼, we have

‖x(s, s₀, x₀)‖ < M, for all s ⩾ s₀;

(ii) quasiuniformly ultimately bounded, if there exists B > 0 such that, for every 𝛼 > 0, there exists T = T(𝛼) > 0 such that, for all s₀ ∈ [t₀, ∞) and all x₀ ∈ X with ‖x₀‖ < 𝛼, we have

‖x(s, s₀, x₀)‖ < B, for all s ⩾ s₀ + T;

(iii) uniformly ultimately bounded, if it is uniformly bounded and quasiuniformly ultimately bounded.

The next auxiliary result was proved in [79, Lemma 3.4], and it is essential to derive the main results of this section.

Lemma 11.2: Consider the generalized ODE (11.1), with F ∈ ℱ(Ω, h), and assume that the functional V∶[t₀, ∞) × X → ℝ satisfies the following conditions:

(V1) for each function z∶[𝛾, 𝑣] → X which is left-continuous on (𝛾, 𝑣], the function [𝛾, 𝑣] ∋ t ↦ V(t, z(t)) ∈ ℝ⁺ is also left-continuous on (𝛾, 𝑣];
(V2) for all functions x, y∶[𝛾, 𝑣] → X, [𝛾, 𝑣] ⊂ [t₀, ∞), of bounded variation in [𝛾, 𝑣], the condition

|V(t, x(t)) − V(t, y(t)) − V(s, x(s)) + V(s, y(s))| ⩽ (h₁(t) − h₁(s)) sup_{𝜉∈[𝛾,𝑣]} ‖x(𝜉) − y(𝜉)‖

holds for all 𝛾 ⩽ s < t ⩽ 𝑣, where h₁∶[t₀, ∞) → ℝ is a nondecreasing and left-continuous function;
(V3) there exists a function Φ∶X → ℝ such that, for every solution z∶[s₀, ∞) → X of (11.1), with s₀ ⩾ t₀, and for all s₀ ⩽ t < s < ∞, we have

V(s, z(s)) − V(t, z(t)) ⩽ (s − t)Φ(z(t)).

If x∶[𝛾, 𝑣] → X, t₀ ⩽ 𝛾 < 𝑣 < ∞, is left-continuous on (𝛾, 𝑣] and of bounded variation in [𝛾, 𝑣], then

V(𝑣, x(𝑣)) − V(𝛾, x(𝛾)) ⩽ (h₁(𝑣) − h₁(𝛾)) sup_{s∈[𝛾,𝑣]} ‖x(s) − x(𝛾) − ∫_{𝛾}^{s} DF(x(𝜏), t)‖ + (𝑣 − 𝛾)K, (11.3)

where K = sup{Φ(x(t))∶ t ∈ [𝛾, 𝑣]}.
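Before turning to the proof of Lemma 11.2, the notions of Definition 11.1 can be visualized on a toy problem. The sketch below is an illustration only: it uses the classical scalar ODE x′ = −x + sin t in place of a generalized ODE, and the candidate bounds M(𝛼) = 𝛼 + 2, ultimate bound B = 3, and waiting time T(𝛼) = log(𝛼 + 1) are ad hoc choices suggested by the exponential decay of this particular equation.

```python
import math

def solve(x0, s0, t, h=0.001):
    # Explicit Euler for x' = -x + sin(s), from (s0, x0) up to time t.
    x, s = x0, s0
    while s < t - 1e-12:
        step = min(h, t - s)
        x += step * (-x + math.sin(s))
        s += step
    return x

alpha = 5.0
M = alpha + 2.0          # candidate uniform bound M(alpha)
B = 3.0                  # candidate ultimate bound
T = math.log(alpha + 1)  # candidate waiting time T(alpha)

for s0 in (0.0, 1.7, 4.0):                       # several starting times
    for x0 in (-alpha + 0.01, 0.0, alpha - 0.01):  # |x0| < alpha
        worst_all = max(abs(solve(x0, s0, s0 + u)) for u in (0.5, 1, 2, 5, 10))
        worst_late = max(abs(solve(x0, s0, s0 + T + u)) for u in (0, 1, 5, 10))
        print(s0, x0, worst_all <= M, worst_late <= B)
```

At the sampled times, every trajectory with |x₀| < 𝛼 stays below M(𝛼), and below B once t ⩾ s₀ + T(𝛼), independently of s₀, which is the pattern of uniform and quasiuniform ultimate boundedness.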

Proof. Let [𝛾, 𝑣] ⊂ [t₀, ∞) and x∶[𝛾, 𝑣] → X be a function which is left-continuous on (𝛾, 𝑣] and of bounded variation in [𝛾, 𝑣]. Corollary 4.8 guarantees the existence of the Kurzweil integral ∫_{𝛾}^{𝑣} DF(x(𝜏), t). Set

K = sup{Φ(x(t))∶ t ∈ [𝛾, 𝑣]}.

If K = ∞, then it is clear that inequality (11.3) is satisfied and the result follows. Therefore, we assume that K < ∞.

Given 𝜎 ∈ [𝛾, 𝑣], the existence and uniqueness of a global forward solution x̄∶[𝜎, ∞) → X of the generalized ODE (11.1) on [𝜎, ∞) satisfying the initial condition x̄(𝜎) = x(𝜎) is ensured by Corollary 5.16, since (x(𝜎), 𝜎) ∈ Ω = X × [t₀, ∞). For every 𝜂₁ > 0, x̄|_{[𝜎,𝜎+𝜂₁]} is also a solution of the generalized ODE (11.1). Thus, Corollary 4.8 and Theorem 2.5 imply that the Kurzweil integral ∫_{𝜎}^{𝜎+𝜂₁} DF(x̄(𝜏), t) exists. Let 𝜂₂ > 0 be such that 𝜂₂ ⩽ 𝜂₁ and 𝜎 + 𝜂₂ ⩽ 𝑣. Then, the Kurzweil integrals

∫_{𝜎}^{𝜎+𝜂₂} DF(x̄(𝜏), t) and ∫_{𝜎}^{𝜎+𝜂₂} DF(x(𝜏), t)

also exist, by the integrability on subintervals of the Kurzweil integral (see Theorem 2.5). Consequently, the integral

∫_{𝜎}^{𝜎+𝜂₂} D[F(x̄(𝜏), t) − F(x(𝜏), t)] (11.4)

exists as well. Therefore, for 𝜖 > 0, there is a gauge 𝛿 of the interval [𝜎, 𝜎 + 𝜂₂] corresponding to 𝜖 in the definition of the integral (11.4). Without loss of generality, we can consider 𝜂₂ < 𝛿(𝜎).


Let Φ∶X → ℝ be the function from assumption (V3). Then,

V(s, x̄(s)) − V(t, x̄(t)) ⩽ (s − t)Φ(x̄(t)),

for every 𝜎 ⩽ t < s < ∞ and, consequently, for every 0 < 𝜂 < 𝜂₂, we obtain

V(𝜎 + 𝜂, x̄(𝜎 + 𝜂)) − V(𝜎, x̄(𝜎)) ⩽ 𝜂Φ(x̄(𝜎)). (11.5)

Now, observe that Corollary 2.8 provides the following relations:

‖F(x̄(𝜎), s) − F(x̄(𝜎), 𝜎) − ∫_{𝜎}^{s} DF(x̄(𝜏), t)‖ < 𝜂𝜖 ∕ (2[h₁(𝜎 + 𝜂) − h₁(𝜎) + 1]) (11.6)

and

‖F(x(𝜎), s) − F(x(𝜎), 𝜎) − ∫_{𝜎}^{s} DF(x(𝜏), t)‖ < 𝜂𝜖 ∕ (2[h₁(𝜎 + 𝜂) − h₁(𝜎) + 1]), (11.7)

for every s ∈ [𝜎, 𝜎 + 𝜂]. Note also that

sup_{s∈[𝜎,𝜎+𝜂]} ‖∫_{𝜎}^{s} D[F(x̄(𝜏), t) − F(x(𝜏), t)]‖ − sup_{s∈[𝜎,𝜎+𝜂]} ‖F(x̄(𝜎), s) − F(x̄(𝜎), 𝜎) − F(x(𝜎), s) + F(x(𝜎), 𝜎)‖
⩽ sup_{s∈[𝜎,𝜎+𝜂]} ‖∫_{𝜎}^{s} D[F(x̄(𝜏), t) − F(x(𝜏), t)] − [F(x̄(𝜎), s) − F(x̄(𝜎), 𝜎) − F(x(𝜎), s) + F(x(𝜎), 𝜎)]‖
⩽ sup_{s∈[𝜎,𝜎+𝜂]} ‖F(x̄(𝜎), s) − F(x̄(𝜎), 𝜎) − ∫_{𝜎}^{s} DF(x̄(𝜏), t)‖ + sup_{s∈[𝜎,𝜎+𝜂]} ‖F(x(𝜎), s) − F(x(𝜎), 𝜎) − ∫_{𝜎}^{s} DF(x(𝜏), t)‖. (11.8)

In addition, since F ∈ ℱ(Ω, h) and x̄(𝜎) = x(𝜎), we have

sup_{s∈[𝜎,𝜎+𝜂]} ‖F(x̄(𝜎), s) − F(x̄(𝜎), 𝜎) − F(x(𝜎), s) + F(x(𝜎), 𝜎)‖ ⩽ ‖x̄(𝜎) − x(𝜎)‖ sup_{s∈[𝜎,𝜎+𝜂]} |h(s) − h(𝜎)| = 0, (11.9)

where the last inequality follows from condition (4.6) of Definition 4.3. Therefore, (11.6), (11.7), (11.8), and (11.9) yield

sup_{s∈[𝜎,𝜎+𝜂]} ‖∫_{𝜎}^{s} D[F(x̄(𝜏), t) − F(x(𝜏), t)]‖ ⩽ 𝜂𝜖 ∕ [h₁(𝜎 + 𝜂) − h₁(𝜎) + 1]. (11.10)


Note that x̄ is of bounded variation in [𝜎, 𝜎 + 𝜂]. This fact is a consequence of Lemma 4.9, since F ∈ ℱ(Ω, h) and the function h is nondecreasing. Thus, by assumption (V2) and by the relation x̄(𝜎) = x(𝜎), we obtain

V(𝜎 + 𝜂, x(𝜎 + 𝜂)) − V(𝜎 + 𝜂, x̄(𝜎 + 𝜂))
= V(𝜎 + 𝜂, x(𝜎 + 𝜂)) − V(𝜎 + 𝜂, x̄(𝜎 + 𝜂)) − V(𝜎, x(𝜎)) + V(𝜎, x̄(𝜎))
⩽ |V(𝜎 + 𝜂, x(𝜎 + 𝜂)) − V(𝜎 + 𝜂, x̄(𝜎 + 𝜂)) − V(𝜎, x(𝜎)) + V(𝜎, x̄(𝜎))|
⩽ (h₁(𝜎 + 𝜂) − h₁(𝜎)) sup_{s∈[𝜎,𝜎+𝜂]} ‖x(s) − x̄(s)‖
= (h₁(𝜎 + 𝜂) − h₁(𝜎)) sup_{s∈[𝜎,𝜎+𝜂]} ‖x(s) − x(𝜎) + x̄(𝜎) − x̄(s)‖
= (h₁(𝜎 + 𝜂) − h₁(𝜎)) sup_{s∈[𝜎,𝜎+𝜂]} ‖x(s) − x(𝜎) − ∫_{𝜎}^{s} DF(x̄(𝜏), t)‖,

which means that

V(𝜎 + 𝜂, x(𝜎 + 𝜂)) − V(𝜎 + 𝜂, x̄(𝜎 + 𝜂)) ⩽ (h₁(𝜎 + 𝜂) − h₁(𝜎)) sup_{s∈[𝜎,𝜎+𝜂]} ‖x(s) − x(𝜎) − ∫_{𝜎}^{s} DF(x̄(𝜏), t)‖. (11.11)

Then, by (11.5), (11.10), and (11.11), we get

V(𝜎 + 𝜂, x(𝜎 + 𝜂)) − V(𝜎, x(𝜎))
= V(𝜎 + 𝜂, x(𝜎 + 𝜂)) − V(𝜎 + 𝜂, x̄(𝜎 + 𝜂)) + V(𝜎 + 𝜂, x̄(𝜎 + 𝜂)) − V(𝜎, x̄(𝜎))
⩽ (h₁(𝜎 + 𝜂) − h₁(𝜎)) sup_{s∈[𝜎,𝜎+𝜂]} ‖x(s) − x(𝜎) − ∫_{𝜎}^{s} DF(x̄(𝜏), t)‖ + 𝜂Φ(x(𝜎))
⩽ (h₁(𝜎 + 𝜂) − h₁(𝜎)) sup_{s∈[𝜎,𝜎+𝜂]} ‖x(s) − x(𝜎) − ∫_{𝜎}^{s} DF(x̄(𝜏), t)‖ + 𝜂K
⩽ (h₁(𝜎 + 𝜂) − h₁(𝜎)) sup_{s∈[𝜎,𝜎+𝜂]} ‖x(s) − x(𝜎) − ∫_{𝜎}^{s} DF(x(𝜏), t)‖ + (h₁(𝜎 + 𝜂) − h₁(𝜎)) sup_{s∈[𝜎,𝜎+𝜂]} ‖∫_{𝜎}^{s} D[F(x̄(𝜏), t) − F(x(𝜏), t)]‖ + 𝜂K
⩽ (h₁(𝜎 + 𝜂) − h₁(𝜎)) sup_{s∈[𝜎,𝜎+𝜂]} ‖x(s) − x(𝜎) − ∫_{𝜎}^{s} DF(x(𝜏), t)‖ + 𝜂𝜖 + 𝜂K. (11.12)

Let us define Γ∶[𝛾, 𝑣] → X by

Γ(s) = x(s) − ∫_{𝛾}^{s} DF(x(𝜏), t), for s ∈ [𝛾, 𝑣].

The Kurzweil integral ∫_{𝛾}^{𝑣} DF(x(𝜏), t) exists and the function s ↦ ∫_{𝛾}^{s} DF(x(𝜏), t) is of bounded variation in [𝛾, 𝑣] by Corollary 4.8, since x is a function of bounded

11.1 Bounded Solutions and Lyapunov Functionals

variation in [𝛾, 𝑣] and (x(s), s) ∈ Ω, for every s ∈ [𝛾, 𝑣]. Consequently, for each s ∈ [𝛾, 𝑣], the existence of the Kurzweil integral $\int_{\gamma}^{s} DF(x(\tau),t)$ is guaranteed by Theorem 2.5. Therefore, the function Γ is well defined and is of bounded variation in [𝛾, 𝑣]. Furthermore, Γ is left-continuous on (𝛾, 𝑣], since x and h are left-continuous on (𝛾, 𝑣] (see Lemma 4.5). Note also that, for s ∈ [𝛾, 𝑣], we have

$$\Gamma(s) - \Gamma(\sigma) = x(s) - x(\sigma) - \int_{\gamma}^{s} DF(x(\tau),t) + \int_{\gamma}^{\sigma} DF(x(\tau),t) = x(s) - x(\sigma) - \int_{\sigma}^{s} DF(x(\tau),t). \tag{11.13}$$

Now, if f∶[𝛾, 𝑣] → ℝ is the function given by

$$f(t) = \begin{cases} (h_1(t)-h_1(\sigma)) \displaystyle\sup_{s\in[\gamma,t]} \|\Gamma(s)-\Gamma(\sigma)\| + \epsilon t + Kt, & t \in [\gamma,\sigma], \\[2mm] (h_1(t)-h_1(\sigma)) \displaystyle\sup_{s\in[\sigma,t]} \|\Gamma(s)-\Gamma(\sigma)\| + \epsilon t + Kt, & t \in [\sigma,v], \end{cases}$$

then f is well defined, and it is left-continuous on (𝛾, 𝑣], since h₁ and Γ are left-continuous on (𝛾, 𝑣]. Moreover, the left-continuity of x∶[𝛾, 𝑣] → X together with assumption (V1) imply that the function [𝛾, 𝑣] ∋ t ↦ V(t, x(t)) ∈ ℝ₊ is left-continuous on (𝛾, 𝑣] as well. By (11.12) and (11.13), we have

$$V(\sigma+\eta, x(\sigma+\eta)) - V(\sigma, x(\sigma)) \leqslant (h_1(\sigma+\eta)-h_1(\sigma)) \sup_{s\in[\sigma,\sigma+\eta]} \|\Gamma(s)-\Gamma(\sigma)\| + \eta\epsilon + \eta K = f(\sigma+\eta) - f(\sigma).$$

Since the functions [𝛾, 𝑣] ∋ t ↦ V(t, x(t)) and [𝛾, 𝑣] ∋ t ↦ f(t) fulfill all hypotheses of Proposition 1.8, we derive

$$\begin{aligned}
V(v,x(v)) - V(\gamma,x(\gamma)) &\leqslant f(v) - f(\gamma) \\
&= (h_1(v)-h_1(\sigma)) \sup_{s\in[\sigma,v]} \|\Gamma(s)-\Gamma(\sigma)\| + \epsilon v + Kv - (h_1(\gamma)-h_1(\sigma)) \sup_{s\in[\gamma,\gamma]} \|\Gamma(s)-\Gamma(\sigma)\| - \epsilon\gamma - K\gamma \\
&\leqslant (h_1(v)-h_1(\sigma)) \sup_{s\in[\gamma,v]} \|\Gamma(s)-\Gamma(\sigma)\| + \epsilon v + Kv + (h_1(\sigma)-h_1(\gamma)) \sup_{s\in[\gamma,v]} \|\Gamma(s)-\Gamma(\sigma)\| - \epsilon\gamma - K\gamma \\
&= (h_1(v)-h_1(\gamma)) \sup_{s\in[\gamma,v]} \|\Gamma(s)-\Gamma(\sigma)\| + \epsilon(v-\gamma) + K(v-\gamma) \\
&= (h_1(v)-h_1(\gamma)) \sup_{s\in[\gamma,v]} \left\|x(s)-x(\sigma)-\int_{\sigma}^{s} DF(x(\tau),t)\right\| + K(v-\gamma) + \epsilon(v-\gamma),
\end{aligned}$$

whence we conclude that inequality (11.3) holds, since 𝜖 > 0 is arbitrary. ◽
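The final step above, where Proposition 1.8 upgrades the local estimate V(𝜎 + 𝜂, x(𝜎 + 𝜂)) − V(𝜎, x(𝜎)) ⩽ f(𝜎 + 𝜂) − f(𝜎) to the global bound V(𝑣, x(𝑣)) − V(𝛾, x(𝛾)) ⩽ f(𝑣) − f(𝛾), is at heart a telescoping comparison. A minimal numerical sketch of that mechanism (the grid and the sample functions below are hypothetical illustrations, not taken from the text):

```python
# Sketch of the comparison step: if every local increment of v is dominated
# by the corresponding increment of f, then the global increment is too.
# The grid, v, and f below are illustrative choices, not from the book.

def dominated_increment(ts, v, f):
    """Check v(t_{i+1}) - v(t_i) <= f(t_{i+1}) - f(t_i) on each grid cell."""
    return all(v[i + 1] - v[i] <= f[i + 1] - f[i] for i in range(len(ts) - 1))

ts = [i / 100 for i in range(101)]          # grid on [gamma, v] = [0, 1]
v = [max(0.0, 0.5 - t) for t in ts]         # a nonincreasing "Lyapunov" value
f = [0.1 * t for t in ts]                   # a nondecreasing majorant

if dominated_increment(ts, v, f):
    # Telescoping: v(1) - v(0) is a sum of local increments <= f(1) - f(0).
    assert v[-1] - v[0] <= f[-1] - f[0]
```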




The next theorem, established in [79, Theorem 3.5], provides sufficient conditions which guarantee that the generalized ODE (11.1) is uniformly bounded. This result does not require a Lipschitz condition on the Lyapunov functional, which makes it more general than [2, Theorem 4.3].

Theorem 11.3: Consider the generalized ODE (11.1), with F ∈ ℱ(Ω, h), and V∶[t₀, ∞) × X → ℝ a Lyapunov functional such that, for each function z∶[𝛾, 𝑣] ⊂ [t₀, ∞) → X which is left-continuous on (𝛾, 𝑣], the function [𝛾, 𝑣] ∋ t ↦ V(t, z(t)) is left-continuous on (𝛾, 𝑣]. Moreover, suppose V satisfies the following conditions:

(UB1) the function b∶ℝ₊ → ℝ₊ of condition (LF2) from the definition of Lyapunov functional is such that b(s) → ∞ as s → ∞;
(UB2) there exists a monotone increasing function p∶ℝ₊ → ℝ₊ such that p(0) = 0 and, for every pair (t, z) ∈ [t₀, ∞) × X,
$$V(t, z) \leqslant p(\|z\|); \tag{11.14}$$
(UB3) for every global forward solution z∶[s₀, ∞) → X, s₀ ⩾ t₀, of the generalized ODE (11.1), we have V(s, z(s)) − V(t, z(t)) ⩽ 0, for every s₀ ⩽ t < s < ∞.

Under these conditions, the generalized ODE (11.1) is uniformly bounded.

Proof. Let 𝛼 > 0. Since p(𝛼) > 0 and b(s) → ∞ as s → ∞, by assumption (UB1), there exists M = M(𝛼) > 0 such that p(𝛼) < b(s), for every s ⩾ M. In particular, for s = M, we have
$$p(\alpha) < b(M). \tag{11.15}$$

Given s₀ ∈ [t₀, ∞) and x₀ ∈ X, denote by x(⋅) = x(⋅, s₀, x₀)∶[s₀, ∞) → X the global forward solution of the generalized ODE (11.1) satisfying the initial condition x(s₀) = x₀, with ‖x₀‖ < 𝛼.

Claim 1. For all s ⩾ s₀, we have
$$V(s, x(s)) < b(M). \tag{11.16}$$
In fact, using assumption (UB3) together with conditions (11.14) and (11.15), we obtain, for every s ⩾ s₀,
$$V(s, x(s)) \leqslant V(s_0, x(s_0)) = V(s_0, x_0) \leqslant p(\|x_0\|) \leqslant p(\alpha) < b(M).$$

Claim 2. For every s ⩾ s₀, we have
$$\|x(s)\| < M. \tag{11.17}$$
Indeed, otherwise, there would be s̄ ∈ [s₀, ∞) such that ‖x(s̄)‖ ⩾ M. This and condition (LF2) from the definition of Lyapunov functional would imply
$$V(\bar{s}, x(\bar{s})) \geqslant b(\|x(\bar{s})\|) \geqslant b(M), \tag{11.18}$$
since b is an increasing function. Clearly, relation (11.18) contradicts (11.16), and this contradiction implies ‖x(s)‖ < M, for every s ⩾ s₀, which proves Claim 2 and completes the proof. ◽

The next result, which was also proved in [79, Theorem 3.6], gives sufficient conditions to ensure that the generalized ODE (11.1) is uniformly ultimately bounded.

Theorem 11.4: Consider the generalized ODE (11.1), with F ∈ ℱ(Ω, h), and V∶[t₀, ∞) × X → ℝ a Lyapunov functional. Assume that V satisfies assumptions (V1) and (V2) from Lemma 11.2, and also hypotheses (UB1) and (UB2) from Theorem 11.3. Moreover, assume that V satisfies the following condition:

(Ṽ3) there exists a continuous function Φ∶X → ℝ₊, with Φ(0) = 0 and Φ(x) > 0 whenever x ≠ 0, such that, for every global forward solution z∶[s₀, ∞) → X of (11.1), with s₀ ⩾ t₀, we have
$$V(s, z(s)) - V(t, z(t)) \leqslant (s - t)\,(-\Phi(z(t))), \quad s_0 \leqslant t < s < \infty.$$

Then, the generalized ODE (11.1) is uniformly ultimately bounded.

Proof. First of all, note that assumption (Ṽ3) implies
$$V(s, z(s)) - V(t, z(t)) \leqslant (s - t)\,(-\Phi(z(t))) \leqslant 0,$$
for every s₀ ⩽ t < s < ∞ and every global forward solution z∶[s₀, ∞) → X, with s₀ ∈ [t₀, ∞), of the generalized ODE (11.1). Thus, since all hypotheses of Theorem 11.3 are satisfied, the generalized ODE (11.1) is uniformly bounded. In this way, it remains to prove that the generalized ODE (11.1) is quasiuniformly ultimately bounded.

Given 𝛼 > 0, s₀ ∈ [t₀, ∞), and x₀ ∈ X, let x(⋅) = x(⋅, s₀, x₀)∶[s₀, ∞) → X be the solution of the generalized ODE (11.1) satisfying the initial condition x(s₀) = x₀, with ‖x₀‖ < 𝛼. Since the generalized ODE (11.1) is uniformly bounded, there exists (by Definition 11.1 (i)) a positive number M₁ = M₁(𝛼) such that
$$\|x(s, s_0, x_0)\| < M_1, \quad \text{for every } s \geqslant s_0.$$
In addition, we can affirm that

(Ã1) there exists M₂ = M₂(𝛼) > 0 such that p(𝛼) < b(M₂),

using the same argument as in (11.15) from the proof of Theorem 11.3.


Let [t, ∞) ⊂ [t₀, ∞) and define y∶[t, ∞) → X by y(s) = x(s, s₀, x₀) for every s ∈ [t, ∞). Since the generalized ODE (11.1) is uniformly bounded, if ‖y(t)‖ < 𝜇, where 𝜇 > 0, then there exists B = B(𝜇) > 0 such that
$$\|y(s)\| < B, \quad \text{for every } s \geqslant t. \tag{11.19}$$
Without loss of generality, we can take B ∈ (𝜇, ∞) (otherwise, we could replace B by some larger B′ ∈ (𝜇, ∞)). Consider 𝛼 and B as above and set
$$M = M(\alpha) = \max\{M_1(\alpha), M_2(\alpha)\} \quad \text{and} \quad \lambda = \min\{\alpha, \mu\}.$$
Since (Ã1) holds and b is increasing, we have
$$\|x(s, s_0, x_0)\| < M, \quad \text{for every } s \geqslant s_0, \qquad \text{and} \qquad p(\alpha) < b(M). \tag{11.20}$$
Define
$$N = \sup\{-\Phi(z) : \lambda \leqslant \|z\| < M\} < 0 \quad \text{and} \quad T(\alpha) = -\frac{2b(M)}{N} > 0.$$
If we show that ‖x(s, s₀, x₀)‖ < B for every s ⩾ s₀ + T(𝛼), then we complete the proof. But, before that, we state that

(Ã2) there exists t′ ∈ [s₀ + T(𝛼)∕2, s₀ + T(𝛼)] such that ‖x(t′)‖ < 𝜆.

In fact, assume that assertion (Ã2) is false, that is,
$$\|x(s)\| \geqslant \lambda, \quad \text{for every } s \in \left[s_0 + \tfrac{T(\alpha)}{2},\, s_0 + T(\alpha)\right]. \tag{11.21}$$
Let I_𝛼 = [s₀ + T(𝛼)∕2, s₀ + T(𝛼)]. Since the function x(⋅) = x(⋅, s₀, x₀) is a solution of the initial value problem (11.1)–(11.2), x(⋅, s₀, x₀)|_{I_𝛼} is left-continuous on (s₀ + T(𝛼)∕2, s₀ + T(𝛼)] and of bounded variation in I_𝛼. Therefore, applying Lemma 11.2, we obtain
$$\begin{aligned}
V(s_0+T(\alpha), x(s_0+T(\alpha))) &\leqslant V\!\left(s_0+\tfrac{T(\alpha)}{2},\, x\!\left(s_0+\tfrac{T(\alpha)}{2}\right)\right) \\
&\quad + \underbrace{\left(h_1(s_0+T(\alpha)) - h_1\!\left(s_0+\tfrac{T(\alpha)}{2}\right)\right) \sup_{s\in I_\alpha} \left\| x(s) - x\!\left(s_0+\tfrac{T(\alpha)}{2}\right) - \int_{s_0+T(\alpha)/2}^{s} DF(x(\tau, s_0, x_0), t) \right\|}_{=\,0} \\
&\quad + \frac{T(\alpha)}{2} \sup\{-\Phi(x(s)) : s \in I_\alpha\},
\end{aligned}$$
where the underbraced term vanishes because x is a solution of the generalized ODE (11.1),

and, consequently,
$$\begin{aligned}
V(s_0+T(\alpha), x(s_0+T(\alpha))) &\leqslant V\!\left(s_0+\tfrac{T(\alpha)}{2},\, x\!\left(s_0+\tfrac{T(\alpha)}{2}\right)\right) + \frac{T(\alpha)}{2}\sup\{-\Phi(x(s)) : s \in I_\alpha\} \\
&\leqslant V\!\left(s_0+\tfrac{T(\alpha)}{2},\, x\!\left(s_0+\tfrac{T(\alpha)}{2}\right)\right) + \frac{T(\alpha)}{2}\sup\{-\Phi(z) : \lambda \leqslant \|z\| < M\}.
\end{aligned} \tag{11.22}$$
The last inequality in (11.22) follows from the relations
$$\lambda \overset{(11.21)}{\leqslant} \|x(s)\| = \|x(s, s_0, x_0)\| \overset{(11.20)}{<} M,$$
which hold for every s ∈ I_𝛼. Moreover, using the same argument as in (11.17) from the proof of Theorem 11.3, we get
$$V\!\left(s_0+\tfrac{T(\alpha)}{2},\, x\!\left(s_0+\tfrac{T(\alpha)}{2}\right)\right) < b(M), \tag{11.23}$$
as (11.20) holds. Thus, using (11.22) and (11.23), we have
$$V(s_0+T(\alpha), x(s_0+T(\alpha))) < b(M) + \frac{T(\alpha)}{2}\sup\{-\Phi(z) : \lambda \leqslant \|z\| < M\} = b(M) + \frac{T(\alpha)}{2}N = b(M) - b(M) = 0,$$
since T(𝛼) = −2b(M)∕N, which yields
$$V(s_0+T(\alpha), x(s_0+T(\alpha))) < 0. \tag{11.24}$$
On the other hand, condition (LF2) from the definition of Lyapunov functional and assumption (11.21) imply
$$V(s_0+T(\alpha), x(s_0+T(\alpha))) \geqslant b(\|x(s_0+T(\alpha))\|) \geqslant b(\lambda) > 0,$$
which contradicts (11.24). From this, we infer that assertion (Ã2) holds. Thus,
$$\|x(s, s_0, x_0)\| < B \quad \text{for } s \geqslant t',$$
since (11.19) holds for t = t′. Furthermore,
$$\|x(s, s_0, x_0)\| < B \quad \text{for } s > s_0 + T(\alpha),$$
since t′ ∈ I_𝛼. Therefore, the generalized ODE (11.1) is quasiuniformly ultimately bounded, and the proof is complete. ◽
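To make the constants in this proof concrete, here is a small numerical sketch with the hypothetical choices Φ(z) = z², b(s) = s², 𝜆 = 1∕2, and M = 2 (none of which come from the text): N = sup{−Φ(z) : 𝜆 ⩽ ‖z‖ < M} is attained where Φ is smallest, i.e. at ‖z‖ = 𝜆, and the "waiting time" T(𝛼) = −2b(M)∕N is a finite positive number.

```python
# Numerical sketch of the constants N and T(alpha) from the proof of
# Theorem 11.4, for hypothetical data Phi(z) = z**2, b(s) = s**2,
# lambda = 0.5, M = 2.0 (illustrative choices, not from the book).

def phi(z):
    return z * z

def b(s):
    return s * s

lam, M = 0.5, 2.0
# N = sup { -Phi(z) : lam <= |z| < M } is attained at |z| = lam, where Phi
# is smallest on the annulus, so N = -Phi(lam) < 0.
zs = [lam + k * (M - lam) / 10000 for k in range(10000)]
N = max(-phi(z) for z in zs)
T = -2 * b(M) / N          # T(alpha) = -2 b(M) / N = 32 for these choices

assert N < 0
assert abs(N - (-phi(lam))) < 1e-3
assert T > 0
```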


11.2 An Application to MDEs

Using the boundedness results for generalized ODEs proved in the previous section (namely, Theorems 11.3 and 11.4), the authors of [79] obtained boundedness results for MDEs with functions taking values in ℝⁿ. In this section, we exhibit the same results for MDEs with functions taking values in an arbitrary Banach space, whose proofs are similar to the ones presented in [79].

Let (X, ‖⋅‖) be a Banach space. Given t₀ ⩾ 0 and a function f∶X × [t₀, ∞) → X, we consider the integral form of an MDE of type
$$x(t) = x(s_0) + \int_{s_0}^{t} f(x(s), s)\,\mathrm{d}g(s), \quad t \geqslant s_0, \tag{11.25}$$
where s₀ ⩾ t₀ and g∶[t₀, ∞) → ℝ. We assume that the following conditions are fulfilled:

(D1) the function g∶[t₀, ∞) → ℝ is nondecreasing and left-continuous on (t₀, ∞);
(D2) the Perron–Stieltjes integral $\int_{s_1}^{s_2} f(x(s), s)\,\mathrm{d}g(s)$ exists, for all x ∈ G([t₀, ∞), X) and all s₁, s₂ ∈ [t₀, ∞);
(D3) there exists a locally Perron–Stieltjes integrable function M∶[t₀, ∞) → ℝ with respect to g such that
$$\left\| \int_{s_1}^{s_2} f(x(s), s)\,\mathrm{d}g(s) \right\| \leqslant \int_{s_1}^{s_2} M(s)\,\mathrm{d}g(s)$$
for all x ∈ G([t₀, ∞), X) and all s₁, s₂ ∈ [t₀, ∞), with s₁ ⩽ s₂;
(D4) there exists a locally Perron–Stieltjes integrable function L∶[t₀, ∞) → ℝ with respect to g such that
$$\left\| \int_{s_1}^{s_2} [f(x(s), s) - f(z(s), s)]\,\mathrm{d}g(s) \right\| \leqslant \|x - z\|_{[t_0,\infty)} \int_{s_1}^{s_2} L(s)\,\mathrm{d}g(s)$$
for all x, z ∈ G₀([t₀, ∞), X) and all s₁, s₂ ∈ [t₀, ∞), with s₁ ⩽ s₂.

We remind the reader that the Banach space (G₀([t₀, ∞), X), ‖⋅‖_{[t₀,∞)}) was described in Chapter 1, as was the vector space G([t₀, ∞), X). Theorem 5.18 assures that x∶I → X is a solution of the MDE (11.25) on I ⊂ [t₀, ∞) if and only if x is a solution of the generalized ODE
$$\frac{\mathrm{d}x}{\mathrm{d}\tau} = DF(x, t) \tag{11.26}$$
on I, where the function F∶X × [t₀, ∞) → X is given by
$$F(x, t) = \int_{s_0}^{t} f(x, s)\,\mathrm{d}g(s) \quad \text{for all } (x, t) \in X \times [t_0, \infty). \tag{11.27}$$
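Condition (D1) makes the integrator g a nondecreasing, left-continuous function, so the Perron–Stieltjes integral in (11.25) can pick up jumps of g as impulse-like contributions: a jump of g at s contributes f(x(s), s)·Δ⁺g(s). A hedged numerical sketch with an illustrative g and a constant integrand (neither taken from the text):

```python
# Hedged sketch: a Riemann-Stieltjes sum for the integral of f(x(s), s) dg(s)
# with a left-continuous integrator g that jumps at s = 1. For a constant
# integrand the sum telescopes to f * (g(b) - g(a)), so the unit jump of g
# shows up as an impulse-like term. Illustrative data, not from the book.

def g(t):
    return t if t <= 1 else t + 1.0   # nondecreasing, left-continuous, jump 1 at t = 1

def f(x, s):
    return 2.0                        # constant integrand, for transparency

def stieltjes_sum(a, b_, n=20000):
    """Left-point Stieltjes sum: sum of f(., s_i) * (g(s_{i+1}) - g(s_i))."""
    h = (b_ - a) / n
    return sum(f(0.0, a + i * h) * (g(a + (i + 1) * h) - g(a + i * h))
               for i in range(n))

total = stieltjes_sum(0.0, 2.0)
# Continuous part 2 * (2 - 0) = 4 plus jump contribution 2 * 1 = 2.
assert abs(total - 6.0) < 1e-6
```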


Recall that x(⋅) = x(⋅, s₀, x₀) denotes the unique global forward solution x∶[s₀, ∞) → X of the MDE (11.25) with x(s₀) = x₀, which we are assuming to exist (see Corollary 5.27). The concepts of uniform boundedness for MDEs were introduced in [79, Definition 3.5] and are presented in the sequel.

Definition 11.5: We say that the MDE (11.25) is

(i) uniformly bounded, if for every 𝛼 > 0 there exists M = M(𝛼) > 0 such that, for every s₀ ∈ [t₀, ∞) and for all x₀ ∈ X with ‖x₀‖ < 𝛼, we have ‖x(s, s₀, x₀)‖ < M, for all s ⩾ s₀;
(ii) quasiuniformly ultimately bounded, if there exists B > 0 such that, for every 𝛼 > 0, there exists T = T(𝛼) > 0 such that, for all s₀ ∈ [t₀, ∞) and all x₀ ∈ X with ‖x₀‖ < 𝛼, we have ‖x(s, s₀, x₀)‖ < B, for all s ⩾ s₀ + T;
(iii) uniformly ultimately bounded, if it is uniformly bounded and quasiuniformly ultimately bounded.

The concept of a Lyapunov functional with respect to the MDE (11.25) was presented in Section 8.3. Nonetheless, we recall it here for the sake of convenience.

Definition 11.6: A functional U∶[t₀, ∞) × X → ℝ is said to be a Lyapunov functional with respect to the MDE (11.25), if the following conditions are satisfied:

(LFM1) for all x ∈ X, the function U(⋅, x)∶[t₀, ∞) → ℝ is left-continuous on (t₀, ∞);
(LFM2) there exists a continuous increasing function b∶ℝ₊ → ℝ₊ satisfying b(0) = 0 such that
$$U(t, x) \geqslant b(\|x\|), \quad \text{for every } (t, x) \in [t_0, \infty) \times X;$$
(LFM3) for every solution x∶[𝛾, 𝑣) ⊂ [t₀, ∞) → X of the MDE (11.25),
$$D^+ U(t, x(t)) = \limsup_{\eta\to 0^+} \frac{U(t+\eta, x(t+\eta)) - U(t, x(t))}{\eta} \leqslant 0$$
holds for all t ∈ [𝛾, 𝑣); that is, the right derivative of U along every solution of the MDE (11.25) is nonpositive.

The next theorem ensures that the MDE (11.25) is uniformly bounded under certain conditions. The version of this result for MDEs with functions taking values in ℝⁿ was proved in [79, Theorem 4.6]. The proof we present here was borrowed from [79].


Theorem 11.7: Assume that the function f∶X × [t₀, ∞) → X satisfies conditions (D2), (D3), (D4), and the function g∶[t₀, ∞) → ℝ satisfies condition (D1). Let U∶[t₀, ∞) × X → ℝ be a Lyapunov functional such that, for each function z∶[𝛾, 𝑣] ⊂ [t₀, ∞) → X which is left-continuous on (𝛾, 𝑣], the function [𝛾, 𝑣] ∋ t ↦ U(t, z(t)) is left-continuous on (𝛾, 𝑣]. Moreover, suppose U satisfies the following conditions:

(UBM1) the function b∶ℝ₊ → ℝ₊ of condition (LFM2) from the definition of Lyapunov functional is such that b(s) → ∞ as s → ∞;
(UBM2) there exists a monotone increasing function p∶ℝ₊ → ℝ₊ such that p(0) = 0 and, for every pair (t, z) ∈ [t₀, ∞) × X,
$$U(t, z) \leqslant p(\|z\|); \tag{11.28}$$
(UBM3) for every solution z∶[s₀, ∞) → X, s₀ ⩾ t₀, of the MDE (11.25), we have U(s, z(s)) − U(t, z(t)) ⩽ 0, for every s₀ ⩽ t < s < ∞.

Then, the MDE (11.25) is uniformly bounded.

Proof. Let us consider F∶X × [t₀, ∞) → X the function defined by (11.27). Since the function f∶X × [t₀, ∞) → X satisfies conditions (D2), (D3), (D4), and the function g∶[t₀, ∞) → ℝ satisfies condition (D1), it follows by Lemma 5.17 that F ∈ ℱ(Ω, h), where Ω = X × [t₀, ∞) and the function h∶[t₀, ∞) → ℝ is given by
$$h(t) = \int_{t_0}^{t} [M(s) + L(s)]\,\mathrm{d}g(s).$$
Since g is left-continuous on (t₀, ∞), h is also left-continuous on (t₀, ∞). Using the correspondence between the solutions of the generalized ODE (11.26) and the solutions of the MDE (11.25) guaranteed by Theorem 5.18, we can easily verify that U satisfies condition (UB3) from Theorem 11.3. Moreover, conditions (UBM1) and (UBM2) imply that U also satisfies conditions (UB1) and (UB2) from Theorem 11.3. Therefore, as all hypotheses of Theorem 11.3 are satisfied, the generalized ODE (11.26) is uniformly bounded. Finally, the correspondence between the solutions of the generalized ODE (11.26) and the solutions of the MDE (11.25) allows us to conclude that the MDE (11.25) is also uniformly bounded, which is the desired result. ◽

The correspondence between the solutions of the generalized ODE (11.26) and the solutions of the MDE (11.25), together with Theorem 11.4, enables us to obtain the next criterion guaranteeing the uniform ultimate boundedness of the MDE


(11.25). An analogous version of this result for MDEs with functions taking values in ℝⁿ was proved in [79, Theorem 4.7].

Theorem 11.8: Assume that f∶X × [t₀, ∞) → X satisfies conditions (D2), (D3), (D4), and the function g∶[t₀, ∞) → ℝ satisfies condition (D1). Let U∶[t₀, ∞) × X → ℝ be a Lyapunov functional such that, for each function z∶[𝛾, 𝑣] ⊂ [t₀, ∞) → X which is left-continuous on (𝛾, 𝑣], the function [𝛾, 𝑣] ∋ t ↦ U(t, z(t)) is left-continuous on (𝛾, 𝑣]. Moreover, suppose U satisfies conditions (UBM1) and (UBM2) from Theorem 11.7, as well as the following conditions:

(UBM1*) for all functions x, y∶[𝛾, 𝑣] → X of bounded variation in [𝛾, 𝑣], with [𝛾, 𝑣] ⊂ [t₀, ∞), we have
$$|U(t, x(t)) - U(t, y(t)) - U(s, x(s)) + U(s, y(s))| \leqslant \left( \int_{s}^{t} P(\tau)\,\mathrm{d}u(\tau) \right) \sup_{\xi\in[\gamma,v]} \|x(\xi) - y(\xi)\|,$$
for every 𝛾 ⩽ s < t ⩽ 𝑣, where u∶[t₀, ∞) → ℝ is a nondecreasing and left-continuous function and P∶[t₀, ∞) → ℝ is a locally Perron–Stieltjes integrable function with respect to u;
(UBM2*) there exists a continuous function 𝜙∶X → ℝ₊, with 𝜙(0) = 0 and 𝜙(x) > 0 whenever x ≠ 0, such that, for every solution z∶[s₀, ∞) → X, s₀ ⩾ t₀, of the MDE (11.25), we have
$$U(s, z(s)) - U(t, z(t)) \leqslant (s - t)\,(-\phi(z(t))), \quad \text{for every } s_0 \leqslant t < s < \infty.$$

Then, the MDE (11.25) is uniformly ultimately bounded.

The proof of Theorem 11.8 follows basically the same procedure as the proof of Theorem 11.7. More specifically, making use of the correspondence between a solution of the MDE (11.25) and a solution of the generalized ODE (11.26), it is possible to show that U satisfies the conditions of Theorem 11.4 (which allows us to conclude that the generalized ODE (11.26) is uniformly ultimately bounded) and that the uniform ultimate boundedness of the generalized ODE (11.26) implies the uniform ultimate boundedness of the MDE (11.25).

Remark 11.9: Using the fact, observed in [85, 219], that MDEs encompass dynamic equations on time scales, the authors of [79] extended their results concerning boundedness of solutions of MDEs [79, Theorems 4.6 and 4.7] to dynamic equations on time scales.


11.2.1 An Example

Let us consider ℝ with the absolute-value norm |⋅| and the following impulsive differential equation (we write IDE for short)
$$\begin{cases} \dot{x} = -b(t)\zeta(x), & t \neq t_k,\ t \geqslant 0, \\ \Delta(x(t_k)) = x(t_k^+) - x(t_k) = I_k(x(t_k)), & k \in \mathbb{N}, \end{cases} \tag{11.29}$$
under the following conditions:

(E1) b∶ℝ₊ → ℝ₊ is a nonnegative function, 𝜁∶ℝ → ℝ is a function fulfilling 𝜁(0) = 0 and x𝜁(x) > 0 whenever x ≠ 0, and the Perron integral $\int_{\tau_1}^{\tau_2} b(s)\zeta(x(s))\,\mathrm{d}s$ exists, for all x ∈ G(ℝ₊, ℝ) and all 𝜏₁, 𝜏₂ ∈ ℝ₊, with 𝜏₁ < 𝜏₂;
(E2) there exists a locally Perron integrable function m∶ℝ₊ → ℝ₊ such that, for all 𝜏₁, 𝜏₂ ∈ ℝ₊ with 𝜏₁ < 𝜏₂,
$$\left| \int_{\tau_1}^{\tau_2} b(s)\zeta(x(s))\,\mathrm{d}s \right| \leqslant \int_{\tau_1}^{\tau_2} m(s)\,\mathrm{d}s, \quad \text{for all } x \in G(\mathbb{R}_+, \mathbb{R});$$
(E3) there exists a locally Perron integrable function ℓ∶ℝ₊ → ℝ₊ such that, for all 𝜏₁, 𝜏₂ ∈ ℝ₊ with 𝜏₁ < 𝜏₂,
$$\left| \int_{\tau_1}^{\tau_2} [b(s)\zeta(x(s)) - b(s)\zeta(z(s))]\,\mathrm{d}s \right| \leqslant \|x - z\|_{[0,\infty)} \int_{\tau_1}^{\tau_2} \ell(s)\,\mathrm{d}s, \quad \text{for all } x, z \in G_0(\mathbb{R}_+, \mathbb{R});$$
(E4) 0 < t₁ < t₂ < ⋯ < t_k < ⋯ and $\lim_{k\to\infty} t_k = \infty$;
(E5) for all k ∈ ℕ, the impulse operators I_k∶ℝ → ℝ satisfy I_k(0) = 0 and xI_k(x) < 0 for all x ≠ 0. Moreover, there are constants K₁ > 0 and 0 < K₂ < 1 such that, for all k ∈ ℕ and all x, y ∈ ℝ, we have
$$|I_k(x)| \leqslant K_1 \quad \text{and} \quad |I_k(x) - I_k(y)| \leqslant K_2 |x - y|.$$

Given s₀ ⩾ 0, we say that a function x∶[s₀, ∞) → ℝ is a solution of (11.29), if x is differentiable almost everywhere on each interval [0, t₁] ∩ [s₀, ∞) and (t_k, t_{k+1}] ∩ [s₀, ∞) for k ∈ ℕ, x′(t) = −b(t)𝜁(x(t)) for almost all t ∈ [s₀, ∞), and x(t_k⁺) = x(t_k) + I_k(x(t_k)) if t_k ∈ [s₀, ∞). It is worthwhile to mention that if s₀ = 0, then t_k ∈ (s₀, ∞) by virtue of (E4). If x∶[s₀, ∞) → ℝ is a solution of (11.29), with s₀ ⩾ 0, then x satisfies the following integral equation:
$$x(t) = x(s_0) - \int_{s_0}^{t} b(s)\zeta(x(s))\,\mathrm{d}s + \sum_{k=1}^{\infty} I_k(x(t_k))\, H_{t_k}(t),$$


for all t ⩾ s₀, where H_{t_k} denotes the left-continuous Heaviside function concentrated at t_k, that is,
$$H_{t_k}(t) = \begin{cases} 0, & \text{for } 0 \leqslant t \leqslant t_k, \\ 1, & \text{for } t > t_k \end{cases}$$
(see Eq. (3.12)). Let f∶ℝ × ℝ₊ → ℝ and g∶ℝ₊ → ℝ be the functions defined by
$$f(x, t) = -b(t)\zeta(x) \quad \text{and} \quad g(t) = t.$$
Consider, also, the functions
$$\tilde{f}(x, t) = \begin{cases} f(x, t), & \text{if } t \in \mathbb{R}_+ \setminus \{t_1, t_2, \ldots\}, \\ I_k(x), & \text{if } t = t_k,\ k \in \mathbb{N}, \end{cases}$$
and
$$\tilde{g}(t) = \begin{cases} t, & \text{if } t \in [0, t_1], \\ t + k, & \text{if } t \in (t_k, t_{k+1}],\ k \in \mathbb{N}. \end{cases}$$
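The role of g̃ can be checked directly: it is nondecreasing, left-continuous, and has a unit jump at each t_k, and this jump is precisely what lets the Stieltjes integral against g̃ reproduce the impulse I_k. A short sketch with the illustrative choice t_k = k (an assumption made for this sketch only, not fixed by the text):

```python
# Sketch of the time transformation g~: nondecreasing, left-continuous,
# with a unit jump at each impulse time. We take t_k = k (k = 1, 2, ...)
# purely for illustration.

def g_tilde(t, tks=None):
    tks = tks or [float(k) for k in range(1, 1000)]
    k = sum(1 for tk in tks if t > tk)    # number of impulse times strictly below t
    return t + k

# Left-continuity at t_1 = 1: values just below and at t_1 agree ...
assert g_tilde(1.0) == 1.0
assert abs(g_tilde(1.0 - 1e-9) - (1.0 - 1e-9)) < 1e-12
# ... while the right limit exceeds g~(t_1) by the unit jump.
assert g_tilde(1.0 + 1e-9) - g_tilde(1.0) > 0.999999
```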

Anchored by [86, Theorem 3.1], we can affirm that the IDE (11.29) can be transformed into the integral form of a measure functional differential equation without impulses as follows:
$$x(t) = x(s_0) + \int_{s_0}^{t} \tilde{f}(x(s), s)\,\mathrm{d}\tilde{g}(s), \quad \text{for all } t \geqslant s_0. \tag{11.30}$$

Hence, x is a solution of the IDE (11.29) if and only if x is a solution of the MDE (11.30). Since conditions (E1)–(E5) hold, it is possible to prove that conditions (D1)–(D4) are satisfied with f and g replaced by f̃ and g̃, respectively; see [86, Lemma 3.3].

Our goal is to show that the IDE (11.29) is uniformly bounded, which means that, for every 𝛼 > 0, there exists M = M(𝛼) > 0 such that, for all s₀ ∈ [t₀, ∞) and all x₀ ∈ ℝ with |x₀| < 𝛼, we have
$$|x(s, s_0, x_0)| < M, \quad \text{for all } s \geqslant s_0,$$
where x(⋅) = x(⋅, s₀, x₀) denotes the unique global forward solution x∶[s₀, ∞) → ℝ of the IDE (11.29) such that x(s₀) = x₀.

Define U∶ℝ₊ × ℝ → ℝ₊ by U(t, x) = |x|. Let us verify that the function U satisfies the conditions of Theorem 11.7. It is clear that, for all x ∈ ℝ, the function U(⋅, x)∶[t₀, ∞) → ℝ is left-continuous on (t₀, ∞), whence it follows that U satisfies condition (LFM1). Moreover, for each function z∶[𝛾, 𝑣] ⊂ [t₀, ∞) → ℝ which is left-continuous on (𝛾, 𝑣], the function [𝛾, 𝑣] ∋ t ↦ U(t, z(t)) is left-continuous on (𝛾, 𝑣]. If we consider the functions b∶ℝ₊ → ℝ₊ and p∶ℝ₊ → ℝ₊ defined by b(s) = s∕2 and p(s) = s, we have
$$b(|x|) \leqslant U(t, x) \leqslant p(|x|),$$


with b(0) = 0 = p(0) and b(s) → ∞ as s → ∞. This shows that U satisfies conditions (LFM2), (UBM1), and (UBM2).

Define an auxiliary function V∶ℝ → ℝ₊ by V(x) = |x|, x ∈ ℝ. Let x∶[s₀, ∞) → ℝ be a solution of (11.30), with s₀ ⩾ 0. We claim that
$$D^+ V(x(t)) = \limsup_{\eta\to 0^+} \frac{V(x(t+\eta)) - V(x(t))}{\eta} \leqslant 0, \quad \text{for all } t \geqslant s_0.$$
At first, we suppose t ≠ t_k for all k ∈ ℕ. It is known that x ≡ 0 is the unique solution of (11.29) such that x(t) = 0; we distinguish the cases x(t) ≠ 0 and x(t) = 0.

(i) If x(t) ≠ 0, then D⁺V(x(t)) = sgn(x(t))x′(t) = −sgn(x(t))b(t)𝜁(x(t)) ⩽ 0, since b is nonnegative and x𝜁(x) > 0 for x ≠ 0. As usual, the symbol sgn(z) denotes the sign of z.
(ii) If x(t) = 0, then D⁺V(x(t)) = 0.
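The computation in case (i) can be probed numerically: along the hypothetical solution x(t) = x₀e^{−t} of ẋ = −b(t)𝜁(x) with b ≡ 1 and 𝜁(x) = x (an illustrative choice, not fixed by the text), the difference quotients defining D⁺V(x(t)) are indeed nonpositive.

```python
# Hedged numerical check of case (i): along x(t) = x0 * exp(-t), the solution
# of x' = -x (i.e. b = 1, zeta(x) = x, an illustrative choice), the quotient
# (V(x(t + eta)) - V(x(t))) / eta with V(x) = |x| stays nonpositive for small
# eta, consistent with D+ V(x(t)) <= 0.

import math

def V(x):
    return abs(x)

def x_sol(t, x0=3.0):
    return x0 * math.exp(-t)

def dini_quotient(t, eta):
    return (V(x_sol(t + eta)) - V(x_sol(t))) / eta

quotients = [dini_quotient(t, 1e-6) for t in (0.0, 0.5, 1.0, 2.0)]
assert all(q <= 0.0 for q in quotients)
```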

Now, suppose t = t_k for some k ∈ ℕ. We claim that V(x(t_k⁺)) ⩽ V(x(t_k)). Indeed, if x(t_k) = 0, then I_k(x(t_k)) = 0 by condition (E5). Thus,
$$V(x(t_k^+)) = V(x(t_k) + I_k(x(t_k))) = 0 = V(x(t_k)).$$
If x(t_k) > 0, then condition (E5) implies that |I_k(x(t_k))| ⩽ K₂|x(t_k)| < |x(t_k)| and I_k(x(t_k)) < 0, that is,
$$-x(t_k) = -|x(t_k)| < I_k(x(t_k)) < 0,$$
whence it follows that 0 < x(t_k) + I_k(x(t_k)) < x(t_k). Consequently,
$$V(x(t_k^+)) = V(x(t_k) + I_k(x(t_k))) = x(t_k) + I_k(x(t_k)) < x(t_k) = V(x(t_k)).$$
Now, if x(t_k) < 0, then condition (E5) implies that |I_k(x(t_k))| ⩽ K₂|x(t_k)| < |x(t_k)| and I_k(x(t_k)) > 0, that is,
$$0 < I_k(x(t_k)) \leqslant |I_k(x(t_k))| < |x(t_k)| = -x(t_k),$$
and, therefore, x(t_k) < x(t_k) + I_k(x(t_k)) < 0.


This implies that
$$V(x(t_k^+)) = V(x(t_k) + I_k(x(t_k))) = |x(t_k) + I_k(x(t_k))| \leqslant |x(t_k)| = V(x(t_k)).$$
Thus, once V(x(t_k⁺)) ⩽ V(x(t_k)), we have V(x(t_k + 𝜂)) ⩽ V(x(t_k)), for sufficiently small 𝜂 > 0. Consequently, for t = t_k, we get
$$D^+ V(x(t)) = \limsup_{\eta\to 0^+} \frac{V(x(t+\eta)) - V(x(t))}{\eta} \leqslant 0,$$
which completes the proof that D⁺V(x(t)) ⩽ 0 for all t ⩾ s₀. Therefore,
$$V(x(t)) \leqslant V(x(s)), \quad \text{for all } t \geqslant s \geqslant s_0,$$
whence we obtain U(t, x(t)) = V(x(t)) ⩽ V(x(s)) = U(s, x(s)), for all t ⩾ s ⩾ s₀, showing that condition (UBM3) is satisfied. Since all conditions of Theorem 11.7 are satisfied, we conclude that the MDE (11.30) is uniformly bounded and, consequently, the IDE (11.29) is uniformly bounded.
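The uniform boundedness just established can be observed in a crude simulation of the IDE (11.29) for the illustrative data b ≡ 1, 𝜁(x) = x, t_k = k, and I_k(x) = −x∕2 (so K₂ = 1∕2; the uniform bound K₁ of (E5) plays no role here because the simulated solution stays bounded). Both the flow and the impulses shrink |x|, so |x(t)| never exceeds |x₀|:

```python
# Crude Euler simulation of the IDE (11.29) for illustrative data:
# b(t) = 1, zeta(x) = x, impulse times t_k = k, impulses I_k(x) = -x/2.
# These choices are assumptions for this sketch, not from the book.

def simulate(x0, t_end=10.0, dt=1e-3):
    xs, x, t = [x0], x0, 0.0
    next_k = 1
    while t < t_end:
        x += dt * (-x)                 # flow: x' = -b(t) * zeta(x) with b = 1
        t += dt
        if t >= next_k:                # impulse at t_k = k: x(t_k+) = x + I_k(x)
            x += -0.5 * x
            next_k += 1
        xs.append(x)
    return xs

trajectory = simulate(3.0)
# Every step multiplies |x| by (1 - dt) < 1 and every impulse by 1/2,
# so the trajectory stays inside [-|x0|, |x0|] and decays.
assert max(abs(v) for v in trajectory) <= 3.0
assert abs(trajectory[-1]) < 1e-3
```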


12 Control Theory

Fernanda Andrade da Silva¹, Márcia Federson¹, and Eduard Toon²

¹ Departamento de Matemática, Instituto de Ciências Matemáticas e de Computação (ICMC), Universidade de São Paulo, São Carlos, SP, Brazil
² Departamento de Matemática, Instituto de Ciências Exatas, Universidade Federal de Juiz de Fora, Juiz de Fora, MG, Brazil

A control system is a time-evolving system over which one can act through an input or control function. The purpose of control theory is to analyze properties of such systems, with the intention of bringing a certain initial datum to a certain final datum. Observability, in control theory, is a measure of how well the states of a system can be inferred from some knowledge about its external outputs. Roughly speaking, observability means that, from the system outputs, it is possible to determine the behavior of the entire system.

In 1991, Milan Tvrdý introduced a concept of complete controllability for generalized ODEs defined in finite-dimensional spaces (see [229]). Tvrdý considered the following integral equations:
$$\begin{cases} x(t) - x(0) - \displaystyle\int_{0}^{t} \mathrm{d}[A(s)]\,x(s) + (\mathcal{B}u)(t) - (\mathcal{B}u)(0) = f(t) - f(0), & x \in \mathbb{G}^n_L,\ u \in \mathcal{U}, \\[2mm] Mx(0) + \displaystyle\int_{0}^{1} K(s)\,\mathrm{d}[x(s)] = r, \end{cases} \tag{12.1}$$
where 𝒰 = L₂ⁿ, that is, 𝒰 is the space of n-vector-valued functions which are square integrable over [0, 1] in the sense of Lebesgue, 𝔾ⁿ_L is the linear space of n-vector-valued functions regulated on [0, 1] and left-continuous on (0, 1), M is a constant m × n matrix, K(t) is an m × n matrix-valued function of bounded variation in [0, 1], r ∈ ℝⁿ, f∶[0, 1] → ℝⁿ is regulated on [0, 1] and left-continuous on (0, 1), A(t) is an n × n matrix-valued function of bounded variation in [0, 1], left-continuous on (0, 1), right-continuous at 0, such that det[I + Δ⁺A(t)] ≠ 0 on

Generalized Ordinary Differential Equations in Abstract Spaces and Applications, First Edition. Edited by Everaldo M. Bonotto, Márcia Federson, and Jaqueline G. Mesquita. © 2021 John Wiley & Sons, Inc. Published 2021 by John Wiley & Sons, Inc.


[0, 1] and  ∈ L( , 𝔾nL ). System (12.1) is said to be completely controllable, if it possesses a solution in 𝔾nL ×  , for any f ∈ 𝔾nL and any r ∈ ℝn . In this chapter, we introduce new concepts of controllability and observability for abstract generalized ODEs, and we investigate necessary and sufficient conditions for a system of nonhomogeneous generalized ODEs, with initial data, controls, and observations taking values in Banach spaces, to be exactly controllable, approximately controllable, or observable. These are the contents of Section 12.1. In Section 12.2, we include an application to ordinary differential equations (we write ODEs for short) with Perron integrable functions on the right-hand side. Moreover, Corollaries 12.7 and 12.9 show that the results described in Section 12.2 extend classical results on controllability and observability for ODEs presented in the literature (see [237], for instance).

12.1 Controllability and Observability

Our goal, in this section, is to establish necessary and sufficient conditions for a system of nonhomogeneous generalized ODEs to be controllable/observable (see Theorem 12.2 in the present section).

Let Ũ, X, and Y be Banach spaces and S ⊂ X. We denote by L(X, Y) the space of continuous linear mappings T∶X → Y. When X = Y, we write simply L(X) instead of L(X, X). Throughout this section, S denotes the space of initial data, Ũ is the control space, X is the evolution space, and Y is the observation space. Furthermore, we assume that 0 ∈ S (the neutral element of X). Recall that BV_loc([t₀, ∞), X) denotes the vector space of all functions f∶[t₀, ∞) → X such that f is of bounded variation in [a, b], for all [a, b] ⊂ [t₀, ∞).

Let t₀ ⩾ 0 and consider the nonhomogeneous generalized ODE:
$$\frac{\mathrm{d}x}{\mathrm{d}\tau} = D[A(t)x + B(t)u], \tag{12.2}$$
where A∶[t₀, ∞) → L(X), B∶[t₀, ∞) → L(Ũ, X), and u ∈ Ũ are such that the Perron integral $\int_a^b B(s)u\,\mathrm{d}s$ exists for all a, b ∈ [t₀, ∞) (see Remark 1.39 for the definition of $\int_a^b B(s)u\,\mathrm{d}s$). Moreover, assume that B ∈ BV_loc([t₀, ∞), L(Ũ, X)), A ∈ BV_loc([t₀, ∞), L(X)), and A satisfies the conditions of Definition 6.2 on [t₀, ∞), that is,

(D) [I + Δ⁺A(t)]⁻¹ ∈ L(X), for all t ∈ [t₀, ∞), and [I − Δ⁻A(t)]⁻¹ ∈ L(X), for all t ∈ (t₀, ∞), where Δ⁺A(t) = A(t⁺) − A(t) and Δ⁻A(t) = A(t) − A(t⁻).

Denote by x(⋅) = x(⋅, d, u) the solution x of the nonhomogeneous generalized ODE (12.2) with initial condition x(t₀) = d and control u. By Remark 6.4, we can assume that all solutions of the nonhomogeneous generalized ODE (12.2) are defined for all t ∈ [t₀, ∞). By the variation-of-constants

formula (see Theorem 6.14), the solution of the nonhomogeneous generalized ODE (12.2), with initial condition x(t₀) = d, is given by
$$x(t) = U(t, t_0)d + B(t)u - B(t_0)u - \int_{t_0}^{t} \mathrm{d}_s[U(t, s)]\,(B(s)u - B(t_0)u),$$
for all t ∈ [t₀, ∞), where U is the fundamental operator of the linear generalized ODE
$$\frac{\mathrm{d}x}{\mathrm{d}\tau} = D[A(t)x],$$
given by
$$U(t, s) = I + \int_{s}^{t} \mathrm{d}[A(r)]\,U(r, s), \quad t, s \in [t_0, \infty), \tag{12.3}$$
as attested by Theorem 6.6. The evolution map of the nonhomogeneous generalized ODE (12.2) at time t ∈ [t₀, ∞) is given by (d, u) ↦ x(t, d, u) = F(t)d + G(t)u, defined on S × Ũ and taking values in X, where

(i) for all d ∈ S, the mapping [t₀, ∞) ∋ t ↦ F(t)d ∈ X is given by
$$F(t)d = U(t, t_0)d, \tag{12.4}$$
with U(t, t₀) given by (12.3);
(ii) for all t ∈ [t₀, ∞), the mapping Ũ ∋ u ↦ G(t)u ∈ X is given by
$$G(t)u = B(t)u - B(t_0)u - \int_{t_0}^{t} \mathrm{d}_s[U(t, s)]\,(B(s)u - B(t_0)u). \tag{12.5}$$

We also define an observer C∶[t₀, ∞) → L(X, Y) and the observation at time t ∈ [t₀, ∞) by
$$y(t, d, u) = C(t)x(t, d, u). \tag{12.6}$$
Now, we consider the system of nonhomogeneous generalized ODEs
$$\begin{aligned} \frac{\mathrm{d}x}{\mathrm{d}\tau} &= D[A(t)x + B(t)u], \\ y(t) &= C(t)x(t). \end{aligned} \tag{12.7}$$
We introduce, in the next lines, concepts of controllability and observability for system (12.7), where we assume that A∶[t₀, ∞) → L(X), with A ∈ BV_loc([t₀, ∞), L(X)), satisfies condition (D), and C is the observer appearing in (12.6). Consider B∶[t₀, ∞) → L(Ũ, X), with B ∈ BV_loc([t₀, ∞), L(Ũ, X)), and assume that the Perron integral $\int_a^b B(s)u\,\mathrm{d}s$ exists, for all a, b ∈ [t₀, ∞) and all u ∈ Ũ.


Definition 12.1: Let t₀ < T < ∞ be fixed. The datum d ∈ S is said to be

(i) approximately controllable at time T to a point x̃ ∈ X, if there exists a sequence of controls {u_n}_{n∈ℕ} in Ũ such that x(T, d, u_n) → x̃ as n → ∞;
(ii) exactly controllable at time T to x̃, if there exists a control u ∈ Ũ such that x(T, d, u) = x̃.

The system of generalized ODEs (12.7) is said to be approximately controllable (exactly controllable) at time T, if all points of S are approximately controllable (exactly controllable) at time T to all points of X. Accordingly, given u ∈ Ũ, a state d ∈ S is said to be observable at time T, if d can be uniquely determined from u and from the observation y(T, d, u). The system (12.7) is said to be observable at time T, if all states in S are observable at time T.

For the next theorem, we define the mapping
$$F(T)\colon S \to L([0, T], X), \qquad d \mapsto F(T)d, \quad \text{where } (F(T)d)(t) = F(t)d \ \text{ for } t \in [0, T], \tag{12.8}$$

where F(t)d = U(t, t₀)d for all d ∈ S and t ∈ [t₀, ∞).

Theorem 12.2: Consider the mappings G, C, and F given by (12.5), (12.6), and (12.8), respectively. The following assertions hold:

(i) The system of generalized ODEs (12.7) is approximately controllable at time T if and only if the range of G(T) is everywhere dense in X.
(ii) The system of generalized ODEs (12.7) is exactly controllable at time T if and only if the mapping G(T) is onto.
(iii) The system of generalized ODEs (12.7) is observable at time T if and only if the composite mapping C(T)F(T) is one-to-one.

Proof. At first, we prove item (i). Let x̃ ∈ X be arbitrary. Since the system of generalized ODEs (12.7) is approximately controllable at time T, by Definition 12.1, d = 0 is approximately controllable at time T to x̃. Therefore, there exists a sequence {u_n}_{n∈ℕ} in Ũ such that x(T, 0, u_n) → x̃ as n → ∞, that is, G(T)u_n → x̃ as n → ∞. Hence, the range of G(T) is everywhere dense in X. Conversely, for arbitrary d ∈ S and x̃ ∈ X, if the range of G(T) is everywhere dense in X, then there exists a sequence {u_n}_{n∈ℕ} in Ũ such that G(T)u_n → x̃ − F(T)d as n → ∞, that is, x(T, d, u_n) → x̃ as n → ∞.

As in item (i), in order to prove item (ii), we take x̃ arbitrary and d = 0. Since the system of generalized ODEs (12.7) is exactly controllable at time T, there exists u ∈ Ũ such that x(T, 0, u) = x̃, that is, G(T)u = x̃. Therefore, G(T) is onto.


Conversely, for arbitrary d ∈ S and x̃ ∈ X, let z = x̃ − F(T)d. Since G(T) is onto, there exists u ∈ Ũ such that G(T)u = z = x̃ − F(T)d. Then, x(T, d, u) = x̃.

Finally, for (iii), let d′, d ∈ S be such that d′ ≠ d. Since the system of generalized ODEs (12.7) is observable at time T, by Definition 12.1, y(T, d′, 0) ≠ y(T, d, 0), that is, C(T)(F(T)d′) ≠ C(T)(F(T)d). Hence, C(T)F(T) is one-to-one. Conversely, take d ∈ S arbitrary and suppose the system of generalized ODEs (12.7) is nonobservable at time T. Then, there exists d′ ∈ S, with d′ ≠ d, such that y(T, d′, 0) = y(T, d, 0), for u = 0. Thus, C(T)(F(T)(d′ − d)) = 0, in opposition to our hypothesis that C(T)F(T) is one-to-one. ◽
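For readers who want a concrete finite-dimensional instance of item (ii): in the classical special case ẋ = Ax + Bu with constant matrices (an assumption made for this sketch; the abstract G(T) of (12.5) is much more general), surjectivity of the reachability map is equivalent, as in classical control theory, to the Kalman rank condition rank[B, AB, …, A^{n−1}B] = n. A self-contained rank check for the controllable double integrator:

```python
# Kalman rank check for the classical LTI specialization of Theorem 12.2(ii):
# A = [[0, 1], [0, 0]], B = [[0], [1]] (double integrator) is an illustrative
# controllable pair. Rank is computed by plain Gaussian elimination.

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def rank(M, eps=1e-12):
    M = [row[:] for row in M]
    rows, cols, r = len(M), len(M[0]), 0
    for c in range(cols):
        piv = next((i for i in range(r, rows) if abs(M[i][c]) > eps), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(rows):
            if i != r and abs(M[i][c]) > eps:
                fac = M[i][c] / M[r][c]
                M[i] = [a - fac * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

A = [[0.0, 1.0], [0.0, 0.0]]
B = [[0.0], [1.0]]
AB = mat_mul(A, B)
kalman = [[B[i][0], AB[i][0]] for i in range(2)]   # [B, AB]
assert rank(kalman) == 2    # full rank: the reachability map is onto
```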

12.2 Applications to ODEs

In this section, we show that when we consider a system of classic ODEs, our main results described in Section 12.1 generalize the results described in [237] (see [237, Theorems 1 and 2] and Corollaries 12.7 and 12.9 in this section). Consider the system of ODEs:

x̂̇ = Â(t)x̂ + B̂(t)u(t),
y(t) = Ĉ(t)x̂,    (12.9)

where x̂̇ = (d/dt)x̂(t), X = ℝⁿ = S, Y = ℝʳ, Ũ = ℝᵖ, 𝒰 is the set of all control functions u: [t₀, ∞) → Ũ, the functions Â(⋅), B̂(⋅)u(⋅), and Ĉ(⋅) are locally Perron integrable over [t₀, ∞), and the system of generalized ODEs:

dx/dτ = D[A(t)x + B(t)u],
y(t) = C(t)x(t),    (12.10)

where

A(t) = ∫_{t₀}^{t} Â(s) ds  and  B(t)u = ∫_{t₀}^{t} B̂(s)u(s) ds.    (12.11)

Notice that A(⋅)x and B(⋅)u are continuous on [t₀, ∞) for all x ∈ ℝⁿ and all u ∈ 𝒰 (see Corollary 2.14). Therefore, B ∈ BV_loc([t₀, ∞), L(𝒰, ℝⁿ)), A ∈ BV_loc([t₀, ∞), L(ℝⁿ)), and A satisfies condition (Δ), which implies that A and B satisfy all conditions of Section 12.1.

The next result gives a characterization of the solution of the nonhomogeneous generalized ODE (12.10).

Theorem 12.3: Let x: [t₀, ∞) → X be the solution of the nonhomogeneous generalized ODE (12.10) with initial condition x(t₀) = d ∈ S. Then, x is given by

x(t) = U(t, t₀)d + ∫_{t₀}^{t} U(t, τ)B̂(τ)u(τ) dτ,


for all t ∈ [t₀, ∞), where U is given by (12.3), and the integral on the right-hand side is in Perron's sense.

Proof. By Theorem 6.14, the solution of the nonhomogeneous generalized ODE (12.10), with initial condition x(t₀) = d, satisfies the integral equation

x(t) = U(t, t₀)d + B(t)u − ∫_{t₀}^{t} d_s[U(t, s)]B(s)u,    (12.12)

for all t ∈ [t₀, ∞), see Eq. (6.22). By the Integration by Parts Formula for the Perron–Stieltjes integral (see Corollary 1.56),

∫_{t₀}^{t} d_s[U(t, s)]B(s)u = U(t, t)B(t)u − U(t, t₀)B(t₀)u − ∫_{t₀}^{t} U(t, s) d[B(s)u]
  = B(t)u − ∫_{t₀}^{t} U(t, s)B̂(s)u(s) ds.    (12.13)

Then, replacing (12.13) in (12.12), we obtain

x(t) = U(t, t₀)d + ∫_{t₀}^{t} U(t, s)B̂(s)u(s) ds,    (12.14)

where the integral in (12.14) is in Perron's sense. ◽
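In the classical scalar case, formula (12.14) can be checked numerically. The sketch below uses hypothetical data (Â ≡ −1, B̂ ≡ 1, u ≡ 1, d = 2, t₀ = 0) and replaces the Perron integral by a midpoint Riemann sum, which suffices here since the integrand is continuous; it compares (12.14) against the closed-form solution of ẋ = −x + 1, x(0) = 2.

```python
import math

def U(t, s):
    # fundamental operator of x' = -x (i.e. Â ≡ -1): U(t, s) = e^{-(t-s)}
    return math.exp(-(t - s))

def x_formula(t, d, u, n=10000):
    # x(t) = U(t, 0)d + ∫_0^t U(t, s) B̂(s) u(s) ds with B̂ ≡ 1,
    # the Perron integral approximated by a midpoint Riemann sum
    h = t / n
    integral = h * sum(U(t, (k + 0.5) * h) * u((k + 0.5) * h) for k in range(n))
    return U(t, 0) * d + integral

# x' = -x + u with u ≡ 1, d = 2; exact solution: x(t) = 1 + e^{-t}
x_num = x_formula(1.0, 2.0, lambda s: 1.0)
x_exact = 1.0 + math.exp(-1.0)
print(abs(x_num - x_exact))  # small quadrature error
```

The agreement up to quadrature error illustrates that, for classical ODEs, (12.14) is exactly the usual variation-of-constants formula.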

Remark 12.4: If Φ is the fundamental operator of the ODE (12.9), then

Φ(t, s) = U(t, s), for all t, s ∈ [t₀, ∞).

See [209, Example 6.2]. In this case, the functions G and F given by (12.5) and (12.4), respectively, can be rewritten as follows:

G(t)u = ∫_{t₀}^{t} U(t, s)B̂(s)u(s) ds = ∫_{t₀}^{t} Φ(t, s)B̂(s)u(s) ds,    (12.15)

for all t ∈ [t₀, ∞) and all u ∈ 𝒰, and

F(t)d = Φ(t, t₀)d,    (12.16)

for all t ∈ [t₀, ∞) and all d ∈ S.

Notice that the definitions of exact controllability, approximate controllability, and of observability for ODEs are analogous to the same definitions for generalized ODEs. In the sequel, we describe necessary and sufficient conditions on controllability and observability for the system of ODEs (12.9). The proof is a consequence of Theorem 12.2 and Remark 12.4.


Corollary 12.5: The following assertions hold.

(i) The system of ODEs (12.9) is approximately controllable at time T if and only if the range of G(T), defined by (12.15), is everywhere dense in X.
(ii) The system of ODEs (12.9) is exactly controllable at time T if and only if the mapping G(T), defined by (12.15), is onto.
(iii) The system of ODEs (12.9) is observable at time T if and only if the composite mapping C(T)F(T) is one-to-one, where (F(T)d)(t) = F(t)d for all d ∈ S and t ∈ [0, T] and F is given by (12.16).

In the sequel, we use the notation M′ to denote the transpose of a given matrix M. The next result relates conditions on exact controllability between the systems of generalized ODEs (12.10) and of ODEs (12.9).

Theorem 12.6: Let T ⩾ t₀. The function G(T), given by (12.15), is onto if and only if the rows of the matrix U(t₀, τ)B̂(τ), t₀ ⩽ τ ⩽ T, are linearly independent, where U is given by (12.3).

Proof. Assume that the rows of the matrix U(t₀, τ)B̂(τ), t₀ ⩽ τ ⩽ T, are linearly independent. Then, the matrix

𝒲(t₀, T) = ∫_{t₀}^{T} U(T, τ)B̂(τ)B̂′(τ)U′(t₀, τ) dτ

is positive definite. Let x ∈ X be arbitrary and take

u(τ) = B̂′(τ)U′(t₀, τ)𝒲⁻¹(t₀, T)x, t₀ ⩽ τ ⩽ T.

By (12.15), we obtain

G(T)u = ∫_{t₀}^{T} U(T, τ)B̂(τ)u(τ) dτ = x,

which shows that G(T) is onto.

Conversely, if the rows of the matrix U(t₀, τ)B̂(τ), t₀ ⩽ τ ⩽ T, are linearly dependent, then there exists x ∈ X, x ≠ 0, such that

x′U(t₀, τ)B̂(τ) ≡ 0, t₀ ⩽ τ ⩽ T.

Since G(T) is onto, there exists u such that G(T)u = x. Therefore,

∫_{t₀}^{T} U(T, τ)B̂(τ)u(τ) dτ = x.


Multiplying the above equation on the left by x′U(t₀, T), we obtain

∫_{t₀}^{T} x′U(t₀, τ)B̂(τ)u(τ) dτ = x′U(t₀, T)x,

whence 0 = x′U(t₀, T)x, which contradicts the fact that U is the fundamental operator of the generalized ODE

dx/dτ = D[A(t)x],

where A is given by (12.11). ◽

As a consequence of Remark 12.4, Corollary 12.5, and Theorem 12.6, we obtain the next result.

Corollary 12.7: The system of ODEs (12.9) is exactly controllable at time T if and only if the rows of the matrix Φ(t₀, τ)B̂(τ), t₀ ⩽ τ ⩽ T, are linearly independent, where Φ is the fundamental operator of (12.9).

The following result relates conditions about observability between the systems of generalized ODEs (12.10) and of ODEs (12.9).

Theorem 12.8: Let T ⩾ t₀ and u ≡ 0 in (12.10). Then, C(T)F(T) is one-to-one if and only if the columns of the matrix C(T)U(T, t₀) are linearly independent, where C is given by (12.11), F(T) by (12.16), and U is given by (12.3).

Proof. Suppose C(T)F(T) is one-to-one and, to the contrary, that the columns of the matrix C(T)U(T, t₀) are linearly dependent. Then, there exists d ∈ S, with d ≠ 0, such that C(T)U(T, t₀)d = 0. On the other hand, C(T)U(T, t₀)d = C(T)(F(T)d). Thus, C(T)(F(T)d) = 0 for d ≠ 0, which contradicts the fact that C(T)F(T) is one-to-one.

Conversely, assume that the columns of the matrix C(T)U(T, t₀) are linearly independent and that C(T)F(T) is not one-to-one. Then, there exist d′, d ∈ S such that d′ ≠ d and C(T)(F(T)d′) = C(T)(F(T)d). Hence, C(T)(F(T)(d′ − d)) = 0, which implies C(T)U(T, t₀)(d′ − d) = 0 with d′ − d ≠ 0, contradicting the fact that the columns of the matrix C(T)U(T, t₀) are linearly independent. ◽

The proof of the next result follows directly from Remark 12.4, Corollary 12.5, and Theorem 12.8.

Corollary 12.9: The system of ODEs (12.9) is observable at time T if and only if the columns of the matrix C(T)Φ(T, t₀) are linearly independent, where C is given by (12.11), F(T) by (12.16), and Φ is the fundamental operator of (12.9).
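For a concrete finite-dimensional illustration of Theorem 12.6 and Corollary 12.7, consider the double integrator ẋ₁ = x₂, ẋ₂ = u, that is, Â = [[0, 1], [0, 0]], B̂ = (0, 1)′, with Φ(t, s) = U(t, s) = [[1, t − s], [0, 1]]. The rows of U(t₀, τ)B̂ = (t₀ − τ, 1)′ are linearly independent functions of τ, so the controllability Gramian (denoted 𝒲(t₀, T) here, as in the proof above) is invertible, and u(τ) = B̂′U′(t₀, τ)𝒲⁻¹(t₀, T)x̃ steers the origin to x̃. The sketch below uses hypothetical data (t₀ = 0, T = 2, target x̃ = (1, −0.5)) and midpoint quadrature.

```python
import math

T = 2.0                                   # hypothetical final time, t0 = 0
def UB(t, s):
    # U(t, s)B̂ for Â = [[0, 1], [0, 0]], B̂ = (0, 1)': U(t, s) = [[1, t-s], [0, 1]]
    return [t - s, 1.0]

n = 2000
h = T / n
taus = [(k + 0.5) * h for k in range(n)]  # midpoint quadrature nodes

# Gramian 𝒲(0, T) = ∫_0^T U(T, τ)B̂ · (U(0, τ)B̂)' dτ
W = [[0.0, 0.0], [0.0, 0.0]]
for tau in taus:
    c, r = UB(T, tau), UB(0.0, tau)
    for i in range(2):
        for j in range(2):
            W[i][j] += c[i] * r[j] * h

det = W[0][0] * W[1][1] - W[0][1] * W[1][0]
Winv = [[ W[1][1] / det, -W[0][1] / det],
        [-W[1][0] / det,  W[0][0] / det]]

target = [1.0, -0.5]                      # hypothetical state to reach from d = 0
y = [Winv[0][0] * target[0] + Winv[0][1] * target[1],
     Winv[1][0] * target[0] + Winv[1][1] * target[1]]

def u(tau):
    # u(τ) = B̂'U'(0, τ) 𝒲⁻¹(0, T) target
    r = UB(0.0, tau)
    return r[0] * y[0] + r[1] * y[1]

# G(T)u = ∫_0^T U(T, τ)B̂ u(τ) dτ should reproduce the target state
x = [0.0, 0.0]
for tau in taus:
    c, v = UB(T, tau), u(tau)
    x[0] += c[0] * v * h
    x[1] += c[1] * v * h
print(x)
```

Since the Gramian and the reachability integral are discretized on the same nodes, the recovered state matches the target up to floating-point error, mirroring the exact cancellation 𝒲𝒲⁻¹ = I in the proof.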


13

Dichotomies

Everaldo M. Bonotto¹ and Márcia Federson²

¹ Departamento de Matemática Aplicada e Estatística, Instituto de Ciências Matemáticas e de Computação (ICMC), Universidade de São Paulo, São Carlos, SP, Brazil
² Departamento de Matemática, Instituto de Ciências Matemáticas e de Computação (ICMC), Universidade de São Paulo, São Carlos, SP, Brazil

The theory of exponential dichotomy is an important tool for the investigation of the behavior of nonautonomous differential equations. Exponential dichotomy is a kind of conditional stability which generalizes the notion of hyperbolicity of autonomous systems to nonautonomous systems. There are many important consequences for systems which admit a dichotomy. We can mention, for instance, the work [168], where the authors present a proof of Smale's Conjecture by means of exponential dichotomies. The classical notion and basic properties of dichotomies in the context of ODEs can be found in [48] and [49], among other references. We also mention Kenneth J. Palmer, who made a great contribution to the theory of exponential dichotomy in the framework of ODEs (see, for instance, [192, 193]). Later, Liviu H. Popescu extended results on exponential dichotomies to infinite dimensional Banach spaces [197]. Good references containing results on dichotomies for nonautonomous systems are [117] and [200]. The reader may also want to consult the paper [10] by D. D. Bainov, S. I. Kostadinov, and N. Van Minh for the study of dichotomies in the setting of impulsive differential equations (IDEs), and the paper [43] by Charles V. Coffman and Juan Jorge Schäffer in the setting of difference equations.

In this chapter, following the ideas of [48], we present the theory of exponential dichotomy for a class of linear generalized ODEs. All the results presented in this chapter were borrowed from [29]. In Section 13.1, we recall the concept of exponential dichotomy, and we present sufficient conditions to obtain exponential dichotomy. Section 13.2 deals with the relation between the exponential dichotomy of a linear generalized ODE and the existence of bounded solutions of its perturbation. Lastly, we apply the results to MDEs in Section 13.3 and to IDEs in Section 13.4.

Generalized Ordinary Differential Equations in Abstract Spaces and Applications, First Edition. Edited by Everaldo M. Bonotto, Márcia Federson, and Jaqueline G. Mesquita. © 2021 John Wiley & Sons, Inc. Published 2021 by John Wiley & Sons, Inc.

13.1 Basic Theory for Generalized ODEs

This section concerns the theory of exponential dichotomy in the context of linear generalized ODEs. The reader may find all the definitions and results presented here in [29, Section 3].

Let f: J → 𝕄ₙ(ℝ) be a continuous function defined on an interval J ⊂ ℝ, where 𝕄ₙ(ℝ) denotes the space of all n × n real matrices, and consider the following linear ordinary differential equation

x′ = f(t)x.    (13.1)

Let X(t) be a fundamental matrix of (13.1). In the classical theory of ordinary differential equations, the ODE (13.1) is said to admit an exponential dichotomy on J if there exist positive constants K₁, K₂, α₁, and α₂, and a constant matrix P such that P² = P, satisfying the following conditions:

● ‖X(t)PX⁻¹(s)‖ ⩽ K₁e^{−α₁(t−s)} for all t, s ∈ J, with t ⩾ s;
● ‖X(t)(I − P)X⁻¹(s)‖ ⩽ K₂e^{−α₂(s−t)} for all t, s ∈ J, with s ⩾ t.

The matrix P is a projection onto the stable subspace of ℝⁿ, and I − P is a projection onto the unstable subspace of ℝⁿ.

One of our interests is to handle the theory of dichotomies for ODEs of type (13.1) whose function f takes values in a Banach space and is integrable in a more general sense. Here we consider the nonabsolute integration in the senses of Perron and Perron–Stieltjes, following the notation and terminology of Chapter 1. In order to obtain results on dichotomies for a class of ODEs, IDEs, MDEs, MFDEs, etc., we translate the notion of dichotomies to a class of linear generalized ODEs, following the notation and terminology of Chapters 2, 3, 4, and 6.

Given a Banach space X and an interval J ⊂ ℝ, we consider the following linear generalized ODE

dx/dτ = D[A(t)x],    (13.2)

where A: J → L(X) is an operator satisfying the following general conditions:

(H₁) A ∈ BV([a, b], L(X)), for every interval [a, b] ⊂ J;
(H₂) A satisfies condition (Δ) presented in Definition 6.2.


Let t₀ ∈ J and x₀ ∈ X. Under conditions (H₁) and (H₂), the linear generalized ODE (13.2) with initial condition x(t₀) = x₀ admits a unique global forward solution, see Remark 6.4. According to Theorem 6.6, the linear generalized ODE (13.2) admits a fundamental operator U: J × J → L(X). For a fixed arbitrary point t₀ ∈ J, we define the operators 𝒰, 𝒰⁻¹: J → L(X) by

𝒰(t) = U(t, t₀)  and  𝒰⁻¹(t) = U(t₀, t),

for all t ∈ J. Thus, if x: J → X is the solution of the linear generalized ODE (13.2) with initial condition x(s) = x₀, x₀ ∈ X and s ∈ J, then x(t) = 𝒰(t)𝒰⁻¹(s)x₀ for all t ∈ J.

Next, we present the concept of exponential dichotomy.

Definition 13.1: The linear generalized ODE (13.2) admits an exponential dichotomy on J if there exist positive constants K₁, K₂, α₁, and α₂ and a projection P: X → X (P² = P) such that

(i) ‖𝒰(t)P𝒰⁻¹(s)‖ ⩽ K₁e^{−α₁(t−s)}, for all t, s ∈ J, with t ⩾ s;
(ii) ‖𝒰(t)(I − P)𝒰⁻¹(s)‖ ⩽ K₂e^{−α₂(s−t)}, for all t, s ∈ J, with s ⩾ t.

The proof of the next result is inspired in [48], see page 13 there.

Lemma 13.2: Assume that the operator A, which satisfies conditions (H₁) and (H₂), is defined on [0, ∞). If the linear generalized ODE (13.2) admits an exponential dichotomy on the interval [a, ∞), a ⩾ 0, then it also admits an exponential dichotomy on [0, ∞).

Proof. By Definition 13.1, there are positive constants K₁, K₂, α₁, α₂, and a projection P such that

‖𝒰(t)P𝒰⁻¹(s)‖ ⩽ K₁e^{−α₁(t−s)}, for all t, s ∈ [a, ∞), with t ⩾ s;
‖𝒰(t)(I − P)𝒰⁻¹(s)‖ ⩽ K₂e^{−α₂(s−t)}, for all t, s ∈ [a, ∞), with s ⩾ t.

Now, by Proposition 6.8, item (ii), there exists M ⩾ 1 such that

‖𝒰(t)‖ ⩽ M  and  ‖𝒰⁻¹(t)‖ ⩽ M, for all t ∈ [0, a].

Let t, s ∈ [0, ∞). Consider M₁ = K₁e^{α₁a}M⁴. We have some cases to consider:

● If 0 ⩽ a ⩽ s ⩽ t, then

‖𝒰(t)P𝒰⁻¹(s)‖ ⩽ K₁e^{−α₁(t−s)} ⩽ M₁e^{−α₁(t−s)}.


● If 0 ⩽ s ⩽ a ⩽ t, then

‖𝒰(t)P𝒰⁻¹(s)‖ ⩽ ‖𝒰(t)P𝒰⁻¹(a)‖ ‖𝒰(a)𝒰⁻¹(s)‖ ⩽ K₁e^{−α₁(t−a)}M² ⩽ K₁e^{−α₁(t−a)}M²e^{α₁s} ⩽ M₁e^{−α₁(t−s)}.

● If 0 ⩽ s ⩽ t ⩽ a, then

‖𝒰(t)P𝒰⁻¹(s)‖ ⩽ ‖𝒰(t)𝒰⁻¹(a)‖ ‖𝒰(a)P𝒰⁻¹(a)‖ ‖𝒰(a)𝒰⁻¹(s)‖ ⩽ M²K₁M² ⩽ M⁴K₁e^{α₁(a−t+s)} = M₁e^{−α₁(t−s)}.

Hence, ‖𝒰(t)P𝒰⁻¹(s)‖ ⩽ M₁e^{−α₁(t−s)}, for all t, s ∈ [0, ∞) with t ⩾ s. Analogously, taking M₂ = K₂e^{α₂a}M⁴, we obtain ‖𝒰(t)(I − P)𝒰⁻¹(s)‖ ⩽ M₂e^{−α₂(s−t)} for all t, s ∈ [0, ∞) with t ⩽ s. ◽
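Lemma 13.2 can be illustrated with a scalar sketch on hypothetical data: take the fundamental operator U(t, s) = exp(−(max(t − 1, 0) − max(s − 1, 0))), which corresponds to a coefficient that vanishes on [0, 1) and equals 1 afterwards, so decay only starts at a = 1. On [1, ∞) the dichotomy holds with P = I, K₁ = 1, α₁ = 1, and 𝒰, 𝒰⁻¹ are bounded by M = 1 on [0, 1]; the constant M₁ = K₁e^{α₁a}M⁴ from the proof then works on all of [0, ∞):

```python
import math

a_pt = 1.0   # the dichotomy is only assumed on [a, ∞) with a = 1
def U(t, s):
    # scalar fundamental operator: constant on [0, a], decay rate 1 afterwards
    return math.exp(-(max(t - a_pt, 0.0) - max(s - a_pt, 0.0)))

K1, alpha1, M = 1.0, 1.0, 1.0              # dichotomy data on [a, ∞); ‖𝒰(t)‖, ‖𝒰⁻¹(t)‖ ⩽ M on [0, a]
M1 = K1 * math.exp(alpha1 * a_pt) * M ** 4  # the constant produced in the proof

grid = [0.1 * k for k in range(61)]
ok = all(U(t, s) <= M1 * math.exp(-alpha1 * (t - s)) + 1e-12
         for t in grid for s in grid if t >= s)
print(ok)
```

The grid check confirms the extended estimate ‖𝒰(t)P𝒰⁻¹(s)‖ ⩽ M₁e^{−α₁(t−s)} on [0, ∞), at the cost of the larger constant M₁.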

The first characterization of an exponential dichotomy for the linear generalized ODE (13.2) is given by the next result.

Proposition 13.3: The linear generalized ODE (13.2) admits an exponential dichotomy on J if and only if there are positive constants L₁, L₂, M, β₁, and β₂, and a projection P: X → X such that, for all ξ ∈ X, the following estimates hold:

(i) ‖𝒰(t)Pξ‖ ⩽ L₁e^{−β₁(t−s)}‖𝒰(s)Pξ‖, for all t, s ∈ J, with t ⩾ s;
(ii) ‖𝒰(t)(I − P)ξ‖ ⩽ L₂e^{−β₂(s−t)}‖𝒰(s)(I − P)ξ‖, for all t, s ∈ J, with t ⩽ s;
(iii) ‖𝒰(t)P𝒰⁻¹(t)‖ ⩽ M, for all t ∈ J.

Proof. Suppose the linear generalized ODE (13.2) has an exponential dichotomy on the interval J. By Definition 13.1, there are positive constants K₁, K₂, α₁, α₂, and a projection P such that

‖𝒰(t)P𝒰⁻¹(s)‖ ⩽ K₁e^{−α₁(t−s)}, for all t, s ∈ J, with t ⩾ s;    (13.3)
‖𝒰(t)(I − P)𝒰⁻¹(s)‖ ⩽ K₂e^{−α₂(s−t)}, for all t, s ∈ J, with s ⩾ t.    (13.4)

Set L₁ = M = K₁, L₂ = K₂, β₁ = α₁, and β₂ = α₂. Now, let ξ ∈ X. Using the estimates (13.3) and (13.4), we obtain

‖𝒰(t)Pξ‖ = ‖𝒰(t)P𝒰⁻¹(s)𝒰(s)Pξ‖ ⩽ K₁e^{−α₁(t−s)}‖𝒰(s)Pξ‖ = L₁e^{−β₁(t−s)}‖𝒰(s)Pξ‖,


for all t, s ∈ J with t ⩾ s, and

‖𝒰(t)(I − P)ξ‖ = ‖𝒰(t)(I − P)𝒰⁻¹(s)𝒰(s)(I − P)ξ‖ ⩽ K₂e^{−α₂(s−t)}‖𝒰(s)(I − P)ξ‖ = L₂e^{−β₂(s−t)}‖𝒰(s)(I − P)ξ‖,

for all t, s ∈ J with t ⩽ s. By (13.3), we have

‖𝒰(t)P𝒰⁻¹(t)‖ ⩽ K₁e^{−α₁(t−t)} = K₁ = M,

for all t ∈ J.

For the sufficient condition, let us assume the existence of positive constants L₁, L₂, M, β₁, and β₂ satisfying conditions (i), (ii), and (iii). Set K₁ = ML₁ and K₂ = (1 + M)L₂. If t, s ∈ J with t ⩾ s, we obtain

‖𝒰(t)P𝒰⁻¹(s)‖ = sup_{‖ξ‖⩽1} ‖𝒰(t)P𝒰⁻¹(s)ξ‖ ⩽ sup_{‖ξ‖⩽1} L₁e^{−β₁(t−s)}‖𝒰(s)P𝒰⁻¹(s)ξ‖ ⩽ ML₁e^{−β₁(t−s)} = K₁e^{−β₁(t−s)}.

On the other hand, if t, s ∈ J with t ⩽ s, then

‖𝒰(t)(I − P)𝒰⁻¹(s)‖ = sup_{‖ξ‖⩽1} ‖𝒰(t)(I − P)𝒰⁻¹(s)ξ‖ ⩽ sup_{‖ξ‖⩽1} L₂e^{−β₂(s−t)}‖𝒰(s)(I − P)𝒰⁻¹(s)ξ‖ ⩽ (1 + M)L₂e^{−β₂(s−t)} = K₂e^{−β₂(s−t)}. ◽
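A minimal finite-dimensional sanity check of Definition 13.1 and of condition (iii) in Proposition 13.3, on hypothetical data: for 𝒰(t) = diag(e^{−t}, e^{t}) and P the projection onto the first (stable) coordinate, 𝒰(t)P𝒰⁻¹(s) = diag(e^{−(t−s)}, 0) and 𝒰(t)(I − P)𝒰⁻¹(s) = diag(0, e^{−(s−t)}), so the dichotomy holds with K₁ = K₂ = 1 and α₁ = α₂ = 1.

```python
import math

# 𝒰(t) = diag(e^{-t}, e^{t}); P projects onto the first (stable) coordinate, so
# 𝒰(t)P𝒰⁻¹(s) = diag(e^{-(t-s)}, 0) and 𝒰(t)(I-P)𝒰⁻¹(s) = diag(0, e^{-(s-t)})
def stable_norm(t, s):
    return math.exp(-(t - s))
def unstable_norm(t, s):
    return math.exp(-(s - t))

K1 = K2 = 1.0
a1 = a2 = 1.0
grid = [0.1 * k for k in range(51)]
ok = all(stable_norm(t, s) <= K1 * math.exp(-a1 * (t - s)) + 1e-12
         for t in grid for s in grid if t >= s)
ok = ok and all(unstable_norm(t, s) <= K2 * math.exp(-a2 * (s - t)) + 1e-12
                for t in grid for s in grid if s >= t)
ok = ok and all(stable_norm(t, t) <= 1.0 + 1e-12 for t in grid)  # (iii) with M = 1
print(ok)
```

Both estimates of Definition 13.1 and the uniform bound (iii) hold on the grid, as the closed-form expressions predict.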

In what follows, we present the concept of bounded growth for linear generalized ODEs.

Definition 13.4: Let J ⊂ ℝ be an interval such that sup J = ∞. The linear generalized ODE (13.2) admits bounded growth on J if there are constants h > 0 and C ⩾ 1 such that, for any solution x: J → X of (13.2), we have

‖x(t)‖ ⩽ C‖x(s)‖ for s, t ∈ J, with s ⩽ t ⩽ s + h.

Example 13.5: Consider the linear generalized ODE

dx/dτ = D[A(t)x],    (13.5)

where A: [0, ∞) → ℝ is given by

A(t) = −∫₀ᵗ a(s) ds, for all t ⩾ 0.


Assume that a: [0, ∞) → ℝ is Perron integrable and ‖a‖∞ = sup_{s∈[0,∞)} |a(s)| < ∞. The function A satisfies conditions (H₁)–(H₂) and, moreover,

U(t, s) = exp(−∫ₛᵗ a(r) dr), t, s ∈ [0, ∞),

is the fundamental operator of the linear generalized ODE (13.5). Note that, given h > 0, if 0 ⩽ t − s ⩽ h, then ‖U(t, s)‖ ⩽ e^{‖a‖∞h}. Now, let x: [0, ∞) → ℝ be a solution of the linear generalized ODE (13.5), that is, x(t) = U(t, t₀)x₀ for all t ∈ [0, ∞) and x₀ ∈ ℝ. Then,

|x(t)| = |U(t, s)U(s, t₀)x₀| ⩽ e^{‖a‖∞h}|x(s)|,

whenever 0 ⩽ s ⩽ t ⩽ s + h. Hence, Eq. (13.5) admits bounded growth on [0, ∞).

Lemma 13.6: Let J ⊂ ℝ be an interval such that sup J = ∞. The linear generalized ODE (13.2) admits bounded growth on J if and only if there exist constants C ⩾ 1 and μ ⩾ 0 such that

‖𝒰(t)𝒰⁻¹(s)‖ ⩽ Ce^{μ(t−s)} for all s, t ∈ J, with t ⩾ s.

Proof. Assume that (13.2) admits bounded growth on J. By Definition 13.4, there exist constants C ⩾ 1 and h > 0 such that

‖𝒰(t)𝒰⁻¹(s)ξ‖ ⩽ C‖𝒰(r)𝒰⁻¹(s)ξ‖ for ξ ∈ X, s, t, r ∈ J, with r ⩽ t ⩽ r + h.

Thus,

‖𝒰(t)𝒰⁻¹(s)‖ ⩽ C‖𝒰(r)𝒰⁻¹(s)‖ for s, t, r ∈ J, with r ⩽ t ⩽ r + h.    (13.6)

Let μ′ ⩾ 0 be such that e^{μ′} = C and set μ = μ′/h. Now, let t, s ∈ J with t ⩾ s. There exists n ∈ ℕ such that t ∈ [s + nh, s + (n + 1)h). Using (13.6), we have

‖𝒰(t)𝒰⁻¹(s)‖ ⩽ C‖𝒰(t − h)𝒰⁻¹(s)‖ ⩽ · · · ⩽ Cⁿ⁺¹‖𝒰(s)𝒰⁻¹(s)‖ = Cⁿ⁺¹ = C(e^{μ′})ⁿ = C(e^{μ})^{hn} ⩽ Ce^{μ(t−s)},

which completes the necessary condition.

Now, let us assume that there are constants C ⩾ 1 and μ ⩾ 0 such that

‖𝒰(t)𝒰⁻¹(s)‖ ⩽ Ce^{μ(t−s)} for all s, t ∈ J, with t ⩾ s.


Let x: J → X be a solution of (13.2). Fix r ∈ J. Then x(t) = 𝒰(t)𝒰⁻¹(r)x(r) for all t ∈ J. Consider h > 0 and take t, s ∈ J with s ⩽ t ⩽ s + h. Then

‖x(t)‖ = ‖𝒰(t)𝒰⁻¹(r)x(r)‖ ⩽ ‖𝒰(t)𝒰⁻¹(s)‖ ‖𝒰(s)𝒰⁻¹(r)x(r)‖ ⩽ Ce^{μ(t−s)}‖x(s)‖ ⩽ Ce^{μh}‖x(s)‖.

Since Ce^{μh} ⩾ 1, we conclude that the linear generalized ODE (13.2) admits bounded growth on J. ◽

Next, we state an auxiliary result. This result can be found in [48], pages 11–14, and in [205].

Lemma 13.7: Let J = [a, ∞) and a ⩾ 0. Assume that there exist positive constants α₁, α₂, K₁, and K₂, and a projection P: X → X satisfying the conditions:

(i) ‖𝒰(t)Pξ‖ ⩽ K₁e^{−α₁(t−s)}‖𝒰(s)Pξ‖, for all t ⩾ s ⩾ a, ξ ∈ X;
(ii) ‖𝒰(t)(I − P)ξ‖ ⩽ K₂e^{−α₂(s−t)}‖𝒰(s)(I − P)ξ‖, for all a ⩽ t ⩽ s, ξ ∈ X.

Then the following conditions hold:

(a) given θ ∈ (0, 1), there is T > 0 such that, for any solution x: J → X of (13.2), we have

‖x(s)‖ ⩽ θ sup_{|u−s|⩽T} ‖x(u)‖, for all s ⩾ T;

(b) if the linear generalized ODE (13.2) admits bounded growth on [a, ∞), a ⩾ 0, then there exists M ⩾ 0 such that ‖𝒰(t)P𝒰⁻¹(t)‖ ⩽ M, for all t ∈ J = [a, ∞).

Proof. Taking K = max{K₁, K₂} and α = min{α₁, α₂}, we may rewrite conditions (i) and (ii) as

(i′) ‖𝒰(t)Pξ‖ ⩽ Ke^{−α(t−s)}‖𝒰(s)Pξ‖, for all t ⩾ s ⩾ a, ξ ∈ X;
(ii′) ‖𝒰(t)(I − P)ξ‖ ⩽ Ke^{−α(s−t)}‖𝒰(s)(I − P)ξ‖, for all a ⩽ t ⩽ s, ξ ∈ X.

(a) For a given θ ∈ (0, 1), let T₁ ⩾ 0 be such that K⁻¹e^{αT₁} − Ke^{−αT₁} ⩾ 2θ⁻¹. Let x: J → X be a solution of the generalized ODE (13.2) and s ⩾ a + T₁. Define

x₁(t) = 𝒰(t)P𝒰⁻¹(t)x(t)  and  x₂(t) = 𝒰(t)(I − P)𝒰⁻¹(t)x(t),

for all t ∈ J. Note that x(t) = x₁(t) + x₂(t) and

x(t) = 𝒰(t)𝒰⁻¹(s)x(s) = 𝒰(t)𝒰⁻¹(s)(x₁(s) + x₂(s)) = 𝒰(t)P𝒰⁻¹(s)x(s) + 𝒰(t)(I − P)𝒰⁻¹(s)x(s).


Case 1: ‖x₂(s)‖ ⩾ ‖x₁(s)‖. Then

‖x(s + T₁)‖ = ‖𝒰(s + T₁)P𝒰⁻¹(s)x(s) + 𝒰(s + T₁)(I − P)𝒰⁻¹(s)x(s)‖
  ⩾ ‖𝒰(s + T₁)(I − P)𝒰⁻¹(s)x(s)‖ − ‖𝒰(s + T₁)P𝒰⁻¹(s)x(s)‖
  ⩾ K⁻¹e^{αT₁}‖𝒰(s)(I − P)𝒰⁻¹(s)x(s)‖ − Ke^{−αT₁}‖𝒰(s)P𝒰⁻¹(s)x(s)‖
  = K⁻¹e^{αT₁}‖x₂(s)‖ − Ke^{−αT₁}‖x₁(s)‖
  ⩾ (K⁻¹e^{αT₁} − Ke^{−αT₁})‖x₂(s)‖
  ⩾ 2θ⁻¹‖x₂(s)‖ ⩾ θ⁻¹‖x(s)‖.

Hence, ‖x(s)‖ ⩽ θ‖x(s + T₁)‖. Consequently,

s ⩾ a + T₁ ⇒ ‖x(s)‖ ⩽ θ sup_{|u−s|⩽T₁} ‖x(u)‖.

Case 2: ‖x₂(s)‖ ⩽ ‖x₁(s)‖. Then

‖x(s − T₁)‖ = ‖𝒰(s − T₁)P𝒰⁻¹(s)x(s) + 𝒰(s − T₁)(I − P)𝒰⁻¹(s)x(s)‖
  ⩾ ‖𝒰(s − T₁)P𝒰⁻¹(s)x(s)‖ − ‖𝒰(s − T₁)(I − P)𝒰⁻¹(s)x(s)‖
  ⩾ K⁻¹e^{αT₁}‖𝒰(s)P𝒰⁻¹(s)x(s)‖ − Ke^{−αT₁}‖𝒰(s)(I − P)𝒰⁻¹(s)x(s)‖
  = K⁻¹e^{αT₁}‖x₁(s)‖ − Ke^{−αT₁}‖x₂(s)‖
  ⩾ (K⁻¹e^{αT₁} − Ke^{−αT₁})‖x₁(s)‖
  ⩾ 2θ⁻¹‖x₁(s)‖ ⩾ θ⁻¹‖x(s)‖.

Thus, ‖x(s)‖ ⩽ θ‖x(s − T₁)‖. Hence,

s ⩾ a + T₁ ⇒ ‖x(s)‖ ⩽ θ sup_{|u−s|⩽T₁} ‖x(u)‖.

Taking T = a + T₁, we conclude the proof of item (a).

(b) Since the linear generalized ODE (13.2) admits bounded growth on [a, ∞), it follows by Lemma 13.6 that there exist constants C ⩾ 1 and μ ⩾ 0 such that

‖𝒰(t)𝒰⁻¹(s)‖ ⩽ Ce^{μ(t−s)} for all t, s ∈ [a, ∞), with t ⩾ s.

Let t ∈ [a, ∞) and h > 0. Using conditions (i′) and (ii′), we obtain

‖𝒰(t + h)P𝒰⁻¹(t)‖ ⩽ Ke^{−αh}‖𝒰(t)P𝒰⁻¹(t)‖  and
‖𝒰(t + h)(I − P)𝒰⁻¹(t)‖ ⩾ K⁻¹e^{αh}‖𝒰(t)(I − P)𝒰⁻¹(t)‖.

Define ρ = ρ(t) = ‖𝒰(t)(I − P)𝒰⁻¹(t)‖, σ = σ(t) = ‖𝒰(t)P𝒰⁻¹(t)‖ and consider h such that γ = K⁻¹e^{αh} − Ke^{−αh} > 0. Note that |ρ − σ| ⩽ 1 and

Ce^{μh}‖ρ⁻¹𝒰(t)(I − P)𝒰⁻¹(t) + σ⁻¹𝒰(t)P𝒰⁻¹(t)‖
  ⩾ ‖𝒰(t + h)𝒰⁻¹(t)[ρ⁻¹𝒰(t)(I − P)𝒰⁻¹(t) + σ⁻¹𝒰(t)P𝒰⁻¹(t)]‖
  = ‖ρ⁻¹𝒰(t + h)(I − P)𝒰⁻¹(t) + σ⁻¹𝒰(t + h)P𝒰⁻¹(t)‖
  ⩾ ρ⁻¹‖𝒰(t + h)(I − P)𝒰⁻¹(t)‖ − σ⁻¹‖𝒰(t + h)P𝒰⁻¹(t)‖
  ⩾ K⁻¹e^{αh} − Ke^{−αh} = γ.

Consequently,

γC⁻¹e^{−μh} ⩽ ‖ρ⁻¹𝒰(t)(I − P)𝒰⁻¹(t) + σ⁻¹𝒰(t)P𝒰⁻¹(t)‖
  = ‖σ⁻¹I + (ρ⁻¹ − σ⁻¹)𝒰(t)(I − P)𝒰⁻¹(t)‖
  ⩽ σ⁻¹ + |ρ⁻¹ − σ⁻¹|ρ
  = σ⁻¹(1 + ρσ|ρ⁻¹ − σ⁻¹|)
  = σ⁻¹(1 + |ρσ(ρ⁻¹ − σ⁻¹)|) = σ⁻¹(1 + |σ − ρ|) ⩽ 2σ⁻¹.

Therefore, ‖𝒰(t)P𝒰⁻¹(t)‖ = σ ⩽ 2γ⁻¹Ce^{μh}, as desired.

◽

Our goal now is to present a result that gives us sufficient conditions to obtain exponential dichotomy for the linear generalized ODE (13.2) on [0, ∞). Before that, we present an auxiliary result.

Lemma 13.8: Let J = [0, ∞). Assume that there are T > 0, C > 1, and 0 < θ < 1 such that any solution x: [0, ∞) → X of the linear generalized ODE (13.2) satisfies the conditions:

(i) ‖x(t)‖ ⩽ C‖x(s)‖ for 0 ⩽ s ⩽ t ⩽ s + T;
(ii) ‖x(t)‖ ⩽ θ sup_{|u−t|⩽T} ‖x(u)‖ for all t ⩾ T.

If x: [0, ∞) → X is a bounded nontrivial solution of (13.2), then

sup_{|u−t|⩽nT} ‖x(u)‖ ⩽ θ sup_{|u−t|⩽(n+1)T} ‖x(u)‖, n ∈ ℕ,

provided t ⩾ (n + 1)T.

Proof. Let x be a bounded nontrivial solution of the linear generalized ODE (13.2). Note that x is defined on [0, ∞) as conditions (H₁) and (H₂) hold. Define the auxiliary function μ(s) = sup_{u⩾s} ‖x(u)‖, for all s ⩾ 0. For t ⩾ s + T, it follows by condition (ii) that

‖x(t)‖ ⩽ θ sup_{|u−t|⩽T} ‖x(u)‖ ⩽ θ sup_{u⩾s} ‖x(u)‖ = θμ(s).

Now, let t ⩾ (n + 1)T and take η ⩾ 0 such that |η − t| ⩽ nT. Note that η ⩾ t − nT ⩾ T. Using condition (ii),

‖x(η)‖ ⩽ θ sup_{η−T⩽u⩽η+T} ‖x(u)‖ ⩽ θ sup_{t−(n+1)T⩽u⩽t+(n+1)T} ‖x(u)‖ = θ sup_{|u−t|⩽(n+1)T} ‖x(u)‖.

Thus,

sup_{|η−t|⩽nT} ‖x(η)‖ ⩽ θ sup_{|u−t|⩽(n+1)T} ‖x(u)‖,

and the result is proved. ◽

Theorem 13.9: Let J = [0, ∞) and assume that

V₀ = {x₀ ∈ X: ‖x₀‖ = 1 and x(⋅, x₀) is unbounded}

is a compact subset of X, where x(t, x₀), t ⩾ 0, is the solution of (13.2) such that x(0, x₀) = x₀. Assume that there are T > 0, C > 1, and 0 < θ < 1 such that any solution x: [0, ∞) → X of the linear generalized ODE (13.2) satisfies the conditions:

(i) ‖x(t)‖ ⩽ C‖x(s)‖ for 0 ⩽ s ⩽ t ⩽ s + T;
(ii) ‖x(t)‖ ⩽ θ sup_{|u−t|⩽T} ‖x(u)‖ for all t ⩾ T.

Moreover, assume that for each x₀ ∈ V₀, there is an increasing sequence {tₙ^{x₀}}ₙ∈ℕ ⊂ ℝ₊, with tₙ₊₁^{x₀} ⩽ tₙ^{x₀} + T, n ∈ ℕ, such that

‖x(t, x₀)‖ < θ⁻ⁿC, for all t ∈ [0, tₙ^{x₀}), and ‖x(tₙ^{x₀}, x₀)‖ ⩾ θ⁻ⁿC, n ∈ ℕ.

Then the linear generalized ODE (13.2) admits an exponential dichotomy on [0, ∞).

Proof. The proof follows the ideas of [48, Proposition 1, page 14]. At first, we prove the following auxiliary conditions:

(I) if x: [0, ∞) → X is a bounded nontrivial solution of (13.2), then there are K > 1 and α > 0 such that ‖x(t)‖ ⩽ Ke^{−α(t−s)}‖x(s)‖, for 0 ⩽ s ⩽ t < ∞;


(II) if x: [0, ∞) → X is an unbounded solution of (13.2) such that ‖x(0)‖ = 1, then there are K > 1 and α > 0 such that ‖x(t)‖ ⩽ Ke^{−α(s−t)}‖x(s)‖ for t₁^{x(0)} ⩽ t ⩽ s < ∞.

Let us prove that (I) holds. For that, let x: [0, ∞) → X be a bounded nontrivial solution of the linear generalized ODE (13.2). Define K = θ⁻¹C > 1 and α = −T⁻¹ ln θ > 0. Let t ⩾ s and take n ∈ ℕ₀ such that s + nT ⩽ t < s + (n + 1)T. In the case when n = 0, that is, s ⩽ t < s + T, taking into account that condition (i) holds and that (t − s)/T < 1, we obtain

‖x(t)‖ ⩽ C‖x(s)‖ = θ⁻¹θC‖x(s)‖ ⩽ θ⁻¹Cθ^{(t−s)/T}‖x(s)‖ = Ke^{−α(t−s)}‖x(s)‖.

However, if n ⩾ 1, then the following two inequalities are valid:

t − nT ⩾ s  and  (t − s)/T < n + 1.

Using Lemma 13.8 and condition (ii), we get

‖x(t)‖ ⩽ θ sup_{|u−t|⩽T} ‖x(u)‖ ⩽ θ² sup_{|u−t|⩽2T} ‖x(u)‖ ⩽ · · · ⩽ θⁿ sup_{|u−t|⩽nT} ‖x(u)‖
  ⩽ θⁿ sup_{u⩾s} ‖x(u)‖ ⩽ θⁿC‖x(s)‖ = θⁿ⁺¹θ⁻¹C‖x(s)‖ ⩽ θ⁻¹Cθ^{(t−s)/T}‖x(s)‖ = Ke^{−α(t−s)}‖x(s)‖.

Thus, the statement (I) is proved.

Thus, the statement (I) is proved. Now, we verify that (II) holds. Set K = 𝜃 −1 C > 1 and 𝛼 = −T −1 ln 𝜃 > 0. Let x∶[0, ∞) → X be an unbounded solution of (13.2) satisfying the initial condition ‖x(0)‖ = 1. Condition (i) implies that ‖x(t)‖ ⩽ C ‖x(0)‖ = C < 𝜃 −1 C,

0 ⩽ t ⩽ T.

(13.9)

Further, according to the hypothesis, we may obtain an increasing sequence {tnx(0) }n∈ℕ ⊂ ℝ+ , which we denote simply by {tn }n∈ℕ , such that tn+1 ⩽ tn + T for all n ∈ ℕ, ‖x(t)‖ < 𝜃 −n C for t ∈ [0, tn ) and ‖x(t )‖ ⩾ 𝜃 −n C, ‖ n‖

n ∈ ℕ.

This last inequality together with (13.9) implies t1 > T. Consequently, T < t1 < t2 < · · · < tn < · · ·. If tn → 𝜆 as n → ∞, for some 𝜆, then lims→𝜆− x(s) = ∞ which is a contradiction once the solution x is regulated. Hence, tn → ∞ as n → ∞. Let t1 ⩽ t ⩽ s. We may assume, without loss of generality, that tm ⩽ t < tm+1 and tn ⩽ s < tn+1 , for some m, n ∈ ℕ. In this way, we obtain s−t < n − m + 1 since T s − t < tn+1 − tm ⩽ tn + T − tm ⩽ tn−1 + 2T − tm < · · · < tm + (n − m + 1)T − tm .


Since condition (i) holds and taking into account the properties of the sequence {tₙ}ₙ∈ℕ ⊂ ℝ₊, we conclude that

‖x(t)‖ < θ⁻ᵐ⁻¹C = θⁿ⁻ᵐCθ⁻ⁿ⁻¹ ⩽ θⁿ⁻ᵐ‖x(tₙ₊₁)‖ ⩽ Cθ⁻¹θⁿ⁻ᵐ⁺¹‖x(s)‖ ⩽ Cθ⁻¹θ^{(s−t)/T}‖x(s)‖ = Ke^{−α(s−t)}‖x(s)‖,

and we conclude the proof of statement (II).

In order to show that the linear generalized ODE (13.2) admits an exponential dichotomy on [0, ∞), let us consider the following subspace of X:

X₁ = {ξ ∈ X: x(t, ξ) is bounded in [0, ∞)}.

Let X₂ be a subspace of X such that X = X₁ ⊕ X₂. By hypothesis, for each ξ ∈ X₂ such that ‖ξ‖ = 1, there exists t₁ = t₁(ξ) satisfying the following conditions:

‖x(t₁, ξ)‖ ⩾ θ⁻¹C  and  ‖x(t, ξ)‖ < θ⁻¹C, for 0 ⩽ t < t₁.

Claim. The set {t₁(ξ): ξ ∈ X₂ and ‖ξ‖ = 1} is bounded.

Indeed, suppose to the contrary that there is a sequence {ξₙ}ₙ∈ℕ ⊂ X₂, ‖ξₙ‖ = 1, such that t₁⁽ⁿ⁾ = t₁(ξₙ) → ∞ as n → ∞. We may assume that ξₙ → ξ₀ as n → ∞, for some ξ₀ ∈ X₂, since V₀ is compact. Note that ‖ξ₀‖ = 1. Consequently, for every t ⩾ 0, we have

x(t, ξₙ) = U(t, 0)ξₙ → U(t, 0)ξ₀ = x(t, ξ₀), as n → ∞,

where U: J × J → L(X) is the fundamental operator of (13.2). But this implies that

‖x(t, ξ₀)‖ ⩽ θ⁻¹C, for 0 ⩽ t < ∞,

as ‖x(t, ξₙ)‖ < θ⁻¹C for all 0 ⩽ t < t₁⁽ⁿ⁾ and all n ∈ ℕ, which is a contradiction, because then x(⋅, ξ₀) would be bounded and 0 ≠ ξ₀ would belong to X₁ ∩ X₂. Hence, {t₁(ξ): ξ ∈ X₂ and ‖ξ‖ = 1} is bounded, that is, T₁ = sup_{ξ∈X₂, ‖ξ‖=1} t₁(ξ) < ∞.

Using condition (II), for each a ∈ X₂, a ≠ 0, we obtain

‖x(t, a)‖ = ‖a‖ ‖x(t, a/‖a‖)‖ ⩽ ‖a‖ Ke^{−α(s−t)}‖x(s, a/‖a‖)‖ = Ke^{−α(s−t)}‖x(s, a)‖,

for T₁ ⩽ t ⩽ s < ∞. Taking the projection P ∈ L(X) from the decomposition X = X₁ ⊕ X₂ onto the subspace X₁, we conclude for each ξ ∈ X that

‖𝒰(t)Pξ‖ ⩽ Ke^{−α(t−s)}‖𝒰(s)Pξ‖, t ⩾ s ⩾ T₁, and
‖𝒰(t)(I − P)ξ‖ ⩽ Ke^{−α(s−t)}‖𝒰(s)(I − P)ξ‖, s ⩾ t ⩾ T₁.


Now, Lemma 13.7 assures the existence of a constant L > 0 such that ‖𝒰(t)P𝒰⁻¹(t)‖ ⩽ L, for all t ∈ [T₁, ∞). Proposition 13.3 shows that the linear generalized ODE (13.2) admits an exponential dichotomy on the interval [T₁, ∞) and, finally, Lemma 13.2 allows us to conclude that (13.2) admits an exponential dichotomy on [0, ∞). ◽

Remark 13.10: In Example 13.5, if a(t) ⩾ a₀ > 0, for all t ⩾ 0, then conditions (i) and (ii) from Lemma 13.7 are fulfilled with P = I, K₁ = 1, and α₁ = a₀ (here K₂ and α₂ can be taken arbitrary). Consequently, by Lemma 13.7, item (a), given θ ∈ (0, 1), there is T > 0 such that, for any solution x: [0, ∞) → ℝ of (13.5), we have

‖x(s)‖ ⩽ θ sup_{|u−s|⩽T} ‖x(u)‖, for all s ⩾ T.

Moreover, since P = I, we conclude that

V₀ = {x₀ ∈ ℝ: ‖x₀‖ = 1 and x(⋅, x₀) is unbounded} = ∅.

Then, by Theorem 13.9, the generalized ODE (13.5) admits an exponential dichotomy on [0, ∞).
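Remark 13.10 can be illustrated numerically. The sketch below uses the hypothetical choice a(t) = 1 + 0.5 sin t, so that a₀ = 0.5 and ‖a‖∞ = 1.5, and checks on a grid both the bounded-growth estimate of Example 13.5 and the dichotomy estimate |U(t, s)| ⩽ e^{−a₀(t−s)}, which here can even be verified directly with P = I, K₁ = 1, α₁ = a₀.

```python
import math

a0 = 0.5
sup_a = 1.5                               # ‖a‖_∞ for a(t) = 1 + 0.5 sin t
def U(t, s):
    # U(t, s) = exp(-∫_s^t a(r) dr) for a(t) = 1 + 0.5 sin t, in closed form
    return math.exp(-((t - s) + 0.5 * (math.cos(s) - math.cos(t))))

grid = [0.25 * k for k in range(41)]
# dichotomy estimate with P = I, K1 = 1, α1 = a0
dich = all(U(t, s) <= math.exp(-a0 * (t - s)) + 1e-12
           for t in grid for s in grid if t >= s)
# bounded growth: |x(t)| ⩽ e^{‖a‖_∞ h} |x(s)| for 0 ⩽ t - s ⩽ h (here h = 1)
grow = all(U(t, s) <= math.exp(sup_a * 1.0) + 1e-12
           for t in grid for s in grid if 0.0 <= t - s <= 1.0)
print(dich and grow)
```

Both estimates hold on the grid; the dichotomy bound follows from ∫ₛᵗ a(r) dr ⩾ a₀(t − s), exactly as in the remark.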

13.2 Boundedness and Dichotomies

In the classical theory of ordinary differential equations, there exists a relation between exponential dichotomy and boundedness of solutions. In this section, we deal with this relation in the context of generalized ODEs, that is, we investigate the relation between the exponential dichotomy of the generalized ODE

dx/dτ = D[A(t)x]    (13.10)

and the bounded solutions of

dx/dτ = D[A(t)x + f(t)].    (13.11)

Throughout this section, we consider J = ℝ, the operator A: ℝ → L(X) satisfies the conditions

(H₁′) A ∈ BV([a, b], L(X)) for every interval [a, b] ⊂ ℝ,
(H₂′) A satisfies condition (Δ) presented in Definition 6.2 with J = ℝ,

and f: ℝ → X is a function which will be specified later. The definitions and the results presented in this section can be found in [29, Sec. 4].

Definition 13.11: We say that the linear generalized ODE (13.10) satisfies condition (D), whenever the operator A satisfies both conditions (H₁′) and (H₂′) and the


generalized ODE (13.10) admits an exponential dichotomy, with positive constants K₁ and K₂, positive exponents α₁ and α₂ and a projection P ∈ L(X), that is,

‖𝒰(t)P𝒰⁻¹(s)‖ ⩽ K₁e^{−α₁(t−s)}, t ⩾ s, and
‖𝒰(t)(I − P)𝒰⁻¹(s)‖ ⩽ K₂e^{−α₂(s−t)}, s ⩾ t,    (13.12)

where 𝒰: ℝ → L(X) is given by 𝒰(t) = U(t, 0), t ∈ ℝ, and U: ℝ × ℝ → L(X) is the fundamental operator associated with the linear generalized ODE (13.10).

Proposition 13.12: The unique bounded solution of the linear generalized ODE (13.10), satisfying condition (D), is the null solution.

Proof. Let x: ℝ → X be a bounded solution of (13.10) and set ξ = x(0). There exists K > 0 such that ‖x(t)‖ ⩽ K for all t ∈ ℝ. Using Proposition 6.7, we conclude that the unique solution of the IVP

dx/dτ = D[A(t)x],  x(0) = ξ,

is given by x(t) = 𝒰(t)ξ, t ∈ ℝ. Note that

x(t) = 𝒰(t)ξ = 𝒰(t)Pξ + 𝒰(t)(I − P)ξ, t ∈ ℝ.

Set x₁(t) = 𝒰(t)Pξ and x₂(t) = 𝒰(t)(I − P)ξ, t ∈ ℝ. Taking into account the inequalities in (13.12), we obtain

‖x₁(t)‖ = ‖𝒰(t)Pξ‖ = ‖𝒰(t)P𝒰⁻¹(0)ξ‖ ⩽ K₁e^{−α₁t}‖ξ‖, t ⩾ 0, and    (13.13)
‖x₂(t)‖ = ‖𝒰(t)(I − P)ξ‖ = ‖𝒰(t)(I − P)𝒰⁻¹(0)ξ‖ ⩽ K₂e^{α₂t}‖ξ‖, t ⩽ 0.    (13.14)

Consequently,

‖x₂(t)‖ = ‖x(t) − x₁(t)‖ ⩽ ‖x(t)‖ + ‖x₁(t)‖ ⩽ K + K₁‖ξ‖, t ⩾ 0,    (13.15)
‖x₁(t)‖ = ‖x(t) − x₂(t)‖ ⩽ ‖x(t)‖ + ‖x₂(t)‖ ⩽ K + K₂‖ξ‖, t ⩽ 0.    (13.16)

The inequalities (13.13), (13.14), (13.15), and (13.16) say that x₁ and x₂ are bounded functions in ℝ. Thus, there exists L > 0 such that

‖x₁(t)‖ + ‖x₂(t)‖ ⩽ L,

for all t ∈ ℝ. Using this boundedness of x₁ and x₂ and conditions (13.12), we obtain

‖Pξ‖ = ‖x₁(0)‖ = ‖𝒰(0)Pξ‖ = ‖𝒰(0)P𝒰⁻¹(t)𝒰(t)Pξ‖ ⩽ ‖𝒰(0)P𝒰⁻¹(t)‖ ‖x₁(t)‖ ⩽ K₁e^{α₁t}L,


for all t ⩽ 0, and

‖(I − P)ξ‖ = ‖x₂(0)‖ = ‖𝒰(0)(I − P)ξ‖ = ‖𝒰(0)(I − P)𝒰⁻¹(t)𝒰(t)(I − P)ξ‖ ⩽ ‖𝒰(0)(I − P)𝒰⁻¹(t)‖ ‖x₂(t)‖ ⩽ K₂e^{−α₂t}L,

for all t ⩾ 0. Letting t → −∞ in the first estimate and t → ∞ in the second, the previous estimates take us to Pξ = 0 and (I − P)ξ = 0, that is, ξ = Pξ + (I − P)ξ = 0. Therefore, x(t) = 0 for all t ∈ ℝ, as we have uniqueness of a solution. ◽

The next result gives us sufficient conditions for the nonhomogeneous linear generalized ODE (13.11) not to admit more than one bounded solution.

Corollary 13.13: Assume that the linear generalized ODE (13.10) satisfies condition (D) and f ∈ G(ℝ, X). Then the nonhomogeneous linear generalized ODE (13.11) admits at most one bounded solution.

Proof. Suppose x, y: ℝ → X are two bounded solutions of the nonhomogeneous generalized ODE (13.11). Using Theorem 6.14 with F(x, t) = f(t), these solutions can be written as

x(t) = 𝒰(t)x(0) + f(t) − f(0) − ∫₀ᵗ d_s[𝒰(t)𝒰⁻¹(s)](f(s) − f(0)), t ∈ ℝ, and
y(t) = 𝒰(t)y(0) + f(t) − f(0) − ∫₀ᵗ d_s[𝒰(t)𝒰⁻¹(s)](f(s) − f(0)), t ∈ ℝ.

Recall that we are assuming in this section that 𝒰(t) = U(t, 0), t ∈ ℝ. Now, define z: ℝ → X by z(t) = x(t) − y(t), t ∈ ℝ, which is clearly bounded. Since

z(t) = 𝒰(t)x(0) − 𝒰(t)y(0) = 𝒰(t)z(0),

for all t ∈ ℝ, z is a bounded solution of the linear generalized ODE (13.10). By Proposition 13.12, z(t) = x(t) − y(t) = 0 for all t ∈ ℝ, that is, x(t) = y(t) for all t ∈ ℝ, and the proof is complete. ◽

In the sequel, we present two results which give us sufficient conditions for the nonhomogeneous linear generalized ODE (13.11) to admit a unique bounded solution. These results are stated in Propositions 13.14 and 13.15.

Proposition 13.14: Assume that the linear generalized ODE (13.10) satisfies condition (D), f ∈ G(ℝ, X), the Perron–Stieltjes integrals

∫_{−∞}^{t} d_s[𝒰(t)P𝒰⁻¹(s)](f(s) − f(0))  and

∫_{t}^{∞} d_s[Φ(t)(I − P)Φ⁻¹(s)](f(s) − f(0))

exist for each t ∈ ℝ, and the functions

t ∈ ℝ ↦ ∫_{−∞}^{t} d_s[Φ(t)PΦ⁻¹(s)](f(s) − f(0)) ∈ X  and

t ∈ ℝ ↦ ∫_{t}^{∞} d_s[Φ(t)(I − P)Φ⁻¹(s)](f(s) − f(0)) ∈ X

are bounded, where Φ(t) = U(t, 0), t ∈ ℝ. Then the nonhomogeneous linear generalized ODE (13.11) admits a unique bounded solution.

Proof. Denote by x: ℝ → X the solution of the initial value problem

dx/dτ = D[A(t)x + f(t)],

x(0) = −∫_{−∞}^{0} d_s[PΦ⁻¹(s)](f(s) − f(0)) + ∫_{0}^{∞} d_s[(I − P)Φ⁻¹(s)](f(s) − f(0)).

According to Theorem 6.14, x(t) is given by

x(t) = Φ(t)x(0) + f(t) − f(0) − ∫_{0}^{t} d_s[Φ(t)Φ⁻¹(s)](f(s) − f(0)),  t ∈ ℝ.

Since Φ(t)Φ⁻¹(s) = Φ(t)PΦ⁻¹(s) + Φ(t)(I − P)Φ⁻¹(s), we have

x(t) = Φ(t)x(0) + f(t) − f(0) − ∫_{0}^{t} d_s[Φ(t)PΦ⁻¹(s)](f(s) − f(0)) − ∫_{0}^{t} d_s[Φ(t)(I − P)Φ⁻¹(s)](f(s) − f(0))

= Φ(t)x(0) + f(t) − f(0) − Φ(t)[ ∫_{0}^{t} d_s[PΦ⁻¹(s)](f(s) − f(0)) ± ∫_{−∞}^{0} d_s[PΦ⁻¹(s)](f(s) − f(0)) + ∫_{0}^{t} d_s[(I − P)Φ⁻¹(s)](f(s) − f(0)) ± ∫_{0}^{∞} d_s[(I − P)Φ⁻¹(s)](f(s) − f(0)) ]

= Φ(t)x(0) + f(t) − f(0) − Φ(t)[ ∫_{−∞}^{t} d_s[PΦ⁻¹(s)](f(s) − f(0)) − ∫_{t}^{∞} d_s[(I − P)Φ⁻¹(s)](f(s) − f(0)) + x(0) ]

= f(t) − f(0) − ∫_{−∞}^{t} d_s[Φ(t)PΦ⁻¹(s)](f(s) − f(0)) + ∫_{t}^{∞} d_s[Φ(t)(I − P)Φ⁻¹(s)](f(s) − f(0)).

The last equality shows that x is bounded, since by hypothesis the functions

t ∈ ℝ ↦ ∫_{−∞}^{t} d_s[Φ(t)PΦ⁻¹(s)](f(s) − f(0)) ∈ X  and

t ∈ ℝ ↦ ∫_{t}^{∞} d_s[Φ(t)(I − P)Φ⁻¹(s)](f(s) − f(0)) ∈ X

are bounded. In conclusion, Corollary 13.13 assures that x is the only bounded solution of (13.11). ◽

Proposition 13.15: Assume that the linear generalized ODE (13.10) satisfies condition (D), f ∈ G(ℝ, X), the Perron–Stieltjes integrals

W₁(t) = ∫_{−∞}^{t} Φ(t)PΦ⁻¹(s) d[f(s)]  and  W₂(t) = ∫_{t}^{∞} Φ(t)(I − P)Φ⁻¹(s) d[f(s)]

exist for all t ∈ ℝ, the functions t ∈ ℝ ↦ W₁(t) ∈ X and t ∈ ℝ ↦ W₂(t) ∈ X are bounded, and the function k: ℝ → X given, for t > 0, by

k(t) = Φ(t)( ∑_{0⩽τ<t} Δ⁺Φ⁻¹(τ)Δ⁺f(τ) − ∑_{0<τ⩽t} Δ⁻Φ⁻¹(τ)Δ⁻f(τ) )

is bounded. Then the nonhomogeneous linear generalized ODE (13.11) admits a unique bounded solution.

Proof. (i) For a fixed t ∈ ℝ, set xₙ = ∫_{−n}^{t} Φ(t)PΦ⁻¹(r) d[f(r)], n ∈ ℕ. For n ⩾ m > |t|, using Corollary 1.48, we obtain

‖xₙ − xₘ‖ ⩽ ∫_{−n}^{−m} ‖Φ(t)PΦ⁻¹(r)‖ d[var_{−n}^{r}(f)] ⩽ (var_{−n}^{−m}(f)) sup_{r∈[−n,−m]} ‖Φ(t)PΦ⁻¹(r)‖ ⩽ K₁ V_f sup_{r∈[−n,−m]} e^{−α₁(t−r)} ⩽ K₁ V_f e^{−α₁(t+m)}.

Then ‖xₙ − xₘ‖ → 0 as n, m → ∞, which shows that {xₙ}_{n∈ℕ} is a Cauchy sequence. Thus, the Perron–Stieltjes integral W₁(t) = ∫_{−∞}^{t} Φ(t)PΦ⁻¹(r) d[f(r)] exists. Since

‖xₙ‖ = ‖ ∫_{−n}^{t} Φ(t)PΦ⁻¹(r) d[f(r)] ‖ ⩽ K₁ V_f sup_{r∈[−n,t]} e^{−α₁(t−r)} ⩽ K₁ V_f,

for every n ∈ ℕ, we get ‖W₁(t)‖ ⩽ K₁ V_f. Since the bound K₁ V_f does not depend on the choice of t ∈ ℝ, we obtain the existence of the Perron–Stieltjes integral W₁(t) for every t ∈ ℝ and the boundedness of the map ℝ ∋ t ↦ W₁(t).

(iii) Looking at the proof of [209, Theorem 6.15], we can also obtain the following estimates:

‖U(t, s)‖ ⩽ C e^{C var_a^b(A)}  for all t, s ∈ [a, b],

var_a^b(U(⋅, s)) ⩽ C e^{C var_a^b(A)} var_a^b(A)  and  var_a^b(U(t, ⋅)) ⩽ C² e^{2C var_a^b(A)} var_a^b(A).

Claim. var_η^t(Φ(t)PΦ⁻¹(⋅)) ⩽ L₁ e^{−β₁(t−η)} ‖P‖ C³ e^{3C V_A} V_A, for every t ⩾ η, where the constants L₁ and β₁ come from Proposition 13.3.

Indeed, let d: η = s₀ ⩽ s₁ ⩽ ⋯ ⩽ s_{|d|} = t be a division of [η, t] and recall that Φ⁻¹(t) = U(0, t), t ∈ ℝ. Then

∑_{i=1}^{|d|} ‖Φ(t)PΦ⁻¹(sᵢ) − Φ(t)PΦ⁻¹(s_{i−1})‖ ⩽ ∑_{i=1}^{|d|} ‖Φ(t)P‖ ‖Φ⁻¹(sᵢ) − Φ⁻¹(s_{i−1})‖ ⩽ L₁ e^{−β₁(t−η)} ‖Φ(η)P‖ C² e^{2C var_η^t(A)} var_η^t(A) ⩽ L₁ e^{−β₁(t−η)} ‖P‖ C³ e^{3C V_A} V_A,

and the claim is proved.

Now, for a fixed t ∈ ℝ, define xₙ = ∫_{−n}^{t} d_s[Φ(t)PΦ⁻¹(s)](f(s) − f(0)), n ∈ ℕ. Since f is bounded, there is M > 0 such that ‖f(t)‖ ⩽ M for all t ∈ ℝ. As in item (i), {xₙ}_{n∈ℕ} is a Cauchy sequence, because for n ⩾ m > |t| we have

‖xₙ − xₘ‖ ⩽ var_{−n}^{−m}(Φ(t)PΦ⁻¹(⋅)) sup_{s∈[−n,−m]} ‖f(s) − f(0)‖ ⩽ 2M var_{−n}^{t}(Φ(t)PΦ⁻¹(⋅)) ⩽ 2M L₁ e^{−β₁(t+n)} ‖P‖ C³ e^{3C V_A} V_A,

which shows that ‖xₙ − xₘ‖ → 0 as n, m → ∞. Hence, using the arbitrariness of t, we can conclude that the integral ∫_{−∞}^{t} d_s[Φ(t)PΦ⁻¹(s)](f(s) − f(0)) exists for every t ∈ ℝ. It is not difficult to see that

sup_{t∈ℝ} ‖ ∫_{−∞}^{t} d_s[Φ(t)PΦ⁻¹(s)](f(s) − f(0)) ‖ ⩽ 2M L₁ ‖P‖ C³ e^{3C V_A} V_A.

Similarly, we show the existence of the integral ∫_{t}^{∞} d_s[Φ(t)(I − P)Φ⁻¹(s)](f(s) − f(0)) and the boundedness

sup_{t∈ℝ} ‖ ∫_{t}^{∞} d_s[Φ(t)(I − P)Φ⁻¹(s)](f(s) − f(0)) ‖ ⩽ 2M L₂ (1 + ‖P‖) C³ e^{3C V_A} V_A. ◽
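The mechanism behind Proposition 13.12 can be sanity-checked in a hypothetical finite-dimensional example (ours, not the book's): with Φ(t) = diag(e^{−t}, e^{t}) and P the projection onto the first coordinate, the dichotomy estimates force every solution Φ(t)ξ with ξ ≠ 0 to be unbounded on ℝ.

```python
import math

# Toy illustration (assumed data, not from the book): Phi(t) = diag(e^{-t}, e^{t})
# admits an exponential dichotomy with P = diag(1, 0); x(t) = Phi(t)·xi is
# bounded on all of R only for xi = 0, as in Proposition 13.12.
def sup_norm(xi1, xi2, ts):
    # sup over the sample grid of ||Phi(t) xi|| for xi = (xi1, xi2)
    return max(math.hypot(xi1 * math.exp(-t), xi2 * math.exp(t)) for t in ts)

ts = [t / 10.0 for t in range(-200, 201)]        # grid on [-20, 20]
assert sup_norm(0.0, 0.0, ts) == 0.0             # the null solution is bounded
assert sup_norm(1.0, 0.0, ts) > 1e8              # P·xi != 0 blows up as t -> -inf
assert sup_norm(0.0, 1.0, ts) > 1e8              # (I-P)·xi != 0 blows up as t -> +inf
print("only xi = 0 yields a bounded solution on the grid")
```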

13.3 Applications to MDEs

In this section, we apply the results on exponential dichotomies to a class of measure differential equations (MDEs). We use the correspondence between generalized ODEs and MDEs to obtain the results. The reader may find all the concepts and results presented in this section in [29, Section 5]. We consider J ⊂ ℝ an interval and X a Banach space. We study the following MDE in integral form:

x(t) = x(t₀) + ∫_{t₀}^{t} 𝒜(s)x(s) ds + ∫_{t₀}^{t} ℬ(s)x(s) du(s),  (13.18)

where t₀ ∈ J and the functions 𝒜: J → L(X), ℬ: J → L(X), and u: J → ℝ satisfy the following conditions:

(C1) 𝒜(⋅) is Perron integrable over J;
(C2) u is of locally bounded variation in J and left-continuous on J ⧵ {inf J};
(C3) ℬ(⋅) is Perron–Stieltjes integrable with respect to u over J;
(C4) there exists a locally Perron integrable function m₁: J → ℝ such that, for each a, b ∈ J with a ⩽ b, we have ‖∫_a^b 𝒜(s) ds‖ ⩽ ∫_a^b m₁(s) ds;

(C5) there is a locally Perron–Stieltjes integrable function m₂: J → ℝ with respect to u such that, for each a, b ∈ J with a ⩽ b, we have ‖∫_a^b ℬ(s) du(s)‖ ⩽ ∫_a^b m₂(s) du(s);
(C6) for every point t of discontinuity of u, we have ( I + lim_{r→t⁺} ∫_t^r ℬ(s) du(s) )⁻¹ ∈ L(X).

Let t₀ ∈ [α, β]. According to Definition 3.2, a function x: [α, β] ⊂ J → X is a solution of the MDE (13.18) on [α, β] if the Perron integral ∫_α^β 𝒜(s)x(s) ds and the Perron–Stieltjes integral ∫_α^β ℬ(s)x(s) du(s) exist and equality (13.18) holds for all t ∈ [α, β].

Next, we present the correspondence between the MDE (13.18) and its associated generalized ODE; see Theorem 4.14. The reader may also consult [209, Theorem 5.17].

Theorem 13.18: Let t₀ ∈ [α, β]. The function x: [α, β] ⊂ J → X is a solution of (13.18), with initial condition x(t₀) = x₀, if and only if x is a solution of

dx/dτ = D[A(t)x + F(t)x],  x(t₀) = x₀,

where A(t) = ∫_{t₀}^{t} 𝒜(s) ds and F(t) = ∫_{t₀}^{t} ℬ(s) du(s) for all t ∈ [α, β].
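To see concretely how the Stieltjes term in (13.18) acts, here is a minimal numerical sketch (our toy example, not from the book): a scalar MDE with 𝒜(s) ≡ a, ℬ(s) ≡ b, and u(s) a left-continuous unit jump at s = 1, integrated by a forward scheme and compared with the closed-form solution.

```python
import math

# Assumed toy data (ours): scalar MDE with A(s) = a, B(s) = b, u(s) = H(s - 1)
# (left-continuous unit jump at s = 1). The jump multiplies the state by (1 + b):
#   x(t) = e^{a t} x0            for t <= 1,
#   x(t) = (1 + b) e^{a t} x0    for t > 1.
a, b, x0, T = 0.5, -0.3, 1.0, 2.0
N = 100_000
h = T / N
x = x0
jump_done = False
for i in range(N):
    t = i * h
    x += a * x * h                       # absolutely continuous part: a*x ds
    if not jump_done and t + h >= 1.0:   # atom of du at s = 1: jump b*x(1-)
        x += b * x
        jump_done = True
closed = (1.0 + b) * math.exp(a * T) * x0
assert abs(x - closed) / abs(closed) < 1e-2
print(round(x, 4), round(closed, 4))
```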

Theorem 13.19 concerns the existence and uniqueness of a solution of the MDE (13.18).

Theorem 13.19: Let 𝒜 and ℬ satisfy conditions (C1)–(C6). Then the MDE (13.18) admits a unique solution defined in J.

Proof. Consider the operators A: J → L(X) and F: J → L(X) defined in Theorem 13.18. We need to show that A + F satisfies conditions (H1)–(H2). At first, we show that A + F satisfies condition (H1). Indeed, let a, b ∈ J, a < b, and let d: a = t₀ ⩽ t₁ ⩽ ⋯ ⩽ t_{|d|} = b be a division of [a, b]. Since

∑_{i=1}^{|d|} ‖A(tᵢ) + F(tᵢ) − A(t_{i−1}) − F(t_{i−1})‖ ⩽ ∑_{i=1}^{|d|} ‖∫_{t_{i−1}}^{tᵢ} 𝒜(s) ds‖ + ∑_{i=1}^{|d|} ‖∫_{t_{i−1}}^{tᵢ} ℬ(s) du(s)‖ ⩽ ∑_{i=1}^{|d|} ∫_{t_{i−1}}^{tᵢ} m₁(s) ds + ∑_{i=1}^{|d|} ∫_{t_{i−1}}^{tᵢ} m₂(s) du(s) = ∫_a^b m₁(s) ds + ∫_a^b m₂(s) du(s),

we conclude that

var_a^b(A + F) ⩽ ∫_a^b m₁(s) ds + ∫_a^b m₂(s) du(s) < ∞.

Hence, condition (H1) is satisfied. Now, let us prove that A + F satisfies condition (H2). First, note that A ∈ C(J, L(X)), since

‖A(t₂) − A(t₁)‖ = ‖∫_{t₁}^{t₂} 𝒜(s) ds‖ ⩽ ∫_{t₁}^{t₂} m₁(s) ds,

for all t₁, t₂ ∈ J with t₁ ⩽ t₂. On the other hand,

‖F(t₂) − F(t₁)‖ = ‖∫_{t₁}^{t₂} ℬ(s) du(s)‖ ⩽ ∫_{t₁}^{t₂} m₂(s) du(s),

for all t₁, t₂ ∈ J with t₁ ⩽ t₂, which implies that F is left-continuous on J ⧵ {inf J}, since u has the same property. Thus, for t ∈ J, we have A(t⁻) = A(t⁺) = A(t) and F(t) = F(t⁻), which shows that

(I − [(A + F)(t) − (A + F)(t⁻)])⁻¹ = I ∈ L(X)

and

(I + [(A + F)(t⁺) − (A + F)(t)])⁻¹ = (I + F(t⁺) − F(t))⁻¹ = ( I + lim_{α→t⁺} ∫_t^{α} ℬ(s) du(s) )⁻¹ ∈ L(X),

where we used condition (C6). Hence, A + F satisfies condition (H2). According to Proposition 6.7 and Theorem 13.18, the result is proved. ◽

In the next result, we exhibit the fundamental operator associated with the MDE (13.18).

Theorem 13.20: There exists a unique operator V: J × J → L(X) such that

V(t, s) = I + ∫_s^t 𝒜(r)V(r, s) dr + ∫_s^t ℬ(r)V(r, s) du(r),  (13.19)

for all t, s ∈ J. Moreover, for each fixed s ∈ J, V(⋅, s) is an operator of locally bounded variation. For t₀ ∈ J, the function x(t) = V(t, t₀)x̃ is the unique solution of (13.18) satisfying the initial condition x(t₀) = x̃, with x̃ ∈ X. The operator V is called the fundamental operator of the MDE (13.18).

Proof. By Theorem 6.6, there exists a unique operator U: J × J → L(X) such that

U(t, s) = I + ∫_s^t d[A(r) + F(r)]U(r, s),  t, s ∈ J,

where A: J → L(X) and F: J → L(X) are the operators defined in Theorem 13.18. Note that A + F satisfies conditions (H1)–(H2) by the proof of Theorem 13.19. Moreover, for each s ∈ J, U(⋅, s) is an operator of locally bounded variation in J. Using Proposition 1.67, we obtain

∫_s^t d[A(r)]U(r, s) = ∫_s^t d[A(r) − A(s)]U(r, s) = ∫_s^t 𝒜(r)U(r, s) dr

and

∫_s^t d[F(r)]U(r, s) = ∫_s^t d[F(r) − F(s)]U(r, s) = ∫_s^t ℬ(r)U(r, s) du(r),

that is,

U(t, s) = I + ∫_s^t 𝒜(r)U(r, s) dr + ∫_s^t ℬ(r)U(r, s) du(r),  t, s ∈ J.

Hence, V: J × J → L(X) defined by V(t, s) = U(t, s), t, s ∈ J, is the unique operator that satisfies (13.19). Lastly, let x(t) = V(t, t₀)x̃ with x̃ ∈ X and t ∈ J. Since

∫_{t₀}^{t} 𝒜(s)x(s) ds + ∫_{t₀}^{t} ℬ(s)x(s) du(s) = ∫_{t₀}^{t} 𝒜(s)V(s, t₀)x̃ ds + ∫_{t₀}^{t} ℬ(s)V(s, t₀)x̃ du(s) = (V(t, t₀) − I)x̃ = x(t) − x̃,

and we are assuming that conditions (C1)–(C5) hold, it follows from Theorem 13.19 that x(t) = V(t, t₀)x̃ is the unique solution of (13.18) such that x(t₀) = x̃. ◽

As a consequence of Theorem 13.20, we have the following result.

Corollary 13.21: Let V: J × J → L(X) be the fundamental operator of (13.18) and U: J × J → L(X) be the fundamental operator of the corresponding linear generalized ODE

dx/dτ = D[A(t)x + F(t)x],  (13.20)

where A(t) = ∫_{t₀}^{t} 𝒜(s) ds and F(t) = ∫_{t₀}^{t} ℬ(s) du(s), for all t ∈ J. Then U(t, s) = V(t, s) for all t, s ∈ J.
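The two-parameter family V(t, s) behaves like a product integral. In a hypothetical scalar example (the names v, a, b below are ours) with a single atom of u at r = 1, the evolution property V(t, r) = V(t, s)V(s, r) mentioned via Proposition 6.8 can be checked directly:

```python
import math

# Assumed scalar toy model (ours): A(t) = a·t plus one unit jump of u at r = 1
# with weight b. The fundamental "operator" is then
#   v(t, s) = e^{a (t - s)} · (1 + b)^{#jumps of u in (s, t]}.
a, b = 0.4, 0.25

def v(t, s):
    jumps = 1 if s < 1.0 <= t else 0     # left-continuity: atom counted in (s, t]
    return math.exp(a * (t - s)) * (1.0 + b) ** jumps

# evolution (cocycle) property v(t, r) = v(t, s)·v(s, r) for r <= s <= t
for r, s, t in [(0.0, 0.5, 2.0), (0.0, 1.5, 3.0), (1.2, 2.0, 2.5)]:
    assert abs(v(t, r) - v(t, s) * v(s, r)) < 1e-12
print("evolution property verified")
```

The half-open counting convention `(s, t]` mirrors the left-continuity assumed for u in (C2); with any other convention the cocycle identity would fail at the jump.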

Let V be the fundamental operator of (13.18). Define the operator Ψ: J → L(X) by

Ψ(t) = V(t, t₀),  t ∈ J,

where t₀ ∈ J, and denote Ψ⁻¹(t) = V(t₀, t), t ∈ J. In the next definition, we present the concept of exponential dichotomy for the MDE (13.18).

Definition 13.22: The MDE (13.18) admits an exponential dichotomy on J if there exist positive constants K₁, K₂, α₁, α₂ and a projection P: X → X such that

(i) ‖Ψ(t)PΨ⁻¹(s)‖ ⩽ K₁ e^{−α₁(t−s)} for all t, s ∈ J with t ⩾ s;
(ii) ‖Ψ(t)(I − P)Ψ⁻¹(s)‖ ⩽ K₂ e^{−α₂(s−t)} for all t, s ∈ J with s ⩾ t.

Remark 13.23: The operator V satisfies conditions (i), (ii), (iii), (iv), and (v) from Proposition 6.8.

Remark 13.24: If Φ(t) = U(t, t₀), t ∈ J, where U: J × J → L(X) is the fundamental operator of (13.20), then Ψ(t) = Φ(t) for all t ∈ J.

Using Theorem 13.18, which gives the correspondence between the solutions of (13.18) and (13.20), and Remark 13.24, we can state the following equivalence between the dichotomies of the MDE (13.18) and its corresponding generalized ODE.

Proposition 13.25: The MDE (13.18) admits an exponential dichotomy on J if and only if the generalized ODE (13.20) admits an exponential dichotomy on J.

By Propositions 13.3 and 13.25, we obtain the following characterization of an exponential dichotomy for the MDE (13.18).

Proposition 13.26: The MDE (13.18) admits an exponential dichotomy on J if and only if there exist positive constants L₁, L₂, L, α₁, α₂ and a projection P: X → X such that, for all ξ ∈ X, the following estimates hold:

(i) ‖Ψ(t)Pξ‖ ⩽ L₁ e^{−α₁(t−s)} ‖Ψ(s)Pξ‖, for all t, s ∈ J with t ⩾ s;
(ii) ‖Ψ(t)(I − P)ξ‖ ⩽ L₂ e^{−α₂(s−t)} ‖Ψ(s)(I − P)ξ‖, for all t, s ∈ J with s ⩾ t;
(iii) ‖Ψ(t)PΨ⁻¹(t)‖ ⩽ L, for all t ∈ J.
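The estimates of Definition 13.22 can be verified numerically in a toy diagonal system (our assumed example; Ψ and P below are not the book's data):

```python
import numpy as np

# Assumed finite-dimensional illustration: for a constant-coefficient system with
# A = diag(-1, 1) and B = 0, the fundamental operator is Psi(t) = diag(e^{-t}, e^{t});
# with P = diag(1, 0), Definition 13.22 holds with K1 = K2 = 1 and a1 = a2 = 1.
I = np.eye(2)
P = np.diag([1.0, 0.0])

def Psi(t):
    return np.diag([np.exp(-t), np.exp(t)])

def Psi_inv(t):
    return np.diag([np.exp(t), np.exp(-t)])

# (i)  ||Psi(t) P Psi^{-1}(s)|| <= e^{-(t-s)} for t >= s
for t, s in [(2.0, 0.0), (5.0, 1.0), (3.0, 3.0)]:
    assert np.linalg.norm(Psi(t) @ P @ Psi_inv(s), 2) <= np.exp(-(t - s)) + 1e-12

# (ii) ||Psi(t)(I - P) Psi^{-1}(s)|| <= e^{-(s-t)} for s >= t
for s, t in [(2.0, 0.0), (5.0, 1.0)]:
    assert np.linalg.norm(Psi(t) @ (I - P) @ Psi_inv(s), 2) <= np.exp(-(s - t)) + 1e-12
print("Definition 13.22 estimates verified for the toy system")
```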

Using Theorem 13.18, Remark 13.24, and Theorem 13.9, we state in the next result sufficient conditions for exponential dichotomy on [0, ∞).

Theorem 13.27: Let J = [0, ∞) and assume that

V₀ = { x₀ ∈ X : ‖x₀‖ = 1 and x(t, x₀) is unbounded }

is a compact set in X, where x(t, x₀) denotes the solution of (13.18) satisfying the condition x(0) = x₀. Assume that there exist constants T > 0, C > 1, and 0 < θ < 1 such that any solution x: [0, ∞) → X of (13.18) satisfies the conditions:

(i) ‖x(t)‖ ⩽ C ‖x(s)‖ for all 0 ⩽ s ⩽ t ⩽ s + T;
(ii) ‖x(t)‖ ⩽ θ sup_{|u−t|⩽T} ‖x(u)‖ for all t ⩾ T.

Moreover, assume that for each x₀ ∈ V₀ there is an increasing sequence {tₙ^{x₀}}_{n∈ℕ} ⊂ ℝ₊ with t_{n+1}^{x₀} ⩽ tₙ^{x₀} + T for all n ∈ ℕ, such that ‖x(t, x₀)‖ < θ^{−n} C for t ∈ [0, tₙ^{x₀}) and ‖x(tₙ^{x₀}, x₀)‖ ⩾ θ^{−n} C, n ∈ ℕ. Then the MDE (13.18) admits an exponential dichotomy on J.

From now on, we study the relation between the exponential dichotomy of the MDE

x(t) = x₀ + ∫_{t₀}^{t} 𝒜(s)x(s) ds + ∫_{t₀}^{t} ℬ(s)x(s) du(s),

and the bounded solutions of

x(t) = x₀ + ∫_{t₀}^{t} 𝒜(s)x(s) ds + ∫_{t₀}^{t} ℬ(s)x(s) du(s) + ∫_{t₀}^{t} h(s) du(s),  (13.21)

where h: J → X is a Perron–Stieltjes integrable function with respect to u over J and t₀ ∈ J. We assume that 𝒜, ℬ, and u satisfy conditions (C1)–(C6).

Remark 13.28: Let t₀ ∈ J and consider the functions

A(t) = ∫_{t₀}^{t} 𝒜(s) ds,  F(t) = ∫_{t₀}^{t} ℬ(s) du(s),  and  g(t) = ∫_{t₀}^{t} h(s) du(s) + g(t₀),

for t ∈ J. Consequently, the generalized ODE corresponding to the MDE (13.21) is given by

dx/dτ = D[(A(t) + F(t))x + g(t)],  t ∈ J.  (13.22)

In addition,

dx/dτ = D[(A(t) + F(t))x],  t ∈ J,  (13.23)

is the linear generalized ODE corresponding to the MDE (13.21).

Lemma 13.29: A function x: J → X is a solution of the MDE (13.21) if and only if x is a solution of the generalized ODE (13.22).

Proof. According to Proposition 1.67, we have

∫_{t₀}^{t} 𝒜(s)x(s) ds + ∫_{t₀}^{t} ℬ(s)x(s) du(s) + ∫_{t₀}^{t} h(s) du(s) = ∫_{t₀}^{t} d[A(s)]x(s) + ∫_{t₀}^{t} d[F(s)]x(s) + g(t) − g(t₀),

for all t ∈ J. Consequently, we obtain the desired result. ◽

Lemma 13.30: Let J = ℝ and u be of locally bounded variation. Assume that the MDE (13.18) admits an exponential dichotomy on ℝ. Then the linear generalized ODE (13.23) satisfies condition (D).

Proof. Proposition 13.25 gives us the equivalence between the dichotomies of the MDE (13.18) and its corresponding generalized ODE (13.23). Since A + F satisfies conditions (H1′) and (H2′) (see the proof of Theorem 13.19), the linear generalized ODE (13.23) satisfies condition (D) as presented in Definition 13.11. ◽

Suppose that the MDE (13.18) admits an exponential dichotomy on ℝ. Using Lemmas 13.29 and 13.30, and taking into account that g ∈ G(ℝ, X), we conclude from Corollary 13.13 that the generalized ODE (13.22) admits at most one bounded solution. Using again the correspondence between the solutions of the MDE (13.21) and of the generalized ODE (13.22) (see Lemma 13.29), we conclude that the MDE (13.21) admits at most one bounded solution. This is stated in the next proposition.

Proposition 13.31: Let J = ℝ and u be of locally bounded variation. Assume that the MDE (13.18) admits an exponential dichotomy on ℝ. Then the MDE (13.21) admits at most one bounded solution.

In what follows, we give sufficient conditions for the perturbed MDE (13.21) to admit a unique bounded solution.

Proposition 13.32: Let J = ℝ and u be of locally bounded variation in ℝ. Assume that (13.18) admits an exponential dichotomy on ℝ, the Perron–Stieltjes integrals

Z₁(t) = ∫_{−∞}^{t} d_s[Ψ(t)PΨ⁻¹(s)]( ∫_{0}^{s} h(r) du(r) )  and  Z₂(t) = ∫_{t}^{∞} d_s[Ψ(t)(I − P)Ψ⁻¹(s)]( ∫_{0}^{s} h(r) du(r) )

exist for each t ∈ ℝ, and the functions t ∈ ℝ ↦ Z₁(t) ∈ X and t ∈ ℝ ↦ Z₂(t) ∈ X are bounded, where Ψ(t) = V(t, 0) for all t ∈ ℝ. Then the perturbed MDE (13.21) admits a unique bounded solution.

Proof. Since (13.18) admits an exponential dichotomy on ℝ, it follows by Lemma 13.30 that the generalized ODE (13.23) satisfies condition (D). Let U: ℝ × ℝ → L(X) be the fundamental operator of (13.23) and V: ℝ × ℝ → L(X) be the fundamental operator of (13.18). Define Φ(t) = U(t, 0) and Ψ(t) = V(t, 0), t ∈ ℝ (t₀ = 0). Then, by Remark 13.24, Ψ(t) = Φ(t) for all t ∈ ℝ. Thus, we obtain

Z₁(t) = ∫_{−∞}^{t} d_s[Φ(t)PΦ⁻¹(s)](g(s) − g(0))  and  Z₂(t) = ∫_{t}^{∞} d_s[Φ(t)(I − P)Φ⁻¹(s)](g(s) − g(0)),

for all t ∈ ℝ. Now, Proposition 13.14 assures that the nonhomogeneous linear generalized ODE (13.22) admits a unique bounded solution. Consequently, through the correspondence between the solutions of the MDE (13.21) and of Eq. (13.22), we obtain the result. ◽

In Proposition 13.33, we present another result concerning sufficient conditions for the perturbed MDE (13.21) to admit a unique bounded solution.

Proposition 13.33: Let J = ℝ and u be of locally bounded variation in ℝ. Assume that (13.18) admits an exponential dichotomy on ℝ, the Perron–Stieltjes integrals

Z₃(t) = ∫_{−∞}^{t} Ψ(t)PΨ⁻¹(s)h(s) du(s)  and  Z₄(t) = ∫_{t}^{∞} Ψ(t)(I − P)Ψ⁻¹(s)h(s) du(s)

exist for all t ∈ ℝ, the functions t ∈ ℝ ↦ Z₃(t) ∈ X and t ∈ ℝ ↦ Z₄(t) ∈ X are bounded, and the function K: ℝ → X given, for t > 0, by

K(t) = Ψ(t)( ∑_{0⩽τ<t} Δ⁺Ψ⁻¹(τ)Δ⁺H(τ) − ∑_{0<τ⩽t} Δ⁻Ψ⁻¹(τ)Δ⁻H(τ) )

is bounded. …

Given ε > 0, there exists δ > 0 such that

|h(t) − h(s)| < ε/2,

14 Topological Dynamics

whenever |t − s| < δ, t, s ∈ C. Furthermore, since C is compact, there exists M > 0 such that |h(s)| ⩽ M for all s ∈ C. Thus,

‖F(x, t) − F(y, s)‖ < ε,

whenever ‖x − y‖ < (ε/2)(M + |h(0)|)⁻¹ and |t − s| < δ, with (y, s) ∈ A × C, which proves the result. ◽

Remark 14.2: Let {F_k}_{k∈ℕ} ⊂ ℱ₀(Ω, h) and F ∈ ℱ₀(Ω, h). As a consequence of Lemma 14.1, we may conclude that the convergence ρ(F_k, F) → 0 as k → ∞ is equivalent to the pointwise convergence F_k → F, that is, ‖F_k(x, t) − F(x, t)‖ → 0 as k → ∞ for each (x, t) ∈ Ω. Indeed, as we presented before Lemma 14.1, the convergence ρ(F_k, F) → 0 is equivalent to the uniform convergence ‖F_k(x, t) − F(x, t)‖ → 0 on every compact subset of Ω. In this way, we need only verify that pointwise convergence implies uniform convergence. Assume that ‖F_k(x, t) − F(x, t)‖ → 0 as k → ∞ for each (x, t) ∈ Ω. Let ε > 0 and let K ⊂ Ω be a compact subset. For each (x, t) ∈ K, there exists K_{x,t} ∈ ℕ such that

‖F_k(x, t) − F(x, t)‖ < ε/3  whenever k > K_{x,t}.

By Lemma 14.1, there exists δ > 0 such that

‖F_k(x, t) − F_k(y, s)‖ < ε/3  and  ‖F(x, t) − F(y, s)‖ < ε/3,

whenever ‖x − y‖ < δ and |t − s| < δ, with (x, t), (y, s) ∈ K. Using the compactness of K, one can find (x₁, t₁), …, (x_m, t_m) ∈ K such that, for each (x, t) ∈ K, there exists p ∈ {1, …, m} with ‖x − x_p‖ < δ and |t − t_p| < δ. Then, for k > max{K_{x₁,t₁}, …, K_{x_m,t_m}}, we obtain

‖F_k(x, t) − F(x, t)‖ ⩽ ‖F_k(x, t) − F_k(x_p, t_p)‖ + ‖F_k(x_p, t_p) − F(x_p, t_p)‖ + ‖F(x_p, t_p) − F(x, t)‖ < ε.

Hence, ‖F_k(x, t) − F(x, t)‖ < ε for all (x, t) ∈ K and k > max{K_{x₁,t₁}, …, K_{x_m,t_m}}.

This yields the uniform convergence ∥ Fk (x, t) − F(x, t) ∥ → 0 on every compact subset of Ω. Since the functions in the class 0 (Ω, h) satisfy conditions (14.1), (14.2), and (14.3), we have the following straightforward lemma.


Lemma 14.3: The class ℱ₀(Ω, h) is closed.

Now, we present the result that guarantees the compactness of the class ℱ₀(Ω, h).

Theorem 14.4: The class ℱ₀(Ω, h) is compact.

Proof. By Lemma 14.1, we have the equicontinuity of ℱ₀(Ω, h) on compact subsets of Ω = 𝒪 × [0, ∞). Furthermore, using (14.1) and (14.2),

‖F(x, t)‖ = ‖F(x, t) − F(x, 0)‖ ⩽ |h(t) − h(0)| ⩽ |h(t)| + |h(0)|  (14.7)

for every t ∈ [0, ∞), x ∈ 𝒪, and F ∈ ℱ₀(Ω, h). We claim that ℱ₀(Ω, h) is uniformly bounded on compact sets. In fact, let A ⊂ 𝒪 and C ⊂ [0, ∞) be compact sets and take (x, t) ∈ A × C. By the continuity of h and the compactness of C, there exists M > 0 such that |h(s)| ⩽ M for all s ∈ C. Now, by relation (14.7), we have ‖F(x, t)‖ ⩽ M + |h(0)| for all (x, t) ∈ A × C and all F ∈ ℱ₀(Ω, h), which proves the claim. Consequently, Ascoli's theorem (see [184]) guarantees that any sequence {Fₙ}_{n∈ℕ} in ℱ₀(Ω, h) admits a subsequence {F_{n_k}}_{k∈ℕ} which converges uniformly on compact subsets to some function F₀. Finally, since ℱ₀(Ω, h) is closed (see Lemma 14.3), we conclude that F₀ ∈ ℱ₀(Ω, h), and the proof is finished. ◽

14.2 Existence of a Local Semidynamical System

This section concerns the existence of a local semidynamical system generated by the nonautonomous generalized ODE

dx/dτ = DF(x, t),  (14.8)

where F: Ω → ℝⁿ belongs to the class ℱ₀(Ω, h). Recall that we are assuming that Ω = 𝒪 × [0, ∞), where 𝒪 ⊂ ℝⁿ is an open set and h is a nondecreasing continuous real function. The results of this section are borrowed from [4].

Remark 14.5: Lemma 4.9 ensures that the solutions of (14.8) are continuous, since h is a continuous function.

According to Remark 14.2, a sequence {F_k}_{k∈ℕ} ⊂ ℱ₀(Ω, h) converges to a function F ∈ ℱ₀(Ω, h) if

F_k(x, t) → F(x, t) as k → ∞

in ℝⁿ for each (x, t) ∈ Ω, that is, ‖F_k(x, t) − F(x, t)‖ → 0 as k → ∞ for every (x, t) ∈ Ω. In this case, we write F_k → F. In addition, given a sequence {v_k}_{k∈ℕ} ⊂ ℝⁿ and v ∈ ℝⁿ, we have

(v_k, F_k) → (v, F) in ℝⁿ × ℱ₀(Ω, h)

if and only if ‖v_k − v‖ → 0 and ‖F_k(x, s) − F(x, s)‖ → 0 as k → ∞ for every (x, s) ∈ Ω.

In the next definition, we exhibit the concept of a general local semidynamical system. Our aim is to show that the generalized ODE (14.8) generates a local semidynamical system. Given (v, F) ∈ 𝒪 × ℱ₀(Ω, h), we denote by I_{(v,F)} an interval of type [0, b) ⊂ ℝ, with b ∈ ℝ₊. Consider also the set

S = {(t, v, F) ∈ ℝ₊ × 𝒪 × ℱ₀(Ω, h) : t ∈ I_{(v,F)}}.

Definition 14.6: A local semidynamical system on the space 𝒪 × ℱ₀(Ω, h) is a mapping π: S → 𝒪 × ℱ₀(Ω, h) that satisfies the following conditions:

(i) π(0, v, F) = (v, F), for every (v, F) ∈ 𝒪 × ℱ₀(Ω, h);
(ii) given (v, F) ∈ 𝒪 × ℱ₀(Ω, h), if t ∈ I_{(v,F)} and s ∈ I_{π(t,v,F)}, then t + s ∈ I_{(v,F)} and π(s, π(t, v, F)) = π(t + s, v, F);
(iii) π: S → 𝒪 × ℱ₀(Ω, h) is continuous;
(iv) I_{(v,F)} = [0, b_{(v,F)}) is maximal in the following sense: either I_{(v,F)} = ℝ₊ or, if b_{(v,F)} ≠ ∞, then the positive orbit {π(t, v, F) : t ∈ [0, b_{(v,F)})} ⊂ 𝒪 × ℱ₀(Ω, h) cannot be continued to a larger interval [0, b_{(v,F)} + c), c > 0;
(v) if (v_k, F_k) → (v, F) as k → ∞, where (v, F) and (v_k, F_k) ∈ 𝒪 × ℱ₀(Ω, h), k ∈ ℕ, then I_{(v,F)} ⊂ lim inf I_{(v_k,F_k)}.

We observe that the definition of a local semidynamical system presented in [17] consists of items (i), (ii), (iii), and (iv) from Definition 14.6. Item (v) from [17], known as Kamke's axiom, assures that the domain of π is open. In our case, we replace condition (v) from [17] by an equivalent property of lower semicontinuity. The reader may want to consult [17, pp. 12–13] for more details. It is important to mention that if the domain of π is ℝ₊ × 𝒪 × ℱ₀(Ω, h), then conditions (iv) and (v) are satisfied straightforwardly. Taking this fact into account, we have the following definition.

Definition 14.7: If the domain of π is ℝ₊ × 𝒪 × ℱ₀(Ω, h), then π is called a global semidynamical system.


Given F ∈ ℱ₀(Ω, h) and t ⩾ 0, we denote the translate F_t of F by

F_t(x, s) = F(x, t + s) − F(x, t),  (x, s) ∈ Ω.  (14.9)
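The translate operation just defined is easy to experiment with numerically. The following sketch (with our hypothetical choice F(x, t) = x·h(t); none of these names come from the book) checks the normalization F₀ = F and the semigroup identity (F_t)_τ = F_{t+τ} that Lemma 14.8 below establishes.

```python
import math

# Assumed toy right-hand side: F(x, t) = x * h(t) with h nondecreasing and
# continuous; translate(F, t)(x, s) = F(x, t + s) - F(x, t) as in (14.9).
def h(t):
    return math.sqrt(t)              # nondecreasing, continuous on [0, inf)

def F(x, t):
    return x * h(t)

def translate(G, t):
    return lambda x, s: G(x, t + s) - G(x, t)

Ft = translate(F, 2.0)
Ft_tau = translate(Ft, 3.0)          # (F_t)_tau
F_sum = translate(F, 5.0)            # F_{t+tau}
for x in (0.5, 1.0, 2.0):
    for s in (0.0, 0.7, 4.0):
        assert abs(translate(F, 0.0)(x, s) - F(x, s)) < 1e-12   # F_0 = F
        assert abs(Ft_tau(x, s) - F_sum(x, s)) < 1e-12          # semigroup
print("translate properties hold on samples")
```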

The next result exhibits some basic properties of the translates F_t, t ⩾ 0. The reader may also consult [9, p. 234].

Lemma 14.8: Given F ∈ ℱ₀(Ω, h) and t ⩾ 0, the translate F_t of F satisfies the following properties:

(i) F₀ = F (normalization of F);
(ii) F_{t+τ} = (F_t)_τ for all t, τ ⩾ 0 (semigroup property);
(iii) the mapping (t, F) ↦ F_t is continuous.

Proof. Item (i) follows immediately from the definition of F_t given in (14.9). Let t, τ ⩾ 0 and (x, s) ∈ Ω; then

(F_t)_τ(x, s) = F_t(x, τ + s) − F_t(x, τ) = F(x, t + τ + s) − F(x, t) − F(x, t + τ) + F(x, t) = F(x, t + τ + s) − F(x, t + τ) = F_{t+τ}(x, s),

that is, condition (ii) holds.

In order to show that condition (iii) holds, let t, t_k ⩾ 0 and F, F_k ∈ ℱ₀(Ω, h), k ∈ ℕ, be such that F_k → F and t_k → t as k → ∞. For (x, s) ∈ Ω, using (14.2), we have

‖(F_k)_{t_k}(x, s) − F_t(x, s)‖ ⩽ ‖(F_k)_{t_k}(x, s) − (F_k)_t(x, s)‖ + ‖(F_k)_t(x, s) − F_t(x, s)‖
⩽ ‖F_k(x, s + t_k) − F_k(x, s + t)‖ + ‖F_k(x, t_k) − F_k(x, t)‖ + ‖F_k(x, t + s) − F(x, t + s)‖ + ‖F_k(x, t) − F(x, t)‖
⩽ |h(s + t_k) − h(s + t)| + |h(t_k) − h(t)| + ‖F_k(x, t + s) − F(x, t + s)‖ + ‖F_k(x, t) − F(x, t)‖.

Since h is continuous, F_k → F, and t_k → t, we conclude that (F_k)_{t_k} → F_t. ◽

Given t ⩾ 0 and F ∈ ℱ₀(Ω, h), we cannot assure that F_t ∈ ℱ₀(Ω, h). In the next definition, we place a restriction on the function h in order to obtain a subset of ℱ₀(Ω, h) that is invariant under the translates F_t.

Definition 14.9: For a given nondecreasing continuous function h: [0, ∞) → ℝ, we say that a function F: Ω → X belongs to the class ℱ₀*(Ω, h) if F belongs to the class ℱ₀(Ω, h) and the function h satisfies

|h(t₁ + s) − h(t₂ + s)| ⩽ |h(t₁) − h(t₂)|,  t₁, t₂, s ∈ [0, ∞).
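The condition on h in Definition 14.9 can be probed numerically. In this sketch (our examples, not the book's), a concave nondecreasing h such as √t passes, while a convex one fails:

```python
import math

# The Definition 14.9 condition |h(t1+s) - h(t2+s)| <= |h(t1) - h(t2)| holds for
# concave nondecreasing h (increments shrink under shifts), e.g. h(t) = sqrt(t),
# and fails for convex h, e.g. h(t) = t**2. Both examples are ours.
def ok(h, t1, t2, s):
    return abs(h(t1 + s) - h(t2 + s)) <= abs(h(t1) - h(t2)) + 1e-12

grid = [0.0, 0.3, 1.0, 2.5, 7.0]
assert all(ok(math.sqrt, t1, t2, s) for t1 in grid for t2 in grid for s in grid)
assert not ok(lambda t: t * t, 0.0, 1.0, 1.0)   # |1 - 4| > |0 - 1|
print("sqrt satisfies the Definition 14.9 condition; t^2 does not")
```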

The next result follows immediately from Theorem 14.4.


Corollary 14.10: If h: [0, ∞) → ℝ is a nondecreasing continuous function, then the space ℱ₀*(Ω, h) is compact.

Next, we show that the class ℱ₀*(Ω, h) is invariant, that is, F_t ∈ ℱ₀*(Ω, h) for all t ⩾ 0, provided F ∈ ℱ₀*(Ω, h).

Lemma 14.11: If F ∈ ℱ₀*(Ω, h), then the translate F_t of F belongs to ℱ₀*(Ω, h) for each t ⩾ 0.

Proof. Let F ∈ ℱ₀*(Ω, h) and t ⩾ 0. Note that F_t(x, 0) = F(x, t) − F(x, t) = 0 for all x ∈ 𝒪. Also, for all (x, s₂), (x, s₁), (y, s₂), (y, s₁) ∈ Ω, taking into account conditions (14.2) and (14.3), we obtain

‖F_t(x, s₂) − F_t(x, s₁)‖ = ‖F(x, t + s₂) − F(x, t + s₁)‖ ⩽ |h(t + s₂) − h(t + s₁)| ⩽ |h(s₂) − h(s₁)|

and

‖F_t(x, s₂) − F_t(x, s₁) − [F_t(y, s₂) − F_t(y, s₁)]‖ = ‖F(x, t + s₂) − F(x, t + s₁) − F(y, t + s₂) + F(y, t + s₁)‖ ⩽ ‖x − y‖ |h(t + s₂) − h(t + s₁)| ⩽ ‖x − y‖ |h(s₂) − h(s₁)|.

Hence, F_t ∈ ℱ₀*(Ω, h) for all t ⩾ 0. ◽

Remark 14.12: The function F_t defined in (14.9) is continuous for each t ⩾ 0, since we are assuming that F ∈ ℱ₀*(Ω, h), where h is nondecreasing and continuous.

We finish this section with the construction of a local semidynamical system related to the nonautonomous generalized ODE (14.8). Theorem 14.13 generalizes [9, Theorem 6.3] and [91, Theorem 4.1].

Theorem 14.13: Assume that, for each u ∈ 𝒪 and each F ∈ ℱ₀*(Ω, h), x(t, u, F) is the unique maximal solution of the initial value problem

dx/dτ = DF(x, t),  x(0) = u.  (14.10)

Let [0, ω(u, F)), ω(u, F) > 0, be the maximal interval of definition of the solution x(⋅, u, F) and define π: S → 𝒪 × ℱ₀*(Ω, h) by

π(t, u, F) = (x(t, u, F), F_t),  (14.11)

where S = {(t, u, F) ∈ ℝ₊ × 𝒪 × ℱ₀*(Ω, h) : t ∈ I_{(u,F)}}. Then π is a local semidynamical system on 𝒪 × ℱ₀*(Ω, h).

Proof. Since the second component F_t of π in (14.11) is defined for all t ∈ [0, ∞), the maximal interval I_{(u,F)} of the semidynamical system given by (14.11) coincides with [0, ω(u, F)). We need to verify that the five conditions of Definition 14.6 hold. We prove first that conditions (i), (ii), (iv), and (v) hold and, finally, we prove condition (iii).

Proof of (i): Let (u, F) ∈ 𝒪 × ℱ₀*(Ω, h). Since F₀(x, s) = F(x, s) for all (x, s) ∈ Ω (see Lemma 14.8(i)) and x(0, u, F) = u for each (u, F), we obtain π(0, u, F) = (u, F).

Proof of (ii): Let t ∈ I_{(u,F)}, σ ∈ I_{π(t,u,F)}, and (u, F) ∈ 𝒪 × ℱ₀*(Ω, h). For τ ∈ I_{(u,F)}, set

x(τ) = x(τ, u, F),  ψ(τ) = x(τ, x(t), F_t),  and  ξ(τ) = x(τ + t),

where x is the maximal solution of (14.10) and ψ is a solution of the initial value problem

dψ/dτ = D[F_t(ψ, s)],  ψ(0) = x(t) = x(t, u, F).  (14.12)

We are going to show that ξ is a solution of problem (14.12). Initially, we point out that

ξ(σ) − ξ(0) = x(σ + t) − x(t) = ∫_{t}^{σ+t} DF(x(τ), s).

DF(x(𝜏), s).

On the other hand, using the change of variable 𝜙(s) = s + t and Theorem 2.18, we obtain 𝜙(𝜎)

t+𝜎

∫t

DF(x(𝜏), s) =

∫𝜙(0) 𝜎

=

∫0

𝜎

DF(x(𝜏), s) =

∫0

DF(x(𝜙(𝜍)), 𝜙(𝜇))

DF(x(𝜍 + t), 𝜇 + t),

whence 𝜉(𝜎) − 𝜉(0) =

𝜎

∫0

𝜎

DF(x(𝜏 + t), s + t) =

∫0

DFt (𝜉(𝜏), s).

Furthermore, since 𝜉(0) = x(t) = x(t, u, F), we use the uniqueness of the solution of (14.10) (see Theorem 5.1), and we conclude that 𝜓(𝜎) = 𝜉(𝜎) = x(𝜎 + t) for all 𝜎 ∈ I𝜋(t, u, F) = [0, 𝜔(u, F)). Using this fact, we obtain 𝜋(𝜎, 𝜋(t, u, F)) = 𝜋(𝜎, x(t, u, F), Ft ) = 𝜋(𝜎, x(t), Ft ) = (x(𝜎, x(t), Ft ), (Ft )𝜎 ) = (𝜉(𝜎), (Ft )𝜎 )

415

416

14 Topological Dynamics

= (𝜉(𝜎), F𝜎+t ) = (x(𝜎 + t), Ft+𝜎 ) = (x(𝜎 + t, u, F), Ft+𝜎 ) = 𝜋(𝜎 + t, u, F). Proof of (iv): Assume that 𝜔 = 𝜔(u, F) < ∞. Since h is continuous, Ω = ΩF . Thus, if x(t, u, F) → z as t → 𝜔− , then it follows by Corollary 5.15 that z ∉ . Proof of (v): Take (y0 , F0 ) ∈  × 0∗ (Ω, h) and a sequence {(yk , Fk )}k∈ℕ ⊂  × k→∞

0∗ (Ω, h), such that (yk , Fk ) → (y0 , F0 ). We are going to show that 𝜔(u, F) is lower semicontinuous on  × 0∗ (Ω, h). The idea of the proof is based on [9, Thm. k→∞

A.8]. Let tk → t0 and x(s) = x(s, y0 , F0 ) be the unique solution of the initial value problem: ⎧ dx = DF0 (x, s), ⎪ ⎨ d𝜏 ⎪x(0) = y0 , ⎩

(14.13)

defined on the maximal interval [0, 𝜔(y0 , F0 )), with 𝜔(y0 , F0 ) > 0. According to [209, Theorem 8.6], we can assert that there exists k1 ∈ ℕ such that, for each k ⩾ k1 , one can obtain a solution xk (s, yk , Fk ) of the nonautonomous generalized ODE: ⎧ dx = DFk (x, s), ⎪ ⎨ d𝜏 ⎪x(0, yk , Fk ) = yk , ⎩

(14.14)

defined on [0, 𝛾], 0 < 𝛾 < 𝜔(y0 , F0 ), satisfying limk→∞ xk (s, yk , Fk ) = x(s, y0 , F0 ) for every s ∈ [0, 𝛾]. The result [209, Theorem 8.6] assures that 𝛾 does not depend on k ⩾ k1 . Consider a set A ⊂ [0, ∞) defined by A = {b ⩾ 0 ∶ for k ⩾ k1 the functions xk (s, yk , Fk ) are defined in [0, b] and are equicontinuous on [0, b]}.

(14.15)

Claim 1. The functions xk (⋅, yk , Fk ), k ⩾ k1 , are equicontinuous on [0, 𝛾]. Indeed, Lemma 4.9 provides the relation ∥ xk (s2 , yk , Fk ) − xk (s1 , yk , Fk ) ∥⩽ |h(s2 ) − h(s1 )|,

s1 , s2 ∈ [0, 𝛾],

consequently, the equicontinuity of xk (s, yk , Fk ) follows immediately, since h is continuous and does not depend on k. This yields A ≠ ∅. Set 𝛽 = sup A. In order to prove the lower semicontinuity of 𝜔, we will show that [0, 𝛽) is the maximal positive interval of definition of the solution x(⋅, y0 , F0 ). Claim 2. The sequence of functions xk (⋅, yk , Fk ) is an equibounded sequence for k > k2 with k2 sufficiently larger than k1 .

14.2 Existence of a Local Semidynamical System

In fact, let 0 ⩽ b < 𝛽. Again, using Lemma 4.9, we obtain ∥ xk (s, yk , Fk ) ∥⩽ ∥ yk ∥ + ∥ xk (s, yk , Fk ) − yk ∥ = ∥ yk ∥ + ∥ xk (s, yk , Fk ) − xk (0, yk , Fk ) ∥ ⩽ ∥ yk ∥ +[h(s) − h(0)] ⩽ ∥ yk ∥ +[h(b) − h(0)], k→∞

for every s ∈ [0, b], which proves Claim 2, since yk → y0 . Claims 1 and 2 allow us to conclude that {xk (s, yk , Fk )}k>k2 is a pointwise relatively compact family of uniformly bounded variation. Therefore, by a Helly’s Choice Principle (see, e.g. [13]), we can infer that the sequence xk ( ⋅ , yk , Fk ) is relatively compact in C([0, b], ℝn ) for k > k2 . The result [209, Theorem 8.2] guarantees that every limiting point of this sequence is a solution of system (14.13) defined on [0, b]. Besides, from the uniqueness of solutions of this equation, we obtain exactly one limiting point of the sequence {xk (s, yk , Fk )}k>k2 . Thus, the whole sequence converges uniformly to the solution x(s, y0 , F0 ) on [0, b]. Suppose to the contrary that x(𝛽) = x(𝛽, y0 , F0 ) is defined. Consequently, x(𝛽) ∈  and, by Theorem 5.1, there exists Δ𝛽 > 0 such that x(s, y0 , F0 ) is defined for each s ∈ [𝛽, 𝛽 + Δ𝛽 ]. Moreover, [209, Theorem 8.6] ensures the existence of an integer k such that the sequence xk (s, yk , Fk ) is defined and is equicontinuous on [0, 𝛽 + Δ𝛽 ] for all k ⩾ k. But this is a contradiction as 𝛽 = sup A. Therefore, x(⋅, y0 , F0 ) is not defined at 𝛽 and 𝛽 = 𝜔(y0 , F0 ). Proof of (iii): Note that for each fixed (u, F) ∈  × 0∗ (Ω, h), 𝜋(t, u, F) is continuous at every t ∈ I(u,F) . This fact follows from Remarks 14.12, and Lemma 14.8(iii). Let (t0 , u0 , F0 ) ∈ I(u0 ,F0 ) ×  ×  ∗ (Ω, h) be arbitrary and {(tk , uk , Fk )}k∈ℕ ⊂ k→∞

I(uk ,Fk ) ×  ×  ∗ (Ω, h) be a sequence such that (tk , uk , Fk ) → (t0 , u0 , F0 ). It follows from the proof of item (v) that k→∞

x(s, uk , Fk ) → x(s, u0 , F0 ),

(14.16)

uniformly on compact intervals of [0, 𝛽), where 𝛽 = sup A, with A defined in (14.15). Here, x(s, u0 , F0 ) is the unique solution of the initial value problem

dx∕d𝜏 = DF0 (x, s),   x(0) = u0 ,

and, for each k, x(s, uk , Fk ) is the unique solution of the initial value problem

dx∕d𝜏 = DFk (x, s),   x(0) = uk .


Note that x(tk , uk , Fk ) → x(t0 , u0 , F0 ) as k → ∞. In fact, this follows from item (v), the continuity of 𝜋(⋅, u0 , F0 ), relation (14.16), and the inequality

∥x(tk , uk , Fk ) − x(t0 , u0 , F0 )∥ ⩽ ∥x(tk , uk , Fk ) − x(tk , u0 , F0 )∥ + ∥x(tk , u0 , F0 ) − x(t0 , u0 , F0 )∥.

Finally, the continuity of the mapping 𝜋 is guaranteed, because (Fk )tk converges to (F0 )t0 by the properties of the translates of F. Therefore, the proof is complete. ◽

14.3 Existence of an Impulsive Semidynamical System

In this section, based on the paper [4], we investigate properties of the following impulsive generalized ODE associated with the initial value problem (14.10):

dx∕d𝜏 = DF(x, s),   I ∶ M → ℝn ,   x(0) = u,

(14.17)

where F ∈ 0∗ (Ω, h) (Ω =  × [0, ∞) with  ⊂ ℝn an open set), u ∈ , M ⊂ ℝn is a closed subset, and I is a continuous function such that I(M ∩ ) ⊂  ⧵ M. We also assume that M satisfies the following condition: if for (u, F) ∈  × 0∗ (Ω, h), the solution of (14.10) is such that x(t0 , u, F) ∈ M for some t0 > 0, then there exists 𝜖 = 𝜖(u, F) > 0 such that x(t, u, F) ∉ M for t ∈ (t0 − 𝜖, t0 ) ∪ (t0 , t0 + 𝜖).

(14.18)

Now, we define a function which represents the least positive time at which the trajectory of (u, F) meets M. This function, denoted by 𝜑 ∶  × 0∗ (Ω, h) → (0, ∞], is given by

𝜑(u, F) = s, if x(s, u, F) ∈ M and x(t, u, F) ∉ M for 0 < t < s;
𝜑(u, F) = ∞, if x(t, u, F) ∉ M for all t > 0.    (14.19)

From now on, in this chapter, we assume that the function 𝜑 defined in (14.19) is continuous on ( ⧵ M) × 0∗ (Ω, h). The reader may consult [41] for sufficient conditions guaranteeing the continuity of 𝜑. Using the function 𝜑, we are now able to describe the impulsive trajectory of system (14.17). Let (u, F) ∈  × 0∗ (Ω, h) be fixed and arbitrary. Next, we characterize the solution x̃(t, u, F) of the impulsive generalized ODE (14.17).


(I) If 𝜑(u, F) = ∞, then we define x̃(t, u, F) = x(t, u, F) for all t ⩾ 0, where x(t, u, F) is the solution of the generalized ODE (14.10). But, if 𝜑(u, F) = s0 , then u1 = x(s0 , u, F) ∈ M. Thus, we define x̃(t, u, F) on [0, s0 ] by

x̃(t, u, F) = x(t, u, F) for 0 ⩽ t < s0 , and x̃(s0 , u, F) = u+1 ,

where u+1 = I(u1 ). Let us set u = u+0 . The process now continues from u+1 on.
(II) If 𝜑(u+1 , F) = ∞, then we define x̃(t, u, F) = x(t − s0 , u+1 , F) for t ⩾ s0 , where x(⋅, u+1 , F) is the solution of the generalized ODE

dx∕d𝜏 = DF(x, s),   x(0) = u+1 .

However, if 𝜑(u+1 , F) = s1 , then u2 = x(s1 , u+1 , F) ∈ M. In this way, we define x̃(t, u, F) on [s0 , s0 + s1 ] by

x̃(t, u, F) = x(t − s0 , u+1 , F) for s0 ⩽ t < s0 + s1 , and x̃(s0 + s1 , u, F) = u+2 ,

where u+2 = I(u2 ).
(III) Assume that x̃(t, u, F) is defined on the interval [tn−1 , tn ] with x̃(tn , u, F) = u+n , where tn = s0 + s1 + ⋯ + sn−1 , n ∈ ℕ. If 𝜑(u+n , F) = ∞, then we define x̃(t, u, F) = x(t − tn , u+n , F) for all t ⩾ tn . However, if 𝜑(u+n , F) = sn , then

x̃(t, u, F) = x(t − tn , u+n , F) for tn ⩽ t < tn+1 , and x̃(tn+1 , u, F) = u+n+1 ,

where u+n+1 = I(un+1 ) and un+1 = x(sn , u+n , F) ∈ M.

The solution x̃(t, u, F) is thus defined on each interval [tn , tn+1 ], where t0 = 0 and tn+1 = s0 + s1 + ⋯ + sn , n ∈ ℕ0 . Consequently, x̃(t, u, F) is defined on the interval [0, tn+1 ]. Furthermore, if 𝜑(u+n , F) = ∞ for some n, the process described above ends after a finite number of steps. However, if 𝜑(u+n , F) < ∞ for all n ∈ ℕ0 , then the process continues indefinitely and, in this case, x̃(t, u, F) is defined on [0, T(u, F)), where T(u, F) = s0 + s1 + s2 + ⋯.
From now on, we assume that the solutions of Eqs. (14.10) and (14.17) are defined on the whole interval [0, ∞). The reader may consult Chapter 5 for sufficient conditions to prolong a solution to the interval [0, ∞). By Theorem 14.13 and Definition 14.7, the mapping 𝜋 ∶ ℝ+ ×  × 0∗ (Ω, h) →  × 0∗ (Ω, h)


given by 𝜋(t, u, F) = (x(t, u, F), Ft ), defines a global semidynamical system associated with the generalized ODE (14.10). Let us denote a global semidynamical system by ( × 0∗ (Ω, h), 𝜋) and, for the sake of simplicity, refer to such a system simply as a semidynamical system.

Definition 14.14: Let (u, F) ∈  × 0∗ (Ω, h).
(i) The motion of (u, F) is the continuous function 𝜋(u,F) ∶ ℝ+ →  × 0∗ (Ω, h) defined by 𝜋(u,F) (t) = 𝜋(t, u, F), t ∈ ℝ+ .
(ii) The positive orbit of (u, F) is given by 𝜋+ (u, F) = {𝜋(t, u, F) ∶ t ⩾ 0}.

The concept of an impulsive semidynamical system related to a generalized ODE was introduced in [4] and follows below.

Definition 14.15: An impulsive semidynamical system on  × 0∗ (Ω, h) is a mapping 𝜋̃ ∶ ℝ+ ×  × 0∗ (Ω, h) →  × 0∗ (Ω, h) which satisfies the following conditions:
(i) 𝜋̃(0, u, F) = (u, F) for all (u, F) ∈  × 0∗ (Ω, h);
(ii) 𝜋̃(s, 𝜋̃(t, u, F)) = 𝜋̃(t + s, u, F) for all (u, F) ∈  × 0∗ (Ω, h) and t, s ∈ [0, ∞);
(iii) for each (u, F) ∈  × 0∗ (Ω, h), the mapping 𝜋̃(⋅, u, F) is right-continuous at every point in [0, ∞) and the left limits 𝜋̃(t− , u, F) exist for all t > 0.

The reader may consult [25, 27, 41] for more details about the theory of impulsive semidynamical systems in the classic ordinary case.

Definition 14.16: Let (u, F) ∈  × 0∗ (Ω, h). The positive impulsive orbit of (u, F) is given by the set 𝜋̃+ (u, F) = {𝜋̃(t, u, F) ∶ t ⩾ 0}.

Consider a semidynamical system ( × 0∗ (Ω, h), 𝜋) associated with the system (14.10) and let 𝜋(t, u, F) = (x(t, u, F), Ft ) be its motion, where x(t, u, F) is the unique solution of (14.10) defined on the interval [0, ∞). Associated with this motion, we define a mapping 𝜋̃ ∶ ℝ+ ×  × 0∗ (Ω, h) →  × 0∗ (Ω, h) by

𝜋̃(t, u, F) = 𝜋(t − tn , u+n , F) for tn ⩽ t < tn+1 and n ∈ ℕ0 ,    (14.20)

where u = u+0 , t0 = 0, tn = s0 + s1 + ⋯ + sn−1 for n ∈ ℕ, and sn = 𝜑(u+n , F), n ∈ ℕ0 (recall the definition of 𝜑 in (14.19)).
It is worth noting that 𝜋̃(t, u, F) = (x̃(t, u, F), Ft−tn ) for tn ⩽ t < tn+1 , n ∈ ℕ0 , where x̃(t, u, F) is the solution of (14.17).
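As an illustration of steps (I)–(III), the sketch below (not from the book) builds the impulsive trajectory for a toy classical flow in place of a generalized ODE: the flow x(t, u) = u·e^t, the impulse set M = {2}, and the jump map I(x) = x∕4 are all hypothetical choices, and the names `flow`, `hitting_time`, and `impulsive_trajectory` are made up for this example.

```python
import math

# Illustrative sketch (not from the book) of steps (I)-(III): follow the
# flow until it hits the impulse set M, jump via I, and restart. The flow
# x(t, u) = u * exp(t), the set M = {2.0}, and the jump map I(x) = x / 4
# are toy stand-ins for the generalized-ODE setting.

def flow(t, u):
    # stand-in for the solution x(t, u, F) of the underlying equation
    return u * math.exp(t)

def hitting_time(u, M=2.0):
    # phi(u): least positive time at which the trajectory from u meets M;
    # here u * exp(t) = M has the closed-form solution t = ln(M / u)
    if 0 < u < M:
        return math.log(M / u)
    return math.inf  # the trajectory never meets M

def impulsive_trajectory(t, u, I=lambda x: x / 4.0):
    # steps (I)-(III): run the flow up to the next impulse time, then
    # restart from the impulsed point u_{n+1}^+ = I(u_{n+1})
    elapsed = 0.0  # t_n = s_0 + s_1 + ... + s_{n-1}
    while True:
        s = hitting_time(u)          # s_n = phi(u_n^+)
        if t < elapsed + s:          # t lies in [t_n, t_{n+1})
            return flow(t - elapsed, u)
        elapsed += s
        u = I(flow(s, u))            # jump: u_{n+1} in M is mapped by I

print(impulsive_trajectory(0.0, 1.0))  # 1.0 (before the first impulse)
```

Each pass of the loop corresponds to one step of the recursion; when the hitting time is infinite, the trajectory simply follows the flow forever, mirroring the finite-step case of item (III).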


The next result, established in [4, Theorem 5.2], guarantees that 𝜋̃ given by (14.20) is an impulsive semidynamical system associated with the impulsive generalized ODE (14.17).

Theorem 14.17: The mapping 𝜋̃ given by (14.20) is an impulsive semidynamical system associated with the impulsive generalized ODE (14.17). We denote such a system by ( × 0∗ (Ω, h), 𝜋̃).

Proof. Using the ideas of the proof of [25, Proposition 2.1], we obtain conditions (i) and (ii) of Definition 14.15. Moreover, condition (iii) of Definition 14.15 is easily verified, since x̃(t, u, F) and Ft are right-continuous at every point t ∈ [0, ∞) and the left limits x̃(t−, u, F) and Ft− exist for all t > 0. This completes the proof. ◽

Although 𝜋̃ is not continuous, we have the following result concerning convergence. It is worth mentioning that its proof is analogous to the proof presented in [141, Lemma 2.3]. See also [4, Lemma 6.1].

Lemma 14.18: Let ( × 0∗ (Ω, h), 𝜋̃) be an impulsive semidynamical system. Assume that u ∈  ⧵ M and {𝑣n }n∈ℕ is a sequence in  which converges to u. Let {Fn }n∈ℕ be a sequence in 0∗ (Ω, h) such that Fn → F as n → ∞. Then, for every t ⩾ 0, there exists a sequence of real numbers {𝜖n }n∈ℕ , with 𝜖n → 0 as n → ∞, such that

𝜋̃(t + 𝜖n , 𝑣n , Fn ) → 𝜋̃(t, u, F) as n → ∞.

Proof. For each n ∈ ℕ, let x(t, 𝑣n , Fn ) denote the solution of the initial value problem

dx∕d𝜏 = DFn (x, s),   x(0) = 𝑣n ,    (14.21)

defined for all t ⩾ 0. According to [209, Theorem 8.6], x(t, 𝑣n , Fn ) → x(t, u, F) as n → ∞, where x(t, u, F) is the solution of the generalized ODE (14.10). Then

𝜋(t, 𝑣n , Fn ) → 𝜋(t, u, F) as n → ∞

for each t ⩾ 0, since (Fn )t → Ft as n → ∞.
If 𝜑(u, F) = ∞, then the statement follows taking 𝜖n = 0, n ∈ ℕ, since 𝜑(𝑣n , Fn ) → 𝜑(u, F) as n → ∞. However, if 𝜑(u, F) < ∞, then we use the ideas presented in [141, Lemma 2.3], treating the following three cases.
Case 1: 0 ⩽ t < s0 = 𝜑(u, F).


Let 𝜖 > 0 be such that 𝜖 < s0 − t. Since 𝜑 is continuous on ( ⧵ M) × 0∗ (Ω, h), there exists n0 ∈ ℕ such that −𝜖 < 𝜑(𝑣n , Fn ) − 𝜑(u, F) for all n ⩾ n0 . Consequently, t < s0 − 𝜖 < 𝜑(𝑣n , Fn ) for n ⩾ n0 and, taking 𝜖n = 0, we obtain

𝜋̃(t + 𝜖n , 𝑣n , Fn ) = 𝜋̃(t, 𝑣n , Fn ) = 𝜋(t, 𝑣n , Fn ) → 𝜋(t, u, F) = 𝜋̃(t, u, F) as n → ∞.

Case 2: t = s0 = 𝜑(u, F).
Choosing 𝜖n = 𝜑(𝑣n , Fn ) − 𝜑(u, F), n ∈ ℕ, we have

𝜋̃(t + 𝜖n , 𝑣n , Fn ) = 𝜋̃(𝜑(𝑣n , Fn ), 𝑣n , Fn ) = 𝜋(0, I((𝑣n )1 ), Fn ),

where (𝑣n )1 = x(𝜑(𝑣n , Fn ), 𝑣n , Fn ), n ∈ ℕ. However, since I((𝑣n )1 ) → I(u1 ) as n → ∞, we obtain

𝜋̃(t + 𝜖n , 𝑣n , Fn ) = 𝜋(0, I((𝑣n )1 ), Fn ) → 𝜋(0, u+1 , F) = 𝜋̃(t, u, F) as n → ∞.

Case 3: t > 𝜑(u, F).
In this case, we may write

t = s0 + s1 + ⋯ + sm−1 + t′ ,

for some m ∈ ℕ and 0 ⩽ t′ < sm . Now, set tn = 𝜑((𝑣n )+0 , Fn ) + ⋯ + 𝜑((𝑣n )+m−1 , Fn ), where

(𝑣n )+0 = 𝑣n , (𝑣n )i = x(𝜑((𝑣n )+i−1 , Fn ), (𝑣n )+i−1 , Fn ) and I((𝑣n )i ) = (𝑣n )+i , for 1 ⩽ i ⩽ m − 1.

Thus,

𝜋̃(tn , 𝑣n , Fn ) = ((𝑣n )+m , Fn ) → (u+m , F) as n → ∞.

Defining 𝜖n = tn + t′ − t, n ∈ ℕ, and taking into account that u+m ∉ M (since I(M ∩ ) ⊂  ⧵ M and t′ < sm = 𝜑(u+m , F)), it follows from Case 1 that

𝜋̃(t + 𝜖n , 𝑣n , Fn ) = 𝜋̃(t′ , 𝜋̃(tn , 𝑣n , Fn )) → 𝜋̃(t′ , u+m , F) = 𝜋̃(t, u, F) as n → ∞,

and the proof is finished. ◽

Definition 14.19: A subset Γ of  × 0∗ (Ω, h) is called positively invariant if, for every (𝑣0 , F0 ) ∈ Γ and every t ∈ [0, ∞), we have 𝜋̃(t, 𝑣0 , F0 ) ∈ Γ.

The positive orbit of a point (𝑣, H) ∈  × 0∗ (Ω, h) is positively invariant. The closure of a positive orbit 𝜋̃+ (𝑣, H), with (𝑣, H) ∈  × 0∗ (Ω, h), is not positively invariant in general. However, we have the following result.

Lemma 14.20: For each (𝑣, H) ∈  × 0∗ (Ω, h), the set 𝜋̃+ (𝑣, H) ⧵ (M × 0∗ (Ω, h)) is positively invariant.


Proof. Let (u, F) ∈ 𝜋̃+ (𝑣, H) ⧵ (M × 0∗ (Ω, h)) and t ⩾ 0 be arbitrary. Then, there exists a sequence {tn }n∈ℕ ⊂ ℝ+ such that 𝜋̃(tn , 𝑣, H) → (u, F) as n → ∞. Since u ∉ M, it follows from Lemma 14.18 that there exists a sequence of real numbers {𝜖n }n∈ℕ , with 𝜖n → 0 as n → ∞, such that

𝜋̃(t + tn + 𝜖n , 𝑣, H) = 𝜋̃(t + 𝜖n , 𝜋̃(tn , 𝑣, H)) → 𝜋̃(t, u, F) as n → ∞.

Therefore, 𝜋̃(t, u, F) ∈ 𝜋̃+ (𝑣, H) ⧵ (M × 0∗ (Ω, h)), and the proof is complete. ◽

14.4 LaSalle's Invariance Principle

In this section, we present a version of LaSalle's invariance principle in the context of generalized ODEs. The definitions and results presented in this section can be found in [4]. The existence of an impulsive semidynamical system ( × 0∗ (Ω, h), 𝜋̃) (Theorem 14.17) is crucial for obtaining such a result. Next, we exhibit the concept of a limit set for impulsive semidynamical systems in the frame of generalized ODEs.

Definition 14.21: Let ( × 0∗ (Ω, h), 𝜋̃) be an impulsive semidynamical system. The set of all limiting points of 𝜋̃(t, u, F), as t → ∞, is given by

Ω+ (u, F) = {(u∗ , F ∗ ) ∈  × 0∗ (Ω, h) ∶ 𝜋̃(𝜆n , u, F) → (u∗ , F ∗ ) as n → ∞, for some sequence of positive real numbers 𝜆n → ∞}.

We call Ω+ (u, F) the positive limit set of 𝜋̃(t, u, F).
The next result follows straightforwardly from Lemma 14.18.

Lemma 14.22: The set Ω+ (u, F) ⧵ (M × 0∗ (Ω, h)) is positively invariant. In particular, if Ω+ (u, F) ∩ (M × 0∗ (Ω, h)) = ∅, then Ω+ (u, F) is positively invariant.

In the sequel, we establish sufficient conditions for the limit set to be nonempty.

Proposition 14.23: Let ( × 0∗ (Ω, h), 𝜋̃) be an impulsive semidynamical system. If x̃(t, u, F) remains in a compact subset  of  for all t ∈ [0, ∞), then Ω+ (u, F) is nonempty.

Proof. Let {𝜆n }n∈ℕ ⊂ ℝ+ be a sequence such that 𝜆n → ∞ as n → ∞. Note that, for each n ∈ ℕ, there exists p(n) ∈ ℕ∗ satisfying tp(n) ⩽ 𝜆n < tp(n)+1 , where tp(n) = s0 + s1 + ⋯ + sp(n)−1 . By the definition of 𝜋̃, we may write

𝜋̃(𝜆n , u, F) = 𝜋(𝜆n − tp(n) , u+p(n) , F) = (x(𝜆n − tp(n) , u+p(n) , F), F𝜆n −tp(n) ).


Using the compactness of  and of 0∗ (Ω, h), guaranteed by Corollary 14.10, we obtain a subsequence {nk }k∈ℕ , u∗ ∈ , and F ∗ ∈ 0∗ (Ω, h) such that

x̃(𝜆nk , u, F) = x(𝜆nk − tp(nk ) , u+p(nk ) , F) → u∗ as k → ∞, and F𝜆nk −tp(nk ) → F ∗ in 0∗ (Ω, h) as k → ∞.

Therefore, 𝜋̃(𝜆nk , u, F) → (u∗ , F ∗ ) as k → ∞ and, since 𝜆nk → ∞ as k → ∞, we conclude that (u∗ , F ∗ ) ∈ Ω+ (u, F). ◽

In the next definition, we present the concept of a Lyapunov functional associated with the impulsive semidynamical system ( × 0∗ (Ω, h), 𝜋̃).

Definition 14.24: A Lyapunov functional associated with the impulsive semidynamical system ( × 0∗ (Ω, h), 𝜋̃) is a nonnegative function V ∶  × 0∗ (Ω, h) → ℝ+ which satisfies the following conditions:
(i) V is continuous on  × 0∗ (Ω, h);
(ii) V̇(u, F) ⩽ 0 for (u, F) ∈  × 0∗ (Ω, h), where

V̇(u, F) = lim suph→0+ [V(𝜋̃(h, u, F)) − V(u, F)]∕h.

Note that condition (ii) of Definition 14.24 ensures that V(𝜋̃(t, u, F)) ⩽ V(u, F) for every t ⩾ 0. Now, we are able to present a version of LaSalle's invariance principle for generalized ODEs.

Theorem 14.25 (LaSalle's Invariance Principle): Let ( × 0∗ (Ω, h), 𝜋̃) be an impulsive semidynamical system. Suppose x̃(t, u, F) remains in a compact subset  of  for all t ∈ [0, ∞). Let V ∶  × 0∗ (Ω, h) → ℝ+ be a Lyapunov functional as in Definition 14.24. Define E = {(u, F) ∈  × 0∗ (Ω, h) ∶ V̇(u, F) = 0} and let W be the largest set in E which is positively invariant. If Ω+ (u, F) ∩ (M × 0∗ (Ω, h)) = ∅, then Ω+ (u, F) is contained in W.

Proof. The proof follows some ideas of [26, Theorem 3.1]. We know that Ω+ (u, F) ≠ ∅ and that Ω+ (u, F) is positively invariant; see Lemma 14.22 and Proposition 14.23. Consider (u∗ , F ∗ ) ∈ Ω+ (u, F).
Case 1: Ω+ (u, F) is a singleton.
In this case, Ω+ (u, F) = {(u∗ , F ∗ )}. By the positive invariance of Ω+ (u, F), we have

𝜋̃(t, u∗ , F ∗ ) = (u∗ , F ∗ )


for all t ⩾ 0. Consequently, we obtain V̇(u∗ , F ∗ ) = 0 and Ω+ (u, F) ⊂ E. Since W is the largest positively invariant subset in E, we conclude that Ω+ (u, F) ⊂ W.
Case 2: Ω+ (u, F) is not a singleton.
Let (u1 , F1 ), (u2 , F2 ) ∈ Ω+ (u, F). We claim that V(u1 , F1 ) = V(u2 , F2 ). Indeed, by the definition of a positive limit set, there exist sequences {𝜆n }n∈ℕ and {𝜅n }n∈ℕ in ℝ+ with 𝜆n → ∞ and 𝜅n → ∞ as n → ∞ such that

𝜋̃(𝜆n , u, F) → (u1 , F1 ) and 𝜋̃(𝜅n , u, F) → (u2 , F2 ) as n → ∞.

We may choose subsequences {𝜆nk }k∈ℕ and {𝜅nk }k∈ℕ such that 𝜆nk ⩽ 𝜅nk , k ∈ ℕ. Then, condition (ii) of Definition 14.24 implies that

V(𝜋̃(𝜅nk , u, F)) ⩽ V(𝜋̃(𝜆nk , u, F))

(14.22)

Letting k → ∞ in (14.22) and using the continuity of V, we get V(u2 , F2 ) ⩽ V(u1 , F1 ). Analogously, we may choose subsequences {𝜅nm }m∈ℕ and {𝜆nm }m∈ℕ such that 𝜅nm ⩽ 𝜆nm , m ∈ ℕ, and then V(u1 , F1 ) ⩽ V(u2 , F2 ). This proves the claim and, hence, V is constant on Ω+ (u, F). By the positive invariance of Ω+ (u, F), the derivative of V satisfies V̇(u∗ , F ∗ ) = 0 for every (u∗ , F ∗ ) ∈ Ω+ (u, F). Therefore, Ω+ (u, F) ⊂ W, which completes the proof. ◽
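To make condition (ii) of Definition 14.24 concrete, the following sketch (not from the book) checks the Lyapunov decrease along a trajectory of the classical ODE dx∕dt = −x with V(x) = x², a finite-dimensional analogue rather than the impulsive generalized-ODE setting; all names are hypothetical.

```python
# Classical finite-dimensional analogue (illustration only, not the book's
# impulsive generalized-ODE setting): for dx/dt = -x and V(x) = x**2 we have
# Vdot(x) = -2*x**2 <= 0, so E = {x : Vdot(x) = 0} = {0}, and the positive
# limit set of every bounded trajectory lies inside W = {0}.

def simulate(x0, dt=1e-3, steps=20_000):
    xs = [x0]
    for _ in range(steps):
        xs.append(xs[-1] + dt * (-xs[-1]))  # explicit Euler step of dx/dt = -x
    return xs

xs = simulate(3.0)
vs = [x * x for x in xs]  # V evaluated along the trajectory

# condition (ii) of Definition 14.24: V is nonincreasing along the motion
assert all(b <= a for a, b in zip(vs, vs[1:]))
print(abs(xs[-1]))  # very small: the trajectory approaches the set {0}
```

Here the invariance principle is visible numerically: V decreases monotonically, so the trajectory can only accumulate where V̇ vanishes.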

14.5 Recursive Properties

This section brings out some topological properties of an impulsive semidynamical system ( × 0∗ (Ω, h), 𝜋̃) as presented in Section 14.3. Consider the space  × 0∗ (Ω, h) with the metric

𝜚((x, F1 ), (y, F2 )) = ∥x − y∥ + 𝜌(F1 , F2 ), for all (x, F1 ), (y, F2 ) ∈  × 0∗ (Ω, h),

where 𝜌 is defined in (14.4). In addition, we shall assume the following condition (T):
(T) If (u, F) ∈ M × 0∗ (Ω, h), (𝑣, H) ∈  × 0∗ (Ω, h), and {tn }n∈ℕ is a sequence such that 𝜋̃(tn , 𝑣, H) → (u, F) as n → ∞, then there exists a sequence {𝛼n }n∈ℕ ⊂ ℝ+ , with 𝛼n → 0 as n → ∞, such that 𝜋(𝛼n , 𝜋̃(tn , 𝑣, H)) ∈ M × 0∗ (Ω, h) for n sufficiently large, that is, 𝜋̃(tn + 𝛼n , 𝑣, H) → (I(u), F) as n → ∞.

Next, we present the concepts of minimality and recurrence. The reader may consult these definitions in [18] for the case of continuous dynamical systems. For general impulsive systems, the reader can consult [30], for instance.

Definition 14.26: A subset Σ ⊂  × 0∗ (Ω, h) is called minimal whenever Σ ⧵ (M × 0∗ (Ω, h)) ≠ ∅, Σ is closed, Σ ⧵ (M × 0∗ (Ω, h)) is positively invariant, and Σ does not contain any proper subset satisfying the previous conditions.


Definition 14.27: A point (u, F) ∈  × 0∗ (Ω, h) is said to be recurrent if, for every 𝜖 > 0, there exists T = T(𝜖) > 0 such that, for every t, s ⩾ 0, the interval [0, T] contains a number 𝜏 > 0 such that 𝜚(𝜋̃(t, u, F), 𝜋̃(s + 𝜏, u, F)) < 𝜖. A subset Σ ⊂  × 0∗ (Ω, h) is said to be recurrent if each point (u, F) ∈ Σ is recurrent.

Next, we characterize the minimal sets of  × 0∗ (Ω, h). The proof of this result is based on the proof of [30, Theorem 4.4].

Theorem 14.28: A subset Σ ⊂  × 0∗ (Ω, h) is minimal if and only if Σ = 𝜋̃+ (u, F) for all (u, F) ∈ Σ ⧵ (M × 0∗ (Ω, h)).

Proof. Assume that Σ is minimal and let (u, F) ∈ Σ ⧵ (M × 0∗ (Ω, h)). By the positive invariance of Σ ⧵ (M × 0∗ (Ω, h)) and the fact that Σ is closed, we have 𝜋̃+ (u, F) ⊂ Σ. Since 𝜋̃+ (u, F) ⧵ (M × 0∗ (Ω, h)) ≠ ∅, 𝜋̃+ (u, F) is closed, and 𝜋̃+ (u, F) ⧵ (M × 0∗ (Ω, h)) is positively invariant (see Lemma 14.20), the minimality of Σ yields Σ = 𝜋̃+ (u, F).
Now, assume that Σ = 𝜋̃+ (u, F) for all (u, F) ∈ Σ ⧵ (M × 0∗ (Ω, h)). Let Γ ⊂ Σ be such that Γ ⧵ (M × 0∗ (Ω, h)) ≠ ∅, Γ is closed, and Γ ⧵ (M × 0∗ (Ω, h)) is positively invariant. Take (𝑣, F) ∈ Γ ⧵ (M × 0∗ (Ω, h)). Then (𝑣, F) ∈ Σ ⧵ (M × 0∗ (Ω, h)), which yields Σ = 𝜋̃+ (𝑣, F). Finally, since Γ ⧵ (M × 0∗ (Ω, h)) is positively invariant, we obtain Γ ⊂ Σ = 𝜋̃+ (𝑣, F) ⊂ Γ, that is, Σ = Γ. Therefore, Σ is minimal. ◽

By the proof of Theorem 14.28, we obtain the following result.

Theorem 14.29: Let Σ ⊂  × 0∗ (Ω, h) and assume that Ω+ (u, F) ⧵ (M × 0∗ (Ω, h)) ≠ ∅ for all (u, F) ∈ Σ. Then, Σ is minimal if and only if Σ = Ω+ (u, F) for all (u, F) ∈ Σ ⧵ (M × 0∗ (Ω, h)).

In the sequel, we prove that compact minimal sets are recurrent, as in Birkhoff's theorem (see [30, Theorem 4.17]). With slight modifications, the proof follows similarly.


Theorem 14.30: If the set Σ ⊂  × 0∗ (Ω, h) is minimal and compact, then the set Σ ⧵ (M × 0∗ (Ω, h)) is recurrent.

Proof. Suppose the contrary, that is, there exists (u, F) ∈ Σ ⧵ (M × 0∗ (Ω, h)) which is not recurrent. Then, there are 𝜖 > 0 and sequences {𝜆n }n∈ℕ , {sn }n∈ℕ , {tn }n∈ℕ ⊂ ℝ+ , with 𝜆n → ∞ as n → ∞, such that

𝜚(𝜋̃(tn , u, F), 𝜋̃(sn + 𝜏, u, F)) ⩾ 𝜖, for every 𝜏 ∈ [0, 𝜆n ] and n ∈ ℕ.    (14.23)

Note that 𝜋̃(tn , u, F), 𝜋̃(sn + 𝜆n∕2, u, F) ∈ Σ for all n ∈ ℕ, since Σ ⧵ (M × 0∗ (Ω, h)) is positively invariant. By the compactness of Σ, we can assume that there are (u1 , F1 ), (u2 , F2 ) ∈ Σ such that

𝜚(𝜋̃(tn , u, F), (u1 , F1 )) → 0 and 𝜚(𝜋̃(sn + 𝜆n∕2, u, F), (u2 , F2 )) → 0 as n → ∞.

We have two cases to consider: u2 ∉ M and u2 ∈ M.
Case 1: u2 ∉ M.
Let t ⩾ 0 be fixed and arbitrary and assume that t ≠ 𝜑((u2 )+0 , F) + ⋯ + 𝜑((u2 )+k , F) for all k ∈ ℕ0 , that is, t is not a jump time. Using the continuity of 𝜋 and I, we obtain 𝛿 > 0 such that, if 𝜚((𝑤, I), (u2 , F2 )) < 𝛿, then

𝜚(𝜋̃(t, 𝑤, I), 𝜋̃(t, u2 , F2 )) < 𝜖∕3.    (14.24)

Now, let n0 ∈ ℕ be such that 𝜆n0∕2 > t,

𝜚(𝜋̃(tn0 , u, F), (u1 , F1 )) < 𝜖∕3 and 𝜚(𝜋̃(sn0 + 𝜆n0∕2, u, F), (u2 , F2 )) < 𝛿.    (14.25)

Using (14.23), (14.24), and (14.25), we get

𝜚(𝜋̃(t, u2 , F2 ), (u1 , F1 )) ⩾ 𝜚(𝜋̃(tn0 , u, F), 𝜋̃(sn0 + 𝜆n0∕2 + t, u, F)) − 𝜚(𝜋̃(t, u2 , F2 ), 𝜋̃(t, 𝜋̃(sn0 + 𝜆n0∕2, u, F))) − 𝜚(𝜋̃(tn0 , u, F), (u1 , F1 )) > 𝜖 − 𝜖∕3 − 𝜖∕3 = 𝜖∕3.

By the arbitrariness of t, we conclude that 𝜚(𝜋̃(t, u2 , F2 ), (u1 , F1 )) > 𝜖∕3 for all t ⩾ 0 with t ≠ 𝜑((u2 )+0 , F) + ⋯ + 𝜑((u2 )+k , F), k ∈ ℕ0 .
On the other hand, if there exists k ∈ ℕ such that t = 𝜑((u2 )+0 , F) + ⋯ + 𝜑((u2 )+k , F), we may choose a sequence {𝛽n }n⩾1 ⊂ ℝ+ such that

𝛽n → 𝜑((u2 )+0 , F) + ⋯ + 𝜑((u2 )+k , F) as n → ∞, and 𝜑((u2 )+0 , F) + ⋯ + 𝜑((u2 )+k , F) < 𝛽n for n ∈ ℕ.

Letting n → ∞, we obtain

𝜚(𝜋̃(𝜑((u2 )+0 , F) + ⋯ + 𝜑((u2 )+k , F), u2 , F2 ), (u1 , F1 )) ⩾ 𝜖∕3,

because 𝜋̃ is right-continuous. Therefore,

𝜚(𝜋̃(t, u2 , F2 ), (u1 , F1 )) ⩾ 𝜖∕3, for all t ⩾ 0.

Thus (u1 , F1 ) ∉ 𝜋̃+ (u2 , F2 ). Now, since Σ is minimal, we have Σ = 𝜋̃+ (u2 , F2 ), that is, (u1 , F1 ) ∉ Σ, which is a contradiction.
Case 2: u2 ∈ M.
By condition (T), there exists a sequence {𝛼n }n∈ℕ ⊂ ℝ+ , with 𝛼n → 0 as n → ∞, such that

𝜚(𝜋̃(𝛼n + sn + 𝜆n∕2, u, F), (I(u2 ), F2 )) → 0 as n → ∞.

Since Σ ⧵ (M × 0∗ (Ω, h)) is positively invariant and (u, F) ∈ Σ ⧵ (M × 0∗ (Ω, h)), we get (I(u2 ), F2 ) ∈ Σ, since Σ is closed. Now, we consider the motion 𝜋̃(t, I(u2 ), F2 ) for every t ⩾ 0. Since I(M) ∩ M = ∅, we have I(u2 ) ∉ M. Following the ideas of Case 1, we conclude that

𝜚(𝜋̃(t, I(u2 ), F2 ), (u1 , F1 )) ⩾ 𝜖∕3, for all t ⩾ 0,

which yields (u1 , F1 ) ∉ 𝜋̃+ (I(u2 ), F2 ), which, in turn, is a contradiction, since Σ is minimal. Cases 1 and 2 imply that Σ ⧵ (M × 0∗ (Ω, h)) is recurrent. ◽

Corollary 14.31: Let (u, F) ∈  × 0∗ (Ω, h) be given. If Ω+ (u, F) is compact and minimal, then Ω+ (u, F) ⧵ (M × 0∗ (Ω, h)) is recurrent.


15 Applications to Functional Differential Equations of Neutral Type

Fernando G. Andrade 1 , Miguel V. S. Frasson 2 , and Patricia H. Tacuri 3

1 Colégio Técnico de Bom Jesus, Universidade Federal do Piauí, Bom Jesus, PI, Brazil
2 Departamento de Matemática Aplicada e Estatística, Instituto de Ciências Matemáticas e de Computação (ICMC), Universidade de São Paulo, São Carlos, SP, Brazil
3 Departamento de Matemática, Centro de Ciências Exatas, Universidade Estadual de Maringá, Maringá, PR, Brazil

15.1 Drops of History

Present in mathematics for a long time, differential equations appeared concomitantly with the emergence of Calculus. On 29 October 1675, Gottfried W. Leibniz introduced both the notation ∫ , which resembled the italic letter "S" of old texts, and the notation ∫ l, which represented the sum of the functions l. Leibniz's manuscript can be found in [102]. In particular, on page 125, one finds the sentence

Utile erit scribi ∫ pro omn., ut ∫ l pro omn. l, id est summa ipsorum l.

which, translated to English, means

It will be helpful to write ∫ instead of omn., and ∫ l instead of omn. l, that is, the sum of l.

This implies that, before the introduction of the notations ∫ and ∫ l, one would read omn. and omn. l, respectively. Much later, in 1744, the Bernoulli brothers, Johann and Jakob, suggested, for the first time, the terminology "integral of l" for the notation ∫ l. Reference [16], called "Opera," contains several articles due to Jakob Bernoulli, one of which contains the sentence below with the term "integral." Notice that [16] was published

Generalized Ordinary Differential Equations in Abstract Spaces and Applications, First Edition. Edited by Everaldo M. Bonotto, Márcia Federson, and Jaqueline G. Mesquita. © 2021 John Wiley & Sons, Inc. Published 2021 by John Wiley & Sons, Inc.



later than the year of publication of the same article in Acta Eruditorum. Thus, on [16, p. 423], one reads

Ergo & horum Integralia æquantur…

which is equivalent to

And then these integrals are equal to…

Back to Leibniz's manuscript of 29 October 1675, it is possible to encounter the words

…si sit ∫ l ⊓ ya. Ponemus l ⊓ ya∕d, nempe ut ∫ augebit, ita d minuet dimensiones. ∫ autem significat summam, d differentiam.

or, equivalently,

…if ∫ l ⊓ ya, we will put l ⊓ ya∕d; namely, as ∫ increases, so d decreases the dimensions. ∫ means summation, d differentiation.

where the symbol ⊓ stands for the symbol = of equality, signifying that, while the integral, ∫ , increases the dimensions, the derivative, d, decreases them.
Continuing in 1675, but now in a different manuscript, dated 11 November, Leibniz wrote that x∕d and dx denoted the same thing, and he not only introduced the notations dx and dy to represent the differentials of x and y, but also the symbol dx∕dy to represent the derivative of x with respect to y, after concluding that neither dx dy and d(xy) are the same, nor d(x∕y) equals dx∕dy. In this manuscript, which can be found in [101, Beilage II (Appendix II)], pages 32–40, one reads

Videndum an dx dy idem sit quod d xy, et an dx∕dy idem quod d x∕y …

which means

Let us see if dx dy is the same as d xy, and whether dx∕dy is the same as d x∕y …

In the following year, Leibniz used, for the first time, the term "æquatio differentialis" to denote a relationship between the differentials dx and dy of the variables x and y, respectively. But differentials only appeared in a manuscript published a few years later, in 1684. As a matter of fact, the year 1684 is considered to be the formal date of the arising of Calculus, and this manuscript of Leibniz, entitled
Nova Methodus pro Maximis et Minimis, can be found in [164]. In particular, we refer to pages 467 or 469, where one can find the term "æquatio differentialis". According to the historian F. Cajori in [34, p. 213], Sir Isaac Newton wrote a coded message to Leibniz containing the information

Data æquatione quotcunque fluentes quantitates involvente fluxiones invenire, et vice versa.

or, equivalently,

Given an equation involving any number of fluent quantities, to find the fluxions, and vice versa.

which essentially contains the idea of a derivative. In 1671, Newton wrote his work entitled Methodus Fluxionum et Serierum Infinitarum, whose first version was only published much later, in 1736, when John Colson translated it from Latin into English as The Method of Fluxions and Infinite Series with its Application to the Geometry of CURVE-LINES (see [187]). In this work, Newton tried to solve differential equations, which he referred to as "Fluxional Equations." Newton considered that, if x is a quantity (called by him a fluent) that is indefinitely increasing, then the derivative ẋ of x would be the speed at which the quantity x increases (called by him a fluxion). See [34, p. 194].
The progress of the theory of differential equations in the final years of the seventeenth century was mainly due to Leibniz, Newton, and the Bernoulli brothers. The development of such a theory continued throughout the eighteenth century and, among other authors, we can highlight Daniel Bernoulli, son of Johann Bernoulli, and his contemporary, Leonhard Euler. As a matter of fact, in 1755, Euler published a rigorous reformulation of Calculus in Institutiones calculi differentialis (see [64]) and Institutionum calculi integralis (see [65–67]). At the same time, a branch of a special type of equations was being developed, namely the branch of functional equations. Recall that a functional equation is any equation in which the argument of the unknown function is also a function.
Some functional equations are well known. For instance, in 1821 (see [36, p. 104–113]), Augustin Louis Cauchy solved the following four different functional equations:

f(t + s) = f(t) + f(s),  f(ts) = f(t) + f(s),  f(t + s) = f(t) × f(s),  f(ts) = f(t) × f(s).

The first equation, known as the Cauchy additive functional equation, was already known by Adrien-Marie Legendre in 1794 in his study of areas (see [159, Note IV, p. 293]).


Considérons un rectangle dont les dimensions sont p et q, et sa surface, qui est une fonction de p et q; représentons-la par 𝜑(p, q). Si on considère un autre rectangle dont les dimensions sont p + p′ et q, il est clair que ce rectangle est composé de deux autres, l'un qui a pour dimensions p et q, l'autre qui a pour dimensions p′ et q, de sorte qu'on aura 𝜑(p + p′ , q) = 𝜑(p, q) + 𝜑(p′ , q)

which, translated to English, means

Consider a rectangle whose dimensions are p and q, and its area, which is a function of p and q; represent it by 𝜑(p, q). If we consider another rectangle whose dimensions are p + p′ and q, it is clear that this rectangle is composed of two others, one with dimensions p and q, the other with dimensions p′ and q, so that we have 𝜑(p + p′ , q) = 𝜑(p, q) + 𝜑(p′ , q).

The Cauchy functional equations were also investigated by Johan Ludwig William Valdemar Jensen in 1878 (see [137]) and in 1905 (see [138]). In fact, Jensen solved the Cauchy functional equations in 1878, and it was only in 1905 that Jensen wrote about the functional equation which carries his name and is a modification of the Cauchy additive functional equation. Later, it became known as the Jensen functional equation. Thus, in [138] one reads

f((t + s)∕2) = (f(t) + f(s))∕2.

Note that, when s = 0 in the Jensen functional equation, one obtains the Cauchy additive functional equation. As a matter of fact, functional equations had been investigated since 1747, by Jean le Rond d'Alembert, for example (see [50–52]). In 1769, d'Alembert solved the problem of the parallelogram of forces by reducing it to the solution of the following functional equation: f(t + s) + f(t − s) = 2f(t)f(s). Indeed, on [53, p. 279], one reads

Donc, substituant cette dernière valeur de z𝜑u dans l'équation précédente, on aura après les réductions 𝜑𝛼 + 𝜑(𝛼 + 2m′ ) = 𝜑m′ × 𝜑(𝛼 + m′ ), ou en faisant 𝛼 + m′ = ★, 𝜑(★ − m′ ) + 𝜑(★ + m′ ) = 𝜑m′ × 𝜑★

which means

Thus, substituting this last value of z𝜑u in the previous equation, after the reductions we will have 𝜑𝛼 + 𝜑(𝛼 + 2m′ ) = 𝜑m′ × 𝜑(𝛼 + m′ ), or, making 𝛼 + m′ = ★, 𝜑(★ − m′ ) + 𝜑(★ + m′ ) = 𝜑m′ × 𝜑★.


Nowadays, in the particular case where the difference in the arguments of a functional equation is constant, as in f(t) = f(t − 1) + f(t + 2), for example, the functional equation is said to be a difference equation (see [61]). When the ideas of differential equations are combined with those of functional equations, functional differential equations emerge. The latter appeared for the first time in a work of the Marquis de Condorcet (the title by which Marie Jean Antoine Nicolas de Caritat Condorcet is known) entitled Sur la détermination des fonctions arbitraires qui entrent dans les intégrales des Équations aux differences partielles, where one reads, on [46, p. 52],

Lorsqu'on a F′ , on en déduit F par une équation ou finie, ou aux différences infiniment petites.

This sentence means

When we have F′ , we deduce F by an equation either finite, or with infinitely small differences.

According to Anatolii D. Myshkis (see [186]), the motivation for Condorcet to develop this type of equation came from geometric problems proposed by Euler in [63]. More specifically, Euler was looking for curves whose evolutes are similar to the curves themselves.
Differential equations have always been used in applications in the sciences and engineering. We can mention, for example, one of the first models for application in population dynamics, which is due to Thomas Robert Malthus [171]. The Malthusian model (see [185, p. 2]) is described by

dy(t)∕dt = by(t),

where b > 0 can be understood as the reproduction rate. The model says that the growth of a population is proportional to the number of individuals in the current population. After reading Malthus' work, Pierre François Verhulst (see [231, 232]) proposed the following model, in which the environment has a capacity K:

dy(t)∕dt = by(t)(1 − y(t)∕K).    (15.1)

The term b(1 − y(t)∕K) in the Verhulst model is the per capita birth rate.
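As a numerical illustration (not part of the original text), the Malthusian and Verhulst models can be compared with an explicit Euler scheme; the step size and the values of b and K below are arbitrary choices.

```python
# Hypothetical parameter values; explicit Euler integration over t in [0, 20].
def euler(deriv, y0, dt=0.01, steps=2_000):
    y = y0
    for _ in range(steps):
        y += dt * deriv(y)
    return y

b, K = 0.5, 100.0
malthus = euler(lambda y: b * y, y0=1.0)                   # unbounded growth
verhulst = euler(lambda y: b * y * (1.0 - y / K), y0=1.0)  # saturates near K

assert malthus > K           # exponential growth exceeds any fixed capacity
assert 0.0 < verhulst < K    # logistic growth stays below the capacity K
```

The two trajectories start identically for small populations, but the logistic term b(1 − y∕K) caps the Verhulst solution at the environmental capacity.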
Such model is also known as the logistic equation, and it suggests that the population starts to decrease immediately after reaching the maximum capacity of the environment K. However, it is known that this equation does not describe the real behavior

433

434

15 Applications to Functional Differential Equations of Neutral Type

of a single species population. For example, in [198], David M. Pratt stated that the fluctuation in the population dynamics of small planktonic crustacean called Daphnia magna is due to a delay in the impact of the effects of density. In this case, George Evelyn Hutchinson (see [133]) suggested that the model for describing population dynamics shall be expressed by ) ( y(t − r) d . y(t) = by(t) 1 − K dt Unlike the Verhulst model, in Hutchinson’s equation, the reproductive process continues after the population reaches the capacity K of the environment, being interrupted after a time r > 0. Such number r represents a delay or a retardation in time. While in Hutchinson’s model, known as “delayed logistic equation,” the delay is considered within the death rate, some approaches based on the experiments of the entomologist Alexander John Nicholson (see [189]) suggest that the delay should be set at the birth rate, since the reproduction rate may not only depend on the current population density but also on the population density in the past. Such a model can be described by the equation d y(t) = b( y(t − r))y(t − r) − 𝜆( y(t))y(t). dt where b( y(t)) is the birth rate per head, 𝜆( y(t)) is the death rate per head, and r denotes the time needed for an egg to become an adult fly. This equation is known as “Nicholson’s blowflies equation.” See, for instance, [111, 195] and [112]. Frederick Edward Smith, on the other hand, in an investigation using Daphnia magna (see [222]) mentioned that the per capita rate in (15.1) should be replaced by b(1 − ( y(t) + c dy(t)∕dt)∕K), since a growing population consumes much more food than a population that has already reached maturity, with c being a positive or negative constant. In this case, the population dynamics model would take the form ) ( y(t) − c dy(t)∕dt d . 
y(t) = by(t) 1 − K dt For Yang Kuang (see [146]), it was reasonable to think that y(t) represents in a population of individuals that feed on a pasture which, in turn, needs time to regenerate. A better approach is therefore, a neutral logistic equation of the form ) ( y(t − r) − c dy(t − r)∕dt d . y(t) = by(t) 1 − K dt This equation was investigated for the first time by Kondalsamy Gopalsamy and Bing Gen Zhang in 1988 (see [106]). The theory behind functional differential equations of neutral type (we write FDE of neutral type, for short) is relatively new and is generally related to the


space of continuous functions as the phase space. Here, we propose an expansion of this branch of equations by studying measure FDEs of neutral type within the framework of generalized ODEs. In order to do this, we present a relation, borrowed from [76], between the latter and measure FDEs of neutral type. We use the existing theory of generalized ODEs to obtain our results, namely, existence and uniqueness of solutions and continuous dependence on parameters. We end this chapter with an example, also borrowed from [76], which illustrates the theory.
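The classical delay models recalled in Section 15.1 are easy to experiment with numerically. Below is a minimal sketch integrating Hutchinson's delayed logistic equation dy(t)/dt = b y(t)(1 − y(t − r)/K) with a forward Euler scheme; the parameter values, the constant initial history, and the step size are illustrative assumptions, not taken from the text.

```python
# Forward Euler integration of Hutchinson's delayed logistic equation
#   y'(t) = b * y(t) * (1 - y(t - r) / K)
# with constant history y(t) = y0 for t <= 0.
# All parameter values below are illustrative assumptions.

b, K, r = 0.5, 1.0, 1.0       # reproduction rate, capacity, delay
y0 = 0.1                      # constant initial history
dt, T = 0.01, 50.0            # step size and time horizon

n_delay = int(round(r / dt))  # number of steps spanning the delay r
n_steps = int(round(T / dt))

# y[i] approximates y(i * dt); the first n_delay + 1 entries hold the history.
y = [y0] * (n_delay + 1)
for i in range(n_steps):
    y_now = y[-1]
    y_lag = y[-1 - n_delay]   # the delayed value y(t - r)
    y.append(y_now + dt * b * y_now * (1.0 - y_lag / K))

y_final = y[-1]
```

Since b·r = 0.5 here, the positive equilibrium y = K of Hutchinson's equation is asymptotically stable, so the computed trajectory should settle near K after damped oscillations.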

15.2 FDEs of Neutral Type with Finite Delay

Let t0 ∈ ℝ, r > 0 and Ω ⊂ C([−r, 0], ℝⁿ) × ℝ be an open subset. The theory of FDEs of neutral type is usually concerned with equations of the form

  (d/dt) M(y_t, t) = f(y_t, t),  t ⩾ t0,   (15.2)

where f: Ω → ℝⁿ is continuous and y_t denotes the memory function y_t(θ) = y(t + θ), θ ∈ [−r, 0], for every t ⩾ t0. In the sequel, we recall the concept of a normalized function.

Definition 15.1: We say that a matrix-valued function 𝜁: [−r, 0] → ℝ^{n×n} of bounded variation is normalized, if 𝜁 is left-continuous on (−r, 0) and 𝜁(0) = 0.

We assume that the operator M in (15.2) is linear and continuous with respect to the first variable (in this case, we use the notation M(𝜑, t) = M(t)𝜑) and is given by

  M(t)𝜑 = 𝜑(0) − ∫_{−r}^{0} d_θ[𝜉(t, θ)] 𝜑(θ),   (15.3)

for 𝜑 ∈ C([−r, 0], ℝⁿ) and t ⩾ t0, where the function 𝜉: ℝ × ℝ → ℝ^{n×n} is measurable and, for each t ⩾ t0, 𝜉(t, ⋅) is an n × n matrix-valued function, normalized in [−r, 0], with

  𝜉(t, θ) = 𝜉(t, −r), θ ⩽ −r,  and  𝜉(t, θ) = 0, θ ⩾ 0,

such that

(N1) the variation of 𝜉(t, ⋅) on [s, 0], var_{s}^{0}(𝜉(t, ⋅)), tends to zero as s → 0.

Next, we define a solution of Eq. (15.2).

Definition 15.2: We say that a function y: [t0, t0 + 𝜎] → ℝⁿ, with 𝜎 > 0, is a solution of the FDE of neutral type (15.2), if (y_t, t) ∈ Ω for all t ∈ [t0, t0 + 𝜎], and dM(y_t, t)/dt = f(y_t, t) holds for almost every t ∈ [t0, t0 + 𝜎].
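To get a concrete feel for the operator (15.3), one can approximate the Stieltjes integral ∫_{−r}^{0} d_θ[𝜉(t, θ)] 𝜑(θ) by Riemann–Stieltjes sums. The sketch below takes the scalar case n = 1 with 𝜉 having a single jump at θ = −r (constant equal to −A for θ ⩽ −r and 0 for θ > −r, consistent with Definition 15.1), for which M(t)𝜑 reduces to the classical discrete-delay operator 𝜑(0) − A𝜑(−r). The choices of A, r, 𝜑, and the mesh are illustrative assumptions.

```python
# Stieltjes-sum approximation of  M(t)phi = phi(0) - ∫_{-r}^{0} d_theta[xi(theta)] phi(theta)
# in the scalar case, with xi carrying a single jump of size A at theta = -r:
#   xi(theta) = -A for theta <= -r,   xi(theta) = 0 for theta > -r.
# Then the Stieltjes integral equals A * phi(-r), so M reduces to the
# classical discrete-delay operator phi(0) - A * phi(-r).
# A, r, phi, and the mesh are illustrative assumptions.

A, r = 2.0, 1.0

def xi(theta):
    return -A if theta <= -r else 0.0

def phi(theta):
    return theta + 2.0            # a sample continuous initial function

N = 10_000                        # number of partition subintervals
nodes = [-r + r * i / N for i in range(N + 1)]

# left-tagged Riemann-Stieltjes sum for ∫_{-r}^{0} d[xi] phi
stieltjes = sum(phi(nodes[i]) * (xi(nodes[i + 1]) - xi(nodes[i]))
                for i in range(N))

M_phi = phi(0.0) - stieltjes      # the operator M(t) applied to phi
exact = phi(0.0) - A * phi(-r)
```

Only the subinterval containing the jump contributes to the sum, so the approximation recovers the discrete-delay term exactly here; for a 𝜉 of general bounded variation, refining the mesh gives convergence instead.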


The integral form of the FDE of neutral type (15.2) is, then, given by

  y(t) = y(t0) + ∫_{t0}^{t} f(y_s, s) ds + ∫_{−r}^{0} d_θ[𝜉(t, θ)] y(t + θ) − ∫_{−r}^{0} d_θ[𝜉(t0, θ)] y(t0 + θ),

for all t ∈ [t0, t0 + 𝜎], where the integrals can be taken in the sense of Riemann–, Lebesgue–, or Perron–Stieltjes. A systematic study of this class of equations can be found in [115]. We are interested in broadening the class of neutral equations, for instance, by weakening continuity hypotheses, and in obtaining qualitative information also on solutions that may not be continuous.

Given t0 ∈ ℝ and r, 𝜎 > 0, let O ⊂ G([t0 − r, t0 + 𝜎], ℝⁿ) be an open subset and consider the set

  P = {y_t : y ∈ O and t0 ⩽ t ⩽ t0 + 𝜎} ⊂ G([−r, 0], ℝⁿ).   (15.4)
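The integrals used throughout this section are of Stieltjes type, and when the integrator is nondecreasing with a jump, the integral acquires a point contribution of the form f(𝜏)Δ⁺g(𝜏). A scalar sketch of this effect via Stieltjes sums; the choices of f, g, the jump point, and the mesh are illustrative assumptions.

```python
# Stieltjes-sum approximation of ∫_0^1 f(s) dg(s), where g is nondecreasing
# with a single unit jump at tau = 0.5:
#   g(s) = s        for s <  tau,
#   g(s) = s + 1.0  for s >= tau.
# For continuous f, the integral equals ∫_0^1 f(s) ds plus the jump
# contribution f(tau) * (jump of g at tau).
# f, g, tau, and the mesh are illustrative assumptions.

tau, jump = 0.5, 1.0

def f(s):
    return s

def g(s):
    return s + (jump if s >= tau else 0.0)

N = 100_000
nodes = [i / N for i in range(N + 1)]

# left-tagged Stieltjes sum; the subinterval containing tau picks up the jump
approx = sum(f(nodes[i]) * (g(nodes[i + 1]) - g(nodes[i])) for i in range(N))

exact = 0.5 + f(tau) * jump   # ∫_0^1 s ds + f(0.5) * 1
```

This is precisely the mechanism by which measure FDEs encode impulses: a jump of the integrator g at 𝜏 produces an instantaneous increment of the solution proportional to f(y_𝜏, 𝜏).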

Assume that f: P × [t0, t0 + 𝜎] → ℝⁿ is a function such that t ↦ f(y_t, t) is integrable over [t0, t0 + 𝜎] in the sense of Perron–Stieltjes, for each y ∈ O, with respect to a nondecreasing function g: [t0, t0 + 𝜎] → ℝ. Here, we deal with a measure FDE of neutral type with integral form

  y(t) = y(t0) + ∫_{t0}^{t} f(y_s, s) dg(s) + ∫_{−r}^{0} d_θ[𝜉(t, θ)] y(t + θ) − ∫_{−r}^{0} d_θ[𝜉(t0, θ)] y(t0 + θ),   (15.5)

where t ∈ [t0, t0 + 𝜎].

Definition 15.3: We say that a function y: [t0, t0 + 𝜎] → ℝⁿ is a solution of the measure FDE of neutral type (15.5), whenever (y_t, t) ∈ P × [t0, t0 + 𝜎], the Perron–Stieltjes integral ∫_{t0}^{t} f(y_s, s) dg(s) exists, and the equality (15.5) is satisfied for all t ∈ [t0, t0 + 𝜎].

Given an arbitrary element x̃ ∈ G([t0 − r, t0 + 𝜎], ℝⁿ) and c ⩾ 1, consider the sets

  O = B_c = {z ∈ G([t0 − r, t0 + 𝜎], ℝⁿ) : ‖z − x̃‖_∞ < c}  and  P = P_c = {y_t : y ∈ B_c and t0 ⩽ t ⩽ t0 + 𝜎}.

Notice that B_c has the prolongation property (see Definition 4.15).

In this section, we want to investigate some qualitative properties of a measure FDE of neutral type using a biunivocal relation between the solution of Eq. (15.5), with initial condition y_{t0} = 𝜑 ∈ P_c, and the solution of the generalized ODE

  dx/d𝜏 = DF(x, t),   (15.6)

with initial condition x(t0) = x0 ∈ B_c, where F: B_c × [t0, t0 + 𝜎] → G([t0 − r, t0 + 𝜎], ℝⁿ) is defined by

  F(x, t)(𝜗) =
  ⎧ 0,  𝜗 ∈ [t0 − r, t0],
  ⎨ ∫_{t0}^{𝜗} f(x_s, s) dg(s) + ∫_{−r}^{0} d_θ[𝜉(𝜗, θ)] x(𝜗 + θ) − ∫_{−r}^{0} d_θ[𝜉(t0, θ)] x(t0 + θ),  𝜗 ∈ [t0, t],
  ⎩ ∫_{t0}^{t} f(x_s, s) dg(s) + ∫_{−r}^{0} d_θ[𝜉(t, θ)] x(t + θ) − ∫_{−r}^{0} d_θ[𝜉(t0, θ)] x(t0 + θ),  𝜗 ∈ [t, t0 + 𝜎].   (15.7)

In order to fulfill this purpose, given t0 < a < b < t0 + 𝜎 and y, z ∈ B_c, we shall assume that the function f: P_c × [t0, t0 + 𝜎] → ℝⁿ and the nondecreasing function g: [t0, t0 + 𝜎] → ℝ satisfy conditions (A), (B), and (C) (see p. 162). Consider, in addition, the following condition on the normalized function 𝜉: ℝ × ℝ → ℝ^{n×n}:

(N2) there exists a Perron integrable function C: [t0, t0 + 𝜎] → ℝ such that, for all s1, s2 ∈ [t0, t0 + 𝜎], with s1 ⩽ s2, and z ∈ O, we have

  ‖∫_{−r}^{0} d_θ[𝜉(s2, θ)] z(s2 + θ) − ∫_{−r}^{0} d_θ[𝜉(s1, θ)] z(s1 + θ)‖ ⩽ ∫_{s1}^{s2} C(s) ( ∫_{−r}^{0} d_θ[𝜉(s, θ)] ‖z(s + θ)‖ ) ds.

Next, we present a lemma which gives sufficient conditions for the function F to belong to the class ℱ(B_c × [t0, t0 + 𝜎], h) (see Definition 4.3), where h is a nondecreasing function. Then, we present a pair of theorems which form a bridge between generalized ODEs and measure FDEs of neutral type with finite delay. This bridge relates the solution y: [t0 − r, t0 + 𝜎] → ℝⁿ of the measure FDE of neutral type (15.5), with initial condition y_{t0} = 𝜑 ∈ P_c, to the solution x: [t0, t0 + 𝜎] → B_c of the generalized ODE (15.6), with initial condition x(t0) = x0. Such a relation is described by Theorems 15.6 and 15.7 and is based on

  x(t)(𝜗) = { y(𝜗), 𝜗 ∈ [t0 − r, t],   y(t), 𝜗 ∈ [t, t0 + 𝜎].   (15.8)

Both theorems are borrowed from [76], as well as Lemma 15.4.

Lemma 15.4: Let c ⩾ 1. Assume that g: [t0, t0 + 𝜎] → ℝ is a nondecreasing function and f: P_c × [t0, t0 + 𝜎] → ℝⁿ satisfies conditions (A)–(C). Moreover, suppose the


normalized function 𝜉: ℝ × ℝ → ℝ^{n×n} satisfies conditions (N1) and (N2). Then, the function F: B_c × [t0, t0 + 𝜎] → G([t0 − r, t0 + 𝜎], ℝⁿ) given by (15.7) belongs to the class ℱ(B_c × [t0, t0 + 𝜎], h), where h: [t0, t0 + 𝜎] → ℝ is given by

  h(t) = ∫_{t0}^{t} [L(s) + M(s)] dg(s) + ( ∫_{t0}^{t} C(s) var_{−r}^{0}(𝜉(s, ⋅)) ds ) (‖x̃‖_∞ + c).   (15.9)

Proof. At first, note that condition (A) implies that the integrals that appear in (15.7) exist. Given x ∈ B_c and t0 ⩽ t1 < t2 ⩽ t0 + 𝜎, we have

  [F(x, t2) − F(x, t1)](𝜗) =   (15.10)
  ⎧ 0,  𝜗 ∈ [t0 − r, t1],
  ⎨ ∫_{t1}^{𝜗} f(x_s, s) dg(s) + ∫_{−r}^{0} d_θ[𝜉(𝜗, θ)] x(𝜗 + θ) − ∫_{−r}^{0} d_θ[𝜉(t1, θ)] x(t1 + θ),  𝜗 ∈ [t1, t2],   (15.11)
  ⎩ ∫_{t1}^{t2} f(x_s, s) dg(s) + ∫_{−r}^{0} d_θ[𝜉(t2, θ)] x(t2 + θ) − ∫_{−r}^{0} d_θ[𝜉(t1, θ)] x(t1 + θ),  𝜗 ∈ [t2, t0 + 𝜎],

and, using conditions (B) and (N2), we obtain

  ‖F(x, t2) − F(x, t1)‖_∞ = sup_{𝜗∈[t1,t2]} ‖[F(x, t2) − F(x, t1)](𝜗)‖
  = sup_{𝜗∈[t1,t2]} ‖∫_{t1}^{𝜗} f(x_s, s) dg(s) + ∫_{−r}^{0} d_θ[𝜉(𝜗, θ)] x(𝜗 + θ) − ∫_{−r}^{0} d_θ[𝜉(t1, θ)] x(t1 + θ)‖
  ⩽ sup_{𝜗∈[t1,t2]} ( ∫_{t1}^{𝜗} M(s) dg(s) + ∫_{t1}^{𝜗} C(s) ( ∫_{−r}^{0} d_θ[𝜉(s, θ)] ‖x(s + θ)‖ ) ds )
  ⩽ ∫_{t1}^{t2} M(s) dg(s) + ∫_{t1}^{t2} C(s) ( ∫_{−r}^{0} d_θ[𝜉(s, θ)] ‖x(s + θ)‖ ) ds
  ⩽ ∫_{t1}^{t2} M(s) dg(s) + ( ∫_{t1}^{t2} C(s) var_{−r}^{0}(𝜉(s, ⋅)) ds ) ‖x‖_∞
  ⩽ ∫_{t1}^{t2} M(s) dg(s) + ( ∫_{t1}^{t2} C(s) var_{−r}^{0}(𝜉(s, ⋅)) ds ) (‖x̃‖_∞ + c)
  ⩽ h(t2) − h(t1).


Now, for x, y ∈ B_c and t0 ⩽ t1 ⩽ t2 ⩽ t0 + 𝜎, we have

  [F(x, t2) − F(x, t1) − F(y, t2) + F(y, t1)](𝜗) =
  ⎧ 0,  𝜗 ∈ [t0 − r, t1],
  ⎨ ∫_{t1}^{𝜗} [f(x_s, s) − f(y_s, s)] dg(s) + ∫_{−r}^{0} d_θ[𝜉(𝜗, θ)][x(𝜗 + θ) − y(𝜗 + θ)] − ∫_{−r}^{0} d_θ[𝜉(t1, θ)][x(t1 + θ) − y(t1 + θ)],  𝜗 ∈ [t1, t2],
  ⎩ ∫_{t1}^{t2} [f(x_s, s) − f(y_s, s)] dg(s) + ∫_{−r}^{0} d_θ[𝜉(t2, θ)][x(t2 + θ) − y(t2 + θ)] − ∫_{−r}^{0} d_θ[𝜉(t1, θ)][x(t1 + θ) − y(t1 + θ)],  𝜗 ∈ [t2, t0 + 𝜎],

and, according to hypotheses (C) and (N2), we have

  ‖F(x, t2) − F(x, t1) − F(y, t2) + F(y, t1)‖_∞ = sup_{𝜗∈[t1,t2]} ‖[F(x, t2) − F(x, t1) − F(y, t2) + F(y, t1)](𝜗)‖
  ⩽ sup_{𝜗∈[t1,t2]} ‖∫_{t1}^{𝜗} [f(x_s, s) − f(y_s, s)] dg(s) + ∫_{−r}^{0} d_θ[𝜉(𝜗, θ)][x(𝜗 + θ) − y(𝜗 + θ)] − ∫_{−r}^{0} d_θ[𝜉(t1, θ)][x(t1 + θ) − y(t1 + θ)]‖
  ⩽ sup_{𝜗∈[t1,t2]} ( ∫_{t1}^{𝜗} L(s) ‖x_s − y_s‖_∞ dg(s) + ∫_{t1}^{𝜗} C(s) ( ∫_{−r}^{0} d_θ[𝜉(s, θ)] ‖x(s + θ) − y(s + θ)‖ ) ds )
  ⩽ ( ∫_{t1}^{t2} L(s) dg(s) + ∫_{t1}^{t2} C(s) var_{−r}^{0}(𝜉(s, ⋅)) ds ) ‖x − y‖_∞
  ⩽ (h(t2) − h(t1)) ‖x − y‖_∞.

For every 𝜗 ∈ [t0, t0 + 𝜎], the existence of the integral ∫_{t1}^{𝜗} L(s) ‖x_s − y_s‖_∞ dg(s) is guaranteed by Lemma 3.3, since s ↦ ‖x_s − y_s‖_∞ is a regulated function.


The above calculations show that F ∈ ℱ(Ω, h) (see Definition 4.3), for Ω = B_c × [t0, t0 + 𝜎] and the function h given by (15.9). ◽

The next result can be found in [76]. As it is a modified version of Lemma 4.17, the proof can be adapted with no difficulty. Thus, we omit it here.

Lemma 15.5: Let c ⩾ 1 and assume that 𝜑 ∈ P_c, g: [t0, t0 + 𝜎] → ℝ is a nondecreasing function, and f: P_c × [t0, t0 + 𝜎] → ℝⁿ is such that the integral ∫_{t0}^{t0+𝜎} f(y_t, t) dg(t) exists for every y ∈ P_c. Moreover, suppose 𝜉: ℝ × ℝ → ℝ^{n×n} is a normalized function which satisfies conditions (N1) and (N2). Consider F: B_c × [t0, t0 + 𝜎] → G([t0 − r, t0 + 𝜎], ℝⁿ) given by (15.7) and assume that x: [t0, t0 + 𝜎] → B_c is a solution of the generalized ODE (15.6) with initial condition x(t0)(𝜗) = 𝜑(𝜗) for 𝜗 ∈ [t0 − r, t0], and x(t0)(𝜗) = x(t0)(t0) for 𝜗 ∈ [t0, t0 + 𝜎]. If 𝑣 ∈ [t0, t0 + 𝜎] and 𝜗 ∈ [t0, t0 + 𝜎], then

  x(𝑣)(𝜗) = x(𝑣)(𝑣),  𝜗 ⩾ 𝑣,  and  x(𝑣)(𝜗) = x(𝜗)(𝜗),  𝑣 ⩾ 𝜗.
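The correspondence (15.8) and the identities of Lemma 15.5 are easy to visualize: since x(t) reproduces y up to time t and then freezes it, one can realize (15.8) as x(t)(𝜗) = y(min(𝜗, t)). A small sketch checking the two identities on a grid; the sample function y and the grid are illustrative assumptions.

```python
# The correspondence (15.8) between a solution y of the measure FDE of
# neutral type and the solution x of the generalized ODE:
#   x(t)(v) = y(v) for v in [t0 - r, t],   x(t)(v) = y(t) for v in [t, t0 + sigma],
# i.e. x(t)(v) = y(min(v, t)).  The sample y below (a regulated function
# with a jump at t = 1) is an illustrative assumption.

t0, r, sigma = 0.0, 1.0, 2.0

def y(t):
    # a sample regulated (here: piecewise continuous) function
    return t * t if t < 1.0 else t * t + 0.5

def x(t):
    """x(t) as an element of G([t0 - r, t0 + sigma], R), via relation (15.8)."""
    return lambda v: y(min(v, t))

# check the identities of Lemma 15.5 on a grid of (v, w) pairs
grid = [t0 + sigma * i / 20 for i in range(21)]
for v in grid:
    for w in grid:
        if w >= v:
            assert x(v)(w) == x(v)(v)   # x(v)(w) = x(v)(v) for w >= v
        else:
            assert x(v)(w) == x(w)(w)   # x(v)(w) = x(w)(w) for v >= w
```

The first identity says x(𝑣) is constant beyond its "present" time 𝑣; the second says the family {x(t)} is consistent: evaluating any later snapshot at an earlier time 𝜗 gives the value y(𝜗) already recorded.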

Theorem 15.6: Suppose 𝜑 ∈ P_c, c ⩾ 1, g: [t0, t0 + 𝜎] → ℝ is a nondecreasing function, and f: P_c × [t0, t0 + 𝜎] → ℝⁿ satisfies conditions (A)–(C). Moreover, suppose the normalized function 𝜉: ℝ × ℝ → ℝ^{n×n} satisfies conditions (N1) and (N2). Let F: B_c × [t0, t0 + 𝜎] → G([t0 − r, t0 + 𝜎], ℝⁿ) be given by (15.7) and y: [t0 − r, t0 + 𝜎] → ℝⁿ be a solution of the measure FDE of neutral type

  y(t) = y(t0) + ∫_{t0}^{t} f(y_s, s) dg(s) + ∫_{−r}^{0} d_θ[𝜉(t, θ)] y(t + θ) − ∫_{−r}^{0} d_θ[𝜉(t0, θ)] y(t0 + θ),   (15.12)

where t ∈ [t0, t0 + 𝜎], subject to the initial condition y_{t0} = 𝜑. For every t ∈ [t0, t0 + 𝜎], let x(t): [t0 − r, t0 + 𝜎] → ℝⁿ be given by (15.8). Then, the function x: [t0, t0 + 𝜎] → B_c is a solution of the generalized ODE (15.6).

Proof. At first, we prove that, for every 𝑣 ∈ [t0, t0 + 𝜎], the Kurzweil integral ∫_{t0}^{𝑣} DF(x(𝜏), t) exists and

  x(𝑣) − x(t0) = ∫_{t0}^{𝑣} DF(x(𝜏), t).

Fix an arbitrary 𝜖 > 0. By hypothesis, the function g is nondecreasing. Then, it admits only a finite number of points t ∈ [t0 , 𝑣] such that Δ+ g(t) ⩾ 𝜖. We denote these points by t1 , … , tm .
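The proof isolates the finitely many points where the nondecreasing g has a right jump Δ⁺g(t) ⩾ 𝜖. If g is represented by a continuous part plus a finite list of jumps, these points can be read off directly; a minimal sketch, where the sample jump list, the jump sizes, and the thresholds are illustrative assumptions.

```python
# A nondecreasing g admits at most finitely many points t with right jump
# Delta^+ g(t) >= eps.  Representing g as g(s) = s + (sum of jumps occurring
# strictly before s) makes g left-continuous at each jump point, so the
# whole jump is a right jump.  Sample data are illustrative assumptions.

jumps = {0.3: 0.05, 1.0: 0.20, 1.7: 0.50}   # t -> Delta^+ g(t)

def g(s):
    return s + sum(h for t, h in jumps.items() if s > t)

def delta_plus(t, d=1e-9):
    # numerical right jump g(t+) - g(t), up to the O(d) drift of the
    # continuous part of g
    return g(t + d) - g(t)

def big_jumps(eps):
    # the (finitely many) points t1 < ... < tm with Delta^+ g(t) >= eps
    return sorted(t for t, h in jumps.items() if h >= eps)
```

For 𝜖 = 0.1, for instance, the points t1, …, tm singled out by the proof would be 1.0 and 1.7, while the small jump at 0.3 is handled by the absolutely continuous part of the estimates.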


Let 𝛿: [t0, t0 + 𝜎] → (0, ∞) be a gauge such that

  𝛿(𝜏) < min{ (t_k − t_{k−1})/2 : k = 2, …, m },  𝜏 ∈ [t0, t0 + 𝜎],  and
  𝛿(𝜏) < min{ |𝜏 − t_k|, |𝜏 − t_{k−1}| },  𝜏 ∈ (t_{k−1}, t_k),  k = 1, …, m.

These conditions assure that a 𝛿-fine point–interval pair contains at most one of the points t1, …, t_m, say t_k, and, when this happens, the tag of the interval is necessarily equal to t_k. Note that, for all θ ∈ [−r, 0], by Lemma 15.5, we have x(t_k)(t_k + θ) = x(t_k + θ)(t_k + θ) and, using the relation (15.8), x(t_k + θ)(t_k + θ) = y(t_k + θ). Thus, according to Corollary 2.14, we have

  lim_{s→t_k⁺} ∫_{t_k}^{s} L(s) ‖y_s − x(t_k)_s‖_∞ dg(s) = L(t_k) ‖y_{t_k} − x(t_k)_{t_k}‖_∞ Δ⁺g(t_k) = 0,

for every k ∈ {1, …, m}. As a consequence of this last equality, we can choose the gauge 𝛿 such that

  ∫_{t_k}^{t_k+𝛿(t_k)} L(s) ‖y_s − x(t_k)_s‖_∞ dg(s) < 𝜖/(4m + 1),  k ∈ {1, …, m},  and

  ∫_{t_k}^{t_k+𝛿(t_k)} C(s) var_{−r}^{0}(𝜉(s, ⋅)) ‖y_s − x(t_k)_s‖_∞ ds