An Introduction to Fractional Differential Equations (Industrial and Applied Mathematics) 9819960797, 9789819960798

This is an introductory-level text on fractional calculus and fractional differential equations. Targeted to graduate st

129 93

English Pages 170 [163] Year 2023

Report DMCA / Copyright

DOWNLOAD FILE

Polecaj historie

An Introduction to Fractional Differential Equations (Industrial and Applied Mathematics)
 9819960797, 9789819960798

Table of contents :
Preface
Contents
1 Introduction
1.1 Brief History
1.2 Special Functions
1.3 Laplace Transform
1.4 Inverse Laplace Transform
1.5 Fixed Point Theorems
1.6 Function Spaces
1.7 Exercises
References
2 Fractional Calculus
2.1 Preliminaries
2.2 Riemann–Liouville Fractional Integrals and Derivatives
2.3 Caputo Fractional Derivatives
2.4 Examples
2.5 Exercises
References
3 Fractional Differential Equations
3.1 Motivation
3.2 Equation with Constant Coefficient
3.3 Equation with Matrix Coefficient
3.4 Nonlinear Equations
3.5 Nonlinear Damped Equations
3.6 Examples
3.7 Exercises
References
4 Applications
4.1 Observability
4.2 Controllability of Linear Systems
4.3 Controllability of Nonlinear Systems
4.4 Stability
4.5 Nonlinear Equations
4.6 Examples
4.7 Exercises
References
5 Fractional Partial Differential Equations
5.1 Motivation
5.2 Fractional Partial Integral and Derivative
5.3 Linear Fractional Equations
5.3.1 Adomian Decomposition Method
5.3.2 Fractional Diffusion Equation
5.3.3 Fractional Wave Equation
5.3.4 Fractional Black–Scholes Equation
5.4 Nonlinear Fractional Equations
5.4.1 Dirichlet Boundary Condition
5.4.2 Neumann Boundary Condition
5.5 Fractional Equations with Kernel
5.6 Examples
5.7 Exercises
References
6 Fractional Integrals and Derivatives
6.1 Definitions of Fractional Integrals
6.2 Definitions of Fractional Derivatives
6.3 Comments
6.4 Examples
6.5 Exercises
References
Appendix Index
Index

Citation preview

Industrial and Applied Mathematics

K. Balachandran

An Introduction to Fractional Differential Equations

Industrial and Applied Mathematics Series Editors G. D. Veerappa Gowda, TIFR Centre For Applicable Mathematics, Bengaluru, Karnataka, India S. Kesavan, The Institute of Mathematical Sciences, Chennai, Tamil Nadu, India Pammy Manchanda, Guru Nanak Dev University, Amritsar, India Editorial Board Zafer Aslan, ˙Istanbul Aydın University, ˙Istanbul, Türkiye K. Balachandran, Bharathiar University, Coimbatore, Tamil Nadu, India Martin Brokate, Technical University, Munich, Germany N. K. Gupta, Indian Institute of Technology Delhi, New Delhi, India Akhtar A. Khan, Rochester Institute of Technology, Rochester, USA René Pierre Lozi, University Côte d’Azur, Nice, France M. Zuhair Nashed, University of Central Florida, Orlando, USA Govindan Rangarajan, Indian Institute of Science, Bengaluru, India K. R. Sreenivasan, NYU Tandon School of Engineering, Brooklyn, USA Noore Zahra, Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia

The Industrial and Applied Mathematics series publishes high-quality researchlevel monographs, lecture notes, textbooks, contributed volumes, focusing on areas where mathematics is used in a fundamental way, such as industrial mathematics, bio-mathematics, financial mathematics, applied statistics, operations research and computer science.

K. Balachandran

An Introduction to Fractional Differential Equations

K. Balachandran Department of Mathematics Bharathiar University Coimbatore, Tamil Nadu, India

ISSN 2364-6837 ISSN 2364-6845 (electronic) Industrial and Applied Mathematics ISBN 978-981-99-6079-8 ISBN 978-981-99-6080-4 (eBook) https://doi.org/10.1007/978-981-99-6080-4 Mathematics Subject Classification: 26A33, 33E22, 34A08, 34A12, 35A01, 35R11, 93B05, 93B07, 93D20 © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. This Springer imprint is published by the registered company Springer Nature Singapore Pte Ltd. The registered company address is: 152 Beach Road, #21-01/04 Gateway East, Singapore 189721, Singapore Paper in this product is recyclable.

Fractional Calculus is the calculus of the 21st century. Fractional Differential Equations are alternative models of nonlinear differential equations.

Preface

Fractional calculus and fractional differential equations are widely used by many researchers in different areas of physical sciences, life sciences, engineering and technology. So there is a need to have an introductory level book on the elementary concepts of fractional calculus and fractional differential equations along with applications for the beginners of mathematics students. There are many books, monographs, lecture notes, and review articles available on the topics but many of them are having less examples and cumbersome notations which discourage the beginners. In order to avoid the difficulty and fulfill the requirement, I have attempted to write an introductory level book on these subjects with an interesting application area. Several new definitions of fractional integrals and fractional derivatives are listed from the literature so that the students can understand the importance and developments of this new area. Further, in each chapter, a set of exercises is included for the benefit of the learners. Although references are made with some of the available books on this subject, the presentation of the material is entirely new and easily comprehensible to the students. In the process some contents of the subjects are refined and examples are added at appropriate levels so that the book will motivate the students to understand the concepts easily and clearly. I hope the students will learn the fundamental concepts and comprehend the contents thoroughly from this book. I would like to thank my former research students, all of them are now teachers in various institutions in India, and colleagues for reading the manuscript and suggesting improvements of the text. Finally, I thank Mr. Shamim Ahmad, Executive Editor, Springer-India for his continuous encouragement to complete the project in spite of inordinate delay on my part. Coimbatore, India

K. Balachandran

vii

Contents

1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.1 Brief History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.2 Special Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.3 Laplace Transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.4 Inverse Laplace Transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.5 Fixed Point Theorems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.6 Function Spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.7 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

1 1 6 13 15 17 19 21 21

2 Fractional Calculus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.1 Preliminaries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.2 Riemann–Liouville Fractional Integrals and Derivatives . . . . . . . . . . 2.3 Caputo Fractional Derivatives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.4 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.5 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

23 23 26 36 42 48 49

3 Fractional Differential Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3.1 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3.2 Equation with Constant Coefficient . . . . . . . . . . . . . . . . . . . . . . . . . . . 3.3 Equation with Matrix Coefficient . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3.4 Nonlinear Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3.5 Nonlinear Damped Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3.6 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3.7 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

51 51 54 56 63 70 76 85 85

4 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4.1 Observability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4.2 Controllability of Linear Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4.3 Controllability of Nonlinear Systems . . . . . . . . . . . . . . . . . . . . . . . . . .

87 87 89 93

ix

x

Contents

4.4 Stability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4.5 Nonlinear Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4.6 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4.7 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

95 100 105 111 112

5 Fractional Partial Differential Equations . . . . . . . . . . . . . . . . . . . . . . . . . 5.1 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5.2 Fractional Partial Integral and Derivative . . . . . . . . . . . . . . . . . . . . . . . 5.3 Linear Fractional Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5.3.1 Adomian Decomposition Method . . . . . . . . . . . . . . . . . . . . . . 5.3.2 Fractional Diffusion Equation . . . . . . . . . . . . . . . . . . . . . . . . . . 5.3.3 Fractional Wave Equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5.3.4 Fractional Black–Scholes Equation . . . . . . . . . . . . . . . . . . . . . 5.4 Nonlinear Fractional Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5.4.1 Dirichlet Boundary Condition . . . . . . . . . . . . . . . . . . . . . . . . . . 5.4.2 Neumann Boundary Condition . . . . . . . . . . . . . . . . . . . . . . . . . 5.5 Fractional Equations with Kernel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5.6 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5.7 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

115 115 118 119 120 122 123 125 127 128 132 134 138 139 140

6 Fractional Integrals and Derivatives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6.1 Definitions of Fractional Integrals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6.2 Definitions of Fractional Derivatives . . . . . . . . . . . . . . . . . . . . . . . . . . 6.3 Comments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6.4 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6.5 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

143 143 145 149 151 155 156

Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159

Chapter 1

Introduction

Abstract In this chapter, we provide a brief history of fractional calculus with tautochrone problem. Special functions like gamma function, Mittag-Leffler function, Wright function, and Mittag-Leffler matrix function are introduced. The Laplace transform, inverse Laplace transform, and some basic fixed point theorems and function spaces are given. Finally, a set of exercises is provided. Keywords Tautochrone problem · Gamma function · Mittag-Leffler function · Wright function · Laplace transform · Fixed point theorems

1.1 Brief History The subject of fractional calculus deals with the investigations of derivatives and integrals of any arbitrary real or complex order which unify and extend the notions of integer order derivative and n-fold integral. It can be considered as a branch of mathematical analysis which deals with integrodifferential operators and equations where the integrals are of convolution type and exhibit (weakly singular) kernels of power-law type. It is strictly related to the theory of pseudo-differential operators. It was found that various especially interdisciplinary applications can be elegantly modeled with the help of the fractional derivatives. In a letter dated September 30, 1695, L’ Hospital wrote to Leibniz asking him a particular notation that he had used in his publication for the n-th derivative of a function d n f (x) . dxn What would the result be if n = 21 ?. Leibniz responded that it would be “an apparent paradox from which one day useful consequences will be drawn”. In these words, fractional calculus was born. Studies over the intervening period have proved at least half right. It is clear that in the twentieth century especially numerous applications have been found. However, these applications and mathematical background surrounding fractional calculus are far from paradoxical. While the physical meaning © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 K. Balachandran, An Introduction to Fractional Differential Equations, Industrial and Applied Mathematics, https://doi.org/10.1007/978-981-99-6080-4_1

1

2

1 Introduction

is difficult to grasp, the definitions are no more rigorous than their integer order counterpart. S. F Lacroix was the first to start the fractional calculus around 1819 by defining the n-th derivative of the power function y = x m where m is a positive integer as m! dn y x m−n , m ≥ n = dxn (m − n)! and using Legendre’s symbol , for the generalized factorial, he wrote dn y (m + 1) x m−n , = dxn (m − n + 1) where m ≥ n. He then let n be any real number to arrive at the definition of a fractional derivative of a power function. By letting m = 1 and n = 21 , he obtained 1

d2y 1

dx 2

√ 2 x = √ . π

Three years later, Fourier defined the derivative of arbitrary order in terms of its Fourier transform. Niels Henrik Abel worked with fractional operations on the tautochrone problem in 1823. He was the first to solve an integral equation by means of the fractional calculus. Perhaps, even more importantly, the derivation below will provide an example of how the Riemann–Liouville fractional integral arises in the formulation of physical problems. Tautochrone Problem: Assume that a thin wire C is placed in the first quadrant of a vertical plane and that a frictionless bead slides along the wire under the action of gravity (see Fig. 1.1). Let the initial velocity of the bead be zero. The tautochrone problem is to determine the shape of the curve such that the time of descent of a frictionless point mass sliding down the curve under the action of gravity is independent of the starting point. This problem should not be confused with the brachistochrone problem in the calculus of variations as that problem is to find the shape of the curve such that the time of descent of the bead from one point to the other would be a minimum. Now, moving on to the formulation of the problem, let s be the arc length measured along the curve C from O to an arbitrary point Q on C and let α be the angle of inclination. Then −g cos α is the acceleration d 2 s/dt 2 of the bead, where g is the gravitational constant and dη = cos α. ds

1.1 Brief History

3

y P : (x, y)

c Q : (ζ, η) α

s

g

x

0 Fig. 1.1 Tautochrone curve

Hence we have the differential equation dη d 2s = −g . dt 2 ds With the aid of the integrating factor ds/dt, we see immediately that 

ds dt

2 = −2gη + k,

(1.1)

where k is a constant of integration. Since the bead started from rest, ds/dt is zero when η = y and thus k = 2g y. Therefore (1.1) can be written as  ds = − 2g(y − η). dt The negative square root is chosen since as t increases, s decreases. Thus the time of descent T from P to O is 1 T = −√ 2g



O

√ P

Now the arc length s is a function of η, say, s = h(η),

1 ds. y−η

4

1 Introduction

where h depends on the shape of the curve C. Therefore 1 T = −√ 2g or





0

(y − η)−1/2 [h  (η)dη]

y



y

2gT =

(y − η)−1/2 h  (η)dη,

(1.2)

0

where h  (η) = If we let

ds . dη

f (y) ≡ h  (y),

(1.3)

(1.4)

then the integral equation (1.2) may be written in the notation of the fractional calculus as √ 2g (1.5) T = D −1/2 f (y). ( 21 ) But the right-hand side of (1.5) is the Riemann–Liouville fractional integral of f of order 21 . This is our desired formulation. It remains to solve (1.5) and then to find the equation of C. Abel applied the fractional operator D 1/2 to both sides of (1.5) and thereof to have  D

1/2

2g T = f (y). π

(1.6)

Now we know from [1] that this is legitimate if f and T are of class C. But a constant is certainly of class C and since T D 1/2 T = √ , πy we see that f also is of class C. Thus (1.6) becomes f (y) =

√ 2g −1/2 Ty , π

(1.7)

which is the solution of (1.5) [or (1.2)]. We also could have solved (1.5) by the Laplace transform technique since (1.2) is a convolution integral. Now to solve the second part of the problem, that is, to find the equation of C, we begin by using (1.4) and (1.3) to write

1.1 Brief History

5

 ds f (y) = h  (y) = = dy Thus

 1+

dx dy

2 .

 dx = f 2 (y) − 1 dy

or



y

x= 0

 (

2gT 2 − 1)dη + c. π2 η

But c = 0 since at the origin x = 0 = y. If we let

(1.8)

gT 2 , π2

a= then the change of variable of integration

η = 2a sin2 ζ reduces (1.8) to

 x = 4a

β

cos2 ζdζ,

0



where β = arcsin

y . 2a

These last two equations then imply that 1 sin 2β), 2 y = 2a sin2 β,

x = 2a(β +

and if we make the trivial change of variable θ = 2β, the parametric equations of C become  x = a(θ + sin θ) (1.9) y = a(1 − cos θ) . The solution of the problem is now complete. with a = gT π2 We see from (1.9) that C is a cycloid. One may also formulate the more general problem of determining C such that the time of descent T , instead of being a constant, is a specified function of η, say ψ(η). Then (1.5) becomes 2

6

1 Introduction



2g ψ(y) = D −1/2 f (y), π

and under suitable conditions on ψ, the solution of the fractional integral equation above is  2g 1/2 f (y) = D ψ(y). π Although this problem may seem to be a simple exercise in elementary mechanics and differential equations, it turned out to be of greater mathematical significance. Even though the tautochrone problem was attacked and solved by mathematicians long before Abel, it was he who first solved it by means of the fractional calculus.

1.2 Special Functions (i) Gamma Function: The gamma function is defined by the Euler integral of the second kind as  (z) =



t z−1 e−t dt, R(z) > 0.

(1.10)

0

However we have to make sure that this definition makes sense, that is, the improper integral converges for all complex z ∈ C, R(z) > 0, real part of z. For this function, the reduction formula (z + 1) = z(z), R(z) > 0

(1.11)

holds; it is obtained from (1.10) by integration by parts. Using this relation, the Euler gamma function is extended to the half-plane R(z) ≤ 0 by (z) =

(z + n) , R(z) > −n, n ∈ N, z ∈ / Z− 0 := 0, −1, −2 · · · (z)n

(1.12)

Here (z)n is the Pochhammer symbol, defined for complex z ∈ C and non-negative integer n ∈ N0 by (z)0 = 1 and (z)n = z(z + 1) · · · (z + n − 1), n ∈ N. Equations (1.12) and (1.13) yield (n + 1) = (1)n = n!, n ∈ N0

(1.13)

1.2 Special Functions

7

with 0! = 1. It follows from (1.12) that the gamma function is analytic everywhere in the complex plane C except at z = 0, −1, −2, · · · , where (z) has simple poles and is represented by the asymptotic formula (z) =

(−1)k [1 + O(z + k)], z → −k, k ∈ N0 . z+k

We also indicate some other properties of the gamma function such as the functional equation: (z)(1 − z) =

π , z∈ / Z0 ; 0 < R(z) < 1, sin πz

and

√ 1 ( ) = π. 2 √ Alternatively we now show that ( 21 ) = π. By definition we have 1 ( ) = 2





e−t t − 2 dt. 1

0

If we let t = y 2 , then dt = 2ydy and we now have 1 ( ) = 2 2





e−y dy. 2

0

Equivalently we can write as 1 ( ) = 2 2





e−x dx. 2

0

If we multiply both, then we get 1 [( )]2 = 4 2







0



e−(x

2

+y 2 )

dxdy.

0

Using polar coordinates x = r cos θ and y = r sin θ, we get 1 [( )]2 = 4 2 Thus ( 21 ) =

√ π.

 0

π 2



∞ 0

e−r r dr dθ = π. 2

8

1 Introduction

Some more properties of gamma function are as follows: • (z) = •

( −1 ) 2

(z+1) , for z ( −1 2 +1)

=

• ( −3 )= 2

−1 2

( −3 2 +1) −3 2

• (x) = limn→∞

negative value of z. √ = −2( 21 ) = −2 π. √ = 43 π.

n! nx . x(x+1)···(x+n)

(ii) Mittag-Leffler Function: In 1903, the Swedish mathematician Gosta Mittag-Leffler introduced the function E α (z) defined by E α (z) =

∞ n=0

zn , (αn + 1)

(1.14)

where z is a complex variable and α ≥ 0 (for α = 0 the radius of convergence of the sum (1.14) is finite and one has by definition E 0 (z) = 1/(1 − z)). The Mittag-Leffler function is a direct generalization of the exponential function because of substitution of n! = (n + 1) with (αn)! = (αn + 1). For 0 < α < 1, it interpolates between 1 . Its importance is realized the pure exponential and a hypergeometric function 1−z in the problems of physics, chemistry, biology, engineering, and applied sciences because Mittag-Leffler function naturally occurs as the solution of fractional differential equations or fractional integral equations. The generalization of E α (z) was studied by Wiman in 1905 and he defined the function E α,β (z) =

∞ n=0

zn , (αn + β)

(1.15)

where z, α, β ∈ C and (α) > 0, (β) > 0, which is called as Wiman’s function or generalized Mittag-Leffler function as E α,1 (z) = E α (z). By definition E α,β (z) =

∞ n=0 ∞

zn (αn + β)

=

z n+1 (αn + α + β) n=−1

=

∞ zn 1 +z (β) (αn + α + β) n=0

=

1 + z E α,α+β (z). (β)

1.2 Special Functions

9

Note that E α (z 1 + z 2 ) = E α (z 1 )E α (z 2 ) and E α,β (z 1 + z 2 ) = E α,β (z 1 )E α,β (z 2 ) and the equality occurs only when α = β = 1. From the definition, we have the following identities: E 1 (z) =

∞ k=0

E 1,2 (z) =

∞ k=0

E 1,3 (z) =

∞ k=0



zk zk = = ez , (k + 1) k! k=0 ∞

1 z k+1 ez − 1 zk = = , (k + 2) z k=0 (k + 1)! z ∞ 1 z k+2 ez − 1 − z zk = 2 = . (k + 3) z k=0 (k + 2)! z2

From two-parameter Mittag-Leffler function, we can obtain the trigonometric and hyperbolic functions as follows: E 2,1 (z 2 ) = E 2,2 (z 2 ) =



z 2k = cosh(z), (2k + 1)

k=0 ∞

1 z

k=0

sinh(z) z 2k+1 = . (2k + 1)! z

The Mittag-Leffler function (1.15) can be expressed in the integral representation for z ∈ C, α > 0, as  α−1 t t e 1 dt, 2πi C t α − z  t α−β t 1 E α,β (z) = e dt, 2πi C t α − z E α (z) =

where the path of integration C starts and ends at −∞ and encircles the circular disk |t| ≤ |z|α . Also we have the following identities: E α,β (z) + E α,β (−z) = 2E 2α,β (z 2 ) E α,β (z) − E α,β (−z) = 2z E 2α,α+β (z 2 ). d E β (z) = β1 E β,β (z) for 0 < β < ∞. For, the Mittag-Leffler Next we show that dz function is analytic and thus absolutely convergent. Hence we may interchange the sum and derivative and calculate the derivative of E β (z) as

10

1 Introduction ∞ d zn d E β (z) = dz dz n=0 (βn + 1)

= =

∞ n=1 ∞ n=0

=

∞ n=0

Note that [2]





nz n−1 (βn + 1) (n + 1)z n (β(n + 1) + 1) (n + 1)z n β(n + 1)(β(n + 1))

=

∞ zn 1 β n=0 (βn + β)

=

E β,β (z) . β

e−t t β−1 E α,β (zt α )dt =

0

and

1 , |z| < 1 1−z

dk E k (z k ) = E k (z k ). dz k

(iii) Wright function: Next we define a new type of function called Wright function as Wα,β (z) =

∞ k=0

zk , α > −1, β ∈ C. k!(αk + β)

Observe that W0,β (z) = and

ez (β)

d Wα,β (z) = Wα,α+β (z). dz

1.2 Special Functions

11

Consider the function of Wright type Wα (z) =

∞ n=0

=

(−z)n n!(−αn + 1 − α)

∞ 1 (−z)n−1 (nα) sin(nπα), z ∈ C, π n=1 (n − 1)!

with 0 < α < 1. For −1 < r < ∞, λ > 0, the following results hold:

∞ α α α (i) 0 t α+1 Wα ( t1α )e−λ dt = e−λ ,

∞ (1+r ) , (ii) 0 Wα (t)t r dt = (1+αr )

∞ (iii) 0 Wα (t)e−zt dt = E α (−z), z ∈ C, ∞ (iv) 0 αt Wα (t)e−zt dt = E α,α (−z), z ∈ C. Another generalized form of Mittag-Leffler function is defined by γ

E α,β (z) =

∞ k=0

(γ)k z k , k!(αk + β)

(1.16)

where (γ)k is a Pochhammer symbol. Its integral representation is γ

E α,β (z) =

1 2πi

 es C

s αγ−β ds, (s α − z)γ

where C is any suitable contour in the complex plane encompassing at the left all the singularities of the integrand. Also we have [2] dk k+1 E α,β (x) = k!E α,β+αk (x) dxk and

dk γ γ+k E (x) = (γ)k E α,β+αk (x). d x k α,β

(iv) Mittag-Leffler Matrix Function: If A is an n × n matrix, then the Mittag-Leffler matrix function E α (At α ) is defined as E α (At α ) =

∞ k=0

Ak t αk , α > 0, t ∈ R. (αk + 1)

(1.17)

12

1 Introduction

The two-parameter Mittag-Leffler matrix function is defined as E α,β (At α ) =

∞ k=0

Ak t αk , α, β > 0, t ∈ R. (αk + β) 

Now we calculate E α (At α ), when A =

E α (At α ) =

a −b b a

∞ k=0

(1.18)

 . By definition we have

Ak t αk . (αk + 1)

However it is easy to see that  Ak =

a −b b a

Therefore α

E α (At ) =

∞ k=0



k =

 Re(λk ) −I m(λk ) . I m(λk ) Re(λk )

t αk (αk + 1)



 Re(λk ) −I m(λk ) . I m(λk ) Re(λk )

√ On the other hand, if we write λ = r eiθ , where r = a 2 + b2 and θ = ar ctg( ab ), then λk = r k eikθ = r k (cos(kθ) + i sin(kθ)) and therefore α

E α (At ) =

∞ k=0

r k t αk (αk + 1)



 cos(kθ) − sin(kθ) . sin(kθ) cos(kθ)

Next we state the properties of Mittag-Leffler matrix function. If A, B are n × n matrices, then we have the following properties[3]: AE α,β (A) = E α,β (A)A. E α,β (A∗ ) = (E α,β (A))∗ ;(* denotes matrix transpose). E α,β (X AX −1 ) = X E α,β (A)X −1 for any nonsingular matrix X . The eigenvalues of E α,β (A) are E α,β (λi ) where λi are the eigenvalues of A. If B commutes with A, then B commutes with E α,β (A). E α,β (AB)A = AE α,β (B A). 1 I where I and 0 are the identity and zero matrices of dimension E α,β (0) = (β) n. m−1 Ak . (viii) Am E α,β+mα (A) = E α,β (A) − (αk + β) k=0 (ix) In general E α (A + B) = E α (A)E α (B), but if A and B are nilpotent with index 2 and AB = B A = 0, then E α (A + B) = E α (A)E α (B). (i) (ii) (iii) (iv) (v) (vi) (vii)

1.3 Laplace Transform

13

 1 E α,β (z)(z I − A)−1 dz, where C is a closed contour enclos2πi C ing the spectrum of A.

(x) E α,β (A) =

1.3 Laplace Transform Suppose that f is a real or complex valued function of the (time) variable t > 0 and s is a real or complex parameter. We define the Laplace transform of f as  F(s) = L( f (t)) =



e−st f (t)dt = lim



τ →∞ 0

0

τ

e−st f (t)dt

whenever the limit exists. When it does, the integral is said to converge. If the limit does not exist, the integral is said to diverge and there is no Laplace transform defined for f . The notation L( f ) will also be used to denote the Laplace transform of f and the integral is the ordinary Riemann integral. The parameter s belongs to some domain on the real line or in the complex plane. Let f (t) = eat , a real. This function is continuous on [0, ∞) and of exponential order a. Then 



L(eat ) =

e−st eat dt,

0 ∞

e−(s−a)t dt,  −(s−a)t ∞ e 1 , R(s) > a. = = −(s − a) 0 (s − a)

=

0

The same calculation holds for complex a and R(s) > R(a). Let f (t) = t which is continuous and of exponential order, then 



L(t) =

te−st dt =

0

1 . s2

Performing integration by parts twice as above, we find that  L(t 2 ) =



e−st t 2 dt =

0

2 , R(s) > 0. s3

By induction, one can show that in general L(t n ) =

n! s n+1

, R(s) > 0,

for n = 1, 2, 3 · · · . Indeed this formula holds for n = 0, since 0!=1.

14

1 Introduction

The Laplace transform of one parameter Mittag-Leffler function is defined as 

∞ ±λk t αk dt, (αk + 1) 0 k=0  ∞ ∞ ±λk = e−st t αk dt, (αk + 1) 0 k=0

L[E α (±λt α )](s) =

= =





e−st

1 1 ±λk . . α (αk + 1), (αk + 1) s s k

k=0 ∞

1 s

k=0

±λk , sαk

1 ±λ ±λ2 = [1 + α + 2α + · · · ], s s s ±λ −1 1 = [(1 − α ) ], s s s α−1 = α . s ±λ The Laplace transform of two-parameter Mittag-Leffler function is defined as L[t β−1 E α,β ](±λt α )(s) =

s α−β , sα ± λ

where R(α) > 0, R(β) > 0 and the Laplace transform of generalized Mittag-Leffler function is defined as (γ)

L[t β−1 E α,β (±λt α )](s) =

s αγ−β .R(s) > 0, R(β) > 0, |λs −α | < 1. (s α ± λ)γ

The Laplace transform of the convolution 

t

f (t) ∗ g(t) = 

f (t − τ )g(τ )dτ ,

0 t

=

f (τ )g(t − τ )dτ ,

0

of the two functions f (t) and g(t), which are equal to zero for t < 0, is equal to the product of the Laplace transforms of those functions L[ f (t) ∗ g(t)](s) = F(s)G(s), under the assumption that both F(s) and G(s) exist.

1.4 Inverse Laplace Transform

15

1.4 Inverse Laplace Transform If L{ f (t)} = F(s), then the inverse Laplace transform of F(s) is defined as L−1 {F(s)} = f (t). The inverse transform L−1 is a linear operator: L−1 {F(s) + G(s)} = L−1 {F(s)} + L−1 {G(s)}, L−1 {cF(s)} = cL−1 {F(s)}, for any constant c. As an example, the inverse Laplace transform of F(s) =

6 1 + 2 3 s s +4

is f (t) = L−1 {F(s)}   2 1 −1 2 −1 + 3L = L 2 s3 s2 + 4 2 t + 3 sin 2t. = 2 There is usually more than one way to invert the Laplace transform. For example, also let F(s) = (s 2 + 4s)−1 . We can compute the inverse transform of this function by completing the square:

 1 s 2 + 4s  1 = L−1 (s + 2)2 − 4  2 1 = L−1 2 (s + 2)2 − 4 1 = e−2t sinh 2t, 2

f (t) = L−1

and use the partial fraction decomposition of F(s) as F(s) =

1 1 1 = − . s(s + 4) 4s 4(s + 4)

16

1 Introduction

Therefore f (t) = L−1 {F(s)}   1 1 − L−1 = L−1 4s 4(s + 4) 1 1 −4t = − e 4 4 1 = e−2t sinh 2t. 2 Next we find the inverse Laplace transform q(t) of Q(s) = Q(s) = −

3s . (s 2 +1)2

Note that

1 3 d . 2 2 ds s + 1

Hence q(t) = L−1 {Q(s)}  1 d 3 = − L−1 2 ds s 2 + 1 3 = t sin t. 2 If the Laplace transforms of f (t) and g(t) are F(s) and G(s), respectively, then L{( f ∗ g)(t)} = F(s)G(s), and so

L−1 {F(s)G(s)} = ( f ∗ g)(t).

Suppose we want to find the inverse transform x(t) of X (s). Write X (s) as a product F(s)G(s) where f (t) and g(t) are known, then by the above result, x(t) = . We write Q(s) = ( f ∗ g)(t). For the inverse transform q(t) of Q(s) = (s 23s +1)2 F(s)G(s), where F(s) = s 23+1 and G(s) = s 2 s+1 . But the inverses of F(s) and G(s) are f (t) = 3 sin t and g(t) = cos t. Therefore q(t) = L−1 {Q(s)} = L−1 {F(s)G(s)} = ( f ∗ g)(t)  t sin(t − v) cos vdv. =3 0

1.5 Fixed Point Theorems

17

By using the trigonometric identity 2 sin A cos B = sin(A + B) + sin(A − B), we have  t  3 t sin tdv + sin(t − 2v)dv q(t) = 2 0 0 3 = t sin t. 2 For finding the inverse Laplace transform x(t) of the function X (s), we can use the convolution theorem and write as X (s) = Since L−1 { 1s } = 1 and L−1 { s 21+4 } =

1 2

1 1 . s s2 + 4

sin 2t, we have

 1 t sin 2vdv 2 0 1 = (1 − cos2t). 4

x(t) =

1.5 Fixed Point Theorems The fixed point technique is one of the powerful methods mainly applied to prove the existence and uniqueness of solutions of differential equations. It is a useful method in different branches of mathematics. Let X be a Banach space and T be an operator such that T : X → X . We say that x ∈ X is a fixed point of T if T x = x. A mapping T : X → X is said to be a contraction if there exists a real number α, 0 ≤ α < 1, such that T x − T y ≤ α x − y for all x, y ∈ X . Note that . indicates a norm in X . The following theorem, known as Banach’s contraction mapping principle, is an important source of proving existence and uniqueness theorems in different branches of analysis. Theorem 1.5.1 (Banach fixed point theorem) If X is a Banach space and T : X → X is a contraction mapping, then T has a unique fixed point. A generalization of the above theorem is as follows. Theorem 1.5.2 Let X be a Banach space and let T : X → X be such that T n : X → X is a contraction for some n. Then T has a unique fixed point.

18

1 Introduction

In 1911, Brouwer proved the following fixed point theorem in a finite-dimensional space: Theorem 1.5.3 (Brouwer fixed point theorem) Let D ⊂ R n be a nonempty compact convex subset and f : D → D be continuous. Then f has a fixed point. Brouwer’s theorem fails if the dimension of the space X is infinite, since in infinitedimensional Banach spaces, bounded sets need not be relatively compact. In 1930, Schauder extended the domain of validity of the Brouwer theorem by proving the following fixed point theorem. The Schauder fixed point theorem will be helpful to assert the existence of solutions of some initial value problems in differential equations. In this theorem, the fixed point lies in a space of functions and this point may be a function that solves a nonlinear integral equation or a partial differential equation. Now let M be a subset of a Banach space X and A be an operator, generally nonlinear, defined on M and mapping M into itself. The operator A is called compact on the set M if it carries every bounded subset of M into a compact set. If, in addition, A is continuous on M (that is, it maps bounded sets into bounded sets), then it is said to be completely continuous on M (in case of A being linear, both the definitions coincide). Theorem 1.5.4 (Schauder Theorem) Let X be a real Banach space, M ⊂ X a nonempty closed bounded convex subset and F : M → M be compact. Then F has a fixed point. The following theorems are alternate forms of Schauder’s theorem. Theorem 1.5.5 (Leray–Schauder Theorem) Every completely continuous operator which maps a closed bounded convex subset of a Banach space into itself has at least one fixed point. Theorem 1.5.6 Let T be a compact continuous mapping of a normed linear space X into X . Then T has a fixed point. Theorem 1.5.7 (Schaefer Theorem) Let E be a normed linear space. Let F : E → E be a completely continuous operator and let ζ(F) = {x ∈ E : x = λF x

for some 0 < λ < 1}.

Then either ζ(F) is unbounded or F has a fixed point. For more about fixed point theorems, one can refer the book by Smart [4].

1.6 Function Spaces

19

1.6 Function Spaces The Space C(J, Rn ) Let J = [a, b] (−∞ < a < b < ∞) be a finite interval on the real line R. Let C(J, Rn ) denote the Banach space of continuous functions x(t) with values in Rn for t ∈ J with the norm . ||x|| = ||x||C = sup{|x(t)| : t ∈ J }, where |x(t)| is the usual Euclidean norm in Rn . Let C 1 (J, Rn ) denote the Banach space of continuously differentiable functions x(t) with values in Rn for t ∈ J with the norm ||x||C 1 = ||x||C + ||x  ||C . The functions of a set K are said to be uniformly bounded if there is a constant M such that f (t) ≤ M for all f ∈ K and all t ∈ [a, b]. They are called equicontinuous if for every > 0 there is a δ > 0 depending only on such that |t1 − t2 | < δ for every t1 , t2 ∈ [a, b] and f (t1 ) − f (t2 ) < for all f ∈ K . Theorem 1.6.1 (Arzela-Ascoli) A set K ⊂ C(J : Rn ) is compact if and only if it is uniformly bounded and equicontinuous. The Space L p (J : R) Let L p (J : R)(1 ≤ p < ∞) denote the space of all p-integrable measurable func b 1 tions with norm f p = ( a | f (x)| p d x) p and L ∞ (J : R) be the space of all essentially bounded measurable functions with norm f ∞ = ess supa≤x≤b | f (x)|. Let L 2 (J : Rn ) be the space of all measurable n-vector valued functions f (t) defined

b for t ∈ J with values in Rn such that a | f (t)|2 dt < ∞. In L 2 (J : Rn ), the inner product  b ( f, g) L 2 (J :Rn ) = ( f (t), g(t))Rn dt a

is well defined, ( f (t), g(t))Rn is the usual scalar or dot product in Rn : ( f (t), g(t))Rn = g ∗ (t) f (t) and each function f ∈ L 2 (J : Rn ) has associated with it a norm 

f 2 = ( a

b

1

| f (t)|2 dt) 2 .

20

1 Introduction

Here ∗ denotes the adjoint. The norm of a continuous n × m matrix valued function A : J → Rn × Rm is defined by

A(t) = max i

m

max |ai j (t)|,

j=1

where ai j (t) are the elements of A(t). The Space AC[a, b] A function f (x) is called absolutely continuous on [a, b], if for every > 0 there exists a δ > 0 such that whenever any finite set of pairwise disjoint intervals [ak , bk ] ⊂ [a, b], k = 1, 2, · · · , n satisfies nk=1 (bk − ak ) < δ then nk=1 | f (bk ) − f (ak )| < . The space of these functions is denoted by AC[a, b]. Let AC[a, b] be the space of the absolutely continuous functions f in [a, b] and AC n [a, b] be the space of the absolutely continuous functions f which have continuous derivative up to order n − 1 in [a, b] such that f (n−1) ∈ AC[a, b]. The space AC n [a, b] consists of those functions f (x) which can be represented in the form n−1 f (x) = Ian+ φ(x) + ck (x − a)k , k=0

where φ(t) ∈ L 1 (a, b), ck (k = 0, 1, · · · , n − 1) are arbitrary constants and Ian+ φ(x) =

1 (n − 1)!

It follows that φ(t) = f n (t), ck =

f k (a) k!



x

(x − t)n−1 φ(t)dt.

a

(k = 0, 1, · · · , n − 1).

Theorem 1.6.2 (Lebesgue Dominated Convergence Theorem) Let g be integrable over a set E and let { f n } be a sequence of measurable functions such that | f n (x)| ≤ g(x) on E and f (x) = lim f n (x) for almost all x ∈ E. Then n→∞



 f (x)d x = lim E

n→∞

f n (x)d x. E

References

21

1.7 Exercises 

(m)(n) for m > 0, n > 0. (m + n) 0 Show that (i) E 2 (−x 2 ) = cos x (ii) E 2 (x 2 ) = cosh x (iii) x E 2,2 (x 2 ) = sinh x. Show that ddx E α,β (x) = ddx E α,α+β (x) + E α,α+β (x).



 1 −1 0 −1 , (ii) A = . Calculate E α (At α ) when (i) A = 1 1 1 0 sin at sinh at Find the Laplace transform of (i) cos at, (ii) eat , (iii) . 2a 2 1 s 1 , (ii) n , (iii) 2 . Find the inverse Laplace transform of (i) 2 (s − 2) s s + a2 Prove the Banach fixed point theorem.

1 Define T : C[0, 1] → C[0, 1] by T x(t) = 1 + 0 x(s)ds. Is T a contraction?  t (t − Show that the operator T : C[0, 1] → C[0, 1] defined by T x(t) =

1.1. Show that 1.2. 1.3. 1.4. 1.5. 1.6. 1.7. 1.8. 1.9.

1

x m−1 (1 − x)n−1 d x =

0

s)x(s)ds is a contraction. 1.10. Show by an example that Brouwer’s theorem fails in infinite-dimensional spaces. 1.11. Give an example of bounded operator. 1.12. Give an example of completely continuous operator.

References 1. Miller, K., Ross, B.: An Introduction to the Fractional Calculus and Fractional Differential Equations. John Wiley and Sons Inc, New York (1993) 2. Capelas de Oliveira, E.: Solved Exercises in Fractional Calculus. Springer Nature, Switzerland (2019) 3. Popolizio, M.: On the matrix Mittlag-Leffler function. Theoretical properties and numerical computation. Mathematics 7(12), 1140 (2019) 4. Smart, D.R.: Fixed Point Theorems. Cambridge University Press, Cambridge (1980)

Chapter 2

Fractional Calculus

Abstract After introducing the concept of fractional integral we define the Riemann– Liouville fractional integrals, Riemann–Liouville fractional derivatives, and Caputo fractional derivatives. Some elementary properties are proved and few examples are added. In the end a set of exercises is included. Keywords Fractional calculus · Riemann–Liouville fractional integral · Riemann–Liouville fractional derivative · Caputo fractional derivative · Properties and examples

2.1 Preliminaries Let f (t) be a function defined for t > 0 and define the definite integral from 0 to t as  t f (s)ds. I f (t) = 0

Repeating this process 

t

I 2 f (t) =

I f (s)ds =

 t 

0

0

s

 f (τ )dτ ds,

0

and this can be extended arbitrarily. The Cauchy formula for repeated integration, namely, I n f (t) =

1 (n − 1)!



t

(t − s)n−1 f (s)ds

0

leads in a straightforward way to a generalization for real n. Using the gamma function to remove the discrete nature of the factorial function gives us a natural candidate for fractional applications of the integral operator © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 K. Balachandran, An Introduction to Fractional Differential Equations, Industrial and Applied Mathematics, https://doi.org/10.1007/978-981-99-6080-4_2

23

24

2 Fractional Calculus

I α f (t) =



1 (α)

t

(t − s)α−1 f (s)ds,

0

where α > 0. This is a well-defined operator. It is straightforward to show that the operator I satisfies 1 (α + β)

I α I β f (t) = I β I α f (t) = I α+β f (t) =



t

(t − s)α+β−1 f (s)ds.

0

In 1832, J. Liouville gave two definitions of fractional derivatives of a fairly restrictive class of functions. The first definition restricts functions to functions that can be expressed as a trigonometric series and the second definition applies to the functions x −α with α > 0. The definition of the fractional integral begins with the consideration of the n-fold integral  Ian f (x) =



x



x1

dx1

dx2 a

a

x2

 dx3 · · ·

a

xn−1

f (t)dt.

(2.1)

a

The function f in (2.1) is assumed to be continuous on the interval [a, b] where b > a. We assert that (2.1) may be reduced to a single integral of the form  Ian f (x) =

x

a

1 (x − t)n−1 f (t)dt = (n − 1)! (n)



x

(x − t)n−1 f (t)dt.

(2.2)

a

Clearly the right-hand side of expression (2.2) is meaningful for any number n whose real part is greater than zero, that is, Iaα f (x) =

1 (α)



x

(x − t)α−1 f (t)dt,

a

where Re α > 0, which is the fractional integral of f of order α. Alternatively consider the n-th-order differential equation dn y = f (x) dxn with initial conditions y(a) = y  (a) = y  (a) = · · · = y n−1 (a) = 0. Then the solution of the equation is  y(x) = a

x

(x − t)n−1 f (t)dt. (n − 1)!

(2.3)

2.1 Preliminaries

25

Since f (x) is the n-th derivative of y(x), we may interpret y(x) as the n-th integral of f (x). Thus  x (x − t)n−1 n f (t)dt. Ia f (x) = (n − 1)! a If we replace integer n with any real number α and change the factorial to a gamma function, then we have (2.3). Another way of defining the fractional integral is as follows. We can recognize the Laplace domain equivalent for the n-fold integral of the function f (t). Consider an anti-derivative or primitive of the function f (t), D−1 f (t), then  t −1 f (x)dx. (2.4) D f (t) = 0

Now let us perform the repeated applications of the operator. For example, D−2 f (t) =

 t 0

x

f (y)dydx.

(2.5)

0

Equation (2.5) can be considered as a double integral and taking into account the x y-plane over which it is integrated (see Fig. 2.1), we can reverse the sequence of integrations by doing the proper changes in their limits. So we obtain D−2 f (t) =

 t 0

t

f (y)dxdy.

(2.6)

y

As f (y) is a constant with respect to x, we find that the inner integral is simply (t − y) f (y) and we have D−2 f (t) =



t

(t − y) f (y)dy.

(2.7)

0

Similarly we can obtain D−3 f (t) =

1 2



t

(t − y)2 f (y)dy,

(2.8)

0

and so on giving the formula D−n f (t) =



 t  t f (y)(t − y)n−1 dy. ··· f (y) dy · · · dy =    (n − 1)! 0   0 n n

(2.9)

26

2 Fractional Calculus

y

y

(2.5)

=

x

(2.6)

t

(2.6) x (2.5)

t

Fig. 2.1 x y-plane for integration

Since D−n = I n , Eq. (2.9) can be written as I n f (t) =

1 (n − 1)!



t

(t − s)n−1 f (s)ds.

0

This is true for any n > 0 and so we replace n by α for fractional nature and factorial by gamma and have (2.3) for a = 0.

2.2 Riemann–Liouville Fractional Integrals and Derivatives Let J = [a, b] (−∞ < a < b < ∞) be a finite interval on the real line R. The α α f (x) and Ib− f (x) of order α ∈ C Riemann–Liouville fractional integrals Ia+ (R(α) > 0) are defined by α Ia+ f (x) :=

1 (α)

 a

x

(x − t)α−1 f (t)dt, x > a, R(α) > 0

(2.10)

2.2 Riemann–Liouville Fractional Integrals and Derivatives

27

and α f (x) := Ib−

1 (α)



b

(t − x)α−1 f (t)dt, x < b, R(α) > 0.

(2.11)

x

These integrals are called the left-sided and the right-sided fractional integrals. The α α Riemann–Liouville fractional derivatives Da+ f (x) and Db− f (x) of order α ∈ C (R(α) ≥ 0) are defined by [1] α n−α f (x) = D n Ia+ f (x), (2.12) Da+ n  x d f (t) 1 dt, n = [R(α)] + 1, x > a, = n (n − α) dx a (x − t)α−n+1

and d n n−α )I f (x), (2.13) dx n 0+  b dn 1 f (t) (− n ) = dt, n = [R(α)] + 1, x < b, α−n+1 (n − α) dx x (t − x)

α Db− f (x) = (−

where [R(α)] means the integral part of R(α) and D = {0, 1, 2, · · · },

d . dx

When α = n ∈ N0 =

n m−n f (x) = D m (Ia+ ) f (x), m = [R(n)] + 1. Da+

Suppose n = 0, then

m 0 f (x) = D m Ia+ f (x) , m = [R(0)] + 1, Da+ d 1 (I f (x)), = dx  a+ x d f (t)dt. = dx a By using Leibnitz formula, we get 0 Da+ f (x) = f (x).

Similarly when n = 1, n = 2, n = 3, · · · , we get 1 f (x) = f 1 (x), Da+ 2 f (x) = f 2 (x), Da+ 3 f (x) = f 3 (x), Da+

.. . n f (x) = f n (x). Da+

28

2 Fractional Calculus

If 0 < R(α) < 1, then (2.12) and (2.13) become α f (x) = Da+

d 1 (1 − α) dx



x

f (t) dt, (x − t)α−[R(α)]

b

f (t) dt, (t − x)α−[R(α)]

a

for 0 < R(α) < 1, x > a and α Db− f (x) =

d −1 (1 − α) dx

 x

for 0 < R(α) < 1, x < b, respectively. When α ∈ R+ , (2.12) and (2.13) become α Da+

dn 1 f (x) = (n − α) dx n



x

a

f (t) dt, (x − t)α−n+1

for n = [α] + 1, x > a, and α Db−

dn 1 (− n ) f (x) = (n − α) dx



b x

f (t) dt, (t − x)α−n+1

for n = [α] + 1, x < b, where 0 < R(α) < 1 implies α ∈ R+ α Da+ f (x) =

d 1 (1 − α) dx

α Db− f (x) =

d −1 (n − α) dx

and



x

f (t) dt, 0 < α < 1, x > a, (x − t)α

b

f (t) dt, 0 < α < 1, t < b. (t − x)α

a

 x

If R(α) = 0 but (α = 0), (2.12) and (2.13) become the fractional derivatives of a purely imaginary order. iθ Da+ f (x) =

d 1 (1 − iθ) dx

iθ Db− f (x) =

d 1 (1 − iθ) dx

and



x

f (t) dt, θ ∈ R/{0}, x > a, (x − t)iθ

b

f (t) dt, θ ∈ R/{0}, t < b. (t − x)iθ

a

 x

Theorem 2.2.1 If R(α) ≥ 0 and β ∈ C, (R(β) > 0), then (β) (x − a)β+α−1 , R(α) > 0; (β + α) (β) (x − a)β−α−1 , R(α) ≥ 0. = (β − α)

α Ia+ (x − a)β−1 = α (x − a)β−1 Da+

2.2 Riemann–Liouville Fractional Integrals and Derivatives

Proof The Riemann–Liouville integral (2.10) can be written in the form α Ia+ f (x) =



1 (α)

x

a

f (t) dt, x > a, R(α) > 0. (x − t)1−α

Taking f (t) = (t − a)β−1 , α (x Ia+

− a)

β−1

 x (t − a)β−1 1 = dt, (α) a (x − t)1−α  x 1 = (t − a)β−1 (x − t)α−1 dt. (α) a

Substituting the transformation y=

t −a dt , dy = =⇒ dt = (x − a)dy, x −a x −a

we get α (x Ia+

− a)

β−1

 x 1 = (t − a)β−1 (x − t)α−1 dt, (α) a  (x − a)β+α−1 1 β−1 y (1 − y)α−1 dy, = (α) 0 (β) = (x − a)β+α−1 . (α + β)

Thus we have α Ia+ (x − a)β−1 =

(β) (x − a)β+α−1 , R(α) > 0. (α + β)

Now the Riemann–Liouville derivative (2.12) can be written as α n−α ( f (x)) = D n (Ia+ f (x)). Da+

Taking f (t) = (t − a)β−1 , we have α n−α (x − a)β−1 = D n (Ia+ (x − a)β−1 ) Da+  x dn 1 (t − a)β−1 (x − t)n−α−1 dt = (n − α) d x n a  x 1 = (t − a)β−1 (x − t)−α−1 dt. (−(α − 1))! a

29

30

2 Fractional Calculus

Using the bilinear transformation y=

t −a dt , dy = =⇒ dt = (x − a)dy, x −a x −a

we get α (x Da+

− a)

β−1

 (x − a)β−α−1 1 β−1 = y (1 − y)−α−1 dy, (−(α − 1))! 0 (β) (x − a)β−α−1 , R(α) ≥ 0. = (β − α) 

Similar to the above theorem, we can also prove (β) (b − x)β+α−1 , R(α) > 0; (β + α) (β) (b − x)β−α−1 , R(α) ≥ 0. = (β − α)

α Ib− (b − x)β−1 = α (b − x)β−1 Db−

In particular, if β = 1 and R(α) ≥ 0, then the Riemann–Liouville fractional derivatives of a constant are, in general, not equal to zero. We know that α Da+ (x − a)β−1 =

(β) (x − a)β−α−1 , (β − α)

(1) (x − a)−α (x − a)−α = (1 − α) (1 − α) −α (b − x) α Db− . 1= (1 − α)

α Da+ 1=

Lemma 2.2.2 If R(α) > 0 and R(β) > 0, then the equations β

α+β

β

α+β

α Ia+ Ia+ f (x) = Ia+ f (x)

and α Ib− f (x) = Ib− f (x) Ib−

are satisfied at almost every point x ∈ [a, b] for f (x) ∈ L p (a, b), 1 ≤ p ≤ ∞. If α + β > 1, then the relations hold at any point of [a, b].

2.2 Riemann–Liouville Fractional Integrals and Derivatives

31

Proof By definition, we have  x 1 β (x − s)α−1 (Ia+ f (s))ds (α) a  x s 1 = (x − s)α−1 (s − τ )β−1 f (τ )dτ ds (α)(β) a a  x   x 1 f (τ ) (x − s)α−1 (s − τ )β−1 ds dτ = (α)(β) a τ

β

α Ia+ (Ia+ f (x)) =

where in the last step we exchanged the order of integration and pulled out the f (τ ) factor from the s integration. Changing variables to r defined by s = τ + (x − τ )r , β

α (Ia+ f (x)) = Ia+

1 (α)(β)



x

(x − τ )α+β−1 f (τ )



a

1

 (1 − r )α−1r β−1 dr dτ .

0

The inner integral is the beta function which satisfies the following property: 

1

(1 − r )α−1r β−1 dr = B(α, β) =

0

(α)(β) . (α + β)

Substituting back into the equation β

α Ia+ (Ia+ f (x)) =

1 (α + β)



x

α+β

(x − τ )α+β−1 f (τ )dτ = Ia+ f (x),

a

α and interchanging α and β show that the order in which the operator Ia+ is applied is immaterial. By similar way the other identity can be proved. 

This relationship is called the semigroup property of fractional integral operators. Unfortunately the comparable process for the fractional derivative operator Daα is significantly more complex, but it can be shown that Daα is neither commutative nor additive in general. Theorem 2.2.3 If R(α) > R(β) > 0, then for f (x) ∈ L p (a, b), 1 ≤ p ≤ ∞, the relations β

α−β

β

α−β

α f (x) = Ia+ f (x) Da+ Ia+

and α f (x) = Ib− f (x) Db− Ib−

hold almost everywhere on [a, b].

32

2 Fractional Calculus

Proof From the Riemann–Liouville integral (2.10) and derivative (2.12), we have β

α f (x) = D n Ia+ Da+ Ia+

n+α−β

f (x).

Using the above lemma, we have β α Da+ Ia+

1 dn f (x) = ( n) (n + α − β) dx

 a

x

f (t) dt. (x − t)β−α−n+1

By Leibniz formula, we have β

 x (n + α − β) 1 (x − t)α−β−1 f (t)dt, (n + α − β) a (α − β)  x 1 f (t) = dt, (α − β) a (x − t)β−α+1

α Da+ Ia+ f (x) =

α−β

= Ia+ f (x); hence β

α−β

β

α−β

α f (x) = Ia+ f (x). Da+ Ia+

Similarly α f (x) = Ib− f (x). Db− Ib−

 Observe that when β = 1 we have α+1 α f (x) = Ia+ f (x). D Ia+

Theorem 2.2.4 Let R(α) ≥ 0, m ∈ N. α+m α f (x) and Da+ f (x) exist, then (a) If the fractional derivatives Da+ α α+m f (x) = Da+ f (x). D m Da+ α+m α f (x) and (Db− f )(x) exist, then (b) If the fractional derivatives Db− α α+m f (x) = Da+ f (x). D m Db−

Proof By using the previous theorem for β = k ∈ N and R(α) > k, we have α α−k f (x) = Ia+ f (x) D k Ia+

2.2 Riemann–Liouville Fractional Integrals and Derivatives

33

and α−k α f (x) = Ib− f (x). D k Ib−

If the Riemann–Liouville derivative can be written in the form α n−α f (x) = D n Ia+ f (x), Da+

then α n−α f (x) = D m D n Ia+ f (x) D m Da+ n m n−α = D D Ia+ f (x) n−α−m = D n Ia+ f (x) α+m = Da+ f (x).

Therefore α α+m f (x) = Da+ f (x). D m Da+

Similarly α+m α f (x) = Db− f (x). D m Db−

 Lemma 2.2.5 If R(α) > 0 and f (x) ∈ L p (a, b), 1 ≤ p ≤ ∞, then the following equalities hold: α α Ia+ f (x) = f (x) Da+

and α α Ib− f (x) = f (x). Db−

Proof The Riemann–Liouville integral (2.10) and derivative (2.12) can be written in the form  x f (t) 1 α f (x) = dt Ia+ (α) a (x − t)1−α and α n−α Da+ f (x) = D n Ia+ f (x).

34

2 Fractional Calculus

Then α α n Ia+ f (x) = D n Ia+ f (x). Da+ n f )(x); when n = 1 Take D n (Ia+

d 1 d 1 I f (x) = dx a+ dx (1)



x

f (t)dt.

a

By using Leibniz rule, we have d 1 I f (x) = f (x). dx a+ When n = 2 d2 2 I f (x) = f (x). dx 2 a+ By proceeding in a similar manner, we get n D n Ia+ f (x) = f (x).

Thus α α Ia+ f (x) = f (x). Da+

Similarly α α Ib− f (x) = f (x). Db−

 α α Note that Ia+ Da+ f (x) is not necessarily equal to f (x) but the equality holds α α α Da+ f (x) = f (x). Instead, if f (x) ∈ L 1 (a, b) has when f (x) ∈ Ia+ (L 1 ), that is, Ia+ α an integrable fractional derivative Da+ f (x) then the equality is not true in general [2].

Laplace Transform of Riemann–Liouville Fractional Derivative The Laplace transform of the Riemann–Liouville integral of order α > 0 of a function 1 x α−1 and f (x): f (x) can be written as a convolution of the functions (α) α I0+ f (x) =

1 (α)

 0

x

(x − t)α−1 f (t)dt =

1 α−1 x ∗ f (x). (α)

2.2 Riemann–Liouville Fractional Integrals and Derivatives

35

The Laplace transform of the function x α−1 is L[x α−1 ](s) = (α)s −α . Here we use the formula α f (x)](s) = L( L[I0+

1 α−1 x )L( f (x)) = s −α F(s). (α)

(2.14)

To obtain the Laplace transform of Riemann–Liouville derivative, we represent α D0+ f (x) as the n-th derivative of some function, say g(x). That is, α f (x) = g (n) (x), D0+

and so g(x) can be written as −(n−α) f (x) g(x) = D0+  x 1 = (x − t)n−α−1 f (t)dt, n − 1 ≤ α < n. (n − α) 0

The use of the formula for the Laplace transform of an integer order derivative leads to α f (x)](s) = L[g (n) (x)](s) = s n G(s) − L[D0+

n−1

s k g (n−k−1) (0).

(2.15)

k=0

Using (2.14), the Laplace transform of g(x) is evaluated as G(s) = s −(n−α) F(s).

(2.16)

Additionally, from the definition of the Riemann–Liouville fractional derivative, dn−k−1 −(n−α) D f (x), dx n−k−1 0+ α−k−1 = D0+ f (x).

g (n−k−1) (x) =

(2.17)

Substituting (2.16) and (2.17) in (2.15) we obtain the Laplace transform of the Riemann–Liouville derivative of order α > 0 as α f (x)](s) = s α F(s) − L[D0+

n−1 k=0

α−k−1 s k D0+ f (x)|x=0 , n − 1 ≤ α < n.

36

2 Fractional Calculus

2.3 Caputo Fractional Derivatives α α Let Da+ f (x), Db− f (x) be the Riemann–Liouville fractional derivatives of order α f (x) α ∈ C (R(α) ≥ 0) defined by (2.12) and (2.13). The fractional derivatives C Da+ C α and Db− f (x) of order α ∈ C (R(α) ≥ 0) on [a, b] are defined via the above Riemann–Liouville fractional derivatives by C

α α Da+ f (x) = Da+ [ f (x) −

n−1 f (k) (a) (x − a)k ], k! k=0

C

α α Db− f (x) = Db− [ f (x) −

n−1 f (k) (b) (b − x)k ], k! k=0

respectively, where n = [R(α)] + 1 for α ∈ / N0 , n = α ∈ N0 . These derivatives are called left-sided and right-sided Caputo fractional derivatives of order α. When 0 < R(α) < 1, the relations take the following forms: C

α α Da+ f (x) = Da+ [ f (x) − f (a)],

C

α α Db− f (x) = Db− [ f (x) − f (b)].

If α ∈ / N0 and f (x) is a function for which the Caputo fractional derivatives α α Da+ f (x) and C Db− f (x) of order α ∈ C, (R(α) ≥ 0) exist together with the α α f )(x) and (Db− f )(x), then Riemann–Liouville fractional derivatives (Da+

C

0 0 Da+ f (x) = Db− f (x) = f (x), n f (x) = f (n) (x), Da+ n f (x) = (−1)n f (n) (x), (n ∈ N) Db−

and α (x − a)β−1 = Da+

(β) (x − a)β−α−1 , (R(α) ≥ 0). (β − α)

They are connected with each other by the following relations: C D α f (x) = D α f (x) − a+ a+

α f (x) − = Da+

α f (x) − = Da+

n−1 k=0 n−1 k=0 n−1 k=0

f k (a) α Da+ (x − a)k k! f k (a) (k + 1) (x − a)(k−α) k! (k − α + 1) f k (a) (x − a)k−α , n = [R(α)] + 1, (k − α + 1)

2.3 Caputo Fractional Derivatives

37

and similarly C

α Db−

f (x) =

α Db−

f (x) −

n−1 k=0

f (k) (b) (b − x)k , n = [R(α)] + 1. (k − α + 1)

When 0 < R(α) < 1, we have f (a) (x − a)−α (1 − α) f (b) C α α (b − x)−α . Db− f (x) = Db− f (x) − (1 − α)

C

α α Da+ f (x) = Da+ f (x) −

If α ∈ / N0 , then the Caputo fractional derivatives coincide with the Riemann– Liouville fractional derivatives in the following case: C

α α Da+ f (x) = Da+ f (x),

if f (a) = f 1 (a) = · · · = f n−1 (a) = 0, n = [R(α)] + 1 and C

α α Db− f (x) = Db− f (x),

if f (b) = f 1 (b) = · · · = f n−1 (b) = 0, n = [R(α)] + 1. In particular, when 0 < R(α) < 1, C

α α Da+ f (x) = Da+ f (x), when f (a) = 0,

C

α α Db− f (x) = Db− f (x), when f (b) = 0.

If α = n ∈ N0 , f n (x) is the usual derivative of f (x) of order n, n ∈ N, and C

α Da+ f (x) = f n (x),

C

α Db− f (x) = (−1)n f n (x).

The left-sided and right-sided Caputo fractional derivatives of order \alpha > 0, n-1 < \alpha \le n, are defined by [3]

  {}^C D_{a+}^{\alpha} f(x) := I_{a+}^{n-\alpha} D^{n} f(x) = \frac{1}{\Gamma(n-\alpha)} \int_a^x (x-t)^{n-\alpha-1} f^{(n)}(t)\,dt,   (2.18)

and

  {}^C D_{b-}^{\alpha} f(x) := (-1)^{n} I_{b-}^{n-\alpha} D^{n} f(x) = \frac{(-1)^{n}}{\Gamma(n-\alpha)} \int_x^b (t-x)^{n-\alpha-1} f^{(n)}(t)\,dt,   (2.19)


respectively, where the function f(t) has absolutely continuous derivatives up to order (n-1) on [a, b]. In particular, when 0 < \alpha < 1,

  {}^C D_{a+}^{\alpha} f(x) = \frac{1}{\Gamma(1-\alpha)} \int_a^x (x-t)^{-\alpha} f'(t)\,dt,   (2.20)

and

  {}^C D_{b-}^{\alpha} f(x) = \frac{-1}{\Gamma(1-\alpha)} \int_x^b (t-x)^{-\alpha} f'(t)\,dt.   (2.21)

Laplace Transform of Caputo Derivative
The Laplace transform of the Caputo derivative is obtained by writing the Caputo derivative in the form

  {}^C D_{0+}^{\alpha} f(x) = D_{0+}^{-(n-\alpha)} g(x),   g(x) = f^{(n)}(x).

By using (2.14), we have

  L[D_{0+}^{-(n-\alpha)} g(x)](s) = s^{-(n-\alpha)} G(s).   (2.22)

The Laplace transform of an integer-order derivative is

  G(s) = s^{n} F(s) - \sum_{k=0}^{n-1} s^{n-k-1} f^{(k)}(0).   (2.23)

Substituting (2.23) in (2.22), we have

  L[{}^C D_{0+}^{\alpha} f(x)](s) = s^{\alpha} F(s) - \sum_{k=0}^{n-1} s^{\alpha-k-1} f^{(k)}(0),   n-1 < \alpha \le n.
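This transform formula can also be verified symbolically. The sketch below (my own illustration, not from the text) takes f(t) = t^2 and \alpha = 1/2, computes the Caputo derivative from definition (2.20), and checks the relation above; here f(0) = 0, so the boundary term drops out.

```python
# Symbolic verification of the Caputo Laplace-transform formula for the
# illustrative case f(t) = t**2, α = 1/2 (so n = 1 and f(0) = 0).
import sympy as sp

t, tau, s = sp.symbols('t tau s', positive=True)
alpha = sp.Rational(1, 2)
f = tau**2

# Caputo derivative via (2.20): (1/Γ(1-α)) ∫_0^t (t-τ)^{-α} f'(τ) dτ
caputo = sp.integrate((t - tau)**(-alpha) * sp.diff(f, tau), (tau, 0, t)) / sp.gamma(1 - alpha)

lhs = sp.laplace_transform(caputo, t, s, noconds=True)
F = sp.laplace_transform(t**2, t, s, noconds=True)   # F(s) = 2/s**3
rhs = s**alpha * F                                    # boundary term vanishes since f(0) = 0
print(sp.simplify(lhs - rhs))  # prints 0
```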

Notations: For brevity and notational convenience, from here on we write I^{\alpha} for the Riemann–Liouville fractional integral I_{0+}^{\alpha}, D^{\alpha} for the Riemann–Liouville fractional derivative D_{0+}^{\alpha}, and {}^C D^{\alpha} for the Caputo derivative {}^C D_{0+}^{\alpha}. Even though many definitions of fractional integrals and derivatives have been introduced by several authors (a partial list is given in Chap. 6), in this book we consider only the Caputo derivative and the Riemann–Liouville integral and derivative. The relation between the Caputo and Riemann–Liouville fractional derivatives is proved in the following theorem.
Theorem 2.3.1 Suppose x > 0, \alpha \in R and n-1 < \alpha < n, n \in N. Then the following relation between the Riemann–Liouville and the Caputo derivatives holds:

  {}^C D^{\alpha} f(x) = D^{\alpha} f(x) - \sum_{k=0}^{n-1} \frac{x^{k-\alpha}}{\Gamma(k+1-\alpha)} f^{(k)}(0).

Proof The well-known Taylor series expansion of f(x) about the point 0 is

  f(x) = f(0) + x f'(0) + \frac{x^{2}}{2!} f''(0) + \cdots + \frac{x^{n-1}}{(n-1)!} f^{(n-1)}(0) + R_{n-1} = \sum_{k=0}^{n-1} \frac{x^{k}}{\Gamma(k+1)} f^{(k)}(0) + R_{n-1},

where

  R_{n-1} = \frac{1}{(n-1)!} \int_0^x f^{(n)}(\tau)(x-\tau)^{n-1}\,d\tau = \frac{1}{\Gamma(n)} \int_0^x f^{(n)}(\tau)(x-\tau)^{n-1}\,d\tau = I^{n} f^{(n)}(x).

Now, by using the linearity of the Riemann–Liouville fractional derivative, we obtain

  D^{\alpha} f(x) = D^{\alpha}\Big[ \sum_{k=0}^{n-1} \frac{x^{k}}{\Gamma(k+1)} f^{(k)}(0) + R_{n-1} \Big]
   = \sum_{k=0}^{n-1} \frac{D^{\alpha} x^{k}}{\Gamma(k+1)} f^{(k)}(0) + D^{\alpha} R_{n-1}
   = \sum_{k=0}^{n-1} \frac{x^{k-\alpha}}{\Gamma(k-\alpha+1)} f^{(k)}(0) + D^{\alpha} I^{n} f^{(n)}(x)
   = \sum_{k=0}^{n-1} \frac{x^{k-\alpha}}{\Gamma(k-\alpha+1)} f^{(k)}(0) + I^{n-\alpha} f^{(n)}(x)
   = \sum_{k=0}^{n-1} \frac{x^{k-\alpha}}{\Gamma(k-\alpha+1)} f^{(k)}(0) + {}^C D^{\alpha} f(x).

This means that

  {}^C D^{\alpha} f(x) = D^{\alpha} f(x) - \sum_{k=0}^{n-1} \frac{x^{k-\alpha}}{\Gamma(k+1-\alpha)} f^{(k)}(0).

So the proof is complete. We shall state some properties of the operators I α and C D α .




Theorem 2.3.2 For \alpha, \beta > 0 and a suitable function f, we have
(i) I^{\alpha}\, {}^C D^{\alpha} f(t) = f(t) - f(0), 0 < \alpha < 1,
(ii) {}^C D^{\alpha} I^{\alpha} f(t) = f(t),
(iii) {}^C D^{\alpha} f(t) = I^{1-\alpha} D f(t) = I^{1-\alpha} f'(t), 0 < \alpha < 1,
(iv) {}^C D^{\alpha}\, {}^C D^{\beta} f(t) \ne {}^C D^{\alpha+\beta} f(t) in general,
(v) {}^C D^{\alpha}\, {}^C D^{\beta} f(t) \ne {}^C D^{\beta}\, {}^C D^{\alpha} f(t) in general.
Due to this fact, the concept of sequential fractional derivative is discussed by Miller and Ross [4].
Sequential Fractional Derivative For n \in N, the sequential fractional derivative of a suitable function f(t) is defined by
  f^{(k\alpha)}(t) := D^{k\alpha} f(t) = D^{\alpha} D^{(k-1)\alpha} f(t),   k = 1, \ldots, n,   D^{0} f(t) = f(t),
where D^{\alpha} is any fractional differential operator; here we take it to be {}^C D^{\alpha}.
Proposition 2.3.3 When \alpha becomes an integer, the Caputo derivative is a classical derivative.
Proof We know that

D α f (t) =

1 (n − α)



t

(t − s)n−α−1 f (n) (s)ds.

a

Taking limit as α → n 1 lim D f (t) = lim α→n α→n (n − α) C

α



t

(t − s)n−α−1 f (n) (s)ds.

a

Taking u = f (n) (s), dv = (t − s)n−α−1 ds (t − s)n−α du = f (n+1) (s)ds, v = − , n−α


we have f (n) (a)(t − a)n−α α→n (n − α + 1)  t 1 (t − s)n−α f (n+1) (s)ds + lim α→n (n − α + 1) a  t = f (n) (a) + f (n+1) (s)ds

lim C D α f (t) = lim

α→n

a

= f

(n)

(t). 

Proposition 2.3.4 When α becomes an integer, the Riemann–Liouville fractional derivative is also a classical derivative. Proof The relation between Riemann–Liouville and Caputo derivatives is C

D α f (t) = D α f (t) −

n−1 k=0

D α f (t) = C D α f (t) +

f k (a) (t − a)k−α (k − α + 1)

n−1 k=0

f k (a) (t − a)k−α . (k − α + 1)

Taking limit as α → n lim D α f (t) = lim C D α f (t) + lim

α→n

α→n

α→n

n−1 k=0

f k (a) (t − a)k−α (k − α + 1)

f 0 (a) (t − a)−α = f (n) (t) + lim α→n (1 − α) f 1 (a) (t − a)1−α + lim α→n (2 − α) .. . f n−1 (a) (t − a)n−1−α + lim α→n (n − α) = f (n) (t). Here we used the definition of gamma integral for negative integer. We conclude that α becomes an integer for all fractional derivatives and integral becomes ordinary integer derivative and integral. 


2.4 Examples

From the following example, one can observe that the fractional derivative does not satisfy the commutative property.

Example 2.4.1 D^{\alpha} D^{\beta} f(t) \ne D^{\beta} D^{\alpha} f(t).
Let f(t) = t^{1/2}, \alpha = \frac{3}{2}, \beta = \frac{1}{2}. On calculating, we get

  D^{1/2} t^{1/2} = \frac{\sqrt{\pi}}{2},   and   D^{3/2} t^{1/2} = D^{1} D^{1/2} t^{1/2} = 0.

So we get

  D^{1/2} D^{3/2} t^{1/2} = 0.

Now

  D^{3/2} D^{1/2} t^{1/2} = \frac{\sqrt{\pi}}{2}\, D^{3/2}(1) = -\frac{t^{-3/2}}{4}.

From the above two equations, we observe that

  D^{1/2} D^{3/2} t^{1/2} \ne D^{3/2} D^{1/2} t^{1/2}.

Example 2.4.2 If n-1 < \alpha \le n and \beta > -1, then

  I^{\alpha} t^{\beta} = \frac{1}{\Gamma(\alpha)} \int_0^t (t-s)^{\alpha-1} s^{\beta}\,ds = \frac{t^{\alpha-1}}{\Gamma(\alpha)} \int_0^t \Big(1 - \frac{s}{t}\Big)^{\alpha-1} s^{\beta}\,ds.

Further, substituting y = s/t, we have

  I^{\alpha} t^{\beta} = \frac{t^{\alpha+\beta}}{\Gamma(\alpha)} \int_0^1 y^{\beta} (1-y)^{\alpha-1}\,dy.


Therefore

  I^{\alpha} t^{\beta} = \frac{\Gamma(\beta+1)}{\Gamma(\alpha+\beta+1)}\, t^{\alpha+\beta}.   (2.24)
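Formula (2.24) is easy to check numerically against the defining integral. The short sketch below uses scipy quadrature with the illustrative values \alpha = 0.6, \beta = 1.3, t = 2 (these numbers are my own choice, not from the text).

```python
# Numerical check of (2.24): I^α t^β = Γ(β+1)/Γ(α+β+1) t^(α+β).
from math import gamma
from scipy.integrate import quad

alpha, beta, t = 0.6, 1.3, 2.0

# left-hand side: the Riemann–Liouville integral computed by quadrature
lhs, _ = quad(lambda s: (t - s)**(alpha - 1) * s**beta, 0, t)
lhs /= gamma(alpha)

# right-hand side: the closed form (2.24)
rhs = gamma(beta + 1) / gamma(alpha + beta + 1) * t**(alpha + beta)

print(lhs, rhs)   # the two values agree (≈ 2.38)
```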

Example 2.4.3 If n-1 < \alpha \le n and \beta > -1, then

  D^{\alpha} t^{\beta} = D^{n} I^{n-\alpha} t^{\beta} = D^{n}\Big[ \frac{\Gamma(\beta+1)}{\Gamma(\beta-\alpha+n+1)}\, t^{\beta-\alpha+n} \Big]
   = \frac{\Gamma(\beta+1)}{\Gamma(\beta-\alpha+n+1)} (\beta-\alpha+n)(\beta-\alpha+n-1)\cdots(\beta-\alpha+1)\, t^{\beta-\alpha}
   = \frac{\Gamma(\beta+1)}{\Gamma(\beta-\alpha+1)}\, t^{\beta-\alpha},   \beta > -1.

Note that D^{\alpha} K = \frac{K t^{-\alpha}}{\Gamma(1-\alpha)}, where K is a constant. In particular,

  D^{1/2} t^{1/2} = \frac{\sqrt{\pi}}{2},

and the Riemann–Liouville derivative of a constant is not zero. Further, D^{1} t^{1/2} does not exist, but D^{\alpha} t^{1/2} exists for \alpha \le \frac{1}{2}.

Example 2.4.4 If n-1 < \alpha \le n and \beta > n-1, then

  {}^C D^{\alpha} t^{\beta} = I^{n-\alpha} D^{n} t^{\beta} = I^{n-\alpha}\Big[ \frac{\Gamma(\beta+1)}{\Gamma(\beta-n+1)}\, t^{\beta-n} \Big] = \frac{\Gamma(\beta+1)}{\Gamma(\beta-n+1)} \cdot \frac{\Gamma(\beta-n+1)}{\Gamma(\beta-\alpha+1)}\, t^{\beta-\alpha} = \frac{\Gamma(\beta+1)}{\Gamma(\beta-\alpha+1)}\, t^{\beta-\alpha},   \beta > n-1.

As a special case, consider \alpha = \frac{1}{2} and \beta = 1; then f(t) = t and

  {}^C D^{1/2} t = \frac{1}{\Gamma(\frac{1}{2})} \int_0^t (t-\tau)^{-1/2}\,d\tau.

Taking into account the properties of the gamma function and using the substitution u = (t-\tau)^{1/2}, the final result for the Caputo fractional derivative of the function f(t) = t is obtained as

  {}^C D^{1/2} t = \frac{1}{\sqrt{\pi}} \int_0^t (t-\tau)^{-1/2}\,d\tau = \frac{2}{\sqrt{\pi}} \int_0^{\sqrt{t}} du = \frac{2}{\sqrt{\pi}}\,(\sqrt{t} - 0).

Thus

  {}^C D^{1/2} t = \frac{2\sqrt{t}}{\sqrt{\pi}}.

Observe that {}^C D^{1/2}\, {}^C D^{1/2} t = 1.
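The worked value above can be reproduced directly from definition (2.20). The following minimal numerical sketch (the test point t = 1.7 is an illustrative choice of mine) evaluates the Caputo half-derivative of f(t) = t by quadrature and compares it with 2\sqrt{t}/\sqrt{\pi}.

```python
# Numerical cross-check of C D^{1/2} t = 2*sqrt(t)/sqrt(pi) via definition (2.20).
from math import gamma, sqrt, pi
from scipy.integrate import quad

def caputo_half_of_identity(t):
    # f(τ) = τ, so f'(τ) = 1:  C D^{1/2} f(t) = (1/Γ(1/2)) ∫_0^t (t-τ)^{-1/2} dτ
    val, _ = quad(lambda tau: (t - tau)**(-0.5), 0, t)
    return val / gamma(0.5)

t = 1.7
print(caputo_half_of_identity(t), 2 * sqrt(t) / sqrt(pi))  # both ≈ 1.471
```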

Example 2.4.5

  I^{\alpha} e^{at} = I^{\alpha} \sum_{k=0}^{\infty} \frac{(at)^{k}}{k!} = \sum_{k=0}^{\infty} \frac{a^{k}}{\Gamma(k+1)}\, I^{\alpha} t^{k}.

By using (2.24), we get

  I^{\alpha} e^{at} = \sum_{k=0}^{\infty} \frac{a^{k} t^{k+\alpha}}{\Gamma(k+\alpha+1)} = t^{\alpha} E_{1,1+\alpha}(at).   (2.25)
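Identity (2.25) also gives a convenient way to test a Mittag–Leffler routine. The sketch below implements the two-parameter Mittag–Leffler function as a truncated series (a simple implementation of my own, adequate for small arguments) and checks (2.25) against a direct quadrature; the values a = 0.8, \alpha = 0.5, t = 1.2 are illustrative.

```python
# Truncated-series Mittag–Leffler function and a numerical check of (2.25).
from math import gamma, exp
from scipy.integrate import quad

def mittag_leffler(z, alpha, beta, terms=60):
    """E_{alpha,beta}(z) = sum_k z^k / Gamma(alpha*k + beta), truncated series."""
    return sum(z**k / gamma(alpha * k + beta) for k in range(terms))

a, alpha, t = 0.8, 0.5, 1.2

lhs, _ = quad(lambda s: (t - s)**(alpha - 1) * exp(a * s), 0, t)
lhs /= gamma(alpha)                                   # I^α e^{at} by quadrature
rhs = t**alpha * mittag_leffler(a * t, 1, 1 + alpha)  # right-hand side of (2.25)
print(lhs, rhs)                                       # the two values agree
```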

Example 2.4.6 Let n-1 < \alpha \le n. Then D^{\alpha} e^{at} = D^{n} I^{n-\alpha} e^{at}. By using (2.25), we get

  D^{\alpha} e^{at} = D^{n}\big[ t^{n-\alpha} E_{1,1+n-\alpha}(at) \big] = \sum_{k=0}^{\infty} \frac{a^{k}\, D^{n} t^{k+n-\alpha}}{\Gamma(k+n-\alpha+1)} = t^{-\alpha} E_{1,1-\alpha}(at).

Example 2.4.7 Let n-1 < \alpha \le n. Then

  {}^C D^{\alpha} e^{at} = I^{n-\alpha} D^{n} e^{at} = a^{n} I^{n-\alpha} e^{at}.

By using (2.25), we get

  {}^C D^{\alpha} e^{at} = a^{n} t^{n-\alpha} E_{1,1+n-\alpha}(at).


Example 2.4.8

  I^{\alpha} \sin at = I^{\alpha} \sum_{k=0}^{\infty} \frac{(-1)^{k} (at)^{2k+1}}{\Gamma(2k+2)} = \sum_{k=0}^{\infty} \frac{(-1)^{k} a^{2k+1}}{\Gamma(2k+2)}\, I^{\alpha} t^{2k+1}.

By using (2.24), we get

  I^{\alpha} \sin at = \sum_{k=0}^{\infty} \frac{(-1)^{k} a^{2k+1} t^{2k+\alpha+1}}{\Gamma(2k+\alpha+2)} = a t^{\alpha+1} E_{2,2+\alpha}(-a^{2} t^{2}).   (2.26)

Example 2.4.9 Let n-1 < \alpha \le n. Then D^{\alpha} \sin at = D^{n} I^{n-\alpha} \sin at. By using (2.26), we get

  D^{\alpha} \sin at = D^{n}\big[ a t^{n-\alpha+1} E_{2,2+n-\alpha}(-a^{2} t^{2}) \big] = \sum_{k=0}^{\infty} \frac{(-1)^{k} a^{2k+1}\, D^{n} t^{2k+n-\alpha+1}}{\Gamma(2k+n-\alpha+2)} = \sum_{k=0}^{\infty} \frac{(-1)^{k} a^{2k+1} t^{2k+1-\alpha}}{\Gamma(2k+2-\alpha)} = a t^{1-\alpha} E_{2,2-\alpha}(-a^{2} t^{2}).

Example 2.4.10 Let n-1 < \alpha \le n. Then

  {}^C D^{\alpha} \sin at = I^{n-\alpha} D^{n} \sin at = \sum_{k=0}^{\infty} \frac{(-1)^{k} a^{2k+1}}{\Gamma(2k-n+2)}\, I^{n-\alpha} t^{2k-n+1}.

By using (2.26), we get

  {}^C D^{\alpha} \sin at = \sum_{k=0}^{\infty} \frac{(-1)^{k} a^{2k+1} t^{2k+1-\alpha}}{\Gamma(2k+2-\alpha)} = a t^{1-\alpha} E_{2,2-\alpha}(-a^{2} t^{2}).

Next we show that the product rule for integer order derivative is not true for the fractional derivative. Example 2.4.11 Consider D α (eat ebt ) = D α (e(a+b)t ).


Using the definition, we have D^{\alpha}(e^{(a+b)t}) = (a+b)^{\alpha} e^{(a+b)t}, but if we use the product rule D^{\alpha}(f(t)g(t)) = D^{\alpha} f(t)\, g(t) + f(t)\, D^{\alpha} g(t), we get

  D^{\alpha}(e^{at} e^{bt}) = D^{\alpha} e^{at} \cdot e^{bt} + e^{at}\, D^{\alpha} e^{bt} = (a^{\alpha} + b^{\alpha}) e^{(a+b)t} \ne (a+b)^{\alpha} e^{(a+b)t}.

Therefore

  D^{\alpha}(f(t)g(t)) \ne D^{\alpha} f(t)\, g(t) + f(t)\, D^{\alpha} g(t).

Similarly, the chain rule for the integer-order derivative is not true for the fractional derivative:

  \frac{d^{\alpha} f(g(t))}{dt^{\alpha}} \ne \frac{d^{\alpha} f(g(t))}{dg^{\alpha}} \cdot \frac{d^{\alpha} g(t)}{dt^{\alpha}}.

Remark [5]: In 1914, Ramanujan introduced the idea of fractional differentiation of x^{n} by adopting the notation D for ordinary differentiation and using the gamma function. He obtained the identity

  D^{m} x^{n} = \frac{\Gamma(n+1)}{\Gamma(n-m+1)}\, x^{n-m},

and for m = \frac{1}{2},

  D^{1/2} x^{n} = \frac{\Gamma(n+1)}{\Gamma(n+\frac{1}{2})}\, x^{n-\frac{1}{2}}.

Example 2.4.12 (R-L fractional derivative of a basic power function) Let us take f(t) = t^{k}. Then the first derivative is

  f'(t) = \frac{d}{dt} f(t) = k t^{k-1}.

Repeating this gives the more general result

  \frac{d^{\alpha}}{dt^{\alpha}} t^{k} = \frac{k!}{(k-\alpha)!}\, t^{k-\alpha},

which, after replacing the factorials with the gamma function, leads us to

  \frac{d^{\alpha}}{dt^{\alpha}} t^{k} = \frac{\Gamma(k+1)}{\Gamma(k-\alpha+1)}\, t^{k-\alpha},   k \ge 0.
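The worked half-derivative computations that follow can be reproduced with this power rule alone. A tiny sketch (my own illustration): applying the rule twice with \alpha = 1/2 to f(t) = t recovers the ordinary first derivative, the constant 1.

```python
# Applying the fractional power rule twice: D^{1/2} D^{1/2} t = 1.
from math import gamma

def rl_derivative_of_power(coeff, k, alpha):
    """D^alpha [coeff * t**k] = coeff * Γ(k+1)/Γ(k-α+1) * t**(k-α); returns (coeff, power)."""
    return coeff * gamma(k + 1) / gamma(k - alpha + 1), k - alpha

c, k = 1.0, 1.0                                  # start from f(t) = t
c, k = rl_derivative_of_power(c, k, 0.5)         # ≈ 1.128 * t**0.5, i.e. (2/√π)·√t
c, k = rl_derivative_of_power(c, k, 0.5)         # second half-derivative
print(c, k)                                      # ≈ 1.0, 0.0
```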


For k = 1 and \alpha = \frac{1}{2}, we obtain the half-derivative of the function t as

  \frac{d^{1/2}}{dt^{1/2}}\, t = \frac{\Gamma(2)}{\Gamma(\frac{3}{2})}\, t^{1/2} = \frac{2}{\sqrt{\pi}}\, t^{1/2}.

Repeating this process yields

  \frac{d^{1/2}}{dt^{1/2}} \frac{2}{\sqrt{\pi}}\, t^{1/2} = \frac{2}{\sqrt{\pi}} \frac{\Gamma(\frac{3}{2})}{\Gamma(1)}\, t^{0} = \frac{2}{\sqrt{\pi}} \frac{\sqrt{\pi}}{2} = 1
  (because \Gamma(\tfrac{3}{2}) = \tfrac{1}{2}\sqrt{\pi} and \Gamma(1) = 1),

which is indeed the expected result of

  \frac{d^{1/2}}{dt^{1/2}} \frac{d^{1/2}}{dt^{1/2}}\, t = \frac{d}{dt}\, t = 1.

For negative integer powers k, the gamma function is undefined and we have to use the relation

  \frac{d^{\alpha}}{dt^{\alpha}} t^{-k} = (-1)^{\alpha} \frac{\Gamma(k+\alpha)}{\Gamma(k)}\, t^{-(k+\alpha)},   k \ge 0.

This extension of the above differential operator need not be constrained only to real powers. For example, the (1+i)-th derivative of the (1-i)-th derivative yields the second derivative. Also notice that setting negative values for \alpha yields integrals. For a general function f(t) and 0 < \alpha < 1, the complete fractional derivative is

  D^{\alpha} f(t) = \frac{1}{\Gamma(1-\alpha)} \frac{d}{dt} \int_0^t \frac{f(s)}{(t-s)^{\alpha}}\,ds.
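For a general f, this last formula can be evaluated numerically by quadrature followed by a crude finite difference in t. The rough sketch below (my own illustration; f(t) = t^2, \alpha = 0.5 and the step size are arbitrary test choices) compares the result with the closed-form power rule.

```python
# Rough numerical evaluation of D^α f(t) = (1/Γ(1-α)) d/dt ∫_0^t f(s)(t-s)^{-α} ds,
# checked against the closed form Γ(3)/Γ(2.5) * t**1.5 for f(t) = t**2, α = 0.5.
from math import gamma
from scipy.integrate import quad

def frac_integral_part(f, t, alpha):
    val, _ = quad(lambda s: f(s) * (t - s)**(-alpha), 0, t)
    return val / gamma(1 - alpha)

def rl_derivative(f, t, alpha, h=1e-4):
    # differentiate the integral by a central difference (crude but adequate here)
    return (frac_integral_part(f, t + h, alpha) - frac_integral_part(f, t - h, alpha)) / (2 * h)

f = lambda s: s**2
t, alpha = 1.3, 0.5
print(rl_derivative(f, t, alpha), gamma(3) / gamma(2.5) * t**1.5)  # both ≈ 2.23
```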

For arbitrary \alpha, since the gamma function is undefined for arguments whose real part is a negative integer and whose imaginary part is zero, it is necessary to apply the fractional derivative after the integer derivative has been performed. For example,

  D^{3/2} f(t) = D^{1/2} D^{1} f(t) = D^{1/2} \frac{d}{dt} f(t).

Observe that, for zero initial conditions, the two derivatives are the same. It is known that the Riemann–Liouville derivative leads to difficulties when applied to real-world problems, especially in the context of initial conditions. Hence the relatively recent fractional derivative introduced by Caputo is extensively used in applications, as the initial conditions involved are physically meaningful. The difference between the Riemann–Liouville definition and the Caputo definition is that the Caputo derivative


of a constant K is 0, that is, {}^C D^{\alpha} K = 0, whereas the Riemann–Liouville fractional derivative of a constant K need not be equal to zero, that is, D^{\alpha} K = \frac{K t^{-\alpha}}{\Gamma(1-\alpha)}.

Example 2.4.13 In the fractional calculus, the law of exponents is known to be generally true for the operators of fractional integration due to their semigroup property. In general, both operators of fractional differentiation, D^{\alpha} and {}^C D^{\alpha}, satisfy neither the semigroup property nor the (weaker) commutative property. To show how the law of exponents does not necessarily hold for the standard fractional derivative, we provide two simple examples (with power functions) for which
(a) D^{\alpha} D^{\beta} f(t) = D^{\beta} D^{\alpha} f(t) \ne D^{\alpha+\beta} f(t),
(b) D^{\alpha} D^{\beta} g(t) \ne D^{\beta} D^{\alpha} g(t) = D^{\alpha+\beta} g(t).
For (a), let us take f(t) = t^{-1/2} and \alpha = \beta = \frac{1}{2}. Then, using the power rule, we get

  D^{1/2} f(t) = 0,   D^{1/2} D^{1/2} f(t) = 0,   but   D^{1/2+1/2} f(t) = D f(t) = -\frac{t^{-3/2}}{2}.

For (b), let us take g(t) = t^{1/2} and \alpha = \frac{1}{2}, \beta = \frac{3}{2}. Then, from Example 2.4.1, we get

  D^{1/2} g(t) = \frac{\sqrt{\pi}}{2},   D^{3/2} g(t) = 0,   D^{1/2} D^{3/2} g(t) = 0,   D^{3/2} D^{1/2} g(t) = -\frac{t^{-3/2}}{4},

and

  D^{1/2+3/2} g(t) = D^{2} g(t) = -\frac{t^{-3/2}}{4}.

2.5 Exercises

2.1. Show that, for \alpha > 0, I^{\alpha}(x f(x)) = x I^{\alpha} f(x) - \alpha I^{\alpha+1} f(x).
2.2. Find the R-L derivative of order \frac{1}{2} of the function f(x) = \ln x.
2.3. Show that D^{\alpha}(\cos ax) = a^{\alpha} \cos(ax + \alpha\pi/2) for a, \alpha \in R.
2.4. Show that \sqrt{2}\, D^{1/2} \sin x = \sin x + \cos x.
2.5. Calculate the Caputo derivative of order \alpha with n-1 < \alpha \le n for the following functions: (i) f(x) = (1+x)^{m}, (ii) f(x) = e^{x}.
2.6. Evaluate I^{\alpha} f(x) = 1 + f(x) for \alpha > 0, where f(x) is a continuous function.
2.7. Let f(t) be a continuous function and 0 < \alpha \le 1. Solve the integral equation \int_0^t (t-s)^{-\alpha} f(s)\,ds = 1.
2.8. Let a, b \in R, 0 < \alpha \le 1. Show that the integral equation
  x(t) = \frac{a t^{1-\alpha}}{\Gamma(2-\alpha)} - \frac{b}{\Gamma(1-\alpha)} \int_0^t (t-\tau)^{-\alpha} x(\tau)\,d\tau
has a solution x(t) = \frac{a}{b}\,[1 - E_{1-\alpha}(-b t^{1-\alpha})].
2.9. Show that the solution of the integrodifferential equation \int_0^t (t-\tau)^{-1/2} f'(\tau)\,d\tau = t is f(t) = \frac{4}{3\pi}\, t\sqrt{t}.


2.10. Let f(t) be a continuous function such that f(0) = 0 and 0 < \alpha \le 1. Show that the solution of the integrodifferential equation {}^C D^{\alpha} f(t) = 1 + f(t) is given by f(t) = t^{\alpha} E_{\alpha,\alpha+1}(t^{\alpha}).

References 1. Kilbas, A.A., Srivastava, H.M., Trujillo, J.J.: Theory and Applications of Fractional Differential Equations. Elsevier, Amsterdam (2006) 2. Samko, S.G., Kilbas, A.A., Marichev, O.I.: Fractional Integrals and Derivatives. Theory and Applications. Gordon and Breach Science Publishers, Amsterdam (1993) 3. Caputo, M.: Linear model of dissipation whose Q is almost frequency independent-II. Geophys. J. Roy. Astron. Soc. 13, 529–539 (1967) 4. Miller, K., Ross, B.: An Introduction to the Fractional Calculus and Fractional Differential Equations. John Wiley and Sons Inc, New York (1993) 5. Ranganathan, S.R.: Ramanujan. The Man and The Mathematician. Ess-Ess Publications, New Delhi (1967)

Chapter 3

Fractional Differential Equations

Abstract Fractional differential equations appear more frequently in different areas of science and engineering. As motivation, the occurrence of these equations in different fields of study is indicated. Solution representations for linear fractional differential equations are obtained with the help of the Mittag–Leffler matrix functions. Existence of solutions for nonlinear fractional differential equations and nonlinear fractional damped equations are established by using fixed point theorems. Several examples are provided and a set of exercises is given. Keywords Mittag–Leffler matrix functions · Linear equations · Solution representation · Existence of solutions · Nonlinear fractional differential equations · Fixed point method

3.1 Motivation Fractional differential equations appear more frequently in different areas of science and engineering. In fact, real-world processes generally or most likely result in fractional order systems. The main reason for using the integer order models was the absence of solution methods for fractional differential equations. The most important advantage of using fractional differential equation is their nonlocal property. It is well known that the integer order differential operator is a local operator but the fractional order differential operator is nonlocal. This means that the next state of a system depends not only upon its current state but also upon all its past states. Many real-world systems are better characterized by using a non-integer order dynamic model based on fractional calculus. Fractional calculus can be an aid for explanation of discontinuity formation and singularity formation is an enriching thought experiment. We may say that nature works with fractional derivatives. In fact, fractional differential equations are alternative models of nonlinear differential equations [1]. More interesting facts and the importance of fractional differential equations can be found in [2–9]. In the following models, the fractional derivatives are taken in the Caputo sense.


(i) Fractional Order HIV/AIDS Model The integer order HIV/AIDS model is recently reconstructed as the fractional order model [10]. In that, they first divided the total population into a susceptible class of size S and an infectious class before the onset of AIDS and a full-blown AIDS group of size A which is removed from the active population. Based on the assumptions that the infectious period is very long (≥10 years), we further consider several stages of the infectious period. For simplicity, we consider two stages according to clinic stages, that is, the asymptomatic phase (I) and the symptomatic phase (J). Thus, the model can be described by dα S dt α dα I dt α dα J dt α dα A dt α S(δ) = S0 ,

= μK − cβ(I + b J )S − μS, = cβ(I + b J )S − (μ + k1 )I + γ J, = k1 I − (μ + k2 + γ)J, = k2 J − (μ + d)A, I (δ) = I0 ,

J (δ) = J0 ,

A(δ) = A0 ,

where μK is the recruitment rate of the population; μ is the number of death rate constant; c is the average number of constants of an individual per unit time; β and bβ are probabilities of disease transmission per contact by an infective in the first stage and in the second stage, respectively; k1 and k2 are the transfer rate constants from the asymptomatic phase I to the symptomatic phase J and from the symptomatic phase to the AIDS case, respectively; γ is treatment rate from the symptomatic phase J to the asymptomatic phase I ; d is the disease-related death rate of the AIDS cases. (ii) Fractional Model of Tumor–Immune System Immune system is one of the most fascinating schemes from the point of view of biology and mathematics. The immune system is complex, intricate, and interesting. It is known to be multifunctional and multipathway, so most immune effectors do more than one job. Also, each function of the immune system is typically done by more than one effector which makes it more robust. Studying immune system cancer interactions is an important topic. The reason for using fractional order differential equations is that they are naturally related to systems with memory which exist in tumor–immune interactions. The model includes two immune effectors: E 1 (t), E 2 (t) (such as cytotoxic T cells and natural killer cells) interacting with the cancer cells, T (t), with a Holling function of type III. (Holling type III describes the response of predators to prey depressed at low prey density, then levels off with a further increase in prey density.)


The model takes the form [11] dα T = aT − r1 T E 1 − r2 T E 2 , dt α dα E 1 T 2 E1 = −d E + , 1 1 dt α T 2 + k1 dα E 2 T 2 E2 = −d E + , 2 2 dt α T 2 + k2 where 0 < α ≤ 1 and a, r1 , r2 , d1 , d2 , k1 , and k2 are positive constants. The interaction terms in the second and third equations of the above model satisfy the crossreactivity property of the immune system. (iii) Fractional Model of Electrical Circuits Consider linear electrical circuits composed of resistors, supercondensators (ultracapacitors), coils, and voltage (current) sources. As the state varies, the voltage across the supercondensator, currents in the coils are usually chosen. It is well known that the current i(t) in supercondensator is related with its voltage u C (t) by the formula i C (t) = C

dα u C (t) , 0 < α < 1, dt α

where C is the capacity of the supercondensator. Similarly, the voltage u L (t) on the coil is related with its current i L (t) by the formula u L (t) = L

dβ i L (t) , 0 < β < 1, dt β

where L is the inductance of the coil. The advantages of fractional derivatives become apparent in modeling the electrical properties of real materials [12].
(iv) Fractional Oscillator with Damping
The fractional-derivative version of the conventional single-degree-of-freedom oscillator is referred to as the single-degree-of-freedom fractional oscillator. The equation of motion is given by

  m \ddot{\vartheta}(t) + c\, {}^C D^{\alpha} \vartheta(t) + k \vartheta(t) = f(t),

where m is the mass, c the damping coefficient, k the stiffness, \vartheta the displacement, and f the forcing function. Further, Bagley and Torvik (see [13]) analyzed viscoelastically damped structures by means of fractional calculus and introduced the Bagley–Torvik equation of order \alpha = 1/2 and \alpha = 3/2 to study the motion of a rigid plate in a Newtonian fluid. The generic form of the Bagley–Torvik equation is given by


a D 2 y(t) + bC D α y(t) + cy(t) = f (t), t ∈ [0, T ], D k y(0) = ck , k = 0, 1, where α = 1/2 or α = 3/2, a > 0 and b, c ∈ R. (v) Fractional Electrochemistry Model In electrochemistry, the idea of a half-order fractional integral of the current field was known. Oldham and Spanier [14] suggested the replacement of the classical integer order Fick’s law describing the diffusion of electroactive species toward the electrodes by a fractional order integral law in the form D

−1/2

 i(t) = kc0

 √   C(0, t) C(0, t) k −1/2 1− + 1− , D c0 R c0

where c0 is the uniform concentration of electro-active species, k is the diffusion coefficient and k and R are constants.

3.2 Equation with Constant Coefficient

Consider the fractional differential equation of the form

  {}^C D^{\alpha} x(t) = a x(t) + f(t),   0 < t \le T,   a \in R,   (3.1)
  x(0) = x_0,

where 0 < \alpha < 1 and f(t) is a continuous function on [0, T]. Applying I^{\alpha} to both sides of (3.1), we get the corresponding Volterra integral equation

  x(t) = x_0 + \frac{a}{\Gamma(\alpha)} \int_0^t (t-s)^{\alpha-1} x(s)\,ds + \frac{1}{\Gamma(\alpha)} \int_0^t (t-s)^{\alpha-1} f(s)\,ds.

Now we apply the method of successive approximations to solve this integral equation. For that, we set x_0(t) = x_0 and

  x_m(t) = x_0 + \frac{a}{\Gamma(\alpha)} \int_0^t (t-s)^{\alpha-1} x_{m-1}(s)\,ds + \frac{1}{\Gamma(\alpha)} \int_0^t (t-s)^{\alpha-1} f(s)\,ds,   m = 1, 2, \ldots.


We find for x_1(t):

  x_1(t) = x_0 + \frac{a}{\Gamma(\alpha)} \int_0^t (t-s)^{\alpha-1} x_0\,ds + \frac{1}{\Gamma(\alpha)} \int_0^t (t-s)^{\alpha-1} f(s)\,ds
   = x_0 + \frac{a t^{\alpha}}{\Gamma(\alpha+1)} x_0 + \frac{1}{\Gamma(\alpha)} \int_0^t (t-s)^{\alpha-1} f(s)\,ds
   = \sum_{k=0}^{1} \frac{a^{k} t^{\alpha k}}{\Gamma(\alpha k+1)} x_0 + \frac{1}{\Gamma(\alpha)} \int_0^t (t-s)^{\alpha-1} f(s)\,ds.

Similarly, we find for x_2(t):

  x_2(t) = x_0 + \frac{a}{\Gamma(\alpha)} \int_0^t (t-s)^{\alpha-1} x_1(s)\,ds + \frac{1}{\Gamma(\alpha)} \int_0^t (t-s)^{\alpha-1} f(s)\,ds
   = \sum_{k=0}^{2} \frac{a^{k} t^{\alpha k}}{\Gamma(\alpha k+1)} x_0 + \sum_{k=1}^{2} a^{k-1} \int_0^t \frac{(t-s)^{\alpha k-1}}{\Gamma(\alpha k)} f(s)\,ds.

Continuing this process, we derive the following relation for x_m(t), m \in N:

  x_m(t) = \sum_{k=0}^{m} \frac{a^{k} t^{\alpha k}}{\Gamma(\alpha k+1)} x_0 + \sum_{k=1}^{m} a^{k-1} \int_0^t \frac{(t-s)^{\alpha k-1}}{\Gamma(\alpha k)} f(s)\,ds
   = \sum_{k=0}^{m} \frac{a^{k} t^{\alpha k}}{\Gamma(\alpha k+1)} x_0 + \sum_{k=0}^{m-1} a^{k} \int_0^t \frac{(t-s)^{\alpha k+\alpha-1}}{\Gamma(\alpha k+\alpha)} f(s)\,ds.

Taking the limit as m \to \infty, we obtain the explicit form of x(t) as

  x(t) = \sum_{k=0}^{\infty} \frac{a^{k} t^{\alpha k}}{\Gamma(\alpha k+1)} x_0 + \int_0^t (t-s)^{\alpha-1} \sum_{k=0}^{\infty} \frac{a^{k} (t-s)^{\alpha k}}{\Gamma(\alpha k+\alpha)} f(s)\,ds.

We write the above equation in terms of the Mittag–Leffler function and the solution of (3.1) is


  x(t) = E_{\alpha}(a t^{\alpha})\, x_0 + \int_0^t (t-s)^{\alpha-1} E_{\alpha,\alpha}(a(t-s)^{\alpha})\, f(s)\,ds.   (3.2)
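Formula (3.2) is straightforward to evaluate numerically once a Mittag–Leffler routine is available. The minimal sketch below (the data a = -1, f(t) = 1, x_0 = 0.5, \alpha = 0.7 are illustrative choices of mine, not from the text) evaluates (3.2) with a truncated Mittag–Leffler series and quadrature.

```python
# Numerical evaluation of the solution formula (3.2) for C D^α x = a x + f, x(0) = x0.
from math import gamma
from scipy.integrate import quad

def ml(z, alpha, beta=1.0, terms=80):
    return sum(z**k / gamma(alpha * k + beta) for k in range(terms))

a, x0, alpha = -1.0, 0.5, 0.7
f = lambda s: 1.0

def x(t):
    hom = ml(a * t**alpha, alpha) * x0
    inhom, _ = quad(lambda s: (t - s)**(alpha - 1) * ml(a * (t - s)**alpha, alpha, alpha) * f(s), 0, t)
    return hom + inhom

for t in (0.5, 1.0, 2.0):
    print(t, x(t))   # x(t) moves towards the steady state -f/a = 1 as t grows
```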

If the inhomogeneous term f (t) = 0, then the Eq. (3.1) is called homogeneous equation of the form C

D α x(t) = ax(t), 0 < t ≤ T, a ∈ R, x(0) = x0 ,

and the solution is x(t) = E α (at α )x0 . The solution of the Cauchy problem C

D α x(t) − ax(t) = f (t), x(0) = x0 , x  (0) = y0 ,

where 1 < α < 2 and x0 , y0 , a ∈ R is of the form α



α

x(t) = E α (at )x0 + t E α,2 (at )y0 +

t

(t − s)α−1 E α,α (a(t − s)α ) f (s)ds.

0

In particular, the solution to the equation C

D α x(t) − ax(t) = 0, x(0) = x0 , x  (0) = y0 ,

is given by x(t) = E α (at α )x0 + t E α,2 (at α )y0 .

3.3 Equation with Matrix Coefficient

If the constant a is replaced by an n × n matrix A and x(t), f(t) are n-vectors, then Eq. (3.1) becomes the vector fractional differential equation

  {}^C D^{\alpha} x(t) = A x(t) + f(t),   0 < t \le T,   x(0) = x_0,   (3.3)

and the solution takes the following form:

  x(t) = E_{\alpha}(A t^{\alpha})\, x_0 + \int_0^t (t-s)^{\alpha-1} E_{\alpha,\alpha}(A(t-s)^{\alpha})\, f(s)\,ds.   (3.4)


Similarly, when f = 0 in (3.3), the solution (3.4) takes the form x(t) = E α (At α )x0 . Observe that C

D α x(t) = CD α E α (At α )x0 = AE α (At α )x0 = Ax(t).

Next, we obtain the solution representation by using the Laplace transform technique. Consider the linear fractional differential equation of the form C

D α x(t) = Ax(t) + f (t), t ∈ J = [0, T ],

(3.5)

x(0) = x0 , where 0 < α < 1, x ∈ Rn , A is an n × n matrix and f (t) is continuous on J . Applying Laplace transform on both sides and using the Laplace transform of Caputo derivative, we get λα X (λ) − λα−1 x(0) = AX (λ) + F(λ), F(λ) λα−1 x(0) + α . X (λ) = α [λ I − A] [λ I − A] Applying inverse Laplace transform on both sides, we have

L−1 {X (λ)} (t) = L−1 λα−1 (λα I − A)−1 (t)x0   F(λ) (t). +L−1 [λα I − A] Inserting Laplace transform of Mittag–Leffler function, we get the solution as x(t) = E α (At α )x0 + α

 

d dt t

= E α (At )x0 + α

s 



t 0

α−1

 (t − s)α−1 E α (As α )ds ∗ f (t) (α) E α,α (As α ) f (t − s)ds

0 t

= E α (At )x0 +

(t − s)α−1 E α,α (A(t − s)α ) f (s)ds.

0

Therefore, the solution is given by

  x(t) = E_{\alpha}(A t^{\alpha})\, x_0 + \int_0^t (t-s)^{\alpha-1} E_{\alpha,\alpha}(A(t-s)^{\alpha})\, f(s)\,ds.   (3.6)
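Evaluating (3.6) in practice only requires the Mittag–Leffler matrix function. The sketch below (my own illustration; the 2×2 matrix, initial data and orders are arbitrary test choices) computes it by a truncated series, evaluates the homogeneous part E_\alpha(At^\alpha)x_0, and checks that the series reduces to the matrix exponential when \alpha = 1.

```python
# Truncated-series Mittag–Leffler matrix function and the homogeneous part of (3.6).
import numpy as np
from math import gamma
from scipy.linalg import expm

def ml_matrix(A, alpha, beta=1.0, terms=60):
    """E_{alpha,beta}(A) = sum_k A^k / Gamma(alpha*k + beta), truncated series."""
    n = A.shape[0]
    term = np.eye(n)
    out = np.zeros((n, n))
    for k in range(terms):
        out += term / gamma(alpha * k + beta)
        term = term @ A
    return out

A = np.array([[-2.0, 1.0], [0.0, -3.0]])
x0 = np.array([1.0, 1.0])
alpha = 0.8

for t in (0.5, 1.0, 2.0):
    x_t = ml_matrix(A * t**alpha, alpha) @ x0   # homogeneous solution x(t) = E_α(A t^α) x0
    print(t, x_t)

print(np.allclose(ml_matrix(A, 1.0), expm(A)))  # True: for α = 1 the series is exp(A)
```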


Solution Verification: Now we verify that the solution satisfies the fractional differential equation (3.5). We know that D α E α (At α ) = AE α (At α ), ∞  d αk+α Ak d α t E α,α+1 (At α ) = t dt (αk + α + 1) dt k=0 C

=

∞  Ak (αk + α)t αk+α−1 , (αk + α + 1) k=0

d α t E α,α+1 (At α ) = t α−1 E α,α (At α ) dt ∞  d d αk Ak α E α (At ) = t dt (αk + 1) dt k=0 =

∞  k=0

=

∞  k=0

=

∞ 

Ak αkt αk−1 (αk + 1) Ak αk−1 t (αk) Ak+1 t αk+α−1 (αk + α)

k=1 α−1

= At  t ∞  (t − s)α−1 E α,α (A(t − s)α )ds = 0

=

k=0 ∞  k=0

E α,α (At α ),  t Ak (t − s)αk+α−1 ds (αk + α) 0 (t − s)αk+α Ak . (αk + α) αk + α

= (t − s)α E α,α+1 (A(t − s)α ). Taking C D α on both sides of (3.6), C

D α x(t) = C D α E α (At α )x0  t + C Dα (t − s)α−1 E α,α (A(t − s)α ) f (s)ds 0

(3.7)

3.3 Equation with Matrix Coefficient C

59

D α x(t) = L 1 + L 2

where L 1 = C D α E α (At α )x0 , = AE α (At α )x0 ,  t C α (t − s)α−1 E α,α (A(t − s)α ) f (s)ds, and L 2 = D 0   t 1 −α d (t − s) = (1 − α) 0 ds  s α−1 α (s − τ ) E α,α (A(s − τ ) ) f (τ )dτ ds. 0

Consider d ds



s

(s − τ )α−1 E α,α (A(s − τ )α ) f (τ )dτ

0

and take u = f (τ ), dv = (s − τ )α−1 E α,α (A(s − τ )α )dτ du = f  (τ )dτ , v = −(s − τ )α−1 E α,α+1 (A(s − τ )α ). Then  s d (s − τ )α−1 E α,α (A(s − τ )α ) f (τ )dτ ds 0 s  d = − (s − τ )α−1 E α,α+1 (A(s − τ )α ) f (τ ) ds 0  s α α  + (s − τ ) E α,α+1 (A(s − τ ) ) f (τ )dτ 0  d − 0. f (s) + s α E α,α+1 (A(s − 0)α ) f (0) = ds   s (s − τ )α E α,α+1 (A(s − τ )α ) f  (τ )dτ + 0  d α s E α,α+1 (As α ) f (0) = ds  s d (s − τ )α E α,α+1 (A(s − τ )α ) f  (τ )dτ + ds 0 = s α−1 E α,α (As α ) f (0)  s (s − τ )α−1 E α,α (A(s − τ )α ) f  (τ )dτ . + 0

(3.8)


Thus, d ds



s 0

(s − τ )α−1 E α,α (A(s − τ )α ) f (τ )dτ = s α−1 E α,α (As α ) f (0)  s + (s − τ )α−1 E α,α (A(s − τ )α ) f  (τ )dτ 0

and from (3.8)  t 1 L2 = (t − s)−α s α−1 E α,α (As α ) f (0)ds (1 − α) 0  s  t 1 (t − s)−α ds (s − τ )α−1 E α,α (As α ) f  (τ )dτ . + (1 − α) 0 0 Let L 2 = I1 + I2 , where I1 =

1 (1 − α)

1 = (1 − α) =

f (0) (1 − α)

Taking dy = − dst , y =

 

t

(t − s)−α s α−1 E α,α (As α ) f (0)ds

0 t 0

∞  k=0

t−s ,1 t

A (αk + α)

=

f (0) (1 − α)

k=0

k=0 t

Ak s αk f (0)ds (αk + α)

(t − s)−α s αk+α−1 ds

0

− y = st , s : 0 → t, y : 1 → 0,

Ak f (0)  (1 − α) k=0 (αk + α) ∞ 



k



I1 =

∞ 

(t − s)−α s α−1

k



1

(yt)−α ((1 − y)t)αk+α−1 tdy

0

A t αk (αk + α)



1

y −α+−1 (1 − y)αk+α−1 dy

0

= f (0)E α,1 (At α ) = f (0)E α (At α ); and I2 =

1 (1 − α)



t 0

f  (τ )dτ

∞  k=0

Ak (αk + α)

 τ

t

(t − s)−α (s − τ )αk+α−1 ds.

(3.9)

3.3 Equation with Matrix Coefficient

By taking y =

t−s , dy t−τ

61

ds = − t−τ ,1 − y =



t

I2 =

s−τ t−τ

and limits s : τ → t, y : 1 → 0

f  (τ )E α (A(t − τ )α )dτ .

0

Let u = E α (A(t − τ )α ), dv = f  (τ )dτ du = −A(t − τ )α−1 E α,α (A(t − τ )α )dτ , v = f (τ ). Then t  t I2 = E α (A(t − τ ) ) f (τ ) + A (t − τ )α−1 E α,α (A(t − τ )α ) f (τ )dτ 0 0  t = f (t) − f (0)E α (At α ) + A (t − τ )α−1 E α,α (A(t − τ )α ) f (τ )dτ α

0

and from (3.9), L 2 = f (0)E α (At α ) + f (t) − f (0)E α (At α )  t +A (t − s)α−1 E α,α (A(t − s)α ) f (s)ds, 0  t = f (t) + A (t − s)α−1 E α,α (A(t − s)α ) f (s)ds. 0

From (3.8), C

D α x(t) = L 1 + L 2 = AE α (At α )x0 + f (t)  t +A (t − s)α−1 E α,α (A(t − s)α ) f (s)ds 0  t (t − s)α−1 E α,α (A(t − s)α ) f (s)ds] + f (t) = A[E α (At α )x0 + = Ax(t) + f (t)

0

and x(0) = x0 . Remark: Consider the linear fractional differential equation of the form C

D α x(t) + A2 x(t) = f (t), x(0) = x0 , x  (0) = y0 ,

(3.10)


where 1 < α ≤ 2, x ∈ Rn , A is an n × n matrix and f is a continuous function. Applying Laplace transform to both sides, we get s α X (s) − s α−1 x(0) − s α−2 x  (0) + A2 X (s) = F(s). Then X (s) =

s α−2 F(s) s α−1 x0 + α y0 + α . 2 2 +A s I+A s I + A2

sα I

Taking inverse Laplace transform on both sides, we get    −1   −1  (t)x0 + L−1 s α−2 s α I + A2 (t)y0 L−1 {X (s)} (t) = L−1 s α−1 s α I + A2 +L−1







−1 F(s) s α I + A2 (t).

Using the Laplace transform of Mittag–Leffler function, we get the solution of the system (3.10) as x(t) = E α (−A2 t α )x0 + t E α,2 (−A2 t α )y0 + f (t) ∗ t α−1 E α,α (−A2 t α )  t (t − s) f (s)ds, = 0 (t)x0 + 1 (t)y0 + 0

where 0 (t) = E α (−A2 t α ), 1 (t) = t E α,2 (−A2 t α ), (t) = t α−1 E α,α (−A2 t α ). Note that when α = 2, the linear fractional differential equation (3.10) reduces to the second order differential equation d 2 x(t) + A2 x(t) = f (t), dt 2 with the same initial conditions x(0) = x0 and x  (0) = y0 and the solution takes the form  t A−1 sin(A(t − s)) f (s)ds. x(t) = cos(At) x0 + A−1 sin(At) y0 + 0


3.4 Nonlinear Equations

Consider the following nonlinear fractional differential equation:

  {}^C D^{\alpha} x(t) = A x(t) + f(t, x(t)),   (3.11)
  x(0) = x_0,

where 0 < \alpha < 1, x \in R^n, A is an n × n matrix and f : J \times R^n \to R^n is continuous. The integral representation of the solution of Eq. (3.11) is given by

  x(t) = E_{\alpha}(A t^{\alpha})\, x_0 + \int_0^t (t-s)^{\alpha-1} E_{\alpha,\alpha}(A(t-s)^{\alpha})\, f(s, x(s))\,ds.   (3.12)

The existence and uniqueness of the solution of (3.11) implies the existence and uniqueness of the solution of (3.12) and vice versa; see [13] for details. Let C(J, R^n) denote the Banach space of continuous functions x(t) with values in R^n for t \in J, with the norm ||x|| = \sup\{|x(t)| : t \in J\}. Assume the following conditions:
(H_1) There exist constants M_0 > 0, M > 0 such that ||E_{\alpha}(A t^{\alpha})|| \le M_0 and ||E_{\alpha,\alpha}(A t^{\alpha})|| \le M.
(H_2) f : J \times R^n \to R^n is continuous and there exist constants L, N > 0 such that ||f(t, x_1) - f(t, x_2)|| \le L ||x_1 - x_2|| for all x_1, x_2 \in R^n, and N = \max_{t \in J} ||f(t, 0)||.
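Before turning to the existence theorems, it is instructive to see the successive-approximation idea behind them in action on (3.12). The rough sketch below (my own illustration) takes A = 0 for simplicity, so that E_\alpha(0) = 1 and E_{\alpha,\alpha}(0) = 1/\Gamma(\alpha), uses the illustrative data f(t, x) = -x + t, x_0 = 1, \alpha = 0.6, and iterates the integral operator on a grid with a crude trapezoidal rule in which the singular quadrature node s = t is simply dropped. The shrinking corrections illustrate the contraction behavior exploited in the proofs.

```python
# Picard (successive-approximation) iteration for the integral equation (3.12), A = 0.
import numpy as np
from math import gamma

alpha, x0, T, N = 0.6, 1.0, 1.0, 400
ts = np.linspace(0.0, T, N + 1)
f = lambda t, x: -x + t          # illustrative nonlinearity, not from the text

def picard_step(x):
    new = np.empty_like(x)
    new[0] = x0
    for i in range(1, N + 1):
        t, s = ts[i], ts[:i]                      # the singular node s = t is dropped
        w = (t - s)**(alpha - 1) / gamma(alpha)
        vals = w * np.array([f(si, xi) for si, xi in zip(s, x[:i])])
        new[i] = x0 + np.sum((vals[1:] + vals[:-1]) * np.diff(s)) / 2.0
    return new

x = np.full(N + 1, x0)
for k in range(10):
    x_new = picard_step(x)
    print(k, np.max(np.abs(x_new - x)))           # successive corrections shrink
    x = x_new
```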

Theorem 3.4.1 If the hypotheses (H1 ) and (H2 ) are satisfied and M Tα L < 1, then Eq. (3.11) has a unique solution on J . Proof Define the mapping  : C(J ; Rn ) → C(J ; Rn ) by α



t

x(t) = E α (At )x0 +

(t − s)α−1 E α,α (A(t − s)α ) f (s, x(s))ds

0

and we need to show that  has a fixed point. This fixed point is the solution of Eq. (3.11). Choose M0 x0 + N . r≥ α 1 − (M Tα L) Let Br = {x ∈ C(J ; Rn ); ||x|| ≤ r }. Then we have to show that Br ⊂ Br . From the assumptions, we have


 t ||x(t)|| ≤||E α (At α )|| x0 + (t − s)α−1 ||E α,α (A(t − s)α )|||| f (s, x(s))||ds, 0  t α (t − s)α−1 ||E α,α (A(t − s)α−1 )|| = ||E α (At )||x0 + 0

[|| f (s, x(s)) − f (s, 0)|| + || f (s, 0)||]ds, Tα M L||x|| + N , = M0 x0 + α ≤ r. Thus,  maps Br into itself. Now, for x1 , x2 ∈ C(J, Rn ), we have  ||x1 (t) − x2 (t)||

t

≤ 0

≤M ≤

(t − s)α−1 E α,α (A(t − s)α ) L||x1 − x2 ||ds,

Tα L||x1 − x2 ||, α

1 ||x1 − x2 ||. 2

Hence,  is a contraction mapping and therefore there exists a unique fixed point x ∈ Br such that x(t) = x(t). Any fixed point of  is the solution of Eq. (3.11).  Theorem 3.4.2 If the hypotheses (H1 ) and (H2 ) are satisfied, then Eq. (3.11) has a unique solution on J . Proof The proof is based on the application of fixed point method. Define a mapping  : C(J, Rn ) → C(J, Rn ) by α



t

x(t) = E α (At )x0 +

(t − s)α−1 E α,α (A(t − s)α ) f (s, x(s))ds.

0

Let x1 , x2 ∈ C(J ; Rn ). Then from Eq. (3.13), we have, for each t ∈ J ||x1 (t) − x2 (t)|| ≤ M

Tα L||x1 − x2 ||. α

Then by induction, we have α

|| n x1 (t) −  n x2 (t)|| ≤ α

(M T L)n

(M Tα L)n ||x1 − x2 ||. n!

α Since < 1 for large n, by the generalization of the Banach contraction prinn! ciple (Theorem 1.5.2),  has a unique fixed point x ∈ C(J ; Rn ). This fixed point is the solution of Eq. (3.11). 


Now, we prove an interesting theorem which is very useful for proving the existence of solutions of nonlinear fractional differential equations in general Banach spaces. Theorem 3.4.3 Let X = {x ∈ C([0, T ]; R) : equipped with the norm

C

D α x ∈ C([0, T ]; R)} be the space

x X = max |x(t)| + max |CD α x|. t∈[0,T ]

t∈[0,T ]

Then X is a Banach space. Proof If α = 1, then X = C 1 ([0, T ]; R) and in this case, X is a Banach space. Let us assume that α ∈ (0, 1). Let {xn } ⊂ X be a Cauchy sequence. Then {xn } and {CD α xn } are Cauchy sequences in the space C([0, T ]; R), and so there exist x, y ∈ C([0, T ]; R) such that lim xn = x ⇒ lim xn − x = 0

n→∞

n→∞

and lim CD α xn = y ⇒ lim CD α xn − y = 0.

n→∞

n→∞

We want to prove that y = CD α x; for that, 

t C

D α xn (s)ds =

0



t



0



0



0



0

t

=

t

=

t

=

 s 1 (s − τ )−α xn (τ )dτ ds (1 − α) 0  s ds (s − τ )−α xn (τ )dτ (1 − α) 0  t xn (τ )dτ (s − τ )−α ds (1 − α) τ xn (τ )dτ (s − τ )1−α t . (1 − α) 1 − α τ

xn (τ )dτ (t − τ )1−α 1−α 0 (1 − α)  t 1 = (t − τ )1−α xn (τ )dτ . (2 − α) 0

=

t

Let u = (t − τ )1−α , dv = xn (τ )dτ , du = −(1 − α)(t − τ )−α dτ , v = xn (τ ).


Then 

t C

t  t   1 (t − τ )1−α xn (τ ) + (1 − α)(t − τ )−α xn (τ )dτ (2 − α) 0 0  t −1 1 1−α = xn (0) + (t − τ )−α xn (τ )dτ . (3.13) t (2 − α) (1 − α) 0

D α xn (s)ds =

0

Consider −1 (1 − α)



t

(t − τ )

−α

xn (0)dτ =

0

= = =

 t −xn (0) (t − τ )−α dτ (1 − α) 0 −xn (0) −(t − τ )1−α t . (1 − α) 1 − α 0   t 1−α −xn (0) −0+ (1 − α) 1−α −xn (0) 1−α t (2 − α)

and (3.13) implies 

t C

D α xn (s)ds =

0

1 (1 − α)



t

(t − τ )−α (xn (τ ) − xn (0))dτ ,

0

that is, 

t C

0



1 D xn (s)ds = (1 − α) α

t

(t − s)−α (xn (s) − xn (0))ds.

0

Letting n → ∞ gives 

t C

lim

n→∞ 0

1 n→∞ (1 − α)

D α xn (s)ds = lim



t

(t − s)−α (xn (s) − xn (0))ds.

0

By using the Lebesgue-dominated convergence theorem, 

t

1 y(s)ds = (1 − α)

0



t

(t − s)−α (x(s) − x(0))ds.

0

Taking differentiation with respect to t on both sides, we have d dt

 0

t

 t d 1 (t − s)−α (x(s) − x(0))ds (1 − α) dt 0  t d 1 (t − s)−α (x(s) − x(0))ds y(t) = (1 − α) dt 0

y(s)ds =


Let u = x(s) − x(0), dv = (t − s)−α ds (t − s)1−α . du = x  (s)ds, v = − 1−α Then t  1 d (t − s)1−α − (x(s) − x(0)) (1 − α) dt 1−α 0   t 1−α (t − s) x  (s)ds + 1−α 0  d t 1−α 1 − 0.(x(t) − x(0)) + .0 = (1 − α) dt 1−α   t (t − s)1−α  x (s)ds + 1−α 0  t d (t − s)1−α  1 x (s)ds = (1 − α) dt 0 1−α  t  1 d (t − s)1−α  = x (s)ds + 1.0 − 0 (1 − α) 0 dt 1 − α  t (1 − α)(t − s)−α  1 x (s)ds = (1 − α) 0 (1 − α)  t 1 (t − s)−α x  (s)ds = (1 − α) 0 = CD α x(t).

y(t) =

Finally, we have lim CD α xn (t) = y(t) = CD α x(t).

n→∞

Thus, the space X is a Banach space.



Consider the general fractional differential equation C

D α x(t) = f (t, x(t)), x(0) = x0 ,

(3.14)

where 0 < α < 1 and the nonlinear function f : G → R is continuous. Here, G = {(t, x) : t ∈ [0, h ∗ ], |x − x0 | < K , for some K > 0, h ∗ > 0} and M = sup | f (t, x)| (t,x)∈G


and 





h = min h ,

K (α + 1) M

α1 

.

Now we prove that the initial value problem has a solution x(t) ∈ C([0, h]; R). For that, we state the following result. Lemma 3.4.4 The function x(t) ∈ C([0, h]; R) is a solution of the initial value problem (3.14) if and only if it is a solution of the integral equation 1 x(t) = x0 + (α)



t

(t − s)α−1 f (s, x(s))ds.

(3.15)

0

For the sake of simplicity of the presentation, we only treat the scalar case explicitly here. However, all the results can be extended to vector valued functions without any difficulty. Theorem 3.4.5 The initial value problem (3.14) has a solution x(t) ∈ C([0, h]; R) Proof If M = 0, then f (t, x) = 0 for all (t, x) ∈ G. In this case, x(t) = x0 is the solution of the initial value problem (3.14). When M = 0, the initial value problem (3.14) is equivalent to the integral equation (3.15). Set U = {x ∈ C([0, h] : R) : x − x0 ≤ K }. Obviously, U is a closed convex subset of the Banach space C([0, h] : R) and so U is also a Banach space. Define the operator P : U → U by P x(t) = x0 +

1 (α)



t

(t − s)α−1 f (s, x(s))ds.

(3.16)

0

First, we show that P x ∈ U whenever x ∈ U . Now for 0 ≤ t1 ≤ t2 ≤ h, |P x(t1 ) − P x(t2 )|   t2 1 t1 α−1 α−1 = (t1 − s) f (s, x(s))ds − (t2 − s) f (s, x(s))ds (α) 0 0  1 t1 [(t1 − s)α−1 − (t2 − s)α−1 ] f (s, x(s))ds (α) 0  t2 (t2 − s)α−1 f (s, x(s))ds + t1   t1  t2 M α−1 α−1 ≤ |(t1 − s) − (t2 − s) |ds + (t2 − s)α−1 ds (α) 0 t1

=


 t2  t1 M [(t1 − s)α−1 − (t2 − s)α−1 ]ds + (t2 − s)α−1 ds, (α) 0 t1   (t2 − t1 )α M 1 α α α [t1 − t2 + (t2 − t1 ) ] + = (α) α α 2M ≤ (t2 − t1 )α , since 0 < α < 1, (α + 1)

=

which tends to zero as t2 → t1 and so P is continuous. Moreover,  1 t α−1 |P x(t) − x0 | = (t − s) f (s, x(s))ds (α) 0 1 ≤ Mt α (α + 1) Mh α ≤ (α + 1) M K (α + 1) ≤ = K. (α + 1) M Thus, P x ∈ U and so P maps U into U itself. Next, we show that P(U ) = {P x : x ∈ U } is a relatively compact set. This can be done by means of the Arzela–Ascoli theorem. For z ∈ P(U ), t ∈ [0, h], |z(t)| = |P x(t)|

≤ ≤

 t 1 (t − s)α−1 | f (s, x(s))|ds (α) 0 1 Mh α ≤ x0 + K 

x0 + (α + 1)

x0 +

which is the required boundedness property. For equicontinuity, we have, for 0 ≤ t1 ≤ t2 ≤ h, 2M (t2 − t1 )α .

P x(t1 ) − P x(t2 ) ≤ (α + 1) Thus, if |t1 − t2 | < δ, then

P x(t1 ) − P x(t2 ) ≤

2M δα . (α + 1)

Noting that the expression on the right-hand side is independent of x, t1 and t2 , we see that the set P(U ) is equicontinuous. Hence, by the Arzela–Ascoli theorem, P(U ) is relatively compact and so, by Schauder’s fixed point theorem, P has a fixed point. This fixed point is a solution of the initial value problem (3.14). 


3.5 Nonlinear Damped Equations

Consider the nonlinear fractional differential equation of the form C

D α x(t) + A2 x(t) = f (t, x(t),CD β x(t)), t ∈ J = [0, T ],

x(0) = x0 , x  (0) = y0 ,

(3.17)

with 1 < α ≤ 2, 0 < β ≤ 1, A is an n × n matrix and the nonlinear function f : J × Rn × Rn → Rn is continuous. The solution of (3.17) is given by 

t

x(t) = 0 (t) x0 + 1 (t) y0 +

(t − s) f (s, x(s),CD β x(s))ds

0

where , 0 , 1 are already defined. For brevity, let us take n1 n3 n5 n6

= = = =

sup{ 0 (t) , t ∈ J }; sup{ (t − s) , t, s ∈ J }; sup{ 2 (t − s) , t, s ∈ J }; n 4 x0 + n 1 y0 ;

n 2 = sup{ 1 (t) , t ∈ J }; n 4 = sup{ A2 (t) , t ∈ J }; 2 (t) = t α−1 E α,α−1 (−A2 t α ); c=n 1 x0 + n 2 y0 .

Now, we make the following assumptions to obtain the existence results for the equation: (H1) For each t ∈ J , the function f (t, ·, ·) : Rn × Rn → Rn is continuous and the function f (·, x, y) : J → Rn is strongly measurable for each x, y ∈ Rn . (H2) For every positive constant k, there exists h k ∈ L 1 (J ) such that sup

x , y ≤k

f (t, x, y) ≤ h k (t), for every t ∈ J.

(H3) There exists a continuous function m 1 : J → [0, ∞) such that

f (t, x, y) ≤ m 1 (t) ( x + y ) , t ∈ J, x, y ∈ Rn , where  : (0, ∞) → (0, ∞) is a continuous nondecreasing function. (H4) There exists a constant M > 0 and a continuous function m 2 : J → [0, ∞) such that  t n5 n 6 t −β + (t − s)−β m 1 (s)(w(s))ds ≤ Mm 2 (t)(w(t)), (1 − β) (1 − β) 0 and  0

T

 m(s)ds < c



ds . (s)


where m(t)=max{n 3 m 1 (t), Mm 2 (t)}.

Theorem 3.5.1 Assume that the hypotheses (H 1)–(H 4) hold. Then there exists a solution to the nonlinear equation (3.17) on J .   Proof Consider the Banach space X = x : x ∈ C(J, Rn ) and CD β x ∈ C(J, Rn ) with norm x ∗ = max{ x , CD β x }. We now show that the nonlinear operator F : X → X defined by 

t

(F x)(t) = 0 (t) x0 + 1 (t) y0 +

(t − s) f (s, x(s),CD β x(s))ds

0

has a fixed point. This fixed point is then a solution to (3.17). Now 

t

(F x)(t) = 0 (t) x0 + 1 (t) y0 +

(t − s) f (s, x(s),CD β x(s))ds.

0

The first step is to obtain a priori bound of the set ζ(F) = {x ∈ X : x = λF x for some λ ∈ (0, 1)}. Let x ∈ ζ(F). Then x = λF x for some 0 < λ < 1. Thus, for each t ∈ J , we have 

t

x(t) = λ0 (t) x0 + λ1 (t) y0 + λ

(t − s) f (s, x(s),CD β x(s))ds.

0

Then 

t

x(t) ≤ n 1 x0 + n 2 y0 + n 3

m 1 (s)( x(s) + CD β x(s) )ds

0



t

≡ c + n3

m 1 (s)( x(s) + CD β x(s) )ds.

0

Denoting the right-hand side of the above inequality by r1 (t), we have r1 (0) = c,

x(t) ≤ r1 (t) and r1 (t) = n 3 m 1 (t)( x(t) + CD β x(t) ).


Also, x  (t) = −λ A2 (t) x0 + λ0 (t) y0 + λ



t

2 (t − s) f (s, x(s),CD β x(s))ds.

0

and 



x (t)

≤ ≡

t

n 4 x0 + n 1 y0 + n 5 m 1 (s)( x(s) + CD β x(s) )ds 0  t m 1 (s)( x(s) + CD β x(s) )ds. n6 + n5 0

Hence, it follows that

CD β x(t)

 t 1 (t − s)−β x  (s) ds (1 − β) 0  t n6 ≤ (t − s)−β ds (1 − β) 0  s  t n5 + (t − s)−β m 1 (τ )( x(τ ) + CD β x(τ ) )dτ ds (1 − β) 0 0  t n6 −β (t − s) ds ≤ (1 − β) 0  t t n5 + (t − s)−β dsm 1 (τ )( x(τ ) + CD β x(τ ) )dτ (1 − β) 0 τ  t n 6 t 1−β n5 ≤ + (t − τ )1−β m 1 (τ )( x(τ ) + CD β x(τ ) )dτ . (2 − β) (2 − β) 0



Denoting the right-hand side of the above inequality by r2 (t), we have r2 (0) = 0 and

CD β x(t)



r2 (t)

and r2 (t)

=

n5 n 6 t −β + (1 − β) (1 − β)



t

(t − τ )−β m 1 (τ )( x(τ ) + CD β x(τ ) )dτ .

0

Let w(t) = r1 (t) + r2 (t), t ∈ J . Then w(0) = r1 (0) + r2 (0) = c and w (t) = r1 (t) + r2 (t) ≤ m(t)(w(t)) which implies that for each t ∈ J , 

w(t)

w(0)

ds ≤ (s)



T c

 m(s)ds < c



ds . (s)


From the above inequality, we see that there exists a constant K such that w(t) = r1 (t) + r2 (t) ≤ K , t ∈ J. Then x(t) ≤ r1 (t) and CD β x(t) ≤ r2 (t), t ∈ J, and hence

x ∗ = max{ x , CD β x } ≤ K and the set ζ(F) is bounded. Next, we prove that the operator F : X → X is completely continuous. Let Bq = {x ∈ X : x ∗ ≤ q}. We first show that F maps bounded sets into equicontinuous family in Bq . Let x ∈ Bq and t1 , t2 ∈ J . Then if 0 < t1 < t2 ≤ T ,

(F x)(t2 ) − (F x)(t1 )

≤ 0 (t2 ) − 0 (t1 )

x0 + 1 (t2 ) − 1 (t1 )

y0

  t1    + [(t2 − s) − (t1 − s)] f (s, x(s),C D β x(s))ds  0    t2   (t2 − s) f (s, x(s),C D β x(s))ds  + t1

≤ 0 (t2 ) − 0 (t1 )

x0 + 1 (t2 ) − 1 (t1 )

y0

 t1

(t2 − s) − (t1 − s) h q (s)ds + 0  t2 +

(t2 − s) h q (s)ds;

(3.18)

t1

and 

(F x) (t)

 ≤ ≤ ≤

A (t)

x0 + 0 (t)

y0 +  t h q (s)ds n 4 x0 + n 1 y0 + n 5 0  t h q (s)ds. n6 + n5 2

t

2 (t − s) h q (s)ds.

0

0

Hence, it follows that

CD β (F x)(t2 ) − CD β (F x)(t1 )

 t2  t1   1 1   =  (t2 − s)−β (F x) (s)ds − (t1 − s)−β (F x) (s)ds  (1 − β) 0 (1 − β) 0   t2  1   ≤ (t2 − s)−β (F x) (s)ds   (1 − β) t1    t1   1   + (t2 − s)−β − (t1 − s)−β (F x) (s)ds   (1 − β) 0


 t2 1 (t2 − s)−β (F x) (s) ds (1 − β) t1  t1   1 (t2 − s)−β − (t1 − s)−β (F x) (s) ds + (1 − β) 0  s  t2  t2 n5 n6 (t2 − s)−β ds + (t2 − s)−β h q (τ )dτ ds ≤ (1 − β) t1 (1 − β) t1 0  t1   n6 + (t2 − s)−β − (t1 − s)−β ds (1 − β) 0  t1    s n5 (t2 − s)−β − (t1 − s)−β h q (τ )dτ ds + (1 − β) 0 0  s  t2 n6 1 1−β 1−β (t2 ≤ − t1 ) + (t2 − s)−β h q (τ )dτ ds (2 − β) (1 − β) t1 0  t1   1 1−β 1−β + (t2 − τ ) − (t2 − t1 ) − (t1 − τ )1−β h q (τ )dτ . (3.19) (2 − β) 0 ≤

The right-hand sides of (3.18) and (3.19) tend to zero as t2 → t1 . Thus, F maps Bq into an equicontinuous family of functions. It is easy to see that the family F Bq is uniformly bounded. Next, we show that F is a compact operator. It suffices to show that the closure of F Bq is compact. Let 0 ≤ t ≤ T be fixed and be a real number satisfying 0 < < t. For x ∈ Bq , we define  t−

(t − s) f (s, x(s),CD β x(s))ds. (F x)(t) = 0 (t)x0 + 1 (t)y0 + 0

Note that using the same methods as in the procedure above, we obtain the boundedness and equicontinuous property of F which implies that the set S (t) = {(F x)(t) : x ∈ Bq } is relatively compact in X for every 0 < < t. Moreover, for every x ∈ Bq ,

(F x)(t) − (F x)(t)

 t    ≤  (t − s) f (s, x(s),CD β x(s))ds  t−

 t ≤

(t − s) h q (s)ds. t−

Also

(F x) (t) − (F x) (t)

 t    ≤  2 (t − s) f (s, x(s),CD β x(s))ds  t−

 t ≤

2 (t − s) h q (s)ds. t−

Since (F x)(t) − (F x)(t) → 0 and (F x) (t) − (F x) (t) → 0 as → 0, this implies that


CD β (F x)(t) − CD β (F x)(t)

 t 1 (t − s)−β (F x) (t) − (F x) (t)) ds → 0 as → 0. (1 − β) 0



So relatively compact sets S (t) = {(F x)(t) : x ∈ Bq } are arbitrarily close to the set {(F x)(t) : x ∈ Bq }. Hence, {(F x)(t) : x ∈ Bq } is compact in X by the Arzela– Ascoli theorem. Next, it remains to show that F is continuous. Let {xn } be a sequence in X such that xn − x → 0 as n → ∞. Then there is an integer k such that xn ≤ k,

CD β xn ≤ k for all n and t ∈ J . So, x(t) ≤ k, CD β x(t) ≤ k and x, CD β x ∈ X . By (H 1), f (t, xn (t),CD β xn (t)) → f (t, x(t),CD β x(t)), for each t ∈ J . Since

f (t, xn (t),CD β xn (t)) − f (t, x(t),CD β x(t)) ≤ 2h k (t), we have, by the dominated convergence theorem,

(F xn )(t) − (F x)(t)

 t    (t − s) f (s, xn (s),C D β xn (s)) − f (s, x(s),C D β x(s)) ds = sup  t∈J



 T 0

0

 

(t − s) f (s, xn (s),C D β xn (s)) − f (s, x(s),C D β x(s)) ds.

Also

(F xn ) (t) − (F x) (t)

 t    2 (t − s) f (s, xn (s),C D β xn (s)) − f (s, x(s),C D β x(s)) ds = sup  t∈J



 T 0

0

 

2 (t − s) f (s, xn (s),C D β xn (s)) − f (s, x(s),C D β x(s)) ds.

This implies that

CD β (F xn )(t) − CD β (F x)(t)

 t 1 (t − s)−β (F xn ) (t) − (F x) (t) ds → 0 as n → ∞. ≤ (1 − β) 0


Thus, F is continuous. Finally, the set ζ(F) = {x ∈ X : x = λF x, λ ∈ (0, 1)} is bounded as shown in the first step. By Schaefer’s theorem, the operator F has a fixed point in X . This fixed point is then the solution of (3.17). 

3.6 Examples Example 3.6.1 Consider the fractional differential equation in R and take f = 0, x(0) = 1, A = 1 in (3.3), we get C

D α x(t) = x(t), t ∈ [0, T ], x(0) = 1,

(3.20)

where 0 < α < 1. The corresponding integral solution of (3.20) is 1 x(t) = 1 + (α)



t

0

x(s) ds. (t − s)1−α

(3.21)

The exact solution of (3.20) is given by x(t) = E α (t α ) =

∞  k=0

(t α )k (αk + 1)

(3.22)

where E α is the Mittag–Leffler function. On expanding, we have x(t) =

(t α )1 (t α )2 (t α )0 + + + ··· (α.0 + 1) (α + 1) (2α + 1) tα t 2α =1+ + + ··· . (α + 1) (2α + 1)

We want to show that both the solutions are same. For that using (3.22) in (3.21), we get 1+

tα t 2α + + ··· (α + 1) (2α + 1)

 (t − s)α−1 1 +

sα s 2α + + · · · ds (α + 1) (2α + 1) 0   t α t 1 −(t − s) 1 = 1+ + (t − s)α−1 s α ds + · · · (α) α (α)(α + 1) 0 0  t 1 tα + = 1+ (t − s)α−1 s α ds + · · · . (α + 1) (α)(α + 1) 0 = 1+

1 (α)



t


Next, we prove that t 2α 1 = (2α + 1) (α)(α + 1)



t

(t − s)α−1 s α ds.

0

The value of the integral 

t



(t − s)α−1 s α ds = t α−1

0

=t

α−1

= t α−1  2α =t

t



0



0

s (1 − )α−1 s α ds, t

1

0 1

1

s (1 − x)α−1 (t x)α tdx, (taking x = ), t (1 − x)α−1 t α+1 x α dx,

x α+1−1 (1 − x)α−1 dx,

0

= t 2α B(α + 1, α), (α + 1)(α) . = t 2α (2α + 1) From the above, we get 1 (α + 1)(α)



t

(t − s)α−1 s α ds =

0

t 2α . (2α + 1)

Similarly, we evaluate the other terms on the right-hand side of the series and it is equal to 1+

t 2α tα + + ··· . (α + 1) (2α + 1)

Hence, the solution representations (3.21) and (3.22) are the same. Example 3.6.2 Consider the fractional differential equation with f (t) = t, x(0) = 1, A = 1, then C

D α x(t) = x(t) + t, t ∈ [0, T ] x(0) = 1,

(3.23)

where 0 < α < 1. The corresponding integral solution of (3.23) is 1 x(t) = 1 + (α)

 0

t

x(s) 1 ds + 1−α (t − s) (α)

 0

t

s ds. (t − s)1−α

(3.24)


The exact solution of (3.23) is 

α

x(t) = E α,1 (t ) +

t

(t − s)α−1 [E α,α (t − s)α ]sds.

(3.25)

0

On expanding t 2α tα + + ··· x(t) = 1 + (α + 1) (2α + 1)   t 1 (t − s)α (t − s)2α + (t − s)α−1 + + + · · · sds. (α) (2α) (3α) 0 Using similar procedure as in the above example with judicious integral evaluation, we can show that both the solutions (3.24) and (3.25) for (3.23) are the same. Example 3.6.3 For the Riemann–Liouville fractional differential equation 4

D 3 x(t) = 0, the solution is

x(t) = c1 t 3 + c2 t − 3 , t > 0, 1

2

where c_1 and c_2 are arbitrary constants.

Example 3.6.4 Consider the equation

  t\, D^{1/2} x(t) = x(t).

Taking the Laplace transform, we get

  \frac{d}{ds}\big[ s^{1/2} X(s) - D^{-1/2} x(t)\big|_{t=0} \big] + X(s) = 0.

Differentiation gives

  \frac{dX(s)}{ds} + \Big( \frac{1}{2} s^{-1} + s^{-1/2} \Big) X(s) = 0,

which is a first-order differential equation in X(s), and its solution is

  X(s) = k\, s^{-1/2} e^{-2\sqrt{s}},

where k is a constant. Taking the inverse Laplace transform, we have

  x(t) = \frac{k}{\sqrt{t}}\, e^{-1/t},   t > 0.
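The inverse transform used here can be checked numerically, assuming mpmath's invertlaplace routine is available: L^{-1}[s^{-1/2} e^{-2\sqrt{s}}](t) = e^{-1/t}/\sqrt{\pi t}, which is the closed form behind x(t) = (k/\sqrt{t}) e^{-1/t} up to the constant. The evaluation points are arbitrary.

```python
# Numerical inversion check (assuming mpmath.invertlaplace is available).
from mpmath import mp, invertlaplace, sqrt, exp, pi

mp.dps = 25
X = lambda s: exp(-2 * sqrt(s)) / sqrt(s)

for t in (0.5, 1.0, 2.0):
    numeric = invertlaplace(X, t, method='talbot')
    closed = exp(-1 / t) / sqrt(pi * t)
    print(t, numeric, closed)   # the two columns agree to many digits
```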


Example 3.6.5 Basset Equation Consider the Basset equation x(t) ˙ + a C D α x(t) + x(t) = f (t), t ∈ J = [0, T ], where f (t) is a continuous function on J . Let α = and a = 2, the solution is x(t) =

1 2

and a ∈ R. Take 0 < a < 2

 1 1 1 [λ1 E 21 (λ2 t 2 ) − λ2 E 21 (λ1 t 2 )]x0 λ1 − λ2   t 1 1 − 21 2 2 1 1 1 1 + (t − s) [E 2 , 2 (λ1 (t − s) ) − E 2 , 2 (λ2 (t − s) )] f (s)ds , 0

√ a2 − 4 , 2 √ −a − a 2 − 4 . λ2 = 2

where

−a +

λ1 =

Now let us consider a = 2 and λ1 = λ2 = −

a = λ = −1 2

The polynomial 1

1

s + as 2 + 1 = (s 2 − λ)2 1

= (s 2 + 1)2 . Now we consider    t α 1 (∓λt ) (s), ∓ 2λt E = L 2 λ ∈ C. 1 1 2 π s 2 (s 2 ± λ)2 1

Case (i): Now    1 t − 2λt E 1 (−λt 2 ) (s) =L 2 1 1 π 2 s 2 (s 2 ± λ)2 √     ∞ t 2 t (s) = L 2 e−st √ dt π π 0  ∞ √ 2 −st = √ e tdt π 0    t 2 3 (s) = √ 3 ( ) L 2 π 2 πs 2    t 1 L 2 (s) = 3 π s2 1

(3.26)

80

3 Fractional Differential Equations    ∞ 1 1 L − 2λt E 1 (−λt 2 ) (s) = e−st − 2λt E 1 (−λt 2 )dt 2

0

= −2λ = −2λ   1 L − 2λt E 1 (−λt 2 ) (s) = 2

 ∞

2

1

e−st t E 1 (−λt 2 )dt 2

0

 ∞ 0

1 1 1 s 2 (s 2

∞  (−λ)k t 2 k

e−st t

+ λ)2

dt

k k=0  2 + 1



1 3

.

(3.27)

s2

From (3.26) and (3.27), we have    t 1 1 (−λt 2 ) (s). − 2λt E = L 2 1 1 2 π s 2 (s 2 ± λ)2 1

Case (ii): Now    t 1 1 (λt 2 ) (s), + 2λt E = L 2 1 1 2 π s 2 (s 2 − λ)2 1

we conclude that    t 1 1 (∓λt 2 ) (s). ∓ 2λt E = L 2 1 1 2 π s 2 (s 2 ± λ)2 1

Next    t 1 2 1 (∓λt 2 ) (s). + (1 + 2λ = L ∓ 2λ t)E 1 2 π (s 2 ± λ)2 1

Case (i): Now    t 1 2 1 (−λt 2 ) (s) + (1 + 2λ = L − 2λ t)E 1 2 π (s 2 + λ)2 1

   t (s) = L − 2λ π   1 L (1 + 2λ2 t)E 21 (−λt 2 ) (s) =

  1 2 2 1 L (1 + 2λ t)E 2 (−λt ) (s) =

−λ 3

s2   1 2 1 L E 2 (−λt ) (s)  1 2 2 1 +2λ E 2 (−λt ) (s) 1 1 2

1 2

s (s +

λ)2

+

λ 3

s2

.

(3.28)

(3.29)


From (3.28) and (3.29), we get    t 1 2 1 (−λt 2 ) (s) = L − 2 t)E + (1 + 2λ 1 2 π (s 2 + λ)2 1

Case (ii):    t 1 2 2 + (1 + 2λ t)E 21 (λt ) (s) =L 2 1 π (s 2 − λ)2 1

we conclude that    t 1 2 2 + (1 + 2λ t)E 21 (∓λt ) (s) =L ∓2 1 π (s 2 ± λ)2 1

Consider the equation 1

x(t) ˙ + 2C D 2 x(t) + x(t) = f (t), x(0) = x0 .

(3.30)

Applying Laplace transform on both sides to get s X (s) − x(0) + 2s 2 X (s) − 2s − 2 x(0) + X (s) = F(s). 1

1

On grouping, we get X (s)(s + 2s 2 + 1) = x(0) + 2s − 2 x(0) + F(s) 1

1

1 + 2s − 2

1

X (s) =

1 2

x0 +

F(s) 1

s + 2s + 1) s + 2s 2 + 1)    1 t 1 1 (−t 2 ) (s) + (1 + 2t)E = L − 2 1 2 π (s 2 + 1)2    1 t 1 2 − 2t E 21 (−t ) (s) = L −2 1 π (s 2 + 1)2    F(s) t 1 2 1 + (1 + 2t)E 2 (−t ) (s). = L f (t) ∗ −2 1 π (s 2 + 1)2 Taking inverse Laplace transform on both sides, we get


√  t √ √ 2 t 2 x(t) = √ x0 + (1 − 2t)E ‘12 (− t)x0 − √ t − s f (s)ds π π 0  t  + (1 + 2(t − s))E ‘12 (− (t − s)) f (s)ds. 0

Example 3.6.6 Consider the fractional differential equation C 3/2

D

C 3/2

D

x1 (t) − 2x1 (t) + 3x2 (t) =

x12

x1 + sin t

x1 , x2 (t) − 4x1 (t) + 5x2 (t) = 2 x1 + t

⎫ ,⎪ ⎪ ⎪ ⎬ ⎪ ⎪ ⎪ ⎭

(3.31)



        x1 (0) 1 0 x1 (0) with initial conditions = = and for t ∈ [0, 2]. x2 (0) x2 (0) 1 0 It has the following form C 3/2

D

⎫ x(t) + A2 x(t) = f (t, x), t ∈ [0, 2], ⎬ x(0) = x0 , x  (0) = y0 ,



(3.32)

⎤ x1     ⎢ x12 + sin t ⎥ x1 (t) −2 3 ⎥ ⎢ 2 . where A = , f (t, x) = ⎢ ⎥, x(t) = x2 (t) −4 5 ⎦ ⎣ x2 x22 + t Using Mittag–Leffler matrix function for a given matrix A2 , we get ⎡

 (2 − s) =

 L 1 (s) L 2 (s) , L 3 (s) L 4 (s)

where ! " L 1 (s) = (2 − s)1/2 4E 3/2,3/2 (−(2 − s)3/2 ) − 3E 3/2,3/2 (−2(2 − s)3/2 ) , ! " L 2 (s) = (2 − s)1/2 3E 3/2,3/2 (−2(2 − s)3/2 ) − 3E 3/2,3/2 (−(2 − s)3/2 ) , ! " L 3 (s) = (2 − s)1/2 4E 3/2,3/2 (−(2 − s)3/2 ) − 4E 3/2,3/2 (−2(2 − s)3/2 ) , " ! L 4 (s) = (2 − s)1/2 4E 3/2,3/2 (−2(2 − s)3/2 ) − 3E 3/2,3/2 (−(2 − s)3/2 ) .


Further, the nonlinear function f is bounded, continuous, and satisfies conditions of Theorem 3.5.1. Hence, there exists a solution to the nonlinear equation (3.31). Example 3.6.7 Consider the fractional damped dynamical system C 7/4

D

$ ⎫ # exp(−2t) |x1 | + |CD 1/2 x1 (t)| ⎪ x1 (t) − x1 (t) = ,⎪ ⎪ ⎪ 1 + |x2 (t)| ⎬

$ ⎪ # ⎪ exp(−2t) |x2 | + |CD 1/2 x2 (t)| ⎪ ⎪ C 7/4 ,⎭ D x2 (t) − x2 (t) = 1 + |x1 (t)|

(3.33)



        x1 (0) 1 x1 (0) 0 with initial conditions = and = for t ∈ [0, 3]. 0 1 x2 (0) x2 (0) It has the following form C 7/4

D

1/2

x(t) + A2 x(t) = f (t, x(t),CD0+ x(t)), t ∈ [0, 3] x(0) = x0 , x  (0) = y0 ,

 (3.34)



   −1 0 x1 (t) where A = , and , x(t) = x2 (t) 0 −1 # $⎤ ⎡ exp(−2t) |x1 | + |CD 1/2 x1 (t)| ⎢ ⎥ 1 + |x2 (t)| ⎢ ⎥ 1/2 f (t, x(t),CD0+ x(t)) = ⎢ ⎥. # $ C 1/2 ⎣ exp(−2t) |x2 | + | D x2 (t)| ⎦ 2

1 + |x1 (t)| Using Mittag–Leffler matrix function for a given matrix A2 , we get  (3 − t) =

 N (t) 0 , 0 N (t)

where N (t) = (3 − t)3/4 E 7/4,7/4 ((3 − t)7/4 ). Further the nonlinear function f is continuous and satisfies the hypotheses of Theorem 3.5.1. Hence, the Eq. (3.33) has a solution on [0, 3]. Example 3.6.8 Consider the following fractional differential equation D α x(t) = f (t), t > 0

(3.35)

where 0 < α < 1. Take the initial condition x(0) = 0 and assume that the function f (t) can be expressed as Taylor series ∞  f (n) (0) n t . f (t) = n! n=0


We know that Dαt β =

(1 + β) β−α t . (1 + β − α)

We can look for the solution of the Eq. (3.35) in the form of power series as x(t) = t α

∞ 

xn t n =

n=0

∞ 

xn t n+α

(3.36)

n=0

Substituting the expression (3.36) into the Eq. (3.35), we have ∞  n=0



xn

 f (n) (0) (1 + n + α) n t = f (t) = tn (n + 1) n! n=0

and comparison of the coefficients of both sides gives

  x_n = \frac{f^{(n)}(0)}{\Gamma(1+n+\alpha)},   n = 0, 1, 2, \ldots.

Therefore, under the above assumptions, the solution of Eq. (3.35) is

  x(t) = t^{\alpha} \sum_{n=0}^{\infty} \frac{f^{(n)}(0)}{\Gamma(1+n+\alpha)}\, t^{n}.   (3.37)

In the case of the Eq. (3.35), we can easily transform the expression (3.37) x(t) = =

∞  f (n) (0) (n + 1) n+α t n! (1 + n + α) n=0 ∞  f (n) (0) α n I t n! n=0

= Iα

∞  f (n) (0) n  t n! n=0

= I α f (t).

(3.38)

Applying I α on both sides of (3.35) and the composition law of Riemann–Liouville derivative we would get the expression (3.38). However, the use of the inverse operator is often impossible.
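The series solution (3.37) can be checked numerically against x = I^{\alpha} f computed by quadrature. The small sketch below (my own illustration) uses f(t) = e^{t}, for which f^{(n)}(0) = 1 for all n, and \alpha = 0.5; by (2.25) the exact value is also t^{\alpha} E_{1,1+\alpha}(t).

```python
# Check of the power-series solution (3.37) against I^α f evaluated by quadrature.
from math import gamma, exp
from scipy.integrate import quad

alpha, t = 0.5, 1.3

series = t**alpha * sum(t**n / gamma(1 + n + alpha) for n in range(40))   # formula (3.37)
integral, _ = quad(lambda s: (t - s)**(alpha - 1) * exp(s), 0, t)
integral /= gamma(alpha)                                                  # I^α e^t

print(series, integral)   # both give the same value
```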


3.7 Exercises

3.1. Solve \sqrt{t}\, D^{1/2} x(t) + x(t) = 0.
3.2. Solve D x(t) + D^{1/2} x(t) - 2 x(t) = 0.
3.3. Show that D I^{\alpha} f(t) = I^{\alpha} D f(t) + \frac{f(0)}{\Gamma(\alpha)}\, t^{\alpha-1}.
3.4. Solve {}^C D^{\alpha} x(t) + \omega^{\alpha} x(t) = 0, 1 < \alpha \le 2, with x(0) = x_0 and \dot{x}(0) = 0.
3.5. Solve {}^C D^{\alpha} x(t) + \kappa\, {}^C D^{\beta} x(t) + x(t) = 0, 1 < \alpha \le 2, 0 < \beta \le 1, \kappa > 0, with x(0) = 1 and \dot{x}(0) = 0.
3.6. Solve {}^C D^{\alpha} \theta(t) + \kappa \theta(t) = \kappa a, 0 < \alpha \le 1, with \theta(0) = \theta_0, where a and \kappa are constants.
3.7. Solve (1+t)\, {}^C D^{\alpha} y(t) = 1 with y(0) = 0 and 0 < \alpha \le 1.
3.8. Solve {}^C D^{\alpha} y(t) = 1 - e^{-t}, 0 < \alpha \le 1, with y(0) = 0.
3.9. Solve {}^C D^{\alpha} y(t) + {}^C D^{\beta} y(t) = 1, 1 < \alpha \le 2, 0 < \beta \le 1, with y(0) = y'(0) = 0.
3.10. Solve D^{3/2} y(t) + 2\, {}^C D^{1/2} y(t) + 2 y(t) = \sin t with y(0) = y'(0) = 0.


Chapter 4

Applications

Abstract In this chapter, we discuss some applications of fractional differential equations in control theory. The basic problems in control theory, such as observability, controllability, and stability, are considered for fractional dynamical systems. Observability and controllability of linear systems are studied via the Grammian matrix. Sufficient conditions for the controllability of nonlinear fractional dynamical systems are established by means of a fixed point theorem. Stability of linear and nonlinear systems is discussed. Examples are provided to illustrate the theory and a few exercises are given.

Keywords Fractional dynamical systems · Observability · Controllability · Stability · Nonlinear systems

In this chapter, we discuss some applications of fractional differential equations in control theory. Basic facts about control theory can be found in the books [1, 2]. The basic problems in control theory, such as observability, controllability, and stability, are studied for fractional dynamical systems in the references [3–13].

4.1 Observability

Observability is one of the fundamental concepts in control theory. The theory of observability is based on the possibility of deducing the initial state of the system from observations of its input–output behavior. This means that, from the system's output, it is possible to determine the behavior of the entire system. Consider the fractional-order linear time-invariant system

^C D^α x(t) = A x(t),  0 < α < 1,  t ∈ J = [0, T],        (4.1)

with linear observation

y(t) = H x(t),        (4.2)

where x ∈ R^n, y ∈ R^m, A is an n × n matrix and H is an m × n matrix.


Definition 4.1.1 The system (4.1)–(4.2) is observable on an interval J if

y(t) = H x(t) = 0,  t ∈ J,  implies  x(t) = 0,  t ∈ J.

Theorem 4.1.2 The linear system (4.1)–(4.2) is observable on J if and only if the observability Grammian matrix

M = ∫_0^T E_α(A^* t^α) H^* H E_α(A t^α) dt

is positive definite. Here, ^* denotes the matrix transpose.

Proof The solution x(t) of (4.1) corresponding to the initial condition x(0) = x_0 is given by x(t) = E_α(At^α) x_0, and we have, for y(t) = H x(t) = H E_α(At^α) x_0,

‖y‖^2 = ∫_0^T y^*(t) y(t) dt = x_0^* \Big( ∫_0^T E_α(A^* t^α) H^* H E_α(At^α) dt \Big) x_0 = x_0^* M x_0,

which is a quadratic form in x_0. Clearly, M is an n × n symmetric matrix. If M is positive definite, then y = 0 implies x_0^* M x_0 = 0 and therefore x_0 = 0. Hence, (4.1)–(4.2) is observable on J. If M is not positive definite, then there is some x_0 ≠ 0 such that x_0^* M x_0 = 0. Then x(t) = E_α(At^α) x_0 ≠ 0 for t ∈ J, but ‖y‖^2 = 0, so y = 0, and we conclude that (4.1)–(4.2) is not observable on J. Hence the proof. □

Alternatively, the following result is proved in [14].

Lemma 4.1.3 The fractional system (4.1)–(4.2) is observable on an arbitrary interval [0, T] if and only if

rank \begin{pmatrix} H \\ HA \\ \vdots \\ HA^{n−1} \end{pmatrix} = n.
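For a concrete system, the rank condition of Lemma 4.1.3 is easy to check by machine. The short Python sketch below (an illustration, not part of the text) builds the stacked matrix [H; HA; …; HA^{n−1}] and tests whether its rank equals n, using the data of Example 4.6.1 in Sect. 4.6 so the outcome can be compared with the discussion given there.

```python
import numpy as np

def observability_matrix(A, H):
    """Stack H, HA, ..., HA^(n-1), as in Lemma 4.1.3."""
    n = A.shape[0]
    return np.vstack([H @ np.linalg.matrix_power(A, k) for k in range(n)])

# the data of Example 4.6.1 in Sect. 4.6: A = [[0, 1], [-1, 0]], H = [1, 0]
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
H = np.array([[1.0, 0.0]])
print(np.linalg.matrix_rank(observability_matrix(A, H)) == A.shape[0])  # True -> observable
```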


If the linear system (4.1)–(4.2) is observable on an interval J, then x(0) = x_0, the initial state for the solution on that interval, is reconstructed directly from the observation y(t) = H E_α(At^α) x_0.

Definition 4.1.4 The n × n matrix function R(t) defined on J is a reconstruction kernel if and only if

∫_0^T R(t) H E_α(At^α) dt = I.

Theorem 4.1.5 There exists a reconstruction kernel R(t) on J if and only if the system (4.1)–(4.2) is observable on J.

Proof If a reconstruction kernel exists, it satisfies

∫_0^T R(t) y(t) dt = \Big( ∫_0^T R(t) H E_α(At^α) dt \Big) x_0 = x_0,

so y(t) = 0 implies x_0 = 0. Hence x(t) = 0 and we conclude that the system (4.1)–(4.2) is observable on J. If, on the other hand, the system (4.1)–(4.2) is observable on J, then from Theorem 4.1.2

M = ∫_0^T E_α(A^* t^α) H^* H E_α(At^α) dt > 0.

Let R_0(t) = M^{−1} E_α(A^* t^α) H^*, t ∈ J. Then we have

∫_0^T R_0(t) H E_α(At^α) dt = M^{−1} ∫_0^T E_α(A^* t^α) H^* H E_α(At^α) dt = I,

so that R_0(t) is a reconstruction kernel on J. □
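The Grammian M of Theorem 4.1.2, and hence the kernel R_0(t) above, can also be approximated numerically once the Mittag–Leffler matrix function is available. The sketch below (illustrative only; the truncation order and quadrature step are arbitrary choices) evaluates E_α(At^α) by truncating its defining series and approximates M by a midpoint rule, again for the data of Example 4.6.1 in Sect. 4.6.

```python
import numpy as np
from math import gamma

def ml_matrix(A, alpha, terms=60):
    """E_alpha(A) by truncating the series sum_k A^k / Gamma(alpha*k + 1)."""
    result = np.zeros_like(A)
    term = np.eye(A.shape[0])
    for k in range(terms):
        result = result + term / gamma(alpha * k + 1)
        term = term @ A
    return result

def observability_grammian(A, H, alpha, T, steps=400):
    """Midpoint-rule approximation of M in Theorem 4.1.2."""
    h = T / steps
    M = np.zeros((A.shape[0], A.shape[0]))
    for i in range(steps):
        t = (i + 0.5) * h
        E = ml_matrix(A * t**alpha, alpha)
        M += E.T @ H.T @ H @ E * h
    return M

# the data of Example 4.6.1 in Sect. 4.6
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
H = np.array([[1.0, 0.0]])
M = observability_grammian(A, H, alpha=0.5, T=1.0)
print(np.all(np.linalg.eigvalsh(M) > 0))    # True -> M is positive definite
```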

4.2 Controllability of Linear Systems

The problem of controllability of dynamical systems is widely used in the analysis and design of control systems. A control system is said to be controllable if every state of the process can be affected or controlled at the respective time by some control signal. The control problems arising in the description of fractional dynamical systems are considerably more involved.


Consider the linear dynamical system represented by the fractional differential equation of the form

^C D^α x(t) = A x(t) + B u(t),  t ∈ J = [0, T],  x(0) = x_0,        (4.3)

with 0 < α < 1, x ∈ R^n, u ∈ R^m, A an n × n matrix and B an n × m matrix. The solution of the system (4.3) is

x(t) = E_α(At^α) x_0 + ∫_0^t (t − s)^{α−1} E_{α,α}(A(t − s)^α) B u(s) ds.        (4.4)

Definition 4.2.1 System (4.3) is said to be controllable on J if, for every x_0, x_1 ∈ R^n, there exists a control u(t) such that the solution x(t) of (4.3) satisfies the conditions x(0) = x_0 and x(T) = x_1.

Theorem 4.2.2 The linear control system (4.3) is controllable on [0, T] if and only if the controllability Grammian matrix

W = ∫_0^T [E_{α,α}(A(T − s)^α) B][E_{α,α}(A(T − s)^α) B]^* ds

is positive definite for some T > 0.

Proof Since W is positive definite, it is non-singular and therefore its inverse is well defined. Define the control function as

u(t) = (T − t)^{1−α} B^* E_{α,α}(A^*(T − t)^α) W^{−1} [x_1 − E_α(AT^α) x_0].        (4.5)

Substituting t = T in (4.4) and inserting (4.5), we have

x(T) = E_α(AT^α) x_0 + ∫_0^T (T − s)^{α−1} E_{α,α}(A(T − s)^α) B (T − s)^{1−α} B^* E_{α,α}(A^*(T − s)^α) W^{−1} [x_1 − E_α(AT^α) x_0] ds
     = E_α(AT^α) x_0 + W W^{−1} [x_1 − E_α(AT^α) x_0] = x_1.

Thus, (4.3) is controllable. On the other hand, if W is not positive definite, there exists a nonzero y such that y^* W y = 0, that is,

y^* ∫_0^T E_{α,α}(A(T − s)^α) B B^* E_{α,α}(A^*(T − s)^α) y ds = 0,

and therefore

y^* E_{α,α}(A(T − s)^α) B = 0  on [0, T].

Let x_0 = [E_α(AT^α)]^{−1} y. By the assumption, there exists an input u that steers x_0 to the origin on the interval [0, T]. It follows that

x(T) = 0 = E_α(AT^α) x_0 + ∫_0^T (T − s)^{α−1} E_{α,α}(A(T − s)^α) B u(s) ds,

that is,

0 = y + ∫_0^T (T − s)^{α−1} E_{α,α}(A(T − s)^α) B u(s) ds.

Then

0 = y^* y + ∫_0^T (T − s)^{α−1} y^* E_{α,α}(A(T − s)^α) B u(s) ds.

But the second term is zero, leading to the conclusion y^* y = 0, which contradicts y ≠ 0. Thus, W is positive definite. Hence the proof. □

Lemma 4.2.3 [14] The fractional control system (4.3) is controllable if and only if

rank [B, AB, …, A^{n−1} B] = n.

Consider the linear fractional dynamical system represented by the fractional differential equation

^C D^α x(t) + A^2 x(t) = B u(t),  t ∈ J,  x(0) = x_0,  x'(0) = y_0,        (4.6)

where 1 < α ≤ 2, x(t) ∈ R^n, u(t) ∈ L^2(J; R^m), and A, B are matrices of dimensions n × n and n × m, respectively. The solution of the system (4.6) is

x(t) = Φ_0(t) x_0 + Φ_1(t) y_0 + ∫_0^t Φ(t − s) B u(s) ds,        (4.7)

where Φ, Φ_0, Φ_1 are already defined in Sect. 3.3 in terms of Mittag–Leffler functions.

Definition 4.2.4 The system (4.6) is said to be controllable on J if, for each set of vectors x_0, y_0, x_1 ∈ R^n, there exists a control u(t) ∈ L^2(J, R^m) such that the corresponding solution of (4.6) with x(0) = x_0 satisfies x(T) = x_1. We note that our controllability definition is concerned only with steering the state vector and not the velocity vector y_0 in (4.6).


Lemma 4.2.5 [2] Let f_i, for i = 1, 2, …, n, be 1 × p vector-valued continuous functions defined on [t_1, t_2]. Let F be the n × p matrix with f_i as its ith row. Then f_1, f_2, …, f_n are linearly independent on [t_1, t_2] if and only if the n × n constant matrix

W(t_1, t_2) = ∫_{t_1}^{t_2} F(t) F^*(t) dt

is positive definite.

Theorem 4.2.6 The following statements regarding the linear system (4.6) are equivalent:
(a) The linear system (4.6) is controllable on J.
(b) The rows of Φ(t)B are linearly independent.
(c) The controllability Grammian

W = ∫_0^T Φ(T − s) B B^* Φ^*(T − s) ds        (4.8)

is positive definite.

Proof First, we prove that (a) ⇒ (b). Suppose that the system (4.6) is controllable, but the rows of Φ(t)B are linearly dependent on J. Then there exists a nonzero constant 1 × n row vector y^* such that

y^* Φ(t) B = 0,  for every t ∈ J.        (4.9)

We choose x(0) = x_0 = 0 and x'(0) = y_0 = 0. Therefore, the solution of (4.6) becomes

x(t) = ∫_0^t Φ(t − s) B u(s) ds.

Since the system (4.6) is controllable on J, taking x(T) = y, we have

x(T) = y = ∫_0^T Φ(T − s) B u(s) ds,
y^* y = ∫_0^T y^* Φ(T − s) B u(s) ds.

From (4.9), y^* y = 0 and hence y = 0, which contradicts our assumption that y is nonzero. Now we prove that (b) ⇒ (a). Suppose that the rows of Φ(t)B are linearly independent on J. Then, by Lemma 4.2.5, the n × n constant matrix

W = ∫_0^T Φ(T − s) B B^* Φ^*(T − s) ds


is positive definite. Now we define the control function as

u(t) = B^* Φ^*(T − t) W^{−1} [x_1 − Φ_0(T) x_0 − Φ_1(T) y_0].        (4.10)

Substituting (4.10) in (4.7), we have

x(T) = Φ_0(T) x_0 + Φ_1(T) y_0 + ∫_0^T Φ(T − s) B B^* Φ^*(T − s) W^{−1} [x_1 − Φ_0(T) x_0 − Φ_1(T) y_0] ds
     = Φ_0(T) x_0 + Φ_1(T) y_0 + W W^{−1} [x_1 − Φ_0(T) x_0 − Φ_1(T) y_0] = x_1.

Thus, the system (4.6) is controllable. The implications (b) ⇒ (c) and (c) ⇒ (b) follow directly from Lemma 4.2.5. Hence the result. □

Remark 4.2.7 It should be mentioned that, for α = 2, the linear fractional dynamical system (4.6) reduces to the second-order dynamical system [2]

\frac{d^2 x(t)}{dt^2} + A^2 x(t) = B u(t),  t ∈ J,

with the same initial conditions x(0) = x_0 and x'(0) = y_0. Further, taking α = 2 in (4.7) and (4.8), one can easily derive the solution and the controllability Grammian for this second-order system as

x(t) = cos(At) x_0 + A^{−1} sin(At) y_0 + ∫_0^t A^{−1} sin(A(t − s)) B u(s) ds,
W = ∫_0^T A^{−1} sin(A(T − s)) B B^* (A^{−1} sin(A(T − s)))^* ds.

Moreover, as in (4.6), this problem also concerns steering only the state vector and not the velocity vector.
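Lemma 4.2.3 gives a purely algebraic controllability test that is easy to automate. The Python sketch below (illustrative, not part of the text) forms the matrix [B, AB, …, A^{n−1}B] and checks its rank, using the matrices of Example 4.6.3 in Sect. 4.6 as sample data.

```python
import numpy as np

def controllability_matrix(A, B):
    """[B, AB, ..., A^(n-1) B], as in Lemma 4.2.3."""
    n = A.shape[0]
    return np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])

# sample data taken from Example 4.6.3 in Sect. 4.6
A = np.array([[0.0, 1.0], [1.0, 0.0]])
B = np.array([[0.0], [1.0]])
C = controllability_matrix(A, B)
print(np.linalg.matrix_rank(C) == A.shape[0])   # True -> controllable
```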

4.3 Controllability of Nonlinear Systems

Consider the nonlinear fractional dynamical system represented by the fractional differential equation of the form

^C D^α x(t) = A x(t) + B u(t) + f(t, x(t), u(t)),  x(0) = x_0,        (4.11)


where 0 < α < 1, x ∈ R^n, u ∈ R^m, A is an n × n matrix, B is an n × m matrix, and f : J × R^n × R^m → R^n is continuous. Let us introduce the following notation. Denote by Q the Banach space of continuous R^n × R^m-valued functions defined on the interval J with the norm ‖(x, u)‖ = ‖x‖ + ‖u‖, where ‖x‖ = sup{|x(t)| : t ∈ J} and ‖u‖ = sup{|u(t)| : t ∈ J}. That is, Q = C_n(J) × C_m(J), where C_n(J) is the Banach space of continuous R^n-valued functions defined on the interval J with the sup norm. For each (z, v) ∈ Q, consider the linear fractional dynamical system

^C D^α x(t) = A x(t) + B u(t) + f(t, z(t), v(t)),  x(0) = x_0.

Then the solution is given by

x(t) = E_α(At^α) x_0 + ∫_0^t (t − s)^{α−1} E_{α,α}(A(t − s)^α) B u(s) ds + ∫_0^t (t − s)^{α−1} E_{α,α}(A(t − s)^α) f(s, z(s), v(s)) ds.        (4.12)

Now we prove the following theorem.

Theorem 4.3.1 If the nonlinear function f is continuous and uniformly bounded on J and if the linear system (4.3) is controllable, then the nonlinear system (4.11) is controllable on J.

Proof Define the operator P : Q → Q by P(z, v) = (x, u), where

u(t) = (T − t)^{1−α} B^* E_{α,α}(A^*(T − t)^α) W^{−1} \Big[ x_1 − E_α(AT^α) x_0 − ∫_0^T (T − s)^{α−1} E_{α,α}(A(T − s)^α) f(s, z(s), v(s)) ds \Big]        (4.13)

and

x(t) = E_α(At^α) x_0 + ∫_0^t (t − s)^{α−1} E_{α,α}(A(t − s)^α) B u(s) ds + ∫_0^t (t − s)^{α−1} E_{α,α}(A(t − s)^α) f(s, z(s), v(s)) ds.        (4.14)

Let

a_1 = sup ‖E_{α,α}(A(T − t)^α)‖,  a_2 = sup ‖E_α(At^α) x_0‖,


c_1 = 4a_1^2 T^α ‖B^*‖ ‖W^{−1}‖ α^{−1},  c_2 = 4a_1 T^α α^{−1},
d_1 = 4a_1 ‖B^*‖ ‖W^{−1}‖ [|x_1| + a_2],  d_2 = 4a_2,
M = sup |f| = sup{|f(s, z(s), v(s))| : s ∈ J}.

Then

|u(t)| ≤ ‖B^*‖ a_1 ‖W^{−1}‖ \Big[ |x_1| + a_2 + \frac{T^α a_1}{α} sup |f| \Big] ≤ \frac{d_1}{4} + \frac{c_1}{4} M = a,

|x(t)| ≤ a_2 + \frac{a_1 T^α ‖B‖}{α} a + \frac{a_1 T^α}{α} M = b.

Choose a positive constant r such that a ≤ r/2 and b ≤ r/2. Thus, if

Q(r) = {(z, v) ∈ Q : ‖z‖ ≤ r/2 and ‖v‖ ≤ r/2},

then P maps Q(r) into itself. Since f is continuous, the operator P is continuous, and hence completely continuous by an application of the Arzelà–Ascoli theorem. Since Q(r) is closed, bounded, and convex, the Schauder fixed point theorem guarantees that P has a fixed point (z, v) ∈ Q(r) such that P(z, v) = (z, v) ≡ (x, u). Hence, we have

x(t) = E_α(At^α) x_0 + ∫_0^t (t − s)^{α−1} E_{α,α}(A(t − s)^α) B u(s) ds + ∫_0^t (t − s)^{α−1} E_{α,α}(A(t − s)^α) f(s, x(s), u(s)) ds.        (4.15)

Thus, x(t) is a solution of the system (4.11) and it is easy to verify that x(T) = x_1. Hence, the system (4.11) is controllable on J. □
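For a scalar system, the objects appearing in this proof can be computed directly. The Python sketch below (illustrative only) takes A = B = 1, α = 1/2, T = 1, the data of Example 4.6.2 in Sect. 4.6, builds the Grammian W of Theorem 4.2.2 by quadrature, forms the steering control (4.5) for the linear part, and checks that the state given by (4.4) reaches the target x_1; the Mittag–Leffler function is evaluated by truncating its series.

```python
from math import gamma
from scipy.integrate import quad

def ml(z, alpha, beta, terms=80):
    """Two-parameter Mittag-Leffler function by truncated series."""
    return sum(z**k / gamma(alpha * k + beta) for k in range(terms))

alpha, A, B, T = 0.5, 1.0, 1.0, 1.0          # scalar data of Example 4.6.2
x0, x1 = 0.0, 1.0                            # initial and target states

# controllability Grammian of Theorem 4.2.2
W, _ = quad(lambda s: (B * ml(A * (T - s)**alpha, alpha, alpha))**2, 0, T)

def u(t):
    # steering control (4.5) for the linear part
    return ((T - t)**(1 - alpha) * B * ml(A * (T - t)**alpha, alpha, alpha) / W
            * (x1 - ml(A * T**alpha, alpha, 1.0) * x0))

# state at time T from the variation-of-constants formula (4.4)
xT, _ = quad(lambda s: (T - s)**(alpha - 1) * ml(A * (T - s)**alpha, alpha, alpha) * B * u(s), 0, T)
xT += ml(A * T**alpha, alpha, 1.0) * x0
print(round(xT, 4), x1)                      # approximately equal: the target is reached
```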

4.4 Stability

The stability of a system relates to its response to inputs or disturbances. A system which remains in a constant state unless affected by an external action, and which returns to a constant state when the external action is removed, can be considered to be stable. The study of the stability of fractional systems can be carried out by studying the solutions of the differential equations that characterize them. One notices that the analysis of stability of fractional-order systems is more complicated than that of ordinary systems. For instance, consider the following two systems with initial condition x(0) = 1 and 0 < b < 1:

\frac{d}{dt} x(t) = b t^{b−1},        (4.16)

^C D^α x(t) = b t^{b−1},  0 < α < 1.        (4.17)

The analytical solution of (4.16) is x(t) = 1 + t^b. As t → ∞, x(t) → ∞. Therefore, the integer-order system (4.16) is unstable for any b ∈ (0, 1). The analytical solution of (4.17) is

x(t) = 1 + \frac{Γ(b + 1)}{Γ(b + α)} t^{b+α−1}.

As t → ∞, x(t) → 1 when b + α − 1 < 0. Therefore, the fractional-order system (4.17) is stable for any 0 < b ≤ 1 − α, and this implies that the fractional-order system may have an additional attractive feature over the integer-order system. Consider the fractional linear time-invariant system

^C D^α x(t) = A x(t),  0 < α < 1,  t ∈ [t_0, t_1],  x(t_0) = x_0,        (4.18)

where x ∈ R^n and A is an n × n matrix. The solution of (4.18) is given by

x(t) = E_α(A(t − t_0)^α) x_0.

Definition 4.4.1 The solution of the fractional dynamical system (4.18) is said to be stable if there exists ε > 0 such that any solution x(t) of (4.18) satisfies ‖x(t)‖ < ε for all t > t_0. The solution is said to be asymptotically stable if, in addition to being stable, ‖x(t)‖ → 0 as t → ∞.

Theorem 4.4.2 The fractional dynamical system (4.18) is asymptotically stable if and only if |arg(spec A)| > απ/2.


Proof Taking the Laplace transform on both sides of (4.18), we get s^α X(s) − s^{α−1} x_0 = A X(s). It follows that the solution of the linear system (4.18) is given by x(t) = E_α(At^α) x_0.

First, suppose that the matrix A is similar to a diagonal matrix. Then there exists an invertible matrix T such that Λ = T^{−1} A T = diag(λ_1, …, λ_n), and

E_α(At^α) = T E_α(Λt^α) T^{−1} = T diag[E_α(λ_1 t^α), E_α(λ_2 t^α), …, E_α(λ_n t^α)] T^{−1}.

By a result in [15], we have

E_α(λ_i t^α) = − \sum_{k=1}^{p} \frac{(λ_i t^α)^{−k}}{Γ(1 − kα)} + O(|λ_i t^α|^{−1−p}) → 0  as t → +∞,  1 ≤ i ≤ n.

Hence, the conclusion holds.

Next, suppose the matrix A is similar to a Jordan canonical form, that is, there exists an invertible matrix T such that J = T^{−1} A T = diag(J_1, …, J_r), where each J_i, 1 ≤ i ≤ r, has the form

J_i = \begin{pmatrix} λ_i & 1 & & \\ & λ_i & \ddots & \\ & & \ddots & 1 \\ & & & λ_i \end{pmatrix}_{n_i × n_i},

and \sum_{i=1}^{r} n_i = n. Obviously,

E_α(At^α) = T diag[E_α(J_1 t^α), E_α(J_2 t^α), …, E_α(J_r t^α)] T^{−1},

where, for 1 ≤ i ≤ r,

E_α(J_i t^α) = \sum_{k=0}^{∞} \frac{(J_i t^α)^k}{Γ(αk + 1)} = \sum_{k=0}^{∞} \frac{(t^α)^k}{Γ(αk + 1)} J_i^k.

Since J_i^k is upper triangular with entries C_k^j λ_i^{k−j} on the jth superdiagonal (here C_k^j, 1 ≤ j ≤ n_i − 1, are the binomial coefficients), summing the series entrywise gives

E_α(J_i t^α) = \begin{pmatrix} E_α(λ_i t^α) & \frac{1}{1!}\frac{∂}{∂λ_i} E_α(λ_i t^α) & \cdots & \frac{1}{(n_i−1)!}\big(\frac{∂}{∂λ_i}\big)^{n_i−1} E_α(λ_i t^α) \\ & E_α(λ_i t^α) & \ddots & \vdots \\ & & \ddots & \frac{1}{1!}\frac{∂}{∂λ_i} E_α(λ_i t^α) \\ & & & E_α(λ_i t^α) \end{pmatrix}.

By some tedious calculations, if |arg(λ_i(A))| > απ/2, 1 ≤ i ≤ r, and t → ∞, then we have

\Big| \frac{1}{j!} \Big(\frac{∂}{∂λ_i}\Big)^j E_α(λ_i t^α) \Big| → 0  and  |E_α(λ_i t^α)| → 0,  0 ≤ j ≤ n_i − 1,  1 ≤ i ≤ r.

Indeed, these can be seen from the following:

E_α(λ_i t^α) = − \sum_{k=1}^{p} \frac{(λ_i t^α)^{−k}}{Γ(1 − kα)} + O(|λ_i t^α|^{−1−p}),

which implies E_α(λ_i t^α) → 0 as t → ∞, and

\frac{1}{j!} \Big(\frac{∂}{∂λ_i}\Big)^j E_α(λ_i t^α)
 = \frac{1}{j!} \Big(\frac{∂}{∂λ_i}\Big)^j \Big( − \sum_{k=1}^{p} \frac{(λ_i t^α)^{−k}}{Γ(1 − kα)} + O(|λ_i t^α|^{−1−p}) \Big)
 = − \sum_{k=1}^{p} \frac{(−1)^j (k + j − 1)!\, λ_i^{−k−j} t^{−αk}}{j!\,(k − 1)!\,Γ(1 − αk)} + O(|λ_i|^{−1−p−j} |t^α|^{−1−p}),

which leads to |(1/j!)(∂/∂λ_i)^j E_α(λ_i t^α)| → 0, 0 ≤ j ≤ n_i − 1, as t → ∞. It now follows that ‖x(t)‖ = ‖E_α(At^α) x_0‖ → 0 as t → ∞ for any nonzero initial value x_0. The proof is complete. □
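The eigenvalue condition of Theorem 4.4.2 (and of Lemma 4.4.3 below) is simple to check mechanically. The following Python sketch (illustrative, not part of the text) tests whether |arg(spec A)| > απ/2 for the matrices appearing in Examples 4.6.6 and 4.6.8 of Sect. 4.6, so the output can be compared with the discussion given there.

```python
import numpy as np

def satisfies_arg_condition(A, alpha):
    """Check |arg(spec A)| > alpha*pi/2, cf. Theorem 4.4.2 and condition (4.19)."""
    return bool(np.all(np.abs(np.angle(np.linalg.eigvals(A))) > alpha * np.pi / 2))

A1 = np.array([[1.0, -1.0], [1.0, 1.0]])    # Example 4.6.6: eigenvalues 1 +/- i
print(satisfies_arg_condition(A1, 1/3))     # True: condition holds for alpha = 1/3
print(satisfies_arg_condition(A1, 1.0))     # False: the integer-order system is unstable

A2 = np.array([[0.0, 1.0, 0.0],             # Example 4.6.8: eigenvalues -1, 0.5 +/- 0.866i
               [0.0, 0.0, 1.0],
               [-1.0, 0.0, 0.0]])
print(satisfies_arg_condition(A2, 0.5))     # True: |arg(spec A)| > pi/4
```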

The following results are useful in analyzing the stability of fractional differential equations.

Lemma 4.4.3 [16] If all the eigenvalues of A satisfy

|arg(spec(A))| > \frac{απ}{2},        (4.19)

then the solution of the system (4.18) is asymptotically stable.

Lemma 4.4.4 [15] If α < 2, β is an arbitrary real number, μ is such that απ/2 < μ < min{π, απ}, and C is a real constant, then

|E_{α,β}(z)| ≤ \frac{C}{1 + |z|},  μ ≤ |arg(z)| ≤ π,  |z| ≥ 0.

Lemma 4.4.5 (Gronwall Lemma) [17] Suppose that g(t) and f(t) are non-negative continuous functions on [t_0, t_1] and λ ≥ 0. If

f(t) ≤ λ + ∫_{t_0}^{t} g(s) f(s) ds,

then

f(t) ≤ λ e^{∫_{t_0}^{t} g(s) ds},  t_0 ≤ t ≤ t_1.

Theorem 4.4.6 Suppose ‖E_{α,β}(At^α)‖ ≤ M e^{−γt}, 0 ≤ t < ∞, γ > 0, for β = 1 and β = α, and ∫_0^∞ ‖B(t)‖ dt < N, where M, N > 0. Then the solution of the equation

^C D^α x(t) = A x(t) + B(t) x(t),  0 < α < 1,  x(0) = x_0,        (4.20)

is asymptotically stable.

Proof By the Laplace transform and the inverse Laplace transform, the solution of Eq. (4.20) can be written as


x(t) = E_α(At^α) x_0 + ∫_0^t (t − τ)^{α−1} E_{α,α}(A(t − τ)^α) B(τ) x(τ) dτ.

Then we obtain

‖x(t)‖ ≤ ‖E_α(At^α)‖ ‖x_0‖ + ∫_0^t (t − τ)^{α−1} ‖E_{α,α}(A(t − τ)^α)‖ ‖B(τ)‖ ‖x(τ)‖ dτ,

and from the boundedness assumption we obtain

‖x(t)‖ ≤ M e^{−γt} ‖x_0‖ + ∫_0^t M e^{−γ(t−τ)} ‖B(τ)‖ ‖x(τ)‖ dτ.        (4.21)

Multiplying both sides of Eq. (4.21) by e^{γt}, we have

e^{γt} ‖x(t)‖ ≤ M ‖x_0‖ + ∫_0^t M e^{γτ} ‖B(τ)‖ ‖x(τ)‖ dτ.

Let e^{γt} ‖x(t)‖ = u(t). Then, according to the previous lemma, we have

e^{γt} ‖x(t)‖ ≤ M ‖x_0‖ exp\Big( ∫_0^t ‖B(s)‖ ds \Big).        (4.22)

Multiplying both sides of Eq. (4.22) by e^{−γt}, we obtain

‖x(t)‖ ≤ M ‖x_0‖ exp\Big( ∫_0^t ‖B(s)‖ ds \Big) e^{−γt}.

Then ‖x(t)‖ ≤ M ‖x_0‖ e^{N−γt}, so ‖x(t)‖ → 0 as t → ∞, that is, the solution of Eq. (4.20) is asymptotically stable. □
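The asymptotic decay discussed in this section can also be observed numerically by discretizing a system of the form (4.18). The rough Python sketch below uses a Grünwald–Letnikov-type approximation of the Caputo derivative (applied to x − x_0); the scheme, the step size, and the simulation horizon are illustrative choices and are not taken from the text. The test matrix is the one of Example 4.6.5 in Sect. 4.6, for which the system is asymptotically stable.

```python
import numpy as np

def simulate_caputo_linear(A, x0, alpha, h=0.05, steps=400):
    """Grunwald-Letnikov-type scheme for C D^alpha x = A x, x(0) = x0 (a rough approximation)."""
    n = len(x0)
    # coefficients c_j = (-1)^j * binom(alpha, j), via the standard recurrence
    c = np.empty(steps + 1)
    c[0] = 1.0
    for j in range(1, steps + 1):
        c[j] = c[j - 1] * (1.0 - (alpha + 1.0) / j)
    X = [np.array(x0, dtype=float)]
    I = np.eye(n)
    for k in range(1, steps + 1):
        # h^{-alpha} * sum_{j=0}^{k} c_j (x_{k-j} - x_0) = A x_k  ->  solve for x_k
        history = sum(c[j] * (X[k - j] - X[0]) for j in range(1, k + 1))
        X.append(np.linalg.solve(I - (h ** alpha) * A, X[0] - history))
    return np.array(X)

A = np.array([[0.0, -1.0], [1.0, 0.0]])          # matrix of Example 4.6.5
traj = simulate_caputo_linear(A, [1.0, 0.0], alpha=0.7)
print(np.linalg.norm(traj[-1]))                  # the norm decays (algebraically) toward zero
```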

4.5 Nonlinear Equations

Consider the nonlinear fractional differential equation of order 0 < α < 1,

^C D^α x(t) = f(t, x(t)),  t ∈ J = [0, T],  x(0) = x_0,        (4.23)

where f : J × R^n → R^n is continuous and f(t, 0) = 0.

Definition 4.5.1 The zero solution of Eq. (4.23) is said to be stable if, for any initial value x_0, there exists ε > 0 such that ‖x(t)‖ < ε for all t > 0. The zero solution is said to be asymptotically stable if, in addition to being stable, ‖x(t)‖ → 0 as t → +∞.


Consider the nonlinear fractional system of the form

^C D^α x(t) = A x(t) + f(t, x(t)),  x(0) = x_0,        (4.24)

where 0 < α < 1, A is an n × n matrix and f(t, x(t)) ∈ C(J × R^n, R^n) with f(t, 0) = 0.

Theorem 4.5.2 Suppose that ‖f(t, x(t))‖ ≤ M ‖x‖ and all the eigenvalues of A satisfy (4.19). Then the zero solution of (4.24) is asymptotically stable.

Proof The solution of the system (4.24) can be written as

x(t) = E_α(At^α) x_0 + ∫_0^t (t − s)^{α−1} E_{α,α}(A(t − s)^α) f(s, x(s)) ds.

From this,

‖x(t)‖ ≤ ‖E_α(At^α) x_0‖ + ∫_0^t (t − s)^{α−1} ‖E_{α,α}(A(t − s)^α)‖ ‖f(s, x(s))‖ ds
        ≤ ‖E_α(At^α) x_0‖ + M ∫_0^t (t − s)^{α−1} ‖E_{α,α}(A(t − s)^α)‖ ‖x‖ ds.

By using the Gronwall Lemma, we have

‖x(t)‖ ≤ ‖E_α(At^α) x_0‖ exp\Big( M ∫_0^t (t − s)^{α−1} ‖E_{α,α}(A(t − s)^α)‖ ds \Big),

and so

‖x(t)‖ ≤ C ‖E_α(At^α) x_0‖,  where  C = \max_{t∈J} exp\Big( M ∫_0^t (t − s)^{α−1} ‖E_{α,α}(A(t − s)^α)‖ ds \Big).

Further, ‖E_α(At^α) x_0‖ → 0 as t → ∞. Hence, we have lim_{t→∞} ‖x(t)‖ = 0. Therefore, the zero solution of the given system is asymptotically stable. □

Consider the nonlinear system of the form

^C D^α x(t) = A x(t) + I^α g(t, x(t)),  0 < α ≤ 1,  x(0) = x_0,        (4.25)

where A is an n × n matrix and g(t, x(t)) ∈ C[J × R^n, R^n] with g(t, 0) = 0.


Lemma 4.5.3 [18] If A satisfies (4.19), then there exists a constant K > 0 such that

∫_0^t ‖s^{2α−1} E_{α,2α}(A s^α)‖ ds ≤ K.        (4.26)

Theorem 4.5.4 Let g(t, x(t)) satisfy

‖g(t, x(t))‖ ≤ M ‖x‖,        (4.27)

and let all the eigenvalues of A satisfy (4.19). Then the zero solution of (4.25) is asymptotically stable.

Proof The solution of the system can be written as

x(t) = E_α(At^α) x_0 + ∫_0^t (t − s)^{α−1} E_{α,α}(A(t − s)^α) I^α g(s, x(s)) ds
     = E_α(At^α) x_0 + \frac{1}{Γ(α)} ∫_0^t (t − s)^{α−1} E_{α,α}(A(t − s)^α) ∫_0^s (s − τ)^{α−1} g(τ, x(τ)) dτ ds
     = I_1 + I_2.

Let us evaluate I_2:

I_2 = \frac{1}{Γ(α)} ∫_0^t ∫_0^s (t − s)^{α−1} (s − τ)^{α−1} E_{α,α}(A(t − s)^α) g(τ, x(τ)) dτ ds
    = \frac{1}{Γ(α)} ∫_0^t g(τ, x(τ)) \sum_{k=0}^{∞} \frac{A^k}{Γ(αk + α)} ∫_τ^t (t − s)^{αk+α−1} (s − τ)^{α−1} ds dτ
    = ∫_0^t (t − τ)^{2α−1} \sum_{k=0}^{∞} \frac{(A(t − τ)^α)^k}{Γ(αk + 2α)} g(τ, x(τ)) dτ
    = ∫_0^t (t − τ)^{2α−1} E_{α,2α}(A(t − τ)^α) g(τ, x(τ)) dτ.

Hence

x(t) = E_α(At^α) x_0 + ∫_0^t (t − τ)^{2α−1} E_{α,2α}(A(t − τ)^α) g(τ, x(τ)) dτ.

Then

‖x(t)‖ ≤ ‖E_α(At^α) x_0‖ + M ∫_0^t (t − τ)^{2α−1} ‖E_{α,2α}(A(t − τ)^α)‖ ‖x‖ dτ.


By using Gronwall's Lemma,

‖x(t)‖ ≤ ‖E_α(At^α) x_0‖ exp\Big( M ∫_0^t (t − τ)^{2α−1} ‖E_{α,2α}(A(t − τ)^α)‖ dτ \Big),

and so

‖x(t)‖ ≤ C ‖E_α(At^α) x_0‖,  where  C = \max_{t∈J} exp\Big( M ∫_0^t (t − τ)^{2α−1} ‖E_{α,2α}(A(t − τ)^α)‖ dτ \Big).

Further, ‖E_α(At^α) x_0‖ → 0 as t → ∞. Hence, we have lim_{t→∞} ‖x(t)‖ = 0. Therefore, the zero solution of the given system is asymptotically stable. □

Consider the fractional integrodifferential equation of the form

^C D^α x(t) = A x(t) + g\Big(t, x(t), ∫_0^t k(t, s, x(s)) ds\Big),        (4.28)

with the initial condition x(0) = x_0, where 0 < α ≤ 1, g ∈ C[J × R^n × R^n, R^n] and k ∈ C[J × J × R^n, R^n] with g(t, 0, 0) = 0 and k(t, s, 0) = 0 for all t, s ∈ J.

Theorem 4.5.5 Let k(t, s, x(s)) satisfy

‖k(t, s, x(s))‖ ≤ M_1 ‖x‖,  s ∈ [0, t],        (4.29)

and let all the eigenvalues of A satisfy (4.19). Then the zero solution of (4.28) is asymptotically stable.

Proof Comparing Eq. (4.28) with Eq. (4.24), the nonlinear term is given by

f(t, x(t)) = g\Big(t, x(t), ∫_0^t k(t, s, x(s)) ds\Big).

Then the condition for stability is given by

‖f(t, x(t))‖ = \Big\| g\Big(t, x(t), ∫_0^t k(t, s, x(s)) ds\Big) \Big\| ≤ C_1 ‖x‖ + C_2 ∫_0^t ‖k(t, s, x(s))‖ ds.

By using the condition (4.29), we have 

t

g(t, x(t),

k(t, s, x(s))ds) ≤ C1 x + C2 T M1 x

0

≤ Mx,

where M = C1 + C2 T M1 . Hence, the nonlinear term satisfies the required condition of the Theorem 4.5.2 and the rest of the proof is similar to that of Theorem 4.5.2. Thus the system (4.28) is asymptotically stable.  Corollary 4.5.6 Consider the nonlinear system of the form C

D α x(t) = Ax(t) +



t

g(τ , x(τ ))dτ , α ∈ (0, 1),

(4.30)

0

where A is an n × n matrix and g(t, x) ∈ C(J × Rn , Rn ), g(t, 0) = 0 with the initial condition given by x(0) = 0. Suppose that ||g(t, x(t))|| ≤ M1 ||x||

(4.31)

and all the eigenvalues of A satisfy (4.19). Then the zero solution of (4.30) is asymptotically stable. Consider the nonlinear system of the form C

D α x(t) − AI α x(t) = f (t, x(t)), α ∈ (0, 1),

(4.32)

where A is an n × n matrix and the initial condition is given by x(0) = x0 and f (t, 0) = 0. Theorem 4.5.7 Suppose f (t, x(t)) satisfies the condition || f (t, x(t))|| ≤ M1 ||x||

(4.33)

and all the eigenvalues of A satisfy (4.19). Then the zero solution of (4.32) is asymptotically stable. Proof The given Eq. (4.32) can be written as C

D α x(t) = AI α x(t) + f (t, x(t)).

Taking Laplace transform on both sides, we get X (s) =

s 2α − 1 x0 + L{t 2α−1 E 2α,α (At 2α ) ∗ f (t, x(t))}. s 2α − A

4.6 Examples

105

By taking inverse Laplace transform, the solution representation of the system can be written as  t (t − s)α−1 E 2α,α (A(t − s)2α ) f (s, x(s))ds x(t) = E 2α (At 2α )x0 + 0

and from this, we have 

t

x(t) ≤ E 2α (At 2α )x0  +

(t − s)α−1 E 2α,α (A(t − s)2α ) f (s, x(s))ds.

0

By using Gronwall’s inequality,   t s α−1 E 2α,α (A(s)2α )ds . x(t) ≤ E 2α (At 2α )x0  exp M 0

Since   t s α−1 E 2α,α (As 2α ) ds exp M 0

is bounded and E 2α (At 2α )x0  → 0 as t → ∞, we have limt→∞ x(t) = 0.



4.6 Examples In this section, we apply the results established in the previous sections to the following fractional dynamical systems. Let x1 (t) = x(t) and x2 (t) =C D α x1 (t). Example 4.6.1 Consider the linear fractional dynamical system C

D α x(t) = Ax(t), 0 < α < 1, t ∈ [0, T ], x(0) = x0 ,

(4.34)



  0 1 with the linear observation y(t) = H x(t) where A = , H = 1 0 and −1 0

x1 (t) . The Mittag–Leffler matrix function is given by x(t) = x2 (t) ⎡ E α (At α ) = ⎣

1 [E α (it α ) 2

+ E α (−it α )]

− 2i1 [E α (it α )

α

− E α (−it )]

1 [E α (it α ) 2i 1 [E α (it α ) 2

− E α (−it α )] α

+ E α (−it )]

⎤ ⎦.



The observability Grammian for this system is  M=

T

E α (A∗ t α )H ∗ H E α (At α )dt.

0

Using matrix calculations, we can show that M is positive definite for T > 0. Hence, the system (4.34) is observable. Example 4.6.2 Consider the following nonlinear fractional dynamical system represented by the scalar fractional differential equation C

D 1/2 x(t) = x(t) + u(t) + sin x(t) cos u(t), t ∈ [0, 1], x(0)=x0 ,

here A = B = 1; α = 1/2; T = 1; f (t, x(t), u(t)) = sin x(t) cos u(t). The two-parameter Mittag–Leffler function is given by E 1/2,1/2 ((t − s)1/2 ) =

∞ k=0

(t − s)k/2 . ((k + 1)/2)

By simple calculation, one can see that the controllability Grammian 

1

W = 

∞ 1

= 0

 = 0

=

[E 1/2,1/2 ((1 − s)1/2 )][E 1/2,1/2 ((1 − s)1/2 )]∗ ds

0 ∞

(1 − s)m/2 (1 − s)k/2 × ds ((k + 1)/2) m=0 ((m + 1)/2)

k=0 ∞ ∞ 1

(1 − s)(k+m)/2 ds ((k + 1)/2)((m + 1)/2) k=0 m=0

∞ ∞

2 (k + m + 2)((k + 1)/2)((m + 1)/2) k=0 m=0

>0 and the control function is

∞ ∞ (1 − t)(k+1)/2 −1 (1)k/2 x1 − W x0 u(t) = ((k + 1)/2) ((k/2 + 1) k=0 k=0 ∞  1 (1 − s)(k−1)/2 sin x(s) cos u(s)ds ; − ((k + 1)/2) k=0 0

(4.35)

4.6 Examples

107

since W > 0, the linear system is controllable and the nonlinear function f (t, x, u) = sin x cos u is uniformly bounded. Hence, by Theorem 4.3.1, the nonlinear system (4.35) is controllable on [0, 1]. Example 4.6.3 Consider the linear fractional control system C

D α x(t) = Ax(t) + Bu(t), 0 < α < 1, t ∈ [0, T ],

(4.36)





01 0 where A = and B = . 10 1 The Mittag–Leffler matrix function of the given matrix A is ⎡ E α,α (At α ) = ⎣

E 2α,α (t 2α ) t α E 2α,2α (t 2α ) t α E 2α,2α (t 2α ) E 2α,α (t 2α )

⎤ ⎦.

The controllability Grammian of this system is  W =

T

E α,α (A(T − τ )α )B B ∗ E α,α (A∗ (T − τ )α )dτ .

0

Using matrix calculations, we can show that W is positive definite for T > 0. So the linear system (4.36) is controllable on [0, T ]. Example 4.6.4 Consider the nonlinear fractional control system C

D α x(t) = Ax(t) + Bu(t) + f (t, x, u), 0 < α < 1, t ∈ [0, T ], 

where A, B are as above and f (t, x, u) =

(4.37)

 sin x1 . sin u

Since the controllability Grammian W is positive definite for T > 0 and so the linear part of the system is controllable on [0, T ]. Further, the nonlinear function f satisfies the hypothesis of the Theorem 4.3.1. Observe that the control defined by  u(t) = (T − t)1−α B ∗ E α,α (A∗ (T − t)α )W −1 x1 − E α (A(T )α )x0  T α−1 ∗ α − (T − s) E α,α (A (T − t) ) f (s, x(s), u(s))ds 0

steers the system (4.37) from x0 to x1 and hence the nonlinear fractional control system is controllable on [0, T ].



Example 4.6.5 Consider the linear fractional dynamical system C



0 −1 D x(t) = x(t), 0 < α < 1, t ∈ [0, T ], 1 0 x(0) = x0 , α

(4.38)

The eigenvalues of the given matrix are ±i.       1   π  π απ = = > , 0 < α < 1, | arg(i)| = tan−1  0 2 2 2      απ −1   π  π = −  = > , 0 < α < 1. | arg(−i)| = tan−1  0 2 2 2 Therefore, the fractional order system (4.38) is asymptotically stable. This implies that the system is stable. When α = 1, the integer order system is stable but not asymptotically stable. Example 4.6.6 Consider the fractional dynamical system C



1 −1 x(t), 0 < α < 1, t ∈ [0, T ], 1 1 x(0) = x0 ,

D α x(t) =

(4.39)

The eigenvalues of the given matrix are 1 ± i.   π  π  απ , if α = 1/3 | arg(1 + i)| = tan−1 (1) =   = > 4  4 2   π  π απ | arg(1 − i)| = tan−1 (−1) =   = > , if α = 1/3. 4 4 2 Therefore, the fractional order system (4.39) is asymptotically stable and this implies that the system is stable if α = 1/3. But, for α = 1, the integer order system is unstable. Example 4.6.7 Consider the following system of fractional differential equations in the Caputo sense with α ∈ (0, 1): C C

D α x = ax − by D α y = bx + ay

Applying the Laplace transform on both sides of system (4.40) yields 

s α X (s) − s α−1 x0 = a X (s) − bY (s), s α Y (s) − s α−1 y0 = bX (s) + aY (s),

(4.40)

4.6 Examples

109

where X (s) = L(x(t)), Y (s) = L(y(t)), x0 = x(0) and y0 = y(0). It follows that (s α − a)s α−1 x0 − bs α−1 y0 , (s α − a)2 + b2 (s α − a)s α−1 y0 + bs α−1 x0 Y (s) = , (s α − a)2 + b2

X (s) =

that is, x0 + y0 i s α−1 x0 − y0 i s α−1 + , α α 2 s − a − bi 2 s − a + bi s α−1 y0 + x0 i s α−1 y0 − x0 i + , Y (s) = 2 s α − a − bi 2 s α − a + bi

X (s) =

in which i is the imaginary unit. Making the inverse Laplace transform and using L−1 and −1

L





s α−1 α s − a − bi s α−1 s α − a + bi





= E α,1 ((a + bi)t α )

= E α,1 ((a − bi)t α )

lead to x0 − y0 i x0 + y0 i E α,1 ((a + bi)t α ) + E α,1 ((a − bi)t α ), 2 2 y0 + x0 i y0 − x0 i E α,1 ((a + bi)t α ) + E α,1 ((a − bi)t α ). y(t) = 2 2

x(t) =

If a < 0 and α ∈ (0, 1), applying the theorem, we have lim x(t) = lim y(t) = 0,

t→+∞

t→+∞

which indicates that the zero solution of system (4.40) is asymptotically stable. In system (4.40), if one lets α approach 1 and uses lim α→1 C D α = dtd , then the original system can be changed into an ordinary differential system. For this ordinary system, the zero solution is stable for a = 0. However, it is not the case for fractional system (4.40). Example 4.6.8 (Duffing Equation) Consider the fractional nonlinear system C

D α x(t) = −x(t) − x(t)3 , 0 < α < 2.



When α = 21 , the nonlinear first-order Duffing equation is given by C

1

D 2 x(t) = −x(t) − x(t)3 , x(0) = 1.

(4.41)

The homogeneous system C D 2 x(t) = −x(t) satisfies |ar g(−1)| > π4 . Here, f (t, x) = −x 3 which satisfies f (t, 0) = 0. Since all the conditions of Theorem 4.5.2 are satisfied, the given system (4.41) is asymptotically stable. When α = 23 , the nonlinear second-order Duffing equation is given by 1

C

3

D 2 x(t) = −x(t) − x(t)3 , x(0) = 1, x  (0) = 1.

(4.42)

The given equation can be converted into a system of fractional differential equations by using the substitution 1

C

D 2 x1 (t) = x2 (t),

C

D 2 x2 (t) = x3 (t),

C

D 2 x3 (t) = −x1 (t) − x13 (t),

1

1

(4.43)

where x1 (t) = x(t) with initial condition x1 (0) = 1, x2 (0) = 0, x3 (0) = 1. This 1 can be written in the standard form, C D 2 x(t) = Ax(t) + f (t, x) and f (t, x(t)) = (0, 0, x13 (t))T where ⎡

⎤ 010 A = ⎣ 0 0 1⎦. −1 0 0 The eigenvalues of A are −1, 0.5 ± 0.8660i which satisfy |ar g(spec(A))| > π4 . Here, the nonlinear term f (t, x) satisfies f (t, 0) = 0. Since all the conditions of Theorem 4.5.2 are satisfied, the given system (4.42) is asymptotically stable. Example 4.6.9 Consider the fractional nonlinear system D α x1 (t) = x2 (t), C α D x2 (t) = −x1 (t) − (1 + x2 (t))2 x2 (t), C

(4.44)

with the initial condition x1 (0) = 0.14, x2 (0) = 0.125, when α = 0.9. This can be written in the standard form C D α x(t) = Ax(t) + f (t, x), where f (t, x) = (0, −(1 + x2 (t))2 x2 (t))T , where

01 A= −1 0





The eigenvalues of A are ±i which satisfy |ar g(spec(A))| > 0.8π . Since the sys2 tem (4.44) satisfies all the conditions of the Theorem 4.5.2, the system (4.42) is asymptotically stable. Example 4.6.10 Consider the scalar integrodifferential equation of the form C



α

t

D x(t) + 2x(t) = −5

x(s)ds + h(t),

(4.45)

0

where α =

1 2

with initial condition x(0) = 0 and  h(t) =

1 0 0. These fractional diffusion models are also used in the dynamics of chemical reactions and phase transitions that have subdiffusion process. (ii) Fractional Fisher–Kolmogorov Equation The Fisher–Kolmogorov equation is one of the nonlinear reaction diffusion equations, which was formulated by Fisher in 1937 and its analytical results were developed by Kolmogorov. At the beginning, it was introduced to simulate the propagation of a gene within a population and the solution of Fisher–Kolmogorov equation is a wave that travels in space and time with an obverse that never changes its shape. The time fractional Fisher–Kolmogorov equation is attained by changing the time derivative of the Fisher–Kolmogorov equation with a fractional derivative of order α, 0 < α < 1. The extended time fractional Fisher–Kolmogorov equation is of the form C α

∂ u − u + 2 u + u 3 − u = 0, ∂t α

0 < α < 1,

where u(x, t) denotes the density of particles or individuals. On discussion regarding epidemics, u(x, t) could mean the density of infected individuals (see [2]). (iii) Fractional Cable Equation Diffusion is a most important transport mechanism in the living organism. Since the diffusion is anomalous in disordered media, to model those anomalous subdiffusion the integer time derivative is replaced by fractional time derivative in the corresponding models such as C α

∂ u ∂2u − + u(x, t) = 0. ∂t α ∂x 2



For example, through MRI, the anomalous diffusion is frequently detected in brain tissues. In order to find a model of the brain, we should investigate how the voltage propagates in brain tissues. To be precise, it is important to understand how the voltage propagates in a cable with anomalous diffusion because axons in neuron can be expressed as cables. Those are very important in the neuron–neuron communications. In this regard, more recently authors have suggested the fractional cable equation [3] of the form  C 1−α  d ∂2v ∂ ∂v 0 ≤ α ≤ 1, = β 1−α − i M c , ∂t ∂t 4r L ∂x 2 where v(x, t) is the voltage in a cylindrical cable, M represents the membrane capacitance, d is a diameter of a cylindrical cable, r L indicates the longitudinal resistance, β is a constant, and i c specifies the ionic current flowing per unit area in the cable both inward and outward directions. Here the Riemann–Liouville fractional derivative is considered. In nerve cells to analyze the electrodiffusion of ions, the current i c is taken as v/r M , where r M is the membrane resistance. Since fractals are used to derive stretched exponential function in a brain tissue, the diffusion is described by this function in [4]. (iv) Fractional Burgers Equation Burgers equation is the simplest nonlinear equation for diffusive waves in fluid dynamics. We investigate Burgers equation with a fractional time derivative   C α ∂ u2 ∂ u ∂u + au + b = − α , ∂t ∂x 2 ∂t

 ≥ 0, 0 < α < 1,

(5.1)

where a is a constant speed of advection and b is a coefficient of nonlinear term. When  = 0, the above equation reduces to a transport equation. The fractional time derivative term on the right-hand side describes the effect of memory and linear losses during the propagation of waves. Similarly, when the model has anomalous diffusion or dispersion or sedimentation of particles, the space fractional Burger’s equation has been referred by the authors [5] which is obtained by replacing fractional Laplacian rather than fractional time derivative in the right-hand side of (5.1). In a gas-filled pipe, the physical processes of acoustic waves which have unidirectional propagation are modeled by using fractional derivatives as C α

C β ∂2u ∂ u ∂ u ∂u − μ + u + η = 0, 0 < α, β ≤ 1, t > 0, α 2 ∂t ∂x ∂x ∂x β

where , μ, η are the parameters. Such equations also appear in shallow-water waves and waves in bubbly liquids [5, 6].



(v) Fractional Population Model The fractional population model is of the form C α

∂ u ∂2u ∂2u = + + f (u). ∂t α ∂x 2 ∂ y2

Here u(x, t) represents the population density and f denotes the population supply due to births and deaths. This equation is used to calculate the dynamics of the population changes. Gurney and Nisbet [7] considered the animal population model, wherein the population density of the animal species moves from higher population to lower population density. They observed that the movement or migration of the species takes place at a faster rate at higher density populations when compared to the lower densities. To model this scenario, they considered a fine rectangular mesh in such a way that the animal species are located at the grid points in the mesh. The population model was then formulated by incorporating the nature of migration as for each time step, the animal either remains at its location or moves toward lower population density locations. The probability of such movement of the species depends on the magnitude of the density gradient in the considered mesh. This approach leads to the following equation: C α

∂ u ∂2u2 ∂2u2 = + + f (u). α 2 ∂t ∂x ∂ y2

For more details about the fractional partial differential equations, the reader can refer the books [8, 9] and the papers [10, 11].

5.2 Fractional Partial Integral and Derivative We introduce the following definitions from fractional calculus to study the fractional partial differential equations. Definition 5.2.1 [12] The Riemann–Liouville fractional partial integral operator of order α with respect to t of a function f (x, t) is defined by 1 I f (x, t) = (α) α

t 0

f (x, s) ds, (t − s)1−α

where f (·, t) is an integrable function. For any n − 1 < α < n, n ∈ N, the Riemann–Liouville and Caputo fractional partial derivative operators are defined as follows:

5.3 Linear Fractional Equations

119

Definition 5.2.2 [12] The Riemann–Liouville fractional partial derivative of order α of a function f (x, t) with respect to t is defined by ∂n ∂ α f (x, t) 1 = ∂t α (n − α) ∂t n

t 0

f (x, s) ds, (t − s)α−n+1

where the function f (·, t) has absolutely continuous derivatives up to order (n − 1). Definition 5.2.3 [12] The Caputo fractional partial derivative of order α with respect to t of a function f (x, t) is defined as C α

∂ f (x, t) 1 = α ∂t (n − α)

t 0

1 ∂ n f (x, s) ds, α−n+1 (t − s) ∂s n

where the function f (·, t) has absolutely continuous derivatives up to order (n − 1). The Riemann–Liouville and Caputo fractional partial derivatives are linked by the following relationship: ∂ k f (x, 0) ∂ f (x, t) t k−α ∂ α f (x, t)  = − . ∂t α ∂t α (k + 1 − α) ∂t k k=0 n−1

C α

Although the function with Riemann–Liouville fractional partial derivative need not be continuous at the origin and differentiable, it has some disadvantages when dealing with real-world problem. Specifically, while using Riemann–Liouville derivative, the fractional partial differential equation needs the fractional order initial condition and the Riemann–Liouville fractional partial derivative of a constant K is ∂α K = K t −α /(1 − α). But fractional partial derivative in Caputo sense is applica∂t α ble when trying to model real-world phenomena because Caputo partial derivative admits conventional initial conditions and the Caputo fractional partial derivative of a constant is zero as in integer order case. Both the fractional partial derivatives coincide with zero initial condition.
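The difference between the two operators on constants can be verified symbolically. The Python sketch below (an illustration, not part of the text) applies both definitions to a constant K for the sample order α = 1/2 using sympy; it reproduces the value K t^{−α}/Γ(1 − α) for the Riemann–Liouville derivative and zero for the Caputo derivative.

```python
import sympy as sp

t, s, K = sp.symbols('t s K', positive=True)
alpha = sp.Rational(1, 2)      # sample fractional order, n - 1 < alpha < n with n = 1
n = 1
f = K                          # a constant function of (x, t)

# Riemann-Liouville: (1/Gamma(n - alpha)) d^n/dt^n int_0^t f(s) (t - s)^(n - alpha - 1) ds
rl = sp.diff(sp.integrate(f * (t - s)**(n - alpha - 1), (s, 0, t)), t, n) / sp.gamma(n - alpha)

# Caputo: (1/Gamma(n - alpha)) int_0^t f^(n)(s) (t - s)^(n - alpha - 1) ds
caputo = sp.integrate(sp.diff(f, s, n) * (t - s)**(n - alpha - 1), (s, 0, t)) / sp.gamma(n - alpha)

print(sp.simplify(rl))         # K*t**(-1/2)/sqrt(pi), i.e. K t^(-alpha)/Gamma(1 - alpha)
print(sp.simplify(caputo))     # 0
```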

5.3 Linear Fractional Equations Consider the initial value problem C α

∂ u(x, t) = f (x, t), ∂t α u(x, 0) = u 0 (x)

t ∈ J = [0, T ],

(5.2)



where 0 < α < 1 and f : R × J → R is a continuous function. Then the corresponding integral equation is 1 u(x, t) = u 0 (x) + (α)



t

0

f (x, s) ds. (t − s)1−α

(5.3)

We observe the following two cases: 1. If f (·, t) ∈ L 1 ([0, T ]) or f (·, t) ∈ C([0, T ]), then the derivative of u(x, t) does C α does not exist and the equivalent relation between (5.2) not exist. Hence ∂ ∂tu(x,t) α and (5.3) is not possible. In this case, (5.3) is not a strong solution but called a mild solution of (5.2). 2. If f (·, t) ∈ AC([0, T ]) (the space of all absolutely continuous functions), then u(·, t) ∈ AC([0, T ]). This shows the equivalent relation between (5.2) and (5.3) and the problem (5.2) can be solved for a strong solution. Lemma 5.3.1 (Green’s Identity) [13] Let  be a bounded domain in Rm with smooth boundary ∂. Then, for any u, v ∈ C 2 (), 

 

vu dx =

∂

v

∂u ds − ∂n

 

∇u · ∇v dx,

where n is the outward unit normal to the boundary ∂ and ds is the element of arc length. For the special case v = 1, 

 

u dx =

∂

∂u ds. ∂n

(5.4)

This is called Green’s first identity.

5.3.1 Adomian Decomposition Method To describe this method [14], consider the differential equation of the form Lu + Ru + N u = f,

(5.5)

where L is a linear operator which can be inverted, R is the remainder operator, N is a nonlinear operator, and f is the source term. Since L is invertible, we have from (5.5), u = L −1 Lu = L −1 f − L −1 Ru − L −1 N u.

(5.6)

5.3 Linear Fractional Equations

121

The beauty of the method is that the solution is first expressed as a series and the terms ∞  are approximated one by one. To be clear, suppose u can be expressed as u = un n=0

with u 0 = φ + L −1 f , where φ denotes terms appearing from the initial or boundary conditions. The nonlinear term N u can be written in terms of Adomian polynomials as Nu =

∞ 

An ,

n=0

where An is calculated using the formula An =



1 dn N (v(y)) , n = 0, 1, 2, . . . , n! dλn λ=0

and v(y) =

∞ 

λn u n .

n=0

Now the series solution u =

∞ 

u n to the differential equation is calculated iteratively

n=0

as follows: u 0 = φ + L −1 f u n+1 = −L −1 Ru n − L −1 An , n ≥ 0, and then summing up all the terms to obtain u. This method can be extended to partial differential equations and fractional differential equations. Suppose we consider the partial differential equation L t u = L x u + f, with initial condition u(x, 0) = g(x), where L t and L x are differential operators with respect to t and x, respectively. Then the form of (5.6) in this case would be −1 L −1 t L t u = L t (L x u + f ).

Therefore the solution algorithm is u 0 (x, t) = g(x) + L −1 t f, u n+1 (x, t) = L −1 t L x u n , n ≥ 0.



t For the heat equation L −1 = 0 (·) ds and for the wave equation L −1 t t = t1 t2 −1 0 0 (·) ds2 ds1 . In the case of fractional differential equation, L t is the Riemann– Liouville fractional integral operator and is denoted by I α . Application of this method to different fractional partial differential can be found in [10].
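The iteration is easy to carry out symbolically when each term is a product of a function of x and a power of t, since then I^α only rescales the time factor. The Python sketch below (an illustration, not part of the text) applies the decomposition scheme to the time-fractional diffusion equation ^C∂^α u/∂t^α = u_xx with the assumed sample data α = 1/2 and u(x, 0) = sin x; the partial sums build up sin(x) E_α(−t^α), in line with the separation-of-variables solution of the next subsection.

```python
import sympy as sp

x, t = sp.symbols('x t', positive=True)
alpha = sp.Rational(1, 2)             # assumed fractional order for the demo

def I_alpha(term_x, p):
    """Riemann-Liouville time integral of order alpha applied to term_x * t**p."""
    return term_x * sp.gamma(p + 1) / sp.gamma(p + 1 + alpha), p + alpha

# decomposition iteration for  C d^alpha u/dt^alpha = u_xx,  u(x, 0) = sin(x)
u_x, p = sp.sin(x), sp.Integer(0)     # u_0 = sin(x) * t^0
solution = u_x * t**p
for _ in range(5):                    # a few iterations of u_{n+1} = I^alpha (u_n)_xx
    u_x, p = I_alpha(sp.diff(u_x, x, 2), p)
    solution += u_x * t**p

print(sp.simplify(solution))          # partial sum of sin(x) * E_alpha(-t^alpha)
```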

5.3.2 Fractional Diffusion Equation Consider the fractional diffusion equation of the form C α

C β ∂ u ∂ u = κ , α ∂t ∂x β

0 < α ≤ 1, 1 < β ≤ 2, 0 < x < l, t > 0,

(5.7)

with the initial and boundary conditions u(0, t) = 0 = u(l, t), t ≥ 0,

(5.8)

u(x, 0) = f (x), 0 ≤ x ≤ l,

(5.9)

where κ is a constant (diffusivity constant in heat diffusion problems). We assume a nonzero separable solution of (5.7) in the form u(x, t) = X (x)T (t). Substituting (5.10) in (5.7) gives (here C D η = t)

C η

d dy η

(5.10) where η = α or β and y = x or

1 C dβ X 1 C dαT = . X dxβ κT dt α

(5.11)

Since the left-hand side depends only on x and the right-hand side is a function of time t only, result (5.11) can be true only if both sides are equal to the same constant λ. Thus we obtain two ordinary fractional differential equations C β

d X − λX = 0 and dxβ

C α

d T − λκT = 0. dt α

(5.12)

We consider λ as negative, since all other values are not of physical interest. Hence let λ = −γ 2 . Therefore Eq. (5.12) becomes C β

d X + γ 2 X = 0, and dxβ

C α

d T + κγ 2 T = 0. dt α

(5.13)



The above equations can be solved by means of Laplace transform technique [12] and the solution in terms of Mittag-Leffler function is given as X (x) = AE β,1 (−γ 2 x β ) + Bx E β,2 (−γ 2 x β )

(5.14)

T (t) = C E α (−κγ 2 t α ),

(5.15)

and

where A, B, and C are arbitrary constants. The boundary conditions for X (x) are X (0) = 0 = X (l)

(5.16)

which are used to find A and B in the solution (5.14). It turns out that A = 0 and B = 0. Hence Bl E β,2 (−γ 2 l β ) = 0

(5.17)

gives the eigenvalues and the corresponding eigenfunctions are given by X n (x) = Bn x E β,2 (−γn2 x β ),

(5.18)

where Bn are constants. With γ = γn , we obtain the solution u n (x, t) as u n (x, t) = an E α (−γn2 κt)x E β,2 (−γn2 x β ),

(5.19)

where an = Bn Cn . Thus the most general solution is obtained by the principle of superposition in the form u(x, t) =

∞ 

an E α (−γn κt α )x E β,2 (−γn x β ).

(5.20)

n=1

The constant an is calculated by using the initial condition u(x, 0) = f (x).

5.3.3 Fractional Wave Equation Consider the one-dimensional fractional wave equation of the form C α

C β ∂ u(x, t) 0 ≤ x ≤ l, t ≥ 0, 2 ∂ u(x, t) = c , 1 < α ≤ 2, 1 < β ≤ 2, ∂t α ∂x β

(5.21)



with initial and boundary conditions u(0, t) = 0 = u(l, t), t ≥ 0, u(x, 0) = f (x), u t (x, 0) = g(x) 0 ≤ x ≤ l,

(5.22)

where c2 is vibrating string constant. We assume a separable solution of (5.21) in the form u(x, t) = X (x)T (t).

(5.23)

Substitution of (5.23) in (5.21) gives 1 C dαT 1 C dβ X = . c2 T dt α X dxβ

(5.24)

Since the left-hand side depends only on t and the right-hand side is a function of time x only, (5.24) can be true only if both sides are equal to the same constant μ. Thus we obtain two ordinary fractional differential equations written as 1 C dαT 1 C dβ X = = μ. c2 T dt α X dxβ

(5.25)

Thus the determination of solutions of the original fractional partial differential equation has been reduced to the determination of solutions of the two fractional differential equations C α

d T = c2 μT and dt α

C β

d X = μX. dxα

(5.26)

The cases μ > 0 and μ = 0 lead to trivial zero solution and hence we take μ < 0; we write μ = −λ. Then the component differential equations and their solutions are X (x) = AE β,1 (−λx β ) + Bx E β,2 (−λx β ).

(5.27)

From the condition X (x) = 0, we obtain A = 0. The condition X (l) = 0 gives Bl E β,2 (−λl β ) = 0. Solving the above equation, we get the eigenvalues λn and the corresponding eigenfunctions are given by X n (x) = Bn x E β,2 (−λn x β ). Now for λ = λn , the general solution of equation C α

d T = c2 μT dt α

5.3 Linear Fractional Equations

is given by

125

Tn (t) = Cn E α,1 (−c2 λn t α ) + Dn t E α,2 (−c2 λn t α ).

Hence the general solution is given by   u n (x, t) = an E α,1 (−c2 λn t α ) + bn t E α,2 (−c2 λn t α ) x E β,2 (−λn x β ), where an = Bn Cn and bn = Bn Dn . Since the equation is linear and homogeneous, by the principle of superposition u(x, t) =

∞    an E α,1 (−c2 λn t α ) + bn t E α,2 (−c2 λn t α ) x E β,2 (−λn x β ) n=1

is also a solution. By using the remaining two conditions on f (x) and g(x), we calculate the constants an and bn . The eigenvalues for different values of β can be easily obtained and if we take α = β = 2, the above solution reduces to u(x, t) =

∞  nπc nπc nπx an cos( t) + bn sin( t) sin( ), l l l n=1

which is the solution to the classical wave equation.

5.3.4 Fractional Black–Scholes Equation Consider the fractional Black–Scholes option pricing equation [15, 16] of the form C α

∂ v ∂2v ∂v − kv, = + (k − 1) α 2 ∂t ∂x ∂x

0 < α ≤ 1,

(5.28)

with the initial condition v(x, 0) = e x − 1, x > 0. For applying the Adomian decomposition method, we rewrite Eq. (5.28) as C 1−α ∂ ∂v = 1−α ∂t ∂t



∂2v ∂v − kv . + (k − 1) ∂x 2 ∂x

Now integrating on both sides with respect to t, we get v(x, t) = v(x, 0) +

t  C 0

∂ 1−α ∂s 1−α



∂ 2 v(x, s) ∂v(x, s) + (k − 1) − kv(x, s) ∂x 2 ∂x

 ds.



Here we take v(x, 0) as v0 and v1 (x, t) =

t  C

∂ 1−α ∂s 1−α

0



∂ 2 v0 (x, s) ∂v0 (x, s) + (k − 1) − kv0 (x, s) 2 ∂x ∂x

 ds.

The iterative scheme, for n > 1, to this problem is given by vn (x, t) =

t  C

∂ 1−α ∂s 1−α

0



∂ 2 vn−1 (x, s) ∂vn−1 (x, s) − kvn−1 (x, s) + (k − 1) 2 ∂x ∂x

 ds.

(5.29) Hence, by Adomian decomposition method, the solution to (5.28) is given by v(x, t) =

∞ 

vn (x, t).

n=0

Here we consider the initial condition v(x, 0) = e x − 1, x > 0; hence v0 = e x − 1. By using (5.29), we calculate the remaining terms of the solution as v1 (x, t) =

t  C 0

∂ 1−α ∂s 1−α



∂ 2 v0 (x, 0) ∂v0 (x, 0) − kv0 (x, 0) + (k − 1) 2 ∂x ∂x

 ds

−kt α −kt α + (e x − 1) (α + 1) (α + 1)

  t  C 1−α 2 ∂ v1 (x, 0) ∂ ∂v1 (x, 0) − kv1 (x, 0) ds v2 (x, t) = + (k − 1) ∂s 1−α ∂x 2 ∂x = −e x

0

= −e x

(−kt α )2 (−kt α )2 + (e x − 1) (2α + 1) (2α + 1)

and so on. So, by Adomian decomposition method, the solution of Eq. (5.28) is given by v(x, t) =

∞ 

vn (x, t)

n=0

(kt α )2 kt α − + ···) (α + 1) (2α + 1) (kt α )2 kt α +(e x − 1)(1 − + + ···) (α + 1) (2α + 1) = e x (1 − E α (−kt α )) + (e x − 1)E α (−kt α ), = e x − E α (−kt α ), = ex (

(5.30)

5.4 Nonlinear Fractional Equations

127

where E α (x) is Mittag-Leffler function in one parameter. Equation (5.30) is the exact solution of Eq. (5.28).
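The closed form (5.30) is convenient for numerical evaluation, since only the one-parameter Mittag-Leffler function has to be truncated. The short Python sketch below (illustrative; k, α, and the sample points are arbitrary choices) evaluates v(x, t) = e^x − E_α(−k t^α) and confirms that it reduces to the payoff e^x − 1 at t = 0.

```python
import numpy as np
from math import gamma

def ml1(z, alpha, terms=60):
    """One-parameter Mittag-Leffler function E_alpha(z) by truncated series."""
    return sum(z**n / gamma(alpha * n + 1) for n in range(terms))

def v(x, t, alpha=0.8, k=0.5):
    """Solution (5.30) of the fractional Black-Scholes equation (5.28)."""
    return np.exp(x) - ml1(-k * t**alpha, alpha)

print(v(0.2, 0.0), np.exp(0.2) - 1.0)   # equal: the initial condition v(x, 0) = e^x - 1
print(v(0.2, 1.0))                      # value at a later time
```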

5.4 Nonlinear Fractional Equations Consider the nonlinear fractional partial differential equation of the form C α

∂ u = a(t)u(x, t) + f (t, u(x, t)), t ∈ J, ∂t α

(5.31)

with initial condition u(x, 0) = u 0 (x),

x ∈ ,

where 0 < α < 1,  is a bounded subset of R with smooth boundary ∂, J = [0, T ], and f : J × R → R is a nonlinear continuous function. We assume the following hypotheses to prove our main result: (H1) a(t) is continuous on J and there exists a β ∈ (0, α) such that a(t) ∈ L 1/β (0, T )  T β 1 β and (a(s)) ds ≤ C1 for C1 > 0. 0

(H2) f (t, u) is continuous with respect to u, Lebesgue measurable with respect to t and satisfies   f (t, u) dx φ(x)u(x, t) dx  φ(x) ≤ f t,  ,  φ(x) dx  φ(x) dx for some function φ(x). (H3) There exists an integrable function m(t) : J → [0, ∞) such that f (t, u) ≤ m(t) u , where m(t) ∈ L 1/β (0, T ), for some β ∈ (0, α), and 

T

1

(m(s)) β ds

β

≤ C2 , for C2 > 0.

0

It is easy to show that the initial value problem (5.31) is equivalent to the following integral equation:



u(x, t) = u 0 (x) + +

1 (α)

 t 1 (t − s)α−1 a(s)u(x, s) ds (α) 0  t (t − s)α−1 f (s, u(x, s)) ds,

(5.32)

0

for t ∈ J .

5.4.1 Dirichlet Boundary Condition Now we prove the existence of solution of (5.31) with Dirichlet boundary condition u(x, t) = 0,

(x, t) ∈ ∂ × J,

(5.33)

where ∂ is the boundary of . In order to prove the result, consider the following eigenvalue problem: u + λu = 0, (x, t) ∈  × J, u = 0, (x, t) ∈ ∂ × J,

 (5.34)

where λ is a constant not depending on the variables x and t. Thus for x ∈ , the smallest eigenvalue λ1 of the problem (5.34) is positive and the corresponding eigenfunction φ(x) ≥ 0. Define the function U (t) as U (t) =



u(x, t)φ(x) dx  φ(x) dx

(5.35)

and for any constant b > 0, take ⎧ ⎨



(α)b r1 = min T, ⎩ ( U (0) + b)(λ1 C1 + C2 )



α−β 1−β

⎫ 1 1−β  α−β ⎬ ⎭

.

Theorem 5.4.1 Assume that there exists β ∈ (0, α) for some α > 0 such that (H1)– (H3) hold. Then there exists at least one solution for the initial value problem (5.31) on  × [0, r1 ]. Proof First we have to prove that the initial value problem (5.31) with condition (5.33) has a solution if and only if the equation λ1 U (t) = U (0) − (α) has a solution.

 0

t

(t − s)

α−1

1 U (s) ds + (α)



t 0

(t − s)α−1 f (s, U (s)) ds

5.4 Nonlinear Fractional Equations

129

Step 1. Suppose U (t) is a solution of the above equation. Then by Lemma 3.1 of [17] u(x, t) is a solution of Eq. (5.32). Conversely, let u(x, t) be a solution of (5.31). This implies that u(x, t) is a solution of (5.32). Now multiplying both sides of Eq. (5.32) by φ(x) and integrating with respect to x ∈ , we get 

 t  1 φ(x)u 0 (x) dx + φ(x) (t − s)α−1 a(s)u(x, s) ds dx (α)   0   t 1 α−1 φ(x) (t − s) f (s, u(x, s)) ds dx. + (α)  0

 

φ(x)u(x, t) dx =

Invoking Assumption (H2), we get U (t) ≤ U (0) −

 t  t λ1 1 (t − s)α−1 a(s)U (s) ds + (t − s)α−1 f (s, U (s)) ds. (α) 0 (α) 0

Let K = {U : U ∈ C(J, R), U (t) − U (0) ≤ b} for some b > 0. Define an operator F : C(J, R) → C(J, R) by FU (t) = U (0) −

 t  t λ1 1 (t − s)α−1 a(s)U (s) ds + (t − s)α−1 f (s, U (s)) ds. (α) 0 (α) 0

Clearly U (0) ∈ K . This means that K is nonempty. From our construction of K , we can say that K is closed and bounded. Now, for any U1 , U2 ∈ K and for any a1 , a2 ≥ 0 such that a1 + a2 = 1, a1 U1 (t) + a2 U2 (t) − U (0) ≤ a1 U1 (t) − U (0) +a2 U2 (t) − U (0) ≤ a1 b + a2 b = b. Thus a1 U1 + a2 U2 ∈ K . Therefore K is a nonempty closed convex set. Next we have to prove that the operator F maps K into itself.   t     FU (t) − U (0) =  λ1 (t − s)α−1 a(s)U (s) ds  (α) 0   t  1 + (t − s)α−1 f (s, U (s)) ds   (α) 0  t  t λ1 1 ≤ (t − s)α−1 f (s, U (s)) ds. ( U (0) + b) (t − s)α−1 a(s) ds + (α) (α) 0 0



Then, by using Assumptions (H1), (H3) and Holder’s inequality, we achieve FU (t) − U (0) ≤ ≤ + ≤ =

 t  t λ1 1 m(s)(t − s)α−1 U (s) ds ( U (0) + b) (t − s)α−1 a(s) ds + (α) (α) 0 0 1−β  t β  t

1 1 λ1 1−β ds a(s) β ds (t − s)α−1 ( U (0) + b) (α) 0 0 1−β  t β  t 1

1 1 1−β (t − s)α−1 ds ( U (0) + b) (m(s)) β ds (α) 0 0     ( U (0) + b) λ1 C1 1 − β 1−β α−β ( U (0) + b) C2 1 − β 1−β α−β r1 + r1 (α) α−β (α) α−β  1−β ( U (0) + b) (λ1 C1 + C2 ) 1 − β α−β r1 ≤ b. (α) α−β

Therefore F maps K into itself. Now define a sequence {Uk (t)} in K such that U0 (t) = U (0) and Uk+1 (t) = FUk (t), k = 0, 1, 2, . . . (t) ∈ K such Since K is closed, there exists a subsequence {Uki (t)} of Uk (t) and U that (t). (5.36) lim Uki (t) = U ki →∞

Then the Lebesgue dominated convergence theorem yields  t  t   (s) ds + 1 (s) ds. (t) = U (0) − λ1 (t − s)α−1 a(s)U (t − s)α−1 f s, U U (α) 0 (α) 0

Next we claim that F is completely continuous. For that, first we prove that F : K → K is continuous. Step 2. Let {Um (t)} be a converging sequence in K to U (t). Then, for any  > 0, let Um (t) − U (t) ≤

(α) α−β

3λ1 C1r1



α−β 1−β

1−β .

By Assumption (H2), f (t, Um (t)) −→ f (t, U (t)) for each t ∈ [0, r1 ] and since    f (t, Um (t)) − f (t, U (t)) ≤ (α) 3r1α



α−β 1−β

1−β ,

5.4 Nonlinear Fractional Equations

131

indeed, we can establish

   ‖FU_m(t) − FU(t)‖ ≤ (λ₁C₁/Γ(α)) ((1 − β)/(α − β))^{1−β} r₁^{α−β} ‖U_m(t) − U(t)‖
                       + (r₁^{α}/Γ(α)) ((1 − β)/(α − β))^{1−β} ‖f(t, U_m(t)) − f(t, U(t))‖ ≤ ε.

Taking the limit as m → ∞, it is ensured that F is continuous, since ε is arbitrarily small.

Step 3. Moreover, for U ∈ K,

   ‖FU(t)‖ ≤ ‖U(0)‖ + ((λ₁C₁ + C₂)/Γ(α)) (‖U(0)‖ + b) ((1 − β)/(α − β))^{1−β} r₁^{α−β} ≤ ‖U(0)‖ + b.

Hence FK is uniformly bounded. Now it remains to show that F maps K into an equicontinuous family.

Step 4. Now let U ∈ K and t₁, t₂ ∈ J. Then if 0 < t₁ < t₂ ≤ r₁, by Assumptions (H1)–(H3), we obtain

   ‖FU(t₁) − FU(t₂)‖ ≤ (λ₁/Γ(α)) (‖U(0)‖ + b) ∫_0^{t₁} |(t₂ − s)^{α−1} − (t₁ − s)^{α−1}| a(s) ds
                       + (λ₁/Γ(α)) (‖U(0)‖ + b) ∫_{t₁}^{t₂} (t₂ − s)^{α−1} a(s) ds
                       + (1/Γ(α)) ∫_0^{t₁} |(t₂ − s)^{α−1} − (t₁ − s)^{α−1}| ‖f(s, U(s))‖ ds
                       + (1/Γ(α)) ∫_{t₁}^{t₂} (t₂ − s)^{α−1} ‖f(s, U(s))‖ ds
                     ≤ (λ₁C₁/Γ(α)) (‖U(0)‖ + b) ( ∫_0^{t₁} |(t₂ − s)^{α−1} − (t₁ − s)^{α−1}|^{1/(1−β)} ds )^{1−β}
                       + (λ₁C₁/Γ(α)) (‖U(0)‖ + b) ( ∫_{t₁}^{t₂} ((t₂ − s)^{α−1})^{1/(1−β)} ds )^{1−β}
                       + (C₂/Γ(α)) (‖U(0)‖ + b) ( ∫_0^{t₁} |(t₂ − s)^{α−1} − (t₁ − s)^{α−1}|^{1/(1−β)} ds )^{1−β}
                       + (C₂/Γ(α)) (‖U(0)‖ + b) ( ∫_{t₁}^{t₂} ((t₂ − s)^{α−1})^{1/(1−β)} ds )^{1−β}.

The right-hand side is independent of U ∈ K. Since 0 < β < α < 1, the right-hand side of the above inequality goes to zero as t₁ → t₂. Thus F maps K into an equicontinuous family of functions. In view of the Arzela–Ascoli theorem, F is completely continuous. Then, applying the Leray–Schauder fixed point theorem, we deduce that F has a fixed point in K which is a solution of (5.31). □
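The construction in the proof is essentially a successive-approximation scheme U_{k+1} = FU_k for the reduced integral equation. The following sketch (an added illustration, not part of the text) discretizes the operator F on a uniform grid with a simple product-rectangle rule and iterates it; the particular choices of a(t), f(t, u), λ₁, α, U(0) and the grid size are assumptions made only for the demonstration.

```python
import numpy as np
from math import gamma

# Illustrative data (assumptions, not from the text):
alpha, lam1 = 0.5, 1.0          # fractional order and first eigenvalue
a = lambda t: t**2              # coefficient a(t)
f = lambda t, u: np.sin(u)      # nonlinearity f(t, u)
U0, r1, N = 1.0, 1.0, 400       # initial value U(0), interval [0, r1], grid size

t = np.linspace(0.0, r1, N + 1)
h = t[1] - t[0]

def F(U):
    """Discretized operator:
    (FU)(t) = U0 - lam1/Gamma(alpha) * int_0^t (t-s)^(alpha-1) a(s) U(s) ds
                 + 1/Gamma(alpha) * int_0^t (t-s)^(alpha-1) f(s, U(s)) ds,
    approximated by a left-rectangle product rule."""
    FU = np.full_like(t, U0)
    for i in range(1, N + 1):
        s = t[:i]                             # nodes s_j with s_j < t_i
        w = (t[i] - s)**(alpha - 1) * h       # product-rectangle weights
        FU[i] = (U0
                 - lam1 / gamma(alpha) * np.sum(w * a(s) * U[:i])
                 + 1.0 / gamma(alpha) * np.sum(w * f(s, U[:i])))
    return FU

# Picard-type iteration U_{k+1} = F U_k starting from U_0(t) = U(0)
U = np.full_like(t, U0)
for k in range(30):
    U_new = F(U)
    if np.max(np.abs(U_new - U)) < 1e-8:
        break
    U = U_new
print(k, U[-1])
```

The fixed point reached by this iteration is a discrete approximation of the solution whose existence the theorem guarantees; the scheme is only first-order accurate and is meant as a sketch, not as a production solver.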


5.4.2 Neumann Boundary Condition

Next our aim is to show the existence of solutions of (5.31) with the Neumann boundary condition instead of the Dirichlet boundary condition. That is,

   ∂u(x, t)/∂n = 0, (x, t) ∈ ∂Ω × J,   (5.37)

where n is an outward unit normal. Now we define the function V(t) by

   V(t) = ∫_Ω u(x, t) dx / ∫_Ω dx   (5.38)

and for any constant b > 0, take

   r₂ = min{ T, [ (Γ(α) b/((‖V(0)‖ + b) C₂)) ((α − β)/(1 − β))^{1−β} ]^{1/(α−β)} }.

Theorem 5.4.2 Assume that there exists a β ∈ (0, α) for some α > 0 such that (H2)–(H3) hold. Then there exists at least one solution for the initial value problem (5.31) on Ω × [0, r₂].

Proof The initial value problem (5.31) with the Neumann boundary condition has a solution if and only if the equation

   V(t) = V(0) + (1/Γ(α)) ∫_0^t (t − s)^{α−1} f(s, V(s)) ds   (5.39)

has a solution. Let u(x, t) be a solution of (5.31). Therefore u(x, t) is a solution of (5.32). Now integrating both sides of (5.32) with respect to x ∈ Ω implies

   ∫_Ω u(x, t) dx = ∫_Ω u₀(x) dx + (1/Γ(α)) ∫_Ω ∫_0^t (t − s)^{α−1} a(s) Δu(x, s) ds dx
                    + (1/Γ(α)) ∫_Ω ∫_0^t (t − s)^{α−1} f(s, u(x, s)) ds dx.

Using Green's identity (5.4) and the Neumann boundary condition, we obtain

   ∫_Ω Δu(x, t) dx = 0.

As a consequence, from (5.38) and Assumption (H2), we get


   V(t) ≤ V(0) + (1/Γ(α)) ∫_0^t (t − s)^{α−1} f(s, V(s)) ds.

In order to show the existence of a solution of (5.39), define an operator S : C(J, R) → C(J, R) by

   SV(t) = V(0) + (1/Γ(α)) ∫_0^t (t − s)^{α−1} f(s, V(s)) ds.

First we have to prove that the operator S maps K into itself. We have

   ‖SV(t) − V(0)‖ = ‖ (1/Γ(α)) ∫_0^t (t − s)^{α−1} f(s, V(s)) ds ‖ ≤ (1/Γ(α)) ∫_0^t (t − s)^{α−1} ‖f(s, V(s))‖ ds.

With the help of Assumption (H3) and the Hölder inequality, we determine

   ‖SV(t) − V(0)‖ ≤ (1/Γ(α)) ∫_0^t m(s)(t − s)^{α−1} ‖V(s)‖ ds
                  ≤ ((‖V(0)‖ + b)/Γ(α)) ( ∫_0^t (t − s)^{(α−1)/(1−β)} ds )^{1−β} ( ∫_0^t (m(s))^{1/β} ds )^{β}
                  ≤ ((‖V(0)‖ + b) C₂/Γ(α)) ((1 − β)/(α − β))^{1−β} r₂^{α−β} ≤ b.

Since K is closed, we next define a sequence {V_k(t)} in K which has a subsequence {V_{k_i}(t)} such that

   lim_{k_i→∞} V_{k_i}(t) = Ṽ(t).   (5.40)

Thus, by Lebesgue's dominated convergence theorem, we obtain

   Ṽ(t) = V(0) + (1/Γ(α)) ∫_0^t (t − s)^{α−1} f(s, Ṽ(s)) ds.

As in the previous theorem, it can be proved in the same way that S is completely continuous. Thus, by the Leray–Schauder fixed point theorem, we conclude that S has a fixed point in K which is a solution of (5.31). □


5.5 Fractional Equations with Kernel

In this section, we consider the fractional partial differential equation with kernel of the form

   ^C∂^α u/∂t^α = a(t) Δu(x, t) + ∫_0^t h(t − s) Δu(x, s) ds + f(t, u(x, t)), t ∈ J,   (5.41)

with the initial condition

   u(x, 0) = u₀(x), x ∈ Ω,

where 0 < α < 1, h : J → R is a positive kernel and f : J × R → R is a nonlinear function. Equation (5.41) is a special case of the integrodifferential equation of motion of a fractional Maxwell fluid with zero pressure. In order to establish the existence of a solution of (5.41), we additionally need the following condition on the kernel along with (H1)–(H3):

(H4) There exists a constant C₃ > 0 such that

   ( ∫_0^T ( ∫_0^s h(s − τ) dτ )^{1/β} ds )^{β} ≤ C₃.

The integral equation corresponding to (5.41) can be written as

   u(x, t) = u₀(x) + (1/Γ(α)) ∫_0^t (t − s)^{α−1} a(s) Δu(x, s) ds
             + (1/Γ(α)) ∫_0^t (t − s)^{α−1} ∫_0^s h(s − τ) Δu(x, τ) dτ ds
             + (1/Γ(α)) ∫_0^t (t − s)^{α−1} f(s, u(x, s)) ds.   (5.42)

For any constant b > 0, take

   r₃ = min{ T, [ (Γ(α) b/((‖U(0)‖ + b)(λ₁(C₁ + C₃) + C₂))) ((α − β)/(1 − β))^{1−β} ]^{1/(α−β)} }.

Now we are concerned with the existence of solutions of (5.41) with the Dirichlet boundary condition. The main theorem is as follows:

Theorem 5.5.1 Assume that there exists β ∈ (0, α) for some 0 < α < 1 such that (H1)–(H3) and (H4) hold. Then there exists at least one solution for the initial value problem (5.41) on Ω × [0, r₃].


Proof Our first aim is to prove that the initial value problem (5.41) has a solution if and only if the equation

   U(t) = U(0) − (λ₁/Γ(α)) ∫_0^t (t − s)^{α−1} a(s) U(s) ds
          − (λ₁/Γ(α)) ∫_0^t (t − s)^{α−1} ∫_0^s h(s − τ) U(τ) dτ ds
          + (1/Γ(α)) ∫_0^t (t − s)^{α−1} f(s, U(s)) ds

has a solution. We begin the proof by assuming u(x, t) to be a solution of (5.42). Now multiplying both sides of Eq. (5.42) by φ(x) and integrating with respect to x ∈ Ω, we get

   ∫_Ω φ(x) u(x, t) dx = ∫_Ω φ(x) u₀(x) dx + (1/Γ(α)) ∫_Ω φ(x) ∫_0^t (t − s)^{α−1} a(s) Δu(x, s) ds dx
                         + (1/Γ(α)) ∫_Ω φ(x) ∫_0^t (t − s)^{α−1} ∫_0^s h(s − τ) Δu(x, τ) dτ ds dx
                         + (1/Γ(α)) ∫_Ω φ(x) ∫_0^t (t − s)^{α−1} f(s, u(x, s)) ds dx.   (5.43)

Combining (5.35), Assumptions (H2) and (H4) with (5.43), we get

   U(t) ≤ U(0) − (λ₁/Γ(α)) ∫_0^t (t − s)^{α−1} a(s) U(s) ds
         − (λ₁/Γ(α)) ∫_0^t (t − s)^{α−1} ∫_0^s h(s − τ) U(τ) dτ ds
         + (1/Γ(α)) ∫_0^t (t − s)^{α−1} f(s, U(s)) ds.

Let us define an operator Q : C(J, R) → C(J, R) by

   QU(t) = U(0) − (λ₁/Γ(α)) ∫_0^t (t − s)^{α−1} a(s) U(s) ds
           − (λ₁/Γ(α)) ∫_0^t (t − s)^{α−1} ∫_0^s h(s − τ) U(τ) dτ ds
           + (1/Γ(α)) ∫_0^t (t − s)^{α−1} f(s, U(s)) ds.   (5.44)


Next we verify that Q maps K into itself. We have

   ‖QU(t) − U(0)‖ ≤ (λ₁/Γ(α)) (‖U(0)‖ + b) ∫_0^t (t − s)^{α−1} a(s) ds
                   + (λ₁/Γ(α)) (‖U(0)‖ + b) ∫_0^t (t − s)^{α−1} ∫_0^s h(s − τ) dτ ds
                   + (1/Γ(α)) ∫_0^t (t − s)^{α−1} ‖f(s, U(s))‖ ds.

Making use of Hölder's inequality and the assumptions, for any U ∈ K, we can establish

   ‖QU(t) − U(0)‖ ≤ (λ₁C₁/Γ(α)) (‖U(0)‖ + b) ( ∫_0^t (t − s)^{(α−1)/(1−β)} ds )^{1−β}
                   + (λ₁C₃/Γ(α)) (‖U(0)‖ + b) ( ∫_0^t (t − s)^{(α−1)/(1−β)} ds )^{1−β}
                   + (1/Γ(α)) ∫_0^t m(s)(t − s)^{α−1} ‖U(s)‖ ds
                  ≤ ((‖U(0)‖ + b) λ₁C₁/Γ(α)) ((1 − β)/(α − β))^{1−β} r₃^{α−β}
                   + ((‖U(0)‖ + b) λ₁C₃/Γ(α)) ((1 − β)/(α − β))^{1−β} r₃^{α−β}
                   + ((‖U(0)‖ + b) C₂/Γ(α)) ((1 − β)/(α − β))^{1−β} r₃^{α−β}
                  = ((‖U(0)‖ + b)(λ₁(C₁ + C₃) + C₂)/Γ(α)) ((1 − β)/(α − β))^{1−β} r₃^{α−β} ≤ b, t ∈ [0, r₃].

For any sequence {U_k(t)} in K, the Lebesgue dominated convergence theorem implies that

   Ũ(t) = U(0) − (λ₁/Γ(α)) ∫_0^t (t − s)^{α−1} a(s) Ũ(s) ds
          − (λ₁/Γ(α)) ∫_0^t (t − s)^{α−1} ∫_0^s h(s − τ) Ũ(τ) dτ ds
          + (1/Γ(α)) ∫_0^t (t − s)^{α−1} f(s, Ũ(s)) ds,

where Ũ(t) is defined as in (5.36). As in the previous theorem, we can prove that Q is completely continuous. Then, applying the Leray–Schauder fixed point theorem, we achieve that Q has a fixed point in K which is a solution of (5.41). □


The following theorem asserts the existence of a solution of (5.41) with the Neumann boundary condition (5.37).

Theorem 5.5.2 Assume that there exists β ∈ (0, α) for some 0 < α < 1 such that (H2) and (H3) hold. Then there exists at least one solution for the initial value problem (5.41) on Ω × [0, r₂].

Proof In order to prove the existence of solutions of (5.41), it is enough to show that the equation

   V(t) = V(0) + (1/Γ(α)) ∫_0^t (t − s)^{α−1} f(s, V(s)) ds

has a solution. Assume u(x, t) to be a solution of (5.41). Then it follows that u(x, t) is a solution of (5.42). Now integrating both sides of Eq. (5.42) with respect to x ∈ Ω, we are led to

   ∫_Ω u(x, t) dx = ∫_Ω u₀(x) dx + (1/Γ(α)) ∫_Ω ∫_0^t (t − s)^{α−1} a(s) Δu(x, s) ds dx
                    + (1/Γ(α)) ∫_Ω ∫_0^t (t − s)^{α−1} ∫_0^s h(s − τ) Δu(x, τ) dτ ds dx
                    + (1/Γ(α)) ∫_Ω ∫_0^t (t − s)^{α−1} f(s, u(x, s)) ds dx.   (5.45)

Combining Green's identity, the Neumann boundary condition and Assumption (H2), (5.45) can be written as

   V(t) ≤ V(0) + (1/Γ(α)) ∫_0^t (t − s)^{α−1} f(s, V(s)) ds.

Now an operator P : C(J, R) → C(J, R) is defined by

   PV(t) = V(0) + (1/Γ(α)) ∫_0^t (t − s)^{α−1} f(s, V(s)) ds.

Next we have to prove that the operator P maps K into itself. From the above equation, we observe that

   ‖PV(t) − V(0)‖ ≤ (1/Γ(α)) ∫_0^t (t − s)^{α−1} ‖f(s, V(s))‖ ds ≤ b.

Since K is closed, for any sequence {V_k(t)} in K with Ṽ(t) as in (5.40), the Lebesgue dominated convergence theorem gives


   Ṽ(t) = V(0) + (1/Γ(α)) ∫_0^t (t − s)^{α−1} f(s, Ṽ(s)) ds.

It is easy to show that P is completely continuous. Then, by the Leray–Schauder fixed point theorem, we conclude that P has a fixed point in K which is a solution of (5.41). □

5.6 Examples

Example 5.6.1 Consider the following nonlinear fractional partial differential equation:

   ^C∂^{1/2} u(x, t)/∂t^{1/2} = t² Δu(x, t) + u²(x, t), (x, t) ∈ Ω × J,   (5.46)

with the initial condition u(x, 0) = u₀, x ∈ Ω, and the boundary condition u(x, t) = 0, (x, t) ∈ ∂Ω × J, where J = [0, 1] and Ω = [0, π/2]. Here a(t) = t² and f(t, u(x, t)) = u²(x, t). Since the eigenfunctions of the Laplacian operator are sin mx and cos mx with λ = m², we note that Assumptions (H1)–(H3) of Theorem 5.4.1 are satisfied for some β ∈ (0, 1/2). Hence Problem (5.46) has a solution.

Example 5.6.2 Consider the fractional partial differential equation with kernel of the form

   ^C∂^{1/2} u(x, t)/∂t^{1/2} = Δu(x, t) + ∫_0^t e^{−(t−s)} Δu(x, s) ds + u²(x, t), (x, t) ∈ Ω × J,   (5.47)

with the initial condition u(x, 0) = u₀, x ∈ Ω, and the boundary condition u(x, t) = 0, (x, t) ∈ ∂Ω × J, where J = [0, 1] and Ω = [0, π/2]. Here a(t) = 1, h(t) = e^{−t} and f(t, u(x, t)) = u²(x, t). Note that there is a constant K > 0 such that

   ( ∫_0^1 ( ∫_0^s e^{−(s−τ)} dτ )^{1/β} ds )^{β} ≤ K.

Observe that Assumption (H4) is satisfied in addition to Assumptions (H1)–(H3) of Theorem 5.5.1 for some β ∈ (0, 1/2). Hence Eq. (5.47) has a solution.
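The kernel bound K in Example 5.6.2 can also be checked numerically. The short sketch below is an added illustration (the value of β is an assumption chosen in (0, 1/2) for the check); it evaluates ( ∫_0^1 ( ∫_0^s e^{−(s−τ)} dτ )^{1/β} ds )^{β} by quadrature.

```python
import numpy as np
from scipy.integrate import quad

beta = 0.25                                   # assumed beta in (0, 1/2)

inner = lambda s: 1.0 - np.exp(-s)            # int_0^s e^{-(s-tau)} dtau = 1 - e^{-s}
outer, _ = quad(lambda s: inner(s)**(1.0 / beta), 0.0, 1.0)
K = outer**beta
print(K)                                      # a finite value, so a bound of type (H4) holds
```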

5.7 Exercises

5.1. Solve the heat equation
     ∂u(x, t)/∂t = ∂²u(x, t)/∂x², 0 ≤ x ≤ l, t > 0,
     with initial condition u(x, 0) = f(x).
5.2. Solve the wave equation
     ∂²u(x, t)/∂t² = ∂²u(x, t)/∂x², 0 ≤ x ≤ l, t > 0,
     with initial condition u(x, 0) = f(x).
5.3. Using the Adomian decomposition method (ADM), solve the following fractional partial differential equation:
     ^C∂^α u(x, t)/∂t^α + ^C∂^β u(x, t)/∂t^β + u(x, t) = ∂²u(x, t)/∂x², 0 ≤ x ≤ l, t > 0, 1 < α ≤ 2, 1/2 < β ≤ 1,
     with initial conditions u(x, 0) = 0, u_t(x, 0) = e^x.
5.4. Solve the equation by the ADM
     ^C∂^α u(x, t)/∂t^α + ^C∂^{α−1} u(x, t)/∂t^{α−1} + u(x, t) = ∂²u(x, t)/∂x² + sinh u, 0 ≤ x ≤ l, t > 0, 1 < α ≤ 2,
     with initial conditions u(x, 0) = u_t(x, 0) = 0.
5.5. Solve the equation by the ADM
     ^C∂^α u(x, t)/∂t^α = ∂²u(x, t)/∂x² − ∂u(x, t)/∂x + (3t²/2) sinh x, 0 ≤ x ≤ l, t > 0, 0 < α ≤ 1,
     with initial conditions u(x, 0) = u_t(x, 0) = 0.
5.6. Solve the equation by the ADM
     ^C∂^α u(x, t)/∂t^α = ∂/∂x((x² + 3) ∂u(x, t)/∂x) − x ∂u(x, t)/∂x + xe^t + 1, 0 ≤ x ≤ l, t > 0, 0 < α ≤ 1,
     with initial conditions u(x, 0) = 1 + x, u_t(x, 0) = 2x.
5.7. Solve
     ^C∂^α u(x, t)/∂t^α = ∂²u(x, t)/∂x², 0 ≤ x ≤ l, t > 0, 0 < α ≤ 1,
     with initial condition u(x, 0) = f(x).
5.8. Solve
     i ^C∂^α ψ(x, t)/∂t^α + (1/2) ^C∂^β/∂x^β ( ^C∂^β ψ(x, t)/∂x^β ) = 0, 0 ≤ x ≤ l, t > 0, 0 < α, β ≤ 1,
     with initial condition ψ(x, 0) = ψ₀(x).
5.9. Establish the existence of a solution of the nonlinear equation
     ^C∂^{1/2} u/∂t^{1/2} = u(x, t) + sin u(x, t), u(x, 0) = u₀(x).

References

1. Nigmatullin, R.R.: The realization of the generalized transfer equation in a medium with fractal geometry. Physica Status Solidi B 133, 425–430 (1986)
2. Chen, P., Ma, W., Tao, S., Zhang, K.: Blowup and global existence of mild solutions for fractional extended Fisher-Kolmogorov equations. Int. J. Nonlinear Sci. Numer. Simul. 22, 641–656 (2021)
3. Langlands, T.A.M., Henry, B.I., Wearne, S.L.: Fractional cable equation models for anomalous electrodiffusion in nerve cells: infinite domain solutions. SIAM J. Appl. Math. 71, 1168–1203 (2011)
4. Hall, M.G., Barrick, T.R.: Two step anomalous diffusion tensor imaging. NMR Biomed. 25, 286–294 (2012)
5. Biler, P., Funaki, T., Woyczynski, W.: Fractal Burgers equations. J. Diff. Equ. 148, 9–46 (1998)
6. Momani, S.: Non-perturbative analytical solutions of the space and time fractional Burgers equations. Chaos Solitons Fractals 28, 930–937 (2006)
7. Gurney, W.S.C., Nisbet, R.M.: The regulation of inhomogeneous populations. J. Theor. Biol. 54, 35–49 (1977)
8. Guo, B., Pu, X., Huang, F.: Fractional Partial Differential Equations and their Numerical Solutions. Science Press, Beijing (2011)
9. Kubica, A., Ryszewska, K., Yamamoto, M.: Time-Fractional Differential Equations: A Theoretical Introduction. Springer, Singapore (2020)
10. Joice Nirmala, R., Balachandran, K.: Analysis of solutions of time fractional telegraph equation. J. Korean Soc. Ind. Appl. Math. 14, 209–224 (2014)
11. Kolokoltsov, V.: The probabilistic point of view on the generalized fractional partial differential equations. Fract. Calc. Appl. Anal. 22, 543–600 (2019)
12. Kilbas, A.A., Srivastava, H.M., Trujillo, J.J.: Theory and Applications of Fractional Differential Equations. Elsevier, Amsterdam (2006)
13. Evans, L.C.: Partial Differential Equations. American Mathematical Society, Providence (1998)
14. Adomian, G.: Solving Frontier Problems of Physics: The Decomposition Method. Kluwer Academic Publishers, Boston, MA (1994)
15. Gülkac, V.: The homotopy perturbation method for the Black-Scholes equation. J. Stat. Comput. Simul. 80, 1349–1354 (2010)
16. Kumar, S., Yildirim, A., Khan, Y., Jafari, H., Sayevand, K., Wei, L.: Analytical solution of fractional Black-Scholes European option pricing equation by using Laplace transform. J. Fract. Calc. Appl. 2, 1–9 (2012)
17. Ouyang, Z.: Existence and uniqueness of the solutions for a class of nonlinear fractional order partial differential equations with delay. Comput. Math. Appl. 61, 860–870 (2011)

Chapter 6
Fractional Integrals and Derivatives

Abstract In this chapter, we list various types of fractional integrals and fractional derivatives available in the literature. In fact, the purpose is to show that there are many types of fractional integrals and derivatives currently under investigation by several researchers, some with theory and others with applications. A brief comment about a few fractional derivatives is given. Several examples and exercises are constructed to understand the different definitions.

Keywords Definitions · Fractional integrals · Fractional derivatives

In this chapter, we list various types of fractional integrals and fractional derivatives available in the literature for the benefit of the readers. These definitions and their properties can be found in [1–7] and in particular in [8, 9]. In fact, the purpose of this chapter is to show that there are many types of fractional integrals and derivatives currently under investigation by several researchers, some with theory and others with applications. This may provide an insight to the students about the developments of the field. Here the functions are taken in appropriate function spaces with the required smoothness conditions.

6.1 Definitions of Fractional Integrals

1. Liouville left-sided integral [8]:

   I_+^α f(x) = (1/Γ(α)) ∫_{−∞}^x (x − ξ)^{α−1} f(ξ) dξ.   (6.1)

2. Liouville right-sided integral [8]:

   I_−^α f(x) = (1/Γ(α)) ∫_x^∞ (ξ − x)^{α−1} f(ξ) dξ.   (6.2)


3. Riemann–Liouville left-sided integral [8]:

   I_{a+}^α f(x) = (1/Γ(α)) ∫_a^x (x − ξ)^{α−1} f(ξ) dξ, x ≥ a.   (6.3)

4. Riemann–Liouville right-sided integral [8]:

   I_{b−}^α f(x) = (1/Γ(α)) ∫_x^b (ξ − x)^{α−1} f(ξ) dξ, x ≤ b.   (6.4)

5. Hadamard integral [8]:

   I_+^α f(x) = (1/Γ(α)) ∫_0^x (ln(x/ξ))^{α−1} f(ξ) dξ/ξ, x > 0, α > 0.   (6.5)

6. Weyl integral [4]:

   _xW_∞^α f(x) = (1/Γ(α)) ∫_x^∞ (ξ − x)^{α−1} f(ξ) dξ.   (6.6)

7. Erdelyi left-sided integral [8]:

   I_{σ,η}^α f(x) = (σ x^{−σ(α+η)}/Γ(α)) ∫_{−∞}^x (x^σ − ξ^σ)^{α−1} ξ^{ση+σ−1} f(ξ) dξ.   (6.7)

8. Erdelyi right-sided integral [8]:

   I_{σ,η}^α f(x) = (σ x^{ση}/Γ(α)) ∫_x^∞ (ξ^σ − x^σ)^{α−1} ξ^{σ(1−α−η)−1} f(ξ) dξ.   (6.8)

9. Kober left-sided integral [8]:

   I_{1,η}^α f(x) = (x^{−α−η}/Γ(α)) ∫_0^x (x − ξ)^{α−1} ξ^η f(ξ) dξ.   (6.9)

10. Kober right-sided integral [8]:

   I_{1,η}^α f(x) = (x^η/Γ(α)) ∫_x^∞ (ξ − x)^{α−1} ξ^{−α−η} f(ξ) dξ.   (6.10)

11. Riemann–Liouville left integral of variable fractional order [8]:

   _aI_x^{α(·,·)} f(x) = ∫_a^x (ξ − x)^{α(ξ,x)−1} f(ξ) dξ/Γ(α(ξ, x)).   (6.11)


12. Riemann–Liouville right integral of variable fractional order [8]:

   _xI_b^{α(·,·)} f(x) = ∫_x^b (x − ξ)^{α(ξ,x)−1} f(ξ) dξ/Γ(α(ξ, x)).   (6.12)

13. Generalized left-sided integral [10–12]:

   ^ρI_{a+}^α f(x) = (ρ^{1−α}/Γ(α)) ∫_a^x s^{ρ−1} f(s)/(x^ρ − s^ρ)^{1−α} ds.   (6.13)

14. Generalized right-sided integral [10–12]:

   ^ρI_{b−}^α f(x) = (ρ^{1−α}/Γ(α)) ∫_x^b s^{ρ−1} f(s)/(s^ρ − x^ρ)^{1−α} ds.   (6.14)

15. Fractional integral of a function f with respect to a function g of order α [9]:

   I_{a+,g}^α f(x) = (1/Γ(α)) ∫_a^x f(t) g′(t)/[g(x) − g(t)]^{1−α} dt, α > 0, a < b,   (6.15)

   for every function f ∈ L¹(a, b) and for any monotonic function g(t) with continuous derivative.
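Many of these integrals are straightforward to evaluate numerically. As an added illustration (not part of the original text), the sketch below computes the Riemann–Liouville integral (6.3) by adaptive quadrature and compares it with the well-known closed form I_{0+}^α x^ν = Γ(ν + 1)/Γ(ν + α + 1) x^{ν+α}; the values of α, ν and x are assumptions chosen only for the check.

```python
import numpy as np
from math import gamma
from scipy.integrate import quad

def rl_integral(f, a, x, alpha):
    """Riemann-Liouville integral (6.3): (1/Gamma(alpha)) * int_a^x (x - xi)^(alpha-1) f(xi) dxi."""
    val, _ = quad(lambda xi: (x - xi)**(alpha - 1) * f(xi), a, x)
    return val / gamma(alpha)

alpha, nu, x = 0.5, 2.0, 1.3                     # assumed sample values
numeric = rl_integral(lambda s: s**nu, 0.0, x, alpha)
closed  = gamma(nu + 1) / gamma(nu + alpha + 1) * x**(nu + alpha)
print(numeric, closed)                           # the two values agree to quadrature accuracy
```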

6.2 Definitions of Fractional Derivatives

1. Liouville derivative, for 0 < α < 1 [8]:

   D^α f(x) = (1/Γ(1 − α)) d/dx ∫_{−∞}^x (x − ξ)^{−α} f(ξ) dξ, −∞ < x < +∞.   (6.16)

2. Liouville left-sided derivative, for n − 1 < α < n [8]:

   D_+^α f(x) = (1/Γ(n − α)) d^n/dx^n ∫_{−∞}^x (x − ξ)^{n−α−1} f(ξ) dξ, −∞ < x.   (6.17)

3. Liouville right-sided derivative, for n − 1 < α < n [8]:

   D_−^α f(x) = ((−1)^n/Γ(n − α)) d^n/dx^n ∫_x^∞ (ξ − x)^{n−α−1} f(ξ) dξ, x < ∞.   (6.18)

4. Riemann–Liouville left-sided derivative [8]:

   D_{a+}^α f(x) = (1/Γ(n − α)) d^n/dx^n ∫_a^x (x − ξ)^{n−α−1} f(ξ) dξ, x ≥ a.   (6.19)


5. Riemann–Liouville right-sided derivative [8]:

   D_{b−}^α f(x) = ((−1)^n/Γ(n − α)) d^n/dx^n ∫_x^b (ξ − x)^{n−α−1} f(ξ) dξ, x ≤ b.   (6.20)

6. Caputo left-sided derivative [13]:

   ^CD_{a+}^α f(x) = (1/Γ(n − α)) ∫_a^x (x − ξ)^{n−α−1} f^{(n)}(ξ) dξ, x ≥ a.   (6.21)

7. Caputo right-sided derivative [13]:

   ^CD_{b−}^α f(x) = ((−1)^n/Γ(n − α)) ∫_x^b (ξ − x)^{n−α−1} f^{(n)}(ξ) dξ, x ≤ b.   (6.22)

8. Grunwald–Letnikov left-sided derivative [5] (a numerical sketch based on this definition is given at the end of this section):

   ^{GL}D_{a+}^α f(x) = lim_{h→0} (1/h^α) Σ_{k=0}^{[n]} (−1)^k (Γ(α + 1)/(Γ(k + 1)Γ(α − k + 1))) f(x − kh), nh = x − a.   (6.23)

9. Grunwald–Letnikov right-sided derivative [5]:

   ^{GL}D_{b−}^α f(x) = lim_{h→0} (1/h^α) Σ_{k=0}^{[n]} (−1)^k (Γ(α + 1)/(Γ(k + 1)Γ(α − k + 1))) f(x + kh), nh = b − x.   (6.24)

10. Weyl derivative [4]:

   _xD_∞^α f(x) = ((−1)^n/Γ(n − α)) d^n/dx^n ∫_x^∞ (ξ − x)^{n−α−1} f(ξ) dξ.   (6.25)

11. Marchaud derivative [9]:

   D_+^α f(x) = (α/Γ(1 − α)) ∫_{−∞}^x [f(x) − f(ξ)]/(x − ξ)^{1+α} dξ.   (6.26)

12. Marchaud left-sided derivative [9]:

   D_+^α f(x) = (α/Γ(1 − α)) ∫_0^∞ [f(x) − f(x − ξ)]/ξ^{1+α} dξ.   (6.27)

13. Marchaud right-sided derivative [9]:

   D_−^α f(x) = (α/Γ(1 − α)) ∫_0^∞ [f(x) − f(x + ξ)]/ξ^{1+α} dξ.   (6.28)


14. Hadamard derivative [8]:

   D_+^α f(x) = (α/Γ(1 − α)) ∫_0^x [f(x) − f(ξ)]/(ln(x/ξ))^{1+α} dξ/ξ.   (6.29)

15. Davison–Essex derivative [14]:

   D_0^{n+α} f(x) = (1/Γ(1 − α)) d^{n+1−k}/dx^{n+1−k} ∫_0^x (x − ξ)^{−α} f^{(k)}(ξ) dξ.   (6.30)

16. Riesz derivative, for n − 1 < α < n [8]:

   D_x^α f(x) = −(1/(2 cos(απ/2) Γ(α))) d^n/dx^n [ ∫_{−∞}^x (x − ξ)^{n−α−1} f(ξ) dξ + ∫_x^∞ (ξ − x)^{n−α−1} f(ξ) dξ ].   (6.31)

17. Coimbra derivative [15]:

   D_0^{α(x)} f(x) = (1/Γ(1 − α(x))) [ ∫_0^x (x − ξ)^{−α(x)} f′(ξ) dξ + f(0) x^{−α(x)} ].   (6.32)

18. Cossar derivative [7]:

   D_−^α f(x) = −(1/Γ(1 − α)) lim_{N→∞} d/dx ∫_x^N (ξ − x)^{−α} f(ξ) dξ.   (6.33)

19. Riemann–Liouville left derivative of variable fractional order [16]:

   _aD_x^{α(·,·)} f(x) = d/dx ∫_a^x (x − ξ)^{−α(ξ,x)} f(ξ) dξ/Γ(1 − α(ξ, x)).   (6.34)

20. Riemann–Liouville right derivative of variable fractional order [16]:

   _xD_b^{α(·,·)} f(x) = d/dx ∫_x^b (ξ − x)^{−α(ξ,x)} f(ξ) dξ/Γ(1 − α(ξ, x)).   (6.35)

21. Caputo left derivative of variable fractional order [8]:

   _a^CD_x^{α(·,·)} f(x) = ∫_a^x (x − ξ)^{−α(ξ,x)} f′(ξ) dξ/Γ(1 − α(ξ, x)).   (6.36)


22. Caputo right derivative of variable fractional order [8]:

   _x^CD_b^{α(·,·)} f(x) = ∫_x^b (ξ − x)^{−α(ξ,x)} f′(ξ) dξ/Γ(1 − α(ξ, x)).   (6.37)

23. Caputo derivative of variable fractional order [8]:

   ^CD_x^{α(x)} f(x) = (1/Γ(1 − α(x))) ∫_0^x (x − ξ)^{−α(ξ,x)} f′(ξ) dξ.   (6.38)

24. Modified Riemann–Liouville fractional derivative [8]:

   D^α f(x) = (1/Γ(1 − α)) d/dx ∫_0^x (x − ξ)^{−α} [f(ξ) − f(0)] dξ.   (6.39)

25. Osler fractional derivative [17]:

   _aD_z^α f(z) = (Γ(α + 1)/(2πi)) ∫_C f(ξ)/(ξ − z)^{1+α} dξ,   (6.40)

   where C is the contour of integration.

26. Hilfer derivative [18]:

   D^{μ,γ} f(x) = I^{γ(1−μ)} (d/dx) I^{(1−μ)(1−γ)} f(x).   (6.41)

27. Caputo–Fabrizio fractional derivative of order 0 < α < 1 [19]:

   ^{CF}D^α f(t) = (M(α)/(1 − α)) ∫_a^t exp(−α(t − τ)/(1 − α)) f′(τ) dτ,   (6.42)

   where M(α) is a normalization function with M(0) = M(1) = 1.

28. Atangana–Baleanu fractional derivative in the Riemann–Liouville sense [20]:

   ^{ABR}_aD_t^α f(t) = (AB(α)/(1 − α)) d/dt ∫_a^t E_α(−α(t − τ)^α/(1 − α)) f(τ) dτ,   (6.43)

   where AB(α) is a normalization function with AB(0) = AB(1) = 1.

29. Atangana–Baleanu derivative in the Caputo sense [20]:

   ^{ABC}_aD_t^α f(t) = (AB(α)/(1 − α)) ∫_a^t E_α(−α(t − τ)^α/(1 − α)) f′(τ) dτ.   (6.44)

30. Generalized left-sided derivative (RL sense) [12]:

   D_{a+}^{α,ρ} f(x) = (ρ^{α−n+1}/Γ(n − α)) (x^{1−ρ} d/dx)^n ∫_a^x s^{ρ−1} f(s)/(x^ρ − s^ρ)^{α−n+1} ds.   (6.45)


31. Generalized right-sided derivative (RL sense) [12]:

   D_{b−}^{α,ρ} f(x) = (ρ^{α−n+1}/Γ(n − α)) (−x^{1−ρ} d/dx)^n ∫_x^b s^{ρ−1} f(s)/(s^ρ − x^ρ)^{α−n+1} ds.   (6.46)

32. Generalized left-sided derivative (Caputo sense) [12]:

   ^CD_{a+}^{α,ρ} f(x) = (ρ^{α−n+1}/Γ(n − α)) ∫_a^x s^{(ρ−1)(1−n)} f^{(n)}(s)/(x^ρ − s^ρ)^{α−n+1} ds.   (6.47)

33. Generalized right-sided derivative (Caputo sense) [12]:

   ^CD_{b−}^{α,ρ} f(x) = ((−1)^n ρ^{α−n+1}/Γ(n − α)) ∫_x^b s^{(ρ−1)(1−n)} f^{(n)}(s)/(s^ρ − x^ρ)^{α−n+1} ds.   (6.48)

34. R-L derivative of a function f with respect to a function g of order 0 < α < 1 [9]:

   D_g^α f(x) = (1/Γ(1 − α)) (1/g′(x)) d/dx ∫_a^x f(t) g′(t)/[g(x) − g(t)]^α dt.   (6.49)

35. Marchaud derivative of a function f with respect to a function g of order 0 < α < 1 [9]:

   D_{a+,g}^α f(x) = (1/Γ(1 − α)) f(x)/[g(x) − g(a)]^α + (α/Γ(1 − α)) ∫_a^x [f(x) − f(t)]/[g(x) − g(t)]^{1+α} g′(t) dt.   (6.50)

The above list of definitions is not exhaustive, and many more new definitions will appear as and when new problems arise in different fields of research.
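Definition (6.23) lends itself directly to computation: one simply truncates the limit at a small finite step h with nh = x − a. The sketch below is an added illustration (the sample α, x and the test function are assumptions); it evaluates the Grunwald–Letnikov sum for f(x) = x² with a = 0 and compares it with Γ(3)/Γ(3 − α) x^{2−α}, the Riemann–Liouville value, with which the Grunwald–Letnikov derivative coincides for such a smooth function.

```python
import numpy as np
from math import gamma

def gl_derivative(f, a, x, alpha, n=2000):
    """Truncated Grunwald-Letnikov sum (6.23) with step h = (x - a)/n."""
    h = (x - a) / n
    k = np.arange(n + 1)
    # Coefficients (-1)^k Gamma(alpha+1) / (Gamma(k+1) Gamma(alpha-k+1)),
    # built by the stable recursion c_k = c_{k-1} * (k - 1 - alpha) / k.
    c = np.ones(n + 1)
    for j in range(1, n + 1):
        c[j] = c[j - 1] * (j - 1 - alpha) / j
    return np.sum(c * f(x - k * h)) / h**alpha

alpha, x = 0.6, 1.5                                     # assumed sample values
f = lambda s: s**2
approx = gl_derivative(f, 0.0, x, alpha)
exact  = gamma(3) / gamma(3 - alpha) * x**(2 - alpha)   # known Riemann-Liouville value
print(approx, exact)                                    # agree up to the O(h) truncation error
```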

6.3 Comments

Recently, it has been established that fractional derivatives introduced via regular kernels have no physical significance and contain nothing new [21–25]. In fact, they do not satisfy the rigorous requirements of a fractional derivative [26, 27]. Several authors have introduced new definitions of fractional derivatives, such as the conformable fractional derivative, the deformable derivative, the α-derivative, the M-fractional derivative, and so on. These concepts do not bring any novelty and have no physical relevance; they also create a certain confusion by using the term fractional, and they have nothing to do with the classical Riemann–Liouville fractional derivative.


Moreover, analyzing the definition of these derivatives, it is easy to observe that one can introduce a new fractional derivative without any physical significance.

(i) Conformable derivative [28]:

   D^α f(t) = lim_{h→0} [f(t + ht^{1−α}) − f(t)]/h.   (6.51)

(ii) α-derivative (see [29]):

   D^α f(t) = lim_{h→0} [f(te^{ht^{−α}}) − f(t)]/h.   (6.52)

(iii) Local M-derivative [29]:

   D_M^{α,β} f(t) = lim_{h→0} [f(tE_β(ht^{−α})) − f(t)]/h,   (6.53)

where E_β(·) is the Mittag–Leffler function. If f is differentiable, then

   D_M^{α,β} f(t) = (t^{1−α}/Γ(β + 1)) df/dt,

and for β = 1 it is equal to the α-derivative. Also D_M^{α,β}(_MI_a^{α,β} f(t)) = f(t), where

   _MI_a^{α,β} f(t) = Γ(1 + β) ∫_a^t f(s)/s^{1−α} ds.   (6.54)

Now let us introduce a general definition. Let φ be a strictly positive function defined on an interval J ⊂ R₊ and let f : (0, ∞) → R be a given function. Then we define the "φ-fractional derivative" of f by

   D^{(φ)} f(t) = lim_{ε→0} [f(t + εφ(t)) − f(t)]/ε, t > 0,   (6.55)

provided that the limit exists. It is easy to see from

   lim_{ε→0} [f(t + εφ(t)) − f(t)]/ε = lim_{ε→0} {[f(t + εφ(t)) − f(t)]/(εφ(t))} φ(t) = f′(t)φ(t)

that D^{(φ)} f(t) = f′(t)φ(t), t > 0, that is, D^{(φ)} f(t) represents the derivative of a given function f(t) with respect to another function φ(t), a concept that is well known even for fractional derivatives. Next, if we take φ(t) = t^{1−α} for t > 0 and α ∈ (0, 1), then from (6.55) we obtain the definition of the "conformable fractional derivative" of f of order α as it is defined in [28]. Next, for each ε > 0, let us take

it follows that D (φ) f (t) = f  (t)φ(t), t > 0, that is, D (φ) f (t) represents the derivative of a given function f (t) with respect to another function φ(t), a concept that is well known even for fractional derivatives. Next, if we take φ(t) = t 1−α for t > 0 and α ∈ (0, 1), then from (6.55), we obtain the definition of the “conformable fractional derivative” of f of order α as it is defined in [28]. Next, for each > 0, let us take


   φ_ε(t) = t^{1−α} Σ_{k=0}^∞ ε^k t^{−kα}/(k + 1)!

for t > 0 and α ∈ (0, 1). Then it is easy to see that

   te^{εt^{−α}} = t + εφ_ε(t),

so that the α-derivative can be defined by (6.55), because

   D^α f(t) = lim_{ε→0} [f(te^{εt^{−α}}) − f(t)]/ε = lim_{ε→0} [f(t + εφ_ε(t)) − f(t)]/ε = lim_{ε→0} D^{(φ_ε)} f(t) = f′(t) lim_{ε→0} φ_ε(t),

provided that the limit exists. Finally, if we take

   φ_ε(t) = t^{1−α} Σ_{k=0}^∞ ε^k t^{−kα}/Γ(α(k + 1) + 1)

for t > 0 and α ∈ (0, 1), then tE_α(εt^{−α}) = t + εφ_ε(t), and so the M-fractional derivative D_M^{α,β} f, defined in [29], can be obtained by (6.55). Similarly, by using the Mittag–Leffler functions with two parameters, three parameters, etc., one can define a new derivative; or, instead of φ, one can take any analytic function and introduce a new kind of fractional derivative without any purpose.
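Since D^{(φ)} f(t) = f′(t)φ(t), these local "fractional" derivatives are just weighted first derivatives, which is easy to confirm numerically. The short sketch below is an added illustration (the test function, the point t, the order α and the step h are all assumptions); it evaluates the conformable difference quotient (6.51) at a small but finite h and compares it with t^{1−α} f′(t).

```python
import numpy as np

alpha, t, h = 0.4, 2.0, 1e-6          # assumed sample values
f  = np.sin                           # test function
df = np.cos                           # its classical derivative

conformable_quotient = (f(t + h * t**(1 - alpha)) - f(t)) / h   # definition (6.51) at finite h
weighted_derivative  = t**(1 - alpha) * df(t)                   # f'(t) * phi(t) with phi(t) = t^(1-alpha)
print(conformable_quotient, weighted_derivative)                # agree up to O(h)
```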

6.4 Examples

Example 6.4.1 Let us evaluate the Hadamard fractional integral of the function f(x) = (ln(x/a))^{β−1}. From the definition of the Hadamard integral, we have

   I_{a+}^α f(x) = I_{a+}^α (ln(x/a))^{β−1} = (1/Γ(α)) ∫_a^x (ln(x/t))^{α−1} (ln(t/a))^{β−1} dt/t.

Introducing the change of variable η = (ln(t/a))/(ln(x/a)), the limits become η = 0 when t = a and η = 1 when t = x, and (ln(x/a)) dη = dt/t. Substituting this in the above integral, we get


   I_{a+}^α (ln(x/a))^{β−1} = (1/Γ(α)) (ln(x/a))^{α+β−1} ∫_0^1 (1 − η)^{α−1} η^{β−1} dη.

Since ∫_0^1 (1 − η)^{α−1} η^{β−1} dη = Γ(α)Γ(β)/Γ(α + β), we have

   I_{a+}^α (ln(x/a))^{β−1} = (Γ(β)/Γ(α + β)) (ln(x/a))^{α+β−1}.

Example 6.4.2 Property of the Weyl integral: W_x^α W_x^β f(x) = W_x^{α+β} f(x), x > 0, α > 0, β > 0. From the definition, W_x^α f(x) = (1/Γ(α)) ∫_x^∞ (t − x)^{α−1} f(t) dt and so

   W_x^β W_x^α f(x) = W_x^β [ (1/Γ(α)) ∫_x^∞ (t − x)^{α−1} f(t) dt ]
                    = (1/Γ(β)) ∫_x^∞ (s − x)^{β−1} [ (1/Γ(α)) ∫_s^∞ (t − s)^{α−1} f(t) dt ] ds.

From the Dirichlet formula [4], we have

   ∫_t^a (x − t)^{β−1} dx ∫_x^a (s − x)^{α−1} f(s) ds = B(α, β) ∫_t^a (s − t)^{α+β−1} f(s) ds,

where B(α, β) is the beta function. Allowing a → ∞ after using this identity, we have

   W_x^β W_x^α f(x) = (1/(Γ(α)Γ(β))) B(α, β) ∫_x^∞ (t − x)^{α+β−1} f(t) dt
                    = (1/Γ(α + β)) ∫_x^∞ (t − x)^{α+β−1} f(t) dt
                    = W_x^{α+β} f(x).

Example 6.4.3 Weyl fractional integral of f(x) = e^{−ax}, a > 0. From the definition of the Weyl fractional integral, we have

   W_x^α e^{−ax} = (1/Γ(α)) ∫_x^∞ (t − x)^{α−1} e^{−at} dt
                 = (1/Γ(α)) a^{−α} e^{−ax} ∫_0^∞ y^{α−1} e^{−y} dy   (by changing the variable y = a(t − x))
                 = (1/Γ(α)) a^{−α} e^{−ax} Γ(α)
                 = a^{−α} e^{−ax}.
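The closed form W_x^α e^{−ax} = a^{−α} e^{−ax} is easy to confirm by quadrature. The sketch below is an added illustration (the values of a, α and x are assumptions); here the improper integral from the definition (6.6) is evaluated directly, without the change of variable used in the example.

```python
import numpy as np
from math import gamma
from scipy.integrate import quad

a, alpha, x = 2.0, 0.7, 1.0        # assumed sample values

# Weyl integral (6.6): (1/Gamma(alpha)) * int_x^inf (t - x)^(alpha-1) e^{-a t} dt
val, _ = quad(lambda t: (t - x)**(alpha - 1) * np.exp(-a * t), x, np.inf)
numeric = val / gamma(alpha)
closed  = a**(-alpha) * np.exp(-a * x)
print(numeric, closed)             # the two values agree to quadrature accuracy
```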


Example 6.4.4 Using elementary calculus, we can find the Weyl integral of the following functions.

(i) If f(x) = cos ax, a > 0, then W_x^α cos ax = a^{−α} cos(ax + απ/2).

(ii) If f(x) = sin ax, a > 0, then W_x^α sin ax = a^{−α} sin(ax + απ/2).

(iii) If f(x) = x^{−β}, 0 < α < β, x > 0, then W_x^α x^{−β} = (Γ(β − α)/Γ(β)) x^{α−β}.

Example 6.4.5 Weyl integral of the product of x with f(x). By applying the definition, we have

   W_x^α [x f(x)] = (1/Γ(α)) ∫_x^∞ (t − x)^{α−1} [t f(t)] dt
                  = (1/Γ(α)) ∫_x^∞ (t − x)^{α−1} [(t − x) + x] f(t) dt
                  = α W_x^{α+1} f(x) + x W_x^α f(x).

In particular, if f(x) = e^{−ax}, a > 0, then W_x^α [xe^{−ax}] = a^{−(α+1)}(α + ax) e^{−ax}.

Example 6.4.6 Weyl fractional derivative of f(x) = e^{−ax}. From the definition,

   D_−^α e^{−ax} = (−1)^n (d^n/dx^n) W_x^{n−α} e^{−ax},

where n is the smallest integer greater than α > 0. Let β = n − α. Then

   D_−^α e^{−ax} = (−1)^n (d^n/dx^n) W_x^β e^{−ax} = (−1)^n (d^n/dx^n) [a^{−β} e^{−ax}] = a^{−β} [a^n e^{−ax}] = a^α e^{−ax}.


Example 6.4.7 Marchaud fractional derivative of f(x) = e^{ax}. By definition, we have

   D_+^α e^{ax} = (α/Γ(1 − α)) ∫_0^∞ [e^{ax} − e^{a(x−t)}]/t^{1+α} dt
                = (α e^{ax}/Γ(1 − α)) ∫_0^∞ (1 − e^{−at})/t^{1+α} dt
                = (α a^α e^{ax}/Γ(1 − α)) ∫_0^∞ (1 − e^{−τ})/τ^{1+α} dτ
                = a^α e^{ax},

since integration by parts gives

   ∫_0^∞ (1 − e^{−τ})/τ^{1+α} dτ = (1/α) ∫_0^∞ e^{−τ}/τ^α dτ = Γ(1 − α)/α.

As a consequence, e^{ax} is a solution of the fractional differential equation D_+^α f(x) = a^α f(x).
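The identity D_+^α e^{ax} = a^α e^{ax} can also be checked numerically from the Marchaud form (6.27). The sketch below is an added illustration; the values of a, α and x are assumed sample values.

```python
import numpy as np
from math import gamma
from scipy.integrate import quad

a, alpha, x = 1.5, 0.4, 0.8        # assumed sample values

# Marchaud left-sided derivative (6.27):
# alpha/Gamma(1-alpha) * int_0^inf (f(x) - f(x - t)) / t^(1+alpha) dt  with f(x) = e^{a x}
integrand = lambda t: (np.exp(a * x) - np.exp(a * (x - t))) / t**(1 + alpha)
val, _ = quad(integrand, 0.0, np.inf)
numeric = alpha / gamma(1 - alpha) * val
closed  = a**alpha * np.exp(a * x)
print(numeric, closed)
```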

Example 6.4.8 Liouville and Marchaud fractional derivatives. We know that D_+^α f(x) = D I_+^{1−α} f(x). So

   D_+^α f(x) = (1/Γ(1 − α)) d/dx ∫_{−∞}^x (x − t)^{−α} f(t) dt
              = (1/Γ(1 − α)) d/dx ∫_0^∞ t^{−α} f(x − t) dt
              = (α/Γ(1 − α)) ∫_0^∞ ( ∫_t^∞ dη/η^{1+α} ) f′(x − t) dt,

and interchanging the order of integration, we have

   D_+^α f(x) = (α/Γ(1 − α)) ∫_0^∞ [f(x) − f(x − t)]/t^{1+α} dt, 0 < α < 1.

Here, f′ denotes the first derivative of f with respect to its argument. Similarly, we get

   D_−^α f(x) = (α/Γ(1 − α)) ∫_0^∞ [f(x) − f(x + t)]/t^{1+α} dt, 0 < α < 1.


Example 6.4.9 Generalized fractional derivative (R-L sense) of order 0 < α < 1 of the function f(x) = x^μ, μ ∈ R. From the definition, we have

   D_{0+}^{α,ρ} x^μ = (ρ^α/Γ(1 − α)) x^{1−ρ} d/dx ∫_0^x s^{ρ−1} s^μ/(x^ρ − s^ρ)^α ds.

To evaluate the integral, use the substitution t = s^ρ/x^ρ; we obtain

   ∫_0^x s^{ρ−1} s^μ/(x^ρ − s^ρ)^α ds = (x^{μ+ρ(1−α)}/ρ) ∫_0^1 t^{μ/ρ}/(1 − t)^α dt = (x^{μ+ρ(1−α)}/ρ) B(1 − α, 1 + μ/ρ),

where B is the beta function. Thus, by using the relation between the beta and gamma functions, we obtain

   D_{0+}^{α,ρ} x^μ = (Γ(1 + μ/ρ) ρ^α/Γ(1 + μ/ρ − α)) x^{μ−αρ}.

When ρ = 1, we recover the R-L derivative of x^μ as

   D_{0+}^α x^μ = (Γ(1 + μ)/Γ(1 + μ − α)) x^{μ−α}.
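For ρ = 1 the closed form reduces to the familiar Riemann–Liouville result, which can be verified numerically by differentiating the fractional integral I^{1−α} x^μ. The sketch below is an added illustration; μ, α, x and the step dx are assumed sample values, and a central difference stands in for d/dx.

```python
import numpy as np
from math import gamma
from scipy.integrate import quad

alpha, mu, x, dx = 0.3, 1.7, 1.2, 1e-4     # assumed sample values

def I_one_minus_alpha(y):
    """I^(1-alpha) of s^mu at y: (1/Gamma(1-alpha)) * int_0^y (y - s)^(-alpha) s^mu ds."""
    val, _ = quad(lambda s: (y - s)**(-alpha) * s**mu, 0.0, y)
    return val / gamma(1 - alpha)

# Riemann-Liouville derivative D^alpha = d/dx I^(1-alpha), approximated by a central difference
numeric = (I_one_minus_alpha(x + dx) - I_one_minus_alpha(x - dx)) / (2 * dx)
closed  = gamma(1 + mu) / gamma(1 + mu - alpha) * x**(mu - alpha)
print(numeric, closed)
```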

6.5 Exercises

6.1. Find the Grunwald–Letnikov derivative of the function f(x) = (x − a)².
6.2. Find the Hadamard derivative of the function f(x) = x with a = 1.
6.3. For the Hadamard fractional integral I_+^α and derivative D_+^α, establish that
     D_+^α I_+^α f(x) = f(x) and I_+^α I_+^β f(x) = I_+^{α+β} f(x).
6.4. Show that the Hadamard operator is a linear operator.
6.5. Identify the other linear operators of fractional derivative and integral.
6.6. Express the Hilfer fractional derivative in terms of the R-L derivative.
6.7. Express the Hilfer fractional derivative in terms of the Caputo derivative.
6.8. What is the connection between the Leibniz rule and the fractional derivative?
6.9. Prove that the generalized fractional integral operator ^ρI_{a+}^α satisfies the semigroup property.


6.10. If g′(t) ≠ 0, then prove that
      I_{a+,g}^α I_{a+,g}^β f(x) = I_{a+,g}^{α+β} f(x).
6.11. Compute the Hadamard integral of the function E_α[(ln(t/a))^α].
6.12. Find the Weyl fractional derivative of cos at of order 0 < α < 1.

References

1. Das, S.: Functional Fractional Calculus for Systems Identifications and Controls. Springer, New York (2008)
2. Diethelm, K.: The Analysis of Fractional Differential Equations. Springer-Verlag, New York (2010)
3. Mainardi, F.: Fractional Calculus and Waves in Linear Viscoelasticity: An Introduction to Mathematical Models. Imperial College Press, London (2010)
4. Miller, K., Ross, B.: An Introduction to the Fractional Calculus and Fractional Differential Equations. John Wiley and Sons Inc, New York (1993)
5. Podlubny, I.: Fractional Differential Equations. Academic Press, San Diego (1999)
6. Sabatier, J., Agrawal, O.P., Tenreiro Machado, J.A.: Advances in Fractional Calculus. Springer, Dordrecht (2007)
7. Yang, X.J.: General Fractional Derivatives: Theory, Methods and Applications. CRC Press, Boca Raton (2019)
8. Kilbas, A.A., Srivastava, H.M., Trujillo, J.J.: Theory and Applications of Fractional Differential Equations. Elsevier, Amsterdam (2006)
9. Samko, S.G., Kilbas, A.A., Marichev, O.I.: Fractional Integrals and Derivatives: Theory and Applications. Gordon and Breach Science Publishers, Amsterdam (1993)
10. Kiryakova, V.: Generalized Fractional Calculus and Applications. Wiley and Sons, New York (1994)
11. Kiryakova, V.: A brief story about the operators of the generalized fractional calculus. Frac. Calc. Appl. Anal. 11, 203–220 (2008)
12. Katugampola, U.N.: A new approach to generalized fractional derivatives. Bull. Math. Anal. Appl. 6, 1–15 (2014)
13. Caputo, M.: Linear model of dissipation whose Q is almost frequency independent-II. Geophys. J. Roy. Astron. Soc. 13, 529–539 (1967)
14. Davison, M., Essex, C.: Fractional differential equations and initial value problems. Math. Sci. 23, 108–116 (1998)
15. Coimbra, C.F.M.: Mechanics with variable-order differential operators. Annalen der Physik 12, 692–703 (2003)
16. Trujillo, J.J., Rivero, M., Bonilla, B.: On a Riemann-Liouville generalized Taylor's formula. J. Math. Anal. Appl. 231, 255–265 (1999)
17. Osler, T.J.: Leibniz rule for fractional derivatives generalized and an application to infinite series. SIAM J. Appl. Math. 18, 658–674 (1970)
18. Hilfer, R.: Applications of Fractional Calculus in Physics. World Scientific, Singapore (2000)
19. Caputo, M., Fabrizio, M.: A new definition of fractional derivative without singular kernel. Progr. Frac. Differ. Appl. 1, 73–85 (2015)
20. Atangana, A., Baleanu, D.: New fractional derivatives with non-local and non-singular kernel: theory and application to heat transfer model. Ther. Sci. 2, 763–769 (2016)
21. Abdelhakim, A.A.: The flaw in the conformable calculus: it is conformable because it is not fractional. Frac. Calc. Appl. Anal. 22, 242–254 (2019)
22. Abdelhakim, A.A., Tenreiro Machado, J.A.: A critical analysis of the conformable derivative. Nonlinear Dyn. 95, 3063–3073 (2019)
23. Diethelm, K., Garappa, R., Giusti, A., Stynes, M.: Why fractional derivatives with nonsingular kernels should not be used? Frac. Calc. Appl. Anal. 23, 610–634 (2020)
24. Giusti, A.: A comment on new definitions of fractional derivative. Nonlinear Dyn. 93, 1757–1763 (2018)
25. Ortigueira, M.D., Martynyuk, V., Fedula, M., Tenreiro Machado, J.: The failure of fractional calculus operators in two physical systems. Frac. Calc. Appl. Anal. 22, 255–270 (2019)
26. Ortigueira, M.D., Tenreiro Machado, J.: What is a fractional derivative? J. Comput. Phys. 293, 4–13 (2015)
27. Sales Teodoro, G., Tenreiro Machado, J.A., Capelas de Oliveira, E.: A review of definitions of fractional derivatives and other operators. J. Math. Phys. 388, 195–208 (2019)
28. Khalil, R., Horani, M.A., Yousef, A., Sababheh, M.: A new definition of fractional derivative. J. Comput. Appl. Math. 264, 65–70 (2014)
29. Vanterlal C. Sousa, J., Capelas de Oliveira, E.: On the local M-derivative. Progr. Frac. Differ. Appl. 4, 479–492 (2018)

Index

A
Absolutely continuous, 20, 38, 119, 120
Adomian Decomposition Method, 120, 125, 126, 139, 140
Arzela-Ascoli theorem, 69, 75
Asymptotically stable, 96, 100
Atangana-Baleanu fractional derivative, 148

B
Banach fixed point theorem, 17
Banach space, 17–19, 63, 65, 67, 68, 71, 94
Basset equation, 79
Boundary condition, 121, 123, 124, 138
Brouwer fixed point theorem, 18

C
Caputo-Fabrizio derivative, 148
Caputo fractional derivative, 36, 37, 43
Caputo variable fractional derivative, 147, 148
Coimbra derivative, 147
Compact operator, 74
Completely continuous, 18, 21, 73, 95, 130, 131, 133, 136, 138
Conformal derivative, 150
Contraction mapping, 17, 64
Controllability, 87, 91, 111
Controllability Grammian, 90, 92, 93, 106, 107
Cossar derivative, 147

D
Damped equation, 112
Davison-Essex derivative, 147
Duffing equation, 109, 110

E
Equicontinuous, 19, 69, 73, 74, 131
Erdelyi integral, 144

F
Fixed point, 17, 18, 63, 64, 69, 71, 76, 95, 115, 131, 133, 136, 138
Fractional Black–Scholes equation, 125
Fractional differential equations, 51, 54, 57, 58, 61–63, 65, 67, 70, 76–78, 82, 83, 87, 90, 91, 93, 99, 100, 106, 108, 110, 121, 122, 124, 154
Fractional diffusion equations, 116, 122
Fractional partial derivative, 118, 119
Fractional partial differential equations, 115, 118, 119, 124, 127, 134, 138, 139
Fractional partial integral, 118
Fractional wave equation, 123

G
Gamma function, 6–8, 23, 25, 43, 46, 47, 155
Generalized derivative, 148, 149, 155
Generalized integral, 145
Green's identity, 120, 137
Grunwald-Letnikov derivative, 155

H
Hadamard derivative, 147, 155
Hadamard integral, 144, 151, 156
Hilfer derivative, 148

I
Integrodifferential equation, 48, 49, 103, 111, 134

K
Kober integral, 144

L
Laplace transform, 4, 13–16, 21, 34, 35, 38, 57, 62, 78, 81, 97, 99, 104, 108, 123
Leray–Schauder theorem, 18
Liouville fractional derivative, 27, 30, 34–39, 41, 48, 117, 148, 149
Liouville fractional integral, 2, 4, 26, 122
Local M-derivative, 150

M
Marchaud derivative, 146, 149
Mittag-Leffler function, 8, 9, 11, 14, 55, 57, 62, 76, 91, 106, 127, 150, 151
Mittag-Leffler matrix function, 11, 12, 82, 83, 105, 107

O
Observability, 87, 111
Observability Grammian, 88, 106
Osler fractional derivative, 148

P
Pochhammer symbol, 6

R
Ramanujan, 46
Reconstruction kernel, 89
Riemann–Liouville fractional derivative, 27, 35–39, 41, 48, 117, 148, 149
Riemann–Liouville fractional integral, 4, 26, 38, 122
Riemann-Liouville variable fractional derivative, 147
Riemann-Liouville variable fractional integral, 144, 145
Riesz derivative, 147

S
Schaefer theorem, 18
Schauder theorem, 18
Semigroup property, 31, 48, 155
Sequential derivative, 40
Solution verification, 58
Stability, 95, 96, 99, 103, 112
Stable, 96, 99, 100, 108, 109, 111
Successive approximation, 54

T
Tautochrone problem, 2, 6
Taylor series, 39, 83

U
Uniformly bounded, 19, 74, 94, 107, 131

V
Vector fractional differential equation, 56
Volterra integral equation, 54

W
Weyl derivative, 146
Weyl integral, 144, 152, 153
Wiman's function, 8
Wright function, 10