Laplace Transforms [5 ed.]
 9798985972405

Citation preview

Laplace Transforms Laplace Transforms Robert P. Massé Subtitle Fifth Edition

Lorem Ipsum Dolor Facilisis

Also by Robert P. Massé

Copyright © 2022 Robert P. Massé

Physics: Nature of Physical Fields and Forces

All rights reserved.

Physics: Where It Went Wrong 5th Edition Vectors and Tensors of Physical Fields Number Theory

ISBN: 979-8-9859724-0-5

Complex Variables Massé, Robert P.

by Robert P. Massé

Laplace Transforms

Kay E. Massé

Physics: Principle of Least Action

i

Dedication This book is dedicated to my beloved wife Kay whose loving and insightful support made it possible

ii

Preface Mathematics is an essential tool in the continuing struggle to understand the physical world.

iii

! The subject of this book is the Laplace transform which is a particular mathematical transform. The Laplace transform provides an extremely powerful method for solving linear differential equations involving initial conditions. It does this by converting differential equations into algebraic equations. Differentiation is thereby replaced with algebraic manipulations. Even many differential equations that have discontinuous or impulsive forcing functions can easily be solved using Laplace transforms. These transforms therefore find application in the numerous diverse fields where differential equations are used to describe changes in an entity. ! The material in this book is organized and presented in a manner designed to make this a comprehensive reference work on Laplace transforms. Most of the sources for this book are given in the references at the end of the book. ! The research that forms the foundation of this book could not have been accomplished without the outstanding interlibrary loan systems made available through the good resources of the Tampa Bay Library Consortium and the Alachua County Library district.

!

!

First Edition – January 18, 2021

!

Second Edition – June 29, 2021

!

Third Edition – September 15, 2021

!

Fourth Edition – November 13, 2021

!

Fifth Edition – April 25, 2022

Robert P. Massé

iv

4! Derivatives and Integrals of Laplace Transforms

Contents 1!

Definition of the Laplace Transform

!

1.1!

Mathematical Transforms

!

1.2!

Fourier Transform

!

1.3!

Laplace Transform

!

1.4!

Existence Requirements

!

1.5!

Laplace Transforms of Simple Functions

2!

Properties of the Laplace Transform

!

2.1!

Linearity Property

!

2.2!

s-Shifting Property

!

2.3!

Scaling Property

3! Laplace Transforms of Derivatives and Integrals !

3.1!

Laplace Transforms of Derivatives of Continuous

!

!

Functions

!

3.2!

Laplace Transforms of Derivatives of Piecewise

!

!

Continuous Functions

!

3.3!

Laplace Transforms of Integrals

!

4.1!

Derivatives of Laplace Transforms

!

4.2!

Integrals of Laplace Transforms

!

4.3!

Initial and Final Values of Laplace Transforms

5! Laplace Transforms of Discontinuous and Periodic ! Functions !

5.1!

Laplace Transforms of Unit Step Functions

!

5.2!

t-Shifting Property

!

5.3!

Laplace Transforms of Rectangular Pulses

!

5.4!

Laplace Transforms of Unit Impulse Functions

!

5.5!

Laplace Transforms of Periodic Functions

6!

Inverse Laplace Transforms

!

6.1!

Integral Transform Pairs

!

6.2!

The Inverse Laplace Transform

!

6.3!

Inverse Laplace Transforms from a Transform Table

7! Convolution !

7.1!

The Convolution Integral

!

7.2!

Laplace Transform of the Convolution Integral

!

7.3!

Inverse Laplace Transform of the Convolution

!

!

Integral v

8!

Laplace Transforms as Series Expansions

!

11.3! Important Theorems for Determining Residues

!

8.1!

Power Series Representation of Laplace Transforms

!

11.4! Complex Series

!

8.2!

Inverse of a Laplace Transform Series

!

11.5! Power Series

!

11.6! Taylor Series

9!

Solution of Linear Ordinary Differential Equations

!

11.7! Laurent Series

!

having Constant Coefficients

!

11.8! Singularities

!

9.1!

Classification of Differential Equations

!

11.9! Residues

!

9.2!

Solutions of Differential Equations

!

11.10!Residue Integration

!

9.3!

Solving Linear ODEs using Laplace Transforms

!

9.4!

Solving Linear ODEs having Discontinuous Forcing

!

!

Functions

!

9.5!

Solving Simultaneous ODEs using Laplace

!

!

Transforms

10! Solution of Partial Differential Equations !

10.1! Partial Differential Equations

!

10.2! Solving PDEs using Laplace Transforms

!

10.3! Obtaining Laplace Transforms using Partial

!

!

Differentiation

12! Inverse Laplace Transforms from Residue Integration !

12.1! Inverse Laplace Transform

!

12.2! Inverse Laplace Transform of a Unit Step Function

!

12.3! Inverse Laplace Transform of a Unit Impulse

!

!

!

12.4! Inverse Laplace Transform of a Function of

!

!

!

12.5! Residue Integration of the Inverse Laplace Transform

!

12.6! Pole Position Determines System Stability

Function Exponential Order

13! Solution of Linear Ordinary Differential Equations ! having Variable Coefficients

11! Complex Variables

!

13.1! Differential Equations of Second Order

!

11.1! Contours in the Complex Plane

!

13.2! Differential Equations of Higher Order

!

11.2! Derivatives of Complex Functions vi

14! Evaluating Integral Equations with the Laplace !

Transform

!

14.1! Integral Equations

!

14.2! Laplace Transforms of Integral Equations

Appendix A!

The Greek Alphabet

Appendix B!

Integration by Parts

Appendix C!

Partial Fractions

!

C.1! General Procedure

!

C.2! Calculating Partial Fractions

!

C.3! Heaviside Cover-up Method

!

C.4! Heaviside Expansion Formula

!

C.5! Common Partial Fractions

Appendix D!

Summary of Propositions

Appendix E!

Laplace Transform Tables

Appendix F!

Properties of Laplace Transforms

Appendix G!

Power Series

Appendix H!

Special Functions

Appendix I!

Laplace Transforms of Derivatives

References vii

Chapter 1 Definition of the Laplace Transform

T

L

f (t ) e { f (t )} = Tlim →∞ ∫ 0−

−st

dt

8

!

In this chapter we will define the Laplace transform and

transformed function (or image function) is designated as

discuss requirements for its existence. We will then present the

output of the system.

Laplace transforms of some simple functions. We will begin by

!

reviewing the general nature of mathematical transforms.

integral transforms can be written as:

Mathematical transformations of a function f (t) that are

F (s) =



b

f ( t ) K ( s, t ) dt !

1.1! MATHEMATICAL TRANSFORMS

!

!

where the function F ( s ) is the transform of the function f (t) ,

In mathematics a transformation is a mapping of one set

of numbers into another set of numbers. This mapping is performed by some procedure that is invertible, but which may

a

(1.1-1)

and where K ( s, t ) is called the kernel for the transformation. The original function f (t) is a function of a real variable t and

not be injective (one-to-one). The mapping operator is known

exists in t-space (space containing the t dimension), and the

as a transform.

transformed function F ( s ) is a function of a complex variable s

!

If a set of numbers can be represented by some function,

then a mathematical transform that operates on this function to

and exists in s-space. The kernel K ( s, t ) is a function of both t

and s and so functions in both t-space and s-space. Integral

produce another function is a particular type of mathematical

transforms are defined by their kernels. The integral of an

operator. The transformed function will have an independent

integral transform is evaluated on [ a, b ] where a can be − ∞

variable different from that of the original function. A new

and b can be ∞ .

functional relation is then established with a new independent

!

variable. Therefore the transformed function can be considered

obtain one that is simpler to interpret than is the original

to be in a transform space that is different from the space of the

function. For example, integral transforms can be used to

original function.

convert differential and integral equations into algebraic

!

An operator is known in engineering as a system. The

equations (the algebraic equations are generally easier to solve).

original function is designated as input for the system, and the

Such transformations from calculus to algebra are known as

The purpose of transforming a mathematical function is to

operational calculus. Once a solution in transform space is 9

obtained, the transformation is inverted to bring the solution

transform is K ( ω, t ) = e− i ω t . The Fourier transform therefore

back to the original space (see Figure 1.1-1).

represents a function f (t) as a linear combination of sinusoids in the form of complex exponential functions e− i ω t . This makes the Fourier transform very useful for analyzing the frequency content of a function of time f (t) . ! !

From Euler’s formula we have:

e−i ω t = cos ( ω t ) − i sin ( ω t ) !

(1.2-2)

where e− i ω t is a bounded periodic function since e− i ω t = 1 . The existence of the Fourier transform integral depends, therefore, on the function f (t) . If f (t) diverges as t → ∞ , the Fourier transform of f (t) will not exist, and equation (1.2-1) has no meaning. Many functions do not have a Fourier transform. Figure 1.1-1! Integral transform solution of a problem.

1.3! LAPLACE TRANSFORM

1.2! FOURIER TRANSFORM !

!

Probably the most familiar integral transform is the

Fourier transform, which is defined as: !

F (ω ) =





−∞

f ( t ) e−i ω t dt !

It may be possible to keep f (t) from diverging as t → ∞ by

multiplying it by a convergence factor in the form of an exponential e− σ t , where σ is chosen to be some positive real-

(1.2-1)

where ω is a real-valued parameter, and where the real-valued variable t often represents time. The Fourier transform F ( ω ) of

f (t) is a complex-valued function. The kernel for the Fourier

valued parameter that is large enough to ensure convergence of the integral: !





−∞

f ( t ) e− σ t dt !

(1.3-1)

This in turn will ensure the existence of the integral: 10

!

F ( σ, ω ) =





( f (t ) e ) e −σt

−i ω t

−∞

dt !

!

where F ( σ, ω ) is a function of both σ and ω , and where f (t) is assumed to be integrable in any finite interval 0 ≤ t ≤ T . We see and that f ( t ) e

−σt

−σt

,

is an attenuated function. Equation (1.3-2)

can be written: ! !

F ( σ, ω ) =





−∞

!

F (s) =

(1.3-4)



−∞

f ( t ) e− s t dt !

transform

is

f (t) , a function of a real variable t , into F ( s ) , a function of a complex variable s . The function f (t) is generally real in practice, but can be complex. Since we choose σ > 0 , the convergence factor e− σ t can

that we have the one-sided or unilateral Laplace transform:

F (s) =





0−

f ( t ) e− s t dt !

(1.3-6)

which is referred to as simply the Laplace transform. The integral in this equation is known as the Laplace transform (1.3-5)

integral. We see that modification of the Fourier transform to be applicable to many more functions leads to the development

which is an integral transform known as the bilateral Laplace transform. The complex variable s can be considered a generalized frequency containing an attenuation term σ . !

Laplace

the Fourier transform. The bilateral Laplace transform changes

!

where σ ∈! and ω ∈! . Equation (1.3-3) becomes: ∞

bilateral

the Laplace transform is usually restricted to values of t ≥ 0 so

We will now let σ and ω be the rectangular components

s = σ + iω !

the

cause the Laplace integral to diverge if t is negative. Therefore

(1.3-3)

of a complex variable s so that: !

of

only along the imaginary axis in the complex plane as i ω is for

!

f ( t ) e− ( σ+i ω ) t dt !

kernel

K ( s, t ) = e− s t , where s is complex, and is not constrained to be

(1.3-2)

that F ( σ, ω ) is the Fourier integral of the function f ( t ) e

The

the bilateral Laplace transform of a function f (t) is the same as the Fourier transform of the attenuated function f ( t ) e

the subject of this book. !

Equation (1.3-5) is identical to equation (1.3-2). Therefore −σt

of the Laplace transform. The unilateral Laplace transform is All nonnegative real numbers on the t-axis upon which

the Laplace transform acts form the domain of definition of the Laplace transform. Points on the t-axis that are mapped by the

. 11

Laplace transform onto the complex s-plane form the range of

time it becomes of interest. This beginning is generally not in

the Laplace transform.

the distant past (at t = − ∞ ). We can arbitrarily take t = 0 to

Example 1.3-1

correspond to the beginning time. In any case, the Laplace

Show that the Laplace transform F ( s ) is a complex function.

transform does not process f (t) for negative values of t . The

Solution:

⎡⎣ 0 − , ∞ , where we are using the interval notation:

Laplace transform operates on functions only on the interval

)

We have by definition: !

F (s) =





0−

f (t ) e

−st

dt =

Finite and half-open interval!





0−

f ( t ) e− ( σ+i ω ) t dt

!

e

included within the interval of integration of the Laplace transform. This becomes an important consideration when any

we can write: !

F (s) =



0



f (t ) e

−σ t

( cos (ω t ) − i sin (ω t )) dt

F (s) =





0

If no discontinuity of f (t) exists at t = 0 , then t = 0 − , t = 0 , and



f (t ) e

−σ t

cos ( ω t ) dt − i





0



integral, where t = 0 + means that t = 0 is approached from the

f ( t ) e− σ t sin ( ω t ) dt

and so F ( s ) is clearly a complex function. !

discontinuity of f (t) occurs at t = 0 (see Lundberg et al., 2007).

t = 0 + are all equivalent as a limit of the Laplace transform

or !

The lower limit of the Laplace transform integral is written

left (from the negative side). The point t = 0 is then entirely

= cos ( ω t ) − i sin ( ω t )



⇒ ! a ≤ t < b ! (1.3-7)

as t = 0 − , where t = 0 − means that t = 0 is approached from the

Using Euler’s formula: −i ω t

!

[ a, b ) !

Because physical processes are generally causal, the

restriction to t ≥ 0 in the Laplace transform is not severe. Any physical function f (t) is expected to have a beginning at which

right (from the positive side). !

The function f (t) serves as the input to the Laplace

transform. The output from the Laplace transform is designated

F ( s ) , and is referred to as the Laplace transform F ( s ) of f (t) . The Laplace transform operator is denoted by L . The Laplace transform of f (t) is then given by: 12

! !

L



{ f (t )} = ∫ f (t ) e− s t dt = F ( s ) ! 0−

(1.3-8)

processes can be represented by functions that have a Laplace transform.

The historical origin of the Laplace transform is generally

!

For the Laplace integral to converge to a limit, certain

considered to be some of the early work of Euler beginning in

continuity and bounding requirements must apply. The limit in

1737. This was followed by Laplace’s studies of probability

equation (1.4-1) will generally exist only for certain values of

theory (1812), and Bateman’s study of radioactivity (1910). The

Re s .

Laplace transform techniques are closely related to operational calculus methods developed by Oliver Heaviside (1892, 1894, 1912) to analyse transient disturbances in electrical networks. Laplace transform theory and techniques were further extended in studies by Lerch (1903), Bromwich (1916), and Doetsch (1937). A history of the Laplace transform is given by Lützen (1979) and Deakin (1981, 1982, 1992, 2018).

Example 1.4-1 Determine the Laplace transform for f (t) = 0 . Solution: We have: !

L {0} = lim

T →∞

T



0−

( 0 ) e− s t dt = 0

1.4! EXISTENCE REQUIREMENTS !

The Laplace transform F ( s ) of a function f (t) is an

improper integral that must be evaluated in terms of a limit: ! L



{ f (t )} = ∫ f (t ) e 0−

−st

dt = lim

T →∞



T

0−

f ( t ) e− s t dt = F ( s ) !

(1.4-1)

For the Laplace transform of a function f (t) to exist, the integral in equation (1.4-1) must converge to a limit F ( s ) as

Example 1.4-2 Determine the Laplace transform for f (t) = 1 . Solution: We have: !

T → ∞ . If no limit exists, the Laplace integral is said to diverge, and f (t) then does not have a Laplace transform. Most physical

L {1} = lim

T →∞



T

0



(1) e

T

−st

⎡ e− s t ⎤ ⎡ e− sT 1 ⎤ dt = lim ⎢ = lim − ⎥ ⎢ ⎥ T →∞ ⎣ −s ⎦ 0 T → ∞ ⎣ −s −s ⎦

or 13

L {1} =

!

1 ! s

Re s > 0

!

Note that F ( s ) = 1 s is holomorphic on the entire s-plane

except s = 0 (see Section 11.2.4), but that F ( s ) only converges

!

( )

f t 0− means lim f ( t ) is approached from the left t → t0

A function f (t) will be continuous at a point t = t 0 if it

satisfies the three conditions given in equation (1.4-2), and if

f (t 0+ ) = f (t 0− ) . If we have f (t 0+ ) ≠ f (t 0− ) then a discontinuity

for Re s > 0 .

exists at t = t 0 , and the function is discontinuous.

1.4.1! !

CONTINUOUS AND DISCONTINUOUS FUNCTIONS

Before considering the continuity requirements for the

existence of a Laplace transform, we will briefly review the definitions of continuous and discontinuous functions. !

A function f (t) is defined as continuous at a point t = t 0 if

the following three conditions are satisfied:

!

!

⎧ ⎪ 1. ⎪⎪ ⎨ 2. ⎪ ⎪ 3. ⎪⎩

f ( t ) exists at t = t 0 lim f (t) exists

t → t0

! !

!

!

!

(1.4-2)

lim f ( t ) → f ( t 0 )

t → t0

A function f (t) can approach a point t = t 0 from either the

right or the left. We will use the following notation to describe these two directions of approach to a point t = t 0 : !

f

( ) means lim f (t ) is approached from the right t 0+

t → t0

Figure 1.4-1! A jump discontinuity in f ( t ) at t = t 0 . !

( )

If both f t 0+

( ) ( )

f t 0+ − f t 0−

( )

and f t 0−

are finite so that the difference

is finite, then t = t 0 is a jump discontinuity. An

example of a jump discontinuity is shown Figure 1.4-1. If a function f (t) has a jump discontinuity at a point t = t 0 , then 14

f (t) is undefined at t = t 0 , and cannot be assumed to be the

( )

( )

( )

( )

average of f t 0+ and f t 0− . If either or both f t 0+ and f t 0−

( ) ( )

are infinite so that the difference f t 0+ − f t 0−

is infinite, then

an infinite discontinuity exists at t = t 0 . !

A function f ( t ) can be continuous at a point when the

point is approached from one direction, but not continuous when the point is approached from the other direction. The function is then considered to be discontinuous at the point. Such a discontinuity is an infinite discontinuity. A function that has an infinite discontinuity when approached from any

Figure 1.4-2! A piecewise continuous function f ( t ) .

direction is known as a pole (see Chapter 11).

1.! Within each open segment f ( t ) will be continuous

1.4.2!

2.! Finite right hand and finite left hand limits of f ( t )

!

CONTINUITY REQUIREMENTS OF THE LAPLACE TRANSFORM For the Laplace transform F ( s ) of a function f ( t )

(see Figure 1.4-2).

to exist,

f ( t ) must be defined and integrable on the interval of

integration. This in turn requires the function f ( t ) to be piecewise continuous on the interval of integration. For a function f ( t ) to be continuous at a point t = t 0 , f ( t ) must satisfy equation (1.4-2).

will exist as f ( t ) approaches the respective endpoints from within the segment.

3.! At any point t 0 that is a juncture between two of the segments, f ( t ) will have a jump discontinuity at

( )

( )

which the limits f t 0− and f t 0+ of f ( t ) at the sides of the discontinuity both exist (see Figure 1.4-1).

For a function to be piecewise continuous on a finite

4.! Any finite interval will have only a finite number of

interval, it must be possible to partition the interval into a finite

jump discontinuities, and so a finite number of

number of segments such that:

segments.

!

15

A function f ( t ) that is piecewise continuous on an

!

interval will be bounded on each segment of the interval. A

)

function f ( t ) will be piecewise continuous on ⎡⎣ 0 , ∞ if f ( t ) is piecewise continuous on ⎡⎣ 0 − , T ⎤⎦ for all T > 0 . −

n−1

L

!

i=0

Determine if the following functions are piecewise

!

⎧ 1 0≤t σ 0 ! (1.4-8)

since e− i ω t = 1 . A function f ( t ) that satisfies equation (1.4-7) is limited to

!

no more than exponential growth. Such a function is said to be of exponential order. If f ( t ) is of exponential order, then the rate at which f ( t ) can increase will be limited to no more than

M eσ 0 t when T0 is chosen to be large enough (see Figure 1.4-3). The concept of exponential order has been developed to meet the Laplace transform existence requirements for a function

f (t ) .

Figure 1.4-3! Examples of functions of exponential order. !

A function f ( t ) that satisfies equation (1.4-7) for all t ≥ T0

will be of exponential order even if f ( t ) > M eσ 0 t for some or all t < T0 . Most functions arising from physical problems can be expected to be of exponential order. All rational algebraic functions are of exponential order. Only functions that increase more rapidly than the exponential factor eσ 0 t when t ≥ T0 are not of exponential order (see Example 1.4-8). Proposition 1.4-1: If f ( t ) is a bounded function such that f ( t ) ≤ M for t ≥ T0 , then f ( t ) is of exponential order. Proof: 17

From the Maclaurin series for eσ 0 t we have:

If σ 0 = 0 equation (1.4-7) becomes:

!

f ( t ) ≤ M e( 0 ) t = M !

!

t ≥ T0 !



(1.4-9)

!

e

σ0 t

and so f ( t ) is bounded by the constant M . We see then that

!

and so:



!

Note that the symbol



∑ n=0

any function f ( t ) that is bounded for t ≥ T0 is of exponential order.

=

σ 0n t n σ 0n t n > n! n!

tn
0 . Solution:

!

We have: !

1.4.4!

CONVERGENCE REQUIREMENTS OF THE LAPLACE TRANSFORM Since the Laplace transform F ( s ) is defined in terms of an

improper integral, it is important to determine the conditions

f (t) = 2 e3t sin ( t ) = 2 e3t sin ( t ) ≤ 2 e3t

Therefore f (t) = 2 e sin ( t ) is of exponential order for t > 0 . 3t

Example 1.4-5 Show that f (t) = t n where n is a positive integer is of exponential order for t ≥ T0 .

under which this integral converges. It is also important to know the region of the complex s -plane for which F ( s ) exists since F ( s ) is a function of s .

Proposition 1.4-2, Absolute Convergence of the Laplace Transform: If f ( t ) is a piecewise continuous function for t ≥ 0 , and if

f ( t ) ≤ M eσ 0 t for t ≥ T0 , then the Laplace transform:

Solution: 18

L

!



{ f (t )} = ∫ f (t ) e− s t dt = F ( s ) !

(1.4-10)

0−



!

T0

0

converges absolutely for Re s > σ 0 where σ 0 ≥ 0 .

!



f (t ) e

−st

dt ≤ K



T0

0



e

−σt

e

−iωt

⎛ 1 e− σ T0 ⎞ dt ≤ K ⎜ − σ ⎟⎠ ⎝σ

!

(1.4-14)

Proof:

Therefore the first integral on the right in equation (1.4-12) is

!

bounded and integrable on ⎡⎣ 0 − , T0 ⎤⎦ . This integral converges absolutely on ⎡⎣ 0 − , T0 ⎤⎦ . This integral can consist of the sum of

We can write: ∞

{ f (t )} = ∫ f (t ) e

L

!

0

!

−st



dt =



T0

0



f (t ) e

−st

dt +





f ( t ) e− s t dt

T0

!

(1.4-11)

! !



f (t ) e

0−

−st

continuous. !

We then have: ∞

integrals on segments in each of which f ( t ) e− s t is bounded and

dt ≤



T0

f (t ) e

0−

−st

dt +





For t ≥ T0 we are given that f ( t ) ≤ M eσ 0 t where M > 0 is

independent of s . Therefore:

f ( t ) e− s t dt ! (1.4-12)

T0

!

lim M e

t→∞

Since f ( t ) is taken to be a piecewise continuous function

− (σ − σ0 ) t

⎧⎪ ∞ if σ < σ 0 ! =⎨ 0 if σ > σ 0 ⎪⎩

(1.4-15)

on ⎡⎣ 0 − , T0 ⎤⎦ , some finite positive number K will exist that bounds the function f ( t ) on ⎡⎣ 0 − , T0 ⎤⎦ so that we have

If σ > σ 0 where Re s = σ we will have:

and we can write:

since e− i ω t = 1 for all ω . Since f ( t ) ≤ M eσ 0 t , we obtain:

f ( t ) ≤ K . Therefore the first integral on the right is integrable,

!



T0

0



f (t ) e

−st

dt ≤



T0

0



f (t ) e

−st

dt ≤ K



T0

0



e− s t dt ! (1.4-13)

Since s = σ + i ω and e− i ω = 1 for all ω , we have:

!

f ( t ) e− s t = f ( t ) e− σ t e− i ω t = f ( t ) e− σ t !

!

f ( t ) e− s t ≤ M e− ( σ − σ 0 ) t !

t ≥ T0 ! (1.4-16)

t ≥ T0 !

(1.4-17)

For the second integral on the right of equation (1.4-12) we then have: 19





!

f (t ) e

−st

dt ≤ M

T0





e− ( σ − σ 0 ) t dt !

(1.4-18)

Proposition 1.4-3, Existence of the Laplace Transform:

T0

If f ( t ) is a piecewise continuous function for t ≥ 0 , and if

or

f ( t ) ≤ M eσ 0 t where Re s > σ 0 with σ 0 ≥ 0 and t ≥ T0 , then the





!

f (t ) e

T0

−st

M dt ≤ e− ( σ − σ 0 ) t σ0 − σ



!

Laplace transform:

(1.4-19)

T0

For Re s > σ 0 :



!



f ( t ) e− s t dt ≤

M e− ( σ − σ 0 ) T0 ! σ − σ0

(1.4-20)

on [T0 , ∞ ) . From Equation (1.4-20) we then have:



f ( t ) e− s t dt ≤

T0

! !

M ! σ − σ0

!

Re s > σ 0 !

(1.4-21)



0



f (t ) e

−st

⎛ 1 e− σ T0 ⎞ M ! dt ≤ K ⎜ − + σ ⎟⎠ σ − σ ⎝σ

Re s > σ 0

0

! !

(1.4-22)

Follows from Proposition 1.4-2 since L

)

{ f (t )}

absolutely on ⎡⎣ 0 − , ∞ where Re s > σ 0 with σ 0 ≥ 0 .

converges



Propositions 1.4-2 and 1.4-3 prove that if a function f ( t )

satisfies the two conditions:

From equations (1.4-14), (1.4-21), and (1.4-12) we have: ∞

(1.4-23)

0−

Proof: !

Therefore if σ > σ 0 this improper integral converges absolutely

!

{ f (t )} = ∫ f (t ) e− s t dt = F ( s ) !

exists since the integral converges.

T0



L

!



1.!

f ( t ) is piecewise continuous.

2.!

f ( t ) is of exponential order.

then these conditions are sufficient to ensure that its Laplace transform exists. These conditions are, however, not necessary for the existence of a Laplace transform.

From the comparison theorem for improper integrals, we can

!

now conclude that the Laplace transform integral converges,

are not of exponential order can still have Laplace transforms.

and converges absolutely for Re s > σ 0 on ⎡⎣ 0 − , ∞ .

For example, f ( t ) = 1

)



Certain functions that are not piecewise continuous or

t

is not piecewise continuous on 20

)

⎡⎣ 0 − , ∞ since f ( t ) → ∞ as t → 0 + , which is a discontinuity that is not finite. Nevertheless, f ( t ) = 1 t does have a Laplace

Proposition 1.4-5, Riemann Lebesque Lemma:

!

{ f (t )} = F ( s ) exists, then f (t ) has

only the one Laplace transform F ( s ) .

Proof: ! !

From equation (1.4-22) we have:





0

f (t ) e



−st

⎛ 1 e− σ T0 ⎞ M ! dt ≤ K ⎜ − + σ ⎟⎠ σ − σ ⎝σ

! !

!

As Re s = σ → ∞ equation (1.4-26) becomes:

equation (1.4-1) we have: !

L



{ f (t )} = ∫ f (t ) e 0−

−st

dt = F ( s ) !

(1.4-24)

This is a one-to-one relation between f ( t ) and F ( s ) and so f ( t ) has the unique Laplace transform F ( s ) . !



The inverse Laplace transform is not necessarily unique,

however, as is discussed in Chapter 6.

!

Re s > σ 0

0

Proof: From the definition of the Laplace transform given in

(1.4-25)

Re s→ ∞

uniformly.

functions, and exponential functions.

If the Laplace transform L

lim F ( s ) → 0 !

!

among these functions are polynomials, sine and cosine

Proposition 1.4-4, Uniqueness of the Laplace Transform:

)

function of exponential order on ⎡⎣ 0 − , ∞ , then:

From Propositions 1.4-1 and 1.4-2 we see that many of the

most common functions do have Laplace transforms. Included

{ f (t )} = F ( s ) where f (t ) is a piecewise continuous

If L

transform.

(1.4-26)

lim

Re s→ ∞









0−

f ( t ) e− s t dt → 0 !

(1.4-27)

f ( t ) e− s t dt → 0 !

(1.4-28)

Therefore: !

lim

Re s→ ∞

0



We then have the Riemann Lebesque Lemma: !

lim F ( s ) → 0 !

Re s→ ∞

(1.4-29) 21

where equation (1.4-29) applies uniformly on Re s > σ 0 (see Proposition 1.4-7).

or

⎡ ⎤ ⎢ M ⎥ lim s F ( s ) ≤ lim ⎢ K 1− e− σ T0 + ! σ0 ⎥ σ→ ∞ σ→ ∞ 1− ⎢ ⎥ σ ⎦ ⎣



(

! !

If equation (1.4-29) does not apply for a Laplace transform

F ( s ) of a function f ( t ) , then f ( t ) cannot be a piecewise continuous function of exponential order. The converse is not

Therefore:

true, however. It is possible for a function G ( s ) → 0 as s → ∞

!

without G ( s ) being the Laplace transform of any function g ( t ) .

)

lim s F ( s ) ≤ K + M !

(1.4-32)

(1.4-33)

σ→ ∞

and so s F ( s ) is bounded as Re s → ∞ .



Proposition 1.4-6: If L

{ f (t )} = F ( s ) where f (t ) is a piecewise continuous

)

function of exponential order on ⎡⎣ 0 − , ∞ , then s F ( s ) is bounded as Re s → ∞ .

Example 1.4-6 Determine the Laplace transform for f (t) = eat where a is a real constant.

Proof:

Solution:

!

The function f (t) = eat is continuous in − ∞ < t < ∞ . We have:

From equation (1.4-22) we have:

! F (s) =





0−

f (t ) e

−st

⎛ 1 e− σ T0 ⎞ M dt ≤ K ⎜ − + ! σ ⎟⎠ σ − σ ⎝σ

Re s > σ 0

0

! !

(1.4-30)

where K > 0 and M > 0 . Since Re s = σ , we can write: ! !

!

⎡ σM ⎤ lim s F ( s ) = lim σ F ( s ) ≤ lim ⎢ K 1− e− σ T0 + σ→ ∞ σ→ ∞ σ→ ∞ σ − σ 0 ⎥⎦ ⎣ ! (1.4-31)

(

)

{ } = lim ∫

L e

T

at

T →∞

0−

e− s t eat dt =

1 ⎡ lim e− ( s−a ) T − 1⎤⎥ ⎢ a − s ⎣ T →∞ ⎦

If Re ( s − a ) > 0 the function f (t) = eat is of exponential order and the limit exists. We then have: !

{ }

L eat =

1 ! s−a

Re s > a

22

We see that the Laplace transform changes a transcendental function eat into a simpler algebraic function 1 ( s − a ) . Note

1.4.5!

that if a = 0 we have: !

{ }

L e

at

!

1 = L {1} = s

REGION OF CONVERGENCE OF THE LAPLACE TRANSFORM

The region of convergence of the Laplace transform is

defined to be the set of complex values of s for which the Laplace transform exists. If f ( t ) is a piecewise continuous

as given in Example 1.4-2.

function of exponential order, the Laplace transform integral will converge absolutely in the region Re s > σ 0 as given in

Example 1.4-7 Determine the Laplace transform for f (t) = e

− at

where a is a

real constant.

Proposition 1.4-2, and so will converge to the right of the vertical line Re s = σ 0 in the s-plane (see Figure 1.4-4). !

The right half-plane Re s > σ 0 is known as the region of

Solution:

absolute convergence, and is a function of σ 0 . The term right

The function f (t) = e− at is continuous in − ∞ < t < ∞ and of

half-plane is used to mean a half-plane that is bounded on the

exponential order. We have: !

{ } = lim ∫

L e

− at

T →∞

Therefore: !

{ }

L e− at =

1 s+a

T

0



e

− s t −at

e

−1 ⎡ − (s + a) T ⎤ dt = lim e − 1 ⎥⎦ s + a ⎢⎣ T → ∞

left side but which extends infinitely on the right side. The constant

Re s = σ 0

is

called

the

abscissa

of

absolute

convergence, and the line Re s = σ 0 is not included within the region of absolute convergence. !

If the region of convergence of the Laplace transform of

f ( t ) is to the right of a vertical line Re s = σc where σc ≤ σ 0 , then σc is called the abscissa of convergence or the abscissa of simple convergence. The region of convergence Re s > σ c can be larger than the region of absolute convergence Re s > σ 0 . Depending upon the function, the abscissa of convergence can 23

be positive, zero, or negative. In some cases the region of

converges uniformly with respect to s in the open half-plane

convergence can be the entire s-plane.

Re s > σ 0 . Proof: ! !

We have from equation (1.4-20):





f ( t ) e− σ t dt ≤

T0

M e− ( σ − σ 0 ) T0 ! σ0 − σ

Re s > σ 0 ! (1.4-35)

for large enough T0 . Let σ ≥ σ1 > σ 0 where σ1 is some given value of σ . We then have: !





f ( t ) e− σ t dt ≤

T0

M M ≤ = MU ! σ − σ 0 σ1 − σ 0

(1.4-36)

where MU > 0 is an upper bound for the expression in equation Figure 1.4-4! Example of a region of convergence and a region of absolute convergence of the Laplace transform for a function of exponential order. Proposition 1.4-7, Laplace Transform Converges Uniformly in its Region of Absolute Convergence:

0−





T0

f (t ) e

− σt

dt ≤





f ( t ) e− σ t dt !

(1.4-37)

T0

we have:

f ( t ) e− σ t dt ≤ MU !

(1.4-38)

where MU is independent of σ . We see that the integral in



{ f (t )} = ∫ f (t ) e− s t dt = F ( s ) !





T0

f ( t ) ≤ M eσ 0 t for t ≥ T0 , then the Laplace transform: L

!

!

If f ( t ) is a piecewise continuous function having

!

(1.4-36) for all Re s ≥ σ1 > σ 0 . Since:

(1.4-34)

equation (1.4-38) converges absolutely. Therefore from the 24

Weierstrass M-test for integrals, we also see that the Laplace

generally do this implicitly for the rest of the book without

transform of f ( t ) converges uniformly with respect to s .

explicitly showing the limiting process.



Proposition 1.4-8:

Example 1.4-8

If f ( t ) is a piecewise continuous function having

Show that the Laplace transform does not converge for the

f ( t ) ≤ M eσ 0 t for t ≥ T0 , then the Laplace transform: L

!

2

continuous function f (t) = e t .



{ f (t )} = ∫ f (t ) e− s t dt = F ( s ) ! 0

(1.4-39)



The function f (t) = e t

converges absolutely and uniformly with respect to s in the

!

Proof:

{ } ∫

L e

t2

=



0

Follows from Propositions 1.4-2 and 1.4-7.

2

is not of exponential order since it

increases more rapidly than eat when t ≥ T0 . We have:

open half-plane Re s > σ 0 .

!

Solution:



t2 −st

e e

dt =





0



et

2

−s t

dt

Adding and subtracting s 2 4 to the exponent, this integral



can be rewritten as:

Proposition 1.4-9: If the Laplace transform of a function f ( t ) with f ( t ) ≤ M e

σ0 t

!

converges absolutely with respect to s for Re s ≥ σ1 > σ 0 , then it

{ }= e

L e

t2

(

− s2 4

)





0−

e

(t−s 2 )2

dt

converges uniformly and absolutely with respect to s for

For any given value of s we then have as t → ∞ :

Re s = σ1 where σ1 ≥ σ 0 .

!

Proof:

{ }→ ∞

L et

2

Therefore the Laplace transform does not converge for the

!

Follows from Proposition 1.4-8.

!

As noted previously, the Laplace transform is an improper



2

continuous function f (t) = e t , and so the Laplace transform 2

does not exist for f (t) = e t .

integral that must be evaluated in terms of a limit. We will 25

!

From Example 1.4-8 we see that continuity of a function is

not sufficient to ensure that the Laplace transform of the function exists.

!

∫ u dv = u v − ∫ v du

we have: ∞

⎡ t e− s t ⎤ 1 L {t } = ⎢ − ⎥ + s ⎦ t = 0− s ⎣

1.5! LAPLACE TRANSFORMS OF SIMPLE FUNCTIONS

!

!

and so:

Using the defintion of the Laplace transform given in

equation (1.3-8), the Laplace transforms of many functions can be easily determined.

!

1 L {t } = − 2 e− s t s

∞ −

=

t=0





0−

e− s t dt

1 ! 2 s

Re s > 0

Example 1.5-1 Determine the Laplace transform for the unit ramp function

Example 1.5-2

f (t) = t .

Determine the Laplace transform for the ramp function defined by

Solution:

!

We have: !

L {t } =





0−

t e− s t dt Solution:

We can integrate by parts (see Appendix B). We let: ! !

u=t!

dv = e− s t dt

du = dt !

e− s t v=− s

Using:

⎧ t 0 ≤ t ≤1 f (t ) = ⎨ t >1 ⎩ 1

The ramp function f (t) is shown in Figure 1.5-1. Its Laplace transform is: !

L



{ f (t )} = ∫ f (t ) e 0−

−st

dt =



1

0−

te

−st

dt +





e− s t dt

1

26

!

e− s e− s t L { f ( t )} = − − 2 s s

1

e− s + s − t=0

and so: !

1− e− s ! L { f ( t )} = s2

Re s > 0

Example 1.5-3 Determine the Laplace transform for f (t) = t 2 . Solution:

Figure 1.5-1! Ramp function for Example 1.5-2.

We have: Integrating by parts, we let: ! !

−st

u=t!

dv = e

du = dt !

e− s t v=− s

!

dt

! or

⎡ t e− s t ⎤ 1 L { f ( t )} = ⎢ − ⎥ + s ⎦ t = 0− s ⎣



0−

t 2 e− s t dt

Integrating by parts, we let: !

we then have: 1

{ }= ∫

L t

2





1

0



e

−st

⎡ e− s t ⎤ dt + ⎢ − ⎥ ⎣ s ⎦ t =1

!

u = t2 !

dv = e− s t dt

du = 2t dt !

e− s t v=− s

We then have: !

{ }

L t

2



2 ⎡ 1 ⎤ = ⎢ − e− s t t 2 ⎥ + ⎣ s ⎦ t = 0− s





0−

t e− s t dt

27

or !

Integrating by parts, we have:

{ }

L t

2



⎡ t 2 e− s t ⎤ 2 = ⎢− + L {t } ⎥ s s ⎣ ⎦ t = 0−

Using L’Hôpital’s rule and Example 1.5-1, we have: !

{ }

L t2 =

2 1 2 ! = s s2 s3

Re s > 0

!





{ }

⎡ t n e− s t ⎤ n = ⎢− + ⎥ s ⎦ t = 0− s ⎣

{ }

⎡ t n e− s t ⎤ n = ⎢− + L t n−1 ⎥ s ⎦ t = 0− s ⎣

L t

n



0−

t n−1 e− s t dt

or !

L tn



{ }

This gives us a recursive relation: Example 1.5-4 Determine the Laplace transform for f (t) = t n .

!

{ }

L tn =

{ }

n L t n−1 s

Solution:

from which we obtain:

First we show that f (t) = t n is of exponential order. We can

!

{ }

L tn =

write: !

at n< lnt

!

!

exponential order. We then have: !

{ }= ∫

L t

Re s > 0

{ }

L tn =

n ( n − 1) ( n − 2 )!( n − k + 1) s

k

{ }

L t n−k !

Re s > 0

or for k = n :

which is true given that t > e . Therefore f (t) = t n is of

n

{ }

Continuing in this way, we get:

lnt n < ln eat

for sufficiently large t since we have: !

n ⎛ n − 1⎞ n−2 ! ⎜⎝ ⎟⎠ L t s s



0



n −st

t e

{ }

L tn =

{ }

n! n! n! 1 0 ! L t = L 1 = { } sn sn sn s

Re s > 0

n! ! n+1 s

Re s > 0

Therefore:

dt

!

{ }

L tn =

28

!

We will now consider the Laplace transform of t α where

α is a real number such that α > −1 . We have:

{ }= ∫

L t

!

α



0−

t α e− s t dt !

(1.5-1)

Letting u = s t so that t = u s and dt = du s , equation (1.5-1) becomes: !

{ }= s

L t

α

1 α+1





0−

u α e−u du !

(1.5-2)

Using the definition of the gamma function: !

Γ ( α + 1) =





0



u α e− u du !

α > −1 !

(1.5-3)

we can write: ! ! !

{ }

L tα =

Γ ( α + 1) s

α+1

!

α > −1 !

(1.5-4)

If α = n , where n is a nonnegative integer, then:

Γ ( n + 1) = n! !

(1.5-5)

and we have: !

{ }

L tn =

n! s

n +1

!

Re s > 0 !

(1.5-6)

as determined in Example 1.5-4. 29

Chapter 2 Properties of the Laplace Transform

L

{ f (t ) e } = F ( s − b ) bt

30

!

Certain properties of the Laplace transform make it

possible to construct the transforms of many different functions using Laplace transforms already obtained for other less complicated functions.

L {c1 f ( t ) + c2 g ( t )} = lim

!

T →∞



T



T

0

⎡⎣ c1 f ( t ) + c2 g ( t ) ⎤⎦ e− s t dt ! (2.1-3) −

or

2.1! LINEARITY PROPERTY !

Since this is an improper integral, we have:

L {c1 f ( t ) + c2 g ( t )} = lim

!

The Laplace transform operator L is a linear operator.

This means that the Laplace transform of any linear

T →∞

0

⎡⎣ c1 f ( t ) e− s t + c2 g ( t ) e− s t ⎤⎦ dt −

! !

(2.1-4)

combination of functions equals the same linear combination of

Since integration is a linear operation, we have:

their Laplace transforms. The linearity of the Laplace transform the Laplace transform is defined by the following proposition.

⎡ ! L {c1 f ( t ) + c2 g ( t )} = lim ⎢ c1 T →∞ ⎣ ! !

Proposition 2.1-1, Linearity Property of Laplace Transforms:

or

is a direct result of the linearity of integration. The linearity of

The Laplace transform is a linear operator. If f ( t ) and g ( t ) are real-valued functions having Laplace transforms that exist, then:

L {c1 f ( t ) + c2 g ( t )} = c1 L

!

{ f (t )} + c2 L {g (t )} !

(2.1-1)

where c1 and c2 are arbitrary constants.

! L {c1 f ( t ) + c2 g ( t )} = c1 lim

T →∞

Therefore:

L {c1 f ( t ) + c2 g ( t )} =



0

⎡⎣ c1 f ( t ) + c2 g ( t ) ⎤⎦ e− s t dt ! −

(2.1-2)

⎤ g ( t ) e− s t dt ⎥ 0− ⎦



(2.1-5)



T

0−

f (t ) e

−st

dt + c2 lim

T →∞



T

0−

g ( t ) e− s t dt

and so:

! !

0−

dt + c2

T

(2.1-6)

Proof:





f (t ) e

−st

! !

! L {c1 f ( t ) + c2 g ( t )} = c1

From the definition of the Laplace transform we can write:

T





0−

f (t ) e

L {c1 f ( t ) + c2 g ( t )} = c1 L

!

−st

dt + c2





0−

g ( t ) e− s t dt ! (2.1-7)

{ f (t )} + c2 L {g (t )} !

(2.1-8)



31

the right converges absolutely for σ > σ f , and the second

Proposition 2.1-2, Convergence of a Sum of Laplace Transforms:

integral on the right converges absolutely for σ > σg . Therefore we

If f ( t ) and g ( t ) are piecewise continuous functions of

{ f (t )} converges for σ > σf and {g (t )} converges for σ > σg , then L {c1 f (t ) + c2 g (t )}

exponential order, and if L

L

(

)

From Proposition 2.1-1 we have:

!

L {c1 f ( t ) + c2 g ( t )} = c1 L

{ f (t )} + c2 L {g (t )} !



L {c1 f ( t ) + c2 g ( t )}

(

)

Solution:

(2.1-9)

We have from Proposition 2.1-1 and Example 1.4-2:

0−

⎡⎣ c1 f ( t ) + c2 g ( t ) ⎤⎦ e− s t dt ≤



!



0

⎡⎣ c1 f ( t ) + c2 g ( t ) ⎤⎦ e− σ t dt −

! !

(2.1-10)

or

!

L {c} =





0−

ce

−st

dt = c





0−

e− s t dt = c L {1} =

c s

From Proposition 2.1-1 and Example 1.5-4 we see that the

Laplace transform of any finite power series of t n will exist. For an infinite series, however, a linear combination of Laplace





0



⎡⎣ c1 f ( t ) + c2 g ( t ) ⎤⎦ e

−st

dt ≤ c1





0

+ c2



f (t ) e





0

!

transform

Determine the Laplace transform for f (t) = c where c is a real

where c1 and c2 are arbitrary constants. We can write:

!

Laplace

constant.

!

!

the

Example 2.1-1

Proof:



that

converges absolutely for σ > σm where σ m = max σ f , σg . ■

converges absolutely for σ > σ m where σ m = max σ f , σg .

!

see



− σt

transformed terms of the series will not generally equal the

dt

g ( t ) e− σ t dt ! (2.1-11)

From Proposition 1.4-2 and the definition of exponential

order given in equation (1.4-7) we know that first integral on

Laplace transform of the series sum (see Chapter 8). Example 2.1-2 Determine the Laplace transform for f (t) =

N

∑a t n

n

.

n=0

Solution: 32

We have from Proposition 2.1-1 and Example 1.5-4:

⎧ N ⎫ a a 2! a 3! a N! a ⎪ n⎪ !L ⎨ an t ⎬ = 0 + 21 + 3 2 + 4 2 +!+ N +1N ! s s s ⎪⎩ n = 0 ⎪⎭ s s



!

or

Re s > 0

!

L {cosh ( at )} =

We also have:

We also see from Proposition 2.1-1 and Examples 1.4-6 and

!

1.4-7 that it is possible to obtain the Laplace transforms for the hyperbolic cosine and sine functions using eat and e− at .

⎧ eat − e−at ⎫ 1 1 at −at L {sinh ( at )} = L ⎨ ⎬= L e − L e 2 2 ⎩ ⎭ 2

{ }

L {sinh ( at )} =

Determine the Laplace transforms for cosh ( at ) and sinh ( at ) where a is a real constant.

eat + e−at ! cosh ( at ) = 2

Solution:

eat − e−at sinh ( at ) = 2

From Example 1.4-6 we have: !

From Examples 1.4-6 and 1.4-7 we have:

{ }

{ }

1 ! s−a

L e−at =

1 ! s+a

Re s > a

⎧e + e L {cosh ( at )} = L ⎨ 2 ⎩ at

−at

{ }

L ei t =

1 s+i s 1 = 2 = 2 +i 2 s − i s +1 s +1 s +1

From Euler’s formula ei t = cos ( t ) + i sin ( t ) and Proposition 2.1-1 we can then write:

With Proposition 2.1-1 we then have: !

1 ⎤ a ⎡ 1 − = ⎢⎣ s − a s + a ⎥⎦ s 2 − a 2

Determine the Laplace transforms for cos ( t ) and sin ( t ) .

We will use:

L eat =

1 2

Example 2.1-4

Solution:

!

{ }

and so: !

Example 2.1-3

!

1⎡ 1 1 ⎤ s + = 2 ⎢⎣ s − a s + a ⎥⎦ s 2 − a 2

⎫ 1 1 at −at ⎬= L e + L e 2 ⎭ 2

{ }

{ }

!

L {cos ( t )} =

s s2 + 1

!

L {sin ( t )} =

1 s2 + 1

!

Re s > 1

33

!

The Laplace transform therefore changes sine and cosine

functions into algebraic functions. Laplace transforms of

! e

− bt

e− ( b − i ) t + e− ( b + i ) t e− ( b − i ) t − e− ( b + i ) t − bt cos ( t ) = ! e sin ( t ) = 2 2i

tangent, cotangent, secant, and cosecant do not exist because From Example 1.4-7 we can write:

these functions are not of exponential order. !

The Laplace transform of the product of two functions

f ( t ) and g ( t ) is not equal to the product of their Laplace transforms L !

L

{ f (t )} and

L { g ( t )} :

L

{ f (t )} L {g (t )}

(2.1-12)

is equal, however, to the

convolution of the two functions (see Chapter 7).

!

sine functions: e− bt cos ( t ) and e− bt sin ( t ) where b > 0 is a real

Solution: From Euler’s formula e = cost + i sint we have: it

Therefore:

}

L e− ( b − i ) t =

− it

!

sin ( t ) =

}

{

}

1 ( s + b) + i

L e− ( b + i ) t =

{

}

⎤ 1⎡ 1 1 s+b + ⎢ ⎥= 2 ⎣ ( s + b ) − i ( s + b ) + i ⎦ ( s + b )2 + 1

{

}

⎤ 1 ⎡ 1 1 1 − ⎢ ⎥= 2i ⎣ ( s + b ) − i ( s + b ) + i ⎦ ( s + b )2 + 1

! L e− bt sin ( t ) =

constant.

e +e 2

{

1 ! ( s + b) − i

{

1 s + (b + i )

L e− ( b + i ) t =

From Proposition 2.1-1 we then have:

Determine the Laplace transforms for the damped cosine and

! cos ( t ) =

}

! L e− bt cos ( t ) =

Example 2.1-5

it

{

1 ! s + (b − i )

L e− ( b − i ) t =

or

{ f (t ) g (t )} ≠ L { f (t )} L {g (t )} !

The product

!

e −e 2i it

− it

Example 2.1-6 Determine the Laplace transform for 1+ 2t − 3sint . Solution: Using Proposition 2.1-1 we have: !

L {1+ 2t − 3sin ( t )} = L {1} + 2 L {t } − 3 L {sint } 34

Proposition 2.2-1, First Shift Theorem (s-Shifting Property):

and so from Examples 1.4-2, 1.5-1, and 2.1-4:

If L

!

1 2 3 L {1+ 2t − 3sin ( t )} = + 2 − 2 s s s +1

Proof:

Determine the Laplace transform for sinh 2 ( t ) .

!

Solution:

!L

2.1-1 we have:

{

}

L sinh ( t )

2.2!

(2.2-1)

From the definition of the Laplace transform we can write:

{ f (t ) e } = ∫ bt

L

!

From Examples 1.4-2, 1.4-6 and 1.4-7, and using Proposition

2

Re ( s − b ) > 0 !

bt



⎡ f ( t ) e ⎤⎦ e − ⎣ bt

− st

dt =





0−

f ( t ) e− ( s− b ) t dt ! (2.2-2)

Let u = s − b :

2

⎡ e t − e− t ⎤ e 2t e − 2t 1 2 sinh ( t ) = ⎢ + − ⎥ = 2 4 4 2 ⎣ ⎦

!

{ f (t ) e } = F ( s − b ) !

0

We have:

!

L

!

Example 2.1-7

!

{ f (t )} = F ( s ) , then:

1⎡ 1 1 2⎤ 2 = ⎢ + − ⎥= 4 ⎣ s − 2 s + 2 s ⎦ s s2 − 4

(

)

S-SHIFTING PROPERTY

{ f (t ) e } = ∫ bt



0−

f ( t ) e− u t dt = F ( u ) !

(2.2-3)

Therefore:

L

!

{ f (t ) e } = F ( s − b ) ! bt

Re ( s − b ) > 0 !

(2.2-4)



Proposition 2.2-2: If L

There is a Laplace transform property known as shifting

L

that translates the Laplace transform by a constant. Shifting the Laplace transform of a function is equivalent to the exponential

Proof:

scaling of the function as we will now show.

!

{ f (t )} = F ( s ) is absolutely convergent for Re s > σ 0 , then

{ f (t ) e } is absolutely convergent for Re s > σ − bt

0

+ Reb .

From the definition of the Laplace transform we can write: 35

L

!

{

f (t ) e

− bt

}

=





0

⎡ f (t ) e − ⎣

− bt

⎤⎦ e

− st

dt =





0−

f ( t ) e− ( s+ b ) t dt

!!

!

L

{(1) e } = s −1 b = L {e } bt

bt

(2.2-5)

where s = σ + i ω . Since e i ω = 1 and e i Im b = 1 we have:

e− ( s+ b ) t = e− ( σ + Re b ) t !

! Since L

then L

(2.2-6)

{ f (t )} is absolutely convergent for σ > σ 0 where: L

!

Example 2.2-2



{ f (t )} = ∫ f (t ) e

−σt

0−

dt !

(2.2-7)

{ f (t ) e } is absolutely convergent for Re s > σ − bt



0 + Reb .

Determine the Laplace transform for t 2e− 5t . Solution: From Example 1.5-3 we have: !

{ }

L t2 =

2 s3

Applying the s-shifting property we then obtain: !

{

}

L t 2e− 5t =

2

( s + 5 )3

Example 2.2-1 Using the s-shifting property determine the Laplace

Example 2.2-3

transform for ebt where b is a real constant.

Determine the Laplace transforms for the damped cosine and

Solution:

sine functions: e−bt cos ( t ) and e−bt sin ( t ) where b > 0 is a real

From Example 1.4-2 we have: !

L {1} =

1 s

Applying the s-shifting property we then obtain:

constant. Solution: From Example 2.1-4 we have: !

L {cos ( t )} =

s s +1 2

!

L {sin ( t )} =

1 2

s +1

!

Re s > 1 36

Applying the s-shifting property we then obtain:

{

}

! L e−bt cos ( t ) =

s+b

( s + b) +1 2

{

}

L e−bt sin ( t ) =

!

Proposition 2.3-1, Scaling Property (s-Scaling):

1

If L

( s + b) + 1 2

Compare this with Example 2.1-5.

{ f (t )} = F ( s ) , then: L

!

{ f (bt )} = b1 F ⎛⎜⎝ bs ⎞⎟⎠ !

(2.3-1)

where b > 0 . Example 2.2-4 Determine the Laplace transform for t n ebt where n is a positive integer.

L tn =

{

n! s

n+1

Re s > 0

!

}

L t n ebt =

n!

( s − b)

n+1

!

Re s > b

2.3! SCALING PROPERTY !

L



{ f (bt )} = ∫ f (bt ) e− s t dt !

(2.3-2)



Changing variables:

Applying the s-shifting property we then obtain: !

From the definition of the Laplace transform we can write:

0

From Example 1.5-4 we have:

{ }

! !

Solution:

!

Proof:

The Laplace transform operation has a property known as

scaling that scales the Laplace transform by a constant.

τ = bt !

!

dt =

dτ ! b

(2.3-3)

we have:

1 L { f ( bt )} = b

!





0−

f ( τ ) e− ( s b ) τ dτ !

(2.3-4)

and so:

L

!

{ f (bt )} = b1 F ⎛⎜⎝ bs ⎞⎟⎠ !

b > 0!

(2.3-5)



37

Example 2.3-1

Proposition 2.3-2, Scaling Property (t-Scaling): If L

where a is a real constant.

1 ⎧ ⎛ t ⎞⎫ F (b s ) = L ⎨ f ⎜ ⎟ ⎬ ! b ⎩ ⎝ b⎠ ⎭

!

Determine the Laplace transform for cos ( at ) and sin ( at )

{ f (t )} = F ( s ) , then: (2.3-6)

where b > 0 .

From Example 2.1-4 we have: !

Proof: !

Solution:

L {cos ( t )} =

From the definition of the Laplace transform we can write:

F (b s ) =

!





0−

f ( τ ) e− b s τ dτ !

σ Re s > ! b

t = bτ !

dτ =

dt ! b

(2.3-7)

!

L {cos ( at )} =

1 s = a ⎛ s⎞2 s2 + a2 ⎜⎝ a ⎟⎠ + 1

!

L {sin ( at )} =

1 1 a = a ⎛ s⎞2 s2 + a2 ⎜⎝ a ⎟⎠ + 1





⎛t⎞ f ⎜ ⎟ e− s t dt ! 0− ⎝ b ⎠

!

Re s > 1

Example 2.3-2

1 ⎧ ⎛ t ⎞⎫ F (b s ) = L ⎨ f ⎜ ⎟ ⎬ ! b ⎩ ⎝ b⎠ ⎭ ■

s2 + 1

(2.3-9)

and so: !

1

s a

!

(2.3-8)

we have:

1 F (b s ) = b

s2 + 1

L {sin ( t )} =

!

Applying Proposition 2.3-1 we then obtain:

Changing variables: !

s

b > 0!

(2.3-10)

Determine the Laplace transform for ebt cos ( at ) and

ebt sin ( at ) where a and b are real constants.

Solution: 38

From Example 2.3-1 and Proposition 2.2-1: !

{

}

L ebt cos ( at ) =

!

s−b

1

1



( s − 3)2 + 4 ( s + 3)2 + 4

( s − b )2 + a 2

and !

L {sinh ( 3t ) sin ( 2t )} =

Example 2.3-4

{

}

L ebt sin ( at ) =

a

Determine the Laplace transform for cos 2 t .

( s − b )2 + a 2

Solution: Using the trigonometric formula:

Example 2.3-3

cos 2 t =

1 ⎡1+ cos ( 2t ) ⎤⎦ 2⎣

Determine the Laplace transform for sinh ( 3t ) sin ( 2t ) .

!

Solution:

and Proposition 2.1-1 we have:

We have:

!

!

⎧ e3t − e− 3t ⎫ L {sinh ( 3t ) sin ( 2t )} = L ⎨ sin ( 2t ) ⎬ 2 ⎩ ⎭

!

!

{e

3t

sin ( 2t ) − e

− 3t

}

1 1 L {1} + L {cos ( 2t )} 2 2

Therefore from Examples 1.4-2 and 2.3-1:

or

1 L {sinh ( 3t ) sin ( 2t )} = L 2

{

L cos 2 t =

sin ( 2t )

}

11 1 s s2 + 2 L cos t = + = 2 2 s 2 s + 4 s s2 + 4

{

2

}

(

)

From Example 2.3-2 and Proposition 2.2-1: !

L {sinh ( 3t ) sin ( 2t )} =

1 2

⎡ ⎤ 2 2 − ⎢ ⎥ 2 2 ⎢⎣ ( s − 3) + 4 ( s + 3) + 4 ⎥⎦

and so: 39

Chapter 3 Laplace Transforms of Derivatives and Integrals

L

{ f ′ (t )} = s F ( s ) − f ( 0 ) −

40

!

The primary application of the Laplace transform is in the

Since f ( t ) is a continuous function of exponential order on

solution of differential equations. The use of Laplace transforms

[ 0, ∞ )

and f ′ ( t ) is a piecewise-smooth function on [ 0, ∞ ) , we

in solving ordinary differential equations is the subject of

can integrate by parts. Letting:

Chapter 9, and their use in solving partial differential equations

!

u = e− s t !

dv = f ′ ( t ) dt !

(3.1-3)

we will now determine the Laplace transforms of derivatives

!

du = − s e− s t dt !

v = f (t ) !

(3.1-4)

and of integrals of functions.

we have:

is the subject of Chapter 10. In preparation for these chapters

3.1! LAPLACE TRANSFORMS OF DERIVATIVES OF CONTINUOUS FUNCTIONS

and f ′ ( t ) is a piecewise-smooth function on [ 0, ∞ ) , then

{ f ′ (t )} exists for all Re s > σ 0 , and we have: L

{ f ′ (t )} = s F ( s ) − f ( 0− ) !

where L

!

t=0



+s



0



f ( t ) e− s t dt !

(3.1-5)

(3.1-1)

{ f (t )} = F ( s ) .

lim f ( t ) e− s t − f ( 0 − ) + s F ( s ) ! { f ′ (t )} = t→ ∞

(3.1-6)

Since f ( t ) is of exponential order, we have from equation (1.4-8): !

f ( t ) e− s t = f ( t ) e− σ t e− i ω t ≤ M e− ( σ − σ 0 ) t !

σ > σ 0 ! (3.1-7)

and so: !

Proof: !

L

!

If f ( t ) is a continuous function of exponential order on [ 0, ∞ )

!

{ f ′ (t )} = f (t ) e



and so:

Proposition 3.1-1, Laplace Transforms of Derivatives:

L

L

!

−st ∞

lim f ( t ) e− s t → 0 !

(3.1-8)

t→∞

and we have:

From the definition of the Laplace transform we can write:

L

{ f ′ (t )} = ∫



0



f ′ ( t ) e− s t dt !

(3.1-2)

!

L

Therefore L

{ f ′ (t )} = s F ( s ) − f ( 0− ) ! { f ′ (t )} exists if Re s > σ 0 .

(3.1-9) ■

41

Note that f ′ ( t ) need not be of exponential order for this

!

Example 3.1-2

proposition. A function f ( t ) can be of exponential order and

Determine the Laplace transform for f (t) = sin 2 t .

yet its derivative f ′ ( t ) not be of exponential order.

Differentiation on the t -axis is seen to be equivalent to

Solution:

multiplication in the s -plane. Differential operations become

We have:

algebraic operations.

!

!

f ′ ( t ) = 2 sint cost = sin 2t

Example 3.1-1

and so from equation (3.1-1):

Determine the Laplace transform for f ′(t) where f (t) = sin ( t ) .

!

Solution:

we can write:

We have from Example 2.1-4:

!

!

L

{ f (t )} = L {sin (t )} = s 21+ 1 = F ( s )

L

!

L

!

( )

L

{

}

L sin 2 t =

L {sin 2t } s

=

1 2 s s2 + 4

If f ( t ) and f ′ ( t ) are both continuous functions of exponential

{ f ′ (t )} = s s 21+ 1 − 0 = s 2 s+ 1

)

order on ⎡⎣ 0 − , ∞ , and if f ′′ ( t ) is a piecewise-smooth function on ⎡⎣ 0 − , ∞ , then L { f ′′ ( t )} exists for all Re s > σ 0 , and we

)

{ f ′ (t )} = L {cos (t )}

as expected.

} ( )

Proposition 3.1-2, Laplace Transforms of Second Derivatives:

From Example 2.1-4 we see: !

{

L {sin 2t } = s L sin 2 t − f 0 −

( )

{ f ′ (t )} = s F ( s ) − f 0−

where f (0 − ) = 0 . We then have:

{ f ′ (t )} = s F ( s ) − f ( 0− )

Since f 0 − = 0 , we have from Example 2.3-1:

From Proposition 3.1-1 we can write: !

L

have: !

L

{ f ′′ (t )} = s 2 F ( s ) − s f ( 0− ) − f ′ ( 0− ) !

(3.1-10) 42

where L

{ f (t )} = F ( s ) .

We can use Proposition 3.1-1 to write: !

Proof: !

From the definition of the Laplace transform we can write:

L

!

{ f ′′ (t )} = ∫



0



f ′′ ( t ) e− s t dt !

(3.1-11)

L

{ f ′′ (t )} = s ⎡⎣ s F ( s ) − f ( 0− )⎤⎦ − f ′ ( 0− ) !

(3.1-17)

L

{ f ′′ (t )} = s 2 F ( s ) − s f ( 0− ) − f ′ ( 0− ) !

(3.1-18)

or !

Since f ( t ) and f ′ ( t ) are continuous functions of exponential

Therefore L

[ 0, ∞ ) , we can integrate by parts. Letting:

!

order on [ 0, ∞ ) , and f ′′ ( t ) is a piecewise-smooth function on

{ f ′′ (t )} exists if Re s > σ 0 .



Note that f ′′ ( t ) need not be of exponential order.

!

u = e− s t !

dv = f ′′ ( t ) dt !

(3.1-12)

Example 3.1-3

!

du = − s e− s t dt !

v = f ′ (t ) !

(3.1-13)

Determine the Laplace transform for f ′′(t) where

f (t) = sin ( t ) .

we have:

L

!

{ f ′′ (t )} = f ′ (t ) e

−st ∞ t = 0−

+s





0



f ′ (t ) e

−st

Solution:

dt ! (3.1-14)

and so: ! L

!

lim f ′ ( t ) e { f ′′ (t )} = t→ ∞

−st

( ) + s∫

− f′ 0





0−

f ′ ( t ) e− s t dt ! (3.1-15)

Since f ′ ( t ) is of exponential order, and using equation (3.1-2)

L

L

{ f (t )} = L {sin (t )} = s 21+ 1 = F ( s )

We also have: !

f ′ ( t ) = cos ( t )

From Proposition 3.1-2 we can write:

we obtain: !

From Example 2.1-4 we have:

{ f ′′ (t )} = − f ′ ( 0 ) + s L { f ′ (t )} ! −

(3.1-16)

!

L

{ f ′′ (t )} = s 2 F ( s ) − s f ( 0− ) − f ′ ( 0− ) 43



n−1



where f (0 ) = 0 and f ′(0 ) = 1 . Therefore:

L

!

{ f ′′ (t )} = s 2

1 1 − 1 = − s2 + 1 s2 + 1

{ f ′′ (t )} = L { − sin (t )}

If f ( t ) , f ′ ( t ) , f ′′ ( t ) ,!, f ( n−1) ( t ) , are continuous functions of

)

exponential order on ⎡⎣ 0 − , ∞ , and if f ( n ) ( t ) is a piecewisesmooth function on ⎡⎣ 0 − , ∞ , then L f ( n ) ( t ) exists for all

L

!

!

{

{ f ( ) (t )} = s F ( s ) − s f ( 0 ) − s n

where L

n

n−1



n−2

}

( )

( )

f ′ 0 − − !− f ( n−1) 0 −

(3.1-19)

( )

( )

( )

(3.1-21)

Follows by continuing the process used in Propositions

{ f ( ) (t )} = s F ( s ) ! n

L

where L

{ f (t )} = F ( s ) .

n

(3.1-22)

3.2! LAPLACE TRANSFORMS OF DERIVATIVES OF PIECEWISE CONTINUOUS FUNCTIONS Proposition 3.2-1: If f ( t ) is a piecewise continuous function of exponential order

{ f (t )} = F ( s ) .

3.1-1 and 3.1-2. We then have:

(3.1-20)

f 0 − = f ′ 0 − = ! = f ( n−1) 0 − ≡ 0 !

!

)

on ⎡⎣ 0 − , ∞ with a single finite discontinuity at t = t 0 , and if f ′ ( t ) is a piecewise-smooth function on ⎡⎣ 0 − , ∞ , then:

Proof: !

( )

f (k ) 0− !

then from Proposition 3.1-3 we will have:

Proposition 3.1-3, Laplace Transforms of Higher Order Derivatives:

!

n−1−k

note that if we have:

as expected.

Re s > σ 0 , and we have:

n

Note that f ( n ) ( t ) need not be of exponential order. Also

!

)

n



!

L

{ f ( ) (t )} = s F ( s ) − ∑ s k=0

or !

L

!

!

L

)

{ f ′ (t )} = s F ( s ) − f ( 0− ) − ⎡⎣ f (t0+ ) − f (t0− )⎤⎦ e− s t

where L

0

! (3.2-1)

{ f (t )} = F ( s ) and Re s > σ 0 . 44

Proof: For a function f ( t ) having a single jump discontinuity at

!

( )

( )

t = t 0 similar to that shown in Figure 1.4-1, f t 0+ and f t 0− are both finite and exist. We can write:

L

!

{ f ′ (t )} = ∫

t 0−

f ′ ( t ) e− s t dt +

0−

f ′ ( t ) e− s t dt !

t 0+

−st

u=e

!

du = − s e− s t dt !

!

L

!

dv = f ′ ( t ) dt !

(3.2-3)

v = f (t ) !

(3.2-4)

{ f ′ (t )} = f (t ) e− s t

t 0− t=0



+s



t 0−

0−

f ( t ) e− s t dt + f ( t ) e +s

{ f ′ (t )} = f (t0− ) e− s t

0

( )

− f 0− + s −f

!





t 0+

f (t ) e

−st

−st ∞

t=

and t =

+s





0−

f ( t ) e− s t dt (3.2-7)

L ■

{ f ′ (t )} = s F ( s ) − f ( 0− ) − ⎡⎣ f (t0+ ) − f (t0− )⎤⎦ e− s t

0

! (3.2-8)

If f ( t ) is a piecewise continuous function of exponential order

)

on ⎡⎣ 0 − , ∞ with finite discontinuities at t = t 0 , t1, t 2 , !, t n , and if f ′ ( t ) is a piecewise-smooth function on ⎡⎣ 0 − , ∞ , then:

dt !

( )e t 0+

− s t0



t 0−

0−

+s

t 0+ ,

!

)

{ f ′ (t )} = s F ( s ) − f ( 0− ) − ∑ ⎡⎣ f (t k+ ) − f (t k− )⎤⎦ e− s t

L

k

! (3.2-9)

k=0

t = t 0+

where L

(3.2-5)

{ f (t )} = F ( s ) and Re s > σ 0 .

Proof: !

f ( t ) e− s t dt





t 0+

f ( t ) e− s t dt ! (3.2-6)

Since the function f ( t ) does not add to the integrals between

t 0−

− s t0

n

!

L

t 0+

Proposition 3.2-2:

or !

( ) − f ( )e

−f 0



or

we have: !

− s t0

!

(3.2-2)

Integrating by parts, and letting: !

L

!

!





{ f ′ (t )} = f ( ) e t 0−

we can combine the integrals:

3.3!

Follows from Proposition 3.2-1.

■.

LAPLACE TRANSFORMS OF INTEGRALS

Proposition 3.3-1: If f ( t ) is a piecewise-smooth function of exponential order, then the integral of f ( t ) is of exponential order.

45

Proof: !

Since f ( t ) is a piecewise-smooth function, it is integrable

)

on ⎡⎣ 0 − , ∞ . We now let: !

g (t ) =



t

f ( τ ) dτ !

(3.3-1)

)

where g ( t ) will then be continuous on ⎡⎣ 0 , ∞ except at points at which f ( t ) has a discontinuity. We also have: −

!



t

f ( τ ) dτ = f ( t ) !

(3.3-2)

! !



f ( τ ) dτ !

(3.3-6)

T0

g (t ) ≤ K

!



T0

dτ + M

0



t

e σ 0 t dτ !

(3.3-7)

T0

or

g ( t ) ≤ K T0 +

!

(

)

M σ 0 t σ 0 T0 ! e −e σ0

(3.3-8)

Since eσ 0 T0 > σ 0 T0 approximating eσ 0 T0 with σ 0 T0 :

Since f ( t ) is piecewise-smooth function of exponential

order, on a finite interval ⎡⎣ 0 − , T0 ⎤⎦ we know that f ( t ) will be bounded by some positive constant K so that: !

f ( τ ) dτ +

0

0

Therefore g′ ( t ) is piecewise continuous. !



t

and so:

0

d g′ ( t ) = dt

g (t ) ≤

!

T0

f (t ) ≤ K !

0 ≤ t ≤ T0 !

(3.3-3)

Since f ( t ) is of exponential order, we also have:

f ( t ) ≤ M eσ 0 t !

t > T0 !

(3.3-4)

g ( t ) ≤ ( K − M ) T0 +

!

M σ0 t e ! σ0

(3.3-9)

Selecting M > K :

g (t ) ≤

!

M σ0 t e ! σ0

(3.3-10)

Therefore g ( t ) is of exponential order for Re s > σ 0 and t > T0 . ■

We can write: !

g (t ) ≤



T0

0

or

f ( τ ) dτ +



t

T0

Proposition 3.3-2:

f ( τ ) dτ !

(3.3-5)

If f ( t ) is a piecewise-smooth function of exponential order, then the Laplace tranform of the integral of f ( t ) exists.

46

( )

where from equation (3.3-12) we see g 0 + = 0 . Therefore we

Proof: !

Follows from Propositions 3.3-1 and 1.4-3.

have:



If f ( t ) is a piecewise-smooth function of exponential order, and

or

if L

!

{ f (t )} = F ( s ) , then: ⎧ L ⎨ ⎩

!



⎫ F (s) f ( τ ) dτ ⎬ = ! s ⎭

t

0−

!

Since f ( t ) is a piecewise-smooth function, it is integrable

!

⎧ L ⎨ ⎩

)



t

0

⎫ F (s) ! f ( τ ) dτ ⎬ = s ⎭

Re s > σ 0 !

(3.3-14)

(3.3-15)



(3.3-11)

Proof:

{ f (t )} = F ( s ) = s G ( s ) = s L { g (t )} !

L

!

Proposition 3.3-3, Laplace Transform of Integrals:

Integration on the t -axis is seen to be equivalent to

division in the s -plane. Integral operations become algebraic operations.

on ⎡⎣ 0 − , ∞ . We now let:

g (t ) =

!



t

f ( τ ) dτ !

(3.3-12)

0

)

where g ( t ) will then be continuous on ⎡⎣ 0 , ∞ except at points at which f ( t ) has a discontinuity. From Propositions 3.3-1 and −

3.3-2 we also know that g ( t ) is of exponential order for

Re s > σ 0 and t > T0 , and that g ( t ) has a Laplace transform L

!

{ g (t )} = G ( s ) .

Since g ( t ) is of exponential order, using Proposition 3.1-1

we can write: !

L

{ f (t )} = L { g′ (t )} = s G ( s ) − g ( 0+ ) !

Example 3.3-1

⎧ Determine L ⎨ ⎩



Solution: We have from Example 2.1-4: !

L {cos ( τ )} = F ( s ) =

s s2 + 1

From Proposition 3.3-3 we obtain: !

(3.3-13)

⎫ cos ( τ ) dτ ⎬ . 0 ⎭ t

⎧ L ⎨ ⎩

⎫ F (s) 1 s 1 cos ( τ ) dτ ⎬ = = = = L {sin ( t )} 2 2 s s s +1 s +1 0 ⎭



t

47

Proposition 3.3-4, Laplace Transform of Double Integrals: If f ( t ) is a piecewise-smooth function of exponential order, and if L

{ f (t )} = F ( s ) , then: ⎧ L ⎨ ⎩

!

t

∫∫ 0−

0−

L

where L

{ g (t )} = G ( s ) and g ( 0− ) = g′ ( 0− ) = 0 . Therefore we

(3.3-20)

have:

⎫ F (s) f ( γ ) dγ dτ ⎬ = 2 ! s ⎭

τ

{ g′′ (t )} = s 2G ( s ) − s g ( 0− ) − g′ ( 0− ) !

!

(3.3-16)

L

!

{ g′′ (t )} = L { f (t )} = s 2G ( s ) !

(3.3-21)

or Proof: !

Since f ( t ) is a piecewise-smooth function, it is integrable

)

on ⎡⎣ 0 − , ∞ . We now let: !

g (t ) =

t

∫∫ 0−

τ

0−

f ( γ ) dγ dτ !

(3.3-17)

)

where g ( t ) will then be continuous on ⎡⎣ 0 − , ∞ except at points at which f ( t ) has a discontinuity. We also have: !

g′ ( t ) =



t

0

f ( γ ) dγ !

(3.3-18)

and !

g′′ ( t ) = f ( t ) !

Therefore g′ ( t ) and g′′ ( t ) are piecewise continuous. !

(3.3-22)

and so:

⎧ L ⎨ ⎩

!

t

τ





∫∫ 0

0

⎫ F (s) f ( γ ) dγ dτ ⎬ = 2 ! s ⎭

(3.3-23)



Example 3.3-2

⎧ Determine L ⎨ ⎩

t

⎫ sin ( 2 γ ) dγ dτ ⎬ . 0 ⎭

∫∫ 0−

τ

Solution: (3.3-19)

Since g ( t ) is of exponential order, using Proposition 3.1-2

we can write:

F ( s ) = s 2G ( s ) !

!

From Example 2.3-1 we have: !

L {sin ( 2 τ )} = F ( s ) =

2 s2 + 4

From Proposition 3.3-4 we obtain: 48

⎧ L ⎨ ⎩

!

t

⎫ F (s) 1 2 2 sin ( 2 γ ) dγ dτ ⎬ = 2 = 2 2 = 4 s s s + 4 s + 4s 2 0 ⎭

∫∫ 0−

τ

Proposition 3.3-5, Laplace Transform of Multiple Integrals: If f ( t ) is a piecewise-smooth function of exponential order, and

{ f (t )} = F ( s ) , then:

if L !

⎧ L ⎨ ⎩

!

!

t

τ1

τ2

0−

0−

∫∫ ∫ ∫ 0−

!

τn

0−

⎫ F (s) f ( γ n ) dγ n dγ n −1 ! dγ 1 dτ ⎬ = n ! s ⎭ (3.3-24)

Proof: !

Follows using the same procedures as in Propositions 3.3-3

and 3.3-4.



49

Chapter 4 Derivatives and Integrals of Laplace Transforms

L {t f ( t )} = − F ′ ( s )

50

!

In this chapter we will determine the derivatives and

integrals of Laplace transforms of functions. We will show that derivatives of all orders of the Laplace transform of a function

!

f ( t ) will exist if f ( t ) is piecewise-smooth and of exponential

or

order with Re s > σ 0 .

!



d F′ ( s) = ds

F′ ( s) =

0−





0

4.1! DERIVATIVES OF LAPLACE TRANSFORMS

!

If f ( t ) is a piecewise-smooth function of exponential order, and

{ f (t )} = F ( s ) , then we have:

L { t f ( t )} = −

!

d L ds

(4.1-1)

!

! !

Proof: !

From the definition of the Laplace transform, we have:

d F′ ( s) = L ds

{ f (t )} = dsd





0−

f ( t ) e− s t dt !

dt =





f (t )

0−

∂ −st e dt ! ∂s

(4.1-3)

( − t f (t )) e− s t dt !

(4.1-4)

(4.1-2)

!

!

(see Proposition 1.4-7). Therefore the order of differentiation and integration in equation (4.1-2) can be interchanged.

(4.1-5)

L { t f ( t )} = − F ′ ( s ) = −

{ f (t )} !

d L ds

(4.1-6)

We can verify this equation using the definition:

d L ds

{ f (t )} = F ′ ( s ) = Δslim →0

F ( s + Δs ) − F ( s ) Δs

!

(4.1-7)

We can now write:

Since f ( t ) is piecewise-smooth function of exponential order, the Laplace transform F ( s ) converges uniformly for Re s > σ 0

F ′ ( s ) = L {− t f ( t )} !

Therefore: !

{ f (t )} = − F ′ ( s ) !



f (t ) e

−st

and so:

Proposition 4.1-1: if L



⎛ e − t ( s +Δs ) − e−s t ⎞ F ′ ( s ) − L {− t f ( t )} = lim ⎜ ⎟ f ( t ) dt − Δs → 0 Δs 0 ⎝ ⎠











0−

( − t f (t )) e− s t dt !

(4.1-8)

or 51

F ′ ( s ) − L {− t f ( t )} =

! !



⎡ ⎛ e− t Δs − 1 ⎞ ⎤ −st + t f t e dt ( ) ⎢ Δslim ⎥ ⎟ →0 ⎜ Δs ⎝ ⎠ ⎦ ⎣



0−

!

(4.1-9)

F ′ ( s ) − L {− t f ( t )} =

!



0−

( − t + t ) f (t ) e− s t dt = 0 !

(4.1-10)

Therefore:

d L ds

!

{ f (t )} = F ′ ( s ) = L { − t f (t )} !

(4.1-11)

and so if f ( t ) is a piecewise-smooth function of exponential

order, then t f ( t ) will also be a piecewise-smooth function of

Re s > σ 0 , and so F ′ ( s ) must exist for all Re s > σ 0 . This means

that F ( s ) is analytic for Re s > σ 0 (see Proposition 12.1-1).



Determine the Laplace transform for t 2 using Proposition 4.1-1. Solution:

Proposition 4.1-2: If f ( t ) is a piecewise-smooth function of exponential order, and

We have from Example 1.5-1:

if L

!

{ f (t )} = F ( s ) , then F ′ ( s ) exists for Re s > σ 0 .

L {t} =

1 = F (s) s2

Let f ( t ) = t . From Proposition 4.1-1 we can write:

Proof: Since f ( t ) is a piecewise-smooth function of exponential

order, we know from Proposition 1.4-3 that L

{ f (t )}

exists for

Re s > σ 0 . Since we also have: !

(4.1-13)

Example 4.1-1



!

t ≥ T0 !

exponential order. Therefore L {t f ( t )} must also exist for

Using L’Hôpital’s rule: ∞

t f ( t ) ≤ Me ( j + σ 0 ) t !

!

t < ejt !

j >0!

!

{ }

L { t f ( t )} = L t 2 = − F ′ ( s ) = −

d 1 2 = ds s 2 s 3

which agrees with the results of Example 1.5-3. (4.1-12) Example 4.1-2

we can write:

Determine the Laplace transform for t e−at . 52

Solution:

Proposition 4.1-3:

Let f ( t ) = e−at . We have from Example 1.4-7: !

{ }

L e−at =

If f ( t ) is a piecewise-smooth function of exponential order, and

1 = F (s) s+a

L

given by:

From Proposition 4.1-1 we can write: !

{

L te

−at

}

d 1 1 = − F′ ( s) = − = ds s + a ( s + a )2

}

n

(4.1-14)

where n is a positive integer. Proof:

Determine the Laplace transform for t sin ( at ) .

!

Solution:

!

We can prove this proposition by mathematical induction.

We will assume this theorem is true for n = m . We then have:

Let f ( t ) = sin ( at ) . We have from Example 2.3-1

{

}

L t m f ( t ) = ( −1) F ( m ) ( s ) ! m

(4.1-15)

or

a L { sin ( at )} = 2 = F (s) s + a2

!





0

From Proposition 4.1-1 we can write: ! L { t f ( t )} = L { t sin ( at )} = − F ′ ( s ) = −

{

L t n f ( t ) = ( −1) F ( n ) ( s ) !

!

Example 4.1-3

!

{ f (t )} = F ( s ) , then F (n) ( s ) exists for Re s > σ 0 , and is

d a 2as = ds s 2 + a 2 s2 + a2

(



t f (t ) e m

−st

dt = ( −1)

m

dm F (s) ! ds m

(4.1-16)

We now can write:

)

2

!

d m+1 m d F s = −1 ( ) ( ) ds m+1 ds





0



t m f ( t ) e− s t dt !

(4.1-17)

Since f ( t ) is piecewise-smooth function of exponential order, the Laplace transform F ( s ) converges uniformly for Re s > σ 0

53

(see Proposition 1.4-7). The order of differentiation and integration in equation (4.1-17) can then be interchanged.

d m+1 m m+1 F ( s ) = ( −1) ds

!





0−

t m f (t )

∂ −st e dt ! ∂s

(4.1-18)

!

!





0



t

m+1

f (t ) e

−st

!

dt !

(4.1-19)

L

{t

m+1

}

f ( t ) = ( −1)

m+1

!

F ( m+1) ( s ) !

(4.1-20)

Since equation (4.1-15) is true for m = 1 (Proposition 4.1-1), then this equation is valid for all positive integers n = m + 1 . !

form t f ( t ) where f ( t ) is a piecewise-smooth function of n

exponential order, then the Laplace transform L nth derivative of L

{ f (t )} multiplied by ( −1)n .

{t

n

}

f ( t ) is the

{

}

L t sin ( at ) 2

2

= F (s)

d2 ⎛ a ⎞ = ( −1) F ′′ ( s ) = 2 ⎜ 2 ⎟ ds ⎝ s + a 2 ⎠ 2

d⎛ a ⎞ 2as = − ⎜⎝ 2 ⎟ ds s + a 2 ⎠ s2 + a2

(

and so: !



From Proposition 4.1-3 we see that if a function has the

s +a 2

We have:

and so: !

a

From Proposition 4.1-3 we can write:

or

d m+1 m+1 m+1 F ( s ) = ( −1) ds

L {sin ( at )} =

(

)

2

)

2 2 d2 ⎛ a ⎞ 2 a 3s − a = = L t 2 sin ( at ) ⎟ 2 ⎜ 2 2 3 ds ⎝ s + a ⎠ s2 + a2

(

)

{

}

Example 4.1-5 Determine the Laplace transform for ( t − 3) e5t . 2

Solution: We are given:

Example 4.1-4 Determine the Laplace transform for t sin ( at ) . 2

Solution: Let f ( t ) = sin ( at ) . We have from Example 2.3-1:

!

(t − 3)2 e5t = t 2 e5t − 6t e5t + 9 e5t

Let f ( t ) = e5t . We have from Example 1.4-6: !

{ }

L e5t =

1 = F (s) s−5 54

Letting g ( t ) = f ′′ ( t ) we can write:

From Proposition 4.1-3 we can write:

L

!

{(t − 3) e } = (−1) F′′ ( s ) − 6 (−1) F′ ( s ) + 9 F ( s ) 2

2

5t

We have:

L

d L {t f ′′ ( t )} = − ds

!

{(t − 3) e } = ( s −25) 2

5t

3



6

( s − 5 )2

+

9

If f ( t ) is a piecewise-smooth function of exponential order, and

{ f (t )} = F ( s ) , then for all Re s > σ 0 : L {t f ′′ ( t )} = −s 2

!

(4.1-24)

0−

( )



{ f ′′ (t )} !

(4.1-25)

d ⎡ 2 − − ⎤ ! s F s − s f 0 − f 0 ′ ( ) ⎦ ds ⎣

( )

(4.1-26)

dF ( s ) − 2 s F ( s ) + f 0− ! ds

(4.1-27)

0



e− s t f ′′ ( t ) dt = −

d L ds

( )

L {t f ′′ ( t )} = −s 2

( )



(4.1-21)

4.2! INTEGRALS OF LAPLACE TRANSFORMS

Proof: !

e− s t g ( t ) dt !

Therefore: !

dF ( s ) − 2 s F ( s ) + f 0− ! ds



L {t f ′′ ( t )} = −

Proposition 4.1-4:

!





Using Proposition 3.1-2 we obtain:

s−5

!

L

(4.1-23)

or

and so: !

d L {t g ( t )} = − ds

!

d2 ⎛ 1 ⎞ 2 F ′′ ( s ) = 2 ⎜ = ⎟ ds ⎝ s − 5 ⎠ ( s − 5 )3

!

0−

e− s t t g ( t ) dt !

From Proposition 4.1-1 we then have:

d⎛ 1 ⎞ −1 F′ ( s) = ⎜ = ⎟ ds ⎝ s − 5 ⎠ ( s − 5 )2

!



L {t g ( t )} =

!



Proposition 4.2-1:

From the definition of the Laplace transform we have:

L {t f ′′ ( t )} =





0−

e

−st

t f ′′ ( t ) dt !

(4.1-22)

If f ( t ) is a piecewise-smooth function of exponential order, and

⎛ f (t ) ⎞ if lim+ ⎜ ⎟ exists, then: t →0 ⎝ t ⎠

55

⎧ f (t ) ⎫ L ⎨ ⎬= ⎩ t ⎭

!

where L





Therefore:

F ( u ) du !

(4.2-1)

s



!

{ f (t )} = F ( s ) .

be a piecewise-smooth function. Since f ( t ) is of exponential order and f ( t ) t ≤ f ( t ) , we see that f ( t ) t will also be of exponential order. Finally, since lim+ ( f ( t ) t ) exists, the Laplace t →0

)

transform of f ( t ) t will also exist on ⎡⎣ 0 , ∞ . ! From the definition of the Laplace transform we can write:



!



∫ ∫

F ( u ) du =

u=s

s



t=0

+

+

f ( t ) e−u t dt du !

(4.2-2)

Since the integrand is uniformly convergent (see Proposition 1.4-7), we can change the order of integration: !





F ( u ) du =

s





t = 0+

f (t )





e−u t du dt !

u=s

(4.2-3)

⎧ f (t ) ⎫ L ⎨ ⎬= ⎩ t ⎭

!

!



s





0+

f (t ) − s t e dt ! t

(4.2-5)



(4.2-4)

F ( u ) du !

(4.2-6)

s

Example 4.2-1 Determine the Laplace transform for

sin ( at ) t

.

Solution:

⎛ f (t ) ⎞ Let f ( t ) = sin ( at ) . First we check to see if lim+ ⎜ ⎟ exists. t →0 ⎝ t ⎠ We have using L’Hôpital’s rule: !

⎛ sin ( at ) ⎞ ⎛ f (t ) ⎞ lim+ ⎜ = lim a cos ( at ) = a ⎟ t → 0+ ⎜ t ⎟ = tlim + t →0 ⎝ t ⎠ → 0 ⎝ ⎠

and so the limit exists. Then from Proposition 4.2-1 we have: !

⎡ 1 ⎤ F ( u ) du = f ( t ) ⎢ − e−u t ⎥ dt ! ⎣ t ⎦ u=s 0+







and so: ∞



and we have:

Since f ( t ) is a piecewise-smooth function, f ( t ) t will also





F ( u ) du =

s

Proof: !



⎧ sin ( at ) ⎫ L ⎨ ⎬= t ⎩ ⎭





s

F ( u ) du =





s

a du u 2 + a2

or

56

!

⎧ sin ( at ) ⎫ L ⎨ ⎬= t ⎩ ⎭





s

1 2

⎛ u⎞ ⎜⎝ ⎟⎠ + 1 a



du ⎡ −1 u ⎤ = tan a ⎢⎣ a ⎥⎦ u = s

and so: !

⎧ sin ( at ) ⎫ π −1 ⎛ s ⎞ −1 ⎛ a ⎞ L ⎨ ⎬ = − tan ⎜⎝ ⎟⎠ = tan ⎜⎝ ⎟⎠ a s ⎩ t ⎭ 2

Solution:

⎛ f (t ) ⎞ Let f ( t ) = eat − ebt . First we check to see if lim+ ⎜ ⎟ exists. t →0 ⎝ t ⎠ We have using L’Hôpital’s rule: !

See also Example 6.2-4.

and so the limit exists. Then from Proposition 4.2-1 we have: !

Example 4.2-2 Determine the Laplace transform for Si ( t ) =

sin ( u ) du . − u 0



t



t

!

0−

⎫ F (s) f ( u ) du ⎬ = s ⎭





F ( u ) du =

s





s

1 ⎤ ⎡ 1 − ⎢⎣ u − a u − b ⎥⎦ du

⎛ a⎞ 1− ⎧e − e ⎫ ⎛ u − a⎞ ⎛ s−a⎞ ⎜ u⎟ L ⎨ = lim ln − ln ⎬ = ln ⎜⎝ ⎟ ⎜⎝ ⎟ t u − b ⎠ s u → ∞ ⎜ 1− b ⎟ s−b⎠ ⎩ ⎭ ⎜⎝ u ⎟⎠ at

From Proposition 3.3-3 we have: !

⎧ eat − ebt ⎫ L ⎨ ⎬= t ⎩ ⎭

and so:

Solution:

⎧ L ⎨ ⎩

⎛ eat − ebt ⎞ ⎛ a eat − b ebt ⎞ ⎛ f (t ) ⎞ lim ⎜ = lim+ ⎜ = a−b ⎟ = tlim ⎟ ⎟ t → 0+ ⎝ t ⎠ → 0+ ⎜ t → 0 t 1 ⎝ ⎠ ⎝ ⎠

bt



or

sin ( u ) and so using Example 4.2-1 with f ( u ) = : u ⎧ t sin ( u ) ⎫ 1 −1 ⎛ 1 ⎞ ! L { Si ( t )} = L ⎨ du ⎬ = tan ⎜ ⎟ − ⎝ s⎠ u 0 ⎩ ⎭ s



!

⎧ eat − ebt ⎫ ⎛ s−b⎞ L ⎨ ⎬ = ln ⎜⎝ s − a ⎟⎠ t ⎩ ⎭

Proposition 4.2-2: If f ( t ) is a piecewise-smooth function of exponential order, and

Example 4.2-3

e −e . t at

Determine the Laplace transform for

bt

⎛ f (t ) ⎞ if lim+ ⎜ ⎟ exists where a is a constant, then: t →0 ⎝ t + a ⎠ 57

⎧ f (t ) ⎫ s a L ⎨ ⎬=e t + a ⎩ ⎭

!





e−u a F ( u ) du !

(4.2-7)

the

Laplace

transform

of





e−u a F ( u ) du !

(4.2-12)

s



From the definition of the Laplace transform we can write:

es a

!





e−u a F ( u ) du = es a

s



∫∫ s



0



e−u a f ( t ) e−u t dt du !

Proposition 4.2-3: If f ( t ) is a piecewise-smooth function of exponential order, and

(4.2-8)

⎛ f (t ) ⎞ if lim+ ⎜ ⎟ exists, then: t →0 ⎝ t ⎠

Since the integrand is uniformly convergent (see Proposition 1.4-7), we can change the order of integration:

es a

!





e−u a F ( u ) du = es a

s





0−

f (t )





e−u (t + a ) du dt !

(4.2-9)

! es a





⎧ f (t ) ⎫ L ⎨ ⎬= t ⎩ ⎭

!

s

where L

and so:

e−u a F ( u ) du = es a

s





⎡ f (t ) ⎢ − ⎣ 0−



1 − u (t + a ) ⎤ e ⎥⎦ dt ! (4.2-10) t+a u=s

Therefore: !

Therefore

⎧ f (t ) ⎫ s a L ⎨ ⎬=e t + a ⎩ ⎭

!

Proof: !

order.

f ( t ) ( t + a ) will exist, and we have:

s

{ f (t )} = F ( s ) .

where L

exponential

e

sa





F ( u ) du !

(4.2-13)

0

{ f (t )} = F ( s ) .

Proof: !

Follows from Proposition 4.2-1 when s → 0 .



Proposition 4.2-4:





s

e

−ua

F ( u ) du =



f (t )

0−

t+a



e

− st

dt !

(4.2-11)

Since f ( t ) is a piecewise-smooth function of exponential order,

If f ( t ) is a piecewise-smooth function of exponential order, and

⎛ f (t ) ⎞ if lim+ ⎜ 2 ⎟ exists, then: t →0 ⎝ t ⎠

f ( t ) ( t + a ) will also be a piecewise-smooth function of 58

⎧ f (t ) ⎫ L ⎨ 2 ⎬= ⎩ t ⎭

!

where L



∫∫ s



u1

F ( u 2 ) du 2 du1 !

!

{ f (t )} = F ( s ) .

Since f ( t ) is a piecewise-smooth function, f ( t ) t

also be a piecewise-smooth function. Since exponential order and f ( t ) t

2

f (t )

2

will

is of

≤ f ( t ) , we see that f ( t ) t will 2

t →0

(

)

g (t ) =

t

g (t ) t

=

f (t ) t2

(4.2-15)

!

(4.2-16)

⎧ f (t ) ⎫ ⎧ g (t ) ⎫ L ⎨ 2 ⎬= L ⎨ ⎬= ⎩ t ⎭ ⎩ t ⎭



where L { g ( t )} = G ( s ) . Therefore:

s

G ( u1 ) du1 !

(4.2-17)

s

F ( u 2 ) du 2 du1 !

u1

(4.2-19)

⎧ f (t ) ⎫ L ⎨ n ⎬= ⎩ t ⎭

∫ ∫ ∫ !∫

where L

{ f (t )} = F ( s ) .



s



u1



u2



un−1

F ( un ) dun dun−1 !du1 !

(4.2-20)

Proof: !



∫∫



⎛ f (t ) ⎞ if lim+ ⎜ n ⎟ exists, then: t →0 ⎝ t ⎠ !

!



If f ( t ) is a piecewise-smooth function of exponential order, and

)

From the Proposition 4.2-1 we can write: !

(4.2-18)

Proposition 4.2-5:

We then have: !

s

⎧ f (t ) ⎫ L ⎨ ⎬ du1 ! t ⎩ ⎭



the Laplace transform of f ( t ) t 2 will also exist on ⎡⎣ 0 + , ∞ . ! Let !

⎧ f (t ) ⎫ L ⎨ 2 ⎬= ⎩ t ⎭

!

also be of exponential order. Finally, since lim+ f ( t ) t 2 exists,

f (t )





Using Proposition 4.2-1 again:

Proof: !

⎧ f (t ) ⎫ ⎧ g (t ) ⎫ L ⎨ 2 ⎬= L ⎨ ⎬= t t ⎩ ⎭ ⎩ ⎭

(4.2-14)

Follows from Propositions 4.2-1 and 4.2-4.



4.3! INITIAL AND FINAL VALUES OF LAPLACE TRANSFORMS !

The following two theorems provide information about a

function f ( t ) from its Laplace transform even when the actual 59

function f ( t ) is undetermined. The initial value theorem shows that the limit of the Laplace transform of a function f ( t )

as s → ∞ corresponds to the limits of f ( t ) as t → 0 + . The Final

L

!

t → ∞. !

The initial value theorem provides information regarding

the short-term performance of f ( t ) , while the final value theorem

provides

information

regarding

the

long-term

performance of f ( t ) . If the function f ( t ) is known, the limiting values of a Laplace transform can provide a check on the correctness of the Laplace transform.

{ f (t )} = F ( s ) where f (t ) is a continuous function of exponential order on [ 0, ∞ ) and f ′ ( t ) is a piecewise-smooth function of exponential order on [ 0, ∞ ) , then: lim ( s F ( s )) = f ( 0 + ) ! (4.3-1) s→∞

!

Proof: !

From the definition of the Laplace transform and from

Proposition 3.1-1 we have:

f ′ ( t ) e− s t dt !

0−

( )=∫

s F (s) − f 0

!



0+

0

(4.3-2)



f ′ (t ) e

−st

dt +





0

+

f ′ ( t ) e− s t dt !

(4.3-3)

Taking the limit as s → ∞ : ! !

(

( ))

lim s F ( s ) − f 0

s→∞



⎡ = lim ⎢ s→∞ ⎢⎣



0+

0−

f ′ (t ) e

−st

dt +





0+

⎤ f ′ ( t ) e− s t dt ⎥ ⎥⎦

!

(4.3-4)

We will integrate the first integral by parts. Letting: !

u = e− s t !

dv = f ′ ( t ) dt !

(4.3-5)

!

du = − s e− s t dt !

v = f (t ) !

(4.3-6)

Proposition 4.3-1, Initial Value Theorem: If L

{ f ′ (t )} = s F ( s ) − f ( 0 ) = ∫

We can write:

value theorem shows that the limit of the Laplace transform of a function f ( t ) as s → 0 corresponds to the limit of f ( t ) as





we have:



!

0+

0



f ′ (t ) e

( )

−st

dt = f ( t ) e

+ −st 0

t=0



+s



0+

0



f ( t ) e− s t dt !

(4.3-7)

( )

Since f 0 − and f 0 + are not functions of s , we have: ! !

lim

s→∞

!



0+

0−

( ) ( )

f ′ ( t ) e− s t dt = f 0 + − f 0 − + lim s s→∞



0+

0−

f ( t ) e− s t dt (4.3-8) 60

The Laplace transform L

{ f ′ (t )}

converges uniformly for

Re s > σ 0 (see Proposition 1.4-7), and so we can interchange the integration and limiting process. !

lim s

s→∞



0

+

0−

f (t ) e

−st



dt =

0+

s→∞

(4.3-9)

0+ s → ∞

( ) ( )

and so:

!

lim

s→∞



0

+

f ′ (t ) e

−st

dt =





lim f ′ ( t ) e

−st

0 s→∞ +

s→∞

dt !

(4.3-10)

integrating. Using L’Hôpital’s rule, we have:



!

lim s f ( t ) e− s t dt = 0 !

0+ s → ∞



!

do not have a unique limit, but oscillate with t .

( ) ( ) ( )

lim ( s F ( s )) − f 0 − = f 0 + − f 0 − !

( ) ≠ f ( 0 ) , and so equation (4.3-13) becomes:

f 0



(

( )) = f ′ ( 0 ) !

lim s 2 F ( s ) − s f 0 −

!

s→∞

(4.3-13)

If f ( t ) is discontinuous at the origin, we will have

+

{ f (t )} = F ( s ) where f (t ) and f ′ (t ) are continuous functions of exponential order on [ 0, ∞ ) , and f ′′ ( t ) is a piecewise-smooth function of exponential order on [ 0, ∞ ) , then:

(4.3-12)

and so equation (4.3-4) becomes using equation (4.3-5):

!

Note also that certain functions such as sin at and cos at

If L

lim f ′ ( t ) e− s t dt = 0 !

s→∞

(4.3-16)



! (4.3-11)

s→∞

Proposition 4.3-2, Initial Value Theorem for Derivative:

0+ s → ∞

!

( ) ( )

lim ( s F ( s )) = f 0 − = f 0 + !

!

We also have: ∞

(4.3-15)

or

Since s is not a function of t , we can let s → ∞ prior to

0+

( )

lim ( s F ( s )) − f 0 − = 0 !

!

and ∞

(4.3-14)

If f ( t ) is continuous at the origin, we have f 0 + = f 0 − ,

!

lim s f ( t ) e− s t dt !

( )

lim ( s F ( s )) = f 0 + !

!

+

(4.3-17)

Proof: !

From the definition of the Laplace transform and

Proposition 3.1-2 we can write: 61

L

! !

{ f ′′ (t )} = s F ( s ) − s f ( 0 ) − f ′ ( 0 ) = ∫ −

2





f ′′ ( t ) e

0−

!

−st

dt

(4.3-18)

( ) − f ′(0 ) = ∫

s F (s) − s f 0 2

!





0+

0−

f ′′ ( t ) e

−st

dt +





0+

f ′′ ( t ) e

! !

or

dt !

(

( )

!



0+

0−

f ′′ ( t ) e− s t dt +



0+

⎤ −st f ′′ ( t ) e dt ⎥ ! (4.3-20) ⎥⎦

Using a process similar to that in Proposition 4.3-1, we have:

(

( ))

( )

( )

( )

! lim s 2 F ( s ) − s f 0 − − f ′ 0 − = f ′ 0 + − f ′ 0 − ! s→∞

!

( ) ≠ f ′ ( 0 ) , and so equation (4.3-21) becomes: lim ( s F ( s ) − s f ( 0 )) = f ′ ( 0 ) !

f′ 0 ! !

(4.3-21)

If f ( t ) is discontinuous at the origin, we will have

+





2

+

s→∞

If

f (t )

is

continuous

( ) = f ′ ( 0 ) , and so:

f′ 0

+



at

the

origin,

s→∞



+

(4.3-24)

{ f (t )} = F ( s ) where f (t ) is a continuous function of exponential order on [ 0, ∞ ) and f ′ ( t ) is a piecewise-smooth function of exponential order on [ 0, ∞ ) , then:

( )) = ∞

( )) = f ′ ( 0 ) = f ′ ( 0 ) !

lim s 2 F ( s ) − s f 0 −

(4.3-23)

If L

lim s 2 F ( s ) − s f 0 − − f ′ 0 − ⎡ lim ⎢ s→∞ ⎢⎣

(

( )

Proposition 4.3-3, Final Value Theorem:

Taking the limit as s → ∞ : s→∞

( ))

■ −st

(4.3-19)

!

s→∞

!

We can write:

(

lim s 2 F ( s ) − s f 0 − − f ′ 0 − = 0 !

!

we

have

s→0

(4.3-25)

t→∞

Proof: !

We have from Proposition 3.1-1:

!

{ f ′ (t )} = s F ( s ) − f ( 0 ) = ∫

L





0



f ′ ( t ) e− s t dt !

(4.3-26)

( ))

(4.3-27)

We can write: !

(4.3-22)

lim ( s F ( s )) = lim f ( t ) !

!

lim

s→0





0



(

f ′ ( t ) e− s t dt = lim s F ( s ) − f 0 − !

The Laplace transform L

s→0

{ f (t )} = F ( s )

converges uniformly

for Re s > σ 0 (see Proposition 1.4-7), and so we interchange the integration and limiting process: 62





!

(

( ))

lim f ′ ( t ) e− s t dt = lim s F ( s ) − f 0 − !

0− s → 0

s→0

(4.3-28)

Example 4.3-1 Determine the initial and final values of the function

Since s is not a function of t , we can let s → 0 prior to

f ( t ) = e− 2t .

integrating. Equation (4.3-28) then becomes:



!



0−

(

Solution:

( ))

f ′ ( t ) dt = lim s F ( s ) − f 0 − ! s→0

(4.3-29)

( ) is not a function of s , we have:

Since f 0



!

!





0−

( )

f ′ ( t ) dt = lim ( s F ( s )) − f 0 − ! s→0

(4.3-30)

or

( )

lim f ( t ) − f 0

!

t→∞



( )

= lim ( s F ( s )) − f 0 s→0



!

lim ( s F ( s )) = lim f ( t ) !

s→0

t→∞

L



− 2t

the initial value when t = 0 :

( )

f 0 + = lim ( s F ( s )) = lim

(4.3-31)

(4.3-32)

{ e } = s +1 2

From Proposition 4.3-1 and using L’Hôpital’s rule, we have

!

and so: !

From Example 1.4-7 the Laplace transform of f ( t ) = e− 2t is:

s→∞

s =1 s→∞ s + 2

From Proposition 4.3-3 we have the final value when t → ∞ : s ! lim f ( t ) = lim ( s F ( s )) = lim =0 t→∞ s→0 s→0 s + 2 These values agree with the function f ( t ) = e− 2t for t = 0 and

t → ∞.

An important assumption for this theorem is that lim f ( t ) t→∞

exists. If this is not the case, use of this theorem can lead to erroneous results.

Example 4.3-2 Determine the Laplace transform for Ci ( t ) = −



t



cosu du . u 63

Solution:

!

s F (s) =



s F (s) =

1 ln s 2 + 1 + C 2

Let !

f (t ) = −





t

or

cosu du u

!

We then have: !

!

t f ′ ( t ) = − cost

!

s L {t f ′ ( t )} = − L {cos ( t )} = − 2 s +1

L {t f ′ ( t )} = −

d L ds

{ f ′ (t )} = − s 2 s+ 1

Using Proposition 3.1-1: !

(

lim ( s F ( s )) = lim f ( t )

s→0

t→∞

As s → 0 we then have:

From Proposition 4.1-1 we have: !

)

given in Proposition 4.3-3:

Therefore: !

(

To evaluate the constant C , we can us the final value theorem

cost f ′ (t ) = − t

or !

s ds s2 + 1

( )) = s s+ 1

d s F ( s ) − f 0− ds

lim ( s F ( s )) = lim

s→0

t→∞





t

cosu du = 0 u

and so the limit lim f ( t ) exists. We also have as s → 0 : t→∞

!

(

)⎤⎥⎦ = 0

⎡1 lim ⎢ ln s 2 + 1 s→0 ⎣ 2

Therefore C = 0 and we obtain: !

F ( s ) = L {Ci ( t )} =

(

)

1 ln s 2 + 1 2s

2

( )

Since the derivative of f 0 − with respect to s is zero, we obtain: 64

Chapter 5 Laplace Transforms of Discontinuous and Periodic Functions

{

}

L U (t − t0 ) =

e

− s t0

s

65

!

In this chapter we will consider the Laplace transforms of

discontinuous and periodic functions. The discontinuous

where t 0 is a constant. The discontinuity or jump from 0 to 1 for

U ( t − t 0 ) occurs at t = t 0 (see Figure 5.1-1).

functions we will consider involve either unit step functions or unit impulse functions. Discontinuous and periodic functions often provide the forcing functions for differential equations.

5.1! LAPLACE TRANSFORMS OF UNIT STEP FUNCTIONS !

The unit step function (also called the Heaviside step

function) is a discontinuous function that has the shape of a stair step having unit value, jumping from 0 to 1 at the origin (see Figure 5.1-1).

5.1.1! DEFINITION OF UNIT STEP FUNCTIONS ! The unit step function U ( t ) is a function of t ,

and is

defined as: ! ! !

⎧ 0 t 0 . Therefore the region of absolute convergence for

filtering action of the unit step function. The filtering action of

the unit step function is the entire right half-plane Re s > 0 .

the unit step function is illustrated in Figure 5.1-2.

Both the unit step function and the shifted unit step

function are clearly of exponential order since we have M = 1 and σ 0 = 0 :

U ( t ) ≤ M eσ 0 t ≤ 1 !

!

(5.1-3)

Multiplication of a unit step function by a constant c yields a step function of amplitude c !

By forming the product of a function f ( t ) with the shifted

unit step function U ( t − t 0 ) , the product will be zero for all

t < t 0 , but will equal the function f ( t ) for all t ≥ t 0 . We then

have: !

⎧⎪ 0 t < t0 ! f (t ) U (t − t0 ) = ⎨ f t t ≥ t ( ) 0 ⎪⎩

We also have:

t0 ≥ 0 !

(5.1-4) Figure 5.1-2! Examples of filtering action of the unit step function on the ramp function f ( t ) = t . 67

Example 5.1-2

Example 5.1-4

Express the function f ( t ) defined by:

Express the f ( t ) shown in Figure 5.1-3 in terms of unit step

!

⎧ t ⎪ f (t ) = ⎨ t 2 ⎪ t3 ⎩

functions:

0≤t 0 !

(6.2-11)

h′ ( t ) = 0 !

for all t > 0 !

(6.2-12)

h (t ) =

t

∫ n (τ) dτ = 0 !

t >0!

(6.2-14)

(6.2-15)

f (t ) ≡ g (t ) !

so that L {h ( t )} = 0 , and we have F ( s ) = G ( s ) when f ( t ) ≠ g ( t ) . This uniqueness property of the Laplace transform was first

We can conclude then that !

!

h′ ( t ) = f ( t ) − g ( t ) = n ( t ) ≠ 0 !

0

and so: !



transform. If f ( t ) = g ( t ) on a set of points except for isolated

h′ ( t ) = f ( t ) − g ( t ) !

!

F ( s ) will have a unique inverse Laplace transform.

for all t > 0 !

demonstrated by Lerch (1903). (6.2-13) 91

The discontinuous unit step function U ( t ) and the number

!

where:

1, for example, both have the same Laplace transform 1 s . A null function cannot be a continuous function. Example 6.2-1

! G ( s ) = lim

ε1→ 0

0−

h (t ) e

−st

dt +



T

ke

−st

dt + lim

ε2 → 0

T





h ( t ) e− s t dt

T + ε2

or

Show that the functions f ( t ) and g ( t ) have the same Laplace !

transform where: !



T − ε1

f (t ) = h (t ) !

t≥0

G (s) =







0−

h ( t ) e− s t dt = F ( s )

Therefore F ( s ) = G ( s ) although f ( t ) ≠ g ( t ) .

and

!

⎧ h (t ) 0− ≤ t < T ⎪⎪ g ( t ) = ⎨ k ≠ h (T ) t =T ⎪ t >T ⎪⎩ h ( t )

where k is a finite constant. We see then that f ( t ) ≠ g ( t ) .





0−

f ( t ) e− s t dt =

G (s) =

L {n ( t )} = N ( s )

Solution:





0−





0



g (t ) e

−st

dt



t

n ( τ ) dτ = 0 !

τ=0

h ( t ) e− s t dt

N (s) =





0−

n ( t ) e− s t dt

Let !

and !

!

!

We have:

F (s) =

Show that if n ( t ) is a null function, then so is:

We have by definition:

Solution:

!

Example 6.2-2

!

dv = n ( τ ) dτ

u = e− s t ! du = − s e

− st

dt !

v=

t

∫ n (τ) dτ 0

92

We then can write:





!

0



n (t ) e

−st

We then have:

⎡ dt = − ⎢ e− s t ⎣



⎤ n ( τ ) dτ ⎥ + τ=0 ⎦ t=0



t





se

t =0

− st



t

n ( τ ) dτ dt

τ=0

and so: !

N (s) =

! or !





0−

n ( t ) e− s t dt = 0

t f ( t ) = ebt − eat

f (t ) =

(

1 bt at e −e t

)

(see Example 4.2-3).

Therefore if n ( t ) is a null function, then so is L {n ( t )} = N ( s ) . Example 6.2-4 Example 6.2-3

Determine the inverse Laplace transform f ( t ) for:

Determine the inverse Laplace transform f ( t ) for:

!

!

⎛ s − a⎞ F ( s ) = ln ⎜ ⎝ s − b ⎟⎠

⎛ 1⎞ F ( s ) = tan −1 ⎜ ⎟ ⎝ s⎠

Solution:

Solution:

From Proposition 4.1-1 we have:

From Proposition 4.1-1 we have:

!

!

L {t f ( t )} = − F ′ ( s )

L {t f ( t )} = − F ′ ( s )

Therefore:

Therefore: !

L {t f ( t )} =

1 1 − s−b s−a

!

and so: !

{

L {t f ( t )} = L ebt − eat

}

L {t f ( t )} = −



1 s2

⎛ 1⎞ 1+ ⎜ ⎟ ⎝ s⎠

2

or 93

L {t f ( t )} =

!

1 s2 + 1

L {t f ( t )} = L {sin ( t )}

We then have: !

transform is by residue integration using the inversion formula given in equation (6.2-5). This method requires

and so: !

2.! The most general method of inverting a Laplace

t f ( t ) = sin ( t )

or

some knowledge of complex variable theory, including knowledge of contour integration. This method is presented in Chapter 12. Complex variable theory is first reviewed in Chapter 11. 3.! A third method of inverting a Laplace transform is to use

f (t ) =

!

sin ( t )

one or more of the properties of Laplace transforms listed

t

in Section 6.2.3 in conjunction with a Laplace transform table.

6.2.2! !

METHODS OF INVERTING A LAPLACE TRANSFORM

There are a number of methods available to invert a

Laplace transform. Some of these are:

4.! A fourth method of inverting a Laplace transform is to use the convolution integral. This method is presented in Chapter 7. 5.! A fifth method of inverting a Laplace transform is to

1.! The simplest method of inverting a Laplace transform

expand the transform into a series, and then invert the

involves using an existing table of Laplace transforms.

series term-by-term. This method is presented in Chapter

Such a table provides quick and easy assess to both the

8.

direct and inverse Laplace transforms, but is limited to only the more common (standard) functions. This method is presented in this chapter.

6.! A sixth method of inverting a Laplace transform is to use numerical integration. This method will not be discussed in this book. 94

6.2.3! !

PROPERTIES OF THE INVERSE LAPLACE TRANSFORM

Inverse Laplace transforms have a linearity property

which follows from the linearity property of Laplace transforms.

L

!

{c1 F ( s ) + c2 G ( s )} = c1 f (t ) + c2 g (t ) !

!

{ f (t )} = F ( s ) and

L { g ( t )} = G ( s ) ,

(6.2-19)

(6.2-20)

{c1 F ( s ) + c2 G ( s )} = c1 L −1{F ( s )} + c2 L −1{G ( s )}

−1

!

(6.2-21)



{c1 F ( s ) + c2 G ( s )} = c1 L −1{F ( s )} + c2 L −1{G ( s )}

−1

! where c1 and c2 are arbitrary constants.

L

! !

(6.2-16)

!

The following are some important properties of inverse

Laplace transforms that can be used to extend the utility of existing transform tables: 1.! From Proposition 6.2-2 we have the linearity property

Proof: We have: L

{ f (t )} = F ( s ) and

L { g ( t )} = G ( s ) where f ( t )

for inverse Laplace transforms:

and g ( t ) are continuous functions. Therefore the inverse

!

Laplace transforms exist (see Proposition 12.4-1):

! !

!

−1

transforms L

c1 F ( s ) + c2 G ( s ) is a linear operator:

!

L {c1 f ( t ) + c2 g ( t )} = c1 F ( s ) + c2 G ( s ) !

!

From equation (6.2-17) we then have:

L

(6.2-18)

where c1 and c2 are arbitrary constants. Therefore:

If f ( t ) and g ( t ) are continuous functions having Laplace respectively, then the inverse Laplace transform of

!

{ f (t )} + c2 L {g (t )} !

and so:

Proposition 6.2-2, Linearity Property of Inverse Laplace Transforms:

!

L {c1 f ( t ) + c2 g ( t )} = c1 L

!

L

{F ( s )} = f (t ) !

−1

From Proposition 2.1-1 we have:

L

−1

{G ( s )} = g (t ) !

(6.2-17)

L

{c1 F ( s ) + c2 G ( s )} = c1 L −1{F ( s )} + c2 L −1{G ( s )}

−1

(6.2-22)

2.! From Proposition 2.2-1 we have the s-shifting property for inverse Laplace transforms: 95

!

L

−1

{F ( s − a )} = eat f (t ) !

Re ( s − a ) > 0 ! (6.2-23)

6.3!

INVERSE LAPLACE TRANSFORMS FROM A TRANSFORM TABLE

3.! From Proposition 2.3-1 we have the scaling property for inverse Laplace transforms: !

1 L b

−1 ⎧

⎛ s⎞⎫ F ⎨ ⎜⎝ ⎟⎠ ⎬ = f ( bt ) ! ⎩ b ⎭

!

b > 0!

(6.2-24)

!

L

(s) ⎫ ⎨ ⎬= s ⎩ ⎭

−1 ⎧ F



0−

−1

L

{F ( τ )} dτ !

!

L

(s) ⎫ ⎨ 2 ⎬= ⎩ s ⎭

−1 ⎧ F

∫∫ 0



τ

0



(6.2-25)

!

L

d ⎫ ⎨ F ( s )⎬ = L ⎩ ds ⎭

common Laplace transforms is given in Appendix E, and a table of Laplace transform properties is given in Appendix F. !

L

−1

{F ( γ )} dγ dτ !

(6.2-26)

includes an impulse function. A rational fraction is defined to be an algebraic fraction in which the numerator is a polynomial

P ( s ) of degree m , and the denominator Q ( s ) is a polynomial of

{F ′ ( s )} = − t f (t ) !

−1

degree n : (6.2-27) !

6.! From Proposition 5.2-1 we have the t-axis shifting !

property for inverse Laplace transforms: !

L

−1

{e

− s t0

}

A Laplace transform F ( s ) of a function f ( t ) generally

takes the form of a rational fraction unless the transform

5.! From Proposition 4.1-1 we have: −1 ⎧

table. Extensive tables of Laplace transforms for the most available in the literature and on the internet. A table of some

4.! From Proposition 3.3-4 we have: t

associated with an inverse Laplace transform using a transform common functions have been tabulated and are readily

4.! From Proposition 3.3-3 we have: t

Many simple proper rational fractions can be directly

F ( s ) = f (t − t0 ) U (t − t0 ) !

(6.2-28)

am s m + am−1 s m−1 +!+ a1 s + a0 ( ) F ( s) = = ! bn s n + bn−1 s n−1 +!+ b1 s + b0 Q ( s) P s

(6.3-1)

The first step in obtaining an inverse Laplace transform

using a transform table is to ensure that the degree of the numerator of the transform is less than the degree of the

96

denominator. This will generally be true since from Proposition

into a product of linear factors ( a s + b ) or quadratic factors of

1.4-5 we know that we must have:

the form a s 2 + b s + c

lim F ( s ) = 0 !

!

(6.3-2)

Re s→ ∞

)

If m ≥ n in equation (6.3-1), we can divide the numerator

P ( s ) by the denominator Q ( s ) to obtain a proper rational fraction having n > m , together with a polynomial in s :

()

m

F s = cm s + cm−1 s

!

m−1

+!+ c1 s + c0 +

( )! Q ( s) P1 s

(6.3-3)

The polynomial in s is the Laplace transform of multiples of the unit impulse function and its derivatives (see Section 5.4.3):

()

! f t = cmδ !

( m)

(

)

n

that are irreducible (have no other real

factors). After factoring the denominator of equation (6.3-1) into first-order and quadratic factors, we obtain:

if f ( t ) is a piecewise continuous function of exponential order on ⎡⎣ 0 − , ∞ . !

n

(t ) + cm−1 δ

( m−1)

(t ) +!+ c1 δ ′ (t ) + c0 δ (t ) () ()

⎧⎪ P1 s ⎫⎪ + L −1 ⎨ ⎬ ! (6.3-4) ⎪⎩ Q s ⎪⎭

!

F (s) =

P (s)

Q(s)

=

P (s)

c ( s − z1 ) ( s − z2 ) ( s − z 3 )!( s − zn )

!

(6.3-5)

where c is a constant multiplier and z n are the n roots of Q ( s ) . The roots can be real, imaginary, or complex, and are called the poles or the F ( s ) singularities (see Chapter 11). An imaginary or complex root only occurs when its complex conjugate is also a root. Along with c , the roots z n completely specify Q ( s ) .

6.3.1! !

PARTIAL FRACTION EXPANSION

When using transform tables to obtain the inverse Laplace

transform, if the transform has more than one pole, the factored Laplace transform is first changed into a sum of simpler fractions known as partial fractions. Each denominator of a partial fraction will then be a single irreducible first-order or

If m ≥ n in equation (6.3-1), we see that the function f ( t ) will

quadratic factor of Q ( s ) .

always include unit impulse functions.

!

!

From the fundamental theorem of algebra we know that

fractions, each of which is also a proper rational fraction. This

any polynomial having only real coefficients can be factored

process is called partial fraction expansion. The number of

Only proper rational fractions can be expanded as partial

97

partial fractions will equal the degree of the denominator polynomial. Many partial fractions are common functions

!

f (t ) = L

⎧ 1 ⎨ 2 ⎪⎩ s s + 4

−1 ⎪

which can be directly associated with an inverse Laplace

(

)

⎫⎪ ⎬ ⎪⎭

transform using a transform table. A review of partial fractions

Solution:

is given in Appendix C.

From the Laplace transform table in Appendix E we have:

6.3.2!

INVERSE LAPLACE TRANSFORMS FOR CONTINUOUS FUNCTIONS USING LAPLACE TRANSFORM TABLES

Determine the inverse Laplace transform:

f (t ) = L

L

(

)

⎫⎪ 1 ⎬ = ⎡⎣1− cos ( 2t ) ⎤⎦ 4 ⎭⎪

Example 6.3-3

Example 6.3-1

!

!

⎧ 1 ⎨ 2 ⎩⎪ s s + 4

−1 ⎪

−1 ⎧

1⎫ ⎨ 5⎬ ⎩s ⎭

Determine the inverse Laplace transform: ⎧ 1 ⎫ ! f ( t ) = L −1 ⎨ ⎬ s s − 5 ( ) ⎩ ⎭

Solution:

Solution:

From the Laplace transform table in Appendix E we have:

Using equation (6.2-25) we have:

4

!

4

t t f (t ) = = 4! 24

!

f (t ) = L

−1 ⎧

1 ⎫ ⎨ ⎬= s s − 5 ( ) ⎩ ⎭



t

0−

L

−1 ⎧

1 ⎫ ⎨ ⎬ dτ τ − 5 ⎩ ⎭

From the Laplace transform table in Appendix E we have: Example 6.3-2 Determine the inverse Laplace transform:

!

f (t ) =



t

0−

e5 τ dτ

Therefore: 98

!

t

e5 τ f (t ) = 5

e5t − 1 = 5 τ =0

Another way to solve this problem is to expand: !

F (s) =

1 s ( s − 5)

!

f (t ) = L

−1⎧

s+9 ⎫ ⎨ ⎬= 3 L s s + 3 ( ) ⎩ ⎭

−1⎧ 1 ⎫

⎨ ⎬− 2 L ⎩s ⎭

−1⎧

1 ⎫ ⎨ ⎬ ⎩s + 3⎭

From the Laplace transform table in Appendix E we have: !

(

)

f ( t ) = 3 (1) − 2 e− 3 t = 3 − 2 e− 3 t

as partial fractions. We then have: !

F (s) =

1 11 1 1 =− + s ( s − 5) 5 s 5 s−5

From the Laplace transform table in Appendix E we have: !

e5t − 1 f (t ) = 5

Determine the inverse Laplace transform:

f (t ) = L

Determine the inverse Laplace transform: !

f (t ) = L

−1⎧

3 ⎫ ⎨ 2 ⎬ ⎩s + 5⎭

Solution: We can write:

Example 6.3-4

!

Example 6.3-5

−1⎧

s+9 ⎫ ⎨ ⎬ ⎩ s ( s + 3) ⎭

Solution:

!

3 3 5 = s2 + 5 5 s2 + 5

From the Laplace transform table in Appendix E we have: !

f (t ) =

3 5

sin

( 5 t)

Expanding as partial fractions we have: !

s+9 3 2 = − s ( s + 3) s s + 3

Therefore using Proposition 6.2-2:

Example 6.3-6 Determine the inverse Laplace transform: 99

!

f (t ) = L

−1⎧

s2 + 9 ⎫ ⎨ ⎬ s s + 1 ( ) ⎩ ⎭

Example 6.3-7 Determine the inverse Laplace transform:

Solution:

f (t ) = L

−1⎧

⎫ 2 ⎨ 2 ⎬ s − 6 s + 13 ⎩ ⎭

The polynomials in the numerator P ( s ) = s 2 + 9 and the

!

Laplace transform is not a proper rational fraction. We must

Solution:

first divide P ( s ) by Q ( s ) :

The denominator is an irreducible quadratic, and so the next

denominator Q ( s ) = s 2 + s have the same degree, and so the

!

step is to complete the square (take half the coefficient of the

s2 + 9 s−9 = 1− s ( s + 1) s ( s + 1)

middle term and square it):

Expanding as partial fractions we have:

!

s +9 1 9 9 = 1− + − s ( s + 1) s +1 s s +1 2

!

L

s2 + 9 ⎫ ⎨ ⎬= L s s + 1 ( ) ⎩ ⎭

{1} −

−1

L

−1⎧

1 ⎫ ⎨ ⎬+ 9 L s + 1 ⎩ ⎭

s 2 − 6 s + 13

=

(s

2 2

)

− 6s + 9 + 4

=

2

( s − 3)2 + 4

From the Laplace transform table in Appendix E and Proposition 2.2-1 we have:

Therefore using Proposition 6.2-2: −1⎧

2

−1⎧ 1 ⎫

⎨ ⎬− 9 L ⎩s ⎭

−1⎧

1 ⎫ ⎨ ⎬ ⎩ s + 1⎭

!

f (t ) = L

⎧ ⎫⎪ 3t 2 ⎨ ⎬ = e sin ( 2t ) 2 ⎪⎩ ( s − 3) + 4 ⎪⎭

−1⎪

From the Laplace transform table in Appendix E we have: !

f ( t ) = δ ( t ) − e− t + 9 − 9 e− t = δ ( t ) + 9 − 10 e− t

When the numerator P ( s ) and the denominator Q ( s ) are of the same degree, the function f ( t ) will always include a unit impulse function.

Example 6.3-8 Determine the inverse Laplace transform: !

f (t ) = L

−1⎧

s+4 ⎫ ⎨ 2 ⎬ s + 2 s + 2 ⎩ ⎭ 100

Solution: The denominator is an irreducible quadratic, and so the next

!

step is to complete the square: !

s+4 s+4 = s 2 + 2 s + 2 ( s + 1)2 + 1

! f (t ) = L

⎧ ( s + 1) ⎫⎪ ⎨ ⎬+ L 2 ⎪⎩ ( s + 1) + 1 ⎪⎭

−1⎪

!

⎧ ⎫⎪ 3 ⎨ ⎬ 2 ⎪⎩ ( s + 1) + 1 ⎪⎭

−1⎪

From the Laplace transform table in Appendix E and

2

3 1⎞ ⎛ ⎜⎝ s + ⎟⎠ + 4 2

f ( t ) = e− t cos ( t ) + 3e− t sin( t )

1⎞ 1 ⎫ ⎧⎛ s + ⎟⎠ − 2 ⎪ ⎪ ⎜⎝ 2 ⎪ ⎪ f ( t ) = L −1⎨ ⎬ 2 3 1 ⎛ ⎞ ⎪ + ⎪ s + ⎜ ⎟ ⎪⎩ ⎝ 4 ⎪⎭ 2⎠

or

Proposition 2.2-1 we have: !

s

numerator of F ( s ) is also expressed in terms of s + 1 2 :

numerator of F ( s ) is also expressed in terms of s + 1 : ⎧ ( s + 1) + 3 ⎫⎪ ⎨ ⎬= L 2 ⎪⎩ ( s + 1) + 1 ⎪⎭

s2 + s + 1

=

We can use the first shift theorem (Proposition 2.2-1) if the

We can use the first shift theorem (Proposition 2.2-1) if the

−1⎪

s

!

⎧ ⎫ 1 s + ⎪ ⎪ 1 ⎪ 2 −1⎪ f (t ) = L ⎨ L ⎬− 2 3 3 1 ⎞ ⎪⎛ + ⎪ s + s ⎜ ⎟ ⎪⎩ ⎝ 4 ⎪⎭ 2 ⎠

⎧ ⎫ 3 ⎪ ⎪ ⎪ 2 −1⎪ ⎨ ⎬ 2 3 1 ⎛ ⎞ ⎪ + ⎪ s + s ⎜ ⎟ ⎪⎩ ⎝ 4 ⎪⎭ 2 ⎠

Example 6.3-9

From the Laplace transform table in Appendix E and

Determine the inverse Laplace transform:

Proposition 2.2-1 we have:

!

f (t ) = L

−1⎧

⎫ s ⎨ 2 ⎬ ⎩ s + s + 1⎭

!

⎛ 3 ⎞ 1 −t 2 ⎛ 3 ⎞ f ( t ) = e − t 2 cos ⎜ t − e sin ⎜ t ⎝ 2 ⎟⎠ ⎝ 2 ⎟⎠ 3

Solution: The denominator is an irreducible quadratic, and so the next step is to complete the square:

Example 6.3-10 Verify the following result from Example 3.3-2: 101

t

!

∫∫ 0−

τ

0

sin ( 2 γ ) dγ dτ = L

t

−1 ⎧

2 ⎫ ⎨ 4 2⎬ ⎩ s + 4s ⎭

!

Expanding as partial fractions we have:

2 2 1 1 = = − s 4 + 4s 2 s 2 s 2 + 4 2 s2 2 s2 + 4

(

)

(

6.3.3!

)

From the Laplace transform table in Appendix E we have: !

−1 ⎧

2 ⎫ t 1 ⎨ 4 ⎬ = − sin ( 2t ) ⎩ s + 4s 2 ⎭ 2 4

L

!

∫∫ 0−

τ

1 sin ( 2 γ ) dγ dτ = 2 0



t

0−

INVERSE LAPLACE TRANSFORMS FOR DISCONTINUOUS FUNCTIONS USING TRANSFORM TABLES

( − cos ( 2 γ )) 0 dτ

f (t ) = L

−1⎧ 2 e ⎨ 2

⎫ ⎬ ⎩ s +1 ⎭ −πs

τ

Solution: From the Laplace transform table in Appendix E and

t

∫∫ 0



τ

0

sin ( 2 γ ) dγ dτ =

1 2



t

0



Proposition 5.2-1 we have:

(1− cos ( 2 τ )) dτ

!

and so: t

!

2 ⎫ ⎨ 4 ⎬ ⎩ s + 4s 2 ⎭

Example 6.3-11

!

or !

0

−1 ⎧

Determine the inverse Laplace transform:

Evaluating the integral, we obtain: t

0−

sin ( 2 γ ) dγ dτ = L

is valid.

Solution:

!

∫∫

τ

∫∫ 0−

τ

0

sin ( 2 γ ) dγ dτ =

Therefore the equation:

t

τ 1 t 1 − sin ( 2 τ ) = − sin ( 2t ) 2 4 2 4 0

f (t ) = L

−1⎧ 2 e ⎨ 2

⎫ ⎬ = 2 sin ( t − π ) U ( t − π ) = − 2 sint U ( t − π ) s + 1 ⎩ ⎭ −πs

or !

⎧⎪ 0 t 0 . The product f ( t ) ∗ g ( t ) is

(7.1-3)

0

Other properties of convolutions that follow from the

f ( t ) ∗ ⎡⎣ c1 g ( t ) + c2 h ( t ) ⎤⎦ = c1 f ( t ) ∗ g ( t ) + c2 f ( t ) ∗ h ( t ) !

(7.1-4)

!

⎡⎣ f ( t ) ∗ g ( t ) ⎤⎦ ∗ h ( t ) = f ( t ) ∗ ⎡⎣ g ( t ) ∗ h ( t ) ⎤⎦ !

(7.1-5)

!

f ( t ) ∗ c1 g ( t ) = c1 f ( t ) ∗g ( t ) = c1 ( f ( t ) ∗g ( t )) !

(7.1-6)

where c1 and c2 are arbitrary constants, and f ( t ) , g ( t ) , and

h ( t ) are piecewise-smooth functions.

7.1.2!

known as the generalized product.

CONVERGENCE OF THE CONVOLUTION INTEGRAL

Proposition 7.1-1, Convergence of the Convolution:

7.1.1! !

PROPERTIES OF THE CONVOLUTION INTEGRAL

If f ( t ) and g ( t ) are piecewise continuous functions of

Convolution is a commutative operation as can be seen by

letting v = t − τ so that dv = − dτ and equation (7.1-1) becomes: !

f (t ) ∗ g (t ) =



t

! !

exponential order, then their convolution:

0

f ( t − v ) g ( v ) ( −1) dv =



t

f ( t −τ ) g (τ ) dτ !

0

(7.1-2)

!

f (t ) ∗ g (t ) =



t

f ( τ ) g ( t − τ ) dτ !

(7.1-7)

0

is also a continuous function of exponential order, and so it converges absolutely. 108

If σ f < σg , let a = σ g − σ f and M = M f Mg . We then have:

Proof: Since f ( t ) and g ( t ) are piecewise continuous functions on

!

any finite interval 0 ≤ t < τ , the function f ( t ) ∗ g ( t ) will also be piecewise continuous on 0 ≤ t < τ . Since f ( t ) and g ( t ) are functions of exponential order, we

!

also have: !

f ( t ) ≤ Mf e

σf t

!

g ( t ) ≤ Mg e

σg t

where σ f

and σ g



!

0

!

t

∫Me

σf τ

f

σg t

e

Mg e

( σ f −σg ) t − 1

! !

σg ( t−τ )

dτ !

(7.1-10)

σ f ≠ σg !

! !

(7.1-11)

If σ f > σg , let a = σ f − σg and M = M f Mg . We then have: !



0

−e a

σf t

≤ Me

σg t

!



t



t



t

f ( τ ) g ( t − τ ) dτ ≤



t

Mf e

σg τ

Mg e

σ g ( t−τ )

dτ !

(7.1-14)

0

!

f ( τ ) g ( t − τ ) dτ ≤ M e

σg t

0

t

∫ dτ !

(7.1-15)

0

!

f ( τ ) g ( t − τ ) dτ ≤ M

e

σf t

−e a

f ( τ ) g ( t − τ ) dτ ≤ M t e

σg t

!

(7.1-16)

0

σ f − σg

0

t

σg t

(7.1-13)

0

0

f ( τ ) g ( t − τ ) dτ ≤ M f Mg e

≤M

e

If σ f = σg , let M = M f Mg . We then have:

!



σg t

and so:

or t

−e −a

!!

(7.1-9)

are nonnegative real constants, and

f ( τ ) g ( t − τ ) dτ ≤

σf t

or

T0 = max (T1, T2 ) . Therefore: t

e

0

(7.1-8)

t ≥ T2 ≤ T0 !

!

f ( τ ) g ( t − τ ) dτ ≤ M

!

t ≥ T1 ≤ T0 !

!



!

t

σg t

≤ Me

σf t

!

(7.1-12)

From equations (7.2-12), (7.2-13), and (7.2-16) we see that if

f ( t ) and g ( t ) are both piecewise continuous functions of exponential order, then their convolution: !

f (t ) ∗ g (t ) =



t

f ( τ ) g ( t − τ ) dτ !

(7.1-17)

0

is also a piecewise continuous function of exponential order. We then know that their convolution converges absolutely (see Proposition 1.4-2).



109

t

⎡ − ( a−b ) τ ⎤ − bt e f (t ) ∗ g (t ) = e ⎢ ⎥ b − a ⎣ ⎦ τ= 0

!

Proposition 7.1-2: If f ( t ) and g ( t ) are piecewise continuous functions of

Therefore:

exponential order, then their convolution:

f (t ) ∗ g (t ) =

!



t

f ( τ ) g ( t − τ ) dτ !

f (t ) ∗ g (t ) = e

!

(7.1-18)

0

− bt

⎡ e− ( a−b) t − 1⎤ e− a t − e− bt ⎢ ⎥= b − a b−a ⎣ ⎦

has a Laplace transform.

7.2!

Proof: !

Follows from Propositions 7.1-1 and 1.4-3.

LAPLACE TRANSFORM OF THE CONVOLUTION INTEGRAL



Proposition 7.2-1, Laplace Convolution Theorem: Example 7.1-1

If f ( t ) and g ( t ) are piecewise continuous functions of

Determine the convolution f ( t ) ∗ g ( t ) where f ( t ) = e− at and

g (t ) = e

− bt

exponential order with Laplace transforms of L

.

and L

Solution:

L

!

{ g (t )} = G ( s ) , respectively, then: { f (t ) ∗ g (t )} = F ( s ) G ( s ) !

{ f (t )} = F ( s ) (7.2-1)

We have from equation (7.1-1): !

f (t ) ∗ g (t ) =



t

f ( τ ) g ( t − τ ) dτ =

0

∫e 0

or !

t

−aτ

e

− b ( t−τ )



Proof: ! !

f ( t ) ∗ g ( t ) = e− bt



t

0

e− ( a−b ) τ dτ

From the definition of the Laplace transform we have:

L

{ f (t ) ∗ g (t )} = ∫



t = 0−

f ( t ) ∗ g ( t ) e− s t d t !

(7.2-2)

Using equation (7.1-1) we obtain:

and so: 110

!

L

{ f (t ) ∗ g (t )} = ∫



e

⎡ ⎢ ⎣

−st

t = 0−

⎤ f ( τ ) g ( t − τ ) dτ ⎥ d t ! (7.2-3) τ=0 ⎦



t

We can rewrite equation (7.2-3) as: !

⎡ L { f ( t ) ∗ g ( t )} = ⎢ t = 0− ⎣





⎤ e− s t f ( τ ) g ( t − τ ) dτ ⎥ d t ! τ=0 ⎦ t



(7.2-4)

where the region of integration includes 0 ≤ τ ≤ t (vertical bar in shaded region of Figure 7.2-1) and 0 − ≤ t ≤ ∞ . !

Since the integrals are uniformly convergent (see

Proposition 1.4-7), we can change the order of integration:

⎡ L { f ( t ) ∗ g ( t )} = ⎢ τ=0 ⎣

⎤ e− s t f ( τ ) g ( t − τ ) d t ⎥ dτ ! (7.2-5) t=τ ⎦



∫ ∫

!



where the region of integration includes τ ≤ t ≤ ∞ (horizontal bar in shaded region of Figure 7.2-1) and 0 ≤ τ ≤ ∞ . We will now let v = t − τ where we hold τ fixed. We then

!

have dv = dt , and equation (7.2-5) becomes:

L

!

{ f (t ) ∗ g (t )} = ∫



τ=0





e

− s ( v +τ )

v= 0

f (τ ) g ( v ) dv dτ !

(7.2-6)

!

and so: !

L

Figure 7.2-1! Integration regions for L

{ f (t ) ∗ g (t )} .

From the definition of the Laplace transform equation

(7.2-7) becomes:

{ f (t ) ∗ g (t )} = ∫



f (τ ) e

τ=0

−s τ

⎡ ⎢ ⎣





g (v) e

v= 0

−sv

⎤ dv ⎥ dτ ! (7.2-7) ⎦

! L

{ f (t ) ∗ g (t )} = G ( s ) ∫



f (τ ) e− sτ dτ !

τ=0

Re s > σ 0 !

(7.2-8)

111

or

!

L

!

{ f (t ) ∗ g (t )} = F ( s ) G ( s ) !

Re s > σ 0 !

(7.2-9)

and so:

!

Re s > σ 0 ! (7.2-10)



{ f (t ) ∗δ (t )} = L { f (t )} L {δ (t )} !

(7.2-13)

!

L

{ f (t ) ∗δ (t )} = L { f (t )} (1) !

(7.2-14)

and so:

Proposition 7.2-2:

F1 ( s ) , F2 ( s ) , !, Fn ( s ) , respectively, then: L

{ f1 (t ) ∗ f2 (t ) ∗!∗ fn (t )} = F1 ( s ) F2 ( s ) ! Fn ( s ) !

(7.2-11)

f ( t ) ∗ g ( t ) can give the impression that the Laplace transform is multiplicative. In general, however, this is not true:

L

(7.2-16)

Example 7.2-1 If f ( t ) = 1 and g ( t ) = 1 show that the convolution

f (t ) ∗ g (t ) ≠ f (t ) g (t ) .

function δ ( t ) is f ( t ) :



t

0

Proof:

{ f (t ) g (t )} ≠ F ( s ) G ( s ) !



The convolution of any function f ( t ) with the unit impulse

f ( t ) ∗δ ( t ) =

(7.2-15)

The Laplace transform of the convolution of two functions

!

Proposition 7.2-3:

!

f ( τ ) δ ( t − τ ) dτ = f ( t ) !



!

Proof: Follows from Proposition 7.2-1.



t

0

functions of exponential order with Laplace transforms of

!

f ( t ) ∗δ ( t ) =

!

If f1 ( t ) , f2 ( t ) , !, fn ( t ) are piecewise continuous

!

L

or

{ f (t ) ∗ g (t )} = L { f (t )} L {g (t )} !

L

!

We have from equation (7.2-10):

f ( τ ) δ ( t − τ ) dτ = f ( t ) !

Solution: (7.2-12)

We have from equation (7.2-10): !

L

{ f (t ) ∗ g (t )} = L { f (t )} L {g (t )} = 1s 1s = s12 112

Therefore: !

L

!

{ f (t ) ∗ g (t )} ≠ L { f (t ) g (t )} = L {1} = 1s

{ {

}

L e t ∗ e− t =

Example 7.2-2

−t

1 1 1 1 − 2 s −1 2 s +1

Therefore: !

Solution:

{ } L {e } = s 1− 1 s 1+ 1

Expanding as partial fractions: !

If g ( t ) = 1 show that the convolution f ( t ) ∗ g ( t ) ≠ f ( t ) g ( t ) .

}

L e t ∗ e− t = L e t

e t e− t e ∗e = − 2 2 −t

t

We have from equation (7.2-10): !

L

{ f (t ) ∗ g (t )} = L { f (t )} L {g (t )} = L { f (t )} 1s

and so: !

L

{ f (t ) ∗ g (t )} =

F (s) s

=

L

{ f (t )}

Determine the Laplace transform of: !

f (t ) ∗ g (t ) ≠ f (t ) g (t ) = f (t )

We have:

Example 7.2-3

Solution: From Proposition 7.2-1 we have:

τ 2 e3 (t−τ ) dτ

Solution:

! Determine the convolution e t ∗ e− t .



t

0

s

Therefore: !

Example 7.2-4

⎧ L ⎨ ⎩

⎫ τ 2 e3 (t−τ ) dτ ⎬ = L t 2 ∗ e3t 0 ⎭



t

{

}

and so: !

⎧ L ⎨ ⎩

t

∫τ 0

2

e

3 ( t−τ )

⎫ dτ ⎬ = L t 2 L e3t ⎭

{ } { }

Therefore: 113

!

⎧ L ⎨ ⎩

⎫ 2 1 τ 2 e3 (t−τ ) dτ ⎬ = 3 0 ⎭ s s−3



t

!

⎧ L ⎨ ⎩

t



0−

⎫ F (s) f ( τ ) dτ ⎬ = s ⎭

using the Laplace convolution theorem (Proposition 7.2-1). Example 7.2-5 Solution:

Determine the Laplace transform of: t

!

∫e

− 3 ( t−τ )

0

If g ( t ) = 1 we have from equation (7.1-1):

cos ( 4 τ ) dτ

f (t ) ∗ g (t ) =

!



t

f ( τ ) g ( t − τ ) dτ =

0

Solution:

⎧ L ⎨ ⎩



{

!



t

e

− 3 ( t−τ )

0

}

!

⎧ L ⎨ ⎩

0



⎧ L { f ( t ) ∗ g ( t )} = L ⎨ ⎩



!

{ } L {cos ( 4 t )}

⎫ 1 s e− 3 (t−τ ) cos ( 4 τ ) dτ ⎬ = 2 0 ⎭ s + 3 s + 16

{ f (t ) ∗ g (t )} = L { f (t )} L {g (t )} = L { f (t )} 1s =

F (s) s

Therefore:

⎫ cos ( 4 τ ) dτ ⎬ = L e− 3t ⎭

Therefore:

L

!

⎫ e− 3 (t−τ ) cos ( 4 τ ) dτ ⎬ = L e− 3t ∗ cos ( 4 t ) 0 ⎭ t

and so:

⎧ L ⎨ ⎩

f ( τ ) dτ

From equation (7.2-10) we have:

We have: !



t

7.3!

t

!

t

0

⎫ F (s) f ( τ ) dτ ⎬ = s ⎭

INVERSE LAPLACE TRANSFORM OF THE CONVOLUTION INTEGRAL The inverse Laplace transform of the convolution of two

functions f ( t ) ∗ g ( t ) can be obtained from equation (7.2-1): Example 7.2-6

!

L

−1

{F ( s ) G ( s )} = f (t ) ∗ g (t ) !

(7.3-1)

Derive Proposition 3.3-3: 114

This equation can be very useful in obtaining the inverse

!

Laplace transforms of some functions.

L

−1 ⎧

⎫ ⎨ ⎬= s s − 2 ( ) ⎩ ⎭ 1

t

∫ (1) e

2 ( t−τ )

0

1 dτ = − e2 (t−τ ) 2

t τ =0

or

Example 7.3-1 Determine the inverse Laplace transform for

1 s ( s − 2)

.

!

L

−1 ⎧

1 ⎫ 1 2t ⎨ ⎬ = e −1 s s − 2 )⎭ 2 ⎩ (

(

)

Solution: Example 7.3-2

We will take: !

1 s ( s − 2)

= L

{ f (t ) ∗ g (t )} = F ( s ) G ( s )

1 F (s) = ! s

G (s) =

( s − 2)

f (t ) = L

−1 ⎧ 1 ⎫

⎨ ⎬ =1! ⎩s ⎭

g (t ) = L

−1 ⎧

    g(t) = L⁻¹{1/(s − 2)} = e^{2t}

and so:

    L⁻¹{1/(s(s − 2))} = f(t) ∗ g(t) = (1) ∗ e^{2t}

Therefore:

    L⁻¹{1/(s(s − 2))} = ∫₀ᵗ e^{2(t−τ)} dτ = (1/2)(e^{2t} − 1)

Determine the inverse Laplace transform for 1/((s + 1)(s − 2)).

Solution:

We will take:

    1/((s + 1)(s − 2)) = L{f(t) ∗ g(t)} = F(s) G(s)

where:

    F(s) = 1/(s + 1)        G(s) = 1/(s − 2)

We then have:

    f(t) = L⁻¹{1/(s + 1)} = e^{−t}        g(t) = L⁻¹{1/(s − 2)} = e^{2t}

and so:

    L⁻¹{1/((s + 1)(s − 2))} = f(t) ∗ g(t) = e^{−t} ∗ e^{2t}

or

    L⁻¹{1/((s + 1)(s − 2))} = ∫₀ᵗ e^{−τ} e^{2(t−τ)} dτ = e^{2t} ∫₀ᵗ e^{−3τ} dτ

Therefore:

    L⁻¹{1/((s + 1)(s − 2))} = −(1/3) e^{2t} [e^{−3τ}]_{τ=0}^{t} = (1/3)(e^{2t} − e^{−t})

Example 7.3-3

Determine the inverse Laplace transform for 1/(s(s² + 9)).

Solution:

We will take:

    1/(s(s² + 9)) = L{f(t) ∗ g(t)} = F(s) G(s)

where:

    F(s) = 1/s        G(s) = 1/(s² + 9)

We then have:

    f(t) = L⁻¹{1/s} = 1        g(t) = L⁻¹{1/(s² + 9)} = (1/3) sin(3t)

and so:

    L⁻¹{1/(s(s² + 9))} = f(t) ∗ g(t) = (1) ∗ (1/3) sin(3t)

Therefore:

    L⁻¹{1/(s(s² + 9))} = (1/3) ∫₀ᵗ (1) sin[3(t − τ)] dτ = (1/9) [cos 3(t − τ)]_{τ=0}^{t}

or

    L⁻¹{1/(s(s² + 9))} = (1/9)(1 − cos 3t)

Example 7.3-4

Determine the inverse Laplace transform for 1/((s + a)(s + b)).

Solution:

We will take:

    1/((s + a)(s + b)) = L{f(t) ∗ g(t)} = F(s) G(s)

where:

    F(s) = 1/(s + a)        G(s) = 1/(s + b)

We then have:

    f(t) = L⁻¹{1/(s + a)} = e^{−at}        g(t) = L⁻¹{1/(s + b)} = e^{−bt}

and so:

    L⁻¹{1/((s + a)(s + b))} = f(t) ∗ g(t) = e^{−at} ∗ e^{−bt}

or

    L⁻¹{1/((s + a)(s + b))} = ∫₀ᵗ e^{−aτ} e^{−b(t−τ)} dτ = e^{−bt} ∫₀ᵗ e^{(b−a)τ} dτ = (e^{−bt}/(b − a)) [e^{(b−a)τ}]_{τ=0}^{t}

Therefore:

    L⁻¹{1/((s + a)(s + b))} = (e^{−bt}/(b − a)) [e^{(b−a)t} − 1] = (e^{−at} − e^{−bt})/(b − a)

Example 7.3-5

Determine the inverse Laplace transform for 1/(s − 2)².

Solution:

We will take:

    1/(s − 2)² = L{f(t) ∗ g(t)} = F(s) G(s)

where:

    F(s) = 1/(s − 2)        G(s) = 1/(s − 2)

We then have:

    f(t) = L⁻¹{1/(s − 2)} = e^{2t}        g(t) = L⁻¹{1/(s − 2)} = e^{2t}

and so:

    L⁻¹{1/(s − 2)²} = f(t) ∗ g(t) = e^{2t} ∗ e^{2t}

Therefore:

    L⁻¹{1/(s − 2)²} = ∫₀ᵗ e^{2τ} e^{2(t−τ)} dτ = e^{2t} ∫₀ᵗ dτ = t e^{2t}

Example 7.3-6

Determine the inverse Laplace transform for s/(s² + 1)².

Solution:

We will take:

    s/(s² + 1)² = L{f(t) ∗ g(t)} = F(s) G(s)

where:

    F(s) = s/(s² + 1)        G(s) = 1/(s² + 1)

We then have:

    f(t) = L⁻¹{s/(s² + 1)} = cos(t)        g(t) = L⁻¹{1/(s² + 1)} = sin(t)

and so:

    L⁻¹{s/(s² + 1)²} = f(t) ∗ g(t) = cos(t) ∗ sin(t)

Therefore:

    L⁻¹{s/(s² + 1)²} = ∫₀ᵗ cos(τ) sin(t − τ) dτ

Using the trigonometric identity:

    2 sin(θ) cos(φ) = sin(θ + φ) + sin(θ − φ)

where

    t − τ = θ        and        τ = φ

we have:

    L⁻¹{s/(s² + 1)²} = (1/2) ∫₀ᵗ [sin(t) + sin(t − 2τ)] dτ

and so:

    L⁻¹{s/(s² + 1)²} = (1/2) [τ sin(t)]_{τ=0}^{t} + (1/4) [cos(t − 2τ)]_{τ=0}^{t}

or

    L⁻¹{s/(s² + 1)²} = (1/2) t sin(t) + (1/4) [cos(t) − cos(t)]

Therefore:

    L⁻¹{s/(s² + 1)²} = (1/2) t sin(t)

Example 7.3-7

Determine the inverse Laplace transform for 1/(s √(s + 1)).

Solution:

We will take:

    1/(s √(s + 1)) = L{f(t) ∗ g(t)} = F(s) G(s)

where:

    F(s) = 1/s        G(s) = 1/√(s + 1)

We then have:

    f(t) = L⁻¹{1/s} = 1        g(t) = L⁻¹{1/√(s + 1)} = e^{−t}/√(πt)

and so:

    L⁻¹{1/(s √(s + 1))} = f(t) ∗ g(t) = 1 ∗ e^{−t}/√(πt)

Therefore:

    L⁻¹{1/(s √(s + 1))} = (1/√π) ∫₀ᵗ (e^{−τ}/√τ) dτ = erf(√t)
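The convolution results above are easy to spot-check with a computer algebra system. The following is a minimal sketch, not part of the original text, using sympy (assumed available), with the concrete values a = 1 and b = 2 chosen for illustration: it evaluates the convolution integral of Example 7.3-4 directly and compares it with the inverse transform of 1/((s + 1)(s + 2)) taken from sympy's own tables.

    import sympy as sp

    t, tau, s = sp.symbols('t tau s', positive=True)

    # Example 7.3-4 with a = 1, b = 2: the convolution e^{-t} * e^{-2t} ...
    conv = sp.integrate(sp.exp(-tau) * sp.exp(-2*(t - tau)), (tau, 0, t))

    # ... should match the inverse transform of F(s)G(s) = 1/((s + 1)(s + 2)).
    direct = sp.inverse_laplace_transform(1/((s + 1)*(s + 2)), s, t)

    print(sp.simplify(conv))            # exp(-t) - exp(-2*t)
    print(sp.simplify(conv - direct))   # 0

Both routes give (e^{−t} − e^{−2t}), in agreement with the general result (e^{−at} − e^{−bt})/(b − a).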

Chapter 8

Laplace Transforms as Series Expansions

    L{f(t)} = Σ_{n=0}^{∞} aₙ n!/s^{n+1}

In this chapter we will consider Laplace transforms that can be represented as converging power series. We will also determine the inversion of Laplace transforms that are converging power series.

While representing a function or its Laplace transform in the form of a power series does not provide the same physical insight as does a closed form, for complicated problems a representation as a power series may be the easiest to obtain. In such cases it is important to determine if term-by-term transformation of the series is possible.

8.1  POWER SERIES REPRESENTATION OF LAPLACE TRANSFORMS

A power series representing a function f(t) will have the form:

    f(t) = Σ_{n=0}^{∞} aₙ tⁿ        n = 0, 1, 2, …    (8.1-1)

where aₙ are constant coefficients.

The Laplace transform can be considered to be an improper integral analogous to a power series. We can see this by considering the power series:

    g(x) = Σ_{n=0}^{∞} aₙ xⁿ        n = 0, 1, 2, …    (8.1-2)

Representing summation by an integral, we can write equation (8.1-2) in the form:

    g(x) = ∫_{0⁻}^{∞} a(t) xᵗ dt    (8.1-3)

Replacing x with e⁻ˢ, we have the Laplace transform as an equivalent to a power series:

    g(e⁻ˢ) = A(s) = ∫_{0⁻}^{∞} a(t) e^{−st} dt    (8.1-4)

A power series that consists of terms having positive powers of t as in equation (8.1-1) can converge very slowly. Therefore it can be advantageous to convert the series into one having negative powers of s. This can be accomplished if the series can be Laplace transformed term-by-term to obtain a series having the form:

    L{f(t)} = Σ_{n=0}^{∞} aₙ L{tⁿ} = Σ_{n=0}^{∞} aₙ n!/s^{n+1}    (8.1-5)

If a function f(t) can be expanded in a converging power series, it is possible to obtain L{f(t)} by Laplace transforming the series term-by-term and then summing the Laplace transforms, provided the coefficients of the series follow certain constraints.

Proposition 8.1-1, Laplace Transform of a Power Series:

If a function f(t) is continuous on [0⁻, ∞) and is of exponential order, and if f(t) can be represented by a converging power series:

    f(t) = Σ_{n=0}^{∞} aₙ tⁿ        n = 0, 1, 2, …    (8.1-6)

where the coefficients satisfy:

    |aₙ| ≤ M σ₀ⁿ/n!        M > 0    (8.1-7)

and where Re s > σ₀, then L{f(t)} can be represented by the Laurent series:

    L{f(t)} = Σ_{n=0}^{∞} aₙ n!/s^{n+1}    (8.1-8)

Proof:

A power series converges absolutely and uniformly within its circle of convergence. Therefore the series can be integrated term-by-term by Laplace transforming each term. We can write:

    L{f(t)} − Σ_{n=0}^{N} aₙ L{tⁿ} = L{f(t) − Σ_{n=0}^{N} aₙ tⁿ}    (8.1-9)

and so using equation (8.1-6):

    |L{f(t)} − Σ_{n=0}^{N} aₙ L{tⁿ}| ≤ L{|f(t) − Σ_{n=0}^{N} aₙ tⁿ|}    (8.1-10)

or

    |L{f(t)} − Σ_{n=0}^{N} aₙ L{tⁿ}| ≤ L{|Σ_{n=N+1}^{∞} aₙ tⁿ|}    (8.1-11)

From the constraints on the coefficients given in equation (8.1-7) we then have:

    |L{f(t)} − Σ_{n=0}^{N} aₙ L{tⁿ}| ≤ M L{Σ_{n=N+1}^{∞} (σ₀ t)ⁿ/n!}    (8.1-12)

Since we also have:

    e^{σ₀ t} = Σ_{n=0}^{∞} (σ₀ t)ⁿ/n!    (8.1-13)

we can write:

    |L{f(t)} − Σ_{n=0}^{N} aₙ L{tⁿ}| ≤ M L{e^{σ₀ t} − Σ_{n=0}^{N} (σ₀ t)ⁿ/n!}    (8.1-14)

Taking the Laplace transform:

    |L{f(t)} − Σ_{n=0}^{N} aₙ L{tⁿ}| ≤ M (1/(s − σ₀) − Σ_{n=0}^{N} σ₀ⁿ/s^{n+1})    (8.1-15)

or

    lim_{N→∞} |L{f(t)} − Σ_{n=0}^{N} aₙ L{tⁿ}| ≤ M lim_{N→∞} (1/(s − σ₀) − (1/s) Σ_{n=0}^{N} σ₀ⁿ/sⁿ)    (8.1-16)

and so using the geometric series:

    |L{f(t)} − Σ_{n=0}^{∞} aₙ L{tⁿ}| ≤ M (1/(s − σ₀) − (1/s)·1/(1 − σ₀/s))    (8.1-17)

or

    |L{f(t)} − Σ_{n=0}^{∞} aₙ L{tⁿ}| ≤ M (1/(s − σ₀) − 1/(s − σ₀)) = 0    (8.1-18)

Therefore we have:

    L{f(t)} = Σ_{n=0}^{∞} aₙ L{tⁿ}    (8.1-19)

or

    L{f(t)} = Σ_{n=0}^{∞} aₙ n!/s^{n+1}    (8.1-20)

From equation (8.1-20) we see that the neighborhood of t = 0 for f(t) corresponds to the neighborhood of s = ∞ for the Laplace transform of f(t).

Example 8.1-1

Determine the Laplace transform of f(t) = eᵗ using its Taylor series:

    eᵗ = 1 + t/1! + t²/2! + t³/3! + ⋯ = Σ_{n=0}^{∞} tⁿ/n!

Solution:

We have:

    aₙ = 1/n!

Therefore we can write:

    L{eᵗ} = 1/s + 1/s² + 1/s³ + ⋯ = Σ_{n=0}^{∞} 1/s^{n+1} = (1/s) Σ_{n=0}^{∞} 1/sⁿ

or, using the geometric series:

    L{eᵗ} = (1/s)·1/(1 − 1/s) = 1/(s − 1)

Proposition 8.1-1 is a special case of Watson's lemma that applies also for cases where the exponent of t is not an integer.

Proposition 8.1-2, Watson's Lemma:

If a function f(t) is continuous on [0⁻, ∞) and is of exponential order, and if f(t) can be represented by a converging power series:

    f(t) = Σ_{n=0}^{∞} aₙ t^{n+ν}        ν > −1    (8.1-21)

where the coefficients satisfy:

    |aₙ| ≤ M σ₀ⁿ/n!        M > 0    (8.1-22)

and where Re s > σ₀, then L{f(t)} can be represented by the series:

    L{f(t)} = Σ_{n=0}^{∞} aₙ Γ(n + ν + 1)/s^{n+ν+1}    (8.1-23)

Proof:

From equation (1.5-4) we have:

    L{t^α} = Γ(α + 1)/s^{α+1}        α > −1    (8.1-24)

Letting α = n + ν we have from Proposition 8.1-1 and equation (8.1-24):

    L{f(t)} = Σ_{n=0}^{∞} aₙ Γ(n + ν + 1)/s^{n+ν+1}        ν > −1    (8.1-25)

Proposition 8.1-3, Convergence of the Laplace Transform of a Power Series:

If f(t) can be represented by a converging Taylor series:

    f(t) = Σ_{n=0}^{∞} aₙ tⁿ        n = 0, 1, 2, …    (8.1-26)

and if:

    |aₙ| ≤ M σ₀ⁿ/n!    (8.1-27)

where M > 0, then the series L{f(t)} will be uniformly convergent in Re s > σ₀.

Proof:

Follows from Proposition 8.1-1 since the power series is uniformly convergent (see Proposition 1.4-7).

Proposition 8.1-4:

If f(t), f′(t), f″(t), … are continuous functions of exponential order on [0⁻, ∞), then:

    L{f(t)} = Σ_{n=0}^{∞} f⁽ⁿ⁾(0)/s^{n+1}    (8.1-28)

Proof:

From the definition of the Laplace transform we have:

    L{f(t)} = ∫_{0⁻}^{∞} f(t) e^{−st} dt = F(s)    (8.1-29)

Integrating by parts, we let:

    u = f(t)        dv = e^{−st} dt    (8.1-30)

    du = f′(t) dt        v = −e^{−st}/s    (8.1-31)

so that:

    L{f(t)} = −[e^{−st} f(t)/s]_{t=0}^{∞} + (1/s) ∫_{0⁻}^{∞} f′(t) e^{−st} dt    (8.1-32)

or

    L{f(t)} = f(0)/s + (1/s) ∫_{0⁻}^{∞} f′(t) e^{−st} dt    (8.1-33)

Integrating by parts again, we let:

    u = f′(t)        dv = e^{−st} dt    (8.1-34)

    du = f″(t) dt        v = −e^{−st}/s    (8.1-35)

so that:

    L{f(t)} = f(0)/s − [e^{−st} f′(t)/s²]_{t=0}^{∞} + (1/s²) ∫_{0⁻}^{∞} f″(t) e^{−st} dt    (8.1-36)

or

    L{f(t)} = f(0)/s + f′(0)/s² + (1/s²) ∫_{0⁻}^{∞} f″(t) e^{−st} dt    (8.1-37)

From repeated integration by parts, we then obtain:

    L{f(t)} = f(0)/s + f′(0)/s² + f″(0)/s³ + f‴(0)/s⁴ + ⋯    (8.1-38)

or

    L{f(t)} = Σ_{n=0}^{∞} f⁽ⁿ⁾(0)/s^{n+1}    (8.1-39)

8.2  INVERSE OF A LAPLACE TRANSFORM SERIES

If a Laplace transform does not have a standard form found in a table, it may be useful to expand the transform into a series and then invert the series term-by-term. If the transform can be expanded as a converging Laurent series of powers of 1/s, then the inverted transform will have the form of a series of powers of t. For small values of t, this procedure can provide a solution of adequate accuracy.

Proposition 8.2-1, Inverse of a Laplace Transform Series:

If the Laplace transform F(s) of a function f(t) can be represented by a series:

    L{f(t)} = F(s) = Σ_{n=0}^{∞} aₙ/s^{n+1}    (8.2-1)

that converges for Re s > σ₀ with lim_{s→∞} F(s) = 0, then F(s) can be inverted term-by-term to obtain:

    f(t) = Σ_{n=0}^{∞} (aₙ/n!) tⁿ    (8.2-2)

where f(t) converges absolutely and uniformly in its circle of convergence.

Proof:

By definition of a Laurent series, F(s) is an analytic function of 1/s. We can then represent f(t) by a power series:

    f(t) = Σ_{n=0}^{∞} bₙ tⁿ    (8.2-3)

where f(t) will converge absolutely and uniformly in its circle of convergence. From Proposition 8.1-1 we have:

    L{f(t)} = L{Σ_{n=0}^{∞} bₙ tⁿ} = Σ_{n=0}^{∞} bₙ n!/s^{n+1}    (8.2-4)

We will let bₙ = aₙ/n! so that:

    L{f(t)} = L{Σ_{n=0}^{∞} (aₙ/n!) tⁿ} = Σ_{n=0}^{∞} aₙ/s^{n+1}    (8.2-5)

Therefore we see that:

    L⁻¹{Σ_{n=0}^{∞} aₙ/s^{n+1}} = Σ_{n=0}^{∞} (aₙ/n!) tⁿ = f(t)    (8.2-6)

Proposition 8.2-2, Inverse of a Laplace Transform Series:

If the Laplace transform F(s) of a function f(t) can be represented by a series:

    L{f(t)} = F(s) = Σ_{n=0}^{∞} aₙ/s^{n+ν+1}        ν > −1    (8.2-7)

that converges for Re s > σ₀ with lim_{s→∞} F(s) = 0, then F(s) can be inverted term-by-term to obtain:

    f(t) = Σ_{n=0}^{∞} aₙ t^{n+ν}/Γ(n + ν + 1)    (8.2-8)

where f(t) converges absolutely and uniformly in its circle of convergence.

Proof:

Follows from Propositions 8.1-2 and 8.2-1.

Example 8.2-1

Determine the inverse Laplace transform f(t) of:

    F(s) = Σ_{n=0}^{∞} 1/s^{n+1}

Solution:

We have:

    F(s) = 1/s + 1/s² + 1/s³ + ⋯ = (1/s)·1/(1 − 1/s) = 1/(s − 1)

From the transform tables in Appendices E and F we then have:

    f(t) = eᵗ

From Proposition 8.2-1 with aₙ = 1 we also have:

    f(t) = Σ_{n=0}^{∞} tⁿ/n! = eᵗ

Example 8.2-2

Determine the inverse Laplace transform f(t) of:

    F(s) = 1/(s² + 1)

Solution:

We have:

    F(s) = 1/(s² + 1) = (1/s²)·1/(1 + 1/s²) = 1/s² − 1/s⁴ + 1/s⁶ − 1/s⁸ + ⋯

From the transform table in Appendix E we then have:

    f(t) = t − t³/3! + t⁵/5! − t⁷/7! + ⋯ = sin(t)

Example 8.2-3

From its series expansion determine the inverse Laplace transform f(t) of:

    F(s) = (1/s) sin(1/s)

Solution:

The series expansion of F(s) is:

    F(s) = 1/s² − 1/(3! s⁴) + 1/(5! s⁶) − 1/(7! s⁸) + ⋯

From the transform table in Appendix E we then have:

    f(t) = t − t³/(3!)² + t⁵/(5!)² − t⁷/(7!)² + ⋯
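The term-by-term inversion of Example 8.2-3 can be mechanized. The following is a sketch, not part of the original text, using sympy (assumed available); the variable u stands for 1/s, the truncation order N is an arbitrary choice, and the inversion rule is the one given by Proposition 8.2-1, L⁻¹{1/s^m} = t^(m−1)/(m − 1)!.

    import sympy as sp

    t, u = sp.symbols('t u', positive=True)
    N = 10   # truncation order in u = 1/s (an arbitrary choice)

    # Laurent series of F(s) = (1/s) sin(1/s) written in the variable u = 1/s.
    F_u = sp.series(u * sp.sin(u), u, 0, N).removeO()

    # Invert term by term using L^{-1}{1/s^m} = t^(m-1)/(m-1)!  (Proposition 8.2-1).
    coeffs = sp.Poly(F_u, u).all_coeffs()[::-1]          # coefficient of u^m at index m
    f_t = sum(c * t**(m - 1) / sp.factorial(m - 1)
              for m, c in enumerate(coeffs) if c != 0)

    print(sp.expand(f_t))   # t - t**3/36 + t**5/14400 - ...,  i.e.  t - t^3/(3!)^2 + t^5/(5!)^2 - ...

The printed partial sum agrees with the series found in Example 8.2-3, since (3!)² = 36 and (5!)² = 14400.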

Chapter 9

Solution of Linear Ordinary Differential Equations having Constant Coefficients

    L[a f″(t) + b f′(t) + c f(t)] = 0

To fully understand the nature of physical processes, it is often helpful to first mathematically describe these processes. This involves creating a mathematical model of the process. Since all physical processes involve some entity undergoing a change, and since the derivative of a function is defined as the rate of change of the function at a point, differential equations (equations containing derivatives) are often encountered in any description of physical processes. Prediction of the functioning of a physical process over time or space usually involves solving a differential equation that represents how the physical system is changing (see Figure 9-1).

A mathematical model of a physical process will then generally be in the form of an equation with derivatives. Such an equation is a mathematical encapsulation of changes that are occurring in a physical process. The type of differential equation needed to describe a physical process depends both upon the nature of the process and upon the detail to which the process is being described.

There are two types of differential equations: ordinary differential equations and partial differential equations. If the dependent variable in a differential equation is a function of only one independent variable, the resulting differential equation contains only ordinary derivatives and is called an ordinary differential equation (ODE). If the dependent variable in a differential equation is a function of more than one independent variable, the resulting differential equation has partial derivatives and is called a partial differential equation (PDE).

In this chapter we will consider the application of Laplace transform techniques to the solution of ordinary differential equations. In Chapter 10 the application of Laplace transform techniques to the solution of partial differential equations will be considered. The Laplace transform changes differential equations together with their initial conditions into algebraic equations, thereby considerably simplifying the solution of the differential equation.

Figure 9-1.  Mathematical model of a physical process.

9.1  CLASSIFICATION OF DIFFERENTIAL EQUATIONS

Differential equations are classified according to their order, degree, and linearity.

9.1.1  ORDER OF A DIFFERENTIAL EQUATION

The order of a differential equation is the order of the highest derivative within the differential equation.

A first order ODE having y(t) as the dependent variable and t as the independent variable is:

    a₁(t) dy(t)/dt + a₀(t) y(t) = f(t)    (9.1-1)

where f(t) is known as the forcing function or driving function because of its activating role in physical systems represented by the ODE. A differential equation need not have a forcing function. The coefficients aᵢ(t) are functions of the independent variable t. In a first order ODE the only derivative of y = y(t) that will appear is the first derivative.

The coefficients aᵢ(t) of a differential equation do not determine the order of the differential equation. The coefficients can either be a function of the independent variable t, in which case they are called variable coefficients, or they can be constant coefficients.

An ODE is defined for some real interval I of the independent variable on which the ODE has meaning. This interval may be determined by the problem represented by the ODE or by variable coefficients of the ODE. We will assume in this book that the coefficients of an ODE are continuous functions on the real interval I on which the ODE is defined.

We can write the first order ODE with variable coefficients in equation (9.1-1) in the form:

    a₁(t) y′ + a₀(t) y = f(t)    (9.1-2)

where the coefficient a₁(t) is not identically zero, and where the dependent variable y is understood to be a function of the independent variable t.

A second order ODE with variable coefficients has the form:

    a₂(t) y″ + a₁(t) y′ + a₀(t) y = f(t)    (9.1-3)

where a₂(t) is not identically zero.

The general nth order ODE with variable coefficients has the form:

    aₙ(t) y⁽ⁿ⁾ + aₙ₋₁(t) y⁽ⁿ⁻¹⁾ + ⋯ + a₁(t) y′ + a₀(t) y = f(t)    (9.1-4)

where aₙ(t) is not identically zero. The general nth order ODE with constant coefficients has the form:

    aₙ y⁽ⁿ⁾ + aₙ₋₁ y⁽ⁿ⁻¹⁾ + ⋯ + a₁ y′ + a₀ y = f(t)    (9.1-5)

9.1.2  DEGREE OF A DIFFERENTIAL EQUATION

The degree of a differential equation is the power to which the highest order derivative is raised after the differential equation has been rationalized. If the degree of the differential equation is two or higher, the differential equation forms a polynomial with its derivatives.

An example of a first order ODE of degree two is:

    a₁(t) (dy/dt)² + a₀(t) y = f(t)    (9.1-6)

and an example of a second order ODE of degree three is:

    a₂(t) (d²y/dt²)³ + a₁(t) dy/dt + a₀(t) y = f(t)    (9.1-7)

The coefficients of a differential equation do not determine the degree of the differential equation.

9.1.3  LINEARITY OF A DIFFERENTIAL EQUATION

The linearity of a differential equation can be of two kinds: linear and nonlinear.

A linear differential equation is linear in the unknown function y(t) and its derivatives. Therefore y(t) and all its derivatives will be present only to the first power in a linear differential equation, and so all linear ODEs are of degree one. Moreover no products of y(t) and any of its derivatives can be present in a linear ODE. The general forms of linear ODEs are given in equations (9.1-4) and (9.1-5).

If a differential equation is not linear, then it is nonlinear. Any ODE containing a function of the dependent function y(t) such as sin(y(t)) is nonlinear. Equations (9.1-2), (9.1-3), (9.1-4), and (9.1-5) are all linear ODEs, and equations (9.1-6) and (9.1-7) are nonlinear ODEs. The coefficients of a differential equation do not determine the linearity of the differential equation. For very few nonlinear ODEs of order greater than one is it possible to find an analytic solution.

9.2  SOLUTIONS OF DIFFERENTIAL EQUATIONS

A solution of a differential equation is any function that satisfies the differential equation on some specified open interval I of the independent variable on which the ODE has meaning. Substitution of the solution into the differential equation will reduce the equation to an identity. Verification of a solution can therefore be accomplished by making such a substitution. The process of solving a differential equation is referred to as integrating the equation. Some differential equations that can be posed have no solution.

A solution of a general ODE will be a function y = y(t) which can be represented as a set of curves. The number of solutions that satisfy a given differential equation will generally be infinite.

9.2.1  STEADY-STATE AND TRANSIENT SOLUTIONS

A steady-state solution is a bounded solution that does not go to zero as the independent variable increases infinitely. A transient solution goes to zero as the independent variable increases infinitely.

9.2.2  COMPLEMENTARY SOLUTION

A solution of a homogeneous differential equation is known as a complementary solution or complementary function. Any nonzero forcing function f(t) that exists can be set to zero to obtain a homogeneous differential equation. The complementary solution of an ODE represents a steady-state solution of the ODE.

9.2.3  PARTICULAR SOLUTION

The solution of a nonhomogeneous differential equation that arises from a nonzero forcing function is known as the particular solution or particular integral. It can represent the transient solution of an ODE.

9.2.4  GENERAL SOLUTION

The general solution of an ODE includes all solutions that satisfy the ODE. A general solution will contain arbitrary constants resulting from the integration of the ODE. Therefore the general solution of an nth order ODE contains n arbitrary integration constants. A general solution is not constrained by any initial conditions. Homogeneous and nonhomogeneous ODEs both have general solutions. Only after initial conditions are applied to the ODE can a single specific solution be determined. In a specific solution the arbitrary constants of a general solution are given specific values.

9.2.5  INITIAL AND BOUNDARY CONDITIONS

Initial conditions of a differential equation specify the dependent variable and/or some of its derivatives for a given value of the independent variable at some specific time. The number of initial conditions required to define a single specific solution is a function of the order of the ODE. An nth order ODE, containing n arbitrary constants, requires n initial conditions to determine all the constants. An initial value problem is a differential equation with initial conditions sufficient to define a single specific solution.

Boundary conditions of a differential equation specify the dependent variable for a given value of the independent variable at several points in space. Rather than specifying conditions at a certain time as do initial conditions, boundary conditions establish boundaries in space within which the differential equation has meaning.

9.3  SOLVING LINEAR ODES USING LAPLACE TRANSFORMS

The Laplace transform is very useful for obtaining solutions of linear ODEs having constant coefficients. Such equations will be transformable if their driving function is transformable. We will now consider some examples.

When using Laplace transforms to solve ODEs, the entire nonhomogeneous ODE including its forcing function is transformed at the same time. This means that the steady-state and transient solutions of the ODE can simultaneously be obtained, which is not the case when other methods are used to solve the ODE. Using the Laplace transform, nonhomogeneous ODEs are then solved without first solving the associated homogeneous ODE. The complete solution of an ODE is thereby found without the necessity of first finding the complementary solution, thus eliminating the need to determine arbitrary constants associated with the complementary solution.

The general procedure for using the Laplace transform to obtain solutions of linear nonhomogeneous ODEs having constant coefficients and closed form solutions consists of the following:

1.  The entire nonhomogeneous ODE including the forcing function is first transformed using the Laplace transform.

2.  The initial conditions are applied to the transformed equation. This produces an algebraic equation already incorporating the initial conditions.

3.  The algebraic equation is solved for the transformed dependent variable.

4.  The transformed dependent variable solution is rewritten using partial fractions if necessary.

5.  The inverse Laplace transform of each of the terms in the partial fraction expansion of the transformed solution is determined.

6.  In this way the solution to the original nonhomogeneous ODE is obtained (see Figure 9.3-1).

Figure 9.3-1.  Solution of a linear ODE using Laplace transforms.

The most direct procedure for obtaining solutions is indicated by the dashed line in Figure 9.3-1. The procedure for using the Laplace transform to obtain solutions is indicated in Figure 9.3-1 by the solid lines. The Laplace transform procedure can be much simpler to execute than the more direct method.

Proposition 9.3-1:

If L{y(t)} = Y(s) and L{f(t)} = F(s), where y(t) is a continuous function of exponential order on [0, ∞), and f(t) and y′(t) are piecewise-smooth functions of exponential order on [0, ∞), the Laplace transform of the second-order ODE:

    a y″(t) + b y′(t) + c y(t) = f(t)    (9.3-1)

is then:

    Y(s) = [(a s + b) y(0) + a y′(0)]/(a s² + b s + c) + F(s)/(a s² + b s + c)    (9.3-2)

where a, b, and c are constants.

Proof:

The Laplace transform of the ODE in equation (9.3-1) is:

    L{a y″(t) + b y′(t) + c y(t)} = L{f(t)}    (9.3-3)

Using Propositions 3.1-1 and 3.1-2, we have:

    a[s² Y(s) − s y(0) − y′(0)] + b[s Y(s) − y(0)] + c Y(s) = F(s)    (9.3-4)

and so:

    (a s² + b s + c) Y(s) = a s y(0) + a y′(0) + b y(0) + F(s)    (9.3-5)

Therefore:

    Y(s) = [(a s + b) y(0) + a y′(0)]/(a s² + b s + c) + F(s)/(a s² + b s + c)    (9.3-6)

If the initial conditions of a second-order ODE are zero, we have:

    (a s² + b s + c) Y(s) = F(s)    (9.3-7)

and the system transfer function H(s) is defined to be:

    H(s) = Y(s)/F(s) = 1/(a s² + b s + c)    (9.3-8)

The system transfer function is then the ratio of the Laplace transforms of the output function to the input forcing function. When the forcing function f(t) is an impulse function δ(t), the system response function H(s) is called the impulse response, and is the solution to the ODE since F(s) = 1:

    H(s) = Y(s)    (9.3-9)

The polynomial a s² + b s + c in the denominator of equation (9.3-8) is the characteristic function for the linear homogeneous differential equation:

    a y″(t) + b y′(t) + c y(t) = 0    (9.3-10)

The roots of a s² + b s + c depend only upon properties of the function y(t). The roots are the poles s₁ and s₂ of the impulse response:

    H(s) = 1/(a s² + b s + c) = 1/(a (s − s₁)(s − s₂))    (9.3-11)

Figure 9.3-2.  System response to a forcing function.

Once the system transfer function H(s) is determined, the response of a system Y(s) to any forcing function f(t) can be determined either on the s-plane:

    Y(s) = H(s) F(s)    (9.3-12)

or on the t-axis:

    y(t) = h(t) ∗ f(t) = L⁻¹{H(s) F(s)}    (9.3-13)

as shown in Figure 9.3-2.

Proposition 9.3-2:

Let L{y(t)} = Y(s) and L{f(t)} = F(s) where y(t) is a continuous function of exponential order on [0, ∞), and f(t) and y′(t) are piecewise-smooth functions of exponential order on [0, ∞). Given the second-order ODE:

    a y″(t) + b y′(t) + c y(t) = f(t)    (9.3-14)

where a, b, and c are constants, and y(0) = 0 and y′(0) = 0 so that:

    H(s) = Y(s)/F(s) = 1/(a s² + b s + c)    (9.3-15)

then we have:

    y(t) = h(t) ∗ f(t) = ∫₀ᵗ h(t − τ) f(τ) dτ    (9.3-16)

Proof:

We are given:

    Y(s) = H(s) F(s)    (9.3-17)

From Proposition 7.2-1 and equation (7.1-2) we then have:

    y(t) = h(t) ∗ f(t) = ∫₀ᵗ h(t − τ) f(τ) dτ    (9.3-18)

Example 9.3-1

Determine the solution of the differential equation:

    y′(t) − 2y(t) = 0

with initial condition: y(0) = 3.

Solution:

We first transform the entire ODE:

    L{y′(t)} − 2 L{y(t)} = 0

Using Proposition 3.1-1, we have:

    [s Y(s) − y(0⁻)] − 2 Y(s) = 0

Applying the initial condition:

    [s − 2] Y(s) = 3

and so the Laplace transform of y(t) is:

    Y(s) = 3/(s − 2)

From the transform table in Appendix E we obtain the inverse transform:

    y(t) = 3 e^{2t}
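The transfer-function view of Propositions 9.3-1 and 9.3-2 can be illustrated numerically. The following is a sketch, not part of the original text, using sympy (assumed available); the coefficients a = 1, b = 3, c = 2 and the forcing function e^{−3t} are arbitrary choices made for the illustration.

    import sympy as sp

    t, tau, s = sp.symbols('t tau s', positive=True)

    a, b, c = 1, 3, 2                       # assumed example system: y'' + 3y' + 2y = f(t), zero initial conditions
    H = 1 / (a*s**2 + b*s + c)              # transfer function H(s)
    h = sp.inverse_laplace_transform(H, s, t)   # impulse response h(t)

    f = sp.exp(-3*t)                        # an example forcing function
    F = sp.laplace_transform(f, t, s, noconds=True)

    # Response on the s-plane: Y(s) = H(s) F(s), then invert ...
    y_splane = sp.inverse_laplace_transform(H*F, s, t)

    # ... and on the t-axis: y(t) = (h * f)(t) = integral of h(t - tau) f(tau) dtau.
    y_taxis = sp.integrate(h.subs(t, t - tau) * f.subs(t, tau), (tau, 0, t))

    print(sp.simplify(y_splane - y_taxis))   # 0: both routes give the same response

Both routes give e^{−t}/2 − e^{−2t} + e^{−3t}/2 for this choice of system and forcing function, as equation (9.3-13) requires.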

Example 9.3-2

Determine the solution of the differential equation:

    y′(t) − 2y(t) = 0

with initial condition: y(2) = 3.

Solution:

We first transform the entire ODE:

    L{y′(t)} − 2 L{y(t)} = 0

Using Proposition 3.1-1 we have:

    [s Y(s) − y(0⁻)] − 2 Y(s) = 0

or

    [s − 2] Y(s) = y(0⁻)

and so the Laplace transform of y(t) is:

    Y(s) = y(0⁻)/(s − 2)

From the transform table in Appendix E we obtain the inverse transform:

    y(t) = y(0⁻) e^{2t}

Using the initial condition we can write:

    y(2) = 3 = y(0⁻) e^{2(2)} = y(0⁻) e⁴

and so:

    y(0⁻) = 3 e^{−4}

Therefore:

    y(t) = 3 e^{−4} e^{2t} = 3 e^{−4+2t}

Example 9.3-3

Determine the solution of the differential equation:

    y′(t) + y(t) = sin t

with initial condition: y(0) = 0.

Solution:

We first transform the entire ODE:

    L{y′(t)} + L{y(t)} = L{sin t}

Using Proposition 3.1-1 we have:

    [s Y(s) − y(0⁻)] + Y(s) = 1/(s² + 1)

Applying the initial condition:

    [s + 1] Y(s) = 1/(s² + 1)

and so the Laplace transform of y(t) is:

    Y(s) = 1/((s + 1)(s² + 1))

Expanding as partial fractions:

    1/((s + 1)(s² + 1)) = A/(s + 1) + (B s + C)/(s² + 1)

we find A = 1/2, B = −1/2, and C = 1/2:

    Y(s) = (1/2)·1/(s + 1) − (1/2)·s/(s² + 1) + (1/2)·1/(s² + 1)

From the transform table in Appendix E we obtain the inverse transform:

    y(t) = (1/2) e^{−t} − (1/2) cos t + (1/2) sin t

Example 9.3-4

Determine the solution of the differential equation:

    y′(t) + 4y(t) = e^{−3t}

with initial condition: y(0) = 1.

Solution:

We first transform the entire ODE:

    L{y′(t)} + 4 L{y(t)} = L{e^{−3t}}

Using Proposition 3.1-1 we have:

    [s Y(s) − y(0⁻)] + 4 Y(s) = 1/(s + 3)

Applying the initial condition:

    [s + 4] Y(s) − 1 = 1/(s + 3)

and so the Laplace transform of y(t) is:

    Y(s) = 1/(s + 4) + 1/((s + 3)(s + 4))

Expanding as partial fractions:

    1/((s + 3)(s + 4)) = A/(s + 3) + B/(s + 4)

we find A = 1 and B = −1:

    Y(s) = 1/(s + 4) + 1/(s + 3) − 1/(s + 4) = 1/(s + 3)

From the transform table in Appendix E we obtain the inverse transform:

    y(t) = e^{−3t}
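The six-step procedure of Section 9.3 can be carried out symbolically. The following is a sketch, not part of the original text, using sympy (assumed available), applied to Example 9.3-3 (y′ + y = sin t, y(0) = 0): transform, solve the resulting algebraic equation, expand in partial fractions, and invert.

    import sympy as sp

    t, s = sp.symbols('t s', positive=True)
    Y = sp.Symbol('Y')

    # Steps 1-2: transform y' + y = sin t with y(0) = 0, using L{y'} = s Y(s) - y(0).
    y0 = 0
    transformed = sp.Eq((s*Y - y0) + Y, sp.laplace_transform(sp.sin(t), t, s, noconds=True))

    # Step 3: solve the algebraic equation for Y(s).
    Ysol = sp.solve(transformed, Y)[0]       # 1/((s + 1)*(s**2 + 1))

    # Step 4: partial fraction expansion.
    Ypf = sp.apart(Ysol, s)

    # Steps 5-6: invert term by term to recover y(t).
    y = sp.inverse_laplace_transform(Ypf, s, t)
    print(sp.simplify(y))                    # exp(-t)/2 - cos(t)/2 + sin(t)/2

The printed result matches the answer obtained in Example 9.3-3 from the transform table.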

Example 9.3-5

Determine the solution of the differential equation:

    y″(t) + 4y(t) = 0

with initial conditions: y(0) = 1, y′(0) = 0.

Solution:

We first transform the entire ODE:

    L{y″(t)} + 4 L{y(t)} = 0

From Proposition 3.1-2 we have:

    [s² Y(s) − s y(0⁻) − y′(0⁻)] + 4 Y(s) = 0

Applying the initial conditions:

    [s² + 4] Y(s) − s = 0

and so the Laplace transform of y(t) is:

    Y(s) = s/(s² + 4)

From the transform table in Appendix E we obtain the inverse transform:

    y(t) = cos 2t

Example 9.3-6

Determine the solution of the differential equation:

    y″(t) − y(t) = 1

with initial conditions: y(0) = 0, y′(0) = 0.

Solution:

We first transform the entire ODE:

    L{y″(t)} − L{y(t)} = L{1}

From Proposition 3.1-2 we have:

    [s² Y(s) − s y(0⁻) − y′(0⁻)] − Y(s) = 1/s

Applying the initial conditions:

    [s² − 1] Y(s) = 1/s

and so the Laplace transform of y(t) is:

    Y(s) = 1/(s(s² − 1)) = 1/(s(s − 1)(s + 1))

Expanding as partial fractions:

    1/(s(s − 1)(s + 1)) = A/s + B/(s − 1) + C/(s + 1)

we find A = −1, B = 1/2, and C = 1/2:

    Y(s) = −1/s + 1/(2(s − 1)) + 1/(2(s + 1))

From the transform table in Appendix E we obtain the inverse transform:

    y(t) = −1 + (1/2) eᵗ + (1/2) e^{−t}

Example 9.3-7

Determine the solution of the differential equation:

    y″(t) − 4y(t) = e^{−t}

with initial conditions: y(0) = 0, y′(0) = 0.

Solution:

We first transform the entire ODE:

    L{y″(t)} − 4 L{y(t)} = L{e^{−t}}

From Proposition 3.1-2 we have:

    [s² Y(s) − s y(0⁻) − y′(0⁻)] − 4 Y(s) = 1/(s + 1)

Applying the initial conditions:

    [s² − 4] Y(s) = 1/(s + 1)

and so the Laplace transform of y(t) is:

    Y(s) = 1/((s² − 4)(s + 1)) = 1/((s + 2)(s − 2)(s + 1))

Expanding as partial fractions:

    1/((s + 2)(s − 2)(s + 1)) = A/(s + 2) + B/(s − 2) + C/(s + 1)

we find A = 1/4, B = 1/12, and C = −1/3:

    Y(s) = 1/(4(s + 2)) + 1/(12(s − 2)) − 1/(3(s + 1))

From the transform table in Appendix E we obtain the inverse transform:

    y(t) = (1/4) e^{−2t} + (1/12) e^{2t} − (1/3) e^{−t}
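Step 4 of the procedure, the partial fraction expansion, is purely mechanical and easy to check by machine. The following is a sketch, not part of the original text, using sympy (assumed available), applied to the transform found in Example 9.3-7.

    import sympy as sp

    s, t = sp.symbols('s t', positive=True)

    Y = 1 / ((s + 2)*(s - 2)*(s + 1))     # transform found in Example 9.3-7

    # Partial fraction expansion; equals 1/(4(s+2)) + 1/(12(s-2)) - 1/(3(s+1)).
    print(sp.apart(Y, s))

    # Inverse transform; equals exp(-2t)/4 + exp(2t)/12 - exp(-t)/3.
    print(sp.inverse_laplace_transform(Y, s, t))

The coefficients agree with the values A = 1/4, B = 1/12, C = −1/3 found above.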

Example 9.3-8

Determine the solution of the differential equation:

    y″(t) + 4y(t) = cos(2t)

with initial conditions: y(0) = 0, y′(0) = 1.

Solution:

We first transform the entire ODE:

    L{y″(t)} + 4 L{y(t)} = L{cos(2t)}

From Proposition 3.1-2 we have:

    [s² Y(s) − s y(0⁻) − y′(0⁻)] + 4 Y(s) = s/(s² + 4)

Applying the initial conditions:

    [s² + 4] Y(s) = s/(s² + 4) + 1

and so the Laplace transform of y(t) is:

    Y(s) = s/(s² + 4)² + 1/(s² + 4)

From the transform table in Appendix E we obtain the inverse transform:

    y(t) = (t/4) sin(2t) + (1/2) sin(2t)

Example 9.3-9

Determine the solution of the differential equation:

    y″(t) − 6y′(t) + 9y(t) = 0

with initial conditions: y(0) = 0, y′(0) = 1.

Solution:

We first transform the entire ODE:

    L{y″(t)} − 6 L{y′(t)} + 9 L{y(t)} = 0

From Propositions 3.1-1 and 3.1-2 we have:

    [s² Y(s) − s y(0⁻) − y′(0⁻)] − 6[s Y(s) − y(0⁻)] + 9 Y(s) = 0

Applying the initial conditions:

    [s² Y(s) − 1] − 6 s Y(s) + 9 Y(s) = 0

or

    [s² − 6s + 9] Y(s) = 1

and so the Laplace transform of y(t) is:

    Y(s) = 1/(s − 3)²

From the transform table in Appendix E we obtain the inverse transform:

    y(t) = t e^{3t}

Example 9.3-10

Determine the solution of the differential equation:

    y″(t) + y′(t) − 6y(t) = 0

with initial conditions: y(0) = 1, y′(0) = 0.

Solution:

We first transform the entire ODE:

    L{y″(t)} + L{y′(t)} − 6 L{y(t)} = 0

From Propositions 3.1-1 and 3.1-2 we have:

    [s² Y(s) − s y(0⁻) − y′(0⁻)] + [s Y(s) − y(0⁻)] − 6 Y(s) = 0

Applying the initial conditions:

    [s² + s − 6] Y(s) − s − 1 = 0

and so the Laplace transform of y(t) is:

    Y(s) = (s + 1)/(s² + s − 6) = (s + 1)/((s + 3)(s − 2))

Expanding as partial fractions:

    (s + 1)/((s + 3)(s − 2)) = A/(s + 3) + B/(s − 2)

we find A = 2/5 and B = 3/5:

    Y(s) = 2/(5(s + 3)) + 3/(5(s − 2))

From the transform table in Appendix E we obtain the inverse transform:

    y(t) = (2/5) e^{−3t} + (3/5) e^{2t}

Example 9.3-11

Determine the solution of the differential equation:

    y″(t) + 2y′(t) + 10y(t) = 0

with initial conditions: y(0) = 1, y′(0) = 0.

Solution:

We first transform the entire ODE:

    L{y″(t)} + 2 L{y′(t)} + 10 L{y(t)} = 0

From Propositions 3.1-1 and 3.1-2 we have:

    [s² Y(s) − s y(0⁻) − y′(0⁻)] + 2[s Y(s) − y(0⁻)] + 10 Y(s) = 0

Applying the initial conditions:

    [s² + 2s + 10] Y(s) − s − 2 = 0

and so the Laplace transform of y(t) is:

    Y(s) = (s + 2)/(s² + 2s + 10)

The denominator is an irreducible quadratic, and so the next step is to complete the square:

    Y(s) = (s + 2)/((s + 1)² + 9)

If we change the numerator we can use the first shift theorem (Proposition 2.2-1):

    Y(s) = [(s + 1) + 1]/((s + 1)² + 9) = (s + 1)/((s + 1)² + 9) + (1/3)·3/((s + 1)² + 9)

From the transform table in Appendix E we obtain the inverse transform:

    y(t) = e^{−t} cos(3t) + (1/3) e^{−t} sin(3t)

Example 9.3-12

Determine the solution of the differential equation:

    y″(t) + 4y′(t) + 2y(t) = 0

with initial conditions: y(0) = 1, y′(0) = 1.

Solution:

We first transform the entire ODE:

    L{y″(t)} + 4 L{y′(t)} + 2 L{y(t)} = 0

From Propositions 3.1-1 and 3.1-2 we have:

    [s² Y(s) − s y(0⁻) − y′(0⁻)] + 4[s Y(s) − y(0⁻)] + 2 Y(s) = 0

Applying the initial conditions:

    [s² + 4s + 2] Y(s) = s + 5

and so the Laplace transform of y(t) is:

    Y(s) = (s + 5)/(s² + 4s + 2)

The denominator is an irreducible quadratic, and so the next step is to complete the square:

    Y(s) = (s + 5)/((s + 2)² − 2)

If we change the numerator we can use the first shift theorem (Proposition 2.2-1):

    Y(s) = [(s + 2) + 3]/((s + 2)² − 2) = (s + 2)/((s + 2)² − 2) + (3/√2)·√2/((s + 2)² − 2)

From the transform table in Appendix E we obtain the inverse transform:

    y(t) = e^{−2t} cosh(√2 t) + (3/√2) e^{−2t} sinh(√2 t)
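The completing-the-square step of Examples 9.3-11 and 9.3-12 corresponds to the first shift theorem. The following is a sketch, not part of the original text, using sympy (assumed available), which confirms both the rewritten form of Y(s) and its inverse transform for Example 9.3-11.

    import sympy as sp

    s, t = sp.symbols('s t', positive=True)

    Y = (s + 2) / (s**2 + 2*s + 10)       # transform found in Example 9.3-11

    # Rewrite the denominator as (s + 1)^2 + 9 and split the numerator for the s-shift:
    Y_shifted = (s + 1)/((s + 1)**2 + 9) + sp.Rational(1, 3) * 3/((s + 1)**2 + 9)
    print(sp.simplify(Y - Y_shifted))      # 0: the two forms are identical

    y = sp.inverse_laplace_transform(Y, s, t)
    expected = sp.exp(-t)*sp.cos(3*t) + sp.exp(-t)*sp.sin(3*t)/3
    print(sp.simplify(y - expected))       # 0: matches the tabulated inverse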

Example 9.3-13

Determine the solution of the differential equation:

    y″(t) − 2y′(t) + y(t) = eᵗ

with initial conditions: y(0) = 0, y′(0) = 1.

Solution:

We first transform the entire ODE:

    L{y″(t)} − 2 L{y′(t)} + L{y(t)} = L{eᵗ}

From Propositions 3.1-1 and 3.1-2 we have:

    [s² Y(s) − s y(0⁻) − y′(0⁻)] − 2[s Y(s) − y(0⁻)] + Y(s) = 1/(s − 1)

Applying the initial conditions:

    [s² Y(s) − 1] − 2 s Y(s) + Y(s) = 1/(s − 1)

or

    [s² − 2s + 1] Y(s) = 1 + 1/(s − 1)

and so the Laplace transform of y(t) is:

    Y(s) = 1/(s − 1)² + 1/(s − 1)³

From the transform table in Appendix E we obtain the inverse transform:

    y(t) = t eᵗ + (t²/2) eᵗ

Example 9.3-14

Determine the solution of the differential equation:

    y″(t) + y′(t) − 2y(t) = 4 e^{2t}

with initial conditions: y(0) = 0, y′(0) = 0.

Solution:

We first transform the entire ODE:

    L{y″(t)} + L{y′(t)} − 2 L{y(t)} = 4 L{e^{2t}}

From Propositions 3.1-1 and 3.1-2 we have:

    [s² Y(s) − s y(0⁻) − y′(0⁻)] + [s Y(s) − y(0⁻)] − 2 Y(s) = 4/(s − 2)

Applying the initial conditions:

    [s² + s − 2] Y(s) = 4/(s − 2)

and so the Laplace transform of y(t) is:

    Y(s) = 4/((s − 1)(s + 2)(s − 2))

Expanding as partial fractions:

    4/((s − 1)(s + 2)(s − 2)) = A/(s − 1) + B/(s + 2) + C/(s − 2)

we find A = −4/3, B = 1/3, and C = 1:

    Y(s) = −(4/3)·1/(s − 1) + (1/3)·1/(s + 2) + 1/(s − 2)

From the transform table in Appendix E we obtain the inverse transform:

    y(t) = −(4/3) eᵗ + (1/3) e^{−2t} + e^{2t}

Example 9.3-15

Determine the solution of the differential equation:

    y″(t) − 2y′(t) + y(t) = e^{3t}

with initial conditions: y(0) = 0, y′(0) = 0.

Solution:

We first transform the entire ODE:

    L{y″(t)} − 2 L{y′(t)} + L{y(t)} = L{e^{3t}}

From Propositions 3.1-1 and 3.1-2 we have:

    [s² Y(s) − s y(0⁻) − y′(0⁻)] − 2[s Y(s) − y(0⁻)] + Y(s) = 1/(s − 3)

Applying the initial conditions:

    [s² − 2s + 1] Y(s) = 1/(s − 3)

and so the Laplace transform of y(t) is:

    Y(s) = 1/((s − 1)²(s − 3))

Expanding as partial fractions:

    1/((s − 1)²(s − 3)) = A/(s − 1) + B/(s − 1)² + C/(s − 3)

we find A = −1/4, B = −1/2, and C = 1/4:

    Y(s) = −(1/4)·1/(s − 1) − (1/2)·1/(s − 1)² + (1/4)·1/(s − 3)

From the transform table in Appendix E we obtain the inverse transform:

    y(t) = −(1/4) eᵗ − (1/2) t eᵗ + (1/4) e^{3t}

Example 9.3-16

Determine the solution of the differential equation:

    y″(t) − 3y′(t) + 2y(t) = 2 eᵗ cos t

with initial conditions: y(0) = 0, y′(0) = 0.

Solution:

We first transform the entire ODE:

    L{y″(t)} − 3 L{y′(t)} + 2 L{y(t)} = 2 L{eᵗ cos t}

From Propositions 3.1-1 and 3.1-2 we have:

    [s² Y(s) − s y(0⁻) − y′(0⁻)] − 3[s Y(s) − y(0⁻)] + 2 Y(s) = 2(s − 1)/((s − 1)² + 1)

Applying the initial conditions:

    [s² − 3s + 2] Y(s) = 2(s − 1)/((s − 1)² + 1)

and so the Laplace transform of y(t) is:

    Y(s) = 2(s − 1)/([(s − 1)² + 1][s² − 3s + 2])

or

    Y(s) = 2(s − 1)/([(s − 1)² + 1](s − 1)(s − 2)) = 2/([(s − 1)² + 1](s − 2))

Expanding as partial fractions:

    2/([(s − 1)² + 1](s − 2)) = (A s + B)/[(s − 1)² + 1] + C/(s − 2)

we find A = −1, B = 0, and C = 1:

    Y(s) = −s/[(s − 1)² + 1] + 1/(s − 2)

or

    Y(s) = −(s − 1)/[(s − 1)² + 1] − 1/[(s − 1)² + 1] + 1/(s − 2)

From the transform table in Appendix E we obtain the inverse transform:

    y(t) = −eᵗ cos t − eᵗ sin t + e^{2t}

Example 9.3-17

Determine the solution of the differential equation:

    y‴(t) − y″(t) + y′(t) − y(t) = 0

with initial conditions: y(0) = 2, y′(0) = 2, y″(0) = 4.

Solution:

We first transform the entire ODE:

    L{y‴(t)} − L{y″(t)} + L{y′(t)} − L{y(t)} = 0

From Propositions 3.1-1, 3.1-2, and 3.1-3 we have:

    [s³ Y(s) − s² y(0⁻) − s y′(0⁻) − y″(0⁻)] − [s² Y(s) − s y(0⁻) − y′(0⁻)] + [s Y(s) − y(0⁻)] − Y(s) = 0

Applying the initial conditions:

    [s³ Y(s) − 2s² − 2s − 4] − [s² Y(s) − 2s − 2] + [s Y(s) − 2] − Y(s) = 0

or

    [s³ − s² + s − 1] Y(s) = 2s² + 4

and so the Laplace transform of y(t) is:

    Y(s) = (2s² + 4)/((s − 1)(s² + 1)) = 2(s² + 2)/((s − 1)(s² + 1))

Since s² + 2 = (s² + 1) + 1, we can write:

    Y(s) = 2[1/(s − 1) + 1/((s − 1)(s² + 1))]

Expanding as partial fractions:

    1/((s − 1)(s² + 1)) = A/(s − 1) + (B s + C)/(s² + 1)

we find A = 1/2, B = −1/2, and C = −1/2:

    Y(s) = 2/(s − 1) + 1/(s − 1) − (s + 1)/(s² + 1) = 3/(s − 1) − (s + 1)/(s² + 1)

From the transform table in Appendix E we obtain the inverse transform:

    y(t) = 3 eᵗ − cos(t) − sin(t)

Example 9.3-18

Determine the solution of the differential equation:

    y″(t) + y(t) = f(t)

with initial conditions: y(0) = 2, y′(0) = 0.

Solution:

We first transform the entire ODE:

    L{y″(t)} + L{y(t)} = L{f(t)}

From Proposition 3.1-2 we have:

    [s² Y(s) − s y(0⁻) − y′(0⁻)] + Y(s) = F(s)

Applying the initial conditions:

    [s² Y(s) − 2s] + Y(s) = F(s)

or

    [s² + 1] Y(s) = 2s + F(s)

and so the Laplace transform of y(t) is:

    Y(s) = 2s/(s² + 1) + F(s)/(s² + 1)

The second term on the right is the product of two transforms. Therefore using the transform tables in Appendices E and F we obtain the inverse transform:

    y(t) = 2 cos t + f(t) ∗ sin t

or

    y(t) = 2 cos t + ∫₀ᵗ f(τ) sin(t − τ) dτ

9.4  SOLVING LINEAR ODES HAVING DISCONTINUOUS FORCING FUNCTIONS

Laplace transforms provide an easy way to evaluate linear nonhomogeneous ODEs that have a discontinuous forcing function. This is one of the great advantages of using the Laplace transform to solve ODEs. The unit step function and the unit impulse function can both be used to represent some discontinuous forcing functions.

Example 9.4-1

Determine the solution of the differential equation:

    a f″(t) + b f′(t) + c f(t) = U(t)

with initial conditions: f(0) = f′(0) = 0.

Solution:

We first transform the entire ODE:

    a L{f″(t)} + b L{f′(t)} + c L{f(t)} = L{U(t)}

From Proposition 9.3-1 we have:

    F(s) = [(a s + b) f(0) + a f′(0)]/(a s² + b s + c) + L{U(t)}/(a s² + b s + c) = L{U(t)}/(a s² + b s + c)

The Laplace transform of f(t) is then:

    L{f(t)} = 1/(s(a s² + b s + c))

and so the inverse transform is:

    f(t) = L⁻¹{1/(s(a s² + b s + c))}

Example 9.4-2

Determine the solution of the differential equation:

    y′(t) − 3y(t) = U(t)

with initial condition: y(0) = 1.

Solution:

We first transform the entire ODE:

    L{y′(t)} − 3 L{y(t)} = L{U(t)}

From Proposition 3.1-1 we have:

    [s Y(s) − y(0⁻)] − 3 Y(s) = 1/s

Applying the initial condition:

    [s − 3] Y(s) = 1 + 1/s

and so the Laplace transform of y(t) is:

    Y(s) = 1/(s − 3) + 1/(s(s − 3))

Expanding as partial fractions:

    1/(s(s − 3)) = A/(s − 3) + B/s

we find A = 1/3 and B = −1/3:

    Y(s) = 1/(s − 3) + (1/3)·1/(s − 3) − (1/3)·1/s

From the transform table in Appendix E, we obtain the inverse transform:

    y(t) = e^{3t} + (1/3) e^{3t} − (1/3) U(t) = (4/3) e^{3t} − 1/3

Example 9.4-3

Determine the solution of the differential equation:

    y″(t) + 4y(t) = δ(t − π/2)

with initial conditions: y(0) = y′(0) = 0.

Solution:

We first transform the entire ODE:

    L{y″(t)} + 4 L{y(t)} = L{δ(t − π/2)}

From Proposition 3.1-2 we have:

    [s² Y(s) − s y(0⁻) − y′(0⁻)] + 4 Y(s) = e^{−πs/2}

Applying the initial conditions:

    [s² + 4] Y(s) = e^{−πs/2}

and so the Laplace transform of y(t) is:

    Y(s) = e^{−πs/2}/(s² + 4) = (1/2)·2 e^{−πs/2}/(s² + 4)

From Proposition 5.2-1 and the transform table in Appendix E, we obtain the inverse transform:

    y(t) = (1/2) sin[2(t − π/2)] U(t − π/2) = −(1/2) sin(2t) U(t − π/2)

Example 9.4-4

Determine the solution of the differential equation:

    y″(t) + 3y′(t) + 2y(t) = δ(t − 4)

with initial conditions: y(0) = y′(0) = 0.

Solution:

We first transform the entire ODE:

    L{y″(t)} + 3 L{y′(t)} + 2 L{y(t)} = L{δ(t − 4)}

From Propositions 3.1-1 and 3.1-2 we have:

    [s² Y(s) − s y(0⁻) − y′(0⁻)] + 3[s Y(s) − y(0⁻)] + 2 Y(s) = e^{−4s}

Applying the initial conditions:

    [s² + 3s + 2] Y(s) = e^{−4s}

and so the Laplace transform of y(t) is:

    Y(s) = e^{−4s}·1/((s + 2)(s + 1)) = e^{−4s} [1/(s + 1) − 1/(s + 2)]

From Proposition 5.2-1 and the transform table in Appendix E we obtain the inverse transform:

    y(t) = [e^{−(t−4)} − e^{−2(t−4)}] U(t − 4)
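Impulsive forcing functions are handled by exactly the same steps. The following is a sketch, not part of the original text, using sympy (assumed available), reproducing Example 9.4-4 (y″ + 3y′ + 2y = δ(t − 4) with zero initial conditions).

    import sympy as sp

    t, s = sp.symbols('t s', positive=True)
    Y = sp.Symbol('Y')

    # Transform of the forcing function delta(t - 4); initial conditions are zero.
    F = sp.laplace_transform(sp.DiracDelta(t - 4), t, s, noconds=True)   # exp(-4*s)

    # Transformed equation (s^2 + 3s + 2) Y(s) = exp(-4 s), solved for Y(s).
    Ysol = sp.solve(sp.Eq(s**2*Y + 3*s*Y + 2*Y, F), Y)[0]

    # Inverse transform; the t-shift produces the Heaviside factor U(t - 4).
    y = sp.inverse_laplace_transform(Ysol, s, t)
    print(sp.simplify(y))   # equivalent to (exp(-(t-4)) - exp(-2*(t-4)))*Heaviside(t-4), possibly rearranged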

Example 9.4-5

Determine the solution of the differential equation:

    y″(t) + 2y′(t) + 5y(t) = δ(t − 1)

with initial conditions: y(0) = y′(0) = 0.

Solution:

We first transform the entire ODE:

    L{y″(t)} + 2 L{y′(t)} + 5 L{y(t)} = L{δ(t − 1)}

From Propositions 3.1-1 and 3.1-2 we have:

    [s² Y(s) − s y(0⁻) − y′(0⁻)] + 2[s Y(s) − y(0⁻)] + 5 Y(s) = e^{−s}

Applying the initial conditions:

    [s² + 2s + 5] Y(s) = e^{−s}

or

    Y(s) = e^{−s}·1/(s² + 2s + 5)

The denominator is an irreducible quadratic, and so the next step is to complete the square:

    Y(s) = e^{−s}·1/((s + 1)² + 4)

From Proposition 5.2-1 and the transform table in Appendix E we obtain the inverse transform:

    y(t) = (1/2) U(t − 1) e^{−(t−1)} sin[2(t − 1)]

9.5  SOLVING SIMULTANEOUS ODES USING LAPLACE TRANSFORMS

The Laplace transform is also very useful for obtaining the solution of a system of simultaneous linear nonhomogeneous ODEs having constant coefficients and closed-form solutions. The general procedure for obtaining such solutions consists of the following:

1.  The entire system of ODEs including forcing functions is first transformed using the Laplace transform.

2.  The initial conditions are applied to the transformed equations.

3.  The system of algebraic equations is solved for the transformed dependent variables.

4.  The transformed dependent variable solutions are rewritten using partial fractions if necessary.

5.  The inverse Laplace transform of each of the terms in the partial fraction expansions of the transformed solutions is determined.

6.  By this process the solutions to the original ODEs are obtained.

Example 9.5-1

Solve the following system of differential equations:

    ẋ(t) + x(t) − y(t) = 0
    ẏ(t) − 2x(t) = 0

with initial conditions: x(0) = 0, y(0) = 2.

Solution:

We first transform the entire ODEs:

    L{ẋ(t)} + L{x(t)} − L{y(t)} = 0
    L{ẏ(t)} − 2 L{x(t)} = 0

From Proposition 3.1-1 we have:

    [s X(s) − x(0⁻)] + X(s) − Y(s) = 0
    [s Y(s) − y(0⁻)] − 2 X(s) = 0

Applying the initial conditions:

    (s + 1) X(s) − Y(s) = 0
    −2 X(s) + s Y(s) = 2

Solving for X(s) and Y(s) using Cramer's rule:

    X(s) = |0  −1; 2  s| / |s+1  −1; −2  s| = 2/(s(s + 1) − 2) = 2/(s² + s − 2) = 2/((s − 1)(s + 2))

    Y(s) = |s+1  0; −2  2| / |s+1  −1; −2  s| = 2(s + 1)/(s² + s − 2) = 2(s + 1)/((s − 1)(s + 2))

Expanding as partial fractions:

    2/((s − 1)(s + 2)) = A/(s − 1) + B/(s + 2),  we find A = 2/3 and B = −2/3
    2(s + 1)/((s − 1)(s + 2)) = A/(s − 1) + B/(s + 2),  we find A = 4/3 and B = 2/3

so that:

    X(s) = (2/3)·1/(s − 1) − (2/3)·1/(s + 2)
    Y(s) = (4/3)·1/(s − 1) + (2/3)·1/(s + 2)

From the transform table in Appendix E we obtain the inverse transforms:

    x(t) = (2/3) eᵗ − (2/3) e^{−2t}
    y(t) = (4/3) eᵗ + (2/3) e^{−2t}

Example 9.5-2

Solve the following system of differential equations:

    ẋ(t) + y(t) = 2
    ẏ(t) − x(t) = 0

with initial conditions: x(0) = −1, y(0) = 1.

Solution:

We first transform the entire ODEs:

    L{ẋ(t)} + L{y(t)} = L{2}
    L{ẏ(t)} − L{x(t)} = 0

From Proposition 3.1-1 we have:

    [s X(s) − x(0⁻)] + Y(s) = 2/s
    [s Y(s) − y(0⁻)] − X(s) = 0

Applying the initial conditions:

    s X(s) + Y(s) = (2 − s)/s
    −X(s) + s Y(s) = 1

Solving for X(s) and Y(s) using Cramer's rule (the determinant of the system is s² + 1):

    X(s) = [(2 − s) − 1]/(s² + 1) = 1/(s² + 1) − s/(s² + 1)

    Y(s) = [s + (2 − s)/s]/(s² + 1) = s/(s² + 1) + 2/(s(s² + 1)) − 1/(s² + 1)

Expanding as partial fractions:

    2/(s(s² + 1)) = A/s + (B s + C)/(s² + 1)

we find A = 2, B = −2, and C = 0:

    Y(s) = s/(s² + 1) + 2/s − 2s/(s² + 1) − 1/(s² + 1) = 2/s − s/(s² + 1) − 1/(s² + 1)

From the transform table in Appendix E we obtain the inverse transforms:

    x(t) = sin t − cos t
    y(t) = 2 − cos t − sin t

Example 9.5-3

Solve the following system of differential equations:

    ẋ(t) + 2y(t) = 0
    2ẏ(t) − x(t) = 1

with initial conditions: x(0) = 0, y(0) = 0.

Solution:

We first transform the entire ODEs:

    L{ẋ(t)} + 2 L{y(t)} = 0
    2 L{ẏ(t)} − L{x(t)} = L{1}

From Proposition 3.1-1 we have:

    [s X(s) − x(0⁻)] + 2 Y(s) = 0
    2[s Y(s) − y(0⁻)] − X(s) = 1/s

Applying the initial conditions:

    s X(s) + 2 Y(s) = 0
    −X(s) + 2s Y(s) = 1/s

Solving for X(s) and Y(s) using Cramer's rule (the determinant of the system is 2s² + 2 = 2(s² + 1)):

    X(s) = (0 − 2/s)/(2(s² + 1)) = −1/(s(s² + 1))

    Y(s) = (1 − 0)/(2(s² + 1)) = 1/(2(s² + 1))

Expanding as partial fractions:

    −1/(s(s² + 1)) = A/s + (B s + C)/(s² + 1)

we find A = −1, B = 1, and C = 0:

    X(s) = −1/s + s/(s² + 1)

From the transform table in Appendix E we obtain the inverse transforms:

    x(t) = −1 + cos t
    y(t) = (1/2) sin t

Example 9.5-4

Solve the following system of differential equations:

    ẋ(t) − 2x(t) + y(t) = 0
    ẏ(t) − 2y(t) − x(t) = 0

with initial conditions: x(0) = 1, y(0) = 0.

Solution:

We first transform the entire ODEs:

    L{ẋ(t)} − 2 L{x(t)} + L{y(t)} = 0
    L{ẏ(t)} − 2 L{y(t)} − L{x(t)} = 0

From Proposition 3.1-1 we have:

    [s X(s) − x(0⁻)] − 2 X(s) + Y(s) = 0
    [s Y(s) − y(0⁻)] − 2 Y(s) − X(s) = 0

Applying the initial conditions:

    (s − 2) X(s) + Y(s) = 1
    −X(s) + (s − 2) Y(s) = 0

Solving for X(s) and Y(s) using Cramer's rule:

    X(s) = (s − 2)/((s − 2)² + 1)
    Y(s) = 1/((s − 2)² + 1)

From the transform table in Appendix E we obtain the inverse transforms:

    x(t) = e^{2t} cos(t)
    y(t) = e^{2t} sin(t)

Example 9.5-5

Solve the following system of differential equations:

    x′(t) − 2x(t) + y(t) = 0
    y′(t) − 2y(t) − x(t) = 0

with initial conditions: x(0) = 1, y(0) = 0.

Solution:

We first transform the entire ODEs:

    L{x′(t)} − 2 L{x(t)} + L{y(t)} = 0
    L{y′(t)} − 2 L{y(t)} − L{x(t)} = 0

From Proposition 3.1-1 we have:

    [s X(s) − x(0⁻)] − 2 X(s) + Y(s) = 0
    [s Y(s) − y(0⁻)] − 2 Y(s) − X(s) = 0

Applying the initial conditions:

    (s − 2) X(s) + Y(s) = 1
    −X(s) + (s − 2) Y(s) = 0

Solving for X(s) and Y(s) using Cramer's rule:

    X(s) = (s − 2)/((s − 2)² + 1)
    Y(s) = 1/((s − 2)² + 1)

From the transform table in Appendix E we obtain the inverse transforms:

    x(t) = e^{2t} cos(t)
    y(t) = e^{2t} sin(t)

Example 9.5-6

Solve the following system of differential equations:

    ẍ(t) − x(t) − y(t) = 0
    ÿ(t) + y(t) + x(t) = 0

with initial conditions: x(0) = 4, y(0) = −2, ẋ(0) = 0, and ẏ(0) = 0.

Solution:

We first transform the entire ODEs:

    L{ẍ(t)} − L{x(t)} − L{y(t)} = 0
    L{ÿ(t)} + L{y(t)} + L{x(t)} = 0

From Proposition 3.1-2 we have:

    [s² X(s) − s x(0⁻) − ẋ(0⁻)] − X(s) − Y(s) = 0
    [s² Y(s) − s y(0⁻) − ẏ(0⁻)] + Y(s) + X(s) = 0

Applying the initial conditions:

    [s² X(s) − 4s] − X(s) − Y(s) = 0
    [s² Y(s) + 2s] + Y(s) + X(s) = 0

or

    [s² − 1] X(s) − Y(s) = 4s
    X(s) + [s² + 1] Y(s) = −2s

Solving for X(s) and Y(s) using Cramer's rule (the determinant of the system is (s² − 1)(s² + 1) + 1 = s⁴):

    X(s) = [4s(s² + 1) − 2s]/s⁴ = 4/s + 2/s³
    Y(s) = [−2s(s² − 1) − 4s]/s⁴ = −2/s − 2/s³

From the transform table in Appendix E we obtain the inverse transforms:

    x(t) = 4 + t²
    y(t) = −2 − t²

Example 9.5-7

Solve the following system of differential equations:

    ẍ(t) + ÿ(t) = t²
    ẍ(t) − ÿ(t) = 4t

with initial conditions: x(0) = 2, y(0) = 0, ẋ(0) = 0, and ẏ(0) = 0.

Solution:

We first transform the entire ODEs:

    L{ẍ(t)} + L{ÿ(t)} = 2/s³
    L{ẍ(t)} − L{ÿ(t)} = 4/s²

From Proposition 3.1-2 we have:

    [s² X(s) − s x(0⁻) − ẋ(0⁻)] + [s² Y(s) − s y(0⁻) − ẏ(0⁻)] = 2/s³
    [s² X(s) − s x(0⁻) − ẋ(0⁻)] − [s² Y(s) − s y(0⁻) − ẏ(0⁻)] = 4/s²

Applying the initial conditions:

    [s² X(s) − 2s] + s² Y(s) = 2/s³
    [s² X(s) − 2s] − s² Y(s) = 4/s²

Subtracting the second equation from the first:

    2 s² Y(s) = 2/s³ − 4/s²

or

    Y(s) = 1/s⁵ − 2/s⁴

We then also have:

    [s² X(s) − 2s] + s²(1/s⁵ − 2/s⁴) = 2/s³

or

    X(s) = 2/s⁵ − 1/s⁵ + 2/s⁴ + 2/s = 1/s⁵ + 2/s⁴ + 2/s

From the transform table in Appendix E we obtain the inverse transforms:

    x(t) = t⁴/4! + (2/3!) t³ + 2
    y(t) = t⁴/4! − (2/3!) t³

Example 9.5-8

Solve the following system of differential equations:

    ẍ(t) − x(t) + 5ẏ(t) = 2t
    ÿ(t) − 4y(t) − 2ẋ(t) = −4

with initial conditions: x(0) = 0, y(0) = 0, ẋ(0) = 0, and ẏ(0) = 0.

Solution:

We first transform the entire ODEs:

    L{ẍ(t)} − L{x(t)} + 5 L{ẏ(t)} = 2/s²
    L{ÿ(t)} − 4 L{y(t)} − 2 L{ẋ(t)} = −4/s

From Propositions 3.1-1 and 3.1-2 we have:

    [s² X(s) − s x(0⁻) − ẋ(0⁻)] − X(s) + 5[s Y(s) − y(0⁻)] = 2/s²
    [s² Y(s) − s y(0⁻) − ẏ(0⁻)] − 4 Y(s) − 2[s X(s) − x(0⁻)] = −4/s

Applying the initial conditions:

    [s² − 1] X(s) + 5s Y(s) = 2/s²
    −2s X(s) + [s² − 4] Y(s) = −4/s

Solving for X(s) and Y(s) using Cramer's rule (the determinant of the system is (s² − 1)(s² − 4) + 10s² = (s² + 1)(s² + 4)):

    X(s) = [(2/s²)(s² − 4) + 20]/((s² + 1)(s² + 4)) = (22s² − 8)/(s²(s² + 1)(s² + 4))

    Y(s) = [−(4/s)(s² − 1) + 4/s]/((s² + 1)(s² + 4)) = (−4s² + 8)/(s(s² + 1)(s² + 4))

Therefore:

    X(s) = −2/s² + 10/(s² + 1) − 8/(s² + 4)
    Y(s) = 2/s − 4s/(s² + 1) + 2s/(s² + 4)

From the transform table in Appendix E we obtain the inverse transforms:

    x(t) = −2t + 10 sin(t) − 4 sin(2t)
    y(t) = 2 − 4 cos(t) + 2 cos(2t)
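The same steps apply mechanically to systems of ODEs. The following is a sketch, not part of the original text, using sympy (assumed available), applied to Example 9.5-1: transform, apply the initial conditions, solve the pair of algebraic equations for X(s) and Y(s), expand in partial fractions, and invert.

    import sympy as sp

    t, s = sp.symbols('t s', positive=True)
    X, Y = sp.symbols('X Y')

    # Transformed equations of Example 9.5-1 with x(0) = 0 and y(0) = 2 applied.
    eq1 = sp.Eq((s*X - 0) + X - Y, 0)
    eq2 = sp.Eq((s*Y - 2) - 2*X, 0)

    sol = sp.solve([eq1, eq2], [X, Y])
    x = sp.inverse_laplace_transform(sp.apart(sol[X], s), s, t)
    y = sp.inverse_laplace_transform(sp.apart(sol[Y], s), s, t)

    print(sp.simplify(x))   # 2*exp(t)/3 - 2*exp(-2*t)/3
    print(sp.simplify(y))   # 4*exp(t)/3 + 2*exp(-2*t)/3

The printed results agree with the inverse transforms found above for Example 9.5-1, and, as noted in Section 9.2, they can be verified by substituting them back into the original system.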

Chapter 10

Solution of Partial Differential Equations

    F(x, s) = ∫_{0⁻}^{∞} f(x, t) e^{−st} dt

!

In this chapter we will consider the application of Laplace

higher than two, the Laplace transform can be repetitively

transform techniques to the solution of constant coefficient

applied to obtain solutions.

linear partial differential equations. We will also consider the

!

use of partial differentiation to determine Laplace transforms.

a function of two independent variables v ( x, t ) is:

10.1!

∂v ∂v ∂2 v ∂2 v ∂2 v ! c1 2 + 2 c2 + c3 2 + c4 + c5 + c6 v = f ( x, t ) ! (10.1-1) ∂x ∂t ∂x ∂t ∂x ∂t

!

PARTIAL DIFFERENTIAL EQUATIONS

A differential equation describes a process involving the

change in some variable known as a dependent variable. If the dependent variable is a function of more than one independent variable, the differential equation is called a partial differential equation (PDE). A solution of a partial differential equation is any function of the independent variables that satisfies the differential equation. !

The Laplace transform of a linear PDE is accomplished by

considering the PDE to be a function of only one independent variable so that the Laplace transform operates only upon that

The most general form of a second order linear PDE that is

where f ( x, t ) is the forcing function or driving function. The coefficients ci can be functions of the two independent variables x and t , although we will only consider PDEs having constant coefficients. Second order linear PDEs are classified as follows: !

Elliptic!

c22 − 4 c1 c3 < 0 !

(10.1-2)

!

Hyperbolic!

c22 − 4 c1 c3 > 0 !

(10.1-3)

!

Parabolic!

c22 − 4 c1 c3 = 0 !

(10.1-4)

one variable. The other independent variables are then treated

!

as parameters, and so are considered constants for the given

PDE, it is necessary that:

transformation. The Laplace transform reduces the number of independent variables in the PDE. !

For a second order PDE only one independent variable

will remain after solution of the Laplace transformed PDE, and so the PDE has become an ODE. For some linear PDEs of order

For the Laplace transform to be useful in the solution of a

1.! The PDE must be linear. 2.! The coefficients of the PDE must be constants. 3.! One or more of the independent variables must have a range from 0 to ∞ . 163

Proposition 10.1-1:

If f(x, t) is a continuous function of exponential order on [0, ∞) and if the first partial derivatives of f(x, t) are piecewise-smooth functions of exponential order on [0, ∞), then:

    L{∂f(x, t)/∂t} = s F(x, s) − f(x, 0⁻)    (10.1-5)

where L{f(x, t)} = F(x, s).

Proof:

From the definition of the Laplace transform we can write:

    L{∂f(x, t)/∂t} = ∫_{0⁻}^{∞} [∂f(x, t)/∂t] e^{−st} dt    (10.1-6)

Since f(x, t) is a continuous function of exponential order on [0⁻, ∞), and its first partial derivatives are piecewise-smooth functions of exponential order on [0⁻, ∞), we can integrate by parts considering x as a parameter. Letting:

    u = e^{−st}        [∂f(x, t)/∂t] dt = dv    (10.1-7)

we have:

    du = −s e^{−st} dt        v = f(x, t)    (10.1-8)

and so:

    L{∂f(x, t)/∂t} = f(x, t) e^{−st} |_{t=0⁻}^{∞} + s ∫_{0⁻}^{∞} f(x, t) e^{−st} dt    (10.1-9)

or:

    L{∂f(x, t)/∂t} = lim_{t→∞} f(x, t) e^{−st} − f(x, 0⁻) + s ∫_{0⁻}^{∞} f(x, t) e^{−st} dt    (10.1-10)

Since f(x, t) is of exponential order, the limit vanishes for Re(s) sufficiently large. Therefore:

    L{∂f(x, t)/∂t} = s F(x, s) − f(x, 0⁻)    (10.1-11)

∎
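Proposition 10.1-1 can be checked symbolically for a sample function. The following SymPy sketch is illustrative only (the choice f(x, t) = cos(xt) is made here for convenience):

    from sympy import symbols, cos, diff, laplace_transform, simplify

    x, t, s = symbols('x t s', positive=True)

    f = cos(x*t)
    F = laplace_transform(f, t, s, noconds=True)            # s/(s**2 + x**2)
    lhs = laplace_transform(diff(f, t), t, s, noconds=True)  # transform of the t-derivative
    rhs = s*F - f.subs(t, 0)                                 # s*F(x,s) - f(x,0)
    print(simplify(lhs - rhs))                               # 0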

Proposition 10.1-2:

If f(x, t) and its first partial derivatives are continuous functions of exponential order on [0⁻, ∞), and if the second partial derivatives of f(x, t) are piecewise-smooth functions of exponential order on [0⁻, ∞), then:

    L{∂²f(x, t)/∂t²} = s² F(x, s) − s f(x, 0⁻) − ∂f(x, 0⁻)/∂t    (10.1-12)

where L{f(x, t)} = F(x, s).

Proof:

From the definition of the Laplace transform we can write:

    L{∂²f(x, t)/∂t²} = ∫_{0⁻}^{∞} [∂²f(x, t)/∂t²] e^{−st} dt    (10.1-13)

Since f(x, t) and its first partial derivatives are continuous functions of exponential order on [0⁻, ∞), and the second partial derivatives of f(x, t) are piecewise-smooth functions of exponential order on [0⁻, ∞), we can integrate by parts. Letting:

    u = e^{−st}        dv = [∂²f(x, t)/∂t²] dt    (10.1-14)

    du = −s e^{−st} dt        v = ∂f(x, t)/∂t    (10.1-15)

we have:

    L{∂²f(x, t)/∂t²} = [∂f(x, t)/∂t] e^{−st} |_{t=0⁻}^{∞} + s ∫_{0⁻}^{∞} [∂f(x, t)/∂t] e^{−st} dt    (10.1-16)

and so:

    L{∂²f(x, t)/∂t²} = lim_{t→∞} [∂f(x, t)/∂t] e^{−st} − ∂f(x, 0⁻)/∂t + s ∫_{0⁻}^{∞} [∂f(x, t)/∂t] e^{−st} dt    (10.1-17)

and so:

    L{∂²f(x, t)/∂t²} = s L{∂f(x, t)/∂t} − ∂f(x, 0⁻)/∂t    (10.1-18)

We can use Proposition 10.1-1 to write:

    L{∂²f(x, t)/∂t²} = s [s F(x, s) − f(x, 0⁻)] − ∂f(x, 0⁻)/∂t    (10.1-19)

or:

    L{∂²f(x, t)/∂t²} = s² F(x, s) − s f(x, 0⁻) − ∂f(x, 0⁻)/∂t    (10.1-20)

∎

Proposition 10.1-3:

If f(x, t) is a continuous function of exponential order on [0, ∞) and if the first partial derivatives of f(x, t) are piecewise-smooth functions of exponential order on [0, ∞), then:

    L{∂f(x, t)/∂x} = dF(x, s)/dx    (10.1-21)

    L{∂²f(x, t)/∂x²} = d²F(x, s)/dx²    (10.1-22)

Proof:

From the definition of the Laplace transform we can write:

    L{∂f(x, t)/∂x} = ∫_{0⁻}^{∞} [∂f(x, t)/∂x] e^{−st} dt    (10.1-23)

or since x is independent of t:

    L{∂f(x, t)/∂x} = d/dx ∫_{0⁻}^{∞} f(x, t) e^{−st} dt = dF(x, s)/dx    (10.1-24)

Similarly we have:

    L{∂²f(x, t)/∂x²} = d²/dx² ∫_{0⁻}^{∞} f(x, t) e^{−st} dt = d²F(x, s)/dx²    (10.1-25)

∎

Example 10.1-1

Determine:

    L{∂f(x, t)/∂t} = L{∂cos(xt)/∂t}

Solution:

From Proposition 10.1-1 we have:

    L{∂f(x, t)/∂t} = s F(x, s) − f(x, 0⁻)

and so:

    L{∂cos(xt)/∂t} = s [s/(s² + x²)] − 1 = −x²/(s² + x²)

10.2  SOLVING PDES USING LAPLACE TRANSFORMS

The Laplace transform is very useful for obtaining closed-form solutions to linear PDEs having constant coefficients. The general procedure for using the Laplace transform to obtain solutions of linear PDEs consists of the following:

1.  The entire PDE is first transformed using the Laplace transform for one of the independent variables that has a range from 0 to ∞.
2.  The initial and boundary conditions are applied to the transformed equation.
3.  The new PDE or ODE is solved using classical methods for the transformed dependent variable.
4.  The transformed dependent variable solution is rewritten using partial fractions if necessary.
5.  The inverse Laplace transform of each of the terms in the partial fraction expansion of the transformed solution is determined.
6.  The initial and boundary conditions are applied to the solution.
7.  For PDEs of order higher than two, the above procedure can be repeated.
8.  In this way the solution to the original PDE is obtained.

Example 10.2-1

Determine the solution of the partial differential equation:

    ∂v(x, t)/∂x + ∂v(x, t)/∂t = 2x

where the initial and boundary conditions are:

    v(x, 0) = 0        v(0, t) = 0

Solution:

From Proposition 10.1-1 we have:

    L{∂v(x, t)/∂t} = s V(x, s) − v(x, 0⁻)

Applying the initial condition:

    L{∂v(x, t)/∂t} = s V(x, s)

We also have from Proposition 10.1-3:

    L{∂v(x, t)/∂x} = ∂V(x, s)/∂x = dV(x, s)/dx

and so the transformed PDE is:

    dV(x, s)/dx + s V(x, s) = 2x/s

Since this is a linear ODE in what is known as the standard form, an integrating factor exists for it. Multiplying by the integrating factor μ = e^{∫ s dx} = e^{sx}, we have:

    [dV(x, s)/dx] e^{sx} + s V(x, s) e^{sx} = 2 e^{sx} x/s

or

    d/dx [V(x, s) e^{sx}] = 2 e^{sx} x/s

Integrating, we obtain:

    V(x, s) e^{sx} = (2/s) ∫ x e^{sx} dx + c(s)

where c(s) is an integration constant. Integrating by parts, we have:

    V(x, s) = (2 e^{−sx}/s) [x e^{sx}/s − e^{sx}/s²] + c(s) e^{−sx}

or

    V(x, s) = 2x/s² − 2/s³ + c(s) e^{−sx}

Since a boundary condition is v(0, t) = 0, we must have V(0, s) = 0, and so c(s) = 2/s³. Therefore:

    V(x, s) = 2x/s² − 2/s³ + (2/s³) e^{−sx}

From the transform tables in Appendix E and F we obtain the solution:

    v(x, t) = 2xt − t² + (t − x)² U(t − x)
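The solution above can be checked on each side of the front t = x, where the unit step function is constant. The following SymPy sketch is illustrative only:

    from sympy import symbols, diff, simplify

    x, t = symbols('x t', positive=True)

    v_ahead = 2*x*t - t**2                    # region t < x, where U(t - x) = 0
    v_behind = 2*x*t - t**2 + (t - x)**2      # region t > x, where U(t - x) = 1

    for v in (v_ahead, v_behind):
        print(simplify(diff(v, x) + diff(v, t) - 2*x))   # 0 in each region

    print(v_ahead.subs(t, 0))                 # 0, the initial condition v(x,0) = 0
    print(simplify(v_behind.subs(x, 0)))      # 0, the boundary condition v(0,t) = 0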

Example 10.2-2

Determine the solution of the partial differential equation:

    ∂v(x, t)/∂x = ∂v(x, t)/∂t − 2 v(x, t)

where lim_{x→∞} v(x, t) = 0 and where the initial condition is:

    v(x, 0) = e^{−x}

Solution:

From Proposition 10.1-1 we have:

    L{∂v(x, t)/∂t} = s V(x, s) − v(x, 0⁻)

Applying the initial condition:

    L{∂v(x, t)/∂t} = s V(x, s) − e^{−x}

We also have from Proposition 10.1-3:

    L{∂v(x, t)/∂x} = ∂V(x, s)/∂x = dV(x, s)/dx

and so the transformed PDE is:

    dV(x, s)/dx = s V(x, s) − e^{−x} − 2 V(x, s) = (s − 2) V(x, s) − e^{−x}

or

    dV(x, s)/dx + (2 − s) V(x, s) = −e^{−x}

Multiplying by the integrating factor μ = e^{∫ (2−s) dx} = e^{(2−s)x}:

    [dV(x, s)/dx] e^{(2−s)x} + (2 − s) V(x, s) e^{(2−s)x} = −e^{−x} e^{(2−s)x}

or

    d/dx [V(x, s) e^{(2−s)x}] = −e^{(1−s)x}

Integrating this equation, we obtain:

    V(x, s) e^{(2−s)x} = e^{(1−s)x}/(s − 1) + c(s)

where c(s) is an integration constant. We then have:

    V(x, s) = e^{(1−s)x} e^{−(2−s)x}/(s − 1) + c(s) e^{(s−2)x}

or

    V(x, s) = e^{−x}/(s − 1) + c(s) e^{(s−2)x}

Since the boundary condition is lim_{x→∞} v(x, t) = 0, we have:

    lim_{x→∞} V(x, s) = lim_{x→∞} ∫_{0⁻}^{∞} v(x, t) e^{−st} dt = ∫_{0⁻}^{∞} lim_{x→∞} v(x, t) e^{−st} dt = 0

and so c(s) = 0. Therefore:

    V(x, s) = e^{−x}/(s − 1)

From the transform table in Appendix E we obtain the solution:

    v(x, t) = e^{−x} e^{t}
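A direct check of this result is straightforward. The following SymPy sketch is illustrative only:

    from sympy import symbols, exp, diff, simplify

    x, t = symbols('x t')
    v = exp(t - x)

    print(simplify(diff(v, x) - (diff(v, t) - 2*v)))   # 0, so v_x = v_t - 2v
    print(v.subs(t, 0))                                # exp(-x), the initial condition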

Example 10.2-3

Determine the solution of the one-dimensional wave equation:

    ∂²v(x, t)/∂x² = (1/k²) ∂²v(x, t)/∂t²

where lim_{x→∞} v(x, t) = 0 and where the initial and boundary conditions are:

    v(x, 0) = 0        ∂v(x, 0)/∂t = 0        v(0, t) = f(t)

Solution:

From Proposition 10.1-2 we have:

    L{∂²v(x, t)/∂t²} = s² V(x, s) − s v(x, 0⁻) − ∂v(x, 0⁻)/∂t

Applying the initial conditions:

    L{∂²v(x, t)/∂t²} = s² V(x, s)

We also have from Proposition 10.1-3:

    L{∂²v(x, t)/∂x²} = ∂²V(x, s)/∂x²

and so the transformed PDE is:

    ∂²V(x, s)/∂x² = (s²/k²) V(x, s)

or

    V(x, s) = c₁(s) e^{(x/k)s} + c₂(s) e^{−(x/k)s}

We also have:

    lim_{x→∞} V(x, s) = lim_{x→∞} ∫_{0⁻}^{∞} v(x, t) e^{−st} dt = ∫_{0⁻}^{∞} lim_{x→∞} v(x, t) e^{−st} dt = 0

Therefore we must have c₁(s) = 0 so that:

    V(x, s) = c₂(s) e^{−(x/k)s}

Since the boundary condition is v(0, t) = f(t) we have V(0, s) = F(s), and so c₂(s) = F(s):

    V(x, s) = F(s) e^{−(x/k)s}

From the second shift theorem (Proposition 5.2-1) we then have:

    v(x, t) = f(t − x/k) U(t − x/k)
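Behind the wavefront (t > x/k, where the unit step equals 1) the solution reduces to f(t − x/k), which satisfies the wave equation for any sufficiently smooth profile. The following SymPy sketch is illustrative only:

    from sympy import symbols, Function, diff, simplify

    x, t, k = symbols('x t k', positive=True)
    f = Function('f')                 # an arbitrary smooth profile

    v = f(t - x/k)
    residual = diff(v, x, 2) - diff(v, t, 2)/k**2
    print(simplify(residual))         # 0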

Example 10.2-4

Determine the solution of the equation:

    ∂v(x, t)/∂t = k ∂²v(x, t)/∂x²

where v(x, t) is bounded as x → ∞, and where the initial and boundary conditions are for x > 0:

    v(x, 0) = 0        v(0, t) = v₀

Solution:

From Proposition 10.1-1 we have:

    L{∂v(x, t)/∂t} = s V(x, s) − v(x, 0⁻)

Applying the initial condition:

    L{∂v(x, t)/∂t} = s V(x, s)

We also have from Proposition 10.1-3:

    L{∂²v(x, t)/∂x²} = ∂²V(x, s)/∂x²

and so the transformed PDE is:

    s V(x, s) = k ∂²V(x, s)/∂x²

or

    V(x, s) = c₁(s) e^{x √(s/k)} + c₂(s) e^{−x √(s/k)}

Since v(x, t) is bounded as x → ∞, we must have c₁(s) = 0 so that:

    V(x, s) = c₂(s) e^{−x √(s/k)}

Since the boundary condition is v(0, t) = v₀ we have:

    V(0, s) = ∫_{0⁻}^{∞} v(0, t) e^{−st} dt = v₀ ∫_{0⁻}^{∞} e^{−st} dt = v₀/s = c₂(s)

Therefore:

    V(x, s) = v₀ e^{−x √(s/k)} / s

From the transform table in Appendix E we obtain the solution:

    v(x, t) = v₀ erfc( x / (2√(kt)) )
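The complementary error function solution can be verified against the heat equation directly. The following SymPy sketch is illustrative only:

    from sympy import symbols, erfc, sqrt, diff, simplify

    x, t, k, v0 = symbols('x t k v0', positive=True)
    v = v0*erfc(x/(2*sqrt(k*t)))

    print(simplify(diff(v, t) - k*diff(v, x, 2)))   # 0, so v_t = k v_xx
    print(v.subs(x, 0))                             # v0, the boundary condition v(0,t) = v0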

10.3  OBTAINING LAPLACE TRANSFORMS USING PARTIAL DIFFERENTIATION

From equation (10.1-21) we have:

    L{∂f(x, t)/∂x} = dF(x, s)/dx    (10.3-1)

or letting x = a:

    L{∂f(a, t)/∂a} = ∂F(a, s)/∂a    (10.3-2)

We can use this equation to obtain the Laplace transform of new functions.

Example 10.3-1

Determine the Laplace transform of t cos(at) using the Laplace transform F(a, s) of f(a, t) = sin(at).

Solution:

We have:

    ∂f(a, t)/∂a = ∂(sin(at))/∂a = t cos(at)

and

    F(a, s) = a/(s² + a²)

From equation (10.3-2) we have:

    L{∂(sin(at))/∂a} = ∂/∂a [a/(s² + a²)]

or

    L{t cos(at)} = 1/(s² + a²) − 2a²/(s² + a²)² = (s² − a²)/(s² + a²)²
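The partial-differentiation result of Example 10.3-1 can be compared with a direct transform. The following SymPy sketch is illustrative only:

    from sympy import symbols, cos, diff, laplace_transform, simplify

    a, t, s = symbols('a t s', positive=True)

    F = a/(s**2 + a**2)                      # L{sin(a t)}
    lhs = diff(F, a)                         # differentiate F(a,s) with respect to a
    rhs = laplace_transform(t*cos(a*t), t, s, noconds=True)
    print(simplify(lhs - rhs))               # 0, both equal (s**2 - a**2)/(s**2 + a**2)**2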

Chapter 11
Complex Variables

    ∮_C f(z) dz = 2πi Σ_{n=1}^{k} Res[f(z), zₙ]

In the next chapter we will consider the inversion of the Laplace transform using residue integration. Since residue integration uses complex analysis methods, in this chapter we will present a brief review of some complex variable definitions and propositions. We will not prove most of the propositions. Proofs of these propositions can be found in any good book on complex variables. We will assume a knowledge of complex numbers.

11.1  CONTOURS IN THE COMPLEX PLANE

The integral of a complex function is not restricted to an interval of the x-axis, but can be evaluated on some path in the complex plane. The integral of a complex function is similar, therefore, to a line integral of a real function in the Cartesian plane.

We will consider the integration path in a complex plane to be a piecewise continuous curve. Such a curve can be defined as a set of points that can be represented by a piecewise continuous complex function (see Section 1.4.2). This complex function can be specified using a parameterization (such as z(t), where t is a real parameter). We then have:

    z(t) = x(t) + i y(t)    (11.1-1)

where x(t) and y(t) are real-valued continuous functions. The parameter t has definite limits a ≤ t ≤ b such that the curve proceeds from a point z₁ = z(a) to a point zₙ = z(b). The parameterization of the curve provides a direction to the curve.

A smooth curve is a continuous curve that has a continuous derivative z′(t) ≠ 0 at all points along the curve between the endpoints. A piecewise-smooth integration path C in the complex plane is known as a contour, and consists of a connected set of points z = x + i y. The integral of a complex function along such a path is referred to as a contour integral.

Open sets of points do not include any points that form a boundary to the set. We will assume that contours are all within regions of the complex plane consisting of open sets of points that are connected (any two points in the region can be connected by a polygonal path). Such regions are called domains. A simply connected domain is a domain without holes.

A δ neighborhood of a point z₀ is taken to be an open set of points in the form of a disk centered at the point z₀ such that:

    |z − z₀| < δ    (11.1-2)

The notation for the open δ-disk centered at z₀ is Dδ(z₀). An open disk is defined to be a disk that does not include any of its boundary points (see Figure 11.1-1).

Figure 11.1-1  Graphical depiction of the set of points forming the open δ-disk Dδ(z₀) about the point z₀. The disk's boundary circle is dashed to indicate that points on this circle are not in the disk.

Since contours are piecewise-smooth curves, each segment of a contour will be continuous and will have continuous derivatives at all points within the segment. Curves in the complex plane can be categorized as follows (see Figure 11.1-2):

    Curve C1 is smooth, simple, and not closed.
    Curve C2 is piecewise-smooth, simple, and not closed.
    Curve C3 is closed but not simple.
    Curve C4 is smooth, simple, and closed.

Figure 11.1-2  Curves in the complex plane.

Contours can then be defined as follows:

1.  A simple contour is a piecewise-smooth curve in the complex plane that does not intersect itself except possibly at its endpoints (see curves C1 and C4 in Figure 11.1-2).

2.  A closed contour is a contour in the complex plane whose initial and terminal points are the same point (see curves C3 and C4 in Figure 11.1-2).

3.  A simple closed contour is a simple contour in the complex plane whose initial and terminal points are the same point (see curve C4 in Figure 11.1-2).

4.  An open contour is any contour in the complex plane that is not closed (see curves C1 and C2 in Figure 11.1-2).

11.2  DERIVATIVES OF COMPLEX FUNCTIONS

Given some point z₀ in the complex z-plane, we will now consider another point z located within a neighborhood of z₀, where:

    Δz = z − z₀    (11.2-1)

and where we will define:

    Δz = (x − x₀) + i (y − y₀) = Δx + i Δy    (11.2-2)

so that z = z₀ + Δz. If f(z) is a complex function defined in the neighborhood of z₀, then the change in the function f(z) due to the change from point z₀ to the point z is Δf(z), where:

    Δf(z) = f(z) − f(z₀) = f(z₀ + Δz) − f(z₀)    (11.2-3)

By definition, the derivative f′(z₀) at a given point z₀ of a complex function f(z) is the limit:

    f′(z₀) = lim_{z→z₀} [f(z) − f(z₀)]/(z − z₀) = lim_{Δz→0} [f(z₀ + Δz) − f(z₀)]/Δz    (11.2-4)

provided that the limit exists. For the limit to exist f(z) must be bounded in a neighborhood of z₀. If the limit exists, f(z) is said to be differentiable at the point z₀. The derivative of f(z) is then specified with reference to the given point z₀. We must have Δf(z) = f(z) − f(z₀) = f(z₀ + Δz) − f(z₀) → 0 as Δz → 0 for the limit f′(z₀) to exist at the point z₀. Therefore f(z) must be continuous at point z₀.

For a function f(z) to be complex differentiable at a point z₀, there must then exist for any real number ε > 0 a real number δ > 0 such that:

    | [f(z) − f(z₀)]/(z − z₀) − f′(z₀) | < ε    when |z − z₀| < δ    (11.2-5)

or

    | Δf(z)/Δz − f′(z₀) | < ε    when |Δz| < δ    (11.2-6)

The number δ will generally depend on ε, but the derivative f′(z₀) is independent of both ε and Δz since the derivative represents the actual spatial rate of change in the function f(z) at the given point z₀. From equation (11.2-6) we have for some ε > 0:

    Δf(z) = f′(z₀) Δz + ε Δz    (11.2-7)

which gives us the mean value theorem:

    f(z) = f(z₀) + (z − z₀) f′(z₀) + ε (z − z₀)    (11.2-8)

From equations (11.2-4), (11.2-1), and (11.2-3) with Δz → 0, we can write the derivative f′(z₀) at a given point z₀ in the form:

    f′(z₀) = df(z)/dz |_{z=z₀} = lim_{Δz→0} Δf(z)/Δz = lim_{Δz→0} [f(z₀ + Δz) − f(z₀)]/Δz    (11.2-9)

11.2.1  EXISTENCE OF THE DERIVATIVE

For the limit f′(z₀) to exist at a point z₀, it is necessary that the same limit be determined as z₀ is approached from any direction in the z-plane. That is, the same limit must be obtained independent of the argument of Δz.

Since Δz = Δx + i Δy is a complex number, to have Δz → 0, we must have both Δx → 0 and Δy → 0. The limiting process for complex functions of a complex variable then involves both Δx and Δy. How Δz → 0 is dependent upon how both Δx → 0 and Δy → 0. The limiting process for complex functions of a complex variable is therefore two-dimensional. There are an infinity of directions of approach in the complex plane to a point z₀ as Δz → 0.

Since the derivative of a complex function f(z) at a point z₀ is independent of the direction to z₀ along which the derivative is taken, f(z) will be differentiable at z₀ only if the spatial rate of change in f(z) is the same no matter from what direction z₀ is approached.

The two-dimensional existence requirement of direction independence for the derivative of a complex function is a much more stringent condition that must be satisfied than is the analogous one-dimensional existence requirement for the derivative of a real-valued function. As a result, many relatively simple complex functions are not differentiable, and those that are differentiable have properties very different from their real counterparts.

In particular, complex functions have one property not commonly found in real functions. If a complex function f(z) has one derivative at a point, it will have an infinity of higher

order derivatives at the same point. By comparison, if a real function f(x) has one derivative at a point, it may have no, some, or many higher order derivatives at the same point.

We will now omit the subscript 0 from z₀, and we will use simply z to represent the point at which we wish to obtain the derivative:

    f′(z) = df(z)/dz = lim_{Δz→0} Δf(z)/Δz = lim_{Δz→0} [f(z + Δz) − f(z)]/Δz    (11.2-10)

11.2.2  CAUCHY-RIEMANN EQUATIONS

We will now develop two partial differential equations known as the Cauchy-Riemann equations that together provide the necessary and sufficient conditions for the existence of a derivative of a complex function at a point. We will consider the derivative f′(z) of a complex function w = f(z) at a given point z = x + i y in a domain D. Writing the function w = f(z) in the form:

    w = f(z) = u(z) + i v(z) = u(x, y) + i v(x, y)    (11.2-11)

the derivative is given by equation (11.2-4):

    f′(z) ≡ df(z)/dz = lim_{Δz→0} Δf(z)/Δz = lim_{Δz→0} [f(z + Δz) − f(z)]/Δz    (11.2-12)

If this derivative exists, then the limits of the real and imaginary parts of w = f(z) must exist:

    f′(z) = lim_{Δz→0} { [u(z + Δz) + i v(z + Δz)] − [u(z) + i v(z)] } / Δz    (11.2-13)

or

    f′(z) = lim_{Δx→0, Δy→0} { [u(x + Δx, y + Δy) + i v(x + Δx, y + Δy)] − [u(x, y) + i v(x, y)] } / (Δx + i Δy)    (11.2-14)

11.2.2.1  NECESSARY CONDITIONS FOR THE EXISTENCE OF A DERIVATIVE

If the derivative f′(z) given in equation (11.2-14) exists at a point z(x, y) in a domain D, the limits in this equation cannot depend upon the path of approach to z(x, y) as Δz → 0 since each limit must be unique. To examine the necessary conditions for the existence of the derivative f′(z), we will choose two different paths of approach within the domain D to the point z(x, y) as shown in Figure 11.2-1: a path parallel to the real axis (x) and a second path parallel to the imaginary axis (y).

Figure 11.2-1  Two paths of approach to a point z(x, y).

Letting Δz → 0 parallel to the x-axis so that Δy = 0 and Δz = Δx → 0 (see Figure 11.2-1), equation (11.2-14) becomes:

    f′(z) = lim_{Δx→0} { [u(x + Δx, y) − u(x, y)] + i [v(x + Δx, y) − v(x, y)] } / Δx    (11.2-15)

Since the derivative f′(z) exists, the limits of the real and imaginary parts of equation (11.2-15) exist. From the definition of real partial derivatives, we then have:

    f′(z) = ∂u/∂x + i ∂v/∂x    (11.2-16)

Now letting Δz → 0 parallel to the y-axis so that Δx = 0 and Δz = i Δy → 0 (see Figure 11.2-1), equation (11.2-14) then becomes:

    f′(z) = lim_{Δy→0} { [u(x, y + Δy) − u(x, y)] + i [v(x, y + Δy) − v(x, y)] } / (i Δy)    (11.2-17)

or, from the definition of real partial derivatives:

    f′(z) = (1/i) ∂u/∂y + ∂v/∂y = ∂v/∂y − i ∂u/∂y    (11.2-18)

If the function w = f(z) is differentiable at a point z(x, y), the derivatives given in equations (11.2-16) and (11.2-18), which were calculated for two different directions of approach, must exist and be equal. We then can write:

    f′(z) = df(z)/dz = ∂u/∂x + i ∂v/∂x = ∂v/∂y − i ∂u/∂y    (11.2-19)

Equating the real and imaginary parts, we obtain:

    ∂u/∂x = ∂v/∂y        ∂u/∂y = −∂v/∂x    (11.2-20)

These two linear partial differential equations are known as the Cauchy-Riemann equations.

If the derivative f′(z) of a function f(z) = u(x, y) + i v(x, y) exists at a point z(x, y), then the first order partial derivatives of u(x, y) and v(x, y) with respect to x and y must exist and must satisfy the Cauchy-Riemann equations at this point. The Cauchy-Riemann equations therefore constitute necessary conditions for a complex function to be differentiable at a point.

11.2.2.2  SUFFICIENT CONDITIONS FOR THE EXISTENCE OF A DERIVATIVE

Given that we considered only two paths of approach to the point z(x, y), it is not obvious that the Cauchy-Riemann equations constitute sufficient conditions for any complex function:

    w = f(z) = u(z) + i v(z) = u(x, y) + i v(x, y)    (11.2-21)

to have a derivative f′(z) at a given point so that we have:

    f′(z) = ∂u/∂x + i ∂v/∂x = ∂v/∂y − i ∂u/∂y    (11.2-22)

We will now show that if the partial derivatives ∂u/∂x, ∂u/∂y, ∂v/∂x, and ∂v/∂y exist and are continuous at a point z(x, y) within a domain D, and if the Cauchy-Riemann equations hold at this point, then the derivative f′(z) given in equation (11.2-22) exists.

Since we are assuming that the partial derivatives exist and are continuous in a neighborhood of the point z(x, y), we can write for Δw = Δu + i Δv:

    Δu = (∂u/∂x + ε₁) Δx + (∂u/∂y + η₁) Δy    (11.2-23)

    Δv = (∂v/∂x + ε₂) Δx + (∂v/∂y + η₂) Δy    (11.2-24)

where ε₁, ε₂ → 0 and η₁, η₂ → 0 as Δx → 0 and Δy → 0. We then have at the point z(x, y):

    Δw = Δu + i Δv = (∂u/∂x + i ∂v/∂x) Δx + (∂u/∂y + i ∂v/∂y) Δy + (ε₁ + i ε₂) Δx + (η₁ + i η₂) Δy    (11.2-25)

or

    Δw = (∂u/∂x + i ∂v/∂x) Δx + (∂u/∂y + i ∂v/∂y) Δy + ε Δx + η Δy    (11.2-26)

where ε = ε₁ + i ε₂ and η = η₁ + i η₂.

Using the Cauchy-Riemann equations, we can rewrite equation (11.2-26) as:

    Δw = (∂u/∂x + i ∂v/∂x) Δx + (−∂v/∂x + i ∂u/∂x) Δy + ε Δx + η Δy    (11.2-27)

or

    Δw = (∂u/∂x + i ∂v/∂x)(Δx + i Δy) + ε Δx + η Δy    (11.2-28)

Using Δz = Δx + i Δy, we have:

    Δw/Δz = (∂u/∂x + i ∂v/∂x) + ε Δx/Δz + η Δy/Δz    (11.2-29)

and so:

    lim_{Δz→0} | Δw/Δz − (∂u/∂x + i ∂v/∂x) | = lim_{Δz→0} | ε Δx/Δz + η Δy/Δz |    (11.2-30)

or

    lim_{Δz→0} | Δw/Δz − (∂u/∂x + i ∂v/∂x) | ≤ lim_{Δz→0} [ |ε| |Δx/Δz| + |η| |Δy/Δz| ]    (11.2-31)

We have |Δx/Δz| ≤ 1 and |Δy/Δz| ≤ 1. Since ε → 0 and η → 0 as Δz → 0, we have:

    lim_{Δz→0} Δw/Δz = ∂u/∂x + i ∂v/∂x    (11.2-32)

or

    f′(z) = dw/dz = ∂u/∂x + i ∂v/∂x    (11.2-33)

independent of how Δz → 0. The derivative f′(z) then exists and is unique if f(z) has continuous first partial derivatives with respect to x and y, and if f(z) satisfies the Cauchy-Riemann equations at the point z(x, y). The Cauchy-Riemann equations therefore constitute sufficient conditions for a complex function f(z) to have a derivative f′(z) at a given point z(x, y).

11.2.2.3  DERIVATIVE OF A COMPLEX FUNCTION IN RECTANGULAR FORM

From equations (11.2-33) and (11.2-20), we then have the following equivalent expressions for the derivative of a complex function at a point z(x, y):

    dw/dz = f′(z) = ∂u/∂x + i ∂v/∂x = ∂v/∂y − i ∂u/∂y = ∂u/∂x − i ∂u/∂y = ∂v/∂y + i ∂v/∂x    (11.2-34)

We also have from equation (11.2-34):

    |dw/dz|² = |f′(z)|² = (∂u/∂x)² + (∂v/∂x)² = (∂u/∂y)² + (∂v/∂y)² = (∂u/∂x)² + (∂u/∂y)² = (∂v/∂x)² + (∂v/∂y)²    (11.2-35)

11.2.3  CAUCHY-RIEMANN EQUATIONS IN RECTANGULAR FORM

We can summarize the results of the last two sections in Proposition 11.2-1.

Proposition 11.2-1, Cauchy-Riemann Equations:

The necessary and sufficient conditions for a complex function w = f(z) = u(x, y) + i v(x, y) to be differentiable at a point z(x, y) are that the Cauchy-Riemann equations:

    ∂u/∂x = ∂v/∂y        ∂u/∂y = −∂v/∂x    (11.2-36)

hold at the point z(x, y), and that the partial derivatives ∂u/∂x, ∂u/∂y, ∂v/∂x, and ∂v/∂y all exist and are continuous in a neighborhood of the point z(x, y).

Proof:

Given in Sections 11.2.2.1 and 11.2.2.2.  ∎

11.2.4  HOLOMORPHIC FUNCTIONS

The Cauchy-Riemann equations are a consequence of the requirement that the derivative of a complex function f(z) at a point z₀ must be independent of the path in the complex plane taken as Δz → 0 during the calculation of the derivative.

We will now consider a more restrictive set of functions that are differentiable not only at some single point z₀, but also at every point z of a δ neighborhood of z₀. A complex function w = f(z) is said to be holomorphic at a point z₀ if f(z) is defined and differentiable in an open disk |z − z₀| < δ centered at z₀, where δ > 0. For f(z) to be holomorphic at a point z₀ in a domain D requires then that the function f(z) be differentiable in a whole neighborhood Dδ(z₀) that exists within D. Therefore z₀ must be an interior point of D.

While it is possible for a derivative f′(z) to exist at just a single point, it is then not possible for f(z) to be holomorphic just at a single point. For this reason holomorphicity is considered to be a global property of a function, while differentiability is considered to be a local property of a function. We can conclude then that, for a function to be holomorphic along a curve, the function must be differentiable not only at points on the curve, but at points in some region surrounding the curve.

A complex function w = f(z) is said to be holomorphic on a domain if f(z) is defined and differentiable at every point in the domain. If f(z) is holomorphic on the entire finite complex plane, the function is referred to as an entire function.

Functions such as e^z, sin z, and cos z are differentiable everywhere in the finite complex plane, and so are entire functions. Polynomials are also entire functions.

Since we will consider the inversion of the Laplace transform using residue integration in the next chapter, we will now define residues and discuss theorems related to finding residues.

11.3  IMPORTANT THEOREMS FOR DETERMINING RESIDUES

Proposition 11.3-1, Fundamental Theorem of Calculus for Complex Functions:

If a complex function f(z) is holomorphic in a simply connected domain D, then

    ∫_{z₁}^{z₂} f(z) dz = ∫_{z₁}^{z₂} [dF(z)/dz] dz = F(z₂) − F(z₁)    (11.3-1)

where F(z) is the antiderivative of f(z).

Proposition 11.3-2, Independent of Path:

If a complex function f(z) is holomorphic in a simply connected domain D, then the integral:

    ∫_C f(z) dz    (11.3-2)

is independent of path C within D.

Proposition 11.3-3, Contour Integral Inequality:

If C is a simple contour and f(z) is a complex function continuous on C, then:

    | ∫_C f(z) dz | ≤ ∫_C |f(z)| |dz|    (11.3-3)

Proposition 11.3-4, ML-Inequality:

Let C be a simple contour of length L, and let f(z) be a continuous complex function on C. If f(z) is bounded by a real number M along C:

    |f(z)| ≤ M    (11.3-4)

then:

    | ∫_C f(z) dz | ≤ M L    (11.3-5)

Proof:

From Proposition 11.3-3 we can write:

    | ∫_C f(z) dz | ≤ ∫_C |f(z)| |dz| ≤ M ∫_C |dz|    (11.3-6)

In parametric form we have:

    | ∫_C f(z) dz | ≤ M ∫_a^b |dz(t)/dt| dt = M ∫_a^b ds    (11.3-7)

where t = a and t = b are the endpoints of the contour C, and ds is differential arc length. We then have:

    | ∫_C f(z) dz | ≤ M L    (11.3-8)

which is the upper bound for moduli of contour integrals. This equation is known as the ML-inequality.  ∎

Proposition 11.3-5, Cauchy-Goursat Theorem:

If f(z) is a holomorphic complex function in a simply connected domain D, then along every simple closed contour C within D we must have:

    ∮_C f(z) dz = 0    (11.3-9)

The arrow on the integral sign indicates that the direction of integration along the contour is counterclockwise.

Proposition 11.3-6, Deformation Theorem:

Let C₁ and C₂ be two simple closed contours within a domain D, and let C₂ be interior to C₁ (see Figure 11.3-1). If f(z) is a complex function holomorphic at all points within or on C₁, and outside or on C₂, then:

    ∮_{C₁} f(z) dz = ∮_{C₂} f(z) dz    (11.3-10)

Figure 11.3-1  Arrows show direction of integration.

Proposition 11.3-7:

If C is a simple closed contour within a domain D, and if z = z₀ is a point inside the contour C, then:

    ∮_C 1/(z − z₀)ⁿ dz = 2πi for n = 1, and 0 for n ≠ 1    (11.3-11)

where n is an integer.

Proposition 11.3-8, Cauchy's Integral Formula:

Let f(z) be a holomorphic complex function in a simply connected domain D. If C is a simple closed contour that lies within D, and if z₀ is a fixed point inside the contour C, then:

    f(z₀) = [1/(2πi)] ∮_C f(z)/(z − z₀) dz    (11.3-12)

Cauchy's integral formula states that if a complex function f(z) is holomorphic at all points inside and on a simple closed contour C, then the value of f(z) at any point z = z₀ inside the contour C is completely determined by just the values of f(z) on this contour. This is a special property of holomorphic functions that provides a method of computing their value at any point within the contour C. There is no equivalent property for real-valued functions of a real variable.

If f(z) is a holomorphic function with a derivative at a point z₀, then f(z) will have derivatives of all orders at z₀, and these derivatives can be expressed in integral form using an integral formula of Cauchy.

Proposition 11.3-9, Cauchy's Integral Formula for Derivatives:

Let f(z) be a holomorphic complex function in a simply connected domain D. If C is a simple closed contour within the domain D, and if z₀ is any given point inside the contour C, then f(z) has derivatives of all orders at the point z₀ given by:

    f⁽ⁿ⁾(z₀) = [n!/(2πi)] ∮_C f(z)/(z − z₀)^{n+1} dz    (11.3-13)

where n is a nonnegative integer.

Proposition 11.3-10, Higher Order Derivatives Exist for Holomorphic Functions and are Holomorphic:

If a complex function f(z) is holomorphic at a point z = z₀, then its derivatives f⁽ⁿ⁾(z) of all orders n exist and are holomorphic at the point z₀.


Proposition 11.3-11, Morera's Theorem:

If f(z) is a continuous complex function in a simply connected domain D, and if:

    ∮_C f(z) dz = 0    (11.3-14)

for every simple closed contour C within D, then f(z) is holomorphic in D.

11.4  COMPLEX SERIES

A sequence of complex numbers {zₙ} is obtained if an infinite set of complex numbers zₙ can be put in one-to-one correspondence with the set of positive integers, and if the complex numbers zₙ have a definite order of arrangement:

    {zₙ} = z₁, z₂, z₃, …    (11.4-1)

A complex sequence then forms a countably infinite set of complex numbers.

A complex series is defined as the sum of the terms zₙ of a complex sequence {zₙ}. Complex series therefore require the existence of a sequence and the algebraic operation of addition. An infinite complex series is defined as the sum of the terms zₙ of an infinite complex sequence {zₙ}:

    Σ_{n=1}^{∞} zₙ = z₁ + z₂ + z₃ + …    (11.4-2)

Since it is not possible to add an infinity of terms, it is not possible to directly obtain the sum of an infinite complex sequence. Instead the sum of an infinite complex sequence must be determined in the form of a limit. To obtain a limit that can represent the sum of an infinite complex sequence, a sequence of partial sums is used. The sum S_k(z) of the first k terms of an infinite complex sequence {zₙ} is known as the k-th partial sum:

    S_k(z) = Σ_{n=1}^{k} zₙ = z₁ + z₂ + z₃ + … + z_k    (11.4-3)

As additional terms of the sequence are summed, new partial sums are formed, and so S_k(z) represents the summation of the terms of the infinite complex sequence {zₙ} truncated at the k-th term:

    S₁(z) = Σ_{n=1}^{1} zₙ = z₁

    S₂(z) = Σ_{n=1}^{2} zₙ = z₁ + z₂    (11.4-4)

    S₃(z) = Σ_{n=1}^{3} zₙ = z₁ + z₂ + z₃

    ⋮

    S_k(z) = Σ_{n=1}^{k} zₙ = z₁ + z₂ + z₃ + … + z_k    (11.4-5)

For any given complex sequence {zₙ}, the set of partial sums forms a complex sequence of partial sums {S_k(z)}:

    {S_k(z)} = S₁(z), S₂(z), S₃(z), …, S_k(z), …    (11.4-6)

or

    {S_k(z)} = z₁, (z₁ + z₂), (z₁ + z₂ + z₃), …

If the sequence of partial sums {S_k(z)} has a finite limit, then the infinite series given in equation (11.4-3) is said to converge to a sum. For the purpose of determining a limit, therefore, an infinite series is treated as a sequence of partial sums.

If the series terms zₙ are not simply constant complex numbers, but are functions fₙ(z) of complex numbers, then the series is a complex function series. Such a function series is defined as the sum of terms fₙ(z) of an infinite sequence {fₙ(z)} of complex functions:

    Σ_{n=1}^{∞} fₙ(z) = f₁(z) + f₂(z) + f₃(z) + …    (11.4-7)

If a complex function fₙ(z) is defined and single-valued in a region R of the complex plane, then for any given point z in the region R, the sequence of complex functions {fₙ(z)} becomes a sequence of complex numbers. If the sequence of partial sums {S_k(z)} of the function fₙ(z) converges for all points in a region R, then R is called a region of convergence for the function series. If, for the same value of N, a sequence of partial sums {S_k(z)} converges to f(z) for all points z in its region of convergence R when n ≥ N, then the sequence of partial sums is said to converge uniformly to f(z), and we will have for any real number ε > 0:

    | S_k(z) − f(z) | < ε    for all k > N    (11.4-8)

For uniform convergence the number N can depend on ε, but not z. We therefore have N = N(ε), but not N = N(ε, z).

Proposition 11.4-1, Term-by-Term Integration:

If a series of continuous complex functions f_k(z) converges uniformly to the sum f(z) in a domain D, then for any simple contour C that lies within D the integral of the sum along C can be determined using term-by-term integration of the series:

    lim_{n→∞} ∫_C Σ_{k=0}^{n} f_k(z) dz = lim_{n→∞} Σ_{k=0}^{n} ∫_C f_k(z) dz    (11.4-9)

Proposition 11.4-2:

If {f_k(z)} is a sequence of holomorphic functions in a simply connected domain D, and if the series:

    Σ_{k=0}^{∞} f_k(z)    (11.4-10)

converges uniformly to f(z) in D, then f(z) is holomorphic on D.

Proposition 11.4-3, Term-by-Term Differentiation:

If a series of continuous complex functions fₙ(z) converges uniformly to the sum f(z) in a domain D, then the derivative of f(z) at any point z₀ in D can be determined by term-by-term differentiation of the series.

11.5  POWER SERIES

A complex power series is an infinite series of complex functions having the form:

    Σ_{n=0}^{∞} aₙ (z − z₀)ⁿ = a₀ + a₁ (z − z₀) + a₂ (z − z₀)² + …    (11.5-1)

where aₙ, z, z₀ ∈ ℂ. This series is also known as a power series in z − z₀. The point z₀ is a fixed point in the complex plane, and is called the center of the series or the point of expansion of the series. The coefficients aₙ are complex constants that are generally dependent on the particular center z₀ of the series. The coefficients of a power series for any given point of expansion z₀ determine the series.

For every complex power series there is always a unique circle centered at z₀ and having a radius |z − z₀| = r ≥ 0 such that the series converges absolutely at every point inside the circle, and diverges at all points outside of the circle.

circle centered at z0 and having a radius z − z0 = r ≥ 0 such that the series converges absolutely at every point inside the circle, 188

and diverges at all points outside of the circle. This circle is known as the circle of convergence or disk of convergence,

!

)n !

(11.5-4)

converges uniformly inside its circle of convergence z − z0 = r .

power series may converge on some or none of the points that only one radius of convergence.



(

an z − z0

n=0

and its radius r is known as the radius of convergence. A are on its circle of convergence. Each power series can have

()

f z =



Proposition 11.5-3, Uniqueness of Power Series Representation: A power series provides a unique representation of a function

()

f z within its circle of convergence z − z0 = r .

Proposition 11.5-1, Circle of Convergence of a Power Series: Every power series: ∞

!



(

)n

an z − z0 !

Proposition 11.5-4, Power Series is Holomorphic:

()

If a function f z can be represented by a power series:

(11.5-2)

n=0

has a circle of convergence z − z0 = r with radius given by: !

an ! r = lim n→ ∞ an+1

!

series will converge absolutely, and outside this circle it will



(

an z − z0

)n !

(11.5-5)

n=0

()

then f z is holomorphic at every point inside its circle of

(11.5-3)

if this limit exists. Within the circle of convergence the power

()

f z =



convergence z − z0 = r Proposition 11.5-5, Derivatives of All Orders:

diverge.

Derivatives of all orders exist for a power series:

Proposition 11.5-2, Uniform Convergence of a Power Series:

()

Every function f z that can be represented by a power series:

!

()

f z =





(

)n

an z − z0 !

(11.5-6)

n=0

at every point inside its circle of convergence so that: 189
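The ratio formula (11.5-3) can be evaluated symbolically for simple coefficient sequences. The following SymPy sketch is illustrative only; for the coefficients aₙ = 1/2ⁿ the radius of convergence is r = 2:

    from sympy import symbols, limit, Abs, oo

    n = symbols('n', positive=True, integer=True)
    a_n = 1/2**n
    a_np1 = 1/2**(n + 1)
    r = limit(Abs(a_n/a_np1), n, oo)
    print(r)   # 2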

11.6  TAYLOR SERIES

If a function f(z) can be represented by a power series having the form:

    f(z) = Σ_{n=0}^{∞} [f⁽ⁿ⁾(z₀)/n!] (z − z₀)ⁿ    (11.6-1)

then the power series is known as a Taylor series.

Proposition 11.6-1, Taylor Series:

If a complex function f(z) can be represented by the power series:

    f(z) = Σ_{n=0}^{∞} aₙ (z − z₀)ⁿ    (11.6-2)

inside its circle of convergence |z − z₀| = r, then the coefficients of the series are given by:

    aₙ = f⁽ⁿ⁾(z₀)/n! = [1/(2πi)] ∮_C f(s)/(s − z₀)^{n+1} ds    (11.6-3)

where n = 0, 1, 2, …, and the series is a Taylor series.

Proposition 11.6-2, Existence of Taylor Series:

A Taylor series representation of a complex function f(z) will exist on a domain D if and only if f(z) is holomorphic on D.

Proof:

From the definition of a Taylor series representation of a complex function f(z), we see that a necessary and sufficient condition for the Taylor series to exist on a domain D is that the coefficients of the series exist for all points within D. From Proposition 11.6-1 we know that the coefficients of a Taylor series are the derivatives of f(z), and so derivatives of all orders of f(z) must exist at all points within D. From the definition of a holomorphic function, we then see that a Taylor series representation of a complex function f(z) will exist on a domain D if and only if f(z) is holomorphic on D.  ∎
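The coefficient formula aₙ = f⁽ⁿ⁾(z₀)/n! in equation (11.6-3) can be checked for an entire function. The following SymPy sketch is illustrative only, using f(z) = e^z about z₀ = 0:

    from sympy import symbols, exp, series, diff, factorial

    z = symbols('z')
    f = exp(z)

    taylor = series(f, z, 0, 6).removeO()
    for n in range(6):
        a_n = taylor.coeff(z, n)
        print(n, a_n, diff(f, z, n).subs(z, 0)/factorial(n))   # the two columns agree: 1/n!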

A function that can be represented locally by a power series is known as an analytic function. A Taylor series is a power series. If and only if a function is holomorphic on a domain D can the function be represented on D by a Taylor series. Any function f(z) that is holomorphic on a domain D will be analytic on D. From Propositions 11.5-1 and 11.5-2 we know that the Taylor series will converge absolutely and uniformly inside its circle of convergence. Within its circle of convergence, therefore, a complex function will be both holomorphic and analytic, and so these two designations can be taken to be equivalent for our purposes.

A Taylor series representation f₁(z) of any analytic function can be continued by selecting a different point of expansion within its circle of convergence to form a new Taylor series representing a function f₂(z). This new series can be analytically continued through any points on the circle of convergence of the f₁(z) series that are not singular. This process is known as analytic continuation. The process of analytic continuation can be repeated many times as long as only isolated singularities exist that can be avoided by new circles of convergence. This makes it possible for an analytic function having only poles to be represented by Taylor series everywhere in the complex plane (except at the poles) starting from its values in some neighborhood of a point of expansion.

11.7  LAURENT SERIES

A point z = z₀ in the complex plane is called a singular point or a singularity of a function f(z) if the function f(z) is not holomorphic at z₀. A complex function f(z) will never be holomorphic at any of its singularities. If a point z₀ is a singularity of a function f(z), then f(z) will be discontinuous at z₀ since f′(z₀) will not exist.

A point z₀ is called an isolated singular point or an isolated singularity of a function f(z) if f(z) is not holomorphic at z₀, but f(z) is holomorphic throughout some neighborhood of z₀. We see then that, for an isolated singular point z₀ of a function f(z), a δ neighborhood of z₀ will always exist in which there are no other singular points of f(z).

If a complex function f(z) is holomorphic at a point z₀, then a Taylor series can be used to expand f(z) about the point z₀. If the point z₀ is an isolated singularity of a complex function f(z), then a Taylor series cannot be used to expand f(z) about the point z₀. Instead we must use a different series known as a Laurent series. To describe the Laurent series we need to first define the concept of a punctured disk.

An open δ-disk centered at a point z₀ that does not include the point z₀ is called a punctured δ-disk. We will use the notation Dδ*(z₀) for an open punctured disk centered at a point z₀ and having a radius of δ. The disk Dδ*(z₀) is defined by:

    0 < |z − z₀| < δ    (11.7-1)

Figure 11.7-1  Graphical representation of the set of points in the open punctured disk centered at the point z₀ and having a radius of δ.

A point z₀ that is an isolated singular point of a function f(z) will have an open punctured δ-disk centered at z₀ in which f(z) is holomorphic. The inner radius of the punctured disk in which f(z) is holomorphic can be made arbitrarily small (see Figure 11.7-1).

A Laurent series has the form:

    f(z) = Σ_{n=1}^{∞} bₙ/(z − z₀)ⁿ + Σ_{n=0}^{∞} cₙ (z − z₀)ⁿ    (11.7-2)

where

    bₙ = [1/(2πi)] ∮_C f(s)/(s − z₀)^{−n+1} ds        n = 1, 2, 3, …    (11.7-3)

    cₙ = [1/(2πi)] ∮_C f(s)/(s − z₀)^{n+1} ds        n = 0, 1, 2, …    (11.7-4)

and where C is any simple closed contour lying within some punctured disk Dδ*(z₀) which surrounds z₀, and in which f(z) is analytic.

The Laurent series can be considered a generalization of the Taylor series to expansions of functions about points at which the function is not analytic. The Laurent series can therefore represent a function f(z) that is analytic in an annular domain D centered at an isolated singularity z₀, and defined by r < |z − z₀| < R, where 0 ≤ r < R ≤ ∞.

A Laurent series can include terms having negative powers. That part of a Laurent series having only negative powers of (z − z₀) is referred to as the principal part; the rest of the series is referred to as the analytic part. The principal part of a Laurent series is given by:

    f₁(z) = Σ_{n=1}^{∞} bₙ/(z − z₀)ⁿ    (11.7-5)

and the analytic part of a Laurent series is given by:

    f₂(z) = Σ_{n=0}^{∞} cₙ (z − z₀)ⁿ    (11.7-6)
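A concrete expansion makes the principal and analytic parts visible. The following SymPy sketch is illustrative only; it expands f(z) = 1/(z(z − 1)) about its isolated singularity z₀ = 0:

    from sympy import symbols, series

    z = symbols('z')
    f = 1/(z*(z - 1))
    print(series(f, z, 0, 4))   # -1/z - 1 - z - z**2 - ...  plus an order term

The single negative-power term -1/z is the principal part of this expansion, and its coefficient is the residue of f(z) at z₀ = 0.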

The Laurent series about z₀ given in equation (11.7-2) can be written in the form:

    f(z) = Σ_{n=−∞}^{∞} aₙ (z − z₀)ⁿ    (11.7-7)

where the coefficients aₙ are:

    aₙ = [1/(2πi)] ∮_C f(z)/(z − z₀)^{n+1} dz        n = 0, ±1, ±2, ±3, …    (11.7-8)

and where C is any simple closed contour centered at z₀, and lying within some punctured disk Dδ*(z₀) which surrounds z₀, and in which f(z) is analytic.

11.8  SINGULARITIES

11.8.1  ISOLATED SINGULARITIES

The Laurent series expansion for a function f(z) about an isolated singularity z₀ of f(z) will converge in an open punctured disk Dδ*(z₀). An isolated singularity of f(z) at a point z₀ can be classified as either a removable singularity, a pole, or an essential singularity according to the number of terms in the principal part of its Laurent series expansion about the singularity z₀. Any isolated singularity of f(z) can be classified using the Laurent series expansion about the singularity.

11.8.1.1  REMOVABLE SINGULARITY

If a complex function f(z) has an isolated singularity at a point z₀, and if the Laurent series for f(z) about z₀ has no principal part, then f(z) is said to have a removable singularity at z₀. If f(z) has a removable singularity at z₀, then a finite lim_{z→z₀} f(z) exists, but this limit is not equal to f(z₀). We see, therefore, that f(z) is not continuous at z₀.

The Laurent series expansion centered at z₀ for f(z) when f(z) has a removable singularity at z₀ is:

    f(z) = Σ_{n=0}^{∞} aₙ (z − z₀)ⁿ    (11.8-1)

which defines f(z) on the punctured disk Dδ*(z₀). This series is also a Taylor series that converges on the punctured disk Dδ*(z₀). The singularity at z₀ can be removed by redefining f(z) at only the one point z₀ so that:

    lim_{z→z₀} f(z) = f(z₀) = a₀    (11.8-2)

making f(z) continuous at z₀. The function f(z) then becomes holomorphic on the disk Dδ(z₀).

11.8.1.2  POLE

If a function f(z) has an isolated singularity at a point z₀ and, if the principal part of the Laurent series for f(z) about z₀ has a finite number of terms, then f(z) is said to have a pole at the point z₀. If a pole exists at z₀ for f(z), we will have:

    lim_{z→z₀} |f(z)| = ∞    (11.8-3)

and so the magnitude of the function f(z) increases without bound as z → z₀. Equation (11.8-3) holds independently of the manner in which z → z₀. The Laurent series for f(z) about z₀ is convergent in the punctured disk Dδ*(z₀). Such a punctured disk always exists for a pole.

If the principal part of the Laurent series for f(z) about z = z₀ can be written with k > 0:

    a₋ₖ ≠ 0        a₋₍ₖ₊₁₎ = a₋₍ₖ₊₂₎ = … = 0    (11.8-4)

so that the Laurent series for f(z) has the form:

    f(z) = Σ_{n ≥ −k} aₙ (z − z₀)ⁿ    (11.8-5)

or

    f(z) = a₋ₖ/(z − z₀)ᵏ + a₋₍ₖ₋₁₎/(z − z₀)^{k−1} + … + a₋₁/(z − z₀) + a₀ + a₁ (z − z₀) + …    (11.8-6)

then z₀ is said to be a pole of order k. Therefore the principal part of the Laurent series expansion of f(z) is a polynomial of degree k in (z − z₀)⁻¹. A pole can be seen to have an analytic reciprocal that is zero at the pole. If z₀ is a pole, then z = z₀ makes the denominators of the principal part of the Laurent series equal to zero. A pole can be real or complex.

If the principal part of a Laurent series has exactly one term, then the series can be written as:

    f(z) = a₋₁/(z − z₀) + a₀ + a₁ (z − z₀) + a₂ (z − z₀)² + …    (11.8-7)

and the pole is said to be a pole of order one or a simple pole.

11.8.1.3  ESSENTIAL SINGULARITY

If a function f(z) has an isolated singularity at a point z₀ and if the principal part of the Laurent series for f(z) about z₀ has an infinity of terms, then f(z) is said to have an isolated essential singularity at the point z₀. For a function f(z) with an essential singularity at z₀, the order of the pole goes to infinity, and its Laurent series has the form:

    f(z) = … + a₋₂/(z − z₀)² + a₋₁/(z − z₀) + a₀ + a₁ (z − z₀) + a₂ (z − z₀)² + …    (11.8-8)

where an infinity of the coefficients aₙ with n < 0 are nonzero, and where the series is convergent in the punctured disk Dδ*(z₀).

11.8.1.4  SINGULARITY CLASSIFICATION

As noted above, if a function f(z) has an isolated singularity at a point z₀, a Laurent series expansion of f(z) about z₀ can be used to classify this isolated singularity:

    Removable singularity:  the Laurent series has no principal part.
    Pole:  the principal part of the Laurent series has a finite number of terms.
    Essential singularity:  the principal part of the Laurent series has an infinity of terms.

11.8.2  NONISOLATED SINGULARITIES

If w = f(z) is a multivalued function of a complex variable z, then more than one value of w will correspond to some values of z in the domain of f(z). A number of points in the w-plane can then be the images of a single point z₀ in the z-plane.

If we consider a restricted part of the range of a multivalued function in which the function assigns just one value w for each z, and in which the function is continuous, we refer to this part of the range as a branch of the multivalued function. Within any given branch of a multivalued function, the function will be single-valued and continuous.

11.8.2.1  BRANCH CUTS

A branch cut of a function is a curve in the complex plane across which the multivalued function is discontinuous.

The purpose of a branch cut is to demarcate branches, with each edge of the curve belonging to different branches. The branch cut acts to transform a complex function that is multivalued into one that is single-valued and continuous within each of its branches.

11.8.2.2  BRANCH POINTS

A point z₀ is designated a branch point for a complex multivalued function f(z) if f(z) does not return to its initial value after once traversing a closed circle of arbitrarily small radius centered at z₀. Therefore a multivalued function will never be single-valued in any δ neighborhood Dδ(z₀) of a branch point z₀, where |z − z₀| < δ. If a closed contour does not encircle a branch point of a multivalued function, that function will be single-valued when traversing such a contour.

Every branch cut of a multivalued function will join two (and only two) branch points of the function. The location of any branch point depends on the function, and is independent of the chosen branch cut. Therefore the location of branch points is unique for a given multivalued function. Branch points of a function can be located at infinity.

Branch points and all points on a branch cut are examples of nonisolated singular points. Even the smallest circuit around a branch point of a function will cross a branch cut, and so every neighborhood of a branch point will contain at least one singularity other than the singularity at the branch point. Since any δ neighborhood of z₀ will contain other singularities of f(z), it is not possible to represent f(z) in a punctured disk Dδ*(z₀) by a Laurent series centered at z₀. Some functions that have nonisolated singularities have an infinity of poles that converge to a limit point z₀.

11.9  RESIDUES

One of the coefficients in a Laurent series expansion of an analytic function plays a key role both in evaluating integrals of the function, and in determining the location of isolated poles of the function. This coefficient is known as the residue of the function.

If an analytic function f(z) has an isolated singularity at a point z₀, then integrating its Laurent expansion term-by-term on a simple closed contour C encircling z₀, we can write:

    ∮_C f(z) dz = Σ_{n=−∞}^{∞} aₙ ∮_C (z − z₀)ⁿ dz    (11.9-1)

Using Proposition 11.3-7 we obtain:

    ∮_C (z − z₀)ⁿ dz = 2πi for n = −1, and 0 for n ≠ −1    (11.9-2)

We therefore have:

    ∮_C f(z) dz = 2πi a₋₁    (11.9-3)

or

    ∮_C f(z) dz = 2πi Res[f(z), z₀]    (11.9-4)

Obviously if a₋₁ is known for the function f(z), then the integral of the function f(z) on the closed contour C can be obtained immediately from equation (11.9-3).

The term with coefficient a₋₁ in the Laurent series expansion of f(z) about the singularity z₀ is the only nonzero term left following term-by-term integration of the Laurent series on the contour C. For this reason a₋₁ is called the residue of the function f(z) for the singularity z₀.

We will designate the residue of a function f(z) at a singularity z₀ by:

    a₋₁ = Res[f(z), z₀]    (11.9-5)

Equation (11.9-3) can then be written:

    a₋₁ = [1/(2πi)] ∮_C f(z) dz    (11.9-6)

If the Laurent series expansion for an analytic function f(z) about a singularity z₀ can be found, then the coefficient a₋₁ of the Laurent series can be determined by inspection, and the residue of the function f(z) at the singularity z₀ obtained. It is often necessary to calculate only a few terms of a Laurent series expansion to determine the term with coefficient a₋₁.

Nevertheless, finding the Laurent series expansion of a function f(z) can be quite difficult. Fortunately there are some other ways of determining the single coefficient a₋₁ without first having to find other terms of the Laurent series expansion.

Proposition 11.9-1:

If an analytic function f(z) has a simple pole at a point z = z₀, then:

    Res[f(z), z₀] = lim_{z→z₀} [(z − z₀) f(z)]    (11.9-7)

Proof:

Since z₀ is a simple pole of f(z), the Laurent series for f(z) is:

    f(z) = a₋₁/(z − z₀) + a₀ + a₁ (z − z₀) + a₂ (z − z₀)² + …    (11.9-8)

Multiplying by (z − z₀) we have:

    (z − z₀) f(z) = a₋₁ + (z − z₀) [a₀ + a₁ (z − z₀) + …]    (11.9-9)

Taking the limit as z → z₀, we obtain a₋₁:

    Res[f(z), z₀] = lim_{z→z₀} [(z − z₀) f(z)] = a₋₁    (11.9-10)

∎

Proposition 11.9-2:

If an analytic function f(z) has the form:

    f(z) = P(z)/[(z − a)(z − b)(z − c)] = A/(z − a) + B/(z − b) + C/(z − c)    (11.9-11)

where P(z) is a polynomial of degree ≤ 2, and where a, b, and c are the poles of f(z), then:

    A = Res[f(z), a] = P(a)/[(a − b)(a − c)]    (11.9-12)

    B = Res[f(z), b] = P(b)/[(b − a)(b − c)]    (11.9-13)

    C = Res[f(z), c] = P(c)/[(c − a)(c − b)]    (11.9-14)

Proof:

Representing the terms in equation (11.9-11) as a Laurent series about the point z = a, we have:

    A/(z − a)    (11.9-15)

which is a single term Laurent series. We also have:

    B/(z − b) = −B / { (b − a) [1 − (z − a)/(b − a)] } = −Σ_{n=0}^{∞} B (z − a)ⁿ/(b − a)^{n+1}    (11.9-16)

which is a Taylor series analytic at the point z = a, and:

    C/(z − c) = −C / { (c − a) [1 − (z − a)/(c − a)] } = −Σ_{n=0}^{∞} C (z − a)ⁿ/(c − a)^{n+1}    (11.9-17)

which is a Taylor series analytic at the point z = a.

Using equations (11.9-15), (11.9-16), and (11.9-17), equation (11.9-11) becomes:

    f(z) = A/(z − a) − Σ_{n=0}^{∞} [ B/(b − a)^{n+1} + C/(c − a)^{n+1} ] (z − a)ⁿ    (11.9-18)

and so using Proposition 11.9-1:

    A = Res[f(z), a] = lim_{z→a} P(z)/[(z − b)(z − c)] = P(a)/[(a − b)(a − c)]    (11.9-19)

Using a similar derivation for the other singularities, we obtain:

    B = Res[f(z), b] = P(b)/[(b − a)(b − c)]    (11.9-20)

    C = Res[f(z), c] = P(c)/[(c − a)(c − b)]    (11.9-21)

∎

( ) = P( a) A = Res ⎡⎣ f ( z ) , a ⎤⎦ = lim z→a z − b z − c ( ) ( ) ( a − b) ( a − c ) P z

! !!

(11.9-19)

!

() ! ( b − a )( b − c ) P(c) ! C = Res ⎡⎣ f ( z ) , c ⎤⎦ = ( c − a ) ( c − b) ()

B = Res ⎡⎣ f z , b ⎤⎦ =

!

(11.9-23)

z = z0 , then:

(11.9-20)

d n−1 ⎡ ! Res ⎡⎣ f z , z0 ⎤⎦ = lim z − z0 z → z0 n − 1 ! dz n−1 ⎢ ⎣

()

(11.9-21)

(

1

)

(

)n f ( z )⎤⎥⎦ !

(11.9-24)

11.10! RESIDUE INTEGRATION

Proposition 11.9-3:

()

() q ( z ) has a

Rather than the singularities of an analytic function

increasing the difficulty in evaluating integrals of the function,

Let f z = p z q z where the functions p z and q z are

these singularities can actually aid in this evaluation. Residues

both analytic at a point z = z0 . If p z0 ≠ 0 and

provide a method for evaluating integrals of functions that are

( )

()

simple zero at z0 , then z0 is a simple pole of f z , and we have: !

!

()

!

() ()

( )

q′ z0

If an analytic function f z has a pole of order k ≥ 2 at a point

P b



()

1

Proposition 11.9-5:

Using a similar derivation for the other singularities, we obtain: !

()

Res ⎡⎣ f z , z0 ⎤⎦ =

( )! Res ⎡⎣ f ( z ) , z0 ⎤⎦ = q′ ( z0 )

singularities. It is the residues of the integrand at singular

p z0

(11.9-22)

()

Proposition 11.10-1, Cauchy’s Residue Theorem for an

()

Let f z = 1 q z where the function q z is analytic at a point

()

points within the contour that determine the value of the contour integral.

Proposition 11.9-4:

()

analytic inside a closed contour except for some isolated

Isolated Singularity:

()

z = z0 . If q z has a simple zero at z0 , then z0 is a simple pole of

If a complex function f z is analytic in a domain D except at

f z and we have:

an isolated singularity at a point z = z0 , then:

()

199

!∫

!

C

()

()

f z dz = 2 π i Res ⎡⎣ f z , z0 ⎤⎦ !

()

integral of f z using a closed contour around z0 . This type of

(11.10-1)

integration is known as residue integration, and it can be

where C is any positively oriented simple closed contour lying

considered an extension of the method of integration using

within D and enclosing z0 .

Cauchy’s integral formula. Residue integration can often make possible the evaluation of integrals even when an explicit

Proof: !

()

Since the function f z

antiderivative does not exist. is analytic in D except at an

()

isolated singularity at a point z0 , we can represent f z

at

Proposition 11.10-2, Cauchy’s Residue Theorem:

point z0 using the Laurent series given in equation (11.7-7). We

Let C be a simple closed contour that lies within a simply

can enclose z0 with a simple circular contour C . From equation

()

connected domain D . If a complex function f z is analytic in

(11.7-8) we can then write: !

1 a−1 = 2π i

( ) dz = 1 f z dz ! !∫C ( z − z0 )−1+1 2π i !∫C ( ) f z

D except at a finite number of isolated singular points

!

or !

!∫

C

!∫

C

()

()

f z dz = 2 π i Res ⎡⎣ f z , z0 ⎤⎦ !

as given in equation (11.9-6). !

z1 , z2 , ! , zk that lie inside C , then:

(11.10-2)

()

f z dz = 2 π i

k

∑ Res ⎡⎣ f ( z ), z ⎤⎦ ! n

(11.10-4)

n =1

(11.10-3)



The implications of this equation are rather profound and

far-reaching. If we are able by any method to determine the

()

coefficient a−1 of the Laurent series of a complex function f z

having an isolated singularity z0 , then we can evaluate the 200
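The residue machinery above can be exercised numerically. The sketch below is an illustration only (not part of the text): it uses sympy to compute the residues of an arbitrarily chosen f(z) = (3z - 1)/(z(z - 1)) at its two simple poles, and checks Cauchy's residue theorem by numerically integrating f(z) around the circle |z| = 2, which encloses both poles. The function choice, contour radius, and tolerances are assumptions made for the demonstration.

```python
# Illustrative sketch (not from the text): residues and Cauchy's residue theorem
# for f(z) = (3z - 1)/(z*(z - 1)), integrated around the circle |z| = 2.
import sympy as sp
import mpmath as mp

z = sp.symbols('z')
f_sym = (3*z - 1) / (z * (z - 1))

res0 = sp.residue(f_sym, z, 0)   # expected: 1
res1 = sp.residue(f_sym, z, 1)   # expected: 2

f = sp.lambdify(z, f_sym, 'mpmath')

def integrand(theta):
    zc = 2 * mp.exp(1j * theta)      # contour point z = 2 e^{i theta}
    dz = 2j * mp.exp(1j * theta)     # dz/dtheta
    return f(zc) * dz

contour = mp.quad(integrand, [0, 2 * mp.pi])
print(res0, res1)
print(contour, 2j * mp.pi * (res0 + res1))   # both approximately 6*pi*i
```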

Chapter 12

Inverse Laplace Transforms from Residue Integration

    f(t) = \frac{1}{2\pi i}\int_{\sigma_0 - i\infty}^{\sigma_0 + i\infty} F(s)\,e^{st}\,ds

In this chapter we will consider the inversion of the Laplace transform using residue integration. This is a general inversion technique for the Laplace transform, and it is the inversion method often used when a Laplace transform cannot be found in a table.

12.1  INVERSE LAPLACE TRANSFORM

As given in equation (1.3-6), the direct Laplace transform is:

    F(s) = \mathcal{L}\{f(t)\} = \int_{0^-}^{\infty} f(t)\,e^{-st}\,dt    (12.1-1)

and from equation (6.2-5), the inverse Laplace transform is:

    f(t) = \mathcal{L}^{-1}\{F(s)\} = \frac{1}{2\pi i}\int_{\sigma_0 - i\infty}^{\sigma_0 + i\infty} F(s)\,e^{st}\,ds    (12.1-2)

We will now show that these two integral transforms do indeed form a direct and an inverse pair of transforms. We will also present examples of the use of the inverse Laplace transform for certain elementary functions. We will begin by considering the inverse Laplace transforms of the unit step function and of the unit impulse function. We will first establish that the Laplace transform F(s) is analytic in its region of absolute convergence.

Proposition 12.1-1:
The Laplace transform F(s) is analytic in its region of absolute convergence.

Proof:
From Proposition 1.4-7 we know that the Laplace transform F(s) converges uniformly in its region of absolute convergence. We can integrate F(s) on a closed contour C in this region:

    \oint_C F(s)\,ds = \oint_C \int_{0^-}^{\infty} f(t)\,e^{-st}\,dt\,ds    (12.1-3)

Since F(s) converges uniformly within this contour, we can invert the order of integration:

    \oint_C F(s)\,ds = \int_{0^-}^{\infty} f(t)\oint_C e^{-st}\,ds\,dt    (12.1-4)

Since e^{-st} is holomorphic on the entire complex plane, we can use the Cauchy-Goursat theorem given in Proposition 11.3-5 to write:

    \oint_C e^{-st}\,ds = 0    (12.1-5)

Therefore equation (12.1-4) becomes:

    \oint_C F(s)\,ds = \int_{0^-}^{\infty} f(t)\oint_C e^{-st}\,ds\,dt = 0    (12.1-6)

From Morera's theorem given in Proposition 11.3-11 we then can conclude that the Laplace transform F(s) is holomorphic and analytic in its region of absolute convergence.
■

The requirement that Re s > σ_0 is necessary to ensure convergence of the Laplace integral from which we obtain F(s). Once having obtained F(s), however, there is then no longer a requirement that Re s > σ_0, and so we can use the analytic properties of the function F(s) for residue integration wherever F(s) may be analytic in the s-plane (essentially by means of analytic continuation).
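The direct and inverse pair in equations (12.1-1) and (12.1-2) can also be checked numerically. The short sketch below is illustrative only and is not part of the text's development: it uses mpmath's invertlaplace routine, which evaluates the Bromwich integral by a numerical contour method, to invert F(s) = 1/(s + 1) and compares the result with f(t) = e^{-t}. The particular transform and the test points are assumptions made for the demonstration.

```python
# Illustrative sketch (not from the text): numerically invert a Laplace
# transform and compare with the known original function.
import mpmath as mp

def F(s):
    # F(s) = 1/(s + 1), whose inverse Laplace transform is f(t) = exp(-t)
    return 1 / (s + 1)

for t in [0.5, 1.0, 2.0, 5.0]:
    f_numeric = mp.invertlaplace(F, t, method='talbot')  # numerical Bromwich inversion
    f_exact = mp.e ** (-t)
    print(t, f_numeric, f_exact, abs(f_numeric - f_exact))
```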

12.2  INVERSE LAPLACE TRANSFORM OF A UNIT STEP FUNCTION

Proposition 12.2-1:
The inverse Laplace transform of the unit step function U(t) is:

    U(t) = \mathcal{L}^{-1}\left\{\frac{1}{s}\right\} = \frac{1}{2\pi i}\int_{\sigma_0 - i\infty}^{\sigma_0 + i\infty}\frac{e^{st}}{s}\,ds    (12.2-1)

Proof:
The Laplace transform of the unit step function U(t) is:

    \mathcal{L}\{U(t)\} = \frac{1}{s} = F(s)    (12.2-2)

as given in equation (5.1-10). We will now show that the inverse Laplace transform of the unit step function is:

    U(t) = \frac{1}{2\pi i}\int_{\sigma_0 - i\infty}^{\sigma_0 + i\infty} F(s)\,e^{st}\,ds = \frac{1}{2\pi i}\int_{\sigma_0 - i\infty}^{\sigma_0 + i\infty}\frac{e^{st}}{s}\,ds    (12.2-3)

The integration path for this integral is shown in Figure 12.2-1.

Figure 12.2-1  Integration path for the integral for U(t).

We can use residue integration to evaluate the integral in equation (12.2-3). The only singularity of e^{st}/s is a simple pole located at the origin as shown in Figure 12.2-2.

Residue integration can be performed by constructing a closed contour C that includes part of the integration path C_L for equation (12.2-3) together with a semicircle C_R of radius R centered at (σ_0, 0). The contour C is known as the Bromwich contour and is composed of two curves C_L and C_R which together enclose the pole at the origin (see Figure 12.2-2).

Figure 12.2-2  Contour C = C_L ∪ C_R for the inverse Laplace transform of the unit step function.

Since t > 0, the contour C_R is chosen to extend to the left so that as Re s becomes more negative, e^{st}/s will vanish. The contour integral of e^{st}/s on C, where C consists of C = C_L ∪ C_R, can be written as:

    \oint_C \frac{e^{st}}{s}\,ds = \int_{C_L}\frac{e^{st}}{s}\,ds + \int_{C_R}\frac{e^{st}}{s}\,ds    (12.2-4)

Such a contour integral is known as a Bromwich contour integral. We will evaluate each of the integrals on the right side of equation (12.2-4) as the semicircle C_R radius R → ∞.

For the integral along path C_L we have:

    \int_{C_L}\frac{e^{st}}{s}\,ds = \int_{\sigma_0 - iR}^{\sigma_0 + iR}\frac{e^{st}}{s}\,ds    (12.2-5)

Path C_L is the line extending from σ_0 - iR to σ_0 + iR where the constant σ_0 is the abscissa of absolute convergence. As R → ∞ the path C_L becomes the integration path of the integral as given in equation (12.2-3):

    \int_{C_L}\frac{e^{st}}{s}\,ds = \int_{\sigma_0 - i\infty}^{\sigma_0 + i\infty}\frac{e^{st}}{s}\,ds    (12.2-6)

To evaluate the integral along the path C_R, we will let s = σ_0 + R e^{iφ} so that ds = iR e^{iφ}\,dφ. We then have:

    \int_{C_R}\frac{e^{st}}{s}\,ds = \int_{C_R}\frac{e^{\sigma_0 t}\,e^{Rt e^{iφ}}}{\sigma_0 + R e^{iφ}}\,iR e^{iφ}\,dφ    (12.2-7)

or using Proposition 11.3-3:

    \left|\int_{C_R}\frac{e^{st}}{s}\,ds\right| \le \int_{C_R}\left|\frac{e^{\sigma_0 t}\,e^{Rt e^{iφ}}}{\sigma_0 + R e^{iφ}}\right|\left|iR e^{iφ}\right|dφ    (12.2-8)

For R >> σ_0 we have:

    \left|\int_{C_R}\frac{e^{st}}{s}\,ds\right| \le e^{\sigma_0 t}\int_{C_R}\left|e^{Rt e^{iφ}}\right|dφ    (12.2-9)

and so:

    \left|\int_{C_R}\frac{e^{st}}{s}\,ds\right| \le e^{\sigma_0 t}\int_{C_R}\left|e^{Rt\cos φ}\,e^{iRt\sin φ}\right|dφ    (12.2-10)

We can integrate on the range π/2 ≤ φ ≤ 3π/2:

    \left|\int_{C_R}\frac{e^{st}}{s}\,ds\right| \le e^{\sigma_0 t}\int_{π/2}^{3π/2} e^{Rt\cos φ}\,dφ    (12.2-11)

Letting φ = θ + π/2:

    \left|\int_{C_R}\frac{e^{st}}{s}\,ds\right| \le e^{\sigma_0 t}\int_{0}^{π} e^{Rt\cos(θ + π/2)}\,dθ = e^{\sigma_0 t}\int_{0}^{π} e^{-Rt\sin θ}\,dθ    (12.2-12)

and so:

    \left|\int_{C_R}\frac{e^{st}}{s}\,ds\right| \le 2\,e^{\sigma_0 t}\int_{0}^{π/2} e^{-Rt\sin θ}\,dθ    (12.2-13)

where we have used the fact that e^{-Rt\sin θ} is symmetric about θ = π/2. On 0 ≤ θ ≤ π/2 we have sin θ greater than the straight line connecting the end-points of this range (see Figure 12.2-3):

    \sin θ \ge \frac{2θ}{π} \ge 0    (12.2-14)

This is known as Jordan's inequality.

Figure 12.2-3  Straight line connecting the end-points of sin θ in the range 0 ≤ θ ≤ π/2.

We can now write equation (12.2-13) as:

    \left|\int_{C_R}\frac{e^{st}}{s}\,ds\right| \le 2\,e^{\sigma_0 t}\int_{0}^{π/2} e^{-Rt\,2θ/π}\,dθ = \frac{π\,e^{\sigma_0 t}}{Rt}\left(1 - e^{-Rt}\right)    (12.2-15)

As R → ∞ we have:

    \int_{C_R}\frac{e^{st}}{s}\,ds \to 0    (12.2-16)

From equations (12.2-6), (12.2-16), and (12.2-4) we then have:

    \int_{\sigma_0 - i\infty}^{\sigma_0 + i\infty}\frac{e^{st}}{s}\,ds = \oint_C \frac{e^{st}}{s}\,ds    (12.2-17)

Since the integrand e^{st}/s has a simple pole at s = 0, using Proposition 11.9-1 the residue of the integrand is:

    \mathrm{Res}\left[\frac{e^{st}}{s}, 0\right] = \lim_{s \to 0}\left[s\,\frac{e^{st}}{s}\right] = \lim_{s \to 0} e^{st} = 1    (12.2-18)

Using Cauchy's residue theorem given in Proposition 11.10-1, we can now evaluate the contour integral in equation (12.2-17):

    \oint_C \frac{e^{st}}{s}\,ds = 2\pi i\,\mathrm{Res}\left[\frac{e^{st}}{s}, 0\right] = 2\pi i    (12.2-19)

Therefore we have:

    \int_{\sigma_0 - i\infty}^{\sigma_0 + i\infty}\frac{e^{st}}{s}\,ds = 2\pi i    (12.2-20)

and so the inverse Laplace transform of the unit step function is:

    \frac{1}{2\pi i}\int_{\sigma_0 - i\infty}^{\sigma_0 + i\infty}\frac{e^{st}}{s}\,ds = \frac{1}{2\pi i}(2\pi i) = 1 = U(t),\qquad t > 0    (12.2-21)
■

Proposition 12.2-2:
The inverse Laplace transform of the shifted unit step function is:

    U(t - t_0) = \mathcal{L}^{-1}\left\{\frac{e^{-st_0}}{s}\right\} = \frac{1}{2\pi i}\int_{\sigma_0 - i\infty}^{\sigma_0 + i\infty}\frac{e^{-st_0}}{s}\,e^{st}\,ds    (12.2-22)

Proof:
Letting t = τ - τ_0 we can write equation (12.2-1) as:

    U(τ - τ_0) = \frac{1}{2\pi i}\int_{\sigma_0 - i\infty}^{\sigma_0 + i\infty}\frac{1}{s}\,e^{s(τ - τ_0)}\,ds    (12.2-23)

or changing notation, the inverse Laplace transform of the shifted unit step function is:

    U(t - t_0) = \frac{1}{2\pi i}\int_{\sigma_0 - i\infty}^{\sigma_0 + i\infty}\frac{e^{-st_0}}{s}\,e^{st}\,ds = \mathcal{L}^{-1}\left\{\frac{e^{-st_0}}{s}\right\}    (12.2-24)

The Laplace transform of the shifted unit step function is:

    \mathcal{L}\{U(t - t_0)\} = \frac{e^{-st_0}}{s} = F(s)    (12.2-25)

as given in Proposition 5.1-1.
■

12.3  INVERSE LAPLACE TRANSFORM OF A UNIT IMPULSE FUNCTION

Proposition 12.3-1:
The inverse Laplace transform for the shifted unit impulse function is:

    δ(t - t_0) = \mathcal{L}^{-1}\left\{e^{-st_0}\right\} = \frac{1}{2\pi i}\int_{\sigma_0 - i\infty}^{\sigma_0 + i\infty} e^{-st_0}\,e^{st}\,ds    (12.3-1)

Proof:
Since the shifted unit impulse function δ(t - t_0) is equal to the derivative of the shifted unit step function:

    δ(t - t_0) = \frac{d}{dt}U(t - t_0)    (12.3-2)

we have from Proposition 12.2-2:

    δ(t - t_0) = \frac{d}{dt}\left[\frac{1}{2\pi i}\int_{\sigma_0 - i\infty}^{\sigma_0 + i\infty}\frac{e^{-st_0}}{s}\,e^{st}\,ds\right]    (12.3-3)

Since the exponential function is continuous, and since s and t are independent, we can move the derivative inside the integral. We then obtain:

    δ(t - t_0) = \frac{1}{2\pi i}\int_{\sigma_0 - i\infty}^{\sigma_0 + i\infty} e^{-st_0}\,e^{st}\,ds    (12.3-4)

and so the inverse Laplace transform of the shifted unit impulse function is:

    δ(t - t_0) = \mathcal{L}^{-1}\left\{e^{-st_0}\right\} = \frac{1}{2\pi i}\int_{\sigma_0 - i\infty}^{\sigma_0 + i\infty} e^{-st_0}\,e^{st}\,ds    (12.3-5)

The Laplace transform of the shifted unit impulse function is:

    \mathcal{L}\{δ(t - t_0)\} = e^{-st_0} = F(s)    (12.3-6)

as given in Proposition 5.4-1.
■

We see from equations (12.2-21), (12.2-24), and (12.3-5) that the inverse Laplace transforms of unit step functions, shifted unit step functions, and unit impulse functions all have the form known as the Bromwich contour integral:

    f(t) = \frac{1}{2\pi i}\int_{\sigma_0 - i\infty}^{\sigma_0 + i\infty} F(s)\,e^{st}\,ds    (12.3-7)

We will now show that the inverse Laplace transforms of functions of exponential order also have the same form.

12.4  INVERSE LAPLACE TRANSFORM OF A FUNCTION OF EXPONENTIAL ORDER

Proposition 12.4-1, Existence of the Inverse Laplace Transform:
If f(t) is a piecewise continuous function of exponential order on [0, ∞) with \mathcal{L}\{f(t)\} = F(s), then the inverse Laplace transform exists where Re s > σ_0, and is given by:

    f(t) = \mathcal{L}^{-1}\{F(s)\} = \frac{1}{2\pi i}\int_{\sigma_0 - i\infty}^{\sigma_0 + i\infty} F(s)\,e^{st}\,ds    (12.4-1)

Proof:
Since the function f(t) is taken to be of exponential order, we have:

    |f(t)| \le M\,e^{\sigma_0 t},\qquad t \ge T_0    (12.4-2)

where M, σ_0, and T_0 are all nonnegative real constants. The Laplace transform:

    F(s) = \int_{0^-}^{\infty} f(t)\,e^{-st}\,dt = \mathcal{L}\{f(t)\}    (12.4-3)

will then exist (see Section 1.4). All the singularities will fall to the left of the line segment specified by s = σ_0.

Multiplying equation (12.4-3) by e^{st}/(2\pi i) and integrating, we obtain:

    \frac{1}{2\pi i}\int_{\sigma_0 - i\infty}^{\sigma_0 + i\infty} F(s)\,e^{st}\,ds = \frac{1}{2\pi i}\int_{\sigma_0 - i\infty}^{\sigma_0 + i\infty} e^{st}\int_{0^-}^{\infty} f(τ)\,e^{-sτ}\,dτ\,ds    (12.4-4)

Since the integrals are uniformly convergent (as shown in Proposition 1.4-7), we can change the order of integration:

    \frac{1}{2\pi i}\int_{\sigma_0 - i\infty}^{\sigma_0 + i\infty} F(s)\,e^{st}\,ds = \int_{0^-}^{\infty} f(τ)\,\frac{1}{2\pi i}\int_{\sigma_0 - i\infty}^{\sigma_0 + i\infty} e^{st}\,e^{-sτ}\,ds\,dτ    (12.4-5)

Using Proposition 12.3-1 we know that the inner integral exists and is given by:

    δ(t - τ) = \frac{1}{2\pi i}\int_{\sigma_0 - i\infty}^{\sigma_0 + i\infty} e^{-sτ}\,e^{st}\,ds    (12.4-6)

and so equation (12.4-5) becomes:

    \frac{1}{2\pi i}\int_{\sigma_0 - i\infty}^{\sigma_0 + i\infty} F(s)\,e^{st}\,ds = \int_{0^-}^{\infty} f(τ)\,δ(t - τ)\,dτ = f(t)    (12.4-7)

Therefore:

    f(t) = \mathcal{L}^{-1}\{F(s)\} = \frac{1}{2\pi i}\int_{\sigma_0 - i\infty}^{\sigma_0 + i\infty} F(s)\,e^{st}\,ds    (12.4-8)

as given in equations (6.2-5) and (12.3-7). The inverse Laplace transform of f(t) exists where Re s > σ_0 if f(t) is a piecewise continuous function of exponential order on [0, ∞).
■

12.5  RESIDUE INTEGRATION OF THE INVERSE LAPLACE TRANSFORM

The Laplace transform will be finite in its region of convergence since it is holomorphic there. Any singularities in the Laplace transform must then occur in the s-plane to the left of a vertical line through the abscissa of simple convergence.

12.5.1  FINITE NUMBER OF POLES

If all singularities of the Laplace transform are in the form of a finite number of poles, a Bromwich contour can be used with residue integration to determine the inverse Laplace transform. The contour will include a segment of the vertical line Re s = σ_0 and a semicircle that encloses all the poles. The contour will then be similar to that shown in Figure 12.2-2, but it will enclose a number of poles, none of which need be at the coordinate system origin.

Proposition 12.5-1:
If f(t) is a piecewise continuous function of exponential order on [0, ∞) with Laplace transform F(s), and if F(s)e^{st} has only N simple poles s_n, then the inverse Laplace transform:

    f(t) = \mathcal{L}^{-1}\{F(s)\} = \frac{1}{2\pi i}\int_{\sigma_0 - i\infty}^{\sigma_0 + i\infty} F(s)\,e^{st}\,ds    (12.5-1)

is given by:

    f(t) = \sum_{n=1}^{N}\mathrm{Res}\left[F(s)\,e^{st}, s_n\right]    (12.5-2)

Proof:
We will use the semicircular contour C shown in Figure 12.2-2 to evaluate the integral in equation (12.5-1). The contour C is composed of two curves: a segment C_L of the vertical line Re s = σ_0 and a semicircle C_R of radius R centered at (σ_0, 0).

If the Laplace transform F(s) has poles s_n where n = 1, 2, \ldots, N, then R is taken to be large enough so that the contour C encloses all these poles.

We can now write the integral in equation (12.5-1) as a contour integral on C, where C consists of C = C_L ∪ C_R:

    \oint_C F(s)\,e^{st}\,ds = \int_{C_L} F(s)\,e^{st}\,ds + \int_{C_R} F(s)\,e^{st}\,ds    (12.5-3)

We will evaluate each of the integrals on the right side of equation (12.5-3) as the radius R → ∞.

For the integral along path C_L we have:

    \int_{C_L} F(s)\,e^{st}\,ds = \int_{\sigma_0 - iR}^{\sigma_0 + iR} F(s)\,e^{st}\,ds    (12.5-4)

Path C_L is the line extending from σ_0 - iR to σ_0 + iR where the constant σ_0 is the abscissa of absolute convergence. As R → ∞ we have the desired integral given in equation (12.5-1).

To evaluate the integral along the path C_R, we will let s = σ_0 + R e^{iφ} so that ds = iR e^{iφ}\,dφ. We then have:

    \int_{C_R} F(s)\,e^{st}\,ds = \int_{C_R} e^{(\sigma_0 + R e^{iφ})t}\,F(\sigma_0 + R e^{iφ})\,iR e^{iφ}\,dφ    (12.5-5)

and so:

    \left|\int_{C_R} F(s)\,e^{st}\,ds\right| \le \int_{C_R}\left|e^{(\sigma_0 + R e^{iφ})t}\right|\left|F(\sigma_0 + R e^{iφ})\right|\left|iR e^{iφ}\right|dφ    (12.5-6)

We have:

    \left|\int_{C_R} F(s)\,e^{st}\,ds\right| \le R\,e^{\sigma_0 t}\int_{C_R}\left|e^{Rt e^{iφ}}\right|\left|F(\sigma_0 + R e^{iφ})\right|dφ    (12.5-7)

For a function f(t) of exponential order, the Laplace transform F(s) is bounded such that |F(s)| ≤ M_R where M_R → 0 as R → ∞. We then have:

    \left|F(\sigma_0 + R e^{iφ})\right| \le M_R    (12.5-8)

and equation (12.5-7) becomes:

    \left|\int_{C_R} F(s)\,e^{st}\,ds\right| \le M_R\,R\,e^{\sigma_0 t}\int_{C_R}\left|e^{Rt\cos φ}\,e^{iRt\sin φ}\right|dφ    (12.5-9)

We can integrate on the range π/2 ≤ φ ≤ 3π/2:

    \left|\int_{C_R} F(s)\,e^{st}\,ds\right| \le M_R\,R\,e^{\sigma_0 t}\int_{π/2}^{3π/2}\left|e^{Rt\cos φ}\,e^{iRt\sin φ}\right|dφ    (12.5-10)

or

    \left|\int_{C_R} F(s)\,e^{st}\,ds\right| \le M_R\,R\,e^{\sigma_0 t}\int_{π/2}^{3π/2} e^{Rt\cos φ}\,dφ    (12.5-11)

Letting φ = θ + π/2:

    \left|\int_{C_R} F(s)\,e^{st}\,ds\right| \le M_R\,R\,e^{\sigma_0 t}\int_{0}^{π} e^{Rt\cos(θ + π/2)}\,dθ    (12.5-12)

or

    \left|\int_{C_R} F(s)\,e^{st}\,ds\right| \le 2\,M_R\,R\,e^{\sigma_0 t}\int_{0}^{π/2} e^{-Rt\sin θ}\,dθ    (12.5-13)

where we have used the fact that e^{-Rt\sin θ} is symmetric about θ = π/2. From Jordan's inequality given in equation (12.2-14) we have:

    \left|\int_{C_R} F(s)\,e^{st}\,ds\right| \le 2\,M_R\,R\,e^{\sigma_0 t}\int_{0}^{π/2} e^{-Rt\,2θ/π}\,dθ    (12.5-14)

or

    \left|\int_{C_R} e^{st}F(s)\,ds\right| \le \frac{π\,M_R\,e^{\sigma_0 t}}{t}\left(1 - e^{-Rt}\right)    (12.5-15)

Since M_R → 0 as R → ∞, we have:

    \int_{C_R} e^{st}F(s)\,ds \to 0    (12.5-16)

From equations (12.5-3), (12.5-4), and (12.5-16) we then have:

    \int_{\sigma_0 - iR}^{\sigma_0 + iR} F(s)\,e^{st}\,ds = \oint_C F(s)\,e^{st}\,ds    (12.5-17)

and from equations (12.5-17) and (12.4-8) we have:

    f(t) = \frac{1}{2\pi i}\int_{\sigma_0 - iR}^{\sigma_0 + iR} F(s)\,e^{st}\,ds = \frac{1}{2\pi i}\oint_C F(s)\,e^{st}\,ds    (12.5-18)

Finally from Cauchy's residue theorem (Proposition 11.10-2) we can write:

    f(t) = \frac{1}{2\pi i}\int_{\sigma_0 - i\infty}^{\sigma_0 + i\infty} F(s)\,e^{st}\,ds = \sum_{n=1}^{N}\mathrm{Res}\left[F(s)\,e^{st}, s_n\right]    (12.5-19)
■

Proposition 12.5-2:
If f(t) is a piecewise continuous function of exponential order with Laplace transform F(s), and if F(s)e^{st} has only N simple poles s_n, then the inverse Laplace transform is:

    f(t) = \sum_{n=1}^{N}\lim_{s \to s_n}\left[(s - s_n)\,F(s)\,e^{st}\right]    (12.5-20)

Proof:
Follows from Propositions 12.5-1 and 11.9-1.
■
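The residue formula (12.5-20) can be applied mechanically. The sketch below is illustrative only and is not part of the text: the transform F(s) = 1/((s + 1)(s + 2)) is an arbitrary choice with two simple poles, and sympy is assumed available. The script sums the limits prescribed by equation (12.5-20) and compares the result with sympy's own table-based inversion.

```python
# Illustrative sketch (not from the text): apply equation (12.5-20) with sympy.
import sympy as sp

s, t = sp.symbols('s t', positive=True)
F = 1 / ((s + 1) * (s + 2))           # arbitrary transform with two simple poles

poles = sp.roots(sp.denom(F), s)       # {-1: 1, -2: 1}
f_residues = sum(sp.limit((s - p) * F * sp.exp(s * t), s, p) for p in poles)

f_table = sp.inverse_laplace_transform(F, s, t)   # reference inversion
print(sp.simplify(f_residues))   # expected: exp(-t) - exp(-2*t)
print(sp.simplify(f_table))      # expected: exp(-t) - exp(-2*t) for t > 0
```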

Example 12.5-1
Determine the inverse Laplace transform f(t) of F(s) = \frac{1}{s + a}.

Solution:
The Laplace transform has a simple pole at s = -a. From Propositions 12.5-1 and 12.5-2 we then have:

    f(t) = \mathrm{Res}\left[\frac{e^{st}}{s + a}, -a\right] = \lim_{s \to -a}\left[(s + a)\frac{e^{st}}{s + a}\right] = e^{-at}

and so:

    f(t) = \mathcal{L}^{-1}\left\{\frac{1}{s + a}\right\} = e^{-at}

Example 12.5-2
Determine the inverse Laplace transform f(t) of F(s) = \frac{1}{s(s + a)}.

Solution:
The Laplace transform has simple poles at s = 0 and s = -a. From Proposition 12.5-1 we then have:

    f(t) = \mathrm{Res}\left[\frac{e^{st}}{s(s + a)}, 0\right] + \mathrm{Res}\left[\frac{e^{st}}{s(s + a)}, -a\right]

Using Proposition 12.5-2:

    f(t) = \lim_{s \to 0}\left[s\,\frac{e^{st}}{s(s + a)}\right] + \lim_{s \to -a}\left[(s + a)\frac{e^{st}}{s(s + a)}\right]

and so:

    f(t) = \mathcal{L}^{-1}\left\{\frac{1}{s(s + a)}\right\} = \frac{1}{a}\left(1 - e^{-at}\right)

Example 12.5-3
Determine the inverse Laplace transform f(t) of F(s) = \frac{1}{s^2}.

Solution:
The Laplace transform has a pole of order 2 at s = 0. From Propositions 12.5-1 and 11.9-5 we then have:

    f(t) = \mathrm{Res}\left[\frac{e^{st}}{s^2}, 0\right] = \lim_{s \to 0}\frac{d^{2-1}}{ds^{2-1}}\left[(s - 0)^2\,\frac{e^{st}}{s^2}\right]

or

    f(t) = \mathcal{L}^{-1}\left\{\frac{1}{s^2}\right\} = \lim_{s \to 0}\frac{d}{ds}e^{st} = \lim_{s \to 0} t\,e^{st} = t

Example 12.5-4
Determine the inverse Laplace transform of F(s) = \frac{1}{s^2 + a^2}.

Solution:
The Laplace transform has simple poles at s = ±ia. From Proposition 12.5-1 we then have:

    f(t) = \mathrm{Res}\left[\frac{e^{st}}{s^2 + a^2}, ia\right] + \mathrm{Res}\left[\frac{e^{st}}{s^2 + a^2}, -ia\right]

or

    f(t) = \lim_{s \to ia}\left[(s - ia)\frac{e^{st}}{(s - ia)(s + ia)}\right] + \lim_{s \to -ia}\left[(s + ia)\frac{e^{st}}{(s - ia)(s + ia)}\right]

Using Proposition 12.5-2:

    f(t) = \mathcal{L}^{-1}\left\{\frac{1}{s^2 + a^2}\right\} = \frac{e^{iat}}{2ia} - \frac{e^{-iat}}{2ia} = \frac{1}{a}\,\frac{e^{iat} - e^{-iat}}{2i} = \frac{1}{a}\sin(at)

12.5.2  BRANCH CUTS

If the residue theorem is used to integrate a multivalued function, it is important that the contour for the integral not enclose any branch point nor cross any branch cut of the function. If a Laplace transform F(s) is multivalued, then the branch point and the branch cut for F(s) must be taken into account when constructing the integration contour used to calculate the inverse Laplace transform of F(s). Both the branch point and the branch cut must be outside the contour, while any poles of F(s) included in the residue summation will be inside the contour.

A branch for F(s) must first be selected in order to perform the integration. Generally the branch selected and the contour chosen for evaluating the integral will be dependent upon the particular Laplace transform integral.

An example of the determination of an inverse Laplace transform when F(s) is multivalued is given in Example 12.5-5 for F(s) = 1/\sqrt{s}.

Example 12.5-5
Determine f(t) if its Laplace transform is F(s) = \frac{1}{\sqrt{s}}.

Solution:
The Laplace transform F(s) is multivalued (has two values) with a branch point at s = 0. We will use the contour C shown in Figure 12.5-1 to determine the inverse Laplace transform:

    f(t) = \mathcal{L}^{-1}\{F(s)\} = \frac{1}{2\pi i}\int_{\sigma_0 - i\infty}^{\sigma_0 + i\infty}\frac{1}{\sqrt{s}}\,e^{st}\,ds

The contour C is composed of the following contours:

    C = C_L ∪ C_1 ∪ C_2 ∪ L_1 ∪ C_r ∪ L_2 ∪ C_3 ∪ C_4

where C_r is the arc of a circle of radius r centered at (0, 0), and C_1, C_2, C_3 and C_4 are arcs of a circle of radius R centered at (0, 0).

Figure 12.5-1  Contour C = C_L ∪ C_1 ∪ C_2 ∪ L_1 ∪ C_r ∪ L_2 ∪ C_3 ∪ C_4 around a branch point and a branch cut in the s-plane for F(s) = 1/√s. The contour has a branch cut along the negative real axis and a branch point at the origin.

Cauchy's residue theorem (Proposition 11.10-2) for a function F(s) on the contour C shown in Figure 12.5-1 can be symbolically expressed as:

    \oint_C = \int_{C_L} + \int_{C_1} + \int_{C_2} + \int_{L_1} + \int_{C_r} + \int_{L_2} + \int_{C_3} + \int_{C_4} = 2\pi i\sum_{j=1}^{k}\mathrm{Res}\left[F(s)\,e^{st}, s_j\right]

where s_j are any poles of F(s) within the contour C. Since F(s) = 1/√s has no poles within C, we have from the Cauchy-Goursat theorem (Proposition 11.3-5):

    \oint_C = \int_{C_L} + \int_{C_1} + \int_{C_2} + \int_{L_1} + \int_{C_r} + \int_{L_2} + \int_{C_3} + \int_{C_4} = 0

We will now evaluate each of these integrals separately.

Along the contour C_L we have as R → ∞:

    \int_{C_L}\frac{1}{\sqrt{s}}\,e^{st}\,ds = \int_{\sigma_0 - i\infty}^{\sigma_0 + i\infty}\frac{1}{\sqrt{s}}\,e^{st}\,ds

Along the contour C_1 we will let s = R e^{iφ} so that ds = iR e^{iφ}\,dφ:

    \int_{C_1}\frac{1}{\sqrt{s}}\,e^{st}\,ds = \int_{β_1}^{π/2}\frac{e^{Rt e^{iφ}}}{R^{1/2}e^{iφ/2}}\,iR e^{iφ}\,dφ

where β_1 is the angle at which the arc C_1 of radius R meets the end of the path C_L (see Figure 12.5-1), with 0 < β_1 < π/2 and R\cos β_1 = σ_0. We then have:

    \left|\int_{C_1}\frac{1}{\sqrt{s}}\,e^{st}\,ds\right| \le R^{1/2}\int_{β_1}^{π/2} e^{Rt\cos φ}\,dφ \le R^{1/2}\,e^{\sigma_0 t}\left(\frac{π}{2} - β_1\right)

Since R(π/2 - β_1) → σ_0 as R → ∞, we have along the contour C_1 as R → ∞:

    \int_{C_1}\frac{1}{\sqrt{s}}\,e^{st}\,ds \to 0

Along the contour C_2 we will let s = R e^{iφ} so that ds = iR e^{iφ}\,dφ:

    \int_{C_2}\frac{1}{\sqrt{s}}\,e^{st}\,ds = \int_{π/2}^{π - β_2}\frac{e^{Rt e^{iφ}}}{R^{1/2}e^{iφ/2}}\,iR e^{iφ}\,dφ

where β_2 is the small angle shown in Figure 12.5-2. We have:

    \left|\int_{C_2}\frac{1}{\sqrt{s}}\,e^{st}\,ds\right| \le R^{1/2}\int_{π/2}^{π - β_2} e^{Rt\cos φ}\,dφ

Let θ = φ - π/2 so that:

    \left|\int_{C_2}\frac{1}{\sqrt{s}}\,e^{st}\,ds\right| \le R^{1/2}\int_{0}^{π/2 - β_2} e^{-Rt\sin θ}\,dθ

From Jordan's inequality given in equation (12.2-14) we have:

    \left|\int_{C_2}\frac{1}{\sqrt{s}}\,e^{st}\,ds\right| \le R^{1/2}\int_{0}^{π/2 - β_2} e^{-2Rtθ/π}\,dθ = R^{1/2}\,\frac{π}{2Rt}\left[1 - e^{-(2Rt/π)(π/2 - β_2)}\right]

Along the contour C_2 we then have as R → ∞:

    \int_{C_2}\frac{1}{\sqrt{s}}\,e^{st}\,ds \to 0

Figure 12.5-2  Contour C_2 geometry for F(s) = 1/√s.

Similarly for contours C_3 and C_4 as R → ∞:

    \int_{C_3}\frac{1}{\sqrt{s}}\,e^{st}\,ds \to 0,\qquad \int_{C_4}\frac{1}{\sqrt{s}}\,e^{st}\,ds \to 0

Along the contour L_1 we will let s = x e^{iθ} = x e^{iπ} = -x and ds = e^{iπ}\,dx = -dx. We then have:

    \int_{L_1}\frac{1}{\sqrt{s}}\,e^{st}\,ds = \int_{R}^{r} e^{st}\frac{1}{\sqrt{s}}\,ds = -\int_{r}^{R}\frac{e^{-xt}}{x^{1/2}e^{iπ/2}}\,dx = \frac{1}{i}\int_{r}^{R}\frac{e^{-xt}}{x^{1/2}}\,dx

Along the contour L_2 we will let s = x e^{iθ} = x e^{-iπ} = -x and ds = e^{-iπ}\,dx = -dx, where x is taken positive. We then have:

    \int_{L_2}\frac{1}{\sqrt{s}}\,e^{st}\,ds = \int_{-r}^{-R} e^{st}\frac{1}{\sqrt{s}}\,ds = -\int_{r}^{R}\frac{e^{-xt}}{x^{1/2}e^{-iπ/2}}\,dx = \frac{1}{i}\int_{r}^{R}\frac{e^{-xt}}{x^{1/2}}\,dx

Along the contour C_r we will let s = r e^{iθ} and ds = ir e^{iθ}\,dθ:

    \int_{C_r}\frac{1}{\sqrt{s}}\,e^{st}\,ds = \int_{π}^{-π}\frac{e^{r e^{iθ}t}}{r^{1/2}e^{iθ/2}}\,ir e^{iθ}\,dθ = i\,r^{1/2}\int_{π}^{-π} e^{r e^{iθ}t}\,e^{iθ/2}\,dθ \to 0\quad\text{as } r \to 0

We then have for the contour C:

    \int_{\sigma_0 - i\infty}^{\sigma_0 + i\infty}\frac{1}{\sqrt{s}}\,e^{st}\,ds + \frac{1}{i}\int_{r}^{R}\frac{e^{-xt}}{x^{1/2}}\,dx + \frac{1}{i}\int_{r}^{R}\frac{e^{-xt}}{x^{1/2}}\,dx = 0

Multiplying by 1/(2πi):

    \frac{1}{2\pi i}\int_{\sigma_0 - i\infty}^{\sigma_0 + i\infty}\frac{1}{\sqrt{s}}\,e^{st}\,ds - \frac{1}{π}\int_{r}^{R}\frac{e^{-xt}}{x^{1/2}}\,dx = 0

Letting R → ∞ and r → 0:

    f(t) = \frac{1}{2\pi i}\int_{\sigma_0 - i\infty}^{\sigma_0 + i\infty}\frac{1}{\sqrt{s}}\,e^{st}\,ds = \frac{1}{π}\int_{0}^{\infty}\frac{e^{-xt}}{x^{1/2}}\,dx

Let u = (xt)^{1/2} so that du = \frac{1}{2}(xt)^{-1/2}\,t\,dx. We then have:

    f(t) = \frac{1}{2\pi i}\int_{\sigma_0 - i\infty}^{\sigma_0 + i\infty}\frac{1}{\sqrt{s}}\,e^{st}\,ds = \frac{2}{π\sqrt{t}}\int_{0}^{\infty} e^{-u^2}\,du

Using the integral identity:

    \int_{0}^{\infty} e^{-y^2}\,dy = \frac{\sqrt{π}}{2}

we obtain:

    f(t) = \frac{1}{2\pi i}\int_{\sigma_0 - i\infty}^{\sigma_0 + i\infty}\frac{1}{\sqrt{s}}\,e^{st}\,ds = \frac{1}{\sqrt{π t}}

12.5.3  INFINITE NUMBER OF POLES

If the Laplace transform of a function f(t) has an infinite number of poles to the left of the abscissa of convergence, then it is not possible to use partial fraction expansion since the denominator of the transform is not a finite polynomial. Therefore transform tables cannot be used for inverting the Laplace transform.

For an infinite number of poles, residue integration uses an initial Bromwich contour enclosing a finite number of poles (see Figure 12.5-3). The contour is then expanded to include an infinite number of poles, and residue integration takes the form of an infinite sum of residues.

Figure 12.5-3  Infinite number of poles.

Example 12.5-6
Determine the inverse Laplace transform of F(s) = \frac{1}{s\,\sinh(s)}.

Solution:
The Taylor series expansion of sinh(s) is:

    \sinh(s) = s + \frac{s^3}{3!} + \frac{s^5}{5!} + \cdots = \sum_{n=0}^{\infty}\frac{s^{2n+1}}{(2n+1)!}

or

    s\,\sinh(s) \approx s^2 + O(s^4)

and so F(s) has a pole of order 2 at s = 0. In addition, the Laplace transform F(s) has an infinite number of simple poles along the imaginary axis as can be seen from:

    \sinh(iθ) = i\sin(θ)

We will use the contour C shown in Figure 12.5-3 to determine the inverse Laplace transform:

    f(t) = \mathcal{L}^{-1}\{F(s)\} = \frac{1}{2\pi i}\int_{\sigma_0 - i\infty}^{\sigma_0 + i\infty} e^{st}\,\frac{1}{s\,\sinh(s)}\,ds

From Proposition 12.5-1 we have:

    f(t) = \frac{1}{2\pi i}\int_{\sigma_0 - iR}^{\sigma_0 + iR} e^{st}\,\frac{1}{s\,\sinh(s)}\,ds = \sum_{n}\mathrm{Res}\left[F(s)\,e^{st}, s_n\right]

where s_n are the poles of F(s) within the contour C:

    s_n = ±nπi,\qquad n = 1, 2, 3, \ldots

together with the pole at s = 0.

The residue at s = 0 is:

    \mathrm{Res}\left[F(s)\,e^{st}, s = 0\right] = \lim_{s \to 0}\frac{d}{ds}\left[(s - 0)^2\,\frac{e^{st}}{s\,\sinh(s)}\right]

or

    \mathrm{Res}\left[F(s)\,e^{st}, s = 0\right] = \lim_{s \to 0}\frac{d}{ds}\left[\frac{s\,e^{st}}{\sinh(s)}\right]

and so:

    \mathrm{Res}\left[F(s)\,e^{st}, s = 0\right] = \lim_{s \to 0}\left[\frac{e^{st}}{\sinh(s)} + \frac{s\,t\,e^{st}}{\sinh(s)} - \frac{s\cosh(s)\,e^{st}}{\sinh^2(s)}\right]

Approximating the hyperbolic functions near s = 0 with:

    \sinh(s) \approx s + O(s^3),\qquad \cosh(s) \approx 1 + O(s^2)

we have:

    \mathrm{Res}\left[F(s)\,e^{st}, s = 0\right] = \lim_{s \to 0}\left[\frac{e^{st}}{s} + t\,e^{st} - \frac{s\,e^{st}}{s^2}\right] = \lim_{s \to 0}\left[t\,e^{st}\right] = t

The residue at s_n = ±nπi is:

    \mathrm{Res}\left[F(s)\,e^{st}, s = s_n\right] = \lim_{s \to s_n}\left[(s - s_n)\,\frac{e^{st}}{s\,\sinh(s)}\right]

or using L'Hôpital's rule:

    \mathrm{Res}\left[F(s)\,e^{st}, s = s_n\right] = \lim_{s \to s_n}\frac{e^{st}}{\sinh(s) + s\cosh(s)}

Since:

    \cosh(±nπi) = \cos(nπ) = (-1)^n,\qquad \sinh(±nπi) = ±i\sin(nπ) = 0

we have:

    \mathrm{Res}\left[F(s)\,e^{st}, s = ±nπi\right] = \frac{e^{±nπit}}{(-1)^n(±nπi)}

and so summing all residues in a Bromwich contour extending to infinity:

    f(t) = t + \sum_{n=1}^{\infty}\frac{(-1)^n e^{+nπit}}{nπi} - \sum_{n=1}^{\infty}\frac{(-1)^n e^{-nπit}}{nπi}

or

    f(t) = t + 2\sum_{n=1}^{\infty}(-1)^n\,\frac{\sin(nπt)}{nπ}
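Both branch-cut and infinite-pole results can be sanity-checked numerically. The sketch below is illustrative only and is not from the text: it compares mpmath's numerical inversion of 1/√s with the 1/√(πt) result of Example 12.5-5, and evaluates the truncated residue series of Example 12.5-6 at a few points. The truncation level, test points, and the remark about the staircase behavior of the series are our own assumptions for the demonstration, not statements from the text.

```python
# Illustrative numerical check (not from the text) of Examples 12.5-5 and 12.5-6.
import mpmath as mp

# Example 12.5-5: L^{-1}{1/sqrt(s)} = 1/sqrt(pi*t)
for t in [0.5, 1.0, 2.0]:
    f_num = mp.invertlaplace(lambda s: 1 / mp.sqrt(s), t, method='talbot')
    print(t, f_num, 1 / mp.sqrt(mp.pi * t))

# Example 12.5-6: truncated residue series t + 2*sum((-1)^n*sin(n*pi*t)/(n*pi))
def f_series(t, N=400):
    return t + 2 * mp.fsum((-1) ** n * mp.sin(n * mp.pi * t) / (n * mp.pi)
                           for n in range(1, N + 1))

# The partial sums settle near 0 for 0 < t < 1 and near 2 for 1 < t < 3
# (a staircase; this expected behavior is our own observation, not the text's).
for t in [0.5, 1.5, 2.5]:
    print(t, f_series(t))
```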

12.6  POLE POSITION DETERMINES SYSTEM STABILITY

As noted in Chapter 6, a Laplace transform F(s) generally takes the form of a fraction having a numerator P(s) and a denominator Q(s) that are polynomials in the variable s:

    F(s) = \frac{P(s)}{Q(s)} = \frac{a_m s^m + a_{m-1}s^{m-1} + \cdots + a_1 s + a_0}{b_n s^n + b_{n-1}s^{n-1} + \cdots + b_1 s + b_0}    (12.6-1)

Assuming this fraction is a proper rational fraction where n > m, the first step in obtaining an inverse Laplace transform is to factor the denominator into first-order factors having real coefficients so that we obtain:

    F(s) = \frac{P(s)}{b_n(s - s_1)(s - s_2)(s - s_3)\cdots(s - s_n)}    (12.6-2)

where s_n are the roots of Q(s).

A rational fraction transform that has been expanded into partial fractions will have a term for each pole in the contour C of the transform. The inverse transform will depend upon both the nature and position of the poles of F(s) in the s-plane. The damping, growth, or steady oscillation of a function of time (signal) is then characterized by pole position as follows (see Figure 12.6-1):

Figure 12.6-1  Pole positions indicated by ×.

1.  Conjugate poles with negative real parts result in damped harmonic oscillations which tend towards zero:

    f(t) = e^{-αt}\left(A\cos ωt + B\sin ωt\right)    (12.6-3)

    A physical system whose output is described by equation (12.6-3) is considered to be stable.

2.  Conjugate simple poles on the imaginary axis result in stationary harmonic oscillation:

    f(t) = A\cos ωt + B\sin ωt    (12.6-4)

    A physical system whose output is described by equation (12.6-4) is considered to be stable.

3.  Conjugate poles with positive real parts result in increasing harmonic oscillations which tend towards infinity:

    f(t) = e^{αt}\left(A\cos ωt + B\sin ωt\right)    (12.6-5)

    A physical system whose output is described by equation (12.6-5) is considered to be unstable.

4.  Simple or higher order poles on the negative real axis result in decreasing exponential functions which tend towards zero:

    f(t) = A\,t^n e^{-α_1 t} + B\,t^n e^{-α_2 t}    (12.6-6)

    A physical system whose output is described by equation (12.6-6) is considered to be stable.

5.  Simple or higher order poles on the positive real axis result in increasing exponential functions such as e^{αt}, t\,e^{αt}, t^2 e^{αt}, t^3 e^{αt}, \ldots which tend towards infinity:

    f(t) = A\,t^n e^{α_1 t} + B\,t^n e^{α_2 t}    (12.6-7)

    A physical system whose output is described by equation (12.6-7) is considered to be unstable.

We see that a function f(t) will have a unique final value only if all the poles of its Laplace transform are located left of the imaginary axis.
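The pole-position criteria above translate directly into a check on the roots of the denominator Q(s). The sketch below is an illustration, not part of the text: it uses numpy to locate the poles of a few assumed denominators and classify them in the spirit of cases 1 to 3 above. The coefficient lists and tolerance are arbitrary choices for the demonstration.

```python
# Illustrative sketch (not from the text): classify stability from pole positions.
import numpy as np

def classify(den_coeffs, tol=1e-9):
    """den_coeffs: coefficients of Q(s), highest power first."""
    poles = np.roots(den_coeffs)
    if np.all(poles.real < -tol):
        return poles, "stable (all poles left of the imaginary axis)"
    if np.any(poles.real > tol):
        return poles, "unstable (a pole lies right of the imaginary axis)"
    return poles, "poles on the imaginary axis (sustained oscillation)"

# Q(s) = s^2 + 2s + 5  -> poles -1 +/- 2i  (case 1: damped oscillation)
# Q(s) = s^2 + 4       -> poles    +/- 2i  (case 2: stationary oscillation)
# Q(s) = s^2 - 2s + 5  -> poles  1 +/- 2i  (case 3: growing oscillation)
for q in ([1, 2, 5], [1, 0, 4], [1, -2, 5]):
    poles, verdict = classify(q)
    print(q, poles, verdict)
```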

Chapter 13

Solution of Linear Ordinary Differential Equations having Variable Coefficients

    \mathcal{L}\{t\,y''(t)\} = -s^2 Y'(s) - 2sY(s) + y(0)

Most differential equations with variable coefficients are insolvable using Laplace transforms. There are exceptions, however, as we will show in this chapter. The Laplace transform of a differential equation with variable coefficients will never result in an algebraic equation, but always in another differential equation.

13.1  DIFFERENTIAL EQUATIONS OF SECOND ORDER

A second order linear differential equation of f(t) with the variable coefficient t may be Laplace transformed into a first order linear differential equation of F(s) with variable coefficients in the form of polynomials of s. This transformed differential equation must then be solved for the transformed dependent variable.

Proposition 13.1-1:
If f(t) is a piecewise-smooth function of exponential order, and if \mathcal{L}\{f(t)\} = F(s), then:

    \mathcal{L}\{t\,f'(t)\} = -sF'(s) - F(s)    (13.1-1)

    \mathcal{L}\{t\,f''(t)\} = -s^2 F'(s) - 2sF(s) + f(0)    (13.1-2)

Proof:
From Proposition 4.1-1, we have:

    \mathcal{L}\{t\,f(t)\} = -\frac{d}{ds}\mathcal{L}\{f(t)\} = -F'(s)    (13.1-3)

Therefore we can write:

    \mathcal{L}\{t\,f'(t)\} = -\frac{d}{ds}\mathcal{L}\{f'(t)\} = -\frac{d}{ds}\left[sF(s) - f(0)\right]    (13.1-4)

or

    \mathcal{L}\{t\,f'(t)\} = -sF'(s) - F(s)    (13.1-5)

We also have:

    \mathcal{L}\{t\,f''(t)\} = -\frac{d}{ds}\mathcal{L}\{f''(t)\}    (13.1-6)

or

    \mathcal{L}\{t\,f''(t)\} = -\frac{d}{ds}\left[s^2 F(s) - s\,f(0) - f'(0)\right]    (13.1-7)

Therefore:

    \mathcal{L}\{t\,f''(t)\} = -s^2 F'(s) - 2sF(s) + f(0)    (13.1-8)
■
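Proposition 13.1-1 can be spot-checked symbolically. The sketch below is illustrative only and is not part of the text: it takes f(t) = sin(t) as an arbitrary test function, computes L{t f'(t)} and L{t f''(t)} directly with sympy, and compares them with the right-hand sides of equations (13.1-1) and (13.1-2).

```python
# Illustrative check (not from the text) of Proposition 13.1-1 for f(t) = sin(t).
import sympy as sp

t, s = sp.symbols('t s', positive=True)
f = sp.sin(t)
F = sp.laplace_transform(f, t, s, noconds=True)            # F(s) = 1/(s^2 + 1)

lhs1 = sp.laplace_transform(t * sp.diff(f, t), t, s, noconds=True)
rhs1 = -s * sp.diff(F, s) - F                               # eq. (13.1-1)

lhs2 = sp.laplace_transform(t * sp.diff(f, t, 2), t, s, noconds=True)
rhs2 = -s**2 * sp.diff(F, s) - 2 * s * F + f.subs(t, 0)     # eq. (13.1-2)

print(sp.simplify(lhs1 - rhs1))   # expected: 0
print(sp.simplify(lhs2 - rhs2))   # expected: 0
```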

Example 13.1-1
Determine the solution of the differential equation:

    t\,y''(t) + 2y'(t) + t\,y(t) = \sin(t)

with initial condition: y(0) = 0.

Solution:
From Propositions 13.1-1, 3.1-1, and 4.1-1 we have:

    \left[-s^2 Y'(s) - 2sY(s) + y(0)\right] + 2\left[sY(s) - y(0)\right] - Y'(s) = \frac{1}{s^2 + 1}

Applying the initial condition, we obtain:

    -s^2 Y'(s) - Y'(s) = \frac{1}{s^2 + 1}

and so:

    Y'(s) = \frac{d}{ds}Y(s) = -\frac{1}{(s^2 + 1)^2}

Using the transform table in Appendix E and Proposition 4.1-1:

    \frac{1}{(s^2 + 1)^2} = \mathcal{L}\left\{\frac{1}{2}\left(\sin(t) - t\cos(t)\right)\right\} = \mathcal{L}\{t\,y(t)\}

Therefore:

    y(t) = \frac{1}{2}\left[\frac{\sin(t)}{t} - \cos(t)\right]

This is consistent with the initial condition since:

    \lim_{t \to 0} y(t) = \frac{1}{2}\lim_{t \to 0}\left[\frac{\sin(t)}{t} - \cos(t)\right] = \frac{1}{2}\lim_{t \to 0}\left[\cos(t) - \cos(t)\right] = 0

Example 13.1-2
Determine the solution of the differential equation:

    t\,y''(t) - 2t\,y'(t) + 2y(t) = 4

with initial conditions: y(0) = 2, y'(0) = 1.

Solution:
From Proposition 13.1-1 we have:

    \left[-s^2 Y'(s) - 2sY(s) + y(0)\right] - 2\left[-sY'(s) - Y(s)\right] + 2Y(s) = \frac{4}{s}

Applying the initial conditions, we obtain:

    \left[-s^2 Y'(s) - 2sY(s) + 2\right] - 2\left[-sY'(s) - Y(s)\right] + 2Y(s) = \frac{4}{s}

or

    (2s - s^2)Y'(s) + (4 - 2s)Y(s) + 2 = \frac{4}{s}

and so:

    s(2 - s)Y'(s) + 2(2 - s)Y(s) = \frac{4}{s} - 2 = \frac{2(2 - s)}{s}

We then have:

    Y'(s) + \frac{2}{s}Y(s) = \frac{2}{s^2}

This is an ODE that has the solution:

    Y(s) = \frac{2}{s} + \frac{c}{s^2}

Using the transform table in Appendix E:

    y(t) = 2 + ct

From the initial condition y'(0) = 1 we have c = 1, and so:

    y(t) = 2 + t

Example 13.1-3
Determine the solution of the differential equation:

    t\,y''(t) + y'(t) + t\,y(t) = 0

with initial conditions: y(0) = 1, y'(0) = 0.

Solution:
From Propositions 13.1-1 and 4.1-1 we have:

    \left[-s^2 Y'(s) - 2sY(s) + y(0)\right] + \left[sY(s) - y(0)\right] - Y'(s) = 0

Applying the initial conditions, we obtain:

    \left[-s^2 Y'(s) - 2sY(s) + 1\right] + \left[sY(s) - 1\right] - Y'(s) = 0

We then have:

    (s^2 + 1)Y'(s) + sY(s) = 0

or

    \frac{dY(s)}{Y(s)} + \frac{s\,ds}{s^2 + 1} = 0

This is an ODE that has the solution:

    \ln Y(s) + \frac{1}{2}\ln(s^2 + 1) = \ln c

or

    Y(s) = \frac{c}{\sqrt{s^2 + 1}}

Using the transform table in Appendix E:

    y(t) = c\,J_0(t)

From the initial condition y(0) = 1 we have c = 1, and so:

    y(t) = J_0(t)
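The answer of Example 13.1-3 can be verified by direct substitution. The sketch below is illustrative only and is not part of the text: it uses sympy's besselj to check numerically that y(t) = J_0(t) satisfies t y'' + y' + t y = 0 and the stated initial conditions. The test points are arbitrary choices.

```python
# Illustrative check (not from the text) that y(t) = J0(t) solves Example 13.1-3.
import sympy as sp

t = sp.symbols('t', positive=True)
y = sp.besselj(0, t)

residual = t * sp.diff(y, t, 2) + sp.diff(y, t) + t * y
for val in [0.5, 1.0, 2.0]:
    print(val, residual.subs(t, val).evalf())   # expected: ~0 at each test point

print(sp.limit(y, t, 0))              # expected: 1  -> y(0) = 1
print(sp.limit(sp.diff(y, t), t, 0))  # expected: 0  -> y'(0) = 0
```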

13.2  DIFFERENTIAL EQUATIONS OF HIGHER ORDER

Proposition 13.2-1:
If f(t) and f^{(n)}(t) are piecewise-smooth functions of exponential order on [0, ∞), then:

    \mathcal{L}\{t^m f^{(n)}(t)\} = (-1)^m\frac{d^m}{ds^m}\left[s^n F(s) - s^{n-1}f(0^-) - s^{n-2}f'(0^-) - s^{n-3}f''(0^-) - \cdots - f^{(n-1)}(0^-)\right]    (13.2-1)

where n and m are positive integers, and \mathcal{L}\{f(t)\} = F(s).

Proof:
From Proposition 3.1-3, we have:

    \mathcal{L}\{f^{(n)}(t)\} = s^n F(s) - s^{n-1}f(0^-) - s^{n-2}f'(0^-) - \cdots - f^{(n-1)}(0^-)    (13.2-2)

Letting g(t) = f^{(n)}(t), from Proposition 4.1-3 we can write:

    \mathcal{L}\{t^m g(t)\} = (-1)^m G^{(m)}(s)    (13.2-3)

where \mathcal{L}\{g(t)\} = G(s). Therefore we have:

    \mathcal{L}\{t^m f^{(n)}(t)\} = (-1)^m\frac{d^m}{ds^m}\left[s^n F(s) - s^{n-1}f(0^-) - s^{n-2}f'(0^-) - \cdots - f^{(n-1)}(0^-)\right]    (13.2-4)
■

Using equation (13.2-1) we can write:

    \mathcal{L}\{t^2 f'(t)\} = (-1)^2\frac{d^2}{ds^2}\left[sF(s) - f(0^-)\right]    (13.2-5)

or

    \mathcal{L}\{t^2 f'(t)\} = sF''(s) + 2F'(s)    (13.2-6)

We also have:

    \mathcal{L}\{t^2 f''(t)\} = (-1)^2\frac{d^2}{ds^2}\left[s^2 F(s) - s\,f(0^-) - f'(0^-)\right]    (13.2-7)

or

    \mathcal{L}\{t^2 f''(t)\} = s^2 F''(s) + 4sF'(s) + 2F(s)    (13.2-8)

Laplace transforms of derivatives are summarized in Appendix I.

From Proposition 13.2-1 we see that a linear differential equation with variable coefficients in the form of polynomials will be Laplace transformed into a differential equation having the order of the maximum degree of the polynomial coefficients. In general, therefore, only if the maximum degree of the polynomial coefficients is less than the order of the original linear differential equation will the method of Laplace transformation aid in solving the original differential equation.

Example 13.2-1
Determine the solution of the differential equation:

    y''(t) + t^2 y(t) = 0

with initial conditions: y(0) = 1, y'(0) = 0.

Solution:
From Proposition 13.2-1 we have:

    \left[s^2 Y(s) - s\,y(0^-) - y'(0^-)\right] + Y''(s) = 0

Applying the initial conditions, we obtain:

    \left[s^2 Y(s) - s\right] + Y''(s) = 0

We then have:

    Y''(s) + s^2 Y(s) = s

This differential equation is not simpler than the original differential equation. Therefore the method of Laplace transformation does not aid in solving the original differential equation.

Chapter 14

Evaluating Integral Equations with the Laplace Transform

    y(t) = f(t) + \int_{0}^{t} y(τ)\,K(t - τ)\,dτ

In this chapter we will consider the application of Laplace transform techniques to the solution of certain integral equations.

14.1  INTEGRAL EQUATIONS

An integral equation by definition is an equation having an unknown function that appears under an integral sign. The following are some examples of linear integral equations.

The Fredholm integral equation of the first kind:

    f(y) = \int_{a}^{b} \varphi(t)\,K(y, t)\,dt    (14.1-1)

where φ(t) is an unknown function and f(y) is a known function.

The Fredholm integral equation of the second kind:

    \varphi(y) = f(y) + \lambda\int_{a}^{b} \varphi(t)\,K(y, t)\,dt    (14.1-2)

where λ is an unknown parameter.

The Volterra integral equation of the first kind:

    f(t) = \int_{a}^{t} \varphi(τ)\,K(t, τ)\,dτ    (14.1-3)

where φ(t) is an unknown function and f(t) is a known function.

The Volterra integral equation of the second kind:

    y(t) = f(t) + \lambda\int_{a}^{t} y(τ)\,K(t, τ)\,dτ    (14.1-4)

where λ is an unknown parameter.

Volterra integral equations having a convolution-type kernel such as:

    y(t) = f(t) + \int_{0}^{t} y(τ)\,K(t - τ)\,dτ    (14.1-5)

are called integral equations of convolution type since their integral is a convolution integral.

14.2  LAPLACE TRANSFORMS OF INTEGRAL EQUATIONS

Most integral equations cannot be solved using Laplace transforms. Exceptions are certain integral equations of the convolution type. These equations have a Laplace transform of the form:

    Y(s) = F(s) + Y(s)\,K(s)    (14.2-1)

or

    Y(s) = \frac{F(s)}{1 - K(s)}    (14.2-2)

The inverse Laplace transformation then gives y(t).
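The algebra in equations (14.2-1) and (14.2-2) is easy to automate. The sketch below is illustrative only and is not part of the text: the particular convolution equation y(t) = 1 + ∫₀ᵗ y(τ)e^{-(t-τ)}dτ is a hypothetical example chosen for the demonstration. The script forms Y(s) = F(s)/(1 - K(s)) with sympy and inverts the result.

```python
# Illustrative sketch (not from the text): solve a convolution-type Volterra
# equation y(t) = f(t) + (y * K)(t) via Y(s) = F(s)/(1 - K(s)), eq. (14.2-2).
# The choice f(t) = 1, K(t) = exp(-t) is an arbitrary illustration.
import sympy as sp

t, s = sp.symbols('t s', positive=True)
f = sp.Integer(1)
K = sp.exp(-t)

F = sp.laplace_transform(f, t, s, noconds=True)    # 1/s
Ks = sp.laplace_transform(K, t, s, noconds=True)   # 1/(s + 1)
Y = sp.simplify(F / (1 - Ks))                       # (s + 1)/s**2

y = sp.inverse_laplace_transform(Y, s, t)
print(sp.simplify(y))                               # expected: t + 1
```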

Example 14.2-1
Determine the solution of the integral equation:

    y(t) = 2t - \int_{0}^{t} y(τ)\,(t - τ)\,dτ

Solution:
Laplace transforming this equation gives us:

    \mathcal{L}\{y(t)\} = \mathcal{L}\{2t\} - \mathcal{L}\{y(t) * t\}

or

    Y(s) = \frac{2}{s^2} - \mathcal{L}\{y(t)\}\,\mathcal{L}\{t\}

and so:

    Y(s) = \frac{2}{s^2} - Y(s)\,\frac{1}{s^2}

We then have:

    Y(s) = \frac{2/s^2}{1 + 1/s^2} = \frac{2}{s^2 + 1}

Therefore:

    y(t) = 2\sin(t)

Example 14.2-2
Determine the solution of the integral equation:

    y(t) = 2\sin t - 2\int_{0}^{t} y(τ)\cos(t - τ)\,dτ

Solution:
Laplace transforming this equation gives us:

    \mathcal{L}\{y(t)\} = 2\,\mathcal{L}\{\sin t\} - 2\,\mathcal{L}\{y(t) * \cos t\}

or

    Y(s) = \frac{2}{s^2 + 1} - \frac{2s}{s^2 + 1}\,Y(s)

and so:

    \left(1 + \frac{2s}{s^2 + 1}\right)Y(s) = \frac{2}{s^2 + 1}

Therefore:

    Y(s) = \frac{2}{s^2 + 1}\,\frac{s^2 + 1}{(s + 1)^2} = \frac{2}{(s + 1)^2}

We then have:

    y(t) = 2t\,e^{-t}
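Solutions of these convolution equations are easy to verify by substituting back and evaluating the convolution integral directly. The sketch below is illustrative only and is not part of the text: it checks the results of Examples 14.2-1 and 14.2-2 with sympy.

```python
# Illustrative check (not from the text): substitute the solutions of
# Examples 14.2-1 and 14.2-2 back into their integral equations.
import sympy as sp

t, tau = sp.symbols('t tau', positive=True)

# Example 14.2-1: y(t) = 2t - integral_0^t y(tau)*(t - tau) dtau
y1 = 2 * sp.sin(t)
res1 = y1 - (2 * t - sp.integrate(y1.subs(t, tau) * (t - tau), (tau, 0, t)))
print(sp.simplify(res1))   # expected: 0

# Example 14.2-2: y(t) = 2*sin(t) - 2*integral_0^t y(tau)*cos(t - tau) dtau
y2 = 2 * t * sp.exp(-t)
res2 = y2 - (2 * sp.sin(t)
             - 2 * sp.integrate(y2.subs(t, tau) * sp.cos(t - tau), (tau, 0, t)))
print(sp.simplify(res2))   # expected: 0
```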

Example 14.2-3
Determine the solution of the integral equation:

    y(t) = t^2 + \int_{0}^{t} y(τ)\sin(t - τ)\,dτ

Solution:
Laplace transforming this equation gives us:

    \mathcal{L}\{y(t)\} = \mathcal{L}\{t^2\} + \mathcal{L}\{y(t) * \sin t\}

or

    Y(s) = \frac{2}{s^3} + \frac{1}{s^2 + 1}\,Y(s)

and so:

    \frac{s^2}{s^2 + 1}\,Y(s) = \frac{2}{s^3}

Therefore:

    Y(s) = \frac{2}{s^3} + \frac{2}{s^5}

We then have:

    y(t) = t^2 + \frac{t^4}{12}

Example 14.2-4
Determine the solution of the integral equation:

    y(t) = 2e^t - \int_{0}^{t} y(τ)\sinh(t - τ)\,dτ

Solution:
Laplace transforming this equation gives us:

    \mathcal{L}\{y(t)\} = \mathcal{L}\{2e^t\} - \mathcal{L}\{y(t) * \sinh(t)\}

or

    Y(s) = \frac{2}{s - 1} - \frac{1}{s^2 - 1}\,Y(s)

and so:

    \frac{s^2}{s^2 - 1}\,Y(s) = \frac{2}{s - 1}

Therefore:

    Y(s) = \frac{2}{s} + \frac{2}{s^2}

We then have:

    y(t) = 2 + 2t

Example 14.2-5
Determine the solution of the integral equation:

    \int_{0}^{t} y(τ)\,y(t - τ)\,dτ = 8\sin(2t)

Solution:
Laplace transforming this equation gives us:

    \mathcal{L}\{y(t) * y(t)\} = \mathcal{L}\{y(t)\}\,\mathcal{L}\{y(t)\} = \mathcal{L}\{8\sin(2t)\}

or

    \left[Y(s)\right]^2 = \frac{16}{s^2 + 4}

and so:

    Y(s) = ±\frac{4}{\sqrt{s^2 + 4}}

Therefore:

    y(t) = ±4J_0(2t)

Example 14.2-6
Determine the solution of the integral equation:

    y(t) + \int_{0}^{t} y(τ)\,(t - τ)\,dτ = \sin(3t)

Solution:
Taking the Laplace transform:

    \mathcal{L}\{y(t)\} + \mathcal{L}\{y(t) * t\} = \frac{3}{s^2 + 9}

and so:

    \mathcal{L}\{y(t)\} + \mathcal{L}\{y(t)\}\,\mathcal{L}\{t\} = \frac{3}{s^2 + 9}

or

    Y(s) + \frac{1}{s^2}\,Y(s) = \frac{3}{s^2 + 9}

Therefore:

    Y(s) = \frac{3}{s^2 + 9}\,\frac{s^2}{s^2 + 1} = \frac{9}{8}\,\frac{3}{s^2 + 9} - \frac{3}{8}\,\frac{1}{s^2 + 1}

and so:

    y(t) = \frac{9}{8}\sin(3t) - \frac{3}{8}\sin(t)

Example 14.2-7
Determine the solution of the integral equation:

    y(t) = t^2 - \int_{0}^{t} y(τ)\,(t - τ)\,dτ

Solution:
Taking the Laplace transform:

    \mathcal{L}\{y(t)\} = \mathcal{L}\{t^2\} - \mathcal{L}\{y(t) * t\}

and so:

    Y(s) = \frac{2}{s^3} - \frac{1}{s^2}\,Y(s)

or

    \left(1 + \frac{1}{s^2}\right)Y(s) = \frac{2}{s^3}

Therefore:

    Y(s) = \frac{2}{s^3}\,\frac{s^2}{s^2 + 1} = \frac{2}{s(s^2 + 1)} = \frac{2}{s} - \frac{2s}{s^2 + 1}

and so:

    y(t) = 2 - 2\cos(t)

Example 14.2-8
Determine the solution of the integral equation:

    \int_{0}^{t} y(τ)\,y(t - τ)\,dτ = t\,e^{-2t}

Solution:
Taking the Laplace transform:

    \mathcal{L}\{y(t) * y(t)\} = \mathcal{L}\{y(t)\}\,\mathcal{L}\{y(t)\} = \left[Y(s)\right]^2 = \frac{1}{(s + 2)^2}

and so:

    Y(s) = ±\frac{1}{s + 2}

Therefore:

    y(t) = ±e^{-2t}

Example 14.2-9
Determine the solution of the integral equation:

    \int_{0}^{t} y(τ)\,y(t - τ)\,dτ = 2\sin(2t)

Solution:
Taking the Laplace transform:

    \mathcal{L}\{y(t) * y(t)\} = \mathcal{L}\{y(t)\}\,\mathcal{L}\{y(t)\} = \left[Y(s)\right]^2 = \frac{2\cdot 2}{s^2 + 2^2}

and so:

    Y(s) = ±\frac{2}{\sqrt{s^2 + 2^2}}

or

    y(t) = ±2J_0(2t)

Appendix A

The Greek Alphabet

Alpha      α      Α
Beta       β      Β
Gamma      γ      Γ
Delta      δ      Δ
Epsilon    ε      Ε
Zeta       ζ      Ζ
Eta        η      Η
Theta      θ      Θ
Iota       ι      Ι
Kappa      κ      Κ
Lambda     λ      Λ
Mu         µ      Μ
Nu         ν      Ν
Xi         ξ      Ξ
Omicron    ο      Ο
Pi         π      Π
Rho        ρ      Ρ
Sigma      σ      Σ
Tau        τ      Τ
Upsilon    υ      ϒ
Phi        φ, ϕ   Φ
Chi        χ      Χ
Psi        ψ      Ψ
Omega      ω      Ω

Appendix B

Integration by Parts

In this Appendix we describe the process of changing a function that we wish to integrate into another function that can be easier to integrate. This process is known as integration by parts.

Let f(t) and g(t) be two functions of t, and let both functions have first order derivatives. We can then write:

    (f\,g)' = f'\,g + f\,g'    (B-1)

Integrating this equation, we have:

    \int (f\,g)'\,dt = \int \left(f'\,g + f\,g'\right)dt    (B-2)

or

    f\,g = \int f'\,g\,dt + \int f\,g'\,dt    (B-3)

The constant resulting from integration of the left side of this equation is omitted since it will be combined with the constants from the remaining integrals. Rewriting equation (B-3), we have:

    \int f\,g'\,dt = f\,g - \int f'\,g\,dt    (B-4)

Now we can let:

    u = f(t),\qquad dv = g'(t)\,dt    (B-5)

We then have:

    du = f'(t)\,dt,\qquad v = \int g'(t)\,dt = g(t)    (B-6)

Equation (B-4) becomes:

    \int u\,dv = u\,v - \int v\,du    (B-7)

This is the equation resulting from integration by parts. For any particular equation it is necessary to select u and dv.

After dv is integrated to obtain v, equation (B-7) can be used to obtain the solution of the original integral. Note that the constant of integration in equation (B-6) is taken to be zero since, if it is nonzero, it will be incorporated in the constant of integration of the integral in equation (B-7).

Example B-1
Evaluate the integral:

    \mathcal{L}\{f(t)\} = \lim_{T \to \infty}\int_{0^-}^{T} e^{-st}f(t)\,dt

Solution:
Let:

    u = f(t),\qquad dv = e^{-st}\,dt

    du = f'(t)\,dt,\qquad v = -\frac{e^{-st}}{s}

We then have:

    \lim_{T \to \infty}\int_{0^-}^{T} e^{-st}f(t)\,dt = \lim_{T \to \infty}\left[-\frac{1}{s}\,f(t)\,e^{-st}\Big|_{0^-}^{T} + \frac{1}{s}\int_{0^-}^{T} e^{-st}f'(t)\,dt\right]

and so:

    \lim_{T \to \infty}\int_{0^-}^{T} e^{-st}f(t)\,dt = \frac{f(0^-)}{s} + \frac{1}{s}\int_{0^-}^{\infty} e^{-st}f'(t)\,dt
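Example B-1 is the integration-by-parts step behind the derivative rule for Laplace transforms. The sketch below is illustrative only and is not part of the text: it spot-checks the final relation of Example B-1 with sympy, using f(t) = e^{-2t}cos(t) as an arbitrary test function.

```python
# Illustrative check (not from the text) of Example B-1 with f(t) = exp(-2t)*cos(t).
import sympy as sp

t, s = sp.symbols('t s', positive=True)
f = sp.exp(-2 * t) * sp.cos(t)

F = sp.laplace_transform(f, t, s, noconds=True)
rhs = f.subs(t, 0) / s + sp.laplace_transform(sp.diff(f, t), t, s, noconds=True) / s

print(sp.simplify(F - rhs))   # expected: 0
```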

Appendix C

Partial Fractions

In this Appendix we describe the process of changing a proper rational fraction into partial fractions.

C.1  GENERAL PROCEDURE

A rational fraction is defined to be an algebraic fraction in which the numerator is a polynomial P(s) of degree m, and the denominator Q(s) is a polynomial of degree n:

    F(s) = \frac{P(s)}{Q(s)} = \frac{a_m s^m + a_{m-1}s^{m-1} + \cdots + a_1 s + a_0}{b_n s^n + b_{n-1}s^{n-1} + \cdots + b_1 s + b_0}    (C.1-1)

A proper rational fraction is a rational fraction F(s) having n > m. A proper rational fraction can be converted into partial fractions by taking the following steps:

1 - Factor the denominator Q(s) into its prime factors having the forms listed below in steps 2, 3, 4, and 5.

2 - Each linear factor of Q(s) will have the form (s + a), and can be written as the partial fraction:

    \frac{A}{s + a}    (C.1-2)

    where A is a constant, and where -a can be real or complex, and is a simple pole of F(s).

3 - Each repeated linear factor of Q(s) will have the form (s + a)^n, and can be written as the partial fractions:

    \frac{B_1}{s + a} + \frac{B_2}{(s + a)^2} + \cdots + \frac{B_n}{(s + a)^n}    (C.1-3)

    where B_1, B_2, \ldots, B_n are constants, and where -a can be real or complex, and is a higher order pole of F(s).

4 - Each irreducible quadratic factor of Q(s) will have the form (s^2 + as + b), and can be written as the partial fraction:

    \frac{Cs + D}{s^2 + as + b}    (C.1-4)

    where C and D are constants, and where each such quadratic factor can be expressed as linear factors that are complex conjugates, and are poles of F(s).

5 - Each repeated irreducible quadratic factor of Q(s) will have the form (s^2 + as + b)^n, and can be written as the partial fractions:

    \frac{E_1 s + F_1}{s^2 + as + b} + \frac{E_2 s + F_2}{(s^2 + as + b)^2} + \cdots + \frac{E_n s + F_n}{(s^2 + as + b)^n}    (C.1-5)

    where E_1, E_2, \ldots, E_n and F_1, F_2, \ldots, F_n are constants, and where each quadratic factor can be expressed as linear factors that are complex conjugates, and are poles of F(s). Repeated pairs of complex conjugate poles will then exist.

6 - The final step is to determine the constants of all the partial fractions.

Example C.1-1
Determine the form of the partial fractions for:

    \frac{1}{s^2 - 2s - 3}

Solution:
We have:

    \frac{1}{s^2 - 2s - 3} = \frac{1}{(s + 1)(s - 3)}

The form of the partial fractions is:

    \frac{1}{s^2 - 2s - 3} = \frac{A_1}{s + 1} + \frac{A_2}{s - 3}

Example C.1-2
Determine the form of the partial fractions for:

    \frac{s + 1}{s^2 - 4s + 4}

Solution:
We have:

    \frac{s + 1}{s^2 - 4s + 4} = \frac{s + 1}{(s - 2)^2}

The form of the partial fractions for repeated factors is:

    \frac{s + 1}{s^2 - 4s + 4} = \frac{B_1}{s - 2} + \frac{B_2}{(s - 2)^2}

Example C.1-3
Determine the form of the partial fractions for:

    \frac{s^2 + 1}{(s^2 + s + 1)(s - 2)}

Solution:
The quadratic factor in the denominator is irreducible. The form of the partial fractions is:

    \frac{s^2 + 1}{(s^2 + s + 1)(s - 2)} = \frac{C_1 s + D_1}{s^2 + s + 1} + \frac{A}{s - 2}

Example C.1-4
Determine the form of the partial fractions for:

    \frac{1}{s^2(s + 1)^3(s^2 + 1)^2(s^2 + s + 1)^2}

Solution:
The form of the partial fractions is:

    \frac{1}{s^2(s + 1)^3(s^2 + 1)^2(s^2 + s + 1)^2} = \frac{A_1}{s} + \frac{A_2}{s^2} + \frac{B_1}{s + 1} + \frac{B_2}{(s + 1)^2} + \frac{B_3}{(s + 1)^3} + \frac{C_1 s + D_1}{s^2 + 1} + \frac{C_2 s + D_2}{(s^2 + 1)^2} + \frac{C_3 s + D_3}{s^2 + s + 1} + \frac{C_4 s + D_4}{(s^2 + s + 1)^2}

C.2  CALCULATING PARTIAL FRACTIONS

The constants associated with the partial fractions can be determined by first multiplying the original rational fraction and its partial fraction expansion by the denominator Q(s) of the original rational fraction. The resulting equation can be considered to be an identity. The constants of the partial fractions can be determined by selecting values for s or by comparing the coefficients of s on both sides of the equation.

Example C.2-1
Determine the partial fractions for:

    \frac{1}{s^2 - 2s - 3}

Solution:
From Example C.1-1 we have:

    \frac{1}{s^2 - 2s - 3} = \frac{A_1}{s + 1} + \frac{A_2}{s - 3}

Clearing fractions by multiplying this equation by (s^2 - 2s - 3) = (s + 1)(s - 3), we obtain the identity:

    1 \equiv A_1(s - 3) + A_2(s + 1)

Letting s = 3 we have:

    1 = 4A_2\quad\Rightarrow\quad A_2 = \frac{1}{4}

Letting s = -1 we have:

    1 = -4A_1\quad\Rightarrow\quad A_1 = -\frac{1}{4}

Therefore:

    \frac{1}{s^2 - 2s - 3} = \frac{-1}{4(s + 1)} + \frac{1}{4(s - 3)}

Example C.2-2
Determine the partial fractions for:

    \frac{s + 1}{s^2 - 4s + 4}

Solution:
From Example C.1-2 we have:

    \frac{s + 1}{s^2 - 4s + 4} = \frac{B_1}{s - 2} + \frac{B_2}{(s - 2)^2}

Clearing fractions by multiplying this equation by (s^2 - 4s + 4) we obtain the identity:

    s + 1 \equiv B_1(s - 2) + B_2

Equating coefficients of equal powers of s we have:

    B_1 = 1

Letting s = 2 we have:

    B_2 = 3

Therefore:

    \frac{s + 1}{s^2 - 4s + 4} = \frac{1}{s - 2} + \frac{3}{(s - 2)^2}

Example C.2-3
Determine the partial fractions for:

    \frac{s^2 + 1}{(s^2 + s + 1)(s - 2)}

Solution:
From Example C.1-3 we have:

    \frac{s^2 + 1}{(s^2 + s + 1)(s - 2)} = \frac{Cs + D}{s^2 + s + 1} + \frac{A}{s - 2}

Clearing fractions by multiplying this equation by (s^2 + s + 1)(s - 2) we obtain the identity:

    s^2 + 1 \equiv (Cs + D)(s - 2) + A(s^2 + s + 1)

Letting s = 2 we have:

    5 = 7A\quad\Rightarrow\quad A = \frac{5}{7}

Letting s = 0 we have:

    1 = -2D + A = -2D + \frac{5}{7}\quad\Rightarrow\quad D = -\frac{1}{7}

Equating coefficients of s^2:

    1 = C + A = C + \frac{5}{7}\quad\Rightarrow\quad C = \frac{2}{7}

Therefore:

    \frac{s^2 + 1}{(s^2 + s + 1)(s - 2)} = \frac{2s - 1}{7(s^2 + s + 1)} + \frac{5}{7(s - 2)}

Example C.2-4
Determine the partial fractions for:

    \frac{s^2}{(s - 2)^3}

Solution:
From Example C.1-2 we have:

    \frac{s^2}{(s - 2)^3} = \frac{B_1}{s - 2} + \frac{B_2}{(s - 2)^2} + \frac{B_3}{(s - 2)^3}

Clearing fractions by multiplying this equation by (s - 2)^3 we obtain the identity:

    s^2 = B_1(s - 2)^2 + B_2(s - 2) + B_3

Letting s = 2 we have:

    B_3 = 4

Differentiating s^2 = B_1(s - 2)^2 + B_2(s - 2) + B_3 we have:

    2s = 2B_1(s - 2) + B_2

Equating coefficients of s we have:

    B_1 = 1

Letting s = 2 we have:

    B_2 = 4

Therefore:

    \frac{s^2}{(s - 2)^3} = \frac{1}{s - 2} + \frac{4}{(s - 2)^2} + \frac{4}{(s - 2)^3}

C.3  HEAVISIDE COVER-UP METHOD

In the special case where the prime factors of the denominator of a rational fraction are all linear and nonrepeating, a method known as the Heaviside cover-up can be used to easily determine the partial fraction constants. The Heaviside cover-up method consists of covering up each factor in turn in the rational fraction, and in determining the limit of what remains of the rational function as the covered-up factor approaches zero. The limits obtained are the constants for the partial fractions corresponding to the covered up factors.

Example C.3-1
Using the Heaviside cover-up method determine the partial fractions for:

    \frac{1}{s^2 - 2s - 3}

Solution:
We have:

    \frac{1}{s^2 - 2s - 3} = \frac{1}{(s + 1)(s - 3)} = \frac{A_1}{s + 1} + \frac{A_2}{s - 3}

Covering up s + 1 and letting s + 1 → 0, the rational fraction becomes:

    \lim_{s \to -1}\frac{1}{s - 3} = -\frac{1}{4}

From Example C.2-1 we see that we do have:

    A_1 = -\frac{1}{4}

Covering up s - 3 and letting s - 3 → 0, the rational fraction becomes:

    \lim_{s \to 3}\frac{1}{s + 1} = \frac{1}{4}

From Example C.2-1 we see that we do have:

    A_2 = \frac{1}{4}

Therefore:

    \frac{1}{s^2 - 2s - 3} = \frac{-1}{4(s + 1)} + \frac{1}{4(s - 3)}
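For routine work these expansions can also be generated mechanically. The sketch below is illustrative only and is not part of the text: it uses sympy's apart to reproduce the expansions of Examples C.2-1, C.2-2, and C.2-4.

```python
# Illustrative sketch (not from the text): partial fractions with sympy.apart.
import sympy as sp

s = sp.symbols('s')

print(sp.apart(1 / (s**2 - 2*s - 3), s))        # Example C.2-1
print(sp.apart((s + 1) / (s**2 - 4*s + 4), s))  # Example C.2-2
print(sp.apart(s**2 / (s - 2)**3, s))           # Example C.2-4
```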

C.4  HEAVISIDE EXPANSION FORMULA

Heaviside developed a formula for evaluating inverse Laplace transformations. We will consider the case where the Laplace transform has the form:

    F(s) = \frac{P(s)}{Q(s)}    (C.4-1)

where F(s) is a rational fraction and where P(s) and Q(s) are polynomials. If Q(s) has a finite number of roots, we then have:

    F(s) = \frac{P(s)}{(s - s_1)(s - s_2)(s - s_3)\cdots(s - s_n)}    (C.4-2)

where s_j are the roots of Q(s) and are called poles or singularities of F(s) as discussed in Chapter 11.

If Q(s) does not have any repeating factors, we can write equation (C.4-2) in the form of partial fractions as:

    F(s) = \frac{A_1}{s - s_1} + \frac{A_2}{s - s_2} + \cdots + \frac{A_n}{s - s_n} = \sum_{j=1}^{n}\frac{A_j}{s - s_j}    (C.4-3)

Multiplying by (s - s_j) and letting s → s_j, we have:

    A_j = \lim_{s \to s_j}\left[(s - s_j)F(s)\right] = \lim_{s \to s_j}\left[(s - s_j)\frac{P(s)}{Q(s)}\right]    (C.4-4)

Since the factor (s - s_j) is present in the numerator and denominator making this equation indeterminate, we use L'Hôpital's rule:

    A_j = P(s_j)\lim_{s \to s_j}\left[\frac{s - s_j}{Q(s)}\right]    (C.4-5)

The coefficients of the partial fractions are then given by:

    A_j = \frac{P(s_j)}{Q'(s_j)}    (C.4-6)

Equation (C.4-3) can now be written:

    F(s) = \sum_{j=1}^{n}\frac{P(s_j)}{Q'(s_j)}\,\frac{1}{s - s_j}    (C.4-7)

The inverse Laplace transform of F(s) is then:

    \mathcal{L}^{-1}\{F(s)\} = \sum_{j=1}^{n}\frac{P(s_j)}{Q'(s_j)}\,\mathcal{L}^{-1}\left\{\frac{1}{s - s_j}\right\}    (C.4-8)

or from the Laplace transform table in Appendix E:

    f(t) = \sum_{j=1}^{n}\frac{P(s_j)}{Q'(s_j)}\,e^{s_j t}    (C.4-9)

!

This is Heaviside’s expansion formula when there are no

repeated factors. This is the same result obtained by calculating

C.5!

the residues for simple poles given by Proposition 11.9-3. If Q ( s ) has repeating factors, formulas similar to equation

!

!

1 1 ⎡1 1 ⎤ = ⎢ − ⎥ s ( s + s1 ) s1 ⎣ s s + s1 ⎦

!

1 1 ⎡ s1 1 1 ⎤ = − + ⎢ ⎥ s 2 ( s + s1 ) s12 ⎣ s s s + s1 ⎦

!

2 s1 1 1 1 ⎡ s1 1 ⎤ = − + − ⎢ ⎥ s 3 ( s + s1 ) s13 ⎣ s 3 s 2 s s + s1 ⎦

(C.4-8) can be developed. Using Proposition 11.9-5 the same results can be obtained by evaluating higher order poles, and so we will not present additional Heaviside expansion formulas. Example C.4-1 Determine the inverse Laplace transform of F ( s ) =

s +1 . s2 + s − 6

Solution:

!

We have: !

F (s) =

s +1 s +1 = s 2 + s − 6 ( s + 3) ( s − 2 )

and so the roots are s1 = − 3 and s2 = 2 . We also have: !

P (s) = s + 1 !

!

Q′ ( s ) = 2 s + 1

Q ( s) = s2 + s − 6

We then have from equation (C.4-9): !

f (t ) =

( )e ∑ Q′ ( s ) 2

j =1

P sj

j

sj t

=

2 − 3t 3 2t e + e 5 5

C.5    COMMON PARTIAL FRACTIONS

    1/[(s + s₁)(s + s₂)] = [−1/(s₁ − s₂)] [1/(s + s₁) − 1/(s + s₂)]

    s/[(s + s₁)(s + s₂)] = [1/(s₁ − s₂)] [s₁/(s + s₁) − s₂/(s + s₂)]

    1/[s(s + s₁)] = (1/s₁) [1/s − 1/(s + s₁)]

    1/[s²(s + s₁)] = (1/s₁²) [s₁/s² − 1/s + 1/(s + s₁)]

    1/[s³(s + s₁)] = (1/s₁³) [s₁²/s³ − s₁/s² + 1/s − 1/(s + s₁)]

    1/[s(s + s₁)(s + s₂)] = [1/(s₁ s₂ (s₁ − s₂))] [(s₁ − s₂)/s + s₂/(s + s₁) − s₁/(s + s₂)]

    1/[s(s + s₁)²] = (1/s₁²) [1/s − 1/(s + s₁) − s₁/(s + s₁)²]

    1/[(s + s₁)²(s + s₂)] = [1/(s₁ − s₂)²] [(s₂ − s₁)/(s + s₁)² − 1/(s + s₁) + 1/(s + s₂)]    (see Example 9.3-10)
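The entries in this list can be spot-checked symbolically. The sketch below is not part of the original text and assumes the third-party sympy library, with s₁ treated as a positive parameter.

from sympy import symbols, apart, simplify

s = symbols('s')
s1 = symbols('s1', positive=True)

lhs = 1 / (s * (s + s1))
rhs = (1/s1) * (1/s - 1/(s + s1))
print(simplify(lhs - rhs))                      # -> 0

lhs3 = 1 / (s**3 * (s + s1))
rhs3 = (1/s1**3) * (s1**2/s**3 - s1/s**2 + 1/s - 1/(s + s1))
print(simplify(lhs3 - rhs3))                    # -> 0

print(apart(lhs, s))   # apart() reproduces the first decomposition directly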

Appendix D

Summary of Propositions

Proposition 1.4-1:
If f(t) is a bounded function such that | f(t) | ≤ M for t ≥ T₀, then f(t) is of exponential order.

Proposition 1.4-2, Absolute Convergence of the Laplace Transform:
If f(t) is a piecewise continuous function for t ≥ 0, and if | f(t) | ≤ M e^{σ₀ t} for t ≥ T₀, then the Laplace transform:

    L{ f(t) } = ∫₀₋^∞ f(t) e^{−s t} dt = F(s)

converges absolutely for Re s > σ₀ where σ₀ ≥ 0.

Proposition 1.4-3, Existence of the Laplace Transform:
If f(t) is a piecewise continuous function for t ≥ 0, and if | f(t) | ≤ M e^{σ₀ t} where Re s > σ₀ with σ₀ ≥ 0 and t ≥ T₀, then the Laplace transform:

    L{ f(t) } = ∫₀₋^∞ f(t) e^{−s t} dt = F(s)

exists since the integral converges.

Proposition 1.4-4, Uniqueness of the Laplace Transform:
If the Laplace transform L{ f(t) } = F(s) exists, then f(t) has only the one Laplace transform F(s).

Proposition 1.4-5, Riemann–Lebesgue Lemma:
If L{ f(t) } = F(s) where f(t) is a piecewise continuous function of exponential order on [0⁻, ∞), then:

    lim_{Re s→∞} F(s) = 0

uniformly.

Proposition 1.4-6:
If L{ f(t) } = F(s) where f(t) is a piecewise continuous function of exponential order on [0⁻, ∞), then s F(s) is bounded as Re s → ∞.
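As an illustration that is not part of the original text, sympy's laplace_transform reports the abscissa of absolute convergence σ₀ appearing in Propositions 1.4-2 and 1.4-3, and the transform it returns vanishes as Re s → ∞, as Proposition 1.4-5 requires.

from sympy import symbols, laplace_transform, exp, sin, limit, oo

t = symbols('t', positive=True)
s = symbols('s')

F, sigma0, cond = laplace_transform(exp(2*t) * sin(3*t), t, s)
print(F)                  # -> 3/((s - 2)**2 + 9)
print(sigma0)             # -> 2, i.e. the transform converges for Re s > 2
print(limit(F, s, oo))    # -> 0, consistent with the Riemann-Lebesgue lemma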

Proposition 1.4-7, Laplace Transform Converges Uniformly in its Region of Absolute Convergence:
If f(t) is a piecewise continuous function having | f(t) | ≤ M e^{σ₀ t} for t ≥ T₀, then the Laplace transform:

    L{ f(t) } = ∫₀₋^∞ f(t) e^{−s t} dt = F(s)

converges uniformly with respect to s in the open half-plane Re s > σ₀.

Proposition 1.4-8:
If f(t) is a piecewise continuous function having | f(t) | ≤ M e^{σ₀ t} for t ≥ T₀, then the Laplace transform:

    L{ f(t) } = ∫₀₋^∞ f(t) e^{−s t} dt = F(s)

converges absolutely and uniformly with respect to s in the open half-plane Re s > σ₀.

Proposition 1.4-9:
If the Laplace transform of a function f(t) with | f(t) | ≤ M e^{σ₀ t} converges absolutely with respect to s for Re s ≥ σ₁ > σ₀, then it converges uniformly and absolutely with respect to s for Re s = σ₁ where σ₁ ≥ σ₀.

Proposition 2.1-1, Linearity Property of Laplace Transforms:
The Laplace transform is a linear operator. If f(t) and g(t) are real-valued functions having Laplace transforms that exist, then:

    L{ c₁ f(t) + c₂ g(t) } = c₁ L{ f(t) } + c₂ L{ g(t) }

where c₁ and c₂ are arbitrary constants.

Proposition 2.1-2, Convergence of a Sum of Laplace Transforms:
If f(t) and g(t) are piecewise continuous functions of exponential order, and if L{ f(t) } converges for σ > σ_f and L{ g(t) } converges for σ > σ_g, then L{ c₁ f(t) + c₂ g(t) } converges absolutely for σ > σ_m where σ_m = max(σ_f, σ_g).

Proposition 2.2-1, First Shift Theorem (s-Shifting Property):
If L{ f(t) } = F(s), then:

    L{ f(t) e^{bt} } = F(s − b)

Proposition 2.2-2:
If L{ f(t) } = F(s) is absolutely convergent for Re s > σ₀, then L{ f(t) e^{bt} } is absolutely convergent for Re s > σ₀ + Re b.
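A quick symbolic check of the first shift theorem (Proposition 2.2-1), not from the book, using the third-party sympy library with the concrete shift b = 2:

from sympy import symbols, laplace_transform, cos, exp, simplify

t = symbols('t', positive=True)
s = symbols('s')

f = cos(3*t)
F = laplace_transform(f, t, s, noconds=True)                 # s/(s**2 + 9)
G = laplace_transform(f * exp(2*t), t, s, noconds=True)      # transform of exp(2*t)*f(t)

print(simplify(G - F.subs(s, s - 2)))   # -> 0, i.e. G(s) = F(s - 2)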

Proposition 2.3-1, Scaling Property (s-Scaling):
If L{ f(t) } = F(s), then:

    F(bs) = (1/b) L{ f(t/b) }

where b > 0.

Proposition 2.3-2, Scaling Property (t-Scaling):
If L{ f(t) } = F(s), then:

    L{ f(bt) } = (1/b) F(s/b)

where b > 0.

Proposition 3.1-1, Laplace Transforms of Derivatives:
If f(t) is a continuous function of exponential order on [0, ∞) and f′(t) is a piecewise-smooth function on [0, ∞), then L{ f′(t) } exists for all Re s > σ₀, and we have:

    L{ f′(t) } = s F(s) − f(0⁻)

where L{ f(t) } = F(s).

Proposition 3.1-2, Laplace Transforms of Second Derivatives:
If f(t) and f′(t) are both continuous functions of exponential order on [0⁻, ∞), and if f″(t) is a piecewise-smooth function on [0⁻, ∞), then L{ f″(t) } exists for all Re s > σ₀, and we have:

    L{ f″(t) } = s² F(s) − s f(0⁻) − f′(0⁻)

where L{ f(t) } = F(s).

Proposition 3.1-3, Laplace Transforms of Higher Order Derivatives:
If f(t), f′(t), f″(t), …, f^{(n−1)}(t) are continuous functions of exponential order on [0⁻, ∞), and if f^{(n)}(t) is a piecewise-smooth function on [0⁻, ∞), then L{ f^{(n)}(t) } exists for all Re s > σ₀, and we have:

    L{ f^{(n)}(t) } = sⁿ F(s) − s^{n−1} f(0⁻) − s^{n−2} f′(0⁻) − ⋯ − f^{(n−1)}(0⁻)

where L{ f(t) } = F(s).
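The derivative rules can be verified symbolically; the sketch below (not from the book, assuming sympy) checks Proposition 3.1-2 for f(t) = e^{−t} cos(2t):

from sympy import symbols, laplace_transform, diff, exp, cos, simplify

t = symbols('t', positive=True)
s = symbols('s')

f = exp(-t) * cos(2*t)
F = laplace_transform(f, t, s, noconds=True)

lhs = laplace_transform(diff(f, t, 2), t, s, noconds=True)
rhs = s**2 * F - s * f.subs(t, 0) - diff(f, t).subs(t, 0)
print(simplify(lhs - rhs))   # -> 0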

Proposition 3.2-1:
If f(t) is a piecewise continuous function of exponential order on [0⁻, ∞) with a single finite discontinuity at t = t₀, and if f′(t) is a piecewise-smooth function on [0⁻, ∞), then:

    L{ f′(t) } = s F(s) − f(0⁻) − [ f(t₀⁺) − f(t₀⁻) ] e^{−s t₀}

where L{ f(t) } = F(s) and Re s > σ₀.

Proposition 3.2-2:
If f(t) is a piecewise continuous function of exponential order on [0⁻, ∞) with finite discontinuities at t = t₀, t₁, t₂, …, tₙ, and if f′(t) is a piecewise-smooth function on [0⁻, ∞), then:

    L{ f′(t) } = s F(s) − f(0⁻) − Σ_{k=0}^{n} [ f(t_k⁺) − f(t_k⁻) ] e^{−s t_k}

where L{ f(t) } = F(s) and Re s > σ₀.

Proposition 3.3-1:
If f(t) is a piecewise-smooth function of exponential order, then the integral of f(t) is of exponential order.

Proposition 3.3-2:
If f(t) is a piecewise-smooth function of exponential order, then the Laplace transform of the integral of f(t) exists.

Proposition 3.3-3, Laplace Transform of Integrals:
If f(t) is a piecewise-smooth function of exponential order, and if L{ f(t) } = F(s), then:

    L{ ∫₀₋^t f(τ) dτ } = F(s)/s

where L{ f(t) } = F(s) and Re s > σ₀.

Proposition 3.3-4, Laplace Transform of Double Integrals:
If f(t) is a piecewise-smooth function of exponential order, and if L{ f(t) } = F(s), then:

    L{ ∫₀₋^t ∫₀₋^τ f(γ) dγ dτ } = F(s)/s²

Proposition 3.3-5, Laplace Transform of Multiple Integrals:
If f(t) is a piecewise-smooth function of exponential order, and if L{ f(t) } = F(s), then:

    L{ ∫₀₋^t ∫₀₋^{τ₁} ∫₀₋^{τ₂} ⋯ ∫₀₋^{τₙ} f(γₙ) dγₙ dγₙ₋₁ ⋯ dγ₁ dτ } = F(s)/sⁿ
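A symbolic check of Proposition 3.3-3 (not part of the original text, assuming sympy), with f(t) = sin(2t):

from sympy import symbols, laplace_transform, integrate, sin, simplify

t, tau = symbols('t tau', positive=True)
s = symbols('s')

F = laplace_transform(sin(2*t), t, s, noconds=True)      # 2/(s**2 + 4)
g = integrate(sin(2*tau), (tau, 0, t))                   # the running integral of f
G = laplace_transform(g, t, s, noconds=True)

print(simplify(G - F/s))   # -> 0, i.e. integration divides the transform by s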

Proposition 4.1-1:
If f(t) is a piecewise-smooth function of exponential order, and if L{ f(t) } = F(s), then we have:

    L{ t f(t) } = −(d/ds) L{ f(t) } = −F′(s)

Proposition 4.1-2:
If f(t) is a piecewise-smooth function of exponential order, and if L{ f(t) } = F(s), then F′(s) exists for Re s > σ₀.

Proposition 4.1-3:
If f(t) is a piecewise-smooth function of exponential order, and L{ f(t) } = F(s), then F^{(n)}(s) exists for Re s > σ₀, and is given by:

    L{ tⁿ f(t) } = (−1)ⁿ F^{(n)}(s)

where n is a positive integer.

Proposition 4.1-4:
If f(t) is a piecewise-smooth function of exponential order, and L{ f(t) } = F(s), then for all Re s > σ₀:

    L{ t f″(t) } = −s² dF(s)/ds − 2s F(s) + f(0⁻)

Proposition 4.2-1:
If f(t) is a piecewise-smooth function of exponential order, and if lim_{t→0⁺} ( f(t)/t ) exists, then:

    L{ f(t)/t } = ∫ₛ^∞ F(u) du

where L{ f(t) } = F(s).

Proposition 4.2-2:
If f(t) is a piecewise-smooth function of exponential order, and if lim_{t→0⁺} ( f(t)/(t + a) ) exists where a is a constant, then:

    L{ f(t)/(t + a) } = e^{s a} ∫ₛ^∞ e^{−u a} F(u) du

where L{ f(t) } = F(s).
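A check of Proposition 4.1-1 (not from the book, assuming sympy), with f(t) = e^{−3t}:

from sympy import symbols, laplace_transform, diff, exp, simplify

t = symbols('t', positive=True)
s = symbols('s')

f = exp(-3*t)
F = laplace_transform(f, t, s, noconds=True)         # 1/(s + 3)
G = laplace_transform(t * f, t, s, noconds=True)     # 1/(s + 3)**2

print(simplify(G + diff(F, s)))   # -> 0, i.e. L{t f(t)} = -F'(s)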

Proposition 4.2-3:
If f(t) is a piecewise-smooth function of exponential order, and if lim_{t→0⁺} ( f(t)/t ) exists, then:

    L{ f(t)/t } = ∫ₛ^∞ F(u) du

where L{ f(t) } = F(s).

Proposition 4.2-4:
If f(t) is a piecewise-smooth function of exponential order, and if lim_{t→0⁺} ( f(t)/t² ) exists, then:

    L{ f(t)/t² } = ∫ₛ^∞ ∫_{u₁}^∞ F(u₂) du₂ du₁

where L{ f(t) } = F(s).

Proposition 4.2-5:
If f(t) is a piecewise-smooth function of exponential order, and if lim_{t→0⁺} ( f(t)/tⁿ ) exists, then:

    L{ f(t)/tⁿ } = ∫ₛ^∞ ∫_{u₁}^∞ ∫_{u₂}^∞ ⋯ ∫_{u_{n−1}}^∞ F(uₙ) duₙ du_{n−1} ⋯ du₁

where L{ f(t) } = F(s).

Proposition 4.3-1, Initial Value Theorem:
If L{ f(t) } = F(s) where f(t) is a continuous function of exponential order on [0, ∞) and f′(t) is a piecewise-smooth function of exponential order on [0, ∞), then:

    lim_{s→∞} ( s F(s) ) = f(0⁺)

Proposition 4.3-2, Initial Value Theorem for Derivative:
If L{ f(t) } = F(s) where f(t) and f′(t) are continuous functions of exponential order on [0, ∞), and f″(t) is a piecewise-smooth function of exponential order on [0, ∞), then:

    lim_{s→∞} ( s² F(s) − s f(0⁻) ) = f′(0⁺)

Proposition 4.3-3, Final Value Theorem:
If L{ f(t) } = F(s) where f(t) is a continuous function of exponential order on [0, ∞) and f′(t) is a piecewise-smooth function of exponential order on [0, ∞), then:

    lim_{s→0} ( s F(s) ) = lim_{t→∞} f(t)
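The initial and final value theorems can be checked directly; the sketch below is not from the book and assumes sympy, with f(t) = 1 − e^{−2t}:

from sympy import symbols, laplace_transform, exp, limit, oo

t = symbols('t', positive=True)
s = symbols('s')

f = 1 - exp(-2*t)
F = laplace_transform(f, t, s, noconds=True)

print(limit(s*F, s, oo))   # -> 0, which is f(0+)          (Proposition 4.3-1)
print(limit(s*F, s, 0))    # -> 1, which is lim f(t), t->oo (Proposition 4.3-3)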

Proposition 5.1-1, Laplace Transform of the Shifted Unit Step Function:
The Laplace transform of the shifted unit step function is:

    L{ U(t − t₀) } = e^{−s t₀}/s,   t₀ ≥ 0

Proposition 5.1-2, Laplace Transform of a Unit Step Function:
The Laplace transform of a unit step function is:

    L{ U(t) } = 1/s

Proposition 5.2-1, Second Shift Theorem (t-Shifting Property):
If L{ f(t) } = F(s) where f(t) is a piecewise continuous function of exponential order, and if U(t − t₀) is the shifted unit step function, then:

    L{ f(t − t₀) U(t − t₀) } = e^{−s t₀} L{ f(t) }

Proposition 5.2-2, Third Shift Theorem:
If L{ f(t) } = F(s) where f(t) is a piecewise continuous function of exponential order, and if U(t − t₀) is the shifted unit step function, then:

    L{ f(t) U(t − t₀) } = e^{−s t₀} L{ f(t + t₀) }

Proposition 5.3-1, Laplace Transform of a Rectangular Pulse:
If f(t) is a rectangular pulse defined by:

    f(t) = 1 for t₁ ≤ t ≤ t₂,  0 otherwise

then:

    L{ f(t) } = L{ U(t − t₁) } − L{ U(t − t₂) } = (e^{−s t₁} − e^{−s t₂})/s

Proposition 5.4-1, Laplace Transform of the Unit Impulse Function δ(t − t₀):
The Laplace transform of the unit impulse function δ(t − t₀) is:

    L{ δ(t − t₀) } = e^{−s t₀}

Proposition 5.4-2, Laplace Transform of the Unit Impulse Function δ(t):
The Laplace transform of the unit impulse function δ(t) is:

    L{ δ(t) } = 1

Proposition 5.4-3, Sifting Property of δ(t − t₀):
If f(t) is continuous at t₀, then the unit impulse function has the property:

    ∫_{−∞}^{∞} f(t) δ(t − t₀) dt = f(t₀)
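A symbolic spot-check of Propositions 5.1-1, 5.2-1 and 5.4-1 (not part of the original text, assuming sympy, with the shift t₀ = 2):

from sympy import symbols, laplace_transform, Heaviside, DiracDelta, exp, sin, simplify

t = symbols('t', positive=True)
s = symbols('s')

print(laplace_transform(Heaviside(t - 2), t, s, noconds=True))    # -> exp(-2*s)/s
print(laplace_transform(DiracDelta(t - 2), t, s, noconds=True))   # -> exp(-2*s)

F = laplace_transform(sin(t), t, s, noconds=True)
G = laplace_transform(sin(t - 2) * Heaviside(t - 2), t, s, noconds=True)
print(simplify(G - exp(-2*s) * F))   # -> 0, the t-shifting property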

Proposition 5.4-4, Laplace Transform of the Derivative of a Unit Impulse Function δ(t):
If δ(t) is a unit impulse function, then the Laplace transform of its derivative is given by:

    L{ δ′(t) } = s

Proposition 5.4-5, Laplace Transform of the Second Derivative of the Unit Impulse Function δ(t):
The Laplace transform of the second derivative of the unit impulse function δ(t) is:

    L{ δ″(t) } = s²

Proposition 5.4-6, Laplace Transform of the Higher Order Derivatives of the Unit Impulse Function δ(t):
The Laplace transforms of higher order derivatives of the unit impulse function δ(t) are:

    L{ δ^{(n)}(t) } = sⁿ

Proposition 5.5-1, Laplace Transform of a Periodic Function:
If f(t) is a piecewise-smooth periodic function of exponential order with period T, then:

    L{ f(t) } = [1/(1 − e^{−sT})] ∫₀₋^T f(t) e^{−s t} dt

Proposition 6.2-1, Lerch's Theorem; Uniqueness of the Inverse Laplace Transform:
If the functions f(t) and g(t) are continuous on [0⁻, ∞) and of exponential order with Laplace transforms L{ f(t) } = F(s) and L{ g(t) } = G(s), and if F(s) = G(s) for some Re s > σ₀, then f(t) ≡ g(t) for all t > 0.

Proposition 6.2-2, Linearity Property of Inverse Laplace Transforms:
If f(t) and g(t) are continuous functions having Laplace transforms L{ f(t) } = F(s) and L{ g(t) } = G(s), respectively, then the inverse Laplace transform of c₁ F(s) + c₂ G(s) is a linear operator:

    L⁻¹{ c₁ F(s) + c₂ G(s) } = c₁ L⁻¹{ F(s) } + c₂ L⁻¹{ G(s) }

where c₁ and c₂ are arbitrary constants.
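Proposition 5.5-1 can be applied directly to a square wave; the sketch below is not from the book and assumes sympy. The wave has period T = 2 and equals 1 on [0, 1) and 0 on [1, 2).

from sympy import symbols, integrate, exp, simplify

t = symbols('t', positive=True)
s = symbols('s', positive=True)

T = 2
period_integral = integrate(1 * exp(-s*t), (t, 0, 1))   # f(t) = 1 only on [0, 1)
F = period_integral / (1 - exp(-s*T))

print(simplify(F))   # equivalent to (1 - exp(-s))/(s*(1 - exp(-2*s)))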

Proposition 7.1-1, Convergence of the Convolution:
If f(t) and g(t) are piecewise continuous functions of exponential order, then their convolution:

    f(t) ∗ g(t) = ∫₀^t f(τ) g(t − τ) dτ

is also a continuous function of exponential order, and so it converges absolutely.

Proposition 7.1-2:
If f(t) and g(t) are piecewise continuous functions of exponential order, then their convolution:

    f(t) ∗ g(t) = ∫₀^t f(τ) g(t − τ) dτ

has a Laplace transform.

Proposition 7.2-1, Laplace Convolution Theorem:
If f(t) and g(t) are piecewise continuous functions of exponential order with Laplace transforms of L{ f(t) } = F(s) and L{ g(t) } = G(s), respectively, then:

    L{ f(t) ∗ g(t) } = F(s) G(s)

Proposition 7.2-2:
If f₁(t), f₂(t), …, fₙ(t) are piecewise continuous functions of exponential order with Laplace transforms of F₁(s), F₂(s), …, Fₙ(s), respectively, then:

    L{ f₁(t) ∗ f₂(t) ∗ ⋯ ∗ fₙ(t) } = F₁(s) F₂(s) ⋯ Fₙ(s)

Proposition 7.2-3:
The convolution of any function f(t) with the unit impulse function δ(t) is f(t):

    f(t) ∗ δ(t) = ∫₀^t f(τ) δ(t − τ) dτ = f(t)
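A check of the convolution theorem (Proposition 7.2-1), not from the book and assuming sympy, with f(t) = t and g(t) = e^{−t}:

from sympy import symbols, integrate, laplace_transform, exp, simplify

t, tau = symbols('t tau', positive=True)
s = symbols('s')

conv = integrate(tau * exp(-(t - tau)), (tau, 0, t))     # (f*g)(t) = t - 1 + exp(-t)
lhs = laplace_transform(conv, t, s, noconds=True)
rhs = laplace_transform(t, t, s, noconds=True) * laplace_transform(exp(-t), t, s, noconds=True)

print(simplify(lhs - rhs))   # -> 0, i.e. L{f*g} = F(s) G(s)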

Proposition 8.1-1, Laplace Transform of a Power Series:
If a function f(t) is continuous on [0⁻, ∞) and is of exponential order, and if f(t) can be represented by a converging power series:

    f(t) = Σ_{n=0}^{∞} aₙ tⁿ,   n = 0, 1, 2, …

where the coefficients satisfy:

    | aₙ | ≤ M σ₀ⁿ/n!,   M > 0

and where Re s > σ₀, then L{ f(t) } can be represented by the Laurent series:

    L{ f(t) } = Σ_{n=0}^{∞} aₙ n!/s^{n+1}

Proposition 8.1-2, Watson's Lemma:
If a function f(t) is continuous on [0⁻, ∞) and is of exponential order, and if f(t) can be represented by a converging power series:

    f(t) = Σ_{n=0}^{∞} aₙ t^{n+ν},   ν > −1,   n = 0, 1, 2, …

where the coefficients satisfy:

    | aₙ | ≤ M σ₀ⁿ/n!,   M > 0

and where Re s > σ₀, then L{ f(t) } can be represented by the series:

    L{ f(t) } = Σ_{n=0}^{∞} aₙ Γ(n + ν + 1)/s^{n+ν+1}

Proposition 8.1-3, Convergence of the Laplace Transform of a Power Series:
If f(t) can be represented by a converging Taylor series:

    f(t) = Σ_{n=0}^{∞} aₙ tⁿ

and if:

    | aₙ | ≤ M σ₀ⁿ/n!

where M > 0, then the series L{ f(t) } will be uniformly convergent in Re s > σ₀.

Proposition 8.1-4:
If f(t), f′(t), f″(t), … are continuous functions of exponential order on [0⁻, ∞), then:

    L{ f(t) } = Σ_{n=0}^{∞} f^{(n)}(0)/s^{n+1}



Proposition 8.2-1, Inverse of a Laplace Transform Series:
If the Laplace transform F(s) of a function f(t) can be represented by a series:

    L{ f(t) } = F(s) = Σ_{n=0}^{∞} aₙ/s^{n+1}

that converges for Re s > σ₀ with lim_{s→∞} F(s) = 0, then F(s) can be inverted term-by-term to obtain:

    f(t) = Σ_{n=0}^{∞} (aₙ/n!) tⁿ

where f(t) converges absolutely and uniformly in its circle of convergence.

Proposition 8.2-2, Inverse of a Laplace Transform Series:
If the Laplace transform F(s) of a function f(t) can be represented by a series:

    L{ f(t) } = F(s) = Σ_{n=0}^{∞} aₙ/s^{n+ν+1},   ν > −1

that converges for Re s > σ₀ with lim_{s→∞} F(s) = 0, then F(s) can be inverted term-by-term to obtain:

    f(t) = Σ_{n=0}^{∞} [aₙ/Γ(n + ν + 1)] t^{n+ν}

where f(t) converges absolutely and uniformly in its circle of convergence.

Proposition 9.3-1:
If L{ y(t) } = Y(s) and L{ f(t) } = F(s), where y(t) is a continuous function of exponential order on [0, ∞), and f(t) and y′(t) are piecewise-smooth functions of exponential order on [0, ∞), the Laplace transform of the second-order ODE:

    a y″(t) + b y′(t) + c y(t) = f(t)

is then:

    Y(s) = [(as + b) y(0) + a y′(0)]/(as² + bs + c) + F(s)/(as² + bs + c)

where a, b, and c are constants.

Proposition 9.3-2:
Let L{ y(t) } = Y(s) and L{ f(t) } = F(s) where y(t) is a continuous function of exponential order on [0, ∞), and f(t) and y′(t) are piecewise-smooth functions of exponential order on [0, ∞). Given the second-order ODE:

    a y″(t) + b y′(t) + c y(t) = f(t)

where a, b, and c are constants, and y(0) = 0 and y′(0) = 0 so that:

    H(s) = Y(s)/F(s) = 1/(as² + bs + c)

then we have:

    y(t) = ∫₀^t h(t − τ) f(τ) dτ
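Proposition 9.3-2 is the basis of the transfer-function method for ODEs. The sketch below is not from the book and assumes sympy; the coefficients a = 1, b = 3, c = 2 and the forcing f(t) = 1 are arbitrary illustrative choices.

from sympy import symbols, inverse_laplace_transform, Function, Eq, dsolve, simplify

t = symbols('t', positive=True)
s = symbols('s')

# Transfer function for y'' + 3 y' + 2 y = f(t) with y(0) = y'(0) = 0
H = 1 / (s**2 + 3*s + 2)

# Response to a unit step f(t) = 1, i.e. F(s) = 1/s, so Y(s) = H(s) F(s)
y_from_H = inverse_laplace_transform(H / s, s, t)

# The same problem solved directly in the time domain
y = Function('y')
ode = Eq(y(t).diff(t, 2) + 3*y(t).diff(t) + 2*y(t), 1)
y_direct = dsolve(ode, y(t), ics={y(0): 0, y(t).diff(t).subs(t, 0): 0}).rhs

print(simplify(y_from_H - y_direct))   # -> 0 for t > 0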

Proposition 10.1-1:
If f(x, t) is a continuous function of exponential order on [0, ∞) and if the first partial derivatives of f(x, t) are piecewise-smooth functions of exponential order on [0, ∞), then:

    L{ ∂f(x, t)/∂t } = s F(x, s) − f(x, 0⁻)

where L{ f(x, t) } = F(x, s).

Proposition 10.1-2:
If f(x, t) and its first partial derivatives are continuous functions of exponential order on [0⁻, ∞), and if the second partial derivatives of f(x, t) are piecewise-smooth functions of exponential order on [0⁻, ∞), then:

    L{ ∂²f(x, t)/∂t² } = s² F(x, s) − s f(x, 0⁻) − ∂f(x, 0⁻)/∂t

where L{ f(x, t) } = F(x, s).

Proposition 10.1-3:
If f(x, t) is a continuous function of exponential order on [0, ∞) and if the first partial derivatives of f(x, t) are piecewise-smooth functions of exponential order on [0, ∞), then:

    L{ ∂f(x, t)/∂x } = dF(x, s)/dx

    L{ ∂²f(x, t)/∂x² } = d²F(x, s)/dx²

Proposition 11.2-1, Cauchy-Riemann Equations:
The necessary and sufficient conditions for a complex function w = f(z) = u(x, y) + i v(x, y) to be differentiable at a point z(x, y) are that the Cauchy-Riemann equations:

    ∂u/∂x = ∂v/∂y,   ∂u/∂y = −∂v/∂x

hold at the point z(x, y), and that the partial derivatives ∂u/∂x, ∂u/∂y, ∂v/∂x, and ∂v/∂y all exist and are continuous in a neighborhood of the point z(x, y).

Proposition 11.3-1, Fundamental Theorem of Calculus for Complex Functions:
If a complex function f(z) is holomorphic in a simply connected domain D, then:

    ∫_{z₁}^{z₂} f(z) dz = ∫_{z₁}^{z₂} [dF(z)/dz] dz = F(z₂) − F(z₁)

where F(z) is the antiderivative of f(z).

Proposition 11.3-2, Independent of Path:
If a complex function f(z) is holomorphic in a simply connected domain D, then the integral:

    ∫_C f(z) dz

is independent of path C within D.

Proposition 11.3-3, Contour Integral Inequality:
If C is a simple contour and f(z) is a complex function continuous on C, then:

    | ∫_C f(z) dz | ≤ ∫_C | f(z) | | dz |

Proposition 11.3-4, ML-Inequality:
Let C be a simple contour of length L, and let f(z) be a continuous complex function on C. If f(z) is bounded by a real number M along C:

    | f(z) | ≤ M

then:

    | ∫_C f(z) dz | ≤ M L

Proposition 11.3-5, Cauchy-Goursat Theorem:
If f(z) is a holomorphic complex function in a simply connected domain D, then along every simple closed contour C within D we must have:

    ∮_C f(z) dz = 0

Proposition 11.3-6, Deformation Theorem:
Let C₁ and C₂ be two simple closed contours within a domain D, and let C₂ be interior to C₁ (see Figure 11.3-1). If f(z) is a complex function holomorphic at all points within or on C₁, and outside or on C₂, then:

    ∮_{C₁} f(z) dz = ∮_{C₂} f(z) dz

Proposition 11.3-7:
If C is a simple closed contour within a domain D, and if z = z₀ is a point inside the contour C, then:

    ∮_C 1/(z − z₀)ⁿ dz = 2πi if n = 1,  0 if n ≠ 1

where n is an integer.

Proposition 11.3-8, Cauchy's Integral Formula:
Let f(z) be a holomorphic complex function in a simply connected domain D. If C is a simple closed contour that lies within D, and if z₀ is a fixed point inside the contour C, then:

    f(z₀) = (1/(2πi)) ∮_C f(z)/(z − z₀) dz

Proposition 11.3-9, Cauchy's Integral Formula for Derivatives:
Let f(z) be a holomorphic complex function in a simply connected domain D. If C is a simple closed contour within the domain D, and if z₀ is any given point inside the contour C, then f(z) has derivatives of all orders at the point z₀ given by:

    f^{(n)}(z₀) = (n!/(2πi)) ∮_C f(z)/(z − z₀)^{n+1} dz

where n is a nonnegative integer.

Proposition 11.3-10, Higher Order Derivatives of a Holomorphic Function Exist and are Holomorphic:
If a complex function f(z) is holomorphic at a point z = z₀, then its derivatives f^{(n)}(z) of all orders n exist and are holomorphic at the point z₀.

Proposition 11.3-11, Morera's Theorem:
If f(z) is a continuous complex function in a simply connected domain D, and if:

    ∮_C f(z) dz = 0

for every simple closed contour C within D, then f(z) is holomorphic in D.

Proposition 11.4-1, Term-by-Term Integration:
If a series of continuous complex functions f_k(z) converges uniformly to the sum f(z) in a domain D, then for any simple contour C that lies within D the integral of the sum along C can be determined using term-by-term integration of the series:

    lim_{n→∞} ∫_C Σ_{k=0}^{n} f_k(z) dz = lim_{n→∞} Σ_{k=0}^{n} ∫_C f_k(z) dz

Proposition 11.4-2:
If { f_k(z) } is a sequence of holomorphic functions in a simply connected domain D, and if the series:

    Σ_{k=0}^{∞} f_k(z)

converges uniformly to f(z) in D, then f(z) is holomorphic on D.

Proposition 11.4-3, Term-by-Term Differentiation:
If a series of continuous complex functions f_k(z) converges uniformly to the sum f(z) in a domain D, then the derivative of f(z) at any point z₀ in D can be determined by term-by-term differentiation of the series.

Proposition 11.5-1, Circle of Convergence of a Power Series:
Every power series:

    Σ_{n=0}^{∞} aₙ (z − z₀)ⁿ

has a circle of convergence | z − z₀ | = r with radius given by:

    r = lim_{n→∞} | aₙ/aₙ₊₁ |

if this limit exists. Within the circle of convergence the power series will converge absolutely, and outside this circle it will diverge.

Proposition 11.5-2, Uniform Convergence of a Power Series:
Every power series representation of a function f(z):

    f(z) = Σ_{n=0}^{∞} aₙ (z − z₀)ⁿ

converges uniformly inside its circle of convergence | z − z₀ | = r.

Proposition 11.5-3, Uniqueness of Power Series Representation:
A power series provides a unique representation of a function f(z) within its circle of convergence | z − z₀ | = r.

Proposition 11.5-4, Power Series is Holomorphic:
If a complex function f(z) can be represented by the power series:

    f(z) = Σ_{n=0}^{∞} aₙ (z − z₀)ⁿ

then f(z) is holomorphic at every point inside its circle of convergence | z − z₀ | = r.

Proposition 11.5-5, Derivatives of All Orders:
Derivatives of all orders exist for a power series:

    f(z) = Σ_{n=0}^{∞} aₙ (z − z₀)ⁿ

at every point inside its circle of convergence so that:

    f^{(k)}(z) = Σ_{n=k}^{∞} [n!/(n − k)!] aₙ (z − z₀)^{n−k}

Proposition 11.6-1, Taylor Series:
If a function f(z) can be represented by a power series:

    f(z) = Σ_{n=0}^{∞} aₙ (z − z₀)ⁿ

inside its circle of convergence | z − z₀ | = r, then the coefficients of the series are given by:

    aₙ = f^{(n)}(z₀)/n! = (1/(2πi)) ∮_C f(s)/(s − z₀)^{n+1} ds

where n = 0, 1, 2, …, and the series is a Taylor series.

Proposition 11.6-2, Existence of Taylor Series:
A Taylor series representation of a complex function f(z) will exist on a domain D if and only if f(z) is holomorphic on D.

Proposition 11.9-1:
Let f(z) = p(z)/q(z) where the functions p(z) and q(z) are both analytic at a point z = z₀. If p(z₀) ≠ 0 and q(z) has a simple zero at z₀, then z₀ is a simple pole of f(z), and we have:

    Res[ f(z), z₀ ] = p(z₀)/q′(z₀)

Proposition 11.9-2:
If an analytic function f(z) has the form:

    f(z) = P(z)/[(z − a)(z − b)(z − c)] = A/(z − a) + B/(z − b) + C/(z − c)

where P(z) is a polynomial of degree ≤ 2, and where a, b, and c are the poles of f(z), then:

    A = Res[ f(z), a ] = P(a)/[(a − b)(a − c)]

    B = Res[ f(z), b ] = P(b)/[(b − a)(b − c)]

    C = Res[ f(z), c ] = P(c)/[(c − a)(c − b)]

Proposition 11.9-3:
If an analytic function f(z) has a simple pole at a point z = z₀, then:

    Res[ f(z), z₀ ] = lim_{z→z₀} [(z − z₀) f(z)]

Proposition 11.9-4:
Let f(z) = 1/q(z) where the function q(z) is analytic at a point z = z₀. If q(z) has a simple zero at z₀, then z₀ is a simple pole of f(z) and we have:

    Res[ f(z), z₀ ] = 1/q′(z₀)

Proposition 11.9-5:
If an analytic function f(z) has a pole of order k ≥ 2 at a point z = z₀, then:

    Res[ f(z), z₀ ] = lim_{z→z₀} [1/(k − 1)!] d^{k−1}/dz^{k−1} [(z − z₀)^k f(z)]
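The residue formulas above are easy to confirm numerically; the sketch below is not from the book and assumes sympy, with the concrete poles a = 1, b = 2, c = 3 and P(z) = z² + 1.

from sympy import symbols, residue

z = symbols('z')
a, b, c = 1, 2, 3
P = z**2 + 1
f = P / ((z - a)*(z - b)*(z - c))

print(residue(f, z, a), P.subs(z, a) / ((a - b)*(a - c)))   # -> 1   1
print(residue(f, z, b), P.subs(z, b) / ((b - a)*(b - c)))   # -> -5  -5
print(residue(f, z, c), P.subs(z, c) / ((c - a)*(c - b)))   # -> 5   5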

Proposition 11.10-1, Cauchy's Residue Theorem for an Isolated Singularity:
If a complex function f(z) is analytic in a domain D except at an isolated singularity at a point z = z₀, then:

    ∮_C f(z) dz = 2πi Res[ f(z), z₀ ]

where C is any positively oriented simple closed contour lying within D and enclosing z₀.

Proposition 11.10-2, Cauchy's Residue Theorem:
Let C be a simple closed contour that lies within a simply connected domain D. If a complex function f(z) is analytic in D except at a finite number of isolated singular points z₁, z₂, …, z_k that lie inside C, then:

    ∮_C f(z) dz = 2πi Σ_{n=1}^{k} Res[ f(z), zₙ ]

Proposition 12.1-1:
The Laplace transform F(s) is analytic in its region of absolute convergence.

Proposition 12.2-1:
The inverse Laplace transform of the unit step function is:

    U(t) = L⁻¹{ 1/s } = (1/(2πi)) ∫_{σ₀−i∞}^{σ₀+i∞} (e^{s t}/s) ds

Proposition 12.2-2:
The inverse Laplace transform of the shifted unit step function is:

    U(t − t₀) = L⁻¹{ e^{−s t₀}/s } = (1/(2πi)) ∫_{σ₀−i∞}^{σ₀+i∞} (e^{−s t₀}/s) e^{s t} ds

Proposition 12.3-1:
The inverse Laplace transform for the shifted unit impulse function is:

    δ(t − t₀) = L⁻¹{ e^{−s t₀} } = (1/(2πi)) ∫_{σ₀−i∞}^{σ₀+i∞} e^{−s t₀} e^{s t} ds

Proposition 12.4-1, Existence of the Inverse Laplace Transform:
If f(t) is a piecewise continuous function of exponential order on [0, ∞) with L{ f(t) } = F(s), then the inverse Laplace transform exists where Re s > σ₀, and is given by:

    f(t) = L⁻¹{ F(s) } = (1/(2πi)) ∫_{σ₀−i∞}^{σ₀+i∞} F(s) e^{s t} ds

Proposition 12.5-1:
If f(t) is a piecewise continuous function of exponential order on [0, ∞) with Laplace transform F(s), and if F(s) e^{s t} has only N simple poles sₙ, then the inverse Laplace transform:

    f(t) = L⁻¹{ F(s) } = (1/(2πi)) ∫_{σ₀−i∞}^{σ₀+i∞} F(s) e^{s t} ds

is given by:

    f(t) = Σ_{n=1}^{N} Res[ F(s) e^{s t}, sₙ ]

Proposition 12.5-2:
If f(t) is a piecewise continuous function of exponential order with Laplace transform F(s), and if F(s) e^{s t} has only N simple poles sₙ, then the inverse Laplace transform is:

    f(t) = Σ_{n=1}^{N} lim_{s→sₙ} [(s − sₙ) F(s) e^{s t}]

Proposition 13.1-1:
If f(t) is a piecewise-smooth function of exponential order, and if L{ f(t) } = F(s), then:

    L{ t f′(t) } = −s F′(s) − F(s)

    L{ t f″(t) } = −s² F′(s) − 2s F(s) + f(0)

Proposition 13.2-1:
If f(t) and f^{(n)}(t) are piecewise-smooth functions of exponential order on [0, ∞), then:

    L{ t^m f^{(n)}(t) } = (−1)^m (d^m/ds^m) [ sⁿ F(s) − s^{n−1} f(0⁻) − s^{n−2} f′(0⁻) − s^{n−3} f″(0⁻) − ⋯ − f^{(n−1)}(0⁻) ]

where n and m are positive integers, and L{ f(t) } = F(s).
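Proposition 12.5-2 can be checked against a table inversion; the sketch below is not from the book and assumes sympy, reusing F(s) = (s + 1)/(s² + s − 6) from Example C.4-1.

from sympy import symbols, residue, exp, inverse_laplace_transform, simplify

t = symbols('t', positive=True)
s = symbols('s')

F = (s + 1) / (s**2 + s - 6)          # simple poles at s = -3 and s = 2

f_residues = sum(residue(F * exp(s*t), s, p) for p in (-3, 2))
f_table = inverse_laplace_transform(F, s, t)

print(simplify(f_residues - f_table))   # -> 0
print(simplify(f_residues))             # -> 2*exp(-3*t)/5 + 3*exp(2*t)/5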

Appendix E

Laplace Transform Tables

In the table of Laplace transforms presented below n is a positive integer, a, b, c are real constants, U(t) is the unit step function, and δ(t) is the unit impulse function.

f(t)  →  L{ f(t) }

δ(t)  →  1
δ(t − t₀)  →  e^{−s t₀}
1 or U(t)  →  1/s
U(t − t₀)  →  e^{−s t₀}/s
c  →  c/s
t  →  1/s²
t²  →  2/s³
tⁿ  →  n!/s^{n+1}
tᵅ,  α > −1  →  Γ(α + 1)/s^{α+1}
e^{at}  →  1/(s − a)
tⁿ e^{bt}  →  n!/(s − b)^{n+1}
sin(at)  →  a/(s² + a²)
cos(at)  →  s/(s² + a²)

sinh(at)  →  a/(s² − a²)
cosh(at)  →  s/(s² − a²)
e^{bt} sin(at)  →  a/[(s − b)² + a²]
e^{bt} cos(at)  →  (s − b)/[(s − b)² + a²]
e^{bt} sinh(at)  →  a/[(s − b)² − a²]
e^{bt} cosh(at)  →  (s − b)/[(s − b)² − a²]
t sin(at)  →  2as/(s² + a²)²
t cos(at)  →  (s² − a²)/(s² + a²)²
1 − cos(at)  →  a²/[s(s² + a²)]
at − sin(at)  →  a³/[s²(s² + a²)]
sin(at) − at cos(at)  →  2a³/(s² + a²)²
sin(at) + at cos(at)  →  2as²/(s² + a²)²
cos(at) + at sin(at)  →  s(s² + 3a²)/(s² + a²)²
cos(at) − at sin(at)  →  s(s² − a²)/(s² + a²)²
sin²(at)  →  2a²/[s(s² + 4a²)]
cos²(at)  →  (s² + 2a²)/[s(s² + 4a²)]
sinh²(t)  →  2/[s(s² − 4)],  s > 2

cosh²(t)  →  (s² − 2)/[s(s² − 4)],  s > 2
t sinh(at)  →  2as/(s² − a²)²
t cosh(at)  →  (s² + a²)/(s² − a²)²
U(t − t₀)  →  e^{−s t₀}/s
U(t − t₁) − U(t − t₂)  →  (e^{−s t₁} − e^{−s t₂})/s
t^{n−1}/(n − 1)!  →  1/sⁿ
erf(√(at))  →  √a/[s √(s + a)]
e^{at} erf(√(at))  →  √a/[√s (s − a)]
1 − erf(a/(2√t)) = erfc(a/(2√t))  →  e^{−a√s}/s
cos(2√(at))  →  √(π/s) e^{−a/s}
sinh(at) − sin(at)  →  2a³/(s⁴ − a⁴)
cosh(at) − cos(at)  →  2a²s/(s⁴ − a⁴)
sinh(at) + sin(at)  →  2as²/(s⁴ − a⁴)
cosh(at) + cos(at)  →  2s³/(s⁴ − a⁴)
sin(ωt + θ)  →  (s sin θ + ω cos θ)/(s² + ω²)
cos(ωt + θ)  →  (s cos θ − ω sin θ)/(s² + ω²)
1 − e^{−at}  →  a/[s(s + a)]
(e^{−at} − e^{−bt})/(b − a),  a ≠ b  →  1/[(s + a)(s + b)]

e^{−at} sin(ωt + θ)  →  [(s + a) sin θ + ω cos θ]/[(s + a)² + ω²]
e^{−at} cos(ωt + θ)  →  [(s + a) cos θ − ω sin θ]/[(s + a)² + ω²]
e^{iat}  →  1/(s − ia)
at − 1 + e^{−at}  →  a²/[s²(s + a)]
1 − e^{−at} − at e^{−at}  →  a²/[s(s + a)²]
√t  →  √π/(2 s^{3/2})
1/√t  →  √(π/s)
sin(at) sinh(at)  →  2a²s/(s⁴ + 4a⁴)
sin(at) cosh(at)  →  a(s² + 2a²)/(s⁴ + 4a⁴)
cos(at) sinh(at)  →  a(s² − 2a²)/(s⁴ + 4a⁴)
cos(at) cosh(at)  →  s³/(s⁴ + 4a⁴)
at cosh(at) − sinh(at)  →  2a³/(s² − a²)²
e^{−t/a}  →  a/(1 + as)
(1/(ab)) [1 − (b e^{−at} − a e^{−bt})/(b − a)]  →  1/[s(s + a)(s + b)]
e^{−at}/[(b − a)(c − a)] + e^{−bt}/[(a − b)(c − b)] + e^{−ct}/[(a − c)(b − c)]  →  1/[(s + a)(s + b)(s + c)]
sin(at) cosh(at) − cos(at) sinh(at)  →  4a³/(s⁴ + 4a⁴)
erf(√t)  →  1/[s √(s + 1)]

erf(a/(2√t))  →  (1 − e^{−a√s})/s
e^{at} erfc(√(at))  →  1/[√s (√s + √a)]
[a/(2√(πt³))] e^{−a²/(4t)},  a > 0  →  e^{−a√s}
[1/√(πt)] e^{−a²/(4t)},  a > 0  →  e^{−a√s}/√s
[1/(2√(πt³))] (e^{bt} − e^{at})  →  √(s − a) − √(s − b)
(3 − a²t²) sin(at) − 3at cos(at)  →  8a⁵/(s² + a²)³
Si(t)  →  (1/s) tan⁻¹(1/s)
Ci(t)  →  −(1/(2s)) ln(s² + 1)
J₀(at)  →  1/√(s² + a²)
Jₙ(at)  →  (√(s² + a²) − s)ⁿ/[aⁿ √(s² + a²)]
J₀(2√(at))  →  (1/s) e^{−a/s}
δ′(t)  →  s
δ^{(n)}(t)  →  sⁿ
(1/t)(e^{bt} − e^{at})  →  ln[(s − a)/(s − b)]
(1/a) sinh(at) − t  →  a²/[s²(s² − a²)]
sin(√t)  →  [√π/(2 s^{3/2})] e^{−1/(4s)}
cos(√t)/√t  →  √(π/s) e^{−1/(4s)}

sin(at)/t  →  tan⁻¹(a/s)
[1 − cos(at)]/t  →  (1/2) ln[(s² + a²)/s²]
[1 − cosh(at)]/t  →  (1/2) ln[(s² − a²)/s²]
sinh²(at)  →  2a²/[s(s² − 4a²)]
cosh²(at)  →  (s² − 2a²)/[s(s² − 4a²)]
t² sin(at)  →  2a(3s² − a²)/(s² + a²)³
t² cos(at)  →  2s(s² − 3a²)/(s² + a²)³
at² cosh(at) − t sinh(at)  →  8a³s/(s² − a²)³
t sin(at) − at² cos(at)  →  8a³s/(s² + a²)³
1 − cos(at) − (at/2) sin(at)  →  a⁴/[s(s² + a²)²]
1 − cosh(at) + (at/2) sinh(at)  →  a⁴/[s(s² − a²)²]
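A few of the table entries above can be spot-checked symbolically; the sketch below is not part of the original text and assumes the third-party sympy library, with a treated as a positive parameter.

from sympy import symbols, laplace_transform, sin, cos, simplify

t = symbols('t', positive=True)
s, a = symbols('s a', positive=True)

checks = [
    (t*sin(a*t),              2*a*s/(s**2 + a**2)**2),
    (1 - cos(a*t),            a**2/(s*(s**2 + a**2))),
    (t**2*cos(a*t),           2*s*(s**2 - 3*a**2)/(s**2 + a**2)**3),
    (sin(a*t) - a*t*cos(a*t), 2*a**3/(s**2 + a**2)**2),
]
for f, F in checks:
    print(simplify(laplace_transform(f, t, s, noconds=True) - F))   # -> 0 for each entry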

Appendix F

Properties of Laplace Transforms

In the table of Laplace transform properties presented below n is a positive integer, a and b are constants, and U(t) is the unit step function.

f(t)  →  L{ f(t) }

f(t)  →  F(s) = ∫₀₋^∞ f(t) e^{−s t} dt
a f(t) + b g(t)  →  a F(s) + b G(s)
e^{bt} f(t)  →  F(s − b)
f(t − t₀) U(t − t₀)  →  e^{−s t₀} F(s)
f(t) U(t − t₀)  →  e^{−s t₀} L{ f(t + t₀) }
t f(t)  →  −F′(s)
tⁿ f(t)  →  (−1)ⁿ F^{(n)}(s)
f(t) ∗ g(t) = ∫₀^t f(τ) g(t − τ) dτ  →  F(s) G(s)
∫₀^t f(τ) dτ  →  F(s)/s
f(t) = f(t + T)  (periodic)  →  [1/(1 − e^{−sT})] ∫₀₋^T f(t) e^{−s t} dt
f(at)  →  (1/a) F(s/a)
f′(t)  →  s F(s) − f(0⁻)
f″(t)  →  s² F(s) − s f(0⁻) − f′(0⁻)
f^{(n)}(t)  →  sⁿ F(s) − Σ_{k=0}^{n−1} s^{n−1−k} f^{(k)}(0⁻)

f(t)/t  →  ∫ₛ^∞ F(u) du
∫₀^t ∫₀^τ f(u) du dτ  →  F(s)/s²
f(0⁺)  →  lim_{s→∞} ( s F(s) )
lim_{t→∞} f(t)  →  lim_{s→0} ( s F(s) )
f(t) = (1/(2πi)) ∫_{a−i∞}^{a+i∞} e^{s t} F(s) ds  →  F(s)
f(t) = Σ_{n=0}^{∞} aₙ tⁿ,  | aₙ | ≤ M σ₀ⁿ/n!  →  Σ_{n=0}^{∞} aₙ n!/s^{n+1}
t f′(t)  →  −s F′(s) − F(s)
t f″(t)  →  −s² F′(s) − 2s F(s) + f(0)

Appendix G

Power Series

A few of the most common series representations of functions are listed below. In this table of power series, n is a positive integer.

    sin z = z − z³/3! + z⁵/5! − ⋯ = Σ_{n=0}^{∞} (−1)ⁿ z^{2n+1}/(2n + 1)!

    cos z = 1 − z²/2! + z⁴/4! − ⋯ = Σ_{n=0}^{∞} (−1)ⁿ z^{2n}/(2n)!

    sinh z = z + z³/3! + z⁵/5! + ⋯ = Σ_{n=0}^{∞} z^{2n+1}/(2n + 1)!

    cosh z = 1 + z²/2! + z⁴/4! + ⋯ = Σ_{n=0}^{∞} z^{2n}/(2n)!

    e^z = 1 + z + z²/2! + z³/3! + z⁴/4! + ⋯ = Σ_{n=0}^{∞} zⁿ/n!

    tan z = z + z³/3 + 2z⁵/15 + 17z⁷/315 + ⋯

    ln(1 + z) = z − z²/2 + z³/3 − ⋯

    1/(1 − z)^m = 1 + Σ_{n=1}^{∞} [m(m + 1)⋯(m + n − 1)/n!] zⁿ