Sturm-Liouville Theory and its Applications (Instructor Solution Manual, Solutions) [1 ed.] 1846289718, 9781846289712

English Pages [69] Year 2007

SOLUTIONS TO THE EXERCISES FOR

STURM-LIOUVILLE THEORY AND ITS APPLICATIONS

M.A. AL-GWAIZ

Chapter 1

1.1 (a) $0\cdot x = 0\cdot x + [0\cdot x + (-0\cdot x)] = (0+0)\cdot x + (-0\cdot x) = 0\cdot x + (-0\cdot x) = 0.$
(b) $a\cdot 0 = a\cdot 0 + [a\cdot 0 + (-a\cdot 0)] = a(0+0) + (-a\cdot 0) = a\cdot 0 + (-a\cdot 0) = 0.$
(c) $(-1)\cdot x + x = (-1+1)\cdot x = 0\cdot x = 0.$
(d) If $a \ne 0$ then $a\cdot x = 0$ implies $x = a^{-1}(a\cdot x) = a^{-1}\cdot 0 = 0.$

1.2 (a) Complex vector space, (b) real vector space, (c) not a vector space, (d) real vector space.

1.3 If $x_1,\dots,x_n$ are linearly dependent then there are scalars $b_1,\dots,b_n$, not all zero, such that $\sum_{i=1}^n b_i x_i = 0$. Assuming $b_k \ne 0$ for some $k \in \{1,\dots,n\}$, we can multiply by $b_k^{-1}$ to obtain $x_k = \sum_{i\ne k} a_i x_i$, where $a_i = -b_k^{-1}b_i$. The converse is obvious. If the set of vectors $x_1, x_2, x_3, \dots$ is infinite, then it is linearly dependent if, and only if, it has a finite subset which is linearly dependent, and the desired conclusion follows.

1.4 Assume that $\{x_1,\dots,x_n\}$ and $\{y_1,\dots,y_m\}$ are bases of the same vector space with $n \ne m$, and show that this leads to a contradiction. If $m > n$, express each $y_i$, $1 \le i \le n$, as a linear combination of $x_1,\dots,x_n$. The resulting system of $n$ linear equations can be solved uniquely for each $x_i$, $1 \le i \le n$, as a linear combination of $y_1,\dots,y_n$ (why?). Since each vector $y_{n+1},\dots,y_m$ is also a linear combination of $x_1,\dots,x_n$ (and hence of $y_1,\dots,y_n$), this contradicts the linear independence of $\{y_1,\dots,y_m\}$. Similarly, if $m < n$ then $\{x_1,\dots,x_n\}$ is linearly dependent. Hence $m = n$.

1.5 Assume $a_n x^n + \dots + a_1 x + a_0 = 0$ for all $x$ in the interval $I$. We can differentiate both sides of this identity $n$ times to conclude that $a_n = 0$, then $n-1$ times to obtain $a_{n-1} = 0$, etc. Therefore all the coefficients $a_k$ are zero, and so $\{1, x, \dots, x^n : x \in I\}$ is linearly independent for every $n$. It follows that the infinite set $\{1, x, \dots : x \in I\}$ is linearly independent.

1.6 It suffices to consider the case where both $\dim X$ and $\dim Y$ are finite. If $B$ is a basis of $Y$, then $B$ lies in $X$. Since the vectors in $B$ are linearly independent, $\dim X$ cannot be less than the number of vectors in $B$, namely $\dim Y$.

1.7 Recall that a determinant is zero if, and only if, one of its rows (or columns) is a linear combination of the other rows (or columns).

1.8 Since $\|x+y\|^2 = \|x\|^2 + 2\operatorname{Re}\langle x,y\rangle + \|y\|^2$, the equality $\|x+y\|^2 = \|x\|^2 + \|y\|^2$ holds if, and only if, $\operatorname{Re}\langle x,y\rangle = 0$. Consider $x = (1,1)$ and $y = (i,i)$ in $\mathbb{C}^2$.

1.9 (a) Let $a(x+y) + b(x-y) = (a+b)x + (a-b)y = 0$. Because $x$ and $y$ are linearly independent, it follows that $a+b = 0$ and $a-b = 0$. But this implies $a = b = 0$, hence the linear independence of $x+y$ and $x-y$.
(b) $\langle x+y, x-y\rangle = \|x\|^2 + \langle y,x\rangle - \langle x,y\rangle - \|y\|^2 = \|x\|^2 - \|y\|^2$ (in a real inner product space). Therefore $x+y$ and $x-y$ are orthogonal if $\|x\| = \|y\|$.

1.10 (a) $0$, (b) $2/3$, (c) $8/3$, (d) $\sqrt{14}$.

1.11 $\langle\varphi_1,\varphi_3\rangle = \langle\varphi_1,\varphi_4\rangle = \langle\varphi_2,\varphi_4\rangle = \langle\varphi_3,\varphi_4\rangle = 0$. Thus the largest orthogonal subset is $\{\varphi_1, \varphi_3, \varphi_4\}$.

1.12 $\langle f, f_1\rangle/\|f_1\| = \sqrt{\pi}/2$, $\langle f, f_2\rangle/\|f_2\| = 0$, $\langle f, f_3\rangle/\|f_3\| = \sqrt{\pi}/2$.

1.13 Let $a + bx + cx^2 = 0$ for all $x \in [-1,1]$. Setting $x = 0$, $x = 1$, and $x = -1$, the resulting three equations yield the only solution $a = b = c = 0$. The corresponding orthogonal functions are given by
$$f_1(x) = 1, \qquad f_2(x) = x - \frac{\langle x, 1\rangle}{\|1\|^2} = x, \qquad f_3(x) = x^2 - \frac{\langle x^2, 1\rangle}{\|1\|^2} - \frac{\langle x^2, x\rangle}{\|x\|^2}\,x = x^2 - \frac13.$$

1.14 Let $a + bx + c|x| = 0$ for all $x \in [-1,1]$. Setting $x = 0$, $x = 1$, and $x = -1$ yields $a = b = c = 0$. The corresponding orthogonal set is
$$f_1(x) = 1, \qquad f_2(x) = x - \frac{\langle x, 1\rangle}{\|1\|^2} = x, \qquad f_3(x) = |x| - \frac{\langle |x|, 1\rangle}{\|1\|^2} - \frac{\langle |x|, x\rangle}{\|x\|^2}\,x = |x| - \frac12,$$
and the normalized set is
$$\frac{f_1(x)}{\|f_1\|} = \frac{1}{\sqrt{2}}, \qquad \frac{f_2(x)}{\|f_2\|} = \frac{x}{\sqrt{2/3}}, \qquad \frac{f_3(x)}{\|f_3\|} = \sqrt{6}\left(|x| - \frac12\right).$$
The set $\{1, x, |x|\}$ is not linearly independent on $[0,1]$ because $|x| = x$ on $[0,1]$.
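As a quick numerical cross-check of Exercises 1.13-1.14, the sketch below (grid size and tolerances are arbitrary choices, not from the text) runs Gram-Schmidt on sampled copies of $1$, $x$, $|x|$ with a midpoint-rule inner product on $[-1,1]$; it should reproduce $f_3(x) = |x| - 1/2$.

```python
# Sketch: numerical Gram-Schmidt check for Exercises 1.13-1.14.
# The grid size N and the tolerances are arbitrary choices.
N = 4000
h = 2.0 / N
xs = [-1.0 + (k + 0.5) * h for k in range(N)]   # midpoint grid on [-1, 1]

def inner(u, v):
    """Midpoint-rule approximation of the L^2(-1,1) inner product."""
    return sum(a * b for a, b in zip(u, v)) * h

basis = [[1.0] * N, list(xs), [abs(x) for x in xs]]
ortho = []
for f in basis:
    g = list(f)
    for p in ortho:
        c = inner(f, p) / inner(p, p)            # projection coefficient
        g = [gi - c * pi for gi, pi in zip(g, p)]
    ortho.append(g)

# the third orthogonalized function should agree with |x| - 1/2
err = max(abs(v - (abs(x) - 0.5)) for v, x in zip(ortho[2], xs))
```

Because the grid is symmetric about $0$, the discrete projections reproduce the exact coefficients $1/2$ and $0$, so the error is at round-off level.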

1.15 From the result of Exercise 1.3 we know that $f_1,\dots,f_n$ are linearly dependent if, and only if, there is a number $k \in \{1,\dots,n\}$ such that $f_k = \sum_{i\ne k} a_i f_i$ on $I$. By differentiating this identity up to order $n-1$, we arrive at the system of equations $f_k^{(j)} = \sum_{i\ne k} a_i f_i^{(j)}$, $0 \le j \le n-1$. Writing this system in matrix form, and using the properties of determinants, we conclude that the system is equivalent to the single equation $\det(f_i^{(j)}) = 0$ on $I$, where $1 \le i \le n$ and $0 \le j \le n-1$.

1.16 Noting that both $\varphi_1$ and $\varphi_2$ are even whereas $\varphi_3$ is odd, we conclude that $\langle\varphi_1,\varphi_3\rangle = \langle\varphi_2,\varphi_3\rangle = 0$. Moreover, $\langle\varphi_1,\varphi_2\rangle = 0$. The corresponding orthonormal set is
$$\frac{\varphi_1(x)}{\|\varphi_1\|} = \frac{1}{\sqrt{2}}, \qquad \frac{\varphi_2(x)}{\|\varphi_2\|} = \sqrt{\frac{45}{8}}\left(x^2 - \frac13\right), \qquad \frac{\varphi_3(x)}{\|\varphi_3\|} = \frac{1}{\sqrt{2}}\,\varphi_3(x).$$

1.17 Solving the pair of equations $\langle x^2 + ax + b,\ x+1\rangle = 0$ and $\langle x^2 + ax + b,\ x-1\rangle = 0$ gives $a = -1$, $b = 1/6$.

1.18 If $f : [a,b] \to \mathbb{C}$ is a continuous function and $\|f\| = 0$, then it follows that $\int_a^b |f(x)|^2dx = 0$, and $|f(x)|^2$ is continuous and nonnegative on $[a,b]$. But this implies $f(x) = 0$ for all $x \in [a,b]$. On the other hand, the non-continuous function
$$f(x) = \begin{cases} 0, & x \in [0,1]\setminus\{1/2\} \\ 1, & x = 1/2 \end{cases}$$
clearly satisfies $\|f\| = 0$, but $f$ is not identically $0$ on $[0,1]$.

1.19 From the CBS inequality, $\|f+g\|^2 = \|f\|^2 + 2\operatorname{Re}\langle f,g\rangle + \|g\|^2 \le \|f\|^2 + 2\|f\|\,\|g\| + \|g\|^2 = (\|f\| + \|g\|)^2$. The triangle inequality follows by taking the square root of each side.

1.20 $\langle 1, x\rangle = 1/2$, $\|1\| = 1$, and $\|x\| = 1/\sqrt{3}$. Clearly $\langle 1, x\rangle < \|1\|\,\|x\|$.

1.21 Use the definition of the Riemann integral, based on Riemann sums, to show that $f$, and hence $fg$, is not integrable on $[0,1]$, whereas $f^2 = 1$ and $g^2 = 1$ are both integrable.

1.22 (i) $1/\sqrt{2}$, (ii) not in $L^2(0,1)$, (iii) $1$, (iv) not in $L^2(0,1)$.
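The numbers in 1.19-1.20 can be spot-checked numerically; this sketch (the quadrature resolution is an arbitrary choice) approximates the inner products on $[0,1]$ and verifies both the CBS and triangle inequalities.

```python
import math

# Sketch: midpoint-rule check of the inner products in Exercises 1.19-1.20.
# The resolution N is an arbitrary choice.
N = 100_000
h = 1.0 / N
xs = [(k + 0.5) * h for k in range(N)]

def inner(f, g):
    return sum(f(x) * g(x) for x in xs) * h

ip  = inner(lambda x: 1.0, lambda x: x)                        # <1, x>, expect 1/2
n1  = math.sqrt(inner(lambda x: 1.0, lambda x: 1.0))           # ||1||,  expect 1
nx  = math.sqrt(inner(lambda x: x, lambda x: x))               # ||x||,  expect 1/sqrt(3)
nfg = math.sqrt(inner(lambda x: 1.0 + x, lambda x: 1.0 + x))   # ||1 + x||
```

The strict inequality $\langle 1, x\rangle < \|1\|\,\|x\|$ reflects the fact that $1$ and $x$ are not proportional.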

1.23 If $\|f\| = 0$ then $f$, being continuous, is identically $0$, and the pair $f$, $g$ is linearly dependent. The same is true if $\|g\| = 0$. Hence we assume $\|f\| \ne 0$ and $\|g\| \ne 0$. Now
$$\int_a^b \left(\frac{f}{\|f\|} - \frac{g}{\|g\|}\right)^2 dx = \int_a^b \frac{f^2}{\|f\|^2}\,dx - 2\int_a^b \frac{fg}{\|f\|\,\|g\|}\,dx + \int_a^b \frac{g^2}{\|g\|^2}\,dx = 1 + 1 - 2 = 0,$$
where we used $\langle f, g\rangle = \|f\|\,\|g\|$ in the second equality. This implies $g = \lambda f$ with $\lambda = \|g\|/\|f\|$.
Conversely, if $g = \lambda f$ for some positive number $\lambda$, then $\langle f, g\rangle = \lambda\|f\|^2 = \|f\|\,\|g\|$.

1.24 In general $\|f+g\|^2 = \|f\|^2 + \|g\|^2 + 2\operatorname{Re}\langle f,g\rangle$. If $\|f+g\| = \|f\| + \|g\|$ then we must have $\operatorname{Re}\langle f,g\rangle = \|f\|\,\|g\|$, and if, furthermore, the functions $f$ and $g$ are positive and continuous, then $\operatorname{Re}\langle f,g\rangle = \langle f,g\rangle = \|f\|\,\|g\|$ and (by Exercise 1.23) $f$ and $g$ are linearly dependent. Conversely, if the functions are linearly dependent then $g = \lambda f$ for some number $\lambda$, and $\|f+g\| = \|(1+\lambda)f\| = |1+\lambda|\,\|f\|$. For the equality $\|f+g\| = \|f\| + \|g\|$ to hold we must therefore have $|1+\lambda| = 1 + |\lambda|$, which implies $\lambda \ge 0$.

1.25 The norm $\|x^\alpha\|^2 = \int_0^1 x^{2\alpha}dx$ is finite if, and only if, $2\alpha > -1$, that is, $\alpha > -1/2$.

1.26 Suppose $\lim_{x\to\infty} f(x) = \ell \ne 0$. Then $|\ell| > 0$ and there is a positive integer $n$ such that, for all $x \ge n$, $\big||f(x)| - |\ell|\big| < |\ell|/2$, hence
$$0 < |\ell|/2 < |f(x)| < 3|\ell|/2.$$
But this implies $\int_n^{\infty} |f(x)|^2dx = \infty$, which contradicts the integrability of $|f|^2$ on $(0,\infty)$.

1.28 $\int_a^b |f(x)|\,dx = \langle |f|, 1\rangle \le \|f\|\,\|1\| = \sqrt{b-a}\,\|f\|$ by the CBS inequality. $f(x) = 1/\sqrt{x}$ is integrable on $(0,1)$ but $f^2(x) = 1/x$ is not.

1.29 Suppose $|f(x)| \le M$ for all $x \ge 0$. Then
$$\int_0^{\infty} f^2(x)\,dx \le M\int_0^{\infty} |f(x)|\,dx < \infty.$$
The function $f(x) = (1+x)^{-1}$ is bounded on $[0,\infty)$ and lies in $L^2(0,\infty)$, but is not integrable on $[0,\infty)$.

1.30 Using familiar trigonometric identities,
$$\sin^3 x = \sin x\,(1 - \cos^2 x) = \sin x - \cos x\cdot\frac12\sin 2x = \sin x - \frac14(\sin 3x + \sin x) = \frac34\sin x - \frac14\sin 3x.$$

1.31 There are many answers to this exercise. Any odd function, such as $f(x) = ax$, where $a$ is a non-zero constant, satisfies $\langle f, x^2+1\rangle = 0$, since $x^2+1$ is even. For this choice of $f$, the constant $a$ has to satisfy $\|ax\| = |a|\sqrt{2/3} = 2$, that is, $|a| = \sqrt{6}$. Thus $f(x) = \sqrt{6}\,x$ is one possible answer.

1.32 The weighted $L^2(0,\infty)$ norm of a polynomial $p$ is given by $\left(\int_0^{\infty} p^2(x)e^{-x}dx\right)^{1/2}$. It is therefore sufficient to show that the integral $I = \int_0^{\infty} q(x)e^{-x}dx$ is finite for any polynomial $q$. This follows by integrating by parts, noting that $q(x)e^{-x} \to 0$ as $x \to \infty$, to obtain $I = q(0) + \int_0^{\infty} q'(x)e^{-x}dx$. Similarly, $\int_0^{\infty} q'(x)e^{-x}dx = q'(0) + \int_0^{\infty} q''(x)e^{-x}dx$. If $q$ has degree $n$, we can use induction to show that, in the $n$-th step, the integral is reduced to a constant multiple of $\int_0^{\infty} e^{-x}dx = 1$.

1.33 Using the monotonic property of the integral, if the weight functions satisfy $\rho \le \sigma$ on $[a,b]$, then
$$\|f\|_{\rho}^2 = \int_a^b |f(x)|^2\rho(x)\,dx \le \int_a^b |f(x)|^2\sigma(x)\,dx = \|f\|_{\sigma}^2.$$
Therefore, if $f \in L_{\sigma}^2(a,b)$ then $f \in L_{\rho}^2(a,b)$.
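The trigonometric identity used in Exercise 1.30 is easy to verify numerically; a minimal sketch:

```python
import math

# Sketch: pointwise check of sin^3(x) = (3/4) sin x - (1/4) sin 3x (Exercise 1.30).
pts = [k * 0.05 for k in range(-200, 201)]
err = max(abs(math.sin(x) ** 3 - (0.75 * math.sin(x) - 0.25 * math.sin(3 * x)))
          for x in pts)
```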

1.34 (a) The limit is the discontinuous function
$$\lim_{n\to\infty}\frac{x^n}{1+x^n} = \begin{cases} 1, & |x| > 1 \\ 0, & |x| < 1 \\ 1/2, & x = 1 \\ \text{undefined}, & x = -1. \end{cases}$$
(b) $\displaystyle\lim_{n\to\infty}\sqrt[n]{x} = \begin{cases} 0, & x = 0 \\ 1, & x > 0. \end{cases}$
(c) $\lim_{n\to\infty}\sin nx$ does not exist, except when $x$ is an integral multiple of $\pi$.

1.35 (a) Pointwise (not uniform), by Theorem 1.17(i), since the limit (Exercise 1.34(a)) is discontinuous.

(b) Uniform, since $x^{1/n} \to 1$ for all $x \in [1/2,1]$ and $\sup_{[1/2,1]}|x^{1/n} - 1| \le 1 - (1/2)^{1/n} \to 0$.
(c) Pointwise (not uniform), since the limit is discontinuous at $x = 0$ (see Exercise 1.34(b)).

1.36 $f_n$ is continuous for every $n$, whereas $\lim_{n\to\infty} f_n(x) = \begin{cases} 0, & x = 0 \\ 1, & 0 < x \le 1 \end{cases}$ is discontinuous, hence the convergence $f_n \to f$ is not uniform. Clearly,
$$\lim_{n\to\infty}\int_0^1 f_n(x)\,dx = 1 = \int_0^1 \lim_{n\to\infty} f_n(x)\,dx.$$

1.37 At $x = 0$, $f_n(0) = 0$ for every $n$. When $x > 0$, $f_n(x) = n(1-x)/(n+1) \to 1-x$. Therefore the sequence $f_n$ converges pointwise to
$$f(x) = \begin{cases} 0, & x = 0 \\ 1-x, & 0 < x \le 1. \end{cases}$$
Since $f$ is not continuous, the convergence is not uniform.

1.38 $f_n(0) = f_n(1) = 0$ and, for all $x \in (0,1)$, $|f_n(x)| = nx(1-x^2)^n \le n(1-x^2)^n \to 0$ as $n \to \infty$. Hence $\lim_{n\to\infty} f_n(x) = 0$ on $[0,1]$. Since
$$\lim_{n\to\infty}\int_0^1 f_n(x)\,dx = \lim_{n\to\infty}\frac{n}{2n+2} = \frac12,$$
whereas $\int_0^1 \lim_{n\to\infty} f_n(x)\,dx = 0$, the convergence $f_n \to 0$ is not uniform (by Theorem 1.17(ii)).

1.39 $0 \le \dfrac{x}{n+x} \le \dfrac{a}{n} \to 0$, hence $\dfrac{x}{n+x} \to 0$ uniformly on $[0,a]$. For $x \ge 0$, we also have $\lim_{n\to\infty}\dfrac{x}{n+x} = 0$ pointwise. In this case, assuming $0 < \varepsilon < 1$, the inequality $\dfrac{x}{n+x} < \varepsilon$ cannot be satisfied when $x \ge n\varepsilon/(1-\varepsilon)$, hence the convergence is not uniform.
Another approach: since the statement $|f_n(x) - f(x)| \le \varepsilon$ for all $x \in I$ is equivalent to the requirement that $\sup_{x\in I}|f_n(x) - f(x)| \le \varepsilon$, we see that $f_n \to f$ uniformly on $I$ if, and only if, $\sup_{x\in I}|f_n(x) - f(x)| \to 0$ as $n \to \infty$. When $x \in [0,a]$, $\sup f_n = a/(n+a) \le a/n \to 0$; but when $x \in [0,\infty)$ we have $f_n(n) = 1/2$, hence $\sup f_n \ge 1/2 \nrightarrow 0$.

1.40 The sequence $f_n$ is defined by
$$f_n(x) = \begin{cases} 1/n, & |x| \le n \\ 0, & |x| > n. \end{cases}$$
Since $0 \le f_n(x) \le 1/n \to 0$ for all $x \in \mathbb{R}$, the convergence $f_n \to 0$ is uniform. But $\int_{-\infty}^{\infty} f_n(x)\,dx = 2$, therefore
$$\lim_{n\to\infty}\int_{-\infty}^{\infty} f_n(x)\,dx \ne \int_{-\infty}^{\infty}\lim_{n\to\infty} f_n(x)\,dx,$$
the reason being that the domain of definition of $f_n$ is not bounded.
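The two suprema computed in Exercise 1.39 can be illustrated numerically; in this sketch the interval endpoint $a = 5$ and the sample sizes are arbitrary choices.

```python
# Sketch for Exercise 1.39: f_n(x) = x/(n + x) on [0, a] versus [0, oo).
def f(n, x):
    return x / (n + x)

a = 5.0
# supremum over [0, a]: shrinks to 0 as n grows
sup_on_interval = [max(f(n, a * k / 1000) for k in range(1001)) for n in (10, 100, 1000)]
# but f_n(n) = 1/2 for every n, so the sup over [0, oo) never falls below 1/2
value_at_n = [f(n, n) for n in (10, 100, 1000)]
```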

1.41 Suppose $f_n \to f$ uniformly on $[a,b]$. Given any $\varepsilon > 0$, there is an integer $N_1$ such that
$$n \ge N_1 \ \Rightarrow\ |f_n(x) - f(x)| < \varepsilon \ \text{ for all } x \in [a,b].$$
We can also find an integer $N_2$ such that
$$n \ge N_2 \ \Rightarrow\ |f_n(x) - f(x)| < 1 \ \text{ for all } x \in [a,b].$$
If $N = \max\{N_1, N_2\}$ then, for $n \ge N$ and all $x \in [a,b]$,
$$|f_n^2(x) - f^2(x)| = |f_n(x) - f(x)|\,|f_n(x) + f(x)| \le |f_n(x) - f(x)|\,(1 + 2|f(x)|) < (1 + 2M)\,\varepsilon,$$
where $M = \sup_{[a,b]}|f| < \infty$. This implies $\sup_{[a,b]}|f_n^2(x) - f^2(x)| \to 0$, that is, $f_n^2 \to f^2$ uniformly.

1.42 (a) $|f_n(x)| \le 1/n^2$ for all $x \in \mathbb{R}$. Since $\sum 1/n^2$ converges, $\sum f_n(x)$ converges uniformly on $\mathbb{R}$ by the Weierstrass M-test.
(b) If $x \in (-1,1)$ then there is an integer $N$ such that $|x|^n < 1/2$ for all $n \ge N$, and hence
$$\left|\frac{x^n}{1+x^n}\right| \le \frac{|x|^n}{1 - 1/2} = 2|x|^n \ \text{ for all } n \ge N,$$
from which we conclude that the series converges by comparison with the geometric series. It diverges on $(-\infty,-1] \cup [1,\infty)$, where $f_n(x) \nrightarrow 0$ (see Exercise 1.34(a)).

1.43 $|a_n\sin nx| \le |a_n|$ for all $x \in \mathbb{R}$. Since $\sum |a_n|$ converges, $\sum a_n\sin nx$ converges uniformly.

1.44
$$\int_{n\pi}^{(n+1)\pi}\frac{|\sin x|}{x}\,dx \le \frac{1}{n\pi}\int_{n\pi}^{(n+1)\pi}|\sin x|\,dx = \frac{1}{n\pi}\int_0^{\pi}\sin x\,dx = \frac{2}{n\pi} \to 0 \ \text{ as } n \to \infty.$$
Let $A_n = \int_{n\pi}^{(n+1)\pi} x^{-1}|\sin x|\,dx$. Because $x^{-1}|\sin x| > (x+\pi)^{-1}|\sin(x+\pi)|$ for every $x > 0$, we see that $A_n \ge A_{n+1}$ for all $n$, and $A_n \to 0$. Moreover,
$$\int_0^{(n+1)\pi}\frac{\sin x}{x}\,dx = \sum_{k=0}^{n}\int_{k\pi}^{(k+1)\pi}\frac{\sin x}{x}\,dx = \sum_{k=0}^{n}(-1)^k A_k.$$
Hence $\int_0^{\infty} x^{-1}\sin x\,dx = \sum_{k=0}^{\infty}(-1)^k A_k$, which converges by the alternating series test.
On the other hand,
$$\int_{n\pi}^{(n+1)\pi}\frac{|\sin x|}{x}\,dx \ge \frac{1}{(n+1)\pi}\int_0^{\pi}\sin x\,dx = \frac{2}{(n+1)\pi}$$
$$\Rightarrow\ \int_0^{\infty}\frac{|\sin x|}{x}\,dx = \sum_{n=0}^{\infty}\int_{n\pi}^{(n+1)\pi}\frac{|\sin x|}{x}\,dx \ge \sum_{n=0}^{\infty}\frac{2}{(n+1)\pi} = \infty.$$
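The behaviour of the $A_n$ in Exercise 1.44 (decreasing terms, convergent alternating sum, unbounded absolute sum) can be observed numerically; a sketch with an arbitrary quadrature resolution:

```python
import math

# Sketch for Exercise 1.44: A_k = integral of |sin x|/x over [k*pi, (k+1)*pi],
# approximated by the midpoint rule (panel count arbitrary).
def A(k, panels=20_000):
    lo, hi = k * math.pi, (k + 1) * math.pi
    h = (hi - lo) / panels
    return sum(abs(math.sin(lo + (j + 0.5) * h)) / (lo + (j + 0.5) * h)
               for j in range(panels)) * h

As = [A(k) for k in range(12)]
alt_sum = sum((-1) ** k * a for k, a in enumerate(As))  # partial alternating sum
abs_sum = sum(As)                                       # keeps growing with more terms
```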

1.45 Let r = R ": Then jan xn j  jan j rn for all x 2 [ r; r]: Since 0 < r < R, the P numerical series jan j rn is convergent. By the M-test, with Mn = jan j rn , P the series an xn is uniformly convergent in [ r; r]: P 1.46 Being uniformly convergent on [R "; R+"], the series an xn represents a continuous function f on [R "; R+"]: Since this is true for every " > 0; the series is continuous on ( R; R): Each term fn (x) = an xn is di¤erentiable, P1 fn0 (x) = nan xn 1 ; and the series n=1 nan xn 1 has the same radius of convergence R as the original series (by the root test). Now the power series P1 n 1 ; by the result of Exercise 1.45, is uniformly convergent on n=1 nan x [R "; R + "]. Therefore Theorem 1.17(iii) applies and we conclude that P1 P1 0 f 0 (x) = ( n=0 an xn ) = n=1 nan xn 1 : P1 1.47 With bn = (n + 1)an+1 we can write f 0 (x) = n=0 bn xn and repeat the P1 n 1 = argument in Exercise 1.46 to conclude that f 00 (x) = n=1 nbn x P1 n 2 n(n 1)a x ; and so on to any order of di¤erentiation. By inducn n=2 tion we clearly have f (k) (x) =

1 X

n=k

n! (n

k)!

an xn

k

= k!ak +

(k + 1)! (k + 2)! ak+1 x+ ak+2 x2 +   : 1! 2!

At x = 0 this yields ak = f (k) (0)=k!; k 2 N:

9

1.48 Let the function f (x) be continuous and di¤erentiable up to any order on R: According to Taylor’s theorem we can represent such a function at any x 2 R by f (x) = f (0) +

f 00 (0) 2 f (n) (0) n f (n+1) (c) n+1 f 0 (0) x+ x +  + x + x ; 1! 2! n! (n + 1)!

where c lies between 0 and x: If f (x) = ex ; then f (n) (x) = ex and f (n) (0) = 1 for all n. Thus we obtain ex = 1 +

x x2 xn xn+1 c + +  + + e : 1! 2! n! (n + 1)!

Since, for any x 2 R; xn+1 =(n + 1)! ! 0 as n ! 1, we arrive at the desired power series representation of ex by taking the limit of the right-hand side as n ! 1: If f (x) = cos x, then f (n) (0) = 0 if n is odd and f (n) (0) = ( 1)n=2 if n is even. The remainder term is bounded by xn+1 =(n + 1)! which tends to 0 as P1 n ! 1; and we obtain cos x = n=0 ( 1)n x2n =(2n)!. Similarly we arrive at the given representation for sin x: 1.49 Euler’s formula is obtained by replacing x by ix in the power series which represent ex ; cos x; and sin x; and using the equations i2n = ( 1)n and i2n+1 = ( 1)n i. 1.50 (a) 1: (b) 1. (c) fn (0) = fn (1) = 0: For every x 2 (0; 1); fn (x) = nx(1 n ! 1: Therefore fn (x) ! 0 pointwise. Z 1 2 x2 (1 x)2n dx kfn 0k = n2 0 2

=

2n 2n + 1

Z

x)n ! 0 as

1

x(1

x)2n+1 dx

0

Z 1 n2 (1 x)2n+2 dx = (2n + 1)(n + 1) 0 n2 = ! 0 as n ! 1: (2n + 1)(n + 1)(2n + 3) L2

Hence fn ! 0:

P 1.51 (a) convergent, since k 1.

2=3

2 2P sin kx = ksin kxk k

4=3

=

P

k

4=3


x0 : Integrating this inequality over [x0 ; x], we obtain '(x)  '(x0 ) + (x

x0 )'0 (x0 ) for all x > x0 :

Since '0 (x0 ) < 0, this implies limx!1 '(x) = assumption that '(x) > 0 for all x > 0:

1, which contradicts the

2.12 The solutions of (a) and (c) are oscillatory. 2.13 When  = 1=2; Equation (2.20) becomes u00 + u = 0; whose solution is u(x) = c1 cos x + c2 sin x: Therefore y(x) = x 1=2 (c1 cos x + c2 sin x): The zeros of x 1=2 cos x are f 2 +ng, and those of x 1=2 sin x are fng; n 2 N0 : 2.14 Since limx!1 f (x) = 0 there is a number a such that jf (x)j  1=2 for all x  a: Consequently 1=2  1 + f (x)  3=2 on [a; 1), and we conclude that the solutions of y 00 + (1 + f (x))y = 0 oscillate on [a; 1) by comparison with those of y 00 + y=2 = 0: 2.15 For any " > 0, the solutions of y 00 +xy = 0 oscillate on ["; 1) by comparison with y 00 + "y = 0: On ( 1; "] they cannot oscillate by comparison with y 00 "y = 0: Since " > 0 is arbitrary, the number of zeros for such solutions is in…nite on (0; 1) and …nite on ( 1; 0): 2.16 uLv

vLu = u [(pv 0 )0 + rv]

v [(pu0 )0 + ru]

= p0 (uv 0

vu0 ) + p(uv 00

vu00 )

= p0 (uv 0

vu0 ) + p(uv 0

vu0 )0

= [p(uv 0

vu0 )]0 : p

2.17 The linearly independent solutions of y 00 = y in C 2 (0; 1) are e x when  2 Cnf0g; and f1; xg when  =p 0. Hence the eigenfunctions and corresponding are : (a) e x for  2 Cnf0g; and f1; xg for  = 0; p eigenvalues p x (b) e ; Re  > 0:

15

2.18 Since p = 1; q = 0 = p0 , and r = 0, the operator d2 =dx2 is formally self-adjoint. The given boundary conditions ensure that Equation (2.26) holds, hence it is also self-adjoint. If   0 the only solution of u00 + u = 0 which satis…es the boundary conditions is the trivial solution (see Example 2.17). If  > 0 the solution is p p u(x) = c1 cos x + c2 sin x: The boundarypcondition u(0) = 0 implies c1 = 0, and from u0 () = 0 we conclude that  = 12 (2n + 1): Therefore we obtain the following sequence of eigenvalues and their corresponding eigenfunctions  2   2n + 1 2n + 1 x; n 2 N0 : ; un (x) = sin n = 2 2 The eigenvalues R  are clearly real and it is a simple matter to verify that hun ; um i = 0 un (x)um (x)dx = 0 whenever m 6= n:

2.19 (a)  = 1=x2 ; (b) e

x2 =2

, (c)  = e

x3 =3

; (d) 1=x:

00

2.20 The solutions of u + u p= 0; where  > 0; under the boundary condition u(0) = 0 is u(x) = c sin x: If h = 0 then the second boundary condition u0 (l) = 0 yields the sequence of eigenvalues 2  2n + 1  ; n 2 N0 ; n = 2l and the corresponding sequence of eigenfunctions p un (x) = sin n x; x 2 [0; l]:

If h < 0 then we follow the method of Example 2.17, except that the line y = =hl now has positive, instead of negative, slope. Its points of intersection with the curve y = tan (which determine n ) will therefore be in the upper half plane. It intersects the …rst branch of tan in the …rst interval (0; =2) only if its slope is greater than 1 (the slope of tan at 0), that is, only if 1=hl > 1: From Figure 2.2 we would expect a shift to the right in the value of n as a result, so that n < n < (n + 21 ): Hence the eigenvalues n = 2n =l2 in this case will be greater than the corresponding eigenvalues of Example 2.17, and n behaves like n + 12  for large values of n. 2.21  = e2x ; n = n2  2 + 1, un (x) = e

x

sin n x:

16

2.22 The equation is of the Cauchy-Euler, so we assume a solution of p the form 1 1 m x , where m satis…es m(m 1)  = 0: Therefore m = 2  2 1 4 = p 1 x(c1 x +c2 x ). The 2  : If  < 1=4 then is positive and the solution is boundary conditions give c1 + c2 = 0 and c1 e + c2 e = 0, which implies p c1 = c2 = 0: If  = 1=4; the solution takes the form u(x) = x(c1 +c2 log x); and again p the boundary conditions imply c1 = c2 = 0: If  > 1=4, then m = 21  i  1=4, and the solution of x2 u00 + u = 0 is given by p  p i p h u(x) = x c1 cos  1=4 log x + c2 sin  1=4 log x : The condition u(0) = 0 implies c1 = 0, and u(e) = 0 implies n: Thus the eigenvalues and eigenfunctions are

p



1=4 =

p 1 n = n2  2 + ; un (x) = x sin(n log x); n 2 N: 4 2.23 If   0 the boundary-value problem has only the trivial solution, as shown in Example 2.16. If  p > 0; the solution p of the di¤erential equation can be represented by c1 cos x + c2 sin px as we have been doing, or, more conveniently in this case, by c sin( x + ); where c is a non-zero arbitrary constant and  is an arbitrary phase angle. Applying the boundary conditions to the second expression yields p (b a)  = n; n 2 N: Hence the eigenvalues and eigenfunctions are   n(x a) n2  2 ; un (x) = sin : n = (b a)2 b a 2.24 This is a Cauchy-Euler equation, hence we solve m(m m2 2m +  = 0 to obtain p p 2  4 4 = 1  1 : m= 2

1)

m+ =

If   1 the only solution of the equation which satis…es u(1) = u(e) = 0 is the trivial solution (see Exercise 2.22), so we assume  > 1: In this case the solution is h i p p u(x) = x c1 cos(  1 log x) + c2 sin(  1 log x) and the boundary conditions imply c1 = 0 and eigenvalues and eigenfunctions are

p



1 = n: Hence the

n = n2  2 + 1; un (x) = x sin(n log x); n 2 N:

17

To write the orthogonality relation between the eigenfunctions, we …rst have to put the di¤erential equation in standard Sturm-Liouville form by dividing by x3 (see Equation (2.29)), 1 00 u x

1 0 1 u +  3 u = 0; 2 x x

from which we conclude that the weight function is 1=x: Hence Z e dx hun ; um i = x2 sin(n log x) sin(m log x) 3 x 1 Z 1  = sin n sin n d  0 = 0 for all n 6= m:

2.25 If 2 = 0 then f (a) = g(a) = 0 and (f 0 g f g0 )(a) = 0. If 2 6= 0 then f 0 (a) = 1 f (a)= 2 and g 0 (a) = 1 g(a)= 2 : Therefore (f 0 g f g0 )(a) = 1 2 1 [f (a) g (a) f (a) g (a)] = 0: Similarly (f 0 g f g0 )(b) = 0: 2.26 This follows from the equality p(b) [f 0 (b) g (b)

f (b) g 0 (b)]

p(a)[f 0 (a) g (a) [p(b)

f (a) g 0 (a)] =

p(a)][f 0 (a) g (a)

f (a) g 0 (a)]:

2.27 (a), (b), (c) and (f). 2.28 Boundary conditions (c), (e), and (f) de…ne singular problems because p(0) = 0 in each case. Condition (d) does not satisfy Equation (2.26), so it does not de…ne an SL problem. 2.29 Change the independent variable to  = x + 3 and solve the SL problem  2 0 0  z () + z = 0;  2 [1; 4]; z(1) = z(4) = 0:

p The roots of the equation m2 + m +  = 0 are 12  12 1 4: As in Exercise 2.22, only when  > 1=4 do we get a non-trivial solution, given by  p i h p  1=4 log  + c2 sin  1=4 log  : z() =  1=2 c1 cos Now the boundary conditions at  = 1 implies c1 = 0 and from the condition at  = 4 we arrive at the sequence of eigenvalues  2 n 1 n = + ; n 2 N: log 4 4

18

In terms of the original variables, the corresponding eigenfunctions are   n 1=2 yn (x) = (x + 3) sin log(x + 3) : log 4 2.30 (a) This SL problem is a special case of Exercise 2.20 in which l =  and h = 2. Therefore the positive eigenvalues  are determinedpby the solutions of the transcendental equation tan = =2, where =   > 0: These may be obtained graphically, as in Example 2.17, and form a positive increasing sequence 1 ; 2 ; 3 ; ::: which tends to 1, and n = 2n = 2 : If  = 0 the solution of u00 = 0 is c1 x + c2 , which does not satisfy the boundary conditions u(0) = 0 and 2u() u0 () = 0 unless c1 = c2 = 0: p p x and sinh x and If  < 0; the solution is expressed in terms cosh the resulting transcendental equation which determine the eigenvalues is p tanh = =2 with =   > 0. The function f ( ) = tanh =2 satis…es f (0) = 0; f ( ) ! 1, f 0 (0) = 1 1=2 > 0; and f 0 ( ) = 0 has a unique positive solution where f attains its maximum value. Hence f has a single zero on (0; 1) which we denote 0 ; and the problem therefore has a single negative eigenvalue 0 = 20 = 2 : (b) The eigenfunctions corresponding to the positive eigenvalues are p un (x) = sin n x = sin( n x=); n 2 N; and the eigenfunction corresponding to 0 =

20 = 2 is

u0 (x) = sinh( 0 x=): 2.31 Imposing p the boundary conditions on the solution u(x) = c1 cos c2 sin x leads to the pair of equations p p  p p  c2 = c1 ;  c2 cos l c1 sin l = 0: Together these imply

p

x +

p 1 p = tan l:  The solutions of tan = l= in (0; 1) is an increasing sequence 1 ; 2 ; 3 ; ::: such that (n 1) < n < (n 21 ); which is de…ned by the points of intersection of tan and the hyperbola l= : The eigenvalues and eigenfunctions are     n n n n = 2n =l2 ; un (x) = cos x + sin x , n 2 N: l l l  = 0 is not an eigenvalue.

19

If  < 0 we follow the procedure of the solution p to Exercise 2.30 to conclude  > 0: that tanh = l= has no solution on = 2.32 Multiply the di¤erential equation by u  and integrate over [a; b] to obtain Z b Z b Z b 2 2 (pu0 )0 u  dx + r juj dx +  juj dx = 0 a

Z

a

b

a

0 2

p ju j dx +

Z

a

b

a

2

r juj dx + 

Z

b

2

juj dx = 0;

a

where we integrated by parts and used the boundary conditions to arrive at the second equation. Therefore Z b Z b Z b 2 2 2  juj dx = p ju0 j dx r juj dx a

a

  Dividing by

Rb a

a

Z

b

a

c

Z

a

2

r juj dx b

2

juj dx:

2

juj dx yields the desired result.

2.33 By the CBS inequality and Equation (2.42) jhT u; uij  kT uk  kT k for all u 2 C([a; b]), kuk = 1: Therefore sup jhT u; uij  kT k :

kuk=1

On the other hand, hT (u + v); u + vi = hT u; ui + hT v; vi + 2 RehT u; vi hT (u

v); u

vi = hT u; ui + hT v; vi

2 RehT v; ui

Let N (T ) denote supkuk=1 jhT u; uij : For any w 6= 0 we can then write 2 2 jhT w; wij = kwk jhT w= kwk ; w= kwkij  kwk N (T ): Consequently, 4 RehT u; vi = hT (u + v); u + vi

hT (u

v); u

 jhT (u + v); u + vij + jhT (u 2

v); u

vi vij

2

 ku + vk N (T ) + ku vk N (T )   2 2  2 kuk + kvk N (T ):

Setting kuk = 1 and v = T u= kT uk ; we obtain kT k = sup kT uk  N (T ):

20

Chapter 3 Rl 3.1 heinx=l ; eimx=l i = l ei(n m)x=l dx = i(n 0 for all m 6= n: p

einx=l = einx=l = einx=l = 2l; n 2 Z:

l m)

 i(n e

m)

e

i(n m)



=

3.2 No, because its sum is discontinuous at x = 0 (Example 3.4). 3.3 a0 = an = bn =

1 2 Z

Z

Z

1

f (x)dx = 1

1

1 2

Z

1

dx =

0

f (x) cos nx dx = 1 1

0

Z

1 ; 2

1

cos nx dx =

0

1 sin nx dx = (1 n

cos n) =

1 sin n = 0; n

1 [1 n

( 1)n ]; n 2 N:

Hence f (x) =

1 2 + 2 

  1 1 sin x + sin 3x + sin 5x +    : 3 5

4 X1 1  + cos(2n + 1)x: Uniformly convergent by n=0 (2n + 1)2 2  the Weierstrass M-test with Mn = (4=)(2n + 1) 2 :

3.4 

jxj =

3.5 1 a0 = 4 1 an = 2

Z Z

2

(x2 + x)dx = 2 2

(x2 + x) cos 2 2

4 ; 3

16 n x dx = 2 2 ( 1)n ; 2 n 

4 n x dx = ( 1)n ; n 2 N: 2 n 2   1 4 4 X n n 4 2 n x +x= + cos x sin x ( 1) 3 n n=1 n 2 2 bn =

1 2

Z

(x2 + x) sin

The convergence of the Fourier series to f (x) = x2 + x on the interval [ 2; 2] is not uniform for the following reason. Since f is piecewise smooth, its Fourier series Sn converges pointwise on [ 2; 2] to some function S: If the convergence were uniform S would be continuous on [ 2; 2]: But, f being in L2 ( 2; 2); we also have kf Sk = 0 by Theorem 3.2. Because

21

both f and S are continuous, f = S pointwise on [ 2; 2]; because otherwise equality would not hold in L2 : But S( 2) = S(2) whereas f (2) 6= f ( 2); so S cannot be continuous, and this contradicts the assumption that the convergence of Sn to f is uniform. It turns out (see Corollary 3.15) that the periodicity of f , as well as its continuity, are necessary for the convergence to be uniform. 3.6 Use the M-test. 3.7 The series Sn (x) = kSn

Pn

k=1 (ak

cos kx + bk sin kx) satis…es

2 P n 2 Sm k = k=m+1 (ak cos kx + bk sin kx) Pn =  k=m+1 (a2k + b2k );

for all n > m; due to the orthogonality of fcos kx; sin kxg inL2 ( ; ). Therefore (Sn ) is a Cauchy sequence in L2 ( ; ); and hence convergent, Pn if and only if k=m+1 (a2k + b2k ) converges.

3.8 Since f is periodic in p; Z x Z x f (t)dt = f (t + p)dt 0 0 Z x+p = f (t)dt p

=

Z

x

f (t)dt +

p

Z

x+p

f (t)dt;

x

from which Equation (3.14) follows. 3.9 (a) and (c) are piecewise continuous. (b), (d) and (e) are piecewise smooth. 3.10 f is continuous on R: f 0 (x) = 1 when x > 0; and f 0 (x) = hence f is piecewise smooth on R: At x = 0; lim

h!0

f (h)

f (0) h

= lim

h!0

1 when x < 0,

jhj h

does not exist, hence f is not di¤erentiable at x = 0: The function f (x) = [x] is piecewise smooth on R but not di¤erentiable at any n 2 Z: 3.11 f is clearly smooth on Rnf0g: At x = 0, we have f (0) h = lim h sin(1=h)

f 0 (0) = lim

h!0

h!0

= 0;

f (h)

22

hence f is di¤erentiable at 0: On the other hand, for all x 6= 0; f 0 (x) = 2x sin(1=x)

cos(1=x)

does not approach a limit as x ! 0+ or x ! 0 ; so f is not piecewise smooth on R: 3.12 According to De…nition 3.6, if f is piecewise smooth on ( l; l) it is also piecewise smooth on [ l; l]: Being periodic in 2l, f is therefore piecewise smooth on any interval of the form [kl; (k +2)l] , where k is any integer, and hence on any …nite union of such intervals. If I is any …nite interval in R; then there is a positive integer n such that I  [ nl; nl] = [nk= 1 n [kl; (k + 2)l], from which we conclude that f is piecewise smooth on I: By De…nition 3.6, f is therefore piecewise smooth on I. 3.13 Let I be a …nite interval and f be piecewise smooth on I. Then both f and f 0 are continuous on I except at a …nite number of points fx1 ; x2 ; :::; xn g, where the left-hand and right-hand limits exist, and the one-sided limits at the endpoints of I also exist. Similarly for g. If f 1 ;  2 ; :::;  m g are the points of (jump) discontinuity of g and g 0 , then it follows that the functions f + g and (f +g)0 = f 0 +g 0 are continuous on I except at fx1 ; :::; xn g[f 1 ; :::;  m g; where the left-hand and right-hand limits exist, as well as the one-sided limits at the endpoints of I: If I is unbounded, then f and g, and therefore f +g, are smooth on any bounded subinterval. f =g is also piecewise smooth on I provided g has no zeros in I: If g(x) = 0 on some subset A of I; then f =g is only de…ned on InA. It is piecewise smooth on InN (A), where N (A) is any neighborhood of A; that is, an open set in I which contains A: This is to ensure the existence of the limit of f (x)=g(x) as x approaches a zero of g either from the left or from the right. 3.14 From the expression  sin n + 21 Dn ( ) = ; 6= 0; 2; 4; ::: 2 sin 12  it is clear that Dn ( ) = 0 when n + 12 = k; that is, when = 2k=(2n + 1) for k = 1; 2; ::: . 
Therefore the zeros of Dn are located at 4 8 2 ; ; ; 2n + 1 2n + 1 2n + 1 1 Its maximum value is 2 (2n + 1), located at the points = 2k; k 2 Z: n X1 ( 1) 3.15 (a) 2 sin nx: n=1 n 4 X1 1 1 cos(2k + 1)x: (b) k=0 (2k + 1)2 2 2

23

1 2

1 cos 2x: 2 1 3 (d) cos3 x = cos x + cos 3x: 4 4 1 X ( 1)n  (e) 21 sinh 2 + sinh 2 cos 2 2 1 + n 4 n=1 (c)

(f)

1 X

n=1

( 1)n



12l3 n3  3

2l3 n



sin

n l x



n 2 x



n 2

sinh 2 sin

n 2 x



:

:

3.16 The convergence is uniform where f is continuous on R, hence in (b), (c), and (d). 3.17 In Exercise 3.15 (e), S(2) = 21 [f (2+ ) + f (2 )] = 21 (e2 + e S(l) = 0:

2

), and in (f),

3.18 Using the fact that x = =2 is a point of continuity for f , we have f (=2) = S(=2); where S(x) is the Fourier series obtained in Exercise 3.15(a). Therefore 1 X  ( 1)n+1 n =2 sin : 2 n 2 n=1

Since sin(n=2) is 0 when n is even and ( 1)k when n = 2k + 1; k 2 N0 ; we obtain 1 X ( 1)k : =4 2k + 1 k=0

3.19 Setting x =  in the Fourier expansion of jxj (Exercise 3.15(b)) yields 2

 =8

1 X

1 : (2n + 1)2 n=0

3.20 Setting t = 0 in the Fourier series representation of f (t), we obtain =2

4

1 X ( 1)n : 4n2 1 n=1

X1 ( 1)n 2 +4 cos nx: Evaluating at x =  and x = 0, we arrive 3.21 x2 = n=1 3 n2 at X1 1 2 (a) = : 2 n=1 n 6 X1 ( 1)n 2 = : (b) 2 n=1 n 12

24

By adding and subtracting these results, we obtain (c) (d)

X1

n=1

X1

n=1

1 2 = : (2n)2 24 1 (2n

1)2

=

2 : 8

3.22 jsin xj is an even function whose Fourier coe¢cients are a0 = 2=; an = 0 when n is odd, and an = 4=(n2 1) when n is even. Its series representation is therefore jsin xj =

1 1 4X cos 2nx: 2  n=1 (4n 1)

2 

This series is uniformly convergent by the M-test with Mn = (4n2 Setting x = 0 and x = =2 gives 1 X

n=1

1 (4n2

1)

=

1)

1

:

1 1 X ( 1)n+1  2 ; = : 2 2 n=1 (4n 1) 4

3.23 Since sin x is a C 1 function, f (x) = jsin xj is continuous on R whereas f 0 (x) and f 00 (x) are continuous except at the points x = n; n 2 Z; where f vanishes. Since f 0 (n  ) = 1 and f 00 (n  ) = 0; the functions f 0 and f 00 are piecewise continuous, hence f 0 is piecewise smooth. f 0 is an odd function which is periodic in i with f 0 (x) = cos x on [0; ]: h 1+( 1)n ; n > 1; b1 = 0, and its series Its Fourier coe¢cients are bn = 2n  n2 1 representation is 1 8X k S(x) = sin 2kx:  4k 2 1 k=1

S(n) of f 0 ).

= 12 [f 0 (n + ) + f 0 (n )] = 21 [1 + ( 1)] = 0 (points of S( 2 + n) = f 0 ( 2 + n) = 0 (points of continuity).

discontinuity

3.24 (a) Since $f$ is continuous on $[0,l]$ and the extensions are periodic with period $2l$, we need only check continuity at $x=0$ and $x=l$. Being an even function, $\tilde f_e$ has the same left-hand and right-hand limits at $x=0$ and, being periodic as well, it satisfies $\lim_{x\to l^+}\tilde f_e(x) = \lim_{x\to(-l)^+}\tilde f_e(x) = \lim_{x\to l^-}\tilde f_e(x)$. Hence $\tilde f_e$ is continuous on $\mathbb{R}$. $\tilde f_o$, on the other hand, is continuous at $0$ if, and only if, $\lim_{x\to 0^-}\tilde f_o(x) = \lim_{x\to 0^+}\tilde f_o(x)$. But since
\[ \lim_{x\to 0^-}\tilde f_o(x) = \lim_{x\to 0^+}\tilde f_o(-x) = -\lim_{x\to 0^+}\tilde f_o(x), \]
it is clear that we have continuity at $x=0$ if, and only if, $\lim_{x\to 0^+}\tilde f_o(x) = f(0) = 0$. A similar argument shows that continuity at $x=l$ is achieved if, and only if, $\lim_{x\to l^-}\tilde f_o(x) = f(l) = 0$.
(b)
\[ \tilde f_e(x) = a_0 + \sum_{n=1}^{\infty} a_n\cos\frac{n\pi}{l}x, \qquad \tilde f_o(x) = \sum_{n=1}^{\infty} b_n\sin\frac{n\pi}{l}x, \]
where
\[ a_0 = \frac1l\int_0^l f(x)\,dx, \qquad a_n = \frac2l\int_0^l f(x)\cos\frac{n\pi x}{l}\,dx, \qquad b_n = \frac2l\int_0^l f(x)\sin\frac{n\pi x}{l}\,dx. \]
(c) $a_0 = \int_0^1 x\,dx = 1/2$, $a_n = 2\int_0^1 x\cos n\pi x\,dx = -2[1-(-1)^n]/n^2\pi^2$, $b_n = 2\int_0^1 x\sin n\pi x\,dx = -2(-1)^n/n\pi$. Therefore
\[ \tilde f_e(x) = \frac12 - \frac{4}{\pi^2}\sum_{k=0}^{\infty}\frac{1}{(2k+1)^2}\cos(2k+1)\pi x, \qquad \tilde f_o(x) = \frac{2}{\pi}\sum_{n=1}^{\infty}\frac{(-1)^{n+1}}{n}\sin n\pi x. \]
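As a numerical illustration (assuming $l=1$ and $f(x)=x$ as in part (c), which is not spelled out in this sketch), both series should reproduce $f$ at an interior point:

```python
import math

x = 0.4
cosine_series = 0.5 - (4 / math.pi**2) * sum(
    math.cos((2 * k + 1) * math.pi * x) / (2 * k + 1)**2 for k in range(5000))
sine_series = (2 / math.pi) * sum(
    (-1)**(n + 1) * math.sin(n * math.pi * x) / n for n in range(1, 20001))
```

The cosine series of the even extension converges absolutely, while the sine series converges only conditionally, so the latter needs far more terms for the same accuracy.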

3.25 (a) The even periodic extension of $\sin x$ from $[0,\pi]$ to $\mathbb{R}$ is a continuous function which coincides with its Fourier (cosine) series expansion at every point in $\mathbb{R}$. At $x=0$ and $x=\pi$, the function is $0$.
(b) The odd periodic extension of $f(x) = \cos x$ has jump discontinuities at integral multiples of $\pi$, where the left-hand and right-hand limits have opposite signs. $f(0) = 1$, $f(\pi) = -1$, $S(0) = S(\pi) = 0$.
(c) As in (b), the periodic extension here is also not continuous at integral multiples of $\pi$, and we have $f(0) = 1$, $f(\pi) = e^{-\pi}$, $S(0) = S(\pi) = 0$.

3.26 The function $f(x) = |x|^{1/2}$ being even and continuous on $[-\pi,\pi]$, it suffices to show that its cosine series $\sum a_n\cos nx$ converges uniformly on $[-\pi,\pi]$.

For $n \in \mathbb{N}$,
\[ \begin{aligned}
a_n &= \frac{2}{\pi}\int_0^{\pi} x^{1/2}\cos nx\,dx
= \frac{2}{\pi}\left[\frac{1}{n}x^{1/2}\sin nx\right]_0^{\pi} - \frac{1}{\pi n}\int_0^{\pi} x^{-1/2}\sin nx\,dx \\
&= -\frac{1}{\pi n}\int_0^{\pi} x^{-1/2}\sin nx\,dx
= -\frac{1}{\pi n^{3/2}}\int_0^{n\pi} y^{-1/2}\sin y\,dy.
\end{aligned} \]
The integral
\[ \int_0^{n\pi} y^{-1/2}\sin y\,dy = \sum_{k=0}^{n-1}\int_{k\pi}^{(k+1)\pi} y^{-1/2}\sin y\,dy = \sum_{k=0}^{n-1}(-1)^k A_k \]
converges by the alternating series test, where $A_k = \int_{k\pi}^{(k+1)\pi} y^{-1/2}|\sin y|\,dy$ is a decreasing sequence which tends to $0$. Hence $\int_0^{n\pi} y^{-1/2}\sin y\,dy$ is bounded by some positive constant $M$, and it follows that the numerical series $\sum |a_n|$ converges by comparison with $\sum M/n^{3/2}$. $\sum a_n\cos nx$ therefore converges uniformly in $[-\pi,\pi]$ to a continuous function $S(x)$. With $|x|^{1/2}$ in $L^2(-\pi,\pi)$ we also know from Theorem 3.2 that $\sum a_n\cos nx$ converges to $|x|^{1/2}$ in $L^2(-\pi,\pi)$. Since both $S(x)$ and $|x|^{1/2}$ are continuous, they coincide and the convergence is pointwise in $[-\pi,\pi]$.

3.27 (a) Solving the heat equation by separation of variables, and applying the boundary conditions at $x=0$ and $x=\pi$, yields the sequence of solutions $u_n(x,t) = b_n e^{-n^2 t}\sin nx$, $n \in \mathbb{N}$. Applying the initial condition to the general solution $u(x,t) = \sum b_n e^{-n^2 t}\sin nx$, and using the identity
\[ \sin^3 x = \frac34\sin x - \frac14\sin 3x, \]
leads to
\[ u(x,t) = \frac34 e^{-t}\sin x - \frac14 e^{-9t}\sin 3x. \]
(b) $u(x,t) = e^{-k\pi^2 t/4}\sin\dfrac{\pi x}{2} - 2e^{-25k\pi^2 t/36}\sin\dfrac{5\pi x}{6}$.

3.28 Suppose $v(x,t)$ is a solution of the heat equation and satisfies the homogeneous boundary conditions $v(0,t) = v(l,t) = 0$. Then we know (see Equation (3.30)) that it has the form
\[ v(x,t) = \sum_{n=1}^{\infty} b_n e^{-(n\pi/l)^2 t}\sin\frac{n\pi}{l}x. \]

Assume $u(x,t) = v(x,t) + \psi(x)$, where $\psi$ is the correction on $v$ which is needed for $u$ to satisfy the nonhomogeneous conditions. To determine $\psi$, substitute into the heat equation and use the fact that both $u$ and $v$ are solutions, to obtain $\psi''(x) = 0$. Thus $\psi(x) = ax+b$. Now apply the nonhomogeneous boundary conditions to $u = v+\psi$ to conclude that $\psi(0) = T_0$ and $\psi(l) = T_1$, and therefore
\[ \psi(x) = \frac{T_1 - T_0}{l}x + T_0. \]
Finally, apply the initial condition at $t=0$ to $u = v+\psi$ in order to determine $b_n$:
\[ u(x,0) = v(x,0) + \psi(x), \qquad f(x) = \sum_{n=1}^{\infty} b_n\sin\frac{n\pi}{l}x + \frac{T_1-T_0}{l}x + T_0 \]
\[ \Rightarrow\quad b_n = \frac2l\int_0^l\left[f(x) - \frac{T_1-T_0}{l}x - T_0\right]\sin\frac{n\pi}{l}x\,dx. \]
This determines the solution $v(x,t) + \psi(x)$ completely.

3.29 Use the procedure outlined in Section 3.3.2 to obtain
\[ u(x,t) = \sum_{n=1}^{\infty} a_n\sin\frac{n\pi}{l}x\cos\frac{n\pi}{l}t, \qquad a_n = \frac2l\int_0^l x(l-x)\sin\frac{n\pi}{l}x\,dx. \]
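Separated solutions of this kind are easy to spot-check. The sketch below (not part of the manual) verifies the $\sin^3$ identity used in Exercise 3.27(a) and checks, by finite differences, that the resulting $u(x,t) = \tfrac34 e^{-t}\sin x - \tfrac14 e^{-9t}\sin 3x$ satisfies $u_t = u_{xx}$:

```python
import math

def u(x, t):
    # solution of Exercise 3.27(a)
    return 0.75 * math.exp(-t) * math.sin(x) - 0.25 * math.exp(-9 * t) * math.sin(3 * x)

# identity sin^3 x = (3/4) sin x - (1/4) sin 3x at a few sample points
identity_err = max(
    abs(math.sin(x)**3 - (0.75 * math.sin(x) - 0.25 * math.sin(3 * x)))
    for x in (0.3, 1.1, 2.5))

# residual of the heat equation u_t = u_xx at an interior point
h = 1e-4
x0, t0 = 1.2, 0.5
u_t = (u(x0, t0 + h) - u(x0, t0 - h)) / (2 * h)
u_xx = (u(x0 + h, t0) - 2 * u(x0, t0) + u(x0 - h, t0)) / h**2
residual = abs(u_t - u_xx)
```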

3.30 The general solution of the wave equation under the boundary conditions at $x=0$ and $x=l$ is given by Equation (3.40), with
\[ a_n = \frac2l\int_0^l f(x)\sin\frac{n\pi}{l}x\,dx, \qquad b_n = \frac{2}{cn\pi}\int_0^l g(x)\sin\frac{n\pi}{l}x\,dx. \]
D'Alembert's formula can be proved by substituting the above expressions for $a_n$ and $b_n$ back into Equation (3.40),
\[ u(x,t) = \sum_{n=1}^{\infty}\left(a_n\cos\frac{cn\pi}{l}t + b_n\sin\frac{cn\pi}{l}t\right)\sin\frac{n\pi}{l}x, \]
and using the periodicity properties of $\tilde f$ and $\tilde g$.

3.31 Assume $u(x,t) = v(x,t) + \psi(x)$, where $v$ satisfies the homogeneous wave equation with homogeneous boundary conditions at $x=0$ and $x=l$. This leads to $\psi(x) = \dfrac{g}{2c^2}(x^2 - lx)$ and
\[ v(x,t) = \sum_{n=1}^{\infty} a_n\sin\frac{n\pi}{l}x\cos\frac{cn\pi}{l}t, \qquad a_n = -\frac2l\int_0^l \psi(x)\sin\frac{n\pi}{l}x\,dx. \]

3.32 Let $u(x,t) = v(x)w(t)$ and substitute into the differential equation to arrive at the pair of equations $v'' + \lambda^2 v = 0$ and $w'' + kw' + \lambda^2 c^2 w = 0$. Under homogeneous boundary conditions, the first equation is solved by
\[ v_n(x) = a_n\sin\frac{n\pi}{10}x, \qquad n \in \mathbb{N}. \]
The general solution of the second equation is
\[ w_n(t) = e^{-kt/2}(b_n\cosh\mu_n t + c_n\sinh\mu_n t), \qquad \mu_n = \tfrac12\sqrt{k^2 - 4(cn\pi/10)^2}, \]
assuming $k^2 \ne 4(cn\pi/10)^2$. Since the string is released from rest, $u_t(x,0) = v(x)w'(0) = 0$ for all $x\in(0,10)$; hence $w'(0)=0$. This implies $c_n = kb_n/2\mu_n$ and
\[ w_n(t) = b_n e^{-kt/2}\left(\cosh\mu_n t + \frac{k}{2\mu_n}\sinh\mu_n t\right). \]
Before applying the initial (nonhomogeneous) condition we use superposition to form the solution
\[ u(x,t) = \sum_{n=1}^{\infty} v_n(x)w_n(t). \]
Now $u(x,0) = \sum_{n=1}^{\infty} v_n(x)w_n(0) = \sum_{n=1}^{\infty} d_n\sin(n\pi x/10) = 1$ implies
\[ d_n = \frac{2}{10}\int_0^{10}\sin\frac{n\pi x}{10}\,dx = \frac{2}{n\pi}(1-\cos n\pi), \]
\[ u(x,t) = \sum_{n=1}^{\infty} d_n\sin\frac{n\pi x}{10}\,e^{-kt/2}\left(\cosh\mu_n t + \frac{k}{2\mu_n}\sinh\mu_n t\right). \]
It should be noted that, as $n$ increases, there is a positive integer $N$ such that $k^2 - 4(cn\pi/10)^2 < 0$ and $\mu_n = \frac{i}{2}\sqrt{4(cn\pi/10)^2 - k^2} = i\nu_n$ for all $n \ge N$, where $\nu_n$ is a positive number. In the above infinite series representation of $u$, the sum over $n \ge N$ converges because $\cosh i\nu_n t = \cos\nu_n t$ and $\frac{1}{i\nu_n}\sinh i\nu_n t = \frac{1}{\nu_n}\sin\nu_n t$.

3.33 Assume $u(x,y,t) = v(x,y)w(t)$ and conclude that $w''/w = c^2\Delta v/v = -\lambda^2 c^2$ (separation constant). Hence $w(t) = A\cos\lambda ct + B\sin\lambda ct$. Assume $v(x,y) = X(x)Y(y)$, and use the given boundary conditions to conclude that
\[ \lambda = \lambda_{mn} = \pi\sqrt{\frac{m^2}{a^2} + \frac{n^2}{b^2}}, \qquad m,n \in \mathbb{N}, \]
\[ X(x) = \sin\frac{m\pi}{a}x, \qquad Y(y) = \sin\frac{n\pi}{b}y, \]
\[ u_{mn}(x,y,t) = (A_{mn}\cos\lambda_{mn}ct + B_{mn}\sin\lambda_{mn}ct)\sin\frac{m\pi}{a}x\sin\frac{n\pi}{b}y. \]

Apply the initial conditions to the solution
\[ u(x,y,t) = \sum_{n=1}^{\infty}\sum_{m=1}^{\infty} u_{mn}(x,y,t) \]
to evaluate the coefficients. This yields
\[ A_{mn} = \frac{4}{ab}\int_0^a\!\!\int_0^b f(x,y)\sin\frac{m\pi}{a}x\sin\frac{n\pi}{b}y\,dxdy, \]
\[ B_{mn} = \frac{4}{\lambda_{mn}c\,ab}\int_0^a\!\!\int_0^b g(x,y)\sin\frac{m\pi}{a}x\sin\frac{n\pi}{b}y\,dxdy. \]
For the series representing $u$ to converge, both $f$ and $g$ are assumed to be piecewise smooth in $x$ and $y$.

3.34 Let $u(x,y) = v(x)w(y)$ and substitute into $u_{xx} + u_{yy} = 0$ to arrive at
\[ \frac{v''}{v} = -\frac{w''}{w} = -\lambda^2. \]
The sign of the separation constant determines whether $v$ is a linear combination of $\sin\lambda x$ and $\cos\lambda x$ and $w$ is a linear combination of the corresponding hyperbolic functions $\sinh\lambda y$ and $\cosh\lambda y$ (corresponding to $-\lambda^2$), or vice-versa. But since, eventually, we shall need to expand $f(x)$ and $g(x)$ in Fourier series, we should obviously opt for $-\lambda^2$. In any case, choosing the right sign for $\lambda^2$ at the beginning is a question of convenience, for if we choose the wrong sign the worst that can happen is that $\lambda$ will turn out to be imaginary. Thus
\[ v(x) = \alpha\cos\lambda x + \beta\sin\lambda x, \qquad w(y) = \gamma\cosh\lambda y + \delta\sinh\lambda y. \]
The boundary conditions $u(0,y) = u(a,y) = 0$ imply $v_n(x) = \beta_n\sin\lambda_n x$, where $\lambda_n = n\pi/a$. Before imposing the nonhomogeneous boundary conditions, we express $u$ as the formal sum
\[ u(x,y) = \sum_{n=1}^{\infty}(c_n\cosh\lambda_n y + d_n\sinh\lambda_n y)\sin\lambda_n x. \]
The remaining boundary conditions yield
\[ f(x) = \sum_{n=1}^{\infty} c_n\sin\lambda_n x, \]
from which $c_n = \frac2a\int_0^a f(x)\sin(n\pi x/a)\,dx$, and
\[ g(x) = \sum_{n=1}^{\infty}\lambda_n(d_n\cosh\lambda_n b + c_n\sinh\lambda_n b)\sin\lambda_n x, \]

from which $\lambda_n(d_n\cosh\lambda_n b + c_n\sinh\lambda_n b) = \frac2a\int_0^a g(x)\sin(n\pi x/a)\,dx$. This completely determines the coefficients $c_n$ and $d_n$ in the series representation of $u$. To ensure the convergence of the series, the functions $f$ and $g$ obviously have to be piecewise smooth.

3.35 Here again, as in Exercise 3.34, the separated solution of Laplace's equation is a product of trigonometric and hyperbolic functions. But in this case the homogeneous boundary conditions $u(0,y) = u(x,0) = 0$ annihilate the coefficients of $\cosh\lambda y$ and $\cos\lambda x$, the condition at $x=1$ determines the admissible discrete values of $\lambda$, and the nonhomogeneous boundary condition $u(x,1) = \sin\frac{3\pi}{2}x$ reduces the infinite formal sum for $u(x,y)$ to the single term
\[ u(x,y) = \left(\sinh\frac{3\pi}{2}\right)^{-1}\sin\frac{3\pi}{2}x\,\sinh\frac{3\pi}{2}y. \]

3.36 Starting with the solution of Laplace's equation
\[ u(x,y) = (a\cos\lambda x + b\sin\lambda x)(c\cosh\lambda y + d\sinh\lambda y), \]
the conditions at $x=0$ and $x=\pi$ imply $b=0$ and $\lambda = n$. If $\lambda = 0$, the solution is simply a constant. If $\lambda \ne 0$, then $b=0$ and $\lambda = n \in \mathbb{N}$, and we can take $a=1$ without loss of generality. The condition at $y=0$ implies $d\cos nx = \cos x$, hence $n = d = 1$. The condition at $y=\pi$ implies $(\cosh\pi + c\sinh\pi)\cos x = 0$, hence $c = -\coth\pi$. Thus the solution is
\[ u(x,y) = \cos x(\sinh y - \coth\pi\cosh y) + c_0, \]
where $c_0$ is an arbitrary constant. Thus the solution is determined up to an additive constant. This is consistent with the fact that this is a Neumann problem for Laplace's equation, where the boundary conditions are all on the derivatives of $u$.

3.37 (a) Substituting $u(r,\theta) = v(r)w(\theta)$ into Laplace's equation yields the two equations
\[ w'' + \lambda^2 w = 0, \qquad r^2 v'' + rv' - \lambda^2 v = 0, \]
whose solutions are, respectively,
\[ w(\theta) = \begin{cases} a_0 + b_0\theta, & \lambda = 0 \\ a_\lambda\cos\lambda\theta + b_\lambda\sin\lambda\theta, & \lambda \ne 0, \end{cases} \qquad v(r) = \begin{cases} c_0 + d_0\log r, & \lambda = 0 \\ c_\lambda r^\lambda + d_\lambda r^{-\lambda}, & \lambda \ne 0. \end{cases} \]
Since $u(r,\theta+2\pi) = u(r,\theta)$ for all $\theta$ in $(-\pi,\pi]$, it follows that $b_0 = 0$ when $\lambda = 0$, and
\[ a_\lambda\cos\lambda(\theta+2\pi) + b_\lambda\sin\lambda(\theta+2\pi) = a_\lambda\cos\lambda\theta + b_\lambda\sin\lambda\theta \]

when $\lambda \ne 0$. This last equation implies $\lambda = n \in \mathbb{N}$, and hence the sequence of solutions
\[ u_n(r,\theta) = \begin{cases} c_0 + d_0\log r, & n = 0 \\ (c_n r^n + d_n r^{-n})(a_n\cos n\theta + b_n\sin n\theta), & n \in \mathbb{N}. \end{cases} \]
(b) Use the fact that $u$ must be bounded at $r=0$ to eliminate the coefficients $d_n$ for all $n = 0,1,2,\ldots$, then apply the boundary condition. The series $A_0 + \sum_{n=1}^{\infty}(r/R)^n(A_n\cos n\theta + B_n\sin n\theta)$ is clearly convergent for all $r < R$ by the M-test.
(c) Here $\log r$ and the positive powers of $r$ are unbounded outside the circle $r = R$. The solution is therefore $u(r,\theta) = A_0 + \sum_{n=1}^{\infty}(R/r)^n(A_n\cos n\theta + B_n\sin n\theta)$, which converges in $r > R$.

Chapter 4

4.1 With $u(x) = P_3(x) = \frac12(5x^3 - 3x)$, $u'(x) = \frac32(5x^2-1)$, $u''(x) = 15x$, and $\lambda = 3(3+1) = 12$, we have
\[ (1-x^2)P_3''(x) - 2xP_3'(x) + 12P_3(x) = 15x - 15x^3 - 15x^3 + 3x + 30x^3 - 18x = 0, \]
hence Equation (4.4) is satisfied. It is also satisfied if $u = P_4$ and $\lambda = 4(4+1) = 20$.

4.2 $y_1 = 1$, $y_2(x) = x$, $y_3 = x^2 - \frac13$, $y_4 = x^3 - \frac35 x$, $y_5 = x^4 - \frac67 x^2 + \frac{3}{35}$, $y_6 = x^5 - \frac{10}{9}x^3 + \frac{5}{21}x$. Clearly, we have $y_1 = P_0$, $y_2 = P_1$, $y_3 = \frac23 P_2$, $y_4 = \frac25 P_3$, $y_5 = \frac{8}{35}P_4$, $y_6 = \frac{8}{63}P_5$.
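The identifications in Exercise 4.2 can be confirmed with numpy's Legendre module (an illustrative check, not part of the manual):

```python
import numpy as np
from numpy.polynomial import legendre as L

x = np.linspace(-1.0, 1.0, 9)
# y4 = x^3 - (3/5) x should equal (2/5) P3(x)
err4 = np.max(np.abs((x**3 - 0.6 * x) - 0.4 * L.legval(x, [0, 0, 0, 1])))
# y6 = x^5 - (10/9) x^3 + (5/21) x should equal (8/63) P5(x)
err6 = np.max(np.abs((x**5 - (10 / 9) * x**3 + (5 / 21) * x)
                     - (8 / 63) * L.legval(x, [0, 0, 0, 0, 0, 1])))
```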

4.3 From the recursion formula (4.8) with $k = 2j$, it follows that
\[ \lim_{j\to\infty}\frac{\left|c_{2(j+1)}x^{2(j+1)}\right|}{\left|c_{2j}x^{2j}\right|} = x^2 < 1 \quad\text{for all } x \in (-1,1). \]
The same conclusion holds if $k = 2j+1$. Since $Q_0'(x) = (1-x^2)^{-1}$,
\[ \lim_{x\to 1}(1-x^2)Q_0'(x) = 1, \]
whereas
\[ \lim_{x\to 1}(1-x^2)Q_0(x) = 0. \]

4.4 From the recursion formula (4.8), with $n=1$ and $k = 0,2,4,\ldots$,
\[ \begin{aligned}
Q_1(x) &= 1 - x^2 - \frac13 x^4 - \frac15 x^6 - \cdots \\
&= 1 - x\left(x + \frac13 x^3 + \frac15 x^5 + \cdots\right) \\
&= 1 - \frac{x}{2}\log\frac{1+x}{1-x}.
\end{aligned} \]

4.5 The first two formulas follow from the fact that $P_n$ is an even function when $n$ is even, and odd when $n$ is odd.
\[ P_{2n}(0) = a_0 = (-1)^n\frac{(2n)!}{2^{2n}n!\,n!} = (-1)^n\frac{(2n-1)\cdots(3)(1)}{(2n)\cdots(4)(2)}. \]

4.6 Let $u(x) = u(\cos\theta) = y(\theta)$. Then
\[ u'(x) = -\frac{1}{\sin\theta}\frac{dy}{d\theta}, \qquad u''(x) = \frac{1}{\sin^2\theta}\left(\frac{d^2y}{d\theta^2} - \frac{\cos\theta}{\sin\theta}\frac{dy}{d\theta}\right). \]
Substituting into Equation (4.4) with $\lambda = n(n+1)$ yields the transformed equation.

4.7 Differentiate Rodrigues' formula (4.13) for $P_n$ and replace $n$ by $n+1$ to obtain
\[ P_{n+1}' = \frac{1}{2^n n!}\frac{d^n}{dx^n}\left[\left((2n+1)x^2 - 1\right)(x^2-1)^{n-1}\right], \]
then differentiate $P_{n-1}$ and subtract to arrive at (4.14). The first integral formula follows directly from (4.14) and the equality $P_n(\pm 1) = (\pm 1)^n$. The second formula is a special case of the first in which $x = -1$.

4.8 The polynomial $(n+1)P_{n+1}(x) - (2n+1)xP_n(x)$ is of degree $n+1$. According to the Rodrigues formula, the coefficient of $x^{n+1}$ is given by
\[ (n+1)\frac{1}{2^{n+1}}\frac{(2n+2)!}{(n+1)!(n+1)!} - (2n+1)\frac{1}{2^n}\frac{(2n)!}{n!\,n!} = 0. \]
Hence we can write
\[ (n+1)P_{n+1}(x) - (2n+1)xP_n(x) = \sum_{k=0}^{n-1} c_k P_k(x). \]
Let $m$ be a non-negative integer such that $m \le n-2$, and take the inner product of $P_m(x)$ with the equation above. Because $\langle P_m, P_k\rangle = 0$ for all $k \ne m$, we obtain
\[ (n+1)\langle P_m, P_{n+1}\rangle - (2n+1)\langle xP_m, P_n\rangle = c_m\|P_m\|^2. \]
But $xP_m$ is a polynomial of degree $m+1 \le n-1$ for all $m = 0,1,\ldots,n-2$. Therefore the left-hand side of the equation is zero and $c_m = 0$ for all $m = 0,1,\ldots,n-2$. Thus
\[ (n+1)P_{n+1}(x) - (2n+1)xP_n(x) = c_{n-1}P_{n-1}(x). \]
Taking the limit as $x \to 1$ in this equation yields $c_{n-1} = (n+1)-(2n+1) = -n$.

4.9
\[ (1-2xt+t^2)\frac{\partial f}{\partial t}(x,t) = (1-2xt+t^2)\frac{x-t}{(1-2xt+t^2)^{3/2}} = (x-t)f(x,t). \]
Using the series representation of $f$ (Equation (4.21)) in this equation, we arrive at
\[ \sum_{n=1}^{\infty} na_n t^{n-1} - 2x\sum_{n=1}^{\infty} na_n t^n + \sum_{n=1}^{\infty} na_n t^{n+1} = x\sum_{n=0}^{\infty} a_n t^n - \sum_{n=0}^{\infty} a_n t^{n+1}, \]
\[ a_1 + \sum_{n=1}^{\infty}\left[(n+1)a_{n+1} - 2nxa_n + (n-1)a_{n-1}\right]t^n = xa_0 + \sum_{n=1}^{\infty}(xa_n - a_{n-1})t^n. \]

Equating corresponding coefficients of the powers of $t$ gives the recursion relation for $a_n$.

4.10
\[ \frac{1}{\left|1-re^{i\theta}\right|} = \frac{1}{\sqrt{(1-r\cos\theta)^2 + (r\sin\theta)^2}} = \frac{1}{\sqrt{1-2r\cos\theta + r^2}} = \sum_{n=0}^{\infty} P_n(\cos\theta)r^n, \qquad r < 1, \]
by the formula (4.20). We can write
\[ \frac{1}{\|y-x\|} = \frac{1}{\|y\|}\cdot\frac{1}{\left\|\dfrac{y}{\|y\|} - \dfrac{x}{\|y\|}\right\|}. \]
In the plane defined by the vectors $x$ and $y$, let
\[ \frac{y}{\|y\|} = (\cos\beta, \sin\beta), \qquad \frac{x}{\|y\|} = (r\cos\alpha, r\sin\alpha), \]
where $\alpha$ and $\beta$ are the polar angles of $x$ and $y$, respectively, and $r = \|x\|/\|y\| < 1$. Consequently
\[ \begin{aligned}
\frac{1}{\|y-x\|} &= \frac{1}{\|y\|\sqrt{(\cos\beta - r\cos\alpha)^2 + (\sin\beta - r\sin\alpha)^2}} \\
&= \frac{1}{\|y\|\sqrt{1 - 2r\cos(\beta-\alpha) + r^2}} = \frac{1}{\|y\|}\sum_{n=0}^{\infty} P_n(\cos(\beta-\alpha))r^n.
\end{aligned} \]

4.11 (a) $x^3 = \frac35 P_1(x) + \frac25 P_3(x)$.
(b) $|x| = \frac12 P_0(x) + \frac58 P_2(x) - \frac{3}{16}P_4(x) + \cdots$.

4.12 The formula $\|P_n\|^2 = 2/(2n+1)$ can be verified by direct computation for $n=0$ and $n=1$. For $n \ge 2$ we have, by Equation (4.15),
\[ \begin{aligned}
(n+1)P_{n+1}(x) - (2n+1)xP_n(x) + nP_{n-1}(x) &= 0, \\
nP_n(x) - (2n-1)xP_{n-1}(x) + (n-1)P_{n-2}(x) &= 0.
\end{aligned} \]
Multiply the first equation by $P_{n-1}$ and the second equation by $P_n$, and integrate over $[-1,1]$ to obtain
\[ n\|P_{n-1}\|^2 = (2n+1)\langle xP_{n-1}, P_n\rangle, \qquad n\|P_n\|^2 = (2n-1)\langle xP_n, P_{n-1}\rangle. \]
Since $\langle xP_{n-1}, P_n\rangle = \langle xP_n, P_{n-1}\rangle$, this implies
\[ (2n+1)\|P_n\|^2 = (2n-1)\|P_{n-1}\|^2 = (2n-3)\|P_{n-2}\|^2 = \cdots = 3\|P_1\|^2 = 2. \]

4.13 $f(x) = \sum_{n=0}^{\infty} c_n P_n(x)$, where $c_n = \frac{2n+1}{2}\int_{-1}^1 f(x)P_n(x)\,dx$. Since $f$ is odd, $c_n = 0$ for all even values of $n$. For $n = 2k+1$,
\[ \begin{aligned}
c_{2k+1} &= (4k+3)\int_0^1 P_{2k+1}(x)\,dx = (4k+3)\frac{1}{4k+3}\left[P_{2k}(0) - P_{2k+2}(0)\right] \\
&= (-1)^k\left[\frac{(2k)!}{2^{2k}k!\,k!} + \frac{(2k+2)!}{2^{2k+2}(k+1)!(k+1)!}\right] \\
&= (-1)^k\frac{(2k)!}{2^{2k}k!\,k!}\cdot\frac{4k+3}{2k+2}, \qquad k \in \mathbb{N}_0.
\end{aligned} \]
Hence $f(x) = \frac32 P_1(x) - \frac78 P_3(x) + \frac{11}{16}P_5(x) + \cdots$. At $x = 0$,
\[ \sum_{n=0}^{\infty} c_n P_n(0) = 0 = \frac12[f(0^+) + f(0^-)]. \]
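The closed form for $c_{2k+1}$ in Exercise 4.13 can be checked against a direct evaluation of $(4k+3)\int_0^1 P_{2k+1}$ using numpy's Legendre antiderivative (an illustrative sketch, not part of the manual):

```python
import math
from numpy.polynomial import legendre as L

def c_closed(k):
    # closed form derived in the solution of Exercise 4.13
    return (-1)**k * math.factorial(2 * k) * (4 * k + 3) / (
        2**(2 * k) * math.factorial(k)**2 * (2 * k + 2))

def c_direct(k):
    # (4k+3) * integral_0^1 P_{2k+1}(x) dx via the Legendre-series antiderivative
    anti = L.legint([0] * (2 * k + 1) + [1])
    return (4 * k + 3) * (L.legval(1.0, anti) - L.legval(0.0, anti))

max_err = max(abs(c_closed(k) - c_direct(k)) for k in range(6))
```

The first three values come out to $3/2$, $-7/8$ and $11/16$, matching the expansion displayed above.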

4.14 In even polynomials, we have $x = \sum_{n=0}^{\infty} c_{2n}P_{2n}(x)$ for all $x \in (0,1)$, where
\[ c_{2n} = \frac{\langle |x|, P_{2n}\rangle}{\|P_{2n}\|^2} = (4n+1)\int_0^1 xP_{2n}(x)\,dx. \]
Hence
\[ x = \frac12 P_0(x) + \frac58 P_2(x) - \frac{3}{16}P_4(x) + \cdots. \]
Since $0$ is a point of continuity for the even extension of $x$ to $(-1,1)$, the expansion is valid at $x=0$. In odd degree polynomials, $c_{2n+1} = 2\langle x, P_{2n+1}\rangle/\|P_{2n+1}\|^2 = 1$ if $n=0$ and $0$ otherwise. Therefore $x = P_1(x)$, as would be expected. The equality is obviously valid on $[0,1]$.

For the function $f(x) = 1$, the expansion in even polynomials is simply $1 = P_0(x)$, which is valid on $[0,1]$. The odd expansion is
\[ 1 = \sum_{n=0}^{\infty} c_{2n+1}P_{2n+1}(x) = \frac32 P_1(x) - \frac78 P_3(x) + \frac{11}{16}P_5(x) + \cdots, \]
which is valid on $(0,1)$ but not at $x=0$, where the right-hand side is $0$, being the average of $f(0^+) = 1$ and $f(0^-) = -1$. Here $f(0^-)$ is obtained from the odd extension of $f$ to $(-1,1)$.

4.15 $\left(\int_{-\infty}^{\infty} e^{-x^2}dx\right)^2 = 4\int_0^{\infty}\!\!\int_0^{\infty} e^{-(x^2+y^2)}dxdy = 4\int_0^{\pi/2}\!\!\int_0^{\infty} e^{-r^2}r\,drd\theta = \pi$.

4.16 Replace $t$ by

$-t$ in Equation (4.25) to obtain
\[ \sum_{n=0}^{\infty}\frac{1}{n!}H_n(x)(-t)^n = e^{-2xt-t^2} = \sum_{n=0}^{\infty}\frac{1}{n!}H_n(-x)t^n, \]
which implies $H_n(-x) = (-1)^n H_n(x)$.

4.17 Setting $x=0$ in (4.25) yields
\[ \sum_{k=0}^{\infty}\frac{1}{k!}H_k(0)t^k = e^{-t^2} = \sum_{n=0}^{\infty}(-1)^n\frac{1}{n!}t^{2n}. \]
By equating corresponding coefficients of the powers of $t$ we obtain the desired formulas.

4.18 The equality is true for $n=0$. Assume its validity for any $n \in \mathbb{N}$. Replace $n$ by $n+1$ in Equation (4.29) and integrate both sides to obtain
\[ H_{n+1}(x) = 2(n+1)\int_0^x H_n(t)\,dt + c_n, \]
where
\[ 2(n+1)\int_0^x H_n(t)\,dt = 2(n+1)n!\sum_{k=0}^{[n/2]}(-1)^k\frac{(2x)^{n+1-2k}}{k!(n+1-2k)!\cdot 2} = (n+1)!\sum_{k=0}^{[n/2]}(-1)^k\frac{(2x)^{n+1-2k}}{k!(n+1-2k)!} \]
and $c_n$ is a constant of integration. At $x=0$, $H_{n+1}(0) = 0 + c_n$. Therefore $c_n = 0$ if $n$ is even, and (from Exercise 4.17)
\[ c_n = (-1)^{(n+1)/2}\frac{(n+1)!}{((n+1)/2)!} \]
if $n$ is odd. Consequently,
\[ H_{n+1}(x) = (n+1)!\sum_{k=0}^{[n/2]}(-1)^k\frac{(2x)^{n+1-2k}}{k!(n+1-2k)!} = (n+1)!\sum_{k=0}^{[(n+1)/2]}(-1)^k\frac{(2x)^{n+1-2k}}{k!(n+1-2k)!} \]
when $n$ is even, and
\[ H_{n+1}(x) = (n+1)!\sum_{k=0}^{[n/2]}(-1)^k\frac{(2x)^{n+1-2k}}{k!(n+1-2k)!} + c_n = (n+1)!\sum_{k=0}^{[(n+1)/2]}(-1)^k\frac{(2x)^{n+1-2k}}{k!(n+1-2k)!} \]
when $n$ is odd.

4.19 If $m = 2n$,
\[ x^{2n} = \frac{(2n)!}{2^{2n}}\sum_{k=0}^{n}\frac{H_{2k}(x)}{(2k)!(n-k)!}. \]
If $m = 2n+1$,
\[ x^{2n+1} = \frac{(2n+1)!}{2^{2n+1}}\sum_{k=0}^{n}\frac{H_{2k+1}(x)}{(2k+1)!(n-k)!}, \qquad x \in \mathbb{R},\ n \in \mathbb{N}_0. \]
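The values $H_{2n}(0) = (-1)^n(2n)!/n!$ and $H_{2n+1}(0) = 0$ from Exercise 4.17 can be verified with numpy's physicists' Hermite polynomials (an illustrative check, not part of the manual):

```python
import math
from numpy.polynomial import hermite as H

errs = []
for n in range(6):
    h_even = H.hermval(0.0, [0] * (2 * n) + [1])        # H_{2n}(0)
    errs.append(abs(h_even - (-1)**n * math.factorial(2 * n) / math.factorial(n)))
    h_odd = H.hermval(0.0, [0] * (2 * n + 1) + [1])     # H_{2n+1}(0)
    errs.append(abs(h_odd))
max_err = max(errs)
```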

4.20 The coefficients in the Hermite expansion are given by
\[ \frac{\langle f, H_n\rangle}{\|H_n\|^2} = \frac{1}{2^n n!\sqrt{\pi}}\int_0^{\infty} H_n(x)e^{-x^2}dx, \]
the first of which is $1/2$. Now the function $f(x) - \frac12 H_0(x) = f(x) - \frac12$ is odd, hence it has an expansion of the form $\sum_{k=0}^{\infty} c_{2k+1}H_{2k+1}(x)$, and we can write
\[ f(x) = \frac12 H_0(x) + \sum_{k=0}^{\infty} c_{2k+1}H_{2k+1}(x), \]
where
\[ c_{2k+1} = \frac{1}{2^{2k+1}(2k+1)!\sqrt{\pi}}\int_0^{\infty} H_{2k+1}(x)e^{-x^2}dx. \]
From (4.23) we have $e^{-x^2}H_n(x) = -\left[e^{-x^2}H_{n-1}(x)\right]'$. Using this in the above formula for $c_{2k+1}$, we obtain
\[ c_{2k+1} = \frac{H_{2k}(0)}{2^{2k+1}(2k+1)!\sqrt{\pi}} = \frac{(-1)^k}{2^{2k+1}(2k+1)k!\sqrt{\pi}}. \]
Therefore,
\[ \begin{aligned}
f(x) &= \frac12 H_0(x) + \frac{1}{\sqrt{\pi}}\sum_{k=0}^{\infty}\frac{(-1)^k}{2^{2k+1}(2k+1)k!}H_{2k+1}(x) \\
&= \frac12 H_0(x) + \frac{1}{2\sqrt{\pi}}\left(H_1(x) - \frac{1}{12}H_3(x) + \frac{1}{160}H_5(x) - \cdots\right), \qquad x \in \mathbb{R}.
\end{aligned} \]

4.21 (a) Substituting $y = \sum_{k=0}^{\infty} c_k x^k$ into the differential equation yields
\[ \sum_{k=2}^{\infty} k(k-1)c_k x^{k-2} - 2\sum_{k=0}^{\infty} kc_k x^k + 2\lambda\sum_{k=0}^{\infty} c_k x^k = 0, \]
from which it follows that
\[ c_{k+2} = \frac{2(k-\lambda)}{(k+2)(k+1)}c_k, \qquad k \in \mathbb{N}_0, \]
with $c_0$ and $c_1$ arbitrary.
(b) The power series $\sum c_k x^k$ converges for all $x \in \mathbb{R}$ by the ratio test. The general solution of Hermite's equation can therefore be represented by the sum $u = c_0 u_0 + c_1 u_1$, where
\[ \begin{aligned}
u_0(x) &= 1 - \frac{2\lambda}{2!}x^2 + \frac{2^2\lambda(\lambda-2)}{4!}x^4 - \frac{2^3\lambda(\lambda-2)(\lambda-4)}{6!}x^6 + \cdots, \\
u_1(x) &= x - \frac{2(\lambda-1)}{3!}x^3 + \frac{2^2(\lambda-1)(\lambda-3)}{5!}x^5 - \frac{2^3(\lambda-1)(\lambda-3)(\lambda-5)}{7!}x^7 + \cdots
\end{aligned} \]
are linearly independent (even and odd) solutions.
(c) When $\lambda = n$ is a nonnegative integer, either $u_0$ or $u_1$ becomes a polynomial, depending on whether $n$ is even or odd, which coincides with a constant multiple of $H_n$.

4.22 If the solution $u_i$ ($i = 1$ or $2$) is a polynomial, the product $e^{-x^2}u_i(x)$ will clearly tend to $0$ as $|x| \to \infty$. If $u_i$ is an infinite series, we cannot give an analytical proof that $e^{-x^2}u_i(x)$ will tend to a nonzero constant, but we present the following argument towards that conclusion. If $u_i$ is an infinite series then
\[ c_{k+2} = \frac{2(k-\lambda)}{(k+2)(k+1)}c_k \sim \frac{2}{k}c_k \quad\text{as } k \to \infty, \]
in the sense that $kc_{k+2}/2c_k$ tends to a constant as $k \to \infty$. The power series expansion for $e^{x^2}$ is given by
\[ e^{x^2} = \sum_{k=0}^{\infty} a_{2k}x^{2k} = 1 + \frac{1}{1!}x^2 + \frac{1}{2!}x^4 + \cdots + \frac{1}{k!}x^{2k} + \frac{1}{(k+1)!}x^{2(k+1)} + \cdots. \]
Therefore, with $n = 2k$,
\[ a_{n+2} = \frac{1}{\frac{n}{2}+1}a_n \sim \frac{2}{n}a_n \quad\text{as } n \to \infty. \]
This implies that there is a constant $\gamma$ such that $c_n/a_n \to \gamma$ as $n \to \infty$. $\gamma$ is not zero because none of the coefficients in the series for $u_i$ can be zero as long as $\lambda \notin \mathbb{N}_0$. We can therefore argue that the higher terms of the series for $u_0$ and $u_1$ differ from the corresponding terms of $e^{x^2}$ only by multiplicative constants, say $\gamma_0$ and $\gamma_1$, respectively, and that, for large values of $|x|$,
\[ u_0(x) \sim \gamma_0 e^{x^2}, \qquad u_1(x) \sim \gamma_1 e^{x^2} \quad\text{as } |x| \to \infty, \]
since for such values the higher terms dominate the series. It then follows that $u_i e^{-x^2/2}$ is not square integrable on $\mathbb{R}$, so $u_i$ cannot belong to $L^2_{e^{-x^2}}(\mathbb{R})$. The interested reader may wish to consult Special Functions and Their Applications by N.N. Lebedev (Dover, 1972) on the asymptotic behaviour of Hermite functions.

4.23 Use Leibnitz' rule for the derivative of a product,
\[ (fg)^{(n)} = \sum_{k=0}^{n}\binom{n}{k}f^{(n-k)}g^{(k)}, \]
with $f(x) = x^n$ and $g(x) = e^{-x}$.

4.24 $x^3 - x = 5L_0(x) - 17L_1(x) + 18L_2(x) - 6L_3(x)$.

4.25 $x^m = \sum_{n=0}^{m} c_n L_n(x)$, where
\[ c_n = \int_0^{\infty} e^{-x}x^m L_n(x)\,dx = (-1)^n\frac{m!\,m!}{n!(m-n)!}. \]

4.26 $\left\langle e^{-x/2}, L_n\right\rangle_{e^{-x}} = \dfrac{1}{n!}\dfrac{1}{2^n}\displaystyle\int_0^{\infty} x^n e^{-3x/2}dx = \dfrac{1}{n!\,2^n}\cdot\dfrac{n!}{(3/2)^{n+1}} = \dfrac{2}{3^{n+1}}$. Therefore
\[ e^{-x/2} = 2\sum_{n=0}^{\infty}\frac{1}{3^{n+1}}L_n(x). \]
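The Laguerre expansion of Exercise 4.24 can be confirmed with numpy's Laguerre module (an illustrative check, not part of the manual):

```python
import numpy as np
from numpy.polynomial import laguerre as Lg

x = np.linspace(0.0, 6.0, 13)
# x^3 - x compared with 5 L0 - 17 L1 + 18 L2 - 6 L3
err = np.max(np.abs((x**3 - x) - Lg.lagval(x, [5.0, -17.0, 18.0, -6.0])))
```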

4.27 First we establish two important identities,
\[ L_n'(x) = L_{n-1}'(x) - L_{n-1}(x), \qquad xL_n'(x) = n[L_n(x) - L_{n-1}(x)], \]
both of which may be proved by using the definition (4.34). In the case of the first identity,
\[ \begin{aligned}
L_n'(x) &= \frac{e^x}{n!}\frac{d^n}{dx^n}\left(x^n e^{-x}\right) + \frac{e^x}{n!}\frac{d^n}{dx^n}\left[\left(nx^{n-1} - x^n\right)e^{-x}\right] \\
&= \frac{e^x}{n!}\frac{d^n}{dx^n}\left(nx^{n-1}e^{-x}\right)
= \frac{e^x}{(n-1)!}\frac{d}{dx}\frac{d^{n-1}}{dx^{n-1}}\left(x^{n-1}e^{-x}\right) \\
&= \frac{d}{dx}\left[\frac{e^x}{(n-1)!}\frac{d^{n-1}}{dx^{n-1}}\left(x^{n-1}e^{-x}\right)\right] - \frac{e^x}{(n-1)!}\frac{d^{n-1}}{dx^{n-1}}\left(x^{n-1}e^{-x}\right) \\
&= L_{n-1}'(x) - L_{n-1}(x).
\end{aligned} \]
For the second identity,
\[ \begin{aligned}
xL_n'(x) &= x\,\frac{e^x}{n!}\frac{d^n}{dx^n}\left(nx^{n-1}e^{-x}\right) \\
&= \frac{e^x}{(n-1)!}\left[\frac{d^n}{dx^n}\left(x^n e^{-x}\right) - n\frac{d^{n-1}}{dx^{n-1}}\left(x^{n-1}e^{-x}\right)\right] \\
&= nL_n(x) - nL_{n-1}(x).
\end{aligned} \]
Use the equality $L_{n-1}(x) = L_n(x) - \frac1n xL_n'(x)$ in the first identity to arrive at Equation (4.32).

4.28 $u(x) = c_1 + c_2\displaystyle\int\frac{e^x}{x}dx = c_1 + c_2\left(\log x + x + \frac{x^2}{2\cdot 2!} + \cdots\right)$.

4.29 The surface $\varphi = \pi/2$, corresponding to the $xy$-plane.

4.30 The bounded solution outside the sphere, according to Equation (4.41), is $\sum_{n=0}^{\infty} b_n r^{-n-1}P_n(\cos\varphi)$, where the coefficients can be evaluated from the boundary condition $u(1,\varphi) = f(\varphi) = \sum_{n=0}^{\infty} b_n P_n(\cos\varphi)$ as
\[ b_n = \frac{2n+1}{2}\int_0^{\pi} f(\varphi)P_n(\cos\varphi)\sin\varphi\,d\varphi, \qquad n \in \mathbb{N}_0. \]

4.31 The solution of Laplace's equation in the spherical coordinates $(r,\varphi)$ is given by Equation (4.42) as $u(r,\varphi) = \sum_{n=0}^{\infty} a_n r^n P_n(\cos\varphi)$. Using the given boundary condition in Equation (4.43),
\[ \begin{aligned}
a_n &= \frac{2n+1}{2R^n}\int_0^{\pi/2} 10\,P_n(\cos\varphi)\sin\varphi\,d\varphi \\
&= \frac{5(2n+1)}{R^n}\int_0^1 P_n(x)\,dx = \frac{5}{R^n}\left[P_{n-1}(0) - P_{n+1}(0)\right], \qquad n \in \mathbb{N},
\end{aligned} \]
where the result of Exercise 4.7 is used in the last equality. We therefore arrive at the solution
\[ \begin{aligned}
u(r,\varphi) &= 5 + 5\sum_{n=1}^{\infty}\left[P_{n-1}(0) - P_{n+1}(0)\right]\left(\frac{r}{R}\right)^n P_n(\cos\varphi) \\
&= 5\left(1 + \frac{3r}{2R}P_1(\cos\varphi) - \frac78\left(\frac{r}{R}\right)^3 P_3(\cos\varphi) + \cdots\right).
\end{aligned} \]
Note that $u(R,\varphi) - 5$ is an odd function of $\cos\varphi$; hence the summation (starting with $n=1$) is over odd values of $n$.

4.32 Inside the hemisphere $u(r,\varphi)$ is represented by a series of the form (4.42). If $u(1,\varphi)$ is extended as an odd function from $[0,\pi/2]$ to $[0,\pi]$, then $u(1,\varphi)$ has a series expansion in odd Legendre polynomials in which the coefficients are
\[ \begin{aligned}
a_{2n+1} &= \frac{4n+3}{2}\int_0^{\pi} u(1,\varphi)P_{2n+1}(\cos\varphi)\sin\varphi\,d\varphi \\
&= (4n+3)\int_0^1 P_{2n+1}(x)\,dx = P_{2n}(0) - P_{2n+2}(0) \\
&= (-1)^n\left[\frac{(2n)!}{2^{2n}(n!)^2} + \frac{(2n+2)!}{2^{2n+2}((n+1)!)^2}\right] = (-1)^n\frac{(2n)!}{2^{2n}(n!)^2}\cdot\frac{4n+3}{2n+2}.
\end{aligned} \]

4.33 In view of the boundary condition $u_\varphi(r,\pi/2) = 0$, $f$ may be extended as an even function of $\varphi$ from $[0,\pi/2]$ to $[0,\pi]$. By symmetry the solution is even about $\varphi = \pi/2$; hence the summation in Equation (4.42) is over even orders of the Legendre polynomials.

Chapter 5 Rn 5.1 For all n 2 N, the integral In (x) = 0 e t tx 1 dt is a continuous function of x 2 [a; b]; where 0 < a < b < 1: Since Z 1 Z 1 u t x 1 0 e t dt  e t tb 1 dt ! 0 as n ! 1; n

n

it follows that In (x) converges uniformly to (x) on [a; b]: Therefore (x) is continuous on [a; b] for any 0 < a < b < 1, and hence on (0; 1): By also show that its derivatives R 1a similar procedure we canR 1 0 (x) = 0 e t tx 1 log t dt; 00 (x) = n e t tx 1 (log t)2 dt, ::: , are all continuous on (0; 1). R1 R1 R1 p 2 2 5.2 (1=2) = 0 e t t 1=2 dt = 2 0 e y dy = 1 e y dy =  (see Exercise 4.15). 5.3

n+

1 2



know that

= n  1 2

1 2

=



p



:

1 2



1 2



=

(2n)! n!22n

1 2



: From Exercise 5.2 we

5.4 (a) Replace t by u=(1 + u) and dt by du=(1 + u)2 with appropriate limits on u: (b) Replace t in (5.1) by st: (c) Using the result of part (b), 1 1 = z (u + 1)x+y s Z 1 1 = e st tz (z) 0 Z 1 1 e = (x + y) 0

1

dt

(u+1)t x+y 1

t

dt:

Using this in the integral expression in (a),  Z 1 Z 1 1 (u+1)t x+y 1 x 1 e t dt du u (x; y) = (x + y) 0 0 Z 1 Z 1 1 = e t tx+y 1 e ut ux dudt (x + y) 0 Z 01  Z 1 1 t y 1 e t e v v x 1 dv dt = (x + y) 0 0 (x) (y) : = (x + y)

42

5.5 Use the integral de…nition of the gamma function to write   Z 1Z 1  p 2x 1 1 2x 1 t 1=2 dsdt 2 (x) x+ = e (s+t) 2 st 2 0 0 Z 1Z 1 2 2 =4 e ( + ) (2 )2x 1 d d Z0 1 Z0 1 2 2 =4 e ( + ) (2 )2x 1 d d : 0

0

Interchange and to obtain a similar formula, then add and divide by 2 to arrive at   Z 1Z 1 2 2 1 22x 1 (x) x+ e ( + ) (2 )2x 1 ( + )d d =2 2 Z0 Z 0 2 2 =4 e ( + ) (2 )2x 1 ( + )d d ; where the last double integral is over the sector 0  < < 1. By changing the variables of integration to  = 2 + 2 ;  = 2 ; we see that p  p =   ; and the Jacobian of the transformation ( ; ) 7! (; ) is 1=  2  2 : Therefore   Z 1 Z 1 1 e  2x 1 2x 1 p 2 (x) x+ d =  d 2   0 0 Z 1 Z 1 2 =2 e   2x 1 d e y dy 0 0 p =  (2x): 5.6 (a) Change the variable of integration from t to

t:

(b) Use the result of Exercise 4.15. 2

(c) The exponential function e t has a power series which conR xexpansion t2 verges everywhere in R; so the same is true of 0 e dt:

5.7 Apply the ratio test. 5.8 Using (5.12),

J1=2 (x) =

r

1 ( 1)m x X  x2m : 2m 2 m=0 2 m! m + 1 + 21

  1p 1 m + 1 + 21 = (2m + 1)! 12 2 = 2 (2m + 1)!; we obtain r r 1 2m+1 2 X 2 m x J1=2 (x) = = sin x: ( 1) x m=0 (2m + 1)! x

Since 22m m!

43

Similarly J

1=2 (x) =

r

=

r

1 2 X ( 1)m  x2m 2m x m=0 2 m! m + 21 1 2 X ( 1)m  x2m x m=0 22m m! m + 21

1 ( 1)m 2 X  x2m x m=0 22m m! m + 21 r r 1 ( 1)m 2 X 2 2m x = cos x: = x m=0 (2m)! (1=2) x

=

r

5.9 Di¤erentiating Equation (5.12) and multiplying by x; xJ0 (x) = 

1 1  x  X  x  X ( 1)m (x=2)2m ( 1)m 2m(x=2)2m + 2 m=0 m! (m +  + 1) 2 m=1 m! (m +  + 1)

= J (x) +

1  x  X

2

= J (x) +

m=1 1  x  X

= J (x)

x

= J (x)

xJ+1 (x):

(m

 x 2m ( 1)m 2 1)! (m +  + 1) 2

 x 2m+2 ( 1)m+1 2 m! (m +  + 2) 2 m=0

2

1  x +1 X

2

 x 2m ( 1)m+1 m! (m +  + 2) 2 m=0

5.10 From the identity in Exercise 5.9, J3=2 (x) = x J1=2 (x) representation of J1=2 (x) obtained in Exercise 5.8.

0 J1=2 (x): Use the

5.11 From Equation (5.12) we can write x J (x) =

1 X

x2m+2 ( 1)m m! (m +  + 1) 22m+ m=0

1 X ( 1)m (2m + 2) x2m+2 [x J (x)] = m! (m +  + 1) 22m+ m=0 

0

= x

1 X

 x 2m+ ( 1)m m! (m + ) 2 m=0

= x J Substitute  =

1

1

1 (x):

1=2 into this identity and use Exercise 5.8.

44

5.12 From Example 5.2 and Exercise 5.11, we have [x



J (x)]0 =



x

0

 1

 1

[x J (x)] = x



J (x) + x 

J (x) + x

J0 (x) =

J0 (x)

x



J+1 (x)



= x J+1 (x):

Multiply the …rst equation by x and the second by x



and add.

5.13 Subtract, instead of adding, the equations in Exercise 5.12. 5.14 (a) Integrating by parts, and using Example 5.4, Z x Z x t2 J1 (t)dt = t2 J00 (t)dt 0 0 Z 2 = x J0 (x) + 2

x

tJ0 (t)dt

0

x2 J0 (x) + 2xJ1 (x):

=

(b) From Exercise 5.12 we have the relation 2J20 (t) =

J3 (t) = J1 (t) Therefore Z x

J3 (t)dt = 1

J0 (x)

J00 (t)

2J2 (x) = 1

2J20 (t):

J2 (x)

2J1 (x)=x;

0

where the second equality follows from Exercise 5.13. 5.15 Z

x n

t J0 (t)dt =

0

Z

x

tn

1

tJ0 (t)dt

tn

1

[tJ1 (t)]0 dt

0

=

Z

x

0

= tn

1

x tJ1 (t) 0

= xn J1 (x) + (n

(n

Z

x

tn

1

1)

= xn J1 (x) + (n

2

tJ1 (t)dt

0

1)

Z

x

0

= xn J1 (x) + (n

tn

 1) tn 1)xn

1

1

J00 (t)dt

x J0 (t) 0

J0 (x)

(n (n

1) Z

Z

1)2

0

for all n = 2; 3; 4; :::.

x

tn

2

J0 (t)dt

0 x

tn

2

J0 (t)dt;



45

5.16 J and J



satisfy the same Bessel equation, that is, 00 0 x2 J (x) + xJ (x) + (x2  0 00 J (x) + xJ (x) + x

 2 )J (x) = 0;  2 J (x) = 0: x

Multiplying one equation by J and the other by J  and subtracting, we obtain 0 xW 0 (x) + W (x) = (xW (x)) = 0; where W (x) = J (x)J 0  (x) J0 (x)J

 (x):

W (x) =

Integrating the above equation,

c ; x

where the constant c can be evaluated from c = lim xW (x): x!0

Using the representation (5.12), this limit is given by 2 (1 + ) (1

)

=

2 () (1

)

;

2 = N0 ;

hence the desired result. 5.17 Substitute directly into Bessel’s equation. Jn (x) is bounded at x = 0 for all n 2 N0 ; whereas yn (x) is not . Hence the two functions cannot be linearly dependent. 5.18 Follows directly from the series representation of Yn . 5.19 For nonintegral values of  the function Y (x) is a linear combination of J (x) and J  (x): Since the formula [x J (x)]0 = x J+1 (x) holds for all = Z: The relation real values of ; it also holds for Y (x) provided  2 Y (x) =

1 [J (x) cos  sin 

J

 (x)]

shows that Y is analytic in , except possibly at integer values. But since the limit Yn (x) = lim!n Y (x) exists at every x > 0, Y (x) is analytic (and hence continuous) as a function of  on R; so the same formula applies when  is an integer. 5.20 Use the same argument as in Exercise 5.19.

46

5.21 1 Y n (x) = 

"

@J (x) @ =

( 1)

n

@J

#



 (x)

@ = n  @J  (x) 1 n @J (x) = + ( 1)  @ =n @ =n   @J (x) n @J  (x) n1 ( 1) = ( 1)  @ @ 

n

=n

n

= ( 1) Yn (x);

=n

n 2 N0 :

5.22 Under the transformation x 7! ix Bessel’s equation takes the form i2 x2 J00 (ix) + ixJ (ix) + (i2 x2

 2 )J (ix) = 0:

Using the relation J (ix) = i I (x) yields the equation x2 I00 (x) + xI (x) (x2 +  2 )J (x) = 0: 5.23 From Equation (5.12) we have I (x) = i



J (ix)   X  2m 1 ix ( 1)m ix  =i 2 m! ( + m + 1) 2 m=0

=

1 X

 x 2m+ 1 : m! ( + m + 1) 2 m=0

5.24 In the series representation of I (x) in Exercise 5.23 it is clear that, with x > 0, the series is positive provided the coe¢cients are positive. This is the case if  > 0: Moreover, I

n (x) =

1 X

m=0

1 m! (m

n + 1)

Since 1= (m n + 1) = 0 for all m may be taken over m  n: Therefore I

n (x)

= =

1 X

m=n 1 X

n

2

:

n + 1  0, it is clear that the sum 1

m! (m

 x 2m

n + 1)

 x 2m

n

2

 x 2m+n 1 (m + n)! (m + 1) 2 m=0

= In (x):

47

5.25 The de…nition of I ; as given in Exercise 5.22, extends to negative values of : Equation (5.18) is invariant under a change of sign of ; hence it is satis…ed by both I and I  : 5.26 Di¤erentiation can be carried across the integral sign because the integrand is di¤erentiable and the interval of integration is …nite. The integral repre(k) sentation of Jn will be proved by induction: When k = 1 the formula is true because cos(n x sin  =2) = sin(n x sin ): Moreover Z d 1  k (k+1) Jn (x) = sin  cos(n x sin  k=2)d dx  0 Z  1 = sink+1  sin(n x sin  k=2)d  0 Z 1  k+1 sin  cos[n x sin  (k + 1)=2]d; =  0 hence the formula holds for all k = 1; 2; 3; ::: . 5.27 Follows from the bounds on the sine and cosine functions. 5.28 (a) Set  = 0 in Equation (5.20) and use the relation J

n (x)

= ( 1)n Jn (x):

(b) Set  = =2 in the imaginary part of Equation (5.20), that is, P1 sin(x sin ) = n= 1 Jn (x) sin n: (c) Set  = =2 in the imaginary part of Equation (5.20).

(d) Di¤erentiate Equation (5.20) with respect to  and set  = 0 to obtain x=

1 X

nJn (x) =

n= 1

1 X

[nJn (x)

nJ

n (x)]

=2

1 X

(2m

1)J2m

1 (x):

m=1

n=1

5.29 Applying Parseval’s relation to Equations (5.22) and (5.23), we obtain Z  1 X 2 J2m (x) cos2 (x sin )d = 2J02 (x) + 4 

Z





m=1

sin2 (x sin )d = 4

1 X

2 J2m

1 (x):

m=1

By adding these two equations we arrive at the desired identity. 5.30 Noting that the integrand in Equation (5.24) is symmetric with respect to  = =2, we can write

48

1 J2m (x) =  =

1 

Z



cos(x sin ) cos 2m d

0

Z

=2

cos[x sin( + =2)] cos 2m( + =2) d =2

( 1)m = 

Z

=2

cos(x cos ) cos 2m d

=2 m Z 0

( 1) cos(x cos ) cos 2m d  =2 Z 2 =2 = cos(x sin ) cos 2m d:  0 =2

Similarly, using the symmetry of the integrand in Equation (5.25) with respect to  = =2; we arrive at Z 1  sin(x sin ) sin(2m 1) d J2m 1 (x) =  0 Z 2 =2 = sin(x sin ) sin(2m 1) d:  0 5.31 Apply Lemma 3.7 to Equations (5.24) and (5.25). 5.32 For any n 2 N0 ; we conclude from the formulas for J2m and J2m Z 2 =2 cos(x sin )d : jJn (x)j   0

1

that

We shall show that the integral on the right-hand side may be made arbitrarily small by taking x large enough. Under the change of variable y = sin ; this integral takes the form Z 1 cos(xy)(1 y 2 ) 1=2 dy; 0

and may be expressed as a sum of two integrals, Z 1  Z 1 2 1=2 cos(xy)(1 y ) dy + cos(xy)(1 0

y2 )

1=2

dy;

1 

where  > 0: Let " be any positive number. The second integral is improper but bounded by Z 2 1 (1 y 2 ) 1=2 dy  1 

49

which is convergent, so we can choose  > 0 small enough to obtain Z 1 " cos(xy)(1 y 2 ) 1=2 dy  : 2 1  With this choice of ; since (1 y 2 ) 1=2 is a smooth function on [0; 1 ], we can apply Lemma 3.7 to the …rst integral to conclude that there is a positive number M such that Z " 1  2 1=2 for all x > M: cos(xy)(1 y ) dy  2 0 R =2 Consequently 0 cos(x sin )d  " for all x > M:

5.33 (a) $\langle 1, J_0(\lambda_k x)\rangle_x = \int_0^b J_0(\lambda_k x)x\,dx = \dfrac{b}{\lambda_k}J_1(\lambda_k b)$, and $\|J_0(\lambda_k x)\|_x^2 = \dfrac{b^2}{2}J_1^2(\lambda_k b)$. Therefore
$$1 = \frac{2}{b}\sum_{k=1}^{\infty}\frac{J_0(\lambda_k x)}{\lambda_k J_1(\lambda_k b)}.$$

(b) $\langle x, J_0(\lambda_k x)\rangle_x = \int_0^b J_0(\lambda_k x)x^2\,dx = \dfrac{b^2}{\lambda_k}J_1(\lambda_k b) - \dfrac{1}{\lambda_k^2}\int_0^b J_0(\lambda_k x)\,dx.$ The integral $\int_0^b J_0(\lambda_k x)\,dx$ cannot be expressed more simply, so that
$$x = \frac{2}{b^2}\sum_{k=1}^{\infty}\frac{\dfrac{b^2}{\lambda_k}J_1(\lambda_k b) - \dfrac{1}{\lambda_k^2}\displaystyle\int_0^b J_0(\lambda_k x)\,dx}{J_1^2(\lambda_k b)}\,J_0(\lambda_k x).$$

(c) $\langle x^2, J_0(\lambda_k x)\rangle_x = \left(\dfrac{b^3}{\lambda_k} - \dfrac{4b}{\lambda_k^3}\right)J_1(\lambda_k b)$. Hence
$$x^2 = \frac{2}{b}\sum_{k=1}^{\infty}\frac{\lambda_k^2 b^2 - 4}{\lambda_k^3 J_1(\lambda_k b)}\,J_0(\lambda_k x).$$

(d) From (a) and (c) we have
$$b^2 - x^2 = \frac{8}{b}\sum_{k=1}^{\infty}\frac{1}{\lambda_k^3 J_1(\lambda_k b)}\,J_0(\lambda_k x).$$

(e) $\langle f, J_0(\lambda_k x)\rangle_x = \int_0^{b/2} J_0(\lambda_k x)x\,dx = \dfrac{b}{2\lambda_k}J_1(\lambda_k b/2)$. Hence
$$f(x) = \frac{1}{b}\sum_{k=1}^{\infty}\frac{J_1(\lambda_k b/2)}{\lambda_k J_1^2(\lambda_k b)}\,J_0(\lambda_k x).$$
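The two ingredients of part (a) can be confirmed numerically with $b = 1$ (a sketch added here, not from the manual): with $\lambda_1$ the first positive zero of $J_0$, one should find $\int_0^1 J_0(\lambda_1 x)x\,dx = J_1(\lambda_1)/\lambda_1$ and $\int_0^1 J_0(\lambda_1 x)^2 x\,dx = \tfrac12 J_1^2(\lambda_1)$. The Bessel-integral evaluation of $J_n$ and the bisection bracket $[2,3]$ are assumptions of this check:

```python
import math

def simpson(f, a, b, n=400):
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

def bessel_j(n, x):
    # Bessel's integral representation (standard identity).
    return simpson(lambda t: math.cos(n * t - x * math.sin(t)), 0.0, math.pi) / math.pi

# First positive zero of J_0 by bisection: J_0(2) > 0 > J_0(3).
lo, hi = 2.0, 3.0
for _ in range(80):
    mid = (lo + hi) / 2
    if bessel_j(0, mid) > 0:
        lo = mid
    else:
        hi = mid
lam = (lo + hi) / 2  # approximately 2.4048

inner = simpson(lambda x: bessel_j(0, lam * x) * x, 0.0, 1.0, n=200)
norm2 = simpson(lambda x: bessel_j(0, lam * x) ** 2 * x, 0.0, 1.0, n=200)
print(lam, inner, norm2)
```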

5.34 Since $J_0'(x) = -J_1(x)$, it follows that $J_1(\lambda_k) = 0$ for all $k$. The first (nonnegative) eigenvalue is $\lambda_0 = 0$, corresponding to the eigenfunction $J_0(0\cdot x) = 1$. The remaining eigenvalues $\lambda_k$ are the positive zeros of $J_1$, corresponding to the eigenfunctions $J_0(\lambda_k x)$. Since
$$\langle 1, J_0(0\cdot x)\rangle_x = \int_0^1 x\,dx = \frac12, \qquad
\langle 1, J_0(\lambda_k x)\rangle_x = \frac{1}{\lambda_k^2}\,tJ_1(t)\Big|_0^{\lambda_k} = 0 \ \text{ for all } k\in\mathbb{N}, \qquad
\|J_0(0\cdot x)\|_x^2 = \frac12,$$
we see that the Bessel series expansion of 1 reduces to the single term 1.

5.35 From Exercises 5.13 and 5.14(a) we have $\langle x, J_1(\lambda_k x)\rangle_x = \int_0^1 J_1(\lambda_k x)x^2\,dx = -J_0(\lambda_k)/\lambda_k = J_2(\lambda_k)/\lambda_k$, and, from Equation (5.34), $\|J_1(\lambda_k x)\|_x^2 = \frac12 J_2^2(\lambda_k)$. Therefore
$$x = 2\sum_{k=1}^{\infty}\frac{J_1(\lambda_k x)}{\lambda_k J_2(\lambda_k)}, \qquad 0 < x < 1.$$

5.36 Using the identity $[x^n J_n(x)]' = x^n J_{n-1}(x)$,
$$\langle x^n, J_n(\lambda_k x)\rangle_x = \int_0^1 x^{n+1}J_n(\lambda_k x)\,dx = \frac{1}{\lambda_k^{n+2}}\,t^{n+1}J_{n+1}(t)\Big|_0^{\lambda_k} = \frac{1}{\lambda_k}J_{n+1}(\lambda_k).$$
From Equation (5.36) we obtain $\|J_n(\lambda_k x)\|_x^2 = \dfrac{1}{2\lambda_k^2}\left(\lambda_k^2 - n^2\right)J_n^2(\lambda_k)$. Therefore
$$x^n = \sum_{k=1}^{\infty}\frac{2\lambda_k J_{n+1}(\lambda_k)}{(\lambda_k^2 - n^2)J_n^2(\lambda_k)}\,J_n(\lambda_k x).$$

5.37 Using the relation $[x^2 J_2(x)]' = x^2 J_1(x)$, we have
$$\langle f, J_1(\lambda_k x)\rangle_x = \int_0^1 x^2 J_1(\lambda_k x)\,dx = \frac{1}{\lambda_k}J_2(\lambda_k).$$
Equation (5.36) also implies
$$\|J_1(\lambda_k x)\|_x^2 = \frac{4\lambda_k^2 - 1}{2\lambda_k^2}\,J_1^2(2\lambda_k).$$
Consequently,
$$f(x) = 2\sum_{k=1}^{\infty}\frac{\lambda_k J_2(\lambda_k)}{(4\lambda_k^2 - 1)J_1^2(2\lambda_k)}\,J_1(\lambda_k x), \qquad 0 < x < 2.$$

This representation is not pointwise: at $x = 1$, $f(1) = 1$ whereas the right-hand side is $\frac12[f(1^+) + f(1^-)] = \frac12$.

5.38 With $\mu = 0$ Equation (5.26) takes the form
$$x^2 u'' + xu' - \nu^2 u = 0,$$
which is of the Cauchy-Euler type. Its general solution is $c_1 x^{\nu} + c_2 x^{-\nu}$. The eigenfunction associated with the eigenvalue 0 is therefore $x^{\nu}$, the solution which is bounded at $x = 0$ when $\nu > 0$. Applying the boundary condition (5.28) to this eigenfunction leads to the relation $\beta_1 b + \nu\beta_2 = 0$.

5.39 Assuming $u(r,t) = v(r)w(t)$ leads to
$$\frac{w'}{kw} = \frac{1}{v}\left(v'' + \frac{1}{r}v'\right) = -\lambda^2.$$
The solutions of the resulting pair of equations are $w(t) = e^{-\lambda^2 kt}$ and $v(r) = cJ_0(\lambda r) + dY_0(\lambda r)$. Discarding the unbounded solution $Y_0$, and setting $v(1) = cJ_0(\lambda) = 0$, yields the sequence of solutions
$$u_n(r,t) = c_n e^{-\lambda_n^2 kt}J_0(\lambda_n r),$$
where $\lambda_n$ are the positive zeros of $J_0$. Note that $\lambda = 0$ yields the trivial solution. The general solution is then the formal sum $\sum u_n(r,t)$.

5.40
$$u(r,0) = \sum_{n=1}^{\infty} u_n(r,0) = \sum_{n=1}^{\infty} c_n J_0(\lambda_n r) = f(r).$$
Assuming the function $f$ belongs to $L_x^2(0,1)$, the coefficients in the Bessel series expansion of $f$ are
$$c_n = \frac{\langle f(r), J_0(\lambda_n r)\rangle_x}{\|J_0(\lambda_n r)\|_x^2} = \frac{2}{J_1^2(\lambda_n)}\int_0^1 f(r)J_0(\lambda_n r)\,r\,dr.$$

5.41 Use separation of variables to conclude that
$$u(r,t) = \sum_{k=1}^{\infty} J_0(\lambda_k r)\left[a_k\cos\lambda_k ct + b_k\sin\lambda_k ct\right],$$
with the Fourier-Bessel coefficients
$$a_k = \frac{2}{R^2 J_1^2(\lambda_k R)}\int_0^R f(r)J_0(\lambda_k r)\,r\,dr, \qquad
b_k = \frac{2}{c\lambda_k R^2 J_1^2(\lambda_k R)}\int_0^R g(r)J_0(\lambda_k r)\,r\,dr.$$

Chapter 6

6.1 (a) $\hat f(\xi) = \int_{-1}^{1}(1-|x|)e^{-i\xi x}\,dx = 2\int_0^1(1-x)\cos\xi x\,dx = \dfrac{2}{\xi^2}(1-\cos\xi)$.
(b) $\hat f(\xi) = \int_{-\pi}^{\pi}\cos x\,e^{-i\xi x}\,dx = 2\int_0^{\pi}\cos x\cos\xi x\,dx = \dfrac{2\xi\sin\pi\xi}{1-\xi^2}$.
(c) $\hat f(\xi) = \dfrac{i}{\xi}\left(e^{-i\xi} - 1\right)$.

6.2 (a) Suppose $I = [a,b]$. By the CBS inequality,
$$\int_a^b |f(x)|\,dx \le \left[\int_a^b |f(x)|^2\,dx\right]^{1/2}\left[\int_a^b dx\right]^{1/2} = \sqrt{b-a}\,\|f\| < \infty,$$
hence $f \in L^1(a,b)$.
(b) Let $|f(x)| \le M$ for all $x \in I$, where $M$ is a positive number. Then
$$\|f\| = \left[\int_I |f(x)|^2\,dx\right]^{1/2} \le \left[M\int_I |f(x)|\,dx\right]^{1/2} < \infty.$$

6.3 For any fixed point $\xi \in J$, let $\xi_n$ be a sequence in $J$ which converges to $\xi$. Since
$$|F(\xi_n) - F(\xi)| \le \int_I |\varphi(x,\xi_n) - \varphi(x,\xi)|\,dx,$$
and $|\varphi(x,\xi_n) - \varphi(x,\xi)| \le 2g(x) \in L^1(I)$, we can apply Theorem 6.4 to the sequence of functions $\varphi_n(x) = \varphi(x,\xi_n) - \varphi(x,\xi)$ to conclude that
$$\lim_{n\to\infty}|F(\xi_n) - F(\xi)| \le \lim_{n\to\infty}\int_I |\varphi(x,\xi_n) - \varphi(x,\xi)|\,dx = \int_I \lim_{n\to\infty}|\varphi(x,\xi_n) - \varphi(x,\xi)|\,dx = 0.$$

6.4 From Exercise 6.3 we know that $F$ is continuous where $\varphi(x,\xi)$ is continuous. Suppose $\varphi(x,\xi)$ has a jump discontinuity at $\xi \in J$. It suffices to show that $F$ has no worse than a jump discontinuity at $\xi$. Let $\xi_n$ be a sequence in $J$ which tends to $\xi$ from the left. Applying Theorem 6.4 to the sequence $\varphi_n(x) = \varphi(x,\xi_n)$, which is dominated by $g(x) \in L^1(I)$, we obtain
$$F(\xi^-) = \lim_{n\to\infty}F(\xi_n) = \lim_{n\to\infty}\int_I \varphi(x,\xi_n)\,dx = \int_I \lim_{n\to\infty}\varphi(x,\xi_n)\,dx.$$
Hence the integral $F(\xi^-) = \int_I \varphi(x,\xi^-)\,dx$ exists. Similarly, by taking a sequence in $J$ which approaches $\xi$ from the right, we conclude that $F(\xi^+)$ also exists. Thus $F$ has no worse than a jump discontinuity at $\xi$.

6.5 For any $\xi \in J$, let $\xi_n$ be a sequence in $J$ such that $\xi_n \ne \xi$ for all $n$ and $\xi_n \to \xi$. We have
$$\frac{F(\xi_n) - F(\xi)}{\xi_n - \xi} = \int_I \frac{\varphi(x,\xi_n) - \varphi(x,\xi)}{\xi_n - \xi}\,dx.$$
Define
$$\psi_n(x,\xi) = \frac{\varphi(x,\xi_n) - \varphi(x,\xi)}{\xi_n - \xi};$$
then $\psi_n(x,\xi) \to \varphi_\xi(x,\xi)$ pointwise. $\psi_n$ is integrable on $I$ and, by the mean value theorem, $\psi_n(x,\xi) = \varphi_\xi(x,\zeta_n)$ for some $\zeta_n$ between $\xi_n$ and $\xi$. Therefore, for all $n$ and $\xi$, $|\psi_n(x,\xi)| \le |h(x)|$ on $I$. Now use the dominated convergence theorem to conclude that $\int_I \psi_n(x,\xi)\,dx \to \int_I \varphi_\xi(x,\xi)\,dx$ as $n\to\infty$. This proves that
$$F'(\xi) = \lim_{n\to\infty}\frac{F(\xi_n) - F(\xi)}{\xi_n - \xi} = \int_I \varphi_\xi(x,\xi)\,dx.$$
The continuity of $F'$ follows from Exercise 6.3.

6.6 The formula $\int_0^\infty x^n e^{-\lambda x}\,dx = n!/\lambda^{n+1}$ for all $\lambda \ge a > 0$ may be proved by induction on $n$, using integration by parts. But we shall follow an approach which utilizes the result of Exercise 6.5 with $I = [0,\infty)$ and $J = [a,\infty)$. The function $e^{-\lambda x}$ is differentiable with respect to $\lambda$ on $[a,\infty)$ any number of times, and its derivatives are all continuous on $J$ for any $x \in I$. Furthermore, $e^{-\lambda x} \le e^{-ax}$ for all $\lambda \ge a$, and $e^{-ax}$ is integrable on $[0,\infty)$. Therefore
$$\frac{d}{d\lambda}\int_0^\infty e^{-\lambda x}\,dx = \int_0^\infty \frac{\partial}{\partial\lambda}e^{-\lambda x}\,dx = -\int_0^\infty xe^{-\lambda x}\,dx,$$
which implies
$$\int_0^\infty xe^{-\lambda x}\,dx = -\frac{d}{d\lambda}\left(\frac{1}{\lambda}\right) = \frac{1}{\lambda^2}.$$
Now the function $xe^{-\lambda x}$ is dominated by $xe^{-ax}$, which belongs to $L^1(0,\infty)$, hence
$$\frac{d}{d\lambda}\int_0^\infty xe^{-\lambda x}\,dx = -\int_0^\infty x^2 e^{-\lambda x}\,dx$$
and
$$\int_0^\infty x^2 e^{-\lambda x}\,dx = -\frac{d}{d\lambda}\left(\frac{1}{\lambda^2}\right) = \frac{2!}{\lambda^3}.$$
In the $n$-th step, since $x^{n-1}e^{-\lambda x} \le x^{n-1}e^{-ax} \in L^1(0,\infty)$, we can write
$$\frac{d}{d\lambda}\int_0^\infty x^{n-1}e^{-\lambda x}\,dx = -\int_0^\infty x^n e^{-\lambda x}\,dx,$$
and we conclude that
$$\int_0^\infty x^n e^{-\lambda x}\,dx = -\frac{d}{d\lambda}\left(\frac{(n-1)!}{\lambda^n}\right) = \frac{n!}{\lambda^{n+1}}, \qquad \lambda \ge a.$$

6.7 For every $x > 0$ the integrand $e^{-x}x^{\lambda-1}$ is a $C^\infty$ function of the variable $\lambda$ on $[a,b]$. It is dominated by the $L^1(0,\infty)$ function $e^{-x}\left(x^{a-1} + x^{b-1}\right)$, hence $\Gamma$ is a continuous function on $[a,b]$. Since this is true for every $b > a > 0$, $\Gamma$ is continuous on $(0,\infty)$. Its $n$-th derivative is dominated by the $L^1(0,\infty)$ function $e^{-x}\left(x^{a-1} + x^{b-1}\right)|\log x|^n$, so $\Gamma^{(n)}$ is also continuous on $(0,\infty)$ for any $n \in \mathbb{N}$.

6.8 (a) 1, (b) 1/2, (c) 0.

6.9 Express the integral over $(a,b)$ as a sum of integrals over the subintervals $(a,x_1), \dots, (x_n,b)$. Since both $f$ and $g$ are smooth over each subinterval, the formula for integration by parts applies to each integral in the sum.

6.10 (a) $f$ is even, hence $B(\xi) = 0$, $A(\xi) = 2\int_0^{\pi}\sin x\cos\xi x\,dx = \dfrac{2(1+\cos\pi\xi)}{1-\xi^2}$, and
$$f(x) = \frac{2}{\pi}\int_0^\infty \frac{1+\cos\pi\xi}{1-\xi^2}\cos\xi x\,d\xi.$$
(b) $\hat f(\xi) = i\left(e^{-i\xi} - 1\right)/\xi$. Therefore
$$f(x) = \lim_{L\to\infty}\frac{1}{2\pi}\int_{-L}^{L}\frac{i\left(e^{i\xi(x-1)} - e^{i\xi x}\right)}{\xi}\,d\xi = \frac{1}{\pi}\int_0^\infty \frac{\sin\xi(1-x) + \sin\xi x}{\xi}\,d\xi.$$
(c) $f$ is odd, $B(\xi) = 2(\xi - \sin\xi)/\xi^2$, and
$$f(x) = \frac{2}{\pi}\int_0^\infty \frac{\xi - \sin\xi}{\xi^2}\sin\xi x\,d\xi.$$
(d) $f$ is odd,
$$B(\xi) = 2\int_0^{\pi/2}\cos x\sin\xi x\,dx = 2\,\frac{\xi - \sin(\pi\xi/2)}{\xi^2 - 1},$$
and
$$f(x) = \frac{2}{\pi}\int_0^\infty \frac{\xi - \sin(\pi\xi/2)}{\xi^2 - 1}\sin\xi x\,d\xi.$$
(e) $A(\xi) = \dfrac{\xi\sin\pi\xi}{1-\xi^2}$, $B(\xi) = -\dfrac{\xi(1+\cos\pi\xi)}{1-\xi^2}$, and
$$f(x) = \frac{1}{\pi}\int_0^\infty \frac{\xi}{1-\xi^2}\left[\sin\xi(\pi - x) - \sin\xi x\right]d\xi.$$

6.11 At $x = 0$ the Fourier integral representation of $f$ in Exercise 6.10(e) gives
$$\frac{f(0^+) + f(0^-)}{2} = \frac12 = \frac{1}{\pi}\int_0^\infty \frac{\xi\sin\pi\xi}{1-\xi^2}\,d\xi,$$
from which the desired formula follows.

6.12 Let $f(x)$ be the odd extension of $e^{-x}$ from $(0,\infty)$ to $\mathbb{R}$, that is,
$$f(x) = \begin{cases} e^{-x}, & x > 0\\ -e^{x}, & x < 0.\end{cases}$$
Its Fourier sine transform is
$$B(\xi) = 2\int_0^\infty e^{-x}\sin\xi x\,dx = \frac{2\xi}{1+\xi^2}.$$
Therefore, by the Fourier integral formula (6.28),
$$f(x) = \frac{2}{\pi}\int_0^\infty \frac{\xi\sin\xi x}{1+\xi^2}\,d\xi,$$
which gives the desired formula when $x > 0$.

6.13 Define $f$ as the odd extension of $e^{-x}\cos x$ from $(0,\infty)$ to $\mathbb{R}$,
$$f(x) = \begin{cases} e^{-x}\cos x, & x > 0\\ -e^{x}\cos x, & x < 0.\end{cases}$$
Its Fourier sine transform is
$$B(\xi) = 2\int_0^\infty e^{-x}\cos x\sin\xi x\,dx = \frac{2\xi^3}{\xi^4 + 4}.$$
Now $f(x)$ may be represented on $\mathbb{R}$ by the inversion formula (6.28), which yields
$$e^{-x}\cos x = \frac{2}{\pi}\int_0^\infty \frac{\xi^3\sin\xi x}{\xi^4 + 4}\,d\xi$$
when $x > 0$. Since $f$ is not continuous at $x = 0$, this integral is not uniformly convergent.

6.14 The even extension of $e^{-x}\cos x$ from $[0,\infty)$ to $\mathbb{R}$ is $e^{-|x|}\cos x$, whose cosine transform is
$$A(\xi) = 2\int_0^\infty e^{-x}\cos x\cos\xi x\,dx = \frac{1}{1+(1+\xi)^2} + \frac{1}{1+(1-\xi)^2}.$$
Hence
$$e^{-x}\cos x = \frac{1}{\pi}\int_0^\infty \left[\frac{1}{1+(1+\xi)^2} + \frac{1}{1+(1-\xi)^2}\right]\cos\xi x\,d\xi, \qquad \text{for all } x \ge 0.$$
This representation is pointwise because $e^{-|x|}\cos x$ is continuous on $\mathbb{R}$.

6.15 Extend
$$f(x) = \begin{cases} 1, & 0 < x < \pi\\ 0, & x > \pi\end{cases}$$
as an odd function to $\mathbb{R}$ and show that its sine transform is $B(\xi) = 2(1-\cos\pi\xi)/\xi$. Equation (6.28) then gives
$$\frac{2}{\pi}\int_0^\infty \frac{1-\cos\pi\xi}{\xi}\sin\xi x\,d\xi = \begin{cases} 1, & 0 < x < \pi\\ 1/2, & x = \pi\\ 0, & x > \pi.\end{cases}$$

6.16 Being even, $f$ has a cosine transform which is given by
$$A(\xi) = 2\int_0^\infty \frac{\cos\xi x}{1+x^2}\,dx.$$
From Example 6.3 we know that the cosine transform of $e^{-|x|}$ is
$$2\int_0^\infty e^{-x}\cos\xi x\,dx = \frac{2}{1+\xi^2}.$$
Therefore, by the inversion formula,
$$e^{-|x|} = \frac{2}{\pi}\int_0^\infty \frac{\cos\xi x}{1+\xi^2}\,d\xi, \qquad x\in\mathbb{R},$$
and hence $A(\xi) = \pi e^{-|\xi|}$.

6.17 The cosine transform of the even function $f$ is
$$A(\xi) = 2\int_0^1 (1-x)\cos\xi x\,dx = \frac{2(1-\cos\xi)}{\xi^2} = \frac{\sin^2(\xi/2)}{(\xi/2)^2}.$$
Hence
$$f(x) = \frac{1}{\pi}\int_0^\infty \frac{\sin^2(\xi/2)}{(\xi/2)^2}\cos\xi x\,d\xi.$$
At $x = 0$, which is a point of continuity of $f$, this yields
$$1 = \frac{1}{\pi}\int_0^\infty \frac{\sin^2(\xi/2)}{(\xi/2)^2}\,d\xi = \frac{1}{\pi}\int_{-\infty}^{\infty}\frac{\sin^2\xi}{\xi^2}\,d\xi.$$
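The closed form of the cosine transform in Exercise 6.14 is easy to confirm numerically (a sketch added by this note, not part of the manual); since the integrand decays like $e^{-x}$, truncating the integral at $x = 40$ introduces an error far below double precision, and the test value $\xi = 0.7$ is an arbitrary choice:

```python
import math

def simpson(f, a, b, n=8000):
    # Composite Simpson rule; n must be even.
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

xi = 0.7
# A(xi) = 2 * int_0^inf e^{-x} cos x cos(xi x) dx, truncated at x = 40.
numeric = 2 * simpson(lambda x: math.exp(-x) * math.cos(x) * math.cos(xi * x), 0.0, 40.0)
closed = 1 / (1 + (1 + xi) ** 2) + 1 / (1 + (1 - xi) ** 2)
print(numeric, closed)
```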

6.18 With $f(x) = e^{-|x|}$ we have $\|f\|^2 = 2\int_0^\infty e^{-2x}\,dx = 1$. From Example 6.3, $\hat f(\xi) = 2(1+\xi^2)^{-1}$, hence
$$\|\hat f\|^2 = 8\int_0^\infty (1+\xi^2)^{-2}\,d\xi = 2\pi = 2\pi\|f\|^2.$$

6.19 From Equations (6.21) and (6.31) it follows that $\|\hat f\|^2 = \|A\|^2 + \|B\|^2 = 2\pi\|f\|^2$.

6.20
$$\mathcal{F}[f(x-a)](\xi) = \int_{-\infty}^\infty f(x-a)e^{-i\xi x}\,dx = \int_{-\infty}^\infty f(x)e^{-i\xi(x+a)}\,dx = e^{-ia\xi}\hat f(\xi),$$
$$\mathcal{F}[e^{iax}f(x)](\xi) = \int_{-\infty}^\infty f(x)e^{-i(\xi - a)x}\,dx = \hat f(\xi - a) = \mathcal{F}[f](\xi - a).$$

6.21 $\psi_n(x)$ decays exponentially as $|x| \to \infty$, so it belongs to $L^1(\mathbb{R})$ and $\hat\psi_n$ therefore exists. From Example 6.17 we have $\hat\psi_0(\xi) = \sqrt{2\pi}\,\psi_0(\xi)$. Assuming $\hat\psi_n(\xi) = (-i)^n\sqrt{2\pi}\,\psi_n(\xi)$, we have
$$\begin{aligned}
\hat\psi_{n+1}(\xi) &= \mathcal{F}\left[e^{-x^2/2}H_{n+1}(x)\right](\xi)\\
&= \mathcal{F}\left[e^{-x^2/2}\left(2xH_n(x) - H_n'(x)\right)\right](\xi)\\
&= \mathcal{F}\left[x\psi_n(x) - \psi_n'(x)\right](\xi)\\
&= i\hat\psi_n'(\xi) - i\xi\hat\psi_n(\xi)\\
&= (-i)^{n+1}\sqrt{2\pi}\left[\xi\psi_n(\xi) - \psi_n'(\xi)\right]\\
&= (-i)^{n+1}\sqrt{2\pi}\,\psi_{n+1}(\xi),
\end{aligned}$$
where we used the identity $H_{n+1}(x) = 2xH_n(x) - H_n'(x)$ and Theorem 6.15. Thus, by induction, $\hat\psi_n(\xi) = (-i)^n\sqrt{2\pi}\,\psi_n(\xi)$ is true for all $n \in \mathbb{N}_0$.

6.22 Since $\hat u(\xi) = 2\int_0^\infty u(x)\cos\xi x\,dx$,
$$\hat u(\xi) = \begin{cases} 2, & 0 < \xi < \pi\\ 0, & \xi > \pi.\end{cases}$$
Consequently,
$$u(x) = \frac{1}{\pi}\int_0^{\pi} 2\cos\xi x\,d\xi = \frac{2\sin\pi x}{\pi x}, \qquad x > 0.$$

6.23 Let $I(z) = \int_0^\infty e^{-b\xi^2}\cos z\xi\,d\xi$. Since the integrand is dominated by the $L^1$ function $e^{-b\xi^2}$ and is differentiable with respect to $z$, we have $I'(z) = -\int_0^\infty e^{-b\xi^2}\xi\sin z\xi\,d\xi$. Integrating by parts,
$$I'(z) = -\frac{z}{2b}I(z).$$
The solution of this equation is $I(z) = ce^{-z^2/4b}$, where the integration constant is $c = I(0) = \int_0^\infty e^{-b\xi^2}\,d\xi = \frac12\sqrt{\pi/b}$.
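The closed form obtained in Exercise 6.23 can be cross-checked numerically (an addition of this note); the Gaussian decay of the integrand makes truncation at $\xi = 10$ harmless, and the parameter values $b = 1$, $z = 1.5$ are arbitrary choices:

```python
import math

def simpson(f, a, b, n=2000):
    # Composite Simpson rule; n must be even.
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

b_, z = 1.0, 1.5
# I(z) = int_0^inf e^{-b xi^2} cos(z xi) d xi, truncated at 10 (e^{-100} is negligible).
numeric = simpson(lambda t: math.exp(-b_ * t * t) * math.cos(z * t), 0.0, 10.0)
closed = 0.5 * math.sqrt(math.pi / b_) * math.exp(-z * z / (4 * b_))
print(numeric, closed)
```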

6.24 From Example 6.18 the temperature $u$ is given by
$$\begin{aligned}
u(x,t) &= \frac{1}{2\sqrt{\pi kt}}\int_{-\infty}^\infty f(y)e^{-(y-x)^2/4kt}\,dy
= \frac{T_0}{2\sqrt{\pi kt}}\int_{-a}^{a} e^{-(y-x)^2/4kt}\,dy\\
&= \frac{T_0}{\sqrt{\pi}}\int_{-(a+x)/2\sqrt{kt}}^{(a-x)/2\sqrt{kt}} e^{-p^2}\,dp
= \frac{T_0}{\sqrt{\pi}}\left[\int_{-(a+x)/2\sqrt{kt}}^{0} e^{-p^2}\,dp + \int_0^{(a-x)/2\sqrt{kt}} e^{-p^2}\,dp\right]\\
&= \frac{T_0}{2}\left[\operatorname{erf}\left(\frac{x+a}{2\sqrt{kt}}\right) + \operatorname{erf}\left(\frac{a-x}{2\sqrt{kt}}\right)\right].
\end{aligned}$$
If we set $T_0 = 1$ this solution coincides with that of Example 6.19. That is because the temperature $u(x,0)$ in the example, though initially defined on $0 < x < \infty$, is then extended as an even function to $\mathbb{R}$, so it coincides with $f(x)$ as defined in this exercise.

6.25 The boundary condition at $x = 0$ implies $A(\xi) = 0$ in the representation of $u(x,t)$ given by (6.39), so that $u$ is now an odd function of $x$. By extending $f(x)$ as an odd function from $(0,\infty)$ to $(-\infty,\infty)$ we can see that $B(\xi)$ is the sine transform of $f$, and the same procedure followed in Example 6.18 leads to
$$\begin{aligned}
u(x,t) &= \frac{1}{\pi}\int_0^\infty\int_{-\infty}^\infty f(y)\sin\xi y\sin\xi x\,e^{-k\xi^2 t}\,dy\,d\xi\\
&= \frac{2}{\pi}\int_0^\infty\int_0^\infty f(y)\sin\xi y\sin\xi x\,e^{-k\xi^2 t}\,dy\,d\xi\\
&= \frac{1}{\pi}\int_0^\infty\int_0^\infty f(y)\left[\cos\xi(y-x) - \cos\xi(y+x)\right]e^{-k\xi^2 t}\,dy\,d\xi\\
&= \frac{1}{2\sqrt{\pi kt}}\int_0^\infty f(y)\left[e^{-(y-x)^2/4kt} - e^{-(y+x)^2/4kt}\right]dy.
\end{aligned}$$

6.26 Substituting into the formula in Exercise 6.25,
$$\begin{aligned}
u(x,t) &= \frac{1}{2\sqrt{\pi kt}}\int_0^1\left[e^{-(y-x)^2/4kt} - e^{-(y+x)^2/4kt}\right]dy\\
&= \frac{1}{\sqrt{\pi}}\int_{-x/2\sqrt{kt}}^{(1-x)/2\sqrt{kt}} e^{-p^2}\,dp - \frac{1}{\sqrt{\pi}}\int_{x/2\sqrt{kt}}^{(1+x)/2\sqrt{kt}} e^{-p^2}\,dp\\
&= \operatorname{erf}\left(\frac{x}{2\sqrt{kt}}\right) - \frac12\left[\operatorname{erf}\left(\frac{x+1}{2\sqrt{kt}}\right) + \operatorname{erf}\left(\frac{x-1}{2\sqrt{kt}}\right)\right], \qquad x > 0,\ t > 0.
\end{aligned}$$

6.27 The transformed wave equation, $\hat u_{tt}(\xi,t) = -c^2\xi^2\hat u(\xi,t)$, under the initial conditions $\hat u(\xi,0) = \hat f(\xi)$, $\hat u_t(\xi,0) = 0$, is solved by $\hat u(\xi,t) = \hat f(\xi)\cos\xi ct$. Taking the inverse Fourier transform yields
$$u(x,t) = \frac{1}{2\pi}\int_{-\infty}^\infty \hat f(\xi)\cos\xi ct\,e^{i\xi x}\,d\xi.$$
Using the relation $\cos\xi ct\,e^{i\xi x} = \frac12\left(e^{i\xi(x+ct)} + e^{i\xi(x-ct)}\right)$, we obtain
$$u(x,t) = \frac{1}{4\pi}\int_{-\infty}^\infty \hat f(\xi)\left[e^{i\xi(x+ct)} + e^{i\xi(x-ct)}\right]d\xi = \frac12\left[f(x+ct) + f(x-ct)\right].$$

6.28 The condition $u(0,t) = 0$ allows us to extend both $f$ and $u$ as odd functions of $x$, so that
$$B(\xi) = 2\int_0^\infty f(x)\sin\xi x\,dx,$$
and
$$u(x,t) = \frac{1}{\pi}\int_0^\infty B(\xi)\cos\xi ct\sin\xi x\,d\xi.$$

6.29 Assuming $f$ is continuous,
$$\begin{aligned}
y(x) &= \frac12\int_{-\infty}^\infty f(t)e^{-|x-t|}\,dt
= \frac12\int_{-\infty}^x f(t)e^{-(x-t)}\,dt + \frac12\int_x^\infty f(t)e^{-(t-x)}\,dt,\\
y'(x) &= \frac12\left[f(x) - f(x)\right] - \frac12\int_{-\infty}^x f(t)e^{-(x-t)}\,dt + \frac12\int_x^\infty f(t)e^{-(t-x)}\,dt\\
&= -\frac12\int_{-\infty}^x f(t)e^{-(x-t)}\,dt + \frac12\int_x^\infty f(t)e^{-(t-x)}\,dt,\\
y''(x) &= -\frac12 f(x) + \frac12\int_{-\infty}^x f(t)e^{-(x-t)}\,dt - \frac12 f(x) + \frac12\int_x^\infty f(t)e^{-(t-x)}\,dt\\
&= -f(x) + \frac12\int_{-\infty}^\infty f(t)e^{-|x-t|}\,dt.
\end{aligned}$$
Consequently, $y - y'' = f$.

Chapter 7

7.1 (a) $\dfrac{2a^2}{s^3} + \dfrac{2ab}{s^2} + \dfrac{b^2}{s}$.
(b) $\dfrac{2}{s-1}$, $s > 1$.
(c) $\dfrac{2}{s(s^2+4)}$.
(d) $\dfrac{1}{s^2+4}$.
(e) $\dfrac{c}{s}\left(1 - e^{-as}\right)$.
(f) $\dfrac{a}{s} - \dfrac{a}{bs^2}\left(1 - e^{-bs}\right)$.
(g) $\dfrac{2s}{(s^2-1)^2}$, $s > 1$.
(h) $\dfrac{2}{(s-1)^3}$, $s > 1$.
(i) $\sqrt{\pi/s}$.

7.2 (a) $ae^{-bx}$.
(b) $2\cosh 3x - \dfrac{5}{3}\sinh 3x$.
(c) $1 - e^{-x}$.
(d) $\dfrac12\left(1 - e^{-2x}\right)$.
(e) $3\cosh\sqrt{6}\,x - \dfrac{1}{\sqrt6}\sinh\sqrt{6}\,x$.
(f) $2\sqrt{x/\pi}$.
(g) $\dfrac12\left(5e^{-x} + 3e^{-2x} + 6e^{-3x}\right)$.

7.3
$$\mathcal{L}(f(ax))(s) = \int_0^\infty f(ax)e^{-sx}\,dx = \frac1a\int_0^\infty f(x)e^{-sx/a}\,dx = \frac1a F(s/a), \qquad a > 0.$$
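A transform table entry and the scaling rule of Exercise 7.3 can both be spot-checked by direct numerical integration (a sketch added by this note, not from the manual); the test value $s = 1.5$, the scale factor $a = 2$, and the truncation at $x = 60$ are arbitrary choices:

```python
import math

def simpson(f, a, b, n=12000):
    # Composite Simpson rule; n must be even.
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

def laplace(f, s, upper=60.0):
    # Truncated numerical Laplace transform; e^{-60s} is negligible for s >= 1.
    return simpson(lambda x: f(x) * math.exp(-s * x), 0.0, upper)

s = 1.5
num_c = laplace(lambda x: math.sin(x) ** 2, s)        # L(sin^2 x), item 7.1(c)
closed_c = 2 / (s * (s * s + 4))
# Scaling rule of Exercise 7.3 with a = 2: L[f(2x)](s) = F(s/2)/2.
num_scaled = laplace(lambda x: math.sin(2 * x) ** 2, s)
closed_scaled = 0.5 * 2 / ((s / 2) * ((s / 2) ** 2 + 4))
print(num_c, closed_c, num_scaled, closed_scaled)
```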

7.4 (a) $e^{-s}/s^2$.
(b) $2e^{-s}/s^3$.
(c) $e^{-(s-1)/2}/(s-1)$.
(d) $e^{-3s}\left(9s^{-1} + 6s^{-2} + 2s^{-3}\right)$.
(e) $\dfrac{1 - e^{-s}}{s\left(1 - e^{-3s}\right)}$.

7.5 $f(x) = x[H(x) - H(x-1)] + e^{-(x-1)}H(x-1)$;
$$\mathcal{L}(f)(s) = \frac{1 - e^{-s}}{s^2} - \frac{e^{-s}}{s} + \frac{e^{-s}}{s+1}.$$

7.6 (a) $\dfrac12 H(x-6)(x-6)^2$.
(b) $H(x-1)(x-1)e^{-(x-1)}\sin(x-1)$.
(c) $H(x-3) + H(x-1)$.
(d) $H(x-3)e^{x-3}$.
(e) $\dfrac13\left[\sin 3x + H(x-\pi)\sin 3(x-\pi)\right] = \dfrac13\left[\sin 3x - H(x-\pi)\sin 3x\right]$.

7.7 If $f$ has jump discontinuities at the points $x_1, \dots, x_n$, then the sum
$$e^{-sx_1}\left[f(x_1^-) - f(x_1^+)\right] + \cdots + e^{-sx_n}\left[f(x_n^-) - f(x_n^+)\right]$$
has to be added to the right-hand side of (7.6).

7.8 (a) The transformed equation gives
$$Y(s) = \frac{s+5}{s^2+4s+5} = \frac{s+2}{(s+2)^2+1} + 3\,\frac{1}{(s+2)^2+1},$$
hence $y(x) = e^{-2x}(\cos x + 3\sin x)$.
(b) $y(x) = 3e^{x/3}$.
(c) $y(x) = e^{-x}(\sin 2x + \sin x)$.
(d) $y(x) = 15 + 36x + 24x^2 + 32x^3$.
(e) $y(x) = \dfrac12\left(e^{2x} - e^{x}\right)H(x) + \dfrac12\left(e^{2(x-1)} - e^{x-1}\right)H(x-1)$.
(f) $y(x) = \dfrac14\left(e^{-2x} + 2x - 1\right)H(x) + \dfrac14\left(e^{-2(x-1)} - 2x + 1\right)H(x-1)$.
(g) $y(x) = \dfrac12\left(3e^{-x} - \cos x + \sin x\right) + \dfrac12\left[e^{-(x-\pi)} + \cos x - \sin x\right]H(x-\pi)$.

7.9 The first three answers may be obtained by using Equations (7.9), (7.10) or Theorem 7.14.

(a) $\dfrac16 x\sin 3x$.
(b) $\dfrac1x\left(e^x - 1\right)$.
(c) $\dfrac1x\left(e^{-bx} - e^{-ax}\right)$.
(d) Differentiate $F(s) = \operatorname{arccot}(s+1)$, use Equation (7.14), and then invert to arrive at $\mathcal{L}^{-1}(F)(x) = \dfrac1x e^{-x}\sin x$.

7.10 Using Equation (7.9),
$$\mathcal{L}\left(\frac{\sin x}{x}\right)(s) = \int_s^\infty \frac{dz}{z^2+1} = \frac{\pi}{2} - \arctan s = \arctan\frac1s.$$
Now Theorem 7.6 implies
$$\mathcal{L}[\operatorname{Si}(x)](s) = \frac1s\arctan\frac1s.$$

7.11 (a) Write
$$\mathcal{L}(f)(s) = \int_0^\infty f(x)e^{-sx}\,dx = \sum_{n=0}^\infty \int_{np}^{(n+1)p} f(x)e^{-sx}\,dx = \sum_{n=0}^\infty \int_0^p f(x+np)e^{-s(x+np)}\,dx,$$
then use the equations $f(x+np) = f(x)$ for all $n$ and $\sum_{n=0}^\infty e^{-nps} = \left(1 - e^{-ps}\right)^{-1}$, with $p > 0$ and $s > 0$, to arrive at the answer.
(b) $\mathcal{L}(f)(s) = \dfrac{1}{1-e^{-s}}\left(\dfrac{1-e^{-s}}{s^2} - \dfrac{e^{-s}}{s}\right)$.

7.12 For any function $f$ in $E$ there are positive constants $M$, $\alpha$, and $b$ such that
$$|f(x)| \le Me^{\alpha x} \qquad \text{for all } x \ge b.$$

Therefore
$$|\mathcal{L}(f)(s)| \le \int_0^\infty |f(x)|e^{-x\operatorname{Re}s}\,dx \le \int_0^b |f(x)|e^{-x\operatorname{Re}s}\,dx + M\int_b^\infty e^{-(\operatorname{Re}s - \alpha)x}\,dx = \int_0^b |f(x)|e^{-x\operatorname{Re}s}\,dx + \frac{Me^{-b(\operatorname{Re}s - \alpha)}}{\operatorname{Re}s - \alpha}.$$
Since $f$ is locally integrable, the right-hand side tends to 0 as $\operatorname{Re}s \to \infty$.

7.13 The left-hand side is the convolution of $x^3$ and $y(x)$. Applying Theorem 7.14 to the integral equation gives $3!\,Y(s)/s^4 = F(s)$, from which $Y(s) = s^4F(s)/6$. From Corollary 7.7 we conclude that
$$y(x) = \frac16 f^{(4)}(x) + \frac16\,\mathcal{L}^{-1}\left[f(0^+)s^3 + f'(0^+)s^2 + f''(0^+)s + f'''(0^+)\right].$$

The integral expression for $f(x)$ implies that $f^{(n)}(0^+) = 0$ for $n = 0, 1, 2, 3$ (we also know from Exercise 7.12 that $s^n$ cannot be the Laplace transform of a function in $E$ for any $n \in \mathbb{N}_0$). Assuming that $f$ is differentiable to fourth order (or that $y$ is continuous), the solution is $y(x) = f^{(4)}(x)/6$.

7.14
$$f * g(x) = \int_0^x f(x-t)g(t)\,dt = \int_0^x e^{x-t}\frac{1}{\sqrt{\pi t}}\,dt = e^x\frac{1}{\sqrt{\pi}}\int_0^x e^{-t}\frac{1}{\sqrt t}\,dt.$$
Under the change of variable $t = p^2$, we obtain
$$f * g(x) = e^x\frac{2}{\sqrt{\pi}}\int_0^{\sqrt x} e^{-p^2}\,dp = e^x\operatorname{erf}(\sqrt x).$$
Therefore
$$\mathcal{L}[e^x\operatorname{erf}(\sqrt x)](s) = \mathcal{L}(f)\mathcal{L}(g)(s) = \frac{\Gamma(1/2)}{\sqrt{\pi}}\,\frac{1}{s-1}\,\frac{1}{\sqrt s} = \frac{1}{(s-1)\sqrt s}, \qquad s > 1,$$
from which
$$\mathcal{L}[\operatorname{erf}(\sqrt x)](s) = \mathcal{L}\left[e^{-x}e^x\operatorname{erf}(\sqrt x)\right](s) = \frac{1}{s\sqrt{s+1}}.$$
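The final transform in Exercise 7.14 can be verified numerically with `math.erf` from the standard library (a check added by this note, not from the manual). Substituting $x = u^2$ makes the integrand smooth at the origin; the test value $s = 2$ and truncation at $u = 6$ are arbitrary choices:

```python
import math

def simpson(f, a, b, n=2000):
    # Composite Simpson rule; n must be even.
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

s = 2.0
# After x = u^2:  L[erf(sqrt(x))](s) = int_0^inf erf(u) e^{-s u^2} * 2u du.
numeric = simpson(lambda u: math.erf(u) * math.exp(-s * u * u) * 2 * u, 0.0, 6.0)
closed = 1 / (s * math.sqrt(s + 1))
print(numeric, closed)
```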

7.15
$$\begin{aligned}
\mathcal{L}([x])(s) &= \int_0^\infty [x]e^{-sx}\,dx = \sum_{n=0}^\infty n\int_n^{n+1} e^{-sx}\,dx\\
&= \sum_{n=0}^\infty \frac{n}{s}\left(e^{-ns} - e^{-(n+1)s}\right) = \frac{1-e^{-s}}{s}\sum_{n=0}^\infty ne^{-ns}\\
&= -\frac{1-e^{-s}}{s}\,\frac{d}{ds}\left(\sum_{n=0}^\infty e^{-ns}\right) = -\frac{1-e^{-s}}{s}\,\frac{d}{ds}\left(\frac{1}{1-e^{-s}}\right)\\
&= \frac{1-e^{-s}}{s}\cdot\frac{e^{-s}}{(1-e^{-s})^2} = \frac{e^{-s}}{s\left(1-e^{-s}\right)}.
\end{aligned}$$
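The partial sums in the first line of this derivation converge rapidly to the closed form, which a few lines of standard-library Python make easy to see (a check added by this note; the value $s = 0.5$ and the cutoff at 300 terms are arbitrary choices):

```python
import math

s = 0.5
# Partial sum of  sum_n n*(e^{-ns} - e^{-(n+1)s})/s, which should equal
# e^{-s} / (s * (1 - e^{-s})); the tail beyond n = 300 is ~300*e^{-150}.
partial = sum(n * (math.exp(-n * s) - math.exp(-(n + 1) * s)) / s for n in range(300))
closed = math.exp(-s) / (s * (1 - math.exp(-s)))
print(partial, closed)
```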

7.16 (a) Suppose, without loss of generality, that $f$ is continuous.
$$\begin{aligned}
|f*g(x+\delta) - f*g(x)| &\le \int_0^{x+\delta} |f(x+\delta-t)g(t) - f(x-t)g(t)|\,dt\\
&= \int_0^x |f(x+\delta-t) - f(x-t)|\,|g(t)|\,dt + \int_x^{x+\delta} |f(x+\delta-t) - f(x-t)|\,|g(t)|\,dt.
\end{aligned}$$
Since $f$ is continuous and $g$ is locally integrable, the right-hand side tends to 0 as $\delta \to 0$.
(b) Suppose $f$ is smooth and $g$ is continuous. Then
$$(f*g)'(x) = f(0)g(x) + \int_0^x f'(x-t)g(t)\,dt,$$
and, based on the result of (a), the right-hand side is continuous.
(c) If $f$ is piecewise continuous, then the convolution integral over $(0,x)$ can be expressed as a finite sum of integrals over subintervals $(x_i, x_{i+1})$, $0 \le i \le n$, in each of which $f$ is continuous, and the conclusion of (a) follows for each subinterval. Furthermore, the one-sided limits at $x_i$ clearly exist, hence $f*g$ is piecewise continuous. A similar argument can be used to prove that $f*g$ is piecewise smooth if $f$ is piecewise

smooth and $g$ is piecewise continuous. In the latter case the interval $(0,x)$ is partitioned by the points of discontinuity of $f$, $f'$, and $g$ for the argument to go through.

7.17 The transformed wave equation is $U_{xx} = s^2U/c^2$, and its solution
$$U(x,s) = a(s)e^{-sx/c} + b(s)e^{sx/c}$$
becomes unbounded as $\operatorname{Re}s \to \infty$ unless $b(s) = 0$. From the boundary condition at $x = 0$, $U(0,s) = a(s) = \mathcal{L}(\cos^2 t)$, and therefore
$$u(x,t) = \mathcal{L}^{-1}\left[e^{-sx/c}\,\mathcal{L}(\cos^2 t)\right] = H(t - x/c)\cos^2(t - x/c).$$

7.18 The (bounded) solution of the transformed equation is
$$U(x,s) = a(s)e^{-\sqrt{(s-a)/k}\,x}.$$
The boundary condition at $x = 0$ implies $a(s) = \hat f(s)$, hence
$$u(x,t) = \mathcal{L}^{-1}\left[\hat f(s)e^{-\sqrt{(s-a)/k}\,x}\right] = \int_0^t f(t-\tau)\,\mathcal{L}^{-1}\left[e^{-\sqrt{(s-a)/k}\,x}\right](\tau)\,d\tau.$$
The inverse transform of $e^{-\sqrt{s/k}\,x}$ was obtained in Exercise 7.21, and that of $e^{-\sqrt{(s-a)/k}\,x}$ follows from applying Theorem 7.10.

7.19 The transformed equation is
$$U_{xx} = \frac{1}{c^2}(s+1)^2 U,$$
hence
$$U(x,s) = F(s)e^{-(s+1)x/c},$$
where $F(s) = U(0,s) = \mathcal{L}(\sin t)$. The solution is therefore
$$u(x,t) = \mathcal{L}^{-1}\left[e^{-(s+1)x/c}\,\mathcal{L}(\sin t)\right] = e^{-x/c}H(t - x/c)\sin(t - x/c).$$

7.20 By changing the variable of integration from $\tau$ to $p = x/2\sqrt{k\tau}$, Equation (7.20) becomes
$$u(x,t) = \frac{2}{\sqrt{\pi}}\int_{x/2\sqrt{kt}}^\infty f\!\left(t - x^2/4kp^2\right)e^{-p^2}\,dp,$$
$$\lim_{x\to0}u(x,t) = \lim_{x\to0}\frac{2}{\sqrt{\pi}}\int_{x/2\sqrt{kt}}^\infty f\!\left(t - x^2/4kp^2\right)e^{-p^2}\,dp.$$
For any function in $E$, the integrand $f\!\left(t - x^2/4kp^2\right)e^{-p^2}$ is clearly dominated by an $L^1$ function for large values of $p$, therefore we can take the limit inside the integral to obtain
$$\lim_{x\to0}u(x,t) = \frac{2}{\sqrt{\pi}}f(t)\int_0^\infty e^{-p^2}\,dp = f(t).$$

7.21 $F(s) = e^{-a\sqrt s}/\sqrt s$ is analytic in the complex plane cut along the negative axis $(-\infty, 0]$. Using Cauchy's theorem, the integral along the vertical line $(\gamma - i\infty, \gamma + i\infty)$ can be reduced to two integrals, one along the bottom edge of the cut from left to right, and the other along the top edge from right to left. This yields
$$\begin{aligned}
\mathcal{L}^{-1}(F)(x) &= \frac{1}{2\pi i}\int_{\gamma - i\infty}^{\gamma + i\infty} F(s)e^{xs}\,ds\\
&= \frac{1}{2\pi}\int_0^\infty \frac{e^{ia\sqrt s} + e^{-ia\sqrt s}}{\sqrt s}\,e^{-xs}\,ds\\
&= \frac{1}{\pi}\int_0^\infty \frac{\cos(a\sqrt s)}{\sqrt s}\,e^{-sx}\,ds\\
&= \frac{2}{\pi}\int_0^\infty e^{-xt^2}\cos at\,dt.
\end{aligned}$$
Noting that the last integral is the Fourier transform of $e^{-xt^2}$, and using the result of Example 6.17, we obtain the desired expression for $\mathcal{L}^{-1}(F)(x)$.

To evaluate $\mathcal{L}^{-1}\left(e^{-a\sqrt s}\right)$ one is tempted to use Theorem 7.14, whereby
$$\mathcal{L}^{-1}\left(e^{-a\sqrt s}\right)(x) = \mathcal{L}^{-1}\left(\sqrt s\cdot e^{-a\sqrt s}/\sqrt s\right)(x) = \mathcal{L}^{-1}\left(\sqrt s\right) * \mathcal{L}^{-1}\left(e^{-a\sqrt s}/\sqrt s\right),$$
and attempt to compute the resulting convolution. But the problem here is that $\mathcal{L}^{-1}(\sqrt s)$ does not exist, at least not in $E$ (see Example 7.4 or Exercise 7.12). Another approach is to try to remove $\sqrt s$ from the denominator of $F(s)$ by integration (this is actually suggested by the formula we seek to prove):
$$\frac{a}{2}\int_s^\infty \frac{e^{-a\sqrt z}}{\sqrt z}\,dz = e^{-a\sqrt s}.$$
Using the formula (7.10), we therefore conclude that
$$\mathcal{L}^{-1}\left(e^{-a\sqrt s}\right)(x) = \frac{a}{2}\,\mathcal{L}^{-1}\left[\int_s^\infty \frac{e^{-a\sqrt z}}{\sqrt z}\,dz\right](x) = \frac{a}{2x}\,\mathcal{L}^{-1}\left(e^{-a\sqrt s}/\sqrt s\right)(x) = \frac{a}{2\sqrt{\pi}\,x^{3/2}}\,e^{-a^2/4x}.$$
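As a numerical cross-check of Exercise 7.21 (added by this note, not part of the manual), the forward Laplace transform of the claimed inverse $e^{-a^2/4x}/\sqrt{\pi x}$ should reproduce $e^{-a\sqrt s}/\sqrt s$. The substitution $x = u^2$ removes the singularity at the origin; the values $a = 1$, $s = 2$ and the truncation at $u = 8$ are arbitrary choices:

```python
import math

def simpson(f, a, b, n=4000):
    # Composite Simpson rule; n must be even.
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

a_, s = 1.0, 2.0

def g(u):
    # Integrand of L[e^{-a^2/4x}/sqrt(pi*x)](s) after x = u^2;
    # it vanishes to all orders as u -> 0.
    if u == 0.0:
        return 0.0
    return (2 / math.sqrt(math.pi)) * math.exp(-a_ * a_ / (4 * u * u) - s * u * u)

numeric = simpson(g, 0.0, 8.0)
closed = math.exp(-a_ * math.sqrt(s)) / math.sqrt(s)
print(numeric, closed)
```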
