This book provides a comprehensive treatment of DSP techniques, commencing from an elementary level with the sampling process. It covers topics like Z-transforms, filter approximations, digital filters (both IIR and FIR), Discrete Fourier Transforms (DFTs), Fast Fourier Transforms (FFTs), filter realization techniques, multirate signal processing, DSP processors, and DSP applications. At the end, MATLAB programming is given on various DSP topics. The aim is to impart adequate knowledge of the DSP area to undergraduate-level students. The book focuses on theoretical concepts and problem-solving.

Salient features:
• Provides basic knowledge of the subject, implementation and applications of DSP techniques.
• Solved examples and problems for exercise.
• Theoretical concepts with mathematical illustrations.
• Insight into practical problems.
• Orientation to practical applications.
• Knowledge about the design aspects of actual DSP systems for given specifications.
• Objective-type questions in multiple-choice pattern, with key.
• MATLAB examples and problems to provide hands-on experience.

This book is intended for B.E./B.Tech. students of Electrical Engineering, Electronics and Communication Engineering, Computer Science and Engineering, Information Technology, Telecommunication Engineering and Biomedical Engineering.

K. Raja Rajeswari (Ph.D.) is Principal, Viswanadha Institute of Technology and Management (VITAM), Visakhapatnam. Earlier, she served for about 35 years as Professor in the Department of Electronics and Communication Engineering, College of Engineering (Autonomous), Andhra University, Visakhapatnam. She has published more than 200 papers in IEE, IEEE and IETE journals, and presented many papers at national and international conferences. Dr. Rajeswari is an expert member for the Department of Science and Technology (DST), New Delhi, under the Women Scientist Scheme.
She is also an expert member for the National Board of Accreditation (NBA), All India Council for Technical Education, New Delhi. Her research interests include radar/sonar signal processing and wireless mobile communications. She has guided eighteen Ph.D. scholars, and twelve more are currently pursuing research under her guidance. She has published three books: Electronic Devices and Circuits, Electronic Circuit Analysis, and Signals and Systems.

ISBN: 978-93-89307-41-2
Digital Signal Processing
Digital Signal Processing

K. Raja Rajeswari
Principal, Viswanadha Institute of Technology and Management, Visakhapatnam
Former Professor, Dept. of Electronics and Communication Engineering, College of Engineering (Autonomous), Andhra University, Visakhapatnam
© Copyright 2019 I.K. International Pvt. Ltd., New Delhi-110002. This book may not be duplicated in any way without the express written consent of the publisher, except in the form of brief excerpts or quotations for the purposes of review. The information contained herein is for the personal use of the reader and may not be incorporated in any commercial programs, other books, databases, or any kind of software without written consent of the publisher. Making copies of this book or any portion for any purpose other than your own is a violation of copyright laws.

Limits of Liability/Disclaimer of Warranty: The author and publisher have used their best efforts in preparing this book. The author makes no representations or warranties with respect to the accuracy or completeness of the contents of this book, and specifically disclaims any implied warranties of merchantability or fitness for any particular purpose. There are no warranties which extend beyond the descriptions contained in this paragraph. No warranty may be created or extended by sales representatives or written sales materials. The accuracy and completeness of the information provided herein and the opinions stated herein are not guaranteed or warranted to produce any particular results, and the advice and strategies contained herein may not be suitable for every individual. Neither the publisher nor the author shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages.

Trademarks: All brand names and product names used in this book are trademarks, registered trademarks, or trade names of their respective holders. The publisher is not associated with any product or vendor mentioned in this book.

ISBN: 978-93-89307-41-2
EISBN: 978-93-89795-43-1
Edition: 2019
Preface

In recent years, Digital Signal Processing (DSP) has continued to have a major and increasing impact in many key areas of technology, including telecommunications, digital television, multimedia communications, mobile and wireless communications, biomedicine, digital audio and instrumentation. Its technological evolution keeps giving it new shape. DSP is now introduced in many universities as a core subject, and in some courses it is taught as an elective. DSP plays a vital role in many new and emerging digital products and applications in the information society. The need and expectation for electronic, communication and computer engineers to be competent in DSP have grown ever stronger over the past two decades. DSP is now compulsory as a core subject in most electronics, communications, computer, information technology and biomedical curricula.

This book provides a comprehensive treatment of DSP techniques, commencing from an elementary level with the sampling process. It covers topics like Z-transforms, filter approximations, digital filters (both IIR and FIR), Discrete Fourier Transforms (DFTs), Fast Fourier Transforms (FFTs), filter realization techniques, multirate signal processing, DSP processors and DSP applications. At the end, MATLAB programming is given on various DSP topics. MATLAB is now widely used as a generic tool in industry and academia, and requires less programming skill than C. It provides good graphics and display facilities and a good environment for developing DSP. MATLAB is a useful tool for students to become familiar with, and helps develop confidence in learning the subject. MATLAB is only a simulation tool; designs become reality by using DSP processors. In the manufacture of these processors, Texas Instruments (USA) is the leader; Intel also develops processors. Processor architectures are not fixed; they evolve with ongoing developments. High-speed, small-size processors are always preferred, as miniaturization is one of the important requirements, and cost is another criterion, so that cheaper products can come to market.

The main target of the book is to impart adequate knowledge of the DSP area to undergraduate-level students. For the students, this book concentrates on theoretical concepts and problem-solving. Many important problems which often come up in examinations have been solved. Some problems are given at the end of the chapters as exercises. Multiple-choice questions are given at the end of each chapter, which will help students who opt to appear for competitive examinations. The necessary figures, needed for easy understanding of the subject, are provided. Mathematical equations are given along with the text, as a good deal of mathematical knowledge is essential to understand DSP properly.
The book is a blend of everything essential for students to get through the DSP examination with a satisfactory result, while at the same time giving greater exposure to the subject. After going through the text, a student can understand the full content of DSP required for the examination. The topics are covered without putting much stress on the student; in fact, the student should enjoy studying the book.

Main features of the book:
• Provides basic knowledge of the subject, implementation and applications of DSP techniques.
• Simple, clear and easy to read, with the required mathematical content.
• Good solved examples and problems for exercise.
• Theoretical concepts with mathematical illustrations.
• Provides good insight into practical problems.
• Gives orientation to practical applications.
• Provides knowledge to readers to look into design aspects and to develop actual DSP systems for given specifications.
• Objective-type questions in multiple-choice pattern, with key.
• MATLAB examples and problems to provide hands-on experience.

Finally, I hope this book serves students well in the fulfilment of their degree examinations in engineering and technology. Nowhere does the book compromise in giving the student the competitive spirit to learn the subject in depth, and at the same time it is comfortable to go through the whole book and understand the content for examination purposes with full grip and confidence in the subject. The book is aimed at engineering, science and computer engineering students, and at applications engineers and scientists in industry who wish to gain a working knowledge of DSP. In particular, pre-final and final-year students studying for a degree in electronics, communications or electrical engineering will find the book valuable both for subject study and for their project work, for which various aspects can be extracted from the book. This book can be treated as a source of information for undergraduate study and also for scientists working in industry, research organizations, defence organizations, etc.
Prof. K. Raja Rajeswari
Acknowledgements

I express my deep sense of gratitude to Prof. G. S. N. Raju, my former colleague and present Vice-Chancellor of Andhra University, for his continuous support and encouragement in bringing this book to its final form. My thanks are due to Prof. Ch. V. Ramachandra Murthy, Principal, and Prof. P. S. Avadhani, Vice-Principal, AUCE (A), for their administrative support. I also thank Prof. P. Mallikarjuna Rao, Chairman, Board of Studies, Prof. G. Sasibhushana Rao, Prof. P. Rajesh Kumar, Head of the Department, Prof. P. V. Sridevi, Smt. S. Santa Kumari, Smt. M. S. Anuradha and Smt. S. Aruna of the Department of Electronics and Communication Engineering, College of Engineering (A), Andhra University, Visakhapatnam, for their cooperation and support. My special thanks are due to Sri V. Narasimham, Chairman, Sri V. Nageswara Rao, Vice-Chairman, and Sri V. Dhananjaya, Secretary, VITAM Educational Society, for extending their cooperation. I express my sincere thanks to Prof. K. Satya Prasad, Prof. B. Prabhakara Rao, Prof. S. Srinuvas Kumar, Prof. I. Santhi Prabha, Prof. K. Padma Raju, Prof. M. Sailaja, Prof. A. Mallikarjuna Prasad and Prof. K. Babulu of Jawaharlal Nehru Technological University, Kakinada. My thanks always go to B. Visveswara Rao and B. Satyam, CEO, Neo Silca, Hyderabad, for their constant support. My thanks are due to my son Chy S. V. S. Ganesh and my daughter-in-law Smt. Lavanya Divya (USA) for their constant support and encouragement in writing the book. I express my heartfelt thanks to my brother Konduri Sarveswara Rao, Smt. K. V. Lakshmi and Miss Eiswarya Jyothi, who are a source of inspiration to me. My heartfelt thanks to Shri G. V. K. Sharma of GITAM University for his invaluable help in preparing the manuscript. I thank Ch. Kusuma Kumari and A. Naga Jyothi, SRFs of DST, Dept. of ECE, AUCE (A), who helped me a lot in preparing the soft copy of the book. Finally, I thank all my scholars B. Leelaram Prakash, P. Srihari, M. Uttara Kumari, P. Radha Krishna, K. Srihari Rao, K. Murali Krishna, J. B. Seventline, M. V. Nageswara Rao, G. Manmadha Rao, V. Jagan Naveen, D. Thirumala Rao, M. Sandhya, N. Vijay Kumar and V. Vamsi Mohana Krishna for their constant support.

Prof. K. Raja Rajeswari
Contents

Preface v
Acknowledgements vii

1. Sampling and Discrete Time Systems 1
   1.1 Sampling-data Signals 1
   1.2 Holding Circuit 6
   1.3 Discrete-time System 8
   1.4 Convolution of Linear-time-invariant System 11
   Problems 28
   Multiple-Choice Questions 28
2. Z-Transforms 30
   2.1 Introduction to Z-transform 30
   2.2 Relation Between Z-transform and Fourier Transform 30
   2.3 Z-transform of Unit Impulse and Step Functions 31
   2.4 ROC and Its Properties 33
   2.5 Transfer Function 34
   2.6 Inverse Z-transform 38
   2.7 Z-domain Stability 44
   2.8 Some Typical Examples on Z-transform 46
   Problems 58
   Multiple-Choice Questions 59
3. Analog Filter Approximations 63
   3.1 Butterworth Approximation 66
   3.2 Chebyshev Approximation 71
   3.3 Survey of Other Approximations 74
   Problems 75
   Multiple-Choice Questions 77
4. IIR Filters 81
   4.1 Introduction to Digital Filter Design 81
   4.2 Conversion from Analog to Digital 84
   4.3 Bilinear Transformation Method 85
   4.4 Impulse Invariance Method 86
   4.5 Step Invariance Method 87
   4.6 Comparison of the Amplitude Responses of the Three Methods 90
   Problems 96
   Multiple-Choice Questions 98
5. FIR Filters 100
   5.1 Fourier Series Method 100
   5.2 FIR Filter Design Based on Windowing Techniques 103
   5.3 Summary of the Window Method of Determining FIR Filter Coefficients 107
   5.4 Comparison between FIR and IIR Filters 109
   5.5 Amplitude Responses of Various Window Functions 109
   5.6 Some Typical Examples on FIR Digital Filters 112
   Problems 120
   Multiple-Choice Questions 121
6. Realization of Digital Filters 123
   6.1 Structural Representation of the DTLTI Systems Using Z-transform 123
   6.2 Cascade and Parallel Realization Forms 125
   6.3 Some Typical Examples on Realization of Filters 127
   Problems 134
   Multiple-Choice Questions 136
7. The Discrete Fourier Transform 139
   7.1 Introduction 139
   7.2 Forms of the Fourier Transform 139
   7.3 Discrete Fourier Transform 144
   7.4 Properties of DFT 145
   7.5 Relation Between Fourier Transform and Z-transform 155
   7.6 Comparison Between Linear Convolution and Circular Convolution 156
   7.7 Some Typical Examples on DFT 157
   Problems 170
   Multiple-Choice Questions 170
8. Fast Fourier Transform 176
   8.1 Introduction 176
   8.2 Motivation for Fast Fourier Transform 176
   8.3 Decimation-in-Time (DIT) – FFT Algorithm 177
   8.4 Decimation-in-Frequency (DIF) – FFT Algorithm 182
   8.5 Inverse Fast Fourier Transform (IFFT) 188
   8.6 Applications of FFT Algorithms 189
   Problems 193
   Multiple-Choice Questions 193
9. Multirate Digital Signal Processing 197
   9.1 Introduction 197
   9.2 Digital Filter Banks 198
   9.3 Applications of Multirate Signal Processing 199
   Problems 201
   Multiple-Choice Questions 202
10. Digital Correlation Techniques 204
   10.1 Introduction 204
   10.2 Correlation Between Waveforms 204
   10.3 Power and Cross-correlation 205
   10.4 Autocorrelation 206
   10.5 Autocorrelation of Nonperiodic Waveform of Finite Energy 207
   10.6 Autocorrelation and Cross-correlation of Discrete-Time Signals 208
   10.7 Overlap-Add Block Convolution 214
   10.8 Overlap-Save Block Convolution 215
   Problems 218
   Multiple-Choice Questions 219
11. Power Spectrum Estimation 221
   11.1 Estimation of Spectra from Finite-duration Observation of Signals 221
   11.2 Computation of the Power Spectrum Estimation from Finite-duration Observations of Signals 222
12. DSP Processors 227
   12.1 Introduction 227
   12.2 Architectures for Digital Signal Processing 227
   12.3 On-chip Peripherals 230
   12.4 DSP Implementation Technology 230
   12.5 Types of Digital Signal Processors 231
   Multiple-Choice Questions 232
13. DSP Applications 233
   13.1 Applications of DSP in Telecommunications 235
   13.2 Applications of DSP in Speech Processing 236
   13.3 Radar/Sonar Communications 238
   13.4 Biomedical Applications of DSP 240
14. Digital Signal Processing with MATLAB 242
   14.1 MATLAB Environment 242
   14.2 Quick Tutorial of MATLAB 242
   14.3 Digital Signal Processing with MATLAB 251
Index 297
1. Sampling and Discrete Time Systems
Sampling is the process of converting a continuous-time signal into a discrete-time or digital signal.
1.1 SAMPLING-DATA SIGNALS
A sampled-data signal can be considered as arising from sampling a continuous-time signal at a periodic interval of time T, as illustrated in Fig. 1.1. The sampling rate, or sampling frequency, is fs = 1/T. Sampling is a preprocessing step performed before digitizing the continuous-time signal; it can be done either with non-zero-width pulses or with impulses. Impulse sampling is an idealization that is not practically realizable. In practice, non-zero pulse-width sampling is used; the narrower the pulse, the smaller the information loss.
1.1.1 Non-Zero Pulse Width Sampling

Initially, we will assume that each sample has a width τ, so that the resulting signal consists of a series of relatively narrow pulses whose amplitudes are modulated by the original continuous-time signal. In communication systems this particular form of sampled-data signal is designated a pulse-amplitude-modulated (PAM) signal. Let x*(t) represent the sampled-data signal and x(t) the original continuous-time signal. We may consider x*(t) to be the product of x(t) and a hypothetical pulse train p(t), as illustrated in Fig. 1.1. Thus,

    x*(t) = x(t) p(t)    (1.1)

An important property of the sampled-data signal is its spectrum X*(f). This can be derived by first expressing p(t) in Fourier series form:

    p(t) = Σ_{k=−∞}^{∞} a_k e^{jkωst}    (1.2)

where ωs = 2πfs = 2π/T. The coefficients a_k in eq. (1.2) follow a (sin kπd)/(kπd) variation, where k is the order of the harmonic and d is the duty cycle, defined as the ratio of the pulse duration to the pulse repetition period:

    d = τ/T    (1.3)
Fig. 1.1 Development of sampled-data signal using non-zero-width pulse sampling.
Substituting eq. (1.2) in eq. (1.1) results in the expression

    x*(t) = Σ_{k=−∞}^{∞} a_k x(t) e^{jkωst}    (1.4)

The spectrum may now be determined by taking the Fourier transform of both sides of eq. (1.4). Each term of the series on the right may be transformed using the frequency-shifting property of the Fourier transform. This results in

    X*(f) = Σ_{k=−∞}^{∞} a_k X(f − kfs)    (1.5)
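Equation (1.5) lends itself to a quick numerical check. The sketch below sums a few weighted replicas of a baseband spectrum; the triangular X(f), the duty cycle value and the coefficient formula a_k = d·sinc(kd) are illustrative assumptions of mine, not values from the text.

```python
import numpy as np

# Numerical sketch of eq. (1.5): X*(f) = sum_k a_k X(f - k*fs).
# The triangular baseband spectrum X(f), the duty cycle d and the
# coefficient formula a_k = d*sinc(k*d) are illustrative assumptions.

fs = 300.0  # sampling frequency in Hz
d = 0.1     # duty cycle tau/T of the sampling pulse train

def X(f):
    """Assumed baseband spectrum: triangular, band-limited to 100 Hz."""
    f = np.asarray(f, dtype=float)
    return np.where(np.abs(f) < 100, 1.0 - np.abs(f) / 100, 0.0)

def X_star(f, K=10):
    """Eq. (1.5): replicas of X(f), translated by k*fs and weighted by a_k."""
    total = np.zeros_like(np.asarray(f, dtype=float))
    for k in range(-K, K + 1):
        a_k = d * np.sinc(k * d)  # np.sinc(x) = sin(pi*x)/(pi*x)
        total = total + a_k * X(f - k * fs)
    return total

print(X_star(0.0))    # baseband replica: a_0 * X(0) = 0.1
print(X_star(300.0))  # first translated replica, weighted by a_1 < a_0
```

Evaluating X_star near multiples of fs shows the translated copies scaled by the corresponding a_k, which shrink as |k| grows.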
Typical sketches of |X(f)| and |X*(f)| are shown in Fig. 1.2. For scale adjustment, only a small section of the negative-frequency range of |X*(f)| is shown, but since it is an even function of frequency, its behavior over negative frequencies is readily understood. It can be observed that the spectrum of a sampled-data signal consists of the original spectrum plus an infinite number of translated versions of the original spectrum. These translated functions are shifted in frequency by amounts equal to the sampling frequency and its harmonics, and their magnitudes are multiplied by the a_k coefficients, so they diminish with increasing frequency.

Example 1.4 Determine the Nyquist rate corresponding to each of the following signals: (i) rect(300t) (ii) −10 sin 40πt cos 300πt

Solution: A signal can be recovered from its samples when ωs > 2ωm, where ωm = 2πfm is the highest angular frequency present.

(i) Given the signal rect(300t), consider x(t) = rect(300t). Then

    X(f) = (1/300) sinc(f/300)

The sinc function never becomes and stays zero beyond any finite frequency, so the highest frequency in the signal is infinite. Hence the Nyquist rate is also infinite.

(ii) Given the signal −10 sin 40πt cos 300πt, consider

    x(t) = −10 sin 40πt cos 300πt
         = −5 (2 sin 40πt cos 300πt)
         = −5 [sin(40πt + 300πt) + sin(40πt − 300πt)]
         = −5 [sin 340πt − sin 260πt]
         = 5 [sin 260πt − sin 340πt]

This is of the form x(t) = 5[sin ω1t − sin ω2t], with

    ω1 = 260π ⇒ 2πf1 = 260π ⇒ f1 = 130 Hz
    ω2 = 340π ⇒ 2πf2 = 340π ⇒ f2 = 170 Hz

The maximum frequency present in x(t) is fmax = f2 = 170 Hz. The Nyquist rate is therefore

    fs = 2 fmax = 2 × 170 Hz
       = 340 Hz

Example 1.5 Determine the Nyquist rate corresponding to each of the following signals:
(i) x(t) = 1 + cos 2000πt + sin 4000πt
(ii) x(t) = (sin 4000πt)/(πt)

Solution:
(i) x(t) = 1 + cos 2000πt + sin 4000πt. From ω1t = 2000πt, 2πf1 = 2000π, so f1 = 1000 Hz. Similarly, from ω2t = 4000πt, 2πf2 = 4000π, so f2 = 2000 Hz. The maximum frequency in x(t) is therefore fm = f2 = 2000 Hz, and the Nyquist rate is

    fs = 2 fm = 2 × 2000 Hz = 4000 Hz

The Nyquist interval is

    Ts = 1/(2 fm) = 1/4000 s = 0.25 ms

(ii) x(t) = (sin 4000πt)/(πt). From ω1t = 4000πt, 2πf1 = 4000π, so the spectrum extends to f1 = 2000 Hz. The Nyquist rate is

    fs = 2 fm = 2 × 2000 = 4000 Hz

and the Nyquist interval is

    Ts = 1/fs = 0.25 ms
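Computations like those above reduce to "twice the highest frequency present." A small helper makes this mechanical once the component frequencies have been identified; the function names are mine, not the book's.

```python
def nyquist_rate(freqs_hz):
    """Nyquist rate: twice the highest frequency present in the signal (Hz)."""
    return 2 * max(freqs_hz)

def nyquist_interval(freqs_hz):
    """Nyquist interval Ts = 1 / (Nyquist rate), in seconds."""
    return 1.0 / nyquist_rate(freqs_hz)

# The sin/cos product example: components at 130 Hz and 170 Hz
print(nyquist_rate([130, 170]))           # 340 Hz
# Example 1.5(i): components at 0, 1000 and 2000 Hz
print(nyquist_rate([0, 1000, 2000]))      # 4000 Hz
print(nyquist_interval([0, 1000, 2000]))  # 0.00025 s = 0.25 ms
```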
Example 1.6 Determine the Nyquist sampling rate and Nyquist sampling interval for the signals
(a) sinc(100πt)
(b) sint(100πt)
(c) sinc(100πt) + sinc(50πt)
(d) sinc(100πt) + 3 sinc²(60πt)

Solution:
(a) Let x(t) = sinc(100πt) = (sin 100πt)/(100πt). From ω1t = 2πf1t = 100πt, f1 = 50 Hz.

    Nyquist sampling rate = 2 fmax = 2 × 50 = 100 Hz   (since fmax = f1)
    Nyquist sampling interval = 1/(Nyquist sampling rate) = 1/100 = 0.01 s = 10 ms

(b) There is no such function as sint(·), so no sampling rate is defined.

(c) Let x(t) = sinc(100πt) + sinc(50πt) = (sin 100πt)/(100πt) + (sin 50πt)/(50πt). From ω1t = 2πf1t = 100πt, f1 = 50 Hz, and from ω2t = 2πf2t = 50πt, f2 = 25 Hz. The maximum frequency in x(t) is f1 = 50 Hz.

    Nyquist sampling rate = 2 fmax = 2 × 50 = 100 Hz   (since fmax = f1)
    Nyquist sampling interval = 1/100 = 0.01 s = 10 ms

(d) Let x(t) = sinc(100πt) + 3 sinc²(60πt). For x1(t) = sinc(100πt), ω1t = 2πf1t = 100πt gives f1 = 50 Hz. For x2(t) = 3 sinc²(60πt), squaring doubles the spectral width, so ω2t = 2πf2t = 120πt gives f2 = 60 Hz. The maximum frequency in x(t) is f2 = 60 Hz.

    Nyquist sampling rate = 2 fmax = 2 × 60 = 120 Hz   (since fmax = f2)
    Nyquist sampling interval = 1/120 = 0.00833 s = 8.33 ms

Example 1.7 A low-pass signal x(t) has a spectrum X(f) given by
=
X ( f ) = 1 − f / 200 = 0
f < 200 elsewhere
Assume that x (t) is ideally sampled at fs = 300 Hz. Sketch the spectrum of xd (t) for | f | < 200. Solution: The spectrum, X (f ) of the given low-pass signal x (t) is,
X ( f ) = 1−
f
f < 200 200 elsewhere = 0 The frequency with which signal x (t) has been sampled, fs = 300 Hz. Then Fig. 1.11 shows the spectrum of the given low-pass signal, x (t).
Fig. 1.11
20
Digital Signal Processing
But from the definition, the given low-pass signal x (t) is sampled using an impulse train function. • • Ê mˆ P ( t ) = Â d ( t - mTs ) = Â d Á t - ˜ f ¯ Ë m=-•
m=-•
s
Then, the respective sampling process is represented by, •
xd ( t ) = x ( t ) p ( t ) = x ( t )
Â
d ( t - mTs ) =
m=-•
•
 x ( mT ) d (t - mT ) s
s
m=-•
Where, xd (t) = sampled signal of x (t) In the frequency domain, i.e., by using convolution theorem F .T { xd ( t )} = F .T { x ( t ) p ( t )}
fi
xd ( f ) = x ( f ) * p ( f )
where, X (f ), P (f ) and xd (t) are spectra of x (t), p (t) and xd (t) respectively. Then, X d ( f ) = X ( f ) * f s
•
•
•
 d ( f - kf ) = f Â Ú X ( f - f ¢) d ( f ¢ - kf ) df ¢ s
s
k =-•
X d ( f ) = fs
•
s
k =-• -•
 X(f
- kf s )
k =-•
Thus, Xd (f ) is a periodic function composed of an infinite number of replicas of repeating 1 at period, f s = . Ts Also note that these replicas are scaled by a factor fs. Since, the highest frequency component in the signal x(t) is fm = 200 Hz and Sampling frequency, fs = 300 Hz. \ fs < 2 fm Thus, the successive cycles of the sampled spectrum will overlap each other and hence in this case, the original spectrum X (f ) cannot be extracted out of the spectrum Xd (f ), which is shown in Fig. 1.12.
Fig. 1.12
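The spectral overlap above can also be viewed component-wise: each input frequency f reappears in the sampled spectrum at every |k·fs ± f|, and whatever lands below a reconstruction filter's cut-off shows up in the output. A minimal sketch (the function and its names are my own, not the book's):

```python
def output_components(signal_freqs, fs, cutoff, n_images=3):
    """Image frequencies |k*fs +/- f| of an ideally sampled signal, kept
    if they fall below the low-pass reconstruction cut-off (all in Hz)."""
    comps = set()
    for f in signal_freqs:
        for k in range(-n_images, n_images + 1):
            for image in (abs(k * fs + f), abs(k * fs - f)):
                if image < cutoff:
                    comps.add(image)
    return sorted(comps)

# A 200 Hz tone sampled at fs = 300 Hz (below its 400 Hz Nyquist rate):
print(output_components([200], fs=300, cutoff=150))  # [100] -- an alias
```

With the cut-off at fs/2 = 150 Hz, the original 200 Hz tone is lost and a 100 Hz alias appears instead, which is the component-wise picture of the overlap sketched in Fig. 1.12.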
Example 1.8 A signal x(t) = 2 cos 400πt + 6 cos 640πt is ideally sampled at fs = 500 Hz. If the sampled signal is passed through an ideal low-pass filter with a cut-off frequency of 400 Hz, what frequency components will appear in the output?

Solution: Given x(t) = 2 cos 400πt + 6 cos 640πt, fs = 500 Hz, and cut-off frequency f0 = 400 Hz. From ω1t = 400πt, 2πf1 = 400π, so f1 = 200 Hz. Similarly, from ω2t = 640πt, 2πf2 = 640π, so f2 = 320 Hz.

The sampled signal contains the original components f1 and f2 together with images at fs ± f1, fs ± f2, 2fs ± f1, 2fs ± f2, etc.:

    fs + f1 = 500 + 200 = 700 Hz
    fs − f1 = 500 − 200 = 300 Hz
    2fs + f1 = 1000 + 200 = 1200 Hz
    2fs − f1 = 1000 − 200 = 800 Hz
    fs + f2 = 500 + 320 = 820 Hz
    fs − f2 = 500 − 320 = 180 Hz

With the cut-off frequency f0 = 400 Hz, the components that appear at the output of the LPF are 180 Hz, 200 Hz, 300 Hz and 320 Hz: the original components f1 and f2, along with the aliases fs − f2 and fs − f1.

Example 1.9 The signal y(t) is generated by convolving a band-limited signal x1(t) with another band-limited signal x2(t), that is, y(t) = x1(t) * x2(t), where

    X1(jω) = 0 for |ω| > 1000π
    X2(jω) = 0 for |ω| > 2000π

Impulse-train sampling is performed on y(t) to obtain

    yp(t) = Σ_{n=−∞}^{∞} y(nT) δ(t − nT)

Specify the range of values for the sampling period T which ensures that y(t) is recoverable from yp(t).
Solution: The spectra of the two band-limited signals x1(t) and x2(t) are X1(jω) and X2(jω) respectively, defined by

    X1(jω) = 0 for |ω| > 1000π
    X2(jω) = 0 for |ω| > 2000π

The signal y(t) is generated by convolving these two band-limited signals, y(t) = x1(t) * x2(t). Using the time-convolution property of the Fourier transform, F{f1(t) * f2(t)} = F1(jω) · F2(jω), we obtain

    Y(jω) = X1(jω) · X2(jω)

Since X1(jω) vanishes for |ω| > 1000π, the product vanishes there as well:

    Y(jω) = 0 for |ω| > 1000π

Impulse-train sampling is then performed on y(t) to obtain the sampled signal

    yp(t) = Σ_{n=−∞}^{∞} y(nT) δ(t − nT)

where T is the sampling period. By the sampling theorem, a band-limited signal of finite energy with no frequency component higher than fm Hz is completely described by its sample values taken at uniform intervals less than or equal to 1/(2fm), i.e., Ts ≤ 1/(2fm). Here ωm = 1000π, so 2πfm = 1000π and fm = 500 Hz. The sampling period must therefore satisfy

    T ≤ 1/(2 × 500) = 1/1000 s

Thus the range of sampling-period values which ensures that y(t) is recoverable from yp(t) is 0 < T ≤ 1 ms.
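Reconstruction through an ideal LPF with cut-off fs/2, as used in the surrounding examples, is equivalent in the time domain to sinc interpolation of the samples. A minimal sketch (all names are mine; note that np.sinc(x) = sin(πx)/(πx)):

```python
import numpy as np

def sinc_reconstruct(samples, Ts, t):
    """Ideal-LPF reconstruction: x_hat(t) = sum_n x[n] * sinc((t - n*Ts)/Ts)."""
    n = np.arange(len(samples))
    return float(np.sum(samples * np.sinc((t - n * Ts) / Ts)))

# A 1 Hz cosine sampled at fs = 6 Hz, well above its 2 Hz Nyquist rate
Ts = 1 / 6
t_samp = np.arange(0, 2, Ts)
samples = np.cos(2 * np.pi * t_samp)
print(sinc_reconstruct(samples, Ts, 1.0))  # ~1.0, i.e., cos(2*pi*1.0)
```

At instants between samples the finite sum only approximates the signal (the ideal formula needs infinitely many samples), but at a sampling instant every term except one vanishes.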
Example 1.10 A rectangular pulse waveform, shown in Fig. 1.13, is sampled once every Ts seconds and reconstructed using an ideal LPF with a cut-off frequency of fs/2. Sketch the reconstructed waveform for Ts = 1/6 s and Ts = 1/12 s.

Fig. 1.13

Solution:
(i) Given the cut-off frequency f0 = fs/2 and Ts = 1/6 s, where Ts is the sampling period and fm the maximum frequency, Ts = 1/(2fm) gives

    fm = 1/(2Ts) = 1/(2 × 1/6) = 3 Hz

The given rectangular pulse function with its sampling instants is shown in Fig. 1.14.

Fig. 1.14

Since the cut-off frequency of the ideal LPF is f0 = fs/2 = 6/2 = 3 Hz, the reconstructed waveform after passing through the ideal LPF is shown in Fig. 1.15.

Fig. 1.15 Reconstructed waveform.

The dotted rectangular part in the figure shows the output waveform reconstructed by the ideal LPF with f0 = 3 Hz.

(ii) For Ts = 1/12 s, Ts = 1/(2fm) gives

    fm = 1/(2 × 1/12) = 6 Hz

The given rectangular pulse function with its sampling instants is shown in Fig. 1.16.

Fig. 1.16

Since the cut-off frequency of the ideal LPF is f0 = fs/2 = 12/2 = 6 Hz, the reconstructed waveform after passing through the ideal LPF is shown in Fig. 1.17.

Fig. 1.17 Reconstructed waveform.

The dotted rectangular part in the figure shows the output waveform reconstructed by the ideal LPF with f0 = 6 Hz.

Example 1.11 The signal x(t) = u(t + T0) − u(t − T0) can undergo impulse-train sampling without aliasing provided that the sampling period satisfies T < 2T0. Justify.

Solution:
Given the signal x(t) = u(t + T0) − u(t − T0), as shown in Fig. 1.18.

Fig. 1.18

Aliasing occurs when a continuous band-limited signal is sampled below the Nyquist rate, i.e., when fs < 2fm, where fs is the sampling rate and fm is the highest frequency of the band-limited signal. Treat x(t) as band-limited to an effective frequency f0 = 1/(4T0), so that 2T0 = 1/(2f0). The given condition T < 2T0 then means T < 1/(2f0), and since fs = 1/T,

    1/fs < 1/(2f0)  ⇒  fs > 2f0

The signal is sampled by the impulse train

    δTs(t) = Σ_{n=−∞}^{∞} δ(t − nTs)

and the sampled version of x(t) is denoted by g(t):

    g(t) = Σ_{n=−∞}^{∞} x(nTs) δ(t − nTs)

Applying the Fourier transform,

    G(f) = Σ_{n=−∞}^{∞} x(nTs) e^{−j2πf nTs}

When g(t) is passed through a low-pass filter, the higher-frequency components are removed. Since the sampling rate fs exceeds twice the effective signal frequency f0, the signal x(t) sampled with period T < 2T0 will not suffer from aliasing, and the statement is justified.

Example 1.12 The signal x(t) with Fourier transform X(jω) = δ(ω + ω0) − δ(ω − ω0) can undergo impulse-train sampling without aliasing provided that the sampling period satisfies T < π/ω0. Justify.

Solution:
Given that, X(jω) = δ(ω + ω0) − δ(ω − ω0)
Consider the spectrum of the above signal,
Fig. 1.19
Spectrum of original signal.
The given sampling interval satisfies
  T < π/ω0 = π/(2π f0) = 1/(2 f0)
Since T = 1/fs, this gives 1/fs < 1/(2 f0), i.e., fs > 2 f0. Thus, when the signal X(jω) is sampled with period T < π/ω0, the aliasing effect will not occur, because the sampling rate 1/T is more than twice the band-limited signal frequency f0. The above statement is true.
Example 1.13 Explain briefly bandpass sampling.
Solution: Sampling theorem for bandpass signals: a bandpass signal m(t) can be completely recovered and represented from its samples if it is sampled at a minimum rate of twice its bandwidth. The spectrum of a bandpass signal is shown in the figure below.
Fig. 1.20
Spectrum of bandpass signal.
The maximum frequency present in the bandpass signal is denoted by fm and the bandwidth by 2 fm. From the above spectrum, the frequencies in the bandpass signal range from fc − fm to fc + fm (with the mirror band at negative frequencies),
where,
  fc + fm = highest frequency component
  fc − fm = lowest frequency component
The bandpass signal m(t) can be represented in terms of its in-phase and quadrature components. Let
  xP(t) = in-phase component
  xQ(t) = quadrature component
Then,
  m(t) = xP(t) cos 2πfc t − xQ(t) sin 2πfc t
The output of the low-pass filter is
  m(t) = Σ_{n=−∞}^{∞} m(n/(4 fm)) sinc(2 fm t − n/2) cos(2π fc t − nπ/2)
For a bandpass signal, the minimum sampling rate must be 4 fm samples per second; only then can the signal be completely recovered from its samples.
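The effect of sampling below the Nyquist rate can be seen numerically. The following is a small sketch (the frequencies chosen here are illustrative, not taken from the text): a 3 Hz cosine sampled at only fs = 4 Hz produces exactly the same sample values as a 1 Hz cosine, i.e., 3 Hz aliases to |3 − 4| = 1 Hz.

```python
import math

fs = 4.0                  # sampling rate in Hz (below the 6 Hz Nyquist rate)
f_true, f_alias = 3.0, 1.0  # 3 Hz folds down to 1 Hz

# Samples of the true signal and of its alias are indistinguishable.
samples_true = [math.cos(2 * math.pi * f_true * n / fs) for n in range(16)]
samples_alias = [math.cos(2 * math.pi * f_alias * n / fs) for n in range(16)]

max_diff = max(abs(a - b) for a, b in zip(samples_true, samples_alias))
```

Because the two sample sequences coincide, no low-pass filter can recover the original 3 Hz component, which is exactly the aliasing failure the sampling theorem guards against.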
Problems
1. State the sampling theorem and explain how a low-pass filter can retrieve information from the sampled data.
2. Explain the 'aliasing' problem in a sampling scheme.
3. A signal having a spectrum ranging from 10 kHz to 100 kHz is to be sampled and converted to discrete form. What is the theoretical minimum number of samples per second that must be taken to ensure recovery?
4. A signal having a spectrum ranging from near dc to 10 kHz is to be sampled and converted to discrete form. What is the minimum number of samples per second that must be taken to ensure recovery?
Multiple-Choice Questions
1. For a continuous-time signal of frequency 1 kHz the minimum required sampling frequency is
(a) 4 kHz (b) 2 kHz (c) 1 kHz (d) 500 Hz
2. The process of reconstruction from a discrete-time signal can be aided by the use of
(a) Holding circuit (b) Pulse circuit (c) Sampling circuit (d) None of the above
3. The "aliasing" process means
(a) Amplitude overlapping (b) Phase overlapping (c) Spectral overlapping (d) Peaks overlapping
4. Nyquist sampling theorem states that
(a) fs = fh (b) fs ≥ fh (c) fs ≤ fh (d) fs < fh
5. Sampled output means
(a) Sum of original signal and sampling signal (b) Difference of original signal and sampling signal (c) Product of original signal and sampling signal (d) Division of original signal and sampling signal
6. Nyquist frequency is
(a) Half the sampling frequency (b) Greater than the sampling frequency (c) Equal to sampling frequency (d) None of the above
7. If the maximum frequency component in an ECG signal is 50 Hz, then the ECG signal should be sampled with minimum sampling frequency
(a) 50 Hz (b) 75 Hz (c) 25 Hz (d) 100 Hz
8. Analog frequency F in Hz and digital frequency ω in rad are related by
(a) ω = 2πFTs (b) ω = 2FTs (c) ω = πFTs (d) ω = FTs
Key to the Multiple-Choice Questions
1. (b)  2. (a)  3. (c)  4. (b)  5. (c)  6. (a)  7. (d)  8. (a)
2  Z-Transforms

2.1 INTRODUCTION TO Z-TRANSFORM
The Z-transform is a convenient and valuable tool for representing, analyzing and designing discrete-time signals and systems. It plays a role in discrete-time systems similar to that played by the Laplace transform in continuous-time systems. The main objective of this unit is to present the important concepts of the Z-transform and their application in determining the stability of discrete-time systems. The Z-transform is a powerful tool for determining the transfer function of a system, which is useful for studying the stability of a system with respect to its pole-zero pattern in the z-plane. The frequency response of a system can be obtained by replacing z by e^{jω} in the transfer function of the system.
2.1.1 Definition
The Z-transform of a sequence x[n] is defined as
  Z[x[n]] = X(z) = Σ_{n=−∞}^{∞} x[n] z^{−n}    (2.1)
and the inverse Z-transform is defined as
  Z^{−1}[X(z)] = x[n] = (1/2πj) ∮_C X(z) z^{n−1} dz    (2.2)
On the unit circle the Z-transform reduces to the discrete-time Fourier transform:
  X(e^{jω}) = Σ_{k=−∞}^{∞} x[k] e^{−jωk},  which converges provided Σ_{k=−∞}^{∞} |x[k]| < ∞
More generally, with z = re^{jω},
  X(z) = Σ_{k=−∞}^{∞} x[k] z^{−k}    (2.3)
  X(re^{jω}) = Σ_{k=−∞}^{∞} x[k] (re^{jω})^{−k},  which converges provided Σ_{k=−∞}^{∞} |x[k]| r^{−k} < ∞  (ROC)
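Eq. (2.1) can be checked numerically for simple sequences. The sketch below (the values of a and z are arbitrary illustrative choices) truncates the infinite sum for x[n] = a^n u[n] and compares it with the closed form z/(z − a), which is valid whenever |z| > |a|:

```python
def ztransform(x, z):
    """Evaluate X(z) = sum_n x[n] z^(-n) for a finite causal sequence x."""
    return sum(xn * z ** (-n) for n, xn in enumerate(x))

a, z = 0.5, 2.0                    # |z| > |a|, so the series converges
x = [a ** n for n in range(200)]   # truncation of a^n u[n]
closed_form = z / (z - a)          # table pair: a^n u[n] <-> z/(z - a)
approx = ztransform(x, z)
```

The truncation error is of order (a/z)^200 and is negligible here.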
2.2 RELATION BETWEEN Z-TRANSFORM AND FOURIER TRANSFORM
As we know,
  X(z) = Σ_{n=−∞}^{∞} x[n] z^{−n}
  x[n] ↔ X(z)
Substituting z = re^{jω} in eq. (2.3),
  X(re^{jω}) = Σ_{n=−∞}^{∞} x[n] r^{−n} e^{−jωn}
  X(re^{jω}) = F{x[n] r^{−n}}
  X(z)|_{z=e^{jω}} = X(e^{jω}) = F{x[n]}
where r is treated as unity. From the above equation it can be stated that if z is replaced by e^{jω} in the Z-transform of any system function, the result is the Fourier transform of that function. This mapping is a very useful concept, as we can determine the stability of a system from the Z-transform and, by setting z = e^{jω}, the spectral behaviour of the system can also be known simultaneously.
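The identity used above, that X(z) at z = re^{jω} equals the Fourier transform of the weighted sequence x[n] r^{−n}, can be verified numerically. A minimal sketch (the sequence and the values of r and ω are arbitrary):

```python
import cmath

def ztransform(x, z):
    """X(z) = sum_n x[n] z^(-n) for a finite causal sequence."""
    return sum(xn * z ** (-n) for n, xn in enumerate(x))

def dtft(x, omega):
    """Discrete-time Fourier transform at a single frequency omega."""
    return sum(xn * cmath.exp(-1j * omega * n) for n, xn in enumerate(x))

x = [1.0, -0.5, 0.25, 3.0]
r, omega = 1.5, 0.7
lhs = ztransform(x, r * cmath.exp(1j * omega))     # X(r e^{j omega})
rhs = dtft([xn * r ** (-n) for n, xn in enumerate(x)], omega)
```

The two evaluations agree exactly, since term by term x[n] (re^{jω})^{−n} = (x[n] r^{−n}) e^{−jωn}.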
2.3 Z-TRANSFORM OF UNIT IMPULSE AND STEP FUNCTIONS
Let us derive the Z-transform of a few familiar discrete-time sequences. Consider the unit impulse
  δ[n] = 1 for n = 0;  0 for n ≠ 0
There is only one term in the Z-transform of the unit impulse δ[n], which is 1:
  Z[δ[n]] = 1    (2.4)
Consider the unit step sequence u[n]
  u[n] = 1 for n ≥ 0;  0 for n < 0
From the definition of the Z-transform, we get
  Z[u[n]] = 1 + z^{−1} + z^{−2} + z^{−3} + ⋯ = Σ_{n=0}^{∞} z^{−n}    (2.5)
This is an infinite series that converges to the closed-form expression shown in eq. (2.6) only when |z^{−1}| < 1, i.e., |z| > 1. This represents the region outside the unit circle in the z-plane and is called the region of convergence (ROC). This means that the closed-form expression exists only for values of z that lie in this region:
  Σ_{n=0}^{∞} z^{−n} = 1/(1 − z^{−1}) = z/(z − 1)    (2.6)
It is obvious that the region of convergence for the Z-transform of δ[n] is the entire z-plane.

Table 2.1 List of Z-transform pairs
S. No. | x[n] for n ≥ 0 | X(z) | ROC
1. | δ[n] | 1 | Entire z-plane
2. | δ[n − m] | z^{−m} | All z, except 0 (if m > 0) or ∞ (if m < 0)
3. | u[n] | z/(z − 1) | |z| > 1
4. | a^n u[n] | z/(z − a) | |z| > |a|
5. | −a^n u[−n − 1] | z/(z − a) | |z| < |a|
6. | e^{−naT} x[n] | X(e^{aT} z) | ROC of X(z)
7. | Σ_{m=0}^{n} x[m] h[n − m] | X(z) H(z) | Intersection of ROC of X(z) and ROC of H(z)
Table 2.2 Properties of the Z-transform
S. No. | Property | Sequence | Z-transform | ROC
1. | Linearity | a x1[n] + b x2[n] | a X1(z) + b X2(z) | Contains R1 ∩ R2
2. | Time shifting | x[n − n0] | z^{−n0} X(z) | Rx, except for the possible addition or deletion of the origin or infinity
3. | Multiplication by exponential sequence | a^n x[n] | X(a^{−1} z) | Scaled version of R (i.e., |a|R = the set of points {|a| z} for z in R)
4. | Differentiation | n x[n] | −z dX(z)/dz | R
5. | Conjugation | x*[n] | X*(z*) | R
6. | Time reversal | x[−n] | X(z^{−1}) | 1/R
7. | Convolution | x1[n] ⊛ x2[n] | X1(z) X2(z) | Contains R1 ∩ R2
8. | First difference | x[n] − x[n − 1] | (1 − z^{−1}) X(z) | At least the intersection of R and |z| > 0
2.4 ROC AND ITS PROPERTIES
The Z-transform does not converge for all values of z. For any given sequence, the set of values of z for which the Z-transform converges is called the region of convergence (ROC), which is governed by the condition
  Σ_{n=−∞}^{∞} |x[n]| r^{−n} < ∞,  where r = |z|
For example, for a left-sided sequence that is zero for n > N2,
  X(z) = Σ_{k=−∞}^{N2} x[k] z^{−k}
If r0 is in the ROC, then
  Σ_{k=−∞}^{N2} |x[k]| r0^{−k} < ∞
For example, for x[n] = (0.5)^n u[n] + (−0.3)^n u[n]:
  X(z) = 1/(1 − 0.5 z^{−1}) + 1/(1 + 0.3 z^{−1}),  ROC: |z| > 0.5
  X(z) = [(1 + 0.3 z^{−1}) + (1 − 0.5 z^{−1})] / [(1 − 0.5 z^{−1})(1 + 0.3 z^{−1})],  ROC: |z| > 0.5
  X(z) = (2 − 0.2 z^{−1}) / [(1 − 0.5 z^{−1})(1 + 0.3 z^{−1})],  ROC: |z| > 0.5
  X(z) = 2z(z − 0.1) / [(z − 0.5)(z + 0.3)],  ROC: |z| > 0.5
Example 2.6 Obtain the Z-transform of the following function
  x[n] = −(0.5)^n u[−n − 1] + (−0.3)^n u[n]
Using linearity and the standard pairs
  a x[n] + b y[n] ↔ a X(z) + b Y(z),  ROC: Rx ∩ Ry
  a^n u[n] ↔ 1/(1 − a z^{−1}),  ROC: |z| > |a|
  −a^n u[−n − 1] ↔ 1/(1 − a z^{−1}),  ROC: |z| < |a|
Solution:
  X(z) = 1/(1 − 0.5 z^{−1}) + 1/(1 + 0.3 z^{−1}),  ROC: |z| < 0.5 ∩ |z| > 0.3
  X(z) = [(1 + 0.3 z^{−1}) + (1 − 0.5 z^{−1})] / [(1 − 0.5 z^{−1})(1 + 0.3 z^{−1})],  ROC: 0.3 < |z| < 0.5
  X(z) = (2 − 0.2 z^{−1}) / [(1 − 0.5 z^{−1})(1 + 0.3 z^{−1})],  ROC: 0.3 < |z| < 0.5
  X(z) = 2z(z − 0.1) / [(z − 0.5)(z + 0.3)],  ROC: 0.3 < |z| < 0.5
It has 2 poles and 2 zeros.
2.6 INVERSE Z-TRANSFORM
2.6.1 Power Series
Consider
  X(z) = log(1 + a z^{−1}),  ROC: |z| > |a|
Expanding in a power series,
  log(1 + a z^{−1}) = Σ_{n=1}^{∞} ((−1)^{n+1}/n) a^n z^{−n}
Comparing with X(z) = Σ_k x[k] z^{−k} gives
  x[n] = ((−1)^{n+1}/n) a^n u[n − 1]
2.6.2 Inversion Integral
A powerful analytical method for determining the inverse Z-transform is the inversion integral method. The function Y(z) can be considered in the complex z-plane; a given coefficient in its series may be determined by an integral relationship. Applying this concept to Y(z) yields for the inverse transform
  y(n) = (1/2πj) ∮_c Y(z) z^{n−1} dz    (2.11)
where c is a contour chosen to enclose all singularities of the integrand. By Cauchy's residue theorem the integral reduces to
  y(n) = Σ_m Res[Y(z) z^{n−1}]_{z = p_m}    (2.12)
where p_m represents a pole of Y(z) z^{n−1} and Res[·] represents the residue at z = p_m.
Example 2.7 Find the inverse Z-transform of
  Y(z) = 1 / [(1 − z^{−1})(1 − 0.5 z^{−1})] = z² / [(z − 1)(z − 0.5)]
Solution: Using eq. (2.12), this can be expressed as
  y(n) = Σ_m Res[ z^{n+1} / ((z − 1)(z − 0.5)) ]_{z = p_m}
For the poles at z = 1 and z = 0.5, the residues are calculated as follows
  Res[ z^{n+1} / ((z − 1)(z − 0.5)) ]_{z=1} = [ z^{n+1} / (z − 0.5) ]_{z=1} = 2
  Res[ z^{n+1} / ((z − 1)(z − 0.5)) ]_{z=0.5} = [ z^{n+1} / (z − 1) ]_{z=0.5} = −(0.5)^n
Hence
  y(n) = 2 − (0.5)^n
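The residue result can be cross-checked by a power-series (long-division) expansion. Since Y(z) = 1/(1 − 1.5 z^{−1} + 0.5 z^{−2}), its series coefficients satisfy the recursion y(n) = 1.5 y(n−1) − 0.5 y(n−2) + δ(n), and they should coincide with the closed form 2 − (0.5)^n:

```python
N = 20
y = []
for n in range(N):
    val = 1.0 if n == 0 else 0.0          # delta(n) driving term
    if n >= 1:
        val += 1.5 * y[n - 1]
    if n >= 2:
        val -= 0.5 * y[n - 2]
    y.append(val)

closed = [2 - 0.5 ** n for n in range(N)]
max_err = max(abs(a - b) for a, b in zip(y, closed))
```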
Example 2.8 Determine the inverse transform of
  Y(z) = (1 + 2 z^{−1} + z^{−3}) / [(1 − z^{−1})(1 − 0.5 z^{−1})]
Solution: Note that the maximum negative power of z in the numerator is larger than for the denominator. Multiplication of the numerator and the denominator by z³ results in
  Y(z) = (z³ + 2z² + 1) / [z (z − 1)(z − 0.5)]
According to eq. (2.12), we may determine the inverse transform from
  y(n) = Σ_m Res[ (z³ + 2z² + 1) z^{n−2} / ((z − 1)(z − 0.5)) ]_{z = p_m}
We must examine z^{n−2} to see if there are any values of n for which there is a pole at the origin. Indeed, for n = 0 there is a second-order pole at z = 0, and for n = 1 there is a simple pole at z = 0. However, for n ≥ 2 the only poles are z = 1 and z = 0.5. Let us first determine the inverse transform pertinent to this latter range. We have
  y(n) = Res[·]_{z=1} + Res[·]_{z=0.5} = 8 − 13(0.5)^n  for n ≥ 2    (2.13)
The values of y(0) and y(1) can be determined from the expressions
  y(0) = Σ_m Res[ (z³ + 2z² + 1) z^{−2} / ((z − 1)(z − 0.5)) ]_{z=p_m} = Res[·]_{z=0} + Res[·]_{z=1} + Res[·]_{z=0.5}    (2.14)
  y(1) = Σ_m Res[ (z³ + 2z² + 1) z^{−1} / ((z − 1)(z − 0.5)) ]_{z=p_m} = Res[·]_{z=0} + Res[·]_{z=1} + Res[·]_{z=0.5}    (2.15)
The reader is invited to demonstrate that the sum of the last two residues in each of equations (2.14) and (2.15) is the same as would be obtained by taking eq. (2.13) and evaluating it for n = 0 and n = 1 respectively. Thus, instead of performing a complete evaluation of all the residues for n = 0 and n = 1, it is necessary only to determine the additional residues at z = 0 in each case. For eq. (2.14), we have
  Res[ (z³ + 2z² + 1) / (z² (z − 1)(z − 0.5)) ]_{z=0} = 6
For eq. (2.15), we have
  Res[ (z³ + 2z² + 1) / (z (z − 1)(z − 0.5)) ]_{z=0} = 2
This gives
  y(0) = 6 + 8 − 13 = 1
  y(1) = 2 + 8 − 13(0.5) = 3.5
For n ≥ 2, the expression of eq. (2.13) is applicable. An alternative way to write y(n) for n ≥ 0 in one expression is
  y(n) = 6 δ(n) + 2 δ(n − 1) + 8 − 13(0.5)^n
A few values are tabulated in the following Table 2.3.
Table 2.3
n    | 0 | 1   | 2    | 3     | 4      | 5       | 6        | ∞
y(n) | 1 | 3.5 | 4.75 | 6.375 | 7.1875 | 7.59375 | 7.796875 | 8
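The tabulated values follow directly from the single-expression form of y(n); a short check:

```python
def y(n):
    """y(n) = 6 delta(n) + 2 delta(n-1) + 8 - 13 (0.5)^n for n >= 0."""
    val = 8 - 13 * 0.5 ** n
    if n == 0:
        val += 6
    if n == 1:
        val += 2
    return val

values = [y(n) for n in range(7)]   # n = 0 .. 6
```

All entries are dyadic rationals, so the floating-point values match Table 2.3 exactly.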
Example 2.9 Find the inverse Z-transform of
  X(z) = 1 / (1 − (1/2) z^{−1}),  |z| > 1/2
and of
  X(z) = 1 / (1 − (1/2) z^{−1}),  |z| < 1/2
Solution: Using the standard pairs
  a^n u[n] ↔ 1/(1 − a z^{−1}),  |z| > |a|
  −a^n u[−n − 1] ↔ 1/(1 − a z^{−1}),  |z| < |a|
by inspection,
  |z| > 1/2:  x[n] = (1/2)^n u[n]
  |z| < 1/2:  x[n] = −(1/2)^n u[−n − 1]
2.6.3 Study of Some Examples Using Partial Fraction Expansion
A rational X(z) can be written as
  X(z) = [Σ_{k=0}^{q} b_k z^{−k}] / [Σ_{k=0}^{p} a_k z^{−k}]
       = (b0/a0) [Π_{k=1}^{q} (1 − c_k z^{−1})] / [Π_{k=1}^{p} (1 − d_k z^{−1})]
       = (b0/a0) z^{p−q} [Π_{k=1}^{q} (z − c_k)] / [Π_{k=1}^{p} (z − d_k)]
Partial Fraction Expansion: q < p, Simple Roots
  X(z) = [Π_{k=1}^{q} (1 − c_k z^{−1})] / [Π_{k=1}^{p} (1 − d_k z^{−1})] = Σ_{k=1}^{p} A_k / (1 − d_k z^{−1})
with
  A_k = (1 − d_k z^{−1}) X(z) |_{z = d_k}
Each term is then inverted using
  X(z) = 1/(1 − a z^{−1}),  |z| > |a| ↔ x[n] = a^n u[n]
  X(z) = 1/(1 − a z^{−1}),  |z| < |a| ↔ x[n] = −a^n u[−n − 1]
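The coefficient formula A_k = (1 − d_k z^{−1}) X(z)|_{z=d_k} can be evaluated numerically by cancelling the pole just off the pole location. A minimal sketch, using X(z) = (1 − z^{−1})/((1 − 2z^{−1})(1 + 3z^{−1})) as the test function (poles d1 = 2 and d2 = −3, with expected coefficients 1/5 and 4/5):

```python
def X(z):
    return (1 - 1 / z) / ((1 - 2 / z) * (1 + 3 / z))

def pfe_coeff(X, d, eps=1e-7):
    """Evaluate (1 - d z^{-1}) X(z) slightly off z = d to avoid 0/0."""
    z = d * (1 + eps)
    return (1 - d / z) * X(z)

A1 = pfe_coeff(X, 2.0)    # expected 1/5
A2 = pfe_coeff(X, -3.0)   # expected 4/5
```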
Partial Fraction Expansion: q < p, Simple Roots
  X(z) = (1 − z^{−1}) / (1 + z^{−1} − 6 z^{−2})
The poles are the roots of z² + z − 6 = 0:
  z = (−b ± √(b² − 4ac))/(2a) = (−1 ± √(1 + 24))/2 = (−1 ± 5)/2 = 2 or −3
Hence
  X(z) = z(z − 1) / ((z − 2)(z + 3)) = (1 − z^{−1}) / [(1 − 2 z^{−1})(1 + 3 z^{−1})]
  X(z) = A1/(1 − 2 z^{−1}) + A2/(1 + 3 z^{−1})
  A1 = X(z)(1 − 2 z^{−1})|_{z=2} = [(1 − z^{−1})/(1 + 3 z^{−1})]_{z=2} = [(z − 1)/(z + 3)]_{z=2} = 1/5
  A2 = X(z)(1 + 3 z^{−1})|_{z=−3} = [(1 − z^{−1})/(1 − 2 z^{−1})]_{z=−3} = [(z − 1)/(z − 2)]_{z=−3} = −4/−5 = 4/5
  X(z) = (1/5)/(1 − 2 z^{−1}) + (4/5)/(1 + 3 z^{−1})
Using
  1/(1 − a z^{−1}), |z| > |a| ↔ a^n u[n]  and  1/(1 − a z^{−1}), |z| < |a| ↔ −a^n u[−n − 1]:
  |z| > 3 ⟹ x[n] = (1/5) 2^n u[n] + (4/5)(−3)^n u[n] = ((1/5) 2^n + (4/5)(−3)^n) u[n]
  |z| < 2 ⟹ x[n] = −(1/5) 2^n u[−n − 1] − (4/5)(−3)^n u[−n − 1] = −((1/5) 2^n + (4/5)(−3)^n) u[−n − 1]
  2 < |z| < 3 ⟹ x[n] = (1/5) 2^n u[n] − (4/5)(−3)^n u[−n − 1]

Partial Fraction Expansion: q < p, Simple Roots
  X(z) = 1 / (1 − 10 z^{−1} + 35 z^{−2} − 50 z^{−3} + 24 z^{−4}) = z⁴ / (z⁴ − 10 z³ + 35 z² − 50 z + 24)
ROC: 2 < |z| < 3; the poles are real and positive. Since 2 and 3 are poles, (z − 2)(z − 3) = z² − 5z + 6 divides the denominator; long division gives the cofactor z² − 5z + 4 = (z − 1)(z − 4), so
  z⁴ − 10 z³ + 35 z² − 50 z + 24 = (z² − 5z + 6)(z² − 5z + 4) = (z − 1)(z − 2)(z − 3)(z − 4)
  X(z) = z⁴ / [(z − 1)(z − 2)(z − 3)(z − 4)] = z⁴ [ A1/(z − 1) + A2/(z − 2) + A3/(z − 3) + A4/(z − 4) ]
Evaluating the coefficients in the z^{−1} form,
  A1 = 1 / [(1 − 2 z^{−1})(1 − 3 z^{−1})(1 − 4 z^{−1})]_{z=1} = 1/((−1)(−2)(−3)) = −1/6
  A2 = 1 / [(1 − z^{−1})(1 − 3 z^{−1})(1 − 4 z^{−1})]_{z=2} = 1/((1/2)(−1/2)(−1)) = 4
  A3 = 1 / [(1 − z^{−1})(1 − 2 z^{−1})(1 − 4 z^{−1})]_{z=3} = 1/((2/3)(1/3)(−1/3)) = −27/2
  A4 = 1 / [(1 − z^{−1})(1 − 2 z^{−1})(1 − 3 z^{−1})]_{z=4} = 1/((3/4)(1/2)(1/4)) = 32/3
Partial Fraction Expansion: q < p, Simple Roots
  X(z) = (−1/6)/(1 − z^{−1}) + 4/(1 − 2 z^{−1}) + (−27/2)/(1 − 3 z^{−1}) + (32/3)/(1 − 4 z^{−1}),  ROC: 2 < |z| < 3
With the poles at 1 and 2 inverted causally and the poles at 3 and 4 inverted anticausally,
  x[n] = (−1/6) u[n] + 4 · 2^n u[n] + (27/2) 3^n u[−n − 1] − (32/3) 4^n u[−n − 1]
  x[n] = (2^{n+2} − 1/6) u[n] + (3^{n+3}/2 − 2^{2n+5}/3) u[−n − 1]
Partial Fraction Expansion: q ≥ p
  X(z) = (1 + 2 z^{−1} + z^{−2}) / (1 − (3/2) z^{−1} + (1/2) z^{−2}),  |z| > 1
Long division of the numerator by the denominator gives
  X(z) = 2 + (−1 + 5 z^{−1}) / (1 − (3/2) z^{−1} + (1/2) z^{−2})
The poles are the roots of z² − (3/2)z + 1/2 = 0:
  z = [3/2 ± √(9/4 − 2)]/2 = (3/2 ± 1/2)/2 = 1 or 1/2
  X(z) = 2 + A1/(1 − (1/2) z^{−1}) + A2/(1 − z^{−1})
  A1 = [(−1 + 5 z^{−1})/(1 − z^{−1})]_{z=1/2} = (−1 + 10)/(1 − 2) = −9
  A2 = [(−1 + 5 z^{−1})/(1 − (1/2) z^{−1})]_{z=1} = 4/(1/2) = 8
  x[n] = 2 δ[n] − 9 (1/2)^n u[n] + 8 u[n]
2.7 Z-DOMAIN STABILITY
2.7.1 Stability
• A system is said to be stable if it produces bounded output for every bounded input (BIBO).
• A system is said to be stable if its impulse response vanishes after a sufficiently long time: h[n] → 0 as n → ∞.
Fig. 2.4
Unit circle in z-plane.
A system is realizable ⇔ the system is stable and the system is causal:
  stable ⇔ Σ_{k=−∞}^{∞} |h[k]| < ∞ ⇔ H(e^{jω}) converges ⇔ |z| = 1 is in the ROC
  causal ⇔ h[n] = 0 for n < 0 ⇔ h[n] is right-sided ⇔ ROC is the exterior of a circle: |z| > |a|
  realizable ⇔ ROC is |z| > |a| and includes the unit circle ⇔ all poles lie inside the unit circle
2.7.2 Stability of a DTLTI System
As in the case of a continuous-time system, a discrete-time system is said to be stable if every finite input produces a finite output. The stability concept may be readily expressed by conditions on the impulse response h(n):
(a) Stable system: A DTLTI (discrete-time linear time-invariant) system is stable if h(n) vanishes after a sufficiently long time.
(b) Unstable system: A DTLTI system is unstable if h(n) grows without bound after a sufficiently long time.
(c) Marginally stable system: A DTLTI system is marginally stable if h(n) approaches a constant non-zero value or a bounded oscillation after a sufficiently long time.
A summary of the above points can be expressed as follows in terms of the pole-zero pattern in the z-plane:
(a) Poles of a discrete transfer function inside the unit circle represent stable terms, regardless of their order.
(b) Poles of a discrete transfer function outside the unit circle represent unstable terms, regardless of their order.
(c) First-order poles on the unit circle represent marginally stable terms, but multiple-order poles on the unit circle represent unstable terms.
(d) In general, zeros are permitted to lie anywhere in the z-plane.
Example 2.10 A system is described by the difference equation
  y(n) + 0.1 y(n − 1) − 0.2 y(n − 2) = x(n) + x(n − 1)
(a) Determine the transfer function H(z).
(b) Discuss its stability.
Solution: Taking the Z-transform on both sides,
  Y(z) + 0.1 z^{−1} Y(z) − 0.2 z^{−2} Y(z) = X(z) + z^{−1} X(z)
Transfer function of the system:
  H(z) = Y(z)/X(z) = (1 + z^{−1}) / (1 + 0.1 z^{−1} − 0.2 z^{−2})
The poles and zeros are best obtained by momentarily arranging the numerator and denominator polynomials in positive powers of z:
  H(z) = (z² + z) / (z² + 0.1 z − 0.2) = z(z + 1) / [(z − 0.4)(z + 0.5)]
The poles are located at +0.4 and −0.5, which are inside the unit circle. Thus the system is stable.
Example 2.11 A system is described by the difference equation
  y(n) + 0.1 y(n − 1) − 0.2 y(n − 2) = x(n) + x(n − 1)
(a) Determine the transfer function H(z) and discuss its stability.
(b) Determine the impulse response h(n).
(c) Determine the response due to a unit step function excitation if the system is initially relaxed.
Solution: (a) Taking the Z-transforms of both sides of the given system difference equation and solving for H(z), we obtain
  H(z) = Y(z)/X(z) = (1 + z^{−1}) / (1 + 0.1 z^{−1} − 0.2 z^{−2})
The poles and zeros are best obtained by momentarily arranging the numerator and denominator polynomials in positive powers of z:
  H(z) = (z² + z) / (z² + 0.1 z − 0.2) = z(z + 1) / [(z − 0.4)(z + 0.5)]
The poles are located at +0.4 and −0.5, which are inside the unit circle. Thus, the system is stable.
(b) The impulse response may be obtained by expanding H(z) in a partial fraction expansion according to the procedure of the preceding section. This yields
  H(z) = 1.555556 z/(z − 0.4) − 0.555556 z/(z + 0.5)
Inversion of the above equation yields
  h(n) = 1.555556 (0.4)^n − 0.555556 (−0.5)^n
It can readily be seen that the impulse response h(n) vanishes after a sufficiently long time, as expected, since this is a stable transfer function.
(c) To obtain the response due to the unit step x(n) = u(n), we multiply X(z) by H(z) and obtain
  Y(z) = z²(z + 1) / [(z − 1)(z − 0.4)(z + 0.5)]
Partial fraction expansion yields
  Y(z) = 2.222222 z/(z − 1) − 1.037037 z/(z − 0.4) − 0.185185 z/(z + 0.5)
The inverse transform is
  y(n) = 2.222222 − 1.037037 (0.4)^n − 0.185185 (−0.5)^n
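The impulse response found in part (b) can be cross-checked by simulating the difference equation directly with x(n) = δ(n):

```python
# Simulate y(n) + 0.1 y(n-1) - 0.2 y(n-2) = x(n) + x(n-1) with a unit
# impulse input and compare against the closed form from part (b).
N = 25
x = [1.0] + [0.0] * (N - 1)          # unit impulse
h = []
for n in range(N):
    val = x[n] + (x[n - 1] if n >= 1 else 0.0)
    if n >= 1:
        val -= 0.1 * h[n - 1]
    if n >= 2:
        val += 0.2 * h[n - 2]
    h.append(val)

closed = [1.555556 * 0.4 ** n - 0.555556 * (-0.5) ** n for n in range(N)]
max_err = max(abs(a - b) for a, b in zip(h, closed))
```

The small residual error comes only from the six-decimal rounding of the partial-fraction coefficients (exactly 14/9 and 5/9).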
2.8 SOME TYPICAL EXAMPLES ON Z-TRANSFORM
Example 2.12 Find x[n] for the following system transfer function.
  X(z) = (1 + (1/2) z^{−1}) / (1 − (1/2) z^{−1})
Solution:
  X(z) = (1 + (1/2) z^{−1}) / (1 − (1/2) z^{−1}) = 1/(1 − (1/2) z^{−1}) + (1/2) z^{−1}/(1 − (1/2) z^{−1})
  x[n] = Z^{−1}[ 1/(1 − (1/2) z^{−1}) + (1/2) z^{−1}/(1 − (1/2) z^{−1}) ]
       = (1/2)^n u[n] + (1/2)(1/2)^{n−1} u[n − 1]
       = (1/2)^n (u[n] + u[n − 1])
       = (1/2)^n (u[n] + 2u[n − 1] − u[n − 1])
       = (1/2)^n (u[n] − u[n − 1] + 2u[n − 1])
  x[n] = (1/2)^n (δ[n] + 2u[n − 1])
Example 2.13 Find the Z-transform of
  x[n] = 2^n u[n − 2]
Solution:
  Z[u[n]] = 1/(1 − z^{−1})
  Z[u[n − 2]] = z^{−2}/(1 − z^{−1})
Using the exponential-weighting property (z^{−1} → 2 z^{−1}),
  Z[2^n u[n − 2]] = (2 z^{−1})² / (1 − 2 z^{−1}) = 4 z^{−2} / (1 − 2 z^{−1})
Example 2.14 Find the inverse Z-transform of
  X(z) = (1 + 2 z^{−1} + z^{−2}) / [(1 − (1/2) z^{−1})(1 − z^{−1})]
Solution: Since numerator and denominator have the same order,
  X(z) = A0 + A1/(1 − (1/2) z^{−1}) + A2/(1 − z^{−1})
Long division gives A0 = 2:
  X(z) = 2 + (−1 + 5 z^{−1}) / [(1 − (1/2) z^{−1})(1 − z^{−1})]
  A1 = [(1 + 2 z^{−1} + z^{−2})/(1 − z^{−1})]_{z^{−1}=2} = (1 + 4 + 4)/(1 − 2) = −9
  A2 = [(1 + 2 z^{−1} + z^{−2})/(1 − (1/2) z^{−1})]_{z^{−1}=1} = (1 + 2 + 1)/(1/2) = 8
  X(z) = 2 − 9/(1 − (1/2) z^{−1}) + 8/(1 − z^{−1})
Using 2 ↔ 2δ[n], 1/(1 − (1/2) z^{−1}) ↔ (1/2)^n u[n] and 1/(1 − z^{−1}) ↔ u[n]:
  x[n] = 2 δ[n] − 9 (1/2)^n u[n] + 8 u[n]
Example 2.15 Find the Z-transform of x[n] = n a^n u[n], given the pair
  a^n u[n] ↔ 1/(1 − a z^{−1}),  |z| > |a|
Solution: Using the differentiation property of the Z-transform (refer Table 2.2, Property 4),
  n a^n u[n] ↔ −z d/dz [ 1/(1 − a z^{−1}) ] = a z^{−1} / (1 − a z^{−1})²,  |z| > |a|
Example 2.16 Find x[n] of the following function using the convolution property of the Z-transform.
  X(z) = 1 / [(1 − (1/2) z^{−1})(1 + (1/4) z^{−1})]
Solution:
  X(z) = X1(z) X2(z)
  X1(z) = 1/(1 − (1/2) z^{−1}) ⟹ x1[n] = (1/2)^n u[n]
  X2(z) = 1/(1 + (1/4) z^{−1}) ⟹ x2[n] = (−1/4)^n u[n]
Using the convolution property of the Z-transform (refer Table 2.2, Property 7),
  x[n] = x1[n] * x2[n] = Σ_{k=0}^{n} x1(n − k) x2(k)
       = Σ_{k=0}^{n} (1/2)^{n−k} (−1/4)^k
       = (1/2)^n Σ_{k=0}^{n} (1/2)^{−k} (−1/4)^k
       = (1/2)^n Σ_{k=0}^{n} (−1/2)^k
       = (1/2)^n [1 − (−1/2)^{n+1}] / [1 − (−1/2)]    (using Σ_{k=0}^{n} a^k = (1 − a^{n+1})/(1 − a))
       = (2/3)(1/2)^n − (2/3)(1/2)^n (−1/2)^{n+1}
  x[n] = [ (2/3)(1/2)^n + (1/3)(−1/4)^n ] u[n]
Example 2.17 Find the inverse Z-transform of the following function under different ROC conditions.
  X(z) = 1/(1 − (1/4) z^{−1}) + 2/(1 − (1/3) z^{−1})
Solution:
For 1/4 < |z| < 1/3:
  x[n] = (1/4)^n u[n] − 2 (1/3)^n u[−n − 1]
For |z| < 1/4:
  x[n] = −(1/4)^n u[−n − 1] − 2 (1/3)^n u[−n − 1]
For |z| > 1/3:
  x[n] = (1/4)^n u[n] + 2 (1/3)^n u[n]
Example 2.18 Find the Z-transform of x(n) = cos(nω) u(n).
Solution: Given that x(n) = cos(nω) u(n). From the definition,
  Z[x(n)] = X(z) = Σ_{n=−∞}^{∞} x(n) z^{−n}
  Z[cos(nω) u(n)] = Σ_{n=−∞}^{∞} cos(nω) u(n) z^{−n} = Σ_{n=0}^{∞} cos(nω) z^{−n}
  = Σ_{n=0}^{∞} [(e^{jnω} + e^{−jnω})/2] z^{−n}
  = Σ_{n=0}^{∞} [(e^{jω} z^{−1})^n + (e^{−jω} z^{−1})^n] / 2
  = (1/2) [ Σ_{n=0}^{∞} (e^{jω} z^{−1})^n + Σ_{n=0}^{∞} (e^{−jω} z^{−1})^n ]
  = (1/2) [ 1/(1 − e^{jω} z^{−1}) + 1/(1 − e^{−jω} z^{−1}) ]
  = (1/2) [ z/(z − e^{jω}) + z/(z − e^{−jω}) ]
  = (1/2) [ z(z − e^{−jω}) + z(z − e^{jω}) ] / [ (z − e^{jω})(z − e^{−jω}) ]
  = (1/2) (z² − z e^{−jω} + z² − z e^{jω}) / (z² − z(e^{jω} + e^{−jω}) + 1)
  = (1/2) (2z² − z(e^{jω} + e^{−jω})) / (z² − z(e^{jω} + e^{−jω}) + 1)
  = (1/2) (2z² − 2z cos ω) / (z² − 2z cos ω + 1)
  X(z) = (z² − z cos ω) / (z² − 2z cos ω + 1)
Therefore,
  Z[cos(nω) u(n)] = (z² − z cos ω) / (z² − 2z cos ω + 1)
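The result can be checked numerically: for a real z with |z| > 1 the truncated series Σ cos(nω) z^{−n} should approach the closed form (the values of ω and z below are arbitrary test choices):

```python
import math

w, z = 0.9, 1.7
series = sum(math.cos(n * w) * z ** (-n) for n in range(400))
closed = (z * z - z * math.cos(w)) / (z * z - 2 * z * math.cos(w) + 1)
```

With z = 1.7 the truncation error after 400 terms is far below floating-point precision.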
Example 2.19 Find the Z-transform of a^n cos(nω) u(n).
Solution: Given that x(n) = a^n cos(nω) u(n). We know that
  cos(nω) = (e^{jωn} + e^{−jωn})/2
so
  a^n cos(nω) = a^n (e^{jωn} + e^{−jωn})/2 = [(a e^{jω})^n + (a e^{−jω})^n]/2
From the definition of the Z-transform,
  X(z) = Σ_{n=0}^{∞} a^n cos(nω) z^{−n}
  = Σ_{n=0}^{∞} [(a e^{jω})^n + (a e^{−jω})^n] z^{−n} / 2
  = (1/2) [ Σ_{n=0}^{∞} (a e^{jω} z^{−1})^n + Σ_{n=0}^{∞} (a e^{−jω} z^{−1})^n ]
  = (1/2) [ 1/(1 − a e^{jω} z^{−1}) + 1/(1 − a e^{−jω} z^{−1}) ]
  = (1/2) [ z/(z − a e^{jω}) + z/(z − a e^{−jω}) ]
  = (1/2) [ z(z − a e^{−jω}) + z(z − a e^{jω}) ] / [ (z − a e^{jω})(z − a e^{−jω}) ]
  = (1/2) (2z² − az(e^{jω} + e^{−jω})) / (z² − az(e^{jω} + e^{−jω}) + a²)
  = (2z² − 2az cos ω) / [2 (z² − 2az cos ω + a²)]
  = (z² − az cos ω) / (z² − 2az cos ω + a²)
Example 2.20 Find the Z-transform and ROC of the signal x(n) = [4(5)^n − 3(4)^n] u(n).
Solution:
Given that,
  x(n) = [4(5)^n − 3(4)^n] u(n)
From the definition,
  X(z) = Σ_{n=−∞}^{∞} x(n) z^{−n} = Σ_{n=0}^{∞} [4(5)^n − 3(4)^n] z^{−n}
  = 4 Σ_{n=0}^{∞} 5^n z^{−n} − 3 Σ_{n=0}^{∞} 4^n z^{−n}
  = 4 z/(z − 5) − 3 z/(z − 4)
  = 4/(1 − 5 z^{−1}) − 3/(1 − 4 z^{−1})
Both terms are causal, so the ROC is the intersection of |z| > 5 and |z| > 4, i.e., ROC: |z| > 5.
Example 2.21 Find the inverse Z-transform of
  X(Z) = Z / [Z(Z − 1)(Z − 2)²],  |z| > 2
Solution: Given that,
  X(Z) = Z / [Z(Z − 1)(Z − 2)²]
  X(Z)/Z = 1 / [Z(Z − 1)(Z − 2)²]
Finding the partial fractions,
  1 / [Z(Z − 1)(Z − 2)²] = A/Z + B/(Z − 1) + C/(Z − 2) + D/(Z − 2)²
  1 = A(Z − 1)(Z − 2)² + B Z(Z − 2)² + C Z(Z − 1)(Z − 2) + D Z(Z − 1)
Expanding,
  1 = A(Z³ − 5Z² + 8Z − 4) + B(Z³ − 4Z² + 4Z) + C(Z³ − 3Z² + 2Z) + D(Z² − Z)
Comparing coefficients:
  Z³-terms ⟹ A + B + C = 0
  Z²-terms ⟹ −5A − 4B − 3C + D = 0
  Z-terms ⟹ 8A + 4B + 2C − D = 0
  Constant ⟹ −4A = 1, so A = −1/4
By solving the above equations, we get
  B = 1, C = −3/4, D = 1/2
Substituting A, B, C, D from the above,
  1 / [Z(Z − 1)(Z − 2)²] = −1/(4Z) + 1/(Z − 1) − 3/[4(Z − 2)] + 1/[2(Z − 2)²]
  X(Z) = −1/4 + Z/(Z − 1) − 3Z/[4(Z − 2)] + Z/[2(Z − 2)²]
Applying the inverse Z-transform (for |z| > 2), we get
  x(n) = −(1/4) δ(n) + u(n) − (3/4)(2)^n u(n) + (1/2) n (2)^{n−1} u(n)
Example 2.22 Find the inverse Z-transform of
  x(z) = (2 + z³ + 3 z^{−4}) / (z² + 4z + 3),  |z| > 0
Solution: Given that,
  x(z) = (2 + z³ + 3 z^{−4}) / (z² + 4z + 3)
       = z^{−4} (2z⁴ + z⁷ + 3) / (z² + 4z + 3)
       = (z⁷ + 2z⁴ + 3) / [z⁴ (z² + 4z + 3)]
       = (z⁷ + 2z⁴ + 3) / (z⁶ + 4z⁵ + 3z⁴)
Carrying out the long division of z⁷ + 2z⁴ + 3 by z⁶ + 4z⁵ + 3z⁴ gives
  x(z) = z − 4 + 13 z^{−1} − 38 z^{−2} + 113 z^{−3} − 338 z^{−4} + ⋯    (1)
From the definition of the Z-transform,
  x(z) = Σ_{n=−∞}^{∞} x(n) z^{−n} = ⋯ + x(−1) z + x(0) + x(1) z^{−1} + x(2) z^{−2} + x(3) z^{−3} + ⋯    (2)
By comparing equations (1) and (2), we get
  x(−1) = 1, x(0) = −4, x(1) = 13, x(2) = −38
  x(3) = 113, x(4) = −338
Example 2.23 Find the inverse Z-transform of x(z) using the long division method
  x(z) = (2 + 3 z^{−1}) / [(1 + z^{−1})(1 + 0.25 z^{−1} − z^{−2}/8)]
Solution: Expanding the denominator,
  x(z) = (2 + 3 z^{−1}) / (1 + 1.25 z^{−1} + 0.125 z^{−2} − 0.125 z^{−3})
Dividing the numerator of x(z) by its denominator, term by term:
  quotient term 2:            remainder 0.5 z^{−1} − 0.25 z^{−2} + 0.25 z^{−3}
  quotient term 0.5 z^{−1}:   remainder −0.875 z^{−2} + 0.1875 z^{−3} + 0.0625 z^{−4}
  quotient term −0.875 z^{−2}: remainder 1.28125 z^{−3} + 0.171875 z^{−4} − 0.109375 z^{−5}
  quotient term 1.28125 z^{−3}: remainder −1.4296875 z^{−4} − 0.26953125 z^{−5} + ⋯
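The long division can be mechanized. A small sketch that divides 2 + 3z^{−1} by 1 + 1.25z^{−1} + 0.125z^{−2} − 0.125z^{−3} and returns the first series coefficients x(0), x(1), …:

```python
def long_division(num, den, n_terms):
    """Power-series coefficients of num(z^-1)/den(z^-1); needs den[0] != 0."""
    rem = list(num) + [0.0] * (n_terms + len(den))
    out = []
    for i in range(n_terms):
        c = rem[i] / den[0]
        out.append(c)
        for j, d in enumerate(den):      # subtract c * den, shifted by i
            rem[i + j] -= c * d
    return out

coeffs = long_division([2.0, 3.0], [1.0, 1.25, 0.125, -0.125], 4)
```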
Therefore,
  x(z) = 2 + 0.5 z^{−1} − 0.875 z^{−2} + 1.28125 z^{−3} + ⋯
  x(n) = {2, 0.5, −0.875, 1.28125, …}  (starting at n = 0)
Example 2.24 Find the inverse Z-transform of the following X(z).
  (i) X(z) = log(1/(1 − a z^{−1})),  |z| > |a|
  (ii) X(z) = log(1/(1 − a^{−1} z)),  |z| < |a|
Solution:
(i) Given that X(z) = log(1/(1 − a z^{−1})), |z| > |a|, so
  X(z) = −log(1 − a z^{−1}),  |z| > |a|
The power series expansion for log(1 − p) is given as
  log(1 − p) = −Σ_{n=1}^{∞} p^n / n,  |p| < 1
The region of convergence is |z| > |a|, i.e., |a z^{−1}| < 1, so the power series expansion of X(z) is
  X(z) = Σ_{n=1}^{∞} (a z^{−1})^n / n = Σ_{n=1}^{∞} (a^n / n) z^{−n}
From the above, x(n) can be identified as
  x(n) = a^n / n for n ≥ 1;  0 for n ≤ 0
or
  x(n) = (a^n / n) u(n − 1)
(ii) Given that X(z) = log(1/(1 − a^{−1} z)), |z| < |a|, so
  X(z) = −log(1 − a^{−1} z),  |z| < |a|
The region of convergence is |z| < |a|, i.e., |a^{−1} z| < 1, so the power series expansion of X(z) is
  X(z) = Σ_{n=1}^{∞} (a^{−1} z)^n / n = Σ_{n=−∞}^{−1} (−a^n / n) z^{−n}
From the above, x(n) can be identified as
  x(n) = 0 for n ≥ 0;  −a^n / n for n ≤ −1
or
  x(n) = −(a^n / n) u(−n − 1)
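Part (i) can be verified numerically: the coefficients a^n/n for n ≥ 1 should sum back to log(1/(1 − a z^{−1})) whenever |z| > |a| (a and z below are arbitrary test values):

```python
import math

a, z = 0.6, 2.0
series = sum((a ** n / n) * z ** (-n) for n in range(1, 300))
closed = math.log(1 / (1 - a / z))
```

Here a/z = 0.3, so the series converges rapidly and the truncated sum matches the closed form to machine precision.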
Problems 1. Find the Z-transform of the following sequence
x ( n ) = a sin ( nw 0 t ) 2. Obtain the Z-transform and ROC of the following sequence x ( n ) = na n n ≥ 0
= 0
n 1 7. Find the inverse Z-transform of
(ii) z
1 a
10. Find the inverse Z-transform of
    X(z) = 1/(1 + 0.5 z^-1),  |z| > 0.5
11. Find the inverse Z-transform of the function
    X(z) = (z − 4)/[(z − 1)(z − 2)^3]  for |z| > 2
12. Determine the inverse Z-transform of the following
    X(z) = z(z + 0.5)/[(z + 0.2)(z + 0.4)]
13. Obtain the transfer function for the system described by the difference equation
    y(n) + (1/2) y(n − 1) = x(n)
14. Obtain the transfer function, and hence the impulse response, of the system described by the difference equation
    y(n − 1) − 4 y(n − 2) + 3 y(n − 3) = x(n − 1) + x(n − 2)
15. Find x(0) and x(∞) for the sequence whose Z-transform is
    X(z) = z/(z − 3)
Multiple-Choice Questions
1. The Z-transform of the unit ramp is given by
    (a) z/(z − 1)^2   (b) z/(z − 1)
    (c) (z − 1)/z     (d) (z − 1)^2/z
2. The value of x(0), given X(z), is
    (a) X(z)   (b) X(0)
    (c) X(1)   (d) lim (z→∞) X(z)
3. For causal signals and systems, the Z-transform is defined as
    (a) X(z) = Σ (n = −∞ to ∞) x(n) z^-n    (b) X(z) = Σ (n = 0 to ∞) x(n) z^-n
    (c) X(z) = Σ (n = −∞ to 0) x(n) z^-n    (d) X(z) = Σ (n = −∞ to 1) x(n) z^-n
4. Region of convergence is defined as
    (a) Set of z-values for which the series converges.
    (b) Set of n-values for which the series converges.
    (c) Set of n-values for which the series diverges.
    (d) Set of z-values for which the series diverges.
5. If the lower limit of the ROC is greater than the upper limit of the ROC, the series
    X(z) = Σ (n = −∞ to ∞) x(n) z^-n
    (a) Converges   (b) Does not converge
    (c) Is zero     (d) None of the above
6. The Z-transform of x(n + n0) is
    (a) z^-n0 X(z)   (b) z^n0 X(z)
    (c) X(z + z0)    (d) X(z0)
7. The inverse Z-transform of the given function
    X(z) = (z^2 − z cos ω0)/(z^2 − 2z cos ω0 + 1)  for |z| > 1  is
    (a) sin ω0 n   (b) tan ω0 n
    (c) cot ω0 n   (d) cos ω0 n
8. The Z-transform of −u(−n − 1) is
    (a) z/(z − 1) with |z| > 1   (b) z/(z − 1) with |z| < 1
    (c) z/(z − 1) with |z| = 1   (d) z/(z − 1) with |z| = 0
9. Obtain the inverse Z-transform of the sequence
    X(z) = z/(z − a) + z/(z − b),  |a| < |z| < |b|
19. The system is described by the following difference equation:
    y(n) − a y(n − 1) = x(n)
    If the excitation is the unit impulse, the system transfer function is
    (a) z/(z − a)   (b) z/(z + a)
    (c) z/a         (d) a/(z − a)
20. The inverse Z-transform of X(z) = z^2/(z − a)^2, if X(z) converges absolutely for some |z| < |a|, is
    (a) (n + 1) a^n for n ≥ 0    (b) −(n + 1) a^n for n ≤ −1
    (c) −(n + 1) a^n for n ≥ 0   (d) (n + 1) a^n for n ≤ −1
21. The system is described by the following difference equation:
    y(n) + a y(n − 1) = x(n)
    If the excitation is the unit impulse, the system transfer function is
    (a) z/(z − a)   (b) z/(z + a)
    (c) z/a         (d) a/(z − a)
22. The Z-transform of x(n) = a^n u(n) is
    (a) z/(z − a) with ROC |z| > |a|   (b) z/(z + a) with ROC |z| > |a|
    (c) z/(z − a) with ROC |z| < |a|   (d) a/(z − a) with ROC |z| > |a|
23. The Z-transform of n x(n) is
    (a) z dX(z)/dz     (b) −z dX(z)/dz
    (c) z^2 dX(z)/dz   (d) −z^2 dX(z)/dz
24. The Z-transform of a^n x(n) is
    (a) X(z − a)   (b) X(za)
    (c) X(z/a)     (d) X(z + a)
25. The Z-transform of n a^n u(n) is
    (a) az/(z − a)    (b) az/(z − a)^2
    (c) z/(z − a)^2   (d) z/(z − a)
Key to the Multiple-Choice Questions
1. (a)    2. (d)    3. (b)    4. (a)
5. (b)    6. (b)    7. (d)    8. (b)
9. (d)   10. (c)   11. (a)   12. (a)
13. (a)   14. (c)   15. (b)   16. (c)
17. (b)   18. (a)   19. (a)   20. (b)
21. (b)   22. (a)   23. (b)   24. (c)
25. (b)
3
Analog Filter Approximations
Analog filters serve as prototype filters for the design of digital filters, and so play a key role in what follows.
Fig. 3.1
Block diagram of a filter.
Consider the block diagram of a filter as illustrated in Fig. 3.1. Assume that the input can be expressed as x (t) + u (t), where x (t) represents a desired signal at the input and u (t) represents an undesired signal (or composite of signals). The purpose of the filter is to eliminate u (t) while preserving x (t) as close to its original form as possible. The process of filtering requires a certain amount of delay and possible changes in the signal level, so the best we can hope for is that the output signal will be a delayed version of the original desired signal with a possible difference in amplitude, but with the correct shape preserved. Thus, the output of a distortionless filter can be expressed as
    y(t) = k x(t − τ)
(3.1)
where k represents a level change and τ is the delay. The concept is illustrated in Fig. 3.2.
Fig. 3.2
Input and output of distortionless filter.
The frequency-domain interpretation of the ideal filter can be seen by taking the Fourier transform of both sides of eq. (3.1). This operation yields
    Y(f) = k e^(−jωτ) X(f)
(3.2)
Solving for the steady-state transfer function G(jω), we obtain
    G(jω) = k e^(−jωτ) = k ∠(−ωτ)    (3.3)
The amplitude response A(f) and the phase response B(f) are determined as
    A(f) = k    (3.4)
    B(f) = −ωτ    (3.5)
From these results, it can be seen that the amplitude response of the ideal filter should be constant and the phase response should be a linear function of frequency. However, these conditions apply only over the frequency range of the desired signal x (t); if the amplitude response were constant everywhere, the undesired signal would not be removed at all. To utilize the most basic form of frequency-domain filtering, it must be assumed that the spectrum of the undesired signal occupies a different frequency range than that of the desired signal, and the amplitude response must approach zero in the frequency range of the undesired signal. The conclusion is that a distortionless frequency-domain filter should have constant amplitude response and linear phase response over the frequency band representing the spectrum of the desired signal. Outside this band the amplitude response should drop towards zero as rapidly as possible, and the phase response in this range is usually unimportant. The frequency range in which a signal is transmitted through the filter is called the passband, and the frequency range in which a signal is rejected is called the stopband. It can be shown that the attainment of both ideal constant amplitude and ideal linear phase is physically impossible in a practical filter. Furthermore, as the amplitude approximation is improved, the phase response often becomes poorer, and vice versa. However, it is possible to provide approximations that approach the ideal conditions closely enough to satisfy most applications, particularly if a relatively complex filter is permitted. Practical filters are characterized by a transition band between the passband and the stopband. The exact locations of the boundaries of these different bands are somewhat arbitrary.
For an ideal filter, the attenuation in the passband should be zero, the attenuation in the stopband should be high, and the transition band should have zero width. This is not possible in practical applications.
• Analog approximation techniques are highly advanced.
• They usually yield closed-form solutions.
• Extensive tables are available for analog filter design.
• Many applications require the digital simulation of analog filters.
Some of the common Matlab commands for analog filter design are
• Butterworth filter: butter, buttord
• Chebyshev filter type-1: cheby1, cheb1ord
• Chebyshev filter type-2: cheby2, cheb2ord
• Elliptic (Cauer) filter: ellip, ellipord
Three popular methods for analog filter design are the Butterworth, Chebyshev and elliptic filters. Chebyshev filters come in two varieties. The three approaches give increasingly sharp transitions between passband and stopband. Their respective properties are summarized in Table 3.1.
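The Matlab commands listed above have SciPy counterparts; the following sketch (assuming SciPy is available; the edge frequencies and attenuations are arbitrary example values, not from the text) compares the minimum order each classical approximation needs for one and the same specification:

```python
from scipy import signal

wp, ws = 1.0, 2.0      # passband and stopband edges, rad/s (analog design)
ap, as_ = 1.0, 40.0    # max passband loss and min stopband attenuation, dB

n_butter, _ = signal.buttord(wp, ws, ap, as_, analog=True)
n_cheby1, _ = signal.cheb1ord(wp, ws, ap, as_, analog=True)
n_cheby2, _ = signal.cheb2ord(wp, ws, ap, as_, analog=True)
n_ellip, _  = signal.ellipord(wp, ws, ap, as_, analog=True)

# Sharper approximations meet the same spec with lower orders.
assert n_butter >= n_cheby1 >= n_ellip
```

The same ordering (Butterworth needing the highest order, elliptic the lowest) holds for any fixed specification, which is the trade-off Table 3.1 summarizes.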
Table 3.1 Filter type
Properties
Butterworth filter
• Monotone in passband and stopband (no ripple) (maximally flat in passband) • Only poles
Type I Chebyshev Filter
• Equiripple in passband, monotone in stopband • Only poles
Type II Chebyshev Filter
• Monotone in passband, equiripple in stopband • Poles and Zeros
Elliptic Filters
• Equiripple in passband and stopband • Poles and Zeros
Fig. 3.3
Amplitude and phase characteristics of a filter with ideal passband response.
In certain applications, the time delay of a signal passing through a filter is of more significance than the phase shift. The two delay functions, the phase delay Tp and the group (or envelope) delay Tg, are mathematically represented as
    Tp(f) = −B(f)/ω    (3.6)
    Tg(f) = −dB(f)/dω    (3.7)
The graphical significance of these definitions is illustrated in Fig. 3.4. It can be seen that the phase delay at a given frequency represents the slope of the secant line from dc to the particular frequency and is a sort of overall average delay parameter. The group delay at a given frequency represents the slope of the tangent line at the particular frequency and represents a local or narrow-range delay parameter. Considering the case of a filter with a constant-amplitude response and a linear-phase response as described by eq. (3.4) and eq. (3.5), it is readily seen that
    Tp(f) = Tg(f) = τ
(3.8)
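Eqs. (3.6)-(3.8) can be sketched numerically for the linear-phase case (assuming NumPy; the delay τ = 2 s is an arbitrary example value):

```python
import numpy as np

tau = 2.0                        # assumed delay, seconds
w = np.linspace(0.1, 10.0, 1000)
B = -w * tau                     # linear phase response, eq. (3.5)

phase_delay = -B / w             # Tp = -B(w)/w, eq. (3.6)
group_delay = -np.gradient(B, w) # Tg = -dB/dw, eq. (3.7), numerically

# For linear phase both delays equal tau, as eq. (3.8) states.
assert np.allclose(phase_delay, tau)
assert np.allclose(group_delay, tau)
```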
Fig. 3.4
Graphical significance of phase delay and group delay.
For the ideal filter, the phase and group delays are identical and represent the exact delay of the signal, which has not been distorted in this case. In the general case, where the amplitude response is not constant in the passband and the phase response is not linear, it is more difficult to precisely define the exact delay, since a signal will undergo some distortion in passing through the filter. In fact, any attempt to define the exact delay will result in some variation of delay as different types of signals are applied to the filter. Nevertheless, the preceding definitions are quite useful in describing the approximate delay characteristics of a filter. The phase delay parameter is often used to estimate the delay of a low-pass type signal, such as a basic pulse waveform, when it is passed through a low-pass filter. The phase delay is computed over the frequency range representing the major portion of the input signal spectrum. If the phase response does not deviate too far from linearity over the range involved, this value may represent a reasonable approximation to the actual delay of the waveform involved. Returning to the ideal frequency-domain filter concept, it is convenient to consider several models representing the amplitude response for various classes of filters, as illustrated in Fig. 3.5. The four models shown are the low-pass, high-pass, band-pass and band-rejection ideal frequency-domain amplitude functions. The corresponding ideal phase function should be linear over the passband in each case.
3.1
BUTTERWORTH APPROXIMATION
The Butterworth approximation is a special form of Taylor series approximation in which the approximating function and the specified function are identical at ω = 0. For this approximation kn(ω) is selected as
    kn(ω) = b0 + b1 ω + b2 ω^2 + … + bn ω^(2n)
Fig. 3.5  Ideal frequency-domain amplitude response models.
For a Taylor series approximation the function kn(ω) must be maximally flat at the origin (i.e., ω = 0). Hence, as many derivatives of kn(ω) as possible must vanish at ω = 0, so for the Butterworth approximation
    kn(ω) = ω^n
The magnitude function |T(jω)|^2 and the attenuation function a(ω) are then given by
    |T(jω)|^2 = 1/[1 + ε^2 ω^(2n)]    (3.9)
    a(ω) = 10 log[1 + ε^2 ω^(2n)]    (3.10)
In eq. (3.10), ω is interpreted as the frequency normalized with respect to the passband edge ωp, i.e.,
    |T(jω)|^2 = 1/[1 + ε^2 (ω/ωp)^(2n)]    (3.11)
and
    a(ω) = 10 log[1 + ε^2 (ω/ωp)^(2n)]    (3.12)
The passband edge frequency ωp is sometimes referred to as the "cut-off frequency" ωc for ideal filter characteristics; the constant ε determines the passband and/or stopband attenuation, and n represents the order of the transfer function or filter. The frequency response of a Butterworth filter for various values of n is shown in Fig. 3.6. All the curves pass through the same point at ω = ωc, the cut-off frequency, and this point is determined by the passband attenuation ap. At the cut-off frequency ωc, the amplitude response is 1/√2 times the dc gain, which corresponds to an attenuation of 3.01 dB. In most applications this value is rounded off to 3 dB. In many developments it is convenient to normalize the frequency scale by selecting ωc = 1 rad/s.
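Eq. (3.12) can be evaluated directly; a minimal Python sketch (the order n = 4 is an arbitrary example; the 3.01 dB value at ω = ωp with ε = 1 holds for any order):

```python
import math

def butterworth_attenuation_db(w, wp, eps, n):
    """Attenuation a(w) = 10 log10[1 + eps^2 (w/wp)^(2n)] in dB, eq. (3.12)."""
    return 10 * math.log10(1 + eps**2 * (w / wp) ** (2 * n))

# At the passband edge with eps = 1, the attenuation is 10 log10(2) ~ 3.01 dB.
a_at_cutoff = butterworth_attenuation_db(1.0, 1.0, 1.0, 4)
assert abs(a_at_cutoff - 3.0103) < 1e-3
```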
Fig. 3.6 Frequency response of Butterworth filter for various orders.
The Butterworth response is a monotonically decreasing function of frequency in the positive frequency range. As the order n increases, the response becomes flatter in the passband, and the attenuation is greater in the stopband. Above the cut-off, the Butterworth amplitude response of order n approaches a high-frequency asymptote having a slope of −6n dB/octave.
Passband attenuation:
    ap = 10 log[1 + ε^2 (ωp/ωc)^(2n)]    (3.13)
Stopband attenuation:
    as = 10 log[1 + ε^2 (ωs/ωc)^(2n)]    (3.14)
The factor ωp/ωs is called the selectivity parameter and is represented by k, i.e.,
    k = ωp/ωs
Equation (3.13) can be rewritten as
    ε^2 = 10^(0.1 ap) − 1
If we define the discrimination parameter k1 as
    k1 = [(10^(0.1 ap) − 1)/(10^(0.1 as) − 1)]^(1/2)    (3.15)
then
    n = log k1 / log k
The order of the filter n should be selected such that
    n ≥ log k1 / log k,  n an integer    (3.16)
If n happens to be equal to log k1 / log k, then the values of ε obtained from eq. (3.13) and eq. (3.14) are the same. If n ≠ log k1 / log k, then ε can be selected to satisfy either the passband edge or the stopband edge requirement exactly.
In the first case:
    ε = [10^(0.1 ap) − 1]^(1/2)    (3.17a)
In the second case:
    ε = [k^(2n) (10^(0.1 as) − 1)]^(1/2)    (3.17b)
Example 3.1  The specifications for a low-pass filter are given as follows:
    a ≤ 1 dB for f ≤ 3 MHz
    a ≥ 60 dB for f ≥ 12 MHz
Obtain the order of the filter and the transfer function for the Butterworth approximation.
Solution:
For this filter the selectivity and the discrimination parameters are
    k = fp/fs = 3/12 = 0.25
    k1 = [(10^0.1 − 1)/(10^6 − 1)]^(1/2) = 0.5089 × 10^-3
The order of the Butterworth polynomial from eq. (3.16) is
    n ≥ log k1 / log k = (−3.2934)/(−0.6021) = 5.4702
i.e., n = 6.
(i) If ε is to satisfy the passband requirement (eq. 3.17a), then
    ε = [10^0.1 − 1]^(1/2) = 0.5089
(ii) If ε is to satisfy the stopband requirement (eq. 3.17b), then
    ε = [(0.25)^12 (10^6 − 1)]^(1/2) = 0.2441
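The arithmetic of Example 3.1 can be checked with a short sketch (Python, standard library only):

```python
import math

fp, fs = 3e6, 12e6     # passband and stopband edges, Hz
ap, as_ = 1.0, 60.0    # attenuations, dB

k = fp / fs                                                 # selectivity
k1 = math.sqrt((10**(0.1*ap) - 1) / (10**(0.1*as_) - 1))    # discrimination
n = math.ceil(math.log10(k1) / math.log10(k))               # eq. (3.16)

eps_pass = math.sqrt(10**(0.1*ap) - 1)                      # eq. (3.17a)
eps_stop = math.sqrt(k**(2*n) * (10**(0.1*as_) - 1))        # eq. (3.17b)

assert n == 6
assert abs(eps_pass - 0.5089) < 1e-4
assert abs(eps_stop - 0.2441) < 1e-4
```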
Example 3.2  Derive the transfer function for the third-order Butterworth low-pass filter with ωc = 1 rad/s.
Solution:
The amplitude-squared function is
    A^2(f) = 1/(1 + ω^6)    (3.18)
Setting ω^2 = −s^2, we have
    G(s) G(−s) = 1/(1 − s^6)    (3.19)
The poles of eq. (3.19) are determined as follows:
    s1 = 1∠0° = 1
    s2 = 1∠60° = 1/2 + j√3/2
    s3 = 1∠120° = −1/2 + j√3/2
    s4 = 1∠180° = −1
    s5 = 1∠−120° = −1/2 − j√3/2
    s6 = 1∠−60° = 1/2 − j√3/2
Notice that all the poles lie on a circle; this is one of the characteristics of the Butterworth function. The transfer function is formed from the left-half-plane poles (s3, s4 and s5) and is
    G(s) = 1/(s^3 + 2s^2 + 2s + 1)
Filter specifications often include the cut-off frequency, which is defined as the −3 dB point. If ωc is the cut-off frequency, then eq. (3.9) yields
    1/[1 + ε^2 (ωc/ωp)^(2n)] = 1/2
so that
    ε^2 (ωc/ωp)^(2n) = 1
and
    ωc = ωp / ε^(1/n)    (3.20)
For the previous example, if ε is selected to satisfy the passband requirement,
    ωc = ωp / (0.5089)^(1/6) = 1.12 ωp
If the passband edge is the cut-off frequency, i.e., ap = 3 dB, then ε = 1. For convenience, we consider a normalized Butterworth filter, i.e., ωp = 1 and ε = 1. In order to realize a filter we have to determine its transfer function. From the realizability conditions, the poles of this all-pole transfer function must lie in the left half of the s-plane. The poles of the transfer function are determined by factorizing the denominator of |T(jω)|^2 and properly allocating its roots.
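The pole computation of Example 3.2 can be sketched numerically (assuming NumPy):

```python
import numpy as np

n = 3
# The 2n roots of 1 - s**6 = 0 lie on the unit circle; keep the
# left-half-plane ones (s3, s4, s5 in the example).
roots = np.exp(1j * np.pi * np.arange(2 * n) / n)
lhp = roots[roots.real < -1e-12]

# Multiplying out (s - s3)(s - s4)(s - s5) gives s^3 + 2s^2 + 2s + 1.
coeffs = np.real(np.poly(lhp))
assert np.allclose(coeffs, [1, 2, 2, 1])
```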
To this end, let
    |T(jω)|^2 = 1 / |H(jω)|^2
The transfer function for the sixth order can be written, using Table 3.2, as
    H(s) = 1/(s^6 + 3.8637 s^5 + 7.4641 s^4 + 9.1416 s^3 + 7.4641 s^2 + 3.8637 s + 1)
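Table 3.2 can be regenerated from the Butterworth pole locations; a sketch assuming NumPy (the pole formula s_k = exp(jπ(2k + N − 1)/(2N)) is the standard normalized-Butterworth result, not derived in this section):

```python
import numpy as np

def butterworth_coeffs(N):
    """Denominator coefficients of a normalized Butterworth filter (wc = 1)."""
    k = np.arange(1, N + 1)
    poles = np.exp(1j * np.pi * (2 * k + N - 1) / (2 * N))  # left-half-plane poles
    return np.real(np.poly(poles))                          # highest power of s first

# Compare with the N = 3 and N = 6 rows of Table 3.2.
assert np.allclose(butterworth_coeffs(3), [1, 2, 2, 1], atol=1e-4)
assert np.allclose(butterworth_coeffs(6),
                   [1, 3.8637, 7.4641, 9.1416, 7.4641, 3.8637, 1], atol=1e-4)
```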
Table 3.2  The coefficients in the system function of a normalized Butterworth filter (ωc = 1) for order 1 ≤ N ≤ 8

N    a1       a2        a3        a4        a5        a6        a7       a8
1    1.0000
2    1.4142   1.0000
3    2.0000   2.0000    1.0000
4    2.6131   3.4142    2.6131    1.0000
5    3.2361   5.2361    5.2361    3.2361    1.0000
6    3.8637   7.4641    9.1416    7.4641    3.8637    1.0000
7    4.4940   10.0978   14.5918   14.5918   10.0978   4.4940    1.0000
8    5.1258   13.1371   21.8462   25.6884   21.8462   13.1372   5.1258   1.0000

Fig. 3.7  The poles of Ga(s) = Ha(s) Ha(−s) for a Butterworth filter of order N = 6 and N = 7.

3.2
CHEBYSHEV APPROXIMATION
The next function that will be considered is the Chebyshev or equiripple amplitude approximation. The approximation is derived from the Chebyshev polynomials Ck (x), which are a set of orthogonal functions possessing certain interesting properties.
Some of the basic properties are:
(i) The polynomials have equiripple amplitude characteristics over the range −1 ≤ x ≤ 1, with the ripple oscillating between −1 and +1.
(ii) Ck(x) increases more rapidly for x > 1 than any other polynomial of order k bounded by the limits stated in (i).
The Chebyshev polynomials can be derived from either of the equations
    Ck(x) = cos(k cos^-1 x)    (3.21)
    Ck(x) = cosh(k cosh^-1 x)    (3.22)

Fig. 3.8  Frequency response of Chebyshev type I filter for orders N = 5 and N = 6.

Fig. 3.9  Frequency response of a Chebyshev type II filter for orders N = 5 and N = 6.
The form of eq. (3.22) is most useful in the range |x| > 1. While neither eq. (3.21) nor eq. (3.22) appears to be a polynomial, it can be shown that these expressions can be expanded in polynomial form. The Chebyshev polynomials of order zero through ten are listed in Table 3.3.

Table 3.3  Several of the Chebyshev polynomials

n     Tn(ω)
0     1
1     ω
2     2ω^2 − 1
3     4ω^3 − 3ω
4     8ω^4 − 8ω^2 + 1
5     16ω^5 − 20ω^3 + 5ω
6     32ω^6 − 48ω^4 + 18ω^2 − 1
7     64ω^7 − 112ω^5 + 56ω^3 − 7ω
8     128ω^8 − 256ω^6 + 160ω^4 − 32ω^2 + 1
9     256ω^9 − 576ω^7 + 432ω^5 − 120ω^3 + 9ω
10    512ω^10 − 1280ω^8 + 1120ω^6 − 400ω^4 + 50ω^2 − 1
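Table 3.3 can be checked with the recurrence T0 = 1, T1 = ω, Tn+1(ω) = 2ω Tn(ω) − Tn−1(ω) (the recurrence itself is a standard Chebyshev identity, not stated above); a sketch assuming NumPy:

```python
import numpy as np
from numpy.polynomial import polynomial as P

def chebyshev(n):
    """Coefficients of T_n in ascending powers of w, via the recurrence."""
    t_prev, t = [1.0], [0.0, 1.0]          # T0 and T1
    if n == 0:
        return t_prev
    for _ in range(n - 1):
        # T_{k+1} = 2w*T_k - T_{k-1}
        t_prev, t = t, P.polysub(P.polymulx([2 * c for c in t]), t_prev)
    return t

# T5 = 16 w^5 - 20 w^3 + 5 w, matching the table.
assert np.allclose(chebyshev(5), [0, 5, 0, -20, 0, 16])
# T10 = 512 w^10 - 1280 w^8 + 1120 w^6 - 400 w^4 + 50 w^2 - 1.
assert np.allclose(chebyshev(10), [-1, 0, 50, 0, -400, 0, 1120, 0, -1280, 0, 512])
```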
The basic Chebyshev amplitude response is defined by
    A^2(f) = a / [1 + ε^2 Ck^2(ω/ωc)] = a / [1 + ε^2 Ck^2(f/fc)]    (3.23)
where Ck represents the Chebyshev polynomial and k the order of the corresponding transfer function. The quantity ε^2 is a parameter chosen to provide the proper passband ripple, and a is a constant chosen to determine the proper dc gain level. The cyclic frequency fc (or radian frequency ωc) is defined as the "cut-off frequency", and it is the highest frequency at which the response is governed by the passband ripple bound. Above fc, the response moves into the transition band. The steps involved in designing a Type-I Chebyshev filter are as follows:
1. Find the value of the selectivity factor k and the discrimination factor k1 (as determined in the case of the Butterworth filter).
2. Determine the filter order using the formula
    N ≥ cosh^-1(1/k1) / cosh^-1(1/k)
3. Form the rational function
    Ga(s) = Ha(s) Ha(−s)
where
    A^2(f) = a / [1 + ε^2 Ck^2(f/fc)]
and the passband ripple parameter is
    ε = [(1 − δp)^-2 − 1]^(1/2)
Here ωp and ωs are the passband and stopband edge frequencies, and δp and δs are the passband and stopband ripples.

Comparison between Butterworth and Chebyshev filters
1. The Butterworth filter has a maximally flat frequency-response characteristic in the passband.
2. The Chebyshev filter has a sharp cut-off characteristic in the transition band (from passband to stopband).
3. The Chebyshev (Chebyshev-I) filter has a permissible ripple in the passband of its frequency response, which is not present in the Butterworth filter.
4. The ripple can be up to 3 dB. If it exceeds 3 dB, we can consider it a bad filter, because the 3 dB down point in the frequency response represents the cut-off frequency.
5. The inverse Chebyshev filter (Chebyshev-II) has ripple in the stopband.
Example 3.3  Let the specifications for a low-pass filter be
    ap ≤ 1 dB for ω ≤ 150 rad/sec
    as ≥ 60 dB for ω ≥ 200 rad/sec
Find the order of the Chebyshev filter.
Solution: The selectivity and discrimination parameters for this filter are
    k = ωp/ωs = 150/200 = 0.75
    k1 = [(10^0.1 − 1)/(10^6 − 1)]^(1/2) = 0.5089 × 10^-3
Chebyshev filter:
    n ≥ cosh^-1(1/k1) / cosh^-1(1/k)
      = cosh^-1[1/(0.5089 × 10^-3)] / cosh^-1(1/0.75) = 10.4
i.e., n = 11.
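The order computation of Example 3.3 in a short sketch (Python, standard library only):

```python
import math

wp, ws = 150.0, 200.0   # passband/stopband edges, rad/sec
ap, as_ = 1.0, 60.0     # attenuations, dB

k = wp / ws                                                 # selectivity
k1 = math.sqrt((10**(0.1*ap) - 1) / (10**(0.1*as_) - 1))    # discrimination

# Chebyshev order formula: n >= acosh(1/k1) / acosh(1/k).
n = math.ceil(math.acosh(1/k1) / math.acosh(1/k))
assert n == 11
```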
3.3
SURVEY OF OTHER APPROXIMATIONS
The Butterworth and Chebyshev functions were both obtained from approximations involving only the amplitude response, and no attention was paid to the phase response in either case. In the maximally flat time-delay (MFTD) approximation, the group delay is made maximally flat in the vicinity of dc. The amplitude characteristic that results from the
MFTD approximation has a low-pass shape with a monotonically decreasing behavior as the frequency is increased. However, the attenuation at a given frequency is not as great as for either the Butterworth or the Chebyshev function of the same order. As the order n increases, the amplitude response of the MFTD approximation approaches the form of a Gaussian probability density function. The MFTD filter is used where excellent phase-shift (or time-delay) characteristics are required, but where the amplitude response need not display a rapid attenuation increase just above cut-off. As in the case of both the Butterworth and Chebyshev functions, the high-frequency attenuation rate of the MFTD filter will eventually approach 6n dB/octave, but the total attenuation will not be as great. The general form of the amplitude response is illustrated in Fig. 3.10.
Fig. 3.10
Form of the amplitude response for the maximally-flat time-delay filter.
The three filter functions considered thus far have all their zeros of transmission at infinity. Of these three types, the Chebyshev is normally considered to have the "best" amplitude response (more attenuation in the stopband for a given passband ripple bound) and the "poorest" phase response (most nonlinear). At the opposite extreme, the MFTD filter has the "best" phase response and the "poorest" amplitude response. The Butterworth filter represents a reasonable compromise between amplitude and phase, which is no doubt one of the reasons for its widespread popularity.
Problems
1. Compare Butterworth and Chebyshev approximation techniques.
2. Find the transfer function of a Butterworth LPF that satisfies the following constraints:
    0.5 ≤ |H(e^jω)| ≤ 1,  0 ≤ ω ≤ π/2
    |H(e^jω)| ≤ 0.2,  3π/4 ≤ ω ≤ π
3. Design an analog Chebyshev filter to satisfy the constraints:
    0.707 ≤ |H(e^jω)| ≤ 1,  0 ≤ ω ≤ 0.2π
    |H(e^jω)| ≤ 0.1,  0.5π ≤ ω ≤ π
4. Compare the Butterworth circle and the Chebyshev ellipse for pole locations. Find and plot the analog low-pass poles for N = 6.
5. Determine the impulse response and frequency response of the filter defined by
    y(n) = x(n) + b y(n − 1)
6. Obtain the polyphase structure of the filter with the transfer function
    H(z) = (1 − 3z^-1)/(1 + 4z^-1)
7. A prototype lowpass filter has the system response
    H(s) = 1/(s^2 + 2s + 1)
   Obtain a bandpass filter with ω0 = 2 rad/sec and Q = 10. Also derive a highpass filter with cut-off frequency ωc = 2 rad/sec, when the prototype has a cut-off of 2 rad/sec.
8. A low-pass filter is to be designed to satisfy the following requirements: (a) response flat to within 3dB from dc to 5 kHz (b) attenuation ≥ 30 dB for frequencies greater than 10 kHz Determine the minimum order for Butterworth filter and Chebyshev filter (with ripple ≤ 1 dB passband) that will meet the specifications. 9. Design a Butterworth filter for the following specifications: (a) Passband gain required: 0.85 dB (b) Frequency upto which passband gain must remain more or less steady, w1 : 1000 rad/s (c) Amount of attenuation required: 0.10 dB (d) Frequency from which attenuation must start w2: 1000 rad/s 10. Design a Butterworth filter for the following specifications: (a) Passband gain required: 0.95 dB (b) Frequency upto which passband gain must remain more or less steady, w1: 500 Hz (c) Amount of attenuation required: 0.20 dB (d) Frequency from which attenuation must start w2: 3000 Hz 11. Design a Chebyshev filter for the following specifications: (a) Passband gain required: – 2dB (b) Frequency upto which passband gain must remain more or less steady, wc: 250 Hz (c) Amount of attenuation required: – 40 dB (d) Frequency from which attenuation must start w2 : 800 Hz 12. Design a Chebyshev filter for the following specifications: (a) Passband gain required: .85 dB (b) Frequency upto which passband gain must remain more or less steady, wc: 1000 rad/s (c) Amount of attenuation required: 0.10 dB (d) Frequency from which attenuation must start w2: 3000 rad/s
13. Design a Chebyshev filter for the following specifications:
    (a) Passband gain required: −0.85 dB
    (b) Frequency up to which the passband gain must remain more or less steady, ωc: 500 Hz
    (c) Amount of attenuation required: 0.15 dB
    (d) Frequency from which the attenuation must start, ω2: 600 Hz
14. Design a Chebyshev filter for the following specifications:
    (a) 3 dB bandwidth ωc = 10 rad/sec
    (b) Ripple in the passband ≤ 0.5 dB
    (c) At least 25 dB attenuation for ω ≥ 20 rad/sec
    Determine the order of the filter and its transfer function.
15. Design a Butterworth bandpass filter with the following specifications:
    (a) Centre frequency: 100 Hz
    (b) 3 dB bandwidth: 20 kHz
    (c) Attenuation in the band beyond (100 ± 20 kHz): 35 dB
16. A low-pass filter is desired to satisfy the following requirements:
    (i) Response flat within 1 dB from dc to 5 kHz
    (ii) Attenuation ≥ 30 dB for f ≥ 10 kHz
    Determine the minimum orders of a filter that will realize the specifications for both the Butterworth filter and the Chebyshev filter. Find the transfer function for any one method.
Multiple-Choice Questions 1. The classical analog filters (a) Butterworth filter (b) Chebyshev filter (c) Elliptic filter (d) All the above 2. Butterworth filters have (a) Wideband transition region (b) Sharp transition region (c) Oscillation in the transition region (d) None of the above 3. Chebyshev filters have (a) Wideband transition region (b) Sharp transition region (c) Oscillation in the transition region (d) None of the above 4. Chebyshev filter contains (a) Oscillations in the passband (b) Oscillations in the stopband
    (c) Oscillations in the passband and the stopband
    (d) Oscillations in the transition band
5. The following filter exhibits equiripple behavior in the passband and the stopband:
    (a) Butterworth filter         (b) Type-I Chebyshev filter
    (c) Type-II Chebyshev filter   (d) Elliptic filter
6. The magnitude function of the Butterworth filter is given by
    (a) |Ha(ω)| = 1/[1 + (ω/ωc)^N],       N = 1, 2, 3, …
    (b) |Ha(ω)|^2 = 1/[1 + (ω/ωc)^(2N)],  N = 1, 2, 3, …
    (c) |Ha(ω)| = 1/[1 + (ωc/ω)^(2N)],    N = 1, 2, 3, …
    (d) |Ha(ω)| = 1/[1 + (ωc/ω)^N],       N = 1, 2, 3, …
7. The magnitude function of the Chebyshev filter is given by
    (a) |Ha(ω)| = 1/[1 + ε^2 CN^2(ωc/ω)]^(1/2),  N = 1, 2, 3, …
    (b) |Ha(ω)| = 1/[1 + ε CN(ω/ωc)]^(1/2),      N = 1, 2, 3, …
    (c) |Ha(ω)| = 1/[1 + ε^2 CN^2(ω/ωc)]^(1/2),  N = 1, 2, 3, …
    (d) |Ha(ω)| = 1/[1 + CN^2(ωc/ω)]^(1/2),      N = 1, 2, 3, …
8. For Butterworth filter, filter order N is given by
    (a) N = (1/2) log[(1/δs^2 − 1)/(1/δp^2 − 1)] / log(ωs/ωp)
    (b) N = (1/2) log[(1/δs^2 + 1)/(1/δp^2 + 1)] / log(ωs/ωp)
    (c) N = (1/2) log[(1/δs^2 − 1)/(1/δp^2 − 1)] / log(ωp/ωs)
    (d) N = (1/2) log[(1/δs^2 + 1)/(1/δp^2 + 1)] / log(ωp/ωs)
9. For the Butterworth filter, the cut-off frequency is given by
    (a) ωc = ωp / [1/δp^2 − 1]^(1/N)     (b) ωc = ωp / [1/δp^2 − 1]^(1/2N)
    (c) ωc = ωp / [1/δs^2 + 1]^(1/2N)    (d) ωc = ωp / [1/δp^2 + 1]^(1/N)
10. For the Chebyshev filter, the filter order N is given by
    (a) N ≥ cosh^-1[(1/ε)(1/δs^2 − 1)^(1/2)] / cosh^-1(ωs/ωp)
    (b) N ≥ cosh^-1[(1/ε)(1/δs^2 + 1)^(1/2)] / cosh^-1(ωs/ωp)
    (c) N ≥ cosh^-1[(1/ε)(1/δp^2 − 1)^(1/2)] / cosh^-1(ωs/ωp)
    (d) N ≥ cosh^-1[(1/ε)(1/δp^2 + 1)^(1/2)] / cosh^-1(ωs/ωp)
11. For the Chebyshev filter, the cut-off frequency is given by
    (a) ωc = ωp/2   (b) ωc = ωs/2
    (c) ωc = ωp     (d) ωc = ωs
12. For the Butterworth filter, when Ap and As are in dB, the filter order N is given by
    (a) N = (1/2) log[(10^(0.1 As) + 1)/(10^(0.1 Ap) + 1)] / log(ωs/ωp)
    (b) N = (1/2) log[(10^(0.1 As) − 1)/(10^(0.1 Ap) − 1)] / log(ωs/ωp)
    (c) N = (1/2) log[(10^(0.1 As) + 1)/(10^(0.1 Ap) + 1)] / log(ωp/ωs)
    (d) N = (1/2) log[(10^(0.1 As) − 1)/(10^(0.1 Ap) − 1)] / log(ωp/ωs)
13. For the Chebyshev filter, when Ap and As are in dB, the cut-off frequency is given by
    (a) ωc = ωp/2   (b) ωc = ωs/2
    (c) ωc = ωp     (d) ωc = ωs
14. The magnitude function of the elliptic filter is given by
    (a) |H(ω)|^2 = k/[1 + ε^2 GN^2(ω)]   (b) |H(ω)| = k/[1 + ε^2 GN^2(ω)]
    (c) |H(ω)| = k/[1 + ε GN(ω)]         (d) |H(ω)| = k/[1 + ε GN^2(ω)]
Key to the Multiple-Choice Questions
1. (d)    2. (a)    3. (b)    4. (a)
5. (d)    6. (b)    7. (c)    8. (a)
9. (a)   10. (a)   11. (c)   12. (b)
13. (c)   14. (a)
4
IIR Filters
4.1
INTRODUCTION TO DIGITAL FILTER DESIGN
In DSP, there are two important application aspects:
• spectrum analyzer: provides a signal representation in the frequency domain.
• digital filter: performs signal filtering in the frequency domain, e.g., lowpass, highpass, bandpass and bandstop filters.
Fig. 4.1
Frequency responses of (i) low-pass, (ii) high-pass, (iii) band-pass and (iv) bandstop filters.
Digital filters (discrete-time filters) are examples of LTI systems. They are used to alter a discrete-time signal in a desired fashion. Such a discrete-time signal could be a sampled analog signal or a time series representation such as stock market data. A digital filter may be defined as a computational process or algorithm which converts one sequence of numbers representing an input signal into another sequence of numbers representing an output signal, and in which the conversion changes the character of the signal in some prescribed fashion.
4.1.1 Digital Filters Compared to Analog Filters
Advantages
• Digital filters can have characteristics that are not possible with analog filters, e.g., a truly linear phase response.
• The performance of digital filters does not vary with the environment, e.g., thermal factors, thus avoiding the need to calibrate periodically.
• Frequency responses can be configured to be adjusted automatically using software. Thus, they are widely used in adaptive filters. • Several input signals or channels can be filtered by one digital filter without the need to replicate the hardware. • Both filtered and unfiltered data can be saved for future use. • The performance of digital filters is repeatable from unit to unit. • Digital filters can be used at very low frequencies, where usage of analog filters is impractical. Digital filters can also be made to work over a wide range of frequencies by mere change to the sampling frequency.
Disadvantages of Digital Filters
• Speed limitation  The maximum bandwidth that digital filters can handle in real time is much narrower than that of analog filters. In real time, analog-to-digital and digital-to-analog conversion impose constraints on the highest frequency that digital filters can handle. The processing speed can also be limited by the computational time in the processor.
• Finite wordlength effects  Digital filters are subject to ADC noise introduced in quantizing a continuous signal, and to roundoff noise incurred during computation. With higher-order recursive filters, the accumulation of roundoff noise can lead to instability.
• Long design and development time  The initial design time for digital filters can be much longer than for analog filters. However, redesigning or modifying digital filters needs less effort.
4.1.2 Steps in Digital Filter Design
There are several processes involved in the design of digital filters, namely:
1. filter specifications, which may include constraints on the magnitude and/or phase of the frequency response, and constraints on the unit sample or step response of the filter;
2. specification of the type of filter (FIR or IIR);
3. determining the filter order;
4. finding a set of coefficients that produce an acceptable filter;
5. implementing the system in hardware or software;
6. quantizing the filter coefficients;
7. redesigning if necessary; and
8. choosing an appropriate filter structure.
4.1.3 Digital Filter Specifications
Before a filter can be designed, a set of filter specifications must be defined. For example, suppose we want to design a low-pass filter with a cut-off frequency wc. The frequency response of an ideal low-pass filter with linear phase and cut-off frequency wc is

    Hd(e^jw) = e^(-jaw),   |w| ≤ wc
             = 0,          wc < |w| ≤ π          (4.1)
IIR Filters
which has the unit sample response

    hd(n) = sin((n - a) wc) / (π (n - a))          (4.2)

Because this filter is unrealizable (non-causal and unstable), it is necessary to relax the ideal constraints on the frequency response and allow some deviation from the ideal response. The specifications for a low-pass filter will typically have the form

    1 - dp < |H(e^jw)| ≤ 1 + dp,   0 ≤ w ≤ wp
    |H(e^jw)| ≤ ds,                ws ≤ w ≤ π          (4.3)
as illustrated in Fig. 4.2. Thus, the specifications include the passband cut-off frequency wp, the stopband cut-off frequency ws, the passband deviation dp, and the stopband deviation ds. The passband and stopband deviations are often given in decibels (dB) as follows:

    ap = 20 log10 (1 - dp)
    as = 20 log10 (ds)          (4.4)

The interval [wp, ws] is called the transition band.
Fig. 4.2 Filter specifications for a low-pass filter.
The passband and stopband frequencies are usually specified in Hertz, together with the sampling rate of the digital filter. Since all filter design techniques are developed in terms of normalized angular frequencies wp and ws, the specified critical frequencies must be normalized before a specific filter design algorithm can be applied. Let fT denote the sampling frequency in Hertz, and fp and fs the passband and stopband edge frequencies in Hertz, respectively. Then the normalized angular frequencies in radians are given by

    wp = 2π fp / fT = 2π fp T
    ws = 2π fs / fT = 2π fs T          (4.5)

where T = 1/fT is the sampling interval.
4.2 CONVERSION FROM ANALOG TO DIGITAL
Next, we discuss the mapping from the (analog) s-plane to the (digital) z-plane, since the design of a digital filter from an analog prototype requires that we transform ha(t) to h[n] or Ha(s) to H(z). A mapping from the s-plane to the z-plane may be written as

    H(z) = Ha(s)|_(s = m(z))          (4.6)

where s = m(z) is the mapping function. In order for this transformation to produce an acceptable digital filter, the mapping m(z) should have the following properties:
1. The jw-axis should map one-to-one and onto the unit circle |z| = 1, to preserve the frequency response characteristics of the analog filter.
2. Points in the left-half s-plane should map to points inside the unit circle, to preserve the stability of the analog filter.
3. The mapping m(z) should be a rational function of z, so that a rational Ha(s) is mapped to a rational H(z).
For digital IIR filter design, an analog filter is chosen as the prototype. There are three popular methods of designing IIR filters:
1. Bilinear transformation method
2. Impulse invariance method
3. Step invariance method
The impulse response duration characteristics can be divided into two broad classes:
(a) Infinite Impulse Response (IIR): An IIR filter is one in which the impulse response h(n) has an infinite number of samples; h(n) is non-zero at an infinite number of points in the range n1 ≤ n ≤ ∞.
(b) Finite Impulse Response (FIR): An FIR filter is one in which the impulse response h(n) is limited to a finite number of samples defined over the range n1 ≤ n ≤ n2, where n1 and n2 are both finite.
The possible realization procedures can be divided into three broad classes:
(a) Recursive realization: A recursive realization is one in which the present value of the output depends both on the input (present and/or past values) and on previous values of the output. A recursive filter is usually recognized by the presence of both ai and bi terms in a realization of the form

    y(n) = Σ (i = 0 to k) ai x(n - i) - Σ (i = 1 to k) bi y(n - i)          (4.7)

(b) Non-recursive (direct convolution) realization: A non-recursive or direct convolution realization is one in which the present value of the output depends only on the present and past values of the input. This usually means that all bi = 0 in a realization of the form

    y(n) = Σ (i = 0 to k) ai x(n - i)          (4.8)
(c) Fast Fourier Transform (FFT) Realization: This type of realization is achieved by transforming the input signal with the FFT, filtering the spectrum as desired, and performing an inverse transformation.
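The recursive form (4.7) and the non-recursive special case (4.8) can be sketched directly; a minimal illustration assuming only NumPy (the function names are ours):

```python
import numpy as np

def recursive(a, b, x):
    """Eq. (4.7): y(n) = sum_{i=0}^{k} a[i] x(n-i) - sum_{i=1}^{k} b[i] y(n-i).
    b[0] is a placeholder and is never used."""
    y = np.zeros(len(x))
    for n in range(len(x)):
        acc = sum(a[i]*x[n-i] for i in range(len(a)) if n-i >= 0)
        acc -= sum(b[i]*y[n-i] for i in range(1, len(b)) if n-i >= 0)
        y[n] = acc
    return y

def non_recursive(a, x):
    """Eq. (4.8): direct convolution -- all b[i] = 0."""
    return recursive(a, [0.0], x)

x = np.array([1.0, 0.0, 0.0, 0.0])           # unit impulse
print(recursive([1.0], [0.0, 0.5], x))       # [1, -0.5, 0.25, -0.125]
print(non_recursive([1.0, 2.0], x))          # [1, 2, 0, 0]
```

The impulse response of the recursive example is infinitely long (IIR), while the non-recursive example simply echoes its coefficients (FIR).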
4.3 BILINEAR TRANSFORMATION METHOD
The mapping from the prototype analog filter to the IIR digital filter using the bilinear transformation technique is

    s = C (1 - z^-1) / (1 + z^-1)          (4.9)

where C is the mapping constant. For low sampling rates,

    C = 2 fs = 4 f0

where f0 = fs/2 is called the folding frequency. The folding frequency is simply the highest frequency that can be processed by a given discrete-time system with sampling rate fs. Any frequency greater than f0 will be "folded" and cannot be recovered. Otherwise, to obtain exact correspondence at a reference frequency,

    C = lr cot(π vr / 2),   lr = 1 rad/sec

where lr is the normalized cut-off frequency (generally referred to in tables) and vr = fr/f0 is the normalized reference frequency.

Example 4.1 Design a low-pass digital filter derived from a second-order Butterworth analog filter with a 3 dB cut-off frequency of 50 Hz. The sampling rate of the system is 500 Hz.
Solution: The normalized analog transfer function of the Butterworth filter is obtained from previous knowledge. Using the dummy variable s, the function is

    G(s) = 1 / (1 + 1.4142136 s + s^2)

The frequency lr = 1 rad/sec in the prototype must correspond to fr = 50 Hz in the digital filter, so the design should be based on exact correspondence at these frequencies. The folding frequency is f0 = 500/2 = 250 Hz, and vr = 50/250 = 0.2. The constant C is determined as follows:

    C = cot(π (0.2)/2) = cot(π/10) = 3.0776835
The required transformation is

    s = 3.0776835 (1 - z^-1)/(1 + z^-1)

which yields

    H(z) = 0.0674553 (1 + 2z^-1 + z^-2) / (1 - 1.14298 z^-1 + 0.41280 z^-2)
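The arithmetic of Example 4.1 can be verified numerically. A sketch assuming only NumPy: substituting s = C(1 - z^-1)/(1 + z^-1) into G(s) and clearing the (1 + z^-1)^2 factor gives the digital coefficients directly.

```python
import numpy as np

C = 1.0 / np.tan(np.pi * 0.2 / 2)    # cot(pi/10) = 3.0776835
a = np.sqrt(2.0)                     # Butterworth coefficient 1.4142136
# numerator:   (1 + z^-1)^2
# denominator: (1 + z^-1)^2 + a*C*(1 - z^-2) + C^2*(1 - z^-1)^2
num = np.array([1.0, 2.0, 1.0])
den = np.array([1 + a*C + C*C, 2 - 2*C*C, 1 - a*C + C*C])
num, den = num/den[0], den/den[0]
print(num)    # ≈ [0.0675, 0.1349, 0.0675]
print(den)    # ≈ [1.0, -1.14298, 0.41280]
```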
4.4 IMPULSE INVARIANCE METHOD

In the impulse-invariance method the unit impulse response of the digital filter is a sampled version of the analog impulse response g(t), so that

    H(z) = T Z[g(t)] = T G(z)          (4.10)

Fig. 4.3 Illustration of the impulse-invariance concept.
Example 4.2 Using the impulse invariance method, design a low-pass filter according to the requirements stated in Example 4.1.
Solution: The normalized analog transfer function is

    G1(s) = 1 / (1 + 1.4142136 s + s^2)

Before taking the Z-transform, it is necessary to change the frequency scale of G1(s) so that its 3 dB cut-off frequency is 50 Hz. This is achieved by replacing s in the above equation by s/(2π × 50). The resulting transfer function is

    G(s) = G1(s/(100π)) = 9.8696044 × 10^4 / (s^2 + 444.28829 s + 9.8696044 × 10^4)

The impulse response g(t) is obtained using the Laplace transform pair L[sin at] = a/(s^2 + a^2) together with the shift for the damping factor e^(-at):

    g(t) = 444.28829 e^(-222.14415 t) sin(222.14415 t)

We then use the Z-transform pair

    Z[sin(naT)] = z sin(aT) / (z^2 - 2z cos(aT) + 1)

which for aT = 0.44428829 gives

    Z[sin(222.14415 nT)] = 0.42981538 z / (z^2 - 1.8058336 z + 1)

The damped sine is then transformed by applying the damping rule Z[e^(-naT) x(n)] = X(e^(aT) z).
H(z) is obtained by multiplying by T. After rearrangement in negative powers of z, the result is

    H(z) = 0.2449203 z^-1 / (1 - 1.1580459 z^-1 + 0.41124070 z^-2)
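A numerical check of Example 4.2, assuming only NumPy (the closed-form pairs above are coded directly):

```python
import numpy as np

fc, fsr = 50.0, 500.0
T = 1.0 / fsr
wc = 2*np.pi*fc
a = wc / np.sqrt(2)                  # 222.14415: damping and damped frequency
k = T * wc**2 / a                    # overall gain T * 444.28829
E, c, s = np.exp(-a*T), np.cos(a*T), np.sin(a*T)
b1 = k * E * s                       # numerator coefficient of z^-1
a1, a2 = -2*E*c, E*E                 # denominator coefficients
print(b1, a1, a2)                    # ≈ 0.24492, -1.15805, 0.41124
```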
Example 4.3 Determine a possible digital filter for the system

    G(s) = 2 / ((s + 1)(s + 2))

The system is to be interfaced to a process control digital computer, and it is desired to replace many of the units within the system by software realizations. Determine the transfer function H(z) of a possible replacement for the given analog unit. The sampling rate used in the system is 10 Hz.
Solution: The impulse response g(t) can be written as

    g(t) = 2 e^(-t) - 2 e^(-2t)

whose Z-transform (with T = 0.1 s) is

    G(z) = 2/(1 - e^(-0.1) z^-1) - 2/(1 - e^(-0.2) z^-1)

The transfer function H(z) is obtained by multiplying G(z) by the sampling interval T. The result can be rearranged as

    H(z) = 0.017221333 z^-1 / (1 - 1.7235682 z^-1 + 0.74081822 z^-2)

4.5 STEP INVARIANCE METHOD

Fig. 4.4 Illustration of the step-invariance concept.
In the step-invariance method the step response of the digital filter is matched to samples of the analog step response gs(t):

    hs(n) = gs(t)|_(t = nT)          (4.11)
    Hs(z) = Gs(z)          (4.12)
    Hs(z) = [z/(z - 1)] H(z)          (4.13)
    Gs(z) = Z{L^-1[G(s)/s]}          (4.14)

Using eqs. (4.12), (4.13) and (4.14), it can be written as

    H(z) = [(z - 1)/z] Z{L^-1[G(s)/s]}          (4.15)
Example 4.4 Using the step invariance method, design a low-pass filter according to the requirements stated in Example 4.1.
Solution:

    H(z) = [(z - 1)/z] Z{L^-1[G(s)/s]}

We must now determine the Z-transform corresponding to the sampled version of the continuous-time step response

    gs(t) = 1 - e^(-222.14415 t) (sin 222.14415 t + cos 222.14415 t)

Using the transform pairs

    Z[cos(naT)] = (z^2 - z cos aT) / (z^2 - 2z cos aT + 1)
    Z[sin(naT)] = z sin aT / (z^2 - 2z cos aT + 1)

and the damping rule Z[e^(-naT) x(n)] = X(e^(aT) z), we obtain

    Hs(z) = z/(z - 1) - (z^2 - 0.30339071 z)/(z^2 - 1.1580459 z + 0.41124070)
          = (0.14534481 z^2 + 0.10784999 z) / [(z - 1)(z^2 - 1.1580459 z + 0.41124070)]

    H(z) = (0.14534481 z^-1 + 0.10784999 z^-2) / (1 - 1.1580459 z^-1 + 0.41124070 z^-2)
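The step-invariance algebra of Example 4.4 can be checked with polynomial arithmetic (NumPy only):

```python
import numpy as np

a, T = 222.14415, 1.0/500.0
E, c, s = np.exp(-a*T), np.cos(a*T), np.sin(a*T)
D = np.array([1.0, -2*E*c, E*E])         # z^2 - 1.15805 z + 0.41124
P = np.array([1.0, -E*(c - s), 0.0])     # z^2 - 0.30339 z (cos + sin parts)
# H_s(z) = z/(z-1) - P(z)/D(z); numerator of H_s is z*D(z) - (z-1)*P(z)
numH = np.polysub(np.polymul([1.0, 0.0], D), np.polymul([1.0, -1.0], P))
print(numH)    # ≈ [0, 0.14534, 0.10785, 0]
print(D)       # ≈ [1, -1.15805, 0.41124]
```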
Example 4.5 The sampling rate in a certain digital processing system is 2000 Hz. It is desired to program a digital filter within the system to behave approximately like a simple first-order low-pass filter with a 3 dB frequency in the neighborhood of (but not necessarily equal to) 400 Hz. The major criterion is that the low-frequency response of the digital filter be close to that of the reference analog filter.
Solution: The requirements suggest the use of the bilinear transformation with the mapping constant selected to give good low-frequency correlation. The normalized low-pass analog transfer function with a cut-off frequency of 1 rad/sec is

    G1(s) = 1/(s + 1)

In this case, it is necessary to scale the analog filter to the proper frequency range before applying the transformation. This is achieved by replacing s by s/(2π × 400). Letting G2(s) represent the resulting analog transfer function, we have

    G2(s) = G1(s/(800π)) = 800π / (s + 800π)

The bilinear transformation constant is obtained as follows:

    C = 2 fs = 2 × 2000 = 4000   (low sampling rates)

The transformation is

    s = 4000 (1 - z^-1)/(1 + z^-1)

The final transfer function can be obtained as

    H(z) = 0.385870 (1 + z^-1) / (1 - 0.228261 z^-1)
Example 4.6 The transfer function of one unit within a particular analog control system is given by

    G(s) = 2 / ((s + 1)(s + 2))

The system is to be interfaced to a process control digital computer, and it is desired to replace many of the units within the system by software realizations. Determine the transfer function H(z) of a possible replacement for the given analog unit. The sampling rate used in the system is 10 Hz.
Solution: The mapping constant is selected as C = 2 fs = 2 × 10 = 20 (for low sampling rates), so

    s = 20 (1 - z^-1)/(1 + z^-1)

The final transfer function can be expressed as

    H(z) = 0.0043290043 (1 + z^-1)^2 / (1 - 1.7229437 z^-1 + 0.74025974 z^-2)
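Because C = 2 fs here, the result coincides with the convention used by scipy.signal.bilinear (which substitutes s = 2 fs (z - 1)/(z + 1)), so SciPy can serve as an independent check, assuming it is available:

```python
from scipy.signal import bilinear

# G(s) = 2 / ((s + 1)(s + 2)) = 2 / (s^2 + 3s + 2), fs = 10 Hz
b, a = bilinear([2.0], [1.0, 3.0, 2.0], fs=10.0)
print(b)    # ≈ [0.0043290, 0.0086580, 0.0043290]
print(a)    # ≈ [1, -1.7229437, 0.74025974]
```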
4.6 COMPARISON OF THE AMPLITUDE RESPONSES OF THE THREE METHODS
(i) In the passband, the three responses are all fairly close. (ii) Gain of the impulse-invariance filter is slightly less. This problem is not serious since we would always adjust the overall gain level if desired. (iii) In the stopband, the bilinear transformation filter has the sharpest cut-off rate, and at the other extreme, the impulse invariance filter exhibits relatively poor roll-off. (iv) Both the impulse-invariance and the step invariance filters display aliasing errors, although they are not as serious in the latter filter.
Example 4.7 What is an IIR digital filter?
Solution: If all the infinite samples of the impulse response are considered in designing the filter, it is known as an Infinite Impulse Response (IIR) filter. IIR filters are recursive, i.e., the present output depends on present and past inputs and on past outputs. The difference equation of the IIR system is given by

    y(n) = Σ (k = 1 to N) Ak y(n - k) + Σ (k = 0 to M) Bk x(n - k)

Applying the Z-transform, we get

    Y(z) = Σ (k = 1 to N) Ak z^-k Y(z) + Σ (k = 0 to M) Bk z^-k X(z)

    Y(z) [1 - Σ (k = 1 to N) Ak z^-k] = [Σ (k = 0 to M) Bk z^-k] X(z)

    H(z) = Y(z)/X(z) = [Σ (k = 0 to M) Bk z^-k] / [1 - Σ (k = 1 to N) Ak z^-k]
where H(z) is the transfer function of the IIR system.

Example 4.8 An analog integrator is described by the transfer function HA(s) = 1/s.
(i) Obtain a digital integrator using the bilinear transformation method.
(ii) Obtain the difference equation for the digital integrator relating the input x(n) to the output y(n).
Solution: Given HA(s) = 1/s.
(i) According to the bilinear transformation method, the analog transfer function is converted into digital form by substituting

    s = (2/T) (z - 1)/(z + 1)

in HA(s). Assuming T = 1 sec,

    H(z) = 1 / [2 (z - 1)/(z + 1)] = (z + 1) / [2 (z - 1)]

(ii) Difference equation for the digital integrator:

    H(z) = Y(z)/X(z) = (z + 1) / [2 (z - 1)]
    Y(z) (2z - 2) = X(z) (z + 1)
    2z Y(z) - 2Y(z) = z X(z) + X(z)

Applying the inverse Z-transform, we get

    2 y(n + 1) - 2 y(n) = x(n + 1) + x(n)
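Shifted back by one sample, the difference equation reads y(n) = y(n - 1) + [x(n) + x(n - 1)]/2, i.e., trapezoidal integration. A minimal sketch (the function name is ours):

```python
import numpy as np

def digital_integrator(x):
    """Trapezoidal accumulator from Example 4.8 (T = 1):
    2 y(n) - 2 y(n-1) = x(n) + x(n-1)."""
    y = np.zeros(len(x))
    for n in range(1, len(x)):
        y[n] = y[n-1] + 0.5*(x[n] + x[n-1])
    return y

print(digital_integrator(np.ones(5)))    # [0. 1. 2. 3. 4.]
```

Integrating a constant input of 1 produces a ramp, as an ideal integrator should.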
Example 4.9 Convert the analog filter with transfer function (s + 0.1)/[(s + 0.1)^2 + 9] into a digital IIR filter using the bilinear transformation. The digital filter should have a resonant frequency of wr = π/4.
Solution: The given analog transfer function is

    H(s) = (s + 0.1) / [(s + 0.1)^2 + 9]

with resonant frequency wr = π/4. From the given transfer function, the analog resonant frequency is Wc = 3 rad/sec. The analog filter can then be converted into a digital IIR filter using the bilinear transformation by calculating the sampling period T from the frequency warping relation W = (2/T) tan(w/2):

    Wc = (2/T) tan(wr/2)
    3 = (2/T) tan(π/8)
    T = (2/3) tan(π/8) = 0.276 sec

The conversion function in the bilinear transformation technique is then

    s = (2/T) (z - 1)/(z + 1) = 7.246 (z - 1)/(z + 1)   [T = 0.276]

The transfer function of the digital filter is

    H(z) = H(s)|_(s = 7.246 (z - 1)/(z + 1))
         = [7.246 (z - 1)/(z + 1) + 0.1] / {[7.246 (z - 1)/(z + 1) + 0.1]^2 + 9}
         = [7.246 (z - 1) + 0.1 (z + 1)] (z + 1) / {[7.246 (z - 1) + 0.1 (z + 1)]^2 + 9 (z + 1)^2}
         = (7.346 z - 7.146)(z + 1) / [(7.346 z - 7.146)^2 + 9 z^2 + 18 z + 9]
         = (7.346 z^2 + 0.2 z - 7.146) / (53.96 z^2 - 104.98 z + 51.06 + 9 z^2 + 18 z + 9)

    H(z) = (7.346 z^2 + 0.2 z - 7.146) / (62.96 z^2 - 86.98 z + 60.06)
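Using the rounded constant 2/T = 7.246 from Example 4.9, the polynomial expansion can be reproduced with NumPy (small last-digit differences come from that rounding):

```python
import numpy as np

k = 7.246                       # 2/T with T = 0.276 s
u = np.array([k, -k])           # k (z - 1)
v = np.array([1.0, 1.0])        # (z + 1)
num = np.polyadd(np.polymul(u, v), 0.1*np.polymul(v, v))
w = np.polyadd(u, 0.1*v)        # 7.346 z - 7.146
den = np.polyadd(np.polymul(w, w), 9.0*np.polymul(v, v))
print(num)    # [7.346, 0.2, -7.146]
print(den)    # ≈ [62.96, -86.99, 60.07]
```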
Example 4.10 Find H(z) using the impulse invariant method for the analog system

    H(s) = 1 / [(s + 0.5)(s^2 + 0.5 s + 2)]

Solution: Expanding in partial fractions,

    H(s) = A1/(s + 0.5) + (A2 s + A3)/(s^2 + 0.5 s + 2)
    1 = A1 (s^2 + 0.5 s + 2) + (A2 s + A3)(s + 0.5)

After simplification, we get A1 = 0.5, A2 = -0.5, A3 = 0, so

    H(s) = 0.5/(s + 0.5) - 0.5 s/(s^2 + 0.5 s + 2)

Completing the square, s^2 + 0.5 s + 2 = (s + 0.25)^2 + (1.3919)^2, and splitting the numerator 0.5 s = 0.5 (s + 0.25) - 0.125:

    H(s) = 0.5/(s + 0.5) - 0.5 (s + 0.25)/[(s + 0.25)^2 + (1.3919)^2] + 0.125/[(s + 0.25)^2 + (1.3919)^2]
         = 0.5/(s + 0.5) - 0.5 (s + 0.25)/[(s + 0.25)^2 + (1.3919)^2] + 0.0898 (1.3919)/[(s + 0.25)^2 + (1.3919)^2]

We know the impulse-invariant pairs

    1/(s - pi) → 1/(1 - e^(pi T) z^-1)
    (s + a)/[(s + a)^2 + b^2] → [1 - e^(-aT) (cos bT) z^-1] / [1 - 2 e^(-aT) (cos bT) z^-1 + e^(-2aT) z^-2]
    b/[(s + a)^2 + b^2] → e^(-aT) (sin bT) z^-1 / [1 - 2 e^(-aT) (cos bT) z^-1 + e^(-2aT) z^-2]

By using the above three pairs we can find the transfer function of the digital filter:

    H(z) = 0.5/(1 - e^(-0.5T) z^-1)
           - 0.5 [1 - e^(-0.25T) (cos 1.3919T) z^-1] / [1 - 2 e^(-0.25T) (cos 1.3919T) z^-1 + e^(-0.5T) z^-2]
           + 0.0898 e^(-0.25T) (sin 1.3919T) z^-1 / [1 - 2 e^(-0.25T) (cos 1.3919T) z^-1 + e^(-0.5T) z^-2]

Taking T = 1 sec,

    H(z) = 0.5/(1 - 0.606 z^-1) - 0.5 (1 - 0.1385 z^-1)/(1 - 0.277 z^-1 + 0.606 z^-2)
           + 0.0898 (0.7663 z^-1)/(1 - 0.277 z^-1 + 0.606 z^-2)

In positive powers of z,

    H(z) = 0.5 z/(z - 0.606) - (0.5 z^2 - 0.06925 z - 0.068814 z)/(z^2 - 0.277 z + 0.606)
         = 0.5 z/(z - 0.606) - (0.5 z^2 - 0.138 z)/(z^2 - 0.277 z + 0.606)
         = [0.5 z^3 - 0.138 z^2 + 0.303 z - 0.5 z^3 + 0.138 z^2 + 0.303 z^2 - 0.0836 z] / [(z - 0.606)(z^2 - 0.277 z + 0.606)]
         = (0.303 z^2 + 0.219 z) / [(z - 0.606)(z^2 - 0.277 z + 0.606)]

    H(z) = 0.303 (z^2 + 0.724 z) / [(z - 0.606)(z^2 - 0.277 z + 0.606)]
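The same answer can be reached by mapping each analog pole p_i to e^(p_i T), with scipy.signal.residue doing the partial-fraction expansion (assuming SciPy is available; T = 1 as in Example 4.10):

```python
import numpy as np
from scipy.signal import residue

T = 1.0
den_s = np.polymul([1.0, 0.5], [1.0, 0.5, 2.0])   # (s+0.5)(s^2+0.5s+2)
r, p, _ = residue([1.0], den_s)

def Hz(z):
    # each r_i/(s - p_i) maps to r_i/(1 - e^{p_i T} z^-1)
    return sum(rk/(1 - np.exp(pk*T)/z) for rk, pk in zip(r, p)).real

# compare with the rounded closed form of Example 4.10 at z = 2
closed = 0.303*(4 + 0.724*2) / ((2 - 0.606)*(4 - 0.277*2 + 0.606))
print(Hz(2.0), closed)    # both ≈ 0.292
```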
Example 4.11 Convert the analog filter with transfer function (s + 0.1)/[(s + 0.1)^2 + 9] into a digital IIR filter using the impulse invariance method. Also sketch the response and comment on the value of T. How does it affect aliasing?
Solution: Let

    H(s) = (s + 0.1)/[(s + 0.1)^2 + 9] = (s + 0.1)/(s^2 + 0.2 s + 9.01)
         = (s + 0.1) / {[s - (-0.1 + j3)][s - (-0.1 - j3)]}
         = A/[s - (-0.1 + j3)] + B/[s - (-0.1 - j3)]

After simplification, we get A = 1/2 and B = 1/2:

    H(s) = (1/2)/[s - (-0.1 + j3)] + (1/2)/[s - (-0.1 - j3)]

By the impulse invariant method, H(s) is converted into

    H(z) = (1/2)/[1 - e^((-0.1 + j3)T) z^-1] + (1/2)/[1 - e^((-0.1 - j3)T) z^-1]
         = [1 - 0.5 e^(-0.1T) (e^(j3T) + e^(-j3T)) z^-1] / {[1 - e^((-0.1 + j3)T) z^-1][1 - e^((-0.1 - j3)T) z^-1]}
         = [1 - e^(-0.1T) (cos 3T) z^-1] / [1 - 2 e^(-0.1T) (cos 3T) z^-1 + e^(-0.2T) z^-2]

Putting z = e^(jw),

    H(e^jw) = [1 - (cos 3T) e^(-0.1T) e^(-jw)] / [1 - 2 (cos 3T) e^(-0.1T) e^(-jw) + e^(-0.2T) e^(-2jw)]          (1)

Substituting T = 0.1 in equation (1),

    H(e^jw) = (1 - 0.946 e^(-jw)) / (1 - 1.892 e^(-jw) + 0.98 e^(-2jw))

Substituting T = 0.5 in equation (1),

    H(e^jw) = (1 - 0.0673 e^(-jw)) / (1 - 0.1346 e^(-jw) + 0.905 e^(-2jw))
The frequency response characteristics of H(e^jw) for T = 0.1 and T = 0.5 are shown in Fig. 4.5.

Fig. 4.5 Frequency response of H(e^jw).
Aliasing decreases as T decreases (i.e., as the sampling rate increases). In this case, T = 0.1 gives a smaller aliasing error than T = 0.5.
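The two frequency responses can be sketched numerically (NumPy only; the coefficients match those computed above):

```python
import numpy as np

def H_imp(w, T):
    """Impulse-invariant H(e^{jw}) for H(s) = (s+0.1)/((s+0.1)^2 + 9)."""
    Ec = np.exp(-0.1*T)*np.cos(3*T)
    z1 = np.exp(-1j*np.asarray(w, dtype=float))
    return (1 - Ec*z1) / (1 - 2*Ec*z1 + np.exp(-0.2*T)*z1**2)

for T in (0.1, 0.5):
    print(T, np.exp(-0.1*T)*np.cos(3*T))   # ≈ 0.946 and ≈ 0.0673
```

Plotting |H_imp(w, T)| over 0 ≤ w ≤ π for both T values reproduces Fig. 4.5.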
Problems
1. Explain the design of IIR filters.
2. Give the steps to design bandpass digital IIR filters.
3. Explain the bilinear transformation method.
4. Explain the bilinear transformation technique for the design of IIR filters.
5. Explain polyphase decomposition for IIR filter structures.
6. What are the advantages and disadvantages of bilinear transformation?
7. What are the advantages and disadvantages of analog and digital filters?
8. What is frequency warping?
9. What do you mean by infinite impulse response?
10. Explain frequency transformation.
11. Discuss the procedure used for the design of a Butterworth low-pass digital filter.
12. Discuss the procedure used for the design of a Chebyshev low-pass digital filter.
13. Explain IIR filter design using the bilinear transformation method and the impulse invariance method.
14. A digital low-pass filter is required to meet the following specifications:
(i) Passband ripple ≤ 1 dB
(ii) Passband edge: 4 kHz
(iii) Stopband attenuation ≥ 40 dB
(iv) Stopband edge: 6 kHz
(v) Sampling rate: 24 kHz
The filter is to be designed by performing a bilinear transformation on an analog system function satisfying the above specifications. Determine the order of the Butterworth analog design to be used to meet the specifications in the digital implementation, and determine the transfer function of the digital filter.
15. The specifications of a bandpass filter are: passband ripple ≤ 3 dB for 20 krad/sec ≤ w ≤ 30 krad/sec; stopband attenuation ≥ 50 dB for w ≤ 10 krad/sec and w ≥ 430 krad/sec. The filter is to be designed by performing the impulse invariance technique on an analog system function satisfying the above specifications. Determine the order of Butterworth, Chebyshev and elliptic analog designs to be used to meet the specifications in the digital implementation. Determine the transfer function of the digital filter in each case.
16. Using the bilinear transformation, design a digital low-pass filter with a passband magnitude characteristic that is constant to within 0.75 dB for frequencies below w = 0.2613π and stopband attenuation of at least 20 dB for frequencies between w = 0.4018π and π.
17. Determine the impulse response and frequency response of the filter defined by y(n) = x(n) + b y(n - 1).
18. A low-pass filter with the following specifications is required:
(i) frequency response within 3 dB from 0 to 100 Hz;
(ii) attenuation at 200 Hz greater than 20 dB; and
(iii) sampling frequency 500 Hz.
Design the IIR filter using the Butterworth approximation and the bilinear transformation.
19. Using the bilinear transformation method, design a low-pass digital filter derived from a second-order Chebyshev filter with 1 dB passband ripple. The 1 dB cut-off frequency is 20 Hz and the sampling rate of the system is 100 Hz.
20. A low-pass filter is designed to satisfy the following requirements:
(a) response flat within 1 dB from dc to 5 kHz;
(b) attenuation ≥ 30 dB for f ≥ 10 kHz.
Determine the minimum order of a filter that will realize the specifications for both Butterworth and Chebyshev filters. Find the transfer function using the bilinear transformation.
21. A low-pass filter with the following specifications is required:
(a) frequency response within 3 dB from 0 to 100 Hz;
(b) attenuation at 200 Hz greater than 20 dB;
(c) sampling frequency 500 Hz.
Design an IIR filter using the Butterworth approximation and the bilinear transformation.
22. Using the bilinear transformation method, design a low-pass filter derived from a second-order Butterworth analog filter with a 3 dB cut-off frequency of 100 Hz. The sampling rate is 1 kHz.
MULTIPLE CHOICE QUESTIONS
1. IIR filters are designed by considering all the
(a) Infinite samples of frequency response (b) Finite samples of impulse response (c) Infinite samples of impulse response (d) None of the above
2. For the analog and digital IIR filters to be causal, the number of zeros should be
(a) ≥ Number of poles (b) ≤ Number of poles (c) = Number of poles (d) Zero
3. IIR filters will not have ________ phase characteristics.
(a) Linear (b) Non-linear (c) Elliptical (d) None of the above
4. For IIR digital filter design, the prototype is an
(a) Analog filter (b) FIR filter (c) Butterworth filter (d) Chebyshev filter
5. The techniques used to transform an analog filter to a digital filter are
(a) Impulse invariant (b) Bilinear (c) Both the above (d) None of the above
6. In which of the following transformations is any strip of width 2π/T in the s-plane mapped into the entire z-plane?
(a) Bilinear (b) Impulse invariant (c) Impulse invariant and bilinear (d) None of the above
7. The properties preserved in analog-to-digital transformations are
(a) Stability (b) Causality (c) Both the above (d) None of the above
8. The tolerances in the passband and the stopband are called
(a) Ideal frequency (b) Higher cut-off frequency (c) Lower cut-off frequency (d) Ripples
9. In __________ transformation, the impulse response of the digital filter is a sampled version of the impulse response of the analog filter.
(a) Bilinear (b) Impulse invariant (c) Impulse invariant and bilinear (d) None of the above
10. The phenomenon of high-frequency components acquiring the identity of low-frequency components is called
(a) Aliasing (b) Warping (c) Monotonic (d) Pre-warping
11. The distortion in the frequency axis due to the non-linear relationship between analog and digital frequency is called
(a) Stability (b) Aliasing (c) Frequency warping (d) Causality
12. The transfer function of a normalized low-pass filter can be transformed to a high-pass filter with cut-off frequency wc by the transformation
(a) s → 1/s (b) s → wc/s (c) s → s/wc (d) s → wc
13. An analog filter has poles at s = 0, s = -2, s = -1. If the impulse invariant transform is employed, the corresponding poles of the digital filter are respectively
(a) 1, e^(-2T), e^T (b) 0, e^(-T/2), e^T (c) 1, e^(2T), e^(-T) (d) 0, e^(-2T), e^(-T)
14. An analog filter transfer function is given by H(s) = 3/(s + 1). When the filter is transformed to a digital filter using the impulse invariant transformation (with T = 1), the poles and zeros of the filter are
(a) Zeros at z = 0, Poles at z = 0.368 (b) Zeros at z = 1, Poles at z = 0
(c) Zeros at z = 0.368, Poles at z = 0 (d) Zeros at z = 0, Poles at z = 1
Key to the Multiple-Choice Questions
1. (c)   2. (b)   3. (a)   4. (a)
5. (c)   6. (b)   7. (c)   8. (d)
9. (b)   10. (a)  11. (c)  12. (b)
13. (d)  14. (a)
5 FIR Filters

5.1 FOURIER SERIES METHOD
In the Fourier series method we mostly consider two cases:
1. Cosine series representation
2. Sine series representation

In the cosine series representation, let Ad(f) denote the desired amplitude response and A(f) the approximation to Ad(f). The form of the finite series is

    A(f) = a0/2 + Σ (m = 1 to M) am cos(2πmTf)          (5.1)

The coefficients are determined from

    am = (4/fs) ∫ (0 to fs/2) Ad(f) cos(2πmTf) df          (5.2)

The exponential form can be expressed as

    A(f) = Σ (m = -M to M) Cm e^(j2πmTf)          (5.3)

where

    Cm = am/2 = (2/fs) ∫ (0 to fs/2) Ad(f) cos(2πmTf) df          (5.4)

We have

    H1(z) = Σ (m = -M to M) Cm z^m

The above form represents an FIR transfer function, but it is non-causal in that it has positive powers of z. The final transfer function in causal form can be expressed as

    H(z) = Σ (i = 0 to 2M) ai z^-i          (5.5)

    ai = C_{M-i}          (5.6)

The FIR transfer function H(z) is of order 2M, which implies that there will be 2M delays in a direct implementation.
Letting τ represent the length of the impulse response, τ = 2MT (5.7). The amplitude response is completely unaffected by the time shift and is still A(f); on the other hand, the additional time delay introduces a phase shift β(f) for H(z), which in radians is given by

    β(f) = -MTw = -2πMTf          (5.8)

Observe that the phase is a linear function of frequency, representing a constant time delay of MT seconds. The delay increases linearly with the order of the filter.

Consider the second case, the sine series representation. The form of the finite series is

    A(f) = Σ (m = 1 to M) bm sin(2πmTf)          (5.9)

The coefficients are determined from

    bm = (4/fs) ∫ (0 to fs/2) Ad(f) sin(2πmTf) df          (5.10)

In exponential form,

    A(f) = Σ (m = -M to M) (dm/j) e^(j2πmTf)          (5.11)

where

    dm = bm/2 = (2/fs) ∫ (0 to fs/2) Ad(f) sin(2πmTf) df          (5.12)

    d_{-m} = -dm          (5.13)

The final transfer function can be expressed as

    H(z) = Σ (i = 0 to 2M) ai z^-i          (5.14)

    ai = d_{M-i}          (5.15)

    β(f) = π/2 - MTw = π/2 - 2πMTf          (5.16)

Example 5.1 A low-pass FIR digital filter is to be designed using the Fourier series approach. The desired amplitude response is
    Ad(f) = 1 for 0 ≤ f < 125 Hz
          = 0 elsewhere in the range 0 ≤ f ≤ f0

The sampling frequency is 1 kHz, and the impulse response is to be limited to 20 delays. Determine the transfer function using the techniques of this section.
Solution: The appropriate representation for the desired amplitude response is an even function of frequency, as shown in the following figure.
Desired amplitude response
The first waveform represents the actual frequency scale; the second represents the normalized frequency scale. The folding frequency is f0 = 500 Hz. For convenience, the problem will be developed in terms of the normalized frequency v = f/f0, where f0 = fs/2 (f0 is the folding frequency and fs the sampling frequency). Thus, the desired response can be treated for expansion purposes as

    Ad(v) = 1 for -0.25 ≤ v < 0.25
          = 0 elsewhere in the range -1 ≤ v ≤ 1

The coefficients can be determined using eq. (5.4):

    Cm = ∫ (0 to 0.25) cos(mπv) dv = [sin(mπv)/(mπ)] evaluated from 0 to 0.25
       = sin(0.25mπ)/(mπ)

The requirement that the impulse response be limited to 20 delays implies that the order of the transfer function should be 20. There could be as many as 21 terms in the impulse response, since one component need not be delayed. The necessary coefficients may be obtained by evaluating the above expression for m = 0 through m = 10 (with C0 = 0.25). The values are

    C0 = 0.25          C1 = 0.22507908    C2 = 0.15915494
    C3 = 0.07502636    C4 = 0             C5 = -0.04501582
    C6 = -0.05305165   C7 = -0.03215415   C8 = 0
    C9 = 0.02500879    C10 = 0.03183099
The initial form of the transfer function is

    H1(z) = Σ (m = -10 to 10) Cm z^m

with coefficients for negative m determined by C_{-m} = Cm. To make this function causal, we must multiply by z^-10. The final result can be expressed as

    H(z) = Σ (i = 0 to 20) ai z^-i

where ai = C_{10-i}.
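The coefficient table of Example 5.1 is easy to reproduce (NumPy only):

```python
import numpy as np

M = 10
C = np.empty(M + 1)
C[0] = 0.25
m = np.arange(1, M + 1)
C[1:] = np.sin(0.25*np.pi*m) / (np.pi*m)       # C_m = sin(0.25 m pi)/(m pi)
taps = np.concatenate([C[::-1], C[1:]])        # a_i = C_{10-i}, using C_{-m} = C_m
print(len(taps))     # 21
print(C[1], C[5])    # 0.22507908..., -0.04501582...
```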
5.2 FIR FILTER DESIGN BASED ON WINDOWING TECHNIQUES
In contrast to the IIR digital filter design approaches, FIR filter design does not have any connection with the design of analog filters. The design of FIR filters is therefore based on a direct approximation of the specified magnitude response, with the often-added requirement that the phase response be linear, thereby avoiding the problem of spectral factorization that complicates the direct design of IIR filters. The main task in digital FIR filter design is to determine the impulse response of the filter, i.e., the discrete impulse response at each discrete instant. The frequency response of an Nth-order causal FIR filter is

    H(e^jw) = Σ (n = 0 to N) h(n) e^(-jnw)          (5.17)

and the design of an FIR filter involves finding the coefficients h(n) that result in a frequency response satisfying a given set of filter specifications. FIR filters have two important advantages over IIR filters. First, they are guaranteed to be stable, even after the filter coefficients have been quantized. Second, they may easily be constrained to have (generalized) linear phase. FIR filters are generally designed to have linear phase.

The ideal low-pass filter has a frequency response as shown in Fig. 5.1(b). The corresponding unit impulse response is easily calculated from the inverse DTFT:

    hD[n] = (1/2π) ∫ (-π to π) HD(e^jw) e^(jwn) dw = (1/2π) ∫ (-wc to wc) e^(jwn) dw
          = (1/2π) [e^(jwc n) - e^(-jwc n)]/(jn) = sin(wc n)/(nπ)          (5.18)

This is the well-known sinc function, shown in Fig. 5.1(a). This impulse response, however, is two-sided, i.e., of infinite length and non-causal. Hence, it is non-realizable. An obvious solution is to truncate the ideal impulse response hD[n] and shift the result over the necessary number of samples to make it causal. However, truncating hD[n] corresponds to multiplication with a rectangular window of the form

    w[n] = 1,  |n| < M/2
         = 0,  elsewhere
104
Digital Signal Processing
The resultant truncated impulse response can be written as:

ht[n] = hD[n] w[n]

Multiplication in the time domain corresponds to convolution in the frequency domain:

Ht(e^{jω}) = HD(e^{jω}) * W(e^{jω}) = (1/2π) ∫_{-π}^{π} HD(e^{jθ}) W(e^{j(ω-θ)}) dθ

where W(e^{jω}) is the DTFT of w[n]. As W(e^{jω}) has a classic sinc shape, truncation of hD[n] leads to overshoots and ripples in the frequency response. This is the well-known Gibbs phenomenon, illustrated in Fig. 5.1 and Fig. 5.2.
Fig. 5.1 Low-pass filtering using the window method: (a) hD[n]: ideal low-pass filter in the time domain; (b) |HD(e^{jω})|: ideal low-pass filter in the frequency domain; (c) w[n]: rectangular window in the time domain; (d) |W(e^{jω})|: rectangular window in the frequency domain; (e) ht[n]: truncated low-pass filter in the time domain; (f) |Ht(e^{jω})|: truncated low-pass filter in the frequency domain.
Fig. 5.2 Gibbs phenomenon as a function of the length of the window w[n].
Gibbs phenomenon: The implication is that the truncated Fourier series approximation xN(t) of a discontinuous signal x(t) will, in general, exhibit high-frequency ripples and overshoot x(t) near the discontinuities. If such an approximation is used in practice, a large enough value of N should be chosen to guarantee that the total energy in these ripples is insignificant. In the limit, of course, we know that the energy in the approximation error vanishes and that the Fourier series representation of a discontinuous signal such as a square wave converges.

Ideally, one would like to convolve HD(e^{jω}) with a unit impulse, which would leave it unaffected. However, W(e^{jω}) has a finite mainlobe height and width, which smooths the sharp transition in the ideal frequency response, and finite sidelobes, which introduce ripples and overshoots in the frequency response. Obviously, one would like to have a mainlobe that is as narrow as possible, and sidelobes that are as small as possible.

Instead of abruptly truncating the coefficients of hD[n], one might try to let them go to zero smoothly. This means multiplying them with a non-rectangular window. Several types of windows are used: triangular (Bartlett), Hanning, Hamming, Blackman, Kaiser, etc., as listed in Table 5.1. The convolution of HD(e^{jω}) with the DTFT of the corresponding window function results in different mainlobe widths and sidelobe heights. As mentioned earlier, the mainlobe width should ideally be narrow, and the sidelobe amplitude should ideally be small. However, for a fixed-length window, these cannot be minimized independently. Some general properties of windows are as follows:
1. As the length N of the window increases, the width of the mainlobe decreases, which results in a decrease in the transition width between passbands and stopbands. The relationship is given approximately by:
N Δf = c

where Δf is the transition width and c is a parameter that depends on the window.
2. The peak sidelobe amplitude of the window is determined by the shape of the window, and it is essentially independent of the window length.
3. If the window shape is changed to decrease the sidelobe amplitude, the width of the mainlobe will generally increase.
Listed in Table 5.2 are the sidelobe amplitudes of several windows along with the approximate transition width and stopband attenuation that result when the given window is used to design an Nth-order low-pass filter.

Table 5.1 Some common windows

Rectangular:            w[n] = 1, 0 ≤ n ≤ N; 0 elsewhere
Bartlett (Triangular):  w[n] = 1 - |2n - (N - 1)|/(N - 1), 0 ≤ n ≤ N - 1; 0 otherwise
Hanning:                w[n] = 0.5 - 0.5 cos(2πn/N), 0 ≤ n ≤ N; 0 elsewhere
Hamming:                w[n] = 0.54 - 0.46 cos(2πn/N), 0 ≤ n ≤ N; 0 elsewhere
Blackman:               w[n] = 0.42 - 0.5 cos(2πn/N) + 0.08 cos(4πn/N), 0 ≤ n ≤ N; 0 elsewhere
Kaiser:                 w[n] = I₀(β{1 - [2n/(N - 1)]²}^{1/2}) / I₀(β), -(N - 1)/2 ≤ n ≤ (N - 1)/2; 0 otherwise
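The tabulated formulas can be written down directly; below is a small numpy sketch (not the book's code) of four of these windows over 0 ≤ n ≤ N, together with the symmetry that the window method relies on. These forms coincide with numpy's built-in `np.hanning(N + 1)`, `np.hamming(N + 1)` and `np.blackman(N + 1)`.

```python
import numpy as np

def rectangular(N):
    # w[n] = 1 for 0 <= n <= N
    return np.ones(N + 1)

def hanning(N):
    # w[n] = 0.5 - 0.5*cos(2*pi*n/N) for 0 <= n <= N
    n = np.arange(N + 1)
    return 0.5 - 0.5 * np.cos(2 * np.pi * n / N)

def hamming(N):
    # w[n] = 0.54 - 0.46*cos(2*pi*n/N) for 0 <= n <= N
    n = np.arange(N + 1)
    return 0.54 - 0.46 * np.cos(2 * np.pi * n / N)

def blackman(N):
    # w[n] = 0.42 - 0.5*cos(2*pi*n/N) + 0.08*cos(4*pi*n/N) for 0 <= n <= N
    n = np.arange(N + 1)
    return 0.42 - 0.5 * np.cos(2 * np.pi * n / N) + 0.08 * np.cos(4 * np.pi * n / N)

w = hamming(52)   # the 53-point Hamming window used later in Example 5.3
```

Every one of these windows is symmetric about its midpoint n = N/2, which is what makes the windowed filter linear-phase.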
Table 5.2 The peak sidelobe amplitude of some common windows and the approximate transition width and stopband attenuation of an Nth-order low-pass filter designed using the given window

Window        Sidelobe Amplitude (dB)   Transition Width (Δf)   Stopband Attenuation (dB)
Rectangular   -13                       0.9/N                   -21
Hanning       -31                       3.1/N                   -44
Hamming       -41                       3.3/N                   -53
Blackman      -57                       5.5/N                   -74
Table 5.3 Summary of ideal impulse responses for standard frequency-selective filters

Filter Type   hD[n], n ≠ 0                                              hD[0]
Low-pass      2fc · sin(nωc)/(nωc)                                      2fc
High-pass     -2fc · sin(nωc)/(nωc)                                     1 - 2fc
Bandpass      2fc2 · sin(nωc2)/(nωc2) - 2fc1 · sin(nωc1)/(nωc1)         2(fc2 - fc1)
Bandstop      2fc1 · sin(nωc1)/(nωc1) - 2fc2 · sin(nωc2)/(nωc2)         1 - 2(fc2 - fc1)

5.3 SUMMARY OF THE WINDOW METHOD OF DETERMINING FIR FILTER COEFFICIENTS
(i) Specify the 'ideal' or desired frequency response of the filter, HD(e^{jω}).
(ii) Obtain the impulse response hD[n] of HD(e^{jω}) through the inverse Fourier transform.
(iii) Select a window function that satisfies the passband or attenuation specifications, and then determine the number of filter coefficients. Note that the cut-off frequency for FIR design is usually taken as the average of the passband and stopband edge frequencies.
(iv) Obtain the values of w[n] for the chosen window function. The actual FIR coefficients h[n] are then computed sample by sample, such that:

h[n] = hD[n] w[n]

Note: MATLAB commands for window-based design are hanning, blackman, hamming, chebwin, kaiser, fir1 and fir2. For optimal FIR filter design, the Remez exchange algorithm (remez) can be used to obtain the solution to the approximation problem.
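The four steps above can be sketched in a few lines of numpy (an illustrative sketch, not the book's code; the function name and the Hamming window centred on n = 0 are assumptions):

```python
import numpy as np

def fir_lowpass_hamming(f_pass, f_stop, fs, N):
    # Step (iii) note: cut-off is the average of passband and stopband edges.
    fc = (f_pass + f_stop) / 2 / fs              # normalized cut-off frequency
    wc = 2 * np.pi * fc
    n = np.arange(-N // 2, N // 2 + 1)           # symmetric indices, N even
    # Step (ii): ideal low-pass impulse response (sinc), with hD[0] = 2*fc.
    hD = np.where(n == 0, 2 * fc,
                  2 * fc * np.sin(wc * n) / (wc * n + (n == 0)))
    # Step (iv): multiply by the window sample by sample (Hamming here).
    w = 0.54 + 0.46 * np.cos(2 * np.pi * n / N)
    return hD * w

h = fir_lowpass_hamming(1500, 2000, 8000, 52)    # the numbers of Example 5.3
```

Shifting the indices by N/2 (adding 26 here) then makes the filter causal, exactly as described in step (iv) and Example 5.3.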
Table 5.4 Summary of important features of common window functions
(Each window function w[n] is defined for 0 < |n| ≤ (M - 1)/2, with w[n] = 0 otherwise. Transition width Δω = |ωs - ωp|; passband ripple = -20 log(1 - δp) dB; stopband attenuation = -20 log(δs) dB.)

Rectangular: w[n] = 1; transition width 1.8π/M; passband ripple 0.7416 dB; mainlobe relative to sidelobe -13 dB; stopband attenuation -21 dB; mainlobe width 4π/(M - 1).

Bartlett (Triangular): w[n] = 1 - 2|n|/(M - 1); passband ripple 0.0275 dB; mainlobe relative to sidelobe -25 dB; stopband attenuation -25 dB; mainlobe width 8π/M.

Hanning: w[n] = 0.5 + 0.5 cos(2πn/M); transition width 6.2π/M; passband ripple 0.0546 dB; mainlobe relative to sidelobe -31 dB; stopband attenuation -44 dB; mainlobe width 8π/M.

Hamming: w[n] = 0.54 + 0.46 cos(2πn/M); transition width 6.6π/M; passband ripple 0.0194 dB; mainlobe relative to sidelobe -41 dB; stopband attenuation -53 dB; mainlobe width 8π/M.

Blackman: w[n] = 0.42 + 0.5 cos(2πn/(M - 1)) + 0.08 cos(4πn/(M - 1)); transition width 11.0π/M; passband ripple 0.0017 dB; mainlobe relative to sidelobe -57 dB; stopband attenuation -74 dB; mainlobe width 12π/M.

Kaiser: w[n] = I₀(β{1 - [2n/(M - 1)]²}^{1/2}) / I₀(β); mainlobe width 2π/M (for single side);
  β = 4.45: transition width 5.86π/M; passband ripple 0.0274 dB; stopband attenuation -50 dB.
  β = 6.76: transition width 8.84π/M; passband ripple 0.00275 dB; stopband attenuation -70 dB.
  β = 8.96: transition width 11.42π/M; passband ripple 0.000275 dB; stopband attenuation -90 dB.

5.4 COMPARISON BETWEEN FIR AND IIR FILTERS
5.4.1 Advantages of FIR Filters
• They can be designed with exact linear phase, and thus no phase distortion will be introduced. The phase response of an IIR filter is non-linear, especially at the band edge.
• FIR filter structures are non-recursive and thus are always stable. Stability of IIR filters cannot always be guaranteed.
• The effects of using a limited number of bits to implement filters, such as round-off noise and coefficient quantization errors, are much less severe in FIR than in IIR filters.
• It is easier to synthesize filters of arbitrary frequency responses using FIR filters.
• It is possible to compute the filter response via the FFT and multiplication in the frequency domain.
5.4.2 Advantages of IIR Filters
• In most cases, the order of an IIR filter is less than the order of an FIR filter with the same magnitude specifications. It has been shown that for practical filter specifications the ratio MFIR/NIIR is typically of the order of ten or more; hence, the IIR filter is computationally more efficient than the corresponding FIR filter.
• In many applications the linearity of the phase response of the digital filter is not an issue, making the IIR filter preferable due to its lower computational requirement.
• Analog filters can be readily transformed into equivalent IIR digital filters meeting similar specifications. This is not possible for FIR filters, as they have no analog counterpart. (Analog filter design is well-established and has design tables.)
• IIR filters can have closed-form design formulas, which implies no need for iterations in the filter design.
• IIR filters are used when the only important requirements are sharp cut-off and high throughput, as IIR filters (especially those using elliptic characteristics) need fewer coefficients than FIR filters.
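The last FIR advantage listed above, computing the filter response via the FFT and frequency-domain multiplication, can be demonstrated in a short sketch (zero-padding both sequences to the full linear-convolution length makes the FFT's circular convolution equal the direct one):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(64)            # an arbitrary input block
h = np.array([0.25, 0.5, 0.25])        # a short FIR filter

y_time = np.convolve(x, h)             # direct linear convolution

L = len(x) + len(h) - 1                # pad so circular convolution == linear
y_freq = np.fft.irfft(np.fft.rfft(x, L) * np.fft.rfft(h, L), L)
```

For long filters and long blocks the FFT route is substantially cheaper than direct convolution.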
5.5 AMPLITUDE RESPONSES OF VARIOUS WINDOW FUNCTIONS

The amplitude responses of the various window functions are shown in Figs. 5.3 to 5.8, plotted as amplitude vs normalized frequency. The normalized frequency is ν = f/f₀, where f₀ is the folding frequency, f₀ = fs/2, and fs is the sampling frequency.

The rectangular window displays a response with a reasonably sharp rate of cut-off, but the sidelobe ripple level is rather high. In the triangular window response the sidelobe level has been reduced significantly, although the sharpness of cut-off is not as great. In the amplitude response,
Fig. 5.3 Amplitude response for rectangular window function.
Fig. 5.4 Amplitude response for Bartlett (triangular) window function.
Fig. 5.5 Amplitude response for Blackman window function.
Fig. 5.6 Amplitude response for Hanning window function.
Fig. 5.7 Amplitude response for Hamming window function.
the mainlobe width of the triangular window is double that of the rectangular window. The Blackman window consists of a constant and two cosine terms. The peak sidelobe level of the Blackman window spectrum is more than 50 dB down from the mainlobe level, but the mainlobe has a width triple that of the rectangular window. The Dolph-Chebyshev window is optimum in the sense that the mainlobe width is as small as possible for a given peak sidelobe ripple level, but this window is not popularly used. The Kaiser window function displays negligible sidelobes in its amplitude response.
Fig. 5.8 Amplitude response for Kaiser window function.

5.6 SOME TYPICAL EXAMPLES ON FIR DIGITAL FILTERS
Example 5.2 Using the Kaiser window method, design a low-pass filter with a cut-off frequency ωc = π/4, a transition width Δω = 0.02π, and stopband ripple δs = 0.01.

Solution: The ripples and the attenuations in dB are related by δp = 1 - 10^{-αp/20} and δs = 10^{-αs/20}. Because αs = -20 log₁₀(0.01) = 40 dB, the Kaiser window parameter is:

β = 0.5842 (40 - 21)^{0.4} + 0.07886 (40 - 21) = 3.4

With Δf = Δω/2π = 0.01, Kaiser's estimate of the required order,

N = (αs - 7.95)/(14.36 Δf) = (40 - 7.95)/(14.36 × 0.01) ≈ 223.2

gives N = 224. Therefore,

h[n] = hd[n] w[n]

where w[n] is the Kaiser window with β = 3.4 and

hd[n] = sin[(n - 112)π/4] / [(n - 112)π]

is the unit sample response of the ideal low-pass filter, delayed by N/2 = 112 samples.
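The β and N computations can be reproduced numerically (a sketch; the order estimate N = (αs - 7.95)/(14.36 Δf) is Kaiser's published formula, and scipy's `signal.kaiserord` implements an equivalent rule):

```python
import numpy as np

delta_s = 0.01
A = -20 * np.log10(delta_s)           # stopband attenuation in dB -> 40

# Kaiser's beta formula for 21 < A < 50:
beta = 0.5842 * (A - 21) ** 0.4 + 0.07886 * (A - 21)

# Kaiser's order estimate, with df = dw / (2*pi) = 0.01:
df = 0.02 * np.pi / (2 * np.pi)
N = (A - 7.95) / (14.36 * df)         # about 223.2, rounded up to 224
```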
Although it is simple to design a filter using the window design method, there are some limitations to this method. First, it is necessary to find a closed-form expression for hd[n] (or it must be approximated using a very long DFT). Second, for a frequency selective filter, the transition widths between frequency bands, and the ripples within these bands, will be approximately the same. As a result, the window design method requires that the filter be designed to the highest tolerances in all bands by selecting the smallest transition width and the smallest ripple. Finally, window design filters are not, in general, optimum in the sense that they do not have the smallest possible ripple for a given filter order and a given set of cut-off frequencies.

Example 5.3 Obtain the coefficients of an FIR low-pass filter to meet the specifications given below using the Hamming window method. Compute only the middle five impulse response coefficients of the filter.
Passband edge frequency = 1.5 kHz
Stopband edge frequency = 2.0 kHz
Stopband attenuation > 50 dB
Sampling frequency = 8 kHz

Solution: The Hamming window gives a stopband attenuation of 53 dB, which meets the required 50 dB. The transition width, normalized by the sampling frequency, is:

Δf = (fstop - fpass)/fs = (2.0 - 1.5)/8 = 0.0625

N ≥ 3.3/Δf = 3.3/0.0625 = 52.8  ⟹  N = 53
We have h[n] = hD[n] w[n], where the ideal digital low-pass filter is:

hD[n] = 2fc sin(nωc)/(nωc) for n ≠ 0, and hD[0] = 2fc

and the Hamming window is given by:

w[n] = 0.54 + 0.46 cos(2πn/N)

The desired cut-off frequency of the low-pass filter is the average of the band-edge frequencies:

fc = (fpass + fstop)/2 = 1.75 kHz  ⟹  fc = 1.75/8 = 0.21875 (normalized)

Noting that h[n] is a symmetrical impulse response, we need to compute only h[0], h[1], h[2], ..., h[26] and use the symmetry property to obtain the other coefficients.

n = 0:  hD[0] = 2fc = 2(0.21875) = 0.4375
        w[0] = 0.54 + 0.46 cos(0) = 1
        h[0] = hD[0] w[0] = 0.4375

n = 1:  hD[1] = 2(0.21875) sin(2π × 0.21875)/(2π × 0.21875) = (1/π) sin(2π × 0.21875) = 0.318 × 0.9808 = 0.3122
        w[1] = 0.54 + 0.46 cos(2π/53) = 0.9966
        h[1] = h[-1] = hD[1] w[1] = 0.3111

n = 2:  hD[2] = 2(0.21875) sin(2(2π)(0.21875))/(2(2π)(0.21875)) = sin(4π × 0.21875)/(2π) = 0.0609
        w[2] = 0.54 + 0.46 cos(2(2π)/53) = 0.9871
        h[2] = h[-2] = hD[2] w[2] = 0.0601

Similarly, h[3] = h[-3] = -0.0856, h[4] = h[-4] = -0.0533, h[5] = h[-5] = 0.0325, ..., h[26] = h[-26] = -9.14 × 10⁻⁴. Note that the indices of the filter coefficients run from -26 to 26. To make the filter causal (necessary for implementation), 26 is added to each index so that the indices start at zero. After applying the shift, the overall results are shown in the following table:
n     hD[n]     w[n]      h[n]        n     hD[n]     w[n]      h[n]
0     -0.0111   0.0800    -0.0009     27    0.3122    0.9966    0.3111
1     0.0024    0.0834    0.0002      28    0.0609    0.9866    0.0601
2     0.0131    0.0934    0.0012      29    -0.0882   0.9701    -0.0856
3     0.0027    0.1099    0.0003      30    -0.0562   0.9473    -0.0533
4     -0.0132   0.1327    -0.0018     31    0.0353    0.9186    0.0325
5     -0.0083   0.1614    -0.0013     32    0.0490    0.8843    0.0433
6     0.0111    0.1957    0.0022      33    -0.0089   0.8450    -0.0075
7     0.0138    0.2350    0.0032      34    -0.0397   0.8013    -0.0318
8     -0.0067   0.2787    -0.0019     35    -0.0069   0.7538    -0.0052
9     -0.0182   0.3262    -0.0059     36    0.0293    0.7031    0.0206
10    0.0000    0.3769    0.0000      37    0.0160    0.6501    0.0104
11    0.0207    0.4299    0.0089      38    -0.0187   0.5954    -0.0111
12    0.0087    0.4846    0.0042      39    -0.0203   0.5400    -0.0109
13    -0.0203   0.5400    -0.0109     40    0.0087    0.4846    0.0042
14    -0.0187   0.5954    -0.0111     41    0.0207    0.4299    0.0089
15    0.0160    0.6501    0.0104      42    0.0000    0.3769    0.0000
16    0.0293    0.7031    0.0206      43    -0.0182   0.3262    -0.0059
17    -0.0069   0.7538    -0.0052     44    -0.0067   0.2787    -0.0019
18    -0.0397   0.8013    -0.0318     45    0.0138    0.2350    0.0032
19    -0.0089   0.8450    -0.0075     46    0.0111    0.1957    0.0022
20    0.0490    0.8843    0.0433      47    -0.0083   0.1614    -0.0013
21    0.0353    0.9186    0.0325      48    -0.0132   0.1327    -0.0018
22    -0.0562   0.9473    -0.0533     49    0.0027    0.1099    0.0003
23    -0.0882   0.9701    -0.0856     50    0.0131    0.0934    0.0012
24    0.0609    0.9866    0.0601      51    0.0024    0.0834    0.0002
25    0.3122    0.9966    0.3111      52    -0.0111   0.0800    -0.0009
26    0.4375    1.0000    0.4375
The filter coefficients can be listed as h(0), h(1), h(2), ..., h(26), ..., h(50), h(51), h(52), with the symmetry

h(0) = h(52), h(1) = h(51), h(2) = h(50), ..., h(23) = h(29), h(24) = h(28), h(25) = h(27)

and h(26) standing alone as the middle coefficient. The final transfer function appears in the following form:

H(z) = Σ_{i=0}^{M-1} h_i z^{-i}
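The middle five coefficients of this example can be re-derived in a short sketch (illustrative code, using the example's fc = 0.21875 and N = 53):

```python
import numpy as np

fc, N = 0.21875, 53
wc = 2 * np.pi * fc

def hD(n):
    # ideal low-pass impulse response, hD[0] = 2*fc
    return 2 * fc if n == 0 else 2 * fc * np.sin(wc * n) / (wc * n)

def w(n):
    # Hamming window centred on n = 0
    return 0.54 + 0.46 * np.cos(2 * np.pi * n / N)

# symmetric coefficients h[-2..2]; adding 26 to each index makes them causal
middle = [hD(n) * w(n) for n in (-2, -1, 0, 1, 2)]
causal = {n + 26: h for n, h in zip((-2, -1, 0, 1, 2), middle)}
```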
Example 5.4 Determine the frequency response of a linear phase FIR filter given by y(n) = A1 x(n) + A2 x(n - 1) + A3 x(n - 2) + A2 x(n - 3) + A1 x(n - 4).

Solution: Taking the Z-transform of the given difference equation, we get:

Y(z) = A1 X(z) + A2 z^{-1} X(z) + A3 z^{-2} X(z) + A2 z^{-3} X(z) + A1 z^{-4} X(z)
Y(z) = X(z)[A1 + A2 z^{-1} + A3 z^{-2} + A2 z^{-3} + A1 z^{-4}]

H(z) = Y(z)/X(z) = A1 + A2 z^{-1} + A3 z^{-2} + A2 z^{-3} + A1 z^{-4}

H(z) = z^{-2}[A1 z² + A2 z + A3 + A2 z^{-1} + A1 z^{-2}] = z^{-2}[A1(z² + z^{-2}) + A2(z + z^{-1}) + A3]

The frequency response of this linear phase FIR filter is obtained by replacing z with e^{jω}:

H(e^{jω}) = e^{-2jω}[A1(e^{2jω} + e^{-2jω}) + A2(e^{jω} + e^{-jω}) + A3]

H(e^{jω}) = e^{-2jω}[2A1 cos(2ω) + 2A2 cos(ω) + A3]
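The closed form derived above can be verified numerically for arbitrary illustrative values of A1, A2 and A3:

```python
import numpy as np

A1, A2, A3 = 0.2, 0.3, 0.4                     # arbitrary illustrative values
h = np.array([A1, A2, A3, A2, A1])             # the symmetric tap vector

w = 0.7                                        # any frequency in [0, pi]
H_direct = np.sum(h * np.exp(-1j * w * np.arange(5)))   # sum h(n) e^{-jnw}
H_closed = np.exp(-2j * w) * (2 * A1 * np.cos(2 * w) + 2 * A2 * np.cos(w) + A3)
```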
Example 5.5 Determine the frequency response of the FIR filter defined by y(n) = 0.25 x(n) + x(n - 1) + 0.25 x(n - 2). Calculate the phase delay and the group delay.
Solution: Given that y(n) = 0.25 x(n) + x(n - 1) + 0.25 x(n - 2).

Taking the Z-transform, we get:

Y(z) = 0.25 X(z) + z^{-1} X(z) + 0.25 z^{-2} X(z)

∴ H(z) = Y(z)/X(z) = 0.25 + z^{-1} + 0.25 z^{-2}    (1)

∴ H(z) = h(0) + h(1) z^{-1} + h(2) z^{-2}, where h(0) = h(2) = 0.25 and h(1) = 1.

Equation (1) can also be written as H(z) = Σ_{n=0}^{2} h(n) z^{-n}. Here M = 3 (since M - 1 = 2).

Phase delay: τp = (M - 1)/2 = 1

From equation (1):

H(z) = z^{-1}[h(0) z + h(1) + h(2) z^{-1}] = z^{-1}[h(0)(z + z^{-1}) + h(1)]    (since h(0) = h(2))

Substituting z = e^{jω}, the frequency response of the FIR filter is obtained:

H(e^{jω}) = e^{-jω}[h(0)(e^{jω} + e^{-jω}) + h(1)] = e^{-jω}[h(1) + 2h(0) cos ω] = e^{-jω} M₁(ω)

where

M₁(ω) = h(1) + 2h(0) cos ω = 1 + 0.5 cos ω    (magnitude response)
φ(ω) = -ω    (phase response)

∴ Phase delay τp = -φ(ω)/ω = 1, and group delay τg = -d(φ(ω))/dω = 1.
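The unit phase and group delays found above can be confirmed numerically (a sketch; the frequency grid avoids ω = 0, where the phase-delay ratio is 0/0):

```python
import numpy as np

h = np.array([0.25, 1.0, 0.25])                  # the filter of Example 5.5
w = np.linspace(0.1, 2.0, 50)                    # frequencies, avoiding w = 0
H = sum(h[n] * np.exp(-1j * w * n) for n in range(3))

magnitude = 1 + 0.5 * np.cos(w)                  # M1(w), positive on this range
phase = np.angle(H)                              # should be exactly -w
phase_delay = -phase / w                         # tau_p, one sample everywhere
```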
Example 5.6 Use rectangular and Hanning windows to find the fourth-order linear phase FIR filter that approximates the ideal low-pass filter e^{-j2.5ω} for |ω| ≤ 1 and zero for 1 ≤ |ω| ≤ π.

Solution: Given that the frequency response of the filter is:

Hd(e^{jω}) = e^{-j(2.5ω)} for |ω| ≤ 1, and 0 for 1 ≤ |ω| ≤ π

(a) Using the rectangular window: The rectangular window sequence is given by

WR(n) = 1 for -(N - 1)/2 ≤ n ≤ (N - 1)/2, and 0 otherwise

where N is the order of the filter. Here N = 4; therefore the range will be 0 to N - 1:

WR(n) = 1 for 0 ≤ n ≤ 3, and 0 otherwise

Applying the inverse Fourier transform to Hd(e^{jω}), we get:

hd(n) = (1/2π) ∫_{-∞}^{∞} Hd(e^{jω}) e^{jωn} dω = (1/2π) ∫_{-1}^{1} e^{-j(2.5ω)} e^{jωn} dω = (1/2π) ∫_{-1}^{1} e^{j(n-2.5)ω} dω
      = (1/2π) [e^{j(n-2.5)ω}/(j(n - 2.5))]_{-1}^{1}
      = (1/2π) [(e^{j(2.5-n)} - e^{-j(2.5-n)})/(j(2.5 - n))]
      = (1/2π) [2j sin(2.5 - n)/(j(2.5 - n))]     (since e^{jθ} - e^{-jθ} = 2j sin θ)

hd(n) = sin(2.5 - n)/(π(2.5 - n))

Then the values of hd(n) for n = 0 to 3 are:

hd(0) = sin 2.5/(2.5π) = 0.076
hd(1) = sin 1.5/(1.5π) = 0.212
hd(2) = sin 0.5/(0.5π) = 0.305
hd(3) = sin(-0.5)/(-0.5π) = 0.305

The filter coefficients using the rectangular window are h(n) = hd(n) WR(n), i.e., h(n) = hd(n) for 0 ≤ n ≤ 3.

Then, the transfer function of the required filter is:

H(Z) = Z^{-3} [Σ_{n=0}^{3} h(n)(Z^n + Z^{-n})]
     = Z^{-3} [0.076(2) + 0.212(Z + Z^{-1}) + 0.305(Z² + Z^{-2}) + 0.305(Z³ + Z^{-3})]

∴ H(Z) = 0.305 + 0.305 Z^{-1} + 0.212 Z^{-2} + 0.152 Z^{-3} + 0.212 Z^{-4} + 0.305 Z^{-5} + 0.305 Z^{-6}

(b) Using the Hanning window: The Hanning window sequence is given by

WHn(n) = 0.5 + 0.5 cos(2πn/(N - 1)) for -(N - 1)/2 ≤ n ≤ (N - 1)/2, and 0 otherwise

Since N = 4 here, the range of n for which WHn(n) has a value is 0 to 3:

WHn(n) = 0.5 + 0.5 cos(2πn/3) for 0 ≤ n ≤ 3, and 0 otherwise

The inverse Fourier transform of Hd(e^{jω}) is obtained as before, so the values of hd(n) for n = 0 to 3 remain the same, i.e., hd(0) = 0.076, hd(1) = 0.212, hd(2) = 0.305, hd(3) = 0.305.

Then, the values of WHn(n) for n = 0 to 3 are obtained as:

WHn(0) = 0.5 + 0.5 = 1
WHn(1) = 0.5 + 0.5 cos(2π/3) = 0.25
WHn(2) = 0.5 + 0.5 cos(4π/3) = 0.25
WHn(3) = 0.5 + 0.5 cos(2π) = 1

Then, the filter coefficients are obtained as h(n) = hd(n) WHn(n):

h(0) = hd(0) WHn(0) = 0.076 × 1 = 0.076
h(1) = hd(1) WHn(1) = 0.212 × 0.25 = 0.053
h(2) = hd(2) WHn(2) = 0.305 × 0.25 = 0.076
h(3) = hd(3) WHn(3) = 0.305 × 1 = 0.305

∴ The transfer function of the required filter is:

H(Z) = Z^{-3} [Σ_{n=0}^{3} h(n)(Z^n + Z^{-n})]
     = Z^{-3} [0.076(2) + 0.053(Z + Z^{-1}) + 0.076(Z² + Z^{-2}) + 0.305(Z³ + Z^{-3})]

∴ H(Z) = 0.305 + 0.076 Z^{-1} + 0.053 Z^{-2} + 0.152 Z^{-3} + 0.053 Z^{-4} + 0.076 Z^{-5} + 0.305 Z^{-6}
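The Hanning window values used in this example can be reproduced in two lines (a sketch using the example's own WHn(n) formula):

```python
import numpy as np

# WHn(n) = 0.5 + 0.5 cos(2*pi*n / 3) for n = 0..3 (N = 4, so N - 1 = 3)
n = np.arange(4)
WHn = 0.5 + 0.5 * np.cos(2 * np.pi * n / 3)
```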
Problems
1. Compare FIR filters and IIR filters.
2. Mention the advantages of FIR filters. Explain the design procedure of an FIR digital filter using the Fourier series method.
3. Based on what factors are the FIR and IIR filters selected? Discuss.
4. Discuss the features of FIR filter design using the Kaiser approach.
5. What are commonly used windows? Explain.
6. Compare the advantages and disadvantages of various window functions.
7. Why are the FIR filters called constant phase filters?
8. Prove that FIR filters show linearity in phase.
9. With reference to digital filters, what do you mean by finite impulse response?
10. Derive the transfer functions of FIR and IIR systems.
11. What are the conditions imposed on the FIR sequence in order that the filter will have a linear phase response?
12. Explain the various window function techniques for the design of FIR digital filters.
13. Realize the FIR filter hardware in the direct form model. The desired response of a certain FIR filter is given by
    Hd(f) = 1 for 0 ≤ f ≤ 1 kHz, and 0 for f > 1 kHz
    Let the sampling rate fs = 10 kHz, and let the impulse response be of 1 millisecond duration. Use the Hamming window and compute the impulse response of the FIR filter.
14. (a) Explain the windowing technique to design an FIR filter.
    (b) The desired amplitude response of a certain bandpass FIR filter can be stated as
    Ad(f) = 1 for 250 ≤ f ≤ 750 Hz, and 0 elsewhere in the range 0 ≤ f ≤ f₀
    The sampling rate is 2 kHz and the impulse response is to be limited to 20 delays. Using the Hamming window function, determine the transfer function.
15. Design a Chebyshev filter for the following specifications:
    (a) 3 dB bandwidth ωc = 10 rad/sec
    (b) Ripple in the passband ≤ 0.5 dB
    (c) At least 25 dB attenuation for ω ≥ 20 rad/sec
    Determine the order of the filter and its transfer function.
16. (a) State the merits and demerits of FIR digital filters over IIR digital filters.
    (b) Design an FIR filter with a Bartlett window with N = 10 for a filter with specification H(ω) = 1 for -10 ≤ ω ≤ 10. Sampling frequency is 2 kHz.
Multiple-Choice Questions
1. In FIR filters the Gibbs oscillations are due to
   (a) Non-linear magnitude characteristics  (b) Non-linear phase characteristics
   (c) Sharp transition from passband to stopband  (d) Gradual transition from passband to stopband
2. Symmetric impulse response having even number of samples can be used to design
   (a) Low-pass and high-pass filters  (b) Low-pass and bandpass filters
   (c) Low-pass and bandstop filters  (d) Only low-pass filters
3. Raised cosine windows are also called generalized
   (a) Hamming window  (b) Hanning window  (c) Rectangular window  (d) Blackman window
4. The symmetric impulse response having odd number of samples has
   (a) Symmetric magnitude function  (b) Antisymmetric magnitude function
   (c) Both (a) and (b)  (d) None of the above
5. The width of the mainlobe in the rectangular window spectrum is
   (a) 4π/N  (b) 16π/N  (c) 8π/N  (d) 2π/N
6. Symmetric impulse response having even number of samples cannot be used to design
   (a) Low-pass filters  (b) Bandstop filters  (c) High-pass filters  (d) Bandpass filters
7. In the Hamming window spectrum the sidelobe magnitude remains constant with
   (a) Decreasing ω  (b) Constant ω  (c) Increasing ω  (d) None of the above
8. The width of the mainlobe should be _____________ and it should contain as much of the total energy as possible
   (a) Large  (b) Medium  (c) Very large  (d) Small
9. A symmetric impulse response having odd number of samples, N = 7, has a centre of symmetry equal to
   (a) 2  (b) 5  (c) 3.5  (d) 3
10. In the Hamming window spectrum the sidelobe magnitude remains constant with
   (a) Decreasing ω  (b) Constant ω  (c) Increasing ω  (d) None of the above
11. The condition for the impulse response to be anti-symmetric is
   (a) h(n) = -h(N - 1 - n)  (b) h(n) = h(-n)  (c) h(n) = h(N - 1 - n)  (d) All of the above
12. The ideal filters are
   (a) Causal  (b) Non-causal  (c) May be causal or may not be causal  (d) None of the above
13. The frequency response of a digital filter is
   (a) Periodic  (b) Non-periodic  (c) May be periodic or non-periodic  (d) None of the above
14. The abrupt truncation of Fourier series results in oscillations in
   (a) Stopband  (b) Passband  (c) Both (a) and (b)  (d) None of the above
15. The condition for the impulse response to be anti-symmetric is
   (a) h(n) = -h(N - 1 - n)  (b) h(n) = h(-n)  (c) h(n) = h(N - 1 - n)  (d) All of the above
16. For the Blackman window, the peak sidelobe magnitude in dB is
   (a) 13  (b) -31  (c) -41  (d) -58
17. For the Hamming window, the peak sidelobe magnitude in dB is
   (a) 13  (b) -31  (c) -41  (d) -58
18. For the Hanning window, the peak sidelobe magnitude in dB is
   (a) 13  (b) -31  (c) -41  (d) -58
Key to the Multiple-Choice Questions
1. (a)   2. (b)   3. (a)   4. (a)   5. (a)   6. (c)   7. (c)   8. (d)   9. (d)
10. (c)  11. (a)  12. (b)  13. (a)  14. (c)  15. (b)  16. (b)  17. (c)  18. (d)
Chapter 6: Realization of Digital Filters

6.1 STRUCTURAL REPRESENTATION OF THE DTLTI SYSTEMS USING Z-TRANSFORM
Let the DTLTI system be characterized by the following difference equation:

y[n] = Σ_{k=0}^{p} a_k x[n - k] - Σ_{k=1}^{q} b_k y[n - k]    (6.1)

The transfer function of the system can be represented as:

H(z) = Y(z)/X(z) = [Σ_{k=0}^{p} a_k z^{-k}] / [1 + Σ_{k=1}^{q} b_k z^{-k}]    (6.2)

In any transfer function of a Discrete Time Linear Time Invariant (DTLTI) system, if z is replaced by e^{jω}, the frequency response of the system is obtained:

H(e^{jω}) = [Σ_{k=0}^{p} a_k e^{-jkω}] / [1 + Σ_{k=1}^{q} b_k e^{-jkω}]    (6.3)
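Eq. (6.3) can be evaluated directly (a minimal sketch; the coefficient layout follows eq. (6.2), and the test values are arbitrary):

```python
import numpy as np

def freq_response(a, b, w):
    """Evaluate eq. (6.3): a[k], k = 0..p feed-forward coefficients,
    b[k], k = 1..q feedback coefficients (passed as b[0]..b[q-1])."""
    num = sum(a[k] * np.exp(-1j * k * w) for k in range(len(a)))
    den = 1 + sum(b[k - 1] * np.exp(-1j * k * w) for k in range(1, len(b) + 1))
    return num / den

H0 = freq_response([1.0, 1.0], [0.5], 0.0)   # H(e^{j0}) = (1 + 1)/(1 + 0.5)
```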
In the following explanation we will consider the basic operations outlined in Fig. 6.1 for developing realizations. The operations will now be discussed individually.

The unit delay operation shown in Fig. 6.1(a) represents the process of delaying or storing a particular sample by T seconds, at which time it appears at the output. Since all operations in a uniformly-sampled system occur at integer multiples of the basic sample time T, we refer to a delay of T as a "unit delay". The output of the unit delay block can be expressed as:

y[n] = x[n - 1]    (6.4)

The adder/subtractor operation shown in Fig. 6.1(b) represents the arithmetic process of combining the two signals x[n] and y[n] by either addition or subtraction in the form:

w[n] = x[n] ± y[n]    (6.5)
In many realization diagrams, more than two signals will be combined in the same unit. In such cases, the signs adjacent to the various input branches may be used to identify whether a given value is added or subtracted. If the signs are omitted, it will be understood that all of the signals are added algebraically in the adder itself.
Fig. 6.1 Basic operations in discrete-time system realizations.
The constant multiplier shown in Fig. 6.1(c) represents the arithmetic process of multiplying the signal x[n] by a constant according to the equation:

y[n] = A x[n]    (6.6)

The constant A may be rigidly fixed in the system, or it may be programmable. However, if the system is considered to be time-invariant, it will usually be constant for definite intervals of time. The branching operation shown in Fig. 6.1(d) simply refers to the process of simultaneously connecting a signal to two or more points in the system. If y1[n] and y2[n] represent the two points at which x[n] is to appear, we have:

y1[n] = x[n],  y2[n] = x[n]    (6.7)
The primary limitation on branching in hardware design is the fact that for each additional branch, more power must be supplied by the previous stage. Consequently, there will be some maximum number of branches that can be driven by a specific digital circuit, depending on the type of circuitry and the nature of the input circuitry that follows. The signal multiplier shown in Fig. 6.1(e) refers to the process of multiplication of two dynamic signals x[n] and y[n]. The output is:

w[n] = x[n] y[n]    (6.8)
The signal multiplier differs from the constant multiplier in the sense that the signal unit multiplies samples of two separate discrete-time signals whose values may both continually vary with time.
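These basic operations, unit delay, constant multiplier and adder, combine into a difference-equation loop; a minimal sketch (names and coefficients are illustrative) for y[n] = a·y[n-1] + b0·x[n] + b1·x[n-1]:

```python
def first_order_filter(x, a, b0, b1):
    y_prev = 0.0        # unit delay of y (eq. 6.4): stores the previous output
    x_prev = 0.0        # unit delay of x
    y = []
    for xn in x:
        # constant multipliers feed a three-input adder
        yn = a * y_prev + b0 * xn + b1 * x_prev
        y.append(yn)
        x_prev, y_prev = xn, yn        # shift both delay elements
    return y

# impulse response of H(z) = (1 + 0.2 z^-1) / (1 - 0.5 z^-1)
out = first_order_filter([1.0, 0.0, 0.0], 0.5, 1.0, 0.2)
```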
6.2 CASCADE AND PARALLEL REALIZATION FORMS
The cascade canonic form (or series form) is obtained by decomposing H(z) into the product of several simpler transfer functions, as given by:

H(z) = a₀ H₁(z) H₂(z) ⋯ H_l(z)    (6.9)
     = a₀ Π_{i=1}^{l} H_i(z)    (6.10)

In most cases, the individual transfer functions are chosen to be either first-order or second-order sections. A first-order section will have the form:

H_i(z) = (1 + a_{i1} z^{-1}) / (1 + b_{i1} z^{-1})    (6.11)

A second-order section will have the form:

H_i(z) = (1 + a_{i1} z^{-1} + a_{i2} z^{-2}) / (1 + b_{i1} z^{-1} + b_{i2} z^{-2})    (6.12)

Note that if it is desired to place any a_{i0} coefficients in any of the individual sections, the overall gain constant in eq. (6.10) would not be a₀. The general layout of a cascade realization is shown in Fig. 6.2. The individual sections may be realized by either of the direct methods. The typical forms for these sections are illustrated in Fig. 6.3.
Fig. 6.2 Form of a cascade or series realization.
Fig. 6.3 Typical forms for sections used in cascade realization.
The parallel canonic form is obtained by decomposing H(z) into the sum of several simpler transfer functions (first- or second-order) and a constant, as expressed by:

H(z) = A + H₁(z) + H₂(z) + ⋯ + H_r(z) = A + Σ_{i=1}^{r} H_i(z)    (6.13)

Because of the presence of the constant term in eq. (6.13), a first-order section can be chosen in the simple form:

H_i(z) = a_{i0} / (1 + b_{i1} z^{-1})    (6.14)

A second-order section can be chosen in the form:

H_i(z) = (a_{i0} + a_{i1} z^{-1}) / (1 + b_{i1} z^{-1} + b_{i2} z^{-2})    (6.15)

The general layout of a parallel realization is shown in Fig. 6.4. Once again, the individual sections are illustrated in Fig. 6.5 using the direct form 2 realization in each case.

Fig. 6.4 Form of a parallel realization.
Fig. 6.5 Typical forms for sections used in parallel realization.

Both the cascade and the parallel forms require that the transfer function be mathematically decomposed for realization. If the poles and zeros of the overall transfer function are known, the sections of a cascade realization can be obtained by grouping complex conjugate pairs of poles and complex conjugate pairs of zeros to produce second-order sections, and by grouping real poles and real zeros to produce either first- or second-order sections. Of course, a pair of real zeros may be grouped with a pair of complex conjugate poles, or vice versa.

The same procedure discussed for the cascade realization applies to the parallel realization as far as the poles are concerned. The numerator polynomials, however, cannot be determined directly from the zeros; instead, it is necessary to first carry out a partial fraction expansion in terms of individual poles or in terms of a combination of first-order and second-order denominator polynomials.
6.3 SOME TYPICAL EXAMPLES ON REALIZATION OF FILTERS
Example 6.1 Consider the first-order system:

H(z) = (b₀ + b₁ z^{-1}) / (1 - a z^{-1}),  |z| > |a|

h[n] = b₀ aⁿ u[n] + b₁ a^{n-1} u[n - 1]  → numerical convolution

y[n] - a y[n - 1] = b₀ x[n] + b₁ x[n - 1]

The simplest special case, y[n] - y[n - 1] = x[n], is realized as shown in Fig. 6.6.

Fig. 6.6 Structural representation of y[n] - y[n - 1] = x[n].

Example 6.2 Obtain the realization of the following equation:

y[n] - a y[n - 1] = b₀ x[n] + b₁ x[n - 1]
Fig. 6.7
Block diagram and signal flow graph of the Example 6.2.
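The two descriptions in Example 6.1 — the recursive difference equation and numerical convolution with h[n] — produce the same output. A minimal sketch, assuming illustrative coefficient values (a = 0.5, b0 = 2, b1 = 3, not from the text):

```python
import numpy as np

# First-order system of Examples 6.1/6.2:
#   recursive form:  y[n] = a*y[n-1] + b0*x[n] + b1*x[n-1]
#   convolution form: h[n] = b0*a^n*u[n] + b1*a^(n-1)*u[n-1]
a, b0, b1 = 0.5, 2.0, 3.0
N = 40
x = np.random.default_rng(2).standard_normal(N)

# recursive (direct) realization
y = np.zeros(N)
for n in range(N):
    y[n] = a * (y[n-1] if n > 0 else 0.0) + b0 * x[n] + b1 * (x[n-1] if n > 0 else 0.0)

# impulse response of the same system, truncated to N samples
idx = np.arange(N)
h = b0 * a**idx
h[1:] += b1 * a**(idx[1:] - 1)
y_conv = np.convolve(h, x)[:N]

assert np.allclose(y, y_conv)
```

Truncating h to N samples is exact here because output sample n only involves h[0..n].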
Example 6.3 Realize the following transfer function:

H(z) = b / (1 + a_1 z^{-1} - a_2 z^{-2})

y[n] + a_1 y[n-1] - a_2 y[n-2] = b x[n]
y[n] = -a_1 y[n-1] + a_2 y[n-2] + b x[n]

Fig. 6.8 Realization of Example 6.3.
Example 6.4 Represent the following system difference equation using Direct Form I and Direct Form II:

y[n] = Σ_{k=1}^{p} a_k y[n-k] + Σ_{k=0}^{q} b_k x[n-k]  ⇔  H(z) = [Σ_{k=0}^{q} b_k z^{-k}] / [1 - Σ_{k=1}^{p} a_k z^{-k}]

Fig. 6.9 Direct form-I for Example 6.4.

Equations for Direct Form I (zeros section first, then poles):

V(z) = H_1(z) X(z) = [Σ_{k=0}^{q} b_k z^{-k}] X(z)
Y(z) = H_2(z) V(z) = V(z) / [1 - Σ_{k=1}^{p} a_k z^{-k}]

v[n] = Σ_{k=0}^{q} b_k x[n-k]
y[n] = Σ_{k=1}^{p} a_k y[n-k] + v[n]

Equations for Direct Form II (poles section first, then zeros):

W(z) = H_2(z) X(z) = X(z) / [1 - Σ_{k=1}^{p} a_k z^{-k}]
Y(z) = H_1(z) W(z) = [Σ_{k=0}^{q} b_k z^{-k}] W(z)

w[n] = Σ_{k=1}^{p} a_k w[n-k] + x[n]
y[n] = Σ_{k=0}^{q} b_k w[n-k]
Fig. 6.10 Direct form-I and Direct form-II for Example 6.4.
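The w[n] equations above can be sketched directly: a single shared delay line implements both the feedback (pole) and feedforward (zero) parts, which is why Direct Form II needs only max(p, q) delays. The coefficients below are illustrative, not from the text:

```python
import numpy as np

def direct_form_2(b, a_fb, x):
    """Direct Form II: b = [b0..bq]; a_fb = [a1..ap] with the sign convention
    y[n] = sum a_k y[n-k] + sum b_k x[n-k] used in Example 6.4."""
    M = max(len(b) - 1, len(a_fb))
    w = np.zeros(M + 1)              # w[0] holds the current w[n]
    y = np.zeros(len(x))
    for n in range(len(x)):
        w[0] = x[n] + sum(a_fb[k-1] * w[k] for k in range(1, len(a_fb) + 1))
        y[n] = sum(b[k] * w[k] for k in range(len(b)))
        w[1:] = w[:-1].copy()        # shift the shared delay line
    return y

b = [1.0, 0.4, 0.2]                  # b0, b1, b2 (illustrative)
a_fb = [0.3, -0.1]                   # a1, a2 (illustrative)

x = np.random.default_rng(3).standard_normal(50)
y2 = direct_form_2(b, a_fb, x)

# Direct Form I for comparison
y1 = np.zeros(len(x))
for n in range(len(x)):
    y1[n] = sum(b[k] * x[n-k] for k in range(len(b)) if n - k >= 0) \
          + sum(a_fb[k-1] * y1[n-k] for k in range(1, len(a_fb) + 1) if n - k >= 0)

assert np.allclose(y1, y2)
```

Both forms compute identical outputs; only the internal state differs.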
Note: It is worth noting that in Direct Form II the number of unit delays is reduced to half of that required by Direct Form I (more precisely, from p + q delays to max(p, q)). Direct Form I and Direct Form II are also called canonical forms.

Example 6.5 Develop both (a) cascade and (b) parallel realization schemes for the function

H(z) = (3 + 3.6 z^{-1} + 0.6 z^{-2}) / (1 + 0.1 z^{-1} - 0.2 z^{-2})

Solution: (a) It should be pointed out that the decomposition is done here only for the sake of illustration: the system is a second-order function and would not normally require a cascade or a parallel type of realization. Moreover, it would not be possible to decompose it into functions with real coefficients if either the poles or the zeros were complex; in this case, however, the poles and zeros are all real. As a first step in the decomposition, the transfer function is rewritten in positive powers of z:

H(z) = (3z^2 + 3.6z + 0.6) / (z^2 + 0.1z - 0.2)    (6.16)

Factorization of the numerator and denominator polynomials yields zeros at -1 and -0.2 and poles at -0.5 and 0.4. (Note that the system is stable.) The function may then be expressed as

H(z) = 3(z + 1)(z + 0.2) / [(z + 0.5)(z - 0.4)]    (6.17)

As an arbitrary grouping, the first polynomial factor in the numerator is grouped with the first factor in the denominator, and a similar grouping is used for the second factors; the gain constant 3 is maintained as a separate factor. After conversion back to negative powers of z, the separate functions may be expressed as

H_1(z) = (1 + z^{-1}) / (1 + 0.5 z^{-1})    (6.18)
H_2(z) = (1 + 0.2 z^{-1}) / (1 - 0.4 z^{-1})    (6.19)

The realization is shown in Fig. 6.11(a).

Fig. 6.11 (a) Realization for the system of Example 6.5.
(b) The parallel development is best achieved by first expanding H(z)/z in a partial fraction expansion:

H(z)/z = 3(z + 1)(z + 0.2) / [z(z + 0.5)(z - 0.4)] = A_1/z + A_2/(z + 0.5) + A_3/(z - 0.4)    (6.20)

The coefficients are determined to be A_1 = -3, A_2 = -1 and A_3 = 7. After multiplication by z and conversion back to negative powers of z, the various quantities may be expressed as

A = -3    (6.21)
H_1(z) = -1 / (1 + 0.5 z^{-1})    (6.22)
H_2(z) = 7 / (1 - 0.4 z^{-1})    (6.23)

The realization is shown in Fig. 6.11(b).

Fig. 6.11 (b) Realization for the system of Example 6.5.
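The arithmetic of Example 6.5 can be double-checked by polynomial algebra, with coefficient lists in ascending powers of z^{-1} (np.polymul is order-agnostic convolution; the helper `padd` is a hypothetical name for coefficient-aligned addition):

```python
import numpy as np

b = np.array([3.0, 3.6, 0.6])        # numerator of H(z)
a = np.array([1.0, 0.1, -0.2])       # denominator of H(z)

# (a) cascade: H(z) = 3 * (1 + z^-1)(1 + 0.2 z^-1) / [(1 + 0.5 z^-1)(1 - 0.4 z^-1)]
assert np.allclose(3 * np.polymul([1, 1], [1, 0.2]), b)
assert np.allclose(np.polymul([1, 0.5], [1, -0.4]), a)

# (b) parallel: H(z) = -3 - 1/(1 + 0.5 z^-1) + 7/(1 - 0.4 z^-1)
def padd(p, q):
    n = max(len(p), len(q))
    return np.pad(np.asarray(p, float), (0, n - len(p))) + \
           np.pad(np.asarray(q, float), (0, n - len(q)))

# put the three branches over the common denominator a and add the numerators
num = padd(padd(-3 * a, np.polymul([-1], [1, -0.4])), np.polymul([7], [1, 0.5]))
assert np.allclose(num, b)
```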
Example 6.6 The partially factored form of a certain transfer function is given by

H(z) = 2(z - 1)(z^2 + 1.4142136z + 1) / [(z + 0.5)(z^2 - 0.9z + 0.81)]    (6.24)

Develop a cascade realization of the function using a first-order section and a second-order section.

Solution: As a check to determine the various possible grouping combinations, the roots of the two quadratics will be determined. Factorization of the numerator quadratic reveals that two zeros are located at z = 1∠±135°. A similar procedure applied to the denominator quadratic indicates that two poles are located at z = 0.9∠±60°. As long as we are restricted to real coefficients, this means that the decomposition must contain a second-order section representing the two second-order polynomials and a first-order section representing the two first-order polynomials. Arranging in negative powers of z and allowing for the gain constant a_0 = 2, we have

H_1(z) = (1 - z^{-1}) / (1 + 0.5 z^{-1})    (6.25)

H_2(z) = (1 + 1.4142136 z^{-1} + z^{-2}) / (1 - 0.9 z^{-1} + 0.81 z^{-2})    (6.26)

The realization is shown in Fig. 6.12.

Fig. 6.12 Realization for the system of Example 6.6.
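A quick numerical cross-check of Example 6.6: evaluating eq. (6.24) and the product of the gain and the two sections of eqs. (6.25)–(6.26) at a few arbitrary test points on the unit circle should give identical values:

```python
import numpy as np

z = np.exp(1j * np.linspace(0.2, 2.8, 7))     # arbitrary test points, |z| = 1
H = 2 * (z - 1) * (z**2 + 1.4142136*z + 1) / ((z + 0.5) * (z**2 - 0.9*z + 0.81))

w = 1 / z                                      # w = z^-1
H1 = (1 - w) / (1 + 0.5*w)                     # eq. (6.25)
H2 = (1 + 1.4142136*w + w**2) / (1 - 0.9*w + 0.81*w**2)   # eq. (6.26)
assert np.allclose(H, 2 * H1 * H2)
```

The z^3 factors introduced by converting to negative powers cancel between numerator and denominator, which is why the pointwise match is exact.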
Example 6.7 Develop a parallel realization for the system of Example 6.6.

Solution: The development is best achieved by expanding H(z)/z in a partial fraction expansion:

H(z)/z = 2(z - 1)(z^2 + 1.4142136z + 1) / [z(z + 0.5)(z^2 - 0.9z + 0.81)]
       = A_1/z + A_2/(z + 0.5) + (A_3 z + A_4)/(z^2 - 0.9z + 0.81)    (6.27)

The constants A_1 and A_2 can be determined by the usual partial fraction procedure for first-order poles, but A_3 and A_4 must be determined in a different fashion. One way this can be achieved is to simply choose some non-singular values of z and substitute them on both sides of the above equation, yielding a set of simultaneous linear equations. Following the preceding steps, we first determine that A_1 = -4.9382716 and A_2 = 2.1571915. Placing these two values in eq. (6.27), the values z = 1 and z = -1 seem suitable. After some simplification, substitution of these values results in the following simultaneous equations:

A_3 + A_4 = 3.1851310
-A_3 + A_4 = -6.3770294    (6.28)

Solution of eq. (6.28) yields A_3 = 4.7810802 and A_4 = -1.5959492. After substituting these values in eq. (6.27), the proper form of the expansion is obtained by multiplying both sides by z and rearranging in negative powers:

A_1 = -4.9382716
H_1(z) = 2.1571915 / (1 + 0.5 z^{-1})    (6.29)
H_2(z) = (4.7810802 - 1.5959492 z^{-1}) / (1 - 0.9 z^{-1} + 0.81 z^{-2})    (6.30)

The realization is shown in Fig. 6.13.

Fig. 6.13 Realization for the system of Example 6.7.
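The simultaneous-equation step of Example 6.7 can be reproduced numerically. The helper `lhs` below is a hypothetical name for H(z)/z of eq. (6.24); substituting z = 1 and z = -1 gives two linear equations in A3 and A4:

```python
import numpy as np

A1, A2 = -4.9382716, 2.1571915     # first-order residues found in the text

def lhs(z):
    """H(z)/z for the transfer function of eq. (6.24)."""
    return 2*(z - 1)*(z**2 + 1.4142136*z + 1) / (z*(z + 0.5)*(z**2 - 0.9*z + 0.81))

rows, rhs = [], []
for z in (1.0, -1.0):              # the non-singular test values used in the text
    quad = z**2 - 0.9*z + 0.81
    rows.append([z/quad, 1/quad])  # coefficients of A3, A4 in (A3*z + A4)/quad
    rhs.append(lhs(z) - A1/z - A2/(z + 0.5))

A3, A4 = np.linalg.solve(np.array(rows), np.array(rhs))
assert abs(A3 - 4.7810802) < 1e-4
assert abs(A4 + 1.5959492) < 1e-4
```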
Problems

1. Realize the following transfer function in Direct form-I:
   H(z) = (z^2 + z) / (z^3 + 0.6z^2 + 0.11z + 0.006)
2. Realize the following transfer function in Direct form-II:
   H(z) = (z^{-2} + 2z^{-1} + 1) / (z^{-3} + 6z^{-2} + 5.5z^{-1} + 1.1)
3. Realize the following transfer function in Direct form-I and Direct form-II:
   H(z) = (z^{-3} + 2z^{-2} + z^{-1} + 0.5) / (z^{-3} + 5z^{-2} + 6z^{-1} + 3)
4. Explain what you understand about Direct form-I and Direct form-II realizations.
5. Obtain a direct-form realization for the following systems:
   (a) y(n) + (3/4) y(n-1) + (1/8) y(n-2) = x(n) + x(n-1)
   (b) H(z) = [1 + (3/2) z^{-1} + (1/2) z^{-2}] / {[1 + (1/2) z^{-1}][1 + (1/2) z^{-1} + (1/4) z^{-2}]}
   (c) H(z) = [(1/2) + (1/2) z^{-1} + (1/2) z^{-2} + z^{-3}] / [1 + (1/4) z^{-1} + (1/2) z^{-2} + (1/2) z^{-3}]
6. Obtain the cascade and parallel realizations for the following systems:
   (a) H(z) = [1 + (1/4) z^{-1}] / {[1 + (1/2) z^{-1}][1 + (1/2) z^{-1} + (1/4) z^{-2}]}
   (b) H(z) = {[1 + (3/2) z^{-1} + (1/2) z^{-2}][1 - (3/2) z^{-1} + z^{-2}]} / {[1 + z^{-1} + (1/4) z^{-2}][1 + (1/4) z^{-1} + (1/2) z^{-2}]}
   (c) H(z) = {[1 - (1/2) z^{-1}][1 - (1/2) z^{-1} + (1/4) z^{-2}]} / {[1 + (1/4) z^{-1}][1 + (1/2) z^{-1} + (1/4) z^{-2}][1 - (1/2) z^{-1} + z^{-2}]}
   (d) H(z) = (1 + z^{-1})^3 / {[1 + (1/4) z^{-1}][1 - (1/2) z^{-1} + z^{-2}]}
7. Obtain a parallel realization for the following systems:
   (a) H(z) = [2 + z^{-1} + (5/4) z^{-2}] / [1 + (1/2) z^{-1} + (1/4) z^{-2}] + [z^{-2} + (1/4) z^{-3}] / [1 - (1/2) z^{-1} + (1/2) z^{-2}]
   (b) H(z) = (1 - z^{-1})(1 + 2 z^{-1}) / {[1 + (1/2) z^{-1}][1 - (1/4) z^{-1}][1 + (1/8) z^{-1}]}
8. Obtain a cascade realization of
   H(z) = (2 + z^{-1} + z^{-2}) / {[1 + (1/2) z^{-1}][1 - (1/4) z^{-1}][1 + (1/8) z^{-1}]}
9. A linear time-invariant system is described by the following input-output relation:
   5 y(n) - 3 y(n-2) - 5 x(n-1) = 0
   Realize the system using Direct form-I and Direct form-II.
10. A linear time-invariant system is described by the following input-output relation:
   4 y(n) - 4 y(n-3) = 2 x(n-2)
   Realize the system using Direct form-I and Direct form-II.
11. A linear time-invariant system is described by the following input-output relation:
   2 y(n) - (1/8) y(n-2) + 2 x(n) = 0
   Realize the system using Direct form-I, Direct form-II and parallel form.
12. Realize the following transfer function of a system using Direct form-I and Direct form-II:
   H(z) = (4 z^{-3} - 2 z^{-2} + 5 z^{-1}) / [(5/4) z^{-3} - (3/4) z^{-2} + (1/4) z^{-1}]
13. Obtain the cascade realization for the following transfer function:
   H(z) = [1 + (1/2) z^{-1}] / {[1 + (1/4) z^{-1}][1 + (1/4) z^{-1} + (1/8) z^{-2}]}
14. Obtain a parallel realization for the following transfer function:
   H(z) = [2 + (1/2) z^{-1}] / {[1 + (1/2) z^{-1}][2 + z^{-1} + (1/2) z^{-2}]}
15. Obtain a cascade realization for an IIR system described by
   H(z) = [1 + (1/2) z^{-1}]^3 / {[1 + (1/4) z^{-1}][1 + (1/4) z^{-1} - (1/6) z^{-2}]}
16. Realize the following IIR system using Direct form-I and Direct form-II:
   H(z) = z(z + 5) / {[z + (1 + j)][z + (1 - j)]}
17. Realize an FIR system described by the following input-output relation:
   y(n) = x(n) + (3/7) x(n-1) + (15/7) x(n-2) + (3/7) x(n-3) + x(n-4)
18. Realize an FIR system with impulse response h(n) given by
   h(n) = (1/3)^n [u(n) - u(n-3)]
19. Realize a linear-phase FIR system with system function H(z) given by
   H(z) = [1 + (1/3) z^{-1} + (1/5) z^{-2} + z^{-3}][1 + (1/5) z^{-1} + (1/3) z^{-2} + z^{-3}]
20. Realize the following FIR system using direct and cascade forms:
   H(z) = [1 + (1/5) z^{-1} + z^{-2}][1 + (1/6) z^{-1} + z^{-2}]
Multiple-Choice Questions

1. For realization of the transfer function of a continuous-time system H(s), the important component used is
   (a) Integrator  (b) Unit-delay  (c) Divider  (d) None of the above
2. For realization of the transfer function of a discrete-time system H(z), the important component used is
   (a) Integrator  (b) Differentiator  (c) Unit-delay  (d) None of the above
3. The components for the realization of continuous-time systems are made up of
   (a) Operational amplifiers  (b) Digital signal processors  (c) Transistors  (d) Diodes
4. The components for the realization of discrete-time systems are made up of
   (a) Operational amplifiers  (b) Digital signal processors  (c) Transistors  (d) Diodes
5. In the Direct form-II realization, the number of unit delays compared with Direct form-I is reduced to
   (a) One-third  (b) One-quarter  (c) Half  (d) Not reduced
6. Find the wrong statement:
   (a) Direct form-I and II require the same number of additions and multiplications
   (b) Direct form-II requires fewer additions than Direct form-I
   (c) Direct form-II requires fewer multiplications than Direct form-I
   (d) Direct form-II requires fewer multiplications and additions than Direct form-I
7. In cascade form the transfer function is given by
   (a) H(z) = ∏_{i=1}^{k} H_i(z)  (b) H(z) = Σ_{i=1}^{k} H_i(z)  (c) H(z) = H_i(z)  (d) H(z) = ∪_{i=1}^{k} H_i(z)
8. In parallel form the transfer function is given by
   (a) H(z) = ∏_{i=1}^{k} H_i(z)  (b) H(z) = H_i(z)  (c) H(z) = Σ_{i=1}^{k} H_i(z)  (d) H(z) = ∪_{i=1}^{k} H_i(z)
9. In cascade form
   (a) Quantization error is more compared to Direct form-I
   (b) Quantization error is more compared to Direct form-II
   (c) Quantization error is more compared to Direct form-I and Direct form-II
   (d) Quantization error is reduced compared to Direct form-I and Direct form-II
10. In parallel form
   (a) Quantization error is more compared to Direct form-I
   (b) Quantization error is more compared to Direct form-II
   (c) Quantization error is more compared to Direct form-I and Direct form-II
   (d) Quantization error is reduced compared to Direct form-I and Direct form-II
11. In a linear-phase symmetric FIR system, for N even
   (a) the number of multiplications is reduced from N to N/2
   (b) the number of multiplications is reduced from N to (N + 1)/2
   (c) the number of multiplications is reduced from N to (N - 1)/2
   (d) the number of multiplications is reduced from N to (N/2) - 1
12. In a linear-phase symmetric FIR system, for N odd
   (a) the number of multiplications is reduced from N to N/2
   (b) the number of multiplications is reduced from N to (N + 1)/2
   (c) the number of multiplications is reduced from N to (N - 1)/2
   (d) the number of multiplications is reduced from N to (N/2) - 1
13. FIR systems are realized by
   (a) Direct and cascade form  (b) Direct and parallel form
   (c) Cascade and parallel form  (d) None of the above
14. In cascade form an (N - 1)th order FIR transfer function requires
   (a) (N - 1) adders and N multipliers  (b) N adders and (N - 1) multipliers
   (c) (N - 1) adders and N/2 multipliers  (d) N/2 adders and (N - 1)/2 multipliers
Key to the Multiple-Choice Questions

1. (a)  2. (c)  3. (a)  4. (b)  5. (c)  6. (a)  7. (a)
8. (c)  9. (d)  10. (d)  11. (a)  12. (b)  13. (a)  14. (a)
7 The Discrete Fourier Transform

7.1 INTRODUCTION
The primary objective of this chapter is the development of the basic theory and computational procedures for evaluating the discrete Fourier transform. An aspect of major importance is the so-called Fast Fourier Transform (FFT), a high-speed algorithm for computing the Fourier transform of a discrete-time signal. The FFT makes it possible to compute the Fourier transforms of signals containing thousands of points in a matter of milliseconds. The FFT is discussed in Chapter 8.
7.2 FORMS OF THE FOURIER TRANSFORM

In describing the properties of the Fourier transform and inverse transform, it is quite convenient to use the concepts of time and frequency, even though the transformation is applicable to a wide range of physical and mathematical problems having other variables. It is very worthwhile to see the variety of forms that the transform takes when the time and frequency variables assume combinations of continuous and discrete forms. The following quantities are defined:

t = continuous-time variable
T = time increment between successive samples when a time function is sampled
t_p = effective period of a time function when it is periodic
f = continuous-frequency variable
F = frequency increment between successive components when a frequency function is sampled
f_s = sampling rate (frequency) when a time function is sampled, i.e., the number of samples per second
N = number of samples in the range 0 ≤ t < t_p when the time function is sampled; N is also equal to the number of samples in the range 0 ≤ f < f_s when the frequency function is sampled

From the previous definitions, it can be seen that when the time function is sampled and the length of the signal is limited to t_p, we have

t_p = NT    (7.1)
Similarly, when the frequency function is sampled and the width of the frequency function is limited to f_s, we have

f_s = NF    (7.2)
We will now consider the four possible forms that could be used in representing Fourier transform and inverse transform functions. These correspond to the four combinations obtained from successively assuming the time and frequency variables to each be continuous or discrete.

1. Continuous-time and continuous-frequency: The Fourier transform X(f) of a continuous-time function x(t) can be expressed as

X(f) = ∫_{-∞}^{∞} x(t) e^{-j2πft} dt    (7.3)

x(t) = ∫_{-∞}^{∞} X(f) e^{j2πft} df    (7.4)

The forms for the time function and the transform function are illustrated in Fig. 7.1. It is seen that a non-periodic continuous-time function corresponds to a non-periodic continuous-frequency transform function.

Fig. 7.1 Non-periodic continuous-time function and its Fourier transform, which is a non-periodic continuous-frequency function.
2. Continuous-time and discrete-frequency: This is the form of the Fourier transform that is most often referred to as the Fourier series. Let x(t) represent a periodic continuous-time function with period t_p. The Fourier transform of x(t) is a discrete-frequency function, which we will denote here by X(mF). The transform pair is given by

X(mF) = (1/t_p) ∫_{t_p} x(t) e^{-j2πmFt} dt    (7.5)

x(t) = Σ_{m=-∞}^{∞} X(mF) e^{j2πmFt}    (7.6)

The integral in (7.5) is evaluated over one period of x(t). Some of the properties of these functions are illustrated in Fig. 7.2.

Fig. 7.2 Periodic continuous-time function and its Fourier transform, which is a non-periodic discrete-frequency function.
In giving the transform relationships, it was stated that x(t) was periodic. This property automatically forces the transform to be a discrete-frequency function. On the other hand, consider the possibility that x(t) is originally not periodic, which leads to a continuous-frequency transform as shown in Fig. 7.1, and then assume that the transform is sampled. In effect, the process of sampling the spectrum leads to a periodic time function upon applying the inverse transform of eq. (7.6). Thus, in a sense, it is immaterial whether the original time function was periodic or not if the spectrum is sampled: the sampling process itself forces the time function to be periodic when the inversion is performed. The frequency increment F between successive spectral components is related to the time period t_p by

F = 1/t_p    (7.7)

The conclusion is that a periodic continuous-time function corresponds to a non-periodic discrete-frequency transform function.

3. Discrete-time and continuous-frequency: This form of the Fourier transform is equivalent to evaluating the Z-transform and inverse transform on the unit circle. Let x(nT) represent the discrete-time signal, and let X(f) represent the transform; mathematically,

X(f) = Σ_{n=0}^{∞} x(nT) e^{-j2πfnT}    (7.8)

x(nT) = (1/f_s) ∫_{f_s} X(f) e^{j2πfnT} df    (7.9)
Fig. 7.3 Non-periodic discrete-time function and its Fourier transform, which is a periodic continuous-frequency function.

The integral in eq. (7.9) is evaluated over one period of X(f). Some of the properties of these functions are illustrated in Fig. 7.3. Sampling of the time function produces a periodic frequency function. By the same logic, if a frequency function is specified as being periodic, the resulting time function must be a discrete-time signal. The period of the frequency function is simply the sampling rate f_s, and it is related to the sampling interval T by

f_s = 1/T    (7.10)

The conclusion is that a non-periodic discrete-time function corresponds to a periodic continuous-frequency transform function.

4. Discrete-time and discrete-frequency:
We will now consider the fourth possibility, which is illustrated in Fig. 7.4. This is the case where both the time and frequency variables are discrete. Let x(nT) represent the discrete-time signal, and let X(kF) represent the discrete-frequency transform function. A suitable Fourier transform pair is given by

X(kF) = Σ_{n} x(nT) e^{-j2πknFT}    (7.11)

x(nT) = (1/N) Σ_{k} X(kF) e^{j2πknFT}    (7.12)

The summation in eq. (7.11) is evaluated over one period of x(nT), and the summation of eq. (7.12) is evaluated over one period of X(kF). Equations (7.11) and (7.12) describe one form of the discrete Fourier transform (DFT), which is basic in the area of digital signal processing.
Fig. 7.4 Periodic discrete-time function and its Fourier transform, which is a periodic discrete-frequency function.
Since the time function is sampled and periodic, the frequency function is periodic, with period

f_s = 1/T    (7.13)

and discrete. On the other hand, since the frequency function is sampled, the time function is periodic with a period t_p given by

t_p = 1/F    (7.14)
It is seen that a periodic discrete-time signal corresponds to a periodic discrete-frequency transform function. By reviewing the preceding cases, several general conclusions can be made. If a function in one domain (either time or frequency) is periodic, then the corresponding function in the other domain is sampled, which means it is a function of a discrete variable. Conversely, if a function in one domain is sampled, then the function in the other domain becomes periodic. The period in one domain is always the reciprocal of the increment between samples in the other domain. Some of the preceding properties are summarized in Table 7.1.

Table 7.1 Comparison of forms for Fourier transform pairs

Time Function                  Frequency Function
Non-periodic and continuous    Non-periodic and continuous
Periodic and continuous        Non-periodic and discrete
Non-periodic and discrete      Periodic and continuous
Periodic and discrete          Periodic and discrete
When a function is evaluated by numerical procedures, it is always necessary to sample it in some fashion. This means that in order to fully evaluate a Fourier transform or inverse transform with digital operations, both the time and frequency functions must eventually be sampled in one form or another. Thus, the last of the four possible Fourier pairs (the DFT) is the one of primary interest in digital computation. The implications of the sampling process in both the time and frequency domains must be considered in order to ascertain that the data obtained by the discrete process represent the actual data desired. The transform pairs used for illustration in Figs. 7.1 through 7.4 were chosen to be both band-limited and time-limited within proper ranges: the sampling rate was assumed to be greater than twice the highest frequency, and the period of the time functions was chosen to be larger than the time length of the signal, so that no overlapping of the time or frequency functions (aliasing) occurs.
7.3 DISCRETE FOURIER TRANSFORM

In this section, the discrete Fourier transform pair introduced in the preceding section will be studied in more detail. Consider the transform pair introduced by eqs. (7.11) and (7.12). Using equations (7.1) and (7.14), the quantity FT can be expressed as

FT = 1/N    (7.15)

We will now define a quantity W_N as

W_N = e^{-j(2π/N)}    (7.16)

The reciprocal of W_N can be expressed as

W_N^{-1} = e^{j(2π/N)}    (7.17)

Using the relationships given by (7.15), (7.16) and (7.17) and the assumptions previously made, the discrete Fourier transform (DFT) pair can be stated as

X(k) = Σ_{n=0}^{N-1} x(n) W_N^{kn}    (7.18)

x(n) = (1/N) Σ_{k=0}^{N-1} X(k) W_N^{-kn}    (7.19)

The transformation of (7.18) will sometimes be denoted as

X(k) = D[x(n)]    (7.20)

The inverse transform will sometimes be denoted as

x(n) = D^{-1}[X(k)]    (7.21)

The following notation is convenient in relating the various transform pairs:

x(n) ⇔ X(k)
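The DFT pair of eqs. (7.18)–(7.19) can be sketched directly with the twiddle factor W_N (a minimal illustration, not the fast algorithm of Chapter 8):

```python
import numpy as np

N = 8
n = np.arange(N)
WN = np.exp(-2j * np.pi / N)           # W_N of eq. (7.16)
F = WN ** np.outer(n, n)               # F[k, n] = W_N^{kn}

x = np.random.default_rng(4).standard_normal(N)
X = F @ x                              # analysis, eq. (7.18)
x_rec = (F.conj() @ X) / N             # synthesis, eq. (7.19): W_N^{-kn} = conj(W_N^{kn})

assert np.allclose(X, np.fft.fft(x))   # matches the standard DFT convention
assert np.allclose(x_rec, x)
```

This direct matrix evaluation costs O(N^2) operations, which is the motivation for the FFT discussed later.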
The DFT is a finite-duration discrete-frequency sequence obtained by sampling one period of the Fourier transform at N equally spaced points over the interval 0 ≤ ω < 2π, i.e., at ω = 2πk/N for 0 ≤ k ≤ N - 1. The DFT sequence starts at k = 0, corresponding to ω = 0, but does not include k = N, corresponding to ω = 2π. If the Fourier transform of the discrete sequence is X(ω), then the DFT is denoted by X(k):

X(k) = X(ω)|_{ω = 2πk/N},  0 ≤ k ≤ N - 1

We are aware of the fact that the Fourier transform is periodic in ω with period 2π: when we sample a function of continuous time with period T_s, the spectrum of the resulting discrete-time sequence becomes a periodic function of frequency with period 2π/T_s. The same is true when we sample the Fourier transform and obtain the DFT.

Table 7.2 Discrete Fourier transform operation pairs

x(n)                                              X(k) = D[x(n)]
a x_1(n) + b x_2(n)                               a X_1(k) + b X_2(k)
x((n - m))_N                                      W_N^{km} X(k)
W_N^{-ln} x(n)                                    X((k - l))_N
x(n) ⊛ h(n) = Σ_{m=0}^{N-1} x(m) h((n - m))_N     X(k) H(k)
Σ_{n=0}^{N-1} x(n) y((n - k))_N                   X(k) Y((-k))_N
x(n) y(n)                                         (1/N) Σ_{l=0}^{N-1} X(l) Y((k - l))_N

7.4 PROPERTIES OF DFT
Now we will see some of the properties of the DFT. We are already aware of the fact that the DFT is a set of N samples X(k) of the Fourier transform X(ω) of a finite-duration sequence x(n) of length L ≤ N. The frequency samples are taken at ω_k = 2πk/N, k = 0, 1, ..., N - 1. The DFT and IDFT pair can be represented as x(n) ⇔ X(k).

7.4.1 Linearity

If x_1(n) ⇔ X_1(k) and x_2(n) ⇔ X_2(k), then for real- or complex-valued constants b_1 and b_2,

b_1 x_1(n) + b_2 x_2(n) ⇔ b_1 X_1(k) + b_2 X_2(k)
It means that one can compute the DFTs of several different signals and determine the combined DFT via summation of the individual DFTs. The advantage of this property is that the system output at specific frequency components can be evaluated separately and then combined to determine the total system frequency response. The linearity property can be represented as

DFT[Σ_{i=1}^{L} a_i x_i(n)] = Σ_{i=1}^{L} a_i X_i(k)

where the a_i are arbitrary constants and X_i(k) = DFT[x_i(n)].
7.4.2 Periodicity

If x(n) and X(k) are an N-point DFT pair, then

x(n + N) = x(n) for all n
X(k + N) = X(k) for all k

The periodicity is the result of the periodicity of the complex exponential, as shown in Fig. 7.5: the DFT of a finite-length sequence behaves as a periodic sequence.

Proof:

X(k + N) = Σ_{n=0}^{N-1} x(n) W_N^{n(k+N)} = Σ_{n=0}^{N-1} x(n) W_N^{nk} W_N^{nN}

Since W_N^{nN} = e^{-j2πnN/N} = e^{-j2πn} = 1 for all n,

X(k + N) = Σ_{n=0}^{N-1} x(n) W_N^{nk} = X(k)

The periodicity x(n + N) = x(n) follows in the same way from the inverse transform of eq. (7.19).
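The proof above can be checked numerically by evaluating the analysis sum of eq. (7.18) at k and at k + N; the two values coincide:

```python
import numpy as np

N = 6
x = np.random.default_rng(5).standard_normal(N)
n = np.arange(N)

def X(k):
    """Direct evaluation of the analysis sum of eq. (7.18) at index k."""
    return np.sum(x * np.exp(-2j * np.pi * k * n / N))

# X(k + N) = X(k) for every k, since exp(-j*2*pi*n) = 1
for k in range(N):
    assert np.isclose(X(k), X(k + N))
```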
7.4.3 Circular Symmetry of a Sequence

Consider a signal x(n) of length L, and let X(k) be its N-point DFT, with L ≤ N. Then X(k) is equivalent to the N-point DFT of a periodic sequence x_p(n) of period N, which is the periodically extended version of x(n):

x_p(n) = Σ_{l=-∞}^{∞} x(n - lN)
Fig. 7.5 Periodicity of the DFT. The signal is viewed as a periodic extension of a finite-length DFT input sample sequence (N = 12).
If the shifting operation is continued, different periodic sequences are obtained. The sequence x_p(n) is related to the original sequence x(n) by a "circular shift": if the samples of x(n) are represented on a circle, then shifting x(n) by k samples means rotating (circulating) the sequence by k samples. Let us elaborate using the graphical method. Figure 7.6 (a) shows the signal x(n); (b) shows the periodic extension of x(n), i.e., x_p(n); (c) shows the periodic sequence x_p(n) shifted (delayed) by two samples, giving x_p(n - 2); (d) shows the shifted sequence without periodic extension, i.e., x′(n). Figure 7.7 (a) shows the samples of x(n) on a circle; (b) shows x′(n), i.e., x_p(n - 2) restricted to one period, which is equivalent to rotating the circle by two samples. The values of x′(n) are found from

x′(n) = x(n - k, modulo N) ≡ x((n - k))_N

This can be understood from the following illustration: divide (n - k) by N and retain the remainder only. If N = 4, k = 2 and n = 0, then n - k = 0 - 2 = -2, and -2 modulo 4 = 2.
Fig. 7.6
Fig. 7.7
Let us find the values of x′(n):

x(n) = [1, 2, 3, 4], with the arrow marking n = 0
∴ x_p(n) = [..., 4, 1, 2, 3, 4, 1, 2, 3, 4, 1, 2, 3, 4, ...]

Let N = 4 (so N - 1 = 3). Making a shift (delay) of k = 2 samples,

x′(n) = x[(n - k)]_N = x[(n - 2)]_4
n = 0:  x′(0) = x[(-2)]_4 = x(2) = 3
n = 1:  x′(1) = x[(-1)]_4 = x(3) = 4
n = 2:  x′(2) = x[(0)]_4 = x(0) = 1
n = 3:  x′(3) = x[(1)]_4 = x(1) = 2
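Assuming NumPy's `np.roll` as the circular-shift primitive (rolling right by k delays the sequence circularly, i.e., x′(n) = x((n - k))_N), the worked example above can be reproduced as:

```python
import numpy as np

x = np.array([1, 2, 3, 4])

x_shift = np.roll(x, 2)                    # circular delay by k = 2 samples
assert x_shift.tolist() == [3, 4, 1, 2]    # matches x'(0..3) = 3, 4, 1, 2 above
```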
Therefore, rotation in the anticlockwise direction gives the delayed sequence and rotation in the clockwise direction gives the advanced sequence (a sign convention for the rotation must be chosen). A circular shift of an N-point sequence is equivalent to a linear shift of its periodic extension, and vice versa. The fold, or time reversal, of x(n) is given by

x((-n))_N = x(N - n),  0 ≤ n ≤ N - 1

x(n) = (1, 2, 3, 4)

Fig. 7.8
Let Fig. 7.8 (a) show the signal x(n) and (b) the periodic extension of x(n), i.e., x_p(n). To find x((-n))_N, let the folded sequence be denoted x̂(n). With N = 4 (N - 1 = 3),

x̂(n) = x(N - n)
n = 0:  x̂(0) = x(4 - 0) = x(4) = 1
n = 1:  x̂(1) = x(4 - 1) = x(3) = 4
n = 2:  x̂(2) = x(4 - 2) = x(2) = 3
n = 3:  x̂(3) = x(4 - 3) = x(1) = 2

(x(4) wraps around circularly to x(0) = 1.)
Fig. 7.9 (a) The sequence x(n); (b) the folded sequence x̂(n), i.e., the time reversal of x(n).
The time reversal of an N-point sequence is attained by reversing its samples about the point zero on the circle.

Circularly even sequence: an N-point sequence is called circularly even if it is symmetric about the point zero on the circle, i.e., x(N - n) = x(n).
Circularly odd sequence: an N-point sequence is called circularly odd if it is antisymmetric about the point zero on the circle, i.e., x(N - n) = -x(n).

For a periodic sequence,

even:  x_p(n) = x_p(-n) = x_p(N - n)
odd:   x_p(n) = -x_p(-n) = -x_p(N - n)

If the periodic sequence is complex-valued, then

conjugate even:  x_p(n) = x_p*(N - n)
conjugate odd:   x_p(n) = -x_p*(N - n)

In a more detailed way, the above circular shift can be explained as follows. Consider a finite-duration sequence x(n) of length N, so that x(n) = 0 except in the range 0 ≤ n ≤ N - 1. Clearly a sequence of length M less than N can also be considered to be of length N, with the last (N - M) points in the interval having amplitude zero, and sometimes it will be convenient to do this. The corresponding periodic sequence of period N, for which x(n) is one period, will be denoted by x̂(n) and is given by

x̂(n) = Σ_{r=-∞}^{∞} x(n + rN)    (7.22)

Since x(n) is of finite length N, there is no overlap between the terms x(n + rN) for different values of r. Thus, eq. (7.22) can alternatively be written as

x̂(n) = x(n modulo N)    (7.23)
For convenience we shall use the notation ((n))_N to denote (n modulo N); with this notation eq. (7.23) can be expressed as

x̂(n) = x((n))_N    (7.24)

The finite-duration sequence x(n) is obtained from x̂(n) by extracting one period, i.e.,

x(n) = x̂(n) for 0 ≤ n ≤ N - 1, and x(n) = 0 otherwise

Again for notational convenience, it is useful to define the rectangular sequence R_N(n) given by

R_N(n) = 1 for 0 ≤ n ≤ N - 1, and R_N(n) = 0 otherwise

With this notation the above extraction can be expressed as

x(n) = x̂(n) R_N(n)
7.4.4 Circular Shift of a Sequence

Consider a sequence x(n) as shown in Fig. 7.10 (a), its periodic extension x̂(n) as shown in Fig. 7.10 (b), and x̂(n + m), the result of shifting x̂(n) by m samples, indicated in Fig. 7.10 (c). The finite-duration sequence, which we shall denote by x_1(n), obtained by extracting one period of x̂(n + m) in the range 0 ≤ n ≤ N - 1, is shown in Fig. 7.10 (d). Comparison of Fig. 7.10 (a) and (d) indicates clearly that x_1(n) does not correspond to a linear shift of x(n); in fact, both sequences are confined to the interval between 0 and (N - 1). By reference to Fig. 7.10 (b) and (c) we see that in shifting the periodic sequence and examining the interval between 0 and (N - 1), as a sample leaves the interval an identical sample enters the interval at the other end. We can thus imagine forming x_1(n) by circularly shifting x(n): as a sample leaves the interval at one end, it enters at the other end.

The circular-shift property states that if X(k) is the DFT of the sequence x(n), then

DFT[x((n - m))_N] = W_N^{mk} X(k)  for k = 0, 1, ..., N - 1

The circular shift of x(n) is denoted by x((n - m))_N. To generate x((n - m))_N, we move the last m samples of x(n) to the beginning of the sequence. The shifted versions of x(n) are shown below for m = 1, 2, ...:

x(n) = [x(0), x(1), x(2), ..., x(N-2), x(N-1)]
x((n - 1))_N = [x(N-1), x(0), x(1), ..., x(N-3), x(N-2)]
x((n - 2))_N = [x(N-2), x(N-1), x(0), x(1), ..., x(N-4), x(N-3)]
x((n - m))_N = [x(N-m), x(N-m+1), ..., x((N-m+N-1))_N]

If the shift is m = N, the original sequence x(n) results:

x((n - N))_N = [x(0), x(1), x(2), ..., x(N-2), x(N-1)]
152
Digital Signal Processing
Fig. 7.10 Circular shift of a sequence.
Hence, x((n – N))_N = x(n).
Proof: From the definition of the inverse DFT we have
x(n) = (1/N) Σ_{k=0}^{N-1} X(k) W_N^{-kn}
so that
x((n – m))_N = (1/N) Σ_{k=0}^{N-1} X(k) W_N^{-k(n-m)}
 = (1/N) Σ_{k=0}^{N-1} [X(k) W_N^{km}] W_N^{-kn}
x((n – m))_N = IDFT[X(k) W_N^{km}]
Hence, DFT[x((n – m))_N] = X(k) W_N^{km}
In terms of transform pairs: x((n – m))_N ↔ W_N^{km} X(k)
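The circular-shift property is easy to check numerically. The sketch below assumes NumPy (`np.roll` performs exactly the "move the last m samples to the front" operation described above); the sequence values are arbitrary illustrative choices.

```python
import numpy as np

# Circular-shift property: DFT[x((n - m))_N] = W_N^{km} X(k),
# with W_N = exp(-j*2*pi/N). Illustrative numerical check:
x = np.array([1.0, 2.0, 3.0, 4.0])
N = len(x)
m = 1

# np.roll(x, m) moves the last m samples to the front, i.e. x((n - m))_N
shifted = np.roll(x, m)

X = np.fft.fft(x)
k = np.arange(N)
W = np.exp(-2j * np.pi / N)          # twiddle factor W_N

lhs = np.fft.fft(shifted)            # DFT of the circularly shifted sequence
rhs = (W ** (k * m)) * X             # W_N^{km} X(k)

match = np.allclose(lhs, rhs)        # the two sides agree
```

The same check passes for any integer shift m, since `np.roll` reduces the shift modulo N just as ((n – m))_N does.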
7.4.5 Circular Convolution
The circular convolution property states that if x1(n) and x2(n) are finite-duration sequences of length N with DFTs X1(k) and X2(k), then
DFT[x1(n) ⊗ x2(n)] = X1(k) X2(k)   (7.25)
The Discrete Fourier Transform 153
Proof: Let X1(k) and X2(k) be the DFTs of x1(n) and x2(n); then X1(k) and X2(k) are given by
X1(k) = Σ_{n=0}^{N-1} x1(n) e^{-j(2π/N)kn}, k = 0, 1, …, N – 1
X2(k) = Σ_{l=0}^{N-1} x2(l) e^{-j(2π/N)kl}, k = 0, 1, …, N – 1
Let X3(k) = X1(k) X2(k); then x3(m) is obtained by taking the IDFT of X3(k):
x3(m) = (1/N) Σ_{k=0}^{N-1} X3(k) e^{j(2π/N)km}
 = (1/N) Σ_{k=0}^{N-1} X1(k) X2(k) e^{j(2π/N)km}
Substituting X1(k) and X2(k) in the above equation, we get
x3(m) = (1/N) Σ_{k=0}^{N-1} [Σ_{n=0}^{N-1} x1(n) e^{-j(2π/N)kn}] [Σ_{l=0}^{N-1} x2(l) e^{-j(2π/N)kl}] e^{j(2π/N)km}
Rearranging the above equation, we get
x3(m) = (1/N) Σ_{n=0}^{N-1} x1(n) Σ_{l=0}^{N-1} x2(l) [Σ_{k=0}^{N-1} e^{j(2π/N)k(m-n-l)}]   (7.26)
But e^{j(2π/N)k(m-n-l)} = 1 when (m – n – l) is a multiple of N; hence in the above equation
Σ_{k=0}^{N-1} e^{j(2π/N)k(m-n-l)} = N where (m – n – l) is a multiple of N, and 0 where (m – n – l) is not a multiple of N
Hence, in eq. (7.26), "(m – n – l) a multiple of N" can be written as m – n – l = pN, where p is an integer that can be positive or negative, i.e., l = m – n + pN. Substituting this value of l in eq. (7.26), we get
x3(m) = Σ_{n=0}^{N-1} x1(n) x2(m – n + pN)
Here x2(m – n + pN) represents the sequence x2(m) shifted circularly by n samples. Such a sequence is also represented by
x2(m – n + pN) = x2(m – n, modulo N) = x2((m – n))_N
Hence,
x3(m) = Σ_{n=0}^{N-1} x1(n) x2((m – n))_N, m = 0, 1, …, N – 1   (7.27)
Comparing the above equation with the linear convolution equation
y(n) = Σ_{k=-∞}^{∞} x1(k) x2(n – k)
we see that eq. (7.27) appears to be a linear convolution operation, but x2(·) is shifted circularly. Hence, eq. (7.27) is called circular convolution. The circular convolution of the sequences x1(n) and x2(n) is denoted by x1(n) ⊗ x2(n). Hence,
x3(m) = x1(n) ⊗ x2(n) = IDFT[X3(k)] = IDFT[X1(k) X2(k)]
DFT[x1(n) ⊗ x2(n)] = X1(k) X2(k)
Hence, multiplication of two DFTs in the frequency domain is equivalent to circular convolution of their sequences, both of length N, in the time domain.
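The equivalence between time-domain circular convolution and multiplication of DFTs can be illustrated with a short NumPy sketch; the two length-4 sequences are arbitrary illustrative choices.

```python
import numpy as np

def circular_convolve(x1, x2):
    """Direct time-domain circular convolution of two length-N sequences,
    i.e. eq. (7.27): x3(m) = sum_n x1(n) x2((m - n) mod N)."""
    N = len(x1)
    return np.array([sum(x1[n] * x2[(m - n) % N] for n in range(N))
                     for m in range(N)])

x1 = np.array([1, 2, 3, 1])
x2 = np.array([4, 3, 2, 2])

direct = circular_convolve(x1, x2)

# DFT route: multiply the DFTs and invert, per DFT[x1 (*) x2] = X1(k) X2(k)
via_dft = np.fft.ifft(np.fft.fft(x1) * np.fft.fft(x2)).real.round().astype(int)
```

Both routes produce the same length-4 sequence, as the property requires.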
7.4.6 Multiplication by Exponentials
This property states that
DFT[x(n) e^{j2πnM/N}] = X((k – M))_N   (7.28)
Proof:
DFT[x(n) e^{j2πnM/N}] = Σ_{n=0}^{N-1} x(n) e^{j2πnM/N} e^{-j2πkn/N}
 = Σ_{n=0}^{N-1} x(n) e^{-j2πn(k – M)/N}
 = X((k – M))_N
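A quick numerical illustration of eq. (7.28), assuming NumPy; the test sequence and the shift M are arbitrary choices.

```python
import numpy as np

# Frequency-shift (multiplication-by-exponentials) property, eq. (7.28):
# DFT[x(n) e^{j 2*pi*n*M/N}] = X((k - M))_N.  Illustrative check:
x = np.array([1.0, 2.0, 0.0, -1.0, 3.0, 0.5])
N = len(x)
M = 2
n = np.arange(N)

modulated = x * np.exp(2j * np.pi * n * M / N)

lhs = np.fft.fft(modulated)
rhs = np.roll(np.fft.fft(x), M)   # X((k - M))_N is X(k) rotated by M bins

holds = np.allclose(lhs, rhs)
```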
7.4.7 Multiplication in Time
This property states that
DFT[x1(n) x2(n)] = (1/N) X1(k) ⊗ X2(k)   (7.29)
Proof: Expressing x2(n) through its IDFT,
DFT[x1(n) x2(n)] = DFT[x1(n) (1/N) Σ_{l=0}^{N-1} X2(l) e^{j2πln/N}]
 = (1/N) Σ_{l=0}^{N-1} X2(l) DFT[x1(n) e^{j2πln/N}]
From the multiplication-by-exponentials property, this gives
 = (1/N) Σ_{l=0}^{N-1} X2(l) X1((k – l))_N
 = (1/N) X1(k) ⊗ X2(k)
7.4.8 Parseval's Relation
Parseval's theorem states that if X1(k) and X2(k) are the DFTs of the sequences x1(n) and x2(n), respectively, both of length N, then
Σ_{n=0}^{N-1} x1(n) x2*(n) = (1/N) Σ_{k=0}^{N-1} X1(k) X2*(k)   (7.30)
Proof: Expressing x2*(n) through the conjugate of its IDFT,
Σ_{n=0}^{N-1} x1(n) x2*(n) = Σ_{n=0}^{N-1} x1(n) (1/N) Σ_{k=0}^{N-1} X2*(k) e^{-j2πkn/N}
 = (1/N) Σ_{k=0}^{N-1} X2*(k) Σ_{n=0}^{N-1} x1(n) e^{-j2πkn/N}
 = (1/N) Σ_{k=0}^{N-1} X1(k) X2*(k)

7.5 RELATION BETWEEN FOURIER TRANSFORM AND Z-TRANSFORM
The Z-transform of the sequence x(n) is given by
X(z) = Σ_{n=-∞}^{∞} x(n) z^{-n}   (7.31)
Substituting z in polar coordinates, i.e., z = r e^{jω}, in the above equation, we obtain
X(r e^{jω}) = Σ_{n=-∞}^{∞} x(n) (r e^{jω})^{-n}   (7.32)
If r = 1, i.e., if the above equation is evaluated on the unit circle, then eq. (7.32) becomes
X(e^{jω}) = Σ_{n=-∞}^{∞} x(n) e^{-jωn}   (7.33)
The RHS of eq. (7.33) is the Fourier transform of x(n); hence, if the Z-transform of the sequence is evaluated on the unit circle, the Z-transform of x(n) is equal to the Fourier transform X(e^{jω}) of the sequence x(n). Let us now sample X(z) at N equally spaced points on the unit circle, i.e., at z = e^{j2πk/N}:
X(z)|_{z = e^{j2πk/N}} = Σ_{n=0}^{N-1} x(n) e^{-j2πkn/N}, where k = 0, 1, 2, …, N – 1   (7.34)
The RHS of this equation is the DFT of the sequence x(n). Thus, if the Z-transform is evaluated on the unit circle at N equally spaced points, it is equivalent to the DFT X(k) of the sequence x(n).
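The statement that the DFT equals the Z-transform sampled at N equally spaced points on the unit circle can be illustrated directly (a NumPy sketch; the sequence is an arbitrary choice).

```python
import numpy as np

# Sampling X(z) at z = e^{j 2*pi*k/N} on the unit circle reproduces
# the N-point DFT, per eq. (7.34). Illustrative sketch:
x = np.array([1.0, -2.0, 3.0, 0.5])
N = len(x)
n = np.arange(N)

def X_of_z(z):
    """Z-transform of the finite sequence x(n): sum of x(n) z^{-n}."""
    return np.sum(x * z ** (-n))

z_points = np.exp(2j * np.pi * np.arange(N) / N)
samples = np.array([X_of_z(z) for z in z_points])

same_as_dft = np.allclose(samples, np.fft.fft(x))
```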
7.6 COMPARISON BETWEEN LINEAR CONVOLUTION AND CIRCULAR CONVOLUTION
The linear convolution of two sequences x1(n) and x2(n) of lengths n1 and n2 samples is given by
y(n) = x1(n) * x2(n) = Σ_{k=0}^{n1-1} x1(k) x2(n – k) = Σ_{k=0}^{n2-1} x2(k) x1(n – k)
It results in an output y(n) which contains n1 + n2 – 1 samples.
In circular convolution, if n1 > n2 we append n1 – n2 zero samples to the end of the sequence x2(n), or if n2 > n1 we append n2 – n1 zero samples to the end of the sequence x1(n), so that both sequences have the same length N, where N = max(n1, n2).
The circular convolution of two sequences of equal length N is given by
y(m) = x1(n) ⊗ x2(n) = Σ_{n=0}^{N-1} x1(n) x2((m – n))_N
where m = 0, 1, …, N – 1. The sequence x2(n) is shifted circularly, and hence x2(n) can be represented in N × N matrix form.

Table 7.3 Comparison of linear, circular and periodic convolution

Definition:
  Linear: y(n) = Σ_{k=-∞}^{∞} x2(k) x1(n – k)
  Circular: x3(m) = Σ_{n=0}^{N-1} x2(n) x1((m – n))_N
  Periodic: y(n) = Σ_{k=0}^{N-1} x2(k) x1(n – k)
Sequences:
  Linear: non-periodic and of finite length
  Circular: of length N; may be periodic
  Periodic: of length N and always periodic
Transform relation:
  Linear: FT[x1(n) * x2(n)] ↔ X1(ω) X2(ω)
  Circular: DFT[x1(n) ⊗ x2(n)] ↔ X1(k) X2(k)
  Periodic: discrete Fourier series C_k = (1/N) Σ_{n=0}^{N-1} x(n) e^{-j2πkn/N}
Shift:
  Linear: sequences are shifted linearly
  Circular: sequences are shifted circularly
  Periodic: sequences are shifted circularly

7.6.1 Circular Convolution from Linear Convolution
The linear convolution of two sequences x1(n) and x2(n) is given by
y(n) = x1(n) * x2(n) = Σ_{k=0}^{n1-1} x1(k) x2(n – k) = Σ_{k=0}^{n2-1} x2(k) x1(n – k)
where y(n) is a finite-duration sequence of n1 + n2 – 1 samples. The circular convolution of the sequences x1(n) and x2(n) is given by
y(m) = x1(n) ⊗ x2(n) = Σ_{n=0}^{N-1} x1(n) x2((m – n))_N
where m = 0, 1, …, N – 1 and N = max(n1, n2), and y(m) is the circular convolution of x1(n) and x2(n) of N samples. To obtain the circular convolution from the linear convolution, the following steps are implemented:
• Compute the linear convolution of the two sequences x1(n) of length n1 samples and x2(n) of length n2 samples.
• Compute the length L of the linear convolution result, L = n1 + n2 – 1.
• Compute the maximum length of x1(n) and x2(n), N = max(n1, n2).
• Compute the number of zeros to be padded to the linear convolution result, P = 2N – L.
• Pad P zeros at the end of the linear convolution result; its length becomes N1 = L + P = 2N.
• To compute the circular convolution, add each of the last N1/2 samples of the padded result to the corresponding one of the first N1/2 samples: add sample N1/2 to sample 0, sample N1/2 + 1 to sample 1, and so on, for all samples up to sample N1 – 1. The first N1/2 sums form the circular convolution.
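The steps above can be sketched in NumPy as follows (the helper name `circular_from_linear` is illustrative, not from the text):

```python
import numpy as np

def circular_from_linear(x1, x2):
    """Circular convolution via the steps above: linear convolution,
    zero-padding to length 2N, then wrap-around addition of the halves."""
    n1, n2 = len(x1), len(x2)
    N = max(n1, n2)
    lin = np.convolve(x1, x2)                          # length L = n1 + n2 - 1
    padded = np.concatenate([lin, np.zeros(2 * N - len(lin))])
    return padded[:N] + padded[N:]                     # add sample n + N to sample n

y = circular_from_linear([2, 1, 2, 1], [1, 2, 3, 4])
```

Here the linear convolution is {2, 5, 10, 16, 12, 11, 4}; padding one zero and folding the halves gives the four-sample circular convolution.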
7.6.2 Linear Convolution from Circular Convolution
To obtain the linear convolution from the circular convolution, the following steps are implemented:
• Append (n2 – 1) zeros to the end of the sequence x1(n).
• Append (n1 – 1) zeros to the end of the sequence x2(n).
• Compute the circular convolution of the zero-padded sequences x1(n) and x2(n); the result equals the linear convolution x1(n) * x2(n).

7.7
SOME TYPICAL EXAMPLES ON DFT
Example 7.1 If X(k) is the DFT of the sequence x(n), determine the N-point DFTs of the sequences
xc(n) = x(n) cos(2πk0n/N), 0 ≤ n ≤ N – 1
and
xs(n) = x(n) sin(2πk0n/N), 0 ≤ n ≤ N – 1
in terms of X(k).
Solution:
Xc(k) = Σ_{n=0}^{N-1} (1/2) x(n) [e^{j2πk0n/N} + e^{-j2πk0n/N}] e^{-j2πkn/N}
 = (1/2) Σ_{n=0}^{N-1} x(n) e^{-j2π(k – k0)n/N} + (1/2) Σ_{n=0}^{N-1} x(n) e^{-j2π(k + k0)n/N}
 = (1/2) X((k – k0))_N + (1/2) X((k + k0))_N
Similarly,
Xs(k) = (1/2j) X((k – k0))_N – (1/2j) X((k + k0))_N
Example 7.3 Determine the circular convolution of the sequences
x1(n) = {1, 2, 3, 1}, x2(n) = {4, 3, 2, 2}
using the time-domain formula.
Solution:
y(n) = x1(n) ⊗ x2(n) = Σ_{m=0}^{3} x1(m) x2((n – m))_4 = {17, 19, 22, 19}
Example 7.4 Use the four-point DFT and IDFT to determine the sequence x3(n) = x1(n) ⊗ x2(n), where
x1(n) = {1, 2, 3, 1}, x2(n) = {4, 3, 2, 2}
Solution:
X1(k) = {7, –2 – j, 1, –2 + j}
X2(k) = {11, 2 – j, 1, 2 + j}
X3(k) = X1(k) X2(k) = {77, –5, 1, –5}
x3(n) = IDFT[X3(k)] = {17, 19, 22, 19}
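The DFT/IDFT route of Example 7.4 can be reproduced with NumPy's FFT routines (for length-4 sequences the FFT computes exactly the 4-point DFT):

```python
import numpy as np

# The DFT/IDFT route of Example 7.4, checked numerically:
x1 = np.array([1, 2, 3, 1])
x2 = np.array([4, 3, 2, 2])

X1 = np.fft.fft(x1)                 # {7, -2-j, 1, -2+j}
X2 = np.fft.fft(x2)                 # {11, 2-j, 1, 2+j}
X3 = X1 * X2                        # {77, -5, 1, -5}

x3 = np.fft.ifft(X3).real.round().astype(int)   # circular convolution
```

The result agrees with the time-domain circular convolution of Example 7.3.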
Example 7.5 Consider the sequences
x1(n) = {0, 1, 2, 3, 4}, x2(n) = {0, 1, 0, 0, 0}, s(n) = {1, 0, 0, 0, 0}
and their 5-point DFTs.
(a) Determine a sequence y(n) so that Y(k) = X1(k) X2(k).
(b) Is there a sequence x3(n) such that S(k) = X1(k) X3(k)?
Solution:
(a) y(n) = x1(n) ⊗ x2(n) = {4, 0, 1, 2, 3}
(b) Let x3(n) = {x0, x1, …, x4}. Then the circular convolution x1(n) ⊗ x3(n) = s(n) gives the matrix equation
[0 4 3 2 1] [x0]   [1]
[1 0 4 3 2] [x1]   [0]
[2 1 0 4 3] [x2] = [0]
[3 2 1 0 4] [x3]   [0]
[4 3 2 1 0] [x4]   [0]
Solving yields the sequence
x3(n) = {–0.18, 0.22, 0.02, 0.02, 0.02}
Example 7.6 Compute the energy of the N-point sequence
x(n) = cos(2πkn/N), 0 ≤ n ≤ N – 1
Solution:
x(n) x*(n) = [(1/2)(e^{j2πkn/N} + e^{-j2πkn/N})]² = (1/4)(2 + e^{j4πkn/N} + e^{-j4πkn/N})
E = Σ_{n=0}^{N-1} x(n) x*(n) = (1/4) Σ_{n=0}^{N-1} (2 + e^{j4πkn/N} + e^{-j4πkn/N}) = (1/4)(2N) = N/2
(the exponential sums vanish provided 2k is not a multiple of N)

Example 7.7 Given the eight-point DFT of the sequence
x(n) = 1 for 0 ≤ n ≤ 3, and 0 for 4 ≤ n ≤ 7
compute the DFTs of the sequences:
(a) x1(n) = 1 for n = 0; 0 for 1 ≤ n ≤ 4; 1 for 5 ≤ n ≤ 7
(b) x2(n) = 0 for 0 ≤ n ≤ 1; 1 for 2 ≤ n ≤ 5; 0 for 6 ≤ n ≤ 7
Solution:
(a) x1(n) = x((n – 5))_8, therefore
X1(k) = X(k) e^{-j2π·5k/8} = X(k) e^{-j5πk/4}
(b) x2(n) = x((n – 2))_8, therefore
X2(k) = X(k) e^{-j2π·2k/8} = X(k) e^{-jπk/2}
Example 7.8 Let X(k) be the N-point DFT of the sequence x(n). We define a 2N-point sequence y(n) as
y(n) = x(n/2) for n even, and 0 for n odd
Express the 2N-point DFT of y(n) in terms of X(k).
Solution:
Y(k) = Σ_{n=0}^{2N-1} y(n) W_{2N}^{kn}, k = 0, 1, …, 2N – 1
 = Σ_{n even} y(n) W_{2N}^{kn}
 = Σ_{m=0}^{N-1} y(2m) W_{2N}^{2km} = Σ_{m=0}^{N-1} x(m) W_N^{km}
Hence,
Y(k) = X(k) for k ∈ [0, N – 1], and Y(k) = X(k – N) for k ∈ [N, 2N – 1]
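The result of Example 7.8, that interleaving zeros repeats the spectrum, can be checked numerically (a NumPy sketch with an arbitrary length-4 sequence):

```python
import numpy as np

# Zero-interleaving a length-N sequence repeats its DFT twice (Example 7.8):
x = np.array([1.0, 2.0, 3.0, 4.0])
N = len(x)

y = np.zeros(2 * N)
y[::2] = x                          # y(n) = x(n/2) for even n, 0 for odd n

Y = np.fft.fft(y)
X = np.fft.fft(x)

repeats = np.allclose(Y, np.concatenate([X, X]))   # Y(k) = X(k mod N)
```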
Example 7.9 Find the DFT of the sequence
x(n) = 1/3 for 0 ≤ n ≤ 2, and 0 elsewhere
Solution: The sequence has three samples, so the DFT length must satisfy N ≥ 3; we take the minimum, N = 3 (any N larger than 3 will also work).
X(k) = Σ_{n=0}^{N-1} x(n) e^{-j2πkn/N}, 0 ≤ k ≤ N – 1
 = x(0) e^{-j0} + x(1) e^{-j2πk/N} + x(2) e^{-j4πk/N}
 = (1/3) e^{-j0} + (1/3) e^{-j2πk/N} + (1/3) e^{-j4πk/N}
 = (1/3)[1 + e^{-j2πk/N} + e^{-j4πk/N}]
 = (1/3) e^{-j2πk/N} [e^{j2πk/N} + 1 + e^{-j2πk/N}]
 = e^{-j2πk/N} [1 + 2 cos(2πk/N)]/3
Example 7.10 A finite-duration sequence of length L is given as
x(n) = 1 for 0 ≤ n ≤ L – 1, and 0 elsewhere
Determine the N-point DFT for N ≥ L.
Solution:
X(k) = Σ_{n=0}^{N-1} x(n) e^{-j2πkn/N} = Σ_{n=0}^{L-1} e^{-j2πkn/N}, 0 ≤ k ≤ N – 1
 = (1 – e^{-j2πkL/N}) / (1 – e^{-j2πk/N})
 = e^{-jπk(L-1)/N} sin(πkL/N) / sin(πk/N), k = 0, 1, …, N – 1
(for k = 0 the sum gives X(0) = L directly)
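The closed form of Example 7.10 can be checked against a direct DFT (a NumPy sketch; L = 4 and N = 8 are arbitrary choices, and the k = 0 bin, where the sine ratio is 0/0, is treated separately):

```python
import numpy as np

# Checking the closed form of Example 7.10 against a direct DFT:
L, N = 4, 8
x = np.concatenate([np.ones(L), np.zeros(N - L)])
X = np.fft.fft(x)

k = np.arange(1, N)                 # skip k = 0, where sin(pi*k/N) = 0
closed = (np.exp(-1j * np.pi * k * (L - 1) / N)
          * np.sin(np.pi * k * L / N) / np.sin(np.pi * k / N))

ok = np.allclose(X[1:], closed) and np.isclose(X[0].real, L)
```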
Example 7.12 Compute the DFT of each of the following finite-length sequences, considered to be of length N:
1. x(n) = δ(n)
2. x(n) = δ(n – p)
3. x(n) = a^n for 0 ≤ n ≤ N – 1
Solution:
1. x(n) = δ(n). The DFT of x(n) is given by
X(k) = Σ_{n=0}^{N-1} x(n) W_N^{kn} = Σ_{n=0}^{N-1} δ(n) e^{-j2πkn/N}
Since δ(n) = 1 for n = 0 and 0 for n ≠ 0,
X(k) = 1
2. x(n) = δ(n – p). The DFT of x(n) is given by
X(k) = Σ_{n=0}^{N-1} δ(n – p) e^{-j2πkn/N}
Since δ(n – p) = 1 for n = p and 0 for n ≠ p,
X(k) = e^{-j2πkp/N} = W_N^{kp}
3. x(n) = a^n for 0 ≤ n ≤ N – 1. The DFT of x(n) is given by
X(k) = Σ_{n=0}^{N-1} a^n W_N^{kn} = Σ_{n=0}^{N-1} [a W_N^k]^n
 = (1 – [a W_N^k]^N) / (1 – a W_N^k) = (1 – a^N e^{-j2πk}) / (1 – a e^{-j2πk/N})
 = (1 – a^N) / (1 – a e^{-j2πk/N})
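Part 3 of Example 7.12 can be checked numerically (a NumPy sketch; a = 0.9 and N = 8 are arbitrary choices):

```python
import numpy as np

# Verifying X(k) = (1 - a^N) / (1 - a e^{-j 2*pi*k/N}) for x(n) = a^n:
a, N = 0.9, 8
x = a ** np.arange(N)
X = np.fft.fft(x)

k = np.arange(N)
closed = (1 - a ** N) / (1 - a * np.exp(-2j * np.pi * k / N))

matches = np.allclose(X, closed)
```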
Example 7.13 Find the DFT of the signal
x(n) = 1 for 2 ≤ n ≤ 5, and 0 for n = 0, 1, 6, 7, 8, 9
Solution: Here N = 10, and the DFT of x(n) is given by
X(k) = Σ_{n=0}^{9} x(n) W_N^{kn} = Σ_{n=2}^{5} W_N^{kn}
 = Σ_{n=0}^{5} W_N^{kn} – Σ_{n=0}^{1} W_N^{kn}
 = (1 – W_N^{6k})/(1 – W_N^k) – (1 – W_N^{2k})/(1 – W_N^k)
 = (W_N^{2k} – W_N^{6k})/(1 – W_N^k)
 = (e^{-j4πk/N} – e^{-j12πk/N})/(1 – e^{-j2πk/N}), for k = 0, 1, …, N – 1
Example 7.14 If x(n) = δ(n) + δ(n – 3), find the DFT of x(n) with N = 4.
Solution:
X(k) = Σ_{n=0}^{N-1} x(n) W_N^{kn}, for k = 0, 1, …, N – 1
 = Σ_{n=0}^{3} x(n) W_4^{kn}
 = x(0) W_4^0 + x(1) W_4^k + x(2) W_4^{2k} + x(3) W_4^{3k}, for k = 0, 1, 2, 3
Substituting x(0) = 1, x(1) = 0, x(2) = 0, x(3) = 1, we get
X(k) = 1 + W_4^{3k} = 1 + e^{-j6πk/4} = 1 + e^{-j3πk/2}, for k = 0, 1, 2, 3
X(0) = 1 + 1 = 2
X(1) = 1 + e^{-j3π/2} = 1 + j
X(2) = 1 + e^{-j3π} = 1 – 1 = 0
X(3) = 1 + e^{-j9π/2} = 1 – j
X(k) = {2, 1 + j, 0, 1 – j}

Example 7.15 If X(k) = {2, 1 + j, 0, 1 – j}, find the signal x(n) with N = 4.
Solution: The inverse discrete Fourier transform is given by
x(n) = (1/N) Σ_{k=0}^{N-1} X(k) W_N^{-kn}, for n = 0, 1, …, N – 1
With N = 4,
x(n) = (1/4) Σ_{k=0}^{3} X(k) W_4^{-kn} = (1/4)[X(0) + X(1) e^{jπn/2} + X(2) e^{jπn} + X(3) e^{j3πn/2}], for n = 0, 1, 2, 3
Evaluating x(n) for n = 0, 1, 2, 3, we get
x(0) = (1/4)[X(0) + X(1) + X(2) + X(3)] = (1/4)[2 + 1 + j + 0 + 1 – j] = 1
Similarly, x(1) = 0, x(2) = 0, x(3) = 1. Hence x(n) = {1, 0, 0, 1}.

Example 7.16 Let x(n) = {1, 2, 3, 4}. Compute the values of the following sequences:
1. x1(n) = x((n – 1))_4
2. x2(n) = x((n + 1))_4
Solution:
1. The sequence x1(n) is obtained by shifting x(n) circularly by one sample (a circular delay):
x1(n) = x((n – 1))_4 = {4, 1, 2, 3}
2. The sequence x2(n) is obtained by shifting x(n) circularly by one sample in the opposite direction (a circular advance):
x2(n) = x((n + 1))_4 = {2, 3, 4, 1}
Example 7.18 Perform the circular convolution of the signals x1(n) = {2, 1, 2, 1} and x2(n) = {1, 2, 3, 4}.
Solution: The circular convolution of the two sequences x1(n) and x2(n) is given by
y(n) = Σ_{k=0}^{N-1} x1(k) x2((n – k))_N, 0 ≤ n ≤ N – 1
Substituting N = 4 in the above equation, we get
y(n) = Σ_{k=0}^{3} x1(k) x2((n – k))_4, 0 ≤ n ≤ 3
For n = 0, with x1(n) and x2((–n))_4 plotted on two concentric circles,
y(0) = Σ_{k=0}^{3} x1(k) x2((–k))_4 = 2 + 4 + 6 + 2 = 14
For n = 1, with x1(n) and x2((1 – n))_4 plotted on two concentric circles,
y(1) = Σ_{k=0}^{3} x1(k) x2((1 – k))_4 = 4 + 1 + 8 + 3 = 16
For n = 2, with x1(n) and x2((2 – n))_4 plotted on two concentric circles,
y(2) = Σ_{k=0}^{3} x1(k) x2((2 – k))_4 = 6 + 2 + 2 + 4 = 14
For n = 3, with x1(n) and x2((3 – n))_4 plotted on two concentric circles,
y(3) = Σ_{k=0}^{3} x1(k) x2((3 – k))_4 = 8 + 3 + 4 + 1 = 16
Hence, the circular convolution of x1(n) and x2(n) is given by
y(n) = {14, 16, 14, 16}

Example 7.19 Compute the circular convolution of the following sequences using the DFT and IDFT method: x(n) = {1, 2, 3, 4} and y(n) = {–1, –1, –3, –4}.
Solution: The DFT of x(n) for N = 4 is given by
X(k) = Σ_{n=0}^{3} x(n) W_4^{kn}, for k = 0, 1, 2, 3
Substituting the k and n values, we get
X(k) = {10, –2 + j2, –2, –2 – j2}
The DFT of the second sequence for N = 4 is given by
Y(k) = Σ_{n=0}^{3} y(n) W_4^{kn}, for k = 0, 1, 2, 3
Substituting the k and n values, we get
Y(k) = {–10, –2 – j2, 2, –2 + j2}
From the circular convolution property of the DFT, eq. (7.25),
DFT[x(n) ⊗ y(n)] = X(k) Y(k)
and the circular convolution follows as the IDFT of X(k) Y(k).

Example 7.20 Compute the circular convolution of the signals x1(n) = {2, 1, 2, 1} and x2(n) = {1, 2, 3, 4} from their linear convolution.
Solution: The linear convolution of the two sequences x1(n) of length n1 samples and x2(n) of length n2 samples is
y(n) = x1(n) * x2(n) = Σ_{k=0}^{3} x1(k) x2(n – k) = {2, 5, 10, 16, 12, 11, 4}
The length of the linear convolution result is L = 7.
The maximum length of x1(n) and x2(n) is N = max(n1, n2) = 4.
The number of zeros to be padded to the linear convolution result is P = 2 × N – L = 8 – 7 = 1.
Padding P zeros at the end of the convolution result gives
y(n) = {2, 5, 10, 16, 12, 11, 4, 0}
whose length is N1 = L + P = 8. Adding each of the last N1/2 = 4 samples to the corresponding one of the first four samples gives the circular convolution
x1(n) ⊗ x2(n) = {2 + 12, 5 + 11, 10 + 4, 16 + 0} = {14, 16, 14, 16}

Example 7.21 Compute the linear convolution from the circular convolution of the signals x1(n) = {2, 1, 2, 1} and x2(n) = {1, 2, 3, 4}.
Solution: Append (n2 – 1) = 3 zeros to the end of the sequence x1(n):
x1(n) = {2, 1, 2, 1, 0, 0, 0}
Similarly, append (n1 – 1) = 3 zeros to the end of x2(n):
x2(n) = {1, 2, 3, 4, 0, 0, 0}
The circular convolution of the zero-padded sequences x1(n) and x2(n) gives the linear convolution
y(m) = x1(n) ⊗ x2(n) = {2, 5, 10, 16, 12, 11, 4}
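The procedure of Example 7.21 can be sketched in NumPy; the helper name `linear_from_circular` is illustrative, and the circular convolution of the zero-padded sequences is computed here via the DFT for brevity.

```python
import numpy as np

def linear_from_circular(x1, x2):
    """Linear convolution obtained as a circular convolution of the
    zero-padded sequences (circular convolution done via the DFT)."""
    n1, n2 = len(x1), len(x2)
    L = n1 + n2 - 1
    a = np.concatenate([x1, np.zeros(n2 - 1)])   # append n2 - 1 zeros
    b = np.concatenate([x2, np.zeros(n1 - 1)])   # append n1 - 1 zeros
    y = np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)).real
    return np.round(y[:L]).astype(int)

y = linear_from_circular([2, 1, 2, 1], [1, 2, 3, 4])
```

For the sequences of Example 7.21 this reproduces the seven-sample linear convolution.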
Problems
1. Find the DFT of the sequences
(a) x(n) = {1, 1, 0, 0}
(b) x(n) = 1/5 for –1 ≤ n ≤ 1, and 0 otherwise.
2. Find the 4-point DFT of the sequence x(n) = cos(nπ/4).
3. Find the DFT of x(n) = {0.5, 0, 0.5, 0}.
4. Find the IDFT of X(k) = {3, 2 + j, 1, 2 – j}.
5. Find the IDFT of X(k) = {0, –1 – j, 6, –1 + j}.
6. Find the IDFT of X(k) = {5, 0.5 + j0.866, 0.5 – j0.866}.
7. The DFT of a real signal is {2, 1 – j3, A, 2 + j1, 0, B, 3 – j5, C}. Find A, B and C.
8. Let x(n) = {2, A, 3, 0, 4, 0, B, 5}. If X(0) = 18 and X(4) = 0, find A and B.
9. The DFT of a real signal is X(k) = {1, A, –2, B, –5, j3, C, 2 – j}. What is its signal energy?
10. What do you mean by a transformation?
11. What is the basic difference between the time-domain and frequency-domain representations?
12. What is a spectrum analyzer? How is it different from a CRO?
13. What is the difference between the discrete-time Laplace transformation and the Z-transformation?
Multiple-Choice Questions
1. Give the expression for the symmetry property.
(a) W_N^{k+N/2} = W_N^k  (b) W_N^{k+N/2} = –W_N^k
(c) W_N^{k+N/2} = W_N^{k/2}  (d) W_N^{k+N} = –W_N^{k/2}
2. Give the expression for the periodicity property.
(a) W_N^{k+N} = W_N^k  (b) W_N^{k+N/2} = –W_N^k
(c) W_N^{k+N} = W_N^{k/2}  (d) W_N^{k+N} = –W_N^k
3. For a complex-valued sequence x(n) of N points, the DFT may be expressed as
(a) X_R(k) = Σ_{n=0}^{N-1} [x_R(n) cos(2πkn/N) – x_I(n) sin(2πkn/N)], X_I(k) = –Σ_{n=0}^{N-1} [x_R(n) sin(2πkn/N) – x_I(n) cos(2πkn/N)]
(b) X_R(k) = Σ_{n=0}^{N-1} [x_R(n) cos(2πkn/N) + x_I(n) sin(2πkn/N)], X_I(k) = –Σ_{n=0}^{N-1} [x_R(n) sin(2πkn/N) + x_I(n) cos(2πkn/N)]
(c) X_R(k) = Σ_{n=0}^{N-1} [x_R(n) cos(2πkn/N) – x_I(n) sin(2πkn/N)], X_I(k) = –Σ_{n=0}^{N-1} [x_R(n) sin(2πkn/N) + x_I(n) cos(2πkn/N)]
(d) X_R(k) = Σ_{n=0}^{N-1} [x_R(n) cos(2πkn/N) + x_I(n) sin(2πkn/N)], X_I(k) = –Σ_{n=0}^{N-1} [x_R(n) sin(2πkn/N) – x_I(n) cos(2πkn/N)]
4. The divide-and-conquer approach is based on the decomposition of an N-point DFT into successively ________.
(a) smaller FFTs  (b) smaller DFTs  (c) larger DFTs  (d) smaller IDFTs
5. The sequence x(n) is stored in a two-dimensional data array, with l the row index and m the column index (N = LM). Give the row-wise mapping of the index n to the indices (l, m).
(a) n = Ml + m  (b) n = l + mL  (c) n = l + m  (d) n = l – m
6. Order the steps of the divide-and-conquer algorithm for computation of the DFT:
1. Compute the L-point DFT of each column.
2. Compute the M-point DFT of each row.
3. Store the signal column-wise.
4. Multiply the resulting array by the phase factor W_N^{lq}.
5. Read the resulting array row-wise.
(a) 4, 2, 1, 3, 5  (b) 3, 1, 4, 2, 5  (c) 3, 2, 4, 1, 5  (d) None of the above
7. Butterfly computations require the _________ at various stages, in either natural or bit-reversed order.
(a) W_N^k  (b) phase factor  (c) (a) or (b)  (d) None of the above
8. The computational algorithms for the DFT and IDFT basically involve the _____ type of computations.
(a) different  (b) same  (c) (a) or (b)  (d) None of the above
9. W_N^2 =
(a) W_{2N}  (b) W_{N/2}  (c) W_{N/4}  (d) W_N
10. Give the expression for the twiddle factor.
(a) W_N = e^{-j2π/N}  (b) W_N = e^{jπ/N}  (c) W_N = e^{j2π/N}  (d) W_N = e^{-jπ/N}
11. W_8^4 = _______, W_8^8 = _______.
(a) 1, –1  (b) –1, 1  (c) 0, –1  (d) 1, 0
12. The radix-2 algorithm requires _______ operations to compute the DFT.
(a) M log2 N  (b) M log2 M  (c) N log2 M  (d) N log2 N
13. For DIF, the input is in _______ order while the output is in _______ order.
(a) natural, bit-reversed  (b) bit-reversed, natural
(c) bit-reversed, bit-reversed  (d) natural, natural
14. The storage required to perform the butterfly operation on a pair of complex numbers is _____ storage registers.
(a) N  (b) 2N  (c) N/2  (d) N²
15. A discrete-time signal x(n) is periodic if
(a) x(n + N) = x(–n)  (b) x(n + N) = 0  (c) x(n + N) = x(n)  (d) x(n + N) = 1
16. The Fourier series representation of x(n) consists of N harmonically related exponential functions, given by
(a) e^{jωn} = e^{j2πkn/N}, k = 0, 1, 2, …, N – 1
(b) e^{jωn} = e^{-j2πkn/N}, k = 0, 1, 2, …, N – 1
(c) e^{jωn} = e^{j2πkω/N}, k = 0, 1, 2, …, N – 1
(d) e^{jωn} = e^{2πkω/N}, k = 0, 1, 2, …, N – 1
17. The Fourier coefficients c_k of x(n) are formed from the N harmonically related exponential functions given by
(a) e^{jωn} = e^{j2πkn/N}, k = 0, 1, 2, …, N – 1
(b) e^{jωn} = e^{-j2πkn/N}, k = 0, 1, 2, …, N – 1
(c) e^{jωn} = e^{j2πkω/N}, k = 0, 1, 2, …, N – 1
(d) e^{jωn} = e^{2πkω/N}, k = 0, 1, 2, …, N – 1
18. The frequency range of a discrete signal is unique over the interval
(a) (0, π)  (b) (–π, 0)  (c) (–π, π)  (d) (π, 2π)
19. The Fourier transform X(e^{jω}) of x(n) is given by
(a) X(e^{jω}) = Σ_{n=-∞}^{∞} x(n) e^{ωn}
(b) X(e^{jω}) = Σ_{n=-∞}^{∞} x(n) e^{-jωn}
(c) X(e^{jω}) = Σ_{n=-∞}^{∞} x(n) e^{jωn}
(d) X(e^{jω}) = Σ_{n=-∞}^{∞} x(n) e^{-ωn}
20. The inverse Fourier transform of X(e^{jω}) is given by
(a) x(n) = (1/π) ∫_0^π X(e^{jω}) e^{jωn} dω
(b) x(n) = (1/2π) ∫_{-π}^{π} X(e^{jω}) e^{jωn} dω
(c) x(n) = (1/π) ∫_{-π}^{π} X(e^{jω}) e^{-jωn} dω
(d) x(n) = (1/2π) ∫_{-π}^{π} X(e^{jω}) e^{-jωn} dω
21. The Fourier spectrum X(e^{jω}) of a discrete-time sequence x(n) will have
(a) both magnitude and phase components
(b) neither magnitude nor phase components
(c) only a magnitude component
(d) only a phase component
22. The Fourier spectrum X(e^{jω}) of the sequence δ(n) is
(a) e^{-jω}  (b) e^{jω}  (c) 1  (d) 0
23. The Fourier spectrum X(e^{jω}) of the sequence δ(n – 2) is
(a) 2e^{-2jω}  (b) e^{-2jω}  (c) 2e^{-jω}  (d) e^{-jω}
24. The Fourier spectrum X(e^{jω}) of the unit step sequence is
(a) 1/(1 – e^{-jω})  (b) 1/(1 – e^{jω})  (c) 1/(1 + e^{jω})  (d) 1/(1 + e^{-jω})
25. If x(n) is real, then the magnitude component of X(e^{jω}) is
(a) an odd function  (b) an even function
(c) both an even and an odd function  (d) neither an even nor an odd function
26. The discrete Fourier transform (DFT) of x(n) is given by
(a) X(k) = Σ_{n=0}^{N-1} x(n) e^{-j2πkn/N}
(b) X(k) = Σ_{n=0}^{N-1} x(n) e^{-jπkn/N}
(c) X(k) = Σ_{n=0}^{N-1} x(n) e^{j2πkn/N}
(d) X(k) = Σ_{n=0}^{N-1} x(n) e^{jπkn/N}
27. The inverse discrete Fourier transform of X(k) is given by
(a) x(n) = (1/N) Σ_{k=0}^{N-1} X(k) e^{-j2πkn/N}
(b) x(n) = (1/N) Σ_{k=0}^{N-1} X(k) e^{-jπkn/N}
(c) x(n) = (1/N) Σ_{k=0}^{N-1} X(k) e^{j2πkn/N}
(d) x(n) = (1/N) Σ_{k=0}^{N-1} X(k) e^{jπkn/N}
28. The circular convolution of two sequences, both of length N, in the time domain is equivalent to
(a) convolution of their spectra in the frequency domain
(b) multiplication of their spectra in the frequency domain
(c) circular convolution of their spectra in the frequency domain
(d) exponential product of their spectra in the frequency domain
29. The multiplication-in-time property states that
(a) DFT[x1(n) x2(n)] = (1/N)[X1(k) ⊗ X2(k)]
(b) DFT[x1(n) x2(n)] = X1(k) ⊗ X2(k)
(c) (1/N) DFT[x1(n) x2(n)] = X1(k) ⊗ X2(k)
(d) DFT[x1(n) x2(n)] = N[X1(k) ⊗ X2(k)]
30. The multiplication-by-exponentials property states that
(a) DFT[x(n) e^{j2πnM/N}] = X((k – M))_N
(b) DFT[x(n) e^{j2πnM/N}] = X((k + M))_N
(c) DFT[x(n) e^{j2πnM/N}] = X((k – M/N))
(d) DFT[x(n) e^{j2πnM/N}] = X((k + M/N))
31. The circular convolution of the sequences x1(n) and x2(n) is given by
(a) x1(n) ⊗ x2(n) = Σ_{m=0}^{N-1} x1(m) x2((n – m))_N
(b) x1(n) ⊗ x2(n) = Σ_{m=0}^{N-1} x1(n) x2((n – m))_N
(c) x1(n) ⊗ x2(n) = Σ_{m=0}^{N-1} x1(m) x2((m – n))_N
(d) x1(n) ⊗ x2(n) = Σ_{m=0}^{N-1} x1(m) x2((m))_N
32. The DFT of δ(n – p) is given by
(a) W_N^k  (b) W_N^{-kp}  (c) W_N^{kp}  (d) –W_N^{-kp}
33. If the Z-transform of a sequence x(n) is evaluated on a unit circle with N equally spaced points, then it is equivalent to the
(a) Fourier series of x(n)  (b) Fourier transform of x(n)
(c) discrete Fourier transform of x(n)  (d) None of the above
34. The Fourier transform of x(n) is
(a) continuous in ω and periodic  (b) continuous in ω and non-periodic
(c) discrete in ω and periodic  (d) discrete in ω and non-periodic
35. Zero padding means
(a) the value of a sequence X(k) is zero
(b) zero values appearing in X(k)
(c) padding dummy zero values in X(k)
(d) None of the above
Key to the Multiple-Choice Questions
1. (b)  2. (d)  3. (d)  4. (b)  5. (a)  6. (c)  7. (c)  8. (b)  9. (b)
10. (a)  11. (b)  12. (d)  13. (b)  14. (b)  15. (c)  16. (a)  17. (b)  18. (c)
19. (b)  20. (b)  21. (a)  22. (c)  23. (b)  24. (a)  25. (b)  26. (a)  27. (c)
28. (b)  29. (a)  30. (a)  31. (a)  32. (a)  33. (c)  34. (b)  35. (c)
8 Fast Fourier Transform

8.1 INTRODUCTION
In the previous chapter, a computational method for converting an N-point discrete-time sequence x(n) to a frequency-domain N-point sequence X(k), together with its properties, was presented. However, the direct computation of X(k) requires N² complex multiplications and N(N – 1) complex additions. Many algorithms have been developed for efficiently computing the N-point DFT of the sequence x(n); the most popular of them is the Fast Fourier Transform (FFT) developed by Cooley and Tukey. These algorithms achieve their reduction in computational complexity by using the divide-and-conquer approach: the DFTs of subsequences are computed using small-size DFTs and then combined to obtain the complete transform. Depending on the size of the subsequences, these algorithms are collectively called FFT algorithms. When we require the values of the DFT over only a portion of the frequency range 0 ≤ ω ≤ 2π, other algorithms may be more convenient and flexible, even though they are less efficient than the FFT algorithms for computing all the values of the DFT. The Goertzel algorithm and the chirp transform are examples of such algorithms. In this chapter, the widely used decimation-in-time (DIT) and decimation-in-frequency (DIF) algorithms for computing the FFT are presented. Further, applications of the FFT that allow reduction of the computational complexity of various problems are presented.
8.2 MOTIVATION FOR FAST FOURIER TRANSFORM
The DFT of a sequence x(n) of length N is identical to the samples of the Fourier transform at the equally spaced frequencies ω_k = 2πk/N, given by
X(k) = Σ_{n=0}^{N-1} x(n) e^{-j2πnk/N} = Σ_{n=0}^{N-1} x(n) W_N^{nk}, k = 0, 1, …, N – 1   (8.1)
where W_N = e^{-j2π/N} is a complex-valued phase factor and is an Nth root of unity. Similarly, the IDFT is given by
x(n) = (1/N) Σ_{k=0}^{N-1} X(k) e^{j2πnk/N} = (1/N) Σ_{k=0}^{N-1} X(k) W_N^{-nk}, n = 0, 1, …, N – 1   (8.2)
Fast Fourier Transform 177
From the above equation for X(k), it is clear that for each value of k, the direct computation of X(k) involves N complex multiplications (4N real multiplications) and N – 1 complex additions (4N – 2 real additions). Therefore, to compute all the values of X(k), a total of N² complex multiplications and N(N – 1) complex additions are required. Hence, the direct computation of the DFT requires a number of complex arithmetic operations proportional to N², which becomes large for large values of N. However, the computational efficiency of the DFT procedure can be improved by using the symmetry and periodicity properties of W_N^{kn}, given by
1. Complex conjugate symmetry: W_N^{k(N – n)} = W_N^{-kn} = (W_N^{kn})*
2. Periodicity: W_N^{kn} = W_N^{k(N + n)} = W_N^{n(N + k)}
Efficient algorithms for computing the DFT utilizing the above properties are called fast Fourier transform algorithms, presented next. These algorithms require on the order of N log2 N complex arithmetic operations. By adopting a divide-and-conquer approach, a computationally efficient algorithm can be developed. This approach depends on the decomposition of an N-point DFT into successively smaller-size DFTs. An N-point sequence, if N can be expressed as N = r1 r2 r3 … rm with r1 = r2 = r3 = … = rm = r, so that N = r^m, can be decimated into r-point sequences.
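The N² versus N log2 N gap can be made concrete by placing a direct matrix evaluation of eq. (8.1) next to NumPy's FFT (a sketch; the test length and the random input are arbitrary choices):

```python
import numpy as np

def direct_dft(x):
    """Direct O(N^2) evaluation of eq. (8.1): N complex multiplications
    per output bin, N^2 in total."""
    N = len(x)
    n = np.arange(N)
    W = np.exp(-2j * np.pi * np.outer(n, n) / N)   # W_N^{nk} as an N x N matrix
    return W @ x

x = np.random.default_rng(0).standard_normal(64)

same = np.allclose(direct_dft(x), np.fft.fft(x))   # FFT gives identical values

# Operation counts for N = 1024: N^2 versus N log2 N
N = 1024
direct_ops, fft_ops = N * N, int(N * np.log2(N))
```

For N = 1024 the counts are 1,048,576 against 10,240, a factor of about 100.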
8.3 DECIMATION-IN-TIME (DIT) FFT ALGORITHM
In computing the DFT, a dramatic reduction in computational complexity results from decomposing the computation into smaller DFT computations. Algorithms in which the decomposition is based on splitting the sequence x(n) into successively smaller subsequences are called decimation-in-time algorithms. This principle is illustrated by considering the special case of N being an integer power of 2, i.e., N = 2^r. Since N is an even integer, we can consider computing X(k) by separating x(n) into two (N/2)-point sequences, consisting of the even-numbered samples of x(n) and the odd-numbered samples of x(n). For each k, we can write
X(k) = Σ_{n=0}^{N-1} x(n) W_N^{nk}, k = 0, 1, …, N – 1   (8.3)
X(k) = Σ_{n even} x(n) W_N^{nk} + Σ_{n odd} x(n) W_N^{nk}, k = 0, 1, …, N – 1   (8.4)
With the substitution of variables n = 2p for n even and n = 2p + 1 for n odd,
X(k) = Σ_{p=0}^{(N/2)-1} x(2p) W_N^{2pk} + Σ_{p=0}^{(N/2)-1} x(2p + 1) W_N^{(2p+1)k}   (8.5)
 = Σ_{p=0}^{(N/2)-1} x(2p) (W_N^2)^{pk} + W_N^k Σ_{p=0}^{(N/2)-1} x(2p + 1) (W_N^2)^{pk}
However, W_N^2 = W_{N/2}, since
W_N^2 = e^{-2j(2π/N)} = e^{-j2π/(N/2)} = W_{N/2}   (8.6)
178
Digital Signal Processing
Consequently, using Eq. 8.6, Eq. 8.5 can be written as

X (k) = Σ_{p=0}^{(N/2)–1} x (2p) W_{N/2}^{pk} + W_N^k Σ_{p=0}^{(N/2)–1} x (2p + 1) W_{N/2}^{pk}
      = G (k) + W_N^k H (k),  k = 0, 1, …, N – 1        (8.7)
Each sum in Eq. 8.7 can be identified as an (N/2)-point DFT: the first sum is the (N/2)-point DFT of the even-numbered samples of the original sequence, and the second is the (N/2)-point DFT of the odd-numbered samples. Even though the index k ranges from 0 to N – 1, each of the terms G (k) and H (k) needs to be computed only for k from 0 to (N/2) – 1, since G (k) and H (k) are periodic in k with period N/2. After the two DFTs are computed, they are combined according to Eq. 8.7 to yield the N-point DFT X (k). Figure 8.1 shows the computation for N = 8. In the figure, two 4-point DFTs are computed, with G (k) denoting the 4-point DFT of the even-numbered samples and H (k) the 4-point DFT of the odd-numbered samples. X (0) is obtained by multiplying H (0) with W_8^0 and adding the product to G (0). Similarly, X (1) is obtained by multiplying H (1) with W_8^1 and adding the product to G (1).
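One decimation step, X (k) = G (k) + W_N^k H (k), can be sketched in a few lines and checked against the direct DFT. This is illustrative Python rather than the book's MATLAB; dft and dft_via_halves are our own helper names.

```python
import cmath

def dft(x):
    # Direct N^2 DFT, used as the reference
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * n * k / N) for n in range(N))
            for k in range(N)]

def dft_via_halves(x):
    # One decimation-in-time step: X(k) = G(k) + W_N^k H(k)  (Eq. 8.7)
    N = len(x)
    G = dft(x[0::2])          # (N/2)-point DFT of even-numbered samples
    H = dft(x[1::2])          # (N/2)-point DFT of odd-numbered samples
    X = []
    for k in range(N):
        Wk = cmath.exp(-2j * cmath.pi * k / N)
        # G and H are periodic in k with period N/2, hence k % (N//2)
        X.append(G[k % (N // 2)] + Wk * H[k % (N // 2)])
    return X

x = [1, 2, 3, 4, 5, 6, 7, 8]
err = max(abs(a - b) for a, b in zip(dft(x), dft_via_halves(x)))
print(err)  # maximum deviation from the direct DFT
```

The periodicity of G (k) and H (k) is exactly what lets X (4), …, X (7) reuse the values already computed for X (0), …, X (3).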
Fig. 8.1
Signal flow graph for computing the N-point DFT using two N/2 point DFTs.
However, to compute X (4) we need G (4) and H (4). Since G (k) and H (k) are periodic in k with period 4, X (4) can be computed using G (0) and H (0) as X (4) = G (0) + W_8^4 H (0). At this point, we can compare the computational complexity of the N-point DFT using the direct method and the method proposed in Fig. 8.1. Direct computation of the DFT requires N^2 complex
Fast Fourier Transform 179
multiplications and N (N – 1) complex additions. However, with the computation proposed in Fig. 8.1 we need two (N/2)-point DFTs (for computing G (k) and H (k)), requiring 2 (N/2)^2 complex multiplications (if the (N/2)-point DFTs are done by the direct method) and 2 (N/2)((N/2) – 1) complex additions. Further, to compute X (k) from G (k) and H (k) we require N more complex multiplications and additions. Hence, a total of N + 2 (N/2)^2 = N + N^2/2 complex multiplications is required. It is easy to verify that for N > 2, N + N^2/2 is less than N^2. Equation 8.7 corresponds to breaking the original N-point DFT computation into two (N/2)-point DFTs. If N/2 is even, as it is when N is a power of 2, we can compute each (N/2)-point DFT by breaking each sum in that equation into two (N/4)-point DFTs, which are then combined to yield the (N/2)-point DFTs. Thus, with g (s) denoting the even-numbered samples of x (n), G (k) can be calculated as

G (k) = Σ_{s=0}^{(N/2)–1} g (s) W_{N/2}^{sk} = Σ_{l=0}^{(N/4)–1} g (2l) W_{N/2}^{2lk} + Σ_{l=0}^{(N/4)–1} g (2l + 1) W_{N/2}^{(2l+1)k}        (8.8)

or

G (k) = Σ_{l=0}^{(N/4)–1} g (2l) W_{N/4}^{lk} + W_{N/2}^k Σ_{l=0}^{(N/4)–1} g (2l + 1) W_{N/4}^{lk}        (8.9)

Similarly, with h (s) denoting the odd-numbered samples of x (n), H (k) can be written as

H (k) = Σ_{l=0}^{(N/4)–1} h (2l) W_{N/4}^{lk} + W_{N/2}^k Σ_{l=0}^{(N/4)–1} h (2l + 1) W_{N/4}^{lk}        (8.10)
Thus, the (N/2)-point DFT G (k) can be obtained by combining the (N/4)-point DFTs of the sequences g (2l) and g (2l + 1). Similarly, the (N/2)-point DFT H (k) can be obtained by combining the (N/4)-point DFTs of the sequences h (2l) and h (2l + 1). For the 8-point DFT, this step reduces the computation to 2-point DFTs, as shown in Fig. 8.2.
Fig. 8.2
Signal flow graph for computing the N-point DFT using N/4 point DFTs.
This procedure can be applied recursively until each of the sub-DFTs reduces to a 2-point DFT, which can be calculated directly using the signal flow graph below. This elementary signal flow graph for computing the 2-point DFT is called a butterfly. With N a power of 2, the recursion requires v = log2 N stages.
Fig. 8.3
Basic butterfly unit for computing the 2-point DFT.
The complete signal flow graph for computing the 8-point DFT of a sequence is shown in Fig. 8.4.
Fig. 8.4
Complete signal flow graph for computing the 8-point DFT using DIT-FFT.
Computational Complexity With N a power of 2, the computation of the N-point DFT using the decimation-in-time (DIT) algorithm proceeds in v = log2 N stages, each containing N/2 2-point DFTs (implemented using the basic butterfly shown in Fig. 8.3). With each butterfly requiring 2 complex multiplications, the total number of complex multiplications in the computation of the N-point DFT using DIT-FFT is 2 (N/2) log2 N = N log2 N. A further reduction in the cost of the butterfly can be obtained by noting that W_N^{r + N/2} = W_N^{N/2} W_N^r = –W_N^r. With this observation, the butterfly computation of Fig. 8.3 can be simplified to the form shown in Fig. 8.5, which requires only one complex multiplication, so the total number of complex multiplications in the DIT-FFT method becomes (N/2) log2 N. For example, with N = 1024, the direct computation of the N-point DFT requires 1024^2 = 1048576 complex multiplications, whereas the DIT-FFT algorithm requires (1024/2) log2 1024 = 5120 complex multiplications.
Fig. 8.5
Modified butterfly unit requiring only one complex multiplication.
From the signal flow graph shown in Fig. 8.4, we notice that once the DFTs of the first stage have been computed and the inputs to the second stage of butterflies obtained, the memory space used for holding the input sequence of the first stage is no longer required. Hence, to optimize the memory requirement, the results of the first stage can be stored in the memory space used for holding the input sequence. This method is called in-place computation. In fact, the result of each butterfly unit operating on two input variables can be stored in the same two locations. Hence, the computation of the N-point DFT using the DIT-FFT algorithm requires N memory locations, each capable of holding a complex number.
Bit-Reversed Order In order that the computation may be done in place as just described, the input sequence must be stored (or at least accessed) in a non-sequential order, as shown in the signal flow graph of Fig. 8.4. The order in which the input data are stored and accessed is referred to as bit-reversed order. If (n2, n1, n0) is the binary representation of the index of the sequence x (n), then the sequence value x (n2, n1, n0) is stored in the array position x (n0, n1, n2). For example, writing the indices of the input sequence in Fig. 8.4 in binary format:
x [000] = x (0)    x [100] = x (4)
x [010] = x (2)    x [110] = x (6)
x [001] = x (1)    x [101] = x (5)
x [011] = x (3)    x [111] = x (7)
Example 8.1 Compute the 4-point DFT of the sequence x (n) = {1, 2, 3, 4} using the decimation-in-time FFT algorithm. Also determine the number of complex multiplications.
Solution: The DFT is computed using the signal flow graph shown in Fig. 8.6. Notice that the input is in bit-reversed order and the output is in normal order. The number of complex multiplications (with N = 4) is (N/2) log2 N = (4/2) log2 4 = 4.
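The in-place, bit-reversed-input computation just described can be sketched in Python (a minimal illustration, not the book's code; fft_dit and bit_reverse_indices are assumed helper names):

```python
import cmath

def bit_reverse_indices(N):
    # Permutation used to load the input in bit-reversed order
    bits = N.bit_length() - 1
    return [int(format(i, f'0{bits}b')[::-1], 2) for i in range(N)]

def fft_dit(x):
    # In-place radix-2 DIT FFT; N must be a power of 2
    N = len(x)
    a = [x[i] for i in bit_reverse_indices(N)]   # bit-reversed input order
    span = 2
    while span <= N:                             # log2(N) stages
        for start in range(0, N, span):
            for i in range(span // 2):
                w = cmath.exp(-2j * cmath.pi * i / span)   # twiddle factor
                u, t = a[start + i], w * a[start + i + span // 2]
                # Butterfly of Fig. 8.5: one multiplication, two additions,
                # results overwriting the same two locations (in place)
                a[start + i] = u + t
                a[start + i + span // 2] = u - t
        span *= 2
    return a

print(fft_dit([1, 2, 3, 4]))   # Example 8.1: {10, -2+2j, -2, -2-2j}
```

For N = 8 the index permutation is [0, 4, 2, 6, 1, 5, 3, 7], matching the ordering x(0), x(4), x(2), x(6), x(1), x(5), x(3), x(7) listed above.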
Fig. 8.6
4-point DFT using DIT-FFT algorithm.
Example 8.2 Compute the 8-point DFT of the sequence x (n) = {1, 1, 1, 1, 0, 0, 0, 0} using the decimation-in-time FFT algorithm.
Solution: The 8-point DFT X (k), 0 ≤ k ≤ 7, of the sequence x (n) can be computed using the signal flow graph shown in Fig. 8.7. With N = 8, the four twiddle constants needed in the signal flow graph are
W^0 = 1
W^1 = e^{–j2π/8} = 0.707 – j0.707
W^2 = e^{–j4π/8} = –j
W^3 = e^{–j6π/8} = –0.707 – j0.707
The final output sequence X (k) is given by
X (k) = {4, 1 – j2.414, 0, 1 – j0.414, 0, 1 + j0.414, 0, 1 + j2.414}
8.4 DECIMATION-IN-FREQUENCY (DIF) FFT ALGORITHM
The decimation-in-time FFT algorithm is based on computing the DFTs of smaller subsequences of the input sequence and then combining them. Alternatively, we can divide the output sequence X (k) into smaller and smaller subsequences in a similar manner. FFT algorithms based on this procedure are called decimation-in-frequency FFT algorithms. For simplicity, the discussion is restricted to N a power of 2. Consider computing the even-numbered and odd-numbered frequency samples separately. Since

X (k) = Σ_{n=0}^{N–1} x (n) W_N^{nk},  k = 0, 1, …, N – 1        (8.11)

the even-numbered samples are

X (2p) = Σ_{n=0}^{N–1} x (n) W_N^{n(2p)},  p = 0, 1, …, (N/2) – 1        (8.12)
which can be expressed as

X (2p) = Σ_{n=0}^{(N/2)–1} x (n) W_N^{2np} + Σ_{n=N/2}^{N–1} x (n) W_N^{2np},  p = 0, 1, …, (N/2) – 1        (8.13)
With a substitution of variables in the second summation in Eq. 8.13, we obtain

X (2p) = Σ_{n=0}^{(N/2)–1} x (n) W_N^{2np} + Σ_{n=0}^{(N/2)–1} x (n + (N/2)) W_N^{2p(n + N/2)},  p = 0, 1, …, (N/2) – 1        (8.14)

Finally, using the periodicity of W_N^{2np}, i.e., W_N^{2p(n + (N/2))} = W_N^{2np} W_N^{pN} = W_N^{2np}, and since W_N^2 = W_{N/2}, Eq. 8.14 can be expressed as

X (2p) = Σ_{n=0}^{(N/2)–1} (x (n) + x (n + N/2)) W_{N/2}^{pn},  p = 0, 1, …, (N/2) – 1        (8.15)
Equation 8.15 is the (N/2)-point DFT of the (N/2)-point sequence obtained by adding the first half and the last half of the input sequence. We can now obtain the odd-numbered frequency samples, given by

X (2p + 1) = Σ_{n=0}^{N–1} x (n) W_N^{n(2p+1)},  p = 0, 1, …, (N/2) – 1        (8.16)
As before, we can rearrange Eq. 8.16 as

X (2p + 1) = Σ_{n=0}^{(N/2)–1} x (n) W_N^{n(2p+1)} + Σ_{n=N/2}^{N–1} x (n) W_N^{n(2p+1)},  p = 0, 1, …, (N/2) – 1        (8.17)
An alternative form for the second summation in Eq. 8.17 is

Σ_{n=0}^{(N/2)–1} x (n + (N/2)) W_N^{(n + (N/2))(2p+1)} = W_N^{(N/2)(2p+1)} Σ_{n=0}^{(N/2)–1} x (n + (N/2)) W_N^{n(2p+1)}
                                                       = –Σ_{n=0}^{(N/2)–1} x (n + (N/2)) W_N^{n(2p+1)}        (8.18)
where we have used the facts W_N^{(N/2)(2p)} = 1 and W_N^{N/2} = –1. Substituting Eq. 8.18 into Eq. 8.17 and combining the two summations, we obtain

X (2p + 1) = Σ_{n=0}^{(N/2)–1} (x (n) – x (n + (N/2))) W_N^{n(2p+1)},  p = 0, 1, …, (N/2) – 1        (8.19)

X (2p + 1) = Σ_{n=0}^{(N/2)–1} (x (n) – x (n + (N/2))) W_N^n W_{N/2}^{np},  p = 0, 1, …, (N/2) – 1        (8.20)
Equation 8.20 is the (N/2)-point DFT of the sequence obtained by subtracting the second half of the input sequence from the first half and multiplying the resulting sequence by W_N^n. Thus, based on Eqs. 8.15 and 8.20, with g (n) = x (n) + x (n + (N/2)) and h (n) = x (n) – x (n + (N/2)), the DFT can be computed by first forming the sequences g (n) and h (n), then computing h (n) W_N^n, and finally computing the (N/2)-point DFTs of these two sequences to obtain the even-numbered and odd-numbered output points, respectively. This procedure in the case of an 8-point DFT is shown in Fig. 8.7.
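One decimation-in-frequency step, Eqs. 8.15 and 8.20, can be sketched and checked against the direct DFT. Again this is illustrative Python rather than the book's MATLAB; dft and dif_split are our own helper names.

```python
import cmath

def dft(x):
    # Direct DFT used as the reference
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * n * k / N) for n in range(N))
            for k in range(N)]

def dif_split(x):
    # One decimation-in-frequency step (Eqs. 8.15 and 8.20):
    # even outputs from g(n) = x(n) + x(n + N/2),
    # odd outputs from  h(n) = (x(n) - x(n + N/2)) * W_N^n
    N = len(x)
    g = [x[n] + x[n + N // 2] for n in range(N // 2)]
    h = [(x[n] - x[n + N // 2]) * cmath.exp(-2j * cmath.pi * n / N)
         for n in range(N // 2)]
    even, odd = dft(g), dft(h)
    X = [0] * N
    X[0::2], X[1::2] = even, odd   # interleave even- and odd-numbered samples
    return X

x = [1, 1, 1, 1, 0, 0, 0, 0]
err = max(abs(a - b) for a, b in zip(dif_split(x), dft(x)))
print(err)  # maximum deviation from the direct DFT
```

Note the contrast with decimation in time: here the input halves are combined before any sub-DFT is taken, and it is the output index that is split into even and odd.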
Fig. 8.7
8-point DFT using DIT-FFT algorithm.
Proceeding in a manner similar to that followed in deriving the decimation-in-time algorithm, we note that since N is a power of 2, N/2 is even; consequently, the (N/2)-point DFTs can be computed by evaluating the even-numbered and odd-numbered output points of those DFTs separately. As in the procedure leading to Eqs. 8.15 and 8.20, this is accomplished by combining the first half and the last half of the input points of each (N/2)-point DFT and then computing (N/4)-point DFTs. The flow graph resulting from this step is shown in Fig. 8.8.
Fig. 8.8
Flow graph of decimation-in-frequency decomposition of an N-point DFT computation into two (N/2) point DFT computations (N = 8).
For the 8-point DFT case, the computation has now been reduced to computing 2-point DFTs, which are obtained by adding and subtracting the input points, as discussed in connection with Fig. 8.5. The flow graph of a typical 2-point DFT in the computation of the DFT using the decimation-in-frequency algorithm is shown in Fig. 8.9.
Fig. 8.9
Modified butterfly flow graph requiring only one complex multiplication.
Thus, the 2-point DFTs in Fig. 8.8 can be replaced with the computation shown in Fig. 8.9, so that the 8-point DFT can be accomplished by the flow graph depicted in Fig. 8.10.
Fig. 8.10
Flow graph of a typical 2-point DFT as required in a typical stage of decimation-in-frequency decomposition.
Computational Complexity Assuming that N is a power of 2, the computation of the N-point FFT proceeds in log2 N stages, with N/2 2-point DFTs in each stage, resulting in (N/2) log2 N complex multiplications and N log2 N complex additions. Thus, the total numbers of complex multiplications and additions are the same for the decimation-in-time and decimation-in-frequency algorithms.
Example 8.3 Compute the 4-point DFT of the sequence x (n) = {1, 2, 3, 4} using the decimation-in-frequency FFT algorithm. Also determine the number of complex multiplications.
Solution: The DFT is computed as detailed in the signal flow graph shown in Fig. 8.12. The number of complex multiplications (with N = 4) is (N/2) log2 N = (4/2) log2 4 = 4.
Example 8.4 Compute the 8-point DFT of the sequence x (n) = {1, 1, 1, 1, 0, 0, 0, 0} using the decimation-in-frequency FFT algorithm.
Fig. 8.11
Flow graph of complete decimation-in-frequency decomposition of an 8-point DFT computation.
Fig. 8.12
4-point DFT using DIF-FFT algorithm.
Solution: The 8-point DFT X (k), 0 ≤ k ≤ 7, of the sequence x (n) can be computed using the signal flow graph shown in Fig. 8.13. With N = 8, the four twiddle constants needed in the signal flow graph are
W^0 = 1
W^1 = e^{–j2π/8} = 0.707 – j0.707
W^2 = e^{–j4π/8} = –j
W^3 = e^{–j6π/8} = –0.707 – j0.707
The values X (0), X (1), …, X (7) appear at the output of the flow graph in scrambled (bit-reversed) order. The final output sequence, in natural order, is given by
X (k) = {4, 1 – j2.414, 0, 1 – j0.414, 0, 1 + j0.414, 0, 1 + j2.414}
Example 8.5 Compute the 16-point DFT of the sequence
x (n) = 1 for 0 ≤ n ≤ 7, and x (n) = 0 for 8 ≤ n ≤ 15
Fig. 8.13
8-point DFT using DIF-FFT algorithm.
Solution: The DFT can be computed with the DIF-FFT algorithm using the signal flow graph shown in Fig. 8.14.
Fig. 8.14
16-point DFT using DIF-FFT algorithm.
8.5 INVERSE FAST FOURIER TRANSFORM (IFFT)
The inverse DFT of an N-point sequence is defined as

x (n) = (1/N) Σ_{k=0}^{N–1} X (k) e^{j2πnk/N} = (1/N) Σ_{k=0}^{N–1} X (k) W_N^{–nk} = (1/N) Σ_{k=0}^{N–1} X (k) W̄_N^{nk},  n = 0, 1, …, N – 1        (8.21)

Comparing Eq. 8.1 with Eq. 8.21, we see that the twiddle factor W_N is changed to W̄_N = W_N^{–1} and the sum is multiplied by a factor of 1/N. Hence, by modifying the FFT flow graph as shown in Fig. 8.15, we can achieve the IDFT computation with the same computational complexity as that of the FFT.
Fig. 8.15
IFFT computation using the modified DIF-FFT flow graph with twiddle factor W̄_N = W_N^{–1}.
Example 8.6 Compute the 4-point IDFT of the sequence in Example 8.1 using the decimation-in-time FFT algorithm.
Solution: The IDFT is computed using the block diagram shown in Fig. 8.16.
Fig. 8.16
4-point IDFT using modified DIT-FFT algorithm.
Example 8.7 Compute the 4-point IDFT of the sequence X (k), 0 ≤ k ≤ 3, in Example 8.3 using the decimation-in-frequency FFT algorithm.
Solution: The IDFT is computed using the block diagram shown in Fig. 8.17.
Fig. 8.17
4-point IDFT using modified DIF-FFT algorithm.
8.6 APPLICATIONS OF FFT ALGORITHMS
The FFT algorithms described in the previous sections are used in a wide range of applications, including spectrum estimation, linear filtering, and correlation. In fact, the FFT is considered the most efficient method for computing the DFT and the IDFT.
8.6.1 FFT Algorithm in Linear Convolution and Correlation
Direct FIR filtering of long data sequences is a computation with O(N^2) complexity. To reduce the computational complexity, a long data sequence can be divided into blocks; the output of the FIR filter is computed for each block, and the block outputs are later combined using the overlap-add or overlap-save method. To efficiently compute the output of the FIR filter for each block of input data, the FFT algorithm can be used instead of direct convolution. While the direct linear convolution of two long finite-length sequences requires on the order of N^2 complex arithmetic operations, it can be computed efficiently using the FFT by means of the following steps:
(a) The DFT H (k) of the unit sample response h (n), 0 ≤ n ≤ M – 1, of the FIR filter (after zero-padding to the FFT length N) is precomputed and stored in memory.
(b) For each block of input data x (n), 0 ≤ n ≤ L – 1, zero-pad to length N and compute the FFT coefficients X (k).
(c) The FFT coefficients X (k) are multiplied with the precomputed H (k), giving Y (k), the FFT coefficients of the output sequence for that block of input data.
(d) Finally, the IFFT algorithm is applied to Y (k), yielding the linear convolution of the block of data with the unit sample response of the FIR filter.
The size of the input block L is chosen such that the FFT size N = L + M – 1 is typically a power of 2. Hence, the computational complexity involved in the computation of the FIR filter output for each input block reduces to the order of N log2 N complex arithmetic operations. The computation of the cross-correlation between two sequences by means of the FFT algorithm is similar to the linear FIR filtering problem just described. In practical applications involving cross-correlation, at least one of the input sequences has finite duration and plays the role of the impulse response of the FIR filter. The second sequence may be a long sequence akin to the input of the FIR filter.
By time reversing the first sequence and computing its DFT, we have reduced the crosscorrelation to an equivalent convolution problem.
Example 8.8 Compute the linear convolution of the sequences x (n) = {1, 2, 1, 3, 2} and h (n) = {4, 3, 5, 1} using the FFT algorithm.
Solution: The FFT coefficients of the zero-padded unit sample response h1 (n) = {4, 3, 5, 1, 0, 0, 0, 0} are computed only once and stored in a memory device as
H (k) = {13.00, 5.4142 – 7.8284i, –1.00 – 2.00i, 2.5858 + 2.1716i, 5.00, 2.5858 – 2.1716i, –1.00 + 2.00i, 5.4142 + 7.8284i}
The FFT of the zero-padded input sequence is computed as
X (k) = {9.0000, –1.7071 – 4.5355i, 2.0000 + 1.0000i, –0.2929 – 2.5355i, –1.0000, –0.2929 + 2.5355i, 2.0000 – 1.0000i, –1.7071 + 4.5355i}
The output sequence is computed as the IFFT of X (k) H (k):
y (n) = {4, 11, 15, 26, 24, 22, 13, 2}
Verification:
clc; clear; close all;
% Direct Linear Convolution
x = [1 2 1 3 2]
h = [4 3 5 1]
y1 = conv(x,h)
% Linear Convolution using FFT
x = [1 2 1 3 2 0 0 0]
Xk = fft(x);
h = [4 3 5 1 0 0 0 0]
Hk = fft(h);
y2 = real(ifft(Xk.*Hk))
8.6.2 Efficient Computation of DFT of Two Real Sequences
Suppose g1 (n) and g2 (n) are two real-valued sequences of length N, and let g (n) be a complex-valued sequence defined as

g (n) = g1 (n) + j g2 (n),  0 ≤ n ≤ N – 1        (8.22)

The DFT operation is linear, and hence the DFT of g (n) can be expressed as

G (k) = G1 (k) + j G2 (k)        (8.23)

The sequences g1 (n) and g2 (n) can be expressed in terms of g (n) as follows:

g1 (n) = (g (n) + g* (n)) / 2
g2 (n) = (g (n) – g* (n)) / 2j        (8.24)

Hence, the DFTs of g1 (n) and g2 (n) are

G1 (k) = (G (k) + G* (N – k)) / 2
G2 (k) = (G (k) – G* (N – k)) / 2j        (8.25)
Thus, by performing a single DFT on the complex-valued sequence g (n), we have obtained the DFTs of two real sequences with only the small amount of additional computation involved in forming G1 (k) and G2 (k).
Example 8.9 Compute the 8-point DFTs of the two real-valued sequences g1 (n) and g2 (n) using the DFT algorithm only once. The sequences are given by
g1 (n) = {1, 2, –1, 3, –4, 1, 2, 3}
g2 (n) = {5, 6, 2, 7, 3, 1, 3, –2}
Solution: The complex-valued sequence g (n) is formed as g (n) = g1 (n) + j g2 (n), 0 ≤ n ≤ 7:
g (n) = {1 + 5j, 2 + 6j, –1 + 2j, 3 + 7j, –4 + 3j, 1 + j, 2 + 3j, 3 – 2j}
The DFT of g (n) is calculated as
G (k) = {7 + 25j, 14.60 + 1.46j, –2 + 6j, 15.19 + 1.12j, –11 + j, –6.60 + 8.53j, –6.00, –3.19 – 3.12j}
The DFTs of g1 (n) and g2 (n) are then computed using Eq. 8.25:
G1 (k) = (G (k) + G* (N – k)) / 2
G2 (k) = (G (k) – G* (N – k)) / 2j
G1 (k) = {7.0000, 5.7071 + 2.2929i, –4.0000 + 3.0000i, 4.2929 – 3.7071i, –11.0000, 4.2929 + 3.7071i, –4.0000 – 3.0000i, 5.7071 – 2.2929i}
G2 (k) = {25.00, –0.8284 – 8.8995i, 3.00 – 2.00i, 4.8284 – 10.8995i, 1.00, 4.8284 + 10.8995i, 3.00 + 2.00i, –0.8284 + 8.8995i}
Verification:
clc; clear; close all;
g1 = [1,2,-1,3,-4,1,2,3];
g2 = [5,6,2,7,3,1,3,-2];
g = g1+j*g2;
% Direct Computation
G11 = fft(g1);
G21 = fft(g2);
% Computation using FFT algorithm
G = fft(g);
GC = conj([G(1) G(8:-1:2)]);
G12 = (G + GC)/2;
G22 = (G - GC)/(2*j);
[transpose(G11) transpose(G12) transpose(G21) transpose(G22)]
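The packing of Eq. 8.22 and the unpacking of Eq. 8.25 can also be sketched in plain Python (two_real_dfts is our own helper name; a real application would replace dft with an FFT routine):

```python
import cmath

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * n * k / N) for n in range(N))
            for k in range(N)]

def two_real_dfts(g1, g2):
    # Pack both real sequences into one complex sequence (Eq. 8.22)
    N = len(g1)
    G = dft([a + 1j * b for a, b in zip(g1, g2)])
    # Unpack with Eq. 8.25; G[-k % N] is G((N - k) mod N), handling k = 0
    G1 = [(G[k] + G[-k % N].conjugate()) / 2 for k in range(N)]
    G2 = [(G[k] - G[-k % N].conjugate()) / (2j) for k in range(N)]
    return G1, G2

G1, G2 = two_real_dfts([4, 2, 6, 1], [6, 1, 4, 7])
print(G1[0].real, G2[0].real)  # the DC terms: the sums of each real sequence
```

This halves the transform work whenever two real sequences must be transformed, which is exactly the situation in Problem 17 at the end of the chapter.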
Example 8.10 Verify Parseval's theorem using the FFT. Parseval's theorem states that the time-domain and frequency-domain representations of a signal are two views of the same signal; hence, the energy computed from a discrete-time sequence and from its FFT samples must agree: Σ_{n=0}^{N–1} |g (n)|^2 = (1/N) Σ_{k=0}^{N–1} |G (k)|^2. The following example illustrates this theorem. Given g (n) = {1, 2, 3, 4}, the corresponding FFT coefficients are G (k) = {10, –2 + 2i, –2, –2 – 2i}. The energy computed from the sequence, Σ_{n=0}^{3} |g (n)|^2, is 30. The corresponding energy computed from G (k), (1/4) Σ_{k=0}^{3} |G (k)|^2 = 120/4, is also 30.
Example 8.11 Verify the time-shifting property of the FFT. The time-shifting property states that circularly shifting a sequence in the time domain by n0 units is equivalent to multiplying the corresponding FFT coefficients by e^{–j2πn0k/N}. Given g (n) = {1, 2, 0, 4} and the circularly shifted version of g (n), i.e., g (n – 2) = {0, 4, 1, 2}. The FFT coefficients of g (n) are G (k) = {7, 1 + 2j, –5, 1 – 2j}, and the FFT coefficients of g (n – 2) are Gr (k) = {7, –1 – 2j, –5, –1 + 2j}. These coefficients can be verified to be G (k) e^{–j2π(2)(k)/4}, with k = 0, 1, 2, 3.
Example 8.12 Verify the convolution property of the FFT. The circular convolution property states that circular convolution of two discrete-time sequences in the time domain is equivalent to point-by-point multiplication of their FFT coefficients in the frequency domain. Given g1 (n) = {1, 2, 3, 4} and g2 (n) = {2, 3, 4, 5}. The circular convolution of g1 (n) and g2 (n) can be calculated as g (n) = {36, 38, 36, 30}. The FFT of g1 (n) is G1 (k) = {10, –2 + 2j, –2, –2 – 2j}, and the FFT of g2 (n) is G2 (k) = {14, –2 + 2j, –2, –2 – 2j}. The IFFT of G1 (k) G2 (k) can be computed to be {36, 38, 36, 30}, which is the circular convolution of g1 (n) and g2 (n).
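The circular-convolution property can be verified in the same spirit (illustrative Python; circular_convolution, dft, and idft are our own helper names):

```python
import cmath

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * n * k / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * n * k / N) for k in range(N)) / N
            for n in range(N)]

def circular_convolution(a, b):
    # Pointwise product of DFTs, transformed back: the convolution property
    A, B = dft(a), dft(b)
    return [round(v.real) for v in idft([p * q for p, q in zip(A, B)])]

print(circular_convolution([1, 2, 3, 4], [2, 3, 4, 5]))  # -> [36, 38, 36, 30]
```

The rounding to integers is only cosmetic here; it removes the tiny floating-point residue left by the transform round trip.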
Problems
1. Explain the computational efficiency of the FFT over the direct DFT.
2. Develop an 8-point FFT algorithm based on decimation in frequency and explain how the computational complexity is reduced. Draw the flow chart neatly.
3. Give the properties of the DFT and obtain an FFT algorithm for evaluation of the 8-point DFT.
4. Discuss the development of the FFT decimation-in-frequency algorithm. Draw the flow chart.
5. (a) Why is the FFT preferred to the DFT? When is the FFT used?
   (b) Compare different types of FFT with relevant examples.
6. Compare the computational complexity of the DFT and FFT algorithms.
7. Describe the decimation-in-time FFT algorithm in detail.
8. Compare the decimation-in-time and decimation-in-frequency algorithms for computing the DFT of an N-point sequence.
9. Draw the signal flow graph for computing the 8-point DFT using the DIT-FFT algorithm.
10. Draw the signal flow graph for computing the 16-point DFT using the DIF-FFT algorithm.
11. Describe the applications of the FFT algorithm.
12. Describe the computation of the inverse DFT using the FFT signal flow graph.
13. Compute the linear convolution of x (n) = {1, 5, 3, 2} and h (n) = {3, 5, 2, 1, 8} using the FFT method. Also compare the computational complexity of the FFT-based method with the direct method.
14. Compute the FFT of the 8-point sequence x (n) = {1, 6, 3, 5, 2, 8, 6, 3} using the DIF-FFT algorithm.
15. Compute the FFT of the 4-point sequence x (n) = {7, 4, 1, 5} using the DIT-FFT algorithm.
16. Compute the IFFT of the sequence X (k) = {20, –2 + 2j, –12, –2 – 2j} using the DIF signal flow graph.
17. Compute the DFTs of the two real sequences x1 (n) = {4, 2, 6, 1} and x2 (n) = {6, 1, 4, 7} using the FFT algorithm only once.
18. Write short notes on (i) in-place computation and (ii) bit-reversed addressing.
19. Compute the circular convolution of the sequences x1 (n) = {4, 2, 6, 1} and x2 (n) = {6, 1, 4, 7} using the FFT algorithm and the properties of the DFT.
Multiple-Choice Questions
1. The number of complex additions involved in the direct computation of a 16-point DFT is
(a) 128  (b) 240  (c) 120  (d) 256
2. The number of complex multiplications involved in the direct computation of an 8-point DFT is
(a) 48  (b) 60  (c) 64  (d) 72
3. The number of complex multiplications involved in the computation of a 16-point DIT-FFT is
(a) 256  (b) 24  (c) 64  (d) 128
4. The number of butterflies in each stage of computation of a 64-point radix-2 FFT is
(a) 16  (b) 64  (c) 32  (d) 8
5. The DFT X (k) of the 2-sample sequence x (n) = {7, 2} is
(a) {9, 5}  (b) {4, 2}  (c) {8, 4}  (d) {2, 1}
6. The number of complex additions involved in the computation of a 256-point DFT by radix-2 FFT is
(a) 1024  (b) 128  (c) 2048  (d) 256
7. For radix-2 FFT, N must be a power of
(a) 8  (b) N/2  (c) 4  (d) 2
8. The number of stages in the computation of a 1024-point DFT by radix-2 FFT is
(a) 128  (b) 8  (c) 10  (d) 32
9. If the number of stages in the computation of a DFT by radix-2 FFT is 8, the number of samples in x (n) is
(a) 128  (b) 256  (c) 512  (d) 1024
10. The concept of in-place computation can be used to reduce the number of
(a) additions  (b) multiplications  (c) memory elements  (d) None of the above
11. In the DIT-FFT butterfly with inputs a and b and outputs A and B,
(a) A = a – W_N^r b, B = a + W_N^r b
(b) A = a + W_N^r b, B = a – W_N^r b
(c) A = b – W_N^r a, B = b + W_N^r a
(d) A = b + W_N^r a, B = b – W_N^r a
12. In the DIF-FFT butterfly with inputs a and b and outputs A and B,
(a) A = a + b, B = (a + 2b) W_N^r
(b) A = a + 2b, B = (a + b) W_N^r
(c) A = a + 2b, B = (a – 2b) W_N^r
(d) A = a + b, B = (a – b) W_N^r
13. The Radix-2 DIF-FFT algorithm gives the even-numbered samples of the N-point DFT as
(a) X (2k) = Σ_{n=0}^{(N/2)–1} (x (n) + x (n + N/2)) W_{N/2}^{nk},  k = 0, 1, 2, …, (N/2) – 1
(b) X (2k) = Σ_{n=0}^{(N/2)–1} (x (2n) + x (n + N/2)) W_{N/2}^{nk},  k = 0, 1, 2, …, (N/2) – 1
(c) X (2k) = Σ_{n=0}^{(N/2)–1} (x (n/2) + x (n + N/2)) W_{N/2}^{nk},  k = 0, 1, 2, …, (N/2) – 1
(d) X (2k) = Σ_{n=0}^{(N/2)–1} (x (n) + x (n)) W_{N/2}^{nk},  k = 0, 1, 2, …, (N/2) – 1
14. The FFT algorithm reduces the number of complex multiplications required to perform the DFT from
(a) N to (N/2) log2 N
(b) N^2 to N log2 N
(c) N^2 to (N/2) log2 N
(d) None
15. In the Radix-2 DIF-FFT algorithm, the number of stages in the flow graph is
(a) M = log2 N  (b) M = log2 2N  (c) M = log2 (N/2)  (d) None
16. For a 32-point sequence, the number of multiplications needed in the calculation of the DFT using the FFT algorithm is
(a) 40  (b) 16  (c) 5  (d) 80
Key to the Multiple-Choice Questions
1. (b)   2. (c)   3. (c)   4. (c)
5. (a)   6. (c)   7. (d)   8. (c)
9. (b)   10. (c)  11. (b)  12. (d)
13. (a)  14. (c)  15. (a)  16. (d)
9 Multirate Digital Signal Processing
9.1 INTRODUCTION
The input signal x (n) is characterized by the sampling rate Fx = 1/Tx, and the output signal y (m) is characterized by the sampling rate Fy = 1/Ty, where Tx and Ty are the corresponding sampling intervals. The two rates are related by

Fy / Fx = I / D        (9.1)

where D and I are relatively prime integers.
Fig. 9.1 Sampling rate conversion.
The linear filter performing the conversion is characterized by a time-variant impulse response denoted h (n, m). Hence, the input x (n) and the output y (m) are related by the convolution summation for time-variant systems.
Fig. 9.2 Waveform showing the sampling rate conversion.
The process of reducing the sampling rate by an integer factor D (downsampling by D) is called decimation. The process of increasing the sampling rate by an integer factor I (upsampling by I) is called interpolation.
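In their most basic form (ignoring the anti-aliasing and anti-imaging filters discussed later in the chapter), downsampling and upsampling are simple index operations. A Python sketch with assumed helper names decimate and interpolate:

```python
def decimate(x, D):
    # Downsample by integer factor D: keep every D-th sample.
    # A real decimator low-pass filters first to avoid aliasing.
    return x[::D]

def interpolate(x, I):
    # Upsample by integer factor I: insert I - 1 zeros between samples.
    # A real interpolator follows this with a low-pass (anti-imaging) filter.
    y = []
    for v in x:
        y.append(v)
        y.extend([0] * (I - 1))
    return y

print(decimate([1, 2, 3, 4, 5, 6], 2))   # -> [1, 3, 5]
print(interpolate([1, 2, 3], 2))         # -> [1, 0, 2, 0, 3, 0]
```

Cascading an interpolator by I with a decimator by D yields the rational conversion factor I/D of Eq. 9.1.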
Decimation by a factor D
Fig. 9.3 Downsampler.
Interpolation by a factor I An increase in the sampling rate by an integer factor I gives Fy = I Fx.
Sampling rate conversion by a rational factor I/D
Fig. 9.4 Sampling rate conversion by a rational factor.
9.2 DIGITAL FILTER BANKS
Filter banks are generally categorized into two types: analysis filter banks and synthesis filter banks. An analysis filter bank consists of a set of filters with system functions {Hk (z)} arranged in a parallel bank. The frequency response characteristics of the filter bank split the signal into a corresponding number of subbands. A synthesis filter bank, on the other hand, consists of a set of filters with system functions {Gk (z)}, arranged as shown in Fig. 9.5.
Fig. 9.5 Filter banks.
The outputs of the filters are summed to form the synthesized signal {x (n)}. Filter banks are often used for performing spectrum analysis and signal synthesis. When the filter bank is employed in the computation of the Discrete Fourier Transform (DFT) of a sequence {x (n)}, the filter bank is called a DFT filter bank.
9.3 APPLICATIONS OF MULTIRATE SIGNAL PROCESSING
There are numerous practical applications of multirate signal processing. In this section we describe a few of these applications.
9.3.1 Subband Coding of Speech Signals
A variety of techniques have been developed to efficiently represent speech signals in digital form for either transmission or storage. Since most of the speech energy is contained in the lower frequencies, we would like to encode the lower-frequency band with more bits than the higher-frequency band. Subband coding is a method in which the speech signal is subdivided into several frequency bands and each band is digitally encoded separately. An example of a frequency subdivision is shown in Fig. 9.6(a). Let us assume that the speech signal is sampled at a rate of Fs samples per second. The first frequency subdivision splits the signal spectrum into two equal-width segments, a lowpass signal (0 ≤ F ≤ Fs/4) and a highpass signal (Fs/4 ≤ F ≤ Fs/2). The second frequency subdivision splits the lowpass signal from the first stage into two equal bands, a lowpass signal (0 ≤ F ≤ Fs/8) and a highpass signal (Fs/8 ≤ F ≤ Fs/4). Finally, the third frequency subdivision splits the lowpass signal from the second stage into two equal-bandwidth signals. Thus, the signal is subdivided into four frequency bands, covering three octaves, as shown in Fig. 9.6(b).
Fig. 9.6
Block diagram of a subband speech coder.
Decimation by a factor of 2 is performed after each frequency subdivision. By allocating a different number of bits per sample to the signals in the four subbands, we can achieve a reduction in the bit rate of the digitized speech signal. Filter design is particularly important in achieving good performance in subband coding. Aliasing resulting from decimation of the subband signals must be negligible. It is clear that we cannot use brickwall filter characteristics as shown in Fig. 9.7(a), since such filters are physically unrealizable. A particularly practical solution to the aliasing problem is to use quadrature mirror filters (QMF), which have the frequency response characteristics shown in Fig. 9.7(b). The synthesis method for the subband-encoded speech signal is basically the reverse of the encoding process. The signals in adjacent lowpass and highpass frequency bands are interpolated, filtered, and combined as shown in Fig. 9.8. A pair of QMFs is used in the signal synthesis for each octave of the signal. Subband coding is also an effective method for achieving data compression in image and signal processing. In general, subband coding is an effective method for achieving bandwidth compression in a digital representation of a signal when the signal energy is concentrated in a particular region of the frequency band. Multirate signal processing concepts provide efficient implementations of the subband encoder.
Multirate Digital Signal Processing

Fig. 9.7 Filter characteristics for subband coding.
Fig. 9.8 Synthesis of subband-encoded signals.
Problems

1. Define (i) decimation and (ii) interpolation, and explain the process of decimation by a factor D.
2. Decimating x(n) by a factor of D = 2 produces the signal xd(n) = x(2n) for all n. Show that Xd(ω) = Xs(ω/2). Plot the signal xd(n) and its transform Xd(ω). Do we lose any information when we decimate the sampled signal xs(n)?
3. Show that linear interpolation is a second-order approximation.
4. Explain the implementation of digital filter banks.
5. What are digital filter banks? Give some applications where filter banks are used.
6. Design a 4-stage decimator where the sampling rate has to be reduced from 20 kHz to 500 Hz. The specifications for the decimator filter H(z) are as follows: (1) Passband edge: 200 Hz (2) Stopband edge: 220 Hz (3) Passband ripple: 0.004 (4) Stopband ripple: 0.002. Determine the filter length and the number of multiplications per second.
7. Explain the interpolation process with an example. Show also that linear interpolation is a second-order approximation.
8. Consider an arbitrary digital filter with transfer function H(z) = Σ_{n=−∞}^{∞} h(n) z^{−n}. Perform a two-component polyphase decomposition of H(z) by grouping the even-numbered samples h0(n) = h(2n) and the odd-numbered samples h1(n) = h(2n + 1). Thus show that H(z) can be expressed as H(z) = H0(z²) + z^{−1} H1(z²), and determine H0(z) and H1(z).
9. Explain tunable digital filters.
10. Explain multilevel filter banks.
11. Design a fractional sampling-rate converter that reduces the sampling rate by a factor of 3/g.
12. What is meant by decimation and interpolation? Explain the process of decimation by a factor D.
13. A signal v(n) = aⁿ u(n), |a| < 1. Determine the spectrum V(ω). The signal v(n) is applied to a decimator that reduces the rate by a factor of 2. Determine the output spectrum.
Multiple-Choice Questions

1. Interpolation means
(a) Decreasing the sampling rate (b) Keeping the sampling rate the same (c) Increasing the sampling rate (d) None of the above
2. Decimation means
(a) Decreasing the sampling rate (b) Keeping the sampling rate the same (c) Increasing the sampling rate (d) None of the above
3. Before downsampling, the signal is to be processed through a
(a) Band-pass filter (b) Low-pass filter (c) High-pass filter (d) All-pass filter
4. After upsampling, the signal is to be processed through a
(a) Band-pass filter (b) Low-pass filter (c) High-pass filter (d) All-pass filter
5. A filter bank is a combination of
(a) Analysis filter bank (b) Synthesis filter bank (c) Analysis filter bank and synthesis filter bank (d) Synthesis filter bank and analysis filter bank
6. Subband coding finds application mostly in
(a) Radar signal processing (b) Sonar signal processing (c) Speech signal processing (d) ECG
7. For interpolating the signal by 7/3, the signal
(a) Will be interpolated by 3 and decimated by 7 (b) Will be interpolated by 7 and decimated by 3 (c) Will be decimated by 3 and interpolated by 7 (d) Will be decimated by 7 and interpolated by 3
8. For sampling-rate conversion by a rational number, the following order is to be followed:
(a) Interpolator and decimator (b) Decimator and interpolator (c) Filter and interpolator (d) Filter and decimator
9. An alternative to the brickwall filter is the
(a) Band-pass filter (b) Quadrature mirror filter (c) High-pass filter (d) Band-elimination filter
10. Subband coding is an effective method to achieve
(a) Data storage (b) Data retrieval (c) Data compression (d) Data encoding
Key to the Multiple-Choice Questions
1. (c) 2. (a) 3. (b) 4. (b) 5. (c) 6. (c) 7. (b) 8. (a) 9. (b) 10. (c)
10 Digital Correlation Techniques

10.1 INTRODUCTION
Convolution and correlation, two closely related processes, play important roles in signal operations and system performance evaluation. Correlation is a process that determines the closeness between two signals. It plays a key role in RADAR (Radio Detection and Ranging), SONAR (Sound Navigation and Ranging) and mobile communications. For radar/sonar communications, the correlation between transmitted and received signals should be very high for good performance of the system. For mobile communications, i.e., in the case of MIMO (Multi-Input Multi-Output) systems, the correlation between the signals of two different users should be zero to minimize interference between them. Thus, without knowledge of correlation, the performance evaluation of the above-mentioned communication systems is not possible. Hence, correlation plays a key role in various communication systems.
10.2 CORRELATION BETWEEN WAVEFORMS
The correlation between waveforms is a measure of the similarity or relatedness between the waveforms. Suppose we have waveforms v1(t) and v2(t), not necessarily periodic nor confined to a finite time interval. Then the correlation between them, or more precisely the average cross-correlation between v1(t) and v2(t), is R12(τ), defined as

R12(τ) ≡ lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} v1(t) v2(t + τ) dt    (10.1)
If v1(t) and v2(t) are periodic with the same fundamental period T0, then the average cross-correlation is

R12(τ) = (1/T0) ∫_{−T0/2}^{T0/2} v1(t) v2(t + τ) dt    (10.2)
If v1(t) and v2(t) are waveforms of finite energy (for example, non-periodic pulse-type waveforms), then the cross-correlation is defined as

R12(τ) = ∫_{−∞}^{∞} v1(t) v2(t + τ) dt    (10.3)

In some contexts, a non-periodic signal or waveform is referred to as an aperiodic signal or waveform.
Fig. 10.1 Two related waveforms. The timing is such that the product v1(t) v2(t) = 0.
The need for introducing the parameter τ in the definition of cross-correlation may be seen from Fig. 10.1. As the figure shows, the two waveforms are different but related in certain respects: they have the same period and nearly the same form. However, the integral of the product v1(t) v2(t) is zero, since at all times one or the other function is zero. The function v2(t + τ) is the function v2(t) shifted to the left by an amount τ. It is clear from the figure that, while R12(0) = 0, R12(τ) will increase as τ increases from zero, becoming a maximum when τ = τ0. Thus τ is a searching or scanning parameter which may be adjusted to a proper time shift to reveal, to the maximum extent possible, the relatedness or correlation between the functions. The term coherence is sometimes used as a synonym for correlation. Functions for which R12(τ) = 0 for all τ are described as being uncorrelated or noncoherent. In scanning to see the extent of the correlation between functions, it is necessary to specify which function is being shifted. In general, R12(τ) is not equal to R21(τ). R21(τ) can be expressed as follows:
R21(τ) ≡ lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} v1(t + τ) v2(t) dt = R12(−τ)    (10.4)

with identical results for periodic waveforms or waveforms of finite energy.
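The role of the searching parameter can be illustrated with a short discrete-time sketch: correlating a pulse with a delayed copy of itself produces a cross-correlation that peaks at the lag equal to the delay (the τ0 of Fig. 10.1). The pulse shape and the 3-sample delay below are arbitrary choices, not taken from the text.

```python
import numpy as np

# R12 peaks at the lag equal to the delay between the two waveforms.
n = np.arange(32)
v1 = np.exp(-0.5 * (n - 8.0) ** 2)       # a pulse centred at n = 8
v2 = np.roll(v1, 3)                      # the same pulse delayed by 3 samples
r12 = np.correlate(v2, v1, mode="full")  # R12 as a function of lag
lags = np.arange(-(len(n) - 1), len(n))
assert lags[np.argmax(r12)] == 3         # maximum at the delay tau0 = 3
```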
10.3 POWER AND CROSS-CORRELATION
Let v1(t) and v2(t) be waveforms which are neither periodic nor confined to a finite time interval. Suppose that the normalized power of v1(t) is S1 and the normalized power of v2(t) is S2. Then the normalized power S12 of v1(t) + v2(t + τ) can be formulated as
S12 = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} [v1(t) + v2(t + τ)]² dt    (10.5)

= lim_{T→∞} (1/T) { ∫_{−T/2}^{T/2} v1²(t) dt + ∫_{−T/2}^{T/2} [v2(t + τ)]² dt + 2 ∫_{−T/2}^{T/2} v1(t) v2(t + τ) dt }    (10.6)

= S1 + S2 + 2 R12(τ)    (10.7)
In writing eq. (10.7), account has been taken of the fact that the normalized power of v2(t + τ) is the same as the normalized power of v2(t): since the integration in eq. (10.6) extends eventually over the entire time axis, a time shift in v2 will clearly not affect the value of the integral. From eq. (10.7) we have the important result that if two waveforms are uncorrelated, that is, R12(τ) = 0 for all τ, then no matter how these waveforms are time-shifted with respect to one another, the normalized power of their superposition is the sum of the powers of the waveforms individually. Similarly, if a waveform is the sum of any number of mutually uncorrelated waveforms, the normalized power is the sum of the individual powers. The same result applies for periodic waveforms.

Suppose that two waveforms v11(t) and v21(t) are uncorrelated. If dc components V1 and V2 are added to the waveforms, then the waveforms v1(t) = v11(t) + V1 and v2(t) = v21(t) + V2 will be correlated, with correlation R12(τ) = V1 V2. In most applications where the correlation between waveforms is of concern, there is rarely any interest in the dc component. It is customary, then, to continue to refer to waveforms as being uncorrelated if the only source of correlation is their dc components.
10.4 AUTOCORRELATION
Correlation of a function with itself is called the autocorrelation. Thus, with v1(t) = v2(t) = v(t), R12(τ) becomes R(τ). This can be expressed as

R(τ) ≡ lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} v(t) v(t + τ) dt    (10.8)
A number of properties of R(τ) are listed in the following:

(i) R(0) = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} [v(t)]² dt = S    (10.9)

That is, the autocorrelation for τ = 0 is the average power S of the waveform. Further,

R(0) ≥ R(τ)    (10.10)
This result is rather intuitively obvious, since we would surely expect the similarity between v(t) and v(t + τ) to be a maximum when τ = 0.
(ii) The autocorrelation function is an even function of τ:
R(τ) = R(−τ)    (10.11)
To prove eq. (10.11), assume that the axis t = 0 is moved in the negative t direction by an amount τ. Then the integrand in eq. (10.8) would become v(t) v(t − τ), and R(τ) would become R(−τ). Since, however, the integration eventually extends from −∞ to ∞, such a shift of the time axis can have no effect on the value of the integral. Thus R(τ) = R(−τ).
The three characteristics given in eqs. (10.9) to (10.11) hold not only for the autocorrelation defined in eq. (10.8) but also for the correlation defined for the periodic case and for the non-periodic case of finite energy. In the latter case, of course, R(0) = E, the energy rather than the power.
10.5 AUTOCORRELATION OF NONPERIODIC WAVEFORM OF FINITE ENERGY
For pulse-type waveforms of finite energy there is a relationship between the correlation function of eq. (10.3) and the energy spectral density, corresponding to the relationship that holds for the periodic waveform:

G(f) = ℱ[R(τ)]    (10.12)
This relationship states that the correlation function R(τ) and the energy spectral density are a Fourier transform pair. This result is established as follows. Consider the convolution integral

v(t) = ∫_{−∞}^{∞} v1(τ) v2(t − τ) dτ    (10.13)

or equivalently

v(t) = ∫_{−∞}^{∞} v2(τ) v1(t − τ) dτ    (10.14)
Now let us prove that

v(t) = ℱ⁻¹[V1(f) V2(f)]    (10.15)

= (1/2π) ∫_{−∞}^{∞} V1(f) V2(f) e^{jωt} dω    (10.16)

By definition we have

V1(f) = ∫_{−∞}^{∞} v1(t) e^{−jωt} dt    (10.17)
Substituting V1(f) as given in eq. (10.17) into the integral of eq. (10.16), we have

v(t) = (1/2π) ∫_{−∞}^{∞} [ ∫_{−∞}^{∞} v1(τ) e^{−jωτ} dτ ] V2(f) e^{jωt} dω    (10.18)
Interchanging the order of integration, we find

v(t) = ∫_{−∞}^{∞} v1(τ) [ (1/2π) ∫_{−∞}^{∞} V2(f) e^{jω(t−τ)} dω ] dτ    (10.19)
We recognize that the expression in brackets in eq. (10.19) is v2(t − τ), so that finally

v(t) = ∫_{−∞}^{∞} v1(τ) v2(t − τ) dτ

If v1(t) and v2(t) are the same waveform, that is, v1(t) = v2(t) = v(t), we get

ℱ⁻¹[V(f) V(f)] = ∫_{−∞}^{∞} v(τ) v(t − τ) dτ    (10.20)
Since V(−f) = V*(f) = ℱ[v(−t)], eq. (10.20) may be written as

ℱ⁻¹[V(f) V*(f)] = ℱ⁻¹[|V(f)|²] = ∫_{−∞}^{∞} v(τ) v(τ − t) dτ    (10.21)
The integral in eq. (10.21) is a function of t, and hence this equation expresses ℱ⁻¹[V(f) V*(f)] as a function of t. If we want to express ℱ⁻¹[V(f) V*(f)] as a function of τ without changing the form of the function, we need but interchange t and τ; we then have

ℱ⁻¹[V(f) V*(f)] = ∫_{−∞}^{∞} v(t) v(t − τ) dt    (10.22)

The integral in eq. (10.22) is precisely R(τ), and thus

ℱ[R(τ)] = V(f) V*(f) = |V(f)|²    (10.23)
which verifies that R(τ) and the energy spectral density |V(f)|² are a Fourier transform pair.
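For a discrete finite-energy sequence, eq. (10.23) can be checked numerically: the inverse DFT of |V(f)|², with zero-padding so the circular correlation equals the aperiodic one, reproduces the autocorrelation. The test sequence below is an arbitrary choice; this is a sketch of the transform-pair property, not a method from the text.

```python
import numpy as np

# Check of eq. (10.23): autocorrelation and |V(f)|^2 are a transform pair.
v = np.array([1.0, 2.0, -1.0, 0.5])
N = len(v)
V = np.fft.fft(v, 2 * N)                                # zero-pad to 2N to avoid wrap-around
r_from_spectrum = np.real(np.fft.ifft(np.abs(V) ** 2))  # lags 0..N-1, then negative lags
r_direct = np.correlate(v, v, mode="full")              # lags -(N-1)..(N-1)
assert np.allclose(r_from_spectrum[:N], r_direct[N - 1:])
```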
10.6 AUTOCORRELATION AND CROSS-CORRELATION OF DISCRETE-TIME SIGNALS
So far we have discussed the autocorrelation and cross-correlation of continuous-time (analog) signals. In this section the emphasis is on the autocorrelation and cross-correlation of discrete-time signals. Convolution and correlation are closely related: if the second waveform is reversed in time before the correlation is performed, the result is a convolution. For discrete-time signals the searching parameter τ is usually denoted k. Let a discrete signal (or sequence) be defined in the following form:

X = (x0, x1, x2, x3, …, x_{N−1})
Its aperiodic (or non-periodic) autocorrelation function is defined as

r(k) = Σ_{i=0}^{N−1−k} x_i x_{i+k},    k = 0, 1, 2, …, N − 1

which can be expanded as

r(0) = x0² + x1² + x2² + … + x_{N−1}²
r(1) = x0 x1 + x1 x2 + x2 x3 + … + x_{N−2} x_{N−1}
r(2) = x0 x2 + x1 x3 + x2 x4 + … + x_{N−3} x_{N−1}
⋮
r(N − 1) = x0 x_{N−1}
The autocorrelation for a periodic signal (or sequence) can be written as

R(k) = Σ_{i=0}^{N−1} x_i x_{i+k},    k = 0, 1, 2, …, N − 1

where (i + k) is taken modulo N, because in a periodic sequence the last bit is cyclically shifted back to the initial position. The cross-correlation between two aperiodic signals can be expressed as

rxy(k) = Σ_{i=0}^{N−1−k} x_i y_{i+k},    k = 0, 1, 2, …, N − 1
Usually r(0) is called the mainlobe, and the values for k = 1, 2, …, N − 1 are called sidelobes. For a good correlation function the mainlobe value should ideally be infinite and the sidelobe values should be zero; in other words, an ideal autocorrelation function should possess the property of an impulse function.
Fig. 10.2 Ideal autocorrelation function.
Example 10.1 Find the aperiodic autocorrelation of the following binary signal and plot its autocorrelation function.
S = (1, −1, 1, 1, −1)

Solution: The autocorrelation function can be expressed as

r(k) = Σ_{i=0}^{N−1−k} x_i x_{i+k},    k = 0, 1, 2, …, N − 1
The given discrete-time signal is X = (1, −1, 1, 1, −1), with elements x0, x1, x2, x3, x4. The length of the signal equals the number of elements; for this signal the length (code length) is 5.

r(0) = x0² + x1² + x2² + x3² + x4² = 1 + 1 + 1 + 1 + 1 = 5
r(1) = x0 x1 + x1 x2 + x2 x3 + x3 x4 = (1)(−1) + (−1)(1) + (1)(1) + (1)(−1) = −1 − 1 + 1 − 1 = −2
r(2) = x0 x2 + x1 x3 + x2 x4 = (1)(1) + (−1)(1) + (1)(−1) = 1 − 1 − 1 = −1
r(3) = x0 x3 + x1 x4 = (1)(1) + (−1)(−1) = 1 + 1 = 2
r(4) = x0 x4 = (1)(−1) = −1

This can be plotted as shown in Fig. 10.3.
Fig. 10.3 Autocorrelation function for the signal S = (1, −1, 1, 1, −1).
For plotting the above figure, the magnitude of the autocorrelation function is taken on the y-axis and the time shift on the x-axis. As the autocorrelation function is symmetric with respect to zero shift, the values for positive and negative time shifts are the same; that is, it is an even function:

r(k) = r(−k)

Example 10.2 Obtain the aperiodic cross-correlation of the following two discrete-time (binary) signals.
X = (1, −1, 1, −1, 1, 1)
Y = (−1, −1, 1, 1, −1, −1)

Solution: The elements of the two signals are denoted x0, x1, …, x5 and y0, y1, …, y5. The cross-correlation between the two signals X and Y can be represented as
rxy(k) = Σ_{i=0}^{N−1−k} x_i y_{i+k},    k = 0, 1, 2, …, N − 1
rxy(0) = x0 y0 + x1 y1 + x2 y2 + x3 y3 + x4 y4 + x5 y5
= (1)(−1) + (−1)(−1) + (1)(1) + (−1)(1) + (1)(−1) + (1)(−1) = −1 + 1 + 1 − 1 − 1 − 1 = −2
rxy(1) = x0 y1 + x1 y2 + x2 y3 + x3 y4 + x4 y5
= (1)(−1) + (−1)(1) + (1)(1) + (−1)(−1) + (1)(−1) = −1 − 1 + 1 + 1 − 1 = −1
rxy(2) = x0 y2 + x1 y3 + x2 y4 + x3 y5
= (1)(1) + (−1)(1) + (1)(−1) + (−1)(−1) = 1 − 1 − 1 + 1 = 0
rxy(3) = x0 y3 + x1 y4 + x2 y5
= (1)(1) + (−1)(−1) + (1)(−1) = 1 + 1 − 1 = 1
rxy(4) = x0 y4 + x1 y5 = (1)(−1) + (−1)(−1) = −1 + 1 = 0
rxy(5) = x0 y5 = (1)(−1) = −1

Example 10.3 Obtain the aperiodic and periodic autocorrelation of the following discrete-time signal.
X = (−1, 0, −2, 3, 1)

Solution: The aperiodic autocorrelation may be expressed as
rxx(k) = Σ_{i=0}^{N−1−k} x_i x_{i+k},    k = 0, 1, 2, …, N − 1

The given discrete-time signal is X = (−1, 0, −2, 3, 1), with elements x0, x1, x2, x3, x4.

rxx(0) = x0² + x1² + x2² + x3² + x4² = (−1)² + 0² + (−2)² + 3² + 1² = 1 + 0 + 4 + 9 + 1 = 15
rxx(1) = x0 x1 + x1 x2 + x2 x3 + x3 x4 = (−1)(0) + (0)(−2) + (−2)(3) + (3)(1) = 0 + 0 − 6 + 3 = −3
rxx(2) = x0 x2 + x1 x3 + x2 x4 = (−1)(−2) + (0)(3) + (−2)(1) = 2 + 0 − 2 = 0
rxx(3) = x0 x3 + x1 x4 = (−1)(3) + (0)(1) = −3
rxx(4) = x0 x4 = (−1)(1) = −1

The periodic autocorrelation can be expressed as

R(k) = Σ_{i=0}^{N−1} x_i x_{i+k},    k = 0, 1, 2, …, N − 1
where (i + k) is taken modulo N.

R(0) = x0² + x1² + x2² + x3² + x4² = (−1)² + 0² + (−2)² + 3² + 1² = 1 + 0 + 4 + 9 + 1 = 15
R(1) = x0 x1 + x1 x2 + x2 x3 + x3 x4 + x4 x5. Taking (i + k) modulo N, x5 becomes x0:
R(1) = x0 x1 + x1 x2 + x2 x3 + x3 x4 + x4 x0 = (−1)(0) + (0)(−2) + (−2)(3) + (3)(1) + (1)(−1) = 0 + 0 − 6 + 3 − 1 = −4
R(2) = x0 x2 + x1 x3 + x2 x4 + x3 x5 + x4 x6. Again taking (i + k) modulo N:
R(2) = x0 x2 + x1 x3 + x2 x4 + x3 x0 + x4 x1 = (−1)(−2) + (0)(3) + (−2)(1) + (3)(−1) + (1)(0) = 2 + 0 − 2 − 3 + 0 = −3
R(3) = x0 x3 + x1 x4 + x2 x5 + x3 x6 + x4 x7. Again taking (i + k) modulo N:
R(3) = x0 x3 + x1 x4 + x2 x0 + x3 x1 + x4 x2 = (−1)(3) + (0)(1) + (−2)(−1) + (3)(0) + (1)(−2) = −3 + 0 + 2 + 0 − 2 = −3
R(4) = x0 x4 + x1 x5 + x2 x6 + x3 x7 + x4 x8. Again taking (i + k) modulo N:
R(4) = x0 x4 + x1 x0 + x2 x1 + x3 x2 + x4 x3 = (−1)(1) + (0)(−1) + (−2)(0) + (3)(−2) + (1)(3) = −1 + 0 + 0 − 6 + 3 = −4

Correlation plays an important role in mobile, RADAR (Radio Detection and Ranging) and SONAR (Sound Navigation and Ranging) communications. Correlation gives the relationship or closeness between two signals. If the correlation coefficient is low, the two signals are not closely related; if it is high, they are closely related. Usually, in RADAR/SONAR communication a high correlation coefficient between transmitted and received signals is required, since the transmitted and received signals should ideally be the same.
In mobile communication the correlation coefficients between the signals used by neighbouring users should be as low as possible, since low interference between neighbouring users is essential for good performance.
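The definitions used in Examples 10.1–10.3 can be checked with a short sketch; the helper function names below are ours, not the text's.

```python
import numpy as np

def aperiodic_autocorr(x):
    # r(k) = sum_{i=0}^{N-1-k} x_i x_{i+k}
    N = len(x)
    return np.array([np.dot(x[:N - k], x[k:]) for k in range(N)])

def periodic_autocorr(x):
    # R(k) with the index (i + k) taken modulo N
    N = len(x)
    return np.array([np.dot(x, np.roll(x, -k)) for k in range(N)])

def aperiodic_crosscorr(x, y):
    # r_xy(k) = sum_{i=0}^{N-1-k} x_i y_{i+k}
    N = len(x)
    return np.array([np.dot(x[:N - k], y[k:]) for k in range(N)])

# Example 10.1
assert aperiodic_autocorr(np.array([1, -1, 1, 1, -1])).tolist() == [5, -2, -1, 2, -1]
# Example 10.2
assert aperiodic_crosscorr(np.array([1, -1, 1, -1, 1, 1]),
                           np.array([-1, -1, 1, 1, -1, -1])).tolist() == [-2, -1, 0, 1, 0, -1]
# Example 10.3
assert aperiodic_autocorr(np.array([-1, 0, -2, 3, 1])).tolist() == [15, -3, 0, -3, -1]
assert periodic_autocorr(np.array([-1, 0, -2, 3, 1])).tolist() == [15, -4, -3, -3, -4]
```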
10.7 OVERLAP-ADD BLOCK CONVOLUTION
Consider the example shown in Fig. 10.4. The linear convolution of x(n) (Fig. 10.4(a)) and h(n) (Fig. 10.4(b)) is shown in Fig. 10.4(c). Figures 10.4(d)–(f) illustrate consecutive linear convolutions of blocks of x(n) with h(n). Each block convolution has the expected length Nxi + Nh − 1. In this case, the three block convolutions have a combined length of 3(Nxi + Nh − 1) = 15, whereas the direct convolution has length Nx + Nh − 1 = 11. By inspection, it is reasonable to conclude that the overlap between the sectioned convolutions should be (15 − 11)/2 = 2 samples, as shown in Fig. 10.4(g). This corresponds to the value Nh − 1. When the overlapping samples are summed, the resulting sequence (Fig. 10.4(h)) equals the direct convolution result shown in Fig. 10.4(c). This method is called overlap-add block convolution.
Fig. 10.4. Block convolution using the overlap-add method: (a) input x (n), (b) impulse response h (n), (c) expected output y (n), (d) output y1 (n) for block convolution of x1 (n) and h (n), (e) output y2 (n), (f ) output y3 (n), (g) shifted block outputs, overlap is Nh – 1 = 2, and (h) the sum of overlapped block outputs equivalent to the direct convolution result.
The input blocks need not be precisely Nh samples long, but it is generally a good idea to keep Nxi on the order of Nh to avoid unnecessarily long block convolutions. The overlap between block outputs must remain Nh − 1 regardless. Mathematically, x(n) and y(n) can be represented as

x(n) = Σ_{i=0}^{∞} xi(n),  where xi(n) = x(n) for i·Nblock ≤ n ≤ (i + 1)·Nblock and xi(n) = 0 otherwise    (10.24)

y(n) = h(n) * x(n) = Σ_{k=0}^{n} h(k) Σ_{i=0}^{∞} xi(n − k) = Σ_{i=0}^{∞} h(n) * xi(n) = Σ_{i=0}^{∞} yi(n)    (10.25)
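The overlap-add procedure of eqs. (10.24) and (10.25) can be sketched directly: each input block is convolved with h(n), and the Nh − 1 overlapping output samples of adjacent blocks are summed. The block length of 3 and the test signals below are arbitrary choices.

```python
import numpy as np

def overlap_add(x, h, n_block=3):
    # Convolve each length-n_block input block with h and sum the
    # Nh - 1 overlapping output samples of adjacent blocks.
    n_h = len(h)
    y = np.zeros(len(x) + n_h - 1)
    for start in range(0, len(x), n_block):
        yi = np.convolve(x[start:start + n_block], h)   # length n_block + n_h - 1
        y[start:start + len(yi)] += yi                  # overlap of Nh - 1 samples adds
    return y

x = np.arange(1.0, 10.0)          # 9-sample input, as in the Fig. 10.4 example
h = np.array([1.0, 0.5, 0.25])    # Nh = 3, so the overlap is Nh - 1 = 2
assert np.allclose(overlap_add(x, h), np.convolve(x, h))
```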
10.8 OVERLAP-SAVE BLOCK CONVOLUTION
Unlike the overlap-add method, the overlap-save method requires that the input blocks overlap. The input blocks are then circularly convolved with the impulse response. Because of the overlap redundancy at the input, the circular artifacts in the output (the first Nh − 1 samples of each block) can simply be discarded. Figure 10.5 illustrates the overlap-save method. Since this method uses circular convolution, it lends itself to use of the FFT when calculating each block. All that is required is zero-padding the impulse response (to the right) to length Nxi so that the FFT of h(n), namely H(m), will have the proper length for multiplication with Xi(m). Since h(n) and Nxi are known in advance, H(m) can simply be calculated and stored in memory before processing begins. Again there is no size constraint on the input blocks as long as they overlap by at least Nh − 1 samples. However, noting that each block requires an approximately Nblock·log2(Nblock)-operation FFT, Nblock complex multiplies, and an Nblock·log2(Nblock)-operation IFFT, it is a good idea to keep Nblock on the order of Nh. Mathematically, symbolic representations of x(n) and y(n) are rather cumbersome, but they can be expressed as
x(n) = Σ_{i=0}^{∞} [xi(n) − x_{i−1}(m)]    (10.26)

where xi(n) = x(n) for [i·Nblock − (i + 1)·(Nh − 1) ≤ n ≤ (i + 1)·Nblock − (i + 1)·(Nh − 1) − 1] and xi(n) = 0 otherwise. Also, x_{i−1}(m) = x_{i−1}(n) for [i·Nblock − (i + 1)·(Nh − 1) ≤ m ≤ i·Nblock − i·(Nh − 1) − 1].

yi(n) = h(n) (*) xi(n)    (10.27)
where (*) denotes circular convolution, and finally,

y(n) = y0(m) | y1(m) | y2(m) | …    (10.28)

where "|" denotes concatenation and m indexes the last Nblock − (Nh − 1) samples of each block.

Fig. 10.5 Block convolution using the overlap-save method: (a) input signal x(n) divided into overlapping sections, overlap is Nh − 1 = 2, (b) impulse response h(n), (c) output y(n) using direct convolution, (d) output y1(n) for block circular convolution of x1(n) and h(n), (e) output y2(n), (f) output y3(n), (g) output y4(n), and (h) sequential concatenation of block outputs after discarding the first two samples of each block, which is equivalent to the direct convolution result. "|" represents concatenation.

Alternatively, the overlap-save block convolution can be illustrated by the block diagram in Fig. 10.6. In this case, the impulse response is zero-padded to the left to length 2·Nh and Nblock is set to precisely 2·Nh. Now the circular convolution produces the desired block output followed by its unwanted artifacts. Hence, the last Nh samples, the circular artifacts, can be discarded. It should be noted that this form is not the most efficient implementation. An Ni-point input buffer (including the minimum overlap) requires two Ni-point FFTs, Ni complex multiplications, and an Ni-point IFFT to produce Ni − Nh output samples. This can be expressed as an estimated "operations per sample" (OPS) ratio:

OPS = [3 Ni log2(Ni) + 4 Ni] / (Ni − Nh)    (10.29)
Fig. 10.6 Block diagram of an efficient implementation of the overlap-save method using zero-padding to the left of the impulse response, h (n). Inputs xi – 1 and xi are adjacent input blocks.
Although this function is not bounded as Ni approaches infinity, it does have a minimum that can be found for a given Nh. For Ni = a·Nh, where a is an integer, the function reduces to

OPS = [3a log2(a·Nh) + 4a] / (a − 1)    (10.30)

which increases monotonically with Nh. When Ni is not an integer multiple of Nh, a discrete minimum can be found numerically. Of course, as Ni increases, the input-to-output latency increases, because the operation can only be performed every (Ni − Nh) samples. In any event, the form shown in Fig. 10.6 will be exploited in the minimum-delay convolution algorithm described in the next section and in subsequent chapters. At this point, it is useful to introduce a less cumbersome notation for the operation of the overlap-save method:

(N/M) · 2M [log2(M) + 1] = 2N [log2(M) + 1]    (10.31)

Here h_i * x(n, N) represents the N-point convolution of the input, x, and impulse response, h_i, starting at sample n of the input signal. The output can be produced again by concatenating the results of many block convolution operations:

y[n] = h_i * x(0, N) | h_i * x(N, N) | h_i * x(2N, N) | …

where "|" represents concatenation.
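The overlap-save procedure of Fig. 10.5 (overlapped input blocks, circular convolution via the FFT, discard of the first Nh − 1 output samples of each block) can be sketched as follows. The block length of 8 and the test signals are arbitrary choices, and the sketch uses the right-zero-padded form rather than the Fig. 10.6 variant.

```python
import numpy as np

def overlap_save(x, h, n_block=8):
    # Overlap-save: blocks of length n_block overlap by Nh - 1 samples;
    # each block is circularly convolved with h via the FFT and the first
    # Nh - 1 output samples (the circular artifacts) are discarded.
    n_h = len(h)
    H = np.fft.fft(h, n_block)                  # zero-padded h, precomputed once
    step = n_block - (n_h - 1)                  # valid output samples per block
    xp = np.concatenate([np.zeros(n_h - 1), x, np.zeros(n_block)])
    out = []
    for start in range(0, len(x) + n_h - 1, step):
        block = xp[start:start + n_block]
        yb = np.real(np.fft.ifft(np.fft.fft(block) * H))
        out.append(yb[n_h - 1:])                # discard circular artifacts
    return np.concatenate(out)[:len(x) + n_h - 1]

x = np.arange(1.0, 13.0)
h = np.array([1.0, -0.5, 0.25])
assert np.allclose(overlap_save(x, h), np.convolve(x, h))
```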
Problems

1. Obtain the autocorrelation function of the following discrete signal and plot it:
x[n] = {1, −1, 2, 2, 3, −1, 1}
2. Obtain the cross-correlation of the following two discrete signals and plot it:
x1[n] = {1, −1, 2, −1, 1, −2, 3}
x2[n] = {2, 1, −1, 2, 3, −1, 1}
3. Find the autocorrelation of the waveform shown below.
4. Find the convolution of x(t) and h(t), which are given in the figure below.
5. Obtain the convolution of the following two signals:
x1[n] = {1, −1, 2, −1, 1}
x2[n] = {1, 1, 1, 1}
6. Find the autocorrelation of the function shown in the figure below.
7. Show that the power spectral density function is the Fourier transform of the autocorrelation function.
8. Use the convolution integral to find the response y(t) of the LTI system with impulse response h(t) to the input x(t) given below:
h(t) = e^{−t} [u(t − 1) − u(t − 2)]
9. A given system has impulse response h(t) and transfer function H(ω). Obtain expressions for y(t) and Y(ω) when x(t) = A[δ(t + td) − δ(t − td)].
10. v(t) = cos ω0t + 2 sin 3ω0t + 0.5 sin 4ω0t. Find Rvv(τ).
Multiple-Choice Questions

1. The cross-correlation of two orthogonal functions f1(t) and f2(t) is
(a) ∞ (b) 0 (c) < 1 (d) = 1
2. If R11(τ) is the autocorrelation of v1(t), then
(a) R11(τ) = R11(−τ) (b) R11(τ) < R11(−τ) (c) R11(τ) > R11(−τ) (d) None of the above
3. If Rvv(τ) is the autocorrelation of v(t), then
(a) Gv(f) = ℱ[Rvv(τ)] (b) Gv(f) = Rvv(f) (c) Gv(f) = Rvv²(τ) (d) Gv(f) < Rvv(τ)
4. Is R12(τ) = R21(τ)?
(a) No (b) Yes (c) R12(τ) > R21(τ) (d) R12(τ) < R21(τ)
5. The condition for orthogonality of f1(t) and f2(t) over t1 to t2 is
(a) ∫_{t1}^{t2} f1(t) f2(t) dt = 0 (b) ∫_0^t f1(t) f2(t) dt = 0 (c) ∫_{−∞}^{∞} f1(t) f2(t) dt = 0 (d) ∫_0^{T0} f1(t) f2(t) dt = 0
6. For calculating the convolution integral, the shifting function must be folded about the
(a) y-axis (b) x-axis (c) Zero axis (d) None of the above
7. For calculating the autocorrelation function, the shifting function need not be folded about the
(a) x-axis (b) y-axis (c) Zero axis (d) None of the above
8. The cross-correlation of periodic functions f1(t) and f2(t) is
(a) R12(τ) = (1/T0) ∫_0^{T0} f1(t) f2(t + τ) dt (b) R12(τ) = (1/T0) ∫_0^{T0} f1²(t) f2²(t + τ) dt
(c) R12(τ) = (1/T0) ∫_{−∞}^{∞} f1(t) f1(t − τ) dt (d) R12(τ) = (1/T0) ∫_{−∞}^{∞} f1(t) f2(t + τ) dt
9. The range of values that can be taken by the correlation coefficient ρ is
(a) {0, 1} (b) {−1, 1} (c) {−2, 2} (d) {0, 2}
10. If the autocorrelation of v1(t) is Rx(τ) = 2e^{−|τ|}, then the average power in the signal is
(a) 0 (b) 4 (c) 16 (d) 2
11. The autocorrelation function of the sequence x(n) = {1, 2, 1} is
(a) {3, 2, 1, 2, 3} (b) {1, 4, 6, 4, 1} (c) {1, 2, 3, 2, 1} (d) {4, 3, 2, 6, 4}
12. The linear convolution of the two sequences x(n) = {1, 3, 2} and h(n) = {1, 4} is
(a) {1, 5, 4, 3} (b) {2, 5, 14, 3} (c) {1, 7, 14, 8} (d) {8, 12, 11, 8}
13. The cross-correlation of the two sequences x(n) = {1, 3, 2} and y(n) = {0.1, 0.2, 2, 6, 4} is
(a) {2, 18, 28, 18.2, 0.7, 0.2} (b) {14, 18, 28, 18.2, 0.7, 0.2} (c) {4, 18, 38, 18.6, 0.7, 0.2} (d) {4, 18, 28, 18.2, 0.7, 0.2}
Key to the Multiple-Choice Questions
1. (b) 2. (a) 3. (a) 4. (a) 5. (a) 6. (a) 7. (b) 8. (a) 9. (b) 10. (d) 11. (b) 12. (c) 13. (d)
11 Power Spectrum Estimation

11.1 ESTIMATION OF SPECTRA FROM FINITE-DURATION OBSERVATION OF SIGNALS
The basic problem that we consider in this chapter is the estimation of the power density spectrum of a signal from observation of the signal over a finite time interval. The finite record length of the data sequence is a major limitation on the quality of the power spectrum estimate. When dealing with signals that are statistically stationary, the larger the data record, the better the estimate that can be extracted from the data. On the other hand, if the signal statistics are non-stationary, we cannot select an arbitrarily long data record to estimate the spectrum. In such a case, the length of the data record that we select is determined by the rapidity of the time variations in the signal statistics. Ultimately, our goal is to select as short a data record as possible that still allows us to resolve the spectral characteristics of different signal components that have closely spaced spectra.

One of the problems that we encounter with classical power spectrum estimation methods based on a finite-length data record is the distortion of the spectrum that we are attempting to estimate. This problem occurs both in the computation of the spectrum of a deterministic signal and in the estimation of the power spectrum of a random signal. It is easier to observe the effect of the finite length of the data record on a deterministic signal.

Classical power spectrum estimation methods were developed by Bartlett, Blackman and Tukey, and Welch. Methods that make no assumption about how the data were generated are called nonparametric. Blackman and Tukey proposed and analyzed a method in which the sample autocorrelation sequence is windowed first and then Fourier transformed to yield the estimate of the power spectrum. The effect of windowing the autocorrelation is to smooth the periodogram estimate, thus decreasing the variance of the estimate at the expense of reduced resolution.
The nonparametric power spectral estimation methods are relatively simple and easy to compute using FFT algorithms. However, these methods require the availability of long data records in order to obtain the frequency resolution required in many applications. Furthermore, these methods suffer from the spectral leakage effects, due to windowing, that are inherent in finite-length data records. Often the spectral leakage masks weak signals that are present in the data.
The parametric methods of power spectrum estimation do not require such long data records; in effect, these methods extrapolate the values of the autocorrelation. Usually autoregressive moving-average (ARMA) models are used in parametric power spectrum estimation. Some of the methods are given hereunder:
1. Yule-Walker method for the AR model parameters
2. Burg method for the AR model parameters
3. AR model for power spectrum estimation
4. MA model for power spectrum estimation
5. ARMA model for power spectrum estimation
In the following section we deal with some simple computational methods, without touching on windowing techniques, for power spectrum estimation.
11.2 COMPUTATION OF THE POWER SPECTRUM ESTIMATE FROM FINITE-DURATION OBSERVATIONS OF SIGNALS
When a time series x(n) is transformed by a DFT or FFT algorithm, a complex transform X(m) is obtained. We will refer to X(m) as a linear spectrum. Since the linear spectrum is complex, it has both real and imaginary parts. These individual parts each depend on the time position of the signal, but the resulting magnitude is independent of that position. While the linear spectrum is sufficient for many purposes, there are applications in which the square of the magnitude is of primary importance. Since the square of the magnitude is proportional to power, the term power spectrum is widely used in reference to this function. Let Sxx(m) represent the power spectrum. The power spectrum can be expressed as

    Sxx(m) = X(m) X*(m)/N = |X(m)|²/N = [Xr²(m) + Xi²(m)]/N        (11.1)

where Xr(m) and Xi(m) are the real and imaginary parts of X(m).
The power spectrum is readily calculated from X(m) by squaring the real and imaginary parts and adding the results. Many FFT processors have a direct provision for determining some form of the power spectrum. The function Sxx(m) may or may not represent true power in watts, as this depends on what physical variables are involved and in what manner the signal is used. However, it is conceptually useful to think of this quantity as representing power in a general sense. Additional insight into this concept can be gained from a modified form of Parseval's theorem as applied to discrete-time signals:

    Σ_{n=0}^{N−1} x²(n) = Σ_{m=0}^{N−1} Sxx(m)        (11.2)
The expression on the left is proportional to the energy contained in one time-domain cycle of the signal. According to this theorem, the same result may be obtained directly from the spectrum by summing the terms of the power spectrum over one frequency-domain cycle.
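Equations (11.1) and (11.2) can be checked numerically. The sketch below uses a direct O(N²) DFT loop for clarity (an FFT routine would replace it in practice) on an arbitrary four-point sequence:

```python
# Power spectrum (Eq. 11.1) and the discrete Parseval relation (Eq. 11.2).
import cmath

def dft(x):
    """Direct-form DFT, X(m) = sum_n x(n) exp(-j 2 pi m n / N)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * m * n / N) for n in range(N))
            for m in range(N)]

def power_spectrum(x):
    """S_xx(m) = |X(m)|^2 / N, as in Eq. (11.1)."""
    N = len(x)
    return [abs(X) ** 2 / N for X in dft(x)]

x = [1.0, 2.0, 0.0, -1.0]          # arbitrary sample sequence
Sxx = power_spectrum(x)

# Parseval check, Eq. (11.2): sum of x^2(n) equals sum of S_xx(m).
lhs = sum(v * v for v in x)
rhs = sum(Sxx)
```

Both sums evaluate to the same number, as (11.2) requires.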
Correlation and Statistical Analysis
The concept of the lagged product, used in conjunction with a transform pair, underlies two primary operations:
(a) Cross-correlation
(b) Autocorrelation
These functional operations are used extensively in signal and statistical analysis, both in continuous-time systems and in discrete-time systems. Many of the correlation applications previously performed with analog circuits for continuous-time signals can now be achieved with either a general purpose computer or a special purpose digital processor using discrete-time techniques. For the sake of simplicity, the basic correlation relationships are first discussed in terms of the continuous-time forms. Let x(t) and y(t) represent two random continuous-time signals. The correlation function Rxy(τ) of the two signals over an interval tp can be defined as

    Rxy(τ) = ⟨x(t) y(t − τ)⟩ = lim_{tp→∞} (1/tp) ∫₀^{tp} x(t) y(t − τ) dt        (11.3)
The quantity τ represents a delay or lag variable. The integral represents the average of the product of the two signals, expressed as a function of the amount by which one signal is delayed. The angle brackets around the product x(t) y(t − τ) (a line above the product in the printed notation) denote a time average of the quantity involved. The relative value of Rxy(τ) indicates how well the two signals are correlated for the particular value of delay. If the correlation function peaks for a particular value of τ, this indicates good correlation, meaning that the two signals match each other very well at that delay. Conversely, a very small or zero value of the correlation function indicates little or no correlation. The autocorrelation function Rxx(τ) can be considered a special case of the correlation function with y(t) = x(t). The definition for a continuous-time random signal is

    Rxx(τ) = ⟨x(t) x(t − τ)⟩ = lim_{tp→∞} (1/tp) ∫₀^{tp} x(t) x(t − τ) dt        (11.4)
The operation is simply the averaging of the product of the signal and a delayed version of itself, as a function of the delay. The Fourier transforms of the cross-correlation and autocorrelation functions often provide useful interpretations of the nature of the signals. It can be shown that the Fourier transform of the autocorrelation function is the square of the magnitude of the Fourier transform X(f) of the signal x(t). This function is called the auto power spectrum, and it will be denoted by Sxx(f). Correspondingly, the Fourier transform of the cross-correlation function is called the cross power spectrum Sxy(f):

    Sxy(f) = F[Rxy(τ)] = X(f) Y*(f)        (11.5)

Note that Sxx(f) is a real function, while Sxy(f) is in general complex. One of the primary applications of cross-correlation is in determining the delay of a signal which has been hidden in additive noise. For example, this operation arises in radar and sonar systems, where a known signal is transmitted and reflected from a target at some later time. Measurement of the exact delay provides information regarding the range of the target.
Let x(t) represent the transmitted signal and let u(t) represent the additive noise. Although the transmitted signal will undergo some distortion, we will neglect this effect in this discussion. The received signal y(t) will then be of the form

    y(t) = x(t − Td) + u(t)        (11.6)

where Td represents the total two-way delay of the signal. A cross-correlation can now be made at the receiver between the received signal y(t) and a stored version of the transmitted signal x(t). This operation yields

    Ryx(τ) = ⟨y(t) x(t − τ)⟩ = ⟨x(t − Td) x(t − τ)⟩ + ⟨u(t) x(t − τ)⟩        (11.7)
The second term represents the correlation between the transmitted signal and the noise. In general, there is no correlation between these two quantities, so the expected value of this term is zero. The first term is actually an autocorrelation, of the form Rxx(τ − Td), if any distortion of the received signal is neglected. The correlation function can therefore be expected to show a peak at τ = Td, which provides an accurate measure of the delay time. Using these methods, it is possible to measure the delay of a signal which has been virtually buried in noise.
Certain statistical parameters can be related to the autocorrelation function. Let p(x) represent the probability density function of the variable x. The function p(x) is characterized by a number of useful statistical parameters, of which some of the most important are:
(a) the mean value ⟨x⟩
(b) the mean-square value ⟨x²⟩
(c) the variance σ²
The mean (or dc) value is defined as

    ⟨x⟩ = ∫_{−∞}^{∞} x p(x) dx        (11.8)

The mean-square value is defined as

    ⟨x²⟩ = ∫_{−∞}^{∞} x² p(x) dx        (11.9)

Finally, the variance is defined as

    σ² = ∫_{−∞}^{∞} (x − ⟨x⟩)² p(x) dx        (11.10)
It can readily be shown that σ² can be expressed as

    σ² = ⟨x²⟩ − ⟨x⟩²        (11.11)

that is, the mean-square value minus the square of the mean. Assume that the process under consideration is an ergodic random process. This means that the ensemble averages given in the preceding equations are equivalent to appropriate time averages taken from the same process. The mean-square value is then determined from

    ⟨x²⟩ = lim_{tp→∞} (1/tp) ∫₀^{tp} x²(t) dt = Rxx(0)        (11.12)
If the process contains a dc value, it appears as a long-term constant in Rxx(τ). Assuming the absence of any periodic components, the square of the mean can be expressed as

    ⟨x⟩² = lim_{τ→∞} ⟨x(t) x(t − τ)⟩ = Rxx(∞)        (11.13)
Finally, the variance is readily expressed as

    σ² = Rxx(0) − Rxx(∞)        (11.14)

It is seen, then, that for an ergodic process some of the most important statistical properties may be determined from the autocorrelation function. Having considered some of the basic definitions and properties of correlation and the associated statistical concepts, we will now investigate the possible digital implementation of these operations. In order to keep the notation as simple as possible, we will continue to use the same basic symbols established in this section for continuous-time functions, but with the arguments replaced by integers for the discrete-time case. Consider that we are given two discrete-time signals x(n) and y(n). The discrete cross-correlation function will be denoted by Rxy(k), and a suitable definition is

    Rxy(k) = ⟨x(n) y(n − k)⟩ = (1/N) Σ_{n=0}^{N−1} x(n) y(n − k)        (11.15)
The discrete autocorrelation function will be denoted by Rxx(k), and it can be expressed as

    Rxx(k) = ⟨x(n) x(n − k)⟩ = (1/N) Σ_{n=0}^{N−1} x(n) x(n − k)        (11.16)

The DFT of the autocorrelation function is readily obtained as

    D[Rxx(k)] = X(m) X*(m)/N = |X(m)|²/N = Sxx(m)        (11.17)

where Sxx(m) is the discrete auto power spectrum of the signal as defined earlier in (11.1). The inverse DFT of the power spectrum is the autocorrelation function, as expressed by

    D⁻¹[Sxx(m)] = Rxx(k)        (11.18)
The discrete cross power spectrum Sxy(m) can be readily expressed as

    D[Rxy(k)] = X(m) Y*(m)/N = Sxy(m)        (11.19)
The inverse DFT of the cross power spectrum is the cross-correlation function:

    D⁻¹[Sxy(m)] = Rxy(k)        (11.20)
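The relationships (11.16)-(11.18) can be verified numerically. One caveat: because the DFT is inherently periodic, the autocorrelation recovered from the power spectrum is the circular one, with the index n − k taken modulo N. A small pure-Python check:

```python
# Verify that the inverse DFT of S_xx(m) equals the (circular)
# autocorrelation R_xx(k), Eqs. (11.16)-(11.18).
import cmath

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * m * n / N) for n in range(N))
            for m in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[m] * cmath.exp(2j * cmath.pi * m * n / N) for m in range(N)) / N
            for n in range(N)]

def circ_autocorr(x, k):
    """R_xx(k) = (1/N) * sum_n x(n) x((n-k) mod N): circular Eq. (11.16)."""
    N = len(x)
    return sum(x[n] * x[(n - k) % N] for n in range(N)) / N

x = [1.0, 2.0, 0.0, -1.0]                      # arbitrary test sequence
N = len(x)
Sxx = [abs(X) ** 2 / N for X in dft(x)]        # Eq. (11.17)
Rxx = [v.real for v in idft(Sxx)]              # Eq. (11.18)
direct = [circ_autocorr(x, k) for k in range(N)]
```

Note also that Rxx(0) equals the mean-square value of the sequence, consistent with (11.12).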
The various relationships associated with the discrete cross-correlation function are illustrated in Fig 11.1. The cross-correlation function of two signals x (n) and y (n) may be calculated directly as shown on the left. However, an alternate procedure is to use an FFT algorithm to compute the spectra of the two signals and then multiply one spectrum by the conjugate of the other. The result of this operation is the cross spectrum which can then be inversely transformed with
the FFT to yield the cross-correlation function. All the operations apply to the autocorrelation function when the two signals are the same. Other computational combinations are possible depending on what functions are known in a given case.
Fig. 11.1 Various cross-correlation relationships.
Use of the FFT and discrete-time techniques necessitates that the interval of analysis be restricted to a finite length having a finite number of sample points.
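The delay-measurement idea of Eqs. (11.6)-(11.7), carried out with the transform procedure of Fig. 11.1, can be sketched as follows. The sequence length, the delay, and the use of a noiseless circularly delayed copy are illustrative simplifications, and direct O(N²) DFT loops stand in for the FFT:

```python
# Estimate a delay by cross-correlation computed through the spectrum:
# multiply one transform by the conjugate of the other, inverse transform,
# and locate the peak (Fig. 11.1 procedure).
import cmath, random

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * m * n / N) for n in range(N))
            for m in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[m] * cmath.exp(2j * cmath.pi * m * n / N) for m in range(N)) / N
            for n in range(N)]

random.seed(3)
N, d = 32, 5
x = [random.gauss(0.0, 1.0) for _ in range(N)]   # "transmitted" signal
y = [x[(n - d) % N] for n in range(N)]           # "received": delayed copy

X, Y = dft(x), dft(y)
Syx = [Y[m] * X[m].conjugate() / N for m in range(N)]  # cross power spectrum
Ryx = [v.real for v in idft(Syx)]                      # cross-correlation

delay_hat = max(range(N), key=lambda k: Ryx[k])        # lag of the peak
```

The peak lag recovers the delay d, as predicted by the discussion following (11.7).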
12 DSP Processors

12.1 INTRODUCTION
The purpose of this chapter is to provide an understanding of DSP architectural features, general purpose DSP processors, special purpose DSP processors and the implementation of DSP algorithms. Typical DSP operations include convolution, correlation, FIR filtering, IIR filtering, spectral analysis, adaptive filtering, multirate filtering, etc. All these operations demand real-time performance, which cannot be achieved using general purpose microprocessors. Hence, DSP processors with special architectural features are required to implement DSP algorithms with real-time performance.
12.2 ARCHITECTURES FOR DIGITAL SIGNAL PROCESSING

12.2.1 Need for Special Architectures
Suppose we wish to process an audio signal containing frequencies in the range 0-20 kHz. To adequately represent the signal using digital samples without any loss of information, we need to take samples at a rate of 40 kHz (as per the Nyquist sampling theorem, fs ≥ 2fm). Hence, samples arrive from the ADC at a rate of 40,000 samples per second with a sampling period of Ts = 1/40000 sec. The DSP system needs to process these samples and send the processed samples to the DAC at the same rate of 40,000 samples per second. If a sample is received from the ADC at t = 0, the next sample arrives at t = 1/40000 sec; hence, the computation of each output sample must be complete within 1/40000 sec. In other words, DSP algorithms require real-time performance, which cannot be achieved with a general purpose microprocessor. General purpose microprocessors are designed to run a wide variety of applications efficiently: graphics, scientific computing, high-rate input-output, and so on. A DSP processor, in contrast, runs a single application containing a huge number of numerical computations such as complex multiplications, additions, circular buffering, FFTs, bit-reversal operations, etc. Hence, the architecture of a DSP processor is designed to be efficient at running DSP algorithms with real-time performance.
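The real-time budget in the 40 kHz example above can be made concrete with a little arithmetic. The 100 MHz clock rate and the 64-tap filter length below are assumed figures for illustration only, not values from the text:

```python
# Back-of-the-envelope real-time budget for a 40 kHz sample stream.
fs = 40_000                      # samples per second (Nyquist rate for 20 kHz)
Ts = 1.0 / fs                    # time available per sample: 25 microseconds

clock_hz = 100e6                 # assumed 100 MHz processor clock
cycles_per_sample = clock_hz * Ts   # clock cycles available per sample

# A hypothetical 64-tap FIR filter needs 64 multiply-accumulates per
# sample; the cycles available per MAC show why single-cycle MAC
# hardware matters.
cycles_per_mac = cycles_per_sample / 64
```

With these assumptions only about 39 cycles are available per multiply-accumulate, before counting any I/O or control overhead.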
12.2.2 DSP Architectural Highlights
DSP processors have special architectural features such as Harvard architecture, pipelining, a fast dedicated hardware multiplier/accumulator, special instructions dedicated to DSP, hardware replication, on-chip memory/cache, etc.
Harvard Architecture In conventional processors, data and the instructions that operate on the data are kept in one single unified memory. Hence, while the data corresponding to an instruction is being fetched, the next instruction cannot be fetched. In the Harvard architecture, separate memories are available for holding data and instructions: the memory used to hold data is called data memory, whereas the memory used to hold instructions is called program memory. DSP processors employ the Harvard architecture, wherein separate program and data address/data buses are available, so that data fetches and instruction fetches can be carried out at the same time, resulting in high memory throughput.
Fig. 12.1 Memory organization of conventional processors.

Fig. 12.2 Harvard architecture employing dual memory.
Pipelining Pipelining is a technique which allows two or more instructions to overlap during execution. In pipelining, a task is divided into a number of subtasks which are overlapped during execution. Pipelining increases the throughput of the processor by executing more instructions per second. Pipelining requires a Harvard architecture so that data fetches and instruction fetches can be overlapped. The execution of instructions by a processor without pipelining and by a processor with pipelining is shown in Figs. 12.3 and 12.4.
Fig. 12.3 Execution of instructions in an unpipelined processor.
Fig. 12.4 Execution of instructions in a pipelined processor.
Hardware Multiplier-Accumulator DSP processors perform multiplications and additions extensively. Multiplication in software is time consuming, and floating point addition, multiplication and division are more time consuming still. DSP processors employ dedicated hardware multiplier/accumulator units, speeding up these operations by orders of magnitude.

Special Instructions for DSP Conventional processors have instructions for memory load and store, arithmetic and logic shift operations, register transfer and input-output operations. DSP processors additionally support special instructions whereby multiple microoperations can be performed by a single instruction. This results in smaller code size, increased execution speed, fewer loop-overhead instructions and application specific instructions. Special instructions include:
(a) instructions that perform multiply, accumulate and store in a single cycle;
(b) block transfer of coefficients/data;
(c) zero-overhead instruction repeat;
(d) scrambling/unscrambling of data for FFT operations;
(e) bit-reversed addressing; and
(f) circular buffering used in FIR filters.
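The circular buffering and multiply-accumulate pattern that these instructions implement in hardware can be modelled in software. The following is a pure-Python sketch of the data flow only, not of any particular instruction set; the 3-tap moving average used to exercise it is an arbitrary choice:

```python
# Software model of circular buffering plus MAC for one FIR output sample.
def fir_step(coeffs, buf, pos, sample):
    """Insert one input sample into the circular buffer and return one
    output sample y(n) = sum_k h(k) x(n-k), plus the new write position."""
    M = len(coeffs)
    buf[pos] = sample                 # overwrite the oldest sample
    acc = 0.0
    for k in range(M):                # MAC loop: acc += h(k) * x(n-k)
        acc += coeffs[k] * buf[(pos - k) % M]   # modulo (circular) addressing
    return acc, (pos + 1) % M         # advance the write pointer circularly

# Exercise the model with a 3-tap moving average.
h = [1 / 3, 1 / 3, 1 / 3]
buf, pos = [0.0] * len(h), 0
out = []
for s in [3.0, 6.0, 9.0, 12.0]:
    y, pos = fir_step(h, buf, pos, s)
    out.append(y)
```

On a DSP the modulo address update and the multiply-accumulate of each loop iteration would be a single-cycle operation.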
Hardware Replication Typical DSPs provide multiple fixed point and floating point multipliers. Since multipliers, execution units and load/store units are replicated, multiple instructions can be executed simultaneously. Most DSP operations, like convolution, filtering and correlation, involve independent multiplications of coefficients with input data which can be performed in parallel. Hardware replication allows such parallel operations to be performed simultaneously in one cycle.

On-Chip Memory/Cache Over the past few years, processors have improved in their execution speed, but the access speeds of memories have not improved at the same pace. Hence most processors spend much of their time waiting for memory accesses to complete. DSP processors employ on-chip memory/cache, reducing memory access time to the order of nanoseconds. Frequently accessed instructions and data are kept in on-chip caches to reduce the memory access time. Most commercial DSP processors employ two-level caches: the first-level cache provides low access time, whereas the second-level cache provides high capacity to reduce the miss rate.
12.3 ON-CHIP PERIPHERALS
Programmable DSPs have a number of on-chip peripherals that relieve the CPU of routine functions. Further, they also help to reduce the chip count of a DSP system built around a P-DSP. Some of the on-chip peripherals in P-DSPs and their functions are as follows.

On-Chip Timer Timers are typically used in DSP processors for the generation of periodic interrupts and of sampling clocks for A/D converters.

Serial Port This peripheral allows the processor to communicate with external peripherals like an A/D converter, a D/A converter or an RS232 device. These ports normally have input and output buffers so that the P-DSP reads or writes from the serial port in parallel form while the serial port sends and receives data to the peripherals in serial form. Some of these serial ports can also operate in Time Division Multiplexed (TDM) fashion to allow communication with multiple devices over a single serial port.

Parallel Port Parallel ports allow communication over multiple parallel lines, thereby offering high speed data transfer. In addition to the regular data lines, additional lines are used for strobing/handshaking.

On-Chip A/D and D/A Converters Some programmable DSPs targeted at voice applications, e.g., those used in cellphones, have on-chip A/D and D/A converters to minimize the chip count and decrease the area required in the implementation.

Host Port A host port is a special parallel port that enables a programmable DSP to communicate with a microprocessor or a PC, which is called the host. In addition to data communication, the host can generate interrupts and can also cause the P-DSP to load a program from ROM to RAM on reset.
12.4 DSP IMPLEMENTATION TECHNOLOGY
The choice of implementation technology for realizing a DSP algorithm depends on many factors such as operating speed, flexibility, cost, etc. Popular choices for the realization of DSP algorithms include general purpose DSP processors, special purpose DSP processors, FPGAs, etc. General purpose DSP chips are basically high speed microprocessors with their hardware architectures and instruction sets optimized for DSP operations. Popular companies that make DSP processors include Motorola,
Texas Instruments, Analog Devices, etc. General purpose DSP processors typically find wide application in audio signal processing systems. Development of a DSP algorithm on a general purpose DSP processor involves coding the algorithm in the C language and then cross-compiling the code to generate the executable that runs on the specific DSP processor; each of the previously mentioned DSP vendors provides a software IDE for this purpose. In wide bandwidth applications, where the input/output data rates are high, most general purpose DSP chips cannot handle the required computations fast enough. Further, for a given application, most general purpose DSPs contain many on-chip resources that are either redundant or underutilized, for example I/O peripherals, addressing modes, instruction sets, etc. This leads to lower speed and higher power consumption. Special purpose DSPs are designed with their hardware optimized to perform a certain operation or a specific application. Since the hardware is customized, it occupies a smaller area, operates at higher speed and consumes less power; however, flexibility is traded for the speed achieved. Special purpose DSPs are available for FIR filtering, IIR filtering, FFT processing, multirate processing, etc. High speed, mission-critical DSP applications like radar target tracking are typically implemented in field programmable gate arrays (FPGAs). These chips offer the flexibility of a general purpose DSP processor along with much of the speed of custom application specific processors. Given the specifications of the custom DSP application, the architecture is developed and coded in Verilog/VHDL. The bit file that configures the FPGA/CPLD is then generated using the corresponding vendor IDE and the device is programmed. This technology offers a low cost, low power, high speed, flexible/reconfigurable option for building a DSP application.
However, the long design time and the technical expertise required limit its usage to high-end applications.
12.5 TYPES OF DIGITAL SIGNAL PROCESSORS
Depending on the speed requirements of different applications, dominant DSP vendors like Texas Instruments, Motorola and Analog Devices manufacture two types of DSP processors.

(a) General Purpose DSPs or Programmable DSPs
These processors are programmed to run a specific application by writing a C program and generating machine code with the compiler specific to that processor. This allows the processor to be programmed to run a wide variety of DSP applications, and further allows flexibility, re-programmability and reuse. For signal processing applications that run at low to moderate speeds, these DSPs are preferred.

(b) Special Purpose DSP Hardware
Special purpose hardware can be implemented on a single chip or realized from individual components. Special purpose chips for FIR filtering, IIR filtering and FFT processing, running at very high speeds, are available from major vendors. These chips are used in mission-critical applications. An example is the PDSP16112A from Plessey Semiconductors.
Multiple-Choice Questions
1. The feature in which a P-DSP is superior to advanced microprocessors is
   (a) low cost (b) low power
   (c) computational speed (d) real-time I/O capability
2. The addressing mode that is convenient for FFT computation is
   (a) indirect addressing (b) circular mode addressing
   (c) bit-reversed addressing (d) memory mapped addressing
3. The addressing mode that is convenient for FIR filtering is
   (a) indirect addressing (b) circular mode addressing
   (c) bit-reversed addressing (d) memory mapped addressing
4. Which of these characteristics is true for a RISC processor?
   (a) smaller control unit (b) small instruction set
   (c) short program length (d) less traffic between CPU and memory
5. Programmable DSPs follow the
   (a) von Neumann architecture (b) Harvard architecture
   (c) memory mapped architecture
Key to the Multiple-Choice Questions
1. (c)  2. (c)  3. (b)  4. (b)  5. (b)
13 DSP Applications
DSP is one of the fastest growing fields in modern electronics, being used in any area where information is handled in digital form or controlled by a digital processor. Application areas include the following:
1. Image Processing
   – pattern recognition
   – robotic vision
   – image enhancement
   – facsimile
   – satellite weather maps
   – animation
2. Instrumentation/Control
   – spectrum analysis
   – position and rate control
   – noise reduction
   – data compression
3. Speech/Audio
   – speech recognition
   – speech synthesis
   – text to speech
   – digital audio
   – equalization
4. Military
   – secure communication
   – radar signal processing
   – sonar signal processing
   – missile guidance
5. Telecommunications
   – echo cancellation
   – adaptive equalization
   – spread spectrum
   – videoconferencing
   – data communication
6. Biomedical
   – patient monitoring
   – scanners
   – ECG analysis
   – EEG brain mappers
   – X-ray storage/enhancement
7. Consumer applications
   – digital cellular mobile phones
   – digital television
   – digital cameras
   – universal mobile communication system
   – Internet phones, music and video
   – digital answering machines, fax and modems
   – voice mail systems
   – interactive entertainment systems
Applications of DSP DSP has many applications, starting from the processing of ECG (electrocardiogram) signals and the detection of any sporadic variations in them; the ECG is a low frequency signal. Seismogram signals are the signals recorded during earthquakes; DSP techniques are used to measure their intensity on the Richter scale and to remove the signal portions in unwanted ranges. DSP has applications throughout communications, for example radar/sonar, satellite and mobile communications: any signal is processed before transmission, and at the receiver end it again undergoes processing for optimum detection. Filters are used in almost all types of communication. DSP is a generic area, and hence it finds applications in many fields. Sampling is a preprocessing technique for digitizing the signal. Multirate signal processing techniques are used in subband coding of speech signals: a high sampling rate is used in the required band of signals, while the bands not required are left untouched. Correlation techniques are used in spectral estimation. Several DSP algorithms exist and many more are being invented or discovered; for example, fast scheduling algorithms are used in WiMAX to increase the data rate, and power control algorithms reduce the power dissipation in mobile handsets. Audio mixing systems are a prime example of DSP being successfully employed to improve audio quality and enhance functionality.
Audio mixing is used in professional audio applications, e.g., studio recording, broadcasting, sound reinforcement and public address systems. Likewise, DSP plays a vital role in speech synthesis and recognition. DSP is used extensively to process signals and data at radio base stations and in mobile phones (e.g., for speech encoding, multipath equalization, signal strength measurement, voice messaging, error control coding, modulation and demodulation). DSP chips optimized for wireless communications are now available, enabling the mobile communications industry to offer affordable, high quality products for the mass market. Modern mobile phone systems employ digital cellular radio concepts. Besides speech coding and multipath equalization, DSP techniques have also found use in digital modulation. Set-top boxes convert digital information into a form suitable for reception by analog TV sets; the latest TV sets have a built-in decoder. In digital TV, DSP plays a crucial role in the processing, encoding/decoding and modulation/demodulation of video and audio signals from the point of capture to the point at which they are viewed.
Fig. 13.1 Digital signal processing has fuzzy and overlapping borders with many other areas of science, engineering and mathematics.

13.1 APPLICATIONS OF DSP IN TELECOMMUNICATIONS
Telecommunications is about transferring information from one location to another. This includes many forms of information: telephone conversations, television signals, computer files, and other types of data. To transfer the information, you need a channel between the two locations. This may be a wire pair, a radio signal, an optical fiber, etc. Telecommunications companies receive payment for transferring their customers' information, while they must pay to establish and maintain the channel. The financial bottom line is simple: the more information they can pass through a single channel, the more money they make. DSP has revolutionized the telecommunications industry in many areas: signaling tone generation and detection, frequency band shifting, filtering to remove power line hum, etc.
13.1.1 Multiplexing
There are approximately one billion telephones in the world. At the press of a few buttons, switching networks allow any one of these to be connected to any other in only a few seconds. The immensity of this task is mind-boggling! Until the 1960s, a connection between two telephones required passing the analog voice signals through mechanical switches and amplifiers; one connection required one pair of wires. In comparison, DSP converts audio signals into a stream of serial digital data. Since bits can easily be interleaved and later separated, many telephone conversations can be transmitted on a single channel. For example, the telephone standard known as the T-carrier system can simultaneously transmit 24 voice signals. Each voice signal is sampled 8000 times per second using an 8-bit companded (logarithmically compressed) analog-to-digital conversion. This results in each voice signal being represented as 64,000 bits/sec, and all 24 channels being contained in 1.544 megabits/sec. This signal can be transmitted about 6000 feet using ordinary telephone lines of 22 gauge copper wire, a typical interconnection distance. The financial advantage of digital transmission is enormous: wire and analog switches are expensive; digital logic gates are cheap.
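The bit-rate figures quoted above follow from simple arithmetic; the single framing bit per frame of 24 samples is the standard T1 detail that accounts for the difference between 1.536 and 1.544 Mbit/s:

```python
# Arithmetic behind the T1 (T-carrier) figures: 24 voice channels, each
# sampled at 8000 Hz with 8-bit companded words, plus one framing bit
# per 193-bit frame.
bits_per_sample = 8
fs = 8000                                   # samples per second per channel
channels = 24

per_channel = bits_per_sample * fs          # 64,000 bits/sec per voice signal
payload = channels * per_channel            # 1,536,000 bits/sec of voice data
framing = 1 * fs                            # one framing bit per frame
t1_rate = payload + framing                 # 1,544,000 bits/sec total
```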
13.2 APPLICATIONS OF DSP IN SPEECH PROCESSING
13.2.1 Audio Processing The two principal human senses are vision and hearing. Correspondingly, much of DSP is related to image and audio processing. People listen to both music and speech. DSP has made revolutionary changes in both these areas.
13.2.2 Music
The path leading from the musician's microphone to the audio files is remarkably long. Digital data representation is important to prevent the degradation commonly associated with analog storage and manipulation. This is very familiar to anyone who has compared the musical quality of cassette tapes with that of compact disks. In a typical scenario, a musical piece is recorded in a sound studio on multiple channels or tracks. In some cases, this involves recording individual instruments and singers separately, to give the sound engineer greater flexibility in creating the final product. The complex process of combining the individual tracks into a final product is called mix down. DSP can provide several important functions during mix down, including filtering, signal addition and subtraction, signal editing, etc. One of the most interesting DSP applications in music preparation is artificial reverberation. If the individual channels are simply added together, the resulting piece sounds frail and diluted, much as if the musicians were playing outdoors. This is because listeners are greatly influenced by the echo or reverberation content of the music, which is usually minimized in the sound studio. DSP allows artificial echoes and reverberation to be added during mix down to simulate various ideal listening environments. Echoes with delays of a few hundred milliseconds give the impression of cathedral-like locations, while echoes with delays of 10-20 milliseconds provide the perception of more modest sized listening rooms.
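The artificial-echo idea reduces to adding a delayed, attenuated copy of the signal, y(n) = x(n) + g·x(n − D). The delay and gain below are arbitrary illustrative values; a real reverberator cascades many such sections with different delays:

```python
# Minimal single-echo sketch: y(n) = x(n) + g * x(n - D).
# At a 44.1 kHz sample rate, a 10-20 ms echo corresponds to
# D of roughly 441-882 samples; tiny values are used here for clarity.
def add_echo(x, D, g):
    return [x[n] + (g * x[n - D] if n >= D else 0.0) for n in range(len(x))]

dry = [1.0, 0.0, 0.0, 0.0, 0.0, 0.0]          # a unit impulse as test input
wet = add_echo(dry, D=2, g=0.5)               # echo 2 samples later, half amplitude
```

Feeding an impulse through the section exposes its impulse response: the original sample followed by the attenuated echo.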
13.2.3 Speech Generation
Speech generation and recognition are used to communicate between humans and machines. Rather than using your hands and eyes, you use your mouth and ears. This is very convenient when your hands and eyes should be doing something else, such as driving a car, performing surgery, or (unfortunately) firing weapons at the enemy. Two approaches are used for computer-generated speech: digital recording and vocal tract simulation. In digital recording, the voice of a human speaker is digitized and stored, usually in a compressed form. During playback, the stored data are uncompressed and converted back into an analog signal. An entire hour of recorded speech requires only about three megabytes of storage, well within the capabilities of even small computer systems. This is the most common method of digital speech generation used today. Vocal tract simulators are more complicated, trying to mimic the physical mechanisms by which humans create speech. The human vocal tract is an acoustic cavity with resonant frequencies determined by the size and shape of its chambers. Sound originates in the vocal tract in one of two basic ways, called voiced and fricative sounds. With voiced sounds, vocal cord vibration produces nearly periodic pulses of air into the vocal cavities. In comparison, fricative sounds originate from the noisy air turbulence at narrow constrictions, such as the teeth and lips. Vocal tract simulators operate by generating digital signals that resemble these two types of excitation. The characteristics of the resonant cavity are simulated by passing the excitation signal through a digital filter with similar resonances. This approach was used in one of the very early DSP success stories, the Speak & Spell, a widely sold electronic learning aid for children.
Fig. 13.2 Human speech model. Over a short segment of time, about 2 to 40 milliseconds, speech can be modeled by three parameters: (1) the selection of either a periodic or a noise excitation, (2) the pitch of the periodic excitation, and (3) the coefficients of a recursive linear filter mimicking the vocal tract response.
13.2.4 Speech Recognition
The automated recognition of human speech is immensely more difficult than speech generation. Speech recognition is a classic example of something that the human brain does well, but digital computers do poorly. Digital computers can store and recall vast amounts of data, perform mathematical calculations at blazing speeds, and do repetitive tasks without becoming bored or inefficient. Teaching the same computer to understand your voice is a major undertaking. Digital signal processing generally approaches the problem of voice recognition in two steps: feature extraction followed by feature matching. Each word in the incoming audio signal is isolated and then analyzed to identify the type of excitation and the resonant frequencies. These parameters are then compared with previous examples of spoken words to identify the closest match. Often, these systems are limited to only a few hundred words; can only accept speech with distinct pauses between words; and must be retrained for each individual speaker. While this is adequate for many commercial applications, these limitations are humbling when compared to the abilities of human hearing. There is a great deal of work to be done in this area, with tremendous financial rewards for those who produce successful commercial products.
13.3
RADAR/SONAR COMMUNICATIONS
The flexibility and versatility of digital techniques grew in front-end signal processing, and with the advent of integrated digital circuitry, high-speed signal processors were developed and realized. DSP is the science of using computers to interpret the digital patterns that exist everywhere in technology today. DSP applications analyze and interpret digital patterns or signals and mathematically manipulate them. DSP applications have shaped the development of technology since the 1960s in radar, sonar, space exploration, etc. More recently, DSP-based systems provide a sophisticated foundation for missile defense – searching, tracking, and launching with great precision. Radar and sonar have key applications for military platforms worldwide. The essence of DSP is to efficiently transform a stream of data into relevant information quickly enough to be highly useful and applicable. In radar and sonar applications, for example, this may mean processing incoming electromagnetic or acoustic signals to determine the location, speed, and direction of potential threats, targets, and terrain while filtering out irrelevant data such as a small bird or fish. Radar has continued to grow in recent years with improved digital capability and with future developments in mind. Significant contributions of DSP to radar have been in MTI processing, automatic detection and extraction of signals, etc. Detection is the process by which the presence of the target is sensed in the presence of competing indications arising from background echoes (clutter), atmospheric noise, or noise generated in the radar receiver. The noise power present at the output of the radar receiver can be minimized by using a filter whose frequency response maximizes the output peak-signal to mean-noise (power) ratio; such a filter is called a matched filter. Correlation techniques are widely used in radar and sonar at the receiver stage to detect the target.
The correlation coefficient gives information about the closeness between two signals.
The correlation coefficient is high if the two correlated signals are very close. The performance of the radar/sonar receiver is determined by the correlation coefficient of the transmitted signal and the received echo: if the correlation coefficient is high, optimum detection is possible. In digital form the matched filter is called a digital correlator; in analog form it is simply called a matched filter, though the function is the same. Unless the transmitted and received signals are correlated, the radar performance cannot be assessed. Sometimes, to improve the performance of the radar receiver, mismatched filters are used. These are specially designed for a specific purpose, unlike conventional filters such as lowpass, highpass, and bandpass. Doppler processing is used to filter out clutter and thereby reveal fast-moving targets. Such filters are implemented digitally, using the FFT or a set of transversal filters. Cancellers and a few optimized methods are some of the clutter rejection techniques. Digital processing permits the reference level to be generated internally from the observations themselves, thereby permitting more sensitive and faster thresholds. Most radars employ automatic detection circuits to maintain, ideally, a constant false alarm rate (CFAR) by generating estimates of the receiver output. Digital processing has also permitted increased capability for extracting target information from the radar signal. High-resolution SAR provides an image of a scene. Radars are used to recognize one type of target from another; with the aid of digital processing, inverse SAR (ISAR) produces an image of a target good enough to distinguish it from other classes of targets by extracting the spectrum of a target echo signal. Interferometric SAR, which uses two antennas spaced vertically with a common SAR system, can provide height information to obtain a 3D image of a scene. Greater flexibility and real-time operation favour digital signal processing in SAR.
SAR exploits the probability density of the clutter to detect man-made features by modeling the clutter with a family of densities and picking the density that best describes the clutter on a local basis. Fourier-based methods are used for the detection and identification of stationary and moving targets in reconnaissance SAR. The high computational cost of the time-domain correlator (TDC) is overcome by working in the frequency domain, where the digital spotlighting principle is used to extract the target’s coherent SAR signature. Sonar digital signal processing has been heavily influenced by the commercial availability of minicomputers, microcomputers, and miniaturized high-speed digital arithmetic and storage devices, and by algorithmic developments such as improved active radar waveform processing and the advent of digital filtering and Fast Fourier transform techniques developed for speech and statistical time series analysis. DSPs allow real-time processing of sonar echoes with more accurate range/bearing than laser range finders. Modern sonar measurement applications make effective use of digital signal processing (DSP) tasks such as signal averaging, filtering, and correlation. These tasks must be implemented to meet application requirements such as signal lengths, required calculation speed, and precision; recent DSP technologies are therefore helpful, owing to optimal algorithm and architecture matching. The implementation of signal correlation is in many cases also a subject of research. In sonar applications, the correlation of long signal arrays within a short computation period is often needed. This problem is crucial in applications such as sonar tracking of moving objects and sonar vision
based control systems for mobile robots, where high computation speed is required in order to perform real-time control. Sonar applications employ DSP functions such as signal generation, temporal sampling and quantization, spatial sampling and beamforming, filtering and smoothing, and decision processing, as well as advanced DSP functions such as adaptive beamforming, synthetic aperture, coherence processing, random arrays, and automation. Along with defense applications, DSP is also involved in the implementation and growth of countless market applications including oil exploration, medical imaging, audio and speech processing, digital image and video processing, mobile and telecommunications, and much more. Recent advances in signal processing are blended with many more algorithms to present an up-to-date perspective, and can be implemented on digital signal processors because of their flexibility and their ability to attain high precision. Very high performance and data acquisition modules in sonar applications can be achieved using DSP, in both military and commercial markets. DSP applications can also be effectively realized using programmable logic technologies (FPGAs), improving system parameters such as cost, energy consumption, dimensions, operation speed, and implementation period.
13.4
BIOMEDICAL APPLICATIONS OF DSP
Biomedicine represents an important and very fertile area, both for the application of conventional DSP and for the development of new and robust DSP algorithms. Often medical data are not well behaved, and this represents a challenge to DSP experts. In most cases, medical data lie in the audio frequency range. Many applications of DSP in biomedicine involve signal enhancement and/or the estimation of features of clinical interest. The electrocardiogram (ECG), for example, represents the electrical activity of the human heart as measured from the body surface.
13.4.1
Fetal Monitoring-cancelling of Maternal ECG During Labour
A visual analysis of the continuous display of the fetal heart rate (FHR) together with the contraction of the womb (uterine activity), known as the cardiotocogram (CTG), is normally used to assess the condition of the fetus during labour. Difficulties with interpreting the CTG during labour can lead to unnecessary medical intervention (e.g., Caesarean section or forceps deliveries), fetal injury, or a failure to intervene when needed. The correct use of combined fetal ECG and CTG analysis can significantly reduce unnecessary medical intervention with no adverse effect on neonatal outcome. Information derived from the fetal electrocardiogram (ECG), such as the fetal heart rate pattern, is valuable in assessing the condition of the baby before or during childbirth. The ECG derived from electrodes placed on the mother’s abdomen is susceptible to contamination from much larger background noise (for example, muscle activity and fetal motion) and the mother’s own ECG. Adaptive filters are used to derive a “noise-free” fetal ECG.
13.4.2
DSP-based Closed Loop Controlled Anaesthesia
Medical applications of DSP are evident in the intensive care units (ICUs) of all major hospitals, and more advanced techniques are continually being developed for use in anaesthesia. During surgery patients are normally anaesthetized, e.g., by injecting anaesthetic drugs intravenously, so that they do not feel pain and to create a suitable condition for the surgeon to carry out the operation. Anaesthetists aim to deliver just the right amount of drug to induce anaesthesia to the required depth as quickly as possible, and to maintain that level until a change is necessary. Injecting too much drug into a patient can lead to complications and other side effects, while an inadequate dose leads to intra-operation awareness, which may have long-term psychological consequences. In most cases, the depth of consciousness of the patient is gauged by an experienced anaesthetist by observation of clinical signs. The anaesthetist then makes appropriate changes in the dose of anaesthetic drugs to control anaesthesia. Automated drug delivery using closed-loop control techniques offers potential benefits to busy anaesthetists and leads to better patient care and reduced costs. Its use reduces the possibility of excessive dosing and enables anaesthetists to identify and respond promptly to perturbations which might go unnoticed or be deemed too small to merit manual alteration of drug administration. However, automated closed-loop controlled drug delivery requires a reliable means of monitoring depth of anaesthesia to determine the changes to drug delivery necessary to maintain anaesthesia. State-of-the-art closed-loop controlled anaesthesia systems use biological signals to measure depth of anaesthesia and as feedback signals to determine the adjustments to be made to delivery. In particular, a variety of signal processing methods are used to process the EEG to extract features such as the auditory evoked response (AER) and the bispectral index, and to estimate depth of anaesthesia from these.
The EEG is the electrical activity of the brain, measured from electrodes placed on the scalp, and the AER is the electrical response of the brain to external sound stimuli. AER signals are valuable for assessing the transition from consciousness to unconsciousness, but are difficult to obtain because they are buried in EEG signals, being several times smaller than the EEG. Signal averaging of the responses to successive auditory stimuli is often used to extract them. Thus, AER signals need to be processed, both to extract them from the background EEG and to extract features of clinical interest (e.g., peaks and shape). The bispectral index can be derived from higher-order spectrum analysis of the EEG. It provides a quantitative measure of the complex changes and interrelationships in the frequency components of the EEG at different levels of consciousness.
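The effect of signal averaging on a buried evoked response can be sketched as follows (illustrative parameters only, not clinical values; the "response" and noise levels are assumptions for the demonstration):

```matlab
% Signal averaging: the evoked response repeats identically on every
% trial while the background EEG is uncorrelated between trials, so
% averaging M trials improves the SNR by roughly sqrt(M).
n = 0:199;
aer = 0.1 * sin(2*pi*n/200) .* exp(-n/80);  % small, repeatable "response"
noise_std = 1.0;                            % background "EEG", much larger
M = 400;                                    % number of auditory stimuli
avg = zeros(1, length(n));
for trial = 1:M
    sweep = aer + noise_std * randn(1, length(n));  % one noisy trial
    avg = avg + sweep / M;                          % running average
end
% 'avg' now resembles 'aer': the residual noise standard deviation
% is about noise_std/sqrt(M) = 1/sqrt(400) = 0.05.
```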
14
Digital Signal Processing with MATLAB
MATLAB is a powerful high-level programming language for scientific computation. It is used for numerically solving complex engineering problems. MATLAB consists of functions that are either built into the interpreter or available as M-files, each containing a sequence of program statements that execute a certain algorithm. A completely new algorithm can be written as a program containing only a few of these functions and can be saved as another M-file.
14.1
MATLAB ENVIRONMENT
MATLAB works with three types of windows on your computer screen: the Command window, the Figure window and the Editor window. The Command window has the heading Command, the Figure window has the heading Fig. No. 1, and the Editor window has a heading showing the name of an opened existing M-file, or Untitled if it is a new M-file under construction. The Command window also shows the prompt >>, indicating that it is ready to execute MATLAB commands. Results of most printing commands are displayed in the Command window. This window can also be used to run small programs and saved M-files: an existing M-file is run from the Command window by typing the name of the file.
14.2
QUICK TUTORIAL OF MATLAB
This section quickly goes through matrix entry, number formats, operators, expressions, useful functions and graphics in MATLAB.
Variables
MATLAB does not require any declarations or dimension statements. When MATLAB encounters a new variable name, it automatically creates the variable and allocates the appropriate amount of storage. If the variable already exists, MATLAB changes its contents and, if necessary, allocates new storage. For example, num_students = 25 creates a 1-by-1 matrix named num_students and stores the value 25 in its single element. Variable names consist of a letter, followed by any number of letters, digits, or underscores. MATLAB uses only the first 31 characters of a
variable name. MATLAB is case sensitive; it distinguishes between uppercase and lowercase letters. A and a are not the same variable. To view the matrix assigned to any variable, simply enter the variable name.
Entering Matrices
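A few representative ways of entering matrices (an illustrative sketch; the original tabulated examples are not reproduced here):

```matlab
% Entering a matrix row by row: spaces or commas separate columns,
% semicolons separate rows.
A = [1 2 3; 4 5 6; 7 8 9];

% A row vector and a column vector.
r = [1, 2, 3];
c = [1; 2; 3];

% Matrices can be built by concatenating smaller ones.
B = [A; 10 11 12];      % append a fourth row

% size() returns the dimensions.
[m, n] = size(B);       % m = 4, n = 3
```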
Number Formats
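A brief illustration of the display formats (the `format` command changes only how numbers are displayed, not how they are stored; all arithmetic is in double precision):

```matlab
% Control how numbers are displayed in the Command window.
format short      % 5 significant digits (the default): pi -> 3.1416
disp(pi)
format long       % about 15 significant digits: pi -> 3.141592653589793
disp(pi)
format short e    % scientific notation with 5 digits: 1.2346e+04
disp(12345.678)
format short      % restore the default
```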
Operators
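A few representative arithmetic operators (illustrative; matrix-sense operators are `+ - * / ^ '`, while element-wise operators prefix a dot, `.* ./ .^`):

```matlab
a = [1 2 3];
b = [4 5 6];
s = a + b;       % element-wise sum: [5 7 9]
p = a .* b;      % element-wise product: [4 10 18]
q = a * b';      % inner product (matrix multiply with transpose): 32
e = a .^ 2;      % element-wise power: [1 4 9]
```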
Relational Operators
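The relational operators (`<  <=  >  >=  ==  ~=`) work element-wise and return logical arrays, which are handy for indexing (a brief illustrative sketch):

```matlab
x = [1 5 3 8];
mask = x > 3;          % logical vector: [0 1 0 1]
big = x(mask);         % logical indexing selects [5 8]
count = sum(x ~= 3);   % number of elements not equal to 3
```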
Colon Operator
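The colon operator builds evenly spaced vectors and selects whole rows, columns, or submatrices (an illustrative sketch):

```matlab
v = 1:5;             % [1 2 3 4 5], default step 1
w = 0:0.25:1;        % [0 0.25 0.5 0.75 1], explicit step
d = 10:-2:0;         % [10 8 6 4 2 0], negative step counts down
A = magic(4);        % a 4-by-4 test matrix
col2 = A(:, 2);      % all rows of column 2
row3 = A(3, :);      % all columns of row 3
sub  = A(2:3, 2:3);  % 2-by-2 submatrix
```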
Advanced Matrix Operations
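A few of the matrix operations this heading refers to (illustrative examples):

```matlab
A = [4 2; 1 3];
d = det(A);          % determinant = 10
Ai = inv(A);         % matrix inverse (prefer A\b for solving systems)
[V, D] = eig(A);     % eigenvectors and eigenvalues
At = A';             % transpose
r = rank(A);         % rank = 2
b = [1; 2];
x = A \ b;           % solve A*x = b by Gaussian elimination
```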
Examples of Expressions
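A few representative expressions (illustrative; expressions combine variables, operators and nested function calls):

```matlab
rho = (1 + sqrt(5)) / 2;   % golden ratio, about 1.6180
a = abs(3 - 4i);           % magnitude of a complex number: 5
big = exp(log(realmax));   % functions can be nested freely
avg = sum(1:10) / 10;      % mean of 1..10 = 5.5
```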
Simple Example
This section illustrates solving the two linear equations
x + y = 3
2x + 3y = 8
Method-1
Method-2
>> x = -4 : 1e-3 : 4;
>> y1 = 3 - x;
>> y2 = (8 - 2*x)/3;
>> plot(x, y1, x, y2); grid;
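The same system can also be solved algebraically with matrix left division (a minimal sketch, assuming this is the kind of direct solution Method-1 illustrates):

```matlab
% Write the system  x + y = 3,  2x + 3y = 8  as A*v = b.
A = [1 1; 2 3];
b = [3; 8];
v = A \ b;    % v(1) = x = 1, v(2) = y = 2
```

This agrees with the intersection point of the two lines plotted in Method-2.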
Fig. 14.1 Graphical plot of two linear equations.

Commonly used Matrices
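A few of the built-in matrix generators (an illustrative sketch; the original tabulation lists more):

```matlab
I = eye(3);          % 3-by-3 identity matrix
Z = zeros(2, 3);     % 2-by-3 matrix of zeros
O = ones(3, 2);      % 3-by-2 matrix of ones
R = rand(2);         % 2-by-2 uniformly distributed random matrix
M = magic(4);        % 4-by-4 magic square (rows/columns all sum to 34)
D = diag([1 2 3]);   % diagonal matrix built from a vector
```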
General Purpose Commands
• help fft
• helpwin fft2
• doc freqz
• lookfor fourier
• clc
• clear
• close all
• dir
• pwd
• ver
• date
• save abcd a b c
• load abcd
• who
• whos
• cd
• quit
• exit
Graphics
Simple Plots
>> x = 0: pi/100: 6*pi;
>> y = sin(x);
>> plot(x, y, 'r')
Multiple Plots
>> x = 0: pi/100: 6*pi;
>> y = sin(x); z = cos(x);
>> plot(x, y, 'r:', x, z, 'k--'); grid;
Subplots
>> x = 0: pi/100: 6*pi;
>> y = sin(x); z = cos(x);
>> figure;
>> subplot(2, 2, 1); plot(x, y);
>> subplot(2, 2, 2); plot(x, z);
>> subplot(2, 2, 3); plot(x, y+z);
>> subplot(2, 2, 4); plot(x, y-z);
Fig. 14.2
Subplots of Sinusoids with different phases.
Labeling Axes
>> x = linspace(-5, 5, 200);
>> y = x.*x - 4*x + 3;
>> plot(x, y);
>> grid
>> xlabel('x axis');
>> ylabel('y axis');
>> title('Quadratic Equation');
>> legend('x^{2}-4x+3');
Fig. 14.3
Graphical plot of a quadratic equation.
3-D Plots
>> [X,Y] = meshgrid(-3:.125:3);
>> Z = peaks(X,Y);
>> meshc(X,Y,Z);
>> axis([-3 3 -3 3 -10 5])
14.3
DIGITAL SIGNAL PROCESSING WITH MATLAB
MATLAB is extensively used by signal processing engineers during algorithm development for graphical visualization, debugging, etc. Once the signal processing system is simulated in MATLAB and verified to be functioning correctly, real-time implementation of the DSP system is carried out on a DSP processor, with C as the predominant coding language. This section describes the following experiments that could be used as part of a DSP laboratory offered for undergraduate students.
• Discrete Time Signal Generation
• Implementation of Discrete Time Systems
• Frequency Analysis of Discrete Time Signals
• Frequency Analysis of Discrete Time Systems
• FIR Filter Design
• IIR Filter Design
14.3.1 Generation of Discrete Time Signals
Generation of Unit-Sample Sequence
• Unit Sample Sequence
  δ(n) = {1, n = 0; 0, n ≠ 0} = {..., 0, 0, 1, 0, 0, ...}, with the 1 at n = 0
• Delayed Unit Sample Sequence
  δ(n - n0) = {1, n = n0; 0, n ≠ n0}
• Unit Step Sequence
  u(n) = {1, n ≥ 0; 0, n < 0} = {..., 0, 0, 1, 1, 1, ...}, with the first 1 at n = 0
• Delayed Unit Step Sequence
  u(n - n0) = {1, n ≥ n0; 0, n < n0}
clc; clear; close all;
% Generation of Unit Sample and Delayed Unit Sample Sequences
n = 0:10;
x = [1 zeros(1,10)];
xd = [zeros(1,3) 1 zeros(1,7)];
y = ones(1,11);
yd = [zeros(1,3) ones(1,8)];
figure;
subplot(2,2,1); stem(n, x); grid on; xlabel('n'); title('Unit Sample Sequence');
subplot(2,2,3); stem(n, xd); grid on; xlabel('n'); title('Delayed Unit Sample Sequence');
subplot(2,2,2); stem(n, y); grid on; xlabel('n'); title('Unit Step Sequence');
subplot(2,2,4); stem(n, yd); grid on; xlabel('n'); title('Delayed Unit Step Sequence');
Fig. 14.4
Unit sample and delayed unit sample sequences.
Real Exponential Sequence x(n) = a^n, ∀n; a ∈ R

clc; clear; close all;
n = 0 : 40;
x1 = (0.9).^n;
x2 = (1.1).^n;
x3 = (-0.9).^n;
x4 = (-1.1).^n;
figure;
subplot(2,2,1); stem(n, x1); legend('(0.9)^n'); grid on; xlabel('n'); title('Decaying Real Exponential Sequence (Stable)');
subplot(2,2,3); stem(n, x2); legend('(1.1)^n'); grid on; xlabel('n'); title('Growing Real Exponential Sequence (Unstable)');
subplot(2,2,2); stem(n, x3); legend('(-0.9)^n'); grid on; xlabel('n'); title('Decaying Real Exponential Sequence (Stable)');
subplot(2,2,4); stem(n, x4); legend('(-1.1)^n'); grid on; xlabel('n'); title('Growing Real Exponential Sequence (Unstable)');
Fig. 14.5 Plots of real exponential sequences for different values of parameter a.

Sinusoidal Sequence x(n) = cos(ω0 n + θ), ∀n

clc; clear; close all;
fs = 10000;
t = 0 : 1/fs : 0.01;
x1 = 2*sin(2*pi*500*t);
x2 = 2*sin(2*pi*1000*t);
x3 = 2*sin(2*pi*500*t);
x4 = 2*sin(2*pi*500*t + pi/2);
figure;
subplot(2,2,1); stem(t, x1); grid on; xlabel('t'); title('Sinusoidal Sequence f=(500Hz)/(10kHz)');
subplot(2,2,3); stem(t, x2); grid on; xlabel('t'); title('Sinusoidal Sequence f=(1kHz)/(10kHz)');
subplot(2,2,2); stem(t, x3); grid on; xlabel('t'); title('Sinusoidal Sequence with phi = 0');
subplot(2,2,4); stem(t, x4); grid on; xlabel('t'); title('Sinusoidal Sequence with phi = \pi/2');
Fig. 14.6
Plots of sinusoidal sequences with different frequencies and initial phases.
Complex Exponential Sequence x(n) = e^((σ + jω0)n), ∀n
• Produces decaying/growing sinusoidal sequences
• σ < 0 generates a decaying sinusoid
• σ > 0 generates a growing sinusoid
• small ω0 generates fewer oscillations per unit time
• large ω0 generates more oscillations per unit time

clc; clear; close all;
n = 0 : 40;
x1 = exp((-1/24 + j*pi/6)*n);
x2 = exp((-1/12 + j*pi/6)*n);
x3 = exp((-1/24 + j*pi/12)*n);
x4 = exp((-1/24 + j*pi/3)*n);
figure;
subplot(2,2,1); stem(n, real(x1)); grid on; xlabel('n'); title('Complex Exponential Sequence (Slow Decay)');
subplot(2,2,3); stem(n, real(x2)); grid on; xlabel('n'); title('Complex Exponential Sequence (Fast Decay)');
subplot(2,2,2); stem(n, real(x3)); grid on; xlabel('n'); title('Complex Exponential Sequence (Low Osc.)');
subplot(2,2,4); stem(n, real(x4)); grid on; xlabel('n'); title('Complex Exponential Sequence (Fast Osc.)');
Fig. 14.7
Plots of complex exponential sequences with different decay and oscillation rates.
Random Sequence Generation
• rand(1, N) – generates a length-N random sequence whose elements are uniformly distributed over [0, 1]
• randn(1, N) – generates a length-N Gaussian random sequence with mean 0 and variance 1

clc; clear; close all;
x1 = rand(1, 100000);
figure; hist(x1, 50);
title('Uniformly Distributed Random Numbers in the range (0-1)');
a = 3; b = 7;
x2 = (b-a)*rand(1, 100000) + a;
figure; hist(x2, 50);
title('Uniformly Distributed Random Numbers in the range (a-b)');
x3 = randn(1, 100000);
figure; hist(x3, 50); axis([-24 24 0 7000]);
title('Normally Distributed Random Numbers with m=0, \sigma=1');
m = 3; sig = 5;
x4 = sig*randn(1, 100000) + m;
figure; hist(x4, 50);
title('Normally Distributed Random Numbers with m=3, \sigma=5');
Fig. 14.8 Histogram plots of uniform and normal distributed random variables (a) uniform distributed random sequence in the range (0 – 1) (b) uniform distributed random sequence in the range (3 – 7) (c) normal distributed random numbers with zero mean and unit variance (d) normal distributed random numbers with mean m = 3 and standard deviation 5.
Generation of Bit Sequence (with p0 = 0.3, p1 = 0.7)

clc; close all; clear;
N = 200000;            % Number of bits
p0 = 0.3;
x = rand(1, N);
bits = double(x > p0); % P(bit = 1) = 1 - p0 = 0.7
figure; hist(bits);
N = 200000;            % Number of bits
p = [0.2 0.3 0.5];
pn = cumsum(p);
x = rand(1, N);
y = zeros(1, length(x));
for i = 1:length(x)
    if x(i) < pn(1)
        y(i) = 0;
    elseif x(i) < pn(2)
        y(i) = 1;
    else
        y(i) = 2;
    end
end
figure; hist(y);
Fig. 14.9 Generation of discrete random variable with specified probability mass function.
14.3.2 Correlation of Sequences
Correlation is a measure of the degree to which two sequences are similar. Given two real-valued sequences x(n) and y(n) of finite energy,

Cross-correlation: r_xy(l) = Σ (n = -∞ to +∞) x(n) y(n - l)

The index l is called the shift or lag parameter.

Autocorrelation: r_xx(l) = Σ (n = -∞ to +∞) x(n) x(n - l)
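The cross-correlation sum can be evaluated directly for short finite sequences and checked against an equivalent convolution (a minimal sketch; `xcorr` in the Signal Processing Toolbox computes the same values):

```matlab
% Direct evaluation of r_xy(l) = sum_n x(n) y(n-l) for finite sequences.
x = [1 2 3 4];
y = [4 3 2 1];
N = length(x);
lags = -(N-1):(N-1);
r = zeros(size(lags));
for k = 1:length(lags)
    l = lags(k);
    for n = 1:N
        if n-l >= 1 && n-l <= N      % keep y's index in range
            r(k) = r(k) + x(n)*y(n-l);
        end
    end
end
% Equivalently, cross-correlation is convolution with a time-reversed
% second sequence; this is also what xcorr(x, y) returns.
rref = conv(x, fliplr(y));
```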
Case Studies
Autocorrelation function of a sinusoidal sequence

clc; clear; close all;
N = 1024;                % Number of samples
f1 = 1;                  % Frequency of the sinewave
FS = 200;                % Sampling frequency
n = 0:N-1;               % Sample index numbers
x = sin(2*pi*f1*n/FS);   % Generate the signal, x(n)
t = [1:N]*(1/FS);        % Prepare a time axis
figure; subplot(2,1,1);  % Prepare the figure
plot(t, x);              % Plot x(n)
title('Sinewave of frequency 1Hz [FS=200Hz]');
xlabel('Time, [s]'); ylabel('Amplitude'); grid on;
Rxx = xcorr(x);          % Estimate its autocorrelation
subplot(2,1,2);          % Prepare the figure
plot(-1023:1023, Rxx);   % Plot the autocorrelation
grid; axis([-1023 1023 -500 500]);
title('Autocorrelation function of the sinewave');
xlabel('lags'); ylabel('Autocorrelation');
barker13 = [1 1 1 1 1 -1 -1 1 1 -1 1 -1 1];
rxx = xcorr(barker13);
figure; subplot(2,1,1);  % Prepare the figure
stem(barker13);          % Plot the Barker code
title('Barker Sequence of Length=13');
xlabel('n'); ylabel('Amplitude of Subpulse'); grid on;
subplot(2,1,2);          % Prepare the figure
plot(-12:12, rxx);       % Plot the autocorrelation
grid; axis([-12 12 0 15]);
title('Autocorrelation function of the Barker Sequence');
xlabel('lags'); ylabel('Autocorrelation');
Fig. 14.10
Applications of correlation operation to (a) identify hidden periodicities and (b) to detect target echoes in a radar system.
Detecting hidden periodicities in noise

clc; clear; close all;
N = 1024;    % Number of samples to generate
f1 = 1;      % Frequency of the sinewave
FS = 200;    % Sampling frequency
n = 0:N-1;   % Sampling index
x = sin(2*pi*f1*n/FS);   % Generate x(n)
y = x + 10*randn(1,N);   % Generate y(n)
figure;
subplot(3,1,1); plot(x); grid on; title('Pure Sinewave');
subplot(3,1,2); plot(y); grid on; title('y(n), Pure Sinewave + Noise');
Rxy = xcorr(x, y);       % Estimate the cross-correlation
subplot(3,1,3); plot(Rxy); title('Cross-correlation Rxy'); grid on;
Fig. 14.11
Application of correlation operation to identify hidden periodicities in a noisy signal.
Time Delay Estimation in Radar using Cross-correlation

clc; clear; close all;
n = 0:99;
tx = [ones(1,13) zeros(1,87)];
rx1 = 0.4*[zeros(1,20) ones(1,13) zeros(1,67)];
rx2 = 0.4*[zeros(1,20) ones(1,13) zeros(1,67)] + 0.4*(rand(1,100)-0.5);
rxy1 = xcorr(rx1, tx);
rxy2 = xcorr(rx2, tx);
figure;
subplot(5,1,1); stem(n, tx); grid on; title('Transmitted Sequence');
subplot(5,1,2); stem(n, rx1); grid on; title('Received Sequence without Noise');
subplot(5,1,3); stem(rxy1(100:end)); grid on; title('Output of Matched Filter');
subplot(5,1,4); stem(n, rx2); grid on; title('Received Sequence with Noise');
subplot(5,1,5); stem(rxy2(100:end)); grid on; title('Output of Matched Filter');
n = 0:99;
barker13 = [1 1 1 1 1 -1 -1 1 1 -1 1 -1 1];
tx = [barker13 zeros(1,87)];
rx1 = 0.4*[zeros(1,20) barker13 zeros(1,67)];
rx2 = 0.4*[zeros(1,20) barker13 zeros(1,67)] + 0.4*(rand(1,100)-0.5);
rxy1 = xcorr(rx1, tx);
rxy2 = xcorr(rx2, tx);
figure;
subplot(5,1,1); stem(n, tx); grid on; title('Transmitted Sequence (Barker)');
subplot(5,1,2); stem(n, rx1); grid on; title('Received Sequence without Noise');
subplot(5,1,3); stem(rxy1(100:end)); grid on; title('Output of Matched Filter');
subplot(5,1,4); stem(n, rx2); grid on; title('Received Sequence with Noise');
subplot(5,1,5); stem(rxy2(100:end)); grid on; title('Output of Matched Filter');
Fig. 14.12
Time delay estimation of a target using (a) unmodulated pulse and (b) Barker-13 phase modulated pulse with better autocorrelation properties.
14.3.3 Implementation of Discrete Time Systems
• The output of an LTI system is calculated as the linear convolution of the input sequence x(n) and the unit sample response of the system h(n):

  y(n) = LTI[x(n)] = Σ (k = -∞ to +∞) x(k) h(n - k) = x(n) * h(n)

• An LTI system is completely characterized in the time domain by its impulse response h(n)
• MATLAB functions for linear filtering:
  • conv – computes the linear convolution of two sequences
  • filter – computes the output of a linear time-invariant system characterized by the difference equation
    y(n) = b0 x(n) + b1 x(n-1) + ... + bM x(n-M) - a1 y(n-1) - a2 y(n-2) - ... - aN y(n-N)
Linear Filtering using Convolution

% Linear Filtering using Convolution
clc; clear; close all;
nx = 8;
x = [1 1 1 1 1 1 1 1];
ny = 16;
h = 0.7.^(0:15);
n = nx + ny - 1;
y = conv(x, h);
figure;
subplot(3,1,1); stem(x); grid on; title('Input Sequence x'); axis([0 n 0 2]);
subplot(3,1,2); stem(h); grid on; title('Unit Sample Response h'); axis([0 n 0 1]);
subplot(3,1,3); stem(y); grid on; title('Output Sequence y'); axis([0 n 0 5]);
Fig. 14.13 Illustration of filtering operation using LTI system characterized by a unit sample response h (n).
Linear Filtering using evaluation of difference equation
• FIR filtering involves computing the output sample using only present and previous input samples.
  Example: y(n) = 0.5 x(n) + 0.27 x(n-1) + 0.77 x(n-2)
• IIR filtering involves computing the output sample using the present input sample, previous input samples and also previous output samples.
  Example: y(n) = 0.45 x(n) + 0.5 x(n-1) + 0.45 x(n-2) + 0.53 y(n-1) - 0.46 y(n-2)
• Input signal: x(n) = cos(20πn/256) + cos(200πn/256), with 0 ≤ n ≤ 299
% Linear Filtering using difference equation (filter function)
clc; clear; close all;
n = 0 : 299;
x = cos(2*pi*(10/256)*n) + cos(2*pi*(100/256)*n);
num1 = [0.5 0.27 0.77]; den1 = 1;
num2 = [0.45 0.5 0.45]; den2 = [1 -0.53 0.46];
y1 = filter(num1, den1, x);
y2 = filter(num2, den2, x);
figure;
subplot(2,1,1); stem(impz(num1, den1)); grid on; axis([0 25 0 1]); title('Unit Sample Response of FIR Filter');
subplot(2,1,2); stem(impz(num2, den2)); grid on; axis([0 25 -0.5 1]); title('Unit Sample Response of IIR Filter');
figure;
subplot(3,1,1); plot(n, x); grid on; title('Input Signal (Low Frequency Signal + Noise)');
subplot(3,1,2); plot(n, y1); grid on; title('Output of the FIR Filter');
subplot(3,1,3); plot(n, y2); grid on; title('Output of the IIR Filter');
Fig. 14.14
Illustration of filtering operation using difference equation.
Fig. 14.15
Comparison between FIR and IIR filters.
% Plotting the impulse response and determination of stability
% Linear Filtering using difference equation (filter function)
clc; clear; close all;
n = 0 : 40;
x = [1 zeros(1, 40)];
num1 = [0.45 0.5 0.45]; den1 = [1 -0.53 0.46];
num2 = [1 -4 3]; den2 = [1 -1.7 1];
y1 = filter(num1, den1, x);
y2 = filter(num2, den2, x);
figure;
subplot(2,1,1); stem(n, y1);
subplot(2,1,2); stem(n, y2);
14.3.4
Frequency Analysis of Discrete-Time Signals
The convolution representation is based on the fact that any signal can be represented by a linear combination of scaled and delayed unit samples. We can also represent any arbitrary discrete signal as a linear combination of basis signals. Each basis signal set provides a new signal representation. Each representation has some advantages and disadvantages depending upon the type of system under consideration.
Fig. 14.16
Unit sample response of (a) stable and (b) marginally stable systems.
When the system is linear and time-invariant, only one representation stands out as the most useful. It is based on the complex exponential signal set {e^(jωn)} and is called the discrete-time Fourier transform (DTFT).
MATLAB Functions – freqz, fft, spectrum, psd

DTFT: X(e^(jω)) = F[x(n)] = Σ (n = -∞ to +∞) x(n) e^(-jωn)

Existence condition: Σ (n = -∞ to +∞) |x(n)| < ∞

IDTFT: x(n) = F⁻¹[X(e^(jω))] = (1/2π) ∫ (from -π to +π) X(e^(jω)) e^(jωn) dω

Example – Spectrum of a Sinusoidal Sequence (Fs = 10 kHz)
t = 0 : 1/fs : 0.05; y(t) = 3*cos(2*pi*500*t) + cos(2*pi*3000*t)

clear; close all; clc;
fs = 10000;
t = 0 : 1/fs : 0.05;
y = 3*cos(2*pi*500*t) + cos(2*pi*3000*t);
w = -pi : pi/100 : pi;
yk = freqz(y, 1, w);
figure;
subplot(2,2,1); plot(t(1:100), y(1:100));
grid on; xlabel('time (in secs)'); ylabel('Amplitude');
title('Sinusoidal Signal with 500Hz and 3000Hz Frequency Components');
subplot(2,2,3); plot((w/(2*pi))*fs, abs(yk)); grid on;
xlabel('Continuous Frequency (F)'); ylabel('Magnitude X(\omega)');
title('Spectrum of Continuous Time Signal x(t)');
subplot(2,2,2); plot(w/(2*pi), abs(yk)); grid on;
xlabel('Normalized Discrete Frequency (f)'); ylabel('Magnitude X(f)');
title('Spectrum of Discrete Time Sequence x(n)');
subplot(2,2,4); plot(w, abs(yk)); grid on;
xlabel('Normalized Angular Frequency (w)'); ylabel('Magnitude X(w)');
title('Spectrum of Discrete Time Sequence x(n)');
Fig. 14.17 Spectral analysis of (a) sinusoidal signals and their spectra plotted against (b) continuous frequency F (c) normalized discrete frequency f and (d) normalized angular frequency w.
Spectral Resolution
Spectral resolution refers to the smallest frequency difference between spectral components that can still be resolved as separate components. It depends on the number of samples: the larger the number of samples, the finer the frequency resolution. The number of samples N (used for estimating the spectrum) required to resolve two normalized frequencies f1 and f2 as separate components is
N > 1/(f2 - f1)
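The resolution rule can be exercised numerically. The following Python/NumPy sketch (ours, not one of the book's MATLAB listings; the peak-picking helper is an illustrative construction, not a library function) estimates the two strongest spectral peaks for tones at normalized frequencies 0.20 and 0.21:

```python
import numpy as np

f1, f2 = 0.20, 0.21                   # normalized frequencies of the two tones
print(1.0 / (f2 - f1))                # rule of thumb: need N > 100 samples

def top_two_peak_freqs(N, nfft=8192):
    n = np.arange(N)
    x = np.cos(2 * np.pi * f1 * n) + np.cos(2 * np.pi * f2 * n)
    X = np.abs(np.fft.rfft(x, nfft))          # zero-padded magnitude spectrum
    # locate local maxima of the magnitude spectrum, keep the two strongest
    locs = np.where((X[1:-1] > X[:-2]) & (X[1:-1] > X[2:]))[0] + 1
    top = locs[np.argsort(X[locs])[-2:]]
    return np.sort(top) / nfft                # bin index -> normalized frequency

print(top_two_peak_freqs(50))     # tones typically merge into one broad peak
print(top_two_peak_freqs(200))    # tones resolved: peaks near 0.20 and 0.21
```

With 50 samples the two tones fall inside one analysis mainlobe; with 200 samples (comfortably above the N > 100 bound) two distinct peaks appear at the tone frequencies.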
Example
Suppose x(t) = 4*cos(2*pi*100*t) + 2*cos(2*pi*200*t) + 3*cos(2*pi*210*t) is sampled at a sampling frequency of Fs = 1 kHz.
Analog signal frequencies: F1 = 100 Hz, F2 = 200 Hz, F3 = 210 Hz
Discrete signal frequencies: f1 = F1/Fs = 0.1, f2 = F2/Fs = 0.2, f3 = F3/Fs = 0.21
Hence, the number of samples required to resolve 200 Hz and 210 Hz as separate frequencies is N > 1/(0.21 - 0.20) = 100.
clear; close all; clc;
fs = 1000;
t1 = 0: 1/fs: 0.05;
y1 = 4*cos(2*pi*100*t1) + 2*cos(2*pi*200*t1) + 3*cos(2*pi*210*t1);
t2 = 0: 1/fs: 0.1;
y2 = 4*cos(2*pi*100*t2) + 2*cos(2*pi*200*t2) + 3*cos(2*pi*210*t2);
w = -pi: pi/100: pi;
yk1 = freqz(y1, 1, w);
yk2 = freqz(y2, 1, w);
figure;
subplot(2,2,1); plot(t1, y1); grid on;
xlabel('time (in secs)'); ylabel('Amplitude');
title('Sinusoidal Signal with 100Hz, 200Hz and 210Hz Frequency Components');
subplot(2,2,2); plot(w/(2*pi), abs(yk1)); grid on;
xlabel('Normalized Discrete Frequency (f)'); ylabel('Magnitude X(f)');
title('Spectrum of x(n) with 50 samples');
subplot(2,2,3); plot(t2, y2); grid on;
xlabel('time (in secs)'); ylabel('Amplitude');
title('Sinusoidal Signal with 100Hz, 200Hz and 210Hz Frequency Components');
subplot(2,2,4); plot(w/(2*pi), abs(yk2)); grid on;
xlabel('Normalized Discrete Frequency (f)'); ylabel('Magnitude X(f)');
title('Spectrum of x(n) with 100 samples');
Fig. 14.18 Peaks of the spectrum of the sinusoidal signal with 200 Hz and 210 Hz components could not be resolved in (b) because the spectrum was estimated from a limited number of samples. However, the 200 Hz and 210 Hz frequency components are distinctly resolved with 100 samples.
Spectral Analysis Using Windows
Calculating the spectrum from a finite number of samples results in the actual spectrum being convolved with the spectrum of the window function, which produces high sidelobes in the estimated spectrum that can mask weak frequency components. In practice, a windowed sequence is used to calculate the spectrum, yielding low sidelobes (at the cost of decreased resolution).
clear; close all; clc;
f1 = 30;            % Signal frequency
fs = 128;           % Sampling frequency
N = 256;            % Number of samples
N1 = 1024;          % Number of FFT points
n = 0:N-1;          % Index n
f = (0:N1-1)*fs/N1; % Defining the frequency points (axis)
x = cos(2*pi*f1*n/fs);  % Generate the signal
XR = abs(fft(x, N1));   % Magnitude of the FFT with no windowing
                        % (i.e., rectangular window)
xh = hamming(N);        % Hamming window samples
xw = x .* xh';          % Window the signal
XH = abs(fft(xw, N1));  % Magnitude of the FFT of the windowed signal
figure;
subplot(2,2,1); plot(n, x); grid on; xlabel('n'); title('Original Sequence');
subplot(2,2,3); plot(n, xw); grid on; xlabel('n'); title('Windowed Sequence');
subplot(2,2,2);
plot(f(1:N1/2), 20*log10(XR(1:N1/2)/max(XR)));
title('Spectrum of x(n) using Rectangular Window'); grid;
axis([0 fs/2 -100 0]);
xlabel('Frequency, Hz'); ylabel('Normalised Magnitude, [dB]');
subplot(2,2,4);
plot(f(1:N1/2), 20*log10(XH(1:N1/2)/max(XH)));
title('Spectrum of x(n) using Hamming Window'); grid;
axis([0 fs/2 -100 0]);
xlabel('Frequency, Hz'); ylabel('Normalised Magnitude, [dB]');
Fig. 14.19 Time-domain weighting to reduce spectral sidelobes. This allows weak spectral components to be identified at the cost of decreased spectral resolution.
14.3.5 Frequency Analysis of Discrete-Time Systems
Frequency-domain analysis provides more insight into the behaviour of a system. Typical analyses include
• Pole-Zero Analysis
  • For a stable system, all the poles of the system function should lie within the unit circle
  • MATLAB function – zplane(num, den)
• Frequency Response Analysis
  • The gain and phase shift provided by the discrete-time system for each frequency component
  • MATLAB function – freqz(num, den)
• Group Delay Analysis
  • Group delay refers to the delay suffered by the envelope of each input frequency component
  • MATLAB function – grpdelay(num, den)
Pole-Zero Analysis
clear; close all; clc;
num1 = 1; den1 = [1 -1.1];
num2 = 1; den2 = [1 -1];
num3 = 1; den3 = [1 -0.9];
figure;
subplot(3,2,1); zplane(num1, den1); grid on;
subplot(3,2,2); stem(impz(num1, den1, 150)); grid on;
title('Unit Sample Response of H(z) = z/(z-1.1)');
subplot(3,2,3); zplane(num2, den2); grid on;
subplot(3,2,4); stem(impz(num2, den2, 150)); grid on;
title('Unit Sample Response of H(z) = z/(z-1.0)');
subplot(3,2,5); zplane(num3, den3); grid on;
subplot(3,2,6); stem(impz(num3, den3, 150)); grid on;
title('Unit Sample Response of H(z) = z/(z-0.9)');
Fig. 14.20 Pole-zero plots of (a) unstable (b) marginally stable and (c) stable systems, together with their corresponding unit sample responses.
Frequency Response and Group Delay Analysis – Example
• H(z) = (1 + z^-1 + z^-2)/3
• y(n) = (x(n) + x(n-1) + x(n-2))/3
• h(n) = {1/3, 1/3, 1/3}
clc; clear; close all;
h = [1/3 1/3 1/3];
w = 0: pi/100: pi;
hk = freqz(h, 1, w);
figure; freqz(h, 1);
figure; zplane(h, 1);
figure; grpdelay(h, 1);
Fig. 14.21 (b) Frequency response (c) pole-zero plot and (d) group delay plot of a linear-phase FIR system.
Frequency Analysis of Discrete-Time Systems – Illustration
• H(z) = (1 + z^-1 + z^-2)/3
• y(n) = (x(n) + x(n-1) + x(n-2))/3
• h(n) = {1/3, 1/3, 1/3}
• We notice that this filter provides a null at the normalized frequency w = 2*pi/3.
• Suppose the sampling frequency of the system is 180 Hz. Then the highest frequency component in the input signal that can be processed is 90 Hz.
• When a signal sampled at 180 Hz is processed by this filter, the filter removes the 60 Hz frequency component and passes the remaining frequencies.
• When a sampling frequency of 360 Hz is used, the same filter provides a null at 120 Hz.
clc; clear; close all;
t = 0: 1/180: 1;
s = 2*cos(2*pi*10*t);
n = cos(2*pi*60*t);
x = s + n;
y = filter([1/3 1/3 1/3], 1, x);
figure;
subplot(3,1,1); plot(t, s); grid on; xlabel('time'); title('Original Signal (10Hz)');
subplot(3,1,2); plot(t, x); grid on; xlabel('time');
title('Noisy Signal (10Hz + 60Hz)');
subplot(3,1,3); plot(t, y); grid on; xlabel('time');
title('Filtered Signal (60Hz component removed)');
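The claimed null at w = 2*pi/3 (60 Hz when fs = 180 Hz) can be verified by evaluating H(e^jw) of the 3-point averager directly. A minimal Python/NumPy check (ours, not part of the book's listings):

```python
import numpy as np

h = np.array([1/3, 1/3, 1/3])          # 3-point moving-average filter

def H(w):
    # frequency response: H(e^jw) = sum_n h[n] e^{-jwn}
    n = np.arange(len(h))
    return np.sum(h * np.exp(-1j * w * n))

print(abs(H(2 * np.pi / 3)))           # ~0: the filter's null
# with fs = 180 Hz, the null in hertz is (2*pi/3)/(2*pi) * 180
print((2 * np.pi / 3) / (2 * np.pi) * 180)   # ~60 Hz
print(abs(H(2 * np.pi * 10 / 180)))    # the 10 Hz component passes nearly unchanged
```

At w = 2*pi/3 the three unit phasors 1, e^{-j2pi/3}, e^{-j4pi/3} sum to zero, which is why the averager acts as a 60 Hz notch at this sampling rate.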
Fig. 14.22 Time-domain plots of (a) the original 10 Hz sinusoidal signal (b) the noisy signal corrupted by 60 Hz power-line interference and (c) the notch-filtered signal, which shows a group delay of 1 sample time, i.e., 1/180 sec.
14.3.6 FIR Filter Design
• Output sample depends on present and previous input samples
• Non-recursive
• Always stable (no feedback path)
• Used when linear-phase characteristics (i.e., constant group delay) are required
• MATLAB functions – fir1, fir2, remez
Conceptually, the simplest approach to FIR filter design is to truncate to a finite number of terms the doubly infinite-length impulse response obtained by computing the inverse discrete-time Fourier transform of the desired ideal frequency response. However, simple truncation produces an oscillatory behaviour in the magnitude response of the FIR filter, commonly referred to as the Gibbs phenomenon. The Gibbs phenomenon can be reduced by windowing the impulse response coefficients with an appropriate finite-length window function. The functions fir1 and fir2 can be employed to design windowed FIR digital filters in MATLAB. Both functions yield a linear-phase design.
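The effect of truncation versus windowing described above can be sketched numerically. In this Python/NumPy illustration (ours; the cutoff 0.3*pi, the 41-tap length, and the stopband measurement region are arbitrary illustrative choices), an ideal low-pass impulse response is truncated with and without a Hamming window and the peak stopband level is compared:

```python
import numpy as np

# Truncate the doubly infinite ideal lowpass impulse response to M+1 = 41 taps
M = 40
wc = 0.3 * np.pi                                  # illustrative cutoff
n = np.arange(M + 1) - M / 2                      # centre the ideal response
h_ideal = (wc / np.pi) * np.sinc(wc * n / np.pi)  # sin(wc*n)/(pi*n), safe at n = 0

hamming = 0.54 - 0.46 * np.cos(2 * np.pi * np.arange(M + 1) / M)
h_rect = h_ideal                                  # plain truncation -> Gibbs ripple
h_hamm = h_ideal * hamming                        # windowed coefficients

def peak_stopband_db(h, w_start=0.5 * np.pi):
    # peak response magnitude (dB, relative to ~unit passband gain)
    # over the deep-stopband region [w_start, pi]
    w = np.linspace(w_start, np.pi, 1024)
    k = np.arange(len(h))
    Hmag = np.abs(np.exp(-1j * np.outer(w, k)) @ h)
    return 20 * np.log10(Hmag.max())

print(peak_stopband_db(h_rect))   # Gibbs-limited: only a few tens of dB down
print(peak_stopband_db(h_hamm))   # windowing pushes sidelobes far lower
```

The rectangular (plain-truncation) design is limited by the Gibbs sidelobes, while the Hamming-windowed coefficients trade a wider transition band for much deeper stopband attenuation, which is the trade-off fir1 exploits.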
The function fir1 can be used to design conventional low-pass, high-pass, bandpass, and bandstop linear-phase FIR filters. The command b = fir1(N, Wn) returns in vector b the impulse response coefficients, arranged in ascending powers of z^-1, of a low-pass or a bandpass filter of order N for an assumed sampling frequency of 2 Hz.
Example 1 – Low-Pass Filter Design
Design a linear-phase low-pass FIR filter with the following specifications: passband edge = 2 kHz, stopband edge = 2.5 kHz, passband ripple dp = 0.005, stopband ripple ds = 0.005, and sampling rate of 10 kHz.
fsamp = 10000;
fcuts = [2000 2500];
mags = [1 0];
devs = [0.005 0.005];
[n, Wn, beta, ftype] = kaiserord(fcuts, mags, devs, fsamp);
hh = fir1(n, Wn, ftype, kaiser(n+1, beta), 'noscale');
freqz(hh)
Fig. 14.23 Magnitude and phase response of the designed linear-phase FIR filter.
Example 2 – Bandstop Filter Design
Design a linear-phase bandstop FIR filter with the following specifications: passband edges = 1.2 and 4.2 kHz, stopband edges = 1.8 and 3.6 kHz, passband ripple dp = 0.1, stopband ripple ds = 0.02, and sampling rate of 12 kHz.
fsamp = 12000;
fcuts = [1200 1800 3600 4200];
mags = [1 0 1];
devs = [0.1 0.02 0.1];
[n, Wn, beta, ftype] = kaiserord(fcuts, mags, devs, fsamp);
n = n + rem(n, 2);
hh = fir1(n, Wn, ftype, kaiser(n+1, beta), 'noscale');
[H, f] = freqz(hh, 1, 1024, fsamp);
plot(f, abs(H)), grid on
xlabel('Frequency (Hz)'); ylabel('Gain of the Filter')
title('Frequency Response of the Bandstop Filter');
Fig. 14.24
Magnitude response of the bandstop filter.
14.3.7 IIR Filter Design
• Output sample depends on present and previous input samples and on previous output samples
• Recursive
• Can be unstable – stability must be checked
• Used when a low-order filter is required
• MATLAB functions – butter, cheby1, cheby2, ellip
Low-Pass Butterworth IIR Digital Filter Design
Design a digital IIR low-pass filter with the following specifications: sampling rate of 40 kHz, passband edge frequency of 4 kHz, stopband edge frequency of 8 kHz, passband ripple of 0.5 dB, and a minimum stopband attenuation of 40 dB. Comment on your results.
clc; clear; close all;
fp = 4000; fs = 8000; fsamp = 40000;
rp = 0.5; rs = 40;
wp = fp/(fsamp/2); ws = fs/(fsamp/2);
[N wn] = buttord(wp, ws, rp, rs);
[num den] = butter(N, wn);
Comments
• We observe a monotonic response in both the passband and the stopband of the frequency response.
• The phase response of the filter is nonlinear in the passband.
• The filter order and cut-off frequency returned are
>> N
N = 8
>> wn
wn = 0.2469
figure; freqz(num, den);
% Illustration
t = 0: 1/fsamp: 0.01;   % Sampling time points
s = 2*sin(2*pi*500*t);  % Desired Signal
n = cos(2*pi*10000*t);  % Noise Signal
x = s + n;              % Composite Signal
y = filter(num, den, x);% Filtered Signal
figure;
subplot(3,1,1); plot(t, s); grid; title('Desired Signal');
subplot(3,1,2); plot(t, x); grid; title('Noisy Signal');
subplot(3,1,3); plot(t, y); grid; title('Filtered Signal'); xlabel('Time (in sec)');
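The reported order N = 8 can be reproduced without MATLAB: buttord applies the standard Butterworth order formula after bilinearly prewarping the digital band edges. This Python/NumPy sketch (ours, not one of the book's listings) carries out the same computation:

```python
import numpy as np

# Normalized (fs/2 = 1) edges for the spec above: 4 kHz and 8 kHz at 40 kHz
wp, ws = 0.2, 0.4
rp, rs = 0.5, 40.0     # passband ripple and stopband attenuation in dB

# Bilinear prewarping of the digital band edges to analog frequencies
Wp = np.tan(np.pi * wp / 2)
Ws = np.tan(np.pi * ws / 2)

# Standard Butterworth order formula
N = int(np.ceil(np.log10((10**(0.1 * rs) - 1) / (10**(0.1 * rp) - 1))
                / (2 * np.log10(Ws / Wp))))
print(N)   # 8, matching the buttord output quoted above
```

Without prewarping (using 0.2 and 0.4 directly) the formula would give order 9, so the tan(pi*w/2) step matters for digital designs.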
High-Pass Chebyshev-I IIR Digital Filter Design
Design a digital IIR high-pass filter with the following specifications: sampling rate of 3,500 Hz, passband edge frequency of 1,050 Hz, stopband edge frequency of 600 Hz, passband ripple of 1 dB, and a minimum stopband attenuation of 50 dB. Comment on your results.
Fig. 14.25 (a) Magnitude and phase response of the designed low-pass IIR digital filter (b) illustration of filtering operation using a combined (desired low frequency + undesired high frequency) signal as input.
% Chebyshev-I Highpass IIR filter design
clc; clear; close all;
fp = 1050; fs = 600; fsamp = 3500;
rp = 1; rs = 50;
wp = fp/(fsamp/2); ws = fs/(fsamp/2);
[N wn] = cheb1ord(wp, ws, rp, rs);
[num den] = cheby1(N, rp, wn, 'high');
figure; freqz(num, den);
% Illustration
t = 0: 1/fsamp: 0.05;   % Sampling time points
s = 2*sin(2*pi*1200*t); % Desired Signal
n = cos(2*pi*100*t);    % Noise Signal
x = s + n;              % Composite Signal
y = filter(num, den, x);% Filtered Signal
figure;
subplot(3,1,1); plot(t, s); grid; title('Desired Signal');
subplot(3,1,2); plot(t, x); grid; title('Noisy Signal');
subplot(3,1,3); plot(t, y); grid; title('Filtered Signal'); xlabel('Time (in sec)');
Bandpass Chebyshev-II IIR Digital Filter Design Design a digital IIR bandpass filter with the following specifications: sampling rate of 7 kHz, passband edge frequencies at 1.4 kHz and 2.1 kHz, stopband edge frequencies at 1.05 kHz and 2.45 kHz, passband ripple of 0.4 dB, and a minimum stopband attenuation of 50 dB. Comment on your results.
Fig. 14.26 (a) Magnitude and phase response plot of the Chebyshev-I IIR digital high-pass filter with N = 5, wn = 0.6 (b) illustration of filtering operation using a combined (undesired low frequency + desired high frequency) signal as input.
% Chebyshev-II Bandpass IIR filter design
clc; clear; close all;
fp = [1400 2100]; fs = [1050 2450]; fsamp = 7000;
rp = 0.4; rs = 50;      % 0.4 dB passband ripple, as specified
wp = fp/(fsamp/2); ws = fs/(fsamp/2);
[N wn] = cheb2ord(wp, ws, rp, rs);
[num den] = cheby2(N, rs, wn);
figure; freqz(num, den);
% Illustration
t = 0: 1/fsamp: 0.05;   % Sampling time points
s = 2*sin(2*pi*1500*t); % Desired Signal
n = cos(2*pi*3000*t);   % Noise Signal
x = s + n;              % Composite Signal
y = filter(num, den, x);% Filtered Signal
figure;
subplot(3,1,1); plot(t, s); grid; title('Desired Signal');
subplot(3,1,2); plot(t, x); grid; title('Noisy Signal');
subplot(3,1,3); plot(t, y); grid; title('Filtered Signal'); xlabel('Time (in sec)');
Bandstop Elliptic IIR Digital Filter Design Design a digital IIR bandstop filter with the following specifications: sampling rate of 12 kHz, passband edge frequencies at 2.1 kHz and 4.5 kHz, stopband edge frequencies at 2.7 kHz and 3.9 kHz, passband ripple of 0.6 dB, and a minimum stopband attenuation of 45 dB. Comment on your results.
Fig. 14.27 (a) Magnitude and phase response plot of the Chebyshev-II IIR digital bandpass filter (b) illustration of filtering operation using a combined (undesired high frequency + desired midband frequency) signal as input.
% Elliptic Bandstop IIR filter design
clc; clear; close all;
fp = [2100 4500]; fs = [2700 3900]; fsamp = 12000;
rp = 0.6; rs = 45;
wp = fp/(fsamp/2); ws = fs/(fsamp/2);
[N wn] = ellipord(wp, ws, rp, rs);
[num den] = ellip(N, rp, rs, wn, 'stop');
figure; freqz(num, den);
subplot(2,1,1); axis([0 1 -100 10]);
% Illustration
t = 0: 1/fsamp: 0.03;   % Sampling time points
s = 2*sin(2*pi*500*t);  % Desired Signal
n = cos(2*pi*3000*t);   % Noise Signal
x = s + n;              % Composite Signal
y = filter(num, den, x);% Filtered Signal
figure;
subplot(3,1,1); plot(t, s); grid; title('Desired Signal');
subplot(3,1,2); plot(t, x); grid; title('Noisy Signal');
subplot(3,1,3); plot(t, y); grid; title('Filtered Signal'); xlabel('Time (in sec)');
14.3.8 Frequency Analysis of Discrete-Time Signals
This section analyses the frequency response of fundamental discrete-time signals: (a) the unit sample sequence, (b) the unit pulse sequence, and (c) the sinusoidal sequence.
(i) Unit Sample Sequence:
w = -pi: .01: pi;
% unit sample
figure(1);
x1 = [1 zeros(1,9)];
h = freqz(x1, 1, w);
subplot(2,1,1);
Fig. 14.28 (a) Magnitude and phase response plot of the elliptic IIR digital band stop filter (b) illustration of filtering operation using a combined (desired low frequency + undesired high frequency) signal as input.
plot(w, abs(h));
subplot(2,1,2);
plot(w, angle(h));
% delayed unit sample
figure(2);
x2 = [zeros(1,3) 1 zeros(1,3)];
h = freqz(x2, 1, w);
subplot(2,1,1);
plot(w, abs(h));
subplot(2,1,2);
plot(w, angle(h));
Fig. 14.29 Magnitude and phase response plot of (a) unit sample sequence and (b) delayed unit sample sequence.
(ii) Unit Pulse Sequence:
clc; clear; close all;
w = -pi: .01: pi;
% pulse
x1 = ones(1,10);
freqz(x1, 1, w);
axis([-1 1 -40 50])
figure;
% delayed pulse
x2 = [zeros(1,4) ones(1,10)];
freqz(x2, 1, w);
axis([-1 1 -40 50])
figure;
% increased pulse
x3 = ones(1,20);
freqz(x3, 1, w);
axis([-1 1 -40 50])
Fig. 14.30 Magnitude and phase response plot of (a) long pulse (b) delayed long pulse and (c) pulse of doubled length (whose spectrum is correspondingly compressed).
(iii) Sinusoidal Sequence:
clc; clear; close all;
w = -pi: pi/255: pi;
t = 0: 1/1000: 3;
x1 = 5*cos(2*100*pi*t);
figure(1);
freqz(x1, 1, w);
x2 = 5*cos(2*300*pi*t);
figure(2);
freqz(x2, 1, w);
x3 = 5*cos(2*100*pi*t + pi/3);
figure(3);
freqz(x3, 1, w);
Fig. 14.31 Frequency response of a sinusoidal sequence with (a) frequency f = 100 Hz (b) f = 300 Hz and (c) f = 100 Hz with an initial phase of π/3, sampled at fs = 1 kHz.
14.3.9 Notch Filter Design
Design a filter that eliminates the 60 Hz component from a given input. Extend the design to eliminate both the 60 Hz and 30 Hz components.
clc; clear; close all;
fs = 180;
t = 0: 1/fs: .5;
figure;
freqz([1/3 1/3 1/3], 1);
x = 4*cos(2*pi*30*t) + 3*cos(2*pi*20*t);
figure;
subplot(3,1,1); plot(t, x); grid;
x1 = x + 2*cos(2*pi*60*t);
subplot(3,1,2);
plot(t, x1);
subplot(3,1,3);
y = filter([1/3 1/3 1/3], 1, x1);
plot(t, y); grid;
figure;
freqz([1/6 1/6 1/6 1/6 1/6 1/6], 1);
figure;
subplot(3,1,1); plot(t, 3*cos(2*pi*20*t)); grid;
subplot(3,1,2); plot(t, x1);
subplot(3,1,3);
y = filter([1/6 1/6 1/6 1/6 1/6 1/6], 1, x1);
plot(t, y); grid;
Fig. 14.32 (a) Frequency response of a 60 Hz notch filter and (b) time-domain illustration of the notch filter operation.
Fig. 14.33 (a) Frequency response of a 30 Hz and 60 Hz notch filter and (b) time-domain illustration of the notch filter operation.
14.3.10 Fast Fourier Transform and its Properties
Write the functions for circular shift and circular convolution using MATLAB.
Programs:
(a) y = circshift(x, m)
function y = circshift1(x, m)
n = length(x);
if m > length(x)
    m = rem(m, length(x));
elseif m
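For reference, the circular-shift semantics being implemented in circshift1 above can be sketched outside MATLAB. A minimal Python/NumPy version (ours, not part of the book's listings; np.roll serves as an independent cross-check):

```python
import numpy as np

def circshift1(x, m):
    # Circular shift by m positions: y[n] = x[(n - m) mod N].
    # The modulo handles both m > N and negative m in one step.
    x = np.asarray(x)
    n = len(x)
    m = m % n
    return np.concatenate((x[n - m:], x[:n - m]))

x = np.array([1, 2, 3, 4, 5])
print(circshift1(x, 2))    # [4 5 1 2 3]
print(circshift1(x, -1))   # [2 3 4 5 1]
print(np.array_equal(circshift1(x, 7), np.roll(x, 7)))   # True
```

The branch structure of the MATLAB function (reducing m modulo the length, then handling negative shifts) collapses into the single `m % n` step here, since Python's modulo already returns a non-negative remainder.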