Signals and Systems: Fundamentals
ISBN 9783110379549, 9783110378115
English, 282 pages, 2015
Table of contents :
Contents
Preface
1 Introduction
1.1 Overview of signals and systems
1.1.1 What is a signal?
1.1.2 What is a system?
1.2 Description and classification of signals
1.2.1 Continuous-time signals and discrete-time signals
1.2.2 Energy signals and power signals
1.2.3 Periodic signals and nonperiodic signals
1.2.4 Deterministic signals and random signals
1.2.5 Elementary signals
1.3 Description of systems
1.3.1 Elementary systems
1.3.2 System modelling
1.4 Properties of systems
1.4.1 Memoryless and with memory
1.4.2 Causality
1.4.3 Invertibility
1.4.4 Stability
1.4.5 Time-invariance
1.4.6 Linearity
1.5 Summary
1.6 Problems
2 Time-domain analysis of LTI systems
2.1 Introduction
2.2 The unit impulse response and convolutions
2.2.1 The convolution sum
2.2.2 The convolution integral
2.3 Properties of convolutions and equivalent systems
2.4 Causality and stability of LTI systems
2.5 Systems constrained with LCCDEs
2.5.1 Continuous-time systems constrained with LCCDEs
2.5.2 Discrete-time systems characterized by LCCDEs
2.6 Summary
2.7 Problems
3 Fourier analysis of signals
3.1 Introduction
3.2 Fourier series for continuous-time periodic signals
3.3 Fourier series for discrete-time periodic signals
3.4 Why should a signal be transformed?
3.5 Fourier transform for continuous-time signals
3.5.1 Properties of Fourier transform
3.5.2 Inverse Fourier transform
3.6 The discrete-time Fourier transform
3.6.1 Properties of DTFT
3.6.2 Inverse DTFT
3.7 Fourier series and Fourier transforms
3.8 Summary
3.9 Problems
4 Frequency-domain approach to LTI systems
4.1 Introduction
4.2 Frequency response of LTI systems
4.3 Bode plots for continuous-time LTI systems
4.4 Frequency response of LTIs described with LCCDEs
4.5 Frequency domain approach to system outputs
4.6 Some typical LTI systems
4.6.1 All-pass systems
4.6.2 Linear phase response systems
4.6.3 Ideal filters
4.6.4 Ideal transmission channels
4.7 Summary
4.8 Problems
5 Discrete processing of analog signals
5.1 Introduction
5.2 Sampling of a continuous-time signal
5.3 Spectral relationship and sampling theorem
5.4 Reconstruction of continuous-time signals
5.5 Hybrid systems for discrete processing
5.6 Discrete Fourier transform
5.7 Compressed sensing
5.8 Summary
5.9 Problems
6 Transform-domain approaches
6.1 Motivation
6.2 The Laplace transform
6.2.1 Derivation of the transform
6.2.2 Region of convergence
6.2.3 Inverse Laplace transform
6.2.4 Properties of Laplace transform
6.3 The z-transform
6.3.1 Region of convergence
6.3.2 Properties of the z-transform
6.3.3 Inverse z-transform
6.4 Transform-domain approach to LTI systems
6.4.1 Transfer function of LTI systems
6.4.2 Inverse systems of LTIs and deconvolutions
6.4.3 Revisit of LTI system’s stability and causality
6.4.4 Transfer function of LTI systems by LCCDEs
6.5 Transform domain approach to LCCDEs
6.6 Decomposition of LTI system responses
6.7 Unilateral transforms
6.7.1 Unilateral Laplace transform
6.7.2 Unilateral z-transform
6.8 Summary
6.9 Problems
7 Structures and state-space realizations
7.1 Block-diagram representation
7.2 Structures of LTIs with a rational transfer function
7.3 State-space variable representation
7.3.1 State model and state-space realizations
7.3.2 Construction of an equivalent state-space realization
7.3.3 Similarity transformations
7.4 Discretizing a continuous-time state model
7.5 Summary
7.6 Problems
8 Comprehensive problems
8.1 Motivation
8.2 Problems
Appendices
A. Proof of the condition of initial rest (1.49)
B. Proof of Theorem 2.5
C. Orthogonality principle
D. Residue theorem and inverse transforms
E. Partial-fraction expansion
Bibliography
Index


Gang Li, Liping Chang, Sheng Li
Signals and Systems
De Gruyter Textbook


Gang Li, Liping Chang, Sheng Li

Signals and Systems | Fundamentals

Authors
Dr. Gang Li, Zhejiang Hua Yue Institute of Information and Data Processing, Hangzhou 310085, People’s Republic of China, [email protected]
Dr. Liping Chang, Zhejiang University of Technology, College of Information Engineering, Hangzhou 310023, People’s Republic of China, [email protected]
Dr. Sheng Li, Zhejiang University of Technology, College of Information Engineering, Hangzhou 310023, People’s Republic of China, [email protected]

ISBN 978-3-11-037811-5 e-ISBN (PDF) 978-3-11-037954-9 e-ISBN (EPUB) 978-3-11-041684-8 Library of Congress Cataloging-in-Publication Data A CIP catalog record for this book has been applied for at the Library of Congress. Bibliographic information published by the Deutsche Nationalbibliothek The Deutsche Nationalbibliothek lists this publication in the Deutsche Nationalbibliografie; detailed bibliographic data are available on the Internet at http://www.dnb.de. © 2015 Walter de Gruyter GmbH, Berlin/Boston Typesetting: PTP-Berlin, Protago TEX-Production GmbH Printing and binding: CPI books GmbH, Leck Cover image: mareandmare/iStock/thinkstock ♾ Printed on acid-free paper Printed in Germany www.degruyter.com

| To those who taught and were taught by us


Preface
The concepts of signals and systems arise in a wide variety of areas, ranging from home-oriented consumer electronics and multimedia entertainment products to sophisticated communications, aeronautics and astronautics, and control. The ideas and approaches associated with these concepts affect our lives in one way or another. Although the signals and systems arising across those fields naturally differ in their physical make-up and applications, the principles and tools for analyzing signals and systems are the same and hence applicable to all of them. An introductory course on signals and systems is therefore fundamental and compulsory in the engineering undergraduate curriculum of any well-established tertiary institution. Such a course is commonly designed in one of the two forms below:
– a one-semester subject that provides students with a rich set of concepts and tools for analyzing deterministic signals and an important class of systems known as linear time-invariant systems;
– a two-semester subject that expands on the one-semester course by adding a more detailed treatment of signal processing and of systems for specific applications such as communications, multimedia signal processing, and control engineering.
This book takes the first form and assumes that the students have a background in calculus and introductory physics.

Why another “Signals and Systems”?
Given the many well-written textbooks on signals and systems available, our readers may well ask on seeing this book: why another “Signals and Systems”? The thought of writing such a textbook arose in 2007, when the first author conducted a second-year course on signals and systems at Zhejiang University of Technology (ZJUT) in Hangzhou, which he had just joined from Nanyang Technological University (NTU), Republic of Singapore. The course was designed as “shuang yu ke” (in Chinese), meaning that the teaching materials such as the textbooks and slides are all in English, while for the class language the lecturers can choose Chinese, English, or a mixture of the two (a typical Chinese style!). The textbooks adopted were those popularly used worldwide, but they proved very difficult for most of our students, as their English was not good enough to cope with textbooks of

one thousand pages or so in length.¹ This made him think of writing a new “Signals and Systems” with the primary objective of providing a condensed version of “Signals and Systems” in English, while keeping as much of the important technical material as possible.
Most of us agree that in a university education it is more important to teach students how to learn and analyze than what to learn and analyze. The first author still remembers what Professor Y. Liu, who taught him calculus in 1978 at Beijing Institute of Technology (BIT) (now known as Bei Li Gong), said: a good textbook, say of ten chapters, should be written in such a way that after the first four chapters are taught by the lecturer, the rest can easily be studied by the students themselves. What Prof. Liu really meant is that a textbook should be written to facilitate and reinforce self-study. This is another objective that this textbook is intended to achieve. Given the mathematical nature of this subject, rigor should be maintained as much as possible, which is something very important for engineering students to learn. This is the third objective of this textbook.

How the book is arranged
One of the reasons that existing textbooks on signals and systems easily exceed eight hundred pages is the parallel continuous- and discrete-time forms of signals and systems. The success of a “Signals and Systems” that achieves our primary objective lies in providing a balanced and integrated treatment of the two forms in a pedagogical way that helps students see the fundamental similarities and differences between them without too much repetition. With all of this in mind, this book is organized as follows.
– Chapter 1 provides an overview of signals and systems. Compared with its counterpart in most signals-and-systems textbooks, it is condensed, with an emphasis on periodic signals and linear time-invariant (LTI) systems.
– Chapter 2 deals with the time-domain approach to LTI systems. Since the key to the development of this chapter is to exploit the properties of linearity and time-invariance as well as signal decompositions, the concepts of unit impulse response and convolution are developed in detail for the discrete-time case, while the continuous-time counterparts are given directly. One remarkable point emphasized in this chapter is the equivalence between LTI systems and convolutions, which relates the physical interpretation to the mathematical expression. Another important technical point in this chapter is the establishment of a conclusion stating that any complete solution of a linear constant coefficient

1 Here, we have those big masters of Signals and Systems to “blame” – why not write their textbook in Chinese!


differential/difference equation (LCCDE) can be characterized as the sum of the output of an LTI system excited by the forcing signal and a homogeneous solution of the LCCDE. This conclusion yields a very clear picture of the relationship between an LCCDE and the systems it can characterize. With the help of the transforms discussed in Chapter 6, it also provides an easy way to find a particular solution and hence the set of complete solutions of the LCCDE.
– Chapter 3 is the biggest chapter, dealing with four signal representations, namely, the Fourier series (FS), the discrete-time FS (DTFS), the Fourier transform (FT), and the discrete-time FT (DTFT). The most important concept in this chapter is signal decomposition. Along this line, the four signal transforms can be unified as a linear combination of basis signals. This unification allows us to condense the text significantly, as one can focus on the development and properties of the FT and DTFT. Great effort has been made to explain why a signal should be transformed, which is one of the difficult points for most second-year students to understand, given that they usually do not have the relevant background.
– Chapter 4 studies the class of LTI systems using the concepts provided in the first two chapters and the Fourier-analysis techniques developed in Chapter 3. The key points of the chapter are the frequency response and the equivalence between convolution in the time domain and multiplication in the frequency domain; the development of the concepts in the continuous- and discrete-time domains is rotated to avoid unnecessary repetition.
– Chapter 5 mainly focuses on discrete processing of continuous-time signals. The development flows mainly from the techniques derived in Chapters 2–4. Starting from sampling a signal that is continuous in the time domain, the relationship between the spectrum of the continuous-time signal and that of the resulting discrete-time signal is established, from which the famous sampling theorem and hence the ideal reconstruction can be derived. Unlike most existing textbooks, this relation is derived directly by a simple mathematical procedure rather than with the help of an impulse train. Note that the DTFT of a discrete-time signal is a continuous function of frequency. Sampling (in frequency) of such a function leads to a new transform, the discrete Fourier transform (DFT), which is popularly used in many applications of digital signal processing, including linear filtering, correlation analysis, and spectral analysis. More profound discussions of the DFT, however, are beyond the scope of this book, as the relevant topics are part of the core content of subjects on digital signal processing.
– Chapter 6 deals with transform-domain approaches to signals and systems. The Laplace transform and the z-transform are studied in a parallel way, while the applications of the two transforms to LTI systems and to finding the complete solution of an LCCDE are given in a unified manner, which once again allows us to condense the textbook. As two new features of this book, the problems of finding inverse systems of a given LTI system and deconvolutions, and of decomposing system responses, are carefully treated. The former is rarely found in most of the existing

textbooks, while the latter is organized to help students gain an easy understanding of the unilateral transforms.
– The advantage of the unified/integrated treatment of the continuous- and discrete-time forms is particularly demonstrated in Chapter 7 in the study of block-diagram representations and structures of LTI systems in the transform domain. As a special class of system structures, the state-space realizations of LTI systems are introduced.
– Following traditional pedagogy, the tutorial questions given in each chapter are mainly designed to help students enhance their understanding of the concepts of that individual chapter only. This, however, would make most students problem-and-result-oriented and hence defeat the real purpose of learning. To overcome this, Chapter 8 is devoted to providing a set of comprehensive exercises, each of which usually involves mathematical derivations and a more sophisticated application of the concepts and approaches provided in the entire textbook rather than in an individual chapter.
– Appendices are used to make the book concise and self-contained, and also to sustain the rigor of the development.

Acknowledgements
It is our great pleasure to thank a number of people who helped us in writing this book with insightful suggestions and constructive feedback, and those who did nothing directly for this book but whom we simply feel good mentioning. Gang Li is grateful to his former NTU colleagues, particularly G.A. Bi, C.R. Wan, Z.P. Lin, Y.C. Lim, L.H. Xie, and W.J. Cai, for insightful suggestions, constructive discussions, and, more importantly, their friendship. M.Y. Fu of the University of Newcastle (Australia), S. Chen of the University of Southampton (UK), J. Wu of Zhejiang University, and Y.Q. Shi of the New Jersey Institute of Technology (USA) have shown continuous support for Gang Li’s work at ZJUT, which is very much appreciated. He would like to take this opportunity to thank the most important person to his academic career, Michel Gevers of the Université Catholique de Louvain (UCL) (Belgium), who was his Ph.D. supervisor and taught him so much, particularly about how to write. This book came true partially owing to the adventure they had together in writing their coauthored book “Parametrizations in Control, Estimation and Filtering Problems: Accuracy Aspects” (Springer-Verlag, London Limited, 1993). Un grand merci, Mig!
Liping Chang is grateful to her Ph.D. supervisor, academician Z.Q. Lin of SIOM, Chinese Academy of Sciences, Shanghai, who taught her how to explore and solve scientific problems during her graduate period.
Sheng Li wants to say “thank you for taking me to this field” to his Ph.D. supervisor, Rodrigo C. de Lamare (Reader), University of York, U.K. In addition, he is grateful


to his ‘boss’ during his postdoctoral work, Martin Haardt (Professor), TU Ilmenau, Germany. Both of these nice guys taught him many things and helped him to improve himself.
All of us would like to express our gratitude to many colleagues in the College of Information Engineering of ZJUT. In particular, X.X. He, Y.L. Qin, and S.Q. Guo are acknowledged for their full support in various ways. It is really our pleasure to work with them and to have achieved something meaningful together. The contribution of the other members of the teaching team for the subject “Signals and Systems” is acknowledged. It was due to such a great collective effort that this subject was honored as a National Model Course in English by the Chinese Ministry of Education and a Provincial Course of Excellence by the Education Board of Zhejiang Province. In particular, Tao Wu’s contribution and the support from the Academic Affairs Office of ZJUT as well as the two projects are very much appreciated.
It is also our pleasure to acknowledge the authors of the many existing textbooks on signals and systems published in English and Chinese, since their works have influenced our book in one way or another. We thank the many students who contributed to this book with suggestions and questions that helped us over the years to refine and rethink the organization and presentation of the material in the book. In particular, we would like to thank Chaogeng Huang, who has just obtained his Ph.D. under the guidance of the first author, and Huang Bai, for the excellent job done in terms of research and administration, and Yue Wang and Dehui Yang, two talented undergraduate students of Year 2007, who helped us with simulations, figures, and tutorial problems. Special thanks go to our teaching assistants Zhihui Zhu, Tao Hong, Qiuwei Li, Shuang Li, and Liang Zhu for their help in preparing tutorial solutions and some of the computer experiments.
We are grateful to our students for their active participation and intensive interaction with us, which makes this book even more student-oriented. We would like to take this opportunity to thank Professor Xianyi Gong, a Fellow of the Chinese Academy of Engineering, of Zhejiang University, for sharing with us his insightful views on signal processing (SP) as well as many other issues. It is our pleasure to say that his understanding and conception of SP, as well as his enthusiasm for education, have had a great influence on us!
Lastly, we would like to acknowledge the support of our families, which has been the driving force for us to write this book. It may seem unusual to those not close to Gang Li that the first family member he mentions is his brother-in-law Aiguo Li, who has done everything possible for him and his family. Liping Chang wishes to thank her family members, particularly her husband Jia Li and their four-month-old son, for their deep love. Sheng Li has the pleasure of saying “I love you much more than myself” to his darling wife Xiaolei Zhang and his parents.
July 01, 2014, Xiao He Shan, Hangzhou

Gang Li Liping Chang Sheng Li

1 Introduction
This chapter is devoted to providing students with a set of general concepts of signals and systems. We will begin our development with the intuitive questions that may be raised by most students at the beginning of the first class of this course: what is a signal, and what is a system? Mathematical descriptions and representations of signals and systems are the most important concepts throughout this course and also serve as a cornerstone for other, more advanced subjects. We will build the foundation for developing these concepts and discuss some properties of systems, as well as the relationship between signals and systems, in this chapter.

1.1 Overview of signals and systems Signals and systems are two of the words which are heard most frequently in our daily life. These concepts arise in virtually all areas, and particularly, play a very important role in engineering. In fact, it can be argued that much of the recent development of high technology, which has brought our life to a new dimension, is a result of advancements in the theory and techniques of signals and systems.

1.1.1 What is a signal?
Examples of signals we encounter frequently include speech/music signals and picture/image signals, which, in the signal processing community, are usually referred to as audio and video signals, respectively. A signal is actually a variable used to carry information. For example, a speech signal from the speaker at a research seminar represents air pressure that varies with time and stimulates the audience’s ears, and the information is reflected by the way the air pressure (i.e. the signal) changes with time. A signal can be represented in different forms. Figure 1.1 shows the waveform of a recorded speech signal displayed on a computer screen. The physical meaning (i.e. information) carried by this signal is “di qiu” in Mandarin Chinese (it means “earth” in English) spoken by a Chinese male. An image signal is the light intensity, also called the gray level, that varies with two spatial coordinates (see Fig. 1.2), while a video signal consists of a set of pictures that occur in sequence and hence is the light intensity that changes with the two spatial coordinates as well as with time.

[Figure 1.1: waveform plot of s(t) (amplitude from −1 to 1) against t (in sec., from about 1.1 to 1.8)]
Fig. 1.1: A recording of the speech for “di qiu” in Chinese (meaning “earth” in English) spoken by a Chinese male.

Fig. 1.2: The scene of a female teacher in San Qing Mountain.

A signal is formally defined as a function of one or more independent variables.

The speech signal in Figure 1.1 can then be denoted as s(t) with the independent variable t representing the time, while the image signal in Figure 1.2, as p(x, y) with x, y representing the horizontal and vertical coordinates, respectively. It should be pointed out that most signals in our life are generated by natural means and hence correspond to physical phenomena. However, a signal can also be produced artificially, say by computer simulations. Such a signal does not have a specified physical meaning as it is not born from any natural phenomenon but we can always use it to represent a certain meaning (i.e. information). In fact, a signal, no matter how it is generated, is just a carrier that can be used to represent different information.
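The formal definition above is easy to mirror in code. The sketch below is our own illustration, not an example from the book: a one-dimensional signal s(t) is represented as a Python function of the time variable t, and a two-dimensional image signal p(x, y) as a function of the two spatial coordinates. The 440 Hz tone and the synthetic gray-level pattern are arbitrary choices made purely for illustration.

```python
import math

# A one-dimensional signal: a function of a single independent variable t (time).
# The 440 Hz tone is an arbitrary illustrative choice, not a signal from the book.
def s(t):
    return math.cos(2 * math.pi * 440 * t)

# A two-dimensional signal: gray level as a function of spatial coordinates (x, y).
# This synthetic pattern merely illustrates that an image is a function of two variables.
def p(x, y):
    return 0.5 * (1 + math.sin(x) * math.cos(y))

print(s(0.0))        # value of the signal at time t = 0
print(p(0.0, 0.0))   # gray level at the image origin
```

Whatever its physical origin, a signal is thus just a mapping from independent variables to values, which is why the same mathematical tools apply to speech, images, and artificially generated data alike.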

1.1.2 What is a system? In the broadest sense, a system is an entity that is used to achieve a specified function. Figure 1.3 depicts a simplified rectifier circuit.

[Figure: rectifier circuit with voltage source x(t), series resistor r, resistor R, diode D with voltage p(t), and capacitor C with voltage y(t)]
Fig. 1.3: Block diagram of a rectifier circuit.

Such a system consists of two resistors r and R, a diode D, and a capacitor C as components. Denote by y(t) the voltage across the capacitor, which is the response to the applied voltage source x(t) = A cos(2πF0 t + ϕ). The function of this system is to make y(t) constant, namely, invariant with time t. In this system, the source x(t) and the capacitor voltage y(t) are referred to as the input signal and output signal, respectively.
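The stated function of the circuit, producing a nearly constant y(t) from the oscillating source, can be previewed with a crude numerical sketch. The model below is our own idealization, not the book's circuit analysis: the diode is treated as ideal, so the capacitor charges instantly to the source voltage whenever the source exceeds it and otherwise discharges exponentially with an assumed time constant RC; all component values are arbitrary.

```python
import math

A, F0, phi = 1.0, 50.0, 0.0       # source x(t) = A*cos(2*pi*F0*t + phi); arbitrary values
RC = 0.1                          # assumed discharge time constant, in seconds
dt = 1e-5                         # simulation step, in seconds
N = 10000                         # simulate 0.1 s in total

y = 0.0                           # capacitor voltage
samples = []
for n in range(N):
    x = A * math.cos(2 * math.pi * F0 * n * dt + phi)
    if x > y:
        y = x                     # ideal diode conducts: capacitor charges to the source
    else:
        y *= math.exp(-dt / RC)   # diode blocks: capacitor discharges exponentially
    samples.append(y)

# After the initial transient, y(t) stays near the source peak A with only a small ripple.
ripple = max(samples[-2000:]) - min(samples[-2000:])
print(round(ripple, 3))
```

With these numbers the ripple is a small fraction of the peak A, illustrating the smoothing role of the capacitor; making RC larger relative to the source period 1/F0 would flatten y(t) further.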

A system is an interconnection of components or parts with terminals or access ports through which signals can be applied and extracted.

Let y be the output responding to the input x of a system. This fact is denoted as x → y. Such a notation focuses on the relationship between the input and output of the system rather than on how all the components are connected. Frequently, a system can be viewed as a black box, in which the input signals are processed in some manner to yield the output signals. Figure 1.4(a) is the black box representation of the system x(t) → y(t), whose detailed structure is specified by Figure 1.3.

[Figure: (a) a black box with input x(t) and output y(t); (b) the same black box with input x(t) and the two outputs y(t) and p(t)]
Fig. 1.4: Black box representations of the circuit of Figure 1.3.

It should be pointed out that if we are also interested in the voltage p(t) across the diode D in Figure 1.3, then the same circuit can be viewed as a system with one input and two outputs; see Figure 1.4(b). A system may have M inputs and N outputs. Such a system is usually referred to as a multi-input multi-output (MIMO) system when M and N are both greater than one. If M = N = 1, the system is a single-input single-output (SISO) system. In this book, most of the systems discussed belong to the SISO category. Examples of much more complicated systems can easily be found. Below are three important classes of systems that find many applications in our daily life.

A. Communication systems
Figure 1.5 shows a simplified structure of communication systems. The function of such a system is to convey information from one point (the sender) to another point (the destination).

[Figure: s(t) → Emitter → x(t) → Channel → r(t) → Receiver → y(t)]
Fig. 1.5: A block diagram of communication systems.

Every communication system consists of three basic sub-systems: the emitter, the channel, and the receiver. The emitter, located at one point in space, is to generate a signal x(t) that contains the message signal s(t) (say a speech signal) produced by a source of information and to transmit it efficiently to the channel. The channel, as the physical medium, can be an optical fiber, a coaxial cable, or simply the air, and is to guide the transmitted signal x(t) to the receiver that is located at some other point separate from the emitter. As the transmitted signal propagates over the channel, it is distorted before reaching the receiver due to many factors including the physical characteristics of the channel, noise and interfering signals from other sources. The objective of the receiver is to process the received signal r(t), a distorted version of the transmitted signal x(t), so as to yield a signal y(t) which is in a recognizable form of the original message signal s(t).
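As a toy numerical illustration of this three-block chain (our own construction, not an example from the book), the emitter below simply scales the message signal, the channel attenuates the transmitted signal and adds pseudo-random noise, and the receiver rescales and smooths the received signal with a short moving average. Real emitters and receivers perform far more elaborate modulation and detection; all gains and noise levels here are arbitrary.

```python
import math, random

random.seed(0)

def emitter(s):                     # toy emitter: scale the message for transmission
    return [10.0 * v for v in s]

def channel(x):                     # toy channel: attenuation plus additive noise
    return [0.5 * v + random.gauss(0.0, 0.2) for v in x]

def receiver(r, k=5):               # toy receiver: undo attenuation, smooth with moving average
    g = [v / 5.0 for v in r]        # net emitter-channel gain is 10 * 0.5 = 5
    out = []
    for n in range(len(g)):
        window = g[max(0, n - k + 1):n + 1]
        out.append(sum(window) / len(window))
    return out

s = [math.sin(2 * math.pi * n / 50) for n in range(200)]   # message signal
y = receiver(channel(emitter(s)))

# y is a recognizable (noisy, slightly delayed) copy of the message s
err = max(abs(a - b) for a, b in zip(s[20:], y[20:]))
print(err < 0.5)
```

The output y is not identical to s (the moving average delays and slightly attenuates it, and residual noise remains), which mirrors the text: the receiver recovers a recognizable form of the message, not the message itself.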

B. Control systems
Control engineering is another important area in which the concepts of signals and systems have found successful applications. Examples range from simple appliances such as air-conditioners and refrigerators found in homes to very sophisticated engineering innovations such as aircraft autopilots, robots, paper mills,

1.1 Overview of signals and systems

| 5

mass-transit vehicles, oil refineries, automobile engines, nuclear reactors, and power plants. A block-diagram for a class of control systems is depicted in Figure 1.6, where the plant is the system to be controlled, w(t) is the disturbance signal which plus the plant output forms the measurement signal y(t), and the (feedback) controller is a system that consists of a sensor to collect the signal y(t) and a micro-processor to generate the feedback signal xf (t). The latter is then to be compared with a reference signal xr (t) to produce an error signal e(t) = xr (t) − xf (t). This error signal is then fed into the compensator, a system used to generate a signal v(t) to control the plant such that a desired plant output p(t) is achieved. w(t) xr (t)

e(t)

- +m

− xf (t)

-

v(t) Compensator

-

Plant

6

? p(t) - +m

y(t)

Controller



Fig. 1.6: A block diagram for a class of feedback control systems.

In an aircraft landing system, the plant refers to the aircraft’s body and actuator. The sensor system is used by the pilot to measure the lateral position of the aircraft. In this situation, w(t) is the measurement error, while the reference input xr (t) corresponds to the desired landing path of the vehicle and the compensator is designed such that the output of the plant tracks xr (t) well.

C. Signal processing systems As stated by Simon Haykin¹, signal processing is at its best when it successfully combines the unique ability of mathematics to generalize with both the insight and prior information gained from the underlying physics of the problem at hand. In general, the functional form of a signal does not directly reveal the embedded information. One of the reasons for this is that due to factors such as measurement noise and channel distortion, the signal under processing is usually a corrupted version of the one which contains information. Filtering, one of the most important tools

1 S. S. Haykin, “Signal processing: where physics and mathematics meet,” IEEE Signal Processing Magazine, vol. 18, issue 4, pp. 6–7, July, 2001.

of signal processing, is a set of signal operations used to remove the disturbance so that the information can be extracted easily. In the system depicted in Figure 1.7, the input is a corrupted music signal s(t) = s0(t) + e(t), where s0(t) is the desired music signal and e(t) is the noise picked up along the way.

Fig. 1.7: Block-diagram of a simplified audio play system: the corrupted signal s(t) enters the filter, which outputs ŝ0(t).

No one would enjoy listening to the sound from a loudspeaker driven directly by s(t). The filter is a system intended to block the noise e(t) and let s0(t) pass through. It would be a completely different story if the speaker were excited by the output ŝ0(t) ≈ s0(t) of the filter. A digital mixer is a more sophisticated audio instrument used for audio signal processing such as equalization, noise gating, and dynamic control. One of the most important parts of such a system is a set of filters, called a filter bank.

1.2 Description and classification of signals

A signal, represented mathematically as a function of M independent variables, is usually referred to as an M-dimensional (M-D) signal. The speech signal s(t) shown in Figure 1.1 is a 1-D signal, while the light density p(x, y) of a picture (see Figure 1.2) is a 2-D signal. Define R as the set of all real numbers. In this book, we focus our attention mainly on 1-D signals that are defined as a single-valued function of an independent variable, say ξ, taking values in a subset of R, denoted as Rξ. Single-valued means that for any value of ξ ∈ Rξ, there is one and only one value of the function corresponding to it.

1.2.1 Continuous-time signals and discrete-time signals

This classification of signals is based on how they are defined as a function of ξ. A signal x is said to be a continuous-time signal if the set Rξ is continuous. Such a signal is denoted as x(ξ). Most of the signals generated naturally in our world are continuous. For example, a speech signal, represented by acoustic pressure as a function of the time t, is a continuous-time signal, as the time variable t is always continuous. A continuous-time signal can also be generated artificially using computers with a given function.


Figure 1.8 shows two continuous-time signals defined on R:

s(t) = cos(0.4πt), t ∈ R,
y(τ) = 2e^{−0.1τ²}, τ ∈ R.    (1.1)

It should be noted that a continuous-time signal does not necessarily imply that the underlying independent variable represents time. Let x(h) be the variation of air pressure with altitude h of a spot on the earth. Such a signal is a continuous-time signal though the altitude h, which varies continuously, has nothing to do with the time. It is just for convenience that we generally refer to the independent variable of a signal as time. Denote Z as the set of all integers, i.e. Z ≜ {. . . , −2, −1, 0, 1, 2, . . . }.

A signal is said to be a discrete-time signal if the set Rξ is a subset of Z. Two examples of discrete-time signals defined on Z are

s[n] = 10 cos(0.125πn − 0.5), n ∈ Z,
y[k] = e^{−0.5k²}, k ∈ Z,    (1.2)

Fig. 1.8: Waveforms of the two signals defined in Equation (1.1), both plotted for the interval [0, 10].

Fig. 1.9: Waveforms of the two discrete-time signals defined in Equation (1.2).

and are shown in Figure 1.9, respectively. Roughly speaking, the set of discrete-time signals can be classified into two categories.
– In the first, the discrete-time signals are inherently discrete. Figure 1.10 shows a daily averaged Shanghai Composite Index for a period of 40 days, where the independent variable n represents the nth day of that period. Another example of such discrete-time signals is an image signal obtained with a digital camera, where the gray level of the image is represented over a finite set of points, called pixels.
– In the second, the discrete-time signals are obtained by sampling continuous-time signals x(t) at some values of t. For example, if the continuous-time signal x(t) = A cos(10πt), defined on [0, 1], is collected at the points tn = 1 − 2^{−n}, then a discrete-time signal, denoted as x[n], is obtained with x[n] ≜ x(tn) = A cos[10π(1 − 2^{−n})]. Very often, sampling is done uniformly, say tn = nTs for n ∈ Z, where Ts is usually referred to as the sampling period. Figure 1.11 shows s(t) = 10 cos(20πt − 0.5) and its sampled version s[n] = s(nTs) with Ts = 1/50. More details on this topic will be given in Chapter 5.
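The uniform-sampling relation s[n] = s(nTs) is easy to try out numerically. Below is a minimal sketch in plain Python (the function and sampling period follow the example of Figure 1.11; the names s, Ts and s_n are our own):

```python
import math

def s(t):
    # The continuous-time signal of Figure 1.11(a)
    return 10 * math.cos(20 * math.pi * t - 0.5)

Ts = 1 / 50  # sampling period

# Uniform sampling: s[n] = s(n * Ts)
s_n = [s(n * Ts) for n in range(21)]
```

Since s(t) has period 0.1 s and Ts = 1/50 s, the sampled sequence repeats every 5 samples, i.e. s[n + 5] = s[n].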

Fig. 1.10: Daily average of Shanghai Composite Index from May 4 to July 1, 2010.

Fig. 1.11: (a) s(t) = 10 cos(20πt − 0.5), t ∈ [0, 0.4]. (b) s[n] = s(tn) with tn = n/50.

It should be noted that throughout this book, we use letters like t, τ, f to denote continuous (independent) variables, and letters like i, k, n, m, p, q for the independent variables of discrete-time signals. More importantly, as a convention we use the parentheses (·) and the brackets [·] to distinguish continuous-time (CT) signals from discrete-time (DT) signals.

1.2.2 Energy signals and power signals

In order to study signals, we have to find a proper measure or norm to quantify them. The absolute value, usually referred to as magnitude, is a proper quantity to measure how big a signal is if it is constant, but such a measure cannot give an overall view of the signal when it is time-varying. Let i(t) and v(t) be the current through and the voltage across a resistor R of unit resistance (i.e. R = 1 ohm), respectively. As is well known from physics, the instantaneous power is p(t) ≜ i(t)v(t) = v²(t), the total energy expended/consumed by this resistor over the time interval (t1, t2) is

∫_{t1}^{t2} p(t)dt = ∫_{t1}^{t2} v²(t)dt,

and the average power over (t1, t2) is

(1/(t2 − t1)) ∫_{t1}^{t2} p(t)dt = (1/(t2 − t1)) ∫_{t1}^{t2} v²(t)dt.

Borrowing from the above, we have the concepts of energy and power for a signal x, which are defined below.
– Energy Ex: Denote

  Ex(T) ≜ ∫_{−T}^{T} |x(t)|² dt − CT
  Ex[N] ≜ ∑_{n=−N}^{N} |x[n]|² − DT    (1.3)

  for T > 0, N > 0. The energy for a continuous-time signal x(t) over R and a discrete-time signal x[n] over Z is defined respectively as

  Ex ≜ lim_{T→+∞} Ex(T) = ∫_{−∞}^{+∞} |x(t)|² dt − CT
  Ex ≜ lim_{N→+∞} Ex[N] = ∑_{n=−∞}^{+∞} |x[n]|² − DT.    (1.4)

– Power Px: With Ex(T), Ex[N] defined in (1.3), the power for a continuous-time signal x(t) over R and a discrete-time signal x[n] over Z is defined respectively as

  Px ≜ lim_{T→+∞} Ex(T)/(2T) − CT
  Px ≜ lim_{N→+∞} Ex[N]/(2N + 1) − DT.    (1.5)
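The definitions (1.3)–(1.5) can be explored numerically. The sketch below (plain Python; the test signal cos(0.25πn) is our own choice) shows the typical behavior of a power signal: the truncated energy Ex[N] grows without bound, while Ex[N]/(2N + 1) approaches a finite average power:

```python
import math

def x(n):
    # A periodic signal: neither decaying nor growing, hence a power signal
    return math.cos(0.25 * math.pi * n)

def energy(N):
    # E_x[N] = sum_{n=-N}^{N} |x[n]|^2, the truncated energy in (1.3)
    return sum(abs(x(n)) ** 2 for n in range(-N, N + 1))

def power(N):
    # E_x[N] / (2N + 1), which tends to P_x in (1.5) as N grows
    return energy(N) / (2 * N + 1)

print(energy(100), energy(1000))  # grows roughly linearly in N
print(power(100), power(1000))    # settles near 0.5
```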


A signal x is said to be an energy signal if its energy satisfies 0 < Ex < ∞, and a power signal if its power satisfies 0 < Px < ∞.

1.2.3 Periodic signals and nonperiodic signals

A continuous-time signal x(t) is said to be periodic if there exists a positive constant T such that

x(t) = x(t + T), ∀ t ∈ R.    (1.12)

It is easy to see that if (1.12) holds for T = T̃ > 0, then the equation holds for T = kT̃, where k is any positive integer. The smallest value of T that satisfies (1.12), denoted as T0, is called the fundamental period of x(t). T0 defines the duration of one complete cycle of x(t). Observing carefully, we can see that the periodic signal shown in Figure 1.13 has a (fundamental) period of one, i.e. T0 = 1. The reciprocal of the fundamental period of a periodic signal is referred to as the fundamental frequency, formally defined as

f0 ≜ 1/T0.    (1.13)

This number describes how frequently the signal repeats itself. The unit of f0 is hertz (Hz), i.e. cycles per second. In many scenarios, we use the angular frequency, which is defined as

ω0 ≜ 2πf0    (1.14)

and measured in radians per second.

Fig. 1.13: A periodic continuous-time signal.

Example 1.2: As is well known, x(t) = A cos(t) repeats itself every 2π, i.e. x(t + 2π) = x(t), so it is periodic with T0 = 2π. Suppose that both ω and ϕ0 are constant. Is s(t) = A cos(ωt − ϕ0) periodic?

Solution: Note that s(t + T) = A cos(ω(t + T) − ϕ0) = A cos(ωt − ϕ0) = s(t) as long as

ωT = 2πm ⇒ T = m(2π/ω)

for some integer m. This means that s(t) is periodic with fundamental period T0 = 2π/ω and hence fundamental frequency f0 = ω/(2π). In fact, ω is the angular frequency.

Noting that x(t) = 1 = cos(2π · 0 · t), a constant can be considered as a special periodic signal whose fundamental frequency is f0 = 0, i.e. the period is T0 = ∞. Such a signal is usually referred to as a direct current (DC) component.

A discrete-time signal x[n] is said to be periodic if there exists a positive integer, say N, such that

x[n] = x[n + N], ∀ n ∈ Z.    (1.15)

An example of a periodic discrete-time signal is shown in Figure 1.14, where x[n + 10] = x[n], ∀ n. It is easy to see that if (1.15) holds for N = Ñ > 0, then the equation holds for all N = kÑ, where k is any positive integer. The smallest value, say N0, of N that satisfies (1.15) is called the fundamental period of x[n], which defines the duration of one complete cycle of x[n]. Similarly, we have the concepts of (digital) fundamental frequency and angular frequency for this periodic signal x[n], defined respectively as

F0 ≜ 1/N0,  Ω0 ≜ 2πF0.    (1.16)

Fig. 1.14: A periodic discrete-time signal.

A signal is said to be aperiodic or nonperiodic if it is not periodic. Clearly, the signal space can be divided into two categories, periodic and aperiodic, and any signal must belong to one and only one of them.

Example 1.3: For each of the following signals, determine whether it is periodic or not, and if it is periodic, find the fundamental period.
– x1(t) = cos²(2t).
– x2(t) = e^{−α²t} cos(2πt), where α ≠ 0 is a real number.
– x3[n] = (−1)ⁿ.
– x4[n] = x(nTs), where x(t) is periodic with fundamental period T0 = 3Ts.

Solution:
– Noting that cos²(2t) = ½[1 + cos(4t)] and that cos(4t) is periodic with fundamental period T0 = 2π/4 = π/2, we can see that x1(t + T0) = x1(t), ∀ t. So, x1(t) is periodic and its period is the same as that of cos(4t).
– If x2(t) were periodic, then there would exist a T > 0 such that x2(t) = x2(t + T), that is, e^{−α²t} cos(2πt) = e^{−α²(t+T)} cos[2π(t + T)]. Equivalently,

  cos(2πt) = e^{−α²T} cos[2π(t + T)]

  for all t ∈ R, which is impossible: for t = 0 the left side of the equation is one, while the right side is smaller than one. This indicates that x2(t) is aperiodic.
– It is easy to see from the plot of x3[n] that it is periodic with fundamental period N0 = 2. In fact, an integer N such that x3[n + N] = x3[n], ∀ n should satisfy 1 = (−1)^N, and the smallest such positive N is 2.
– It follows from x(t) = x(t + T0) that x(nTs) = x(nTs + T0) = x(nTs + 3Ts) ⇒ x4[n] = x4[n + 3], ∀ n ∈ Z, which implies that x4[n] is periodic with period not bigger than 3. In fact, a careful study shows that the period of such a signal is equal to either 1 or 3.

1.2.4 Deterministic signals and random signals

A sinusoidal signal is characterized by three parameters A, f, ϕ through the model x(t) = A cos(2πft + ϕ). If all these parameters are given, say A = 10, f = 1, ϕ = π/4, the value of the signal at any given time instance can be determined. A signal is said to be deterministic if its characteristics are completely known. If, for the signal x(t) = 10 cos(2πft + ϕ), the values A = 10 and f = 1 are given but ϕ is uncertain, then the value of the signal at a given time instance cannot be determined. A random signal is a signal whose characteristics obey a certain probabilistic law. This law may be completely known, known in part, or completely unknown. From the viewpoint of information theory, a signal that is deterministic to both the sender and the destination contains no information. In contrast, a random signal can be used to carry information. Suppose we know that the signal to be received is of the following form: s(t) = A cos(2π × 10⁶ t + ϕ), where ϕ takes on either π/4 or −π/4. As to which one, the receiver end does not know until s(t) is received. In such a case, the random variable ϕ can be used for hiding information. For example, you can agree with your beloved that

ϕ = { π/4,  for “I love you”
    { −π/4, for “I hate you”.

Do not be disappointed when you receive an s(t) with ϕ = −π/4, because there are always transmission errors. In this book, we focus mainly on deterministic signals.

1.2.5 Elementary signals

A signal may be represented as a function, i.e. a mathematical model. Depending on the characteristics of the signal, this model can be as simple as a constant or as complicated as a highly nonlinear function. A sophisticated model is usually derived based on some of the so-called elementary functions/signals. In this subsection, we will introduce some very important elementary signals which are used intensively throughout this book.

Unit step signals: The continuous-time and discrete-time unit step signals, denoted as u(t) and u[n], respectively, are defined as³

u(t) ≜ { 1, t > 0        u[n] ≜ { 1, n ⩾ 0
      { 0, t < 0,              { 0, n < 0.

As an example of their use, consider the two signals

x(t) = { 10 cos(ω0 t + π/4), t > −1        y[n] = { 2e^{−0.05(n−3)²}, n ⩾ 3
       { 0, t < −1,                                { 0, n < 3.

Clearly, both can be rewritten in a more concise form with the help of unit step signals:

x(t) = 10 cos(ω0 t + π/4)u(t + 1),  y[n] = 2e^{−0.05(n−3)²}u[n − 3].

The sign signal sgn(t), another popularly used signal, is defined in terms of u(t) as

sgn(t) ≜ u(t) − u(−t).    (1.19)

3 It is noted that unlike u[n], the continuous-time unit step signal u(t) is not defined at the origin t = 0.
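The switching role of the unit step in an expression like y[n] = 2e^{−0.05(n−3)²}u[n − 3] can be illustrated directly (plain Python; a discrete-time sketch, since u(t) is not defined at t = 0):

```python
import math

def u(n):
    # Discrete-time unit step: one for n >= 0, zero for n < 0
    return 1 if n >= 0 else 0

def y(n):
    # y[n] = 2 * exp(-0.05*(n-3)^2) * u[n-3]: the step turns the
    # Gaussian-shaped factor on at n = 3 and keeps y[n] = 0 before that
    return 2 * math.exp(-0.05 * (n - 3) ** 2) * u(n - 3)

print([round(y(n), 3) for n in range(7)])  # zero before n = 3
```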

Fig. 1.17: A rectangular window signal.

Window signals: As seen from Figure 1.17, this signal is constantly equal to one for 2 ⩽ n ⩽ 5 and zero outside this range. Such a signal is called a discrete-time window signal. In general, w[n] = u[n − N1] − u[n − N2 − 1], which starts at n = N1 and ends at n = N2. For the continuous-time case, the window signal, denoted as wτ(t), is defined as

wτ(t) ≜ { 1, −τ/2 < t < τ/2
        { 0, otherwise,    (1.20)

where τ > 0 is a constant. See Figure 1.18.

Fig. 1.18: The window signal wτ(t).

Clearly, wτ(t) = u(t + τ/2) − u(t − τ/2) is determined uniquely by the parameter τ, which is referred to as the width of the window. This is a very important signal/function that will be used frequently throughout this book.

Unit impulse signals: The discrete-time unit impulse signal is defined as

δ[n] ≜ { 1, n = 0
       { 0, otherwise.    (1.21)

Such a signal, also called the unit sample signal, is shown graphically in Figure 1.19.

Fig. 1.19: The unit sample signal δ[n] and its shifted version δ[n − k].

A very important application of the discrete-time impulse signal is the following decomposition of an arbitrary x[n]:

x[n] = ⋅⋅⋅ + x[−1]δ[n + 1] + ⋅⋅⋅ + x[m]δ[n − m] + ⋅⋅⋅ ≜ ∑_{m=−∞}^{+∞} x[m]δ[n − m],    (1.22)

which can be proved based on the definition of δ[n]. As will be seen in Chapter 2, such a decomposition plays a key role in studying a very important class of systems.

The continuous-time unit impulse signal δ(t) is defined as

δ(t) ≜ lim_{τ→0} (1/τ) wτ(t),    (1.23)

where wτ(t) is the window function defined in (1.20). Such a signal is also called the Dirac function in the literature. Figure 1.20 depicts (1/τ)wτ(t) for different τ. As observed, the areas of the shadowed rectangles are all the same and equal to one. Therefore,

∫_{−∞}^{+∞} δ(t)dt = ∫_{−σ}^{σ} δ(t)dt = 1, ∀ σ > 0.    (1.24)
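The decomposition (1.22) can be checked numerically for any finite-length sequence. A minimal sketch (plain Python; the test sequence is our own):

```python
def delta(n):
    # Unit impulse (1.21)
    return 1 if n == 0 else 0

vals = {-1: 2.0, 0: -1.0, 1: 0.5, 2: 3.0}  # a short signal, zero elsewhere

def x(n):
    return vals.get(n, 0.0)

def via_impulses(n, m_range=range(-10, 11)):
    # Right-hand side of (1.22): sum_m x[m] * delta[n - m]
    return sum(x(m) * delta(n - m) for m in m_range)

print(all(via_impulses(n) == x(n) for n in range(-10, 11)))  # True
```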

Graphically, κδ (t − τ ) is represented by a vertical line located at t = τ with the amplitude κ positioned near the arrow of the line.

Fig. 1.20: Demonstration of δ(t) as a limit of (1/τ)wτ(t) for τ = 1, 1/3, and 1/6.

20 | 1 Introduction Based on the definition (1.23), it can be shown (see Problem 1.10) that – for any constant α ≠ 0 the following is true: δ (α t) =



1 δ (t), |α |

(1.25)

and particularly, δ (−t) = δ (t); for any given x(t) x(ξ )δ (t − ξ ) ≡ x(t)δ (t − ξ )

(1.26)

holds for arbitrary t and ξ . It follows from (1.24) and (1.26) that +∞

+∞

+∞

∫ x(ξ )δ (t − ξ )dξ = ∫ x(t)δ (t − ξ )dξ = x(t) ∫ δ (τ )dτ , −∞

−∞

−∞

Noting the fact that the integration in the last term above is equal to one, we have +∞

∫ x(ξ )δ (t − ξ )dξ = x(t)

(1.27)

−∞

for any signal x(t) as long as it is well defined at the instance t.⁴

Based on (1.27), one has u(t) = ∫_{−∞}^{+∞} u(ξ)δ(t − ξ)dξ = ∫_{0}^{+∞} δ(t − ξ)dξ and hence

u(t) = ∫_{−∞}^{t} δ(τ)dτ.    (1.28)

Equivalently,

δ(t) = du(t)/dt.    (1.29)

(1.28) and (1.29) yield the relationship between these two elementary signals. It follows from (1.29) that for any given t0,

dwτ(t − t0)/dt = δ(t + τ/2 − t0) − δ(t − τ/2 − t0).

Example 1.4: With x(t) sketched in Figure 1.21(a), compute dx(t)/dt.

Solution: Note that x(t) can be divided into the following four portions:

x(t) = 2w1(t − 1.5) + (t − 2)w1(t − 2.5) + (−t + 4)w1(t − 3.5) + u(t − 4).

4 In a more rigorous manner, the Dirac function is defined as a function δ (t) such that (1.27) holds for any well-defined signal x(t). (1.23) is just one of the ways to define such a δ (t).
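Definition (1.23) together with the sifting property (1.27) says that integrating x against (1/τ)wτ(t − t0) — i.e. averaging x over a window of width τ around t0 — approaches x(t0) as τ → 0. A numeric sketch (plain Python with a simple Riemann sum; the test function and step count are our own choices):

```python
import math

def window_average(x, t0, tau, steps=10000):
    # Riemann-sum approximation of the integral of (1/tau)*w_tau(t - t0)*x(t),
    # i.e. the average of x over (t0 - tau/2, t0 + tau/2)
    dt = tau / steps
    total = 0.0
    for k in range(steps):
        t = t0 - tau / 2 + (k + 0.5) * dt  # midpoint rule
        total += x(t) * dt
    return total / tau

x = lambda t: math.cos(t) + t ** 2
for tau in (1.0, 0.1, 0.01):
    print(tau, window_average(x, 1.0, tau))  # approaches x(1) as tau shrinks
```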

Fig. 1.21: Waveforms of x(t) and dx(t)/dt in Example 1.4: (a) x(t); (b) dx(t)/dt.

Therefore, using the equality f(t)δ(t − t0) = f(t0)δ(t − t0), one has

dx(t)/dt = 2 dw1(t − 1.5)/dt + w1(t − 2.5) + (t − 2) dw1(t − 2.5)/dt
           − w1(t − 3.5) + (−t + 4) dw1(t − 3.5)/dt + δ(t − 4)
         = 2δ(t − 1) − 2δ(t − 2) + w1(t − 2.5) − w1(t − 3.5) + δ(t − 4),

which is depicted in Figure 1.21(b).

Real exponential and sinusoidal signals: This is a very important class of signals in the sense that not only can these signals describe many real-world signals, but they also serve as elementary building blocks with which more complicated signals can be constructed. Continuous-time real exponential signals are of the form

x(t) = e^{αt},    (1.30)

where α is constant. The behavior of such a signal is determined by α. Clearly, x(t) is constant when α = 0, and
– when α is positive: x(t) = e^{αt} grows exponentially as t increases. See Figure 1.22(a). This form describes well many different physical processes such as chain reactions in atomic explosions and complex chemical reactions;
– when α < 0: x(t) = e^{αt} decays with time t, as shown in Figure 1.22(b), and this form models well a variety of phenomena such as the voltage across the resistor in a resistor-capacitor (RC) circuit (in series) that is excited by a constant voltage source, damped mechanical systems, and the process of radioactive decay. Such a signal is often referred to as a damped exponential signal.

The class of exponentially amplitude-modulated sinusoidal signals is defined as

x(t) = e^{αt} cos(2πft + ϕ).    (1.31)

Fig. 1.22: Real exponential signals x(t) = e^{αt}. (a) α = 1; (b) α = −1.

When α < 0, such a signal is called an exponentially damped sinusoid. See Figure 1.23.

Fig. 1.23: Waveform of a damped sinusoidal signal x(t) = e^{−t} cos(2πt − π/4).


Complex exponential signals: Let 𝛾 and A be two complex numbers. Then x(t) = Ae^{𝛾t} is called a continuous-time complex exponential signal. It follows from (1.10) that a (real) exponentially amplitude-modulated sinusoid x(t) = Ar e^{αt} cos(βt + ϕ0) can be rewritten as

x(t) = (Ar/2) e^{αt} [e^{j(βt+ϕ0)} + e^{−j(βt+ϕ0)}]
     = (Ar/2) e^{jϕ0} e^{(α+jβ)t} + (Ar/2) e^{−jϕ0} e^{(α−jβ)t} ≜ A1 e^{𝛾1 t} + A2 e^{𝛾2 t},    (1.32)

which implies that a real-valued continuous-time exponential sinusoidal signal can be represented mathematically using two complex exponential signals.

For the discrete-time case, an exponentially amplitude-modulated sinusoidal signal is defined as

x[n] = ρⁿ cos(Ωn + ϕ),    (1.33)

where ρ > 0, Ω and ϕ are all real and constant. A discrete-time complex exponential signal is of the form x[n] = A𝛾ⁿ, where 𝛾 and A are two complex constants. Similarly, we can show that any real-valued discrete-time exponential sinusoidal signal can be represented with two complex discrete-time exponential signals.

The last elementary signal introduced in this section is the sinc function:

s𝜈(t) ≜ sin(t𝜈/2) / (t𝜈/2),    (1.34)

where 𝜈 is constant. Figure 1.24 shows the waveform of this signal for 𝜈 = 2. Clearly, s𝜈(t) = 0 is achieved at t = 2πm/𝜈, where m is any nonzero integer.

It should be noted that although all the signals generated from physical phenomena can be well represented with real-valued functions, complex functions are sometimes found very useful in signal analysis. This will be revealed in the chapters that follow.
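The identity (1.32) can be verified numerically: with A1 = (Ar/2)e^{jϕ0}, A2 = (Ar/2)e^{−jϕ0}, 𝛾1 = α + jβ and 𝛾2 = α − jβ, the two complex exponentials sum to the real damped sinusoid. A sketch (plain Python; the parameter values are arbitrary):

```python
import cmath
import math

Ar, alpha, beta, phi0 = 2.0, -0.3, 5.0, math.pi / 4

A1 = (Ar / 2) * cmath.exp(1j * phi0)
A2 = (Ar / 2) * cmath.exp(-1j * phi0)
g1 = alpha + 1j * beta
g2 = alpha - 1j * beta

def real_form(t):
    # x(t) = Ar * e^{alpha t} * cos(beta t + phi0)
    return Ar * math.exp(alpha * t) * math.cos(beta * t + phi0)

def complex_form(t):
    # A1 e^{g1 t} + A2 e^{g2 t}: the two terms are complex conjugates,
    # so their sum is real-valued
    return A1 * cmath.exp(g1 * t) + A2 * cmath.exp(g2 * t)

print(real_form(0.7), complex_form(0.7).real)
```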

Fig. 1.24: Waveform of the sinc function s𝜈(t) for 𝜈 = 2.
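A quick numeric check of the sinc definition (1.34) and the location of its zeros (plain Python; the removable singularity at t = 0 is handled explicitly):

```python
import math

def s_nu(t, nu=2.0):
    # s_nu(t) = sin(t*nu/2) / (t*nu/2), with the limit value 1 at t = 0
    u = t * nu / 2
    return 1.0 if u == 0 else math.sin(u) / u

# Zeros occur where t*nu/2 = m*pi, i.e. t = 2*pi*m/nu, m a nonzero integer
print(s_nu(0.0))  # 1.0, the peak value
print([s_nu(2 * math.pi * m / 2.0) for m in (1, 2, 3)])  # numerically zero
```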

1.3 Description of systems

Roughly speaking, a system is an entity that manipulates input signals in such a way that desired output signals are produced. For example, in a high-fidelity audio processing system, an input signal representing music as recorded on a cassette or compact disc (CD) is modified to enhance/reduce some of the components in the signal (e.g., treble and bass), or to remove recording noise. A system is therefore characterized by a series of operations that specify how the input signals are manipulated. Any sophisticated system consists of a set of elementary systems specified by basic operations. In what follows, we will introduce some basic systems.

1.3.1 Elementary systems

Time shifting systems: Time shifting moves the whole signal along the horizontal axis. Figures 1.25 and 1.26 demonstrate time shifting for a continuous-time signal and a discrete-time signal, respectively. A time shifting system is mathematically described as

x(t) → y(t) = x(t − τ),  x[n] → y[n] = x[n − n0],    (1.35)

where τ is a constant and n0 is a constant integer. The time shift is sometimes called forward or right shift when τ > 0, n0 > 0, and back or left shift for τ < 0, n0 < 0.
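The discrete-time half of (1.35) is just an index substitution. A minimal sketch (plain Python; the signals are our own):

```python
def shift(x, n0):
    # y[n] = x[n - n0]: n0 > 0 delays (shifts right), n0 < 0 advances
    return lambda n: x(n - n0)

x = lambda n: 1.0 if 0 <= n <= 3 else 0.0  # a length-4 rectangular pulse
y = shift(x, 2)                            # same pulse, starting at n = 2

print([x(n) for n in range(-2, 8)])
print([y(n) for n in range(-2, 8)])
```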

Fig. 1.25: (a) x(t). (b) x(t − 2). (c) x(t + 2).

Fig. 1.26: (a) x[n]. (b) x[n − 2]. (c) x[n + 2].

A signal travels in free space from one point to another at the velocity of light. Ideally, the received signal is a delayed version of the emitted one, and in that case the free space is a time shifting system.

Time scaling systems: This class of systems is defined as

x(t) → y(t) = x(αt), α ≠ 0.    (1.36)

Figure 1.27 shows x(αt) graphically with x(t) = (1 − t)w1(t − 1/2) for different α. In particular, such an operation is called time reversal when α = −1.

Fig. 1.27: (a) x(t). (b) x(−t). (c) x(2t). (d) x(t/2).

If x(t) is a recording of a speech signal played at a certain speed, x(2t) is the recording played at twice the speed, while x(t/2) is the recording played at half the speed. Combining time shifting and time scaling, we have a more general system: x(t) → y(t) = x(αt − β). Noting y(t) = x(α(t − β/α)), we can see that such an operation can be achieved by a cascade connection of a time scaler with factor α, yielding w(t) = x(αt), and a time shifter with shift β/α, giving y(t) = w(t − β/α). See Figure 1.28.

Example 1.5: Figure 1.29 shows a signal y(t). If y(t) = x(−2t + 1), sketch x(t).

Solution: Let τ = −2t + 1; then t = −(τ − 1)/2 and hence

y(t) = x(−2t + 1) ⇔ x(τ) = y(−(τ − 1)/2).

Fig. 1.28: Block-diagram of time shifting-scaling systems.


Fig. 1.29: Waveforms for Example 1.5.

Equivalently, x(t) = y(−(t − 1)/2), which leads to

w(t) ≜ y(−t/2) → x(t) = w(t − 1).

See Figure 1.29.

The discrete-time systems that realize time scaling are defined as

x[n] → y[n] = x[αn],    (1.37)

where the constant α, unlike that in (1.36), takes on nonzero integer values only. Figure 1.30 shows a signal x[n] and its time-scaled versions x[2n] and x[−2n].

Fig. 1.30: Waveforms for x[αn].
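For integer α, the time scaling (1.37) keeps every α-th sample, and a negative α additionally reverses time. A sketch (plain Python; the sample values are our own):

```python
vals = {-3: 0.0, -2: 0.1, -1: 0.2, 0: 1.0, 1: 0.5, 2: 0.25, 3: 0.0}

def x(n):
    return vals.get(n, 0.0)

def scale(alpha):
    # y[n] = x[alpha * n] with alpha a nonzero integer, as in (1.37)
    return lambda n: x(alpha * n)

y2 = scale(2)    # x[2n]: every second sample of x
ym2 = scale(-2)  # x[-2n]: every second sample, time-reversed

print([y2(n) for n in range(-2, 3)])   # [x(-4), x(-2), x(0), x(2), x(4)]
print([ym2(n) for n in range(-2, 3)])  # [x(4), x(2), x(0), x(-2), x(-4)]
```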

Arithmetical operations: Another three elementary operations are shown in Figure 1.31. Amplitude scaling can be considered as an amplifier whose scaling factor changes the volume of the signal. An example of amplitude scaling is Ohm’s law:

v(t) = Ri(t),    (1.38)

where the scaling factor R is the resistance of a resistor, while v(t) and i(t) are the voltage across and the current through the resistor, respectively.


Fig. 1.31: Three arithmetical operations on signals. (a) amplitude scaling; (b) signal addition; (c) signal multiplication.

The addition and multiplication operations are actually multiple-input single-output (MISO) systems. Physical examples of such systems include audio mixers, which add music and voice signals, and amplitude modulation (AM) radio, in which signals are of the form y(t) = s(t) cos(2πfc t + ϕ), where s(t) contains the message signal and cos(2πfc t + ϕ) is called the carrier wave.

1.3.2 System modelling

Studying a system actually means investigating the relationships between the signals appearing in the system. A mathematical model for a system is a collection of equations describing these relationships. Examples of the simplest system models include those studied in the previous subsection. The set-up of the equations for system modelling is usually based on physical laws such as Newton’s laws, Ohm’s law and Kirchhoff’s laws. Some examples of system modelling are presented as follows.

System I – RLC circuit: Consider the resistor-inductor-capacitor (RLC) (in series) network shown in Figure 1.32. The excitation signal is a voltage source denoted as x(t). Suppose that we are interested in studying how y(t), the voltage across the capacitor, is affected by x(t). This can be done by finding the relationship between x(t) and y(t).

Fig. 1.32: A resistor-inductor-capacitor (RLC) (in series) circuit.


First of all, Kirchhoff’s voltage law tells us that x(t) = vR(t) + vL(t) + y(t). Knowing that

vR(t) = Ri(t),  vL(t) = L di(t)/dt,  i(t) = C dy(t)/dt,

we have

LC d²y(t)/dt² + RC dy(t)/dt + y(t) = x(t),    (1.39)

which is a 2nd order differential equation. Solving this equation, we then have the explicit expression of y(t) in terms of the input x(t).

System II – Mechanical system: Figure 1.33 shows a mechanical system, where x(t) is the applied force, y(t) denotes the displacement of the mass M due to the external force x(t), and K, D are the spring constant and the damping constant, respectively. Now, let us establish the relationship between the external force x(t) and the mass displacement y(t).

        

Fig. 1.33: A mechanical system.

Based on the force balance law, we get M d²y(t)/dt² = x(t) − Ky(t) − D dy(t)/dt, i.e.

M d²y(t)/dt² + D dy(t)/dt + Ky(t) = x(t).    (1.40)

It is interesting to note that though System I and System II are two very different physical systems, both are described by the same system model, i.e. a 2nd order linear differential equation of the form

α2 d²y(t)/dt² + α1 dy(t)/dt + α0 y(t) = x(t).    (1.41)
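Models of the form (1.41) are easy to simulate once α2 ≠ 0: solve for y'' and integrate step by step. Below is a rough forward-Euler sketch (plain Python; the coefficients, step size and the zero initial conditions are illustrative choices, and a real simulation would use a better integrator):

```python
def simulate(a2, a1, a0, x, T=5.0, dt=1e-3):
    # Forward-Euler integration of a2*y'' + a1*y' + a0*y = x(t),
    # starting from rest: y(0) = 0, y'(0) = 0
    y, v = 0.0, 0.0  # y and its derivative
    t = 0.0
    while t < T:
        a = (x(t) - a1 * v - a0 * y) / a2  # y'' solved from the model
        y += dt * v
        v += dt * a
        t += dt
    return y

# Constant (step) input: a stable system settles near x/a0
y_final = simulate(a2=1.0, a1=2.0, a0=4.0, x=lambda t: 1.0)
print(y_final)  # close to 1/4
```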

Such a mathematical model can be used to describe many different physical systems.

System III – A repayment scheme for a bank loan: Suppose one borrows Y = 100 000 dollars from a bank. Let x[n] be the loan payment in the nth month, while

Fig. 1.34: (a) x[n]—the measurement available; (b) x0[n]—the desired signal to be detected; (c) y[n]—the filter output with α = −0.75; (d) y[n]—the filter output with α = 0.75.

y[n] is the balance of the loan at the end of the nth month. If the monthly interest rate is I, say I = 1.25 %, then y[n] = (1 + I)y[n − 1] − x[n], n = 1, 2, . . . , with y[0] = Y the amount of the loan. Equivalently,

y[n] − (1 + I)y[n − 1] = −x[n],  n = 1, 2, . . . ,

which is a 1st-order difference equation.

System IV – A digital filter: A filter is a system used to process signals. Suppose that we have the signal x[n] shown in Figure 1.34(a). We know that this signal is the sum of a slowly varying signal x0[n] and a component e[n] that changes much faster than x0[n] does. The problem we encounter here is how to estimate x0[n] with the only available signal x[n] for n ⩾ 0. We feed x[n] into a system, i.e. a digital filter, described by the following difference equation:

y[n] = α y[n − 1] + (1 − α)/(1 + α) · (x[n] + α x[n − 1]),   (1.42)

with y[−1] = 0. Assume x[n] = 0, ∀ n < 0. The output y[n] for n ⩾ 0 can be computed with the above recursive equation as long as the parameter α is given.
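The recursion (1.42) is easy to run directly. The sketch below uses an assumed test signal, not the book's data: a slow sinusoid x0[n] plus a fast alternating term. It reproduces the qualitative behaviour of Figure 1.34: α = 0.75 tracks x0[n] closely, while α = −0.75 does not.

```python
import math

def smooth(x, alpha):
    # Recursion (1.42): y[n] = a*y[n-1] + (1-a)/(1+a) * (x[n] + a*x[n-1]),
    # with y[-1] = 0 and x[n] = 0 for n < 0.
    g = (1 - alpha) / (1 + alpha)
    y, y_prev, x_prev = [], 0.0, 0.0
    for xn in x:
        yn = alpha * y_prev + g * (xn + alpha * x_prev)
        y.append(yn)
        y_prev, x_prev = yn, xn
    return y

# Assumed input: slowly varying x0[n] plus a rapidly alternating component.
n = range(400)
x0 = [2.0 * math.sin(2 * math.pi * k / 200) for k in n]
x = [x0[k] + 0.5 * (-1) ** k for k in n]

err_lp = max(abs(a - b) for a, b in zip(smooth(x, 0.75), x0))   # small
err_hp = max(abs(a - b) for a, b in zip(smooth(x, -0.75), x0))  # large
```

For α = 0.75 the recursion acts as a lowpass filter (unit DC gain, strong attenuation of the alternating term), while α = −0.75 amplifies the fast component instead, which is the behaviour seen in Figure 1.34(c) and (d).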


Figure 1.34(c) shows the output y[n] for α = −0.75. As observed, it is far from the desired signal x0[n] depicted in Figure 1.34(b). Taking α = 0.75, we evaluate the output using the difference equation again with the same input x[n]. The corresponding y[n] is plotted in Figure 1.34(d). This result, in contrast to the situation for α = −0.75, is very close to the desired signal x0[n]. For the same difference equation, why is the performance with α = 0.75 much better than that with α = −0.75? The readers of this book should be able to answer this question after Chapter 4. To end this section, it is important to note that
– the models for the four systems discussed above are either differential equations or difference equations. Though there exist other models, differential and difference equations represent the most important class of systems;
– a mathematical model is actually an idealized representation of a system, and many physical systems cannot be described exactly by such a model. In most situations, a model used for a system yields an approximation of the system behavior that is acceptable for a specified application.

1.4 Properties of systems

A system can be viewed as a black box in which the input signal x is manipulated with a set of operations to produce the output signal y (Fig. 1.35).

Fig. 1.35: Black-box representation of a system.

A system S is said to be a continuous-time system when both the input and output signals are continuous-time. Similarly, a discrete-time system implies that both its input and output signals are discrete-time. A hybrid system is one in which some of the signals are in continuous-time form and the others in discrete-time form. In the subsections that follow, we introduce and discuss some properties of continuous-time and discrete-time systems. These properties have important physical interpretations and are described with the signals-and-systems language set up in the previous sections.

1.4.1 Memoryless and with memory

A system is said to be memoryless if for any input signal, the output at any given time depends only on the value of the input signal at that time. Otherwise, it is a system with memory. For example, a resistor with resistance R is memoryless, since the current i(t) passing through it is proportional to the voltage excitation v(t) applied on it:

i(t) = v(t)/R.

This is not the case for an inductor of inductance L. In fact, the voltage excitation v(t) across the inductor and the current i(t) that flows through it obey

i(t) = (1/L) ∫_{−∞}^{t} v(τ)dτ.

This implies that for a given time instant, say t0, i(t0) depends on all the values of v(t) for t < t0. Therefore, as a system an inductor is with memory. It is easy to see that the discrete-time system S: x[n] → y[n] with y[n] = x[n] − x²[n] is memoryless, while y[n] = ½{x[n] + x[n − 1]} is with memory, because y[n] depends on x[n − 1].

1.4.2 Causality

A system is said to be causal if for any input signal, the output at any given time depends only on values of the input up to that time. Such a system is sometimes referred to as nonanticipative, as the system output does not anticipate future values of the input. Clearly, the system y(t) = x²(t − 1) is causal, while y(t) = x(t + 1) is noncausal. The time-reversal system y[n] = x[−n] is another example of a noncausal system, as y[−5] = x[5] indicates that the output value at n = −5 depends on the future value x[5] of the input signal. Let S be a discrete-time system: x[n] → y[n]. Suppose that xk[n] → yk[n], k = 1, 2. It is important to note that if the system is causal, then the following if-then statement holds for any given n0:

x1[n] = x2[n], ∀ n ⩽ n0  ⇒  y1[n] = y2[n], ∀ n ⩽ n0.   (1.43)

Similarly, if a continuous-time system x(t) → y(t) is causal and xk(t) → yk(t), k = 1, 2, then

x1(t) = x2(t), ∀ t ⩽ t0  ⇒  y1(t) = y2(t), ∀ t ⩽ t0   (1.44)

holds for any given t0. Causality is an essential requirement for systems that carry out real-time processing of signals. There exist systems for which causality is not a constraint. An example of such a system is a digital image smoothing system. When corrupted by noise of high-frequency fluctuations, the original image G[k, m] is usually filtered by averaging the pixels centered at [k, m]:

Ĝ[k, m] = 1/((2K + 1)(2M + 1)) ∑_{l1=−K}^{K} ∑_{l2=−M}^{M} G[k − l1, m − l2],

where K, M are two given positive integers. As seen, the output Ĝ[0, 0] depends on "future" values such as G[1, 1], G[1, 2], G[2, 1], . . . of the input G[k, m].
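A direct implementation of the smoothing average above makes the noncausality visible. This is a sketch; the zero padding at the image border is an assumption not specified in the text.

```python
def smooth2d(img, K, M):
    # Average over the (2K+1) x (2M+1) window centred at [k, m];
    # pixels outside the image are treated as zero (an assumption).
    rows, cols = len(img), len(img[0])
    out = [[0.0] * cols for _ in range(rows)]
    norm = (2 * K + 1) * (2 * M + 1)
    for k in range(rows):
        for m in range(cols):
            s = 0.0
            for l1 in range(-K, K + 1):
                for l2 in range(-M, M + 1):
                    if 0 <= k - l1 < rows and 0 <= m - l2 < cols:
                        s += img[k - l1][m - l2]
            out[k][m] = s / norm
    return out

img = [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]]
sm = smooth2d(img, 1, 1)   # sm[1][1] uses the "future" pixel img[2][2]
```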

1.4.3 Invertibility

Consider the block diagram shown in Figure 1.36, where the system Sinv is called the inverse system of system S if the input-output relationship indicated in Figure 1.36 for the entire system holds for an arbitrary input signal x.

Fig. 1.36: System invertibility and inverse system.

A system S is said to be invertible if it has an inverse system Sinv. The system S specified by x(t) → y(t): y(t) = 2x(t − 3) is invertible, as the system Sinv specified by v(t) → w(t): w(t) = ½v(t + 3) satisfies w(t) = x(t) when v(t) = y(t). Another example of an invertible system is the integrator described by

y(t) = ∫_{−∞}^{t} x(ξ)dξ,

since x(t) = dy(t)/dt, which means that the inverse system of the integrator is the differentiator.⁵

5 The differentiator is invertible for the class of signals x(t) with x(−∞) = 0, as its output to such a signal x(t), when fed into the integrator, yields the same signal x(t).
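The integrator/differentiator pair can be checked numerically; the sketch below (trapezoidal integration followed by a central difference, with an arbitrary test signal) recovers the original signal up to discretization error.

```python
import math

dt = 1e-3
t = [k * dt for k in range(20000)]          # 0 .. 20 s
x = [math.sin(tk) for tk in t]              # test signal with x(0) = 0

# Integrator: y(t) = integral of x up to t (trapezoidal rule)
y = [0.0]
for k in range(1, len(x)):
    y.append(y[-1] + 0.5 * dt * (x[k] + x[k - 1]))

# Differentiator (the inverse system): central differences recover x
x_rec = [(y[k + 1] - y[k - 1]) / (2 * dt) for k in range(1, len(y) - 1)]
err = max(abs(a - b) for a, b in zip(x_rec, x[1:-1]))
```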

There exist noninvertible systems. The simplest system of this kind may be the one x(t) → y(t): y(t) = 0, for which there is no way to determine the input that leads to y(t) = 0. The concept of invertibility is of importance in many contexts. In communication systems, the received signal is generally different from the transmitted signal that propagates through the channel. If the channel is not an invertible system, there is no way to correct the distortion caused by the channel.

1.4.4 Stability

A system is said to be unstable if it is put out of work by an input signal of finite magnitude. Consider the diode D shown in Figure 1.3—one of the simplest systems. Let the voltage V = −p(t) across D be the input and the current I passing through it be the output. The input-output relationship of this system is shown in Figure 1.37. It is well known from semiconductor theory that the magnitude of I increases dramatically when V is higher than 0.75 volts or lower than −1.75 volts. This implies that such a system is at risk of being destroyed. So, a diode is an unstable system.

Fig. 1.37: The relationship between voltage V (in volts) and current I (in mA) of a diode D.

Another example of an unstable system is the first Tacoma Narrows suspension bridge, situated on the Tacoma Narrows in Puget Sound, near the city of Tacoma, Washington. It collapsed on Nov. 7, 1940, due to wind-induced vibrations that coincided with an inherent frequency of the bridge.

From an engineering perspective, it is important that a system of interest remains well behaved under all possible operating conditions. A rigorous definition of stable systems is related to the concept of bounded signals. A signal x is said to be bounded if there exists a finite positive constant M such that |x| < M holds on R or Z. An example of a bounded signal is x(t) = cos(2πft), as |x(t)| < 2, ∀ t ∈ R. A system x → y is said to be stable if for any bounded input x, the corresponding output y is bounded. Mathematically, |x| ⩽ Mx < +∞ implies |y| ⩽ My < +∞ for some finite constant My.

For t > 2: y(t) = 0, as x(τ)h(t − τ) = 0 for all τ.

An analytical approach to this example will be given later.

2.3 Properties of convolutions and equivalent systems

Mathematically, the convolutions defined by (2.5) and (2.7) are operations between two functions. In this section, we discuss several important properties of these operations. To better understand the physical meaning of these properties, it is helpful to consider the outcome of v ∗ w as the output of an LTI system that has unit impulse response w and is excited by the input signal v. This is supported by Theorems 2.1 and 2.2. In this way, many properties of the convolution follow directly from those of LTI systems. For example, based on the definition of the convolutions (2.5) and (2.7) and the properties of the unit impulse signals δ[n] and δ(t), we can show mathematically that

δ[n − n0] ∗ w[n] = w[n − n0], ∀ n0 ∈ Z
δ(t − t0) ∗ w(t) = w(t − t0), ∀ t0 ∈ R.   (2.8)

On the other hand, consider w as the unit impulse response of an LTI system. Then (2.8) is straightforward, since it is just the time invariance of LTI systems.

Commutativity: This property states that

y = x ∗ h = h ∗ x.   (2.9)

We show it for the convolution sum only.

Proof: By definition, y[n] = x[n] ∗ h[n] = ∑_{k=−∞}^{+∞} x[k]h[n − k]. With m = n − k,

y[n] = ∑_{m=−∞}^{+∞} x[n − m]h[m] = ∑_{m=−∞}^{+∞} h[m]x[n − m] = h[n] ∗ x[n].
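For finite-length sequences, the commutativity can also be confirmed numerically; a minimal sketch:

```python
def conv(a, b):
    # Convolution sum for finite-support sequences:
    # (a * b)[n] = sum_k a[k] * b[n - k]
    out = [0.0] * (len(a) + len(b) - 1)
    for k, ak in enumerate(a):
        for m, bm in enumerate(b):
            out[k + m] += ak * bm
    return out

x = [1.0, -2.0, 3.0, 0.5]
h = [0.5, 0.25, -1.0]
lhs, rhs = conv(x, h), conv(h, x)   # x * h and h * x agree
```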

This property tells us that the output y of an LTI system, which has unit impulse response h and is excited by x, can be viewed as the output of a "system" that has unit impulse response x and is excited by the input h. See Figure 2.7.

Distributivity: This property states that

y = x ∗ (h1 + h2) = x ∗ h1 + x ∗ h2,   (2.10)

Fig. 2.7: Equivalence due to commutativity.

which implies that the overall system of two LTI subsystems, connected in parallel and having unit impulse responses h1 and h2, respectively, can be viewed as an LTI system with unit impulse response h = h1 + h2. See Figure 2.8.

Fig. 2.8: Equivalence due to distributivity – parallel connection.

In general, if yk = x ∗ hk, then ŷ = x ∗ (∑k hk) = ∑k yk, which indicates that an LTI system of unit impulse response h = ∑k hk can be realized with a set of LTI subsystems connected in parallel. Keeping the commutativity in mind and considering x as the unit impulse response of an LTI system, the above is actually a consequence of the superposition principle of LTI systems.

Associativity: The associativity of convolutions tells us

y = x ∗ {h1 ∗ h2} = {x ∗ h1} ∗ h2
(Commutativity) ⇓
y = x ∗ {h2 ∗ h1} = {x ∗ h2} ∗ h1.   (2.11)

The physical interpretation of this property is shown in Figure 2.9. Consider h1 and h2 as the unit impulse responses of two LTI systems. The associativity specified by (2.11) indicates the equivalence between the overall system of unit impulse response h1 ∗ h2, the cascade of the subsystems h1 and h2, and the cascade of the subsystems h2 and h1. An outline of the proof of the equivalence is to show that all three systems are LTI and have the same unit impulse response h = h1 ∗ h2. The detailed proof is left to the reader. The properties mentioned above are shared by both the convolution sum and the convolution integral. Below are some properties possessed by the convolution integral only.

Fig. 2.9: Three equivalent LTI systems resulting from the associativity.

Derivative property: Let f(t) be a signal and denote ḟ(t) ≜ df(t)/dt. If y(t) = x(t) ∗ h(t), then

ẏ(t) = ẋ(t) ∗ h(t).   (2.12)

Proof: Note y(t) = x(t) ∗ h(t) = h(t) ∗ x(t). By definition, y(t) = ∫_{−∞}^{+∞} h(τ)x(t − τ)dτ. As dx(t − τ)/dt = ẋ(t − τ),

ẏ(t) = ∫_{−∞}^{+∞} h(τ) dx(t − τ)/dt dτ = h(t) ∗ ẋ(t) = ẋ(t) ∗ h(t).

Note that the property of commutativity is used throughout the proof. This property leads to the equivalence of the two systems shown in Figure 2.10, where d/dt denotes the unit impulse response of the differentiator, which is an LTI system. As both the system h(t) and the differentiator are LTI, the two equivalences shown in Figure 2.10 coincide with the associativity of the convolution integral.

Fig. 2.10: Two equivalent systems from the derivative property.

If s(t) is the response of an LTI system to the unit step signal u(t): u(t) → s(t), the derivative property suggests that

h(t) = ṡ(t).   (2.13)

On many occasions, it is impossible to measure the unit impulse response h(t) directly, as generating the impulse signal δ(t) is physically unrealistic, while it is much easier to measure the unit step response s(t). In such a situation, (2.13) can be used to obtain h(t).

Example 2.5: Consider an RC (in series) circuit excited by a driving voltage x(t). The voltage across the capacitor is y(t). As is well known from circuit theory,

dy(t)/dt + (1/RC) y(t) = (1/RC) x(t),

and the unit step response is s(t) = (1 − e^{−t/RC})u(t). Therefore,

h(t) = ṡ(t) = (1/RC) e^{−t/RC} u(t) + 0 × δ(t) = (1/RC) e^{−t/RC} u(t)

can be obtained indirectly.
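Relation (2.13) can be verified numerically: differentiate the sampled step response and compare with the closed-form impulse response. The element values R = 1 kΩ, C = 1 µF used below are illustrative assumptions.

```python
import math

R, C = 1.0e3, 1.0e-6          # assumed values: RC = 1 ms
tau = R * C
dt = tau / 1000.0
t = [k * dt for k in range(5000)]

s = [1.0 - math.exp(-tk / tau) for tk in t]     # unit-step response s(t)
# h(t) = s'(t) via central differences, against (1/RC) e^{-t/RC} u(t)
h_num = [(s[k + 1] - s[k - 1]) / (2 * dt) for k in range(1, len(s) - 1)]
h_ref = [math.exp(-t[k] / tau) / tau for k in range(1, len(s) - 1)]
rel_err = max(abs(a - b) * tau for a, b in zip(h_num, h_ref))  # scaled by 1/h(0)
```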

Integration property: Let f(t) be a signal and denote fint(t) ≜ ∫_{−∞}^{t} f(τ)dτ. If y(t) = h(t) ∗ x(t), then

yint(t) = xint(t) ∗ h(t).   (2.14)

The interpretation of this property is shown in Figure 2.11, where ∫ denotes the unit impulse response of the integrator, which is actually equal to u(t).

Fig. 2.11: The two equivalent systems from the integration property.

Proof: The proof can be done by direct mathematical manipulation. Alternatively, noting that the integrator, as shown before, is an LTI system, the equivalence between the two systems in Figure 2.11 is simply due to the property of associativity proved before.

Let y(t) = x(t) ∗ h(t) and denote f(t) ≜ ẋ(t) ∗ h(t), g(t) ≜ x(t) ∗ ḣ(t). It is easy to see that ẏ(t) = f(t) = g(t) and yint(t) = xint(t) ∗ h(t) = x(t) ∗ hint(t). Applying the derivative

property to the latter yields

y(t) = xint(t) ∗ ḣ(t) = ẋ(t) ∗ hint(t).   (2.15)

Also, integrating both f(t) and g(t) gives

y(t) = fint(t) + y(−∞) = gint(t) + y(−∞).   (2.16)

Now, we demonstrate how to take advantage of (2.15) and (2.16) for evaluating convolution integrals with the two examples given below.

Example 2.6: Re-consider Example 2.4. Note x(t) = u(t − 1) − u(t − 3), h(t) = (t + 2)[u(t + 2) − u(t + 1)].

Solution: We can solve this problem using (2.15). First of all, it is interesting to note that for any signal f(t) and arbitrary constant t0,

∫_{−∞}^{t} f(τ)u(τ + t0)dτ = ∫_{−t0}^{t} f(τ)u(τ + t0)dτ = [∫_{−t0}^{t} f(τ)dτ] u(t + t0),

where the 1st equality is due to u(τ + t0) = 0 for all τ with τ + t0 < 0, while the 2nd one holds because the integral vanishes for t < −t0. Now, note

hint(t) = ∫_{−∞}^{t} h(τ)dτ = ∫_{−∞}^{t} (τ + 2)[u(τ + 2) − u(τ + 1)]dτ
= ∫_{−∞}^{t} (τ + 2)u(τ + 2)dτ − ∫_{−∞}^{t} (τ + 2)u(τ + 1)dτ
= [∫_{−2}^{t} (τ + 2)dτ] u(t + 2) − [∫_{−1}^{t} (τ + 2)dτ] u(t + 1),

that is,

hint(t) = ½(t + 2)² u(t + 2) − ½[(t + 2)² − 1] u(t + 1).

Since ẋ(t) = δ(t − 1) − δ(t − 3) and δ(t − t0) ∗ v(t) = v(t − t0), it follows from (2.15) that

y(t) = x(t) ∗ h(t) = ẋ(t) ∗ hint(t) = hint(t − 1) − hint(t − 3)
= ½(t + 1)² u(t + 1) − ½[(t + 1)² − 1] u(t)
− ½(t − 1)² u(t − 1) + ½[(t − 1)² − 1] u(t − 2),

which is exactly the same as that in Example 2.4.
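The closed-form y(t) above can be cross-checked against a direct Riemann-sum evaluation of the convolution integral; a sketch:

```python
def y_analytic(t):
    # Closed-form result of Example 2.6
    u = lambda s: 1.0 if s >= 0 else 0.0
    return (0.5 * (t + 1) ** 2 * u(t + 1) - 0.5 * ((t + 1) ** 2 - 1) * u(t)
            - 0.5 * (t - 1) ** 2 * u(t - 1) + 0.5 * ((t - 1) ** 2 - 1) * u(t - 2))

def y_numeric(t, dt=1e-3):
    # Riemann sum for (x * h)(t) with x(t) = u(t-1) - u(t-3) and
    # h(t) = (t + 2)[u(t+2) - u(t+1)]
    acc, tau = 0.0, 1.0
    while tau < 3.0:                 # x(tau) = 1 only on (1, 3)
        s = t - tau
        if -2.0 <= s < -1.0:         # support of h
            acc += (s + 2.0) * dt
        tau += dt
    return acc

max_err = max(abs(y_numeric(tv) - y_analytic(tv))
              for tv in (-1.5, 0.0, 0.5, 1.0, 2.5))
```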

Example 2.7: Consider an LTI system with the unit impulse response h(t) given in Figure 2.12. Compute the system response y(t) to the input x(t) depicted in Figure 2.12.

Fig. 2.12: The waveforms for h(t) and x(t) in Example 2.7.

Solution: Note x(t) = w2(t) − w1(t − 1.5) and h(t) = w1(t − 2). As ẋ(t) = δ(t + 1) − 2δ(t − 1) + δ(t − 2), one has

g(t) ≜ ẋ(t) ∗ h(t) = h(t + 1) − 2h(t − 1) + h(t − 2),

which is sketched in Figure 2.13.

Fig. 2.13: The waveforms for g(t) = ẋ(t) ∗ h(t) and y(t) in Example 2.7.

Noting y(−∞) = 0, we have y(t) = gint(t) + y(−∞) = ∫_{−∞}^{t} g(τ)dτ, which can be easily obtained based on g(t). See Figure 2.13.

2.4 Causality and stability of LTI systems

One of the most important conclusions we have reached is that an LTI system is completely characterized by its unit impulse response. A natural question is how properties of the system such as causality and stability are related to this response. The following theorem concerns causality.

Theorem 2.3: An LTI system with unit impulse response h is causal if and only if its unit impulse response satisfies

h(t) = 0, ∀ t < 0;   h[n] = 0, ∀ n < 0.   (2.17)

Proof: We prove the theorem for the class of discrete-time LTI systems; the proof for the continuous-time counterpart can be done in the same way. Let us show necessity first. As mentioned before, a linear system is causal if and only if the condition of initial rest holds. See (1.49). Note x[n] = δ[n] → y[n] = h[n]. As δ[n] = 0, ∀ n ⩽ −1, we must have h[n] = 0, ∀ n ⩽ −1 if the system is causal. Now, let us turn to sufficiency. In fact, for any input x[n] we have

y[n] = x[n] ∗ h[n] = ∑_{k=−∞}^{+∞} x[k]h[n − k].

If h[n] satisfies (2.17), then h[n − k] = 0 for all n − k < 0, i.e. n < k. Therefore,

y[n] = ∑_{k=−∞}^{n} x[k]h[n − k],

which shows that y[n] depends only on the values of x[k] for k ⩽ n, and hence the system is causal. This completes the proof.

The following theorem shows how the unit impulse response of an LTI system is related to the system's stability.

Theorem 2.4: An LTI system with unit impulse response h is stable if and only if its unit impulse response satisfies

∫_{−∞}^{+∞} |h(t)| dt < +∞;   ∑_{n=−∞}^{+∞} |h[n]| < +∞.   (2.18)
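Condition (2.18) is easy to illustrate with the one-sided exponential h[n] = a^n u[n]: the sum of |a|^n converges exactly when |a| < 1. A small sketch:

```python
# Partial sums of sum_n |h[n]| for h[n] = a^n u[n]
def abs_partial_sum(a, terms=200):
    return sum(abs(a) ** n for n in range(terms))

stable_sum = abs_partial_sum(0.9)    # converges towards 1/(1 - 0.9) = 10
unstable_sum = abs_partial_sum(1.1)  # keeps growing with the number of terms
```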

Proof: Let us prove it for the continuous-time case. The proof for discrete-time LTI systems can be done in a similar manner.

Necessity: Assume that the system is stable. Then define a bounded signal x(t) as follows:

x(t) ≜ { 1, ∀ t : h(−t) ⩾ 0;  −1, ∀ t : h(−t) < 0. }

For such a bounded (by one) input signal, the corresponding output y(t) should be bounded for all t, particularly for t = 0: |y(0)| = ∫_{−∞}^{+∞} |h(τ)|dτ.

… yields the real-valued form

xN(t) ≜ c[0] + ∑_{m=1}^{N} a[m] cos(mω0 t + ϕ[m]).   (3.12)

Example 3.1: Consider the three signals p1(t), p2(t) and x(t) shown in Figure 3.3, which are all periodic with a period of one. Given that p1(t) = cos(2πt) for −1/4 < t < 1/4 (and zero over the rest of the period), p2(t) = t for 0 < t < 1, and x(t) = p1(t) + p2(t), compute the optimal coefficients c[k] using (3.9) for each of the signals.

Solution: First of all, the fundamental frequency is ω0 = 2π/T0 = 2π. According to (3.9), the optimal coefficients for p1(t) are:³

c1[k] = (1/T0) ∫_{−T0/2}^{T0/2} p1(t)e^{−jkω0t}dt = ∫_{−1/4}^{1/4} cos(2πt)e^{−jkω0t}dt = · · ·
= { ¼[sinc((k − 1)/2) + sinc((k + 1)/2)], k ≠ 1;  ¼, k = 1, }  ∀ k ⩾ 0,

where sinc(t) ≜ sin(tπ)/(tπ).

3 Hint: Use Euler's formula for the integration.
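The closed form for c1[k] can be sanity-checked numerically. In the sketch below, p1(t) is taken to be zero outside (−1/4, 1/4), matching the integration limits used above; the midpoint rule approximates (3.9).

```python
import cmath
import math

def sinc(t):
    return 1.0 if t == 0 else math.sin(math.pi * t) / (math.pi * t)

def c1_numeric(k, steps=20000):
    # Midpoint rule for (1/T0) * integral of p1(t) e^{-j k w0 t} dt, T0 = 1,
    # w0 = 2*pi; the integrand is nonzero only on (-1/4, 1/4).
    dt = 0.5 / steps
    acc = 0.0 + 0.0j
    for i in range(steps):
        t = -0.25 + (i + 0.5) * dt
        acc += math.cos(2 * math.pi * t) * cmath.exp(-2j * math.pi * k * t) * dt
    return acc

def c1_closed(k):
    if k == 1:
        return 0.25
    return 0.25 * (sinc((k - 1) / 2) + sinc((k + 1) / 2))

max_err = max(abs(c1_numeric(k) - c1_closed(k)) for k in range(5))
```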

Fig. 3.3: Three periodic signals in Example 3.1.

Similarly, for p2(t), using integration by parts we have

c2[k] = (1/T0) ∫_{0}^{T0} p2(t)e^{−jkω0t}dt = ∫_{0}^{1} t e^{−jkω0t}dt = · · ·
= { 1/2, k = 0;  j/(2kπ), k ≠ 0, }  ∀ k ⩾ 0.

As to the third signal x(t) = p1(t) + p2(t), the corresponding optimal coefficients are

c[k] = (1/T0) ∫_{T0} x(t)e^{−jkω0t}dt = (1/T0) ∫_{T0} [p1(t) + p2(t)]e^{−jkω0t}dt = c1[k] + c2[k], ∀ k.

In general, the coefficients of x(t) = ∑m αm pm(t) are c[k] = ∑m αm cm[k], where pm(t) = pm(t + T0), ∀ t, m and cm[k] is the set of the optimal coefficients of pm(t).

Now, let us consider the approximation of the signal x(t) in Example 3.1 with different N. For a given N, we can compute c[k] = c1[k] + c2[k] first and then obtain the best approximation xN(t) with (3.2). For example, for N = 4, calculations show that c[0] = 0.8183, c[1] = 0.5927e^{j0.5669}, c[2] = 0.2653e^{j0.6435}, c[3] = 0.1061e^{j1.5708}, c[4] = 0.0902e^{j2.0608}, and hence c[−k] = c∗[k] can be obtained for k = 1, 2, 3, 4. The corresponding x4(t) is shown in Figure 3.4(b). One observes that the best x4(t) is quite different from the original signal x(t). As indicated by (3.10), the best xN(t) gets closer to x(t) as N increases. The same procedure is applied for N = 115, and the corresponding best approximation x115(t) is given in Figure 3.5(b). As seen, there is no significant difference between x(t) and x115(t). So, it seems that the optimized xN(t) converges to some signal that is very close to x(t).

Fourier series: Let x(t) be a periodic signal. With a constant T0 > 0 such that x(t + T0) = x(t), ∀ t, and c[k] computed using (3.9), the series

x∞(t) ≜ ∑_{k=−∞}^{+∞} c[k] e^{jk(2π/T0)t}   (3.13)

is called the Fourier series (FS) of x(t), expanded in {e^{jk(2π/T0)t}}, while the c[k], ∀ k are named the corresponding FS coefficients. One fundamental question is: what is the relationship between x(t) and its FS x∞(t)? In what follows, two somewhat different results are presented without proof.

Theorem 3.1: Let x∞(t) = ∑_{k=−∞}^{+∞} c[k]e^{jk(2π/T0)t} be the FS of a periodic signal x(t). If

∫_{T0} |x(t)|² dt < +∞ …

– c[k] = 0 for all |k| > 1;
– the signal y(t), whose FS coefficients are d[k] = e^{−jπk/2}c[k], is odd;⁵
– (1/4) ∫_{−2}^{2} |x(t)|² dt = 1/2.

Solution: The first condition simply tells us that

x(t) = ∑k c[k]e^{jω0kt},  ω0 = 2π/T0 = π/2.

With c[k] = rk e^{jϕk}, the 2nd condition implies c[−k] = c∗[k] = rk e^{−jϕk}. It follows from the 3rd condition that x(t) can be further specified as

x(t) = c[−1]e^{−jω0t} + c[0] + c[1]e^{jω0t} = c[0] + 2r1 cos(ω0t + ϕ1).

The 4th condition suggests

y(t) = d[−1]e^{−jω̃0t} + d[0] + d[1]e^{jω̃0t} = c[0] + 2r1 sin(ω̃0t + ϕ1),

where ω̃0 may be different from ω0. As given, y(−t) = −y(t), which leads to c[0] = −2r1 sin ϕ1 cos(ω̃0t). Noting the left side is constant, we must have r1 sin ϕ1 = 0, leading to c[0] = 0 and r1 = 0 or ϕ1 = mπ. The last condition, according to Parseval's relation (3.16), is equivalent to

1/2 = ∑k |c[k]|² = |c[−1]|² + |c[1]|² = 2r1²,

giving r1 = 1/2. Therefore,

x(t) = 2r1 cos(ω0t + ϕ1) = cos(ω0t + mπ) = (−1)^m cos(ω0t).

5 A function f(t) is said to be odd if f(−t) = −f(t), ∀ t and even if f(−t) = f(t), ∀ t.


3.3 Fourier series for discrete-time periodic signals

Let x[n] be a discrete-time periodic signal satisfying

x[n] = x[n + N0], ∀ n ∈ Z,   (3.19)

for some positive integer N0.⁶ Look at the FS pair (3.17). The first equation implies that a continuous-time periodic signal x(t) satisfying x(t + T0) = x(t) can be decomposed into a linear combination of the basis signals e^{jω0kt}. The question we may ask is whether a similar conclusion holds for the periodic signal x[n]. First of all, consider the following infinite set of basis signals {e^{jkΩ0n}, k ∈ Z}, where Ω0 ≜ 2π/N0. As e^{j(k+mN0)Ω0n} = e^{jkΩ0n} for any integers k, m, there are only N0 different elements in this set, namely {e^{jkΩ0n}, k = 0, 1, · · · , N0 − 1}.

Let x̃[n] be a linear combination of the N0 basis signals e^{jkΩ0n}:

x̃[n] = ∑_{k=0}^{N0−1} Xp[k]e^{jkΩ0n},

where Xp[k], k = 0, 1, · · · , N0 − 1 are complex constants. As e^{jkΩ0(n+N0)} = e^{jkΩ0n}, ∀ n, k ∈ Z, x̃[n] is periodic and satisfies x̃[n + N0] = x̃[n].

The question posed above can now be made precise. For any periodic x[n] satisfying (3.19), is it possible to find an x̃[n] such that x[n] = x̃[n], ∀ n? In other words, does there exist a set of Xp[k] such that

x[n] = ∑_{k=0}^{N0−1} Xp[k]e^{jkΩ0n}   (3.20)

for n = 0, 1, · · · , N0 − 1? Intuitively, the answer seems positive, as (3.20) yields N0 linear equations with the same number of unknowns. Take N0 = 3 as an example. (3.20) means

x[0] = e^{j0Ω0·0}Xp[0] + e^{j1Ω0·0}Xp[1] + e^{j2Ω0·0}Xp[2]
x[1] = e^{j0Ω0·1}Xp[0] + e^{j1Ω0·1}Xp[1] + e^{j2Ω0·1}Xp[2]
x[2] = e^{j0Ω0·2}Xp[0] + e^{j1Ω0·2}Xp[1] + e^{j2Ω0·2}Xp[2].

6 Here, N0 may be any multiple of the fundamental period of x[n].

Denote rk ≜ e^{jkΩ0}, so that rk^n = e^{jkΩ0n}, k, n = 0, 1, 2. The above equations can be rewritten as

[ 1    1    1   ] [ Xp[0] ]   [ x[0] ]
[ r0   r1   r2  ] [ Xp[1] ] = [ x[1] ]
[ r0²  r1²  r2² ] [ Xp[2] ]   [ x[2] ].

As observed, the 3 × 3 matrix is a Vandermonde matrix. Such a matrix is nonsingular as long as rm ≠ rk for all k ≠ m, which obviously holds here. Therefore, for any given x[0], x[1], x[2] we can uniquely determine the coefficients Xp[0], Xp[1], Xp[2] with a matrix inverse.

Alternatively, multiplying both sides of (3.20) with e^{−jmΩ0n} results in

x[n]e^{−jmΩ0n} = ∑_{k=0}^{N0−1} Xp[k]e^{jkΩ0n}e^{−jmΩ0n},

and hence

∑_{n=0}^{N0−1} x[n]e^{−jmΩ0n} = ∑_{n=0}^{N0−1} ∑_{k=0}^{N0−1} Xp[k]e^{j(k−m)Ω0n}
= ∑_{k=0}^{N0−1} Xp[k] [∑_{n=0}^{N0−1} e^{j(k−m)Ω0n}] ≜ ∑_{k=0}^{N0−1} Xp[k]ρ[k, m],

where ρ[k, m] is given by the following finite geometric series:

ρ[k, m] = ∑_{n=0}^{N0−1} e^{j(k−m)Ω0n} = ∑_{n=0}^{N0−1} (e^{j(k−m)Ω0})^n
= { N0, e^{j(k−m)Ω0} = 1;  [1 − (e^{j(k−m)Ω0})^{N0}]/[1 − e^{j(k−m)Ω0}], e^{j(k−m)Ω0} ≠ 1. }

Note e^{j(k−m)Ω0} = 1 if and only if (k − m)Ω0 = 2π(k − m)/N0 = 2πl for some integer l. As 0 ⩽ k, m < N0, there must be k = m, leading to l = 0. Because Ω0 = 2π/N0, (e^{j(k−m)Ω0})^{N0} = 1 for any k, m. Therefore,

ρ[k, m] = { N0, k = m;  0, k ≠ m }

and hence

∑_{n=0}^{N0−1} x[n]e^{−jmΩ0n} = ∑_{k=0}^{N0−1} Xp[k]ρ[k, m] = Xp[m]ρ[m, m] = N0 Xp[m].

The optimal coefficients are therefore given by

Xp[m] = (1/N0) ∑_{n=0}^{N0−1} x[n]e^{−jmΩ0n},  m = 0, 1, . . . , N0 − 1.   (3.21)

Discrete-time Fourier series: The linear combination on the right side of (3.20) is called the discrete-time FS (DTFS) of x[n], and the Xp[m] given by (3.21) are the DTFS coefficients of x[n]. Clearly, the mapping specified by the two equations is one-to-one and is denoted with the notation x[n] ↔ Xp[m].

In the continuous-time FS case, c[k] = (1/T0) ∫_{τ}^{T0+τ} x(t)e^{−jkω0t}dt, ∀ k has nothing to do with τ. Does the following hold for any n0?

Xp[m] = (1/N0) ∑_{n=n0}^{N0−1+n0} x[n]e^{−jmΩ0n}, ∀ m.   (3.22)

The answer is positive, due to the fact that f[n] ≜ x[n]e^{−jmΩ0n} satisfies f[n] = f[n + N0], ∀ n. In fact, if f[n] is a periodic signal of period N0, the summation of any consecutive N0 samples of f[n] is the same, that is, ∑_{n=n0}^{N0−1+n0} f[n] = ∑_{n=0}^{N0−1} f[n].

Though Xp[m] is defined for m = 0, 1, · · · , N0 − 1, it is sometimes convenient to think of Xp[m] as being defined for all integers m. In that case, it follows from (3.21) that Xp[m ± N0] = Xp[m], ∀ m, i.e. Xp[m] is also periodic. Comparing (3.21) with (3.20), we can see that x[m] gives the DTFS coefficients of the periodic signal N0 Xp[−n]. Having established the theory of the DTFS, we now consider some examples.

Example 3.4: Let x[n] be periodic with a period of N0. Suppose x[n] = ρ^n, n = 0, 1, · · · , N0 − 1, where ρ is a constant. Compute the DTFS of x[n] for N0 = 5, ρ = 1/2.
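Property (3.22), the freedom in choosing the summation window, can be checked in a few lines; a sketch with an arbitrary periodic sequence:

```python
import cmath

def Xp_window(x, m, n0):
    # (3.22): analysis sum over n = n0 .. n0 + N0 - 1, with x extended
    # periodically via the index n % N0
    N0 = len(x)
    W0 = 2 * cmath.pi / N0
    return sum(x[n % N0] * cmath.exp(-1j * m * W0 * n)
               for n in range(n0, n0 + N0)) / N0

x = [1.0, 2.0, -1.0, 0.5]
shift_err = max(abs(Xp_window(x, m, 0) - Xp_window(x, m, n0))
                for m in range(4) for n0 in (1, 3, 7, -2))
```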

Xp [m] =

N −1

1 0 1 0 ∑ x[n]e−jmΩ0 n = ∑ ρ n e−jmΩ0 n N0 n=0 N0 n=0

N −1 { { 1, 1 0 ∑ (ρ e−jmΩ0 )n = { (1 − ρ N0 )/N0 = { N0 n=0 , −jmΩ0 { 1 − ρe

For N0 = 5, ρ = 1/2, we have Ω0 = Xp [m] =

[1 − ( 12 )5 ]/5 1 − 12 e−jmΩ0

=

1−0.5 cos(mΩ )

2π 5

ρ e−jmΩ0 = 1 ρ e−jmΩ0 ≠ 1.

and [1 − ( 12 )5 ]/5

√(1 − 0.5 cos(mΩ0 ))2 + 0.52 cos2 (mΩ0 )

ejϕ [m] ,

N −1

0 Xp [m]ejmΩ0 n with with ϕ [m] = −atan 0.5 sin(mΩ )0 . The DTFS of x[n] is given by ∑m=0 0 Xp [m] obtained above.

Computation of Xp [m] from x[n] and that of x[n] from Xp [m] sometimes can be done easily by inspection. This is demonstrated with the following two examples.
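The closed form obtained in Example 3.4 agrees with a direct evaluation of (3.21); a numerical sketch for N0 = 5, ρ = 1/2:

```python
import cmath

N0, rho = 5, 0.5
W0 = 2 * cmath.pi / N0

def Xp_direct(m):          # analysis equation (3.21) evaluated term by term
    return sum(rho ** n * cmath.exp(-1j * m * W0 * n) for n in range(N0)) / N0

def Xp_closed(m):          # geometric-series result from Example 3.4
    r = rho * cmath.exp(-1j * m * W0)
    return 1.0 + 0.0j if abs(r - 1) < 1e-12 else (1 - rho ** N0) / (N0 * (1 - r))

closed_err = max(abs(Xp_direct(m) - Xp_closed(m)) for m in range(N0))
```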

Example 3.5: Let x[n] = cos(πn/8 − π/6) + 2 cos(πn/4 + π/4). See Figure 3.7. Find the DTFS coefficients of x[n].

Fig. 3.7: Waveform of the periodic x[n] for one period in Example 3.5.

Solution: First, N1 = 16, N2 = 8 ⇒ N0 = 16. The DTFS coefficients Xp[k] can be obtained either by (3.21) or from

x[n] = cos(πn/8 − π/6) + 2 cos(πn/4 + π/4)
= ½[e^{j(πn/8−π/6)} + e^{−j(πn/8−π/6)}] + [e^{j(πn/4+π/4)} + e^{−j(πn/4+π/4)}].

As Ω0 = 2π/N0 = π/8, we have

x[n] = (e^{−jπ/6}/2)e^{jΩ0n} + (e^{jπ/6}/2)e^{−jΩ0n} + e^{jπ/4}e^{j2Ω0n} + e^{−jπ/4}e^{−j2Ω0n}
= (e^{−jπ/6}/2)e^{jΩ0n} + (e^{jπ/6}/2)e^{j(N0−1)Ω0n} + e^{jπ/4}e^{j2Ω0n} + e^{−jπ/4}e^{j(N0−2)Ω0n}.

Comparing the above with x[n] = ∑_{k=0}^{N0−1} Xp[k]e^{jkΩ0n} leads to Xp[1] = e^{−jπ/6}/2, Xp[15] = e^{jπ/6}/2, Xp[2] = e^{jπ/4}, Xp[14] = e^{−jπ/4}, and Xp[k] = 0 for k = 0, 3, 4, . . . , 13. The magnitude and angle sequences of Xp[m] are plotted in Figure 3.8.

Example 3.6: Let Xp[m] = cos(2m·2π/11) + 2j sin(3m·2π/11) be the DTFS coefficients of a signal x[n]. What is the corresponding signal x[n]?

Fig. 3.8: The DTFS coefficients Xp[m] = |Xp[m]|e^{jϕ[m]} of the periodic x[n] in Example 3.5.

Solution: The first thing to do is to determine the parameter N0 such that x[n + N0] = x[n] and Xp[m + N0] = Xp[m]. Note Xp[m + 11] = Xp[m]; we take N0 = 11 and hence Ω0 = 2π/11. So Xp[m] can be rewritten as

Xp[m] = cos(2mΩ0) + 2j sin(3mΩ0)
= ½[e^{−j2mΩ0} + e^{j2mΩ0}] + [−e^{−j3mΩ0} + e^{j3mΩ0}]
= ½[e^{−j2mΩ0} + e^{−j(11−2)mΩ0}] + [−e^{−j3mΩ0} + e^{−j(11−3)mΩ0}].

Comparing this with (3.21), we have x[2] = x[9] = 11/2, x[8] = 11, x[3] = −11, and x[n] = 0 for other n. It should be noted that the solution depends on the choice of N0. For this example, we could take N0 = 22, and a different solution would be obtained in the same way.
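The inverse-DTFS reading in Example 3.6 can be verified by feeding the recovered x[n] back into the analysis equation (3.21); a sketch:

```python
import cmath

N0 = 11
W0 = 2 * cmath.pi / N0

# Samples recovered in Example 3.6 (zero elsewhere within one period)
x = [0.0] * N0
x[2], x[9], x[3], x[8] = 5.5, 5.5, -11.0, 11.0

def Xp(m):                 # analysis equation (3.21)
    return sum(x[n] * cmath.exp(-1j * m * W0 * n) for n in range(N0)) / N0

def Xp_given(m):           # coefficients stated in the example
    return cmath.cos(2 * m * W0) + 2j * cmath.sin(3 * m * W0)

check_err = max(abs(Xp(m) - Xp_given(m)) for m in range(N0))
```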

3.4 Why should a signal be transformed?

Signals are usually represented in the time domain. An electrical signal can be observed on the screen of an oscilloscope, with the vertical axis for the magnitude of the signal and the horizontal axis for time.

What does the DTFS (3.20) tell us? It says that a signal x[n] (in the time domain) can be completely recovered from its DTFS coefficients Xp[m] as long as Ω0 is given. So, Xp[m] is an alternative representation of the signal x[n]. Roughly speaking, a signal transformation refers to the procedure for representing signals in an alternative way. For example, (3.22) defines a signal transformation that changes the signal representation from x[n] to Xp[m]. The procedure for recovering x[n] from the alternative representation is called the inverse transformation. So, (3.20) is usually referred to as the inverse DTFS of the sequence Xp[m], which is actually used to synthesize the signal in the time domain. Generally speaking, a signal transformation looks at signals from a different angle. It is usually invertible, and hence no information is lost. A fundamental question one may ask is why a signal should be transformed if both representations yield one and the same signal. What kind of advantages can we gain from a transformation? Let us consider two application examples, which give some insight into this question.

Case I: Data compression: Look at Example 3.5. The periodic signal is represented by 16 samples (one period), as shown in Figure 3.7. Consider the following two ways to store the 16 samples digitally.
– Direct storage: each sample is represented with 16 bits, leading to a memory size of 16 × 16 = 256 bits.

Indirect storage: As the signal is known to be periodic with N0 = 16, one can computed the DTFS transform, that is Xp [m]. As obtained before, there are 4 nonzero coefficients only and since the signal is real, Xp [15] = Xp∗ [1] and Xp [14] = Xp∗ [2], which means that the four complex coefficients can be represented by four realvalued numbers, say |Xp [1]|, ϕ [1], |Xp [2]|, ϕ [2]. If the same 16-bit implementation is used for each of the four real-valued parameters, 16 × 4 = 64 bits are required to represent the same signal. So, a data compression ratio of 4 is achieved with the transform domain representation.
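This bookkeeping can be sketched in a few lines of NumPy. The signal below is a hypothetical stand-in with the same structure as Example 3.5 (real, period 16, only two harmonics); the exact samples of that example are not reproduced here:

```python
import numpy as np

# Hypothetical real periodic signal of period N0 = 16 with only two harmonics
# (a stand-in for Example 3.5, whose exact samples are not reproduced here).
N0 = 16
n = np.arange(N0)
x = 2.0 * np.cos(2 * np.pi * n / N0 + 0.3) + np.cos(4 * np.pi * n / N0 - 0.7)

Xp = np.fft.fft(x) / N0                      # DTFS coefficients of one period
nz = np.flatnonzero(np.abs(Xp) > 1e-9)
print(nz)                                     # indices 1, 2, 14, 15: only 4 nonzero terms

# Real signal => coefficients come in conjugate pairs, so 4 real numbers suffice:
print(np.allclose(Xp[15], np.conj(Xp[1])), np.allclose(Xp[14], np.conj(Xp[2])))

print((16 * N0) // (16 * 4))                  # direct bits / indirect bits = 4
```

The conjugate-pair structure is what lets the four complex coefficients be stored as four real-valued numbers.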

A piece of music is generated as a sequence of notes such as Do Re Mi Fa Sol La Si. As observed from Figure 3.1, a note, when represented as a signal, is periodic with a fixed fundamental frequency. If we can store one period of the signal for each note in a computer, then we are able to synthesize a piece of music with the computer. Here we encounter the same data compression issue just discussed above. Figure 3.9 shows the plot of |c[m]| for one note. As seen, there are only a few significant nonzero FS coefficients that need to be coded for this note. Therefore, we can code the signals for all the notes efficiently using the transform-domain coding explained above.

Fig. 3.9: Plot of |c[m]| for the two signals k(t) and r(t) in Figure 3.1.

Case II: Frequency estimation: This concerns estimating the sinusoidal components underlying a signal, a very important area of research that is encountered in many applications. Can you tell how many sinusoidal components underlie each of the two signals given in Figure 3.10? As seen, both signals are periodic with T0 = 1. Applying the FS (transform) (3.9), one would have
– x1(t): c(1) = c(−1) = 1/2 and c(k) = 0, ∀k ≠ ±1. This indicates that there is one (real-valued) sinusoidal component with a frequency of 1 Hz in x1(t). In fact,

  x1(t) = (1/2)e^{−j2πt} + (1/2)e^{j2πt} = cos(2πt);

– x2(t): c(±2) = 1/2, c(±3) = 1/4, and c(k) = 0, ∀k ≠ ±2, ±3, implying that there are two (real-valued) sinusoidal components in x2(t) and that the corresponding frequencies are 2 Hz and 3 Hz, respectively:

  x2(t) = cos(2π × 2t) + (1/2)cos(2π × 3t).

Thus, with the FS transform one can identify the frequency components underlying a periodic signal as long as the period is known. But what to do if x(t) is aperiodic,

Fig. 3.10: (a) x1 (t); (b) x2 (t).

say x(t) = cos(2π × √2 t) + (1/2)cos(2π × 2t), as the FS transform is not applicable to such signals? A more general transform is needed. In the next section, we derive the Fourier transform based on the FS developed in the previous section.
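The FS-based frequency estimation of Case II can be reproduced numerically. The sketch below evaluates c(k) = (1/T0) ∫ x2(t)e^{−j2πkt/T0} dt for x2(t) = cos(2π·2t) + (1/2)cos(2π·3t) with a uniform Riemann sum over one period (the sample count 4096 is an arbitrary choice):

```python
import numpy as np

# FS coefficients of x2(t) = cos(2*pi*2t) + 0.5*cos(2*pi*3t), period T0 = 1,
# via c(k) = (1/T0) * integral over one period of x2(t) e^{-j 2 pi k t / T0} dt.
T0 = 1.0
N = 4096
t = np.arange(N) * T0 / N
x2 = np.cos(2 * np.pi * 2 * t) + 0.5 * np.cos(2 * np.pi * 3 * t)

def fs_coeff(k):
    # uniform Riemann sum over exactly one period
    return np.mean(x2 * np.exp(-2j * np.pi * k * t / T0))

for k in range(-4, 5):
    c = fs_coeff(k)
    if abs(c) > 1e-9:
        print(k, abs(c))   # only k = -3, -2, 2, 3 survive: frequencies 2 Hz and 3 Hz
```

Only the coefficients at k = ±2 (magnitude 1/2) and k = ±3 (magnitude 1/4) are nonzero, matching the analysis above.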

3.5 Fourier transform for continuous-time signals Let x(t) be an arbitrary signal, as depicted in Figure 3.11, and wT(t) the rectangular window function as defined before. Denote x0(t) ≜ x(t)wT(t). We then construct a periodic signal xp(t):

xp(t) ≜ ∑_{k=−∞}^{+∞} x0(t − kT).

Fig. 3.11: Waveforms of x(t), x0 (t) and xp (t).

It follows from the FS pair that⁷

xp(t) = ∑_{m=−∞}^{+∞} c[m]e^{jm(2π/T)t}, ∀t,

where the FS coefficients are given by

c[m] = (1/T) ∫_{−T/2}^{T/2} x(t)e^{−jm(2π/T)t} dt, ∀m.

Denote

XT(jω) ≜ ∫_{−T/2}^{T/2} x(t)e^{−jωt} dt,

where ω is a continuous variable, −∞ < ω < +∞.

e^{−γt}u(t), Re(γ) > 0 ↔ 1/(jω + γ). (3.27)

This is a very important FT pair. The FT X(jω) of x(t) is usually referred to as the spectrum of the signal. It is a complex-valued function of ω and can be represented as X(jω) = |X(jω)|e^{jϕx(ω)}, where |X(jω)| and ϕx(ω) are called the magnitude spectrum and phase spectrum of the signal x(t), respectively. For the example above, we have

|X(jω)| = 1/√([Re(γ)]² + [ω + Im(γ)]²),  ϕx(ω) = −atan[(ω + Im(γ))/Re(γ)],

which are sketched in Figure 3.12 for γ = 1. Using the same procedure, we can obtain the following FT pair:

−e^{−βt}u(−t), Re(β) < 0 ↔ 1/(jω + β), (3.28)

which is another important FT pair.

Example 3.8: Compute the FT of the window function x(t) = κwτ(t), where κ is constant.

Solution: Based on the definition, the FT of the signal is

X(jω) = ∫_{−∞}^{+∞} x(t)e^{−jωt} dt = ∫_{−∞}^{+∞} κwτ(t)e^{−jωt} dt = κ ∫_{−τ/2}^{τ/2} e^{−jωt} dt,

Fig. 3.12: Signal spectrum for Example 3.7 with γ = 1. (a) |X(jω)|; (b) ϕx(ω).

which leads to κwτ(t) ↔ κτ sin(ωτ/2)/(ωτ/2), yielding the following important FT pair:

wτ(t) ↔ τ sτ(ω), (3.29)

where sτ(ω) is the sinc function defined in (1.34). Figure 3.13 depicts X(jω) for different τ. One observes that when τ gets bigger (i.e., the window gets wider), the main lobe of the signal spectrum becomes narrower, and vice versa. With κ = τ^{−1} in (3.29), we have the FT pair lim_{τ→0} κwτ(t) ↔ lim_{τ→0} sin(ωτ/2)/(ωτ/2), which leads to the following important FT pair:

δ(t) ↔ 1. (3.30)

It is easy to see that the unit impulse signal δ(t) is neither an energy signal nor does it meet the Dirichlet conditions, yet its FT exists. In fact, with ∫_{−∞}^{+∞} δ(ξ)f(ξ)dξ = f(0) in mind, a direct evaluation of (3.23) also shows that the FT of δ(t) is 1. What is the signal x(t) whose FT is X(jω) = 2πδ(ω)? Using the IFT equation (3.24), one has

x(t) = (1/2π) ∫_{−∞}^{+∞} X(jω)e^{jωt} dω = e^{j0t} = 1,

which implies

1 ↔ 2πδ(ω). (3.31)
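The window pair (3.29) is easy to verify numerically. The sketch below evaluates the FT integral of wτ(t) (assumed here to equal 1 for |t| ⩽ τ/2 and 0 otherwise) by a midpoint Riemann sum and compares it with τ sin(ωτ/2)/(ωτ/2):

```python
import numpy as np

# Numerical check of (3.29): FT of the width-tau window vs tau*sin(w*tau/2)/(w*tau/2).
tau = 2.0
N = 200000
dt = tau / N
t = -tau / 2 + (np.arange(N) + 0.5) * dt      # midpoint grid over the window support

w = np.linspace(-20, 20, 81)
w = np.where(w == 0, 1e-12, w)                # dodge the removable singularity at w = 0

X_num = np.array([np.sum(np.exp(-1j * wk * t)) * dt for wk in w])
X_ref = tau * np.sin(w * tau / 2) / (w * tau / 2)

print(np.max(np.abs(X_num - X_ref)))          # tiny: the numerical FT matches the sinc
```

The agreement is to within the accuracy of the midpoint rule on this grid.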

Fig. 3.13: Signal spectrum for Example 3.8 with κ = τ^{−1}. (a) X(jω) with τ = 1; (b) X(jω) with τ = 0.01.

3.5.1 Properties of Fourier transform The content of Fourier analysis discussed in this book essentially consists of four Fourier representations: FS, DTFS, FT and DTFT, the latter standing for the discrete-time Fourier transform that will be studied in the next section. As they are all based on signal decompositions in terms of complex sinusoids, they share a set of properties that follow from the characteristics of such sinusoids. As the proofs of a common property shared by the four transforms are very similar, in this book we just focus on discussing these properties for the continuous-time Fourier transform.

P.1. Linearity: If xk(t) ↔ Xk(jω), ∀k ∈ Z, then for any constants {αk}

x(t) ≜ ∑_k αk xk(t) ↔ X(jω) = ∑_k αk Xk(jω).

For example, for any constant α with Re(α) > 0, the signal x(t) = e^{−α|t|} can be rewritten as x(t) = e^{αt}u(−t) + e^{−αt}u(t) ≜ xl(t) + xr(t). It follows from the FT pairs (3.27) and (3.28) that

−xl(t) ↔ 1/(jω − α),  xr(t) ↔ 1/(jω + α).

The linearity of the FT suggests that

e^{−α|t|} ↔ −1/(jω − α) + 1/(jω + α) = 2α/(ω² + α²).
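The resulting pair e^{−α|t|} ↔ 2α/(ω² + α²) can be spot-checked numerically; the sketch below evaluates the FT integral on a truncated midpoint grid (the truncation length 40 and grid size are arbitrary choices that keep the truncation error negligible for α = 1.5):

```python
import numpy as np

# Numerical check: FT of e^{-alpha|t|} vs the closed form 2*alpha/(w^2 + alpha^2).
alpha = 1.5
T, N = 40.0, 400000
dt = 2 * T / N
t = -T + (np.arange(N) + 0.5) * dt            # midpoint grid on [-T, T]
x = np.exp(-alpha * np.abs(t))

w = np.linspace(-10, 10, 41)
X_num = np.array([np.sum(x * np.exp(-1j * wk * t)) * dt for wk in w])
X_ref = 2 * alpha / (w ** 2 + alpha ** 2)

print(np.max(np.abs(X_num - X_ref)))          # small: matches 2*alpha/(w^2 + alpha^2)
```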

Let us consider the FT of the unit step signal u(t). First of all, define ũ(t) ≜ 1/2 + (1/2)[x1(t) − x2(t)], where

x1(t) = e^{−αt}u(t),  x2(t) = e^{αt}u(−t),  α > 0.

It follows from the FT pairs (3.27) and (3.28) that

x1(t) ↔ X1(jω) = 1/(α + jω),  x2(t) ↔ X2(jω) = 1/(α − jω),

and hence

ũ(t) ↔ Ũ(jω) = πδ(ω) + (1/2)[1/(α + jω) − 1/(α − jω)].

Figure 3.14 shows ũ(t) for different α; in fact, lim_{α→0} ũ(t) = u(t). Based on this observation, one concludes

u(t) ↔ U(jω) = lim_{α→0} Ũ(jω) = { πδ(ω), ω = 0;  1/(jω), ω ≠ 0 },

which can be simply denoted as

u(t) ↔ 1/(jω) + πδ(ω). (3.32)

P.2. Time shift: If x(t) ↔ X(jω), then for any constant τ

y(t) ≜ x(t − τ) ↔ Y(jω) = X(jω)e^{−jωτ}.

As given by (3.30), δ(t) ↔ 1; the time shift property then implies δ(t − τ) ↔ e^{−jωτ}, which can be verified by a direct computation with (3.23). What is the FT of x(t) = e^{−αt}u(t − t0), α > 0?

Fig. 3.14: Waveform of ũ(t). (a) α = 0.1; (b) α = 0.01.

P.3. Frequency shift: If x(t) ↔ X(jω), then for any constant ω0

y(t) ≜ x(t)e^{jω0 t} ↔ Y(jω) = X(j(ω − ω0)).

As known from (3.31), 1 ↔ 2πδ(ω); it then follows from the frequency shift property that

e^{jω0 t} ↔ 2πδ(ω − ω0), (3.33)

which means that a complex sinusoid (in the time domain) is represented in the frequency domain as a shifted Dirac function, with the amount of shift equal to the frequency of the sinusoid. Basically, most of the FT properties can be proved easily just from the definition specified by (3.23), and this is particularly true for the three properties listed above. We leave the proofs of the three properties to the reader.

Example 3.9: Consider the signal x(t) = cos(2π × √2 t) + (1/2)cos(4πt) mentioned at the end of Section 3.4. What is the FT of this signal?

Solution: First of all, as cos(ω0 t) = (1/2)[e^{jω0 t} + e^{−jω0 t}] and e^{jωc t} ↔ 2πδ(ω − ωc) for any constant ωc, the linearity suggests that

cos(ω0 t) ↔ π[δ(ω − ω0) + δ(ω + ω0)],

and hence

cos(2π × √2 t) ↔ π[δ(ω − 2√2π) + δ(ω + 2√2π)],
(1/2)cos(4πt) ↔ (π/2)[δ(ω − 4π) + δ(ω + 4π)].

Therefore, with the linearity we have

X(jω) = π[δ(ω − 2√2π) + δ(ω + 2√2π)] + (π/2)[δ(ω − 4π) + δ(ω + 4π)],

from which the two (angular) frequencies 2√2π and 4π can be detected easily from the peaks of X(jω).

P.4. Time scaling: If x(t) ↔ X(jω), then

x(αt) ↔ (1/|α|)X(jω/α),  α ≠ 0.

In particular, the time-reversed signal x(−t) has the FT X(−jω).

Proof: It follows from

x(t) = (1/2π) ∫_{−∞}^{+∞} X(jω)e^{jωt} dω

that

x(αt) = (1/2π) ∫_{−∞}^{+∞} X(jω)e^{jωαt} dω = (1/2π) ∫_{−∞}^{+∞} (1/|α|)X(jξ/α)e^{jξt} dξ.

Comparing with (3.24), we realize

x(αt) ↔ (1/|α|)X(jω/α).

This completes the proof. As will be seen, we have the following FT pair:

x(t) = (2/(πt)) sin(0.5t) cos(1.5t) ↔ X(jω) = w1(ω + 1.5) + w1(ω − 1.5).

The time-scaling property of the FT is demonstrated with this pair in Figure 3.15 for different values of α.

Fig. 3.15: Demonstration of the time-scaling property of the FT: x(αt) is on the left, while |α|X(jω/α) is on the right. (a)-(b) α = 0.5; (c)-(d) α = 1; (e)-(f) α = 2.

P.5. Duality: If x(t) ↔ X(jω), then

X(−jt) ↔ 2πx(ω)  ⇔  (1/2π)X(jt) ↔ x(−ω).

Before giving the proof, let us specify what this property signifies, using the window signal. As indicated by (3.29), we have

x(t) ≜ w2ω0(t) ↔ X(jω) = 2ω0 sin(ωω0)/(ωω0).

The duality tells us that the following is true:

sin(ω0 t)/(πt) ↔ w2ω0(ω). (3.34)

Figure 3.16 shows graphically the two pairs of FT. It is interesting to note that the FT pairs given by (3.30) and (3.31) obey the duality. Now we give the proof.

Proof: First of all,

x(t) = (1/2π) ∫_{−∞}^{+∞} X(jω)e^{jωt} dω = (1/2π) ∫_{−∞}^{+∞} X(jξ)e^{jξt} dξ.

Fig. 3.16: Demonstration of the duality property. (a)-(b) w2ω0(t) and its FT; (c)-(d) sin(ω0 t)/(πt) and its FT.

By multiplying both sides by 2π and changing t → ω, we have

2πx(ω) = ∫_{−∞}^{+∞} X(jξ)e^{jξω} dξ = ∫_{−∞}^{+∞} X(−jt)e^{−jωt} dt.

Comparing the right side with (3.23), we know immediately that the FT of X(−jt) is 2πx(ω). This completes the proof.

P.6. Conjugate symmetry: If x(t) ↔ X(jω) = |X(jω)|e^{jϕx(ω)}, then

x*(t) ↔ X*(−jω).

Furthermore, if x(t) is real, then X(−jω) = X*(jω). This signifies that
– |X(jω)| is an even function of ω: |X(j(−ω))| = |X(jω)|;
– ϕx(ω) is an odd function of ω: ϕx(−ω) = −ϕx(ω).
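A quick numerical spot-check of this symmetry on an arbitrary real-valued test signal (the particular signal below is just an illustration):

```python
import numpy as np

# Conjugate symmetry for a real signal: X(-jw) should equal X*(jw).
T, N = 20.0, 200000
dt = 2 * T / N
t = -T + (np.arange(N) + 0.5) * dt
x = np.exp(-np.abs(t)) * (1 + 0.5 * np.cos(3 * t))   # some real-valued test signal

def ft(w):
    # direct evaluation of (3.23) by a midpoint sum
    return np.sum(x * np.exp(-1j * w * t)) * dt

for w0 in [0.7, 1.3, 2.9]:
    print(w0, abs(ft(-w0) - np.conj(ft(w0))))         # ~0 at every frequency tested
```

The same check also shows |X(j(−ω))| = |X(jω)|, i.e., that the magnitude spectrum is even.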

These are confirmed by the spectra shown in Figure 3.12. It is for this reason that |X(jω)| and ϕx(ω) are usually plotted for ω ⩾ 0 only.

Proof: From the definition of the FT, the FT of x*(t) is

∫_{−∞}^{+∞} x*(t)e^{−jωt} dt = (∫_{−∞}^{+∞} x(t)e^{jωt} dt)* = X*(−jω),

which is actually the first part of the property. If x(t) is real-valued, then x*(t) = x(t) and hence its FT is X(jω); namely, X*(−jω) = X(jω), or X(−jω) = X*(jω). With X(jω) = |X(jω)|e^{jϕx(ω)}, the latter yields

|X(−jω)|e^{jϕx(−ω)} = |X(jω)|e^{−jϕx(ω)}.


Noting −π < ϕx(ω) ⩽ π, we have |X(j(−ω))| = |X(jω)| and ϕx(−ω) = −ϕx(ω), which ends the proof.

P.7. Differentiation in time: If x(t) ↔ X(jω), then

dx(t)/dt ↔ jωX(jω).

Proof: The result follows from differentiating both sides of (3.24) with respect to the time variable t. The factor jω in the spectrum of the differentiator output implies that differentiation accentuates the high-frequency components of the input signal x(t) and attenuates the low-frequency components; in particular, it diminishes the DC component in x(t). As an LTI system, the differentiator will be discussed in the frequency domain in Chapter 4.

P.8. Differentiation in frequency: If x(t) ↔ X(jω), then

t x(t) ↔ j dX(jω)/dω.

Proof: Differentiating both sides of (3.23) with respect to the frequency variable ω yields

dX(jω)/dω = ∫_{−∞}^{+∞} x(t)(−jt)e^{−jωt} dt,

namely,

j dX(jω)/dω = ∫_{−∞}^{+∞} t x(t)e^{−jωt} dt,

which tells us that the FT of t x(t) is j dX(jω)/dω.

Table 3.1 lists 12 properties of the Fourier transform⁹. The first 8 properties have been proved. Now, let us consider the last four.

P.9. Convolution in time: If v(t) ↔ V(jω) and w(t) ↔ W(jω), then

v(t) ∗ w(t) ↔ V(jω)W(jω).

9 A comprehensive table of all properties for each of the FS, DTFS, FT and DTFT can be found in Appendix C of the textbook by Haykin and Van Veen [2].

Table 3.1: Properties of the Fourier transform. Given FT pair: x(t) ↔ X(jω).

Linearity:                ∑_k αk xk(t) ↔ ∑_k αk Xk(jω)
Time shift:               x(t − τ) ↔ X(jω)e^{−jωτ}
Time scaling:             x(αt) ↔ (1/|α|)X(jω/α)   (α ≠ 0 real)
Frequency shift:          x(t)e^{jω0 t} ↔ X(j(ω − ω0))   (ω0 real)
Derivative in time:       dx(t)/dt ↔ jωX(jω)
Conjugate symmetry:       x*(t) ↔ X*(−jω)
Derivative in frequency:  t x(t) ↔ j dX(jω)/dω
Convolution in time:      x(t) ∗ y(t) ↔ X(jω)Y(jω)
Duality:                  X(jt) ↔ 2πx(−ω)
Multiplication in time:   x(t)y(t) ↔ (1/2π)X(jω) ∗ Y(jω)
Integration in time:      ∫_{−∞}^{t} x(τ)dτ ↔ (1/jω)X(jω) + πX(j0)δ(ω)
Parseval theorem:         ∫_{−∞}^{+∞} x(t)y(t)dt = (1/2π)∫_{−∞}^{+∞} X(jξ)Y(−jξ)dξ

The proof will be given in Chapter 4. As known from Chapter 2, the output of an LTI system is given by the convolution between the unit impulse response of the system and the input in the time domain: y(t) = h(t) ∗ x(t). The convolution property simply tells us that the FT of the system output is the product of the FT of the unit impulse response and that of the input signal:

y(t) = h(t) ∗ x(t) ↔ Y(jω) = H(jω)X(jω).

This yields an alternative way to evaluate the system output: compute Y(jω) as a product of two spectra first, and then find the IFT of Y(jω). More importantly, this can significantly simplify system analysis and provides considerable insight into system characteristics. All this will be revealed in Chapter 4. The last three properties can be derived from the properties discussed above.

P.10. Multiplication in time: As the properties of duality and convolution lead to X(jt) ∗ Y(jt) ↔ 4π²x(−ω)y(−ω), applying duality again yields 4π²x(t)y(t) ↔ 2πX(jω) ∗ Y(jω), which is equivalent to

x(t)y(t) ↔ (1/2π)X(jω) ∗ Y(jω).

P.11. Integration in time: It can be proved with the convolution-in-time property by noting ∫_{−∞}^{t} x(τ)dτ = x(t) ∗ u(t) and (3.32).

P.12. Parseval theorem: It follows directly from multiplication in time, which implies

(1/2π)X(jω) ∗ Y(jω) = ∫_{−∞}^{+∞} x(t)y(t)e^{−jωt} dt.

Noting that

(1/2π)X(jω) ∗ Y(jω) = (1/2π) ∫_{−∞}^{+∞} X(jξ)Y(j(ω − ξ)) dξ,

the property follows by letting ω = 0. In particular, letting y(t) = x*(t), and hence Y(jω) = X*(−jω) (due to the conjugate symmetry property), we have

∫_{−∞}^{+∞} |x(t)|² dt = (1/2π) ∫_{−∞}^{+∞} |X(jξ)|² dξ.

As realized, either side of the last equation is exactly the energy of the signal x(t); Parseval's theorem thus yields an alternative way to evaluate the signal energy. Usually, the function P(ω) defined as

P(ω) ≜ (1/2π)|X(jω)|²

is referred to as the energy spectral density function of x(t). With these properties and the FT pairs obtained before, FT pairs for more complicated signals can be obtained easily. This is demonstrated with the example below.

Example 3.10: In this example, two signals are considered.
– What is the FT of x1(t) = (d/dt){t e^{−αt}u(t) ∗ wτ(t)} with Re(α) > 0? First of all, denote g1(t) ≜ t g2(t) with g2(t) = e^{−αt}u(t) ↔ G2(jω) = 1/(jω + α). With the property of derivative in frequency, we have

g1(t) = t e^{−αt}u(t) ↔ G1(jω) = j dG2(jω)/dω = 1/(jω + α)².

It then follows from the properties of derivative and convolution, both in the time domain, that

X1(jω) = jω G1(jω)Wτ(jω) = jωτ · [sin(ωτ/2)/(ωτ/2)] / (jω + α)².

– Let us consider the FT of x2(t) = t w2(t). Denote g(t) ≜ dx2(t)/dt, that is,

g(t) = w2(t) + t[δ(t + 1) − δ(t − 1)] = w2(t) − δ(t + 1) − δ(t − 1).

Note w2(t) ↔ 2 sin(ω)/ω and δ(t − t0) ↔ e^{−jωt0}. So,

g(t) ↔ G(jω) = 2 sin(ω)/ω − e^{jω} − e^{−jω} = 2 sin(ω)/ω − 2 cos(ω).

Since x2(−∞) = 0,

x2(t) − x2(−∞) = ∫_{−∞}^{t} g(τ)dτ  ⇒  x2(t) = ∫_{−∞}^{t} g(τ)dτ.

With the integration-in-time property and G(j0) = 0, we finally obtain

X2(jω) = G(jω)/(jω) + πG(j0)δ(ω) = G(jω)/(jω).

This problem can also be attacked using the property of derivative in frequency, as there is a factor t in the signal.
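The Parseval relation and the energy spectral density introduced above are easy to check numerically. For x(t) = e^{−t}u(t), whose FT is 1/(jω + 1) by (3.27), both the time-domain integral and the integral of P(ω) should give an energy of 1/2 (grid sizes below are arbitrary choices):

```python
import numpy as np

# Energy of x(t) = e^{-t}u(t) computed two ways: int |x|^2 dt and int P(w) dw.
N = 500000
dt = 50.0 / N
t = (np.arange(N) + 0.5) * dt
energy_time = np.sum(np.exp(-2 * t)) * dt              # int_0^inf e^{-2t} dt = 1/2

M = 2000000
dw = 2000.0 / M
w = -1000.0 + (np.arange(M) + 0.5) * dw
P = (1 / (2 * np.pi)) / (1 + w ** 2)                    # P(w) = |X(jw)|^2 / (2 pi)
energy_freq = np.sum(P) * dw

print(energy_time, energy_freq)                         # both close to 0.5
```

The small residual in the frequency-domain value comes from truncating the integral at |ω| = 1000.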

3.5.2 Inverse Fourier transform Given an FT X(jω), a direct way to find the corresponding signal x(t) is to use (3.24):

x(t) = (1/2π) ∫_{−∞}^{+∞} X(jω)e^{jωt} dω.

This integration can be evaluated with the residue theorem; see Appendix D. Alternatively, it is much more efficient to decompose a complicated X(jω) into a linear combination of well-known FT terms, such as those obtained previously; the IFT is then given by the same linear combination of the IFTs of these terms. This approach is particularly efficient when the FT is given as a ratio of two polynomials in jω. In this connection, the technique of partial-fraction expansion plays a crucial role. For more detailed discussions of this topic, we refer to Appendix E. The procedure is demonstrated with the following example.

Example 3.11: Let

X(jω) = (4 + j3ω)/[(2 + jω)(jω)] + δ(ω − ω0) + e^{−jω}/(jω − 3).

Find the corresponding x(t).

Solution: First of all, using partial-fraction expansions (take ρ = jω; see Appendix E), we have

(4 + j3ω)/[(2 + jω)(jω)] = A/(jω + 2) + B/(jω).

Comparing the coefficients of the numerator polynomials in jω results in A = 1, B = 2, and hence

X(jω) = A/(2 + jω) + B[1/(jω) + πδ(ω)] − Bπδ(ω) + δ(ω − ω0) + e^{−jω}/(jω − 3).

The FT pairs specified by (3.27), (3.28), (3.32), (3.33) and the property of time shift suggest that

x(t) = A e^{−2t}u(t) + B u(t) + (1/2π)[−Bπe^{j0t} + e^{jω0 t}] − e^{3(t−1)}u(1 − t)
     = [e^{−2t} + 2]u(t) − 1 + e^{jω0 t}/(2π) − e^{3(t−1)}u(1 − t).

|

107

It is interesting to note that the duality of Fourier transforms suggest that the IFT 1 of Z(jω ) can be obtained effectively by finding the FT of 2π Z(−jt). The property of multiplication in time is a good example to demonstrate this method.

3.6 The discrete-time Fourier transform One of the remarkable features of the FT we have developed in the previous section for continuous-time signals is that a sinusoid signal of frequency ωs is transformed in frequency domain as an impulse that is located at ω = ωs and hence the number of sinusoidal components underlying in a linear combination of sinusoids and the corresponding frequencies can be easily obtained by inspecting the spectrum by the FT of this combination. Consider a discrete-time sinusoidal signal x[n] = A cos(Ω0 n). Is it possible to find a transform that converts x[n] into some kind of spectrum, say some function of a continuous variable Ω , such that the function yields an impulse at Ω = Ω0 ? The answer is positive. Recall that a continuous-time periodic signal x(t) can be represented completely with its period T0 and its FS coefficients c[n] which actually can be considered as a discrete-time signal. Now, for any given discrete-time signal x[n] one can consider x[−n] as the FS coefficients of a periodic signal in a continuous variable, say Ω , that repeats itself every 2π , and clearly this periodic function is given by its FS below +∞

+∞



∑ x[−n]ejn 2π Ω = ∑ x[n]e−jnΩ .

n=−∞

n=−∞ jΩ

Denote the summation as X(e ).¹⁰ Clearly, with the periodic signal X(ejΩ ) given one can find its FS coefficients x[−n] with π

1 x[−n] = ∫ X(ejΩ )e−jnΩ dΩ . 2π −π

In summary, we have the following: +∞

X(ejΩ ) ≜ ∑ x[n]e−jnΩ n=−∞

10 As to be seen in Chapter 6, X(ejΩ ) is a special case of the z-transform X(z) when z = ejΩ .

(3.35)

and

x[n] = (1/2π) ∫_{−π}^{π} X(e^{jΩ})e^{jnΩ} dΩ. (3.36)

X(e^{jΩ}) defined by (3.35) is called the discrete-time Fourier transform (DTFT) of x[n], while (3.36) provides a way to recover the time-domain signal x[n] from its DTFT and is called the inverse discrete-time Fourier transform (IDTFT). Once again, we use the notation x[n] ↔ X(e^{jΩ}) to signify a DTFT pair. As (3.35) involves an infinite summation, one may question its convergence, that is, the existence of X(e^{jΩ}). For this issue, we have the following results:
– If x[n] is absolutely summable, namely,

∑_{n=−∞}^{+∞} |x[n]| < +∞

2; (d) x(t) = −x(t − 3);

3.9 Problems

x(t)

–3

x(t)

1

–2 –1

–1

4

2 01

–3

3

(a)

5

–1

4

2 –2

(b)

t

5

3

1

0

–2

–4

1

t

x(t) 2 1 –5 –4 –3 –2 –1

0 1

2

3

5

4

6

7

t

(c) Fig. 3.22: Signals for Problem 3.1.

x[n]

(a)

–14

–7

0

7

n

14

21

x[n]

–18

–12

–6

0

(b)

6

n

12

18

x[n]

–18

–12

|

–6

0 –1

(c)

6

12

18

n

Fig. 3.23: Signals for Problem 3.2. 3 󵄨 󵄨2 (e) 16 ∫−3 󵄨󵄨󵄨x(t)󵄨󵄨󵄨 dt = 12 ; (f) c[1] is a positive real number. Show that x(t) = α cos(β t + 𝛾), and determine the constants α , β and 𝛾.

6

123

124 | 3 Fourier analysis of signals Xp[k]

8

0

8

16

k

(a)

Xp[k] 2 1 1 21 4 8 (b)

0

8

16

k

Fig. 3.24: DTFS coefficients of the first two signals for Problem 3.4.

Problem 3.6: Suppose we are given the following information about a periodic signal x[n] with period 8 and its Fourier coefficients Xp [k]: i) Xp [k] = −Xp [k − 4], ii) x[2n + 1] = (−1)n . Sketch x[n] for one period. Problem 3.7: Calculate the Fourier transforms of the following signals x1 (t) = e−2(t−1) u(t − 1),

x2 (t) = e−2|t−1| .

Sketch and label the magnitude of each of the Fourier transforms. Problem 3.8: Given that x(t) has the Fourier transform X(jω ), express the Fourier transforms of the signals listed below in terms of X(jω ). x1 (t) = x(1 − t) + x(−1 − t), x2 (t) = x(3t − 6), x3 (t) =

d2 x(t − 1) . dt2

2 Problem 3.9: Consider the Fourier transform pair: e−|t| ←→ 1+ω 2. (a) Use the appropriate FT properties to find the Fourier transform of te−|t| ; (b) Use the result from the first part, along with the duality property, to determine the Fourier transform of (1+t4t2 )2 .

3.9 Problems

|

125

Problem 3.10: Compute the Fourier transform of each of the following signals: +∞

x1 (t) = [e−α t cos(ω0 t)]u(t), α > 0; x2 (t) = e−3|t| sin(2t); x3 (t) = ∑ e−|t−2n| ; n=−∞ ∞

x4 (t) = ∑ α k δ (t − kT), |α | < 1; x5 (t) = [te−2t sin(4t)]u(t); k=0

x6 (t) =

sin(πt) sin(2π(t − 1)) ; x7 (t) = u(t + π) − u(t − π); πt π(t − 1)

x8 (t) = {

1 + sin t 0

−π ⩽ t ⩽ π otherwise.

Problem 3.11: Determine the continuous-time signal corresponding to each of the following transforms. −2π)] (a) X(jω ) = 2 sin[3(ω ; (ω −2π) (b) X(jω ) = cos(4ω + π/3); (c) X(jω ) as given by the magnitude and phase plots shown in Figure 3.25; (d) Its magnitude spectrum is |X(jω )| = c[u(ω + ωc ) − u(ω − ωc )] and its phase spectrum is ϕ (ω ) = −t0 ω , where c is a constant; (e) X(jω ) = 2[δ (ω − 1) − δ (ω + 1)] + 3[δ (ω − 2π) + δ (ω + 2π)].

|X(jω )|

ϕx (ω ) = −ω

@

1

@

@

@ @

−1

1

@

@

0

1

ω

−1

@

−1

0 −1

1

@

ω

@ @

Fig. 3.25: Spectra of the signal x(t) for Problem 3.11(c).

Problem 3.12: Assume that the Fourier transform of x1 (t) = u(t + 1) − u(t − 1) is X1 (jω ) and X2 (jω ) = s2π (ω ) (see (1.34)) is the Fourier transform of x2 (t). (a) Calculate X1 (jω ) and x2 (t); (b) If X3 (jω ) = X1 (jω )X2 (jω ), find such a signal x3 (t); +∞ (c) Determine the following integral ∫−∞ πX1 (jω )X2 (jω )e−jω dω . Problem 3.13: There is a real-valued discrete time sequence x[n], find such a sequence using the following statements: (a) x[n] is a periodic sequence, the period is 6;

126 | 3 Fourier analysis of signals (b) ∑5n=0 x[n] = 3; (c) ∑7n=2 (−1)n x[n] = 1; (d) The energy within one period of x[n] has been minimized. Problem 3.14: Calculate the Discrete-time Fourier transforms of x[n] = ( 12 )|n−1| . Sketch and label one period of the magnitude of the Fourier transform. Problem 3.15: Determine the Fourier transform for each of the following signals (a) x[n] = 2n sin(Ω0 n)u[−n + 1]; 1 n (b) x[n] = ∑+∞ k=0 ( π ) δ [n − 2k]; π (c) x[n] = cos( 4 n)(u[n + 2] − u[n − 2]); (d) x[n] = (n + 1)( 13 )n u[n − 1]; (e) x[n] = 2 + cos( π6 n + π8 ). Problem 3.16: It is known that the DTFT of a sequence x(n) is X(ejΩ ). (a) Prove that X(ej0 ) = ∑+∞ n=−∞ x(n); (b) Using the result that is obtained in part (a), calculate the value of the summation n A = ∑+∞ n=2 n(0.5) . Problem 3.17: Let x[n] ↔ X(ejΩ ), find the DTFT for each of x1 [n] ≜ {

0, x[n],

n ≠ ±kM, ∀ k ∈ Z 0, , x2 [n] ≜ { n = ±kM, ∀ k ∈ Z x[k],

n ≠ ±kM, ∀ k ∈ Z n = ±kM, ∀ k ∈ Z

in terms of X(ejΩ ). Problem 3.18: Let x(t) be a periodic signal satisfying x(t) = x(t + T0 ), ∀ t with T0 > 0 and 2π x∞ (t) = ∑ c[k]ejω0 kt , ω0 = . T0 k As x(t) = x(t + T̃ 0 ), ∀ t, where T̃ 0 ≜ pT0 with p a positive integer, we also have an FS given by 2π ω0 jω̃ 0 mt ̃ = ∑ c[m]e ̃ x(t) , ω̃ 0 ≜ = p T̃ 0 m with ̃ c[m] =

1 ̃ ∫ x(t)e−jω0 mt dt. T̃ 0 T̃ 0

Show that ̃ c[m] ={

0 c[k]

, ,

m ≠ kp m = kp

̃ = x∞ (t). Hint: substitute x(t) in the expression for c[m] ̃ and hence x(t) with x∞ (t).

3.9 Problems

| 127

Problem 3.19: Let c[m] be the FS coefficients of x(t). Show that x∗ (t) ↔ c∗ [−m]. Problem 3.20: Assume T1 , T2 are the period of x1 (t), x2 (t), respectively. Show that x(t) = x1 (t) + x2 (t) is periodical if and only if T1 /T2 is rational. Problem 3.21: Let x(t) ↔ X(jω ) = XR (ω ) + jHI (ω ). Show that if x(t) satisfies x(t) = 0, ∀ t < 0, then XR (ω ) and XI (ω ) form the following Hilbert transform pair: XR (ω ) =

+∞

+∞

−∞

−∞

X (ω ) X (ω ) 1 1 ∫ I dξ , XI (ω ) = − ∫ R dξ . π ω −ξ π ω −ξ

Hint: x(t) = x(t)u(t). Problem 3.22: Let x0 (t) be a signal and define xp (t) ≜ ∑+∞ k=−∞ [x0 (t−kT0 ) + x0 (−t + kT0 )], where T0 > 0 is constant. Determine the FS coefficients of xp (t) in terms of the Fourier transform X0 (jω ) of x0 (t). ̃ = μ1 ψ1 (t) + μ2 ψ2 (t), where ψ1 (t), ψ2 (t) are real-valued funcProblem 3.23: Let x(t) tions defined on [0, 1] and satisfy 1

1

∫ ψk2 (t)dt = 1, k = 1, 2; ∫ ψ1 (t)ψ2 (t)dt = 0.5. 0

0

Assume that x(t) is defined on [0, 1] with 1

∫ x(t)ψk (t)dt = k, k = 1, 2. 0

̃ in Find the optimal coefficients μ1 , μ2 such that x(t) is best approximated with x(t) 2 ̃ 2 dt is minimized. the sense that ∫0 |x(t) − x(t)| 1 x( t−τ ), where s > 0 and τ are Problem 3.24: Let x(t) ↔ X(jω ). Define ϕ (t) ≜ √s s constant. Find out the FT of ϕ (t) and compute the energy of ϕ (t) in terms of that of x(t).

4 Frequency-domain approach to LTI systems 4.1 Introduction Having provided the fundamentals of Fourier analysis in Chapter 3 with four Fourier representations of signals, we continue our journey in this chapter by exploring possible applications of this theory to signals analysis and particularly, to LTI systems. According to the important conclusion obtained in Chapter 2 that the output y of an LTI system is given by a time-domain convolution between the unit impulse response h and the input x, namely, y = h ∗ x, and the convolution property revealed in Chapter 3, we conclude that this relationship is equivalent to a multiplication Y = HX in frequency-domain. What more could be derived as a consequence of this remarkable property of Fourier analysis to LTI systems is the main objective of this chapter. As the signals involved in this chapter are usually represented as functions of frequency, it is intuitive to have the terminology frequency-domain appearing in the chapter’s title. The outline for this chapter is as follows. Starting with studying the response of an LTI system to a sinusoidal input, we introduce the concept of frequency response of an LTI system in Section 4.2. Section 4.3 is devoted to investigating the properties of frequency response. The bode plot and the straight-line approximation are also studied in this section. An alternative expression of frequency response is derived in Section 4.4 for both discrete-time and continuous-time LTI systems characterized by LCCDEs. It is also shown in this section that the output of a causal LTI system can be computed recursively using an LCCDE if its frequency response is a rational function. The journey of frequency-domain approach to LTI systems continues in Section 4.5, in which the frequency-domain relationship Y = HX is derived and the significance of this very important conclusion is discussed. 
Some typical LTI systems, including ideal transmission channels and ideal filters are discussed in Section 4.6. These contents will be used for the chapters that follow. To end this chapter, we give some concluding remarks in Section 4.7.

4.2 Frequency response of LTI systems As derived in Chapter 2, the response of a discrete-time LTI system to any input x[n] is given by y[n] = h[n] ∗ x[n]. Now, let us look at what form of the output y[n] takes for a special input – a complex sinusoid: x[n] = ρ ejϕ0 ejΩ0 n ≜ AejΩ0 n , where ρ ⩾ 0, Ω0 and ϕ0 are all real-valued constants.

4.2 Frequency response of LTI systems | 129

Clearly, the corresponding output is +∞

+∞

y[n] = ∑ h[m]x[n − m] = ∑ h[m]AejΩ0 (n−m) m=−∞ +∞

m=−∞

= {A ∑ h[m]e−jΩ0 m } ejΩ0 n ≜ BejΩ0 n , m=−∞

which states that the output of an LTI system in response to a complex sinusoidal input is also a sinusoid with the same frequency as that of the input and an amplitude given by +∞

B = A ∑ h[m]e−jΩ0 m . m=−∞

It is interesting to note that the 2nd factor on the right of the above equation is equal to H(ejΩ0 ), where +∞

H(ejΩ ) = ∑ h(m)e−jΩ m ≜ |H(ejΩ )|ejϕh (Ω )

(4.1)

m=−∞

is the DTFT of unit impulse response h[n] of the LTI system, which is usually referred to as the frequency response of the (LTI) system with the two real-valued functions |H(ejΩ )| and ϕh (Ω ) being called the magnitude response and the phase response of the system, respectively. Therefore, x[n] = AejΩ0 n → y[n] = AH(ejΩ0 )ejΩ0 n .

(4.2)

What does the frequency response signify? As B/A = H(e^{jΩ₀}), the frequency response H(e^{jΩ₀}), evaluated at the frequency of the input sinusoid, is physically interpreted as a complex amplitude gain. Now, let us consider the output of an LTI system when the input is a real-valued sinusoid: x[n] = ρ_x cos(Ω₀n + ϕ_x). Note

x[n] = (ρ_x/2)[e^{j(Ω₀n+ϕ_x)} + e^{−j(Ω₀n+ϕ_x)}] = A₁e^{jΩ₁n} + A₂e^{jΩ₂n} ≜ x₁[n] + x₂[n],

where A₁ = (ρ_x/2)e^{jϕ_x}, Ω₁ = Ω₀ and A₂ = (ρ_x/2)e^{−jϕ_x}, Ω₂ = −Ω₀. According to (4.2), we have

x_k[n] = A_k e^{jΩ_k n} → y_k[n] = A_k H(e^{jΩ_k}) e^{jΩ_k n}

for k = 1, 2. The output y[n] in response to this x[n] is therefore given by

y[n] = y₁[n] + y₂[n] = A₁H(e^{jΩ₁})e^{jΩ₁n} + A₂H(e^{jΩ₂})e^{jΩ₂n},    (4.3)

due to linearity.

Noting that A₁ = (ρ_x/2)e^{jϕ_x} = A₂*, Ω₁ = Ω₀ = −Ω₂, and the fact that h[n] is real-valued, which leads to H(e^{−jΩ}) = H*(e^{jΩ}) = |H(e^{jΩ})|e^{−jϕ_h(Ω)}, we finally reach

x[n] = ρ_x cos(Ω₀n + ϕ_x) → y[n] = ρ_y cos(Ω₀n + ϕ_y),    (4.4)

where ρ_y = ρ_x|H(e^{jΩ₀})|, ϕ_y = ϕ_x + ϕ_h(Ω₀). Equation (4.4) indicates that when excited by the sinusoid x[n] = ρ_x cos(Ω₀n + ϕ_x) with ρ_x, ϕ_x known, the output of an LTI system is a sinusoid of the same frequency, and that once the amplitude ρ_y and the phase ϕ_y are measured (say with an oscilloscope), we can determine the frequency response of the system at this particular frequency Ω₀ with

|H(e^{jΩ₀})| = ρ_y/ρ_x,  ϕ_h(Ω₀) = ϕ_y − ϕ_x.    (4.5)
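Relations (4.4) and (4.5) are easy to check numerically. The Python sketch below (an illustration, not part of the text) passes a real sinusoid through a simple two-tap FIR system by direct convolution and verifies that the output is a sinusoid of amplitude ρ_x|H(e^{jΩ₀})| and phase ϕ_x + ϕ_h(Ω₀); the system and test values are arbitrary choices:

```python
import math, cmath

h = [0.5, 0.5]                         # two-tap FIR system: y[n] = 0.5x[n] + 0.5x[n-1]
omega0, rho_x, phi_x = 0.9, 1.3, 0.4   # arbitrary test frequency, amplitude, phase

# H(e^{jΩ0}) from eq. (4.1)
H0 = sum(hm * cmath.exp(-1j * omega0 * m) for m, hm in enumerate(h))
rho_y = rho_x * abs(H0)                # predicted output amplitude, eq. (4.4)
phi_y = phi_x + cmath.phase(H0)        # predicted output phase

for n in range(10):
    # direct convolution of the real sinusoid with h
    y_n = sum(h[m] * rho_x * math.cos(omega0 * (n - m) + phi_x) for m in range(len(h)))
    assert abs(y_n - rho_y * math.cos(omega0 * n + phi_y)) < 1e-12
```

Sweeping omega0 over (0, π) in this sketch would trace out the whole frequency response, exactly as the measurement procedure described next suggests.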

Sweeping Ω₀ from 0 to π, we can then find the complete frequency response of the system. This is what we do in practice to determine the frequency response of an LTI system. Now, let us consider two examples that will help us better understand frequency response.

Example 4.1: An LTI system is given by the following difference equation:

y[n] = (1/2)x[n] + (1/2)x[n − 1].

Compute the frequency response of this system and determine the output when the input signal x[n] is of the form x[n] = κ + (−1)ⁿ with κ constant but unknown. See Figure 4.1(a).

Solution: With the input signal x[n] given in Figure 4.1(a), the output can be obtained directly from

y[n] = (1/2)x[n] + (1/2)x[n − 1],

which is shown in Figure 4.1(b). It seems that the unknown constant κ is 2. Why is that? The answer can be obtained from the concept of frequency response with the analysis below. First of all, let us consider the frequency response of this system. Substituting x[n] with δ[n] in the difference equation, we can find the unit impulse response. In fact, h[n] = (1/2)δ[n] + (1/2)δ[n − 1] leads to h[n] = 0, ∀ n ≠ 0, 1 and h[0] = h[1] = 1/2. So,

H(e^{jΩ}) = 1/2 + (1/2)e^{−jΩ} = cos(Ω/2)e^{−jΩ/2}.

Therefore, the magnitude response is |H(e^{jΩ})| = |cos(Ω/2)| and the phase response is of form ϕ_h(Ω) = −Ω/2, |Ω| ⩽ π. See Figure 4.2 for −π ⩽ Ω ⩽ π.

Fig. 4.1: Time-domain waveforms for Example 4.1. (a) x[n] with κ = 2; (b) y[n].

Note that x[n] = κ cos(0·n + 0) + cos(πn + 0) ≜ x₁[n] + x₂[n]. Applying (4.4) and linearity yields

y[n] = κ × |H(e^{j0})| cos[0·n + 0 + ϕ_h(0)] + 1 × |H(e^{jπ})| cos[πn + 0 + ϕ_h(π)].

As H(e^{j0}) = 1 and H(e^{jπ}) = 0, the output should be given by y[n] = x₁[n] = κ, and the computed output in Figure 4.1(b) then forces κ = 2. This system blocks the high-frequency component x₂[n] = (−1)ⁿ but lets x₁[n] = κ pass – a typical low-pass filtering operation.

Example 4.2: A causal LTI system is given by the following difference equation:

y[n] = (1/2)x[n] − (1/2)x[n − 1].

Compute the frequency response of the system and determine the output in response to the same signal x[n] used in Example 4.1.

Fig. 4.2: Frequency response for Example 4.1, where the x-axis denotes angular frequency [−π, π]. (a) |H(e^{jΩ})| – the magnitude response; (b) ϕ_h(Ω) – the phase response.

Solution: Applying the same procedure as in Example 4.1, we have h[n] = 0, ∀ n ≠ 0, 1 and h[0] = 1/2, h[1] = −1/2. Noting j = e^{jπ/2}, we have

H(e^{jΩ}) = 1/2 − (1/2)e^{−jΩ} = j sin(Ω/2)e^{−jΩ/2} = sin(Ω/2)e^{−j(Ω/2 − π/2)}.

Both magnitude and phase responses are plotted in Figure 4.3 for −π ⩽ Ω ⩽ π. With x[n] = κ + (−1)ⁿ, the output is of the same form:

y[n] = κ × |H(e^{j0})| cos[0·n + 0 + ϕ_h(0)] + 1 × |H(e^{jπ})| cos[πn + 0 + ϕ_h(π)],

but this time H(e^{j0}) = 0, H(e^{jπ}) = 1, which leads to y[n] = (−1)ⁿ = x₂[n]. This system blocks x₁[n] = κ and lets x₂[n] = (−1)ⁿ pass – a typical high-pass filtering operation.
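The low-pass/high-pass behavior of Examples 4.1 and 4.2 can be confirmed numerically. The Python sketch below (an illustration, not the book's code) evaluates (4.1) for both two-tap systems and filters x[n] = κ + (−1)ⁿ with each:

```python
import cmath

def dtft_fir(h, omega):
    # eq. (4.1) for a finite-length h
    return sum(c * cmath.exp(-1j * omega * m) for m, c in enumerate(h))

h_lp = [0.5, 0.5]     # Example 4.1: low-pass
h_hp = [0.5, -0.5]    # Example 4.2: high-pass

assert abs(dtft_fir(h_lp, 0.0) - 1) < 1e-12 and abs(dtft_fir(h_lp, cmath.pi)) < 1e-12
assert abs(dtft_fir(h_hp, 0.0)) < 1e-12 and abs(dtft_fir(h_hp, cmath.pi) - 1) < 1e-12

def filt(h, x):       # causal convolution sum
    return [sum(h[m] * x[n - m] for m in range(len(h)) if n - m >= 0)
            for n in range(len(x))]

kappa = 2.0
x = [kappa + (-1) ** n for n in range(50)]
y_lp, y_hp = filt(h_lp, x), filt(h_hp, x)
assert all(abs(v - kappa) < 1e-12 for v in y_lp[1:])                # κ passes, (-1)^n blocked
assert all(abs(y_hp[n] - (-1) ** n) < 1e-12 for n in range(1, 50))  # κ blocked, (-1)^n passes
```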

Fig. 4.3: Frequency response for Example 4.2, where the x-axis denotes angular frequency [−π, π]. (a) sin(Ω/2); (b) |H(e^{jΩ})| – the magnitude response; (c) ϕ_h(Ω) – the phase response.

The frequency response of a continuous-time LTI system is defined as the FT of the unit impulse response h(t):

H(jω) = ∫_{−∞}^{+∞} h(τ)e^{−jωτ} dτ = |H(jω)|e^{jϕ_h(ω)},    (4.6)

where |H(jω)| and ϕ_h(ω), similar to the discrete-time case, are the magnitude and phase responses of the system, respectively. Based on the convolution integral

y(t) = h(t) ∗ x(t) = ∫_{−∞}^{+∞} h(τ)x(t − τ) dτ,

it can be shown with the same procedure that

x(t) = A e^{jω_x t} → y(t) = A H(jω_x) e^{jω_x t},    (4.7)

and furthermore,

x(t) = ρ_x cos(ω_x t + ϕ_x) → y(t) = ρ_y cos(ω_x t + ϕ_y),    (4.8)

with ϕ_y ≜ ϕ_x + ϕ_h(ω_x), ρ_y ≜ ρ_x|H(jω_x)|, as long as the LTI system is real-valued, which is always the case in practice.

Example 4.3: The differentiator y(t) = dx(t)/dt is a causal LTI system. Determine its frequency response.

Solution: As h(t) = dδ(t)/dt has an FT jω, the frequency response of such a system is

H(jω) = jω = |ω| e^{j sgn(ω)π/2}

and is shown in Figure 4.4.

Fig. 4.4: (a) Magnitude response; (b) phase response.

Clearly, this system blocks the low frequency components and amplifies the high ones.


Such a system can be used for edge detection. An image is represented by a gray-scale function G = f(x, y) in such a way that G = 0 means that the pixel (x, y) is black, while a very large value of G corresponds to a white pixel. The image on the left of Figure 4.5 shows a photo f(x, y) of two families. Applying the differentiating operation in both the horizontal and vertical directions, we then obtain an image formed with

G̃(x, y) ≜ √( {∂f(x, y)/∂x}² + {∂f(x, y)/∂y}² ),

which is displayed on the right of Figure 4.5. As seen, this image yields information on the contours of the objects in the original photo.

Fig. 4.5: (a) An image of two families – Happiness. (b) The image of the contour of Happiness using a differentiator.
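The edge-detection operation described above can be sketched in a few lines of Python (an illustration, not the book's code), approximating the partial derivatives by forward differences on a toy image:

```python
import math

def gradient_magnitude(img):
    # discrete stand-in for sqrt((∂f/∂x)^2 + (∂f/∂y)^2), via forward differences
    rows, cols = len(img), len(img[0])
    out = [[0.0] * cols for _ in range(rows)]
    for i in range(rows - 1):
        for j in range(cols - 1):
            dx = img[i][j + 1] - img[i][j]
            dy = img[i + 1][j] - img[i][j]
            out[i][j] = math.sqrt(dx * dx + dy * dy)
    return out

# a toy "image": black background (0) with a bright square (255) in the middle
img = [[255 if 2 <= i <= 5 and 2 <= j <= 5 else 0 for j in range(8)] for i in range(8)]
edges = gradient_magnitude(img)
assert edges[0][0] == 0.0 and edges[3][3] == 0.0   # flat regions: zero gradient
assert edges[3][1] > 0                             # the square's boundary: large gradient
```

The gradient is zero wherever the image is flat and large only where the brightness jumps, which is exactly why the contour image on the right of Figure 4.5 shows the object outlines.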

4.3 Bode plots for continuous-time LTI systems

Since the frequency response of an LTI system is the DTFT/FT of the unit impulse response, the frequency responses of LTI systems share the properties of the DTFT/FT. Please refer to Chapter 3 for the details of those properties. Here, we just highlight the following. For discrete-time LTI systems:
i) H(e^{jΩ}) = |H(e^{jΩ})|e^{jϕ_h(Ω)}, and hence both |H(e^{jΩ})| and ϕ_h(Ω) are periodic in Ω with a period of 2π;
ii) for a real-valued h[n], |H(e^{jΩ})| is an even function, while ϕ_h(Ω) is an odd function.

It is due to these properties that the magnitude and phase responses |H(e^{jΩ})| (or 20 log₁₀|H(e^{jΩ})|) and ϕ_h(Ω) are usually plotted just for 0 ⩽ Ω < π. The frequency response of a real-valued continuous-time LTI system is given for ω ⩾ 0 because:
– |H(jω)| is an even function, while ϕ_h(ω) is an odd function;
– very often the Bode plot is used, in which both 20 log₁₀|H(jω)| and ϕ_h(ω) are presented with a logarithmic frequency scale log₁₀ ω for ω > 0. See Figure 4.6.

Fig. 4.6: Bode plot for H(jω) = κ(1 + jω/ω₁)/(1 + jω/ω₂) with κ = 0.01, ω₁ = 10 and ω₂ = 100. (a) 20 log₁₀|H(jω)|; (b) ϕ_h(ω), where the x-axis is log₁₀ ω.

The use of a logarithmic scale allows details to be displayed over a wider dynamic range. If we wish to observe the detailed variations around a value of 10⁻⁵ and a value of 10⁵ on the same graph, logarithmic scaling proves a powerful tool. The same argument applies to the logarithmic frequency scale used in a Bode plot, where the frequency varies from 0 Hz to infinity; it is not used in the frequency response plots of discrete-time systems, as the frequency range there is just from 0 to π. Now, let us consider the frequency response of the following system:

H(jω) = κ (1 + jω/ω₁)/(1 + jω/ω₂),    (4.9)

with κ, ω₁, ω₂ all constant. So,

20 log₁₀|H(jω)| = 20 log₁₀|κ| + 20 log₁₀|1 + jω/ω₁| − 20 log₁₀|1 + jω/ω₂|.

We note that 20 log₁₀|1 + jω/ω_k| ≈ 0 for 0 ⩽ ω ≪ |ω_k|, while for ω ≫ |ω_k| it is approximately 20 log₁₀ ω − 20 log₁₀|ω_k|, i.e. the response increases at 20 dB per decade.¹

The straight-line approximation of the Bode magnitude plot is the graph obtained with the following approximation rule:

20 log₁₀|1 + jω/ω_k| ≈ 0 for 0 ⩽ ω < |ω_k|;  20 log₁₀ ω − 20 log₁₀|ω_k| for ω ⩾ |ω_k|.    (4.10)

The straight-line approximation of the Bode magnitude plot for the H(jω) given by (4.9) is shown in Figure 4.7.

Fig. 4.7: The straight-line approximation (solid line) for 20 log₁₀|H(jω)| = 20 log₁₀|κ(1 + jω/ω₁)/(1 + jω/ω₂)| (dotted line) with κ = 0.01. (a) ω₁ = 10 and ω₂ = 100; (b) ω₁ = 100 and ω₂ = 10.

1 The function y = 20 log10 ω is linear in log10 ω (but not in ω ). As log10 ω − log10 ω̃ = 1 is equivalent to ω /ω̃ = 10, y increases 20 dB with an increase of factor 10 (i.e. a decade) in ω .
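The approximation rule (4.10) and its error are easy to check numerically. The Python sketch below (illustrative; the parameter values mirror Figure 4.6) compares the exact magnitude in dB with the straight-line approximation:

```python
import math

def exact_term(omega, omega_k):
    return 20 * math.log10(abs(1 + 1j * omega / omega_k))

def straightline_term(omega, omega_k):
    # eq. (4.10): 0 below the corner frequency, +20 dB per decade above it
    if omega < abs(omega_k):
        return 0.0
    return 20 * math.log10(omega) - 20 * math.log10(abs(omega_k))

def bode_mag(omega, kappa=0.01, w1=10.0, w2=100.0, approx=False):
    term = straightline_term if approx else exact_term
    return 20 * math.log10(abs(kappa)) + term(omega, w1) - term(omega, w2)

# far from both corner frequencies the two curves agree closely ...
for w in (0.1, 1.0, 1e4, 1e5):
    assert abs(bode_mag(w) - bode_mag(w, approx=True)) < 0.1
# ... and the worst error, at a corner frequency, is about 20*log10(sqrt(2)) ≈ 3 dB
err = abs(bode_mag(10.0) - bode_mag(10.0, approx=True))
assert abs(err - 3.01) < 0.2
```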

It should be noted that when we refer to a frequency response H(jω), it is implied that the system is stable;² otherwise, the frequency response does not exist. In addition, since both |1 + jω/ω_p| and |1 − jω/ω_p| have the same straight-line approximation, a straight line corresponds to two possible frequency responses.

Example 4.4: A straight-line approximation for a causal LTI system is given in Figure 4.8. Determine the frequency response of this system.

Fig. 4.8: Straight-line approximation for Example 4.4.

Solution: Denote ω₁ = 10, ω₂ = 100 and ω₃ = 1000. Observing the plot given, we know that the frequency response of the system is of form

H(jω) = κ (1 ± jω/ω₂)² / [(1 ± jω/ω₁)(1 ± jω/ω₃)],

where |κ| = 10^{60/20} = 10³. Since

ω_p e^{−ω_p t} u(t) ↔ 1/(1 + jω/ω_p),  ω_p e^{ω_p t} u(−t) ↔ 1/(1 − jω/ω_p),

and the system is causal, we have

H(jω) = ±10³ (jω ± ω₂)² / [(jω + ω₁)(jω + ω₃)],

which yields four possible frequency responses.

2 As discussed before, the DTFT of a sequence, say h[n], exists if it is absolutely summable. We then realize that the concept of frequency response applies to stable LTI systems only. The same applies to the FT.


4.4 Frequency response of LTIs described with LCCDEs

A system is called a finite impulse response (FIR) system if there exists a pair of finite integers N₁, N₂ with N₂ > N₁ such that its unit impulse response h[n] = 0 for all n beyond the range [N₁, N₂]. Otherwise, it is called an infinite impulse response (IIR) system. For example, the system with h[n] = αⁿu[n] is an IIR system. Generally speaking, the frequency response of an IIR system, if evaluated using the summation (4.1), is not tractable in computation because the sum accumulates an infinite number of terms. Consider the class of LTI systems that are constrained with the following LCCDE:

x[n] → y[n] :  y[n] + ∑_{k=1}^{N} a_k y[n − k] = ∑_{m=0}^{M} b_m x[n − m].    (4.11)

The existence of such LTI systems has been proved in Chapter 2. What is H(e^{jΩ}) for such a system? One could replace x[n] with δ[n] and solve the LCCDE for h[n] under the auxiliary condition that the system is LTI; the frequency response is then obtained using (4.1). Alternatively, it has been shown before that for LTI systems, x[n] = e^{jΩn} → y[n] = H(e^{jΩ})e^{jΩn}, as long as H(e^{jΩ}) exists, i.e. the system is stable. Therefore,

x[n − m] = e^{jΩ(n−m)} = e^{jΩn}e^{−jΩm},
y[n − k] = H(e^{jΩ})e^{jΩ(n−k)} = H(e^{jΩ})e^{jΩn}e^{−jΩk},

and hence the difference equation (4.11) becomes

H(e^{jΩ})e^{jΩn} + ∑_{k=1}^{N} a_k H(e^{jΩ})e^{jΩn}e^{−jΩk} = ∑_{m=0}^{M} b_m e^{jΩn}e^{−jΩm},

which leads to

H(e^{jΩ}) = [∑_{m=0}^{M} b_m e^{−jΩm}] / [1 + ∑_{k=1}^{N} a_k e^{−jΩk}].    (4.12)
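Formula (4.12) is straightforward to evaluate numerically. A small Python sketch (not part of the text); the coefficients below are those of Example 4.5 that follows:

```python
import cmath

def freq_response(a, b, omega):
    # eq. (4.12); a = [a_1, ..., a_N], b = [b_0, ..., b_M]
    num = sum(bm * cmath.exp(-1j * omega * m) for m, bm in enumerate(b))
    den = 1 + sum(ak * cmath.exp(-1j * omega * (k + 1)) for k, ak in enumerate(a))
    return num / den

# the system of Example 4.5: y[n] - (1/4)y[n-2] = 2x[n]
H = lambda w: freq_response([0.0, -0.25], [2.0], w)
assert abs(H(0.0) - 8 / 3) < 1e-12          # DC gain 2/(1 - 1/4)
assert abs(H(cmath.pi) - 8 / 3) < 1e-12     # e^{-j2π} = 1, same gain
```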

With such an expression, the response can be evaluated much more easily once the coefficients a_k, b_m are given.

Example 4.5: Determine the frequency response of the stable LTI system given by

y[n] − (1/4)y[n − 2] = 2x[n].

Solution: Based on (4.12), the frequency response is given directly:

H(e^{jΩ}) = 2 / (1 − (1/4)e^{−j2Ω}).

This procedure avoids computing the unit impulse response of the system, though the latter can be obtained by the IDTFT of H(e^{jΩ}) with

H(e^{jΩ}) = 1/(1 − (1/2)e^{−jΩ}) + 1/(1 + (1/2)e^{−jΩ}).

Noting the DTFT pair (3.37), we then have

h[n] = 0.5ⁿu[n] + (−0.5)ⁿu[n].

Clearly, the system is causal and stable due to h[n] = 0, ∀ n < 0 and ∑_n |h[n]| < ∞.

Example 4.6: Consider an RC circuit fed by the periodic signal p(t) shown in Figure 1.3 (obtained by half-wave rectifying x(t)), with R ≫ r and x(t) = cos(2πF₀t), F₀ = 1 Hz. The design problem is to choose R and C such that the output y(t) is close to a constant. Figures 4.9(b) and 4.9(c) show the output y(t) for RC = 0.01 and RC = 10, respectively. Try to explain why the difference is so big by evaluating the voltage y(t) across the capacitor C.

Solution: Ideally, p(t), shown in Figure 1.3, is a periodic signal of period T₀ = 1/F₀ = 1 second. See Figure 4.9(a). Denote

x₀(t) ≜ x(t)w_{T₀/2}(t) ⇒ p(t) = ∑_k x₀(t − kT₀).

As p(t) is periodic, p(t) = ∑_m c[m]e^{jω₀mt}, where, as indicated by (3.48), the FS coefficients are given by c[m] = (1/T₀)X₀(jω₀m), with X₀(jω) to be computed below.

Fig. 4.9: Signals for Example 4.6. (a) p(t); (b) y(t) with RC = 0.01; (c) y(t) with RC = 10.

Noting x(t) = 0.5[e^{jω₀t} + e^{−jω₀t}] with ω₀ = 2π and

w_{T₀/2}(t) ↔ W_{T₀/2}(jω) = (T₀/2) · sin(ωT₀/4)/(ωT₀/4),

one has

X₀(jω) = (1/4)[sin((ω − ω₀)/4)/((ω − ω₀)/4) + sin((ω + ω₀)/4)/((ω + ω₀)/4)],

and hence

c[m] = X₀(j2πm) = (1/4)[sin((m − 1)π/2)/((m − 1)π/2) + sin((m + 1)π/2)/((m + 1)π/2)].

In particular, c[0] = 1/π and c[1] = 1/4. Since the RC circuit is an LTI system constrained with

y(t) + RC dy(t)/dt = p(t),

the frequency response of such a system is

H(jω) = 1/(1 + jωRC).

Therefore, y(t) = ∑_m c[m]H(jmω₀)e^{jmω₀t} ≜ ∑_m d[m]e^{jmω₀t}, where

d[m] = c[m]/(1 + jmω₀RC).

Figure 4.10 shows c[m], H(jω) and d[m] for different values of RC.

Fig. 4.10: Spectral relationship for Example 4.6. (a) RC = 0.01; (b) RC = 10.
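The spectral coefficients of Example 4.6 can be computed directly. The Python sketch below (illustrative) evaluates c[m] and d[m] = c[m]H(jmω₀) and confirms that a large RC suppresses all harmonics relative to the DC term:

```python
import math

def sinc(x):                        # sin(x)/x with the limit value at x = 0
    return 1.0 if x == 0 else math.sin(x) / x

def c(m):                           # FS coefficients of p(t), as derived above
    return 0.25 * (sinc((m - 1) * math.pi / 2) + sinc((m + 1) * math.pi / 2))

def d(m, RC, omega0=2 * math.pi):   # d[m] = c[m] H(jmω0) = c[m]/(1 + jmω0 RC)
    return c(m) / complex(1, m * omega0 * RC)

assert abs(c(0) - 1 / math.pi) < 1e-12 and abs(c(1) - 0.25) < 1e-12
# with RC = 10 every harmonic (m ≠ 0) is strongly attenuated relative to DC,
# which is why the output in Figure 4.9(c) is nearly constant
assert abs(d(0, 10.0)) > 50 * abs(d(1, 10.0))
```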

The corresponding outputs y(t) can be seen in Figure 4.9(b) and Figure 4.9(c). With RC = 10 (say C = 10 μF and R = 100 kΩ), the mission of generating a DC level can be accomplished.

The LCCDEs provide an efficient way to implement causal LTI systems. This will be demonstrated with the following example. A similar example was given in Chapter 2.

Example 4.7: Let h[n] = nβⁿu[n] be the unit impulse response of an LTI system, where β is a real-valued constant, absolutely smaller than one. Find a more efficient way than the time-domain convolution to evaluate the output in response to an input x[n] that starts at n = 0.


Solution: With this h[n], one can see that it requires a lot of effort to compute a sample y[n] if the convolution is used, especially when the index n gets big. Note that the frequency response of the system is

H(e^{jΩ}) = βe^{−jΩ}/(1 − βe^{−jΩ})² = βe^{−jΩ}/(1 − 2βe^{−jΩ} + β²e^{−j2Ω}).

Comparing it with (4.14), we realize that the input and the output of this system should satisfy

y[n] − 2βy[n − 1] + β²y[n − 2] = βx[n − 1],

or equivalently,

y[n] = 2βy[n − 1] − β²y[n − 2] + βx[n − 1].

As x[n] = 0, ∀ n < 0, and the system is assumed to be causal, y[n] = 0, ∀ n < 0. Therefore, the above LCCDE means that y[n] can be evaluated recursively:

y[0] = 2βy[−1] − β²y[−2] + βx[−1] = 0
y[1] = 2βy[0] − β²y[−1] + βx[0]
⋮
y[k] = 2βy[k − 1] − β²y[k − 2] + βx[k − 1]
⋮

This is the implementation of this causal LTI system using the so-called direct-form structure to be discussed in Chapter 7.
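The recursion of Example 4.7 is easy to verify against the brute-force convolution sum. Below is an illustrative Python sketch (the input sequence is an arbitrary choice):

```python
def y_recursive(x, beta, N):
    # y[n] = 2β y[n-1] - β² y[n-2] + β x[n-1], with y[n] = 0 for n < 0
    y = []
    for n in range(N):
        y1 = y[n - 1] if n >= 1 else 0.0
        y2 = y[n - 2] if n >= 2 else 0.0
        x1 = x[n - 1] if 0 <= n - 1 < len(x) else 0.0
        y.append(2 * beta * y1 - beta ** 2 * y2 + beta * x1)
    return y

def y_convolution(x, beta, N):
    # brute force: y[n] = Σ_m h[m] x[n-m] with h[m] = m β^m u[m]
    return [sum(m * beta ** m * x[n - m]
                for m in range(n + 1) if n - m < len(x))
            for n in range(N)]

beta, x = 0.9, [1.0, -2.0, 0.5, 3.0]     # arbitrary short input starting at n = 0
ya, yb = y_recursive(x, beta, 40), y_convolution(x, beta, 40)
assert all(abs(p - q) < 1e-9 for p, q in zip(ya, yb))
```

Each recursive step costs a fixed three multiplications, whereas the convolution sum for sample y[n] grows with n: this is the efficiency gain the example points out.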

4.5 Frequency domain approach to system outputs

We have shown in the previous sections that when the input is a sinusoid, the output of an LTI system can be related to its frequency response. We now show a more interesting result that holds for arbitrary input signals. As shown before, for an LTI system with unit impulse response h we have in the time-domain x → y ⇒ y = x ∗ h. Denote x ↔ X, y ↔ Y, h ↔ H, where the transform is either the FT or the DTFT. What is the relationship between the three spectra X, H and Y? The answer has actually been given in Chapter 3, which states that a convolution in time-domain is equivalent to a multiplication in frequency-domain, namely

y[n] = ∑_{m=−∞}^{+∞} h[m]x[n − m]  ⇔  Y(e^{jΩ}) = X(e^{jΩ})H(e^{jΩ}),
y(t) = ∫_{−∞}^{+∞} h(τ)x(t − τ)dτ  ⇔  Y(jω) = X(jω)H(jω).    (4.15)

Here, we just provide the proof for the continuous-time case and leave the discrete-time counterpart to the readers. First of all, we note

y(t) = ∫_{−∞}^{+∞} x(τ)h(t − τ)dτ

and

h(t − τ) = (1/2π) ∫_{−∞}^{+∞} H(jω)e^{jω(t−τ)}dω.

Substituting the latter into the former, we obtain

y(t) = ∫_{−∞}^{+∞} x(τ) [(1/2π) ∫_{−∞}^{+∞} H(jω)e^{jω(t−τ)}dω] dτ
     = (1/2π) ∫_{−∞}^{+∞} H(jω) [∫_{−∞}^{+∞} x(τ)e^{−jωτ}dτ] e^{jωt}dω
     = (1/2π) ∫_{−∞}^{+∞} H(jω)X(jω)e^{jωt}dω.

This implies that the FT of y(t) is H(jω)X(jω), which completes the proof. The output y (in time-domain) can then be evaluated by the inverse transform, namely

y[n] = (1/2π) ∫_{−π}^{π} H(e^{jΩ})X(e^{jΩ})e^{jnΩ}dΩ,
y(t) = (1/2π) ∫_{−∞}^{+∞} X(jω)H(jω)e^{jωt}dω.    (4.16)

It should be pointed out that the time-domain descriptions of systems with convolutions are totally equivalent to the one specified by (4.15) (in frequency-domain), as both represent one and the same system but in different domains. The frequency-domain approach has several advantages over the time-domain one. At first sight, multiplications seem simpler than convolutions.

Example 4.8: Consider a causal LTI system described with

d²y(t)/dt² + 4 dy(t)/dt + 3y(t) = dx(t)/dt + 2x(t).

Derive a closed-form expression for the output y(t) in response to x(t) = e^{−t}u(t).


Solution: For this LTI system, we have

H(jω) = (jω + 2)/((jω)² + 4jω + 3).

Time-domain approach: h(t) is required:

H(jω) = (jω + 2)/((jω + 1)(jω + 3)) = A/(jω + 1) + B/(jω + 3),

where, by comparing the coefficients of the numerator, A = B = 1/2, so that the unit impulse response is h(t) = (1/2)[e^{−t} + e^{−3t}]u(t), and hence the output can be evaluated with y(t) = x(t) ∗ h(t).

Frequency-domain approach: Note X(jω) = 1/(1 + jω). Then

Y(jω) = H(jω)X(jω) = (jω + 2)/((jω + 1)(jω + 3)) · 1/(1 + jω)
       = C₁/(1 + jω) + C₂/(1 + jω)² + C₃/(3 + jω).

By comparing the coefficients, one has C₁ = 1/4, C₂ = 1/2, C₃ = −1/4. Finally,

y(t) = (1/4)[e^{−t} + 2te^{−t} − e^{−3t}]u(t).

This approach seems simpler than directly computing the convolution.
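The closed-form result of Example 4.8 can be cross-checked against a numerical evaluation of the convolution integral. The Python sketch below (illustrative; the trapezoidal step is an arbitrary choice) does so at a few time instants:

```python
import math

def h(t):        # impulse response from the time-domain approach
    return 0.5 * (math.exp(-t) + math.exp(-3 * t)) if t >= 0 else 0.0

def x(t):        # input e^{-t} u(t)
    return math.exp(-t) if t >= 0 else 0.0

def y_closed(t): # result of the frequency-domain approach
    return 0.25 * (math.exp(-t) + 2 * t * math.exp(-t) - math.exp(-3 * t)) if t >= 0 else 0.0

def y_numeric(t, dt=1e-4):
    # trapezoidal evaluation of y(t) = ∫ h(τ) x(t-τ) dτ over [0, t]
    n = int(round(t / dt))
    s = 0.5 * (h(0.0) * x(t) + h(t) * x(0.0))
    s += sum(h(k * dt) * x(t - k * dt) for k in range(1, n))
    return s * dt

for t in (0.5, 1.0, 2.0):
    assert abs(y_numeric(t) - y_closed(t)) < 1e-3
```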

The most important advantage of the frequency-domain approach over the time-domain one is that the design of systems can be made much easier in frequency-domain. Let us consider a signal x[n] = x₀[n] + e[n], in which x₀[n] is the desired signal with a spectrum that ends at Ω₀, and e[n] is an unwanted component, called the noise signal, with a spectrum starting from Ω_e > Ω₀. That is all we know about the signal. How do we design an LTI system/filter H(e^{jΩ}) to block e[n]? It seems very tough to study the design problem based on the waveform of x[n] in time-domain. If we look at the problem in frequency-domain, however, it becomes very easy. In fact, according to the information given on x[n], it should have a spectrum X(e^{jΩ}) like the one shown in Figure 4.11(a), in which the spectrum of the desired signal x₀[n] and that of the noise e[n] are separable even though we do not know exactly what they are! If we could realize a filter whose frequency response is given by Figure 4.11(b), then the frequency-domain relation (4.15) implies that the output spectrum Y(e^{jΩ}) of this filter would be the same as X₀(e^{jΩ}), as demonstrated in Figure 4.11(c), and hence the output y[n] of the filter would be x₀[n].

Fig. 4.11: Spectra of x[n], h[n] and y[n].

Look at the filter H(e^{jΩ}) = ∑_{k=0}^{7} (1/8)e^{−jΩk}, used in Chapter 2 for processing the x[n] shown in Figure 2.3. Its frequency response is depicted in Figure 4.12. We observe that the magnitude response is quite close to one around Ω = 0 and much smaller at the higher frequencies; it is an approximation of the ideal one shown in Figure 4.11(b). How to design a filter such that its frequency response is close to a given one is what filter design is all about. This topic will be briefly discussed in the next section.

4.6 Some typical LTI systems In this section, we will introduce several classes of well-known LTI systems, most of which are characterized in frequency-domain.

Fig. 4.12: Frequency response of H(e^{jΩ}) = ∑_{k=0}^{7}(1/8)e^{−jΩk}: (a) Magnitude response; (b) Phase response.

The effects of an LTI system on the output signal can be seen easily in frequency-domain. It follows from Y(⋅) = H(⋅)X(⋅) that

|Y(⋅)| = |H(⋅)||X(⋅)|,  ϕ_y(⋅) = ϕ_h(⋅) + ϕ_x(⋅).    (4.17)

Generally speaking, both the magnitude and the phase responses of the system cause the output spectrum to differ from the spectrum of the input signal, though in different manners. Besides, as suggested by Parseval's theorem, the phase response has no effect on the energy distribution of the output, though it can have a dramatic effect on its detailed behavior.

4.6.1 All-pass systems

An all-pass system is one whose magnitude response is constant: |H(⋅)| = c (say, c = 1) over the whole frequency range.

Continuous-time case, e.g.:
– H(jω) = e^{−jαω}: |H(jω)| = 1, ϕ(ω) = −αω, yielding y(t) = x(t − α);
– H(jω) = (β − jω)/(β + jω): |H(jω)| = ⋯ = 1, ϕ(ω) = ⋯ = −2 atan(ω/β).

Discrete-time case, e.g.:
– H(e^{jΩ}) = e^{−jn₀Ω} with n₀ integer: |H(e^{jΩ})| = 1, ϕ(Ω) = −n₀Ω, yielding y[n] = x[n − n₀];
– H(e^{jΩ}) = (e^{−jΩ} − β)/(1 − βe^{−jΩ}) = e^{−jΩ}(1 − βe^{jΩ})/(1 − βe^{−jΩ}) with β real-valued: |H(e^{jΩ})| = ⋯ = 1, ϕ(Ω) = −Ω + 2 atan(−β sin Ω/(1 − β cos Ω)).

Based on Parseval's theorem, all-pass systems do not change the energy spectrum of the input signals, and eventually the input and the output have the same energy. But they can cause distortion or loss of information in the input signals.

Demonstration: Pass a music signal into the all-pass filters H₁(e^{jΩ}) = e^{−jn₀Ω} and H₂(e^{jΩ}) = (e^{−jΩ} − β)/(1 − βe^{−jΩ}), respectively, and listen to the output of each filter. Can you figure out what makes the difference?
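The all-pass property of H₂(e^{jΩ}) is easy to confirm numerically. An illustrative Python sketch (β = 0.6 is an arbitrary choice):

```python
import cmath

def H2(omega, beta=0.6):
    # first-order discrete-time all-pass: (e^{-jΩ} - β)/(1 - β e^{-jΩ})
    z = cmath.exp(-1j * omega)
    return (z - beta) / (1 - beta * z)

# unit magnitude at every frequency, but a frequency-dependent phase
for k in range(1, 32):
    assert abs(abs(H2(k * cmath.pi / 32)) - 1.0) < 1e-12
assert abs(cmath.phase(H2(0.5)) - cmath.phase(H2(1.0))) > 1e-3
```

The nonlinear phase is precisely what makes the filtered music sound different despite the energy spectrum being untouched.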

4.6.2 Linear phase response systems

A discrete-time system is said to have linear phase response if its phase response ϕ_h(Ω) is linear in the frequency variable Ω. The group delay is a measure used in the study of this topic:

g(Ω) ≜ −dϕ_h(Ω)/dΩ.    (4.18)

So, an LTI system h[n] with linear phase response actually has a constant group delay g(Ω). All of this applies to continuous-time LTI systems as well.

Example 4.9: Let y[n] be the output of the system H(e^{jΩ}) = e^{−jαΩ} in response to x[n], where α is not necessarily an integer. What is the time-domain relationship between x[n] and y[n]?

Solution: First of all,

y[n] = (1/2π) ∫_{−π}^{π} Y(e^{jΩ})e^{jnΩ}dΩ = (1/2π) ∫_{−π}^{π} X(e^{jΩ})e^{j(n−α)Ω}dΩ
     = (1/2π) ∫_{−π}^{π} ∑_m x[m]e^{−jmΩ}e^{j(n−α)Ω}dΩ = ∑_m x[m] (1/2π) ∫_{−π}^{π} e^{j(n−α−m)Ω}dΩ
     = ∑_m x[m] sin[(n − α − m)π]/[π(n − α − m)].


If x[n] is obtained by sampling x(t) with sampling period T, i.e. x[n] = x(nT), then y[n] is the result of sampling x(t − α T) with the same period: y[n] = x(nT − α T). As x(t) and x(t − α T) have the same information, if x[n] contains the same information as x(t) does, so does y[n]. For detailed discussions on this topic, please see Chapter 5. Therefore, the linear phase system H(ejΩ ) = e−jαΩ preserves all information of its input signals. Is this system causal?
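The interpolation formula derived in Example 4.9 can be sketched directly. In the illustrative Python code below (the sample values are arbitrary), an integer α reduces the formula to a pure shift, while a non-integer α produces a genuine fractional delay:

```python
import math

def frac_delay(x, alpha, n):
    # y[n] = Σ_m x[m] sin[(n - α - m)π] / [π(n - α - m)]
    total = 0.0
    for m, xm in enumerate(x):
        u = (n - alpha - m) * math.pi
        total += xm * (1.0 if abs(u) < 1e-12 else math.sin(u) / u)
    return total

x = [0.0, 1.0, -0.5, 2.0, 0.25, 0.0]   # arbitrary sample values
# for integer α the kernel collapses to δ[n - α - m]: a pure shift
for n in range(len(x)):
    assert abs(frac_delay(x, 2, n) - (x[n - 2] if n >= 2 else 0.0)) < 1e-9
# for non-integer α every sample contributes: a genuine fractional delay
assert abs(frac_delay(x, 0.5, 2)) > 1e-6
```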

4.6.3 Ideal filters

The filters to be discussed here are actually a class of ideal LTI systems used for filtering purposes. Let C(ω) = |C(ω)|e^{jϕ_c(ω)} and D(Ω) = |D(Ω)|e^{jϕ_d(Ω)} be the frequency responses of the continuous-time and discrete-time filters, respectively. The types of filters are determined by the characteristics of the magnitude responses, while the phase responses are usually assumed to be linear within the pass-bands.

Ideal low-pass:
|C(ω)| = w_{ω_l}(ω − ω_l/2), ω ⩾ 0;  |D(Ω)| = w_{Ω_l}(Ω − Ω_l/2), 0 ⩽ Ω ⩽ π.

Ideal high-pass:
|C(ω)| = u(ω − ω_h), ω ⩾ 0;  |D(Ω)| = u(Ω − Ω_h) − u(Ω − π), 0 ⩽ Ω ⩽ π.

Ideal band-pass:
|C(ω)| = u(ω − ω_l) − u(ω − ω_h), ω ⩾ 0;  |D(Ω)| = u(Ω − Ω_l) − u(Ω − Ω_h), 0 ⩽ Ω ⩽ π.

Ideal band-stop:
|C(ω)| = 1 − [u(ω − ω_l) − u(ω − ω_h)], ω ⩾ 0;  |D(Ω)| = 1 − [u(Ω − Ω_l) − u(Ω − Ω_h)], 0 ⩽ Ω ⩽ π.

Figure 4.13 shows graphically the magnitude responses for the four types of digital filters.

Fig. 4.13: Four types of ideal digital filters with 0 < Ω_l < Ω_h < π. (a) Low-pass; (b) High-pass; (c) Band-pass; (d) Band-stop.

Example 4.10: Consider the two signals containing different information:³

s₁(t) = (ω₁/2π) · sin(ω₁t/2)/(ω₁t/2)  ↔  S₁(jω) = w_{ω₁}(ω),
s₂(t) = (1/2π)(ω₂/2) · [sin(ω₂t/4)/(ω₂t/4)]²  ↔  S₂(jω) = (1 − |ω|/ω₂) w_{2ω₂}(ω).

Assume ω₁ = 20π, ω₂ = 30π. The two signals are given in Figure 4.14. Suppose r(t) = s₁(t) + s₂(t)cos(ω_c t) is available, where ω_c = 50π. How can we detect s₁(t) and s₂(t) from r(t)?

Solution: The signal r(t) is shown in Figure 4.15(a). Clearly, the two signals are not separable from r(t) in the time domain. By analyzing the spectrum of r(t), we realize that

R(jω) = S₁(jω) + (1/2)[S₂(j(ω + ω_c)) + S₂(j(ω − ω_c))]

(see Figure 4.15(b)). This means that s₁(t) can be obtained by feeding r(t) into an ideal low-pass filter H₁(jω) as long as the pass-band frequency ω_l satisfies ω₁/2 < ω_l < ω_c − ω₂. The corresponding output, say y₁(t), is then s₁(t) due to Y₁(jω) = H₁(jω)R(jω) = S₁(jω).

3 How do we get the FT pair s₂(t) ↔ S₂(jω)? One way is to apply the property of multiplication in the time domain, leading to an FT given by a convolution in the frequency domain.

Fig. 4.14: (a) s₁(t); (b) S₁(jω); (c) s₂(t); (d) S₂(jω).

Fig. 4.15: (a) r(t) = s₁(t) + s₂(t)cos(ω_c t); (b) R(jω).

Clearly, y₂(t) = r(t) − y₁(t) is then available, and y₂(t) = s₂(t)cos(ω_c t). Then s₂(t) can be detected with the system depicted in Figure 4.16, where the mixer is a signal multiplier:

r̃(t) = y₂(t) × 2cos(ω_c t) = 2s₂(t)cos(ω_c t)cos(ω_c t) = s₂(t) + s₂(t)cos(2ω_c t),

and H₂(jω) is an ideal low-pass filter with pass-band frequency ω_l constrained by ω₂ < ω_l < 2ω_c − ω₂. Clearly, the output of such a filter is y₃(t) = s₂(t), and the mission is completed!

Fig. 4.16: Block diagram of demodulation.
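The mixer algebra behind Figure 4.16 can be verified numerically. The Python sketch below (with a hypothetical single-tone stand-in for s₂(t)) checks that r̃(t) = s₂(t) + s₂(t)cos(2ω_c t), the second term being what the low-pass filter H₂(jω) removes:

```python
import math

omega_c, omega_m = 50 * math.pi, 4 * math.pi  # carrier and a hypothetical message tone

def s2(t): return math.cos(omega_m * t)       # single-tone stand-in for s2(t)
def y2(t): return s2(t) * math.cos(omega_c * t)

# mixer output: r~(t) = 2 y2(t) cos(ω_c t) = s2(t) + s2(t) cos(2 ω_c t)
for t in [k * 1e-3 for k in range(200)]:
    r_tilde = 2 * y2(t) * math.cos(omega_c * t)
    assert abs(r_tilde - (s2(t) + s2(t) * math.cos(2 * omega_c * t))) < 1e-9
```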

Although possessing quite different frequency characteristics, these ideal filters have one thing in common: they are all noncausal, as their unit impulse responses are two-sided signals. Therefore, they are not practically implementable. Generally speaking, filter design is all about approximating a given frequency response, say D(Ω), with a physically realizable LTI system in the sense that the difference between the frequency response of the system and the given response is minimized. Precisely speaking, let D(Ω) be one of the ideal filters defined above. Note that the following model

H(e^{jΩ}) = (b₀ + b₁e^{−jΩ} + ⋯ + b_N e^{−jNΩ}) / (1 + a₁e^{−jΩ} + ⋯ + a_N e^{−jNΩ})

can represent a causal LTI system that can be realized with the following LCCDE:

y[n] = − ∑_{k=1}^{N} a_k y[n − k] + ∑_{k=0}^{N} b_k x[n − k].

For a given N, the order of the system, the behavior of H(e^{jΩ}) is determined by the 2N + 1 parameters a_k, b_k. So, filter design is to find a set of parameters {a_k, b_k} such that H(e^{jΩ}) is as close to D(Ω) as possible in a certain sense.

For example, the causal H(e^{jΩ}) = ∑_{n=0}^{33} b_n e^{−jnΩ} is a practically used low-pass filter. As seen from Figure 4.17, its magnitude response is very close to the ideal low-pass response, particularly within the pass-band.

When the two band-edge frequencies Ω_l, Ω_h of a band-stop filter are equal, the filter is called an ideal notch filter. Such an ideal filter can be approximated with the following 2nd-order system:

y[n] + a₁y[n − 1] + a₂y[n − 2] = b₀x[n] + b₁x[n − 1] + b₂x[n − 2],

where a₁ = −2ρ cos(π/4), a₂ = ρ², b₀ = b₂ = 1, b₁ = −2 cos θ₀. Clearly, its frequency response is given by

H(e^{jΩ}) = (1 − 2cos(π/4)e^{−jΩ} + e^{−j2Ω}) / (1 − 2ρ cos(π/4)e^{−jΩ} + ρ²e^{−j2Ω}),

Fig. 4.17: Magnitude response (in dB) of a causal low-pass digital filter with Ω_p = 0.4π and Ω_s = 0.6π.

Fig. 4.18: Frequency response of a 2nd-order notch system with ρ = 0.99, θ₀ = π/4: (a) magnitude response; (b) phase response.

which is plotted in Figure 4.18. This filter can be used to block a sinusoid of frequency π/4.
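The notch behavior can be demonstrated numerically. The illustrative Python sketch below builds the 2nd-order system with ρ = 0.99, θ₀ = π/4, checks that H(e^{jθ₀}) = 0, and runs a sinusoid at Ω = π/4 through the difference equation:

```python
import math, cmath

rho, theta0 = 0.99, math.pi / 4
b = [1.0, -2 * math.cos(theta0), 1.0]        # numerator: zeros at e^{±jθ0}
a = [-2 * rho * math.cos(theta0), rho ** 2]  # denominator: poles just inside the circle

def H(omega):
    z = cmath.exp(-1j * omega)
    return (b[0] + b[1] * z + b[2] * z * z) / (1 + a[0] * z + a[1] * z * z)

assert abs(H(theta0)) < 1e-12                # the notch frequency is blocked
assert abs(abs(H(0.0)) - 1) < 0.05 and abs(abs(H(math.pi)) - 1) < 0.05

# run x[n] = cos(θ0 n) through the difference equation: the tone is annihilated
y = [0.0, 0.0]                               # y[-2], y[-1]
for n in range(200):
    acc = sum(b[k] * math.cos(theta0 * (n - k)) for k in range(3))
    y.append(acc - a[0] * y[-1] - a[1] * y[-2])
assert all(abs(v) < 1e-9 for v in y)
```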

4.6.4 Ideal transmission channels

A continuous-time system is said to be a distortionless (ideal) transmission channel if for any input signal x(t) the corresponding output is y(t) = κ x(t − τ), where κ ≠ 0 and τ are real constants. Clearly, y(t) carries exactly the same information as x(t) does. Such a system is LTI and its frequency response, denoted CI(jω), is of the form CI(jω) = κ e^{−jτω}, ∀ ω.

In most applications, the signals under consideration have a spectrum confined to a certain frequency interval, say (ω1, ω2). The ideal channel for transmitting such a class of signals then has the frequency response

CI(jω) = { κ e^{−jτω},      ω1 ⩽ |ω| ⩽ ω2
         { not of interest,  otherwise,    (4.19)

with the constants κ and τ mentioned before. The output signal y(t) is then given by y(t) = κ x(t − τ).

In practice, due to the multipath effect in communication systems, the received signal is of the form r(t) = x(t) + β x(t − τ). The frequency response of this channel is C(jω) = 1 + β e^{−jωτ}; see Figure 4.19 for β = 0.25, τ = 0.005. When we transmit the signal x(t) = 10 cos(600t) + cos(200t − π/4) through such a channel, the received signal r(t), as observed from Figure 4.20(b), is greatly distorted. One way to recover the transmitted signal x(t) is to feed the received signal r(t) into a well-designed system, called a channel equalizer, which is obtained from the channel frequency response C(jω) such that the output y(t) of the equalizer equals y(t) = x(t − ξ), where ξ is a constant. More discussion can be found in textbooks on communications.

Similarly, the ideal transmission channel for the class of discrete-time signals x[n] with a spectrum confined to a certain interval, say [Ω1, Ω2], is

CI(e^{jΩ}) = { κ e^{−jmΩ},     Ω1 ⩽ |Ω| ⩽ Ω2
             { not of interest,  otherwise,    (4.20)

where κ ≠ 0 is a real constant, while m is an integer independent of the time index n. It is easy to see that the output of such a system is of the form y[n] = κ x[n − m] for any x[n] belonging to this class of signals.
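The distortion caused by such a multipath channel can be checked numerically. The sketch below (a minimal illustration assuming the values β = 0.25 and τ = 0.005 used above) evaluates |C(jω)| = |1 + βe^{−jωτ}| at the two frequencies present in x(t); since the gains at ω = 200 and ω = 600 differ substantially, the two sinusoidal components are scaled unequally and r(t) is distorted:

```python
import numpy as np

def channel_response(w, beta=0.25, tau=0.005):
    """Frequency response C(jw) = 1 + beta*exp(-j*w*tau) of the two-path channel."""
    return 1 + beta * np.exp(-1j * w * tau)

# Gains seen by the two components of x(t) = 10*cos(600t) + cos(200t - pi/4)
g600 = abs(channel_response(600.0))
g200 = abs(channel_response(200.0))
print(g600, g200)  # clearly unequal gains -> linear distortion of x(t)
```

An equalizer would have to apply the inverse response 1/C(jω) over the signal band to undo this unequal scaling.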


Fig. 4.19: (a) Magnitude response |C(jω)|; (b) phase response φc(ω).

4.7 Summary

The most important concept raised in this chapter is the frequency response of LTI systems, with which the time-domain convolution is transformed into a frequency-domain multiplication. This important result provides us with a way to look at the effect of a system on the input signal from a different angle, and eventually a very useful tool for system design. Several classes of typical systems have been introduced. In the next chapter, we will consider discrete processing of continuous-time signals, in which a hybrid system is involved. As will be seen, the frequency-domain approach discussed in this chapter is a powerful tool for analyzing such a system.

4.8 Problems

Problem 4.1: Compute the convolution of each of the following pairs of signals x(t) and h(t) by calculating X(jω) and H(jω), using the convolution property, and inverse transforming.

Fig. 4.20: Signal waveforms: (a) the transmitted signal x(t); (b) the received signal r(t).

(a) x(t) = te^{−2t}u(t), h(t) = e^{−4t}u(t); (b) x(t) = te^{−2t}u(t), h(t) = te^{−4t}u(t); (c) x(t) = e^{−t}u(t), h(t) = e^{t}u(−t).

Problem 4.2: Consider an LTI system S with impulse response

h(t) = sin(4(t − 1)) / (π(t − 1)).

Determine the output of S for each of the following inputs:

x1(t) = cos(6t + π/2),  x2(t) = ∑_{k=0}^{∞} (1/2)^k sin(3kt),
x3(t) = sin(4(t + 1)) / (π(t + 1)),  x4(t) = (sin(2t)/(πt))².

Problem 4.3: A causal and stable LTI system S has the frequency response

H(jω) = (jω + 4) / (6 − ω² + 5jω).

(a) Determine a differential equation relating the input and output of S. (b) Determine the impulse response h(t) of S. (c) What is the output of S when the input is x(t) = e^{−4t}u(t) − te^{−4t}u(t)?


Problem 4.4: Consider an LTI system whose response to the input x(t) = (e^{−t} + e^{−3t})u(t) is y(t) = (2e^{−t} − 2e^{−4t})u(t). (a) Find the frequency response of this system. (b) Determine the system's impulse response. (c) Find the differential equation relating the input and the output of this system.

Problem 4.5: Consider a continuous-time ideal bandpass filter whose frequency response is

H(jω) = { 1,  ωc ⩽ |ω| ⩽ 3ωc
        { 0,  elsewhere.

(a) If h(t) is the impulse response of this filter, determine a function g(t) such that h(t) = (sin(ωc t)/(πt)) g(t). (b) As ωc is increased, does the impulse response of the filter get more concentrated or less concentrated about the origin?

Problem 4.6: A continuous-time LTI system S with frequency response H(jω) is constructed by cascading two continuous-time LTI systems with frequency responses H1(jω) and H2(jω), respectively. The straight-line approximations of the Bode magnitude plots of H1(jω) and H(jω) are given below:

20 log10 |H1(jω)| ≈ { 6,                    0 ⩽ ω ⩽ 1
                    { 6 + 20 log10 ω,       1 ⩽ ω ⩽ 8
                    { α,                    8 ⩽ ω ⩽ 40
                    { α − 20 log10(ω/40),   ω ⩾ 40

where α = 6 + 20 log10 8 ≈ 24 (dB), and

20 log10 |H(jω)| ≈ { −20,                  0 ⩽ ω ⩽ 8
                   { −20 − 40 log10(ω/8),  ω ⩾ 8.

Sketch the two Bode plots and then specify H2(jω).

Problem 4.7: The straight-line approximation of the Bode magnitude plot of a causal and stable continuous-time LTI system H(jω) is shown below.

20 log10 |H(jω)| ≈ { 12,                    0 ⩽ ω ⩽ 0.2
                   { 12 + 40 log10(ω/0.2),  0.2 ⩽ ω ⩽ 10
                   { α + 20 log10(ω/10),    10 ⩽ ω ⩽ 50
                   { β,                     ω ⩾ 50

where α = 12 + 40 log10(10/0.2) (dB) and β = α + 20 log10(50/10) (dB). Sketch the Bode plot and then specify the frequency response of a system that is the inverse of H(jω).

Problem 4.8: Given that a discrete-time LTI system S is constructed by cascading two discrete-time LTI systems h1[n], h2[n], consider the following problems:
(a) For the first LTI system, when the input is x[n] = (1/2)^n u[n] + 2^n u[−n − 1], the output is y[n] = 6(1/2)^n u[n] − 6(3/4)^n u[n]; determine h1[n].
(b) For the second LTI system, the relationship between the input and output can be expressed as y[n] = 0.9y[n − 1] + x[n] + 0.9x[n − 1]; determine h2[n].
(c) If an input x[n] = e^{jΩ0 n} is fed into the system S, determine the output.

Problem 4.9: The 2nd-order filter F(Ω0, e^{jΩ}),

F(Ω0, e^{jΩ}) = (1 − 2 cos Ω0 e^{−jΩ} + e^{−j2Ω}) / (1 − 2ρ cos Ω0 e^{−jΩ} + ρ² e^{−j2Ω}),

can be considered as a special (ideal) band-stop filter with Ωl = Ωh = Ω0 when 0 < ρ < 1 is very close to one. Such a system is called a notch filter, where Ω0, the notch frequency, satisfies F(Ω0, e^{jΩ0}) = 0. Let x[n] be a signal known to be of the form x[n] = x0[n] + s1[n] + s2[n], where s1[n] = cos(πn/4), s2[n] = 10 cos(5πn/7 + 1), while x0[n] is a signal that does not contain frequency components at Ω = π/4, 5π/7, that is, X0(e^{jπ/4}) = X0(e^{j5π/7}) = 0. Design a system with notch filters of the form F(Ω0, e^{jΩ}) as subsystems to separate x0[n], s1[n] and s2[n] from x[n].

Problem 4.10: Does H(e^{jΩ}) = b0 + b1 e^{−jΩ} + b1 e^{−j2Ω} + b0 e^{−j3Ω} have a linear phase response? Justify your answer.

Problem 4.11: Let A(e^{jΩ}) = 1 + a1 e^{−jΩ} + ⋅⋅⋅ + ap e^{−jpΩ} with all ak constant. Show that for any constant q,

H(e^{jΩ}) = e^{−jqΩ} A*(e^{jΩ}) / A(e^{jΩ})

is an all-pass system. If the input signal has an energy of 123, what is the energy of this system's output?

5 Discrete processing of analog signals

5.1 Introduction

The signals we have studied in the previous chapters are classified into two categories: continuous-time signals and discrete-time signals. As mentioned before, most signals generated from physical phenomena are continuous-time, and they used to be processed by analog (continuous-time) systems before the digital age arrived. As a consequence of the dramatic development of digital technology over the past few decades, which has resulted in the availability of low-cost, lightweight, programmable, and reproducible discrete-time systems, processing of discrete-time signals has taken the place of processing of continuous-time signals in many contexts, leading to the so-called discrete processing of continuous-time signals.

Processing a continuous-time signal with a discrete-time system has several merits that result from the power, flexibility, and reliability of discrete-time computing devices such as microprocessors and specialized digital devices (e.g. digital controllers and digital signal processors):
– signal manipulation is much more easily carried out with the arithmetic operations of a digital device than with an analog system;
– implementation of a discrete-time system only involves writing or modifying a set of program codes;
– digital signals are more robust against disturbances, as there are a limited number of signal levels involved in the systems.

The first thing to do for discrete-time processing of a continuous-time signal x(t) is to convert the latter into a discrete-time signal, say x[n]. This is done by an operation named sampling that leads to the following basic relation: x[n] = x(nTs), where Ts is the sampling period. The 2nd step is to manipulate x[n] using a discrete-time system, and the last step is to convert the output of the system back to continuous-time form.

It should be noted that processing of x[n] cannot increase the amount of information about x(t). So, a number of essential questions about such processing arise, including the following.
– How much information about the original signal x(t) is lost by sampling?
– Is it possible to recover x(t) from its samples x[n]?
– If the answer to the 2nd question is positive, how can x(t) be reconstructed from x[n]?

An outline of this chapter is given as follows. Section 5.2 is devoted to discussing the concept of sampling of continuous-time signals. The spectral relation is derived in Section 5.3, which represents the DTFT of the discrete-time signal (i.e. the sampled version of a continuous-time signal) in terms of the FT of this continuous-time signal.

Based on this relation, the famous sampling theorem is stated in terms of the sampling frequency and the maximum frequency to which the spectrum of the continuous-time signal is limited. The issue of reconstructing a continuous-time signal is studied in Section 5.4. A hybrid system for discrete processing of continuous-time signals is presented in Section 5.5, where the effects of under-sampling and nonideal reconstruction functions are discussed. In Section 5.6, the issue of frequency-domain sampling is raised, leading to an important topic: the discrete Fourier transform. The chapter closes with the summary given in Section 5.8.

5.2 Sampling of a continuous-time signal

To take advantage of powerful digital computing devices, signals of continuous variables have to be discretized. This procedure is usually referred to as sampling. Take a time-domain signal x(t) as an example. Sampling is executed by a system whose function is mathematically described by

x(t) → x[n] = x(nTs).    (5.1)

Physically, sampling can be implemented with an electronic switch that closes at the time instants t = nTs (see Figure 5.1).

Fig. 5.1: Implementation of the sampling operation in the time domain.

Suppose x[n] is obtained from a continuous-time signal x(t) by sampling. In order to keep all the information of x(t), the mapping between x(t) and its samples x[n] should be one-to-one. Figure 5.2 shows a situation where a given sequence x[n] corresponds to three continuous-time signals. This is definitely an unwanted situation in many contexts, as the original signal x(t) cannot be uniquely recovered from its samples x[n]. Under what conditions can x(t) be uniquely determined by its samples x[n]? And how serious is the situation if these conditions are not satisfied? In what follows, we will answer these two questions.
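The ambiguity illustrated by Figure 5.2 is easy to reproduce: with sampling period Ts, the signals cos(ω0 t) and cos((ω0 + kωs)t) produce identical samples for every integer k. A minimal sketch (the specific values ω0 = 2π and Ts = 0.1 are chosen here purely for illustration):

```python
import numpy as np

Ts = 0.1                      # sampling period
ws = 2 * np.pi / Ts           # sampling frequency
w0 = 2 * np.pi                # base frequency
n = np.arange(20)

# cos(w0 t), cos((w0+ws)t) and cos((w0+2ws)t) agree at every instant t = n*Ts,
# because the extra phase 2*pi*k*n is a multiple of 2*pi
x0 = np.cos(w0 * n * Ts)
x1 = np.cos((w0 + ws) * n * Ts)
x2 = np.cos((w0 + 2 * ws) * n * Ts)
print(np.allclose(x0, x1), np.allclose(x0, x2))  # True True
```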

Fig. 5.2: Three continuous-time signals yielding the same x[n] when sampled with the same period Ts.

5.3 Spectral relationship and sampling theorem

Let x[n] be the discrete-time signal obtained by sampling the continuous-time signal x(t). As the FT of x(t) and the DTFT of x[n] are discussed at the same time, in order to avoid any confusion we use X̃(jω), rather than X(jω), to denote the FT of x(t), and X(e^{jΩ}) for the DTFT of x[n].

Note that the mapping between x(t) and x[n] is one-to-one if and only if the mapping between the two spectra X̃(⋅) and X(⋅) is. So, let us look at the relationship between these two functions:

x(t)   —FT→    X̃(jω) = ∫_{−∞}^{+∞} x(t) e^{−jωt} dt
  ↓ t = nTs           ↓ ??
x[n]   —DTFT→  X(e^{jΩ}) = ∑_{n=−∞}^{+∞} x[n] e^{−jΩn}

It follows from the IFT of X̃(jω) and (5.1) that

x[n] = (1/2π) ∫_{−∞}^{+∞} X̃(jω) e^{jωnTs} dω.

Now, divide the region (−∞, +∞) for the variable ω into a set of small intervals with the points

ωk ≜ (2πk + π)/Ts,  k = ⋅⋅⋅, −2, −1, 0, 1, 2, . . . .

Then x[n] = (1/2π) ∑_{k=−∞}^{+∞} ∫_{(2πk−π)/Ts}^{(2πk+π)/Ts} X̃(jω) e^{jωnTs} dω and hence, with ξ = Ts ω,

x[n] = (1/2π) ∑_{k=−∞}^{+∞} (1/Ts) ∫_{2πk−π}^{2πk+π} X̃(jξ/Ts) e^{jξn} dξ.

With the intermediate variable Ω = ξ − 2πk defined, we finally have

x[n] = (1/2π) ∫_{−π}^{π} [ (1/Ts) ∑_{k=−∞}^{+∞} X̃(j(Ω + 2πk)/Ts) ] e^{jΩn} dΩ.    (5.2)

Denote

Φ(Ω) ≜ (1/Ts) ∑_{k=−∞}^{+∞} X̃(j(Ω + 2πk)/Ts).

Note that Φ(Ω) is periodic: Φ(Ω + 2π) = Φ(Ω). It is clear that if c[m] are the FS coefficients of Φ(Ω), (5.2) suggests that c[m] = x[−m] and hence (noting Ω0 = 2π/T0 = 2π/2π = 1)

Φ(Ω) = ∑_m c[m] e^{jmΩ0Ω} = ∑_n x[n] e^{−jnΩ} = X(e^{jΩ}),

which yields the following two equivalent relationships:

X(e^{jΩ}) = (1/Ts) ∑_{k=−∞}^{+∞} X̃(j(Ω + 2kπ)/Ts)    (ω = Ω/Ts, ωs ≜ 2π/Ts)
X(e^{jTsω}) = (1/Ts) ∑_{k=−∞}^{+∞} X̃(j(ω + kωs)).    (5.3)
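Relationship (5.3) can be verified numerically. The sketch below (assuming the test signal x(t) = e^{−|t|}, whose FT X̃(jω) = 2/(1 + ω²) is standard, with an illustrative Ts = 0.5) compares the DTFT of the samples x[n] = x(nTs) against the periodized, scaled spectrum on the right-hand side of (5.3):

```python
import numpy as np

Ts = 0.5
Omega = 0.7                          # digital frequency at which to compare

# Left-hand side: DTFT of x[n] = exp(-|n*Ts|) evaluated at Omega
n = np.arange(-200, 201)
lhs = np.sum(np.exp(-np.abs(n * Ts)) * np.exp(-1j * Omega * n))

# Right-hand side: (1/Ts) * sum_k Xtilde(j*(Omega + 2*pi*k)/Ts),
# with Xtilde(jw) = 2/(1 + w^2) the FT of exp(-|t|)
k = np.arange(-5000, 5001)
w = (Omega + 2 * np.pi * k) / Ts
rhs = np.sum(2.0 / (1.0 + w ** 2)) / Ts

print(abs(lhs - rhs))  # small: the two sides of (5.3) agree
```

The tiny residual comes only from truncating the two infinite sums.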

As seen in Example 4.10, x(t) = (ωM/2π)(sin(ωM t/2)/(ωM t/2))² ↔ X̃(jω) = (1 − |ω|/ωM) w_{2ωM}(ω), with the latter shown in Figure 5.3(a). Figures 5.3(b) and (c) show the spectra of x[n] obtained when x(t) is sampled using different Ts.

Let x(t) be a signal of bandwidth limited to ωM like the one shown above. It is easy to see that if ωM < ωs − ωM, i.e. ωs > 2ωM, then

Ts X(e^{jTsω}) = X̃(jω),  |ω| < ωs/2.    (5.4)

Fig. 5.3: (a) |X̃(jω)|; (b) |X(e^{jωTs})| with Ts = 4π/(3ωM); (c) |X(e^{jωTs})| with Ts = 4π/(5ωM).

This means that the analog spectrum X̃(jω) can be extracted from the corresponding digital spectrum X(e^{jΩ}). We therefore have the following important theorem.

Theorem 5.1: Suppose x(t) is bandlimited:

X̃(jω) = 0, ∀ |ω| ⩾ ωM.

Then x(t) can be recovered from its samples x[n] = x(nTs) if

ωs ≜ 2π/Ts > 2ωM ≜ ωN,    (5.5)

where ωN is usually referred to as the Nyquist rate. This result is also called the Nyquist theorem or the sampling theorem.

Example 5.1: Let x(t) be a signal with a band limited to ωM and let pτ(t) be the periodic signal

pτ(t) = (1/τ) ∑_{k=−∞}^{+∞} wτ(t − kTs).

Analyze the spectrum of x̂(t) = x(t)pτ(t).

Solution: First of all,

pτ(t) = ∑_k c[k] e^{jωs kt},

where the FS coefficients are given by

c[k] = τ^{−1} (1/Ts) τ sin(kωs τ/2)/(kωs τ/2) = (1/Ts) sin(kωs τ/2)/(kωs τ/2), ∀ k.

Therefore,

x̂(t) = x(t)pτ(t) ↔ X̂(jω) = ∑_k c[k] X̃(j(ω − kωs)).
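The FS coefficients c[k] above can be checked by direct numerical integration over one period. A small sketch (assuming the illustrative values Ts = 1 and τ = 0.2):

```python
import numpy as np

Ts, tau = 1.0, 0.2
ws = 2 * np.pi / Ts
t = np.linspace(-Ts / 2, Ts / 2, 100001)
dt = t[1] - t[0]
p = np.where(np.abs(t) <= tau / 2, 1.0 / tau, 0.0)   # one period of p_tau(t)

for k in [1, 2, 3]:
    # numerical FS coefficient: c[k] = (1/Ts) * integral over one period
    ck = np.sum(p * np.exp(-1j * k * ws * t)) * dt / Ts
    ck_formula = np.sin(k * ws * tau / 2) / (Ts * k * ws * tau / 2)
    assert abs(ck - ck_formula) < 1e-3   # matches the closed form above
```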

Fig. 5.4: A mixer equivalent to a sampler: (a) x(t); (b) τ pτ(t); (c) τ x̂(t).

Figure 5.4 shows graphically x(t), τpτ(t) and τx̂(t). One observes that when τ/Ts is very small, c[k] ≈ 1/Ts, ∀ k, and hence X̂(jω) is very close to X(e^{jωTs}), where X(e^{jΩ}) is the DTFT of x[n] = x(nTs); see (5.3). This means that the signal x̂(t) with small τ/Ts is almost equivalent to the discrete-time signal x[n], while the former is obtained with an analog multiplier. Furthermore, note that pτ(t) becomes an impulse train as τ → 0. This is why the sample sequence x[n] is usually modelled as the product of the analog signal x(t) and the impulse train.


5.4 Reconstruction of continuous-time signals

Let hd(t) be the unit impulse response of an LTI system Hd(jω). We can construct a continuous-time signal from the discrete-time signal x[n] in the following way:

x[n] → x̂(t) = ∑_{m=−∞}^{+∞} x[m] hd(t − mTs),    (5.6)

which can be implemented using the system shown in Figure 5.5, where Mix is a mixer producing xp(t) = ∑_{m=−∞}^{+∞} x[m]δ(t − mTs) and, as will be discussed later, can be replaced by a zero- or first-order hold circuit in practice.

Fig. 5.5: Structure of the digital-to-analog converter (DAC): x[n] → Mix → xp(t) → hd(t) → x̂(t) → Hi(jω) → x̄(t).

What is the relationship between the reconstructed signal x̂(t) and the original x(t)? Let us look at the problem in the frequency domain, as it seems impossible to get an answer in the time domain. Applying the FT to both sides of the above equation yields

X̂(jω) = ∑_{m=−∞}^{+∞} x[m] Hd(jω) e^{−jmωTs} = Hd(jω) ∑_{m=−∞}^{+∞} x[m] e^{−jmωTs} = Hd(jω) X(e^{jωTs}).    (5.7)

Take

Hd(jω) = Ts w_{2ωl}(ω) ≜ H0(jω) ⇔ h0(t) = Ts sin(ωl t)/(πt),    (5.8)

where w_{2ωl}(ω) is the window function. It then turns out from (5.6) that

x̂0(t) = ∑_{m=−∞}^{+∞} x[m] h0(t − mTs) = Ts ∑_{m=−∞}^{+∞} x[m] sin(ωl(t − mTs)) / (π(t − mTs)).    (5.9)

It follows from (5.3) that under the recovery condition (5.5), X̂(jω) = X̃(jω) and hence x̂0(t) ≡ x(t), as long as

ωM < ωl ⩽ ωs/2.    (5.10)
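Formula (5.9) can be tried out directly. The sketch below assumes the bandlimited test signal x(t) = (sin(πt)/(πt))², whose spectrum is confined to |ω| ⩽ 2π, sampled with Ts = 0.25 (so ωs = 8π > 2ωM and (5.5) holds), and rebuilds x(t) at an off-grid instant using ωl = ωs/2:

```python
import numpy as np

Ts = 0.25                                 # ws = 8*pi > 2*wM = 4*pi
m = np.arange(-2000, 2001)
x_samples = np.sinc(m * Ts) ** 2          # x(t) = sinc(t)^2 = (sin(pi t)/(pi t))^2

def reconstruct(t):
    # x0_hat(t) = Ts * sum_m x[m] * sin(wl*(t - m*Ts)) / (pi*(t - m*Ts)), wl = 4*pi.
    # Since np.sinc(u) = sin(pi*u)/(pi*u), each term reduces to x[m]*sinc(4*t - m).
    return np.sum(x_samples * np.sinc(4 * t - m))

t0 = 0.1                                  # an instant between sampling points
print(abs(reconstruct(t0) - np.sinc(t0) ** 2))  # essentially zero
```

The truncation to |m| ⩽ 2000 is the only source of error; on the sampling grid itself the sum is exact term by term.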

It is interesting to note that x̂0(nTs) = x[n] always holds if Hd(jω) = k^{−1} Ts w_{2ωl}(ω) with ωl = kωs/2 for any integer k > 1. This provides a way to generate different analog signals that yield the same discrete sequence x[n] when sampled with Ts.

Note that (5.6) can be rewritten as

x̂(t) = ∑_{m=−∞}^{n} x[m] hd(t − mTs) + ∑_{m=n+1}^{+∞} x[m] hd(t − mTs).

In an on-line (i.e. real-time) signal reconstruction system, for t < (n + 1)Ts the samples of the discrete-time sequence x[m] are available only for m ⩽ n. In that case, the 2nd term of the above equation should be nil for all t < (n + 1)Ts in order to avoid using x[n + 1], x[n + 2], ⋅⋅⋅ in evaluating x̂(t). To achieve that, it suffices to ensure hd(t) ≡ 0, ∀ t < 0. This means that the system Hd(jω) should be causal!

Look at (5.9), in which the ideal reconstruction system h0(t) = Ts sin(ωl t)/(πt) is noncausal. It is of theoretical importance, but in practice it has to be replaced with an implementable reconstruction system.

Zero-Order Hold (ZOH): It is defined as h1(t) ≜ w_{Ts}(t − Ts/2), leading to x̂1(t) = ∑_{m=−∞}^{+∞} x[m] h1(t − mTs). See Figure 5.6 as an example.

As Hd(jω) = H1(jω) = (2 sin(ωTs/2)/ω) e^{−jωTs/2}, we can see that the reconstructed signal x̂1(t) differs from x(t) due to the following three factors: i) there is a time delay of Ts/2; ii) the magnitude response of H1(jω) is not flat within [−ωM, ωM]; and iii) the magnitude response of H1(jω) is not constantly nil outside [−ωM, ωM]. The first two factors may not be serious when Ts is very small, while the third one can introduce high-frequency noise. This is demonstrated in Figure 5.7.
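A zero-order hold is easy to emulate: each sample value is simply held for one sampling period. The sketch below (with an assumed test sequence and Ts = 0.1, matching the setting of Figure 5.6) implements x̂1(t) = ∑ x[m] h1(t − mTs) and checks that the resulting staircase equals the held sample throughout each interval:

```python
import numpy as np

Ts = 0.1
m = np.arange(0, 11)
x_samples = 2 + np.sin(2 * np.pi * m * Ts)   # assumed test samples x[m] = x(m*Ts)

def zoh(t):
    # h1(t) = w_Ts(t - Ts/2) is a unit pulse on [0, Ts), so x1_hat(t) holds the
    # value x[m] on the whole interval [m*Ts, (m+1)*Ts)
    idx = int(np.floor(t / Ts))
    return x_samples[idx]

# in the middle of each holding interval the staircase equals the held sample
assert all(zoh(k * Ts + Ts / 2) == x_samples[k] for k in range(10))
```

The staircase's sharp steps are exactly the high-frequency images that the anti-imaging filter discussed next must remove.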

First-Order Hold (FOH): It is defined as h2(t) ≜ (t/Ts)[w_{Ts}(t − Ts/2) − w_{Ts}(t − 3Ts/2)]. See Figure 5.8(a). This leads to x̂2(t) = ∑_{m=−∞}^{+∞} x[m] h2(t − mTs). It can be shown that

x̂2(t) = ∑_{m=−∞}^{+∞} [ x[m] + ((x[m + 1] − x[m])/Ts)(t − mTs) ] h1(t − mTs).

Clearly, the first-order hold yields a better result. Why is that?

To remove the undesired high-frequency components, the continuous-time signal x̂(t) from the hold is usually smoothed by an analog low-pass filter, the anti-imaging filter Hi(jω). See Figure 5.5. Based on (5.7), we have

X̄(jω) = Hi(jω) Hd(jω) X(e^{jωTs}).    (5.11)

What is the best Hi(jω)?

Fig. 5.6: Reconstruction of x(t) (dotted line) using the ZOH, where Ts = 0.1.

Fig. 5.7: Demonstration of the imperfection of the ZOH reconstruction.

Fig. 5.8: Reconstruction of x(t) (dotted line) using the FOH, where Ts = 0.1.

Before turning to the next topic, we should point out that the condition specified by (5.5) is just a sufficient condition for ensuring that a continuous-time signal can be restored

from its discrete-time counterpart. There are signals which do not satisfy this condition but can still be recovered from their samples. Look at the spectrum of a bandpass signal depicted in Figure 5.9. In communications, it is quite often the case that the bandwidth of such a signal is much smaller than ω1, i.e. Bw ≜ ω2 − ω1 ≪ ω1. … ωs/2, we will have

ȳ(t) = y(t) = g(t) ∗ x(t),

as long as Hd(jω)Hi(jω)H(e^{jωTs})Ha(jω) = G(jω), ∀ |ω| < ωs/2.

5.6 Discrete Fourier transform

Figure 5.12 depicts a discrete-time signal x[n] and its magnitude spectrum |X(e^{jΩ})|. As is known, x[n] can be represented using X(e^{jΩ}), a function of the continuous variable Ω. It is therefore not efficient to represent the sequence x[n] by X(e^{jΩ}): to do so, one would need a memory device of infinitely many bits, which is practically impossible.

Fig. 5.12: (a) a finite-duration signal x[n] of 24 samples; (b) |X(e^{jΩ})|.

To overcome this problem, as in the discussion above on discrete processing of continuous-time signals, we can evaluate the DTFT of x[n] at a set of frequency points only. As most of our readers will have realized, this is something like a sampling procedure. The discrete Fourier transform (DFT) of a discrete-time signal x[n] is the DTFT of the same signal computed at the frequency points

Ωk = 2πk/Ns ≜ kΩs,  k ∈ ℤ,    (5.14)

where Ωs = 2π/Ns is the sampling period, with Ns a positive integer indicating the number of samples taken within 2π. The Ns-point DFT of x[n] is then defined as

X[k] ≜ ∑_{n=−∞}^{+∞} x[n] e^{−j2πkn/Ns} = X(e^{jΩs k}).    (5.15)

Note that X[k] defined by (5.15) is periodic with period Ns. So, the DTFS suggests

X[k] = ∑_{m=0}^{Ns−1} x̃[m] e^{−jΩs mk},    (5.16)

where² the sequence x̃[m] (i.e. the DTFS coefficients of X[k]) is given by

x̃[m] = (1/Ns) ∑_{k=0}^{Ns−1} X[k] e^{jΩs km},  m = 0, 1, . . . , Ns − 1,    (5.17)

called the Ns-point inverse discrete Fourier transform (IDFT) of X[k].

As x̃[n] and x[n] are the inverses of X[k] and X(e^{jΩ}), respectively, and X[k] = X(e^{jΩs k}), a fundamental question to be asked is: what is the relationship between x̃[n] and x[n]? Inserting X[k] given by (5.15) into (5.17) yields

x̃[m] = (1/Ns) ∑_{k=0}^{Ns−1} ∑_{n=−∞}^{+∞} x[n] e^{−jΩs kn} e^{jΩs km} = ∑_{n=−∞}^{+∞} x[n] (1/Ns) ∑_{k=0}^{Ns−1} e^{jΩs k(m−n)}.

It follows from Ωs = 2π/Ns that

∑_{k=0}^{Ns−1} e^{jΩs k(m−n)} = Ns δ[m − n − lNs]

holds for any integer −∞ < l < +∞, and hence

x̃[m] = ∑_{l=−∞}^{+∞} x[m + lNs].    (5.18)

Suppose x[n] is of finite duration: x[n] = 0 for n < N1 and n > N1 + L − 1. Then

x̃[n] = x[n],  n = N1, N1 + 1, . . . , N1 + L − 1,

as long as

Ns ⩾ L.    (5.19)

The above is consistent with the sampling theorem specified by (5.5), and (5.18) is the discrete counterpart of (5.3). It should be noted that (5.19) is just a sufficient condition on the sampling number Ns to ensure the reconstruction of x[n] from its DFT.

Remark: For a finite-duration signal, say x[n] = x[n](u[n] − u[n − N]), the signal xp[n] = ∑_{k=−∞}^{+∞} x[n − kN] is periodic with period N. As known before (see (3.50)), the DTFS coefficients of such a periodic signal are given by

Xp[k] = (1/N) X(e^{j2πk/N}), ∀ k.

² Note that the periodic signal X[k] is decomposed using the basis {e^{−jΩs m}, m = 0, 1, ⋅⋅⋅, Ns − 1} rather than {e^{jΩs m}, m = 0, 1, ⋅⋅⋅, Ns − 1}, as the two sets are identical.

Clearly, the N-point DFT of x[n] is X[k] = N Xp[k]. In that sense, the DFT and the DTFS are equivalent, though they are derived from the concepts of sampling and transforming, respectively.

Example 5.2: Consider the signal x[n] shown in Figure 5.12, which is of duration L = 24 samples, starting from n = 0 and ending at n = 23. We computed its Ns-point DFT and the corresponding IDFT x̃[n] for Ns = 12, 24, 28.

Case 1: Ns = 12 < L, corresponding to an under-sampling situation. Figure 5.13(a) shows the 12 samples |X[k]| = |X(e^{j2πk/12})|, while Figure 5.13(b) shows the corresponding IDFT x̃[n]. As seen, the IDFT x̃[n] is totally different from x[n] for n = 0, 1, 2, . . . , 23 due to the aliasing effect.

Case 2: Ns = 24 = L, corresponding to critical sampling. The results are presented in Figure 5.14.

Fig. 5.13: Top: the 12-point DFT of the x[n] shown in Figure 5.12; bottom: the IDFT x̃[n].

For this case, as observed, x̃[n] = x[n], n = 0, 1, 2, ⋅⋅⋅, 23, which means that x[n] can be recovered from X[k], as there is no overlap between x[n] and x[n + 24m] for any nonzero integer m.

Case 3: Ns = 28 > L, corresponding to over-sampling. The results are presented in Figure 5.15.


Fig. 5.14: Top: the 24-point DFT of the x[n] shown in Figure 5.12; bottom: the IDFT x̃[n].

As expected, x̃[n] = x[n], n = 0, 1, 2, ⋅⋅⋅, 23, which is confirmed in Figure 5.15(a). Though over-sampling provides a better resolution than critical sampling, both carry exactly the same information about x[n].
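The three cases of Example 5.2 can be replayed with any finite-duration sequence. The sketch below (assuming a random length-24 sequence in place of the book's signal) computes the Ns-point DFT by (5.15), inverts it by (5.17), and confirms (5.18): for Ns = 12 the IDFT is the time-aliased sum x[n] + x[n + 12], while for Ns = 24 and Ns = 28 the sequence is recovered exactly:

```python
import numpy as np

rng = np.random.default_rng(0)
L = 24
x = rng.standard_normal(L)                  # finite-duration signal, n = 0..23

def dft_idft(x, Ns):
    n, k = np.arange(len(x)), np.arange(Ns)
    X = np.exp(-2j * np.pi * np.outer(k, n) / Ns) @ x              # (5.15)
    m = np.arange(Ns)
    return (np.exp(2j * np.pi * np.outer(m, k) / Ns) @ X).real / Ns  # (5.17)

assert np.allclose(dft_idft(x, 12), x[:12] + x[12:])   # under-sampling: aliasing (5.18)
assert np.allclose(dft_idft(x, 24), x)                 # critical sampling: exact
assert np.allclose(dft_idft(x, 28)[:24], x)            # over-sampling: exact
assert np.allclose(dft_idft(x, 28)[24:], 0)            # plus trailing zeros
```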

5.7 Compressed sensing

We have pointed out that the Nyquist theorem yields a sufficient condition which guarantees that analog signals can be recovered exactly from their samples; in other words, xa(t) can be totally represented by x[n] = xa(nTs) as long as (5.5) is satisfied. Now, let ψ1(t) = e^{j2π·310t}, ψ2(t) = e^{j2π·499t}, and

xa(t) = α1 ψ1(t) + α2 ψ2(t).    (5.20)

Clearly, xa(t) is bandlimited to fM = 499 Hz. If we sample this signal with fs = 1000 Hz, we have 1000 samples x[n] per second. Can we use many fewer samples (than 1000) if we know a priori that xa(t) has the structure given by (5.20) with ψ1(t), ψ2(t) known, say just two samples xa(t1), xa(t2)? This is possible! Surprised? In fact, it follows from (5.20) that

[ xa(t1) ]   [ ψ1(t1)  ψ2(t1) ] [ α1 ]     [ α1 ]
[ xa(t2) ] = [ ψ1(t2)  ψ2(t2) ] [ α2 ] ≜ Ψ [ α2 ].    (5.21)
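Recovering (α1, α2) from two samples amounts to solving the 2×2 system (5.21). A minimal sketch (the sample instants t1 = 0 and t2 = 10^{−3} and the coefficient values are assumptions chosen so that Ψ is nonsingular):

```python
import numpy as np

f1, f2 = 310.0, 499.0
psi = lambda f, t: np.exp(2j * np.pi * f * t)

alpha_true = np.array([2.0, -1.5])             # unknown coefficients to recover
t1, t2 = 0.0, 1e-3                             # two sampling instants

Psi = np.array([[psi(f1, t1), psi(f2, t1)],
                [psi(f1, t2), psi(f2, t2)]])   # the matrix in (5.21)
samples = Psi @ alpha_true                     # the two measurements xa(t1), xa(t2)

alpha_hat = np.linalg.solve(Psi, samples)
print(np.allclose(alpha_hat, alpha_true))      # True: two samples suffice
```

The choice of t1, t2 matters only through the nonsingularity of Ψ, which fails when e^{j2π·310 t2} = e^{j2π·499 t2}.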

Fig. 5.15: Top: the 28-point DFT of the x[n] shown in Figure 5.12; bottom: the IDFT x̃[n].

As we have seen, we can obtain the signal parameters α1, α2 from just two samples of xa(t) as long as the matrix Ψ is nonsingular, and hence the whole signal xa(t) via (5.20). Consider a more complicated situation, where

xa(t) = ∑_{k=1}^{L} αk ψk(t) = [ψ1(t) ⋅⋅⋅ ψL(t)] [α1 ⋅⋅⋅ αL]^T ≜ ψ(t)α,    (5.22)

with the set {ψk(t)} given. Suppose we know that there are at most K (K ≪ L) …

… Re(α), and otherwise, to infinity. Therefore,

e^{αt}u(t) ↔ 1/(s − α), ∀ s ∈ ROC_{x1} = {s : Re(α) < Re(s)}.    (6.3)

For x2(t) = −e^{αt}u(−t), using a similar procedure one can show

−e^{αt}u(−t) ↔ 1/(s − α), ∀ s ∈ ROC_{x2} = {s : Re(s) < Re(α)}.    (6.4)

As seen, both X1(s) and X2(s) converge to the same 1/(s − α) but with different ROCs. Therefore, to uniquely determine x(t), one needs both X(s) and ROCx.

A signal x(t) is said to be right-sided if there exists a finite constant Tr such that x(t) = 0 for t ⩽ Tr; a signal x(t) is said to be left-sided if there exists a finite constant Tl such that x(t) = 0 for t ⩾ Tl; a signal is said to be two-sided if it belongs to neither of the two classes. In particular, xr(t) and xl(t) are said to be causal and anti-causal if Tr = 0 and Tl = 0, respectively.

Let X(s) be the Laplace transform of x(t) with ROCx. Assume that s0 ∈ ROCx and there exists a small constant ε > 0 such that {s : |s − s0| < ε} ⊂ ROCx;² then, as seen from the previous subsection, the signal x(t)e^{−σt} is integrable for all s = σ + jω with |s − s0| < ε. Furthermore, it is assumed in the sequel that x(t)e^{−σt} is also absolutely integrable. Consequently, one has

∫_{−∞}^{+∞} |x(t)e^{−st}| dt < +∞.    (A.9)

… then x(t) = xr(t) + xl(t), where³

xr(t) = ∑_{∀ pk ∈ Pl} Res[X(s)e^{st}]|_{s=pk} u(t)
xl(t) = − ∑_{∀ pk ∈ Pr} Res[X(s)e^{st}]|_{s=pk} u(−t),    (A.10)

where Pl and Pr are the sets that contain the (finite) poles located to the left and to the right of ROCx, respectively.

Proof: Look at the circle in Figure D.1, which is centered at s = 0 with radius R.

Fig. D.1: Contours of integration for the inverse LT.

For t > 0: Let the contour Γ̄l, specified by the line AB and the left half Γl of the circle, traversed counter-clockwise, be chosen such that it contains all the finite poles in Pl. According to the residue theorem, we have

(1/j2π) ∮_{Γ̄l} X(s)e^{st} ds = (1/j2π) ∫_{−jR}^{jR} X(s)e^{st} ds + (1/j2π) ∮_{Γl} X(s)e^{st} ds = ∑_{∀ pk ∈ Pl} Res[X(s)e^{st}]|_{s=pk}.

Note that Re(s) ⩽ 0 on Γl and t ⩾ 0; it follows from (A.8) and s = Re^{j(θ+π/2)} that

|(1/j2π) ∮_{Γl} X(s)e^{st} ds| = (1/2π) |∫_0^π X(s)e^{st} Re^{j(θ+π/2)} dθ| ⩽ (1/2π) ∫_0^π |X(s)e^{st} Re^{j(θ+π/2)}| dθ ⩽ (1/2π) ∫_0^π e^{−tR sin θ} (η/R^{1+ε}) R dθ ⩽ η/(2R^ε).

Therefore,

xr(t) = (1/j2π) ∫_{−j∞}^{j∞} X(s)e^{st} ds = lim_{R→∞} (1/j2π) ∫_{−jR}^{jR} X(s)e^{st} ds
      = lim_{R→∞} [ −(1/j2π) ∮_{Γl} X(s)e^{st} ds + ∑_{∀ pk ∈ Pl} Res[X(s)e^{st}]|_{s=pk} ]
      = ∑_{∀ pk ∈ Pl} Res[X(s)e^{st}]|_{s=pk}, ∀ t > 0.

For t < 0: Let the contour Γ̄r, specified by the line BA and the right half Γr of the circle, traversed counter-clockwise, be chosen such that it contains all the finite poles in Pr. According to the residue theorem, we have

(1/j2π) ∮_{Γ̄r} X(s)e^{st} ds = (1/j2π) ∫_{jR}^{−jR} X(s)e^{st} ds + (1/j2π) ∮_{Γr} X(s)e^{st} ds = ∑_{∀ pk ∈ Pr} Res[X(s)e^{st}]|_{s=pk}.

³ Here, x(0) is defined as x(0) ≜ [xr(0+) + xl(0−)]/2.

Using the same procedure, one can show the 2nd expression of (A.10).

Remarks:
– It is interesting to note that if X(s) meets (A.8) with an ROCx of the form σr < Re(s), equivalently with no poles located to the right of the ROC, then xl(t) = 0 and hence x(t) = xr(t) is a causal signal. We then have the following claim: the inverse LT of X(s) satisfying lim_{|s|→∞} X(s) = 0 is causal if and only if its ROCx is of the form σr < Re(s).
– We note that (A.10) is derived under the condition (A.8). The inverse LTs of a wider range of signals than those covered by (A.8) can be obtained with the help of (A.10). For example, if X̃(s) ≜ X(s)/(s − s0)^p satisfies (A.8) for some positive integer p and some constant s0 outside σr < Re(s) < σl (the ROC of X(s)), then applying (A.10) to X̃(s), one can get x̃(t). Since X(s) = (s − s0)^p X̃(s), x(t) = x_{(p)}(t), where

x_{(k+1)}(t) = dx_{(k)}(t)/dt − s0 x_{(k)}(t),  x_{(0)}(t) = x̃(t).


Example: X(s) = 1/(s + 2) with −2 < Re(s). Clearly, X(s) does not satisfy (A.8), while X̃(s) = 1/(s + 2)² does, since with ε = 1/2, lim_{|s|→∞} X̃(s)/s^{−(1+ε)} = 0.

Note that Pr of X̃(s) is empty, so x̃l(t) = 0, and that X̃(s)e^{st} has a pole at s = −2 with multiplicity 2. As seen before, Res[X̃(s)e^{st}]|_{s=−2} = te^{−2t} and hence x̃(t) = te^{−2t}u(t). Therefore,

x(t) = dx̃(t)/dt + 2x̃(t) = [−2te^{−2t} + e^{−2t}]u(t) + te^{−2t}δ(t) + 2te^{−2t}u(t) = e^{−2t}u(t),

which is exactly the same as what we know.

Based on the two points above, we can conclude that the inverse LT of any rational X(s), no matter whether the order (in s) of the numerator is higher than that of the denominator or not, is causal if and only if its ROC contains a right half-plane.

Inverse z-transform. Let x[n] ↔ X(z) with a region of convergence ROCx. Let Γ ⊂ ROCx be a counter-clockwise circle centered at z = 0 with radius r. As shown in Chapter 6,

x[n] = (1/j2π) ∮_Γ X(z) z^{n−1} dz.

Let {pk} be the set of poles of X(z), and let Pi and Po be the sets of poles inside |z| = ρr and outside |z| = ρl, respectively.

For n ⩾ 0: we have

x[n] = ∑_{∀ pk ∈ Pi ∪ {0}} Res[X(z)z^{n−1}]|_{z=pk},

where adding z = 0 as a pole is just for the case n = 0.

For n < 0: With ξ = z^{−1}, one has

x[n] = (1/j2π) ∮_Γ X(z)z^{n−1} dz = (1/j2π) ∮_{Γ̃} X(ξ^{−1}) ξ^{−(n+1)} dξ ≜ (1/j2π) ∮_{Γ̃} X̃(ξ) ξ^{−(n+1)} dξ,

where Γ̃ is the counter-clockwise circle centered at ξ = 0 with radius r^{−1} (the substitution reverses the orientation, which cancels the minus sign from dz = −ξ^{−2}dξ) and X̃(ξ) = X(ξ^{−1}), which has an ROC of the form ρl^{−1} < |ξ| < ρr^{−1}. The latter tells us that all the poles of X̃(ξ) inside |ξ| = r^{−1} are {pk^{−1} : ∀ pk ∈ Po}. Note that ξ^{−(n+1)} yields no pole at all inside |ξ| = r^{−1} for n < 0. Therefore,

x[n] = ∑_{∀ pk^{−1} : pk ∈ Po} Res[X̃(ξ)ξ^{−(n+1)}]|_{ξ=pk^{−1}} = − ∑_{∀ pk ∈ Po} Res[X(z)z^{n−1}]|_{z=pk}.

Remark: It is easy to see that x[n] is causal if and only if X(z) has an ROC of the form ρr < |z|, that is, X(z) has no poles (including poles at infinity) outside a circle centered at the origin.


E. Partial-fraction expansion

Partial-fraction expansions are used to express a rational function as a sum of ratios of lower-order polynomials. Denote

G(ρ) = B(ρ)/A(ρ) ≜ (b_M ρ^M + b_{M−1} ρ^{M−1} + ⋯ + b_1 ρ + b_0) / (ρ^N + a_{N−1} ρ^{N−1} + ⋯ + a_1 ρ + a_0).

Suppose

A(ρ) = ρ^N + a_{N−1} ρ^{N−1} + ⋯ + a_1 ρ + a_0 = ∏_{m=1}^{N_p} (ρ − λ_m)^{p_m},   (A.11)

where ∑_{m=1}^{N_p} p_m = N and λ_m, m = 1, 2, ⋯, N_p, are the distinct roots of A(ρ), with p_m the multiplicity of the root λ_m.
– Case 1: M < N. In this case, G(ρ) can be expanded as

G(ρ) = ∑_{m=1}^{N_p} ∑_{k=1}^{p_m} μ_{m,k} / (ρ − λ_m)^k.

There are essentially two ways to determine the coefficients μ_{m,k}. In the first method, we place all the terms above over the common denominator A(ρ) and equate the coefficient of each power of ρ to the corresponding coefficient of the polynomial B(ρ). This yields a system of N linear equations that can be solved for the N coefficients μ_{m,k}.

Example: Consider

G(ρ) = (3ρ + 5) / [(ρ + 2)(ρ + 1)²],

for which λ_1 = −2, p_1 = 1; λ_2 = −1, p_2 = 2. So the partial-fraction expansion of this function is of the form

G(ρ) = μ_{1,1}/(ρ + 2) + μ_{2,1}/(ρ + 1) + μ_{2,2}/(ρ + 1)².

By placing all the terms over a common denominator, we have

G(ρ) = [(μ_{1,1} + μ_{2,1})ρ² + (3μ_{2,1} + 2μ_{1,1} + μ_{2,2})ρ + (2μ_{2,1} + 2μ_{2,2} + μ_{1,1})] / [(ρ + 2)(ρ + 1)²].

Equating the numerators of the two expressions for G(ρ) gives

μ_{1,1} + μ_{2,1} = 0,  3μ_{2,1} + 2μ_{1,1} + μ_{2,2} = 3,  2μ_{2,1} + 2μ_{2,2} + μ_{1,1} = 5.

Solving these equations leads to μ_{1,1} = −1, μ_{2,1} = 1, μ_{2,2} = 2.


The second method is often easier and will be demonstrated with the same example. To determine μ_{1,1}, multiply both sides of the partial-fraction expansion by (ρ + 2):

(ρ + 2)G(ρ) = μ_{1,1} + (ρ + 2)[μ_{2,1}/(ρ + 1) + μ_{2,2}/(ρ + 1)²]  ⇒  μ_{1,1} = lim_{ρ→−2} (ρ + 2)G(ρ) = −1.

μ_{2,2} can be obtained in the same way by multiplying both sides of the expansion by (ρ + 1)²:

(ρ + 1)²G(ρ) = μ_{2,2} + (ρ + 1)²[μ_{1,1}/(ρ + 2) + μ_{2,1}/(ρ + 1)]  ⇒  μ_{2,2} = lim_{ρ→−1} (ρ + 1)²G(ρ) = 2.

As for μ_{2,1}, multiplying both sides of the expansion by (ρ + 1)² leads to

(ρ + 1)²G(ρ) = μ_{2,2} + μ_{2,1}(ρ + 1) + μ_{1,1}(ρ + 1)²/(ρ + 2).

Differentiating both sides with respect to ρ and then letting ρ → −1, we have

μ_{2,1} = lim_{ρ→−1} d/dρ [(ρ + 1)²G(ρ)] = 1.
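For this example, the coefficients can be verified with a computer-algebra system; the sketch below uses SymPy's `apart`:

```python
import sympy as sp

rho = sp.symbols('rho')

G = (3*rho + 5) / ((rho + 2) * (rho + 1)**2)

# Partial-fraction expansion over the factors (rho + 2), (rho + 1), (rho + 1)^2
expansion = sp.apart(G, rho)
print(expansion)

# The expansion equals -1/(rho + 2) + 1/(rho + 1) + 2/(rho + 1)**2,
# i.e. mu_{1,1} = -1, mu_{2,1} = 1, mu_{2,2} = 2, as found above.
assert sp.simplify(expansion
                   - (-1/(rho + 2) + 1/(rho + 1) + 2/(rho + 1)**2)) == 0
```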

Case 2: M ⩾ N. In this case, G(ρ) can be expanded as

G(ρ) = ∑_{k=0}^{M−N} 𝜈_k ρ^k + B̃(ρ)/A(ρ),

where the 𝜈_k can be obtained using long division, while

B̃(ρ) ≜ B(ρ) − A(ρ) ∑_{k=0}^{M−N} 𝜈_k ρ^k

is a polynomial of order lower than N. Applying the techniques explained above, we can find the partial-fraction expansion of G̃(ρ) ≜ B̃(ρ)/A(ρ) and hence that of G(ρ).
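Case 2 can be illustrated on a function chosen for this sketch (not from the text); SymPy's `apart` performs the long division and the residual expansion in one step:

```python
import sympy as sp

rho = sp.symbols('rho')

# M = 3 >= N = 2: G(rho) = rho^3 / ((rho + 1)(rho + 2))
G = rho**3 / ((rho + 1) * (rho + 2))

expansion = sp.apart(G, rho)
print(expansion)

# Long division gives the polynomial part nu_0 + nu_1*rho = -3 + rho, and the
# remainder B~(rho) = 7*rho + 6 expands as -1/(rho + 1) + 8/(rho + 2).
assert sp.simplify(expansion
                   - (rho - 3 - 1/(rho + 1) + 8/(rho + 2))) == 0
```

Note that the polynomial part starts with the constant term 𝜈_0 (here −3), which is why the sum in Case 2 runs from k = 0.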


Index

actuator, 5
additivity, 38
aeronautics, xi
air-conditioners, 4
aircraft, 5
aliasing
– effect, 170
– filter, 171
amplifier, 27
amplitude
– modulation (AM), 28
approximation
– straight-line, 128
associativity, 56
astronautics, xi
bandwidth, 162
block-diagram, 152
capacitor, 3
Cartesian-form, 11
causality, 32
channel, 34
– transmission, 154
circuit
– rectifier, 2
co-prime, 75
coefficients, 77
commutativity, 55
compensator, 5
components, 74
condition
– of initial rest, 39
conditions
– initial, 252
conjugate, 78
continuous time, xii
controllers
– digital, 159
convergence, 82
– region of (ROC), 182
convolution
– integral, 52
– sum, 47
convolutions, 46

current, 10
– direct, 14
deconvolution, 203
delay
– group, 148
demodulation, 152
denominator, 189
detection
– edge, 135
devices
– computing, 159
diode, 3
Dirac function, 19
discrete time, xii
distributivity, 55
domain
– time-, 46, 89
duality, 101
energy, 10
equalizer
– channel, 154
equation
– homogeneous, 63
– output, 234
equations
– linear constant coefficient differential/difference (LCCDE), 63
– state-space, 226
estimation, 91
Euler’s formula, 11
expansions
– partial-fraction, 106
filter, 30
– anti-imaging, 166
– digital, 49
– notch, 152
Fourier
– analysis, 46
– discrete Fourier transform (DFT), 172
– discrete-time Fourier series, 73
– discrete-time Fourier transform (DTFT), 73
– fast Fourier transform (FFT), 177

– series (FS), 73
– transform (FT), 73
framework, 94
frequency
– -domain, 128
– angular, 13
– carrier, 251
– fundamental, 13
– logarithmic frequency scale, 135
function
– matrix exponential, 240
– rational transfer, 228
– sinc, 23
– transfer, 200
Hold
– First Order Hold (FOH), 166
– Zero Order Hold (ZOH), 166
homogeneity, 38
impulse, 118
– train, 118
information
– source, 251
integer, 14
integration
– by parts, 79
integrator, 33
interconnection
– cascade, 227
– feedback, 227
– parallel, 227
intersection, 202
invertibility, 33
laws, 28
– Kirchhoff’s, 28
– Newton’s, 28
linearity, 37
mapping, 108
– one-to-one, 108
matrix
– identity, 237
– nonsingular, 237
– Vandermonde, 86
micro-processor, 5
mixer, 6
multimedia, xi

multiplier, 164
numerator, 189
Nyquist
– frequency, 169
– rate, 169
orthogonality
– principle, 77
oscilloscope, 89
parameters, 16
Parseval’s
– relation, 82
period
– sampling, 149
periodic, 13
periodicity, 74
phase, 11
pixel, 135
plot
– Bode, 135
– pole-zero, 196
Polar-form, 11
poles, 186
– infinite, 259
polynomials, 106
radians, 13
range
– dynamic, 136
realization
– controllable, 236
– state-space, 234
refrigerators, 4
reliability, 159
representation
– block-diagram, 226
– state-space, 226
– state-variable, 226
residue
– theorem, 106
resistors, 3
response, 46
– complete, 213
– finite impulse response (FIR), 139
– forced, 210
– frequency, 128
– infinite impulse response (IIR), 139

– linear phase, 148
– natural, 210
– unit impulse, 46
– zero-input, 213
– zero-state, 213
sampling, 159
sensing
– compressed (CS), 177
shift
– frequency, 99
– time, 98
signal
– analytical, 252
– bandpass, 168
– basis, 121
– bounded, 35
– causal, 198
– continuous-time periodic, 74
– decomposition, 19
– deterministic, 16
– exponential, 21
– image, 1
– input, 3
– left-sided, 193
– output, 3
– random, 16
– representations of, 1
– right-sided, 193
– sinusoidal, 16
– speech, 1
– two-sided, 194
– unit impulse, 18
– unit step, 16
– video, 1
solution, 63
– complete, 63
– homogeneous, 64
– particular, 63
spectrum, 95
– density function, 105
– line, 118
– magnitude, 95
– phase, 95
stability, 34
– triangle, 207

state
– variables, 226
structure
– direct-form, 143
– lattice, 230
superposition, 38
– principle, 56
symbols, 90
symmetry
– conjugate, 102
system
– all-pass, 147
– communication, 34
– hybrid, 160
– linear time-invariant (LTI), xii
– multi-input multi-output (MIMO), 4
– single-input single-output (SISO), 4
– stable, 35
– time scaling, 26
– time shifting, 24
– unstable, 35
Taylor
– series, 191
theorem
– final-value, 191
– initial-value, 191
– Nyquist, 163
– Parseval, 104
– sampling, 160
transform
– Hilbert, 251
– Laplace, 180
– unilateral z-, 219
– unilateral Laplace, 216
– z-, 180
transformation
– similarity, 237
transforms
– wavelet, 46
transmission, 128
voltage, 10
zeros, 186
