Digital Signal Processing Technology: Essentials of the Communications Revolution
ISBN 0872598195, 9780872598195


Digital Signal Processing Technology: Essentials of the Communications Revolution

Doug Smith, KF6DX

Production
Michelle Bloom, WB1ENT: Production Supervisor
Paul Lappen: Composition and layout
Sue Fagan: Cover design
David Pingree, Michael Daniels: Technical illustrations
Jayne Pratt Lovelace: Proofreader




ARRL Amateur Radio
Newington, CT 06111-1494
ARRLWeb: www.arrl.org

Copyright © 2001-2003 by The American Radio Relay League, Inc. Copyright secured under the Pan-American Convention

International Copyright secured. This work is publication No. 264 of the Radio Amateur's Library, published by the ARRL. All rights reserved. No part of this work may be reproduced in any form except by written permission of the publisher. All rights of translation are reserved. Printed in USA. Quedan reservados todos los derechos (All rights reserved).

First Edition Second Printing, 2003 ISBN: 0-87259-819-5

Dedication

To my lovely wife, Dari, who graciously endured unmentionable hardships during the production of this book: My dear, your wisdom and support made it possible.

About the Author

Doug Smith, KF6DX, has over 20 years' experience designing communications systems and circuits for commercial, military and Amateur Radio applications. His computer programming career began on mainframes like the CDC 7000 about eight years before the IBM PC was first introduced, when punch cards and nine-track tape drives were prevalent. Doug's areas of concentration have covered a wide range of systems, including control systems for radio, automatic link establishment, frequency synthesis and DSP. He was involved in the design of some innovative products for amateurs, including the Kachina 505DSP, the first full-power, computer-controlled HF amateur transceiver. His current technical work extends to digital voice-coding methods. Since 1998, Doug has edited QEX/Communications Quarterly: Forum for Communications Experimenters, published bimonthly by the ARRL (www.arrl.org/qex). He was the recipient of the 1998 ARRL Doug DeMaw, W1FB, Technical Excellence Award, and remains involved in various League activities. On the air, Doug enjoys rag-chewing and RTTY operation. Away from the shack, he is an avid amateur astronomer and photographer. He enjoys teaching children about the joys of Amateur Radio. Doug may be contacted through League headquarters: Editor, QEX, 225 Main St, Newington, CT 06111.

Contents

Preface
1 Introduction to DSP
2 Digital Sampling
3 Computer Representations of Data
4 Digital Filtering
5 Analytic Signals and Modulation
6 Digital Coding Systems for Speech
7 Direct Digital Synthesis
8 Interference Reduction
9 Digital Transceiver Architectures
10 Hardware for Embedded DSP Systems
11 DSP System Software
12 Advanced Topics in DSP
Appendix A: Conversion Loss of Passive, Commutative Mixers and the Mean Squares Method of Harmonic Analysis
Appendix B: RMS Value of Sawtooth Waves
Appendix C: Alternate Methods of SSB Demodulation
Appendix D: Details of FIR Filter Design
References
Index

Preface

A digital revolution has already transformed telecommunications and virtually every other facet of our lives. We stand at a point where hardware capabilities have very nearly caught up with the development of theory; this situation has evolved only within the last decade or so of the 20th century. Demands for more communication bandwidth have driven DSP execution speeds higher. Lately, awesome computing power has become available at reasonable cost. It is somewhat ironic that those higher speeds are forcing computer designers to become RF engineers, since their PCB traces are UHF transmission lines; some RF engineers now focus entirely on DSP. We have exchanged our roles! It's just as well, because when we look into DSP, we find that a good grasp of both digital and analog concepts is important. At the interface between the two, many trade-offs present themselves. As you read these pages, therefore, don't be too surprised to find a fair discussion of analog system requirements. That doesn't mean that DSP need be overly complex to learn: it just involves acquiring a new set of skills.

Perhaps it is appropriate to define what we mean when we say DSP. In this volume, DSP refers not only to the process of numerically manipulating sampled signals, but also to how those signals are used. For example, the output of a speech-processing circuit may not be a processed analog signal, but a control signal that performs another function. Also, the input to a DSP algorithm may not be a sampled analog signal. That distinction is not particularly emphasized where it arises, but may be important to keep in mind. Also, note that every DSP construct described in this book has an analog equivalent that, although perhaps incredibly implausible to build, could function nearly the same. We've used analog-circuit equivalents where possible to aid understanding.
For example, the fast Fourier transform, DSP's best spectrum-analysis method, may be thought of as a bank of band-pass filters and detectors. That wonderful equivalence exposes a rich variety of applications, some of which are described. I'm confident another crop will have emerged as soon as the ink on this page has dried.

We hear much lamenting that radio design is too difficult for the amateur these days, since everything has gotten so complex. Make no mistake: a smaller percentage of designers are working on state-of-the-art development than before. Those few, however, are actually making it easier for the rest of us! Highly integrated sub-systems are now readily available to the experimenter, offering features above and beyond anything that was out there even five years ago. I guess there is no going back now that we've tapped into technology that gives us higher performance at lower cost.

It's important that technology, including DSP, serve one purpose: to make things better. Science has a different end: to discover what makes things tick. While this may be a fine line to draw, we must state that the material in this book is presented from an engineering perspective. We've made an effort to balance the theoretical with the practical. Mathematics has been used freely where it presents itself as the most efficient form of expression. Elsewhere, plain English seems to do well enough.

Chapter 1 provides a brief history of DSP and an overview of its application, not only in communications, but also in exploration for oil, restoration of recordings, astronomy and other fields. General benefits and drawbacks of the technology are discussed. Chapter 2 examines the sampling theorem,

aliasing and certain mechanisms at play in real data converters. The relationship between bandwidth and sampling rate is central. Methods of changing the sampling rate of an already-sampled signal are discussed and reasons for wanting to do this are explained.

Representation of signals is a major consideration in DSP systems. Chapter 3 takes up this subject to make clear how numbers are actually stored and manipulated. Chapter 4 begins a look at DSP algorithms with perhaps the most important subset: digital filters. It covers the construction of well-known types and their properties. Adaptive filtering is covered in a later chapter. Chapter 5 introduces the concept of analytic signals and their representation as complex numbers. We revised the chapter in this printing to correct details of certain digital modulation modes. Chapter 6 examines digital coding methods for speech, including error detection and correction in digital transmission. Chapter 7 examines digital frequency synthesis methods, including direct digital synthesis (DDS), fractional-N, and hybrid techniques.

In Chapter 8, duality between time-domain and frequency-domain representations of signals is discussed. It begins with adaptive-filtering algorithms. The chapter continues with a treatment of Fourier transforms and their inverses. The conclusion looks at variations of Fourier-transform methods and their application. Chapter 9 takes us through the design of digital transceivers at the block-diagram level. The discourse starts with DSP at AF stages, then moves the digitization point closer to the antenna in steps. Receivers and transmitters are considered separately but commonality of circuits is illustrated. Chapter 10 describes DSP hardware: general-purpose and dedicated DSPs, data converters and DDC chip sets. Chapter 11 discusses software aspects of DSP design.
For journeyman and advanced readers, Chapter 12 introduces some exciting areas of current research. I believe many of these shall find their way into Amateur Radio soon.

This book begins with basic concepts and gradually brings in more complex ideas. The discussion of sampling theory requires no special math skills; the remainder assumes a working knowledge of algebra, trigonometry and binary arithmetic. Experience with engineering statistics (summation signs) is helpful. Projects require the ability to program a computer and, for advanced development, the purchase of a DSP evaluation kit and other off-the-shelf hardware and software. I hope the book will help you reach a better understanding of this rapidly developing topic.

Acknowledgment

Many thanks to the kind folks at ARRL who participated in the making of this book, including Mark Wilson, K1RO; Joel Kleinman, N1BKE; Jan Carman, K5MA; Dave Sumner, K1ZZ; and the Production crew in Newington for making it shine. Dennis Silage, K3DS, of Temple University has my eternal appreciation for reviewing the material and giving me a good sanity check. His input was invaluable.

Doug Smith, KF6DX
Seymour, Tennessee
October 2003

Introduction to DSP

DSP stands as one of the greatest innovations of the last millennium. Because of rapid advances in microprocessor technology, DSP systems are today revolutionizing the way we live our lives, even, in many cases, without our awareness of them. This chapter takes a look at the role of DSP in fields other than communications and includes a bit about its history.

DSP Without Computers

In the centuries before digital computers were invented, all calculations had to be performed by hand. In the case of celestial mechanics, for example, these calculations became quite complex; large teams of mathematicians would work for weeks on end to obtain a single numerical result. The accomplishments of Galileo Galilei (1564-1642) and Johannes Kepler (1571-1630) are particularly astonishing given that they lacked the calculus of Newton.

In the late 1500s, a Scot named John Napier (1550-1617) discovered a thing called the logarithm that simplified arithmetic by replacing multiplication with addition.[1] Two numbers may be multiplied by adding their logarithms and finding the anti-logarithm of the sum. He started to compile a gigantic book of logarithms while Tycho Brahe (1546-1601) waited in vain for the thing that would speed his calculations manifold. Napier (and Brahe) expired before the book was finished, but Napier's friend Henry Briggs (1561-1630) completed the work and published it in London. Briggs' logarithms were a boon to both earthly and celestial navigators.

It was not until the late 1600s that Nicolaus Mercator (1619-1687) found that the natural logarithm of a number could be represented as the sum of certain fractions relating to that number. This type of sum is now called a Maclaurin series, even though Mercator and James Gregory (1638-1675) were working with them before Maclaurin's birth. It is perhaps ironic that history remembers
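Napier's shortcut is easy to demonstrate on a modern computer. This short Python sketch (our illustration, not from the book, with arbitrary sample numbers) multiplies two values by adding their common logarithms and taking the anti-logarithm of the sum:

```python
import math

a, b = 123.0, 456.0

# Look up the logarithms and add them, as a navigator would with a log table...
log_sum = math.log10(a) + math.log10(b)

# ...then take the anti-logarithm of the sum to recover the product.
product = 10 ** log_sum

print(product)  # very nearly 56088, which is 123 * 456
```

The answer agrees with direct multiplication to within floating-point rounding, just as table-based results agreed to within the precision of the printed logarithms.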


the name Mercator for maps (Gerardus Mercator, 1512-1594, not Nicolaus); Colin Maclaurin (1698-1746) himself invented an important rule for solving systems of equations that is today called Cramer's rule! At any rate, the discovery of the Maclaurin series for natural logarithms meant that very accurate values could be computed, rather than retrieved from a book, for any number, and with reasonably few terms. It is an example of a summation of discrete terms that converges to a result. We shall see this type of construct again later.

Mathematicians of the time realized that breaking down calculations into smaller, more easily soluble parts was their first line of attack on many problems. Iterative methods were also common: the repetitive computation of a set of equations that converges on a result. Most of these techniques are today unnecessary and forgotten; but a few of them are still in use. Their inventors hardly would have called them DSP algorithms, but they remain as part of DSP's legacy.

After Isaac Newton (1642-1727) sprang his genius on the mathematical world, myriad applications for calculus revealed themselves. Often, it was necessary to integrate curves or surfaces for which equations were either unknown or intractable. Practitioners found they could get a reasonable approximation by breaking the curve into many short segments and treating the segments as straight lines or sections of a parabola; the areas under these segments were then added to get the final result. While this method was known long before his birth, Thomas Simpson (1710-1761) mentioned it in his widely read book and thus it is remembered as Simpson's Rule. Again, history gets it wrong, as one of the greatest mathematicians of 18th-century England is remembered not for his unique contributions, but for something that wasn't really his at all.
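The segment-summing procedure just described can be sketched in a few lines of Python (our illustration, not the book's; the 1, 4, 2, 4, ..., 1 weights come from fitting a parabola to each pair of adjacent segments):

```python
def simpson(f, a, b, n=100):
    """Approximate the integral of f over [a, b] using n segments
    (n must be even): fit a parabola to each pair of adjacent
    segments and sum the areas underneath."""
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * f(a + i * h)
    return total * h / 3

# The area under y = x**2 from 0 to 3 is exactly 9; Simpson's Rule
# reproduces it to machine precision, since parabolas fit a parabola.
print(simpson(lambda x: x * x, 0.0, 3.0))
```

Note how the routine touches the function only at discrete, evenly spaced points and forms a weighted sum, exactly the shape of the digital-filter computations in Chapter 4.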
Simpson's Rule is nevertheless a very early example of the representation of continuous waveforms as discrete samples. In fact, it is remarkable how similar the process is to that of digital-filter algorithms. You may note that similarity when you get to Chapter 4.

Using the calculus, Joseph Fourier (1768-1830) discovered the relationship between application of heat to a solid body and its propagation.[2] By 1812, he had a complete explanation of the effect and was awarded a great prize for scientific accomplishment; the judges were Laplace, Legendre and Lagrange! Fourier could scarcely have imagined the impact his work has had on almost every imaginable discipline. His methods as applied to analysis of signals are treated in Chapter 8.

When working with Fourier transforms, a great many integrals are involved; those often include integration over infinite spaces of time and distance. Folks discovered, though, that there was little point in including the contributions of a heat source that had been applied, say, several hours before. Equilibrium had already been reached. We may suppose that at some stage, a bright engineer asked himself, "Why should I integrate over infinity? I will just integrate the significant (recent) parts and truncate the input data to that length." Combined


with integration using Simpson's Rule, this approach constitutes finite, discrete Fourier transforms that are quite similar to what is done in modern DSP systems. It shows that discrete signal processing was not foreign to those who came long before us. Although it provided a way to efficient computing, they did not have the computers to make it shine.

Fourier transforms became so important in physics that many highly skilled mathematicians applied their wits to breaking down their computational complexity. There is evidence that Carl Gauss (1777-1855) worked on the problem with some success, even anticipating Fourier in many ways.[3] At the start of the 20th century, Carl Runge (1856-1927) analyzed the problem intensively and produced a solution very similar to today's fast Fourier transform (FFT). Because even the reduced calculations were not practical by hand, the discovery was largely overlooked until Cooley and Tukey picked up the gauntlet in the 1960s.[4] By then, digital computers were ready for the task.

All this shows that the roots of DSP run deep. The development of the atomic bomb, with Richard Feynman (1918-1988) in charge of computations, could not have been accomplished without it. The orbital mechanics necessary to put Armstrong and Aldrin on the Moon would have been nearly impossible. While those endeavors had computational assistance, the help was pretty basic compared with what we have today.
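The computational saving that Runge, Cooley and Tukey pursued is easy to see in code. This Python sketch (ours, not from the book) computes the same spectrum two ways: a direct DFT that costs on the order of N² operations, and a radix-2 Cooley-Tukey FFT that costs on the order of N log N:

```python
import cmath

def dft(x):
    """Direct discrete Fourier transform: about N**2 complex multiplies."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def fft(x):
    """Radix-2 Cooley-Tukey FFT: about N*log2(N) complex multiplies.
    The length of x must be a power of two."""
    N = len(x)
    if N == 1:
        return list(x)
    even, odd = fft(x[0::2]), fft(x[1::2])  # split, transform the halves...
    twiddle = [cmath.exp(-2j * cmath.pi * k / N) * odd[k] for k in range(N // 2)]
    return ([even[k] + twiddle[k] for k in range(N // 2)] +  # ...and recombine
            [even[k] - twiddle[k] for k in range(N // 2)])

x = [1.0, 2.0, 0.0, -1.0, 1.5, 0.0, -2.0, 0.5]
# Both methods produce the same spectrum, to within rounding error.
assert all(abs(s - d) < 1e-9 for s, d in zip(fft(x), dft(x)))
```

For eight samples the saving is negligible; for the million-point transforms of modern practice, the N²-versus-N log N gap is the difference between impossible and routine.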

DSP With Computers

In general, modern DSP systems characterize and modify analog signals, producing other analog signals as their outputs. Note that this is not always the case, though; the output of a DSP circuit might just as well be the opening of a squelch gate or the triggering of a VOX. Alternatively, DSP algorithms may use non-linear stimuli as their inputs and even produce non-linear outputs; but for the most part, we are interested in exploiting DSP's advantages in the linear systems we're used to.

Most DSP hardware systems take a very flexible approach in that they include means of translating analog signals to digital form, a microprocessor to manipulate those digital signals, and then of converting the results back to analog. Such a system may be defined completely by the program running on the microprocessor, and is therefore software-defined. Both hardware and software may have some say-so about how signals are converted between analog and digital formats. Setting of the sampling rate and analog input bandwidth may be placed under microprocessor control. Most DSP systems operate at fixed sampling rates and ratios, but certain advantages may come to the designer who makes them variable.

Many DSP systems look like analog, two-port networks: analog signals into and out of a black box that somehow transforms the signals. That transform may be virtually anything you can think of that is physically possible, such as a filter or a speech processor. It may also be some esoteric function like


a bandwidth compressor: programming determines function. So a large part of DSP involves programming of digital computers; however, advances in circuit integration are producing dedicated DSP chips that perform specific functions very well, such as filtering and mixing. On the other hand, field-programmable logic devices make excellent platforms for dedicated DSP functions. We thereby retain the full measure of flexibility.

DSP systems are popular because they tend to exhibit better performance than analog equivalents. One reason for this is illustrated in filter design. An analog filter consists of certain physical components, such as inductors and capacitors, that determine its characteristics. To meet tight tolerances on its passband or stopband, such a filter demands equally tight tolerances on its component values. In a digital filter, the "component" values are stored as numbers in a computer: they are known as coefficients. These coefficients are the same from unit to unit and so, very nearly, is the frequency response of the filters. Temperature variations are nonexistent. This freedom from variation lets us attempt complex filters that would be beyond reason in the analog world. Digital filters having more than one hundred coefficients are commonly used. Imagine trying to construct an analog filter with half that many poles! Another positive note is that DSP filters don't exhibit higher loss as they get more complex, but they do get an additional noise contribution. This isn't usually a significant factor, though, in practical circuits.

A second outstanding reason for using DSP is that it usually eliminates a lot of hardware from traditional analog designs, thus lowering cost. An expensive set of analog band-pass filters, for example, might be replaced with a much larger set of superior digital filters (as many as the associated coefficient memory would hold) at little additional cost.
Modulator, demodulator, squelch and speech-processing circuits might be eliminated entirely in favor of DSP. As hardware capabilities progress, the digitization point in transceivers will move closer to the antenna jack, eliminating still more hardware. This trend comes at a price, though, for DSP hardware; that price is somehow proportional to the bandwidth in which we're interested. DSP also tends to tax time resources because of the need to learn programming constructs.
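To make the coefficient idea concrete, here is a minimal FIR filter in Python (our sketch, not a design from this book). The five numbers in `coeffs` are the entire "circuit": every copy of the program has exactly the same frequency response, with no component tolerances or temperature drift.

```python
def fir_filter(samples, coeffs):
    """Compute each output sample as a weighted sum of recent inputs;
    the weights are the filter's coefficients."""
    out = []
    for n in range(len(samples)):
        acc = 0.0
        for k, c in enumerate(coeffs):
            if n - k >= 0:
                acc += c * samples[n - k]
        out.append(acc)
    return out

# A 5-tap moving-average low-pass filter: equal coefficients summing to 1,
# so a steady (DC) input passes through unchanged once all taps are filled.
coeffs = [0.2, 0.2, 0.2, 0.2, 0.2]
smoothed = fir_filter([1.0] * 10, coeffs)
print(smoothed[-1])  # within rounding error of 1.0
```

Swapping in a different coefficient list changes the filter completely without touching any "hardware"; that is the flexibility the paragraph above describes.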

DSP in Other Technical Fields

DSP has found its way into fields other than communications. In many cases, advancement of the technology has been driven forward by necessity; in other cases, it has served mainly an analytic role. This distinction may be illustrated with examples from the engineering world, the arts, the pure sciences and medicine. The following examples are offered as evidence of DSP's versatility and emphasize its wide-ranging impact on humankind.

DSP in the Search for Fossil Fuels

DSP is a marvelous tool that has helped unlock some of nature's best-kept secrets. It has achieved what was generally considered quite implausible even


10 years ago. Many algorithms developed for one particular application have found new uses in otherwise unrelated areas. An outstanding example of this is the concept of echo-cancellation as used in the exploration for oil and other underground fossil fuels. Algorithms nearly identical to those used in geophysics were originally created to eliminate echoes on the public switched telephone network. Much of this early work was done at Bell Laboratories. Similar algorithms are now employed to combat multi-path distortion on radio communications paths.

Come back in time with us to 1939. Suppose you are a geophysical engineer for an oil company and your job is to build a machine that finds pockets of "black gold" underground. The boss wants you to get cracking, because WWII has just started and the need for the stuff is about to soar. He knows his current teams are achieving only a 1-in-12 strike rate; that is, they find significant oil deposits at only 8.3% of their test wells. You have already worked out a clever scheme to improve on this and filed a patent application for it, but it will be another three years before the patent is issued. Your name is R. T. Cloud.[5]

You reason that to find something that is hidden from view, you must apply some kind of stimulus to it and see how it reacts. Your weapon of choice is a hydraulic ram. To test your invention, you locate it where you already know a large subterranean oil pocket exists. You arrange for the ram to deliver swept sine-wave excitation to the surface of the Earth. Then you bury some microphones at strategic spots (geophones) to pick up sounds coming back from the Earth. Ordinarily, it is very quiet down there because Texas does not get many earthquakes; but when you fire the ram, you get a lot of sound, seemingly from everywhere! Refer to Fig 1.1.
The hydraulic ram applies impulses to the Earth and these impulses propagate away in all directions, including straight to geophone #1, which is located very close to the ram so as to capture only the incident wave. Distant geophones pick up waves directly through the Earth through paths near the surface; these are of little interest. They also record impulses reflected from objects and from the discontinuities between layers of dissimilar material deep beneath the surface. You know this because of the delayed times of arrival of these echoes. Armed with a reasonable guess about the properties of the Earth's crust beneath you (and perhaps some core samples), you may deduce the distance traveled by the echoes and therefore the depth of the objects or reflecting layers. By placing enough geophones, you obtain enough spatial diversity to triangulate the exact locations of reflecting matter underground.

You compare the data from the various geophones and find data from each that correlate with the others. Further reasoning reveals that you could build a circuit using a tapped delay line and attenuators that would allow you to cancel any particular echo by manually adjusting the taps and the attenuation applied to each. See Fig 1.2. You build it and it works. When you examine the final delay taps and attenuator settings, you find they form an image of the propagation properties of the path taken by that particular echo. You go into the field and discover you can



Fig 1.1-Hydraulic ram applying impulses to the surface of the Earth.

tell the boss just where to drill. Your strike rate is now 1 in 6; you have doubled output and you get a big raise!

Now another bright geologist comes along and points out that your wells are having to go deeper than ever before to hit home; most of the easy oil has already been found. He reasons that the Earth's crust is thinner at the ocean floor, so it is a better place to look, but sinking the hydraulic ram to the sea floor is a bit much. He decides that a single impulse is as good as swept sine waves (they are both broadband) and he packs some explosives in his kit and heads to sea with your machine. Some say that he is already "at sea" with that idea, but they are wrong. His more powerful arrangement requires some revamping of the machine, but eventually his strike rate reaches 1 in 4.

As we zoom forward to the present day, geophysical engineers are able to


Fig 1.2-Manual system for canceling a particular echo.


produce 3-D renderings of subsurface structures using techniques that grew from this early work. They can even plot the motions of oil in pockets and discern its temperature and density. Adaptive DSP methods have thus risen to dominance in the field of oil exploration.
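A single tap of Cloud's machine is easy to simulate. In this Python sketch (our illustration, with made-up numbers), geophone #1 supplies the incident waveform, and subtracting a delayed, attenuated copy of it removes one echo, just as the manually adjusted delay line and attenuator of Fig 1.2 do:

```python
# Incident waveform captured at geophone #1, right next to the ram.
incident = [1.0, 0.5, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]

# A distant geophone hears the incident wave plus an echo arriving
# 4 samples later at 60% amplitude (hypothetical values).
received = [s + (0.6 * incident[n - 4] if n >= 4 else 0.0)
            for n, s in enumerate(incident)]

def cancel_echo(received, reference, delay, attenuation):
    """Subtract a delayed, attenuated copy of the reference signal:
    one tap of the adjustable delay line of Fig 1.2."""
    return [r - (attenuation * reference[n - delay] if n >= delay else 0.0)
            for n, r in enumerate(received)]

cleaned = cancel_echo(received, incident, delay=4, attenuation=0.6)
# With the tap matched to the echo, only the incident wave remains.
assert all(abs(c - i) < 1e-12 for c, i in zip(cleaned, incident))
```

The settings that cancel the echo (`delay=4`, `attenuation=0.6`) are exactly the "image of the propagation path" the text describes; an adaptive filter automates the knob-twiddling that Cloud did by hand.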

DSP in the Recording Studio

Now suppose you are a top-flight recording engineer and your company embarks on a project to restore some old recordings. One set includes Enrico Caruso (1873-1921) singing in Milan in the early part of the last century. The original recording engineer was Emil Berliner (1851-1929). The audio has since been transferred from its original medium, wax discs, to vinyl discs, but you are not sure of the quality of equipment that was used to do that. You do know a lot about the original recording equipment and, of course, you know that the original recording hall has not changed much over the years.

The recordings are plagued by pops, clicks and broadband background noise that are obviously undesirable. They are also colored by the inadequacies of Berliner's equipment. You don't want to eliminate too many of the echoes, since they constitute much of the acoustic quality of the room, but you do want to compensate for the poor directional characteristics of those ancient microphones. You wish to fix these things so the result will sound like a modern, digital recording; a tall order, to be sure! With exact measurements of the hall and of some of the old equipment, you determine a net transfer characteristic for the system, and you arrange to build a DSP system that corrects the original system's flaws through the process of deconvolution, discussed in detail later.

What you cannot precisely define, though, is when a pop or a hiss will appear on the recording; however, you do know a pop when you hear it and this information may be used to eliminate it. Being a smart engineer, you know that you must take what the game gives you and use all the information at your disposal to achieve the goal. You realize you are fortunate in that you do not have to process the data in "real time" and you may take as long as you like; but you also know that to manually eliminate all the faults, including the background noise, would be a herculean task.
You therefore draw on DSP algorithms to do it for you. One critical bit of information in your possession is that the pops, clicks and noise do not resemble the desired program material very much. Transient events on the recording do not sound like Enrico at all, and thus ought to be removable; broadband noise does not sound much like him either. His output is characterized by a tremendous range of tonal qualities and by a fairly large dynamic range. The key word here is tonal, since the singer produces waves that are the sums of large numbers of sinusoids at different frequencies and their harmonics. These waves are therefore repetitive in some way over short time spans. This fact may be used to differentiate between desired and undesired content.

So, very much like geophysicist Cloud, you build a circuit that allows you


to correlate current chunks of audio with recent chunks. You accept only the current chunks that resemble the recent ones, and reject the rest. The pops and clicks are eliminated, since they do not repeat themselves over short time frames. Broadband noise also does not correlate with itself, since it is random. This adaptive noise-reduction technique will be covered in detail in Chapter 8.
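The discrimination principle can be demonstrated directly: a tonal chunk correlates strongly with a slightly earlier chunk of itself, while random noise does not. A small Python illustration (ours, with arbitrary parameters):

```python
import math
import random

def correlation(x, y):
    """Normalized correlation of two equal-length chunks (ranges -1 to 1)."""
    num = sum(a * b for a, b in zip(x, y))
    den = math.sqrt(sum(a * a for a in x) * sum(b * b for b in y))
    return num / den if den else 0.0

N, lag = 200, 50                     # chunk length and look-back, in samples
tone = [math.sin(2 * math.pi * 0.05 * n) for n in range(N + lag)]
random.seed(1)                       # fixed seed so the run is repeatable
noise = [random.gauss(0.0, 1.0) for _ in range(N + lag)]

# Compare each signal's current chunk against its chunk from `lag` samples ago.
tone_similarity = abs(correlation(tone[lag:], tone[:N]))
noise_similarity = abs(correlation(noise[lag:], noise[:N]))
print(tone_similarity > 0.9, noise_similarity < 0.5)  # True True
```

A noise-reduction filter built on this idea keeps the strongly self-correlated (tonal) content and suppresses the rest, which is exactly what the restoration engineer above needs.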

DSP at the Telescope

DSP may also be applied with advantage to the field of image compression and enhancement. In fact, this is one of the fastest-growing areas of current research. As in the above examples, DSP is extended to two and three spatial dimensions by virtue of a particular system's architecture.

In the science of astronomy, men and women study the part of the universe that lies beyond Earth's atmosphere, notwithstanding that meteors cause a bit of overlap. A large part of their work involves extracting information from data taken near the limits of our ability to measure. In many cases, a very small number of photons arriving at the film or sensor plane may be all that is required to confirm or refute some premise. DSP is handy in these and many other situations related to image processing.

Thanks to the rapid proliferation of charge-coupled devices (CCDs) and personal computers, many folks have been exposed to digital image processing. Computer programs are now readily available that perform very sophisticated transformations on digital image data: a group of techniques employing maximum-entropy methods is among the foremost of those. Still, basic properties of an image are familiar terms to any television viewer: brightness, contrast, hue, resolution. By digitizing image data, control over all these properties is given to DSP, with advantage. For example, many astronomical objects of interest are high-contrast, but some are not. The surfaces of planets and moons within the Solar System, faint nebulae and galaxies may reveal unseen parts of their structures to the DSP-equipped astronomer. Image traits such as contrast and hue may be manipulated to enhance available information content as long as the original traits are not destroyed. As will be noted later, even resolution may be improved by DSP control of data acquisition.

One of the first applications of CCDs was in observing the surface of the Earth from orbit.
This government project fostered the development of enhancement techniques that many enjoy today on their home PCs. Many types of semiconductor sensors are now employed, including CMOS, to record images. That does not mean that film is obsolete, though. Astronomers have long held that film-plane images with resolution in the 10-µm range are the best obtainable. This is about equal to the resolution of CCDs, but very large CCDs are difficult to build because of the limitations of current silicon technology. Millions of picture elements (pixels) require millions of transistors on a single slice of material and at least a few of them are bound to fail on fabrication or in the field. Multiplexing techniques have minimized this problem, especially in large LCD displays. Astronomers can live with a few bad pixels, just as they lived with dust specks and scratches for so many years. Image scale may be


very important in recording all the information from a particular subject and, at the time of this writing, film still wins by its sheer number of pixels. An 8 x 10-inch piece of film, for example, contains this number of 10-µm pixels:

(80 in²)(645 x 10⁶ µm²/in²)(0.01 pixel/µm²) = 516 x 10⁶ pixels    (1)

Whether the data were digitized at acquisition or later, processing them all is a formidable task; but again, you do not necessarily have to do it in real time. Correlation properties mentioned above form the basis for deciding how image data differ from other types of signals and, therefore, how they should be differently processed. One continuing, robust segment of the image-processing community is focused on data compression. Various standards have emerged in this category, including JPEG, MPEG, wavelet transforms and fractal coders. Descriptions of these are mainly outside the goals of this book. Let it suffice that achievements in image compression have exceeded expectations to the point where digital television (DTV) broadcasting is plausible; however, we have to acknowledge that advances in modulation formats are equally responsible. As mentioned, DSP is also used in astronomy to enhance data as they are gathered. This is often a requirement for best performance. Telescopes have been built that employ adaptive reflecting elements whose alignment changes rapidly in response to variations in the atmosphere, temperature and so forth to keep an image steady. The trend these days is to take advantage of whatever spatial diversity a particular system gives you by using smart DSP algorithms. Even without this, though, DSP may be used to cancel distortions encountered in image formation. Witness the design of a "contact lens" that was placed in the optical path of the Hubble Space Telescope to correct its "vision." Scientists and engineers found the refractive function that nearly nullified the gross error in the original grinding of the primary mirror; it would have been useless without modern DSP ray tracing.

DSP in Medicine

Medical doctors have learned some DSP, too. As in the case of echo cancellation above, physicians may employ DSP to cancel a pregnant woman's heartbeat and listen to the heartbeat of an unborn child within her. Again, the spatial diversity of the sensor (stethoscope) array may be exploited to accomplish what otherwise would be impractical. One stethoscope is placed near the mother's heart and produces a signal that contains little of the heartbeat from the fetus. Another is placed farther down the abdomen and produces a signal that contains both heartbeats. A DSP system is constructed that finds the correlation in the two signals, that which they have in common: the mother's heartbeat. The system is made to perfectly cancel the mother's heartbeat, and what is left is that of the fetus. As you will see below, many other criteria may be used to condition signals for use in DSP. Some use spatial diversity, some use bandwidth and some use temporal qualities to differentiate between desired and undesired signals. The possibilities are endless. Before we dive into advanced topics, though, a discussion of how analog signals are sampled is in order.
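The two-stethoscope arrangement can be sketched with a one-tap LMS canceller. This is only an illustration of the correlation idea, not a medical algorithm; the heart rates, sample rate and step size here are invented for the demonstration.

```python
import math

fs = 500.0                      # sample rate in Hz (illustrative value)
n = 10000
mother = [math.sin(2 * math.pi * 1.2 * k / fs) for k in range(n)]   # ~72 BPM
fetus = [0.2 * math.sin(2 * math.pi * 2.3 * k / fs) for k in range(n)]

ref = mother                                   # chest sensor: mother only
abd = [m + f for m, f in zip(mother, fetus)]   # abdominal sensor: both

# One-tap LMS canceller: scale the reference until the correlated
# (maternal) component of the abdominal signal is removed.
w, mu = 0.0, 0.002
out = []
for x, d in zip(ref, abd):
    e = d - w * x       # residual after subtracting the estimated maternal beat
    w += mu * e * x     # adapt toward the component common to both sensors
    out.append(e)

# After convergence the residual tracks the fetal signal.
resid = sum((o - f) ** 2 for o, f in zip(out[-2000:], fetus[-2000:])) / 2000
print(round(w, 1), resid)
```

With uncorrelated heartbeats the weight settles near 1 and the residual is essentially the fetal signal alone; a real system would use a many-tap adaptive filter, but the principle is the same.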


Digital Sampling

Fundamental Sampling

Sampling may be simply described as the process of making periodic measurements of a signal and storing them. For purposes of this discussion, we define a signal as something in the physical world that can be measured and that contains information. It would be nice if those measurements captured all the information contained in a signal, but that is not always the case. Frequently we are interested in only one signal of many. We may discuss taking samples of signal parameters such as bandwidth or signal-to-noise ratio (SNR), just as well as amplitude versus time. Also, instead of taking samples at regular intervals of time, we may elect to take them at regular intervals in space; or, we may even talk about intervals of time-space and time-frequency. In electronics, sampling most often refers to measuring a signal's voltage at regular time intervals. This is the simplest case and most other representations stem from it. This process is illustrated in Fig 2.1. A sine wave is shown being sampled at times denoted by a sampling function. Note that the frequency of the sine wave being sampled is much less than the sampling frequency, f_s. That is, we are taking many samples during each cycle of the sine wave. Note that the sequence of samples still resembles a sine wave. Although it does not contain information about the actual voltage between samples, we may state that all the information about amplitude versus time has been acquired by the sampled signal. This sampled signal also contains new information, though: information about the sampling process. Difference in information content would be evident were we to examine the spectra of the two signals. All the energy in the sine wave is concentrated very near a single frequency; the sampled signal's spectrum is obviously not the same, since it is composed of steps separated by the sampling period. The


Fig 2.1-At A, a sine wave; at B, the sine wave sampled at times denoted by a sampling function.

f_s/2, the first alias and the fundamental exchange their places in the two halves of bandwidth f_s. A sine wave of frequency f_s - f would produce a sampled signal identical to that of a sine wave of frequency f, as shown in Fig 2.2. This demonstrates that to avoid aliasing, we must limit the bandwidth of our sampler's input to half the sampling frequency: That sampling rate is the often-misquoted Nyquist frequency. An analog filter is typically employed to do this and is called an anti-aliasing filter. Once aliasing has been incurred, nothing may remedy it, and information about input signals may be destroyed. Higher-frequency signals take on the identities of lower-frequency signals, and vice versa. This frequency-translation property may be exploited, though, to minimize sampling frequencies. As described below, this is quite desirable in many instances.
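The alias identity is easy to verify numerically. In this minimal sketch (the frequencies are arbitrary), a tone at f_s - f yields exactly the same sample sequence as a tone at f:

```python
import math

fs = 1000.0        # sampling frequency, Hz (illustrative)
f = 150.0          # in-band tone
f_alias = fs - f   # 850 Hz: beyond fs/2, so it masquerades as 150 Hz

x_low = [math.cos(2 * math.pi * f * k / fs) for k in range(50)]
x_high = [math.cos(2 * math.pi * f_alias * k / fs) for k in range(50)]

# cos(2*pi*(fs - f)*k/fs) = cos(2*pi*k - 2*pi*f*k/fs) = cos(2*pi*f*k/fs),
# so the two sampled sequences are indistinguishable.
worst = max(abs(a - b) for a, b in zip(x_low, x_high))
print(worst < 1e-9)
```

Once sampled, no amount of processing can tell the two tones apart, which is why the anti-aliasing filter must act before the sampler.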

Harmonic Sampling

Let us look at the case where the sampling frequency is less than that of an input sine wave; that is, f > f_s. See Fig 2.3. Notice that the shape of the sampled signal no longer matches the input signal; it retains the shape of a sine wave, but of lower frequency. This is the situation, mentioned above, that is ordinarily to be avoided; but a downward frequency translation is useful in the design of radio transceivers. It is equivalent to a frequency translation obtained in a mixer, which analogy was mentioned before. With certain restrictions, it allows sampling of signals that are much higher than the sampling frequency. In practical systems, higher sampling frequencies place a higher burden on sampling hardware to achieve accuracy, and on software to complete processing tasks between samples. They also usually incur a significant current-consumption penalty. Any technique that minimizes these factors is therefore quite desirable. Caution is required, though, since an input signal near twice the sampling frequency would produce the same sampled signal as that of Fig 2.3. To use this technique, then, we must first bandwidth-limit the input, as before; but this time, a band-pass anti-aliasing filter (BPF) is called for. This technique is called harmonic sampling. Now the largest bandwidth for the filter is f_s/2. Placing the filter's passband between the fundamental (or some harmonic) of f_s and the point halfway to the next higher harmonic ensures that the entire bandwidth is usable. A frequency translation will take place equal to an integral multiple of f_s, but no information about the input signals will be lost. A spectral representation of harmonic sampling is shown in Fig 2.4. Again, the sampled spectrum is the convolution of the two input spectra, as in the mixer analogy above.
No in-band aliasing occurs because the sampled bandwidth is less than half f_s. Harmonic sampling is also called bandpass sampling. Its use is critical in the design of IF-DSP and digital direct-conversion transceivers. The subject is introduced again in detail as we explore digital transceiver architectures in Chapter 9.

Fig 2.3-Sampling of a sine wave greater than the sampling frequency.
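A quick numerical sketch of harmonic sampling (all frequencies here are invented): a tone above 2 f_s, once sampled, is indistinguishable from its translation down by an integral multiple of f_s.

```python
import math

fs = 40000.0            # sampling rate, Hz (illustrative)
f_if = 93000.0          # input tone in the band between 2*fs and 2.5*fs
f_bb = f_if - 2 * fs    # expected translation: 13 kHz after sampling

x_if = [math.cos(2 * math.pi * f_if * k / fs) for k in range(64)]
x_bb = [math.cos(2 * math.pi * f_bb * k / fs) for k in range(64)]

# The 93 kHz tone produces the same samples as a 13 kHz tone.
worst = max(abs(a - b) for a, b in zip(x_if, x_bb))
print(worst < 1e-9)
```

This is the mixer-like translation the text describes; the band-pass anti-aliasing filter's job is to ensure only one such band reaches the sampler.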

Data Converters and Quantization Noise

The device that performs sampling is generally called an analog-to-digital converter (ADC). At each measurement, an ideal ADC produces a number that is directly and exactly proportional to input amplitude. Computer representations of this number mean that only a finite number of values are possible; usually, the number is in straight base-two or binary format having some number of bits, b, available. An 8-bit ADC, for example, can only give one of 256 values. This means the amplitude reported is never exact, but only the closest of those at hand. The difference between the actual input value and the reported value is called the quantization error.
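A minimal model of an ideal quantizer makes the error bound concrete; the 8-bit resolution and 2-V peak-to-peak range are assumed for illustration.

```python
import math

b = 8
vmax = 2.0                  # peak-to-peak input range (assumed)
lsb = vmax / 2 ** b         # smallest resolvable step

def adc(v):
    """Ideal quantizer: report the nearest of the 2**b available levels."""
    return round(v / lsb) * lsb

# For in-range inputs the error never exceeds half an LSB.
worst = max(abs(adc(v) - v)
            for v in (0.99 * math.sin(0.1 * k) for k in range(1000)))
print(worst <= lsb / 2)
```

This half-LSB bound is exactly the peak error amplitude used in the noise derivation that follows.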


Fig 2.4-At A, spectrum of the sampling function. At B, spectrum of a band of real signals. At C, spectrum of a harmonically sampled band of real signals.

Assuming that the input signal is changing and covers a fairly large range of quantization values, the error is just as likely to be positive as negative. It is also just as likely to be small as large, within certain limits. Hence, this error signal, a sequence in its own right, is pseudo-random and appears as quantization noise. In a perfect ADC, the error cannot exceed ±½ of the least-significant bit of the converter; therefore, this is the error signal's peak amplitude. The sequence reported by the ADC can thus be thought of as the sum of the real input signal and the quantization noise. Quantization noise is normally spread uniformly over the entire sampling bandwidth of f_s/2. From the previous discussion, we might expect that its amplitude is somehow proportional to the characteristics of the input signal. We


want to find how it limits our maximum SNR in data converters. That maximum is very likely to occur when the input signal occupies the entire range of quantization levels, since its power will be maximized. Let us call this "rail-to-rail" voltage V_max. Remember that this is the peak-to-peak input voltage. The smallest voltage step the converter can resolve is then:

ΔV = V_max / 2^b    (3)

We have stated that the error signal, e, has a range of -ΔV/2 ≤ e ≤ ΔV/2, or a peak-to-peak amplitude of ΔV. In our first use of engineering statistics, the RMS noise power into a 1-Ω load is found by integrating the square of the error (power is proportional to voltage squared) over the range of possible errors, each value of which is as likely as the next:

σ²_noise = (1/ΔV) ∫ from -ΔV/2 to ΔV/2 of e² de
         = (1/ΔV) [e³/3] from -ΔV/2 to ΔV/2
         = (1/ΔV) (ΔV³/24 + ΔV³/24)
         = ΔV²/12    (4)

The RMS noise voltage is just the square root of this, or:

V_noise = ΔV / √12    (5)

When this is applied over the input bandwidth of f_s/2, the noise power per unit bandwidth (per Hz) is:

N_D = (ΔV²/12) / (f_s/2) = ΔV² / (6 f_s)    (6)

in watts/hertz. The power into a load of R ohms is just this divided by R. It is, perhaps surprisingly, not related to the nature of the input signal but only to the sampling frequency. We stated the input signal was large when we started. Small signals that do not exercise many quantization levels are liable to produce different results. In fact, when the input signal and f_s have a harmonic relationship, quantization effects tend to concentrate the noise at discrete frequencies. This may have an impact on dynamic range, as discussed further below. Now for the SNR calculation. The RMS input power of a V_max sine wave, again normalized to one ohm, is:

P_sig = (V_max/2)² / 2 = V_max² / 8    (7)

and so the SNR is found by taking the ratio of this value to the noise power:

SNR = P_sig / σ²_noise = (V_max²/8) / (ΔV²/12) = (3/2) 2^(2b), or about 6.02 b + 1.76 dB    (8)

This equation conveniently places an upper limit on the dynamic range of a b-bit ADC. Additional pseudo-random noise sources appear in ADCs, though, that tend to limit performance still further.
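Eq 8 can be checked numerically with an ideal-quantizer model; the test frequency is an arbitrary incommensurate value so the quantization error behaves pseudo-randomly, as the derivation assumes.

```python
import math

def sine_snr_db(b, n=50000):
    """Quantize a full-scale sine with an ideal b-bit ADC; return measured SNR."""
    lsb = 2.0 / 2 ** b                   # input range taken as -1 V to +1 V
    sig = err = 0.0
    for k in range(n):
        v = math.sin(2 * math.pi * 0.1234567 * k)   # incommensurate tone
        q = round(v / lsb) * lsb                    # ideal quantizer
        sig += v * v
        err += (q - v) ** 2
    return 10 * math.log10(sig / err)

for b in (8, 12):
    print(b, round(sine_snr_db(b), 1), round(6.02 * b + 1.76, 1))
```

The measured figures land within a fraction of a decibel of 6.02b + 1.76 dB, confirming that the noise depends only on the step size, not on the signal.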

Aperture Jitter

Noise is introduced in ADC results by variations in the exact times of sampling. While this effect is described here in a negative light, later it will be shown that with control, this noise may actually be helpful in extending dynamic range. Phase noise or jitter in an ADC's clock source, as well as other inaccuracies in sampling mechanisms, may produce undesired phase modulation of the sampled signal. Again assuming the effect is uncorrelated with the input signal, this aperture-jitter noise will be uniformly distributed across the input bandwidth. We may express the mean aperture jitter as σ_a in seconds. This is the given quantity, and we want to extract the noise power produced by the phase modulation it causes. Now in this case, it is obvious that the input signal's frequency will come into the equation because the modulation index will change. In addition, it clearly matters how large σ_a is with respect to the sampling period, 1/f_s. It is now convenient to switch to discussing frequencies in both radians per second (angular format) and in hertz. Recall the simple relation:


ω = 2πf    (9)

For small phase deviations, a PM signal may be represented by:

V_PM = A cos(ω_c t + β sin ω_m t)
     ≈ A [cos ω_c t - β (sin ω_c t)(sin ω_m t)]
     = A cos ω_c t - (Aβ/2) cos(ω_c - ω_m)t + (Aβ/2) cos(ω_c + ω_m)t    (10)

where ω_c is the carrier frequency and A its amplitude; ω_m is the modulation frequency, both in radians per second; β is the modulation index, or the peak phase deviation in radians (from Reference 9). The first term in Eq 10 is the carrier; the second, the lower sideband; and the third term is the upper sideband. For β << 1, theory shows that very little energy is contained in higher-order sidebands. The phase deviation caused by the jitter is just equal to the ratio of jitter to carrier period, times 2π radians:

β = 2π σ_a / T_c = 2π f_c σ_a = ω_c σ_a    (11)

Substituting this value into a sideband term in Eq 10 and assuming A = 1 produces a single-sideband noise power of:

P_SSB noise = (β/2)² = ω_c² σ_a² / 4    (12)

There are two sidebands, so the total noise power is twice that:

P_total noise = ω_c² σ_a² / 2    (13)

Now the maximum sine-wave input has A = 1 and its power is 1/2. Maximum SNR is therefore:

SNR_max = (1/2) / (ω_c² σ_a² / 2) = 1 / (ω_c² σ_a²) = 1/β²    (14)

This equation reveals that values of β near 10⁻⁵ are where aperture jitter becomes a problem for 16-bit converters. For example, in the case where f_c = 100 kHz and σ_a = 20 ps, SNR ≈ 98 dB. Assuming uniform noise distribution to f_s/2, dividing Eq 13 by f_s/2 yields the noise density in W/Hz:

N_D = (ω_c² σ_a² / 2) / (f_s / 2) = 4π² f_c² σ_a² / f_s    (15)

Note that this effect increases with the squares of the input frequency and the sampling jitter, but in inverse proportion to only the first power of the sampling frequency. That makes it difficult to maintain dynamic range while increasing the sampling frequency. Jitter is a major concern for DDC designers dealing with VHF and UHF signals and very high sampling frequencies, such as those working on high-speed data-communications systems. It is sometimes useful to think of the noise density-to-signal ratio for a particular converter. Divide Eq 15 by the maximum signal power of 1/2 to obtain:

(N/S)_D = 8π² f_c² σ_a² / f_s    (16)
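Eq 14 is simple enough to evaluate directly; this sketch just reproduces the 100-kHz, 20-ps example from the text.

```python
import math

def jitter_snr_db(f_c, sigma_a):
    """SNR ceiling set by aperture jitter, per Eq 14: SNR = 1 / beta**2."""
    beta = 2 * math.pi * f_c * sigma_a   # peak phase deviation, Eq 11
    return -20 * math.log10(beta)

# The text's example: a 100 kHz input and 20 ps of jitter.
snr = jitter_snr_db(100e3, 20e-12)
print(round(snr))   # about 98 dB, right at the 16-bit limit of Eq 8
```

Doubling either the input frequency or the jitter costs 6 dB, which is why jitter dominates at VHF and above.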

Nonlinearities

Nonlinearity in data converters means distortion and more noise. Quantization steps in any converter are not perfectly spaced, and conversion results are contaminated by the inaccuracy. In general, two types of nonlinearity are characterized: differential nonlinearity (DNL) and integral nonlinearity (INL). DNL is a measure of output nonuniformity from one input step to the next. It is expressed (in bits) as the maximum error in the output between adjacent input steps over the entire input range of the device. We're discussing the accuracy of the smallest steps a converter can resolve. Noisy, low-order IMD products produced by this effect tend to influence the dynamic range of any particular device. Manufacturers have recently begun specifying spurious-free dynamic range (SFDR) for their devices under actual operating conditions; thank you! Usually, this is given for single-tone conditions; two-tone measurements are not as common. Obviously, nonlinearities in a data converter will result in IMD that may have bearing on various applications. In addition, a harmonic relationship between input signals and f_s may tend to concentrate spurious energy in discrete bands. That is a troublesome subject to illustrate with mathematics; a myriad of aliases and in-band


distortions may limit performance unless careful attention is paid to specification of data converters. It is not hard, though, to show that jitter, as much as it is deplored above, may help dissipate those discrete spurs. That subject is treated in detail later. Converters are considered monotonic if a steady increase in input signal always results in an increase in output. Backward steps may cause unexpected problems in systems working close to resolution limits. Manufacturers often offer different grades of converters that are specified to some number of least-significant bits: ±½ bit at the minimum, for example, to maintain monotonicity. INL is a measure of a device's large-signal-handling capability. To test it, we first inject a signal of amplitude A and take the output. Then, we inject a signal of amplitude 100A and compare the result with 100 times what we got before: We expect the output to increase in exact proportion. INL is a measure of the output error between any two input levels. Input vs output may be plotted, and maximum deviation from a straight line is then easy to see. This effect produces additional harmonic distortion (HD) and IMD that may be quite undesirable. Typical values for INL center on ±1 bit.

Oversampling and Sigma-Delta Converters

Eqs 6 and 15 show that as f_s increases, noise density decreases in direct proportion. That means that if the sampling frequency were artificially increased by some large factor N, then the sampled signal digitally filtered to reduce its bandwidth by the same factor, an SNR improvement of nearly N would be obtained. Quantization and aperture-jitter noise would be spread over N times the bandwidth they would otherwise occupy; most of it would be removed by filtering. This technique is known as oversampling. So-called sigma-delta converters use oversampling to achieve high SNRs. They employ single-bit quantizers at very high speed and digital filters to reduce bandwidth and sampling rate, thus obtaining noise reduction. These and other data-converter types are further discussed in Chapter 10. Ways of reducing sampling rate in tandem with bandwidth are treated in a later section of this chapter.
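The oversampling gain can be demonstrated with a toy model: quantize at N times the rate (with a little dither so the quantization error stays noise-like) and average each group of N samples as a crude decimation filter. All the parameters here are invented for the demonstration.

```python
import math, random

def snr_db(b, osr, n=10000):
    """Quantize a sine at osr times the base rate with light dither, then
    average each group of osr samples (a crude decimation filter)."""
    random.seed(1)
    lsb = 2.0 / 2 ** b
    f = 0.01                     # cycles per base-rate sample (assumed)
    sig = err = 0.0
    for k in range(n):
        v = math.sin(2 * math.pi * f * k)
        acc = 0.0
        for j in range(osr):     # fine-grained samples centered on instant k
            t = k + (j - (osr - 1) / 2) / osr
            vj = math.sin(2 * math.pi * f * t) + (random.random() - 0.5) * lsb
            acc += round(vj / lsb) * lsb
        sig += v * v
        err += (acc / osr - v) ** 2
    return 10 * math.log10(sig / err)

gain = snr_db(8, 16) - snr_db(8, 1)
print(round(gain, 1))   # close to 10*log10(16), about 12 dB
```

Spreading the noise over 16 times the bandwidth and filtering most of it away buys roughly two extra bits; sigma-delta converters push the same idea much further by also shaping the noise spectrum.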

Digital-to-Analog Converters: Additional Distortion Sources

Digital-to-analog converters (DACs) translate binary numbers back to analog voltages, the inverse operation of ADCs. They suffer from all the effects above, as well as a few of their own. One unique distortion of DACs is one of frequency response: sample-and-hold distortion. Typical DACs are sample-and-hold devices: They continue to output their most-recent value throughout the sample period. The result is a step-wise representation of output data that acts as a low-pass filter, albeit a mediocre one. The frequency response of such a filter is:


H(f) = sin(πf/f_s) / (πf/f_s)    (17)

Note the sin(x)/x form of this function, called a 'sinc' function. Its value is defined to be unity at f = 0, where the function is otherwise discontinuous. The high-frequency roll-off is quite undesirable in many instances. For example, were the output frequency equal to f_s/4, an attenuation of about 1 dB would occur. Correction may be made for this, but increasing the sampling rate is often an easier solution. That is discussed more below. When the output of a DAC changes from one voltage to another, it obviously cannot do so instantaneously; a finite time is required for the voltage to reach its new value. This is generally known as the settling time. It is usually defined as the time required to settle within some number of voltage-equivalent bits of the final value. Glitch energy or glitch area may be defined as the product of the voltage error during settling and the settling time itself. While we know volt-seconds are not units of energy, we may assume that a DAC is driving some kind of load; thus, glitch area may be translated into units of energy (watt-seconds). Settling mechanisms are important factors in the production of spurious outputs in DACs. Manufacturers usually specify glitch area for their high-speed devices. It is an especially important specification for digital-oscillator applications. Those are discussed further in Chapter 7.
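Eq 17 evaluated at f_s/4 confirms the roughly 1-dB figure; the 48-kHz rate below is an arbitrary choice, since only the ratio f/f_s matters.

```python
import math

def sinc_droop_db(f, fs):
    """Sample-and-hold (zero-order hold) frequency response, Eq 17, in dB."""
    x = math.pi * f / fs
    return 20 * math.log10(math.sin(x) / x)

fs = 48000.0                                 # assumed DAC sample rate
print(round(sinc_droop_db(fs / 4, fs), 2))   # about -0.91 dB at fs/4
```

A digital pre-emphasis filter with the inverse response can flatten this droop, but as the text notes, interpolating to a higher rate pushes the problem out of band instead.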

Sampling in More Than One Dimension

The above discussion concerns itself with sampling at regular intervals of time. Notice that although sampling of voltage is how DSPs get their information, those samples may well represent a variety of quantities, such as water flow, brightness or sound intensity. Those things may be sampled not only over time but over space, too. As is often the case in recording, two microphones on a stage may be arranged to detect sounds at different places to help create a two-dimensional effect for the listener: stereo. That creates a two-dimensional array or lattice of data. More than two microphones may be used lying roughly in a single plane (the stage) and two spatial dimensions are involved. A pair of stereo loudspeakers is normally arranged in a single line, though, which has only one dimension; but the recording engineer has captured two sets of sound-intensity data from two different places. The two sets of data correlate with one another based on the distance of any particular sound source from each microphone and its frequency. When the recording is replayed, a listener's brain detects the correlation and a two-dimensional image of what was recorded is restored.


The foregoing example produces an array of data that were sampled at regular intervals of both time and space. The data may be analyzed in each of those domains separately, and each may be defined as a fundamental domain of the data array (see Reference 10). Spatial analysis lends itself to other problems, such as determining the direction of arrival of a radio signal. A set of omnidirectional antennas replaces the microphones above, and they are again usually arranged in a line, or in a single plane. Assuming the distance between antennas is not large compared to the distance to the signal source, the output of each is identical to the next but for a small time delay. A signal may arrive at one antenna before it arrives at another because of its finite propagation speed, c. This fact may be used to steer the pattern of the antenna array to some extent. Let us say that the signals from two antennas separated in space are simply added together to form the input to a receiver. Placing a fixed delay line in one of the antenna output leads will make the array most sensitive to signals arriving at some angle or angles to the line between the antennas. When the delay is zero, the angle is naturally 90°. Changing the delay alters the radiation pattern of the array. Criteria for spatial analysis extend to other signal properties, as well. We may elect to analyze the strength of a signal received at each antenna separately over time. Especially with ionospheric propagation, it is often found that signal strengths vary markedly at each of the two antennas. When one antenna delivers a strong signal, the other's may be weak. That is because each antenna is really receiving many time-delayed copies of the source that have refracted from many different parts of the ionosphere. Path lengths vary, and so do times of arrival; at one antenna site, signals may cancel to produce fading while at the other site, they reinforce.
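The delay-steering idea can be sketched for a two-element array; the frequency, spacing and angles below are invented for illustration.

```python
import math

c = 3e8             # propagation speed, m/s
f = 7e6             # operating frequency, Hz (assumed)
d = c / f / 2       # half-wavelength element spacing

def gain(theta_deg, delay):
    """Relative power gain of two summed isotropic elements versus the
    arrival angle theta (measured from the line through the antennas),
    with an extra delay inserted in one element's feed."""
    theta = math.radians(theta_deg)
    dt = d * math.cos(theta) / c          # arrival-time difference
    phase = 2 * math.pi * f * (dt - delay)
    return abs(1 + complex(math.cos(phase), math.sin(phase))) ** 2 / 4

print(round(gain(90, 0), 3))       # zero delay: maximum broadside
print(round(gain(0, 0), 3))        # and a null off the ends
print(round(gain(0, d / c), 3))    # delaying one feed steers the peak endfire
```

Replacing the fixed delay line with a programmable delay in a DSP is exactly how adaptive arrays steer their patterns electronically.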
This leads to a spatial-diversity reception system that switches antennas based on the received signal strength at each, maximizing energy into the receiver over time. Because each antenna's signal may be sampled at regular intervals of time, Fourier-transform techniques may be employed to analyze frequency content over time, resulting in terms of time-frequency. Adding spatial dimensions to the mix means that we can discuss quantities of space-frequency. These terms come into treatments of speech analysis and compression, and of adaptive antenna arrays. Chapter 12 contains further details of these and other esoteric modes.

Changing the Sampling Rate: Multi-Rate Processing

Lowering the Sampling Rate: Decimation

There are many reasons for wanting to change the sampling frequency of an already-sampled signal. Perhaps foremost among those is the desire to minimize the sampling frequency f_s for some particular signal of interest. That


desire is driven by the need for more time between samples to perform calculations: the calculations necessary to do modulation, demodulation, squelch and other functions vital to radio transceivers. Systems that employ sampling-rate conversion may be referred to as multi-rate DSP systems. We may be faced, for example, with a situation where we have to filter a broadband input to extract a narrow-band signal. The broadband input must be limited to a bandwidth of half the initial sampling rate; as signals are digitally filtered to lesser bandwidth, the sampling rate may be reduced. Reduction in sampling rate is achieved by resampling the already-sampled signal at a lower rate. This process is generally known as decimation; and the filter, as a decimation filter. Decimation is most often performed entirely in the digital domain on signals previously sampled at a higher rate. Decimation itself is simple: Just discard samples according to the decimation ratio. For a decimation ratio of two, keep only every other sample; for three, keep only every third sample. Decimation is usually performed by such integer ratios, although it does not have to be. A useful application of noninteger decimation occurs when mating two systems with different sampling rates. Fractional decimation is discussed more later. A decimation filter runs at the higher sampling rate, eliminating components above half the new, lower sampling frequency f_s_new that would cause aliasing after the rate reduction. Depending on the application, this filter may be a BPF, LPF or HPF, so long as it limits bandwidth. Perhaps the simplest case is that of a low-pass decimation filter ahead of the decimator. Fig 2.5 illustrates such an arrangement. Normalized to a 1-Hz sampling rate, the ideal frequency response of the low-pass filter is a "brick wall" such that:

H(ω) = 1 for |ω| ≤ π/D; 0 otherwise    (18)

where D is the decimation ratio. The output of the decimator is an alias-free signal having a sampling rate D times lower than that of the input. Although the filtering operation is linear and time-invariant, it may readily be shown that combining it with decimation yields a time-variant system (see Reference 7).

Fig 2.5-Decimation LPF and decimator.

Astute readers may be wondering, "During decimation, why compute filter outputs that must only be discarded? Isn't that a waste of processing time?" Well, yes, it is! Just calculate those samples that are to be kept. This is equivalent to running the decimation filter at the lower rate. So decimation reduces the sampling rate through filtering, and it makes the filtering a bit easier, too.

Increasing the Sampling Rate: Interpolation

While low sampling frequencies are pleasant for the reasons mentioned, converting digital signals back to analog produces aliases that may not be far removed from the desired response. There, they are difficult to eliminate with the mandatory analog anti-aliasing filter. Sample-and-hold effects distort the frequency response much more, too. An artificial increase in sampling frequency is often called for. This is usually known as interpolation. Again, this is typically done by integer factors. Samples with a value of zero are inserted between data samples to produce a longer sequence; these are then filtered at the higher sampling rate to remove aliases of the lower sampling rate. The filter is called an interpolation filter and is most often a low-pass, although high-pass and band-pass filters have been used to advantage. The ideal interpolation filter has the same type of brick-wall frequency response as the decimation filter of Eq 18; however, note that interpolation actually produces more usable bandwidth at its output than that at its input. There is room to fit other signals in this extra bandwidth, but an analog anti-aliasing filter is then more difficult to build.

Fractional Sampling-Rate Conversion

Adopting the notation of Proakis et al. (Reference 11), let us consider sampling-rate conversion by a rational factor U/D. Perhaps the sampling rate of one subsystem is exactly 3/2 of another.
If they need to interface with one another, a fractional sampling-rate conversion is needed. The conversion may be made by first interpolating by U = 3, then decimating by D = 2. This cascade is shown in Fig 2.6. In this simplest form, the input bandwidth should not exceed 1/3 of the output bandwidth to avoid aliasing. Adding interpolation and decimation filters dodges this limitation. See Fig 2.7. Interpolation must occur first in the chain lest information in the input signal be lost. It cannot necessarily be done the other way round.
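Returning to plain integer decimation, here is a minimal sketch of the low-pass-filter-plus-decimator arrangement that also applies the shortcut described above: the filter output is computed only for the samples that will be kept. The filter design, rates and test tone are arbitrary choices, not anything prescribed by the text.

```python
import math

def fir_lowpass(num_taps, cutoff):
    """Windowed-sinc LPF; cutoff is a fraction of the input sampling rate."""
    m = num_taps - 1
    taps = []
    for n in range(num_taps):
        x = n - m / 2
        h = 2 * cutoff if x == 0 else math.sin(2 * math.pi * cutoff * x) / (math.pi * x)
        h *= 0.54 - 0.46 * math.cos(2 * math.pi * n / m)   # Hamming window
        taps.append(h)
    return taps

def decimate(x, d, taps):
    """Filter and resample in one pass, computing only every d-th output."""
    return [sum(t * x[k + i] for i, t in enumerate(taps))
            for k in range(0, len(x) - len(taps), d)]

fs, d = 8000, 4
taps = fir_lowpass(63, 1 / (2 * d))     # cutoff at the new Nyquist, fs/(2*d)
tone = [math.sin(2 * math.pi * 200 * n / fs) for n in range(4000)]
y = decimate(tone, d, taps)
print(len(y), round(max(y), 1))   # 200 Hz is far below 1 kHz, so it survives
```

A tone above the new Nyquist frequency of 1 kHz would instead be attenuated by the filter before the rate reduction, which is exactly the anti-aliasing role Eq 18 describes.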

Fig 2.6-Fractional decimation by first interpolating (U = 3), then decimating (D = 2) by integer ratios.


Fig 2.7-Fractional decimation with filtering.

Because of the way the filters in Fig 2.7 work, they may operate with economy at a common sampling rate, and thus may be combined into a single filter. They are, after all, linear, time-dependent processes that are chained together. DSP filtering is treated in Chapter 4; a mathematical derivation of how to combine the filters' impulse responses may be found there, as well. Before we can consider the details of filtering operations necessary to continue this thread, a discussion of how samples (numbers) are represented in computers is in order. Accuracy of a DSP result is affected not only by imperfections in data converters, but also by computational errors; these, in turn, are caused by the limitations of binary-number representation, as demonstrated in the following chapter.


Computer Representations of Data

Numerical Formats

This chapter introduces two popular binary number-storage formats and compares their attributes: fixed-point and floating-point. A hybrid notation, block floating-point, is also treated. Many DSP texts begin with the subject of numeric representation because it is so important to the performance of actual systems. Accuracy and dynamic-range issues are examined for each format. A knowledge of binary arithmetic is assumed.

Fixed-Point Format

In fixed-point format, the available number of bits, b, is usually used to represent a number whose absolute value is less than one; that is, the number is a signed fraction within the range ±1. Using typical ADCs and DACs, samples would normally be handled in this manner. Most-significant bit b is taken to be the sign bit. When the sign bit is zero, the fraction represented by the remaining b − 1 bits is positive; when one, the fraction is negative. Negative fixed-point numbers are stored as the two's complement of their absolute value. The two's complement of a number may be found by subtracting it from zero. The fraction is just a sample's value as compared with the maximum-possible amplitude. The radix point, or separation between integer and fractional parts, is assumed to reside left of the second most-significant bit (MSB). This is terribly convenient in DSP calculations because the product of two fractions less than unity is always another fraction less than unity. This advantage will become apparent. Refer to Fig 3.1.

Fig 3.1-Fixed-point representation.

The largest positive number is 2^(b−1) − 1, and the most-negative number is −2^(b−1). With b = 16, those numbers are equal to 0111 1111 1111 1111₂ and 1000 0000 0000 0000₂, respectively. The dynamic range of such representation (about 98 dB) is limited by quantization effects, just as in the case of data converters above. Numbers must be rounded or truncated to 16 bits. In fact, computational quantization noise is calculated in exactly the same fashion as for data converters in Chapter 2. It may not always be clear whether quantization noise from the two sources adds or not; poorly understood correlation effects often contribute to energy concentration at discrete frequencies. It is safe to write, though, that DSP-system dynamic range is near optimum when the bit resolution of the data converters nearly matches that of the signal processor. Certain advantages are retained by selecting a processor having slightly more bit resolution than the data converters, especially when filtering is involved. This is described further below.

While multiplying fixed-point numbers does not present a problem in numeric representation, adding or subtracting them does. Adding two fractions less than unity may obviously produce a result greater than unity: overflow. This leads to the need for scaling of data, alternate numeric representations, or both. DSP algorithms may be examined for computational dynamic range and input data scaled to prevent overflow; but in some cases, the range must be extended.

One way of extending the computational dynamic range of fixed-point representation is to add more bits. The extra bits may be used to represent the integer part of the number. Take a total number of bits, 2b, and place the radix point in the middle, instead of to the left of the MSB. See Fig 3.2. Now this number may be multiplied with another having the same representation in straight binary fashion to produce a result having 4b bits. The radix point in the result is placed at 2b bits. Many processors don't have registers with enough bits to do this directly, though. When b = 16, 4b = 64 and that is a large result.
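The signed-fraction convention can be sketched in a few lines of Python. The helper names (to_q15, from_q15) are ours, not a standard API; they model the b = 16 case described above.

```python
# Sketch of 16-bit fixed-point fractions: a sign bit plus 15 fraction bits,
# with negatives stored as the two's complement of their absolute value.
# Representable values span -1.0 to 1.0 - 2**-15.

B = 16

def to_q15(x):
    """Encode a fraction in [-1, 1) as a 16-bit two's-complement integer."""
    n = int(round(x * 2 ** (B - 1)))
    return n & 0xFFFF              # two's-complement wrap into 16 bits

def from_q15(n):
    """Decode a 16-bit two's-complement integer back to a fraction."""
    if n & 0x8000:                 # sign bit set: value is negative
        n -= 0x10000
    return n / 2 ** (B - 1)

print(hex(to_q15(32767 / 32768)))  # largest positive fraction -> 0x7fff
print(hex(to_q15(-1.0)))           # most negative value       -> 0x8000
print(from_q15(0x8000))            # decodes back to -1.0
```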
An interesting situation occurs when the data-handling capacity of the digital signal processor is only b bits. The integer and fractional parts are handled separately in the computation, much as real and imaginary parts are in complex mathematics. Adopting the notation c.d, where c is the b-bit integer part and d the b-bit fractional part, two numbers c.d and e.f may be added at once in binary fashion:


Fig 3.2-An extended fixed-point representation.

g.h = (c + e).(d + f)   (19)

Remember that the fractional parts should be added first so that any carry may be added with the integer parts. Two such numbers may be multiplied in a processor supporting b-bit multiplicands and a 2b-bit product register using:

(c.d)(e.f) = [ce + I(cf) + I(de)].[df + F(cf) + F(de)]   (20)

where I( ) and F( ) indicate the integer and fractional parts of the inside products cf and de. Again, the fractional part may overflow; it should be computed first.
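A short sketch of the addition rule of Eq 19, with hypothetical values, shows the fractional parts being added first so that any carry joins the integer parts.

```python
# Sketch of split-register arithmetic: each number is held as a b-bit
# integer part c and a b-bit fractional part d (value = c + d / 2**b).
# Values below are illustrative.

B = 16
SCALE = 2 ** B

def split_add(c, d, e, f):
    """Add c.d + e.f; fractions go first so the carry joins the integers."""
    frac = d + f
    carry, frac = divmod(frac, SCALE)   # carry out of the fractional adder
    return c + e + carry, frac

def value(c, d):
    """Numeric value of a c.d pair."""
    return c + d / SCALE

c, d = 3, SCALE // 2        # 3.5
e, f = 2, 3 * SCALE // 4    # 2.75
g, h = split_add(c, d, e, f)
print(g, h, value(g, h))    # 6 16384 6.25  (the carry propagated correctly)
```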

Floating-Point Format

Dynamic-range limitations of fixed-point processing often force the use of alternate numeric representations. Floating-point format vastly extends the range of numbers that may be represented with a given number of bits, b. In this format, numbers are represented by both a fractional part and another part that stands as a scaling factor. The fractional part, M, is known as the mantissa, and the scaling factor, c, is the characteristic or exponent. In the most common representation, a positive number F is understood to be:

F = 2^c M   (21)

where the mantissa is restricted to the range:

1/2 ≤ |M| < 1   (22)

That restriction means the representation is normalized. Note that the characteristic may be either positive or negative, implying a very large range of numbers may be represented. M may be treated as a fixed-point, signed fraction and c as a signed integer.


Two floating-point numbers are multiplied by first multiplying the two mantissas as fixed-point fractions, then adding the characteristics. Since the product of the mantissas will fall in the range:

1/4 ≤ M′ < 1   (23)

a normalization of the new mantissa and corresponding adjustment of the characteristic may be required. When M′ < 1/2, it is multiplied by two (shifted one bit to the left) and the characteristic decremented. This check must be performed after every mathematical operation that produces a new result.

Adding two floating-point numbers is done by denormalization of the smaller number: Its bits are shifted right and its characteristic incremented until its characteristic is equal to the larger number's. The mantissas are then added directly and the result renormalized. From this discussion, it is evident that the mantissa may exceed available register length for both multiplication and addition; in fixed-point format, that was only the case for multiplication.
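Python's standard library happens to expose exactly this normalized form, which makes a convenient demonstration; the specific values below are illustrative.

```python
# math.frexp returns the normalized form of Eq 21/22:
# x = M * 2**c with 0.5 <= |M| < 1.

import math

m, c = math.frexp(6.5)
print(m, c)            # 0.8125 3, since 6.5 = 2**3 * 0.8125

# After multiplying two mantissas, the product may fall below 1/2 (Eq 23);
# renormalizing doubles the mantissa and reduces the characteristic by one.
product = 0.8125 * 0.53125          # about 0.4316, below 1/2
m2, c2 = math.frexp(product)
print(m2, c2)                       # mantissa back in [0.5, 1), exponent -1
```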

Truncation and Rounding

Truncation is the process of "chopping off" one or more LSBs to make a number fit into a smaller register. This may occur, for example, during multiplication of two 16-bit numbers (whose product is a 32-bit number) when only 16 bits are available to store the product. Truncation obviously loses information about the exact value of that product; thus, computational accuracy is degraded. The effect is cumulative over all subsequent operations on that and other numbers, and results in quantization noise at the final output. In general, the amplitude of this noise is directly proportional to the number of truncations; that is, the noise is N times worse for an N-multiplication algorithm than for one multiplication.

A reasonably simple way of combating truncation noise is to employ rounding of products instead. Convergent rounding adds a value of half the kept LSB to the product prior to truncation. In this way, the truncated number is guaranteed to be the closest available value to the actual product. Error still exists, obviously, but it is absolutely minimized. Because the error is now just as likely to be positive as negative, and just as likely to be large as small, the resulting quantization noise has a zero mean and is uniformly distributed across the sampling bandwidth. In the case of truncation, errors do not necessarily have a zero mean and their peak-to-peak range is the same for both positive and negative numbers. Truncation always increases the magnitude of a negative number, though, and always decreases the magnitude of a positive number. Rounding limits the error for each operation to ±1/2 LSB with a nearly zero bias, independently of the sign of the number being rounded.
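A small sketch contrasts the two policies on a made-up product; the bit widths are arbitrary, chosen so the errors are easy to see by hand.

```python
# Sketch comparing truncation with add-half-then-truncate rounding when a
# wide product is squeezed into fewer bits.  In two's complement, a right
# shift truncates toward minus infinity, so it always grows the magnitude
# of a negative number.

def truncate(n, drop):
    """Discard the low 'drop' bits (arithmetic shift)."""
    return n >> drop

def round_half_up(n, drop):
    """Add half the kept LSB before truncating."""
    return (n + (1 << (drop - 1))) >> drop

product = 109                      # pretend this is a wide product
print(truncate(product, 4))        # 6: kept value 6*16 = 96, error -13
print(round_half_up(product, 4))   # 7: kept value 7*16 = 112, error +3
print(truncate(-109, 4))           # -7: magnitude grows for negatives
```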


Normalization and Block Floating-Point Notation

Normalization is the process of conditioning numbers to fit a certain scale or reference. As demonstrated above, floating-point numbers go through this process at every stage of their use. Block floating-point representations normalize numbers based on their range over a contiguous block or sequence. Block floating-point is a useful notation for many DSP algorithms, especially those that operate on multi-dimensional data arrays.¹² The characteristic or exponent in block floating-point notation is set equal to that of the largest-magnitude number in a block of, say, N numbers:

e = ⌈ log₂ ( max |x_n| ) ⌉ , 0 ≤ n ≤ N − 1   (24)

where e is an integer. This exponent is associated with a set of signed, fixed-point fractions in binary that are normalized by the factor 2^e. That is, the exponent e is associated with a length-N block of numbers. An array of length L containing block floating-point vectors (sequences of complex numbers) of length N may be constructed that has L exponents and LN mantissas. This segmented block floating-point representation may allow a wider dynamic range than that obtained with a single exponent. It is well suited to algorithms in which the scale of the data is expected to change rapidly from block to block. Such situations are commonly found in speech processing and discrete Fourier transforms. Note that with N = 1, we just have traditional floating-point representation, wherein normalization must occur at every step. In segmented block floating-point, the normalization occurs on demand, and its implementation is subject to change based on knowledge of how data change from block to block. For example, if data magnitude is known to increase slowly, then scale may be adjusted slowly from block to block. This avoids having to recalculate normalizations for each sample, thus saving processing time.
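A sketch of the idea with an illustrative four-sample block; the helper is ours, not a standard routine.

```python
# Sketch of block floating-point: one shared exponent per block of samples,
# chosen from the largest magnitude in the block, with every mantissa a
# fraction of magnitude less than one.

def block_float(block):
    """Return (exponent e, mantissas) so that block[i] == mant[i] * 2**e."""
    peak = max(abs(v) for v in block)
    e = 0
    while 2 ** e <= peak:        # smallest e with max|x| < 2**e
        e += 1
    return e, [v / 2 ** e for v in block]

block = [3.0, -12.5, 0.25, 7.0]
e, mant = block_float(block)
print(e, mant)   # a single exponent covers the whole block
```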

Reduction, Saturation and Justification

Reduction is the normal consequence of binary operations that produce overflow: Just let the sum or product roll over in two's-complement notation, modulo 2^b, where b is the number of binary bits used. Saturation is an alternate way to handle overflow: When overflow occurs, retain as the result the maximum binary number possible of the appropriate sign. For 16-bit, fixed-point fractions, the two saturation values are $7FFF and $8000, where the dollar sign indicates hexadecimal notation. Fixed-point DSPs quite often include saturation capability because it is a common tool in speech processing, angular modulation and demodulation, and a variety of other fields. DSP algorithms that employ saturation may tolerate some level of overflow in preceding and following stages. Reduction arithmetic is intolerant of overflow, which must never be allowed to occur.
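Both policies are easy to model for 16-bit values; the helper names are ours.

```python
# Sketch of the two overflow policies for 16-bit fixed point: reduction
# wraps modulo 2**16 in two's complement, saturation clamps at $7FFF/$8000.

def reduce16(n):
    """Two's-complement wraparound (reduction arithmetic)."""
    n &= 0xFFFF
    return n - 0x10000 if n & 0x8000 else n

def saturate16(n):
    """Clamp to the largest value of the appropriate sign."""
    return max(-0x8000, min(0x7FFF, n))

big = 0x7FFF + 10                        # overflow past the positive limit
wrapped = reduce16(big)
print(wrapped, hex(wrapped & 0xFFFF))    # wraps to a large NEGATIVE number
print(saturate16(big), hex(saturate16(big) & 0xFFFF))  # pins at $7FFF
```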


A computer program may be able to detect that overflow has occurred using sticky overflow bits that are latched in a microprocessor until purposefully cleared. A programmer may thus make provision for sensing an overflow catastrophe even though he or she does not necessarily have the information necessary to remedy it by saturation or other means.

In fixed-point notation, numbers are left-justified: Any number of zeros may be added at the right-hand side of the number without altering its value. Doing so would extend the precision of the number, but not its accuracy, since no actual information has been added. On a 16-bit machine, for example, a 16-bit single-precision number may have 16 zeros appended on the right to produce a 32-bit double-precision number. This conversion applies equally well to all representations. Conversion back to single precision must occur either by truncation or by rounding. As noted above, convergent rounding avoids dc bias terms that may be significant in many applications, but it involves additional computing time.

Unsigned numbers are often just integers and are right-justified. Hence, any number of zeros may be appended to the left-hand side without altering the result. Such extension does not really extend the precision of a number and it obviously does not affect its accuracy. Zeros appended to a left-justified number are significant digits; those appended to a right-justified number are not significant digits. Extension of right-justified numbers, though, may be necessary in DSP algorithms to make them play correctly.

Finding the Logarithm of a Binary Number

Floating-point and block floating-point require the computation of the integer part of the base-two logarithm of a number. This is defined as the scale of the number. A simple algorithm illustrates how this is done in digital computers. Let us begin with an unsigned, 8-bit binary integer M = 0010 1011₂ = 43₁₀. As this is a right-justified number, we know zeros appended at the left of M are not significant digits and may be discarded without altering the result. Looking at the string of bits from left to right reveals the position at which the first binary 1 appears. For number M = 43, that is the third digit from the left. The integer part of log₂M is therefore k = log₂(0010 0000₂) = 8 − 3 = 5.

If we wish to compute the base-two logarithm of M more accurately, a 256-entry look-up table clearly does the job quite quickly, since M is an 8-bit number. The accuracy of a result so obtained is determined by the bit resolution of the entries, not the number of entries. Were M a 16-bit number, a look-up table of 2^16 = 65,536 entries would be required. This may tax available memory in embedded systems, and another approach is often sought. As shown in the following algorithm, a normalization process allows reduction of table size where facilities exist for fast fractional division.

As an example, take a 16-bit number M = $6978 = 0110 1001 0111 1000₂ = 27,000₁₀. Normalize the number to a scale of 8 by dividing it by 2^8 and


taking the integer part. We get N = int(M / 2^8) = $69 = 0110 1001₂ = 105₁₀. Now log₂(2^8 N) = 8 + log₂N is close to the correct answer, and log₂N ≈ 6.714246 may be looked up in a 256-entry table. A closer estimate may be computed using the relation log₂M ≈ 8 + log₂N − log₂(2^8 N / M). The division is a 16-bit division, but only 8 bits of the quotient are retained. Those 8 bits are convergently rounded and again form an address into a 256-entry table; the last term is subtracted to get the final result: log₂M ≈ 8 + log₂105 − log₂(255/256) = 14.7199. 255/256 is the closest 8-bit fraction to 2^8 N / M in this case.

With 16 bits available for the result, the integer part of the result (14) takes up four bits, since 14₁₀ = 1110₂; 12 bits remain for the unsigned fractional part. The closest 12-bit fraction to 0.7199 is 2949/2^12 ≈ 0.7200, and the final result (14.7200) comes out a little low. The actual error is about −6.7 × 10^−4, or less than 0.01%. The scale of the result is four and, in binary, the radix point is placed between the integer and fractional parts: 14.7200₁₀ = 1110.1011 1000 0101₂.

This example illustrates most of the principles outlined in this chapter. In the next chapter, a significant new set of DSP algorithms is explored: digital filters.
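The leading-one search and the $6978 refinement above can be reproduced directly; here math.log2 stands in for the 256-entry look-up tables.

```python
# Sketch of the text's base-two-log method: find the position of the leading
# one for the integer part, then refine with a small correction term.

import math

def int_log2(m):
    """Integer part of log2 for an unsigned, nonzero integer."""
    k = -1
    while m:
        m >>= 1
        k += 1
    return k

print(int_log2(43))            # 5, as in the text

# Refinement for M = $6978: log2(M) ~ 8 + log2(N) - log2(2**8 * N / M)
M = 0x6978                     # 27,000
N = M >> 8                     # 105, the normalized 8-bit value
approx = 8 + math.log2(N) - math.log2(255 / 256)  # 255/256 ~ 2**8 * N / M
print(round(approx, 4))        # 14.7199
```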


Digital Filtering

The ability to construct very-high-performance filters is a compelling reason to use DSP in radio design. Quite often, expensive analog components may be eliminated in favor of superior DSP implementations. As filtering requirements get more stringent, filters must get more complex. In the analog world, a filter becomes more complex by adding inductors and capacitors, for example, and the sensitivity of its frequency response to exact element values becomes more critical. Establishing and maintaining exact values over temperature and time quickly becomes implausible for larger filters. DSP filters, on the other hand, use delay elements and multipliers that may be set very accurately; once they are set, they are unchanging. They are numbers stored in a computer that are not susceptible to temperature or aging problems. That means filters judged impossible in the analog world may readily be achieved in the digital.

Filters having linear phase responses may be constructed, which is a distinct advantage when it comes to digital transmission modes, as described in the following chapter. This is, again, a difficult goal for the analog designer. In addition, cascaded filters may often be numerically combined into a single filter, saving computation time, a quantity always obtained at a premium.

DSP filters come in several varieties. Each has an analog counterpart that, although incredibly hard to build, would function nearly the same. That fact may be utilized to advantage in understanding how they function. The approach below even shows how traditional analog filter families may be adapted to digital use; however, certain DSP filter constructs are not normally attempted in analog.


Characterization of Signals: Terminology of Linear, Time-Invariant Systems

A system or function y_t = H(x_t) is defined as linear if and only if it is commutative. That is:

H(A_1 x_1 + A_2 x_2) = A_1 H(x_1) + A_2 H(x_2)   (25)

where x_1 and x_2 are samples taken at different times; A_1 and A_2 are their amplitudes. A system is time-invariant if and only if older output samples are determined only by older input samples. The following equation must then be true:

y_(t−t0) = H(x_(t−t0))   (26)

H is often called the system or transfer function. A system may be defined as causal if its output depends only on current and past input values. A system, for example, whose past outputs changed based on new inputs would not be causal. A system is stable if and only if a bounded set of input values produces a bounded set of output values. This is the same as saying the impulse response of the system integrates to a finite value.

Impulse Response

All filters may be characterized by their impulse responses. The impulse response of a filter is its output when the input is a one-sample, unity-amplitude impulse; think of this input as a very narrow "spike." The output may be quite complex, as in Fig 4.1; this is often referred to as "ringing," although it is just a consequence of how filters, both analog and digital, behave. Output voltage vs time may be sampled, just as any other analog signal may be; however, the sampled impulse response cannot be infinite in length. It is, therefore, only an approximation to that required to exactly describe the filter. The truncated length of the sampled impulse response is found to correspond with an error in the digital filter's frequency response, the thing in which main interest lies. A digital filter designed this way is therefore called a finite-impulse-response (FIR) filter.

Imagine building an analog low-pass filter and shooting a unity-amplitude impulse into it, making the width of the spike very narrow. Then, take L samples of the output waveform at regular intervals 1/f_s. The sampled impulse response may be used as a sequence of coefficients in a basic FIR filter structure employing delay elements and multipliers, as shown in Fig 4.2. Each box labeled z^−1 is a one-sample delay; the cascaded string of those boxes is an (L − 1)-sample delay line. Programmers will recognize that this is just a buffer of length L − 1. Each location in the delay line may be referred to as a tap in the line. The


Fig 4.1-Impulse response of a typical filter.

Fig 4.2-Block diagram of a short FIR filter.


datum at each tap, x_n, is multiplied at each sample time by one of the coefficients, h_n. All the products are summed at each sample time to produce the filter's output.

At the next sample time, samples are right-shifted down the delay line by one position and the multiply-and-accumulate (MAC) operation is performed again. Coefficients remain in place and do not shift. The mathematical expression describing this repetitive operation is also called a convolution sum:

y_k = Σ (n = 0 to L − 1) h_n x_(k−n)   (27)

where y_k is the output at sample time k, x_(k−n) is the set of input samples, and h_n is the set of L coefficients. Since the output depends only on past input values, the filter is a causal process. Since no feedback is employed, it is unconditionally stable. The sampled input signal is convolved with the filter's impulse response; the output spectrum is the product of the two input spectra. This relationship was illustrated in Chapter 2: Convolution in the time domain corresponds to multiplication in the frequency domain. Inversely, multiplication in the time domain (mixing) corresponds to convolution in the frequency domain.
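Eq 27 translates almost line for line into code. This sketch uses a made-up 3-tap coefficient set; feeding it an impulse returns the coefficients themselves, which is the impulse-response property described above.

```python
# Direct implementation of the convolution sum of Eq 27: at each sample
# time, multiply the delay-line contents by the coefficients and accumulate.

def fir_filter(h, x):
    """y[k] = sum over n of h[n] * x[k-n]; inputs before k = 0 are zero."""
    L = len(h)
    delay = [0.0] * L
    y = []
    for sample in x:
        delay = [sample] + delay[:-1]   # shift the delay line one tap
        y.append(sum(hn * xn for hn, xn in zip(h, delay)))
    return y

h = [0.25, 0.5, 0.25]                        # illustrative 3-tap low-pass
print(fir_filter(h, [1.0, 0.0, 0.0, 0.0]))   # -> [0.25, 0.5, 0.25, 0.0]
```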

Computer Design of FIR Filters

In any filter-design project, a desired frequency response is usually sought and element (coefficient) values must be computed. Most methods begin with an estimate of the number of elements needed to achieve the desired response. In the case of FIR filters, Rabiner and Gold¹³ indicate the number of taps, L, must be at least:

L ≥ [−10 log (b_1 b_2) − 15] / [14 (f_T / f_s)]   (28)

where b_1 is the passband ripple, b_2 is the stopband ripple, f_T is the transition bandwidth and f_s is the sampling frequency. This equation assumes enough bits of resolution are used to achieve the required accuracy. It is shown below that truncation of filter coefficients affects frequency response adversely and unexpected things may occur.

Normally, an FIR filter's impulse response has a symmetry about center, such that h_0 = h_(L−1), h_1 = h_(L−2), and so forth. This is sufficient to ensure a linear phase response and flat group-delay characteristics. The total delay through an FIR filter of length L is:

t = L / (2 f_s)   (29)

This delay is independent of input frequency: That is why the filter has a linear phase response.

One FIR filter design approach takes advantage of the fact that a filter's frequency response is just the Fourier transform of its impulse response. Fourier transforms are discussed in Chapter 8. Filters may be designed starting with a sampled version of the desired frequency response; an inverse Fourier transform is then employed to obtain the impulse response. Better designs may be produced in many cases using an algorithm developed by Parks and McClellan.¹⁴ It achieves an equi-ripple design in which all the passband ripples are the same amplitude, as are all the stopband ripples. Another popular algorithm is called the least-squares method. Its claim to fame is that it minimizes error in the desired frequency response. Since finding coefficient sets for a given filter design is so computationally intensive, it is a good job for a computer. DSP filter-design programs are readily available at low cost. Some of those are mentioned in Chapter 11 and in the Bibliography.
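The frequency-sampling approach just described can be sketched with a small inverse DFT in pure Python. The 8-point specification below is purely illustrative; practical designs use many more points plus transition samples.

```python
# Sketch of the frequency-sampling approach: sample the desired magnitude
# response, inverse-DFT it, and use the result as FIR coefficients.

import cmath

def inverse_dft(H):
    """Inverse DFT; returns the real part (H is specified symmetrically)."""
    N = len(H)
    return [sum(H[m] * cmath.exp(2j * cmath.pi * m * n / N)
                for m in range(N)).real / N
            for n in range(N)]

# Desired magnitude at bins 0..7; bins above N/2 mirror the negative
# frequencies, so this is a crude 8-point low-pass specification.
H = [1, 1, 0, 0, 0, 0, 0, 1]
h = inverse_dft(H)
print([round(c, 4) for c in h])
```

Note that the symmetric bin specification yields a real, symmetric impulse response, consistent with the linear-phase condition h_0 = h_(L−1) given above.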

Infinite-Impulse-Response (IIR) Filters

While FIR filters have a lot going for them, they tend to require a large number of taps for decent transition bandwidths, and an attendant amount of processing power. As opposed to that, an IIR filter may provide sharp skirts with relatively few calculations. What it will not provide, in general, is a linear phase response. In circumstances where the computational burden is of more concern than the phase response, IIR filters may be desirable.

Unlike FIR filters, IIR filters employ feedback: That is what makes their impulse responses infinite. The same thing is true of traditional analog filter types, such as Chebychev and elliptical. For that reason, IIR filters are usually designed by converting analog prototypes. IIR filters may have both zeros and poles; FIR filters have only zeros. The transfer function of an analog Chebychev low-pass filter may be written as the ratio of a constant to an nth-order polynomial:

H(s) = C / (a_n s^n + a_(n−1) s^(n−1) + … + a_1 s + a_0)   (30)

Tables in the literature, such as Zverev,¹⁵ list values of the coefficients a_n related to the cutoff frequency; these may be translated into component values in the actual filter. This low-pass design may be transformed into band-pass or band-stop responses.


Fig 4.3-Block diagram of a short IIR filter.


Fig 4.4-Equivalent block diagram of a cascade-form IIR filter.

Two popular methods exist for deriving the digital transfer function from the analog: These are known as the impulse-invariant and bilinear-transform methods. The impulse-invariant method assures that a digital filter will have an impulse response equivalent to its analog counterpart and, thus, the same phase response. Problems arise, though, if the bands of interest are near half the sampling frequency. The digital filter's response may develop serious errors in this case. While most filter-design software is capable of this method, it is not used as often as the bilinear-transform method. The bilinear transform makes a convenient substitution for s in Eq 30, and the filter comes out looking like:

y_k = Σ (n = 0 to L − 1) α_n x_(k−n) − Σ (n = 1 to L − 1) β_n y_(k−n)   (31)

This filter has L zeros and L − 1 poles. The block diagram of such a filter for L = 5 is shown in Fig 4.3. Feedback is evident in both the equation and the diagram, since paths involving the coefficients β_n loop back and are added to the signal path. See Fig 4.4. The direct form of Eq 31 may be factored into 2-pole sections and implemented in cascaded form. This configuration requires a few more multiplications than the direct form, but it suffers less from the instability problems that plague IIR filters. Since feedback is being used, IIR filters are not necessarily unconditionally stable. They are prone to limit cycles, or low-level oscillations sustained by the quantization effects described in the previous chapters.
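Eq 31 in code, with an illustrative single feedback coefficient (a one-pole filter), makes the infinite, decaying impulse response easy to see.

```python
# Direct-form implementation of the IIR difference equation of Eq 31:
# feedback taps (beta) subtract from the feed-forward (alpha) sum.

def iir_filter(alpha, beta, x):
    """y[k] = sum a[n] x[k-n] - sum b[n] y[k-n]; beta[0] plays beta_1."""
    xd = [0.0] * len(alpha)        # input delay line
    yd = [0.0] * len(beta)         # past outputs: yd[0] = y[k-1], ...
    y = []
    for sample in x:
        xd = [sample] + xd[:-1]
        acc = sum(a * v for a, v in zip(alpha, xd))
        acc -= sum(b * v for b, v in zip(beta, yd))
        yd = [acc] + yd[:-1]
        y.append(acc)
    return y

# One pole at z = 0.5: the impulse response decays as 0.5**k forever.
out = iir_filter([1.0], [-0.5], [1.0, 0.0, 0.0, 0.0])
print(out)    # [1.0, 0.5, 0.25, 0.125]
```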

Numerical Effects in Digital Filters: Coefficient Accuracy

When computers are used to design DSP filters, coefficients are usually represented in floating-point format to the full accuracy of the computer, often with 12 or more significant decimal figures in the mantissa. Embedded, fixed-point implementations ordinarily achieve only 16-bit binary accuracy. The truncation or rounding of coefficients and data to this resolution affects the frequency response, ultimate attenuation and noise performance of digital filters.

It is interesting to note that while coefficient accuracy affects frequency response, it does not contribute to quantization noise in a filter's output signal, since the noise sources are not processed at all by the system. On the other hand, truncation and rounding of data do not affect frequency response, but add noise to the output. Notice that the product of two 16-bit numbers is a 32-bit number, and many of these must be added together to form the output of an FIR filter. The result may grow by several more bits before a final result is produced. At some point, the result may overflow the final accumulator, especially in FIR filters with low shape factors. When the input is a strange, sign-matched copy of the filter's impulse response, worst-case output may grow as large as the sum of the absolute values of all the coefficients:

y_max = ± Σ (n = 0 to L − 1) | h_n |   (32)

Data, coefficients, or both might have to be scaled by the reciprocal of this number to avoid overflow. Usually, the final output value must be truncated or rounded to some number of bits, say 16. That introduces a small additional quantization-noise component that has already been defined.

Some systems may not have the luxury of a final accumulator that has 32 or more bits. In this case, individual products in an FIR filter must be rounded prior to accumulation. To analyze data quantization noise in such an FIR filter, a modified block diagram is used that inserts noise sources e_n1, e_n2 and so forth at the point where individual products are rounded. See Fig 4.5. An input scaling factor F may also be added to prevent overflow. Clearly, each noise source adds directly to the output. Assuming the noise sources are not correlated to one another, the total noise output is:

e_total = Σ (k = 0 to L − 1) e_nk   (33)

After the quantization-noise derivation in Chapter 2, the variance of the output noise for rounded products is equal to the normalized noise power:

σ² = L (2^−2b / 12)   (34)

where b is the number of bits to which interim results are rounded.

Fig 4.5-Block diagram of an FIR filter with rounding noise inserted.

The effects of coefficient quantization error are more difficult to analyze mathematically, but the model is still fairly easy to draw. Refer to Fig 4.6A. Here, the error sources reside in the coefficients themselves; however, the errors are constants and do not change from sample to sample as data-quantization errors do. The coefficients never change, so neither do the errors. Each interim product may be separated into the product involving the coefficient and the product involving only the error. See Fig 4.6B. A final refinement to this model shows that this produces a small bias in frequency response, because the system is the same as an FIR filter with infinite coefficient accuracy in parallel with one that uses only the errors as its coefficients, as in Fig 4.6C. The perfect FIR filter has the desired frequency response, while the error filter has an undefined response. This shows that coefficient truncation or rounding introduces distortion in a filter's frequency response, but not noise in its output. It also demonstrates that filters designed in floating-point format but intended for use in fixed-point systems must be checked with actual coefficient resolution. Further, it is shown that when errors in coefficients can be determined to be repeatable, tactics may be employed that minimize them. That is the subject of ongoing research.

Fig 4.6-At A, block diagram of an FIR filter modeling coefficient errors. At B, modified block diagram showing separation of error products. At C, final block diagram showing equivalence to two separate filters whose outputs are summed.

IIR Limit Cycles

Limit cycles in IIR filters were mentioned above as a quantization problem. The presence of feedback in the algorithm poses this problem at signal levels near the smallest-resolvable signal. Suppose the algorithm is started with zero at its input and with a very small number at a feedback node. Were the coefficients sizable enough, and depending on the complexity of the filter, a very small numerical error might propagate through the system endlessly because it never made it to zero in any multiplication or addition. This cannot happen in a straight FIR filter, because signals do not find their ways back to the input. Adaptive FIR filters are an exception, covered in Chapter 8. Limit cycles also will not happen in an IIR filter when coefficient values are sufficiently low to assure that two small numbers, when multiplied, produce a zero product. Limit cycles exhibit a dead band and other familiar characteristics of oscillators, but in the realm of non-linear, step-wise behavior only. Further details will not be discussed here, except to point out that with due care, these oscillations need not exceed several LSBs. Now the reason for wanting to factor IIR filters into cascaded, 2-pole sections becomes evident.
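The effect is easy to provoke in a one-pole sketch: with ideal arithmetic the output decays to zero, but rounding each feedback product to whole LSBs sustains an oscillation. The coefficient and starting value below are arbitrary.

```python
# Sketch of an IIR limit cycle: the one-pole filter y[k] = -0.9 * y[k-1]
# should decay to zero, but quantizing each result to integer LSB steps
# lets the rounding error sustain a small oscillation forever.

import math

def q(v):
    """Round to the nearest integer LSB (halves round upward)."""
    return math.floor(v + 0.5)

y = 10                      # small starting value, in LSB units
history = []
for _ in range(20):
    y = q(-0.9 * y)         # quantized feedback
    history.append(y)
print(history)              # settles into a +/-4 limit cycle, never zero
```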

Floating-Point Effects

Floating-point format readily removes the dynamic-range limitations of fixed point, but it also suffers from the effects of finite precision. In computing interim products in an FIR filter, each product must be renormalized and may lose precision in the process. The block diagram of a model for this, Fig 4.7, is quite similar to Fig 4.6A; they differ in that the errors in products are multiplicative rather than additive. The distortions are therefore both of the output signal and of the frequency response.

Fig 4.7—A model for errors in floating-point implementation of DSP filters.

Derivation of an expression for these floating-point errors is a complicated session in statistics. Fortunately, Oppenheim and Schafer have done it for us and showed that the output SNR is bounded by:

P_signal / P_noise ≤ …    (35)
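The multiplicative character of these errors can be glimpsed without any statistics. The sketch below is an illustration (the tap values and the choice of IEEE single precision are arbitrary assumptions): it renormalizes every product and partial sum of an FIR dot product to single precision and compares the result against a double-precision reference.

```python
import struct

def f32(x):
    """Round a Python double to the nearest IEEE single-precision value."""
    return struct.unpack('f', struct.pack('f', x))[0]

# Accumulate an FIR dot product, renormalizing every product and sum to
# single precision -- modeling a floating-point DSP's finite mantissa.
h = [0.1] * 64                    # 0.1 is not exact in binary floating point
x = [1.0 / 3.0] * 64
acc32, acc64 = 0.0, 0.0
for c, s in zip(h, x):
    acc32 = f32(acc32 + f32(f32(c) * f32(s)))   # every result renormalized
    acc64 = acc64 + c * s                        # double-precision reference

rel_err = abs(acc32 - acc64) / abs(acc64)
# The error is relative to the signal (multiplicative), on the order of
# the single-precision epsilon times the number of operations.
assert 0 < rel_err < 64 * 2 ** -23
```

Because the per-operation error scales with the operands, the output error scales with the signal itself, which is why the bound of Eq 35 is naturally stated as a signal-to-noise ratio rather than as an absolute noise power.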

Filter Design Using Impulse Windows

An impulse window is just a sequence whose envelope matches some particular shape, such as a rectangle or a triangle. Rectangular and triangular windows are shown in Fig 4.8 for L = 64, along with several other shapes and their frequency responses when used as coefficients in an FIR filter. It is evident that the rectangular window achieves the fastest roll-off to the stopband, and that the various shapes produce differing amounts of ultimate attenuation and stopband ripple. Note that the positions of zeros in the frequency responses depend on the length of the window and its shape. These functions may be selected according to the demands of a specific application. These low-pass functions may be transformed to band-pass and high-pass responses as shown below. Band-pass follows directly from the low-pass case. Transformation takes place through multiplication (mixing) of the low-pass prototype's impulse response by a sinusoidal, "local-oscillator" sequence. This is precisely the same multiplication and mixing that takes place in an analog mixer; but now, we are concerned with the frequency response of a filter, not the frequency of a signal. The general frequency-translation properties of multipliers and filters are treated specifically in the following chapter. Let us say that the prototype LPF has coefficients h_n and a frequency response H_ω. Multiplying the coefficients by a sinusoid at ω₀ results in new coefficients:

h_n′ = h_n cos(ω₀ n t_s)    (36)

As will be proved later, the frequency response of this filter is:

H_ω = [ H_(ω−ω₀) + H_(ω+ω₀) ] / 2    (37)

which is a band-pass filter centered at ω₀. To perform this transformation on the L coefficients of the prototype, calculate new coefficients according to:

h_n′ = h_n cos[ ω₀ ( n − L/2 + 1/2 ) t_s ]    (38)
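As a numerical check on this transformation, the sketch below (assuming t_s = 1 and an arbitrarily chosen center frequency of one quarter the sampling rate) modulates a 64-tap rectangular-window low-pass prototype and evaluates the result: the passband gain is halved, exactly as Eq 37 predicts, and DC now falls in the stopband.

```python
import cmath
import math

def dtft_mag(h, w):
    """Magnitude of the frequency response of coefficients h at frequency w."""
    return abs(sum(c * cmath.exp(-1j * w * n) for n, c in enumerate(h)))

L = 64
h_lp = [1.0 / L] * L               # rectangular-window low-pass, unit DC gain

# Eq 38 with ts = 1: translate the prototype to center frequency w0.
w0 = math.pi / 2                    # one quarter of the sampling rate
h_bp = [h * math.cos(w0 * (n - L / 2 + 0.5)) for n, h in enumerate(h_lp)]

# Per Eq 37, H(w0) = [H_lp(0) + H_lp(2*w0)] / 2 -- half the prototype's
# DC gain, since H_lp(2*w0) lies deep in the prototype's stopband.
print(dtft_mag(h_bp, w0))          # ~0.5
print(dtft_mag(h_bp, 0.0))         # ~0: DC is now rejected
```

The factor of one half is the familiar mixer conversion loss: the real cosine splits the low-pass response into two images at ±ω₀, each carrying half the original gain.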

Fig 4.8—Impulse-window shapes (rectangular, triangular and others) for L = 64, and their frequency responses when used as FIR filter coefficients.
,"'"