Advanced Phase-Lock Techniques [1 ed.] 9781596931404, 159693140X

Frequency and time control systems are key circuits found in almost every electronic device today. Mobile phones, GPS systems…


Language: English; Pages: 534; Year: 2007


Table of contents :
Contents
Preface
CHAPTER 1 Phase-Locked Systems—A High-Level Perspective
1.1 PHASE-LOCKED LOOP BASICS
1.2 CONTINUOUS-TIME CONTROL SYSTEM PERSPECTIVE FOR PLLS (HIGH SNR)
1.3 TIME-SAMPLED PLL SYSTEMS (HIGH SNR)
1.4 ESTIMATION THEORETIC PERSPECTIVE (LOW SNR) FOR PLLS
1.5 SUMMARY
References
CHAPTER 2 Design Notes
2.1 SUMMARY OF CLASSIC CONTINUOUS-TIME TYPE-2 SECOND-ORDER PLL DESIGN EQUATIONS
2.2 CONTINUOUS-TIME TYPE-2 FOURTH-ORDER PLLS
2.3 DISCRETIZED PLLS
2.4 HYBRID PLLS INCORPORATING SAMPLE-AND-HOLDS
2.5 COMMUNICATION THEORY
2.6 SPECTRAL RELATIONSHIPS
2.7 TRIGONOMETRY
2.8 LAPLACE TRANSFORMS
2.9 Z-TRANSFORMS
2.10 PROBABILITY AND STOCHASTIC PROCESSES
2.11 NUMERICAL SIMULATION
2.12 CALCULUS
2.13 BUTTERWORTH LOWPASS FILTERS
2.14 CHEBYSHEV LOWPASS FILTERS
2.15 CONSTANTS
References
CHAPTER 3 Fundamental Limits
3.1 PHASE MODULATION AND BESSEL FUNCTIONS
3.2 HILBERT TRANSFORMS
3.3 CAUCHY-SCHWARZ INEQUALITY
3.4 RF FILTERING EFFECTS ON FREQUENCY STABILITY
3.5 CHEBYSHEV INEQUALITY
3.6 CHERNOFF BOUND
3.7 CRAMER-RAO BOUND
3.8 EIGENFILTERS (OPTIMAL FILTERS)
3.9 FANO BROADBAND MATCHING THEOREM
3.10 LEESON–SCHERER PHASE NOISE MODEL
3.11 THERMAL NOISE LIMITS
3.12 NYQUIST SAMPLING THEOREM
3.13 PALEY-WIENER CRITERION
3.14 PARSEVAL’S THEOREM
3.15 POISSON SUM
3.16 TIME-BANDWIDTH PRODUCT
3.17 MATCHED-FILTERS FOR DETERMINISTIC SIGNALS IN ADDITIVE WHITE GAUSSIAN NOISE (AWGN)
3.18 WEAK LAW OF LARGE NUMBERS
References
Appendix 3A: Maximum-Likelihood Frequency Estimator
Appendix 3B: Phase Probability Density Function for Sine Wave in AWGN
CHAPTER 4 Noise in PLL-Based Systems
4.1 INTRODUCTION
4.2 SOURCES OF NOISE
4.3 POWER SPECTRAL DENSITY CONCEPT FOR CONTINUOUS-TIME STOCHASTIC SIGNALS
4.4 POWER SPECTRAL DENSITY FOR DISCRETE-TIME SAMPLED SYSTEMS
4.5 PHASE NOISE FIRST PRINCIPLES
4.6 RANDOM PHASE NOISE
4.7 NOISE IMPRESSION ON TIME AND FREQUENCY SOURCES
References
Appendix 4A: Review of Stochastic Random Processes
Appendix 4B: Accurate Noise Modeling for Computer Simulations
Appendix 4C: Creating Arbitrary Noise Spectra in a Digital Signal Processing Environment
Appendix 4D: Noise in Direct Digital Synthesizers
CHAPTER 5 System Performance
5.1 SYSTEM PERFORMANCE OVERVIEW
5.2 INTEGRATED PHASE NOISE
5.3 LOCAL OSCILLATORS FOR RECEIVE SYSTEMS
5.4 LOCAL OSCILLATORS FOR TRANSMIT SYSTEMS
5.5 LOCAL OSCILLATOR PHASE NOISE IMPACT ON DIGITAL COMMUNICATION ERROR RATE PERFORMANCE
5.6 PHASE NOISE EFFECTS ON OFDM SYSTEMS
5.7 PHASE NOISE EFFECTS ON SPREAD-SPECTRUM SYSTEMS
5.8 PHASE NOISE IMPACT FOR MORE ADVANCED MODULATION WAVEFORMS
5.9 CLOCK NOISE IMPACT ON DAC PERFORMANCE
5.10 CLOCK NOISE IMPACT ON ADC PERFORMANCE
References
Appendix 5A: Image Suppression and Error Vector Magnitude
Appendix 5B: Channel Capacity and Cutoff Rate
CHAPTER 6 Fundamental Concepts for Continuous-Time Systems
6.1 CONTINUOUS VERSUS DISCRETE TIME
6.2 BASIC CONTINUOUS-TIME PHASE-LOCKED LOOPS
6.3 ADDITIONAL RESULTS FOR THE IDEAL TYPE-2 PLL
6.4 LOOP FILTERS
6.5 MORE COMPLICATED LOOP FILTERS
6.6 TYPE-3 PLL
6.7 HAGGAI CONSTANT PHASE MARGIN LOOP (9 DB PER OCTAVE)
6.8 PSEUDO-CONTINUOUS PHASE DETECTOR MODELS
6.9 STABILITY ANALYSIS
6.10 TRANSIENT RESPONSE EVALUATION FOR CONTINUOUS-TIME SYSTEMS
References
Appendix 6A: Simplification of Linear Systems
Appendix 6B: Bandwidth Considerations for Continuous-Time Modeling of Time-Sampled Systems
Appendix 6C: Christiaan Huygens and Phase-Locked Pendulum Clocks
Appendix 6D: Admittance Matrix Methods for Analyzing Complex Loop Filters
CHAPTER 7 Fundamental Concepts for Sampled-Data Control Systems
7.1 SAMPLED SIGNAL BASICS
7.2 RELATIONSHIPS BETWEEN CONTINUOUS-TIME AND DISCRETE-TIME SIGNAL REPRESENTATIONS
7.3 SAMPLED-TIME PLL
7.4 STABILITY ASSESSMENT FOR SAMPLED SYSTEMS
7.5 TIME-DOMAIN RESPONSE
7.6 CLOSED-FORM RESULTS FOR SAMPLED PLLS
7.7 PSEUDO-CONTINUOUS VERSUS SAMPLED SYSTEM ANALYSIS
7.8 NOISE IN SAMPLED SYSTEMS
References
Appendix 7A: Additional Closed-Form Results for Sampled PLLs
CHAPTER 8 Fractional-N Frequency Synthesizers
8.1 A BRIEF HISTORY OF FRACTIONAL-N SYNTHESIS
8.2 ANALOG-BASED FRACTIONAL-N SYNTHESIS
8.3 Δ-Σ MODULATOR FUNDAMENTALS
8.4 Δ-Σ FREQUENCY SYNTHESIS ARCHITECTURES
8.5 SINGLE-BIT VERSUS MULTIPLE-BIT OUTPUT Δ-Σ MODULATORS
8.6 COMBATING DISCRETE SPURIOUS TONES
8.7 Δ-Σ FRACTIONAL-N CAVEATS TO AVOID
8.8 FINAL RECOMMENDATIONS
References
CHAPTER 9 Oscillators
9.1 LINEAR OSCILLATOR THEORY
9.2 OSCILLATOR CONFIGURATIONS
9.3 OSCILLATOR USAGE IN PHASE-LOCKED LOOPS
9.4 OSCILLATOR IMPAIRMENTS
9.5 CLASSICAL PHASE NOISE MODELS
9.6 NONLINEAR OSCILLATORS AND NOISE
References
CHAPTER 10 Clock and Data Recovery
10.1 CLOCK AND DATA RECOVERY BASICS
10.2 SIGNALING WAVEFORMS
10.3 INTERSYMBOL INTERFERENCE
10.4 BIT ERROR RATE
10.5 OPTIMAL TIMING RECOVERY METHODS
10.6 BIT ERROR RATE INCLUDING TIME RECOVERY
10.7 FINAL THOUGHTS
References
Appendix 10A: BER Calculation Using the Gil-Pelaez Theorem
Acronyms and Abbreviations
List of Symbols
About the Author
Index


Advanced Phase-Lock Techniques

DISCLAIMER OF WARRANTY The technical descriptions, procedures, and computer programs in this book have been developed with the greatest of care and they have been useful to the author in a broad range of applications; however, they are provided as is, without warranty of any kind. Artech House, Inc. and the author and editors of the book titled Advanced Phase-Lock Techniques make no warranties, expressed or implied, that the equations, programs, and procedures in this book or its associated software are free of error, or are consistent with any particular standard of merchantability, or will meet your requirements for any particular application. They should not be relied upon for solving a problem whose incorrect solution could result in injury to a person or loss of property. Any use of the programs or procedures in such a manner is at the user’s own risk. The editors, author, and publisher disclaim all liability for direct, incidental, or consequent damages resulting from use of the programs or procedures in this book or the associated software.

Advanced Phase-Lock Techniques James A. Crawford

Software to accompany this book is available for download at: http://www.artechhouse.com/static/downloads/crawford_140.zip

Library of Congress Cataloging-in-Publication Data A catalog record for this book is available from the U.S. Library of Congress. British Library Cataloguing in Publication Data A catalog record for this book is available from the British Library.

ISBN-13: 978-1-59693-140-4 Cover design by Yekaterina Ratner MATLAB® is a trademark of The MathWorks, Inc., and is used with permission. The MathWorks does not warrant the accuracy of the text or exercises in this book. This book’s use or discussion of MATLAB® software or related products does not constitute endorsement or sponsorship by The MathWorks of a particular pedagogical approach or particular use of the MATLAB® software.

© 2008 Artech House 685 Canton Street Norwood, MA 02062 All rights reserved. Printed and bound in the United States of America. No part of this book may be reproduced or utilized in any form or by any means, electronic or mechanical, including photocopying, recording, or by any information storage and retrieval system, without permission in writing from the publisher. All terms mentioned in this book that are known to be trademarks or service marks have been appropriately capitalized. Artech House cannot attest to the accuracy of this information. Use of a term in this book should not be regarded as affecting the validity of any trademark or service mark. 10 9 8 7 6 5 4 3 2 1

To my future grandchildren

and to God be all the glory. Hebrews 2:1–3

Contents Preface .......................................................................................................................xvii Chapter 1 Phase-Locked Systems—A High-Level Perspective ....................................................1 1.1 Phase-Locked Loop Basics ............................................................................................ 1 1.1.1 Some PLL History................................................................................................................... 2

1.2 Continuous-Time Control System Perspective for PLLs (High SNR) ...................... 2 1.3 Time-Sampled PLL Systems (High SNR) .................................................................... 8 1.4 Estimation Theoretic Perspective (Low SNR) for PLLs........................................... 14 1.4.1 PLL as a Minimum Mean-Square-Error (MMSE) Estimator................................................ 17 1.4.2 PLL as a Maximum-Likelihood (ML) Estimator .................................................................. 18 1.4.3 PLL as a Maximum A Posteriori (MAP)-Based Estimator ................................................... 19 1.4.4 Performance Limits from the Cramer-Rao Bound ................................................................ 21 1.4.5 Optimal Mean-Square-Error Tracking: Kalman Filtering ..................................................... 22

1.5 Summary....................................................................................................................... 23 References ........................................................................................................................... 25 Selected Bibliography ......................................................................................................... 26

Chapter 2 Design Notes................................................................................................................27 2.1 Summary of Classic Continuous-Time Type-2 Second-Order PLL Design Equations ............................................................................................................................ 27 2.2 Continuous-Time Type-2 Fourth-Order PLLs.......................................................... 40 2.3 Discretized PLLs .......................................................................................................... 40 2.3.1 Integration Methods .............................................................................................................. 40 2.3.2 Closed-Form Discrete-Time PLL Solutions.......................................................................... 42 2.3.3 Higher-Order Differentiation Formulas ................................................................................ 44

2.4 Hybrid PLLs Incorporating Sample-and-Holds ....................................................... 45 2.4.1 Ideal Type-1 with Zero-Order Sample-and-Hold .................................................................. 45 2.4.2 Ideal Type-2 with Zero-Order Sample-and-Hold .................................................................. 46

2.5 Communication Theory............................................................................................... 47 2.5.1 Graphical Bit Error Rate and Symbol Error Rate Results ..................................................... 49 2.5.2 BPSK Bit Error Rate ............................................................................................................. 49 2.5.3 QPSK Bit Error Rate ............................................................................................................. 50 2.5.4 16-QAM Symbol Error Rate ................................................................................................. 51 2.5.5 64-QAM Symbol Error Rate ................................................................................................. 52 2.5.6 256-QAM Symbol Error Rate ............................................................................................... 53


2.5.7 8-PSK Symbol Error Rate ..................................................................................................... 54 2.5.8 16-PSK Symbol Error Rate ................................................................................................... 55

2.6 Spectral Relationships ................................................................................................. 56 2.7 Trigonometry................................................................................................................ 58 2.8 Laplace Transforms ..................................................................................................... 59 2.9 z-Transforms................................................................................................................. 60 2.10 Probability and Stochastic Processes........................................................................ 62 2.11 Numerical Simulation ................................................................................................ 64 2.11.1 DSP Windows ..................................................................................................................... 66 2.11.2 Polynomial-Based Interpolation.......................................................................................... 67 2.11.3 Raised-Cosine-Based Interpolation ..................................................................................... 68 2.11.4 Fourth-Order Runge-Kutta Numerical Integration.............................................................. 68

2.12 Calculus....................................................................................................................... 68 2.13 Butterworth Lowpass Filters..................................................................................... 69 2.14 Chebyshev Lowpass Filters ....................................................................................... 69 2.15 Constants..................................................................................................................... 70 References ........................................................................................................................... 70

Chapter 3 Fundamental Limits....................................................................................................71 3.1 Phase Modulation and Bessel Functions .................................................................... 71 3.2 Hilbert Transforms ...................................................................................................... 73 3.3 Cauchy-Schwarz Inequality ........................................................................................ 78 3.4 RF Filtering Effects on Frequency Stability .............................................................. 78 3.5 Chebyshev Inequality................................................................................................... 81 3.6 Chernoff Bound............................................................................................................ 81 3.7 Cramer-Rao Bound...................................................................................................... 83 3.7.1 CR Bound for Sine Wave in AWGN .................................................................................... 85 3.7.2 Phase Estimation for Sine Wave in AWGN.......................................................................... 89 3.7.3 CR Bound for Bit-Time Estimation ...................................................................................... 89

3.8 Eigenfilters (Optimal Filters) ...................................................................................... 90 3.9 Fano Broadband Matching Theorem......................................................................... 93 3.10 Leeson–Scherer Phase Noise Model ......................................................................... 94 3.11 Thermal Noise Limits ................................................................................................ 94 3.12 Nyquist Sampling Theorem....................................................................................... 95 3.13 Paley-Wiener Criterion ............................................................................................. 97


3.14 Parseval’s Theorem.................................................................................................... 97 3.15 Poisson Sum ................................................................................................................ 97 3.16 Time-Bandwidth Product .......................................................................................... 98 3.16.1 Gabor Limit for Deterministic Signals ................................................................................ 98 3.16.2 Time-Frequency Resolution for Deterministic Signals ....................................................... 99 3.16.3 Time-Frequency Resolution Limits for Stochastic Signals ................................................. 99

3.17 Matched-Filters for Deterministic Signals in Additive White Gaussian Noise (AWGN) ............................................................................................................................ 100 3.18 Weak Law of Large Numbers ................................................................................. 101 References ......................................................................................................................... 103 Appendix 3A: Maximum-Likelihood Frequency Estimator ........................................ 104 Appendix 3B: Phase Probability Density Function for Sine Wave in AWGN ........... 105

Chapter 4 Noise in PLL-Based Systems ....................................................................................109 4.1 Introduction ................................................................................................................ 109 4.2 Sources of Noise.......................................................................................................... 109 4.2.1 Semiconductor Noise Sources............................................................................................. 109 4.2.2 Quantization Noise.............................................................................................................. 116 4.2.3 Other Sources of Noise........................................................................................................ 118

4.3 Power Spectral Density Concept for Continuous-Time Stochastic Signals .......... 118 4.4 Power Spectral Density for Discrete-Time Sampled Systems ................................ 120 4.4.1 Example Results for Time-Sampled Noise ......................................................................... 121 4.4.2 DAC and ADC Quantization Noise .................................................................................... 124 4.4.3 Power Spectral Densities Refinements................................................................................ 125 4.4.4 Windowing Functions for Power Spectral Density Estimation ........................................... 125 4.4.5 Stationary Versus Cyclostationary Processes...................................................................... 131

4.5 Phase Noise First Principles ...................................................................................... 131 4.5.1 Discrete Spurious Contaminations ...................................................................................... 131

4.6 Random Phase Noise.................................................................................................. 132 4.6.1 Phase Noise Spectrum Terminology ................................................................................... 135 4.6.2 Time-Domain Phase Noise Terminology ............................................................................ 138 4.6.3 Modeling Phase Noise Processes ........................................................................................ 141

4.7 Noise Impression on Time and Frequency Sources ................................................ 142 4.7.1 Noise Equipartition with AM and PM Noise ...................................................................... 142 4.7.2 Noise in Linear Two-Port Networks ................................................................................... 142 4.7.3 Noise in Dividers................................................................................................................. 145 4.7.4 Macroscopic Noise Modeling in PLLs................................................................................ 146

References ......................................................................................................................... 148 Appendix 4A: Review of Stochastic Random Processes ............................................... 150 4A.1 Wide-Sense Stationarity ...................................................................................................... 151


4A.2 Probability Density Functions ............................................................................................. 151 4A.3 Characteristic Function........................................................................................................ 154 4A.4 Cumulative Probability Distribution Function .................................................................... 155 4A.5 Creation of Sample Sequences Exhibiting an Arbitrary Probability Density ...................... 155 4A.6 Power Spectral Density ....................................................................................................... 156 4A.7 Linear Filtering of WSS Processes...................................................................................... 157 4A.8 Equivalent Noise Bandwidth............................................................................................... 158

References ......................................................................................................................... 158 Appendix 4B: Accurate Noise Modeling for Computer Simulations .......................... 159 4B.1 Noise Modeling for 1/f^α Processes with 0 < α < 2 ............................................................ 160 References ......................................................................................................................... 167 Appendix 4C: Creating Arbitrary Noise Spectra in a Digital Signal Processing Environment ..................................................................................................................... 168 References ......................................................................................................................... 171 Appendix 4D: Noise in Direct Digital Synthesizers....................................................... 172 4D.1 Traditional DDS General Concepts..................................................................................... 172 4D.2 Phase Truncation and Related Spurious Effects .................................................................. 174 4D.3 DDS Output C/N ................................................................................................................. 174

References ......................................................................................................................... 175

Chapter 5 System Performance..................................................................................................176 5.1 System Performance Overview ................................................................................. 176 5.2 Integrated Phase Noise .............................................................................................. 176 5.3 Local Oscillators for Receive Systems ...................................................................... 177 5.3.1 Close-In Phase Noise Effects .............................................................................................. 180 5.3.2 Large Frequency Offset Phase Noise Effects ...................................................................... 184

5.4 Local Oscillators for Transmit Systems ................................................................... 187 5.4.1 Close-In Phase Noise Effects .............................................................................................. 187 5.4.2 Large Frequency Offset Phase Noise Effects ...................................................................... 188

5.5 Local Oscillator Phase Noise Impact on Digital Communication Error Rate Performance ..................................................................................................................... 189 5.5.1 Uncoded BPSK Bit Error Rate Performance....................................................................... 190 5.5.2 Uncoded QPSK Bit Error Rate Performance ...................................................................... 191 5.5.3 Symbol Error Rate for Square QAM Signal Constellations ................................................ 191 5.5.4 Phase-Modulated Signals M-PSK ....................................................................................... 195

5.6 Phase Noise Effects on OFDM Systems.................................................................... 198 5.6.1 Channel Estimation Errors Due to Phase Noise .................................................................. 202

5.7 Phase Noise Effects on Spread-Spectrum Systems.................................................. 206 5.8 Phase Noise Impact for More Advanced Modulation Waveforms ........................ 206 5.8.1 Euclidean Distance Measures.............................................................................................. 206


5.8.2 Forward Error Correction Coding Benefits ......................................................................... 208

5.9 Clock Noise Impact on DAC Performance .............................................................. 208 5.10 Clock Noise Impact on ADC Performance ............................................................ 210 5.10.1 ADC Example with Rectangular Interfering Spectrum..................................................... 211

References ......................................................................................................................... 213 Appendix 5A: Image Suppression and Error Vector Magnitude................................ 213 Appendix 5B: Channel Capacity and Cutoff Rate ........................................................ 216 5B.1 Channel Capacity................................................................................................................. 216

References ......................................................................................................................... 224

Chapter 6 Fundamental Concepts for Continuous-Time Systems ...........................................225 6.1 Continuous Versus Discrete Time ............................................................................ 225 6.2 Basic Continuous-Time Phase-Locked Loops ......................................................... 225 6.3 Additional Results for the Ideal Type-2 PLL .......................................................... 230 6.3.1 Natural Frequency ω_n ........................................................................................................ 230 6.3.2 Damping Factor................................................................................................................... 234

6.4 Loop Filters................................................................................................................. 235 6.4.1 Single-Ended Versus Differential........................................................................................ 236

6.5 More Complicated Loop Filters................................................................................ 237 6.5.1 One Additional Real Pole in Loop Filter............................................................................. 237 6.5.2 Additional RC Lowpass Filter Section................................................................................ 242 6.5.3 Cascading Two RC Lowpass Sections ................................................................................ 245

6.6 Type-3 PLL ................................................................................................................. 246 6.6.1 Close Equivalence for the Type-3 PLL with the Ideal Type-2 PLL.................................... 248

6.7 Haggai Constant Phase Margin Loop (9 dB per Octave) ....................................... 253 6.8 Pseudo-Continuous Phase Detector Models ............................................................ 261 6.8.1 Tri-State Voltage-Based Charge-Pump ............................................................................... 261 6.8.2 Tri-State Charge-Pump Detector—Current-Based.............................................................. 265 6.8.3 Zero-Order Sample-and-Hold ............................................................................................. 268 6.8.4 Digital Feedback Dividers................................................................................................... 270 6.8.5 Modeling Time Delays in Continuous Systems .................................................................. 272

6.9 Stability Analysis........................................................................................................ 274 6.9.1 Nyquist Stability Criterion .................................................................................................. 274 6.9.2 Measures of System Stability: Gain and Phase Margins .................................................... 275

6.10 Transient Response Evaluation for Continuous-Time Systems........................... 276 6.10.1 Exact Method—Partial Fractions ...................................................................................... 277 6.10.2 Exact Method—System of Differential Equations ............................................................ 278 6.10.3 Exact Method—State-Transition Matrix Method.............................................................. 280 6.10.4 Exact Method—Corrington............................................................................................... 281 6.10.5 Approximate Method—Integration Formula Substitution ................................................ 283


6.10.6 Approximate Method—Line Integration........................................................................... 283 6.10.7 Approximate Method—FFT.............................................................................................. 283 6.10.8 Approximate Method—Poisson Sum................................................................................ 284 6.10.9 Approximate Method—Companion Models ..................................................................... 284

References ......................................................................................................................... 285 Appendix 6A: Simplification of Linear Systems ........................................................... 286 References ......................................................................................................................... 288 Appendix 6B: Bandwidth Considerations for Continuous-Time Modeling of Time-Sampled Systems .............................................................................................................. 290 6B.1 Appearance of exp(–sT_s/2) Factor ..................................................................................... 292 6B.2 Pole-Zero Excess and the Poisson Sum Formula................................................................. 292

Reference........................................................................................................................... 293 Appendix 6C: Christiaan Huygens and Phase-Locked Pendulum Clocks ................. 296 Appendix 6D: Admittance Matrix Methods for Analyzing Complex Loop Filters.... 296

Chapter 7 Fundamental Concepts for Sampled-Data Control Systems...................................298 7.1 Sampled Signal Basics................................................................................................ 298 7.2 Relationships Between Continuous-Time and Discrete-Time Signal Representations ................................................................................................................ 299 7.2.1 Additional Insights for Sampled Signals ............................................................................. 300

7.3 Sampled-Time PLL.................................................................................................... 303 7.4 Stability Assessment for Sampled Systems .............................................................. 306 7.5 Time-Domain Response ............................................................................................. 307 7.6 Closed-Form Results for Sampled PLLs.................................................................. 308 7.6.1 Ideal Type-1 with Sample-Hold .......................................................................................... 308 7.6.2 Ideal Type-2 PLL with Sample-Hold .................................................................................. 312 7.6.3 Type-2 Third-Order with Charge-Pump Phase Detector..................................................... 316 7.6.4 Type-2 Fourth-Order with Charge-Pump Phase Detector ................................................... 320

7.7 Pseudo-Continuous Versus Sampled System Analysis ........................................... 322 7.8 Noise in Sampled Systems ......................................................................................... 323 7.8.1 Reference-Referred Noise ................................................................................................... 324 7.8.2 VCO-Referred Noise........................................................................................................... 327

References ......................................................................................................................... 328 Appendix 7A: Additional Closed-Form Results for Sampled PLLs............................ 329

Chapter 8

Contents

xiii

Fractional-N Frequency Synthesizers......................................................................330 8.1 A Brief History of Fractional-N Synthesis ............................................................... 330 8.2 Analog-Based Fractional-N Synthesis ...................................................................... 336 8.3 ∆-Σ Modulator Fundamentals................................................................................... 336

8.3.1 Quantization ........................................................................................................................ 339 8.3.2 Oversampling Rate.............................................................................................................. 340 8.3.3 Noise Shaping ..................................................................................................................... 340 8.3.4 Signal Transfer Function ..................................................................................................... 344 8.3.5 ∆-Σ Modulator Stability ...................................................................................................... 345 8.3.6 Phase Error Probability Density Functions ......................................................................... 347

8.4 ∆-Σ Frequency Synthesis Architectures ................................................................... 349 8.4.1 Single-Stage ∆-Σ Modulator Architectures ......................................................................... 349 8.4.2 Multi-Stage Modulator Architectures.................................................................................. 358

8.5 Single-Bit Versus Multiple-Bit Output ∆-Σ Modulators......................................... 362 8.6 Combating Discrete Spurious Tones ........................................................................ 366 8.6.1 Spur Reduction Using Dithering ......................................................................................... 367 8.6.2 Spur Reduction Using Chaos .............................................................................................. 367 8.6.3 Irrational Initial Condition................................................................................................... 368 8.6.4 Limit-Cycles........................................................................................................................ 368

8.7 ∆-Σ Fractional-N Caveats to Avoid........................................................................... 372 8.7.1 Load Pulling and Pushing on VCO ..................................................................................... 372 8.7.2 Time Delay Variations ........................................................................................................ 372 8.7.3 Charge-Pump Nonlinearities ............................................................................................... 373 8.7.4 Loop Filter Requirements.................................................................................................... 379

8.8 Final Recommendations ............................................................................................ 380 References ......................................................................................................................... 380

Chapter 9 Oscillators ..................................................................................................................384 9.1 Linear Oscillator Theory ........................................................................................... 384 9.1.1 Control System Perspective................................................................................................. 384 9.1.2 Negative-Resistance Oscillator ........................................................................................... 386

9.2 Oscillator Configurations .......................................................................................... 392 9.2.1 RC Oscillators ..................................................................................................................... 392 9.2.2 Ring Oscillators................................................................................................................... 397 9.2.3 Bridge Oscillators................................................................................................................ 398 9.2.4 LC Oscillators ..................................................................................................................... 406 9.2.5 Oscillator Summary............................................................................................................. 409 9.2.6 Oscillator ALC .................................................................................................................... 411 9.2.7 Best Oscillator Design Practices ......................................................................................... 413

9.3 Oscillator Usage in Phase-Locked Loops ................................................................. 419 9.3.1 VCO Coarse-Tuning Methods............................................................................................. 419 9.3.2 VCO Fine-Tuning Methods................................................................................................. 420

Contents

xiv

9.3.3 VCO Gain Compensation.................................................................................................... 424

9.4 Oscillator Impairments.............................................................................................. 426 9.4.1 Load-Pulling........................................................................................................................ 426 9.4.2 Injection-Locking ................................................................................................................ 428 9.4.3 Oscillator-Pushing............................................................................................................... 429 9.4.4 Post-Tuning Drift ................................................................................................................ 429

9.5 Classical Phase Noise Models .................................................................................... 432 9.5.1 Leeson’s Model ................................................................................................................... 432 9.5.2 Haggai Phase Noise Model ................................................................................................. 433 9.5.3 Ring Oscillator Phase Noise Model .................................................................................... 436

9.6 Nonlinear Oscillators and Noise................................................................................ 442 References ......................................................................................................................... 443 Selected Bibliography ...................................................................................................... 445

Chapter 10 Clock and Data Recovery..........................................................................................446 10.1 Clock and Data Recovery Basics............................................................................. 446 10.2 Signaling Waveforms ............................................................................................... 447 10.3 Intersymbol Interference......................................................................................... 450 10.3.1 Zero Intersymbol Interference........................................................................................... 451

10.4 Bit Error Rate........................................................................................................... 454 10.5 Optimal Timing Recovery Methods ....................................................................... 458 10.5.1 Cramer-Rao Bound Limits ................................................................................................ 458 10.5.2 Estimation Theory-Based Timing-Error Metrics............................................................... 458 10.5.3 Hardware-Based Timing-Error Metrics............................................................................. 474

10.6 Bit Error Rate Including Time Recovery .............................................................. 479 10.6.1 Clock Recovery Using First-Order Markov Modeling...................................................... 480 10.6.2 Computing Transition-Probabilities for CDR Applications .............................................. 485 10.6.3 Mean-Time to First-Slip.................................................................................................... 490 10.6.4 Applying First-Order Markov Modeling to Real PLLs ..................................................... 492 10.6.5 Conventional Approach to Timing-Recovery Analysis .................................................... 494 10.6.6 Connecting Phase Tracking Performance with CDR BER Performance .......................... 495

10.7 Final Thoughts.......................................................................................................... 495 References ......................................................................................................................... 496 Appendix 10A: BER Calculation Using the Gil-Pelaez Theorem ................................ 497 References ......................................................................................................................... 498

Acronyms and Abbreviations ....................................................................................500 List of Symbols ..........................................................................................................504 About the Author.......................................................................................................506

Contents

xv

Index ..........................................................................................................................508

Preface

Phase-lock techniques in time and frequency control systems have expanded substantially since my first book¹ on this subject was released in 1994. Complete systems-on-chip are now a genuine reality, and they frequently entail numerous RF, baseband, and clock frequency sources as well as time and frequency tracking systems. Performance levels that were not even imaginable ten years ago are now commonplace in products ranging from cell phones to multi-gigahertz computer processors. This has led to the creation of many new design techniques as well as the obsolescence of other more traditional methods. This text builds on the foundational material that was provided in the 1994 text with expanded attention given to a wide range of topics that are germane to phase-lock systems.

All of the computer-based analysis used in creating this text was performed with MATLAB, and all of the associated scripts are provided on the CD that accompanies this work.² The coding style was purposely kept simple so that readers who do not already own, or otherwise have access to, MATLAB can translate the scripts into other languages like C++ if they so choose. The MATLAB script used to create each book-figure is usually identified through a footnote or other reference in the text so that the appropriate script can be easily found on the CD. I want to offer a special word of thanks to The MathWorks for providing MATLAB to me through their book program.³

Having been a consultant to industry through almost all of the wireless communications explosion in the 1990s, followed by my own subsequent business ventures, it has been very interesting to apply the phase-lock technique in its many associated forms across many different applications. It is with this broad-brush perspective that the phase-lock technique is first introduced in Chapter 1.

Since encountering my first phase-locked loop (PLL) following college in the 1980s, I have found that I routinely refer to a number of helpful formulas even to this day, and these have been conveniently collected in Chapter 2 for easy and quick reference.

Chapter 3 is motivated by the rather obvious need to always know what the theoretical limits are concerning a given design problem. In general, design problems that demand performance close to theoretical limits will require greater design precision, power consumption, and/or development effort. As my father has frequently said, "Measure twice, cut once." And as I say to my own children, "Count the cost before you start the journey." It is this sage advice in the context of theoretical engineering limits that cautions every engineer to know just how close to the line he or she is attempting to stand.

Much of the material in this text would not be necessary were it not for the nemesis that consumes the pages of Chapter 4, namely noise. Noise competes with everything we attempt to do in our lives, whether it be computational noise, the transmission of electrical signals, or the search for truth itself. Many different noise sources are described and analysis techniques presented in this chapter. The appendix to this chapter provides some hints about my long-standing interest in computer-based system simulation, an interest that I have had since perhaps junior high. Many computational details are lost behind the speed and sophistication of our computerized world nowadays, but the science of computation will always remain a form of art from my perspective.

The role that noise plays at the system level is considered in Chapter 5. The topics explored in this chapter are primarily motivated by some of the design problems that I encountered in my consulting activities. Chapter 6 provides a discussion about classical continuous-time PLLs used for frequency synthesis, quickly moving into a presentation of pseudo-continuous systems that are more representative of modern systems. The computational methods presented in the latter portion of this chapter should also be of interest to readers who share my passion for computer simulation.

Although sampled-PLLs were developed at length in my original text, Chapter 7 is an expanded discussion of this topic that should be helpful in firmly connecting the concepts between continuous-time PLLs and sampled-time PLLs in a rigorous manner. With the proliferation of fractional-N frequency synthesis methods and ∆-Σ methods in general, Chapter 8 opens with a historical development time-line for fractional-N synthesis largely based on the U.S. patent record. The balance of this chapter is devoted entirely to fractional-N frequency synthesis. A clear understanding of these techniques is vital to anyone designing in the frequency synthesis area today.

Nearly all phase-lock methods would be unnecessary, of course, if we could design and build perfect oscillators, but the noise-nemesis of Chapter 4 forces us to deal with imperfect oscillators, just as gravity binds us to the surface of the earth. The entirety of Chapter 9 is devoted to a fairly extensive discussion about oscillators.

The final chapter in the book stems from a number of development projects pertaining to precision clock and data recovery (CDR) systems, also called bit synchronization. The design of these functions requires a knowledge of communication theory as well as PLL theory. It is only fitting that this chapter come last in the text because of its multidisciplined nature.

Having measured the breadth and scope of the phase-lock topic now for a second time with the release of this book, I wish every reader an enlightened journey through this timeless concept.

James A. Crawford
San Diego, California
November 2007

¹ Frequency Synthesizer Design Handbook, Artech House, 1994.
² MATLAB®, Version 7.2.0.232 (R2006a).
³ The MathWorks, Inc., 3 Apple Hill Drive, Natick, MA 01760-2098, Tel: 508-647-7000, e-mail: [email protected], http://www.mathworks.com/.

CHAPTER 1
Phase-Locked Systems—A High-Level Perspective

At their most basic level, time and frequency control systems are irrevocably linked together, being based on many of the same foundational principles and concepts. RF engineers frequently discuss integrated phase noise in terms of dBc, for example, whereas optical communication engineers discuss their clock-jitter issues in terms of unit intervals (UI). Each measurement description captures a different perspective of the same underlying physical phenomena.

This chapter is devoted to an expanded view of phase-locked systems that encompasses (i) traditional frequency- and time-domain perspectives, (ii) the traditional control system perspective, and (iii) the perhaps less familiar estimation-theoretic perspective. Each view serves to provide valuable insight into this pervasive engineering topic while deferring more detailed discussion and analyses to the chapters that follow.

1.1 PHASE-LOCKED LOOP BASICS

Few topics in electrical engineering have demanded as much attention over the years as the phase-locked loop (PLL). The PLL is arguably one of the most important building blocks necessary for modern digital communications, whether in the RF radio portion of the hardware where it is used to synthesize pristine carrier signals, or in the baseband digital signal processing where it is often used for carrier- and time-recovery processing. The PLL topic is also intriguing because a thorough understanding of the concept embraces ingredients from many disciplines including RF design, digital design, continuous- and discrete-time control systems, estimation theory, and communication theory.

The PLL landscape is naturally divided into (i) low signal-to-noise ratio (SNR) applications like Costas carrier-recovery and time recovery applications and (ii) high SNR applications like frequency synthesis.
Each of these areas is further divided between (a) analog/RF continuous-time implementations versus (b) digital discrete-time implementations as suggested in Figure 1-1. The different manifestations of the PLL concept require careful attention to different usage, analysis, design, and implementation considerations.

Figure 1-1 A coarse segmentation of the PLL systems subject. (PLL systems divide into high-SNR frequency synthesis and low-SNR clock and carrier recovery, with each category further split into continuous-time and discrete-time implementations.)


The best way to develop a sound understanding of the phase-locked loop is to review the fundamental theories on which this concept is based. One of the factors contributing to the longevity of the PLL is that relatively simple implementations can still lead to nearly optimal performance.

1.1.1 Some PLL History

"While recovering from an illness in 1665, Dutch astronomer and physicist Christiaan Huygens noticed something very odd. Two of the large pendulum clocks in his room were beating in unison, and would return to this synchronized pattern regardless of how they were started, stopped, or otherwise disturbed.

"An inventor who had patented the pendulum clock only eight years earlier, Huygens was understandably intrigued. He set out to investigate this phenomenon, and the records of his experiments were preserved in a letter to his father. Written in Latin, the letter provides what is believed to be the first recorded example of the synchronized oscillator, a physical phenomenon that has become increasingly important to physicists and engineers in modern times."¹

It should come as no surprise that modern researchers would later find that the behavior of such injection-locked oscillators can be closely modeled based on PLL principles [1]–[4]. Anyone who has tried to colocate RF oscillators running at nearly the same but different frequencies has experienced how incredibly sensitive this coupling phenomenon is!

"In 1840, Alexander Bain proposed a fax machine that used synchronized pendulums to scan an image at the transmitting end and send electrical impulses to a matching pendulum at the receiving end to reconstruct the image. The device, however, was never developed.

"The phase-lock concept as we know it today was originally described in a published work by de Bellescize in 1932 [5] but did not fall into widespread use until the era of television where it was used to synchronize horizontal and vertical video scans.
One of the earliest patents showing the use of a phase-locked loop with a feedback divider for frequency synthesis appeared in 1970 [6]. The phase-locked loop concept is now used almost universally in many products ranging from citizens band radio to deep-space coherent receivers."²

1.2 CONTINUOUS-TIME CONTROL SYSTEM PERSPECTIVE FOR PLLS (HIGH SNR)

Phase-locked systems fundamentally entail a control system or other mathematical means for synchronizing a local time or frequency source with an externally applied signal. The external signal may be contaminated with noise, and may also exhibit time-varying behavior that affects the signal's amplitude, frequency, and phase. In their most basic form, phase-locked systems deal with sinusoidal signals that exhibit periodicity, and this naturally leads to thinking of the underlying process as one of synchronization. When the local source is perfectly synchronized with the external signal, the phase and frequency of the local source perfectly match the phase and frequency of the externally applied signal. This condition is referred to as having achieved phase-lock.

Phase-lock operations occur all around us every day. Transferring accurate time from a master clock to our wristwatch entails synchronization. When a musician tunes an instrument, or a piano craftsman tunes a piano, the underlying process entails synchronizing the instrument or piano with a precision (external) frequency source. The AC generators that provide electrical power throughout the country must be similarly synchronized in phase and frequency before they are connected to the power transmission grid.

The PLL form that is considered in this text consists of three basic components that appear in one form or another within the system [8], [9]: (i) a phase error metric or detector, (ii) a frequency-controllable oscillator, and (iii) a loop filter. These constitutive elements are shown in Figure 1-2, and are described further for the ideal type-2 PLL in Table 1-1. The feedback divider is normally present only in frequency synthesis applications, and is therefore shown as an optional element in this figure.

PLLs are most frequently discussed in the context of continuous-time and Laplace transforms. A clear distinction is made in this text between continuous-time and discrete-time (i.e., sampled) PLLs because the analysis methods are, rigorously speaking, related but different. A brief introduction to continuous-time PLLs is provided in this section with more extensive details provided in Chapter 6.

PLL type and PLL order are two technical terms that are frequently used interchangeably even though they represent distinctly different quantities. PLL type refers to the number of ideal poles (or integrators) within the linear system. A voltage-controlled oscillator (VCO) is an ideal integrator of phase, for example. PLL order refers to the order of the characteristic equation polynomial for the linear system (e.g., the denominator portion of (1.4)). The loop order must always be greater than or equal to the loop type. Type-2 third- and fourth-order PLLs are discussed in Chapter 6, as well as a type-3 PLL, for example.

¹ http://www.globaltechnoscan.com/20thSep-26thSep/out_of_time.htm.
² See Chapter 1 of [7].

Figure 1-2 Basic PLL structure exhibiting the basic functional ingredients. (Phase detector Kd, loop filter, and VCO in the forward path, with input phase θref, output phase θout, and an optional 1/N feedback divider.)

Table 1-1 Basic Constitutive Elements for a Type-2 Second-Order PLL

Block Name | Laplace Transfer Function | Description
Phase Detector | Kd, V/rad | Phase error metric that outputs a voltage that is proportional to the phase error existing between its input θref and the feedback phase θout/N. Charge-pump phase detectors output a current rather than a voltage, in which case Kd has units of A/rad.
Loop Filter | (1 + sτ2)/(sτ1) | Also called the lead-lag network, it contains one ideal pole and one finite zero.
VCO | Kv/s | The voltage-controlled oscillator (VCO) is an ideal integrator of phase. Kv normally has units of rad/s/V.
Feedback Divider | 1/N | A digital divider that is represented by a continuous divider of phase in the continuous-time description.

The type-2 second-order PLL is arguably the workhorse even for modern PLL designs. This PLL is characterized by (i) its natural frequency ωn (rad/s) and (ii) its damping factor ζ. These terms are used extensively throughout the text, including the examples used in this chapter. These terms are separately discussed later in Sections 6.3.1 and 6.3.2. The role of these parameters in shaping the time- and frequency-domain behavior of this PLL is captured in the extensive list of formulas provided in Section 2.1. In the continuous-time domain, the type-2 second-order PLL³ open-loop gain function is given by

  GOL(s) = (ωn/s)²·(1 + sτ2)/(sτ1)    (1.1)

and the key loop parameters are given by

  ωn = √[Kd·Kv/(N·τ1)]    (1.2)

  ζ = ωn·τ2/2    (1.3)

³ See Section 6.2.
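As a quick numerical cross-check of (1.2) and (1.3), the sketch below computes ωn and ζ from a set of loop constants. Python is used here in place of the book's MATLAB scripts, and the Kd, Kv, N, τ1, and τ2 values are purely illustrative assumptions, not values from the text:

```python
import math

# Illustrative loop constants (assumed values, not from the text)
Kd = 0.5e-3               # phase detector gain, A/rad (charge-pump style)
Kv = 2 * math.pi * 20e6   # VCO gain, rad/s/V
N = 1000                  # feedback divider ratio
tau1 = 2.2e-6             # loop-filter time constants, s
tau2 = 2.6e-4

# Equations (1.2) and (1.3)
wn = math.sqrt(Kd * Kv / (N * tau1))   # natural frequency, rad/s
zeta = wn * tau2 / 2                   # damping factor

print(f"fn = {wn / (2 * math.pi):.1f} Hz, zeta = {zeta:.3f}")
```

Note that whether Kd carries units of V/rad or A/rad (charge-pump detector), the formulas are unchanged; only the interpretation of the loop-filter constants differs.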

The time constants τ1 and τ2 are associated with the loop filter's R and C values as developed in Chapter 6. The closed-loop transfer function associated with this PLL is given by the classical result

  H1(s) = (1/N)·θout(s)/θref(s) = ωn²·[1 + s·(2ζ/ωn)]/(s² + 2ζωn·s + ωn²)    (1.4)

The transfer function between the synthesizer output phase noise and the VCO self-noise is given by H2(s) where

  H2(s) = 1 − H1(s)    (1.5)

A convenient frequency-domain description of the open-loop gain function is provided in Figure 1-3. The frequency break-points called out in this figure and the next two appear frequently in PLL work and are worth committing to memory. The unity-gain radian frequency is denoted by ωu in this figure and is given by

  ωu = ωn·√[2ζ² + √(4ζ⁴ + 1)]    (1.6)

A convenient approximation for the unity-gain frequency (1.6) is given by ωu ≅ 2ζωn. This result is accurate to within 10% for ζ ≥ 0.704. The H1(s) transfer function determines how phase noise sources appearing at the PLL input are conveyed to the PLL output and a number of other important quantities. Normally, the input phase noise spectrum is assumed to be spectrally flat resulting in the output spectrum due to the reference noise being shaped entirely by |H1(s)|². A representative plot of |H1|² is shown in Figure 1-4. The key frequencies in the figure are the frequency of maximum gain, the zero dB gain frequency, and the −3 dB gain frequency which are given respectively by

  FPk = [ωn/(2π·2ζ)]·√[√(1 + 8ζ²) − 1]  Hz    (1.7)

  F0dB = (1/2π)·√2·ωn  Hz    (1.8)

  F3dB = (ωn/2π)·√[1 + 2ζ² + 2·√(ζ⁴ + ζ² + 1/2)]  Hz    (1.9)
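The closed forms (1.7)–(1.9), and the ωu ≅ 2ζωn approximation for (1.6), are easy to sanity-check numerically. The sketch below (Python here rather than the book's MATLAB scripts) evaluates |H1(j2πf)| directly from (1.4) for the ζ = 0.707, Fn = 10 Hz case of Figure 1-4:

```python
import math

zeta, fn = 0.707, 10.0          # damping and natural frequency (Hz), as in Figure 1-4
wn = 2 * math.pi * fn

def H1_mag(f):
    """|H1(j*2*pi*f)| from (1.4)."""
    s = 1j * 2 * math.pi * f
    return abs((wn**2 + 2 * zeta * wn * s) / (s**2 + 2 * zeta * wn * s + wn**2))

# Closed-form key frequencies (1.7)-(1.9)
FPk = wn / (2 * math.pi * 2 * zeta) * math.sqrt(math.sqrt(1 + 8 * zeta**2) - 1)
F0dB = math.sqrt(2) * wn / (2 * math.pi)
F3dB = wn / (2 * math.pi) * math.sqrt(1 + 2 * zeta**2
                                      + 2 * math.sqrt(zeta**4 + zeta**2 + 0.5))

# Numerical checks against the direct evaluation of |H1|
freqs = [0.01 * k for k in range(1, 10000)]
f_peak = max(freqs, key=H1_mag)                     # brute-force peak location
assert abs(f_peak - FPk) < 0.02                     # matches (1.7)
assert abs(H1_mag(F0dB) - 1.0) < 1e-9               # exactly 0 dB at (1.8)
assert abs(20 * math.log10(H1_mag(F3dB)) + 3.01) < 0.02   # -3 dB at (1.9)

# Unity-gain approximation for (1.6): wu ~ 2*zeta*wn, within 10% for zeta >= 0.704
wu = wn * math.sqrt(2 * zeta**2 + math.sqrt(4 * zeta**4 + 1))
assert abs(wu - 2 * zeta * wn) / wu < 0.10
print("all key-frequency checks passed")
```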


Figure 1-3 Open-loop gain approximations for classic continuous-time type-2 PLL. (Gain in dB versus frequency in rad/s: the gain falls at −12 dB/octave below the ωn/2ζ break and at −6 dB/octave beyond ωn, crossing 0 dB at ωu; the plot is annotated with the levels 10·log10[ωn⁴ + 4ωn²ζ²] and 40·log10(2.38ζ).)

Figure 1-4 Closed-loop gain H1(f) for type-2 second-order PLL⁴ from (1.4). (Gain in dB versus frequency in Hz, marking FPk, F0dB, F3dB, the gain peaking GPk, and the asymptotic −6 dB/octave rolloff.)

The amount of gain-peaking that occurs at frequency FPk is given by

  GPk = 10·log10[ 8ζ⁴ / (8ζ⁴ − 4ζ² − 1 + √(1 + 8ζ²)) ]  dB    (1.10)

For situations where the close-in phase noise spectrum is dominated by reference-related phase noise, the amount of gain-peaking can be directly used to infer the loop's damping factor from (1.10), and the loop's natural frequency from (1.7). Normally, the close-in (i.e., radian offset frequencies less than ωn/2ζ) phase noise performance of a frequency synthesizer is entirely dominated by reference-related phase noise since the VCO phase noise generally increases 6 dB/octave with decreasing offset frequency⁵ whereas the open-loop gain function exhibits a 12 dB/octave increase in this same frequency range. VCO-related phase noise is attenuated by the H2(s) transfer function (1.5) at the PLL's output for offset frequencies less than approximately ωn. At larger offset frequencies, H2(s) is insufficient to suppress VCO-related phase noise at the PLL's output. Consequently, the PLL's output phase noise spectrum is normally dominated by the VCO self-noise phase noise spectrum for the larger frequency offsets. The key frequency offsets and relevant H2(s) gains are shown in Figure 1-5 and given in Table 1-2.

⁴ Book CD:\Ch1\u14033_figequs.m, ζ = 0.707, ωn = 2π·10 Hz.
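Inverting (1.10) to recover ζ from a measured peaking value has no tidy closed form, but a simple bisection works because GPk decreases monotonically as ζ increases. A sketch (Python; the "measured" 2.0 dB peaking and 8.0 Hz peak frequency are assumed values for illustration only):

```python
import math

def gain_peaking_db(zeta):
    """GPk from (1.10) for the classic type-2 second-order PLL."""
    num = 8 * zeta**4
    den = 8 * zeta**4 - 4 * zeta**2 - 1 + math.sqrt(1 + 8 * zeta**2)
    return 10 * math.log10(num / den)

def zeta_from_peaking(gpk_db, lo=0.2, hi=5.0):
    """Invert (1.10) by bisection; GPk falls monotonically as zeta grows."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if gain_peaking_db(mid) > gpk_db:
            lo = mid          # too much peaking: damping is too low
        else:
            hi = mid
    return 0.5 * (lo + hi)

measured_gpk = 2.0            # dB of reference-noise peaking (assumed measurement)
zeta = zeta_from_peaking(measured_gpk)

# With zeta in hand, (1.7) converts the measured peak frequency into fn:
FPk_measured = 8.0            # Hz (assumed measurement)
fn = FPk_measured * 2 * zeta / math.sqrt(math.sqrt(1 + 8 * zeta**2) - 1)
print(f"inferred zeta = {zeta:.4f}, fn = {fn:.2f} Hz")
```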

Figure 1-5 Closed-loop gain⁶ H2 and key frequencies for the classic continuous-time type-2 PLL. (H2 gain in dB versus frequency in Hz, plotted for ζ = 0.4 and Fn = 10 Hz, marking FH2max, Fn, and FH2−3dB along with the gains GH2max and GFn and the −3 dB level.)

Table 1-2

Key Frequencies Associated with H2(s) for the Ideal Type-2 PLL

Frequency, Hz | Associated H2 Gain, dB | Constraints on ζ
1/(2π) (i.e., ω = 1 rad/sec) | GH2_1rad/s = −10 log10[ ωn^4 + ωn^2(4ζ^2 − 2) + 1 ] | None
FH2−3dB = (ωn/2π)·[ 2ζ^2 − 1 + sqrt(2 − 4ζ^2 + 4ζ^4) ]^(1/2) | −3 | None
FH2_0dB = (1/2π)·ωn/sqrt(2 − 4ζ^2) | 0 | ζ < 1/√2
Fn = ωn/(2π) | GH2_ωn = −10 log10(4ζ^2) | None
FH2_max = (1/2π)·ωn/sqrt(1 − 2ζ^2) | GH2_max = −10 log10(4ζ^2 − 4ζ^4) | ζ < 1/√2

5 Leeson's model in Section 9.5.1; Haggai oscillator model in Section 9.5.2.
6 Book CD:\Ch1\u14035_h2.m.
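As a cross-check on Table 1-2, each closed-form frequency can be substituted back into |H2(jω)| for the classic H2(s) = s^2/(s^2 + 2ζωn s + ωn^2). A short sketch in plain Python (not the book CD script) using the ζ = 0.4, Fn = 10 Hz case of Figure 1-5:

```python
import math

def h2_mag(f, fn, zeta):
    """|H2(j2πf)| for H2(s) = s^2 / (s^2 + 2ζωn s + ωn^2)."""
    x = (f / fn) ** 2                 # (ω/ωn)^2
    return x / math.sqrt((1 - x) ** 2 + 4 * zeta ** 2 * x)

zeta, fn = 0.4, 10.0
f_3db = fn * math.sqrt(2*zeta**2 - 1 + math.sqrt(2 - 4*zeta**2 + 4*zeta**4))
f_0db = fn / math.sqrt(2 - 4*zeta**2)
f_max = fn / math.sqrt(1 - 2*zeta**2)
g_max = -10*math.log10(4*zeta**2 - 4*zeta**4)

g_at_3db = 20*math.log10(h2_mag(f_3db, fn, zeta))   # ≈ −3.01 dB
g_at_0db = 20*math.log10(h2_mag(f_0db, fn, zeta))   # ≈ 0 dB
g_at_max = 20*math.log10(h2_mag(f_max, fn, zeta))   # equals g_max
print(f_3db, f_0db, f_max, g_at_3db, g_at_0db, g_at_max)
```

Each tabulated frequency reproduces its tabulated gain, and the three frequencies fall in the expected order FH2−3dB < FH2_0dB < FH2_max.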

Name | Equivalent for s | Difference Equation | Stability Region (Initial-Value Problem)
Forward Euler | (z − 1)/Ts | xn+1 = xn + Ts·ẋn | Interior of the unit circle centered at λTs = −1
Backward Euler | (1/Ts)(z − 1)/z | xn+1 = xn + Ts·ẋn+1 | Exterior of the unit circle centered at λTs = +1 (Figure 2-29)
Bilinear Transform | (2/Ts)(z − 1)/(z + 1) | xn+1 = xn + (Ts/2)(ẋn + ẋn+1) | Entire left-half λTs-plane; the entire right-half plane is unstable (NA, no figure)
Second-Order Gear | [3/(2Ts)][1 − (4/3)z⁻¹ + (1/3)z⁻²] | xn+1 = (4/3)xn − (1/3)xn−1 + (2/3)Ts·ẋn+1 | Exterior of the boundary λTs = 3/2 − 2e^(−jθ) + (1/2)e^(−j2θ) for −π ≤ θ < π (Figure 2-30)

Figure 2-29 Initial value problem stability region6 for the backward-Euler method is the exterior of the circular region (plotted in the λTs-plane, Im(λTs) versus Re(λTs); the unit circle is centered at λTs = +1).

5 Example finite-difference transient responses can be found using Book CD:\Ch2\u14027_discretepll_transients.m.
6 Book CD:\Ch2\u14026_intforms.m.


Figure 2-30 Initial value problem stability region7 for the second-order Gear method is the exterior of the drawn perimeter (plotted in the λTs-plane; the stability region lies outside the boundary, the unstable region inside).

2.3.2.1 Bilinear Transform Result for H1(z) for Ideal Type-2 PLL

Figure 2-31 Bilinear transform redesign of H1(s) (2.4) (discrete-time block diagram built from delay elements D, summing nodes, and the coefficients ac, bc, and Ts/2, producing θo and θe from θin). Voltage V represents an externally applied tuning voltage to the VCO, assuming that Kv = 1.

GOL(z) = c · [ (a z⁻¹ + b)/(1 − z⁻¹) ] · [ (1 + z⁻¹)/(1 − z⁻¹) ]   (2.49)

θo(k) = [1/(1 + bc)] · { bc·θin(k) + (ac + bc)·θin(k − 1) + ac·θin(k − 2) + (Ts/2)[ v(k) − v(k − 2) ] + (2 − ac − bc)·θo(k − 1) − (1 + ac)·θo(k − 2) }   (2.50)

a = 1 − 4ζ/(ωnTs);  b = 1 + 4ζ/(ωnTs);  c = (ωnTs/2)²   (2.51)

7 Book CD:\Ch2\u14026_intforms.m.
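The recursion (2.50) can be sanity-checked numerically: with a constant input phase and v = 0, the loop must settle to θo = θin, since H1(z) has unity DC gain. A minimal sketch assuming illustrative values ζ = 0.707 and ωnTs = 0.1 (not values prescribed by the text):

```python
zeta, wnTs = 0.707, 0.1            # illustrative design values
a = 1 - 4*zeta/wnTs
b = 1 + 4*zeta/wnTs
c = (wnTs/2)**2

theta_in = 1.0                     # constant input phase, rad; v(k) = 0
to = [0.0, 0.0]                    # θo(k−1), θo(k−2)
for k in range(3000):
    # recursion (2.50) with the v-terms removed
    new = (b*c*theta_in + (a*c + b*c)*theta_in + a*c*theta_in
           + (2 - a*c - b*c)*to[0] - (1 + a*c)*to[1]) / (1 + b*c)
    to = [new, to[0]]
print(to[0])   # ≈ 1.0 (unity DC gain)
```

The closed-loop poles for this design sit at |z| ≈ 0.93, so the transient dies out well within the simulated span.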

2.3.2.2 Second-Order Gear Result for H1(z) for Ideal Type-2 PLL

Figure 2-32 Second-order Gear redesign of H1(s) (2.4) (discrete-time block diagram built from delay elements D, summing nodes, and the coefficients 4ζ/(ωnTs), (2Ts/3)ωn², 2Ts/3, 4/3, 1/3, and ζ/(ωnTs); v is the externally applied VCO tuning voltage, θe the loop phase error).

GOL(z) = (2ωnTs/3)² · [ (3ζ/(ωnTs))·∆ + 1 ] / ∆²,  where ∆ ≡ 1 − (4/3)z⁻¹ + (1/3)z⁻²   (2.52)

θo(k) = (1/D) · [ Σ_{n=0..2} an θin(k − n) + Σ_{n=0..2} bn v(k − n) + Σ_{n=1..4} cn θo(k − n) ]   (2.53)

a0 = 1 + 3ζ/(ωnTs);  a1 = −4ζ/(ωnTs);  a2 = ζ/(ωnTs)   (2.54)

b0 = 3/(2ωnTs);  b1 = −2/(ωnTs);  b2 = 1/(2ωnTs)   (2.55)

c1 = 6/(ωnTs)² + 4ζ/(ωnTs);  c2 = −[ 11/(2(ωnTs)²) + ζ/(ωnTs) ];  c3 = 2/(ωnTs)²;  c4 = −1/(2ωnTs)²   (2.56)

D = 1 + 3ζ/(ωnTs) + [ 3/(2ωnTs) ]²   (2.57)
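The same unity-DC-gain check applies to the Gear-discretized loop (2.53)-(2.57); note that Σan = 1 and D − Σcn = 1, which forces the fixed point θo = θin. A sketch with the same illustrative ζ and ωnTs as before:

```python
zeta, wnTs = 0.707, 0.1            # illustrative design values
h = 3*zeta/wnTs
a = [1 + h, -4*zeta/wnTs, zeta/wnTs]
c = [6/wnTs**2 + 4*zeta/wnTs,
     -(11/(2*wnTs**2) + zeta/wnTs),
     2/wnTs**2,
     -1/(2*wnTs)**2]
D = 1 + h + (3/(2*wnTs))**2

theta_in = 1.0                     # constant input phase; v(k) = 0
to = [0.0]*4                       # θo(k−1) ... θo(k−4)
for k in range(4000):
    new = (sum(a)*theta_in + sum(cn*t for cn, t in zip(c, to))) / D
    to = [new] + to[:3]
print(to[0])   # ≈ 1.0 (unity DC gain)
```
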

2.3.3 Higher-Order Differentiation Formulas

In cases where a precision first-order time-derivative f(xn+1) must be computed from an equally spaced sample sequence, higher-order formulas may be helpful.8 Several of these are provided here in Table 2-2. The uniform time between samples is represented by Ts.

8 Precisions compared in Book CD:\Ch2\u14028_diff_forms.m.

Table 2-2 Higher-Order Differentiation Formulas

Name | Formula
Third-Order Gear | f(xn+1) = (1/Ts)[ (11/6)xn+1 − 3xn + (3/2)xn−1 − (1/3)xn−2 ]   (2.58)
Fourth-Order Gear | f(xn+1) = (1/Ts)[ (25/12)xn+1 − 4xn + 3xn−1 − (4/3)xn−2 + (1/4)xn−3 ]   (2.59)
Fifth-Order Gear | f(xn+1) = (1/Ts)[ (137/60)xn+1 − 5xn + 5xn−1 − (10/3)xn−2 + (5/4)xn−3 − (1/5)xn−4 ]   (2.60)
Fourth-Order Central Difference | f(xn) = (1/(12Ts))[ −xn+2 + 8xn+1 − 8xn−1 + xn−2 ]   (2.61)
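The accuracy ordering of Table 2-2 is easy to demonstrate on a known signal; the sketch below (the test signal sin t is an illustrative choice, not the book CD script) compares the third-order Gear and fourth-order central-difference derivative estimates:

```python
import math

Ts = 0.01
x = lambda t: math.sin(t)          # test signal; exact derivative is cos(t)
t0 = 1.0
s = lambda k: x(t0 + k*Ts)         # sample x(t0 + k·Ts)

# Third-order Gear (2.58), derivative at the newest sample t0:
d_gear3 = ((11/6)*s(0) - 3*s(-1) + (3/2)*s(-2) - (1/3)*s(-3)) / Ts
# Fourth-order central difference (2.61), derivative at t0:
d_cd4 = (-s(2) + 8*s(1) - 8*s(-1) + s(-2)) / (12*Ts)

exact = math.cos(t0)
err_gear3 = abs(d_gear3 - exact)   # O(Ts^3) truncation error
err_cd4 = abs(d_cd4 - exact)       # O(Ts^4) truncation error, smaller still
print(err_gear3, err_cd4)
```
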

2.4 HYBRID PLLS INCORPORATING SAMPLE-AND-HOLDS

Hybrid PLLs incorporate discrete-time sampling (zeroth-order sample-and-hold) along with continuous-time elements. This PLL variety is discussed in Chapter 7. Although the associated loop filter transfer functions are often described in terms of Laplace transforms, a truly accurate representation of these systems is not possible without using either the Poisson sum formula (2.154) or z-transforms. Other Laplace transform representations are only approximations.

Kd | Phase detector gain | V/rad
Kv | VCO tuning sensitivity | rad/sec/V
K | Lumped gain parameter K = KdKvTsN⁻¹ | —
N | Feedback divider ratio | —
Ts | Sampling period = 1/Fs | sec
ωn | Natural frequency | rad/sec
ωs | 2πFs | rad/sec
ζ | Damping factor | —

2.4.1 Ideal Type-1 with Zero-Order Sample-and-Hold

Refer to Section 7.6.1 for complete details.

Laplace Transform Representation for Open-Loop Gain
GOL(s) = (Kd/N) · [ (1 − e^(−sTs))/(sTs) ] · (Kv/s)   (2.62)

z-Transform Representation for Open-Loop Gain
GOL(z) = (KdKv/N)(1 − z⁻¹) · Ts z/(z − 1)² = K/(z − 1);  K ≡ (KdKv/N)Ts   (2.63)

GOL(z) Unity-Gain Frequency
ωu = (2/Ts) sin⁻¹(K/2) rad/sec   (2.64)

GOL(z) Gain Margin
GM = −20 log10(K/2) = 20 log10[ ωs/(πωn) ] dB   (2.65)

GOL(z) Phase Margin
φM = π/2 − sin⁻¹(K/2) = cos⁻¹( πωn/ωs ) rad   (2.66)

H1(z) Transfer Function
H1(z) = K/(z + K − 1)   (2.67)

H2(z) Transfer Function
H2(z) = (z − 1)/(z + K − 1)   (2.68)

Transient Response ∆F → θ(t)
θo(n) = (2π∆F Ts/K)[ 1 − (1 − K)^n ] rad for n ≥ 0   (2.69)
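Equation (2.69) can be reproduced by simulating H1(z) = K/(z + K − 1) directly against a phase ramp (a frequency step ∆F) and differencing input and output phase; the K and ramp values below are illustrative:

```python
K = 0.1                      # loop gain K = KdKvTs/N (illustrative)
w = 0.01                     # 2π·ΔF·Ts, phase ramp per sample, rad

# H1(z) = K/(z+K−1)  →  θo(n) = (1−K)·θo(n−1) + K·θin(n−1)
theta_o, err = 0.0, []
for n in range(1, 200):
    theta_o = (1 - K)*theta_o + K*w*(n - 1)
    err.append(w*n - theta_o)          # θin − θo at sample n

# Closed form (2.69) for the same transient:
pred = [(w/K)*(1 - (1 - K)**n) for n in range(1, 200)]
max_dev = max(abs(e - p) for e, p in zip(err, pred))
print(max_dev)   # agreement to machine precision
```

The quantity settles to 2π∆FTs/K, the familiar static phase error of a type-1 loop for a frequency step.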

2.4.2 Ideal Type-2 with Zero-Order Sample-and-Hold

Refer to Section 7.6.2 for complete details.

Laplace Transform Representation for Open-Loop Gain GOL(s)
GOL(s) = (Kd/N) · [ (1 − e^(−sTs))/(sTs) ] · [ (1 + sτ2)/(sτ1) ] · (Kv/s)   (2.70)

Phase Argument of GOL(s)
φ(ω) = −π − ωTs/2 + tan⁻¹( 2ζω/ωn ) rad   (2.71)

Open-Loop Gain GOL(z)
GOL(z) = (ωnTs)² · [ (1/2 + τ2/Ts)z + (1/2 − τ2/Ts) ] / (z − 1)²   (2.72)

GOL(z) Gain Margin
GM = −20 log10( ζωnTs ) dB, defined for ωnTs < 4ζ   (2.73)

GOL(z) Unity-Gain Frequency
ωu = (1/Ts) cos⁻¹[ 1 + abc − sqrt( (abc)² + c ) ]   (2.74)
a = 1/2 + 2ζ/(ωnTs);  b = 1/2 − 2ζ/(ωnTs);  c = (ωnTs)⁴/4   (2.75)

GOL(z) Phase Margin
φM = −ωuTs + tan⁻¹[ a sin(ωuTs), a cos(ωuTs) + b ] rad, defined for ωnTs < 4ζ   (2.76)

Transient Response ∆F → θ(t)
θo(z) = 2π∆F Ts · z / { z² + z[ a(ωnTs)² − 2 ] + 1 + b(ωnTs)² } rad   (2.77)
a = 1/2 + 2ζ/(ωnTs);  b = 1/2 − 2ζ/(ωnTs)   (2.78)
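Equations (2.74)-(2.76) can be cross-checked by substituting ωu back into |GOL(e^{jωTs})| from (2.72), which must return unity gain by construction; the design point below is illustrative:

```python
import cmath, math

zeta, wnTs = 0.707, 0.1        # illustrative design point (ωnTs < 4ζ)
a = 0.5 + 2*zeta/wnTs
b = 0.5 - 2*zeta/wnTs
c = wnTs**4 / 4

# Unity-gain frequency from (2.74):
wuTs = math.acos(1 + a*b*c - math.sqrt((a*b*c)**2 + c))

# Substitute back into |GOL| from (2.72):
z = cmath.exp(1j*wuTs)
mag = wnTs**2 * abs(a*z + b) / abs(z - 1)**2
print(mag)                     # → 1.0 (unity gain at ωu)

# Phase margin from (2.76):
pm = -wuTs + math.atan2(a*math.sin(wuTs), a*math.cos(wuTs) + b)
print(math.degrees(pm))        # positive margin for a stable design
```
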

2.5 COMMUNICATION THEORY

See Chapter 5 for a more extensive discussion of the material presented in this section.

Complementary Error Function
erfc(x) = (2/√π) ∫ₓ^∞ exp(−t²) dt = 2Q( x√2 )   (2.79)

Q-Function
Q(x) = ∫ₓ^∞ (1/√(2π)) exp(−u²/2) du   (2.80)
Q(x) = (1/2) erfc( x/√2 )   (2.81)

Approximation for Q-Function
Q(x) = (1/√(2π)) exp(−x²/2) Σ_{k=1..5} bk [ 1/(1 + px) ]^k  for x ≥ 0
Q(x) = 1 − (1/√(2π)) exp(−x²/2) Σ_{k=1..5} bk [ 1/(1 + p|x|) ]^k  for x < 0   (2.82)
p = 0.2316419
b1 = 0.319381530;  b2 = −0.356563782;  b3 = 1.781477937;  b4 = −1.821255978;  b5 = 1.330274429   (2.83)

Bounds for Q-Function
[1/(x√(2π))] exp(−x²/2)(1 − 1/x²) ≤ Q(x) ≤ [1/(x√(2π))] exp(−x²/2)   (2.84)

Bit Error Rate for BPSK with Static Phase Error
P_BPSK_b( Eb/No | θ ) = (1/2) erfc[ sqrt(Eb/No) cos(θ) ]   (2.85)

Bit Error Rate for QPSK with Static Phase Error
P_QPSK_b( Eb/No | θ ) = (1/4) erfc[ sqrt(Eb/No)( cos θ − sin θ ) ] + (1/4) erfc[ sqrt(Eb/No)( cos θ + sin θ ) ]   (2.86)

Square-QAM Symbol Error Rate
Psym(γb, M, k) = 2[(M − 1)/M] erfc[ sqrt( 3kγb/(2(M² − 1)) ) ] · { 1 − (1/2)[(M − 1)/M] erfc[ sqrt( 3kγb/(2(M² − 1)) ) ] }   (2.87)
γb = SNR per bit, Eb/No
k = number of bits per symbol
M = number of signal levels on each I- and Q-rail

Single-Sideband Image Rejection with Gain and Phase Imbalance
SSBdB(g, θ) = 10 log10[ (1 + g² + 2g cos θ) / (1 + g² − 2g cos θ) ]   (2.88)
Phase imbalance = θ. Gain imbalance = ∆GdB with g = 10^(0.05 ∆GdB).

Error Vector Magnitude Due to Gain and Phase Imbalance
I and Q channel signals represented with gain and phase imbalances δ and θ, respectively, as shown in Figure 2-33 and as given by:
LOI(t) = 2(1 + δ/2) cos( ωo t + θ/2 )
LOQ(t) = −2(1 − δ/2) sin( ωo t − θ/2 )   (2.89)
ρ = (1 + δ/2)(1 − δ/2)⁻¹  with  δ = (4ρ − 2 − 2ρ²)/(1 − ρ²) = 2(ρ − 1)/(ρ + 1)   (2.90)
EVM %rms = sqrt[ 2 − 2 cos(θ/2) + (δ/2)² ] × 100%   (2.91)
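A sketch of the five-term approximation (2.82)-(2.83), checked against the exact Q-function via (2.81); the approximation's absolute error is below about 7.5·10⁻⁸:

```python
import math

P = 0.2316419
B = [0.319381530, -0.356563782, 1.781477937, -1.821255978, 1.330274429]

def q_approx(x):
    """Five-term rational approximation (2.82)-(2.83)."""
    t = 1.0 / (1.0 + P*abs(x))
    q = math.exp(-x*x/2) / math.sqrt(2*math.pi) * sum(b*t**(k+1) for k, b in enumerate(B))
    return q if x >= 0 else 1.0 - q

def q_exact(x):
    return 0.5 * math.erfc(x / math.sqrt(2))   # (2.81)

errs = [abs(q_approx(x) - q_exact(x)) for x in (0.5, 1.0, 2.0, 4.0)]
print(max(errs))   # well below 1e-7
```
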

Figure 2-33 IQ gain and phase mismatch contribution to EVM9 (EVM in %rms versus phase mismatch from 0 to 5 degrees, with curves for gain imbalance of 0.01, 0.1, 0.25, 0.50, 0.75, and 1.0 dB).

9 Book CD:\Ch5\u13149_iq_imbal_evm.m.
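Equations (2.88) and (2.91) are straightforward to evaluate; the helper below converts a gain imbalance in dB to the linear quantities g and δ (the δ conversion follows (2.90)), and the numbers used are illustrative:

```python
import math

def ssb_rejection_db(dG_dB, theta_deg):
    """Image rejection (2.88) for gain imbalance ΔG (dB) and phase imbalance θ."""
    g = 10 ** (0.05 * dG_dB)
    th = math.radians(theta_deg)
    return 10 * math.log10((1 + g*g + 2*g*math.cos(th)) /
                           (1 + g*g - 2*g*math.cos(th)))

def evm_pct_rms(dG_dB, theta_deg):
    """EVM (2.91); δ = 2(ρ−1)/(ρ+1) per (2.90), with ρ = 10^(ΔG/20)."""
    rho = 10 ** (dG_dB / 20)
    delta = 2 * (rho - 1) / (rho + 1)
    th = math.radians(theta_deg)
    return 100 * math.sqrt(2 - 2*math.cos(th/2) + (delta/2)**2)

ssb = ssb_rejection_db(0.1, 1.0)   # ≈ 40 dB carrier-to-image ratio
evm = evm_pct_rms(0.5, 2.0)        # ≈ 3.4 %rms, consistent with Figure 2-33
print(ssb, evm)
```
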

2.5.1 Graphical Bit Error Rate and Symbol Error Rate Results

Graphical BER and SER results from Chapter 5 follow. In all cases, the Tikhonov probability density (2.146) is used for the local oscillator phase distribution, where σφ represents the standard deviation of the local oscillator's phase noise in degrees rms.

2.5.2 BPSK Bit Error Rate

Figure 2-34 BPSK uncoded bit error rate with noisy local oscillator10 (bit error rate versus Eb/No from 3 to 15 dB; curves for σφ = 0, 10, 12.5, 15, 17.5, and 20 degrees rms).

10 Book CD:\Ch5\u13154_bpsk_ber.m.
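The curves of Figure 2-34 can be approximated by averaging the static-phase-error BER (2.85) over the Tikhonov density (2.146); the sketch below uses a simple numerical quadrature (not the book CD script), with illustrative Eb/No and σφ:

```python
import math

def bpsk_ber(ebno_db, sigma_deg, npts=2001):
    """Average the static-phase-error BER (2.85) over the Tikhonov density (2.146)."""
    ebno = 10 ** (ebno_db / 10)
    if sigma_deg == 0:
        return 0.5 * math.erfc(math.sqrt(ebno))
    s2 = math.radians(sigma_deg) ** 2
    num = den = 0.0
    for i in range(npts):                       # sum over φ in (−π, π)
        phi = -math.pi + 2*math.pi*i/(npts - 1)
        w = math.exp((math.cos(phi) - 1) / s2)  # unnormalized Tikhonov weight
        num += w * 0.5 * math.erfc(math.sqrt(ebno) * math.cos(phi))
        den += w
    return num / den

clean = bpsk_ber(8.0, 0)
noisy = bpsk_ber(8.0, 10.0)
print(clean, noisy)    # phase noise degrades the BER: noisy > clean
```
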

2.5.3 QPSK Bit Error Rate

Figure 2-35 QPSK uncoded bit error rate with noisy local oscillator11 (bit error rate versus Eb/No from 4 to 15 dB; curves for σφ = 0, 3, 5, 7, 9, 11, and 13 degrees rms).

11 Book CD:\Ch5\u13155_qpsk_ber.m.

2.5.4 16-QAM Symbol Error Rate

Figure 2-36 16-QAM uncoded symbol error rate with noisy local oscillator12 (SER versus Eb/No from 10 to 25 dB; curves for no phase noise and σφ = 1, 2, 3, and 4 degrees rms). Circled datapoints are from (2.87).

12 Book CD:\Ch5\u13159_qam_ser.m. See Section 5.5.3 for additional information. Circled datapoints are based on Proakis [3] page 282, equation (4.2.144), included in this text as (2.87).
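The circled-datapoint formula (2.87) in code form; the 16-QAM parameters (k = 4 bits/symbol, M = 4 levels per rail) and the Eb/No values are illustrative:

```python
import math

def qam_ser(ebno_db, bits_per_sym, levels):
    """Square-QAM symbol error rate (2.87); γb in dB, k bits/symbol, M levels per rail."""
    gb = 10 ** (ebno_db / 10)
    arg = math.sqrt(3 * bits_per_sym * gb / (2 * (levels**2 - 1)))
    e = (levels - 1) / levels * math.erfc(arg)
    return 2 * e * (1 - e / 2)

ser14 = qam_ser(14.0, 4, 4)    # 16-QAM in the waterfall region
ser18 = qam_ser(18.0, 4, 4)
print(ser14, ser18)            # SER falls steeply with Eb/No
```
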

2.5.5 64-QAM Symbol Error Rate

Figure 2-37 64-QAM uncoded symbol error rate with noisy local oscillator13 (SER versus Eb/No from 15 to 25 dB; curves for no phase noise and σφ = 0.5, 1, 1.5, and 2 degrees rms). Circled datapoints are from (2.87).

13 Book CD:\Ch5\u13159_qam_ser.m. See Section 5.5.3 for additional information. Circled datapoints are based on Proakis [3] page 282, equation (4.2.144), included in this text as (2.87).

2.5.6 256-QAM Symbol Error Rate

Figure 2-38 256-QAM uncoded symbol error rate with noisy local oscillator14 (SER versus Eb/No from 16 to 25 dB; curves for no phase noise and σφ = 0.25, 0.5, 0.75, 1.0, and 1.25 degrees rms). Circled datapoints are from (2.87).

14 Book CD:\Ch5\u13159_qam_ser.m. See Section 5.5.3 for additional information. Circled datapoints are based on Proakis [3] page 282, equation (4.2.144), included in this text as (2.87).

2.5.7 8-PSK Symbol Error Rate

Figure 2-39 Uncoded 8-PSK symbol error rate15 (SER versus Eb/No from 5 to 30 dB; curves for no phase noise and σφ = 2, 3, 4, 5, 6, and 7 degrees rms).

15 Book CD:\Ch5\u13170_mpsk_ber.m.

2.5.8 16-PSK Symbol Error Rate

Figure 2-40 Uncoded 16-PSK symbol error rate16 (SER versus Eb/No from 10 to 28 dB; curves for no phase noise and σφ = 1, 1.5, 2, 2.5, and 3 degrees rms).

16 Book CD:\Ch5\u13170_mpsk_ber.m.

2.6 SPECTRAL RELATIONSHIPS

Discrete Sideband Spur Level
Lside ≈ 20 log10( ∆θ/2 ) dBc   (2.92)
where ∆θ is the maximum sinusoidal phase modulation in rad. See Section 3.1.

FM/PM Discrete Sideband Composition
exp[ ±jβ sin(θ) ] = Σ_{m=−∞..+∞} Jm(β) exp[ ±jmθ ]   (2.93)
β = modulation index; Jm( ) = mth-order Bessel function. See Section 3.1.

Power Spectral Density Formal Definition (Continuous-Time Signals)
Sx(f) = lim_{Tm→∞} (1/Tm) E[ |X_Tm(f)|² ]   (2.94)
where the finite-time Fourier transform of x(t) is given by
X_Tm(f) = ∫_{−Tm/2}^{+Tm/2} x(v) e^(−j2πfv) dv   (2.95)
See Section 4.3.

Fractional-N Shaped Phase Noise Spectrum for All-Zero ∆-Σ Modulator
L(f) = 10 log10{ [ (2π)²/(12 Fref) ] · [ 2 sin( πf/Fref ) ]^(2(m−1)) } dBc/Hz   (2.96)
m = order of modulator; Fref = sampling rate, Hz. See Section 8.3.3.

Lorentzian Power Spectral Density and Corresponding Autocorrelation Function
Sx(f) = α² / [ α² + (2πf)² ]   (2.97)
Rx(τ) = (α/2) exp( −α|τ| )   (2.98)
See Section 4A.6.

Integrated Lorentzian PSD
σφ² = ∫_{−∞}^{∞} Lo / [ 1 + (f/fc)² ] df = π Lo fc rad²   (2.99)
See Figure 2-41 and Figure 2-42, and Section 5.2.

Direct Digital Synthesizer Ideal Output C/N (Sine Wave Output)
C/N = 10 log10( 3 Fclk 2^(2D−1) ) dBc/Hz   (2.100)
D = number of DAC bits; Fclk = clock rate, Hz. See Appendix 4D.
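A quick use of (2.99), for an assumed Lo = −80 dBc/Hz and fc = 100 kHz (values consistent with the Figure 2-41 curves):

```python
import math

def lorentzian_sigma_phi_deg(Lo_dBcHz, fc_Hz):
    """Integrated phase noise (2.99): σφ = sqrt(π·Lo·fc), converted to degrees rms."""
    Lo = 10 ** (Lo_dBcHz / 10)          # dBc/Hz → linear
    return math.degrees(math.sqrt(math.pi * Lo * fc_Hz))

sig = lorentzian_sigma_phi_deg(-80.0, 1e5)
print(sig)   # ≈ 3.2 degrees rms
```
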

Figure 2-41 Constant integrated phase noise17 lines for Lorentzian spectrum parameter choices, using (5.2) (required Lo in dBc/Hz versus corner frequency Fc from 10³ to 10⁶ Hz, for total integrated phase noise of 0.5, 1.0, 1.5, 2, 3, and 5 degrees rms).

Figure 2-42 Alternative presentation of Lorentzian spectrum details18 similar to Figure 2-41 (integrated phase in degrees rms versus Fc from 10³ to 10⁶ Hz, for Lo = −60 to −100 dBc/Hz in 5-dB steps).

17 Book CD:\Ch5\u13150_total_noise.m.
18 Ibid.

2.7 TRIGONOMETRY

Angle Sum and Difference Formulas
sin(A + B) = sin(A)cos(B) + cos(A)sin(B)   (2.101)
sin(A − B) = sin(A)cos(B) − cos(A)sin(B)   (2.102)
cos(A + B) = cos(A)cos(B) − sin(A)sin(B)   (2.103)
cos(A − B) = cos(A)cos(B) + sin(A)sin(B)   (2.104)

Tangent of Difference
tan(A − B) = [ tan(A) − tan(B) ] / [ 1 + tan(A)tan(B) ]   (2.105)

Sine and Cosine Sums
sin(A) + sin(B) = 2 sin[(A + B)/2] cos[(A − B)/2]   (2.106)
sin(A) − sin(B) = 2 sin[(A − B)/2] cos[(A + B)/2]   (2.107)
cos(A) + cos(B) = 2 cos[(A + B)/2] cos[(A − B)/2]   (2.108)
cos(A) − cos(B) = −2 sin[(A + B)/2] sin[(A − B)/2]   (2.109)

Half-Angle Formulas
cos(A/2) = ±sqrt[ (1 + cos A)/2 ]   (2.110)
sin(A/2) = ±sqrt[ (1 − cos A)/2 ]   (2.111)
tan(A/2) = [1 − cos(A)]/sin(A) = ±sqrt[ (1 − cos A)/(1 + cos A) ]   (2.112)

Sine and Cosine Products
cos(A)cos(B) = [ cos(A − B) + cos(A + B) ]/2   (2.113)
sin(A)sin(B) = [ cos(A − B) − cos(A + B) ]/2   (2.114)
sin(A)cos(B) = [ sin(A − B) + sin(A + B) ]/2   (2.115)

Phase Difference
∠(I2, Q2) − ∠(I1, Q1) = tan⁻¹[ Q2I1 − Q1I2, I1I2 + Q1Q2 ]   (2.116)
for complex samples (I1, Q1) and (I2, Q2). The arctangent has the form tan⁻¹(y, x).

Law of Cosines
a² = c² + b² − 2bc cos(A)   (2.117)
b² = a² + c² − 2ca cos(B)   (2.118)
c² = a² + b² − 2ab cos(C)   (2.119)
Angle A is the angle of the triangle opposite side a, etc.

Law of Sines
a/sin(A) = b/sin(B) = c/sin(C)   (2.120)
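Equation (2.116) in code form, using the two-argument arctangent:

```python
import math

def phase_diff(i1, q1, i2, q2):
    """Phase difference (2.116): argument of (I2 + jQ2) relative to (I1 + jQ1)."""
    return math.atan2(q2*i1 - q1*i2, i1*i2 + q1*q2)

# Two unit-magnitude samples 0.5 rad apart:
a, b = 0.3, 0.8
d = phase_diff(math.cos(a), math.sin(a), math.cos(b), math.sin(b))
print(d)   # → 0.5
```
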

Angle A is the angle of the triangle opposite side a, etc.

2.8 LAPLACE TRANSFORMS

Refer to Section 6.10 for additional details.

Table 2-3 Laplace Transform Fundamentals

Laplace Transform: Let f(t) be a function that is piecewise continuous on every interval for t ≥ 0 and satisfies
|f(t)| ≤ M e^(αt)   (2.121)
for some constants α and M. Then the Laplace transform of f(t) exists for all s > α and is given by
F(s) = ∫₀^∞ f(t) e^(−st) dt   (2.122)

Inverse Laplace Transform (Bromwich Inversion Integral): Given a Laplace transform F(s) of a suitable function f(t), the function f(t) can be found from the contour integral
f(t) = (1/j2π) ∫_{c−j∞}^{c+j∞} F(s) e^(st) ds   (2.123)
for t ≥ 0, in which the constant c is chosen such that all poles of F(s) have real parts less than c.

Time Derivative Laplace Representation
dⁿx/dtⁿ ⇒ sⁿX(s) − s^(n−1) x(0⁺) − ... − d^(n−1)x/dt^(n−1) |_{t=0⁺}   (2.124)

Initial Value Theorem
lim_{t→0⁺} f(t) = lim_{s→∞} sF(s)   (2.125)

Final Value Theorem
lim_{t→∞} f(t) = lim_{s→0} sF(s)   (2.126)

Integration
L{ ∫₀ᵗ f(ξ) dξ } = F(s)/s + f^(−1)(0⁺)/s   (2.127)

Multiply by Time
L{ tⁿ f(t) } = (−1)ⁿ dⁿF(s)/dsⁿ   (2.128)

Table 2-4 Common Laplace Transforms

F(s) | f(t)
s⁻¹ | u(t), unit-step function
s⁻² | t
1/sⁿ | t^(n−1)/(n − 1)!
s^(−3/2) | 2 sqrt(t/π)
1/(s − a) | exp(at)
1/(s − a)² | t exp(at)
1/(s − a)ⁿ | t^(n−1) exp(at)/(n − 1)!
1/[(s − a)(s − b)] | [ exp(at) − exp(bt) ]/(a − b)
1/(s² + a²) | (1/a) sin(at)
s/(s² + a²) | cos(at)
1/[(s − a)² + b²] | (1/b) exp(at) sin(bt)
(s − a)/[(s − a)² + b²] | exp(at) cos(bt)
1/sqrt(s² + a²) | J0(at)

2.9 Z-TRANSFORMS

See Section 7.5 for more details.

Forward z-Transform
F(z) = Σ_{n=0..∞} f(nTs) z⁻ⁿ   (2.129)

Inverse z-Transform
f(kTs) = (1/j2π) ∮ F(z) z^(k−1) dz for k ≥ 0   (2.130)

Table 2-5 Table of Commonly Used z-Transforms

Laplace Transform | Time Function f(t), t ≥ 0 | z-Transform F(z)
1 | δ(t) | 1
exp(−kTs s) | δ(t − kTs) | z⁻ᵏ
1/s | u(t) | z/(z − 1)
1/s² | t | Ts z/(z − 1)²
2/s³ | t² | Ts² z(z + 1)/(z − 1)³
(k − 1)!/sᵏ | t^(k−1) | lim_{a→0} (−1)^(k−1) ∂^(k−1)/∂a^(k−1) [ z/(z − exp(−aTs)) ]
1/(s + a) | exp(−at) | z/[ z − exp(−aTs) ]
1/(s + a)² | t exp(−at) | Ts z exp(−aTs)/[ z − exp(−aTs) ]²
a/[s(s + a)] | 1 − exp(−at) | z[1 − exp(−aTs)] / { (z − 1)[z − exp(−aTs)] }
a/[s²(s + a)] | t − [1 − exp(−at)]/a | Ts z/(z − 1)² − z[1 − exp(−aTs)] / { a(z − 1)[z − exp(−aTs)] }
a/[s³(s + a)] | t²/2 − t/a + 1/a² − exp(−at)/a² | Ts² z/(z − 1)³ + Ts z(aTs − 2)/[ 2a(z − 1)² ] + z/[ a²(z − 1) ] − z/{ a²[z − exp(−aTs)] }
1/[(s + a)(s + b)] | [ exp(−at) − exp(−bt) ]/(b − a) | [1/(b − a)] { z/[z − exp(−aTs)] − z/[z − exp(−bTs)] }
a²/[s²(s + a)²] | t − 2/a + (t + 2/a) exp(−at) | Ts z/(z − 1)² − (2/a)·z/(z − 1) + Ts z exp(−aTs)/[ z − exp(−aTs) ]² + (2/a)·z/[ z − exp(−aTs) ]

Multiplication by n
Z{ n f(nTs) } = −z dF(z)/dz   (2.131)

Initial Value Theorem
f(0) = lim_{z→∞} F(z)   (2.132)

Final Value Theorem
lim_{n→∞} f(nTs) = lim_{z→1} (z − 1)F(z), if f(∞) exists   (2.133)

Time Reversal
Z{ f(−nTs) } = F(z⁻¹)   (2.134)

Parseval's Theorem
Σ_{n=−∞..∞} |f(nTs)|² = (1/ωs) ∫_{−ωs/2}^{+ωs/2} |F[exp(jωTs)]|² dω   (2.135)

where Fs = 1/Ts and ωs = 2πFs, and where f(nTs) ⇔ F(z) constitute a z-transform pair.

2.10 PROBABILITY AND STOCHASTIC PROCESSES

Uniform Density Function
f(x) = 1/L for 0 ≤ x ≤ L; 0 otherwise   (2.136)
Mean = L/2, variance = L²/12. See Section 4A.2 for additional information.

Gaussian Probability Density
f(x) = [ 1/(σ√(2π)) ] exp( −x²/2σ² )   (2.137)
F(x0) = ∫_{−∞}^{x0} f(x) dx = (1/√π) ∫_{−∞}^{x0/(σ√2)} exp(−u²) du   (2.138)
      = (1/2)[ 1 + erf( x0/(σ√2) ) ]   (2.139)
where σ² is the variance.

Binomial Distribution
f(k) = C(n, k) pᵏ q^(n−k)   (2.140)
where k and n are discrete integers, p = probability of a given event, and q = 1 − p.

De Moivre–Laplace Limit Theorem (of the Binomial Distribution)
f(x) → [ 1/sqrt(2πnpq) ] exp( −z²/2 ) with z = (x − np)/sqrt(npq)   (2.141)
for npq >> 1; mean = np, variance = npq, p = probability of a given event and q = 1 − p.

Rayleigh Probability Density
f(r) = (r/σ²) exp( −r²/2σ² ) with r ≥ 0   (2.142)
F(r0) = ∫₀^{r0} (r/σ²) exp(−r²/2σ²) dr = 1 − exp[ −(1/2)(r0/σ)² ]   (2.143)
Mean = σ sqrt(π/2);  variance = [(4 − π)/2] σ²   (2.144)

Gaussian Moments
E{xⁿ} = 1 for n = 0;  0 for n odd;  1·3·...·(n − 1)·σⁿ for n even   (2.145)
assuming a mean-zero random variable.

Tikhonov Density Function
pφ(φ) = exp[ cos(φ)/σφ² ] / [ 2π I0(σφ⁻²) ] ≈ exp[ (cos(φ) − 1)/σφ² ] / [ σφ √(2π) ]   (2.146)
See Section 5.5 for additional details.

Characteristic Function
Φx(ω) = E{ exp(jωx) } = ∫_{−∞}^{+∞} f(x) exp(jωx) dx   (2.147)
Φ_Gaussian(ω) = exp( jωµ − σ²ω²/2 ) for mean µ and variance σ²   (2.148)
Φ_Uniform(ω) = sin(πfL)/(πfL) for uniform distribution [−L/2, L/2]   (2.149)

Power Spectral Density for Discrete-Time Sampled Signals
Sx(f) = lim_{M→∞} E{ [ 1/((2M + 1)Ts) ] | Ts Σ_{n=−M..M} yn e^(−j2πf nTs) |² }   (2.150)
Sampling rate Fs = 1/Ts. See Section 4.4 for additional details.

Power Spectral Density for Discrete-Time Sampled Signal Followed by Sample-Hold
Sxc(f) = 2Ts [ sin(πfTs)/(πfTs) ]² · [ Ry(0)/2 + Σ_{m=1..M} Ry(m) cos(2πf mTs) ]   (2.151)
Sampling rate Fs = 1/Ts. Autocorrelation function of the discrete-time sampled signal y(nTs) ⇒ Ry(m). See Section 4.4 for additional details.

Correlation Function from Power Spectral Density with Finite Observation Time Tm
σx²(τ, Tm) = 2 ∫₀^{+∞} Sx(f) [ 1 − sin²(πfTm)/(πfTm)² ] df   (2.152)
See Section 4.6.2.

Sample-Difference Correlation Function and Power Spectral Density
σ∆θ² = E{ [ θ(t + Tsym) − θ(t) ]² } = 2[ Rθ(0) − Rθ(Tsym) ]   (2.153)
     = 4 ∫₀^∞ Sθ(f) sin²(πf Tsym) df

See Section 4.6.2.

2.11 NUMERICAL SIMULATION

See Section 2.3.1, Section 6.10, and Appendix 6C for additional information about numerical integration.

Poisson Sum Formula
Σ_k Ts h(kTs) exp( −j2πf kTs ) = Σ_m H( f ± m/Ts )   (2.154)
See Section 3.15.

Box-Muller Method for Generating Gaussian Random Values
x = sqrt( −2 log_e(u1) ) · exp( j2πu2 )   (2.155)
where x is a complex Gaussian noise random value, u1 and u2 are statistically independent uniformly distributed (0, 1] random variables, and E{|x|²} = 2. See Appendix 4A for additional information.

Simulating AWGN
noise_k = σN randn_k   (2.156)
where randn_k is a Gaussian random value with unit variance, the simulation sampling rate is Fs = 1/Ts, and the two-sided noise power spectral density is No/2 V²/Hz as shown in Figure 2-43. See Appendix 4B for complete details.
R(τ) = ∫_{−∞}^{+∞} (No/2) rect(f/Fs) e^(j2πfτ) df   (2.157)
R(τ) = (No/2) ∫_{−Fs/2}^{Fs/2} e^(j2πfτ) df = (No Fs/2) · sin(πFsτ)/(πFsτ);  σN² = R(0) = No Fs/2   (2.158)

FFT-Based Approximate Laplace Transform Inversion
f(t) ≅ ( ∆ω e^(ct)/π ) Σ_{k=−M..M−1} F( c + jk∆ω ) e^(jk∆ω t)   (2.159)
See Section 6.10.7.

Raised-Cosine Pulse Shape
pRC(t) = [ sin(πt/Ts)/(πt/Ts) ] · cos(πβt/Ts) / [ 1 − (2βt/Ts)² ]   (2.160)
See Section 3.12.
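The Box-Muller recipe (2.155) in code form, with a seeded check that E{|x|²} = 2:

```python
import math, random

def box_muller(rng):
    """Complex Gaussian sample (2.155): sqrt(−2 ln u1)·exp(j2πu2), E{|x|²} = 2."""
    u1, u2 = rng.random(), rng.random()
    r = math.sqrt(-2.0 * math.log(1.0 - u1))   # 1−u1 ∈ (0, 1] keeps the log finite
    return complex(r*math.cos(2*math.pi*u2), r*math.sin(2*math.pi*u2))

rng = random.Random(1234)
n = 200_000
power = sum(abs(box_muller(rng))**2 for _ in range(n)) / n
print(power)   # ≈ 2.0
```
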

Figure 2-43 AWGN passed through an ideal brick-wall lowpass filter (two-sided PSD of No/2 W/Hz across −Fs/2 to +Fs/2). See Appendix 4B for additional details.

Raised-Cosine Pulse Fourier Transform
PRC(f) = Ts for 0 ≤ |f| ≤ (1 − β)/(2Ts)
PRC(f) = (Ts/2) { 1 + cos[ (πTs/β)( |f| − (1 − β)/(2Ts) ) ] } for (1 − β)/(2Ts) < |f| ≤ (1 + β)/(2Ts)
PRC(f) = 0 for |f| > (1 + β)/(2Ts)   (2.161)
See Section 3.12.

Effective Number of (ADC/DAC) Bits
ENOB = ( SNDRdB − 1.76 )/6.02 bits   (2.162)
Calculated from the SNDR for a full-scale sine wave. See Section 4.2.2.1.

Table 2-619 Closed-Form Formulas for Creation of Random Sample Values with Specified PDF20 from Uniformly Distributed Variables

Name of Density | p(x) | F(λ) | u ⇒ λ
Rayleigh | p(x) = (x/σ²) exp(−x²/2σ²) | 1 − exp(−λ²/2σ²) for λ ≥ 0 | λ = sqrt[ −2σ² log_e(1 − u) ]
Exponential | p(x) = (α/2) exp(−α|x|) | (1/2) exp(αλ) for λ < 0; 1 − (1/2) exp(−αλ) for λ ≥ 0 | λ = (1/α) log_e(2u) for 0 ≤ u ≤ 0.50; λ = −(1/α) log_e[ 2(1 − u) ] for 0.50 < u ≤ 1
Cauchy | p(x) = (α/π)/(x² + α²) | 1/2 + (1/π) tan⁻¹(λ/α) | λ = α tan[ π(u − 1/2) ]

19 From Appendix 4A.
20 Examples in Book CD:\Ch4\u13149_pdfs.m.
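The Rayleigh row of Table 2-6 in code form, checked against the Rayleigh mean σ·sqrt(π/2) from (2.144):

```python
import math, random

def rayleigh(sigma, rng):
    """Inverse-CDF draw from Table 2-6: λ = sqrt(−2σ² ln(1 − u))."""
    return math.sqrt(-2.0 * sigma**2 * math.log(1.0 - rng.random()))

rng = random.Random(42)
n = 200_000
mean = sum(rayleigh(1.0, rng) for _ in range(n)) / n
print(mean, math.sqrt(math.pi/2))   # sample mean ≈ σ·sqrt(π/2) ≈ 1.2533
```
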

2.11.1 DSP Windows

See Figure 2-44 for symmetries involving an odd versus even number of samples for FFT operations, and Figure 2-45 for data-window symmetries. Additional discussion is available in Section 4.4.4.

Table 2-7 Detailed Comparison of Data Windows21

Window Type | −3 dB Width, Bins | Equivalent Noise Bandwidth, Bins | Coherent Gain (Gain @ DC) | Highest Sidelobe, dB | Stopband Rolloff, dB/Octave | −10 dB Bandwidth, Bins | −20 dB Bandwidth, Bins
Rectangular | 0.89 | 1.0 | 1.00 | −13.26 | −6 | 0.738 | 2.681
Bartlett | 1.28 | 1.33 | 0.50 | −26.52 | −12 | 1.118 | 1.482
Blackman | 1.68 | 1.73 | 0.42 | −58.1 | −18 | 1.465 | 1.990
Hanning | 1.44 | 1.50 | 0.50 | −32 | −18 | 2.51 | 3.299
Hamming | 1.30 | 1.36 | 0.54 | −43 | −6 | 2.292 | 3.065
Gaussian α = 2.5 | 1.37 | 1.45 | 0.50 | −43 | −6 | 2.44 | 3.36
Gaussian α = 3.0 | 1.60 | 1.70 | 0.42 | −57 | −6 | 2.90 | 4.08
Gaussian α = 3.5 | 1.85 | 1.98 | 0.36 | −71 | −6 | 3.38 | 4.78

Blackman Window
w(n) = 0.42 + 0.50 cos[ π(2n − 1 − N)/(N − 1) ] + 0.08 cos[ 2π(2n − 1 − N)/(N − 1) ] for 1 ≤ n ≤ N   (2.163)
See Section 4.4.4.3.

Hanning Window
w(n) = 1/2 − (1/2) cos[ 2π(n − 1)/N ] for 1 ≤ n ≤ N   (2.164)

Hamming Window
w(n) = 0.54 − 0.46 cos[ 2π(n − 1)/N ] for 1 ≤ n ≤ N   (2.165)

Gaussian (Periodic-Symmetry)
w(n) = exp{ −(1/2) [ α(2(n − 1) − N)/N ]² } for 1 ≤ n ≤ N   (2.166)
where α is the window shaping parameter. Data-symmetric windows of the same length are computed using a modified value of N equal to N − 1 in the equation.

Figure 2-44 Positive and negative FFT frequency symmetries for an even number of FFT points (bin frequencies f1, f2, f3, ..., f_{N/2}, then f_{N/2+1} = −f_{N/2}, ..., −f3, −f2 across the sample indices, with the line of symmetry at the start of the periodic continuation).

21 All bandwidths are two-sided (positive and negative frequencies). See Book CD:\Ch2\u14030_wndws.m.

Figure 2-45 Illustration of (a) data-symmetric versus (b) periodic-symmetric DSP window placement for an 8-point sample sequence (sample indices 000 through 111; in the periodic-symmetric case the sample on the line of periodic symmetry is part of the next sine wave period). From Section 4.4.4.
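Two of the Table 2-7 entries for the Hamming window — coherent gain 0.54 and equivalent noise bandwidth 1.36 bins — follow directly from (2.165):

```python
import math

def hamming(N):
    """Periodic-symmetry Hamming window (2.165)."""
    return [0.54 - 0.46*math.cos(2*math.pi*n/N) for n in range(N)]

N = 1024
w = hamming(N)
coherent_gain = sum(w)/N                      # Table 2-7 lists 0.54
enbw_bins = N*sum(v*v for v in w)/sum(w)**2   # Table 2-7 lists 1.36
print(coherent_gain, enbw_bins)
```
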

2.11.2 Polynomial-Based Interpolation

The following formulas pass precisely through each pair of data coordinates provided.22

Second-Order Polynomial Fit (3 Points)
f(x) = ax² + bx + c   (2.167)
Source data pairs: x = [−1 0 1], y = [y−1 y0 y1]; interpolate for 0 ≤ x ≤ 1   (2.168)
c = y0;  a = (y1 + y−1 − 2y0)/2;  b = (y1 − y−1)/2   (2.169)

Third-Order Polynomial Fit (4 Points)
f(x) = ax³ + bx² + cx + d   (2.170)
Source data pairs: x = [−1 0 1 2], y = [y−1 y0 y1 y2]; interpolate for 0 ≤ x ≤ 1
d = y0;  b = (y−1 + y1 − 2y0)/2;  c = −y−1/3 + y1 − y0/2 − y2/6;  a = (y1 − y−1)/2 − c   (2.171)

Fourth-Order Polynomial Fit (5 Points)
f(x) = ax⁴ + bx³ + cx² + dx + e
Source data pairs: x = [−2 −1 0 1 2], y = [y−2 y−1 y0 y1 y2]; interpolate for 0 ≤ x ≤ 1
[a; b; c; d; e] = (1/24) ·
[  1   −4    6   −4    1 ]
[ −2    4    0   −4    2 ]
[ −1   16  −30   16   −1 ]
[  2  −16    0   16   −2 ]
[  0    0   24    0    0 ]
× [ y−2; y−1; y0; y1; y2 ]   (2.172)

22 See Book CD:\Ch2\u14031_check_interp.m.
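The second-order fit (2.167)-(2.169) in code form; since a parabola is reproduced exactly, samples of y = t² make a convenient check:

```python
def quad_interp(ym1, y0, y1, x):
    """Second-order fit (2.167)-(2.169) through (−1, y−1), (0, y0), (1, y1)."""
    c = y0
    a = (y1 + ym1 - 2*y0) / 2
    b = (y1 - ym1) / 2
    return a*x*x + b*x + c

# Samples of y = t² at t = −1, 0, 1 reproduce the parabola exactly:
mid = quad_interp(1.0, 0.0, 1.0, 0.5)
print(mid)   # → 0.25
```
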

2.11.3 Raised-Cosine-Based Interpolation

Raised-Cosine Interpolation23
f(x) = Σ_{k=−N1..N2} yk pRC( β, x − k )   (2.173)
Source data pairs: x = [−N1, −N1 + 1, ..., N2 − 1, N2], y = [y−N1, y−N1+1, ..., yN2−1, yN2]
Interpolate for −L ≤ x ≤ L; L depends on N1, N2, and precision requirements. pRC(β, x) is the raised-cosine interpolation function given by (2.160).

2.11.4 Fourth-Order Runge-Kutta Numerical Integration

This integration method is used extensively in Appendix 6C. The fourth-order formula is provided here, where f(t, x) represents the first-order time-derivative of x(t) that is being integrated from time tn to tn+1.

k1 = f( tn, xn )
k2 = f( tn + h/2, xn + (h/2)k1 )
k3 = f( tn + h/2, xn + (h/2)k2 )
k4 = f( tn + h, xn + h·k3 )
xn+1 = xn + (h/6)( k1 + 2k2 + 2k3 + k4 )   (2.174)
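The fourth-order Runge-Kutta formula (2.174) in code form, checked on dx/dt = x (exact solution e^t); the step size h = 0.1 is illustrative:

```python
import math

def rk4_step(f, t, x, h):
    """One fourth-order Runge-Kutta step (2.174)."""
    k1 = f(t, x)
    k2 = f(t + h/2, x + h/2*k1)
    k3 = f(t + h/2, x + h/2*k2)
    k4 = f(t + h, x + h*k3)
    return x + h/6*(k1 + 2*k2 + 2*k3 + k4)

# Integrate dx/dt = x from t = 0 to t = 1; exact answer is e
x, h = 1.0, 0.1
for i in range(10):
    x = rk4_step(lambda t, x: x, i*h, x, h)
print(abs(x - math.e))   # global error on the order of 1e-6 for h = 0.1
```
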

2.12 CALCULUS

Binomial Theorem
(x + y)ⁿ = Σ_{k=0..n} C(n, k) x^(n−k) yᵏ   (2.175)
C(n, k) = n! / [ (n − k)! k! ]   (2.176)
See Table 2-8 for the first few binomial coefficients.

Integration by Parts
∫ u dv = uv − ∫ v du   (2.177)

l'Hôpital's Rule
For lim_{x→a} n(x) = 0 and lim_{x→a} d(x) = 0,
lim_{x→a} n(x)/d(x) = lim_{x→a} n′(x)/d′(x)   (2.178)

23 See Book CD:\Ch2\u14031_check_interp.m.

Taylor Series
f(x0 + δx) = f(x0) + (1/1!)(df/dx)|_{x0}·δx + (1/2!)(d²f/dx²)|_{x0}·(δx)² + (1/3!)(d³f/dx³)|_{x0}·(δx)³ + ...   (2.179)

Table 2-8 Pascal's Triangle of Binomial Coefficients

1
1 1
1 2 1
1 3 3 1
1 4 6 4 1
1 5 10 10 5 1
1 6 15 20 15 6 1
1 7 21 35 35 21 7 1
1 8 28 56 70 56 28 8 1
1 9 36 84 126 126 84 36 9 1

2.13 BUTTERWORTH LOWPASS FILTERS

Pole Locations
sk = −sin(θk) + j cos(θk)   (2.180)
θk = (2k − 1)π/(2N) for k = 1, 2, ..., N, where N = filter order.

Filter Attenuation
AdB(f) = 10 log10[ 1 + (f/fc)^(2N) ] dB   (2.181)
N = filter order; −3 dB corner frequency = fc.

Filter Group Delay
τ(ω) = −Σ_{k=1..N} σk / [ σk² + (ω − ωk)² ]   (2.182)
N = filter order; sk = σk + jωk given by (2.180).

Equivalent Noise Bandwidth
BN = fc · [ π/(2N) ] / sin[ π/(2N) ] Hz   (2.183)
N = filter order; −3 dB corner frequency fc.
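Equations (2.181) and (2.183) in code form; note that (2.183) reduces to the familiar fc·π/2 noise bandwidth for a single-pole (N = 1) filter:

```python
import math

def butter_atten_db(f, fc, N):
    """Butterworth attenuation (2.181), in dB."""
    return 10*math.log10(1 + (f/fc)**(2*N))

def butter_noise_bw(fc, N):
    """Equivalent noise bandwidth (2.183), in Hz."""
    x = math.pi/(2*N)
    return fc * x / math.sin(x)

a_corner = butter_atten_db(1.0, 1.0, 4)   # ≈ 3.01 dB at f = fc for any order
bn1 = butter_noise_bw(1.0, 1)             # π/2 ≈ 1.571, the classic RC result
bn4 = butter_noise_bw(1.0, 4)             # ≈ 1.026, approaching the ideal 1.0
print(a_corner, bn1, bn4)
```
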

N = filter order; –3 dB corner frequency fc 2.14 CHEBYSHEV LOWPASS FILTERS

Pole Locations (Normalized) 1 1  1    2k − 1   1   2k − 1  π  + j cosh  sinh −1    cos  π sk = − sinh  sinh −1    sin   ε   2 N   ε   2N  N N

ε = 10

0.1 Amax_ dB

−1

N = filter order; Amax_ dB = passband ripple, dB

(2.184)

Design Notes

70

Filter Attenuation AdB ( f n ) = 10 log10 1 + ε 2 CN2 ( f n )  dB

(2.185)

N = filter order, CN(.) = N th-order Chebyshev polynomial, fn = normalized frequency. Amax_dB = maximum passband loss, dB.

ε = 10

0.1 Amax_ dB

(2.186)

−1

 cos  N cos ( f n )  for f n ≤ 1    CN ( f n ) =  −1   cosh  N cosh ( f n )  for f n > 1 −1

Filter Group Delay

σk

N

τ (ω ) = −∑

(2.187)

(2.188)

σ + ( ω − ωk ) Filter order = N, sk = σk + jωk given by (2.184) Filter –3 dB (Normalized)Frequency k =1

2 k

2

1   1 f −3 _ dB = cosh  cosh −1   A 0.1    N  10 max_ dB − 1   N = filter order; Amax_dB = passband ripple, dB

(2.189)
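Equations (2.185)-(2.189) are mutually consistent: the −3 dB frequency from (2.189) must produce ≈3 dB in (2.185), and fn = 1 must produce exactly the passband ripple. A sketch for an illustrative N = 5, 0.5-dB-ripple design:

```python
import math

def cheb_atten_db(fn, N, ripple_db):
    """Chebyshev attenuation (2.185)-(2.187) at normalized frequency fn."""
    eps2 = 10**(0.1*ripple_db) - 1
    cn = math.cos(N*math.acos(fn)) if abs(fn) <= 1 else math.cosh(N*math.acosh(fn))
    return 10*math.log10(1 + eps2*cn*cn)

def cheb_f3db(N, ripple_db):
    """−3 dB normalized frequency (2.189); valid for ripple below 3 dB."""
    eps = math.sqrt(10**(0.1*ripple_db) - 1)
    return math.cosh(math.acosh(1/eps)/N)

f3 = cheb_f3db(5, 0.5)
a_at_f3 = cheb_atten_db(f3, 5, 0.5)   # ≈ 3.01 dB
a_at_1 = cheb_atten_db(1.0, 5, 0.5)   # exactly the 0.5-dB ripple
print(f3, a_at_f3, a_at_1)
```
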

2.15 CONSTANTS

π | Pi | 3.14159265358979323846
e | Natural base | 2.71828182845904523536
kB | Boltzmann's constant | 1.38e−23 MKS
To | Absolute temperature for noise figure measurements | 290 °K
kBTo | Noise per Hz (ambient) | 4.002e−21 W/Hz = −174 dBm/Hz
q | Electron charge | 1.602e−19 coul
µo | Free-space permeability | 4π·10⁻⁷ henry/meter
εo | Free-space permittivity | 8.85·10⁻¹² farad/meter
φ | Golden ratio | (1 + √5)/2 ≈ 1.618033989

References

[1] Gerald, C.F., and P.O. Wheatley, Applied Numerical Analysis, 3rd ed., Reading, MA: Addison-Wesley Publishing, 1984.
[2] Crawford, J.A., Frequency Synthesizer Design Handbook, Norwood, MA: Artech House, 1994.
[3] Proakis, J.G., Digital Communications, 2nd ed., New York: McGraw-Hill, 1989.

CHAPTER 3
Fundamental Limits

Time and frequency control system design often involves performance requirements that border on the theoretical limits dictated by the laws of physics. In order to have a physically realizable design that can be reliably reproduced in a high-volume production environment, it is important to know how much design margin exists between the theoretical limits and the performance requirements involved. This margin can be used to assess the level of difficulty and/or risk involved during the development process. This chapter provides a short compendium of these limits that frequently arise in the design of PLL-based systems.

3.1 PHASE MODULATION AND BESSEL FUNCTIONS

Bessel functions arise as a natural consequence of sinusoidal phase or frequency modulation. They are useful for predicting the spurious sideband tone levels that can appear at the PLL output when unwanted signals modulate the PLL’s voltage controlled oscillator (VCO). These unwanted signals can be due to signal coupling from other sources, harmonic content at the output of a digital phase detector, or other means. When the spurious sideband tones are created by phase modulation impressed on the VCO, the sideband tones must obey certain relationships with each other as discussed here. When these relationships are not satisfied, other types of PLL impairments (e.g., AM modulation) must also be considered. For the sinusoidal phase modulation case, the modulated carrier signal can be represented by

s(t) = A₀ cos[ ω_o t + ∆θ sin(ω_m t) ]    (3.1)

where ∆θ is the peak phase deviation in radians and ω_m is the rate of the phase modulation in radians/second. The common first-order approximation for the sideband tone levels that appear at ±ω_m relative to the signal carrier frequency ω_o is

L_side ≈ 20 log10( ∆θ/2 )  dBc    (3.2)

The exact sideband levels can be computed from the Jacobi-Anger formula¹ which is given by

exp[ ±jβ sin(θ) ] = Σ_{m=−∞}^{+∞} J_m(β) exp[ ±jmθ ]    (3.3)

1. [1], Section 21.8-4.

in which J_m(·) are mth-order Bessel functions. Using this formula, the sinusoidal phase modulation represented by (3.1) can be expanded into the equivalent form

s(t) = A₀ Σ_{n=−∞}^{+∞} J_n(∆θ) cos[ (ω_o + nω_m) t ]    (3.4)

where each sideband tone is given by a distinct cosine term. The level of the nth sideband tone relative to the fundamental tone is clearly given by |J_n(∆θ)/J₀(∆θ)|. Equation (3.4) is significant in that if a local oscillator is only perturbed by (sinusoidal) phase modulation, the second, third, etc. sideband tones can be no smaller than that predicted by (3.4). It is impossible to have high first-order sideband tones and arbitrarily low higher-order sideband tones without introducing some amplitude modulation on the carrier. This situation often arises in frequency synthesizer work where the first-order reference sideband tones are allowed to be fairly high and it appears that no amount of additional filtering within the PLL can lower the second-order reference spurs as desired. The sideband levels corresponding to (3.1) are plotted with respect to the peak phase deviation in Figure 3-1, and with respect to the first sideband level in Figure 3-2. The series expansion for the mth-order Bessel function is given by

J_m(β) = (β/2)^m Σ_{n=0}^{∞} [ (−1)^n (β/2)^{2n} ] / [ n! (n + m)! ]    (3.5)

and this form clearly shows that the series expansion begins with an mth-order dependence on β. This dependence explains why the higher-order sideband terms fall off increasingly fast as ∆θ is reduced in Figure 3-1. Spurious Level Versus ∆θ 0 -10 -20

Actual Spur Level, dBc

-30

st

1 Sideband

-40 -50 -60

nd

2

Sideband

-70 -80

rd

3 Sideband

-90 -100 -110 -120 -3 10

-2

Figure 3-1 Spurious sideband levels versus peak phase deviation2 from (3.1). 2

Book CD:\Ch3\u12995_sideband_levels.m.

-1

10 10 Peak Phase Deviation ∆θ, rad.

0

10

Figure 3-2 Spurious sideband levels versus first sideband spurious level for sinusoidal phase modulation³ from (3.1).

3. Ibid.
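The sideband levels discussed above can be computed directly from the series (3.5); a small Python sketch (function names mine) that also confirms the first-order approximation (3.2) for small ∆θ:

```python
import math

def bessel_J(m, beta, terms=30):
    # mth-order Bessel function via the series expansion, Eq. (3.5)
    total = 0.0
    for n in range(terms):
        total += (-1) ** n * (beta / 2) ** (2 * n) / (
            math.factorial(n) * math.factorial(n + m))
    return (beta / 2) ** m * total

def sideband_dBc(n, dtheta):
    # Level of the nth sideband relative to the carrier, from Eq. (3.4)
    return 20 * math.log10(abs(bessel_J(n, dtheta) / bessel_J(0, dtheta)))
```

For ∆θ = 0.1 rad the first sideband computed this way agrees with 20 log10(∆θ/2) to within a few hundredths of a dB, while the second sideband is already more than 20 dB lower, illustrating the mth-order dependence on β noted in the text.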

If the spurious sideband tone levels are not symmetric in amplitude about the carrier, amplitude as well as phase modulation must be present. Pure phase modulation can only produce symmetric spectra, regardless of whether the modulation is sinusoidal or not. Two additional relationships that are helpful in quickly computing sideband levels are

J_{n+1}(β) = (2n/β) J_n(β) − J_{n−1}(β)    (3.6)

J₀²(β) + 2 Σ_{n=1}^{∞} J_n²(β) = 1    (3.7)

The first relationship is a helpful recursion for computing the higher-order Bessel function values once J₀(β) and J₁(β) have been computed using (3.5). The second relationship states that the total power in all of the sidebands plus carrier is always constant because the signal is a constant-envelope signal even when phase modulation is present.

Key Point: Significant local oscillator sideband spurs at ±ω_m that result from sinusoidal phase modulation will always be accompanied by additional symmetric sideband tones at ±nω_m.

3.2 HILBERT TRANSFORMS

Time-domain and frequency-domain characteristics of a linear time-invariant (LTI) network can generally not be specified independently. Time-domain characteristics are normally specified in terms of group delay requirements. Frequency-domain characteristics are usually given in terms of attenuation versus frequency. The Hilbert transform makes it possible to compute the time-domain behavior of an LTI network from its frequency-domain behavior or vice versa, thereby making it possible to identify network specifications that are impractical or not realizable without having to first design the actual filter.

A practical example where the Hilbert transform can be used in PLL design is in the loop filter area. Severe spurious requirements may mandate that a specified amount of attenuation be realized in the loop filter, yet loop stability requirements also require that the filter's phase shift be less than a specified value at the PLL natural frequency. The Hilbert transform can be used to explore whether any linear filter can achieve the requirements without having to actually design the loop filter. Once feasibility has been demonstrated, detailed circuit design can proceed with confidence using the Hilbert transform findings as a guide.

The real and imaginary portions of a network voltage transfer function H(s) of a causal system⁴ are related to each other through the Hilbert transform. This is true provided that the system is analytic in the closed right-half s-plane⁵ [2], and the time-domain impulse response corresponding to H(s) is real. A further consequence of the Hilbert relationship is that the amplitude and phase characteristics of H(s) are directly inter-related and cannot be separately specified when H(s) represents the transfer function of a minimum-phase network.⁶ Most common filters (e.g., Butterworth, Chebyshev) are minimum-phase in nature whereas networks that employ mutual coupling or have multiple paths between the input and output (e.g., delay equalizers, allpass networks) are not.

When H(s) corresponds to a real voltage impulse response h(t), H(−jω) = H*(jω) where the asterisk denotes complex conjugation. If the impulse response h(t) is broken into its even and odd parts, it can be written as

h(t) = h_e(t) + h_o(t)    (3.8)

in which h_e(−t) = h_e(t) and h_o(−t) = −h_o(t) by definition. The only way that h(t) can be zero for t < 0 is to require that

h_o(t) = h_e(t) for t > 0, −h_e(t) for t < 0
       = sgn(t) h_e(t)    (3.9)

When this result is used in (3.8), the network transfer function H(jω) can be rewritten as

H(jω) = H_e(jω) − j [1/(πω)] ⊗ H_e(jω)    (3.10)

where H_e(jω) and h_e(t) constitute a Fourier transform pair. In this result, H_e(jω) must be real and the imaginary portion is given by the Hilbert transform of H_e(jω) as shown.⁷ This result was used by Carlin to design wideband impedance-matching networks in [4], [5]. Alternatively, the transfer function H(s) may be expressed in terms of its magnitude and phase as

H(jω) = A(jω) exp[ −jθ(jω) ]
      = exp[ −α(jω) ] exp[ −jθ(jω) ]    (3.11)

4. Impulse response h(t) = 0 for t < 0.
5. Normally, H(s) is assumed to be a rational function of s. If H(s) is analytic in the closed right-half s-plane, this is equivalent to stipulating that it has no poles in the right-half s-plane. For the Hilbert transform results to be applicable, any jω-axis poles must be simple.
6. A minimum-phase network has no poles or zeros in the right-half s-plane.
7. Also see Section 2.3.15 of [3].

Using similar arguments, it can be shown that the magnitude must be an even function of ω whereas the phase must be an odd function of ω. It must also be true that α(jω) and θ(jω) constitute a Hilbert transform pair as

θ(ω) = −(1/π) ∫_{−∞}^{+∞} α(υ)/(ω − υ) dυ
α(ω) = (1/π) ∫_{−∞}^{+∞} θ(υ)/(ω − υ) dυ    (3.12)

These relationships make it possible to specify a network's group delay response (or phase response) and calculate the associated amplitude response, or vice versa. As an example, consider a linear-phase lowpass filter that has a constant group delay given by kπ/(2ω_c) for −ω_c < ω < ω_c, the phase being constant outside this band.⁸ The resultant amplitude response is given by [2]

α(ω) = (k/2) [ (ω/ω_c) log_e| ω²/ω_c² − 1 | − log_e| (ω/ω_c − 1)/(ω/ω_c + 1) | ]    (3.13)

and the normalized response is shown in Figure 3-3. An inflection point is apparent at a frequency of 1.0 corresponding to ω = ω_c in (3.13). The lazy amplitude response in the passband region is typical of all filters that strive to approximate linear phase (e.g., Bessel, Gaussian filters) with no regard to selectivity.

The integrals in (3.12) are usually computed using the Cauchy principal value theorem, but Carlin devised a numerical method in [4] that is well suited for general network analysis. The technique is described in greater detail in [6] for use with impedance-matching networks. In this case, the real part of H(jω) corresponds to resistance and the imaginary part corresponds to reactance of an impedance function corresponding to a minimum-phase network. Representing the resistance function by R(ω), the reactance portion is given by

X(ω) = (1/π) ∫₀^{+∞} (dR/dy) log_e| (y + ω)/(y − ω) | dy    (3.14)

Carlin found it convenient to express R(ω) as a piecewise linear approximation⁹ using resistance decrements r_k as shown in Figure 3-4 and given mathematically as

R_q(ω) = Σ_{k=0}^{n} r_k a_k(ω)    (3.15)

with a_k being the normalized linear interpolation function¹⁰ that is given by

8. −kπ/2 for ω < −ω_c; kπ/2 for ω > ω_c.
9. The first radian frequency must correspond to dc and the ultimate resistance value must be 0 Ohms.
10. a₀ = 1; ω_k = 0 for k = 0.

a_k(ω) = 0                                  for ω ≤ ω_{k−1}
       = (ω − ω_{k−1}) / (ω_k − ω_{k−1})   for ω_{k−1} < ω < ω_k    (3.16)
       = 1                                  for ω ≥ ω_k

Figure 3-3 Amplitude response of ideal linear-phase lowpass filter.¹¹

The corresponding reactance function is given by

X_q(ω) = Σ_{k=0}^{n} r_k b_k(ω)    (3.17)

in which the b_k functions are given by

b_k(ω) = [ 1/(π(ω_k − ω_{k−1})) ] ∫_{ω_{k−1}}^{ω_k} log_e| (y + ω)/(y − ω) | dy
       = [ B(ω, ω_k) − B(ω, ω_{k−1}) ] / [ π(ω_k − ω_{k−1}) ]    (3.18)

with b₀ ≡ 0 and

B(ω₁, ω₂) = ω₂ [ (ω₁/ω₂ + 1) log_e| ω₁/ω₂ + 1 | + (ω₁/ω₂ − 1) log_e| ω₁/ω₂ − 1 | − 2(ω₁/ω₂) log_e(ω₁/ω₂) ]    (3.19)

In this formulation, ω₁/ω₂ values corresponding to 0 and 1 must be appropriately dealt with in order to avoid undefined results in the logarithm function calls.

11. Book CD:\Ch3\u12997_hilbert_linphase.m. k = 1.0 in (3.13).
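The closed form (3.19) can be spot-checked against direct numerical quadrature of the integral in (3.18). The Python sketch below (function names mine) does this for one interval that excludes the logarithm's singular point y = ω.

```python
import math

def B(w1, w2):
    # Closed-form antiderivative term, Eq. (3.19)
    x = w1 / w2
    return w2 * ((x + 1) * math.log(abs(x + 1))
                 + (x - 1) * math.log(abs(x - 1))
                 - 2 * x * math.log(x))

def b_k(w, wk_lo, wk_hi):
    # Eq. (3.18) evaluated via the closed form
    return (B(w, wk_hi) - B(w, wk_lo)) / (math.pi * (wk_hi - wk_lo))

def b_k_numeric(w, wk_lo, wk_hi, steps=20000):
    # Trapezoidal evaluation of the defining integral in Eq. (3.18)
    dy = (wk_hi - wk_lo) / steps
    total = 0.0
    for i in range(steps + 1):
        y = wk_lo + i * dy
        val = math.log(abs((y + w) / (y - w)))
        total += val * (0.5 if i in (0, steps) else 1.0)
    return total * dy / (math.pi * (wk_hi - wk_lo))
```

Agreement between the two evaluations confirms that B(ω, u) is indeed an antiderivative (in u) of log_e|(u + ω)/(u − ω)|.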

These same formulas may be used in the context of (3.12) to compute the phase response of an arbitrary minimum-phase network from its proposed amplitude response or vice versa. An example calculation is shown in Figure 3-5 for an N = 4 Butterworth lowpass filter.

Figure 3-4 Piecewise linear approximation¹² to R_q versus frequency.

Figure 3-5 Hilbert transform method used to compute the phase response of an N = 4 Butterworth lowpass filter from which the filter group delay is easily calculated.¹³

Key Point: Filter amplitude and phase responses are inescapably linked together, particularly for minimum-phase networks. Constraints in one domain must normally be traded off against characteristics in the other domain. Sharp amplitude attenuation characteristics result in substantial group delay. The Hilbert transform makes it possible to consider different tradeoffs between amplitude and group delay requirements prior to doing actual circuit design.

12. Book CD:\Ch3\u12996_hilbert1.m.
13. Book CD:\Ch3\u12996_hilbert1.m. Lowpass −3 dB corner frequency = 1 rad/sec.

3.3 CAUCHY-SCHWARZ INEQUALITY

The Cauchy-Schwarz inequality is used in the matched-filter bound that is discussed in Section 3.17 and then later used in Chapter 10. It can also be used as an upper bound for other situations that arise in signal processing like the Cramer-Rao bound that is discussed in Section 3.7. In its most simple form, the Cauchy-Schwarz inequality for two vectors x and y in Euclidean space is given by

x · y = |x| |y| cos(θ) ≤ |x| |y|    (3.20)

In the case of Euclidean space R^N, the inequality takes the form

( Σ_{n=1}^{N} x_n y_n )² ≤ ( Σ_{n=1}^{N} x_n² ) ( Σ_{n=1}^{N} y_n² )    (3.21)

When f(x) and g(x) represent two complex square-integrable functions, the inequality is given by

| ∫ f(x) g*(x) dx |² ≤ ∫ |f(x)|² dx ∫ |g(x)|² dx    (3.22)

3.4 RF FILTERING EFFECTS ON FREQUENCY STABILITY

If a modulated RF signal is filtered, the signal's amplitude and phase fluctuations remain completely independent of each other so long as the RF filtering is arithmetically symmetric about the signal center frequency which is denoted here by f_o. If, on the other hand, the RF filtering is not frequency-symmetric, phase and amplitude fluctuations will be cross-coupled due to the filtering [7]. In PLL-related work, narrowband nonsymmetrical bandpass filtering can lead to a nonsymmetrical PLL output spectrum. The cause of this spectrum asymmetry is often diagnosed incorrectly unless filtering effects are well understood. This section explores how signal amplitude and phase characteristics are cross-coupled whenever a signal is passed through frequency-nonsymmetrical filtering.

Assume that the input RF signal can be represented by

s(t) = A [1 + ε(t)] cos[ 2π f_o t + φ(t) + θ_o ]    (3.23)

where A is the mean amplitude, ε(t) represents the amplitude variations, and φ(t) represents the phase fluctuations of the signal. Angle θ_o is an arbitrary phase constant. Further assume that the signal is passed through a real, linear, time-invariant, causal filter having g(t) as its impulse response with a corresponding frequency-domain description G(f). Since g(t) is real, it is also true that

G(−f) = G*(f)    (3.24)

where the asterisk denotes complex conjugation. As developed in [7], the power spectral density of the output phase fluctuations can be computed from the autocorrelation function by using the Wiener-Khintchine theorem (see Chapter 4) resulting in

S_φo(f) = |H_a(f)|² S_ε(f) + |H_p(f)|² S_φ(f)    (3.25)

where

|H_a(f)|² = (1/4) | G(f + f_o)/G(f_o) − G*(f_o − f)/G*(f_o) |²    (3.26)

|H_p(f)|² = (1/4) | G(f + f_o)/G(f_o) + G*(f_o − f)/G*(f_o) |²    (3.27)

S_ε(f) and S_φ(f) are the power spectral densities of the amplitude and phase portions of the input signal, respectively. The phase-to-phase transfer function H_p(f) depends on the symmetrical portion of the transfer function G(f) whereas the amplitude-to-phase portion depends on the anti-symmetric portion. For example, if G(f) is a simple one-pole lowpass filter given by

G(f) = [ 1 + j(f/f_c) ]^{−1}    (3.28)

then

|H_p(f)|² = [ (f_o² + f_c²)² + (f_c f)² ] / [ (f_o² + f_c² − f²)² + (2 f_c f)² ]    (3.29)

|H_a(f)|² = (f_o f)² / [ (f_o² + f_c² − f²)² + (2 f_c f)² ]    (3.30)

These transfer functions are plotted versus several values of f_c/f_o in Figure 3-6 and Figure 3-7.

Figure 3-6 Example amplitude-to-phase transfer function H_a(f) for one-pole filter¹⁴ (3.30).

14. Book CD:\Ch3\u12999_rf_filtering.m.

Figure 3-7 Example phase-to-phase transfer function H_p(f) for one-pole filter¹⁵ (3.29).
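The closed forms (3.29) and (3.30) can be verified against the defining ratios (3.26) and (3.27); a small Python sketch (function names mine):

```python
def G(f, fc):
    # One-pole lowpass prototype, Eq. (3.28)
    return 1.0 / (1.0 + 1j * f / fc)

def Ha2_Hp2(f, fo, fc):
    # AM-to-PM and PM-to-PM power transfer functions, Eqs. (3.26)-(3.27)
    a = G(f + fo, fc) / G(fo, fc)
    b = (G(fo - f, fc) / G(fo, fc)).conjugate()
    return abs(a - b) ** 2 / 4.0, abs(a + b) ** 2 / 4.0

def Ha2_closed(f, fo, fc):
    # Eq. (3.30)
    den = (fo**2 + fc**2 - f**2) ** 2 + (2 * fc * f) ** 2
    return (fo * f) ** 2 / den

def Hp2_closed(f, fo, fc):
    # Eq. (3.29)
    den = (fo**2 + fc**2 - f**2) ** 2 + (2 * fc * f) ** 2
    return ((fo**2 + fc**2) ** 2 + (fc * f) ** 2) / den
```

Evaluating both routes at any test frequency gives identical results, which also makes clear that H_a vanishes for a frequency-symmetric G(f), as the text states.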

The interaction between AM and PM signal sidebands can also be seen based on strictly trigonometric means. A sinusoidally amplitude-modulated sine wave can be represented by

s_AM(t) = [1 + m cos(ω_m t)] cos(ω_o t + θ_o)
        = cos(ω_o t + θ_o) + (m/2){ cos[(ω_o + ω_m)t + θ_o] + cos[(ω_o − ω_m)t + θ_o] }    (3.31)

A sinusoidally phase-modulated sine wave can be represented by

s_PM(t) = cos[ ω_o t + θ_o + ∆φ sin(ω_m t) ]
        = cos(ω_o t + θ_o) cos[∆φ sin(ω_m t)] − sin(ω_o t + θ_o) sin[∆φ sin(ω_m t)]
        ≈ cos(ω_o t + θ_o) − sin(ω_o t + θ_o) [∆φ sin(ω_m t)]    (3.32)

where the small-angle approximation for |∆φ| ≪ 1 has been used.

As an example, assume that a filter is required with a passband edge ω_p = 0.25π and stopband edge ω_s = 0.50π. The time-domain impulse response is to be minimized for the b_n tap indices greater than n₀ using a weighting function w(n) = 1.25^(n − n₀) with n₀ = 7, and the total FIR filter length is to be 25 taps. The stopband attenuation level is strongly dictated by the objective error function weight applied to the time-domain sidelobe error term E_t given in (3.73). Applying the eigenfilter design methodology just described, the resultant optimum FIR impulse response is shown in Figure 3-15 assuming passband and stopband weighting factors of α_p = 0.50 and α_s = 0.35, respectively. The corresponding frequency-domain response is shown in Figure 3-16.

Figure 3-15 Optimum FIR impulse response²⁵ for ω_p = 0.25π, ω_s = 0.50π, α_p = 0.50, α_s = 0.35, and 25 FIR taps.

Figure 3-16 Frequency-domain response²⁶ for optimized filter corresponding to Figure 3-15.

Key Point: Although the eigenfilter formulation provided here is limited to digital FIR filters that have no group delay distortion, this perspective provides an effective means to compare the difficulty represented by different filtering requirements even in the analog domain. The cost function (3.73) can also be modified to account for other design criteria that are representable in a quadratic form.

25. Book CD:\Ch3\u13004_eigenfilter.m.
26. Ibid.
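The eigenfilter idea referenced above amounts to minimizing a quadratic form bᵀPb subject to a unit-norm constraint, so the optimum tap vector is the eigenvector of P associated with the smallest eigenvalue. The Python sketch below implements only a basic passband/stopband version of that idea (it omits the time-domain sidelobe term E_t and the exact P-matrix entries of (3.74), which fall outside this excerpt); function and variable names are mine.

```python
import numpy as np

def eigenfilter_lowpass(num_taps=25, wp=0.25 * np.pi, ws=0.50 * np.pi,
                        alpha_p=0.5, alpha_s=0.5, grid=2000):
    # Symmetric (linear-phase) FIR: amplitude A(w) = c @ [1, cos(w), ..., cos(Mw)]
    M = (num_taps - 1) // 2
    k = np.arange(M + 1)
    Cp = np.cos(np.outer(np.linspace(0, wp, grid), k))     # passband basis
    Cs = np.cos(np.outer(np.linspace(ws, np.pi, grid), k)) # stopband basis
    Dp = np.ones((1, M + 1)) - Cp   # deviation from the w = 0 reference A(0)
    # Quadratic error: alpha_p * passband deviation + alpha_s * stopband energy
    P = (alpha_p * (Dp.T @ Dp) * (wp / grid)
         + alpha_s * (Cs.T @ Cs) * ((np.pi - ws) / grid))
    # Minimize c' P c with ||c|| = 1: eigenvector of the smallest eigenvalue
    vals, vecs = np.linalg.eigh(P)
    c = vecs[:, 0]
    if c.sum() < 0:
        c = -c
    # Expand cosine coefficients into the symmetric impulse response
    h = np.zeros(num_taps)
    h[M] = c[0]
    for kk in range(1, M + 1):
        h[M + kk] = h[M - kk] = c[kk] / 2
    return h
```

This is a sketch under stated assumptions, not the book's exact design; it nevertheless produces a lowpass response with substantial stopband rejection for the example band edges.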

3.9 FANO BROADBAND MATCHING THEOREM

The Fano broadband matching theorem [20] stipulates that a fundamental limit exists between bandwidth and a network's reflection coefficient behavior with respect to frequency. This constraint is always present in RF circuit design, and a command of the imposed limits is helpful for designing wideband circuitry. This section provides a formal statement of the matching theorem followed by a simple example that is useful for first-order approximations in real design situations.

Figure 3-17 Lossless impedance-matching limitations imposed by load reactance. (Source E_g with resistance R_g drives a lossless impedance-matching network, input impedance Z_in, terminated by the parallel R₁–C₁ load.)

Consider the lossless impedance-matching network shown in Figure 3-17. The input reflection coefficient Γ is given by

Γ = (Z_in − R_g) / (Z_in + R_g)    (3.75)

The physical limitation for the best possible input impedance match is given by Fano as

∫₀^∞ log_e( 1/|Γ| ) dω ≤ π/(R₁C₁)    (3.76)

In the case of a bandpass characteristic, assume that the reflection coefficient within the passband (ω₁, ω₂) is given by Γ_pass and it is equal to unity outside the passband as shown in Figure 3-18. Based on (3.76),

(ω₂ − ω₁) log_e( 1/|Γ_pass| ) ≤ π/(R₁C₁)    (3.77)

In this example, the resulting passband reflection coefficient is limited to

|Γ_pass| ≥ exp(−πδ)    (3.78)

where

δ = [ 1/(R₁ ω_o C₁) ] [ ω_o/(ω₂ − ω₁) ]    (3.79)

with ω_o = √(ω₁ω₂). The significance of this result is that (3.76) sets a physical limit on the achievable impedance-matching quality that is attainable across frequency when the load contains a reactive component and passive matching techniques are used.

Figure 3-18 Idealized bandpass impedance-matching example. After: [6].
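The bound (3.78)–(3.79) is easy to evaluate numerically; a short sketch (function names and example component values are mine, chosen only for illustration):

```python
import math

def gamma_pass_min(R1, C1, f1, f2):
    # Best achievable in-band |reflection coefficient|, Eqs. (3.78)-(3.79)
    w1, w2 = 2 * math.pi * f1, 2 * math.pi * f2
    wo = math.sqrt(w1 * w2)                        # geometric band center
    delta = (1.0 / (R1 * wo * C1)) * (wo / (w2 - w1))
    return math.exp(-math.pi * delta)

def return_loss_dB(gamma):
    return -20 * math.log10(gamma)
```

Note that ω_o cancels in (3.79), so the bound depends only on R₁C₁ and the absolute bandwidth ω₂ − ω₁: for example, a 50-Ω, 2-pF load matched across 1–2 GHz cannot achieve better than roughly 43 dB of in-band return loss with any passive lossless network.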

Key Point: The Fano broadband matching theorem establishes a performance limit between the achievable passband VSWR²⁷ and reactance behavior of any one-port network. In general, achieving a specified degree of impedance-matching becomes more difficult as the bandwidth of the required match is increased.

3.10 LEESON–SCHERER PHASE NOISE MODEL

The Leeson oscillator phase noise model is discussed in greater length in Section 9.5 along with the closely related Haggai model. Both models are based on linear oscillator theory. The most common form for Leeson's model includes additional provision for a 1/f noise term that was added by Scherer. Leeson's model is frequently used and is given by

L(δ) = (F k T_o / 2P_o) [ 1 + ( f_o/(2Qδ) )² ] ( 1 + f_f/δ )   rad²/Hz    (3.80)

with

F    Noise factor
k    Boltzmann's constant (1.381e–23 J/K)
T_o  Absolute temperature (normally 290 K)
P_o  Resonator power, W
f_o  Oscillator center frequency, Hz
Q    Resonator quality factor
δ    Frequency offset from oscillator center frequency, Hz
f_f  1/f noise corner frequency, Hz

Key Point: The Leeson–Scherer oscillator phase noise model is an excellent first-order description for the phase noise performance of an oscillator as viewed on a spectrum analyzer.

3.11 THERMAL NOISE LIMITS

All electrical conductors that exhibit resistance also exhibit electrical noise that is a function of absolute temperature. Chapter 4 is devoted entirely to a discussion about noise.

27. Voltage Standing Wave Ratio = (1 + |Γ|)/(1 − |Γ|).
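The Leeson–Scherer model (3.80) can be evaluated directly; the Python sketch below expresses it in dBc/Hz, with default parameter values that are illustrative only (not from the book):

```python
import math

def leeson_dBc_per_Hz(offset_hz, F=2.0, Po=1e-3, fo=100e6, Q=50.0,
                      To=290.0, ff=10e3):
    # Leeson-Scherer phase noise model, Eq. (3.80), in dBc/Hz
    k = 1.381e-23
    L = (F * k * To / (2 * Po)) \
        * (1 + (fo / (2 * Q * offset_hz)) ** 2) * (1 + ff / offset_hz)
    return 10 * math.log10(L)
```

Far from the carrier both bracketed terms approach unity, so the curve flattens at 10 log10(FkT_o/2P_o), the thermal noise floor; close to the carrier the (f_o/2Qδ)² term produces the familiar −20 dB/decade region.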

3.12 NYQUIST SAMPLING THEOREM

A signal s(t) that is strictly limited to a baseband bandwidth of W Hz can be precisely represented by samples that are taken at a minimum sampling rate F_s ≥ 2W which is known as the Nyquist rate. As long as the sampling rate equals or exceeds this rate, s(t) can be perfectly reconstructed from its samples using the interpolation formula

s(t) = Σ_{n=−∞}^{∞} s(nT_s) h(t − nT_s)    (3.81)

where T_s = 1/F_s, and h(t) is the impulse response of the interpolation filter given by

h(t) = sin(πt/T_s) / (πt/T_s)    (3.82)

Nyquist sampling is used in Chapter 7 to theoretically bridge from continuous-time systems to sampled-time control systems. Nyquist methods and the raised-cosine pulse shape that is described later in this section are used extensively in Chapter 10 for the design and analysis of bit synchronization systems. The Nyquist sampling theorem also governs the numerical precision achievable in computer simulation owing to frequency-domain aliasing effects that may be present. The Nyquist sampling theorem is closely related to the Poisson Sum formula that is described in Section 3.15 and is used extensively in this text. A proper discussion of discrete-time systems must rely in part on the Nyquist sampling theorem.

A necessary and sufficient condition for h(t) ⇔ H(f) to have no interference²⁸ between samples is given by the frequency-domain constraint that

Σ_{m=−∞}^{+∞} H(f + mF_s) = T_s    (3.83)

Other band-limited interpolation functions beside (3.82) can be used that also satisfy (3.83), most notably the raised-cosine which has an impulse response given by

p_RC(t) = [ sin(πt/T_s)/(πt/T_s) ] [ cos(πβt/T_s) / (1 − (2βt/T_s)²) ]    (3.84)

in which β is the excess bandwidth parameter (0 ≤ β ≤ 1). The frequency-domain description that corresponds to (3.84) is given by

P_RC(f) = T_s                                                          for 0 ≤ |f| ≤ (1 − β)/(2T_s)
        = (T_s/2){ 1 + cos[ (πT_s/β)( |f| − (1 − β)/(2T_s) ) ] }       for (1 − β)/(2T_s) < |f| ≤ (1 + β)/(2T_s)    (3.85)
        = 0                                                            for |f| > (1 + β)/(2T_s)

28. h(0) = 1, h(n/2W) = 0 for n ≠ 0.


Several time-domain raised-cosine impulse responses are shown in Figure 3-19 along with their respective frequency-domain characteristics shown in Figure 3-20.

Figure 3-19 Raised-cosine pulse shapes²⁹ from (3.84).

Figure 3-20 Raised-cosine pulse spectra³⁰ from (3.85).

Key Point: The Nyquist sampling theorem and its implications arise throughout the study of PLLs because the inclusion of any digital device (e.g., digital divider, phase detector) inherently implies sampling. A command of this theorem is also vital to any accurate computer simulation work. It has deep connections with information theory, digital signal processing, and estimation theory. Equation (3.81) can also be viewed as an interpolation formula in which the interpolating functions h(t − nT_s) constitute a complete orthonormal basis function set.

29. Book CD:\Ch3\u13005_raised_cosine.m.
30. Ibid.
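The raised-cosine pulse (3.84) is straightforward to implement once its two removable singularities are handled; a Python sketch (function name mine):

```python
import math

def p_rc(t, Ts=1.0, beta=0.5):
    # Raised-cosine impulse response, Eq. (3.84); the removable
    # singularities at t = 0 and t = +/-Ts/(2*beta) use limit values.
    if t == 0.0:
        return 1.0
    if beta > 0 and abs(abs(t) - Ts / (2 * beta)) < 1e-12:
        # L'Hopital limit at t = +/-Ts/(2*beta)
        return (beta / 2) * math.sin(math.pi / (2 * beta))
    x = math.pi * t / Ts
    return (math.sin(x) / x) * math.cos(beta * x) / (1 - (2 * beta * t / Ts) ** 2)
```

The zero-ISI property required by (3.83) is easy to confirm: p_RC(0) = 1 and p_RC(nT_s) = 0 for all integer n ≠ 0, and for β = 1 the pulse passes through 0.5 at t = T_s/2.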

3.13 PALEY-WIENER CRITERION

The Paley-Wiener criterion determines whether a specified amplitude response can be physically realized by a causal filter or not. If the amplitude response in question is represented by |H(jω)|, realizability demands that

∫_{−∞}^{+∞} [ | log_e|H(jω)| | / (1 + ω²) ] dω < ∞    (3.86)

Inspection of (3.86) permits the following observations to be made:

• The amplitude response |H(jω)| may be zero at specific frequency values, but may not be zero over any nonzero span of frequencies.
• The filter's attenuation with respect to frequency cannot increase arbitrarily fast.

This second point is best exemplified by the Gaussian filter shape which is only Gaussian out to a specified attenuation level like 6 dB or 12 dB. A filter exhibiting the Gaussian attenuation shape for all frequencies is not realizable because it would violate (3.86).
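The Gaussian example can be made concrete numerically: for |H| = exp(−ω²) the integrand of (3.86) behaves as ω²/(1 + ω²) and the truncated integral grows without bound, while for an N = 4 Butterworth magnitude it saturates. The sketch below (names and example shapes mine) evaluates the truncated integral:

```python
import math

def pw_integral(log_mag, limit, steps=100000):
    # Truncated Paley-Wiener integral, Eq. (3.86), over [-limit, limit]
    dw = 2 * limit / steps
    total = 0.0
    for i in range(steps + 1):
        w = -limit + i * dw
        val = abs(log_mag(w)) / (1 + w * w)
        total += val * (0.5 if i in (0, steps) else 1.0)
    return total * dw

gauss = lambda w: -w * w                        # log|H| for a Gaussian shape
butter4 = lambda w: -0.5 * math.log(1 + w ** 8)  # log|H| for N = 4 Butterworth
```

Doubling the integration limit roughly doubles the Gaussian result (divergence) but leaves the Butterworth result essentially unchanged (convergence), mirroring the realizability conclusion in the text.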

Key Point: The Paley-Wiener condition provides existence limitations for filter realizability. If a given amplitude characteristic H(jω) does not satisfy (3.86), it is not physically realizable.

3.14 PARSEVAL'S THEOREM

Parseval's theorem is a statement about equal energy in the time- and frequency-domain descriptions of a given signal or system impulse response. This theorem is frequently used in drawing energy-based conclusions in the time and frequency-domains. Given that H(f) and h(t) constitute a Fourier transform pair,

E = ∫_{−∞}^{+∞} |H(f)|² df = ∫_{−∞}^{+∞} |h(t)|² dt    (3.87)
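The discrete counterpart of (3.87) is easy to check numerically; with NumPy's unnormalized DFT convention the identity reads Σ|x|² = (1/N)Σ|X|²:

```python
import numpy as np

# Discrete check of Parseval's theorem for a random test sequence
rng = np.random.default_rng(0)
x = rng.standard_normal(1024)
X = np.fft.fft(x)
time_energy = np.sum(np.abs(x) ** 2)
freq_energy = np.sum(np.abs(X) ** 2) / len(x)
```

The 1/N factor belongs wherever the chosen DFT convention puts it; the energies agree to machine precision.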

Key Point: Parseval's theorem states that the total energy in a given waveform h(t) ⇔ H(f) is the same, whether computed in the time- or frequency-domain.

3.15 POISSON SUM

The Poisson Sum formula was first introduced in Chapter 1 for mathematically bridging between continuous-time and discrete-time system descriptions. This formula is relied on extensively in Appendix 6B and Chapter 7 in developing a detailed mathematical description of sampling effects as they apply to hybrid and discrete-time PLLs. The Poisson Sum formula (7.9) is closely related to the Nyquist sampling theorem that is discussed in Section 3.12. The Poisson Sum formula is repeated here for convenience as

Σ_k T_s h(kT_s) exp(−j2πf kT_s) = Σ_m H( f ± m/T_s )    (3.88)


where Ts is the time-interval between uniformly spaced samples, h(t) is the continuous-time impulse response of the system, and H( f ) is the Fourier transform of the continuous-time impulse response.
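The identity (3.88) can be verified numerically with a transform pair that converges rapidly on both sides, such as the Gaussian h(t) = exp(−πt²) ⇔ H(f) = exp(−πf²); the example values here are mine:

```python
import numpy as np

# Numerical check of the Poisson Sum, Eq. (3.88), for a Gaussian pair
Ts = 0.5
f = 0.3
k = np.arange(-40, 41)
lhs = np.sum(Ts * np.exp(-np.pi * (k * Ts) ** 2)
             * np.exp(-2j * np.pi * f * k * Ts))    # sampled z-transform side
m = np.arange(-40, 41)
rhs = np.sum(np.exp(-np.pi * (f + m / Ts) ** 2))     # aliased-spectrum side
```

Both sides agree to machine precision, which is exactly the aliasing statement the text relies on: sampling in time replicates H(f) at multiples of 1/T_s.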

Key Point: The Poisson Sum provides an exact mathematical relationship between a continuous-time system description given in the frequency-domain as H(f) and its discrete time-sampled impulse response represented by h(kT_s). The left-hand side of (3.88) is the z-transform of h(t) scaled by the sampling time-interval T_s where z = exp(j2πf T_s).

3.16 TIME-BANDWIDTH PRODUCT

Time-bandwidth product relationships appear in several forms within engineering circles. Their specific definition and usage are application-dependent. Although the product relationships are frequently encountered in the mathematical analysis of PLL systems (e.g., longer time-domain simulations for better frequency resolution), they normally do not directly affect detailed PLL design except possibly in situations where signal dynamics like Doppler or modulation are involved. Several time-bandwidth product relationships are considered in the subsections that follow.

Key Point: Ascertainable time and frequency features based on a given data observation duration are limited, and depend on each other in a reciprocal manner, roughly speaking. Different definitions apply to the time and frequency features in question, based on whether deterministic or stochastic systems are being considered.

3.16.1 Gabor Limit for Deterministic Signals

The Gabor limit is frequently encountered in radar and spectrum measurement applications. Assume that a measurement filter H(f) has an impulse response given by h(t) and that the filter impulse response satisfies the constraint

lim_{t→∞} t h(t) = 0    (3.89)

Under this condition, the Gabor limit is given by [21]–[23]

∆t_rms ∆f_rms ≥ 1/(4π)    (3.90)

in which

∆t_rms = [ ∫_{−∞}^{+∞} t² |h(t)|² dt ]^{1/2}    (3.91)

∆f_rms = [ ∫_{−∞}^{+∞} f² |H(f)|² df ]^{1/2}    (3.92)

Equality holds in (3.90) when h(t) is a Gaussian pulse shape.
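A numerical check of (3.90)–(3.92) for the equality case; a unit-energy normalization of h(t) is assumed here (variable names mine):

```python
import numpy as np

# Gabor limit check for the unit-energy Gaussian h(t) = (2/pi)^(1/4) exp(-t^2)
t = np.linspace(-6, 6, 8001)
h = (2 / np.pi) ** 0.25 * np.exp(-t ** 2)

dt_rms = np.sqrt(np.trapz(t ** 2 * h ** 2, t))          # Eq. (3.91)

f = np.linspace(-3, 3, 2001)
# Continuous Fourier transform H(f) evaluated by direct quadrature
H = np.array([np.trapz(h * np.exp(-2j * np.pi * fi * t), t) for fi in f])
df_rms = np.sqrt(np.trapz(f ** 2 * np.abs(H) ** 2, f))  # Eq. (3.92)

product = dt_rms * df_rms   # bounded below by 1/(4*pi), Eq. (3.90)
```

For this Gaussian, ∆t_rms = 1/2 and ∆f_rms = 1/(2π), so the product lands on the bound 1/(4π) as the text states.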

3.16.2 Time-Frequency Resolution for Deterministic Signals

Frequency resolution in hertz is often equated to the reciprocal of the time-domain observation duration in seconds, but this relationship is only an approximate one [22]. The equivalent time width for a deterministic signal h(t) is defined as

T_e = [ 1/h(0) ] ∫_{−∞}^{+∞} h(t) dt    (3.93)

where h(0) normally corresponds to the maximum pulse-height value assumed by h(t). Similarly, the equivalent bandwidth for h(t) is defined as

B_e = [ 1/H(0) ] ∫_{−∞}^{+∞} H(f) df    (3.94)

where h(t) and H(f) constitute a Fourier transform pair. Making use of the forward and inverse Fourier transform relationships along with (3.93) and (3.94), it is possible to quickly show that T_e B_e = 1. Although T_e and B_e are reciprocally related, nothing can be concluded about the ability of a given H(f) to resolve the spectral responses between multiple signals based on this discussion. Even so, the T_e B_e product is sometimes referred to as the time-frequency uncertainty principle [23] because it does convey that there is a fundamental limit between simultaneous spectral and temporal observation precisions. See Sections 4.3 and 4.4 for additional discussion regarding spectral resolution and filtering.
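The identity T_e B_e = 1 follows because ∫h dt = H(0) and ∫H df = h(0); a quick numerical illustration with a triangular pulse (example pulse mine):

```python
import numpy as np

# Te * Be = 1, Eqs. (3.93)-(3.94), for h(t) = max(1 - |t|, 0) <-> sinc^2(f)
t = np.linspace(-4, 4, 4001)
h = np.clip(1 - np.abs(t), 0.0, None)

f = np.linspace(-40, 40, 8001)
H = np.array([np.trapz(h * np.exp(-2j * np.pi * fi * t), t) for fi in f]).real

Te = np.trapz(h, t) / h[t.size // 2]   # Eq. (3.93), h(0) = 1
Be = np.trapz(H, f) / H[f.size // 2]   # Eq. (3.94), H(0) = 1
```

The product comes out slightly below 1 only because the frequency integration band is truncated; widening it drives the result to unity.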

3.16.3 Time-Frequency Resolution Limits for Stochastic Signals

In the case of stochastic signals, temporal and spectral resolution limits must be constrained further than in Section 3.16.2 in order to obtain reliable results [23]. When the statistical spectrum is obtained from either temporal or spectral smoothing of a time-varying periodogram,31 the temporal-spectral resolution product must be much greater than unity (i.e., ∆t ∆f >> 1). This condition is known as Grenander's uncertainty condition. Resolution uncertainty in this context is also referred to as the stability time-bandwidth product in [22]. The time-frequency uncertainty principle from Section 3.16.2 is modified to include the degradation caused by the signal's random qualities to

Qs Te Bs ≈ 1    (3.95)

in which Qs is the statistical quality ratio for the spectral estimate, and Bs is the effective statistical bandwidth. Assuming that Se(f) is the power spectral density estimate for a single observation time-interval of TM = MTs seconds, Qs is given by

Qs = variance{Se(f)} / ( E[Se(f)] )²    (3.96)

31 The periodogram is an estimate of the power spectral density based on a Fourier transform that is computed over a finite time measurement period TM.

Fundamental Limits

The quality ratio Qs is the ratio of the estimated spectral density variance to the square of the expected spectral density, and as such is effectively the inverse of a signal-to-noise ratio. Small values of Qs are desirable in order to have meaningful results. The quantity Bs is defined as

Bs = [ ∫_{−1/2Ts}^{1/2Ts} Ω(f) df ]² [ ∫_{−1/2Ts}^{1/2Ts} Ω²(f) df ]⁻¹    (3.97)

where Ω(f) represents the spectral shaping window32 used in the power spectral density estimate. Power spectral density estimation is addressed further in Sections 4.3 and 4.4, and windowing is considered in Section 4.4.4.
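The tradeoff captured by (3.95) and (3.96) can be demonstrated with a short simulation: averaging K periodograms of white Gaussian noise drives Qs from roughly unity down toward 1/K, at the price of a K-times longer observation. The Python sketch below is illustrative only (the block length and K are arbitrary choices, not values from the text):

```python
import numpy as np

rng = np.random.default_rng(1)

# Averaging K periodograms of white Gaussian noise (Bartlett averaging)
# reduces the quality ratio Qs of (3.96) from ~1 toward ~1/K, illustrating
# Grenander's condition dt*df >> 1.  Parameters are arbitrary assumptions.
M, K = 256, 32
x = rng.standard_normal(M * K).reshape(K, M)

P = np.abs(np.fft.fft(x, axis=1))**2 / M       # K raw periodograms

def quality_ratio(Se):
    """Qs = variance{Se(f)} / E[Se(f)]^2, estimated over interior bins."""
    s = Se[1:M // 2]                            # skip dc and Nyquist bins
    return s.var() / s.mean()**2

Qs_single = quality_ratio(P[0])                 # ~1: a single periodogram
Qs_avg = quality_ratio(P.mean(axis=0))          # ~1/K: after averaging
```

The single periodogram never becomes reliable no matter how long the record is; only smoothing (here, averaging over K sub-records) buys down Qs.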

Key Point: The observation time interval, frequency resolution, and spectrum reliability are all interrelated through (3.95). This result stipulates that only two of the three quantities can be independently constrained. For example, if the spectrum variance is required to be 1/30th of the spectral mean, the observation time-interval would necessarily be on the order of 30/Bs.

3.17 MATCHED-FILTERS FOR DETERMINISTIC SIGNALS IN ADDITIVE WHITE GAUSSIAN NOISE (AWGN)

In the case of a deterministic signal pulse g(t) in AWGN, the matched-filter maximizes the output signal-to-noise ratio (SNR), resulting in an SNR value that is independent of the signal pulse shape and only dependent on the ratio Eg/No, where Eg is the total pulse energy and No is the one-sided noise spectral density [24]. This matched-filter bound is used in Chapter 10 to compare different bit synchronization methods with respect to theory. This result can be obtained by considering Figure 3-21 in which a deterministic pulse shape represented by g(t) is immersed in additive white Gaussian noise that is represented by n(t). The noise is assumed to have a two-sided power spectral density of No/2 W/Hz. The total energy of the g(t) pulse is represented by Eg. The matched-filter H(f) is the linear filter that maximizes the output SNR at t = to.

Figure 3-21 Matched-filter concept for a deterministic signal g(t) in AWGN: the input g(t) + n(t) is applied to the filter h(t) ⇔ H(f), and the output go(t) + no(t) is sampled at t = to.

The output SNR at the sampling instant is mathematically given by

ρ = go²(to) / E{no²(to)} = go²(to) / σn²    (3.98)

in which E represents statistical expectation, to corresponds to the optimum sampling point in time, and

σn² = (No/2) ∫_{−∞}^{+∞} |H(f)|² df    (3.99)

32 Same as the Fourier transform of the time-domain windowing function.

The deterministic portion of the filter output signal go(to) is given by

go(to) = ∫_{−∞}^{+∞} G(f) H(f) exp(j2πf to) df    (3.100)

Assuming that H(f) is normalized, the Cauchy-Schwarz inequality (Section 3.3) permits the output SNR ρ to be bounded by

ρ = | ∫_{−∞}^{+∞} G(f) H(f) exp(j2πf to) df |² / [ (No/2) ∫_{−∞}^{+∞} |H(f)|² df ] ≤ (2/No) ∫_{−∞}^{+∞} |G(f)|² df = 2Eg/No    (3.101)

The maximum achievable output SNR is limited to 2Eg/No. This maximum value is only achieved when H(f) is the complex-conjugate of the input pulse shape represented by G(f). As such, h(t) corresponds to a time-reversed but delayed copy of g(t). In radar applications, (3.100) is called the ambiguity function [17], and the mismatches between G(f) and H(f) that arise due to unknown signal parameters (e.g., Doppler frequency, time of arrival) are studied at length.
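A discrete-time Monte-Carlo sketch of this bound follows. It is illustrative Python (the pulse shapes, noise level, and trial count are arbitrary assumptions) showing that the output SNR at the sampling instant equals Eg/σ², the sampled-data counterpart of 2Eg/No, regardless of pulse shape:

```python
import numpy as np

rng = np.random.default_rng(7)

def matched_filter_snr(g, sigma, trials=20000):
    """Monte-Carlo output SNR at the matched-filter peak for a discrete
    pulse g in white Gaussian noise of standard deviation sigma."""
    Eg = np.sum(g**2)
    peak = Eg                                   # signal output go(to) = sum g^2
    # project noise onto the matched filter (time-reversed g) at the peak
    noise_out = (sigma * rng.standard_normal((trials, g.size))) @ g
    return peak**2 / noise_out.var()

sigma = 0.5
rect = np.ones(64) / 8.0                        # unit-energy rectangular pulse
tri = np.bartlett(64)
tri /= np.sqrt(np.sum(tri**2))                  # unit-energy triangular pulse

snr_rect = matched_filter_snr(rect, sigma)      # both ~ Eg/sigma^2 = 4
snr_tri = matched_filter_snr(tri, sigma)
```

Two very different unit-energy pulses produce the same output SNR, as the bound requires.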

Key Point: A matched-filter maximizes the output SNR for a deterministic desired signal g(t). The matched-filter impulse response is equal to a time-reversed but delayed copy of g(t). The output SNR is independent of the actual pulse shape, only depending on the ratio Eg/No.

3.18 WEAK LAW OF LARGE NUMBERS

The weak law of large numbers serves as a useful guideline for the simulation of PLL systems when SNR conditions are poor. The law provides a convenient basis for judging the attainable accuracy of such simulations for a given signal duration and signal SNR. When properly applied, this law can be used to shorten computer simulation run-times without inadvertently degrading the statistical confidence of the results. The section concludes with an example where a sum of uniformly distributed random variables is used to approximate a Gaussian random variable.

Consider the sum SN of N statistically independent, identically distributed random variables represented by {xi}. Assume that the mean of each random variable is µ and that the variance of each is σ². The weak law of large numbers [25] requires that

Prob( |SN − µ| ≥ ε ) ≤ σ² / (Nε²)    (3.102)

where

SN = (1/N) Σ_{k=1}^{N} xk    (3.103)

Proof of the weak law can be argued directly using the Chebyshev inequality from Section 3.5. The central limit theorem is also supportive of this result [26]: "For statistically independent samples, the probability distribution of the sample mean tends to become Gaussian as the number of statistically independent samples is increased without limit, regardless of the probability distribution of the random variables or process being sampled as long as it has a finite mean and variance" [27].

In the case of N statistically independent uniformly distributed random variables, characteristic function methods (see [13], [27]) can be used to show that the sum becomes Gaussian for large N. Assume that each variable is mean-zero and uniformly distributed over (−1/(2√N), 1/(2√N)), so that each has variance 1/(12N) and the variance of the sum remains fixed at 1/12 regardless of N. The characteristic function for the sum of N such random variables is given by

CN(f) = [ sin(πf/√N) / (πf/√N) ]^N    (3.104)

In the limit as N→∞, it can be shown that CN(f) becomes the characteristic function for a Gaussian random variable. This can be seen by first making use of the Taylor series expansion

sin(x)/x = ( x − x³/3! + x⁵/5! − x⁷/7! ± … ) / x ≈ 1 − x²/6    (3.105)

with x = πf/√N. Applying the binomial theorem (2.175) to (3.104) while using (3.105) results in

(1 − α)^n = 1 − nα + [n(n−1)/2!] α² − [n(n−1)(n−2)/3!] α³ ± …    (3.106)

where α = (πf)²/(6N). Making this substitution for α in (3.106) finally results in

lim_{N→∞} (1 − α)^N |_{α = (πf)²/(6N)} = 1 + Σ_{k=1}^{∞} (1/k!) [ −(πf)²/6 ]^k    (3.107)

Comparing (3.107) with the Taylor series expansion for exp(x), it can be shown that

lim_{N→∞} [ sin(πf/√N) / (πf/√N) ]^N = exp( −π²f²/6 ) = exp[ −(1/2)(2πσf)² ]    (3.108)

where σ² = 1/12. Since this final result corresponds to the characteristic function for a mean-zero Gaussian random variable with variance σ², in the limit the sum does in fact become Gaussian.33
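The limiting behavior in (3.108) can be checked by direct simulation. The short Python sketch below (N and the trial count are arbitrary) sums N suitably scaled uniform variables and verifies that the variance stays near 1/12 while the kurtosis approaches the Gaussian value of 3:

```python
import numpy as np

rng = np.random.default_rng(5)

# Sum of N zero-mean uniforms on (-1/(2*sqrt(N)), +1/(2*sqrt(N))); each has
# variance 1/(12N), so the sum has variance 1/12 for every N, per (3.108).
N, trials = 48, 200000
u = rng.uniform(-0.5, 0.5, (trials, N)) / np.sqrt(N)
s = u.sum(axis=1)

var_s = s.var()                                   # ~ 1/12 = 0.0833
kurt = np.mean(((s - s.mean()) / s.std())**4)     # ~ 3 for a Gaussian
```

Even modest N gives an excellent Gaussian approximation, which is the basis of the popular sum-of-uniforms Gaussian generator.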

Key Point: The weak law of large numbers provides a means to bound the measurement error variance as given by (3.102) that is independent of the underlying probability density function. This bound is indirectly employed in Chapter 4 and Chapter 10.

References

[1] Korn, G.A., and T.M. Korn, Mathematical Handbook for Scientists and Engineers, 2nd ed., New York: McGraw-Hill, 1968.
[2] Lam, H., Analog and Digital Filters: Design and Realization, Englewood Cliffs, NJ: Prentice-Hall, 1979.
[3] Poularikas, A.D., The Transforms and Applications Handbook, Boca Raton, FL: CRC Press, 1996.
[4] Carlin, H.J., "A New Approach to Gain-Bandwidth Problems," IEEE Trans. Circuits and Systems, April 1977.
[5] Carlin, H.J., and J.J. Komiak, "A New Method of Broad-Band Equalization Applied to Microwave Networks," IEEE Trans. Microwave Theory and Techniques, Feb. 1979.
[6] Cuthbert, T.R., Circuit Design Using Personal Computers, New York: John Wiley & Sons, 1983.
[7] Tremblay, P., and M. Tetu, "Characterization of Frequency Stability: Effect of RF Filtering," IEEE Trans. Instrumentation and Measurement, June 1985.
[8] Schwartz, M., Information, Transmission, Modulation and Noise, 3rd ed., New York: McGraw-Hill, 1980.
[9] Proakis, J.G., Digital Communications, 4th ed., New York: McGraw-Hill, 2001.
[10] Mendel, J.M., Lessons in Estimation Theory for Signal Processing, Communications, and Control, Englewood Cliffs, NJ: Prentice-Hall, 1995.
[11] Van Trees, H.L., Detection, Estimation, and Modulation Theory, New York: John Wiley & Sons, 1968.
[12] Srinath, M.D., and P.K. Rajasekaran, An Introduction to Statistical Signal Processing with Applications, New York: John Wiley & Sons, 1979.
[13] Scharf, L.L., Statistical Signal Processing: Detection, Estimation, and Time Series Analysis, Reading, MA: Addison-Wesley, 1991.
[14] Meyr, H., M. Moeneclaey, and S.A. Fechtel, Digital Communication Receivers: Synchronization, Channel Estimation, and Signal Processing, New York: John Wiley & Sons, 1998.
[15] Rife, D.C., and R.R. Boorstyn, "Single-Tone Parameter Estimation from Discrete-Time Observations," IEEE Trans. Information Theory, Sept. 1974.
[16] Helstrom, C.W., Elements of Signal Detection and Estimation, Englewood Cliffs, NJ: Prentice-Hall, 1995.
[17] Skolnik, M.I., Radar Handbook, New York: McGraw-Hill, 1970.
[18] Zverev, A.I., Handbook of Filter Synthesis, New York: John Wiley & Sons, 1967.
[19] Daniels, R.W., Approximation Methods for Electronic Filter Design, New York: McGraw-Hill, 1974.
[20] Matthaei, G.L., L. Young, and E.M.T. Jones, Microwave Filters, Impedance-Matching Networks, and Coupling Structures, Dedham, MA: Artech House, 1980.
[21] Qian, S., and D. Chen, Joint Time-Frequency Analysis: Methods and Applications, Upper Saddle River, NJ: Prentice-Hall, 1996.
[22] Marple, S.L., Digital Spectral Analysis, Englewood Cliffs, NJ: Prentice-Hall, 1987.
[23] Gardner, W.A., Statistical Spectral Analysis—A Nonprobabilistic Theory, Englewood Cliffs, NJ: Prentice-Hall, 1988.

33 The characteristic function of a probability density function is unique; see Section 18.3-8 of [1]; also Section 5.5 and specifically Theorem 5.5.2 due to Levy in [26].


[24] Ziemer, R.E., and R.L. Peterson, Digital Communications and Spread Spectrum Systems, New York: Macmillan Publishing, 1985.
[25] Wozencraft, J.M., and I.M. Jacobs, Principles of Communication Engineering, New York: John Wiley & Sons, 1965.
[26] Larson, H.J., and B.O. Shubert, Probabilistic Models in Engineering Sciences, Volume I, New York: John Wiley & Sons, 1979.
[27] Davenport, W.B., and W.L. Root, An Introduction to the Theory of Random Signals and Noise, New York: IEEE Press, 1987.

Appendix 3A: Maximum-Likelihood Frequency Estimator

A maximum-likelihood (ML) frequency estimator can be found starting with (3.57). The associated log-likelihood function is given by

log_e[ p(Z) ] = −N log_e(2πσ²) − Σ_{k=1}^{M} { [Ik − b0 cos(φk)]² + [Qk − b0 sin(φk)]² } / (2σ²)    (3A.1)

with

φk = (k − 1) ωo Ts + θo    (3A.2)

In order to maximize the log-probability, it suffices to take the derivative of (3A.1) with respect to ωoTs and set it equal to zero. This is equivalent to finding the solution to g(u) = 0 where

g(u) = Σ_{k=1}^{M} (k − 1) { Ik sin[(k − 1)u + θo] − Qk cos[(k − 1)u + θo] }    (3A.3)

and u corresponds to the ML frequency estimate for ωoTs. Since g(u) is not monotonic, a coarse search for the globally optimum solution must first be done, and the method used here is based on finding the estimated value for which g′(u) is maximized, where

g′(u) = Σ_{k=1}^{M} (k − 1)² { Ik cos[(k − 1)u + θo] + Qk sin[(k − 1)u + θo] }    (3A.4)

The cost function and its derivative are plotted in Figure 3A-1. Once an initial estimate for the solution to (3A.3) has been found by brute-force means, it can be easily polished using the Newton-Raphson method that employs the derivative of the cost function given by (3A.4).

The onset of threshold behavior is shown in Figure 3-11 for ρ values less than approximately −2 dB. This thresholding effect is caused by the appearance of multiple zeros in the cost function and the associated inability to correctly choose among them. The worst-case estimation error variance is bounded in the computations by limiting the maximum excursion of the Newton-Raphson result compared to the brute-force estimate to 50% above or below the initial estimate value.
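A minimal Python rendition of this procedure follows (the book's CD provides a MATLAB implementation; this sketch assumes θo is known and uses an arbitrary test frequency and SNR): a brute-force grid search maximizing (3A.4), followed by Newton-Raphson iteration on (3A.3):

```python
import numpy as np

rng = np.random.default_rng(11)

# Sketch of the Appendix 3A estimator.  theta_o is assumed known (taken as
# zero); k below plays the role of (k - 1) for k = 1..M in (3A.3)-(3A.4).
M, u_true, b0, sigma = 160, 0.30, 1.0, 0.1
k = np.arange(M)
I = b0 * np.cos(k * u_true) + sigma * rng.standard_normal(M)
Q = b0 * np.sin(k * u_true) + sigma * rng.standard_normal(M)

def g(u):       # cost function (3A.3); g(u) = 0 at the ML estimate
    return np.sum(k * (I * np.sin(k * u) - Q * np.cos(k * u)))

def gprime(u):  # derivative (3A.4); peaks near the true frequency
    return np.sum(k**2 * (I * np.cos(k * u) + Q * np.sin(k * u)))

grid = np.linspace(0.0, np.pi, 4000)
u = grid[np.argmax([gprime(v) for v in grid])]   # brute-force coarse stage
for _ in range(5):                               # Newton-Raphson polishing
    u -= g(u) / gprime(u)
```

Well above threshold the polished estimate lands within a small fraction of a grid cell of the true ωoTs; below threshold the coarse stage can lock onto the wrong zero of g(u), which is the failure mechanism described above.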


Figure 3A-1 ML cost function (3A.3) and its derivative1 for SNR = 15 dB and M = 160. (Plot: normalized cost function and its derivative versus ωoT.)

Appendix 3B: Phase Probability Density Function for Sine Wave in AWGN

The probability density function for the phase of a sine wave immersed in additive white Gaussian noise2 can be developed based on the simple signal model given by

s(t) = cos(ωo t + θ) + nI(t) cos(ωo t) − nQ(t) sin(ωo t)    (3B.1)

Figure 3B-1 Sample cloud associated with unit-amplitude sine wave3 in AWGN with an SNR of 10 dB. (Scatter plot: quad-phase (y) versus in-phase (x).)

1 Book CD:\Ch3\u13001_freq_estimator1.m.
2 Also see Section 1.4.
3 Book CD:\Ch3\u13003_sinewave_in_awgn.m.


in which θ is an unknown but deterministic phase angle, and nI(t) and nQ(t) represent narrowband Gaussian noise processes each having variance σ². Taking θ to be zero without loss of generality, the probability density function for the in-phase and quadrature-phase values of an individual time-sample of (3B.1) is given by

p(x, y) = [1/(2πσ²)] exp{ −[ (x − 1)² + y² ] / (2σ²) }    (3B.2)

This rectangular form can be written in polar form as

p(r, φ) = [r/(2πσ²)] exp{ [cos²(φ) − 1] / (2σ²) } exp{ −[r − cos(φ)]² / (2σ²) }    (3B.3)

in which

r = √(x² + y²),  φ = tan⁻¹(y/x)    (3B.4)

The dependence on r can be integrated out, leaving the probability density function of φ alone as

p(φ, ρ) = [exp(−ρ)/(2π)] { 1 + √(πρ) cos(φ) exp[ρ cos²(φ)] [ 1 + erf( √ρ cos(φ) ) ] }    (3B.5)

in which the signal-to-noise ratio is given by ρ = 1/(2σ²). A vectorial representation of (3B.1) is shown in Figure 3B-1 for a signal-to-noise ratio of 10 dB. The error-function erf(u) is defined as

erf(u) = (2/√π) ∫_0^u exp(−v²) dv    (3B.6)

The probability density function (3B.5) is plotted for several values of ρ in Figure 3B-2. It is a simple matter to verify the validity of (3B.5) numerically by computing a large number of samples and using histogram methods to estimate the associated probability density function. These computations are done for ρ = 12 dB in Figure 3B-3 and excellent agreement is apparent.

Figure 3B-2 Phase probability density function for sine wave in AWGN using (3B.5). (Plot: probability density versus phase φ for ρ = −5, 0, 5, 10, and 15 dB.)

Figure 3B-3 Closed-form result versus computed histogram result4 for ρ = 12 dB. (Plot: probability density versus phase in radians.)

4 Book CD:\Ch3\u13003_sinewave_in_awgn.m for 10⁷ points.

CHAPTER 4 Noise in PLL-Based Systems

4.1 INTRODUCTION

Noise plays a major role in the design of high-performance PLL-based systems. Indeed, it is not unusual to encounter digital designs that operate unexpectedly unless random noise is present, owing to underlying limit-cycle behavior that the noise would otherwise suppress. Many varieties of noise exist in modern systems owing to the diverse means through which it is created. These range from traditional transistor-level circuits to ∆-Σ fractional-N phase-locked loops, analog-to-digital converters, switched-capacitor filters, and direct digital synthesizers, to name but a few.

It is only fitting that this chapter be devoted to a detailed look at noise phenomena given that the balance of this text is largely concerned with minimizing the effects of noise on system performance. Foundational material pertaining to the origin, characterization, terminology, definition, and simulation of noise is provided in this chapter. A number of the topics, notably the material on power spectral density in Sections 4.3 and 4.4, and the mechanisms through which noise is impressed on time and frequency control systems discussed in Section 4.7, are central to understanding much of the other material in this text. System performance ramifications due to noise are deferred to Chapter 5.

4.2 SOURCES OF NOISE

Noise is best described as a signal that is relatively unpredictable over a specific observation time period. Some reliance on probability theory as it applies to stochastic processes1 is necessary in order to characterize and analyze noise. A brief mathematical review of these processes is provided in Appendix 4A. In this section, the primary noise sources that arise in time and frequency control systems are described in some detail.

4.2.1 Semiconductor Noise Sources

There are five primary types of noise mechanisms2 that arise in semiconductor characterization and modeling, as discussed in the sections that follow. Other noise sources also exist, such as hot-electron noise, avalanche noise like that produced in Zener diodes, and quantum 1/f noise, but these will not be addressed here. These primary noise sources are usually combined to construct noise models for

1 Random functions denoted by x(t, ξ), where t represents time and ξ denotes the time function that is randomly chosen from the ensemble of possible time functions [1].
2 J.R. Hellums, University of Texas-Dallas, class notes EE7331, Fall 2004. http://www.utdallas.edu/~hellums/docs/EE7331/Fall2004/ThermalNoise.pdf.


larger macro-devices such as operational amplifiers and oscillators. A transistor-level example is provided for the bipolar transistor in Section 4.2.1.6. Active research continues as new devices are developed and feature sizes are reduced. Van der Ziel [1] develops the semiconductor noise subject in much greater detail, and a description of the noise modeling methods used in SPICE can be found in [2].

4.2.1.1 Thermal Noise

Thermal noise was first measured by J.B. Johnson at Bell Labs in 1928 [3]. His findings were used by Harry Nyquist (also of Bell Labs) to construct a thermodynamic model that fit the laboratory observations. Consequently, thermal noise is also referred to as Johnson noise or Nyquist noise. Thermal noise is present in all conductors3 that have a temperature above absolute zero. Thermal noise results from the Brownian motion of electrons due to temperature, and it is the ultimate factor that limits the achievable noise floor of highly sensitive systems. This random motion of free electrons within a conductor creates an equivalent open-circuit voltage across the ends of the conductor that can be characterized by a stochastic random variable. Arguments posed by Nyquist [4] and van der Ziel [1] reveal that this voltage has a Gaussian distribution, and that its mean-square value is given by

E{en²} = 4 kB TA R ∫_{f1}^{f2} ( hf / kBTA ) / [ exp( hf / kBTA ) − 1 ] df    (4.1)

where
  kB  Boltzmann's constant, 1.38 × 10⁻²³ joules/Kelvin
  TA  Resistor's ambient temperature, Kelvin
  R   Resistance in ohms
  h   Planck's constant, 6.62 × 10⁻³⁴ joules/sec

and E represents statistical expectation. This relationship is intimately involved with an area of physics known as black body radiation [5]. For frequencies of interest here ( f << kBTA/h ), the integrand in (4.1) is essentially unity, and the mean-square voltage reduces to the familiar flat-spectrum result of 4kBTAR times the measurement bandwidth.

4.2.1.2 Shot Noise

Shot noise arises from the discrete nature of the charge carriers that cross a potential barrier such as a pn-junction; since a very large number of independent carriers are involved, the amplitude statistics are Gaussian.6 For an ideal pn-junction diode,7 the classical diode equation is given by

I = Is [ exp( qVd / (kBTo) ) − 1 ]    (4.4)

5 "Shot Noise," J.R. Hellums, University of Texas-Dallas, class notes EE7331, Fall 2004. http://ftp.utdallas.edu/~hellums/docs/EE7331/Fall2004/ShotNoise.pdf.
6 See Section 3.18.
7 q = 1.60 × 10⁻¹⁹ coul.


where Is is a semiconductor junction parameter known as the saturation current and Vd is the external voltage applied across the diode. The small-signal conductance can be found by differentiating (4.4) with respect to Vd, resulting in

gd = q I / (kBTo)    (4.5)

This result can be used along with (4.3) to create the small-signal equivalent noise model shown in Figure 4-2 for a pn-junction diode, where the noise current in has a power spectral density given by (4.3) and a corresponding rms value given by

√( E[ (I − ⟨I⟩)² ] ) = √( 2 q ⟨I⟩ Bn )  A rms    (4.6)

where Bn is the equivalent noise bandwidth of the system under consideration.

Figure 4-2 Ideal pn-junction diode (a) and its small-signal noise-equivalent circuit (b): a noiseless Norton-equivalent conductance gd given by (4.5) in parallel with the shot noise current source in, whose standard deviation is given by (4.6).

4.2.1.3 Flicker (1/f ) Noise

Flicker noise is a noise phenomenon that can be found throughout nature, from the daily height of the Nile river and the music of Bach and the Beatles [7] to the financial markets. In fact, the 1/f noise process provides a remarkably good starting point for stochastic music composition [8], [9]. The term 1/f noise applies to the shape of the power spectral density with respect to frequency for the observed noise rather than to a specific underlying physical mechanism or process. Flicker noise is present in all active devices and some passive devices.

The origins of flicker noise in semiconductors are wide-ranging. In bipolar transistors, it is primarily caused by carrier traps associated with crystalline defects or contamination within the emitter-base depletion region [10]. In MOSFET devices, there is no universally accepted model for 1/f noise, but two primary schools of thought have emerged, and physical evidence has been observed for both models. In the McWhorter model [11], the 1/f noise is attributed to the random trapping and detrapping of charge carriers with different relaxation times near the silicon-insulator interface within the device. The Hooge model [12] bases the origin of 1/f noise on carrier scattering that occurs within the device due to lattice vibrations. The carrier mobility in the bulk-silicon material is assumed to fluctuate, making the Hooge model a volumetric effect.

Flicker noise is always associated with the flow of direct current and exhibits a power spectral density of the form

S1/f( f ) = K1 ⟨I⟩^a / f^b  A²/Hz    (4.7)

where
  K1   device-dependent constant
  ⟨I⟩  direct current, A
  f    frequency of interest, Hz
  a    device-dependent constant, normally within the range of 0.50 to 2
  b    device-dependent constant ≈ 1
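The 1/f^b shape of (4.7) is commonly synthesized by spectrally shaping white noise; the Python sketch below illustrates that approach (the text's own recommended techniques appear in Appendix 4B, and the parameters here are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(9)

# Synthesize 1/f^b noise by shaping the spectrum of white Gaussian noise:
# scaling the amplitude spectrum by f^(-b/2) gives power ~ f^(-b).
def one_over_f(n, b=1.0):
    W = np.fft.rfft(rng.standard_normal(n))
    f = np.fft.rfftfreq(n)
    f[0] = f[1]                      # avoid dividing by zero at dc
    W *= f**(-b / 2.0)
    return np.fft.irfft(W, n)

n = 1 << 16
x = one_over_f(n)

# sanity check: average power a decade apart should differ by ~10x for b = 1
X = np.abs(np.fft.rfft(x))**2
f = np.fft.rfftfreq(n)
p_lo = X[(f > 0.001) & (f < 0.002)].mean()
p_hi = X[(f > 0.01) & (f < 0.02)].mean()
ratio = p_lo / p_hi                  # ~10 per decade
```

FFT shaping yields one finite-length realization at a time; filter-cascade methods (as covered in Appendix 4B of the text) are better suited to streaming simulation.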

The amplitude distribution of flicker noise is frequently non-Gaussian. Owing to the ubiquity of 1/f noise and the difficulty of accurately simulating it, several numerical techniques for creating 1/f noise are provided in Appendix 4B.

4.2.1.4 Generation-Recombination Noise

Generation-recombination noise is a natural part of all semiconductor behavior [13] that occurs whenever free charge carriers are generated and recombine in a semiconductor material. The effects at room temperature are normally very small. The noise does not appear if there is no direct current present, but the noise is not produced by the current. Since many carriers are involved in this process, the underlying amplitude distribution is Gaussian. It is a low-frequency noise phenomenon having a Lorentzian power spectral density with one primary characterization parameter τ, as given by

Sgr( f ) = K2 ⟨I⟩² τ / [ 1 + (2πf τ)² ]  A²/Hz    (4.8)

where
  K2   device-dependent constant
  ⟨I⟩  direct current, A
  f    frequency of interest, Hz
  τ    device-dependent time constant

4.2.1.5 Burst (Popcorn) Noise

Burst noise or popcorn noise [10] is also referred to as the random telegraph signal (RTS) in some literature. It results from a special kind of generation-recombination effect that is related to the presence of heavy-metal ion contamination within a semiconductor. Gold-doped devices exhibit high levels of burst noise, for example. The spectral density of burst noise has the form given by

Spop( f ) = K3 ⟨I⟩^c / [ 1 + ( f / fbc )² ]  A²/Hz    (4.9)

where
  K3   device-dependent constant
  ⟨I⟩  direct current, A
  f    frequency of interest, Hz
  c    device-dependent constant, normally within the range of 0.50 to 2
  fbc  frequency corner for a particular noise process


It is possible that more than one burst noise process is present within a device, each process having its own characteristic parameters. The amplitude distribution of the noise is generally non-Gaussian. The RTS burst noise variety appears when only a few carrier traps are involved that control the flow of a large number of other charge carriers, thereby making it possible for discrete current states to appear [14]. An example RTS signal is shown in Figure 4-3, where the time-intervals between state-changes are Poisson-distributed.

Figure 4-3 Example8 of burst noise with Poisson-distributed time intervals. (Plot: noise value versus time.)
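A minimal RTS generator in the spirit of Figure 4-3 follows (a Python stand-in for the book's CD script, with arbitrary switching rate and levels): exponentially distributed dwell times produce Poisson-distributed state changes between two discrete levels:

```python
import numpy as np

rng = np.random.default_rng(4)

# Random telegraph signal: two discrete levels with exponentially
# distributed dwell times, so state changes form a Poisson process.
# Rate and levels are illustrative assumptions.
def rts(n_samples, rate=0.02, levels=(0.0, 1.0)):
    out = np.empty(n_samples)
    state, t = 0, 0
    while t < n_samples:
        dwell = max(1, int(rng.exponential(1.0 / rate)))  # samples in state
        out[t:t + dwell] = levels[state]
        state ^= 1                                        # toggle level
        t += dwell
    return out

x = rts(20000)   # ~400 state changes at rate 0.02 per sample
```

Superimposing a few such processes with different rates and level pairs models a device containing several independent carrier traps.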

4.2.1.6 SPICE Bipolar Transistor Noise Model

The foregoing elementary noise models are sufficient to describe the small-signal noise behavior of most semiconductor devices reasonably well. The IEEE established a notation standard for electronic device noise in 1964 to help with device modeling activities [15], but more free-form notation is frequently seen in the literature as device model complexity has evolved.

SPICE and similar circuit simulation tools use a noise-modeling schematic for the bipolar transistor like that shown in Figure 4-4 [2], [16]. All of the resistances shown in the figure are noise-free because their equivalent noise mechanisms have been explicitly shown in their Norton-equivalent form. The distinction between the uppercase and lowercase resistance nomenclature is that RB, RC, and RE are physical resistances that are present within the transistor, whereas the resistances shown in lowercase are not. The mean-square spectral density values for each noise source are provided in Table 4-1. Currents IB and IC are the dc base and collector currents, respectively. Aside from the thermal- and shot-noise contributions in these formulas, parameters associated with the other contributions must be determined as part of the device modeling effort.

The noise modeling complexity involved for even a single transistor is substantial. In order to condense the noise modeling information into a more concise form, equivalent noise modeling is used extensively in this text (e.g., Section 4.7.2). The noise model shown in Figure 4-4 can be reduced to the simple modeling shown in Figure 4-5. Transistor-level noise modeling for other semiconductor devices can be found in [2]. Equivalent noise modeling like this can be used at any level of abstraction.

8 Book CD:\Ch4\u13071_rts_example.m.

Figure 4-4 SPICE-like small-signal noise model for bipolar transistor. After: [16]. (Schematic: intrinsic transistor with physical resistances RB, RC, RE; small-signal elements rBE, rBC, rCE, CBE, CBC; and noise current sources inRB, inB, inRC, inC, inRE.)

Table 4-1

Bipolar Transistor Noise-Modeling Details for Figure 4-4

  Symbol   Mean-Square Value, A²/Hz                                                Noise Type
  inRB     i²nRB = 4 kB To / RB                                           (4.10)   Thermal
  inB      i²nB = 2 q IB + K1 IB^a / f^b + K4 IB^c [ 1 + ( f/Fgr )² ]⁻¹   (4.11)   Shot + Flicker + Generation-Recombination
  inRC     i²nRC = 4 kB To / RC                                           (4.12)   Thermal
  inC      i²nC = 2 q IC                                                  (4.13)   Shot
  inRE     i²nRE = 4 kB To / RE                                           (4.14)   Thermal

Figure 4-5 Transistor noise modeling using equivalent noise voltage (vneq) and noise current (ineq) sources driving a noiseless transistor. Noise sources are correlated in general.


4.2.2 Quantization Noise

The extensive use of digital devices and techniques in modern systems demands a thorough understanding of these noise mechanisms as well. Several of the most important digital noise contributors are discussed in this section.

4.2.2.1 DACs and ADCs

Digital-to-analog converters (DAC) and analog-to-digital converters (ADC) provide the interface between the continuous-time analog and the discrete-time quantized signal domains. The effects of finite sampling-frequency are manifested by the Nyquist sampling theorem (Section 3.12) and the Poisson Sum formula (Section 3.15). The degradation effects from sampling clock-jitter are developed separately in Section 5.9.

The quantization process is inherently noisy. Small signal conversion errors are unavoidable because the quantization process involves mapping a continuous set of signal values into a finite set of digital-word representations. Converter quantization noise is discussed separately in Section 4.4.2. The conversion process is further hampered by hardware imperfections including nonlinearities and hysteresis memory effects. This section provides an overview of the most important converter terminology that affects time and frequency control systems.

Converter Terminology

Converters are an inescapable source of noise, ranging from quantization at a minimum to the more complicated noise-like effects caused by converter imperfections, as discussed here. Converter action is normally triggered by an externally provided precision clock. Any effective time-jitter of this clock, whether due to internal converter noise mechanisms or caused by a spectrally inferior external clock, creates additional noise-like effects that are discussed separately in Chapter 5.

Converters generally use a straight-binary or 2's complement digital representation for their associated analog input/output values. The mapping is referred to as a midtread configuration if the transfer function characteristic near zero is as shown in Figure 8-15, or as a midriser configuration if the behavior matches Figure 8-16. In feedback control systems that seek to drive an error quantity to zero, the midriser configuration should be used in order to avoid introducing a dead-zone (and associated limit-cycle behavior) near zero.

Converters that provide or accept analog outputs/inputs at the same rate as their clock frequency are known as Nyquist converters. These converters ideally support signal bandwidths up to the Nyquist rate Fs/2, where Fs is the converter clock frequency being used. In contrast, ∆-Σ converters that make use of oversampling can only provide proper conversion over a smaller fraction of Fs in bandwidth. These converters have nonetheless taken center stage as integration levels and available clock frequencies have increased, because their inherent nature makes it possible to realize much smaller quantization levels, reduced silicon die area, and lower power consumption in many applications compared to their Nyquist converter counterparts. The ∆-Σ converters create colored noise spectra like that discussed in Chapter 8 in the context of fractional-N frequency synthesis.

Nyquist converter terminology most germane to time and frequency control systems is discussed in the paragraphs that follow. It is assumed that these converters ideally exhibit a linear relationship between the converter's digital codes and their associated analog signal values. In order to minimize confusion, the terminology will be discussed in terms of a DAC with the understanding that similar definitions apply for ADCs as well.
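The dead-zone issue is easy to visualize with two toy quantizers (an illustrative sketch; the text's actual transfer characteristics are the ones shown in Figures 8-15 and 8-16):

```python
import numpy as np

# Midtread vs. midriser uniform quantizers with step size q.  In a
# feedback loop, small errors near zero vanish through a midtread
# quantizer (dead zone) but keep their sign through a midriser one.
def midtread(x, q):
    return q * np.round(x / q)           # a code level sits at zero

def midriser(x, q):
    return q * (np.floor(x / q) + 0.5)   # a decision threshold sits at zero

q = 0.25
e = np.linspace(-0.1, 0.1, 5)            # small error signals near zero
yt = midtread(e, q)                      # all map to 0: the loop sees nothing
yr = midriser(e, q)                      # maps to +/- q/2: drive is preserved
```

This is precisely why the midriser configuration is preferred in error-nulling feedback loops: the loop always receives a nonzero corrective output, avoiding the dead-zone limit cycles described above.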

Noise in PLL-Based Systems

117

To facilitate these discussions, assume that the converter's actual analog output voltage range spans from Vmin to Vmax corresponding to a digital representation range of 0 to 2^N − 1, where N is the number of bits in each DAC digital word. Assume further that the voltage step transitions occur9 at Vk for 0 ≤ k ≤ 2^N − 1. A detailed diagram showing the DAC's transfer characteristics is shown in Figure 4-6. This information makes it possible to define the following DAC performance characteristics:

[Figure 4-6 Accentuated errors illustrating INL and DNL for a 3-bit DAC output. The plot shows the eight codewords 000 through 111 (horizontal axis, DAC Codeword Value) versus analog output, annotated with the DNL, the INL relative to a best straight-line fit, the offset error, and a nonmonotonic code transition.]

Differential Nonlinearity (DNL)—the normalized error between any two adjacent converter codes with respect to an ideal LSB based on the full-scale range of the converter. The DNL must always be less than 1 LSB in order for the converter to exhibit monotonic behavior.

Integral Nonlinearity (INL)—the normalized error between the actual converter output value and the best straight-line curve-fit through all of the converter's codes, with respect to an ideal LSB based on the straight-line fit.

Signal-to-Noise Ratio (SNR)—the ratio of desired signal power to noise (less harmonics) power at the converter output for an applied full-scale sine wave, normally expressed in dB. For an ideal DAC, the SNR is given by 1.76 + 6.02N dB, where N is the number of bits in each digital codeword used.

Signal-to-Noise-plus-Distortion (SNDR)—the ratio of desired signal power to noise-plus-distortion power, including any dc offset that may be present, up to the Nyquist frequency for a full-scale sine wave output. Normally expressed in dB.

Total Harmonic Distortion (THD)—measured in the frequency domain for a full-scale sine wave output, as a decibel ratio between the desired sine wave amplitude and the root-mean-square sum of selected harmonics present in the DAC output.

Effective Number of Bits (ENOB)—calculated from the SNDR for a full-scale sine wave as

ENOB = (SNDRdB − 1.76) / 6.02  bits    (4.15)

9 In the ADC case, transition voltages for each digital code will exhibit some variability due to internal noise and imperfections even in static testing. The average transition voltage should be determined in these cases corresponding to the point where the likelihood that the converter reports code D versus code D + 1 is equal.


This result follows directly from (4.30) by assuming that the DAC quantization errors are statistically independent and uniformly distributed, resulting in a noise variance of ∆²/12, where ∆ corresponds to a DAC LSB-step.
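The ideal SNR and the ENOB relationship in (4.15) can be checked numerically. The sketch below (an illustrative construction with arbitrarily chosen parameters, not from the text) quantizes a near-full-scale sine wave with an ideal N-bit midriser quantizer and measures the resulting SNR, which lands close to 1.76 + 6.02N dB, consistent with the ∆²/12 noise variance:

```python
import math

def ideal_quantizer_snr_db(nbits, nsamp=100000):
    """Quantize a near-full-scale sine with an ideal nbits midriser
    quantizer and return the measured SNR in dB."""
    step = 2.0 / (2 ** nbits)            # LSB for a -1..+1 full-scale range
    amp = 1.0 - step / 2                 # keep the peaks just inside full scale
    f = 0.1234567                        # incommensurate tone, cycles/sample
    sig_pow = noise_pow = 0.0
    for n in range(nsamp):
        x = amp * math.sin(2 * math.pi * f * n)
        q = step * (math.floor(x / step) + 0.5)   # ideal midriser quantizer
        sig_pow += x * x
        noise_pow += (q - x) ** 2
    return 10 * math.log10(sig_pow / noise_pow)

def enob(sndr_db):
    return (sndr_db - 1.76) / 6.02       # (4.15)

snr8 = ideal_quantizer_snr_db(8)
print(round(snr8, 1), round(enob(snr8), 1))   # near 49.9 dB and 8 bits
```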

Intermodulation Distortion (IMD)—results from the nonharmonically related nonlinear distortion products that arise between two or more sine waves. This is normally measured for a pair of output sine waves situated at nonharmonic frequencies.

More extensive information regarding ADC and DAC specifications and performance can be obtained from device manufacturers. Information regarding ∆-Σ converters can be found in Chapter 8 and in [17]. Some of the major issues involved with DACs that are used in direct-digital synthesizers are discussed in Appendix 4D.

4.2.2.2 Direct Digital Synthesizers (DDS)

In the 1980s, direct digital synthesis was a popular means to create high-performance frequency sources that exhibited high agility and extremely small frequency steps. The rapid evolution of ∆-Σ fractional-N techniques curtailed their popularity in the late 1990s, however. DDS techniques are still used extensively, in mathematical form, within baseband digital signal processing algorithms. DDS noise and spurious issues are separately discussed in Appendix 4D.

4.2.2.3 ∆-Σ Fractional-N Synthesizers

∆-Σ fractional-N frequency synthesizers make use of digital noise-shaping techniques in order to synthesize arbitrarily fine frequency steps. This subject is developed at length in Chapter 8.

4.2.3 Other Sources of Noise

The complexity of modern systems, combined with their unprecedented levels of integration, introduces many other potential noise problems in system design. Extensive digital processing, switching dc-to-dc power supplies, and switched-capacitor mixed-signal techniques are but a few of the many noise sources that may have to be considered. Wherever possible, switching or sampling frequencies should be made commensurate with the precision system clock in order to avoid aliasing problems as predicted by the Poisson Sum formula (1.11). Noise sources that exhibit abrupt edges, like those discussed in Section 4.5.1, are especially problematic due to their high-frequency spectral content. A cascade of one or more RC lowpass sections can be very effective in reducing such high-frequency content, and this preventative circuit design measure is highly recommended.

4.3 POWER SPECTRAL DENSITY CONCEPT FOR CONTINUOUS-TIME STOCHASTIC SIGNALS

The frequency content of a specific deterministic signal can be found through direct application of the Fourier transform. This frequency-domain analysis is considerably more involved if the signal contains random features, however. The spectral nature of random signals is normally described in terms of their power spectral density (PSD). The power spectral density of a stochastic signal x(t) is specifically defined in this text as

Sx( f ) = lim(Tm→∞) (1/Tm) E[ |XTm( f )|² ]    (4.16)

where the finite-time Fourier transform of x(t) is given by

XTm( f ) = ∫_{−Tm/2}^{+Tm/2} x(v) e^{−j2πf v} dv    (4.17)

Strictly speaking, x(t) could be a function that is drawn from an ensemble of random functions denoted by x(t, ζ ) in which ζ refers to a specific outcome within the possible event space (Appendix 4A), but this level of formality is omitted in this discussion. E represents the statistical (or ensemble) expectation [18] operation from probability theory, but as argued momentarily, this mathematical operation will ultimately be replaced by a time-domain averaging operation. Although the definition of PSD in (4.16) involves statistical expectation, the basic operation of a classical spectrum analyzer provides a clear indication that the PSD can be obtained (for most signals encountered in everyday practice) by exclusively using time-averages rather than ensemble averages. This perspective has been revived in several popular textbooks [19]–[21] on this subject. This viewpoint is advantageous since the theory of random processes based on time-averages is considerably more developed than the theory based on statistical ensemble-averages. Time-averages represent the physical behavior of most communication systems more closely as well. Time-average-based concepts have historically been known as generalized harmonic analysis and were first explored extensively by Wiener. A random process is said to be ergodic if all of the statistics from the ensemble of possible signals can be determined using only a single (time-domain) member of the ensemble, in which case ensemble averaging can be replaced with time-averaging [21]. Ergodicity also requires that the time-domain signal exhibit stationary behavior up through its fourth-order moments. Only ergodic stochastic processes are considered in this text, which means that the expectation operator E implies a time-averaging operation. The more detailed interpretation of (4.16) when time-averaging is invoked for the expectation operation is given by

Sx( f ) = lim(Tm→∞) lim(U→∞) (1/U) ∫_{−U/2}^{+U/2} (1/Tm) |XTm(u, f )|² du    (4.18)

where the Fourier transform of x(t) in (4.17) has been slightly modified to

XTm(u, f ) = ∫_{u−Tm/2}^{u+Tm/2} x(t) e^{−j2πf t} dt    (4.19)

In general, the order of the limits in (4.18) cannot be interchanged. Furthermore, it can be shown that Sx( f ) will remain highly erratic and inconsistent if only the limit over time (Tm) is taken. Both limits must be taken in the order shown if a reliable PSD result is to be obtained. In the limit when x(t) is a wide-sense stationary random process having an autocorrelation function Rx(τ ), the spectrum is given by the Wiener-Khintchine theorem10 as

Sx( f ) = ∫_{−∞}^{+∞} Rx(τ ) e^{−j2πf τ} dτ    (4.20)

In this context, Rx(τ ) and Sx( f ) are Fourier transform pairs.

10 See Appendix 4A.
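The time-average PSD definition can be exercised with a short simulation (all parameter choices below are illustrative, with independent records standing in for the sliding average in (4.18)). For white Gaussian noise of variance σ², averaging the per-record quantity (1/Tm)|XTm( f )|² flattens out at the two-sided PSD σ²·Ts:

```python
import cmath, math, random

def averaged_periodogram(blocks, Ts=1.0):
    """PSD estimate per (4.16)/(4.18): average (1/Tm)|X_Tm(f)|^2 over many
    finite-time records, with X_Tm evaluated as a DFT and Tm = N*Ts."""
    N = len(blocks[0])
    w = [cmath.exp(-2j * math.pi * m / N) for m in range(N)]  # DFT twiddles
    S = [0.0] * N
    for blk in blocks:
        for k in range(N):
            X = Ts * sum(blk[n] * w[(k * n) % N] for n in range(N))
            S[k] += abs(X) ** 2 / (N * Ts)    # (1/Tm)|X_Tm|^2
    return [s / len(blocks) for s in S]

random.seed(1)
sigma2 = 2.0                                  # white-noise variance
blocks = [[random.gauss(0.0, math.sqrt(sigma2)) for _ in range(32)]
          for _ in range(400)]
S = averaged_periodogram(blocks)
print(sum(S) / len(S))                        # settles toward sigma2 (flat PSD)
```

A single record, by contrast, stays erratic no matter how long it is, which is the point made about the limit ordering in (4.18).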

4.4 POWER SPECTRAL DENSITY FOR DISCRETE-TIME SAMPLED SYSTEMS

Virtually all simulation work is done using discrete-time sampled-system analysis. Hardware implementations are increasingly digital in nature as well. This section develops the power spectral density concept for time-sampled systems in some detail. Consider a deterministic continuous-time causal signal x(t) whose Fourier transform is given by

X( f ) = ∫_{−∞}^{+∞} x(t) e^{−j2πf t} dt    (4.21)

If the signal is time-sampled every Ts seconds, let the sample sequence for time tn = nTs be represented by xn = x(nTs) with

x(nTs) = ∫_{−∞}^{+∞} X( f ) e^{j2πf nTs} df    (4.22)

Also let yn = xn for notational convenience. The z-transform of yn is given by

Y(z) = Σ_{n=0}^{∞} yn z^{−n}    (4.23)

Following the development in Section 7.2, since yn and x(nTs) are assumed to be identical sample sequences, it must be true that

Y( e^{j2πf Ts} ) = (1/Ts) Σ_{r=−∞}^{+∞} X( f + r/Ts )    (4.24)

This result is known as the Poisson Sum formula. If the oversampling rate is sufficiently high, the higher-order r-terms in (4.24) will be negligible for the frequency range of interest and the discrete Fourier transform (DFT) represented by Y can be approximated by

Y( e^{j2πf Ts} ) ≈ (1/Ts) X( f )  for | f Ts | < 1/2    (4.25)

This convention then requires a slight modification to the normal discrete-time Fourier transform pair as

Yk = Ts Σ_{n=0}^{N−1} yn e^{−j2π(kn/N)}    (4.26)

yn = (1/(N Ts)) Σ_{k=0}^{N−1} Yk e^{j2π(kn/N)}    (4.27)


where Yk corresponds to (4.25) evaluated at f = k/(N Ts). In order to avoid confusion by redefining how the DFT is normally calculated, Ts is separately carried explicitly in the equations that follow. Thus far, x(t) has been assumed to be a deterministic signal, but this restriction will now be removed. Assume that x(t) is a wide-sense stationary ergodic random signal. This assumption allows the ensuing discussions to focus entirely on time-averaged statistical properties rather than having to deal with ensemble-averaged quantities. Returning to (4.16), and making use of the approximation in (4.25) along with (4.26), the power spectral density for the time-sampled signal x(t) can be approximated by

Sx( f ) ≈ lim(M→∞) 1/((2M+1)Ts) E[ | Ts Σ_{n=−M}^{M} xn e^{−j2πf nTs} |² ]    (4.28)

Expanding the squared-magnitude and interchanging the order of expectation (averaging) and summation produces

Sx( f ) = lim(M→∞) [ Ts/(2M+1) Σ_{m=−M}^{M} Σ_{n=−M}^{M} Rx(m − n) e^{−j2πf (m−n)Ts} ]    (4.29)

where Rx(k) is the autocorrelation function of the xn samples. Since Rx corresponds to the autocorrelation function of an ergodic wide-sense stationary random process, it only depends on the index differences (m − n) and (4.29) can be simplified to

Sx( f ) = lim(M→∞) [ Ts/(2M+1) Σ_{m=−2M}^{2M} (2M + 1 − |m|) Rx(m) e^{−j2πf mTs} ]
        = Ts Σ_{m=−∞}^{+∞} Rx(m) e^{−j2πf mTs}    (4.30)

As shown here, the PSD is simply the scaled DFT of the autocorrelation function Rx(m) for the time-sampled signal xn = x(nTs). This amounts to a restatement of the Wiener-Khintchine theorem for time-sampled wide-sense stationary random signals. In situations where the sample sequence xn is applied to an analog zero-order sample-and-hold in order to convert the sample-sequence xn back to continuous-time, (4.30) must be augmented with the magnitude-squared transfer function of the sample-and-hold, which results in the two-sided power spectral density given by

Sxc( f ) = 2Ts [ sin(πf Ts)/(πf Ts) ]² [ Rx(0)/2 + Σ_{m=1}^{∞} Rx(m) cos(2πf mTs) ]    (4.31)

This result will be used to compute the output PSD for a DAC in Section 4.4.2.
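Both (4.30) and (4.31) are easy to evaluate numerically. The sketch below (illustrative parameters; the exponential autocorrelation is a stand-in example) checks the truncated sum in (4.30) against its closed form and confirms that (4.31) is simply (4.30) shaped by the |sinc|² of the hold:

```python
import math

def psd_sampled(Rx, f, Ts, M=400):
    """Two-sided PSD of the sample sequence via (4.30), truncated at |m| <= M."""
    s = Rx(0)
    for m in range(1, M + 1):
        s += 2.0 * Rx(m) * math.cos(2 * math.pi * f * m * Ts)  # Rx(m) is even
    return Ts * s

def psd_after_zoh(Rx, f, Ts, M=400):
    """Two-sided PSD after a zero-order sample-and-hold, per (4.31)."""
    u = math.pi * f * Ts
    sinc2 = 1.0 if u == 0.0 else (math.sin(u) / u) ** 2
    s = Rx(0) / 2.0 + sum(Rx(m) * math.cos(2 * math.pi * f * m * Ts)
                          for m in range(1, M + 1))
    return 2.0 * Ts * sinc2 * s

# Exponential autocorrelation Rx(m) = a^|m| has the closed-form PSD
# Ts*(1 - a^2)/(1 - 2a*cos(2*pi*f*Ts) + a^2), checking (4.30) directly.
a, Ts, f = 0.8, 1.0, 0.1
Rx = lambda m: a ** abs(m)
closed = Ts * (1 - a * a) / (1 - 2 * a * math.cos(2 * math.pi * f * Ts) + a * a)
print(abs(psd_sampled(Rx, f, Ts) - closed) < 1e-9)   # True
```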

4.4.1 Example Results for Time-Sampled Noise

A simple example is presented here to further illustrate these important concepts. Assume that additive white Gaussian noise (AWGN) is passed through a second-order Butterworth lowpass filter and a computer simulation is to be done using discrete-time. The AWGN source can be modeled as a random Gaussian source having a sample variance of No Fs/2, where No is the one-sided noise power spectral density and Fs is the sampling rate, as developed in Appendix 4B. The Butterworth filtering is applied using a digital filter as shown in Figure 4-7.


[Figure 4-7 Simplistic model for simulating continuous-time lowpass-filtered AWGN using discrete-time simulation. A discrete Gaussian random number generator with variance σg² = No Fs/2 drives a digital filter H(z) to produce the filtered noise.]

In order to convert the Butterworth lowpass filter into its digital equivalent form, the impulse-invariant method is used here. This entails starting with the Laplace transform for the second-order lowpass filter which is given by11

H(s) = ωn² / ( s² + 2ζωn s + ωn² )    (4.32)

and first finding its respective inverse Laplace transform as

h(t) = [ ωn/√(1 − ζ²) ] e^{−ζωn t} sin( ωn √(1 − ζ²) t )
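The impulse-invariant conversion just described can be sketched as follows (parameter values are illustrative). Each s-plane pole maps to exp(sTs) in the z-plane, and the resulting recursion reproduces Ts·h(nTs) sample-for-sample:

```python
import math

def impulse_invariant_sos(fn, zeta, Ts):
    """Impulse-invariant equivalent of H(s) = wn^2/(s^2 + 2*zeta*wn*s + wn^2)
    with zeta < 1 (4.32).  Returns (b1, a1, a2) for the recursion
    y[k] = b1*x[k-1] + a1*y[k-1] + a2*y[k-2]."""
    wn = 2.0 * math.pi * fn
    a = zeta * wn                          # real part of the s-plane poles
    b = wn * math.sqrt(1.0 - zeta ** 2)    # imaginary part
    p = math.exp(-a * Ts)                  # pole radius after z = exp(s*Ts)
    amp = wn / math.sqrt(1.0 - zeta ** 2)  # amplitude factor in h(t)
    return amp * Ts * p * math.sin(b * Ts), 2.0 * p * math.cos(b * Ts), -p * p

def impulse_response(b1, a1, a2, n):
    y = []
    for k in range(n):
        yk = b1 if k == 1 else 0.0         # unit impulse applied at k = 0
        if k >= 1:
            yk += a1 * y[k - 1]
        if k >= 2:
            yk += a2 * y[k - 2]
        y.append(yk)
    return y

fn, zeta, Ts = 1.0, 0.25, 0.05
b1, a1, a2 = impulse_invariant_sos(fn, zeta, Ts)
wn = 2.0 * math.pi * fn
h = lambda t: (wn / math.sqrt(1 - zeta ** 2)
               * math.exp(-zeta * wn * t)
               * math.sin(wn * math.sqrt(1 - zeta ** 2) * t))
err = max(abs(yk - Ts * h(k * Ts))
          for k, yk in enumerate(impulse_response(b1, a1, a2, 60)))
print(err < 1e-9)   # True: digital impulse response matches Ts*h(n*Ts)
```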

for ζ < 1

Choosing f1 > 0 is synonymous with making measurements over a finite time period rather than an infinite one. This point deserves additional discussion. In (4.69), very close-in phase noise components in Sθ( f ) can dominate the result if f1 is chosen too low, or if Sθ( f ) has significant f^α behavior31 with α > −2. The situation is less severe in (4.70) and (4.77) where the f² and sin²( ) terms, respectively, act as highpass filter functions to reduce the role of close-in phase noise components. Any practical system includes mechanisms that track out slowly varying changes in carrier phase and time alignment. It is reasonable to assume that if these tracking elements have bandwidths on the order of BTr Hz, phase noise changes occurring over time intervals greater than roughly 1/BTr are probably not consequential. In order to illustrate this point, assume that the phase tracking system has a lowpass characteristic HL( f ) as shown in Figure 4-19. The tracking system effectively computes a running time-average of the phase noise process θ(t) and subtracts it from θ(t) such that the system experiences a modified phase noise process represented by θ∆(t). The power spectral density of this resultant process is given by

S∆θ( f ) = Sθ( f ) |1 − HL( f )|²    (4.78)

[Figure 4-19 Long-term average phase error removed from the random phase noise process θ(t) using a low bandwidth phase tracking system represented by HL( f ).]

The 1 − HL( f ) factor functions as a highpass filter that eliminates very low frequency phase noise spectral components.

4.6.2.1 Finite Observation Time

The expectation operation used in (4.77) and elsewhere has assumed an infinite observation time period, which is never the case in practice. When the expectation (time averaging in the present context) must be confined to a finite observation time Tm, modifications must be made to the results presented earlier. Take, for instance, the variance of a mean-zero wide-sense stationary ergodic random process x(t) [45]. This can be developed as

σx²(τ, Tm) = ⟨ x(t + τ/2) x(t − τ/2) ⟩ − ⟨ x(t) ⟩²Tm    (4.79)

σx²(τ, Tm) = E[ (1/Tm) ∫_{t−Tm/2}^{t+Tm/2} x(u + τ/2) x(u − τ/2) du ] − E[ ( (1/Tm) ∫_{t−Tm/2}^{t+Tm/2} x(u) du )² ]    (4.80)

σx²(τ, Tm) = σx² − (1/Tm²) E[ ∫_{t−Tm/2}^{t+Tm/2} ∫_{t−Tm/2}^{t+Tm/2} x(u) x(v) du dv ]    (4.81)

where the angled-brackets represent time-averaging, and the quantity Tm means that the average is to be computed over a time interval of Tm seconds. The end result is that the variance is given by

σx²(τ, Tm) = 2 ∫_{0}^{+∞} Sx( f ) [ 1 − sin²(πf Tm)/(πf Tm)² ] df    (4.82)

31 As developed in Appendix 4B, such phase noise processes may not be stationary.

where the low-frequency portion of Sx( f ) is again suppressed by the bracketed quantity. Similar calculations can be done for any other spectrum or autocorrelation measure of interest that is constrained to a finite observation time.

4.6.2.2 Synchronization

Synchronization in high-speed data communication applications presents several new issues that are largely absent in frequency synthesis applications. In data communications, phase noise performance is often spoken of in terms of a time-jitter quantity known as the unit interval. If the time duration of each data symbol is denoted by Tsym, time-jitter quantities (e.g., peak time-jitter, rms time-jitter) are expressed in terms of unit intervals by simply normalizing the quantities with respect to Tsym. Normally, data eye-openings in time and voltage are much more important in data communications than specific phase noise spectrum information. A simple example will illustrate this point further.

[Figure 4-20 Simplified bit synchronization example. Data source utilizes square-root raised-cosine pulse-shaping. Receiving end approximates the ideal matched-filter with a second-order Butterworth lowpass filter. A random ±1 data source drives the shaping filter; the receiver consists of an analog matched filter, sample-and-hold, slicer, and a control loop steering the VCO-derived sample clock.]

The eye-diagram32 of the received signal as it appears immediately before the analog matched-filter in Figure 4-20 is shown in Figure 4-21. The multiple signal trajectories are due to the bandwidth-constrained pulse-shaping that is being used and is called intersymbol-interference (ISI). This interference causes the zero-crossings between data symbols to be spread over a time-region as annotated in the figure. The presence of any VCO phase noise in the form of clock-jitter adds to the zero-crossing interference. Generally, the VCO-related clock-jitter is Gaussian-distributed whereas the ISI is not. A second-order Butterworth lowpass filter33 is used as a reasonably good approximation of the ideal matched-filter for the system, provided that its 3 dB bandwidth B is chosen such that BTsym = 0.50. The eye-diagram at the analog matched-filter output is shown in Figure 4-22. If the sample-and-hold clock is properly synchronized such that time samples are taken at the center of the data eye-opening (center of the diagram, horizontal axis value = 1.0), low variance estimates of the originating data bit values are obtained. The presence of any noise causes further blurring of the signal trajectories, however, and causes the eye-opening to become more constricted, thereby leading to more frequent bit-errors. A detailed discussion of bit synchronization is provided in Chapter 10.

[Figure 4-21 Bit synchronizer receive signal's appearance immediately prior to the matched-filter34 in Figure 4-20. The eye diagram spans two symbol units, with the zero-crossing spread annotated as time jitter.]

[Figure 4-22 Eye-diagram at the analog matched-filter output in Figure 4-20.]

32 An oscilloscope perspective where the horizontal sweep rate is synchronously tied to the exact data symbol rate and the signal voltage is displayed in the vertical dimension versus time.
33 A third-order Chebyshev filter performs slightly better.

4.6.3 Modeling Phase Noise Processes

In many projects, phase noise performance is a middle-ground discipline that must involve both system designers as well as hardware designers from the outset. The detailed performance analysis involves understanding the underlying hardware limitations as well as their impact on system performance. Phase noise computer modeling is discussed at some length in Appendix 4B in order to assist in this effort.

34 Book CD:\Ch4\u13146_eye_diagram.m.

4.7 NOISE IMPRESSION ON TIME AND FREQUENCY SOURCES

The connection between a physical noise source and a resulting phase noise spectrum contribution can be fairly obvious in some situations, such as for the oscillators discussed in Chapter 9. In other cases, like the transistor-level noise discussed in Section 4.2, the connection may be less clear. This section looks more closely at how different noise sources ultimately impress their random behavior on frequency sources.

4.7.1 Noise Equipartition with AM and PM Noise

Once additive white Gaussian noise (AWGN) is added to an ideal sine wave signal, AM and PM noise are both present. At first this statement may seem to be in error somehow, since signal addition is a completely linear operation. The (mathematical) transformation to an amplitude and phase representation is anything but linear, however, and therein lies the explanation. Stated another way, truly random noise exhibits no directional preferences. Since there are necessary sideband phasing relationships in pure AM and pure PM signals as discussed in Section 3.4, no preference toward AM versus PM noise is possible unless some kind of a priori correlation between the noise and the sine wave somehow exists. The noise power must consequently be split evenly between AM and PM contributions, and this accounts for the factor of ½ that appears in Leeson's phase noise model (9.116) as well as in the Haggai model (9.125). This equal division between AM and PM noise is known as equipartition. Its roots can be traced to statistical thermodynamics [6] and stochastic process theory.
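A quick Monte-Carlo sketch of equipartition (carrier amplitude, noise power, and seed are arbitrary choices here): adding complex AWGN to a large carrier phasor and then separating the envelope and phase fluctuations yields equal AM and PM noise powers, each one-half of the total noise power:

```python
import math, random

# Carrier phasor A plus complex AWGN of total power sigma2; measure the
# power in the envelope (AM) and phase (PM) fluctuations separately.
random.seed(7)
A, sigma2, nsamp = 100.0, 1.0, 100000       # large carrier-to-noise ratio
am_pow = pm_pow = 0.0
for _ in range(nsamp):
    nr = random.gauss(0.0, math.sqrt(sigma2 / 2))
    ni = random.gauss(0.0, math.sqrt(sigma2 / 2))
    env = math.hypot(A + nr, ni)            # instantaneous envelope
    ph = math.atan2(ni, A + nr)             # instantaneous phase, rad
    am_pow += (env - A) ** 2
    pm_pow += (A * ph) ** 2                 # phase fluctuation scaled to volts
am_pow /= nsamp
pm_pow /= nsamp
print(am_pow, pm_pow)                       # each approaches sigma2/2
```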

4.7.2 Noise in Linear Two-Port Networks

Noisy linear two-port models are commonly used to describe electrical networks over a wide range of complexity levels, ranging from simple resistor models (e.g., Figure 4-1) to complete amplifiers, oscillators, and subsystems. Normally, individual noise sources are statistically uncorrelated (independent) which, when true, greatly simplifies the ensuing analysis. A lossless LC filter example is considered here in order to illustrate several important concepts involving noise. A second example that examines the noise behavior of a PLL is discussed in Section 4.7.4.

Consider the lossless LC filter that is situated between two physical resistors RS and RL in Figure 4-23. The noise power spectral density that is available from each resistor is kBTo W/Hz. Scattering parameter (S-parameter) analysis [46], which is commonly used in linear RF and microwave design, defines the transducer gain for the lossless filter as

GT = (Power Delivered to Load) / (Power Available from Source)    (4.83)

[Figure 4-23 Lossless frequency-selective network situated between real-world source and load resistances RS and RL.]

and this is expressed in terms of S-parameters as

GT = |S21|² (1 − |ΓS|²)(1 − |ΓL|²) / | (1 − S11 ΓS)(1 − S22 ΓL) − S12 S21 ΓS ΓL |²    (4.84)

where the Sij represent the S-parameters for the lossless network, and ΓS and ΓL represent the source and load reflection coefficients, respectively. This relationship can be used to calculate the amount of noise power delivered from the source to the load and vice versa. Assume now for simplicity that the source and load impedances are perfectly matched for the system, in which case ΓS = ΓL = 0 and GT = |S21|². In terms of normal S-parameter notation, the incident, transmitted, and reflected waves for the network are as shown in Figure 4-24.

[Figure 4-24 Incident and reflected noise power terms for the lossless linear two-port model, using scattering parameter concepts. The source-to-load path shows incident wave a1 with transmitted density kBTo|S21|² and reflection S11a1; the load-to-source path shows incident wave a2 with transmitted density kBTo|S12|² and reflection b2 = S22a2.]

In order to have thermal equilibrium for the load, the total incident and emanating power spectral densities at the load must be equal, as given by

kBTo |S21|² + kBTo |S22|² = kBTo    (4.85)

Applying the same thermal equilibrium constraint at the source requires that

kBTo |S12|² + kBTo |S11|² = kBTo    (4.86)

For a passive lossless network, it is also true that [6]

[S*]T [S] = I    (4.87)

where * denotes complex conjugation, T denotes matrix transposition, and I is the identity matrix. Continuing from (4.87) produces

[S*]T [S] = | |S11|² + |S21|²      S11* S12 + S21* S22 |
            | S12* S11 + S22* S21  |S22|² + |S12|²     |    (4.88)

Putting these results together, two identities can be identified from (4.88) as

|S11|² + |S21|² = 1    (4.89)

|S22|² + |S12|² = 1    (4.90)

These equalities are a restatement of the famous Feldtkeller energy relationship for reactance two-ports that arises in modern filter design based on insertion-loss techniques [47]. Using (4.90) in (4.85) makes it possible to also conclude that |S12|² = |S21|² for the lossless network. Returning once more to (4.85) along with these results, this equation can be rewritten as

kBTo |S21|² + kBTo [ 1 − |S21|² ] = kBTo    (4.91)

This is a very interesting result in that it states that even if the noise power available from the source resistance is attenuated (reflected) by the lossless filter, the same noise density is delivered to the load regardless. If the lossless network input is now replaced by an input power spectral density PV( f ) (that includes both AM and PM components), the output power spectral density is given by

PVo( f ) = PV( f ) |S21|² + kBTo [ 1 − |S21|² ]    (4.92)

and it is assumed that PV( f ) includes any noise due to the source impedance. If the total signal (carrier) power is Pin at the input to the network, the total power delivered to the load is Pin|S21|². Recognizing that only one-half of the power spectral density in (4.92) can contribute to the phase noise35 and dividing both sides by the total output power leads to

[PVo( f )]PM / (|S21|² Pin) = { [PV( f )]PM |S21|² + (kBTo/2) [ 1 − |S21|² ] } / (|S21|² Pin)    (4.93)

Lo( f ) = Li( f ) + (kBTo/(2Pin)) [ 1/|S21|² − 1 ]
        = Li( f ) + (kBTo/(2Pin)) [ L − 1 ]    (4.94)

where Li( f ) and Lo( f ) are the input and output phase noise spectrums, respectively, and L = 1/|S21|². This is a useful result that makes it possible to directly compute the phase noise spectrum at the output of a lossless network (e.g., an LC filter) given the phase noise spectrum at the input and knowledge of |S21|².
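Equation (4.94) is convenient to evaluate directly in decibel quantities. The helper below (a hypothetical function with illustrative parameter values) adds the thermal term kBTo(L − 1)/(2Pin) to an input phase noise floor:

```python
import math

def lossless_filter_phase_noise(Li_dbc, loss_db, pin_dbm, temp_k=290.0):
    """Output phase noise of a lossless (reactive) filter per (4.94):
    Lo = Li + kB*To*(L - 1)/(2*Pin), with L = 1/|S21|^2."""
    kB = 1.380649e-23                        # Boltzmann constant, J/K
    Li = 10.0 ** (Li_dbc / 10.0)             # dBc/Hz -> linear (rad^2/Hz)
    L = 10.0 ** (loss_db / 10.0)             # insertion loss as a power ratio
    Pin = 1e-3 * 10.0 ** (pin_dbm / 10.0)    # dBm -> watts
    Lo = Li + kB * temp_k * (L - 1.0) / (2.0 * Pin)
    return 10.0 * math.log10(Lo)

# A -180 dBc/Hz floor through a filter with 3 dB of loss at 0 dBm input
# degrades to roughly -175 dBc/Hz; with no loss the floor is untouched.
print(lossless_filter_phase_noise(-180.0, 3.0, 0.0))
print(lossless_filter_phase_noise(-180.0, 0.0, 0.0))
```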

35 The other half constitutes AM noise, by virtue of the equipartitioning of noise between AM and PM portions.


Phase noise transfer functions for other circuit elements can be similarly found. In general, however, it is better to perform all computations in terms of power spectral density rather than differentiating between AM and PM noise portions at the intermediate steps. Otherwise, it is easy to overlook more subtle factors like the AM-to-PM and PM-to-AM coupling that occurs through a nonsymmetrical filter as discussed earlier in Section 3.4.

4.7.3 Noise in Dividers

Digital dividers and digital devices in general play a major role in time and frequency control systems. The often-touted inherent noise immunity of digital devices is only enjoyed, however, after the signal information has become digital in nature. The weakest point in using digital devices in time and frequency control applications is normally at the interface between the analog continuous-time world and the digitization/quantization process that is involved. A simplified perspective of the digitization process is shown in Figure 4-25. Circuit noise, voltage supply noise, and metastability effects all contribute time uncertainty to when the precise threshold voltage is crossed. If these effects were constant cycle-to-cycle, the digital device would not degrade the phase integrity of the input signal, but this is not the case. Since 1° represents a time interval of only 1.39 ps at 2 GHz, the time quantities involved are very small. The analog-to-digital interface at the divider input is most susceptible to noise for weak signals and low-frequency signals that have a correspondingly lower input slew-rate. Once an analog signal has been passed through the first digital device, however, the signal edges are normally much faster and the only analog information that remains is the threshold-crossing time-of-arrival information.

[Figure 4-25 Time uncertainty in the digitization of an analog signal that occurs at the input of a digital device. Noise in the critical immunity region near the threshold-crossing of the analog input translates into time uncertainty on the digitized output edges.]

The time-jitter ∆t in seconds is easily converted to an equivalent measure of phase jitter for a periodic waveform by simply multiplying ∆t by the nominal radian frequency of the square-wave, ωo. Any subsequent digital division of the signal by an integer N reduces the radian frequency by the same factor but leaves the physical quantity ∆t unchanged. Therefore, an ideal digital divider with divide-ratio N reduces the output-referred phase noise by 20 log10(N). Because of this fact, phase noise floor requirements for divider elements near the divider output can be more critical than at the divider input which is directly opposite of the more familiar concept for receiver cascaded noise figure. If, for instance, several digital divider stages are cascaded as shown in Figure 4-26 with divide ratios denoted by Nj and input-referred phase noise contributions given by Rj rad2/Hz, the overall input-referred noise floor for the divider cascade is given approximately by

RTot = R1 + Σ_{m=2}^{M} [ Rm ∏_{n=1}^{m−1} Nn² ]  rad²/Hz    (4.95)

[Figure 4-26 Simple cascade of digital divide ratios Nj with each divider having its own input-referred phase noise contribution Rj.]

This result should clarify why it is possible to use high-frequency prescaling dividers that may have poorer phase noise performance and yet still be limited by the performance of the later divide elements. The cascaded-divider noise computations just mentioned really amount to simply calculating a root sum-of-squares of all of the individual divider input-referred time-jitter quantities, and then converting that result to an equivalent phase noise value at the output. Matters are simplified if time quantities are used rather than rad2/Hz quantities in the calculations, and this practice is encouraged. In monolithic designs where phase noise performance is critical (e.g., in PLLs), divider topologies can be used that resynchronize the overall divider output with the input clock thereby avoiding time-jitter buildup through any divider cascade that might otherwise occur. This is also why synchronous counters are highly preferred in precision frequency synthesis work over ripple counters.
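A small sketch of the cascade computation (hypothetical numbers throughout): each stage's noise floor Rm is referred to the cascade input by the square of the product of the divide ratios ahead of it, consistent with the 20 log10(N) scaling discussed above, which is equivalent to root-sum-squaring the per-stage time-jitter quantities:

```python
def cascade_input_referred_noise(R, N):
    """Input-referred phase-noise floor of a divider cascade, per (4.95):
    stage m's floor Rm (rad^2/Hz) is scaled by the square of the product of
    all divide ratios ahead of it, since phase power scales as N^2."""
    total, gain2 = 0.0, 1.0
    for Rm, Nm in zip(R, N):
        total += Rm * gain2
        gain2 *= Nm * Nm                    # accumulate (N1*...*Nm)^2
    return total

# Hypothetical three-stage cascade: a fast prescaler with a low floor
# followed by two slower stages whose floors dominate after referral.
R = [1e-16, 1e-15, 1e-15]                   # rad^2/Hz, illustrative values
N = [10, 4, 2]
print(cascade_input_referred_noise(R, N))   # 1e-16 + 1e-13 + 1.6e-12
```

Note how the last stage, despite having the same floor as the second, dominates once referred through the full 40:1 preceding division, mirroring the statement above that dividers near the output can be the most critical.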

4.7.4 Macroscopic Noise Modeling in PLLs

Once all of the individual noise contributions in a PLL have been identified, they must be appropriately combined while including the control-system behavior of the PLL so that the overall phase noise performance can be assessed. A detailed PLL example helps to illustrate the details involved. The circuit schematic for a type-2 fourth-order PLL is shown in Figure 4-27. Resistor noise is included by using a Norton current noise source across each respective resistor per the earlier discussion in Section 4.2.1.1. Phase noise from the reference frequency source θref is normally expressed in tabular form as dBc/Hz versus offset frequency, and is either provided from characterization data or first-hand analysis. In this example, it is assumed to be zero for simplicity. Similarly, other reference port-related noise quantities are represented here by θrn. This source includes the phase noise performance limits of the phase detector block (Kd), for instance. Phase noise performance of the voltage-controlled oscillator (VCO) is represented by θvn. Oscillator phase noise analysis methods discussed in Chapter 9 can be used to evaluate this phase noise source, or other circuit-level simulators or lab measurements can be used. The total output phase noise of the PLL is represented by θPLL. The phase noise sources are normally assumed to be statistically independent. This makes it possible to compute the overall phase noise performance of the system by power-adding the individual power spectral densities of each noise contributor together at the PLL output. If noise sources are partially correlated, this can be dealt with by simply adding additional noise sources in Figure 4-27 that delineate between the uncorrelated and correlated quantities, but otherwise performing the computations in the same way.

Noise in PLL-Based Systems

Figure 4-27 Circuit diagram for type-2 fourth-order PLL using charge-pump phase detector. Phase noise and Johnson resistor noise sources are all shown.

The easiest way to compute the phase noise contribution of each noise source is to reflect the impact of each source to one of two points in the closed-loop PLL, and then apply the appropriate closed-loop transfer function between that point and the PLL output. Using this approach, every noise source in the diagram is represented either as an additional equivalent (uncorrelated) noise source at the reference port (e.g., θrn), or as an additional equivalent (uncorrelated) VCO noise source (e.g., θvn). The two closed-loop transfer functions involved are discussed at length in Sections 1.2 and 6.2, and are referred to as H1(s) and H2(s). Transfer function H1(s) is a lowpass function that describes how reference-related phase noise is conveyed to the PLL output, whereas H2(s) is a highpass function that describes how VCO-related phase noise is conveyed to the same output. These transfer functions are explicitly developed for the analysis of noise in sampled systems in Sections 7.8.1 and 7.8.2. These complications can be deferred for now, however, by simply using matrix calculation methods as discussed next.

In analyzing a PLL diagram like Figure 4-27, most engineers go through the painful effort of deriving Laplace transform relationships for the network's behavior. This effort can be completely side-stepped by using simple nodal equations and matrix methods instead. For instance, at node 1, applying Kirchhoff's current law results in the equation

Kd (θref + θrn) + I1 − I2 = (sC0 + G1 + G2) V1 − G1 V2 − G2 V3 + (Kd/N) θPLL    (4.96)

where Gi = 1/Ri, and Kd (θref + θrn) represents the current out of the charge-pump phase detector due to the externally applied reference phase and reference-port related phase noise. The final term in (4.96) is due to the feedback path from the VCO through the feedback divider to the phase detector. Recognizing that the VCO is a perfect integrator of phase, θo = Kv V3 / s. Nodal equations are found for the remaining nodes and the full set of equations assembled in matrix form as

[ I1 − I2 + Kd θrn ]   [ sC0 + G1 + G2    −G1          −G2          Kd/N ] [ V1   ]
[ −I1              ] = [ −G1               sC1 + G1     0            0   ] [ V2   ]
[ I2               ]   [ −G2               0            G2 + sC2     0   ] [ V3   ]
[ θvn              ]   [ 0                 0            −Kv/s        1   ] [ θPLL ]    (4.97)


This matrix equation captures all of the control features of this PLL example, making it unnecessary to develop explicit relationships for the transfer functions H1(s) and H2(s) at this time. With a little practice, matrix equations like this can be written down by direct inspection, even for very complicated loop filter arrangements. Analysis tools like MATLAB and Mathcad make solving this matrix equation for any complex frequency s = j2πf a trivial matter. The total phase noise at a specific frequency offset foffset from the carrier is found by evaluating θPLL in (4.97) for each individual noise source one at a time, and then summing the squares of the contributions to get the total power in rad²/Hz. In the detailed results that follow in Figure 4-28, the phase detector noise floor model (used for θrn) is that given for the National Semiconductor Platinum PLL devices as

LPD = LFloor + 20 log10(FVCO) − 10 log10(FREF)    (4.98)

where LFloor = −205/−210/−211/−218 dBc/Hz for the LMX2315/LMX2306/LMX2330/LMX2346 devices, respectively. Leeson's model is used for the VCO phase noise from (9.116). Phase noise contributions for the different noise contributors are shown separately in Figure 4-28 along with the composite phase noise spectrum. This kind of presentation makes it very easy to identify which contributors may be problematic and where improvements can be considered. The MATLAB code is easily modified to include additional results, such as integrated phase noise and equivalent noise bandwidth.

Figure 4-28 Example phase noise computation using matrix methods.36 (The plot shows phase noise in dBc/Hz versus frequency offset from 100 Hz to 100 kHz, with separate curves for the total noise, reference noise, resistor R1 noise, resistor R2 noise, and VCO noise.)
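The matrix method reduces to a few lines of code. The book's example is a MATLAB script (see footnote 36); the sketch below is an independent Python/NumPy rendering of (4.97). Kd, Kv, and N follow the footnote, but the loop-filter component values are illustrative placeholders chosen to land near the quoted 2.5-kHz natural frequency and ζ = 0.75; they are not the book's actual design values.

```python
import numpy as np

# Sketch of the nodal-matrix method of (4.97): solve the 4x4 system once per
# noise source and frequency, then power-sum the |theta_PLL|^2 contributions.
# Kd, Kv, and N follow footnote 36; the loop-filter values below are
# illustrative placeholders (chosen to give roughly a 2.5-kHz natural
# frequency with damping 0.75), not the book's actual component values.
Kd = 1e-3                 # charge-pump gain, A/rad
Kv = 2 * np.pi * 10e6     # VCO gain, rad/s/V
N = 9000                  # feedback divide ratio
R1, C1 = 3.4e3, 28e-9     # series zero branch at node 2
C0 = 2.7e-9               # shunt capacitor at node 1
R2, C2 = 1e3, 470e-12     # extra-pole section feeding the VCO at node 3
G1, G2 = 1.0 / R1, 1.0 / R2

def theta_pll(f, rhs):
    """Solve (4.97) at s = j*2*pi*f; the unknowns are [V1, V2, V3, theta_PLL]."""
    s = 2j * np.pi * f
    A = np.array([
        [s * C0 + G1 + G2, -G1,          -G2,          Kd / N],
        [-G1,              s * C1 + G1,   0,            0],
        [-G2,              0,             G2 + s * C2,  0],
        [0,                0,            -Kv / s,       1],
    ], dtype=complex)
    return np.linalg.solve(A, np.asarray(rhs, dtype=complex))[3]

f = 10e3  # evaluate at a 10-kHz offset
g_ref = theta_pll(f, [Kd, 0, 0, 0])  # unit reference-port phase excitation
g_vco = theta_pll(f, [0, 0, 0, 1])   # unit VCO phase excitation
# With unit source PSDs, the output PSD (rad^2/Hz) power-sums as:
S_total = abs(g_ref) ** 2 + abs(g_vco) ** 2
```

Well inside the loop bandwidth the reference gain approaches N (the lowpass H1 behavior), while far outside it the VCO gain approaches unity (the highpass H2 behavior), which provides a quick sanity check on the solution.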


36 Book CD:\Ch4\u12548_t2_order4_pll.m. Kd = 1 mA/rad, Kv = 2π·10 MHz/V, N = 9000, Fref = 100 kHz, loop natural frequency = 2.5 kHz, damping factor ζ = 0.75, phase detector noise floor −160 dBc/Hz.


Appendix 4A: Review of Stochastic Random Processes

A stochastic random process is defined as an ensemble of time functions x(t, ξ) = {xξ1(t), xξ2(t), …} in which each member time function xξ(t) is constructed using the same rules. Parameter ξ denotes a specific experimental outcome that may occur, and the probability of each outcome is px(ξ). If, for example, the experimental outcome space S consists of only three values for ξ, the time functions might appear similar to those shown in Figure 4A-1. The probability assigned to each possible ξ is arbitrary, so long as the probabilities sum to unity. For this example stochastic process, the average value of x(t) at time t1 is found by computing the sum of probability-weighted xξ(t1) values across the ensemble of functions. The individual time functions may exhibit different attributes so long as they obey the a priori given rules for construction. Some characterizations of the process involve ensemble-averaging (i.e., vertically across the set of possible time functions), whereas others involve time-averaging along an individual function (i.e., horizontal averaging). Ergodic processes constitute an important subset of stochastic processes and are defined as follows: A stochastic process is ergodic if all of the statistics for the ensemble of member functions can be completely determined from any individual member function xξ(t).

Figure 4A-1 Stochastic process example having a possible outcome space consisting of three member time functions.

Statistical ensemble expectation operations denoted by bold-face E in this text amount to simple time-averages when a stochastic process is ergodic. This is an important mathematical simplification. Only ergodic processes are considered for the remainder of this appendix.

4A.1 Wide-Sense Stationarity

A stochastic process is called stationary if its statistics are not altered by a shift in the time origin. A process is called wide-sense stationary (WSS) if it exhibits a constant mean-value and its autocorrelation function depends only on time-differences. Mathematically, these conditions are written as

E[x(t)] = µ = constant
E[x(t + τ) x(t)] = E[x(t + τ/2) x(t − τ/2)] = Rx(τ)    (4A.1)

In the general case, E represents a statistical ensemble expectation operation, but it can be replaced by a simple time-average when the underlying process x(t, to) is ergodic. An example for the mean-value of x(t, to) is given by

E{x(t, to)} = lim_{Tm→∞} (1/Tm) ∫_{−Tm/2}^{+Tm/2} x(v, to) dv    (4A.2)

where to corresponds to the time origin. Wide-sense stationarity is concerned only with the first two moments of the process. WSS processes are very significant because they can frequently be used to represent the behavior of real-world random processes, and they form the foundation for a broad range of other engineering and mathematical concepts. The autocorrelation function of a real WSS process is symmetric, meaning that

R(τ) = R(−τ)    (4A.3)

4A.2 Probability Density Functions

Assume that an ergodic band-limited random signal x(t) is sampled at a rate Fs consistent with Nyquist requirements over an extensive period of time Tm→∞, and that the possible range of observed values is given by (Lmin, Lmax). The distribution of observed sample values can be analyzed with a histogram by breaking the range into equally spaced segments of length ∆L = (Lmax − Lmin)/(NH − 1), where NH − 1 is the number of bins used in the histogram, and assessing how many samples fall within each bin. The center of each bin is given by

bk = Lmin + (k − 0.50) ∆L    for 1 ≤ k ≤ NH − 1    (4A.4)

A specific observed sample value u corresponds to a bin-index value of¹

k(u) = ⌈(u − Lmin)/∆L + ε⌉    (4A.5)

where ε is an arbitrarily small constant greater than zero. A representative histogram is shown in Figure 4A-2 using a total number of captured samples Ncap = FsTm of 10,000.

Figure 4A-2 Example histogram for normally distributed random variable.² Ncap = 10,000 used with σ = 1.

If the number of samples falling within the kth histogram bin is given by hk, the probability density function (PDF) can be defined in the limit as

px(u) = lim_{∆L→0} lim_{Tm→∞} hk(u) / (Fs Tm ∆L)    (4A.6)
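A finite-Ncap version of this estimate is easy to sketch in code. The sample count and bin count below are arbitrary illustrative choices, and the bin index uses a floor-based, zero-indexed equivalent of (4A.5):

```python
import random

# Finite-sample version of the histogram PDF estimate of (4A.4)-(4A.6).
# The sample and bin counts are illustrative choices; the bin index is a
# floor-based, zero-indexed equivalent of the ceiling form in (4A.5).
random.seed(3)
Ncap, NH = 100000, 41
samples = [random.gauss(0.0, 1.0) for _ in range(Ncap)]
Lmin, Lmax = min(samples), max(samples)
dL = (Lmax - Lmin) / (NH - 1)

h = [0] * (NH - 1)
for u in samples:
    k = min(int((u - Lmin) / dL), NH - 2)  # clamp the single sample at u = Lmax
    h[k] += 1

pdf = [hk / (Ncap * dL) for hk in h]       # estimate of (4A.6) at finite Ncap
peak = max(pdf)                            # should approach 1/sqrt(2*pi) ~ 0.399
```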

Distribution Mean and Variance

The mean and variance of a probability distribution function are defined by

µ = E(x)    (4A.7)

σ² = E(x²)    (4A.8)

¹ ⌈u⌉ denotes the function ceil() in MATLAB.
² Book CD:\Ch4\u14042_hist_pdf.m.


provided that x corresponds to a WSS random process. When the distribution is not mean-zero, it is frequently convenient to work with the covariance of the random variable, which is defined as

Kx = E[(x − µ)²] = E(x²) − µ²    (4A.9)

Uniform Probability Density

The uniform probability density is usually defined as

puniform(x) = 1 for 0 ≤ x ≤ 1; 0 otherwise    (4A.10)

In this form, the density exhibits a mean-value of 0.50. The covariance of this density is given by

σ²uniform = E[(x − µ)²] = E(x²) − µ² = ∫₀¹ v² dv − µ² = 1/3 − (1/2)² = 1/12    (4A.11)

Uniform random numbers are most frequently created using linear congruential generators, which have the general form

uk+1 = (a·uk + b) modulo M    (4A.12)

Integer constants a, b, and M must be carefully chosen in order to maximize the length of the resulting random sequence as well as to optimize the correlation properties of the generated sequence. Additional information is available in [1], [2].
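A minimal sketch of (4A.12), using the well-known Numerical Recipes constants a = 1664525, b = 1013904223, M = 2³² (an illustrative choice; production code should use a vetted library generator):

```python
from itertools import islice

# Minimal linear congruential generator per (4A.12).  The constants below are
# the widely used "Numerical Recipes" values; they are an illustrative choice,
# not a recommendation over a vetted library generator.
def lcg(seed, a=1664525, b=1013904223, M=2**32):
    """Generate uniform samples on [0, 1) from u_{k+1} = (a*u_k + b) mod M."""
    u = seed % M
    while True:
        u = (a * u + b) % M
        yield u / M

samples = list(islice(lcg(seed=12345), 10000))
mean = sum(samples) / len(samples)   # should be close to 0.5
```

This particular (a, b, M) choice has full period 2³², so consecutive draws never repeat a state within the period.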

Gaussian (Normal) Probability Density

The Gaussian distribution is one of the most frequently encountered densities in electrical engineering. This is in large part due to the many-electron nature of electrical current, which naturally leads to the Gaussian distribution when the electron motions are statistically independent. Whenever a large number of independent random variables are added together, the net distribution becomes increasingly Gaussian as the number of variables increases. This fact can be used to create Gaussian random sample values by simply adding N mean-zero independent uniformly distributed random quantities together and normalizing by √N, as shown in Section 3.18. The one-dimensional Gaussian PDF is given by

pGauss(x) = [1/√(2πσ²)] exp[−(x − µ)²/(2σ²)]    (4A.13)


where σ² is the variance of the distribution and µ is its mean-value. The cumulative probability distribution function (see Section 4A.4) is given by

FGauss(x) = ∫_{−∞}^{x} [1/√(2πσ²)] exp[−(v − µ)²/(2σ²)] dv
          = ∫_{−∞}^{(x−µ)/σ} [1/√(2π)] exp(−y²/2) dy    (4A.14)

The solution for (4A.14) is normally given in terms of the erf(x) function, which is defined as

erf(x) = (2/√π) ∫₀ˣ exp(−t²) dt    (4A.15)

A convenient means to compute random sample values with a prescribed PDF is discussed in Section 4A.5, but this technique is not well suited for the Gaussian PDF because the cumulative probability distribution function (4A.14) is difficult to invert. One of the most popular methods for creating Gaussian random samples is the Box-Muller method, which is based on transforming a two-dimensional joint Gaussian distribution into a polar format that is easily worked with. The method provided in Table 4A-1 can be used to create random Rayleigh samples (denoted rk here), and a pair of uncorrelated Gaussian samples can then be calculated as

xk = rk cos(φk)
yk = rk sin(φk)    (4A.16)

where φk is a uniformly distributed random variable on the range (−π, +π].
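A minimal sketch of this recipe (σ = 1 and the sample count are illustrative choices): the Rayleigh radius comes from the inverse-CDF formula in Table 4A-1, and the uniform phase supplies the rotation in (4A.16).

```python
import math
import random

# Sketch of the Box-Muller recipe: a Rayleigh radius from the inverse-CDF
# formula of Table 4A-1 plus a uniform phase produce two uncorrelated
# Gaussian samples via (4A.16).  sigma and the count are illustrative.
def box_muller(sigma=1.0):
    u = random.random()                                 # uniform on [0, 1)
    r = math.sqrt(-2.0 * sigma**2 * math.log(1.0 - u))  # Rayleigh sample
    phi = (2.0 * random.random() - 1.0) * math.pi       # uniform phase on [-pi, pi)
    return r * math.cos(phi), r * math.sin(phi)

random.seed(7)
pairs = [box_muller() for _ in range(50000)]
xs = [p[0] for p in pairs]
mean = sum(xs) / len(xs)
var = sum(x * x for x in xs) / len(xs) - mean**2   # should be near sigma^2 = 1
```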

Poisson Probability Density

The Poisson probability density is often called a counting process or a point process because it corresponds to the probability that a discrete number of events occurs over a specified amount of time. Given a rate parameter denoted here by λ, the probability that k events occur within t seconds is given by

Prob(k events) = [(λt)^k / k!] exp(−λt)    for t ≥ 0    (4A.17)

4A.3 Characteristic Function

The characteristic function of a probability density function px(u) is defined as the Fourier transform of the PDF:

Ψx(f) = ∫_{−∞}^{+∞} px(u) e^{j2πfu} du    (4A.18)


The characteristic function is often expressed in the equivalent form

Ψx(f) = E(e^{j2πf x})    (4A.19)

4A.4 Cumulative Probability Distribution Function

The cumulative probability distribution function (CDF) for a probability density px(u) is defined as

F(v) = ∫_{−∞}^{v} px(u) du    (4A.20)

4A.5 Creation of Sample Sequences Exhibiting an Arbitrary Probability Density

The CDF F(ν) given by (4A.20) is constrained to lie in the range 0 ≤ F(ν) ≤ 1. This observation can be used along with a uniform random number generator spanning (0, 1) to create a wide range of random sample sequences having a prescribed PDF. Since the CDF is the integral of the PDF, it must always be monotonically increasing, as shown in Figure 4A-3. As shown in Figure 4A-4, a small region dν in the uniform distribution can be mapped to a corresponding region dλ in the CDF by using the simple relationship

F(λ) = ν    (4A.21)

Figure 4A-3 Example probability density function and corresponding cumulative distribution function.³

where ν is a uniformly distributed random sample value on the range (0, 1). This relationship causes uniformly distributed random sample values to be distributed with the desired probability density function, since the scaling factor between incremental probability areas in the two distributions is dF/dλ, which corresponds to px(λ). This technique can be used whenever it is convenient to solve (4A.21) for λ in terms of ν. Several closed-form results are provided in Table 4A-1.

³ Book CD:\Ch4\u13147_example_pdf.m.

Figure 4A-4 Heuristic argument for creating arbitrarily distributed random sample values from uniformly distributed values using the cumulative probability distribution function F(λ).

Table 4A-1 Closed-Form Formulas for Creation of Random Sample Values with Specified PDF⁴ from Uniformly Distributed Variables

Name of Density | p(x) | F(λ) | u ⇒ λ
Rayleigh | (x/σ²) exp[−x²/(2σ²)] | 1 − exp[−λ²/(2σ²)] for λ ≥ 0 | λ = √[−2σ² loge(1 − u)]
Exponential | (γ/2) exp(−γ|x|) | (1/2) exp(γλ) for λ < 0; 1 − (1/2) exp(−γλ) for λ ≥ 0 | λ = (1/γ) loge(2u) for 0 ≤ u ≤ 0.50; λ = −(1/γ) loge[2(1 − u)] for 0.50 < u ≤ 1
Cauchy | (α/π) / (x² + α²) | 1/2 + (1/π) tan⁻¹(λ/α) | λ = α tan[π(u − 1/2)]

4A.6 Power Spectral Density

The power spectral density of a WSS stochastic signal x(t) is defined as

Sx(f) = lim_{Tm→∞} (1/Tm) E[ |XTm(f)|² ]    (4A.22)

where the finite-time Fourier transform of x(t) is given by

XTm(f) = ∫_{−Tm/2}^{+Tm/2} x(v) e^{−j2πf v} dv    (4A.23)

⁴ Examples in Book CD:\Ch4\u13149_pdfs.m.


The power spectral density of a WSS process and its autocorrelation function are a Fourier transform pair by way of the Wiener-Khintchine theorem. As a result, the autocorrelation function for a complex-valued WSS process must satisfy

R(τ) = R*(−τ)    (4A.24)

since the corresponding power spectral density S(f) must be a real quantity. This is an extension of the result (4A.3) given earlier for real-valued WSS processes. The Lorentzian PSD is given by

Sx(f) = α² / [α² + (2πf)²]    (4A.25)

and the corresponding autocorrelation function is

Rx(τ) = (α/2) exp(−α|τ|)    (4A.26)
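This transform pair is easy to verify numerically by integrating the PSD against e^{j2πfτ}. A sketch (α, τ, and the integration grid are illustrative choices):

```python
import numpy as np

# Numerical check of the (4A.25)-(4A.26) Fourier-transform pair: integrate
# the Lorentzian PSD against e^{j 2 pi f tau} and compare with
# (alpha/2) exp(-alpha |tau|).  alpha, tau, and the grid are illustrative.
alpha, tau = 2.0, 0.5
f = np.linspace(-200.0, 200.0, 400001)              # df = 1e-3, wide tails
S = alpha**2 / (alpha**2 + (2.0 * np.pi * f) ** 2)  # (4A.25)
df = f[1] - f[0]
R_num = float(np.real(np.sum(S * np.exp(2j * np.pi * f * tau)) * df))
R_ana = (alpha / 2.0) * np.exp(-alpha * abs(tau))   # (4A.26)
```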

4A.7 Linear Filtering of WSS Processes

Filtering of a WSS process x(t) by a linear time-invariant filtering function h(t) is based on linear time-convolution, just as it would be for deterministic signals [3]. The filtered WSS process is denoted here by y(t). Starting with the time-domain description for y(t),

y(t) = ∫_{−∞}^{+∞} h(α) x(t − α) dα    (4A.27)

The autocorrelation function for y(t) is found by computing

Ry(τ) = E[ y(t + τ/2) y(t − τ/2) ]
      = E[ ∫_{−∞}^{∞} dα ∫_{−∞}^{∞} dβ h(α) h(β) x(t + τ/2 − α) x(t − τ/2 − β) ]    (4A.28)

For real h(t), this can be simplified to

Ry(τ) = Rx(τ) ⊗ h(τ) ⊗ h(−τ)    (4A.29)

The equivalent result can be obtained in the frequency transform domain by first writing Y ( j 2π f ) = X ( j 2π f ) H ( j 2π f )

from which it immediately follows that

(4A.30)

Noise in PLL-Based Systems

158

S y ( f ) = Y ( j 2π f ) = X ( j 2π f ) H ( j 2π f ) 2

2

= S x ( f ) H ( j 2π f )

2

(4A.31)

2

These steps are outlined in a slightly different form in Figure 4A-5.

Figure 4A-5 Filtering⁵ of a WSS process by a linear time-invariant filter H(j2πf).

4A.8 Equivalent Noise Bandwidth

The equivalent noise bandwidth of a lowpass filter H(j2πf) is defined as the width of an ideal rectangular filter having the same dc-gain and passing the same amount of noise power as the lowpass filter in question. Mathematically, the two-sided equivalent noise bandwidth is given by

Bequ = [ ∫_{−∞}^{+∞} |H(j2πf)|² df ] / |H(0)|²    (4A.32)
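For a single-pole lowpass the closed-form answer is known, which makes a quick numerical check of (4A.32) possible. A sketch (the corner frequency and grid are illustrative choices):

```python
import numpy as np

# Numerical evaluation of (4A.32) for a single-pole lowpass
# H(f) = 1/(1 + j f/fc).  The closed-form two-sided equivalent noise
# bandwidth of this filter is pi*fc, which provides an easy cross-check.
fc = 1.0e3                                 # 3-dB corner, Hz (illustrative)
f = np.linspace(-1.0e7, 1.0e7, 2_000_001)
H2 = 1.0 / (1.0 + (f / fc) ** 2)           # |H(f)|^2, with |H(0)|^2 = 1
df = f[1] - f[0]
B_equ = float(np.sum(H2) * df)             # Riemann approximation of (4A.32)
```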

References

[1] Knuth, D.E., The Art of Computer Programming, Volume 2: Seminumerical Algorithms, 3rd ed., Reading, MA: Addison-Wesley, 1997.
[2] Press, W.H., B.P. Flannery, S.A. Teukolsky, and W.T. Vetterling, Numerical Recipes in C, Cambridge, MA: Cambridge University Press, 1990.
[3] Papoulis, A., Probability, Random Variables, and Stochastic Processes, New York: McGraw-Hill, 1965.

⁵ After Figure 10-5 of [3].

Appendix 4B: Accurate Noise Modeling for Computer Simulations

Accurate noise modeling is a necessary requirement for precision computer simulation. Discrete time-steps and the associated Nyquist bandwidth limit immediately force a discussion about how ideal white Gaussian noise should be accurately represented in a finite-bandwidth simulation environment. This bridge between continuous- and discrete-time noise modeling can be built using the arguments presented here.

Additive white Gaussian noise (AWGN) is shown in Figure 4B-1 extending across all frequencies with a two-sided spectral density level of No/2 W/Hz. Assume that the discrete-time sampling rate is Fs Hz, and the associated Nyquist bandwidth is Fs/2. There is no choice but to lowpass filter the infinite-bandwidth AWGN, because the digital system cannot accurately represent any signal frequency content above the Nyquist bandwidth. The noise is filtered by passing it through an ideal brick-wall continuous-time lowpass filter as suggested in Figure 4B-1, and only noise components below the Nyquist frequency survive. Given that the input Gaussian noise is a wide-sense stationary random process, the Wiener-Khintchine theorem can be used to compute the autocorrelation function of the filter's output noise as

R(τ) = ∫_{−∞}^{+∞} (No/2) rect(f/Fs) e^{j2πfτ} df    (4B.1)

which leads to the final result

R(τ) = (No/2) ∫_{−Fs/2}^{+Fs/2} e^{j2πfτ} df = (No Fs/2) · sin(πFsτ)/(πFsτ)    (4B.2)

Figure 4B-1 AWGN passed through an ideal brick-wall lowpass filter.

Several important observations follow from (4B.2):

• The continuous-time noise at the filter output has a sample variance of NoFs/2.
• Output time samples that are separated by integer multiples of 1/Fs seconds (i.e., τ = n/Fs) are completely uncorrelated.
• For a Gaussian noise input, the uncorrelated output samples are also statistically independent.

Using these results, the ideal continuous AWGN noise source can be modeled exactly in a discrete-time simulation by using a Gaussian random number generator that provides a sample every 1/Fs seconds with a sample variance of NoFs/2.
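This prescription is one line of code per sample. A sketch (the density and sample rate below are illustrative values, not taken from a specific example in the text):

```python
import math
import random

# Discrete-time AWGN model from Appendix 4B: band-limited AWGN of two-sided
# density No/2 W/Hz sampled at Fs is represented exactly by independent
# Gaussian samples of variance No*Fs/2.  No and Fs are illustrative values.
No = 8.0e-21      # two-sided density No/2 = 4e-21 W/Hz (roughly kT at 290 K)
Fs = 1.0e6        # simulation sample rate, Hz

sigma = math.sqrt(No * Fs / 2.0)   # per-sample standard deviation
random.seed(1)
n = [random.gauss(0.0, sigma) for _ in range(200000)]
var = sum(v * v for v in n) / len(n)   # measured variance, target No*Fs/2
```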


4B.1 Noise Modeling for 1/f^α Processes with 0 < α < 2

The ubiquitous existence of 1/f^α noise throughout nature, combined with the difficulties involved in its accurate simulation, has induced the development of many different models and techniques for its study. This situation has been further compounded by the need to represent processes that are physically observed in nature, in contrast to purely mathematical models that are often not physically realizable. Terminology across the many technical disciplines where 1/f^α noise appears is unfortunately not always consistent. This section develops some of the more rigorous concepts behind these noise processes, and then presents several computer simulation methods that can be used to model 1/f^α noise with different degrees of precision and complexity.

This apparent detour into complex noise theory is necessary because 1/f^α processes behave very differently than the more familiar densities where α ≥ 2. Any attempt to use the Wiener-Khintchine theorem to directly compute the exact autocorrelation function from the power spectral density for these processes leads to immediate difficulty, since the variance given by

R(0) = ∫_{−∞}^{+∞} df / |f|^α    (4B.3)

is undefined for 0 < α < 2. The integration is problematic for |f| near zero when 1 < α < 2, and similarly problematic for |f|→∞ when 0 < α < 1. Both frequency limits pose difficulties when α is precisely unity. Any number of engineering approximations can be used to circumvent these problems, such as bandpass filtering the otherwise ideal 1/f^α noise, but the real question that needs to be addressed is how best to represent the real-world noise mechanisms of interest. Significant differences that may arise between continuous-time and discrete-time system representations must also be properly accounted for.

1/f^α noise processes that are observed in the physical world generally entail additional characteristics that do not mirror the purely mathematical descriptions precisely. For example, no physical-world noise model exhibits infinite bandwidth or infinite power. The existence or nonexistence of 1/f^α noise at extremely low frequencies where f→0 is more arguable even in the real world, but is no less difficult to deal with mathematically. The associated noise statistics are frequently non-Gaussian as well. These noise processes are precisely defined in the purely mathematical realm. In the mathematically precise definition, white noise exhibits a completely uniform power spectral density over infinite bandwidth, and although it is time-continuous, it is not time-differentiable. The time-integral of Gaussian white noise is called Brownian motion B(t) and is defined as [1]

dB(t)/dt = n(t)    (4B.4)

where n(t) represents the infinite-bandwidth Gaussian noise source just mentioned. In one dimension, B(t) takes on the appearance of a random walk. A rigorous solution of (4B.4) involves some knowledge of Itô calculus, but this can be largely avoided by observing its lessons rather than engaging in its detailed study. B(t) is also known as the standard Wiener process [2]. Brownian noise exhibits a power spectral density proportional to 1/f². Noise processes that exhibit spectral densities with α < 2 are frequently called pink noise or colored noise in engineering circles, and fractional Brownian motion or long-range dependent processes by mathematicians. The literature can be somewhat misleading in that 1/f^α noise is often referred to simply as 1/f noise even though α may not be precisely unity.


Fractional Brownian noise (FBN) is a more general form of B(t) and is denoted by BH(t), where H is known as the Hurst parameter. This parameter is a measure of the self-symmetry¹ present in the process and is related to α by H = (1 + α)/2. Normally, H is limited to the range 0.5 < H < 1. BH(t) for t ≥ 0 is required to satisfy the following criteria [3]:

• BH(0) = 0
• exhibit stationary increments (i.e., z(t, s) = BH(t) − BH(s) must be a stationary random process) that are mean-zero and normally distributed, having variances proportional to |t − s|^(α+1) for s ≤ t
• increments must be statistically independent
• BH(t) must exhibit self-symmetry (i.e., scale invariance)

When a continuous-time FBN process is uniformly time-sampled, it can be shown that the autocorrelation coefficient for samples separated by k time-steps is given by

ρz(k) = (1/2) [ |k + 1|^(2H) + |k − 1|^(2H) − 2|k|^(2H) ]    (4B.5)

As k increases, (4B.5) asymptotically takes the form

ρz(k) → H(2H − 1) k^(2H−2)    for k ≫ 1    (4B.6)

In order to keep the correlation coefficient from growing without bound for large k, H < 1 is required. This autocorrelation function will be used shortly to create one of the most accurate (yet computationally intensive) modeling methods possible for 1/f^α noise.

In creating 1/f^α simulation models for engineering applications, it is important to verify that the model produces the desired autocorrelation behavior and probability density. This is especially true when bit-error rate or timing recovery performance is being investigated; otherwise, it is possible to generate many non-Gaussian processes that exhibit identical autocorrelation behavior yet are quite different from Gaussian. 1/f^α processes are stationary for 0 < α < 1 but nonstationary for 1 ≤ α < 2. Discrete-time modeling precision should be gauged based on the autocorrelation function behavior rather than the power spectral density, in part to avoid aliasing problems that exist in the frequency domain. A zero-mean, discrete-time, Gaussian random process xk is said to simulate a continuous-time, Gaussian random process x(t) if the discrete autocorrelation function Rd(k, m) precisely matches the continuous autocorrelation function R(k∆t, m∆t) [3].

1/f α Noise Generation Using the Auto-Regressive2 (AR) Method

The AR method for 1/f α noise generation is one of the most accurate methods available for creating a specified noise process, but it is also one of the most computationally intensive. This method uses the algorithm described in Appendix 4C with the associated Toeplitz correlation matrix elements

1 From Mandelbrot’s extensive work in fractal geometry. Self-symmetry is a term that has arisen through the mathematics of fractals; it means that the signal appears and behaves the same way when viewed with different scaling factors. The parameter also determines how long-ranging the process is (i.e., over what time duration it exhibits memory). Also referred to as self-similarity.
2 Also referred to as the all-pole or maximum-entropy method [4].


populated using (4B.5). This technique is most applicable for 0 < α ≤ 1. An example result using this method is shown in Figure 4B-2.


Figure 4B-2 1/f noise spectrum using AR method for generation.3
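A minimal NumPy sketch of the AR approach follows. It builds the correlation values from (4B.5) and solves the resulting Yule-Walker equations directly with a general linear solver instead of the Levinson-Durbin recursion; the model order, sequence length, and α = 0.9 are illustrative choices, and the driving-noise variance is left unnormalized since only the spectral shape is of interest here:

```python
import numpy as np

def fbn_rho(k, H):
    # Increment autocorrelation coefficients from (4B.5)
    k = np.abs(np.asarray(k, dtype=float))
    return 0.5 * ((k + 1)**(2*H) + np.abs(k - 1)**(2*H) - 2.0*k**(2*H))

def ar_one_over_f(alpha, n_samples, order=32, seed=0):
    """Approximate 1/f^alpha noise via the AR method (most applicable for 0 < alpha < 1)."""
    H = (1.0 + alpha) / 2.0
    r = fbn_rho(np.arange(order + 1), H)
    # Yule-Walker system: Toeplitz matrix of r(0..order-1) against r(1..order)
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, r[1:])
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(n_samples)
    x = np.zeros(n_samples)
    for n in range(n_samples):
        past = x[max(0, n - order):n][::-1]          # most recent sample first
        x[n] = w[n] + np.dot(a[:past.size], past)    # all-pole recursion
    return x

x = ar_one_over_f(alpha=0.9, n_samples=4096)
```

The computational burden the text mentions is visible here: the Toeplitz solve is O(order³) with a general solver, which is why Levinson-Durbin (Appendix 4C) is preferred for large model orders.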

Recursive Filtering Method for 1/f Noise Creation

Generation of 1/f α noise for α = 1 can be done by applying recursive digital filtering to white Gaussian noise. The filtering is performed using a cascade of first-order digital filters having appropriately selected pole and zero frequencies. The squared magnitude of the filter transfer function is given by

$$\left|H(\omega)\right|^2 \approx \prod_{i=1}^{N_f} \frac{\omega^2 + z_i^2}{\omega^2 + p_i^2} \qquad \text{(4B.7)}$$

where Nf is the number of cascaded filter sections involved, and the zeros and poles are given by the zi and pi, respectively. The poles must be located on a logarithmic grid across the frequency span of interest (ωmin, ωmax) as

$$p_i = \omega_{\min}\exp\left\{\left[\,i - 1 + \frac{1}{2}\left(1 - \frac{\alpha}{2}\right)\right]\Delta p\right\} \qquad \text{(4B.8)}$$

where

$$\Delta p = \frac{\log_e(\omega_{\max}) - \log_e(\omega_{\min})}{N_f} \qquad \text{(4B.9)}$$

3 Book CD:\Ch4\u13128_AR_clicker.m with 4096 taps used, 64 trials averaged, 2^18 sample points per trial, α = 0.99.

for i = 1 to Nf. In order to have a symmetrical error with respect to the ideal 1/f spectrum line, the zeros are given by [5]

$$z_i = p_i \exp\left(\frac{\alpha}{2}\,\Delta p\right) \qquad \text{(4B.10)}$$

A minimum of one filter section per frequency decade is recommended for reasonable accuracy. A sample result using this method across four frequency decades with 3 and 5 filter sections is shown in Figure 4B-3.


Figure 4B-3 1/f noise creation using the recursive filtering method4 with white Gaussian noise.

1/f α Noise Generation Using Fractional-Differencing Methods

Hosking [6] was the first to propose the fractional differencing method for generating 1/f α noise. As pointed out in [3], this approach resolves many of the problems associated with other generation methods. In the continuous-time domain, the generation of 1/f α noise involves applying a nonrealizable filter having s–α/2 for its transfer function to a white Gaussian noise source. Since the z-transform equivalent of 1/s is H(z) = (1 – z–1)–1, the fractional digital filter of interest here is given by

$$H_\alpha(z) = \frac{1}{\left(1 - z^{-1}\right)^{\alpha/2}} \qquad \text{(4B.11)}$$

A straightforward power-series expansion of the denominator can be used to express the filter as an infinite IIR filter response that uses only integer powers of z as

4 Book CD:\Ch4\u13070_recursive_flicker_noise.m.
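The pole/zero recipe of (4B.8)–(4B.10) can be sketched as follows. The mapping of each analog pole/zero pair into a first-order digital section via z → e^(−ωT) (pole/zero matching), along with the sample rate and band edges, is an assumption of this sketch; the book CD routine is the authoritative implementation:

```python
import numpy as np

def flicker_pole_zeros(alpha, w_min, w_max, n_sections):
    """Analog pole/zero frequencies per (4B.8)-(4B.10)."""
    dp = (np.log(w_max) - np.log(w_min)) / n_sections              # (4B.9)
    i = np.arange(1, n_sections + 1)
    poles = w_min * np.exp((i - 1 + (1 - alpha / 2) / 2) * dp)     # (4B.8)
    zeros = poles * np.exp(alpha * dp / 2)                         # (4B.10)
    return zeros, poles

def recursive_flicker_noise(n_samples, alpha=1.0, w_min=1.0, w_max=1e4,
                            n_sections=5, fs=1e5, seed=0):
    """Filter white Gaussian noise through the cascade of first-order sections."""
    zeros, poles = flicker_pole_zeros(alpha, w_min, w_max, n_sections)
    T = 1.0 / fs
    c, d = np.exp(-zeros * T), np.exp(-poles * T)  # matched digital zeros/poles
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n_samples)
    for ci, di in zip(c, d):
        y = np.empty_like(x)
        xm1 = ym1 = 0.0
        for n in range(n_samples):                 # y[n] = x[n] - ci*x[n-1] + di*y[n-1]
            y[n] = x[n] - ci * xm1 + di * ym1
            xm1, ym1 = x[n], y[n]
        x = y
    return x

noise = recursive_flicker_noise(4096)
```

Each section contributes roughly α·Δp/2 nepers of low-frequency gain, which is how the cascade staircases its way along the ideal 1/f line.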


 α α  1 −   α  2 2  z −2 − . . . H α ( z ) ≈ 1 − z −1 −  2!  2     

164 −1

(4B.12)

in which the general recursion formula for the polynomial coefficients is given by

$$a_0 = 1, \qquad a_k = \left(\frac{k - 1 - \alpha/2}{k}\right)a_{k-1} \qquad \text{(4B.13)}$$

This method results in an IIR filter structure similar to that identified in the AR method discussed earlier, but formulating the solution as a digital filtering problem rather than as an autocorrelation-based problem avoids many of the limitations posed by the AR method. The solution can also be cast as an FIR filter by using a slightly different power-series expansion of (4B.11). In this case,

$$H_\alpha(z) \approx 1 + \frac{\alpha}{2}z^{-1} + \frac{1}{2!}\,\frac{\alpha}{2}\left(\frac{\alpha}{2} + 1\right)z^{-2} + \ldots \qquad \text{(4B.14)}$$

and the general recursion formula for the FIR filter coefficients is given by

$$h_0 = 1, \qquad h_k = \left(\frac{\alpha/2 + k - 1}{k}\right)h_{k-1} \qquad \text{(4B.15)}$$

Both of these methods are attractive for generating 1/f α noise because of their accuracy and reasonable complexity level. Example results for the IIR and FIR methods are shown in Figure 4B-4 and Figure 4B-5, respectively.
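The FIR recursion (4B.15) is particularly simple to implement. The NumPy sketch below (with an illustrative tap count) generates 1/f α noise by convolving the resulting taps with white Gaussian noise; the IIR form follows analogously from (4B.13):

```python
import numpy as np

def hosking_fir_taps(alpha, n_taps):
    """FIR coefficients from the recursion (4B.15)."""
    h = np.empty(n_taps)
    h[0] = 1.0
    for k in range(1, n_taps):
        h[k] = (alpha / 2.0 + k - 1) * h[k - 1] / k
    return h

def hosking_fir_noise(alpha, n_samples, n_taps=1024, seed=0):
    """1/f^alpha noise: white Gaussian noise filtered by the fractional taps."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(n_samples + n_taps - 1)
    return np.convolve(w, hosking_fir_taps(alpha, n_taps), mode='valid')

h = hosking_fir_taps(1.0, 4)   # 1, alpha/2, (1/2!)(alpha/2)(alpha/2 + 1), ...
```

Because the taps decay slowly (like k^(α/2−1)), a long filter is needed to capture the close-in spectrum, which is why the book's examples use 2^15-tap filters.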



Figure 4B-4 Hosking IIR method for generation of 1/f noise.5

Random Midpoint Displacement Method for Generation of 1/f α Noise

This method is based on the self-symmetry property of fractal noise recognized by Mandelbrot that was mentioned earlier. The method is used extensively in computer graphics to create textures and other imaging surfaces, and it is very attractive for graphical applications because of its simplicity and high speed.


Figure 4B-5 Hosking FIR method for generation of 1/f noise.6

5 Book CD:\Ch4\m13121_hosking_iir.m. 2^18 samples computed per track, IIR filter length = 2^12, number of tracks averaged = 1024.
6 Book CD:\Ch4\u13121_hosking_fir.m. 2^18 samples computed per track, length of FIR filter = 2^15, number of tracks averaged = 1024.


The algorithm [7] is based on fractional Brownian motion BH(t), in which the variance between the process at time t and at the origin is given by

$$\mathrm{var}\left[B_H(t) - B_H(0)\right] = \sigma^2 t^{2H} \qquad \text{(4B.16)}$$

Note that this is the variance of fractional Brownian motion itself rather than of the fractional Brownian motion increments that produced the correlation coefficient given by (4B.5). This is an important distinction to recognize because it results in a slightly different relationship between the exponents H and α, given as H = (α – 1)/2 for this particular algorithm.

Algorithm execution is quite simple. Assume that Npts = 2^N + 1 samples are to be created between time points t = 0 and t = 1. Let the created samples be denoted by xk and set x1 = 0. Use a Gaussian random number source with variance σ² to set the sample value at xNpts. For the next step, choose the midpoint index (2^(N–1) + 1) and set its value to the average of x1 and xNpts plus a random displacement. The random displacement is computed using the same Gaussian random number generator source, but the variance used in this case must be scaled down to

$$\mathrm{var}\left[\Delta_n\right] = \frac{\sigma^2}{\left(2^n\right)^{2H}}\left[1 - 2^{2(H-1)}\right] \qquad \text{(4B.17)}$$

with n = 1 corresponding to this first midpoint operation step, so that (4B.16) is still satisfied at this new midpoint. This formula accounts for the fact that the total variance at the new midpoint is the sum of the variance of the averaged sample values plus the variance of the random displacement. This midpoint averaging-plus-random-displacement operation is then repeated again and again until all sample indices have been populated. An example result for the creation of 1/f noise is shown in Figure 4B-6. This algorithm can be used for 1 ≤ α ≤ 2, but does exhibit some accuracy issues for some values of α. The accuracy issue can be improved with a simple modification to this method known as the random addition method [7].
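The midpoint recursion just described can be sketched as follows (NumPy; σ, α, and the number of subdivision levels are illustrative choices, with H = (α − 1)/2 per the discussion above):

```python
import numpy as np

def midpoint_displacement(n_levels, alpha, sigma=1.0, seed=0):
    """Random midpoint displacement: 2**n_levels + 1 samples of fractional
    Brownian motion on [0, 1], with H = (alpha - 1)/2 as in (4B.16)."""
    H = (alpha - 1.0) / 2.0
    rng = np.random.default_rng(seed)
    npts = 2**n_levels + 1
    x = np.zeros(npts)                              # x[0] = 0 by construction
    x[-1] = sigma * rng.standard_normal()           # endpoint drawn per (4B.16) at t = 1
    step = npts - 1
    for n in range(1, n_levels + 1):
        # displacement variance for subdivision level n, from (4B.17)
        var_n = sigma**2 / 2**(2 * n * H) * (1.0 - 2**(2 * (H - 1)))
        half = step // 2
        idx = np.arange(half, npts - 1, step)       # midpoints created at this level
        x[idx] = 0.5 * (x[idx - half] + x[idx + half]) \
                 + np.sqrt(var_n) * rng.standard_normal(idx.size)
        step = half
    return x

track = midpoint_displacement(8, alpha=2.0)         # alpha = 2: ordinary Brownian motion
```

For α = 2 (H = 1/2), the level-n variance reduces to σ²/2^(n+1), the familiar Brownian-bridge midpoint result, which is a useful check on (4B.17).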



Figure 4B-6 1/f noise creation using the random midpoint displacement method.7

Other 1/f α Generation Methods

Many other 1/f α noise generation methods are reported in the literature. The suitability of these methods for a given application varies widely. Some of the methods that have been used extensively in the past, or are worthy of additional consideration, include the Voss8 and Voss-McCartney9 methods [8], [9], and the level-crossing method by Kaulakys,10 just to mention a few.

Key Points:
• 1/f α noise behavior for 0 < α < 2 is substantially different than for α ≥ 2.
• The precision of a noise generation method should normally be judged based on its autocorrelation function rather than on its power spectral density alone.
• The Hosking FIR and IIR methods are preferred for their relative simplicity, uniform behavior, and good accuracy.

7 Book CD:\Ch4\u13122_random_midpoint.m using 2^14 sample points for each trial, 128 trials averaged.
8 Book CD:\Ch4\u13073_flicker3.m.
9 Book CD:\Ch4\u13075_stochastic_voss.m.
10 Book CD:\Ch4\u13074_flicker4.m.

References
[1] Papoulis, A., Probability, Random Variables, and Stochastic Processes, New York: McGraw-Hill, 1965.
[2] Larson, H.J., and B.O. Shubert, Probabilistic Models in Engineering Sciences, Vol. 2, New York: John Wiley & Sons, 1979.
[3] Kasdin, N.J., “Discrete Simulation of Colored Noise and Stochastic Processes and 1/f α Power Law Noise Generation,” Proc. IEEE, May 1995.
[4] Crawford, J.A., Frequency Synthesizer Design Handbook, Norwood, MA: Artech House, 1994.
[5] Saletti, R., Proc. IEEE, Vol. 74, 1986, p. 1595.
[6] Hosking, J.R.M., “Fractional Differencing,” Biometrika, Vol. 68, No. 1, 1981, pp. 165-176.
[7] Strecker, J., “Fractional Brownian Motion Simulation: Observing Fractal Statistics in the Wild and Raising Them in Captivity,” Bachelor of Arts Thesis, College of Wooster, 2004.
[8] Voss, R.F., and J. Clarke, Nature, Vol. 258, 1975.
[9] ———, J. Acoust. Soc. Am., Vol. 63, 1978.

Appendix 4C: Creating Arbitrary Noise Spectra in a Digital Signal Processing Environment

The need to create a random time-domain sample sequence with a prescribed power spectral density frequently occurs in the design of high-performance time and frequency control systems. If the desired power spectral density is simple (e.g., lowpass-filtered noise with a Butterworth shape), standard filters can be used to model the noise. In the more general case, the method described in this appendix may be helpful.

While it is possible to assemble an ad hoc cascade of digital filters to approximate the desired power spectral density (PSD), this appendix provides a means to do so in a very systematic way for wide-sense stationary random processes. The technique is known by several different names, including autoregressive modeling1 (ARM), the all-pole method (APM), and the maximum-entropy method (MEM) [1]–[3]. The first and last names imply an underlying vein of optimality, which is indeed the case from estimation theory and information theory, respectively. The latter perspective is of the greatest interest when it comes to creating compliant noise sample sequences. Kolmogoroff showed [4] that the entropy rate of a stationary Gaussian stochastic process can be expressed as

$$h = \frac{1}{2F_s}\int_{-F_s/2}^{F_s/2} \log_e\left[2\pi e\, S_x(f)\right]\, df \qquad \text{(4C.1)}$$

where Sx( f ) is the PSD of the Gaussian process and Fs is the sampling rate. The significance of this statement is that entropy is a mathematical measure of randomness or outcome uncertainty, and the generated random noise sequence should arguably be as random as possible while still satisfying the constraints imposed by the PSD requirements. The question that is specifically addressed by the MEM is: “What is the power spectral density of the least predictable time series {xn} whose first p + 1 autocorrelation values are {r0, r1, . . ., rp}?” Scharf [2] develops the MEM solution based on Lagrange multipliers and these specific constraints, arriving at the all-pole model for the PSD given by

$$S_x(f) \approx \frac{a_0}{\left|\,1 + \displaystyle\sum_{k=1}^{M} a_k z^k\right|^2} \qquad \text{(4C.2)}$$

in which the ak must be computed so as to approximate Sx( f ) as just described. The order of the approximation M is chosen to achieve a prescribed degree of approximation precision. Although not obvious in any way, the solution for the ak coefficients can be found by solving

1 Autoregressive and maximum-entropy methods are equivalent if the underlying random process is Gaussian [3].

$$\begin{bmatrix} \phi_0 & \phi_1 & \phi_2 & \cdots & \phi_M \\ \phi_1 & \phi_0 & \phi_1 & \cdots & \phi_{M-1} \\ \phi_2 & \phi_1 & \phi_0 & \cdots & \phi_{M-2} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ \phi_M & \phi_{M-1} & \phi_{M-2} & \cdots & \phi_0 \end{bmatrix} \begin{bmatrix} a_0 \\ a_1 \\ a_2 \\ \vdots \\ a_M \end{bmatrix} = \begin{bmatrix} 1 \\ 0 \\ 0 \\ \vdots \\ 0 \end{bmatrix} \qquad \text{(4C.3)}$$

in which the φk values correspond to the autocorrelation function of the underlying process evaluated at equally spaced2 time values as Rx(kTs). The matrix in (4C.3) is known as a Toeplitz matrix because of its special symmetry. This symmetry makes it possible to solve the equation much more efficiently using the Levinson-Durbin algorithm than with classical matrix methods such as Gaussian elimination or LU decomposition. The LPC-10 speech compression techniques are based on this solution method as well.

Normally, the PSD Sx( f ) is specified and the autocorrelation function Rx(τ) must be found. Following the computational guidelines outlined in [5], this is most accurately done by using an FFT to compute Rx(τ) from the Wiener-Khintchine theorem while choosing an adequately large frequency grid so that neither low- nor high-frequency modeling information is lost from Sx( f ). The autocorrelation values required in (4C.3) can then be interpolated from Rx(τ) with high precision.

The Wiener-Khintchine theorem motivates a second perspective of the approximation problem represented by (4C.2). Since the autocorrelation function and PSD are Fourier transform pairs, and the Fourier transform of Sx( f ) is just the Laurent series expansion of (4C.2), it is also true that

$$\frac{a_0}{\left|\,1 + \displaystyle\sum_{k=1}^{M} a_k z^k\right|^2} \approx \sum_{m=-M}^{M} \phi_m z^m \qquad \text{(4C.4)}$$

where the ≈ is meant to imply that the series expansion of the left-hand side of the equation agrees term-by-term with the right-hand side for terms z–M through zM [1]. This perspective also provides impetus to formulate the sample sequence generation solution as a moving-average (MA) filter rather than as an AR-based filter. For equal-length AR and MA implementations, the AR method is usually superior, however. In using this method, access to the exact autocorrelation function is preferable for populating the Toeplitz matrix. If Rx(τ ) must be estimated from actual data, however, care must be exercised to ensure that the autocorrelation values do in fact represent a positive semidefinite autocorrelation matrix. This problem can be mitigated by using biased autocorrelation estimates [6] given by

$$R_x(k) \cong \frac{1}{N}\sum_{n=0}^{N-|k|} x_n x^{*}_{n+k} \qquad \text{(4C.5)}$$
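The Levinson-Durbin solution of the Toeplitz system can be sketched as below. This sketch uses the prediction-coefficient convention x[n] = Σ a_k·x[n−k] + w[n]; up to that sign/normalization convention (a₀ = 1), it solves the same Toeplitz problem as (4C.3), and it runs in O(M²) rather than the O(M³) of a general solver:

```python
import numpy as np

def levinson_durbin(r, order):
    """Levinson-Durbin recursion for the Toeplitz system of (4C.3).
    r holds autocorrelation values r[0..order]. Returns the prediction
    coefficients a[1..order] (convention x[n] = sum_k a[k] x[n-k] + w[n])
    and the final prediction-error power."""
    a = np.zeros(order + 1)
    err = float(r[0])
    for m in range(1, order + 1):
        # reflection coefficient for stage m
        k = (r[m] - np.dot(a[1:m], r[m - 1:0:-1])) / err
        a_new = a.copy()
        a_new[m] = k
        a_new[1:m] = a[1:m] - k * a[m - 1:0:-1]
        a = a_new
        err *= (1.0 - k * k)
    return a[1:], err

# AR(1) check: r[k] = 0.5**k should give coefficients [0.5, 0] and error 0.75
coeffs, err = levinson_durbin(np.array([1.0, 0.5, 0.25]), 2)
```

A practical side benefit is that the reflection coefficients all have magnitude below one when the autocorrelation matrix is positive definite, guaranteeing a stable all-pole synthesis filter.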

The PSD requirement shown in Figure 4C-1 can be used as an extreme example to test this general method. The first simulation result, shown in Figure 4C-2, exhibits very good matching with the requirement, but uses 1,000 IIR filter coefficients to accomplish this performance. Close-in spectral requirements are increasingly compromised as the number of IIR coefficients is reduced from 1,000 to 100 and then to 64, as shown in Figure 4C-3 and Figure 4C-4, respectively. The suitability of this method and the tolerable complexity must be considered on a case-by-case basis.

2 Variants of this method can choose to use autocorrelation times that are not equally spaced.


10

Figure 4C-1 Example PSD specification. Power Spectral Density Using Ideal AR Method 85 80


Figure 4C-2 Resultant PSD3 using 1,000 IIR coefficients and averaging 500 trials, each trial consisting of 2^17 time samples. A sampling rate of 20 kHz was assumed.

3 Book CD:\Ch4\u13132_general_ar_psd.m.



Figure 4C-3 Same as Figure 4C-2 except number of IIR filter taps reduced from 1,000 to 100.


Figure 4C-4 Same as Figure 4C-2 except number of IIR filter taps reduced from 100 to 64.

References
[1] Flannery, B.P., et al., Numerical Recipes in C, New York: Cambridge University Press, 1990.
[2] Scharf, L.L., Statistical Signal Processing: Detection, Estimation, and Time Series Analysis, Reading, MA: Addison-Wesley, 1991.
[3] Marple, S.L., Digital Spectral Analysis with Applications, Englewood Cliffs, NJ: Prentice-Hall, 1987.
[4] Cover, T.M., and J.A. Thomas, Elements of Information Theory, New York: John Wiley & Sons, 1991.
[5] Kasdin, N.J., “Discrete Simulation of Colored Noise and Stochastic Processes and 1/f α Power Law Noise Generation,” Proc. IEEE, May 1995.
[6] Proakis, J.G., and D.G. Manolakis, Introduction to Digital Signal Processing, New York: Macmillan Publishing, 1988.


Appendix 4D: Noise in Direct Digital Synthesizers

A traditional direct digital synthesizer (DDS) is shown in Figure 4D-1. This basic synthesis technique was prominent through much of the 1980s, but was then largely superseded by more advanced ∆-Σ techniques as integration levels increased and performance requirements tightened. Traditional DDS implementations were primarily focused on reducing table look-up memory size, improving DAC performance, and increasing clock speed. Much of the discussion in [1] is focused on these technical areas, whereas limited use of noise-shaping techniques is reported in the same reference.

With the advent of increased digital hardware capabilities, the sin(θ) table look-up element is usually replaced by a high-precision direct calculation of sin(θ) values using CORDIC techniques [2]. CORDIC solutions are usually implemented in a pipelined fashion (thereby largely avoiding speed issues), and they can deliver virtually any level of numerical precision desired [3]. The worst-case discrete spurious performance of the traditional DDS architectures has largely been subdued by ∆-Σ techniques, which exhibit better randomized behavior than a single phase accumulator can offer. This fact, combined with the inherent noise shaping that ∆-Σ techniques offer, has changed the traditional DDS landscape forever.


Figure 4D-1 Traditional direct digital frequency synthesizer constituent elements.

4D.1 Traditional DDS General Concepts

In an ideal DDS, no phase accumulator truncation occurs (W = L in Figure 4D-1) and the sin(θ) look-up table can be assumed to have infinite precision. The ideal DDS output can then be represented by

$$d(t) = \sum_{n=-\infty}^{\infty} \sin\left(2\pi f_o n T_s\right) h\left(t - nT_s\right) \qquad \text{(4D.1)}$$

where fo is the fundamental DDS output frequency, Ts = Fclk–1, and h(t) = 1 for 0 ≤ t < Ts and 0 otherwise. In the more general case where the sampled signal is not an ideal sinusoid but is instead represented by the waveform v(t),

$$d(t) = \sum_{n=-\infty}^{\infty} v\left(nT_s\right) h\left(t - nT_s\right) \qquad \text{(4D.2)}$$


Figure 4D-2 Idealized sampling process involved with direct digital synthesis as described by (4D.2).

These operations are shown schematically in Figure 4D-2. The Poisson Sum formula (2.154) makes it possible to express the Fourier transform1 of v*(t) as

$$F\left[v^{*}(t)\right] = \frac{1}{T_s}\sum_{n=-\infty}^{\infty} V\!\left(f - \frac{n}{T_s}\right) \qquad \text{(4D.3)}$$

where V( f ) is the Fourier transform of v(t). Since v(t) is assumed to be periodic with frequency fo, it may of course be represented by its Fourier series as

$$V(f) = \sum_{m=-\infty}^{\infty} c_m\, \delta\left(f - m f_o\right) \qquad \text{(4D.4)}$$

where Tp = fo–1 and the Fourier series coefficients cm are given by

$$c_m = \frac{1}{T_p}\int_{-T_p/2}^{T_p/2} v(t)\, e^{-j2\pi f_o m t}\, dt \qquad \text{(4D.5)}$$

Substitution of (4D.4) into (4D.3) produces

$$F\left[v^{*}(t)\right] = F_{clk}\sum_{n=-\infty}^{\infty}\sum_{m=-\infty}^{\infty} c_m\, \delta\left(f - nF_{clk} - m f_o\right) \qquad \text{(4D.6)}$$

The transfer function corresponding to the ideal zero-order hold function is given by

$$H(f) = \frac{1 - e^{-j2\pi f T_s}}{j2\pi f} \qquad \text{(4D.7)}$$

After appending this to (4D.6), the final transform for d(t) in Figure 4D-2 is given by [4]

$$D(f) = \exp\left(-j\pi f T_s\right)\frac{\sin\left(\pi f T_s\right)}{\pi f T_s}\sum_{n=-\infty}^{\infty}\sum_{m=-\infty}^{\infty} c_m\, \delta\left(f - nF_{clk} - m f_o\right) \qquad \text{(4D.8)}$$

In summary, the DDS action causes all frequency components residing in v(t) to be aliased about every harmonic of the clock frequency Fclk. Therefore, if v(t) is a perfect sine wave, only the nearest alias frequency is normally of concern. On the other hand, if v(t) is chosen to be triangular, for example,

1 The asterisk denotes ideal time-sampling rather than complex conjugation.


harmonics of the triangular wave would be aliased arbitrarily close to the desired fundamental output frequency and a substantially different spurious performance would be realized.
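The aliasing structure implied by (4D.8) can be enumerated directly. The sketch below simply lists the |n·Fclk ± m·fo| terms that land in the first Nyquist zone; the clock and output frequencies, and the harmonic/image counts, are illustrative assumptions:

```python
def dds_alias_frequencies(f_clk, f_o, n_harmonics=5, n_images=3):
    """List first-Nyquist-zone alias locations |n*Fclk +/- m*fo| implied by
    (4D.8): every harmonic m of v(t) aliases about every clock harmonic n."""
    spurs = set()
    for n in range(n_images + 1):
        for m in range(1, n_harmonics + 1):
            for f in (n * f_clk + m * f_o, n * f_clk - m * f_o):
                f = abs(f)
                if 0.0 < f <= f_clk / 2.0:
                    spurs.add(f)
    return sorted(spurs)

# 40-MHz clock, 7.1-MHz output: the third harmonic of a nonsinusoidal v(t)
# folds to |40 - 21.3| = 18.7 MHz, relatively near the desired output
print(dds_alias_frequencies(40e6, 7.1e6))
```

Running this for a sinusoidal v(t) (n_harmonics = 1) shows only the clock-related images, which is why the sine look-up architecture is so much cleaner than, say, a triangle-wave source.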

4D.2 Phase Truncation and Related Spurious Effects

Binary precision issues in Figure 4D-1 represented a major area of investigation in the 1980s, before higher levels of digital integration were possible and before ∆-Σ techniques became well known. An elegant closed-form theory that predicts the location and amplitude of DDS-related spurious components was eventually developed in [5], [6]. The underlying technical details are also available in [1] but will not be pursued further here. Perhaps the most interesting result is the predicted worst-case output spurious level versus the number of bits of phase precision and the number of DAC bits, as shown in Figure 4D-3. This set of curves makes it possible to immediately estimate the spurious performance achievable with a conventional DDS structure like that shown in Figure 4D-1.


Figure 4D-3 Maximum DDS spur level versus the number of bits of phase precision and DAC precision.2

2 Book CD:\Ch4\u13148_dds_spurs.m.

4D.3 DDS Output C/N

The output power spectral density of the DDS can be computed by using the autocorrelation function result given by (4.31). Under the best of circumstances, the truncation errors are statistically uncorrelated and the output two-sided noise spectrum is given by [1]

$$S_{dn}(f) = \frac{\sigma^2}{P}\left[\frac{\sin\left(\pi f T_s\right)}{\pi f T_s}\right]^2 \qquad \text{(4D.9)}$$

where σ² is the variance of the DDS output sample errors relative to an ideal sinusoid and P = 2^L. Referring to the terminology used in Figure 4D-1, if the peak sine wave value is assumed to have an amplitude of 2^(D–1), it can be shown that the output carrier-to-noise-floor density ratio is given by

$$\frac{C}{N} = 10\log_{10}\left(3\, F_{clk}\, 2^{2D-1}\right) = 6.02D + 1.76 + 10\log_{10}\left(F_{clk}\right)\ \text{dBc/Hz} \qquad \text{(4D.10)}$$

In the case where D = 10 bits and Fclk = 40 MHz, for example, C/N = 138 dBc/Hz. The DDS output contains both AM and PM noise, so a bandpass filtering operation followed by a hard-limiter could improve the phase noise performance an additional 3 dB in theory.
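Both forms of (4D.10) are easy to confirm numerically; the sketch below reproduces the D = 10-bit, Fclk = 40-MHz example from the text:

```python
import math

def dds_cn_exact_dbc_hz(dac_bits, f_clk):
    """C/N from the first form of (4D.10): 10*log10(3*Fclk*2^(2D-1))."""
    return 10 * math.log10(3 * f_clk * 2 ** (2 * dac_bits - 1))

def dds_cn_approx_dbc_hz(dac_bits, f_clk):
    """C/N from the second form of (4D.10): 6.02*D + 1.76 + 10*log10(Fclk)."""
    return 6.02 * dac_bits + 1.76 + 10 * math.log10(f_clk)

cn = dds_cn_exact_dbc_hz(10, 40e6)
print(round(cn))   # 138 (dBc/Hz), matching the example in the text
```

The two forms differ only through the rounding of 20·log10(2) to 6.02 and 10·log10(3/2) to 1.76, so they agree to within a few hundredths of a decibel.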

References
[1] Crawford, J.A., Frequency Synthesizer Design Handbook, Norwood, MA: Artech House, 1994.
[2] Volder, J.E., “The CORDIC Trigonometric Computing Technique,” IRE Trans. Electronic Computers, Sept. 1959.
[3] Parhi, K.K., VLSI Digital Signal Processing Systems—Design and Implementation, New York: John Wiley & Sons, 1999.
[4] Reinhardt, V.S., paper presented at the 17th Annual Precise Time and Time Interval Applications Planning Meeting (NASA/DOD), Washington, D.C., December 3-5, 1985.
[5] Nicholas, H.T., and H. Samueli, “An Analysis of the Output Spectrum of Direct Digital Frequency Synthesizers in the Presence of Phase-Accumulator Truncation,” 41st Annual Frequency Control Symposium, 1987, pp. 495-502.
[6] Nicholas, H.T., et al., “The Optimization of Direct Digital Frequency Synthesizer Performance in the Presence of Finite Word Length Effects,” 42nd Annual Frequency Control Symposium, 1988, pp. 357-363.

CHAPTER 5
System Performance

5.1 SYSTEM PERFORMANCE OVERVIEW

Phase noise impacts system performance in a variety of ways. In receivers, local oscillator (LO) phase noise is most detrimental to:

• Adjacent channel rejection;
• Out-of-band blocking performance;
• Input third-order intercept point (IIP3);
• Receive signal-to-noise ratio.

The first three issues are related to the large-frequency-offset phase noise performance of the LO, whereas the last item is driven by close-in phase noise performance. In transmitter applications, the LO phase noise primarily affects:

• Adjacent channel noise levels;
• Ultimate transmitter output noise floor;
• Transmit signal-to-noise ratio.

As in the receive case, the first two items are attributable to phase noise performance at large frequency offsets, whereas the last is due primarily to close-in phase noise performance. Phase noise degradation in receivers and transmitters is examined in detail in this chapter.

In communication systems, the amount and type of phase noise-related degradation also depend on the type of signaling waveform being used. For example, binary phase-shift keyed (BPSK) signals are very robust under poor phase noise conditions, whereas M-ary quadrature amplitude modulation (M-QAM) signals are not. Phase noise degradation related to these and other signaling waveforms is considered in this chapter. Phase noise that is more appropriately referred to as time-jitter can significantly affect DAC and ADC performance, depending on the number of bits involved, the clock rates used, and the signal environment present. The chapter concludes with an in-depth look at these important topics.

5.2 INTEGRATED PHASE NOISE

Local oscillator phase noise performance is frequently summarized by specifying the total integrated phase noise in degrees rms. This is a common practice for both receive and transmit applications.


A reasonable first-order approximation for any phase-locked source is the Lorentzian power spectral density given by

$$L(f) = \frac{L_o}{1 + \left(\dfrac{f}{f_c}\right)^2} \qquad \text{(5.1)}$$

This is a two-sided spectrum that is assumed to be centered on the RF carrier frequency, and it has units of Hz–1 as discussed earlier in Section 4.6.1. The total integrated phase noise from (5.1) is given by

$$\sigma_\varphi^2 = \int_{-\infty}^{\infty} \frac{L_o}{1 + \left(\dfrac{f}{f_c}\right)^2}\, df = \pi L_o f_c \ \ \text{rad}^2 \qquad \text{(5.2)}$$

This result is used to create the nomographs given in Figure 5-1 and Figure 5-2, which are helpful for first-order design tradeoffs between the PLL closed-loop bandwidth and the close-in phase noise level being considered.


Figure 5-1 Constant integrated phase noise1 for Lorentzian spectrum parameter choices using (5.2).

5.3 LOCAL OSCILLATORS FOR RECEIVE SYSTEMS One of the most important attributes of any communication receiver is its ability to separate a desired signal from the myriad of other signals and noise that reach its antenna. This attribute is known as receiver selectivity. Receiver selectivity is normally defined in the RF frequency-domain as the maximum ratio of undesired to desired signal level in dB for which a specific output signalto-noise ratio (SNR) or bit error rate (BER) can be maintained from the receiver versus frequency separation. The (stronger) undesired signal may have the same attributes as the desired signal, or it may be a continuous-wave (CW) sinusoidal signal, or something altogether different. The receiver 1

Book CD:\Ch5\u13150_total_noise.m.
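The tradeoff captured by the nomographs follows directly from (5.2). The sketch below evaluates it both ways; the example values (1° rms at fc = 10 kHz) are illustrative:

```python
import math

def integrated_phase_deg_rms(L0_dbc_hz, fc_hz):
    """Total integrated phase noise of the Lorentzian (5.1) via (5.2):
    sigma^2 = pi * L0 * fc (rad^2), converted to degrees rms."""
    L0 = 10 ** (L0_dbc_hz / 10)
    sigma_rad = math.sqrt(math.pi * L0 * fc_hz)
    return math.degrees(sigma_rad)

def required_L0_dbc_hz(target_deg_rms, fc_hz):
    """Invert (5.2): close-in level L0 needed for a target integrated noise."""
    sigma_rad = math.radians(target_deg_rms)
    return 10 * math.log10(sigma_rad**2 / (math.pi * fc_hz))

print(round(required_L0_dbc_hz(1.0, 1e4), 1))   # about -80.1 dBc/Hz
```

Because L0 trades linearly against fc in (5.2), halving the corner frequency buys back 3 dB of close-in phase noise for the same integrated total, which is the essence of Figure 5-1.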


should be able to tolerate significantly stronger interference levels as the frequency separation between the desired and undesired channels is increased.


Figure 5-2 Alternative presentation of Lorentzian spectrum details2 similar to Figure 5-1.

Some of the most stringent receiver selectivity requirements appear in military communication systems. These systems must contend with very strong signal jamming conditions and the colocation of potentially many transceivers operating in close proximity to each other, as in a command-post situation. The command-post scenario is often the most demanding because many high-power transmitters are operating while sensitive receive operations are simultaneously being carried on. Receiver selectivity requirements in modern cellular telephone handsets are also quite demanding, particularly for narrowband systems like GSM and EDGE. These systems utilize tightly spaced (e.g., 200 kHz) RF channels to maximize their capacity, in contrast to more advanced wideband systems based on spread-spectrum techniques (e.g., WCDMA) or orthogonal frequency division multiplex (OFDM) methods. Local oscillator phase noise requirements are often more demanding for the narrowband systems, as examined in this section.

Receiver selectivity is always a function of the frequency separation between the undesired and desired channels. A signal that falls in either channel next to the desired channel is usually called the adjacent channel. A signal falling two channels away is called the second adjacent channel or the alternate channel. Receiver selectivity is normally specified for these two adjacent-channel cases, whereas strong signals that appear at larger frequency offsets relative to the desired channel are termed blockers. Blocking signal levels are usually very strong, and their name is derived from their effect on receiver operation. Once a blocking signal is sufficiently strong at the receiver’s input, the receiver’s output SNR will be degraded regardless of the frequency offset involved until the receiver’s front-end filtering begins to attenuate the blocking signal. This blocking behavior is caused by either small-signal suppression due to signal overload in the

2 Ibid.


receiver, or through reciprocal mixing3 where the blocking signal is heterodyned into the receiver’s passband by the imperfect phase noise sidebands of the receiver’s local oscillator. This latter mechanism is of greatest interest here and is discussed below.

These detailed phase noise issues can be studied further by considering the idealized direct-conversion receiver shown in Figure 5-3. Assume that the receive signal at the antenna is given by


Figure 5-3 Idealized direct-conversion receiver with local oscillator inputs LOI and LOQ and outputs IR, QR.

$$r(t) = \underbrace{\mathrm{Real}\left\{\left[I(t) + jQ(t)\right]\exp\left(j\omega_0 t\right)\right\}}_{\text{Desired Signal}} + \underbrace{\mathrm{Real}\left\{\left[x(t) + jy(t)\right]\exp\left(j\omega_1 t\right)\right\}}_{\text{Blocking Signal}} \qquad \text{(5.3)}$$

The desired signal is centered at radian frequency ω0 whereas a (strong) blocking signal is centered at ω1. Under worst-case conditions, the desired signal is at or slightly above (e.g., 3 dB) the specified sensitivity level for the receiver without any blockers being present, whereas the power level of the blocking signal can be at the maximum input signal level specified for the receiver (e.g., –25 dBm). Although this scenario imposes significant demands on the entire receiver, only the phase noise issues involved with the local oscillator are considered here. The local oscillator signals shown in Figure 5-3 are given here by

$$LO_I(t) = 2\cos\left[\omega_0 t + \varphi_n(t)\right] \qquad \text{(5.4)}$$

$$LO_Q(t) = -2\sin\left[\omega_0 t + \varphi_n(t)\right] \qquad \text{(5.5)}$$

where ϕn(t) represents the phase noise present on the local oscillator. The filters labeled HB( f ) in Figure 5-3 represent ideal roofing filters that are intended to pass the desired receive signal unchanged but otherwise completely eliminate unwanted adjacent channel signals and doublefrequency terms that arise in the frequency mixing process. The desired and blocking signals are spectrally represented as shown in Figure 5-4. The baseband outputs that result from the mixing operations in Figure 5-3 are given by

IR = I cos(ϕn) + Q sin(ϕn) + x cos(Δωt - ϕn) - y sin(Δωt - ϕn) + nR    (5.6)

3. Reciprocal mixing refers to the unwanted frequency mixing between the phase noise sidebands of a receiver's local oscillator and unwanted off-channel signals that ultimately causes a degradation in the signal-to-noise ratio of the desired signal at the mixer output.


QR = Q cos(ϕn) - I sin(ϕn) + x sin(Δωt - ϕn) + y cos(Δωt - ϕn) + nI    (5.7)

where most of the explicit functional time-dependencies have been dropped, and Δω = ω1 - ω0. The noise contributions from the channel's additive white Gaussian noise (AWGN) are represented by nR and nI. The blocking signal portion in (5.6) and (5.7) contributes interference only if ϕn has frequency components in the vicinity of Δω, because this term is otherwise eliminated by the idealized roofing filters HB(f). For most reasonable systems, |ϕn| < 10° and the small-angle approximations can be used to simplify (5.6) and (5.7) to

IR = I cos(ϕn) + Q sin(ϕn) + [ x cos(Δωt) - y sin(Δωt) ] + [ xϕn sin(Δωt) + yϕn cos(Δωt) ] + nR    (5.8)

QR = Q cos(ϕn) - I sin(ϕn) + [ x sin(Δωt) + y cos(Δωt) ] + [ yϕn sin(Δωt) - xϕn cos(Δωt) ] + nI    (5.9)

where in each case the first two terms are the desired signal and the bracketed terms are the blocking signal remnants.

Figure 5-4 Spectral view of the desired receive signal centered at frequency ω0/2π and the blocking signal centered at ω1/2π (signal widths W0 and W1; frequency separation ΔFSep = (ω1 - ω0)/2π Hz).

If the spectral widths of the desired and blocking signals are such that their sum is less than 2ΔFSep Hz, no direct spectral overlap will occur and these results can be further simplified to

IR = I cos(ϕn) + Q sin(ϕn) + xϕn sin(Δωt) + yϕn cos(Δωt) + nR    (5.10)

QR = Q cos(ϕn) - I sin(ϕn) + yϕn sin(Δωt) - xϕn cos(Δωt) + nI    (5.11)

where the first two terms in each equation are the desired signal and the ϕn-weighted terms are the blocking signal remnants.

Note that the small-angle approximation has not been used in the desired signal portion of these results. These final two equations form the starting point for the computations that follow in Sections 5.3.1 and 5.3.2.

5.3.1 Close-In Phase Noise Effects

The close-in local oscillator phase noise effects involve only the desired signal portion of (5.10) and (5.11). The noise terms contributed to IR and QR by the phase noise are given by


IRn = I - IR = I[ 1 - cos(ϕn) ] - Q sin(ϕn) ≈ -Qϕn    (5.12)

QRn = Q - QR = Q[ 1 - cos(ϕn) ] + I sin(ϕn) ≈ Iϕn    (5.13)

Invoking the small-angle approximation (i.e., retaining only the first-order terms) in (5.12) and (5.13), and assuming that the modulation and noise processes are statistically independent and wide-sense stationary, the power spectral densities (PSDs) for the I- and Q-channel noise contributions are given by

PRI(f) = SQ(f) ⊗ Sϕ(f)    (5.14)

PRQ(f) = SI(f) ⊗ Sϕ(f)    (5.15)

where SI(f) and SQ(f) represent the power spectral densities of the original baseband signals being transmitted, Sϕ(f) is the PSD of the phase noise process, and ⊗ denotes convolution in the frequency domain. In order to obtain some generally applicable results, assume that the I- and Q-channel information spectra are uniformly rectangular with the PSD

SIQ(f) = LIQ rectB(f) = LIQ for -B ≤ f ≤ B, and 0 otherwise    (5.16)

Further assume that the phase noise spectrum is a Lorentzian spectrum with an additional flat noise floor, so that the complete local oscillator spectrum is given by

Sϕ(f) = LFloor + Lo / [ 1 + (f/fc)² ] + δ(f)    (5.17)

In this form, LFloor and Lo have units of rad²/Hz because the carrier term has unit power. The convolution of these two spectra is given by

So(f) = 2B LIQ LFloor + Lo LIQ fc [ tan⁻¹((f + B)/fc) - tan⁻¹((f - B)/fc) ] + LIQ rectB(f)    (5.18)

If this result is normalized with respect to the PSD level of the ideal desired signal LIQ, it becomes

So_norm(f) = 2B LFloor + Lo fc [ tan⁻¹((f + B)/fc) - tan⁻¹((f - B)/fc) ] + rectB(f)    (5.19)

where the last term in (5.19) represents the ideal desired signal. Several computed spectra focusing on only the noise portion of (5.19) are presented in Figure 5-5. The Lorentzian phase noise creates direct interference for the desired signal that is proportional to Lo fc, and the width of the spectrum is clearly broadened. The spectral sidelobes prove problematic for strong adjacent channel signals, as discussed in Section 5.3.2.
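The noise-only portion of (5.19) is easy to evaluate numerically. A minimal Python sketch follows (the book's own scripts on the CD are MATLAB; the function name is illustrative), using the Figure 5-5 parameters B = 1 Hz and LFloor = 10⁻⁴:

```python
import numpy as np

def so_norm_noise(f, B=1.0, fc=0.05, L_floor=1e-4):
    # Noise-only portion of (5.19): flat floor plus the rect/Lorentzian
    # convolution, normalized to the desired-signal PSD level L_IQ.
    Lo = 1.0 / (np.pi * fc)  # keeps total integrated phase noise pi*Lo*fc = 1
    return 2 * B * L_floor + Lo * fc * (np.arctan((f + B) / fc)
                                        - np.arctan((f - B) / fc))

f = np.linspace(-10.0, 10.0, 2001)
psd = so_norm_noise(f)
```

Sweeping fc reproduces the trend of Figure 5-5: the narrower fc is, the more the phase noise concentrates in-band rather than spreading into the sidelobe region.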


Figure 5-5 Example spectrum convolution results4 from the noise-only portion of (5.19) with B = 1 Hz, LFloor = 10⁻⁴, and Lorentzian spectral width parameter values fc = 0.001, 0.01, 0.05, and 0.10 Hz. Lo = 1/(π fc) in each case to keep the total integrated phase noise equal to unity.

Only the noise spectrum portion that falls through the receiver's matched-filters HB(f) in Figure 5-3 will ultimately degrade the signal-to-noise ratio (SNR) performance of the receiver. For the rectangular signal spectrum example considered thus far, the matched-filter characteristic represented by |HB(f)| is identical to (5.16) aside from a constant scaling factor. Consequently, the signal variance at each matched-filter output due to the phase noise is given by

σMF² = ∫_{-∞}^{∞} So(f) |HB(f)|² df
     = 4B² LIQ LFloor + 2B LIQ Lo fc [ 2 tan⁻¹(γ) - (1/γ) loge(1 + γ²) ]    (5.20)

with γ = 2B/fc. The first term in (5.20) is the contribution due to the flat noise floor term in (5.17). The second term is due to the Lorentzian phase noise spectrum, which asymptotically becomes5 equal to 2πB LIQ Lo fc as γ → ∞. The behavior of the bracketed term in (5.20) determines what portion of the total integrated phase noise undermines the receiver's SNR performance. This can be examined more closely by considering the normalized quantity

η = (1/π) [ 2 tan⁻¹(γ) - (1/γ) loge(1 + γ²) ]    (5.21)

4. Book CD:\Ch5\u13151_recip_mix.m.
5. The total integrated phase noise for the Lorentzian PSD over (-∞, ∞) is also π Lo fc rad².


where again γ = 2B/fc. The behavior of η versus γ is shown in Figure 5-6. It is worthwhile to note that 90% of the total integrated Lorentzian phase noise is involved for 2B/fc ≥ 30, whereas less than 10% is involved if 2B/fc ≤ 0.30. The curve provides insight into how the shape of the phase noise spectrum affects the SNR performance of the receiver. It is advisable to avoid 2B/fc ratios that fall between these extremes, because changes in the PLL closed-loop bandwidth could otherwise contribute sizeable SNR performance variations unless π Lo fc is very small to begin with.

Figure 5-6 Behavior6 of η versus γ from (5.21), for γ = 10⁻² to 10⁴.

Phase noise performance impact on a receiver also depends significantly on the receiver's required SNR at its specified input sensitivity level. The noise variance at the output of each matched-filter in Figure 5-3 is the sum of the phase noise contribution given by (5.20) and the contribution due to the AWGN of the receive channel. Making use of (5.10) through (5.13), the input SNR to the receiver is given by

SNRin = E(I² + Q²) / E(nI² + nQ²)    (5.22)

and the output SNR is given by7

SNRout = E(I² + Q²) / { E(nI² + nQ²) + E[ (ϕnI)‾ ² + (ϕnQ)‾ ² ] }    (5.23)

where E denotes statistical expectation as developed in Appendix 4A, and the overbar within the parentheses represents the filtering done by the matched-filters. Since the phase noise and modulation are statistically independent, the output SNR can be rewritten as

6. Book CD:\Ch5\u13151_recip_mix.m.
7. The rectangular matched-filter assumption does not alter the PSD of the I and Q signals.

SNRout = [ E(nI² + nQ²)/E(I² + Q²) + E( (ϕn‾)² ) ]⁻¹ = [ 1/SNRin + σMF²/(2B LIQ) ]⁻¹    (5.24)

This result assumes that the power spectral densities for I and Q in (5.14) and (5.15) are identical. This relationship makes it possible to directly compute the SNR loss due to phase noise versus the receiver input SNR. Example results using (5.24) are shown in Figure 5-7 in the context of a GSM/EDGE receiver, and similar results for a WCDMA receiver are provided in Figure 5-8. Bit error rate versus receive SNR and integrated phase noise is considered for a variety of different digital modulation types later in Section 5.5.

Figure 5-7 Phase noise related loss versus input SNR for a representative GSM/EDGE system,8 for Lo = -60 to -85 dBc/Hz in 5 dB steps.

5.3.2 Large Frequency Offset Phase Noise Effects

Strong off-channel signals at the receiver's input that are not attenuated by filtering are heterodyned to baseband by the local oscillator's phase noise sidebands, as suggested in Figure 5-9, and create unwanted interference that directly competes with the desired signal. The severity of the problem depends on the strength of the interferer relative to the desired signal (ΔLdB) and the level of the phase noise sidebands at frequency offset ΔFsep. If the power spectral density of the strong interferer is assumed to be rectangular as in (5.16) and the local oscillator phase noise spectrum has a Lorentzian shape, the resultant interference that appears at the mixer outputs in Figure 5-3 will be identical to the noise levels shown in Figure 5-5 at a frequency-axis value of ΔFsep. The PSD of the interfering signal at the baseband I- and Q-channel outputs is given by

8. Book CD:\Ch5\u13152_snr_loss.m, with LFloor = -150 dBc/Hz, Lorentzian fc = 50 kHz, channel bandwidth 2B = 200 kHz; the phase noise pedestal Lo is the varied parameter in dBc/Hz.


Figure 5-8 Phase noise related loss versus input SNR for a representative WCDMA system,9 for Lo = -75 to -95 dBc/Hz in 5 dB steps.

So(f) = 2B Lx LFloor + fc Lo Lx [ tan⁻¹((f + B - ΔFsep)/fc) - tan⁻¹((f - B - ΔFsep)/fc) ]    (5.25)

in which Lx is the power spectral density of the interferer. Example baseband spectra are shown in Figure 5-10, where the interfering signal level is 35, 40, and 45 dB stronger than the desired signal. The noise variance present at each matched-filter output in Figure 5-3 due to the phase noise mixing with the strong interferer is given by

σMFX² = 4B² Lx LFloor + Lx Lo fc [ g((a1 + B)/fc) - g((a1 - B)/fc) - g((a2 + B)/fc) + g((a2 - B)/fc) ]    (5.26)

where the inner function is given by

g(u) = fc [ u tan⁻¹(u) - ½ loge(1 + u²) ]    (5.27)

For receiver desensitization (i.e., loss of SNR) caused by a strong interferer, it is convenient to think in terms of the interferer being ΔLdB stronger than the desired signal as shown in Figure 5-9. On the basis of (5.24) with the noise variance now given by (5.26), and letting ΔLdB = 10 log10(χ),

σMFX² / (2B LIQ) = χ { 2B LFloor + [Lo fc/(2B)] [ g((a1 + B)/fc) - g((a1 - B)/fc) - g((a2 + B)/fc) + g((a2 - B)/fc) ] }    (5.28)

9. Book CD:\Ch5\u13152_snr_loss.m, with LFloor = -150 dBc/Hz, Lorentzian fc = 75 kHz, channel bandwidth 2B = 3.84 MHz; the phase noise pedestal Lo is the varied parameter in dBc/Hz.


Figure 5-9 Strong interfering channels are heterodyned on top of the desired receive channel by local oscillator sideband noise. (The interferer is ΔLdB stronger than the desired channel and offset by ΔFSep.)

Figure 5-10 Baseband spectra10 caused by reciprocal mixing between a strong interferer that is offset 4B Hz higher in frequency than the desired signal and stronger than the desired signal by 35, 40, and 45 dB.

The first term in (5.28), 2BLFloor, is attributable to the ultimate blocking performance of the receiver as discussed in Section 5.3. The resultant output SNR versus input SNR is given by

SNRout = [ 1/SNRin + σMFX²/(2B LIQ) ]⁻¹    (5.29)

It is worthwhile to note that the interfering spectra in Figure 5-10 are not uniform across the matched-filter frequency region [-B, B]. Multicarrier modulation like OFDM (see Section 5.6) will potentially be affected differently than single-carrier modulation such as QAM (see Section 5.5.3) when the interference spectrum is not uniform with respect to frequency. The result given by (5.29) is shown for several interfering levels versus receiver input SNR in Figure 5-11.

10. Book CD:\Ch5\u13157_rx_desense.m. Lorentzian spectrum parameters: Lo = -90 dBc/Hz, fc = 75 kHz, LFloor = -160 dBc/Hz, B = 3.84/2 MHz.


Figure 5-11 Resultant receiver desensitization11 (input-to-output loss in SNR) due to the reciprocal mixing spectra shown in Figure 5-10, for interferers 35, 40, and 45 dB stronger than the desired signal.

Key Points:
• Close-in local oscillator phase noise performance primarily affects the achievable receive SNR as given by (5.24).
• Reciprocal mixing refers to the unwanted frequency conversion of strong off-channel signals by the local oscillator's phase noise sidebands.
• Large frequency offset phase noise requirements for the receiver's local oscillator are dictated by the overall receiver selectivity requirements in conjunction with reciprocal mixing contributions.

5.4 LOCAL OSCILLATORS FOR TRANSMIT SYSTEMS

Phase noise effects on transmit systems are no less important than on receive systems. Generally speaking, the two most important performance measures involved are (i) the signal-to-noise ratio of the transmitted signal and (ii) the amount of residual transmit signal energy that is allowed to fall into the adjacent channels. The first measure is primarily dictated by the close-in phase noise performance of the main local oscillator, whereas the second is usually driven by phase noise performance at large frequency offsets from the carrier.

5.4.1 Close-In Phase Noise Effects

The SNR at a distant receiver can be no better than the originating SNR at the transmitter. Assuming the rectangular transmit signal spectrum (5.16) and Lorentzian phase noise spectrum (5.17) used for the receiver discussions, the achievable transmit SNR due to phase noise limitations is given by

SNRTx ≤ -10 log10 { 2B LFloor + π Lo fc η(γ) }  dB    (5.30)

where η(γ) is defined by (5.21) and γ = 2B/fc once more. Since η(γ) ≤ 1, using the total integrated phase noise of the Lorentzian spectrum (η = 1) gives the most conservative limit, -10 log10(2B LFloor + π Lo fc) dB.

11. Ibid.
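The transmit SNR bound (5.30) can be sketched in a few lines of Python (function name and parameter values are illustrative, not from the book):

```python
import numpy as np

def snr_tx_bound_db(B, fc, Lo, Lfloor):
    # Achievable transmit SNR bound of (5.30); Lo, Lfloor in rad^2/Hz.
    gamma = 2.0 * B / fc
    eta = (2.0 * np.arctan(gamma) - np.log1p(gamma**2) / gamma) / np.pi
    return -10.0 * np.log10(2.0 * B * Lfloor + np.pi * Lo * fc * eta)
```

For GSM-like numbers (2B = 200 kHz, fc = 50 kHz, Lo = -80 dBc/Hz, LFloor = -150 dBc/Hz) the bound is roughly 30 dB, and it always sits above the more conservative limit obtained with η = 1.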


5.4.2 Large Frequency Offset Phase Noise Effects

Phase noise content at large frequency offsets is problematic because it causes interference to other frequency channels. Discrete spurious sideband tones at the main local oscillator output are often the most detrimental because they normally result in small modulation replicas centered on each spurious output frequency. The Federal Communications Commission (FCC) and other spectrum-governing organizations specify unwanted transmitter output emissions based in part on the transmitter's output power level, relying on free-space attenuation with distance to reduce the emissions below the thermal noise limit. For example, assume that a GSM mobile transmitter is broadcasting at a power level of 30 dBm at 900 MHz and that its phase noise sideband level at 20 MHz frequency offset is -150 dBc/Hz. Assuming isotropic radiation patterns for the transmitter and the distant receiver of interest, the free-space loss for a separation distance D (meters) is

LossdB = 20 log10( 4πD/λ )    (5.31)

The noise power measured at D = 2 meters would be -157.5 dBm/Hz, which is still well above the ambient thermal noise floor of -174 dBm/Hz. This amount of additional noise would completely overwhelm a nearby receiver, most likely causing the call to be dropped. A more realistic requirement for the transmitter would be a sideband noise level of -170 dBc/Hz, which would result in a smaller noise contribution at the receiver of -177.5 dBm/Hz. If the nearby receiver's nominal noise figure is 5 dB, the additional noise would raise the effective noise figure to 5.6 dB, representing a 0.6 dB degradation in sensitivity, which is reasonable. A convenient plot that makes it simple to see how an interfering power spectral density raises the local noise floor for a receiver is provided in Figure 5-12. The sum of two noise quantities is considered in the plot, the first having a power level of α dBm/Hz and the second a power level of (α - x) dBm/Hz. The horizontal axis corresponds to the value x in dB, whereas the vertical axis is the adjustment to the value α (in dB) due to the additional noise and is given by

ydB = 10 log10( 10^(α/10) + 10^((α-x)/10) ) - α    (5.32)

Figure 5-12 Power contribution12 of a second noise source of (α - x) dBm/Hz to the total power of the two, relative to the first noise source power level of α dBm/Hz, for x = 0 to 16 dB.

12. Book CD:\Ch5\u13173_noise_pwr.m.


This relationship is tabulated for some of the most frequently used adjustment factor values in Table 5-1. For example, if the first source has a power spectral density of -160 dBm/Hz and the second has a density of -169.1357 dBm/Hz, the resultant power spectral density is -159.5 dBm/Hz.

Table 5-1 Sum of Noise Powers

Total Power Increase Relative to 1st, dB    Level of 2nd Source Relative to 1st, dB
1                                           -5.8683
0.50                                        -9.1357
0.25                                        -12.2728
0.125                                       -15.3461

The formulas developed earlier for the receiving case apply equally well to the transmit situation. In cases where the local oscillator's phase noise spectrum is Lorentzian and the baseband in-phase/quadrature-phase modulation spectra are rectangular, the transmitter's output spectrum is like that shown earlier for the receive situation in Figure 5-5. The underlying frequency-domain convolution broadens the spectral sidelobe region of the otherwise ideal rectangular baseband spectra as shown, potentially leading to the adjacent channel interference of concern.

5.5 LOCAL OSCILLATOR PHASE NOISE IMPACT ON DIGITAL COMMUNICATION ERROR RATE PERFORMANCE

It has been shown in previous sections how close-in and far-out phase noise characteristics affect system performance differently. Close-in phase noise effects as they apply to single-carrier signal reception using a noisy phase reference are considered in this section. These results provide a more detailed analysis of specific digital modulation waveforms than the general discussions presented earlier. The receiver's noisy phase reference is assumed to introduce a random phase error that is constant during each modulation symbol period, and the phase errors are assumed to be statistically independent from symbol to symbol. The probability density assumed for the phase errors is the Tikhonov density function (1.56), given by

pϕ(ϕ) = exp[ cos(ϕ)/σϕ² ] / [ 2π I0(σϕ⁻²) ]    (5.33)

where I0(·) is the zeroth-order modified Bessel function. In the results that follow, this is closely approximated [1] by

pϕ(ϕ) ≅ exp[ (cos(ϕ) - 1)/σϕ² ] / ( σϕ √(2π) )    (5.34)

The Tikhonov density is used because it closely represents the phase error behavior of a PLL as described in Sections 1.4 and 10.6. Only uncoded bit- or symbol-error rate performance is considered in this section. More general results that can be useful for assessing phase noise-related degradation for coded systems are deferred to Section 5.8 and Appendix 5B.
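The exact density (5.33) and the approximation (5.34) are easy to compare numerically; NumPy provides the zeroth-order modified Bessel function as np.i0. A sketch, for σϕ = 10°:

```python
import numpy as np

def tikhonov_pdf(phi, sigma_phi):
    # Exact Tikhonov density (5.33)
    k = sigma_phi**-2
    return np.exp(k * np.cos(phi)) / (2.0 * np.pi * np.i0(k))

def tikhonov_pdf_approx(phi, sigma_phi):
    # Gaussian-like approximation (5.34)
    return (np.exp((np.cos(phi) - 1.0) / sigma_phi**2)
            / (sigma_phi * np.sqrt(2.0 * np.pi)))

phi = np.linspace(-np.pi, np.pi, 20001)
s = np.radians(10.0)
exact = tikhonov_pdf(phi, s)
approx = tikhonov_pdf_approx(phi, s)
```

For σϕ this small the two densities agree to well under 1% at the peak, and both integrate to essentially unity over (-π, π].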


5.5.1 Uncoded BPSK Bit Error Rate Performance

The marginal uncoded BER for BPSK given a static phase error ϕ is

PBPSK_b(Eb/No | ϕ) = ½ erfc[ √(Eb/No) cos(ϕ) ]    (5.35)

where Eb/No is the all-important energy-per-bit to noise spectral density ratio and No/2 is the two-sided noise power spectral density of the channel. The average BER is given by integrating the marginal BER as

PBPSK_b(Eb/No) = ∫_{-π}^{π} PBPSK_b(Eb/No | ϕ) pϕ(ϕ) dϕ    (5.36)

where pϕ(ϕ) is given by (5.34). The numerical results are presented graphically in Figure 5-13. Since BPSK can be considered a variant of amplitude-only modulation, it should not be surprising that the BER is fairly insensitive to phase noise as shown.

where Eb/No is the all-important energy-per-bit to noise spectral density ratio and No/2 is the twosided noise power spectral density of the channel. The average BER is given by integrating the marginal BER as E  π E  (5.36) PBPSK _ b  b  = ∫ PBPSK _ b  b ϕ  pϕ (ϕ ) dϕ   π −  No   No  where pϕ (ϕ ) is given by (5.34). The numerical results are presented graphically in Figure 5-13. Since BPSK can be considered a variant of amplitude-only modulation, it should not be surprising that the BER is fairly insensitive to phase noise as shown. BPSK BER with Noisy LO

-1

10

-2

10

o

σφ = 20 -3

10

Bit Error Rate

-4

o

σφ = 17.5

10

-5

10

o

σφ = 15

-6

10

o

σφ = 12.5 o

σφ = 0

-7

10

o

σφ = 10 -8

10

3

4

5

6

7

8

9 10 Eb/No, dB

11

12

13

14

15

Figure 5-13 Uncoded BER for BPSK with noisy local oscillator.13 A larger plot is provided in Figure 2-34. 13

Book CD:\Ch5\u13154_bpsk_ber.m.
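The integral (5.36) evaluates quickly with a simple quadrature. A Python sketch (the book's BPSK curves come from a MATLAB CD script; this reimplementation is illustrative):

```python
import math

def bpsk_ber(ebno_db, sigma_phi_deg, n=4001):
    # Average the conditional BER (5.35) over the approximate Tikhonov
    # density (5.34), per (5.36).
    ebno = 10.0**(ebno_db / 10.0)
    s = math.radians(sigma_phi_deg)
    if s == 0.0:
        return 0.5 * math.erfc(math.sqrt(ebno))
    dphi = 2.0 * math.pi / (n - 1)
    total = 0.0
    for i in range(n):
        phi = -math.pi + i * dphi
        pdf = (math.exp((math.cos(phi) - 1.0) / s**2)
               / (s * math.sqrt(2.0 * math.pi)))
        total += 0.5 * math.erfc(math.sqrt(ebno) * math.cos(phi)) * pdf * dphi
    return total
```

At Eb/No = 10 dB the ideal BER is near 3.9 × 10⁻⁶, and the averaged BER rises monotonically with σϕ, consistent with the Figure 5-13 curve family.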


5.5.2 Uncoded QPSK Bit Error Rate Performance

The marginal uncoded BER for QPSK, given a static phase error ϕ, is

PQPSK_b(Eb/No | ϕ) = ¼ erfc{ √(Eb/No) [cos(ϕ) - sin(ϕ)] } + ¼ erfc{ √(Eb/No) [cos(ϕ) + sin(ϕ)] }    (5.37)

(5.37)

The average BER is calculated using the same method as in the previous section. The numerical results are presented graphically in Figure 5-14. This signal constellation makes use of the in-phase and quadrature-phase dimensions and is consequently more sensitive to phase noise than BPSK. QPSK BER with Noisy LO

-1

10

o

σφ = 13

o

σφ = 11

-2

10

o

σφ = 9

o

σφ = 7

-3

10

Bit Error Rate

-4

10

-5

10

o

σφ = 5

-6

10

o

o

σφ = 0

-7

σφ = 3

10

-8

10

4

5

6

7

8

9 10 E b/No, dB

11

12

13

14

15

Figure 5-14 Uncoded BER for QPSK with noisy local oscillator.14 A larger plot is provided in Figure 2-35.

5.5.3 Symbol Error Rate for Square QAM Signal Constellations

Quadrature-amplitude modulation (QAM) is used in many communication systems where high data throughput rates are required. The close spacing of the signal constellation points in higher-order QAM constellations makes phase noise a serious performance issue. A formula to compute the symbol error rate (SER) for square QAM signal constellations when phase noise is present is developed in this section. In all other respects, ideal coherent demodulation is assumed.

14. Book CD:\Ch5\u13155_qpsk_ber.m.


Let M² be the total number of points used in the signal constellation. This is equivalent to using M signal levels on each of the in-phase (I) and quadrature-phase (Q) signal rails. Further assume that the (voltage) distance between adjacent signal levels on the I and Q rails is d. An example 16-QAM signal constellation is shown in Figure 5-15. The most common QAM signal constellation cases along with their average energy-per-symbol values are tabulated in Table 5-2. Assume that the transmitted data symbols are given in complex form as (ak + jbk), where the ak represent the data bits communicated through the I-channel and the bk represent the data bits for the Q-channel. After reception by a coherent receiver that is ideal aside from introducing phase noise degradation ϕn into the reception process, the received I- and Q-channel information can be represented by

Ik = Real[ (ak + jbk) exp(jϕn) ] + nI
Qk = Imag[ (ak + jbk) exp(jϕn) ] + nQ    (5.38)

Figure 5-15 Sample QAM signal constellation for the 16-QAM case, including Gray-coding (levels ±1, ±3 on each rail, spacing d).

Table 5-2 Average Energy per Symbol for Square QAM Constellations

M²      M      Avg. Energy per Symbol / d²
4       2      2/4
16      4      10/4
64      8      42/4
256     16     170/4
1024    32     682/4
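The Table 5-2 entries follow from the rail levels of (5.49) averaged as in (5.51). An exact-arithmetic check in Python (function name illustrative):

```python
from fractions import Fraction

def avg_symbol_energy_over_d2(M):
    # E_s/d^2 for an M^2-point square QAM constellation, computed from the
    # rail levels a_k = +/-(2k+1)d/2 as in (5.49) and (5.51).
    half = [Fraction(2 * k + 1, 2) for k in range(M // 2)]
    levels = half + [-a for a in half]
    mean_sq = Fraction(sum(a * a for a in levels), M)
    return 2 * mean_sq   # E[a^2] + E[b^2]
```

This reproduces the tabulated column exactly, along with the closed form Es/d² = (M² - 1)/6, e.g., 10/4 for 16-QAM.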

The phase noise term ϕn is due to the local oscillator phase noise, and nI and nQ are additive Gaussian noise terms, each having variance σ². The I and Q channels are cross-coupled by the common phase noise term ϕn. The cross-talk between the I and Q channels is severe because it is weighted by the first-order sin(ϕn) term and also by the potentially much stronger signal amplitude in the other channel. This can be rewritten in matrix form as

[ Ik ]   [ cos(ϕn)  -sin(ϕn) ] [ ak ]   [ nI ]
[ Qk ] = [ sin(ϕn)   cos(ϕn) ] [ bk ] + [ nQ ]    (5.39)

The I- and Q-channels are orthogonal to each other and can therefore be processed separately.


Implicit in (5.39) is ideal automatic gain control (AGC) in the receiver, because no impairments are present other than the phase noise and additive Gaussian noise. Decision thresholds on the I and Q rails are assumed to correspond to the same signal constellation points used in the transmitter. The corresponding ideal detection thresholds are shown by the large dashed lines in Figure 5-16 for a single dimension.

Figure 5-16 64-QAM I- and Q-rails with detection regions (signal levels at ±d/2, ±3d/2, ±5d/2, ±7d/2).

An inspection of the decision regions with respect to each constellation coordinate in the figure reveals that a symbol error occurs if the noise plus interference exceeds d/2, except for the endpoints on each rail. Symmetry between the I- and Q-channel cases makes it possible to analyze the error probability for the I-channel alone and subsequently use the same result for the Q-channel. For all of the interior points on the I-rail (i.e., excluding endpoints), a symbol error occurs if

| ak cos(ϕn) - bk sin(ϕn) + nI - ak | > d/2    (5.40)

In order to have reasonably low levels of cross-coupling between the I- and Q-channels, the phase noise must be correspondingly small. Under this assumption, the second-order nature of cos(·) can be exploited to accurately approximate15 ak cos(ϕn) ≅ ak, and (5.40) can be simplified to

| bk sin(ϕn) - nI | > d/2    (5.41)

This assumption also decouples the ak and bk terms completely. The probability density function of the zero-mean Gaussian noise term nI is given by

p(nI) = [ 1/(σ√(2π)) ] exp( -nI²/(2σ²) )    (5.42)

It is convenient to let the total observed noise plus interference in (5.41) be represented by

v = bk sin(ϕn) - nI    (5.43)

A symbol error due to a decision error on any interior point in Figure 5-16 occurs if |v| > d/2, and this fact along with (5.42) and (5.43) can be used to write the probability of such an event as

PI_Error(bk, ϕn) = 2 ∫_{d/2}^{∞} p[ bk sin(ϕn) - v ] dv    (5.44)

                = erfc[ ( d/2 - bk sin(ϕn) ) / (σ√2) ]    (5.45)

For the endpoints in Figure 5-16, when |bk sin(ϕn) - nI| > d/2 a decision error only occurs half of the time. Consequently, the average decision error probability for the I-channel is given by

PI_Error(bk, ϕn) = [ (M - 1)/M ] erfc[ ( d/2 - bk sin(ϕn) ) / (σ√2) ]    (5.46)

15. For example, cos(5°) = 0.9962 whereas sin(5°) = 0.0872.

An identical result applies to the decision error probability on the Q-channel in terms of the ak values. The probability that a complete symbol is received without error is then given by

PCorrect = [ 1 - PI_Error(bk, ϕn) ][ 1 - PQ_Error(ak, ϕn) ]
         ≅ 1 - PI_Error(bk, ϕn) - PQ_Error(ak, ϕn)
         = 1 - 2 PQ_Error(ak, ϕn)    (5.47)

From this result, the probability of a symbol error given a phase noise value ϕn is given by

Psym(ϕn) = 2 [ (M - 1)/M ] ⟨ erfc[ ( d/2 - ak sin(ϕn) ) / (σ√2) ] ⟩    (5.48)

where the angled-brackets denote averaging over all possible ak values. For M constellation points on each decision rail, the ak and bk values are both given by the same formula. The first half of the values are given by

ak = [ (2k + 1)/2 ] d    for 0 ≤ k ≤ M/2 - 1    (5.49)

and the second half are the negatives of the first half. This makes it possible to write (5.48) as

Psym(ϕn) = 2 [ (M - 1)/M ] (1/M) Σ_{k=0}^{M/2-1} { erfc[ ( d/2 - ((2k+1)/2) d sin(ϕn) ) / (σ√2) ] + erfc[ ( d/2 + ((2k+1)/2) d sin(ϕn) ) / (σ√2) ] }    (5.50)

A relationship between σ in (5.50) and the receive SNR is needed before meaningful calculations can be performed. The average energy per data symbol can be calculated as

Es = (4/M²) Σ_{k=0}^{M/2-1} Σ_{p=0}^{M/2-1} ( ak² + bp² )    (5.51)

Symbol energy values were tabulated earlier in Table 5-2. Since the total receive noise power is given by 2σ², the relationship between receive SNR, noise variance,16 and symbol energy is given by

σ² = Es / (2 SNR)    (5.52)

Using (5.50) through (5.52), the M²-QAM SER is finally given by

Psym = ∫_{-∞}^{∞} Psym(ϕn) pϕ(ϕn) dϕn    (5.53)

In many systems, the constellation points are Gray-coded (e.g., see Figure 5-15), which results in at most a single bit error for nearest-neighbor decision errors in the detection process. Under such circumstances, the uncoded bit error rate is almost exactly given by the SER divided by the number of bits per symbol. In the absence of phase noise, the average symbol error rate is also given by Proakis17 as

Psym(γb, M, k) = 2 [ (M - 1)/M ] erfc( √[ 3kγb / (2(M² - 1)) ] ) { 1 - ½ [ (M - 1)/M ] erfc( √[ 3kγb / (2(M² - 1)) ] ) }    (5.54)

where γb is the SNR per bit (Eb/No), k is the number of bits per symbol, and M is the number of levels on each signal rail; for the 16-QAM case, M = 4 and k = 4, for example. The symbol error rates for 16-QAM, 64-QAM, and 256-QAM using (5.53) and (5.54) are provided in Figure 5-17 through Figure 5-19, respectively, for convenient reference.

where γb is the SNR per bit (Eb/No), k is the number of bits per symbol, and M is the number of levels on each signal rail. For the 16-QAM case, M = 4 and k = 4, for example. The symbol error rates for 16-QAM, 64-QAM, and 256-QAM using (5.53) and (5.54) are provided in Figure 5-17 through Figure 5-19, respectively, for convenient reference. 5.5.4 Phase-Modulated Signals M-PSK

Phase-modulated signals that convey digital information by using M equally spaced angular constellation points on a circle are commonly referred to as M-PSK signals. An 8-PSK constellation example is shown in Figure 5-20. The best bits-to-symbol mapping possible for M-ary PSK is the Gray code mapping as shown. With this binary assignment, neighboring constellation points differ in only one bit position, and the resultant bit error rate is therefore almost exactly 1/k times the symbol error rate, where k = log2(M). Each transmit symbol can be represented in rectangular form as

sm = ( √Es cos[ 2π(m - 1)/M ], √Es sin[ 2π(m - 1)/M ] )    for m ∈ {1, 2, ..., M}    (5.55)

16. The noise variance is such that σ² = No/2.
17. [2], page 282, eq. (4.2.144).


Figure 5-17 16-QAM symbol error rate18 including noisy phase reference based on (5.53) and (5.54), for σϕ = 1°, 2°, 3°, and 4° rms, together with the no-phase-noise and Proakis (5.54) reference curves.

-2

10

o

σφ = 2 rms o

σφ = 1.5 rms

-3

10

o

σφ = 1 rms -4

SER

10

-5

10

-6

10

o

σφ = 0.5 rms Proakis

-7

10

No Phase Noise

-8

10

15

16

17

18

19 20 Eb/No, dB

21

22

23

24

25

Figure 5-18 64-QAM symbol error rate19 including noisy phase reference based on (5.53) and (5.54). 18 19

Book CD:\Ch5\u13159_qam_ser.m. Larger plot in Figure 2-36. Book CD:\Ch5\u13159_qam_ser.m. Larger plot in Figure 2-37.

Figure 5-19 256-QAM symbol error rate20 including noisy phase reference based on (5.53) and (5.54). [Plot: SER versus Eb/No from 16 to 25 dB for σφ = 0.25°, 0.5°, 0.75°, 1.0°, and 1.25° rms, the no-phase-noise case, and the Proakis result (5.54).]

Figure 5-20 M-ary phase modulation using a constellation size of eight (8-PSK) including Gray-coding. [Constellation diagram: eight points equally spaced on a circle in the I-Q plane with Gray-coded labels 000, 001, 011, 010, 110, 111, 101, 100.]

After signal reception over an AWGN channel by a receiver like Figure 5-3, the in-phase and quadrature-phase components are given by

IR = √Es cos[ 2π(m − 1)/M + φn ] + nI
QR = √Es sin[ 2π(m − 1)/M + φn ] + nQ   (5.56)

where the phase error due to the local oscillator's phase noise is represented by φn. The nI and nQ terms represent jointly independent random variables due to the AWGN channel. The variance of each individual noise term is σ². In this form, the receive SNR is given by

ρ = Es / (2σ²)   (5.57)

and the corresponding Eb/No = ρ / log2(M). The symbol error rate (SER) can be found without loss of generality by assuming that the constellation point 000 (corresponding to m = 1 in (5.55)) is transmitted and calculating the probability that a different symbol is detected in the receiver. Aside from the phase noise contribution φn, the problem is identical to the sine wave plus noise problem considered in Appendix 3B where it was found that the probability distribution function of the observed phase ψ was

p(ψ) = [ exp(−ρ) / (2π) ] { 1 + √(πρ) cos(ψ) exp[ ρ cos²(ψ) ] ( 1 + erf[ √ρ cos(ψ) ] ) }   (5.58)

20 Book CD:\Ch5\u13159_qam_ser.m. Larger plot in Figure 2-38.

The final phase observed in the receiver's detection process is the sum of ψ plus the local oscillator's phase noise contribution φn which has a Tikhonov probability density function given earlier by (5.33). Consequently, the probability density function of the final observed phase is given by the convolution of (5.58) and (5.33) since the two phase processes are statistically independent. A symbol error occurs if the detected phase falls outside the range [−π/M, π/M]. The probability of a symbol error is consequently given by

Psym_err = 1 − ∫_{−π/M}^{π/M} [ ∫_{−π}^{π} p(ψ − φn) pφn(φn) dφn ] dψ   (5.59)

This can be closely approximated by

Psym_err ≅ ∫_{−π}^{π} pφn(φn) (1/2) { erfc[ √ρ sin(π/M − φn) ] + erfc[ √ρ sin(π/M + φn) ] } dφn   (5.60)

where the probability density function for φn is again the Tikhonov density (5.33). M-PSK symbol error rate curves are provided in Figure 5-21 and Figure 5-22 for 8-PSK and 16-PSK, respectively.

5.6 PHASE NOISE EFFECTS ON OFDM SYSTEMS

In its most basic form, orthogonal frequency division multiplex (OFDM) modulation is based on the modulation of mutually orthogonal sine wave subcarriers and the efficiency of the fast Fourier transform (FFT) to both modulate and demodulate the signals involved. The OFDM variety specified in the IEEE802.11a standard is used here to illustrate the primary fundamentals of OFDM. In the frequency-domain as seen on an RF spectrum analyzer, the ideal spectrum appears to be nearly square with rapidly decaying sidelobe levels caused by the symbol-rate QAM modulation imposed on each individual subcarrier. If no additional shaping or filtering of the individual OFDM data symbols is used, each subcarrier exhibits a sin(x)/x-type spectrum as shown in Figure 5-23. The 802.11a waveform utilizes 48 data subcarriers and 4 pilot subcarriers for a total of 52 active subcarriers.

Figure 5-21 Uncoded 8-PSK symbol error rate.21 [Plot: SER versus Eb/No from 5 to 30 dB for σφ = 2°, 3°, 4°, 5°, 6°, and 7° rms and the no-phase-noise case.]

Figure 5-22 Uncoded 16-PSK symbol error rate.22 [Plot: SER versus Eb/No from 10 to 28 dB for σφ = 1°, 1.5°, 2°, 2.5°, and 3° rms and the no-phase-noise case.]

21 Book CD:\Ch5\u13170_mpsk_ber.m using (5.60). See Figure 2-39 for larger figure.
22 Book CD:\Ch5\u13170_mpsk_ber.m. See Figure 2-40 for larger figure.
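The M-PSK curves of Figures 5-21 and 5-22 can be approximated by numerically integrating (5.60) against the Tikhonov density of (5.33). In the sketch below (Python, not the book's MATLAB; the function name is mine) the Tikhonov parameter is taken as approximately 1/σφ², a small-angle assumption not stated in the text, and the density is normalized numerically to avoid evaluating I0(α) for large α.

```python
import math

def mpsk_ser(EsNo_dB, M, sigma_phi_deg, n=20001):
    """Approximate M-PSK SER with a noisy phase reference, eq. (5.60)."""
    rho = 10.0 ** (EsNo_dB / 10.0)             # Es/No as a linear ratio
    sig = math.radians(sigma_phi_deg)
    alpha = 1.0 / sig ** 2                     # assumed Tikhonov parameter
    step = 2.0 * math.pi / (n - 1)
    phis = [-math.pi + i * step for i in range(n)]
    w = [math.exp(alpha * (math.cos(p) - 1.0)) for p in phis]  # unnormalized pdf
    norm = sum(w) * step
    sr = math.sqrt(rho)
    total = 0.0
    for p, wp in zip(phis, w):
        total += wp * 0.5 * (math.erfc(sr * math.sin(math.pi / M - p))
                             + math.erfc(sr * math.sin(math.pi / M + p)))
    return total * step / norm
```

As σφ → 0 the density collapses toward a delta function and (5.60) reduces to the familiar no-phase-noise approximation erfc(√ρ sin(π/M)).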

The center frequency bin is left unoccupied. An example 802.11a transmit spectrum is shown in Figure 5-24. The additional sidelobe energy is normally attributable to the phase noise performance of the local oscillator and nonlinear effects in the power amplifier. A time-gated spectrum analysis that is precisely synchronized to the OFDM symbol rate (250 ksps) and centered on each subcarrier center frequency (subcarrier separation of 312.5 kHz) reveals a dramatically different spectrum that is composed of delta-functions due to the mutual orthogonality of the OFDM subcarriers. Proper time and frequency synchronization transforms a spectrum like Figure 5-24 into a result similar to the spectrum shown in Figure 5-23.

Figure 5-23 Idealized OFDM spectrum exhibiting sin(x)/x spectrum contribution from each subcarrier.23 [Plot: PSD (dB) versus frequency offset (MHz) showing the nearly square composite spectrum built from the individual subcarriers.]

Figure 5-24 RF spectrum analyzer view of actual 802.11a signal spectrum with superimposed spectral mask requirement template.

A time-domain perspective of each OFDM symbol is shown in Figure 5-25. Each symbol is preceded by a guard interval (GI) time-segment of 0.8 µs that is a carbon copy of the signal as it would appear if the data symbol were periodically continued 0.8 µs beyond its actual end. The 250 ksps rate corresponds to 4 µs per symbol. With a subcarrier spacing of precisely 312.5 kHz, an integer number of subcarrier cycles precisely fit in the remaining 3.2 µs of each symbol period. The subcarriers remain mutually orthogonal only if the matched-filter operation implemented by the FFT on each subcarrier is performed over an integer number of subcarrier cycles (i.e., only 1 complete cycle for the smallest frequency offset subcarrier). In principle, this mutual orthogonality is maintained so long as any precise 3.2 µs segment of the 4 µs is processed for a given OFDM symbol.

Figure 5-25 Time-domain view of 802.11a OFDM symbols showing the guard interval (0.80 µs) and active portion (3.20 µs) of each symbol.

23 Book CD:\Ch5\u13171_ofdm_spectrum.m.

Time delay and phase are inseparably linked together in OFDM. Starting the matched-filter (i.e., FFT) operation τ seconds later causes each kth subcarrier to undergo a relative phase rotation of 2πkΔFτ radians where k is the subcarrier index relative to the channel center frequency and ΔF is the subcarrier spacing of 312.5 kHz. Consequently, the beginning of each matched-filter operation must begin at the same point relative to the beginning of each OFDM symbol in order to avert these unwanted phase rotations. A signal preamble is used as shown in Figure 5-26 to (i) establish the precise beginning of the first OFDM symbol, and (ii) provide the receiver a means to get a precise phase and amplitude estimate for every subcarrier. Receiver demodulation for the duration of the OFDM frame is based on comparing the matched-filter outputs with the amplitude and phase reference information obtained from this preamble on a symbol-by-symbol basis.

Figure 5-26 802.11a OFDM frame preamble illustrating initial estimation region followed by channel estimation region. [Diagram: an 8.0 µs coarse time/frequency and fine frequency estimation region, followed by an 8.0 µs channel estimation region containing symbols T1 and T2 with a guard interval (GI), followed by GI/Data symbol pairs; the two 8.0 µs regions form the frame preamble.]
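The delay-to-phase relationship just described is easy to demonstrate with a toy matched filter. The sketch below (Python; the 64-point correlation length and implied sample rate are my assumptions, not from the text) correlates a delayed subcarrier against its nominal reference and recovers the predicted 2πkΔFτ rotation.

```python
import cmath
import math

def fft_start_delay_rotation(k, tau, n_pts=64, delta_f=312.5e3):
    """Phase rotation of subcarrier k when the matched filter starts tau seconds late.

    One active symbol period (1/delta_f) is sampled at n_pts points and
    correlated against the nominal (undelayed) subcarrier.
    """
    fs = n_pts * delta_f                      # implied sample rate
    acc = 0.0 + 0.0j
    for i in range(n_pts):
        t = i / fs
        received = cmath.exp(2j * math.pi * k * delta_f * (t + tau))  # delayed start
        reference = cmath.exp(-2j * math.pi * k * delta_f * t)        # matched filter
        acc += received * reference
    return cmath.phase(acc / n_pts)           # radians, equals 2*pi*k*delta_f*tau (mod 2*pi)
```

For k = 3 and τ = 0.1 µs this returns 2π · 3 · 312.5 kHz · 0.1 µs ≈ 0.589 rad, growing linearly with the subcarrier index exactly as the text states.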

The importance of the 0.8 µs guard interval is that many communication channels exhibit signal reflections between the transmitter and receiver known as multipath. Since each signal path entails a slightly different time delay, they arrive at a distant receiver with varying amplitudes and phases. So long as the maximum-to-minimum path delay difference (of paths having appreciable power levels) is less than the length of the guard interval, the time delay-spread associated with the multipath can be largely mitigated. OFDM makes it possible to communicate in high multipath environments where many other signal types would be completely ineffective. Local oscillator phase noise directly affects OFDM communication in several ways which are considerably more involved than with single-carrier systems. More specifically, phase noise degradations must be evaluated for:

• Initial channel estimation impairments;
• Individual subcarrier SNR limitations;
• Cross-coupling between subcarriers caused by loss of orthogonality;
• Coherence loss caused by accumulated phase error across each frame.


Potentially serious impairments caused by inaccurate frequency estimation in the receiver are not addressed here. Sophisticated digital signal processing can be used to mitigate all of these frequency source-related impairments with various degrees of success. Only the fundamental aspects of phase noise and OFDM performance are considered here, however. Each OFDM subcarrier is QAM-modulated in the IEEE802.11a standard. An overlay of all of the data subcarriers, each carrying 64-QAM, is shown in Figure 5-27 for a receive SNR of 31.5 dB. When sinusoidal phase modulation is imposed on the main local oscillator in the receiver to explore the phase-error tracking capability of the demodulator, the residual phase error common to all of the subcarriers causes the constellation point expansion seen for the outer constellation points as compared to the inner points as shown in Figure 5-28. Phase noise impairments are readily recognizable whenever the radial extent of the recovered constellation points expands from the inner points to the outer points as seen in this figure.

Figure 5-27 Example compilation of all data subcarriers of 802.11a OFDM signal. SNR = 31.5 dB.

Figure 5-28 Signal constellation of Figure 5-27 with additional sinusoidal phase modulation impressed on the main local oscillator.

5.6.1 Channel Estimation Errors Due to Phase Noise

The amplitude and phase of each OFDM subcarrier must be precisely estimated during the T1 – T2 portion of the frame preamble shown in Figure 5-26. QAM demodulation of each subcarrier in the subsequent data symbols of the frame is done by comparing their amplitudes and phases with the


subcarrier estimates made during T1 and T2. The channel estimation portion of the signal is purposely made twice as long as the subsequent data symbols to improve the estimation error variances by 3 dB. The degradations imposed by local oscillator phase noise on the channel estimation process are analyzed in this section. The kth OFDM subcarrier can be represented by

sk(t) = Ak exp{ j[ (ωo + kΔω)t + θk ] }   (5.61)

during the T1 – T2 time period where ωo is the radian center frequency of the transmit channel, Δω is the radian tone-separation frequency, Ak is the amplitude of the kth subcarrier, and θk is the phase of the kth subcarrier. The receiver normally uses a single local oscillator combined with an FFT to effectively heterodyne each subcarrier to baseband followed by an integrate-and-dump matched-filter operation. The representative local oscillator signal for the kth subcarrier is consequently given by

LOk(t) = exp{ −j[ (ωo + kΔω)t + φn(t) ] }   (5.62)

where φn(t) represents phase noise on the receiver's local oscillator. The output from the receiver's (time-aligned) integrate-and-dump matched-filter for the kth subcarrier is given by

Dk = (1/TR) Σ_{p=−N/2}^{N/2} Sp ∫_0^{TR} exp[ j(p − k)Δω t ] exp[ −jφn(t) ] dt   (5.63)

where Sp = Ap exp(jθp). After considerable computation, the signal power observed at the kth integrate-and-dump output is given by

E[ |Dk|² ] ≅ |Sk|² (1 − σφn²) + ∫_{−∞}^{+∞} Sφn(f) Σ_{p=−N/2}^{N/2} |Sp|² { sin[ π( f − (k − p)Δf )TR ] / [ π( f − (k − p)Δf )TR ] }² df   (5.64)

The first term in (5.64) is the desired signal term including a coherence-loss factor. The second term is a noise term due to the phase noise interaction with all of the subcarriers including the kth subcarrier. The phase noise two-sided power spectral density corresponding to the wide-sense stationary phase noise process φn(t) is represented by Sφn(f). Normally, the coherence-loss factor is quite small and the resultant channel-estimation SNR is limited by the second term on a bin-by-bin basis. The closed-form results using (5.64) for a Lorentzian phase noise spectrum having fc = 150 kHz and total integrated phase noise of 1° rms are shown in Figure 5-29. This result assumes that the |Sp| values are all unity as provided in the IEEE802.11a standard for the T1 – T2 signal portion. The summation of sin(x)/x functions in (5.64) makes this closed-form result time-consuming to compute, however, and the result is only approximate. A Monte Carlo time-domain simulation of the detection process is considerably faster to execute, and these results can be made arbitrarily accurate by running more Monte Carlo cases as desired. The time-domain simulation corresponding to the phase noise parameters used in Figure 5-29 is provided in Figure 5-30 for comparison purposes. The spectral cusping arises from using the actual T1 – T2 preamble subcarrier values while also avoiding the approximations that were required to arrive at (5.64).


Figure 5-29 Closed-form result24 for the channel estimate variance using (5.64) for unity-amplitude Sk and Lorentzian phase noise spectrum with fc = 150 kHz and 1° rms total integrated phase noise. [Plot: weighting factor (dB) versus OFDM subcarrier index from −40 to +40.]

The shape of the local oscillator phase noise spectrum affects the channel estimate noise variance as shown in Figure 5-31 through Figure 5-33. For the Lorentzian phase noise spectrum assumption, spectrum pedestal widths that are narrow compared to an OFDM frequency bin lead to sharper noise spectra and slightly worse noise variance performance for the subcarriers. Larger spectrum pedestal widths result in slightly lower channel estimation noise variances but the out-of-band noise is much slower to attenuate with offset frequency. These results can be somewhat misleading, though, because OFDM data symbol reception involves additional processing beyond just the channel estimation process being discussed here. The discussion presented thus far only pertains to obtaining a channel estimate using the T1 – T2 portion of the preamble. If channel multipath is present, the received amplitudes will vary from bin-to-bin as will the received phases for each bin. As subsequent OFDM data symbols are received in Figure 5-26, the bin-by-bin channel estimate becomes stale due to (i) changes in the phase noise function φn(t) and (ii) changes in the wireless channel. Only the first cause is of interest here. As the time separation between the channel estimate and a subsequent data-symbol increases, the phase noise process becomes more and more uncorrelated, and this leads to an additional degradation due to the underlying phase noise process. Assuming that the local oscillator phase noise spectrum is Lorentzian with a two-sided spectral density given by

Sφn(f) = Lo / [ 1 + (f/fc)² ]   (5.65)

the corresponding autocorrelation function is given by

Rφn(τ) = Lo π fc exp( −2π fc |τ| )   (5.66)

Based on this autocorrelation function, the phase noise process observed during the channel estimation step is essentially uncorrelated with its behavior during subsequent OFDM data-symbols once the correlation coefficient is less than 0.368 (= 1/e) which corresponds to τ > 1/(2π fc) seconds. For an fc value of 50 kHz, τ = 3.2 µs so the additional performance loss relative to the channel estimate made in T1 – T2 is almost immediate for this example.

24 Book CD:\Ch5\u13172_chest.m.
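The decorrelation argument above follows directly from (5.65) and (5.66). A small sketch (Python; the helper names are mine) evaluates the autocorrelation and the 1/e decorrelation delay.

```python
import math

def lorentzian_psd(f, Lo, fc):
    """Two-sided Lorentzian phase noise PSD, eq. (5.65)."""
    return Lo / (1.0 + (f / fc) ** 2)

def lorentzian_autocorr(tau, Lo, fc):
    """Corresponding autocorrelation function, eq. (5.66)."""
    return Lo * math.pi * fc * math.exp(-2.0 * math.pi * fc * abs(tau))

def decorrelation_time(fc):
    """Delay at which the correlation coefficient drops to 1/e, i.e. 1/(2*pi*fc)."""
    return 1.0 / (2.0 * math.pi * fc)
```

For fc = 50 kHz, decorrelation_time returns about 3.2 µs, the value quoted in the text.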

Figure 5-30 Time-domain simulation corresponding to Figure 5-29 assuming a Lorentzian phase noise spectrum having a corner frequency fc = 150 kHz and 1° rms total integrated phase noise. [Plot: noise variance (dBc) versus OFDM subcarrier index from −40 to +40.]

Figure 5-31 Bin-by-bin channel estimation error variance25 over T1 – T2 due to phase noise alone. Lorentzian phase noise spectrum assumed with total integrated phase noise = 1° rms and fc = 10 kHz. [Plot: noise variance (dBc) versus OFDM subcarrier index from −40 to +40.]

Figure 5-32 Same as Figure 5-31 except Lorentzian corner frequency fc = 50 kHz.

Figure 5-33 Same as Figure 5-31 except Lorentzian corner frequency fc = 500 kHz.

25 Ibid.

Because the role of phase noise in OFDM reception is much more complex than for single-carrier systems, a true assessment is only possible by using detailed simulations that also include the time and frequency tracking loops used in the OFDM receiver.

5.7 PHASE NOISE EFFECTS ON SPREAD-SPECTRUM SYSTEMS

Spread-spectrum communication systems often have relaxed close-in phase noise spectrum requirements for the receiver's local oscillator because the receiver's despreading operation disperses the phase noise features while collapsing the received signal spectrum to its native bandwidth before it was spectrally spread by the transmitter. Even so, most spread-spectrum systems operate in the presence of other narrowband communication signals which can lead to unwanted reciprocal mixing like that discussed in Section 5.3. Channel selectivity often dictates the large frequency-offset phase noise requirements for these systems more so than the total integrated phase noise requirement.

5.8 PHASE NOISE IMPACT FOR MORE ADVANCED MODULATION WAVEFORMS

Many other types of modulation beyond the few described in this chapter are possible. The effects of phase noise on each system depend not only on the modulation waveform, but also on the specific hardware implementation adopted and the communication channel involved. One dissertation [3] alone compared the performance of 37 different demodulation methods for robust reception of Gaussian minimum shift keying (GMSK) signals, for example. A theoretical perspective is taken in this section and in Appendix 5B that looks at the fundamental limitations imposed by phase noise on some communication systems.

5.8.1 Euclidean Distance Measures

Phase noise causes a correlation loss and cross-coupling between the I- and Q-channels as developed earlier for receive applications in Section 5.3. When the phase noise process is slowly changing relative to an individual symbol period, there comes a point where additional input SNR improvement leads to negligible BER improvement. The apparent BER floor is referred to as the


irreducible bit error rate. This BER flooring effect can be seen, for example, in Figure 5-14 as the amount of phase noise present increases. When the phase noise process is changing rapidly compared to a symbol interval, the impairments are normally less severe. The rapid phase changes cause cross-coupled channel terms to substantially average to zero over each symbol period thereby mitigating the resulting impairments. The correlation loss in symbol Es/No is closely approximated by

LFast = E[ (1/Tsym) ∫_0^{Tsym} cos φn(t) dt ]   (5.67)

where E[ ] denotes statistical expectation [4]. Coherence losses cause LFast to be less than unity. Normally, coherence loss is a secondary degradation to system performance as compared to the first-order degradation due to the cross-coupling between the I- and Q-channels. Continuous phase modulation (CPM) systems are usually characterized in terms of their minimum Euclidean distance dmin. This distance measure plays an important role in trellis-coded modulation systems. To define this quantity, assume that two constant envelope signals si(t) and sk(t) differ over a period of N symbol periods. The Euclidean distance between these two signals is given by

D = ∫_0^{NTsym} [ si(t) − sk(t) ]² dt   (5.68)

which can be re-cast in a normalized Euclidean distance measure d( ) as

D = 2 Eb d²[ si(t), sk(t) ]   (5.69)

where

d²[ si(t), sk(t) ] = [ log2(M) / Tsym ] ∫_0^{NTsym} { 1 − cos Δθ(t) } dt   (5.70)

and M is the symbol alphabet size, Tsym is the time duration of a data symbol, and Eb is the energy-per-bit measure. The quantity Δθ(t) denotes the phase difference between the two signals as a function of time. Clearly, the presence of local oscillator phase noise directly contributes to Δθ(t) in a negative manner not unlike the correlation loss discussed earlier. In CPM, it is common practice to define dmin as

dmin = min_{i,k; i≠k} { d²[ si(t), sk(t) ] }   (5.71)

The significance of dmin is that the probability of a symbol error is on the order of

Pe_sym ≅ (1/2) erfc( √[ dmin 2Eb / (2No) ] )   (5.72)

Consequently, the symbol error rate performance is degraded by the phase noise to the extent that the phase noise reduces dmin in (5.71).
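Equation (5.70) is straightforward to evaluate numerically for a given phase-difference trajectory. In the sketch below (Python; illustrative names, not from the book) a constant Δθ = π over one binary symbol yields the familiar antipodal result d² = 2, and perturbing Δθ away from π, as phase noise does, reduces the distance.

```python
import math

def normalized_d2(delta_theta, M, Tsym, dt):
    """Normalized squared Euclidean distance of eq. (5.70).

    delta_theta : samples of the phase difference between the two signals
    M           : symbol alphabet size
    Tsym        : symbol duration (same time units as dt)
    dt          : sample spacing of delta_theta
    """
    integral = sum((1.0 - math.cos(x)) * dt for x in delta_theta)
    return math.log2(M) * integral / Tsym
```

Since 1 − cos Δθ peaks at Δθ = π, any phase-noise perturbation of an antipodal pair lowers the integrand pointwise, which is precisely how dmin in (5.71) is eroded.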


5.8.2 Forward Error Correction Coding Benefits

Forward error correction (FEC) methods have proven to be indispensable for modern data communication and storage applications. These techniques permit essentially error-free data transport operations to be conducted with reasonable overhead complexity. FEC systems insert additional redundancy within each source message that makes it possible to correct a limited number of errors that may otherwise occur within a source message during transmission. Assume, for instance, that an uncoded message of 100 bits is to be sent and the channel bit error rate is p = 10⁻³. The probability that the message would be received without error would be only (1 − p)¹⁰⁰ = 90.48%. On the other hand, if a rate R = 2/3 FEC code were used having an error correction capability of only 5 coded bits per message, the probability of receiving the message in error would only be

PMessage_Error = 1 − Σ_{k=0}^{5} (NT k) (1 − p)^{NT − k} p^k   (5.73)

where (NT k) denotes the binomial coefficient, with NT = 100·(3/2), which equates to 1.264e-8! Although 50% more bits must be communicated when the FEC bits are added for this example, the likelihood of receiving the complete message without error is dramatically improved. The bit error rate required to achieve this same degree of message delivery reliability without coding would be 1.264e-10. The reason that this example is important for phase noise considerations can be seen from examining one of the symbol-error rate plots presented earlier like Figure 5-18. Regardless of the channel SNR requirements necessary to achieve a symbol error rate on the order of 1e-10, even the slightest amount of phase noise on the local oscillator causes a substantial performance loss as shown. In sharp contrast, the coded system operates at an equivalent uncoded SER of approximately 10⁻³ where 1° rms phase noise introduces only about 0.3 dB SNR loss in this figure.
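The arithmetic in this example is easy to reproduce with a short sketch (Python; the function name is mine).

```python
from math import comb

def message_error_prob(p, n_coded, t):
    """Probability that a coded message is delivered in error, eq. (5.73):
    more than t of the n_coded transmitted bits are flipped by a channel
    with bit error rate p (bit errors assumed independent)."""
    p_ok = sum(comb(n_coded, k) * p ** k * (1.0 - p) ** (n_coded - k)
               for k in range(t + 1))
    return 1.0 - p_ok
```

With p = 10⁻³, NT = 150, and a 5-bit correction capability this evaluates to approximately 1.264 × 10⁻⁸, while the uncoded 100-bit message survives with probability (1 − p)¹⁰⁰ ≈ 90.48%.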
In general, FEC inclusion within a system permits the system to operate at considerably lower SNR levels than uncoded systems, and this translates to relaxed phase noise requirements for the frequency sources involved. Identical arguments can be made for high-speed wireline or fiber optic data communications. Even modest FEC (e.g., simple (7, 4) Hamming code) dramatically improves the irreducible error floor due to hardware limitations like phase noise and clock-jitter. Additional information pertaining to channel coding is provided in Appendix 5B.

5.9 CLOCK NOISE IMPACT ON DAC PERFORMANCE

DAC clock-jitter causes the output sample transitions to occur at slightly perturbed time instants as shown in Figure 5-34. The output time-transition errors are denoted by δtk. The clock-jitter related impairment to the output spectrum can be computed as follows. The power spectral density computation is based on the fundamental relationship given by (4.16). The time-domain DAC output can be represented by

VM(t) = Σ_{k=−M}^{M} dk rect( t − kTs, δtk, δtk+1 )   (5.74)

where the sample values are given by the dk. The Fourier transform of (5.74) is given by

VM(f) = F{ VM(t) } = Σ_{k=−M}^{M} dk { exp[ −s(kTs + δtk) ] − exp[ −s(kTs + Ts + δtk+1) ] } / s   (5.75)
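Equation (5.75) can be evaluated directly at any frequency of interest. The sketch below (Python; indices shifted to start at k = 0 for simplicity, which only introduces an overall time shift) computes the jittered-transition spectrum; with zero jitter it reduces to the ideal zero-order-hold result.

```python
import cmath
import math

def dac_spectrum(f, d, Ts, jitter):
    """Fourier transform of the jittered DAC output, eq. (5.75), at frequency f.

    d      : list of sample values d[k]
    Ts     : nominal sample period
    jitter : transition-time errors; jitter[k] perturbs the k-th transition
             (needs len(d) + 1 entries)
    """
    s = 2j * math.pi * f                      # evaluate on the j-omega axis
    total = 0.0 + 0.0j
    for k, dk in enumerate(d):
        start = cmath.exp(-s * (k * Ts + jitter[k]))
        stop = cmath.exp(-s * (k * Ts + Ts + jitter[k + 1]))
        total += dk * (start - stop) / s
    return total
```

With a single unit sample and no jitter, |VM(f)| equals the |sin(πfTs)/(πfTs)| magnitude of an ideal rectangular pulse of width Ts, and a common shift of every transition changes only the phase.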

For practical systems, |δtk |

for ζ > 1/2   (6.25)

where ζ is the damping factor given by the standard equation

ζ = (1/2) ωn τ2   (6.26)

An alternative form of this design formula can be formed by substituting (6.22) into (6.23) which gives

13 The constraint on ζ is due to the slightly modified definition for ωn in (6.21) compared to the ideal type-2 PLL.

Fundamental Concepts for Continuous-Time Systems

tan(θp) = [ √(1 + γc) − 1/√(1 + γc) ] / 2   (6.27)

where the substitution τ2/τ1 = 1 + C2/C1 = 1 + γc has also been used. The capacitance ratio γc can be found by temporarily letting u = tan(θp) and carrying through the algebra with one application of the quadratic formula to give

γc = 2 tan(θp) [ tan(θp) + sec(θp) ]   (6.28)

This result is plotted in Figure 6-17. As shown there, the capacitance ratio γc limits the achievable phase margin for the system given the unity-gain frequency ωp. In other words, the inclusion of additional filtering using C1 compromises some of the system phase margin that would otherwise be achievable based on the ideal type-2 PLL damping factor given by (2.12).

Figure 6-17 Achievable phase margin for simple loop filter14 (Figure 6-16) versus C2/C1. [Plot: phase margin (deg, 0 to 90) versus capacitor ratio C2/C1 on a log scale from 10⁻² to 10³.]
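Equations (6.27) and (6.28) are mutual inverses and can be cross-checked numerically. A sketch (Python; illustrative names):

```python
import math

def gamma_c_from_phase_margin(theta_p_deg):
    """Capacitance ratio C2/C1 needed for phase margin theta_p, eq. (6.28)."""
    th = math.radians(theta_p_deg)
    return 2.0 * math.tan(th) * (math.tan(th) + 1.0 / math.cos(th))

def phase_margin_from_gamma_c(gamma_c):
    """Achievable phase margin for a given C2/C1 ratio, from eq. (6.27)."""
    x = math.sqrt(1.0 + gamma_c)
    return math.degrees(math.atan(0.5 * (x - 1.0 / x)))
```

A 60° phase margin requires γc ≈ 12.93, the same ratio that reappears in Example 6-1 below.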

As just mentioned, the capacitance ratio γc plays a role in determining the system phase margin achievable through (6.28). This fact can be made more visible by rewriting the open-loop gain function in slightly different terms where C2 is held constant and the capacitance ratio C2/C1 = γc varied. Taking this approach,

GOL(s) = [ Kd Kv / (N s²) ] (1 + sτ2) / ( C1 + C2 + sτ2 C1 ) = (ωnx/s)² (1 + sτ2) / [ 1 + (1 + sτ2)/γc ]   (6.29)

14 Book CD:\Ch6\u12631_loopfilter_1_main.m.


where the new substitution

ωnx = √[ Kd Kv / (N C2) ]   (6.30)

has been made for convenience. Using (6.29), the characteristic equation 1 + GOL(s) = 0 is given by

s³ + s² (1 + γc)/τ2 + s γc ωnx² + γc ωnx²/τ2 = 0   (6.31)

and the roots can be computed as a function of γc for a fixed choice of ωnx and ζ. A root-locus example following this approach is shown in Figure 6-18. This perspective strongly advocates using a capacitance ratio γc ≥ 10 for most applications because the resultant characteristic root locations are only altered slightly compared to the original ideal type-2 system with this choice.

Figure 6-18 Root locus of simple loop filter15 (6.31) stepping C2/C1 ratio from 0.25 to 20 (ωn = 1 and ζ = 0.707). [Plot: pole locations in the complex plane, real part from −1.5 to 0.1, imaginary part from −1.5 to 1.5.]
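One quick way to corroborate the root-locus behavior is a Routh-Hurwitz check on the characteristic cubic (6.31): for a monic cubic s³ + as² + bs + c, all roots lie in the left half-plane if and only if a, b, c > 0 and ab > c. The sketch below (Python; my own helper names) applies this to (6.31).

```python
def char_cubic_coeffs(gamma_c, tau2, wnx):
    """Coefficients (a, b, c) of the monic characteristic cubic (6.31)."""
    a = (1.0 + gamma_c) / tau2
    b = gamma_c * wnx ** 2
    c = gamma_c * wnx ** 2 / tau2
    return a, b, c

def cubic_is_stable(a, b, c):
    """Routh-Hurwitz criterion for s^3 + a*s^2 + b*s + c."""
    return a > 0 and b > 0 and c > 0 and a * b > c
```

Here ab − c = γc² ωnx²/τ2 > 0 for any positive γc, so the third-order loop of Figure 6-16 is stable for all positive capacitance ratios, consistent with the root locus remaining in the left half-plane.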

The system phase margin can be used in an alternative formulation to design the same loop filter as follows. The open-loop unity-gain criterion at frequency ωp dictates that the open-loop gain given by (6.20) be given by

|GOL(jωp)| = (ωn/ωp)² √[ (1 + (ωpτ2)²) / (1 + (ωpτ1)²) ]   (6.32)

Squaring this result, followed by further use of (6.22) and (6.26), and one application of the quadratic formula ultimately produces the concise result that the damping factor must be given by

ζ = ωp / (2ωn)  for ζ > 1/2   (6.33)

Use of (6.22), (6.23), (6.33), and the equivalence τ2/τ1 = 1 + C2/C1 = 1 + γc can be made to also relate the phase margin θPM and damping factor ζ as16

15 For Figure 6-16, Book CD:\Ch6\u12631_loopfilter_1_main.m.
16 The constraint on ζ is due to the slightly modified definition for ωn in (6.21) compared to the ideal type-2 PLL.

θPM = tan⁻¹[ (16ζ⁴ − 1) / (8ζ²) ]  for ζ > 1/2   (6.34)

Using (6.22) in (6.32) reveals that

ωp/ωn = (1 + γc)^{1/4}   (6.35)

While these multiple design formulas are interesting, the concise design procedure provided in the next section should offer greater practical use.

6.5.1.1 Loop Design Procedure

The design procedure for the loop filter shown in Figure 6-16 consists of the following steps:

Step 1: Select a phase margin value and the open-loop unity-gain frequency ωp for use. For reasonable transient response performance and low gain-peaking, a value between 45° and 60° is recommended. Represent this value by θPM (rad). Time constant τ1 is then computed from

τ1 = [ sec(θPM) − tan(θPM) ] / ωp   (6.36)

Step 2: From ωp, compute the second time constant τ2 from (6.22) as

τ2 = 1 / (ωp² τ1)   (6.37)

Step 3: From the unity-gain requirement at frequency ωp, and the open-loop gain function (6.20), compute

C1 = (Kd Kv / N) (τ1/τ2) (1/ωp²) √[ (1 + (ωpτ2)²) / (1 + (ωpτ1)²) ]   (6.38)

using known values for the phase detector gain Kd (A/rad) and VCO gain Kv (rad/s/V).

Step 4: Compute the remaining circuit component values as

C2 = C1 ( τ2/τ1 − 1 )   (6.39)

R2 = τ2 / C2   (6.40)

One important final note: if the PLL is implemented in the differential form shown in Figure 6-15, any charge-pump current-source mismatches in Figure 6-15 will result in substantial common-mode voltage transients in the lead-lag filter. This problem can be dramatically subdued by using capacitors to ground on each side of the charge-pump output (same value as C1) rather than using the differential capacitor "C1/2" shown in Figure 6-15.

Example 6-1

Assume that Kd = 200 µA/(2π), Kv = 2π·20 MHz/V, and N = 100. Assume further that fast frequency-switching speed is needed, thereby making a phase margin of 60° and a unity-gain bandwidth of 100 kHz necessary. Using the design procedure just outlined, τ1 = 0.42646 µs, τ2 = 5.9397 µs, C1 = 27.149 pF, C2 = 350.99 pF, and R2 = 16.923 kΩ, as shown in Figure 6-19. Furthermore, ζ = 0.9659 and γc = 12.9282.

Figure 6-19 Schematic for simple loop filter example (type-2 third-order PLL): the charge-pump phase detector output drives C1 = 27 pF in parallel with the series combination of R2 = 16.92 kΩ and C2 = 351 pF, and the result feeds the VCO.
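The four design steps can be collected into a short routine. The following sketch follows (6.36)-(6.40) directly and reproduces the Example 6-1 values; unit conventions (A/rad, rad/s/V, F, Ω) are as in the text.

```python
import math

def design_loop_filter(Kd, Kv, N, phase_margin_deg, f_unity_hz):
    """Steps 1-4 of the Section 6.5.1.1 charge-pump loop-filter design."""
    wp = 2.0 * math.pi * f_unity_hz                      # open-loop unity-gain frequency
    pm = math.radians(phase_margin_deg)
    tau1 = (1.0 / math.cos(pm) - math.tan(pm)) / wp      # (6.36), sec - tan
    tau2 = 1.0 / (wp ** 2 * tau1)                        # (6.37)
    C1 = (Kd * Kv * tau1 / (N * tau2 * wp ** 2)) * math.sqrt(
        (1.0 + (wp * tau2) ** 2) / (1.0 + (wp * tau1) ** 2))   # (6.38)
    C2 = C1 * (tau2 / tau1 - 1.0)                        # (6.39)
    R2 = tau2 / C2                                       # (6.40)
    return tau1, tau2, C1, C2, R2

# Example 6-1: Kd = 200 uA/(2*pi), Kv = 2*pi*20 MHz/V, N = 100, 60 deg, 100 kHz
tau1, tau2, C1, C2, R2 = design_loop_filter(
    200e-6 / (2 * math.pi), 2 * math.pi * 20e6, 100, 60.0, 100e3)
gamma_c = C2 / C1       # capacitor ratio, ~12.93 as in the example
print(tau1, tau2, C1, C2, R2)
# tau1 ~ 0.42646 us, tau2 ~ 5.9397 us, C1 ~ 27.15 pF, C2 ~ 351 pF, R2 ~ 16.92 kohm
```

Note that γc = C2/C1 = τ2/τ1 − 1 falls out of (6.39), so the example's γc value is a built-in consistency check on the procedure.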

6.5.2 Additional RC Lowpass Filter Section

One of the most common passive loop filters used with charge-pump phase detectors is the type-2 fourth-order configuration shown in Figure 6-20. There is already ample guidance from Section 6.5.1 for adopting the ratio between C2 and C1, but the additional RC lowpass section complicates the design procedure somewhat. One easy approach to limit this complexity is to make resistor R3 much larger than R2 so that it does not load the passive lead-lag network. This approach is not necessarily advocated, however, since it usually leads to increased phase noise due to thermal noise from resistor R3 modulating the VCO. Normally, the additional filtering represented by C1 and the R3C3 section in Figure 6-20 is only intended to provide additional filtering well outside the closed-loop bandwidth. This is preferred so that performance is not degraded relative to the ideal type-2 system. It is therefore desirable to compare the performance of this configuration with the ideal type-2 PLL case.

Figure 6-20 Passive loop filter for the type-2, fourth-order case: the charge-pump phase detector output drives C1 in parallel with the series R2C2 lead-lag branch, followed by an R3C3 lowpass section to the VCO.

6.5.2.1 Exact Linear Analysis

Without any approximations, the exact open-loop gain function for Figure 6-20 is given by

GOL(s) = Kd Kv (1 + sτ2) / ( N s² { s² C1τ2τ3 + s [C1(τ2 + τ3) + C2τ3 + C3τ2] + (C1 + C2 + C3) } )    (6.41)

Fundamental Concepts for Continuous-Time Systems

243

It is convenient to make the simplifying assignments

ωn² = Kd Kv / [N (C1 + C2 + C3)]    (6.42)

K1 = [C1(τ2 + τ3) + C2τ3 + C3τ2] / (C1 + C2 + C3)    (6.43)

K2 = C1τ2τ3 / (C1 + C2 + C3)    (6.44)

with τ2 = R2C2 and τ3 = R3C3, thereby simplifying (6.41) to

GOL(s) = (ωn/s)² (1 + sτ2) / (1 + K1 s + K2 s²)    (6.45)

The H1(s) closed-loop transfer function for this case can then be written as

H1(s) = GOL(s) / [1 + GOL(s)] = ωn² (1 + sτ2) / (s⁴ K2 + s³ K1 + s² + s τ2 ωn² + ωn²)    (6.46)

Exact analysis of this network is best suited for computer-based analysis similar to that described at the end of Section 6.7 due to the complexity involved.

6.5.2.2 Approximate Analysis

Considerable simplification results if it can be assumed that the R3C3 section in Figure 6-20 negligibly loads the passive lead-lag network R2C2. To this end, it is helpful to redraw Figure 6-20 in a slightly different form to accentuate the parallel nature of the two RC sections, as shown in Figure 6-21. For frequencies within or near the closed-loop bandwidth, it is clearly desirable that the lowpass filter section R3C3 have negligible loading on the passive lead-lag network portion R2C2. Stipulating that

Figure 6-21 Passive loop filter of Figure 6-20 redrawn to emphasize the loading of the R3C3 section on the passive lead-lag network: C1, the series R2C2 branch, and the series R3C3 branch appear in parallel at the charge-pump output, with the VCO control voltage taken across C3.

R3 + 1/(sC3) ≥ γ [R2 + 1/(sC2)]    (6.47)

Fundamental Concepts for Continuous-Time Systems

244

at frequency s = jωn, and using the damping-factor relationship ζ = 0.50 ωnτ2, this criterion can be rewritten in terms of resistor and capacitor ratios as

γ² − C23² = 4ζ² (R32² − γ²)    (6.48)

where C23 = C2/C3 and R32 = R3/R2. An alternative perspective of this same result is often helpful: letting ωs = 1/τ3 correspond to the desired corner frequency of the extra lowpass filter section, the equivalent criterion is

C23² = γ² (1 + 4ζ²) / [1 + (ωn/ωs)²]    (6.49)

When the loading effects of the R3C3 network on the lead-lag network can be ignored (e.g., γ > 10), the open-loop gain function can be simplified from (6.45) to

GOL(s) = (ωn/s)² (1 + sτ2) / [(1 + sτ1)(1 + sτ3)]    (6.50)

where the same earlier substitutions

ωn² = Kd Kv / [N (C1 + C2)]    (6.51)

τ1 = [C1 / (C1 + C2)] τ2 = τ2 / (1 + γc)    (6.52)

have been used. Maximizing the system's phase margin entails maximizing the phase of (6.50), as done earlier with (6.19). It can be shown that the maximum phase margin for this network occurs at the radian frequency given by

ωp = { −k2/(2k4) − √(k2² − 4 k0 k4) / (2k4) }^(1/2)    (6.53)

where

k4 = τ2τ1²τ3² − τ1τ2²τ3² − τ3τ1²τ2²

k2 = τ2(τ1² + τ3²) − τ1(τ2² + τ3²) − τ3(τ1² + τ2²)    (6.54)

k0 = τ2 − τ1 − τ3
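Equation (6.53) is simply the positive root of the quadratic k4ω⁴ + k2ω² + k0 = 0 in ω², which comes from setting the derivative of the phase of (6.50) to zero. It can be checked numerically as sketched below; the time constants are illustrative assumptions (τ2 much larger than τ1 and τ3), not values from the text.

```python
import numpy as np

# Illustrative time constants (assumed): tau2 >> tau1, tau3
tau1, tau2, tau3 = 0.42646e-6, 5.9397e-6, 0.30e-6

# k-coefficients from (6.54)
k4 = tau2*tau1**2*tau3**2 - tau1*tau2**2*tau3**2 - tau3*tau1**2*tau2**2
k2 = tau2*(tau1**2 + tau3**2) - tau1*(tau2**2 + tau3**2) - tau3*(tau1**2 + tau2**2)
k0 = tau2 - tau1 - tau3

# Positive real root of k4*x^2 + k2*x + k0 = 0 with x = w^2, per (6.53)
x = np.roots([k4, k2, k0])
x = x[np.isreal(x)].real
wp = np.sqrt(x[x > 0][0])

# Open-loop phase of (6.50); the 1/s^2 term only adds a constant -180 deg
phase = lambda w: np.arctan(w * tau2) - np.arctan(w * tau1) - np.arctan(w * tau3)

# wp should be the phase-margin maximum
assert phase(wp) >= phase(1.05 * wp) and phase(wp) >= phase(0.95 * wp)
```

Solving the quadratic numerically and keeping the positive real root sidesteps the sign bookkeeping in (6.53) (k4 and k2 are both negative for typical lead-lag designs).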

Given that τ1 0.35. Input Eb/No = 5 dB and BLTsym = 0.01 based on (10.39) and (10.40).

10.5.3 Hardware-Based Timing-Error Metrics

The timing-error metrics discussed in the previous section were explicitly derived from an estimation-theoretic perspective. These solutions can, however, be impractical to implement at very high data rates (e.g., 4-phase clocking at 10 Gbps), overly complex, or power-hungry. In many applications, as long as the timing-error metric is unbiased, the resulting tracking-error variance can be made as small as necessary by using small closed-loop bandwidths BL, as motivated by (10.18). The hardware techniques that are discussed in this section are suitable for NRZ waveforms, and are based on a combination of estimation theory and good engineering intuition.

10.5.3.1 Mueller-Müller Method

Mueller and Müller (MM) introduced a timing-error metric in [23] that is based on using only data and data-decision estimates at the symbol rate. The arguments behind the algorithm are fairly heuristic in nature. A high-level perspective of the computing architecture is shown in Figure 10-33 for a memory length of three symbols. The received signal at the matched-filter output is assumed to be given by

x(t) = Σk ak r(t − kTsym) + ñ(t)    (10.42)

where the ak are the bipolar data-symbol values (i.e., ±1), r(t) is the individual pulse-shape at the output of the matched-filter, and ñ(t) represents the Gaussian channel noise after the matched-filtering. The form of the MM metric most frequently referred to in the literature uses the samples from two adjacent receive symbols (m = 2) to form the timing-error metric output (Book CD:\Ch10\u14008_cdr_var.m) as

zk = (1/2)(xk ak−1 − xk−1 ak)    (10.43)

Clock and Data Recovery


The corresponding S-curve for this timing-error metric can be found by averaging zk over many receive symbols. The ak values in (10.43) are estimated from the matched-filter output by using a simple comparator.
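To see (10.43) in action, the following sketch simulates the m = 2 MM metric on a noise-free raised-cosine signal with correct data decisions. The β = 0.4 pulse, span, and symbol count are illustrative assumptions, not values from the text. At zero timing offset the metric output is identically zero, and for a positive (late) offset its average becomes nonzero (negative under this sign convention).

```python
import numpy as np

rng = np.random.default_rng(1)
beta, span = 0.4, 8        # assumed raised-cosine excess bandwidth and pulse span

def rc(t):
    """Raised-cosine pulse r(t), t in symbol units (assumed Nyquist pulse)."""
    t = np.asarray(t, float)
    den = 1.0 - (2.0 * beta * t) ** 2
    den = np.where(np.abs(den) < 1e-9, 1e-9, den)   # crude guard near singularity
    return np.sinc(t) * np.cos(np.pi * beta * t) / den

def mm_metric_mean(tau, nsym=4000):
    """Average of the m = 2 MM metric (10.43), correct decisions, no noise."""
    a = rng.choice([-1.0, 1.0], nsym)
    x = np.zeros(nsym)
    for m in range(-span, span + 1):
        x += np.roll(a, m) * rc(m + tau)            # x_k = sum_m a_{k-m} r(m+tau)
    z = 0.5 * (x[1:] * a[:-1] - x[:-1] * a[1:])     # (10.43)
    return z[span:-span].mean()

print(mm_metric_mean(0.0), mm_metric_mean(0.1))
```

With independent equiprobable symbols the metric's mean works out to (1/2)[r(Tsym + τ) − r(−Tsym + τ)], which is exactly the pulse-symmetry measure described in the text.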

Figure 10-33 Mueller and Müller timing-error estimator using a memory length of m = 3. Signal samples xk pass through a Tsym-spaced delay line, are weighted by functions gk−i(âk, âk−1, âk−2) of the data estimates, and are summed to form the metric output zk; the data estimator for bipolar data supplies âk, âk−1, âk−2. After: [23].

In the case where three symbol intervals are used (m = 3) as in Figure 10-33, the timing-error metric can be written as

zk = (1/3) [ (−ak−1 − 2ak ak−1ak−2) xk−2 + (ak + 2ak−2) xk−1 + (ak−1ak − ak−1ak−2) xk ]    (10.44)

Inspection of these results for the MM metric shows that both algorithms measure the symmetry of the matched-filter output in order to determine the underlying timing-error. As such, the tracking-error variance can be appreciable if the group delay distortion of the channel adversely affects the received pulse-shape symmetry.

10.5.3.2 Zero-Crossing Timing-Error Metrics

Zero-crossing (ZC) methods attempt to track the transition-point between adjacent data-symbols that differ in sign, and then infer that the optimal data-sampling point for the data-decisions is midway in between. This approach is intuitively supported by the raised-cosine eye-diagram shown in Figure 10-9 for β = 0.50, where the zero-crossing trajectories are very compact and sharp. The zero-crossings exhibit considerably more variability for lower excess-bandwidth cases like those shown in Figure 10-7 and Figure 10-8, however.

One of the more popular zero-crossing methods in use is due to Gardner [24]. It is particularly attractive for carrier-recovery applications because symbol timing can be extracted even without perfect RF phase coherence. This technique utilizes two samples per symbol as shown in Figure 10-34. Using the notation in this figure, the timing-error metric is computed as

Clock and Data Recovery

476

Figure 10-34 Zero-crossing timing-error metric due to Gardner. The method uses two samples per symbol (the outer samples y−1 and y+1 at ((2k−1)/2)Tsym and ((2k+1)/2)Tsym, and the sample y0 at kTsym) and tracks the zero-value that occurs between adjacent but opposite-polarity symbols, as shown for (a) negative and (b) positive transitions.

ε = y0 (y−1 − y+1)    (10.45)

Since zero-crossing metrics track the mean zero-crossing between data-symbols rather than the peak data-eye opening, any asymmetry in the eye-pattern normally results in some additional loss in performance. Several variants of (10.45) that are often used are

εa = y0 sign(y−1 − y+1)    (10.46)

εb = sign(y0) sign(y−1 − y+1)    (10.47)
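The mean S-curve behind (10.45) can be sketched without simulation: for independent equiprobable ±1 symbols and a noise-free matched-filter output with pulse r(t), the expectation of Gardner's metric reduces to a short lattice sum over the pulse samples. The normalization below (symbol period of 1, data strobes at integer instants with y0 midway between) and the raised-cosine pulse are assumptions for illustration.

```python
import numpy as np

beta = 0.4                 # assumed raised-cosine excess bandwidth

def rc(t):
    t = np.asarray(t, float)
    den = 1.0 - (2.0 * beta * t) ** 2
    den = np.where(np.abs(den) < 1e-9, 1e-9, den)   # guard the 1/(2*beta) point
    return np.sinc(t) * np.cos(np.pi * beta * t) / den

def gardner_s_curve(tau, K=30):
    """E[y0 (y-1 - y+1)] of (10.45) for iid +/-1 symbols, no noise:
    E = sum_n r(n+tau) * [r(n+1/2+tau) - r(n-1/2+tau)]  (derived for this model)."""
    n = np.arange(-K, K + 1)
    return float(np.sum(rc(n + tau) * (rc(n + 0.5 + tau) - rc(n - 0.5 + tau))))

print(gardner_s_curve(0.0), gardner_s_curve(0.1), gardner_s_curve(-0.1))
```

The curve is exactly zero at the lock point and odd in the timing offset for a symmetric pulse, which is the behavior claimed for ZC metrics in the text; an asymmetric pulse shifts the zero and costs performance.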

The εb form can be viewed as an all-digital implementation of this metric, but due to metastability issues with creating sign(y0), it is only suitable for reasonably low data-rate applications. In general, the ZC method is inferior to the methods considered in Section 10.5.2, except possibly in higher-SNR situations [25].

10.5.3.3 Hogge

The Hogge timing-error detector [26] shown in Figure 10-35 was first patented in 1985 (U.S. Patent 4,535,459). Aside from its simple all-digital implementation, this detector also avoids the metastability issues that plague some other all-digital methods. The active loop-filter sees two pulses with opposite polarity once the correct timing has been established by the synchronizer. The positive-polarity pulse is created by the first D-flip-flop and has an area that is proportional to the phase error relative to the clock signal. The negative-polarity pulse is created by the second D-flip-flop whenever an input data transition occurs, and it has a fixed area that is proportional to one-half of the clock-period. The loop-filter sums these two pulse inputs, and although they occur at slightly different times, the long-term average-sum is zero when proper clock alignment exists.

An example timing-diagram for the Hogge detector in operation is shown in Figure 10-36. Although the area of the composite output pulses S2(t) − S3(t) is zero as shown, the integral of this composite signal (the oscillator acts as an ideal integrator) has a net dc-value which is proportional to the incoming data transition-density. With sufficiently small closed-loop bandwidth and constant


transition-density, the impact on the bit synchronizer’s tracking performance is usually negligible, but this is nonetheless a source of low-frequency data-pattern-dependent noise. One possible remedy for the pattern-dependent noise in this detector is the modified-Hogge detector discussed in [27]. This modified timing-error metric creates an integrated error signal that exhibits no dc-bias and therefore eliminates the pattern-dependent noise. A simplified logic diagram for this detector is shown in Figure 10-37.

Figure 10-35 Hogge clock-recovery circuit within PLL structure: two cascaded D-flip-flops and two XOR gates form S2(t) and S3(t) from the input data S1(t), and their difference drives the loop amplifier controlling the VCXO. After: [26].

Figure 10-36 Example timing diagram for Hogge detector in Figure 10-35 assuming ideal clock alignment, showing Data In, Clock Out, S1(t), Data Out, S2(t), S3(t), the composite S2(t) − S3(t), and its integral with the average dc level indicated.


Figure 10-37 Modified Hogge detector from [27] that eliminates low-frequency noise from the timing-metric output: three cascaded D-flip-flops and three XOR gates form S2(t), S3(t), S4(t), and S5(t) from the input data S1(t), and these are summed (with S3(t) weighted by −2) to create the error signal ε(t) driving the PLL VCO input.

10.5.3.4 Bang-Bang (Alexander)

The bang-bang timing-error detector was first introduced by Alexander in [28]. Its analysis and modeling, along with some discussion of metastability issues, can be found in [29]. This detector is all-digital and it exhibits so much gain (i.e., it is strongly nonlinear) that the use of charge-pumps is unnecessary. This detector is most frequently used in high data-rate applications (e.g., 10 Gbps) where pulse-widths are very small and closed-loop bandwidths need only be a small percentage of the symbol rate.

A logic diagram for the bang-bang detector is shown in Figure 10-38. The detector output consists of fixed-area pulses that are either positive or negative depending on the sign of the timing-error present. The pulses occur at the data-edges, and the mean-value S-curve characteristic is given by the convolution of the nonlinear bang-bang characteristic and the probability distribution function of the prevailing time-jitter.
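The last statement can be sketched numerically: convolving the hard sign() characteristic with an assumed Gaussian jitter density (the jitter model and its rms value below are assumptions) should collapse the mean S-curve onto an erf curve.

```python
import numpy as np
from math import erf, sqrt

sigma = 0.05                      # assumed rms timing jitter, symbol units

def mean_s_curve(tau):
    """Average bang-bang output: sign() convolved with a Gaussian jitter pdf."""
    u = np.linspace(-8 * sigma, 8 * sigma, 20001)
    du = u[1] - u[0]
    pdf = np.exp(-u**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
    return float(np.sum(np.sign(tau - u) * pdf) * du)

# For Gaussian jitter the analytic result is erf(tau / (sigma*sqrt(2)))
for tau in (-0.1, -0.02, 0.0, 0.02, 0.1):
    print(tau, mean_s_curve(tau), erf(tau / (sigma * sqrt(2))))
```

This is why the bang-bang loop's effective gain depends on the jitter itself: more jitter flattens the erf-shaped mean S-curve and lowers the small-signal slope at lock.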

Figure 10-38 Bang-bang style timing-error detector: cascaded D-flip-flops clocked by the PLL VCO sample the input data, XOR gates compare adjacent samples, and a V-to-I stage produces the error output ε(t) for the loop filter.


10.5.3.5 Summary of Hardware-Based Timing-Error Detectors

A short summary of timing-error detectors is provided in Table 10-3. Before ultimately selecting a timing-error detector type, many other issues that are important for overall CDR design should also be considered. These include topics such as the initial frequency pull-in process, output spurious performance, necessary loop-filter order, bandwidth, and other measures.

Table 10-3 Summary of Timing-Error Detector Types

Mueller and Müller
  Advantages: • Simple • Analog and digital variants possible
  Disadvantages: • Based on waveform symmetry • Possible metastability issues
  Usage: Generally for low- to medium-rate applications

Zero-Crossing
  Advantages: • Simple • Well suited for Nyquist pulses with β > 0.30
  Disadvantages: • Analytically difficult to analyze • Optimum eye sampling-point inferred from zero-crossings • Increased variance for smaller values of β • Based on waveform symmetry
  Usage: Simple, low-speed applications

Hogge
  Advantages: • Simple • Avoids metastability issues • All-digital
  Usage: Widely applicable

Modified Hogge
  Advantages: • Avoids metastability issues • All-digital • Improves on bias of integrated output
  Disadvantages: • More complicated
  Usage: Widely applicable

Bang-Bang
  Advantages: • All-digital • Very high-gain
  Disadvantages: • Highly nonlinear, difficult to analyze • Metastability performance involved • High-jitter variance
  Usage: Normally only for high data-rate applications

10.6 BIT ERROR RATE INCLUDING TIME RECOVERY

CDR bit error rate (BER) performance can be calculated using the same methods that were employed in Section 5.5 for the effects of phase noise on BER performance. The phase noise probability density function was assumed to be the Tikhonov density function throughout that discussion. This traditional approach is considered further in the context of CDR performance in Section 10.6.5. This section opens with an insightful detour into analyzing the PLL timing-recovery process through the use of a first-order Markov modeling approach. This perspective avoids the otherwise involved mathematics of the continuous-time Fokker-Planck and Chapman-Kolmogorov theories necessary to analyze the PLL's behavior under poor signal-to-noise ratio conditions. Even though the discussion that follows is centered on using a type-1 PLL, this simplifying assumption is usually made for the other approaches as well. In the previous section, the mean and variance of the timing-error metric's S-curve were central to the discussions about performance. In this section, the quantization involved with the Markov model forces each metric to be viewed in terms of the underlying phase-state transition-probabilities that are discussed shortly. Once these probabilities have been computed, the steady-state phase error distribution of the PLL can be found, and from it the associated tracking-error variance. A closely related analysis based solely on the transition-probabilities is presented next that provides a closed-form means for predicting the average time that


must pass before the PLL loses phase-lock. This discussion offers a comparative view between the different timing-error metrics that is seldom seen in the literature. The section concludes with a discussion of CDR bit error rate performance that is applicable whether the Markov-based tracking-error probability density is used or the Tikhonov density is used.

10.6.1 Clock Recovery Using First-Order Markov Modeling

Assume that a type-1 PLL is being used for the tracking loop and that no frequency-error is present. The phase error process is actually modulo-2π as shown in Figure 10-39. For appreciable closed-loop SNR, the phase error remains within one 2π portion of the diagram. At lower SNRs, however, the phase error can wander beyond the boundaries of the original 2π segment, causing a cycle-slip to occur. Normally, phase-lock is once again achieved and the phase error remains within an adjacent 2π portion of the diagram for another random duration of time. Over time, many such cycle-slips can occur, so in general the phase error process is not a stationary random process. Fortunately, the process is stationary if the phase error is always first reduced modulo-2π.

Figure 10-39 Modulo-2π aspect of the phase error process: the probability density of the phase error θ repeats about lock points at 2π(n − 1), 2πn, 2π(n + 1), 2π(n + 2). Given that the desired lock-point is between 2πn and 2π(n + 1), a cycle-slip occurs if the phase error moves to any adjacent 2π region.

The phase error range (−π, +π] can be uniformly quantized into N states as suggested in Figure 10-40. The clock-recovery process is shown from a hardware perspective in Figure 10-41. The timing-error metric adopted for the CDR will have an S-curve like that shown in Figure 10-41, and when situated within a properly designed PLL, it will cause the CDR's clock to be nominally positioned midway between phase states 8 and 9 in this example. This point is assumed to be coincident with the optimal timing-phase for the CDR's internal clock.

Figure 10-40 Phase error quantized into 16 discrete states. An example square-wave timing-error metric S-curve is superimposed that exhibits a zero output and correct sign midway between states 8 and 9.


It is convenient to think about the CDR's internal clock-oscillator as a modulo-N counter that ideally completes one full cycle of N counts for every input data symbol, as shown in Figure 10-41. If the CDR's counter gets ahead of the ideal timing associated with the incoming data-stream, it must be delayed one count rather than continue to march ahead of the incoming data-bearing signal. The opposite is true if the CDR's counter gets behind the ideal timing. The advancement or delay of the CDR's internal counter by these discrete amounts is represented by the quantized phase-steps in Figure 10-40. If the CDR's counter is ultimately changed by more than ±8 states from the ideal tracking-point in the figure, a cycle-slip is said to have occurred, which is equivalent to a loss of phase-lock for at least the short-term.

Figure 10-41 Hardware perspective for the Markov modeling example: the continuous-time input signal drives the timing-error metric, whose discrete-time output Mk advances or retards (±1) a modulo-N counter running from a digital clock at Fdig = N·Fsym.

Modeling of this incremental process can be done using a first-order Markov model [2], [30]. Each discrete phase-error in Figure 10-40 is represented by a Markov state as shown in Figure 10-42. Simple closed-form results occur if the state transitions are limited to strictly nearest-neighbor transitions as shown. Since the phase adjustment process is limited to ±1 count per data-symbol period, the maximum slew-rate and closed-loop bandwidth are both inversely proportional to the total number of phase states used, N.

Figure 10-42 First-order N-state Markov chain model for a type-1 PLL with transition-probabilities pk (up one state) and qk (down one state). Note that cycle-slips occur for the state transitions labeled q1 and pN.

At a given phase-state n, there is a finite probability that the next phase adjustment will be a +1 or conversely a −1 count. The probability of a correct adjustment decision increases with signal-to-noise ratio. The probability of moving to a smaller phase index is denoted by the qn transition-probabilities in Figure 10-42, and to a larger index by the pn. In this arrangement, it is not possible to remain at the same phase state for more than one symbol time period, so pn + qn must always equal unity. The transition-probabilities are computable from the timing-error metric's associated behavior


as described shortly. The end-state probabilities q1 and pN are returned to opposite ends of the Markov diagram because of the modulo-2π nature of the phase-error process. As mentioned earlier, treatment of state-transitions q1 and pN in this manner results in a stationary phase error process. In order to assess how well the discrete-time PLL performs with a given timing-error metric, the steady-state probability density of occupying each of the N states is needed. To perform this analysis, let Sk represent the steady-state probability that a time snapshot of the PLL phase index is equal to k. Based on Figure 10-42, the steady-state probability solutions must satisfy the set of equations

S1 = q2 S2 + pN SN    (10.48)

SN = q1 S1 + pN−1 SN−1    (10.49)

Sk = pk−1 Sk−1 + qk+1 Sk+1    (10.50)

From (10.50), it is simple to show that

Sk+1 = (1/qk+1) Sk − (pk−1/qk+1) Sk−1    (10.51)

The first several Sk can be written as follows:

S2 = (1/q2) S1 + (−pN/q2) SN = a2 S1 + b2 SN    (10.52)

S3 = (S2 − p1 S1)/q3 = [(a2 − p1)/q3] S1 + (b2/q3) SN = a3 S1 + b3 SN    (10.53)

S4 = [(a3 − a2 p2)/q4] S1 + [(b3 − b2 p2)/q4] SN = a4 S1 + b4 SN    (10.54)

From these results, the general recursion formula is given by

Sk = ak S1 + bk SN,   ak = (ak−1 − ak−2 pk−2)/qk,   bk = (bk−1 − bk−2 pk−2)/qk    (10.55)

for k ∈ {3, …, N − 1}. The behavior of the Nth state is dictated by (10.49). Letting k = N − 1 in (10.51) and equating this to (10.49) produces the result that

γ = SN/S1 = (pN−1 aN−1 + q1) / (1 − pN−1 bN−1)    (10.56)

At any given instant in time, the phase index must be one of the N possible phase states which is the same as stipulating that the sum of all of the steady-state probabilities must be one. Mathematically this is given by summing all of the Sk with the additional boundary conditions that a1 = 1, b1 = 0, aN = 0, and bN = 1. Performing this sum and making use of (10.56) results in

Clock and Data Recovery N N  S1 =  ∑ ak + γ ∑ bk  k =1  k =1 

483 −1

(10.57)

With S1 now known, SN can be computed from (10.56), and the remaining Sk values can be found using (10.55). The mean tracking point and tracking-error variance can be directly computed in terms of index units from these steady-state probabilities as

µ = Σ_{i=1}^{N} i Si    (10.58)

σ² = Σ_{i=1}^{N} (i − µ)² Si    (10.59)
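Equations (10.48)-(10.59) translate directly into a short numerical routine. The sketch below implements the ak/bk recursion; the 1-based index handling is an implementation choice, and the sanity check (equal up/down probabilities must give a uniform occupancy) follows from symmetry rather than from the text.

```python
import numpy as np

def markov_pll_steady_state(p):
    """Steady-state occupancy S_k of the N-state ring of Figure 10-42 via the
    a_k/b_k recursion (10.52)-(10.57), plus the mean (10.58) and variance (10.59).
    p[k-1] holds p_k (step up from state k); q_k = 1 - p_k steps down."""
    p = np.asarray(p, float)
    N = p.size
    q = 1.0 - p
    pk = lambda k: p[k - 1]          # 1-based accessors for readability
    qk = lambda k: q[k - 1]
    a = np.zeros(N + 1)
    b = np.zeros(N + 1)
    a[1], b[1] = 1.0, 0.0                        # boundary conditions
    a[2], b[2] = 1.0 / qk(2), -pk(N) / qk(2)     # (10.52)
    for k in range(3, N):                        # (10.55), k = 3..N-1
        a[k] = (a[k - 1] - a[k - 2] * pk(k - 2)) / qk(k)
        b[k] = (b[k - 1] - b[k - 2] * pk(k - 2)) / qk(k)
    a[N], b[N] = 0.0, 1.0                        # boundary conditions
    gamma = (pk(N - 1) * a[N - 1] + qk(1)) / (1.0 - pk(N - 1) * b[N - 1])  # (10.56)
    S1 = 1.0 / (a[1:].sum() + gamma * b[1:].sum())                         # (10.57)
    S = a[1:] * S1 + b[1:] * gamma * S1          # S_k = a_k S1 + b_k S_N
    idx = np.arange(1, N + 1)
    mu = float((idx * S).sum())                  # (10.58)
    var = float((((idx - mu) ** 2) * S).sum())   # (10.59)
    return S, mu, var

# Sanity check: p_k = 0.5 everywhere is a symmetric ring walk, so the
# occupancy is uniform and the mean sits at (N + 1)/2
S, mu, var = markov_pll_steady_state(np.full(16, 0.5))
```

For N = 16 the symmetric case yields µ = 8.5, exactly the "midway between states 8 and 9" lock point described for Figure 10-40.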

The sine wave in AWGN first presented in Section 1.4 and again in Appendix 3B can be used to illustrate how these results can be used. Assume that the N-state discrete PLL is going to be used to track the sine wave, assuming that no frequency error is present. The probability distribution of the observed phase error within the tracking loop can be computed using (3B.5). Several examples at different input SNRs are shown in Figure 10-43, where the sine wave amplitude is assumed to be 1 V and the SNR is given by ρ = 1/(2σ²), where σ² is the variance of the additive Gaussian noise.

Figure 10-43 Phase error probability density function using (3B.5) for input SNRs of −6, 0, and +6 dB. (Book CD:\Ch10\u12547_sinusoid_pdf.m.)

The probability of moving right (or left) in Figure 10-42 (i.e., the transition-probabilities) depends on the probability mass that is within +π (−π) radians of the present tracking phase in Figure 10-40. Mathematically, this is given by

pk = ∫_{θk}^{θk+π} pφ(φ, ρ) dφ    for θk ≤ 0

pk = ∫_{−π}^{−π+θk} pφ(φ, ρ) dφ + ∫_{θk}^{π} pφ(φ, ρ) dφ    for θk > 0    (10.60)
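A numerical sketch of (10.60) follows. A generic symmetric density on (−π, π] stands in for the (3B.5) density, which is not reproduced here; the grid resolution is an arbitrary choice.

```python
import numpy as np

def transition_prob(theta_k, pdf, grid):
    """p_k from (10.60): the probability mass within +pi of the current phase
    theta_k, with the density wrapped on (-pi, pi]; pdf is sampled on grid."""
    du = grid[1] - grid[0]
    if theta_k <= 0:
        mask = (grid > theta_k) & (grid <= theta_k + np.pi)
    else:
        mask = (grid > theta_k) | (grid <= -np.pi + theta_k)
    return float(np.sum(pdf[mask]) * du)

# Assumed stand-in for (3B.5): a symmetric bell-shaped density centered at zero
grid = np.linspace(-np.pi, np.pi, 200001)
pdf = np.exp(-grid**2 / (2 * 0.5**2))
pdf /= np.sum(pdf) * (grid[1] - grid[0])      # normalize to unit mass

p0 = transition_prob(0.0, pdf, grid)          # symmetric density: half ahead
print(p0)
```

The complementary mass within −π radians is just pk evaluated at θk − π, so pk and its mirror must sum to one, which makes a convenient self-test for any density plugged into (10.60).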


where pφ(φ, ρ) corresponds to (3B.5) and θk corresponds to the phase associated with the kth Markov state, given by −π + 2π(k − 1)/N. The qk transition-probabilities can be computed from 1 − pk. Several example transition-probability curves are shown in Figure 10-44 for different SNR values.

Figure 10-44 qk transition-probabilities for a sine wave in AWGN with a DPLL using 65 states, for input SNRs of −6, 0, and +6 dB. (Book CD:\Ch10\u12547_sinusoid_pdf.m.)

With the transition-probabilities calculated, the previous results can be used to calculate the steady-state probabilities like those shown in Figure 10-45. The closed-loop tracking-error variance follows from using the Sk steady-state probabilities in (10.58) and (10.59). The probability densities shown in Figure 10-45 become increasingly compact about the desired tracking point as the input SNR is increased. For a fixed SNR, the same compactness can be obtained by narrowing the closed-loop bandwidth of the tracking system by increasing the number of phase states N used in the system. The resulting steady-state probability curves for N = 4 × 65 are shown in Figure 10-46 to illustrate this point.

Figure 10-45 Steady-state occupancy probabilities (N = 65) for input SNRs of −6, 0, and +6 dB. The index range of 26 to 40 corresponds to an angular phase


error range of approximately ±40°.

Figure 10-46 Steady-state occupancy probabilities for number of phase states N = 260, for input SNRs of −6, 0, and +6 dB. The angular extent of the plot is the same as used for Figure 10-45 and the benefit of the 4-times reduced closed-loop bandwidth is readily apparent. The probability mass under each curve is still 1.0 even though the increased number of states causes the discrete probability values to be roughly 4-times smaller.

The Sk probabilities are the discrete equivalent of the phase error probability density function given earlier as pφ(φ) in (5.36), and the role of PBPSK(.) in the same equation is played by the symbol error rate curves like those shown in Figure 10-12. Mathematically, the CDR bit error rate can be written as

PCDR = Σ_{k=1}^{N} Sk PSymErr(θk, ρ)    (10.61)

where PSymErr(θk, ρ) is the static bit-error probability corresponding to a static timing phase-error of θk (like that shown in Figure 10-12) and ρ is the input SNR. Additional comments are provided in Section 10.6.6.
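A minimal sketch of the weighted sum in (10.61) follows. Both the static-error BER curve and the occupancy distribution below are illustrative stand-ins, since neither Figure 10-12 nor a computed Sk set is reproduced here.

```python
import numpy as np
from math import erfc, sqrt

# Hypothetical static-error BER curve standing in for Figure 10-12:
# an assumed eye-closure model where timing error scales the effective SNR
def p_sym_err(theta, ebno_lin):
    loss = max(0.0, 1.0 - abs(theta) / np.pi)
    return 0.5 * erfc(sqrt(ebno_lin) * loss)

N = 65
theta = -np.pi + 2 * np.pi * (np.arange(1, N + 1) - 1) / N   # state phases
S = np.exp(-0.5 * (theta / 0.3) ** 2)
S /= S.sum()                       # stand-in steady-state occupancy (unit mass)

ebno = 10 ** 0.8                   # 8 dB input SNR (assumed)
curve = np.array([p_sym_err(t, ebno) for t in theta])
p_cdr = float(np.sum(S * curve))   # (10.61)
print(p_cdr)
```

Since (10.61) is just an Sk-weighted average of the static curve, the result is always bracketed by the best-case and worst-case static error probabilities, and it collapses to a single point on the curve when the occupancy concentrates in one state.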

10.6.2 Computing Transition-Probabilities for CDR Applications

The previous section conveniently computed the required state-transition-probabilities from the probability density function for the phase of a sine wave in AWGN using (10.60). The S-curves computed earlier in this chapter had a voltage output rather than a "probability" output, however. In order to use the information presented in the previous section, some kind of translation between the voltage-style S-curves and transition-probabilities is required. A clue that these two perspectives are very different is offered by the voltage-style S-curves shown in Figure 10-22, which are independent of SNR, whereas the transition-probability curves must be a function of SNR. In short, the voltage-style S-curves provide information about the timing-error detector's mean output, but insufficient information about the underlying transition-probabilities.

First-order transition-probabilities can, however, be calculated by combining the CDR S-curve information with the computed output variance curves (e.g., Figure 10-20 and Figure 10-21) while assuming that the associated probability density function is Gaussian. This is a fairly pessimistic assumption, however, if a fair amount of ISI is present because the ISI contributes


significantly to the computed variance whereas its distribution is far less damaging than the Gaussian assumption would imply. In order to obtain more accurate results, the Gram-Charlier method may be used to better characterize the underlying probability density function, or time-domain simulations may be run over many data-symbol intervals and the sought-after transition-probabilities may be closely estimated with histogram methods. The latter approach is adopted here for further investigation.

In the Markov modeling case, the only decision that must be made every symbol period by the CDR's internal logic is whether to advance or retard the CDR's internal clock one count. Although the timing-error metrics (e.g., S-curves) in earlier sections have been plotted as if they are continuous-time signals, in reality the error metric value is only available at one discrete time instant per symbol as dictated by the phase of the CDR's internal clock. The decision to advance or retard the CDR's internal clock is based solely on whether the timing-error metric value is positive or negative for a given symbol interval. The probability of a positive or negative output for a given timing-phase offset determines the transition-probability values.

The transition-probability values for the AVEL-CDR are shown in Figure 10-47 for an input SNR of 12 dB using N = 256 discrete phase states. The resulting steady-state timing-phase distribution using the Markov model is shown in Figure 10-48. The same computations for an SNR of 40 dB are shown in Figure 10-49 and Figure 10-50. The most striking difference between these results and the sine wave example considered in the previous section is that the transition-probability curves are more erratic, but more importantly never reach probabilities greater than approximately 0.65 nor less than approximately 0.30. Probabilities near 0 and 1.0 are attained for the sine wave case, even for a 6 dB SNR. This can be explained as follows.

In the sine wave case, a zero-crossing is guaranteed every cycle, and no ISI is present. In the AVEL-CDR case, however, the data transition density is only 50%, and ISI is present. Even if the timing-error metric exhibits the correct sign whenever a data-transition occurs, it probably exhibits the correct sign only 50% of the time when no transition is present since only noise and ISI are present in those cases. As a result, the transition-probabilities are substantially limited to the range [0.25, 0.75]. The fine-structure seen in Figure 10-49 is due to the ISI that is present.

The tracking-error performance is expected to be better for the MMSE-EL-CDR at higher SNRs as suggested by Figure 10-30, and this is substantiated by the results shown in Figure 10-51 and Figure 10-52 for an input SNR of 40 dB. The transition-probabilities for the MMSE-EL-CDR come much closer to the 0 and 1.0 limits and the slope of the curve is sharper and more extended than for the AVEL-CDR.


AVEL-CDR Transition Probabilities 0.7

0.65

Transition Probability

0.6

0.55

0.5

0.45

0.4

0.35

0

0.1

0.2

0.3

0.4 0.5 0.6 Timing, symbols

0.7

0.8

0.9

1

Figure 10-47 Computed transition-probabilities37 for the AVEL-CDR for N = 256 phase states and an input Eb/No = 12 dB. Recovered-Clock Phase Distribution 15 14


Figure 10-48 Steady-state recovered clock phase distribution38 for the AVEL-CDR for N = 256 phase states and an input Eb/No = 12 dB.

37 Book CD:\Ch10\u14020_earlylate_transp.m.
38 Ibid.


Figure 10-49 Computed transition-probabilities for the AVEL-CDR for N = 256 phase states and an input Eb/No = 40 dB.


Figure 10-50 Steady-state recovered clock phase distribution for the AVEL-CDR for N = 256 phase states and an input Eb/No = 40 dB. A slight timing-error bias is apparent.


Figure 10-51 Computed transition-probabilities39 for the MMSE-EL-CDR for N = 256 phase states and an input Eb/No = 40 dB.


Figure 10-52 Steady-state recovered clock phase distribution40 for the MMSE-EL-CDR for N = 256 phase states and an input Eb/No = 40 dB. The timing-error bias has disappeared compared to the AVEL-CDR in Figure 10-50, and the distribution is more compact, implying a smaller tracking-error variance. The distribution is slightly asymmetric due to ISI patterns that are distinct at high SNR levels.

In summary, it is worthwhile to point out that Markov modeling such as that in the preceding discussion can be used to gain valuable system insights without resorting to more complicated approaches based on Fokker-Planck and Chapman-Kolmogorov theory for low-SNR operation. Hybrid approaches that separately compute the tracking-error density function (e.g., Figure 10-52) and the marginal bit-error-rate density function (e.g., Figure 10-12) make it possible to compute the resultant CDR bit error rate using (10.61) quickly and accurately.
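The hybrid computation amounts to a probability-weighted average of the marginal BER over the tracking-error states, in the spirit of (10.61). The sketch below assumes a purely illustrative eye-degradation model for the marginal BER; `ber_given_offset` is hypothetical and is not the book's marginal-BER curve:

```python
import math

def cdr_ber(phase_pdf, ber_given_offset, offsets):
    """Total CDR BER as the tracking-error-weighted average of the marginal
    BER, in the spirit of (10.61):
        BER = sum_k  P(tau_k) * BER_marginal(tau_k)."""
    return sum(p * ber_given_offset(tau) for p, tau in zip(phase_pdf, offsets))

def ber_given_offset(tau, ebno_db=10.0):
    """Hypothetical BPSK-like marginal BER that degrades as the sampling
    instant tau (in symbols) moves away from the eye center at tau = 0.5."""
    ebno = 10 ** (ebno_db / 10)
    loss = math.cos(math.pi * (tau - 0.5)) ** 2   # assumed eye-opening loss
    return 0.5 * math.erfc(math.sqrt(ebno * loss))
```

Concentrating all of the timing-phase probability at the eye center recovers the marginal BER, while any spread in the tracking-error distribution can only increase the weighted result.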

39 Ibid.
40 Ibid.


10.6.3 Mean-Time to First-Slip

An important quantity relevant to CDR design is the mean-time before loss of phase-lock occurs. This is commonly referred to as the mean-time to first-slip. This quantity normally increases exponentially with SNR and is therefore impractical to compute with time-domain simulations except for low SNR values. The first-order Markov model approach can be extended to provide a convenient closed-form result that is based entirely on the transition-probabilities and the input SNR as described here. As mentioned earlier, a cycle-slip occurs when the phase state is either incremented to the right of state-N or to the left of state-1 in Figure 10-42. Both of these possibilities must be addressed since the transition-probabilities may not be symmetric about the ideal locking point. The state diagram can be modified as shown in Figure 10-53 in order to compute the average time required to reach state N + 1 from the nominal-center tracking state.

Figure 10-53 Modified first-order Markov state diagram for computing mean-time to lose phase-lock. Loss of lock occurs when the phase error reaches the absorbing state N + 1.

Let $T_k^{N+1}$ be the mean-time to reach state $N+1$ when the starting phase is state-$k$, where $k$ corresponds to an interior state index. The probability of moving to the neighboring phase-state on the right is given by $p_k$, and the mean-time to reach state $N+1$ from that state would be $T_{k+1}^{N+1} + 1$. Similarly, the probability of moving to the neighboring phase-state on the left is given by $q_k$, and the mean-time to reach state $N+1$ from that state would be $T_{k-1}^{N+1} + 1$. Consequently,

$$T_k^{N+1} = p_k\left(T_{k+1}^{N+1} + 1\right) + q_k\left(T_{k-1}^{N+1} + 1\right) = p_k T_{k+1}^{N+1} + q_k T_{k-1}^{N+1} + 1 \tag{10.62}$$

since $p_k + q_k = 1$. Using this result and solving for $T_{k+1}^{N+1}$ produces

$$T_{k+1}^{N+1} = \frac{T_k^{N+1} - 1 - q_k T_{k-1}^{N+1}}{p_k} \tag{10.63}$$

Subtracting $T_k^{N+1}$ from both sides of (10.63) and collecting terms results in

$$T_{k+1}^{N+1} - T_k^{N+1} = \left(\frac{q_k}{p_k}\right)\left(T_k^{N+1} - T_{k-1}^{N+1}\right) - \frac{1}{p_k} \tag{10.64}$$


The situation is a bit different for state-1. First of all, $T_{N+1}^{N+1} = 0$ since the starting phase is the same as the ending phase. Using this fact for state-1 leads to

$$T_1^{N+1} = p_1 T_2^{N+1} + 1 \tag{10.65}$$

Rearranging terms in (10.65) and subtracting $T_1^{N+1}$ from both sides of the equation results in

$$T_2^{N+1} - T_1^{N+1} = \left(\frac{q_1}{p_1}\right) T_1^{N+1} - \frac{1}{p_1} \tag{10.66}$$

With this starting point and the recursion represented by (10.64), the general solution is given by

$$T_{k+1}^{N+1} - T_k^{N+1} = \alpha_k T_1^{N+1} + \beta_k \tag{10.67}$$

where

$$\alpha_k = \prod_{n=1}^{k} \left(\frac{q_n}{p_n}\right), \qquad \beta_k = -\sum_{n=1}^{k-1} \left[\frac{1}{q_n}\prod_{m=n}^{k} \left(\frac{q_m}{p_m}\right)\right] - \frac{1}{p_k} \tag{10.68}$$

Since $T_{N+1}^{N+1} = 0$, it follows from (10.67) that

$$T_N^{N+1} = -\alpha_N T_1^{N+1} - \beta_N \tag{10.69}$$

It must also be true that

$$T_N^{N+1} = T_1^{N+1} + \sum_{n=1}^{N-1} \left(T_{n+1}^{N+1} - T_n^{N+1}\right) = a\, T_1^{N+1} + b \tag{10.70}$$

with the term in parentheses given by (10.67), so that $a = 1 + \sum_{n=1}^{N-1} \alpha_n$ and $b = \sum_{n=1}^{N-1} \beta_n$. Equating (10.69) with (10.70) and solving for $T_1^{N+1}$ finally produces

$$T_1^{N+1} = -\frac{\beta_N + b}{\alpha_N + a} \tag{10.71}$$

In order to compute the mean-time to loss of phase-lock, the starting phase is assumed to initially be at the ideal sampling point within the eye-pattern. This corresponds to state $M = (N+1)/2$ in the diagrams that have been presented in this chapter. The mean-time to first-slip is then given by $T_M^{N+1}$, which can be computed using (10.67) as

$$T_M^{N+1} = T_1^{N+1} + \sum_{n=1}^{M-1} \left(T_{n+1}^{N+1} - T_n^{N+1}\right) \tag{10.72}$$
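Equations (10.62) through (10.72) reduce to a short recursion on the transition probabilities. A sketch, checked against the classic symmetric random-walk case, for which the mean absorption time from the center state is $((N+1)/2)^2$; a real CDR would supply measured $p_k$ values:

```python
import numpy as np

def mean_time_to_first_slip(p):
    """Mean-time to first slip, in symbols, for an N-state first-order
    Markov CDR model following (10.62)-(10.72).  p[k] is the probability
    of stepping right from interior state k+1 (0-indexed); 1 - p[k] is
    the probability of stepping left.  N must be odd so that the center
    state M = (N+1)/2 exists."""
    p = np.asarray(p, dtype=float)
    q = 1.0 - p
    N = len(p)
    assert N % 2 == 1, "N must be odd"
    alpha = np.cumprod(q / p)                 # alpha_k, (10.68)
    # beta_k via the equivalent recursion D_k = (q_k/p_k) D_{k-1} - 1/p_k,
    # where D_k = T_{k+1} - T_k = alpha_k*T_1 + beta_k, per (10.64), (10.67)
    beta = np.empty(N)
    beta[0] = -1.0 / p[0]
    for k in range(1, N):
        beta[k] = (q[k] / p[k]) * beta[k - 1] - 1.0 / p[k]
    a = 1.0 + alpha[:-1].sum()                # coefficients in (10.70)
    b = beta[:-1].sum()
    T1 = -(beta[-1] + b) / (alpha[-1] + a)    # (10.71)
    D = alpha * T1 + beta                     # successive differences, (10.67)
    M = (N + 1) // 2
    return T1 + D[:M - 1].sum()               # (10.72)
```

For `p = [0.5]*5` (a symmetric walk with N = 5), the routine returns 9 symbols, matching the gambler's-ruin closed form.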


This final result41 is used to compute the mean-time to first-slip as a function of input Eb/No for the MMSE-EL-CDR and the AVEL-CDR in Figure 10-54. A total of N = 64 phase states is used for each CDR. The performance of the MMSE-EL-CDR is considerably better than that of the AVEL-CDR at input Eb/No values greater than approximately 3.5 dB because the transition-probabilities are much sharper for the former, as shown in Figure 10-51. This performance difference is also consistent with the comparison provided in Figure 10-30. The precise slip-time values are very sensitive to the transition-probabilities, which were obtained numerically in these results by simulating a sequence of 32,768 data symbols in the time-domain. This sensitivity is also heightened by the large and small numerical values involved in the computations. The linear portion of the MMSE-EL-CDR curve illustrates the exponential behavior of the mean-time to first-slip with input SNR when the timing-error metric is not limited by ISI contributions.


Figure 10-54 Mean-time to first-slip42 for MMSE-EL-CDR and AVEL-CDR using N = 64 states. The MMSE-EL-CDR exhibits a substantially better mean-time to slip for Eb/No greater than about 3.5 dB because its transition probability (Figure 10-51) curve exhibits a much larger maximum-to-minimum difference.

10.6.4 Applying First-Order Markov Modeling to Real PLLs

The Markov modeling results developed in the previous sections for the type-1 system were based almost entirely on state-transition-probabilities. This dependence on transition-probabilities and the applicability of the results to a real type-1 PLL are illustrated through the example that follows.

Figure 10-55 Simple type-1 digital phase-locked loop with one-bit phase detector quantization.

41 An example time-domain simulation is compared with the closed-form solution in Book CD:\Ch10\u14021_mt2slip.m.
42 Book CD:\Ch10\u14020_earlylate_transp.m with N = 64 states, 32,768 symbols simulated.


Consider the type-1 PLL shown in Figure 10-55, which operates with a sampling rate of Fsym = 1 Hz. The purpose of the PLL is to track the nominally mean-zero input phase function of time θin even though the input is accompanied by Gaussian noise. The phase error represented by θe is hard-limited to ±1 in this example, and the VCO tuning sensitivity is Kv rad/sec/V. In this configuration, the VCO's actual output frequency would be pre-tuned to the desired symbol rate (≅ 1 Hz), and adjustments to the VCO's frequency are limited to ±δ Hz from the nominal center frequency. A discrete-time model for this PLL is shown in Figure 10-56, where the quantization noise associated with the 1-bit phase-error quantization is represented by Vq and the VCO has been replaced by a discrete numerically-controlled oscillator (NCO) that has a tuning sensitivity of 2π/N rad/sec/V. The number of discrete phase-states used is N. For the discussion that follows, it is assumed that θin is a mean-zero Gaussian random phase quantity having a variance of σθ² rad².

Figure 10-56 Discretized one-bit DPLL showing the quantization noise source Vq, the discrete decisions d(k), and the NCO gain based on the number of phase states used.

The variance of the output phase can be calculated by using a brute-force time-domain simulation of the system shown in Figure 10-56. Since this is a simple system, and only the output variance is needed, this is an acceptable approach to use. This result will be compared with the closed-form result that will be derived as follows. Based on the previous work done with transition-probabilities, the mean-value of $d(k)$ (dropping the sample-index from this point forward) can be written as

$$d_{ave} = E(d \mid \theta_o) = \mathrm{prob}(\theta_{in} > \theta_o) - \mathrm{prob}(\theta_{in} \le \theta_o) = -\mathrm{erf}\left(\frac{\theta_o}{\sigma_\theta \sqrt{2}}\right) \tag{10.73}$$

Even though this is a nonlinear function of $\theta_o$, this relationship can be linearized and $d_{ave}$ written as $d_{ave} = K_d\,\theta_e$ where

$$K_d = -\frac{\partial}{\partial \theta_o}\left[-\mathrm{erf}\left(\frac{\theta_o}{\sigma_\theta \sqrt{2}}\right)\right]_{\theta_o = 0} = \sqrt{\frac{2}{\pi}}\,\frac{1}{\sigma_\theta} \tag{10.74}$$

Given this linearization step, the z-transform for the output phase in Figure 10-56 can be written as

$$\theta_o(z) = \left[\theta_{in}(z) + \frac{2\pi}{\alpha N} V_q(z)\right] \frac{\alpha}{z - 1 + \alpha} \tag{10.75}$$


with

$$\alpha = K_d\,\frac{2\pi}{N} = \frac{\sqrt{8\pi}}{N \sigma_\theta} \tag{10.76}$$

The variance of the output phase can be calculated from the power spectral density of $\theta_o$, which is available from (10.75) and by recognizing that the power spectral densities for $\theta_{in}$ and $V_q$ are already known. The input phase was assumed to be a white Gaussian random function, and its corresponding power spectral density is given by

$$S_{\theta_{in}}(f) = \sigma_\theta^2 \quad \text{for } |f| \le \tfrac{1}{2} \tag{10.77}$$

As long as the PLL's input dynamic range is not exceeded, the quantization noise represented by $V_q$ is uniformly distributed between $[-1, +1]$ and therefore has a variance of 1/3. Since the quantization error sequence is also assumed to be uncorrelated (e.g., white), its power spectral density is similarly given by

$$S_{V_q}(f) = \frac{1}{3} \quad \text{for } |f| \le \tfrac{1}{2} \tag{10.78}$$

Using the last two results along with (10.75), the power spectral density of the output phase is given by

$$S_{\theta_o}(f) = \left[\sigma_\theta^2 + \frac{1}{3}\left(\frac{2\pi}{\alpha N}\right)^2\right] \frac{\alpha^2}{1 + 2(\alpha - 1)\cos(2\pi f) + (\alpha - 1)^2} \quad \text{for } |f| \le \tfrac{1}{2} \tag{10.79}$$

Substituting (10.76) into (10.79) produces the final result

$$S_{\theta_o}(f) = \sigma_\theta^2 \left(1 + \frac{\pi}{6}\right) \frac{\alpha^2}{1 + 2(\alpha - 1)\cos(2\pi f) + (\alpha - 1)^2} \quad \text{for } |f| \le \tfrac{1}{2} \tag{10.80}$$
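The integration of (10.80) is simple to carry out numerically, and for this first-order loop it also has a closed form, since the shaping filter $\alpha/(z - 1 + \alpha)$ has a total noise power gain of $\alpha/(2 - \alpha)$ (a standard first-order-recursion result, stated here as a cross-check rather than taken from the text). A sketch; the $\sigma_\theta$ and $N$ values are illustrative only:

```python
import numpy as np

sigma_theta = 0.2      # illustrative input phase standard deviation, rad
N = 256                # number of NCO phase states

alpha = np.sqrt(8 * np.pi) / (N * sigma_theta)   # loop gain, (10.76)

def S_theta_o(f):
    """Output-phase power spectral density of the one-bit DPLL, (10.80)."""
    num = sigma_theta**2 * (1 + np.pi / 6) * alpha**2
    den = 1 + 2 * (alpha - 1) * np.cos(2 * np.pi * f) + (alpha - 1)**2
    return num / den

# Output variance: midpoint-rule integration of (10.80) over |f| <= 1/2
M = 200000
f = (np.arange(M) + 0.5) / M - 0.5
var_numeric = S_theta_o(f).mean()                # interval length is 1

# Same variance from the closed-form noise power gain alpha/(2 - alpha)
var_closed = sigma_theta**2 * (1 + np.pi / 6) * alpha / (2 - alpha)
```

The two results agree to within the quadrature error of the midpoint rule, which is negligible here because the integrand is smooth and periodic over the integration interval.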

The output variance is then found by integrating (10.80) over –½ ≤ f ≤ ½. Closed-form results for the output phase variance obtained by integrating (10.80) are compared in Figure 10-57 with those obtained from brute-force time-domain simulation of the system shown in Figure 10-56, and the agreement is excellent. In general, Markov-style analysis can be successfully used to analyze CDR systems as long as the transition probability functions can be accurately determined. Time-domain simulations are a viable alternative for general CDR analysis, but they become increasingly time-consuming when low BERs must be confirmed. The Markov modeling methods are indispensable for computing mean-time to lose lock performance, as mentioned earlier.

10.6.5 Conventional Approach to Timing-Recovery Analysis

Most CDRs operate at an appreciable input SNR where there is little concern about cycle-slipping or loss of phase lock. In these cases, the most important characteristic of the clock-recovery performance is the tracking-error variance and its associated distribution. If the S-curve of the timing-error metric is linear and otherwise well behaved, the Tikhonov probability density (see Sections 1.4 and 5.5) can be assumed for the timing-error distribution, leaving only the phase variance to be obtained either through time-domain simulation or other means.
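The Tikhonov density mentioned above is $p(\theta) = \exp(\rho \cos\theta)/(2\pi I_0(\rho))$, where $\rho$ is the SNR within the loop bandwidth; for large $\rho$ it approaches a Gaussian with variance $1/\rho$. A quick numerical check (the $\rho$ value is illustrative):

```python
import numpy as np

def tikhonov_pdf(theta, rho):
    """Tikhonov (von Mises) probability density for the tracking error
    theta on -pi <= theta < pi, with rho the SNR in the loop bandwidth.
    np.i0 is the zeroth-order modified Bessel function of the first kind."""
    return np.exp(rho * np.cos(theta)) / (2 * np.pi * np.i0(rho))

# Midpoint grid over one full cycle of phase error
M = 200000
theta = (np.arange(M) + 0.5) * (2 * np.pi / M) - np.pi
pdf = tikhonov_pdf(theta, rho=20.0)

total = pdf.mean() * 2 * np.pi                      # should integrate to 1
variance = (theta**2 * pdf).mean() * 2 * np.pi      # ~ 1/rho for large rho
```

At ρ = 20 the density already integrates to unity to machine precision and its variance is within a few percent of 1/ρ, consistent with the high-SNR Gaussian approximation.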


Figure 10-57 Output phase standard deviation based on brute-force time-domain simulation of the system shown in Figure 10-56 versus the closed-form result43 given by integrating (10.80). N = 256 used.

10.6.6 Connecting Phase Tracking Performance with CDR BER Performance

The clock-recovery and bit-detection processes have been dealt with largely separately throughout this chapter because they are parallel, distinct processes that occur within a CDR. The total CDR BER can only be found by combining the performance metrics for both processes. The best achievable BER is dictated by the marginal BER curves that assume perfect clock-recovery performance. Several example results were provided earlier in Figure 10-12. For reasonable CDR BER performance, an input Eb/No of at least 10 dB can normally be expected, and achieving an SNR within the clock-recovery PLL that is 10 dB to 20 dB higher is not unreasonable, assuming that rapid initial acquisition requirements are not overly demanding. The much smaller closed-loop bandwidth of the PLL compared to the symbol rate makes this SNR improvement possible. Under these circumstances, the phase error variance will be very small and the Tikhonov probability density assumption for the phase error distribution will be very accurate. Care must still be exercised, however, to ensure that the timing-error metric imposes no bias error relative to the ideal sampling point within the data-eye. A more rigorous evaluation of a CDR's BER performance requires computing the weighted BER performance as given by (10.61) when the phase-states are discrete. For a continuous phase-error distribution, the BER must be evaluated using an integral like (5.53). Normally, this additional level of computational precision is not necessary.

10.7 FINAL THOUGHTS

43 Book CD:\Ch10\u14023_dpll.m.

CDR applications span a wide range of operational circumstances and performance requirements. With the advent of true system-on-chip (SoC) implementations, the clock-recovery and carrier-recovery operations are often merged together in wireless applications in order to obtain a more optimal solution. Channel equalization is the norm rather than the exception in both low-speed and high-speed data-communication situations. Nonetheless, the material presented in this chapter is foundational in nature, and a thorough understanding of the principles involved will prove helpful in any time- and/or phase-recovery design effort. Deep submicron technology makes it possible now to implement CDR methods that were impractical only a few years ago. Weighted timing-error metrics based on SNR (e.g., MMSE plus ML) as well as sophisticated Kalman filtering methods [31] are all now in play. As demonstrated throughout this chapter, CDRs utilize a wide range of design disciplines ranging from PLLs [32] to communication and estimation theory, and as such, they present a rich mixture of many of the diverse topics presented in this text.

References

[1] Crawford, J.A., "DSP-Based GMSK Modem Algorithm Design," Radio Modem Reference Design Guide, RAM Mobile Data, June 3, 1991.
[2] Holmes, J.K., Coherent Spread Spectrum Systems, New York: John Wiley & Sons, 1982.
[3] Simon, M.K., S.M. Hinedi, and W.C. Lindsey, Digital Communication Techniques: Signal Design and Detection, Upper Saddle River, NJ: Prentice-Hall, 1995.
[4] Ziemer, R.E., and R.L. Peterson, Digital Communications and Spread Spectrum Systems, New York: Macmillan Publishing, 1985.
[5] Lucky, R.W., J. Salz, and E.J. Weldon, Principles of Data Communication, New York: McGraw-Hill, 1966.
[6] Ho, E.Y., "Evaluation of Error Probability Including Intersymbol Interference," Bell System Technical Journal, Nov. 1970.
[7] Shimbo, O., and M.I. Celebiler, "The Probability of Error Due to Intersymbol Interference and Gaussian Noise in Communication Systems," IEEE Trans. Communications, April 1971.
[8] Moeneclaey, M., "A Comparison of Two Types of Symbol Synchronizers for Which Self-Noise Is Absent," IEEE Trans. Communications, March 1983.
[9] Moeneclaey, M., "A Simple Lower Bound on the Linearized Performance of Practical Symbol Synchronizers," IEEE Trans. Communications, Sept. 1983.
[10] Proakis, J.G., Digital Communications, 4th ed., New York: McGraw-Hill, 2001.
[11] Lindsey, W.C., and M.K. Simon, Telecommunication Systems Engineering, Englewood Cliffs, NJ: Prentice-Hall, 1973.
[12] McBride, A.L., and A.P. Sage, "Optimum Estimation of Bit Synchronization," IEEE Trans. Aerospace and Electronic Systems, May 1969.
[13] Franks, L.E., "Carrier and Bit Synchronization in Data Communications," IEEE Trans. Communications, Aug. 1980.
[14] Srinath, M.D., and P.K. Rajasekaran, An Introduction to Statistical Signal Processing with Applications, New York: John Wiley & Sons, 1979.
[15] Weinstock, R., Calculus of Variations, New York: Dover Publications, 1974.
[16] Hildebrand, F.B., Methods of Applied Mathematics, 2nd ed., New York: Dover Publications, 1965.
[17] Sagan, H., Introduction to the Calculus of Variations, New York: Dover Publications, 1969.
[18] Moeneclaey, M., "Two Maximum-Likelihood Symbol Synchronizers with Superior Tracking Performance," IEEE Trans. Communications, Nov. 1984.
[19] Meyr, H., and G. Ascheid, Synchronization in Digital Communications, Volume I: Phase-, Frequency-Locked Loops, and Amplitude Control, New York: John Wiley & Sons, 1990.
[20] Egan, W.F., Phase-Lock Basics, New York: John Wiley & Sons, 1998.
[21] Moeneclaey, M., "The Influence of Four Types of Symbol Synchronizers on the Error Probability of a PAM Receiver," IEEE Trans. Communications, Nov. 1984.


[22] Meyr, H., M. Moeneclaey, and S.A. Fechtel, Digital Communication Receivers: Synchronization, Channel Estimation, and Signal Processing, New York: John Wiley & Sons, 1998.
[23] Mueller, K.H., and M. Müller, "Timing Recovery in Digital Synchronous Data Receivers," IEEE Trans. Communications, May 1976.
[24] Gardner, F.M., "A BPSK/QPSK Timing-Error Detector for Sampled Receivers," IEEE Trans. Communications, May 1986.
[25] Fogel, E., and M. Gavish, "Performance Evaluation of Zero-Crossing-Based Bit Synchronizers," IEEE Trans. Communications, June 1989.
[26] Hogge, C.R., "A Self Correcting Clock Recovery Circuit," IEEE Trans. Electron Devices, Dec. 1985.
[27] Devito, L., et al., "A 52 MHz and 155 MHz Clock-Recovery PLL," IEEE ISSCC, Feb. 1991.
[28] Alexander, J.D.H., "Clock Recovery from Random Binary Data," Electronics Letters, Vol. 11, Oct. 1975, pp. 541-542.
[29] Lee, J., K.S. Kundert, and B. Razavi, "Analysis and Modeling of Bang-Bang Clock and Data Recovery Circuits," IEEE Jour. Solid-State Circuits, Vol. 39, No. 9, Sept. 2004.
[30] Holmes, J.K., "Performance of a First-Order Transition Sampling Digital Phase-Locked Loop Using Random-Walk Models," IEEE Trans. Communications, April 1972.
[31] Driessen, P.F., "DPLL Bit Synchronizer with Rapid Acquisition Using Adaptive Kalman Filtering Techniques," IEEE Trans. Communications, Sept. 1994.
[32] Moeneclaey, M., "The Optimum Closed-Loop Transfer Function of a Phase-Locked Loop Used for Synchronization Purposes," IEEE Trans. Communications, April 1983.

Appendix 10A: BER Calculation Using the Gil-Pelaez Theorem

The problem under consideration here is the computation of the cumulative probability distribution F(x) given by

$$F(x) = \int_{-\infty}^{x} f_x(u)\, du \tag{10A.1}$$

in terms of the characteristic function $\varphi(\omega)$ associated with the probability density function $f_x(x)$. A frequent starting point for this calculation is the Gil-Pelaez theorem [1], which states that the cumulative distribution F(x) can be computed from the characteristic function $\varphi(\omega)$ as [2]

$$F(x) = \frac{1}{2} - \int_{-\infty}^{+\infty} \frac{\exp(-j\omega x)}{j 2\pi \omega}\, \varphi(\omega)\, d\omega \tag{10A.2}$$

The Beaulieu series result occurs if the integration in (10A.2) is approximated by a trapezoidal sum [3], [4]. If the characteristic function is an even function of $\omega$ [like that associated with ISI in (10.15)], (10A.2) can be simplified as

$$F(x) = \frac{1}{2} + \int_{-\infty}^{+\infty} \frac{\sin(\omega x)}{2\pi \omega}\, \varphi(\omega)\, d\omega \tag{10A.3}$$

The computation in (10A.2) may be closely approximated by using a discrete Fourier transform (DFT) as

$$F(x) = \frac{1}{2} - \sum_{\substack{k=1 \\ k\ \text{odd}}}^{\infty} \frac{2}{k\pi}\, \mathrm{Im}\left[\exp(-jk\omega_o x)\, \varphi(k\omega_o)\right] \tag{10A.4}$$

where $\omega_o$ governs the sampling interval used in the frequency-domain. It should consequently be chosen small compared to the frequency behavior of the characteristic function so that precise results are obtained. A similar formula is available [2] for computation of the probability density $f_x(x)$ associated with a specific characteristic function as

$$f_x(u) = \frac{2\omega_o}{\pi} \sum_{\substack{k=1 \\ k\ \text{odd}}}^{\infty} \mathrm{Re}\left[\exp(-jk\omega_o u)\, \varphi(k\omega_o)\right] \tag{10A.5}$$
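The series (10A.4) is easily exercised against a case with a known answer: for a zero-mean, unit-variance Gaussian, $\varphi(\omega) = \exp(-\omega^2/2)$, and F(x) must reproduce the normal CDF. A sketch; the $\omega_o$ and truncation choices are illustrative:

```python
import cmath
import math

def gil_pelaez_cdf(x, phi, omega0=0.05, kmax=4001):
    """Cumulative distribution from a characteristic function phi using
    the odd-k series approximation (10A.4):
        F(x) = 1/2 - sum_{k odd} (2/(k*pi)) * Im[exp(-j*k*w0*x) * phi(k*w0)]
    omega0 must be small relative to the variation of phi, and the series
    is truncated once phi has decayed to a negligible level."""
    s = 0.0
    for k in range(1, kmax, 2):                   # odd k only
        w = k * omega0
        s += (2.0 / (k * math.pi)) * (cmath.exp(-1j * w * x) * phi(w)).imag
    return 0.5 - s

# Check case: standard normal characteristic function
phi_gauss = lambda w: math.exp(-0.5 * w * w)
```

Because the Gaussian characteristic function decays rapidly, the truncated series converges quickly; the residual error is governed by the aliasing interval $\pi/\omega_o$, which is made large by choosing $\omega_o$ small.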

References

[1] Gil-Pelaez, J., "Note on the Inversion Theorem," Biometrika, Vol. 38, 1951, pp. 481-482.
[2] Tellambura, C., and A. Annamalai, "Further Results on the Beaulieu Series," IEEE Trans. Communications, Nov. 2000.
[3] Beaulieu, N., "An Infinite Series for the Computation of the Complementary Probability Distribution Function of a Sum of Independent Random Variables and Its Application to the Sum of Rayleigh Random Variables," IEEE Trans. Communications, Sept. 1990.
[4] Beaulieu, N., "The Evaluation of Error Probabilities for Intersymbol and Cochannel Interference," IEEE Trans. Communications, Dec. 1991.

Acronyms and Abbreviations

ADC  Analog-to-Digital Converter
ALC  Automatic Level Control
AM  Amplitude Modulation
APM  All-Pole Method
AR  Auto Regression
ARM  Auto-Regressive Modeling
ARMA  Auto-Regressive Moving Average
AVEL-CDR  Absolute Value Early-Late Gate Clock and Data Recovery
AWGN  Additive White Gaussian Noise
BE  Backward Euler Numerical Integration Method
BER  Bit Error Rate
BLUE  Best Linear Unbiased Estimate
BPSK  Binary Phase Shift Keying
BT  Bandwidth Time
CAD  Computer Aided Design
CDF  Cumulative Distribution Function
CDR  Clock and Data Recovery
CNR  Carrier-to-Noise Ratio
CPM  Continuous Phase Modulation
CR  Cramer-Rao
CRB  Cramer-Rao Bound
CW  Continuous Wave
DAC  Digital-to-Analog Converter
DDS  Direct Digital Synthesizer
DFT  Discrete Fourier Transform
DNL  Differential Nonlinearity
DPLL  Digital Phase-Locked Loop
DPSK  Differential Phase-Shift Keying
DSEL-CDR  Difference of Squares Early-Late Gate Clock and Data Recovery
DSP  Digital Signal Processing
EDGE  Enhanced Data rates for GSM Evolution
ENBW  Equivalent Noise Bandwidth
ENOB  Effective Number of Bits
EVM  Error Vector Magnitude
FBN  Fractional Brownian Noise
FCC  Federal Communications Commission
FE  Forward Euler Numerical Integration Method
FEC  Forward Error Correction
FFT  Fast Fourier Transform
FIR  Finite Impulse Response
FM  Frequency Modulation
FSK  Frequency Shift Keying
GI  Guard Interval
GMSK  Gaussian Minimum-Shift Keying
GSM  Global System for Mobile communications


IIP3  Input Third-Order Intercept Point
IIR  Infinite Impulse Response
IMD  Intermodulation Distortion
INL  Integral Nonlinearity
ISF  Impulse Sensitivity Function
LO  Local Oscillator
LPF  Lowpass Filter
LSB  Least Significant Bit
LTI  Linear Time Invariant
MA  Moving Average
MAP  Maximum A Posteriori
MASH  Multi-stAge-noise-SHaping
M-ASK  M-ary Amplitude Shift Keying
MEM  Maximum Entropy Method
ML  Maximum-Likelihood
ML-CDR  Maximum-Likelihood Clock and Data Recovery
MM  Mueller Müller
MML  Modified Maximum-Likelihood
MMSE  Minimum Mean-Square-Error
MMSE-CDR  Minimum Mean-Square-Error Clock and Data Recovery
MMSE-EL-CDR  Minimum Mean-Square-Error Early-Late Gate Clock and Data Recovery
M-PSK  M-ary Phase Shift Keying
MPSK  Multi-Phase Shift Keying
M-QAM  M-ary Quadrature Amplitude Modulation
MSB  Most Significant Bit
MV  Minimum Variance
NCO  Numerically Controlled Oscillator
NTF  Noise Transfer Function
NRZ  Non-Return to Zero
ODE  Ordinary Differential Equation
OFDM  Orthogonal Frequency Division Multiplex
OSR  Oversampling Rate
PAPR  Peak-to-Average Power Ratio (normally expressed in dB)
PDF  Probability Density Function
PLL  Phase-Locked Loop
PM  Phase Modulation
PSD  Power Spectral Density
QAM  Quadrature Amplitude Modulation
QPSK  Quadrature Phase Shift Keying
RF  Radio Frequency
RFIC  RF Integrated Circuit
RK2  Second-Order Runge-Kutta Numerical Integration Method
RK4  Fourth-Order Runge-Kutta Numerical Integration Method
RTS  Random Telegraph Signal
SER  Symbol Error Rate
SH  Sample-and-Hold
SNDR  Signal-to-Noise Plus Distortion Ratio
SNR  Signal-to-Noise Ratio
THD  Total Harmonic Distortion
TOA  Time of Arrival


VCO  Voltage Controlled Oscillator
VSWR  Voltage Standing Wave Ratio
WCDMA  Wideband Code-Division Multiple Access
WSS  Wide-Sense Stationary
ZC  Zero Crossing


List of Symbols

Parameters

α  Filter excess bandwidth parameter
β  Excess bandwidth parameter for raised-cosine pulse shape
ΔF  Step-frequency change, Hz
Fs  Sampling rate, Hz
Fref  Reference frequency, Hz
Fsym  Symbol rate
kB  Boltzmann constant, 1.38 × 10⁻²³ Joule/°K
Kd  Phase detector gain, normally A/rad or V/rad
Kv  VCO tuning sensitivity, rad/s/V
N  Feedback divider ratio; number of samples
No  One-sided noise spectral density, W/Hz
QL  Loaded resonator quality factor Q, given by 2π (maximum energy stored / total energy lost per cycle)
To  Nominal absolute temperature, 290 Kelvin
Tref  Reference time period, sec (1/Fref)
Ts  Constant time-sampling interval, sec (1/Fs)
Γ  Reflection coefficient
ρ  Signal-to-noise ratio
ρL  SNR within closed-loop bandwidth
σ²  Statistical variance
ζ  Damping factor
ωn  Natural frequency, rad/s
ωs  Sampling frequency, rad/s (e.g., 2π/Ts)
ωu  Unity-gain frequency

Measurement Units

dBc  Decibels relative to carrier
EVM  Error vector magnitude, % rms
rms  Root mean square
UI  Unit interval

Mathematical Operations and Symbols

⊗  Convolution
E( )  Statistical expectation
GOL(s)  Traditional open-loop gain function in Laplace transform form for a strictly continuous-time system
GOLS(s)  Scaled open-loop gain function in Laplace transform form, GOLS(s) = Ts GOL(s), where Ts is the time sampling-period

GOL(z)  Open-loop gain function in z-transform form; includes all sampling effects
H*(s)  Laplace transform including sampling effects. Equivalent to the z-transform by way of the Poisson sum formula (1.11):
$$H^*(s) = \frac{1}{T_s}\sum_{k=-\infty}^{\infty} H\!\left(s - j\frac{2\pi}{T_s}k\right) = H(z)\Big|_{z = \exp(sT_s)}$$
H1(s)  PLL closed-loop gain, e.g., (1.4)
H2(s)  PLL closed-loop gain, e.g., (1.5)
H( )  Hilbert transform operation
L( )  Laplace transform operation
log10  Logarithm to the base 10
loge  Natural logarithm
L(f)  The normalized frequency-domain representation of phase fluctuations. It is the ratio of the PSD in one phase-modulation sideband, referred to the carrier frequency on a spectral-density basis, to the total signal power, at a frequency offset f. The units for this quantity are Hz⁻¹. The frequency f ranges from −νo to +∞; therefore, L(f) is a two-sided spectral density. It is also called single-sideband phase noise¹
Ro  Cutoff rate
s  Laplace (Heaviside) transform operator
Sϕ(f)  The one-sided spectral density of phase fluctuations.² The frequency f spans 0 to ∞, and the dimensions are rad²/Hz. The value of Sϕ(f) is measured by passing the signal through a phase detector and measuring the PSD at the detector output. Normally the approximation L(f) ≈ ½ Sϕ(f) is made, but this is only valid as long as ∫f₁ Sϕ(f) df ≪ 1 rad²
z  z-transform variable
Z  Forward z-transform operator

1 Crawford, J.A., Frequency Synthesizer Design Handbook, Norwood, MA: Artech House, 1994.
2 Ibid.

About the Author

James A. Crawford has held senior staff engineering positions at Hughes Aircraft Co., TRW, and M/A-COM Linkabit. While at Hughes, he became the group leader for their synthesized frequency source activities. Most recently, he has worked as a senior fellow and been on the executive staff of Sequoia Communications in San Diego. Through most of the 1990s, he operated his own consulting firm in the San Diego area, focusing on wireless and wired communication systems. His many projects included communication system and design responsibilities for paging systems, cellular phones, military TDMA SATCOM terminals, UWB signal interception, and DVB-H chipset design, only to name a few. In 1994, his first book, Frequency Synthesizer Design Handbook, was released. He cofounded venture-backed Magis Networks in 1999, which specialized in 5 GHz OFDM distribution of HDTV video and data, serving as its CTO and PHY-director. Following Magis, he resumed his consulting activities as AM1 LLC, which he has continued to this day. Mr. Crawford has filed and/or been awarded more than 25 U.S. patents. He earned an MSEE degree from USC-Los Angeles in quantum electronics in 1979. He received his BSEE in electrical engineering from the University of Nebraska-Lincoln. Prior to attending college, he lived in Sidney, Iowa, a small rural farming community in southwest Iowa. He and his wife have been married for 24 years and they are the parents of four children. In his spare moments, he enjoys spending time with his family, and is actively involved in Christian service through teaching and other activities. He also enjoys technical computing and writing. He still holds his original amateur radio callsign WB0ZVV. He also lists astronomy and optics as significant interests. He can be contacted through e-mail at [email protected].


Index

accuracy metric, 319
Adler, 426
AM sidebands, 81
ambiguity function, 101
AM-to-PM conversion, 79
analog to digital converter, See digital to analog converter
AWGN, 14, 64
  Box-Muller method, 154
  modeling noise, 64
Bain, See PLL history
Barkhausen criterion, 382
Bayes rule, 19
bessel functions, 71, 188
  Jacobi-Anger formula, 71
  recursions, 73
  series expansion for, 72
  spurious sidebands, 132
Bhattacharyya bound, 220
bilinear transform, 40, 281
binomial theorem, 68, 102
bit error rate
  BPSK, 47, 189
  irreducible error, 206
  QAM, 190
  QPSK, 47, 190
  with intersymbol interference, 452
bit synchronization, See clock and data recovery
black body radiation, 110
Bode plot, 245
Box-Muller, 64
Bromwich, See Laplace transforms
Butterworth filter, 452
  attenuation, 69
  equivalent noise bandwidth, …
CDR, See clock and data recovery
central limit theorem, 102, 111
channel capacity, 214
  M-ASK, 215
channel cutoff rate, 214
  M-ASK, 215
  M-PSK, 217
  M-QAM, 217
  with phase noise, 221
characteristic function, 3, 63, 102, 154
  of intersymbol interference, 453
charge-pump, See phase detector
Chebyshev filter
  attenuation, 69
  group delay, 70
  poles, 69
Chebyshev inequality, 81
Chernoff bound, 81, 220
clock and data recovery
  automatic gain control, 468
  AVEL-CDR, 459
  BER with intersymbol interference, 452
  Cramer-Rao bound, 455
  DSEL-CDR, 460
  initial acquisition, 469
  intersymbol interference, 443, 447
  maximum-likelihood, 444
  mean-time to slip, 487
  minimum mean square error, 444
  ML-CDR, 456
  MML-CDR, 462
  MMSE-CDR, 466
  MMSE-EL vs. AVEL, 468
  MMSE-EL-CDR, 466
  modified maximum-likelihood, 444
  S-curve, 457
  transition density, 469
clock-jitter
  effects on ADC performance, 209
  effects on DAC performance, 207
  power spectral density effects, 209
closed-loop bandwidth, 8, 9
closed-loop gain
  -3 dB frequency, 28, 29
  -6 dB frequency, 28
69, 453 group delay............................................. 69 Hilbert transform.................................... 77 poles ....................................................... 69 Caley-Hamilton theorem .......................... 278 Cauchy principle of the argument .... 272, 303 Cauchy principle value theorem................. 75 Cauchy residue theorem ........... 233, 267, 290 Cauchy-Schwarz inequality.......... 78, 84, 101 causal system.............................................. 74

508

Index

  H1(s), 4, 27, 226
  H1(z), 43, 44
  H2(s), 4, 6, 27, 226
  -LdB frequency, 28, 29
  maximum gain, 28
  maximum-gain frequency, 28, 29
  sampled system, 302
  unity-gain frequency, 7, 28, 29
continuous time systems, 223
Cramer-Rao bound, 21, 83
  Cauchy-Schwarz inequality, 78
  clock and data recovery, 455
  Fisher information matrix, 84
  sine wave in AWGN, 85
  time of arrival, 89
cycle-slips, See clock and data recovery
cyclostationary process, 131
damping factor, 3, 10, 12, 148, 225, 232
  approximation, 232
data run length, 469
de Bellescize, See PLL history
delta-sigma modulator
  architectures, 346
  chaotic behavior, 368
  dithering, 364
  irrational initial condition, 365
  Jackson second-order, 346
  Jackson third-order, 336, 345, 346
  limit-cycles, 365
  MASH, 334, 355
  MASH 1-1-1, 357
  MASH 1-2, 358
  MASH 2-2, 357, 360
  modal behavior, 345
  multi-bit third-order feedforward, 349
  noise shaping, 337
  noise transfer function, 335, 340
  optimized, 362
  phase error probability density, 344
  quantization noise, 335
  signal transfer function, 335, 341
  single versus multi-bit, 359
  single-bit fourth-order, 359
  single-bit single-stage second-order feedforward, 348
  single-stage error feedback, 351
  single-stage feedback, 352
  single-stage feedforward, 346
  single-stage hybrid, 354
  unity-gain single-stage, 354
delta-sigma modulator stability, 342
  design guidelines, 343
  Jackson third-order, 342
  usable stability range, 344
  variable gain model, 344
De Moivre-Laplace limit, 62
differentiation
  fourth-order central, 45
  fourth-order Gear, 45
  third-order Gear, 45
digital dividers
  first patent, 327
  sampling effects, 268
  swallow counter, 328
digital to analog converter
  dead-zone, 116
  DNL, 117
  ENOB, 117
  IMD, 118
  INL, 117
  midriser, 116
  midtread, 116
  Nyquist, 116
  quantization noise, 124
  SNDR, 117
  SNR, 117
  THD, 117
diode equation, 111
Dirichlet kernel, 126
direct digital synthesis, 118, 171
  CORDIC, 171
  phase spurious performance, 173
  SNR, 173
DSP windows, 66
  3 dB bandwidth, 129
  Bartlett, 127
  Blackman, 66, 126, 128
  comparison, 130
  equivalent noise bandwidth, 129
  Gaussian, 66
  Hamming, 66
  Hanning, 66
  rectangular, 126
  symmetric for data, 127
  symmetric for periodicity, 66, 127
effective number of bits, 65
eigenfilter, 90
equivalent noise bandwidth
  Butterworth filter, 453
  DSP window, 129
  ideal type-2 PLL, 28, 232
erf function, 106, 154
erfc function, 47, 82
ergodic, 119
error vector magnitude, 48, 213
estimation theory
  amplitude estimate, 14
  BLUE, 20, 23
  central limit theorem, 102
  consistent estimator, 21
  efficient estimator, 21, 83
  estimator similarity, 20
  finite time correlation, 139
  Fisher information matrix, 22
  fundamental theorem, 20
  Kalman filter, 22
  log-likelihood, 18
  MAP estimator, 19
  maximum-likelihood, 18
  maximum-likelihood frequency, 86, 104
  minimum-variance, 19
  minimum-variance phase estimator, 18
  MMSE, 17
  orthogonality principle, 20
  phase estimation, 14, 89
  phase maximum-likelihood estimator, 19
  predictor-corrector, 23
  time of arrival estimator, 89
  weighted least-squares estimator, 20
Euclidean distance, 205
Euler-Lagrange equation, 463
Euler summation, 282
excess bandwidth, 449
eye diagram, 140
Fano broadband matching, 93
feedback divider, See digital divider
Feldtkeller energy equation, 144
Fokker-Planck, 14
forward error correction, 207, 215
Fourier series, 132
  exponential wave, 133
  sawtooth wave, 133
  square wave, 133
Fourier transform, 8
fractional-N fundamentals, 333
fractional-N history
  analog fractional-N, 333
  Andrea and Dennison, 327
  Bolie, 327
  Cutler, 330
  Digiphase, 330, 333
  digital delay, 330
  direct-digital synthesis, 331
  HP8662, 328
  Jackson, 331
  MASH, 331, 333
  Miller, 333
  oversampled noise shaping, 331
  rate-multiplier, 332
  Wells, 331, 334
fractional-N synthesis
  beat-note, 328
  charge-pump nonlinearities, 370
  continuous PSD, 347
  cycle-slip, 364
  delta-sigma modulators, See delta-sigma modulator
  discrete spur issues, 363
  issues to avoid, 369
  loop filter requirements, 376
  m-spurs, 364
  noise power spectral density, 341
  propagation delay issues, 369
  recommendations, 377
  spur reduction using chaos, 364
  VCO pulling, 369
frequency resolution, 99
Gabor limit, 98
gain margin
  definition of, 273
  sampled type-1 PLL, 306
  sampled type-2 PLL, 310
  sampled type-2 third-order PLL, 314
  type-3 PLL, 245
gain phase imbalance, See image rejection
gain-peaking, 5, 11, 229, See loop stability
Gaussian filter, 97
Gaussian moments, 63
Gaussian noise, See Box-Muller
Gray code mapping, 194, 220
Grenander's uncertainty, 99
guard interval, 199
Haggai phase noise model, 94
Haggai PLL, 251
  Chebyshev phase, 251
  design of three-section, 252
  design of two-section, 252
  lead-lag configuration, 251
  open-loop gain, 254
harmonic-balance, 388
Hilbert transform, 73
  Butterworth filter, 77
  Carlin impedance matching, 74
Hogge timing error metric, 473
Huygens, 2, 223
image rejection, 48
impulse sensitivity function, 434, 436
initial value problem, 41
injection-locked oscillators, 2
input third-order intercept point, 175
integration
  by parts, 68, 384
  forward-Euler, 40
  Runge-Kutta, 68, 248, 277
  second-order Gear, 42
  stability region, 41
  trapezoidal, 40
integration by parts, 384
interpolation
  polynomial based, 67
  raised-cosine based, 68
intersymbol interference, 140, 443, 447
  Nyquist-1 criterion, 448
inverse z-transform, 304
Johnson noise, See noise
Kalman filter, 22
Lagrange multiplier, 463
Laguerre root method, 275
Laplace transforms, 59
  Corrington inversion method, 279
  definition of, 274
  FFT-based inversion, 64
  final value theorem, 59
  initial value theorem, 59
  inversion, 274
  inversion as ODE, 276
  inversion using companion models, 282
  inversion using FFT, 281
  inversion using integration, 281
  inversion using Poisson sum, 282
  Ross inversion, 278
  state-transition matrix, 278
  table of, 301
Laurent series, 168
lead-lag
  loop filter, 3, 234
Leeson phase noise model, 94, 429
  Scherer, 94
l'Hopital's rule, 68
limit cycles, 109
line coding, 446
  3B/4B, 446
  4B/5B, 446
  power spectral density, 446
loaded Q, 412
loop filter, 2, 4, 45, 148, 234, 288
  9 dB/octave, See Haggai PLL
  cascaded RC sections, 243
  design of, 239
  Haggai, 8, See Haggai PLL
  lead-lag, 9, 224
  single-ended vs. differential, 234
  type-2, 3
  type-2 fourth-order, 40, 240
  type-2 third-order, 8, 235
loop order, 3
Markov modeling, 14, 15, 476
  first-order PLL, 489
  steady-state probabilities, 15, 479
  transition-probabilities, 15, 478, 482
  transition-probabilities AVEL-CDR, 483
  transition-probabilities MMSE-CDR, 486
MASH, See delta-sigma modulator
matched-filter, 100, 140
  clock and data recovery, 453
  OFDM, 199
maximum-likelihood, See estimation theory
mean-time to slip, 487
minimum phase network, 74
ML, See estimation theory, maximum-likelihood
MMSE, See estimation theory
modulation types
  BPSK, 175
  CPM, 206
  DPSK, 138
  M-QAM, 175
  OFDM, 177
  WCDMA, 177
Mueller-Muller method, 471
natural frequency, 3, 10, 12, 148, 225, 228
  stability limit for maximum, 8
noise
  1/f, 112
  ADC and DAC, 116
  bipolar transistor model, 114
  Brownian motion, 110, 160
  burst, 113
  equipartition theorem, 142
  equivalent noise bandwidth, 112
  flicker, 112
  fractional Brownian motion, 160
  generation recombination, 113
  Johnson, 110
  Nyquist, 110
  popcorn, 113
  quantization, 116
  random telegraph signal, 113
  Schottky's theorem, 111
  shot, 111
  simulation of, 121
  standard Wiener process, 160
  summing noise powers, 187
  thermal, 110
  two port modeling, 142
  white spectrum, 110
noise modeling, 158
  1/f noise, 159
  arbitrary power spectral density, 167
  autoregressive method, 161
  fractional differencing method, 163
  fractional digital filter method, 163
  Hosking FIR/IIR method, 163, 166
  Hurst parameter, 160, 165
  maximum entropy method, 167
  random addition method, 166
  random midpoint method, 164
  recursive filtering method, 162
  self symmetry, 161, 164
Nyquist, 97
  Nyquist rate, 95
  sampling theorem, 95
Nyquist noise, See noise
Nyquist-1 criterion, 448
open-loop gain, 9, 10, 11, 12, 13, 239, 395, 500
  -3 dB frequency, 27
  GOLS(s), 9
  type-2, 3, 27
  type-3, 244
  unity gain frequency, 4, 27
optimal filter, 90
optimal PLL, 14
oscillator
  ALC, 408
  AM to FM conversion, 410
  Barkhausen criterion, 382
  bridge, 395
  bridged-tee, 396
  coarse tuning, 416
  complex envelope, 385
  control theory perspective, 381
  device line measurement, 388
  differential tuned, 418
  fine tuning, 417
  frequency stability factor, 391
  gain compensation, 421
  Haggai's model, 430
  injection-locked, 2, 425
  Kurokawa, 383
  Kurokawa FET example, 385
  LC, 403
  LC oscillator configurations, 405
  Leeson's model, 429
  linear oscillator, 381
  load pulling, 423
  Meacham bridge, 401
  negative resistance, 383
  open-loop Q, 392
  post-tuning drift, 426
  pushing, 426
  RC oscillator, 389
  RC phase shift, 393
  RC-CR oscillator, 389
  resonator Q, 385
  ring, 394
  ring oscillator phase noise, 433
  self-limiting, 413
  summary of results, 406
  synchronized, 2
  tune line noise, 417
  twin-tee, 398
  varactor nonlinearity, 410
  voltage-controlled, 3
  Wien bridge, 399
oversampling ratio, 337
Pade approximation, See time delay
pair-wise error probability, 219
Paley-Wiener, 97
Parseval's theorem, 97, 491
Pascal's triangle, 69
phase detector, 2, 3
  charge-pump, 3, 8, 263, 370
  dead-zone, 374
  nonlinearity issues, 370
  tri-state voltage, 259
  unequal gain, 370
  zero-order sample-hold, 266
phase error metric, See phase detector
phase margin
  definition of, 274
  pseudo-continuous type-1 PLL, 308
  sampled type-1 PLL, 306
  sampled type-2 PLL, 310
  sampled type-2 third-order PLL, 314
  type-2 PLL, 232
  type-2 fourth-order, 242
  type-2 third-order, 238
phase noise
  Allan variance, 136
  close-in, 179
  digital dividers, 145
  integrated, 136, 176
  large offset, 183
  OFDM, 197
  OFDM channel estimation, 201
  other modulation types, 205
  PLL modeling, 146
  power law regions, 138
  reference noise in sampled PLL, 321
  residual FM, 137
  single sideband, 134, 135
  spread spectrum systems, 205
  transfer function, 145
  VCO noise in sampled PLL, 324
phase noise peaking, 7
phase-lock, 2
PLL analysis tool, 256
PLL history
  Bain, 2
  de Bellescize, 2
  Huygens, 2, 293
  pendulum, 2, 223
  sampling theorem, 295
  Whittaker, 295
PLL order, 3, 224
PLL stability, 9
  gain margin, 12, 13
  phase margin, 11, 28
PLL type, 3, 224
  type-2, 3
  type-3, 244
PM sidebands, 81
PM-to-AM conversion, 79
Poisson sum, 8, 9, 64, 95, 97, 210, 262, 265, 324
  approximation, 288
  derivation of, 297, 298
pole-zero excess, 288, 290
positive semidefinite, 91
power spectral density
  all-pole delta-sigma, 56
  Bartlett method, 125
  clock-jitter, 209
  continuous time, 56
  DAC sine wave output, 339
  Daniell method, 125
  direct digital synthesizer, 56
  discrete sampled systems, 120
  discrete time, 63
  effective statistical bandwidth, 99
  formal definition, 118, 156
  line coding, 446
  Lorentzian, 56, 113, 157, 176
  one sided, 135
  periodogram, 99
  random DAC sequence, 338
  resolution bandwidth, 125
  sampled noise, 121
  statistical quality ratio, 99
  two sided, 134, 136
  Welch method, 125
predictor-corrector, See estimation theory
probability density function
  binomial, 62
  Cauchy, 65
  cumulative probability density, 15, 154, 155
  definition of, 152
  exponential, 65
  Gaussian, 62, 153
  Poisson, 111, 114, 154
  Rayleigh, 62, 65
  sine wave in AWGN, 14, 197
  Tikhonov, 14, 22, 63, 188, 220, 491
  uniform, 62, 153
pseudo-continuous, 224
  bandwidth limitations, 266, 287
  comparison with sampled system, 307, 319
  phase detectors, 259
Q function, 47
  approximation, 47
  bounds, 47
QAM energy per symbol, 193
quantizer
  midriser, 336
  midtread, 336
raised cosine, 64, 95, 449
  excess bandwidth, 449
  eye diagrams, 450
random number generator, 153
Box-Muller........................................... 154 Cauchy ................................................. 156 creation of ............................................ 155 exponential ........................................... 156 Rayleigh ............................................... 156 realizable filters.......................................... 97 receiver selectivity ................................... 176 adjacent channel ................................... 177 alternate channel................................... 177 reciprocal mixing ................................. 178 reflection coefficient .................................. 93 root locus type-2 third-order ................................. 238 root polishing ........................................... 275 Ross inversion method ................ See Laplace transforms sampled PLL phase noise in ....................................... 320 type-1 ............................................. 45, 305 type-2 ....................................... 13, 46, 309 type-2 fourth-order ............................... 317 type-2 third-order ........................... 10, 313 sampling theorem ..................................... 295 S-curve ............................................... 15, 457 Shannon capacity theorem ....................... 215 sigma-delta modulator.............................. 346 signaling waveforms bi-phase ................................................ 445 dicode ................................................... 445 line coding............................................ 446 Manchester ........................................... 444 Miller.................................................... 445 NRZ-L.................................................. 444 SNR degradation ...................................... 182 spurious sidebands ............................. 73, 132 square-root raised cosine.......................... 451 SSB rejection................... 
See image rejection stability Nyquist ................................................. 272 sampled systems................................... 303 stochastic processes covariance ............................................ 153 equivalent noise bandwidth.................. 158 ergodic.................................................. 150 linear filtering....................................... 157 mean ..................................................... 153 review of .............................................. 150 wide-sense stationary ........................... 151 symbol error rate M-PSK ................................................. 194
    QAM, 48, 194
system order reduction, 284
Taylor series, 68
thermal noise. See Johnson noise
Tikhonov. See probability density function
time delay, 270
    Pade approximation, 271
    sampling related, 290
    type-1 PLL, 270
time sidelobes, 91
time-bandwidth product, 98
time-frequency uncertainty, 99
timing error metric, 455
    bang-bang, 475
    Hogge, 473
    maximum-likelihood, 456
    Mueller-Muller, 471
    zero-crossing-based, 472
Toeplitz matrix, 168
tracking error variance, 480
transducer gain, 142
transient response, 274
    dead-beat, 296
    hold-in range, 30
    linear settling time, 30
    overshoot, 30
    sampled type-1 PLL, 307
    sampled type-2 fourth-order PLL, 318
    sampled type-2 PLL, 311
    sampled type-2 third-order PLL, 316
    step-frequency to frequency, 29
    step-phase to frequency, 29
    step-phase to phase, 29
transition density, 469
trigonometry, 58
    half-angle, 58
    law of cosines, 58
    law of sines, 59
    sine cosine products, 58
    sine cosine sums, 58
    tangent of difference, 58
Tsypkin, 365
type-2 PLL, 27, 146, 224
type-3 PLL, 244
    Bode plot, 245
    damping factor, 247
    equivalence with type-2, 247
    frequency-step response, 247
    gain margin, 245
    open-loop gain, 244
    root locus, 245
    unity gain, 246
unit circle, 303
unit interval, 140
unity-gain, 228, 236, 310, 355
    type-3 PLL, 246
varactors, back-to-back, 417
VCO pushing, 229
VSWR, 94
Wagner, 388
weak law of large numbers, 101
wide-sense stationary. See stochastic processes
Wiener-Khintchine theorem, 78, 119, 157, 159, 168, 210
z-transforms, 8, 60, 98
    final value theorem, 62
    initial value theorem, 62
    Parseval's theorem, 62
    relationship to Laplace, 297