Noise and Vibration Analysis: Signal Analysis and Experimental Procedures [2 ed.] 9781118962183, 9781118962121, 9781118962152


English, 707 pages, 2023


Table of contents:
Cover
Title Page
Copyright
Contents
About the Author
Preface
Acknowledgments
List of Abbreviations
Annotation
Chapter 1 Introduction
1.1 Noise and Vibration
1.2 Noise and Vibration Analysis
1.3 Application Areas
1.4 Analysis of Noise and Vibrations
1.4.1 Experimental Analysis
1.5 Standards
1.6 Becoming a Noise and Vibration Analysis Expert
1.6.1 The Virtue of Simulation
1.6.2 Learning Tools and the Format of this Book
Chapter 2 Dynamic Signals and Systems
2.1 Introduction
2.2 Periodic Signals
2.2.1 Sine Waves
2.2.2 Complex Sines
2.2.3 Interacting Sines
2.2.4 Orthogonality of Sines
2.3 Random Signals
2.4 Transient Signals
2.5 RMS Value and Power
2.6 Linear Systems
2.6.1 The Laplace Transform
2.6.2 The Transfer Function
2.6.3 The Impulse Response
2.6.4 Convolution
2.7 The Continuous Fourier Transform
2.7.1 Characteristics of the Fourier Transform
2.7.2 The Frequency Response
2.7.3 Relationship Between the Laplace and Frequency Domains
2.7.4 Transient Versus Steady‐State Response
2.8 Chapter Summary
2.9 Problems
References
Chapter 3 Time Data Analysis
3.1 Introduction to Discrete Signals
3.1.1 Discrete Convolution
3.2 The Sampling Theorem
3.2.1 Aliasing
3.2.2 Discrete Representation of Analog Signals
3.2.3 Interpolation and Resampling
3.3 Filters
3.3.1 Analog Filters
3.3.2 Digital Filters
3.3.3 Smoothing Filters
3.3.4 Acoustic Octave Filters
3.3.5 Analog RMS Integration
3.3.6 Frequency Weighting Filters
3.4 Time Series Analysis
3.4.1 Min‐ and Max‐Analysis
3.4.2 Time Data Integration
3.4.3 Time Data Differentiation
3.4.4 FFT‐Based Processing
3.5 Chapter Summary
3.6 Problems
References
Chapter 4 Statistics and Random Processes
4.1 Introduction to the Use of Statistics
4.1.1 Ensemble and Time Averages
4.1.2 Stationarity and Ergodicity
4.2 Random Theory
4.2.1 Expected Value
4.2.2 Errors in Estimates
4.2.3 Probability Distribution
4.2.4 Probability Density
4.2.5 Histogram
4.2.6 Sample Probability Density Estimate
4.2.7 Average Value and Variance
4.2.8 Central Moments
4.2.9 Skewness
4.2.10 Kurtosis
4.2.11 Crest Factor
4.2.12 Correlation Functions
4.2.13 The Gaussian Probability Distribution
4.3 Statistical Methods
4.3.1 Hypothesis Tests
4.3.2 Test of Normality
4.3.3 Test of Stationarity
4.3.3.1 Frame Statistics
4.3.3.2 The Reverse Arrangements Test
4.3.3.3 The Runs Test
4.4 Quality Assessment of Measured Signals
4.5 Chapter Summary
4.6 Problems
References
Chapter 5 Fundamental Mechanics
5.1 Newton's Laws
5.2 The Single Degree‐of‐Freedom System (SDOF)
5.2.1 The Transfer Function
5.2.2 The Impulse Response
5.2.3 The Frequency Response
5.2.4 The Q‐Factor
5.2.5 SDOF Forced Response
5.3 Alternative Quantities for Describing Motion
5.4 Frequency Response Plot Formats
5.4.1 Magnitude and Phase
5.4.2 Real and Imaginary Parts
5.4.3 The Nyquist Plot – Imaginary Versus Real Part
5.5 Determining Natural Frequency and Damping Ratio
5.5.1 Peak in the Magnitude of FRF
5.5.2 Peak in the Imaginary Part of FRF
5.5.3 Resonance Bandwidth (3 dB Bandwidth)
5.5.4 Circle in the Nyquist Plot
5.6 Rotating Mass
5.7 Some Comments on Damping
5.7.1 Hysteretic Damping
5.8 Models Based on SDOF Approximations
5.8.1 Vibration Isolation
5.8.2 Resonance Frequency and Stiffness Approximations
5.9 The Two Degree of Freedom System (2DOF)
5.10 The Tuned Damper
5.11 Chapter Summary
5.12 Problems
References
Chapter 6 Modal Analysis Theory
6.1 Waves on a String
6.2 Matrix Formulations
6.2.1 Degree of Freedom
6.3 Eigenvalues and Eigenvectors
6.3.1 Undamped System
6.3.2 Mode Shape Orthogonality
6.3.3 Modal Coordinates
6.3.4 Proportional Damping
6.3.5 General Damping
6.4 Frequency Response of MDOF Systems
6.4.1 Frequency Response from [M], [C], [K]
6.4.2 Frequency Response from Modal Parameters
6.4.3 Frequency Response from [M], [K], and ζ – Modal Damping
6.4.4 Mode Shape Scaling
6.4.5 The Effect of Node Lines on FRFs
6.4.6 Antiresonance
6.4.7 Impulse Response of MDOF Systems
6.5 Free Decays
6.6 Chapter Summary
6.7 Problems
References
Chapter 7 Transducers for Noise and Vibration Analysis
7.1 The Piezoelectric Effect
7.2 The Charge Amplifier
7.3 Transducers with Built‐In Impedance Converters, “IEPE”
7.3.1 Low‐Frequency Characteristics
7.3.2 High‐Frequency Characteristics
7.3.3 Transducer Electronic Data Sheet, TEDS
7.4 The Piezoelectric Accelerometer
7.4.1 Frequency Characteristics
7.4.2 Mounting Accelerometers
7.4.3 Electrical Noise
7.4.4 Choosing an Accelerometer
7.5 The Piezoelectric Force Transducer
7.6 The Impedance Head
7.7 The Impulse Hammer
7.8 Accelerometer Calibration
7.9 Measurement Microphones
7.10 Microphone Calibration
7.11 The Geophone
7.12 MEMS‐based Sensors
7.13 Shakers for Structure Excitation
7.14 Some Comments on Measurement Procedures
7.15 Problems
References
Chapter 8 Frequency Analysis Theory
8.1 Periodic Signals – The Fourier Series
8.2 Spectra of Periodic Signals
8.2.1 Frequency and Time
8.3 Random Processes
8.3.1 Spectra of Random Processes
8.4 Transient Signals
8.5 Interpretation of Spectra
8.6 Chapter Summary
8.7 Problems
References
Chapter 9 Experimental Frequency Analysis
9.1 Frequency Analysis Principles
9.1.1 Nonparametric Frequency Analysis
9.2 Octave and Third‐Octave Band Spectra
9.2.1 Time Constants
9.2.2 Real‐time Versus Serial Measurements
9.3 The Discrete Fourier Transform (DFT)
9.3.1 The Fast Fourier Transform, FFT
9.3.2 The DFT in Short
9.3.3 The Basis of the DFT
9.3.4 Periodicity of the DFT
9.3.5 Properties of the DFT
9.3.6 Relation Between DFT and Continuous Spectrum
9.3.7 Leakage
9.3.8 The Picket‐Fence Effect
9.3.9 Time Windows for Periodic Signals
9.3.9.1 Amplitude Correction of Window Effects
9.3.9.2 Power Correction of Window Effects
9.3.9.3 Comparison of Common Windows
9.3.9.4 Frequency Resolution
9.3.10 Time Windows for Random Signals
9.3.11 Oversampling in FFT Analysis
9.3.12 Circular Convolution and Aliasing
9.3.13 Zero Padding
9.3.14 Frequency Domain Processing
9.3.15 Zoom FFT
9.4 Chapter Summary
9.5 Problems
References
Chapter 10 Spectrum and Correlation Estimates Using the DFT
10.1 Averaging
10.2 Spectrum Estimators for Periodic Signals
10.2.1 The Autopower Spectrum
10.2.2 Linear Spectrum
10.2.3 Phase Spectrum
10.3 Estimators for PSD and CSD
10.3.1 The Periodogram
10.3.2 Welch's Method
10.3.3 Window Correction for Welch Estimates
10.3.4 Bias Error in Welch Estimates
10.3.5 Random Error in Welch Estimates
10.3.6 The Smoothed Periodogram Estimator
10.3.7 Bias Error in Smoothed Periodogram Estimates
10.3.8 Random Error in Smoothed Periodogram Estimates
10.4 Estimators for Correlation Functions
10.4.1 Correlation Estimator by Long FFT
10.4.2 Correlation Estimator by Welch's Method
10.4.3 Variance of the Correlation Estimator
10.4.4 Effect of Measurement Noise on Correlation Function Estimates
10.5 Estimators for Transient Signals
10.5.1 Windows for Transient Signals
10.6 A Signal Processing Framework for Spectrum and Correlation Estimation
10.7 Spectrum Estimation in Practice
10.7.1 Linear Spectrum Versus PSD
10.7.2 Example of a Spectrum of a Periodic Signal
10.7.3 Practical PSD Estimation
10.7.4 Spectrum of Mixed Property Signal
10.7.5 Calculating RMS Values in Practice
10.7.6 RMS from Linear Spectrum of Periodic Signal
10.7.7 RMS from PSD
10.7.8 Weighted RMS Values
10.7.9 Integration and Differentiation in the Frequency Domain
10.8 Multichannel Spectral and Correlation Analysis
10.8.1 Matrix Notation for MIMO Spectral Analysis
10.8.2 Arranging Spectral Matrices in MATLAB/Octave
10.8.3 Multichannel Correlation Functions
10.9 Chapter Summary
10.10 Problems
References
Chapter 11 Measurement and Analysis Systems
11.1 Principal Design
11.2 Hardware for Noise and Vibration Analysis
11.2.1 Signal Conditioning
11.2.2 Analog‐to‐Digital Conversion, ADC
11.2.2.1 Quantization and Dynamic Range
11.2.2.2 Setting the Measurement Range
11.2.2.3 Sampling Accuracy
11.2.2.4 Anti‐alias Filters
11.2.2.5 Sigma–Delta ADCs
11.2.3 Practical Issues
11.2.4 Hardware Specifications
11.2.4.1 Absolute Amplitude Accuracy
11.2.4.2 Anti‐alias Protection
11.2.4.3 Simultaneous Sampling
11.2.4.4 Cross‐Channel Match
11.2.4.5 Dynamic Range
11.2.4.6 Cross‐Channel Talk
11.2.5 Transient (Shock) Recording
11.3 FFT Analysis Software
11.3.1 Block Processing
11.3.2 Data Scaling
11.3.3 Triggering
11.3.4 Averaging
11.3.5 FFT Setup Parameters
11.4 Chapter Summary
11.5 Problems
References
Chapter 12 Rotating Machinery Analysis
12.1 Vibrations in Rotating Machines
12.2 Understanding Time–Frequency Analysis
12.3 Rotational Speed Signals (Tachometer Signals)
12.4 RPM Maps
12.4.1 The Waterfall Plot
12.4.2 The Color Map Plot
12.5 Smearing
12.6 Order Tracks
12.7 Synchronous Sampling
12.7.1 DFT Parameters after Resampling
12.8 Averaging Rotation‐Speed‐Dependent Signals
12.9 Adding Change in RMS with Time
12.10 Parametric Methods
12.11 Chapter Summary
12.12 Problems
References
Chapter 13 Single‐input Frequency Response Measurements
13.1 Linear Systems
13.2 Determining Frequency Response Experimentally
13.2.1 Method 1 – The H1 Estimator
13.2.2 Method 2 – The H2 Estimator
13.2.3 Method 3 – The Hc Estimator
13.3 Important Relationships for Linear Systems
13.4 The Coherence Function
13.5 Errors in Determining the Frequency Response
13.5.1 Bias Error in FRF Estimates
13.5.2 Random Error in FRF Estimates
13.5.3 Bias and Random Error Trade‐offs
13.6 Coherent Output Power
13.7 The Coherence Function in Practice
13.7.1 Nonrandom Excitation
13.8 Impact Excitation
13.8.1 The Force Signal
13.8.2 The Response Signal and Exponential Window
13.8.3 Impact Testing Software
13.8.4 Compensating for the Influence of the Exponential Window
13.8.5 Sources of Error
13.8.6 Improving Impact Testing by Alternative Processing
13.9 Shaker Excitation
13.9.1 Signal‐to‐noise Ratio Comparison
13.9.2 Pure Random Noise
13.9.3 Burst Random Noise
13.9.4 Pseudo‐random Noise
13.9.5 Periodic Chirp
13.9.6 Stepped‐sine Excitation
13.10 Examples of FRF Estimation – No Extraneous Noise
13.10.1 Pure Random Excitation
13.10.2 Burst Random Excitation
13.10.3 Periodic Excitation
13.11 Example of FRF Estimation – With Output Noise
13.12 Examples of FRF Estimation – With Input and Output Noise
13.12.1 Sources of Error during Shaker Excitation
13.12.2 Checking the Shaker Attachment
13.12.3 Other Sources of Error
13.13 Chapter Summary
13.14 Problems
References
Chapter 14 Multiple‐Input Frequency Response Measurement
14.1 Multiple‐Input Systems
14.1.1 The 2‐Input/1‐Output System
14.1.2 The 2‐Input/1‐Output System – Matrix Notation
14.1.3 The H1 Estimator for MIMO
14.1.4 Multiple Coherence
14.1.5 Computation Considerations for Multiple‐Input System
14.1.6 The Hv Estimator
14.1.7 Other MIMO FRF Estimators
14.2 Conditioned Input Signals
14.2.1 Conditioned Output Signals
14.2.2 Partial Coherence
14.2.3 Ordering Signals Prior to Conditioning
14.2.4 Partial Coherent Output Power Spectra
14.2.5 Backtracking the H‐Systems
14.2.6 General Conditioned Systems
14.3 Bias and Random Errors for Multiple‐Input Systems
14.4 Excitation Signals for MIMO Analysis
14.4.1 Pure Random Noise
14.4.2 Burst Random Noise
14.4.3 Periodic Random Noise
14.4.4 The Multiphase Stepped‐Sine Method (MPSS)
14.5 Data Synthesis and Simulation Examples
14.5.1 Burst Random – Output Noise
14.5.2 Burst and Periodic Random – Input Noise
14.5.3 Periodic Random – Input and Output Noise
14.6 Real MIMO Data Case
14.7 Chapter Summary
14.8 Problems
References
Chapter 15 Orthogonalization of Signals
15.1 Principal Components
15.1.1 Principal Components Used to Find Number of Sources
15.1.2 Data Reduction
15.2 Virtual Signals
15.2.1 Virtual Input Coherence
15.2.2 Virtual Input/Output Coherence
15.2.3 Virtual Coherent Output Power
15.3 Noise Source Identification (NSI)
15.3.1 Multiple Source Example
15.3.2 Automotive Example
15.4 Chapter Summary
15.5 Problems
References
Chapter 16 Experimental Modal Analysis
16.1 Introduction to Experimental Modal Analysis
16.1.1 Main Steps in EMA
16.2 Experimental Setup
16.2.1 Points and DOFs
16.2.2 Selecting Measurement DOFs
16.2.3 Measurement System
16.2.4 Sensor Considerations
16.2.5 Data Acquisition Strategies
16.2.6 Suspension
16.2.7 Measurement Checks
16.2.8 Calibration
16.2.9 Data Acquisition
16.2.10 Mode Indicator Functions
16.2.11 Data Quality Assessment
16.2.12 Checklist
16.3 Introduction to Modal Parameter Extraction
16.4 SDOF Parameter Extraction
16.4.1 The Least Squares Local Method
16.4.2 The Least Squares Global Method
16.4.3 The Least Squares (Local) Polynomial Method
16.5 The Unified Matrix Polynomial Approach, UMPA
16.5.1 Mathematical Framework
16.5.2 Choosing Model Order
16.5.3 Matrix Coefficient Normalization
16.5.4 Data Compression
16.6 Time Versus Frequency Domain Parameter Extraction for EMA
16.7 Time Domain Parameter Extraction Methods
16.7.1 Converting Bandpass Filtered FRFs into IRFs
16.7.2 The Ibrahim Time Domain Method
16.7.3 The Multiple‐Reference Ibrahim Time Domain Method (MITD)
16.7.4 Prony's Method
16.7.5 The Least Squares Complex Exponential Method
16.7.6 Polyreference Time Domain
16.7.7 The Modified Multiple‐Reference Ibrahim Time Domain Method (MMITD)
16.8 Frequency Domain Parameter Extraction Methods
16.8.1 The Least Squares Complex Frequency Domain Method
16.8.2 The Frequency Domain Direct Parameter Identification Method (FDPI)
16.8.3 The Frequency Z‐Domain Direct Parameter Method, FDPIz
16.8.4 The Complex Mode Indicator Function, CMIF Method
16.9 Methods for Mode Shape Estimation and Scaling
16.9.1 Least Squares Frequency Domain – Single Reference Case
16.9.2 Least Squares Frequency Domain – Multiple Reference Case
16.9.3 Least Squares Frequency Domain – Multiple Reference Without MPFs
16.9.4 Least Squares Time Domain
16.9.5 Scaling Modal Model When Poles and Mode Shapes Are Known
16.10 Evaluating the Extracted Parameters
16.10.1 Synthesized FRFs
16.10.2 The MAC Matrix
16.11 Chapter Summary
16.12 Problems
References
Chapter 17 Operational Modal Analysis (OMA)
17.1 Principles for OMA
17.2 Data Acquisition Principles
17.3 OMA Modal Parameter Extraction for OMA
17.3.1 Spectral Functions for OMA Parameter Extraction
17.3.2 Correlation Functions for OMA Parameter Extraction
17.3.3 Half Spectra
17.3.4 Time versus Frequency Domain Parameter Extraction for OMA
17.3.5 Modal Parameter Estimation Methods for OMA
17.3.6 Least Squares Frequency Domain, OMA Versions
17.4 Scaling OMA Modal Models
17.4.1 Scaling an OMA Model Using the Mass Matrix
17.4.2 The OMAH Method
17.5 Chapter Summary
17.6 Problems
References
Chapter 18 Advanced Analysis Methods
18.1 Shock Response Spectrum
18.2 The Hilbert Transform
18.2.1 Computation of the Hilbert Transform
18.2.2 Envelope Detection by the Hilbert Transform
18.2.3 Relating Real and Imaginary Parts of Frequency Response Functions
18.3 Cepstrum Analysis
18.3.1 Power Cepstrum
18.3.2 Complex Cepstrum
18.3.3 The Real Cepstrum
18.3.4 Inverse Cepstrum
18.4 The Envelope Spectrum
18.5 Creating Random Signals with Known Spectral Density
18.6 Identifying Harmonics in Noise
18.6.1 The Three‐Parameter Sine Fit Method
18.6.2 Periodogram Ratio Detection, PRD
18.7 Harmonic Removal
18.7.1 Frequency Domain Editing, FDE
18.7.2 Cepstrum‐Based Harmonic Removal Methods
18.8 Chapter Summary
18.9 Problems
References
Chapter 19 Practical Vibration Measurements and Analysis
19.1 Introduction to a Plexiglas Plate
19.2 Forced Response Simulation
19.2.1 Frequency Domain Forced Response for Periodic Inputs
19.2.2 Frequency Domain Forced Response for Random Inputs
19.2.3 Time Domain Computation of Forced Response for Any Inputs
19.2.3.1 Time Domain Response by Frequency Domain Computation
19.2.3.2 Time Domain Response by Digital Filters
19.2.4 Plexiglas Plate Forced Response Example
19.3 Spectra of Periodic Signals
19.4 Spectra of Random Signals
19.5 Data with Random and Periodic Content
19.5.1 Car Idling Sound
19.5.2 Container Ship Measurement
19.6 Operational Deflection Shapes – ODS
19.6.1 Plexiglas Plate ODS Example – Single Reference
19.6.2 Plexiglas Plate ODS Example – Multiple‐Reference
19.7 Impact Excitation and FRF Estimation
19.8 Plexiglas EMA Example
19.8.1 FRF Quality Assessment
19.8.2 EMA Modal Parameter Extraction, MPE
19.9 Methods for EMA Modal Parameter Estimation, MPE
19.9.1 Time Domain Variable Settings
19.9.2 High‐Order Methods for EMA MPE
19.9.3 Low‐Order Methods for EMA MPE
19.9.4 The Complex Mode Indicator Function, CMIF
19.9.5 Calculating Scaled Mode Shapes
19.10 Conclusions of EMA MPE
19.11 OMA Examples
19.11.1 OMA Using Synthesized Data for Plexiglas Plate
19.11.2 OMA on Measured Data of Plexiglas Plate
19.11.3 OMA of a Suspension Bridge
19.11.4 OMA on Container Ship
References
Appendix A Complex Numbers
Appendix B Logarithmic Diagrams
Appendix C Decibels
Appendix D Some Elementary Matrix Algebra
Appendix E Eigenvalues and the SVD
E.1 Eigenvalues and Complex Matrices
E.2 The Singular Value Decomposition (SVD)
Appendix F Organizations and Resources
Appendix G Checklist for Experimental Modal Analysis Testing
Bibliography
Index
EULA

Citation preview

Noise and Vibration Analysis

Noise and Vibration Analysis: Signal Analysis and Experimental Procedures

Second Edition

Anders Brandt
Aarhus University
Department of Mechanical and Production Engineering
Inge Lehmanns Gade 10, Navitas
8000 Aarhus C
Denmark

This second edition first published 2023
© 2023 John Wiley & Sons Ltd.

Edition History
John Wiley & Sons Ltd. (1e, 2011)

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, except as permitted by law. Advice on how to obtain permission to reuse material from this title is available at http://www.wiley.com/go/permissions.

The right of Anders Brandt to be identified as the author of this work has been asserted in accordance with law.

Registered Office
John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, USA
John Wiley & Sons Ltd, The Atrium, Southern Gate, Chichester, West Sussex, PO19 8SQ, UK

For details of our global editorial offices, customer services, and more information about Wiley products visit us at www.wiley.com. Wiley also publishes its books in a variety of electronic formats and by print-on-demand. Some content that appears in standard print versions of this book may not be available in other formats.

Trademarks: Wiley and the Wiley logo are trademarks or registered trademarks of John Wiley & Sons, Inc. and/or its affiliates in the United States and other countries and may not be used without written permission. All other trademarks are the property of their respective owners. John Wiley & Sons, Inc. is not associated with any product or vendor mentioned in this book.

Limit of Liability/Disclaimer of Warranty
While the publisher and authors have used their best efforts in preparing this work, they make no representations or warranties with respect to the accuracy or completeness of the contents of this work and specifically disclaim all warranties, including without limitation any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives, written sales materials or promotional statements for this work. The fact that an organization, website, or product is referred to in this work as a citation and/or potential source of further information does not mean that the publisher and authors endorse the information or services the organization, website, or product may provide or recommendations it may make. This work is sold with the understanding that the publisher is not engaged in rendering professional services. The advice and strategies contained herein may not be suitable for your situation. You should consult with a specialist where appropriate. Further, readers should be aware that websites listed in this work may have changed or disappeared between when this work was written and when it is read. Neither the publisher nor authors shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages.

Library of Congress Cataloging-in-Publication Data
Names: Brandt, Anders, author.
Title: Noise and vibration analysis : signal analysis and experimental procedures / Anders Brandt.
Description: Second edition. | Hoboken : John Wiley, [2023] | Includes bibliographical references and index.
Identifiers: LCCN 2023001526 (print) | LCCN 2023001527 (ebook) | ISBN 9781118962183 (hardback) | ISBN 9781118962121 (adobe pdf) | ISBN 9781118962152 (epub)
Subjects: LCSH: Vibration–Mathematical models. | Noise–Mathematical models. | Acoustical engineering. | Stochastic analysis. | Signal processing.
Classification: LCC TA355 .B674 2023 (print) | LCC TA355 (ebook) | DDC 620.2/3015118–dc23/eng/20230113
LC record available at https://lccn.loc.gov/2023001526
LC ebook record available at https://lccn.loc.gov/2023001527

Cover Design: Wiley
Cover Images: © loops7/Getty Images, Jorg Greuel/Stone/Getty Images, Sven Hansche/EyeEm/Getty Images, Justin Paget/DigitalVision/Getty Images, Herr Loeffler/Shutterstock

Set in 9.5/12.5pt STIXTwoText by Straive, Chennai, India


About the Author

Anders Brandt is currently professor and Head of the Department of Mechanical and Production Engineering at Aarhus University in Denmark. He received an MSc degree in electrical engineering from Chalmers University of Technology, Sweden, in 1986, and a Licentiate of Engineering degree in medical electronics from the same university in 1989 with a thesis on bone conduction hearing. For the next 20 years, he worked with support, education, and consultancy in industry in Sweden and abroad, in the area of applied signal analysis. In 1996, he started Axiom EduTech, a company dedicated to serving industry and academia with his expertise in advanced signal analysis methods for vibration analysis. During this time he gave over 250 short courses on various topics such as frequency analysis, modal analysis, order tracking, and vibration testing. He also taught at universities on similar topics. In 2009, he left the company and joined the University of Southern Denmark (SDU) as an associate professor, building up a research group focusing on operational modal analysis and structural health monitoring. He has supervised and cosupervised 11 PhD students and 31 master's students to completion and was promoted to full professor in 2019. He has published over 100 cited papers in the fields of vibration analysis and structural health monitoring. He left SDU to become Head of Department at Aarhus University in 2021.

He is the author of the free ABRAVIBE toolbox for MATLAB and GNU Octave, and maintains the site www.abravibe.com from which the toolbox and other educational material may be downloaded. The toolbox is used at universities and in industry worldwide and has over 5,000 registered users. He also has a YouTube channel which contains lectures for many of the chapters of his book.

Preface

The second edition of this book includes three new chapters: Chapter 16 on experimental modal analysis (EMA) and modal parameter estimation, Chapter 17 on operational modal analysis (OMA), and Chapter 19, in which I have included several examples of how to apply the many techniques presented in the book, both on real data and on synthesized data, i.e., data generated by some numerical model. The latter chapter includes real data from some of my recent research that are also available through my website. It is my hope that these new chapters will make the book even more comprehensive for educators, students, and practitioners alike.

In addition to these new chapters, I have also rewritten a few parts, notably the part about correlation function estimators (Sections 10.4.1 and 10.4.2), which I have found in recent years is easier for my students to understand when correlation is presented as a convolution. Chapter 10 now includes a section presenting a new framework for signal processing that I developed in my recent research. Chapter 19 also includes a presentation of a new way of implementing impact testing, which offers advantages not found in traditional measurement systems. This is also included in the accompanying toolbox, ABRAVIBE, which is available from my website www.abravibe.com, along with almost everything from this book, including a solutions manual for all problems in the book. The ABRAVIBE toolbox for MATLAB (or GNU Octave) is very comprehensive, with functionality for most of the techniques presented in this book, including rotating machinery analysis, spectrum analysis, EMA, and OMA. My YouTube channel also complements this book with many lectures.

The material in the first edition of this book, published in 2011, had been developing in my mind through more than 20 years of teaching. During these years, I had been teaching over 250 short courses for engineers in industry on techniques for experimental noise and vibration analysis and also on how to use commercial measurement and analysis systems. In addition, in the late 1990s, I developed and taught three master's level courses in experimental analysis of vibrations at Blekinge Institute of Technology in Sweden.

Noise and vibration analysis is an interdisciplinary field, incorporating diverse subjects such as mechanical dynamics, sensor technology, statistics, and signal processing. Whereas there are many excellent and comprehensive books in each of these disciplines, there has been a lack of introductory material for the engineering student who first starts to make noise and/or vibration measurements, or the engineer who needs a reference in their daily life. In addition, there are few textbooks in this field presenting the techniques as they are actually used in practice. This book is an attempt to fill this void.

My aim for this book is that it may serve both as a course book and as supplementary reading in university courses, as well as providing a handbook for engineers or researchers who measure and analyze acoustic or vibration signals. The level of the book makes it appropriate both for undergraduate and graduate levels, with a proper selection of the content. In addition, the book should be a good reference for analysts who use experimental results and need to interpret such results. To satisfy these rather different purposes, for some of the topics in the book I have included more detail than would be necessary for an introductory text. To facilitate its use as a handbook, I have also included a short summary at the end of each chapter where some of the key points of the chapter are repeated.

This book contains background theory explaining the majority of analysis methods used in modern commercial software for noise and vibration measurement and analysis. It also includes a number of tools which are usually not found in commercial systems, but which are still useful for the practitioner. With modern computer-based software, it is easy to export data to, e.g., MATLAB/Octave (see below), and apply the techniques there. Since it is an introductory text, most of the content of this book is of course available in more specialized textbooks and scientific papers. A few parts, however, include some improvements of existing techniques. I will mention these points in the descriptions of the appropriate chapters below.

Signal analysis is traditionally a field within electrical engineering, whereas most engineers and students pursuing noise and vibration measurements are mechanical or civil engineers. The aim has therefore been to make the material accessible particularly to students and engineers of these latter disciplines. For this reason, I have included introductions to the Laplace and Fourier transforms – both essential tools for understanding, analyzing, and solving problems in dynamics. Electrical engineering students and practitioners should still find many of the topics in the book interesting.

Signal analysis is a subject which is best learned by practicing the theories (perhaps that is a universal truth for all areas?). I have therefore incorporated numerous examples using MATLAB or GNU Octave throughout the book. Further examples and an accompanying toolbox which can be used with either MATLAB or GNU Octave can be downloaded from my website. More information about this is located in Section 1.6. I strongly recommend the use of these tools as a complement to reading this book, regardless of whether you are a student, a researcher, or an industry practitioner.

Chapter 2 introduces dynamic signals and systems with the aim of being an introduction particularly for mechanical and civil engineering students. In this chapter, the classification of signals into periodic, random, and transient signals is introduced. The chapter also includes linear system theory and a comprehensive introduction to the Laplace and Fourier transforms, both important tools for understanding and analyzing dynamic systems.

In Chapter 3, some fundamental concepts of sampled signals are presented. Starting with the sampling theorem and continuing with digital filter theory, this chapter presents some important applications of digital filters for fractional octave analysis and for integrating and differentiating measured signals.

Chapter 4 introduces some applied statistics and random process theory from a practical perspective. It includes an introduction to hypothesis testing, as this tool is sometimes used for testing stationarity of data. This chapter also gives an introduction to the application of statistics for data quality assessment, which is becoming more important with the large amounts of data collected in many applications of noise and vibration analysis.

Chapters 5 and 6 provide an introduction to the theory of mechanical vibrations. I anticipate that the contents of these two chapters will already be known to many readers, but I have found it important to include them because my presentation focuses on the experimental implications of the theory, unlike the presentation in most mechanical vibration textbooks, and because some later chapters in the book need a foundation with a common nomenclature.

In Chapter 7, the most important transducers used for measurements of noise and vibration signals are presented, specifically the accelerometer, the force sensor, and the microphone. Because piezoelectric sensors with built-in signal conditioning (the so-called IEPE sensors) are widely used today, this technology is presented in some depth. In this chapter, I also present some personal ideas on how to become a good experimentalist.

The analysis techniques mostly used in this field are based on the Discrete Fourier Transform (DFT), computed by the FFT. Spectrum analysis is therefore an important part of this book, and Chapters 8–10 are devoted to this topic. Chapter 8 introduces basic frequency analysis theory by presenting the different signal classes, and the different spectra used to describe the frequency content of these signals. In Chapter 9, the DFT and some other techniques used to experimentally determine the frequency content of signals are presented. The properties of the DFT, which are very important to understand when interpreting experimental frequency spectra, are presented relatively comprehensively. Chapter 10 includes a comprehensive presentation of how spectra of periodic, random, and transient signals, and mixes of these signal classes, should be estimated in practice. Chapter 10 also includes a comprehensive explanation of Welch's method for PSD estimation, including overlap processing, as this is the method used in virtually all commercial software. The treatment of practical spectral analysis in this chapter should also be of use to engineers outside the field of acoustics and vibrations who want to calculate and/or interpret spectra by using the FFT.

In Chapter 11, the design of modern data acquisition and measurement systems is described from a user perspective. In this chapter, both hardware and software issues are covered.

Chapter 12 addresses order tracking, which is a common technique for analysis of rotating machinery equipment. The chapter describes the most common techniques used to measure such signals, both with fixed sampling frequency and with synchronous sampling.

Frequency response functions are important measurement functions in experimental noise and vibration analysis and are used, for example, in EMA. Chapter 13 therefore covers techniques for measuring frequency responses for single-input/single-output (SISO) systems. Both impact excitation and shaker excitation techniques are presented in detail. In Chapter 14, the techniques are extended to multiple-input/multiple-output (MIMO) systems.

Chapter 15 presents some relatively advanced techniques used for multichannel analysis, namely principal components and virtual signals. These techniques are commonly used for noise path analysis and noise source identification in many of the sophisticated software packages available commercially. I present these concepts in some depth, since they are not readily available in other textbooks.

Chapter 16 is a new chapter introducing EMA in terms of how to perform such tests. The chapter also includes a thorough explanation of the mathematical background of modal parameter estimation (MPE) with the Unified Matrix Polynomial Approach (UMPA) framework, and presents most algorithms currently used in commercial systems for EMA, and also for OMA, since the MPE is essentially the same for these two techniques, with some small differences.

Chapter 17 is a new chapter that presents OMA. Since most MPE algorithms are common for EMA and OMA, only those methods that are unique to OMA are presented in this chapter. The chapter also presents the fundamental basis for OMA: the decomposition of correlation functions and spectral densities into modal parameters.

In Chapter 18, which is essentially what constituted Chapter 16 in the first edition of the book, I have collected a number of more advanced techniques that engineers in this field should be acquainted with. This chapter presents, in order, the shock response spectrum, the Hilbert transform with applications, the cepstrum and envelope spectrum, and how to produce Gaussian time signals with known spectral density. The chapter has also been extended with two recently developed methods for removing harmonics, or separating signals into random and harmonic parts: cepstral editing and the frequency domain editing method.

In the Appendices, I have included some fundamentals on complex numbers, logarithmic diagrams and the decibel unit, matrix theory, eigenvalues, and the singular value decomposition. The reader who does not feel confident with some of these concepts will hopefully find enough theory in these appendices to follow the text in this book. The last appendix contains some references to good sources for more information within the noise and vibration community. I hope the newcomer to this field can benefit from this list.

xxv

Acknowledgments In the second edition of this book, I have added examples from my teaching and research performed at University of Southern Denmark (SDU). I am very grateful for the support of SDU during 12 years, as I am also grateful to my current affiliation, Aarhus University, Denmark. And to all my many colleagues who have shown so much friendliness and support – thanks! During my time at SDU, I had the honor of receiving financial support from the EU Fund for Regional Development, through the Interreg 4A and Interreg 5A programs, for which I owe many of the results presented in this book, including the data of the RO-LO ship used in Chapter 19. I am also grateful to Innovation Fund Denmark for several industrial PhD and PostDoc projects, and to Siemens Gamesa, Vattenfall, Dinex A/S, Lindø Offshore Renewables Center, and the German Aerospace Center, DLR, for their support. Furthermore, I am grateful for support from the Danish Maritime Fund and to Linnaeus University in Sweden for a period as guest professor. I have also had the fantastic opportunity to learn from my PhD students (some as assistant supervisor), Åsa Bolmsvik, Kirsi Jarnerö, Till Köder, Esben Orlowitz, Michael Styrk Andersen, Michael Krentzel, Silas Sverre Christensen, Karsten Krautwald Vesterholm, ´ Jonas Gad Kjeld, Jesper Berntsen, and Martin Suhr. I have learnt so many Goran Jeliˇcic, things from you! And I am grateful to my master’s students, especially to Hilde Knutby Hustad who acquired the excellent data of the Little Belt Bridge used in Chapter 19. But also to Paw, Josip, Edwin, Casper, Jackie, Armin, Guðmundur, Martin, Mads, Mads (yes, two), Morten, Steen, Søren, Jacob, Benny, Rasmus, Lasse, Mathias, Jeppe, Nimai, Jesper, Taus, Heiðar, Jakob, and Nicolaj. I am also grateful for the help to review parts of the manuscript to the second edition. Thanks to Stefano Manzoni, Goran, POS! The errors remaining in the book are my own. The first edition of this book was inspired partly by class notes I wrote for two classes at Blekinge Institute of Technology, BTH. I am especially grateful to Professor Ingvar Claesson and the Department of Signal Processing at BTH for supporting me in writing these early texts. Also, Timothy Samuels did a great job translating an early manuscript from Swedish to English. My most sincere appreciation goes to Professor Kjell Ahlin, my colleague and friend for many years. Our many long discussions have strongly contributed to my understanding of this subject and I am grateful for the data provided by Professor Ahlin for examples in Chapters 16 and 19.


Dr Per-Olof Sturesson and the noise and vibration group at SAAB Automobile AB were invaluable resources of feedback and have provided data for Chapters 12 and 15. For this, and many ideas and discussions, I am very grateful. Special thanks also goes to Mats Berggren. Also, thanks to Volvo Car Corporation and William Easterling for providing data. My thanks extend to Professor Jiri Tuma for supporting me with data for Chapter 12 and for kind support through times. Svend Gade and Brüel and Kjær A/S are acknowledged, along with Niels Thrane, for allowing me to reuse an illustration and an overview description of the Discrete Fourier Transform from an old B&K Technical Review, which I find is of great value for presenting the DFT. I also wish to thank Flensburg Shipyard in Germany for allowing us to collect data on the RO-LO ship used in Chapter 17. I have always found the many participants at the International Modal Analysis Conference (IMAC), organized by Society for Experimental Mechanics (SEM), an invaluable source of inspiration and knowledge. Special thanks to Tom Proulx (in memoriam), Al Wicks, Dave Brown, Randall Allemang, Pete Avitabile, and Dan Rixen for their outstanding support and encouragement and continuous willingness to give from their wealth of knowledge. I am also grateful for the many good discussions over the years with Bob Randall. This book would not be what it is without the professional staff at Wiley, who have been of great help throughout the work. My thanks extend particularly to Debbie Cox and Nicky Skinner who were both of great help for the first edition. And to Sarah Lemore for help with the second edition. Particularly I also wish to thank Dr. Julius S. Bendat (in memoriam), Professor Rune Brincker, Knut Bertelsen (in memoriam), and Professor Bo Håkansson for their willingness to always share their knowledge and for inspiring me, to Claus Vaarning and Soma Tayamon for reading parts of the manuscript and offering many good comments for the first edition, and to all the professional people I have had the opportunity of learning from during my career. Finally I am, of course, thankful to a great number of people who have inspired and supported me, and to all those I have forgotten here – sorry!


List of Abbreviations

2DOF    two degrees-of-freedom system
AC      alternating current
ADC     analog-to-digital converter
AFDE    automatic frequency domain editing
BT      bandwidth-time (product)
CMIF    complex mode indicator (indication) function
CSD     cross-spectral density function
DAC     digital-to-analog converter
DC      direct current
DFT     discrete Fourier transform
DOF     degree-of-freedom (point and direction)
EMA     experimental modal analysis
ESD     energy spectral density
FDD     frequency domain direct identification
FDE     frequency domain editing
FDPI    frequency domain direct parameter identification
FE      finite element
FEM     finite element method
FFT     fast Fourier transform
FIR     finite impulse response (filter)
FRF     frequency response function
HF      high frequency
HP      highpass
IDFT    inverse discrete Fourier transform
IEPE    integrated electronics piezoelectric (sensor)
IFFT    inverse fast Fourier transform
IIR     infinite impulse response (filter)
IRF     impulse response function
ITD     Ibrahim time domain
LF      low frequency
ISO     international standardization organization
LSCE    least squares complex exponential
LSCF    least squares complex frequency domain


LSFD    least squares frequency domain
MAC     modal assurance criterion
MEMS    microelectro-mechanical systems (sensors)
MDOF    multiple degrees-of-freedom
MIF     mode indicator function
MITD    multiple-reference Ibrahim time domain
MIMO    multiple-input/multiple-output
MISO    multiple-input/single-output
MMITD   modified multiple-reference Ibrahim time domain
MPE     modal parameter estimation
MPSS    multiphase stepped sine
MrMIF   modified real mode indicator function
MvMIF   multivariate mode indicator function
NSI     noise source identification
NSR     noise-to-signal ratio
ODS     operating deflection shape
OMA     operational modal analysis
PDF     probability density function
PRD     periodogram ratio detection
PSD     power spectral density
PTD     polyreference time domain
RMS     root mean square
RPM     revolutions per minute
SDOF    single degree-of-freedom
SIMO    single-input/multiple-output
SISO    single-input/single-output
SNR     signal-to-noise ratio
SRS     shock response spectrum
SVD     singular value decomposition
TEDS    transducer electronic data sheet


Annotation

⟨x⟩        average of x
ℱ[ ]       Fourier transform of [ ]
ℋ[ ]       Hilbert transform of [ ]
ℒ[ ]       Laplace transform of [ ]
E[ ]       expected value
a, a(t)    vibration acceleration
Apqr       residue of mode r, between points p and q
Axx        autopower spectrum of x
B          bandwidth in [Hz]
Be         equivalent (statistical) bandwidth in Hz
Ben        normalized equivalent bandwidth (dimensionless)
Br         resonance bandwidth in Hz
cp         power cepstrum
cr         modal (viscous) damping of mode r
𝛿(t)       Dirac's unit impulse
Δf         frequency increment of discrete Fourier transform
Δt         time increment in [s]
𝜀          normalized error
f          frequency in [Hz]
fn, fr     undamped natural frequency
g²(f)      virtual coherence function
𝛾²yx       coherence function between x (input) and y (output)
𝛾²y:x      multiple coherence of y (output) with all xq (inputs)
Gxx(f)     single-sided autospectral density of x
G′xx       principal component
[Gxx]      single-sided input cross-spectral matrix
Gyx(f)     single-sided cross-spectral density between x (input) and y (output)
[Gyx]      single-sided input/output cross-spectral matrix
h(n)       discrete impulse response
h(t)       analog impulse response
H(f)       analog frequency response function
H(k)       discrete frequency response function
H(s)       transfer function
Im[ ]      imaginary part of [ ]


j          imaginary number, √−1
k          discrete (dimensionless) frequency variable
kr         modal stiffness of mode r
Kx         kurtosis of x
𝜆          eigenvalue (theoretical)
𝜇x         mean value of x
mr         modal mass of mode r
Mn         Nth statistical (central) moment
n          discrete (dimensionless) time variable
NL         long dimension (of, e.g., FRF matrix, number of responses)
NS         short dimension (of, e.g., FRF matrix, number of inputs)
𝜙          phase; general random variable
px(x)      probability density of x
P(x)       probability distribution of x
{𝜓}r       mode shape vector of mode r
[Ψ]r       mode shape matrix of mode r
Q          quality factor (Q-factor)
Qr         modal scale constant of mode r
Rxx(𝜏)     autocorrelation of x
Ryx(𝜏)     cross-correlation between x (input) and y (output)
Re[ ]      real part of [ ]
s          Laplace operator (in [rad/s])
sr         pole, root to characteristic polynomial
𝜎x         standard deviation of x
Sx         skewness of x
Sxx(f)     double-sided autospectral density of x
Syx(f)     double-sided cross-spectral density between x (input) and y (output)
[Gyx]      single-sided input/output cross-spectral matrix
t          analog time
T          measurement time
𝜏          time delay, time lag variable for correlation functions
Tx(k)      discrete transient spectrum of x
u, u(t)    vibration displacement
v, v(t)    vibration velocity
w(n)       discrete time window
x(n)       discrete/sampled (input) signal
x(t)       analog (input) signal
x̃(t)       Hilbert transform of x(t)
X(f)       (continuous) Fourier transform of x(t)
X′         spectrum of virtual signal
X(k)       discrete Fourier transform of x(n)
XL(k)      linear (RMS) spectrum of x(n)
y(n)       discrete/sampled (output) signal
y(t)       analog (output) signal
𝜔          angular frequency in [radians/s]
𝜁r         relative (viscous) damping


1 Introduction This chapter provides a short introduction to the field of noise and vibration analysis. Its main objective is to show new students in this field the wide range of applications and engineering fields where noise and vibration issues are of interest. If you are a researcher or an engineer who wants to use this book as a reference source, you may want to skim this chapter. If you decide to do so, I would recommend you to still read Section 1.6, in which I present some personal ideas on how to use this book, as well as on how to go about becoming a good experimentalist – the ultimate goal after reading this book. In this section, I also present the free MATLAB toolbox, ABRAVIBE, which is an accompanying toolbox for this book, which contains functionality to try out all theory in this book, and which may also be used for vibration analysis in real applications. I want to show you not only the width of disciplines where noise and vibrations are found. I also want to show you that noise and vibration analysis, the particular topic of this book, is truly a fascinating and challenging discipline. One of the reasons I personally find noise and vibration analysis so fascinating is the interdisciplinary character of this field. Because of this, becoming an expert in this area is indeed a real challenge, regardless of which engineering field you come from. If you are a student just entering this field, I can only congratulate you for selecting (which I hope you do!) this field as yours for a lifetime. You will find that you will never cease learning, and that every day offers new challenges.

1.1 Noise and Vibration

Noise and vibration are constantly present in our high-tech society. Noise causes serious problems both at home and in the workplace, and the task of reducing community noise is a subject currently focused on by authorities in many countries. Similarly, manufacturers of mechanical products with vibrations causing acoustic noise increasingly find themselves forced to compete on the noise levels of their products. Such competition has so far occurred predominantly in the automotive industry, where the issues with sound and noise have long attracted attention, but, at least in Europe, domestic appliances, for example, are increasingly marketed stressing low noise levels. Let us list some examples of reasons why vibration is of interest:

● Vibration can cause injuries and disease in humans, with "white fingers" due to long-term exposure to vibration and back injuries due to severe shocks, as examples.
● Vibration can cause discomfort, such as sickness feelings in high-rise buildings during storms, or in trains or other vehicles, if vibration control is not successful.
● Vibration can cause mechanical fatigue, i.e., products break after being submitted to vibrations for a long (or sometimes not so long) time.
● Vibration can cause dysfunction in both humans and things we manufacture, such as bad vision if the eye is subjected to vibration, or a radar on a ship performing poorly due to vibration of the radar antenna.
● Vibration can be used for cleaning, etc.
● Vibration can cause noise, i.e., unpleasant sound, which causes annoyance as well as disease and discomfort.

To follow up on the last point in the list above, once noise is created by vibrations, noise is of interest, e.g., for the following reasons:

● Excessive noise can cause hearing impairment.
● Noise can cause discomfort.
● Noise can (probably) cause disease, such as increased risk of cardiac disease and stress.
● Noise can be used for burglar alarms and in weapons (by disabling human ability to concentrate or to cope with the situation).

The lists above are examples meant to show that vibrations and noise are indeed interesting for a wide variety of reasons, not only to protect ourselves and our products, but also because vibration can cause good things. Besides simply reducing sound levels, much work is currently being carried out within many application areas concerning the concept of sound quality. This concept involves making a psychoacoustic judgment of how a particular sound is experienced by a human being. Harley-Davidson is an often-cited example of a company that considers the sound from its product so important that it tried to protect that sound by trademark, although the application was eventually withdrawn. Besides generating noise, vibrations can cause mechanical fatigue. Now and then we read in the newspaper that a car manufacturer is forced to recall thousands of cars in order to exchange a component. In those cases, it is sometimes mechanical fatigue that has occurred, resulting in cracks initiating after the car has been driven a long distance. When these cracks grow they can cause component breakdown and, as a consequence, accidents.

1.2 Noise and Vibration Analysis

This book is about some of the most common analysis methods for analyzing noise and vibrations, rather than the mechanisms causing them. In order to identify the sources of vibrations and noise, extensive analysis of measured signals from different tests is often necessary, using the methods described in this book. The measurement techniques used to carry out such analyses are well developed, and, in universities as well as in industry, advanced equipment is often used to investigate noise and vibration signals in laboratory and field environments. It may be that the word "noise" in this context can be misunderstood, since it stands for or is close to "sound" or "acoustics" to many people in that field. So, to clarify and set the expectations right, it should be mentioned that the focus in this book is on vibrations, and that "noise" is included in the name of the book in the sense that it is produced by vibrations. This book covers many of the analysis techniques used to analyze and understand noise, but it does not cover all methods used in acoustics. See also Section 1.4 for some complementing comments on this.

The area of experimental noise and vibration analysis is an intriguing field, as I hope this book will reveal. It is so partly because this field is multidisciplinary, and partly because dynamics (including vibrations) is a complicated field where the most surprising things can happen. Using measurement and analysis equipment often requires a good understanding of mechanics, sensor technology, electronic measurement techniques, and signal analysis. Vibrations and noise are found in many disciplines in the academic arena. Perhaps we first think of mechanics, with engines, vehicles, and pumps, etc. However, vibrations are also found in civil engineering, in bridges, buildings, etc. Many of the measurement instruments and sensors we use in the field of analyzing vibrations and noise are, of course, electrical, and so the field of electrical engineering is heavily involved. This makes the initial study of noise and vibration analysis difficult, perhaps, because you are forced to get into some of the other fields of academia. Hopefully, this book can help bridge some of the gaps between disciplines. If many academic disciplines are involved with noise and vibrations, the variety in industry is perhaps even more overwhelming. Noise and vibration are important in, for example, military, automotive, and aerospace industries, in power plants, home appliances, industrial production, hand-held tools, robotics, the medical field, electronics production, bridges and roads, etc.

1.3 Application Areas

As evident from the first sections of this chapter, noise and vibration are important for many reasons, and in many different disciplines. Within the field of noise and vibration, there are also many different, more specialized, disciplines. We need to describe some of these a little more. Structural dynamics is a field which describes phenomena such as resonance in structures, how connecting structures together affect the resonances, etc. Often, vibration problems occur because, as you probably already know, resonances amplify vibrations – sometimes to very high levels. Environmental engineering is a field in which environmental effects (not to be confused with the “green environment”) from such diverse phenomena as heat, corrosion, and vibration are studied. As far as vibrations are concerned, vibration testing is a large industrial discipline within environmental engineering. This field is concerned with a particular product’s ability to sustain the vibration environment it will encounter during its lifetime. Sensitive products such as mobile phones and other electronic products are usually tested in a laboratory to ensure they can sustain the vibrations they will be exposed to during their lifetime. Producing standardized tests, which are equivalent to the product’s real-life vibration environment, is often a great challenge. Transportation testing of packaging is a closely related field, in which the interest is that, for example, the new video camera you buy arrives in one piece when you unpack the box, even if the ship that delivered it encountered a storm at sea.


Fatigue analysis is a field closely related to environmental engineering. However, the discipline of fatigue analysis is usually more involved with measuring the stresses on a product and, through mathematical models, such as Wöhler curves, trying to predict the lifetime of the product, e.g., before fatigue cracks will appear. From the perspective of experiments, this practically means it is more common to measure with strain gauges rather than accelerometers. Vibration monitoring is another field, where the aim is to try to predict when machines and pumps, for example, will fail, by studying (among many things) the vibration levels during their lifetime. In civil engineering, a somewhat related field, structural health monitoring attempts to assess the health of buildings and bridges after earthquakes as well as after aging and other deteriorating effects on the structure, based on measurements of (among many things) vibrations in the structures. Acoustics is a discipline close to noise and vibration analysis, of course, as the cause of acoustic noise is often vibrations (but sometimes not, such as, for example, when turbulent air is causing the noise).

1.4 Analysis of Noise and Vibrations

There are several ways of analyzing noise and vibrations. We shall start with a brief discussion of some of the methods, which this book is not aimed at, but which are crucial for the total picture of noise and vibration analysis, and which are often the reason for making experimental measurements. Analytical analysis of vibrations is most commonly done using the finite element method, FEM, through normal mode analysis, etc. In order to successfully model vibrations, usually models with much greater detail (finer grid meshes, correctly selected element types, etc.) need to be used, compared with the models sufficient for static analysis. Also, dynamic analysis using FEM requires good knowledge of boundary conditions, etc. For many of these inputs to the FEM software, experiments can help refine the model. This is a main cause of much experimental analysis of vibrations today. Chapters 16 and 17 in the new edition of this book deal with experimental and operational modal analysis which are commonly used to obtain experimental data to verify FEM models. For acoustic analysis, acoustic FEM can be used as long as the noise (or sound) is contained in a cavity. For radiation problems, the boundary element method, BEM, is increasingly used. With this method, known vibration patterns, for example, from a FEM analysis, can be used to model how the sound radiates and builds up an acoustic field. FEM and BEM are usually restricted to low frequencies, where the mode density is low. For higher frequencies, statistical energy analysis, SEA, can be used. As the name implies, this method deals with the mode density in a statistical manner and is used to compute average effects.

1.4.1 Experimental Analysis

In many cases, it is necessary to measure vibrations or sound pressure, etc., to solve vibration problems, because the complexity of such problems often makes them impossible to foresee through analytical models such as FEM models. This is often referred to as trouble-shooting. Another important reason to measure and analyze vibrations is to provide input data to refine analytical models. Particularly, damping is an entity which is usually impossible to estimate through models – it needs to be assessed by experiment. Experimental analysis of noise and vibrations is usually done by measuring accelerations or sound pressures, although other entities can be measured, as we will see in Chapter 7. In order to analyze vibrations, the most common method is by frequency analysis, which is due to the nature of linear systems, as we will discuss in Chapter 2. Frequency analysis is a part of the discipline of signal analysis, which also incorporates filtering signals, etc. The main tool for frequency analysis is the fast Fourier transform (FFT) which is today readily available through software such as MATLAB and Octave, and increasingly, Python (see Section 1.6), or by the many dedicated commercial systems for noise and vibration analysis. Methods using the FFT will take up the main part of this book.

Some of the analyses necessary to solve many noise and vibration problems need to be done in the time domain. Examples of such analyses are fatigue analysis, which incorporates, e.g., cycle counting, and data quality analysis, to assess the quality of measured signals. For a long time, the tools for noise and vibration analysis were focused on frequency analysis, partly due to the limited computer performance and cost of memory. Today, however, sophisticated time domain analysis can be performed at a low cost, and we will present many such techniques throughout the book.

1.5 Standards

Due to the complexity of many noise and vibration measurements, international standards form an important part of vibration measurements as well as of acoustics and noise measurements. Acoustics and vibration standards are published by the main standardization organizations, the International Standardization Organization (ISO), the International Electrotechnical Commission (IEC), and, in the United States, the American National Standards Institute (ANSI). The general recommendation from many acoustics and vibration experts is that, if there is a standard for your particular application – use it. It is outside the scope of this book, and practically impossible, to summarize all the standards available. Some of the many standards for signal analysis methods used in vibration analysis are, however, cited in this book.

1.6 Becoming a Noise and Vibration Analysis Expert

The main emphasis in this book is on the signal analysis methods and procedures used to solve noise and vibration problems. To be successful in this, it is necessary to become a good experimentalist. Unfortunately, this is not something which can be (at least solely) learned from a book, but I want to make some recommendations on how to enter a road which leads in the right direction.


1.6.1 The Virtue of Simulation

As many of the theories of dynamics, as well as those of signal analysis, are very complex, a vital tool for understanding dynamic systems and analysis procedures is to simulate simplified, isolated, cases, where the outcome can be understood without the complicating presence of disturbance noise, complexity of structures, nonideal sensors, etc. I have therefore incorporated numerous examples in this book which use simulated measurement data with known properties. Practical methods to create such signals for mechanical systems with known properties are presented in Chapter 19. The importance of this cannot be overrated. Before making a measurement of noise or vibrations, it is crucial to know what a correct measurement signal should look like, for example. The hidden pitfalls in, particularly, vibration measurements are overwhelming for the beginner (and sometimes even for more experienced engineers). The road to successful vibration measurements therefore goes through careful, thought-through simulations.

Another important aspect of good experiments is to make constant checks of the equipment. In Section 7.14, I present some ideas of things to check for in vibration measurements. In Section 7.8, I also present a by no means new technique, but nevertheless a simple and efficient one (mass calibration, if you already know it) to verify that accelerometers are working correctly. These devices are, like many sensors, sensitive and can easily break, and unfortunately, they often break in such a way that it can be hard to discover without a proper procedure to verify the sensors on a known signal. Single-frequency calibration, which is common for absolute calibration of accelerometers, usually completely fails to discover the faults present after an accelerometer has been dropped on a hard floor.

Having written this, I want to stress that good vibration measurements are performed every day in industry and universities. So the intention is, of course, not to discourage you from this discipline, but simply to stress the importance of taking it slowly, and making sure every part of the experiment is under your control, and not under the control of the errors.

1.6.2 Learning Tools and the Format of this Book

If you anticipated finding a book with numerous data examples from the field by which you would learn how to make the best vibration measurements, you will be disappointed by this book. The main reasons for this are twofold: (i) for the reasons just given in the preceding section, real vibration measurements are usually full of artifacts from disturbance noise, complicated structures, etc.; and (ii) each structure or machine or whatever is measured, has its own vibration profile, which makes "typical examples" very narrow. If you work with cars, or airplanes, or sewing machines, or hydraulic pumps, or whatever, your vibration signals will look rather different from signals from those other products. In the second edition of the book I have, nevertheless, added a chapter (the last one) with results from some of my projects that hopefully can help give you some ideas of how signals may look, and what to look out for in actual measurement situations. Hopefully, this will be helpful even if you are active in a different area than where these examples come from. I have still based most examples in this book on simplified simulations, where the key idea of discussion is easily seen. These examples will, hopefully, provide much deeper insights into the fundamental signal analysis ideas we discuss in each part of the book. They are also easily repeated on your own computer, which leads us to the next important point.

I believe that signal analysis (like, perhaps, most subjects) is far too mathematically complicated to understand through reading about it. Instead, I believe strongly in simulation and application of the theories by your own hands. I have therefore throughout the book given numerous examples using the best tool I know of – MATLAB. This software is, in my opinion, the best available tool for signal analysis and therefore also for the vibration analysis methods we are concerned with in this book. If you do not already know MATLAB, you will soon learn by working through the examples. The drawback of MATLAB may be that it is commercial software, and therefore, costs money. If you find this to be an obstacle you cannot overcome, you can instead use GNU Octave, which is free software published under the GNU General Public License (GPL) and can be freely downloaded from http://www.gnu.org/software/octave/. Octave is to a large extent compatible with MATLAB in the sense that MATLAB code, with some minor tweaks, can run under Octave. I have attempted to make the examples in this book run under both MATLAB and Octave, although there are several things in ABRAVIBE that do not work in Octave, especially the applications where I have used MATLAB's graphical user interface. Also, I do not have the time to double check all examples with Octave, so there may be some things that do not work on this platform.

In addition to the examples in this book, there is a free accompanying toolbox, ABRAVIBE, for MATLAB/Octave made available on my website www.abravibe.com. This toolbox is a very comprehensive tool designed to aid your learning. It includes tools for spectrum analysis, modal analysis, rotating machinery analysis, etc. More information about this toolbox and examples for instructors can be found at my website.


2 Dynamic Signals and Systems

Vibration analysis and, indeed, the field of mechanical dynamics in general deals with dynamic events, i.e., for example forces and displacements which are functions of time. This chapter aims to introduce many of the concepts typical for dynamic systems, particularly for mechanical and civil engineering students who may have little theory at their disposal for understanding this subject. We will start with some rather simple signals, and later in this chapter introduce some important concepts and fundamental properties of dynamic signals and systems. This chapter also covers basic introductions to the Laplace and Fourier transforms – two very important mathematical tools to describe and understand dynamic signals and systems. This chapter deals with continuous signals, as most of our understanding of engineering principles is based on the theory of continuous signals and differential calculus. In Chapter 3, we will introduce experimental signals, i.e., sampled signals as we find them in measurement systems. Before that, however, we need to have a general understanding of what characterizes dynamic signals and systems.

2.1 Introduction

In this book, we will call any physical entity that changes over time a signal, regardless of whether it is a measured signal, or an analytical (mathematical) "signal." Some examples of signals are thus

● the force acting on a car suspension (in a particular direction) as we drive the car on a road, or
● the sound pressure at the ear of an operator of some machine, or
● the displacement of a point (in a particular direction) on a vibrating handle on a handheld machine such as a pneumatic drilling machine.

The analysis of (dynamic) signals is often called signal analysis or sometimes signal processing. I make the distinction, along with some, but not all, authors, that signal analysis is the process of extracting and interpreting useful information in a signal, for example by a frequency spectrum or some statistical numbers. By signal processing, on the other hand, I mean the actual process (usually a mathematical algorithm or similar) used in processing a signal from one form to another form. With this distinction, signal analysis will often include some signal processing, but not the other way around. This book deals with the signal analysis procedures used to understand signals that describe mechanical vibrations and acoustic noise, and many of the methods we use throughout the book will include signal processing procedures. There are many excellent books that include a more in-depth coverage of the topics discussed in this chapter, for example, Oppenheim et al. (1999) and Proakis and Manolakis (2006) for general signal analysis, and Haykin (2003) for systems analysis.

A dynamic system is a physical entity that has one or more outputs (responses), caused by one or more inputs, and where both input(s) and output(s) are dynamic, i.e., they change over time. In this book, the most common system will be a mechanical system, or sometimes a vibro-acoustic system. The former includes inputs in the form of forces and torques, and outputs in the form of some time derivative of motion, i.e., displacement, velocity, or acceleration. The latter is a combined system where the outputs, in addition to motion responses, can be acoustic (sound) pressure or some other acoustical entity. In a sense, a system can be thought of as a "black box," with the inputs and outputs and the relationships that relate the outputs to the inputs. The simplest system we will use is the mechanical single-degree-of-freedom, SDOF, system we will introduce in Chapter 5.

In terms of the frequency content of signals, we often separate signals into three different signal classes, namely

● periodic signals: signals which repeat themselves with a period, Tp,
● random signals (stochastic processes): signals which at each time instant are independent of values at other instants, and
● transient signals: signals which have limited length; usually they die out after a certain time.

Determining to which of these classes a particular signal belongs is often called signal classification, a field particularly important when damaging effects of vibrations are of interest, such as in fatigue analysis and in environmental testing. We will describe some important fundamental properties of each of these classes in this chapter.

Another way of classifying signals is into stationary and nonstationary signals, see Chapter 4, or into deterministic versus nondeterministic signals. A deterministic signal is a signal for which there is a closed-form expression, so that from a part of the signal, the entire signal for all times, past and future, can be expressed mathematically. Periodic signals and most transient signals belong to the class of deterministic signals, whereas random signals (noise) belong to the other class, the nondeterministic signals, which cannot be described in the past or future based on a shorter observation, as their values are random at each instant in time. In practice, of course, we often encounter signals which are mixed combinations of the "pure" signal classes described here, for example, periodic signals with background noise. The interpretation of such signals can sometimes be difficult and will be discussed with respect to frequency analysis in Chapter 10. As we will see in later chapters, random and transient signals have continuous spectral content, as opposed to periodic signals which have discrete spectra (with only some frequencies present). Because of this fundamental difference, we will introduce different types of spectral scaling in Chapter 8 for describing the different types of signals.

2.2 Periodic Signals

Periodic vibrations occur whenever we have repeating phenomena such as a reciprocating engine running at constant revolutions per minute (RPM) or a rotating device such as a turbine, for example. The simplest periodic signal is the sine wave which we start the discussion with.

2.2.1 Sine Waves

One of the most fundamental dynamical signals is the sinusoid, or sine wave, which has some very interesting properties that we will discuss in this and subsequent sections. A sine signal is defined by three parameters: the amplitude, A, the angular frequency, 𝜔, and the phase angle, 𝜙. With these parameters defined, the time-dependent sine is defined by

x(t) = A sin(𝜔t + 𝜙).    (2.1)

The amplitude, A, defines the maximum of the sine, since −1 ≤ sin(𝜔t + 𝜙) ≤ 1 for all angles 𝜔t + 𝜙. The angular frequency in [rad/s] is often replaced by the (cyclic) frequency in [Hz], f, defined by the relationship 𝜔 = 2𝜋f. The phase, 𝜙, of the sine, finally, defines a shift along the time axis and can be calculated from the function value at time zero, i.e., x(0) = A sin(𝜙). A sine with amplitude A = 5, frequency f = 10 [Hz] and phase angle 𝜙 = 𝜋∕4 radians, is plotted in Figure 2.1. The period, Tp, of the sine (or of any periodic signal) is the time for one complete cycle, which for the sine is related to the frequency by

Tp = 1∕f.    (2.2)

Figure 2.1 Sine wave with amplitude A = 5, frequency f = 10 Hz, and phase 𝜙 = 𝜋∕4 radians.
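As a complement to Figure 2.1, the sine can be generated and plotted with a few lines of MATLAB/Octave code. The sketch below is only an illustration (it is not taken from the book or the ABRAVIBE toolbox), using a dense time axis simply to evaluate the continuous-time function:

% Generate and plot the sine in Figure 2.1
A   = 5;             % amplitude
f   = 10;            % frequency in Hz
phi = pi/4;          % phase in radians
t   = 0:0.001:1;     % time axis, 1 s with 1 ms spacing
x   = A*sin(2*pi*f*t + phi);
plot(t, x)
xlabel('Time (s)')
ylabel('x(t)')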


The cosine is similar to the sine, and in this text, we will often refer to both the sine and cosine as "sines." The relationship between the sine and cosine is

cos(𝜙) = sin(𝜙 + 𝜋∕2),    (2.3)

i.e., the cosine leads the sine by 90°, or 𝜋∕2 radians. There are many reasons why sines are important in vibration analysis. The most fundamental reason is perhaps that a sine represents a single frequency, and as we will see in Section 2.6.1, for linear systems, sinusoidal inputs result in sinusoidal outputs. This is often referred to as harmonic response. Another important reason for using sines is that through the theory of Fourier series, we know that all periodic signals are composed of a sum of sines, see Section 8.1. A third reason why sines and cosines are important is that they are orthogonal, see Section 2.7.1, and that they are used as the so-called basis functions in the Discrete Fourier transform, see Chapter 9.

2.2.2 Complex Sines

A common approach when dealing with periodic signals is to use complex sines. It is essential to understand how this is used, and we will therefore discuss complex sines in some depth. If you are not familiar with complex numbers, Appendix A gives an overview. Assume first that we have a real, time-dependent signal, y(t) = A cos(𝜔t + 𝜙). A corresponding complex sine, ỹ(t), is now defined as follows:

ỹ(t) = A e^(j(𝜔t+𝜙)) = C e^(j𝜔t)    (2.4)
     = C [cos(𝜔t) + j sin(𝜔t)],    (2.5)

where

C = A e^(j𝜙).    (2.6)

Using this notation, our actual (original) signal is

y = Re[ỹ].    (2.7)

By introducing the complex signal, ỹ(t), we are now able to easily change both the amplitude and phase of our signal, for example, by passing the complex sine through a frequency response (see Section 2.7.2), i.e., some physical process that affects the amplitude and phase. The resulting, true signal is then obtained by taking the real part of the complex signal, which follows from the orthogonality between the real and imaginary parts. We achieve the same result as if we had calculated the result using trigonometric functions for addition and multiplication, but in a usually much easier way. In some applications, the imaginary part of the complex signal also has interpretations, which we shall not discuss here, but in general, it can be said that the imaginary part is simply "following along" as a "complement" in the calculations.

Example 2.2.1 As an example of using a complex sine, assume that we have a sinusoidal force with amplitude 30 N and frequency 100 Hz. The force acts on an SDOF system with a resonance f0 = 100 Hz, where the frequency response between force input and acceleration output is 0.1∠90° [(m/s²)/N]. We let the phase of our force be the reference, that is, 0°. What is the resulting acceleration?

Note: This example is by necessity a little premature, as we will not present frequency responses until later in this chapter, see Section 2.7.2. However, at the moment, it is sufficient to know that the output of a linear system, at each frequency, is the product of the input (force in our example) and the frequency response at that frequency. The frequency response is a frequency-dependent function which at each frequency is a complex number describing amplitude gain factor and phase effect if described in polar form, so the example illustrates how the complex sine formulation simplifies the calculation when we multiply two complex values.

Our force signal, F(t), can be written in complex form as follows:

F(t) = C e^(j2𝜋f0 t),    (2.8)

where C = 30∠0° [N] and f0 = 100 [Hz]. Furthermore, the frequency response at 100 Hz is

H(100) = 0.1∠90°.    (2.9)

We thus obtain that the resulting acceleration is

A = F ⋅ H = 30 ⋅ 0.1∠(0 + 90°) = 3∠90° m/s²,    (2.10)

or if we write the actual, real acceleration, that is, the real part of Equation (2.10), then

a(t) = 3 cos(2𝜋 ⋅ 100t + 𝜋∕2) m/s².    (2.11)

End of example.
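The result of Example 2.2.1 can also be checked numerically with complex arithmetic in MATLAB/Octave. The short sketch below is only illustrative; the variable names are mine, not the book's:

% Force and frequency response as complex numbers in polar form
F = 30*exp(1j*0);          % 30 N at 0 degrees
H = 0.1*exp(1j*pi/2);      % 0.1 (m/s^2)/N at 90 degrees
A = F*H;                   % complex acceleration
[abs(A) angle(A)*180/pi]   % magnitude 3 m/s^2, phase 90 degrees
% The real, physical acceleration at f0 = 100 Hz is the real part of A*exp(j*2*pi*f0*t)
f0 = 100;
t  = 0:1e-4:0.02;
a  = real(A*exp(1j*2*pi*f0*t));   % equals 3*cos(2*pi*100*t + pi/2)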

2.2.3 Interacting Sines

Next, we will study the effects of summing and multiplying sines with different frequencies. When two sines with different frequencies are combined, the result depends on the frequencies and phase angles of the two sines. Assume that we sum two sines with frequencies f1 and f2 Hz. The resulting signal

y(t) = sin(2𝜋f1 t) + sin(2𝜋f2 t)    (2.12)

will be a periodic signal if there is a time Tp for which both sines make integer number of periods. This will be the case if f1 and f2 are both rational numbers, or if they are related so that their ratio is a rational number. An example is illustrated in Figure 2.2, where the result of the sum of a sine with frequency f1 = 10 Hz and a sine with frequency f2 = 20 Hz is plotted. In the figure, the sum is shown for two cases; the two signals in phase (both have a phase of 0 radians), and with the phase of the second sine being 𝜋∕2 relative to the phase of the first sine. In both cases, the period will be Tp = 1∕f1 as the second frequency is exactly twice the first one. As seen in Figure 2.2, the resulting sum of the two sines is a periodic signal. As is evident from the two signals in the plot, the actual shape of the signal depends on the phase between the two sines. Another important observation from the plot is that there is no well-defined amplitude, since the maximum value of each of the signals is different. Amplitude is a useful concept only for single sines, not for signals containing several sines. A special effect of the combination of two sines, beating, occurs when two sines with frequencies relatively near each other are summed, as seen in Figure 2.3. In the figure, the


Figure 2.2 Sum of two sines with frequencies 10 and 20 Hz, respectively. Two cases of phase difference are shown; solid: both signals in phase (phase angles 𝜙 = 0); dashed: phase angle of 20 Hz sine 𝜙2 = 𝜋∕2. The sum signal has a period of T = 0.1 s, which corresponds to one period of the 10 Hz sine and 2 periods of the 20 Hz sine.


Figure 2.3 Sum of 2 sines with beating. The signal in the figure is the sum of a sine with frequency f1 = 100 Hz and another sine with frequency f2 = 90 Hz, both with amplitudes of unity. The sum signal has a periodic beating with a frequency of 10 Hz, corresponding to the difference between the frequencies, f1 − f2 .

sum of a sine with frequency f1 = 100 Hz and a sine with frequency f2 = 90 Hz is plotted. As is seen in the figure, the result shows a “high-frequency” sine with a “slowly” varying amplitude and it can be seen that the amplitude varies with a frequency of 10 Hz (from the period defined between two of the instances where the amplitude is, for example, zero).

From basic trigonometry, we have the formula for the sum of two sines:

sin(u) + sin(v) = 2 sin((u + v)∕2) cos((u − v)∕2),    (2.13)

which shows one of the relationships between the sum of two sines and the multiplication of two sines (or a sine and a cosine to be exact). From this relationship, we see that summing the two sines is equal to multiplying a sine at the mean frequency, (u + v)∕2, by a cosine at half the difference frequency, (u − v)∕2. The beating effect thus occurs either when two sines with close frequencies are summed, or when two sines with largely different frequencies (typically a high frequency and a much lower frequency) are multiplied. The effect of beating is important in noise and vibration applications, not the least because our human hearing is sensitive to amplitude fluctuations. Naturally, two sines with close frequencies can often occur, and they are often causes of unwanted noise effects, particularly from rotating machines, see also Chapter 12.
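To see the beating effect on your own computer, the signal in Figure 2.3 can be recreated with the short MATLAB/Octave sketch below, which also verifies the identity in Equation (2.13) numerically. The code is only an illustration and is not part of the ABRAVIBE toolbox:

% Sum of two unit-amplitude sines at 100 Hz and 90 Hz, beating at 10 Hz
fs = 10000;                  % time spacing fine enough to mimic a continuous signal
t  = 0:1/fs:0.4;
y  = sin(2*pi*100*t) + sin(2*pi*90*t);
% The right-hand side of Equation (2.13): mean frequency 95 Hz, half the difference 5 Hz
y2 = 2*sin(2*pi*95*t).*cos(2*pi*5*t);
plot(t, y, t, y2, '--')
xlabel('Time (s)')
max(abs(y - y2))             % difference at numerical round-off level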

2.2.4 Orthogonality of Sines

The concept of orthogonal signals is very important in signal analysis. For example, we will use the concept of orthogonality between general signals in Chapter 15. The definition of orthogonality between any two signals u(t) and v(t) is that the integral of the product of the two signals is zero, i.e.,

∫_{−∞}^{∞} u(t) ⋅ v(t) dt = 0.    (2.14)

It should be noted that if the integral in Equation (2.14) is fulfilled, then the mean (average value) of the product of the two signals is also zero since the mean equals the integral divided by the time of integration. Often, it is easier to think of the mean of a signal rather than its integral. In this section, we will discuss specifically the concept of orthogonality between sines and cosines, which is essential (among other things) to understand the Fourier transform. For two rational frequencies f1 and f2, the product between two sines and/or cosines gives a new periodic signal as we discussed in Section 2.2.3. If we let the period of the new signal be Tp, then we have the orthogonality relationships:

(1∕Tp) ∫_{0}^{Tp} cos(2𝜋f1 t) ⋅ cos(2𝜋f2 t) dt = 0 for f1 ≠ f2, and = 1∕2 for f1 = f2,    (2.15)

which is also valid if the cosines are replaced by sines, and

(1∕Tp) ∫_{0}^{Tp} cos(2𝜋f1 t) ⋅ sin(2𝜋f2 t) dt = 0, for all f1, f2.    (2.16)

Equation (2.15) states that in order for the average of the product of two cosines (or sines if both cosines in the equation are replaced) to be nonzero, the signals must have the same frequency, whereas Equation (2.16) states that the product of a sine and a
cosine always has zero mean, even if the frequencies of the sine and cosine are the same. There is a limitation to when this is mathematically true, and that is that the frequencies f1 and f2 have to be rational numbers, so that there is a common period Tp over which the two sines/cosines each have an integer number of periods, as otherwise the integral cannot be calculated as stated in the equations. If one or both of the frequencies are not rational numbers, there will not be any period over which one of the integrals in Equations (2.15) and (2.16) will be exactly zero. However, the product inside the integral will still be a signal with an “apparent” zero mean so from a practical standpoint the effect is a “roundoff error,” see Problem 2.3. Signals that have this property are sometimes called almost-periodic signals.
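The orthogonality relations (2.15) and (2.16) are easily checked numerically by replacing the integrals with averages over a densely sampled common period. A minimal MATLAB/Octave sketch (illustrative only, with arbitrarily chosen frequencies) is:

% Numerical check of Equations (2.15) and (2.16) over a common period Tp = 1 s
fs = 10000;
t  = 0:1/fs:1-1/fs;          % exactly one common period
f1 = 10; f2 = 20;
mean(cos(2*pi*f1*t).*cos(2*pi*f2*t))   % approx. 0,   since f1 ~= f2
mean(cos(2*pi*f1*t).*cos(2*pi*f1*t))   % approx. 0.5, since the frequencies are equal
mean(cos(2*pi*f1*t).*sin(2*pi*f1*t))   % approx. 0,   sine times cosine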

2.3 Random Signals

As mentioned in the chapter introduction, signals can be either deterministic or random. Random vibrations typically occur when the forces are caused by many independent contributions, such as the roughness of a road producing random force inputs to the tires of a car, or the sound produced by turbulent air coming out of a ventilation system, etc. Random signals are mathematically described by stochastic processes, which we will discuss in Chapter 4. In this section, we will limit the discussion to some fundamental aspects of random signals.

A random signal, x(t), is a signal for which the function values at different instances t and t + 𝜏, i.e., x(t) and x(t + 𝜏), are independent. Thus, knowing (recording) x(t) for any amount of time does not help at all to predict future values. Since most random signals we find in vibration applications have some causing mechanism behind them which has some particular "pattern," the random signals will have some resulting pattern. We may, for example, drive a car at constant speed over a type of asphalt road which has a certain surface "shape," which causes the sound produced by the road to sound "constant" in some way. If this is the case, the random signal has constant statistical values, such as its root mean square (RMS) value (see Section 2.5) and spectrum (see Section 8.3.1), and we refer to the random signal as a stationary random signal. Note, however, that over a long enough time, most random signals are not stationary, as for example, the asphalt type will change after a while, or the wind speed for wind-induced vibrations.

An example of a random signal is shown in Figure 2.4. The example is taken from an accelerometer (see Section 7.4) measuring the acceleration on the frame of a truck driving on a rough road. In the plot in Figure 2.4(a), a 5-second frame of data is plotted, which shows random variations. The plot in Figure 2.4(b) shows a small part of the data, from 1 to 1.2 s, which reveals the random "ringing" of the signal. A word is appropriate here about the nature of the signal in Figure 2.4. You may see a seemingly periodic behavior of the signal, with a period of approx. 0.3 s. How can we be sure this signal is random and not periodic? The answer is that we cannot determine this at all from the figure. Indeed, this question, although so apparently simple, turns out to be very difficult in practice. For now, we leave the discussion on this difficult issue to Chapters 4 and 10 where it belongs.


Figure 2.4 Example of random signal. The figure shows the acceleration on the frame of a truck driving on a rough road. In (a) the acceleration over five seconds is displayed and in (b) the same acceleration signal is zoomed in to show a small part of the data. See text for discussion.
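Although sampled signals and statistics are not introduced until Chapters 3 and 4, a simple numerical illustration of an (idealized) random signal can be made already here. The sketch below, which is only illustrative and not from the book, generates independent Gaussian samples and shows that values at different instants are essentially uncorrelated and that the "level" does not change over time:

% An idealized random signal: independent, zero-mean Gaussian samples
N = 100000;
x = randn(N, 1);
c = corrcoef(x(1:end-10), x(11:end));
c(1,2)                             % close to zero: values 10 samples apart are uncorrelated
[std(x(1:N/2)) std(x(N/2+1:end))]  % similar values in both halves: a stationary signal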

2.4 Transient Signals

The third fundamental signal class is the class of transient signals. A transient signal is a signal which has a limited duration, i.e., it dies out after a while. Examples of such signals are the vibrations when we cross a railroad with our car, or the sound of a car door closing. Transient signals are usually, but not always, deterministic; for example, burst random noise, which is described in Section 13.9.3, is a rare exception. If the signal is deterministic, it means that the same signal is repeated if the event is repeated; for example, we can imagine each sound from a gunshot producing the same sound pressure at a particular location relative to the gun barrel. This is of course an idealized example, which does not take into account any statistical spread between each gunshot, etc. We will say more about spread in measurements, etc., in Section 4.1.

An example of a transient signal is shown in Figure 2.5 in the form of an exponentially decaying sine.

Figure 2.5 Example of a transient signal; an exponentially decaying sine.

A characteristic that separates transients from periodic and random signals is that because the transients die out, it is not relevant to discuss the power of the transient (remember, power is defined as energy per time unit). Instead of power, we can relate to the energy of the transient, or sometimes simply the sum (integral) of it. If the measured entity is a force, for example, we can relate to the integral of the force, which we know as the impulse of the force.
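A transient like the one in Figure 2.5 is easily simulated. The following MATLAB/Octave sketch, where the decay rate and frequency are chosen for illustration only and are not taken from the figure, also shows why energy, rather than power, is the natural measure for a transient:

% An exponentially decaying sine and its (approximate) energy
fs = 1000;
t  = 0:1/fs:2;
x  = exp(-2*t).*sin(2*pi*5*t);   % illustrative decay rate and frequency
E  = sum(x.^2)/fs;               % approximates the integral of x^2(t), the "energy"
P  = mean(x.^2);                 % "power" over the record length; this value decreases
                                 % if we keep recording after the transient has died out,
                                 % whereas E settles to a fixed value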

2.5 RMS Value and Power

From the discussions in the previous sections in this chapter, it should be apparent that the properties of dynamic signals in general cannot be summarized by a single value. Often it is, however, useful to be able to compare two dynamic signals and distinguish which one is "larger." The most common measure used in this respect is the root mean square, or RMS value. The RMS value of a signal x(t), based on an averaging time of 𝜏 s, which we can denote xRMS, is defined by

xRMS = √( (1∕𝜏) ∫_{0}^{𝜏} x²(t) dt ),    (2.17)

that is, the RMS value is the square root of the mean square of the signal. The "origin" of the RMS value is based on a simple electrical circuit as illustrated in Figure 2.6.

Figure 2.6 A simple electrical circuit with an AC voltage source and a resistor.

In such a circuit, the instantaneous power dissipating through the resistor is

Pu(t) = u²(t)∕R.    (2.18)

The average power, which we denote ⟨Pu⟩, based on 𝜏 seconds of u(t) is now

⟨Pu⟩ = (1∕(R𝜏)) ∫_{0}^{𝜏} u²(t) dt,    (2.19)

where R is the resistance. Equation (2.19) is the mean square value of the voltage u(t), divided by the resistance, R. This means that if we replace the dynamic (alternating current [AC]) voltage u(t) with a direct current (DC) voltage uDC = uRMS from Equation (2.17), the
mean power will be equivalent. This in turn means that the heat dissipated by the resistance (or the light emitted if R is a light bulb) will be the same. This is the essence of the RMS value. In noise and vibration applications, the RMS value is often relevant. For instance, the ear is essentially sensitive to the RMS value of the sound pressure in the ear canal. Sound level meters, therefore, measure RMS values, as will be discussed in Section 3.3.5. The RMS value is the most common value used when a single value is wanted to describe the level of a dynamic signal. It is, however, by no means the only one. Moreover, it should be emphasized that the only thing the RMS level tells us is what the square root of the mean square of the signal is. In Chapter 4, we will discuss several more statistical values such as, for example, skewness and kurtosis, which are also often used to describe the characteristics of dynamic signals.

From the discussion of the simple electrical circuit above, it is clear that the electrical power is proportional to the square of the voltage. It is very common in signal analysis and vibration analysis to refer to all squared units as "power," although in many cases, the actual signal squared may not be directly proportional to the actual power in units of watts [W]. For example, if we measure an acceleration, the square of the acceleration will be referred to as the "power of the acceleration," although for mechanical systems, the power is actually related to the square of the velocity (as the kinetic energy is mv²∕2, which should be well known from mechanical dynamics). It is important to realize this somewhat "sloppy" use of the term "power" in order not to be confused later in this book (or in your professional career, for that matter).
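As a simple check of Equation (2.17), the RMS value of a sine with amplitude A should be A∕√2, approximately 0.707A. A minimal MATLAB/Octave sketch (illustrative only, using a discrete approximation of the integral) is:

% RMS value of a sine, computed as a discrete version of Equation (2.17)
A  = 5;
fs = 10000;
t  = 0:1/fs:1-1/fs;            % one second, an integer number of periods of the sine
x  = A*sin(2*pi*10*t);
xRMS = sqrt(mean(x.^2))        % approx. A/sqrt(2) = 3.54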

2.6 Linear Systems

As we mentioned in the introduction of this chapter, a system is an entity which has one or more inputs causing one or more outputs. A dynamic system is often defined (rather theoretically) as a linear system if it can be described by linear differential equations. If it is not linear, it is called a nonlinear system. In this section, we will show what implications this theoretical definition has, and we will discuss briefly when we can expect a system to be linear. In Chapters 13 and 14, we will discuss how to identify linear systems from measurements of input(s) and output(s), and then we will also discuss practical methods of testing if the estimated system is linear or nonlinear.

A particularly interesting class of linear systems is the class of time invariant systems. Such a system is a linear system for which all parameters are constant (independent of time). In mechanical systems, this means that the masses, springs, and dampers are not changing with time. This is often a reasonable assumption during, for example, the time over which we measure a system, but over a long enough time span, very few real systems are time invariant. The characteristics of a bridge, for example, can change due to the temperature changing between day and night, or its characteristics (on a more long-term span) can change due to aging or fatigue of the structure.

In principle, a system can be thought of as a "black box" relating the inputs and the outputs caused by those inputs as illustrated for a single-input/single-output system in Figure 2.7.

Figure 2.7 Linear system as a "black box" with time signals and Laplace domain equivalents.

In the remainder of this section, we will look into how the input and output
of such systems are related when the system is a time invariant, linear system. The main theory we will use for describing the linear system is the Laplace transform. If you feel you have a good understanding of the Laplace transform, I still recommend you read the following subsections at least briefly, as the treatment here is probably less abstract than you have seen in math classes, and may reveal one or two pieces of information you have not thought about before. If you have never seen the Laplace transform before, the following is meant to serve as an introduction sufficient to follow the discussions in the rest of this book.

2.6.1 The Laplace Transform

The Laplace transform is a mathematical tool that can be used (among other things) to solve systems described by linear differential equations. The feasibility of the Laplace transform theory for our purpose is a result of the fact that it is very general and is easily related to experimentally available entities such as time signals and frequency spectra (Fourier transforms). If we have a signal x(t), we define its Laplace transform, ℒ[x(t)] = X(s), by

ℒ[x(t)] = X(s) = ∫_{0−}^{∞} x(t) e^(−st) dt,    (2.20)

where the complex variable s is the Laplace variable which we will sometimes divide into its real and imaginary parts as follows: s = 𝜎 + j𝜔.

(2.21)

The Laplace transform, X(s), is an algebraic expression; in our cases, with differential equations, it is usually a polynomial. We often refer to the Laplace variable, s, and the function X(s) in the Laplace s-plane as belonging to the Laplace domain, whereas the original time signal, x(t), is in the time domain. We can thus transform signals from one domain to the other with the forward or inverse Laplace transform. Later, in Section 2.7, we will introduce the similar frequency domain for the Fourier transform. Note that the integral in Equation (2.20) starts at “0⁻” which ensures that we will include any Dirac impulse functions at time zero, see Section 2.6.3 below. If we have a Laplace transform X(s), we can use the inverse Laplace transform, denoted ℒ⁻¹[X(s)], to go backward to get the time function, x(t), i.e.,

x(t) = lim_{T→∞} (1/2πj) ∫_{β−jT}^{β+jT} X(s) e^{st} ds.    (2.22)

In order to understand Equation (2.22), you will need to know complex calculus, which we will leave out here. The important Laplace transform pairs, i.e., time functions and their Laplace domain counterparts which we need, will be presented later in this section.


The Laplace transform has some important properties related to our application of it, which we will now present. The Laplace transform is a linear transform, which means that

ℒ[a1 x1(t) + a2 x2(t)] = a1 ℒ[x1(t)] + a2 ℒ[x2(t)],    (2.23)

for any real scalar constants a1 and a2. Further, what makes the Laplace transform particularly useful to solve differential equations is that it transforms linear differential equations into polynomials in s. This is a fact because the Laplace transform of the n-th derivative of x(t), i.e., ℒ[x^(n)(t)], is

ℒ[x^(n)(t)] = s^n X(s) − s^(n−1) x(0) − s^(n−2) x^(1)(0) − … − x^(n−1)(0),    (2.24)

where x(0), x^(1)(0), etc., are the initial conditions of the differential equation. Note the difference between the n-th power of s, s^n, and the n-th derivative of x(t), where we use parentheses, x^(n)(t). Equation (2.24) means that the Laplace transform of the first derivative, ẋ(t), of x(t), is

ℒ[ẋ(t)] = sX(s) − x(0),    (2.25)

and the Laplace transform of the second derivative, ẍ(t), is

ℒ[ẍ(t)] = s²X(s) − sx(0) − ẋ(0).    (2.26)

The initial conditions in the previous equations are necessary to solve differential equations with arbitrary initial conditions. However, there is an important principle that we will utilize later, namely the principle of superposition, which says that if an input x1(t) causes an output y1(t), and another input x2(t) causes an output y2(t), then for a linear system, the output caused by the input signal x1(t) + x2(t) will be y1(t) + y2(t). Thus, if the initial conditions are not zero, we can always calculate the contribution due to a particular input signal (or change in the input signal) under the assumption of zero initial conditions, and then add the vibrations that were there before. You should remember from your calculus class that the solution to a linear differential equation generally consists of two parts, the homogeneous (transient) solution, and the particular (forced, or steady-state) solution. The total solution is the sum of those two solutions. You should note that when using the Laplace transform to solve a linear differential equation, we get both those solutions. This adds to the wide applicability of the Laplace transform. Also see Section 2.7.4 for a discussion on transient versus steady-state response. Some common Laplace transform pairs are given in Table 2.1. For more comprehensive tables of Laplace transform pairs, any standard mathematical reference book can be used, for example, Zwillinger (2002). An important theorem we will use extensively later in this book is the theorem of partial fraction expansion. This theorem applies to any function H(s) which is a ratio of two polynomials P(s) and Q(s), i.e.,

H(s) = P(s)/Q(s) = P(s)/[(s − s1)(s − s2) … (s − sNq)],    (2.27)

and for which the polynomial order of Q, Nq, is at least one more than the order of P, Np, i.e., Nq > Np. If those conditions are met, the theorem says that the function H(s) can be divided into a sum

H(s) = ∑_{r=1}^{Nq} Ar/(s − sr),    (2.28)

where sr is the r-th root of Q(s), i.e., a solution of Q(sr) = 0.


Table 2.1 Common Laplace transform pairs. Note that pairs 1 and 2 are for the special case where all initial conditions are zero, see the text for details.

#  Description       x(t)                          X(s)
1  Differentiation   ẋ(t)                          s X(s)
2  Integration       ∫ x(t) dt                     (1/s) X(s)
3  Dirac pulse       δ(t)                          1
4  Exponential       e^{at}                        1/(s − a)
5  Time delay        x(t − τ)                      e^{−sτ} X(s)
6  Convolution       ∫_{−∞}^{∞} x(u) y(t − u) du   X(s) Y(s)
7  Time reversal     x(−t)                         X(−s)

The variables sr in Equation (2.28) are called the poles of H(s), and the variables Ar are called the residues. To calculate the residues, we can use the so-called Heaviside cover-up method, which says that

Ar = (s − sr) P(s)/Q(s) |_{s=sr}.    (2.29)

This method is called the Heaviside cover-up method because Equation (2.29) says that, to calculate the residue for pole r, Ar, from an expansion of the denominator of H(s),

Ar = P(s)(s − sr)/[(s − s1)(s − s2) … (s − sr) … (s − sNq)] |_{s=sr},    (2.30)

we see that the factors (s − sr) in the numerator and the denominator cancel out. Therefore, without going to the length of Equation (2.30), we can instead cover up the factor s − sr in Equation (2.27), and insert s = sr in the remaining equation. The Heaviside cover-up method does not work if there are repeated poles, i.e., if two or more values of sr are coinciding. The partial fraction expansion still applies, however, and as we are interested in using the Laplace transform as a tool to explain the principle of systems theory, we leave this special case out here.

Example 2.6.1 As an example of using the Laplace transform to solve differential equations, let us solve the differential equation

ÿ + 3ẏ + 2y = x(t),    (2.31)

for an input signal x(t) which is

x(t) = 10e^{−3t},    (2.32)

and, for simplicity, initial conditions

y(0) = ẏ(0) = 0.    (2.33)


Laplace transformation of Equation (2.31) gives

[s² + 3s + 2] Y(s) = 10/(s + 3),    (2.34)

where we have used transform pair number 4 in Table 2.1. We divide the left- and right-hand sides of the equation by the polynomial in s on the left-hand side, which gives us

Y(s) = 10/[(s + 3)(s² + 3s + 2)] = 10/[(s + 3)(s − s1)(s − s2)],    (2.35)

where s1 and s2 are the roots of the polynomial s² + 3s + 2, i.e., s1 = −2 and s2 = −1. Next, we use partial fraction expansion on Equation (2.35), which yields

Y(s) = 10 [A1/(s + 3) + A2/(s + 2) + A3/(s + 1)],    (2.36)

where the residues, An, can be found by applying the Heaviside cover-up method on Equation (2.35), which gives A1 = 0.5, A2 = −1, A3 = 0.5. Thus, we have a solution in the s-plane

Y(s) = 5/(s + 3) − 10/(s + 2) + 5/(s + 1).    (2.37)

We now go to Table 2.1 to find the inverse solution:

y(t) = 5e^{−3t} − 10e^{−2t} + 5e^{−t},    (2.38)

which is our end result. You should also look at Problems 2.9 and 2.10 to learn how to use MATLAB/Octave to solve this problem. End of example.
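As a quick numerical cross-check (not part of the book's example, and only a minimal sketch), the same partial fraction expansion can be obtained in MATLAB/Octave with the conv and residue commands, which are also the subject of Problems 2.9 and 2.10. The variable names below are arbitrary.

% Minimal sketch: verify Example 2.6.1 numerically with conv and residue
num = 10;                      % numerator of Y(s)
den = conv([1 3],[1 3 2]);     % denominator (s+3)(s^2+3s+2) by polynomial convolution
[r,p] = residue(num,den);      % residues r and poles p of the partial fraction expansion
% r and p should correspond to 5, -10, 5 at the poles -3, -2, -1 (in some order),
% so y(t) is the sum of r(k)*exp(p(k)*t), which matches Equation (2.38).
t = 0:0.01:5;
y = zeros(size(t));
for k = 1:length(p)
    y = y + r(k)*exp(p(k)*t);  % build the time solution term by term
end
plot(t,real(y)), xlabel('Time (s)'), ylabel('y(t)')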

2.6.2 The Transfer Function

From the definition of the Laplace transform in Section 2.6.1, the transfer function, H(s), of a system follows straightforwardly. For any linear (single-input/single-output) system with a Laplace transform of the input X(s) and of the output Y(s), the transfer function is defined as the ratio of the output and input Laplace transforms, i.e.,

H(s) = Y(s)/X(s).    (2.39)

The transfer function for any linear system has a unique expression, i.e., it is independent of the input and output signals; for any input signal, the output signal Laplace transform will be such that the ratio Y(s)/X(s) = H(s). The practical use of the transfer function is that the output can be calculated for an arbitrary input by multiplying the transfer function with the input, which follows directly from rewriting Equation (2.39) into

Y(s) = X(s)H(s).    (2.40)

The output time signal (the solution to the linear differential equation for a particular forcing function) is found by applying the inverse Laplace transform to Equation (2.40), i.e.,

y(t) = ℒ⁻¹[Y(s)] = ℒ⁻¹[X(s)H(s)],    (2.41)

where the inverse Laplace transform is usually found by table lookup, after some algebraic manipulation to yield Laplace expressions that can be found in the Laplace table.


An important concept related to the transfer function is the concept of poles. The poles of a transfer function (or, as we often say, the poles of the system H(s)) are the roots of the denominator of H(s). We will look at the poles of a simple mechanical system, which we will describe in more detail in Chapter 5. Let us assume we have a second-order differential equation:

mÿ + cẏ + ky = x(t),    (2.42)

where for the moment we only assume the constants m, c, and k are time invariant constants and the initial conditions are zero (i.e., y(0) = 0 and ẏ(0) = 0). By Laplace transforming this equation we first obtain

(ms² + cs + k) Y(s) = X(s),    (2.43)

after which we form the transfer function

H(s) = Y(s)/X(s) = 1/(ms² + cs + k) = (1/m)/(s² + (c/m)s + (k/m)),    (2.44)

where we have rearranged the denominator so that the highest order is free of any constant. The poles of H(s) are now the roots of the denominator in Equation (2.44). From Equation (2.43), we can see that the roots of the polynomial are in fact the nontrivial solutions to

(ms² + cs + k) Y(s) = 0,    (2.45)

i.e., for mechanical systems, which we are particularly interested in here, the equation for free vibrations. Thus, the poles are the solutions (frequencies) which give us free vibrations for a mechanical system. (Remember that the nontrivial solutions are the solutions of Equation (2.45) for which y(t) ≠ 0. They are called nontrivial, of course, because if indeed y(t) = 0, for a mechanical system, we have no vibrations, and that is surely a trivial solution.) Roots of the numerator polynomial in H(s), i.e., values of the Laplace operator s for which H(s) = 0, are called zeros of H(s) (or zeros of the system). The poles and zeros are important properties because they build up the transfer function. In Section 2.7.2, we will look more into these important properties. First, however, we will look at the inverse transform of the transfer function.

Example 2.6.2 Define the transfer function for the system described by the differential equation in Example 2.6.1. Find the poles of the system. We already have a Laplace transform of the differential equation, so the transfer function simply becomes

H(s) = Y(s)/X(s) = 1/(s² + 3s + 2),    (2.46)

and the poles are the roots of the denominator polynomial, which were already calculated in Example 2.6.1, i.e., s1 = −2 and s2 = −1. End of example.
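As a hedged aside (not part of the original example), the poles can also be checked numerically with the roots and poly commands in base MATLAB/Octave, which are explored further in Problem 2.9:

% Minimal check: poles of H(s) = 1/(s^2 + 3s + 2)
p = roots([1 3 2])   % returns -2 and -1, the poles found in Example 2.6.1
poly(p)              % rebuilds the denominator coefficients [1 3 2]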

2.6.3 The Impulse Response

A very important Laplace transform pair is the one that specifies that multiplication in the Laplace domain corresponds to convolution in the time domain, and vice versa.


Thus, the Laplace domain relationship in Equation (2.40) is equivalent in the time domain to the convolution integral:

y(t) = ∫_{−∞}^{∞} x(u) h(t − u) du.    (2.47)

The function h(t) in Equation (2.47) is the inverse Laplace transform of the transfer function H(s) and is called the impulse response of the system. The name implies that the function h(t) is obtained as the output of the system when the input of the system, x(t), is an ideal impulse, a so-called Dirac unit impulse function, δ(t). The Dirac unit impulse (or simply the Dirac impulse), δ(t), is an idealized function with the special properties

δ(t) = 0,  t ≠ 0,    ∫_{0⁻}^{0⁺} δ(t) dt = 1.    (2.48)

These properties imply that the Dirac impulse is infinitely narrow, and infinitely high, so that the area under it is unity. The Laplace transform of the Dirac impulse is

ℒ[δ(t)] = 1,    (2.49)

so that, as mentioned above, if x(t) = δ(t), then

y(t) = ∫_{−∞}^{∞} δ(u) h(t − u) du = h(t).    (2.50)

Naturally, we could also obtain this relationship by using the Laplace transform, i.e.,

y(t) = ℒ⁻¹[1 ⋅ H(s)] = h(t).    (2.51)

Example 2.6.3 Find the impulse response of the system from Example 2.6.2. By applying the Heaviside cover-up method on the transfer function in Example 2.6.2, we find that the transfer function can be written as follows:

H(s) = 1/(s + 1) − 1/(s + 2).    (2.52)

Thus, the impulse response becomes

h(t) = e^{−t} − e^{−2t}.    (2.53)

End of example. The impulse response and the convolution integral, so closely related to it through Equation (2.47), are very important concepts. The convolution integral is in fact so important that we will devote a whole section to it.

2.6.4 Convolution

As we saw in the previous section, convolution of two signals is important for understanding linear systems, as the output of a linear system is the convolution result between the input time signal and the impulse response. The convolution is also important for


understanding many aspects of frequency analysis, as multiplication in the time domain corresponds to convolution in the frequency domain, and conversely, multiplication in the frequency domain corresponds to convolution in the time domain, see Section 2.7. In my experience, the convolution process is perceived as rather difficult by many students. Yet it is essential to understand, in order to grasp the nature of linear systems and of frequency analysis, and we will therefore describe the convolution process in some depth here. Although the convolution process is certainly complex (in the sense of complexity), and the results of the convolution of most pairs of signals impossible to foresee without actually computing it, the principle of convolution is rather simple, as we will see henceforth. The convolution is often denoted by an asterisk, ∗, and the convolution result, y(t), between two time signals x(t) and h(t), is defined as follows:

y(t) = x(t) ∗ h(t) = ∫_{−∞}^{∞} x(u) h(t − u) du,    (2.54)

where the integration time variable is substituted by the variable u. It is rather easy to show by variable substitution (see Problem 2.8) that changing the order of the signals that are convolved does not change the result, i.e.,

h(t) ∗ x(t) = x(t) ∗ h(t).    (2.55)

A case of special interest is the convolution between a function and the Dirac unit impulse, δ(t), or a translated unit impulse, δ(t − t0), as defined in Section 2.6.3. Since the Dirac function δ(t − t0) is nonzero only at t = t0, it is relatively easy to show that

h(t) ∗ δ(t − t0) = h(t − t0),    (2.56)

for any delay t0. Obviously then, convolving any function by the Dirac function δ(t − t0) translates the convolved function in such a way that the function value at the origin moves from t = 0 to t = t0, as illustrated in Figure 2.8. As mentioned above, the actual result of the convolution of two signals is in general impossible to foresee simply by knowing the two signals, because of the effective “mixing” of the signals through the convolution process. A property of the convolution process is, however, that, loosely speaking, the resulting function will have the “qualities” of both signals. If, for example, one of the signals is rippling, whereas the other signal is smooth, then the result will likely have some ripple.

Figure 2.8 Illustration of the result that convolving a signal (here an impulse response h(t)) by a Dirac impulse located at t0, i.e., δ(t − t0), results in shifting the impulse response to start at point t = t0, i.e., the result is h(t − t0).

Figure 2.9 Illustration of the convolution process; see text for details.

In order to understand the convolution integral, we will split the convolution process up into a few pieces, as illustrated in Figure 2.9. First, the impulse response h(t) is usually of limited length, which is illustrated in Figure 2.9(a). Now, the integral in Equation (2.54) has a factor h(t − u) rather than h(u). We start by observing, in Figure 2.9(b), that h(−u) is the function h(u) reversed along the u-axis (x-axis). In Figure 2.9(c), h(t − u) is obtained by shifting the function h(−u) to the “start point” u = t, which, if t is positive as in the figure, is positive along the u-axis. (Think of the point h(0), which corresponds to h(t − u) where u = t.) Now that we have seen that h(t − u) is h(u) time-reversed and shifted to time u = t, we can go to the next “step” in the convolution integral in Equation (2.54), which is the multiplication of h(t − u) by x(u) and then taking the time integral of this product, as indicated

in Figure 2.9(d) and (e). In Figure 2.9(f), finally, the integral of the product is marked by the shaded area where, of course, negative area under the x-axis is subtracted from the positive area above the x-axis. A complication of the convolution process is that the entire process thus described is calculating one, single, value of y(t). In order to calculate the convolution value of, say, y(t2), then h(−u) is shifted to u = t2, and the multiplication and subsequent integration is recalculated. This process is then repeated again and again, as t passes all values along the time axis for which we want to calculate the convolution result y(t). From the above discussion it follows that to calculate the entire function y(t), the time-reversed impulse response h(t − u) is sliding along the time axis. This means that the impulse response acts as a weighting function, where the output y(t) at each t is the input signal x(t) weighted “backwards” in time by the impulse response h(t), as illustrated in Figure 2.10.

Figure 2.10 Illustration of the impulse response “sliding” through time to obtain the convolution output result. The figure illustrates the moment when the impulse response has “moved” up to t = 3 seconds, and the sample for y(3), which is the result of the integral of the product of the impulse response as shown and the part of x(t) which coincides in time with h(3 − u), is marked by an asterisk. The shaded area is illustrating the part of x(t) which is coinciding with the impulse response h(3 − u).

This leads us to the important concept of causality. A causal physical system is a system for which the outputs at every instant in time only depend on past inputs. Such a system does not “foresee the future,” which is an important concept in physics. Naturally, any system we observe in nature will be causal. From the understanding we now have of the convolution process and the definition of the impulse response, it is straightforward to conclude that any causal system will have an impulse response obeying

h(t) = 0,  t < 0,    (2.57)

as otherwise it would mean that at an arbitrary time instant t = t0, the convolution process would act on “future” values of x(t), where t > t0. Convolution is a very basic mathematical operation. Polynomial multiplication, for example, is an operation where the polynomial coefficients are convolved to find the resulting polynomial, although we are then talking about discrete convolution instead of the continuous convolution we have discussed so far. See Section 3.1.1 and Problem 2.10 for more details.

2.7 The Continuous Fourier Transform

We will soon continue to explain the natural sequel of the transfer function and the impulse response, namely the frequency response. Before we can do that, however, we need a short introduction to the continuous Fourier transform. This transform is the basis for frequency analysis of aperiodic signals, i.e., random and transient signals. The Fourier transform is also useful because it relates the abstract transfer function with the more “experimentally friendly” frequency response. The Fourier transform, X(f), of a time signal, x(t), is defined by the Fourier transform integral

ℱ[x(t)] = X(f) = ∫_{−∞}^{∞} x(t) e^{−j2πft} dt.    (2.58)

The time signal can be calculated from the Fourier transform, X(f), by the inverse Fourier transform defined by

ℱ⁻¹[X(f)] = x(t) = ∫_{−∞}^{∞} X(f) e^{j2πft} df.    (2.59)

You may not recognize the above definitions, as in maths classes, the Fourier transform is usually defined using the angular frequency ω, which means there has to be a factor of 1/2π in the definition of the inverse transform. The above definitions, however, are more physically meaningful and appropriate for the signal analysis that we are going to use in this book. Similar to the Laplace transform, as mentioned before, we often refer to the Fourier transform functions as being in the frequency domain. Like the Laplace transform, the Fourier transform is a linear transform, so if x1(t) and x2(t) have Fourier transforms X1(f) and X2(f), respectively, and a1 and a2 are real constants, then

X(f) = ℱ[a1 x1(t) + a2 x2(t)] = a1 ℱ[x1(t)] + a2 ℱ[x2(t)].    (2.60)

It can be helpful to understand some basic characteristics of the Fourier transform by studying some transform pairs. Therefore, in Table 2.2, some of the most common transform pairs are presented. The description to the left relates to the time signal, except for pair number 9, frequency translation.


Table 2.2 Common Fourier transform pairs. Note: the last term in row 7 assumes the time signal is real-valued.

#   Description                x(t)                          X(f)
1   Differentiation            ẋ(t)                          jω X(f)
2   Integration                ∫ x(t) dt                     X(f)/(jω)
3   Constant                   1                             δ(f)
4   Dirac pulse                δ(t)                          1
5   Gaussian pulse             e^{−πt²}                      e^{−πf²}
6   Symmetry                   X(t)                          x(−f)
7   Time inversion             x(−t)                         X(−f) = X*(f)
8   Time translation (Delay)   x(t − τ)                      e^{−j2πfτ} X(f)
9   Frequency translation      x(t) e^{j2πat}                X(f − a)
10  Complex conjugation        x*(t)                         X*(−f)
11  Rectangular window         1, 0 ≤ t ≤ T                  e^{−jπfT} T sin(πfT)/(πfT)
12  Multiplication             x(t) y(t)                     ∫_{−∞}^{∞} X(u) Y(f − u) du
13  Convolution                ∫_{−∞}^{∞} x(u) y(t − u) du   X(f) Y(f)
14  Parseval's theorem         ∫_{−∞}^{∞} x(t) y*(t) dt      ∫_{−∞}^{∞} X(f) Y*(f) df

Transform pairs 1 and 2 show that differentiation and integration in the time domain are equivalent to multiplication and division by jω = j2πf, respectively. Transform pairs 3 and 4 show that a constant in one domain corresponds to a Dirac pulse in the other domain. Transform pair 5 is a somewhat interesting case, showing that the Fourier transform of a Gaussian pulse is a Gaussian pulse. Although not of any great importance for us as such (because we rarely have vibrations corresponding to Gaussian pulses), this fact gives some insight into the Fourier transform. It can be generalized loosely to say that a function which is “narrower” than a Gaussian pulse in one domain is “wider” than the Gaussian pulse in the other domain, and vice versa. The symmetry property in transform pair 6 shows that if we know a transform pair in “one direction,” we can use the same transform pair in the other direction by replacing the frequency variable f by −f. A special result of this is that if we perform four forward transforms, we end up with the original function, because we get

x(t) → X(f) → x(−t) → X(−f) → x(t).    (2.61)

Time translation as in pair 8, which is the same as a time delay if τ is positive, is a common effect in filters, and in some mechanical systems. The effect in the frequency domain is to


add a linear phase component 𝜙(f) = −2πτ ⋅ f. Frequency translation as in transform pair 9 has several important uses in vibration analysis, the most important perhaps that of zoom fast Fourier transform (FFT) as described in Section 9.3.15. It is also known as amplitude modulation, which we will briefly discuss in Section 16.4. Complex conjugation in transform pair 10 will be commented on in Section 2.7.1 below. Transform pair 11 is very important, as we very often multiply functions with a rectangular window, for example, to limit the measurement time. A comment on the form of the right-hand side is warranted; the reason there is a factor e^{−jπfT} in the frequency domain is that the pulse in the time domain is not symmetric from −T/2 to T/2. This corresponds to a time translation of T/2 and therefore the Fourier transform will include an exponential term, as shown by pair 8. If the pulse had been symmetric in the time domain, the exponential term would disappear in the frequency domain. Transform pairs 12 and 13 show a very important property of linear systems, that a multiplication in one domain corresponds to convolution in the other domain. We have included both directions here to make this statement explicit. Finally, transform pair 14, Parseval's theorem, is particularly interesting when x(t) = y(t). The transform pair then says that the mean square value of the time signal is equal to the integral over frequency of |X(f)|². In other words, instead of calculating an RMS value in the time domain, we can integrate the magnitude squared of the Fourier transform in the frequency domain. The scaling factors will be dealt with in Chapter 9 where we will use Parseval's theorem for discrete spectra.
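As a minimal numerical sanity check of Parseval's theorem (pair 14), the following sketch, which is not from the book, uses the DFT as an approximation of the continuous Fourier transform; the scaling chosen here (multiplication by Δt, frequency spacing Δf = fs/N) anticipates the treatment in Chapter 9, and all values are example assumptions.

fs = 100;  dt = 1/fs;          % assumed sampling frequency and increment
t = (0:999)'*dt;               % 10 s time axis
x = exp(-pi*(t-5).^2);         % a Gaussian pulse, cf. pair 5
X = fft(x)*dt;                 % approximate Fourier transform samples
df = fs/length(x);             % frequency spacing
E_time = sum(x.^2)*dt          % approximate integral of x^2(t)
E_freq = sum(abs(X).^2)*df     % approximate integral of |X(f)|^2
% The two printed numbers agree to within round-off, illustrating pair 14.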

2.7.1 Characteristics of the Fourier Transform

To appreciate some of the characteristics of the Fourier transform, it is necessary to understand a few fundamental mathematical principles. First of all, the Fourier transform is based on the orthogonality between sines and cosines as described in Section 2.2.4. The integral of the transformed signal multiplied by the complex sine e^{−j2πft} essentially extracts the mean of the product over the whole time interval. If there is some signal content around the frequency f, then the integral will result in a nonzero value, otherwise not. Furthermore, the Fourier transform can obviously be regarded as two integrals

X(f) = ℱ[x(t)] = ∫_{−∞}^{∞} x(t) cos(2πft) dt − j ∫_{−∞}^{∞} x(t) sin(2πft) dt.    (2.62)

In Equation (2.62), it is seen that the real part of the Fourier transform comes from a multiplication of the signal x(t) by a cosine, and the imaginary part from a multiplication by a sine. In order to understand the implications of this, we need to look at the properties of even and odd functions. An even function xe(t) is a function for which

xe(−t) = xe(t),    (2.63)

and an odd function xo(t) is a function for which

xo(−t) = −xo(t).    (2.64)

An even and an odd function are illustrated in Figure 2.11.

Figure 2.11 Illustration of even (a) and odd (b) functions.

All real functions can be split into an even and an odd function, which easily follows from the evident relationships:

xe(t) = (1/2) [x(t) + x(−t)],    (2.65)

and

xo(t) = (1/2) [x(t) − x(−t)],    (2.66)

and by observing that the sum of the even and odd function is the original signal x(t). Some important properties related to even and odd functions, which we present here without proof, are
● The product of an even and an odd function is an odd function.
● The product of two even or two odd functions is an even function.
● A symmetric integral of an odd function is zero.

Because of the last item in the list, even and odd functions are particularly useful when we integrate functions over symmetric limits, for example, the Fourier transform time integral from negative to positive infinity. Obviously, the cosine is an even function, and the sine is an odd function. From the above properties, it then follows that for a real time signal, x(t) = xe(t) + xo(t), the real part of the Fourier transform, X(f), will be

Re[X(f)] = ∫_{−∞}^{∞} [xe(t) + xo(t)] cos(2πft) dt = ∫_{−∞}^{∞} xe(t) cos(2πft) dt,    (2.67)

since the product of the even cosine and xo(t) will be an odd function, and thus the integral value will be zero. Similarly for the imaginary part,

−Im[X(f)] = ∫_{−∞}^{∞} [xe(t) + xo(t)] sin(2πft) dt = ∫_{−∞}^{∞} xo(t) sin(2πft) dt.    (2.68)

Furthermore, because cos(−2πft) = cos(2πft), it follows directly from Equation (2.67) that

Re[X(f)] = Re[X(−f)] = ℱ[xe(t)].    (2.69)

Similarly, because sin(−2πft) = −sin(2πft), it follows from Equation (2.68) that

Im[X(f)] = −Im[X(−f)] = ℱ[xo(t)].    (2.70)

Hence, the real part of the Fourier transform is an even function and the imaginary part an odd function, and each part consists of the frequency information of the even and odd parts of x(t), respectively.


It should be noted that the results in Equations (2.69) and (2.70) could also be concluded from transform pair 10 in Table 2.2, by noting that the time signal x(t) is real and therefore its complex conjugate equals the function itself. From the Fourier transform pair, it then follows that the Fourier transform X(f) of a real signal x(t) must satisfy the relationships in the left-hand sides of Equations (2.69) and (2.70).

2.7.2 The Frequency Response

In Section 2.6.3, we showed that the impulse response, h(t), is the inverse Laplace transform of the transfer function, H(s). If we use the Fourier transform to transform the impulse response into the frequency domain, we obtain the frequency response function, H(f) (more often referred to as simply “frequency response,” or FRF). By the Fourier transform relationship number 13 in Table 2.2, and replacing y(t) with the impulse response h(t) and rearranging, it is obvious that the frequency response is

H(f) = Y(f)/X(f).    (2.71)

The most intuitive interpretation of the frequency response is that it is the ratio of a sinusoidal output and a corresponding sinusoidal input. H(f) at each frequency is a complex number; the magnitude of it is the ratio of the two amplitudes, and the phase of H(f) is the phase difference ∠(Y(f)) − ∠(X(f)). While the transfer function is a mathematical, abstract entity which we can use as a tool for solving differential equations, the frequency response is an entity which we can measure experimentally, as we will discuss in Chapters 13 and 14. The frequency response can also be calculated directly from the mathematical models in the form of differential equations, or, for example, from finite element models. It is therefore a very common analysis function for dynamic systems, both analytically and experimentally. In Figure 5.3 on page 105, an example of an FRF of a simple mechanical system with one mass, spring, and damper is shown. Since the FRF is the Fourier transform of the real-valued impulse response function, it follows from the relations in Equations (2.69) and (2.70) that an FRF has the properties that its real part is even, and its imaginary part is odd.

2.7.3 Relationship Between the Laplace and Frequency Domains

The frequency domain and Laplace domain are conceptually quite different. While the frequency domain contains a real frequency axis, the Laplace domain contains a more abstract complex operator, s. For physical systems there is, however, a direct relationship between the two which applies to the transfer function. The frequency response can be obtained by evaluating the transfer function on the imaginary axis, or

H(jω) = H(s)|_{s=jω}.    (2.72)

The effect of Equation (2.72) is illustrated in Figure 2.12, for the magnitudes |H(s)| and |H(jω)|. The transfer function H(s), or at least its magnitude, can be visualized as a surface above the s-plane as in Figure 2.12. In the figure, the transfer function has a pair of complex conjugate poles, as typically results from a second-order linear differential equation. When evaluating H(s) on the imaginary axis s = jω, it is obvious that the closer the poles are to the imaginary axis, the larger are the peaks in the frequency response. In Chapter 5, we will show that this corresponds to lower damping in the system.

Figure 2.12 Illustration of the relationship between the Laplace domain and the frequency domain. The frequency response is obtained by evaluating the transfer function on the imaginary axis in the Laplace domain, i.e., where s = jω. (a) Magnitude of Laplace transfer function with poles s1,2 = −0.5 ± j; (b) corresponding frequency response; (c) magnitude of Laplace transfer function with poles s1,2 = −0.1 ± j; (d) corresponding frequency response. From the figure, it is clear that the closer to the imaginary axis the poles are located, the larger the peaks in the frequency response.
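The following MATLAB/Octave sketch (not from the book; the denominator coefficients are chosen to place poles at −0.1 ± j, as in Figure 2.12(c)) shows one way to evaluate Equation (2.72) numerically with polyval, in the spirit of Problem 2.7.

% Minimal sketch: frequency response by evaluating H(s) at s = jw
w = linspace(0,2,500);          % angular frequency axis (rad/s)
den = [1 0.2 1.01];             % s^2 + 0.2 s + 1.01, poles at -0.1 +/- j
H = 1./polyval(den, 1j*w);      % H(jw), cf. Equation (2.72)
plot(w, abs(H))
xlabel('\omega (rad/s)'), ylabel('|H(j\omega)|')   % peak near w = 1 rad/s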

2.7.4 Transient Versus Steady-State Response

In Section 2.6.1, we mentioned the transient and forced parts of the solution to differential equations of linear systems. The forced part of the solution is often referred to as the steady-state solution, as it is the solution which remains after the transient response has died out. The practical implications of these two properties are very important, particularly how they relate to the Laplace and Fourier transforms. If we have a linear system, described by its transfer function, H(s), we can solve both the transient and the steady-state responses by using the Laplace transform approach, as described in Section 2.6.1. This applies also to the Fourier transform, i.e., the inverse Fourier


transform of a product of the Fourier transform of the input and the frequency response is the sum of the transient and the steady-state responses (although using the discrete Fourier transform [DFT] for this purpose requires some care to obtain the correct solution, see Section 9.3.14). In the frequency domain, the spectrum is, of course, the sum of the transient and the steady-state solutions, since it is the spectrum of the entire time signal. A special case is when an input spectrum with a single frequency line (from a harmonic load) and an FRF are multiplied to produce a forced response, which is sometimes done in finite element software, for example (see Section 19.2.3). In this case, the product will only show the spectrum of the steady-state response because the spectrum with a single frequency line is an ideal case.

2.8 Chapter Summary

In this chapter, we have introduced some basic concepts about dynamic signals and systems. We started by defining the most fundamental dynamic signal, the sine wave x(t), by its amplitude, A, its angular frequency (in radians/s), ω, and the initial phase angle, 𝜙, or in equation form:

x(t) = A sin(ωt + 𝜙),    (2.73)

and we noted that the circular frequency (in Hz) is f = ω/(2π). We then introduced the concept of the three fundamental signal classes, periodic signals, random signals, and transient signals. These three classes of signals often have to be separated in signal analysis, as they have very different properties. Linear systems theory was then introduced. The most important concept of systems theory is to apply a “black box” kind of approach where the output of the system to a known input can be calculated, or the system can simply be described by one of its important system functions, the transfer function, H(s), the impulse response, h(t), or the frequency response, H(f). The transfer function H(s) is defined as the ratio Y(s)/X(s), where Y(s) is the Laplace transform of the output signal y(t), and X(s) is the Laplace transform of the input signal x(t), and it defines the properties of the dynamic system. The poles of the system, i.e., the roots of the denominator in H(s), are particularly interesting because they give the frequencies where the system has free vibrations (if it is a transfer function of a mechanical system). The transfer function can (in theoretical analysis) be used to find the solutions (responses) for any input (force) by using the inverse Laplace transform, or

y(t) = ℒ⁻¹[X(s)H(s)].    (2.74)

The impulse response, the inverse Laplace transform of the transfer function, can alternatively be used to calculate the output by the convolution integral:

y(t) = ∫_{−∞}^{∞} x(u) h(t − u) du.    (2.75)

The impulse response is experimentally obtainable, although usually indirectly, by first estimating the frequency response, H(f), and then using the inverse Fourier transform to calculate h(t).


The FRF, finally, is probably the most commonly used description of dynamic systems, as it is easily estimated from measurements of (for example) forces and acceleration signals, see Chapters 13 and 14. The FRF of a system can be obtained theoretically from the transfer function by evaluating H(s) on the imaginary axis s = jω. If it is determined experimentally, it is essentially defined in the frequency domain by

H(f) = Y(f)/X(f).    (2.76)

2.9 Problems

Many of the problems following are supported by the accompanying ABRAVIBE toolbox for MATLAB/Octave and further examples which can be downloaded with the toolbox. If you have not already done so, please read Section 1.6. ABRAVIBE is completely free and can be downloaded from www.abravibe.com, together with example files and other material complementing this book, including a solutions manual for all book problems.

Problem 2.1 Determine the expression of the sine wave x(t) = A sin(ωt + 𝜙), which is plotted in Figure 2.13. Answer the following questions:

(a) What is the amplitude?
(b) What is the (circular) frequency (in Hz)?
(c) What is the initial phase angle?
(d) Derive an expression of a complex sine x̃(t) describing x(t) so that Re[x̃] = x(t).

Figure 2.13 Plot of sine wave for Problem 2.1.


Problem 2.2 Calculate the RMS value of a signal x(t),

x(t) = 4 cos(4t + π/8).    (2.77)

Problem 2.3 Write a MATLAB/Octave function, function m = multsines(f1,f2), which calculates the mean, m, of the product of the two sines with frequencies f1 and f2, and plots the product in a figure. Choose a sampling frequency of 10 times the highest frequency of f1, f2, and a total time of 10 times the largest period. Use this function to experiment with different frequencies to see for which frequencies the mean does not become zero. Also look at the plots and investigate what happens when the difference between the two frequencies becomes small.

Problem 2.4 Find the poles of the transfer function H(s),

H(s) = 1/(s² + 2s + 4).    (2.78)

Problem 2.5 Find the partial fraction expansion of the transfer function in Problem 2.4.

Problem 2.6 Calculate the impulse response h(t) of the system described by the transfer function in Problem 2.4. Write a MATLAB/Octave script that plots the function.

Problem 2.7 Calculate the frequency response H(jω) of the system described by the transfer function in Problem 2.4. Write a MATLAB/Octave script that plots the function.

Problem 2.8 Prove that changing the order of convolution does not change the result of the convolution, i.e., that h(t) ∗ x(t) = x(t) ∗ h(t).

Problem 2.9 Investigate the MATLAB/Octave commands poly, roots, and residue to find how to solve a differential equation such as the one in Example 2.6.1 on page 23.

Problem 2.10 Investigate the MATLAB or Octave command conv to find how to calculate the total polynomial in the denominator in Example 2.6.1 (see Section 2.6.4). (Hint: Try convolving the two polynomials s − s1 and s − s2 with numbers replacing s1 and s2. In MATLAB/Octave these polynomials are defined as the vectors V1 = [1, −S1] and V2 = [1, −S2], where you replace the variables S1 and S2 by numbers. Compare the results with what you obtain manually!)


References

Haykin, S. (2003). Signals and Systems, 2nd edn. John Wiley.
Oppenheim, A.V., Schafer, R.W. and Buck, J.R. (1999). Discrete-Time Signal Processing. Pearson Education.
Proakis, J.G. and Manolakis, D.G. (2006). Digital Signal Processing: Principles, Algorithms, and Applications, 4th edn. Prentice Hall.
Zwillinger, D. (ed.) (2002). CRC Standard Mathematical Tables and Formulae, 31st edn. Chapman & Hall.


3 Time Data Analysis

In modern measurement and analysis systems for noise and vibration analysis, it is usually possible to record time signals for later analysis. In many software packages, the time data can then be processed with functions such as filtering and statistics. The aim of this chapter is to introduce some fundamental aspects of digital signals and time domain processing of time discrete signals, necessary to understand in order to correctly analyze time signals. Such processing is often referred to as time series analysis. This chapter will by necessity be brief and introductory in nature. Still, I hope the information will be useful for the nonexpert practitioner or, for example, mechanical and civil engineering students, enabling them to use the information provided to perform some important analysis tasks, such as resampling and octave filtering, in the time domain. Electrical engineering students may also find some otherwise not so readily available information on applications of signal processing. For example, this chapter includes some filters for integration and differentiation of time signals not readily available in most textbooks. Practical issues regarding data acquisition of time signals are left for Chapter 11, where, for example, the most common type of analog-to-digital converter (ADC), the sigma–delta ADC, will be presented, together with practical considerations of discretization, etc. In this chapter, we will assume that we can somehow produce a sampled signal, and we will discuss how to process it to obtain good quality results. Signal processing of time signals can be applied either directly in the time domain or in the frequency domain. The latter requires an understanding of the discrete Fourier transform and is therefore left to Chapter 9, but is briefly mentioned at the end of this chapter.

3.1 Introduction to Discrete Signals

All signals we measure on mechanical systems are of course analog, that is, they are defined in continuous time. When we record signals, they are converted to time discrete signals by an ADC. If an analog signal x(t) is sampled with sampling frequency fs = 1/Δt, we denote the new, time discrete signal by x(nΔt) or, for simplicity, x(n) or sometimes xn. We call Δt the sampling increment.



3.1.1 Discrete Convolution

Many operations are similar between continuous and discrete signals, if one replaces integrals by sums. We illustrate it with discrete convolution, as this is a very common application in this book. The discrete convolution of the signals x(n) and y(n) is defined by

z(n) = x(n) ∗ y(n) = ∑_{m=−∞}^{∞} x(m) y(n − m) = ∑_{m=−∞}^{∞} x(n − m) y(m).    (3.1)

For finite length signals, which we will most often be interested in, the sum can only be computed over the overlapping samples for any value n. Assuming the signals have lengths Nx and Ny, i.e., x(n) is defined for n = 0, 1, 2, …, Nx − 1, and y(n) is defined for n = 0, 1, 2, …, Ny − 1, then the convolution will be

z(n) = x(n) ∗ y(n) = ∑_{m=0}^{n} x(m) y(n − m) = ∑_{m=0}^{n} x(n − m) y(m),    (3.2)

for n = 0, 1, …, Nx + Ny − 2, where samples outside the defined ranges are treated as zero. An important difference between the discrete and the continuous convolution in Equation (2.54) is that the continuous integral includes the differential dt, whereas the discrete convolution only includes the sum. This means that the units of the two types of convolution are different. In many cases (not only for discrete convolution), when the purpose is to approximate a continuous operation by a discrete one, we need to scale by the time increment, Δt = 1/fs. This applies to many digital filters, see e.g., Section 3.4, as well as when we scale spectra, see Chapter 10.
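The following MATLAB/Octave sketch (not from the book; the sampling frequency and input pulse are example assumptions) illustrates the scaling by Δt: a discrete convolution with conv, multiplied by Δt, approximates the continuous convolution integral of Equation (2.54), here using the impulse response h(t) = e^{−t} − e^{−2t} from Example 2.6.3.

% Minimal sketch: approximate continuous convolution by scaled conv
fs = 1000;  dt = 1/fs;          % assumed sampling frequency
t = (0:5*fs-1)'*dt;             % 5 s time axis
h = exp(-t) - exp(-2*t);        % impulse response samples, Example 2.6.3
x = double(t < 0.5);            % example input: a 0.5 s rectangular pulse
y = conv(x,h)*dt;               % discrete convolution scaled by the time increment
y = y(1:length(t));             % keep the part corresponding to the time axis t
plot(t,y), xlabel('Time (s)'), ylabel('Output')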

3.2 The Sampling Theorem

The sampling theorem, which is fundamental for all digital signal analysis, was first formulated by Nyquist in 1928, in a paper that is reprinted in Nyquist (2002). The paper by Nyquist did not receive much attention at the time, however. It was not until Shannon added important contributions in a paper in 1949, reprinted in Shannon (1998), that the sampling theorem and its interpretations were more widely spread. Actually, Shannon stated in his paper that the sampling theorem was “common knowledge in the communication art,” but he is widely acknowledged for formalizing the mathematics of it in a precise and accessible way. The sampling theorem can be formulated several ways. The formulation we will use here is given in Equation (3.4). Assume the frequency spectrum of a signal x(t) is zero outside a frequency interval, B = (f1, f2), the bandwidth of the signal, i.e., if we denote the Fourier transform by ℱ, then

|X(f)| = |ℱ[x(t)]| = 0,  f ∉ B.    (3.3)

The sampling theorem says that the analog signal x(t) can be uniquely represented by its discrete samples, if (and only if) it is sampled using a sampling frequency larger than twice the bandwidth, B, i.e., fs > 2 ⋅ (f2 − f1). If this is fulfilled, then the analog signal can be reconstructed using Equation (3.4):

x(t) = ∑_{n=−∞}^{∞} x(n) sinc[fs(t − nΔt)],    (3.4)

where

sinc(x) = sin(πx)/(πx).    (3.5)

Half of the sampling frequency, fs/2, is generally called the Nyquist frequency, being named so by Shannon. It is important to understand that what the sampling theorem means is that if the sampling theorem is fulfilled before sampling an analog signal, then the sampled signal, x(n), is an exact representation of the analog signal. In other words, the sampled signal contains all information in the analog signal. It is shown in Equation (3.4) by observing that any value of the analog signal, x(t), can be calculated using the samples in the discrete signal, x(n). Why the sampling theorem holds is not easily understood intuitively (mind you it was “revealed” only as late as 1928, or perhaps 1949). However, it follows from the fact that sampling can be understood as multiplication of the analog signal by a pulse train s1(t), as illustrated in Figure 3.1. As we know from Chapter 2, multiplication in the time domain is equivalent to convolution in the frequency domain. The Fourier transform of a time domain “pulse train” such as the signal s1 is another “pulse train” in the frequency domain, with a distance between the pulses equal to the sampling frequency, fs. The convolution of the two spectra thus results in a repetition of the spectrum of the sampled signal x(t) around each multiple of the sampling frequency fs. Thus, in order for the convolution not to mix the spectrum content, the bandwidth of the sampled signal x(t) must be less than fs/2, the Nyquist frequency. By fulfilling the sampling theorem, we therefore make sure that the spectrum of the sampled signal, within |B| < fs/2, is the same as the original spectrum of x(t). Thus, the original signal can be retrieved by bandpass filtering the sampled signal x(n).

Figure 3.1 Illustration of the sampling process as a multiplication of the analog signal x(t) by a pulse train s1(t): (a) the time domain process, (b) the frequency domain equivalent, the convolution of the frequency spectra X(f) and S1(f). The time signal used here is a Gaussian pulse, which has a Fourier transform also being a Gaussian pulse. This is chosen only for the simplicity of the plot. In most cases, of course, the time signal will be some continuous signal. This will be discussed more in Chapter 9.


Proofs of the sampling theorem can be found in standard textbooks on digital signal processing, for example Oppenheim et al. (1999) and Proakis and Manolakis (2006). Although not of much practical interest for noise and vibration analysis, it should be noted that the sampling theorem implies that the signal has to be sampled at more than twice the bandwidth of the signal and not twice the highest frequency of it, as is often, but incorrectly, said. A band-limited signal, where the frequency content of the signal does not start at 0 Hz and go up to the bandwidth, B, but rather lies in some frequency range where fmax ≫ B, can thus be sampled at a sampling frequency much lower than the frequency fmax. This is frequently used in some applications such as in modern cell phones, where signals at, e.g., the 1.8 GHz band of the GSM network are sampled at a few MHz.

3.2.1 Aliasing

If we do not fulfill the sampling theorem, a phenomenon called aliasing or frequency-folding will occur. These names, which both refer to the same phenomenon, come from two different ways of illustrating the phenomenon, illustrated in Figure 3.2. Aliasing as illustrated in Figure 3.2(a) occurs if we sample a sine signal with a sampling frequency less than twice the frequency of the sine, which results in a sine signal of a different frequency. For example, the frequencies 0.4fs and 0.6fs will, after sampling, produce the same signal. The same is true for the frequencies 1.1fs and 0.1fs. We thus observe that all frequencies are mirrored in the Nyquist frequency, fs/2. Another way of illustrating the same phenomenon is found by cutting the frequency axis at each multiple of the Nyquist frequency, fs/2, and folding the frequency axis like an accordion around these points, as illustrated in Figure 3.2(b). Thus the name folding.

Figure 3.2 Illustration of (a) aliasing, and (b) folding. In (a) is illustrated how sampling a signal with sampling frequency fs makes all frequencies above fs/2 appear in the frequency range 0 to fs/2. It thus behaves like a signal with a frequency other than that which it really has, therefore the name aliasing. In (b) the same phenomenon is illustrated through the so-called frequency folding, the name arising from observing that the frequency axis is folded at all multiples of half the sampling frequency, as in the figure. After the folding of the frequency axis is completed, the entire frequency axis will go “back and forth” between 0 and fs/2.


For broadband signals (signals with continuous frequency content, i.e., random and transient signals), aliasing will still occur, of course, but in a more complicated way. Perhaps you notice a contradiction here. In Section 3.2, we said that the sampling theorem relied on the bandwidth of the signal. However, the bandwidth of a sine is really zero, so the discussion on aliasing/folding here must surely be mushy? Indeed, if we know a priori that a signal is a sine, we can actually sample it at any (low) sampling rate and reproduce it according to the sampling theorem. But that relies on the fact that we have to know it is a sine. The illustration of aliasing here is an effect of the periodicity in the spectrum illustrated in Figure 3.1. It says that if we do not band limit the signal prior to sampling it, we will not be able to tell whether a peak in the spectrum belongs to a frequency component at that frequency, or to a frequency component at that frequency plus some multiple (positive or negative) of the sampling frequency. By not band-limiting the signal prior to sampling, we indirectly assume that all frequency components of the signal are in the frequency region between 0 and fs/2; therefore, there is no contradiction here. Actually, aliasing is very easy to illustrate, see Problem 3.1. A direct implication of the sampling theorem is that we must make sure, before sampling a signal, that it has no frequency content above half the sampling frequency. This is done by an analog antialias filter before the ADC and will be discussed in Chapter 11. The antialias filter must have a cutoff frequency below fs/2, as the slope of any analog filter above the cutoff frequency is finite. The ratio between the sampling frequency and the cutoff frequency of the antialias filter is called the oversampling ratio (sometimes oversampling factor), and the steeper the slope of the antialias filter, the lower the oversampling factor needs to be. Traditionally, the oversampling factor was always 2.56 in fast Fourier transform (FFT) analyzers, but in some modern analyzers with sigma–delta ADCs, the oversampling factor has been reduced, see Chapter 11.
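A minimal MATLAB/Octave sketch of the aliasing effect follows (not the book's code, and not a substitute for Problem 3.1; the sampling frequency is an example assumption). Cosines are used so that the two sampled signals coincide exactly.

% Minimal sketch: 0.4*fs and 0.6*fs are aliases of each other
fs = 100;                       % assumed sampling frequency in Hz
n = 0:49;                       % sample indices
x1 = cos(2*pi*0.4*fs*n/fs);     % 40 Hz cosine sampled at 100 Hz
x2 = cos(2*pi*0.6*fs*n/fs);     % 60 Hz cosine sampled at 100 Hz
plot(n, x1, 'o', n, x2, '+')
legend('0.4f_s','0.6f_s'), xlabel('Sample number')
max(abs(x1-x2))                 % essentially zero: the two sampled signals are identical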

3.2.2 Discrete Representation of Analog Signals

An effect of the limited oversampling ratio used in most measurement systems designed for noise and vibration signals is that time signals do not appear correct when plotted. In Figure 3.3, a signal is plotted with 2.56 times oversampling and with 20 times oversampling. It is evident from a careful study of the figure that the signal with low oversampling ratio is not accurately describing the signal. The signal seems to “jump” between the sampling points, which is of course impossible. It is important to understand that the signal with 2.56 times oversampling still includes all information in the signal. This means, among other things, that the frequency content of this signal is equal to that of the original signal. Only the time domain representation of the signal is limited due to the low oversampling ratio. This will be further discussed in Section 3.4.1. Another effect often encountered when measuring noise and vibration signals is the effect on the time domain representation of reducing the bandwidth of a signal. In Figure 3.4(a), a pulse from an impact test (see Chapter 13) using a (high) sampling frequency of 10 kHz is shown. This results in a bandwidth of the signal of approximately 4 kHz (10 kHz divided by 2.56, the oversampling ratio). The time signal is well described as an approximate half-sine (half a period of a sine). In Figure 3.4(b), the same signal is displayed with a bandwidth of 400 Hz, corresponding to a sampling frequency of approx. 1 kHz at an oversampling ratio of 2.56.


Figure 3.3 Illustration of oversampling ratio. A signal measured by a noise and vibration measurement system is usually sampled using an oversampling ratio of 2.56 or some similar number, as illustrated in (a). This signal is an inaccurate description of the analog signal because of the low oversampling ratio. Note the abrupt change of the signal with sharp edges between the samples. If this is, for example, an acceleration signal, it would mean that the structure (point where the sensor is located) would have a “violent” behavior with rapidly changing acceleration. That is, of course, not physical. Increasing the oversampling ratio to 20 times, as in (b), the true signal behavior, with smooth variations, is revealed. The signal samples in (a) are, however, enough to represent the information in the original, analog signal. As we will see later in this chapter and in later chapters of this book, the signal in (b) can be produced from the signal in (a), and spectra can accurately be computed from the signal in (a). However, some time domain information cannot be directly extracted from the signal in (a), for example min- and max values, see Section 3.4.1.

Figure 3.4 Illustration of the effect of reducing the bandwidth of a signal. In (a) a half-sine pulse sampled with a sampling frequency of 10 kHz. In (b), the same signal is shown after resampling by a sampling frequency of only 1 kHz. Note how oscillations appear both before and after the pulse, and the pulse amplitude is highly affected (note the different amplitude scales!). Due to the nature of linear systems, where each frequency is independent of all other frequencies (see Section 2.6), the signal in (b) is entirely valid in the frequency range up to approx. 400 Hz, as we will illustrate in Section 13.8, where we discuss impact testing, a common method for exciting structures where the phenomenon illustrated here often appears.


Note how the signal with low bandwidth oscillates before and after the pulse. This effect, known as the Gibbs phenomenon, is strictly due to the limited bandwidth and does not mean that there is anything wrong with the signal (unless, of course, we want to accurately describe it, for example by its peak and width, which are obviously not described well in Figure 3.4(b)). It is important to understand this effect in order not to misinterpret signals similar to the one in Figure 3.4(b) as indicating an error.
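The effect in Figure 3.4 is easy to reproduce with a short MATLAB/Octave sketch (not the book's code; the pulse length and sampling frequencies are example assumptions): a 1 ms half-sine pulse is lowpass filtered by resampling, which introduces ripple around the pulse.

% Minimal sketch of the bandwidth-reduction (Gibbs) effect on a half-sine pulse
fs = 10000;                           % original sampling frequency (Hz)
t = (0:0.2*fs-1)/fs;                  % 0.2 s record
x = zeros(size(t));
idx = t >= 0.1 & t < 0.101;           % 1 ms long pulse starting at 0.1 s
x(idx) = sin(pi*(t(idx)-0.1)/0.001);  % half-sine pulse
xl = resample(x, 1, 10);              % resample to 1 kHz (includes lowpass filtering)
tl = (0:length(xl)-1)/(fs/10);
plot(t, x, tl, xl)
legend('f_s = 10 kHz','f_s = 1 kHz'), xlabel('Time (s)')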

3.2.3 Interpolation and Resampling

The formula in Equation (3.4) can be used for interpolation, that is, to compute a value of the analog signal at any time t between the samples. The interpolation principle according to Equation (3.4) is illustrated in Figure 3.5. The sinc function in Equation (3.4) is centered at the time t where we wish to calculate a new sample, as is seen in the figure. The sinc function is then evaluated at every time increment nΔt where we have samples. The two functions are multiplied together and summed to form the new sample value x(t). In practice, we do not have an infinite number of samples, which produces a truncation error that reduces the accuracy of the interpolation. The sinc function falls off as 1/t away from its center point, which is a rather slow decay. To obtain high accuracy when interpolating signals, rather long interpolation filters should therefore be used. In fact, in order to produce the accurate new sample in Figure 3.5, several hundred samples outside the plot range were used.


Figure 3.5 Illustration of exact (sinc) interpolation according to Equation (3.4). The sinc function is centered at the time value to be calculated, then it is sampled at the time points which coincide with the sampled function, and finally the products of corresponding samples are summed to produce the new sample value. Note that only a few values around the point of interpolation are shown for clarity, but that more points outside the figure were used to produce the accurate new sample value. See text for comments.
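As a minimal sketch of the principle in Figure 3.5 (my addition, not code from the book; the signal, the sampling frequency, and the interpolation time are arbitrary choices, and the sinc function from the signal toolbox/package is assumed to be available), the truncated interpolation sum can be written directly in MATLAB/Octave:
fs=100;                        % Assumed sampling frequency in Hz
dt=1/fs;                       % Sampling interval
n=(0:999)';                    % Sample indices
x=cos(2*pi*3*n*dt);            % Example band-limited signal: a 3 Hz cosine
t=5.1234;                      % Time between two samples, near the record middle
xt=sum(x.*sinc(fs*(t-n*dt)));  % Truncated version of Equation (3.4)
xtrue=cos(2*pi*3*t);           % True value, for comparison with xt
Because the sum is truncated to the available samples, xt differs slightly from xtrue, which illustrates the truncation error discussed above.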


Another important error occurs at the ends of the data, where there is limited (or no) data on one side of the interpolation point t, since the interpolation formula in Equation (3.4) is symmetric around the point of interpolation. Some data at the ends should therefore preferably be excluded after resampling; a hundred samples are normally sufficient for time domain analysis. If very accurate data are needed, for example in order to perform input/output analysis in the frequency domain, many thousand points may have to be discarded at each end.
Example 3.2.1 In many software packages, there are efficient algorithms to interpolate a signal. In MATLAB/Octave, there are several ways, one of the best being the resample command. In this example, we will illustrate the process of resampling a signal to a different sampling frequency, which is often of great use in different analysis processes. The following MATLAB/Octave code produces a random signal with 10 times oversampling, which is then decimated by a factor of 4 by taking every 4th sample. The decimated signal is then interpolated (resampled) up to the original sampling frequency again, and the samples of both signals are plotted with rings and plus signs, respectively.
N=1000;               % Number of samples to start with
x=randn(N,1);         % Gaussian noise, oversampling ratio is 2
x=resample(x,5,1);    % Resample to 10 times oversampling
x25=x(1:4:end);       % Oversampling ratio of x25 is 2.5
xr=resample(x25,4,1); % Oversampling ratio of xr is 10
% Plot 'original' data x, and the resampled xr
plot(1:50,x(1:50),'ok',1:50,xr(1:50),'+k')
legend('Original','Resampled')
xlabel('Sample Number')

The result of the plot is shown in Figure 3.6, where it can be seen that, except for a few samples at the beginning of the signal, the interpolation reproduces the same samples that were removed by the decimation. Note that the use of a random signal on the second line will result in a different signal every time you run the code above. End of example.
If we have sampled a signal at a particular sampling rate, we can also use interpolation to calculate the signal corresponding to another sampling rate. This is usually referred to as resampling the signal, as was done by the command resample in Example 3.2.1. When resampling a signal, it is essential, first of all, to distinguish between upsampling and downsampling. Upsampling a signal means increasing its sampling frequency. This can be done by directly implementing the interpolation formula in Equation (3.4) at suitable locations for a new sampling frequency, usually at some fraction of the original sampling frequency. The resample command, for example, has the syntax resample(x,P,Q), which resamples the signal to a new sampling frequency fs,new = (P/Q) fs,old, where the ratio P/Q can be larger or smaller than one. Downsampling, however, is a completely different case, as it will potentially produce aliasing if the bandwidth of the signal is not taken into consideration.



Figure 3.6 Plot for Example 3.2.1. The original signal samples are indicated by rings, and the recreated samples obtained by interpolation of the decimated signal are indicated by plus signs. There is apparently an error for the first few samples, due to the lack of data on one side of the interpolation points at the beginning of the record.

A lowpass filter must then be applied prior to decimating the data (removing samples is called decimation in this context). The resample command in MATLAB/Octave applies such a lowpass filter when needed and therefore provides a good way of ensuring accurate resampling for upsampling as well as downsampling. Finally, the potential of using the resample command for lowpass filtering should be stressed. If the purpose is to reduce the bandwidth of a signal (that is, to increase its oversampling ratio), then instead of designing a lowpass filter and having to consider issues such as time delay, phase linearity, and filter slopes, it is in most cases better to use the resample command: first resample down to a lower sampling frequency, twice the bandwidth you want, and then up again, as was done in Example 3.2.1. The new signal will be equivalent to the original signal filtered by a very sharp lowpass filter (with near-ideal filter characteristics as indicated in Figure 3.7, see next section).
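As a minimal sketch of this down/up procedure (my addition; the signal x and its sampling frequency fs are assumed to exist already, and the bandwidth value B is just an illustrative assumption):
B=500;                 % Desired bandwidth in Hz (an assumed example value)
fsl=2*B;               % Intermediate, lower sampling frequency (twice the bandwidth)
[P,Q]=rat(fsl/fs);     % Integer ratio for the resample command
xd=resample(x,P,Q);    % Down to twice the desired bandwidth (lowpass filtering included)
xlp=resample(xd,Q,P);  % Back up to (approximately) the original sampling frequency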


Figure 3.7 Ideal filter characteristics. (a) “LP” indicates the characteristic of an ideal lowpass filter, (b) “HP” the ideal highpass filter characteristic, and (c) “BP” the ideal bandpass characteristic.


3.3 Filters

The design of suitable filters for various purposes is a large part of the field of signal processing, and it requires a deep understanding of many aspects of signal processing to be applied correctly. In this section, we will discuss some basic principles that will allow you to use, for example, MATLAB/Octave for some filtering operations that are often required for noise and vibration signals. A filter is often described by its frequency response, which for filters is often referred to as the filter characteristic. Very often, we would like to filter a signal with an ideal filter characteristic, whereas due to physical limitations we have to accept some compromise between computational efficiency and filter characteristic. The three most common types of filters are the lowpass, highpass, and bandpass filters, illustrated (as ideal filters) in Figure 3.7. As the names imply, the lowpass filter lets low frequencies pass, and consequently high frequencies are blocked, or filtered away. Similarly, the highpass filter is used to filter away low frequencies, whereas for the bandpass filter all frequencies, except those in a certain passband region, are filtered away. The ideal filters in Figure 3.7 cannot be physically realized. With digital filter designs, however, it is possible to get arbitrarily close to the ideal characteristics, at the expense of two things, namely computational cost and time delay. This will be discussed in Section 3.3.2.
The common way to describe filter characteristics is by the asymptotic behavior of the filter. A lowpass filter, for example, is then described by two parameters: its cutoff frequency, fc, and the slope of the filter above that cutoff frequency. It is particularly important to understand that the cutoff frequency is almost always defined as the −3 dB frequency, where the gain function of the filter has decreased by 3 dB, which means that, for a lowpass filter,
|H(fc)| = |H(f ≪ fc)| / √2.   (3.6)
In measurement applications, this means that, expressed as an error, the error at the cutoff frequency is approximately 30%; a rather significant number. If we want to estimate the root mean square (RMS) value of a sine, for example, it is therefore essential to carefully evaluate the effects of filter cutoff frequencies.

3.3.1 Analog Filters

Much of the filtering theory still used goes back to the days of analog filters, before digital signal processing was common. Thus, a few words on some common analog filter characteristics are motivated. An analog filter can be characterized by its gain and phase functions, which are the magnitude and phase of the frequency response of the filter. The simplest analog filter is a so-called first-order filter, which for a lowpass version has the filter characteristic
H(j𝜔) = 1 / (1 + j𝜔/𝜔c),   (3.7)
where 𝜔c is the filter cutoff frequency. The first-order lowpass filter is common in electronics, although in vibration equipment the first-order highpass filter is perhaps even more common, as it is included in many sensors and signal conditioning units, see Chapter 7.



Figure 3.8 Filter characteristics of a first-order highpass filter with a cutoff frequency of fc = 100 Hz. It is common to plot filter gains with a scale in dB. However, for physical interpretation of actual numbers, we have chosen to show the gain characteristic in a logarithmic scale here.

The first-order highpass filter has the characteristic
H(j𝜔) = (j𝜔/𝜔c) / (1 + j𝜔/𝜔c),   (3.8)
which is plotted in Figure 3.8 in amplitude and phase versus frequency. The gain of the first-order highpass filter is
|H(j𝜔)| = (𝜔/𝜔c) / √(1 + (𝜔/𝜔c)^2),   (3.9)

and some useful numbers for measurement applications are found in Table 3.1. One of the most common general filters is the Butterworth filter, which (for a lowpass filter) has a filter gain of
|Hb(j𝜔)| = 1 / √(1 + (𝜔/𝜔c)^(2n)),   (3.10)
where the integer n is called the filter order. A comparison of Equation (3.10) with the magnitude of Equation (3.7) shows that a first-order lowpass filter is identical to a Butterworth filter with n = 1. The Butterworth filter is useful because it has a maximally flat gain characteristic, and its phase characteristic is relatively linear. It is therefore a commonly used filter in many applications.


Table 3.1 Values of the gain and phase characteristics of a first-order highpass filter.

f/fc [–]    |H(f)| [–]    ∠H(f) [Deg.]
0.1         0.0995        84.3
0.9         0.669         48.0
1           0.707         45.0
2           0.894         26.5
10          0.995         5.71
20          0.999         2.86
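The numbers in Table 3.1 are easily reproduced from Equation (3.8); the following minimal sketch (my addition, not from the book) evaluates the gain and phase at the tabulated frequency ratios:
r=[0.1 0.9 1 2 10 20];     % Frequency ratios f/fc from Table 3.1
H=(1i*r)./(1+1i*r);        % Equation (3.8) with omega/omega_c = r
gain=abs(H);               % Gain, compare with the second column of Table 3.1
phase=180/pi*angle(H);     % Phase in degrees, compare with the third column
disp([r' gain' phase'])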

The filters used for standardized octave and third-octave filtering, for example, are third-order Butterworth filters, see Section 3.3.4, where we will also look into how MATLAB/Octave can be used to define digital versions of Butterworth filters. The filter order determines the asymptotic slope of the filter, which is approached at frequencies higher than the cutoff frequency (for a lowpass filter). It is easy to show that the asymptotic slope is −20 dB/decade, or −6 dB/octave, per order of the filter. A decade is an increase in frequency by a factor of 10, whereas an octave is a doubling of frequency. For a third-order filter, for example, the slope is thus −60 dB/decade.

3.3.2 Digital Filters

Digital filters are often used when analyzing noise and vibration data, for example to reduce the bandwidth of a signal prior to performing frequency analysis, or for acoustic octave and third-octave filtering, human whole-body vibration (comfort) filters, or shock response spectra. Digital filters and their design is a discipline of its own in the field of digital signal processing, and here we will touch only on some simple facts that are important to understand from a user perspective. A rather general definition of a digital filter between an input xn and an output yn is shown in Equation (3.11):
a0 yn = ∑_{k=0}^{Nb} bk xn−k − ∑_{l=1}^{Na} al yn−l,   (3.11)

where the filter coefficients al and bk define the filter characteristics. If Na is zero, i.e. the filter uses only (current and past) input values to compute the output, then the filter is called a finite impulse response (FIR) filter, as it will have a finite impulse response of length Nb + 1. The impulse response is the output for an impulse input x0 = 1 and xn = 0 for all remaining n. If, on the other hand, there are nonzero al coefficients in the filter, it is called an infinite impulse response (IIR) filter, or sometimes a recursive filter. IIR filters are in general more time efficient than FIR filters, i.e. more powerful filter characteristics can be accomplished


using fewer filter coefficients. In some special cases, however, FIR filters are preferred, especially if linear-phase characteristics are wanted, as will be discussed below.
Often, we would like to create a digital filter with characteristics equivalent to a specified analog filter, because most filter theory was developed in the analog era. There is, however, no exact such transformation, and thus the science of digital filters deals to a great extent with how to make the digital filter that best approximates the analog filter characteristics in some respect. Here, it is sufficient to mention a few basic facts about digital filter performance that a user must be aware of. It is important to know that the digital filter approximation of an analog filter performs worse and worse the closer one gets to the Nyquist frequency. Therefore, if the filter characteristics are defined in the analog frequency domain, such as for octave filters and whole-body filters, one should use sufficient oversampling prior to filtering the signal. In the IEC/ANSI standards for octave filters, IEC 61260 (1995) in Europe and ANSI S1.11 (2004) in the United States, for example, a minimum oversampling of five times the highest center frequency is recommended, see Section 3.3.4. In general, it is recommended to use at least 10 times oversampling when performing digital filtering of data, unless you are sure that some other factor is sufficient. The two most common transforms used to convert an analog filter to a corresponding digital filter are the bilinear and the impulse invariant transforms, although there is a large variety of other transforms available, all with different advantages and disadvantages. To thoroughly understand these transforms, it is convenient to understand the z-transform, which we will not introduce here. It is, however, possible to compute the frequency response of the digital filter directly from the coefficients al and bk in Equation (3.11), so a z-transform model is not needed, although it certainly aids in the understanding of digital filters (and discrete systems in general). In MATLAB/Octave, the frequency response can be computed from the digital filter al and bk coefficients by the command freqz.
An important issue with filters is the time delay. Most digital filters delay the signal by some number of samples, sometimes an integer number of samples and sometimes a fractional number of samples. To calculate the delay of a filter, it is useful to define the group delay of the filter as
𝜏g(𝜔) = −d∠H(j𝜔)/d𝜔,   (3.12)
which thus (in most cases) is a frequency-dependent number. The group delay of a digital filter is scaled in samples if no sampling frequency (or, actually, a sampling frequency of 1 Hz) is used. In MATLAB/Octave, the command to calculate it is grpdelay. For data analysis, it is sometimes necessary, or at least convenient, that the time delay of the filter be an integer number of samples, so that the data at the output of the filter are still synchronously sampled with the input data.
In addition to the delay of the filter, it is also important to understand that a filter can have a transient response, just like a physical system. From Equation (3.11), it follows that, in the simplest case of a FIR filter, it will take Nb samples before the input “fills” the filter so that the output is actually calculated using the full length of Nb samples. The filter transient can be several times longer than this length. However, just like for physical systems, transient effects are longer the less damping the filter has. For the types of filters presented in this chapter, the damping is high, and transient effects are thus less disturbing.
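As a minimal illustration of the freqz and grpdelay commands just mentioned (my addition; the fourth-order Butterworth lowpass filter, the 1 kHz sampling frequency, and the 100 Hz cutoff are arbitrary choices, not values from the book):
fs=1000;                        % Assumed sampling frequency in Hz
[b,a]=butter(4,100/(fs/2));     % Fourth-order Butterworth lowpass, cutoff 100 Hz
[H,f]=freqz(b,a,1024,fs);       % Frequency response of the digital filter
[gd,fg]=grpdelay(b,a,1024,fs);  % Group delay in samples
subplot(2,1,1), semilogy(f,abs(H)), ylabel('Gain')
subplot(2,1,2), plot(fg,gd), ylabel('Group delay (samples)')
xlabel('Frequency (Hz)')
The group delay plot makes it easy to see that this IIR filter does not have a constant (linear-phase) delay.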


An important concept relating to filters is the notion of phase distortion. If we ask what the phase characteristic of a filter should be in order for a signal to pass the filter with the same relative phase between different frequencies, it turns out that the answer is that the phase should be linear with frequency, see Problem 3.2. Of course, a zero phase characteristic at all frequencies would also be a solution, but this is unfortunately impossible to achieve if the filter is to have a gain characteristic other than a constant. The easiest way to produce a filter with linear phase is to design a FIR filter with symmetric coefficients. For a linear-phase filter, the group delay is constant, and for the most common type of linear-phase filters, FIR filters with symmetric coefficients (see Section 3.4) of length 2N + 1, the time delay is always equal to N samples.
We have mentioned that the oversampling factor usually needs to be at least 5–10 times. The oversampling factor should, however, in general not be too large, as this produces larger filters (more filter coefficients) and potential numerical truncation problems.
Linear-phase characteristics can be obtained even with an IIR filter by a useful trick: the data are first filtered in the normal (time) direction, and then the filter is run backward in time over the result. The scaling of the filter has to be considered when using this method. In MATLAB/Octave, there is a command, filtfilt, which performs this type of filtering, including scaling the output so that the effective (gain) filter characteristic is the same as for the IIR filter used in the “normal” way.
It is rather common to use several filters connected in so-called cascade coupling, which means that one filter follows another; in practice we first filter the data with one filter and then use that output as input to the next filter. Because polynomial multiplication is equivalent to convolution of the polynomial coefficients, convolving the numerator coefficients and the denominator coefficients, respectively, of the individual filters produces the coefficients of the total digital filter. This will be illustrated later in this chapter, in Example 3.3.4.

3.3.3 Smoothing Filters

A common filter for averaging several adjacent values in a signal is the smoothing filter. The simplest digital smoothing filter is a filter with the same weighting factor for all filter coefficients. Such filters are frequently used in order-tracking applications (see Chapter 12) and in many other applications. To obtain linear-phase characteristics, the filtfilt function in MATLAB/Octave is preferably used. We illustrate this with an example.
Example 3.3.1 Filter a signal in variable x with a smoothing filter of length L = 10 samples. We define the digital filter a and b coefficients and subsequently filter the content of variable x by the MATLAB/Octave code:
L=10;                % Smoothing filter length
a=1;
b=1/L*ones(1,L);
xfiltered=filtfilt(b,a,x);
End of example.


3.3.4 Acoustic Octave Filters

In acoustics (particularly), it is very common to analyze the frequency content of signals by means of a set of parallel bandpass filters, whose time domain output signals are analyzed in various ways, as we will discuss in Section 9.2. The bandwidth of the filters is usually either a whole octave (1/1-octave) or one-third of an octave (1/3-octave), and center frequencies and filter characteristics are standardized in IEC 61260 (1995) for Europe and in ANSI S1.11 (2004) for the United States. The two standards are compatible, so that center frequencies and filter shapes are identical. The standardized center frequencies for whole-octave and third-octave filters are tabulated in Table 3.2. Octave and third-octave filters are two examples of a more general set of 1/n-octave bands, where the integer n is usually (but not necessarily) 1, 3, 6, 12, or 24. The higher fractional octave bands were used more in the past, when narrowband analysis using FFT was not as readily available as it is today; there is now little use for those filter types, but they are still sometimes encountered. To define the center frequencies and bandwidths of fractional octave bands, the standard specifies an octave ratio, G, defined by
G = 10^(3/10) ≈ 1.9953,   (3.13)
which is then used to calculate the exact center frequencies of each filter by the formulas
fc = 1000 · G^((x−30)/n) Hz,   n odd
fc = 1000 · G^((2x−59)/(2n)) Hz,   n even,   (3.14)
where x is a positive or negative integer corresponding to the band number.
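As a small sketch (my addition, not from the book), the exact 1/3-octave center frequencies for the bands in Table 3.2 below can be computed directly from Equation (3.14); note that the exact values differ slightly from the rounded, nominal frequencies in the table:
G=10^0.3;                % Octave ratio, Equation (3.13)
n=3;                     % 1/3-octave bands
xb=15:30;                % Band numbers, as in Table 3.2
fc=1000*G.^((xb-30)/n);  % Exact center frequencies (n odd), Equation (3.14)
disp([xb' fc'])          % Compare with the nominal values in Table 3.2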

Table 3.2 Center frequencies for standardized 1/1- and 1/3-octave filters as specified in IEC 61260 (1995) and ANSI S1.11 (2004), for some typical frequencies in the audio band. The frequency series continues with the same numbers multiplied or divided by 10, 100, and so on, for frequencies higher and lower, respectively, than those specified in the table.

Band   Center freq. [Hz]           Band   Center freq. [Hz]
       1/1-octave   1/3-octave            1/1-octave   1/3-octave
15     31.5         31.5           23                  200
16                  40             24     250          250
17                  50             25                  315
18     63           63             26                  400
19                  80             27     500          500
20                  100            28                  630
21     125          125            29                  800
22                  160            30     1000         1000

Source: Adapted from ANSI S1.11 2004; IEC 61260 1995.


Figure 3.9 Octave filter limits for an example 1/3-octave filter with center frequency 1000 Hz, according to the ANSI and IEC standards. See text for details.

The filters for 1/n-octave bands are specified as third-order Butterworth filters with the center frequencies from Equation (3.14) and with lower and upper band edge frequencies, fl and fh, according to
fl = fc G^(−1/(2n))
fh = fc G^(1/(2n)).   (3.15)
Using Equation (3.15), it is easy to verify that fc = √(fl fh), and that
fh − fl = fc [G^(1/(2n)) − G^(−1/(2n))],   (3.16)
which, for example, for a third-octave band is approximately 0.23fc, or 23%. The filters are allowed to vary within certain limits, as plotted in Figure 3.9 for an example of a 1/3-octave band with a center frequency of 1000 Hz. It is easy to make a MATLAB/Octave function that calculates the upper and lower bounds for a particular 1/n-octave filter so that one can verify that the filter is within the specified bounds. To define a digital filter which is in accordance with the standards mentioned above, one can use the MATLAB/Octave butter command. It is, however, essential to check the filter shape obtained by this command, as for certain ratios of center frequency to sampling frequency the limits specified by the standard will not be met. This is easily done by the freqz command mentioned above, as we will show in Example 3.3.2.
Example 3.3.2 In this example, we will show how to create a fractional octave filter corresponding to a 1/3-octave filter for band 30 (center frequency 1000 Hz) in MATLAB/Octave. By changing the parameter n in the code below, any 1/n-octave filter can be produced.


G=10^0.3;            % Octave ratio
n=3;                 % 1/3-octave
fc=1000;             % 1/3-octave center freq.
flo = fc/G^(1/2/n);  % low cutoff freq. (definition)
fhi = fc*G^(1/2/n);  % high cutoff freq. (definition)
N=8*1024;            % Number of frequency lines for H
fs=10000;            % Sampling frequency
[b,a] = butter(3,[flo/(fs/2) fhi/(fs/2)]);
[H,f]=freqz(b,a,N,fs);   % Filter frequency response
% Perform the acoustic filtering on signal in vector x
y=filter(b,a,x);
End of example.
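As a small continuation of Example 3.3.2 (my addition, not part of the book's example; it assumes the variables G, n, fc, flo, fhi, H, and f from the example are still in the workspace), the obtained −3 dB band edges can be compared with the nominal band edges and the bandwidth from Equation (3.16):
idx=find(abs(H)>=1/sqrt(2));       % Frequency bins inside the -3 dB passband
f3lo=f(idx(1));                    % Lower -3 dB frequency of the designed filter
f3hi=f(idx(end));                  % Upper -3 dB frequency of the designed filter
BW=fc*(G^(1/(2*n))-G^(-1/(2*n)));  % Nominal bandwidth, Equation (3.16)
fprintf('-3 dB edges: %.1f and %.1f Hz (nominal %.1f and %.1f Hz)\n',f3lo,f3hi,flo,fhi)
fprintf('Nominal bandwidth: %.1f Hz, or %.1f%% of fc\n',BW,100*BW/fc)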

3.3.5 Analog RMS Integration

In acoustic analysis, it is very common that the time signal is analyzed with a running RMS value with a particular integration time, corresponding to the output of an analog integrator as was used in the past. This is what is done in a sound level meter (SLM), where the RMS value of the integrated data is simply converted to a sound pressure level (SPL) in dB, see Appendix C. Such a filter is easily designed as a first-order Butterworth lowpass filter, noting that the time constant (integration time), 𝜏, of an analog filter obeys the relationship
𝜏 = 1/𝜔c = 1/(2𝜋fc),   (3.17)
where 𝜔c is the filter cutoff frequency in rad/s and fc is the cutoff frequency in Hz. In order to get the right scaling of the filter, we need to refer back to Equation (3.7) and compare it with the true integrator frequency response H(j𝜔) = 1/(j𝜔). We realize that we need to divide the filter characteristic of the first-order lowpass filter by 𝜔c to get an asymptotic behavior of the filter of 1/(j𝜔). The principle is most easily illustrated by an example.
Example 3.3.3 In this example, we illustrate how to perform an analog-style integration with a so-called fast time constant, as specified in many acoustical applications. This means that the time constant should be 𝜏 = 1/8 s (another common time constant in acoustics, the slow time constant, is 1 s). We assume the data in vector x are scaled in Pa, as coming from a microphone. We recall from Section 3.3.1 that the first-order lowpass filter is identical to a first-order Butterworth filter. For sound, the SPL, Lp, is the “instantaneous” integrated RMS value with a particular time constant, in dB relative to 20 𝜇Pa. The equivalent SPL, Leq, is the total RMS level in dB with the same reference. The following MATLAB/Octave code designs an integration filter and produces Lp and Leq after filtering the signal through the integrator filter.
fs=44100;                   % Our sampling frequency
tau=1/8;                    % Time constant in s
fc=1/(2*pi*tau);            % Cutoff freq. in Hz
[b,a]=butter(1,fc/(fs/2));  % Integrator filter
y=filter(b,a,x.^2);         % Filter squared signal
y=y/(2*pi*fc);              % Scaled, integrated square
y=sqrt(y);                  % Root mean square complete


Lp=20*log10(y/2e-5);        % Sound pressure level in dB
Leq=20*log10(std(x)/2e-5);  % Leq level in dB

End of example.

3.3.6 Frequency Weighting Filters

Many noise and vibration applications include some form of frequency weighting, such as acoustic A or C weighting to account for the frequency dependence of the human ear, as specified in IEC 61672-1 (2005), or various frequency weightings for human comfort analysis to account for the sensory perception of vibrations, as specified in ISO 8041 (2005), ISO 2631-1 (1997), and ISO 2631-5 (2004). Such weightings can often be applied more efficiently in the frequency domain, as we will discuss in Section 10.7.8. In some cases, however, it is necessary to apply the weightings in the time domain, for example to produce values equivalent to those of an SLM. For space reasons, we will limit the discussion here to the acoustic A and C filters, these being the most common filters necessary for many acoustic applications. A good source for implementing weighting filters for vibration effects on humans is Rimell and Mansfield (2007), with background information in Mansfield (2005).
The filter characteristics for the A and C weighting filters are currently defined by the standard IEC 61672-1 (2005) (previously IEC 651). The C filter is defined in the Laplace plane by two poles located at 20.6 Hz and two poles located at 12,200 Hz, i.e. at s = −2𝜋 · 20.6 and s = −2𝜋 · 12200, since s is defined in rad/s. Thus, if we denote the two pole locations by 𝜔1 and 𝜔2, respectively, the transfer function is
HC(s) = Cc s^2 / [(s + 𝜔1)^2 (s + 𝜔2)^2],   (3.18)
where Cc is a scaling constant chosen to provide a frequency response of 1 at 1000 Hz, as specified by the standard. The s^2 factor in the numerator is necessary to yield the correct shape, as we will see shortly. To produce a digital filter that approximates the transfer function in Equation (3.18), it is recommended to use the bilinear transform mentioned in Section 3.3.2. However, the bilinear transform does not behave nicely when the poles are spread as far apart as they are in this filter. Thus, it is more appropriate to define the C-weighting filter as two filters in cascade: one highpass filter
HHPc(s) = C1 s^2 / (s + 𝜔1)^2,   (3.19)
and one lowpass filter
HLPc(s) = C2 / (s + 𝜔2)^2,   (3.20)
which are transformed separately into digital filters, which are in turn combined using convolution between the coefficients (or you could run filter once for each filter). The A weighting filter is defined by adding two poles, at 107.7 and 737.9 Hz, to the C weighting filter. We illustrate the process with an example. It should be noted that, to ensure that the digital filters perform as a close approximation to the analog filters, 10 times oversampling should preferably be used.


Example 3.3.4 Compute filter coefficients B and A for a digital filter for C weighting when the sampling frequency of the data to be filtered is 44,100 Hz, using MATLAB/Octave. We start by defining the denominator and numerator polynomials for the separate HP and LP filters in s. The standard defines the frequency response at 1000 Hz to be 1 (0 dB), so we include scaling constants to achieve that. Then we use the bilinear transform to compute the digital filter coefficients for each filter, and finally convolve the A and B coefficients separately into coefficients for the total (cascade-coupled) filter. The entire (MATLAB, see below) code becomes
fs=44100;                        % Sampling frequency (44,100 Hz as stated above)
w1=2*pi*20.6;                    % First (double) pole
D1=conv([1 w1],[1 w1]);          % Denominator of HP filter
jw1k=j*2*pi*1000;                % Value at 1000 Hz
C1=abs(jw1k+w1)^2/abs(jw1k)^2;   % To make H(1000)=1
w2=2*pi*12200;                   % Second (double) pole
D2=conv([1 w2],[1 w2]);          % Denominator of LP filter
C2=abs(jw1k+w2)^2;               % Const. to make H(1000)=1
[B1,A1]=bilinear([C1 0 0],D1,fs); % Digital HP filter
[B2,A2]=bilinear(C2,D2,fs);      % Digital LP filter
B=conv(B1,B2);                   % Total filter, B coeff.
A=conv(A1,A2);                   % Total filter, A coeff.
With this example, it should be easy to add the two poles to produce a similar filter for A weighting. An important note is required here: the bilinear command in Octave should have 1/fs as the third parameter, as its syntax differs from the MATLAB syntax. End of example.
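As a hedged sketch of that A-weighting extension (my addition, not code from the book): the two extra poles given above can be placed in an additional middle section, with two extra zeros at DC and its own scaling constant C3 so that the section is 1 at 1000 Hz. The section layout, the constant C3, and the variable names are my assumptions, and the MATLAB syntax of bilinear is used (in Octave, pass 1/fs as the third argument, as noted in the example).
fs=44100;                          % Sampling frequency, as in Example 3.3.4
w1=2*pi*20.6;                      % Low-frequency double pole (same as for C weighting)
w2=2*pi*12200;                     % High-frequency double pole (same as for C weighting)
w3=2*pi*107.7;                     % First added pole for A weighting
w4=2*pi*737.9;                     % Second added pole for A weighting
jw1k=j*2*pi*1000;                  % s evaluated at 1000 Hz
D1=conv([1 w1],[1 w1]);            % Highpass section s^2/(s+w1)^2
C1=abs(jw1k+w1)^2/abs(jw1k)^2;     % Scale HP section to 1 at 1000 Hz
D3=conv([1 w3],[1 w4]);            % Middle section s^2/((s+w3)(s+w4)) (my assumption)
C3=abs(jw1k+w3)*abs(jw1k+w4)/abs(jw1k)^2;  % Scale middle section to 1 at 1000 Hz
D2=conv([1 w2],[1 w2]);            % Lowpass section 1/(s+w2)^2
C2=abs(jw1k+w2)^2;                 % Scale LP section to 1 at 1000 Hz
[B1,A1]=bilinear([C1 0 0],D1,fs);  % Digital HP section (MATLAB syntax)
[B3,A3]=bilinear([C3 0 0],D3,fs);  % Digital middle section
[B2,A2]=bilinear(C2,D2,fs);        % Digital LP section
B=conv(conv(B1,B3),B2);            % Cascade: convolve numerator coefficients
A=conv(conv(A1,A3),A2);            % Cascade: convolve denominator coefficients
The resulting B and A coefficients should be checked with freqz against the tabulated weighting values in the standard before being used.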

3.4 Time Series Analysis

In this section, we will discuss some fundamental time series analysis procedures. Integration and differentiation of vibration signals are very common and natural operations because of the close relationship between vibration displacement, velocity, and acceleration. It is thus somewhat surprising that so little has been published on best practices for integrating and differentiating vibration signals. The presentation here will be rather practical, since we do not have the tools (the z-transform) to analyze digital filters in detail. Still, it will be useful, and hopefully illuminate some “best practice” procedures that are not easily found in the literature by the nonspecialist.

3.4.1 Min- and Max-Analysis

In cases where minimum and maximum values in the time signal are to be estimated in an analysis procedure, it is important to consider several factors which we will present here. Such time domain analysis is perhaps most commonly used in fatigue analysis, where different forms of reduction processes are used for cycle counting, such as range pair and rainflow reductions. Transient analysis, for example in pyroshock or drop table applications, is also a common example where time domain analysis is used, and shock response spectrum is another related example.


First of all, peak values in data are heavily influenced by the bandwidth of the analysis, as is evident from Figure 3.4. Thus, it is important to select a high enough sampling frequency when recording the signal. If no a priori information on the bandwidth of the data is available, this has to be established by increasing the sampling rate until the peaks do not increase anymore. For min- and max-analysis, it is also essential to ensure that data are sampled correctly, i.e. with linear-phase filters. This will be discussed in more detail in Section 11.2.2. It should be remembered here, however, that some current measurement systems for noise and vibration analysis do not have linear-phase antialiasing filters, and care must therefore be taken when this type of time domain analysis is to be performed. With the normally low oversampling ratio of 2.56 or slightly less, as used in FFT analyzers, min and max values will be seriously in error. There is, to my knowledge, no general formula for setting an acceptable oversampling ratio in peak analysis. For narrowband data, such as from shock response analysis (see Section 18.1), however, it is recommended to use 10 times oversampling, which yields less than 10% error in min and max estimates of vibration levels from single degree-of-freedom (SDOF) systems (Wise, 1983). From the discussion on interpolation in Section 3.2.3, it is clear that 10 times oversampling can be obtained by resampling the data immediately prior to the time domain analysis. If storage space is limited, data with an oversampling ratio of approximately 2.56 (whatever the hardware manufacturer has implemented) can be stored without losing any significant quality in the data. This oversampling ratio is sufficient to allow for accurate resampling when needed, as we showed in Example 3.2.1.
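A minimal sketch of that resampling step (my addition; x and fs are assumed to hold a signal stored with approximately 2.56 times oversampling):
xr=resample(x,4,1);   % Approx. 2.56 times oversampling becomes approx. 10 times
xmax=max(xr);         % Max value estimate from the upsampled signal
xmin=min(xr);         % Min value estimate from the upsampled signal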

3.4.2 Time Data Integration

As accelerometers are the most common sensors for vibration measurements, it is common to want to integrate such signals into vibration velocity or displacement. This task can be performed in the time domain by a digital filter, or in the frequency domain. In this section, we describe the former, whereas frequency domain integration will be discussed in Section 9.3.14. It should be mentioned here that although frequency domain integration is numerically superior for long signals, the problems with low-frequency content discussed in the present section apply also to frequency domain integration.
Integration can seem like a simple task, but as we will show in this section, it is not as easy as one might imagine to integrate time signals accurately. One well-known problem of numerical integration is the problem of the integration constant or, more practically formulated, the problem of direct current (DC) or low-frequency variations in the signal. DC errors are very common in measurement signals due to offsets in the data acquisition electronics. In many cases, this can easily be corrected for by simply removing the mean from the entire time signal prior to integration. A more serious issue is if there is low-frequency drift in the signal, as this can often cause a low-frequency variation many times larger than the actual vibration signal, as is illustrated in Figure 3.10. Low-frequency drift is also very common in microphone signals and sometimes occurs in accelerometer signals due to temperature variations.


Figure 3.10 Illustration of the problem of integrating an acceleration signal. In (a), a time record of an acceleration signal from a truck driving on a road is plotted. In (b), the signal has been integrated by a precision filter presented later in this section (an accurate IIR filter). The integration reveals that there are apparently some low-frequency variations in the signal in (a) that overshadow the vibrations when integrated. These are likely low-frequency accelerations arising from road surface variations. In (c), the original signal has been highpass filtered with a cutoff frequency of 5 Hz prior to integration, which produces the expected vibration velocity.

If integration of a time signal results in these problems, the only solution is often to include a highpass filter in the integration process to eliminate frequencies below a certain frequency.
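As a sketch of such a highpass step (my addition; the 5 Hz cutoff matches Figure 3.10(c), while the filter order and the use of filtfilt are my assumptions, and x and fs are assumed to exist):
fcut=5;                              % Highpass cutoff in Hz, as in Figure 3.10(c)
[b,a]=butter(2,fcut/(fs/2),'high');  % Second-order Butterworth highpass (my choice of order)
xhp=filtfilt(b,a,x);                 % Zero-phase highpass filtering prior to integration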


Another problem often encountered is that of integrating to absolute displacement. For the same reasons already mentioned, it is in most cases not possible to integrate acceleration signals twice and produce absolute displacement. Furthermore, in order to obtain the absolute displacement, the acceleration signal must have a frequency response down to DC (static) acceleration, which excludes the most common piezoelectric sensors (see Chapter 7). This does not mean that it is always impossible to obtain absolute position from acceleration measurements, but it is a much more difficult problem than is perhaps first thought.
We will now look at how time domain integration can be performed and what the errors are. As mentioned above, quite surprisingly, the problem is not particularly commonly studied in the signal processing literature or in the noise and vibration analysis literature. For that reason, we will cover different integrators in some depth to show what performance commonly used algorithms have in practice. Pintelon and Schoukens (1990) include a short overview, and they also conclude that the known methods from the field of numerical analysis (trapezoidal rule, Newton–Raphson, etc.) perform poorly compared with what can be done by modern signal processing techniques. The presentation here will end with a very useful high-performance integrator presented in Pintelon and Schoukens (1990), which today seems to be one of the best choices for integration in the time domain.
Integration in the time domain corresponds to division by the factor j𝜔 in the frequency domain. Ideally, therefore, we would like to filter the data with a filter with the “true” frequency response, Ht(𝜔),
Ht(𝜔) = 1/(j𝜔),   (3.21)

where the subindex t stands for “true.” If instead we filter the data with a filter with frequency response Ĥ, then, following Pintelon and Schoukens (1990), we define a frequency-dependent error, 𝛿, as
𝛿(𝜔) = |Ht − Ĥ| / |Ht|,   (3.22)

where we drop the frequency variable 𝜔 on the right-hand side for simplicity. We will thus evaluate different integration filters by comparing their error 𝛿.
A very intuitive, but not very good, filter for integration is obtained by approximating the integral of a time signal x(t) by
∫_0^{tk} x(t) dt ≈ Δt ∑_{n=0}^{k} x(n),   (3.23)
which is easily converted to a recursive difference equation:
yn = yn−1 + Δt · x(n),   (3.24)
which corresponds to a filter in MATLAB/Octave with A = [1, −1] and B = Δt. An alternative way of computing it is to use the command cumsum in MATLAB/Octave. It can be shown that the frequency response of this simple integrator is
H1 = Δt / (1 − e^(−j2𝜋r)),   (3.25)
where we denote the normalized frequency by r = f/fs. The frequency response in Equation (3.25) is a very poor approximation of true integration, and not worth plotting together with the other integrators; instead, we leave this for Problem 3.3. The relative error of this integrator, which we denote 𝛿1 for later comparison, is plotted in Figure 3.12, together with the errors for the integrators we are going to present later in this section. As is seen in the plot, this simple integrator is a rather poor choice, with an error at r = 0.1, which corresponds to 10 times oversampling, of a mere −10 dB, corresponding to approximately 30% error. This simple method of integration should apparently be avoided.



Figure 3.11 Filter gain and phase characteristics of some common integrator filters. 1/(j𝜔) is the ideal integrator, H2 is the trapezoidal rule integrator, H3 is the first-order lowpass filter integrator with a cutoff frequency fc = fs/1000, and H4 is the combined filter characteristic of a 32nd-order FIR highpass filter with a cutoff frequency fc = fs/1000 and the eighth-order IIR filter defined in Pintelon and Schoukens (1990). Source: Adapted from Pintelon and Schoukens (1990).

A next step toward more accurate integration is to use the bilinear transform on the ideal transfer function H(s) = 1/s. This results in an integrator with the difference equation
yn = yn−1 + (Δt/2) xn + (Δt/2) xn−1,   (3.26)

which is easily implemented as a digital filter with A = [1, −1] and B = (1/(2fs)) · [1, 1]. This integrator is well known in numerical analysis and is referred to as the trapezoidal rule. It has a frequency response, H2, as shown in Figure 3.11, with a phase which is identical to −90° except at DC. The error, as shown in the comparison plot in Figure 3.12, is less than −30 dB, corresponding to approximately 3% error, for oversampling ratios above 10. This may seem like a small error, but compared with the dynamic range of most vibration measurement systems (in excess of 100 dB), it is a poor integrator. A third integrator, common in vibration applications, is obtained by using a first-order lowpass filter with a low cutoff frequency, as we discussed in Section 3.3.5. This filter has the advantage of including a cutoff frequency (at some low frequency) below which no integration is done. However, as seen in Figure 3.11, where the frequency response, H3, of this type of integrator with a cutoff frequency of fs/1000 is plotted, the phase of this integrator still performs rather poorly even at frequencies several hundred times the cutoff frequency. The relative error, 𝛿3, of this integrator is also seen to be poorer than that of the trapezoidal rule integrator at most relative frequencies.
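Going back to the trapezoidal rule, a minimal sketch (my addition) of applying it as a digital filter, assuming an acceleration signal in a vector acc sampled at fs:
acc=acc-mean(acc);   % Remove any DC offset prior to integration
B=[1 1]/(2*fs);      % Numerator coefficients, trapezoidal rule
A=[1 -1];            % Denominator coefficients
v=filter(B,A,acc);   % Integrated signal, e.g. acceleration to velocity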


Figure 3.12 Comparison of the relative errors of different integrators. 𝛿1 is the simple integrator using the command cumsum, 𝛿2 is the trapezoidal rule integrator, 𝛿3 is the first-order lowpass filter integrator with a cutoff frequency fc = fs/1000, and 𝛿4 is the combined filter characteristic of a 32nd-order FIR highpass filter with a cutoff frequency fc = fs/1000 and the eighth-order IIR filter defined in Pintelon and Schoukens (1990). Source: Adapted from Pintelon and Schoukens (1990).

For high-performance integration, Pintelon and Schoukens (1990) presented two different IIR filter integrators which were developed using advanced digital filter optimization methods. One of their integrators, referred to as the eighth-order integrator, has an integer sample delay, which is preferable for data analysis as data can be shifted to be synchronous with the data that are not integrated. A slight drawback with this filter is that it is somewhat unstable. However, the performance can easily be improved by adding a linear-phase FIR highpass filter prior to the IIR filter. The frequency response, H4 , and relative error, 𝛿4 , of such an implementation, with the highpass cutoff at fs ∕1000, are plotted in Figures 3.11 and 3.12, respectively. As can be seen in the second figure, the relative error 𝛿4 is below −120 dB for all frequencies up to r = 0.25. This means that 4 times oversampling is sufficient for this type of integration filter, and the dynamic range of the integrator is outstanding compared with the other types of integrators. The computational expense is negligible for most data analysis cases with the performance of modern PCs. As of this writing, integrating a data vector with 1 million samples with the filter implementation including a highpass FIR filter of order 32 and the IIR filters of Pintelon and Schoukens, takes approximately 0.2 s on my computer, compared with approximately 0.03 s for the simple integration using cumsum.

3.4.3 Time Data Differentiation

Like integration, differentiation of time signals is of rather common interest in many noise and vibration applications. The literature on differentiation is extensive compared with what is written about integration. This is probably largely due to the fact that differentiation poses


fewer problems than integration, because there is a zero at DC in the ideal differentiator, which has, of course, the transfer function H(s) = s. This means that FIR filters perform very well for differentiation, whereas they are poor for integration, as a pole at DC cannot be implemented by a FIR filter.
For comparison, and perhaps as a warning example, we will start with the perhaps most intuitive and simple differentiator, given by the digital filter calculating yn, an estimate of the derivative of xn, by
yn = (xn − xn−1) / Δt,   (3.27)
which can be computed using the MATLAB/Octave command diff divided by the time increment, Δt. The relative error, as defined in Section 3.4.2, of this simple differentiator is a mere −10 dB, corresponding to an amplitude error of approximately 30%. We leave the computation of the error to Problem 3.4, but we emphasize that this is not a good differentiator.
The next step could be to use the bilinear transform of the ideal differentiator H(s) = s to calculate digital filter coefficients, as we did for the integrator. This leads to a digital filter with A = [1, 1] and B = 2fs [1, −1]. The relative error for this differentiator, denoted 𝛿1, is plotted in Figure 3.13, where it can be seen that the error is less than approximately −30 dB for frequencies below fs/10. Although this may not seem too bad, it is a relatively large error compared with the approximately 100 dB dynamic range of most sensors and data


Figure 3.13 Comparison of the relative errors of different differentiation filters. The error denoted 𝛿1 is the error of the simple differentiator obtained from the command diff divided by Δt, 𝛿2 is the error of the maximum flat differentiator according to Equation (3.28) with N = 2, 𝛿3 is the error of the same type of differentiator with N = 4, and 𝛿4 is the error of an optimal FIR differentiator designed with the Parks–McClellan/Remez method with N = 40. It can be seen that the last differentiator performs extremely well all the way up to a relative frequency of 0.4, corresponding to the normal oversampling ratio of noise and vibration measurement systems. Using this differentiator, there is no need to upsample the signal.


acquisition equipment in use. For precise differentiation, we need a better differentiator, as we will present below.
Optimal FIR differentiators were investigated early on in a classical paper by Rabiner and Schafer (1974), and those results are now standard text in most textbooks (e.g. Oppenheim et al., 1999; Proakis and Manolakis, 2006). We will, however, look at two later developments of FIR differentiators: the maximum flat FIR differentiators (Carlsson, 1991; Kumar and Roy, 1988; Le Bihan, 1995), and the methods available through the so-called Parks–McClellan optimization method, sometimes referred to as the Remez method (Parks and McClellan, 1972). Both methods, maximum flat filters and Parks–McClellan/Remez optimization, lead to linear-phase FIR filters, ideal for data analysis, and we will compare the two methods next.
The maximum flat FIR differentiators are based on finding the best digital filter that approximates the ideal differentiator, with frequency response H(j𝜔) = j𝜔, under the constraint of having as many derivatives as possible at DC equal to zero. This is the “maximum flat” behavior. To obtain filters with an integer sample delay, we let the length of the filter be 2N + 1. The FIR filter coefficients bk have been shown in Le Bihan (1995) to be computable using a recursive formula, by first computing coefficients cn as follows:
c1 = −N / (N + 1)
cn = (−1) cn−1 (n − 1)(N − n + 1) / [n(N + n)],   (3.28)

for n = 1, 2, …, N. These coefficients cn are then used to build the FIR filter: the coefficients cn are placed in the rightmost part of the FIR filter vector (with MATLAB/Octave definitions), the flipped coefficients with changed sign are placed in the leftmost part, and in between we put a zero. For N = 2, this leads to a filter of length 5 with coefficients bk = −1/12, 8/12, 0, −8/12, 1/12. The relative error, as defined in Section 3.4.2, of this filter, denoted 𝛿2, is plotted in Figure 3.13 together with the error for N = 4, denoted 𝛿3. As shown in the figure, the filters perform reasonably well for oversampling ratios above 10 times, with maximum errors of −46 and −90 dB, respectively, at f = 0.1fs. Already at N = 6, the relative error is below −130 dB for oversampling ratios larger than 10 times.
By using the Parks–McClellan optimization method, very accurate and yet relatively short filters can be designed, for example for differentiation. The essential difference compared with the maximum flat filters is that the Parks–McClellan method leads to filters that behave better close to the Nyquist frequency, at the expense of some extra FIR filter coefficients. In Figure 3.13, the relative error, denoted 𝛿4, is plotted for an optimized FIR differentiation filter with 41 FIR coefficients (including a zero coefficient in the middle). As can be seen in the plot, the error is below −120 dB up to 0.4fs, corresponding to an oversampling ratio of 2.5, which is the standard ratio used in many data acquisition systems. This means that the differentiation is accurate without the need to resample the measurement data.
Example 3.4.1 We will illustrate the procedure of differentiating a signal with a simple example. Using MATLAB/Octave, design a maximum flat FIR differentiating filter using Equation (3.28) with N = 2, and produce a plot of the relative error, 𝛿(f). Equation (3.28) gives us c1 = −2/3 and c2 = (−1) · (−2/3) · (1 · 1)/(2 · 4) = (2/3) · (1/8) = 1/12. We then construct the MATLAB/Octave numerator coefficient vector


B = [−1/12 2/3 0 −2/3 1/12], where we use the vector with c1 and c2 on the right-hand side of the zero, and the flipped coefficients with changed sign on the left-hand side of the zero. For all FIR filters, the denominator vector is a simple scalar, A = 1. The next thing we need to consider is the delay of the filter. All linear-phase FIR filters of length 2N + 1 have a delay of N samples with the notation we use here. (MATLAB uses a different nomenclature, so in MATLAB language a FIR filter of order N is actually N + 1 coefficients long, so watch out for this.) The delay of N samples corresponds to a phase shift of (using r for relative frequency, i.e. r = f/fs)
𝜙 = −2𝜋Nr,   (3.29)
so in order to get the undelayed filter response, we need to compensate the frequency response by multiplying it by e^(j2𝜋Nr). We are now ready to write the code. We use a sampling frequency of 1 Hz, which means that the frequency axis will “automatically” be scaled in relative frequency. The code thus becomes
A=1;
B=[-1 8 0 -8 1]/12;
[H,r]=freqz(B,A,1000,1);      % 1000 freq. values, 1 Hz fs
Hc=H.*exp(j*2*pi*2*r);        % Compensate for delay (N = 2)
Ht=j*2*pi*r;                  % True diff. response
delta=abs((Ht-Hc)./Ht);       % Relative error
figure
semilogx(r,20*log10(delta));  % Error in dB
xlabel('Relative frequency, f/f_s')
This example produces a plot like that for 𝛿2 in Figure 3.13. End of example.
It should be mentioned that the paper by Pintelon and Schoukens (1990) that we mentioned in Section 3.4.2 also includes a high-performing IIR filter for differentiation with integer sample delay. This filter is also an acceptable candidate for differentiation, along with the methods presented in this section. However, the Parks–McClellan optimized FIR filters perform better closer to the Nyquist frequency, and for data analysis, where computational expense is usually not vital, the increased filter size of the FIR filters is not crucial. In real-time applications, it is quite a different matter.
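As a small generalization of Example 3.4.1 (my addition, not from the book), the recursion in Equation (3.28) can be coded for an arbitrary half-length N; the value N = 4 below is just an example:
N=4;                                 % Filter length will be 2N+1 (N=4 is just an example)
c=zeros(1,N);
c(1)=-N/(N+1);                       % First coefficient, Equation (3.28)
for nn=2:N
  c(nn)=-c(nn-1)*(nn-1)*(N-nn+1)/(nn*(N+nn));   % Recursion in Equation (3.28)
end
B=[-fliplr(c) 0 c];                  % Build the FIR coefficients as described in the text
A=1;                                 % FIR filter: the denominator is a scalar
% The derivative of a signal x sampled at fs is then approximated by
% fs*filter(B,A,x), delayed by N samples.
For N = 2, this reproduces the coefficients used in Example 3.4.1.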

3.4.4 FFT-Based Processing

In this chapter, we have presented some digital signal processing concepts and methods to process digital data. It should be mentioned that an alternative method which is common in audio processing, for example, is to use FFT (see Section 9.3) to transform overlapped blocks of time data, then process data in the frequency domain, and finally inverse transform back to time domain. This method, known as overlap-add or overlap-save, depending on details


in the implementation, is described in most standard textbooks on signal processing, for example Proakis and Manolakis (2006) and Oppenheim et al. (1999). Another very accurate method for finite data, i.e. for processing acquired signals, is to use an FFT of the entire signal, apply the signal processing in the frequency domain, and then inverse transform back to the time domain. This method is described in Section 9.3.14 and is often superior to filter-based processing if the signal is reasonably long, as measured vibration signals often are. The method is also part of the framework for signal processing described in Section 10.6.
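As a hedged sketch of the whole-signal FFT idea (my addition; this is not the framework of Section 10.6, just a minimal illustration using integration as the frequency-domain operation, and it assumes a column vector x sampled at fs):
Nx=length(x);                % x is assumed to be a column vector sampled at fs
X=fft(x);                    % FFT of the entire signal
f=(0:Nx-1)'*fs/Nx;           % Frequency axis for all FFT bins
f(f>fs/2)=f(f>fs/2)-fs;      % Map the upper bins to negative frequencies
H=1./(1i*2*pi*f);            % Ideal integration, 1/(j*omega)
H(1)=0;                      % Remove DC (the unknown integration constant)
y=real(ifft(X.*H));          % Integrated signal back in the time domain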

3.5 Chapter Summary

In this chapter, we have presented some important theory and applications of time data processing of measured signals. We started with the sampling theorem, a fundamental theorem for all digital data analysis. We noted that in order to sample a signal, we first need to make sure that the bandwidth of the signal is below fs/2, the Nyquist frequency. Under the assumption that this is fulfilled, the sampled signal will contain all information in the analog signal, which is expressed by the interpolation formula, which states that
x(t) = ∑_{n=−∞}^{∞} x(n) sinc[fs(t − nΔt)],   (3.30)
where
sinc(x) = sin(𝜋x) / (𝜋x).   (3.31)
In other words, if we fulfill the sampling theorem, then any value of the analog signal at any time, t, can be calculated using the samples of the signal. Strictly, this is true only for infinite signals, so in practice we will get some error due to the finite data length, but for practical purposes this works well. It is, for this reason, always good to keep measured signals as long as possible or, more practically said, always record a portion before and after your important event takes place.
Next, we explained some of the nature of resampling signals, and mentioned that the MATLAB/Octave command resample is a favorable way to change the bandwidth of a measured signal. Instead of designing a lowpass filter to reduce the bandwidth of the signal, for the inexperienced user it is safer to resample down to two times the highest frequency you want to keep in the signal, and then upsample back to the original sampling frequency. Actually, even for the experienced user, the method proposed provides better accuracy than most other techniques, so it is still highly recommended. The procedure of downsampling followed by upsampling just mentioned will produce a higher oversampling ratio, which is often necessary for digital filters to perform like their analog counterparts, and is also necessary, for example, for min- and max-analysis.
We summarized some important notes on the behavior of digital filters that are designed to perform like analog filters. Regardless of which transform (bilinear,


impulse invariant, etc.) has been used to produce the digital filter from an analog filter prototype, there are some important points regarding the digital filter behavior:
● The digital filter performs poorer the closer to the Nyquist frequency one gets. Most digital filters that approximate analog filters perform well if the oversampling ratio is kept above a factor of 10.
● Digital filters usually have some time delay, which can be important to understand. For FIR filters with linear-phase characteristics and of length 2N + 1, the time delay is always N samples.
● Phase distortion, or phase linearity, is another important consideration when time domain analysis of, for example, transients is of interest. The easiest way to produce a linear-phase filter is by designing a FIR filter with symmetric filter coefficients.
● Using the MATLAB/Octave command filtfilt, any filter can be used to produce linear-phase characteristics.

We introduced some procedures for designing digital filters that approximate an analog filter and used this to illustrate how fractional octave filters can be designed easily in MATLAB/Octave. We noted particularly that the digital filters behave like the analog filters at low frequencies, but that the performance deteriorates the closer to the Nyquist frequency we get. We also observed that there is a difference between the bilinear commands in MATLAB and Octave. Up to how high a frequency a digital filter performs well depends on the design; examples were shown of differentiation filters that perform well up to 0.4 times the sampling frequency, whereas many other filters perform well only up to approximately 0.1 times the sampling frequency.
In the last section of the chapter, we showed some examples of good filters for integration and differentiation of measured signals. It was shown that the simplest and most intuitive filters for both integration and differentiation, which are very commonly used, should be avoided in favor of only slightly more computationally costly filters. A few examples of the application of those filters can be found in the problem section that follows.

3.6 Problems

Many of the problems following are supported by the accompanying ABRAVIBE toolbox for MATLAB/Octave and further examples which can be downloaded with the toolbox. If you have not already done so, please read Section 1.6. ABRAVIBE is completely free and can be downloaded from www.abravibe.com, together with example files and other material complementing this book, including a solutions manual for all book problems.

Problem 3.1 To illustrate the effect of aliasing, produce a time axis corresponding to a sampling frequency of 2 kHz, 0.05 s long, using MATLAB/Octave. Then calculate a cosine with frequency 800 Hz (0.8fs/2) and a cosine of 1200 Hz (1.2fs/2). Verify that the samples are identical (within the computation accuracy) for both signals.


Problem 3.2 Assume that we have a time signal containing two frequencies, that is

x(t) = X_1 e^{j2\pi f_1 t} + X_2 e^{j2\pi f_2 t},   (3.32)

where X1 and X2 are complex constants and thus include the initial phase relationships of the two frequency components. Show that passing this signal through a filter with frequency response H(j𝜔) with linear phase, that is ∠H = A𝜔, will result in the same relative phase relationship between the two phases, ∠X1 and ∠X2, as they had before the filter.

Problem 3.3 Calculate and plot the frequency response of a simple integrator according to Equation (3.25), using the filter coefficients given in the text near the equation. Overlay with the true frequency response of integration, H = 1/(j𝜔).

Problem 3.4 Calculate the frequency response of a differentiator using the MATLAB/Octave command diff, by using the filter coefficients mentioned in Section 3.4.3. Then calculate the true differentiator frequency response and use both frequency response functions (FRFs) to compute and plot the relative error similar to Figure 3.13. Use a relative frequency axis from 1e−3 to 0.5.

Problem 3.5 Design a maximum flat FIR differentiator for N = 6 using Equation (3.28) and plot the relative error similar to Example 3.4.1. Make sure the error at r = 0.1 is approximately −130 dB as the text says.

Problem 3.6 Create a sine signal in a vector x, with a frequency of 230 Hz, using an oversampling ratio of 10 times, and 10 s long in MATLAB/Octave, and create the true derivative of the same signal in another vector xp (x prime). Use the accompanying toolbox command timediff to calculate the derivative of the signal in x, in a vector y, and compare it with the vector xp. How much do they differ? Explain (by reading the text inside timediff) why the vector y is shorter than xp. Try the different types that timediff has as options. Do they make a difference? Which one performs best?

Problem 3.7 Repeat Problem 3.6 but instead using the command timeint and compare the result with the true integral of the sine. Answer the same questions as in Problem 3.6, relating to integration instead of differentiation.



4 Statistics and Random Processes

Noise and vibrations are often produced by sources with random behavior, for example vibrations resulting from a road surface interacting with the tires of a car, or vibrations caused by turbulence around an airplane wing. To understand random vibrations and their analysis, it is important to understand applied statistics. In this chapter, we will review some fundamental parts of probability theory, especially the theory of stochastic processes, or random functions, and the way these methods are used in noise and vibration analysis. Statistical properties are used in many ways in this field. In data quality assessment, covered in Section 4.4, many statistical properties can be used to assess the quality of a set of acquired data. Also in the classification of signals, for example in order to assess their damaging effect, statistical properties are important tools. The treatment here will be practical and focused on statistical analysis methods commonly applied to measured signals. The theoretical background of fundamental statistical theory is assumed to be familiar to the reader and is therefore only briefly recapitulated here. For a deeper understanding, you are referred to standard textbooks, either mathematical, for example Papoulis (2002), or engineering oriented, for example Bendat and Piersol (2010), Newland (2005), and Wirsching et al. (1995).

4.1 Introduction to the Use of Statistics

Before going on we should discuss two different forms of random signals that we are interested in. There are, as mentioned in the introduction above, dynamic forces that are naturally behaving as random signals. In this case, we are interested in describing the random signal (process) as accurately as possible. This is usually done by describing the signal by its spectral characteristics (spectral density, see Section 8.3.1), or correlation function (see Section 4.2.12), and its amplitude characteristics, for example the probability density. There is, however, also another form of random signals, namely measurement noise coming from various sources, i.e. unwanted "disturbance" added to our measurement signals. First of all, there is electrical noise inherent in all electronics, measurement sensors as well as data recording hardware. Then there are random contributions to essentially deterministic processes, which can be thought of as random signals. An example of the latter is the vibration from a reciprocating engine, which is essentially a periodic signal. But, due to the inexact amount of fuel injected in the cylinder during each combustion, and the uncertainty in the exact time of each combustion, in reality the periodic signal will not be perfectly periodic. It can then be thought of as a periodic signal plus a random contribution, where the random contribution is essentially an "error contribution" to the periodic signal. By modeling the total measurement signal in this way, we often apply some averaging procedure which has the goal of eliminating, or minimizing, the random part of the signal. In Section 2.4, the sound of gunshots was taken as an example where, ideally, each gunshot should sound the same, which means the measured sound pressure should be a deterministic signal. But due to the uncertainty in the exact amount of gunpowder in each shot, there will of course be a random contribution to this deterministic sound.

4.1.1 Ensemble and Time Averages

There is an abstraction in the mathematical way of describing statistics and random processes that I frequently find my students seem to have missed in the math (or probability) class. I therefore want to dwell a little on the concept of realizations and time-dependent statistical measures. A random process as described in the theory of stochastic processes is an abstract entity which we cannot study practically. An example of such a process is the electrical signal in a certain type of electrical resistor coming from thermal noise. If we call this process x(t), this means it is some theoretical signal which has certain properties. Let us now look at the mean, or average, or expected value of this process, which we denote 𝜇x(t). What you should observe particularly is that this average is a time-dependent variable. How does that come to be? It comes from the concept of ensemble averaging from the theory of statistics. This average is namely considered as the average over infinitely many such resistors, on each of which we measure the time-dependent voltage coming from the resistor. Each such voltage, which we can call xi(t), is a realization of the process x(t) (you can compare this with a stochastic variable, for example the throw of a die, where each throw of the die is a realization). If this was the only type of average available to us, we would be in trouble, as the cost of measurements would be very high. Instead of measuring the noise from one prototype car (or a few) as we do today, we would have to measure very many (20–100, perhaps) for everything we would want to measure. But, luckily, there is the concept of ergodicity. But first we need to understand stationarity.

4.1.2 Stationarity and Ergodicity

A random process is defined as (strongly or strictly) stationary if all statistical properties of the process are independent of time. This means that, for example, the mean value

E[x(t_1)] = E[x(t_2)],   (4.1)

for any arbitrary times t1 and t2. The symbol E used here is the expected value operator, which will be introduced in Section 4.2.1, but I assume you are already slightly familiar with the concepts here. It should be noted, however, that in terms of an experimental situation, to estimate this time-dependent mean value, we need to measure many so-called ensembles (or realizations) of the process and calculate the mean at each time instant t through the


ensembles. That means that, if we would measure, for example the road noise in a car, we would have to find many cars and measure the noise in all those cars at the same time (which would cause a problem, because they should also be in the same location at every instant in time, have the same speed, etc., which would lead to a very crowded test track). Obviously, this is not a very practical measure. Therefore, we need to consider ergodic random processes. A random process is referred to as ergodic, if it is stationary, and if ensemble properties and time properties are equal, that is, if the mean can be calculated from a single ensemble (realization) as:

\mu_x = E[x(t)] = \lim_{T \to \infty} \frac{1}{2T} \int_{-T}^{T} x(t)\,dt,   (4.2)

and this mean is equal to the mean in Equation (4.1). In reality, a measured signal is usually ergodic if it is stationary, so it is enough to analyze one realization of it. However, you should note that this does not imply that most measured signals are stationary! Indeed, stationary conditions are often difficult to obtain. This will be addressed in Section 4.3.3 below. Taking the example of road noise in a car, stationarity, and in this case ergodicity, implies that the noise in a sense “sounds constant,” i.e., we will only have stationary sound if we run the car with constant speed, on the same type of asphalt, etc. If the nature of the road changes (as it often does in practice, more often than we would want it to from a measurement point-of-view), the noise will not be stationary.
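To make the distinction concrete, the following minimal MATLAB/Octave sketch (not from the book; the process is simply synthetic Gaussian noise with made-up sizes) contrasts an ensemble average, taken across many realizations at each time instant, with a time average taken along a single realization, in the spirit of Equations (4.1) and (4.2).

% Minimal sketch: ensemble average versus time average for a stationary,
% ergodic process. Each row of X is one realization x_i(t).
nReal = 500;                       % number of realizations (the "many cars")
nSamp = 2000;                      % number of time samples per realization
X     = 1 + randn(nReal, nSamp);   % synthetic stationary process with mean 1

ensembleMean = mean(X, 1);         % one mean value per time instant (ensemble average)
timeMean     = mean(X(1, :));      % time average over a single realization

% For an ergodic process the two agree within the random error, which
% shrinks as nReal and nSamp grow:
max(abs(ensembleMean - timeMean))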

4.2 Random Theory

When we have noise and vibration signals of random nature, the theory of stochastic processes is used to describe the signals. Some of the most basic concepts of statistics and stochastic process theory will therefore be briefly recapitulated here with some comments on the meaning of each measure. In practice, instead of the theoretical term stochastic process, we often use one of the synonyms random signal or random process. In this section we will also, unless otherwise noted, assume that the random signal is stationary and ergodic.

4.2.1 Expected Value

We will start our discussion with the definition of expected value of a random variable x, because it is needed to understand the errors introduced next. The expected value can be interpreted as the mean, or average, value of the function, when an infinite number of values are used. Thus, the expected value of a random variable (process) x is obtained by

E[x(n)] = \lim_{N \to \infty} \frac{1}{2N+1} \sum_{n=-N}^{N} x(n).   (4.3)

The expected value will be further discussed in Section 4.2.7.

4.2.2 Errors in Estimates

When calculating statistical estimates, we differentiate between two types of errors: the bias error, and the random error. We assume that we shall estimate (i.e. measure and calculate)


a parameter, 𝜙, which can be for example the standard deviation 𝜎x(t). In the theory of statistical variables, the "hat" symbol, ^, is usually used for a variable estimate, so we denote our estimate 𝜙̂ (in practice, for example 𝜎̂x). We now define the bias error, b_{𝜙̂}, as:

b_{\hat{\phi}} = E[\hat{\phi}] - \phi,   (4.4)

i.e. the bias error is the difference between the expected value of our estimate and the "true" value 𝜙. In practice, we generally divide this error by the true value to obtain the normalized bias error, 𝜀b, as:

\varepsilon_b = \frac{b_{\hat{\phi}}}{\phi}.   (4.5)

If we have a nonzero bias error, there will thus be a difference between the estimate 𝜙̂ and the true value 𝜙, even if we make a very large number of averages to estimate 𝜙̂. This is of course unwanted, and such an estimate is called a biased estimate. The opposite, an estimate that has no bias, is consequently called an unbiased estimate, and is usually preferred. In many cases, however, we cannot find an unbiased estimator, as we will see is the case in spectral estimation, e.g., in Chapter 10. The random error, on the other hand, is defined as the standard deviation, 𝜎_{𝜙̂}, of the difference between our estimated variable and the true variable, that is

\sigma_{\hat{\phi}} = \lim_{N \to \infty} \sqrt{\frac{1}{N-1} \sum_{k=1}^{N} \left\{ \hat{\phi}_k - E\left[\hat{\phi}\right] \right\}^2},   (4.6)

and, as with the bias error, we define the normalized random error, 𝜀r, as:

\varepsilon_r = \frac{\sigma_{\hat{\phi}}}{\phi}.   (4.7)

A good estimator should have a random error that approaches zero as the number of averages increases. Such an estimator is referred to as a consistent estimator. In most cases, we do not know the true parameter 𝜙, and therefore we generally have to use our estimated parameter, 𝜙̂, in its place if we wish to calculate the normalized bias and random errors. When we have estimated the normalized random error, 𝜀r, we can use it, for small errors (𝜀r < 0.1), to calculate a 95% confidence interval

\hat{\phi}(1 - 2\varepsilon_r) \leq \phi \leq \hat{\phi}(1 + 2\varepsilon_r).   (4.8)

We assume confidence intervals are known from a basic course in statistics. It follows quite straightforwardly from the nature of the Gaussian distribution and the central limit theorem, which we will discuss in Section 4.2.13.
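The following MATLAB/Octave sketch (illustration only; the parameter, record length, and number of repetitions are made up) shows how the bias and random errors of Equations (4.4) through (4.7) can be estimated empirically, here for the standard deviation of a Gaussian signal, and how the 95% confidence interval of Equation (4.8) is formed.

% Minimal sketch: empirical bias and random error of an estimator, and the
% 95% confidence interval of Equation (4.8).
phiTrue = 2;                          % "true" standard deviation of the signal
nRec    = 10000;                      % number of repeated, independent estimates
N       = 50;                         % samples per record (deliberately short)
phiHat  = zeros(nRec, 1);
for k = 1:nRec
    x = phiTrue*randn(N, 1);
    phiHat(k) = std(x);               % one estimate per record
end

b  = mean(phiHat) - phiTrue;          % bias error, Equation (4.4)
eb = b/phiTrue;                       % normalized bias error, Equation (4.5)
er = std(phiHat)/phiTrue;             % normalized random error, Equations (4.6)-(4.7)

ci = phiHat(1)*[1 - 2*er, 1 + 2*er]   % 95% confidence interval, Equation (4.8)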

4.2.3 Probability Distribution

The probability distribution, P(x), of a random (stochastic) signal x(t) is defined as:

P(x) = \mathrm{Prob}[x(t) \leq x],   (4.9)


where “Prob” denotes probability. Typical distribution functions for random variables are, for example the well-known Gaussian distribution (see Section 4.2.13) and the theoretically popular, but in real life not so common, uniform distribution.

4.2.4 Probability Density

The derivative of the probability distribution function is called the probability density function, PDF, which gives the relative occurrence of amplitudes in x(t). This is the derivative of P(x),

p_x(x) = \frac{d}{dx}\left[P(x)\right].   (4.10)

Since P(x) is clearly a continuously growing (or constant) function for increasing x, because the probability in Equation (4.9) must increase with increasing amplitudes (until it reaches the maximum amplitude), then it follows that

p_x(x) \geq 0,   (4.11)

for all x. Furthermore, it follows directly from Equation (4.10), that

P(x) = \int_{-\infty}^{x} p_x(x)\,dx   (4.12)

and

\int_{-\infty}^{\infty} p_x(x)\,dx = 1.   (4.13)

The interpretation of the probability density is thus, that the area under the curve in a certain amplitude range, equals the probability that the random signal is within that range, i.e.,

\mathrm{Prob}\left[x_1 \leq x(t) \leq x_2\right] = \int_{x_1}^{x_2} p_x(x)\,dx.   (4.14)

4.2.5 Histogram

The probability distribution and density functions above are theoretical functions that we cannot estimate from real-life signals. Instead, in practice, we estimate the histogram, or more specifically, the amplitude histogram of the signal x(n) (as a histogram can be made from any measure). The (amplitude) histogram consists of a discrete number of values, where each value is the number of samples of the signal in a certain amplitude range. The procedure of creating a histogram is illustrated in Figure 4.1. In principle, first we need to choose the minimum and maximum amplitude we wish to include, which is frequently the analog-to-digital converter (ADC) range, or alternatively, the minimum and maximum amplitude values of the signal. As most vibration data have zero mean, it may be more practical to use the largest of the minimum or maximum (maximum of the absolute values of all samples), Mx , and use a symmetric amplitude interval [−Mx , Mx ] for the histogram calculation. This amplitude range is then divided into a number of uniformly spaced



Figure 4.1 Illustration of the principle of the calculation of a histogram. It is formed by calculating the number of samples in the analyzed signal, that fall within each of a number of amplitude ranges, called bins.

intervals, with width Δx. Finally, we produce the histogram by counting the number of samples, Ni , in our data, x(n), that fall within each bin.

4.2.6 Sample Probability Density Estimate

The histogram as we have defined it is not directly comparable to the probability density. By normalizing it by the total number of samples N and the amplitude width Δx, we obtain the sample probability density, p̂_i. Thus, we have

\hat{p}_i = \frac{N_i}{N}\left(\frac{1}{\Delta x}\right).   (4.15)

This estimate of the (continuous) PDF has a bias error due to the limited amplitude (x-axis) resolution. It also has a random error due to the limited number of samples used in the calculation. In order to keep these errors small, it is recommended to select the width of each bin so that Δx ≤ 𝜎x/5 and use N ≥ 10,000, where 𝜎x is the standard deviation of the signal, see Section 4.2.7. For data with Gaussian distribution and with zero mean, most of the data is within 4 to 5 times 𝜎x, the standard deviation of the signal (see Section 4.2.13). Thus, 40 or 50 bins are normally suitable for calculating the sample probability density when the minimum and maximum amplitudes for the histogram calculation are chosen as ±4𝜎x or ±5𝜎x, respectively. The bias and random errors for the sample PDF are approximate and can be found in Bendat and Piersol (2010). An example of a sample PDF is found in Figure 4.5.
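A minimal MATLAB/Octave sketch of this normalization (illustration only; the signal is synthetic, and the binning follows the recommendations above, 40 bins within ±4𝜎x) could look as follows.

% Minimal sketch: histogram counting and normalization to a sample PDF
% according to Equation (4.15).
x     = randn(50000, 1);                        % example signal (replace with data)
sigx  = std(x);
nBins = 40;
edges = linspace(-4*sigx, 4*sigx, nBins + 1);   % uniformly spaced bin edges
dx    = edges(2) - edges(1);                    % bin width, Delta x

Ni = zeros(1, nBins);
for k = 1:nBins                                 % count samples falling in each bin
    Ni(k) = sum(x >= edges(k) & x < edges(k+1));
end
pHat = Ni/(length(x)*dx);                       % sample PDF, Equation (4.15)
xc   = edges(1:end-1) + dx/2;                   % bin centers, for plotting
bar(xc, pHat)                                   % compare with Figure 4.5(a)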

4.2.7 Average Value and Variance

For a function g(x) of a random signal (process) x(t) with probability density p_x(x), the expected value, E[g(x)], is defined by

E[g(x)] = \int_{-\infty}^{\infty} g(x) p_x(x)\,dx.   (4.16)


It follows directly from Equation (4.16) that the expected value of x(t) is given by

\mu_x = E[x] = \int_{-\infty}^{\infty} x \cdot p_x(x)\,dx,   (4.17)

which is similar to the calculation of the center of gravity in one dimension, if you feel more comfortable with that. The expected value, or average value, is essentially the "amplitude center of gravity." The estimator of the mean value based on N samples of x(n), n = 0, 1, …, N − 1, is

\bar{x} = \hat{\mu}_x = \frac{1}{N} \sum_{n=0}^{N-1} x(n),   (4.18)

which is a consistent estimator, i.e., the variance of the estimate approaches zero as N approaches infinity, see Section 4.2.2. Note that we either use the "bar," \bar{x}, symbol or the hat symbol combined with the symbol of the true estimate in \hat{\mu}_x. The variance, 𝜎x², of a random variable, x(t), is defined by

\sigma_x^2 = E\left[(x - \bar{x})^2\right] = E[x^2] - \bar{x}^2.   (4.19)

The square root of the variance, i.e., 𝜎x, is called the standard deviation of x. In order to obtain an unbiased estimator for the variance and standard deviation, the so-called sample variance defined by Equation (4.20) should be used, that is

\hat{\sigma}_x^2 = \frac{1}{N-1} \sum_{n=0}^{N-1} \left(x(n) - \bar{x}\right)^2,   (4.20)

where the name sample variance is used when we use N − 1 in the denominator instead of N, to make the estimator unbiased. In practice, this makes little difference compared with using N in the denominator, as we typically use many thousand values to compute the variance. You should note that the standard deviation is similar to the root mean square, RMS, as defined in Equation (2.17). For N samples of a dynamic signal x(n), the RMS level is defined by

x_{\mathrm{RMS}} = \sqrt{\frac{1}{N} \sum_{n=0}^{N-1} x^2(n)}.   (4.21)

The RMS level differs from the standard deviation in that the RMS level includes the mean value (static or direct current [DC] value) of the signal x. Thus, when the DC value of a signal is zero, the RMS level is essentially equal to the standard deviation (except for the difference of N and N − 1 in the denominator which can be neglected for large N). When a measured signal contains a DC value, it is recommended that this value be removed from the signal and treated separately, anyway. So for noise and vibration signals, we regularly use RMS and standard deviation synonymously. The mean estimator in Equation (4.18) is an unbiased estimator. The random error, 𝜀r[\bar{x}], is more complicated because the random signal is time-varying. It turns out (Bendat and Piersol, 2010) that the random error of the average of a random signal with zero mean is

\sigma[\bar{x}] = \frac{\sigma_x}{\sqrt{2BT}},   (4.22)


where B is the bandwidth and T the observation time of x(t) (the time it takes to sample the N samples in the averaging process), and you should note that the error is not normalized. Because the mean is zero, normalizing it would mean dividing by zero. The estimator for the variance in Equation (4.20) is a consistent and unbiased estimator. The random error of the variance \hat{\sigma}_x^2, if x(t) has a bandwidth B and the observation time is T, is

\varepsilon_r[\hat{\sigma}_x^2] \approx \frac{1}{\sqrt{BT}}.   (4.23)

It can be shown that the normalized random error of \hat{\sigma}_x, under the same assumptions as for the variance error above, is

\varepsilon_r[\hat{\sigma}_x] \approx \frac{1}{2\sqrt{BT}}.   (4.24)

The product BT found in the equations for the random error of the mean, variance, and standard deviation above is called the bandwidth–time product, and is central in signal analysis and spectrum estimation (see Section 9.1.1). Clearly, it specifies that, in order to obtain a certain random error when calculating an estimate of any of these measures, we must measure during a specific minimum amount of time, and further, the lower the bandwidth of the signal is, the longer time we must measure. This follows naturally from the fact that, loosely speaking, frequency is change per time unit.
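As a small numerical illustration (not from the book; bandwidth and measurement time are made up), the following MATLAB/Octave lines compute the standard deviation and RMS value of a zero-mean signal and the random errors predicted by Equations (4.23) and (4.24) from the bandwidth-time product.

% Minimal sketch: standard deviation, RMS level, and the bandwidth-time
% product rule for the random errors.
fs = 1000;  T = 20;                   % sampling frequency (Hz) and record length (s)
x  = randn(round(fs*T), 1);           % broadband noise; bandwidth B is roughly fs/2

stdx = std(x);                        % standard deviation
rmsx = sqrt(mean(x.^2));              % RMS level, Equation (4.21); equals stdx here (zero mean)

B = fs/2;                             % approximate signal bandwidth
er_var = 1/sqrt(B*T)                  % random error of the variance, Equation (4.23)
er_std = 1/(2*sqrt(B*T))              % random error of the standard deviation, Equation (4.24)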

4.2.8 Central Moments

The ith central moment, M_i, of a signal x(t) is defined by

M_i = E\left[(x - \bar{x})^i\right],   (4.25)

where i is an integer number and x(t) is a time-varying signal. From a comparison of Equations (4.19) and (4.25), it follows that

M_1 = 0, \qquad M_2 = \sigma_x^2.   (4.26)

The statistical moments are used to calculate many higher-order statistical measures. In the next two subsections, we will present two commonly used functions in noise and vibration analysis based on higher-order moments, which are called skewness and kurtosis.

4.2.9 Skewness

Skewness is a commonly used parameter when analyzing dynamic signals. If the measured signal is denoted x, the skewness, denoted S_x, is defined by

S_x = \frac{M_3}{\sigma_x^3}.   (4.27)

Apparently the skewness is a dimensionless measure, and it measures to what degree the signal is nonsymmetric around its mean. If the signal is symmetric, the skewness is zero. For many random vibration signals, the probability distribution is symmetric around the mean


(e.g., the normal distribution). Thus, skewness differing from zero in many cases indicates that something is wrong. The similarity and difference between the skewness and the mean value should be considered. The third power in the third moment of the estimate of skewness does not change the sign of the negative part of x (we assume x is zero mean for simplicity). Thus, skewness is similar to the mean. The difference between the mean and the skewness is that the values of x are raised to the third power in the skewness estimate. This exaggerates high values and suppresses low values of x compared with the mean. The skewness value is thus more sensitive to an asymmetry in the large values in x than the mean.

4.2.10 Kurtosis

Another commonly used parameter is the kurtosis, K_x, which is also a dimensionless parameter, defined by

K_x = \frac{M_4}{\sigma_x^4}.   (4.28)

Kurtosis resembles the variance, except the values are raised to the fourth power instead of the second. Both these powers are even and thus make all values in the summation positive. The higher power in the kurtosis, compared with the variance estimate, will emphasize large values in the signal and suppress small values in the kurtosis, compared with the variance. The kurtosis can for this reason also be regarded as comparing the tails of the PDF with those of the normal distribution. For a normally distributed variable, the kurtosis is exactly 3 (see Problem 4.3). If the kurtosis is larger, the distribution has "higher tails" than the normal distribution, and vice versa if the kurtosis is smaller than 3. For a sine wave, the kurtosis is 1.5. For many other signals, it may be necessary to investigate empirically what kurtosis values are to be regarded as "normal," see Section 4.4. An alternative kurtosis value known as the excess kurtosis is common in software for noise and vibration analysis. The excess kurtosis is simply the difference between the kurtosis defined above and 3, i.e.,

K_{ex} = K_x - 3,   (4.29)

and the meaning of this is of course to obtain values that are 0 if data are Gaussian, instead of the somewhat odd number 3. It is a matter of taste which kurtosis definition one wants to use, but unfortunately, due to the two similar definitions, there is a cause of confusion when discussing kurtosis values, so great care should be exercised to make sure to mention which definition you use.

4.2.11 Crest Factor

The crest factor is a rather common statistical measure in signal analysis. It is defined as the ratio of the maximum absolute value to the RMS value of the signal. If we have a signal, x(n), with zero mean, the crest factor, c_x, is thus defined as:

c_x = \frac{\max\left(|x(n)|\right)}{\sigma_x}.   (4.30)


The crest factor is often an important property, as it tells how “peaky” data are. A high crest factor implies there is at least one large (positive or negative) peak in the signal. For a Gaussian random signal, the crest factor is usually of the order of 4–5, see Section 4.2.13.
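The following MATLAB/Octave sketch (illustration only; the signal is synthetic Gaussian noise) computes the skewness, kurtosis, and crest factor directly from the definitions in Equations (4.27), (4.28), and (4.30).

% Minimal sketch: skewness, kurtosis, and crest factor from their definitions.
x  = randn(100000, 1);                % example signal (replace with measured data)
xm = mean(x);
sx = std(x);

Sx = mean((x - xm).^3)/sx^3           % skewness, Eq. (4.27); close to 0 for Gaussian data
Kx = mean((x - xm).^4)/sx^4           % kurtosis, Eq. (4.28); close to 3 for Gaussian data
cx = max(abs(x))/sx                   % crest factor, Eq. (4.30); typically 4-5 here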

4.2.12 Correlation Functions

The autocorrelation function, R_{xx}(\tau), for a stochastic, ergodic time signal x(t) is defined as:

R_{xx}(\tau) = E[x(t)x(t - \tau)],   (4.31)

and can be interpreted as a measure of the similarity a signal has with a time-shifted version of itself (shifted by time 𝜏). Similarly, the cross-correlation between two different stochastic, ergodic functions, x(t) and y(t), where x(t) is seen as the reference, is defined as:

R_{yx}(\tau) = E[y(t)x(t - \tau)].   (4.32)

The autocorrelation function can be seen as a special case of the cross-correlation, for the case where the two signals are equal. It is easy to see that the autocorrelation at 𝜏 = 0 equals the variance of the signal x(t), since

R_{xx}(0) = E[x(t)x(t)] = \sigma_x^2.   (4.33)

The definitions of autocorrelation and cross-correlation are sometimes formulated such that, e.g., the cross-correlation is expressed as R_{yx}(\tau) = E[y(t + \tau)x(t)], which is easily seen to be equivalent with our definition by substituting u for t − 𝜏 in Equation (4.32), which means t = u + 𝜏, and the alternative definition occurs. Furthermore, with the definition we have used, if we assume the output signal y(t) is the input signal delayed by 𝜏1 seconds, i.e., y(t) = x(t − 𝜏1), the cross-correlation R_{yx}(\tau) becomes

R_{yx}(\tau) = E[y(t)x(t - \tau)] = E\left[x(t - \tau_1)x(t - \tau)\right],   (4.34)

which by making the variable substitution u = t − 𝜏1, and thus t − 𝜏 = u + 𝜏1 − 𝜏, leads to

R_{yx}(\tau) = E\left[x(u)x(u - (\tau - \tau_1))\right] = R_{xx}(\tau - \tau_1),   (4.35)

which means the cross-correlation equals the autocorrelation with the maximum shifted to 𝜏 = 𝜏1. This principle is commonly used to find time delays by use of the cross-correlation. The principle of the cross-correlation definition is worth a comment. You should note that the definition of cross-correlation in Equation (4.32) is the mean of the product of the signals. If we assume zero mean signals, this will reveal if there is a relationship between the signal y(t) and the time-shifted signal x(t − 𝜏). If these two signals are uncorrelated (there is no linear relationship between the two signals), the mean of the product will be zero, but if there is some relationship between the two signals, both will more often go positive or negative simultaneously, which will lead to a nonzero mean. This is the basic principle of finding correlation between different signals. If you have not yet contemplated this fact, I recommend you take a moment to think about it. For correlation functions, the following relationships hold for real signals x(t) and y(t):

R_{xx}(-\tau) = R_{xx}(\tau) \quad \text{(even function)}   (4.36)

and

R_{xy}(-\tau) = R_{yx}(\tau).   (4.37)

Correlation functions should be estimated in practice using spectra, as a direct implementation of the equations given here is very time-consuming. Procedures for estimation of correlation functions will therefore be discussed in Chapter 10.
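As a quick illustration of the time-delay principle in Equation (4.35) (not a recommended production estimator; Chapter 10 treats the spectrum-based procedures), the following MATLAB/Octave sketch uses the xcorr command, available in the Signal Processing Toolbox in MATLAB and the signal package in Octave. The delay and signal length are made up for the example.

% Minimal sketch: finding a time delay from the peak of the cross-correlation.
fs   = 1000;                          % sampling frequency in Hz
tau1 = 0.025;                         % true delay in seconds (made up)
x    = randn(10000, 1);               % reference signal
nd   = round(tau1*fs);                % delay in samples
y    = [zeros(nd, 1); x(1:end-nd)];   % y(t) = x(t - tau1)

[Ryx, lags] = xcorr(y, x);            % cross-correlation with x as reference
[~, imax]   = max(Ryx);
tauEst      = lags(imax)/fs           % peak location, approximately 0.025 s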

4.2.13 The Gaussian Probability Distribution

The Gaussian or normal distribution is the most common probability distribution. We denote a random signal with Gaussian distribution and with mean 𝜇x and standard deviation 𝜎x, by N(𝜇x, 𝜎x). The PDF for this N(𝜇x, 𝜎x) distributed signal is

p_x(x) = \frac{1}{\sigma_x\sqrt{2\pi}}\, e^{-\frac{1}{2}\left(\frac{x - \mu_x}{\sigma_x}\right)^2}.   (4.38)

Rather than using this general form, a standardized or normalized variable, z, is usually formed by taking

z = \frac{x - \mu_x}{\sigma_x},   (4.39)

for which 𝜇z = 0 and 𝜎z = 1. For this standardized variable, the Gaussian distribution is simplified to

p_z(z) = \frac{1}{\sqrt{2\pi}}\, e^{-\frac{z^2}{2}},   (4.40)

which is apparent from the equations above. This standardization, or normalization, is very important. By subtracting the mean from any signal, we get a new signal which has zero mean. By dividing any signal by its standard deviation, we get a new signal with unity standard deviation. This is easily verified, see Problem 4.1. The PDF of the standardized variable is plotted in Figure 4.2. The interpretation of the standardized variable is that any number z0 along the x-axis corresponds to the value x − 𝜇x = 𝜎x z0 for the original variable x. As we usually have zero mean vibration signals, we can further simplify this so that x = 𝜎x z0. If you further interpret the standard deviation as the RMS value, which is more practical, the x-axis in Figure 4.2 should be interpreted in values times the RMS value of x. We recall it is the area under the PDF which is the probability that the signal is within the limits between which the area is calculated. Thus, it is good to recall (from your statistics class) that for z we have that

\int_{-1}^{1} p_z(z)\,dz = 0.68, \qquad \int_{-2}^{2} p_z(z)\,dz = 0.95, \qquad \int_{-3}^{3} p_z(z)\,dz = 0.997,   (4.41)


Figure 4.2 Probability density function, PDF, of the standardized variable, z, for normal distribution.

which are useful to remember in order to get a feeling for signals with Gaussian distribution. The numbers in Equation (4.41) say that the signal x (the nonnormalized signal) is within ±𝜎x, or 1 times its RMS value, 68% of the time, within two times its RMS value 95% of the time, and within three times its RMS value 99.7% of the time. This says something about the normal distribution. Since this is a book strongly recommending using software such as MATLAB/Octave, it is worth noting that the values in Equation (4.41), and similar values for other integral limits, can be calculated using the error function. This function is available as the command erf in MATLAB/Octave and is defined by

\mathrm{erf}(x) = \int_{0}^{x} \frac{2}{\sqrt{\pi}}\, e^{-t^2}\,dt,   (4.42)

from which, together with the symmetry of p_z(z), it is evident that

\int_{-z_0}^{z_0} p_z(z)\,dz = \mathrm{erf}\left(\frac{z_0}{\sqrt{2}}\right).   (4.43)

The proof of this is left for Problem 4.2. In conjunction with the error function, it should also be mentioned that the inverse function erfinv is available in MATLAB/Octave. This function is very useful when computing limits for hypothesis tests as we will discuss in Section 4.3.1. The importance of the Gaussian distribution is strongly related to the central limit theorem (Bendat and Piersol, 2010; Papoulis, 2002), which says that any random signal, which is produced by a sum of many different contributions, is normally distributed regardless of the distribution of each of the contributions. This is usually the case for random sources in nature, and many signals that occur naturally are therefore normally distributed.
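For example, the three probabilities in Equation (4.41) follow directly from Equation (4.43) with one line of MATLAB/Octave:

z0 = [1 2 3];
p  = erf(z0/sqrt(2))    % approximately [0.68 0.95 0.997]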


If a variable is normally distributed, confidence levels can be calculated for the likelihood that the variable is inside a certain amplitude interval. The most common confidence level is the 95% level, which from Equation (4.41) is found to be the probability that |z| ≤ 2, which is equivalent to |x − x̄| ≤ 2𝜎x.

4.3 Statistical Methods

When analyzing noise and vibration signals in practice, there is a need for some statistical tools for investigating properties such as, for example, whether data are stationary or not. For this purpose, a statistical tool called a hypothesis test is used. In this section, we will therefore introduce some basic concepts of hypothesis testing, and thereafter we will discuss the issues of testing normality and stationarity of data.

4.3.1 Hypothesis Tests

In some of the following subsections, we will use hypothesis tests. Such tests are common in statistics, and we often meet the results of them in everyday life, for example when media report that smoking increases the risk of lung cancer, or that high fat consumption can lead to an increased risk of heart disease. It is important to note that hypothesis tests do not prove anything, but they test whether a particular set of observations (data) either agree or disagree with a statement – the hypothesis (Brownlee, 1984; Sheskin, 2004). As hypothesis tests are not part of most engineering curricula, we will present the principle of hypothesis testing in some detail. We illustrate the principle of a hypothesis test by an example. Assume we have a random variable, x, which has a mean value which we denote 𝜙 (because we want the notation to be general, regardless of which actual statistical measure we would like to test). We now believe that the value of this mean is 𝜙 = 𝜙0, for a specific value 𝜙0, and we want to test if the mean value of our observed data is actually 𝜙0. Note that we are talking about the theoretical mean here, which is untouchable for us. What we can obtain by a measurement is an estimate of the variable 𝜙, which we denote 𝜙̂. To define a hypothesis test, we have to know the probability distribution of the tested variable 𝜙, although not necessarily the distribution of the original random variable x. The way a hypothesis test is designed is that we set up a null hypothesis that 𝜙 = 𝜙0, which we denote H0. The alternative hypothesis, 𝜙 ≠ 𝜙0, we denote H1. The test we perform in this example is to reject the null hypothesis if our observed mean value, 𝜙̂, is outside a certain interval around 𝜙0, i.e., if

\left|\hat{\phi} - \phi_0\right| \geq \delta,   (4.44)

and we want to be able to calculate 𝛿. If the observations reject the null hypothesis, then our data support the alternative hypothesis. (More generally, there can actually be several alternative hypotheses, but we limit the discussion to two hypotheses here as they serve all our purposes in this book.) The estimate 𝜙̂ of the statistical property under test is based on a sum of independent observations of x. According to the central limit theorem, regardless of the probability



Figure 4.3 Illustration of acceptance and rejection regions for hypothesis tests, for our example of testing the mean of a random variable. The figure shows the Gaussian probability density function of the estimated mean values, i.e., p(𝜙̂), which has a mean value of 𝜙0 if the null hypothesis H0 is correct. The standard deviation depends on the number of values used for the mean calculation. The area 𝛼 is the significance level of the test, which is the probability that the null hypothesis is rejected when it is actually true.

distribution of x, the estimate 𝜙̂ will thus (at least approximately) have a Gaussian probability distribution as in Figure 4.3. It is important to understand that this probability density is the assumed true probability density, if our null hypothesis is true. To test the null hypothesis, H0, we select a significance level, 𝛼, as indicated in Figure 4.3. This means that we will accept the null hypothesis if our estimated mean, 𝜙̂, is inside the acceptance region indicated in the figure. If 𝜙̂ happens to come out as a value in either of the rejection regions, we will reject the null hypothesis. The significance level is thus the probability that we erroneously reject the null hypothesis when it is actually true. As can be seen in the figure, this probability is usually chosen to be small, usually in the order of 0.1–5%. Two errors can occur in a hypothesis test. In the first case, just mentioned, the null hypothesis is true, but we reject it anyway because our tested variable happens to be outside our limits. This is called the type I error. The probability of this, as we mentioned above, equals the significance level, which we select to be small when we design the test. This means that if we reject the null hypothesis, then the probability that we did so erroneously is the significance level, 𝛼, which will make our case strong if we decide on a small 𝛼. The other error that can occur is that the null hypothesis is not true, but that we accept it anyway. In other words, our test variable (in our example the mean, 𝜙) is not equal to 𝜙0, as indicated in Figure 4.4. In this case, the PDF of the variable 𝜙, marked by a solid line in the figure, is different than the PDF we assume under the null hypothesis. There is then a certain probability, 𝛽, that the estimate 𝜙̂ falls inside the acceptance region, as indicated in Figure 4.4. This is called a type II error. The error region 𝛽 in Figure 4.4 is the probability of making an error of type II. The "opposite" of this probability, 1 − 𝛽, is called the strength of the test and gives the probability of not accepting the null hypothesis "by mistake." Increasing the strength can be accomplished by increasing the significance level, 𝛼, which is of course generally not a good choice as it increases the risk that we reject



Figure 4.4 Illustration of the type II error in hypothesis testing, for our example with testing the mean value of a random variable. The type II error occurs when the null hypothesis is not true, but still accepted, which occurs inside the dashed area in the figure. In the figure, the probability density of the mean of our estimated variable (solid) is centered at 𝜙0 + d, whereas the assumed density function used for the acceptance and rejection regions (dashed) is centered at 𝜙0. The "opposite" of the area 𝛽 indicated in the figure (i.e., 1 − 𝛽) is called the strength of the test. See text for details.

the null hypothesis despite the fact that it is true. Alternatively, the strength can be increased by adding more data in the calculation of the variable under test, and thus reducing the variance of 𝜙̂. The latter should be preferred if at all possible. In our example, we have used a simple test variable, the mean of the random variable x. In a hypothesis test, we can more generally use any variable 𝜙, for example an RMS value, the variance, or the kurtosis. The only restriction is that we have to know the probability distribution of the test variable under the null hypothesis. From the discussion above, it should be clear that the best philosophy when designing hypothesis tests is to use the argument of the "opponent" as the null hypothesis. Doing this, if the test turns out to reject the null hypothesis, we have a strong case arguing that our own opinion is correct. Then the opponent has a weak case because the probability that the opponent is correct, but the test did not come out in his favor, is a mere 𝛼. In order to select such a null hypothesis, however, we need to know the distribution of the opponent's case. As we will see in the next two sections, this is often not the case. Instead, the test has to be designed so that the null hypothesis is what we want to prove. Then the strength of the test should be made as high as possible, i.e., 𝛽 should be small, since 𝛽 is then the critical probability in the case the test turns out in our favor. Hypothesis tests somewhat resemble confidence levels, and in some cases either can be used. It is generally considered that, whenever possible, confidence levels should be preferred over hypothesis tests.

4.3.2 Test of Normality

The normal, or Gaussian, distribution is by far the most common probability distribution encountered in practice when studying noise and vibration signals of random character.


This follows from the central limit theorem, as was mentioned previously. When a signal is assumed to be Gaussian for some analysis procedure to be valid, data should be investigated for normality. For large data sequences, as we typically have in vibration analysis contexts, it is not meaningful to use hypothesis testing, although several such tests exist, for example the chi-squared test that can test if a signal conforms to any probability density, see Bendat and Piersol (2010). The reason this is not practical is that no real signal is in fact Gaussian distributed, because no real data can possibly include values far out in the tails of the Gaussian distribution. This means that for large signals, the precision in the hypothesis test will mean that the test typically fails, even if we "believe" data to be Gaussian. For many practical purposes, we can, however, rely on graphical investigation. This means that we calculate the sample probability density defined in Equation (4.15), and compare it with the theoretical curve for the normal distribution, based on the calculated mean and standard deviation of the measured data, i.e.,

p_x(x) = \frac{1}{\hat{\sigma}_x\sqrt{2\pi}}\, e^{-\frac{(x - \hat{\mu}_x)^2}{2\hat{\sigma}_x^2}}.   (4.45)

An example of such a comparison is shown in Figure 4.5, where in (a) the sample probability density of the measured signal is plotted with bars, and the Gaussian PDF based on the mean and standard deviation of the measured signal is plotted in solid. An alternative plot, which is shown in Figure 4.5(b), is to plot the two functions with logarithmic y-scale, as this makes it easier to see if there is discrepancy in the tails. When investigating whether a measured signal is Gaussian or not, it is often the tails of the PDF which are most important. In such cases, a linear y-axis as in Figure 4.5 is not very convenient as the tails are low. A logarithmic y-axis is therefore often used, where the bar chart of the histogram is normally replaced by a trace plot, see Problem 4.4.


Figure 4.5 A plot of the sample probability density function (PDF) of a Gaussian signal with zero mean and unity standard deviation. In (a) the PDF is plotted as a bar chart, and the theoretical normal probability density as a solid line. In (b) the PDF is plotted with a logarithmic y-axis and dashed line, and overlaid with the theoretical PDF as a solid line. The logarithmic y-axis reveals differences in the tails. A total number of 50,000 samples were used and 40 bins within ±4𝜎x .
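A minimal MATLAB/Octave sketch of such a graphical check (illustration only; the signal here is synthetic, and the binning matches Figure 4.5) overlays the sample PDF with the theoretical curve of Equation (4.45) on a logarithmic y-axis:

% Minimal sketch: graphical test of normality, cf. Figure 4.5(b).
x     = randn(50000, 1);                 % signal under test (replace with measured data)
mx    = mean(x);  sx = std(x);
edges = linspace(mx - 4*sx, mx + 4*sx, 41);
dx    = edges(2) - edges(1);
xc    = edges(1:end-1) + dx/2;           % bin centers
Ni    = zeros(size(xc));
for k = 1:length(xc)
    Ni(k) = sum(x >= edges(k) & x < edges(k+1));
end
pHat   = Ni/(length(x)*dx);                             % sample PDF, Equation (4.15)
pTheor = exp(-(xc - mx).^2/(2*sx^2))/(sx*sqrt(2*pi));   % theoretical PDF, Equation (4.45)
semilogy(xc, pHat, '--', xc, pTheor, '-')               % discrepancies show up in the tails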


For cases where an absolute statistical measure of normality is of importance, the chi-square goodness-of-fit test can be used. This is an example of a hypothesis test and is found in many standard textbooks on random data analysis (e.g., Bendat and Piersol, 2010; Brownlee, 1984; Sheskin, 2004).

4.3.3 Test of Stationarity

When the spectral density of a random signal is to be estimated (see Chapter 10), an underlying assumption is that the signal is stationary. Before such estimates are made, the signal should be tested to verify that it is stationary. Before proceeding, it may be fruitful to consider for a moment some practical aspects of the concept of stationarity. In practice, few signals can be considered stationary for any longer period of time. Measuring the acoustic pressure from a microphone during a rocket launch, for example, the signal, if stationary at all, will definitely only be stationary during some limited time during which the thrust, speed, etc. can be considered constant. The same can be said about a vibration signal measuring road vibrations on a car. This signal will only be stationary for constant speed, type of road surface, etc. Another aspect of stationarity is that for a given signal, the time frame during which we study the signal is of importance. Of course, any time-varying signal is not stationary if the observation time is too short, and the question of whether a signal is stationary or not often turns into a philosophical one. In order for spectral analysis to be applicable, it is usually enough to ensure that some lower-order moment (for example the RMS value) of each time block used in the averaging process is approximately constant (this assumes Welch's method is used for the power spectral density (PSD) estimation, but can be extended to other estimators).

4.3.3.1 Frame Statistics

In order to test stationarity in practice, some of the statistical properties need to be investigated as functions of time. For example, the RMS value can be tested, or the skewness or kurtosis. The simplest way to do this is to divide the measured signal into a number of frames, and then for each frame, the selected properties are calculated and plotted as a function of time, as illustrated in Figure 4.6. If the value plotted does not seem to vary with time, the signal can be considered stationary. Some engineering judgment of what variations are acceptable will be needed, of course. It is important to select a proper frame time for this type of test. The frame time should be large enough so that the variance is small, which is dependent also on the signal bandwidth (see Section 4.2.7). At the same time, it must be sufficiently short for any temporary changes to show. Thus, some compromise is always necessary. If frequency analysis is going to be used, using the same frame size for the stationarity plots as for subsequent fast Fourier transform (FFT) analysis is usually a good practice.
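A minimal MATLAB/Octave sketch of such a frame-statistics plot (illustration only; the data and frame length are made up) is shown below; a plot similar to Figure 4.6 results.

% Minimal sketch: frame RMS values plotted against frame number.
fs     = 2000;                        % sampling frequency in Hz
x      = randn(50*fs, 1);             % 50 s of example data (replace with measured data)
nFrame = fs;                          % 1 s frames
nSeg   = floor(length(x)/nFrame);

rmsFrame = zeros(nSeg, 1);
for k = 1:nSeg
    seg = x((k-1)*nFrame + 1 : k*nFrame);
    rmsFrame(k) = sqrt(mean(seg.^2)); % frame RMS value
end
plot(1:nSeg, rmsFrame, 'o-')          % should look reasonably flat for stationary data
xlabel('Frame number')
ylabel('Frame RMS')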

4.3.3.2 The Reverse Arrangements Test

A more rigid test of stationarity can be made using various statistical methods (Bendat and Piersol, 2010; Sheskin, 2004; Brownlee, 1984). The reverse arrangements test, which is usually recommended, is a so-called nonparametric method that does not require


Figure 4.6 Plot of root mean square (RMS) value as a function of frame number. The data used were tested for stationarity by the reverse arrangements test and found to be stationary with a significance level of 0.02, see Section 4.3.3. Some random variation can thus be tolerated in a plot like this.

any a priori assumption or knowledge of the statistical distribution of the signal, and is therefore often preferred in practice. The reverse arrangements test is a hypothesis test that detects if there is a trend in the measured parameter. It is based on a calculation of frame statistics as we discussed in Section 4.3.3. If we do not have any trend in the data (parameter), then each value of our parameter should be greater than, loosely speaking, approximately half the other values, and smaller than half of the other values, on average. The test procedure is as follows. First, assume we have a sampled time sequence of a signal x(n) of length M × N samples, M and N being integer numbers. We divide this sequence into N segments, for which we calculate the parameter we want to use to test the stationarity, which we call y_i. The RMS value is often chosen, but the skewness or kurtosis may also be chosen, as well as any other statistical parameter that can be calculated from each frame. Next, we test the sequence of numbers y_i for variations outside what is expected due to sample variations. This is done by calculating a new function h_{ij}, as:

h_{ij} = \begin{cases} 1, & y_i > y_j, \\ 0, & \text{otherwise}, \end{cases}   (4.46)

where i = 1, 2, …, N − 1, and j = i + 1, i + 2, …, N. We now calculate the variables

A_i = \sum_{j=i+1}^{N} h_{ij}   (4.47)

and

A = \sum_{i=1}^{N-1} A_i,   (4.48)

where the variable A is called the number of reverse arrangements.


Based on N independent observations of a stationary random variable, the mean and variance are given by Equations (4.49) and (4.50), respectively,

\mu_A = \frac{N(N-1)}{4}   (4.49)

\sigma_A^2 = \frac{N(2N+5)(N-1)}{72}.   (4.50)

The null hypothesis of the reverse arrangements test is now defined as: the signal x is stationary if the variable A falls within the acceptance region given by Equation (4.51), defined by

A_{N;1-\alpha/2} < A \leq A_{N;\alpha/2}.   (4.51)

The limits in Equation (4.51) to accept stationarity can easily be calculated from the definition of the error function, erf, in Equation (4.42), which yields that

A_{N;1-\alpha/2} = \mu_A - \sqrt{2}\,\sigma_A\,\mathrm{erf}^{-1}(1-\alpha)   (4.52)

and

A_{N;\alpha/2} = \mu_A + \sqrt{2}\,\sigma_A\,\mathrm{erf}^{-1}(1-\alpha),   (4.53)

where erf−1 is the inverse error function, which in MATLAB/Octave is available as the command erfinv. From the discussion in Section 4.3.1, it follows that when the reverse arrangements test fails (the null hypothesis is rejected), the case is strong that the data are not stationary. The probability then that the data are stationary but erroneously tested as nonstationary is the same as the significance level. However, if the test shows that the data are stationary, the case is somewhat weaker and depends on the strength of the test. The strength of this test cannot be defined rigidly, however, since when the signal is not stationary, we do not have a general expression for the probability distribution of it because the nonstationarity can take on many forms. This is a weakness in many hypothesis tests like the present one. Based on an analysis of the strength of the test versus significance level, when using the reverse arrangements procedure, some guidelines are recommended in Himmelblau et al. (1993). They are as follows:

● N should always be larger than 10, and preferably more than 20. If less than 20, use 𝛼 = 0.10.
● If the measurement time is between 20 and 100 s, use segments of 1 s each, and use 𝛼 = 0.05.
● If the measurement time is larger than 100 s, use N = 100 and 𝛼 = 0.02.

Example 4.3.1 We illustrate the process of the reverse arrangements test with an example on a small number of data to keep the example simple. Assume we have measured 20 s of an acceleration signal with a sampling frequency of 2000 Hz. Make a reverse arrangements test, based on the RMS of the signal, to see if the data are stationary. The guidelines above lead to a choice of N = 20 and 𝛼 = 0.05. We calculate the RMS values of each of the 20 segments and obtain the values in Table 4.1.


Table 4.1 RMS value for each frame for reverse arrangements test in Example 4.3.1. The values should be read row by row.

2.95   2.82   3.26   3.51   3.21
2.96   2.87   3.28   3.41   3.31
2.99   2.94   3.16   3.42   3.27
3.05   2.88   3.11   3.32   3.44

We now make an upper-diagonal matrix h_{ij} and for each pair i, j, where j > i, we set h_{ij} to one if y_i > y_j. For space reasons, we omit that result here. We then sum each row to produce A_i for i = 1, 2, …, N − 1, which becomes

A_i = [4, 0, 9, 16, 8, 3, 0, 7, 9, 7, 2, 1, 3, 5, 3, 1, 0, 0, 0],   (4.54)

and finally, we sum the values A_i and get the total A = 78. We have to calculate the limits for the null hypothesis (that our data are stationary), for N = 20 and 𝛼 = 0.05, using Equations (4.52) and (4.53), respectively, which yields the limits:

A_{N;1-\alpha/2} = 64, \qquad A_{N;\alpha/2} = 125,   (4.55)

where you should note that we take the integer part of the calculated A_{N;1-\alpha/2} and A_{N;\alpha/2} since A is an integer. Since our calculated A = 78 is within the limits, we accept the null hypothesis; our data are stationary at the significance level 0.05. End of example.
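The example is easily reproduced in MATLAB/Octave. The following sketch (not the ABRAVIBE implementation, just a direct transcription of Equations (4.46) through (4.53)) applies the reverse arrangements test to the frame RMS values of Table 4.1.

% Minimal sketch: reverse arrangements test for the data in Table 4.1.
y = [2.95 2.82 3.26 3.51 3.21 2.96 2.87 3.28 3.41 3.31 ...
     2.99 2.94 3.16 3.42 3.27 3.05 2.88 3.11 3.32 3.44];
N     = length(y);
alpha = 0.05;

A = 0;                                % number of reverse arrangements, Eqs. (4.46)-(4.48)
for i = 1:N-1
    A = A + sum(y(i) > y(i+1:N));
end                                   % A = 78 for these data

muA  = N*(N-1)/4;                     % Equation (4.49)
sigA = sqrt(N*(2*N+5)*(N-1)/72);      % Equation (4.50)
Alow  = floor(muA - sqrt(2)*sigA*erfinv(1-alpha));   % = 64, Equation (4.52)
Ahigh = floor(muA + sqrt(2)*sigA*erfinv(1-alpha));   % = 125, Equation (4.53)

isStationary = (A > Alow) && (A <= Ahigh)            % true: accept the null hypothesis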

4.3.3.3 The Runs Test

The reverse arrangements test in the previous section is not particularly sensitive to periodicities in the signal, since it is based on rearranging the segments (see Problems 4.5 and 4.6). To discover, e.g., periodic fluctuations in the data that make the signal nonstationary, a test based on the so-called runs test, or Wald–Wolfowitz test, is more suitable, Sheskin (2004) and Brownlee (1984). This is a nonparametric test that tests the randomness of a variable, and the idea for using it to test for stationarity is based on testing whether the difference of the measured parameter of each frame (e.g., the RMS value) is a random variable. If there is a slowly varying periodicity in the data, with the RMS value periodically increasing and decreasing, the sequence of RMS values will, of course, not be random. We will illustrate this method by an example. First, the variable that is going to be tested is calculated for frames just as with the reverse arrangements test. We denote these values by y_i as before. Next the mean, 𝜇y, of the obtained frame values is calculated. Then, each value y_i is compared with the mean and assigned a "+" if it is larger than or equal to the mean value, and a "−" if it is less than the mean. Assume we have divided our data into 20 segments, and for each segment, we have calculated the RMS value and determined the mean of these RMS values. The result of the comparison with the mean could then look as (the results are taken from the same sequence as in Example 4.3.1)

− − + + + − − + + + − − − + + − − − + +


A run is defined as a sequence of equal signs ("+" or "−"). Thus, in this example, we have 8 runs among the 20 segments. Now, denote the number of frames used by N, the number of "+" signs by N₁, and the number of "−" signs by N₂ (so that N = N₁ + N₂). If the underlying data are stationary, based on N independent observations of these data, it turns out that for large N, the variable r, the number of runs in our test, is an approximately normally distributed random variable, with mean and variance

\mu_r = \frac{2 N_1 N_2}{N} + 1    (4.56)

and

\sigma_r^2 = \frac{2 N_1 N_2 (2 N_1 N_2 - N)}{N^2 (N - 1)}.    (4.57)

The next step is to find the limits for which the runs test indicates random behavior of the runs (i.e., indicates the data to be stationary). Thus, the null hypothesis is formulated: "the signal x is stationary if the variable r falls within the acceptance region given by rlow ≤ r ≤ rhigh," where the limits are of the same form as those for the reverse arrangements test in Equations (4.52) and (4.53), with the mean and standard deviation from Equations (4.56) and (4.57), respectively. As for any hypothesis test, a level of significance, α, has to be chosen. In our example, for a level of significance of α = 0.05, we obtain the lower and upper limits 6 and 15, respectively. Our number of runs, 8, falls within this range, and thus the runs test indicates that our data are stationary at the level of significance 0.05. Finally, we should note that a runs test should always be used with more than 20 segments, as the test gets weak for smaller numbers. If the runs test is used to check data prior to spectrum estimation with Welch's method (see Section 10.3), it is recommended to use the same segment (frame) size as the blocksize used for the spectrum averaging.
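A minimal MATLAB/Octave sketch of this runs test (not the ABRAVIBE implementation), using the same frame RMS values as in Example 4.3.1, could look as follows.

% Minimal sketch of the runs test for stationarity.
% y holds the frame values (Table 4.1), alpha is the significance level.
y     = [2.95 2.82 3.26 3.51 3.21 2.96 2.87 3.28 3.41 3.31 ...
         2.99 2.94 3.16 3.42 3.27 3.05 2.88 3.11 3.32 3.44];
alpha = 0.05;

s  = double(y >= mean(y));        % sequence: 1 for "+", 0 for "-"
N1 = sum(s == 1);                 % number of "+"
N2 = sum(s == 0);                 % number of "-"
N  = N1 + N2;
r  = 1 + sum(abs(diff(s)) > 0);   % number of runs = number of sign changes + 1

% Mean and standard deviation of r under the null hypothesis, Eqs. (4.56)-(4.57)
mur  = 2*N1*N2/N + 1;
sigr = sqrt(2*N1*N2*(2*N1*N2 - N)/(N^2*(N - 1)));

% Two-sided acceptance limits, same form as for the reverse arrangements test
z     = sqrt(2)*erfinv(1 - alpha);
rlow  = floor(mur - z*sigr);
rhigh = floor(mur + z*sigr);

stationary = (r >= rlow) && (r <= rhigh)   % 1 if the data pass as stationary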

4.4 Quality Assessment of Measured Signals

In this section, we will look at some examples of how the statistical tools discussed previously in this chapter can be used to assess the quality of measured data in the field of noise and vibration analysis. In modern data analysis, increasing amounts of data are recorded in field tests as well as lab tests for later analysis. Vibration measurements are challenging, with many potential causes of erroneous measurement data, for example bad sensors or cable problems. There is therefore a need to assess the quality of measured data and find possible errors. A means for data quality assessment is to investigate some of the statistical properties of the measured signals. This can be done regardless of whether the data are actually of random character or not, since deterministic signals can, from this point of view, be investigated using the same methods. It should be stressed, however, that due to the very different nature of various vibration signals, it is difficult to devise a standard data quality test that will work in every case. This is particularly true if one wants automatic detection of different errors. I therefore suggest an approach to this application that uses some statistical measures as indicators of potential (but not necessarily actual) errors, followed by a manual inspection where engineering experience can be used to interpret the suspicious statistical measures.


It should be stressed that data quality assessment has to be done on time signals. This is one of several arguments for why time data should (almost) always be recorded for noise and vibration signals. The tradition in many areas of this engineering field has been to reduce data to spectra immediately during the data acquisition process, a technique introduced in the 1970s when the first analyzers came on the market. This way of working should be abandoned except in monitoring applications, as will be further discussed in Chapter 11.

In general, unless stationarity for some reason can be definitely assumed a priori (due to known test conditions, for example), data should be tested for stationarity, as this is usually assumed for the other measures. The reverse arrangements test and the runs test described in Sections 4.3.3.2 and 4.3.3.3, respectively, should be used to test stationarity in cases where a statistically reliable method is requested. In many other cases, it may be sufficient to use the procedure with frame statistics described in Section 4.3.3.1. For stationarity, it is often sufficient to investigate the standard deviation (or variance). However, some errors are more easily revealed by studying the skewness or kurtosis as a function of frame number (or time). For example, kurtosis can be used as a "spike detector." When data are assumed to be normally distributed a priori, this assumption should also be investigated using one of the procedures mentioned above. Calculating a sample PDF is also motivated by the fact that many errors in instrumentation, such as drop-outs or spikes, can be revealed in a histogram by an increased occurrence around zero or at the amplitude of the spikes, respectively, as shown in Figure 4.7.

When data have been verified to be stationary and, if necessary, Gaussian, some standard statistical values should next be calculated using the complete signal. Such standard statistics should include minimum, maximum, and mean values, standard deviation or RMS value, skewness, and kurtosis. By comparing these values with known values, either from a priori theoretical assumptions or from empirical knowledge of the type of data, many errors can be detected.


Figure 4.7 Plot of the sample probability density of a signal with dropouts, overlaid by the Gaussian PDF using the mean and standard deviation of the signal. As can be seen, the histogram reveals an increased occurrence of values around zero, which indicates that something is wrong in the signal.


When recording many channels, or many measurements, a useful trick is to normalize each statistical measure to the corresponding measure of one measurement (channel) which you assume (or perhaps make sure through some manual analysis) is without quality errors. A plot of all those normalized metrics will clearly show a channel with a potential problem. It should also be pointed out that producing frame statistics of each of the metrics, for smaller frames of data, has the potential to reveal problems occurring only temporarily in the data. For example, spikes due to a loose cable may occur only during extra-large vibration levels, such as when a car hits a pothole or passes over a railroad crossing. Calculating, in this case, kurtosis over small, say 1 s, frames is likely to show these spikes, whereas the effect of a few single spikes may not come through in a calculation over the entire data set. We will now show some results from a data quality analysis of an eight-channel measurement on a truck running on a rough road.

Example 4.4.1 We illustrate data quality analysis with an example based on acceleration data that were recorded on a truck driving with constant speed on a stretch of rough road. The dataset used here consists of eight channels, selected from a larger set of partly faulty measurements, to illustrate the procedure. After importing the data into MATLAB, we calculate some basic statistics of each channel based on all data. The results are found in Table 4.2.

Table 4.2 Statistics results for Example 4.4.1.

Ch. #    Mean       RMS     Skewness    Kurtosis
1         1.2       3.8       0.16        11.0
2        −0.032     4.1       0.19         8.8
3        −0.012     1.3       0.25         4.9
4        −0.0035    0.4       0.04         6.8
5         0.2       1.8       0.04         4.3
6        −0.014     0.5      −0.02         5.0
7         0.047     3.3       0.19         3.0
8         0.027     1.3       0.7          6.1
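As a minimal sketch (assuming the eight channels are stored as columns of a matrix named data, an assumed variable name; this is not the ABRAVIBE implementation), overall statistics like those in Table 4.2 could be computed as follows.

% Minimal sketch: overall statistics per channel for a data matrix with one
% channel per column ('data' is an assumed variable name).
m    = mean(data);                             % mean value of each channel
rmsv = sqrt(mean(data.^2));                    % RMS value of each channel
xc   = bsxfun(@minus, data, m);                % remove the channel means
sd   = sqrt(mean(xc.^2));                      % standard deviation (biased estimate)
skew = mean(xc.^3)./sd.^3;                     % skewness of each channel
kurt = mean(xc.^4)./sd.^4;                     % kurtosis (3 for Gaussian data)
disp([m(:) rmsv(:) skew(:) kurt(:)])           % cf. the columns of Table 4.2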



Figure 4.8 Plots for Example 4.4.1. In (a), frame statistics based on skewness are plotted versus frame number. In (b), a similar plot with frame statistics of kurtosis is shown. The latter shows a few frames with extra high kurtosis that could potentially be caused by spikes, for example. See the text for a discussion.

The statistics in the table reveal that channel 1 has a relatively high mean value, and a kurtosis much higher than that of the remaining channels. This is an indication that something may be wrong, and we put the channel up for a closer look. Next, the kurtosis of channels 2 and 4 seems to be higher than that of the remaining channels. This could be quite in order if there is a good explanation for it: it could be due to a different vibration character in different directions, or it could be that the points with higher kurtosis are located near some impulsive force due to, e.g., rattling. To find out whether there are temporary errors in the data, such as spikes, frame statistics can be calculated and plotted. For space reasons, not all such plots can be shown here. As an example, frame statistics for skewness and kurtosis of channel 4 are plotted in Figure 4.8. We see that the kurtosis is relatively high at some instances, for example in frame 14. This is an indication that there could be a spike or similar in the data, and it should be investigated manually. In this case, it turned out to be the result of potholes causing extra-high shocks. End of example.
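As a minimal sketch of the frame (running) statistics plotted in Figure 4.8(b), assuming x holds one channel of time data and fs the sampling frequency in Hz (both assumed variable names), the running kurtosis could be computed as follows.

% Minimal sketch of frame statistics (running kurtosis over 1-s frames).
framelen = round(fs);                        % 1-s frames, as suggested in the text
nframes  = floor(length(x)/framelen);
kurtfr   = zeros(nframes, 1);
for n = 1:nframes
    xf = x((n-1)*framelen + (1:framelen));   % frame number n
    xf = xf - mean(xf);
    kurtfr(n) = mean(xf.^4)/mean(xf.^2)^2;   % kurtosis of frame n
end
plot(1:nframes, kurtfr)
xlabel('Frame number'), ylabel('Running kurtosis (-)')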

4.5 Chapter Summary

In this chapter, we have presented some basic statistical properties and shown how to estimate them based on experimental data. These are found in Section 4.2 and will not be repeated here. Next, we introduced statistical hypothesis testing and presented two methods that use such tests to verify whether a signal is stationary: the reverse arrangements test and the runs test. The former test reveals trends in the data that violate stationarity, and the latter reveals periodicities in the data. We also noted that in many cases it is sufficient to use frame statistics to evaluate the stationarity of data. In such an analysis, a statistical measure (e.g., the standard deviation or kurtosis) is calculated for frames of the data


and plotted against frame number. If the data are stationary, there should be little variation in the statistical parameter. An important application of statistics, namely quality assessment of data, was then discussed. The recommended procedure to find anomalies in data is to calculate a number of statistics in two different ways:

● Overall statistics. By calculating mean, RMS, skewness, kurtosis, etc. over the entire data record, the nature of the measured data is found. If one or more channels stand out from the others in terms of one or more statistical properties, it is an indication that something may be wrong in that channel, and it should be manually inspected.
● Frame statistics. The overall statistics can, of course, miss errors that occur only intermittently and are therefore lost in the average over the entire data record. The same statistics should therefore preferably also be computed for frames of, e.g., 1 s duration. Either these results can be plotted for every channel, or, for large channel counts, it may be enough to list the maximum value of each metric, as this alone will be an indication of an error in the data.

4.6 Problems

Many of the following problems are supported by the accompanying ABRAVIBE toolbox for MATLAB/Octave, and by further examples which can be downloaded with the toolbox. If you have not already done so, please read Section 1.6. ABRAVIBE is completely free and can be downloaded from www.abravibe.com, together with example files and other material complementing this book, including a solutions manual for all book problems.

Problem 4.1 Assume a stationary random signal x(t) has a mean value x̄ and standard deviation σx. Prove that the normalized variable z = (x − x̄)/σx has zero mean and unit standard deviation.

Problem 4.2 Show that the relation in Equation (4.43), between the integral of the normal distribution probability density and the error function, is true.

Problem 4.3 There is a theorem in random theory which states that the expected value of a product of four random processes, x₁, x₂, x₃, and x₄, each with normal distribution, is

E[x₁x₂x₃x₄] = E[x₁x₂]E[x₃x₄] + E[x₁x₃]E[x₂x₄] + E[x₁x₄]E[x₂x₃].    (4.58)

Show that this leads to the fact that the kurtosis of a random signal with normal distribution is exactly 3.

Problem 4.4 Create a Gaussian time signal, x, with 100,000 samples in MATLAB/Octave. Then create a new non-Gaussian variable, y, by the equation y = x + 0.1x|x|. This new variable will, of course, not be Gaussian. Calculate and plot the sample probability density function (PDF) of x overlaid with the Gaussian PDF using the mean and standard deviation


of x, with linear and logarithmic y-axes, respectively. Then repeat the plots for y. Compare the plots and note the differences between linear and logarithmic y-axes. Hint: You can use the apdf command from the accompanying toolbox to compute the PDF.

Problem 4.5 Create a time signal in MATLAB/Octave with 100,000 samples, which is the product of a Gaussian random signal and a half sine, so that the RMS level of the signal increases over the first half of the data and then decreases over the last half. Run the reverse arrangements test on the data with a significance level of α = 0.02 and N = 100 segments. Are the data stationary based on this test? Explain why. (Comment: the data are clearly not stationary.)

Problem 4.6 Perform a runs test on the same data as in Problem 4.5. Do the data pass the test as stationary? Explain the difference between the result of this test and the result of the reverse arrangements test.

References

Bendat J and Piersol AG 2010 Random Data: Analysis and Measurement Procedures, 4th edn. Wiley Interscience.
Brownlee K 1984 Statistical Theory and Methodology. Krieger Publishing Company.
Himmelblau H, Piersol AG, Wise JH and Grundvig MR 1993 Handbook for Dynamic Data Acquisition and Analysis. Institute of Environmental Sciences and Technology, Mount Prospect, Illinois.
Newland DE 2005 An Introduction to Random Vibrations, Spectral, and Wavelet Analysis, 3rd edn. Dover Publications Inc.
Papoulis A 2002 Probability, Random Variables, and Stochastic Processes, 4th edn. McGraw-Hill.
Sheskin D 2004 Handbook of Parametric and Nonparametric Statistical Procedures, 3rd edn. Chapman & Hall.
Wirsching PH, Paez TL and Ortiz H 1995 Random Vibrations: Theory and Practice. Wiley Interscience.


5 Fundamental Mechanics

At first glance, it could be argued that a single mass connected to a spring and a damper is a superficial and limited example of a mechanical system. However, as we will see in later chapters, the single degree-of-freedom system, or SDOF system as it is often called, is essential when it comes to understanding mechanical dynamics, because real structures in some respects behave as if they were constructed of several SDOF systems. It is also common to make approximations in mechanical dynamics based on SDOF assumptions. A thorough understanding of this, the simplest of mechanical systems, is thus absolutely essential in order to understand the dynamics of mechanical systems. In this chapter, we introduce the SDOF system, starting from its basic equations, and deduce important results necessary for understanding its dynamics. If you have already studied mechanical dynamics, it is advised that you still read through this chapter at least briefly, since many of the results we emphasize here are often omitted in other texts, as the discussion here is particularly concerned with the experimental results we can obtain from measurements on a mechanical system.

5.1 Newton's Laws

The fundamental mechanics theory most commonly used to describe vibrations was established by Isaac Newton (1642–1727). In 1687, he published his three well-known laws in his famous Principia (Newton, 1687):

● The Law of Inertia. In the absence of an outside force, an object in motion tends to stay in motion, and an object at rest tends to stay at rest.
● The Law of Acceleration. The rate of change (time derivative) of a body's momentum (mass times velocity) is equal to the sum of all forces acting on the body.
● The Law of Action and Reaction. If an object exerts a force on a second object, the second object exerts an equal but opposite force on the first.


If several forces act on a body, they are summed vectorially. The law that we will use in this chapter is the second law, the Law of Acceleration. It is so interesting, in fact, that we shall study it in great detail.

5.2 The Single Degree-of-Freedom System (SDOF)

We first assume that we have a mass connected to a spring and a so-called viscous damper, as shown in Figure 5.1. This system is called a single degree-of-freedom system, where the term "degree-of-freedom" refers to the motion along (in this case) one translational axis. In Section 6.2.1, we will discuss the concept of degrees of freedom in mechanical systems in more detail. The mass is driven by a dynamic force, F(t). To make the example as simple as possible, we choose a horizontal force so that we do not have to consider any effects of gravity and initial spring compression, which might divert attention from the important issues at stake here. We further assume that the mass can move without friction (except for the effect of the viscous damper) around its equilibrium position u0. The spring and the viscous damper are assumed to be massless.

Figure 5.1 Mechanical system with one degree of freedom, i.e., a mass, m, moving in one direction. Connected to the mass are a spring with stiffness k and a viscous damper with damping c.

A description of the spring and the viscous damper is appropriate here. A spring which is subjected to a force, F, gives a counter force which is proportional to the displacement from its equilibrium position. The constant of proportionality, k, is called the stiffness of the spring, with units of [N/m]. The viscous damper similarly provides a counter force which is proportional to the velocity, that is, to the time derivative of displacement. The constant of proportionality for the damper is given by the symbol c, which has units of [N s/m]. The balance of forces according to Newton's second law gives

mü = −cu̇ − ku + F(t),    (5.1)

where the minus sign indicates that the damper and spring provide counter forces (in the opposite direction to the force F). Equation (5.1) is usually rewritten as

mü + cu̇ + ku = F(t).    (5.2)

Time derivatives are indicated by a “dot” such that velocity is given by u̇ = du∕dt = v and acceleration by ü = d2 u∕dt2 = a which are read “u-dot” and “u-double-dot,” respectively. These notations are common in mechanics where we often deal with time derivatives of displacement.

5.2.1 The Transfer Function

We shall now examine the solution to Equation (5.2), which we solve by Laplace transformation. If you are not familiar with the Laplace transform, read Section 2.6.1 carefully


before continuing with this chapter. Taking the Laplace transform of both sides of Equation (5.2) gives

(ms^2 + cs + k)U(s) = F(s).    (5.3)

This expression leads to the transfer function, H(s), between F(s) and U(s), given by

H(s) = \frac{U(s)}{F(s)} = \frac{1/m}{s^2 + sc/m + k/m},    (5.4)

where we have divided numerator and denominator by m to render the s^2 term alone in the denominator. For second-order systems, it is common to write the denominator in the "standard form"

H(s) = \frac{U(s)}{F(s)} = \frac{1/m}{s^2 + s 2\zeta\omega_n + \omega_n^2},    (5.5)

which means that in our case

\omega_n = \sqrt{\frac{k}{m}}    (5.6)

and

\zeta = \frac{c}{2\sqrt{mk}},    (5.7)

where ωn is the undamped natural (angular) frequency, sometimes called the undamped resonance frequency, in [rad/s], and ζ (Greek "zeta") is the relative damping, or damping ratio, which is dimensionless. The standard form in Equation (5.5) is often used because much can be understood about the system from just knowing the parameters ωn and ζ. In the following text, we do not assume any prior experience, but if you are familiar with second-order systems, you will recognize much from before.

Example 5.2.1 Assume an SDOF system with m = 1 [kg], c = 100 [N s/m], and k = 10^6 [N/m]. Using Equation (5.6), we find that the undamped natural frequency will be

f_n = \frac{\omega_n}{2\pi} = \frac{1}{2\pi}\sqrt{\frac{k}{m}} = \frac{1}{2\pi}\sqrt{\frac{10^6}{1}} = \frac{1000}{2\pi} \approx 159.2 \; \mathrm{Hz},    (5.8)

and from Equation (5.7), the relative damping will be

\zeta = \frac{c}{2\sqrt{km}} = \frac{100}{2\sqrt{10^6}} = 0.05 = 5\%.    (5.9)

End of example.
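As a quick MATLAB/Octave sketch, the numbers in Example 5.2.1 can be verified as follows.

% Minimal sketch reproducing Example 5.2.1: natural frequency and damping
% ratio of an SDOF system from its mass, damping, and stiffness.
m = 1;        % mass [kg]
c = 100;      % viscous damping [N s/m]
k = 1e6;      % stiffness [N/m]

wn   = sqrt(k/m);          % undamped natural frequency [rad/s]
fn   = wn/(2*pi)           % ... in Hz, approximately 159.2 Hz
zeta = c/(2*sqrt(k*m))     % relative damping, 0.05 = 5 %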

5.2.2 The Impulse Response

From Section 2.6.1, we know that the time domain equivalent of the transfer function in Equation (5.5) is the impulse response. In order to find the impulse response, we first calculate the poles of Equation (5.4), which are also known as the roots of the denominator. The poles may be expressed as

s_{1,2} = -\zeta\omega_n \pm j\omega_n\sqrt{1-\zeta^2} = -\zeta\omega_n \pm j\omega_d,    (5.10)

where \omega_d = \omega_n\sqrt{1-\zeta^2} is often referred to as the damped natural frequency.


The poles correspond, as we mentioned earlier, to the homogeneous solution to Equation (5.3), which describes the free oscillations, where the displacement of the mass, u(t), is nonzero while the force F(t) equals zero. Using the poles in Equation (5.10), the transfer function in Equation (5.4) can be rewritten as

H(s) = \frac{1/m}{(s - s_1)(s - s_2)}.    (5.11)

We can now use partial fraction expansion (see Section 2.6.1) to rewrite Equation (5.11) into

H(s) = \frac{C_1}{s - s_1} + \frac{C_2}{s - s_2},    (5.12)

where the constants C1 and C2 can be identified by residue calculus as mentioned in Section 2.6.1. Using the Heaviside cover-up method thus yields

C_1 = \frac{1/m}{s_1 - s_2} = \frac{1}{j2m\omega_d},  \qquad  C_2 = \frac{1/m}{s_2 - s_1} = \frac{-1}{j2m\omega_d},    (5.13)

where \omega_d = \omega_n\sqrt{1-\zeta^2}. You should note that C2 = −C1. Applying Laplace transform pair 4 from Table 2.1 to Equation (5.12) results in the impulse response of the SDOF system

h(t) = \frac{1}{j2m\omega_d} e^{s_1 t} - \frac{1}{j2m\omega_d} e^{s_2 t}.    (5.14)

Using the expression for the poles from Equation (5.10), we can rewrite Equation (5.14) as

h(t) = \frac{1}{j2m\omega_d} e^{-\zeta\omega_n t}\left(e^{j\omega_d t} - e^{-j\omega_d t}\right) = \frac{1}{m\omega_d} e^{-\zeta\omega_n t}\sin(\omega_d t),    (5.15)

since e^{j\omega_d t} - e^{-j\omega_d t} = j2\sin(\omega_d t). From the first exponential factor in each term of Equation (5.15), it is clear that the product of the relative damping ζ and the natural frequency ωn determines how quickly the motion of the mass is damped out after excitation. From the second, imaginary exponential factor in each term of Equation (5.15), we get oscillations with a frequency, ωd, which is lower than the undamped natural frequency. The solution in Equation (5.15) is only practical in the underdamped case, 0 ≤ ζ ≤ 1, where the solutions oscillate, i.e., we have vibration. The upper limit, ζ = 1, indicates the case where an impulse-shaped excitation results in a similar impulse-shaped response, after which all motion ceases. See Section 5.7 for a general discussion about damping. The impulse response of the SDOF system thus consists of an exponentially damped sine oscillating with the frequency fd = fn√(1 − ζ²), usually called the damped natural frequency. From Equation (5.15), it is clear that the frequency of oscillation decreases with increased damping, although slightly, see Table 5.1. Figure 5.2 shows the impulse responses, h(t), for two typical values of relative damping, ζ = 0.01 and ζ = 0.05, for an SDOF system with an undamped natural frequency of 159.2 Hz.


Table 5.1 Values of the factors √(1 − ζ²) and √(1 − 2ζ²) in Equations (5.10) and (5.21), respectively.

ζ        √(1 − ζ²)    √(1 − 2ζ²)
0.01       1.000         1.000
0.05       0.999         0.997
0.1        0.995         0.990
0.2        0.980         0.969
0.3        0.954         0.906


Figure 5.2 Impulse response of two different SDOF systems with undamped natural frequency fn = 159.2 Hz and relative damping of (a) ζ = 0.01, and (b) ζ = 0.05.
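As a minimal sketch, impulse responses like those in Figure 5.2 can be generated directly from Equation (5.15); the mass m = 1 kg is taken from Example 5.2.1, and the time axis is an assumption chosen to match the plots.

% Minimal sketch generating an impulse response like those in Figure 5.2,
% using Equation (5.15).
fn   = 159.2;                  % undamped natural frequency [Hz]
zeta = 0.05;                   % relative damping (use 0.01 for the upper plot)
m    = 1;                      % mass [kg], from Example 5.2.1

wn = 2*pi*fn;
wd = wn*sqrt(1 - zeta^2);      % damped natural frequency [rad/s]
t  = (0:1/2000:0.5)';          % 0.5 s time axis, 2 kHz sampling (assumed)

h = exp(-zeta*wn*t).*sin(wd*t)/(m*wd);   % impulse response, Eq. (5.15)
plot(t, h)
xlabel('Time [s]'), ylabel('Impulse response (m/Ns)')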

Example 5.2.2 Taking the SDOF system from Example 5.2.1, we find that the impulse response in this case, with relative damping of ζ = 5%, will oscillate with a frequency of

f_d = f_n\sqrt{1-\zeta^2} = 159.2 \times 0.999 = 159.0 \; \mathrm{[Hz]}.    (5.16)

End of example.


It is well worth recapitulating the meaning of the impulse response here. As we saw in Section 2.6.3, the effect of the impulse response in the convolution process to obtain the output from the input is to weight input values "backward" in time. The longer the impulse response is, the longer is the part of the input (force) which is weighted together to form the output (displacement) at any instant in time. If we consider an input signal which is more or less constant in level, such as stationary noise or a periodic signal, we can therefore predict that the more input values that are weighted together, the higher the output signal will be. Less damping thus leads to a longer impulse response, which leads to higher vibrations. If the input is a transient shock, then the amount of damping will determine how long the response continues after the input shock has excited the system.

5.2.3 The Frequency Response

When we measure a dynamic system in order to identify it, we cannot measure the transfer function as in Equation (5.4), as it is a nonphysical entity, as explained in Section 2.6.1. Instead, we usually measure the frequency response, which, as we know from Section 2.7.2, is defined as the ratio of the spectrum of the output (response) and the spectrum of the input (force). The most intuitive method to determine the frequency response at a certain frequency is to let the excitation be a sinusoid with the desired frequency and calculate the frequency response magnitude as the ratio of the response and force amplitudes. The phase angle can be determined, if desired, by measuring the phase difference between the response and the force. In Chapters 13 and 14, more refined methods for measuring frequency responses will be described, which are more commonly used in practice. As we know from Section 2.7.3, we obtain the frequency response for the system in Equation (5.4) by letting s = jω = j2πf. We first get

H(f) = \frac{U(f)}{F(f)} = \frac{1/m}{-\omega^2 + j2\zeta\omega_n\omega + \omega_n^2}.    (5.17)

By dividing the numerator and denominator by ωn² and noting that ωn² = k/m and ω/ωn = f/fn, we can simplify this equation into

H(f) = \frac{U(f)}{F(f)} = \frac{1/k}{\left(1 - (f/f_n)^2\right) + j2\zeta (f/f_n)},    (5.18)

where we have made the denominator a function of the relative frequency f/fn, which is very practical. The shape of the frequency response is evidently independent of the actual natural frequency, and only dependent on the ratio f/fn. You should also note that the numerator in Equation (5.18) has been replaced by 1/k. This reflects the physical fact that at very low frequencies, where the denominator is approximately equal to unity, the frequency response is approximately 1/k, which means that we only "feel" the spring, as the forces from the damper and mass are small at low frequencies. In Figure 5.3, the magnitude and phase of H(f) are plotted for three different values of the relative damping, ζ.


Figure 5.3 Frequency response magnitude and phase plot of an SDOF system according to Equation (5.18) for three values of the relative damping, 𝜁 = 0.01, 𝜁 = 0.05, and 𝜁 = 0.1. The undamped natural frequency, fn , is 159.2 Hz.

It is clear that for low frequencies (relative to the natural frequency), the response (displacement) magnitude is approximately constant and the phase angle between response and force is close to zero. These relationships indicate that the mass moves in phase with the force, which is a result of the fact that at low frequencies only the spring is in effect. If we now suppose that we increase the frequency and keep the force constant, we see in Figure 5.3 that as we approach the natural frequency, the response increases greatly and has a maximum near the natural frequency. At higher frequencies, the response decreases, inversely proportional to the square of the frequency (which is apparent from Equation (5.18)). At and around the natural frequency, something important occurs in the phase plot. From having been in phase at low frequencies, the response begins to lag behind, and eventually becomes out of phase with the force above the natural frequency. This phase relationship means that we have maximum displacement in one direction (remember we are considering a sinusoidal force at each frequency!), while we have maximum force directed the opposite way. The reason is that the reactive force of the mass is proportional to its acceleration, while the force of the spring is proportional to its position (displacement from equilibrium). Exactly at the undamped natural frequency, where f = fn, Equation (5.18) shows that the frequency response is purely imaginary. Since the imaginary number is in the denominator, this means that the phase is exactly ∠H(fn) = −π/2 or −90° at this frequency. This can also be seen in the phase plots in Figure 5.3.
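A minimal sketch reproducing curves like those in Figure 5.3 from Equation (5.18); the stiffness k = 10⁶ N/m is taken from Example 5.2.1, and the frequency resolution is an arbitrary choice.

% Minimal sketch of the FRF magnitude/phase curves in Figure 5.3, Eq. (5.18).
fn    = 159.2;                 % undamped natural frequency [Hz]
k     = 1e6;                   % stiffness [N/m], from Example 5.2.1
f     = (0:0.1:500)';          % frequency axis [Hz]
zetas = [0.01 0.05 0.1];

H = zeros(length(f), length(zetas));
for n = 1:length(zetas)
    H(:,n) = (1/k)./(1 - (f/fn).^2 + 1i*2*zetas(n)*(f/fn));   % Eq. (5.18)
end
subplot(2,1,1), semilogy(f, abs(H)), ylabel('Dyn. flexibility (m/N)')
subplot(2,1,2), plot(f, angle(H)*180/pi), ylabel('Phase (Deg.)'), xlabel('Frequency (Hz)')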


The damping affects how high the resonance peak is: the lower the damping, the greater the motion around the resonance frequency, for a given force. In Figure 5.3, you should also observe that higher damping leads to a larger frequency range over which the phase curve switches from 0 to −180°. We shall now examine the frequency response in more detail. Writing the expressions for magnitude and phase of H(f), we get

|H(f)| = \frac{1/k}{\sqrt{\left(1 - (f/f_n)^2\right)^2 + \left(2\zeta f/f_n\right)^2}},    (5.19)

and for the phase

\angle H(f) = -\arctan\left(\frac{2\zeta f/f_n}{1 - (f/f_n)^2}\right),    (5.20)

where ∠ indicates the phase angle of H(f) and the minus sign comes from the fact that the complex numbers are in the denominator in Equation (5.19). By differentiating Equation (5.19) and finding for which frequency the derivative is zero, we can find where the magnitude of the frequency response H(f) has its maximum. We find that this frequency, sometimes called the damped resonance frequency, fmax, is given by

f_{\max} = f_n\sqrt{1 - 2\zeta^2},    (5.21)

which is valid for damping values ζ ≤ 1/√2, and where the peak value of |H(f)| is

|H(f)|_{\max} = \frac{1/k}{2\zeta\sqrt{1 - \zeta^2}},    (5.22)

which is also valid for ζ ≤ 1/√2. This maximum value can also be expressed using the Q-factor from Equation (5.27), in which case the maximum is approximately

|H(f)|_{\max} \approx Q \cdot \frac{1}{k},    (5.23)

which means that the Q-factor tells how large the resonance amplification is. It should be noted that the damped resonance frequency fmax is different from the damped natural frequency fd defined in Section 5.2.2. In fact, there is some disagreement between different books on what to call both these frequencies. This is a good reason to always use the undamped natural frequency when communicating results from structural dynamics. It should, however, be understood that there is one frequency at which the impulse response oscillates, fd = fn√(1 − ζ²), and another frequency, fmax = fn√(1 − 2ζ²), where the maximum in the magnitude of the frequency response is found, and both these frequencies are lower than the undamped natural frequency. We thus have maximum displacement, for constant force at all frequencies, at a frequency lower than the undamped natural frequency, and fmax becomes lower with increasing damping. All of this is valid, however, only when ζ ≤ 1/√2; otherwise, we have no peak at all at the natural frequency. The values of √(1 − ζ²) and √(1 − 2ζ²) for some different values of ζ are given in Table 5.1.


It is difficult to state typical values of relative damping, as it can differ much between different structures, depending on design and material choice. In metal structures, for example, the damping is often as low as 0.001 (e.g., airplane wings) up to possibly 0.1 (structures with screw joints, etc.). The value of ζ = 0.05 is often used as a typical value of damping in steel structures. We shall discuss damping further in Section 5.7.

Example 5.2.3 Taking, again, the SDOF system from Example 5.2.1, we calculate the damped resonance frequency, fmax. From Equation (5.21) and the results from Example 5.2.1, we find that the damped resonance frequency will be

f_{\max} = f_n\sqrt{1 - 2\zeta^2} = 159.2 \times 0.997 \approx 158.8 \; \mathrm{[Hz]}.    (5.24)

End of example.

5.2.4 The Q-Factor

The half-power bandwidth, or 3 dB bandwidth, or resonance bandwidth of a resonance peak is defined by

B_r = f_u - f_l,    (5.25)

where the lower and upper frequencies, fl and fu, are defined by

|H(f_l)|^2 = |H(f_u)|^2 = \frac{1}{2}|H(f_{\max})|^2,    (5.26)

that is, the upper and lower frequencies are defined so that the power (amplitude squared) of |H(f)| has been halved in relation to the peak value |H(fmax)|. In electrical circuit theory, a common concept for resonant circuits is the quality factor, or Q-factor, which can be calculated as the ratio between the center frequency and the half-power bandwidth, i.e.,

Q = \frac{f_n}{B_r}.    (5.27)

It should perhaps be particularly noted here that the word “quality” relates to radio receiver applications, where a good-quality resonance is one with small bandwidth which picks up the radio signal, but not the surrounding frequencies. In mechanical applications, a high Q value is usually of rather “bad quality” as it causes large vibrations. The term “Q-factor” is commonly used in vibration fatigue and shock applications, see Section 18.1. The relative damping, 𝜁, is related to the Q-factor, as will be discussed in Section 5.5.3.

5.2.5 SDOF Forced Response

The response of the SDOF system to a force input is, like the response of any linear system, composed of a transient and a steady-state response, as we discussed in Section 2.7.4. This can sometimes result in a forced response that exhibits a phenomenon called beating, which is illustrated in Figure 5.4(a). Beating is most prominent if the damping is low and the excitation frequency is close to the natural frequency of the SDOF system.


Figure 5.4 Illustration of the transient forced response of an SDOF system, resulting in (a) strong beating, and in (b) some distortion. The SDOF system has an undamped natural frequency of 159.2 Hz and relative damping of 0.2%. The exciting force is sinusoidal with 10 N amplitude and, in (a), 155 Hz, and, in (b), 40 Hz frequency. The beating phenomenon is most prominent in systems with low damping and is more severe the closer the excitation frequency is to the natural frequency, until the beating disappears when the excitation frequency coincides with the natural frequency (although there is still a rather long transient part until the correct level is obtained in the latter case).

For the example in Figure 5.4, we use an SDOF system with the same undamped natural frequency of 159.2 Hz as in the previous examples, but with a relative damping of 0.2%. The plot in Figure 5.4(a) illustrates the beating that results if we apply a sinusoidal excitation force of 10 N amplitude and a frequency of 155 Hz to the system. In (b), the result of a force with 10 N amplitude and 40 Hz frequency is shown. In the latter case, the result is a clearly distorted sine. In both cases, after this transient behavior has died out, the response is a steady-state sine with 155 and 40 Hz, respectively, but with the low damping of this system, it will take a long time before this is achieved; see also Problem 6.8.
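As a minimal sketch, the beating response in Figure 5.4(a) can be simulated by convolving the impulse response of Equation (5.15) with the sinusoidal force; the mass m = 1 kg is an assumption taken from Example 5.2.1, and the sampling rate is an arbitrary choice.

% Minimal sketch of the transient forced response in Figure 5.4(a).
fn = 159.2; zeta = 0.002; m = 1;           % SDOF parameters (m assumed 1 kg)
fs = 10000; t = (0:1/fs:0.5)';             % time axis
wn = 2*pi*fn; wd = wn*sqrt(1 - zeta^2);
h  = exp(-zeta*wn*t).*sin(wd*t)/(m*wd);    % impulse response, Eq. (5.15)
F  = 10*sin(2*pi*155*t);                   % 10 N, 155 Hz sinusoidal force
u  = conv(h, F)/fs;                        % convolution, scaled by the time step
u  = u(1:length(t));                       % keep the first 0.5 s
plot(t, u), xlabel('Time (s)'), ylabel('Displacement (m)')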

5.3 Alternative Quantities for Describing Motion

When we experimentally measure frequency response, as for example in experimental modal analysis, we normally measure responses in the form of accelerations, not displacements, because acceleration is easier to measure, as we will see in Chapter 7. The frequency response according to Equation (5.18), consisting of the ratio of displacement with force, is often called dynamic flexibility or receptance and has the units [m/N = s²/kg]. As mentioned above, we can instead consider that, in place of displacement, u(t), we measure velocity, v(t) = du/dt. Through the differentiation of Equation (5.18), which in the frequency domain corresponds to a multiplication by j2πf (= jω), we obtain the mobility, Hv(f), with units of [m/(N s) = s/kg], that is

H_v(f) = \frac{V(f)}{F(f)} = j2\pi f \, \frac{1/k}{\left(1 - (f/f_n)^2\right) + j2\zeta (f/f_n)}.    (5.28)


It is more common to measure acceleration, rather than velocity, although velocity sensors do exist, and, for example, laser Doppler vibrometers typically measure velocity. The most common frequency response is thus that of acceleration over force, which is called accelerance or sometimes inertance and has the units [m/(N s²) = 1/kg]. Through the differentiation of Equation (5.28), we obtain the accelerance as

H_a(f) = \frac{A(f)}{F(f)} = -(2\pi f)^2 \, \frac{1/k}{\left(1 - (f/f_n)^2\right) + j2\zeta (f/f_n)},    (5.29)


where the minus sign comes from the square of the imaginary number. By analogy with what we discussed for dynamic flexibility above, we could take the absolute values of Hv and Ha in Equations (5.28) and (5.29), respectively, differentiate and set the derivatives to zero to determine at which frequency the peak in the absolute value occurs. It can be shown that the magnitude of mobility has a maximum at exactly f = fn , while the magnitude of accelerance has a maximum at a somewhat higher frequency. In Figure 5.5, the three different types of frequency response are plotted using log–log plot format to obtain straight lines for the asymptotes (see Appendix B). Instead of calculating response divided by force, we could calculate the inverted expression, i.e., force divided by displacement, velocity, or acceleration. There is, however, a very good reason to measure functions in the middle column in Table 5.2, i.e., “response over force” type of frequency response, as will be further discussed in Section 6.4.1. Although the “force over response” type of frequency response is thus rarely used, it is still good to know the names as they do occasionally occur in texts on vibrations. All six possible kinds of frequency response and their respective names are therefore given in Table 5.2.
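A minimal sketch of this conversion, producing curves like those in Figure 5.5 (parameter values taken from the earlier examples):

% Minimal sketch of Figure 5.5: dynamic flexibility, mobility, and accelerance
% of the same SDOF system, obtained by multiplying with j*2*pi*f.
fn = 159.2; k = 1e6; zeta = 0.05;
f  = logspace(0, 3, 1000)';                % 1 Hz to 1 kHz, logarithmic axis
jw = 1i*2*pi*f;

Hu = (1/k)./(1 - (f/fn).^2 + 1i*2*zeta*(f/fn));   % dynamic flexibility, Eq. (5.18)
Hv = jw.*Hu;                                      % mobility, Eq. (5.28)
Ha = jw.*Hv;                                      % accelerance, Eq. (5.29)

loglog(f, abs(Hu), f, abs(Hv), f, abs(Ha))
legend('H_u', 'H_v', 'H_a'), xlabel('Frequency (Hz)')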


Figure 5.5 Log–log plot of the three common forms of frequency response, dynamic flexibility, Hu , mobility, Hv , and accelerance, Ha . Note especially the asymptotic slopes for low and high frequencies indicated in the figure. Also, note that the three forms of frequency response have different units and are therefore not comparable in the same plot. They are plotted in the same plot here for shape comparison only.


Table 5.2 Names of frequency responses between different response signals, R, and force, F. The names in boldface are those recommended by the international standard (ISO 2641:1990).

Response quantity, R   R/F                                                 F/R
Displacement           u/F: Dynamic Flexibility,                           F/u: Dynamic Stiffness
                       Receptance, Compliance
Velocity               v/F = j2πf (u/F): Mobility,                         F/v = 1/(j2πf) (F/u): Mechanical
                       Mechanical Admittance                               Impedance
Acceleration           a/F = −(2πf)² (u/F): Accelerance,                   F/a = −1/(2πf)² (F/u): Apparent
                       Inertance                                           Mass

5.4 Frequency Response Plot Formats

As our aim is to understand the frequency response of the SDOF system from an experimental point of view, we will now turn to different ways of presenting the frequency response in any of the "response over force" types: dynamic flexibility, mobility, and accelerance. As we will see, there are several common ways of plotting the frequency response, each of which emphasizes different aspects of it. Experimentally, the usual response units are either acceleration or velocity, but using modern software for analysis, it is easy to convert between any type of response units. In this section, we will therefore discuss all three forms of response units.

5.4.1 Magnitude and Phase

The first and most common plot format is the magnitude-phase plot that we have been using already. This plot format can in turn be plotted using either linear or logarithmic scales for the x- and y-axes. It is most common, and strongly recommended, to use a logarithmic scale for the magnitude y-axis, as otherwise a lot of detail is lost, as is shown in Appendix B. For the x-axis, both formats are common, and each format has its own merits. The main reasons to use a logarithmic scale for the x-axis are twofold. First, it produces straight-line asymptotes, as we mentioned in conjunction with Figure 5.5. Second, as indicated by Equation (5.25), the relative bandwidth, i.e., the ratio of the resonance bandwidth and the resonance frequency, is proportional to the relative damping. If the damping is the same for all resonances on real structures with several resonances, the width of each resonance will therefore look equal when using a logarithmic frequency axis. In Figure 5.6, the magnitude and phase of the three frequency response types are plotted for the same SDOF system as we used in the examples earlier in this chapter, with an undamped resonance frequency of 159.2 Hz and a relative damping of 5%.


Figure 5.6 Log-frequency plot of frequency responses with magnitude and phase. The magnitude is plotted with a logarithmic y-axis, whereas the phase is plotted with a linear y-axis.

It is probably equally popular among engineers working with structural dynamics to plot frequency responses with a linear frequency axis. This plot format has the advantage of not extending the low-frequency part and compressing the upper-frequency part, and many engineers find this format better for many purposes; see also Appendix B. A plot in magnitude/phase format with linear frequency axes, of all three frequency response types, is shown in Figure 5.7. As before, the y-axis of the magnitude plot is logarithmic and the y-axis of the phase plot is linear.

5.4.2 Real and Imaginary Parts

A careful examination of the denominator of the frequency response of dynamic flexibility type in Equation (5.18) reveals that where f ≪ fn, the unity term dominates the denominator, and thus the frequency response is (approximately) real.


Figure 5.7 Lin-frequency plot of frequency responses with magnitude and phase. The magnitude is plotted with a logarithmic y-axis, whereas the phase is plotted with a linear y-axis.

On the other side of the resonance, where f ≫ fn, the term −(f/fn)² dominates, and the frequency response is again real. It is only at frequencies around f = fn that the frequency response becomes significantly complex, and exactly at f = fn, it is purely imaginary. The exact appearance of the real and imaginary parts of dynamic flexibility is shown in the upper plots of Figure 5.8. As can be seen in the figure, the real part is positive for low frequencies, and then makes a characteristic "bend" around the resonance frequency. At higher frequencies, the real part is negative due to the 180° phase shift that occurs at the resonance. The imaginary part, on the other hand, is close to zero at most frequencies and exhibits a dip at the resonance, as a result of the fact that the phase is exactly −90° when f = fn, see Equation (5.20).

Figure 5.8 Real part and imaginary part for dynamic flexibility, mobility, and accelerance.

As the conversion of dynamic flexibility into mobility and accelerance is accomplished by multiplying once and twice, respectively, by the factor jω, which includes the imaginary number, the real and imaginary parts will swap when converting between dynamic flexibility and mobility, so that the real part in one of the functions becomes the imaginary part in the next format, and vice versa. This is shown in the subsequent plots in Figure 5.8. Of the real and imaginary parts, it is most common to plot the part that exhibits the peak (or dip), that is, the imaginary part of the dynamic flexibility or accelerance, and the real part of the mobility. This plot is sometimes used for estimating the resonance frequency, as described in Section 5.5.2. Another common use is for examining the quality of frequency responses with force and response measured at the same point, so-called driving point frequency response functions (FRFs), see Section 13.12.2.

5.4.3 The Nyquist Plot – Imaginary Versus Real Part

Figure 5.9 Nyquist diagram for dynamic flexibility, mobility, and accelerance for an SDOF system with viscous damping. The Nyquist curve for mobility displays a perfect circle.

The last plot format we shall discuss was first described by Kennedy and Pancu (1947) in a well-known paper. They showed that for mobility, a perfect circle is formed in the Nyquist diagram, i.e., in a plot of the imaginary part versus the real part of Hv(f). Nyquist plots of dynamic flexibility, mobility, and accelerance are shown in Figure 5.9. If we begin with the transfer function for dynamic flexibility from Equation (5.4) above, convert to mobility, and evaluate on the frequency axis s = jω, we find that the mobility can be written as

H_v(\omega) = \frac{j\omega}{k - \omega^2 m + j\omega c} = \frac{\omega^2 c + j\omega(k - \omega^2 m)}{\left(k - \omega^2 m\right)^2 + (\omega c)^2},    (5.30)


where we have multiplied the numerator and denominator of the first expression by the complex conjugate of the denominator to obtain the last expression. From this equation, we split the real and imaginary parts and obtain

H_{vR} = \mathrm{Re}\{H_v\} = \frac{\omega^2 c}{\left(k - \omega^2 m\right)^2 + (\omega c)^2}    (5.31)

and

H_{vI} = \mathrm{Im}\{H_v\} = \frac{\omega\left(k - \omega^2 m\right)}{\left(k - \omega^2 m\right)^2 + (\omega c)^2}.    (5.32)

Now comes the trick. We let

X = H_{vR} - \frac{1}{2c}    (5.33)

and

Y = H_{vI}.    (5.34)

With these definitions, it can be shown that

X^2 + Y^2 = \frac{\left[(\omega c)^2 + (k - \omega^2 m)^2\right]^2}{4c^2\left[(\omega c)^2 + (k - \omega^2 m)^2\right]^2} = \left(\frac{1}{2c}\right)^2,    (5.35)

which is thus the same as

\left(H_{vR} - \frac{1}{2c}\right)^2 + \left(H_{vI}\right)^2 = \left(\frac{1}{2c}\right)^2.    (5.36)

Equation (5.36) is the equation of a circle with radius 1/(2c) and center at (HvR = 1/(2c), HvI = 0). This fact has been used in experimental modal analysis as a specific method for curve fitting, called the circle fit method; see Section 5.5.4.
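As a minimal numerical sketch, the circle property of Equation (5.36) can be checked directly, using the parameter values from Example 5.2.1.

% Minimal numerical check of the circle property in Equation (5.36).
m = 1; c = 100; k = 1e6;                           % SDOF parameters, Example 5.2.1
w  = 2*pi*linspace(1, 500, 2000)';                 % angular frequency [rad/s]
Hv = (1i*w)./(k - w.^2*m + 1i*w*c);                % mobility, Eq. (5.30)

radius = sqrt((real(Hv) - 1/(2*c)).^2 + imag(Hv).^2);
max(abs(radius - 1/(2*c)))      % should be (numerically) zero

plot(real(Hv), imag(Hv)), axis equal               % the circle of Figure 5.9
xlabel('Real'), ylabel('Imaginary')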

5.5 Determining Natural Frequency and Damping Ratio

When characterizing mechanical systems it is often desired to determine fn and 𝜁. We shall therefore discuss a number of techniques for accomplishing this task. In the present section, we limit our analysis to some easy-to-use methods that can be used for a first rough estimate. In Chapter 16, we will take this one step further and use a more accurate mathematical curve fitting technique to estimate the parameters. Depending on which type of frequency response we look at (dynamic flexibility, mobility, or accelerance), there will be (small) differences in peak magnitude location, etc., as we have mentioned before. If the damping is low, say less than 0.1, then those differences are small. The purpose of the discussion here is to provide some approximate means of roughly estimating fn and 𝜁. We will therefore not present all details and exact formulas, but limit the discussion to practical formulas to be used. Keep in mind, however, that these are approximate. More details can be found in most textbooks on vibration analysis, e.g., Inman (2007), Den Hartog (1985), and Ewins (2000).


5.5.1 Peak in the Magnitude of FRF

The simplest method follows directly from the discussion above. For ζ ≪ 1, we have fn ≈ fd, and we can define fn as the frequency where the magnitude of the frequency response, |H(f)|, has its maximum. This method is approximately valid for dynamic flexibility or accelerance, whereas for mobility it is exact, that is, the peak in the magnitude of Hv(f) is located exactly at fn.

5.5.2 Peak in the Imaginary Part of FRF

An alternative method of finding fn is to look at the location of the peak or the dip that occurs in the imaginary parts of dynamic flexibility or accelerance FRFs, or in the real part of mobility FRFs. I have chosen to call this method "peak in the imaginary part" since it is most common to measure accelerance. In many cases, the peak of the imaginary part of (for example) an accelerance FRF is more pronounced than the peak in the magnitude plot, particularly if two resonances are closely spaced, or highly damped. Although the peak or the dip in the real part of mobility and the imaginary part of accelerance do not match fn exactly, for low values of the relative damping, ζ, the peak is located near fn.

5.5.3 Resonance Bandwidth (3 dB Bandwidth)

The resonance bandwidth, Br, from Equation (5.25) is often used to determine the critical damping ratio ζ. Many textbooks, for example Ewins (2000), include a proof that for any relative damping factor ζ

\zeta = \frac{f_u^2 - f_l^2}{(2 f_d)^2},    (5.37)

where fl and fu are the half-power lower and upper frequencies from Equation (5.26). For small damping ratios, say ζ < 0.1, Equation (5.37) can be approximated by

\zeta \approx \frac{f_u - f_l}{2 f_d}.    (5.38)

Simplifying Equation (5.38) by using the bandwidth Br from Equation (5.25), we obtain a useful expression for the damping:

\zeta \approx \frac{B_r}{2 f_d}.    (5.39)

Since the quality factor, Q, in Equation (5.27) was defined as the ratio of the resonance frequency and the resonance bandwidth, we can combine this definition with Equation (5.39) to obtain the following relationship between the Q-factor and the relative damping:

Q \approx \frac{1}{2\zeta}.    (5.40)

The derivations behind the expressions above are based on mobility frequency response. However, the expressions are approximately valid also for dynamic flexibility and accelerance, if the damping is low.
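As a minimal sketch of the procedure, here applied to a simulated dynamic flexibility FRF with known damping so that the estimate can be checked:

% Minimal sketch: estimate relative damping from the 3 dB bandwidth, Eq. (5.39).
fn = 159.2; k = 1e6; zeta = 0.05;                  % "true" system for the test
f  = (0:0.01:500)';
H  = (1/k)./(1 - (f/fn).^2 + 1i*2*zeta*(f/fn));    % dynamic flexibility, Eq. (5.18)

[Hmax, imax] = max(abs(H));
fpeak = f(imax);                                   % approximate resonance frequency
i3dB  = find(abs(H) >= Hmax/sqrt(2));              % points within the half-power band
Br    = f(i3dB(end)) - f(i3dB(1));                 % 3 dB bandwidth
zest  = Br/(2*fpeak)                               % approximately 0.05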


5.5.4 Circle in the Nyquist Plot

For dynamic flexibility and accelerance, a perfect circle is not obtained, but for low values of the relative damping, 𝜁, approximate circles are obtained. It can be shown that the undamped natural frequency, fn , lies at the frequency at which the rate of change of angle is largest between two frequency values, as one moves along the circle (or approximate circle if the frequency response is not mobility). If we want to measure the damping by the circle fit method, it is necessary to use mobility. Of course, it is not the viscous damping c we would wish to estimate, but rather the relative damping, 𝜁. The relative damping can be estimated from the rate of angle change, although this is hardly practical without using a computer routine. More information on the circle fit technique can be found in textbooks on modal analysis, for example Maia and Silva (2003) and Ewins (2000). One can thus measure both resonance frequency and damping using the circle method.

5.6 Rotating Mass

An interesting case in vibration analysis is a mass which rotates around a center point. This arises whenever there exists an imbalance, for example in a rotating engine part, and is one of the most common causes of vibrations. One case, which can be studied relatively easily, is that of a round, homogeneous disk of mass M, see Figure 5.10, which rotates around its center of gravity (the center of the disk). If we create an imbalance by attaching a small mass, m, at a distance r [m] from the center, and assume that the disk rotates with angular frequency ω [rad/s], the so-called centrifugal force, Fc(t) [N],

F_c = m r \omega^2    (5.41)

is generated.

Figure 5.10 This figure illustrates the centrifugal "force," Fc, arising when a small mass, m, is placed on the disk at a distance r from the center of rotation. The disk mass, M, is assumed to be evenly distributed over its area.

This new force arises because of the necessity to balance the current system, whose center of gravity, because of the small mass, m, no longer lies at the point of rotation. A simple way to balance the system is to attach another identical mass, m, symmetrically around the point of rotation (across from the first mass), so that the center of gravity for the whole system, of

total mass M + 2m, now agrees with the point of rotation. This balancing method is called balancing in one plane and is what is used, for example, when the auto mechanic balances the wheels of your car. In more complex cases, for example an axle suspended on two bearings, the number of degrees of freedom is increased, and the balancing is more complicated. The methods for balancing are well developed, although they are a bit beyond the scope of this book. More details on this topic can be found in, for example, Norfield (2006).

We shall touch upon one more phenomenon regarding rotating objects. If we imagine that we have an axle suspended on a bearing, which in turn is clamped onto a flexible mount, we can model the system as an SDOF system in the tangential direction. If we look at one degree of freedom (translational direction), the centrifugal "force" from the imbalance mass, m, which now corresponds to the net result of the total mass distribution around the center of rotation, will then be compensated for by the SDOF system, that is, the whole system can be modeled as

Mü + cu̇ + ku = mrω² sin(ωt),    (5.42)

where M, c, and k come from the bearing and its mounting. This equation is well known, and its solution will also be a sinusoid which, if we assume a complex solution, u(t), can be written as

u(t) = u_0 e^{j\omega t},    (5.43)

for which the solution can be written as

u_0 = \frac{\frac{m}{M}\, r\, \left(f/f_n\right)^2}{\left(1 - (f/f_n)^2\right) + j2\zeta \left(f/f_n\right)}.    (5.44)

The implication of the solution in Equation (5.44) is very interesting. Because we apparently have a resonance at the (undamped) natural frequency fn, the vibration levels will be much higher at and around this rotation speed than at other rotation speeds. In order to get small vibration levels at the operating rotation speed, we should obviously operate this system away from the resonance. If we operate far below the resonance, we get relatively low vibrations; this is called running the machine subcritically. Alternatively, letting the rotation speed increase above the resonance is sometimes a better option, if the resonance frequency is too low to allow operating at a considerably lower frequency. When the operating rotation speed is higher than the first resonance frequency of the machine, it is said to run supercritically. A disadvantage with the supercritical operating case is naturally that we must pass the resonance when starting the machine. Ensuring that the machine passes its critical speed reasonably fast, however, is usually sufficient to avoid that problem. Supercritical operation is the most common case.

5.7 Some Comments on Damping

So far in this chapter, we have discussed different SDOF models considering viscous damping. As mentioned a few times already, damping is a difficult issue in vibration engineering. There are many models for different forms of damping, but there is limited knowledge on how to calculate the total effect of the different forms of damping in an actual structure. Many of the known forms of damping, for example Coulomb friction, are nonlinear. In many cases, particularly with low damping, it nevertheless works relatively well to approximate the effect of the various forms of damping by a linear model. Therefore, the usual situation is that, regardless of the actual damping, it is approximated by viscous damping. This often works well, but there is one other common form of damping that we will briefly mention, namely hysteretic damping, also known as structural damping. A more thorough treatment of different damping forms is found in most books on mechanical vibrations, for example Inman (2007), Rao (2003), and Craig and Kurdila (2006).

5.7.1 Hysteretic Damping

For many real-life structures, experimental results show that the model with viscous damping, which we have used so far, does not completely agree with the frequency responses obtained experimentally. An alternative to the viscous model is therefore sometimes preferred, in which the viscous damping, c, is replaced by a complex spring constant. This form of damping, called structural, or hysteretic, damping, is obtained by introducing a frequency dependence in the damping term through c = 𝜂k∕𝜔, where 𝜂 is called the loss factor. With this damping model, Newton's equation can be written as

$$m\ddot{u} + k(1 + j\eta)u = F(t). \tag{5.45}$$

The mathematical background to Equation (5.45) is not as rigorous as that of the equation for viscous damping which we have studied earlier in this chapter, because it is not an ordinary differential equation with real coefficients. Therefore, we cannot use the Laplace transform, nor can we define free oscillations for this model. However, the frequency response corresponding to Equation (5.45) can be solved and resembles the one for viscous damping. The dynamic flexibility frequency response for an SDOF system with hysteretic damping is

$$H_u(f) = \frac{U(f)}{F(f)} = \frac{1/k}{1 - \left(f/f_n\right)^2 + j\eta}. \tag{5.46}$$

The important difference between the viscous and the structural damping models lies in the frequency dependence of the damping. We can observe from the similarity between Equations (5.18) and (5.46) that exactly at the natural frequency, f = fn, we have that

$$\eta = 2\zeta. \tag{5.47}$$

Thus, we can use Equation (5.47) in Equation (5.39) to calculate 𝜂 from the 3 dB bandwidth, Br, and we find that

$$\eta \approx \frac{B_r}{f_r}. \tag{5.48}$$

It can be shown (Kennedy and Pancu, 1947) that, for structural damping, a circle may be obtained in the Nyquist plot, just as for viscous damping. For hysteretic damping, however, the circle is formed when we plot dynamic flexibility, and the circle has its center at 1∕(2𝜂) and a radius of 1∕(2𝜂). Hysteretic damping is sometimes available as an option in software for experimental modal analysis.
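As a quick numerical check of Equations (5.46)–(5.48), the following MATLAB/Octave sketch overlays the viscous and hysteretic dynamic flexibilities of an SDOF system with 𝜂 = 2𝜁. The mass, natural frequency, and damping values are arbitrary assumptions used only for illustration.

% Viscous vs. hysteretic (structural) damping, SDOF dynamic flexibility
m  = 1;  fn = 50;  z = 0.02;        % assumed values
k  = m*(2*pi*fn)^2;                  % stiffness from fn and m
eta = 2*z;                           % loss factor per Equation (5.47)
f  = linspace(1, 150, 2000);
r  = f/fn;
Hv = (1/k) ./ (1 - r.^2 + 1i*2*z*r);     % viscous model
Hh = (1/k) ./ (1 - r.^2 + 1i*eta);       % hysteretic model, Equation (5.46)
semilogy(f, abs(Hv), f, abs(Hh), '--')
legend('Viscous', 'Hysteretic'), xlabel('Frequency (Hz)'), ylabel('|H_u(f)| (m/N)')
% Near f = fn the two models coincide; away from resonance they differ because
% the hysteretic damping term does not depend on the frequency ratio.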


5.8 Models Based on SDOF Approximations

After this introduction of the SDOF system, we will now discuss a few cases where SDOF models are commonly used with great success. The first application, presented in Section 5.8.1, is vibration isolation, which is very common. Vibration isolators are found in a large variety of products such as cars, washing machines, and electronic devices such as computers. In Section 5.8.2, we will deduce two very useful relationships between static deflection and resonance frequencies and point to some applications of those relationships. A third application where the SDOF system is used is the shock-response spectrum, SRS, which is presented in Section 18.1.

5.8.1 Vibration Isolation

The supercritical operation mentioned in Section 5.6 leads us to the application of vibration isolation. Vibration isolation design is often based on the single degree-of-freedom (SDOF) model. There are two different reasons to use vibration isolation, which we will discuss separately:

1. We wish to protect a sensitive device from a vibrating environment, for example the control electronics box for engine control mounted on top of an engine with high vibration levels.
2. We wish to protect the environment around a vibrating device from the vibration force from the device, for example an engine in a vehicle, whose vibrations we do not wish to propagate through the vehicle body.

These two cases are illustrated in Figure 5.11. In the first case, in Figure 5.11(a), we can define the amount of vibration isolation as the ratio of the vibration of the device and the vibration of the base. This is independent of whether we choose displacement, velocity, or acceleration as the vibration parameter, as the ratio (for harmonic excitation) will remain the same. We start by setting up Newton's equation for the mass of the device:

$$m\ddot{y} = -c\left(\dot{y} - \dot{x}\right) - k\left(y - x\right). \tag{5.49}$$


Figure 5.11 Two cases of vibration isolation: (a) illustrates the case of some sensitive equipment to be protected from environment vibrations, whereas (b) illustrates the case of a vibrating source, for example an engine, from which the environment is to be protected.


We solve this equation by Laplace transforming it and regrouping the terms, which results in

$$\left[ms^2 + cs + k\right]Y(s) = (cs + k)X(s). \tag{5.50}$$

From Equation (5.50), it follows that the transfer function between the vibration of the sensitive device and the vibration of the base is

$$\frac{Y(s)}{X(s)} = \frac{(cs + k)/m}{s^2 + s\,c/m + k/m}. \tag{5.51}$$

The denominator of Equation (5.51) is the same as the one we obtained for an SDOF system in Equation (5.18), and thus we have that

$$\omega_n = 2\pi f_n = \sqrt{\frac{k}{m}} \tag{5.52}$$

and

$$\zeta = \frac{c}{2\sqrt{mk}}. \tag{5.53}$$

Substituting Equations (5.52) and (5.53) into Equation (5.51) and simultaneously setting s = j𝜔 leads (after a few steps) to the frequency response

$$\frac{Y(f)}{X(f)} = \frac{1 + j2\zeta\left(f/f_n\right)}{1 - \left(f/f_n\right)^2 + j2\zeta\left(f/f_n\right)}. \tag{5.54}$$

The frequency response in Equation (5.54) is plotted in Figure 5.12 as a function of the relative frequency f∕fn.


Figure 5.12 Vibration isolation defined as the frequency response, for different relative damping ratios, 𝜁, of, in case (a) the vibration level of the sensitive device, Y(f ), with the level of the base, X(f ), and in case (b) the force level of the base (environment) with the force from the device, see Equations (5.54) and (5.55), respectively. √ As depicted in the figure, actual vibration isolation is obtained for relative frequencies above 2.


It is clear from this figure that, for frequencies larger than √2 fn, the vibration level of the sensitive device is lower than that of the base, and thus we have vibration isolation. A drawback, however, is that at frequencies lower than √2 fn, the sensitive device will actually have higher vibration levels than the base. Therefore, vibration isolation works best when the frequency content in operation is limited to frequencies above a certain frequency, so that the isolator can be designed to give isolation for all operating conditions. This is usually the case for most engines, pumps, etc., except during startup of the device. In most cases, where the startup is sufficiently fast, this does not pose a problem.

As seen in Figure 5.12, the isolation effect increases with decreasing damping. Using an insufficient damping level can, however, result in large displacements, especially when transient vibrations (shocks) occur, for example when a vehicle passes a bump. In such cases, the vibration isolator has to be designed with enough damping so that the displacement does not become larger than can be handled by the spring, as every real spring has a limit for how much it can be compressed. If the spring is compressed to its limit, the resulting acceleration shock level can become very high, causing the device to break.

For the second case, in Figure 5.11(b), the vibration isolation is defined as the ratio of the force on the base and the force from the device producing the vibrations, Fe∕F, as shown in the figure. An analysis of this case, which is left for Problem 5.4, yields a frequency response identical to that of Equation (5.54). Note, however, that in this case, the frequency response is the ratio of two forces,

$$\frac{F_e(f)}{F(f)} = \frac{1 + j2\zeta\left(f/f_n\right)}{1 - \left(f/f_n\right)^2 + j2\zeta\left(f/f_n\right)}. \tag{5.55}$$

The above equations are naturally approximations to any real case, as any structure we mount on vibration isolators will of course not be moving in just a single direction. Moreover, in practice, many vibration isolator designs will not even be linear, for example because they contain rubber parts to add damping. Rubber is a highly nonlinear material with temperature sensitivity and many other properties that may make the isolator nonlinear. The equations deduced above are still very useful from a design standpoint as a first approximation. Usually, the manufacturers of isolators can aid in the selection and application of a particular isolator design.
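To reproduce the essence of Figure 5.12, the short MATLAB/Octave sketch below evaluates the transmissibility of Equation (5.54) (identical in form to Equation (5.55)) for a few damping ratios; the chosen 𝜁 values simply mirror the legend of the figure.

% Vibration isolation (transmissibility), Equation (5.54)
r = linspace(0, 4, 1000);            % relative frequency f/fn
zeta = [0.01 0.05 0.1 1];            % damping ratios as in Figure 5.12
T = zeros(length(zeta), length(r));
for n = 1:length(zeta)
    T(n,:) = abs((1 + 1i*2*zeta(n)*r) ./ (1 - r.^2 + 1i*2*zeta(n)*r));
end
semilogy(r, T)
xlabel('Relative frequency, f/f_n'), ylabel('|Y(f)/X(f)| or |F_e(f)/F(f)|')
legend('\zeta = 0.01', '\zeta = 0.05', '\zeta = 0.1', '\zeta = 1')
% Isolation (|T| < 1) is obtained only above f/f_n = sqrt(2); less damping gives
% better isolation there, at the price of a higher resonance peak.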

5.8.2 Resonance Frequency and Stiffness Approximations

Another case where an SDOF model can be used for a first approximation is to determine the resonance frequency that will be added to a structure when mounting a new component to an existing dynamic structure. In Chapter 6, we will show that continuous structures (if damping is low enough) exhibit resonant behavior. If we add another dynamic structure to the first structure, the new combined structure will have dynamic properties that are rather complicated to predict. This field is called substructuring, and more details can be found in, for example, Inman (2007), Ewins (2000), and Craig and Kurdila (2006). As a first approximation, however, there is a simple relationship that can sometimes be successfully used, when the second structure which is added to the first structure can be approximated as a rigid mass.


If this mass, m, is added to the structure, and we can calculate the static displacement, d, caused by the mass (which can often be simply calculated if we have a finite element model of the dynamic structure), we can calculate an approximate point stiffness, k, by the simple equation

$$k = \frac{mg}{d}, \tag{5.56}$$

where g is the gravitational acceleration, g = 9.806 m/s², and all units are assumed to be SI units. If we now use the stiffness of Equation (5.56) to define an SDOF system together with the mass, we also have that the resonance frequency, fr, of this SDOF system will be

$$f_r = \frac{1}{2\pi}\sqrt{\frac{k}{m}}. \tag{5.57}$$

Substituting k from Equation (5.56) into Equation (5.57), by a simple calculation we get that

$$f_r = \frac{\sqrt{g}}{2\pi\sqrt{d}} \approx \frac{0.5}{\sqrt{d}}. \tag{5.58}$$

From this equation, we see that the resonance frequency is only dependent on the static deflection, d, caused by the mass. Equation (5.58) can be rewritten in the form

$$d = \frac{g}{\left(2\pi f_r\right)^2} \approx \frac{0.25}{f_r^2}. \tag{5.59}$$

The result in Equation (5.59) is useful to consider in experimental modal analysis, when trying to achieve free–free boundary conditions by supporting a test structure with a soft spring. From the equation, we see that obtaining a particular resonance frequency when hanging a structure from a soft spring will result in a particular displacement, or extension of the spring from its unloaded condition. If, for example, we want the resonance frequency to be 1 Hz, Equation (5.59) yields that the extension (or compression) of the spring will be 25 cm.
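The following MATLAB/Octave sketch applies Equations (5.58) and (5.59); the 1 mm deflection and the 1 Hz target are simply the illustrative numbers used in the text and in Problem 5.5.

% Resonance frequency from static deflection, Equations (5.58) and (5.59)
g = 9.806;                        % gravitational acceleration [m/s^2]
d = 1e-3;                         % static deflection of 1 mm [m]
fr = sqrt(g)/(2*pi*sqrt(d))       % approx. 0.5/sqrt(d), about 15.8 Hz
% ...and the other way around: spring extension needed for fr = 1 Hz
fr_target = 1;                    % desired rigid-body frequency [Hz]
d_needed = g/(2*pi*fr_target)^2   % approx. 0.25/fr^2 = 0.25 m (25 cm)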

5.9 The Two Degree of Freedom System (2DOF)

A mechanical system consisting of two masses is of special interest in some applications. We will therefore now study such a system in some detail. In Chapter 6, we will introduce the more general multiple degrees-of-freedom (MDOF) system with an arbitrary number of degrees of freedom. To simplify the equations, we will restrict the treatment in this section to an undamped 2DOF system. A general illustration of a 2DOF system is shown in Figure 5.13. Newton's equations for each of the two masses give an equation system:

$$\begin{cases} m_1\ddot{u}_1 = F_1 - k_1 u_1 - k_2(u_1 - u_2), \\ m_2\ddot{u}_2 = F_2 - k_2(u_2 - u_1) - k_3 u_2. \end{cases} \tag{5.60}$$

We are now interested in finding the free vibrations, the (undamped) natural frequencies, which we obtain when the forces are zero. If we rewrite the equations somewhat, we obtain

$$\begin{cases} m_1\ddot{u}_1 + k_1 u_1 + k_2(u_1 - u_2) = 0, \\ m_2\ddot{u}_2 + k_2(u_2 - u_1) + k_3 u_2 = 0. \end{cases} \tag{5.61}$$


Figure 5.13 Mechanical system with two degrees of freedom, the 2DOF system.


The solutions to Equation (5.61) can be found by assuming a trial solution. This is an alternative technique to the Laplace transform, which we use here to illustrate a common approach found in many textbooks on vibration. In Chapter 6, we will use a more general way to solve this equation. We can safely assume (referring to the results for the SDOF system) that if there are any solutions to Equation (5.61), they will be harmonic (oscillating). Thus, we assume a solution of the form

$$\begin{cases} u_1(t) = U_1\sin(\omega t), \\ u_2(t) = U_2\sin(\omega t). \end{cases} \tag{5.62}$$

This gives us the second derivatives

$$\begin{cases} \ddot{u}_1(t) = -\omega^2 u_1(t), \\ \ddot{u}_2(t) = -\omega^2 u_2(t). \end{cases} \tag{5.63}$$

Substituting the two last sets of equations into Equation (5.61) gives

$$\begin{cases} \left[\left(-m_1\omega^2 + k_1 + k_2\right)U_1 - k_2 U_2\right]\sin(\omega t) = 0, \\ \left[-k_2 U_1 + \left(-m_2\omega^2 + k_2 + k_3\right)U_2\right]\sin(\omega t) = 0. \end{cases} \tag{5.64}$$

Solutions to Equation (5.64) will now be those combinations of U1, U2, and 𝜔 that satisfy the equations. Generally, there will not be any unique such solutions, but if we set the ratio U1∕U2 = U we obtain, from the first equation in Equation (5.64), that

$$U = \frac{-k_2}{m_1\omega^2 - k_1 - k_2}, \tag{5.65}$$

and from the second equation

$$U = \frac{m_2\omega^2 - k_2 - k_3}{-k_2}. \tag{5.66}$$

The two last equations must apply simultaneously, and thus

$$\frac{-k_2}{m_1\omega^2 - k_1 - k_2} = \frac{m_2\omega^2 - k_2 - k_3}{-k_2}. \tag{5.67}$$

This equation leads to a fourth-order polynomial in 𝜔,

$$\omega^4 - \omega^2\left[\frac{k_1 + k_2}{m_1} + \frac{k_2 + k_3}{m_2}\right] + \frac{k_1 k_2 + k_1 k_3 + k_2 k_3}{m_1 m_2} = 0. \tag{5.68}$$

Equation (5.68), sometimes referred to as the frequency equation, has four solutions in 𝜔. As for the SDOF system, these roots will come in complex conjugate pairs. This means that


our system with two degrees of freedom has two natural frequencies, which are solutions to Equation (5.68). For each of those two solutions, there will be a unique ratio of the displacements of the masses, U = U1∕U2, i.e., the two masses move in a specific way relative to each other.

Example 5.9.1 Let us illustrate the above discussion of a 2DOF system with an example. Assume both masses are equal, as well as all the springs, i.e.,

$$\begin{cases} m_1 = m_2 = m, \\ k_1 = k_2 = k_3 = k. \end{cases} \tag{5.69}$$

Substituting this into Equation (5.68), we obtain

$$\omega^4 - \omega^2\frac{4k}{m} + \frac{3k^2}{m^2} = 0, \tag{5.70}$$

with the solutions

$$\omega_{1,2}^2 = \frac{2k}{m} \pm \sqrt{\frac{4k^2}{m^2} - \frac{3k^2}{m^2}} = \frac{(2 \pm 1)k}{m} = \begin{cases} k/m, \\ 3k/m. \end{cases} \tag{5.71}$$

Inserting the first of these frequencies into either Equation (5.65) or (5.66) gives

$$\frac{U_1}{U_2} = +1. \tag{5.72}$$

The second frequency in Equation (5.71) in the same way gives

$$\frac{U_1}{U_2} = -1. \tag{5.73}$$

End of example.

The solutions in Equations (5.72) and (5.73) are very important. They imply that for the system with two degrees of freedom, there are two natural frequencies, eigenfrequencies, or resonance frequencies, at which the system can oscillate by itself, without any applied force, just as was the case for one frequency in an SDOF system. For each of these frequencies, there is a unique relation between the displacements of the two masses. We call this relative motion a mode shape, which is related to the fact that a resonance is also called a mode (which we will discuss further in Section 6.1). An undamped 2DOF system will always have one mode where the two masses move in phase, and one mode where they move out of phase. The sizes of the relative displacements depend on the sizes of the masses and springs; the equal magnitudes in our example above, Equations (5.72) and (5.73), are a special case, since we chose the masses and springs equal to each other. For physical systems, which will of course have damping, the problem becomes somewhat more difficult. We will discuss this in more detail in Chapter 6.
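The numerical counterpart of Example 5.9.1 can be checked in MATLAB/Octave with a few lines; the value k = m = 1 is an arbitrary assumption used only to make the frequency ratio visible.

% Undamped 2DOF system of Example 5.9.1: equal masses and equal springs
m = 1; k = 1;                        % assumed values; only ratios matter
M = [m 0; 0 m];
K = [2*k -k; -k 2*k];                % k1 = k2 = k3 = k
[Psi, L] = eig(M\K);                 % eigenvalues are omega^2
omega = sqrt(diag(L))                % gives sqrt(k/m) and sqrt(3k/m)
Psi                                  % columns are mode shapes, +1/+1 and +1/-1 patterns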

5.10 The Tuned Damper

We will conclude this chapter by studying a particular application of the 2DOF system that can be used to reduce vibrations. Assume that we have an SDOF system with a mass M, a spring with stiffness k, and damping c. We then add a second SDOF system with ma, ka, and ca, respectively, as illustrated in Figure 5.14.


The first SDOF system could, for example, be a (simplified) model of a machine on its foundation, modeled as a single mass moving translationally, with stiffness and damping from the machine mounts. The second SDOF system is called a tuned damper or a tuned absorber. Assume we force the system in Figure 5.14 with a harmonic force, corresponding to the operating speed of the machine. Since the system is linear, if we force it with a harmonic force, the resulting vibration, u1, will be harmonic. To find the displacement of the SDOF mass, M, for a particular force, we formulate Newton's equations (neglecting the damping for the moment) and get

$$\begin{cases} \left[\left(-M\omega^2 + k + k_a\right)U_1 - k_a U_2\right]\sin(\omega t) = F\sin(\omega t), \\ \left[-k_a U_1 + \left(-m_a\omega^2 + k_a\right)U_2\right]\sin(\omega t) = 0. \end{cases} \tag{5.74}$$


Figure 5.14 Illustration of the tuned damper. The damper consists of ma , ca , ka , which are attached to the SDOF system consisting of M, c, k.

The second equation in Equation (5.74) gives

$$U_2 = \frac{k_a}{k_a - m_a\omega^2}\,U_1, \tag{5.75}$$

which, substituted into the first equation in Equation (5.74), gives

$$U_1 = \frac{k_a - m_a\omega^2}{\left(k_a - m_a\omega^2\right)\left(k + k_a - M\omega^2\right) - k_a^2}\,F. \tag{5.76}$$

Equation (5.76) may seem complicated at first, but the interesting part for us at the moment is the numerator. It follows from the numerator that there is an angular frequency 𝜔a for which the mass M will not move at all. This is an antiresonance, a phenomenon that will be discussed in more detail in Section 6.4.6. By choosing the mass ma and the spring ka appropriately, such that 𝜔a corresponds to the natural frequency of the SDOF system M, c, k, we can apparently entirely remove the vibrations at that frequency. Thus, we choose ma and ka so that

$$\omega_a = \sqrt{\frac{k_a}{m_a}} = \omega_r = \sqrt{\frac{k}{M}}. \tag{5.77}$$

Note especially in Equation (5.77) that the resonance frequency of the tuned damper, considered as a separate SDOF system, equals the frequency at which the vibration becomes zero. This principle also applies to the case where a tuned damper is attached to a continuous structure. Tuned dampers can be purchased "off-the-shelf" from vibration isolator manufacturers. When we add damping to the tuned damper, Equation (5.77) will turn into an expression similar to the poles of the SDOF system, see for example the denominator of Equation (5.18). Thus, the corresponding term (the numerator of the new expression for U1) will not equal zero at the tuned frequency, but rather be some small number, depending on the damping. In practice, we can choose how much attenuation we want to achieve at the tuned frequency by changing the damping of the tuned damper.

The vibration absorption does of course not come without cost. While we can reduce the vibration at a particular frequency, 𝜔a, the complete 2DOF system will have two resonance frequencies.



Figure 5.15 Frequency response (dynamic flexibility) U1 ∕F in Figure 5.14, before and after attachment of a tuned damper. As can be seen in the figure, after attaching the tuned damper, an antiresonance occurs at the tuned frequency. In the figure, it has been assumed that the damping of the tuned damper was 10 times the damping of the SDOF system.

In Figure 5.15, an example of the frequency response of the displacement, u1, with the force, F1, of the system in Figure 5.14 with an added tuned damper is plotted. Example values of M, c, and k have been used, and the tuned damper values were chosen so that the damping of the tuned damper was 10 times that of the SDOF system, and the frequency of the tuned damper was chosen equal to the resonance frequency of the SDOF system. It should be especially noted in Figure 5.15 that an effect of the higher damping in the tuned damper is that both resonances of the new system, including the tuned damper, obtain higher damping. This is a general property of MDOF systems, as energy dissipation anywhere on the structure naturally helps to attenuate all vibrations in the structure.
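A small MATLAB/Octave sketch in the spirit of Figure 5.15 is given below. All numerical values (M, k, damping, and the absorber mass ratio) are invented for illustration and are not the example values used for the actual figure.

% Dynamic flexibility U1/F1 of the system in Figure 5.14, with and without
% a tuned damper (illustrative parameter values only)
M  = 10; k = 1e6;                       % main mass [kg] and stiffness [N/m]
wr = sqrt(k/M); zeta = 0.01;            % main system resonance and damping
c  = 2*zeta*sqrt(k*M);
ma = 0.1*M;                             % absorber mass, 10% of M (assumed)
ka = ma*wr^2;                           % tuned so that sqrt(ka/ma) = wr
ca = 10*c;                              % absorber damping, 10x that of the SDOF
f  = linspace(1, 300, 3000); w = 2*pi*f;
H1 = zeros(size(f)); H2 = H1;
for n = 1:length(w)
    H1(n) = 1/(-M*w(n)^2 + 1i*w(n)*c + k);        % SDOF alone
    Z = [-M*w(n)^2 + 1i*w(n)*(c+ca) + k+ka, -(1i*w(n)*ca + ka); ...
         -(1i*w(n)*ca + ka), -ma*w(n)^2 + 1i*w(n)*ca + ka];
    U = Z\[1; 0];                                  % unit force on M
    H2(n) = U(1);
end
semilogy(f, abs(H1), f, abs(H2))
legend('FRF of SDOF only', 'FRF w. tuned damper')
xlabel('Frequency (Hz)'), ylabel('Dynamic flexibility (m/N)')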

5.11 Chapter Summary

In this chapter, we have studied a mechanical system with one degree of freedom, the SDOF system. If we have a system with mass, damping, and stiffness m, c, and k, respectively, we found that such a system will have an undamped natural frequency fn in [Hz] of

$$f_n = \frac{\omega_n}{2\pi} = \frac{1}{2\pi}\sqrt{\frac{k}{m}}, \tag{5.78}$$

where 𝜔n is the natural angular frequency in [rad/s] and 𝜔n = 2𝜋fn. The SDOF system will have a relative damping, 𝜁, of

$$\zeta = \frac{c}{2\sqrt{mk}}. \tag{5.79}$$


The impulse response h(t) of the SDOF system consists of an exponentially decaying sine wave described by

$$h(t) = \frac{1}{m\omega_d}e^{-\zeta\omega_n t}\sin\left(\omega_d t\right), \tag{5.80}$$

where the damped natural frequency, 𝜔d in [rad/s], is defined by

$$\omega_d = 2\pi f_n\sqrt{1 - \zeta^2}. \tag{5.81}$$

Example impulse responses were plotted in Figure 5.2. The FRF H(f) in the form of dynamic flexibility is

$$H_u(f) = \frac{U(f)}{F(f)} = \frac{1/k}{1 - \left(f/f_n\right)^2 + j2\zeta\left(f/f_n\right)}. \tag{5.82}$$

The magnitude of this FRF (plotted in Figure 5.3) will have a maximum at the damped resonance frequency

$$f_{max} = f_n\sqrt{1 - 2\zeta^2}, \tag{5.83}$$

which is slightly different from the damped natural frequency $f_d = f_n\sqrt{1 - \zeta^2}$ at which the impulse response oscillates. Experimentally, it is more common to obtain the mobility FRF,

$$H_v(f) = \frac{V(f)}{F(f)} = \frac{j(2\pi f)/k}{1 - \left(f/f_n\right)^2 + j2\zeta\left(f/f_n\right)}, \tag{5.84}$$

or the accelerance,

$$H_a(f) = \frac{A(f)}{F(f)} = \frac{-(2\pi f)^2/k}{1 - \left(f/f_n\right)^2 + j2\zeta\left(f/f_n\right)}. \tag{5.85}$$

5.12 Problems

Many of the following problems are supported by the accompanying ABRAVIBE toolbox for MATLAB/Octave and further examples which can be downloaded with the toolbox. If you have not already done so, please read Section 1.6. ABRAVIBE is completely free and can be downloaded from www.abravibe.com, together with example files and other material complementing this book, including a solutions manual for all book problems.

Problem 5.1 Write a MATLAB/Octave function which calculates the impulse response h(t) of an SDOF system with a mass of 1 kg, and with the undamped resonance frequency fn = 𝜔n∕2𝜋 and the relative damping 𝜁 as input parameters. The function could be defined as, for example,
function [h, t] = fz2impresp(fn,z)
Run the function for different values of fn and 𝜁 and plot the results. Then observe, for a given fn, how many periods of oscillation you see for different values of the damping 𝜁.

Problem 5.2 Write a MATLAB/Octave function with the definition
function [Hv, f] = sdofmob(fn, z)


which calculates the mobility frequency response Hv(f) in Equation (5.28), given the undamped resonance frequency fn = 𝜔n∕2𝜋 and the relative damping 𝜁. Assume the mass to be 1 kg. Use the function to plot the mobility for fn = 1 Hz, and 𝜁 = 0.01, 0.05, and 0.1. Plot the three frequency responses overlaid, with linear scales as well as logarithmic y-scale (using the MATLAB/Octave semilogy command), and log–log scale (loglog command). Observe that with a nominal resonance frequency of 1 Hz, the frequency axis will be equal to the normalized frequency r = f∕fn. Then write a new MATLAB/Octave function sdofacc, similar to the previous function, but which calculates the accelerance frequency response. Plot the accelerances for the same values of frequency and damping as for the mobility. Compare the results with those for the mobility.

Problem 5.3 Calculate the undamped resonance frequency fn and relative damping 𝜁 of a mechanical single degree-of-freedom system with the following parameters: m = 4 kg, k = 10⁶ N/m, and c = 80 N s/m. Use the sdofacc function from Problem 5.2 to compute the accelerance and plot it. Then use a suitable method from Section 5.5 to find the mass, stiffness, and damping values. How close can you get? (The accuracy of particularly the relative damping estimate will be poor from a visual inspection, so do not expect a very high accuracy for c; m and k should be much easier to obtain good results for.) Hint: Use the asymptotic behavior at higher frequencies to obtain the mass, then the peak in the imaginary part to find the resonance frequency, from which the stiffness can be calculated using the mass estimate. Finally, find the damping 𝜁 using the 3 dB bandwidth and use the appropriate equation from Section 5.5 to find the viscous damping.

Problem 5.4 Prove that the vibration isolation case in Figure 5.11(b) leads to Equation (5.55).

Problem 5.5 Assume that you are adding a mass to a beam. The beam gets a static deflection of 1 mm due to the added mass. Use Equation (5.58) to obtain an approximate estimate of the resulting resonance frequency.

References

Craig RR and Kurdila AJ 2006 Fundamentals of Structural Dynamics. John Wiley.
Den Hartog JP 1985 Mechanical Vibrations. Dover Publications Inc.
Ewins DJ 2000 Modal Testing: Theory, Practice and Application 2nd edn. Research Studies Press, Baldock, Hertfordshire, England.
Inman D 2007 Engineering Vibration 3rd edn. Prentice Hall.
ISO 2641 1990 Vibration and shock – vocabulary.
Kennedy C and Pancu C 1947 Use of vectors in vibration measurement and analysis. Journal of the Aeronautical Sciences 14(11), 603–625.
Maia N and Silva J 2003 Theoretical and Experimental Modal Analysis. Research Studies Press, Baldock, Hertfordshire, England.
Newton I 1687 Philosophiæ Naturalis Principia Mathematica. London.
Norfield D 2006 Practical Balancing of Rotating Machinery. Elsevier Science.
Rao S 2003 Mechanical Vibrations 4th edn. Pearson Education.


6 Modal Analysis Theory

Modal analysis, which is a part of the wider subject of structural dynamics, is the theory dealing with the dynamics of mechanical systems described by modes. In this chapter, we will show a general approach related to vibrations in mechanical systems with more than one (or two) degrees of freedom. We will show that these systems can be condensed down to their poles and mode shapes, and we will show how this condensation is done. The theory in this chapter is essential to understand vibrations, as we will get answers to questions such as: what will the vibration level be in, e.g., DOF 63 in the Z direction, if we apply a harmonic force of 10 N in DOF 12 in the X direction (see Section 6.2.1 for a discussion of degrees of freedom, DOF)? We will see that the answer to such questions lies in the mode shapes and poles of the system (structure). Modal analysis is a comprehensive subject, and this chapter will by necessity be limited. It includes, however, all the essential information for the beginner to understand the concepts of modal analysis from an experimental perspective. In addition, engineers working with analytical modal analysis may also find it useful as an overview, helping them to understand experimental results. This chapter does not include all the necessary information to fully understand mechanical system simulation such as used in, for example, the finite element method, FEM. The reader interested in more in-depth material can find that in dedicated books on modal analysis (Ewins 2000; Heylen et al. 1997; Maia and Silva 2003) and structural dynamics (Craig and Kurdila 2006). The outline of this chapter is different from many textbooks on mechanical vibration because its focus is on what we can obtain experimentally. This means that synthesis of frequency response functions (FRFs) and mode shape scaling form a central part of the chapter.

6.1 Waves on a String

Most of this chapter will deal with the so-called lumped parameter systems, i.e., systems with discrete masses, dampers, and springs. Modal analysis is, however, also strongly related to wave theory, the study of waves in continuous structures. Indeed, modes in continuous structures are identical to the modes we will obtain later in this chapter, at the points where



we define our mass locations (assuming we select these carefully, at least; I am referring to a principal equality). I therefore think it is appropriate to include a short introduction to the concept of modes for continuous systems to remind the reader that modes and waves are dual theories, which is to say they answer the same question: what vibrations do we get for particular force inputs?

It is assumed that we are at least briefly acquainted with the modes of an ideal string, fixed at both ends. The modes on a fixed string are shaped as sines and have the spatial shape

$$v(x) = \sin\left(\frac{n\pi x}{L}\right), \tag{6.1}$$

where x is the coordinate along the string, from 0 to L, the length of the string. Each mode is a standing wave, where all points move either in phase or out of phase, and for each mode, n, there is an associated natural frequency fn [Hz] at which mode n oscillates, so that an individual point along the mode at, say, point x0 has a sinusoidal motion

$$u(x_0, t) = v(x_0)\sin(\omega_n t), \tag{6.2}$$

where the angular natural frequency 𝜔n = 2𝜋fn as usual. The important thing now is to realize how the modes are used to describe a particular vibration pattern. Let us say that at t = 0, we have a particular deformation of the string, and we then release it. To describe the vibration pattern that results, the modes are then "tuned" so that the sum of all the modes at t = 0 equals the initial deformation, and after the release of the string, each mode starts to oscillate at its own natural frequency. The "tuning" consists of setting a particular amplitude and phase of each mode, so that the sum of all the modes at t = 0 results in the initial deformation shape. (Note that we know from the theory of Fourier series that there is indeed such a solution for any deformation pattern, since all the modes are sinusoids. We can always take the deformation pattern between x = 0 and x = L, make it repeat periodically outside this region, and split it into a Fourier series. The result of the Fourier series is the amplitudes and phases of all the sines.) The result is two "waves," or deformation patterns, each half as large as the original pattern (since the total pattern is the sum of the two), one moving in each direction on the string, as you have probably seen many times. The waves hit the boundaries at x = 0 and x = L, where they are reflected (change sign, so if the wave shape was pointing upwards, for example, it is pointing downward after the reflection) and move back and forth for all eternity. That is, if there is no damping.

An alternative way to describe the deformation patterns moving back and forth is by using solutions to the wave equation of the form

$$u(x, t) = U_+(x)e^{j(\omega t - kx)} + U_-(x)e^{j(\omega t + kx)}, \tag{6.3}$$

for example. In order to use these solutions, however, there must be a known closed-form solution; we need to have the equation. The versatility of modes lies in the fact that the modes give exactly the same answers (the modes can be calculated from the wave equation), but the modes can be calculated or experimentally determined in many cases, for which we do not know the closed-form solutions to the wave equations. Furthermore, even rather basic mechanical geometries, such as plates and beams, have several wave types which sum up to the total solution. In most cases, these are very difficult (read: impossible) to obtain


on real-life structures such as bridges, buildings, airplanes, and refrigerators. However, the modes can be obtained in several ways; most commonly from a finite element (FE) model or from an experimental modal analysis. Once the modes are known, simulations can be made, as we will see later in this chapter.
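To make the duality concrete, the short MATLAB/Octave sketch below plots the first few string mode shapes of Equation (6.1) and a deformation built as a sum of them; the string length and the chosen modal amplitudes are arbitrary assumptions.

% First string modes, Equation (6.1), and a deformation composed of them
L = 1;                              % string length [m] (assumed)
x = linspace(0, L, 200);
V = zeros(3, length(x));
for n = 1:3
    V(n,:) = sin(n*pi*x/L);         % mode shapes v_n(x)
end
a = [1 0.3 0.1];                    % assumed modal amplitudes at t = 0
u0 = a*V;                           % initial deformation as a modal sum
subplot(2,1,1), plot(x, V), title('Mode shapes n = 1, 2, 3')
subplot(2,1,2), plot(x, u0), title('Deformation at t = 0 as a sum of modes')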

6.2 Matrix Formulations

We will develop a general theory for multiple degrees of freedom (MDOF) systems by looking at the 2DOF system from Section 5.9, illustrated in Figure 5.13. To make the matrix notation easy to read, we introduce the following symbols for matrix formulations. A rectangular matrix will be denoted by brackets, e.g., [M], a diagonal matrix will be denoted by, e.g., ⌊Mr⌋, column vectors will be denoted by curly brackets, e.g., {u}, and row vectors will be denoted by lower brackets, e.g., ⌊Hp⌋ = {Hp}ᵀ, see also Appendix D. For the 2DOF system in Figure 5.13, Newton's second law gives the following equation system:

$$\begin{cases} m_1\ddot{u}_1 = F_1 - c_1\dot{u}_1 - k_1 u_1 - c_2(\dot{u}_1 - \dot{u}_2) - k_2(u_1 - u_2), \\ m_2\ddot{u}_2 = F_2 - c_2(\dot{u}_2 - \dot{u}_1) - k_2(u_2 - u_1) - c_3\dot{u}_2 - k_3 u_2, \end{cases} \tag{6.4}$$

which we can write in matrix form if we define the following matrices. We define the mass matrix, [M], as

$$[M] = \begin{bmatrix} m_1 & 0 \\ 0 & m_2 \end{bmatrix}, \tag{6.5}$$

the damping matrix, [C], as

$$[C] = \begin{bmatrix} c_1 + c_2 & -c_2 \\ -c_2 & c_2 + c_3 \end{bmatrix}, \tag{6.6}$$

and the stiffness matrix, [K], as

$$[K] = \begin{bmatrix} k_1 + k_2 & -k_2 \\ -k_2 & k_2 + k_3 \end{bmatrix}. \tag{6.7}$$

Furthermore, we define the displacement vector, {u}, as a column vector,

$$\{u\} = \begin{Bmatrix} u_1 \\ u_2 \end{Bmatrix}, \tag{6.8}$$

and the force vector, {F}, as a column vector,

$$\{F\} = \begin{Bmatrix} F_1 \\ F_2 \end{Bmatrix}. \tag{6.9}$$

With these definitions, Equation (6.4) can be written more compactly as

$$[M]\{\ddot{u}\} + [C]\{\dot{u}\} + [K]\{u\} = \{F(t)\}. \tag{6.10}$$

The formulation in Equation (6.10) is of course not limited to our 2DOF system, but general to all MDOF systems with proper formulations of the mass, damping, and stiffness matrices, and the displacement and force vectors.


6.2.1 Degree of Freedom

The concept of degree of freedom, DOF, is important for the discussion in this chapter. In this chapter, we discuss MDOF systems built up by lumped parameters, i.e., discrete masses, dampers, and springs. For such a system, the motion of each mass is a DOF, i.e., we always have as many DOFs as we have masses. Conversely, we can define the number of degrees of freedom necessary to describe a system (structure) as the number necessary to specify the instantaneous position of all points on the structure at a given time. Thus, for a three-dimensional structure, we will typically have six DOFs for each point on the structure we wish to describe the motion of: three translational and three rotational. The most common situation is that the lumped parameter model represents some continuous structure, in which case it is, of course, a matter of approximating the infinite number of DOFs on the continuous structure by a discrete set of lumped masses. Producing these lumped masses, dampers, and springs to correctly represent a continuous structure is an important part of structural dynamics, see for example Craig and Kurdila (2006). In this chapter, we will not discuss producing the matrices, etc., as we are focused on the principal results of them in order to understand what experimental data will look like, and why. It is, however, worth mentioning that in FE models, currently millions of DOFs can be used to produce accurate models of structures, whereas experimentally, few of us can afford to use more than a few hundred, and very often just a few tens. Perhaps it should be mentioned for clarity that what a FEM software essentially does is to build the mass and stiffness matrices.

It should be noted that what we mean by a degree of freedom in experimental mechanics (see for example Chapter 16 and Section 19.6) is a particular point and a certain direction where we have one or more transducers. An "experimental DOF" can, for example, be point 22 in the −Y direction, usually written as "22Y−" or similar. Thus, we can have up to three translational DOFs in each point. In addition to that, we can have three rotational DOFs in each point, although transducers for rotational DOFs are not readily available, so it is rare to obtain rotational DOFs experimentally.

6.3 Eigenvalues and Eigenvectors

To find general solutions for our 2DOF (and generally, MDOF) system, we will look at three cases separately:

- an undamped system, for which [C] = 0,
- a proportionally damped system, for which [C] = a[M] + b[K] for real constants a and b, and
- a generally damped system, where [C] is any damping matrix.

6.3.1 Undamped System

We will start by looking at the free vibrations, i.e., solutions to Equation (6.10) when {F} = {0}. For the undamped system and for free vibrations, we have a special case of Equation (6.10),

$$[M]\{\ddot{u}\} + [K]\{u\} = \{0\}, \tag{6.11}$$


where we use the zero column vector {0} to emphasize that it is a matter of a vector on the right-hand side. We start by Laplace transforming Equation (6.11), which yields

$$\left(s^2[M] + [K]\right)\{U(s)\} = \{0\}. \tag{6.12}$$

This equation can be reformulated by multiplying with the inverse of [M] and rearranging the two terms in the equation as follows:

$$\left([M]^{-1}[K] + s^2[I]\right)\{U(s)\} = \{0\}. \tag{6.13}$$

If we compare this equation with the "standard form" of an eigenvalue problem with eigenvalues 𝜆,

$$\left([A] - \lambda[I]\right)\{x\} = \{0\}, \tag{6.14}$$

we see that Equation (6.13) is an eigenvalue problem with

$$[A] = [M]^{-1}[K] \tag{6.15}$$

and the eigenvalues

$$\lambda = -s^2, \tag{6.16}$$

and the undamped natural frequencies are given by $s = j\omega = \pm\sqrt{-\lambda}$. One should remember that eigenvalue problems have solutions in terms of eigenvalues and eigenvectors, see also Appendix E. The eigenvalues, 𝜆r, are the values of 𝜆 that satisfy the eigenvalue equation, and for each eigenvalue, there is a particular vector, the eigenvector, {𝜓}r, that satisfies the equation

$$\left([A] - \lambda_r[I]\right)\{\psi\}_r = \{0\}. \tag{6.17}$$

(You should not be surprised to find that the solution to Newton's equations turns out to be an eigenvalue problem. Eigenvalue problems are, among other things, giving the solutions to differential equations.) The nontrivial solutions to the standard eigenvalue problem in Equation (6.14) are obtained by setting the determinant equal to 0, i.e.,

$$\det\left([A] - \lambda[I]\right) = 0, \tag{6.18}$$

which in our case is equivalent to the equation

$$\det\left([M]^{-1}[K] - \lambda[I]\right) = 0, \tag{6.19}$$

which leads to a polynomial in 𝜆, the characteristic equation. The roots of this polynomial are known from linear algebra as the eigenvalues, denoted 𝜆1 , 𝜆2 , … , 𝜆N , where N is the number of dimensions in the matrix equation, that is, the number of masses (DOFs) in the system. The matrices [M] and [K] are in most cases positive definite, which means that the eigenvalues are strictly larger than zero. In cases where there are the so-called rigid body modes, the matrices [M] and [K] are positive semidefinite, which means that the eigenvalues are larger than or equal to zero. Rigid body modes are modes where all masses move without any relative motion in between each other, that is, the system has no vibrations, but is translating along, or rotating around, any of the axes.


Since the eigenvalue problem in Equation (6.13) is formulated such that 𝜆 = −s², the poles of the system, which will give us the frequencies of free vibrations, are

$$s_r = \pm\sqrt{-\lambda_r} = \pm j\sqrt{\lambda_r}, \tag{6.20}$$

where we use the fact that we know that 𝜆r ≥ 0. We thus find that the poles always come in complex conjugate pairs and lie on the imaginary axis in the Laplace domain (the latter a direct result of the fact that we have no damping; the real part of s corresponds to damping, as we know from Section 2.6.1).

The vector {u} = {𝜓}r which satisfies Equation (6.17) for a particular eigenvalue 𝜆r is the eigenvector of the system for that eigenvalue. Within structural dynamics, the eigenvectors, {𝜓}r, obtained for an undamped system are called normal mode shapes, or simply normal modes, and are unique for each given structure and boundary conditions. Because eigenvectors can be arbitrarily scaled, the mode shapes are only determined in shape, that is, only the relative motion of the points is unique. For example, the two mode shapes

$$\{\psi\}_r = \begin{Bmatrix} 1 \\ -1 \\ 1 \end{Bmatrix}_r \tag{6.21}$$

and

$$\{\psi\}_r = \begin{Bmatrix} 10 \\ -10 \\ 10 \end{Bmatrix}_r \tag{6.22}$$

are both the same eigenvector but with different scaling. Note that we use a subscript {}r to denote mode number r.

Naturally, as the alert reader has already observed, there is another quite similar way to formulate an eigenvalue problem for mechanical systems as in Equation (6.12). We could have multiplied this equation by the inverse of the stiffness matrix instead of the inverse of the mass matrix, and then divided by s², and obtained an equation similar to Equation (6.13), namely

$$\left([K]^{-1}[M] + \frac{1}{s^2}[I]\right)\{U(s)\} = \{0\}. \tag{6.23}$$

This equation is an eigenvalue problem with eigenvalues 𝜆r = −1∕s², from which the poles can be solved similarly to Equation (6.16). Solving Equation (6.23) will (of course) yield the reciprocal eigenvalues of the ones we find solving Equation (6.13), which means that the poles will be the same in both cases. Verifying this is left as Problem 6.2 at the end of this chapter. There are in fact several other eigenvalue problem formulations, some of which are sometimes preferred for better stability and computational efficiency. Those methods can be found in, for example, Craig and Kurdila (2006) and Inman (2007).

Example 6.3.1 Determine the eigenvalues, poles, and eigenvectors of the 2DOF system in Figure 5.13 if m1 = m2 = 1 [kg], k1 = k3 = 100 [N/m], k2 = 150 [N/m], and dampers c1 = c2 = c3 = 0.


We start by writing the mass and stiffness matrices, which according to Equations (6.5) and (6.7), respectively, become

$$[M] = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \tag{6.24}$$

and

$$[K] = \begin{bmatrix} 250 & -150 \\ -150 & 250 \end{bmatrix}. \tag{6.25}$$

The next step in the solution is to calculate [A] as follows:

$$[A] = [M]^{-1}[K] = [K], \tag{6.26}$$

since the mass matrix is the identity matrix. We now formulate the determinant of [A] − 𝜆[I] and set it to zero in order to find the nontrivial solutions to Equation (6.14). We thus get

$$\begin{vmatrix} 250 - \lambda & -150 \\ -150 & 250 - \lambda \end{vmatrix} = (250 - \lambda)^2 - 150^2 = 0. \tag{6.27}$$

This equation can easily be solved (for example by using the MATLAB/Octave roots command, see Problem 6.2), and we find the solutions

$$\lambda_1 = 100, \qquad \lambda_2 = 400, \tag{6.28}$$

from which we obtain the poles of the system

$$s_1 = \pm j\sqrt{100} = \pm j10, \qquad s_2 = \pm j\sqrt{400} = \pm j20 \tag{6.29}$$

rad/s. We thus have two undamped natural frequencies of this system: f1 = 10∕2𝜋 ≈ 1.6 Hz and f2 = 20∕2𝜋 ≈ 3.2 Hz.

Having found the eigenvalues, the next step is to find the corresponding eigenvectors. These we obtain by substituting one of the eigenvalues into Equation (6.14) and finding the vector coefficients in {𝜓}r which satisfy the equation. In our example, we first get

$$\begin{bmatrix} 250 - 100 & -150 \\ -150 & 250 - 100 \end{bmatrix}\begin{Bmatrix} \psi_1 \\ \psi_2 \end{Bmatrix}_1 = \begin{Bmatrix} 0 \\ 0 \end{Bmatrix}, \tag{6.30}$$

where the subscript index 1 after the column vector {𝜓}1 indicates that it is the first eigenvector (mode shape), corresponding to the first eigenvalue. Equation (6.30) contains two equal equations. Either one can be used to find the eigenvector, and we get

$$150\psi_1 - 150\psi_2 = 0, \tag{6.31}$$

which means 𝜓1 = 𝜓2, and thus the eigenvector is

$$\{\psi\}_1 = \begin{Bmatrix} 1/\sqrt{2} \\ 1/\sqrt{2} \end{Bmatrix}_1, \tag{6.32}$$

if we scale it to unity length.


Similarly, for the second eigenvalue, we get

$$\begin{bmatrix} 250 - 400 & -150 \\ -150 & 250 - 400 \end{bmatrix}\begin{Bmatrix} \psi_1 \\ \psi_2 \end{Bmatrix}_2 = \begin{Bmatrix} 0 \\ 0 \end{Bmatrix}, \tag{6.33}$$

from which we take any row and get the equation

$$-150\psi_1 - 150\psi_2 = 0, \tag{6.34}$$

which yields the eigenvector

$$\{\psi\}_2 = \begin{Bmatrix} 1/\sqrt{2} \\ -1/\sqrt{2} \end{Bmatrix}_2. \tag{6.35}$$

End of example.
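Example 6.3.1 can be verified numerically with a few lines of MATLAB/Octave; this is just a check of the hand calculation, using the built-in eig function rather than the determinant approach.

% Numerical check of Example 6.3.1
M = eye(2);
K = [250 -150; -150 250];
[Psi, L] = eig(M\K);          % eigenvalues lambda = -s^2
lambda = diag(L)              % 100 and 400
poles  = 1i*sqrt(lambda)      % j10 and j20 rad/s (plus their conjugates)
fn = sqrt(lambda)/(2*pi)      % approx. 1.6 and 3.2 Hz
Psi                           % unity-length mode shapes, [1 1] and [1 -1] patterns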

6.3.2 Mode Shape Orthogonality

As we have seen in Section 6.3.1, the mode shape vectors can be arbitrarily scaled. Furthermore, the mode shapes are generally not independent (although they were in the above example), as we will see in this section, but instead they have a weighted orthogonality property, which we will now address. From Equations (6.12) and (6.16), for a particular eigenvalue, 𝜆r, and its associated eigenvector, {𝜓}r, we have

$$-\lambda_r[M]\{\psi\}_r + [K]\{\psi\}_r = \{0\}. \tag{6.36}$$

If we premultiply Equation (6.36) by another eigenvector transposed, {𝜓}sᵀ, we get

$$-\lambda_r\{\psi\}_s^T[M]\{\psi\}_r + \{\psi\}_s^T[K]\{\psi\}_r = 0. \tag{6.37}$$

Now remember that for vectors and matrices, transposing a product results in reversing the order of the factors in the product and transposing each factor separately, i.e., (ab)ᵀ = bᵀaᵀ. Furthermore, since our matrices [M] and [K] are symmetric, we have that [M]ᵀ = [M] and [K]ᵀ = [K]. The transpose of Equation (6.37) thus yields

$$-\lambda_r\{\psi\}_r^T[M]\{\psi\}_s + \{\psi\}_r^T[K]\{\psi\}_s = 0. \tag{6.38}$$

If we formulate Equation (6.36) again, for another eigenvalue, 𝜆s, with its eigenvector, {𝜓}s, and premultiply with {𝜓}rᵀ, we similarly get

$$-\lambda_s\{\psi\}_r^T[M]\{\psi\}_s + \{\psi\}_r^T[K]\{\psi\}_s = 0. \tag{6.39}$$

We now take Equation (6.39) minus Equation (6.38), which yields

$$(\lambda_r - \lambda_s)\{\psi\}_r^T[M]\{\psi\}_s = 0 \tag{6.40}$$

for any two eigenvalues 𝜆r and 𝜆s. Thus, if r ≠ s, we must have

$$\{\psi\}_r^T[M]\{\psi\}_s = 0. \tag{6.41}$$

If we now use Equation (6.41) in either Equation (6.37) or (6.38), then

$$\{\psi\}_r^T[K]\{\psi\}_s = 0 \tag{6.42}$$

must also hold for any r ≠ s. Equations (6.41) and (6.42) are the equations for the weighted orthogonality properties of modal vectors.


For the case where r = s, and the two modal vectors premultiplying and postmultiplying the mass or stiffness matrix are thus the same vector, we define the modal mass of mode r, mr, by

$$\{\psi\}_r^T[M]\{\psi\}_r = m_r, \tag{6.43}$$

and similarly the modal stiffness of mode r, kr, by

$$\{\psi\}_r^T[K]\{\psi\}_r = k_r. \tag{6.44}$$

By replacing −𝜆r by the undamped natural frequency, (j𝜔r)² = −𝜔r², in Equation (6.38), it follows that

$$k_r = \omega_r^2 m_r. \tag{6.45}$$

Because the mode shapes {𝜓}r have an arbitrary scaling, the modal mass and stiffness are clearly not well-defined numbers for a particular system. They can rather be any number, depending on the scaling of the mode shapes. But the important use of the modal mass (particularly, but in principle also the modal stiffness) is in its use for scaling mode shapes. Thus, we can scale the mode shapes so that the modal mass becomes, for example, unity, i.e., mr = 1. This is a very common way of scaling mode shapes. Other ways of scaling mode shapes are, for example, to unit length, so that ||{𝜓}r||₂ = 1, or so that the largest coefficient in the mode shape is unity. The concepts of modal mass and stiffness are very important also because they are used for several purposes when we deduce other fundamental properties such as modal coordinates, as we will see in Section 6.3.3, and frequency responses of MDOF systems, as we will see in Section 6.4.

Example 6.3.2 Calculate the modal mass and stiffness of each mode shape in Example 6.3.1. Rescale the mode shapes (just as an example) to unity modal stiffness.

With the scaling to unity length, we calculate the modal mass of mode 1 as

$$m_1 = \{\psi\}_1^T[M]\{\psi\}_1 = \left(1/\sqrt{2}\right)^2 + \left(1/\sqrt{2}\right)^2 = 1. \tag{6.46}$$

Similarly, the modal mass of the second mode is m2 = 1, because the numbers are the same (verify this if you are not sure!). The modal stiffness of the first mode becomes

$$k_1 = \{\psi\}_1^T[K]\{\psi\}_1 = \dots = 100, \tag{6.47}$$

and for the second mode,

$$k_2 = \{\psi\}_2^T[K]\{\psi\}_2 = \dots = 400. \tag{6.48}$$

Here we could alternatively have used the relation in Equation (6.45) to find the modal stiffnesses from the modal masses and natural frequencies. Since the equations for mode shape orthogonality are square forms (the mode shape multiplies twice), it is obvious that we have to divide by the square root of the "current" modal stiffness to obtain a square product of unity. Therefore, the first mode shape scaled for unity modal stiffness becomes

$$\{\psi\}_1 = \begin{Bmatrix} 1/(10\sqrt{2}) \\ 1/(10\sqrt{2}) \end{Bmatrix}_1, \tag{6.49}$$


and the second mode shape scaled for unity modal stiffness becomes

$$\{\psi\}_2 = \begin{Bmatrix} 1/(20\sqrt{2}) \\ -1/(20\sqrt{2}) \end{Bmatrix}_2. \tag{6.50}$$

End of example.
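The scaling in Example 6.3.2 can be reproduced with the following MATLAB/Octave lines, continuing directly from the matrices and eigenvectors of the previous sketch; this is only a numerical illustration of Equations (6.43)–(6.45).

% Modal mass and stiffness, and rescaling to unity modal stiffness
M = eye(2); K = [250 -150; -150 250];
[Psi, L] = eig(M\K);                 % unity-length mode shapes
Mr = diag(Psi'*M*Psi)                % modal masses, both equal to 1
Kr = diag(Psi'*K*Psi)                % modal stiffnesses, 100 and 400
PsiK = Psi ./ sqrt(Kr.')             % rescaled so that Psi'*K*Psi = I
check = PsiK'*K*PsiK                 % should be (close to) the identity matrix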

6.3.3 Modal Coordinates

The concept of modal coordinates, or principal coordinates, is very important in modal analysis. It follows directly from the linear algebra theory of eigenvalues and eigenvectors that eigenvectors diagonalize matrices, as we saw in the orthogonality criteria in Section 6.3.1 (see also Appendix E). Nevertheless, we will formulate the proper coordinate transformation here and see what it leads to. We do this by defining the coordinate transformation

$$\{u(t)\} = [\Psi]\{q(t)\}, \tag{6.51}$$

where {q(t)} is a column vector,

$$\{q(t)\} = \begin{Bmatrix} q_1(t) \\ q_2(t) \\ \vdots \\ q_N(t) \end{Bmatrix}, \tag{6.52}$$

and we call these new coordinates, qr, the modal coordinates of mode r. The transformation matrix [Ψ] is the mode shape matrix, which is a matrix with the mode shapes in columns, i.e., the rth column in [Ψ] is {𝜓}r. Note that the concept of modal coordinates implies that any response up(t) may be determined by a linear combination of the coefficients in row p of the mode shape matrix and the modal coordinates for each mode, i.e.,

$$u_p(t) = \sum_{r=1}^{N}\psi_{pr}q_r(t). \tag{6.53}$$

Newton's equation can be written in the modal coordinates as

$$[M][\Psi]\{\ddot{q}\} + [K][\Psi]\{q\} = \{F(t)\}. \tag{6.54}$$

We premultiply this equation by [Ψ]ᵀ, which gives the equation

$$[\Psi]^T[M][\Psi]\{\ddot{q}\} + [\Psi]^T[K][\Psi]\{q\} = [\Psi]^T\{F(t)\}. \tag{6.55}$$

From Equations (6.43) and (6.44), it follows that replacing the vectors {𝜓}r by the mode shape matrix [Ψ] will result in diagonal modal mass and modal stiffness matrices,

$$[\Psi]^T[M][\Psi] = \lfloor M_r \rfloor \tag{6.56}$$

and

$$[\Psi]^T[K][\Psi] = \lfloor K_r \rfloor, \tag{6.57}$$

where the matrix ⌊Mr⌋ has the modal masses, mr, on its diagonal, and ⌊Kr⌋ has the modal stiffnesses, kr, on its diagonal.


Using these relations in Equation (6.55) gives us the equation

$$\lfloor M_r \rfloor\{\ddot{q}\} + \lfloor K_r \rfloor\{q\} = \{F'(t)\}, \tag{6.58}$$

where we have renamed the forces in the new coordinate system {q} as

$$\{F'(t)\} = [\Psi]^T\{F(t)\}. \tag{6.59}$$

Equation (6.58) is a very important property. It shows that in the modal coordinates, each mode is uncoupled from all the other modes, and each row in Equation (6.58) corresponds to the equation of an uncoupled single degree-of-freedom (SDOF) system:

$$m_r\ddot{q}_r + k_r q_r = F_r'. \tag{6.60}$$

We illustrate the concept of modal coordinates with the example below.

Example 6.3.3 Set up the system of uncoupled forced response equations for the system in Example 6.3.1 (with numbers).

Using the modal masses and stiffnesses already calculated in Example 6.3.2, we get the uncoupled equations

$$\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}\begin{Bmatrix} \ddot{q}_1 \\ \ddot{q}_2 \end{Bmatrix} + \begin{bmatrix} 100 & 0 \\ 0 & 400 \end{bmatrix}\begin{Bmatrix} q_1 \\ q_2 \end{Bmatrix} = \begin{bmatrix} 1/\sqrt{2} & 1/\sqrt{2} \\ 1/\sqrt{2} & -1/\sqrt{2} \end{bmatrix}\begin{Bmatrix} F_1 \\ F_2 \end{Bmatrix}. \tag{6.61}$$

You should note especially that the mode vectors multiplying the force vector are horizontal, since the mode shape matrix is transposed.

End of example.

Modal coordinates may be computed by inverting the mode shape matrix. This is often used in cases where results from an FE model are compared with experimental data, because of the difference in size of the two sets of data. In such applications, the mode shape matrix is never full, but uses as many modes as the experimental model. Therefore, a pseudo-inverse (see Appendix E) of the experimental mode shape matrix is used to compute the modal coordinates by

$$\{q(t)\} = [\Psi_e]^+\{u(t)\}, \tag{6.62}$$

where the mode shape matrix [Ψe] is a small experimental model of size No × Nm, where No is the number of responses and Nm the number of experimental mode shapes. Inserting this relationship in Equation (6.51), but using a large mode shape matrix, [ΨFE], from an FE model, we get

$$\{u_{FE}(t)\} = [\Psi_{FE}][\Psi_e]^+\{u_e(t)\}, \tag{6.63}$$

where {uFE(t)} is the large vector of all FE degrees of freedom, and {ue(t)} is a small vector with measured responses. The product of mode shape matrices is defined as the transformation matrix, [T], i.e.,

$$[T] = [\Psi_{FE}][\Psi_e]^+, \tag{6.64}$$

by which the relationship between the measured response degrees of freedom and those of the FE model is {uFE(t)} = [T]{ue(t)}. This way of expanding experimental modes to FE


modes is called the system equivalent reduction expansion process, SEREP (O'Callahan et al. 1989). Note that the modal coordinates may also be calculated from measured responses by Equation (6.62), for each time value t. Another important implication of the modal coordinates, and the use of these coordinates, will become clear in the next section, where we introduce a special form of damping, proportional damping.
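A schematic MATLAB/Octave illustration of Equations (6.62)–(6.64) is given below; the matrix sizes and the random test data are purely made-up placeholders, used only to show the pseudo-inverse mechanics, not a real test case.

% SEREP-style expansion, Equations (6.62)-(6.64) (schematic, made-up sizes)
No  = 10;   Nm = 3;   Nfe = 1000;     % assumed: 10 sensors, 3 modes, 1000 FE DOFs
Psi_e  = randn(No, Nm);               % placeholder experimental mode shapes
Psi_fe = randn(Nfe, Nm);              % placeholder FE mode shapes (same modes)
u_e    = randn(No, 1);                % one time sample of measured responses
q    = pinv(Psi_e)*u_e;               % modal coordinates, Equation (6.62)
T    = Psi_fe*pinv(Psi_e);            % transformation matrix, Equation (6.64)
u_fe = T*u_e;                         % expanded responses in all FE DOFs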

6.3.4 Proportional Damping

The concept of proportional damping is defined as the case where we have a damping matrix which can be written as a linear combination of the mass and stiffness matrices, i.e.,

$$[C] = a[M] + b[K], \tag{6.65}$$

where a and b are real constants. This is often called Rayleigh damping. From the orthogonality criteria in Equations (6.56) and (6.57), it follows that the damping matrix will also be a diagonal matrix in the modal coordinates, since

$$[\Psi]^T[C][\Psi] = a\lfloor M_r \rfloor + b\lfloor K_r \rfloor = \lfloor C_r \rfloor. \tag{6.66}$$

This means that with this special form of damping, the MDOF system is decoupled in modal coordinates, and we get a set of uncoupled equations

$$\lfloor M_r \rfloor\{\ddot{q}\} + \lfloor C_r \rfloor\{\dot{q}\} + \lfloor K_r \rfloor\{q\} = \{F'(t)\}, \tag{6.67}$$

which is a sufficient condition for the damped system to have the same mode shapes as the undamped system. Strictly speaking, these are not eigenvectors of the damped system, as there is no eigenvalue problem defined for this case; however, the mode shapes are usually still referred to as eigenvectors because they are eigenvectors of the undamped system. In each row of the equation system in Equation (6.67), we have an equation

$$m_r\ddot{q} + c_r\dot{q} + k_r q = F_r'(t), \tag{6.68}$$

which is Newton's equation of an SDOF system with the modal mass, damping, and stiffness as parameters. From Section 5.2.1, we then know that each mode (SDOF system) will have an undamped natural frequency

$$\omega_r = \sqrt{\frac{k_r}{m_r}} \tag{6.69}$$

and relative damping

$$\zeta_r = \frac{c_r}{2\sqrt{m_r k_r}}. \tag{6.70}$$

Using these equations, we can calculate the poles for mode r as

$$s_r = -\zeta_r\omega_r \pm j\omega_r\sqrt{1 - \zeta_r^2} = \sigma_r + j\omega_{dr}. \tag{6.71}$$

Using the relation for the real part of the poles,

$$\sigma_r = -\zeta_r\omega_r = -\zeta_r\sqrt{\frac{k_r}{m_r}}, \tag{6.72}$$

with Equation (6.70), we find that

$$c_r = 2\zeta_r\sqrt{m_r k_r} = \frac{-2\sigma_r\sqrt{m_r k_r}}{\sqrt{k_r/m_r}} = -2\sigma_r m_r. \tag{6.73}$$

Example 6.3.4 We continue with the same 2DOF system as in the previous examples, but we now add proportional damping defined by Equation (6.65) with a = 2∕15 ≈ 0.1333 and b = 1∕1500. Calculate the poles of the system.

The damping matrix is

$$[C] = \begin{bmatrix} 2/15 & 0 \\ 0 & 2/15 \end{bmatrix} + \begin{bmatrix} 250/1500 & -150/1500 \\ -150/1500 & 250/1500 \end{bmatrix} = \begin{bmatrix} 0.3 & -0.1 \\ -0.1 & 0.3 \end{bmatrix}. \tag{6.74}$$

The next step to solve the damped case would be to find the eigenvectors (mode shapes) of the undamped system, which we already have from Example 6.3.1. We can therefore now compute the diagonal modal damping matrix, for example by using MATLAB/Octave,

$$[\Psi]^T[C][\Psi] = \begin{bmatrix} 0.2 & 0 \\ 0 & 0.4 \end{bmatrix}. \tag{6.75}$$

We have the modal masses and stiffnesses from Example 6.3.2, so we can now calculate the relative damping factors

$$\zeta_1 = \frac{c_1}{2\sqrt{m_1 k_1}} = \frac{0.2}{2\sqrt{1\cdot 100}} = 0.01 \tag{6.76}$$

and

$$\zeta_2 = \frac{c_2}{2\sqrt{m_2 k_2}} = \frac{0.4}{2\sqrt{1\cdot 400}} = 0.01. \tag{6.77}$$

It is common in practice to express this as 1% relative damping for both modes. The poles, finally, are s1 and s1*, where, using the undamped natural frequencies from Example 6.3.1, we get

$$s_1 = -\zeta_1\omega_1 + j\omega_1\sqrt{1 - \zeta_1^2} = -0.1 + j10\sqrt{1 - 0.01^2} \tag{6.78}$$

for the first mode, and for the second mode we have the poles s2 and s2*, where

$$s_2 = -\zeta_2\omega_2 + j\omega_2\sqrt{1 - \zeta_2^2} = -0.2 + j20\sqrt{1 - 0.01^2}. \tag{6.79}$$

End of example.

The definition of proportional damping given in Equation (6.65), where the two parameters a and b control the damping matrix, is not the most general definition leading to an uncoupled damping matrix as in Equation (6.67). It can be shown that the mode shapes of a damped system are equal to the modes of the undamped system if the equation

$$\left([M]^{-1}[C]\right)\left([M]^{-1}[K]\right) = \left([M]^{-1}[K]\right)\left([M]^{-1}[C]\right) \tag{6.80}$$

is satisfied, although this will not be proven here, see for example Craig and Kurdila (2006) or Ewins (2000). The most common proportional damping used in the simulation


of mechanical systems is referred to as modal damping and is obtained by adding an individual damping factor, 𝜁r, to each undamped natural frequency to produce the poles, and using the mode shapes of the undamped system, see Section 6.4.3. Although the assumption of proportional damping as defined either by Equation (6.65) or (6.80) is certainly not always valid, in many cases there seem to be good reasons to assume this form of damping. First, it can be argued that many types of damping are related to the stiffness elements, e.g., internal material damping, or to the mass elements, e.g., friction damping. A stronger argument for the validity of proportional damping as a total approximation of damping, however, is perhaps the empirical evidence. Practical experience from experimental modal analysis has shown that mode shapes are often indeed real or near-real. In many cases, therefore, proportional damping seems to be a valid assumption.
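The numbers in Example 6.3.4 can be checked with the short MATLAB/Octave sketch below, which simply applies Equations (6.65)–(6.71) to the matrices already used in the previous sketches.

% Proportional (Rayleigh) damping check of Example 6.3.4
M = eye(2); K = [250 -150; -150 250];
a = 2/15; b = 1/1500;
C = a*M + b*K;                       % Equation (6.65), gives [0.3 -0.1; -0.1 0.3]
[Psi, L] = eig(M\K);
wr = sqrt(diag(L));                  % undamped natural frequencies, 10 and 20 rad/s
mr = diag(Psi'*M*Psi);               % modal masses
kr = diag(Psi'*K*Psi);               % modal stiffnesses
cr = diag(Psi'*C*Psi);               % modal damping, 0.2 and 0.4
zr = cr ./ (2*sqrt(mr.*kr))          % relative damping, 0.01 for both modes
poles = -zr.*wr + 1i*wr.*sqrt(1 - zr.^2)   % -0.1 + j10..., -0.2 + j20...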

6.3.5 General Damping

In the case of general, or nonproportional, damping, the eigenvalue problem we used above cannot be used, because the normal modes do not decouple the damping matrix. An alternative solution can, however, be found by reformulating the second-order system into a so-called state-space formulation, a common technique developed in the field of control engineering. We thus define a new vector with 2N elements, {z(t)},

\{z(t)\} = \begin{Bmatrix} \{u(t)\} \\ \{\dot{u}(t)\} \end{Bmatrix},    (6.81)

whereby the first derivative \{\dot{z}(t)\} is

\{\dot{z}(t)\} = \begin{Bmatrix} \{\dot{u}(t)\} \\ \{\ddot{u}(t)\} \end{Bmatrix}.    (6.82)

Newton's equation for our MDOF system is now extended by adding N extra lines

[M]\{\dot{u}\} - [M]\{\dot{u}\} = \{0\},    (6.83)

by introducing two new matrices

[A] = \begin{bmatrix} [C] & [M] \\ [M] & [0] \end{bmatrix}    (6.84)

and

[B] = \begin{bmatrix} [K] & [0] \\ [0] & -[M] \end{bmatrix},    (6.85)

and finally, the force vector is appended by N zeros so that

\{F'\} = \begin{Bmatrix} \{F(t)\} \\ \{0\} \end{Bmatrix}.    (6.86)

With these definitions, we set up the 2N-by-2N equation system

\begin{bmatrix} [C] & [M] \\ [M] & [0] \end{bmatrix} \begin{Bmatrix} \{\dot{u}\} \\ \{\ddot{u}\} \end{Bmatrix} + \begin{bmatrix} [K] & [0] \\ [0] & -[M] \end{bmatrix} \begin{Bmatrix} \{u\} \\ \{\dot{u}\} \end{Bmatrix} = \begin{Bmatrix} \{F\} \\ \{0\} \end{Bmatrix},    (6.87)

or more compactly

[A]\{\dot{z}\} + [B]\{z\} = \{F'\},    (6.88)


which is a linear first-order differential equation in z(t). The solutions to Equation (6.88) are of the form

\{z(t)\} = \{\Phi\}_r e^{s_r t} = \begin{Bmatrix} \{\psi\}_r \\ s_r \{\psi\}_r \end{Bmatrix} e^{s_r t},    (6.89)

where s_r is the pole of mode r and \{\Phi\}_r is the corresponding eigenvector of length 2N; the lower half includes a multiplication by the pole because the lower half of {z} is the derivative of the upper half, and s_r is the inner derivative from the e^{s_r t} factor. To find the free vibrations of the system described by Equation (6.88), we first Laplace transform Equation (6.88), which results in

\left[ s[A] + [B] \right]\{Z(s)\} = \{F'(s)\}.    (6.90)

With the same procedure as for the undamped system, we premultiply Equation (6.90) by [A]^{-1}, rearrange, and set the force to zero to find the free vibrations. We get the equation

\left[ [A]^{-1}[B] - \lambda[I] \right]\{\Phi\} = \{0\},    (6.91)

which is a standard eigenvalue problem with eigenvalues equal to minus the poles, \lambda_r = -s_r. Solving for the eigenvalues and eigenvectors gives 2N eigenvalues and 2N corresponding eigenvectors of length 2N. Since the coefficient matrices [A] and [B] are real, the eigenvalues must be real or come in complex conjugate pairs. If they are real, the system is overdamped, so we concentrate on the case where they are complex and the system exhibits free vibrations. In that case, the system will have N complex conjugate pairs of eigenvalues and N corresponding complex conjugate pairs of eigenvectors.
An important point to note is that because the eigenvalue problem in the case of nonproportional damping is of first order, the eigenvalues are directly related to the Laplace operator, i.e., \lambda = -s, and thus to the poles of the system, not to the square of the poles as for the undamped case. As the poles come in complex conjugate pairs, we take every second pole from the eigenvalues, and we can therefore write the poles with positive imaginary part as

s_r = -\zeta_r\omega_r + j\omega_r\sqrt{1-\zeta_r^2},    (6.92)

for r = 1, 2, 3, …, N. The complex conjugate poles s_r^* are of course also poles of the system. The angular frequencies \omega_r for nonproportionally damped systems are called natural frequencies and are, strictly speaking, not the same as the undamped natural frequencies.
The main difference between the mode shapes of a nonproportionally damped system and those of a proportionally damped system is that the mode shapes of the nonproportionally damped system are complex. This means that each point on the structure has its own phase angle relative to the other points, which in turn means that each point on the structure reaches its maximum deflection at a different time instance. The result, if the complex mode has phase angles that differ substantially from 0 and 180° relative to each other (a large mode complexity), is that the mode shape is not a standing wave as for the normal modes, but rather consists of a "traveling wave" whose maximum moves around over the structure.
In experimental modal analysis, it is very easy to obtain complex modes due to errors in the parameter extraction, because of the rapid phase shift in the FRF around the natural frequency. This rapid phase shift can make a small error in estimated frequency yield a large phase error in the mode shape. It is therefore important to understand mode complexity in order to judge whether the obtained complex modes are "true" or a result of erroneous data and/or curve fitting. On most structures with light damping, the mode shape complexity is not particularly large, as most mode shapes have phase angles close to 0 or 180°, even if the damping is strongly nonproportional. It turns out (Ewins, 2000) that in order to get highly complex modes on ordinary structures (not rotating, for example, for which modes are normally highly complex), in addition to having a nonproportional damping matrix, it is also necessary that at least two modes are very close in frequency. A result of this is that, if experimental modal analysis results in highly complex modes, it is good practice to treat the results with some suspicion. The most likely cause of highly complex modes is poor curve fitting.
It can be shown that there are orthogonality criteria similar to those for the undamped system in Equations (6.56) and (6.57), such that

[\Phi]^T [A] [\Phi] = \lfloor M_A \rfloor    (6.93)

and

[\Phi]^T [B] [\Phi] = \lfloor M_B \rfloor,    (6.94)

where the coefficients in the diagonal matrices are called modal A and modal B, respectively. These complex coefficients, m_{ar} and m_{br}, can be used for mode shape scaling, just like the modal mass and stiffness numbers for the proportionally damped system, and as we will see in Section 6.4.4, there is a good reason for scaling mode shapes to unity modal A. From Equation (6.91) and the results in Equations (6.93) and (6.94), it follows that the eigenvalue of mode r satisfies

\lambda_r = -s_r = \frac{m_{br}}{m_{ar}},    (6.95)

which should be compared to the result for undamped systems in Equation (6.45).

Example 6.3.5 To illustrate the concept of nonproportional damping, we change the damping matrix from Example 6.3.4 by adding 0.5 to element (1,1), which produces the nonproportional damping matrix

[C] = \begin{bmatrix} 0.8 & -0.1 \\ -0.1 & 0.3 \end{bmatrix}.    (6.96)

Find the poles and mode shapes of the system.
The generalized eigenvalue problem can be solved in MATLAB/Octave by building the matrices [A] and [B], etc. There are some necessary steps that we will not discuss at great length here, but instead summarize by the following lines of MATLAB/Octave code which, together with the comments, should be sufficient. We assume that the mass, damping, and stiffness matrices are already defined as in the previous examples. You should note that we use eig(-A\B), with a minus sign, which gives the correct poles directly, since we have that \lambda = -s. This is not necessary, but it simplifies the sorting of the poles a little if we want the poles with positive imaginary part to come first. We also assume that the absolute values of the eigenvalues are larger than unity.


A = [C M; M 0*M];
B = [K 0*M; 0*M -M];
[V,D] = eig(-A\B);
% Sort by increasing absolute imaginary part (frequency)
[Dum,I] = sort(diag(abs(imag(D))));
p = diag(D(I,I));
V = V(:,I);
% Scale to unity modal A
Ma = V.'*A*V;
for col = 1:length(V(1,:))
    V(:,col) = V(:,col)/sqrt(Ma(col,col));
end

The results of this code are the column vector p with the poles of the system and the matrix V with the mode shapes in columns. After sorting and scaling for unity modal A, these variables contain

p = \begin{Bmatrix} -0.225 + 9.999i \\ -0.225 - 9.999i \\ -0.325 + 19.995i \\ -0.325 - 19.995i \end{Bmatrix},    (6.97)

and the eigenvector of the first mode is

\{\phi\}_1 = \begin{Bmatrix} -0.11 + 0.11i \\ -0.11 + 0.11i \\ -1.10 - 1.13i \\ -1.10 - 1.15i \end{Bmatrix},    (6.98)

together with its complex conjugate, \{\phi\}_1^*, whereas for the second mode we have the eigenvector

\{\phi\}_2 = \begin{Bmatrix} 0.08 - 0.08i \\ -0.08 + 0.08i \\ 1.53 + 1.63i \\ -1.58 - 1.58i \end{Bmatrix},    (6.99)

and its complex conjugate, \{\phi\}_2^*. The natural frequencies are f_1 = 1.59 Hz and f_2 = 3.18 Hz, and the relative damping coefficients are \zeta_1 = 0.0225 and \zeta_2 = 0.0162. The mode shapes, finally, \{\psi\}_1 and \{\psi\}_2, are the upper halves of \{\phi\}_1 and \{\phi\}_2, respectively. You should note that you may get eigenvectors with the opposite sign compared to the vectors above. This is quite arbitrary, as the sign of an eigenvector is undefined. End of example.
The important parts of the eigenvectors are obviously the upper halves, as the lower halves can be reconstructed by multiplying the upper half by the corresponding eigenvalue (pole) if needed, for example to calculate modal A or modal B.


6.4 Frequency Response of MDOF Systems

We have now come to a point where we can formulate relations for the frequency responses (FRFs) of MDOF systems. As we will see, there are two ways to synthesize frequency responses: either directly from Newton's equation, or by using the modal parameters, i.e., poles and mode shapes. It is worth pointing out the great potential offered by FRFs, as these functions can be used to calculate the steady-state response in a particular point (degree of freedom) for a particular force input in a point (the same as the response point, or another point). This section will thus answer the important question we raised in the introduction to this chapter: how much vibration do we get in a point (DOF) on our structure for a particular force input in a particular point? We will comment more below as we come to the results.

6.4.1 Frequency Response from [M], [C], [K]

We first look at how frequency responses can be calculated from known mass, damping, and stiffness matrices. The most intuitive frequency responses would be those obtained from the Laplace transform of Newton's equation, i.e.,

\left[ s^2[M] + s[C] + [K] \right]\{U(s)\} = \{F(s)\},    (6.100)

which can be rewritten as follows:

[Z(s)]\{U(s)\} = \{F(s)\},    (6.101)

where the matrix [Z(s)] is called the system impedance matrix. The frequency responses from this equation would be obtained by evaluating the equation along the imaginary axis in the s-plane, i.e., Z(j\omega) = Z(s)|_{s=j\omega}. There is an important implication of this matrix, however, which makes it rather unsuitable for the purpose of a general description of mechanical systems. If we look at the formulation in Equation (6.101), an individual element z_{pq}(j\omega) implies

z_{pq}(j\omega) = \left. \frac{F_p}{U_q} \right|_{U_k = 0,\; k \neq q},    (6.102)

that is, in order to experimentally measure an individual element z_{pq} in [Z(s)], we would have to ground every point except point q to ensure that the displacement of all other points is zero. Of course, this is impossible in most cases. We must therefore reformulate Equation (6.101) by inverting the impedance matrix, and we obtain an alternative, useful formulation by introducing [H(s)] = [Z(s)]^{-1}, whereby we get the equation

[H(s)]\{F(s)\} = \{U(s)\}.    (6.103)

To get the frequency responses, we evaluate Equation (6.103) on s = j\omega and get the receptance frequency responses (or dynamic flexibility) defined by the matrix equation

[H(j\omega)]\{F(j\omega)\} = \{U(j\omega)\}.    (6.104)


Measuring an individual element H_{pq}(j\omega) in [H(j\omega)] implies measuring

H_{pq}(j\omega) = \left. \frac{U_p}{F_q} \right|_{F_k = 0,\; k \neq q},    (6.105)

which is usually very much easier than keeping the displacements at zero. It is simply a matter of "not touching" the points except where we are inputting a force. If we wish, we can input several forces, so-called multiple-input excitation, which is no problem provided we measure all forces exciting the system. This will be described in detail in Chapter 14. The frequency response formulation in Equation (6.104) is "physical" in the sense that it represents the displacement at each point as a superposition of the contributions of each nonzero force, which is exactly what the physics of mechanical systems imply. To compute the frequency responses in Equation (6.104), the procedure is very simple (a MATLAB/Octave sketch is given below):
● Compute the system impedance matrix [Z(j𝜔)] by Equation (6.101).
● Compute the receptance matrix by inverting the impedance matrix at each frequency 𝜔, i.e., [H(j𝜔)] = [Z(j𝜔)]^{-1}.
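The two steps above can be carried out in a few lines of MATLAB/Octave. The following is a minimal sketch, assuming the matrices M, C, and K are already defined; the frequency range and variable names are chosen here only for illustration:

f = 0:0.01:5;             % frequency axis in Hz
w = 2*pi*f;               % angular frequency in rad/s
N = size(M,1);
H = zeros(N,N,length(w)); % receptance, one N-by-N matrix per frequency
for n = 1:length(w)
    Z = -w(n)^2*M + 1i*w(n)*C + K;  % system impedance at this frequency
    H(:,:,n) = inv(Z);              % receptance (dynamic flexibility)
end

For large models this direct inversion at every frequency becomes expensive, which is one motivation for the modal superposition approach in Section 6.4.2.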

When working with experimentally obtained frequency responses, it is most common to use the frequency in [Hz] as the variable. We will therefore later in this book refer to, for example, the receptance matrix [H(f )].

6.4.2 Frequency Response from Modal Parameters

The development of frequency responses from mass, damping, and stiffness matrices in Section 6.4.1 required the entire matrices to be known and included a matrix inversion of the entire system impedance matrix at each frequency. For large systems, this is computationally inefficient. Also, it is rare to know the damping matrix, so the equations developed in Section 6.4.1 are often not practically useful. In Section 6.4.3, we will see an alternative way of synthesizing FRFs which is more practical and which is based on the results we obtain in this section. In this section, we will also show that the modal parameters provide a much more computationally efficient way to compute the frequency responses. Furthermore, the development in this section is the key to experimental modal analysis, which will be described comprehensively in Chapter 16, as we will now show the relation between measured frequency responses and modal parameters (natural frequencies, damping coefficients, and mode shapes).
We will develop a general form of an expression for the receptance frequency response as defined in Equation (6.104) for the case of proportional damping as described in Section 6.3.4, because this is somewhat easier to follow than the general case of nonproportional damping. It should perhaps be mentioned that for the undamped case, we cannot define any frequency responses, as they would go to infinity at f = f_r; frequency responses require damping.
We start by noting that the frequency response matrix we want is the inverse of the system impedance matrix in the Laplace domain (eventually setting s = j\omega, but we wait until Equation (6.110) at the end of this argument), so we have that

\left[ s^2[M] + s[C] + [K] \right] = [H]^{-1}.    (6.106)


We now premultiply both sides with the mode shape matrix transposed, [\Psi]^T, and postmultiply with [\Psi], which yields

[\Psi]^T \left[ s^2[M] + s[C] + [K] \right] [\Psi] = [\Psi]^T [H]^{-1} [\Psi].    (6.107)

Next, we make use of the fact that we have proportional damping, i.e., [C] = a[M] + b[K], and the orthogonality criterion therefore makes the matrix on the left-hand side diagonal, that is, we have

s^2 \lfloor M_r \rfloor + s \lfloor C_r \rfloor + \lfloor K_r \rfloor = [\Psi]^T [H]^{-1} [\Psi].    (6.108)

We then note that the inverse of a product is [ABC]^{-1} = C^{-1}B^{-1}A^{-1} and take the inverse of both sides of the equation, which results in

\left[ s^2 \lfloor M_r \rfloor + s \lfloor C_r \rfloor + \lfloor K_r \rfloor \right]^{-1} = [\Psi]^{-1} [H] \left([\Psi]^T\right)^{-1},    (6.109)

and then we premultiply this equation by [\Psi], postmultiply by [\Psi]^T, and reverse the equation to get

[H] = [\Psi] \left[ s^2 \lfloor M_r \rfloor + s \lfloor C_r \rfloor + \lfloor K_r \rfloor \right]^{-1} [\Psi]^T.    (6.110)

Now, we note that the inverse of a diagonal matrix is nothing but the reciprocal of each value on the diagonal. Therefore, we define a new matrix, the inverse pole matrix, [S^{-1}], which is a diagonal matrix where each element on the diagonal, s_{rr}, is

s_{rr} = \frac{1}{s^2 m_r + s c_r + k_r} = \frac{1/m_r}{(s - s_r)(s - s_r^*)},    (6.111)

for mode r. Using this matrix, we can simplify the result in Equation (6.110) to

[H] = [\Psi] \lfloor S^{-1} \rfloor [\Psi]^T.    (6.112)

It is particularly important to look at what Equation (6.112) means for a particular function H_{pq}(s) = X_p/F_q. A careful study of the equation reveals that the frequency response H_{pq}(s) can be written as

H_{pq}(s) = \sum_{r=1}^{N} \frac{\psi_{pr}\psi_{qr}}{m_r (s - s_r)(s - s_r^*)},    (6.113)

where \psi_{pr} is the mode shape coefficient in point p for mode r, etc. To find a more general description of the transfer function in Equation (6.113), we can apply a partial fraction expansion (see Section 2.6.1) of each term in the sum to split it into a sum of the residues, A_{pqr}, divided by (s - s_r). Since the numerator coefficient in Equation (6.113) is real, and the poles are complex conjugate pairs, it is relatively easy (see Problem 6.2) to show that the frequency response can be written as follows:

H_{pq}(s) = \sum_{r=1}^{N} \frac{A_{pqr}}{s - s_r} + \frac{A_{pqr}^*}{s - s_r^*},    (6.114)

where the residues, A_{pqr}, are given by

A_{pqr} = \frac{1}{j2\omega_{dr} m_r}\,\psi_{pr}\psi_{qr}.    (6.115)


In a more general form, we define the modal scaling constant, Q_r, for each mode r, so that the residues are given by

A_{pqr} = Q_r \psi_{pr}\psi_{qr}.    (6.116)

By using the modal scaling constants, Equation (6.114) is valid for all mode shapes, also complex mode shapes in the case of nonproportional damping. If the damping is proportional, the modal scaling constant is obviously

Q_r = \frac{1}{j2\omega_{dr} m_r}.    (6.117)

We will deduce the modal scaling constant for nonproportional damping in Section 6.4.4. We now replace the transfer functions used for the development here by setting s = j\omega, which gives us the following general expression for MDOF frequency response functions:

H_{pq}(j\omega) = \sum_{r=1}^{N} \frac{A_{pqr}}{j\omega - s_r} + \frac{A_{pqr}^*}{j\omega - s_r^*},    (6.118)

where the residues A_{pqr} are given by Equation (6.116). The result in Equation (6.118), which is called the modal superposition equation, is very important because it is the key to the topic of experimental modal analysis, as it relates frequency responses that can be experimentally estimated to the modal parameters – the poles and mode shapes. This equation also shows why scaling mode shapes to unity modal mass is so convenient; the factor m_r in the denominator can then be neglected, which makes the calculations a little easier.
It is clear, comparing Equation (6.113) with the expression for the transfer function of an SDOF system in Equation (5.4), that Equation (6.113) (and, of course, also Equation 6.118) is a sum of SDOF transfer functions or FRFs, respectively. This is the reason for the great interest we took in the SDOF system; MDOF systems have FRFs that consist of sums of SDOF FRFs, where each mode corresponds to one SDOF system. In the case of MDOF systems, this does not necessarily mean that every mode produces a clear peak in the frequency response, like an SDOF system, because two or more natural frequencies can coincide or be very close, which will result in just one peak in the frequency response. However, if modes are well separated, each mode will show a peak very similar to the peak of an SDOF system (but not identical, because surrounding modes will interfere at least a little, and sometimes a lot).
An important implication of Equations (6.118) and (6.116) is that the frequency response matrix [H] is obviously symmetric, so that

H_{pq}(j\omega) = H_{qp}(j\omega).    (6.119)

This rather interesting equation is called Maxwell's reciprocity relation and shows that the frequency response between two points is the same if we excite in point q and measure the response in point p as if we reverse the force and response points.


The residues can easily be formulated in matrix notation, whereby the residue matrix, [A]_r, becomes

[A]_r = Q_r \{\psi\}_r \{\psi\}_r^T = Q_r \begin{bmatrix} \psi_{1r}\psi_{1r} & \psi_{1r}\psi_{2r} & \psi_{1r}\psi_{3r} & \cdots \\ \psi_{2r}\psi_{1r} & \psi_{2r}\psi_{2r} & \psi_{2r}\psi_{3r} & \cdots \\ \psi_{3r}\psi_{1r} & \psi_{3r}\psi_{2r} & \psi_{3r}\psi_{3r} & \cdots \\ \cdots & \cdots & \cdots & \cdots \end{bmatrix},    (6.120)

which is a matrix of rank one (since it is composed only of linear combinations of one vector, \{\psi\}_r). It should be noted that each column in the residue matrix is the mode shape \{\psi\}_r, scaled by the modal constant Q_r and the mode shape coefficient corresponding to the column, that is

[A]_r = Q_r \begin{bmatrix} \psi_{1r}\{\psi\}_r & \psi_{2r}\{\psi\}_r & \psi_{3r}\{\psi\}_r & \cdots \end{bmatrix}.    (6.121)

With this expression for the residue matrix, the entire frequency response matrix [H(j\omega)] can be written compactly as follows:

[H(j\omega)] = \sum_{r=1}^{N} \frac{[A]_r}{j\omega - s_r} + \frac{[A^*]_r}{j\omega - s_r^*},    (6.122)

which is the equation for synthesizing frequency responses from the modal parameters, and the basic expression used for modal parameter extraction in the frequency domain, see Chapter 16. It should be noted here that although Equation (6.122) was developed here for a system with proportional damping, it is also valid for systems with general damping, i.e., with complex mode shapes.
A very important implication of Equation (6.120) for experimental modal analysis is found by observing that any row or column in the residue matrix, [A]_r, contains the mode shape \{\psi\}_r. This assumes that there are not two poles which coincide, however. Nevertheless, we can conclude from this that the minimum amount of data necessary to be able to extract a mode shape from a measurement of a frequency response matrix is one row or column. In the special case of two coinciding poles, there are multireference techniques which can separate the two if two rows or columns of [H] are measured.
In experimental modal analysis, we often write Equation (6.122) using a pole matrix, [\Lambda^{-1}], similar to the inverse pole matrix [S^{-1}] in Equation (6.112), but expanding the matrix to a size of 2N, formulating it in the frequency domain instead of in the Laplace domain, and renumbering the poles to s_r, r = 1, 2, 3, …, 2N. The inverse pole matrix [\Lambda^{-1}] is thus

[\Lambda^{-1}(j\omega)] = \begin{bmatrix} \frac{1}{j\omega - s_1} & 0 & \cdots & \cdots \\ 0 & \frac{1}{j\omega - s_2} & 0 & \cdots \\ \cdots & & \ddots & \\ \cdots & & & \frac{1}{j\omega - s_{2N}} \end{bmatrix},    (6.123)

and if we further redefine the 2N-by-2N mode shape matrix [\Psi'], including the complex conjugate mode shapes in columns, Equation (6.122) can be written compactly as follows:

[H(j\omega)] = [\Psi'] \lfloor \Lambda^{-1} \rfloor [\Psi']^T.    (6.124)
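The synthesis in Equation (6.122), or equivalently (6.124), is straightforward to implement. The following MATLAB/Octave lines are a minimal sketch, assuming the pole vector p and the eigenvector matrix V, scaled to unity modal A, are available from the code in Example 6.3.5; the frequency range and variable names are illustrative only:

% Synthesize the receptance matrix from poles and unity-modal-A mode shapes
N   = length(p)/2;
Psi = V(1:N,:);            % upper halves are the mode shapes (2N columns)
f   = 0:0.01:5;            % frequency axis in Hz
w   = 2*pi*f;
H   = zeros(N,N,length(w));
for n = 1:length(w)
    for r = 1:2*N          % sum over all 2N modes, conjugates included
        H(:,:,n) = H(:,:,n) + Psi(:,r)*Psi(:,r).'/(1i*w(n) - p(r));
    end
end

Because the conjugate poles and mode shapes are included among the 2N columns, no separate conjugate term is needed in the loop.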


Another important implication of Equation (6.118) is the great information saving it offers. Essentially, knowing the poles (frequencies and relative damping coefficients) and the mode shapes of all modes, the frequency response between any two points on the structure can be synthesized. Furthermore, if a limited frequency range is of interest, as it always is in practice, we only need the first N_M modes, which can offer a great saving. The number of coefficients necessary for this is N_M complex poles (2N_M real numbers) plus N_M × N mode shape coefficients. This should be compared with storing all frequency responses with, say, N frequency values each, which would correspond to N^3 complex numbers. For N = 1000 DOFs and N_M = 25 modes, for example, this offers a saving of a factor of approximately 80,000 (approximately 25,000 values instead of 2 × 10^9).

Example 6.4.1 Calculate and plot the frequency responses H_{11}(j\omega) and H_{12}(j\omega) for the 2DOF system from Example 6.3.4.
For any FRF, we have the residues according to Equation (6.116),

A_{pqr} = Q_r \psi_{pr}\psi_{qr}.    (6.125)

Since we scaled the mode shapes in Example 6.3.4 to unity modal A, the modal scaling factors are Q_1 = Q_2 = 1. We thus obtain the residue

A_{111} \approx \left(0.11 \cdot (-1 + i)\right)^2 \approx -0.025i,    (6.126)

where we have ignored the rather small real part in the answer. Furthermore, for this and all subsequent residues, we use all decimal numbers from Example 6.3.4, not the truncated numbers above. For the second mode, we similarly get

A_{112} \approx \left(0.08 \cdot (1 - i)\right)^2 \approx -0.0125i.    (6.127)

Thus, the frequency response becomes

H_{11}(j\omega) = \frac{-0.025j}{j\omega - s_1} + \frac{0.025j}{j\omega - s_1^*} + \frac{-0.0125j}{j\omega - s_2} + \frac{0.0125j}{j\omega - s_2^*}.    (6.128)

For the second FRF, H_{12}, we similarly get the residues

A_{121} \approx \left(0.11 \cdot (-1 + i)\right)^2 \approx -0.025i    (6.129)

and

A_{122} \approx \left(0.08 \cdot (1 - i)\right) \cdot \left(0.08 \cdot (-1 + i)\right) \approx 0.0125i,    (6.130)

and thus

H_{12}(j\omega) = \frac{-0.025j}{j\omega - s_1} + \frac{0.025j}{j\omega - s_1^*} + \frac{0.0125j}{j\omega - s_2} + \frac{-0.0125j}{j\omega - s_2^*}.    (6.131)

The two frequency responses are plotted in Figure 6.1. End of example.
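As a usage illustration, the two FRFs in Equations (6.128) and (6.131) can be evaluated and plotted with a few MATLAB/Octave lines. This is a minimal sketch using the approximate poles and residues from the example; the frequency resolution and variable names are illustrative only:

s1 = -0.1 + 1i*10*sqrt(1-0.01^2);   % poles from Example 6.3.4
s2 = -0.2 + 1i*20*sqrt(1-0.01^2);
f  = 0:0.005:5;  w = 2*pi*f;
H11 = -0.025i./(1i*w - s1) + 0.025i./(1i*w - conj(s1)) + ...
      (-0.0125i)./(1i*w - s2) + 0.0125i./(1i*w - conj(s2));
H12 = -0.025i./(1i*w - s1) + 0.025i./(1i*w - conj(s1)) + ...
      0.0125i./(1i*w - s2) + (-0.0125i)./(1i*w - conj(s2));
subplot(2,1,1); semilogy(f,abs(H11),f,abs(H12)); ylabel('Dyn. flexibility (m/N)')
subplot(2,1,2); plot(f,180/pi*angle(H11),f,180/pi*angle(H12))
xlabel('Frequency (Hz)'); ylabel('Phase (degrees)')

The resulting plot should resemble Figure 6.1.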

6.4.3 Frequency Response from [M], [K], and ζ – Modal Damping

As was noted earlier, the equations in Section 6.4.1 are of somewhat limited practical use since we usually do not know the damping matrix [C]. In most cases, where frequency responses are wanted for simulation purposes, we are therefore forced to use some

Figure 6.1 Plots of the 2DOF frequency responses H11 and H12 for Example 6.4.1: magnitude as dynamic flexibility (m/N) (upper plot) and phase in degrees (lower plot), versus frequency (Hz).

other means of synthesizing frequency responses. One such way, using the results from Section 6.4.2, is by using the normal modes and undamped natural frequencies, [Ψ] and 𝜔r from a solution of the undamped system, and adding a relative damping factor, 𝜁r to each mode, to form a complex pole, and then using Equation (6.118) to synthesize frequency responses. This is the most common method used in simulation of mechanical systems, and the damping is then usually referred to as modal damping.

6.4.4 Mode Shape Scaling

Mode shapes are, as we have noted several times, arbitrarily scaled. The relation necessary to synthesize correctly scaled frequency responses from modal parameters is the modal scaling constant, Q_r, in, for example, Equation (6.116). As we have mentioned, mode shapes can be scaled in many different ways – for a largest coefficient of 1, unity length, etc. The two most common and most important scaling conventions are, however, unity modal mass and unity modal A. The convenience of the former was pointed out in conjunction with Equation (6.113). The convenience of the latter will be apparent if we look at what it implies for a proportionally damped system (remember, for nonproportional damping, unity modal A is the usual scaling). We start with the definition of the matrix [A] in the state-space formulation from Equation (6.84), repeated here for convenience,

[A] = \begin{bmatrix} [C] & [M] \\ [M] & [0] \end{bmatrix},    (6.132)

and we remember that the state-space eigenvectors are of the form

\{\phi\}_r = \begin{Bmatrix} \{\psi\}_r \\ s_r \{\psi\}_r \end{Bmatrix}.    (6.133)


If we use the above equations to calculate modal A for a proportionally damped system, for mode r, we get

m_{ar} = \{\phi\}_r^T [A] \{\phi\}_r = \begin{bmatrix} \{\psi\}_r^T & s_r\{\psi\}_r^T \end{bmatrix} \begin{bmatrix} [C] & [M] \\ [M] & [0] \end{bmatrix} \begin{Bmatrix} \{\psi\}_r \\ s_r\{\psi\}_r \end{Bmatrix},    (6.134)

which results in

m_{ar} = \{\psi\}_r^T [C] \{\psi\}_r + s_r \{\psi\}_r^T [M] \{\psi\}_r + \{\psi\}_r^T [M] s_r \{\psi\}_r,    (6.135)

and we obtain

m_{ar} = c_r + 2 s_r m_r.    (6.136)

From Equation (6.73), we have the relation c_r = -2\sigma_r m_r, which, by using the relation s_r = \sigma_r + j\omega_{dr}, leads to

m_{ar} = -2\sigma_r m_r + 2(\sigma_r + j\omega_{dr}) m_r = j2\omega_{dr} m_r,    (6.137)

which is our final relation between modal A and modal mass. Equation (6.137) can be used to rescale mode shapes for proportionally damped systems so that we get, e.g., unity modal A. The motivation for scaling mode shapes to unity modal A is found by comparing Equation (6.137) with Equation (6.117), which shows that the modal scaling constant is Q_r = 1/m_{ar}, the reciprocal of modal A, i.e., if the mode shapes are scaled for unity modal A, then the residue A_{pqr} is simply

A_{pqr} = \psi_{pr}\psi_{qr}.    (6.138)

For convenience, we also note that for unity modal mass scaling, the mode shapes are scaled so that

A_{pqr} = \frac{1}{j2\omega_{dr}}\,\psi_{pr}\psi_{qr},    (6.139)

which follows straight from Equation (6.116). In other words, if we scale mode shapes to unity modal A, the modal constant is unity and the FRF synthesis according to Equation (6.118) becomes simple. Therefore, this mode shape scaling is often preferred in experimental modal analysis. It should be noted, however, that in analytical modal analysis, where the undamped system is usually solved, unity modal mass scaling of mode shapes is most common. It should also be recalled from Example 6.3.5 that scaling mode shapes to unity modal A means that real mode shapes, such as those from proportionally damped systems, become complex, although all mode shape coefficients are, of course, either in phase or out of phase.
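The rescaling implied by Equation (6.137) is a one-line operation per mode. The following MATLAB/Octave fragment is a minimal sketch, assuming a mode shape matrix Psi scaled to unity modal mass and a vector wd of damped natural frequencies in rad/s (the variable names are chosen here for illustration):

% Rescale unity-modal-mass mode shapes to unity modal A
mr   = ones(size(wd));          % modal masses (unity by assumption)
ma   = 1i*2*wd.*mr;             % modal A from Equation (6.137)
PsiA = Psi ./ sqrt(ma(:).');    % divide each column by sqrt of its modal A
% (implicit expansion; requires MATLAB R2016b or later, or Octave)

The resulting columns of PsiA have unity modal A, and are complex even though the original shapes were real, as noted above.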

6.4.5 The Effect of Node Lines on FRFs

An effect of the expression for the residue in Equation (6.116) is of particular importance. What happens if one of the mode shape coefficients is zero? Such a point, which is a point with no motion in the mode, is often called a node or a nodal point (not to be confused with a "node" in the sense of a point on an element in an FE model). On continuous structures, there are lines along which all points have zero motion for a particular mode, called node lines. Apparently,

Figure 6.2 Plot of two frequency responses (dynamic flexibility in m/N and phase in degrees, versus frequency in Hz) from a 3DOF system: H11, which in this case shows all three modes (solid), and a cross FRF, H21, where DOF 2 obviously lies on a nodal line, since mode 2 is invisible in H21 (dashed). As the figure shows, there is no peak at all in H21 at the natural frequency of the second mode, where the residue is zero.

from Equation (6.116), the residue becomes zero for this mode, and the consequence is that the mode does not appear in the FRF, as is seen in Figure 6.2. An implication of this is that it is very important in experimental modal analysis to select measurement points carefully so that reference points are not located on node lines. If a reference point with one or more zeros in the mode shape vectors is chosen, the entire mode (or modes) will be impossible to detect. This is an important reason why multiple references are usually preferable, as will be discussed in more detail in Chapter 16.

6.4.6 Antiresonance

The results of Example 6.4.1, plotted in Figure 6.1, revealed a phenomenon known as an antiresonance at approximately 2.5 Hz in the frequency response H11(f). This phenomenon is due to a simple relation between phases, and although it is not due to any global property of the structure, it is still interesting to understand why an antiresonance sometimes occurs between two modes, and sometimes not. The answer to this question lies in the phase of the frequency responses and the expression for the residues according to Equation (6.116). A natural frequency (resonance) produces a phase change in the FRF of −180°. Thus, if there is no antiresonance between two resonances, the phase of the FRF at the two natural frequencies will necessarily have opposite signs, as is seen in the phase of H12 in Figure 6.1. The phase at the first natural frequency is −90°, and at the second natural frequency it is +90° (or −270°, but arctan() gives phases only between ±180°). Thus, since the sign change has to come from the residue in the numerator, one of the two mode shape coefficients must have changed sign between the two modes. If, as we know in this case, the two points (masses) are in phase in the first mode, the two points must be out of phase in the second mode, as indeed we know they are.
If you instead look at the phase of H11 in Figure 6.1, the antiresonance lifts the phase by +180°, so that the phase relationship at the second natural frequency is the same as the phase at the first natural frequency. In this FRF, the force and displacement are in the same point, and the mode shape coefficient therefore cannot, of course, change sign, since it is actually the mode shape coefficient squared that makes the residue. Consequently, this FRF shows an antiresonance between the two modes. This is a necessary requirement for all driving point FRFs, where force and response are in the same point, and is used as a quality check when FRFs are measured experimentally using a shaker, see e.g., Section 14.6.

6.4.7 Impulse Response of MDOF Systems

Equation (6.122) can be inverse transformed into a corresponding equation for the time domain. The impulse response matrix [h(t)] is thus

[h(t)] = \sum_{r=1}^{N} [A]_r e^{s_r t} + [A^*]_r e^{s_r^* t},    (6.140)

which is the time domain formulation used for synthesizing impulse responses from modal parameters, or for formulating parameter extraction methods in the time domain.
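As a minimal MATLAB/Octave sketch of Equation (6.140) for a single element h_pq(t), assuming the poles and the corresponding residues for that element are known; the numerical values below are illustrative only (borrowed approximately from Example 6.4.1):

% Impulse response of one FRF element from modal parameters, Eq. (6.140)
s  = [-0.1+10i; -0.2+20i];     % poles (illustrative)
A  = [-0.025i; -0.0125i];      % residues A_pq1, A_pq2 (illustrative)
fs = 200;                      % sampling frequency in Hz
t  = (0:1/fs:10).';
h  = zeros(size(t));
for r = 1:length(s)
    h = h + A(r)*exp(s(r)*t) + conj(A(r))*exp(conj(s(r))*t);
end
% h is real (up to round-off) since the terms come in conjugate pairs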

6.5 Free Decays

Free decays are the responses of a structure that is initially deformed and then released. This is sometimes called step relaxation and was used frequently before modern operational modal analysis (OMA) techniques became popular. To use this technique, the structure is loaded by a static load, for example by hanging a large load in a wire off a bridge. The wire holding the load is then cut, causing a free decay of the structure. Another technique, popular in civil engineering applications on large structures such as buildings and bridges, is to excite the structure with a sinusoidal force using an unbalance exciter at the undamped natural frequency, and to measure the responses after the exciter is abruptly stopped. These techniques are still used, but perform poorly compared to modern OMA techniques such as those described in Chapters 16 and 17, see for example Magalhaes et al. (2010). The main reason is that the signal-to-noise ratio becomes poorer because of the short measurement time, since the measurement time is limited to how long the structure responds before the exponential decays vanish. Also, the traditional techniques are relatively expensive.
A free decay may be developed simply by starting with the equation for impulse responses in Equation (6.140). If we assume that the structure is loaded in a single DOF q at time zero, and the load is then removed, then the force signal may be written as F_0 \cdot \delta(t) for some constant F_0, and all other forces are zero. The responses may thus be calculated as the convolution between the impulse response matrix and the force vector, which, since the force vector only contains a single function, from Equation (6.140) becomes

\{u(t)\} = \{h(t)\}_q * F_q(t) = F_0 \{h(t)\}_q,    (6.141)

where the impulse response vector contains the impulse responses of column q of [h(t)] which we denote as {h(t)}q . The result arises since convolving a function with a Dirac pulse returns the function itself. We can thus see that free decays have the same appearance as the impulse responses in Equation (6.140). Modal parameters may therefore be extracted from free decays just as if these were impulse responses, see Chapter 16.


6.6 Chapter Summary

In this chapter, we have seen how vibration problems in discrete mechanical systems with masses, viscous dampers, and springs can be solved. Newton's equation can be formulated in matrix form for general MDOF systems using mass, damping, and stiffness matrices, and a displacement vector {u}, as follows:

[M]\{\ddot{u}\} + [C]\{\dot{u}\} + [K]\{u\} = \{F(t)\}.    (6.142)

It was shown that the undamped system has eigenvectors called normal modes, where all points move in phase or out of phase, which means that the mode shapes are real (they can be described by real numbers with + or − sign). The undamped natural frequency, 𝜔r, of each mode is found by taking the square root of the corresponding eigenvalue. For a system with N degrees of freedom (N masses), it was shown that we get N normal modes. Normal modes were then shown to diagonalize the mass and stiffness matrices into modal mass and modal stiffness by the weighted orthogonality criteria:

[\Psi]^T [M] [\Psi] = \lfloor M_r \rfloor,    (6.143)

where we use the mode shape matrix [Ψ] with each mode shape as a column, and the diagonal matrix ⌊Mr⌋ has the modal mass mr of mode r on its diagonal. Similarly, we showed that the diagonal modal stiffness matrix ⌊Kr⌋ is obtained by

[\Psi]^T [K] [\Psi] = \lfloor K_r \rfloor,    (6.144)

and further \omega_r^2 \lfloor M_r \rfloor = \lfloor K_r \rfloor. The modal coordinates, or principal coordinates, {q}, defined by {u} = [Ψ]{q}, were used to decouple the coupled equations into uncoupled SDOF systems with the modal mass and stiffness values.
For proportionally damped systems, where the viscous damping matrix is a linear combination of the mass and stiffness matrices, i.e., [C] = a[M] + b[K], it was then shown that the mode shapes are the same as the normal modes, and complex poles can be obtained in the modal coordinates by first calculating the diagonal damping matrix

[\Psi]^T [C] [\Psi] = \lfloor C_r \rfloor,    (6.145)

and then obtaining the poles by

s_r = \sigma_r \pm j\omega_{dr} = -\zeta_r\omega_r \pm j\omega_r\sqrt{1 - \zeta_r^2},    (6.146)

where

\omega_r^2 = \frac{k_r}{m_r}    (6.147)

is the undamped natural frequency of mode r, and the relative damping is

\zeta_r = \frac{c_r}{2\sqrt{m_r k_r}},    (6.148)

i.e., the poles are calculated for an SDOF system with the modal mass, damping, and stiffness.


For the general damping case, we then showed that a state-space formulation can be used, which leads to similar complex conjugate pairs of poles and complex mode shapes, i.e., all points no longer necessarily move exactly in and out of phase; we may have traveling waves.
For any type of damping, we showed that a frequency response matrix can be formulated as follows:

[H(j\omega)]\{F(j\omega)\} = \{U(j\omega)\},    (6.149)

and we showed that this frequency response matrix can be decomposed by using the residue matrix and poles by the modal superposition equation, where each frequency response H_{pq} = X_p/F_q is

H_{pq}(j\omega) = \sum_{r=1}^{N} \frac{A_{pqr}}{j\omega - s_r} + \frac{A_{pqr}^*}{j\omega - s_r^*},    (6.150)

where the residues, A_{pqr}, are composed of a modal scaling constant, Q_r, for each mode r, and the mode shape coefficients in the two points p and q,

A_{pqr} = Q_r \psi_{pr}\psi_{qr} = \frac{1}{j2\omega_{dr} m_r}\,\psi_{pr}\psi_{qr}.    (6.151)

We discussed that a common mode shape scaling method, to unity modal A, is equal to setting the modal scale constant to Qr = 1, which obviously simplifies the frequency response synthesis as no scaling needs to be done in the modal superposition. Finally, we presented an accurate method of calculating forced response in the time domain, using a method which defines digital filters for each SDOF system in a modal superposition formulation. The method is very convenient to produce accurate time data for use in simulation examples of noise and vibration signals.

6.7 Problems

Many of the problems following are supported by the accompanying ABRAVIBE toolbox for MATLAB/Octave and further examples which can be downloaded with the toolbox. If you have not already done so, please read Section 1.6. ABRAVIBE is completely free and can be downloaded from www.abravibe.com, together with example files and other material complementing this book, including a solutions manual for all book problems.

Problem 6.1 Show that the residues Apqr for each mode can be split as given in Equation (6.114), by applying the Heaviside cover-up method from Section 2.6.1 to Equation (6.113) and using the expression for the poles:

s_r = \sigma_r + j\omega_{dr}.    (6.152)

Problem 6.2 Find the eigenvalues, poles, natural frequencies (in Hz), and mode shape vectors for a 2DOF system like the one in Figure 5.13, for which m1 = 2 kg and m2 = 5 kg, k1 = k2 = k3 = 10^4 N/m, and proportional damping with [C] = 5[M] + 0.0001[K].


Use manual calculations to find the characteristic equation and use the MATLAB/Octave roots command to find the roots of this equation. Use both the formulation according to Equation (6.13) and Equation (6.23) and verify that you get exactly the same results. This exercise is valuable to learn where the two eigenvalue formulations diverge and where they come together again.

Problem 6.3 Use the MATLAB/Octave command eig to solve Problem 6.2.

Problem 6.4 Synthesize FRFs between all degrees of freedom for the system in Problem 6.2 using the formulation in Equation (6.106). Write a MATLAB/Octave script to calculate the matrix inverse at every frequency, and plot the frequency responses in mobility form overlaid in one plot. How many FRFs do you see? Explain why.

Problem 6.5 Use MATLAB/Octave to formulate the solution to Problem 6.2 using the state-space form and verify that you get the same results as in Problem 6.2.

Problem 6.6 Synthesize FRFs in mobility form between all degrees of freedom for the system in Problem 6.2 using the formulation in Equation (6.118) and the results from Problem 6.5. Write the equations calculating each residue, and then write a MATLAB/Octave script to plot the frequency responses. Compare with the results from Problem 6.4.

Problem 6.7 Add 5 to element (1,1) of [C] in Problem 6.2 and solve for the poles and mode shapes of the nonproportionally damped system. Check how the poles are affected and how complex the new mode shapes become.

Problem 6.8 Use the forced response method of Section 19.2.3 to create the beating and distorted signals mentioned in Section 5.2.5. Check how long it takes for the transient to die out. Then reduce the damping of the system to 1% and rerun the simulation, and see how long it takes for the transient to disappear then. (Hint: You can use the accompanying toolbox command timefresp to create the displacement output of the SDOF system.)

References

Craig RR and Kurdila AJ 2006 Fundamentals of Structural Dynamics. John Wiley.
Ewins DJ 2000 Modal Testing: Theory, Practice and Application 2nd edn. Research Studies Press, Baldock, Hertfordshire, England.
Heylen W, Lammens S and Sas P 1997 Modal Analysis Theory and Testing 2nd edn. Catholic University Leuven, Leuven, Belgium.
Inman D 2007 Engineering Vibration 3rd edn. Prentice Hall.
Magalhaes F, Cunha A, Caetano E and Brincker R 2010 Damping estimation using free decays and ambient vibration tests. Mechanical Systems and Signal Processing 24(5), 1274–1290.
Maia N and Silva J (eds.) 2003 Theoretical and Experimental Modal Analysis. Research Studies Press, Baldock, Hertfordshire, England.
O'Callahan J, Avitabile P and Riemer R 1989 System equivalent reduction expansion process (SEREP). Proceedings of the 7th International Modal Analysis Conference, pp. 29–37.


7 Transducers for Noise and Vibration Analysis

A large variety of sensors and instrumentation for measurements are in use in the various fields of noise and vibration measurement and analysis. A book such as this can by no means include any comprehensive description of all these sensor types. However, it is still reasonable to give a short description of the most common types of sensors to serve as an introduction to the newcomer to this field. In this chapter, we will therefore present some of the most commonly used transducers for vibration measurements, the piezoelectric transducer, and the most common type of microphone used for precision acoustic measurements, the condenser microphone. Geophones and micro-electro-mechanical systems (MEMS) sensors are other sensors we briefly introduce. At the end of the chapter, there is also a section on electromagnetic shakers used for producing the excitation force for measurements of frequency response. Excellent information on sensor technology can be obtained from the manufacturers, who are, of course, the real experts on their own sensors. There is a lot of information available on the Internet, for example at the many manufacturers' websites.
To measure particularly vibration accurately, but sometimes also noise, is often somewhat of a challenge, and care must be taken to assure good quality measurements. Becoming a good experimentalist is certainly very difficult through reading a book. If you are new to this field and are going to make vibration measurements, you are encouraged to start by making a number of (seemingly) simple measurements on setups where you know the correct answer. In this chapter, I have devoted Section 7.14 to some comments on good measurement procedures that can help you obtain a good measurement practice.
A word about the nomenclature used in this chapter is perhaps necessary. Transducers and sensors are both common names for devices that transform some physical entity into an electrical voltage that can readily be measured by an instrument. Although the two names are sometimes used with slightly different meanings, I will use the two as synonyms, as is quite common among measurement engineers.

7.1 The Piezoelectric Effect

The most common transducers used in vibration analysis are based on the piezoelectric effect. This effect is common, occurring naturally in, for example, quartz crystals, where


Figure 7.1 Charge and voltage models for piezoelectric transducers. In the current model in (a) the sensor charge, q acts as a current source in parallel with the capacitance, C, and resistance, R. In the voltage model in (b), the sensor voltage, V , is instead in series with the capacitance, C, and in parallel with the resistance, R.

a charge, q, is produced across the crystal when a force is applied to it. The charge is proportional to the force, F, such that

q = S \cdot F,    (7.1)

where S is called the sensitivity factor. The direction-dependent piezoelectric effect is usually explained through the existence of electric dipoles in the material. Under certain conditions, these dipoles can be forced into a common direction, which then becomes the direction of the material’s piezoelectric effect. There are two main types of materials used in piezoelectric transducers for vibration measurements. Traditionally, ceramic materials artificially polarized to obtain piezoelectric characteristics were used. This type of crystal is still used in accelerometers of charge mode type. The most common piezoelectric material type today in integrated electronics piezoelectric (IEPE) transducers is quartz crystals. The quartz crystals have relatively low piezoelectric (charge) sensitivity, which in the past made them unsuitable, but today, this is overcome by adding signal conditioning inside the sensor, see Section 7.3. An electrical model of piezoelectric transducers can be formulated in two ways. First, the transducer can be described by a charge model, where the transducer signal is seen as a charge generator coupled in parallel with a resistance and a capacitor. Alternatively, the transducer can be described by a voltage model, where the transducer signal is an electric voltage and the resistor and capacitor are instead in series with the voltage generator, see Figure 7.1. Both of these models are frequently used.

7.2 The Charge Amplifier

Since the traditional so-called charge mode piezoelectric transducer produces charge, and not voltage, the measured signal must be converted to a voltage before we can measure it with normal measuring instruments. The conversion is accomplished using a charge amplifier, see Figure 7.2. In the figure, Cc symbolizes the cable capacitance, while Rf and Cf

7.2 The Charge Amplifier

Rf

Cf

A

Cc

Rg

Cg

q

Cin

+ uout –

Sensor

Figure 7.2

Cable

Charge Amplifier

Circuit diagram of the charge amplifier.

are feedback resistance and capacitance, respectively. Rg and Cg are the transducer’s inner resistance and capacitance, respectively, and Cin is the charge amplifier input capacitance. If we calculate the amplification, it can be shown (Serridge and Licht 1986) that uout 1 = −( , (7.2) ) 1 q 1+ C + 1C A

f

A

s

where Cs is the net input capacitance, that is, Cs = Cg + Cc + Cin . For large values of the amplification, A, the total amplification consequently can be written approximately as follows: uout 1 ≈− . (7.3) q Cf What is particularly interesting in Equation (7.3) is that the output signal from the charge amplifier is independent of the capacitances of the transducer and cable, and only dependent upon the characteristics of the amplifier. However, the signal from a charge mode transducer is very low, and therefore susceptible to noise. Using such a sensor, cables therefore need to be kept as short as possible. Charge amplifiers are relatively expensive to manufacture, and the signal from the transducer is sensitive to disturbances. Another disadvantage with this type of transducer is the so-called triboelectric effect. This effect is due to changes in cable capacitance resulting from bending the cable, giving rise to a noise signal. Special cables must therefore be used to reduce this effect. Even then, great caution must be observed so that the cable does not move during the measurement, usually by taping down the cable. Among the advantages of charge mode transducers, compared with the more common IEPE sensors we are going to introduce in the next section, are their ability to handle higher operating temperatures, and that they can be used over a large dynamic measurement range. The charge amplifier can easily be reconfigured for different measurement ranges by changing the feedback capacitor, which according to Equation (7.3) controls the amplification. Charge amplifiers therefore often have an amplification range of many decades.

161

162

7 Transducers for Noise and Vibration Analysis

7.3

Transducers with Built-In Impedance Converters, “IEPE”

To reduce the cost and increase signal quality when measuring with piezoelectric transducers, the US transducer manufacturer Kistler in the 1960s developed the built-in impedance converter which led to a breakthrough in transducer technology in the 1980s. This method includes a built-in, semiconductor-based impedance converter as in Figure 7.3. The upper transistor is a so-called CMOS-FET, a Complementary Metal Oxide Semiconductor Field-Effect Transducer with extremely high-input impedance. Together with the bipolar transistor it forms an impedance converter, which converts the transducer’s output voltage, ug , with a high impedance, to a voltage with low impedance, as described below. As mentioned earlier, this type of transducer is usually based on quartz crystals which are less expensive than the previously used ceramic crystals. The lower sensitivity of the quartz crystal material can be compensated for by the amplification in the built-in impedance converter. Depending upon manufacturer’s brand names, this technique is known by a number of different names, see Table 7.1. Recently a new name, IEPE has been introduced as a “manufacturer neutral” name for the technology. The IEPE impedance converter, which has become a de facto standard, is based on the converter being fed with a constant current of between 2 and 20 mA. The input current to the amplifier turns it on and produces a direct current (DC) “working point” voltage,

+ IDC

Cg ug

Rg

Cc

UDC –

Sensor

Cable

Cc Di = 2–20 mA E = 18–30 V DC Current supply

Figure 7.3 Circuit diagram of the IEPE principle and the current supply providing signal conditioning for an IEPE sensor.

Table 7.1 Common names for the technique of built-in impedance conversion in piezoelectric transducers, for a few of the largest manufacturers. Other manufacturers use other names. ICP is currently the most common name used by analyzer manufacturers, although IEPE is starting to take over. Manufacturer

Brand Name

Brüel & Kjær

DeltatronTM

Dytran Instruments

LIVMTM

Endevco

IsotronTM

Kistler Instrument Corp.

Piezotron®

PCB Piezotronics Inc.

ICP®

7.3 Transducers with Built-In Impedance Converters, “IEPE”

UDC , of typically 9–12 V. The measurement signal from the transducer is superimposed as a variation on this DC voltage. The obtained alternating current (AC) voltage from the transducer, and with that the calibration factor, is independent of the measurement current. That said, it is important that the voltage E is sufficiently large so that it can generate the constant current to ensure the measured signal is not clipped. Also, it should be mentioned that a weak point with the IEPE “standard” is that unfortunately, the current needed for proper operation of different IEPE sensors is not firmly standardized. Some IEPE sensors perform badly for low currents, which cause problems when using some measurement systems with built-in current supply, because instrument manufacturers sometimes design the built-in current supply with rather low current to keep power consumption and heat down. Therefore, one must carefully check the manufacturer’s specifications and verify that a particular sensor can work properly with a particular instrument. The circuit feeding the transducer with built-in impedance conversion is quite simple; see Figure 7.3. A battery or stabilized voltage, E, is connected to a current diode, Di , which ensures that a constant DC current flows through it. A so-called coupling capacitor, Cc , usually of tantalum type, ensures that the DC flows to the transducer and not to the measuring instrument. An indicator device is often found on the current supply unit. The indicator measures the DC voltage UDC . A voltage of approximately 9–12 V indicates that the unit has contact with the impedance converter. A UDC voltage of 0 or E volts indicates, on the other hand, either a short-circuit in the cable or an open connection (e.g., broken cable), respectively. These indications can be of considerable help when trouble-shooting measurement connections, and some measurement systems with built-in supply for this type of transducer also have an indicator in the software.

7.3.1

Low-Frequency Characteristics

Piezoelectric transducers can never measure frequencies down to DC because of the piezoelectric sensitivity to charge (at DC there is no change in charge, i.e., electrical current). This is on the other hand rarely necessary for vibration analysis. When it is necessary, however, there are other types of transducers that can be used, for example, piezoresistive or capacitive accelerometers. With well-chosen time constants, relatively low frequencies can be measured using piezoelectric transducers, down to approximately 0.1–0.5 Hz with maintained accuracy. In such cases, it is essential to understand the time constants involved. An equivalent model for this purpose is shown in Figure 7.4, where

us

Sensor

Current supply

τ1

τ2

uout

Figure 7.4 Equivalent model for time constant for a piezoelectric transducer. The “true” transducer signal us passes first the time constant 𝜏1 , which is the transducer’s inner time constant, and then the time constant 𝜏2 , which originates in the current supply unit or charge amplifier, depending on the sensor type.

163

164

7 Transducers for Noise and Vibration Analysis

𝜏1 = R1 C1 is the transducer’s built-in time constant (dependent upon its inner capacitance and resistance), and 𝜏2 = R2 C2 is the time constant of the current supply (or charge amplifier, if such a device is used). It consists of the coupling capacitor and the shunt resistance (actually in parallel with the input impedance of the measuring instrument, but the latter is normally much higher than the shunt resistance). The total time constant resulting from the cascade coupling of circuits with two highpass filter time constants can be approximated by 1 1 1 ≈ + , 𝜏tot 𝜏1 𝜏2 which can also be written as follows: 𝜏𝜏 𝜏tot ≈ 1 2 , 𝜏1 + 𝜏2

(7.4)

(7.5)

which we recognize as a formula similar to resistors in parallel coupling, or springs in serial coupling, whichever is most familiar. In the frequency domain, for a simple high-pass filter with an RC link, the lower frequency limit is given by fc =

0.16 1 ≈ , 2𝜋𝜏 𝜏

(7.6)

where fc is the cutoff frequency (−3 dB frequency) in Hz and 𝜏 is the time constant in seconds. In most cases, one of the time constants is much smaller than the other, and the smaller of the two hence determines the total time constant. This is particularly the case when using measurement hardware with built-in IEPE supply. The built-in coupling capacitor is generally smaller than that used in most external current supply units, which give a higher frequency limit when using a built-in supply. In cases where the time constant needs to be long, for example, when measuring slow pulses (pulse times of 50 ms or more), then an external current supply should be considered.

7.3.2

High-Frequency Characteristics

The high-frequency limit of most vibration measurements is due to the sensor or mounting limitations such as accelerometer resonance frequency or stiffness in the attachment point. However, when using IEPE sensors with very long cable lengths, the cable capacitance and/or the power supply drive current can introduce a frequency limit. According to accelerometer manufacturers, the high-frequency limit when using IEPE sensors is affected by the power supply drive current, I in [A], the cable capacitance per meter in [F/m], C, the cable length in [m], L, and the peak voltage of the sensor, Vp in [V], by the equation: fmax =

I . 2𝜋Vp CL

(7.7)

This equation shows that if long cables are required, the drive current may have to be increased. Normally, for cables used for IEPE sensors, no special care has to be taken for cable lengths shorter than at least 20 m (60 feet).

7.4 The Piezoelectric Accelerometer

7.3.3

Transducer Electronic Data Sheet, TEDS

A recent standard, IEEE 1451.4 (2004), has brought a new concept to modern instrument hardware. The TEDS standard allows the sensor manufacturer to build a small computer chip into the sensor which stores information about the sensor, such as its sensitivity (calibration factor), type, etc. Many modern data acquisition systems can read this sensor information, which eliminates the possibility of making errors when entering information about the sensor into the measurement systems. Most IEPE sensors can be ordered with TEDS circuits installed.

7.4 The Piezoelectric Accelerometer

The piezoelectric accelerometer is fundamentally designed according to the left-hand part of Figure 7.5, which illustrates the pressure mode design. At the bottom, the sensor consists of a stiff base, which ideally should follow the motion of the measured object. On the base, the silicon crystal and then the seismic mass are glued. The mass is preloaded as indicated in the left-hand illustration in Figure 7.5 so that there is always a positive force onto the crystal within the operating range of the accelerometer. When the base of the accelerometer is subjected to acceleration, the mass generates a reaction force onto the crystal, which is proportional to the acceleration according to Newton's second law, F = ma. The sensitivity, Sa, expressed in [pC/ms⁻²] for a charge mode accelerometer, or in [mV/ms⁻²] for an IEPE accelerometer, therefore depends both on the crystal's charge sensitivity and on the size of the seismic mass.

The pressure mode sensor suffers from several drawbacks, such as a high base strain sensitivity and a relatively high temperature sensitivity, see Section 7.4.4. In order to avoid some of these negative properties, the shear mode design has been developed, which is illustrated on the right-hand side of Figure 7.5. The basic principle of the shear mode design is that the piezoelectric crystal is mounted so that it is subjected to a shear force from the seismic mass instead of a pressure force. The seismic mass is furthermore usually divided into several pieces, which are mounted around the crystal. The advantage of this sensor design is first of all that piezoelectric crystals are usually more sensitive to shear force than to pressure force. Thus, a shear mode sensor is usually more sensitive than a pressure mode sensor with an equal seismic mass. Furthermore, the design with several seismic masses mounted around the crystal results in less transverse sensitivity, see Section 7.4.4. Because of the reduced

Figure 7.5 Principle design of pressure mode type (a) and shear mode accelerometers (b). The shear design gives several advantages, such as lower transverse sensitivity and base strain sensitivity, see Section 7.4.4.



crystal area in contact with the base in the shear design, this sensor type will also exhibit less sensitivity to base strain.

7.4.1 Frequency Characteristics

In order to understand the frequency characteristics of the accelerometer, we shall study the simplified, equivalent model of Figure 7.6. In this model, we reduce the accelerometer to the mass of the base, the seismic mass, and the stiffness of the silicon crystal. This model contains no damping, which works quite well since the accelerometer is built with as little damping as possible to minimize phase error. The model in Figure 7.6 gives rise to a resonance frequency, which influences the output signal from the transducer. If we set up Newton's equation for the seismic mass, we have

ms ẍs = −kc (xs − xb).    (7.8)

If we assume harmonic motion, i.e., observe each frequency independently, we get

xs = Xs sin(𝜔t)  ⇒  ẍs = −𝜔² Xs sin(𝜔t),    xb = Xb sin(𝜔t).    (7.9)

Substituting Equation (7.9) into Equation (7.8) yields

−𝜔² ms Xs = −kc (Xs − Xb).    (7.10)

This equation can be rewritten as follows:

ẍs/ẍb = xs/xb = (kc/ms) / (kc/ms − 𝜔²) = 1 / (1 − (𝜔/𝜔m)²),    (7.11)

where

𝜔m² = kc/ms.    (7.12)

From Equation (7.11), it follows that when the base of the accelerometer is exposed to a constant acceleration, we obtain a resonance at the frequency fm, where

fm = (1/(2𝜋)) √(kc/ms),    (7.13)

which is called the accelerometer's mounted resonance frequency.

Figure 7.6 Equivalent model of the piezoelectric accelerometer. mb is the base mass, ms the seismic mass, and kc the stiffness of the silicon crystal.



Figure 7.7 Output signal from an ideally mounted accelerometer as a function of frequency, when a constant sinusoidal acceleration is applied at each frequency, according to the model in Figure 7.6. As seen in the figure, the output signal displays a sharp peak at the mounted resonance frequency. In the plot, the frequency axis is normalized to the mounted resonance frequency.

In order to measure fm experimentally for a specific accelerometer, it is mounted onto a rigid mass, which is subjected to constant sinusoidal acceleration of increasing frequency. With this kind of measurement, we obtain an output signal from the accelerometer according to Figure 7.7. As seen in Figure 7.7, the accelerometer signal rises sharply when the frequency approaches the (mounted) resonance frequency, fm. Accelerometers are usually designed with very low damping in order to reduce the phase error, which could otherwise become significant, see Figure 5.3. Normally, a frequency range up to fm/4 is used for a maximum error of approximately 5%, or fm/3 for a maximum error of approximately 10%. The output signal follows the formula

aout = atrue / (1 − (f/fm)²),    (7.14)

which of course is not exactly correct all the way up to the resonance frequency, since Equation (7.14) does not contain any damping, but the formula works well for all usable frequencies.
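The amplitude rise predicted by Equation (7.14) is easy to evaluate for a few frequency ratios; the MATLAB/Octave sketch below is a minimal example with arbitrarily chosen ratios and simply prints the deviation from the true acceleration in percent.

ratio = [0.1 0.2 0.25];              % example values of f/fm
gain  = 1./(1 - ratio.^2);           % aout/atrue according to Equation (7.14)
for n = 1:length(ratio)
    fprintf('f/fm = %.2f gives an amplitude deviation of %.1f %%\n', ...
        ratio(n), 100*(gain(n) - 1));
end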

7.4.2 Mounting Accelerometers

Various methods may be used when mounting the accelerometer. The most common are screws, glue, wax, and magnets. Many types of glue are used: rigid glue such as dental cement, and softer glue such as superglue or even glue from a glue gun. When accelerometers are mounted this way, the mount itself will naturally have a certain stiffness. This “spring,” along with the accelerometer’s mass, gives rise to a resonance which is often lower than the previously mentioned ideally mounted resonance frequency,


thereby limiting the measurement range. It is impossible to predetermine where this resonance will be, since it depends on the measured object's stiffness at the mounting point, which influences the mass mb according to the above model. A very useful method for verifying the usable frequency range of a particular accelerometer will be discussed in Section 7.14. When mounting an accelerometer, it is very important to ensure that the surface is clean and smooth. The smallest bit of dirt or surface roughness may give rise to elasticity in the mounting, which can seriously influence the measurement. When mounting with screws, either the fingers alone or, even better, a torque wrench should be used. A suitable torque is normally given in the transducer's user instructions. When removing a transducer, it is also very important, especially when using glue, that it is twisted loose, that is, turned in the same plane as that on which it is mounted. All accelerometers designed for glue mounting have a six-sided base (like a nut) so that the transducer may be twisted loose without damaging the casing.

7.4.3 Electrical Noise

It should be noted, particularly when using IEPE transducers, that these often have their housing connected to the shield of the cable. Power line 50 Hz (or 60 Hz) noise can easily arise during measurements if there is a difference in electrical potential between the points where the accelerometers are mounted, and if the transducer is mounted on conductive material. As a first measure, the transducer should be insulated from the measured object using a special insulating plate. There are also some transducers with insulated bases, often consisting of an anodized coating. This coating can sometimes be scratched off, after which it no longer insulates. In environments especially susceptible to noise, special IEPE transducers with insulated housings that are galvanically separated from the transducer element itself should be used. These transducers, however, require special cables with two conductors and are usually heavier industrial transducers made for measuring on foundations and other stiff objects.

To avoid noise when measuring acceleration, particularly with charge mode transducers, another important consideration is that the cables are fixed so that they do not hang loose and vibrate. Cables should therefore be taped down on the structure as tightly as possible. Furthermore, one should always strive for as short cables as possible between the transducer and either the charge amplifier or the current supply unit (or the measurement system if it has a built-in current supply). Another tip when using grounded measurement equipment is to make sure that the same mains outlet supplies all units in the measurement setup. Otherwise, potential differences (voltage differences) can exist because different outlets have different ground potentials.

7.4.4 Choosing an Accelerometer

In addition to what has already been mentioned about frequency range, etc., of the accelerometer, we shall discuss a few other characteristics of the accelerometer necessary to understand in order to select a proper accelerometer for a particular measurement. An ideal accelerometer should be as light as possible in order not to affect the vibrations the


sensor is intended to measure. If the mass of the accelerometer is too high, a phenomenon usually referred to as mass loading occurs. Mass loading is often misunderstood to be related to the weight of the accelerometer with respect to the weight of the measurement object. Since mass loading is a dynamic phenomenon, however, this is not correct. Instead, it is the dynamic stiffness at the point where the accelerometer is mounted which affects how much mass the accelerometer can have without (seriously) affecting the vibrations. It is often surprising how light an accelerometer has to be in order not to give a mass loading effect. My recommendation is therefore to test experimentally whether mass loading is a problem, if there is any doubt, see Section 7.14.

Temperature sensitivity manifests itself as an increasing sensitivity with increasing temperature and is given in [%/°C]. How high the temperature sensitivity is depends much on the accelerometer design, and the shear mode type described in Section 7.4 usually has lower temperature sensitivity than other designs. Great caution must be observed if temperature fluctuations occur while measuring with an accelerometer.

Accelerometers are also sensitive to base strain, i.e., to bending of the base. For example, if we mount an accelerometer on a bending beam, the accelerometer will give a signal due to the deformation of the base, making the silicon crystal bend. The base strain sensitivity is also typically lower for accelerometers based on the shear mode design than for pressure mode accelerometers. The base strain sensitivity is typically given in [(m/s²)/𝜇strain] (“microstrain”) at approximately 250 𝜇strain (1 𝜇strain is a strain of 1 𝜇m/m).

When an accelerometer is mounted on a measurement object, there will of course in many cases be vibrations in directions other than the direction intended to measure. These vibrations give rise to an undesired signal from the transducer. The sensitivity to cross-directional vibration is specified by what is called the transverse sensitivity, which is given as a percentage. If the transverse sensitivity is 1%, this indicates that a 100 m/s² acceleration in the cross-direction will produce an output as large as an acceleration level of 1 m/s² in the transducer's main direction. The transverse sensitivity depends on the transducer design and is lower for the shear mode type than for other accelerometer designs. The transverse sensitivity can be a significant cause of error in acceleration measurements where the acceleration levels are considerably higher in one direction than in other directions, as the example below shows.

Example 7.4.1 An example of a case where there are often considerably higher acceleration levels in one direction than in the other directions is on combustion engines, particularly those with in-line cylinders. On such objects, great care must be taken when interpreting the results. Consider, for example, an engine with 10 times higher vibration in the vertical direction than in the horizontal directions. Let us assume we use accelerometers with a transverse sensitivity of 5% (a common value). If we denote the low acceleration level in the horizontal direction by A, the vertical vibration level will be 10A. Then the output of the vertical accelerometer will be

Az = 10A + 0.05 ⋅ A = 10.05A.    (7.15)

In Equation (7.15), the error (the output compared with the true vertical acceleration of 10A) is approx. 0.5%, which is negligible. But what happens with the output of the horizontal transducer? Its output will be

Ax = 0.05 ⋅ 10A + A = 1.5A.    (7.16)


The error in the horizontal accelerometer is 50%! This example shows that even a seemingly small transverse sensitivity of 5% can easily cause a very large error when the acceleration levels are considerably different in different directions. End of example.
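The arithmetic of Example 7.4.1 is easy to repeat for other transverse sensitivities and level ratios with the small MATLAB/Octave sketch below; the variable names and the numbers are my own and only mirror the example above.

st = 0.05;                 % transverse sensitivity (5 %), assumed example value
r  = 10;                   % ratio of vertical to horizontal true acceleration level
A  = 1;                    % true horizontal acceleration (arbitrary unit)
Az = r*A + st*A;           % vertical channel output, cf. Equation (7.15)
Ax = A + st*r*A;           % horizontal channel output, cf. Equation (7.16)
fprintf('Vertical error %.1f %%, horizontal error %.0f %%\n', ...
    100*(Az - r*A)/(r*A), 100*(Ax - A)/A)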

7.5 The Piezoelectric Force Transducer

Besides measuring motion (acceleration), it is often necessary in vibration analysis to measure dynamic force. For that purpose, a piezoelectric force transducer is normally used, which has a principal design as illustrated in Figure 7.8. The design of the force transducer is relatively simple, with a base and a small mass, referred to as the mass above the force gauge, which is glued onto the piezoelectric crystal. To allow tensile (pulling) forces to be measured by the transducer, the crystal is preloaded so that the force on the crystal is always a compressive force within the operating range of the transducer. The applied force is transferred over the mass above the force gauge directly to the silicon crystal. Because of the limited preload provided by the springs, a piezoelectric force transducer is often specified to allow more compression force than tension force. Because of the direct contact between the silicon crystal and the base of the transducer, the force transducer is very sensitive to transverse forces. Therefore, a stinger (in British English “rod”) should always be used between the transducer and the shaker to eliminate any transverse forces. The stinger should be flexible in the transverse direction and is often made of piano wire on which small threaded screws are glued or soldered for attachment to the shaker and the force transducer, see Figure 7.9. Stingers with different axial stiffness are required for various measurements, since the stinger itself must be stiff enough to transfer the force within the excitation frequency range, but not stiffer than necessary. Usually, the stiffness can be adjusted by using stingers of different length. Practical aspects of the use of stingers will be discussed in Chapter 13.

When excited with no load, the force transducer will display a curve similar to that for the accelerometer in Figure 7.7, with a resonance caused by the mass above the force gauge (see Figure 7.8) and the stiffness of the silicon crystal. This resonance is specified, similarly to the accelerometer, as the mounted resonance without load and is usually very high (compared with the frequency range you can measure accurately with the force transducer), typically above 60 kHz. The force transducer's operating frequency range is rarely limited by this factor; instead, it is limited by the range in which the force applied through the force transducer is actually an axial force. In practice, it is very difficult (but not impossible!) to


Figure 7.8 Principle design of the piezoelectric force transducer. A preload is applied over the crystal so the net force is always a pressure over the crystal, illustrated by the two preload springs in the figure. The side marked “Mass above force gauge” should be attached to the test structure.


Figure 7.9 When using a force transducer, transverse forces must be eliminated, which is accomplished by a so-called stinger.


excite a structure with a shaker and properly measure the force above, say, 1–2 kHz, see Sections 7.14 and 13.9. The force transducer does not actually measure the force applied to the measurement object. Instead, the force measured is the force acting on the crystal; this means that the mass above the force gauge is effectively added to the measurement object when the transducer is attached. It is therefore necessary that this mass is small. For the same reason, it is essential to mount the correct side of the force transducer onto the measurement object, as otherwise the much heavier base mass is added to the structure and a large error results. Most transducers have some marking which shows the correct side to be attached to the object. See also Section 7.14 for a discussion of mass loading and of the apparent mass of the structure.

Piezoelectric force transducers usually have relatively long time constants compared with accelerometers. This feature is necessary for accurate calibration, as force transducers are generally calibrated semistatically, i.e., by applying a known static force to the transducer. For example, if the transducer is quickly loaded with an accurately known mass, the resulting, slowly decaying force signal can be measured before the time constant has generated too large an error.

7.6 The Impedance Head

The impedance head is a transducer that contains a force transducer and an accelerometer in a single housing. The name comes from the fact that, in the past, the transducer was used to measure mechanical point impedance, i.e., the ratio between (dynamic) force and velocity (the acceleration was integrated). Today, most measurements of frequency response are measured as mobility or accelerance, as we saw in Chapters 5 and 6, but the name of this transducer has remained. When measuring point accelerance, that is, the ratio between acceleration and force at the same point, it is often impossible to place a force transducer and an accelerometer so that they measure in the same point. Sometimes, for example, on thin plates, the transducers can each be placed on opposite sides of the plate. In many cases, however, the force sensor and accelerometer have to be mounted next to each other. This results in an error because of the difference in acceleration between the desired point where the force transducer is placed, and the point at which the accelerometer is possible to mount. This error depends on the difference in mode shape coefficients between the two points, as follows from the discussion in Section 6.4, and thus the error is larger for smaller measurement objects or



Figure 7.10 Schematic illustration of the impedance head. (a) mechanical design, and (b) equivalent electric circuit. Force Fout is the force to be measured, whereas the signal given by the transducer is proportional to force Ff in the figure. If mf is small relative to the measurement object’s dynamic mass, then the error is small. Since current in the impedance analog corresponds to mechanical velocity, currents in the figure are given as velocities instead of the acceleration signals actually produced by the transducers. The acceleration signal which the transducer thus gives corresponds to velocity va , which, as seen in the schematic, does not correspond to the desired acceleration aout (analogous to vout ). The error in the acceleration signal is analogous to the error of a regular accelerometer, see Section 7.4.

for higher-order mode shapes, where the wavelength of the mode shapes is shorter. The impedance head overcomes this problem. A schematic illustration of an impedance head can be found in Figure 7.10, together with an equivalent model. As seen in the figure, both the force and the acceleration signals from this type of transducer, like those from separate transducers, have bias errors. One advantage with the impedance head is, however, that when measuring point accelerance, a part of these errors can be compensated for (Håkansson and Carlsson, 1987). When measuring flexible structures where the mass above the force gauge, mf in the figure, is significant in comparison with the structure's apparent mass, the measured point accelerance can rather easily be compensated. The compensation is done by first measuring the accelerance of the impedance head without load. The inverse of this measured accelerance, H0a(f), is then subtracted from the inverse of the measured accelerance of the structure, Ha(f). The corrected accelerance, H′a(f), is thus calculated by

H′a(f) = 1 / (1/Ha(f) − 1/H0a(f)).    (7.17)
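A minimal MATLAB/Octave sketch of the correction in Equation (7.17) is given below. The function and variable names are my own; Ha and H0a are assumed to be complex accelerance vectors measured on the same frequency grid, and the function would be saved in its own file.

function Hc = correct_point_accelerance(Ha, H0a)
% Apply the impedance head correction of Equation (7.17).
%   Ha  - accelerance measured on the structure (complex vector)
%   H0a - accelerance of the unloaded impedance head, same frequency grid
Hc = 1./(1./Ha - 1./H0a);
end

The correction only makes sense where both measurements are valid, so in practice it should be restricted to the frequency range with good coherence in both FRFs.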

The drawback with impedance heads is that they are considerably taller than separate force transducers. This means that, even using good stingers, the frequency range over which the force stays in line with the impedance head is relatively low; already at relatively low frequencies, the impedance head has a tendency to start to “wobble.” For higher frequencies, where the impedance head would solve the problem of short wavelengths discussed above, one is therefore still often obliged to use a force transducer with a lower form factor.

7.7 The Impulse Hammer

An impulse hammer is used to excite vibration, for example, to measure frequency response when studying a mechanical system. It consists of a handle, a head, and a force transducer


Figure 7.11 The impulse hammer. The hammer consists of a handle, a head, and a force transducer with an interchangeable tip. The head mass can often be adapted to the measurement object by attaching an extra mass, increasing the head mass and giving more energy in the pulse (longer pulse for approximately same strength strike).


with an interchangeable tip, see Figure 7.11. Impulse hammers come in various sizes, from that of a pen with a head mass of a few grams, to sledgehammers with a 1.5 m handle and a head mass of 10 kg or more. In using the hammer, you make a distinct (but usually rather soft) impact, and the force transducer will measure the resulting force pulse; the harder the tip, the shorter the pulse, and therefore the wider the frequency content. See Section 13.8 for more details on practical aspects of impulse excitation.

7.8 Accelerometer Calibration

Several different methods are used to calibrate accelerometers. The method giving the highest accuracy, and therefore used by calibration laboratories, is the so-called reference calibration, where the accelerometer is subjected to the same acceleration as another, accurately calibrated reference accelerometer. In certain cases, the reference is instead a laser interferometer, which has very high precision. The accelerometer must be calibrated at a specific frequency to provide traceability, and in Europe it is in most cases calibrated at 159.2 Hz, which corresponds to the angular frequency 1000 rad/s. The reason behind using this calibration frequency is that acceleration, velocity, and displacement at this frequency are easy to convert, as they are related to each other by the angular frequency. In the United States, 100 Hz is often used as the calibration frequency, which is considered unsuitable in Europe as it coincides with the second harmonic of the 50 Hz mains voltage.

A similar method with somewhat less accuracy uses a calibrator, in this case a shaker with a precisely determined acceleration. This type of calibrator is available as a tabletop model or a practical hand-held model. To use this method of calibration, the accelerometer is mounted on the calibrator. Then the level is measured on the same measurement system to be used later for the actual measurement. The measurement system is adjusted so that it gives the correct level. In this way, the entire measurement chain is calibrated, which is good from an accuracy point of view. At the same time, all possible sources of error, such as an incorrectly configured charge amplifier, are eliminated.



Figure 7.12 Calibration of force transducer and accelerometer using a calibrated mass. The principle behind this calibration is that the measured frequency response between force and acceleration (acceleration/force) according to Newton’s law F = m ⋅ a gives a constant value for all frequencies, equal to 1∕m. By adjusting the calibration factor of one of the measurement system channels so that this relationship is fulfilled, then all measurements are correct, with traceability to the mass.

A third method of accelerometer calibration can be used in two cases: when one has access to a well-calibrated force transducer, or when one is going to measure the frequency response between force and acceleration (usually with an impulse hammer). In the latter case neither of the transducers’ sensitivities need to be known in advance. This method is usually called mass calibration, since it is based on the fact that a reference mass is known with high accuracy. Since Newton’s second law states that F = ma, the accelerance of a measurement on a solid mass will be a∕F = 1∕m, independent of frequency. This frequency response should thus be a constant straight line, as long as the measurement is correct. The method is good since mass (weight) is relatively easy to measure with high accuracy, and it is applied as shown in Figure 7.12. See also Section 7.14 for a practical discussion of the use of mass calibration.
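A small MATLAB/Octave sketch of the flatness check implied by a/F = 1/m is shown below; the function name, the ±5% interpretation mentioned afterwards, and the assumption that H is an accelerance in (m/s²)/N are my own, not part of any particular calibration standard.

function dev = mass_calibration_deviation(H, f, m)
% Percentage deviation of a measured accelerance from the expected 1/m line.
%   H - measured accelerance (acceleration/force), complex vector
%   f - frequency vector in Hz (only used for plotting)
%   m - calibrated mass in kg
dev = 100*(m*abs(H) - 1);          % deviation from 1/m in percent
plot(f, dev), grid on
xlabel('Frequency (Hz)'), ylabel('Deviation from 1/m (%)')
end

Frequencies where the deviation stays within, say, ±5% indicate the range over which the particular accelerometer, mounting method, and measurement chain can be trusted.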

7.9 Measurement Microphones

Generally when measuring sound pressure, specially developed condenser microphones are used. These microphones exist in two types: externally polarized or prepolarized. The principle behind the condenser microphone is that there are two plates, as in a capacitor, across which a DC voltage is applied. One plate works as the microphone diaphragm, and when it experiences air fluctuations, and consequently vibrates, the capacitance between the plates varies, as the distance between the plates varies. This variation in capacitance gives rise to a charge change, which is converted to a voltage in the microphone’s preamplifier. The two microphone types differ primarily in that, for the externally polarized microphone, the polarization voltage, usually 200 V DC, must be applied externally by a power


supply. In the prepolarized microphone, the plates have already been charged during manufacturing, so that they do not require any power supply. Prepolarized microphones are less sensitive to noise but have the disadvantage that they can give faulty measurements, for example, when the diaphragm is pressed together with the charged plate. Externally polarized microphones are therefore most often preferred, since they either function correctly or not at all, which will be evident during calibration.

Microphones have a certain directivity, that is, they give different measurement values depending on the direction from which the sound wave hits the diaphragm. Most microphones used for acoustical measurements in Europe (according to IEC standards) compensate for the influence the microphone itself has on the surrounding sound field, so that the measured sound pressure matches as closely as possible the sound pressure at the measurement position without the microphone present. However, this compensation works only for sound waves directed straight onto the diaphragm. Microphones of this type, called free-field microphones, should be positioned directly toward the sound source. A different type of microphone, called the diffuse-field microphone, is usually used in the United States (according to ANSI standards). This type of microphone is designed to give correct sound pressure when the sound field is diffuse, i.e., when the sound comes from all directions.

Depending upon which frequency range shall be measured, and to some degree which sound pressure range, microphones of differing sizes are used; the smaller the microphone, the higher the frequencies which can be measured. The most common microphones are 1/2 inch in diameter, but 1, 1/4, and 1/8 inch microphones are also common.

7.10 Microphone Calibration

Microphones should always be calibrated before (and often after) every measurement. Microphone calibration is done by placing the microphone into a microphone calibrator, which is a tube containing an accurately known sound source. It is important that the ring sealing the microphone to the tube is tight; otherwise, the sound pressure inside the calibrator is inaccurate. Most calibrators have various adapters available for use with microphones of different diameters.

7.11 The Geophone

Geophones are inexpensive sensors very suitable for measuring vibrations, for example, for operational modal analysis (Brincker et al., 2010). Their mass is often in the range of 100–200 g, making them suitable mostly for measurements on large structures. A geophone consists of an electric coil suspended by a spring in a magnetic field created by permanent magnets. Geophones are passive sensors, i.e., the vibration signal is caused entirely by the induction voltage produced in the coil when it moves in the magnetic field. This results in extremely low noise in the sensor voltage, making the dynamic range as high as 140 dB, as shown in Brincker et al. (2010). An example of this high dynamic range is shown in Figure 17.2. The voltage of the geophone should be connected to the measurement instrument using a differential input coupling.



Figure 7.13 Frequency response (magnitude in [V/(m/s)] versus frequency in Hz) of a typical geophone with natural frequency 𝜔n = 2𝜋 ⋅ 4.5 [rad/s], relative damping 𝜁 = 0.5, and sensitivity Sg = 29 [V/(m/s)].

Due to the suspension of the coil, the geophone acts as a single degree of freedom system, with the mass of the coil, the stiffness of the suspension, and internal (high) damping. Typically, these values are chosen by the manufacturer such that the overshoot around the resonance is small, i.e., the damping is around 50%. The output voltage of the geophone is nonlinear (Hons and Stewart 2006), but at frequencies well above the natural frequency, the output is proportional to the vibration velocity. The frequency response in [V/(m/s)] is given by the following equation:

Hg(𝜔) = 𝜔² Sg / (𝜔n² − 𝜔² + j2𝜁𝜔𝜔n),    (7.18)

where Sg is the sensitivity in [V/(m/s)], 𝜔n is the undamped natural frequency in [rad/s], which for geophones suitable for our purposes often corresponds to natural frequencies in the range of 4–12 Hz, and the damping 𝜁 is typically high, as mentioned above. The magnitude of the frequency response of a typical geophone given by Equation (7.18) is shown in Figure 7.13, where the natural frequency is fn = 4.5 Hz (𝜔n = 2𝜋 ⋅ 4.5 rad/s), the damping is 𝜁 = 0.5, and the sensitivity is Sg = 29 [V/(m/s)]. If so desired, the output voltage of the geophone may easily be linearized by using fast Fourier transform (FFT)-based processing as described in Section 9.3.14, dividing the measured voltage spectrum by the frequency response in Equation (7.18) to produce the vibration velocity in [m/s]. Geophones may be simply calibrated using only a current source and a measurement system for dynamic signals (van Kann and Winterflood, 2005).
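Equation (7.18) is easily evaluated numerically; the MATLAB/Octave sketch below reproduces a response of the type shown in Figure 7.13 using the parameter values quoted there, and is only an illustration of the formula, not of any specific geophone model.

fn = 4.5;  wn = 2*pi*fn;            % natural frequency, Hz and rad/s
zeta = 0.5;                         % relative damping
Sg = 29;                            % sensitivity in V/(m/s)
f = logspace(-1, 2, 500);           % frequency axis 0.1-100 Hz
w = 2*pi*f;
Hg = (w.^2*Sg)./(wn^2 - w.^2 + 1i*2*zeta*w*wn);   % Equation (7.18)
loglog(f, abs(Hg)), grid on
xlabel('Frequency (Hz)'), ylabel('|H_g|  (V/(m/s))')

Dividing a measured voltage spectrum by Hg, as mentioned above, converts the geophone output into velocity; this is a direct application of the FFT-based processing referred to in Section 9.3.14.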

7.12 MEMS-based Sensors

In recent years, sensors based on MEMS technology (micro-electro-mechanical systems) have become increasingly popular for vibration measurements. While MEMS-based


sensors for vibration measurement performed poorly only a few years ago, today they are good enough for many purposes. The price of most MEMS-based vibration sensors is usually very competitive compared to piezoelectric accelerometers, although the latter still perform considerably better in terms of dynamic range, i.e., background noise level. Today there are good MEMS-based accelerometers, and MEMS-based geophones are also available. Due to the built-in MEMS-based accelerometers and gyros in today's smartphones, even phones can be used for vibration measurements. The near future will certainly offer new exciting opportunities for inexpensive vibration measurement.

7.13 Shakers for Structure Excitation

When accurate measurements of frequency responses are required, the best excitation method is often the electrodynamic shaker. Shakers of this design come in a variety of sizes, from coffee cup size up to several feet in diameter. Even larger electrodynamic shakers are used for vibration testing, and the largest shakers today are up to a few meters in diameter and can give an output force of several hundred thousand newtons. Electrodynamic shakers are available for a large variety of output forces and frequency ranges, and you are encouraged to check manufacturers' web pages for details. In addition to electrodynamic shakers, hydraulic shakers are also common for some applications where, for example, very low frequencies (even static loading) are required. In this section, we will limit our discussion to the type of electrodynamic shaker which is used for exciting structures for measurement of frequency responses, for example, for experimental modal analysis. The principle of the electrodynamic shaker is that of a moving electric coil in a magnetic field, as illustrated in the principle drawing in Figure 7.14. A cylinder with a coil is suspended in a magnetic field. In the smallest shakers, the magnetic field is created by a permanent magnet, whereas in larger shakers, there are coils fed


Figure 7.14 Principle drawing of an electrodynamic shaker. A moving coil in a magnetic field created by a permanent magnet is supplied with an AC current causing the coil to move. Larger electrodynamic shakers can have the permanent magnet replaced by a static coil fed by a DC current to create the magnetic field. Springs are supporting the coil statically, so it stays in an equilibrium position when no current is passing the coil. To ensure movement in only one direction, the cylinder with the coil and the head is often supported by bearings.


by DC current to create the magnetic field. The cylinder with the head, where the force output is taken, is supported by springs to hold the head in an equilibrium position, and often also by bearings to ensure movement of the head in one direction only. The force output, F(t), in [N] from an electrodynamic shaker depends on the magnetic flux density, B, in [Wb/m²], the length of the coil in the magnetic field, L, in [m], and the current in the coil, i(t), in [A], through the relationship

F(t) = B L i(t),    (7.19)

where the product BL can be seen as a shaker constant for a particular design. The electrical input impedance of the coil (i.e., of the shaker input) will be dependent on the mechanical load of the shaker. If optimum performance is required from an electrodynamic shaker, it must therefore be fed by a current controlled amplifier, which is an amplifier that gives a current output proportional to the voltage input of the amplifier. The current controlled amplifier is particularly important if the shaker is to be driven at very low frequencies. Due to the physics of the electrodynamic shaker, only dynamic force can be output. If a static force is required, elaborate attachments can be considered; however, a hydraulic shaker can be a better choice. The actual performance of a shaker in frequency response measurements is complicated by the stinger necessary to avoid transverse vibrations in the force sensor, see Figure 7.9. It is therefore often best to use a trial-and-error approach to find the best attachment of a shaker to a test structure. We will discuss the practical issues with using a shaker in Section 13.12.2.

7.14 Some Comments on Measurement Procedures

In the text so far in this chapter, some theoretical descriptions of various noise and vibration transducers have been given. It is necessary to discuss briefly how to use these sensors for the best measurement accuracy. Vibration measurements can be very tricky, and it is important to be a very critical measurement engineer to ensure the signals you measure are the actual vibration signals, and not artifacts due to bad sensor installation, bad cables, overload in the measurement equipment, etc., that can so easily ruin the best of measurements. It is therefore essential to obtain experience with the types of errors that often occur and to know what good results look like. The most important behavior of a good experimentalist is that he or she never trusts his or her measurement results. Another important thing in vibration measurements is to remember that you have no intuition of what is going on. Intuition should not be confused with knowledge learned by experience; experience is exactly what you want to acquire, but in order to get experience, you need to question and reevaluate your measurement results over and over again. What if you make a new measurement, do you get the same result? What if you change the accelerometer to another one, do you get the same result? What if you place an extra accelerometer next to the one you are measuring with, do you get the same result? (If not, you have mass loading! See Section 7.4.4.) In addition to all these questions about the sensor and its attachment, there is also the question of whether the measurement system is correctly set up. This will be discussed further in Chapter 11.


One measurement setup that I advocate you always keep within reach is the calibration mass mentioned in Section 7.8 and an impact hammer. Using the hardest tip of the impact hammer, you can check that your accelerometer gives a good frequency range for a particular measurement by investigating for how high frequencies the frequency response function (FRF) between acceleration and force stays a straight line, independent of frequency. By using this method, you verify that the accelerometer is still working properly throughout its operating frequency range. If an accelerometer is dropped on, e.g., a hard floor, it often does not break entirely, but instead some parts inside it can become loose, which causes “resonances” inside it at certain frequencies. Calibrating such an accelerometer on a single-frequency calibrator may not reveal this problem, but a mass calibration ensures you find it. Figure 7.15 shows the result of mass calibration of an accelerometer mounted with three different mounting methods: wax, hot glue, and super glue (cyanoacrylate adhesive). The wax mounting was repeated twice: once with a thin layer covering the entire accelerometer base, and once with a thick layer to “simulate” a sloppy test engineer. As can be seen in the plot, the thick layer of wax limits the usable frequency range to approximately 1.3 kHz, whereas a thin layer wax and super glue both perform within the specified ± 5% limits up to 5 kHz. Note that the frequencies depend on the mass of the accelerometer, so the results cannot be extrapolated to any other sensor. The procedure described here is a good


Figure 7.15 Result of mass calibration with an accelerometer with a mass of approximately 12 g, on a mass of 1 kg. The accelerometer was mounted with three different mounting methods, of which the first method, wax, was used twice: once with a thin layer (solid plot) and once with a thick layer (dashed plot). The figure also shows the results for hot glue (dash-dot) and super glue (dotted). The sensitivity of each measurement was adjusted so that the value at 159.2 Hz was correct. The solid lines at 0.95 and 1.05 indicate ±5% accuracy limits. As can be seen in the plot, the thin layer of wax and super glue both give results within the accuracy limits up to 5 kHz, whereas hot glue is slightly worse and the thick layer of wax made the accuracy poor in comparison. The example shows that you need to be careful in order to get good results at higher frequencies.


procedure for investigating the frequency range of proper operation for a combination of a particular accelerometer and mounting method. You should, however, remember to include a frequency “margin,” because when the accelerometer is mounted on an actual structure, which is more flexible than the rigid calibration mass, the usable frequency range of the accelerometer is reduced. I finally want to encourage you to read the many excellent instructions provided by the manufacturers of vibration equipment. They are the experts on their own products and provide many important hints on how to best use their products. But – always remember to be critical about your measurements, no matter what precautions you have taken.

7.15 Problems

Many of the problems following are supported by the accompanying ABRAVIBE toolbox for MATLAB/Octave and further examples which can be downloaded with the toolbox. If you have not already done so, please read Section 1.6. ABRAVIBE is completely free and can be downloaded from www.abravibe.com, together with example files and other material complementing this book, including a solutions manual for all book problems.

Problem 7.1 A certain transducer manufacturer specifies a time constant of 1000 s for an IEPE accelerometer that you are considering for a measurement. Your measurement system manufacturer specifies a lower cutoff frequency of 2 Hz for the IEPE current supply. What is the total time constant if you are using the specified accelerometer?

Problem 7.2 If you want to be able to measure harmonic vibrations within 5% accuracy, and the accelerometer and measurement errors are assumed to be negligible, what is the lowest frequency you can measure with this accuracy using the data from Problem 7.1?

Problem 7.3 Assume you are planning on using an IEPE accelerometer with a sensitivity of 100 mV/g for measurements of accelerations up to 300 m/s². You are planning to use 50 m cable, and the cable manufacturer specifies a cable capacitance of 94 pF/m and your measurement instrument includes a 4 mA current supply for IEPE sensors. What is the maximum frequency you can measure with this cable length and current supply?

Problem 7.4 This is a recommended exercise rather than a problem, but we place it here for lack of a better place. Using the measurement system of your choice, use an accelerometer and an impact hammer from your supply of sensors (provided you have access to all this, of course). If you do not have a calibration mass, make one from an approx. 60 mm long, 40 mm diameter rod of steel or similar material (it should be a hard material so you do not get a deformation when you hit it with the hard tip of your impact hammer, so aluminum is not a good choice, for example). Weigh the mass and write it with a permanent marker on the mass. Then attach your accelerometer using your choice of method; wax, glue, etc. Make an impact measurement (you may need to read appropriate parts of Chapter 13) and make sure you get a coherence of very near unity. Store the frequency response, and thereafter detach the accelerometer, clean it and the calibration mass. Attach the accelerometer again, using the same method as the first time. Make a new measurement and compare (overlay) with the previous measurement. Do you get agreement between the measurements?


With what accuracy? Remember to try this measurement regularly with the accelerometers and mounting techniques you use, to ensure that the accelerometers, your measurement equipment, etc., are operating normally.

References

Brincker R, Brandt A and Bolton R 2010 Calibration and processing of geophone signals for structural vibration measurements. Proceedings of the 28th International Modal Analysis Conference, Jacksonville, FL, Society for Experimental Mechanics.
Håkansson B and Carlsson P 1987 Bias errors in mechanical impedance data obtained with impedance heads. Journal of Sound & Vibration 113(1), 173–183.
Hons MS and Stewart RR 2006 Transfer functions of geophones and accelerometers and their effects on frequency content and wavelets. Technical report, CREWES, www.crewes.org.
IEEE 1451.4 2004 A Smart Transducer Interface for Sensors and Actuators – Mixed-mode Communication Protocols and Transducer Electronic Data Sheet (TEDS) Formats. IEEE Standards Association.
Serridge M and Licht T 1986 Piezoelectric Accelerometer and Vibration Preamplifier Handbook. Brüel & Kjær, Nærum, Denmark.
van Kann F and Winterflood J 2005 Simple method for absolute calibration of geophones, seismometers, and other inertial vibration sensors. Review of Scientific Instruments 76(3), 034501.


8 Frequency Analysis Theory

Frequency analysis is a central part of noise and vibration analysis because of the properties of linear systems which we described in Chapter 2. It is a complicated subject, incorporating Fourier transform theory, statistics, and digital signal processing. In order to get a comprehensive understanding of this important topic, we will dedicate several chapters to it. In this chapter, we start with a discussion of frequency analysis by introducing some theoretical aspects of the subject. In Chapter 9, we describe the discrete Fourier transform, which is the most important tool for estimating spectra. Then we are ready to spend Chapter 10 on a discussion of how to experimentally estimate the theoretical spectra defined in this chapter. When we analyze a signal with respect to its frequency content, it turns out there are three different types (classes) of signals which we must theoretically and practically handle in different ways. These signals are

● periodic signals, e.g., from rotating machines;
● random signals, e.g., vibrations in a car caused by the road–tire interaction;
● transient signals, e.g., shocks arising when a train passes rail joints.

Each of these three types of signals, or signal classes, has spectra of quite different nature, and therefore they must be treated separately. We will start with periodic signals, and then move on to random and transient signals.

8.1 Periodic Signals – The Fourier Series

Jean Baptiste Joseph Fourier, who was a French scientist around the start of the nineteenth century, discovered that all periodic signals can be split up into a (potentially infinite) sum of sinusoids, where each sinusoid has its individual amplitude and phase, see Figure 8.1. The frequencies are integer harmonics of the fundamental frequency 1/Tp, i.e., the only frequencies present in the signal are 1/Tp, 2/Tp, 3/Tp, etc., where Tp is the period of the signal. A periodic signal thus has the special property that it contains only (sinusoids with) discrete frequencies. The mathematical theory of Fourier series states that every periodic signal xp(t) can be written as follows:

xp(t) = a0/2 + Σ_{k=1}^{∞} ak cos(2𝜋k t/Tp) + Σ_{k=1}^{∞} bk sin(2𝜋k t/Tp),    (8.1)


Figure 8.1 Using the theory of Fourier series, every periodic signal can be split into a (potentially infinite) number of sinusoidal signals, each with individual amplitude and phase. In the figure a periodic signal is shown which consists of the sum of the three frequencies 1/Tp, 2/Tp, and 3/Tp, where Tp is the period of the signal.

where the coefficients ak and bk can be calculated by

ak = (2/Tp) ∫_{t1}^{t1+Tp} xp(t) cos(2𝜋k t/Tp) dt,   for k = 0, 1, 2, …,
bk = (2/Tp) ∫_{t1}^{t1+Tp} xp(t) sin(2𝜋k t/Tp) dt,   for k = 1, 2, 3, …,    (8.2)

where the integration occurs over an arbitrary period of xp(t). To make Equation (8.1) easier to interpret physically, by simple trigonometry, it can be rewritten as one sinusoid at each frequency, with individual phase angle, 𝜙k, for each sinusoid. We then obtain the alternative expression:

xp(t) = a0/2 + Σ_{k=1}^{∞} a′k cos(2𝜋k t/Tp − 𝜙k),    (8.3)

where a0 is the same as in Equation (8.1). Comparing Equations (8.1) and (8.3), we see that the coefficients in Equation (8.3) can be obtained from ak and bk in Equation (8.1) by

a′k = √(ak² + bk²),
𝜙k = arctan(bk/ak).    (8.4)

By making use of complex coefficients, ck, instead of the real ak and bk, the Fourier series can alternatively be written as a complex sum as in Equation (8.5):

xp(t) = Σ_{k=−∞}^{∞} ck e^{j2𝜋k t/Tp},    (8.5)

where the coefficients ck are given by

c0 = a0/2,
ck = (1/2)(ak − jbk) = (1/Tp) ∫_{t1}^{t1+Tp} xp(t) e^{−j2𝜋k t/Tp} dt,   for k > 0,    (8.6)

and the integration occurs over an arbitrary period of the signal xp (t) as before. Note in Equation (8.5) that the summation occurs over both positive and negative frequencies, i.e., k = 0, ±1, ±2, …. Since the left-hand side of the equation is real (we assume that the signal xp is an ordinary, real signal), the right-hand side must also be real. Since the cosine function


is an even function and the sine function is odd, the coefficients ck must consequently comply with

Re[c−k] = Re[ck],
Im[c−k] = −Im[ck],   i.e.,   c−k = ck*,    (8.7)

for all k ≠ 0 and where ∗ represents complex conjugation. Hence, the real part of the coefficients ck is even and the imaginary part is odd. For real-life signals xp , which are necessarily band limited (because of energy limitations), the Fourier series summation can be done over a smaller frequency interval, k = 0, ±1, ±2, … , ±N, where the coefficients for k > N are negligible when N is sufficiently high. Note also that each coefficient ck is half of the signal’s amplitude at the frequency k, which is evident from Equation (8.6). Thus, the fact that we introduce negative frequencies gives the result that the physical frequency content is split, in a symmetrical way (or antisymmetrical for the imaginary part) so that half the true amplitude of each frequency component is located at its positive frequency, and half at the corresponding (virtual) negative frequency. This is similar to the properties of the continuous Fourier transform, as we saw in Chapter 2.
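To connect Equations (8.2) and (8.6) to something computable, the MATLAB/Octave sketch below approximates the complex Fourier coefficients of a sampled square wave by numerical integration over one period; it is a bare-bones illustration with an arbitrary example signal, not the DFT machinery introduced in Chapter 9.

Tp = 1;                                   % period in s
N  = 1000;                                % samples per period
t  = (0:N-1)'*Tp/N;                       % one period of the time axis
xp = sign(sin(2*pi*t/Tp));                % example periodic signal: square wave
ck = zeros(1,6);
for k = 0:5
    % Equation (8.6), with the integral replaced by a sum times dt = Tp/N
    ck(k+1) = (1/Tp)*sum(xp.*exp(-1i*2*pi*k*t/Tp))*(Tp/N);
end
fprintf('|c_k| for k = 0..5:\n'); disp(abs(ck))

For the square wave the even coefficients come out (numerically) close to zero and the odd ones close to 2/(𝜋k), and each |ck| is indeed half of the amplitude of the corresponding sinusoid, as discussed above.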

8.2 Spectra of Periodic Signals

To describe a periodic signal, either a linear spectrum or a power spectrum is used in practice, as we will define in Section 10.2. These two practical spectrum estimators are closely related to the most intuitive spectrum for periodic signals – the amplitude spectrum of Figure 8.2 – which basically consists of a specification of the coefficients for amplitude and phase angle according to Equation (8.3).



Figure 8.2 Amplitude spectrum of a periodic signal. The spectrum contains only the discrete frequencies 1∕Tp , 2∕Tp , 3∕Tp , etc., where Tp is the signal period.


We will later see that when estimating spectra for periodic signals, this spectrum can in many cases not simply be computed directly, since superimposed noise requires averaging, see Section 10.2.2. Therefore, the so-called power spectrum is usually available in noise and vibration analysis software. Using the term “power spectrum” is not recommended since its name is too easily mistaken for the power spectral density (PSD) of random signals, see Section 8.3. The power spectrum is, however, always an intermediate result in the averaging process and (unfortunately) in some systems it is the only available spectrum for periodic signals. This spectrum generally consists of the squared root mean square (RMS) value of each sinusoid in the periodic signal and is obtained by squaring the coefficients a′k in Equation (8.4) and dividing by 2. The phase plot in Figure 8.2 is thus missing in the power spectrum and linear spectrum, as there is no phase reference available. Estimating phase spectra will be discussed in Section 10.2.3. The linear spectrum, which is the recommended spectrum for periodic signals, usually consists of the RMS levels of the frequency components in the periodic signal, and is thus the square root of the power spectrum.
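As a minimal numerical illustration of the scaling just described (my own example, not a definition from the book), a single sinusoid of amplitude A contributes the squared RMS value A²/2 to the power spectrum and the RMS value A/√2 to the linear spectrum:

A = 2;                                               % example amplitude
power_spectrum_value  = A^2/2;                       % squared RMS of the sinusoid
linear_spectrum_value = sqrt(power_spectrum_value);  % RMS value, A/sqrt(2)
fprintf('Power spectrum value: %.3f, linear spectrum value: %.3f\n', ...
    power_spectrum_value, linear_spectrum_value)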

8.2.1 Frequency and Time To understand the difference between the time and the frequency domain, i.e., the information that can be retrieved from the different domains, we can study the illustration in Figure 8.3. In the time domain, we see the sum of all included sine waves, while in the frequency domain, we see each isolated sine wave as a spectral component. Therefore, if we are interested in, for example, the time signal’s minimum or maximum value, we must look in the time domain. However, if we want to see which spectral components exist in the signal, we should rather look in the frequency domain. Remember that it is the same signal we see in both cases, that is, all signal information is contained in both domains. Various properties of this information, however, can be more easily identified in one or the other domain.

Frequency

f

Amplitude

Time

t

Figure 8.3 The time and frequency domains can be regarded as the signal appearance from different angles. Both planes contain all the signal information (actually, for this to be completely accurate, the phase spectrum must also be included along with the above amplitude spectrum).

8.3 Random Processes

8.3

Random Processes

As discussed in Chapter 4, random processes are signals that vary randomly with time. They do, however, many times have constant average characteristics such as mean value, RMS value, and spectrum, and are then called stationary signals. As discussed in Chapter 4, most real-life stationary signals are also ergodic, for which ensemble averages can be replaced by time averages. The signals we will study in this section are assumed to be both stationary and ergodic.

8.3.1

Spectra of Random Processes

As opposed to periodic signals, random signals have continuous spectra, that is, they contain all frequencies and not only discrete frequencies. Hence, we cannot display the amplitude or RMS value of each frequency, but we must instead describe the signal with a density-type of spectrum (compare, for example, with discrete and continuous probability )2 ( functions). The unit for noise spectra is therefore, for example, m∕s2 ∕Hz if the signal is an acceleration measured in m/s2 . This spectrum is called power spectral density, PSD. An example is shown in Figure 8.4. The theoretical derivation of the PSD usually involves the autocorrelation function. Alternately, the PSD could be defined as a spectral density with an area under the PSD equal to the mean-square value of the time signal in the corresponding frequency range. In most standard textbooks on random signal analysis (Bendat and Piersol, 2000; Newland, 2005; Wirsching et al., 1995), it is shown that the double-sided (or two-sided) autospectral density,

Acceleration PSD [(m/s2)2/Hz]

100

10−1

10−2

10−3

0

500

1000 Frequency [Hz]

1500

2000

Figure 8.4 Power spectral density, PSD, of a random acceleration signal. The spectrum is character)2 ( ized by being continuous in frequency and is a density function, that is, the units are m∕s2 ∕Hz if 2 acceleration is measured in m∕s .

187

188

8 Frequency Analysis Theory

or power spectral density, PSD, denoted Sxx (f ), is the forward Fourier transform of the autocorrelation function, i.e., [ ] Sxx (f ) =  Rxx (𝜏) =



Rxx (𝜏)e−j2𝜋f 𝜏 d𝜏.



(8.8)

−∞

Analogous with the autospectral density, for an input signal x(t), producing an output signal y(t) through some arbitrary system, the double-sided cross-spectral density, CSD, Syx (f ) is the forward Fourier transform of the cross-correlation, i.e., [ ] Syx (f ) =  Ryx (𝜏) =





Ryx (𝜏)e−j2𝜋f 𝜏 d𝜏.

(8.9)

−∞

The relations in Equations (8.8) and (8.9) are often called the Wiener–Khinchin relations, (or Khintchine) after the mathematicians who (independently of each other) are acknowledged to have developed these relationships in Wiener (1930) and Khintchine (1934), respectively. In recent years, however, it has become evident that Albert Einstein had already mentioned this relationship in a paper a while before (Einstein 1914). The negative frequencies in the Fourier transform has the property that half of the physical frequency content appears at positive frequencies, and the other half at negative frequencies. Therefore, it makes sense to define the physically interpretable single-sided (or one-sided) spectral densities, autospectral density, denoted by Gxx (f ), and CSD, denoted by Gyx (f ), as follows: Gxx (f ) = 2Sxx (f )

for f > 0, (8.10)

Gxx (0) = Sxx (0), and Gyx (f ) = 2Syx (f )

for f > 0,

Gyx (0) = Syx (0),

(8.11)

respectively. The spectral densities at zero frequency do not repeat and therefore have to be treated separately. In Section 4.2.12, we established that the autocorrelation is a real, even function, whereas the cross-correlation is real and has the property that Ryx(τ) = Rxy(−τ). For the double-sided spectral densities, Sxx(f) and Syx(f), this property of the cross-correlation and the properties of the Fourier transform from Section 2.7 (since the autocorrelation is the cross-correlation of the signal x with itself) lead to the fact that

Sxx(−f) = Sxx*(f) = Sxx(f),   (8.12)

i.e., the autospectral density is real and even, because the autocorrelation is real and even. It can also be shown that Sxx(f) > 0 for all frequencies f, because the integral between any two frequencies is the mean-square value of the signal between those two frequencies, clearly a positive number, see also Section 8.5. For the double-sided CSD, it follows that

Syx(−f) = Syx*(f) = Sxy(f),   (8.13)

i.e., the CSD Syx is a complex function with an even real part, and an odd imaginary part, see Problem 8.3.


For the single-sided autospectral density, we have that Gxx(f) is real, and for the single-sided CSD, we have

Gyx(f) = Gxy*(f),   (8.14)

i.e., the CSD of the reversed signals (we change y to be input and x to be output instead of the usual opposite situation) leads to a complex conjugate. If we consider the phase of Gyx and Gxy, this means that

∠Gyx = −∠Gxy,   (8.15)

which is intuitive, because if x leads y, then y lags x.

8.4 Transient Signals

In addition to the previously mentioned periodic and random signals, we have transient signals. These signals have continuous spectra just like the random signals. However, unlike random signals, transient signals do not continue indefinitely. It is therefore not possible to scale their spectra by the power in the signal, because power is energy per unit time. Instead, transient signals are generally scaled by their energy, and thus a spectrum of a measured transient acceleration, for example, can have units of (m/s²)² s/Hz. The spectrum most commonly used for transient signals is called the energy spectral density, ESD. Since energy is power times time, we obtain the definition of the ESD:

ESD = T · PSD,   (8.16)

where T is the time scale of the PSD, i.e., the time it takes to collect one time block if using the fast Fourier transform (FFT), see Section 10.3. The ESD is interpreted such that the area under the curve corresponds to the energy in the signal. We will discuss spectrum estimation of transient signals in more detail in Section 10.5.

An alternative linear, and therefore perhaps more intuitive, spectrum of a transient signal is obtained by using the continuous Fourier transform without further scaling. The transient spectrum of a signal x(t) is consequently defined as follows:

Tx(f) = F[x(t)] = ∫_{−∞}^{∞} x(t) e^{−j2πft} dt,   (8.17)

where F denotes the forward Fourier transform. The transient spectrum Tx(f) in Equation (8.17) is of course a double-sided spectrum, and we will return to a discrete approximation of this spectrum in Section 10.5. The reason there is only a single x in the subscript is that there is no square involved, as there is in, for example, the PSD.

8.5 Interpretation of Spectra

We now need to discuss what can be interpreted from the different spectra presented in the preceding sections. For a periodic signal, this is relatively simple, as it consists of a sum of individual sinusoids. By knowing what these sinusoids are, i.e., their amplitudes, phase


angles, and frequencies, we can recreate the measured signal at any specific time, if we so choose. We could, in principle, tabulate the amplitudes (or RMS values), phase angles, and frequencies of each frequency component in the periodic signal. We may also want to know, for example, the RMS value of the signal, in order to know how much power the signal generates. This can be done using Parseval's theorem, see Table 2.2. For a periodic signal, which has a discrete spectrum, we obtain its total RMS value by summing the included sinusoids using

xRMS = √( Σ_k Rxk² ),   (8.18)

where Rxk is the RMS value of each sinusoid for k = 1, 2, 3, …. The RMS value of a signal consisting of a number of sinusoids is consequently equal to the square root of the sum of the squared RMS values. This result could also be explained by noting that sinusoids of different frequencies are orthogonal and can therefore be summed like vectors (using Pythagoras' theorem).

For a random signal, we cannot interpret the spectrum in the same way. As we have stated earlier, the PSD of a random signal is a continuous function, which makes it impossible to add up individual frequency components. Instead, as the PSD is a density function, the correct interpretation is to compute the area under the PSD in a specific frequency range, which then is the square of the RMS, i.e., the mean-square value of the signal in that range, see Figure 8.5. To calculate the random signal's RMS value from the PSD, we use

xRMS = √( ∫ Gxx(f) df ) = √(area under the PSD curve).   (8.19)

In a similar fashion, we can determine the energy in a transient signal by calculating the area under the ESD curve.
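As a minimal MATLAB/Octave sketch of Equations (8.18) and (8.19) (the sampling frequency, blocksize, test signal, and the simple periodogram-style PSD scaling below are arbitrary example choices, not a recommended estimator), the RMS value computed in the time domain can be compared with the square root of the area under a single-sided PSD and with the RMS value obtained from the sinusoid amplitudes alone:

% Minimal sketch: RMS value from a spectrum, cf. Eqs. (8.18)-(8.19).
fs = 4096;                        % sampling frequency in Hz (assumed example value)
N  = 8192;                        % number of samples (assumed example value)
t  = (0:N-1)'/fs;
x  = 2*sin(2*pi*100*t) + 1*sin(2*pi*400*t) + 0.5*randn(N,1);

xrms_time = sqrt(mean(x.^2));     % RMS value in the time domain

% Simple single-sided PSD with periodogram scaling, units of x^2/Hz
X   = fft(x);
Pxx = abs(X).^2/(fs*N);           % double-sided PSD
Gxx = Pxx(1:N/2+1);
Gxx(2:end-1) = 2*Gxx(2:end-1);    % single-sided
df  = fs/N;

xrms_psd   = sqrt(sum(Gxx)*df);                      % Eq. (8.19), area under the PSD
xrms_sines = sqrt((2/sqrt(2))^2 + (1/sqrt(2))^2);    % Eq. (8.18), the two sinusoids only

fprintf('RMS: time %.3f, PSD area %.3f, sinusoids alone %.3f\n', ...
    xrms_time, xrms_psd, xrms_sines);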


Figure 8.5 From a PSD the RMS value can be calculated as the square root of the area under the curve. In the figure, the area between 600 and 1400 Hz has been highlighted to indicate the area in this frequency range as an example.


Figure 8.6 Cumulated mean-square value calculated for the spectral density in Figure 8.4.

Spectral density functions and energy density functions are difficult to interpret directly from a plot, since it is the area under the curve that is interpreted as power or energy. A suitable function for easier interpretation of spectral density functions is therefore the cumulated function. For a PSD, one can thus build the cumulated mean-square value, Pms, which is calculated as follows:

Pms(f) = ∫_0^f Gxx(u) du.   (8.20)

Note the similarity between this function and the statistical distribution function, which is equal to the integral of the probability density function (or the sum if we have a discrete probability distribution). The function Pms(f) is consequently equal to the mean-square value in a frequency range from the lowest frequency in the spectrum up to the frequency value f. In Figure 8.6, a plot of the cumulated mean-square of the same acceleration signal used previously in Figure 8.4 is shown. In a plot of the function Pms(f), one can easily calculate the RMS value in any frequency interval, for example, f ∈ (f1, f2), by calculating

xRMS(f1, f2) = √( Pms(f2) − Pms(f1) ).   (8.21)
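A compact MATLAB/Octave sketch of Equations (8.20) and (8.21) follows; the test signal, the simple periodogram-style PSD, and the band limits 600–1400 Hz are arbitrary assumptions chosen to mimic Figure 8.5:

% Minimal sketch: cumulated mean-square value and band-limited RMS, Eqs. (8.20)-(8.21).
fs = 4096;  N = 8192;  df = fs/N;          % assumed example values
t  = (0:N-1)'/fs;
x  = 2*sin(2*pi*700*t) + 0.5*randn(N,1);

X   = fft(x);
Gxx = 2*abs(X(1:N/2+1)).^2/(fs*N);         % simple single-sided PSD in x^2/Hz
f   = (0:N/2)'*df;

Pms = cumtrapz(f, Gxx);                    % cumulated mean-square value, Eq. (8.20)

f1 = 600;  f2 = 1400;                      % band limits, cf. Figure 8.5
xrms_band = sqrt(interp1(f, Pms, f2) - interp1(f, Pms, f1));   % Eq. (8.21)

semilogy(f(2:end), Pms(2:end)), xlabel('Frequency [Hz]'), ylabel('Cumulated mean-square')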

8.6 Chapter Summary

In this chapter, we have introduced some theoretical spectra for the three signal classes:
● periodic signals,
● random signals, and
● transient signals.


We have noted that, whereas periodic signals have discrete spectra, i.e., only certain frequencies exist in the signal, random and transient signals have continuous spectra. The preferred theoretical spectra we use for the three signal classes are
● the linear spectrum, for periodic signals,
● the PSD, for random signals, and
● the transient spectrum, for transient signals.

We have also reviewed the relations between the correlation functions and spectral densities for random signals, given by the so-called Wiener–Khinchin relations, which state that each spectral density function (auto and cross) is the forward Fourier transform of the respective correlation function.

8.7 Problems

Many of the following problems are supported by the accompanying ABRAVIBE toolbox for MATLAB/Octave and further examples which can be downloaded with the toolbox. If you have not already done so, please read Section 1.6. ABRAVIBE is completely free and can be downloaded from www.abravibe.com, together with example files and other material complementing this book, including a solutions manual for all book problems.

Problem 8.1 Determine the Fourier series coefficients a′k and φk according to Equation (8.3) of a square wave signal defined by

xs(t) = 5 sgn[sin(2πf1t)],

where the frequency f1 = 20 Hz. Plot the signal and an approximation of the signal using the first 5, 10, and 20 Fourier coefficients, respectively. Study what happens when more harmonics are included in the Fourier series. (This is the so-called Gibbs phenomenon, which states that the overshoots do not disappear regardless of how many harmonics are included. However, the approximation naturally gets better and better.)

Problem 8.2 Determine the Fourier series coefficients of a triangle wave signal defined by the plot below.

[Plot for Problem 8.2: a triangle wave with amplitude between −1 and 1 over the time interval 0–1 s.]


Problem 8.3 Prove the properties for autospectral density in Equation (8.12) and for CSD in Equation (8.13).

References

Bendat J and Piersol AG 2000 Random Data: Analysis and Measurement Procedures, 3rd edn. Wiley Interscience.
Einstein A 1914 Méthode pour la détermination de valeurs statistiques d'observations concernant des grandeurs soumises à des fluctuations irrégulières (method for the determination of the statistical values of observations concerning quantities subject to irregular fluctuations). Archives des Sciences Physiques et Naturelles 37(4), 254–256.
Khintchine A 1934 Korrelationstheorie der stationären stochastischen Prozesse. Mathematische Annalen 109(1), 604–615.
Newland DE 2005 An Introduction to Random Vibrations, Spectral, and Wavelet Analysis, 3rd edn. Dover Publications Inc.
Wiener N 1930 Generalized harmonic analysis. Acta Mathematica 55(1), 117–258.
Wirsching PH, Paez TL and Ortiz H 1995 Random Vibrations: Theory and Practice. Wiley Interscience.


9 Experimental Frequency Analysis

In practice, frequency analysis in the field of noise and vibration analysis is generally based on the discrete Fourier transform. In this chapter, we will present the fundamental techniques for estimating a spectrum from measured samples of a signal. In particular, we will go into some depth on the discrete Fourier transform (DFT) and its properties. In addition, we will discuss some alternative means of estimating the frequency content of signals, e.g., octave filter analysis, which was introduced in Section 3.3.4. The actual estimators for spectrum estimation using the DFT will be discussed in Chapter 10.

9.1 Frequency Analysis Principles

When investigating the spectral content of a signal, there are many methods available. First of all, there are two classes of frequency (or spectral) analysis. Nonparametric techniques do not require any a priori information about the signal. This is probably what you would normally consider as frequency analysis – you want to know the frequency content of a signal and consequently apply some frequency analysis method to obtain the spectrum. The most common nonparametric methods available are DFT/FFT, octave filters, and the relatively new so-called wavelet analysis technique.

Parametric techniques, on the other hand, are methods that use some a priori information about the signal. For example, such information may be that the signal is periodic and consists of 10 sinusoids. There are many parametric techniques known in signal processing engineering, for example maximum entropy, different forms of ARMA-based (Auto Regressive Moving Average) techniques, and the relatively new MUSIC (multiple signal classification) method, which can all be found in many standard textbooks on signal processing, for example, in Proakis and Manolakis (2006). Experience in the noise and vibration field has unfortunately shown that the parametric methods, so promising in theory, usually fail to deliver reliable results. Except in very rare special cases, they should be avoided, as the risk of misinterpreting the results is very high. The reason these methods usually fail when applied to noise and vibration signals may be related to the fact that vibration data can usually not be explained by simple models. Vibrations are rather caused by a variety of sources, including spurious ones that we do not immediately think about, but which are still present in the data. An accelerometer mounted on the engine of a car, for example, does not contain vibrations solely


produced by the engine, but there will also be road-induced vibrations, etc. Parametric methods in general only work for data which may be well explained by the model used for describing them. Cases where parametric techniques are indeed used successfully in the noise and vibration field are, for example, the “Vold–Kalman” adaptive filters for order tracking that we will discuss in Chapter 12, and for tracking sine waves during sine sweep testing in vibration control applications. In general, the nonparametric techniques are superior because of their simplicity and reliability. In this chapter, we will therefore focus our attention on these methods.

9.1.1 Nonparametric Frequency Analysis

The nonparametric methods most common for analysis of noise and vibration signals are DFT/FFT analysis, which we will introduce in this chapter, and octave band analysis, which we described in Section 3.3.4. In the latter case, the frequency content is obtained by passing the time data through a series of parallel filters, after which the RMS value of each frequency band is measured. Those two methods will be discussed further in the remainder of this chapter. There are other nonparametric methods for spectrum estimation, for example, wavelets (Newland, 2005). Those methods will not be discussed here. For the particular feature of constant relative bandwidth analysis, which is one of the main benefits of wavelets, octave and fractional octave band analysis are already well established in noise and vibration analysis and offer the same principal advantages. Also, see Section 10.3.6 for a method using the Fast Fourier Transform (FFT) to obtain constant relative bandwidth spectra.

All nonparametric methods have a common principle; they, in effect, calculate the RMS value of the signal in a specific frequency band, during a certain time, the integration time. This principle is illustrated in Figure 9.1. It is important to realize that, given that we do not use any a priori information about the spectrum of the signal, this is the only possible principle. All nonparametric methods, also for example wavelet analysis and the Wigner–Ville transform, etc., are bound by the principle in Figure 9.1. To see what qualities this type of spectrum estimation has, we can study how an


Figure 9.1 Principle method for measuring spectrum with nonparametric techniques. An adjustable bandpass filter is stepped through a number of center frequencies, and for each frequency band the RMS value is measured with a voltmeter. All (nonparametric) frequency analysis methods are in principle using this technique, whether called FFT (DFT), octave analysis, or wavelet analysis, etc.


RMS value can be calculated for a bandpass-filtered signal. The RMS value, xRMS, of a sampled signal x(n) is calculated using the formula

xRMS = √( (1/N) Σ_{n=1}^{N} xn² ).   (9.1)

We recall from Section 4.2.7 that for a random signal, the RMS value has a normalized random error given by

εr = 1 / (2 √(BT)),   (9.2)

where B is the signal's bandwidth and T is the measurement time, i.e., T = NΔt if N is the number of samples in the RMS computation. The bandwidth–time product (see also Section 4.2.7) is thus central to spectrum estimates. The finer the frequency resolution we require, the longer the measurement time we must use to obtain a particular maximum random error. This compromise follows naturally from the fact that, loosely speaking, frequency is change per time unit. It can be shown that for all spectrum estimation methods the bandwidth–time product is bounded by BT ≥ 1/(4π), see, e.g., Bendat and Piersol (2010).
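The following MATLAB/Octave lines sketch how Equations (9.1) and (9.2) can be evaluated for a block of broadband noise; the bandwidth, measurement time, and noise signal are arbitrary example assumptions:

% Minimal sketch of Eqs. (9.1) and (9.2).
fs = 1000;  T = 2;  N = round(T*fs);    % assumed sampling rate and measurement time
x  = randn(N,1);                        % broadband noise occupying roughly B = fs/2
xrms = sqrt(sum(x.^2)/N);               % RMS value, Eq. (9.1)
B  = fs/2;
er = 1/(2*sqrt(B*T));                   % normalized random error of the RMS estimate, Eq. (9.2)
fprintf('RMS = %.3f, expected normalized random error = %.4f\n', xrms, er)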

9.2 Octave and Third-Octave Band Spectra

Prior to our current computer age, the way to measure a spectrum was to use an adjustable bandpass filter and an AC voltmeter, as was shown in Figure 9.1. To be able to compare spectra from different measurements, the frequencies and bandwidths used were standardized at an early stage, as we discussed in Section 3.3.4. At that time, it was natural to choose a constant relative bandwidth so that the bandwidth increased proportionally with the center frequency (remember that, for example, the resonance bandwidth of modes on a structure is relative to the resonance frequency, as we saw in Chapter 5). Thus, if we denote the center frequency by fm and the bandwidth of the filter by B, we have

B / fm = constant,   (9.3)

where the standardized center frequencies and bandwidths were discussed in Section 3.3.4.

9.2.1 Time Constants

If the signal we measure is nonstationary, the RMS level of the bandpass-filtered signal will, of course, vary as a function of time. Due to the bandpass filter's time constant there is, however, a limit to how fast the filter output signal can change when the input varies. The filter time constant describes how quickly the signal rises to (1 − e⁻¹), or about 63%, of the final value when the level of the input signal is suddenly altered (a step input). For a bandpass filter with bandwidth B, the time constant, τ, is approximately

τ ≈ 1 / B.   (9.4)


For octave and third-octave band measurements with constant relative bandwidth, the different frequency bands consequently have different time constants, with longer time constants for lower frequency bands. On the right-hand side in Figure 9.1, a typical octave band spectrum for a vibration signal is shown. Note that to the far right the so-called total signal level, that is, the signal’s RMS value (within the whole frequency range) is shown. This value is usually shown on either side of the octave bands.
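As a brief illustration of Equations (9.3) and (9.4), the MATLAB/Octave sketch below computes bandwidths and approximate time constants for a few third-octave bands; the base-2 band definition used here is an assumption, and the standardized nominal center frequencies differ slightly from the computed values:

% Minimal sketch: constant relative bandwidth and time constants for third-octave bands.
fm  = 1000*2.^((-6:6)/3);            % third-octave center frequencies around 1 kHz (base-2)
B   = fm*(2^(1/6) - 2^(-1/6));       % bandwidths; note that B./fm is constant, Eq. (9.3)
tau = 1./B;                          % approximate filter time constants, Eq. (9.4)
disp([fm(:) B(:) tau(:)])            % columns: fm [Hz], B [Hz], tau [s]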

9.2.2 Real-time Versus Serial Measurements

To measure an acoustic signal's spectral contents using octave bands, in the simplest case a regular sound level meter with an attached filter bank can be used. With this technique, a set of adjustable filters is used which, often automatically, steps through the desired frequency range and stores the result for each frequency band. This type of measurement is called serial, since the frequency bands are measured one after the other, and it was common with old sound level meters used for acoustic measurements. Naturally, this method only works when the signal (sound) is stationary during the entire time it takes to step through all frequency bands of interest. In order for the measurement to go faster, or if the signal is not stationary, a real-time analyzer can be used, which is designed with all of the third-octave bands in parallel, so that the same time data can be used for all frequency bands simultaneously. Another approach is to record the time signal and do the analysis with digital filters as described in Section 3.3.4.

9.3 The Discrete Fourier Transform (DFT)

The DFT is the method used to transform measured samples of a signal into a spectrum, using one of the estimators which will be described in Chapter 10. In order to understand and correctly interpret estimated spectra, it is essential to understand many of the properties of the DFT. We shall therefore study the DFT in some depth in this section.

Let us assume that we have a sampled signal x(n) = x(nΔt). We further assume that we have collected N samples, the blocksize, of the signal, where N is usually an integer power of 2, that is, N = 2^p, where p is an integer number, see Section 9.3.1. Although not a requirement per se, we will limit our discussion by assuming that N is even to avoid complicating things unnecessarily. The (finite) DFT, X(k) = X(kΔf), of the sampled signal x(n) is defined as follows:

X(k) = Σ_{n=0}^{N−1} x(n) e^{−j2πkn/N},   (9.5)

for k = 0, 1, …, N − 1. Equation (9.5) is called the forward DFT of x(n). To calculate the time signal from the spectrum X(k), we use the inverse DFT, or IDFT, defined by

x(n) = (1/N) Σ_{k=0}^{N−1} X(k) e^{j2πnk/N},   (9.6)

for n = 0, 1, …, N − 1.
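To see the definitions in Equations (9.5) and (9.6) at work, the following MATLAB/Octave sketch (with an arbitrary blocksize and random test signal) evaluates the forward DFT directly as a matrix–vector product and compares it with the built-in fft command, which uses the same definition:

% Minimal sketch: direct evaluation of Eq. (9.5) compared with fft.
N = 16;
x = randn(N,1);
n = (0:N-1)';  k = 0:N-1;
X_direct = exp(-1j*2*pi*n*k/N).' * x;   % X(k) = sum over n of x(n)*exp(-j*2*pi*k*n/N)
X_fft    = fft(x);
max(abs(X_direct - X_fft))              % difference at the level of machine precision
x_back = real(ifft(X_fft));             % the IDFT of Eq. (9.6) returns the original samples
max(abs(x_back - x))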


It should be pointed out immediately that the definition of the DFT presented in Equation (9.5) is by no means the only possible definition. There are several available definitions with different scaling factors in front of the sums. When confronted with new software, one should therefore test a known signal to find out which definition of the DFT is used. A simple way to test this is to create a signal with an integer number of periods and with a suitable blocksize, N, say 1024 samples. See Section 9.3.4 on how to create such periodicity. By checking the result of an FFT and comparing with the formulas above, the definition used can be identified. The definitions of the DFT and IDFT in Equation (9.5) and Equation (9.6), respectively, are common and are the definitions used by MATLAB/Octave and by many noise and vibration software packages.

The number of samples, N, is called the blocksize (sometimes frame size) of the DFT, and consequently, we often refer to the N samples of x(n), n = 0, 1, …, N − 1, as a block of data. Each value, X(k), of the DFT is referred to as a frequency line or sometimes a frequency bin. The spectrum, X(k), obtained from the above definition of the DFT is not physically scaled. This is clearly seen by first observing the value for k = 0. The discrete frequency k = 0 corresponds to the DC component of the signal, that is, the average value of the signal. But, according to Equation (9.5) above, we have

X(0) = Σ_{n=0}^{N−1} x(n) = N · x̄,   (9.7)

where x̄ denotes the mean value of x(n). Furthermore, if we, as an example, define x(n) as a cosine with two periods during the measurement time n = 0, 1, 2, …, N − 1, where we let N = 8, we get

x(n) = cos(4πn/N),   for n = 0, 1, …, 7.   (9.8)

The DFT of the signal, x(n), will be

X(k) = 4 for k = 2, 6, and X(k) = 0 for k ≠ 2, 6,   (9.9)

which is apparently not the amplitude of x(n). But if we divide the spectrum, X(k), by N = 8, we obtain the value 0.5 at the two frequencies X(2) and X(6), which is half the amplitude of the sinusoid at each of the two frequencies. We will soon come back to the interpretation of the frequencies of the DFT. We can conclude from this example, however, that by dividing the DFT as we have defined it by N, we get physically interpretable results. It could be questioned why we do not define the DFT including the division by N. Indeed, this would be a more appropriate definition. However, as the definition in Equation (9.5) is the one used by MATLAB/Octave, and many other software packages, it makes sense to use it.

We should also point out some differences between the DFT in Equation (9.5) and the continuous Fourier transform in Section 2.7 (of course, one is a continuous integral and the other a discrete sum, which are strictly speaking not comparable at all; I still hope you will see my point).
● The DFT is computed from a finite number of samples, whereas the continuous Fourier transform is an integral from minus infinity to infinity.
● The DFT is not scaled in the same units as the continuous Fourier transform, since the differential dt is missing. The analog Fourier transform of a signal with units of m/s² would have units of m/s² · s, i.e., m/s, whereas the DFT will have units of m/s². In Chapter 10, this will become clear as we present how to compute scaled spectra from the DFT results.
● The DFT is calculated in a nonsymmetrical way, from n = 0 to n = N − 1, and not symmetrically as the analog Fourier transform.
We will address these issues as we proceed.

9.3.2 The DFT in Short Before proceeding with many of the details of the DFT we will present an overview first published in Thrane (1979), reproduced here by permission of Brüel and Kjær, which


Figure 9.2 Summary of the DFT. See text for explanation [After Thrane (1979). Reproduced by permission of Brüel & Kjær].

elegantly describes the different properties of the DFT. In Figure 9.2, the different steps in the DFT are shown, and the following text explains the different steps. We start with a continuous time signal as in Figure 9.2(A.1). Figure 9.2(B.1) shows the Fourier transform of this continuous (infinite) signal, which is of course also continuous, but band-limited so that we fulfill the sampling theorem. For the sake of simplicity, we (and Thrane) have used a time function which is a Gaussian function, which has the same shape in time and frequency (see Table 2.2; this is a result of the fact that e^{−πt²} is one of an infinite number of eigenfunctions of the Fourier transform). Of course, a Gaussian function is not band-limited, so the shape should only be viewed as a principal sketch.

The next step in the process is to sample the time signal, which is equivalent to multiplying the signal by an ideal train of pulses with unit value at each sampling instant and zero between, see Figure 9.2(A.2) and (A.3). In the frequency domain, this operation corresponds to a convolution with the equivalent Fourier transform, which is a train of pulses at multiples of the sampling frequency, fs. We consequently obtain a repetition of the spectrum at each k · fs. This is actually an illustration of the sampling theorem, because it shows that if the bandwidth of the original spectrum were wider than ±fs/2, the periodic repetitions of the spectrum would overlap, see Figure 9.2(B.2) and (B.3).


The next step is due to the truncation in time, since we measure only during a finite time. This is equivalent to multiplying the continuous time signal by a rectangular window in the time domain, as illustrated in Figure 9.2(A.4) and (A.5). In the frequency domain, this operation is equivalent to the convolution with a sinc function as in (B.4) and (B.5). The result of this truncation is uncertainty in amplitude in the frequency domain, which can be seen in the ripple of the spectrum in (B.5). The final step is carried out in the frequency domain, Figure 9.2(B.6) and (B.7). With the DFT, we calculate the spectrum only at discrete frequencies. Because of the symmetry of the Fourier transform, this operation is equivalent to the step in (A.2), i.e., it is equivalent to a multiplication of the spectrum with a train of pulses, only now with frequency increment Δf = 1/T. In the time domain, this step implies a convolution with a train of pulses with separation T, as in (A.6), which finally gives the periodicity in the time domain in Figure 9.2(A.7).

9.3.3 The Basis of the DFT

We will now take a closer look at the DFT definition in Equation (9.5). First, we should note that the sum is actually two sums, for the real and imaginary parts, respectively. Thus, we can rewrite Equation (9.5) as

X(k) = Σ_{n=0}^{N−1} x(n) cos(2πkn/N) − j Σ_{n=0}^{N−1} x(n) sin(2πkn/N).   (9.10)

9.3.4 Periodicity of the DFT As evident from Equation (9.5) and the discussion in Section 9.3.2, the DFT X(k) is periodic with period N, that is X(k + N) = X(k).

(9.11)

Similarly, the time signal x(n) according to Equation (9.6) is periodic with period N, so that x(n + N) = x(n).

(9.12)

9.3 The Discrete Fourier Transform (DFT)

2

x(n)

1 0 −1 −2

0

2

4

6 8 10 Sample number, n

12

14

16

0

2

4

6 8 10 Frequency number, k

12

14

16

Real[X(k)]/N

1 0.5 0 −0.5 −1

Figure 9.3 Cosine with N = 16 and with two periods in the measurement window (upper), and the real part of the DFT of the same signal (lower) divided by N. As evident from the figure, by dividing the FFT by N, we obtain half the cosine amplitude at X(2), and half at X(14). The imaginary part is in this case equal to zero because the cosine is an even function. Each frequency line k corresponds to a cosine (or sine for the imaginary part) with k periods in the measurement window. Also, note that the measurement window stops at k = N, whereas the DFT is only calculated up to k = N − 1.

In Figure 9.3, a cosine with two periods during the measurement window is plotted together with the real part of the DFT divided by N, i.e., X(k)/N. (The imaginary part is in this case zero, which will be evident as we proceed. Therefore, it is not included in the figure.) The first thing to observe in Figure 9.3 is that the spectrum X(2) contains the first peak. This is a result of the fact that the cosine in x(n) contains two periods in the time window (measurement time), and that the DFT at k = 2 is the result of the product of x(n) and a cosine (since we are looking at the real part) with two periods during the time window, and the orthogonality of cosines, see Equation (9.10). From Figure 9.3, it is clear that the actual "measurement time," i.e., the period of the cosine, x(n), is one sample more than the samples actually measured. In other words, the signal we have sampled is periodic in the blocksize if the next sample, x(N), in our example x(16), equals the first sample, x(0). Combining this with the fact that the spectrum at each frequency line, k, corresponds to a signal with k periods in the blocksize, we can conclude that the frequency increment, Δf, is

Δf = 1/T = 1/(NΔt) = fs/N.   (9.13)
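A small MATLAB/Octave sketch of the bookkeeping implied by Equation (9.13) follows; the sampling frequency and blocksize are arbitrary example values:

% Minimal sketch: frequency increment versus measurement time, Eq. (9.13).
fs = 2048;  N = 4096;       % assumed example values
df = fs/N                   % frequency increment in Hz (here 0.5 Hz)
T  = N/fs                   % block (measurement) time in seconds, T = 1/df

df_wanted = 0.1;            % desired frequency increment
T_needed  = 1/df_wanted     % requires 10 s of data per block
N_needed  = fs*T_needed     % corresponding blocksize at this sampling frequency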


The relationships in Equation (9.13) are important to keep in mind when we use frequency analysis software, to keep count of how long the measurement time will be for a certain frequency increment. It should be particularly noted that the frequency increment is the reciprocal of the measurement time, T (which, oddly, is one sample longer than it actually took to gather the data x(n)). Thus, if we want a frequency increment of 0.1 Hz, for example, we need to measure a block of data for 10 seconds (or, actually, one time increment less – you start to get the picture).

Equation (9.13) also implies that the Nyquist frequency, fs/2, is found at k = N/2. This frequency is of special interest, not only because it is the upper limit of the interesting frequencies (i.e., the positive frequencies, we will come to that). If we go back to the definition of the DFT in Equation (9.5), we find that this value is

X(N/2) = Σ_{n=0}^{N−1} x(n) e^{−j2π(N/2)n/N} = Σ_{n=0}^{N−1} x(n) e^{−jπn} = Σ_{n=0}^{N−1} x(n) (−1)^n,   (9.14)

which is a real number. This number is important because, as we will see soon, whereas the remaining values X(k) for k > N∕2 can be calculated from the first values X(k) for k = 1, 2, … N∕2 − 1, the value for k = N∕2 must be stored. To understand this, however, we first need to discuss the symmetry properties of the DFT, see Section 9.3.5. We now put the focus on the right-hand half of the spectrum in Figure 9.3. The values above k = N∕2 can easily be seen to be the negative frequencies, if we note the periodicity of X(k). This is illustrated in Figure 9.4, where the spectrum in Figure 9.3 has been repeated once below and once above the earlier values. In the figure, it is clearly seen that shifting the upper N∕2 values of the “original” spectrum X(k) to the left of the first N∕2 values


Figure 9.4 Illustration of the periodicity of the DFT. In the figure, the DFT result X(k) for k = 1, 2, … , N − 1 for the signal used in Figure 9.3 has been repeated once to the left, and once to the right, of the original sequence. The frequency lines between k = −8 and k = 7 are highlighted, indicating a double-sided spectrum.


(remember the value k = 0), we obtain a symmetric spectrum with positive and negative frequencies. (This can be done conveniently in MATLAB/Octave by the fftshift command).
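For completeness, a short MATLAB/Octave sketch of the rearrangement shown in Figure 9.4, using the fftshift command mentioned above (blocksize and test signal are arbitrary example choices):

% Minimal sketch: double-sided spectrum via fftshift, cf. Figure 9.4.
N = 16;  n = 0:N-1;
X = fft(cos(4*pi*n/N))/N;
k = -N/2:N/2-1;               % frequency numbers after the shift
Xs = fftshift(X);             % places k = 0 in the middle, negative frequencies to the left
stem(k, real(Xs)), xlabel('Frequency number, k')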

9.3.5 Properties of the DFT

Most of the properties of the continuous Fourier transform apply in a similar fashion to the DFT. Some important DFT transform pairs are presented in Table 9.1, where the most notable difference compared to the transform pairs for the continuous Fourier transform is the form of Parseval's theorem, which has a scaling factor in the frequency domain for the DFT. Also, as we will show in Section 9.3.12, multiplication and convolution work differently for the DFT than for the continuous Fourier transform. If x(n) is a real sequence, like our typical measurement signals, then the real part of X(k) comes from the even part of x(n), and the imaginary part of X(k) comes from the odd part of x(n), due to the nature of even and odd signals that we presented in Section 2.7.1. Furthermore, for a real signal, x(n), the real part of its DFT, X(k), is an even function and the imaginary part is an odd function, i.e.,

Re[X(−k)] = Re[X(k)],   (9.15)

and

Im[X(−k)] = −Im[X(k)].   (9.16)

These qualities, called the Fourier transform symmetry properties, are thus valid for the DFT similarly to the continuous Fourier transform. Thus, according to Equations (9.15) and (9.16), the negative frequencies (the frequency lines X(k) for k > N/2) are superfluous. It is therefore customary to discard these data to save (almost) half the storage space; "almost" because we need to store the real frequency line corresponding to the Nyquist frequency, i.e., X(N/2), as was mentioned above. Thus, for a blocksize of, say, 1024 time samples, we store the first 513 values from the DFT. If needed for performing an inverse DFT, we then simply recreate the upper N/2 − 1 values in X(k) using the symmetry properties in Equations (9.15) and (9.16).
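The storage scheme described above can be sketched in a few MATLAB/Octave lines; the blocksize and the random test signal are arbitrary assumptions:

% Minimal sketch: storing N/2+1 lines of a real signal's DFT and recreating the
% remaining lines using the symmetry properties of Eqs. (9.15)-(9.16).
N = 1024;
x = randn(N,1);
X = fft(x);
Xh = X(1:N/2+1);                      % the 513 values that need to be stored
Xr = [Xh; conj(Xh(end-1:-1:2))];      % recreate the upper N/2-1 frequency lines
max(abs(Xr - X))                      % zero to machine precision
xr = real(ifft(Xr));                  % the inverse DFT returns the original block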

Table 9.1 Some important transform pairs for the DFT.

#  Description            x(n)                          X(k)
1  Periodicity            x(n + N) = x(n)               X(k + N) = X(k)
2  Constant               1, n = 0, 1, …, N − 1         X(0) = N; X(k) = 0, k ≠ 0
3  Dirac pulse            δ(n)                          1, k = 0, 1, …, N − 1
4  Complex conjugation    x*(n)                         X*(N − k)
5  Multiplication         x(n)y(n)                      (1/N) X(k) ⊛ Y(k)
6  Circular convolution   x(n) ⊛ y(n)                   X(k)Y(k)
7  Parseval's theorem     Σ_{n=0}^{N−1} x(n)y*(n)       (1/N) Σ_{k=0}^{N−1} X(k)Y*(k)

See Section 9.3.12 about circular convolution as in pairs 5 and 6.


9.3.6 Relation Between DFT and Continuous Spectrum

The periodicity of the DFT can be interpreted such that the DFT X(k) of the signal x(n) is the spectrum of the periodic repetition of x(n). We can divide this periodic repetition into three different cases, depending on the signal, x(n), if we use an N-size DFT, namely
1. the signal x(n) is periodic in N, which is unlikely to happen for a measured signal (except for synchronous sampling, see Chapter 12), but is often used when considering the DFT and its effects (as in the present chapter, for example),
2. the signal is transient, of length L < N, i.e., it dies out inside N, or
3. the signal is continuous (either periodic or random), or transient with a length L > N.
In the first case, the DFT, X(k), is an exact representation of the continuous Fourier series of x(t) if we apply appropriate scaling, and there is no frequency content between the discrete frequencies kΔf = k/(NΔt). In the second case, with a transient signal that dies out inside the blocksize, the analog signal sampled at x(nΔt) has a continuous Fourier transform, X(f), which is sampled at the discrete frequencies of the DFT:

X(k) = X(f)|_{f = kΔf}.   (9.17)

Furthermore, in the second case, if we wish to calculate X(f) at other frequencies than those sampled by the DFT, we can do so using the samples of x(n) by

X(f) = Δt Σ_{n=0}^{N−1} x(n) e^{−j2πf nΔt}.   (9.18)

For such signals, zero padding is an easy and appropriate method to find values of X(f ) by the DFT at arbitrary resolution, see Section 9.3.13. This relation is given here without any proof, but it can be found in any standard textbook on signal processing (e.g., (Oppenheim et al. 1999; Proakis and Manolakis, 2006)). In the third case, finally, when the signal is truncated before the DFT can be computed, there will be an error, i.e., a difference between the true signal spectrum and the spectrum of the truncated, periodically repeated, signal. This error is called leakage and will be dealt with in Section 9.3.7.

9.3.7 Leakage

What happens if we, for example, compute the DFT with a frequency increment of Δf = 2 Hz, but the measured signal is a sinusoid of, say, 51 Hz, so that the signal frequency is located right between two spectral lines in the DFT (50 and 52 Hz)? The result is that we get two high DFT values, at 50 Hz and at 52 Hz. However, as shown in Figure 9.5, both values are lower than the true value, and there are a number of nonzero values of the DFT around the two peak values. To easily observe this error in the figure, we have scaled the DFT by dividing by N and taken the absolute value of the result, as we discussed in Section 9.3. We have also made the spectrum single-sided, by multiplying all values (except



Figure 9.5 Time block (upper) and scaled DFT (lower) of a 51 Hz sinusoid. A total of 256 time samples have been used, giving 129 spectral lines. The frequency increment is Δf = 2 Hz, since the measurement time is 0.5 s. Instead of the expected value of 1, that is, the amplitude of the sinusoid, we get a peak much too low (in this case approx. 36% too low). There are also more nonzero frequency values to the left and right of the 50 Hz and 52 Hz values. This phenomenon is called leakage since the frequency content in the signal seems to “leak” out to surrounding frequencies.

the DC value) by 2, as we discussed in Chapter 8. Thus, the correct value should be 1, the amplitude of the sinusoid. As seen in Figure 9.5, the resulting peak is incorrect, by as much as 36%. Furthermore, it looks as if the frequency content has "leaked" away on both sides of the true frequency of 51 Hz. This phenomenon is therefore called leakage.

One way to explain the leakage effect is by studying what happens in the frequency domain when we limit the measurement time to a finite time, as we do when we acquire only N samples of the continuous signal x(t). This procedure corresponds to multiplying the original, continuous signal by a time window which is zero outside the interval t ∈ (−T/2, T/2), and unity within this same interval. A multiplication with this function, w(t), in the time domain is equivalent to a convolution with the corresponding Fourier transform, W(f), in the frequency domain. We thus obtain the weighted Fourier transform of x(t) · w(t), denoted Xw(f), as follows:

Xw(f) = X(f) ∗ W(f) = ∫_{−∞}^{∞} X(u) W(f − u) du,   (9.19)



Figure 9.6 DFT of a sinusoid which coincides with a spectral line (k = 0). The convolution between the transform of the (rectangular) time window, W(f ), (dotted) and the sinusoid’s true spectrum, 𝛿(f0 ), (solid) results in a single spectral line. Note that the scale is in dB, i.e., a logarithmic scale, as opposed to the linear scale in Figure 9.5.

where ∗ denotes (continuous) convolution. W(f) is the transform of a rectangular time window, as in the example shown in Figure 9.5. This Fourier transform is

W(f) = T sin(πfT)/(πfT) = T sinc(fT),   (9.20)

which is plotted (in dB magnitude scale) in Figure 9.6 as a dotted line. You should particularly note that the window spectrum is exactly zero at all integer k, except k = 0. The rectangular window is sometimes called a uniform window, and in MATLAB/Octave it is defined by the boxcar command.

We will illustrate the convolution process in Equation (9.19) for two cases: (i) when the frequency of the sine coincides with a frequency line, i.e., the sine is periodic in the time window; and (ii) when the frequency of the sine is located exactly in the middle of two frequency lines. (If you are not already familiar with convolution, you are strongly recommended to study Section 2.6.4 before proceeding.) In both cases, the convolution between the Fourier transform of our (continuous) sine wave and that of the time window implies that we allow the former, X(f), to sit at its frequency f0. Then, for each frequency line k, where we calculate X(k), we shift the Fourier transform of the window, W(f), k samples (if k is negative it is a shift to the left, if k is positive, we shift to the right), i.e., to W(f − kΔf). (Actually, we should also reverse the window Fourier transform, but it is symmetric, so nothing really happens in that step.) Finally, we multiply the two and sum all the values, but since X(f) is a single spectral line there is nothing to sum – it will be a single value for each k.

In case (i), when the frequency of the sine coincides with a frequency line, i.e., when f0 = k0Δf, the result obtained is plotted in Figure 9.6. The reason there is only one nonzero value is that for all integer numbers k, where we place W(f − kΔf), except for k = k0, the spectral line of the sinusoid corresponds to a zero in W(f).



Figure 9.7 Leakage with rectangular window. The frequency of the sine wave is located at f0 = 0.5Δf , exactly mid-way between two frequencies, k = 0 and k = 1, corresponding to an integer number of periods plus one half period in the time window. The DFT result, X(k), is illustrated by black dots, and the dotted line is the rectangular window spectrum. When a periodic signal does not have an integer number of periods in the measurement window, then due to the finite measurement time, the convolution results in too low a frequency peak. At the same time, the power seems to “leak” into nearby frequencies, although the total power in the spectrum is still the same. Note that the scale is in dB, i.e., a logarithmic scale, as opposed to the linear scale in Figure 9.5, thus exaggerating the leakage effect.

In Figure 9.7, the result of the convolution as described above is illustrated for case (ii), where the frequency of the sinusoid is located exactly between two spectral lines (we have an integer number plus one-half period in the time window). At each shift of k samples during the convolution, the frequency line of the sine will now coincide approximately with the maximum of each side lobe. We see in the picture that the result is that we obtain many nonzero frequency lines which slowly decrease to the left and right, and we get two peaks, for k = 0 and k = 1, which have the same height, although much lower than the sinusoidal amplitude. It can be shown that if the RMS values of all spectral lines are summed, as will be discussed in Section 10.7.5, the result is equal to the true RMS value of the sinusoid. (This is a result of the fact that Parseval's theorem always ensures that the power of the signal in the spectrum is equal to the power in the time domain, see Table 9.1.) Hence, the power in the signal seems to "leak" out to nearby frequencies, giving this phenomenon the name leakage.
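The leakage example of Figure 9.5, and the Parseval argument above, can be reproduced with a few MATLAB/Octave lines; the sampling frequency of 512 Hz is an assumed value chosen to give 256 samples in 0.5 s and a frequency increment of 2 Hz:

% Minimal sketch: leakage for a 51 Hz sine analyzed with df = 2 Hz.
fs = 512;  N = 256;                  % T = 0.5 s, df = 2 Hz
t  = (0:N-1)/fs;
y  = sin(2*pi*51*t);                 % amplitude 1, 25.5 periods in the block
Y  = abs(fft(y))/N;
G  = 2*Y(2:N/2+1);                   % single-sided magnitude spectrum (DC excluded)
f  = (1:N/2)*fs/N;
[pk, i] = max(G);                    % highest line, approx. 0.64 instead of 1
rms_from_lines = sqrt(sum(G.^2/2));  % summing the line RMS values gives approx. 1/sqrt(2)
fprintf('peak %.3f at %g Hz, RMS from spectrum %.4f\n', pk, f(i), rms_from_lines)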

9.3.8 The Picket-Fence Effect

An alternative way to look at the discrete spectrum X(k) from the DFT is to see each spectral line as the result of a bandpass filtering, followed by an RMS computation (or amplitude detection, we will discuss scaling in Chapter 10) of the signal after the filter. This process is often illustrated as in Figure 9.8 with a number of parallel bandpass filters, where each filter is centered at the frequency k (or k ⋅ Δf if we think in Hz). Each filter illustrated in



Figure 9.8 The picket-fence effect. Each value in the discrete spectrum corresponds to the signal’s RMS value after bandpass filtering. If we study a tone located between two frequencies, it will be attenuated by the filter shape, and we will obtain a value which is too low.

Figure 9.8 is the main lobe of the window centered at that frequency line in the convolution, which is, of course, a simplification; to be exact we really have to take into account the side lobes of the window as well. This method of looking at the DFT is reminiscent of viewing the true spectrum through a picket fence, and therefore it is sometimes called the picket-fence effect, or sometimes scalloping (an English word meaning something going up and down in a wavelike manner). Note that the picket-fence effect is also analogous to the method of measuring a spectrum with octave band analysis that we discussed in Section 9.2. As was mentioned in Section 9.1.1, this principle is the only (nonparametric) way to measure or compute spectral content. The picket-fence effect is a good illustration of what happens with the estimated amplitude of a sine which is located between two frequency lines; it will be attenuated by as much as the filter shape has decreased at the frequency of the sine. We can use this illustration for different windows to find the "maximum amplitude error," which is tabulated for some common windows in Table 9.2.

Some useful figures for the windows are compared in Section 9.3.9.

Window

First side lobe [dB]

Sidelobe falloff [dB/oct.]

Ampl corr. [−]

NENBW [bins]

Max. ampl. error [%]

Rectangular

−13.3

−6

(1)

1

−36

Hanning

−31.5

−18

2

1.5

−15

ISO Flattop

−84



1

3.77

−0.1

Enh. flattop

−87.9



1

3.77

−0.1

The windows are defined in the text as well as the amplitude correction factor and the normalized equivalent noise bandwidth, NENBW.


9.3.9 Time Windows for Periodic Signals

As we saw in Section 9.3.7, we obtain leakage when calculating the DFT of a sinusoid with a noninteger number of periods in the observed time window. This error is caused by the fact that we truncate the true, continuous signal, which was also evident from the DFT overview in Section 9.3.2. By using a weighting function other than the rectangular window used in Section 9.3.7, we can reduce the leakage, and thus this amplitude error too. This process is called time-windowing and is illustrated in Figure 9.9, which shows the window effect on the same 51 Hz sine as in Figure 9.5. The time window used in Figure 9.9 is called a Hanning window and is one of the most common windows used in FFT analysis. The effect of the window is that it reduces the transients in the periodic repetition of the time signal, although it is not very intuitive how this could improve the result. We will show, however, that we can estimate the amplitude much better than with the rectangular window.


Figure 9.9 Illustration of time-windowing with a Hanning window. The window reduces the jumps at the ends of the repeated signal. In (a), the signal is shown. In (b) is shown the Hanning window, and in (c) the result of the multiplication of the two. In (d) is shown the result of calculating the spectrum with the Hanning window (solid) and without (dashed). The spectrum now has much less leakage, but the amplitude is wrong, which will be addressed in Section 9.3.9.


The resulting spectrum after windowing in Figure 9.9(d) is markedly sharper than that in Figure 9.5, but the peak does not have the correct value. We will soon present the Hanning window in some detail, but first we will see how to correct for this error.

9.3.9.1 Amplitude Correction of Window Effects

The result in Figure 9.9(d) shows a peak that is considerably lower than the expected amplitude of the sine, which is 1. This is due to the fact that the Hanning window removes information in the signal, as is evident from the windowed function in Figure 9.9(c). To see how to compensate for this window, we look at the effect of windowing a complex sine wave with a frequency which coincides with frequency line k, that is,

xk(n) = A e^{j2πfk t} = A e^{j2πkn/N},   (9.21)

when evaluated at the sample times, and for some arbitrary frequency fk = kΔf. We now define a time window w(n) and calculate the DFT for the same frequency line (all other DFT outputs will, of course, be zero). We get

X(k) = A Σ_{n=0}^{N−1} w(n) e^{j2πkn/N} e^{−j2πkn/N} = A Σ_{n=0}^{N−1} w(n),   (9.22)

which shows that the amplitude of the complex sine is scaled by a factor which is the sum of the window coefficients w(n). Since we have already gotten accustomed to dividing the DFT by N, the amplitude correction factor, Aw, that we should use is the ratio of N and the sum of the window coefficients, i.e.,

Aw = N / Σ_{n=0}^{N−1} w(n),   (9.23)

which, for the Hanning window, is exactly 2, as we will see below. The way to apply this factor is thus that we calculate a scaled spectrum Xw(k) as follows:

Xw(k) = (Aw/N) Σ_{n=0}^{N−1} x(n) w(n) e^{−j2πkn/N},   (9.24)

which will result in a double-sided spectrum with approximately half of the amplitude, if x(n) is a sine or cosine, at each of the positive and negative frequencies, respectively. We can then make it single-sided by the procedure discussed in Chapter 8.
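The amplitude correction can be sketched in a few MATLAB/Octave lines; the blocksize, sine amplitude, and frequency below are arbitrary assumptions, and the window is built directly from Equation (9.34) to make it periodic in the block:

% Minimal sketch: amplitude correction of a Hanning window, Eqs. (9.23)-(9.24).
N  = 1024;
n  = (0:N-1)';
w  = sin(pi*n/N).^2;             % periodic Hanning window, Eq. (9.34)
Aw = N/sum(w)                    % amplitude correction factor; exactly 2 for this window

x  = 3*cos(2*pi*100*n/N);        % amplitude 3, exactly 100 periods in the block
Xw = Aw/N*fft(x.*w);             % scaled, windowed DFT, Eq. (9.24)
Gw = 2*abs(Xw(2:N/2+1));         % single-sided magnitude spectrum
max(Gw)                          % approx. 3, the amplitude of the cosine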

9.3.9.2 Power Correction of Window Effects

In addition to the effect on the amplitude of periodic components, there is another effect from the window, which produces an incorrect power of the spectrum. This effect is due to the fact that the window bandwidth is not unity. Therefore, when applying Parseval's theorem to windowed spectra, the equivalent noise bandwidth, denoted Be, has to be taken into account. We will use this factor on several occasions in Chapter 10 and will therefore derive it here. We define the equivalent noise bandwidth as the width of a rectangular filter (in Hz) with the same height as the window spectrum squared at zero frequency, and which passes the same power as the time window in question. To derive the formulation for this bandwidth,


we need a relation which we are going to develop in Section 13.3 for the output PSD of a filter. If we have a filter with frequency response H and a double-sided input PSD, Sxx, then the output PSD, Syy, is

Syy = Sxx |H|².   (9.25)

We now assume that we have a filter with a rectangular frequency response with height h = W(0), the same height as the window spectrum, and width Be. Furthermore, we assume that we have an input PSD limited within ±fs/2 which has a constant value of 1/fs, so that the total power is unity (PSD times frequency range). The output power of this filter will be

Pr = (1/fs) Be W²(0),   (9.26)

where the square in W²(0) comes from the fact that we are looking at power and not amplitude, so we have to take the square of the frequency response according to Equation (9.25). We now look at the window as a filter with frequency response W(f). The total output power of this filter, with the constant input PSD of 1/fs, will be

Pw = (1/fs) ∫_{−fs/2}^{fs/2} |W(f)|² df,   (9.27)

which, using Parseval's theorem, can be calculated as follows:

Pw = (1/fs) ∫_0^T w²(t) dt,   (9.28)

where we replace the infinite limits with 0 and T because the window w(t) is zero outside these limits. Observing this equation, we see that it is the Fourier transform of w²(t) evaluated at f = 0, where the exponential term is unity. Thus, we can use Equation (9.18) to compute this equation directly from the samples of the window, as follows:

Pw = (Δt/fs) Σ_{n=0}^{N−1} w²(n).   (9.29)

In the same manner, we find the peak power gain of the window, W²(0), computed directly from the samples of the window, to be

W²(0) = [ Δt Σ_{n=0}^{N−1} w(n) ]².   (9.30)

Setting up the equality Pr = Pw, we now get

(Δt/fs) Σ_{n=0}^{N−1} w²(n) = (1/fs) Be [ Δt Σ_{n=0}^{N−1} w(n) ]²,   (9.31)


and by using the fact that Δt = 1/(NΔf), this gives the equivalent noise bandwidth, ENBW, Be, in [Hz], as

Be = NΔf ∑_{n=0}^{N−1} w²(n) / [∑_{n=0}^{N−1} w(n)]².    (9.32)

It is more useful in many cases to use the normalized equivalent noise bandwidth, NENBW, which we will denote Ben. The NENBW is simply the equivalent noise bandwidth divided by Δf, or

Ben = N ∑_{n=0}^{N−1} w²(n) / [∑_{n=0}^{N−1} w(n)]²,    (9.33)

which for a Hanning window is exactly 1.5. The importance of this relation will be apparent in Chapter 10. 9.3.9.3 Comparison of Common Windows

9.3.9.3 Comparison of Common Windows

There are many time windows which have been developed to optimize various properties, and many noise and vibration analysis software packages therefore include a large variety of windows from which to choose. Good overviews of different windows are found in Harris (1978) and Nuttall (1981), except for the later-developed flattop window which we will present below. For noise and vibration analysis, there is little use for this variety of windows, as the results for most windows are, from a practical standpoint, almost identical. We shall therefore examine only two windows, the Hanning and flattop windows, in more depth, as these two windows are sufficient for our purposes. We will also include the rectangular window for comparison purposes, which is the window that is in effect if we do not multiply our data by any other window.

Leakage can be explained by either of two different principles. First, referring back to the convolution results in Figure 9.7, it is seen that leakage is produced in the convolution when the sine tone coincides with a side lobe. Thus, reducing the side lobes will reduce leakage. There are two properties of the window that can be adjusted for this purpose: the first side lobe level, and the asymptotic falloff of the higher side lobes. These properties can, however, only be reduced at the expense of increasing the width of the main lobe. The second way to explain leakage is by focusing on the periodic repetition of the windowed time sequence and its derivatives. It turns out that the more derivatives of the window that are continuous in the periodic repetition, the steeper the side lobe falloff will be. Of course, both explanations produce the same end result, but sometimes one is more appropriate than the other to explain a particular feature of a window.

A second reason for wanting lower side lobes (including the first side lobe) is also worth mentioning. A real signal will, of course, in most cases be expected to contain more than the single sine tone we used to illustrate the convolution process. When calculating the spectrum at one of the tones, say at k = 0, those other tones will be weighted by the window


further away from k = 0 and summed into the spectrum value. For closely spaced tones, it is therefore essential that already the first side lobe is as low as possible.

The Hanning window is probably the most common window used in FFT analysis. It is defined by half a period of a squared sine, or alternatively one period of a displaced cosine,

wH(n) = sin²(πn/N) = (1/2)(1 − cos(2πn/N)),    (9.34)

for n = 0, 1, 2, …, N − 1. It should be particularly noted that the window defined in Equation (9.34) is periodic in the time window as described in Section 9.3.4. This means that the first value is zero, but the last value is identical to the second value, and not zero. Harris (1978) already noted that this was often overlooked, and sadly, more than 30 years later this is still true. Thus, for example, the Hanning windows defined in MATLAB/Octave are, by default, not periodic in this way, although they can be made so by an option to the command hann. For large blocksizes, the error is not very large, but for small blocksizes, the error can be considerable, see Problem 9.1.

The Fourier transform of the Hanning window is really simple. It turns out that there are only three nonzero values in W(f), namely W(−1) = W(1) = −1/4 and W(0) = 1/2. The minus sign is often neglected, which leads to erroneous phase, but does not affect the magnitude. Due to this simple spectrum, in the early days of FFT/DFT it was common to convolve with the Hanning window in the frequency domain instead of multiplying in the time domain. With the fast computers of today, this has become less common. The first side lobe of the Hanning window is approximately −31.5 dB below the main lobe maximum and the asymptotic falloff is −18 dB/octave.

The Hanning window's Fourier transform has a main lobe that is wider than that of the rectangular window. This means that the picket-fence effect discussed in Section 9.3.8 results in a maximum error on the amplitude of a sine of approximately −15% (compared with −36% for the rectangular window). Note that the amplitude error is always negative or zero, since we scale the window so that a tone on a frequency line is correct, which means the main lobe maximum in the spectrum of the window is unity.

The amplitude error of 15% for the Hanning window is of course unacceptably large in many cases, for example, when we want to measure the amplitude of a sinusoidal signal for calibration purposes. In that case, the flattop window may be utilized. The flattop window is not a uniquely defined window, but a name given to a group of windows with similar characteristics. The development of flattop windows was dominated by various companies manufacturing FFT analyzers in the 1980s, which has resulted in a few publications specifying the coefficients of such windows. There is, however, an ISO standard, ISO 18431-1 (2005), in which the coefficients of a relatively good flattop window are published. This window uses a formulation suggested by Nuttall (1981) where the window is built up by a sum of cosine terms. We formulate it in a slightly different way than the original, namely as

w(n) = ∑_{k=0}^{K} ak (−1)^k cos(2πkn/N),    (9.35)

where the coefficients ak are real constants. For the ISO flattop window, as well as most other popular flattop windows, K = 4 is used. The ISO flattop window coefficients are


a0 = 1, a1 = 1.933, a2 = 1.286, a3 = 0.388, and a4 = 0.0322. This window has a first side lobe of approximately −84 dB, which essentially makes it unnecessary to worry about the falloff rate. The main benefit of the flattop window is its very flat main lobe, which results in a maximum amplitude error of less than 0.1% and makes the window well suited to estimating sine amplitudes. Actually, the ISO window is not very well optimized, and hence Tran et al. (2004) have published modified coefficients that reduce the first side lobe to −88 dB without essentially affecting the main lobe width. We will use this excellent flattop window in our comparison below. Other examples of flattop windows can be found in Reljin et al. (2007), which also provides some more insight into these windows.

In Figure 9.10, the three windows, rectangular, Hanning, and flattop, are shown with their Fourier transforms for comparison. In Table 9.2, some useful figures for the three windows we have discussed are presented.

There is a price to pay for the decreased amplitude uncertainty when we use time windows. The price is in the form of increased frequency uncertainty, which occurs because the smaller the amplitude uncertainty, the wider the main lobe of the spectrum of the window. Therefore, if we measure a sinusoid with a frequency that matches one of our spectral lines, the peak will become wider than if we had used the rectangular window. The flattop window, which has the best amplitude uncertainty, also has the widest main lobe, which, of course, is a direct result of the picket-fence effect. This trade-off is related to the bandwidth-time product, which is explained further in relation to errors in PSD computation with windowing in Section 10.3.5.

Figure 9.11(a) shows the result of the DFT of a sinusoid which exactly matches a spectral line after windowing with the Hanning window and with the flattop window, respectively. As shown in the figure, the Hanning window results in three nonzero spectral lines, while the flattop window gives nine nonzero spectral lines. With this broadening of the peaks, there is a larger uncertainty in the exact frequency of the tone. Even with windows other than the rectangular, we get some leakage when the sinusoid's frequency does not coincide with a spectral line, as seen in Figure 9.11(b), but at much lower levels than with the rectangular window. On the other hand, we get a broadening of the peak as illustrated in Figure 9.11. The flattop window, because of its large main lobe width, should only be used when it is known that the spectrum does not contain any closely spaced tones. The Hanning window should therefore be used as the standard window, since it provides a good compromise between amplitude accuracy and frequency resolution.

Finally in this section, we will present two more windows which are not recommended for spectral estimation, but which are nevertheless important to present, for different reasons. First, we present the Bartlett, or triangular, window, which enters into our discussion on PSD estimation in Chapter 10, and second, the half-sine window, which has recently been proposed for frequency response estimation (Antoni and Schoukens 2007) as an alternative to the Hanning window. The Bartlett, or triangular, window is defined by a triangular shape,

w(n) = { 2n/N,         n = 0, 1, …, N/2
       { 2(N − n)/N,   n = N/2 + 1, N/2 + 2, …, N − 1      (9.36)



Figure 9.10 Comparison of three common time windows; the rectangular (a, b), Hanning (c, d), and flattop (e, f) windows in time domain (left) and frequency domain (right). The Hanning window’s first zero is situated at k = 2, which means that for a sinusoid situated between two frequency values in the DFT, the convolution with the window spectrum will make the value for k = 0 in Figure 9.7 be attenuated much less than for the rectangular window. For the flattop window, almost no attenuation occurs. The uncertainty in amplitude therefore decreases when using windows with a periodic signal. On the other hand, with wider main lobes, the spectral peaks are broadened, which results in increased frequency uncertainty, see below.



Figure 9.11 The widening of the frequency peak is the price we pay to get a more accurate amplitude. In (a), the scaled DFT result is shown for a sine with amplitude 1 and frequency that matches the spectral line marked k = 0, for both Hanning (solid, rings) and flattop (solid, plus sign) windowing. With the flattop window, the peak is much wider than with the Hanning window. The rectangular window is omitted as it would simply have one value of 1 at k = 0 in this case with no leakage. In (b), similar results for the case when the sine frequency is exactly midway between frequency lines k = 0 and k = 1 are shown, including the result for the rectangular window (solid, asterisk).

which, like our other windows, is periodic in its repetition. This window is actually the convolution of a rectangular window with itself and is therefore of interest in spectral estimation, see Section 10.4. The half-sine window is defined, not surprisingly, by half a sine,

w(n) = sin(πn/N),    (9.37)

for n = 0, 1, …, N − 1. This is a rather poor window for spectral analysis, but in Chapter 13, we will show that it has some advantages for the estimation of frequency responses.
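The windows of Equations (9.35)-(9.37) are easy to generate directly from their definitions. The following MATLAB/Octave sketch builds the ISO flattop window from the coefficients quoted above, together with the Bartlett and half-sine windows, and computes the flattop window's amplitude correction and normalized equivalent noise bandwidth; the blocksize and variable names are arbitrary choices for illustration.

% Sketch: constructing the windows of Eqs. (9.35)-(9.37)
N  = 1024;
n  = (0:N-1)';
a  = [1 1.933 1.286 0.388 0.0322];           % ISO flattop coefficients a0...a4
wf = zeros(N,1);
for k = 0:4
    wf = wf + a(k+1)*(-1)^k*cos(2*pi*k*n/N); % flattop window, Eq. (9.35)
end
wb = [2*n(1:N/2)/N; 2*(N-n(N/2+1:N))/N];     % Bartlett (triangular) window, Eq. (9.36)
wh = sin(pi*n/N);                            % half-sine window, Eq. (9.37)
Aw_flattop  = N/sum(wf);                     % amplitude correction, Eq. (9.23)
Ben_flattop = N*sum(wf.^2)/sum(wf)^2;        % NENBW, Eq. (9.33)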

9.3.9.4 Frequency Resolution

From the above discussion about the widening of frequency peaks, it is clear that with a certain frequency increment, Δf, one may not, after the DFT computation, be able to distinguish two sinusoids separated in frequency by only one or a few spectral lines. Two closely spaced sine tones can potentially result in one peak in the DFT result. For this reason, we should differentiate between frequency increment and frequency resolution. Frequency resolution usually implies the smallest frequency difference that is possible to discern between two signals, while the frequency increment is the distance between two frequency values in the DFT computation, that is, Δf. Frequency resolution depends on the time window used and the measurement time, while the frequency increment depends only on the measurement time, T = NΔt. There is no exact frequency resolution for a particular window. How close two sinusoids can be in frequency, in order for the spectrum to still show two peaks, depends not only on the width of the window's main lobe but also on where between the spectral lines the two sine waves are located and on their respective amplitudes. In Problem 9.3, we will look into some aspects of frequency resolution.


9.3.10 Time Windows for Random Signals

The window's influence on a random signal is quite different from its effect on periodic signals described in Section 9.3.9, due to the fact that random signals have continuous spectra, as opposed to periodic signals, which have discrete spectra. The result of the convolution between the continuous Fourier transform of the window and the random signal is therefore more complicated to visualize. If we recall that convolution implies that the qualities of both signals are "mixed," we can understand that the window will introduce a "ripple" in the random signal's spectral density. At the same time, we get a smoothing of the spectral density due to the influence of the main lobe. For narrow peaks in the spectrum, for example, if we measure vibrations on resonant systems with low damping as discussed in Chapter 5, we get an undesired widening of the resonance peaks. More on these bias errors will be discussed in Section 10.3.4. The qualities most important for the influence of the window when determining spectral densities are the width of the main lobe (which creates a broadening of sharp peaks) and the height of the side lobes (which creates leakage). The lower the side lobes are, the less influence we get from nearby frequency content during the convolution. The flattop window should never be used for random signals because of its wide main lobe. The most common window is the Hanning window, and there is little use for any other time window for PSD calculation of random noise and vibration signals.

9.3.11 Oversampling in FFT Analysis

If we use a blocksize of N time samples, the DFT results in half as many, that is, N∕2 + 1 positive frequency (usable) spectral lines, as discussed in Section 9.3.5. Since the analog anti-aliasing filter is not ideal, but has some slope above the cutoff frequency, see Section 11.2.2, we cannot sample with a sampling frequency which is exactly 2 ⋅ Bmax , the bandwidth of the signal. In data acquisition systems for noise and vibration analysis, a common oversampling factor is 2.56, although slightly lower oversampling factors (down to approximately 2.2) are sometimes used in the later generation systems, see Chapter 11. Thus, we can only use the discrete frequency values up to k = N∕2.56 as the remaining frequency lines will be contaminated by aliasing. Due to this fact, many data acquisition and analysis systems for noise and vibration analysis throw away the upper frequency values above k = N∕2.56. Although this was reasonable many years ago when memory storage was expensive, today, it is unfortunate because it means that some information is lost and that performing inverse DFT results in less accuracy than would be possible if the upper frequency values had been stored. In Table 9.3, some common number of spectral lines stored for different blocksizes are tabulated.

9.3.12 Circular Convolution and Aliasing

In some cases, we want to compute the multiplication of two time functions or two spectra, for example, to produce the filtering of a signal by multiplying the Fourier transform of the signal with a frequency response. When manipulating spectra like this it is essential to understand the circular convolution property of DFT, and how to avoid this, usually unwanted, property.


Table 9.3 Typical blocksizes and corresponding numbers of stored spectral lines (unfortunately) used by many noise and vibration analysis systems.

Blocksize    # of spectral lines
256          101
512          201
1024         401
2048         801
4096         1601
8192         3201

It should be emphasized that it is better to keep all frequency values up to k = N/2, the Nyquist frequency.

We have learned that for the continuous Fourier transform, a multiplication in one domain is equivalent to a convolution in the other domain. Note that this is the kind of convolution we discussed in Sections 9.3.7 and 9.3.9, although there it was in conjunction with the continuous Fourier transform, because we were dealing with continuous signals. If, however, two DFT results, X1(k) and X2(k), are multiplied, then due to the circular nature of the DFT, the equivalent relationship in the time domain is a circular convolution, denoted by ⊛, so that

y(n) = IDFT[X1(k)X2(k)] = x1(n) ⊛ x2(n),    (9.38)

which is not equal to the ordinary convolution. Let us present an example to illustrate the phenomenon and give a simple solution to avoid it. Remember that convolution means reversing and shifting one signal and summing the products of this signal with the other, unshifted, signal. See Section 2.6.4 if you need to refresh your understanding of convolution.

Example 9.3.1 Assume we have two signals (sequences) x1(n) = x2(n) = 1, for n = 0, 1, 2, 3. The ordinary convolution of these sequences (which can be obtained in MATLAB/Octave by the conv command) is the sequence

x1 ∗ x2 = 1, 2, 3, 4, 3, 2, 1.    (9.39)

However, if we try to run the following MATLAB/Octave code

x3 = ifft(fft(x1).*fft(x2));

we obtain the sequence

x3(n) = x1 ⊛ x2 = 4, 4, 4, 4,    (9.40)

which is certainly not what we want. So how can we avoid this? The simple way is to add zeros to the two sequences x1(n) and x2(n) by redefining them as the sequences 1, 1, 1, 1, 0, 0, 0, 0. The result of the IDFT of these zero padded sequences can easily be calculated with a small modification to the code above, namely

x3 = ifft(fft(x1,8).*fft(x2,8));



Figure 9.12 Illustration of circular convolution that occurs because of the circularity of the DFT. The figure shows the convolution between the sequence [1 1 1 1] in the upper vector, with itself in the lower. In (a), it is illustrated that for each step in the convolution, where the upper signal is shifted one step to the right, because of the circularity of the DFT the number is shifted out on the right-hand side and shifted in on the left. Thus, the sum of the products in Example 9.3.1 is [4 4 4 4]. In (b), it is illustrated how zero padding both sequences that are convolved (here they are the same for simplicity) has shifted the rightmost number in the upper vector to a position where the lower vector has a zero, and the leftmost number in the vector is a zero shifted in from the right. This produces the correct convolution result, and the situation in (b) corresponds to the step producing the value of 3 in Example 9.3.1 (of course, the convolution starts with the upper vector shifted so that there is no "overlap" of the unity numbers).

i.e., by specifying a twice as large FFT blocksize N to the fft command. The result will now be

x3(n) = 1, 2, 3, 4, 3, 2, 1, 0,    (9.41)

which solves the problem. This is one of the applications of zero padding, which we will look at a little closer in Section 9.3.13. End of example.

The cyclic convolution can be seen as aliasing either in the time domain or the frequency domain, as it moves energy from one place to another in either domain (the opposite domain to where the multiplication took place). In Figure 9.12(a), we illustrate the result of Example 9.3.1. See the figure caption for an explanation.
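The complete example can be run as the following small MATLAB/Octave script, which compares the result of the conv command with the circular and zero padded FFT approaches (only standard commands are used):

% Verification of Example 9.3.1: circular vs. ordinary convolution
x1 = ones(1,4);
x2 = ones(1,4);
y_lin  = conv(x1,x2);                      % ordinary convolution: 1 2 3 4 3 2 1
y_circ = real(ifft(fft(x1).*fft(x2)));     % circular convolution:  4 4 4 4
y_pad  = real(ifft(fft(x1,8).*fft(x2,8))); % zero padded FFT:       1 2 3 4 3 2 1 0
% real() only removes numerical roundoff in the imaginary part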

9.3.13 Zero Padding

Zero padding is a technique where the DFT (or IDFT) is computed on a sequence that has been extended by zeros at the end. It is frequently used in various signal-processing tasks. We saw an example in Section 9.3.12, where it was used to produce an ordinary convolution instead of a circular convolution, when convolving two time domain signals by a frequency domain (DFT) multiplication. This is an important use of zero padding which we will use for estimating correlation functions in Section 10.4. It should be noted that zero padding does not add any information to the data, so the spectrum will not contain any information not already there before the zero padding.


Another use of zero padding is to compute spectra with a finer frequency increment than the data sequence admits, particularly for transients, as we discussed in Section 9.3.6. In this case, the zeros are added to the time sequence before the DFT. This procedure works correctly if the time sequence is transient within the original time window of length N, i.e., it has already died out before we add the extra zeros. The resulting spectrum will have a finer frequency increment than if only the length of the sequence had been used. This technique was used to produce the spectra of the time windows in, e.g., Figure 9.10, and is correct because the time windows are assumed to be zero outside the measurement. Zero padding is often also used in the frequency domain before computing an IDFT. In this case, the procedure produces interpolation in the time domain.

In early work on spectral estimation, several suggestions to use zero padding were published. Therefore, it is not uncommon to still find suggestions to use zero padding in spectral estimation procedures. This is, however, due to a misconception, as zero padding (in the time domain) only produces an interpolation in the spectrum between the spectrum values obtained without zero padding (Kay and Marple 1981). Higher frequency resolution can only be accomplished by increasing the actual measurement time, and zero padding in spectral estimation should be avoided.
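A minimal MATLAB/Octave sketch of the correct use (interpolating the spectrum of a transient that has died out within the measured block) could look as follows; the decay rate, frequency, and padding factor are arbitrary examples:

% Sketch: zero padding a transient to interpolate its spectrum
fs = 1000;                              % sampling frequency in Hz
N  = 256;                               % original blocksize
n  = (0:N-1)';
x  = exp(-n/20).*sin(2*pi*100*n/fs);    % transient that has died out well before n = N
X1 = fft(x);                            % frequency increment fs/N
X2 = fft(x,4*N);                        % zero padded: same information, increment fs/(4N)
f1 = (0:N-1)'*fs/N;
f2 = (0:4*N-1)'*fs/(4*N);
% plot(f1,abs(X1),'o',f2,abs(X2),'-') shows X2 interpolating between the X1 values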

9.3.14 Frequency Domain Processing

Many types of operations, such as filtering, integration and differentiation, and forced response computations, may be readily performed in the frequency domain with superior performance compared to time domain filtering, both in terms of speed and accuracy. The principle is illustrated in Figure 9.13. It is important to use correct procedures for the output of the processing to be accurate. A common mistake is to calculate the FFT without zero padding. Zero padding is necessary, however, to produce the correct convolution result, avoiding the circular convolution, as explained in Sections 9.3.12 and 9.3.13. The processing may be any operation that can be produced by a multiplication of the spectrum Xz(k), the FFT result of the zero padded signal xz(n). See Table 2.2 for examples of Fourier transform relationships. For example, the following operations may be performed in the frequency domain:

● Integration may be performed by dividing by jω.
● Differentiation may be performed by multiplying by jω.
● Filtering may be performed by multiplying the spectrum Xz(k) by any complex filter function, also with zero phase, if so desired.
● Removing harmonics may be performed with the FDE method, see Section 18.7.
● Forced response may be calculated by multiplying the spectrum with a frequency response, see Section 19.2.3.

x(n) → zero padding → xz(n) → FFT → Xz(k) → processing → Y(k) → IFFT → y(n)

Figure 9.13 Illustration of frequency domain processing. The signal to be modified, x(n), is transformed by an FFT after being zero padded. This ensures that the inverse FFT after the processing creates a true convolution, and not a circular convolution, see Section 9.3.12.


To implement the processing, it is easiest if the negative frequencies are removed from the spectrum, and the processing is done only on the positive frequencies. Before the inverse FFT computation, the negative frequencies are created using even real part and odd imaginary part as described in Chapter 2. In Brandt and Brincker (2014), it was shown that integration in the frequency domain is superior to the IIR filters described in Section 3.4.2 for signals longer than approximately 32000 samples. The result of the frequency domain processing, using a rectangular window for the FFT, corresponds to convolution of the “perfect” result with the Fourier transform of the rectangular window, as described earlier in this chapter. For reasonably long signals, of, say 30000 samples or longer, the Fourier transform of the rectangular window is very close to an ideal Dirac pulse so that the leakage is very small. For an example of frequency domain processing, see Example 19.2.1 in Section 19.2.3, where the principle is shown with MATLAB code. The error occurring when zero padding is not used when computing a forced response by frequency domain processing is also illustrated in Figure 19.3.
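As an illustration, the following MATLAB/Octave lines sketch frequency domain integration with zero padding. Instead of removing the negative frequencies, this sketch simply processes the full double-sided spectrum using signed frequencies, which gives an equivalent result; the signal and all parameters are arbitrary examples, and this is not the ABRAVIBE implementation.

% Sketch: integration by frequency domain processing (cf. Figure 9.13)
fs = 1000;  N = 2^15;
n  = (0:N-1)';
a  = sin(2*pi*10*n/fs);             % example "acceleration" signal
az = [a; zeros(N,1)];               % zero pad to avoid circular convolution
L  = length(az);
Az = fft(az);
f  = (0:L-1)'*fs/L;
f(f >= fs/2) = f(f >= fs/2) - fs;   % map the upper half to negative frequencies
om = 2*pi*f;
H  = zeros(L,1);
H(om ~= 0) = 1./(1i*om(om ~= 0));   % integration: divide by j*omega, DC set to zero
v  = real(ifft(Az.*H));
v  = v(1:N);                        % discard the zero padded part
% v approximates the integral of a (up to the integration constant and some edge effects)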

9.3.15 Zoom FFT

In many commercial systems for noise and vibration analysis, there is an option allowing the DFT (FFT) frequency bins to be concentrated to a limited frequency range fmin ≤ f ≤ fmax, often referred to as zoom FFT analysis. An alternative way of obtaining exactly the same frequency resolution is to use a large blocksize and a baseband measurement (i.e., with a spectrum from 0 to fs/2 Hz). With the large FFT blocksizes possible in modern computers and software, there is little use for zoom FFT in noise and vibration analysis (in the sense of post processing data). In some real-time applications, however, the zoom FFT still has its place.

The Fourier transform property 9 in Table 2.2 states that if a time signal x(t) has a Fourier transform X(f), then the time signal x(t)e^{j2πat} has the Fourier transform X(f − a). Conversely, by multiplying the measured time signal by the exponential term e^{−j2πat}, which is easily done digitally, the content at frequency f in the signal is translated down to frequency f − a. We will show the procedure for using this digital zoom capability with an example.

Example 9.3.2 Assume we have a signal sampled at fs = 10 kHz, which thus has a frequency range of 0 ≤ f ≤ 5 kHz. We wish to zoom in on the frequency range 1900 ≤ f ≤ 2100 Hz and use a 1024 sample FFT. We do this by the following steps:

1. Define the center frequency fc = 2000 Hz, in the middle of the requested frequency range 1900 ≤ f ≤ 2100 Hz, and apply a bandpass filter with this center frequency and bandwidth B = 200 Hz to the signal.
2. Multiply the entire time signal by e^{−j2πfm t}, where the modulation frequency fm = fc − B/2 = 2000 − 100 = 1900 Hz. Note that this produces a complex signal. This will shift the frequencies so that the requested range is now 0 ≤ f ≤ 200 Hz.
3. Extract every 25th sample (5000/200 = 25) from the real and imaginary parts of this complex signal, which may be done without aliasing since the bandwidth is only 200 Hz. This produces a new sampling frequency fs,new = 10000/25 = 400 Hz.


4. Put the real and imaginary parts of the decimated signal back together as a complex signal.
5. Apply the 1024 sample FFT to this complex signal. This produces a spectrum with Δf = fs,new/N = 400/1024 Hz.

It should be noted that zoom FFT processing does not violate the important relationship that the measurement time required for the time block is T = 1/Δf. This is a result of the fact that we decimate the signal in step 3, so we have to use 25 times more data (in the original signal) than the 1024 samples we are using in the FFT. Thus, the same result can be obtained by using a long, size L = 25 ⋅ 1024, FFT and then selecting the frequency lines corresponding to the frequency interval 1900 ≤ f ≤ 2100 Hz.
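The steps of the example may be sketched in MATLAB/Octave as follows. The bandpass filter is here designed with fir1, which requires the Signal Processing Toolbox in MATLAB or the signal package in Octave; the filter order and the test signal are arbitrary choices for illustration only.

% Sketch of the zoom FFT steps in Example 9.3.2
fs = 10000;  N = 1024;  D = 25;                % original rate, FFT size, decimation factor
t  = (0:D*N-1)'/fs;                            % D*N samples of the original signal are needed
x  = sin(2*pi*1950*t) + 0.5*sin(2*pi*2050*t);  % example signal with tones inside the band
b  = fir1(256,[1900 2100]/(fs/2));             % step 1: bandpass filter around fc = 2000 Hz
xb = filter(b,1,x);
fm = 1900;                                     % step 2: shift the band down to 0-200 Hz
xc = xb.*exp(-1i*2*pi*fm*t);                   % complex (demodulated) signal
xd = xc(1:D:end);                              % steps 3-4: keep every 25th complex sample
Xz = fft(xd(1:N));                             % step 5: 1024 point FFT, df = 400/1024 Hz
fz = fm + (0:N-1)'*(fs/D)/N;                   % frequency axis; only 1900-2100 Hz is usable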

9.4 Chapter Summary

We started this chapter with a discussion of nonparametric versus parametric methods for frequency (or spectral) analysis. The methods are called nonparametric if they do not require any a priori information about the model of the data (or form of the spectrum, if you like). Nonparametric methods are the most common methods for spectrum analysis of noise and vibration signals because of their reliability and generality. The most common type is FFT/DFT analysis; octave and fractional octave analysis also belong to the group of nonparametric frequency analysis methods.

The DFT and the fast algorithm to compute it, the fast Fourier transform, FFT, were then presented and discussed in some detail. The DFT, X(k), of a signal, x(n), is defined by

X(k) = ∑_{n=0}^{N−1} x(n) e^{−j2πkn/N},    (9.42)

for k = 0, 1, …, N − 1. The inverse DFT, IDFT, is similarly defined by

x(n) = (1/N) ∑_{k=0}^{N−1} X(k) e^{j2πnk/N},    (9.43)

for n = 0, 1, …, N − 1. The frequencies of the DFT, kΔf, are located at f = k/T, where T = NΔt is the measurement time. Thus, the frequency increment is Δf = 1/T. We noted that the DFT uses the orthogonality properties of sines and cosines to extract any frequency content from x(n) around each of the frequency lines, k. Each of these frequency lines comes from a multiplication of x(n) by a cosine (for the real part of X(k)), and a sine (for the imaginary part of X(k)) with k periods within the measurement time T = NΔt. We then noted that the DFT is periodic in both time and frequency, i.e., X(k + N) = X(k) and the inverse DFT gives us a signal for which x(n + N) = x(n). This can be interpreted such that the DFT gives us the spectrum of the periodic repetition of the measured signal x(n).


As for the continuous Fourier transform, the symmetry properties of the DFT result in spectra of real signals x(n) that have the properties

Re{X(−k)} = Re{X(k)},    (9.44)

and

Im{X(−k)} = −Im{X(k)},    (9.45)

i.e., the real part of X(k) is even, and the imaginary part is odd. This fact can be used to save some space by discarding the upper N/2 − 1 values of X(k), because these values can then be recreated using the symmetry equations. It is important to note, however, that the value X(N/2) (i.e., the 513th value for a blocksize of 1024, for example) is a real number which is not repeated in the symmetry.

Due to the truncation of continuous signals when computing the DFT, for such signals an error called leakage will affect the spectrum X(k). Leakage in spectrum estimates can be reduced by using a time window, and we presented the Hanning window as the favored standard window, and the flattop window for the particular situation when we want to measure the amplitude or RMS value of periodic components accurately. The time windows typically trade frequency resolution for amplitude accuracy; the wider the main lobe of the window, the better amplitude accuracy we get for a periodic component, but at the expense of a broadening of the spectrum peak. The leakage effects can be divided into three parts:

● a broadening of peaks due to the main lobe – an interpolation effect
● the additive effect of spectral content outside a frequency bin onto the DFT result at that frequency bin – true leakage
● a signal-to-noise deterioration caused (essentially) by the ratio of the measured (true) signal and the RMS of extraneous noise, both calculated inside the main lobe.

9.5 Problems

Many of the following problems are supported by the accompanying ABRAVIBE toolbox for MATLAB/Octave and further examples which can be downloaded with the toolbox. If you have not already done so, please read Section 1.6. ABRAVIBE is completely free and can be downloaded from www.abravibe.com, together with example files and other material complementing this book, including a solutions manual for all book problems.

Problem 9.1 Create a sampled time signal of the signal x(t) = 3 cos(2π20t) + 5 cos(2π40t) using the following parameters: sampling frequency fs = 512 Hz, number of samples N = 1024. Calculate a spectrum using the DFT with a Hanning window (use the ABRAVIBE toolbox ahann command) and scale it for correct amplitudes. Create a correct frequency axis and plot the spectrum, and make sure the cosines appear at the right frequencies.


Note: If you have done everything right, the signal should be periodic in the time window, and you should have no leakage. Rerun the example replacing the window with the default MATLAB/Octave hann window, which is not periodic, see Section 9.3.9. Observe the differences.

Problem 9.2 Calculate the RMS value of x(t) defined in Problem 9.1 analytically, using the orthogonality criteria of cosines. Then use the signal to calculate the RMS value of x(t) in the time domain and verify that they are the same.

Problem 9.3 Create a time signal of the signal x(t) = 3 cos(2πf1t) + 5 cos(2πf2t) using the following parameters: sampling frequency fs = 512 Hz, number of samples N = 1024. Investigate the results of a scaled spectrum of x(n) using a Hanning window, with different frequencies f1 and f2 which are exactly on frequency lines, and midway between frequency lines. How close can they be and still be clearly separated?

Problem 9.4 Repeat Problem 9.3 with the flattop window instead (use the ABRAVIBE toolbox command aflattop, or make a MATLAB/Octave function using the window defined in Section 9.3.9).

Problem 9.5 Use MATLAB/Octave to calculate the (regular) convolution between the following two sequences, with the command conv and by FFT, as explained in Section 9.3.12, and verify that you get the same result.
x1(n) = 1 1 1 1 1 1 1 1
x2(n) = 1 2 3 4 3 2 1 1

References

Antoni J and Schoukens J 2007 A comprehensive study of the bias and variance of frequency-response-function measurements: optimal window selection and overlapping strategies. Automatica 43(10), 1723–1736.
Bendat J and Piersol AG 2010 Random Data: Analysis and Measurement Procedures 4th edn. Wiley Interscience.
Brandt A and Brincker R 2014 Integrating time signals in frequency domain – comparison with time domain integration. Measurement 58, 511–519.
Cooley JW and Tukey JW 1965 An algorithm for machine calculation of complex Fourier series. Mathematics of Computation 19(90), 297–301.
Cooley JW and Tukey JW 1993 On the origin and publication of the FFT paper – a citation-classic commentary on an algorithm for the machine calculation of complex Fourier series. Current Contents/Engineering Technology & Applied Sciences (51–52), 8–9.
Cooley J, Lewis P and Welch P 1967 Historical notes on the fast Fourier transform. IEEE Transactions on Audio and Electroacoustics 15(2), 76–79.


Harris FJ 1978 On the use of windows for harmonic analysis with the discrete Fourier transform. Proceedings of the IEEE 66(1), 51–83.
Heidemann MT, Johnson DH and Burrus CS 1984 Gauss and the history of the fast Fourier transform. IEEE ASSP Magazine 34(3), 15–21.
ISO 18431-1 2005 Mechanical vibration and shock – signal processing – part 1: General introduction.
Kay SM and Marple SL 1981 Spectrum analysis – a modern perspective. Proceedings of the IEEE 69(11), 1380–1419.
Newland DE 2005 An Introduction to Random Vibrations, Spectral, and Wavelet Analysis 3rd edn. Dover Publications Inc.
Nuttall AH 1981 Some windows with very good sidelobe behavior. IEEE Transactions on Acoustics, Speech and Signal Processing 29(1), 84–91.
Oppenheim AV and Schafer RW 1975 Digital Signal Processing. Prentice Hall.
Oppenheim AV, Schafer RW and Buck JR 1999 Discrete-Time Signal Processing. Pearson Education.
Proakis JG and Manolakis DG 2006 Digital Signal Processing: Principles, Algorithms, and Applications 4th edn. Prentice Hall.
Reljin IS, Reljin BD and Papic VD 2007 Extremely flat-top windows for harmonic analysis. IEEE Transactions on Instrumentation and Measurement 56(3), 1025–1041.
Thrane N 1979 The discrete Fourier transform and FFT analyzers. Technical Report 1, Brüel & Kjær Technical Review No. 1.
Tran T, Claesson I and Dahl M 2004 Design and improvement of flattop windows with semi-infinite optimization. Proceedings of The 6th International Conference on Optimization: Techniques and Applications, Ballarat, Australia.


10 Spectrum and Correlation Estimates Using the DFT

In software for noise and vibration analysis, the fast Fourier transform (FFT)/discrete Fourier transform (DFT) is typically used to estimate spectra, as we mentioned in Chapter 9. Depending on the type of signal being analyzed, different types of spectra are recommended, as explained in Chapter 8. In this chapter, we will discuss how spectra of these different types of signals should be scaled and interpreted.

Much of the available literature on spectral analysis, or spectrum estimation, is rather theoretical, and it is often difficult to see, for example, how to obtain correctly scaled spectra. It is also difficult to find information on how to select a suitable spectrum estimator for a particular measured signal. In this chapter, we will therefore present details about scaling spectra as well as give detailed descriptions of how to select a proper spectrum type for different signals. The discussion of Welch's estimator for auto and cross-spectral density (CSD) in Section 10.3.2 is particularly thorough, as this method, despite the fact that it is virtually the only method available in commercial noise and vibration analysis software, is not described comprehensively in many other textbooks. In Section 10.7 at the end of the chapter, we also discuss some practical aspects of performing proper frequency analysis for some typical situations. General background theory on the topics of this chapter can be found in Bendat and Piersol (2000), Newland (2005), Wirsching et al. (1995), and Stoica and Moses (2005) and in the references at the end of the chapter.

10.1 Averaging Many measurement signals contain random noise, either because the signal is random, or because it is periodic or transient, but contains random contaminating noise. In such cases, several spectra are often averaged, frequency by frequency, to reduce the random error, 𝜀r , (see Section 4.2.2) of the spectrum estimate. This is often referred to as ensemble averaging or frequency domain averaging. Another form of averaging, time domain averaging, is sometimes used for certain types of signals. This type of averaging will be discussed in Section 11.3.4. An illustration of the segment-based processing used for frequency domain averaging in spectrum analysis software for noise and vibration analysis is shown in Figure 10.1. Noise and Vibration Analysis: Signal Analysis and Experimental Procedures, Second Edition. Anders Brandt. © 2023 John Wiley & Sons Ltd. Published 2023 by John Wiley & Sons Ltd.



Figure 10.1 Illustration of segment-based processing used for frequency domain averaging. The data are divided into a number of segments, possibly overlapping as in the figure, where 50% overlap is illustrated. Each segment of data is typically windowed, and then a DFT is performed on the windowed data segment. Finally, the magnitudes squared of the DFT results are averaged for each discrete frequency separately.

The entire time signal is divided into M segments which are independently used for a DFT calculation of each data segment. The squared magnitudes of the DFT results are then averaged, for each frequency value, k, over several subsequent spectra. In the case of deterministic (periodic or transient) signals, typically only a few, say, 3–10 averages are necessary, while several hundred averages may be necessary for signals of random nature, see Section 10.3.5. Perhaps it should be pointed out here that, although this is the typical implementation in current software for noise and vibration analysis, in Section 10.3.6 a different approach to spectrum estimation will be presented.

When averaging spectra, the squared magnitude values of the DFT have to be averaged because we want the averaged result to give a correct root mean square (RMS) value (see Section 10.2.1). Spectra of transient signals can be averaged if several transients can be captured in a repeatable fashion, for example, if the measurement is triggered by a level trigger. Overlap processing is sometimes used in the averaging process, as indicated in Figure 10.1, particularly when computing spectral densities of random signals. Each time sample is then used more than once, so that the final average will contain more averaged DFT results from the same time data than if no overlap of the segments was used. The reason this gives a better result is essentially that the time window used prior to the DFT calculation removes some information in the data at the ends, where the window approaches zero. The amount of overlap that should be used depends on the time window. With the Hanning window, 50% overlap is usually seen as optimal, see Section 10.3.5.

10.2 Spectrum Estimators for Periodic Signals

For periodic signals, theoretically, we use the Fourier series to describe the frequency contents. In practice, however, it is more common to scale spectra so that a peak in the spectrum is equivalent to the RMS value of a periodic component at that frequency. This is


due to the fact that the amplitude of a signal is only defined for a single sine. As soon as we have more than one frequency component in a signal, it is more relevant to look at the RMS value. In most software for noise and vibration analysis, there is a choice for scaling spectra of periodic signals either to RMS or amplitude (the latter usually called “peak” scaling), but it is strongly recommended to use RMS scaling. Due to the availability of different scaling, it is essential to document the scaling type used, as correct interpretation of the spectrum is otherwise impossible. This is usually done in the units of the plot, for example, by typing “Acceleration, [m/s2 RMS]” or similar. In the following discussion, we will assume RMS scaling implicitly. A periodic signal is deterministic, which means that, theoretically, a single DFT should give the correct spectrum. However, as measurement signals are almost always contaminated by extraneous noise from sensors and data acquisition equipment, or often from the vibration source itself producing a combination of periodic and random contributions, it is often necessary to average a few spectra to get stable spectrum values. As mentioned in Section 10.1, the averaging is always done on squared magnitude values of the DFT, to keep RMS scaling consistent. In Section 10.7.4, we will discuss how to treat signals which contain contributions of both random and deterministic signals, for example, where the main power in the signal is random, but where some periodic components are included.

10.2.1 The Autopower Spectrum

The autopower spectrum is a spectrum scaled to the square of the RMS value, the mean power or mean-square value, of the signal at each frequency. This spectrum thus has square units, that is, if we measure voltage, the units of the autopower spectrum become [V²]. We have already seen in Chapter 9 how to scale the DFT to yield correct amplitudes. Thus, we have a suitable estimator for a single-sided autopower spectrum, Axx, using M averages, by

Â_xx(k) = (SA/M) ∑_{m=1}^{M} |X̂_{w,m}(k)|²,    (10.1)

for k = 0, 1, …, N/2, for some constant SA to provide correct scaling. Note that we use the "hat" (^) to denote that it is an estimate of the true spectrum, as we discussed in Chapter 4. In Equation (10.1), X̂_{w,m}(k) is the windowed DFT of segment m, according to Equation (9.5), where the segmentation is done as was illustrated in Figure 10.1, either with or without overlap processing. It follows from the discussion in Section 9.3 that the entire process to obtain a DFT scaled to RMS for one segment is

Xw(k) = (Aw/(√2 N)) ∑_{n=0}^{N−1} x(n) w(n) e^{−j2πkn/N},    (10.2)

where Aw is the amplitude correction factor of the window, w(n), from Equation (9.23), N is the blocksize, and √2 scales to the RMS value. Xw is a double-sided spectrum. We recall that this scaling results in double-sided peaks of half the RMS value each for a periodic component. When we square this spectrum, we will obtain a quarter of the RMS value squared on each frequency bin, which equals half the RMS value squared in total over the two bins, instead of the full RMS value squared as


we wish. We thus need to put an additional factor 2 in SA to get half the RMS squared on each positive and negative frequency. Finally, we add a factor 2 to make the spectrum single-sided, for all k except the direct current (DC) bin. Thus, we find that the scaling constant SA for a correctly scaled autopower spectrum is

SA = { 2Aw²/N²,   k ≠ 0,
     { Aw²/N²,    k = 0,      (10.3)

because we always have to treat the DC frequency line separately. The autopower spectrum has some disadvantages; first of all, it is squared, which is not very intuitive for periodic components. We would rather have linearly scaled values of each periodic component for practical interpretation of periodic signals. Second, it is also very easy to confuse the autopower spectrum with the (auto) power spectral density (PSD) for random signals. It is therefore not recommended to use this spectrum, but rather to take the square root of it, which results in the linear spectrum, see for example ISO 18431-1 (2005).

10.2.2 Linear Spectrum

For reasons mentioned at the end of the last section, it is recommended to use the linear spectrum for periodic signals. With RMS scaling as described in Section 10.2.1, this spectrum is also referred to as the RMS spectrum. It is more intuitive to interpret a periodic signal by directly presenting the RMS value of each of its periodic components, rather than the square of those values. It should be especially noted that the averaging process when computing linear spectra is still done on the autopower spectrum, as in Equation (10.1), and the square root is taken after the averaging is finished. We denote the linear spectrum by XL(k) to stress the fact that it is similar to the DFT, but must still be distinguished from it. The linear spectrum, or RMS spectrum, estimator is thus defined by

X̂_L(k) = √(Â_xx(k)).    (10.4)

The procedure for computing linear spectra of periodic signals is as follows (a sketch of the steps in MATLAB/Octave is given below):

1. Remove the mean of the time signal x(n);
2. Divide the total signal into M segments, if averaging is wanted;
3. Select a window and calculate the windowed FFT, Xw,m = DFT[xm(n)w(n)], of length N, of each segment;
4. Calculate the magnitude squared of each Xw,m from step 3;
5. For each frequency, average the magnitude squared values over m = 1, 2, …, M;
6. Calculate the scaling factor SA by Equation (10.3) and multiply the average from the previous step by this scaling; and
7. Finally, compute the linear spectrum X̂_L(k) by taking the square root of the scaled autopower spectrum.

In addition to the steps in the list above, there are some practical aspects of estimating a relevant linear spectrum of a periodic signal. We will discuss this in Section 10.7.2.
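A minimal MATLAB/Octave sketch of these steps is given below; the window scaling follows Equations (9.23), (10.1), and (10.3), while the signal, blocksize, and number of averages are arbitrary examples (this is an illustration, not the ABRAVIBE implementation):

% Sketch: linear (RMS) spectrum of a periodic signal with noise
fs = 1024;  N = 1024;  M = 8;                % sampling frequency, blocksize, no. of averages
t  = (0:M*N-1)'/fs;
x  = 2*sin(2*pi*100*t) + 0.1*randn(M*N,1);   % periodic signal plus some noise
x  = x - mean(x);                            % step 1: remove the mean
n  = (0:N-1)';
w  = sin(pi*n/N).^2;                         % periodic Hanning window
Aw = N/sum(w);                               % amplitude correction, Eq. (9.23)
SA = 2*Aw^2/N^2;                             % scaling constant, Eq. (10.3), k ~= 0
A  = zeros(N/2+1,1);
for m = 1:M                                  % steps 2-5: segment, window, FFT, average |X|^2
    xm = x((m-1)*N+1:m*N);
    Xw = fft(xm.*w);
    A  = A + abs(Xw(1:N/2+1)).^2/M;
end
A(1) = A(1)/2;                               % the DC line uses half the scaling, Eq. (10.3)
XL   = sqrt(SA*A);                           % steps 6-7: scale and take the square root
f    = (0:N/2)'*fs/N;
% XL(101), i.e., the 100 Hz line, should be close to 2/sqrt(2), the RMS of the sine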


10.2.3 Phase Spectrum

The linear spectrum, XL(k), in Equation (10.4) has no phase information, since it is based on the absolute value (squared) of the DFT, and also because a single signal does not contain phase information, since phase is a relative concept. Sometimes, e.g., for computing operating deflection shapes, ODS, as discussed in Section 19.6, we nevertheless need the phase information relative to a reference channel. In such cases, a phase spectrum with phase relative to a reference channel (signal) can be used. The way to obtain an average phase relationship between two signals is to calculate the cross-power spectrum of a signal x(n) with respect to a reference signal v(n) and take the phase from this averaged estimate. The cross-power spectrum for a periodic signal, x(n), relative to a reference signal v(n) is calculated by

Â_xv(k) = (SA/M) ∑_{m=1}^{M} X_{w,m}(k) V*_{w,m}(k),    (10.5)

where the scaling constant SA is the same as in Equation (10.3), and V*_{w,m}(k) is the complex conjugate of the DFT of windowed segment m of the reference signal v(n), to which the phase is related. The phase φ̂_xv of this cross-power spectrum is then added to the linear spectrum XL to form a phase spectrum

X̂_P(k) = X̂_L(k) e^{jφ̂_xv(k)},    (10.6)

where

φ̂_xv(k) = ∠Â_xv(k),    (10.7)

and where ∠ denotes phase angle in radians. The absolute value of the phase spectrum is thus equal to the linear spectrum in Equation (10.4). You should note that the phase angle in Equation (10.7) comes from the mean value of the phase difference between the two signals. This implies in principle that the signal x is viewed as the output from a linear system, where v is the input, see Chapter 13. The phase spectrum is the recommended spectrum estimator for ODS when the signals are periodic, see Section 19.6.
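A minimal MATLAB/Octave sketch of the phase spectrum computation follows; the segmentation and scaling are the same as for the linear spectrum above, and the signals, the 45-degree phase lag, and the noise level are arbitrary examples (the special scaling of the DC line is omitted for brevity):

% Sketch: phase spectrum relative to a reference, Eqs. (10.5)-(10.7)
fs = 1024;  N = 1024;  M = 8;
t  = (0:M*N-1)'/fs;
v  = sin(2*pi*100*t);                              % reference signal
x  = 2*sin(2*pi*100*t - pi/4) + 0.1*randn(M*N,1);  % signal lagging the reference by 45 deg
n  = (0:N-1)';  w = sin(pi*n/N).^2;
Aw = N/sum(w);  SA = 2*Aw^2/N^2;
Axx = zeros(N/2+1,1);  Axv = zeros(N/2+1,1);
for m = 1:M
    idx = (m-1)*N+1:m*N;
    Xw  = fft(x(idx).*w);  Vw = fft(v(idx).*w);
    Xw  = Xw(1:N/2+1);     Vw = Vw(1:N/2+1);
    Axx = Axx + abs(Xw).^2/M;                      % autopower average, Eq. (10.1)
    Axv = Axv + Xw.*conj(Vw)/M;                    % cross-power average, Eq. (10.5)
end
XL  = sqrt(SA*Axx);                                % linear spectrum, Eq. (10.4)
phi = angle(Axv);                                  % phase relative to the reference, Eq. (10.7)
XP  = XL.*exp(1i*phi);                             % phase spectrum, Eq. (10.6)
% angle(XP(101)) should be close to -pi/4, the phase of x relative to v at 100 Hz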

10.3 Estimators for PSD and CSD

We are now going to describe several estimators for autospectral density and CSD for random signals. The history of estimators for PSD and CSD goes back to the days before the FFT algorithm. In those days, the most suitable way to obtain a PSD estimate was to first compute a correlation function, and then use the Wiener–Khinchin relations discussed in Section 8.3.1, because correlation functions could be computed reasonably well, whereas spectra (DFTs) were expensive due to the lack of the FFT algorithm. This method is called the Blackman–Tukey method (Blackman and Tukey 1958a,b), and it will be discussed briefly in Section 10.3.6, although we will use a more current method, namely the smoothed periodogram. The development of the FFT algorithm soon made the Blackman–Tukey method obsolete, as Welch (1967) published a method more suitable for use with low-memory


computers. The Welch method is the method usually implemented in commercial noise and vibration analysis software because it is particularly well suited for computer processing. We will however, later in this section, discuss some advantages of reviving the Blackman–Tukey method (or more precisely, the similar smoothed periodogram method).

10.3.1 The Periodogram

The basis for most FFT-based PSD estimation is the periodogram, P̂_xx(k), which is simply the magnitude squared of a (long) DFT of the entire time signal, x(n), scaled by the blocksize, i.e.,

P̂_xx(k) = (Δt/L) |∑_{n=0}^{L−1} x(n) e^{−j2πkn/N}|²,    (10.8)

if we assume the length of the entire signal x(n) is L samples and the time step is Δt = 1/fs. The periodogram as defined in Equation (10.8) is an estimate of the PSD, but a very poor one. It is, however, the building block of most PSD estimators. Similar to Equation (10.8), we can define a cross-periodogram as

P̂_yx(k) = (Δt/L) (∑_{n=0}^{L−1} y(n) e^{−j2πkn/N}) (∑_{n=0}^{L−1} x(n) e^{−j2πkn/N})*.    (10.9)

The periodogram can be shown (Bendat and Piersol 2000) to have a normalized random error of unity, i.e.,

εr[P̂_xx] = σ[P̂_xx − Pxx] / Pxx = 1,    (10.10)

which is independent of the length L, which is the main problem with the periodogram. It is therefore an inconsistent estimator. Also note that the random error is independent of k, which means the random error is equally large for all frequency lines, which will be useful later. It should be noted that the inappropriateness of the periodogram only applies to random signals x(n). For periodic signals, the periodogram is a very good estimator (with suitable scaling) of autopower spectrum, as shown in Figure 10.2, where two periodograms of lengths L1 = 512 and L2 = 4096 samples are shown for a pure random signal, and for a random signal plus a sine signal. In the figure it is seen that the two periodograms in Figure 10.2(a) and (b) of the random signal, with L1 = 512 and L2 = 4096 samples, respectively, do not stabilize at all between the shorter and longer time signal. Actually, the longer the blocksize, the wilder the behavior of the periodogram. For the periodic signal in Figure 10.2(c) and (d), however, which have the same data lengths, L1 = 512 and L2 = 4096, respectively, as the results in (a) and (b) in the same figure, we see that the periodic components stand out more from the background noise for the longer blocksize. For periodic signals, we do not actually calculate the periodogram, of course. Rather, we use a long blocksize, N, to calculate the linear spectrum described in Section 10.2.2. If plotted in logarithmic y-scale, such a spectrum often reveals the resonances of a structure.


Figure 10.2 Example of periodogram plots of a random signal in (a) with a length L1 = 512 samples, and in (b) with L2 = 4096 samples. As illustrated by the plots, the longer periodogram is not more stable than the short, which shows the periodogram is an inconsistent estimator. In (c) and (d), periodograms with the random signal plus a periodic signal (square wave) are shown, using the same lengths L1 and L2 , respectively. As can be seen from these two plots, the longer periodogram makes the periodic signals stand out more, which illustrates that the periodogram (or long DFTs) can be good for finding periodic components hidden in noise.

10.3.2 Welch's Method

The most common method for computing PSD and CSD is Welch's method (Welch 1967), which is based on an average of shorter, windowed periodograms. The signal x(n) is divided into M segments, each of length N, as was shown in Figure 10.1. Each segment is windowed before the DFT is calculated, and the modified periodogram (magnitude squared of the windowed DFT) is averaged for each frequency line. Usually, the time blocks are overlapped, which decreases the random error of the PSD estimate, as we will show in Section 10.3.5. Assuming we average M windowed DFT spectra, denoted Xw,m as before, the PSD using Welch's method, Ĝ^W_xx(k), is computed by

Ĝ^W_xx(k) = (SP/M) ∑_{m=1}^{M} X_{w,m} X*_{w,m} = (SP/M) ∑_{m=1}^{M} |X_{w,m}(k)|²,    k = 1, 2, …, N/2,    (10.11)


where the scaling constant SP is to be determined. We choose the scaling factor SP so that the area under the function equals the square of the RMS value of the time function, which can be seen as an alternative definition of the PSD instead of using the Wiener–Khinchin relation. X_{w,m}(k) in Equation (10.11) is the DFT of the windowed segment for average number m. Thus, if xm(n) is segment number m of the measured signal and w(n) is the time window used, usually the Hanning window, then

X_{w,m}(k) = DFT[xm(n)·w(n)] = ∑_{n=0}^{N−1} xm(n) w(n) e^{−j2πkn/N}.    (10.12)

For two different signals, an input signal x(n) and an output signal y(n), the discrete version of the CSD that we described in Section 8.3.1, Ĝ^W_yx, using Welch's method, is estimated in a similar fashion as

Ĝ^W_yx(k) = (SP/M) ∑_{m=1}^{M} Y_{w,m} X*_{w,m}.    (10.13)

Note that the complex conjugate in Equation (10.13) will in effect mean that the phase angle of X is subtracted from that of Y , which is natural if x is thought of as the input signal to a linear system, and y is the output.

10.3.3 Window Correction for Welch Estimates

We must now find the scaling factor SP in Equation (10.11) so that we can interpret the area under the PSD as the square of the RMS value of the signal x(n). The first correction is to scale the function by dividing by the frequency increment, Δf, to make the PSD a density function. However, this will not be sufficient, as can be directly seen in Figure 9.6. This figure shows that, provided we have used the scaling factor SA for the autospectrum as in Equation (10.1), the peak value in the autopower spectrum already corresponds to the mean-square value of the sinusoid (since we do not have leakage). Because of the window's main lobe width, we obtain a widening of the frequency peak, so that with the Hanning window we have two more nonzero spectral lines, besides the "correct" value at the center. Thus, integrating (summing) the values in Figure 9.6 will naturally yield too large a result. The factor we have to correct the spectrum estimate with is the normalized equivalent noise bandwidth, Ben, as defined by Equation (9.33), see for example Harris (1978) and Bendat and Piersol (2000). This effectively means that we have to divide the scaling factor SA for the autopower spectrum in Equation (10.3) by the equivalent noise bandwidth, Be = BenΔf, to produce the PSD scaling SP. We thus obtain the scale factor for Welch's PSD (or CSD, of course) estimate to be

SP = { 2Aw²/(N²Be),   k ≠ 0,
     { Aw²/(N²Be),    k = 0,      (10.14)

where the factor Aw is the amplitude correction factor from Equation (9.23). To calculate the single-sided PSD based on M averages, Equation (10.11) is therefore used with SP from Equation (10.14). Again, note that for k = 0, SP is different (half the value).


Also, note especially that SP in Equation (10.14) is the same as the scaling constant SA for the autopower spectrum in Equation (10.1), except for the division by the equivalent noise bandwidth, Be. Consequently, in many systems for noise and vibration analysis the conversion between different spectra is treated as a scaling chosen for the display, and not as different measurement functions, since the computational procedure for the autopower spectrum and the PSD is the same. The (single-sided) CSD between two signals x(n) and y(n), Gyx(k), where the signal x is regarded as the reference (or input) signal, is computed by Equation (10.13) using the scaling constant SP from Equation (10.14).

Now that we have discussed the estimators for spectral densities, a few words need to be said about the signal processing involved in PSD/CSD calculations. In early work on spectral estimation, ideas about zero padding and detrending of signals were discussed. These ideas still prevail in some texts, and we therefore need to investigate whether it is a good idea to use these features. Zero padding was discussed in Section 9.3.13, where it was concluded that it should not be used for spectral estimation because it produces misleading results (i.e., it lures you into believing there is a frequency resolution that is not really there). Detrending means removing a possible trend in the data (by linear regression analysis), and the idea of using it comes from knowledge about problems that can occur around DC (sensor drift, etc.). In effect, detrending is a highpass filtering operation. In some literature on spectrum estimation, it is popular to recommend detrending each segment of data prior to windowing. Applying such detrending to each segment in Welch's method is, at best, dubious. If the data contain an unwanted low-frequency drift, it is better to first apply a highpass filter to the entire signal, and thus remove the low-frequency content by the methods described in Section 3.3.2, prior to calculating the spectral density. As mentioned before, it is a good idea to remove the mean of the signal before processing, but it is important to remove the mean of the entire signal, and not to process the segmented data individually. Not removing the mean means that leakage of the DC (zero frequency) component will contaminate the spectrum at low frequencies.
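To make the scaling concrete, a minimal MATLAB/Octave sketch of a single-sided Welch PSD estimator following Equations (10.11) and (10.14) could look as follows. The function and variable names are only illustrative, a Hanning window with 50% overlap is assumed, and the amplitude correction and equivalent noise bandwidth are computed directly from the window (assumed forms of Equations (9.23) and (9.33)); an implementation in a commercial analysis system may of course differ.

function [Gxx, f] = welch_psd_sketch(x, fs, N)
    x  = x(:) - mean(x);                     % remove the mean of the ENTIRE signal first
    w  = 0.5*(1 - cos(2*pi*(0:N-1)'/N));     % Hanning window
    Aw = N/sum(w);                           % amplitude correction factor (assumed, cf. Eq. (9.23))
    Be = fs*sum(w.^2)/sum(w)^2;              % equivalent noise bandwidth Be = Ben*df (assumed, cf. Eq. (9.33))
    D  = N/2;                                % 50% overlap step
    M  = floor((length(x) - N)/D) + 1;       % number of (overlapped) averages
    S  = zeros(N,1);
    for m = 1:M
        seg = x((m-1)*D + (1:N)).*w;         % windowed segment
        X   = fft(seg);
        S   = S + abs(X).^2;                 % accumulate |Xw,m(k)|^2
    end
    SP  = 2*Aw^2/(N^2*Be);                   % scaling factor, Eq. (10.14), k ~= 0
    Gxx = SP*S(1:N/2+1)/M;                   % single-sided PSD, Eq. (10.11)
    Gxx(1) = Gxx(1)/2;                       % k = 0 gets half the scaling, Eq. (10.14)
    f   = (0:N/2)'*fs/N;
end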

10.3.4 Bias Error in Welch Estimates

When computing PSDs experimentally, in effect we approximate a continuous function with discrete ‘bars,’ as illustrated in Figure 10.3. This approximation leads to a bias error, which decreases with decreasing frequency increment, Δf .

Figure 10.3 Illustration of the bias error in estimating PSD by approximating the continuous function with bars of constant width, each bar (frequency line) having the same area (mean-square of the signal) as the area under the continuous PSD within the same frequency interval.


A more rigorous understanding of the bias error can be found by noting that the multiplication of the measured signal by the time window (or, if no explicit time window is used, the rectangular window) corresponds to convolution in the frequency domain. A complication compared with the convolution discussed in Section 9.3.9 for the DFT is that Equation (10.11) involves the square of the window spectrum and not the window spectrum itself. It can be shown relatively easily, however, that the double-sided PSD estimate, $\hat{S}_{xx}(f)$, can be written as (Schmidt 1985b)
$$\hat{S}_{xx}(f) = \int_{-f_s/2}^{f_s/2} S_{xx}(u)\,|W(f-u)|^2\,du, \qquad (10.15)$$

i.e., as the convolution of the true PSD and the magnitude squared of the window Fourier transform. The bias error resulting from the convolution in Equation (10.15) will, of course, give a broadening of a peak in the PSD corresponding to a resonance. In Figure 10.4, an example of such a PSD is shown, zoomed in around the resonance of a single degree-of-freedom (SDOF) system. It can be seen that the bias error is negative at the resonance frequency and positive on both sides of it. This distortion of the shape of the PSD can cause serious errors if we try to estimate the damping of the system by using the resonance bandwidth as defined by Equation (5.39).

Figure 10.4 Autospectral density of a simulated vibration signal from an SDOF system with undamped natural frequency, fr = 10 Hz, and relative damping, ζ = 0.05, excited by bandlimited white noise. For the PSD calculation, Welch's method was used with N = 256 samples, fs = 128 Hz, a Hanning window and 500 averages with 50% overlap (solid). Overlaid is the true PSD (dashed) evaluated at the same discrete frequencies as the experimental PSD. The bias is clearly seen as an underestimation at fr, and an overestimation at frequencies a little further away from fr.


By noting that the autospectral density is the Fourier transform of the autocorrelation, an equation analogous to Equation (10.15) can also be formulated in the time domain for the autocorrelation, by
$$\hat{R}_{xx}(\tau) = \rho_{ww}(\tau)\, R_{xx}(\tau), \qquad (10.16)$$
where $\rho_{ww}(\tau)$ is the "autocorrelation" of the time window, given by the convolution of the window with itself time reversed,
$$\rho_{ww}(\tau) = w(\tau) * w(-\tau). \qquad (10.17)$$

Equation (10.15) can be used to calculate the bias of the PSD if we know the true PSD and the spectrum of the time window. The normalized bias error of the PSD estimate is then given by
$$\varepsilon_b[\hat{S}_{xx}] = \frac{E\left[\hat{S}_{xx}\right] - S_{xx}}{S_{xx}}, \qquad (10.18)$$
as defined in Section 4.2.2. For noise and vibration analysis, we are particularly interested in this bias error in the vicinity of resonances, as discussed in Chapters 5 and 6, since we can expect the convolution in Equation (10.15) to yield the worst error at frequencies where there is a large change in $\hat{S}_{xx}$. The bias error will be equal for single-sided PSDs, so we will now state the errors for such estimates because we use single-sided estimates in practice. To understand what affects the bias error, Equation (10.15) can be expanded in a Taylor series. A commonly cited bias error was developed, for a rectangular time window, by Bendat and Piersol (2010) in their early work (we reference the latest edition here), as
$$\varepsilon_b \approx \frac{B_e^2}{24}\,\frac{G''_{xx}}{G_{xx}}, \qquad (10.19)$$

where $G''_{xx}$ is the second derivative of $G_{xx}$ with respect to frequency, and $B_e = 1$ for the rectangular window. This was later shown to be a rather rough approximation compared with a better one given by Schmidt (1985a,b), which is cited in later work (e.g., Bendat and Piersol (2010)), for the particular case of the Hanning window, given by
$$\varepsilon_b \approx \frac{(\Delta f)^2}{6}\,\frac{G''_{xx}}{G_{xx}} + \frac{(\Delta f)^4}{72}\,\frac{G^{(4)}_{xx}}{G_{xx}}, \qquad (10.20)$$

where $G^{(4)}_{xx}$ is the fourth derivative of $G_{xx}$ with respect to frequency. A comparison of the equations shows that the error in Equation (10.19) becomes the first term in the error of Equation (10.20) if we set $B_e = 2\Delta f$ in Equation (10.19). For an output signal of a mechanical system with a resonance, we know from Equation (5.39), which we repeat here for convenience, that the resonance bandwidth, $B_r$, is related to the undamped natural frequency, $f_r$, and the relative damping of the system, $\zeta_r$, by
$$B_r = 2 f_r \zeta_r. \qquad (10.21)$$



Figure 10.5 Normalized bias error of a PSD calculated using a Hanning window on the output of an SDOF system with fr = 10 Hz, and relative damping 𝜁r = 0.05 with a frequency increment of Δf = 1∕8 Hz. The resonance bandwidth is Br = 1 Hz, which means there are eight frequency lines within Br . The bias error plotted with solid line is the result of using the theoretical formula for the convolution in Equation (10.15). The bias error calculated by Equation (10.20) with only the first term is plotted with dashed line. This is equal to the definition by Equation (10.19) using Be = 2Δf . In the dotted line, the bias error using both terms in Equation (10.20) is shown. It can be seen in the figure that using only one term from Equation (10.20) overestimates the error slightly. This error is larger the smaller the ratio Br ∕Δf is. It can be concluded that Equation (10.20) with both terms gives an accurate estimate of the bias error.

We can anticipate that the second derivative of the PSD will be dependent on the damping, and thus we can relate the frequency increment Δf to the resonance bandwidth, Br. To investigate the bias error approximation in Equation (10.20), we can compute the "true" normalized bias error using Equation (10.15) and compare it with the approximation, using a Hanning window and an example SDOF system. In Figure 10.5, the result of a simulation using the first and both terms of Equation (10.20), together with the "true" bias error, is shown. It can be seen that the approximate bias error is good provided the second term in Equation (10.20) is included, and relatively good for practical purposes even using only the first term. It can also be seen that using only one term overestimates the error. It can be shown that the second term vanishes when the ratio Br/Δf becomes large, i.e., when the frequency increment is small.

It is interesting to make a simulation on a known mechanical system using a PSD estimated by Welch's method. The result of such a simulation, using the method described in Section 19.2.3 to produce output noise of a system with known properties, will now be presented. The normalized bias error around the resonance according to Equation (10.15) is plotted in Figure 10.6 for an example where the frequency increment is Δf = Br/5, for the same SDOF system used to produce Figure 10.5. To calculate the normalized bias error in the simulation, a fine-resolution PSD was first computed with 100,000 averages to yield the "true" PSD. Then a PSD with the requested frequency increment Δf = Br/5 was computed with the same number of averages to make the random error negligible (see Section 10.3.5). The error according to Equation (10.18) was then computed from the two estimated PSDs. From the figure, it is clear that the bias approximation in Equation (10.20)


Figure 10.6 Theoretical bias error according to Smith (solid line), and bias error calculated from a simulation, for a PSD using Hanning window and Welch's method (dashed line), from simulated data of an SDOF system with an FFT frequency increment Δf = Br/5. See text for details. In the figure it is shown that the normalized bias error obtained from the simulation agrees very well with the theoretical expression of the bias error given by Equation (10.20). The variations in the bias obtained from the simulation are due to the random error in the estimate of Gxx.

is a good approximation. It should be stressed, however, that this error expression is only valid for the Hanning window, but as this window is the recommended window for PSD estimation, this is not a major drawback.

From the previous discussion in this section, and the results in Figure 10.5, it is seen that the bias error is largest at the resonance frequency. We have also established that the bias error is related to the ratio Br/Δf, which tells how many frequency lines reside inside the resonance bandwidth. In Figure 10.7, the maximum bias error (without sign, since it is always negative) is plotted as a function of the ratio Br/Δf. The most important result from this figure is that we need a very small Δf to yield negligible bias errors. For 1% normalized bias error, as many as 12 frequency lines must be within the resonance bandwidth Br.

Example 10.3.1 To illustrate the implications of the discussion of bias errors in this section, we will look at the frequency increment necessary to yield a bias error of less than 1% on a system (e.g., a tall building) with an undamped resonance frequency of fr = 0.5 Hz, and a relative damping of ζr = 1%. Equation (10.21) yields a resonance bandwidth Br of
$$B_r = 2 \cdot 0.5 \cdot 0.01 = 0.01 \text{ Hz}, \qquad (10.22)$$

which, combined with the results of Figure 10.7, which show that Δf < Br/12 is needed for a maximum bias error of 1%, gives us Δf < 0.01/12 ≈ 8.3 · 10⁻⁴ Hz, i.e., each FFT block must be at least T = 1/Δf ≈ 1200 seconds long.
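The arithmetic of this example is easily verified, for instance with a few lines of MATLAB/Octave (the Δf < Br/12 rule is the one read from Figure 10.7 above):

fr = 0.5; zeta = 0.01;
Br = 2*fr*zeta                 % resonance bandwidth, Eq. (10.21): 0.01 Hz
df = Br/12                     % frequency increment for at most 1% bias: approx. 8.3e-4 Hz
T  = 1/df                      % corresponding blocksize time: approx. 1200 s per average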
10.3.5 Random Error in Welch Estimates

If we denote the equivalent number of averages by Me and the number of independent (nonoverlapped) segments by Md, then the relative increase is
$$\frac{M_e}{M_d} = \frac{M/M_d}{1 + 2\displaystyle\sum_{q=1}^{M-1}\frac{M-q}{M}\,\rho(q)}, \qquad (10.32)$$

where 𝜌(q) is defined by Equation (10.29). This relative increase of the number of averages is independent of the blocksize, but is dependent on the time window and overlap factor


Figure 10.8 Illustration of the increase in the equivalent number of averages as a function of overlap percentage when computing a PSD from a particular data set using Welch's method. The plot assumes that all data are used, which means that the number of FFTs performed increases with increasing overlap. The equivalent number of averages Me is the number giving the random error as εr = 1/√Me. For 50% overlap and a Hanning window, the relative increase is 1.89, instead of 1.99 if all blocks had been statistically independent (because 100 independent segments lead to 199 overlapped segments with 50% overlap).

used. In Figure 10.8, the relative increase is plotted for some common time windows as a function of overlap percentage. Note especially that there is an increase in the equivalent number of averages even when using a uniform (rectangular) window.

Example 10.3.2 Let us illustrate the computation of the normalized random error by an example. Assume that we have a data set with 102,400 samples. We select a blocksize of 1024 samples, which gives us 100 nonoverlapped segments. We use a Hanning window and 50% overlap to compute a PSD. What will the random error be? First, we calculate the correlation coefficient ρ(q) for a Hanning window with 50% overlap according to Equation (10.29). We will have M = 199 overlapped segments, since the 200th segment would have half its block outside the data length. In our example, then, D = 512, and the correlation coefficient for q = 1 becomes ρ(1) ≈ 0.02778. Furthermore, ρ(q) = 0 for q > 1, since already a shift of 2D will have shifted the window outside itself. We then use Equation (10.28) to sum up the normalized mean-square error and find that
$$\varepsilon_r^2 = \frac{1}{199}\left(1 + 2 \cdot 0.02778\right) = \frac{1.05556}{199} \approx 0.00530, \qquad (10.33)$$
and the normalized random error is thus
$$\varepsilon_r = \sqrt{0.00530} \approx 0.073. \qquad (10.34)$$

Alternatively, using the results of Figure 10.8, we could have observed that the relative increase in the number of averages is approximately 1.89. This means that the equivalent


number of averages we should put into Equation (10.31) is 1.89 times 100, i.e., 189, which gives us a normalized random error of approximately εr = 1/√Me = 1/√189 ≈ 7.3%, the same result as above. Compare this with the normalized random error of 7.1% (1/√200) we would get if we used 200 averages and no overlap, which would require twice the amount of data. This shows that using 50% overlap with a Hanning window produces almost entirely uncorrelated FFT blocks. End of example.

From Figure 10.8 and Equation (10.31) it follows that, when we use overlap processing with increasing overlap percentage, we at first get a substantial reduction in the variance, while above a particular overlap percentage the additional gain is very small. For the Hanning window, the limit is 62.5% (Nuttall and Carter 1982), above which there will be no additional reduction in the variance. It is also clear from the figure that the increase from 50% to 62.5% overlap only gives approximately 8% less variance and thus approximately 4% less random error. At the same time, the number of FFTs increases from approximately 2 to 2.7 times the number of independent segments. Therefore, a 50% overlap is usually considered optimal when using the Hanning window. The increase in the equivalent number of averages is then approximately 1.89. Thus, as the Hanning window and 50% overlap are recommended for PSD computations, it makes sense to plot the normalized random error for this combination as a function of the number of averages (the total number of FFTs, the number usually put into the FFT analysis software). This result is plotted in Figure 10.9 and provides a convenient way to find out the normalized random error when computing PSDs using Welch's method.
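The numbers in Example 10.3.2 are easily reproduced, for example in MATLAB/Octave (the value of ρ(1) is the one quoted in the example; all names are illustrative):

M   = 199;                 % overlapped segments from 102,400 samples with N = 1024
rho = 0.02778;             % correlation between adjacent 50% overlapped Hanning blocks
er2 = (1 + 2*rho)/M;       % normalized mean-square error, cf. Eq. (10.33)
er  = sqrt(er2)            % approx. 0.073, i.e. 7.3% normalized random error
Me  = 1/er2                % equivalent number of averages, approx. 189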


Figure 10.9 The normalized random error for a PSD estimate using Welch's method, with 50% overlap and Hanning window, as a function of the number of (overlapping) FFT blocks in the averaging process. This number is what is normally entered into the FFT analysis software as "number of averages."



Figure 10.10 Plot of experimentally obtained normalized random errors (lines) and theoretical values (rings) according to Welch's formula. Bandlimited white Gaussian noise was passed through an SDOF system with relative damping of 1%. A PSD was computed using the output noise with different amounts of overlap, and for each case, the normalized random error was computed. The figure shows a very good agreement between theory and experiment.

As we mentioned above, Welch assumed the noise to be Gaussian and flat between ±Δf/2 in the derivation of the expression for the correlation between overlapped segments. This is a reasonable assumption in all practical cases, regardless of the actual statistical properties of the noise, since the noise in narrow spectral bands will tend to be Gaussian. To show that this assumption holds in real cases, Figure 10.10 presents results from simulations of random errors using random noise from a simulated SDOF system for three different time windows (Hanning, Bartlett, and rectangular). As can be seen in the figure, the simulation results agree very well with Welch's predictions. Simulations using uniformly distributed noise (which is far from Gaussian) give similar results. We can therefore use the random error as presented above with some confidence on real data.

For cross-spectral densities, the random error is more complicated because it depends not only on the number of averages and overlap percentage (and thus the correlation between the averages) but also on the coherence, $\gamma_{yx}^2(f)$ (see Section 13.4), which is a measure of how much of the output y is linearly dependent on the input x. The random error of the magnitude of the CSD can be shown to be (Bendat and Piersol 2000)
$$\varepsilon_r[\hat{G}_{yx}] \approx \frac{\varepsilon_r[\hat{G}_{xx}]}{\sqrt{\gamma_{yx}^2}}. \qquad (10.35)$$

This equation yields the result that when the two signals x and y are completely linearly dependent, and the coherence thus equals unity, the random error of the CSD equals that of the autospectral density.
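As a small numerical illustration (the coherence value is an assumption, chosen only for the example), Equation (10.35) can be evaluated in MATLAB/Octave as:

er_Gxx = 1/sqrt(189);          % autospectrum random error, e.g. from the overlap example above
gamma2 = 0.8;                  % assumed coherence at the frequency of interest
er_Gyx = er_Gxx/sqrt(gamma2)   % random error of |Gyx|, Eq. (10.35)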


10.3.6 The Smoothed Periodogram Estimator

The "original" method for computer estimation of PSDs was, as mentioned in the introduction to this section, a method attributed to Blackman and Tukey (1958a,b), which was superseded by Welch's method shortly after the publication of the FFT algorithm. The Blackman–Tukey method was originally formulated by taking the Fourier transform (DFT) of a windowed estimate of the correlation function (auto or cross). It had, however, been known since Daniell (1946) that PSDs can also be estimated by smoothing the periodogram in the frequency domain. With the increase in computer power and memory, the method of smoothing the periodogram has become more appealing, and it offers some advantages that we will present below, as an alternative to the popular Welch's method. The method is discussed in, for example, Bendat and Piersol (2010), Cooley et al. (1970), and Otnes and Enochson (1972).

The idea behind the smoothed periodogram is similar to the idea behind Welch's method, i.e., to reduce the variance (random error) of the PSD estimate by averaging. In the smoothed periodogram method, however, the frequency lines around the frequency line k to be calculated are used for the averaging. This follows from the result of Equation (10.15): we now calculate a periodogram using all data, which produces an essentially unbiased (low-bias) estimate of the PSD. The convolution in Equation (10.15) is then applied to the periodogram, where we notice that convolving the periodogram with a smoothing window (short with respect to the total data length L) is the same, at a particular frequency line k, as taking a weighted average of the surrounding frequency lines. It is practical to use a smoothing window ws with an odd length so that it can be centered on the frequency line k we wish to calculate. The single-sided smoothed periodogram estimator for an autospectral density with a rectangular smoothing window of length Ls is thus
$$\hat{S}^{SP}_{xx}(k) = \frac{2}{L_s}\sum_{l = kL_k - (L_s-1)/2}^{kL_k + (L_s-1)/2} \hat{P}_{xx}(l), \qquad k = 1, 2, \ldots, \mathrm{floor}\left[L/(2L_k)\right], \qquad (10.36)$$

where $\hat{P}_{xx}$ is defined by Equation (10.8), the factor 2 is for single-sided conversion, $L_s$ is the odd smoothing length, and the lines k (now corresponding to actual frequencies $kL_k f_s/L$) are selected at suitable frequencies, see below. The spacing of frequencies in number of frequency lines, $L_k \geq (L_s - 1)/2 + 1$, may be selected to obtain appropriate frequency resolution; for example, it may be chosen to yield a frequency spacing similar to the frequencies $f_s/N$ that Welch's method would result in. Similarly, the CSD can be estimated by
$$\hat{S}^{SP}_{yx}(k) = \frac{1}{L_s}\sum_{l = kL_k - (L_s-1)/2}^{kL_k + (L_s-1)/2} \hat{P}_{yx}(l), \qquad (10.37)$$

where P̂ yx is defined by Equation (10.9). We could, of course, include an arbitrary smoothing window in the summation in Equations (10.36) and (10.37), instead of the rectangular window. Although this could reduce the bias in the estimator, it would also increase the random error. Some work has been done to try to find optimum smoothing operations, e.g., Hannig and Lee (2004) and Stoica and Sundin (1999), but those methods normally lead to complicated algorithms for little improvement in the general case. For noise and vibration analysis, the rectangular


smoothing window is therefore recommended, and the experimental method described in Section 10.7.3 is recommended for minimizing the bias error of the estimate.

It should be mentioned that a particular advantage of the smoothed periodogram method is that adjusting the smoothing window length and recomputing the spectral density is an operation with little computational cost, unlike Welch's method, where adjusting the blocksize requires a complete recomputation of all FFTs and averages. The smoothing window length can be considered almost equivalent to the number of averages in Welch's method, with the difference that the average in the smoothed periodogram (with our restriction) is a straight average, since we are using a rectangular smoothing window. An easy implementation of the smoothed periodogram, which becomes almost equivalent to Welch's estimator, is to choose a "blocksize," N, as the number of frequency lines, equidistantly placed on the frequency axis. The smoothing window length then becomes the next lowest integer to Ls = L/N. To make the smoothing symmetric, it is also a good idea to make Ls an odd number, centered on the frequency bin of the periodogram we want to calculate. The frequency lines k of the estimated PSD or CSD can be chosen arbitrarily, for example, at the frequencies fs/N where Welch's estimator would put them.

A further advantage of the smoothed periodogram method is that it is possible to choose the frequencies of the smoothed PSD with a logarithmic spacing, and simultaneously increase the smoothing window length exponentially as a function of frequency (so that the ratio of the smoothing window length to the frequency of the particular frequency line is constant). This is a relevant choice in many cases in vibration analysis, because we can expect the resonances to be broader (in Hz) at higher frequencies than at lower frequencies, according to Equation (10.21). Using a logarithmic frequency spacing allows for a longer smoothing window length at high frequencies, which gives a smaller random error where we do not need the fine frequency resolution. An example of this will be given in Section 10.7.3.

In principle, Welch's method and the smoothed periodogram method of PSD/CSD estimation are versions of the same method. They asymptotically give the same result on stationary random signals. There is, however, a practical difference between the two methods which can be of great importance in the case of unwanted harmonics in random signals. As we showed in Figure 10.2, when harmonics are present in a random signal, the periodogram shows very sharp peaks at the harmonics. An efficient way to remove these harmonics is thus to edit the periodogram by removing a few frequency lines around each harmonic, prior to applying the smoothing. The result is an accurate PSD with the harmonics removed. This method, referred to as frequency domain editing, FDE, is described in more detail in Section 18.7.1.
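A minimal MATLAB/Octave sketch of such a smoothed periodogram PSD estimator, following Equation (10.36) with a rectangular smoothing window, could look as follows. The periodogram is here scaled as a double-sided PSD (an assumed form of Equation (10.8)), and all function and variable names are illustrative.

function [Gxx, f] = smoothed_periodogram_sketch(x, fs, Ls)
    x    = x(:) - mean(x);               % remove the mean of the entire signal
    L    = length(x);
    X    = fft(x);
    Pxx  = (abs(X).^2)/(L*fs);           % periodogram scaled as double-sided PSD (assumed Eq. (10.8))
    half = (Ls-1)/2;                     % Ls must be odd
    Lk   = half + 1;                     % frequency-line spacing of the output
    kmax = floor(L/(2*Lk));
    Gxx  = zeros(kmax,1); f = zeros(kmax,1);
    for k = 1:kmax
        idx    = k*Lk + (-half:half);    % lines centered on k*Lk, Eq. (10.36)
        Gxx(k) = 2*mean(Pxx(idx+1));     % straight average; factor 2 for single-sided
        f(k)   = k*Lk*fs/L;
    end
end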

10.3.7 Bias Error in Smoothed Periodogram Estimates

The bias error in estimates using the smoothed periodogram estimator is easy to find by Equation (10.15), using a boxcar weighting function W(f ) and convolving the true PSD with |W(f )|2 . As before, we are particularly interested in this bias error when estimating PSDs for vibration signals on resonant structures. The bias error then is similar to the bias error of Welch’s estimator, and an example is shown in Figure 10.11.



Figure 10.11 Estimated normalized bias error of a smoothed periodogram estimate of PSD, around the natural frequency of an SDOF system. A rectangular smoothing filter was used for the PSD estimation. The ratio Br/Δf = 5. The bias error at the resonance frequency is similar to the result for Welch's estimate in Figure 10.7, which is consistent with the fact that both methods lead to a convolution in the frequency domain, although the convolution spectrum is (slightly) different.

10.3.8 Random Error in Smoothed Periodogram Estimates

Each spectral line in the periodogram is an approximately independent variable in a statistical sense, at least if the total number of samples in the DFT is much greater than the smoothing window length, L ≫ Ls. Therefore, the normalized random error can be calculated as
$$\varepsilon_r[\hat{G}^{SP}_{xx}] \approx \frac{B_{en}}{\sqrt{L_s}}, \qquad (10.38)$$
where Ls is the smoothing window length and Ben is the normalized equivalent noise bandwidth of the smoothing window from Equation (9.33), which is unity for the rectangular window suggested for the smoothed periodogram estimator in Section 10.3.6.

10.4 Estimators for Correlation Functions

When we want to estimate the autocorrelation of a signal, or the cross-correlation between two signals, random or periodic, we want to obtain an unbiased and consistent estimator, i.e., an estimator which asymptotically approaches the true correlation function when enough data are used, and which has a variance that decreases as we increase the number of averages. Two such estimators will be presented here. As we discussed in Section 4.2.12, the autocorrelation function, Rxx(τ), is a special case of the cross-correlation function, Ryx(τ), when the two signals x and y are the same. We will therefore treat only the cross-correlation here.


Correlation functions can be estimated directly in the time domain by using the definition in Equation (4.32) and replacing the expected value operation by a mean calculation,
$$R^{b}_{yx}(\tau) = \frac{1}{T}\int_{-T/2}^{T/2} y(t)\,x(t-\tau)\,dt, \qquad (10.39)$$

where τ is the time lag, and T is the length of the signals x(t) and y(t). It is easily realized that, for finite measured signals of length T, there will not be T seconds of data except for lag zero, τ = 0. For the other lags, the shift of the sequence x(t − τ) means there are only T − τ seconds of data for positive lags, or T + τ seconds for negative lags. Thus, dividing by T in Equation (10.39) means that the estimator is biased, i.e., it does not approach the true correlation as the length of data, T, increases. To obtain an unbiased estimator, we need to divide by the actual amount of data used in the mean calculation, which gives us the estimator
$$\hat{R}_{yx}(\tau) = \frac{1}{T - |\tau|}\int_{-T/2}^{T/2} y(t)\,x(t-\tau)\,dt, \qquad (10.40)$$

which is the desired, unbiased estimator of cross-correlation functions. Evaluating this equation directly is computationally expensive if we want to compute the correlation for many lags, as we have to multiply and integrate the two entire sequences for each lag τ (i.e., the values of the signals that overlap after shifting x(t)). If only a few hundred time lags are desired, this direct integral may be computed relatively quickly. This is often the case for OMA, for example, see Chapter 17. If we observe that the integral part of Equation (10.40) (i.e., the equation without the division by T − |τ|) is the convolution of y(τ) and x(−τ) (see Problem 10.7), we realize that it can be computed by multiplying the Fourier transforms of the signals in the frequency domain and inverse transforming the product. This leads to two very efficient computational methods that we will describe in the next two sections.

10.4.1 Correlation Estimator by Long FFT

The first method is to compute the convolution of y(τ) and x(−τ) by multiplication of the DFTs of the entire signals in the frequency domain, i.e., we make two long FFT calculations, which in modern computers can be made for several million samples in a few milliseconds. According to Line 7 in Table 2.2, the Fourier transform of x(−t) is the complex conjugate of the Fourier transform of x(t), i.e., X*(f), and thus we can obtain the convolution part of the estimate of the correlation, for discrete signals x(n) and y(n) of length N, by
$$\hat{C}_{yx}(m) = \frac{1}{2N}\sum_{k=0}^{2N-1} X_z^*(k)\,Y_z(k)\,e^{j2\pi km/2N}, \qquad m = 0, 1, 2, \ldots, 2N-1, \qquad (10.41)$$

where Xz (k) and Yz (k) are the DFT results using zero padding by N values, to obtain the true convolution, as explained in Section 9.3.12. Note that the right-hand side of Equation (10.41) is the inverse DFT of the product Xz∗ (k)Yz (k).


What we lack in order to produce the unbiased estimator of the cross-correlation is to divide each lag in the convolution vector $\hat{C}_{yx}(m)$ by the actual number of overlapping samples in the convolution, for each lag r (see below). To find this scaling, we first observe that the inverse Fourier transform with our definition comes out with the positive lags in the lower half and the negative lags in the upper half of the vector, as shown in Figure 10.12(a). In order to produce the full correlation function with positive and negative lags, we need to shift the upper half of the vector to the left of the lower half. In MATLAB, this can be achieved by the command fftshift. The FFT shifted vector $\hat{C}^{s}_{yx}(m)$ is shown in Figure 10.12(b). The value m of this vector that corresponds to the lag r = 0 is m = N, since the first N lags of $\hat{C}^{s}_{yx}(m)$ correspond to the negative lags, and the first value is $\hat{C}^{s}_{yx}(0)$. We thus obtain $\hat{C}^{s}_{yx}(r)$ by setting r = m − N. Obviously, the zero lag should be divided by the length of the signals, N, as for this value all vector values overlap. The two values to the left and right


Figure 10.12 Convolution results as part of the estimation of autocorrelation of a band limited random signal. In (a) the direct result of the inverse FFT of the product Y(k) ⋅ X ∗ (k), in (b) the result after applying the command fftshift to the result in (a), in (c) the unbiased estimator of the correlation function obtained by dividing each value in the convolution result by the actual number of overlapping values in x(n) and y(n) for each lag, r. In (d) the result in (c) is shown with zoomed x-axis around lag 𝜏 = 0.


of the zero lag should be divided by N − 1, and so on. Thus, the unbiased estimator for the cross-correlation is
$$\hat{R}_{yx}(r) = \frac{1}{N - |r|}\,\hat{C}^{s}_{yx}(r), \qquad r = -N+1, -N+2, \ldots, -1, 0, 1, \ldots, N-1, \qquad (10.42)$$
where $\hat{C}^{s}_{yx}(m)$ is the FFT shifted result of the convolution in Equation (10.41) and r = m − N. We repeat that for the autocorrelation of the signal x(n), y(n) in Equation (10.41) is replaced by x(n). In Figure 10.12(c), it may be seen that the function increases significantly for large positive and negative lags. This is a result of the fact that the number of values that the mean calculation (convolution) is based on becomes small, so the estimates behave erratically. We will look at the variance of the correlation estimates in Section 10.4.3. We are normally only interested in correlation functions for a few hundred or thousand values around lag zero, as shown in Figure 10.12(d), so the erratic behavior at the ends is not a problem. For good estimates, however, relatively long time signals are needed, as will be shown in Section 10.4.3.
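A compact MATLAB/Octave sketch of this long-FFT estimator, following Equations (10.41) and (10.42), could look as follows (the function and variable names are illustrative; x and y are assumed to be real signals of equal length):

function [Ryx, lags] = corr_longfft_sketch(x, y)
    x = x(:); y = y(:); N = length(x);
    Xz = fft([x; zeros(N,1)]);               % zero pad by N values
    Yz = fft([y; zeros(N,1)]);
    C  = ifft(conj(Xz).*Yz);                 % Eq. (10.41); ifft supplies the 1/(2N) factor
    Cs = fftshift(C);                        % negative lags first, cf. Figure 10.12(b)
    lags = (-(N-1):(N-1))';                  % r = m - N
    Ryx  = real(Cs(2:end))./(N - abs(lags)); % unbiased scaling, Eq. (10.42); lag r = -N is dropped
end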

10.4.2 Correlation Estimator by Welch's Method

For very long time signals, it may not be possible to compute the FFT of the entire signals. We can then obtain a Welch-based method by dividing the time signal into blocks as we did for the PSD estimator in Section 10.3.2. To develop this method, the signals are divided into blocks of length Nw, which results in P = floor[N/Nw] blocks in total. For each of these blocks of data in x(n) and y(n), the correlation estimate is obtained by Equation (10.41), but now with the smaller block size. We then average these convolution results, for each lag m, in the time (lag) domain. Again, we emphasize that zero padding needs to be used in the FFT computation to obtain the true convolution of the signals. The averaged result is thus computed by
$$\hat{C}_{yx}(m) = \frac{1}{P}\sum_{p=1}^{P}\left(\frac{1}{2N_w}\sum_{k=0}^{2N_w-1} X_{pz}^*(k)\,Y_{pz}(k)\,e^{j2\pi mk/(2N_w)}\right), \qquad (10.43)$$
where Xpz and Ypz are the size-2Nw DFT results of time block number p of x(n) and y(n), zero padded by Nw zeros. We obtain Welch's method simply by reversing the summation order in Equation (10.43), i.e.,
$$\hat{C}_{yx}(m) = \frac{1}{2N_w}\sum_{k=0}^{2N_w-1}\left(\frac{1}{P}\sum_{p=1}^{P} X_{pz}^*(k)\,Y_{pz}(k)\right) e^{j2\pi mk/(2N_w)}, \qquad (10.44)$$
where it should be noted that the expression inside the parentheses is a Welch estimate of an unscaled spectral density. The estimate of the correlation function is finally calculated by FFT shifting $\hat{C}_{yx}(m)$ and scaling it as described by Equation (10.42). It may look like the Welch estimate produces a correlation estimate identical to the direct convolution in Section 10.4.1. This is, however, not the case. Welch's method leads to poorer estimates for all lags except the zero lag, r = 0. The reason is that, for a blocksize Nw, the Welch estimates $\hat{R}^{w}_{yx}(r)$ will be based on averaging fewer values than the direct convolution for any r > 0. To see this, assume that we have a signal x(n) of size 102,400 samples, that we


divide into P = 100 blocks of size Nw = 1024. Let us now look at lag r = 500, for example. In the direct convolution, this will be based on averaging the products of 102,400 − 500 = 101,900 values of x(n) and y(n). For Welch's estimate, however, it will be based on only (1024 − 500) ⋅ 100 = 52,400 values. Welch's method should therefore be used with as large a blocksize as the computer can handle, for best performance.

It should be pointed out that, although it may seem tempting since the Wiener–Khinchin relations (see Equation (8.9)) in theory say it is possible, correlation functions should not be computed by simply inverse Fourier transforming a general PSD where a Hanning or other time window has been used and where the FFT has not been computed with zero padding. Doing so produces a distorted correlation function. Nevertheless, this type of computation is often seen in the literature.

10.4.3 Variance of the Correlation Estimator

It is important to understand the variance of the unbiased correlation estimator in Equation (10.40) in order to choose an appropriate part of it for OMA. The variance of the cross-correlation estimate, under the assumption of T ≫ 0 and for positive lags τ > 0, was deduced by Bendat and Piersol and may be found in Bendat and Piersol (2010) as
$$\mathrm{Var}\left[\hat{R}_{yx}(\tau)\right] \approx \frac{1}{T-\tau}\int_{-T}^{T}\left(R_{xx}(r)R_{yy}(r) + R_{yx}(r+\tau)R_{xy}(r-\tau)\right)dr, \qquad (10.45)$$
where it may be seen that this variance approaches infinity for large lags. For the special case where T ≫ τ, which is of special interest for our purposes with OMA, the variance may be approximated by
$$\mathrm{Var}\left[\hat{R}_{yx}(\tau)\right] \approx \frac{1}{T}\int_{-T}^{T}\left(R_{xx}(r)R_{yy}(r) + R_{yx}(r+\tau)R_{xy}(r-\tau)\right)dr. \qquad (10.46)$$

For the purpose of understanding the variance of correlation functions related to OMA, we will look at the variance of the autocorrelation function of the response of an SDOF system excited by Gaussian noise, since this is equivalent to the response due to a mode. In Clough and Penzien (2003), it is shown that the theoretical autocorrelation of the response of an SDOF system excited by a random force can be written as
$$R_{yy}(\tau) = \sigma_y^2\, e^{-\zeta\omega_n|\tau|}\left(\cos\omega_d|\tau| + \frac{\zeta}{\sqrt{1-\zeta^2}}\sin\omega_d|\tau|\right), \qquad (10.47)$$
where $\sigma_y^2$ is the variance of the response y(t), and $\omega_d = \omega_n\sqrt{1-\zeta^2}$ is the damped natural frequency. To illustrate the result in Equation (10.46), we assume an SDOF system with undamped natural frequency fn = 1 Hz and relative damping ζ = 0.015. The true autocorrelation of the response and its variance are plotted in Figure 10.13(a) and (c), respectively, and with zoomed x-axes in Figure 10.13(b) and (d), respectively. First, it should be noted that the variance flattens out at large lags. Second, it should be noted that the variance of the correlation estimate oscillates, with maxima coinciding with the large absolute values (large positive and negative values) of the correlation function, and with minima where the correlation function crosses zero.
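As a simple check, Equation (10.47) can be evaluated directly, for example in MATLAB/Octave (the response variance is set to unity here purely for illustration):

fn = 1; zeta = 0.015; sigma2 = 1;                  % values from the example above (sigma2 assumed)
wn = 2*pi*fn; wd = wn*sqrt(1 - zeta^2);
tau = (0:0.05:40)';
Ryy = sigma2*exp(-zeta*wn*abs(tau)).* ...
      (cos(wd*abs(tau)) + zeta/sqrt(1 - zeta^2)*sin(wd*abs(tau)));   % Eq. (10.47)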



Figure 10.13 Theoretical autocorrelation of the displacement response of an SDOF system with natural frequency fn = 1 Hz and relative damping ζ = 0.015, in (a) and (b). Approximate theoretical variance of the correlation according to Equation (10.46) in (c) and (d). It should be noted that the variance has maxima corresponding to the maxima and minima of the correlation function, and that the variance flattens out at a constant value for large lags.

The normalized random error is defined by
$$\varepsilon_r\left[\hat{R}_{yx}(\tau)\right] = \frac{\sqrt{\mathrm{Var}\left[\hat{R}_{yx}(\tau)\right]}}{R_{yx}(\tau)}, \qquad (10.48)$$

and will obviously not be defined where the correlation function crosses zero. To calculate a useful normalized random error, we therefore use the envelopes of the variance and the correlation function, calculated by the Hilbert transform as described in Section 18.2.2. The normalized random error of the correlation function used above is plotted in Figure 10.14. As can be seen, this error increases exponentially as a function of lag. For actual calculated correlation functions, we will now look at an example of a simulation of the SDOF system used above. For this purpose, we generate 1000 seconds of data of the system excited by a Gaussian force with flat spectral density. The system is simulated using 20 Hz sampling frequency, and we calculate 3000 realizations of the response (due to independent input forces), so that we can calculate the mean and variance



Figure 10.14 Normalized random error of the correlation function of the response of the same SDOF system as that used in Figure 10.13.

of the correlation function. Furthermore, we apply both the long FFT method and Welch's method in order to compare the results of these two common methods for computing correlation functions. The results of this simulation are shown in Figure 10.15. It should first be noted that the correlation function starts showing some deviation from the exponential behavior around lag τ = 30 seconds in the results from the long FFT method, and slightly above τ = 20 seconds in the results from Welch's method. In the simulation, the total length of each response realization was 20,000 samples, corresponding to 1000 cycles of the natural frequency.

The random errors of correlation function estimates for the purpose of OMA have recently been investigated in Tarpø et al. (2020). In that work, it is pointed out that there are two essential errors that may affect OMA parameter estimation. The first is that there is a bias in the exponential decay of the envelope of the correlation function, and the second is that the tail starts to behave erratically at some point, as we have shown above. The bias in the envelope of the correlation functions is illustrated in Figure 10.16 for the SDOF system used above. The bias diminishes as the length T of the signals is increased. Since the damping is estimated from the exponential decay of the correlation function, the bias will obviously affect the damping estimates in OMA. The estimation of correlation functions for OMA parameter estimation is discussed in Section 19.11.

10.4.4 Effect of Measurement Noise on Correlation Function Estimates

It is also important to understand the effect on correlation function estimates caused by extraneous noise in the sensors used for measurement. If we look at a measured signal, x̂(t), and assume it consists of a true signal, x(t), plus noise, n(t), i.e.,
$$\hat{x}(t) = x(t) + n(t), \qquad (10.49)$$



Figure 10.15 Correlation functions estimated using a simulation of a SDOF system with natural frequency fn = 1 Hz and relative damping 𝜁 = 0.015. In (a), the correlation of the response of the system using the long FFT method, and in (b), the estimated variance of this estimator, from 3000 realizations. In (c) and (d), the corresponding results for Welch’s estimator, with a blocksize (not counting the zeros in the zero padding) of 1024 samples are shown. See text for details.

then the estimated autocorrelation function $R_{\hat{x}\hat{x}}(\tau)$ is given by
$$R_{\hat{x}\hat{x}}(\tau) = E\left[(x(t) + n(t))(x(t-\tau) + n(t-\tau))\right] = R_{xx}(\tau) + R_{nn}(\tau) + R_{xn}(\tau) + R_{nx}(\tau). \qquad (10.50)$$
We can assume that the extraneous noise is uncorrelated with the signal x(t), which gives us
$$R_{\hat{x}\hat{x}}(\tau) = R_{xx}(\tau) + R_{nn}(\tau). \qquad (10.51)$$

Since extraneous noise is usually broadband and relatively flat (often it is proportional to 1/f or 1/f²), it will have a "short" correlation function. In most cases, omitting the first few lags of the estimated correlation function therefore means that the extraneous noise does not affect the remaining lags. This is particularly important in OMA, where correlation functions are used to estimate modal parameters, see Section 17.3.2.



Figure 10.16 Envelopes of estimates of the correlation function of an SDOF system with natural frequency fn = 1 Hz and relative damping ζ = 0.015. In solid line, the envelope of the true correlation of the response of the system, and in dashed lines, the envelopes of three estimates from different realizations of the input force, using the long FFT method and the same data as reported in Figure 10.15.

For cross-correlation functions, the case is even better. If both signals are affected by uncorrelated noise, i.e., x̂(t) = x(t) + n(t) and ŷ(t) = y(t) + m(t), then the cross-correlation function will be
$$R_{\hat{y}\hat{x}}(\tau) = E\left[(y(t) + m(t))(x(t-\tau) + n(t-\tau))\right] = R_{yx}(\tau) + R_{yn}(\tau) + R_{mx}(\tau) + R_{mn}(\tau) = R_{yx}(\tau), \qquad (10.52)$$

i.e., extraneous noise has no effect on cross-correlation estimates.

10.5 Estimators for Transient Signals

For transient signals, as mentioned in Section 8.4, the energy spectral density, ESD, is the most common measurement function available in noise and vibration analysis systems. In general, no window should be used for transient signals, which means that there is no need for any correction factor; however, also see the comments in Section 10.5.1. Instead, the measurement time is adjusted, if possible, so that the transient signal both starts and ends at zero, to avoid leakage. As mentioned in Section 8.4, the ESD is essentially a PSD multiplied by the measurement time used (for one FFT block). Thus, the single-sided ESD is calculated as
$$\hat{G}_{xx}(k) = \frac{2T}{\Delta f}\left|\frac{\mathrm{DFT}[x(n)]}{N}\right|^2 = 2(\Delta t)^2\left|\mathrm{DFT}[x(n)]\right|^2, \qquad (10.53)$$
by exploiting the fact that Δf = 1/T and that T = NΔt. The factor 2 should not be applied to the value for k = 0. In fact, Equation (10.53) is a discrete approximation of the magnitude


squared of the continuous Fourier transform, where the time increment replaces the infinitesimal differential, and the ESD is converted to a single-sided spectrum. When performing such scaling, however, the DC value must be treated separately so that it is scaled properly. As mentioned in Section 8.4, for transient signals one can also use a linear spectrum, called the transient spectrum, Tx(k), which in the discrete form is estimated by
$$\hat{T}_x(k) = \Delta t \cdot |X(k)|, \qquad (10.54)$$

for k = 0, 1, 2, … , N/2 + 1. Equation (10.54) is a direct approximation of the continuous Fourier transform, which was the definition of the transient spectrum in Section 8.4. From a comparison between Equations (10.53) and (10.54), it follows also that the transient spectrum, Tx, besides the scaling for single-sided spectra, is equal to the square root of the ESD. Note that Tx(k) is defined as double-sided, but usually displayed single-sided. This is particularly useful if the signal x(t) is a force, since then the DC value Tx(0) is equal to the impulse of the force, which is seen from
$$\hat{T}_x(0) = \Delta t \cdot \sum_{n=0}^{N-1} x(n) \approx \int_0^T x(t)\,dt, \qquad (10.55)$$

where the second term follows directly from the definition of the DFT in Equation (9.5). The time increment times the sum in Equation (10.55) is an estimate of the area under the curve x(n) through a block summation (Riemann sum), which approximates the integral.

Example 10.5.1 As an example of a transient spectrum, we can study a half-sine pulse, which is the form of, for example, a force pulse from an impulse hammer excitation, see Section 13.8. This type of pulse with length D and amplitude A is defined by
$$x(t) = A\sin\left(\frac{\pi}{D}t\right), \qquad (10.56)$$
for t ∈ [0, D], and zero outside this range. The impulse, that is, the area under this half-sine, is
$$I_x = \int_0^D A\sin\left(\frac{\pi}{D}t\right)dt = A\left[-\frac{D}{\pi}\cos\frac{\pi t}{D}\right]_0^D = A\,\frac{2D}{\pi}. \qquad (10.57)$$

Figure 10.17 shows the transient spectrum of a half-sine pulse with duration D = 10 ms and the maximum value A = 100 N. According to Equation (10.57), this pulse has an impulse of 2∕𝜋, which is approximately 0.637 N s. As seen in the figure, this value agrees with the transient spectrum value at 0 Hz. End of example.
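The numbers in this example are easy to reproduce; a minimal MATLAB/Octave sketch (with an assumed sampling frequency and measurement time) is:

fs = 10000; dt = 1/fs;                 % assumed sampling frequency
D  = 0.010; A = 100;                   % 10 ms duration, 100 N amplitude
t  = (0:round(0.1*fs)-1)'*dt;          % assumed 0.1 s measurement time
x  = zeros(size(t));
x(t < D) = A*sin(pi*t(t < D)/D);       % half-sine pulse, Eq. (10.56)
Tx = dt*abs(fft(x));                   % transient spectrum, Eq. (10.54)
Tx(1)                                  % approx. 0.637 N s, the impulse in Eq. (10.57)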

10.5.1 Windows for Transient Signals

For transient signals, none of the windows discussed in Section 9.3.9 are normally used, since most transient signals are self-windowing, that is, they begin and end at zero. Actually, in most cases, it would be devastating to use, for example, a Hanning window, since it would remove the most important part of the signal at the beginning (most vibration transients are



Figure 10.17 Transient spectrum of a half-sine pulse of amplitude 100 N and duration 10 ms. The impulse of the pulse is approximately 0.637 N s, which agrees with the transient spectrum value at frequency 0 Hz.

exponentially decaying signals, as we saw in Chapter 5). Hence, no window is in general necessary, since the periodic repetition of the transient does not cause any discontinuity at the time block beginning and end points. For resonant structures with light damping, it can often happen that the signal does not die out before the measurement time is finished, resulting in spectrum leakage. This situation should be avoided by increasing the measurement time so that the transient dies out inside the time window. If this cannot be achieved, an exponential window may be used, but it is important to understand that the spectrum will be affected by it. The exponential window has the form
$$w(n) = e^{-an}, \qquad (10.58)$$

for n = 0, 1, 2, … , N − 1. The constant a is chosen so that the signal at the end of the measurement time is sufficiently attenuated so as not to give any significant leakage, i.e., so the windowed signal dies out before the measurement time is finished. If the measured signal is the impulse response of a mechanical system, the exponential window is equivalent to (has the same effect as) increasing the system damping. We will discuss this more when we come to measuring frequency responses with impact excitation in Section 13.8.
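As a small illustration, an exponential window with an assumed final attenuation of 1% at the last sample can be constructed as follows in MATLAB/Octave (the attenuation value is only an example of a design choice, not a recommendation from the text):

N   = 4096;                            % assumed blocksize
att = 0.01;                            % assumed attenuation at n = N-1
a   = -log(att)/(N-1);                 % gives w(N-1) = att
w   = exp(-a*(0:N-1)');                % exponential window, Eq. (10.58)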

10.6 A Signal Processing Framework for Spectrum and Correlation Estimation

In this section, we will present a signal processing framework that is particularly attractive for OMA, but may be used with convenience for all types of vibration analysis except for measuring frequency response with shaker excitation, where it is not convenient,


Figure 10.18 Illustration of the signal processing framework suggested for general vibration analysis. The principle is that the signals are first zero padded by as many zeros as the length of the signals. Then the FFT of each signal is computed, after which the product of Xz∗ Yz is produced. Signal processing may then be applied, for example, filtering, integration, differentiation. Finally, correlation and/or spectra are produced by inverse fast Fourier transform (IFFT) or smoothing, respectively [Brandt (2019)/With permission of Elsevier.]

as in such cases the processing needs to be synchronous with the excitation signal, see Chapters 13 and 14. The framework was presented in Brandt (2019) and has the advantage that it allows all common signal processing to be integrated with minimal FFT processing. The framework is illustrated in Figure 10.18. It is based on the fact that both spectra (for all types of signals; linear spectra, power spectral densities, or energy spectral densities) and correlation functions are based on squared Fourier transforms. Since modern computers allow FFT calculations with large amounts of data (much more than we usually use for vibration analysis), it makes sense to process the signals in the frequency domain after computing an FFT of each signal. In Figure 10.18, a cross-channel analysis is illustrated. If autospectra or autocorrelation is desired, one of the signals is replaced by the other (for example, y replaced by x to produce the autospectrum and autocorrelation of x). The first step is to compute the FFT of both signals, including zero padding with as many zeros as the length of the signals, and to multiply the complex conjugate of the FFT of the reference signal (denoted x here) by the FFT of the other signal, y, since all spectra are based on this type of product. The cross-products are then processed in the frequency domain, and it is here that this framework turns out to be efficient. In this step, several signal processing operations may be readily implemented, for example:
● Lowpass, bandpass, and highpass filters may be implemented by simply zeroing out those frequencies that are undesired.
● Integration or differentiation may be implemented by dividing or multiplying, respectively, by ω², because of the square nature of the product (not −ω², since (jωX) is complex conjugated in the case of differentiation, and similarly for integration).
● Compensation of nonlinear transducer characteristics may be accomplished by implementing an inverse of the frequency response of the sensor characteristics.
● Harmonics may be detected by the periodogram ratio detection, PRD, method and removed by frequency domain editing, FDE, see Section 18.7.


After any frequency domain processing has taken place, the spectra and correlation functions may be efficiently computed. Correlation functions are directly obtained by inverse Fourier transforming the frequency domain processing result, and scaling to obtain unbiased estimates by Equation 10.42. This is why it is important to compute the FFT in the first step including zero padding because otherwise the correlation functions are not correctly obtained. Spectra are obtained by smoothing the frequency domain processing results and applying appropriate scaling. The most common type of spectra are spectral densities, where the scaling is described in Section 10.3.6. Note that the zero padding needs to be taken into account by multiplying the spectral density with an extra factor 2, since the zero padding means that the calculated mean-square of the time signal is halved.
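A minimal MATLAB/Octave sketch of the correlation branch of this framework, including an example of frequency-domain highpass filtering by zeroing lines, could look as follows (fc, fs and all names are illustrative assumptions; the spectrum branch would instead smooth the cross product as in Section 10.3.6 and apply the appropriate scaling):

N   = length(x);                          % x and y are equally long column vectors
Xz  = fft([x; zeros(N,1)]);               % zero-padded FFTs
Yz  = fft([y; zeros(N,1)]);
Cyx = conj(Xz).*Yz;                       % cross product Xz*(k) Yz(k)
fc  = 1;                                  % assumed highpass cutoff [Hz]
f2  = (0:2*N-1)'*fs/(2*N);                % frequencies of the double-length FFT
Cyx(f2 < fc | f2 > fs - fc) = 0;          % zero low-frequency lines and their images
Cs   = fftshift(ifft(Cyx));               % correlation branch, cf. Section 10.4.1
lags = (-(N-1):(N-1))';
Ryx  = real(Cs(2:end))./(N - abs(lags));  % unbiased scaling, Eq. (10.42)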

10.7 Spectrum Estimation in Practice

We shall now present some guidelines for spectrum measurements with FFT analysis software in practice. We will use examples to explain the process. It might be good to carry out these examples if you have access to a system for FFT analysis or MATLAB/Octave, partly for practice and partly to gain increased insight into the method of operation of the analysis software. In practice, as opposed to our theoretical treatment of the estimators in the previous sections of this chapter, time signals and spectra are usually plotted using physical units for time and frequency, respectively. Thus, the x-axis of the following plots (which are all spectra) will be in the frequency variable f = kΔf. The most important practical spectrum estimation issues that need to be discussed are the following:
1. choice of the most suitable estimator,
2. choice of the most suitable time window,
3. choice of proper frequency increment, or blocksize, and
4. choice of the most suitable measurement length (or, in many cases, finding the shortest acceptable time).

These issues will be discussed in the subsequent sections for various typical applications in noise and vibration analysis. There is also another issue, of course, which is to find the most suitable units of measurement. For vibrations, the question is often whether to look at acceleration, velocity, or displacement, as their respective frequency weighting in most cases will change the frequencies of “highest vibration levels.” This is, however, a topic that depends on engineering judgments which are a little outside the scope of this book. To summarize briefly, the choice of units is dependent on the reason for the measurement. For example, noise emission from a structure is largely dependent on vibration velocity, so in cases where the reason for the vibration measurement is related to noise emission, it may be reasonable to look at vibration velocity. It should be clear from the discussion so far in this chapter that the result of spectrum estimation depends on the frequency increment and time window, etc., chosen for the analysis. Once a particular frequency increment is chosen, for example, the trade-off between bias and random error is fixed. For this reason, the “traditional” method of averaging spectra in real time is highly unsatisfactory. In many cases, there is a need to


be able to run frequency analysis with several different choices of, for example, frequency resolution. With the inexpensive storage capacity of modern computers, there is good reason to store the measurement signal so that it can subsequently be analyzed over and over again. I therefore strongly recommend that all signals are stored first on hard disk, which makes the frequency analysis much easier and safer (because you can run the analysis again, if needed).

An additional important point is the length of the recorded data. Many of the time domain processing procedures we discussed in Chapter 3 result in somewhat shorter data sequences after processing, for example, resampling (where some thousand samples on both ends of the data should be discarded after the processing), and integration and differentiation by filters, etc. It is therefore good practice to always measure more data than is anticipated to be needed. This increases the freedom of further processing prior to spectrum estimation without losing accuracy.

10.7.1 Linear Spectrum Versus PSD

In order to motivate the use of the special spectra (autopower, or the recommended linear spectrum) for periodic signals, we will first study what happens if we measure a sinusoid with linear spectrum versus PSD. In Figure 10.19, autopower and PSD with two different frequency resolutions (blocksizes) of a recorded sine signal are shown. The difference in level between the two PSDs in Figure 10.19 is explained by the fact that for a PSD, it is not the level which is relevant to examine, but instead the area under the curve. Because the frequency peak with the finer frequency increment is narrower, it must be higher so that the area under it is constant. Thus, it is not the choice of PSD that is “incorrect,” but the interpretation that is different than that of the linear spectrum.

Figure 10.19 Comparison between linear spectrum and PSD for a sinusoid with amplitude 1 V and frequency 200 Hz. In (a), the linear spectrum of the sine is plotted with 6.25 Hz (solid) and with 12.5 Hz frequency increment (dashed). In (b), PSDs with the same frequency increments as above are plotted. As seen in the figure, for the linear spectra the peak value is the same regardless of frequency increment because the signal is periodic. See the text for an explanation. The frequency axes have been zoomed in to visualize the differences.


Figure 10.20 Comparison between linear spectrum and PSD of a random signal. In (a), linear spectra with 6.25 Hz (solid) and 12.5 Hz frequency increment (dashed). In (b), PSDs with the same frequency increments are plotted. As seen in the figure, the value of the PSD is the same regardless of the frequency increment when the signal is random. See the text for an explanation.

For periodic signals, we usually want to be able to read out each periodic component's RMS value, and the linear spectrum is thus usually preferable. In Figure 10.20, the result of a similar measurement on a random signal is presented in linear spectrum and PSD, respectively. Using reasoning analogous to the above, it is now the PSD that gives the easiest interpretation of the spectrum; for random signals, it makes little sense to use the linear spectrum.
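The behavior in Figures 10.19 and 10.20 is easy to reproduce. The sketch below is one way to do it, assuming the ABRAVIBE commands alinspec and ahann with the same argument pattern as in Example 11.2.1 for the linear spectrum, and the pwelch command (Signal Processing Toolbox in MATLAB, or the signal package in Octave) for the PSD; the peak of the linear spectrum should stay at 1 V RMS for both frequency increments, while the PSD peak roughly doubles when the frequency increment is halved.

fs = 1000;                                  % sampling frequency
t  = (0:1/fs:10-1/fs)';                     % 10 s of data
x  = sqrt(2)*sin(2*pi*200*t);               % 1 V RMS sine at 200 Hz
for N = [160 80]                            % blocksizes giving df = 6.25 and 12.5 Hz
    df = fs/N;
    [XL,fL] = alinspec(x(1:N),fs,ahann(N),1,0);   % linear (RMS) spectrum, one block
    [Gxx,fG] = pwelch(x,hanning(N),N/2,N,fs);     % PSD in V^2/Hz
    fprintf('df = %5.2f Hz: lin. spec. peak %.2f V RMS, PSD peak %.3f V^2/Hz\n', ...
        df, max(XL), max(Gxx));
end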

10.7.2 Example of a Spectrum of a Periodic Signal

As a first real measurement example, we shall study the spectrum from an accelerometer attached to a small fan with constant rotation speed. The measured signal is shown in Figure 10.21. The acceleration signal in this case is periodic, since imbalance, etc., gives rise to one or more fluctuations per revolution. Since some contaminating noise can exist in the transducer signal, we may need to average the signal to obtain a correct spectrum. We can determine if this is necessary by examining how a specific peak changes between successive measurements based on a single time block. If the peak varies more than what we assume to correspond to the measurement uncertainty, then we must increase the number of averages until a sufficiently stable average is obtained.

Next, we need to select a time window. For this type of signal, a flattop window should be chosen if accurate RMS estimates are necessary. In many cases, however, the maximum amplitude error of approximately 15% with the Hanning window is acceptable, and then the Hanning window offers the advantage of better frequency resolution for a given frequency increment, which means that the blocksize, and thereby the measurement time of each time block, can be kept shorter. The data acquisition hardware may have to be set to a proper input range and the frequency range selected. The frequency range should be set so that the desired number of harmonics are included.


Figure 10.21 Time signal from an accelerometer attached to a fan rotating with approximately 1800 RPM.


For 1800 revolutions per minute (RPM), which corresponds to 30 Hz, we assume for this example that 500 Hz, corresponding to 16 harmonics fitting within the frequency range, is sufficient. Once the frequency range is fixed, we adjust the blocksize to vary the frequency increment, Δf. The recommended procedure is to make a measurement with a coarse frequency increment, Δf, i.e., a slightly higher Δf than we anticipate is sufficient. The resulting spectrum is then compared with spectra with gradually smaller Δf (larger blocksize) until the spectrum is clearly a line spectrum, i.e., the spectrum reaches near-zero values in between the peaks. In Figure 10.22, two spectra from the fan measurement, with frequency increments of 5 Hz and 1.25 Hz, respectively, are plotted.


Figure 10.22 Linear spectra of fan acceleration from Figure 10.21. Two different frequency increments, (a) 5 Hz and (b) 1.25 Hz were used in the frequency range 0–500 Hz. As seen in the figure, a frequency increment of 1.25 Hz is sufficient for the spectrum to decrease to near-zero between the periodic components (peaks), whereas the 5 Hz frequency increment is obviously too coarse.


The result in Figure 10.22(a), with Δf = 5 Hz, shows that the spectrum does not reach near-zero values between the peaks, which is an indication that several spectral components fall within the frequency resolution. With the finer frequency increment of Δf = 1.25 Hz in Figure 10.22(b), the details are enhanced and the spectrum looks like the anticipated spectrum of a periodic signal. With repeated measurements without averaging (number of averages = 1), the spread in the highest periodic component peak, at approximately 60 Hz, was around 10%. This was judged as more than the measurement uncertainty, and therefore the number of averages was increased to 10. The result was a significantly more stable value, with a variation of approximately ±1%.

Finally, an important point must be emphasized. When averaging spectra from rotating machines as in this case, it must be verified that the variation in rotation speed (in Hz) is smaller than the frequency increment. Otherwise, for several consecutive averages, the peak will not always fall on the same spectral line in the spectrum. If that is the case, the averaged result will naturally become too low, see also Section 12.8.
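The peak-stability check described above is easy to script. The sketch below uses a synthetic stand-in for the fan signal (the sampling frequency, component frequencies, and noise level are illustrative assumptions only) and the same alinspec/ahann argument pattern as in Example 11.2.1.

fs = 2560;                                   % assumed sampling frequency
t  = (0:1/fs:10-1/fs)';
y  = sin(2*pi*60*t) + 0.3*sin(2*pi*30*t) + 0.05*randn(size(t));  % stand-in fan signal
N  = round(fs/1.25);                         % blocksize giving df = 1.25 Hz
M  = 10;                                     % number of single-block spectra to compare
pk = zeros(M,1);
for m = 1:M
    blk = y((m-1)*N+1:m*N);                  % consecutive, nonoverlapping blocks
    [XL,f] = alinspec(blk,fs,ahann(N),1,0);  % one block, no averaging
    idx = find(f > 55 & f < 65);             % search window around the 60 Hz component
    pk(m) = max(XL(idx));
end
relSpread = std(pk)/mean(pk)                 % relative spread of the peak
% If relSpread exceeds the expected measurement uncertainty, increase the number
% of averages until the averaged peak is sufficiently stable.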

10.7.3 Practical PSD Estimation

We shall also study a spectrum example for a random acceleration signal. For this purpose, we use the simulation procedure that was described in Section 19.2.3 to generate data from a known mechanical system with three degrees of freedom, because this allows us to illustrate the resulting PSD and compare it with the “true” PSD. We let a bandlimited white random force excite the system and calculate the output acceleration.

As we described in Section 10.3.4, when measuring a PSD we must choose a sufficiently small frequency increment so that the bias error is negligible. We therefore begin by setting the frequency range and frequency resolution, by guessing at first. The more knowledge we have before we begin the measurement, the better off we are, naturally. In our case, we begin with a 1000 Hz measurement range and 5 Hz frequency increment. We also choose 200 averages and 50% overlap to obtain a fairly small random error. The result of this measurement is shown in Figure 10.23(a) and (b), overlaid with the true PSD. The results of a measurement where the frequency increment has been decreased to 2.5 Hz (i.e., the blocksize was doubled) are shown in Figure 10.23(c) and (d), also overlaid with the true PSD. As seen in the figure, the estimate with the finer frequency increment agrees well with the true PSD.

In a real measurement, we do not have the luxury of being able to plot the true PSD overlaid with our estimate. So how can we assure that we obtain a sufficient frequency resolution with negligible bias? The answer is that we need to perform several PSD estimates with gradually decreasing frequency increment (increasing blocksize, given that the sampling frequency is constant), until peaks due to resonances do not increase. The results of three such estimates, with 0.5, 0.25, and 0.125 Hz frequency increments, are plotted and overlaid in Figure 10.24. To be able to do this type of analysis easily, it is convenient to have the time data stored so that several analyses can be done on the same data.

The discussion about finding a frequency increment which produces a PSD with negligible bias error should perhaps be extended a little. It is not always necessary to obtain PSDs with this high precision. In many cases, the aim of estimating a PSD is not to make accurate analyses of the peaks in the PSD. In such cases, a higher bias error can often be tolerated.
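A sketch of this refinement procedure is given below, using pwelch on a synthetic stand-in for the measured acceleration (a single lightly damped resonance generated by a simple digital resonator; the resonance frequency, damping, and data length are assumptions for illustration only).

fs   = 1000; fn = 100; zeta = 0.01;               % stand-in resonance: 100 Hz, 1% damping
wn   = 2*pi*fn;
p    = exp((-zeta*wn + 1i*wn*sqrt(1-zeta^2))/fs); % discrete-time pole of the resonator
den  = [1 -2*real(p) abs(p)^2];                   % resonator denominator
y    = filter(1,den,randn(200*fs,1));             % 200 s of "acceleration" response
for df = [4 2 1 0.5 0.25]                         % gradually finer frequency increments
    N = round(fs/df);
    [Gyy,f] = pwelch(y,hanning(N),N/2,N,fs);
    [pkval,idx] = max(Gyy);
    fprintf('df = %5.2f Hz: peak %.3g at %.2f Hz\n', df, pkval, f(idx));
end
% When the peak value stops increasing as df is decreased, the bias is negligible.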


Figure 10.23 The PSD of a signal from a simulated 3DOF system (solid) and the true PSD (dotted). The resonance bandwidth of the first mode is equal to Br = 2 Hz. Two different frequency increments were used: in (a) 1 Hz and in (c) 0.25 Hz. In (b) and (d), the peak corresponding to the first mode in (a) and (c), respectively, has been zoomed in for closer inspection. As seen in the figure, with the coarser frequency increment there is a clear bias in the peak, whereas the bias is negligible with the finer frequency increment. The latter corresponds to a ratio of Br/Δf ≈ 8 which, according to Figure 10.7, gives a normalized bias error of less than 2%. The PSDs were both calculated using the same data; for the PSD in (a) and (b), a total of 1000 FFTs with 50% overlap were used, and consequently, for the PSD in (c) and (d), a total of 250 FFTs were used. The normalized random error in the latter PSD is therefore, according to Figure 10.9, approximately 6.5%.

Examples of such applications are, for example, PSD estimates to find a suitable test spectrum for vibration testing, where the PSD is usually used to build some average envelope from several PSDs at different measurement points. One should, however, always be sure to know when there is bias in the PSD, so the analysis procedure presented can still be used with success to understand how much bias error a particular measurement has.

As discussed in Section 10.3.6, an advantage of the smoothed periodogram estimator in some cases is that it can be implemented with a logarithmic frequency axis and smoothing window width, thus keeping the ratio of the smoothing window length to frequency constant. An example of this type of PSD estimate is shown in Figure 10.25, where it can be seen that the random error is smaller at higher frequencies than at lower frequencies.


Figure 10.24 PSDs of the same signal as in Figure 10.23, with three different frequency increments: 0.5, 0.25, and 0.125 Hz. As evident from the figure, the two spectral densities with the finest frequency increments show negligible bias, and we can therefore conclude that the 0.25 Hz increment is sufficient. This also corresponds to the result of Figure 10.23, where we had the true PSD overlaid.


Figure 10.25 PSD using the smoothed periodogram with a logarithmic frequency spacing and smoothing window length. The PSD is computed using 400 frequencies between 40 and 400 Hz, and an exponentially growing smoothing window length from 50 to 500 spectral lines. The random error can be seen as small fluctuations (“ripple”), particularly at low frequencies, where it is highest.


10.7.4 Spectrum of Mixed Property Signal

In many applications of noise and vibration measurements, the signal is not obviously either periodic or random. It is common to find vibrations with some periodic content and some random content. In such cases, it is usually easiest to interpret estimated PSDs, particularly if the PSD plot is combined with a cumulated mean-square plot, as discussed in Section 8.5. The RMS value of each periodic component is easy to read from the cumulated mean-square by taking the square root of the rapid increase at the frequency of the component. In Figure 10.26(a), a PSD of an example signal is shown. The signal is the same noise signal used for the PSD discussion in Section 10.7.3, with a sine with a frequency of 80 Hz added. The cumulated mean-square in Figure 10.26(b) clearly shows the increase in cumulated mean-square at 80 Hz, and it is easy to read out that the RMS of the sine is √4 = 2 m/s² (using some cursor functionality, at least).
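The cumulated mean-square is simply the running sum of the PSD multiplied by the frequency increment. The sketch below assumes pwelch and a synthetic stand-in signal (unit-variance noise plus a sine with an RMS of 2 units at 80 Hz); the square root of the step at 80 Hz should come out close to 2.

fs = 1000;
t  = (0:1/fs:100-1/fs)';
x  = randn(size(t)) + 2*sqrt(2)*sin(2*pi*80*t);   % noise plus a sine with RMS = 2 at 80 Hz
N  = 4000;                                        % df = 0.25 Hz
[Gxx,f] = pwelch(x,hanning(N),N/2,N,fs);
df  = f(2) - f(1);
cms = cumsum(Gxx)*df;                             % cumulated mean-square versus frequency
idx = find(f >= 78 & f <= 82);                    % bracket the 80 Hz component
sineRMS = sqrt(cms(idx(end)) - cms(idx(1)))       % RMS of the sine, approximately 2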

10.7.5 Calculating RMS Values in Practice

In noise and vibration analysis, it is very common to want to know the RMS value of a signal in a particular frequency range.

Figure 10.26 PSD of a vibration signal consisting of random noise with a sine added at 80 Hz in (a). In (b), the cumulated mean-square is plotted, which makes readings of RMS levels between any two frequencies easy, see Section 8.5.


For example, for a measured sound pressure, this RMS value in decibels relative to 20 μPa is the sound level in the particular frequency range. It is also common to apply some weighting characteristic to a spectrum and then calculate a weighted RMS value, for example, acoustic A-weighting in the case of sound pressure. In this section, we will see how to apply such weightings and how to calculate correct RMS values from different types of spectra.

10.7.6 RMS from Linear Spectrum of Periodic Signal

Parseval's theorem (see Table 9.1) implies that a mean-square summation in the time domain is equivalent to a magnitude-squared summation in the frequency domain. In order to compute the RMS value of a signal in a particular frequency range, it is thus suitable to perform the summation on an autopower spectrum. If we use the preferred linear spectrum, we thus square each spectrum value to obtain the autopower spectrum prior to the summation. From the discussion about broadening of spectrum peaks in Section 9.3.7, however, it should be clear that a straight summation of all the values in an autopower spectrum will yield a value which is too large, as can be seen in Figure 9.11. The peaks in an autopower spectrum are scaled so that the peak value, if there is no leakage, is exactly equal to the squared RMS value (the mean-square) of the periodic component at a particular frequency. Since there is one additional value (in the case of a Hanning window and with no leakage) on each side of the peak, the sum of these three values will, of course, be larger than the true mean-square. Indeed, for a Hanning window, the two side factors squared (because it is an autopower spectrum) are 1/4 of the center value, meaning that the sum will be a factor 1 + 2 · 1/4 = 1.5 too large. It turns out that this is no surprise, since the factor by which to compensate the overestimated value, for an autopower spectrum, is the normalized equivalent noise bandwidth, Ben, which for a Hanning window is Ben = 1.5. This is also obvious from the discussion on PSD window correction in Section 10.3.3, because the PSD is scaled so that a summation over frequency (times the frequency increment) equals the mean-square of the signal.

Using the normalized equivalent noise bandwidth, we can thus formulate an expression for calculating RMS values of periodic signals measured with the single-sided autopower or linear spectrum. For the autopower spectrum, here denoted $\hat{A}_{xx}(k)$, we have

$$x_{\mathrm{RMS}}\left(k_1,k_2\right) = \sqrt{\frac{\sum_{k=k_1}^{k_2} \hat{A}_{xx}(k)}{B_{en}}}, \qquad (10.59)$$

where $k_1$ and $k_2$ are the spectral lines between which we wish to sum. For a linear spectrum, we naturally replace $\hat{A}_{xx}(k)$ in Equation (10.59) with $\hat{X}_L^2(k)$ from Equation (10.4), i.e.,

$$x_{\mathrm{RMS}}\left(k_1,k_2\right) = \sqrt{\frac{\sum_{k=k_1}^{k_2} \hat{X}_L^2(k)}{B_{en}}}. \qquad (10.60)$$
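A minimal sketch of Equation (10.60), assuming a Hanning window (Ben = 1.5) and the alinspec/ahann argument pattern from Example 11.2.1; the result should agree closely with the RMS computed from the time signal.

fs = 1024; N = 1024;
t  = (0:1/fs:(N-1)/fs)';
x  = 5*sqrt(2)*sin(2*pi*20*t);            % 5 V RMS sine at 20 Hz (leakage free)
[XL,f] = alinspec(x,fs,ahann(N),1,0);     % linear (RMS) spectrum, Hanning window
Ben = 1.5;                                % normalized equivalent noise bandwidth, Hanning
k1 = 1; k2 = length(XL);                  % sum over the entire spectrum here
xRMS = sqrt(sum(XL(k1:k2).^2)/Ben)        % should be close to std(x), i.e. 5 V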

10.7.7 RMS from PSD

For a PSD, the computation of the RMS value is even more straightforward. This spectrum is already scaled so that the area under the curve corresponds to the mean-square of the


signal, i.e., the square of the RMS value. Since the estimated PSD described above is usually measured with a relatively small Δf to avoid bias error, it is hardly necessary to use a better method to calculate the area under the curve than simply summing the values of the PSD multiplied by the frequency increment. Therefore, a desired RMS value is determined for an estimated PSD in the frequency range between spectral lines k1 and k2 as

$$x_{\mathrm{RMS}}\left(k_1,k_2\right) = \sqrt{\Delta f \cdot \sum_{k=k_1}^{k_2} \hat{G}_{xx}(k)}. \qquad (10.61)$$
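A corresponding sketch of Equation (10.61), here applied to a band of a white-noise PSD computed with pwelch (the band limits are arbitrary); for unit-variance white noise the band RMS should be approximately sqrt((f2-f1)/(fs/2)).

fs = 1000;
x  = randn(100*fs,1);                      % 100 s of unit-variance white noise
N  = 1024;
[Gxx,f] = pwelch(x,hanning(N),N/2,N,fs);   % single-sided PSD
df = f(2) - f(1);
f1 = 100; f2 = 200;                        % band of interest
k  = find(f >= f1 & f <= f2);              % spectral lines k1..k2
bandRMS = sqrt(df*sum(Gxx(k)))             % approximately sqrt(100/500) = 0.45 here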

10.7.8 Weighted RMS Values

As mentioned in the introduction to this section, frequency weighting of signals is very common in noise and vibration analysis. In acoustics, there are the common A-weighting and C-weighting (and the less common B- and D-weighting) characteristics, which weight the sound pressure for better correlation with perceived sound levels. In an analysis of the vibration effects on humans, there are a range of weighting characteristics specified by different standards, for example, ISO 2631-1: (1997) and ISO 8041: (2005). The idea of all such weighting characteristics is that there is some linear filter between the location of the measurement and the location of interest, for example, from the acceleration level on the floor where a person is standing to the vibration level of the abdomen. It should perhaps be mentioned that not all human vibration weightings are linear. In the case of nonlinear effects, the simple, linear weighting described here cannot be used, see for example ISO 2631-5: (2004) for shocks applied to humans, where the particular interest is on the spinal loads; this leads to more complicated data processing.

In all cases of frequency weighting, the principle is the same. There is a specified filter, for example, the acoustic A- and C-weighting characteristics plotted in Figure 10.27. The measured signal is supposed to pass this filter, and the output quantity (sound pressure in our example here) is to be computed, particularly the RMS value in a certain frequency range. The acoustic weighting curves are specified in the standard (IEC 61672-1 2005).

Frequency weightings are defined as filter characteristics, and in most cases they can be applied as time domain digital filters as described in Section 3.3.2. In many cases, this is necessary because the time signal should be analyzed with some particular time domain analysis. This is the case for acoustic analysis if particular sound levels with time constants are to be applied, see Section 3.3.5. Also, many of the human vibration filters should be applied in the time domain, see for example ISO 2631-1: (1997). When the signal is stationary and only the total RMS level is to be calculated, however, the weighting can be applied in the frequency domain, and a much faster calculation can be done.

In order to apply a frequency weighting in the frequency domain, the principle is to compute the weighted spectrum, and then perform an RMS calculation as described in the previous sections. This is an easy operation, although we have to consider whether the scaling of the spectrum is linear (as for a linear spectrum) or squared, as in the case of autopower spectra or PSDs. In the case of a linear spectrum, if we assume we have a linear weighting function described by a frequency response Ws(f), then the weighted linear spectrum XLw(f) is

$$X_{Lw}(f) = \hat{X}_L(f) W_s(f). \qquad (10.62)$$


Figure 10.27 Plot of the acoustic A- and C-weighting curves in dB. The curves are specified in the standard (IEC 61672-1 2005).

The easiest way to apply this weighting is, of course, to calculate the weighting frequency response Ws on the same discrete frequencies, f = kΔf, where the estimated spectrum X̂L is defined. In the case of an autopower spectrum or a PSD, the frequency response Ws has to be squared, so that the weighted PSD, for example, becomes

$$G_{xx,w}(f) = \hat{G}_{xx}(f) W_s^2(f), \qquad (10.63)$$

where the weighting function Ws(f) again is calculated for the same discrete frequencies k where Ĝxx is defined.
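As an illustration of Equation (10.63), the sketch below applies A-weighting to a sound pressure PSD in the frequency domain. The A-weighting magnitude is computed from the analog filter definition in IEC 61672-1 and normalized to 0 dB at 1 kHz; the stand-in pressure signal and the use of pwelch are assumptions made for the example.

fs = 44100;
p  = randn(30*fs,1);                         % stand-in sound pressure signal [Pa]
N  = 8192;
[Gpp,f] = pwelch(p,hanning(N),N/2,N,fs);     % unweighted PSD [Pa^2/Hz]
RA = (12194^2*f.^4) ./ ((f.^2+20.6^2) .* ...
     sqrt((f.^2+107.7^2).*(f.^2+737.9^2)) .* (f.^2+12194^2));
Ws = RA / interp1(f,RA,1000);                % A-weighting magnitude, 0 dB at 1 kHz
Gpp_A = Gpp .* Ws.^2;                        % weighted PSD; note the squared weighting
df  = f(2) - f(1);
LpA = 10*log10(df*sum(Gpp_A)/(20e-6)^2)      % A-weighted sound level [dB re 20 uPa]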

10.7.9 Integration and Differentiation in the Frequency Domain

Spectra of integrated and differentiated signals may be conveniently computed in the frequency domain, without affecting the time signals. Thus, if we have measured a displacement signal, u(t), and computed a linear spectrum UL(f), the spectrum of the velocity time signal, v(t) = du/dt, can be computed on the spectrum itself by

$$V_L(f) = \omega U_L(f), \qquad (10.64)$$

which follows straight from the properties of the Fourier transform in Table 2.2. The imaginary number j is omitted since UL is real. If the signal u(t) is a random signal and we have computed the spectral density Guu instead, however, since this spectrum is squared, we must take the (magnitude) square into consideration, and thus the spectral density of the corresponding velocity, v(t), will be

$$G_{vv}(f) = \omega^2 G_{uu}(f), \qquad (10.65)$$

where it should be particularly noted that the square of the imaginary number j does not appear, since a spectral density is based on the magnitude squared.


Similarly, integration in the frequency domain can be done by dividing by 𝜔 in case of linear spectra, and by 𝜔2 for squared spectra.
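A minimal sketch for squared spectra, assuming an acceleration PSD from pwelch: integrating once (to velocity) and twice (to displacement) by dividing by ω² and ω⁴, respectively, with the DC line excluded to avoid division by zero.

fs = 1000;
a  = randn(100*fs,1);                        % stand-in acceleration signal
N  = 2048;
[Gaa,f] = pwelch(a,hanning(N),N/2,N,fs);
w   = 2*pi*f;
Gvv = zeros(size(Gaa)); Guu = zeros(size(Gaa));
Gvv(2:end) = Gaa(2:end) ./ w(2:end).^2;      % velocity PSD (single integration)
Guu(2:end) = Gaa(2:end) ./ w(2:end).^4;      % displacement PSD (double integration)
% Differentiation goes the other way, e.g. a jerk PSD would be w.^2 .* Gaa.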

10.8 Multichannel Spectral and Correlation Analysis

In Chapters 14 and 15, we will use data for many channels sampled synchronously for estimation of multiple-input/multiple-output (MIMO) frequency response functions and principal components, etc. It is therefore practical to introduce a matrix approach to spectra for general MIMO systems. In Chapter 17, we will also be using MIMO correlation function matrices, which we also introduce in this section.

Let us assume that we measure a number of input signals, x1(t), x2(t), …, xQ(t), and that we calculate the Fourier transform of each of these signals, which we denote X1(f), X2(f), …, XQ(f). From consecutive blocks (records, frames) of spectra of each input signal, we average together single-sided autospectra, Gx1,x1(f), Gx2,x2(f), and so on. We now simplify the notation by dropping the x's in the indexes and thus define the auto (power) spectrum of xk(t) by

$$G_{kk}(f) = E\left[X_k(f) X_k^*(f)\right], \qquad (10.66)$$

where * denotes complex conjugate and E[ ] denotes expected value (in practice, averaging). Autospectra are real and nonnegative functions. The simplified notation we will use will omit the variable x in the index whenever the index stands for an input. We will also often drop the frequency variable f for simplicity. Thus, for example, Gx1,x1(f) will in most cases simply be denoted G11.

We define the cross (power) spectrum of (reference) signal xq(t) with (response) signal xp(t) by

$$G_{pq}(f) = E\left[X_p(f) X_q^*(f)\right], \qquad (10.67)$$

where the “reference” and “response” should only be interpreted as indicating from which signal to which signal the cross-spectrum is defined; both signals here are, of course, input signals to the MIMO system. However, any two-channel function requires a reference and a response. Cross-spectra are in general complex functions. Also note that the complex conjugation in Equation (10.67) corresponds to the phase of Xq being subtracted from that of Xp, as would be expected if xq is the reference (input) signal and xp is the response (output) signal.

Next, assume we also measure a number of output signals, y1(t), y2(t), …, yP(t), simultaneously with the acquisition of the input signals, with corresponding Fourier transforms Y1(f), Y2(f), …, YP(f). Analogous with Equation (10.67), we then define the input–output cross-spectra of output signal yp with input signal xq by

$$G_{y_p,x_q}(f) = E\left[Y_p(f) X_q^*(f)\right]. \qquad (10.68)$$

From Equation (8.13), we know that changing the order of the signals (indexes) corresponds to a complex conjugation of the cross-spectrum, so that

$$G_{qp}(f) = G_{pq}^*(f), \qquad (10.69)$$

which also follows straight from the definition in Equation (10.68).


Equation (10.69) is an important relationship in the discussion in Chapters 14 and 15. It should be mentioned here, to avoid confusion, that many textbooks define the indexes in the reversed order compared with the one used here. However, we use the above notation in order to be consistent with standard matrix notation when we come to multiple inputs in later chapters.

10.8.1 Matrix Notation for MIMO Spectral Analysis

When introducing multiple input and output signals in the later chapters, it will be useful to have a matrix notation for these signals. We therefore let the input spectrum vector {X(f)} be a column vector of the instantaneous spectra (each FFT result, in practice) of the input signals

$$\{X(f)\} = \begin{Bmatrix} X_1(f) \\ X_2(f) \\ \vdots \\ X_Q(f) \end{Bmatrix}. \qquad (10.70)$$

Similarly, the output spectrum vector {Y(f)} is defined by

$$\{Y(f)\} = \begin{Bmatrix} Y_1(f) \\ Y_2(f) \\ \vdots \\ Y_P(f) \end{Bmatrix}. \qquad (10.71)$$

For the inputs, we define the input cross-spectrum matrix, [Gxx(f)], as

$$\left[G_{xx}(f)\right] = E\left[\{X\}\{X\}^H\right] = \begin{bmatrix} G_{11} & G_{12} & \cdots & G_{1Q} \\ G_{21} & G_{22} & \cdots & G_{2Q} \\ \vdots & \vdots & & \vdots \\ G_{Q1} & G_{Q2} & \cdots & G_{QQ} \end{bmatrix}, \qquad (10.72)$$

where H denotes Hermitian transpose, i.e., a complex conjugation and transpose (see Appendix D). The expected value (averaging) operation is applied to each matrix element separately and should be interpreted loosely: it needs to include the scaling factors we discussed for the PSD and CSD estimators in Section 10.3. Note that the real autospectra G11, G22, etc., are found on the diagonal of [Gxx], and that the off-diagonal elements of [Gxx] contain complex cross-spectra between two input signals. We further define the input–output cross-spectrum matrix, [Gyx(f)], similarly by

$$\left[G_{yx}(f)\right] = E\left[\{Y\}\{X\}^H\right] = \begin{bmatrix} G_{y_1,x_1} & G_{y_1,x_2} & \cdots & G_{y_1,x_Q} \\ G_{y_2,x_1} & G_{y_2,x_2} & \cdots & G_{y_2,x_Q} \\ \vdots & \vdots & & \vdots \\ G_{y_P,x_1} & G_{y_P,x_2} & \cdots & G_{y_P,x_Q} \end{bmatrix}. \qquad (10.73)$$

It should be noted that the matrices in Equations (10.72) and (10.73) are dependent on frequency. Also, entry p, q of Gyx (f ) corresponds to the cross-spectrum between response yp (t) and input xq (t).


10.8.2 Arranging Spectral Matrices in MATLAB/Octave

In Chapters 13 and 14, we will see that for estimating frequency responses, the necessary spectral matrices for all analysis are [Gxx], [Gyx], and [Gyy], defined in Section 10.8.1. However, there is an important simplification in that, for these types of applications, we do not need any of the cross-spectra between the outputs, y. Thus, we can limit [Gyy] to contain only the autospectrum of each signal yp. We will see how we can store such matrices conveniently in MATLAB/Octave below. It is, however, only for estimating frequency responses that the output cross-spectra are not needed. In Chapter 17, for example, we will use the full cross-correlation matrix of the responses for OMA.

As many commands in MATLAB/Octave operate on columns, for example plot, fft, and std, it is convenient to store spectral matrices with data in columns. This means that the first index should be frequency. The second matrix index we will set to the output, or response, and the third index will be the input, or reference. With this convention, and with Nf the number of frequency lines (typically N/2 + 1 if N is the blocksize), D the number of responses, and R the number of inputs (references), the three matrices defined above will have the following sizes:

● [Gxx] will be a matrix with size Nf × R × R.
● [Gyx] will be a matrix with size Nf × D × R.
● [Gyy] will be a matrix with size Nf × D, as we do not need to store any cross-spectra between the output signals.

Example 10.8.1 Write a MATLAB/Octave script which produces the 3D input cross-spectrum matrix in variable Gxx, if we have three input signals in three columns of a matrix in variable x in MATLAB/Octave. We assume we have a command acsd, producing a CSD between two vectors, which will, of course, produce an autospectral density if the two vectors are the same. The following MATLAB/Octave lines then build the 3D input cross-spectral matrix, Gxx, in variable (matrix) Gxx.

[Nt,R]=size(x);          % number of time samples and number of inputs
Nf=N/2+1;                % number of frequency lines (fs and blocksize N assumed defined)
Gxx=zeros(Nf,R,R);       % Allocate space
for r=1:R                % x(:,r) treated as input (reference)
    for d=1:R            % x(:,d) treated as output (response)
        Gxx(:,d,r)=acsd(x(:,r),x(:,d),fs,N);
    end
end

What if we want to plot this matrix? We will note that MATLAB/Octave cannot plot 3D matrices without some difficulty. So some very useful commands are permute and squeeze. The former command rearranges a matrix in a different order. So to produce a 2D matrix with only those spectra in the matrix Gxx which are related to the first reference, i.e., Gxx(:,:,1), we can write

A=permute(Gxx,[1 3 2]);
A=A(:,:,1);
semilogy(f,abs(A))


which will produce the requested matrix. Since Gxx(:,:,1) is really a 2D matrix, we can also write

A=squeeze(Gxx(:,:,1));
semilogy(f,abs(A))

and whichever command is used is a matter of taste. End of example.
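The input–output matrix [Gyx] can be built and stored in the same way. The sketch below extends Example 10.8.1, assuming D output signals stored columnwise in a matrix y, and the same (hypothetical) acsd command, blocksize N, and sampling frequency fs as in the example.

[~,R] = size(x);                 % number of input (reference) channels
[~,D] = size(y);                 % number of output (response) channels
Nf  = N/2+1;                     % single-sided spectra assumed
Gyx = zeros(Nf,D,R);             % frequency-by-response-by-reference
for r = 1:R
    for d = 1:D
        Gyx(:,d,r) = acsd(x(:,r),y(:,d),fs,N);   % cross-spectrum of y(:,d) with x(:,r)
    end
end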

10.8.3 Multichannel Correlation Functions

With the definition of the cross-spectral matrix [Gyx] in Section 10.8.1, it is natural to define the cross-correlation matrix [Ryx] as the inverse Fourier transform of the cross-spectral matrix. As we have noted before, in the time domain, this is equivalent to the convolution of y(t) by x(−t), i.e.,

$$\left[R_{yx}(\tau)\right] = \frac{1}{T-|\tau|}\,\{y(t)\} * \{x(-t)\}^T. \qquad (10.74)$$

To produce the correct, continuous convolution, if computed in the frequency domain, the spectral matrix [Gyx ] must be computed using zero padding by as many zeros as the length of the signals x(n) and y(n). In Section 17.3.1, we will see how this definition may be decomposed into modal parameters for OMA.
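For a single response/reference pair, this can be sketched in MATLAB/Octave as below: the DFTs are zero padded so that the inverse transform gives the linear (not circular) correlation, and the 1/(T − |τ|) scaling of Equation (10.74) is written in samples. The signals are synthetic stand-ins.

fs = 100; Nt = 1000;
x  = randn(Nt,1);                          % stand-in reference signal
y  = filter(1,[1 -0.9],x);                 % stand-in response signal
X  = fft([x; zeros(Nt,1)]);                % zero-padded DFTs
Y  = fft([y; zeros(Nt,1)]);
r  = real(ifft(Y .* conj(X)));             % linear correlation sums
Ryx = r(1:Nt) ./ (Nt - (0:Nt-1)');         % unbiased estimate for lags 0..Nt-1
tau = (0:Nt-1)'/fs;                        % lag axis in seconds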

10.9 Chapter Summary

To summarize the contents of this chapter, we note some important points of (nonparametric) spectral estimation:

● all nonparametric spectral estimation methods in principle calculate RMS values of bandpass filters (filter bank theory),
● all FFT-based spectrum estimators essentially use the magnitude squared of FFT/DFT results, or, for cross-spectra, the product (Y X*) between the DFTs of the two signals,
● averaging is done in the frequency domain, either as ensemble averages (Welch's method) or by averaging adjacent frequency lines (smoothed periodogram method),
● for easiest interpretation of spectra, it is important to choose an appropriate spectrum estimator (preferably linear spectrum, PSD, or transient spectrum).

For periodic signals, it is most convenient to use the linear spectrum (also called RMS spectrum) defined in Section 10.2.2. For periodic signals, the flattop window provides good amplitude accuracy, but requires larger blocksize (finer frequency increment) due to the wider main lobe of this window. For random signals, we have discussed two estimators; the usual method implemented in noise and vibration analysis software is Welch’s method, which is averaging windowed segments (blocks) of data. An alternative to Welch’s method which can


sometimes be attractive is the smoothed periodogram method, which instead uses one large FFT of the entire time signal, from which the magnitude squared (or product (Y X*)) is calculated. This function, the periodogram, is then smoothed by using frequency lines around each frequency of interest to reduce the variance.

For mixed property signals, we have discussed that the PSD should normally be used, and can be combined with a plot of the cumulated mean-square value as a function of frequency, as in Figure 10.26. From the cumulated mean-square plot, it is easy to extract the RMS value of each periodic component.

For spectrum analysis of both periodic and random signals, we have discussed that to obtain correct spectra, a recommended procedure is to start with a coarse frequency increment, Δf, and gradually decrease Δf (by increasing the blocksize), while observing the spectrum. For periodic signals, a line spectrum with distinct peaks with almost zero spectrum values between the peaks is expected. Thus, a frequency increment is sought for which each periodic component is clearly separated, as illustrated in Figure 10.22(b). For random signals, the aim of decreasing the frequency increment is instead to remove the bias of peaks due to resonances. Thus, a Δf is sought such that decreasing it further does not make the resonance peaks higher. This was illustrated in Figure 10.24.

For spectrum analysis of transient signals, the entire transient should be captured if possible. If the signal starts and ends with zero, it is “self-windowing” and the transient spectrum should be calculated using the entire transient. For systems with low damping, sometimes an exponential window, defined in Equation (10.58), has to be applied to reduce the signal at the end to reduce leakage effects.

10.10 Problems

Many of the following problems are supported by the accompanying ABRAVIBE toolbox for MATLAB/Octave and further examples which can be downloaded with the toolbox. If you have not already done so, please read Section 1.6. ABRAVIBE is completely free and can be downloaded from www.abravibe.com, together with example files and other material complementing this book, including a solutions manual for all book problems.

Problem 10.1 Create a sine with an RMS value of 5 V and frequency f0 = 20 Hz, with a sampling frequency of 1024 Hz and a blocksize of N = 1024 samples, in MATLAB/Octave.
(a) Perform each computation step described in Section 10.2.1 to obtain a single-sided autopower spectrum using a Hanning window. Make sure the level of the resulting autopower spectrum is correct.
(b) Calculate the linear (RMS scaled) spectrum from the result in (a).

Problem 10.2 Repeat Problem 10.1 using a flattop window instead of the Hanning window.

Problem 10.3 Use the entire linear spectrum of Problem 10.1(b) to calculate the RMS value of the signal using the procedure in Equation (10.60). Compare the result numerically with the RMS value calculated from the time signal (for example, using the MATLAB/Octave std command).


Problem 10.4 Repeat Problem 10.3 on the result from Problem 10.2.

Problem 10.5 Use the formulas in Section 10.3.2 to implement a MATLAB/Octave command to produce a PSD using 50% overlap and a Hanning window. Create a Gaussian random signal with 102,400 samples using the MATLAB/Octave randn command and compute the PSD using your developed command with a blocksize of N = 1024 samples. Check that the RMS level calculated by Equation (10.61) equals the RMS calculated from the time signal.

Problem 10.6 What is the normalized random error of the PSD estimated in Problem 10.5? Try to estimate this random error experimentally, by using the fact that the normalized random error is independent of frequency, i.e., use the standard deviation of the difference between the estimated and the true PSD as an estimate of the random error. How accurate is your estimated random error compared to the error given in Section 10.3.5?

Problem 10.7 Prove that the integral part of Equation (10.39) equals the convolution of y(t) with x(−t). Hint: set up the equation for the convolution y(t) ∗ x(−t) using the definition in Equation (2.54) and make a suitable variable substitution.

References

Bendat J and Piersol AG 2000 Random Data: Analysis and Measurement Procedures, 3rd edn. Wiley Interscience.
Bendat J and Piersol AG 2010 Random Data: Analysis and Measurement Procedures, 4th edn. Wiley Interscience.
Blackman RB and Tukey JW 1958a The measurement of power spectra from the point of view of communications engineering – Part 1. Bell System Technical Journal 37(1), 185–282.
Blackman RB and Tukey JW 1958b The measurement of power spectra from the point of view of communications engineering – Part 2. Bell System Technical Journal 37(2), 485–569.
Brandt A 2019 A signal processing framework for operational modal analysis in time and frequency domain. Mechanical Systems and Signal Processing 115, 380–393.
Clough RW and Penzien J 2003 Dynamics of Structures. Berkeley, CA, USA: Computers & Structures Inc.
Cooley JW, Lewis PAW and Welch PD 1970 The application of the fast Fourier transform algorithm to the estimation of spectra and cross-spectra. Journal of Sound and Vibration 12(3), 339–352.
Daniell PJ 1946 Discussion of 'On the theoretical specification and sampling properties of autocorrelated time-series'. Journal of the Royal Statistical Society 8 (Suppl.)(1), 88–90.
Hannig J and Lee TCM 2004 Kernel smoothing of periodograms under Kullback–Leibler discrepancy. Signal Processing 84(7), 1255–1266.
Harris FJ 1978 On the use of windows for harmonic analysis with the discrete Fourier transform. Proceedings of the IEEE 66(1), 51–83.
IEC 61672-1 2005 Electroacoustics – Sound level meters – Part 1: Specifications. International Electrotechnical Commission.
ISO 18431-1 2005 Mechanical vibration and shock – Signal processing – Part 1: General introduction.
ISO 2631-1 1997 Mechanical vibration and shock – Evaluation of human exposure to whole-body vibration – Part 1: General requirements.
ISO 2631-5 2004 Mechanical vibration and shock – Evaluation of human exposure to whole-body vibration – Part 5: Method for evaluation of vibration containing multiple shocks.
ISO 8041 2005 Human response to vibration – Measuring instrumentation.
Newland DE 2005 An Introduction to Random Vibrations, Spectral, and Wavelet Analysis, 3rd edn. Dover Publications Inc.
Nuttall A and Carter C 1982 Spectral estimation using combined time and lag weighting. Proceedings of the IEEE 70(9), 1115–1125.
Otnes RK and Enochson L 1972 Digital Time Series Analysis. Wiley Interscience.
Pintelon R, Peeters B and Guillaume P 2008 Continuous-time operational modal analysis in the presence of harmonic disturbances. Mechanical Systems and Signal Processing 22(5), 1017–1035.
Schmidt H 1985a Resolution bias errors in spectral density, frequency response and coherence function measurements: errata. Journal of Sound and Vibration 101(3), 377–404.
Schmidt H 1985b Resolution bias errors in spectral density, frequency response and coherence function measurements, I: General theory. Journal of Sound and Vibration 101(3), 347–362.
Schoukens J, Rolain Y and Pintelon R 2006 Analysis of windowing/leakage effects in frequency response function measurements. Automatica 42(1), 27–38.
Stoica P and Moses R 2005 Spectral Analysis of Signals. Prentice Hall.
Stoica P and Sundin T 1999 Optimally smoothed periodogram. Signal Processing 78(3), 253–264.
Tarpø M, Friis T, Georgakis C and Brincker R 2020 The statistical errors in the estimated correlation function matrix for operational modal analysis. Journal of Sound and Vibration 466, 115013.
Welch PD 1967 The use of fast Fourier transform for the estimation of power spectra: a method based on time averaging over short, modified periodograms. IEEE Transactions on Audio and Electroacoustics AU-15(2), 70–73.
Wirsching PH, Paez TL and Ortiz H 1995 Random Vibrations: Theory and Practice. Wiley Interscience.


11 Measurement and Analysis Systems

There are many commercial systems for measurement and analysis of noise and vibration signals. The basic design of these systems has its roots in the early FFT analyzers developed in the late 1960s and early 1970s, shortly after the publication of the FFT algorithm. Although hardware has gone through a revolutionizing development since then, the basic software design has been kept relatively intact, although today, of course, most systems consist of relatively inexpensive hardware and sophisticated software.

Early analysis of noise and vibration signals involved analog tape recorders and expensive computers for converting the analog signals to digital signals and for performing frequency analysis. A breakthrough was made in the early 1970s when the first FFT analyzers were brought on the market. These bulky machines could, thanks to the FFT algorithm, compute spectra in real time. At that time, memory was expensive, so in most cases, time data was thrown away after computing the FFT, and only the average spectrum end result was kept. The basic design of modern FFT analysis systems was established already in these first analyzers.

Today, the tools for noise and vibration analysis usually consist of a relatively inexpensive hardware box in which the analog-to-digital conversion is done, and data are then transferred to a laptop computer for the remaining processing. Thus, it is not appropriate to refer to current systems as analyzers. Rather, we will refer to modern analysis systems for noise and vibration analysis as FFT analysis systems, or noise and vibration analysis systems. The software architecture for noise and vibration analysis systems has been kept close to the original design in the sense that systems of today are usually still based on processing blocks of data, see Section 11.3. The main difference between a modern system and the original analyzers is that in modern systems, time signals can usually be stored onto a hard disk or other storage media for subsequent analysis. As we mentioned in Chapter 10, this is a recommended procedure, as frequency analysis sometimes has to be performed several times on the same time data to extract all information, and data quality analysis can only be performed on time data, see Section 4.4.

In the remainder of this chapter, we will discuss some practical aspects of measuring signals for subsequent time domain or frequency domain analysis. We will discuss important hardware issues and how to interpret specifications of data acquisition hardware. We will also look at the basic design of systems available for noise and vibration analysis and


discuss the demands a good noise and vibration measurement system has to meet. After a brief introductory overview of the design of general analysis systems, we will present hardware and software requirements and features, respectively.

11.1 Principal Design

A modern system for analysis of noise and vibration signals typically consists of an external hardware box in which the analog signals are conditioned and sampled digitally. The hardware box is controlled by software, usually running on a laptop computer, in which the signals are processed and stored. A schematic illustration of the main parts of a typical measurement system is shown in Figure 11.1.

The hardware usually consists of some signal conditioning electronics such as IEPE current feed for sensors with built-in impedance conversion, as discussed in Chapter 7, or microphone power supply, etc. The signal conditioning is not illustrated in the schematic diagram in Figure 11.1, but will be discussed briefly in Section 11.2. Then, as illustrated at the very left of Figure 11.1, there is an AC/DC switch and an amplifier, or attenuator. Following this, the signal passes the analog-to-digital converter (ADC), indicated inside the dotted box in the figure, which in most cases today consists of an analog anti-aliasing filter, the actual A/D conversion, and a subsequent digital lowpass filter. The digital signal after the digital lowpass filter is transferred to the PC where it is processed and displayed, as illustrated on the right-hand side of Figure 11.1.

There is an indicated boundary (vertical dotted line) between hardware and software in Figure 11.1. This is to indicate those parts of the data acquisition and processing that are typically implemented in the (external) hardware box and in PC software, respectively. This does, of course, not mean that the parts implemented in hardware are not seen in the software. The hardware is controlled by the data acquisition and analysis software, so you will find menu choices in the software which control the different parts of the hardware illustrated in Figure 11.1.


Figure 11.1 Schematic diagram of the FFT analyzer. First comes a switch for AC/DC coupling, after which comes an amplifier/attenuator, which adjusts the signal to a reasonable level for A/D conversion. An analog filter, the antialiasing filter, follows, and then the A/D converter and a digital lowpass filter. After this comes the processor in which the computations and analyses are carried out. Finally, the result can be displayed on a monitor. The dotted box surrounds the components usually contained in the sigma-delta A/D converter, which is most common in modern measurement systems, see Section 11.2.2. The different steps are explained in the text below.


Details of each part of the hardware and software will be discussed in Sections 11.2 and 11.3, respectively.

11.2 Hardware for Noise and Vibration Analysis

There are many hardware products on the market today which are dedicated to noise and vibration measurement. The main reason for this is that there are some special demands, particularly on dynamic range, cross-channel match, and signal conditioning (current supply) for IEPE sensors, which need to be fulfilled for a high-quality noise and vibration measurement system. It is, of course, in principle possible to build a system based on general ADC components, but once all necessary parts are put together, the price will likely be higher than using dedicated hardware. The special demands on a good system for noise and vibration signals require highly accurate antialiasing filters, matched between the channels, and electronics with a relatively low noise floor, and those components are very expensive. The introduction of sigma–delta ADC technology in the 1990s (see Section 11.2.2), which is used in almost all dedicated hardware for noise and vibration signals today, has essentially led to very high performance at a relatively low cost per channel. This means that it is very difficult to match the price/performance of these dedicated systems by building a system based on more general ADC components.

11.2.1 Signal Conditioning

Many systems for noise and vibration analysis have built-in signal conditioning for typical transducers in vibration analysis and acoustics. Inputs with current supply for piezoelectric transducers of IEPE type, as discussed in Section 7.3, are now available in almost all noise and vibration systems. In addition, there are systems with direct inputs for microphones, charge amplifiers for piezoelectric transducers, etc. Since the signal conditioning is dependent on which transducer type is used, it has been omitted from the general schematic illustration in Figure 11.1, but is located before the input on the left-hand side of the figure.

Immediately after the signal conditioning, or sometimes inside the signal conditioning stage, there is a circuit for AC or DC coupling. Choosing AC coupling will force the DC component to be removed from the input signal, which is desired when the AC component we want to analyze is relatively small in comparison with the superimposed DC component. If we try to A/D convert the entire signal, we will obtain a worse dynamic range in the measurement than necessary (see Section 11.2.2.1), since the A/D converter's measurement range must be based on the maximum value in the signal, that is, the AC plus the DC component. Most noise and vibration signals should preferably be acquired with AC coupling, since we are in most cases only interested in the dynamic part of the signal. There are, however, some cases where the vibration signal has a natural nonzero mean, or where very low frequencies are of interest, which can justify acquiring the signal with DC coupling.

The capacitor in the AC coupling circuit, together with the input resistance of the next electronics stage, will form a highpass (HP) filter. Typically, this is specified as


a cutoff frequency in Hz by the manufacturer. For low-frequency measurements, this can be problematic and care should be taken when measuring frequencies below, say, 20 Hz or so. Also see Section 7.3.1 for a discussion on low-frequency characteristics of transducers. After the AC/DC circuit in Figure 11.1, there is an amplification/attenuation stage, with the purpose of scaling the input so that it has a suitable voltage range for the lowpass filter and A/D converter. This stage will be touched upon in the section on A/D conversion and dynamics.

11.2.2 Analog-to-Digital Conversion, ADC

A/D conversion (analog-to-digital conversion) is a collective term for conversion from a continuous (analog) signal to discrete (digital) samples. A/D conversion can be split into two parts: (i) discretization of amplitude, the so-called quantization, and (ii) sampling.

11.2.2.1 Quantization and Dynamic Range

By quantization we mean the effect that each continuous amplitude (voltage) value is converted to a (binary) number with a fixed number of bits (one bit is one binary digit). After this conversion process the amplitude resolution is naturally fixed, which results in a limited dynamic range, see Figure 11.2. Dynamic range is defined as the ratio between the largest and smallest number that can be represented simultaneously after the A/D conversion. Since every extra bit in a binary number gives twice as many possible values (intervals), as a first approximation we can calculate the dynamic range, D, expressed in dB, after A/D conversion using p bits, as

$$D \approx 6 \cdot p \;\; \mathrm{[dB]}, \qquad (11.1)$$

because a factor of 2 is approximately equivalent to 6 dB (20 · log10(2) ≈ 6).


Figure 11.2 Schematic illustration describing the principle of quantization during A/D conversion. In the illustration, 3 bits are used in the conversion, which is clearly too few for most applications, but is used here for the sake of simplicity. A continuous (analog) value at the input is converted to a digital sample, indicated by a filled ring, with a certain number of binary digits. The marked samples in the figure would give the binary sequence (001, 010, 001, 010, 110, 110, 110). This limitation in resolution gives rise to limited dynamic range, see text.


Besides the dynamic limitation due to quantization, the total dynamic range of the analysis system will usually also decrease due to the signal-to-noise ratio in the input amplifier, if the noise floor of the amplifier is higher than the dynamic range due to quantization. The total dynamic range is typically around 100–110 dB for modern 24-bit measurement systems. A more rigorous treatment of dynamic range limitations due to quantization, e.g., Oppenheim and Schafer (1975), leads to a dynamic range due to quantization which is approximately 1.5 dB less than that given by Equation (11.1), but this correction is normally negligible compared with the limitation due to the amplifier's signal-to-noise ratio. Therefore, we refrain from this treatment here.

As the alert reader you are, you may have noticed that a possible digital value is missing in Figure 11.2. This is because we have used a representation where the leftmost bit stands for the sign (+ or −), the so-called two's-complement representation, which is the most common representation used in measurement equipment (ADCs and signal processors). As the aim of this chapter is not to describe how to design low-level software, we need not worry about the binary representation.

11.2.2.2 Setting the Measurement Range

In order to obtain maximum dynamic range in a measurement, it is essential to adjust the amplifier/attenuator in the analyzer so that the measured signal exploits as much as possible of the total input range of the A/D converter. This is done by adjusting the input amplifier or attenuator, as the reference for the A/D converter is normally set to a fixed reference voltage for reasons of long-term stability. (The ADC reference voltage needs to be insensitive to aging of electronic components, temperature changes, etc.) If the input signal contains a DC component, it is recommended to use AC coupling of the input signal so that the DC component disappears, as mentioned above. We will now illustrate the effect of changing the full-scale range with an example.

Example 11.2.1 Use MATLAB/Octave to simulate a measurement of a sine signal with 1 V RMS level with two different full-scale voltages, 2 and 20 V, respectively, using an ADC with 16-bit resolution. Calculate linear spectra of the two signals and plot and compare the results. MATLAB/Octave includes functionality to use integer arithmetic. This can be used to “simulate” a real-case measurement situation in the following way. To compute the linear spectra, we use the alinspec command from the accompanying toolbox.

fs=1000;                          % Sampling frequency
N=2048;                           % FFT block size
fsine=67;                         % Sine frequency
t=(0:1/fs:(N-1)/fs)';             % Time axis
y=sqrt(2)*cos(2*pi*fsine*t);      % Create the sine, 1 V RMS
Scale=double(intmax('int16')/2);  % This makes 2V = intmax
ys1=double(int16(Scale*y));       % Truncate product to 16 bits
[Y1,f]=alinspec(ys1/Scale,fs,ahann(N),1,0);   % Linear spec.
Scale=double(intmax('int16')/20); % This makes 20V = intmax
ys2=double(int16(Scale*y));
[Y2,f]=alinspec(ys2/Scale,fs,ahann(N),1,0);   % Linear spec.

In Figure 11.3, the results of the code above are plotted for comparison.


Figure 11.3 Plot for Example 11.2.1. The plots illustrate the importance of selecting an appropriate full-scale range for dynamic measurements. In the figure, two spectra of a sine signal with 1 V RMS voltage (1.4 V amplitude) are shown, discretized using a full-scale range of 2 volts in (a) and of 20 volts in (b). The ADC resolution was 16 bits. As can be seen in the figure, the background (discretization) noise increases by approximately 20 dB (10 times) with the higher full-scale range.

greatly increased (by a factor of ten, just as the full-scale range was increased by a factor of ten). This shows the importance of optimizing the input range. End of example.

It is also appropriate to warn against overloading the input electronics, which occurs if the measurement signal exceeds the full-scale range of the instrument. Overload is completely devastating to frequency analysis (and in most cases time domain analysis as well), as the spectrum of an overloaded signal differs from that of the original signal. Therefore, it is vital for all measurements that we can detect when overload occurs. This detection must be performed both in the input amplifier and in the A/D conversion itself. Because there is a lowpass filter before the A/D conversion, overload can occur in the analog part and yet be impossible to discover in or after the A/D conversion. Hardware should therefore be designed such that overload is indicated if it occurs before the ADC as well as if it occurs in the A/D conversion process. In recent years, 24-bit A/D converters have become predominant in FFT analysis systems. The dynamic range of the ADC is then approximately 144 dB, which is much greater than the total dynamic range, which is rarely larger than approximately 100 dB due to cross-channel talk, the noise floor of the input amplifiers, etc. The dynamic range of most transducers for vibration measurements is approximately 90–100 dB. Thanks to the larger dynamic range of the ADC, we can avoid overloading the analysis system by setting the measurement range a few times above the measured signal's voltage without the total dynamic range of the measurement being compromised.
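To see why overload is so harmful to spectra, the following sketch (assumed values; alinspec and ahann are the toolbox functions used in Example 11.2.1) clips a 1 V RMS sine at a too-low full-scale range and computes its linear spectrum; spurious harmonics of the sine then appear in the spectrum:

fs=1000; N=2048;
t=(0:N-1)'/fs;
y=sqrt(2)*cos(2*pi*67*t);          % 1 V RMS sine (1.41 V amplitude)
FS=1;                              % Full-scale range set too low, 1 V
yc=max(min(y,FS),-FS);             % Hard clipping, as in an overloaded input
[Yc,f]=alinspec(yc,fs,ahann(N),1,0);
semilogy(f,Yc)                     % Harmonics of 67 Hz stick up from the floor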

11.2.2.3 Sampling Accuracy

The second aspect of A/D conversion is the sampling, as we can of course only carry out the above quantization at specific points in time. It is important that the samples for all


channels in a multichannel measurement system are sampled simultaneously and that the time interval between samples, Δt, is accurate. To fulfill these requirements, the A/D converter should contain so-called sample-and-hold circuits, which temporarily "hold" the analog voltage at very precise time instants so that the ADC can convert this constant value. Sample-and-hold circuits can usually be added to commercial ADC boards, and a system for noise and vibration analysis must contain such circuits. Commercial FFT analysis systems always contain these circuits, whereas many inexpensive ADC boards do not, but they can usually be equipped with optional sample-and-hold circuits. The price of the sample-and-hold circuits can exceed the price of the ADC board. The DFT is sensitive to deviations from the assumption that the time increment, Δt, is identical between all samples. Figure 11.4 illustrates what happens to the dynamic range of the spectrum of a sinusoid when it is sampled with a normally distributed random error in the sampling instances, with a standard deviation of 10^-4 ⋅ Δt. The signal's spectrum, which should ideally show a single peak at the sinusoid's frequency with values below the dynamic range at all other frequencies, was subsequently calculated. Because of the error in the time intervals between samples, the dynamic range is considerably limited. In many cases of vibration analysis, a high dynamic range is necessary, for example, to measure frequency response. Note especially that if the sampling frequency in our example was, say, 1 kHz, then the standard deviation of the time error would correspond to only 100 ns. The result if the different channels are sampled at different instances in time is a phase difference between the channels. This error is frequency dependent and can be analyzed by considering the period of a sine at a particular frequency as 360°. If the error between two channels is Tε seconds, and the frequency is f0 = 1/T0, then the phase error is 360 ⋅ Tε/T0. If, for example, we sample a 100 Hz sine with a sampling frequency of

Figure 11.4 An irregular sampling frequency (spread in the time increment Δt) gives rise to limited dynamic range. The lower curve shows the numerical dynamic range and the upper curve shows what happens if we introduce a random error in the sampling times with a standard deviation of 10^-4 ⋅ Δt.


fs = 1 kHz, and the sampling error between two channels is 0.1% of the sampling interval, i.e., 1 μs, then the phase error is 360 ⋅ 10^-6/10^-2 = 0.036°. Phase match between channels will be further discussed in Section 11.2.4.4.
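The effect illustrated in Figure 11.4 can be reproduced approximately with a few lines of MATLAB/Octave (a sketch with assumed parameters, again using alinspec and ahann from the accompanying toolbox):

fs=1000; N=4096; f0=100;
t=(0:N-1)'/fs;                        % Ideal, equidistant sampling instants
tj=t+1e-4/fs*randn(N,1);              % Jittered instants, std = 1e-4*dt
x =sqrt(2)*cos(2*pi*f0*t);            % Sine sampled at the ideal instants
xj=sqrt(2)*cos(2*pi*f0*tj);           % Same sine sampled at the jittered instants
[X ,f]=alinspec(x ,fs,ahann(N),1,0);
[Xj,f]=alinspec(xj,fs,ahann(N),1,0);
semilogy(f,X,f,Xj)                    % Compare the noise floors of the two spectra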

11.2.2.4 Anti-alias Filters

In order to ensure that the spectrum measured by an FFT analysis system does not contain aliasing errors, the signal must, in most cases, be lowpass filtered to remove all frequencies above the Nyquist frequency (half the sampling frequency), see Section 3.2. This filter must clearly be an analog filter, as it appears before the A/D converter. Because anti-aliasing filters, for physical reasons, cannot cut off all frequencies immediately above the cutoff frequency, but rather have a slope above the cutoff frequency, we must also have a “safety margin” between half the sampling frequency and the cutoff frequency. In early equipment for noise and vibration analysis, a standard oversampling factor of 2.56 was established, as illustrated in Figure 11.6. In more recent systems utilizing sigma–delta ADCs, however, this factor is sometimes somewhat smaller, say down to approximately 2.2.

Figure 11.5 Typical anti-aliasing filter. Because of the filter's nonideal characteristics, the cutoff frequency, fc, needs to be set lower than half of the sampling frequency. It is typically set to fs/2.56 in FFT analysis systems, for historical reasons, which approximately corresponds to 0.8 ⋅ fs/2. In the figure, the cutoff frequency is 400 Hz, which gives a sampling frequency of 2.56 ⋅ 400 = 1024 Hz. Note the nonlinear phase characteristics, which are discussed in Sections 3.3.2 and 11.2.2.


Figure 11.6 Schematic illustration of the sampling frequency in relation to the cutoff frequency of the antialiasing filter. How large the ratio must be depends on the slope of the filter above the cutoff frequency, and the wanted dynamic range, D. The “standard” in FFT analysis systems has become an oversampling factor of 2.56, that is, fs = 2.56 ⋅ fc . This factor is a result of the fact that typical antialiasing filters had a slope of approximately 120 dB/octave. In some more recent systems utilizing sigma-delta ADCs, this factor is slightly reduced, see text for details.

Following the A/D converter in Figure 11.1, there is a digital lowpass filter. This filter applies a lowpass filtering and subsequent decimation of the higher sampling frequency (fs′ as illustrated in Figure 11.1) used by the ADC down to the lower sampling frequency, fs, requested by the user. Its function is to reduce some of the drawbacks of the analog anti-aliasing filter, especially its phase characteristics, as discussed in Section 3.3.2. The characteristics of a typical anti-aliasing filter in an older analysis system without a sigma–delta ADC are illustrated in Figure 11.5. This type of design, with a digital decimation filter following the ADC, has been predominant in systems for noise and vibration analysis since the mid-1980s. If the analysis system is designed this way, the A/D converter's sampling frequency is fixed, usually at the system's highest frequency range. When we choose a lower measurement range, the digital filter is used to lowpass filter the measured signal, and at the same time it removes some samples, that is, decimates the data, so that the data after the digital filter have a sampling frequency of (typically) 2.56 times the highest frequency of interest. An advantage of this process is that the digital filter can be designed with better characteristics than analog filters, such as lower passband ripple, higher stopband attenuation, and, not least, linear phase. Besides these advantages, the digital filter will have the same characteristics for all channels, giving a good cross-channel match, which is important for cross-channel analysis, for example, when estimating frequency response. Furthermore, the lowpass filtering process enhances the effective dynamic range, since each output sample of the filter is essentially an average of several input samples, which reduces the variance (noise level). There is, however, also a potentially severe cause of problems with this type


of ADC design, if the measured signal contains frequencies above the Nyquist frequency, which will be discussed in Section 11.2.3.
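The noise reduction obtained by the lowpass-filter-and-decimate step can be illustrated with a short simulation (a sketch with assumed numbers; fir1 is the standard FIR design function, which in Octave requires the signal package):

fsp=65536;                 % ADC sampling frequency (fs' in Figure 11.1), assumed
D=16;                      % Decimation factor down to fs = fsp/D = 4096 Hz
x=1e-3*randn(10*fsp,1);    % 10 s of broadband "ADC noise", 1 mV RMS
b=fir1(128,0.8/D);         % Linear-phase FIR lowpass, cutoff at 0.8 of the new Nyquist
y=filter(b,1,x);           % Lowpass filter...
y=y(1:D:end);              % ...and keep every D-th sample (decimation)
[std(x) std(y)]            % The broadband noise RMS drops by roughly 13 dB here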

11.2.2.5 Sigma–Delta ADCs

In recent years, an A/D converter type called the sigma–delta converter (sometimes, more appropriately, called the delta–sigma converter) has become popular in FFT analysis systems. It works on the same principle as the process just mentioned, but uses a considerably higher sampling frequency than the original analysis systems using the principle described in Section 11.2.2.4, and consequently the digital lowpass filter decimates the data even more, which allows for even higher-quality data. A further advantage of the high sampling frequency is that a lower-order filter can be used for the analog anti-aliasing protection, which introduces fewer problems with nonlinear phase characteristics. The sigma–delta ADC builds on the so-called one-bit converter (Higgins, 1990; Proakis and Manolakis, 2006), which in turn is based on delta modulation. This technology, developed for use in audio equipment, results in ADCs with very good performance at a very low cost. Typical sampling frequencies used in sigma–delta ADCs are in excess of 6 MHz, although at this rate the signal has only single-bit resolution. Using the decimating digital filter following the ADC, however, the effective resolution is increased to 24 bits or more. The only restriction with this type of ADC is that, since the technology is developed for audio applications, the possible frequency range is limited, nominally to the upper audible frequency of 20 kHz. By overclocking sigma–delta ADCs, some manufacturers currently market converters with an upper frequency range of over 200 kHz (usually with reduced bit resolution). The digital filter following the ADC in sigma–delta converters should be a linear-phase filter, see Section 3.3.2, if time domain analysis, such as transient analysis, is the aim of the measurement. For price/performance reasons, however, not all manufacturers build this type of filter into measurement systems for noise and vibration analysis. You can therefore not be sure that a system with sigma–delta ADCs is suited for transient analysis without checking the data sheets.

11.2.3 Practical Issues

Measurement systems designed with digital filters after the ADC, as illustrated in Figure 11.1, have a potential cause of problems which is illustrated in Figure 11.7. If the measured signal has substantial frequency content outside the measurement range, but below the analog antialiasing filter's cutoff frequency, and this "outside signal" is larger than the signal of interest, the A/D converter can be overloaded even though the measured signal looks fine on the screen. The reason is that what is observed on the screen is the output of the digital filter, which in the case under discussion will remove the high-frequency signal causing the overload. This situation can be perplexing if one is not aware that it may occur. Thus, if seemingly inexplicable overload indications occur, the measurement frequency range should be set to its highest value and the signal from the A/D converter should then be investigated. If the signal with the higher frequency is very large in relation to the signal of interest, and the dynamic range too low to allow the entire signal to be acquired and subsequently downsampled to the frequency range of interest, the only solution may be to use


(Figure 11.7 shows, in (a), the time signal in volts, containing the signal of interest at 50 Hz and a contaminating component at 200 Hz, and, in (b), the corresponding linear spectrum in V RMS.)
Figure 11.7 Description of the case where overloading can occur even though the signal on the screen seems to be smaller than the A/D converter’s input range. The overloading occurs because the A/D converter sees the signal at 200 Hz, while we (in this example) on the screen only see up to 100 Hz. The A/D converter’s amplitude range is assumed to be ±5 V.

an external analog lowpass filter between the transducer and the measurement system to remove the higher-frequency content. It should be said, however, that with 24-bit ADCs, due to their extremely large dynamic range, the situation can in many cases be solved by recording the signal with a high frequency range and subsequently downsampling to the requested frequency range. In summary, we shall list a few points about sampling and A/D conversion which should be observed when using an FFT analysis system:

● Set the measurement range so that it approximately matches the measured signal's level. When using modern analysis systems with an A/D converter with 24 bits or more, a larger safety margin can be used, as few transducers have as great a dynamic range as the A/D conversion. As IEPE sensors (see Section 7.3) are the most common sensors, it is worth pointing out here that these signals have a ±5 V full range.
● Potential overloading of the input is fatal for spectral analysis. Therefore, watch out for overload indications and do not forget to check the entire measurement chain, including transducers, signal conditioning, and the ADC.
● An FFT analysis system is suitable for analyzing the spectrum of a signal. The time data seen in an FFT analysis system can normally only be used as an indicator that there is a signal, but not to see what the signal looks like, since the oversampling is insufficient. (However, some analysis systems specially designed for time domain analysis can use a higher oversampling ratio so that time analysis can be performed.) Alternatively, the signal can be recorded and subsequently lowpass filtered.
● If you have an indication of overload, even though the signal looks smaller than the full scale, increase the frequency range and check whether a signal at a higher frequency is overloading the A/D converter. If the dynamic range is insufficient after the measurement range is adjusted to the higher signal, then an analog lowpass filter is necessary between the sensors and the FFT analysis system to remove the higher-frequency signal.

11.2.4 Hardware Specifications

When evaluating hardware for data acquisition systems, there are some very important specifications that need to be understood. In this section, we will discuss some of the most important specifications and also discuss how they can be easily measured. The main reason for this discussion is that some of the specifications often given for ADCs are theoretical values. It is important to understand the total performance of a system, particularly if one wants to build a system from individual components. When testing specifications, it is important to correctly treat the channels which are not tested. These channels need to be short-circuited (terminated), usually best done by connecting 50 Ω BNC terminating plugs to the input BNC connectors. The reason for this is that open channels can easily pick up 50/60 Hz line power components which can ruin the measurements. This is also an appropriate place to stress the importance of calibrating your measurement system. The importance of calibration for making traceable measurements cannot be overemphasized. A measurement system should be calibrated regularly, and the recommended time between calibrations is usually specified by the manufacturer of the system. In the following subsections, the most important specifications of dynamic measurement systems will be discussed. Some relatively easy checks that can be done in the lab will be described, as it is good practice to check your measurement system once in a while to ensure that nothing has changed since the last calibration.

11.2.4.1 Absolute Amplitude Accuracy

The absolute accuracy of measurement systems for noise and vibration signals is usually good to within 1% or better. This can seem rough compared with, for example, a digital voltmeter, but we have to remember that the absolute accuracy of a dynamic measurement system is frequency dependent, and the worst case is specified. As sensors for noise and vibration analysis are usually specified within ±5%, one percent absolute accuracy of the instrument is quite acceptable. To measure the absolute accuracy, a high-quality sine generator must be used, which is usually only possible in calibration labs.

11.2.4.2 Anti-alias Protection

As mentioned in Sections 3.2.1 and 11.2.2.4, it is essential that a system for noise and vibration analysis is equipped with analog anti-alias filters which ensure that a frequency component showing up in a spectrum is definitely due to frequency content at that frequency, and not due to a higher frequency, aliasing as a lower-frequency component. This means that


the anti-alias filters must attenuate all frequencies above the Nyquist frequency by more than the dynamic range (as discussed in Section 11.2.2.1). The anti-alias protection can be investigated by attaching a sine generator with an amplitude slightly less than the voltage range (full-scale voltage) of the channel to be tested. Then, instantaneous spectra are studied while increasing the frequency of the sine tone to above the Nyquist frequency. Once the frequency of the sine tone comes above the measurement frequency range, there should be no tone sticking up through the noise floor of the spectrum. If there is aliasing, the aliased tone will sweep down in the spectrum as the actual frequency is swept up between fs/2 and fs, where it (if still visible) will turn and start sweeping up again. With some low-cost anti-aliasing filters, there are cases where some frequencies come through with virtually no attenuation at all. With sigma–delta ADCs, it can be difficult to anticipate which these frequencies are. A large frequency range above the Nyquist frequency should therefore be tested.
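If an unexpected tone does show up during such a test, a quick calculation tells where a tone of known true frequency should appear after folding about the Nyquist frequency (a small sketch with example numbers, not a toolbox function):

f=900; fs=1024;                  % True tone frequency and sampling frequency (example values)
fa=mod(f,fs); fa=min(fa,fs-fa)   % Apparent (aliased) frequency: 124 Hz in this case

Here, a 900 Hz tone sampled at 1024 Hz would appear at 124 Hz, sweeping downward as the true frequency is swept up, just as described above.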

11.2.4.3 Simultaneous Sampling

As mentioned in Section 11.2.2.3, it is important that systems for noise and vibration analysis include sample-and-hold circuits for simultaneous sampling. This should be checked in the data sheet, as it is difficult to measure experimentally. This is particularly important if you build your own system, as dedicated systems in this field normally include such circuits.

11.2.4.4 Cross-Channel Match

Many applications in noise and vibration analysis involve estimation of two-channel functions such as frequency response, coherence, and cross-correlation functions. In estimates involving two channels, any difference in amplification or phase characteristics between the two channels will be added to the cross-channel estimate. The cross-channel match is therefore an important specification of a measurement system. For multichannel systems, it should be specified as the worst case, that is, between the two channels in the system with the largest mismatch. The cross-channel match is measured simply as the frequency response between any two channels. To investigate it, two channels are therefore connected to the same source by feeding a (usually random noise) signal from a signal generator to both inputs. The frequency response between the two investigated channels is then estimated using the procedures we will discuss in Chapter 13. The cross-channel match is typically far superior on systems using sigma–delta ADCs compared with any other design. Modern measurement systems are often within ±0.1% in amplification (the magnitude of the estimated FRF) and less than 0.5° in phase. The phase deviation usually becomes gradually worse closer to the upper frequency limit (the cutoff frequency of the digital lowpass filters).
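A simple lab check of the cross-channel match can be sketched as follows, assuming x and y are recordings of the same generator signal on the two channels and fs is the sampling frequency (a plain Welch-type H1 estimate is used here in place of the full FRF procedures of Chapter 13):

N=4096; w=0.5-0.5*cos(2*pi*(0:N-1)'/N);     % Blocksize and Hanning window
K=floor(length(x)/N);
Gxx=zeros(N,1); Gxy=zeros(N,1);
for k=1:K
    idx=(k-1)*N+(1:N);
    X=fft(w.*x(idx)); Y=fft(w.*y(idx));
    Gxx=Gxx+conj(X).*X;                     % Averaged auto-spectrum of channel 1
    Gxy=Gxy+conj(X).*Y;                     % Averaged cross-spectrum, channel 1 to 2
end
H=Gxy./Gxx;                                 % H1 estimate of the channel-to-channel FRF
f=(0:N-1)'*fs/N;
plot(f(1:N/2),abs(H(1:N/2)))                % Magnitude; ideally 1 (i.e., 0 dB match)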

11.2.4.5 Dynamic Range

The dynamic range of measurement systems is a very important measure. Unfortunately, this term is sometimes slightly abused, so it is important to understand what is meant by the manufacturer when interpreting specifications. Dynamic range is a measure of the ratio of the largest and the smallest signal that can be resolved in a particular measurement, usually


given in dB. A high dynamic range is essential for frequency analysis, especially for frequency response measurements, as these functions often span a very large dynamic range. Thus, with this definition, it is a measure of the noise floor of the instrument relative to the full-scale range. As there are some other parameters, particularly cross-channel talk (see below), which can reduce the total dynamic range in a measurement, it is very important to know how the dynamic range is defined. Unfortunately, many manufacturers are not very clear about this. It is quite common, particularly for ADC boards, that the quantization noise is specified as dynamic range. This can, however, easily be spotted, as it is typically specified as 6 dB per bit of ADC resolution. This, as is obvious from Section 11.2.2.1, should not be confused with the dynamic range as we have defined it. In most modern data acquisition hardware, the dynamic range is limited by spurious narrowband signals (tones), originating from clock frequencies, etc., in the hardware, that rise above the random noise floor of the instrument. In such a case, the dynamic range can be measured rather easily by short-circuiting the input, for example, by connecting a 50 Ω BNC terminating plug to the input connector. The spectrum will now contain only the spurious "noise," since the input is terminated, and the ratio of the highest peak in the spectrum to the full-scale range is the dynamic range. After setting the input voltage range to, say, A [V], a linear spectrum is measured with a few averages to obtain a stable spectrum of the background "noise" (which in this case is not really noise, as we assume it to be periodic). The dynamic range, D, in dB, is then computed by reading the root mean square (RMS) value of the highest peak in the spectrum, Xmax, and using the equation

D = 20 log10[(A/√2)/Xmax]    (11.2)

where A/√2 is the RMS value of the full-scale voltage. If the dynamic range is instead limited by random noise, i.e., no tones appear above the noise floor, the dynamic range can alternatively be measured by calculating the RMS value of the signal of the short-circuited channel (i.e., the noise floor), VRMS, and then calculating the dB ratio

D = 20 log10[(A/√2)/VRMS].    (11.3)
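In code, the two estimates could be computed along the following lines (a sketch; X is assumed to be an averaged linear spectrum in V RMS measured with the input terminated, x the corresponding time record, and A the full-scale voltage):

A=10;                                 % Full-scale voltage [V] (assumed)
Xmax=max(X);                          % Highest spurious peak in the spectrum [V RMS]
D_tones=20*log10((A/sqrt(2))/Xmax);   % Dynamic range limited by spurious tones, Eq. (11.2)
Vrms=sqrt(mean(x.^2));                % RMS of the terminated-input time signal
D_noise=20*log10((A/sqrt(2))/Vrms);   % Dynamic range limited by random noise, Eq. (11.3)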

11.2.4.6 Cross-Channel Talk

Another very important property of measurement systems is the cross-channel talk. This measure, usually specified in dB, quantifies how much of a sine tone connected to one channel comes across on another channel. This factor often limits the effective dynamic range of measurement systems, since if the cross-talk attenuation is less than the dynamic range (noise floor) of the instrument, it is impossible to tell whether a low-level spectral component appearing on a particular channel comes from the signal at that channel or is cross-talk from another channel, polluting the channel in question. The cross-channel talk is measured by connecting a sine with an amplitude close to the full-scale range to one of the channels, and measuring a linear spectrum on all other


channels, usually set to the same full-scale range. In the spectrum of an adjacent channel, the sine is usually detected at its frequency, and the cross-channel talk is defined as the ratio, reported in dB, of the amplitude measured on the tested channel to the true amplitude on the channel where the sine tone is connected. It is important to terminate all channels but the one where the sine is connected when performing this measurement.

11.2.5 Transient (Shock) Recording

In Chapter 3, we discussed that time domain analysis often requires that the signal is recorded with linear-phase filters. Many measurement systems for noise and vibration analysis are not designed with this in mind, although some systems are. It should be particularly noted that not all systems with sigma–delta ADCs are designed with linear-phase digital filters, due to price/performance issues. Before using a noise and vibration analysis system for recording transients for time domain processing, the system specification should therefore be carefully checked. If the system is not designed with linear-phase digital filters following the ADC, a solution could be to record the transients at the maximum sampling speed. Linear-phase digital filters can then be applied to the signals in postprocessing, either in the software provided with the system, if available, or by exporting data, for example, to MATLAB/Octave. This procedure usually works if the bandwidth of the transient is much lower than the bandwidth of the measurement system.
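Such postprocessing could, for example, be sketched as follows in MATLAB/Octave, assuming x is a transient recorded as a column vector at the maximum sampling speed and that a decimation by a factor D is wanted (fir1 is the standard FIR design function; in Octave it requires the signal package):

D=8;                               % Decimation factor down to the wanted rate (assumed)
L=200;                             % FIR filter order (even, for an integer group delay)
b=fir1(L,0.8/D);                   % Linear-phase FIR lowpass, cutoff at 0.8 of the new Nyquist
xf=filter(b,1,[x; zeros(L/2,1)]);  % Filter, with zero padding to cover the filter delay
xf=xf(L/2+1:end);                  % Remove the constant group delay of L/2 samples
y=xf(1:D:end);                     % Decimate to the new sampling frequency fs/D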

11.3 FFT Analysis Software

In this section, we will discuss how modern FFT analysis software is designed and the typical applications for which it is intended. A modern FFT analysis system is usually based on a laptop computer with software in which all analysis is done. There are many turnkey solutions on the market which provide all the functionality needed for typical applications in noise and vibration analysis. Most of these systems use a design established in the 1970s with the first FFT analyzers. The main thing characterizing these systems is that they are designed to process data blockwise, as will be described in Section 11.3.1. Before proceeding with this, we should say a few words about an alternative way of working. As mentioned several times earlier in this book, there are some disadvantages with online processing of noise and vibration signals, where time data are discarded once fed to the FFT processor. The main disadvantage is that, as we know from Chapter 10, spectrum analysis is always a compromise between frequency resolution and variance or amplitude accuracy (depending on whether the signal is random or deterministic). Therefore, it is not uncommon to want to analyze the same data several times, for example, with different frequency increments or with different time windows. For this reason, it is good practice to record time signals for subsequent processing with different measurement settings.


Another important issue is that of quality assurance of the measured data, which is particularly important after field measurements. As we discussed in Section 4.4, such analysis can only be done in the time domain, and this is a strong reason to record time signals. A third point is that there are some analysis procedures, for example, estimating spectral densities by the smoothed periodogram method, which are not at all block-based, but rely on performing one FFT on the entire data record and then smoothing this FFT result in the frequency domain. This estimator is usually not found in FFT analysis software because it does not fit the block-processing architecture; however, it is sometimes preferable to the more common Welch estimator. These points make it attractive to record time data in most cases. This is indeed possible with most analysis systems available on the market, which include the possibility to export data to, for example, MATLAB/Octave to perform non-block-based processing, if the analysis software itself does not support it. In the remaining sections of this chapter, we will discuss the typical architecture of most commercial systems for noise and vibration analysis, as this is most likely what you will be using.
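Before moving on, the smoothed periodogram mentioned above can be sketched in a few lines, assuming x is a long recorded time signal sampled at fs (scaling conventions, window choice, and edge effects are ignored in this minimal version):

N=length(x);
X=fft(x);                        % One FFT of the entire record
Pxx=(abs(X).^2)/(fs*N);          % Raw (unsmoothed) periodogram
M=31;                            % Number of frequency lines to smooth over
Ps=filter(ones(M,1)/M,1,Pxx);    % Moving-average smoothing in the frequency domain
f=(0:N-1)'*fs/N;
loglog(f(1:floor(N/2)),Ps(1:floor(N/2)))
xlabel('Frequency [Hz]'), ylabel('PSD')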

11.3.1 Block Processing

Before we continue with details about analysis hardware and software, we should take some time to discuss the principal operation of FFT analysis software. The FFT analysis system measures time data blockwise, meaning that a certain number of samples, a block (or frame) of, for example, 1024 samples, is acquired. As soon as this acquisition is made, the block of data is transferred, normally to RAM memory, where the computations take place. The FFT algorithm starts the computation while samples continue to stream into the now-empty buffer. When the FFT computation is finished, the result, the instantaneous spectrum, is moved to a new memory location where it is accumulated in averaging operations and possibly displayed, see Figure 11.8. It may happen that the FFT computation takes more time than it takes to acquire the next block of time data. In that case, data acquisition halts until the FFT computation is complete and the instantaneous spectrum is moved on to the averaging buffer. That is to say, the samples that are A/D converted in the meantime are lost. This usually causes no problems if the measured signal is stationary. In certain special applications it can, however, be important that time data are not lost. Therefore, many manufacturers specify what is called the real-time bandwidth, which is usually given as the highest bandwidth (analysis range) which can be analyzed with a given blocksize without any data loss. Usually,

(Block diagram: A/D → Time buffer → FFT → Accumulation of average.)

Figure 11.8 Principle diagram of the memory buffers of an FFT analysis system. Time data are buffered in memory right after A/D conversion, which is used for trigger functions, etc. The FFT process uses a certain memory location and stores the result in the final average accumulation.


such maximal performance is obtained without the accumulated spectrum being displayed, as displaying always costs some performance. It should be added that between the A/D converter and the FFT processor there is usually a certain buffer to facilitate the communication necessary when the data are transferred internally. Therefore, one can usually carry out a number of averages without the limitation above coming into play, as long as the data fit in this buffer. It is only when many averages are made (in some cases several hundred) that this limitation is noticed.

11.3.2 Data Scaling

Most FFT analysis software includes the possibility of using the sensitivity of the transducers so that the data are presented directly in the unit measured. If the software is well designed, there is also a field for setting an amplification factor for an external amplifier between the transducer and the analysis system. In this way, one can avoid having to keep track of the net result of a combination of transducer sensitivity and amplification; something experience tells us is more difficult than one may think. As we discussed in Section 7.3.3, most IEPE sensors can today be ordered with a built-in memory circuit in which all necessary information about the sensor is stored. If the measurement system supports reading this information, it is a good step forward toward safe and reliable measurements, as entering wrong scale factors for sensors is a common cause of error in vibration measurements.
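As a small illustration of how quickly such net scale factors become error prone, consider an accelerometer with a sensitivity of 100.4 mV/g connected through an external amplifier (the gain value below is just an assumed example):

S_acc=100.4e-3;              % Accelerometer sensitivity [V/g]
g=9.80665;                   % [m/s^2 per g]
gain=10;                     % External amplifier gain [V/V] (assumed example)
S_chan=S_acc/g*gain;         % Net sensitivity at the analyzer input [V per m/s^2]
S_mV_EU=1000*S_chan          % Sensitivity to enter in the software, approx. 102 mV/EU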

11.3.3 Triggering

All FFT analysis software packages have a trigger function, that is, a start function, which can be used in at least three ways. Although the terminology differs between manufacturers, the following should be helpful for understanding the trigger function in most analysis systems. Most software packages have only a common level trigger. It may also be possible to set a certain hysteresis, that is, when the trigger level is reached, a certain level above and/or below this level must be reached before a new trigger can be activated. In general, one may set the trigger level and pulse edge (positive or negative). In addition to the trigger level, the block processing can usually be controlled by setting a trigger mode condition according to the following. This setting may have different names but is almost always available in noise and vibration analysis systems.

● Free run means that the trigger function is not activated. The analysis system then treats the incoming data as they arrive from the A/D converter.
● First frame or continuous trigger means that the trigger function is only used to begin the measurement; afterward, the data are treated as with free run. This function is used if one wants the measurement to begin when the input signal exceeds a certain level.
● Every frame or transient mode means that the trigger condition must be fulfilled for each time block. This function is used to synchronize data in some way, for example, when analyzing transients, where a number of transients are to be acquired and a spectrum calculated for each transient. An additional feature is often found with this type of trigger, which gives the ability to check the data for each block before they are added to the average. This is useful for impact excitation, which is described in Section 13.8.


Figure 11.9 Illustration of pretrigger. A part of the signal before the triggering instant is acquired so that the entire pulse can be analyzed.


In some applications, a built-in signal generator in the FFT analysis system is used to control a shaker or a loudspeaker. In the case that the excitation signal is transient, a special trigger function called source triggering, or similar, is sometimes used. This implies that every time block for the input signals is synchronized with the transient signal sent out from the signal generator. See Sections 13.9 and 14.4 for details about excitation signals. When the input signal is triggered at a certain level, it is also essential to be able to set a pretrigger so that no part of the signal is cut off. If we imagine a transient as in Figure 11.9, which we trigger at some level along the positive pulse edge, then without pretriggering we would only obtain the part of the transient that comes after the triggering point. To avoid cutting the signal like this, with the pretrigger we specify a time interval before the trigger instant that is also included in the acquired time block. This presumes, of course, that the time data are successively stored in memory in a so-called FIFO (first in, first out) buffer. Similarly, in some cases one may want a delay after the triggering point before data are acquired, in order to, for example, compensate for a time delay in the physical system being measured.
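Extracting a triggered block with pretrigger from a recorded signal can be sketched as follows (the trigger level, pretrigger time, and blocksize are assumed example values; x is a recorded signal and fs its sampling frequency):

TLevel=0.5;                            % Trigger level [V] (assumed)
Tpre=0.01;                             % Pretrigger time [s] (assumed)
N=1024;                                % Blocksize
Npre=round(Tpre*fs);                   % Pretrigger in samples
n0=find(x(1:end-1)<TLevel & x(2:end)>=TLevel,1);  % First positive-slope crossing
xblock=x(n0-Npre:n0-Npre+N-1);         % Block including the pretrigger part
                                       % (assumes the crossing occurs later than Npre samples in)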

11.3.4 Averaging

Most of the time, spectrum analysis involves an averaging process in the frequency domain, known as frequency (domain) averaging, as explained in Chapter 10. FFT analysis software thus allows this form of averaging to be selected. Usually, there is also a choice of time domain averaging. These two forms of averaging are seemingly similar but have quite different implications. Frequency averaging decreases the variance in each spectral line if there is random noise between the estimated instantaneous spectra (each scaled and squared FFT result). Time domain averaging, on the other hand, is a completely different process, which can be used to improve the signal-to-noise ratio of deterministic signals. This is, however, only possible in two different cases:
1. for repeated transients with noise added on each transient, where, using a well-defined triggering condition, a time average can remove the random noise and result in a "clean" average where the transient appears without the noise, or
2. for signals which are periodic exactly within the time window used for averaging, where after some (many) averages only harmonic components which are periodic within the measurement block (window) remain in the averaged signal.
The second averaging case can be used either in order tracking applications with synchronous sampling, see Chapter 12, or to improve estimates of frequency response functions when using periodic excitation signals, see Section 13.12.
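A minimal sketch of triggered time domain averaging (case 1 above) could look like this, assuming x is a recording containing repeated noisy transients, trigIdx is a vector of trigger sample indices, and N is the blocksize:

xavg=zeros(N,1);
M=length(trigIdx);                  % Number of triggered blocks
for k=1:M
    xavg=xavg+x(trigIdx(k):trigIdx(k)+N-1);
end
xavg=xavg/M;                        % Random noise is reduced by roughly sqrt(M)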


In most analysis software, there is also a choice of averaging type, which can usually be either "linear" (sometimes called "stable"), "exponential," "peak hold," or an averaging type called "interrupted" or similar. The first choice produces a linear average where each value is equally weighted in the averaging process; this is the standard form of averaging. Exponential averaging can be used for spectral averages to produce a result similar to old analog analyzers. The exponential average is an average where the result is formed by weighting the most recent spectrum by 1/2 and older spectra with the series 1/4, 1/8, …. This produces an average similar to the result of an analog analyzer. It is mostly used for monitoring purposes where spectra are monitored continuously. The interrupted average is used in conjunction with impact testing, which will be described in Section 13.8. With this averaging type, the data acquisition is stopped after acquisition of each time block, and usually the user can manually select whether to use the current time block in the averaging process or to reject it. It can only be used if each data block is triggered. Peak hold, finally, is an averaging type which, for frequency domain averaging, keeps the maximum value of all averaged spectra at each frequency. It can, for example, be used for conservative measurements of periodic signals with some contaminating noise, or to track (slowly) sweeping sines. You should look in the documentation of your analysis software for more details on how the different averaging types are used in your particular software.

11.3.5 FFT Setup Parameters

The parameters needed for spectrum or frequency response estimation are blocksize, overlap factor (usually in percent of the blocksize), frequency range (or sampling frequency), and number of averages to perform. The number of averages specified is always the actual number of FFTs that will be performed. In order to calculate the random error for a spectral density estimate with a particular overlap percentage and number of averages, the formulas from Section 10.3.5 can be used. The only method available for spectral density estimates in commercial software packages for noise and vibration analysis is usually Welch's method (see Section 10.3.2). As we know from Chapter 9, the relation between sampling frequency and blocksize on the one hand, and the frequency increment on the other hand, is that

Δf = fs/N    (11.4)

where Δf is the frequency increment of the DFT, fs is the sampling frequency, and N is the blocksize.

11.4 Chapter Summary

This chapter has given an overview of some of the most important concepts relating to hardware and software for measurement and analysis of noise and vibration signals. We have presented many factors which explain why, in most cases, dedicated hardware designed for noise and vibration measurements has better price/performance than systems built from standard components such as A/D boards. The main reason for this is that


a good noise and vibration measurement system puts high demands on the precision of the A/D conversion, the input dynamic range, etc. This makes modern measurement hardware based on the sigma–delta ADC superior. Some important specifications for a good measurement system for noise and vibration analysis are the following:

● anti-aliasing filters with high cross-channel match,
● sample-and-hold circuits for simultaneous sampling and accurate cross-channel phase match,
● a low noise floor for the high dynamic range necessary for spectra and, particularly, frequency response measurements,
● a high bit resolution; typically 24 bits is standard today.

11.5 Problems

Many of the following problems are supported by the accompanying ABRAVIBE toolbox for MATLAB/Octave and by further examples which can be downloaded with the toolbox. If you have not already done so, please read Section 1.6. ABRAVIBE is completely free and can be downloaded from www.abravibe.com, together with example files and other material complementing this book, including a solutions manual for all book problems.

Problem 11.1 If you have a commercial measurement system, go through your system documentation and find the specifications presented in this chapter so you know the limitations of your system.

Problem 11.2 Assume you are going to measure an acceleration with an IEPE accelerometer. We assume the sensor is well chosen so that it gives close to maximum output voltage (full range). The accelerometer has a scaling constant of 100.4 mV/g. Which settings should you use for
1. the input voltage range?
2. the sensitivity in mV/EU if your measured unit is m/s^2? (EU is commonly used for Engineering Unit.)
If the acceleration you measure is 34 m/s^2, what will the voltage from the accelerometer be?

Problem 11.3 Assume you have a microphone connected to a microphone power supply, which is then connected to a channel on your measurement system. The microphone has a sensitivity of 38 mV/Pa, and the power supply is set to a gain of 60 dB. What is the maximum sound pressure level in dB SPL that you can measure, if the maximum voltage from the power supply and in your measurement system is 10 V? (Note: dB SPL means dB relative to 20 μPa.)

References

Higgins RJ 1990 Digital Signal Processing in VLSI. Prentice Hall.
Oppenheim AV and Schafer RW 1975 Digital Signal Processing. Prentice Hall.
Proakis JG and Manolakis DG 2006 Digital Signal Processing: Principles, Algorithms, and Applications, 4th edn. Prentice Hall.


12 Rotating Machinery Analysis

When analyzing vibrations and sound (noise) from rotating machines, a special type of frequency analysis, usually called order tracking, is often used. This analysis is based on tracking the RMS levels of time-varying sine tones resulting from the periodic forces acting on the machine. In this chapter, we shall study how such analysis is carried out, while we will not study the different mechanisms behind vibration problems in rotating machines more than briefly. The analysis methods we will focus on in this chapter are predominantly used in the automotive and aerospace industries, on power plant generators, etc. Other areas where rotating machinery analysis is commonly used are, for example, vibration monitoring applications in the process industry and balancing of turbine engines and many other machines. The subject is vast and can fill several books, see, for example, Wowk (1991). The procedures for order tracking that we describe in this chapter are not commonly found in textbooks. A good source for a more comprehensive discussion is the dissertation by Blough (1998).

12.1 Vibrations in Rotating Machines

Two properties are of particular interest in the analysis of rotating machinery. As always in vibration analysis, structural resonances (modes) are of interest because they amplify vibrations. Second, however, on rotating machines, vibrations directly or indirectly caused by the rotation itself are also of interest because they can become large without any resonance amplification. These latter vibrations are caused by, for example, imbalances, axle deformation or misalignment, defects in bearing races, defects in teeth on gears, etc. Each of these sources of vibration produces vibration at a particular factor times the rotational speed of the machine. Rotational speed-dependent vibrations in rotating machines can, of course, occur at a frequency where the structure has a resonance, which can often cause very high vibration levels and sometimes disaster. A factor times the rotational speed is called an order, where the rotation speed is referred to as order 1, two times the rotation speed is order 2, etc. Orders do not need to be integer numbers; we can have order 2.5 or 3.938, etc. The methods for analyzing vibrations in rotating machines, which we are focusing on here, are mainly based on measuring the amount of noise or vibration due to either an order or a resonance frequency. The order number of the dominant vibration can often be used to deduce from where the vibration originates. If,


for example, we have a gearbox with gear ratio 1:2.3 and we have high vibration levels at order 4.6, the problem is related to something that happens twice per rotation of the output shaft.

12.2 Understanding Time–Frequency Analysis

Most analysis of rotating machines is based on investigating the vibrations during a speed sweep, where the machine is either run up from a low to a high RPM (revolutions per minute), or run down from a high to a low RPM. The time data measured during the speed sweep are divided into smaller segments, each of which is processed by FFT to find the spectrum. It is evident that we are here talking about a special case of nonstationary signals, as the frequency content of the signal is changing. Therefore, it is important to understand some fundamental properties of this type of signal. First, we need to have a model of the type of signals encountered. The assumption made in analysis of rotating machines is usually that the signal comprises a number of sine tones with some instantaneous frequency which changes with time, with each sine tone having a time-varying amplitude (and thus RMS level). We can thus formulate the signal as

x(t) = Σ_{n=1}^{No} An(t) cos(ωn(t)t + φn),    (12.1)

where An(t) is the time-varying amplitude of the n-th order component, ωn(t) is the angular frequency of the same component, φn is the phase angle offset of the tone, and No is the number of order-related components in the signal. The concept of instantaneous frequency can be understood by observing that for a stationary sine tone, with constant amplitude and frequency, we have

x(t) = A cos(ωt) = A cos(Φ(t)),    (12.2)

from which it can be seen that the (constant) angular frequency is

ω = d(ωt)/dt = dΦ(t)/dt.    (12.3)

From this, it seems reasonable to define the instantaneous (angular) frequency of any signal

x(t) = A(t) cos(Φ(t)),    (12.4)

by

ωi(t) = dΦ(t)/dt,    (12.5)

where we have used the index i on ωi to indicate "instantaneous." When we are analyzing a time-varying signal, there is an intrinsic problem related to the bandwidth–time product, BT product, discussed in Section 9.1.1. The instantaneous frequency is not possible to estimate instantaneously, due to the fact that we need to measure the signal during some time to be able to compute a spectrum. Furthermore, the

Figure 12.1 Two spectrograms of a recorded microphone signal during a sweep from approximately 1000 to 6700 RPM (see Section 12.9 for details about the signal); in (a) the frequency increment was approximately 4 Hz, and in (b) approximately 1 Hz. It can be seen in (a) that the large frequency increment yields a relatively fine time resolution, whereas in (b) the properties are reversed [Courtesy of Prof. Jiri Tuma].

frequency resolution we obtain using a particular measurement time is inversely related to the measurement time. Therefore, the finer frequency resolution we choose, the poorer time resolution we obtain, and vice versa. This results in an ambiguity when we are analyzing speed sweep signals, and it is very important to understand this limitation. To illustrate the BT product limitation, Figure 12.1 shows two so-called spectrograms of a recorded microphone signal during a sweep from approximately 1000 to 6700 RPM, computed with two different FFT blocksizes (the data will be described in Section 12.9). A spectrogram is a plot of a number of spectra with a certain blocksize versus time. On the left-hand side, in Figure 12.1(a), the frequency resolution is coarse and the time resolution thus fine, whereas in the right-hand spectrogram in Figure 12.1(b), the opposite situation is shown. Comparing the two spectrograms, it is obvious how the change in time–frequency resolution affects the result. As a consequence of the time–frequency limitations, it should be realized that from any time–frequency analysis, there is not one true answer to the question of what the spectral content of a time-varying signal is; it depends on how it is observed. In most cases of speed sweep analysis, the speed of the sweep therefore has to be kept fairly slow so that changes in amplitude of order components do not occur too fast. In many cases, it is a good idea to standardize some experiment and analysis parameters, such as frequency resolution and speed sweep rate, to facilitate comparisons between different measurements.
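A basic spectrogram such as those in Figure 12.1 can be computed with a simple loop over FFT blocks (a sketch, assuming y is the recorded signal and fs its sampling frequency; overlap and proper amplitude scaling are left out):

N=1024;                              % Blocksize; sets the time-frequency trade-off
w=0.5-0.5*cos(2*pi*(0:N-1)'/N);      % Hanning window
K=floor(length(y)/N);                % Number of (non-overlapping) blocks
S=zeros(N/2,K);
for k=1:K
    Y=fft(w.*y((k-1)*N+(1:N)));      % Windowed FFT of block k
    S(:,k)=abs(Y(1:N/2));            % Magnitude at the positive frequencies
end
f=(0:N/2-1)'*fs/N;                   % Frequency axis
t=((0:K-1)'+0.5)*N/fs;               % Block center times
imagesc(t,f,20*log10(S)), axis xy    % dB color map, as in Figure 12.1
xlabel('Time [s]'), ylabel('Frequency [Hz]')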


12.3 Rotational Speed Signals (Tachometer Signals)

In most cases of rotating machinery analysis, a rotation speed transducer, a tachometer, is connected to the rotating machine to measure the RPM. Such a transducer is usually either optical or inductive. In either case, it produces some form of pulse signal where the time between the pulses is related to the rotation speed, vr(t), expressed in rotations per minute (RPM), as

vr(t) = 60/(Np(t2 − t1)),    (12.6)

where Np is the number of pulses per revolution, and t1 and t2 are the time instances of two pulses. The estimated RPM readings are very important for the analysis of rotating machinery, as we will see later in this chapter. Many FFT analysis systems designed for rotating machinery analysis therefore have functionality to ensure that the rotation speed measurement is accurate. For example, lowpass filtering is often included to remove high-frequency disturbances, averaging to decrease random error, and limiting of the change in the estimate of vr(n) between nearby n (slew rate limitation). The RPM as a function of time can be rather easily computed from a measurement of the tacho signal. Usually, the tacho signal is recorded on a measurement channel simultaneously with the accelerometer and/or microphone signals so that the tacho signal is sampled synchronously with the signals to be analyzed. To illustrate how the RPM as a function of time can be computed, we will start with an example where a sine sweep is generated in MATLAB/Octave, which is then processed to obtain the RPM–time profile.

Example 12.3.1 Generate a simulated run-up signal of an engine increasing in RPM from 600 to 6000 RPM in 30 seconds. The signal should contain the fundamental and first two harmonics, with constant RMS levels of 1, 0.5, and 0.25, respectively. Use the fundamental frequency of the same signal to compute the RPM as a function of time.

The following MATLAB/Octave code can be used to generate a sweeping sine. Note that 600–6000 RPM corresponds to 10–100 Hz. The code generates a tacho signal in variable tacho and the signal with orders 1, 2, and 3 in variable x.

fs=2000;                % Sampling frequency
T=30;                   % Time duration
f0=10;                  % Start freq.
f1=100;                 % End freq.
t=(0:1/fs:T)';          % Time axis
Sq2=sqrt(2);            % Fundamental amplitude
x=Sq2*chirp(t,f0,t(end),f1);             % Fundamental
tacho=x;                                 % Tacho signal
x=x+0.5*Sq2*chirp(t,2*f0,t(end),2*f1);   % Add order 2
x=x+0.25*Sq2*chirp(t,3*f0,t(end),3*f1);  % Add order 3

The first second of the generated signal is plotted in Figure 12.2. We now use the tacho signal to extract the instantaneous RPM at each positive zero crossing, since our tacho signal will be a pure sine wave. In analysis systems, it is common to be able to

Figure 12.2 First second of sine sweep with orders 1, 2, and 3 from Example 12.3.1.

set both the triggering level and the slope where the tacho pulses are counted. An example of MATLAB/Octave code to generate the RPM–time profile is as follows:

% Produce +1 where signal is above trigger level
% and -1 where signal is below trigger level
TLevel=0;
xs=sign(tacho-TLevel);
% Differentiate this to find where xs changes
% between -1 and +1 and vice versa
xDiff=diff(xs);
% We need to synchronize xDiff with variable t from the
% code above, since DIFF shifts one step
tDiff=t(2:end);
% Now find the time instances of positive slope positions
% (-2 if negative slope is used)
tTacho=tDiff(find(xDiff == 2));
% Count the time between the tacho signals and compute
% the RPM at these instances
PPR=1;                          % Pulses per revolution (one per cycle of the tacho sine)
rpmt=60/PPR./diff(tTacho);      % Temporary rpm values
% Use three tacho pulses at the time and assign mean
% value to the center tacho pulse
rpmt=0.5*(rpmt(1:end-1)+rpmt(2:end));
tTacho=tTacho(2:end-1);         % diff again shifts one sample

The code above produces the variables rpmt (temporary RPM, see below) and tTacho, the latter containing the time instances where the tacho signal crossed the trigger level with positive slope. In most applications, it is preferable to have

Figure 12.3 RPM-time profile from Example 12.3.1, in (a) without smoothing prior to the interpolation, and in (b) with smoothing prior to the interpolation.

an estimate of the instantaneous RPM corresponding to each sample in the sampled signals. To get this, we can interpolate the estimated RPM values onto the time axis of the original, sampled signals with the following line of code:

rpm=interp1(tTacho,rpmt,t,'linear','extrap');

The RPM as a function of time generated by the above procedure is plotted in Figure 12.3(a). It is apparent that the estimates have some "noise." This comes from the uncertainty of ±Δt/2 in the time instance of each trigger position, which produces "quantization noise." In order to reduce this error, we can add a smoothing filter (see Section 3.3.3) before the last interpolation. The result of using 10 trigger instances to average over is plotted in Figure 12.3(b). As can be seen in the figure, this produces a more stable and reliable RPM–time profile. End of example.

The instantaneous RPM estimate has drawn some interest in the literature (Blough, 1998; Fyfe and Munck, 1997; Saavedra and Rodriguez, 2006; Vold and Leuridan, 1993). The most reliable way to improve the accuracy of the RPM–time profile, however, is to sample the tacho signal with a higher sampling frequency, thus reducing the uncertainty in the trigger instants. Some commercial systems for noise and vibration signals therefore include special tacho input channels with this feature.
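The smoothing mentioned above, averaging over, say, 10 trigger instances before the interpolation, could be implemented along these lines (a sketch using a simple centered moving average as the smoothing filter):

L=10;                                     % Number of trigger instances to average over
rpms=conv(rpmt,ones(L,1)/L,'same');       % Centered moving-average smoothing (edge effects ignored)
rpm=interp1(tTacho,rpms,t,'linear','extrap');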

12.4 RPM Maps

The typical analysis procedure in rotating machinery analysis is to make a run-up or a coast-down of the machine or engine, during which the time signals are recorded or analyzed in real time. A run-up is when the rotational speed of the machine or engine is increased from a low RPM to a high RPM, for example, from 800 to 5500 RPM for an automobile engine. On some machines, for example, electrical generators, the RPM cannot


be smoothly swept up, for example, because some machines are made to operate at a constant RPM (related to the power line frequency). In such cases, it is common to shut off the drive of the machine and measure the vibrations during the coast-down of the machine. Another example where coast-down is often used is gearbox analysis, where a run-up and a coast-down will find problems on different sides of the gear teeth. In the analysis of the time signal measured from the run-up or coast-down, the time signal is typically divided into short segments, for each of which an instantaneous spectrum is calculated, sometimes called a short-time Fourier transform, or STFT. Each of these spectra is "stamped" with the RPM during the corresponding time segment, either by taking the mean RPM from the RPM–time profile or, in some cases, by taking the RPM at the end of the time block. The set of spectra thus produced is often referred to as an "RPM map," which can be plotted in several formats.
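Building an RPM map in code amounts to stamping each instantaneous spectrum with its RPM. A minimal sketch, reusing the block-loop idea from the spectrogram sketch in Section 12.2 and the rpm vector from Example 12.3.1 (y is the measured vibration signal, sampled synchronously with rpm), might be:

N=1024; K=floor(length(y)/N);      % y: vibration signal, sampled synchronously with rpm
w=0.5-0.5*cos(2*pi*(0:N-1)'/N);    % Hanning window
Smap=zeros(N/2,K); rpmAxis=zeros(K,1);
for k=1:K
    idx=(k-1)*N+(1:N);
    Y=fft(w.*y(idx));
    Smap(:,k)=abs(Y(1:N/2));       % Instantaneous (unscaled) spectrum of block k
    rpmAxis(k)=mean(rpm(idx));     % Stamp the block with its mean RPM
end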

12.4.1 The Waterfall Plot

A common way to plot RPM maps is the waterfall plot. This is a three-dimensional diagram with frequency on the x-axis, amplitude on the y-axis, and rotation speed on the z-axis. In this type of plot, order-related spectrum components, which occur at frequencies proportional to the rotation speed, will be visible as peaks along a straight line, whereas structural resonances will often be visible as peaks at fixed frequencies. In the diagram, it can thus be seen which peaks are highest, at which speed the maximum occurs, and whether they are caused by resonances or by rotation-speed-dependent phenomena. Figure 12.4 shows a waterfall diagram of the sine sweep from Example 12.3.1. The highest peak in this vibration signal follows order one, and orders two and three are also clearly seen.


Figure 12.4 Waterfall diagram of the sine sweep from Example 12.3.1. Orders one, two, and three are clearly seen.


Figure 12.5 Example of a color map plot, converted to gray scale. The color map is a view of the waterfall diagram from above, with frequency on the y-axis and rotation speed on the x-axis. The spectral peaks are coded in different colors; the higher the vibration amplitude, the larger the symbol. In this type of diagram, it is often easier to distinguish different orders and resonance frequencies, some of which may be hidden in the waterfall diagram. The gray scale print here does not do the figure full justice; see the problem section for examples of producing this type of plot in MATLAB/Octave.

12.4.2 The Color Map Plot

A disadvantage of the waterfall plot is that smaller peaks can sometimes be difficult to distinguish. An alternative to the waterfall diagram, which utilizes modern computer graphics, is the so-called color map plot. An example of this type of plot is shown in Figure 12.5, although the print of this book only allows gray scale to be printed. In the color map plot, it is often easier to differentiate the peaks, especially when an order meets a resonance frequency. A common way to obtain more detail in the color map is to plot the logarithm of the RPM map, which enhances lower peaks.
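A minimal plotting sketch, assuming the variables S, f, and rpmMap from the RPM map sketch in Section 12.4 (and that rpmMap is monotonically increasing, as during a run-up):

% Plot the RPM map as a color map; the logarithm (dB) enhances lower peaks
imagesc(rpmMap,f,20*log10(S+eps))
axis xy                               % low frequencies at the bottom
xlabel('RPM')
ylabel('Frequency [Hz]')
colorbar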

12.5 Smearing

From the RPM map discussed in Section 12.4, so-called order tracks are often produced. An order track is the extracted RMS level of a particular order versus RPM. To see how we can extract this information accurately, we need to discuss the concept of smearing. As discussed in Section 12.2, the signals encountered in order tracking analysis are nonstationary. The spectrum of the signal is thus not constant during the measured time block, which violates the DFT assumption. An assumption made in order tracking analysis is that the change in frequency of a tone we are studying is small during the acquisition of the short time data block used for each FFT. Smearing is an effect of the DFT which occurs if the frequency of the signal changes during the duration of the time block.


Figure 12.6 Linear spectrum of a sinusoid with RMS value of 1 and a frequency which sweeps from f0 − kΔf/2 to f0 + kΔf/2 for k = 0, 2, and 4. In (a), a Hanning window was used before the spectrum computation, and in (b), a flattop window. It can be seen that the peak decreases and the spectrum on both sides of the peak increases, an effect called smearing.

Analysis of the smearing effect shows that it depends on the time window used and on the amount of frequency change during the time block, relative to the frequency increment, Δf. To illustrate the smearing effect, Figure 12.6(a) shows the result of a spectrum computation using a Hanning window on a sinusoid swept linearly upward in frequency. Three spectra are shown in the figure, for no frequency change, and for a frequency change of 2Δf and 4Δf, respectively. As can be seen, the smearing effect causes the peak to decrease, whereas the frequency bins on both sides of the peak are increased.

A better window to use for this type of analysis could be the flattop window, which is often recommended for periodic signals. In Figure 12.6(b), the spectrum of the same signal as in Figure 12.6(a), but with a flattop window instead of the Hanning window, is plotted. It can be seen that the flattop window has the peculiar effect that the smearing causes a peak which is higher than the true value. As evident from the plot, the error is smaller than for the Hanning window due to the greater width of the main lobe of the flattop window. However, the overestimation of the RMS level is, in most cases, an unwanted effect. As will be shown next, we can also considerably reduce the error in the order track by utilizing a different estimation approach.

There is a third alternative, which is to use a Hanning window and to sum the RMS value over several DFT bins around the peak frequency, as described by Equation (10.51). It turns out that this alternative, using five frequency values centered around the peak, is a much better alternative, which is due to the fact that Parseval's theorem is valid also for sweeping sines. This means that the RMS level calculated over the entire frequency range will always be identical to the “true” RMS level calculated in the time domain. Also in a narrower frequency range around a single spectral component, the summed RMS level is very close to the true RMS value. In Figure 12.7, the result of using this method with a summation over five frequency bins is plotted versus the total frequency change of the sine in number of frequency increments, Δf. In the same figure, the results of taking the peak of the spectrum with Hanning, flattop, and rectangular windows are also plotted.
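The smearing effect and the benefit of the five-bin summation are easy to reproduce. The MATLAB/Octave sketch below is an illustration of the principle (not the code used for the figures): a sinusoid with RMS value 1 sweeps k frequency increments during one block, and the peak of the Hanning-windowed linear spectrum is compared with the RMS obtained by summing the power in five bins around the peak. The Parseval-based scaling used here is one way of implementing the band summation; the exact form of Equation (10.51) may differ.

fs=1024; N=1024; df=fs/N;             % one-second block, df = 1 Hz
t=(0:N-1)'/fs;
f0=200; k=4;                          % sweep k*df during the block
finst=f0-k*df/2+k*df*t*fs/N;          % instantaneous frequency [Hz]
x=sqrt(2)*sin(2*pi*cumsum(finst)/fs); % swept sine with RMS value 1
w=0.5-0.5*cos(2*pi*(0:N-1)'/N);       % Hanning window
X=fft(x.*w);
L=sqrt(2)*abs(X(1:N/2+1))/sum(w);     % linear spectrum, RMS scaled
[Lpeak,kp]=max(L);                    % peak value: too low due to smearing
% RMS by summing the power in five bins around the peak
P=2*abs(X(kp-2:kp+2)).^2/(N*sum(w.^2));
Lsum=sqrt(sum(P));                    % close to the true RMS value of 1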


Figure 12.7 Smearing effect for different windows. The graph shows the error in the RMS value in %, when measuring the linear spectrum of a sinusoid with linearly increasing frequency within the time block. The x-axis shows how many DFT spectral lines the signal's frequency changes during the time of the FFT block. It is evident from the figure that the rectangular window is very sensitive to frequency variation in the measured signal, while the flattop window (ISO window, see Section 9.3.9) is less sensitive. The flattop window has the property that the smearing error is positive up to a relatively large frequency change in the measured signal. The best method is, however, to use a Hanning window and sum five frequency bins centered on the peak, as shown in solid.

As can be seen in the figure, the error in RMS level is negligible when using the frequency summation with the Hanning window, compared with using the peak value for any of the windows. Using a Hanning window and summing five frequency lines around a peak to obtain the RMS level is thus the preferred method for order tracking with fixed sampling frequency.

12.6 Order Tracks

When interpreting how different rotational-speed-dependent components contribute to vibration or sound levels, two-dimensional diagrams called order tracks are often used. These diagrams are calculated from the RPM map by extracting the RMS value of an order component versus the RPM. A principal illustration of the process of generating order tracks as well as RPM maps is shown in Figure 12.8. Using the process described in Section 12.5, of summing five frequency lines after applying a Hanning window in the spectrum computation, for the first three orders of the sine sweep in Example 12.3.1, results in the order tracks plotted in Figure 12.9. Since the RMS levels of all tones are constant during the sweep, the result is a check that the order tracking procedure works.

When computing the RMS level of an order by summing five frequency lines as recommended in Section 12.5, the “filter” will have constant frequency bandwidth, i.e., a constant bandwidth in Hz. Sometimes it can be better to use a constant order bandwidth for



Figure 12.8 Illustration of the principle of order track diagrams. While the time data are collected, the rotation speed is measured. Spectra are computed with a particular RPM increment and stored in an RPM map. From this RPM map, the RMS values for the orders of interest are then extracted and plotted in order track plots.


Figure 12.9 Order tracks. The first three orders of the sine sweep used in Example 12.3.1 are plotted versus RPM. The RMS level of each order was computed by summing five frequency bins in the spectrum after applying a Hanning window. The levels of 1, 0.5, and 0.25 correspond to the values used in the example.

the summation, which can easily be accomplished by increasing the number of frequency lines the RMS summation is made over as the RPM increases. The main reason for using constant order bandwidth is that the smearing effect gets worse for higher orders, since the change in frequency increases with the order. If order one changes, for example, by one


frequency increment, Δf, inside the FFT block, then order n will change by nΔf. When using constant order bandwidth with the recommended procedure of using a Hanning window, the order bandwidth must be set so that it corresponds to at least five frequency lines at the lowest RPM.
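As a sketch of the recommended extraction, assuming the same variables S, f, and rpmMap as in the RPM map sketch in Section 12.4, the RMS of order o can be taken from each column of the map by summing five bins around the order frequency. The division by the Hanning window's normalized noise bandwidth of 1.5 bins makes the sum consistent with the time-domain RMS; this is an illustration of the principle, not the book's exact code.

o=2;                                  % order to track (assumed choice)
NENBW=1.5;                            % Hanning normalized noise bandwidth [bins]
ordTrack=zeros(1,size(S,2));
for n=1:size(S,2)
    fo=o*rpmMap(n)/60;                % frequency of order o at this RPM [Hz]
    [dum,k]=min(abs(f-fo));           % closest spectral line
    k=max(3,min(k,length(f)-2));      % keep the five-bin band inside the axis
    ordTrack(n)=sqrt(sum(S(k-2:k+2,n).^2)/NENBW);
end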

12.7 Synchronous Sampling

In order to avoid the smearing effect discussed above, so-called synchronous sampling can be used instead of sampling the data with a fixed sampling frequency, as has been assumed so far. In the past, this was done with intricate analog electronic devices called phase-locked loop circuits. With the processing power available today, it is instead accomplished using a resampling technique, which will be presented in this section. This technique was developed and patented by Hewlett-Packard in the 1980s, Potter (1990a,b), and has become the industry standard for synchronous sampling of rotating machinery signals.

The aim of synchronous sampling is to sample the vibration or noise signal at equal angles along each cycle (one revolution being one cycle), instead of equidistantly in time. The samples after synchronous sampling are usually said to be in the order domain, or angle domain. With, for example, 16 samples per revolution of the engine (or shaft), a signal will be obtained which is periodic with constant period length, if the x-axis is scaled in cycles. The DFT of this signal will in turn have peaks at locations corresponding to the harmonics of this basic cycle (one rotation of the engine), which are exactly the orders, or fractions of them, see Section 12.7.1.

Before going into the DFT processing of synchronously sampled data, we should look at how to obtain samples synchronous with the angle of the engine. We will describe two approaches here: (i) using one tacho pulse per revolution as synchronization pulses, and (ii) using the RPM–time profile we already computed in Section 12.3. We assume that we have the tacho signal recorded together with the vibration signal we wish to resample, in two separate records.

If we have a tacho signal with one or more pulses per revolution, we can easily obtain a trigger once per revolution by simply dropping all but one pulse per revolution. We can also assume that the RPM does not change substantially during one cycle, so we approximate the speed as constant during each cycle (this can, of course, be refined, but the first approximation used here usually works well). This is a reasonable assumption since there is always inertia in the machine preventing rapid changes of the rotational speed. We thus place the samples at equidistant times between two consecutive tacho pulses. We now do this for each cycle (tacho pulse pair) during the run-up and obtain the time instances where we should have sampled the signal, had we not used a fixed sampling frequency during the recording. If the number of samples per revolution is an integer factor times the number of tacho pulses per revolution, it is, of course, not necessary to remove the tacho pulses to have one pulse per revolution. The resampling can then be done with higher accuracy using all tacho pulses.

The next step is to resample the vibration signal onto this new time axis. It turns out that this can be done arbitrarily accurately by simply using an upsampled vibration signal and then a linear interpolation algorithm. In most cases, 10 or 20 times


oversampling gives enough accuracy. The procedure will be illustrated next by a MATLAB/Octave example.

Example 12.7.1 Assume we have measured a vibration signal and a tacho signal with four pulses per revolution, using a regular data acquisition system for noise and vibration analysis. Resample the signal with 16 samples per cycle.

We showed how to obtain the time instances of each tacho pulse in Example 12.3.1, in the variable tTacho (before the last interpolation onto the final time axis!). We now start with this signal and add the following MATLAB/Octave code. Note that we could use all tacho pulses here, since we have four tacho pulses per revolution and 16 samples per cycle, which means we want exactly four samples per tacho pulse. To make the example more general, however, we will use only one pulse per revolution.

tTacho=tTacho(1:PPR:end);    % Pick out every PPR:th (here 4th) pulse
ts=[];                       % Synchronous time instances
SampPerRev=16;
for n = 1:length(tTacho)-1
    tt=linspace(tTacho(n),tTacho(n+1),SampPerRev+1);
    ts=[ts tt(1:end-1)];
end
% Now upsample the original signal 10 times (to a total
% of approx. 25 times oversampling)
x=resample(x,10,1);
fs=10*fs;
% Create a time axis for this upsampled signal
tx=(0:1/fs:(length(x)-1)/fs);
% Interpolate x onto the time instances in ts instead of tx
xs=interp1(tx,x,ts,'linear','extrap');

The reason for the code inside the for loop is that the new sampling instances should be at SampPerRev evenly spaced points between two consecutive tacho pulses. The last sample, however, should be “one sample before” the next tacho pulse, to obtain a continuous signal. If we want a new time axis for the new variable, xs, we obtain it by the extra line

tc=(0:1/SampPerRev:(length(xs)-1)/SampPerRev);

which will produce an x-axis in cycles. The result of resampling the sine sweep from Example 12.3.1 using the code above is shown in Figure 12.10 for some cycles. End of example.

An alternative to the above method is to use the RPM–time profile already established in Example 12.3.1. If we divide the instantaneous RPM by 60, we obtain the instantaneous frequency, fi, in Hz.


Figure 12.10 Sine sweep signal from Example 12.3.1 resampled using the tacho signal as discussed in Example 12.7.1 for cycles 1110 to 1120. The signal appears versus the x-axis of “cycles” as a stationary sine.

The integral of this frequency with respect to time directly gives the “angle” in fractions of a revolution (if we multiplied by 2π we would get it in radians, but fractions of a revolution is what we want for our purpose). We thus have that the “instantaneous angle,” A_i, is

A_i(t) = \int_0^t f_i(u) \, du.    (12.7)

If we now want, for example, 16 samples per revolution, we simply interpolate the values in A_i onto a new x-axis consisting of the fractions 0, 1/16, 2/16, …. This procedure will now be illustrated by a MATLAB/Octave example.

Example 12.7.2 Use the computed RPM–time signal in the variable rpm from Example 12.3.1 to resample a signal using 16 samples per revolution. We obtain the following MATLAB/Octave code:

% Calculate the inst. angle as a function of time
% (in fractions of revolutions, not radians!)
Ainst=dt*cumsum(rpm/60);
% Find every 1/SampPerRev of a cycle in Ainst
minA=min(Ainst);
maxA=max(Ainst);
Fractions=ceil(minA*SampPerRev)/SampPerRev:1/SampPerRev:maxA;
% New sampling times
tt=interp1(Ainst,t,Fractions,'linear','extrap');

The synchronous sampling is then done as in Example 12.7.1, by upsampling the signal and interpolating it onto the time instances in the variable tt. End of example.



Figure 12.11 RPM map, in (a), and the first three orders of the sine sweep from Example 12.7.1, in (b), after resampling the signal using eight samples per revolution. The RPM map now has order on the y-axis and the order components appear along straight horizontal lines. The order tracks in (b) are usually of higher accuracy than those obtained by fixed sampling frequency as described in Section 12.6.

Of the two methods described here to synchronously resample a signal, the first method, using the tacho pulses, is usually a little more accurate because the tacho pulses are well defined in time, whereas the RPM–time profile usually needs some smoothing to obtain stable RPM values and therefore introduces some uncertainty in the instantaneous RPM.

12.7.1 DFT Parameters after Resampling

We will now look at the DFT parameters and their interpretation for synchronously sampled signals. We recall from Chapter 9 that each frequency line, k, in the DFT corresponds to a sine with k periods in the time window. This means that if we want an order resolution, n_o, of, for example, 1/8th of an order, we need to sample eight periods of the synchronously resampled signal (so that the first order is on spectral line eight). Furthermore, the highest order we want to calculate has to be below frequency line N/2, if N is the blocksize. If we denote this maximum order by O_max, then we can calculate the necessary blocksize for the DFT by

N = 2 O_max / n_o.    (12.8)

For resampled signals, if the resampling was perfect, no time window should be necessary prior to computing the DFT. In practice, however, it is recommended to use a flattop window to get the best possible RMS level accuracy, even if the resampling process has failed slightly at some instance. An RPM map and corresponding order tracks of the first three orders of the sine sweep from Example 12.7.1 are shown in Figure 12.11.
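As a small sketch of these parameters (an illustration; the variables xs and SampPerRev are from Example 12.7.1, while the maximum order Omax and the resolution no are assumed choices), the order spectrum of one block of the resampled signal could be computed as follows. The flattopwin function is found in the MATLAB Signal Processing Toolbox and in the Octave signal package.

Omax=SampPerRev/2;                    % highest order available
no=1/8;                               % desired order resolution (assumed)
N=round(2*Omax/no);                   % blocksize, cf. Equation (12.8)
wf=flattopwin(N);                     % flattop window, as recommended above
xs=xs(:);
Xs=fft(xs(1:N).*wf);
Ls=sqrt(2)*abs(Xs(1:N/2+1))/sum(wf);  % RMS-scaled order spectrum of one block
orders=(0:N/2)'*no;                   % order axis: peaks appear at the orders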

12.8 Averaging Rotation-Speed-Dependent Signals

Sometimes the instantaneous values extracted by the procedures previously mentioned in this chapter do not produce stable values. This can be caused by noise in the transducer, or,



Figure 12.12 Illustration of the error that occurs if different instantaneous spectra in the averaging process contain a spectral peak at different frequency bins. As illustrated, this produces an erroneous average spectrum. Averaging on signals from rotating machinery can usually only be done in the order domain, after synchronous resampling of the signals.

e.g., by irregularities in the combustion of an internal combustion engine. To obtain more stable values, averaging can be applied in several ways. In general, averaging is not necessary to produce the RPM map, as the accuracy of the RMS levels is not critical to the purpose of the RPM map plot.

The best approach to averaging, in order to produce less variance in order tracks, is to average several adjacent RMS estimates in the order track. This is equivalent to applying a smoothing filter (see Section 3.3.3) to the order track calculated as described in Section 12.6. This is the recommended procedure to obtain more stable order tracks.

Another approach is to average several consecutive spectra to produce each spectrum in the RPM map. This requires that the run-up or coast-down is very slow, so that the spectrum peaks fall on the same spectral line in each spectrum included in the average, see also Section 10.1. Otherwise, an error will arise, as illustrated in Figure 12.12. In practice, this type of averaging can only be successfully applied to synchronously resampled data, and sometimes to constant-RPM measurements, although in the latter case great caution has to be used to ensure the RPM is constant enough to produce spectrum peaks on the same spectral line in each instantaneous spectrum.
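A minimal sketch of the recommended order track smoothing, assuming an order track in the variable ordTrack (e.g., from the sketch in Section 12.6):

M=5;                                  % number of adjacent RMS estimates to average
ordTrackS=conv(ordTrack,ones(1,M)/M,'same');  % centered moving average; edges are biased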

12.9 Adding Change in RMS with Time

So far in this chapter, we have demonstrated the order tracking techniques with a sine sweep with constant RMS level across the run-up. This means that we have not taken the BT product into full consideration, as there was no change in RMS level with time. We will therefore now add some change to the RMS level of each order to see how that affects the accuracy of the order tracking. To do this, we use the forced response simulation method described in Section 19.2.3 and let our sine sweep be the force input to an SDOF system with a natural frequency of 50 Hz (corresponding to 3000 RPM) and 2% damping. In Figure 12.13, the time signal of the vibration signal from the simulation is shown.



Figure 12.13 Time signal of the sine sweep passed through an SDOF system with a natural frequency of 50 Hz and 2% relative damping. When the first, second, and third orders pass the natural frequency, there is an amplification of the amplitude. This happens in reversed order, of course, so that the third order passes the natural frequency first.


Figure 12.14 Order track for the first order of a sine sweep which has passed an SDOF system, using the simulation method described in Section 19.2.3. Solid line: first order using synchronous sampling; dashed line: fixed sampling frequency with blocksize N1 = 1024 samples; and dash-dotted line: fixed sampling frequency with blocksize N2 = 2048 samples. In (a), the order track over the entire RPM range is shown, and in (b), the RPM range around where the first order passes the natural frequency of the SDOF system is zoomed in. It can be seen in the zoomed plot that the order tracks using fixed sampling frequency do not give correct estimates of the RMS level when the change rate is large. With synchronous sampling, however, the estimated RMS level reaches the true value of approximately 2.4 V, which corresponds to the maximum amplitude of the time signal in Figure 12.13.


The resulting order tracks of the first order, for fixed sampling frequency and for synchronous resampling using the tacho pulses, respectively, are shown in Figure 12.14. In Figure 12.14(b), where the plot is zoomed in around the RPM range where the first order passes the natural frequency of the SDOF system, it can be seen that with fixed sampling frequency and FFT blocksizes of 1024 and 2048 samples, there is an error in the estimated order versus RPM. With synchronous sampling, however, the order is tracked with the correct RMS level at the peak. It should be noted that the fixed sampling frequency could perhaps have been used with a shorter blocksize, giving better accuracy, but using synchronous sampling is “safer.”

It has sometimes been argued that fast run-ups cannot be correctly tracked, even using synchronous sampling. This is not entirely correct, however. In commercial systems where the synchronous sampling is done in real time, the resampling can sometimes fail, due to the reduced accuracy of the resampling process necessary to fulfill the real-time requirements. In postprocessing, however, the accuracy can be increased by using higher oversampling, as is seen in Figure 12.15, where a very fast run-up is shown. The signal in this case was a microphone positioned near the exhaust of a car during a full-throttle acceleration in third gear, taking approximately 15 seconds.

The BT-product limitation is essentially the only limitation to tracking fast run-ups, regardless of the method used. This means that the accuracy of tracking a rapidly increasing RMS level versus RPM, as in Figure 12.15, depends on the bandwidth of the tracking filter. If this filter bandwidth can be set high enough, then the slope of the rapid increase can be accurately tracked. If, however, there is a close order, for example, which will be included in the bandwidth of the tracking filter, then the problem is unsolvable using an FFT-based approach, and the only way to obtain accurate order tracks is to make a slower run-up, or to use a parametric method such as Prony's method, described in Section 12.10.


Figure 12.15 Results of a fast run-up of a car during approximately 14 seconds. In (a), the RPM-time profile is shown, and in (b), order 2 is tracked by fixed sampling frequency (solid line) and by synchronous sampling (dashed line). As can be seen, in this case, both fixed sampling frequency and synchronous sampling give similar results. Some smoothing was applied to get this result.


Using very fast run-ups can pose another problem, which needs to be discussed. Assuming there are resonances which cause amplification of the order of interest, during a very fast run-up the frequencies can sweep through the resonance so fast that the mode is not fully excited, i.e., the amplitudes do not increase as much as they would if the run-up speed had been slower. This has to do with the transient behavior of the mechanical system, as discussed in Chapter 5. Therefore, in most cases, it is better to use slow sweep rates, as was recommended earlier in this chapter.

Example 12.9.1 In this example, we are going to look at some results of a run-up analysis of a diesel car. The acceleration on the driver's seat foundation, in a car equipped with a four-cylinder diesel engine, was measured during a run-up from approximately 1400 to 3000 RPM. In Figure 12.16, the results are shown as an RPM map from an analysis using fixed sampling frequency, an RPM map after resampling the acceleration signal synchronously with


Figure 12.16 Results of a run-up analysis of a diesel engine used for Example 12.9.1. In (a), an RPM map using a 4096-sample blocksize is shown. In (b), an RPM map of synchronously resampled data using 64 samples per revolution and a maximum order of 32. In (c), the RPM-time profile used for the RPM map in (a) as well as for resampling the data; and in (d), the overall level based on the fixed sampling frequency data, order 2 from fixed frequency (solid) and resampled data analysis (dash-dotted), and order 25 from fixed frequency (dashed) and resampled data (dotted). The order 2 curves from the two analyses are indistinguishable. See Example 12.9.1 for details [Courtesy of SAAB Automobile AB].


the rotation, an RPM–time profile, and the orders of interest for this example. In the last plot, the overall level computed from the fixed sampling frequency analysis is plotted together with orders 2 and 25 from both analyses (fixed and synchronous sampling). The results for order 2 are almost identical and no differences can be seen. For order 25, however, it is evident that the synchronous sampling generates more accurate results.

For a four-cylinder, four-stroke engine, order 2 will be dominant because there are two combustions per revolution of the engine. This is clearly seen in the RPM maps in Figure 12.16(a) and (b). Order 2 is dominated by shaft unbalance at low RPMs and by the mass inertia of the pistons and the combustion forces at higher RPMs, so it should increase approximately linearly with RPM. As is seen in Figure 12.16(d), order 2 does increase approximately linearly with RPM, which indicates that there are no resonances in the structure in the frequency range of order 2, i.e., approximately 47–100 Hz. In addition to order 2, a large number of higher orders are visible. Order 25 seems to be higher than most orders in the RPM map. This is a driveline frequency due to some gearbox ratio (since 25 is not a multiple of 2), and we select it for processing simply to illustrate the difficulty of tracking higher orders using fixed frequency sampling (at least with fixed frequency bandwidth). Thus, order 25 is plotted in Figure 12.16(d), near the bottom of the plot. The differences between the order from fixed sampling and from synchronous sampling are clearly seen, and are due to the inaccuracy of the tracking using fixed sampling frequency.

To compute the results in Figure 12.16, the RPM–time profile in Figure 12.16(c) was first computed by the procedure described in Section 12.3. To compute the RPM map in Figure 12.16(a), the RPM–time profile was used to find the occasions for every 10th RPM, and a linear spectrum was computed using a 4096-sample blocksize and a Hanning window. The orders from this RPM map were then computed by summation over five frequency lines, as described in Section 12.6. The overall level, i.e., the total RMS level at each RPM, in Figure 12.16(d) was computed by summing the RMS level of each spectrum, as described by Equation (10.60).

For the synchronous sampling, the original acceleration signal was resampled synchronously using the RPM–time profile, as described in Example 12.7.2, using a maximum order of 32 (i.e., 64 samples per revolution). Then an RPM map of the resampled signal was computed, as in Figure 12.16(b), again using every 10th RPM, and an order resolution of 1/32nd of an order, i.e., the blocksize, according to Equation (12.8), was 2048 samples. A flattop window was used for the synchronously resampled RPM map, and thus the orders from this RPM map were obtained by tracking the peak of every order. End of example.

12.10 Parametric Methods

All the methods described so far in this chapter have been nonparametric methods, i.e., methods not using any a priori information about the signals. As we have discussed previously in this book, the advantage of nonparametric methods is that they give reliable results without the need for any assumptions about the data. The orders tracked by the methods described so far in this chapter can be slightly wrong due to bandwidth–time restrictions, or because the synchronous resampling fails due to, for example, tachometer errors. But RMS levels which are completely absent from the data cannot accidentally appear, which is the case with some parametric methods.


Some parametric methods, for example, Prony's method (see, e.g., Proakis and Manolakis (2006); Vold et al. (1988)), have been suggested for order tracking. However, there has been little success using such methods, which is probably due to the fact that the models underlying vibration signals are not easily defined. There are almost always vibration artifacts, because any machine is more complicated than any reasonable model can include. Therefore, this type of parametric method usually fails on noise and vibration data.

More recently, however, a parametric method which is less prone to errors from the model assumption, usually referred to as the Vold–Kalman filter method, has been developed, Vold and Leuridan (1993), Vold et al. (1997), Pelant et al. (2004), Tuma (2004, 2005), and Pan and Wu (2007). This method uses adaptive bandpass filters whose center frequencies are controlled by the instantaneous RPM–time estimate described in Section 12.3. Although originally suggested for fast run-ups, with the results obtained in Section 12.9, it is clear that fast run-ups can be tracked using postprocessing synchronous sampling. The Vold–Kalman technique is still limited by the BT product: the narrower the bandpass filters, the longer time constants the filters have, and thus the slower they can react to rapid changes in the RMS level of an order, see, for example, Brandt et al. (2005). The Vold–Kalman technique offers two other advantages, however, not available with nonparametric methods: first, it can be used to filter out time signals corresponding to order-related components, and second, it can be used for multiple-RPM signals with crossing orders. The first of these points is very attractive in sound quality applications, where the Vold–Kalman method offers the possibility to listen to a summation of the order-related components of, for example, the noise from an engine. The second point is very important in many applications on, for example, automatic gearboxes and turbines, where typically several independent rotational speeds cause orders, related to the different rotating shafts, to cross. With Vold–Kalman filters, these crossing orders can be split into independent orders.

Another technique which has recently been proposed is a method based on the Gabor transform, Shao et al. (2003), Qian (2003), and Pan et al. (2007). This method is computationally attractive, although there is some disadvantage with virtual (nonexistent) components, which can cause difficulties in the interpretation of results.

Any parametric method for order tracking requires more experience from the user to obtain reliable results, compared with the nonparametric, FFT-based methods. Thus, in practice, it is often useful to first calculate order tracks using either fixed sampling frequency or synchronous sampling, and based on those results, for example, Vold–Kalman filters can be defined and used for higher accuracy.

12.11 Chapter Summary

In this chapter, we have presented a method commonly used for the analysis of rotating machines: order tracking. This technique assumes that the noise or vibration signal from a rotating machine consists of a number of RPM-dependent sine waves with variable amplitude and phase, and with a frequency which is a constant factor (the order number) times the RPM of the engine. Thus, order one corresponds to the rotational frequency of the machine, order two is the first harmonic (twice the frequency of order one), etc.


Analysis of rotating machines by order tracking is usually made either during a run-up of the machine, where the RPM is increased from a low to a high RPM, or during a coast-down, where the machine is slowing down from a high RPM to a low RPM. In either case, the analysis is done in the same way, by dividing the signal into short time segments and by computing an FFT of each segment. The map of spectra versus RPM can be plotted in a 3D waterfall diagram or in a color intensity map, in which the orders and resonance frequencies of the machine can be identified.

Order tracking is a technique where nonstationary signals are analyzed. The concept of the bandwidth–time product limitation for frequency analysis is therefore important to understand. Using a particular bandwidth of analysis, the time resolution is limited in such a way that changes in, for example, RMS level versus RPM, which are of interest in order tracking, can only be correctly estimated if they are slow enough. This means that it is best to use relatively slow run-up speeds in order to get reliable estimates of order tracks.

From the RPM map, order tracks can be extracted, and the preferred method is to use a Hanning window in the FFT computations, and then sum the RMS level around a particular order of interest using at least five frequency lines. This reduces the effect of smearing, as discussed in Section 12.5. To further reduce the effect of smearing, the signal originally sampled with fixed sampling frequency can be resampled into the angle domain, where the signal is sampled at a fixed number of samples per revolution of the engine. This produces a signal with seemingly constant “frequency,” with an x-axis in cycles. The DFT of this signal has peaks at the orders, or at fractions of them, as was discussed in Section 12.7.1.

Parametric techniques, such as the now popular Vold–Kalman filter method, can offer advantages when orders are crossing in machines with two or more independently rotating parts, or when time signals of each order are of interest in sound quality applications. Other parametric techniques, such as Prony's method, have not proven to be reliable for most rotating machinery applications.

12.12 Problems

Many of the following problems are supported by the accompanying ABRAVIBE toolbox for MATLAB/Octave and further examples which can be downloaded with the toolbox. If you have not already done so, please read Section 1.6. ABRAVIBE is completely free and can be downloaded from www.abravibe.com, together with example files and other material complementing this book, including a solutions manual for all book problems.

Problem 12.1 Create a simulated run-up signal using MATLAB/Octave with the following properties:
Start RPM: 800 RPM
Stop RPM: 5800 RPM
Sweep time: 60 seconds
Order one RMS level: varying sinusoidally between 1 and 2 V, with one period during the speed sweep.
Order two RMS level: varying sinusoidally between 0.5 and 8 V, with two periods during the speed sweep.
Use a sampling frequency of four times the highest frequency in the signal.


Also, create a tacho signal being a sine with constant amplitude during the speed sweep. Extract the RPM–time profile with a time resolution corresponding to the sampling frequency of the signal.

Problem 12.2 Create an RPM map of the signal in Problem 12.1 and use it to extract the first and second orders. Compare the order tracks with the “true” values and experiment with the blocksize of the FFT. (Use the preferred method of using a Hanning window and five frequency lines to sum the RMS level, presented in Section 12.6.) Determine a blocksize that gives good results.

Problem 12.3 Resample the signal in Problem 12.1 using eight samples per revolution, and calculate order tracks for the first and second orders, using an FFT resolution of 1/16th order. Compare the tracked orders with the results of Problem 12.2.

References

Blough J 1998 Improving the Analysis of Operating Data on Rotating Automotive Components. PhD thesis, University of Cincinnati, College of Engineering.
Brandt A, Lago T, Ahlin K and Tuma J 2005 Main principles and limitations of current order tracking methods. Sound and Vibration 39(3), 19–22.
Fyfe KR and Munck EDS 1997 Analysis of computed order tracking. Mechanical Systems and Signal Processing 11(2), 187–205.
Pan MC and Wu CX 2007 Adaptive Vold–Kalman filtering order tracking. Mechanical Systems and Signal Processing 21(8), 2957–2969.
Pan MC, Liao SW and Chiu CC 2007 Improvement on Gabor order tracking and objective comparison with Vold–Kalman filtering order tracking. Mechanical Systems and Signal Processing 21(2), 653–667.
Pelant P, Tuma J and Benes T 2004 Vold–Kalman order tracking filtration in car noise and vibration measurements. Proceedings of 33rd International Congress and Exposition on Noise Control Engineering, INTER-NOISE, Prague, Czech Republic.
Potter R 1990a A new order tracking method for rotating machinery. Sound and Vibration 24(9), 30–34.
Potter R 1990b Tracking and resampling method and apparatus for monitoring the performance of rotating machines.
Proakis JG and Manolakis DG 2006 Digital Signal Processing: Principles, Algorithms, and Applications, 4th edn. Prentice Hall.
Qian S 2003 Gabor expansion for order tracking. Sound and Vibration 37(6), 18–22.
Saavedra PN and Rodriguez CG 2006 Accurate assessment of computed order tracking. Shock and Vibration 13(1), 13–32.
Shao H, Jin W and Qian S 2003 Order tracking by discrete Gabor expansion. IEEE Transactions on Instrumentation and Measurement 52(3), 754–761.
Tuma J 2004 Sound quality assessment using Vold–Kalman tracking filtering. Seminar, Instruments and Control, Ostrava, Czech Republic.
Tuma J 2005 Setting the passband width in the Vold–Kalman order tracking filter. Proceedings of 12th ICSV, Lisbon, Portugal.


Vold H and Leuridan J 1993 High resolution order tracking at extreme slew rates. Proceedings of SAE Noise and Vibration Conference, Traverse City, MI, Society of Automotive Engineers.
Vold H, Crowley J and Nessler J 1988 Tracking sine waves in systems with high slew rates. Proceedings of 6th International Modal Analysis Conference, Kissimmee, FL, pp. 189–193.
Vold H, Mains M and Blough J 1997 Theoretical foundation for high performance order tracking with the Vold–Kalman tracking filter. Proceedings of 1997 Noise and Vibration Conference, SAE, vol. 3, pp. 1083–1088.
Wowk V 1991 Machinery Vibration: Measurement and Analysis. McGraw-Hill.


13 Single-input Frequency Response Measurements

It is common in many noise and vibration applications to compute frequency response functions (FRFs) from measurements. In most cases, the structure is excited by known (measured) forces applied either by an impulse hammer or by a shaker. In some cases, frequency response functions are measured between response signals which are due to natural excitation by, for example, wind or traffic loads on buildings or bridges. The latter is common, e.g., when measuring transmissibilities for operating deflection shape measurements, see Section 19.6. Common reasons that one may wish to measure frequency response functions between a force and a response signal are, for example, to determine
● natural frequencies and relative damping (e.g., in experimental modal analysis, see Chapter 16)
● point stiffness to be used in analytical models
● noise transfer functions (NTFs) for noise path analysis (NPA), which is a method of studying sound paths from, for example, the engine mounts to the driver's ear in a vehicle, see Chapter 15. NPA has so far mainly been used within the automotive industry but is increasingly being applied to other products.

In this chapter, we shall present two different ways of measuring frequency response, namely through excitation with an impulse hammer and with a shaker, respectively. We limit the discussion in this chapter to systems with one input and one output, the so-called single-input/single-output (SISO) systems. In Chapter 14, we will extend the concept to general systems with many inputs and many outputs. Much of the theory in this chapter was developed in the early days of noise and vibration analysis. A good source for a more detailed discussion is Bendat and Piersol (2010).

Estimation of frequency response turns out to be rather complicated in terms of all the errors involved, because, as we will see, there are errors due to the spectral estimation as well as errors due to unwanted noise in the measured force and response signals. We will therefore make extensive use of simulations in this chapter, rather than real measurements. This allows us to focus on one source of error at a time, whereas in real measurements all errors occur at once, and are unknown. In order to make good measurements of FRFs, it is essential to understand all the errors involved and to be able to determine when or if they are occurring in a particular measurement.


13.1 Linear Systems

We recall from Section 2.6 that a system with an input signal x(t) and an output y(t) is considered to be time invariant if the coefficients of the system of differential equations which describe the system do not change with time. In practice, this means that the system parameters, for example, mass, damping, and stiffness, do not change during the time we study (measure) the system. Further, a system is linear if it fulfills two criteria: (i) the system is additive; and (ii) the system is homogeneous. Real-life systems are rarely completely linear. During all measurements of frequency response, we therefore need to investigate whether or not the conditions of linearity are fulfilled. If they are not, then many of our common assumptions fail, and the methods presented in this chapter will not be appropriate. How to check the linearity of mechanical systems will be discussed in Section 14.6.

13.2 Determining Frequency Response Experimentally

We shall now see how the frequency response can be estimated experimentally, under the assumption that the system, H(f), is linear and time-invariant. We thus assume that we have a system as in Figure 13.1, with input x(t) and output y(t). Recall from Section 2.6 that we may write the output as the convolution of the input with the system's impulse response, h(t), i.e.,

y(t) = x(t) * h(t) = \int_{-\infty}^{\infty} x(u) h(t-u) \, du.    (13.1)

Recall also that we can carry out the computation of Equation (13.1) in the frequency domain, which is undeniably more attractive, since in that case we obtain the equation for the spectrum Y(f) as

Y(f) = X(f) \cdot H(f).    (13.2)

The input is thus amplified and phase-shifted at each frequency independently of other frequencies. In real-life measurements, particularly in the field of noise and vibrations, we often have no hope of being able to measure x(t) and y(t) without at least one of these signals being contaminated by extraneous noise from the sensor and input electronics in the measurement system. However, we can assume that the contaminating noise is uncorrelated with the input and output signals. A general model for an actual system where extraneous noise is added to the signals we measure is illustrated in Figure 13.2. A discussion about the contaminating noise on the input and output signals is found in Section 13.7.

Figure 13.1 Linear time-invariant system with input x(t), output y(t), impulse response h(t) and frequency response H(f ).


When we treat the systems with noise, the actual measured signals are always x(t) and y(t), whereas we denote the actual input to the system v(t) (when x(t) is measured with an extraneous noise contribution) and the actual output of the system u(t).

13.2.1 Method 1 – The H1 Estimator

In order to find a method which estimates the frequency response of the linear system, we shall first make a simplification of the system in Figure 13.2, which consists of setting the noise at the input to m(t) = 0. We thus assume that we can measure the input x(t) without any extraneous noise, but that the output contains noise. Furthermore, we assume for the moment that the input signal, x(t), is of random nature. In Section 13.7.1, we will extend the discussion to input signals with other properties. For the system in Figure 13.3, we can formulate the spectrum of the output as

Y(f) = X(f)H(f) + N(f).    (13.3)

Next, we multiply both the left-hand and right-hand sides in Equation (13.3) by X*(f), i.e., the complex conjugate of X(f), and obtain

X*(f)Y(f) = X*(f)X(f)H(f) + X*(f)N(f).    (13.4)

If we take the expected values (i.e., experimentally, we make an average) of the left-hand side and of each term on the right-hand side separately, and scale the results in a suitable way to obtain PSDs as described in Section 10.3, we obtain, from Equations (10.11) and (10.13), the cross- and autospectral densities

G_{yx}(f) = G_{xx}(f)H(f) + G_{nx}(f).    (13.5)

Figure 13.2 Linear system with extraneous noise contaminating both the input and output signals. Note that the signals x and y are now the measured signals, whereas v and u are the actual inputs and outputs of the system.

Figure 13.3 Linear system with noise only at the output. Note that x is here the measured “perfect” signal (without noise) and y is the measured output signal, whereas u is the actual output of the system.


The last term in Equation (13.5), the cross-spectral density between the input and the noise, approaches zero when we average, since these signals are uncorrelated. We thus obtain the so-called H1 estimator of H(f) as

\hat{H}_1(f) = \hat{G}_{yx}(f) / \hat{G}_{xx}(f),    (13.6)

which is the equation used in FFT analysis systems for estimating H(f) from measurements of x(t) and y(t), if it is assumed that the contaminating noise in the input signal is negligible. As before, we use the symbol ∧ (hat) to denote that we are dealing with estimated functions. Note especially in Equation (13.6) that, because Gxx(f) is real and nonnegative, the phase angle of H1(f) comes exclusively from the phase angle of the cross-spectrum, Gyx(f). The H1 estimator in Equation (13.6) is a least squares estimate of the system H(f), see, for example, Bendat and Piersol (2010). If there is noise added to the input, and thus the model we use for the H1 estimator is wrong, then the estimated H1 will be biased. This will be discussed in Section 13.4. Furthermore, as we will see in Section 13.5.1, the H1 estimator is always biased due to the limited frequency resolution, although this error can be made arbitrarily small by increasing the blocksize.

Example 13.2.1 Write a MATLAB/Octave script which calculates the H1 estimator of a single-input/single-output system, using spectral densities in variables Gxx and Gyx. We assume that the variables are column vectors as described in Section 10.8.1. The H1 estimator is then simply the elementwise division of the cross-spectral density by the autospectral density of x.

H1=Gyx./Gxx;

End of example.
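Example 13.2.1 assumes that the spectral densities are already available. As a complementary sketch (an illustration, not the book's implementation), the averaged quantities can be formed directly from time records x and y with a Hanning window and an assumed blocksize N; the PSD scaling constants are omitted here because they cancel in the ratio of Equation (13.6).

x=x(:); y=y(:);
N=2048;                               % blocksize (assumed choice)
w=0.5-0.5*cos(2*pi*(0:N-1)'/N);       % Hanning window
nBlocks=floor(length(x)/N);
Gxx=zeros(N/2+1,1);
Gyx=zeros(N/2+1,1);
Gyy=zeros(N/2+1,1);
for n=1:nBlocks
    idx=(n-1)*N+(1:N);
    X=fft(x(idx).*w); X=X(1:N/2+1);
    Y=fft(y(idx).*w); Y=Y(1:N/2+1);
    Gxx=Gxx+conj(X).*X;               % autospectrum of the input
    Gyy=Gyy+conj(Y).*Y;               % autospectrum of the output
    Gyx=Gyx+conj(X).*Y;               % cross-spectrum, E[X*(f)Y(f)] convention
end
H1=Gyx./Gxx;                          % the H1 estimator, Equation (13.6)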

13.2.2 Method 2 – The H2 Estimator

In some cases, the preceding assumption, that the noise at the input is negligible, may not be reasonable. For example, when we measure the accelerance of mechanical systems with shaker excitation, then near the natural frequencies of the structure, the acceleration signal (output) is large, but the force signal (input) is often small because the structure is weak, see also Section 13.9. In that case, it is more reasonable to assume that the dominating extraneous measurement noise is present in the measured input signal, x(t), as illustrated in Figure 13.4. To find an estimator for this case, from Figure 13.4, we obtain the following relationship:

Y(f) = [X(f) - M(f)] H(f).    (13.7)

The trick this time is to multiply Equation (13.7) by the complex conjugate of the output spectrum, Y*(f), which gives

Y*(f)Y(f) = H(f) [Y*(f)X(f) - Y*(f)M(f)].    (13.8)


Figure 13.4 Linear system with noise only at the input. Note that x is the measured input, whereas v is the actual input to the system.

We now take the expected value (average), like we did for the H1 estimator above, and scale properly, and obtain

G_{yy}(f) = H(f) [G_{xy}(f) - G_{my}(f)].    (13.9)

Similar to the H1 case above, the cross-spectrum Gmy(f) will approach zero when we average, if we assume that the noise, m(t), is uncorrelated with v(t) and therefore with y(t). We thus obtain the so-called H2 estimator as

\hat{H}_2(f) = \hat{G}_{yy}(f) / \hat{G}_{xy}(f).    (13.10)

Note in Equation (13.10) that Gxy(f) = G*yx(f) (see, e.g., Equation 8.13), that is, the phase of Gxy(f) is equal to the phase of Gyx(f) with opposite sign. Since Gxy(f) in Equation (13.10) is in the denominator, while Gyx(f) is in the numerator in Equation (13.6), we see that the phase of H1 is equal to the phase of H2. It should also be noted that if the H2 estimator is missing in the FFT analysis software, it can easily be computed by switching places between x and y and then inverting the resulting H1 estimate, i.e.,

{}^2\hat{H}_{yx} = 1 / {}^1\hat{H}_{xy},    (13.11)

where we introduce the left superscript ‘1’ in {}^1\hat{H}_{yx}, etc., to indicate that we are using specifically the H1 estimator. This nomenclature will be used whenever it is not obvious from the context which particular estimator we are discussing. Like the H1 estimator in the case of noise on the output, the H2 estimator is also a least squares estimate of H(f). In case the model assumption is wrong, the estimate \hat{H}_2 is biased, which will be discussed in Section 13.4. It is also, like the H1 estimator, always biased due to the limited frequency resolution, although this error can be made arbitrarily small by increasing the blocksize, see Section 13.5.1.
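With the averaged quantities from the sketch following Example 13.2.1, the H2 estimator is obtained in the same elementwise fashion (illustration only):

H2=Gyy./conj(Gyx);                    % since Gxy(f) = conj(Gyx(f)), cf. Equation (13.10)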

13.2.3 Method 3 – The Hc Estimator

Now, what if we take the noise on both input and output into consideration? There have been several attempts at solving this problem, including the Hv (Rocklin et al. 1985) and Hs (Wicks and Vold 1986) estimators. It turns out that an unbiased estimate in the case of noise on both input and output is only possible with some a priori information about the input and output noise (or their ratio), or by knowing one signal without any contaminating noise at all. In Section 14.1.6, we will look at the common Hv


estimator, which tries to address this problem. It should also be emphasized that the discussion here applies to random input signals; in Section 13.12, we will show that we can remove the bias due to both input and output noise if we use a periodic excitation signal (or at least make it arbitrarily small). Here, we will look at another, rather ingenious, estimator, namely the Hc estimator, which assumes that there is a signal which can be measured without contaminating noise. In the case of shaker excitation of structures, the electrical output signal of the signal generator is typically chosen. This estimator was originally proposed by Goyder (1984) and further developed by Mitchell and Cobb (1987). In Figure 13.5, the concept of the Hc estimator setup is illustrated. We have a signal v(t) which we assume we can measure without any contaminating noise. In addition, we have the force signal, x(t), and a response signal, y(t), both of which are measured with contaminating noise, m(t) and n(t), respectively.

The Hc estimator is now based on the fact that, using the H1 estimator, we know from Section 13.2.1 that we can estimate the two systems {}^1\hat{H}_{yv} and {}^1\hat{H}_{xv}, indicated in Figure 13.5, without bias due to the extraneous noise, if we just make sufficiently many averages. But we now have the simple relation that the system we are seeking, Hyx, is given by

H_{yx} = Y(f)/X(f) = [Y(f)/V(f)] / [X(f)/V(f)] = H_{yv}/H_{xv},    (13.12)

which means that we can define the Hc estimator by

{}^c\hat{H}_{yx}(f) = {}^1\hat{H}_{yv}(f) / {}^1\hat{H}_{xv}(f).    (13.13)

In case we make enough averages so that the H1 estimators in Equation (13.13) are unbiased, then the Hc estimator is also unbiased. This comes at the expense of an extra input channel, however, which is probably one reason why this estimator is not very common in commercial software. A further disadvantage of this estimator is that it relies on the frequency responses involved being linear, whereas we know that shakers used for structural excitation are not linear, which can cause uncertainty in the Hc estimator.
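A minimal sketch of the Hc computation, assuming the hypothetical variables H1vy and H1vx hold H1 estimates from the reference signal v(t) to y(t) and from v(t) to x(t), respectively (each computed as in the earlier H1 sketch, with v as input):

Hc=H1vy./H1vx;                        % the Hc estimator, Equation (13.13)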


Figure 13.5 Illustration of the model assumption for the Hc estimator. The estimator is based on the assumption that there is an input signal, v(t), which can be measured without any contaminating noise, and which is located before the measured input to the system. In mechanical applications, this is typically the voltage output of the signal generator.


13.3 Important Relationships for Linear Systems

We shall now study some important relationships which apply to linear systems. If we measure the input x(t) to a linear system with frequency response H(f), the complex conjugate of Equation (13.2) is

Y*(f) = X*(f)H*(f).    (13.14)

Multiplying Equation (13.2) by Equation (13.14) term by term, taking the expected value of each term, and scaling properly, we obtain the important relation

G_{yy}(f) = |H(f)|^2 G_{xx}(f).    (13.15)

This equation is a useful relation to express the PSD of the output of a linear system. Note that Equation (13.15) is valid for ideal linear systems without contaminating noise on the measured signals, as in Figure 13.1. We now move to the system with noise only at the output, in Figure 13.3. If we start with the signals summed at the output, we have that

Y(f) = U(f) + N(f).    (13.16)

We form the complex conjugate of this equation and multiply it by itself, obtaining Y ∗ Y = U ∗ U + N ∗ N + U ∗ N + UN ∗ ,

(13.17)

where we have left out the frequency variable for the sake of simplicity. Taking the expected value of each term, and scaling properly, we obtain Gyy = Guu + Gnn + Gnu + Gun .

(13.18)

Equation (13.18) expresses a very important relationship, namely that if we sum several signals, the power spectral density of the summed signal is in general not equal to the sum of each signal’s PSD. Instead, we must take into account all of the cross-spectral densities between the signals included in the summation. If the signals u and n are uncorrelated (independent), however, which they are in our case for the system with noise at the output, we obtain the special case Gyy = Guu + Gnn .

(13.19)
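To make Equations (13.18) and (13.19) concrete, the short MATLAB/Octave sketch below (an illustration written for this text, not taken from any toolbox) averages unscaled periodograms of two independent random signals and their sum; with enough averages the cross terms die out and the summed PSD approaches the sum of the individual PSDs.

```matlab
% Sketch: the PSD of a sum contains cross terms (Eq. 13.18); for uncorrelated
% signals they average out and Gyy -> Guu + Gnn (Eq. 13.19). Scaling omitted.
N  = 1024;                       % blocksize
M  = 200;                        % number of averages
u  = randn(N, M);                % uncorrelated random signals, one block per column
n  = 0.5*randn(N, M);
y  = u + n;                      % summed output, as in Figure 13.3
U  = fft(u);  Fn = fft(n);  Y = fft(y);
Guu = mean(abs(U).^2,  2);
Gnn = mean(abs(Fn).^2, 2);
Gyy = mean(abs(Y).^2,  2);
% With enough averages Gyy approaches Guu + Gnn at every frequency line:
max_rel_diff = max(abs(Gyy - (Guu + Gnn)) ./ Gyy)
```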

13.4 The Coherence Function

In the case of existing input noise, m(t), not included in the assumption for the H1 estimator, the magnitude of the estimate $\hat{H}_1$ will be less than or equal to the true value of H(f). This can easily be observed by the fact that when m(t) is not zero, then $\hat{G}_{xx}$ will be measured with this noise and thus

$$ E\left[\hat{G}_{xx}\right] = G_{vv} + G_{mm}, \tag{13.20} $$

according to Equation (13.19). Because the $\hat{G}_{xx}$ term is found in the denominator of the H1 estimator in Equation (13.6), $|\hat{H}_1|$ will become smaller than or equal to the true |H(f)| due to this bias error. In a similar fashion, because the H2 estimator has a factor Gyy in the numerator, which will be affected similarly by Gnn, $|\hat{H}_2|$ will always be greater than or equal to the true value of H(f), and equal only when n = 0. In both cases, it is also necessary that we have made many averages in Equations (13.6) and (13.10), respectively, so that the cross-spectrum terms including the extraneous noise in Equations (13.5) and (13.9), respectively, become zero. Thus, we may conclude that the true value of |H(f)| fulfills

$$ |\hat{H}_1(f)| \leq |H(f)| \leq |\hat{H}_2(f)|. \tag{13.21} $$

It should perhaps be noted that this equation assumes that there is no random error in the estimate. It is possible to obtain $\hat{H}_1$ estimates that are actually larger than H(f), if the random error is large (see Section 13.5.2), and vice versa for $\hat{H}_2$, which may be smaller than H(f).

When we estimate the frequency response with the H1 or H2 estimators as above, we can simultaneously compute the coherence function, $\gamma^2(f)$, which is defined as the ratio between the $\hat{H}_1$ estimate and the $\hat{H}_2$ estimate, i.e.,

$$ \hat{\gamma}^2_{yx}(f) = \frac{\hat{H}_1(f)}{\hat{H}_2(f)} = \frac{|\hat{G}_{yx}(f)|^2}{\hat{G}_{xx}(f)\,\hat{G}_{yy}(f)}. \tag{13.22} $$

Note that the coherence function in Equation (13.22) is defined as the squared function, $\gamma^2_{yx}$. From Equation (13.21) and the definition of the coherence function, it follows directly that

$$ 0 \leq \gamma^2_{yx}(f) \leq 1. \tag{13.23} $$

If $\gamma^2_{yx}(f) = 1$, then $\hat{H}_1 = \hat{H}_2$, which implies that we have no extraneous noise and, moreover, that the measured output, y(t), is caused solely by the measured input, x(t), i.e., that y(t) is fully coherent with x(t). The coherence function is a quality measure of our estimated frequency response, regardless of which estimator we use. The coherence function drops below unity if there is contaminating noise on the measured input signal, x(t), or on the measured output signal, y(t), or on both signals. In all three cases, there is a bias error in the determination of the frequency response in at least one of the estimators, $\hat{H}_1$ or $\hat{H}_2$, which follows from the fact that $\hat{\gamma}^2_{yx} = \hat{H}_1/\hat{H}_2$ and the properties in Equation (13.21). Whether the coherence drops because of noise m(t) or n(t) (or both, of course) is impossible to tell solely from observing x(t) and y(t). We will return to a more thorough discussion of the coherence function in Section 13.7, but first we need to establish some more relations.

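For reference, the two estimators and the coherence of Equation (13.22) can all be computed from the same set of averaged auto- and cross-spectra. The sketch below is a bare-bones illustration with synthetic data (no window, overlap, or PSD scaling, and the simple IIR "system" is invented for the example); it is not the Welch-based implementation a commercial system would use.

```matlab
% H1, H2, and coherence from averaged (unscaled) auto- and cross-spectra.
% Synthetic example: a simple IIR system with extraneous noise on the output.
N = 1024;  M = 100;
x_blocks = randn(N, M);                                   % measured input, one block per column
y_blocks = filter(0.1, [1 -0.9], x_blocks) + 0.1*randn(N, M);   % measured output with noise n(t)

X   = fft(x_blocks);
Y   = fft(y_blocks);
Gxx = mean(abs(X).^2,    2);       % auto-spectrum of the input
Gyy = mean(abs(Y).^2,    2);       % auto-spectrum of the output
Gyx = mean(Y .* conj(X), 2);       % cross-spectrum, output relative to input
H1  = Gyx ./ Gxx;                  % H1 estimate, Eq. (13.6): pulled down by input noise
H2  = Gyy ./ conj(Gyx);            % H2 estimate, Eq. (13.10): pulled up by output noise
coh = abs(Gyx).^2 ./ (Gxx .* Gyy); % coherence, Eq. (13.22); equals H1./H2
```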
13.5 Errors in Determining the Frequency Response

We will now look at some aspects of the errors involved in estimating frequency response functions using the H1 and H2 estimators for a single-input/single-output system. This is considerably more complicated than the discussion on spectrum estimation in Chapter 9, because the correlation between the input and output signals enters the error relations. Also, the estimators we use to estimate the auto- and cross-spectral densities change the relations for the bias and random errors involved.


It is important to understand that there are two completely different errors involved in the FRF estimates: (i) spectral analysis errors, and (ii) model errors. We have already discussed the model errors, which arise if, for example, we use the H1 estimator but there is input noise, m(t). The errors we will look at in this section assume that the model is correct. The spectral errors in the FRF estimates are further divided into two parts: the errors caused by the estimator itself, without any extraneous noise, n(t) or m(t), as illustrated in Figure 13.2, and then the errors caused by the extraneous noise, i.e., n(t) if we look at the H1 estimator, and m(t) if we look at the H2 estimator. The most common estimator for the auto- and cross-spectral densities, used in all commercial systems to date, is Welch's method, as described in Section 10.3.2. A comprehensive analysis of the errors involved when using this method with a random input signal is found in Antoni and Schoukens (2007, 2009) and Schoukens et al. (2006). A thorough discussion of the errors when the smoothed periodogram estimator is used is found in Bendat and Piersol (2010).

13.5.1 Bias Error in FRF Estimates

Similarly to what we discussed for spectral densities in, e.g., Section 10.3.4, the frequency response estimated with the H1 estimator, even without the presence of contaminating noise, is biased due to the limited frequency resolution. In practical frequency response measurements, this bias error should be minimized by selecting a blocksize for the FFT using the same procedure as presented in Section 10.7.3, i.e., by gradually increasing the blocksize until peaks do not increase in height when the blocksize is increased further. The spectral analysis bias error has two parts: one caused by the fact that the measured output block is not actually caused exactly by the measured input block, and one caused by leakage. The first of these errors is illustrated in Figure 13.6 and is caused by the fact that when we truncate the signals, we measure an output signal of the linear system which is not totally caused by the input signal. To understand this effect, we must consider that the output signal is the input signal convolved with the impulse response, which from Section 2.6.4 we know means that the impulse response is "mirrored" (time reversed), then multiplied by the input signal, and the product is summed up. At the beginning of the output signal in a particular time block, there is, therefore, a region which actually depends on the input signal prior to the time block. At the end of the time block, there is correspondingly a part of the input signal that causes the output signal outside (after) the time block. This error is often erroneously called leakage, although it is not really the same as the previously mentioned effect called leakage in Chapter 9, which was due to the end effects of the blocks in the DFT process. Once the blocksize is made much larger than the impulse response length, the remaining bias error is caused by leakage. An important result in Antoni and Schoukens (2007) is that the normalized bias error, $\varepsilon_b[\hat{H}]$, of a frequency response estimate is proportional to

$$ \varepsilon_b\left[\hat{H}\right] \propto \frac{1}{N^2}\,\frac{H''(f)}{H(f)}, \tag{13.24} $$

where $H''$ is the second derivative of the FRF with respect to frequency. First of all, we can conclude that the bias error is inversely proportional to the blocksize squared, which means


Figure 13.6 Illustration of an arbitrary time block of the input and output signals in the averaging process to compute a frequency response function. A “time mirrored” impulse response is added to the picture to illustrate the convolution of the input signal with the impulse response, which involves mirroring the impulse response before multiplying the input signal and summing, see Chapter 2. The measured output block is therefore not caused by (exactly) the measured input block, which causes an error. This error is minimized by increasing the measurement time so that it is much larger than the length of the impulse response.

it vanishes quickly as we make the blocksize large. In Section 13.10, we will illustrate how this bias error can be made negligible in measurements of a typical lightly damped system. We recognize the ratio of the second derivative to the value of the FRF from the bias error expression for spectral densities in Equation (10.18). Since we usually measure FRFs of lightly damped systems in noise and vibration analysis applications, it is of particular interest to know the nature of the bias error on such systems. It turns out that the bias error, similarly to the spectral density bias error, essentially depends on the ratio of the resonance bandwidth and the frequency increment, i.e., the ratio $B_r/\Delta f$. A plot of a simulation result of the maximum bias error (located at the resonance frequency) of a frequency response of an SDOF system is presented in Figure 13.7.

Example 13.5.1 Assume we have a truck frame with a first natural frequency of 1 Hz, and relative damping of 1%. Determine the necessary frequency increment so that the normalized bias error of an FRF, measured using random excitation, is below 1%. If you need to make 50 independent averages (without overlapping), how long will the measurement take? The resonance bandwidth according to Equation (5.39) is $B_r = 2\zeta_r f_r = 0.02$ Hz. In order to have a bias error of less than 1%, we need to have a frequency increment of at most $\Delta f_{\max} = 0.1 B_r$



Figure 13.7 Maximum normalized bias error of FRF estimate on an SDOF system vs. the ratio of the resonance bandwidth and the frequency increment, Br/Δf. The plot is a result of a simulation of the maximum bias error, which occurs at the resonance frequency, using various blocksizes. Note that the plot shows the magnitude of the bias error; the bias error is actually negative at the resonance frequency. Furthermore, the plot was obtained by estimating the FRF using a half-sine window and 67% overlap, see Section 13.5.2. The plot shows that in order to have a normalized bias error of less than 1%, we have to have at least 10 frequency lines within the resonance bandwidth. It also shows, however, as does Equation (13.24), that the bias error can be made arbitrarily small by increasing the blocksize.

according to the plot in Figure 13.7. Thus, we need to have a frequency increment of less than 0.002 Hz. Since the time of each FFT time block is T = 1/Δf, this means the measurement will take

$$ T_m = \frac{50}{0.002} = 25000 \ \mathrm{s} \approx 7 \ \mathrm{hours}. \tag{13.25} $$

This example shows what a high price low-bias measurements of FRFs can come at! End of example.
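For convenience, the arithmetic of the example can be scripted so the same trade-off is easy to evaluate for other structures; the 10-lines-in-the-bandwidth rule is read from Figure 13.7, and everything else is simply the numbers of Example 13.5.1.

```matlab
% Measurement-time estimate from Example 13.5.1 (random excitation, no overlap).
fr   = 1;            % first natural frequency [Hz]
zeta = 0.01;         % relative damping
nd   = 50;           % number of independent (non-overlapping) averages
Br   = 2*zeta*fr;    % resonance (3 dB) bandwidth, Eq. (5.39) -> 0.02 Hz
df   = 0.1*Br;       % <= 1 % bias needs about 10 lines in Br (Figure 13.7) -> 0.002 Hz
T    = 1/df;         % length of one FFT block [s]
Tm   = nd*T;         % total measurement time [s] -> 25000 s, about 7 hours
fprintf('Block length %.0f s, total time %.0f s (%.1f h)\n', T, Tm, Tm/3600);
```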

13.5.2 Random Error in FRF Estimates

As shown by Antoni and Schoukens (2007) and Schoukens et al. (2006), the random error in estimates of frequency responses using Welch's method has two parts: one caused by the extraneous noise, n(t) in the H1 case, and one caused by leakage noise at the start and end of each block, the latter of which had previously been unaccounted for. The total random error therefore depends on the time window as well as on the number of averages and the overlap percentage, similarly (but not equivalently) to the random error of PSD estimates. In Antoni and Schoukens (2007), it was shown that the smallest random error possible, given a certain amount of data, is obtained by using a half-sine window instead of the more common Hanning window, and with an overlap percentage of 67% instead of the more common overlap of 50% used for spectrum analysis. The differences in results using a Hanning


and a half-sine window are not very large, but it is always good practice to use the best possible method. This will be further discussed and illustrated in Section 13.9. It should be noted that the half-sine window is unsuitable for spectrum estimation, so if both spectra and frequency responses are to be calculated and used, it may be better to use the common Hanning window.

The random errors as defined by Antoni and Schoukens (2007) are rather complicated to interpret. For our purpose, it is sufficient to provide an approximate equation for the random error in the magnitude of frequency response functions, given by Bendat and Piersol (2010). They give the random error of the frequency response between input x(t) and output y(t) for the H1 estimator, in the absence of extraneous noise on the input, as

$$ \varepsilon_r\left[|\hat{H}(f)|\right] \approx \sqrt{\frac{1-\gamma^2_{xy}(f)}{2 n_d\,\gamma^2_{xy}(f)}}, \tag{13.26} $$

where nd is the number of distinct averages, i.e., the number of averages without overlap processing. When using overlap processing, nd should be replaced by Me, the equivalent number of averages, as given by Figure 10.8. This equation, although neglecting the random error due to leakage, is attractive in its simplicity, as it reveals that the best way to achieve a small random error is to make sure the coherence is near unity. If the coherence is not near unity due to excessive output noise, the random error in the estimate can still be reduced by a (large) number of averages. A near-unity coherence function should always be attempted when measuring frequency responses, as will be demonstrated in Section 13.9.


Figure 13.8 Typical setup for measuring frequency response using impact excitation. An impulse force is applied to the test object with the hammer, causing a response measured by the accelerometer. The analysis system, here illustrated by an “old-fashioned” FFT analyzer to appeal to all nostalgists, measures the signals and calculates the frequency response between force and response.


In addition, it should be mentioned that in practice we usually also have other errors, such as extraneous noise on the input signal, which makes the precision of this equation good enough for practical use.
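As a quick check during a measurement, Equation (13.26) can be evaluated directly from the measured coherence. The snippet below is a hypothetical convenience calculation (not a toolbox function), and it neglects the leakage contribution just as the equation does; the placeholder coherence is only there to make the lines runnable.

```matlab
% Normalized random error of |H1(f)| per Eq. (13.26), from a measured coherence.
coh   = 0.99*ones(513,1);                  % placeholder: replace with the measured coherence vector
Me    = 20;                                % equivalent number of averages, e.g. from Figure 10.8
eps_r = sqrt((1 - coh) ./ (2*Me*coh));     % note |gamma_xy| = sqrt(coherence)
```

With a coherence of 0.99 and 20 equivalent averages, for example, this gives a random error of roughly 1.6%.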

13.5.3 Bias and Random Error Trade-offs

From Sections 13.5.1 and 13.5.2, it can be concluded that the bias error and random error are contradictory, as they were shown to be for spectral density estimates in Chapter 10. To reduce the bias error we need to use a large blocksize, which (if we have a fixed amount of recorded data) reduces the possible number of averages. To reduce the random error we need to take many averages, which (under the premise of a fixed amount of data) makes the available blocksize smaller. We thus have to make a compromise between bias and random errors if we have a certain amount of data. In practical FRF measurements, however, it is good practice to experimentally verify a sufficiently small bias error prior to recording the data, and then to verify how many averages are needed to produce a sufficiently small random error. In many cases, it is necessary to analyze the necessary measurement time prior to the measurement, so that enough averages can be made to get a reasonably low random error and bias error. Instead of using Welch's method to compute the auto- and cross-spectral densities needed for the FRF estimator, it is, of course, possible to use the smoothed periodogram method described in Section 10.3.6. Actually, this estimator suffers a little less from leakage noise at the start and end of the blocks than Welch's method, particularly for small blocksizes. However, the difference is so small that from a practical point of view it is equivalent to Welch's estimator. It can also be shown that the two estimators are asymptotically equivalent when the blocksize increases (Antoni and Schoukens 2007).

13.6 Coherent Output Power

Assuming we have a system with noise on the output only, as in Figure 13.3, we will now look at some properties which allow us to compute the spectral densities of the output from the linear system, Guu, and of the extraneous noise, Gnn. From Equation (13.15) and the expression for H(f) with the H1 estimator from Equation (13.6), we first have

$$ \hat{G}_{uu} = \left|{}_1\hat{H}\right|^2 \hat{G}_{xx} = \frac{|\hat{G}_{yx}|^2}{\hat{G}_{xx}}. \tag{13.27} $$

Using the first and second terms in this equation, we can thus estimate the spectral density of u(t), i.e., the part of y(t) which comes from x(t) through the linear system, even though we cannot measure u(t). By noting the similarity between Equation (13.27) and the definition of the coherence function in Equation (13.22), we also find that Equation (13.27) can alternatively be calculated as

$$ \hat{G}_{uu}(f) = \hat{\gamma}^2_{yx}(f)\,\hat{G}_{yy}(f), \tag{13.28} $$

which is the usual formula used to compute $\hat{G}_{uu}$.

$\hat{G}_{uu}(f)$ is called the coherent output power (spectrum), since it stands for the part of Gyy(f) which linearly derives from the input, x(t), i.e., which is coherent with x(t). This function can be used to identify noise sources (Bendat and Piersol 1993). However, it is crucial to remember that this formula is based on the system without noise at the input. In order to identify noise sources using Equation (13.28), it is thus important to find a "clean" input, which can be measured without extraneous noise, see Section 15.3. Using Equation (13.28), we can also express Equation (13.19) as

$$ \hat{G}_{nn}(f) = \left(1 - \hat{\gamma}^2_{yx}(f)\right)\hat{G}_{yy}(f), \tag{13.29} $$

because x (and therefore u) and n are uncorrelated. We can therefore also estimate the PSD of the contaminating noise, even though we cannot measure this signal. Again, this formula is valid under the assumption that no noise exists at the input of the system.

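In practice, Equations (13.28) and (13.29) amount to a simple split of the measured output PSD. Assuming a coherence estimate coh and an output PSD estimate Gyy are already available from the estimation above (hypothetical variable names), the split is:

```matlab
% Split the output PSD into coherent and noise parts, Eqs. (13.28)-(13.29).
% coh: estimated coherence gamma^2_yx(f); Gyy: estimated output PSD (same frequency grid).
% Only valid when the measured input is free of extraneous noise.
Guu = coh .* Gyy;          % coherent output power: the part of y(t) driven by x(t)
Gnn = (1 - coh) .* Gyy;    % PSD of the extraneous output noise n(t)
```

Plotting Guu and Gnn together gives a direct, frequency-by-frequency picture of the signal-to-noise ratio at the output.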
13.7 The Coherence Function in Practice

We will now find a few more relations which help in interpreting the coherence function. First of all, we should stress that the coherence function is a result of the fact that there is a difference between the H1 and H2 estimator results, i.e.,

$$ E\left[\frac{\hat{G}_{yx}}{\hat{G}_{xx}}\right] \neq E\left[\frac{\hat{G}_{yy}}{\hat{G}_{xy}}\right]. \tag{13.30} $$

Also, it is apparent that the coherence function equals exactly 1 if we only make one average, since then

$$ \hat{\gamma}^2_{yx} = \frac{|\hat{G}_{yx}|^2}{\hat{G}_{xx}\hat{G}_{yy}} = \frac{(YX^*)(Y^*X)}{XX^*\,YY^*} = 1. \tag{13.31} $$

This should be interpreted such that the coherence is undefined when only one average is taken. In practice, the coherence function requires more averages to obtain a small random error than does the frequency response. There is no exact solution for the random error in coherence estimates, but Bendat and Piersol (2010) give an approximate equation where the normalized random error is given by

$$ \varepsilon\left[\hat{\gamma}^2_{yx}\right] \approx \frac{\sqrt{2\left(1-\gamma^2_{yx}\right)}}{|\gamma_{yx}|\sqrt{n_d}}, \tag{13.32} $$

where nd is the number of distinct averages, without overlap processing. It is worth pointing out that this error formula means that a very large number of averages is needed to keep the random error low when the coherence is low. For example, a random error of ε ≤ 0.02 when the coherence is $\gamma^2_{yx} = 0.5$ requires 5000 distinct averages, whereas when the coherence is $\gamma^2_{yx} = 0.8$, only 500 averages are needed to produce the same random error. With overlap processing, nd in Equation (13.32) should be replaced by Me, the equivalent number of averages from Figure 10.8. It is apparent that, in general, the error given by Equation (13.32) is larger than the random error of the magnitude of the FRF, given by Equation (13.26).
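Solving the approximate Equation (13.32) for nd gives a simple planning rule for how many averages a desired coherence accuracy requires; the snippet below just rearranges that formula and is only as good as the approximation itself.

```matlab
% Number of distinct averages needed for a target random error in the coherence,
% obtained by solving Eq. (13.32) for nd (approximate, random excitation assumed).
coh        = 0.5;                                   % expected coherence, gamma^2
eps_target = 0.02;                                  % desired normalized random error
nd = ceil( 2*(1 - coh) / (eps_target^2 * coh) )     % -> 5000 for this case
```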


We will now look at the quantitative value of the coherence function in the cases of noise on the output and input, respectively. If we assume a system with extraneous noise on the output only, combining Equation (13.19) and Equation (13.28) leads to the relation

$$ \gamma^2_{yx} = \frac{G_{uu}}{G_{yy}} = \frac{G_{uu}}{G_{uu}+G_{nn}} = \frac{1}{1+\frac{G_{nn}}{G_{uu}}}. \tag{13.33} $$

From this equation, it follows that the coherence function deviates from unity when the extraneous output noise is not zero. If, instead, we assume a system with noise on the input only, then according to Figure 13.4 we have that

$$ G_{yx} = G_{yv} = {}_2H\,G_{vv}, \tag{13.34} $$

which follows if we formulate the equation for Y as a function of V, multiply by X* and then average over many blocks. But we also have, from Equation (13.27),

$$ G_{yy} = |{}_2H|^2 G_{vv}. \tag{13.35} $$

If we combine the two last equations with Equation (13.22), we arrive at the relation

$$ \gamma^2_{yx} = \frac{|G_{yx}|^2}{G_{xx}G_{yy}} = \frac{|G_{yv}|^2}{\left(G_{vv}+G_{mm}\right)G_{yy}} = \frac{|{}_2H|^2 G_{vv}^2}{\left(G_{vv}+G_{mm}\right)|{}_2H|^2 G_{vv}} = \frac{1}{1+\frac{G_{mm}}{G_{vv}}}. \tag{13.36} $$

Equation (13.36) shows how the coherence deviates from unity when there is noise (only) on the input. It should be noted that the coherence function is thus, by combining Equations (13.33) and (13.36), a result of the total amount of extraneous input and output noise in the measurement in the general case of noise on both input and output signals. We can now summarize some possible reasons for the coherence function to deviate from unity:

● Either the noise m(t) or n(t), or both, are not negligible compared with the measured signals x(t) and y(t).
● Bias errors exist due to insufficient frequency resolution, as described in Section 13.5.1.
● The system H(f) is (strongly) nonlinear or not time invariant.
● Bias errors exist due to a time delay between the signals x(t) and y(t). This is explained in detail in Bendat and Piersol (2010).
● The model is incorrect, for example, v(t) is not the only signal which contributes to y(t). This is explained in detail in Bendat and Piersol (2010).

13.7.1 Nonrandom Excitation

If the input signal x(t) to a linear system is not random, which is often the case, as will be evident in the following sections, the above discussion is not entirely applicable. However, the coherence function can still be used as an indicator of the signal-to-noise


ratio at each frequency bin. If the coherence function equals unity, it is always an indication that the signal-to-noise ratio is sufficient. It should be mentioned, however, that even if the coherence is unity, the estimated frequency response can be erroneous. This is the case, for example, if the model is not true, for instance if there is some other signal, correlated with the measured input signal, which adds to the measured output signal. An example of this case is found in Section 13.12.3.

13.8 Impact Excitation

We shall now study how the above methods are used to measure frequency responses on mechanical systems. We will start with the method of impact excitation (sometimes called impulse excitation), where an impulse hammer, described in Section 7.7, is used to excite the structure under test. This technique was first developed in the 1970s (Halvorsen and Brown 1977) and received some renewed interest in the work of Fladung (1994), who, by increasing the number of references, showed that impact testing in many cases can give high-quality results for modal analysis. Impact excitation is easier to apply than shaker excitation, since the structure does not need to be loaded by a force transducer and mounted to a shaker. The method also has a few other advantages compared with shaker excitation, for example:

● the force is well defined in the striking direction without transverse forces (see the section on shaker excitation below), and
● it is easier to excite higher frequencies (which frequencies are considered "higher" naturally depends upon the measurement object).

The method, however, also has some disadvantages compared with shaker excitation, most importantly:

● the signal-to-noise ratio (SNR) is low compared with methods using continuous excitation signals, particularly because the method measures the free decay of the structure, which means the contaminating noise leads to a smaller SNR the longer the time record is extended,
● the risk of exciting, and difficulties caused by, nonlinearities is higher, and
● the total energy in the excitation cannot be divided into multiple-input force signals, as it can with shaker excitation (see Chapter 14).

Impact excitation may be preferred for the above reasons when one or more of the following points weigh to its advantage:

1. It is desired to measure quickly, and the requirements for precision are less important.
2. The structure is lightly damped and difficult to excite by the shaker at its natural frequencies.
3. It is desired to investigate suitable excitation locations for shaker excitation.
4. Relatively high frequencies are being studied.
5. The structure is heavy, which requires large and expensive shakers (for example, when studying bridges and other building structures).


Measuring frequency response using impact excitation can be carried out with relatively simple equipment. As a minimum, an FFT analysis system with two channels, an impulse hammer and an accelerometer, are required, as illustrated in Figure 13.8. With common modern data acquisition systems with multiple (more than two) channels, in general, it is recommended to use several reference accelerometers to acquire more redundant data.

13.8.1 The Force Signal

The basis of impact excitation is that a short force pulse has a broad spectrum – the shorter the pulse, the higher the frequencies in its spectrum, see Figure 13.9. The force spectrum also depends on the structure’s hardness and its dynamic stiffness at the excitation point. The strength of the impact also plays a role (a harder impact in general produces higher frequencies). Thus, it is important to choose the right hammer and the right tip to obtain a “good” force spectrum, i.e., a spectrum that does not drop too much within the desired frequency range. A total drop of less than 20 dB is usually considered good, but with the increased dynamic range in modern instruments, this can be stretched. As we will show later, the recommended procedure is to test the available dynamic range, so no assumptions need to be made with regards to the correct tip to be used. In Figure 13.9(a), it can be seen that particularly the shorter impact displays some ringing before and after the main impact signal. This is seen very often during impact testing and is an effect of the fact that the actual bandwidth of the force signal is higher than the


Figure 13.9 a) Shape of the force pulse of two different hammer tips: a hard (solid) and a softer tip (dashed). (b) Corresponding spectra (Transient spectrum, see Section 10.5). The time scale for the impacts in (a) has been zoomed in, and the impacts have been separated in time for better viewing. A hard tip (solid line) gives a short impulse, which has a spectrum with high frequencies. The dashed line in (b) shows that the spectrum of the soft impact has zeros at certain frequencies, that is, there are frequencies with little energy. Normally, only frequencies in the main lobe are used for determining the transfer function. In (a), the shorter impact also shows some “ringing” before and after the main pulse. This is an effect of lowpass filtering which is often seen during impact testing. The bandwidth of the pulse is in this case larger than the bandwidth used for data acquisition. This does not affect the quality of the measurement.


bandwidth used for data acquisition. The bandwidth limitation does not negatively affect the measurement, as each frequency is independent of all other frequencies, see Chapter 2. During excitation, the force from the hammer impact should normally be as small as possible, as nonlinearities are otherwise more likely to be excited. In the worst case, the structure can be deformed at the impact point, resulting in nonlinearities which completely destroy the results. How much "as small as possible" is naturally difficult to say in general. When exciting a bridge with a sledgehammer, it could be quite a lot, but as a rule of thumb, the impact should be carried out so that it feels soft but distinct. If the signal-to-noise ratio is not sufficient, then one may have to increase the force, see below. A better alternative, however, is often to select a more sensitive accelerometer. The force signal always contains some noise from the transducer and (to a much lesser degree in modern measurement hardware with very high dynamic range) from the electronics of the input channel. This noise deteriorates the quality of the force spectrum, particularly after the impact has occurred, where the true force signal is zero. The majority of the noise can therefore be removed by applying a force window to the force signal, as illustrated in Figure 13.10. The force window is unity from the first sample in the time block, and a while


Figure 13.10 Illustration of (a) a noisy force signal; (b) a force window; (c) transient spectrum of the unwindowed force signal; and (d) transient spectrum of the windowed force signal. It is seen that the force window removes most of the noise in the spectrum.


after the impact has gone to zero, and then smoothly approaches zero, where it stays for the remainder of the block. Its function is to multiply the noise at the end of the force signal by zero, to eliminate that noise. The DFT is sensitive to sharp edges, so the force window must go smoothly from 1 to 0 so that the force spectrum does not get distorted; the transition can, for example, be made with the upper part of a half sine with a duration of the same length as the part which is unity.

In most cases, a careful examination of the measured force signal will show that there is a slight offset voltage in the signal, resulting from the input electronics. It is important to remove this offset prior to computing the spectrum of the force signal. Otherwise, the offset will result in a distorted spectrum at low frequencies. The offset can be removed by using, for example, the first half of the samples in the pretrigger part of the force signal to calculate a mean value, which is then subtracted from the force signal. Alternatively, the first sample in the force signal can be subtracted from the force signal.
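A minimal sketch of both operations is given below. The synthetic force pulse and the variable names (fs, n_pre, t_end) are invented for the illustration, and the half-cosine taper is just one possible smooth transition from 1 to 0 consistent with the description above.

```matlab
% Sketch of pretrigger offset removal and a force window for one impact block.
fs    = 4096;  N = 4096;  n_pre = 200;                    % assumed sampling and pretrigger settings
x     = 0.002*randn(N,1);                                 % synthetic noise floor
x(n_pre+(1:20)) = x(n_pre+(1:20)) + 0.5*sin(pi*(1:20)'/20);  % synthetic force pulse after the pretrigger
t_end = (n_pre + 200)/fs;                                 % keep the window at unity a while after the pulse

x  = x - mean(x(1:round(n_pre/2)));       % remove DC offset using the first half of the pretrigger
n1 = round(t_end*fs);                     % number of samples that stay at unity
n2 = min(n1, N - n1);                     % taper length, same as the unity part if it fits
w  = zeros(N,1);
w(1:n1) = 1;
w(n1+1:n1+n2) = 0.5*(1 + cos(pi*(1:n2)'/n2));   % smooth transition from 1 to 0
xw = x .* w;                              % windowed force signal; the rest of the block is zero
```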

13.8.2 The Response Signal and Exponential Window

The response signal from an impact test is a free decay which approaches zero. The further away from the impact we get, the closer to the noise floor the response signal gets. This means that if the transducer noise is not negligible, there will be a deterioration in the signal-to-noise ratio the longer we make the measurement time. Multiplying the response signal with an exponential window defined by

$$ w_e(t) = e^{-at}, \tag{13.37} $$

with a suitably adjusted exponential constant, a, will suppress the last part of the response signal and therefore improve the signal-to-noise ratio. The exponential window is acting as artificial damping on the structure and will thus distort the frequency response. If modal parameter estimation is done on the measured FRF, the added artificial damping can be calculated and removed from the modal damping, as described in Section 13.8.4, provided the exponential window has been applied to both the force and the response signal. An example of a response signal, before and after application of an exponential window, is shown in Figure 13.11.
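A minimal sketch of applying the window is given below, where the exponential constant a is chosen from a prescribed final window value (the 0.001 used in Figure 13.11 is one common choice); the synthetic blocks and variable names are illustrative only.

```matlab
% Apply an exponential window, Eq. (13.37), to one force/response block pair.
fs = 4096;  N = 4096;  t = (0:N-1)'/fs;
y_block = exp(-5*t).*sin(2*pi*120*t) + 0.01*randn(N,1);          % synthetic decaying response
x_block = [zeros(100,1); 0.5*sin(pi*(1:20)'/20); zeros(N-120,1)]; % synthetic force pulse

w_end = 0.001;                     % desired window value at the end of the block (as in Figure 13.11)
a     = -log(w_end)/t(end);        % exponential constant giving w_e(T) = w_end
w_e   = exp(-a*t);                 % Eq. (13.37)
yw    = y_block .* w_e;            % window the response...
xw    = x_block .* w_e;            % ...and the force, so the correction of Eq. (13.43) applies later
```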

13.8.3 Impact Testing Software

As for spectrum analysis, which we discussed in Chapter 9, a "standard" has emerged for how impact testing is performed in commercial software for noise and vibration analysis. In this section, we will start by explaining this standard technique, and after that, suggest some ways to improve it using the power of modern computers. Unfortunately, the standard method was implemented in the days of expensive storage space and limited computing capacity. Nevertheless, it is important to understand how commercial systems implement impact testing in order to correctly use such systems. The first setting is the trigger, which is set with a trigger level and slope, and naturally should be set to trigger on the force signal. The data acquisition is triggered by each impact, after which N samples are acquired. The software then computes the auto- and cross-spectra necessary for the FRF estimator and adds them to a cumulated average. Many FFT analysis


Figure 13.11 Response acceleration from an impact. In (a), the original response signal is plotted, and in (b), the response after applying an exponential window with 0.001 as final value is plotted (solid) together with the exponential window (dotted). Note that the exponential window has been scaled to be visible in the figure; it always starts at 1.

systems include the possibility of stopping after the time block has been collected, but before it is included in the average (called “interrupted averaging” by some manufacturers), for a manual or automatic selection process where the user can select whether to include or reject the current impact in the averaging. Also, some manufacturers have overload and double-impact (see below) detection, which automatically prevents faulty hammer impacts from being included in the average. The typical procedure is to average in the frequency domain, but we will discuss the possibility of using time domain averaging in Section 13.8.6. Once the trigger level is working, the pretrigger has to be set. Usually, the force should start a few hundred samples into the time block or 5–10% of the blocksize. This ensures that the force time signal is not truncated at the beginning. Once the trigger is set up and tested (which can sometimes be cumbersome because you need to know the full-scale range and approximately what the force level is to set the trigger correctly in percent of the full scale range, which is most common), it is time to optimize the FFT parameters. We will here outline a procedure that simplifies this and ends with optimal settings. We assume that we know the frequency range, which is usually given by the number of modes we want to study. Then we start with some default settings presented in Table 13.1.


Table 13.1 Default settings for impact testing.

Frequency range: As high as the highest resonance (of interest) plus some margin
Window: None (Uniform)
Block size: 1024 or 2048
Averaging: Stable, 3–5 measurements
FRF estimator: H1

The next step is to set full-scale range for the A/D converters as described in Chapter 11. Make some impacts and see that you do not get any overloads. If there is a possibility to automatically reject overloaded blocks this can be recommended. When the analyzer triggers as expected, the frequency spectrum of the force signal should be examined, and a proper tip should be selected for the impulse hammer. With an appropriate tip, the force spectrum should drop at the end of the frequency range, but not so much that the signal-to-noise ratio is too bad (we will check this later, so you may need to go back to this point). The force spectrum should drop because otherwise it includes higher frequencies, which cause the response signals to be higher than necessary, and increases the risk of exciting nonlinearities. When the spectral content is acceptable, it is time to set the optimal blocksize. The proper blocksize is given by the system properties as we discussed for spectrum analysis in Chapter 9; the frequency resolution should be set so that the bias error at each resonance of interest is minimal. It is investigated as mentioned in Section 13.5.1 by gradually increasing the blocksize and making new measurements, and comparing the FRFs. When the bias error is eliminated (i.e., small enough by engineering judgment), the peaks at the resonances do not increase with increasing blocksize (or decreasing Δf , really). Once these fundamental settings have been established, what remains is to check if the SNR is sufficient, by looking at the coherence function. If there are dips in the coherence function, three actions can be taken: (i) a force window can be applied, as explained in Section 13.8.1; (ii) an exponential window can be applied, as described in Section 13.8.2; and (iii) a harder tip can be selected, if the reason for the low SNR is that the force spectrum drops too much (this is seen by the fact that the coherence gets noisy at higher frequencies). The second point is illustrated in Figure 13.12. In this figure, the effect of adding an exponential window is illustrated. It is seen that the dip in the coherence, which coincides with an anti-resonance in the FRF, disappears when the exponential window is added. This is an indication that the dip in the coherence was a result of low signal-to-noise ratio in the accelerometer signal.

13.8.4 Compensating for the Influence of the Exponential Window

The effect of the exponential window, that the “apparent” damping of the structure increases, can be computed, assuming that the window is used on both the force and


Figure 13.12 Estimated frequency response and coherence with no exponential window in (a) and (b) and with an exponential window in (c) and (d). Five averages were used and the blocksize was N = 2048 samples. A force window was applied in both cases. In (b), it is seen that not using an exponential window results in a coherence function with a dip at approximately 250 Hz. The application of the exponential window removes this dip almost entirely, as shown in (d). This is an indication that the dip (which is coinciding with the antiresonance in the frequency response) was caused by noise in the accelerometer signal.

response signals. Thus, if the damping is later estimated from an FRF estimated with an exponential window applied, a correction factor can be applied to the estimated damping to obtain the true damping of the structure. This can easily be shown through the following equations. The Laplace transform pair for multiplication by an exponential function is

$$ e^{-at}y(t) \Leftrightarrow Y(s+a), \tag{13.38} $$

that is, multiplying by an exponential window implies a substitution of s by (s + a). Furthermore, we know from Chapter 5 that the transfer function of a simple SDOF system is

$$ \frac{X(s)}{F(s)} = \frac{1/m}{s^2 + 2\zeta\omega_n s + \omega_n^2}. \tag{13.39} $$


If we multiply both force and response by an exponential window, this implies substituting all instances of s by (s + a), and we obtain

$$ \left.\frac{X(s)}{F(s)}\right|_{s=s+a} = \left.\frac{1/m}{s^2 + 2\zeta\omega_n s + \omega_n^2}\right|_{s=s+a}. \tag{13.40} $$

In order to investigate how the exponential window influences damping, we shall now study the poles of Equations (13.39) and (13.40), since the Laplace variable s clearly only exists in the denominator. The poles of Equation (13.39) are, as we know,

$$ s_{1,2} = -\zeta\omega_n \pm j\omega_n\sqrt{1-\zeta^2} \quad \text{for } \zeta \leq 1. \tag{13.41} $$

Equation (13.40) requires a few steps of calculation, but finally the poles are

$$ s_{1,2} = -(\zeta\omega_n + a) \pm j\omega_n\sqrt{1-\zeta^2} \quad \text{for } \zeta \leq 1. \tag{13.42} $$

Consequently, the exponential window only influences the damping, that is, the real part of the poles in Equation (13.42). The increased damping can thus easily be compensated for if the variable a used in the exponential window is known. If we denote the measured relative damping ζm, and the undamped resonance frequency (in Hz) fr, the corrected ("true") damping, ζc, is

$$ \zeta_c = \zeta_m - \frac{a}{2\pi f_r}. \tag{13.43} $$

It should be noted that Equation (13.43) is only valid if the exponential window is applied to the force time signal as well as to the response signal, as otherwise Equation (13.40) does not apply, as stated above. This is not very intuitive, and thus it is important to check that the software one uses correctly multiplies the force signal by an exponential window. More details may be found in Fladung and Rost (1997).
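As a small worked illustration of Equation (13.43), with numbers made up purely for the example:

```matlab
% Correct the damping estimated from an exponentially windowed FRF, Eq. (13.43).
zeta_m = 0.021;            % relative damping estimated from the windowed FRF (illustrative)
fr     = 250;              % undamped natural frequency [Hz] (illustrative)
a      = 15;               % exponential constant actually used in the window (must be known)
zeta_c = zeta_m - a/(2*pi*fr)   % corrected ("true") relative damping, about 0.0115 here
```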

13.8.5 Sources of Error

Theoretically, the impacts can vary without any effect on the estimate of the frequency response function. However, in reality, when the structure is not entirely linear, all impacts must be equal. This criterion is not always easily fulfilled, as will be shown in Section 13.8.6. If a double-impact occurs (one impact containing two or more hits of the hammer tip), which is difficult to avoid on some flexible objects that bounce back at the hammer, the phenomenon shown in Figure 13.13 occurs. The Fourier transform of the force is no longer nice and smooth, and this results in numerical errors in the computation of the frequency response. Examining the time signal and spectrum of the force, it is easy to discover double-impacts. Even better is when the analysis system has an automatic "double-impact detection" functionality, which is an algorithm that automatically detects double-impacts and warns the user, or automatically ignores that impact in the averaging process. Another error which can easily destroy an impact test occurs if one or more of the hammer impacts hit different locations on the structure. It is, of course, important that each impact hits the same point, as is easily understood from the relation between mode shapes and FRF appearance which we discussed in Section 6.4. How accurate one must be thus depends on the spatial wavelength of the mode shapes. If different points on the structure are excited with the different impacts during a measurement, the error that results is a bias error, which can be detected by the coherence function dropping below unity. Finally, as in

Figure 13.13 The effect of double-impact on the force spectrum.

all measurements, it is important to avoid overloading the sensors and input electronics, which destroys the results.

13.8.6 Improving Impact Testing by Alternative Processing

So far, we have used averaging in the frequency domain, which is the most common way to perform impact testing. It has been suggested that time domain averaging could be used instead (Fladung et al. 1999). However, as we showed for correlation function estimation in Section 10.4.2, time domain and frequency domain averaging are equivalent, since the Fourier transform is linear and the order of the transform and the averaging can therefore be reversed. This was not understood earlier. In practical impact testing, with an imperfect operator of the impulse hammer, the individual impacts during the measurement of a particular point will typically differ by perhaps as much as ±50–100%. This can significantly reduce the quality of the estimated FRF if the structure is slightly nonlinear (as many structures are). In Brandt and Brincker (2011), the advantages of recording the time signal during impact testing, and then postprocessing the time signal, as opposed to the online processing discussed in Section 13.8.3, were discussed. The idea with this method is to set the measurement system up to record the signals from the impulse hammer and accelerometer during a time long enough to make five to ten impacts, with long enough time in between each impact to allow the maximum anticipated blocksize to be used in the later processing. After the measurement, software customized for postprocessing the signal can then be applied to obtain the best possible FRF out of the available data, possibly discarding some of the impacts. Among the most important advantages of this technique are that

● the data acquisition becomes easier because no interaction is needed during the measurement (such as accept/reject of each impact, which is commonly required),


Figure 13.14 Illustration of typical improvement of coherence on slightly nonlinear structure by selecting only force impacts with approximately equal level. The figure shows the coherence using five averages in (a), vs. using only the two impacts giving best coherence, in (b). The plots are from a measurement on a slalom ski.







● the check for optimal blocksize can be made without remeasuring, since the data are available (assuming the impacts were spaced far enough apart to allow for the necessary blocksize),
● the optimization of force and exponential windows can be made easier, since the data are available for repeated tests without remeasuring, and
● in the postprocessing phase, only those impacts that produce a good FRF estimate need to be taken into the averaging process; for instance, the user can select only impacts which have approximately the same force level.

Unfortunately, few designers of commercial software packages have so far implemented impact-testing software using this feature. It is easy, however, to implement the above-mentioned processing in, for example, MATLAB/Octave. In fact, the accompanying ABRAVIBE toolbox contains a graphical user interface implementation of the time data processing described here. A typical example of the possible improvement is shown in Figure 13.14. In Section 19.7, we will demonstrate in detail how this processing is done and show results of applying it to a real dataset.

13.9 Shaker Excitation

In many measurement applications, it is preferable to mount a shaker (see Section 7.13) with a force transducer to the structure, instead of using an impulse hammer for excitation. The shaker has several advantages compared with impact excitation, most importantly that it, in general, gives a better signal-to-noise ratio since the excitation signal is active during a


larger part of the measurement time. For a discussion about attaching the shaker and force transducer to the structure, see Section 7.5. Different signals may be used to drive the shaker and can be divided into signals with continuous spectra and signals with discrete spectra. The former are either random signals or transient signals or a combination of the two. Signals with discrete spectra are of course periodic in the measurement window (see Section 9.3.4) and have the advantage of concentrating the energy in the signal on those frequencies which we estimate with the DFT.

13.9.1 Signal-to-noise Ratio Comparison

To consider the signal-to-noise ratio of different excitation signals, we will use the parallel filter approach discussed in Section 9.1.1 and consider a single spectral line, k. The (approximate, since we only consider the main lobe of the spectrum of the time window) result of a single DFT at a frequency line is illustrated in Figure 13.15, where the extraneous noise, n(t), is also depicted. Two cases are of interest in this situation: either the excitation signal has a continuous spectrum, as is the case in impact testing and for pure and burst random excitation, or the excitation signal has a discrete spectrum, which is the case for the periodic excitation signals treated in Sections 13.9.4 and 13.9.5. In the first case, of an excitation signal with continuous spectrum, the value obtained at the spectral line of interest is a result of the true spectrum of the signal weighted by the spectrum shape of the time window (which in this case should be a rectangular window for impact testing or burst random, but a half-sine window or Hanning window for pure random). The extraneous noise, if present, is also weighted by the time window spectrum shape and added to the spectral line. The signal-to-noise ratio is the ratio of the mean-square value of the weighted signal spectrum and the mean-square value of the weighted background noise spectrum, as depicted in Figure 13.15(a) and (b). Furthermore, due to the weighting by the spectrum shape of the time window, with excitation signals with continuous spectra there will always be some bias, although with long blocksizes the bias can be made small. In the second case, we have a periodic excitation signal, with a period coinciding with the measurement time. In this case the frequency of the deterministic signal coincides with the spectral line, k, and thus, without the extraneous noise, we get a correct value of the spectrum at the spectral line, with no bias at all. If there is some background noise, this noise will be weighted by the time window spectral shape before being added to the spectral line. The signal-to-noise ratio is thus larger in this case (provided the mean-square levels of the transient and periodic signals are the same within ±Δf/2), as there is no weighting of the signal in the case of a periodic signal. This fact makes the signal-to-noise ratio superior when using periodic excitation signals compared to using nonperiodic excitation signals.

13.9.2 Pure Random Noise

Pure random noise (or "true random"), see Figure 13.16(a), is a continuous, normally distributed noise signal. This signal has a continuous spectrum and since the signal is continuous in the time domain, it must be windowed by, for example, a half-sine window when we carry out the frequency analysis. As we discussed in Section 10.3.3, any window but the rectangular gives rise to a widening of spectral peaks. Therefore, when measuring


Figure 13.15 Illustration of the signal-to-noise ratio at a spectral line for an excitation signal with continuous spectrum ((a) and (b)) and with discrete spectrum ((c) and (d)). In (a), the true spectra of the excitation signal (solid) and the extraneous noise (dashed) are plotted; in (b), the same spectra are shown after the spectrum shaping due to the effect of the time window. In (c) and (d), the corresponding spectra are shown for an excitation signal with discrete spectrum. The signal-to-noise ratio is the mean square sum of the excitation signal divided by the mean square sum of the extraneous noise. As can be seen in the figure, by comparing (b) and (d), the signal-to-noise ratio will thus be higher for an excitation signal with discrete spectrum, since in this case the spectrum of the excitation signal is not affected by the window shaping.

frequency response with low damping, which we normally have in structural dynamics, continuous noise gives poor resolution of the resonances. Because of this, pure random is often erroneously said to be completely inappropriate for exciting structures with low damping. Using pure random noise as excitation requires a much larger blocksize for a particular bias error than other excitation signals, as we will show in the comparison between different excitation signals in Section 13.10. Therefore, it may be considered an inappropriate excitation signal for shaker excitation, as the measurement can be done much more quickly, with equal bias, with one of the excitation signals presented in Sections 13.9.3 through 13.9.5. However, it is important to understand that pure random can be used with (from a practical standpoint) the same bias, if only more data are recorded. In many cases with operating measurements, for example, the natural signals are random, and then good FRFs can very well be measured, but at the price of rather long measurement times compared with what we need with other excitation signals when exciting a structure in the lab.


Figure 13.16 Common excitation signals (left) for vibration excitation when measuring frequency response, and their respective (theoretical) spectra (right): (a) pure random noise, (b) burst random noise, (c) pseudo-random noise, and (d) periodic chirp. See Sections 13.9.2–13.9.5 for descriptions of the different excitation signals.


13.9.3 Burst Random Noise

Burst random noise, see Figure 13.16(b), is continuous noise which is turned off at a certain time during the measurement of each block, after which the force becomes zero and the responses (accelerations) die out during the remaining part of the block. Since all signals both begin and end at zero, no window should be used for this signal. It is relatively simple to create this type of signal within the hardware. The only requirement is that the data acquisition can be triggered when each burst block is sent out; the samples, however, do not need to be exactly synchronized. This is therefore the most common excitation signal for modal analysis available in commercial measurement systems. Yet another advantage is that it can easily be used for multiple-input estimation, see Chapter 14. As mentioned above, burst random noise requires all signals to die out before the end of the block. Although this may sound easy, sometimes it is not so easy to accomplish, particularly if the rigid body modes of a freely supported structure have low damping. Sometimes, an improvement can then be made by adding some time between the bursts which is not measured. The leakage that appears because the response signals have not died out entirely at the end of the block then mostly affects the very low frequencies where the rigid body modes are located and does not severely affect the frequency range of interest. An often misunderstood fact is the effect of the interaction between the shaker and the structure during the end of the block, when the source is turned off. Depending on the amplifier type used to control the shaker, either the force is kept at zero (in the case of a current-controlled shaker), or the velocity of the shaker is kept at zero (in the case of a voltage-controlled shaker). In the first case, there is no interaction between the structure and shaker when the excitation signal is turned off, and thus the shaker is not affecting the structure. In the latter case, however, the shaker will add damping to the structure, which will come to rest more quickly than in the former case. This does not, however, influence the estimated FRF, because the force during the interaction is measured, and thus the linear relationship between the force and acceleration is correctly estimated.

13.9.4 Pseudo-random Noise

Another way of achieving an excitation signal which does not require a time window is to make the excitation signal periodic. Pseudorandom noise is, despite its name, a completely deterministic signal, which is furthermore periodic within the time window of the FFT. It can be created rather simply by setting the amplitude of each frequency line in a spectrum to a desired level (usually the same level for all frequency lines), and then adding a random phase to each frequency line. When calculating the inverse FFT of this spectrum, a periodic signal is obtained, with the desired spectral properties, as plotted in Figure 13.16(c). When using a periodic excitation signal, it is important to take into account that the structure has to achieve its steady-state condition before the data are acquired. The first few blocks after turning the shaker on will cause the structure to respond with a transient behavior, and this has to be "waited out." Usually, five to ten periods are enough to achieve a sufficient steady-state condition. On structures with very low damping, however, many more periods may have to be waited out before steady-state conditions apply. Pseudorandom noise has a good signal-to-noise ratio, since all the signal energy coincides with the spectral lines of the DFT. For slightly nonlinear structures, however, the periodicity can be a problem, as these structures respond with nonlinear harmonics of the periodic frequencies. A very important point with periodic excitation signals, which has very rarely been considered, is that the ideal averaging domain for these signals is time domain averaging, and not the "normal" frequency domain averaging which we use for random signals. This was discussed in Phillips and Allemang (2003). Since the periodic excitation signal is entirely periodic in the time window, the extraneous noise can be removed by time domain averaging, thus achieving a bias-free estimate of the FRF, even in the case of noise on both the input and output signals. This does not seem to have been widely acknowledged in the literature but will be demonstrated in Section 13.12. Sometimes, particularly on light structures, it is difficult to excite the structure near its natural frequencies, where the force spectrum then shows dips. In such cases, pseudorandom noise allows shaping of the spectrum by setting the spectral lines to different values, sometimes called "coloring" of the spectrum. This is not as easily done with other excitation signals, although it is possible with most signals. The main drawback with pseudorandom noise is that it requires more sophisticated hardware, because it requires synchronization between the signal generator and the data acquisition input channels to assure the periodicity in the time window. Furthermore, the signal generator has to include a digital-to-analog converter, DAC, which is more expensive than a random generator. With the rapid cost reduction in modern hardware, however, DACs are becoming increasingly available.
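As a minimal sketch (not from the original text), pseudorandom noise can be generated along the lines just described; the blocksize and variable names below are assumptions.

% Minimal sketch of pseudorandom noise generation (assumed parameters)
N = 2048;                                % blocksize (FFT length)
X = zeros(N,1);                          % spectrum to be filled
k = 2:N/2;                               % frequency lines (excluding DC and Nyquist)
X(k) = exp(1i*2*pi*rand(length(k),1));   % unit amplitude, random phase on each line
X(N-k+2) = conj(X(k));                   % conjugate symmetry to obtain a real signal
x = real(ifft(X));                       % one period of the pseudorandom signal
% The same block x is repeated period after period during the measurement, so the
% signal is exactly periodic within the FFT time window.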

13.9.5 Periodic Chirp

Another common periodic excitation signal is the periodic chirp signal, which gets its name from the sound it produces, provided its frequency range is appropriate. This signal consists of a sine which is continuously swept through the frequency range of interest during each block, see Figure 13.16(d). The advantage of using this type of signal compared with the random signals above is that the signal-to-noise ratio is the best, since the crest factor is lower than for pseudorandom noise. If the disturbance noise is negligible, one average suffices, although, as with the noise signals above, a few averages should in general be used to obtain a good result. The fact that the chirp signal consists of a sinusoid can also be advantageous if one knows that the structure is nonlinear and it is desired to have an excitation signal that always has the same amplitude. The noise signals, however, are often better when one has a nonlinear structure and wants to measure a linear approximation of the system, because the noise signals have an amplitude distribution such that all amplitudes are randomly mixed. Since the periodic chirp signal is periodic within the time window, time domain averaging should be applied, see the discussion in Section 13.9.4. The drawback with pseudorandom noise, that it requires more sophisticated hardware than pure random and burst random, applies also to periodic chirp excitation.
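As a minimal sketch (not from the original text), one block of a periodic chirp can be generated as a linear sine sweep; the sampling frequency, blocksize, and sweep range below are assumptions.

% Minimal sketch of a periodic chirp block (assumed parameters)
fs = 2048;                 % sampling frequency [Hz]
N  = 2048;                 % blocksize, so the block is exactly 1 s long
t  = (0:N-1).'/fs;
f0 = 10;  f1 = 500;        % start and stop frequencies of the sweep [Hz]
T  = N/fs;                 % block (and sweep) duration
x  = sin(2*pi*(f0*t + (f1-f0)/(2*T)*t.^2));   % linear sweep within one block
% With these values the sweep completes an integer number of cycles over the block,
% so repeating x block after block gives a signal that is periodic in the FFT window.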

13.9.6 Stepped-sine Excitation

Stepped-sine is a completely different measurement method from those described above. All of the above methods, from impact excitation to shaker excitation with the excitation signals mentioned in Sections 13.9.2–13.9.5, are examples of broadband excitation, i.e., where the excitation signal contains frequencies within a wide frequency band. Stepped-sine excitation implies that we instead allow a sinusoidal excitation signal to step up in frequency. At each frequency, steady-state conditions are reached, after which the input and output signals are measured and the amplitude and phase relationships between the output and the input force are determined, before stepping the frequency to the next test frequency. Stepped-sine processing is often done using the FFT, by setting the sampling frequency so that exactly an integer number of periods is measured, whereby leakage is avoided. Stepped-sine is thus a slow method, but it has the advantage of being able to cope with very low signal-to-noise ratios. The signal-to-noise ratio is the highest possible, which makes it possible to use relatively low excitation levels (since all the signal power is concentrated at one frequency, while the extraneous noise is spread over all frequencies). In some systems supporting stepped-sine excitation, it is possible to control either the excitation force or the response signal using feedback, so that either the force or the response signal is held constant. This is particularly useful when studying nonlinear structures, but can often be vital also in measurements on lightly damped structures to avoid excessive vibration levels (by controlling the response, in this case).
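As a rough simulation sketch of the stepped-sine principle (not from the original text), the "structure" below is an arbitrary stand-in digital filter, and all parameter values and variable names are assumptions.

% Minimal simulation sketch of stepped-sine FRF estimation
fs = 2048;  N = 2048;                  % sampling frequency and FFT blocksize
b = [0.2 0.1];  a = [1 -1.6 0.81];     % stand-in discrete-time system (arbitrary)
freqs = 10:2:500;                      % test frequencies, multiples of fs/N = 1 Hz
H = zeros(size(freqs));
for n = 1:length(freqs)
    f0 = freqs(n);
    t  = (0:(5*N)-1).'/fs;             % 5 blocks: the first 4 are "waited out"
    x  = sin(2*pi*f0*t);
    y  = filter(b, a, x);              % steady state is reached before the last block
    X  = fft(x(end-N+1:end));          % exactly an integer number of periods
    Y  = fft(y(end-N+1:end));
    k  = round(f0*N/fs) + 1;           % FFT bin of the excitation frequency
    H(n) = Y(k)/X(k);                  % amplitude and phase ratio at f0
end
% H can be compared with freqz(b, a, 2*pi*freqs/fs) if the signal package is available.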

13.10 Examples of FRF Estimation – No Extraneous Noise

In this section, we will show some examples of estimation of frequency response using some different excitation signals, which will illuminate many practical aspects of the excitation signals described theoretically in Section 13.9. We will first use an SDOF system simulated by the method described in Section 19.2.3, with no extraneous noise. This is essential to understand the inherent errors in the estimates of FRFs using the various excitation signals. For the simulations, we set the natural frequency of the SDOF system to 100 Hz and the relative damping to 1%. The frequency range is set to 1024 Hz (sampling frequency 2048 Hz), with an additional ten times oversampling in the simulation, to reduce the error due to the simulation to a negligible level. Furthermore, the 3 dB bandwidth of the resonance for the SDOF system is 2 Hz (see Equation 5.39).

13.10.1 Pure Random Excitation

We start by exciting the SDOF system with pure random noise. We make 149 averages (50 independent blocks), with a half-sine window and 67% overlap, which gives optimum bias and random error, as discussed in Section 13.5. The resulting FRFs and coherence functions for three different blocksizes are shown in Figure 13.17. In the zoomed-in plots on the right-hand side, it is seen that the largest blocksize gives a very small bias error. The blocksizes used for the results shown in Figure 13.17 were 2048, 4096, and 16384 samples. The ratio of the 3 dB bandwidth of the SDOF system to the frequency increment is thus 2, 4, and 16, respectively. As mentioned before, it is often said that pure random cannot be used to estimate FRFs for lightly damped systems due to the large bias caused by the time window. As the results in

Figure 13.17 FRF estimates using the H1 estimator and pure random noise on a simulated SDOF system with fn = 100 Hz and ζ = 0.01. In (a) and (b), the magnitude of the FRF and coherence, respectively, estimated using 2048 samples blocksize (dotted), 4096 samples (dash-dotted), and 16384 samples (dashed) are plotted. In (c) and (d), the same plots are zoomed in for a detailed look around the natural frequency. The true FRF is plotted in solid in (a) and (c). As can be seen, with the largest blocksize the bias error is negligible, and the coherence is very near unity (min[γ²(f)] = 0.996 at 100 Hz for the blocksize of 16384 samples).

Figure 13.17 shows, this is not true. However, the price for a low bias error is a large amount of data; in this case, we have used 50 ⋅ 16384 = 819200 samples for the largest blocksize. As Sections 13.10.2 and 13.10.3 will show, there are more economical methods of estimating bias-free FRFs, if we select a more appropriate excitation signal. Still, this example shows that for natural data of random nature, it is quite possible to estimate good FRFs. Another interesting result in Figure 13.17 is that there is a random error in the estimate. At some spectral lines, this error makes the estimate larger than the true value. It should be noted that there is no contradiction between this and the remark in Section 13.4, which stated that the H1 estimate is always less than the true value, as this remark referred to the bias error due to input noise, in the case of the H1 estimator.
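As a minimal sketch (not from the original text) of the Welch-type processing with a half-sine window and 67% overlap used above, assuming x and y are measured input and output records (column vectors) and fs is the sampling frequency; all variable names are assumptions.

% Minimal sketch of H1 and coherence estimation with a half-sine window and 67% overlap
N    = 16384;                          % blocksize
w    = sin(pi*((0:N-1).'+0.5)/N);      % half-sine window
step = round(N/3);                     % approximately 67% overlap
Gxx = zeros(N,1); Gyy = zeros(N,1); Gyx = zeros(N,1);
for i1 = 1:step:(length(x)-N+1)
    X = fft(w.*x(i1:i1+N-1));
    Y = fft(w.*y(i1:i1+N-1));
    Gxx = Gxx + abs(X).^2;             % accumulated (unscaled) auto-spectra
    Gyy = Gyy + abs(Y).^2;
    Gyx = Gyx + Y.*conj(X);            % accumulated cross-spectrum
end
H1  = Gyx./Gxx;                        % H1 estimator (scaling constants cancel)
Coh = abs(Gyx).^2./(Gxx.*Gyy);         % coherence function
f   = (0:N-1).'*fs/N;                  % frequency axis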

13.10.2 Burst Random Excitation

As mentioned in Section 13.9.3, burst random noise is the most commonly used excitation signal for mobility and accelerance measurements on mechanical structures. With this

Figure 13.18 Plots of magnitude of FRF and corresponding coherence for two cases of burst random: 75% burst length (dotted) and 25% burst length (dash-dotted, or '+' sign), with a blocksize of 2048 samples. With the leakage produced by the longer burst length (dotted), an effect can be seen in (b) where the coherence drops at higher frequencies (dash-dotted), and in (c) there is a clear bias at the natural frequency. With the shorter burst length, this leakage error is minimal and the frequency lines are very close to the true FRF (solid). However, it should be remembered that with any excitation signal with a continuous spectrum there is always some bias.

excitation signal, the burst length is adjusted so that the response signal completely dies out before the end of the time block, and no window is then used in the FFT processing. In Figure 13.18(a) and (b), the resulting FRF and coherence for two different burst lengths, 75% (too long) and 25% (sufficiently short), are plotted, using a 2048-sample blocksize. As can be seen in Figure 13.18(b), the dip in the coherence at the natural frequency vanishes when the leakage disappears. The number of averages for burst random has to be large enough to produce a small random error. At least 50–100 averages should be used, and the number has to be increased the more extraneous noise is present. In the example, we have used 50 averages, comparable to the case for pure random in Section 13.10.1. In addition, as mentioned before, it may be necessary in practice to add some idle time between the bursts in order for rigid body motion to die out entirely. Note also that, since the spectrum of the burst random signal is continuous, there will be a bias error if the blocksize is not much larger than the length of the impulse response. As shown in Figure 13.18(c), the bias error is almost zero even with a relatively low blocksize compared with the blocksize necessary for pure random excitation. If the FRF is going
to be used for experimental modal analysis curve fitting, many algorithms can use spatial information, as well as frequency information, to achieve high accuracy in modal parameter estimates, so this coarse frequency resolution is sufficient. In other cases, however, where the FRF is to be used for other purposes, it may still be necessary to have a high resolution of the FRF, which may necessitate a larger blocksize. It should be pointed out that the amount of data we have used for the burst random case is 50 ⋅ 2048 = 102400 samples, which is an eighth of what we used for the pure random excitation.

13.10.3 Periodic Excitation

Without any extraneous noise, the result of using periodic random or periodic chirp excitation would be similar to using burst random, although with periodic excitation, the bias error is exactly zero when no extraneous noise is present. Another difference is, of course, that we could use time averaging with periodic excitation signals, and that we must allow the structure to enter a steady-state response before starting to acquire data. In Section 13.12, we will show some results of periodic excitation in the presence of extraneous noise on both input and output.
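A minimal sketch (not from the original text) of time domain averaging with a periodic excitation, assuming x and y consist of contiguous, synchronously sampled periods of N samples each; names and values are assumptions.

% Time domain (synchronous) averaging over complete periods of a periodic excitation
N = 1024;                              % samples per period (FFT blocksize)
M = floor(length(x)/N);                % number of complete periods acquired
xm = mean(reshape(x(1:M*N), N, M), 2); % time-averaged input period
ym = mean(reshape(y(1:M*N), N, M), 2); % time-averaged output period
H  = fft(ym)./fft(xm);                 % FRF from the averaged periods
% Nonperiodic (extraneous) noise in both x and y is suppressed by the averaging, so
% no window is needed; H is only meaningful at the frequency lines actually excited.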

13.11 Example of FRF Estimation – With Output Noise

We will now add some noise to the output signal to illustrate the effect on the FRF estimate and on the coherence. To illustrate this point, we add a degree-of-freedom to our mechanical system and simulate a two-degree-of-freedom system with a second natural frequency of 200 Hz and damping of 1%. This produces an antiresonance at approximately 158 Hz, as illustrated in Figure 13.19(a). We add a constant noise signal with an RMS level of 10^-2 times the RMS level of the output signal of the system, and with constant PSD over the frequency range. This is a rather large extraneous noise level, which we use to illustrate clearly the result of extraneous noise. The signal-to-noise ratio is, of course, worse where the response signal is small, as the extraneous noise has a continuous spectral density, and the most difficult part of the FRF to estimate is therefore around the antiresonance. In Figure 13.19, the FRF (in (a)) and coherence (in (b)) results are shown for pure random, burst random, and pseudorandom excitation. On the right-hand side, a small frequency region around the natural frequency (in (c)) and the antiresonance (in (d)) is shown, to see the bias error in this region more clearly. As evident from Figure 13.19(b), the noise at the output causes a dip in the coherence at the antiresonance. As seen in Figure 13.19(a) and (b), the coherence is still good at the natural frequencies of the 2DOF system. At the antiresonance, however, there is some random error with pure random and burst random, but not with pseudorandom, and no evident bias error. The noise does not cause a bias error, since we are using the H1 estimator and the noise is in the output signal, which is the assumption underlying the H1 estimator. The random error will diminish as we increase the number of averages. The result from the pseudorandom excitation signal in Figure 13.19 is more remarkable and requires some comments. To produce the results in this example, we have used time domain averaging, which, as mentioned in Section 13.9.4, reduces the noise in both

Figure 13.19 Magnitude of FRFs in (a) and coherence functions in (b) for two excitation cases: pure random excitation with a blocksize of 16384 samples (dotted), and burst random with a blocksize of 4096 samples and 25% burst length (rings or dash-dotted). The output signal was contaminated by uncorrelated noise with a signal-to-noise ratio of 100 times (40 dB). The true FRF is plotted in solid. In all cases, 50 blocks of time data were used. In (c) and (d), the FRF estimated by a pseudorandom excitation signal, with 1024 samples (+ sign or dashed), is shown together with the other estimates, in a frequency region around the first natural frequency (in (c)) and around the antiresonance (in (d)). Although all three excitation signals have a small bias error, the plots illustrate the improvement in bias error achieved by a periodic excitation signal, in this case pseudorandom noise, although a periodic chirp would yield similar results.

signals, x and y. Using time domain averaging on all blocks of data renders the coherence estimate invalid, as it would be based on only one average. It is therefore not plotted in Figure 13.19(b). Two main conclusions can be drawn from this example:

● Any excitation signal can be used to estimate the frequency response with a small bias error, using the H1 estimator in the case of extraneous noise in the output signal. (And the same is valid for the H2 estimator in the case of noise in the input signal.)
● A periodic excitation signal such as pseudorandom (or periodic chirp, which is equivalent in terms of all relevant properties) is superior to any other excitation signal due to its lack of bias error and better signal-to-noise ratio.


13.12 Examples of FRF Estimation – With Input and Output Noise

As a final example, we will add extraneous noise also to the input signal. To make this example more realistic, we will also shape the input force spectrum to illustrate a phenomenon common when exciting light structures. In such cases, the force spectrum will often have dips around the resonances of the structure, caused by the structure moving away from the shaker at the resonances. With this coloring of the force spectrum, the noise in the force transducer typically comes closer to the "true" force signal at these dips in the force spectrum, causing the H1 assumption to fail. The results of a simulation similar to the one described in Section 13.11, but with a colored force spectrum and with noise added also to the input signal, x, are plotted in Figure 13.20. In (a) and (b), the PSDs of the force and response signals are plotted (solid) together with the extraneous noise m(t) and n(t), respectively. Of course, in a practical situation, we do not have access to the spectra of the extraneous noise, but it is useful to picture these noise sources as signals with relatively constant PSD as in the figure. Where the force PSD is high, near the antiresonance of the system, the signal-to-noise ratio is relatively high, and the assumption for the H1 estimator is valid. Around the natural frequencies of the system, however, since the PSD of the force dips, the H1 assumption is invalid. In Figure 13.20(d), this can be seen to cause some (small, in this case) bias in the estimates using pure random (dotted) and burst random (rings). As mentioned in Section 13.2.3, there is no general estimator which can minimize the error in the frequency response in the case of both input and output noise without a priori knowledge about the noise properties. This is valid for random noise input signals. However, as Figure 13.20(d) shows, with pseudorandom excitation (plus sign) and with time domain averaging as used here, the estimated FRF is (almost) bias-free, provided enough averages have been made to "clean up" the two measured signals. It should also be noted that if the ordinary coherence function is desired, it can be computed using a hybrid method. In such cases, time domain averaging is performed on portions of the data, after which the results of these portions are combined using frequency domain averaging. If 5–10 frequency averages are made, a coherence function can be estimated which can be used to assess the quality of the measurement.
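A minimal sketch (not from the original text) of the hybrid averaging idea just described, assuming x and y contain contiguous periods of N samples and are split into a few groups; the group count and variable names are assumptions.

% Hybrid averaging: time domain averaging within groups of periods, followed by
% frequency domain averaging over the groups, so that a coherence can still be formed.
N = 1024;                                   % samples per period
G = 8;                                      % number of groups (frequency domain averages)
P = floor(length(x)/(N*G));                 % periods per group
Gxx = zeros(N,1); Gyy = zeros(N,1); Gyx = zeros(N,1);
for g = 1:G
    idx = (g-1)*P*N + (1:P*N);
    xg  = mean(reshape(x(idx), N, P), 2);   % time-averaged period, group g
    yg  = mean(reshape(y(idx), N, P), 2);
    X = fft(xg);  Y = fft(yg);
    Gxx = Gxx + abs(X).^2;
    Gyy = Gyy + abs(Y).^2;
    Gyx = Gyx + Y.*conj(X);
end
H1  = Gyx./Gxx;                             % FRF estimate
Coh = abs(Gyx).^2./(Gxx.*Gyy);              % coherence from the G group averages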

13.12.1 Sources of Error during Shaker Excitation

A few practical aspects of shaker excitation need to be discussed. Attaching a shaker and making a correct measurement requires a great deal of caution and skill. It is very easy to introduce, for example, transverse forces into the stinger between the shaker and force transducer, which cause the force sensor to produce an erroneous signal, as discussed in Section 7.5. In Sections 13.12.2 and 13.12.3, some common checks and causes of errors will thus be discussed.

13.12.2 Checking the Shaker Attachment

The first thing which must be done, after attaching the shaker to the structure and choosing some initial settings for analysis, is to check that the stinger is not too flexible. This can be

Figure 13.20 Results of a simulation with noise on both the input and output signal of a 2DOF system excited by a colored force spectrum: (a) force PSD (solid) and extraneous input noise (dotted); (b) response PSD (solid) and extraneous output noise (dotted); (c) magnitude of FRFs of the true system (solid), pure random excitation (dotted), burst random (dash-dotted), and pseudorandom with time domain averaging (dashed); (e) coherence functions of pure random (dotted) and burst random (dash-dotted). For the pure random excitation a blocksize of 16384 samples was used, for burst random 2048 samples and 25% burst length. In (d) and (f), the FRF estimated by the pseudorandom excitation signal, with 1024 samples (+ sign), is shown together with the other estimates, in a frequency region around the first natural frequency (in (d)) and around the antiresonance (in (f)). Although all three excitation signals have a small bias error, the plots illustrate the improvement in bias error achieved by a periodic excitation signal and time domain averaging, in this case pseudorandom noise, although a periodic chirp would yield similar results. Although not easily seen, the coherence for burst random in (e) is significantly worse than for pure random due to the reduced signal-to-noise ratio with burst random signals.


done by checking that the force spectrum does not show too large a dynamic range (variation between its minima and maxima) in the frequency range of interest. How much the force spectrum may acceptably drop depends both on the transducer used and on the dynamic range of the analysis system. When the force spectrum drops too much, it will be detected through a loss of coherence, and this should be considered particularly serious if the H1 estimator is going to be used, as it will be biased in this case. When the optimal measurement settings have been set, as discussed earlier in this chapter, a check of the shaker attachment should be carried out. As discussed in Section 6.4.6, a driving point frequency response between force and acceleration (velocity, displacement) always displays an antiresonance between each pair of resonances. This is equivalent to the fact that the peaks of the imaginary part of the driving point FRF (real part in the case of mobility) will all be either positive or negative, but never both. If the driving point FRF does not have this property, it is an indication that something is wrong with the coupling between the force sensor and the structure, or that the accelerometer is located too far away from the force sensor to act as "one point," in cases where the accelerometer has to be mounted next to the force transducer. One should in such cases consider using an impedance head, see Section 7.6. The causes of these deviations can include

● the force transducer is incorrectly attached (too soft),
● the accelerometer is incorrectly attached, or it sits too far from the force transducer to be seen as sitting at the "same" position,
● the stingers are connected with transverse forces, or are unsuitably chosen, making the force transducer measure more than just the force in the desired direction.
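As a rough illustration (not from the original text) of the driving point sign check described above, one can inspect the sign of the imaginary part of a measured driving point accelerance FRF H at its resonance peaks; findpeaks requires the Signal Processing Toolbox in MATLAB or the signal package in Octave, and the peak threshold is an assumption.

% Rough driving point check: at each resonance peak of |H|, the imaginary part
% of an accelerance FRF should have the same sign (H assumed to be a complex vector).
[~, locs] = findpeaks(abs(H), 'MinPeakHeight', 0.1*max(abs(H)));
s = sign(imag(H(locs)));
if all(s == s(1))
    disp('Driving point FRF passes the sign check.');
else
    disp('Mixed signs of imag(H) at the resonances - check the attachment.');
end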

Another check which should preferably be made is a check of reciprocity, as discussed in Section 6.4.2. Thus, if the shaker is intended to excite the structure at a certain point, a frequency response between a force applied at a different point and an accelerometer at the future shaker location should first be measured. This requires either that the shaker is first mounted at this other point, or alternatively, if this is considered to be too much work, that a separate measurement with an impulse hammer exciting the other point is made. The FRF between this alternative force point and the accelerometer at the future force location is then measured and stored. After the shaker is installed in its final location, an accelerometer is mounted at the point just excited by the force, and the FRF between these two points is measured and stored. The two FRFs should be equal. Attaching a shaker correctly is so difficult that this procedure should always be followed to ensure accurate measurements.

13.12.3 Other Sources of Error

During shaker excitation, there is a potential error source which is very important to avoid and which is illustrated in Figure 13.21. This error arises if the shaker causes vibrations which propagate into the structure through a different path than through the stinger and force transducer. These vibrations make the measured linear system between the input force and the response inconsistent with the model we assume. The input forces to the structure via the wall, floor, frame, and suspension, as illustrated in Figure 13.21, are highly

Figure 13.21 Illustration of how forces correlated with the measured excitation signal can be transferred to the structure through the suspension and floor. The shaker's reaction force produces floor vibrations. These vibrations propagate through the frame suspending the structure and through the suspension springs. The structure is affected by forces F′ from the springs, which are highly correlated with the force F. This leads to a bias error in the frequency response estimate between force F and response R which does not affect the coherence function, making the error more elusive. One should therefore always check that such forces do not exist by, for example, measuring the response signals with the shaker detached.

correlated with the force propagating through the stinger, which we measure with the force transducer. Therefore, this type of error is not discovered by the coherence function, since the measured response signal is correlated with the measured input. Sometimes, this type of error can be detected by measuring the response signals first while the shaker is turned off, and then with it turned on but not yet attached to the structure. After that, a comparison between the measured response signals is made. If there is no difference (easiest to see in the spectra), then this is a good indication that no vibrations propagate into the structure via other paths than the desired one. See, for example, Ewins (2000) for more examples of situations like this.

13.13 Chapter Summary

This chapter has presented the fundamental theory and application of experimental frequency response estimation for single-input/single-output, linear systems. The two most common estimators for these systems are the H1 and H2 estimators. The H1 estimator, defined by

\hat{H}_1(f) = \frac{\hat{G}_{yx}(f)}{\hat{G}_{xx}(f)},    (13.44)

minimizes the bias due to extraneous noise present in the measured output signal. The H2 estimator, defined by

\hat{H}_2(f) = \frac{\hat{G}_{yy}(f)}{\hat{G}_{xy}(f)},    (13.45)

instead minimizes the bias due to extraneous noise present in the input signal. In case the input signal to the linear system is random noise, both these estimators are always biased, although
the bias error decreases with increased blocksize. Thus, with a large enough blocksize, the bias error can be made negligible. Also, a relatively large number of averages usually has to be used in order to make the bias error due to the extraneous noise small.

In addition to the FRF estimators, the coherence function is defined as the ratio of \hat{H}_1 to \hat{H}_2, or

\hat{\gamma}^2_{yx}(f) = \frac{\hat{H}_1(f)}{\hat{H}_2(f)} = \frac{|\hat{G}_{yx}(f)|^2}{\hat{G}_{xx}(f)\hat{G}_{yy}(f)},    (13.46)

where 0 ≤ \gamma^2_{yx} ≤ 1. The coherence function is interpreted as the part of the power of the output signal, y(t), which can be explained by a linear relationship with the input signal x(t). The main reason for a coherence deviating from unity is that there is contaminating noise, either on the input, the output, or on both signals. There are, however, also a number of other situations where the coherence can differ from unity, which were listed in Section 13.7.

We also discussed that the optimal settings for the FFT processing for estimates of frequency response functions, when the signals are pure random noise, are to use a half-sine window and 67% overlap, if Welch's method for spectral density estimation is used. The FRF estimates will always be biased, but the bias error can be made negligible by increasing the blocksize until the peaks at the natural frequencies do not increase with higher blocksize. The random error in frequency response estimates is always nonzero, but usually small when there is a limited amount of extraneous noise.

Two main techniques for measuring frequency response on mechanical systems were presented: (i) impact testing, where an impulse hammer excites the test structure, and (ii) shaker excitation, where a force sensor is attached, through a stinger, to a shaker. For impact testing, we showed that the noise in the input signal (force signal) can be almost eliminated using a force window. The H1 estimator is therefore usually the best choice for impact testing. Furthermore, if an exponential window is used to improve the signal-to-noise ratio, it increases the apparent damping in the measured FRF. The amount of added damping can, however, be calculated from the knowledge of the exponential factor of the exponential window and the natural frequency of the system, see Equation (13.43).

For shaker excitation, we have a range of excitation signals available. It was shown that the best excitation signals are the periodic signals, i.e., pseudorandom and periodic chirp. If these are not available in the measurement system, burst random can be used as an only slightly less efficient excitation signal in most cases (provided the noise in the sensors is reasonably low and the burst length is adjusted appropriately). All these excitation signals are self-windowing, i.e., they should be used without any time window in the FFT processing. We also discussed the possibility of averaging periodic excitation signals in the time domain instead of the more commonly used frequency domain averaging. This can potentially lead to removal of both input and output noise, although more averages than for frequency averaging should then be used.


The main advantage of the self-windowing signals is that the bias error is small already for smaller blocksizes than those which have to be used with pure random noise. This means that the measurement can be done much quicker with the same quality (bias error). Modern modal analysis algorithms allow relatively coarse frequency resolution, as they trade frequency resolution for spatial resolution.

13.14 Problems

Many of the following problems are supported by the accompanying ABRAVIBE toolbox for MATLAB/Octave and further examples which can be downloaded with the toolbox. If you have not already done so, please read Section 1.6. ABRAVIBE is completely free and can be downloaded from www.abravibe.com, together with example files and other material complementing this book, including a solutions manual for all book problems.

Problem 13.1 Which estimator (H1 or H2) should you use in an impact test? Explain why it is better than the alternative!

Problem 13.2 Write a MATLAB/Octave script which creates a mechanical SDOF system with a natural frequency of 12.5 Hz and 2% damping, using appropriate commands from the accompanying toolbox. Set the sampling frequency to 200 Hz and use 100 (frequency domain) averages. Excite the system with random noise and determine which blocksize you should use as a minimum to eliminate the bias error by running the script with larger and larger blocksize until the peaks do not become higher with a larger blocksize. Plot the magnitudes of the FRFs (with logarithmic y-axis) and coherence.

Problem 13.3 Make a new script using part of the script from Problem 13.2 and replace the excitation signal by burst random noise. First, adjust the burst length using a blocksize of 1024 samples until the measurement quality is good. Then check the maximum burst lengths (in % of the blocksize) you can have with blocksizes of 2048 and 4096, with good coherence.

Problem 13.4 Define a 2DOF system with natural frequencies of 12.5 and 25 Hz, and 2% damping. Use parts of the script in Problem 13.2 and add output noise. Rerun the script with different amounts of noise level. Check that you understand what happens at the natural frequencies as well as at the antiresonance.

Problem 13.5 Use the script from Problem 13.4 and replace the excitation signal with burst random, using the burst length you found in the previous problem. Add the same amount of noise as in Problem 13.4. Run the script with different blocksizes and note what happens with the random error. Explain why!

Problem 13.6 Using parts of the script from Problem 13.2 and the minimum blocksize obtained in that problem, compare the estimated FRF using a half-sine window with 67% overlap with an FRF estimated using a Hanning window and 50% overlap. How much larger is the bias error with the Hanning window?


References

Antoni J and Schoukens J 2007 A comprehensive study of the bias and variance of frequency-response-function measurements: optimal window selection and overlapping strategies. Automatica 43(10), 1723–1736.
Antoni J and Schoukens J 2009 Optimal settings for measuring frequency response functions with weighted overlapped segment averaging. IEEE Transactions on Instrumentation and Measurement 58(9), 3276–3287.
Bendat J and Piersol A 1993 Engineering Applications of Correlation and Spectral Analysis, 2nd edn. Wiley Interscience.
Bendat J and Piersol AG 2010 Random Data: Analysis and Measurement Procedures, 4th edn. Wiley Interscience.
Brandt A and Brincker R 2011 Impact excitation processing for improved frequency response quality. Structural Dynamics, Vol. 3, pp. 89–95. Springer, New York.
Ewins DJ 2000 Modal Testing: Theory, Practice and Application, 2nd edn. Research Studies Press, Baldock, Hertfordshire, England.
Fladung W 1994 The development and implementation of multiple reference impact testing. Master's thesis, University of Cincinnati.
Fladung W and Rost R 1997 Application and correction of the exponential window for frequency response functions. Mechanical Systems and Signal Processing 11(1), 23–36.
Fladung W, Zucker A, Phillips A and Allemang R 1999 Using cyclic averaging with impact testing. Proceedings of 17th International Modal Analysis Conference, Kissimmee, FL. Society for Experimental Mechanics.
Goyder H 1984 Foolproof methods for frequency response measurements. Proceedings of 2nd International Conference on Recent Advances in Structural Dynamics, Southampton.
Halvorsen WG and Brown DL 1977 Impulse technique for structural frequency-response testing. Sound and Vibration 11(11), 8–21.
Mitchell L and Cobb R 1987 An unbiased frequency response function estimator. Proceedings of 5th International Modal Analysis Conference, London, UK, pp. 364–373.
Phillips AW and Allemang RJ 2003 An overview of MIMO-FRF excitation/averaging/processing techniques. Journal of Sound and Vibration 262(3), 651–675.
Rocklin GT, Crowley J and Vold H 1985 A comparison of H1, H2, and Hv frequency response functions. Proceedings of 3rd International Modal Analysis Conference, Orlando, FL.
Schoukens J, Rolain Y and Pintelon R 2006 Analysis of windowing/leakage effects in frequency response function measurements. Automatica 42(1), 27–38.
Wicks A and Vold H 1986 The Hs frequency response estimator. Proceedings of 4th International Modal Analysis Conference, Los Angeles, CA.


14 Multiple-Input Frequency Response Measurement

In this chapter, we will introduce identification of linear systems with multiple input signals and multiple output signals. Such systems are found in many applications in the fields of structural dynamics and acoustics. For example, the noise in different parts of the interior of a car (outputs) is made up of sound from the engine, the road, the wind-induced sound, etc. (inputs). In experimental modal analysis, it is common to use several shakers to excite the test structure, and simultaneously to measure a large number of response signals. In these cases, it is inappropriate to use the SISO estimation methods we discussed in Chapter 13; instead, frequency response functions must be estimated using a multiple-input/multiple-output (MIMO) model. As we will show in this chapter, the only results necessary for calculating all input/output relations for linear systems are the autospectra of all signals (input and output) and the cross-spectra between each input and all other signals (including other input signals). We will use the multichannel spectral analysis concepts introduced in Section 10.8 for the discussion of estimators for MIMO FRF models. Success in understanding MIMO FRF estimation relies on correctly understanding the different effects of excitation signals, signal-to-noise ratios, and extraneous noise on the input or output signals, or on both. We will therefore make frequent use of simulations in this chapter to illustrate each effect without the other effects present.

14.1 Multiple-Input Systems

MIMO systems are defined such that each output is caused by a linear combination of all the inputs, and there are no causal relations between any of the outputs, see Figure 14.1 where a general MIMO system is depicted. If there are causal relations between any of the outputs, then one or more output signals need to be redefined as input signals. Thus, there is a linear system between each input and each output, and there can, of course, be some uncorrelated, extraneous noise added to either the inputs or outputs, or both, as for the SISO system. The MIMO system can therefore be seen as a number of parallel multiple-input/single-output (MISO) systems. The fact that we have several outputs is not an important issue from a conceptual standpoint, since each output is independent of all the other outputs. The problem, instead, lies in decoupling the individual contributions to a particular output from each of the inputs. We can thus

Figure 14.1 General Multiple-Input-Multiple-Output (MIMO) system. Each output signal is caused by a linear combination of the input signals, but not by any of the other output signals.

concentrate on looking at the MISO system, at least from a conceptual point of view. It may be more computationally efficient to compute the entire MIMO system at once, but the understanding of the problem can be visualized by a MISO system.

14.1.1 The 2-Input/1-Output System

Before we describe the general MIMO system in depth, we will first consider a 2-input/1-output system, as depicted in Figure 14.2, in order to visualize the concept, because the matrix algebra we use in the general case becomes rather abstract. In Figure 14.2, the output is caused by a linear combination of the two inputs, x1(t) and x2(t), which can have some mutual correlation, but must not be completely correlated. This is no limitation since, if the inputs are totally correlated, it is actually sufficient to use a model with only one input, as the other input can then be created by letting the first input pass through a linear system. Thus, the model as illustrated in Figure 14.2 is invalid in that case, since we really have a single-input system. Some additional contaminating noise, n(t), is added to the sum of the signals coming from the two inputs, u(t), to produce the measured output signal, y(t). In the frequency domain, the output of the system then is

Y(f) = X_1 H_{y1} + X_2 H_{y2} + N.    (14.1)

We obtain the least squares solution to Equation (14.1) by multiplying it by the complex conjugates X_1^* and X_2^*, respectively, and taking expected values (in a loose sense; what we do is, of course, to apply the spectral estimators from Section 10.3), which gives us the system of equations:

\begin{cases} E[Y X_1^*] = G_{y1} = E[X_1 X_1^*] H_{y1} + E[X_2 X_1^*] H_{y2} + E[N X_1^*] \\ E[Y X_2^*] = G_{y2} = E[X_1 X_2^*] H_{y1} + E[X_2 X_2^*] H_{y2} + E[N X_2^*] \end{cases}    (14.2)

Figure 14.2 2-input/1-output system with noise on the output.

If we assume the noise n(t) to be uncorrelated with x1(t) and x2(t), this yields

\begin{cases} G_{y1} = G_{11} H_{y1} + G_{21} H_{y2} \\ G_{y2} = G_{12} H_{y1} + G_{22} H_{y2} \end{cases}    (14.3)

since the cross-spectra Gn,x1 and Gn,x2 will approach zero as we take the expected values (i.e., we take many averages). To solve the system of Equation (14.3), we could elaborate and find a way to diagonalize it to produce a system, where each of the Hy1 and Hy2 is separated into one row. This is usually called Gaussian elimination and will indeed be used in Section 14.2, as it was a historic way of solving MIMO systems. Today, however, we have good software tools for linear algebra through MATLAB/Octave and similar software. A direct matrix formulation and solution is therefore of greater interest, and we will turn to such a formulation to find the solution to Equation (14.3).

14.1.2 The 2-Input/1-Output System – Matrix Notation

Equation (14.1) can be reformulated in matrix notation as

Y = \lfloor H \rfloor \{X\} + \{N\}    (14.4)

where Y is a scalar because we have only one output (in general MIMO cases, it will be a column vector, see Equation (14.11)). Furthermore, with this formulation, in the case of two inputs and one output, \lfloor H \rfloor in Equation (14.4) is a row vector (1 × 2), equal to

\lfloor H \rfloor = \lfloor H_{y1} \;\; H_{y2} \rfloor    (14.5)

and \{X\} is a column vector (2 × 1) equal to

\{X\} = \begin{Bmatrix} X_1 \\ X_2 \end{Bmatrix}    (14.6)

With these definitions, Equation (14.4) is equivalent to Equation (14.1). The least squares solution to Equation (14.4) in matrix form can now be formulated by multiplying Equation (14.4) by the matrix equivalent of the complex conjugate of X, namely the Hermitian transpose, see Appendix D. Instead of taking the complex conjugate alone, the Hermitian transpose, as the name implies, transposes and takes the complex conjugate. We then get

Y \lfloor X^* \rfloor = \lfloor H \rfloor \{X\} \lfloor X^* \rfloor + \{N\} \lfloor X^* \rfloor.    (14.7)

Averaging each term separately then yields the result

\lfloor G_{yx} \rfloor = \lfloor H \rfloor [G_{xx}] + \lfloor G_{nx} \rfloor.    (14.8)

The cross-spectrum matrix \lfloor G_{nx} \rfloor in Equation (14.8) will, of course, approach zero in the averaging process. We therefore end up with the matrix equation

\lfloor G_{yx} \rfloor = \lfloor H \rfloor [G_{xx}]    (14.9)

which we then have to solve. Assuming the input cross-spectral matrix [G_{xx}] can be inverted, the solution is the MISO H1 estimator

\lfloor \hat{H}_1 \rfloor = \lfloor \hat{G}_{yx} \rfloor [\hat{G}_{xx}]^{-1}    (14.10)

where we put hats on the variables to emphasize that they are all estimates (calculated from a measurement).


There is a potential issue here: how do we know that we can invert [G_{xx}]? From linear algebra, we know (see Appendix D) that a matrix is invertible if it has full rank, i.e., if all rows and columns are independent. This is equivalent to saying that its determinant is nonzero. This is obtained in our case if [G_{xx}] is formed by averaging together several independent intermediate results X ⋅ X^*. Two things must then be fulfilled, namely

● each element X_m in the averaging process must be independent, and
● at least two (in this case with two inputs) independent averages must be made when forming the matrix [G_{xx}].

The first point is fulfilled if we use two independent sources driving the shakers, provided the shakers are strong enough to provide forces proportional to the voltage signals by which they are fed, see Section 14.5. The second point is hardly a problem in practice, as we will always need to take more averages than we have inputs to the system in order to make sure the cross-spectrum matrix Gnx approaches zero.

14.1.3 The H1 Estimator for MIMO

The MISO system (of which the 2-input/1-output system was a small example) is easily extended into the general multiple-input/multiple-output, MIMO, system. For the MIMO system, any output signal is comprised of the contributions of a number of input signals. In structural dynamics (experimental modal analysis), for example, multiple-input models are used for the measurement of frequency response functions between input forces and response accelerations, when several dynamic shakers are mounted on the structure. Multiple-input models are also frequently used in noise source identification in acoustics, where dominant sources of disturbing noise are identified, see Chapter 15. The general multiple-input/multiple-output (MIMO) system can be described as a general system with a number of input signals xq, q = 1, 2, …, Q, and a number of output signals yp, p = 1, 2, …, P, where we drop the time notation for simplicity. Each output signal is assumed to be caused only by the input signals. If there is a causal relationship between the output signals, then the model has to be redefined. A general MIMO system is depicted in Figure 14.1. If the system is linear and time invariant, and we assume noise is contaminating only the measured output signals, we can define the system frequency response matrix [H(f)] of the MIMO system so that

\{Y\} = [H]\{X\} + \{N\}    (14.11)

In Equation (14.11), {Y(f)} is a column vector with dimension (P, 1), {X(f)} is a column vector with dimension (Q, 1), and [H(f)] is a matrix with dimension (P, Q). Thus, an individual element H_{pq}(f) is the frequency response between input xq and output yp. Row number p in [H] contains the frequency responses that sum up into output signal yp, and column number q in [H] contains the frequency responses for input signal xq. By multiplying Equation (14.11) by {X}^H, the Hermitian transpose of {X}, and taking expected values of each term, we obtain

[G_{yx}] = [H][G_{xx}] + [G_{nx}]    (14.12)

which is a least squares solution.


If we assume that all the noise sources in {N} are independent of the measured input signals, the cross-spectrum matrix [G_{nx}] in Equation (14.12) will be zero. Postmultiplying both sides of the equation by the inverse of [G_{xx}] gives us the MIMO H1 estimator as

[\hat{H}_1] = [\hat{G}_{yx}][\hat{G}_{xx}]^{-1}.    (14.13)

Note that this equation has to be solved for each frequency of interest. The solution according to Equation (14.13) is possible if the matrix [\hat{G}_{xx}] is not singular, i.e., its determinant does not equal zero at any frequency. In theory, this means that no ordinary coherence between any two input signals can be equal to unity. In practice, as the ordinary coherence between two inputs approaches unity, there will be numerical problems with the matrix inversion. As matrix inversion is rather inefficient computationally, in general, the system in Equation (14.13) is better solved by Gaussian elimination. We will discuss solving Equation (14.13) in Section 14.1.5, but first we will define a suitable coherence function for MIMO estimation.

Example 14.1.1 Write a MATLAB/Octave script which computes a MIMO frequency response matrix based on two input signals and two output signals. The MIMO FRF matrix will be stored in a 3D matrix similar to the spectral matrices described in Section 10.8. We assume we have already computed the input cross-spectral matrix Gxx in 3D form, and the input–output cross-spectral matrix in the 3D matrix Gyx, as described in Section 10.8.2. The first thing we should note is that matrix inversion, using the MATLAB/Octave command inv, is not recommended. Instead, we will use Gaussian elimination, which in MATLAB/Octave, in this case where the matrix to invert is on the right-hand side of the matrix multiplication, is simply accomplished by the slash, "/", operator, see Appendix D. The following code then produces the H1 estimator in a variable with the same name.

% First find sizes
[Nf,D,R]=size(Gyx);
% Preallocate H1
H1=zeros(Nf,D,R);
% Loop through frequencies
for f=1:Nf
    Gxxf=squeeze(Gxx(f,:,:));
    Gyxf=squeeze(Gyx(f,:,:));
    H1(f,:,:)=Gyxf/Gxxf;
end

End of example.

14.1.4 Multiple Coherence

If we look at the system in Figure 14.2, we realize that for multiple-input systems, the ordinary coherence functions calculated between each of the inputs and the output will generally be less than unity. If, for example, the two input signals are uncorrelated and have approximately the same spectral densities, the two frequency response functions are

of the same magnitude, and the contaminating noise is zero, each of the two ordinary coherence functions will be approximately 0.5. Thus, the ordinary coherence function is not very useful for multiple-input systems. For MIMO systems, we define the multiple coherence function \gamma^2_{y_p:x} for output signal yp in a similar way to how the ordinary coherence function was defined for the single-input system, namely by

\gamma^2_{y_p:x}(f) = \frac{G_{uu}(f)}{G_{y_p y_p}(f)}    (14.14)

where u is the coherent output from the linear systems as in Figure 14.2. The notation y_p : x in the index is read "yp given (all inputs) x." As for the single-input case, the estimation of the optimum H-systems maximizes the power in yp due to the input signals and puts the remaining power in the extraneous noise signal np(t). Thus, the multiple coherence function is interpreted, in analogy with the ordinary coherence for the single-input case, as the part of the output power spectrum G_{y_p y_p} which is linearly dependent on any of the inputs x. As for the ordinary coherence, evidently

0 \le \gamma^2_{y_p:x}(f) \le 1    (14.15)

where the multiple coherence function \gamma^2_{y_p:x} = 1 if and only if there is no output noise, i.e., if np = 0. For MIMO systems, there will obviously be one multiple coherence function for each output of the system. In order to find an expression which we can use in practice to estimate the multiple coherence function for output signal yp, we formulate the coherent output spectrum, U(f), from Figure 14.2, taking row p from the full matrix [H], denoted \lfloor H_p \rfloor, by

U = \lfloor H_p \rfloor \{X\}    (14.16)

The Hermitian transpose of U, or, since it is a scalar, the complex conjugate, is

U^H = \lfloor X^* \rfloor \lfloor H_p \rfloor^H    (14.17)

Multiplying these two last equations together, and taking the expected value, we obtain the expression for the coherent output power spectrum Guu in Equation (14.18):

G_{uu} = \lfloor H_p \rfloor [G_{xx}] \lfloor H_p \rfloor^H.    (14.18)

Putting the result in Equation (14.18) into Equation (14.14), we obtain the equation for the multiple coherence estimator

\hat{\gamma}^2_{y_p:x} = \frac{\lfloor \hat{H}_p \rfloor [\hat{G}_{xx}] \lfloor \hat{H}_p \rfloor^H}{\hat{G}_{y_p y_p}}.    (14.19)

We can also use the expression for the H1 estimator in Equation (14.13) in Equation (14.19), and after some simplification, we obtain

\hat{\gamma}^2_{y_p:x} = \frac{\lfloor \hat{G}_{y_p,x} \rfloor \left([\hat{G}_{xx}]^{-1}\right)^H \lfloor \hat{G}_{y_p,x} \rfloor^H}{\hat{G}_{y_p y_p}}    (14.20)

from which it is obvious that the multiple coherence, like all other results for MIMO systems, as we have noted above, can be computed from averaged auto- and cross-spectra. The estimator in Equation (14.19) is somewhat faster to compute if the frequency response matrix [\hat{H}] already exists, because it does not involve any matrix inversion. Of course, if the multiple coherence is computed simultaneously with the frequency response, the inverse of the input cross-spectral matrix could be stored as an intermediate result, and then either estimator could be used with approximately the same computation effort.

Example 14.1.2 Write a MATLAB/Octave script to produce the multiple coherence, using the data from Example 14.1.1. We will use the first definition in Equation (14.19), and the following code produces the multiple coherence in columns in the variable Cm.

[Nf,D,R]=size(H);
Cm=zeros(Nf,D);
for d = 1:D                % Loop responses
    for f = 1:Nf           % Loop frequencies
        Gxxf=squeeze(Gxx(f,:,:));
        Hf=squeeze(H(f,d,:));
        Hf=Hf(:).';        % Force to row
        Cm(f,d)=real((Hf*Gxxf*Hf')/Gyy(f,d));
    end
end

End of example.

14.1.5 Computation Considerations for Multiple-Input System

All the discussions above assume that the input signals are not completely coherent. This in principle means that the signals used to average together the input- and output autoand cross-spectral matrices have to be formed by independent averages. This leads to the conclusion that the data must, in general, not be periodic, since a sine wave is completely coherent with any other sine wave of the same frequency. We will present two exceptions to this later in this chapter: (i) the so-called periodic random, see Section 14.4.3, and (ii) the multiphase stepped-sine procedure, see Section 14.4.4. For the general case, however, the input signals must not be periodic. Transient signals will in general not be valid either, except for the case where the transients consist of (uncorrelated) burst random signals. A direct effect of the procedure of conditioning the input signals as described above is that the number of averages used for calculation of the spectral matrices has to be larger than the number of inputs. This fact is analogous with the single-input case, where the coherence function is not defined for the first average, as it would equal unity, even with no actual correlation between the input and output signals. Similarly, with several inputs, enough statistically independent input records have to be combined into the auto- and cross-spectra, in order for all the ordinary coherence functions between the inputs to be well defined.


This requirement is of course also necessary when using matrix inversion to solve a MIMO system as in Equation (14.13). In linear algebra terms, the rank of the input cross-spectral matrix [G_{xx}] equals unity after one average, two after two averages, and so on, until the number of averages is larger than or equal to the number of inputs, after which the rank of the matrix equals the number of inputs (which is also the size of the matrix [G_{xx}]).
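A small numeric sketch (not from the original text) of this rank buildup at a single frequency line; the two-input example and variable names are assumptions.

% Rank of the input cross-spectral matrix at one frequency line builds up with
% the number of independent averages (2-input example, assumed values).
Q = 2;                                   % number of inputs
Gxx = zeros(Q,Q);
for m = 1:3
    X = randn(Q,1) + 1i*randn(Q,1);      % independent input spectra, one average
    Gxx = Gxx + X*X';                    % each term X*X' has rank one
    fprintf('After %d average(s): rank(Gxx) = %d\n', m, rank(Gxx));
end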

14.1.6 The Hv Estimator

When estimating MIMO systems on mechanical systems, the H1 estimator is often not optimal. In this application, the MIMO system is always defined as response over force, and the H1 estimator will optimize for contaminating noise in the acceleration (output) signals. Around the natural frequencies of the structure, however, the acceleration is large, and thus the extraneous noise is small relative to the acceleration signal. At the same frequencies, the force signals are often low due to limitations in the excitation system (shaker, amplifier, and stinger), which makes it difficult to excite the structure. Thus, the inputs of the MIMO system, the forces, are more prone to be contaminated by extraneous noise than the outputs, which calls for an estimator equivalent to the H2 estimator for the single-input case. This proves tricky, however, as it can easily be shown that an H2 estimator for multiple-input systems can be defined only for the special case where the number of response signals equals the number of forces, which is, of course, not generally the case. Because the H2 estimator for MIMO systems is of limited interest, we leave the proof of this fact to Problem 14.2.

The Hv estimator was developed in the early 1980s (Mitchell 1982; Rocklin et al. 1985) to yield better results in general cases with contaminating noise on both input and output. The Hv estimator assumes that all the extraneous noise sources are incoherent with the true input and output signals. Then, the linear system is defined by the matrix equation

\{N\} = -[H]\{X\} + \{Y\}    (14.21)

where {N} is an error vector consisting of all extraneous noise. Next, we form the error autospectrum matrix [G_{nn}] by multiplying Equation (14.21) by its complex conjugate and taking the expected value. This results in

[G_{nn}] = [G_{yy}] + [H][G_{xx}][H]^H - [H][G_{xy}] - [G_{yx}][H]^H.    (14.22)

Equation (14.22) can be shown to be equal to the composed matrix

[G_{nn}] = \begin{bmatrix} I & -H \end{bmatrix} \begin{bmatrix} G_{yy} & G_{yx} \\ G_{xy} & G_{xx} \end{bmatrix} \begin{bmatrix} I \\ -H^H \end{bmatrix}    (14.23)

Equation (14.23) is decomposed by the eigenvalue decomposition

[G_{nn}] = [U]^H [\Lambda] [U]    (14.24)

where the matrix [U] is the eigenvector matrix, and [\Lambda] is a diagonal matrix containing the eigenvalues on the diagonal. It can be shown, with some advanced linear algebra, that the solution minimizing the trace of Equation (14.24), which is a so-called Rayleigh quotient, is found by choosing the eigenvector corresponding to the lowest eigenvalue in [\Lambda]. This is the Hv estimator. For the single-input/single-output case, the Hv estimator reduces to the geometric mean of the H1 and H2 estimators:

\hat{H}_v = \frac{G_{yx}}{|G_{yx}|} \sqrt{\frac{G_{yy}}{G_{xx}}} = \sqrt{\hat{H}_1 \hat{H}_2}.    (14.25)
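As a quick numeric illustration (not from the original text), the SISO form in Equation (14.25) can be computed directly from averaged spectra; Gxx, Gyy, and Gyx are assumed to be vectors of averaged auto- and cross-spectra.

% SISO Hv: magnitude from the ratio Gyy/Gxx, phase taken from Gyx (Eq. 14.25)
H1 = Gyx./Gxx;                          % H1 estimator
H2 = Gyy./conj(Gyx);                    % H2 estimator
Hv = (Gyx./abs(Gyx)).*sqrt(Gyy./Gxx);   % geometric mean of H1 and H2, with the phase
                                        % of Gyx (avoids the branch ambiguity of taking
                                        % a complex square root of H1.*H2 directly)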

14.1.7 Other MIMO FRF Estimators

It can be shown (White et al. 2006) that the Hv estimator in Section 14.1.6 is a special case of a general maximum likelihood (ML) estimator, for the case where the input and output noise at each frequency are equal. It can also be shown that the minimization problem for noise on both input and output is only soluble for general random inputs if at least the ratio of the two noises is known, which is rarely the case in our applications. The Hv estimator is thus not a very good estimator for our purposes, since on mechanical systems with high dynamic range we can assume that at the critical frequencies (typically the natural frequencies and, perhaps to some extent, the antiresonances) we have dominant extraneous noise in either of the two signals (input or output), but not in both simultaneously. It therefore seems that, for the general case with random inputs, we search in vain for a better estimator than the H1 estimator.

In addition to the H1 and Hv estimators, which are the estimators commonly available in commercial software for noise and vibration analysis, several other estimators exist, which can sometimes be alternatives if dedicated software is developed. The reason to use such an alternative estimator has to be that we want to minimize the bias error in the FRF due to noise on the input signals in addition to the noise on the output signals, because otherwise the H1 estimator is superior. We will therefore give references to some alternatives that can be used in such cases. One alternative is the Hc estimator described in Section 13.2.3, which can also be defined for MIMO situations; the price, however, is that one extra measurement channel per force needs to be allocated, which makes it less attractive. A second approach is the Hα estimator suggested in Antoni et al. (2004), which makes use of the theory of cyclostationarity to find an estimator that is asymptotically unbiased even in the case of input noise. A third alternative is to use time domain averaging and periodic excitation signals. This alternative will be demonstrated in Section 14.5.3.

14.2 Conditioned Input Signals

Much of the theory of estimating MIMO frequency response functions from random data was developed in early editions of Bendat and Piersol (2000). In those days, matrix notation was not commonly used in this field, so instead Bendat and Piersol used, so to speak, a manual Gaussian elimination. As part of this development, some concepts which are available in many commercial software packages for noise and vibration analysis, such as conditioned signals and partial coherence, were established. We will thus briefly discuss the approach used to develop these concepts. Although we will explain these ideas similarly to


the original presentations, they can conveniently be formulated in modern matrix terminology as well (Smallwood 1995). We will present this technique by using the 2-input/1-output system in Figure 14.2. When there is correlation between the inputs x1(t) and x2(t), it is possible and convenient to define new, uncorrelated, so-called conditioned signals. These "fictive" signals are obtained by subtracting the dependence on each preceding channel from each input signal, starting with the second input signal. Thus, the conditioned signals have zero correlation with each other. The concept of conditioned signals is an important tool to interpret and understand multiple-input systems. We term the linear systems relating one conditioned signal to another conditioned signal, or to the measured output, L-systems (Bendat and Piersol 1993, 2000). We will now deduce the relationships for conditioned signals by first studying the system in Figure 14.3, where we look at the dependence between signals x1 and x2. In Figure 14.3, we see that if we treat the two input signals as an input and an output signal to the system L21, we get two new signals. The signal (spectrum) X2:1 is the part of X2 that is linearly dependent on X1, and the "remaining" signal, the conditioned input signal X2⋅1, where the index is read "2 with x1 removed," is the part of X2 which is uncorrelated with X1. The relationship between these signals can easily be found by the simple theory for linear single-input/single-output systems discussed in Chapter 13. From Figure 14.3, we thus have

X2 = X1 L21 + X2⋅1.    (14.26)

We also have, using the H1 estimator of Equation (13.6), that

L21 = G21/G11    (14.27)

and thus

X2⋅1 = X2 − (G21/G11) X1.    (14.28)

The relationship for the conditioned input autospectrum G22⋅1 of the conditioned signal X2⋅1 can be found by multiplying Equation (14.28) by its complex conjugate and taking the expected value:

G22⋅1 = E[X2⋅1 X2⋅1*] = E[(X2 − (G21/G11)X1)(X2* − (G12/G11)X1*)] = ⋯ = G22 − |G12|²/G11.    (14.29)

Combining Equation (14.29) with the definition of the ordinary coherence in Equation (13.22), we obtain the simplified formulation for the conditioned input autospectrum in Equation (14.30):

G22⋅1 = (1 − γ²21) G22.    (14.30)


Figure 14.3 L-system with conditioned signal x2⋅1, where the dependence of signal x2 on signal x1 is removed.


We could also have obtained this equation directly, using Equation (13.29), using the fact that our conditioned signal x2⋅1 is analogous to the extraneous noise in the "standard" single-input case. It should be noted that we simplify the notation by using only the number in indexes, when the numbers stand for input signals. It is important to note that the terms on the right-hand side of Equation (14.30) only contain functions that we already know how to calculate from the measured signals x1 and x2, using auto- and cross-spectra of the two signals. It is also important to realize that the two signals x1 and x2⋅1 are uncorrelated, that is, the ordinary coherence function between these two signals is by definition equal to zero.

Using the conditioned input signals, we can now transform the 2-input/1-output system in Figure 14.2 with possibly correlated inputs into a new two-input/one-output system with uncorrelated inputs, as depicted in Figure 14.4. The frequency response functions in this model are also referred to as L-systems. The benefit of this new system is that the two unknown L-systems are very easily obtained using Equation (13.6), treating the two-input system as two separate single-input systems, because the input signals are now uncorrelated. Thus, we find that

Y = Ly1 X1 + Ly2 X2⋅1 + N    (14.31)

where

Ly1 = Gy1/G11    (14.32)

and

Ly2 = Gy2⋅1/G22⋅1.    (14.33)

In Equation (14.33), we have used the conditioned cross-spectrum Gy2⋅1, which is found by multiplying Equation (14.31) by X2⋅1* and taking the expected value, which by using Equation (14.28) above yields

Gy2⋅1 = E[Y X2⋅1*] = E[Y (X2* − L21* X1*)] = E[Y X2*] − L21* E[Y X1*].    (14.34)

Thus, we have

Gy2⋅1 = Gy2 − (G12/G11) Gy1    (14.35)

which again is a formulation where only "standard" auto- and cross-spectra are needed.
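As a minimal illustration, this 2-input conditioning can be carried out directly on estimated spectra. The following hypothetical MATLAB/Octave sketch assumes that the auto- and cross-spectra G11, G22, G21, Gy1, and Gy2 (column vectors with one value per frequency) have already been computed; the variable names are illustrative only.

% Hypothetical sketch of the 2-input conditioning, Equations (14.27)-(14.35).
G12  = conj(G21);                 % cross-spectrum with reversed indices
L21  = G21./G11;                  % Equation (14.27)
G221 = G22 - abs(G12).^2./G11;    % conditioned input autospectrum, Eq. (14.29)
Gy21 = Gy2 - (G12./G11).*Gy1;     % conditioned cross-spectrum, Eq. (14.35)
Ly1  = Gy1./G11;                  % Equation (14.32)
Ly2  = Gy21./G221;                % Equation (14.33)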

Figure 14.4 2-input/1-output system with uncorrelated inputs.


14.2.1 Conditioned Output Signals

To find relations for the multiple coherence and partial coherence functions below, we will also need a set of conditioned output signals. These are derived by using systems as depicted in Figure 14.5. In Figure 14.5, we have put the part of the output signal, y, which is not dependent on the input x1, into the new conditioned output signal yy⋅1. The conditioned output power spectrum of yy⋅1, denoted by Gyy⋅1, is then obtained by the following equation, which follows straight from Equation (13.29):

Gyy⋅1 = (1 − γ²y1) Gyy    (14.36)

where γ²y1 is the ordinary coherence between input signal x1 and the output y.

14.2.2 Partial Coherence

Similarly to Figure 14.5, we can form an equivalent system for the conditioned input signal x2⋅1 and the conditioned output signal yy⋅1, as depicted in Figure 14.6. We use the factorial symbol, !, to denote that the dependence on both inputs x1 and x2 is removed. This corresponds to the standard use of this symbol, for example, 4! = 4 ⋅ 3 ⋅ 2 ⋅ 1. To obtain the relationship for the output signal of the linear system in Figure 14.6, we first note that this output signal is analogous with the coherent output spectrum in Equation (13.28). This gives us

Gu2u2 = γ²y2⋅1 Gyy⋅1.    (14.37)

From the definition of ordinary coherence, we obtain

γ²y2⋅1 = |Gy2⋅1|² / (G22⋅1 Gyy⋅1).    (14.38)

Figure 14.5 Conditioned input–output system Ly1 obtained by looking at input and output signals x1 and y.

Figure 14.6 Conditioned input and output signals for the second (conditioned) input, and for the output with the dependence on signal x1 removed.


Figure 14.7 2-input/1-output system equivalent to the system in Figure 14.2, but using the different conditioned signals.


From these two relationships, we obtain the final expression for the output of the linear system Ly2:

Gu2u2 = |Gy2⋅1|² / G22⋅1 = γ²y2⋅1 Gyy⋅1    (14.39)

where the new function γ²y2⋅1 is called the partial coherence of output yy⋅1 with conditioned input x2⋅1. Both functions γ²y2⋅1 and Gyy⋅1 can be computed from the auto- and cross-spectral functions by the equations developed above. Similarly, the partial coherence between any two conditioned signals can be formed. The partial coherence, furthermore, is analogous to the ordinary coherence for the two signals, but is referred to as partial when the two signals are conditioned. Note that in this context also x1, although unchanged, is a conditioned signal. In many physical cases, the direct interpretation of the partial coherence is difficult. In Section 15.3, on noise source identification, we will discuss this in more detail.

Using the different systems described above, we can now transform our 2-input/1-output system with correlated inputs into the equivalent in Figure 14.7. Figure 14.7 shows the principle behind the transformation of the multiple-input (H-) system with correlated inputs into the equivalent (L-) system with uncorrelated inputs and conditioned outputs. When the correlation between the inputs is removed, the L-systems are given by the same equations as for the single-input case. This can easily be extended to more inputs. In Section 14.2.6, we will extend this to an arbitrary number of input signals.
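Continuing the hypothetical variables from the sketch in the previous section, and assuming the output autospectrum Gyy has also been estimated, the conditioned output spectrum and the partial coherence can be computed as follows (a minimal sketch, not the ABRAVIBE implementation).

% Hypothetical sketch of Equations (14.36)-(14.39), reusing the variables from
% the conditioning sketch above and assuming Gyy has been estimated.
gam2_y1  = abs(Gy1).^2./(G11.*Gyy);      % ordinary coherence between x1 and y
Gyy1     = (1 - gam2_y1).*Gyy;           % conditioned output spectrum, Eq. (14.36)
gam2_y21 = abs(Gy21).^2./(G221.*Gyy1);   % partial coherence, Eq. (14.38)
Gu2u2    = gam2_y21.*Gyy1;               % output power of Ly2, Eqs. (14.37)/(14.39)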

14.2.3 Ordering Signals Prior to Conditioning

The order in which the original, correlated signals are conditioned plays an important role in the interpretation in many applications. For general multiple-input systems with more than two inputs, Bendat and Piersol (1993) recommend that the inputs be ordered in one of the following ways:

1. according to the physical causal relationships (if known), or
2. so that the input with the highest ordinary coherence with the output is placed first, and the remaining inputs then follow in order of descending ordinary coherence.

A problem in ordering the inputs according to item 2, when the causal relationships are not known, is that the highest coherence value is often found for different input signals at different frequencies. In some cases, it may therefore be necessary to make the conditioning and subsequent analysis using several different orders among the inputs, and trying to


interpret the results accordingly. This could be done for only some frequencies in order to reduce the amount of computations.

14.2.4 Partial Coherent Output Power Spectra

Using the conditioned system in Figure 14.7, it is also possible to define the partial coherent output power spectra due to the conditioned inputs x1 and x2⋅1 as

Gy:1 = γ²y1 Gyy    (14.40)

and

Gy:2⋅1 = γ²y2⋅1 Gyy⋅1    (14.41)

which each tell how much of the power in the output power spectrum Gyy relates linearly to each conditioned input. These spectra were often used in noise source identification before the methods discussed in Chapter 15 were developed, but are usually replaced today with the latter methods.
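In the hypothetical notation of the earlier sketches, these two spectra follow directly.

% Hypothetical sketch of Equations (14.40) and (14.41), reusing the variables
% from the partial coherence sketch in Section 14.2.2.
Gy_1  = gam2_y1.*Gyy;      % partial coherent output power due to x1, Eq. (14.40)
Gy_21 = gam2_y21.*Gyy1;    % partial coherent output power due to x2.1, Eq. (14.41)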

14.2.5 Backtracking the H-Systems

After having calculated the conditioned input and output spectra and the L-systems, it is easy to go back to the original H-systems. This was in fact the recommended procedure to calculate the H-systems, because it was considered much more computationally efficient than inverting the input spectrum matrix as discussed in Section 14.1.5. Modern software, such as MATLAB and Octave, has changed this, so that today there is no computational advantage in computing MIMO models as explained here. There is, however, in my opinion, still an educational value in explaining it this way. The procedure to backtrack the "original" frequency response functions follows easily by redrawing the two-input system as in Figure 14.8 (Allemang et al. 1984; Bendat and Piersol 1993, 2000). From a comparison of Figure 14.8 with Figures 14.2 and 14.3, it is apparent


Figure 14.8 Equivalent diagram for backtracking the H systems from L systems. The system in this figure is equivalent to the system in Figure 14.2, while some of the included variables are taken from the conditioned signals. The figure shows that the L system Ly2 is equal to the H system Hy2 , since it is obvious that the conditioned signal x2⋅1 goes through the same linear system as the original signal x2 .


that the H system Hy2 must be equal to Ly2, since the conditioned signal with the dependence on the first input signal removed, x2⋅1, and the original second signal, x2, must both go through the same linear system. Thus, from the figure we have

Hy2 = Ly2    (14.42)

Figure 14.9 General multiple-input/single-output (MISO) system, with (possibly) correlated inputs.


Figure 14.10 General multiple-input/single-output (MISO) system with uncorrelated conditioned input signals.


and

Ly1 = Hy1 + L21 Ly2.    (14.43)

Rewriting Equation (14.43), we then get the relationship used to calculate the system Hy1:

Hy1 = Ly1 − L21 Ly2.    (14.44)
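Reusing the hypothetical variables from the conditioning sketch in Section 14.2, the backtracking amounts to two lines of MATLAB/Octave.

% Hypothetical sketch of the backtracking in Equations (14.42) and (14.44),
% reusing L21, Ly1 and Ly2 from the conditioning sketch.
Hy2 = Ly2;               % Equation (14.42)
Hy1 = Ly1 - L21.*Ly2;    % Equation (14.44)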

14.2.6 General Conditioned Systems

When analyzing systems with more than two inputs, the method we have used so far in Section 14.2 is easily extended. We start with a MISO model with inputs that are correlated, as in Figure 14.9. As we discussed above, we then transform this system into a system with uncorrelated inputs, as in Figure 14.10. If there is substantial correlation between some or all the inputs, the original input signals should be ordered as mentioned in Section 14.2.3.

14.3 Bias and Random Errors for Multiple-Input Systems

We will not present any quantitative random or bias errors for the MIMO case, but restrict the discussion to some important qualitative aspects. Approximate formulas for MIMO estimates can be found in Bendat and Piersol (2000), and some errors have exact solutions developed in Antoni and Schoukens (2007, 2009). The same arguments as we discussed for SISO systems in Section 13.5 apply to MIMO systems. The spectral densities in the auto- and cross-spectral matrices discussed in this chapter should therefore be computed with a half-sine window and 67% overlap to produce the least possible bias and random errors. The difference is small, however, compared with using the more traditional Hanning window and 50% overlap. The bias errors should be minimized by the same procedure with increasing blocksize (decreasing frequency increment) as discussed for spectral analysis and SISO FRF estimation in Chapters 10 and 13.

In general, random errors in MIMO estimates are larger than the corresponding errors for SISO systems, which is due to the fact that the input correlation has to be removed in the inversion of the input autospectral matrix. In addition, numerical difficulties with this inversion can lead to noise, as we will show in Section 14.6. It is therefore important to make sure that the correlation between the inputs is sufficiently small when using multiple inputs. Finally, the random errors are small if the multiple coherence is unity or very close to unity. It should also be free from noise, as noise in the multiple coherence is an indication of bad signal-to-noise ratio, or possibly numerical problems.

14.4 Excitation Signals for MIMO Analysis

When multiple inputs are used to excite a structure in order to estimate frequency response functions by MIMO techniques, certain considerations have to be made with regard to the


excitation signals used. As mentioned in Section 14.1.5, a number of independent averages have to be taken when averaging the auto- and cross-spectral matrices. Thus, the excitation signals that are used must be such that the FFTs of successive time blocks of data are independent. This disqualifies some of the popular excitation signals used for single-input estimation, such as the chirp signal. We discuss the most common excitation signals used in the modal analysis community in this chapter. For some alternative signals that may be used, for example multisines, see Pintelon and Schoukens (2012).

14.4.1 Pure Random Noise

The simplest excitation signal which can be used for multiple-input estimation is pure random, which consists of Gaussian noise. Each input is made up of noise which is incoherent (independent) with all other input signals. The main disadvantage with this excitation signal, as in the single-input case, is that it requires a relatively long blocksize and thereby a long measurement time in order to reduce the bias error. This makes pure random an inconvenient excitation signal for structures with light damping. However, the pure random signal poses a minimum of requirements on the hardware and is therefore sometimes the only available signal type. This is the case especially if external noise generators have to be used. Pure random is used with frequency domain averaging.

14.4.2 Burst Random Noise

The next signal type is the burst random signal. This transient signal consists of independent, pure random data for the first part of each force signal, followed by a "silent" period, see Section 13.9.3. The burst length is chosen so that the responses (outputs of the estimated system) decay to near zero before the end of the block; it is usually of the order of 20–50% of the total record time. Burst random leads to measurements with a small amount of leakage, with a proper selection of the burst-off time so that the response signals actually decay to zero. The main drawback is the poor signal-to-noise ratio. In order to synchronize the data acquisition with the excitation signals, the burst random signals usually have to be generated by the measurement system. This puts some additional requirements on the hardware, but most commercially available multichannel data acquisition systems for noise and vibration testing today include this type of excitation signal. Averaging is done the same way as for the pure random excitation signal.
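As a minimal sketch, one block of uncorrelated burst random excitation for two shakers could be generated in MATLAB/Octave as follows; the blocksize and the 30% burst length are assumed values chosen for illustration only.

% Hypothetical sketch: one block of uncorrelated burst random excitation for
% two shakers. In a real test these blocks would be sent to the signal
% outputs block by block, synchronized with the data acquisition.
N  = 2048;                    % blocksize in samples (assumed)
Nb = round(0.3*N);            % burst length, here 30 % of the blocksize (assumed)
x1 = zeros(N,1); x2 = zeros(N,1);
x1(1:Nb) = randn(Nb,1);       % independent Gaussian noise in the burst part
x2(1:Nb) = randn(Nb,1);       % the remainder of each block is silent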

14.4.3 Periodic Random Noise

A disadvantage with the burst random signal is the decreased signal-to-noise ratio during the silent periods. In order to avoid this, periodic random signals can be used, as discussed for the SISO case in Sections 13.9.4 and 13.9.5. The periodic random signal for multiple-input estimation works in a somewhat different way from the single-input case. It turns out that, in order for the input autospectrum matrix to be invertible in this case, it is not enough to have independent phases in each time block; the amplitudes at each frequency must also be altered independently.


Periodic random is therefore produced by first generating a single time block of pure random data for each force signal. This block is sent to the shaker and repeated a number of times, until the transient response of the tested structure has decayed, without any data being measured. When the structure is in its steady-state response, a time block of all signals is acquired, FFTs of all channels are computed, and auto- and cross-spectra are accumulated. A new, independent time block of random data is then generated for each force signal, which is again repeated a number of times before a new time block is acquired and processed. This procedure is repeated until enough averages have been taken.

Periodic random is the best excitation signal in terms of its increased signal-to-noise ratio. As the signals are periodic within the time window, the periodic random signal has all its energy concentrated on the frequency lines of the FFT, so it produces leakage-free spectra with good signal-to-noise ratio. However, a drawback with this signal is the large increase in total measurement time, since each new block has to be repeated several times; in addition, it requires special synchronization between the input channels and the output channels in the hardware. Therefore, this signal type is not very common in commercial systems.

As we will show in Section 14.5, in cases with severe input noise, the periodic random method can be used with a combination of time domain and frequency domain averaging. Although this requires more averages to yield unbiased results, it is a convenient method in this otherwise tricky case. The principle is to repeat each independent block in the periodic random sequence of blocks and produce a time average of a large number of such repeated blocks, using those blocks where the transient response has vanished. In order to make the input autospectrum matrix invertible, at least as many independent blocks as there are inputs to the system (in practice, say, twice that number) have to be averaged together, see Section 14.5.
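A minimal sketch of one periodic random "round" for two inputs is shown below; the blocksize and the number of repetitions are assumed values, and in a real measurement the blocks would be output to the shakers with only the later, steady-state repetitions acquired.

% Hypothetical sketch of one periodic random round for two inputs: a new,
% independent random block is generated and repeated P times.
N  = 2048;                    % period (blocksize) in samples (assumed)
P  = 10;                      % number of repetitions of each block (assumed)
b1 = randn(N,1);              % independent random block for input 1
b2 = randn(N,1);              % independent random block for input 2
x1 = repmat(b1,P,1);          % periodic excitation, period N samples
x2 = repmat(b2,P,1);
% The whole round is then repeated with new, independent blocks b1 and b2
% until enough (frequency domain) averages have been accumulated.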

14.4.4 The Multiphase Stepped-Sine Method (MPSS)

As for the single-input case, a stepped-sine approach can be used also for the multiple-input case (Williams and Vold 1986). In this case, however, care must be taken so that the input cross-spectral matrix is indeed invertible. The special method is called the multiphase stepped-sine method (MPSS) and consists of making several "sweeps" in frequency, between which the individual phases of the force signals are changed according to a scheme producing independent spectra. These spectra are accumulated into auto- and cross-spectral matrices which can then be solved by Equation (14.13). In order to make the input autospectrum matrix invertible, more "sweeps" than the number of forces have to be made, making this a rather slow method. MPSS excitation has found an increased interest in the aerospace industry for the so-called ground vibration tests, where the modes of whole aircraft are analyzed. Except for small fighter aircraft, which are rather stiff structures, aircraft are normally difficult to measure using broadband excitation. Recently, Pauwels et al. (2006) presented a faster MIMO swept sine technique by replacing the usual FFT processing by digital filters.

A special consideration has to be made in order to make the forces independent when using the MPSS method. In many cases, the shakers used to excite the structure are too weak to produce exactly the same forcing function that is fed to them by the signal


generator voltages. Especially around resonances, the structure responds strongly in its modes, resulting in force patterns that are dependent. In order to avoid this problem, closed-loop control of the shakers should be utilized in the measurement system.

14.5 Data Synthesis and Simulation Examples

After the theoretical description of the MIMO estimation methods in the preceding part of this chapter, we will now illustrate the theory by some simulation examples which will provide insight into the different issues related to MIMO FRF estimation. At the end of this section, there is also an example of real data from measurements on a Plexiglas plate to illustrate some issues which are difficult to simulate. For all simulation examples, we will use the same 2DOF system we used in Chapter 13, with the difference that we now excite both DOFs. The system has two natural frequencies at 100 and 200 Hz, and 1% damping. We use the method from Section 19.2.3 to produce the output due to each force (since the simulation method calculates the response in one DOF due to excitation in another DOF) and sum the two contributions.

14.5.1 Burst Random – Output Noise

In our first example, we use burst random excitation with uncorrelated input forces, and some contaminating extraneous output noise. In Section 13.10.2, we established that a 25% burst length together with the chosen blocksize of 2048 samples was sufficient for this system to reduce the leakage to a sufficiently low degree. This will not change by the fact that we now excite two DOFs, so we choose the same burst length in this example. In Figure 14.11, some results of a MIMO estimation, using 50 frequency domain averages, of the two FRFs between both input DOFs and the response in DOF 1 are plotted. In (a) and (b), we see the driving point FRF in DOF 1, Ĥ11 (zoomed in around the first natural frequency in (b)), and in (c) and (d) we see the second FRF, Ĥ12. In (e), the multiple coherence is shown, and in (f), the ordinary coherence between the two forces, from which we see that the forces are uncorrelated.

The multiple coherence in Figure 14.11(e) is very close to unity, which means that the output noise level is small enough to not seriously deteriorate the measurement. This is a representative case when measuring, for example, a freely supported, simple structure in a lab environment, provided appropriate force levels and sensors with suitable sensitivities have been chosen. In the zoomed-in plots in (b) and (d), we see the burst random frequency lines as "o" (rings) overlaid on the true FRF of this synthesized example. The estimates are very nearly perfect; the remaining error is the small random error due to the limited number of averages, as the H1 estimator removes the bias due to the output noise present in the measurement.

We proceed with a similar example where we add correlation between the two forces to illustrate how this affects the FRF estimates. In Figure 14.12, the same results as in Figure 14.11 are shown for a case where the correlation between the forces is approximately 0.84 (defined by the coherence function; this means the power is correlated to 84%). The example with correlation between the forces shows that the FRF estimates are affected when the input correlation is high. The estimates show some more sensitivity


Figure 14.11 Results from MIMO estimation of a simulation case with two FRFs on a 2DOF system, using 2048 samples blocksize, 50 averages in the frequency domain, and burst random excitation with 25% burst length and the H1 estimator from Equation (14.13). In (a), the magnitude |H11| is shown; in (b), the magnitude of the same FRF is zoomed in around the first natural frequency (rings) overlaid on the true FRF (solid); in (c), |H12| is shown; and in (d), the same FRF is zoomed in around the first natural frequency (rings) overlaid on the true FRF (solid). The multiple coherence, γ²y:x, is shown in (e), and the ordinary coherence between the two forces is shown in (f), in which it is seen that there is no correlation between the two forces. This example shows that burst random excitation produces virtually bias-free estimates in this ideal case. There is a small random error, although difficult to see, but the bias error is negligible. The H1 estimator removes bias due to output noise.


Figure 14.12 Illustration of the same simulation example as in Figure 14.11, except that the correlation between the forces is now 0.84 as defined by the input coherence in (f). See Figure 14.11 for a description of the different plots. The difference in the results of the H1 estimator in this case is an increased random error, most visible in |H12| in (c) at higher frequencies. Even with this high correlation, the FRF estimate is relatively good, but the example shows that to obtain the best results in MIMO FRF estimation, the input correlation should be kept low.

to noise when the signal-to-noise ratio is poor. Also, see the example on real data in Section 14.6 for more discussion on input correlation.

14.5.2 Burst and Periodic Random – Input Noise

Next, we will replace the extraneous output noise used in Section 14.5.1 with input noise, which is not removed by the H1 estimator. In Figure 14.13, results from a simulation with


Figure 14.13 Results of simulation using input noise on both input signals with a signal-to-noise ratio of approx. 40 dB. The results for two excitation signals are shown: burst random (rings or dotted) and periodic random (+ signs or solid). As shown, the bias error is significantly smaller with periodic random than with burst random excitation due to the higher signal-to-noise ratio of the former. Both estimators are, however, clearly biased as the H1 estimator does not remove the bias due to extraneous input noise.

an input signal-to-noise ratio of approx. 40 dB (relatively poor) is shown for two excitation signals: burst random, and periodic random with (normal) frequency domain averaging. In Figure 14.13(b) and (d), it is clearly seen that in this case burst random excitation (rings) shows rather large bias, whereas the bias error is smaller, but still visible, with periodic random. The reason for the smaller bias with periodic random is the increased signal-to-noise ratio with this excitation signal, most of all because it is a continuous signal, and to some extent because its energy is concentrated on the frequency lines of the DFT


(see Section 13.9.1). To be sure the error we see is not seriously affected by the random error, in this simulation 400 blocks were averaged for each excitation signal, which makes the random error sufficiently small. Thus, the remaining errors seen in Figure 14.13 are bias errors. As seen in Figure 14.13, the multiple coherence for burst random excitation is clearly less than unity over the entire frequency range, whereas for periodic random, it is closer to, but still slightly lower than, unity. This is a direct result of the increased signal-to-noise ratio of the periodic random signal and follows Equation (13.36).

14.5.3 Periodic Random – Input and Output Noise

As we discussed in Section 14.4.3, there is an alternative to the normal frequency domain averaging, which can be used to eliminate the bias error due to noise on both input and output signals, when we use a periodic excitation signal. The solution is to use time domain averaging to remove (attenuate) the extraneous noise on all input and output signals, prior to taking the FFT and accumulating auto- and cross-spectra. An example of the results of this procedure, for a severe case of input noise, the same amount of output noise, and the H1 estimator, is shown in Figure 14.14.

As seen in Figure 14.14, where the results of normal frequency domain averaging, using 400 blocks of data, and of time domain averaging (see the list below for details) are compared, the time domain averaging is effective in removing the bias error. Note that the bias in the result of frequency domain averaging is caused by the fact that the H1 estimator is not able to remove the error due to input noise, so it would not help to increase the number of averages in the frequency domain averaging – the error is a consistent error. In this example, we have used a rather large amount of data, in total 200 × 20 blocks, each with 2048 samples, in the time domain averaging process, i.e., 8 192 000 samples. This has been done to reduce the random error and to illustrate the asymptotic effect. In practice, fewer averages can be used if a larger random error can be tolerated. Random errors are less serious for modal parameter curve fitting algorithms than bias errors are, so this may be acceptable in some instances.

The processing scheme for periodic random with time domain averaging is more complicated than if frequency domain averaging is done. In the simulation results presented in Figure 14.14, the following processing was followed:

1. Two random sequences, x1 and x2, with blocksize N = 2048 samples were generated as Gaussian noise, and then repeated 200 times to generate a periodic signal with period 2048 samples.
2. These two signals, x1 and x2, were input to a time domain forced response algorithm to produce output signals y1 and y2. These two latter signals were then summed to yield the true system output, y.
3. Next, the first 10 periods of all signals, x1, x2, and y, were discarded to remove the transient part of the response.
4. Two independent extraneous random sequences, each with an RMS level of approximately 0.25 times the RMS level of each of the input signals, x1 and x2, were then added to each input signal, and a random sequence with an RMS level of 10^−4 times the RMS level of the output signal, y, was added to y.


Figure 14.14 Results of a simulation using periodic random excitation with extraneous input and output noise. The input noise had a SNR of approx. 12 dB, and the output noise approx. 40 dB. Two alternative processing techniques were used: “normal” frequency domain averaging (dotted and rings), and time domain averaging (solid and +). As seen in (b) and (d), time domain averaging can reduce the bias error due to both input and output extraneous noise. The remaining error is predominantly random error because time domain averaging is less efficient in reducing this error.

5. The remaining 190 blocks, including the extraneous noise, were then time averaged to produce the pure periodic part of each signal.
6. The cleaned time signals obtained in step 5 were then used to compute an instantaneous estimate of each auto- and cross-spectrum in Equation (14.13). The results were added in a cumulative calculation of the final auto- and cross-spectra.
7. Steps 1 to 6 were repeated 20 times, and the thus-cumulated auto- and cross-spectra were finally used to produce an H1 estimate of the frequency responses.


Note that step 7 has to be included in order to make the input autospectral matrix invertible; time domain averaging has to be combined with at least a few frequency domain averages. This also makes the multiple coherence defined.
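As a minimal sketch of the time domain averaging in steps 5 and 6, assume one measured channel x is stored as a long vector of Nrep repeated periods of length N (after the transient periods have been discarded); the cleaned period and its spectrum could then be computed as follows, with illustrative names and assumed values.

% Hypothetical sketch of the time domain averaging for one channel.
N    = 2048;                                  % samples per period (assumed)
Nrep = 190;                                   % repeated periods kept (assumed)
xavg = mean(reshape(x(1:N*Nrep),N,Nrep),2);   % time averaged (cleaned) period
X    = fft(xavg);                             % spectrum used for the instantaneous
                                              % auto- and cross-spectra in step 6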

14.6 Real MIMO Data Case

Some properties and potential problems with MIMO shaker excitation are difficult to illustrate with simulated results such as we have used in the preceding sections. We will therefore include an example where a Plexiglas plate was excited by two shakers, as illustrated in Figure 14.15. In this example, we will look at some of the measured functions and study how the correlation between the input forces affects the frequency response estimation. The Plexiglas plate was suspended by soft rubber bands, providing approximately free–free boundary conditions at frequencies of interest (above approximately 50 Hz). The driving point forces and accelerations were measured by impedance heads, ensuring that force and acceleration were measured at the same point. With this setup, the properties of the driving point frequency response functions as well as the reciprocity (see Section 6.4.2) can be verified. The frequency response functions were calculated using the H1 estimator and with the procedure described in Section 14.1.3. The two electrodynamic shakers were connected to a two-channel noise generator, providing Gaussian noise with adjustable correlation. In a first measurement, the noise sources were set to be uncorrelated, and the frequency response functions were estimated. Since the noise was continuous, a Hanning window was applied in the frequency analysis.

Figure 14.15 Measurement setup for multiple-input example. Two shakers were attached through impedance heads to a Plexiglas plate. The plate was suspended by rubber bands which were attached to a metal frame (illustrated by the rigid lines at the top of the figure), providing approximately free-free boundary conditions at frequencies above a few Hertz. The two forces and the two driving point acceleration signals were measured and used to estimate a two-input/two-output system.


Figure 14.16 Measurement results of the two-input/two-output example with uncorrelated forces. In (a), the spectral densities of the input forces are plotted; in (b), the multiple coherence of response DOF 1 is plotted; (c) shows the ordinary coherence between the two forces; and (d) shows the frequency responses between the two forces and the response in DOF 1.

The results of the first measurement with uncorrelated noise are found in Figure 14.16. In (a), the spectral densities of the input forces are plotted. As can be seen, the forces have practically identical spectral densities. The variations are due to the mechanical impedance of the structure and the shaker characteristics. In Figure 14.16(c), where the ordinary coherence function between the forces is plotted, it can be seen that at some frequencies (e.g., 160, 210, and 250 Hz), the correlation between the input signals is rather high, with coherence values of 0.9 to 0.95. At other frequencies, however, the correlation is low. The increased correlation is found around antiresonances of the structure, where the stinger/amplifier/shaker combination is unable to force the structure according to the input voltage from the noise generators. In Figure 14.16(b) and (d), the multiple coherence and the frequency response functions of both forces with the acceleration in one of the excitation positions are plotted, respectively. From the upper plot, (b), it can be seen that for frequencies above approximately 50 Hz, the multiple coherence equals unity. Below 50 Hz, the forces are low, causing low coherence values due to contaminating noise in the force transducers (and possibly accelerometers).



Figure 14.17 Results from a measurement with some correlation between the input noise signals. In (a), the FRFs between force 2 and response 1 for uncorrelated inputs and correlated inputs are compared; in (b), the multiple coherence of response DOF 1 is plotted; (c) shows the ordinary coherence between the two forces; and (d) shows the frequency responses between the two forces and the response in DOF 1.

As the first resonance frequency is approximately 80 Hz, the force input spectrum is sufficient to measure the interesting part of the FRFs. In the lower plot, (d), the suspension resonances are seen at very low frequencies, indicating that the suspension was soft enough to provide free–free conditions at the frequencies of interest.

In Figure 14.17, the results of adding some correlation between the excitation forces are shown. In Figure 14.17(c), the coherence between the forces is plotted, and it is apparent that the correlation is very high in several frequency ranges. The multiple coherence, however, in (b) does not show any apparent degradation, which is typical. An immediate look at the plot in Figure 14.17(a), where one of the FRFs from the uncorrelated input case is overlaid on the correlated input case, does not show any large difference. However, Figure 14.18, where the two FRFs in Figure 14.17(a) have been zoomed in around the first two natural frequencies, shows that there is a bias in the FRF estimate from the correlated input case. This shows how important it is to check the input correlation, as there will be no apparent visible indication in the estimates that a problem is present.


Figure 14.18 Comparison of frequency response functions between measurements without correlation between the sources (top graph in lower plot) and with some correlation (lower graph in lower plot). The upper plot shows the coherence function between the two inputs in the case of correlated inputs, and the lower plot shows the two frequency response functions. Note that the two frequency responses are offset slightly in order for the differences to show more clearly. As is clear from the lower plot, the higher input correlation causes an increased variance in the estimate, seen as a "ripple" in the lower FRF.

Two important conclusions can be drawn from this example. First, it is important to make sure that the input correlation is not too close to unity when estimating multiple-input frequency response functions. In practice, a value of the ordinary coherence functions between the inputs of less than 0.8 is recommended. Second, it is important to realize that the increased input correlation does not show in the multiple coherence, so the coherence function (or functions, if more than two inputs) between the inputs must be checked. In practice, when more than two inputs are used, cumulated virtual coherence functions can be used in order to avoid having to study the coherence between each pair of inputs, see Chapter 15. Principal components, see Section 15.1, are another tool which may be used to check the input correlation.
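As a minimal sketch, this check can be automated by computing the ordinary coherence between the two measured forces and flagging the frequencies where it exceeds 0.8; the variable names below (frequency axis f and force auto- and cross-spectra G11, G22, G12) are assumptions for illustration.

% Hypothetical sketch: flag frequencies where the input correlation is too high.
gam2_12 = abs(G12).^2./(G11.*G22);    % ordinary coherence between the two forces
fbad    = f(gam2_12 > 0.8);           % frequencies exceeding the recommended limit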

14.7 Chapter Summary

This chapter has presented an introduction to estimates of frequency response functions when more than one input signal is present in the system. The general multiple-input/multiple-output (MIMO) system is defined such that each output


is a linear combination of all the inputs, each going through an individual linear system. Thus, the outputs have no causal relations, and therefore each output can be regarded by itself; we then talk about MISO systems. In general, from the point of view of understanding MIMO systems, we can view them as a number of parallel MISO systems.

For MIMO systems, the H1 estimator is the predominantly used estimator, since the H2 estimator has the obvious drawback that it only works when the number of outputs equals the number of inputs, which is a rare case. We also introduced the Hv estimator, which is an estimator attempting to minimize the error in the FRFs due to both input and output extraneous noise. However, without knowing the input and output noise properties, this estimator has to be based on an assumption that both noises are equal, which is not the case at most frequencies. It is therefore not an estimator which generally solves the problem of noise in both input and output signals.

The ordinary coherence function between an input signal and the output signal of a MISO system is not particularly useful. Instead, we defined the multiple coherence, which, similarly to the ordinary coherence of a SISO system, describes how much of the output signal power is explained by all the input signals. In the case of no extraneous noise, the multiple coherence thus equals unity.

All estimators for MIMO systems include having to solve the inverse of the input autospectral matrix, [Gxx], at every frequency. This has two important implications, namely:

● that the excitation signals have to be independent at all frequencies (i.e., not fully correlated), and
● that the rank of [Gxx] is full, which means that at least as many averages have to be included in the computation of [Gxx] as there are inputs in the system.

We discussed different excitation signals, which fulfill the demands in the list above. The most common excitation signals for MIMO estimation are pure random, burst random, and periodic random signals. Comparing these three excitation signals, we found that the periodic random signal has the best signal-to-noise ratio, and is therefore the best excitation signal in terms of measurement quality. Its drawbacks are that it requires more sophisticated hardware, and that the processing necessary for it is more complicated than is the case for the other two excitation signals. It is unfortunately, for this very reason, not very common in commercial measurement systems. An important advantage of periodic random is also its ability to minimize the error in FRF estimates due to noise on the input and output signals simultaneously. This is not widely acknowledged but was demonstrated by an example. Therefore, in cases of severe input noise, time domain averaging with periodic random excitation can be an attractive alternative.

14.8 Problems

Many of the following problems are supported by the accompanying ABRAVIBE toolbox for MATLAB/Octave and further examples which can be downloaded with the toolbox. If you


have not already done so, please read Section 1.6. ABRAVIBE is completely free and can be downloaded from www.abravibe.com, together with example files and other material complementing this book, including a solutions manual for all book problems.

Problem 14.1 Can a MIMO system be estimated using pure random noise as excitation signals? Is it a good choice, if you can choose between pure random or burst random noise? Why, or why not?

Problem 14.2 Develop an H2 estimator for the MIMO case, by setting up the equations for a system with extraneous noise vector {M(f)} on the input signals, and by multiplying by {Y(f)}^H. Explain why this estimator only works when the number of inputs is equal to the number of outputs.

Problem 14.3 Write a MATLAB/Octave script which creates a mechanical 2DOF system with natural frequencies of 12.5 and 25 Hz and 2% damping, using appropriate commands from the accompanying toolbox. Set the sampling frequency to 200 Hz and use 100 (frequency domain) averages. Excite the system with pure random noise and determine which blocksize you should use as a minimum to eliminate the bias error, by running the script with larger and larger blocksize until the peaks do not become higher with a larger blocksize. Plot the magnitudes of the FRFs (with logarithmic y-axis) and the coherence.

Problem 14.4 Make a new script using part of the script from Problem 14.3 and replace the excitation signals by burst random noise. First, adjust the burst length using a blocksize of 1024 samples until the measurement quality is good. Then check the maximum burst length (in % of the blocksize) you can have with blocksizes of 2048 and 4096, with good coherence.

Problem 14.5 Use the scripts from Problems 14.3 and 14.4 and add the same amount of noise to the output signal in both cases. Run the scripts with different blocksizes and note what happens with the random error. Explain why!

References

Allemang RJ, Rost RW and Brown DL 1984 Multiple input estimation of frequency response functions. Proceedings of 2nd International Modal Analysis Conference, Orlando, FL.
Antoni J and Schoukens J 2007 A comprehensive study of the bias and variance of frequency-response-function measurements: optimal window selection and overlapping strategies. Automatica 43(10), 1723–1736.
Antoni J and Schoukens J 2009 Optimal settings for measuring frequency response functions with weighted overlapped segment averaging. IEEE Transactions on Instrumentation and Measurement 58(9), 3276–3287.
Antoni J, Wagstaff P and Henrio JC 2004 Hα – a consistent estimator for frequency response functions with input and output noise. IEEE Transactions on Instrumentation and Measurement 53(2), 457–465.


Bendat J and Piersol A 1993 Engineering Applications of Correlation and Spectral Analysis, 2nd edn. Wiley Interscience.
Bendat J and Piersol AG 2000 Random Data: Analysis and Measurement Procedures, 3rd edn. Wiley Interscience.
Mitchell L 1982 Improved methods for the FFT calculation of the frequency response function. Journal of Mechanical Design 102(2), 277–279.
Pauwels S, Michel J, Robijns M, Peeters B and Debille J 2006 A new MIMO sine testing technique for accelerated, high quality FRF measurements. Proceedings of 24th International Modal Analysis Conference, St. Louis, Missouri, Society for Experimental Mechanics.
Pintelon R and Schoukens J 2012 System Identification: A Frequency Domain Approach. John Wiley & Sons.
Rocklin GT, Crowley J and Vold H 1985 A comparison of H1, H2, and Hv frequency response functions. Proceedings of 3rd International Modal Analysis Conference, Orlando, FL.
Smallwood D 1995 Using singular value decomposition to compute the conditioned cross-spectral density matrix and coherence functions. Proceedings of 66th Shock and Vibration Symposium, vol. 1, pp. 109–120.
White PR, Tan MH and Hammond JK 2006 Analysis of the maximum likelihood, total least squares and principal component approaches for frequency response function estimation. Journal of Sound and Vibration 290(3–5), 676–689.
Williams R and Vold H 1986 Multiphase-step-sine method for experimental modal analysis. International Journal of Analytical and Experimental Modal Analysis 1(2), 25–34.


15 Orthogonalization of Signals

The aim of the conditioning of the input signals discussed in Chapter 14 was to make the input signals independent by removing any correlation between them, so that the contribution of each independent input could be added to produce the output. In linear algebra terms, this is the same as to say that the input signals are orthogonal. As we discussed in Chapter 14, different orderings of the input signals gave rise to different decomposed L-systems. This can sometimes be of use, when there is a physically motivated way of ordering the input signals. Other times, however, using conditioned signals can be difficult, especially if there is no obvious way of ordering the signals. In this chapter, we will introduce an alternative approach to orthogonalize the input signals through the use of principal components, and we will introduce the concept of virtual signals to interpret the principal components. This mathematical tool was introduced, in the form we are going to use it, by Hotelling (1933). Principal components theory has been applied to noise and vibration testing (e.g., Otnes and Enochson, 1972; Otte et al., 1990; Tucker and Vold, 1990; Vold, 1986) and is a standard tool in many current commercial measurement systems. Although the concept of principal components was originally developed on covariance matrices, we will study a corresponding formulation in the frequency domain which is more common in noise and vibration applications. A good source for more information in line with the presentation here is the dissertation thesis by Otte (1994), who made significant contributions to the application of principal components in this field.

15.1 Principal Components

With Q correlated input signals, we have an input cross-spectral matrix [Gxx] with nonzero off-diagonal elements consisting of the cross-spectra between the input signals. If we could transform this matrix into a new matrix ⌈G′xx⌋ which was diagonal, we would have a cross-spectral matrix of new input signals X′i which would be uncorrelated. The problem then is to diagonalize the original input spectral matrix [Gxx] in such a manner.

The input cross-spectral matrix [Gxx] is Hermitian (sometimes Hermitian symmetric), i.e., it equals the complex conjugate of its transposed matrix. In mathematical terms, we have

[Gxx] = ([Gxx]*)^T = [Gxx]^H,    (15.1)


where the superscript H is the "Hermitian transpose" and means we transpose and take the complex conjugate, see Appendix E. For a Hermitian matrix, from linear algebra theory (see Appendix D and E or any standard textbook on linear algebra, e.g., Strang (2005), for this and much of the theory in this chapter), we know the following:

1. Its eigenvalues are real.
2. Its eigenvectors {uk}, corresponding to the eigenvalues λk, can be scaled so they are orthonormal, i.e., they have unit length and the dot (scalar) product of any two vectors is zero, i.e.,

   ‖{uk}‖₂ = 1
   {uk}^H {ul} = 0,  k ≠ l.    (15.2)

3. The eigenvector matrix with the eigenvectors as its columns fulfills Equation (15.3)

   [U]^H = [U]^−1,    (15.3)

   where [U] is a matrix with each eigenvector {uk} in column k. Note: a complex matrix with orthonormal columns is called a unitary matrix, which the eigenvector matrix, [U], thus is. (This is the complex version of orthogonal matrices.)
4. It is diagonalized by its eigenvectors through Equation (15.4)

   [Gxx][U] = [U]⌈λ⌋,    (15.4)

   where ⌈λ⌋ is a diagonal matrix with the eigenvalues of [Gxx] on its diagonal, in descending order. The descending order is not necessary for Equation (15.4) to be true, but is important when we arrive at the principal components below, so we impose this restriction already here.

From the numbered points above, it is clear that we can transform our original input cross-spectral matrix into a new diagonal input spectral matrix ⌈G′xx⌋ of uncorrelated "virtual" inputs X′i using Equation (15.4), which is reformulated using Equation (15.3) into

⌈G′xx⌋ = ⌈λ⌋ = [U]^H [Gxx] [U].    (15.5)

The elements on the diagonal of ⌈G′xx⌋ are called principal components. Note especially that the eigenvalues and corresponding eigenvectors are sorted in descending order, which is not necessary for the eigenvalue problem, but is an essential part of the concept of principal components. Note also that Equation (15.5) applies to one frequency at a time, and thus, the principal components in our case are frequency dependent "spectra" just as the power spectral densities. The principal components, further, will always be positive because input cross-spectral matrices are positive definite; their eigenvalues are > 0 (theoretically, they can be equal to zero, in which case we call the matrix positive semidefinite; for matrices obtained from a measurement, we will always have noise which prevents any eigenvalue from being strictly equal to zero).

An important feature of principal components is that the total power of the signals is preserved. This follows from the fact that the eigenvector matrix [U] is unitary.


Thus, for each frequency, the sums of the diagonal elements of [Gxx] and ⌈G′xx⌋, respectively, are equal. The sum of the diagonal elements of a matrix is known as the trace of the matrix, i.e.,

trace(⌈G′xx⌋) = trace([Gxx]).    (15.6)

Example 15.1.1 Use MATLAB/Octave to compute the principal components based on the input cross-spectral matrix of three measured signals, x1, x2, x3.

We assume we have computed the input cross-spectral matrix [Gxx] in a 3D matrix in variable Gxx, as described in Section 10.8.2. The following MATLAB/Octave code then computes the 2D matrix PC, with principal components in columns. Note that there is no need to compute the off-diagonal zeros.

[N,R,dum]=size(Gxx);            % Find number of inputs, R
for f=1:N
    Gxxf=squeeze(Gxx(f,:,:));   % Force to 2D matrix
    PC(f,:)=eig(Gxxf);          % Compute eigenvalues
    % Sort eigenvalues to produce principal components
    PC(f,:)=sort(PC(f,:),'descend');
end

Note that we use the command eig here with only one output argument, which creates a column vector with the eigenvalues. We also have to sort the eigenvalues because they do not necessarily come in descending order (actually, in MATLAB and Octave, they will come in ascending order). The command squeeze has to be used to make Gxxf a two-dimensional matrix, which eig requires. End of example.

15.1.1 Principal Components Used to Find Number of Sources

In order to illustrate a common application of principal components, we will study a simulation case of vibrations on a plate excited in two degrees of freedom. The plate model is similar to the experimental plate described in Section 14.6. The excitation signals were independent Gaussian noise. We simulate data for three accelerometers on the plate, with some added extraneous output noise. The autospectral densities of the simulated acceleration signals are plotted in Figure 15.1. As can be seen in the figure, the acceleration levels were similar at the different points, but the actual spectra depend on the location of the accelerometers. The principal components of the three signals were then computed using the complete input cross-spectral matrix of the three signals, i.e., in addition to the three autospectral densities in Figure 15.1, also the cross-spectral densities between each pair of accelerations were calculated. The resulting principal components are plotted in Figure 15.2. This figure clearly shows that there are two principal components with high levels relative to the third, which is some 50–60 dB lower at most frequencies. The interpretation of this is that there were only two (dominating) independent sources causing the measured vibrations, since apparently the three spectra in Figure 15.1 can be calculated as linear combinations of the two highest principal components in Figure 15.2. This is a common application of principal components in noise and vibration testing as a starting point when analyzing an unknown environment.

Figure 15.1 Acceleration autospectral densities of three simulated accelerometers on a plate. The three acceleration PSDs have approximately equal level (area under the PSDs), with local variations due to the position of each accelerometer.

It is of course often very helpful to know how many sources there are in a system before starting to map them; this number is often called the rank of the system under investigation. In order to explain the above interpretation, consider a mechanical system with a number of forces acting on it. Then, from Chapter 6, we know that the responses in column vector {Y} given the forces in column vector {F} satisfy
[H]{F} = {Y}.     (15.7)
Multiplying Equation (15.7) by its Hermitian transpose and taking the expected value of both sides yields
[Gyy] = E[{Y}{Y}^H] = E[[H]{F}{F}^H[H]^H] = [H][GFF][H]^H.     (15.8)
Note that variable y is now used for the displacement (output), as opposed to u in the previous chapters on mechanics. This has been done here because U in this chapter stands for the eigenvectors, and we do not wish to confuse the reader too much. Furthermore, we use F to denote the force instead of the input variable X, to stress the physical application here. Since the frequency response matrix [H] is in general a matrix with full rank, the rank of [Gyy] must clearly be equal to the rank of [GFF], i.e., the number of sources in the system. This gives us the first use of principal components, which is to determine the number of incoherent (uncorrelated) sources, as we demonstrated in Figure 15.2. In the example above, we had two uncorrelated forces generating noise on the plate. We put three accelerometers on the plate, and measured the cross-spectral matrix of these three signals.
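The rank argument in Equation (15.8) is easily illustrated numerically. The following minimal sketch (not from the book; all values are arbitrary) builds an output cross-spectral matrix from two uncorrelated forces acting through a full-rank 3-by-2 FRF matrix and shows that the third eigenvalue is zero:
H=randn(3,2)+1i*randn(3,2);     % Arbitrary full-rank 3-by-2 FRF matrix
GFF=diag([1 2]);                % Cross-spectral matrix of two uncorrelated forces
Gyy=H*GFF*H';                   % Output cross-spectral matrix, Equation (15.8)
sort(real(eig(Gyy)),'descend')  % The third eigenvalue is (numerically) zero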

Figure 15.2 Principal components of the three signals whose PSDs were plotted in Figure 15.1. Two principal components remain at a high level, whereas the third is reduced by approximately 50–60 dB, indicating that only two sources were present.

The principal components in this case will yield the following equation:
[Gyy] = [U]⌈λ⌋[U]^H = [ {u}1  {u}2  {u}3 ] ⌈ λ1, λ2, 0 ⌋ [ ⌊u⌋*1 ; ⌊u⌋*2 ; ⌊u⌋*3 ],     (15.9)
where the last factor is the matrix with the complex conjugated eigenvectors ⌊u⌋*k as its rows.

First, we note that in Equation (15.9), the third eigenvalue is zero. This comes directly from comparing Equation (15.9) with Equation (15.8) and noting the fact that we only have two sources in the system, which means element (3,3) in [GFF] in Equation (15.8) is zero. From the plot of the eigenvalues as functions of frequency, it will thus immediately be clear that the number of sources is two, as was (approximately) the result plotted in Figure 15.2. The reason the third principal component in the figure is not equal to zero is that we added some uncorrelated extraneous noise to the acceleration signals to produce an example with a touch of reality. In general, the eigenvalues below the actual rank of the system studied will be much smaller than those due to actual sources. Another reason for the third principal component to rise is that the eigenvalue computation is sensitive to leakage, which can be seen especially around the resonances in Figure 15.2 (Otte, 1994). Principal components can also conveniently be used in multiple-input estimation of frequency response functions, by applying them to the input cross-spectral matrix of all forces. It is then easy to see if there are frequencies where one of the shakers is not contributing, by noting if one of the principal components is much lower than the remaining ones. The ideal situation in this case, as discussed in Chapter 14, is that all shakers force the structure at all frequencies; otherwise, the inversion of the input cross-spectral matrix, [Gxx], can be jeopardized, as described in Section 14.1.5.
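A minimal sketch of such a check, assuming the force cross-spectral matrix has been stored in a 3D variable Gff (frequency along the first dimension, as elsewhere in this chapter) and that a frequency axis is available in fv (both variable names are assumptions for the illustration):
[N,R,dum]=size(Gff);
for f=1:N
    PCf(f,:)=sort(real(eig(squeeze(Gff(f,:,:)))),'descend');
end
semilogy(fv,PCf)   % A principal component falling far below the others indicates
                   % a frequency where one of the shakers does not contribute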


15.1.2 Data Reduction
From Equation (15.9), it is obvious that each column in [U] is scaled by the corresponding eigenvalue to combine into a column in [Gyy]. The third column, then, does not contribute to [Gyy], since the third eigenvalue equals zero (although there will still be an eigenvector). This illustrates the second use of principal components: it is a means of compressing the size of a system into a minimum number of independent linear combinations. By throwing away the third column of [U] and the third eigenvalue in ⌈λ⌋, we do not lose any information. Thus, with high accuracy, we can approximate [Gyy] by the truncated equation
[Gyy] = [U]r ⌈λ⌋r [U]r^H = [ {u}1  {u}2 ] ⌈ λ1, λ2 ⌋ [ ⌊u⌋*1 ; ⌊u⌋*2 ],     (15.10)

where the index r denotes the truncation to the "approximate rank" of the eigenvalue matrix ⌈λ⌋. To illustrate this method, although remotely related to noise and vibration analysis, an image is probably the best example since it is possible to see the change by eye. A (digital) image consists of a number of picture elements, pixels, which can, of course, be put in a matrix. In most cases, this matrix will be rectangular, so the approach above is not directly applicable. However, instead of using an eigenvalue decomposition, in such cases the singular value decomposition (SVD) can be used, see Appendix E. In essence, the SVD decomposes any (M × N) matrix [A] into three matrices, [U], [S], and [V], so that
[A] = [U]⌈S⌋[V]^H,     (15.11)

where [U] is (M × M), ⌈S⌋ is diagonal and (M × N) and contains the singular values of [A] in descending order, and [V] is (N × N). We call the left-hand matrix, [U], the left singular vector matrix, and the right-hand matrix, [V], is called the right singular vector matrix. For a square matrix, the singular values in matrix ⌈S⌋ equal the eigenvalues (if they are positive; otherwise the singular values equal the absolute values of the eigenvalues). Also note that comparing Equation (15.11) with Equation (15.5) (reorganizing the latter with [Gxx] on the left-hand side) shows that the SVD of a (square and) symmetric matrix must yield equal matrices [U] and [V]. For a rectangular matrix, the singular values are the square roots of the eigenvalues of both [A][A]^H and [A]^H[A]. The columns of [U] are the eigenvectors of [A][A]^H, and the columns of [V] are the eigenvectors of [A]^H[A]. It is convenient, and also preferable from a computational standpoint, to use the SVD rather than the eigenvalue decomposition when computing principal components, so in practice it is often used for this purpose. To use the SVD for data reduction, we remove all singular values below a particular threshold, and the corresponding columns in [U] and [V]. In Figure 15.3, a picture of boats in a sunset, with 1920 × 2560 pixels and 256 gray scales (8 bits per pixel), is shown. A useful tool to see how much the picture can be compressed is to plot the singular values as in Figure 15.4. As can be seen in this figure, the singular values drop slowly after a rapid decrease in the first few values. This is an indication that the image in Figure 15.3 has a high rank. If the rank is low, there will be a significant drop in the singular values as the rank is exceeded.
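The equality between singular values and eigenvalues for this type of matrix is easy to verify numerically. The following minimal sketch (not from the book) builds a Hermitian, positive semidefinite matrix and compares the two decompositions:
X=randn(3,10)+1i*randn(3,10);
G=X*X';                         % Hermitian, positive semidefinite 3-by-3 matrix
sort(real(eig(G)),'descend')    % Eigenvalues sorted in descending order
svd(G)                          % Singular values; identical (within round-off)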


Figure 15.3 Original image with 1920 by 2560 pixels and 256 gray scales [Photo: Anders Brandt].

Figure 15.4 Singular values of the picture in Figure 15.3 plotted in descending order. Although not visible in this example, sometimes there is a knee in the plot which makes it easy to select a proper number of singular values to use, see Figure 15.7. In this example, the degradation of the picture is more gradual as the number of singular values is reduced.


Figure 15.5 The image in Figure 15.3 reduced to the 300 largest principal components by the method described in the text. As can be seen, some detail is lost, although the picture is of almost the same quality as the original. The amount of data is approximately 9% of the original [Photo: Anders Brandt].

There is therefore no obvious number of singular values to keep in the compression for our picture of the boats. In Figures 15.5 and 15.6, the same picture is shown reduced to the 300 and 100 largest principal components, respectively. The original picture in Figure 15.3 contains information corresponding to 1920 × 1920 + 1920 × 2560 + 2560 × 2560 "information units" in the sense of the SVD. The information in Figure 15.5 is condensed down to 1920 × 300 + 300 × 300 + 2560 × 300, which corresponds to approximately 9% of the original amount. In Figure 15.6, the information is condensed down to approximately 3% of the original amount, which in this case causes some blur in the picture. Look particularly at the bush in the upper left-hand corner, and the tree line against the sky. In Figure 15.7(a), an image with a limited number of gray scale levels and a simple, periodic pattern is shown. This image clearly has a low rank. As can be seen in the plot of the singular values in Figure 15.7(b), after the two highest singular values, the remaining values are very low. This illustrates the typical property of a singular value plot when the rank is limited. Data compression using the SVD is very common in modal analysis and is discussed in Section 16.5.4 and used in several of the modal parameter estimation methods presented in Chapter 16.
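The data reduction described above can be sketched in a few lines of MATLAB/Octave. The following is a minimal sketch (not the code used to produce the figures), assuming a gray-scale image has been stored as a double-precision matrix in the variable A; the variable name is an assumption:
r=300;                            % Number of singular values to keep
[U,S,V]=svd(A,'econ');            % Economy-size SVD of the image matrix
Ar=U(:,1:r)*S(1:r,1:r)*V(:,1:r)'; % Rank-r approximation, cf. Equation (15.11)
semilogy(diag(S))                 % Singular value plot of the type in Figure 15.4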


Figure 15.6 The image in Figure 15.3 reduced to the 100 largest principal components by the method described in the text. After this much reduction, in this case the picture looks blurred. The amount of data is approximately 3% of the original [Photo: Anders Brandt].

Figure 15.7 Illustration of the drop in the singular values when the image rank is exceeded. The image in (a) has a low rank due to the limited number of gray scales and the periodicity in the graphical image. This results in a significant drop in the singular values above the first three or four singular values. Note that the lines have slightly different gray level, which increases the rank. The image is 100 by 100 pixels.


15.2 Virtual Signals
We will now go back to our reference (input) signals and look at the column vector {X} of the spectra (one intermediate average) of the time signal {x(t)}. We now define virtual input signals, {x′(t)}, with spectra {X′}, as
{X′} = [U]^H {X}.     (15.12)

By postmultiplying Equation (15.12) by its Hermitian transpose and taking the expected value of each side of the equation, we obtain
⌈G′xx⌋ = E[{X′}{X′}^H] = E[[U]^H{X}{X}^H[U]] = [U]^H[Gxx][U],     (15.13)

where we used the relationship (AB)^H = B^H A^H. Equation (15.13) is equal to the principal component definition in Equation (15.5), and thus, the virtual signals we have defined in Equation (15.12) are signals corresponding to the principal components (i.e., their autospectral densities are the principal components). Since the spectral matrix of the virtual signals is diagonal, the virtual signals, {X′}, are apparently orthogonal (independent, uncorrelated). Although the virtual signals, like the conditioned signals in Chapter 14, cannot be measured, their autospectral densities (the principal components) can be computed from the input cross-spectral matrix [Gxx] according to Equation (15.13). It is useful to note that Equation (15.12) can also be rewritten as a MIMO system with the principal components being input signals, and the original signals with spectra {X} being the output signals. This follows directly from noting that [U]^H = [U]^(-1), since the matrix is unitary, as stated in Equation (15.3). Thus, multiplying Equation (15.12) by [U] leads to
{X} = [U]{X′},     (15.14)

which is a MIMO system with the linear system being [U] and the virtual signals being the input signals. This system is depicted in Figure 15.8. From Equation (15.14), it follows that a particular element U_pq in row p and column q of [U] can be interpreted as the frequency response between the input virtual signal x′q(t) and the measured signal xp(t).


Figure 15.8 Illustration of the linear relationship between each virtual signal in {X ′ } and the original signals in {X} being viewed as a MIMO system. The eigenvector matrix, [U], is then the linear system relating virtual signals to the original input signals.
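As a small numerical check of this relationship (a sketch, assuming the 2D input cross-spectral matrix at one frequency is stored in variable Gxxf), rotating the measured matrix by the singular vector matrix produces a (nearly) diagonal matrix with the principal components on its diagonal, in accordance with Equations (15.5) and (15.13):
[U1,S,U2]=svd(Gxxf);
Gxx_virt=U1'*Gxxf*U1   % Off-diagonal elements are close to zero
PC_f=diag(S)           % The diagonal of Gxx_virt equals these principal components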


15.2.1 Virtual Input Coherence

The concept of virtual signals is similar to the concept of conditioned signals in Chapter 14. In principle, the two methods both produce uncorrelated signals. The conditioned signals, however, become different if the order of the signals is varied, because the correlation is "gradually" removed from the second signal and on. Virtual signals remain the same regardless of the ordering of the original signals, since they are based on the eigenvalues, which do not change if the signals are rearranged. The first virtual signal corresponds to the highest eigenvalue, and the second corresponds to the second highest eigenvalue, etc. This is equivalent to ordering the virtual signals in order of their power at each frequency. A consequence of this is that a particular (measured) input signal does not, in general, correspond to the same virtual signal over the entire frequency range, but rather jumps between the virtual signals, depending on which of the various measured input signals are largest at each frequency. In this section, we will show how we can compute the coherence between each virtual signal and each measured signal. This virtual coherence then shows the correlation between a particular virtual signal and a particular measured input signal, which can often show which of the measured input signals is (mostly) correlated with a virtual signal, at a particular frequency of interest. Using Equation (15.14), the virtual input cross-spectral density matrix [Gxx′] can be calculated by
[Gxx′] = E[{X}{X′}^H] = E[{X}{X}^H[U]] = [Gxx][U],     (15.15)
where we again have used the relationship (AB)^H = B^H A^H. The virtual input cross-spectrum is useful because it allows us to define the virtual input coherence, which is defined as the ordinary coherence between a particular virtual input signal and a measured signal. Using the virtual signals, analogously with the ordinary coherence function, the virtual input coherence functions, g²pq, can be defined between any original signal xp in {X} and a virtual signal x′q by
g²pq = |G_{xp x′q}|² / (Gpp G′qq),     (15.16)

where, analogously with the notation in Chapter 14, we omit the x and x′ in the indices in the denominator because we are dealing with inputs. For each original signal, xp, there will be as many virtual input coherence functions as there are input signals, i.e., there will be Q virtual input coherence functions. Each virtual input coherence function tells how much of the power in Gpp comes from a linear relationship with the corresponding virtual signal. Thus, the virtual coherence functions can be used to understand to what extent a particular physical signal is related to a particular virtual signal, at each frequency. The individual virtual coherences are normally not fruitful to analyze. Otte (1994) suggested that cumulated virtual coherences should be used instead. These are computed as
g²_{xp:x′q!} = Σ_{k=1}^{q} g²pk,     (15.17)


and it is obvious that the cumulated virtual input coherences for each signal, xp, sum up to unity, i.e., the last one,
g²_{xp:x′Q!} = 1.     (15.18)

The cumulated virtual input coherence functions tell how much of Gpp comes from the first virtual signal, the first and second virtual signals, etc.
Example 15.2.1 To illustrate the concept of input virtual coherence with an example, we will use MATLAB/Octave to produce a 2-input/1-output system as depicted in Figure 14.2, where Hy1 and Hy2 are two SDOF systems with natural frequencies of 100 and 200 Hz, respectively. Each SDOF system is fed by a random signal, x1 and x2, respectively, and the SDOF output signals are summed up to an output signal y. In this example, we let the RMS level of x1 be 1.5 times the RMS level of x2, see Problem 15.4. To make the example more realistic, we also add some extraneous noise, n(t), to the output signal, y(t), with a signal-to-noise ratio of 40 dB, i.e., we let the RMS level of the extraneous noise be 0.01 times the RMS level of the output signal. We use an FFT blocksize of 2048 samples, and a sampling frequency of 2000 Hz, to have 10 times oversampling with respect to the highest natural frequency. Next, we compute the cumulated virtual input coherence functions for a case of two Gaussian noise signals x1 and x2 with 10% correlation (in the sense that the ordinary coherence between the two signals is approximately 0.1). We assume that we have already computed the input spectral matrix Gxx as a 3D matrix as discussed in Section 10.8.2. The following MATLAB/Octave code produces the 2D matrix PC, with principal components in columns, by using the singular value decomposition (SVD). It also computes the virtual input coherence functions in variable VCxx and the cumulated virtual input coherence functions in CVCxx.
[N,R,dum]=size(Gxx);
for f=1:N
    Gxxf=squeeze(Gxx(f,:,:));
    [U1,S,U2]=svd(Gxxf);
    PC(f,:)=diag(S);
    VGxxf=Gxxf*U1;             % Virtual cross-spectrum
    Gxxf=real(diag(Gxxf));     % Reduce to autospectra
    for x_sig=1:R
        for pc_sig=1:R
            VCxx(f,x_sig,pc_sig)=abs(VGxxf(x_sig,pc_sig)).^2 ...
                ./Gxxf(x_sig)./PC(f,pc_sig);
        end
    end
    for x_sig=1:R
        for pc_sig=1:R
            if pc_sig == 1
                CVCxx(f,x_sig,1)=VCxx(f,x_sig,1);
            else
                CVCxx(f,x_sig,pc_sig)=CVCxx(f,x_sig,pc_sig-1)+ ...
                    VCxx(f,x_sig,pc_sig);
            end
        end
    end
end

Figure 15.9 Figure from Example 15.2.1. In (a), the cumulated virtual input coherence functions g²11 (dashed) and g²12 for input signal x1. In (b), the cumulated virtual input coherence functions g²21 (dashed) and g²22 for input signal x2. The figure shows that the first input signal is strongly correlated with the first virtual input, whereas the second input signal is a little less correlated with, but still dominated by, the second virtual input. The lower correlation in the latter case is due to the fact that x2 is somewhat correlated with x1.

The cumulated virtual input coherence functions computed using this code are plotted in Figure 15.9. The figure shows that the main part of G11 (approximately 95%) comes from virtual signal x′1, whereas approximately 40% of G22 comes from x′1 and the remaining 60% comes from x′2. Thus, it is possible to some degree to couple the different virtual input signals to the respective measured input signals; however, care must be used, as we will discuss further in Example 15.2.2. End of example.
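To reproduce plots of the type in Figure 15.9 from the example above, a minimal plotting sketch could look as follows, assuming CVCxx from the example code and a frequency axis in the variable fv (an assumption):
subplot(1,2,1)
plot(fv,squeeze(CVCxx(:,1,:))); axis([0 400 0 1])
xlabel('Frequency [Hz]'); ylabel('Cum. Virt. Coh, x_1')
subplot(1,2,2)
plot(fv,squeeze(CVCxx(:,2,:))); axis([0 400 0 1])
xlabel('Frequency [Hz]'); ylabel('Cum. Virt. Coh, x_2')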

15.2.2 Virtual Input/Output Coherence

If we consider a multiple-input/multiple-output system with correlated inputs as in Chapter 14, using the concept of virtual signals, we can define virtual input/output cross-spectral densities. This can be illustrated as in Figure 15.10, where the virtual, uncorrelated signals are the inputs, and the output signals are yp. In such cases, the virtual cross-spectral densities of a particular output signal, yp, and each virtual signal, x′q, are of interest.

Figure 15.10 Illustration of the concept of virtual signals on an entire MIMO system.


The virtual input/output cross-spectral matrix elements are computed using the same principle as in Equation (15.15), which gives
[Gyx′] = E[{Y}{X′}^H] = E[{Y}{X}^H[U]] = [Gyx][U],     (15.19)
which, in the case of a single output, is a row vector, and in the case of multiple outputs, is a matrix with one row for each output. In many noise source identification cases, it is useful to use virtual coherence functions between each virtual signal and a particular output signal yp. Such virtual coherences are computed consistently with the virtual coherences in Equation (15.16), replacing the original input signals with the output signals, i.e., the virtual coherence between virtual signal x′q and the output signal yp is
g²_{yp x′q} = |G_{yp x′q}|² / (G′qq G_{yp yp}).     (15.20)

As for the input virtual coherences, it is often of more practical benefit to use the cumulated input/output virtual coherence, defined by
g²_{yp:x′q!} = Σ_{k=1}^{q} g²_{yp x′k}.     (15.21)

Since the cumulated input/output virtual coherence sums up all contributions in yp due to the virtual inputs, the last cumulated coherence, g²_{yp:x′Q!}, evidently equals the multiple coherence.

15.2.3 Virtual Coherent Output Power
In addition to the virtual coherence functions defined in Sections 15.2.1 and 15.2.2, virtual coherent output power spectra are very useful functions. They are defined similarly to partial coherent output power spectra for conditioned signals, as a coherence function multiplied by the output (target) spectrum. Thus, if we first consider the case of input virtual signals, for a particular measured input signal, xq, we can define the virtual coherent output power of xq with virtual signal x′i, which we denote G_{qq:x′i}, as
G_{qq:x′i} = g²_{xq x′i} Gqq.     (15.22)

By using the virtual coherent output power spectra in comparison with the power spectral density Gqq, the individual contributions of each virtual input to xq can be plotted. For an output signal, yp, the virtual coherent output power with each virtual input signal x′q can similarly be defined as
G_{yp yp:x′q} = g²_{yp x′q} G_{yp yp}.     (15.23)

We will illustrate the use of virtual input/output coherence and virtual coherent output power by an example.


Example 15.2.2 We continue Example 15.2.1 by now looking at input/output relations. This example will show the importance of using virtual (or conditioned, see discussion below) signals instead of general MIMO techniques when the input signals are correlated. We assume that the input spectral matrix [Gxx], in this case 1025 × 2 × 2, the input/output cross-spectral matrix, [Gyx], in this case 1025 × 1 × 2, and the output autospectrum Gyy, in this case 1025 × 1, are all computed using the procedures from Chapter 9. They are stored in variables Gxx, Gyx, and Gyy, respectively, in MATLAB/Octave. Next, the eigenvalues and eigenvectors of the input cross-spectral matrix in Gxx are computed at each frequency, and the virtual input/output cross-spectral matrix, the cumulated virtual input/output coherence functions, and the virtual coherent output power are all computed using the following MATLAB/Octave code.
[Nf,R,dum]=size(Gxx);          % Find the size of Gxx
for f=1:Nf
    Gxxf=squeeze(Gxx(f,:,:));
    Gyxf=Gyx(f,:);
    Gyyf=Gyy(f,:);
    [U1,S,U2]=svd(Gxxf);
    PC(f,:)=diag(S);
    Gxx_p=Gxxf*U1;             % Virtual cross-spectrum
    Gxxf=diag(Gxxf);           % Reduce to autospectra
    VGyx(f,:)=Gyxf*U1;         % Virtual in/out cross-spectrum
    S=diag(S);                 % Reduce to vector
    TC=[];
    for pc_sig=1:R
        TC=[TC abs(VGyx(f,pc_sig))^2/(Gyyf*S(pc_sig))];
    end
    VC(f,:)=TC;
end
% Produce cumulated coherence and virt. coherent spectra
for r = 1:R
    if r == 1
        CVCyx(:,r)=VC(:,1);
    else
        CVCyx(:,r)=CVCyx(:,r-1)+VC(:,r);
    end
    VPyyx(:,r)=Gyy.*VC(:,r);   % Virt. coherent output power
end
With the natural frequencies of the two SDOF systems separated, we can anticipate that one of the virtual coherence functions will dominate around the natural frequency of the first SDOF system and the other virtual coherence will dominate around the natural frequency of the second SDOF system. Figure 15.11 shows the results of the simulation with 10% correlation between the inputs; the results of the code above are shown in (a) and (b). In Figure 15.11(a), the input/output virtual coherence functions and the multiple coherence function (also equal to the second cumulated virtual coherence) are plotted.

Figure 15.11 Figure for Example 15.2.2. In (a), the virtual input/output coherence functions g²_{yx′1} (dashed) and g²_{yx′2} (dotted) are plotted together with the cumulated input/output coherence g²_{y:2!} (solid), which equals the multiple coherence. In (b), the virtual coherent output power spectra G_{y:x′1} (dashed) and G_{y:x′2} (dotted), the measured output autopower spectrum Gyy (solid), and the remaining uncorrelated spectrum (or error), Gnn (dash-dotted). In (c), the true autospectral densities G_{u1u1} (dashed), G_{u2u2} (dotted), Gyy (solid), and Gnn (dash-dotted). In (d), the same spectra as in (c) but estimated using a MIMO estimation, see text for details.

As seen in the figure, the first virtual input/output coherence function, g²_{y,x′1}, is dominating at low frequencies, past the first natural frequency, and at high frequencies above approximately 250 Hz. This is an indication that the output, y, is caused mostly by the first virtual signal, x′1. In the intermediate region around the natural frequency of the second SDOF system, the second virtual input/output coherence function, g²_{y,x′2}, is dominating, indicating that the output is here dominated by the second virtual input, x′2, which is what we expect from the setup. In Figure 15.11(b), the resulting virtual coherent output power spectra are shown. What we can see here is that the PSD of the output signal, Gyy, is composed of the two virtual coherent spectra. The error, dash-dotted in the figure, is below approximately 0.01, which means that the sum of the two virtual coherent spectra adds up to the measured output spectrum. Thus, using the virtual coherent output spectra, we can tell, at each frequency, how much comes from x′1 and x′2, respectively.


In Figure 15.11(c), the autospectra of each of the signals from the simulation are shown. Most of these spectra we know only because we have simulated the system; we could never measure them (except Gyy). This plot shows the importance of using uncorrelated signals to add up to the output spectrum. A careful examination of the frequency region, particularly above 200 Hz, shows that the sum G_{u1u1} + G_{u2u2} + Gnn ≠ Gyy. This is because the signals u1 and u2 are correlated, in accordance with Equation (13.18), which showed that for correlated signals, we cannot add up the autospectra to produce the autospectrum of the sum. This is the main motivation for using virtual, or conditioned, signals. In Figure 15.11(d), we have included an "experimental" version of the results in Figure 15.11(c), in that we have produced each of the output autospectra as Ĝ_{u1u1} = |Hy1|²G11 and Ĝ_{u2u2} = |Hy2|²G22, where Hy1 and Hy2 were solved by a MIMO solution. Note that the multiple coherence in Figure 15.11(a) is approximately unity, and the ordinary coherence between the two input signals was low, so the MIMO system can be solved with good precision. However, the sum of the estimated coherent output spectra using this approach does not add up to the measured output signal, so this method cannot be used when we have correlated inputs. We should make one more observation from Figure 15.11(b). Looking at the error function, which should equal the uncorrelated spectral density Gnn shown in Figure 15.11(c), there is obviously some problem around the natural frequencies of both SDOF systems. This is due to the leakage sensitivity of the SVD, which was mentioned in Section 15.1.1. A main conclusion from this is that great caution should be used when trying to estimate residual spectra, as the spectrum Gnn is really the error in the analysis, and not necessarily only the spectrum of the uncorrelated signal, n(t). Finally, we should make a special note regarding the coherence estimates. When the coherence (of any type) is low, there is a large random error in the estimate, as is obvious from Figure 15.11. This error propagates through to some of the coherent spectra, as seen in Figure 15.11(b). End of example.
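As a quick sanity check on the example above (a sketch assuming the variables Gxx, Gyx, Gyy, Nf, and CVCyx from the example code), the last cumulated virtual input/output coherence can be compared with the multiple coherence computed directly:
for f=1:Nf
    Gxxf=squeeze(Gxx(f,:,:));
    gmult(f,1)=real(Gyx(f,:)*(Gxxf\Gyx(f,:)'))/Gyy(f);  % Multiple coherence
end
plot(gmult); hold on
plot(CVCyx(:,end),'--'); hold off  % The two curves should coincide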

15.3 Noise Source Identification (NSI)
Noise source identification is a common application of virtual and conditioned signals in situations where there are at least two random, potential sources of perceived noise. A typical example from the automotive industry is road noise and wind noise, which are both random and both contribute to the perceived sound in a car.

15.3.1 Multiple Source Example

We will now apply the techniques described in the preceding part of this chapter to an experimental setup, illustrated in Figure 15.12. This setup consists of two electrical noise sources connected to a speaker and an aluminum plate, respectively, both of which radiate sound. A microphone ("Mic. 1") is placed in front of the speaker to pick up the speaker source, and an accelerometer ("Acc. 1") is placed on the plate to pick up the plate source. A response microphone ("Mic. 2") is placed in the room at a position where we want to be able to tell from which source the sound comes at each frequency.


Figure 15.12 Measurement setup for noise source identification example. Two noise sources generating approximately white, Gaussian noise are fed to a speaker and (through a shaker) an aluminum plate, respectively. At some frequencies, the plate will emit noise which will mix with the noise from the speaker. The noise from the speaker can be assumed not to be totally flat, as the speaker is not ideal. A reference accelerometer was used on the plate to pick up the noise correlated with what was emitted by the plate, one microphone was used to record the speaker noise, and one microphone was used as the response pickup.

The first thing we want to do is to find the number of independent sources in the room, so that we know how many reference signals we need to use for the virtual signals. The spectral densities of the voltage signals from the two microphones and the accelerometer are plotted in Figure 15.13(a). As can be seen in the figure, the three signals have approximately similar spectral density levels. The reason we use the voltage levels is that we want the three signals scaled to approximately the same levels for the principal component calculation. The principal components of the three signals are then computed using the complete input cross-spectral matrix of the three signals, i.e., in addition to the three autospectral densities in Figure 15.13, also the cross-spectral densities between each pair of sensors were calculated. The resulting principal components are plotted in Figure 15.13(b). This figure shows that the two highest principal components are at least 10 dB higher than the third principal component, and more at frequencies above 2000 Hz. The interpretation of this is that there were two dominating sources in the room, although the dynamic range is not very large. Indeed, at some frequencies, for example, approximately 900 and 1800 Hz, there are some narrowband peaks, most visible in the response microphone. These are tones from a fan located in the room, but it will not be possible to analyze these tones as we do not have any reference related to them. We will now use the principal components and virtual cross-spectra or virtual coherence functions to rank the sources at each frequency. The cumulated virtual input/output coherence functions are plotted in Figure 15.14. The figure shows that between 70% and close to 100% of Gyy can be explained by a linear relationship with one or both of the virtual signals, which indicates that there were mainly two sources during the measurements, as is expected.


Figure 15.13 In (a), autospectral densities of the voltage signals from the two microphones and the accelerometer in Figure 15.12. The spectral densities have approximately equal levels, with local variations due to the sound field in the room and the plate dynamics. In (b), principal components of the three signals in (a). Two principal components remain at a high level, whereas the third is reduced by approximately 10–20 dB, indicating that two sources were present.


Figure 15.14 Virtual input/output coherence functions. The functions show that (i) more than 70% of Gyy can be explained by the two uncorrelated sources, except mainly in the frequency range 400 to 700 Hz, where another (unmeasured) source is dominating; and (ii) the two uncorrelated sources dominate at different frequencies, which can be seen where the first cumulated virtual coherence drops while the second cumulated coherence is still high, which means the second (not cumulated) virtual coherence is high at those frequencies. Also, note that the second cumulated coherence in this case with two virtual signals equals the multiple coherence.


Figure 15.15 Virtual coherent output spectra and the spectral density of the response microphone signal, Gyy . From the figure, it is clear that at most frequencies the output signal is closely described by the two independent virtual signals, i.e., the sound in the response microphone is indeed produced by one or both of the reference signals, the speaker and the plate.

This is not true, however, in the frequency range from approximately 400–700 Hz, where there is apparently another source contributing considerably to the sound in the response microphone. In Figure 15.15, the cumulated virtual coherent output spectra are plotted together with the spectral density of the response microphone, Gyy. This provides the same information as the virtual coherence functions in Figure 15.14, but this type of plot is often preferred over the former, since it shows directly the division of Gyy into the various contributions. Although difficult to see in print, the second cumulated coherent output spectrum is very close to the measured signal at most frequencies, which can also be interpreted from Figure 15.14. Note the important difference in reading the plots, however, between the linear scale in Figure 15.14 and the logarithmic scale in Figure 15.15. The final step may be to find which of the two sources, the plate or the speaker, is contributing to the response sound ("Mic. 2") at a particular frequency. To do this, we need to compute the virtual input coherence functions between each principal component and each input signal ("Acc. 1" and "Mic. 1," respectively). These functions are plotted in Figure 15.16. To find which of the two actual sources (plate or speaker) is dominating the response sound in "Mic. 2" at a particular frequency, we first go to Figure 15.15 and look at the frequency range of interest. As an example, we take the frequency range between approximately 1300 and 1500 Hz. In Figure 15.15, we find that in this frequency range the second virtual signal dominates the response signal (because the dotted line, which is the first virtual coherent spectrum, drops). Then from Figure 15.16(b), we find that in the same frequency range, it is the second measured signal which is correlated with the second virtual signal (the first virtual signal is plotted as dotted, and the sum of the first and second as solid). The speaker is consequently the dominating source between 1300 and 1500 Hz.


Figure 15.16 Cumulated virtual input coherence functions. In (a), the cumulated virtual coherence functions for measured signal x1, the plate acceleration, and in (b), the cumulated virtual coherence functions for measured signal x2, the speaker microphone. From the plots we can conclude that, e.g., in the frequency range from approx. 1300 to 1500 Hz, x1 is highly correlated with virtual signal x′1 and x2 with virtual signal x′2. Because the SVD at each frequency sorts the principal components according to the power of the measured signals, the relationship between the measured and the virtual signals swaps whenever one of the measured signals becomes higher than the other after having been smaller. Therefore, at each frequency of interest, this plot has to be investigated before concluding which of the measured signals actually correlates with a particular virtual signal.


15.3.2 Automotive Example

The correlation techniques discussed in this chapter have been used successfully in the automotive industry for many years. As an example of an application of virtual signals, we will conclude this chapter with an example of structure-borne road noise analysis from a modern car. The response sensor in this case was a microphone in the position of the driver's left ear, measured using a standard measurement microphone. The inputs of interest were the front and rear wheel axles, as they produce structure-borne noise which is uncorrelated to some extent. This is usually ascribed to the fact that the left and right tires experience different road surfaces.


Figure 15.17 Sound pressure in the driver’s ear from an analysis of a modern car. Virtual coherent output power spectra from front axle (dash-dotted), rear axle (dotted), sum of those two (solid), and total noise spectrum (solid and highest at all frequencies). The example shows that structure borne sound dominates the driver’s perception up to approximately 500 Hz and that the two uncorrelated sources contribute to most of the sound (within approx. 5 dB) in this frequency range. Above 500 Hz other sources contribute by more than 10 dB. Note that the y-scale is in dB but not in sound pressure level as the curves are PSDs [Courtesy of SAAB Automobile AB].

To measure the part of the response microphone signal coherent with each of these axles, accelerometers were mounted near the center of each wheel spindle. An analysis using principal components resulted in the virtual coherent output spectral densities shown in Figure 15.17. In this plot, where the spectral densities have been A-weighted (see Section 3.3.6), it can be seen that up to approximately 500 Hz, the sound in the driver's ear is dominated by the structure-borne noise. Above 500 Hz, other sources contribute significantly to the sound in the driver's ear. A careful examination of the plot also reveals that the noise is mostly coherent with the front axle.

15.4 Chapter Summary
Principal components can be seen as the autospectral densities of virtual, uncorrelated signals. To compute the principal components, the eigenvalues of the input cross-spectral matrix of all involved signals are computed at each frequency, and the eigenvalues are ordered from largest to smallest. The largest eigenvalue at each frequency is the first principal component, the second largest eigenvalue is the second principal component, etc.


The eigenvalue decomposition implies that the measured signal with the highest power at a particular frequency dominates the principal component at that frequency. Thus, the principal component approach differs from the approach with conditioned signals described in Chapter 14, because the latter type of orthogonal signals is ordered manually by the user. This means that the conditioned spectra and coherence functions become different with different ordering of the signals. For principal components, the order of the input signals does not affect the result. There is also a numerical stability advantage, as the SVD is very accurate also in cases with severe extraneous noise. The principal component approach is therefore often preferred over the conditioned signal approach. The virtual signals, x′i, cannot be measured or estimated from measured data, just as we could not measure or estimate the conditioned signals. However, we can estimate virtual cross-spectral densities between any virtual signal (as input) and any measured input signal, xq, or between any virtual signal and an output signal, yp. Because we already have the autospectral densities of the virtual signals, the principal components, we can therefore estimate virtual coherence functions between the virtual signals and any input or output signal. These virtual coherence functions are simply ordinary coherence functions between a virtual signal and a measured signal. The purpose of computing the virtual coherence functions is usually to split the power of an output signal (a "target sensor") into the contribution from each virtual signal, which can be done because the virtual signals are uncorrelated. Several examples of this were given in this chapter. We also presented an application in this chapter of how to use the SVD for data compression, where a matrix (illustrated by an image) was condensed to a smaller size while retaining essentially the same information.

15.5 Problems
Many of the following problems are supported by the accompanying ABRAVIBE toolbox for MATLAB/Octave and further examples which can be downloaded with the toolbox. If you have not already done so, please read Section 1.6. ABRAVIBE is completely free and can be downloaded from www.abravibe.com, together with example files and other material complementing this book, including a solutions manual for all book problems.
Problem 15.1 Write a MATLAB/Octave script which creates three Gaussian signals using the randn command, with 102 400 samples each. Compute the cross-spectral matrix of the three signals, using a Hanning window of length 1024 samples. Then compute the principal components by using the eig and the svd commands. Compare the computed principal components and see if there are any differences. Also, clock the computation time using each of the commands for computing the principal components to see if there is any significant difference. Hint: use tic and toc to clock the computation time, see MATLAB/Octave help.
Problem 15.2 Calculate the principal components and virtual input/output coherence functions of two Gaussian signals with equal RMS level which each pass a linear system


with unity frequency response to produce an output signal. (That is, the output signal is simply the sum of the two random input signals). Use the parameters for sampling frequency, etc., and appropriate code from Example 15.2.2. Plot the principal components and the virtual input/output coherence functions. Do they look OK? Try to explain why. Hint: Think about what makes the principal components separate at each frequency, and see the discussion in Example 15.2.2.

References
Hotelling, H. (1933). Analysis of a complex of statistical variables into principal components. Journal of Educational Psychology 24, 417–441, 498–520.
Otnes, R.K. and Enochson, L. (1972). Digital Time Series Analysis. Wiley Interscience.
Otte, D. (1994). Development and evaluation of singular value analysis methodologies for studying multivariate noise and vibration problems. PhD thesis, Catholic University Leuven, Belgium.
Otte, D., de Ponseele, P.V. and Leuridan, J. (1990). Operational deflection shapes in multisource environments. Proceedings of 8th International Modal Analysis Conference, Kissimmee, FL.
Strang, G. (2005). Linear Algebra and its Applications, 4th edn. Brooks Cole, San Diego.
Tucker, S. and Vold, H. (1990). On principal response analysis. Proceedings of ASELAB Conference, Paris, France.
Vold, H. (1986). Estimation of operating shapes and homogeneous constraints with coherent background noise. Proceedings of ISMA 1986, International Conference on Noise and Vibration Engineering, Catholic University, Leuven, Belgium.


16 Experimental Modal Analysis Experimental modal analysis (EMA) is a name for the technique of measuring frequency response functions (FRFs) and then using these FRFs to extract the modal parameters of the system (structure), namely the natural frequencies, damping ratios, mode shapes, and modal scaling constants. Several textbooks dedicated to EMA are available, for example, Heylen et al. (1997), Ewins (2000b), Maia and Silva (2003), and Avitabile (2017). EMA is a very common application in vibration engineering, where it is sometimes used for troubleshooting vibration problems, when it is suspected that the problem is related to a mode (often first found by an operating deflection shape, ODS, analysis, see Section 19.6). The main result of the EMA in this case is to find approximate natural frequencies and mode shapes in order to assess some design change which will move the problem mode away from the problem frequency. In such cases, EMA can be applied by using less-rigorous measurements, as the precision of the modal parameters is of less concern. Sometimes, an EMA analysis can also be done to obtain a modal model that can be used for analytical purposes such as structural modification, see any of the suggested textbooks above. EMA can also be made with the sole purpose of finding the damping ratios to use in analytical simulations, as damping is typically impossible to compute analytically at the moment. The most common reason to use EMA, however, is for validation of computational models, most commonly finite element models (FEMs). This puts high demand on the accuracy of the experimental modal model and requires great skills of the engineers involved. In this chapter, we will discuss how to conduct EMA successfully, including both the measurement procedure and the modal parameter extraction (MPE). We will also explain the most common MPE techniques used in this field of application. Since the techniques for estimating modal parameters for EMA are the same as those used for operational modal analysis, OMA, all discussions about modal parameter estimation methods will be found in this chapter. Discussions of the use of these methods for OMA, however, are found in Chapter 17.

16.1 Introduction to Experimental Modal Analysis The idea of EMA is to identify the modal parameters, i.e., the natural frequencies, fr , the damping ratios, 𝜁r , the mode shapes, [𝜓]r , and the modal scaling factors Qr , from Noise and Vibration Analysis: Signal Analysis and Experimental Procedures, Second Edition. Anders Brandt. © 2023 John Wiley & Sons Ltd. Published 2023 by John Wiley & Sons Ltd.


measurements of frequency responses, FRFs. This is based on the theory we discussed in Chapter 6, summarized in Equation (6.122), repeated here for convenience
[H(jω)] = Σ_{r=1}^{N} ( [A]r/(jω − sr) + [A*]r/(jω − sr*) ),     (16.1)
where the poles, sr, include the undamped natural frequency (in rad/s), ωr, and the relative damping factor, ζr, by sr = −ζr ωr + jωr √(1 − ζr²). The residue matrix, [A]r, for mode r, is composed of the modal scaling constant and the mode shape vectors by
[A]r = Qr {ψ}r {ψ}r^T.     (16.2)

It is evident from these equations that in order to identify the poles, sr, we need at least a single FRF, provided that neither of the two degrees of freedom (DOFs) of this FRF (excitation and response DOFs) is on a node line for any of the modes of interest. To obtain the entire mode shape vectors, however, we need to measure at least an entire row (fixed response DOF) or column (fixed excitation DOF) of the FRF matrix [H(jω)]. If we have closely spaced modes, we typically need more than one row or column in order to be able to separate the mode shapes. We will discuss the mathematical basis for EMA in more detail in later sections, but first we will discuss the practical aspects of performing an EMA test.
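As a small illustration of Equation (16.1), the following MATLAB/Octave sketch synthesizes a receptance FRF for a single mode; the parameter values are hypothetical and the residue scaling assumes unity modal mass:
fr=100; zr=0.02;                        % Natural frequency [Hz] and damping ratio
wr=2*pi*fr;
sr=-zr*wr+1i*wr*sqrt(1-zr^2);           % Pole of the mode
Ar=1/(2i*wr*sqrt(1-zr^2));              % Residue for unity modal mass (assumed)
f=(0:0.25:200)';
jw=1i*2*pi*f;
H=Ar./(jw-sr)+conj(Ar)./(jw-conj(sr));  % Equation (16.1) with N = 1
semilogy(f,abs(H)); xlabel('Frequency [Hz]')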

16.1.1 Main Steps in EMA
EMA is an application where the engineering experience and skills of the engineer are put to the test; some people refer to it as an art form. The degree of difficulty depends on the purpose of the test:



For troubleshooting purposes, it is often sufficient to obtain rough estimates of the natural frequencies and mode shapes in order to evaluate possible design changes. In this case, it may be sufficient to use impact excitation, and in many cases, the structure of interest can be measured in its normal position, i.e., with the boundary conditions it has in operation. Furthermore, for the parameter extraction, we can use simple methods such as single degree-of-freedom (SDOF) methods (see Section 16.4). If the modal parameters are going to be used for validating an analytical model, the quality of the EMA model must be much more accurate. In most cases, this means that the structure has to be suspended with free–free boundary conditions for two main reasons: (i) as this produces measurements of the highest quality, and (ii) because this is an easy boundary condition to implement in the analytical model so that model and experiment share the same boundary conditions (rather than having to know what the boundary conditions of the finite element (FE) model are in some other configuration of the structure under test). In many cases, but certainly not always, shakers are used for excitation, as it is often possible to estimate FRFs of higher quality by this technique. More sophisticated MPE methods are usually used in this case, which assure the best possible estimates of the extracted parameters. In recent years, multiple-reference impact testing with improved analysis opportunities, as we present in Section 19.7, has proven to give high-quality EMA results.

16.2 Experimental Setup

Whereas there are many pitfalls in applying EMA, it should be noted that in most cases the key to success lies in making FRF measurements of good quality. Following the procedures described in Chapters 13 and 14, combined with the many tips in the present chapter, will ensure this, if enough care is taken. When the measured FRFs are of high quality, the MPE is often a rather simple process. You should also read the practical examples in Section 19.7 where you will find some examples of bad and good data. An EMA test can be decomposed into the following main steps: 1. Selection of test method. Impact or shaker excitation, and in the former case, of fixed force or fixed response locations. In the latter case, single or multiple shaker excitation should be considered. 2. Selection of reference DOFs, which are those DOFs that are measured for every batch, and response DOFs, which are those DOFs that are measured for each batch, if batch measurements are used (see Section 16.2.5). Otherwise, we use the term reference for the small dimension of the FRF matrix (the number of rows or columns measured). 3. Selection of test structure suspension, usually free–free support if good-quality results are necessary. 4. Preliminary tests to find optimal measurement settings. 5. Checks to verify correct suspension, shaker attachment, etc. 6. Data acquisition of all FRFs, sometimes in several batches (see 2. above), if all responses cannot be measured simultaneously due to insufficient supply of sensors or measurement channels. 7. Data quality analysis to verify that acquired data are good enough for good parameter extraction. 8. Modal parameter extraction. 9. Validation of obtained modal parameters. These points will be further discussed in the following.

16.2 Experimental Setup EMA is often said to be 80% measurements (including the planning, installation of sensors, etc., test of measurement settings, and the actual data acquisition) and 20% parameter extraction. The main reason we need to have good measurements, is that, as we explained in Chapters 13 and 14, the errors occurring when FRF measurements are badly measured are bias errors. There is no algorithm that can remove the effect of those errors in the parameter estimation, as only random errors can be averaged away. Therefore, this section on how to set up and check your measurements is the most important section in this chapter.

16.2.1

Points and DOFs

In the following, we will need some terms to specify the measurement degrees of freedom, etc. We will use the term node for a particular location on the structure. Each node is defined by its x-, y-, and z-coordinates in a Cartesian coordinate system, or sometimes in cylindrical

427

428

16 Experimental Modal Analysis

or spherical coordinate systems if the shape of the structure makes such a definition easier. At each node, we can measure the three translatory directions, x, y, and z. The combination of a node and a direction we call a DOF. Of course, we also have rotational DOF at each node. These are, however, very difficult to measure due to lack of good sensors for rotations, so rotational DOFs are usually not included in EMA. In most software for EMA, the nodes are simply numbered by an integer number, and the directions are usually entered in the software as, for example, X+, Y+, Z+. If a sensor is mounted in a negative direction, it is instead entered as X−, Y−, etc. On many structures, it is difficult to mount accelerometers in Cartesian coordinates, as many surfaces are tilted. Some software for EMA allows to enter a rotation angle of each sensor, and the software then translates the mode shape coefficients into proper directions in the animation software. It is often, however, difficult to accurately measure angles of the sensors, and the procedure is prone to human mistakes when entering angles into the software. It is therefore often recommended instead to manufacture small wedges with different angles that can be glued onto the structure to produce surfaces on which the accelerometers can be mounted in x, y, and z directions. Such wedges can be easily carved using balsa tree, which is usually stiff enough for the frequency ranges of most EMA applications. In other cases, wedges may be produced in aluminum, or 3D-printed out of polymer. As stated above, we define the term references for those sensors that are fixed during the measurement, if we use batch acquisition. If we use a shaker test or impact testing with fixed-impact positions, the references are the force locations, and if we use impact testing with roving hammer, the references are the accelerometer locations. For the other sensors, we will use the term responses. It should be noted that this is a confusing term in the case of impact testing with roving hammer, where the term “response” actually refers to force. It is, however, a classical terminology used in EMA, so I choose to keep it rather than inventing a new term. If we measure all forces and responses simultaneously, then the term reference denotes the smallest number of the number of forces and the number of responses.

16.2.2 Selecting Measurement DOFs

Selecting which reference and response DOFs to use is an important consideration. If the purpose of the EMA is to validate an FE model, results from the finite element analysis (FEA) can often be used to find suitable measurement DOFs. How the selection of measurement DOFs is made depends on the use of the final modal model. If the purpose of the modal test is to troubleshoot some vibration problem, the DOFs can usually be determined by experience, with some thought of what the mode shapes will look like. Approximately three points per "wavelength" of the mode shapes are usually sufficient for this purpose. If the purpose of the EMA is to validate a computational model (for example, an FE model), there are usually higher demands on the selection of DOFs. One reason for this is that, in order to be sure that the experimental modes are correctly paired with the modes of the model, the off-diagonal values of the cross-modal assurance criterion (MAC) matrix (see Section 16.10.2) should be close to zero. Therefore, it is recommended, if possible, to extract DOFs from the FE model, compute an auto-MAC of the mode shapes defined by the selected DOFs, and make sure the off-diagonal values are close to zero. Several methods exist to extract (sub)optimal DOFs from FE mode shapes, see e.g., Kammer (1991) and Linderholt and Abrahamsson (2005).

Once the measurement DOFs are selected, the next issue is to determine which of those DOFs will be suitable reference DOFs. Reference DOFs should be chosen so that all modes of interest are well described by at least one of the references. Although, theoretically, known FE mode shapes could be used to synthesize FRFs and some optimization criterion could be used to find the optimal reference locations, in many cases there are not that many suitable reference locations (read: excitation locations for model validation EMA, where we typically use shakers). Therefore, a more down-to-earth methodology is often adopted instead, for example, as described by Ewins (2000a):



● A convenient excitation point should be selected and a point FRF measured.
● A second excitation point should then be selected and a new point FRF measured.
● The resonance frequencies evident on the two plots must then be compared to establish whether there are any present in one plot and absent from the other.
● The process of selecting and checking further excitation points for the discovery of additional resonances should continue until the user is satisfied that all modes have been identified.

A commonly overlooked possibility is to choose reference DOFs that are skewed relative to the Cartesian coordinate system used for the response DOFs. For many right-angled structures, there is relatively little coupling between the three Cartesian DOFs. By choosing a skewed DOF, which contains components in several of the three directions, it is ensured that the reference data include modes in several directions. This can often significantly improve the parameter extraction result, and the only thing needed in order to get the modal scaling correct is that the skewed driving point FRF (excitation and response in the reference point) is measured. If a shaker is attached in a skewed direction, an accelerometer should be attached in the same direction, and this DOF should be given a unique DOF number. It does not need to be included in the animation, if the software does not allow skewed directions, but it can be used for mode shape scaling as described in Section 16.9. Once the measurement DOFs are determined, a geometry description should be created by numbering the nodes and measuring their locations (usually as x, y, and z in a rectangular coordinate system, or corresponding coordinates for a cylindrical or spherical coordinate system in case of a cylindrical or spherical test structure). The best practice is to enter this geometry description into the software before instrumenting the structure, to avoid mistakes. Having a picture of the geometry also helps during the acquisition of the data, to ensure that the correct DOF information is allocated to each measurement channel.
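As an illustration of the auto-MAC check described above, the following sketch (assuming a matrix of FE mode shapes reduced to a candidate set of measurement DOFs; the names phi, auto_mac, and candidate_dofs are illustrative, not tied to any particular software) computes the MAC matrix and flags mode pairs that the candidate DOF set cannot separate:

import numpy as np

def auto_mac(phi):
    """Auto-MAC of a mode shape matrix phi with shape (n_dofs, n_modes)."""
    num = np.abs(phi.conj().T @ phi) ** 2               # |psi_i^H psi_j|^2
    norms = np.real(np.sum(np.abs(phi) ** 2, axis=0))   # psi_i^H psi_i
    return num / np.outer(norms, norms)

# Example use: reduce FE mode shapes to the candidate DOFs and check the
# off-diagonal values (a threshold of, say, 0.2 is an illustrative choice).
# mac = auto_mac(phi_fe[candidate_dofs, :])
# bad_pairs = np.argwhere(np.triu(mac, k=1) > 0.2)

If any off-diagonal value is large, the candidate DOF set cannot distinguish the corresponding mode pair, and DOFs should be added or moved.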

16.2.3 Measurement System

The measurement system (data acquisition system) should be a dedicated system for vibration analysis. It is also desirable to have support for the bookkeeping part of the data acquisition. It is essential in modal testing that no mistakes are made regarding which DOF is measured on which channel, in each measurement. Without such support in the measurement software, this becomes very difficult. While we are on this point, let us stress once and for all: good EMA results are only obtained if you take it easy, do not stress, and are very, very cautious in every step taken. Most of all: keep good documentation practices so that you know everything that happened during the test.

16.2.4 Sensor Considerations

To ensure data (FRFs) of the best possible quality, it is important to use the right equipment. Although other sensors exist, by far the most common sensors for EMA today are integrated electronics piezoelectric (IEPE) force sensors and accelerometers. The dynamic range of IEPE sensors is a factor potentially limiting the quality of FRF measurements, so it must be ensured that the signal-to-noise ratio (SNR) is as good as possible. Since IEPE sensors have a full-scale voltage of 5 V, a good indicator is that the maximum voltage from the sensors is above, say, 0.5 V. For shaker tests, accelerometers should usually have a sensitivity close to 1 V/g or 500 mV/g, whereas for impact testing, 100 mV/g sensors are usually recommended. These numbers are, however, only guidelines; you should always check the voltage you get from your sensors, and you should have accelerometers with different sensitivities available. The same is, of course, equally important for the force sensors. For the same reason, you may find it necessary to have a few impact hammers with different sensitivities (and also with different weights), if you do not always measure similar objects. Finally, it is very practical to use transducer electronic data sheet (TEDS) sensors (see Chapter 7), and data acquisition hardware that supports them, as this eliminates a common source of human mistakes when entering all sensor information into the measurement system software.

The most common alternative sensors are laser Doppler vibrometers, LDVs, for measuring responses, most commonly scanning LDVs. These provide the advantage of no mass loading, although since LDVs can measure only one DOF at a time, measurement times can get long for large DOF counts. Also, using a scanning LDV requires shaker excitation, and the dynamic range is not as good as with IEPE accelerometers. Yet, in some applications, the contactless measurement is a great advantage, or even necessary.

16.2.5 Data Acquisition Strategies

An important part of planning an EMA test is deciding how to conduct the measurement of all FRFs. There are two different choices: first, whether impact testing or shaker testing is preferred; and second, whether to measure batches of response DOFs or all DOFs at once. For the first choice, in most cases top-quality results require shaker testing. For most troubleshooting cases, it can be quite satisfactory to use impact testing. Impact testing may also be advantageous for some dynamically very flexible structures, where the effect of loading the excitation point with the force sensor and shaker setup may be detrimental. In such cases, however, shaker excitation may still be used together with an impedance head, correcting for the mass above the force gauge as described in Section 7.6.

The second choice is often dictated by the lab budget, but it should be made clear that, for the best possible results, it is often necessary to acquire all responses simultaneously. The reason for this is that when using roving accelerometers, you have to interact with the structure between each batch, which may produce some change to the structure (for example, by changing the suspension between the measurements), and you are also more likely to encounter mass loading (also see Section 16.2.7). The mass loading can be avoided by using dummy masses in the "empty" locations while roving the accelerometers around the structure. An intermediate solution can be to have enough accelerometers to mount in all response DOFs on the structure, but only acquire them a few at a time, which allows a measurement system with fewer channels. This means there need not be any interaction with the structure during the data acquisition, although it makes the measurement time longer and, therefore, increases the risk of time variance due to, for example, temperature change.

If you decide on impact testing, you can either use a roving impact hammer (roving force) or a fixed force location, just as if you were using a shaker. The most practical choice here is often to use a roving force. You should then preferably use more than one reference accelerometer, the more the better (except for mass loading considerations, see Section 16.2.7). A reason for not using a roving hammer can be that there are DOFs which cannot (or at least not easily) be excited by an impact hammer. If there are many such DOFs, you may have to revert to the fixed-impact DOF approach. If there are only a few inaccessible DOFs, you can, however, conveniently use the reciprocity principle: for those points, put an accelerometer in the DOF, or DOFs, which you cannot impact, and then sequentially measure the FRFs between excitation in each of the reference locations (i.e., where you have your reference accelerometers) and response in the DOFs you cannot impact. You should remember to re-index these FRFs so that they are indexed as if they were FRFs in the reverse order (force to acceleration location).

For shaker excitation, I strongly recommend using at least two shakers simultaneously, because this allows checking that the shakers are correctly mounted, by checking the reciprocity throughout the measurement (see Section 16.2.7). As discussed in Section 13.12.2, shaker excitation is prone to errors from the force sensor if the shaker is not attached properly to the structure. If two shakers are used, and the reciprocity between the two shaker locations is good, this is a guarantee that each force is correctly measured. Furthermore, two shaker locations increase the possibility of exciting all modes efficiently. It is, of course, possible to use only one shaker and move it between two (or more, if desired) positions, to measure two (or more) columns of the FRF matrix. This is, however, not as good as measuring both forces simultaneously, because many things can go wrong so as to make the two columns of the FRF matrix inconsistent, which may result in wrong modal parameters.

16.2.6 Suspension

How to suspend the test structure is an important consideration. Typically, there are three possible suspension options: (i) free–free conditions, (ii) as supported in real life (i.e., connected to something), or (iii) fixed support. The main reasons for choosing the first option, free–free boundary conditions, are that it is most likely to produce good measurements, and that it is an easy boundary condition to apply to the analytical model producing analytical results to be compared with the experimental results. When you support your structure freely, the vibration energy you put into it by the excitation stays in the test structure until it has naturally decayed due to damping. When several structures are connected, as in cases (ii) and (iii), a lot of the excitation energy gets lost to the surrounding structures, which usually means it is much more difficult to obtain good FRFs. Furthermore, fixed support is very difficult to obtain, as nothing in the real world is infinitely stiff. If your aim is to make the best possible estimation of the modal parameters of a structure, for validation of an analytical model, you should thus support it free–free. Only for troubleshooting purposes should you consider performing an EMA test on a structure installed in its normal environment.

Let us now focus on free–free boundary conditions. Typically, this is achieved by suspending the test structure on soft springs (or hanging it in soft springs). Although this may sound easy, there are many things to consider (Carne et al. 2007). Depending on the size and weight of the test structure, it can be supported by anything from rubber cords to steel springs, etc. The suspension springs should be soft enough that the six rigid body modes, which are due to the mass and geometry of the test structure and the stiffness of the springs, fall (as a rule of thumb) below a tenth of the first eigenfrequency of the structure. To obtain this, a couple of things need to be considered. First of all, for the vertical rigid body mode, Equation (5.59) can be used to find how much the spring will compress/extend for a particular rigid body frequency. As an example, to achieve a (vertical) rigid body frequency of 1 Hz, the spring should compress/extend approx. 25 cm.

The other rigid body modes also need some consideration. This is particularly important when suspending long, slender objects. Such objects should usually be suspended hanging vertically, to make it more likely that the rigid body mode with rotation around the long axis of the object becomes sufficiently low in frequency. This is an often neglected fact. If long, slender objects are hung horizontally, there is a large risk that this rotational rigid body mode has a high frequency, due to the small mass moment of inertia around the long axis. A very important issue is to test the influence of the suspension springs to ensure that they are not affecting the structure. This will be discussed in Section 16.2.7. There is also a risk that slender objects suspended horizontally may exhibit static deformation, which may significantly affect the stiffness and thus the modes of the object.

Rubber cords are very common for suspending smaller objects with free–free boundary conditions. The rubber cords should not touch the structure, because this is likely to add damping to the structure. Instead, the structure should be hung in, for example, fishing line (there are woven fishing lines which are very strong and suitable for this purpose) or thin steel wire, which is then connected to the rubber cords.
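As a quick check of the 25 cm figure quoted above, the standard SDOF relation between the vertical rigid body frequency and the static deflection 𝛿 = mg∕k of the suspension spring (a sketch of the reasoning; the exact form of Equation (5.59) may differ) gives
$$ f_{rb} = \frac{1}{2\pi}\sqrt{\frac{k}{m}} = \frac{1}{2\pi}\sqrt{\frac{g}{\delta}} \quad\Rightarrow\quad \delta = \frac{g}{(2\pi f_{rb})^2} = \frac{9.81}{(2\pi \cdot 1)^2}\ \mathrm{m} \approx 0.25\ \mathrm{m}. $$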

16.2.7 Measurement Checks

In this section, we will discuss some tests that are important to ensure that the measurements can be used for good MPE. It should perhaps be stressed here that it is very important not to give up too early, with too poor coherence functions as a result. In most cases with free–free structures, and with the correct selection of accelerometers, force sensors, and measurement settings, it is possible to obtain coherence functions with values very near unity at most frequencies. At very sharp antiresonances, this may often be impossible, due to the very low response signal, which causes the extraneous noise to produce a dip in the coherence. Using the H1 estimator with enough averages ensures that the FRF is still estimated without bias error. In any case, some bias in the FRFs at antiresonances may be tolerated.

For shaker excitation, it is necessary to check that the force sensor, stinger (Br. Eng.: rod), and shaker are correctly attached, particularly so that there is no transverse vibration caused by bad alignment of the shaker, as described in Section 13.12.2. To summarize, these checks include the following:

● checking the force spectrum to ensure that the stinger does not produce too high a dynamic range, which can result in noise on the input;
● looking at the imaginary part of the driving point FRFs to see that they only peak one way, positive or negative (which depends on the sensor orientation); and
● investigating the reciprocity, which ensures both that the shakers are properly attached and that the calibration is good (a simple numerical reciprocity check is sketched after this list).
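A minimal sketch of the reciprocity check mentioned in the last item, comparing two measured FRFs Hpq and Hqp over the analysis band (the array names are assumptions), could look as follows:

import numpy as np

def reciprocity_error(Hpq, Hqp):
    """Normalized deviation between two FRFs; 0 means perfect reciprocity."""
    return np.linalg.norm(Hpq - Hqp) / np.linalg.norm(0.5 * (Hpq + Hqp))

A deviation of more than a few percent over the analysis band is typically a sign that one of the force measurements (attachment or calibration) should be investigated.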

The next step, regardless of impact hammer or shaker excitation, is to determine the measurement settings. For this, follow the procedures discussed in Chapters 13 and 14. After the optimum measurement parameters are determined, it is necessary to make some checks before starting to acquire data, to ensure that the measurement setup is correct.

First, mass loading should be checked. The easiest test is to measure an FRF between two points and store this FRF. Then, an accelerometer of the same kind as the one used for the first measurement is used as a dummy mass and mounted next to the first accelerometer. The same FRF is measured again and compared with the first one. If the two FRFs are not identical, there is likely to be mass loading already from the mass of the first accelerometer. Two options are then available: either a lighter kind of accelerometer is chosen for the measurements, or, if this is not possible, all accelerometers can be added to the structure prior to the measurement. Although this will mean some (perhaps quite substantial) mass loading of the structure, the measurement will still be consistent, and the effect of the masses can be removed after the analysis; alternatively, the masses can be added to the analytical model before it is compared with the experimental results. If not enough accelerometers are available, and the test strategy is to rove the accelerometers between measurement sets, dummy masses of approximately the same shape and mass as the accelerometers used can be placed in the positions where accelerometers are not located, so that the same mass distribution is achieved in each data set.

After checking the mass loading effect and solving any issues with it, it is appropriate to check the suspension. This is essential, as it is very common to neglect significant effects from the suspension that lead to erroneous results, particularly added damping, although the mode shapes may potentially also be affected by the suspension. A way to ensure minimal effect of the suspension on the modal parameters is to first measure some FRFs with the structure suspended as it will be suspended during the measurement. These FRFs are then used with some parameter extraction technique to estimate the poles and mode shapes. A first check can now be made by making sure that the highest peak in the FRF caused by the rigid body modes occurs at a frequency less than a tenth of the first eigenfrequency of the structure (which is a good rule of thumb, but no guarantee that there is no influence on the modal parameters from the suspension). If the rigid body modes are found at sufficiently low frequencies, the suspension is then changed, for example by shortening the springs (if the structure is hanging in rubber cords or another type of suspension where the spring constant can be changed easily). FRFs between the same nodes as before are then measured and processed for modal parameters. The natural frequencies and damping factors of the structure modes should be the same as in the first test. If not, the modal parameters are affected by the suspension, and this needs to be addressed.
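Both the mass loading check and the suspension check above amount to comparing two nominally identical FRFs measured under slightly different conditions. A simple way to quantify the change (a sketch; the function name is illustrative) is

import numpy as np

def frf_deviation_db(H_before, H_after):
    """Magnitude deviation in dB between two FRFs, line by line."""
    return 20 * np.log10(np.abs(H_after) / np.abs(H_before))

If the deviation is noticeable around the resonance peaks, and not only at sharp antiresonances, mass loading or suspension effects should be suspected.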


The final check should be to measure some FRFs between excitation and response in different directions and between excitation and response far away from each other. In both cases, it should be ensured that the coherence is acceptable. In most cases, this should mean that it is very near unity.

16.2.8 Calibration

Correct calibration of all force sensors and accelerometers is very important in order to get results of the best possible quality. This is not so much because a correctly scaled model is essential, but because, with multiple references, the data will not be consistent without accurate calibration. This will inevitably lead to errors in the multiple-reference parameter extraction routine used for estimating (scaled) mode shapes, see Section 16.9.2. If shakers are used, it is best to use the calibration factors from the latest official calibration (remember to recalibrate your sensors and measurement system regularly!). If, however, an impact test with a roving hammer is performed, a mass can be used for calibration, as described in Chapter 7.

16.2.9 Data Acquisition

Once all the checks mentioned in the previous sections have been performed, it is time to acquire the data. This should be done in as short a time as possible, keeping conditions (for example, temperature) as constant as possible to avoid time variance in the data. As mentioned in many places in this book, the best strategy is usually to store time data, so that the data processing can be redone at a later stage. With shaker excitation, however, since all nonwindowing excitation signals necessitate fixing the measurement settings (predominantly the blocksize) prior to the measurement, there is little need to store the time data. If time data are not stored, it is good practice to store at least FRFs, coherence functions, and force spectra. All of these functions may be needed for assessment of the measurement quality. The coherence functions and the measured FRFs should, of course, be inspected as they are acquired, before saving them, to ensure that all data are good and that any errors occurring during the test are detected.

16.2.10 Mode Indicator Functions

One of the tools often used in EMA is the mode indicator function, MIF. Actually, there are several MIFs in common use; a good comparison can be found in Rades (1994). Here we will present three of the most common MIFs. The simplest MIF is a sum of the magnitudes of all FRFs, usually squared, or sometimes a sum of the imaginary parts squared. This type of plot exaggerates global modes, i.e., modes where most measured FRFs have a large displacement.

The "Normal MIF," or sometimes "MIF 1," is a one-dimensional MIF, i.e., it operates on single-reference FRF matrices. The normal MIF is based on the fact that, as we know from Chapter 6, away from the natural frequencies of a structure, the FRF (if it is an accelerance or a receptance) is approximately real, whereas exactly at the undamped natural frequency, it is purely imaginary. Note that if mobilities are used, the properties of the real and imaginary parts of the FRF are swapped. Since accelerance is the most commonly used measurement function, we define the MIFs in the following text on this type of FRF. To make the formulas a little simpler in the following, we define the real part of an FRF by $H_{Rpq} = \mathrm{Re}\left[H_{pq}\right]$, and the imaginary part we denote by $H_{Ipq} = \mathrm{Im}\left[H_{pq}\right]$. We then define the normal MIF as follows:
$$ \mathrm{MIF}_1(f) = \frac{\sum_p \left|H_{Rp}(f)\right|^2}{\sum_p \left|H_p(f)\right|^2}, \qquad (16.3) $$
where p sums over all measured FRFs at each frequency f, and we assume only reference q is used, although the MIF would work per se even if there were several references (but then we would use the multivariate MIF, see below). The normal MIF will be near unity, except at frequencies where there is influence from global modes, where it will drop toward zero. An example of a normal MIF using all FRFs from a measurement on the Plexiglas plate described in Section 14.6 is shown in Figure 16.1(a).

When multireference data have been obtained, more sophisticated MIFs can be computed. The most commonly used such MIF is the multivariate MIF, or MvMIF (Williams et al. 1985). The multivariate MIF is based on the idea of finding a force vector exciting a normal mode, which is attempted at each frequency. When a force vector can be found that excites a normal mode, then the phase between force and response (in displacement or acceleration) will be 90°. This means that at such a frequency, the real part will be close to zero in each FRF, and the imaginary part of each FRF will be large in comparison with the real part. A minimization problem can thus be formulated, using the force vector {F} and the real and imaginary parts of the FRFs in [HR] and [HI], respectively, by
$$ \min_{\|F\|=1} \frac{\{F\}^T [H_R]^T [H_R] \{F\}}{\{F\}^T \left([H_R]^T [H_R] + [H_I]^T [H_I]\right) \{F\}} = \lambda. \qquad (16.4) $$
The minimization problem is similar to a Rayleigh quotient (see, e.g., Strang (2005)) and is minimized by finding the smallest eigenvalue, 𝜆min, and the corresponding eigenvector {F}min of
$$ [H_R]^T [H_R] \{F\} = \lambda_v \left([H_R]^T [H_R] + [H_I]^T [H_I]\right) \{F\}, \qquad (16.5) $$
which is solved at each frequency, and where the eigenvalues, 𝜆v, and corresponding eigenvectors are sorted in ascending order. The force vectors found can be used in so-called force appropriation, or normal mode, testing (see Wright et al. (1999)), which was the original reason for developing the MvMIF. For our use, however, we are only interested in the eigenvalues. The first (smallest) of the eigenvalues is called the first MvMIF, the second smallest eigenvalue is called the second MvMIF, and so on, for as many MvMIFs as there are references. As for the normal MIF, the MvMIF functions are limited to the range zero to unity. The advantage of this MIF is that it detects repeated modes, i.e., if there are two modes at a particular frequency, both the first and second MvMIF will dip toward zero; if there are three modes, the first, second, and third MvMIF will all dip toward zero, etc. An example of the MvMIF for a three-reference test is shown in Figure 16.1.

Figure 16.1 Plots of mode indicator functions, MIFs, computed from data from measurements on a Plexiglas plate, see text. In (a), MIF 1 (normal MIF); in (b), the multivariate MIF, MvMIF; in (c), a zoomed-in range of the MvMIF around the first modes; and in (d), the modified real MIF, MrMIF. In (c), the typical eigenvalue cross-over effect is clearly seen, see text for details.

In Figure 16.1(b), at approximately 145 Hz, and also at approximately 420 Hz, the second MvMIF can be seen to dip. From a closer inspection, as shown in Figure 16.1(c), where the plot has been zoomed in around 145 Hz, it can be seen that the first MvMIF actually has two close dips, whereas the second MvMIF dips in between these two. This phenomenon is often referred to as the eigenvalue cross-over effect and should be interpreted such that there are actually only two modes, the two modes indicated by the first MvMIF. The second MvMIF dips only because of the poor frequency resolution and the cross-over effect. Some experience interpreting the MvMIF is thus necessary in order not to misinterpret the results when modes are closely spaced.

The MvMIF is actually detecting real normal modes. A similar MIF, the modified real mode indicator function, MrMIF, is defined by the eigenvalues, 𝜆r, of
$$ [H_R]^H [H_R] \{F\} = \lambda_r [H_I]^H [H_I] \{F\}, \qquad (16.6) $$
where the eigenvalues are sorted in ascending order, as for the MvMIF. Like the multivariate MIF, MvMIF, the MrMIF produces as many functions as there are references. The MrMIFs are usually plotted on a logarithmic y-scale, and the functions dip where there are modes; the first function dips at every frequency where there is a mode, the second only where there are two or more modes, and so on. The MrMIFs are not bounded between zero and unity. An example of the MrMIFs as defined by Equation (16.6) is shown in Figure 16.1(d). When interpreting the MrMIF, the eigenvalue cross-over effect has to be considered, as for the MvMIF discussed above. This effect may be seen in the figure around 145 Hz, and also around 415 Hz, where in both cases the dip in the second MrMIF does not indicate any mode.

The complex mode indicator function, CMIF, is also a commonly used MIF, which may also be used for parameter estimation. We present it in Section 16.8.4. It should be noted that all the MIFs are computed for each measured frequency. In cases where the frequency increment is poor, the MIFs may not dip to zero, because there is no frequency value exactly at the undamped natural frequencies of the structure.
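As a sketch of how the normal MIF in Equation (16.3) and the MvMIF in Equation (16.5) could be computed from a measured FRF matrix (the array layout, with shape (number of frequencies, number of responses, number of references), is an assumption for this example):

import numpy as np
from scipy.linalg import eigh

def normal_mif(H):
    """Normal MIF (MIF 1); H has shape (n_freq, n_resp) for one reference."""
    return np.sum(H.real ** 2, axis=1) / np.sum(np.abs(H) ** 2, axis=1)

def mv_mif(H):
    """Multivariate MIF; H has shape (n_freq, n_resp, n_ref)."""
    n_freq, _, n_ref = H.shape
    mif = np.zeros((n_freq, n_ref))
    for k in range(n_freq):
        HR, HI = H[k].real, H[k].imag
        A = HR.T @ HR                           # numerator matrix of Eq. (16.5)
        B = A + HI.T @ HI                       # denominator matrix of Eq. (16.5)
        mif[k] = eigh(A, B, eigvals_only=True)  # eigenvalues in ascending order
    return mif

Each column of the returned mv_mif array is one of the MvMIF curves of the kind plotted in Figure 16.1(b).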

16.2.11 Data Quality Assessment

After all FRFs are acquired, and before the MPE starts, it is worthwhile to assess the quality of the measured data set. This can be done by computing and plotting the MIF. If the MvMIF is used, it should look similar to Figure 16.1, with sharp dips at the frequencies of the modes. Remember, however, that the MvMIF may fail if the mode shapes show a lot of complexity due to local (nonproportional) damping.

16.2.12 Checklist

An EMA test is a comprehensive task where it is vital that every step is appropriately checked. To aid in this process, a checklist for the entire procedure of an EMA test is provided in Appendix G. It is strongly recommended to use this, or an alternative checklist, to ensure nothing is forgotten during the test.

16.3 Introduction to Modal Parameter Extraction

MPE methods are divided into SDOF methods and multiple degrees-of-freedom (MDOF) methods, as illustrated schematically in Figure 16.2. The SDOF methods rely on treating a single peak in an FRF as a peak corresponding to a single mode. Some SDOF methods will be described in Section 16.4. The MDOF methods, on the other hand, are methods that work over a larger frequency range and estimate the modal parameters of several modes in that range. MPE methods can also be divided into local methods and global methods. The local methods operate on a single FRF at a time (related to one response location, hence the name local), whereas the global methods estimate the modal parameters using FRFs of many DOFs simultaneously. The global methods, finally, are divided into single-reference methods, using only one row or column of the FRF matrix, and multiple-reference methods, which use several rows or columns of the FRF matrix.

The modal analysis theory discussed in Chapter 6 was based on FRFs in receptance format (displacement over force). In this chapter, we will use the same format. This means that the estimated FRFs normally need to be converted to receptance format before the MPE is done; otherwise, the modal model will get erroneous scaling.


Figure 16.2 Division of modal parameter estimation (MPE) methods into various groups. Some common MPE methods are mentioned under the box where they belong. Most SDOF methods are frequency domain methods; in the figure, the least squares local (LSQL) and least squares polynomial (LSQP) methods are SDOF local methods; least squares global (LSQG) and least squares global polynomial (LSQGP) are SDOF global methods. For MDOF methods, Prony's method is a local method; least squares complex exponential (LSCE) and Ibrahim time domain (ITD) are global single-reference methods; and polyreference time domain (PTD), multiple-reference Ibrahim time domain (MITD), modified multiple-reference Ibrahim time domain (MMITD), least squares complex frequency (LSCF), frequency domain direct parameter identification (FDPI), and complex mode indicator function (CMIF) are global, multiple-reference methods.

The conversion is, of course, readily obtained by dividing each frequency response by j𝜔 or the square of this, depending on whether the measurements were made with velocity or acceleration sensors, respectively, as described in Section 5.3.

In the following, we will always use an FRF matrix with one or more columns, even if one or more rows were actually measured (as for roving hammer measurements). Since the (full) FRF matrix [H(f)] is symmetric due to Maxwell's reciprocity theorem, we simply transpose the matrix if necessary. By this approach, we only need one formulation for each of the MPE methods. We also introduce the variables NL, the long dimension of [H(f)], and NS, the short dimension, for the size of the final FRF matrix, which is then [H(f)]NL×NS, implying a test with fixed force positions, regardless of how the measurements were actually made. These variables are useful when describing the MPE methods later in this chapter.

Before we continue explaining some of the many algorithms for MPE, a few words about the mathematical principles of parameter extraction (sometimes called system identification) may be appropriate. Consider linear regression for fitting a polynomial to measured data, which you are probably familiar with. Linear regression is a simple case of parameter extraction, and the same principles apply to MPE, although the equations may be more demanding in the latter case. To apply linear regression, you need to decide (know) the model order of your data, i.e., you need to know, for example, if the data should follow a straight line (first-order polynomial), or a parabola (second-order polynomial), etc. This applies to all parameter extraction methods; based on first principles, you need to know what structure the data have, in order to set up a mathematical model of that structure. This is one of the main challenges in EMA, as we will see below, and is usually addressed by using stabilization diagrams (also called consistency diagrams). Another important thing to understand about parameter extraction is that the accuracy of the estimated parameters depends on the SNR and the amount of data that are available. The higher the SNR, the better, and the more data, the better.

MPE methods for EMA (we will discuss MPE for OMA in Chapter 17) use either frequency domain data in the form of FRFs, or time domain data in the form of impulse response functions, IRFs, both computed by the methods described in Chapters 13 and 14. Typically, FRFs are always estimated first and converted to IRFs by inverse fast Fourier transform (FFT) if time domain methods are applied. If we start with the frequency domain, the formulation is given by Equation (16.1). Some changes are usually made to this formulation; the first is that we rewrite the equation into a single sum by renumbering the poles and residues so that we have a sum from 1 to 2N:
$$ [H(j\omega)] = \sum_{r=1}^{2N} \frac{[A]_r}{j\omega - s_r}, \qquad (16.7) $$

where it should be noted that half of the residue matrices (and, if complex, the mode shapes) are the complex conjugates of the other half. The next change we make is to account for the fact that we only have a few columns of the FRF matrix, not the entire matrix. This is accomplished by introducing the modal participation vector for mode r, which, if we assume NS references (assumed to be forces), is given by
$$ \{L\}_r = Q_r \begin{Bmatrix} \psi_{q_1 r} \\ \psi_{q_2 r} \\ \vdots \\ \psi_{q_{N_S} r} \end{Bmatrix}, \qquad (16.8) $$
by which the residue matrix can be written $[A]_r = \{\psi\}_r \{L\}_r^T$, where you should note that we transpose the vector despite it being complex. Next, to decompose the FRF matrix into a matrix equation, we also introduce the diagonal inverse pole matrix, ⌈Λ−1⌋, defined by
$$ \lceil \Lambda^{-1}(j\omega) \rfloor = \begin{bmatrix} \frac{1}{j\omega - s_1} & 0 & \cdots & 0 \\ 0 & \frac{1}{j\omega - s_2} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \frac{1}{j\omega - s_{2N}} \end{bmatrix}, \qquad (16.9) $$
which allows us to write the decomposition of the FRF matrix as follows:
$$ [H(j\omega)]_{N_L \times N_S} = [\psi]_{N_L \times 2N}\, \lceil \Lambda^{-1} \rfloor_{2N \times 2N}\, [L]^T_{2N \times N_S}, \qquad (16.10) $$

where for clarity, we have also indicated the size of each matrix. It should be noted that the inverse pole matrix, ⌈Λ−1⌋, is not actually a calculated inverse; the Λ−1 is to be regarded as a name, and we call it so because it includes the reciprocals (inverses) of the pole terms, j𝜔 − sr. The modal participation matrix, [L], is a matrix with the modal participation vectors defined by Equation (16.8) as columns. Instead of the formulation in Equation (16.10), many algorithms are based on the transpose of Equation (16.10), which becomes
$$ [H(j\omega)]^T_{N_S \times N_L} = [L]_{N_S \times 2N}\, \lceil \Lambda^{-1} \rfloor_{2N \times 2N}\, [\psi]^T_{2N \times N_L}. \qquad (16.11) $$

For time domain MPE, we inverse Fourier transform Equation (16.10) to the time domain. Only the pole matrix is affected by the transformation (since the two outer matrices are not frequency dependent), and its corresponding matrix in the time domain is the exponential pole matrix, ⌈e^{s_r t}⌋, given by
$$ \lceil e^{s_r t} \rfloor = \begin{bmatrix} e^{s_1 t} & 0 & \cdots & 0 \\ 0 & e^{s_2 t} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & e^{s_{2N} t} \end{bmatrix}, \qquad (16.12) $$
which gives us the time domain IRF decomposition:
$$ [h(t)]_{N_L \times N_S} = [\psi]_{N_L \times 2N}\, \lceil e^{s_r t} \rfloor_{2N \times 2N}\, [L]^T_{2N \times N_S}. \qquad (16.13) $$
For methods using the transpose form in the time domain, it is
$$ [h(t)]^T_{N_S \times N_L} = [L]_{N_S \times 2N}\, \lceil e^{s_r t} \rfloor_{2N \times 2N}\, [\psi]^T_{2N \times N_L}. \qquad (16.14) $$

The results in Equations (16.10)–(16.14) are the fundamental equations for most of the MPE methods we will look at in the following.
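As a small illustration of the decomposition in Equation (16.10), the following sketch synthesizes an FRF matrix from a set of poles, mode shapes, and modal participation vectors (all inputs are assumed to be given; names are illustrative):

import numpy as np

def synthesize_frf(freq_hz, poles, psi, L):
    """H(j*2*pi*f) = [psi] diag(1/(jw - s_r)) [L]^T, shape (n_freq, N_L, N_S)."""
    H = np.zeros((len(freq_hz), psi.shape[0], L.shape[0]), dtype=complex)
    for k, f in enumerate(freq_hz):
        inv_pole = 1.0 / (1j * 2 * np.pi * f - poles)  # diagonal of Eq. (16.9)
        H[k] = (psi * inv_pole) @ L.T                  # psi @ diag(...) @ L^T
    return H

Note that the pole vector must contain each pole and its complex conjugate (2N entries), with the corresponding conjugate columns in psi and L, for the synthesized FRFs to be those of a real-valued system.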

16.4 SDOF Parameter Extraction

The simplest methods used for MPE are the SDOF extraction techniques, where one or more FRFs are processed in a narrow frequency range around a natural frequency, under the assumption that there is only one mode affecting the FRF, which is thus acting as an SDOF system, see Chapter 6. You may wonder why a simple SDOF method would be appropriate when, as we will see later, very sophisticated, global algorithms are available. The answer is that those sophisticated methods are very sensitive to bad data quality. Therefore, there are situations, particularly in troubleshooting cases, where the global MDOF methods will fail. SDOF methods can then often still give reasonable estimates of both poles and mode shapes, good enough to solve the problem. We will therefore look at two similar local SDOF methods, which assume we have a single FRF, Hpq(j𝜔), and we will see how both methods can be extended to global methods that use all FRFs for a single reference, but still for one mode, of course.

16.4.1 The Least Squares Local Method

The first method, the least squares local (LSQL) method (Phillips and Allemang, 1996), is based on the general partial fraction expansion form of the FRF as in Equation (16.1). For a single mode, assuming there is only one mode affecting the FRF at a frequency 𝜔k (in rad/s) close to the undamped natural frequency, we then have that
$$ H_{pq}(j\omega_k) \approx \frac{A_{pqr}}{j\omega_k - s_r} + \frac{A^*_{pqr}}{j\omega_k - s^*_r}, \qquad (16.15) $$
where the residue is
$$ A_{pqr} = Q_r \psi_{pr} \psi_{qr}. \qquad (16.16) $$
We now make an additional approximation (in addition to assuming other modes do not have any influence): we assume that the complex conjugate term, which comes from the complex conjugate pole at negative frequency (see Figure 2.12), is negligible, which is usually appropriate as a first approximation. The FRF can thus be expressed as follows:
$$ H_{pq}(j\omega_k) \approx \frac{A_{pqr}}{j\omega_k - s_r}, \qquad (16.17) $$
which is now rewritten by multiplying by the denominator on both sides, which after rearranging the terms yields
$$ H_{pq}(j\omega_k) \cdot s_r + A_{pqr} = j\omega_k H_{pq}(j\omega_k). \qquad (16.18) $$
This equation is valid at frequencies close to the undamped natural frequency, 𝜔r, which means we can set up an equation system using L frequencies 𝜔1, 𝜔2, …, 𝜔L:
$$ \begin{bmatrix} H_{pq}(\omega_1) & 1 \\ H_{pq}(\omega_2) & 1 \\ \vdots & \vdots \\ H_{pq}(\omega_L) & 1 \end{bmatrix} \begin{Bmatrix} s_r \\ A_{pqr} \end{Bmatrix} = \begin{Bmatrix} j\omega_1 H_{pq}(j\omega_1) \\ j\omega_2 H_{pq}(j\omega_2) \\ \vdots \\ j\omega_L H_{pq}(j\omega_L) \end{Bmatrix}, \qquad (16.19) $$
which can be solved for the vector ⌊sr Apqr⌋T in a least squares sense or by a pseudoinverse, see Appendix E. The target frequencies may be identified by inspecting a MIF as defined in Section 16.2.10. By assuming a unity modal scaling constant, Qr = 1, which is equivalent to unity modal A scaling (see Section 6.4.4), the residue obtained is 𝜓pr𝜓qr. Thus, if we compute the poles and residues for all response locations p, we can divide all residues by the square root of the driving point residue, √Aqqr = 𝜓qr, and obtain the mode shape vector {𝜓}r. Each FRF will give an individual estimate of the pole, so by averaging the pole values, a better estimate of the pole may be obtained.
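A minimal sketch of the LSQL solution of Equation (16.19) for one FRF and a handful of frequency lines around a peak (w in rad/s and Hpq are assumed inputs) is

import numpy as np

def lsql_sdof(w, Hpq):
    """Least squares solution of Eq. (16.19); returns (pole, residue)."""
    A = np.column_stack([Hpq, np.ones_like(Hpq)])   # [H_pq(w_k)  1]
    b = 1j * w * Hpq                                # right-hand side
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x[0], x[1]                               # s_r, A_pqr

From the pole, the undamped natural frequency and relative damping follow as 𝜔r ≈ |sr| and 𝜁r ≈ −Re(sr)∕|sr|.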

16.4.2 The Least Squares Global Method

The LSQL method is easily modified to a global method, as described in Phillips and Allemang (1996). This is done by replacing each row in the equations above with the vector of all FRFs for reference q, which, assuming we use L frequencies as before, results in
$$ \begin{bmatrix} \{H_q(\omega_1)\} & \lceil I \rfloor \\ \{H_q(\omega_2)\} & \lceil I \rfloor \\ \vdots & \vdots \\ \{H_q(\omega_L)\} & \lceil I \rfloor \end{bmatrix}_{L N_L \times (N_L+1)} \begin{Bmatrix} s_r \\ \{A_{qr}\} \end{Bmatrix}_{(N_L+1) \times 1} = \begin{Bmatrix} j\omega_1 \{H_q(j\omega_1)\} \\ j\omega_2 \{H_q(j\omega_2)\} \\ \vdots \\ j\omega_L \{H_q(j\omega_L)\} \end{Bmatrix}_{L N_L \times 1}, \qquad (16.20) $$
where we denote the entire FRF vector for reference q at frequency 𝜔k by {Hq(𝜔k)}, and ⌈I⌋ is the NL × NL identity matrix.

16.4.3 The Least Squares (Local) Polynomial Method

The next method we will look at, the least squares polynomial method, is based on Equation (6.113), which can be rewritten at frequencies close to the undamped natural frequency, 𝜔r, assuming there is only one mode, as follows:
$$ H_{pq}(j\omega) \approx \frac{\psi_{pr}\psi_{qr}/m_r}{-\omega^2 + j2\zeta_r\omega_r\omega + \omega_r^2}, \qquad (16.21) $$
where 𝜔r and 𝜁r are the unknown undamped natural frequency (in rad/s) and relative damping ratio, respectively. We assume unity modal mass, mr = 1, which means only the mode shape coefficients remain in the numerator. We write the numerator as Cpqr = 𝜓pr𝜓qr, remembering that the mode shapes are then scaled for unity modal mass (i.e., the modal scaling constant Qr ≠ 1, see Equation (6.137)). We now rewrite Equation (16.21) at a frequency 𝜔k around the natural frequency and rearrange to obtain
$$ H_{pq}(\omega_k)\cdot\omega_r^2 + j2\omega_k H_{pq}(\omega_k)\cdot\zeta_r\omega_r - C_{pqr} = \omega_k^2 H_{pq}(\omega_k). \qquad (16.22) $$

We can rewrite Equation (16.22) in matrix form by repeating it at several frequencies, 𝜔k, k = 1, 2, …, L, and get a matrix equation:
$$ \begin{bmatrix} H_{pq}(\omega_1) & j2\omega_1 H_{pq}(\omega_1) & -1 \\ H_{pq}(\omega_2) & j2\omega_2 H_{pq}(\omega_2) & -1 \\ \vdots & \vdots & \vdots \\ H_{pq}(\omega_L) & j2\omega_L H_{pq}(\omega_L) & -1 \end{bmatrix} \begin{Bmatrix} \omega_r^2 \\ \zeta_r\omega_r \\ C_{pqr} \end{Bmatrix} = \begin{Bmatrix} \omega_1^2 H_{pq}(\omega_1) \\ \omega_2^2 H_{pq}(\omega_2) \\ \vdots \\ \omega_L^2 H_{pq}(\omega_L) \end{Bmatrix}, \qquad (16.23) $$
which can be solved in a least squares sense or by a pseudoinverse, see Appendix E. Typically, a few frequency lines, say 5–11, are used to solve for the three unknowns in Equation (16.23), from which the modal parameters 𝜔r and 𝜁r, and the numerator coefficient, Cpqr, can easily be extracted. If the procedure is repeated for a vector of FRFs corresponding to the whole column q of [H], the scaled mode shape vector can be obtained by dividing each numerator coefficient, Cpqr, by the square root of the driving point coefficient, Cqqr, as was described for the LSQL method in Section 16.4.1. The poles should then also be averaged.

The SDOF polynomial method can easily be extended to a global method, in the same way the SDOF LSQL method was extended, which will produce a method we call the SDOF global polynomial method. This is left for the reader. Both the LSQL and the least squares polynomial (LSQP) methods are somewhat ill-conditioned, because the values in the unknown vectors are very different in size. The pole estimates are usually robust, but the mode shapes may be badly estimated. An easy approach to find the mode shape coefficients, once the pole has been obtained by one of the methods, is to simply insert the pole into Equation (6.113) and omit the complex conjugate part, so that
$$ A_{pqr} = H_{pq}(j\omega_r)(j\omega_r - s_r), \qquad (16.24) $$
where the residue is defined by Equation (16.16). This will produce mode shapes with unity modal A scaling if the modal scaling constant is set to unity, Qr = 1, and the procedure of dividing all the residues by the square root of the driving point residue is applied as described above.
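A corresponding sketch of the least squares solution of Equation (16.23) (again with assumed inputs w in rad/s and Hpq in receptance form) is

import numpy as np

def lsqp_sdof(w, Hpq):
    """Least squares solution of Eq. (16.23); returns (wr, zr, Cpqr)."""
    A = np.column_stack([Hpq, 2j * w * Hpq, -np.ones_like(Hpq)])
    b = w ** 2 * Hpq
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    wr = np.sqrt(x[0].real)        # undamped natural frequency (rad/s)
    zr = (x[1] / wr).real          # relative damping ratio
    return wr, zr, x[2]            # x[2] is the numerator coefficient C_pqr

The real parts are taken because, with noisy data, the least squares solution of the complex-valued system is not exactly real, which also illustrates the conditioning issue mentioned above.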

16.5 The Unified Matrix Polynomial Approach, UMPA

Before introducing some of the most common algorithms for MPE, we will introduce the conceptual approach developed by Allemang and Brown (1998), and updated in Allemang and Phillips (2004b) and Allemang et al. (2011). This theory, called the Unified Matrix (coefficient) Polynomial Approach, UMPA, describes MPE methods in a common framework and is good for understanding the concept of MPE. It also helps when comparing different methods with each other. The following essentially follows the lines of Allemang and Phillips (2004b).

16.5.1 Mathematical Framework

The basic equation for MPE can be written in terms of a generalized frequency, si, which should not be confused with the Laplace variable. Usually, the generalized frequency is si = j𝜔; however, as we will see for some frequency domain methods, it can take on other forms to improve numerical conditioning. The basic equation of an individual frequency response Hpq(𝜔i) at an arbitrary frequency 𝜔i is now written as a ratio of polynomials as follows:
$$ H_{pq}(\omega_i) = \frac{U_p(\omega_i)}{F_q(\omega_i)} = \frac{\beta_n (s_i)^n + \beta_{n-1}(s_i)^{n-1} + \cdots + \beta_0 (s_i)^0}{\alpha_m (s_i)^m + \alpha_{m-1}(s_i)^{m-1} + \cdots + \alpha_0 (s_i)^0}, \qquad (16.25) $$
which can be computed from the partial fraction expansion form in Equation (16.1). For FRFs in receptance form, the order of the numerator, n, is typically two less than the order of the denominator, i.e., n = m − 2. The polynomial can also be written as follows:
$$ H_{pq}(\omega_i) = \frac{U_p(\omega_i)}{F_q(\omega_i)} = \frac{\sum_{l=0}^{n} \beta_l (s_i)^l}{\sum_{k=0}^{m} \alpha_k (s_i)^k}, \qquad (16.26) $$
which can be rearranged into
$$ \sum_{k=0}^{m} \alpha_k (s_i)^k\, U_p(\omega_i) = \sum_{l=0}^{n} \beta_l (s_i)^l\, F_q(\omega_i). \qquad (16.27) $$
The last equation can readily be generalized for the full FRF matrix by introducing coefficient matrices [𝛼k] and [𝛽l] and force and response vectors, whereby the equation can be written as follows:
$$ \sum_{k=0}^{m} \left[\alpha_k (s_i)^k\right]\{U(\omega_i)\} = \sum_{l=0}^{n} \left[\beta_l (s_i)^l\right]\{F(\omega_i)\}. \qquad (16.28) $$

We do, however, want equations in terms of frequency responses. By postmultiplying Equation (16.28) by {F(𝜔i)}H and averaging over several ensembles (FFT blocks in reality), we will obtain auto- and cross-spectrum matrices, and the following expression is eventually obtained:
$$ \sum_{k=0}^{m} \left[\alpha_k (s_i)^k\right][H(\omega_i)] = \sum_{l=0}^{n} \left[\beta_l (s_i)^l\right]. \qquad (16.29) $$
This equation is the basic equation for frequency domain MPE based on measured FRFs. If this model is going to be used for MPE, negative frequencies need to be included in order to get complex conjugate pole pairs.

To obtain equations in the time domain, we can formulate the equivalent of Equation (16.27) as follows:
$$ \sum_{k=0}^{m} \alpha_k\, u_p(t_{i+k}) = \sum_{l=0}^{n} \beta_l\, f_q(t_{i+l}), \qquad (16.30) $$
or in matrix form for the whole measurement
$$ \sum_{k=0}^{m} [\alpha_k]\{u(t_{i+k})\} = \sum_{l=0}^{n} [\beta_l]\{f(t_{i+l})\}. \qquad (16.31) $$

A difference between the time and frequency domain formulations is that in the time domain we typically use free decay functions (impulse responses are also free decay functions). In such cases, we can assume all the forces to be zero, which gives us
$$ \sum_{k=0}^{m} [\alpha_k]\{h(t_{i+k})\} = 0, \qquad (16.32) $$
which can be solved by transposing and rearranging, and adding more rows from other time instances i + 1, i + 2, … in addition to time instance i. It is very important to note that the IRF does not represent free decay for all time values. As is evident from Equation (16.31), if we apply a unit impulse force, f(ti) = 𝛿(t − ti), and the remaining force samples are zero, f(ti+k) = 0, k > 0, then the response u(ti+k) will contain n samples which include the impulse, and the free decay only starts thereafter. The order of the numerator polynomial is close to the order of the denominator polynomial, which is m = 2N, the number of poles. Higher modes decay relatively fast, however, so in most cases it is enough to discard the first 10 to 20 values in the impulse responses.

It is apparent from the above that the coefficient matrices are essentially the same for both frequency and time domain estimation. The difference lies in the multipliers. In the frequency domain, the characteristic equation to be solved is
$$ \left|[\alpha_m]s_i^m + [\alpha_{m-1}]s_i^{m-1} + \cdots + [\alpha_0]\right|, \qquad (16.33) $$
whereas in the time domain, it becomes
$$ \left|[\alpha_m]z^m + [\alpha_{m-1}]z^{m-1} + \cdots + [\alpha_0]\right|, \qquad (16.34) $$
where $z = e^{s\Delta t}$ and s is the Laplace variable. This means, for example, that time domain roots, zr, will correspond to poles by $z_r = e^{s_r \Delta t}$, which gives the poles as sr = fs ⋅ ln(zr), since fs = 1∕Δt. In the frequency domain, the roots will depend on how we define si. In the simplest case, where si = j𝜔, the roots in si are directly the poles. It is thus obvious that although we use the same symbol, [𝛼], for the coefficient matrices in Equations (16.33) and (16.34), they will, of course, not have the same coefficients.


In order to solve for the roots of the matrix coefficients [𝛼], the most common way is to solve for the eigenvalues of the companion matrix, [C] (this is often, but not always, the preferred method of solving for roots of polynomials). This can be formulated as follows:
$$ [C] = \begin{bmatrix} -[\alpha_{m-1}] & -[\alpha_{m-2}] & \cdots & -[\alpha_0] \\ \lceil I \rfloor & [0] & \cdots & [0] \\ [0] & \lceil I \rfloor & \cdots & [0] \\ \vdots & & \ddots & \vdots \\ [0] & \cdots & \lceil I \rfloor & [0] \end{bmatrix}, \qquad (16.35) $$
which is the formulation based on normalizing the high-order coefficient, i.e., [𝛼m] = ⌈I⌋. The size of the coefficient matrices, and the order of the matrix polynomial, will be different depending on whether they are based on either the FRF or IRF matrix, [H(f)] or [h(t)], or on one of their transpose forms, [H(f)]T or [h(t)]T. As we will see when we describe the different MPE methods, we typically eliminate the rightmost matrix in Equations (16.10)–(16.14). If we base the MPE on either the FRF or IRF matrix, [𝛼k] will be NL × NL, and [𝛽l] will be NL × NS, and the order of the denominator ([𝛼]) polynomial will typically be low (one or two). In this case, we will get mNL poles. If we base the MPE on the transposed forms of either the FRF or IRF matrix, then [𝛼k] will be NS × NS, and [𝛽l] will be NS × NL, and the order of the denominator polynomial will be high. In this case, we will get mNS poles. This is essentially what divides different MPE algorithms; they are either low-order methods, in the first case, or high-order methods, in the second case.

The eigenvectors of the companion matrix, which we call modal vectors, are also of interest. First of all, the modal vector corresponding to eigenvalue (mode) number r, {𝜙}r, has a structure of the form
$$ \{\phi\}_r = \begin{Bmatrix} \lambda_r^{m-1}\{\psi\}_r \\ \lambda_r^{m-2}\{\psi\}_r \\ \vdots \\ \lambda_r^{1}\{\psi\}_r \\ \{\psi\}_r \end{Bmatrix}, \qquad (16.36) $$
where 𝜆r is eigenvalue number r of the companion matrix, [C]. Next, depending on the dimension of the matrix polynomials, the eigenvector will either contain the modal participation vector, in the case where the dimension of the matrix polynomial is "small" (i.e., NS × NS), or it will contain the mode shape of mode r, if the dimension of the matrix polynomial is "large" (i.e., NL × NL).

Traditionally, commercial software for EMA has predominantly utilized high-order methods, which estimate the poles and the modal participation matrix from the companion matrix. The mode shapes are thereafter usually found in a second stage, see Section 16.9. If a low-order method is used, however, the modal scaling may easily be computed by the method we present in Section 16.9.5. In OMA, on the other hand, since the mode shapes are typically unscaled, because no forces are measured and no modal participation factors are therefore necessary, it may be desirable to use a low-order MPE algorithm to directly estimate poles and mode shapes in one step. We will describe both types of formulations below.
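As a sketch of the high-order normalized formulation, the following computes poles from the eigenvalues of the companion matrix in Equation (16.35) for a time domain (z-domain) model; the input is assumed to be a list of square coefficient matrices [𝛼0], …, [𝛼m] with [𝛼m] already normalized to the identity:

import numpy as np

def poles_from_alphas(alphas, fs):
    """Continuous-time poles s_r from matrix polynomial coefficients."""
    m = len(alphas) - 1
    n = alphas[0].shape[0]
    C = np.zeros((m * n, m * n), dtype=complex)
    for k in range(m):                              # first block row of Eq. (16.35)
        C[:n, k * n:(k + 1) * n] = -alphas[m - 1 - k]
    C[n:, :(m - 1) * n] = np.eye((m - 1) * n)       # identity blocks below
    z = np.linalg.eigvals(C)                        # roots in z = exp(s*dt)
    return fs * np.log(z)                           # s_r = fs * ln(z_r)

For a frequency domain model with si = j𝜔, the last conversion step is omitted, since the eigenvalues are then the poles directly.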


There is one important difference between low- and high-order MPE methods: the low-order methods can only estimate as many modes as the number of measured responses, i.e., N ≤ NL, the long dimension of the FRF or IRF matrix. High-order methods are instead limited by the number of time lags or frequency values in the data, usually such that the number of poles 2N ≤ mNS, where m is the matrix coefficient polynomial order as used above.

16.5.2 Choosing Model Order

As mentioned in Section 16.3, one of the most important choices in all parameter estimation cases is to select the order of the mathematical model. In our case, that means the number of poles (or modes) in the frequency range of interest. As this is usually well known from the MIFs discussed in Section 16.2.10, it is reasonable to think that this would be no problem in modal parameter estimation. This is not the case, however, because the data we have in the form of FRFs, IRFs, or other measurement functions are not perfect. The errors we have may be due to the following:

● sensor imperfections, mainly imperfect frequency characteristics;
● the fact that the measured structure has modes other than the ones we are interested in (e.g., local modes in some part of the structure), and these modes are visible in (some of) our measured data; thus, the model we attempt to estimate from our data is not exactly the same as the model represented in the data; and
● noise in the measurements, which can also be vibrations due to unwanted sources (most common not only in OMA, but also in EMA cases where the measurements are not made under free–free support conditions).

The modern solution to the uncertainty about which model order is best is to produce a so-called stabilization diagram. Some authors call this diagram a consistency diagram instead, especially when poles from more than one parameter estimation condition (e.g., different MPE methods or normalization methods) are presented in the same diagram. Most authors and software companies call it a stabilization diagram, so we will use that term. The stabilization diagram is produced by estimating the poles and modal vectors, see Equation (16.36), for increasing model orders and plotting the pole locations in frequency on the x-axis, as a function of the model order on the y-axis. For each new model order, the poles and modal vectors are compared, and if the estimates are within certain thresholds, they are indicated as stable estimates; otherwise they are unstable estimates. Furthermore, the stability check is usually made in steps, so first the frequency can be stable, thereafter frequency and damping, and finally frequency, damping, and modal vector. When modal vectors are used in the stability diagram, depending on the algorithm used for parameter estimation, it may be either the modal participation vectors, or mode shapes, or, as suggested by Phillips and Allemang (2005), the so-called pwMAC. A typical stabilization diagram is shown in Figure 16.3. The idea of the stabilization diagram is that physical poles will continue to appear at different model orders, whereas computational poles will change between model orders. Although there is no theory that underlies this assumption, empirically it is known to be the case for all methods of MPE, as is also the case in Figure 16.3.

Figure 16.3 Example of a stabilization diagram. The x-axis is frequency, and the y-axis is the iteration step, which is most often the model order used for estimating the poles located at that level. Poles may be unstable, stable in frequency, stable in frequency and damping, or stable in frequency, damping, and modal vector. See text for details.
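A sketch of the stability classification behind such a diagram, comparing each pole at one model order with the closest pole at the previous order (the 1% and 5% tolerances are illustrative, not prescribed values), could be

import numpy as np

def stability_labels(poles_prev, poles_curr, ftol=0.01, dtol=0.05):
    """Label each pole in poles_curr as 'unstable', 'freq' or 'freq+damp'."""
    labels = []
    f_prev = np.abs(poles_prev) / (2 * np.pi)
    z_prev = -poles_prev.real / np.abs(poles_prev)
    for s in poles_curr:
        f, z = abs(s) / (2 * np.pi), -s.real / abs(s)
        k = np.argmin(np.abs(f_prev - f))            # closest previous pole
        if abs(f - f_prev[k]) / f_prev[k] > ftol:
            labels.append('unstable')
        elif abs(z - z_prev[k]) / max(z_prev[k], 1e-12) > dtol:
            labels.append('freq')
        else:
            labels.append('freq+damp')
    return labels

A full implementation would add a comparison of the modal vectors (for example via MAC or the pwMAC mentioned above) to obtain the 'all stable' label in Figure 16.3.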

In addition to the stabilization diagram, plotting the pole estimates in a Nyquist plot (imaginary versus real part) is useful. These diagrams are often called pole cluster diagrams, see Figure 16.4. This type of diagram is valuable for evaluating the spread in estimated parameters, predominantly in damping, as frequencies are usually well estimated. The pole surface density plot is a particular type of pole cluster diagram and is shown in Figure 16.5, see Allemang and Phillips (2004b). In this type of plot, a 2D histogram is plotted with the count within each bin coded in color. The consistency of certain frequency and damping value combinations can be evaluated in this type of diagram.

16.5.3 Matrix Coefficient Normalization

The eigenvalues of the companion matrix in Equation (16.35) give solutions based on setting the high-order matrix coefficient to unity. Another solution, which has turned out to be at least as good (Cauberghe et al. 2005; Cauberghe 2004), is to normalize the lowest coefficient, i.e., to set [𝛼0] = ⌈I⌋ in Equation (16.33) or (16.34). To see how to do this, we start with the matrix polynomial in a generic variable, z, as follows:
$$ z^m[\alpha_m] + z^{m-1}[\alpha_{m-1}] + \cdots + z^0[\alpha_0] = 0, \qquad (16.37) $$
which we multiply by $z^{-m}$, which yields
$$ z^0[\alpha_m] + z^{-1}[\alpha_{m-1}] + \cdots + z^{-m}[\alpha_0] = 0, \qquad (16.38) $$
which is now a polynomial in $z^{-1}$ with the coefficient matrices in reversed order. The roots of Equation (16.38) are thus found by solving a companion matrix similar to Equation (16.35), although the coefficient matrices appear in reversed order.

Figure 16.4 Example of a pole cluster diagram. The damping [%] of each estimated pole is plotted versus frequency [Hz], which reveals "clusters" of poles. In the figure, it is easy to assess how consistent the damping estimates are.

Figure 16.5 Example of a pole surface density plot. In this plot, a 2D histogram of all estimated poles is formed over frequency [Hz] and damping [%], with the count within each bin coded in color (seen as grayscale in the figure).


The eigenvalues of the companion matrix will then correspond to roots in z^(−1), which have to be converted to the poles, and the eigenvectors of the companion matrix will contain the modal vectors for each of the poles. Another alternative to obtain low-order normalization is to use the complex conjugate of the frequency response matrix for frequency domain methods, or to reverse the time axis for time domain methods.

It may be noted that since the poles are given as zr = e^(sr Δt), the roots in (zr)^(−1) = e^(−sr Δt), and since the poles are sr = −ζrωr ± jωd,r, the poles obtained from (zr)^(−1) will have positive real part and thus be unstable. Cauberghe showed that, particularly for discrete MPE methods, this produces very clear stabilization diagrams if only the unstable poles from the parameter estimation are used, since the computational poles in those cases are all stable. This is, however, not the case for continuous MPE methods. (All time domain methods are continuous, whereas methods in the frequency Z-domain are discrete, see Section 16.8.) As shown in Cauberghe et al. (2005), the order normalization can alternatively be accomplished by using time-reversed IRFs for time domain methods, or the complex conjugate of all FRFs for frequency domain methods. Both cases will lead to negative damping coefficients for the physical poles, whereas computational poles will still have positive damping. Thus, by changing the sign of all damping estimates, the physical poles become stable and the computational poles unstable, and since unstable poles are not plotted, the stabilization diagram will be clean.

The different order normalizations also offer the possibility to use both high-order and low-order coefficient normalization and, for each normalization, estimate all poles and modal vectors. This should lead to approximately twice as many stable poles in the stabilization diagram. Furthermore, if there are modes for which the different normalizations give different results, that is an indication of some uncertainty in the data, which is additional information to be used in the decision on modal parameters and their uncertainty. This may be particularly interesting for automated MPE (also referred to as autonomous MPE).
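As a small, hedged illustration of the pole conversion for the low-order (reversed) normalization (the variable names below are assumptions, not from the text):

% lam: eigenvalues of the companion matrix built from the reversed-order coefficients,
%      i.e. roots in z^-1;  fs: sampling frequency of the data
% Since lam = exp(-sr/fs), the physical poles follow directly as
sr = -fs*log(lam);
% Using fs*log(lam) instead gives poles with positive real part (negative damping);
% flipping the sign of the damping estimates, as described above, recovers stable poles.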

16.5.4 Data Compression

One of the features that differs between modal parameter estimation methods is the way noise in the measurement data (FRFs, IRFs, etc.) is reduced. Most current MPE uses some form of data compression, also called data condensation. Essentially all MPE methods rely on least squares solutions of the matrix polynomials we discussed above, or sometimes double least squares, or total least squares. Least squares in itself averages out some of the noise, but least squares solutions in the presence of noise are biased, since only noise in one "direction" is minimized. Therefore, reducing the amount of noise in the data can significantly reduce the uncertainty in the estimated modal parameters.

The most common way of reducing the noise in the measured functions in MPE is to use the singular value decomposition, SVD, or sometimes, if the data matrix is square, an eigenvalue decomposition, EVD. The principle is similar to the method we described for an image in Section 15.1.2. The difference, however, is that while for the image we wished to keep its size while reducing the information, in EMA we typically want to reduce the number of frequency responses or impulse responses while the (modal) information is maintained and noise and data redundancy are reduced.

In this case, a frequency response matrix obtained from a multiple-input/multiple-output (MIMO) measurement can be enhanced in the sense that a large number of response degrees of freedom are condensed down to the minimum number needed to contain the same information as the original measured FRFs. This method was first proposed by Lembregts (1988) and is further discussed in Dippery et al. (1996).

There are essentially two different approaches to this problem. One is to put all FRFs together into one FRF matrix, [H′], of size NL × NS·Nf, where Nf is the number of frequency lines, i.e., the FRFs of all references are stacked in the second matrix dimension. An SVD of the modified FRF matrix is then computed as follows:

\[ [H'] = [U]\,\lceil S\rfloor\,[V]^{H}. \qquad (16.39) \]

A transformation matrix, [T], is then defined as a reduced left singular vector matrix transposed, i.e., [T] = [Ur]^T, where [Ur] contains the first r columns of [U] (as many as the number of compressed FRFs desired). The compressed FRF matrix, [Hc], is thus computed by

\[ [H_c] = [T][H'], \qquad (16.40) \]

where [H′] is the NL × NS·Nf matrix of all FRFs. After the compression, the compressed FRFs can then be put back into a 3D matrix with all FRFs for each reference in a third dimension.

The method can alternatively be used on the transposed matrix, [H]^T, where the matrix [H′] will have size NS × NL·Nf, so that the short space (typically inputs) is compressed rather than the long space (typically responses). This was the original proposal in Lembregts (1988), and we will use it in Section 16.8.2. In Allemang and Phillips (2004a) it is demonstrated that, with increased damping or mode density, the compression based on the SVD of the entire complex matrix [H′] leads to erroneous results. They therefore recommend computing the SVD of the imaginary part of [H′], which produces a real-valued transformation matrix [T].

An alternative to the method described above is to keep the 3D structure of the FRF matrix and compute the SVD of the part of [H] due to each reference separately, which is the approach we will use here, presented by an example. The two methods produce very similar modal parameters, but the compressed FRFs tend to have better properties with this alternative method. The alternative method can only be used for high-order methods, where the modal participation matrix is found by the modal vectors.

Example 16.5.1 Assume we have a 3D matrix with frequency responses based on a 3-input/35-output measurement. Use MATLAB/Octave and SVD to produce enhanced frequency responses corresponding to a suitable number of singular values.

The following MATLAB/Octave code shows how to perform the condensation. The variable H contains the 3D FRF matrix with frequency, response, and reference, respectively, in the three dimensions.

for n = 1:3
    [U,S,V] = svd(H(:,:,n));
    if n == 1
        plot(diag(S))                      % plot the singular values
        title('Select number of singular values')
        xy = round(ginput(1));             % read cursor value
        x  = xy(1);                        % number of singular values to keep
    end
    Ht = U(:,1:x)*S(1:x,1:x)*V(1:x,1:x)';  % condensed FRFs for reference n
    He(:,:,n) = Ht;
end

The results of this condensation, using the first reference, are shown in Figure 16.6, where the original FRFs and the FRFs of the condensed matrix are plotted, together with the singular values. It can be seen that the singular value plot has a "knee" at approximately 13 singular values, after which it does not drop as fast. From the 35 FRFs originally measured, the matrix was therefore condensed using the first 13 singular values and the corresponding vectors of the singular vector matrices, [U] and [V]. The enhanced FRFs are shown in Figure 16.6(c). This can sometimes improve the numerical stability of modal analysis parameter extraction. The number of singular values to use for the compression can easily be automated by discarding all singular values smaller than a certain factor times the highest singular value, for example, 10^(−4). End of example.
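As a small illustration of that automation (a sketch only, using the same assumed variables as in the example above), the selection of the number of singular values could be done as:

s = diag(S);               % singular values of the current reference
x = sum(s > 1e-4*s(1));    % keep all values within a factor 1e-4 of the largest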


Figure 16.6 Singular values and compressed frequency response matrix from Example 16.5.1. In (a), the singular values of a frequency response matrix from measurements on a plate with 35 measured degrees of freedom, which are plotted in (b), are shown. In (c), the 13 FRFs of the condensed FRF matrix using the 13 highest singular values and corresponding vectors of the singular vector matrices, [U] and [V ], are shown. Note that the whole columns of [U] are used, whereas both columns and rows are reduced in the other matrices.


In the literature, the data compression as presented above is usually applied to the FRF matrix. An alternative, however, for time domain MPE such as the polyreference time domain, PTD, method (see Section 16.7.6), is to apply the compression to the matrix of IRFs instead of to the FRFs. Since the impulse responses are real-valued, this avoids the problem with complex transformation matrices mentioned above. This will be demonstrated in Section 19.7.

16.6 Time Versus Frequency Domain Parameter Extraction for EMA

Before we go into detail with some MPE methods in the time and frequency domains, a few words about the advantages and disadvantages of the two domains are worth mentioning. Note that this discussion is pertinent to EMA; a similar discussion for OMA is found in Section 17.3.4.

The measurement functions estimated for EMA applications are almost always FRFs. Time domain methods for EMA, however, utilize IRFs, so each measured FRF needs to be inverse Fourier transformed by the inverse fast Fourier transform (IFFT) algorithm. Furthermore, this is done for a band of frequencies, usually manually selected, as shown in Figure 16.7, and as described in Section 16.7.1. The computation of the IRFs has two implications. First, there will be unavoidable aliasing from modes outside the selected frequency band. This will produce computational poles inside the frequency band of interest (discrete time domain MPE methods will generate all poles inside the frequency band of interest). Second, the inverse FFT produces leakage, usually noticeable as a part at the end of the impulse response with increasing cycles, see Figure 16.7(b). This error was previously often called "wrap-around error," which reflects that the error is affected by the cyclic nature of the discrete Fourier transform (DFT).


Figure 16.7 Illustration of the principle of selecting a frequency range in the frequency response to be used for modal parameter estimation (indicated by the dashed vertical lines), in (a), and the resulting impulse response, computed by the procedure in Section 16.7.1, in (b). At the end of the impulse response it can be seen how the amplitude increases, which is a common effect of leakage.


Today, it is more common to call the error leakage, but the result is nevertheless the same. This also means that some of the first samples of the IRFs may be affected by leakage. These values should, however, be discarded anyway, since they do not represent free decay, as mentioned in Section 16.5.1. Similarly, the end of the impulse responses, where there may also be leakage effects, should not be used for parameter estimation, since the SNR is poor once the impulse response has decayed substantially.

FRFs, on the other hand, can be measured without any significant errors, except the random error due to extraneous noise, as we saw in Chapters 13 and 14, provided we use nonwindowing excitation techniques. There is also another advantage with FRFs, namely that frequency domain MPE can take modes outside the range of interest into account, by the so-called residual terms (not to be confused with residues!). Residual terms are defined by looking at a single FRF, which we split into three frequency ranges, assuming we are interested in the middle range, containing modes N1 + 1 to N2. The frequency response can then be written as follows:

\[
H_{pq}(\omega) = \sum_{r=1}^{N} \frac{Q_r\psi_{pr}\psi_{qr}}{\omega_{nr}^2-\omega^2+j2\zeta_r\omega_{nr}\omega}
= \sum_{r=1}^{N_1} \frac{Q_r\psi_{pr}\psi_{qr}}{\omega_{nr}^2-\omega^2+j2\zeta_r\omega_{nr}\omega}
+ \sum_{r=N_1+1}^{N_2} \frac{Q_r\psi_{pr}\psi_{qr}}{\omega_{nr}^2-\omega^2+j2\zeta_r\omega_{nr}\omega}
+ \sum_{r=N_2+1}^{N} \frac{Q_r\psi_{pr}\psi_{qr}}{\omega_{nr}^2-\omega^2+j2\zeta_r\omega_{nr}\omega}.
\qquad (16.41)
\]

We are now interested in the effect of the first sum for frequencies in the frequency range of the second sum. It is evident that this can be approximated by −RL/ω², the lower residual. Similarly, the effect of the third sum in the frequency range of the second sum is a constant, RU, the upper residual. The residual terms are often very important in order to produce a frequency response within the frequency range of interest that matches the measured FRF, since the latter will always contain the effects of the out-of-band modes. Thus, we can write the FRF as follows:

\[
H_{pq}(\omega) = \frac{-R_L}{\omega^2} + \sum_{r=1}^{N} \frac{Q_r\psi_{pr}\psi_{qr}}{\omega_{nr}^2-\omega^2+j2\zeta_r\omega_{nr}\omega} + R_U,
\qquad (16.42)
\]

where the sum now runs over the modes within the frequency range of interest.

Other advantages with frequency domain methods are that they can deal with an uneven frequency axis and that different frequency values can be weighted differently (for example, using the coherence function). Data with an uneven frequency step may occur from stepped-sine testing, where it is often desirable to use a smaller frequency step around the resonances, to obtain more data in these critical frequency regions, whereas the frequency regions between modes can be described in less detail.

In light of this, it may seem that the best MPE methods for EMA should be the frequency domain methods. Against this, however, stands the fact that time domain methods have proven very successful and apparently work very well, at least for computing the poles. The usual method to compute mode shapes is the frequency domain least squares method that we present in Sections 16.9.1 and 16.9.2, which does take residual terms into account. It should, however, also be considered that an important reason why frequency domain methods have not been very popular in the past is that, until relatively recently, they had


problems with ill-conditioning. This was solved by the introduction of z-transform formulations in the least squares complex frequency (LSCF) domain method (Guillaume et al. 2003), as discussed in Section 16.8.

A common argument in the modal analysis community is that frequency domain methods are better for structures with high damping. The argument seems to be based on the fact that with higher damping, the impulse response decays faster, and thus fewer time values contain the information. This argument is questionable, however, since the DFT is an orthogonal transformation and thus conserves all the information from one domain to the other. It seems more appropriate to consider the errors in the measurement functions in each of the domains. These errors are twofold: leakage, and the distribution of the noise in the two domains. Since leakage (a bias error) in FRF estimates can be effectively eliminated by using appropriate excitation signals, this is not an important factor for EMA (although it is for OMA, as we will discuss in Section 17.3.4). The noise in FRFs and IRFs is, in most cases where the coherence is close to unity, very small. By using data compression techniques (eigenvalue decomposition or SVD) to clean up the FRFs or IRFs, the noise is further suppressed. We can therefore conclude that, in most cases, time and frequency domain methods should perform similarly for the EMA case. In Section 19.8, we will indeed demonstrate that this is the case, for a simple example of a Plexiglas plate.

16.7 Time Domain Parameter Extraction Methods

After the introduction and overview of MPE in the previous sections, we will now discuss some common methods for MPE in the time domain. This section and the next two, on frequency domain methods and on methods for obtaining mode shapes and modal participation factors, are very technical and may be of interest only if you wish to understand the intricate mathematics of these methods. As a user of EMA or OMA, you may leave these sections unread and proceed to Section 16.10.

In UMPA terms, time domain modal parameter estimation algorithms may be deduced from Equation (16.32), which may be rewritten in matrix form as

\[
\begin{bmatrix} [\alpha_0] & [\alpha_1] & \cdots & [\alpha_m] \end{bmatrix}
\begin{bmatrix} [h(t_0)] \\ [h(t_1)] \\ \vdots \\ [h(t_m)] \end{bmatrix} = [0],
\qquad (16.43)
\]

where [h(ti)] are free decay matrices at time ti (where t0 is simply the first sample of the samples that are used for the parameter estimation). The size of the coefficient matrices and the order of the matrix polynomial differ between methods. If the method is a high-order method, then the coefficient matrices are NS × NS, the impulse response matrices are of size NS × NL (i.e., the transpose of our usual formulation), and m is a high number. If the method is a low-order method, the coefficient matrix size is NL × NL, the free decay matrices are of size NL × NS, and the order, m, is typically 1 or 2.

In any case, the matrix system is solved by first normalizing one of the coefficient matrices to the identity matrix and moving the corresponding free decay matrix over to the right-hand side of the equation system.


Then the free decay matrices are repeated by adding columns for shifted time delays (forming so-called Hankel matrices). The equation system may then be solved for the coefficient matrices by a least squares approach, and the poles are then found as the roots of the matrix polynomial, solved by forming a companion matrix and computing its eigenvalues (poles) and eigenvectors (modal participation factors for high-order methods, or mode shapes for low-order methods).

In the remaining part of this section, we will present the most common algorithms for modal parameter estimation in the time domain. We start with the oldest method for global MPE, the Ibrahim time domain (ITD) method, and an extended form of it, the multiple-reference Ibrahim time domain, MITD, method. Both of these are based on a first-order matrix polynomial, that is, m = 1, and coefficient matrices of size NL × NL. After this we present the class of methods derived from Prony's method: Prony's method itself, the least squares complex exponential (LSCE) method, and the PTD method. Finally, we present the modified multiple-reference Ibrahim time domain, MMITD, method. These algorithms use a high-order matrix polynomial with mNS ≥ 2N and coefficient matrices of size NS × NS. In this section and the following sections, we will not follow the outline of the UMPA theory, but rather follow the way the methods were originally presented. We will, however, briefly mention how each method relates to the UMPA theory.

16.7.1 Converting Bandpass Filtered FRFs into IRFs

The first step of a time domain parameter estimation for EMA is typically to select a frequency range, fmin ≤ f ≤ fmax, and to use this frequency range and the inverse FFT to produce impulse responses. This is done by first creating negative frequency values corresponding to the selected values, by producing an even real part and an odd imaginary part, see Section 2.7.2. Note that the negative frequencies shall in most cases be put in the upper half of the frequency response vector, see Chapter 9. In addition, the sampling frequency, fs, used when computing the FRF must be known. The impulse response can then be computed as follows:

\[
h_{pq}(n) = f_s\,\frac{1}{N}\sum_{k=0}^{N-1} H_{pq}(k)\,e^{j2\pi kn/N} = \Delta f \sum_{k=0}^{N-1} H_{pq}(k)\,e^{j2\pi kn/N},
\qquad (16.44)
\]

i.e., by the inverse discrete Fourier transform (IDFT), computed with the IFFT, multiplied by the sampling frequency. The multiplication by the sampling frequency, as seen in the equation, is made so that the continuous Fourier transform is approximated by the DFT times the frequency increment, Δf, since, of course, the impulse response is the inverse continuous Fourier transform of the frequency response. The frequency range selection works as a modulation in frequency, where the minimum frequency of the selected range appears as zero frequency in the impulse response. This means that frequencies estimated from the converted impulse responses will be relative to this zero frequency. Thus, after the MPE, the lower band edge frequency that was used in the range selection must be added to all estimated natural frequencies.
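As a minimal sketch of this conversion (the variable names H, f, fmin, fmax, and df are assumptions for illustration; H is one measured FRF as a column vector, defined at the frequencies in f with constant increment df):

Hsel = H(f >= fmin & f <= fmax);         % selected frequency band
L    = length(Hsel);
N    = 2*L;                              % length of the double-sided spectrum
Hd   = zeros(N,1);
Hd(1:L) = Hsel;                          % "positive" frequencies
Hd(N:-1:L+2) = conj(Hsel(2:L));          % mirrored, conjugated part in the upper half
h    = N*df*real(ifft(Hd));              % impulse response, scaled as in Equation (16.44)

Here N·df plays the role of the sampling frequency of the band-limited signal, consistent with the modulation interpretation described above.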


16.7.2 The Ibrahim Time Domain Method

We shall start with the oldest global method developed in the modal analysis community: the ITD method, first described in Ibrahim and Mikulcik (1973), although the usual reference to its origin is the later paper (Ibrahim and Mikulcik, 1977). The ITD method is a low-order method, which gives poles and unscaled mode shapes, which can then be used to compute a scaled model if IRFs are available. It can also be used on free decay measurements (as was Ibrahim's original goal) and on correlation functions, as we will see in Chapter 17. The ITD method is a single-reference method, based on the fact that any free decay, and thus for example an IRF, can be written as a sum of the contributions from each mode, see Section 6.5. The impulse response for DOF p at time tk can thus be written as follows:

\[ h_p(t_k) = \sum_{r=1}^{2N} \psi_{pr}\,e^{s_r t_k}, \qquad (16.45) \]

where you should note that for simplicity we omit the excitation subscript. We will assume in this section that all impulse responses we use are due to a common excitation source; we are thus using one column of the impulse response matrix. We assume we want to estimate N modes, whereby with the ITD method we assume N measurements. For all measurements, we can thus define a matrix, [h1], of all impulse responses using N time values from tk to tk + (N − 1)Δt as follows:

\[
\begin{bmatrix}
h_1(t_k) & h_1(t_k+\Delta t) & \cdots & h_1(t_k+(N-1)\Delta t) \\
h_2(t_k) & h_2(t_k+\Delta t) & \cdots & h_2(t_k+(N-1)\Delta t) \\
\vdots & \vdots & \ddots & \vdots \\
h_N(t_k) & h_N(t_k+\Delta t) & \cdots & h_N(t_k+(N-1)\Delta t)
\end{bmatrix}
=
\begin{bmatrix}
\psi_{11} & \psi_{12} & \cdots & \psi_{1(2N)} \\
\psi_{21} & \psi_{22} & \cdots & \psi_{2(2N)} \\
\vdots & \vdots & \ddots & \vdots \\
\psi_{N1} & \psi_{N2} & \cdots & \psi_{N(2N)}
\end{bmatrix}
\begin{bmatrix}
e^{s_1 t_k} & e^{s_1(t_k+\Delta t)} & \cdots & e^{s_1(t_k+(N-1)\Delta t)} \\
e^{s_2 t_k} & e^{s_2(t_k+\Delta t)} & \cdots & e^{s_2(t_k+(N-1)\Delta t)} \\
\vdots & \vdots & \ddots & \vdots \\
e^{s_{2N} t_k} & e^{s_{2N}(t_k+\Delta t)} & \cdots & e^{s_{2N}(t_k+(N-1)\Delta t)}
\end{bmatrix},
\qquad (16.46)
\]

which can be expressed as follows:

\[ [h_1]_{N\times N} = [\Psi]_{N\times 2N}\,[\Lambda]_{2N\times N}. \qquad (16.47) \]

Next, we write the impulse response of response p at time tk + Δt, which becomes

\[ h_p(t_k+\Delta t) = \sum_{r=1}^{2N} \psi_{pr}\,e^{s_r(t_k+\Delta t)} = \sum_{r=1}^{2N} \psi_{pr}\left(e^{s_r\Delta t}\right)e^{s_r t_k}. \qquad (16.48) \]

Similar to Equation (16.47), we can thus write the whole matrix of impulse responses at time tk + Δt as follows:

\[ [h_2]_{N\times N} = [\Psi]_{N\times 2N}\,\lceil e^{s_r\Delta t}\rfloor_{2N\times 2N}\,[\Lambda]_{2N\times N}. \qquad (16.49) \]

Similarly, it is now obvious that for a third time, tk + 2Δt, we can write a new matrix

\[ [h_3]_{N\times N} = [\Psi]_{N\times 2N}\,\lceil e^{s_r\Delta t}\rfloor^{2}_{2N\times 2N}\,[\Lambda]_{2N\times N}. \qquad (16.50) \]


We now form the equation

\[ \begin{bmatrix} [h_1] \\ [h_2] \end{bmatrix} = \begin{bmatrix} [\Psi] \\ [\Psi]\lceil e^{s_r\Delta t}\rfloor \end{bmatrix}[\Lambda], \qquad (16.51) \]

which can be written as follows:

\[ [H_1]_{2N\times N} = [\tilde{\Psi}]_{2N\times 2N}\,[\Lambda]_{2N\times N}. \qquad (16.52) \]

Similarly, for one time step further, we can write

\[ \begin{bmatrix} [h_2] \\ [h_3] \end{bmatrix} = \begin{bmatrix} [\Psi] \\ [\Psi]\lceil e^{s_r\Delta t}\rfloor \end{bmatrix}\lceil e^{s_r\Delta t}\rfloor[\Lambda], \qquad (16.53) \]

which can be written as follows:

\[ [H_2]_{2N\times N} = [\tilde{\tilde{\Psi}}]_{2N\times 2N}\,[\Lambda]_{2N\times N}, \qquad (16.54) \]

where it should be noted that the second expanded matrix equals the first times the diagonal pole matrix, \([\tilde{\tilde{\Psi}}] = [\tilde{\Psi}]\lceil e^{s_r\Delta t}\rfloor\). We can now eliminate [Λ] in Equations (16.52) and (16.54), which, noting that (AB)^(−1) = B^(−1)A^(−1), gives us

\[ [\tilde{\Psi}]^{-1}[H_1] = [\tilde{\tilde{\Psi}}]^{-1}[H_2] = \lceil e^{s_r\Delta t}\rfloor^{-1}[\tilde{\Psi}]^{-1}[H_2], \qquad (16.55) \]

which we left multiply by \([\tilde{\Psi}]\lceil e^{s_r\Delta t}\rfloor\) to obtain

\[ [\tilde{\Psi}]\lceil e^{s_r\Delta t}\rfloor[\tilde{\Psi}]^{-1}[H_1] = [H_2]. \qquad (16.56) \]

We now define the system matrix, [A], by

\[ [A] = [\tilde{\Psi}]\lceil e^{s_r\Delta t}\rfloor[\tilde{\Psi}]^{-1}, \qquad (16.57) \]

which means that we can write Equation (16.56) as

\[ [A]_{2N\times 2N}[H_1]_{2N\times N} = [H_2]_{2N\times N}. \qquad (16.58) \]

The system matrix can be calculated in two different ways. First, it can be calculated by post multiplying Equation (16.58) by [H1]^T on both sides and then post multiplying by the inverse of this product on both sides of the equal sign, which yields

\[ [A_1] = \left([H_2][H_1]^T\right)\left([H_1][H_1]^T\right)^{-1}. \qquad (16.59) \]

Second, the system matrix may be calculated by instead post multiplying Equation (16.58) by [H2]^T, which in a similar fashion yields

\[ [A_2] = \left([H_2][H_2]^T\right)\left([H_1][H_2]^T\right)^{-1}. \qquad (16.60) \]

To see how we can obtain the modal parameters from the system matrix, we post multiply Equation (16.57) by \([\tilde{\Psi}]\), which yields

\[ [A][\tilde{\Psi}] = [\tilde{\Psi}]\lceil e^{s_r\Delta t}\rfloor, \qquad (16.61) \]

which is an eigenvalue problem for each column in \([\tilde{\Psi}]\). The eigenvalues, λr, obviously correspond to e^(sr Δt), from which the poles of the system, sr, can be calculated by sr = fs ln(λr),


and the upper half of the eigenvectors corresponds to the (unscaled) mode shapes, as given by Equation (16.51). If FRFs are available, the mode shapes can then be scaled by the method in Section 16.9.5.

Both alternatives for the system matrix, [A1] and [A2], lead to biased modal parameters in the presence of noise, although the bias is "directed" in different ways. Therefore, it has been suggested to average these two expressions for the system matrix, which yields what is called a double least squares solution (Brincker and Ventura, 2015). In modern MPE, however, as we will see in the next sections, we may use noise suppression by, for example, the SVD, as we saw in Section 16.5.4, which effectively reduces the noise, and thus the bias in the modal parameters. Therefore, it may be a better approach to use each expression for the system matrix to estimate poles and mode shapes, and to use all estimates in a consistency diagram.

Since the size of the system matrix [A] is 2N × 2N, Equation (16.61) will yield N modes (2N poles and mode shapes in complex conjugate pairs). It should be particularly noted that N in this case, from Equation (16.46), is both the number of responses and the number of time values we used to build the Hankel matrix. This is the major limitation of the ITD method: it is not useful when the number of sensors is less than the number of modes we are interested in. Neither is it very practical when the number of sensors is much larger than the number of modes, as we may then estimate many more modes than are represented in the data. The latter disadvantage is, however, readily solved by the MITD method that we will discuss next. Because the ITD method results in the poles and mode shapes in one step, it is well suited for OMA. However, the original method as described here should not be used in real cases, as it is sensitive to noise. Instead, the next algorithm we will present, in Section 16.7.3, the MITD method, should be used. This method can, as the name suggests, also handle several references.

To see how the solutions to Equation (16.58) relate to the UMPA theory, we note that the equation can be rewritten as follows:

\[ [\alpha_0][H_1] + [\alpha_1][H_2] = [0], \qquad (16.62) \]

with the high-order polynomial coefficient matrix equal to the identity matrix. The companion matrix according to Equation (16.35) thus becomes

\[ [C] = -[\alpha_0] = [A], \qquad (16.63) \]

with [A] defined by Equation (16.58). The solution we have outlined in this section thus corresponds to high-order normalization of the polynomial equation. To obtain low-order normalization, we instead set the low-order coefficient matrix equal to the identity matrix, i.e., [α0] = ⌈I⌋. This will lead to a system matrix [ALOW] given by

\[ [A_{LOW}][H_2] = [H_1], \qquad (16.64) \]

where the eigenvalues of [ALOW] are related to the poles by λr = e^(−sr Δt). The low-order system matrix is obtained by solving Equation (16.64) similarly to the solutions for the high-order normalization. It should be noted that the low-order normalization may thus be obtained by simply swapping the two matrices [H1] and [H2] and leaving all calculations the same.
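As a minimal sketch of the ITD calculations above (not an optimized implementation; the variable names h and fs are assumptions, with h an N-by-K matrix of impulse responses from N responses and a common excitation, K > N + 2 time samples):

[N, K] = size(h);
H1 = [h(:, 1:N);   h(:, 2:N+1)];      % Equation (16.52), size 2N x N
H2 = [h(:, 2:N+1); h(:, 3:N+2)];      % Equation (16.54), size 2N x N
A  = (H2*H1') / (H1*H1');             % Equation (16.59), least squares system matrix
[Vec, Lam] = eig(A);                  % Equation (16.61)
p   = fs*log(diag(Lam));              % poles, sr = fs*ln(lambda_r)
psi = Vec(1:N, :);                    % unscaled mode shapes (upper half of eigenvectors)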


16.7.3 The Multiple-Reference Ibrahim Time Domain Method (MITD)

The MITD method is an extended version of the original ITD method, which allows multiple references. Furthermore, it uses the SVD to suppress noise in the measurements and is thus a very good method for real measurement data. Like the ITD method, it is a low (first)-order method, with coefficient matrices of size NL × NL, according to the UMPA theory from Section 16.5. The method was originally developed by Fukuzono (1986) and is also described in Allemang and Brown (1987), which is freely available on the Internet. It should be particularly noted that this method, except for slightly different scaling, is essentially identical to the today very popular covariance-driven stochastic subspace identification, Cov-SSI, method, which was developed much later by van Overschee and De Moor (1996). We will discuss this further below. The popular eigensystem realization algorithm, ERA, (Juang and Pappa, 1985), is a special case of the MITD method, as we will also discuss below.

For the MITD method, we start with the matrix of impulse responses, or any other form of free decays, [h(t)], expressed by

\[ [h(t)]_{N_L\times N_S} = [\Psi]_{N_L\times 2N}\,\lceil e^{s_r t}\rfloor_{2N\times 2N}\,[L]^T_{2N\times N_S}, \qquad (16.65) \]

where [Ψ] is the mode shape matrix, ⌈e^(sr t)⌋ the diagonal pole matrix with the pole factors, e^(sr t), on its diagonal, and [L] the modal participation matrix. We assume N is the number of modes, NS (the short dimension) the number of references (inputs), and NL (the long dimension) the number of responses (outputs). Equation (16.65) is now repeated at different times t, t + Δt, …, etc., a total of m times in the row direction and n times in the column direction, producing a so-called block Hankel matrix, [Hmn(t)], given by

\[
[H_{mn}(t)] =
\begin{bmatrix}
[h(t)] & [h(t+\Delta t)] & \cdots & [h(t+(n-1)\Delta t)] \\
[h(t+\Delta t)] & [h(t+2\Delta t)] & \cdots & [h(t+n\Delta t)] \\
\vdots & \vdots & \ddots & \vdots \\
[h(t+(m-1)\Delta t)] & [h(t+m\Delta t)] & \cdots & [h(t+(m+n-2)\Delta t)]
\end{bmatrix},
\qquad (16.66)
\]

which is of size mNL × nNS. Using the block Hankel matrix, Equation (16.65) can be expanded into

\[ [H_{mn}(t)]_{mN_L\times nN_S} = [\tilde{\Psi}]_{mN_L\times 2N}\,\lceil e^{s_r t}\rfloor_{2N\times 2N}\,[\tilde{L}]^T_{2N\times nN_S}, \qquad (16.67) \]

where the expanded matrices are defined by

\[
[\tilde{\Psi}] =
\begin{bmatrix}
[\Psi] \\
[\Psi]\lceil e^{s_r\Delta t}\rfloor \\
\vdots \\
[\Psi]\lceil e^{s_r(m-1)\Delta t}\rfloor
\end{bmatrix}
\qquad (16.68)
\]

and

\[
[\tilde{L}]^T =
\begin{bmatrix}
[L]^T & \lceil e^{s_r\Delta t}\rfloor[L]^T & \cdots & \lceil e^{s_r(n-1)\Delta t}\rfloor[L]^T
\end{bmatrix}.
\qquad (16.69)
\]


Equation (16.67) can be rewritten as follows:

\[ [\tilde{\Psi}]^{+}[H_{mn}(t)] = \lceil e^{s_r t}\rfloor[\tilde{L}]^T, \qquad (16.70) \]

where + denotes the pseudoinverse, see Appendix E. If we formulate the block Hankel ⌋ ⌈ matrix at time t + Δt, then because ea+b = ea eb , and because esr (t+Δt) is diagonal we obtain [ ] [ ] ⌈ s Δt ⌋ ⌈ s t ⌋ [ ]T ̃ er Hmn (t + Δt) = Ψ (16.71) e r L̃ , which, using Equation (16.70) yields [ ] [ ] ⌈ s Δt ⌋ [ ]+ [ ̃ ̃ er Ψ H H (t + Δt) = Ψ

]

.

(16.72) [ ] ⌈ s Δt ⌋ [ ]+ ̃ er ̃ as Equation (16.72) can finally be rewritten using a system matrix [A] = Ψ Ψ follows: ] [ ] [ (16.73) Hmn (t + Δt) = [A] Hmn (t) , mn (t)

mn

and thus the system matrix is also equal to [ ][ ]+ [A] = Hmn (t + Δt) Hmn (t) .

(16.74)

From the definition of the system matrix, it follows that [ ] [ ] ⌈ s Δt ⌋ ̃ = Ψ ̃ er , (16.75) [A] Ψ [ ] ̃ which is an eigenvalue problem for each column of Ψ . The eigenvalues of the system matrix [A], zr , are equal to zr = esr Δt and thus the poles, sr , are obtained by sr = fs ⋅ ln(zr ).

(16.76)

It should be noted that the size of the system matrix is mNL × mNL , and thus Equation (16.75) produces mNL ∕2 modes, which is usually much larger than the number of modes of the system. To implement the multiple-reference ITD method, we thus need an approach to reduce the size of the Hankel matrix to the order of the system. The method we will use to do this is the minimal realization approach which is the method giving name to the popular ERA, see for example Viberg (1995). In fact, the method presented here is equivalent to ERA if the block Hankel matrix in Equation (16.66) is produced using only two rows, i.e., m = 2. By using the SVD, we will also obtain very efficient noise reduction. The SVD is commonly used as an important step in the so-called subspace methods (e.g., stochastic subspace identification, SSI), since reducing the rank is a necessary step in computing signal subspace. The data reduction we will use is based on the SVD of the Hankel matrix Hmn (t), similarly to what we showed in Section 15.1.2. We thus first decompose the Hankel matrix into ] [ (16.77) Hmn (t) mN ×nN = [U]mNL ×mNL ⌈S⌋mNL ×nNS [V]TnNS ×nNS . L

S

The data reduction is made by using only 2N columns of [U] if we assume N modes. [ ] We denote the reduced left singular matrix by U ′ and define the compressed block Hankel matrix by ] [ ]T [ ] [ ′ (16.78) Hmn (t) = U ′ Hmn (t) . It could be noted here that it is common to alternately use a compression matrix [U ′ ]⌈S⌋1∕2 instead of [U ′ ] in subspace methods, for example, SSI, the idea being to take the “left half”

16.7 Time Domain Parameter Extraction Methods

of the SVD. This changes the scaling of the compression, but experience shows that it makes little difference in the resulting modal parameters. We need to understand how the compression affects the eigenvalue problem that has to be solved. First, we multiply the Hankel matrix at time t by the transpose of the compressed [ ]T left singular vector matrix U ′ . From Equation (16.67), we get [ ′ ]T [ ] [ ]T [ ] ⌈ s t ⌋ [ ]T ̃ e r L̃ , U Hmn (t) = U ′ Ψ (16.79) which is rewritten by using Equation (16.78) into [ ′ ] [ ′ ] ⌈ s t ⌋ [ ]T ̃ Hmn (t) = Ψ e r L̃ ,

(16.80)

by letting [ ] [ ] [ ] ̃ . ̃ ′ = U′ T Ψ Ψ

(16.81)

Next, we compress the Hankel matrix at time t + Δt by defining [ ′ ] [ ]T [ ] Hmn (t + Δt) = U ′ Hmn (t + Δt) ,

(16.82)

which by using Equations (16.71) and (16.81) gives [ ′ ] [ ]T [ ] ⌈ s Δt ⌋ ⌈ s t ⌋ [ ]T ̃ er Hmn (t + Δt) = U ′ Ψ e r L̃ .

(16.83)

This can be rewritten, by using Equation (16.79), as follows: ] [ ′ ] ⌈ s Δt ⌋ ([ ′ ]T [ ])+ [ ′ ]T [ ] [ ′ ̃ ̃ er U Ψ U Hmn (t) , Hmn (t + Δt) = Ψ

(16.84)

which is simplified into ] [ ′ ] ⌈ s Δt ⌋ [ ′ ]+ [ ′ ] [ ′ ̃ ̃ er Ψ Hmn (t) . Hmn (t + Δt) = Ψ

(16.85)

Similar to what we did in the ITD method in Section 16.7.2, Equations (16.59) and (16.60), we can now define a compressed system matrix either by post multiplying Equation (16.85) by [H′mn(t)]^T and then inverting the product on the right-hand side, which yields

\[ [A'_1] = \left([H'_{mn}(t+\Delta t)][H'_{mn}(t)]^T\right)\left([H'_{mn}(t)][H'_{mn}(t)]^T\right)^{-1}, \qquad (16.86) \]

or we can post multiply Equation (16.85) by [H′mn(t + Δt)]^T and then invert the product on the right-hand side, which yields

\[ [A'_2] = \left([H'_{mn}(t+\Delta t)][H'_{mn}(t+\Delta t)]^T\right)\left([H'_{mn}(t)][H'_{mn}(t+\Delta t)]^T\right)^{-1}. \qquad (16.87) \]

Combining either of the expressions for the compressed system matrix with Equation (16.85), after a few steps, we get

\[ [A'] = [\tilde{\Psi}']\lceil e^{s_r\Delta t}\rfloor[\tilde{\Psi}']^{+}, \qquad (16.88) \]

and thus, by right multiplying by [Ψ̃′],

\[ [A'][\tilde{\Psi}'] = [\tilde{\Psi}']\lceil e^{s_r\Delta t}\rfloor. \]

(16.89)




Equation (16.89) is an eigenvalue problem with the compressed mode shapes as eigenvectors and eigenvalues zr = e^(sr Δt). The expanded mode shapes are (see Equation (16.81)):

\[ [\tilde{\Psi}] = [U'][\tilde{\Psi}'], \qquad (16.90) \]

from which the mode shapes are extracted as the first NL rows of [Ψ̃] (the first block in Equation (16.68)). The solutions above correspond to high-order coefficient normalization, as discussed for the original ITD method in Section 16.7.2. For low-order normalization, the two matrices [H′mn(t)] and [H′mn(t + Δt)] are swapped and the calculations repeated, leading to eigenvalues of the system matrix related to the poles by zr = e^(−sr Δt).

It should be noted that the solution described here is a MIMO solution which can handle cases with repeated or closely spaced poles. The mode shapes obtained by Equation (16.90) are, however, still unscaled, as we have not obtained the modal participation vectors. If FRFs are available, the mode shapes can be scaled by a procedure we will discuss in Section 16.9.5. Generally, however, MITD is practical to use in OMA, since both poles and mode shapes are obtained in a single step. No forces are measured in OMA, and thus only unscaled mode shapes can be obtained by the MPE, see Chapter 17.

To simplify the understanding of MITD, a summary of the method may be appropriate; a code sketch following these steps is given after the list.

1. First, compute the Hankel matrices at times t and t + Δt using Equation (16.66). Experience has shown that using m = n is usually a good choice.
2. Compute the SVD of the Hankel matrix [Hmn(t)] using Equation (16.77).
3. To produce data for a stabilization diagram, for each of the model orders (number of modes) N = Nmin, Nmin + 1, …, Nmax:
   (a) Compute the compressed Hankel matrices using Equations (16.78) and (16.82), using 2N columns of [U].
   (b) Compute the compressed system matrix [A′] using either Equation (16.86) or (16.87), or both.
   (c) Solve for the eigenvalues and eigenvectors of [A′].
   (d) Convert the eigenvalues to poles using Equation (16.76).
   (e) Expand the mode shapes using Equation (16.90) and extract the first NL rows, which gives the mode shape matrix [Ψ] for that model order.
4. Produce a stabilization diagram with all results from step 3 and extract the poles and corresponding mode shapes.
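A minimal sketch of these steps follows (all variable names, and the choice m = n = 10, are assumptions for illustration; h3 is an NL-by-NS-by-K array of free decays or IRFs with K ≥ m + n time samples, and fs is their sampling frequency):

[NL, NS, K] = size(h3);
m = 10;  n = m;                                     % example choice, m = n
blk = @(t) h3(:,:,t);                               % [h(t)] block
H   = @(t0) cell2mat(arrayfun(@(i) ...
        cell2mat(arrayfun(@(j) blk(t0+i+j-2), 1:n, 'uni', 0)), (1:m)', 'uni', 0));
H0 = H(1);  H1 = H(2);                              % block Hankel matrices, Eq. (16.66)
[U, S, V] = svd(H0);                                % Equation (16.77)
for N = 1:Nmax
    Ur  = U(:, 1:2*N);                              % 2N columns of [U]
    Hc0 = Ur'*H0;   Hc1 = Ur'*H1;                   % Equations (16.78) and (16.82)
    A   = (Hc1*Hc0') / (Hc0*Hc0');                  % Equation (16.86)
    [Vec, Lam] = eig(A);
    p   = fs*log(diag(Lam));                        % poles for this model order
    Psi = Ur*Vec;   Psi = Psi(1:NL, :);             % mode shapes, Equation (16.90)
    % ... store p and Psi for the stabilization diagram ...
end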

16.7.4 Prony's Method

The oldest method we have for MPE, which can be used as a local MDOF method, is the method due to Baron de Prony (1795). It is sometimes called the complex exponential method, for reasons that will be obvious from the following. This method and evolutions of it are the most common procedures for extracting modal parameters, because they were adopted early in most commercial software (which in turn may also be because they are memory efficient).


Prony's method is based on the fact that for an impulse response, or a free decay, h(t), if we assume we have N modes, in the single-input/single-output (SISO) case we have that

\[ \sum_{r=0}^{2N} \alpha_r\,h(t_{k+r}) = 0, \qquad (16.91) \]

because we have no force, as we also saw in Section 16.5, Equation (16.32). This equation is then rewritten by extracting the last value in the sum in Equation (16.91) and by setting the coefficient α2N = 1 (high-order normalization), which does not change the result of Equation (16.91) because the equation still sums to zero. The result of this trick is the Prony formulation:

\[ h(t_{k+2N}) = -\sum_{r=0}^{2N-1} \alpha_r\,h(t_{k+r}). \qquad (16.92) \]

Equation (16.92) can be formulated as a vector equation,

\[
\begin{bmatrix} h(t_k) & h(t_{k+1}) & h(t_{k+2}) & \cdots & h(t_{k+2N-1}) \end{bmatrix}
\begin{Bmatrix} \alpha_0 \\ \alpha_1 \\ \vdots \\ \alpha_{2N-1} \end{Bmatrix}
= -h(t_{k+2N}). \qquad (16.93)
\]

We could also have chosen to normalize the equation by setting the coefficient α0 = 1 (low-order normalization). From Equation (16.93), we can conclude that, in UMPA terminology, Prony's method is a high-order method, since we formulate 2N polynomial coefficients αk.

The next trick is to repeat Equation (16.93) for a number of time values in order to get an overdetermined set of equations. If we assume we have N modes, i.e., 2N poles, as in the previous equations, we use L > 2N rows added to Equation (16.93), which then turns into

\[
\begin{bmatrix}
h(t_k) & h(t_{k+1}) & h(t_{k+2}) & \cdots & h(t_{k+2N-1}) \\
h(t_{k+1}) & h(t_{k+2}) & h(t_{k+3}) & \cdots & h(t_{k+2N}) \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
h(t_{k+L-1}) & h(t_{k+L}) & h(t_{k+L+1}) & \cdots & h(t_{k+2N+L-2})
\end{bmatrix}
\begin{Bmatrix} \alpha_0 \\ \alpha_1 \\ \vdots \\ \alpha_{2N-1} \end{Bmatrix}
= -\begin{Bmatrix} h(t_{k+2N}) \\ h(t_{k+2N+1}) \\ \vdots \\ h(t_{k+2N+L-1}) \end{Bmatrix}.
\qquad (16.94)
\]

Note that the matrix in Equation (16.94) contains the same impulse response (or free-decay function), shifted one sample forward for each column. (The same applies to the rows, but we will refer to the columns in the following.) As we already noted for the ITD method, such a matrix is called a Hankel matrix. Sometimes in MPE it is referred to as a Toeplitz matrix, which works equally well; a Toeplitz matrix is a Hankel matrix flipped upside down. (In a Toeplitz matrix, the diagonals are constant; in a Hankel matrix, the anti-diagonals are constant.) The name Hankel matrix seems, however, more common in the current parameter identification literature. The coefficients αr in Equation (16.94) are best solved by a least squares approach or a pseudoinverse. The final step is then to solve for the roots of the polynomial, i.e., with α2N = 1,

\[ z^{2N} + \alpha_{2N-1}z^{2N-1} + \alpha_{2N-2}z^{2N-2} + \cdots + \alpha_0 = 0, \qquad (16.95) \]


which produces poles in the z-domain, zr, which are then converted to poles, sr, in the Laplace domain by observing that zr = e^(sr Δt), and thus the poles are obtained by sr = fs ln(zr). The Prony method may be implemented in a modern way, producing a stabilization diagram as described in Section 16.5.2.

The Prony method produces poles for one IRF. In a larger measurement, the method can be extended by, for example, estimating the poles for each IRF and then averaging the pole estimates of each mode to obtain more accurate estimates. However, it may be even better to use a more sophisticated algorithm, such as LSCE or ITD, if more than one IRF is available.
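As a minimal sketch of Prony's method for a single IRF (the variable names are assumptions: h is the impulse response as a column vector of length at least 2N + L, fs the sampling frequency, N the assumed number of modes, and L > 2N the number of equations):

Hk  = hankel(h(1:L), h(L:L+2*N-1));    % L x 2N Hankel matrix of Equation (16.94)
rhs = -h(2*N+1 : 2*N+L);               % right-hand side vector
a   = Hk\rhs;                          % least squares solution, alpha_0 ... alpha_{2N-1}
z   = roots([1; flipud(a)]);           % roots of z^2N + a_{2N-1} z^{2N-1} + ... + a_0
p   = fs*log(z);                       % poles in the Laplace domain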

16.7.5 The Least Squares Complex Exponential Method

The LSCE method is a global MDOF single-reference method, which was first published by Brown et al. (1979), and is an extension of Prony's method to a number of IRFs and not just a single one. This is done by noting that the same coefficients, αr, apply for each impulse response, independently of response and excitation DOFs, and thus the matrix in Equation (16.94) and the right-hand side vector can be extended with additional rows for each impulse response we take into account. The solution is then similar to that for Prony's method. By solving the system using different numbers of columns in the matrix (and thus coefficients αr), different orders are obtained, and a stabilization diagram as described for Prony's method can be produced. If we assume we have the impulse responses in a column vector {h(t)}, we can form a block Hankel matrix, which modifies the equation system in Equation (16.94) into

\[
\begin{bmatrix}
\{h(t_k)\} & \{h(t_{k+1})\} & \cdots & \{h(t_{k+2N-1})\} \\
\{h(t_{k+1})\} & \{h(t_{k+2})\} & \cdots & \{h(t_{k+2N})\} \\
\vdots & \vdots & \ddots & \vdots \\
\{h(t_{k+L-1})\} & \{h(t_{k+L})\} & \cdots & \{h(t_{k+2N+L-2})\}
\end{bmatrix}
\begin{Bmatrix} \alpha_0 \\ \alpha_1 \\ \vdots \\ \alpha_{2N-1} \end{Bmatrix}
= -\begin{Bmatrix} \{h(t_{k+2N})\} \\ \{h(t_{k+2N+1})\} \\ \vdots \\ \{h(t_{k+2N+L-1})\} \end{Bmatrix},
\qquad (16.96)
\]

where the block Hankel matrix is the leftmost matrix, consisting of the impulse response vectors at different time instances. Solving Equation (16.96) in a least squares sense gives the coefficients αn, which are then used to solve for the poles via the roots, zr, of the coefficient polynomial, as in Equation (16.95) and the text beneath it.

16.7.6 Polyreference Time Domain

The PTD method developed by Vold et al. (1982) is a natural extension of the LSCE method, using impulse responses from multiple references. It is a MIMO method that can handle repeated or closely spaced poles, and it extracts poles and modal participation factors. The PTD method is probably the MPE method most used in practice, because it has been popular in commercial software.


To describe the PTD method, we will essentially follow the description in Deblauwe et al. (1987). The PTD method starts with impulse responses between an arbitrary response (output) point p and all reference (input) points q = 1, 2, …, NS, which can be written as follows:

\[
\begin{aligned}
h_{p1}(t) &= \sum_{r=1}^{2N} A_{p1r}\,e^{s_r t},\\
h_{p2}(t) &= \sum_{r=1}^{2N} A_{p2r}\,e^{s_r t},\\
&\;\;\vdots\\
h_{pN_S}(t) &= \sum_{r=1}^{2N} A_{pN_S r}\,e^{s_r t}.
\end{aligned}
\qquad (16.97)
\]

Each residue in this equation can be formulated relative to another residue by noting that Apqr = Qr 𝜓pr 𝜓qr ,

(16.98)

and thus, for another reference point, say m, we can write 𝜓 Apmr = mr Apqr . 𝜓qr We now introduce the modal participation factor, Lmqr , 𝜓 Lmqr = mr , 𝜓qr

(16.99)

(16.100)

whereby we can rewrite Equation (16.97), arbitrarily using reference 1 as the relative reference, as follows: hp1 (t) =

2N ∑ Ap1r esr t , r=1

2N ∑ L21r Ap1r esr t , hp2 (t) = r=1

(16.101)

⋮ hpNS (t) =

2N ∑

LNS 1r Ap1r esr t .

r=1

Thus, for time t = nΔt, we can write Equation (16.101) in matrix form as follows: [ ] { } ⌊I⌋ ⌈ sr Δt ⌋n { } ] e hp (n) = [ Ap1 , (16.102) Lm1 ] [ where ⌊I⌋ is a row vector with ones, the modal participation matrix Lm1 is ⎡ L L21,2 ⎢ 21,1 [ ] ⎢ L31,1 L31,2 Lm1 = ⎢ ⋮ ⎢ ⋮ ⎢L ⎣ NS 1,1 LNS 1,2

· · · L21,2N ⎤ ⎥ · · · L31,2N ⎥ ⎥, ⋱ ⋮ ⎥ · · · LNS 1,2N ⎥⎦

(16.103)




{ } the residue vector Ap1 is ⎧ A ⎪ p1,1 { } ⎪ Ap1,2 Ap1 = ⎨ ⎪ … ⎪A ⎩ p1,2N

⎫ ⎪ ⎪ ⎬, ⎪ ⎪ ⎭

.

(16.104)

⌈ ⌋ and finally, the diagonal matrix with the poles, esr Δt is ⎡ es1 Δt 0 … 0 ⎥ ⎥ ⎢ ⌈ s Δt ⌋ ⎢ 0 es2 Δt … 0 ⎥ r (16.105) e =⎢ ⎥. ⋮ ⋱ ⋮ ⎥ ⎢ ⋮ ⎢ 0 0 · · · es2N Δt ⎥⎦ ⎢ Using Nt + 1 time samples, we can repeat Equation (16.102) for n = 0, 1, … , Nt and get { } { } hp (0) = [L] Ap1 ⌈ { ⌋{ } } hp (1) = [L] esr Δt Ap1 } ⌈ ⌋2 { } { (16.106) Ap1 hp (2) = [L] esr Δt {

… ⌈ ⌋N { } } hp (Nt ) = [L] esr Δt t Ap1 .

The NS Nt poles and corresponding modal participation factors that we want to estimate are solutions to the matrix polynomial (see Section 16.5) ] ⌈ ⌋ [ ⌋N ⌈ (16.107) [𝛼(0)] + [𝛼(1)] [L] esr Δt + · · · + 𝛼(Nt ) [L] esr Δt t = [0] , where each [𝛼(n)] is size (NS × NS ). Each matrix coefficient comes from a multiplication in Equation (16.106): { } { } [𝛼(0)] hp (0) = [𝛼(0)] [L] Ap1 { ⌋{ } } ⌈ [𝛼(1)] hp (1) = [𝛼(1)] [L] esr Δt Ap1 ⌋2 { } { } ⌈ (16.108) Ap1 [𝛼(2)] hp (2) = [𝛼(2)] [L] esr Δt [

⋮ ⌋N { } ]{ } [ ] ⌈ 𝛼(Nt ) hp (Nt ) = 𝛼(Nt ) [L] esr Δt t Ap1 .

Summing both sides of Equation (16.108) yields Nt ∑ n=0

t { ⌈ ⌋n { } } ∑ Ap1 , [𝛼(n)] hp (n) = [𝛼(n)] [L] esr Δt

N

(16.109)

n=0

from which it can be seen that the right-hand side equals Equation (16.107) times the factor { } Ap1 which is independent of n, and thus the right-hand side equals the zero vector. The solutions to Equation (16.109) do not change if we scale it by letting the matrix coefficient [ ] 𝛼(Nt ) = ⌈I⌋ (i.e., high-order normalization; we could also set 𝛼(0) = 1). Thus, we can rewrite it as follows: Nt −1 ∑ { } { } (16.110) [𝛼(n)] hp (n) = − hp (Nt ) , n=0

16.7 Time Domain Parameter Extraction Methods

or for low-order normalization Nt ∑

} { } { [𝛼(n)] hp (n) = − hp (0) .

(16.111)

n=1

We can now expand Equation (16.110) for more DOFs, i.e., Nt ∑ n=0

[𝛼(n)] [h(n)]TNS ×NL = [0]NS ×NL ,

(16.112)

where it should be noted that the impulse response matrix is transposed, since we always define it for shaker testing. To solve the coefficients of the matrix polynomial, we repeat Equation (16.110) for a number, m + 1, time points, each shifted Δt, and put each shift in a new column. We thus obtain the equation, for high-order normalization: T [ ] [ ] [ ] ⎡ ⎤ h(0) h(1) … h(m) ⎢ [ ⎥ ] [ ] [ ] [ [ ]]⎢ h(1) h(2) … h(m + 1) ⎥ [𝛼(0)] [𝛼(1)] · · · 𝛼(Nt −) ⎢ ⎥ ⋮ ⋮ ⋱ ⋮ ⎢ ⎥ (16.113) [ ] [ ] [ ] ⎢ h(N − 1) h(N ) · · · h(m + N − 2) ⎥ t t t ⎣ ⎦ [ [ ] [ ] [ ] ]T = − h(Nt ) h(Nt + 1) · · · h(Nt + m − 1) ,

which can be written as ]T [ ] [ T [A] HmNt = H ′ ,

(16.114)

where it should be noted that [HmNt ] is the same block Hankel matrix [Hmn ], that we used in the MITD method in Section 16.7.3, with Nt number of columns. Here it is transposed, and when transposed has size (Nt × m). This is the equation to solve for the matrix coefficients, usually by a least squares approach or pseudoinverse. It should also be noted that the block Hankel matrix in Equation (16.114) should be defined once, based on the maximum number of modes that are desired. For each model order, the equation may then be defined by selecting appropriate rows of the Hankel matrix, see point 4 in the point list below. The final step to obtain the poles and modal participation factors is to solve for the roots of the matrix polynomial. For high-order normalization, this is done by combining the coefficient matrices into the companion matrix defined by ⎡ − [𝛼(Nt − 1)] − [𝛼(Nt − 2)] ⎢ ⎢ ⌈I⌋ [0] ⎢ ⌈I⌋ [0] ⎢ ⎢ ⋮ ⋮ ⎢ ⎢ … [0] ⎣

… − [𝛼(0)] ⎤ Nt −1 ⎧ zNt −1 {L} ⎫ ⎥ ⎧ zr {L}r ⎫ r ⎪ ⎪ rN −2 ⎪ … [0] ⎥ ⎪ Nt −2 ⎥ ⎪ zr {L}r ⎪ ⎪ zr t {L}r ⎪ … [0] ⎥ ⎨ ⎬ = zr ⎨ ⎬, ⋮ ⋮ ⎥⎪ ⎪ ⎪ ⎪ ⋱ ⋮ ⎥ ⎪ {L} ⎪ ⎪ {L} ⎪ r r ⎩ ⎭ ⎩ ⎭ ⌈I⌋ [0] ⎥⎦ (16.115)




from which the eigenvalues zr give the poles as sr = fs log(zr ), and the corresponding modal participation factors are the last NS coefficients in the eigenvectors. For low-order normalization, Equation (16.113) is replaced by [ ] [ ] [ ] T ⎡ h(1) h(2) · · · h(m + 1) ⎤ ⎢ [ ] [ ] [ ] ⎥ [ [ ] ] ⎢ h(2) h(3) · · · h(m + 2) ⎥ [𝛼(1)] [𝛼(2)] · · · 𝛼(Nt ) ⎢ ⎥ (16.116) ⋮ ⋱ ⋮ ⎥ ⎢ ⋮ ] [ ] [ ] [ ⎢ h(N ) h(N + 1) · · · h(m + N ) ⎥ t t t ⎦ ⎣ [ [ ] [ ] [ ] ]T = − h(0) h(1) · · · h(m) , which is solved similarly to the description above, see Section 16.5.3. Although the mathematics of this method is somewhat intimidating, the process to implement it is not that complicated. To simplify it, we shall summarize the procedure in a point list: 1. Select the impulse responses to be used for parameter extraction. If you start with FRFs, convert those to receptance form, then use the inverse FFT to produce the IRFs, as described in Section 16.7.1. 2. Decide on the highest order (number of poles), N, and set Nt = N∕NS (Nt being an integer). [ ] 3. Build the block Hankel matrices [H] and H ′ using Equations (16.113) and (16.114). 4. For each mode order N = 1, 2, … , Nt . (a) Use NNS rows of the Hankel matrices to find the N coefficient matrices [𝛼(n)] by solving Equation (16.113) in a least squares sense (b) Build the companion matrix defined by Equation (16.115) (c) Find the eigenvalues zr and eigenvectors of the companion matrix (d) Convert the eigenvalues to poles and extract the last NS coefficients of the eigenvector as the modal participation factors for the corresponding pole. 5. Produce a stabilization diagram based on the estimates of the poles and modal participation factors and let the user extract appropriate poles. For each selected pole, extract the accompanying participation factors. Note that point 4 means that only orders NS , 2NS , … will be computed by the PTD method. Note also, that in modern implementations, both high-order and low-order normalization of the equations may be used to produce twice as many pole estimates, which are then all plotted in a stabilization (consistency) diagram.

16.7.7 The Modified Multiple-Reference Ibrahim Time Domain Method (MMITD) The ITD method can also be modified into a form in which poles and modal participation factors are obtained (Allemang and Brown, 1987). We will refer to this method as the modified multiple-reference ITD method, MMITD. This method is suitable if mode shapes are to be obtained in a second stage, using, e.g., the least squares frequency domain (LSFD) method (see Section 16.9.2), the most common method for obtaining mode shapes for EMA. You should note throughout the remaining part of this section that we use some variables with

16.7 Time Domain Parameter Extraction Methods

new definitions compared to what they denoted in Section 16.7.3, where we described the MITD method. We thus start by computing the SVD of the Hankel matrix Hmn (t) defined by Equation (16.66). From Equation (16.77), it follows that the Hankel matrix at time t, and remembering that (ABC)T = CT BT AT , can be written as follows: [ ]T Hmn (t) nN ×mN = [V]nNS ×nNS ⌈S⌋nNS ×mNL [U]TmNL ×mNL . (16.117) s

L

From the general equation of the Hankel matrix, Equation (16.67), we also have that ]T [ ] [ ⌈ ⌋ [ ]T ̃ . (16.118) Hmn (t) nN ×mN = L̃ nN ×2N esr t 2N×2N Ψ 2N×mN S

L

S

L

]T We now use the right-hand singular vectors to use V ′ as the compression matrix, by [ ′] defining V as the 2N first columns of [V] in Equation (16.117). This produces the compressed Hankel matrix that we denote H ′ mn (t) to indicate it is different from the compressed Hankel matrix used for the modified ITD method. It becomes ] [ ]T [ ]T [ ]T [ ] ⌈ ⌋ [ ]T [ ′ ̃ , (16.119) H mn (t) = V ′ Hmn (t) = V ′ L̃ esr t Ψ [

which we rewrite as follows: ] [ ′′ ] ⌈ s t ⌋ [ ]T [ ′ ̃ , er Ψ H mn (t) = L̃ using

[

[ ]T [ ] ′′ ] L̃ = V ′ L̃ .

(16.120)

(16.121)

We apply the same compression to the Hankel matrix at time t + Δt and obtain [ ′ ⌋ ⌈ ⌋ [ ]T ] [ ]T [ ] T [ ]T [ ] ⌈ ̃ , H mn (t + Δt) = V ′ Hmn (t + Δt) = V ′ L̃ esr Δt esr t Ψ (16.122) which is simplified into [ ′ ] [ ′′ ] ⌈ s Δt ⌋ ⌈ s t ⌋ [ ]T ̃ . H mn (t + Δt) = L̃ er er Ψ

(16.123)

By rewriting Equation (16.120), we get that ⌈ s t ⌋ [ ]T [ ′′ ]+ [ ′ ] ̃ = L̃ er Ψ H mn (t) ,

(16.124)

which we insert into Equation (16.122) which gives us [ ′ ] [ ′′ ] ⌈ s Δt ⌋ [ ′′ ]+ [ ′ ] er L̃ H mn (t) . H mn (t + Δt) = L̃ [ ] We now define the compressed system matrix A′′ as follows: [ ′′ ] [ ′′ ] ⌈ s Δt ⌋ ([ ′′ ])+ L̃ A = L̃ , er

(16.126)

which means that we can write Equation (16.125) as follows: ] [ ][ ] [ ′ H mn (t + Δt) = A′′ H ′ mn (t) .

(16.127)

(16.125)

From this equation, we can calculate the compressed system matrix either by post multiply[ ]T ing by H′mn (t) on both sides, and then move all Hankel matrices over to the right-hand side which gives the solution: [ ′′ ] ([ ′ ][ ]T ) ([ ′ ][ ]T )−1 A1 = H mn (t + Δt) H ′ mn (t) H mn (t) H ′ mn (t) , (16.128)




[ ]T or we can post multiply by H ′ mn (t + Δt) and similarly obtain ][ ]T ) ([ ′ ][ ]T )−1 [ ′′ ] ([ ′ . H mn (t) H ′ mn (t + Δt) A2 = H mn (t + Δt) H ′ mn (t + Δt)

(16.129)

From the definition of the compressed system matrix, Equation (16.126), we also have that [ ′′ ] [ ′′ ] [ ′′ ] ⌈ s Δt ⌋ A L̃ = L̃ er , (16.130) ⌈ s Δt ⌋ r which is an eigenvalue problem which gives eigenvalues e and the compressed modal [ ] participation matrix. The size of A′′ is (2N × 2N). The final modal participation matrix [ ] coefficients are calculated from L̃ defined by [ ]T [ ′′ ]T L̃ = [V] L̃ , (16.131) from which we extract the modal participation factors from the first 2N rows and NS [ ] columns of L̃ . You should note that the similarity between Equations (16.130) and (16.115). The MMITD method is closely related to the PTD method, but uses the SVD compression, which reduces noise in the measurements. It therefore produces clearer stabilization diagrams and in some cases, better modal parameters. The equations above produce high-order normalization, as we have seen for ITD and MITD before. To obtain low order normalization, the two Hankel matrices in Equation (16.127) should be swapped and then solved for the system matrix. The eigenvalues will then be related to the poles as 𝜆r = e−sr Δt as for the other methods. The MMITD method can be implemented similarly to the MITD method described in Section 16.7.3. The following steps are conducted: 1. First compute the Hankel matrices at times t and t + Δt using Equation (16.66). 2. Compute the SVD of the Hankel matrix Hmn (t) using Equation (16.77). 3. For each of the model orders (number of modes) N = Nmin , Nmin + 1, … , Nmax . (a) Compute the compressed Hankel matrices using Equations (16.120) and (16.122) using 2N columns in [V]. [ ] (b) Compute the compressed system matrix A′′ using Equation (16.128). [ ] (c) Solve the eigenvectors and eigenvalues of A′′ . (d) Convert the eigenvalues to poles using Equation (16.76). (e) Expand the modal participation matrix using Equation (16.131) and extract the first [ ] N columns of L̃ , which contain the modal participation matrix [L]. 4. Produce a stabilization diagram with all results from 3 and extract the poles and corresponding modal participation factors that the user selects, usually by clicking in the stabilization diagram.

16.8 Frequency Domain Parameter Extraction Methods In this section, we will present some of the more common frequency domain methods for MPE. As we mentioned in the introduction to Section 16.7, this is very technical and may not be for you. If so, you may want to skip to Section 16.10.

16.8 Frequency Domain Parameter Extraction Methods

We will start by presenting the Polyreference Least Squares Complex Frequency method (pLSCF, or as we will refer to it, only LSCF) (Guillaume et al. 2003), which in a commercial version is sometimes called PolyMax. In UMPA terms, the LSCF method is a high-order method, the frequency domain equivalent of the PTD method, and thus uses coefficient matrices of size N_S × N_S. For a long time, high-order frequency domain methods were not very popular because, as we will see, they introduce high powers of the frequency values in the frequency range used for the parameter estimation. This issue was first addressed by introducing orthogonal polynomials by, for example, Richardson and Formenti (1982) and Vold (1990). With the introduction of the LSCF method, however, the issue was elegantly resolved by making the estimation in the Z-domain, as we will see in Section 16.8.1. Since the LSCF method is easily defined by using the UMPA framework, we will use this framework to describe the method.

We will also discuss two low-order methods: the first-order method referred to as the polyreference frequency domain (PFD) method (Zhang et al. 1985), and the second-order frequency domain direct parameter identification (FDPI) method (Lembregts et al. 1990). Both these methods use coefficient matrices of size N_L × N_L, i.e., PFD uses one coefficient matrix and the FDPI method uses two, see Section 16.8.2. To limit the space a little, we will only describe the FDPI method in detail, using the original development as described by Lembregts. In Section 16.8.3, we will discuss an extension of the method using the same Z-domain frequency axis as used by LSCF.

Frequency domain methods, in UMPA terms, are based on Equation (16.29). This equation may be defined in matrix formulation as follows:

\[
\big[\,[\alpha_0]\;[\alpha_1]\;\cdots\;[\alpha_m]\;[\beta_0]\;[\beta_1]\;\cdots\;[\beta_n]\,\big]
\begin{bmatrix}
(s_i)^0\,[H(\omega_i)] \\
(s_i)^1\,[H(\omega_i)] \\
\vdots \\
(s_i)^m\,[H(\omega_i)] \\
(s_i)^0 \\
(s_i)^1 \\
\vdots \\
(s_i)^n
\end{bmatrix}
= [0] . \qquad (16.132)
\]

We will see how to use this basic equation in Section 16.8.1. It is clear that the frequency variable enters in powers up to m and n (which are usually the same or similar numbers). Thus, if we use a regular frequency axis, where s_i = jω_i, this may cause numerical problems if the frequency range is large, particularly if m and n are large, i.e., for high-order methods.
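The conditioning problem can be illustrated numerically. The following MATLAB/Octave fragment is a minimal sketch (not from the book or ABRAVIBE) comparing the condition number of the polynomial basis for an ordinary jω frequency variable with the unit-circle variable used later by LSCF; the frequency range, f_max, and order are arbitrary assumed values.

```matlab
% Compare conditioning of the basis [s^0 s^1 ... s^m] for two frequency variables.
f    = (1:1000)';              % assumed frequency axis in Hz
fmax = 2000;                   % assumed "sampling" frequency for the mapping
m    = 20;                     % polynomial order (high-order method)

s_jw = 1i*2*pi*f;              % ordinary frequency variable s_i = j*omega_i
s_z  = exp(-1i*2*pi*f/fmax);   % unit-circle frequency variable (Section 16.8.1)

V_jw = s_jw.^(0:m);            % Vandermonde-type basis matrices (needs implicit
V_z  = s_z.^(0:m);             % broadcasting, i.e., recent MATLAB or Octave)

fprintf('cond, jw basis: %8.2e\n', cond(V_jw));
fprintf('cond, z  basis: %8.2e\n', cond(V_z));
```

The jω basis becomes extremely ill-conditioned for high orders and wide frequency ranges, whereas the unit-circle basis stays well conditioned.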

16.8.1 The Least Squares Complex Frequency Domain Method

As we mentioned in the introduction to this section, the LSCF method is a high-order method, which is the frequency domain version of the PTD method described in Section 16.7.6. The basic equation is therefore based on the transpose of the frequency response matrix and is

\[
\big[\,[\alpha_0]\;[\alpha_1]\;\cdots\;[\alpha_m]\;[\beta_0]\;[\beta_1]\;\cdots\;[\beta_n]\,\big]
\begin{bmatrix}
(s_i)^0\,[H(\omega_i)]^T \\
(s_i)^1\,[H(\omega_i)]^T \\
\vdots \\
(s_i)^m\,[H(\omega_i)]^T \\
(s_i)^0 \\
(s_i)^1 \\
\vdots \\
(s_i)^n
\end{bmatrix}
= [0] , \qquad (16.133)
\]

where m and n are the system order (related to the number of poles) from Equation (16.29). The left-hand matrix with the coefficient matrices has size N_S × (mN_S + (n + 1)N_L) and the right-hand matrix has size (mN_S + (n + 1)N_L) × N_L. To solve Equation (16.133), one of the coefficient matrices is normalized by setting it equal to the identity matrix, and this term is moved to the right-hand side. For high-order normalization (setting [α_m] = ⌈I⌋), the equation becomes

\[
\big[\,[\alpha_0]\;\cdots\;[\alpha_{m-1}]\;[\beta_0]\;\cdots\;[\beta_n]\,\big]
\begin{bmatrix}
(s_i)^0\,[H(\omega_i)]^T \\
(s_i)^1\,[H(\omega_i)]^T \\
\vdots \\
(s_i)^{m-1}\,[H(\omega_i)]^T \\
(s_i)^0 \\
(s_i)^1 \\
\vdots \\
(s_i)^n
\end{bmatrix}
= -(s_i)^m\,[H(\omega_i)]^T . \qquad (16.134)
\]

In order to get an overdetermined system, Equation (16.134) is repeated for all frequencies of interest by adding columns to the matrices on the left-hand and right-hand sides. In principle, this matrix equation may then be solved for the coefficient matrices by a least squares solution. By only keeping the [α_k] matrices, a companion matrix is then created as mentioned in Section 16.5, which is then solved for the poles and modal participation matrix, just as for the PTD method.

The direct solution mentioned in the previous paragraph is very slow due to the rather large sizes of the matrices. In Guillaume et al. (2003), and in the PhD thesis by Cauberghe (2004), it is shown how a much faster solution for the denominator coefficient matrices may be found. We will briefly describe the steps here. For each response DOF o, three Toeplitz-type matrices are first defined by their elements as follows:

\[
R_o(r,s) = \sum_{k} \big|W_o(\omega_k)\big|^2 \, e^{j2\pi(r-s)k/N} , \qquad (16.135)
\]

\[
S_o(r,q) = \sum_{k} \big|W_o(\omega_k)\big|^2 \,\big[H_o(\omega_k)\big]\, e^{j2\pi(r-s)k/N} , \qquad (16.136)
\]

\[
T_o(p,q) = \sum_{k} \big|W_o(\omega_k)\big|^2 \,\big[H_o(\omega_k)\big]^H \big[H_o(\omega_k)\big]\, e^{j2\pi(r-s)k/N} , \qquad (16.137)
\]

where k is the discrete frequency index (the sums run over all discrete frequencies used) and o is the output (response) index. Furthermore, p = (r − 1)N_S + 1 : rN_S and q = (s − 1)N_S + 1 : sN_S for r, s = 1, 2, …, m + 1. The matrix [W_o(ω_k)] is a weighting matrix which in its simplest form contains the number 1 in each position. These matrices, which contain the reduced normal equations, are used to produce a matrix [M] by

\[
[M] = \sum_{o=1}^{N_L} \Big( [T_o] - [S_o]^T [R_o]^{-1} [S_o] \Big) . \qquad (16.138)
\]

The coefficient matrices [α_0], [α_1], …, [α_m] for order m may be found by forming an equation system [A][X] = [B], where [A] is taken from the first N_S(m − 1) block columns of [M], and [B] from the next block column. Solving for [X], which contains the coefficient matrices, and then defining a companion matrix as usual, the poles and modal participation factors are found as the eigenvalues and eigenvectors of the companion matrix, respectively.

Although the mode shapes may be found by using the numerator coefficient matrices, the LSCF is an inconsistent estimator, and this way of calculating the mode shapes is therefore usually avoided (Cauberghe, 2004). It is more common to compute the mode shapes by the LSFD method (not to be confused with the LSCF method, despite the similarity in name), see Section 16.9.2. However, we need to use a modified version of the multiple-reference LSFD method, since LSCF as presented here does not give stable modal participation factors. This may be improved by implementing a maximum-likelihood estimator, see Cauberghe (2004). In Section 16.9.3, we show how to apply the LSFD without modal participation factors.

Perhaps the most important contribution with the development of the LSCF method was the introduction of a frequency variable similar to (but not entirely equal to) a Z-domain variable. For the LSCF method, the frequency variable is thus defined as s_i = exp(−j2πf/f_max), where f_max is usually set to the sampling frequency. This definition of the frequency variable means that all powers of frequency stay on the complex unit circle, which in turn means that the matrices above are well conditioned also for high model orders.

To summarize the LSCF method, the following points should be followed:

1. Compute the three Toeplitz-type matrices by Equations (16.135)–(16.137).
2. Compute the matrix [M] by Equation (16.138).
3. For each model order m = 1, 2, …, m_max:
   (a) Extract the matrices [A] and [B] from the matrix [M].
   (b) Solve for the coefficient matrices in the matrix [X] as described under Equation (16.138).
   (c) Build the companion matrix.
   (d) Compute the eigenvalues of the companion matrix, which correspond to the poles.
4. Produce a stabilization diagram based on the estimates of the poles and let the user select appropriate poles to extract.

A small code sketch of steps 3(c) and 3(d) is given below.
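The following MATLAB/Octave fragment is a minimal sketch (not taken from the book or the ABRAVIBE toolbox) of how denominator coefficient matrices can be assembled into a companion matrix whose eigenvalues give the poles. The data layout (a 3D array of coefficient matrices), the variable names, and the final mapping back from the unit-circle variable are assumptions for illustration; the sign of the logarithm follows from s_i = exp(−j2πf/f_max) with f_max = 1/dt and may need to be flipped depending on the normalization used.

```matlab
% Minimal sketch: companion matrix from denominator coefficients to poles.
% Assumed input: Alpha(:,:,k+1) holds [alpha_k], each Ns-by-Ns, with
% high-order normalization so that Alpha(:,:,end) = I; dt = 1/fmax.
function poles = companion_poles(Alpha, dt)
  [Ns, ~, mp1] = size(Alpha);      % mp1 = m + 1 coefficient matrices
  m = mp1 - 1;
  C = zeros(m*Ns);                 % block companion matrix
  for k = 1:m                      % top block row: -alpha_{m-1} ... -alpha_0
    C(1:Ns, (k-1)*Ns+1:k*Ns) = -Alpha(:, :, m-k+1);
  end
  C(Ns+1:end, 1:(m-1)*Ns) = eye((m-1)*Ns);   % sub-diagonal identity blocks
  z = eig(C);                      % roots of the matrix polynomial
  poles = -log(z)/dt;              % map unit-circle roots back to s-plane
end
```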


16.8.2 The Frequency Domain Direct Parameter Identification Method (FDPI)

One of the oldest MIMO methods for obtaining poles and participation factors in the frequency domain is the PFD method developed by Zhang et al. (1985) at the University of Cincinnati. This method is a first-order method and thus a frequency domain version of the ITD method. Another, similar but second-order, method was developed at around the same time at KU Leuven (Lembregts et al. 1990), also known as the FDPI method. We will present the latter method here, although we take inspiration from Allemang and Brown (1987). It starts with the time domain equation of the impulse response matrix:

\[
[h(t)] = [\Psi]\,\lceil e^{s_r t}\rfloor\,[L]^T , \qquad (16.139)
\]

which is Laplace transformed into

\[
[H_u(s)] = [\Psi]\,\lceil \Lambda^{-1}(s)\rfloor\,[L]^T , \qquad (16.140)
\]

where [H_u(s)] denotes the dynamic flexibility transfer function matrix, and the Laplace domain eigenvalue matrix ⌈Λ^{-1}(s)⌋ is the Laplace domain version of the "inverse diagonal pole matrix" in Equation (16.9). Again, note that it is not a matrix inverse operation; the inverse appears only in the name and in the variable symbol. It should be noted that each term on the diagonal of the eigenvalue matrix is the Laplace transform of the corresponding time domain term, e^{s_r t}. The mobility transfer function matrix can be obtained by multiplying Equation (16.140) by the Laplace operator, s, which gives us

\[
[H_v(s)] = s[H_u(s)] = [\Psi]\,\lceil P\rfloor\,\lceil \Lambda^{-1}(s)\rfloor\,[L]^T , \qquad (16.141)
\]

where the diagonal matrix ⌈P⌋ contains the poles, which come from the inner derivative of ⌈e^{s_r t}⌋, on its diagonal. For the accelerance transfer function matrix, we similarly get

\[
[H_a(s)] = s^2[H_u(s)] = [\Psi]\,\lceil P\rfloor^2\,\lceil \Lambda^{-1}(s)\rfloor\,[L]^T . \qquad (16.142)
\]

Using Equations (16.140) through (16.142), we can formulate a matrix equation:

\[
\begin{bmatrix} [H_u(s)] \\ [H_v(s)] \\ [H_a(s)] \end{bmatrix}
= \begin{bmatrix} [\Psi] \\ [\Psi]\lceil P\rfloor \\ [\Psi]\lceil P\rfloor^2 \end{bmatrix} [Q(s)] , \qquad (16.143)
\]

where we have introduced the matrix [Q(s)] defined by

\[
[Q(s)] = \lceil \Lambda^{-1}\rfloor\,[L]^T . \qquad (16.144)
\]

We also have that the Laplace transform of the derivative of Equation (16.139) is

\[
[H_v(s)] = s[H_u(s)] - [h(0)] = s[H_u(s)] - [\Psi][L]^T , \qquad (16.145)
\]

and for the accelerance similarly

\[
[H_a(s)] = s^2[H_u(s)] - s[h(0)] - [\dot h(0)] = s^2[H_u(s)] - s[\Psi][L]^T - [\Psi]\lceil P\rfloor[L]^T , \qquad (16.146)
\]

which, inserted into Equation (16.143), gives

\[
\begin{bmatrix} [H_u(s)] - [R_u] \\ s[H_u(s)] - [\Psi][L]^T \\ s^2[H_u(s)] - s[\Psi][L]^T - [\Psi]\lceil P\rfloor[L]^T \end{bmatrix}
= \begin{bmatrix} [\Psi] \\ [\Psi]\lceil P\rfloor \\ [\Psi]\lceil P\rfloor^2 \end{bmatrix} [Q(s)] . \qquad (16.147)
\]

In Lembregts et al. (1990), it is shown that two N_L × N_L coefficient matrices, [A_0] and [A_1], can be defined so that

\[
\big[\, [A_0]\;\; [A_1]\;\; \lceil I\rfloor \,\big]
\begin{bmatrix} [\Psi] \\ [\Psi]\lceil P\rfloor \\ [\Psi]\lceil P\rfloor^2 \end{bmatrix} = [0] . \qquad (16.148)
\]

From these matrix coefficients, we can compute the poles and mode shapes using a companion matrix as discussed in Section 16.5.1. By post-multiplying Equation (16.148) by [Q(s)] and using Equation (16.147), it follows that

\[
\big[\, [A_0]\;\; [A_1]\;\; \lceil I\rfloor \,\big]
\begin{bmatrix} [H_u(s)] \\ s[H_u(s)] - [\Psi][L]^T \\ s^2[H_u(s)] - s[\Psi][L]^T - [\Psi]\lceil P\rfloor[L]^T \end{bmatrix} = [0] . \qquad (16.149)
\]

This equation gives us

\[
[A_0][H_u] + s[A_1][H_u] + s^2[H_u] = [A_1][\Psi][L]^T + s[\Psi][L]^T + [\Psi]\lceil P\rfloor[L]^T = s[B_1] + [B_2] , \qquad (16.150)
\]

where

\[
[B_1] = [\Psi][L]^T \qquad (16.151)
\]

and

\[
[B_2] = [A_1][\Psi][L]^T + [\Psi]\lceil P\rfloor[L]^T . \qquad (16.152)
\]

We now introduce two residual matrices [R_L] and [R_U] to account for the modes below and above our frequency range, respectively. Thus, we replace the receptance matrix by

\[
[H_u(s)] = [H_u(s)] + [R_U] - \frac{1}{\omega^2}[R_L] , \qquad (16.153)
\]

by which we can define the basic equation at an arbitrary frequency ω_1, by setting s = jω_1 and rearranging Equation (16.150), as follows:

\[
\big[\, [A_0]\;\; [A_1]\;\; [B] \,\big]
\begin{bmatrix}
[H_u(\omega_1)] \\
j\omega_1 [H_u(\omega_1)] \\
-\lceil I\rfloor \\
-\frac{1}{\omega_1^2}\,\lceil I\rfloor \\
-j\omega_1\,\lceil I\rfloor \\
-\frac{1}{j\omega_1}\,\lceil I\rfloor
\end{bmatrix}
= -(j\omega_1)^2\,[H_u(\omega_1)] , \qquad (16.154)
\]

where the matrix [B] contains the mode shapes and residual matrices.


To solve for the coefficient matrices, Equation (16.154) is repeated for N_f frequencies ω_1, ω_2, …, ω_{N_f}, by defining an extended matrix

\[
[H_u'] =
\begin{bmatrix}
[H_u(j\omega_1)] & [H_u(j\omega_2)] & \cdots & [H_u(j\omega_{N_f})] \\
-\lceil I\rfloor & -\lceil I\rfloor & \cdots & -\lceil I\rfloor \\
-\frac{1}{\omega_1^2}\lceil I\rfloor & -\frac{1}{\omega_2^2}\lceil I\rfloor & \cdots & -\frac{1}{\omega_{N_f}^2}\lceil I\rfloor \\
-j\omega_1\lceil I\rfloor & -j\omega_2\lceil I\rfloor & \cdots & -j\omega_{N_f}\lceil I\rfloor \\
-\frac{1}{j\omega_1}\lceil I\rfloor & -\frac{1}{j\omega_2}\lceil I\rfloor & \cdots & -\frac{1}{j\omega_{N_f}}\lceil I\rfloor
\end{bmatrix} , \qquad (16.155)
\]

and another extended matrix for accelerance

\[
[H_a'] = -\big[\, (j\omega_1)^2[H_u(j\omega_1)] \;\; (j\omega_2)^2[H_u(j\omega_2)] \;\; \cdots \;\; (j\omega_{N_f})^2[H_u(j\omega_{N_f})] \,\big] . \qquad (16.156)
\]

An extended system matrix [A_e] can now be computed by

\[
[A_e] = [H_a']\,[H_u']^{+} , \qquad (16.157)
\]

where [⋅]^+ denotes a pseudoinverse. From this extended system matrix, the coefficient matrices [A_0] and [A_1] are extracted. The poles and mode shapes are then determined from the eigenvalues and eigenvectors of the companion matrix as defined by Equation (16.35). The coefficient matrices [A_0] and [A_1] both have size N_L × N_L, and thus we will obtain 2N_L pole estimates. The residual terms and modal participation factors can then be extracted from the matrix [B] if desired. Alternatively, these can also be obtained by the procedure in Section 16.9.5.

There are two main issues with the FDPI method: first, it generates as many poles as there are responses in the measured frequency response matrix [H_u] (or, more precisely, in most cases, the matrix [H_u] obtained by double-integrating the measured accelerance matrix), and second, it is ill-conditioned, at least for large frequency ranges, because the factors ω_k² grow rapidly. To address the first issue, we reduce the matrix [H_u] to each number of modes in the interval [1, N_m] by using an SVD approach. For each of these reduced matrices, the system matrix is built, the eigenvalues and eigenvectors are computed, and a stabilization diagram is produced, in which the user can select poles and mode shapes. The latter issue is solved the same way as for the LSCF method discussed in Section 16.8.1, which will be introduced for the FDPI method in Section 16.8.3.

The SVD approach used to reduce the FRF matrix starts by reshaping it into a two-dimensional matrix, [H_t]_{N_L × N_f N_S}, in which frequency runs along the columns and all FRFs with respect to a single response are put in the same row. This matrix (or its imaginary part, see Section 16.5.4) is then decomposed by SVD into

\[
[H_t]_{N_L \times N_f N_S} = [U]_{N_L \times N_L}\, \lceil S\rfloor_{N_L \times N_f N_S}\, [V]^H_{N_f N_S \times N_f N_S} , \qquad (16.158)
\]

from which a compressed matrix of N_c virtual responses (for each reference) is computed as follows:

\[
[H_c] = [T]^T [H_t]_{N_L \times N_f N_S} , \qquad (16.159)
\]


and the transformation matrix [T] is defined by taking the N_c first columns of the left singular matrix, [U], i.e.,

\[
[T] = \big[\, \{u\}_1 \;\; \{u\}_2 \;\; \cdots \;\; \{u\}_{N_c} \,\big] . \qquad (16.160)
\]

Once the compressed matrix is computed, it is reshaped into the general 2D form again. Using the compressed FRF matrix, the mode shapes obtained by solving the eigenvectors of the companion matrix, which we can denote [Ψ_c], will, of course, only have N_c virtual DOFs. The full mode shapes are therefore calculated by expanding the compressed mode shapes by

\[
[\Psi] = [T]\,[\Psi_c] . \qquad (16.161)
\]

To summarize the FDPI method, the following steps are followed:

1. Select the frequency responses to be used for parameter extraction, and a frequency range.
2. Integrate, by dividing once or twice by jω if necessary, so that the FRF matrix [H_u(ω)] is in the form of dynamic flexibility (displacement/force).
3. Decide on the highest order (number of modes), N_m ≤ 2N_L, to be computed.
4. Reshape the FRF matrix and compute the SVD as in Equation (16.158).
5. For each iteration step n_i = 1, 2, …, N_m/2:
   (a) Compute the compressed matrix [H_c] with n_i responses for each reference, using Equation (16.159), with the matrix [T] consisting of n_i columns of the left singular vector matrix.
   (b) Build the matrices [H_u'] and [H_a'] (with [H_c] replacing [H_u]) using Equations (16.155) and (16.156), respectively.
   (c) Compute the extended system matrix, Equation (16.157), and extract the coefficient matrices [A_0] and [A_1].
   (d) Build the companion matrix and solve for the 2n_i eigenvalues λ_r and the corresponding eigenvectors (compressed mode shapes [Ψ_c]) of the companion matrix.
   (e) Expand the compressed mode shapes to the physical DOFs using Equation (16.161).
6. Produce a stabilization diagram based on the estimates of the poles and mode shapes from step 5 and let the user select appropriate poles to extract.

It should be noted that the procedure described here only estimates the positive poles. The procedure to compute complex conjugate pairs can be found in Lembregts (1988). It should also be noted that the FDPI method can only compute as many modes as there are measured responses. This is sometimes a drawback, in which case a high-order method, or repeated parameter estimation over small frequency ranges using the FDPI method, must be used. Finally, it should be noted again that the FDPI method as described here produces poles and unscaled mode shapes. The modal scaling factors can be computed by the procedure in Section 16.9.5. A small code sketch of the SVD compression and expansion steps is given below.
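The following MATLAB/Octave fragment is a minimal sketch (not from the book or ABRAVIBE) of the SVD-based compression and expansion only, i.e., Equations (16.158)–(16.161). The FRF array layout (N_L × N_S × N_f) and the variable names are assumptions for illustration.

```matlab
% Minimal sketch of the FDPI FRF compression, Eqs. (16.158)-(16.161).
% Assumed input: H is an NL-by-NS-by-Nf dynamic flexibility FRF array.
[NL, NS, Nf] = size(H);
Ht = reshape(H, NL, NS*Nf);        % one row per response DOF, frequency in columns
[U, S, V] = svd(imag(Ht), 'econ'); % imaginary part often used, see Section 16.5.4

ni = 4;                            % current iteration step (number of virtual responses)
T  = U(:, 1:ni);                   % transformation matrix, Eq. (16.160)
Hc = T.' * Ht;                     % compressed (virtual) FRFs, Eq. (16.159)
Hc = reshape(Hc, ni, NS, Nf);      % back to 3D form for further processing

% ... FDPI pole/mode estimation on Hc gives compressed mode shapes PsiC ...
% Expansion back to physical DOFs, Eq. (16.161):
% Psi = T * PsiC;
```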

16.8.3 The Frequency Z-Domain Direct Parameter Method, FDPIz

In Section 16.8.2, it was mentioned that the FDPI method may be ill-conditioned for large frequency ranges because the frequencies jω_k appear squared in the equations (for example, Equation (16.150)). This may, however, be solved in the same way as for the LSCF method presented in Section 16.8.1, i.e., by using the frequency variable s_i = exp(−j2πf/f_max) instead of jω in Equations (16.154)–(16.156). This maps all frequencies onto the complex unit circle; positive frequencies onto the half circle from 1 to −1, and negative frequencies, if used, onto the half circle from −1 to 1. Setting f_max = f_s, the sampling frequency used for the FRF estimation, is usually a good choice.

16.8.4 The Complex Mode Indicator Function, CMIF Method

The last method we will describe for frequency domain modal parameter estimation is the CMIF method, originally developed by Shih et al. (1988). (Some authors use "indication" instead of "indicator" in the name, but in this book we use "indicator" consistently.) Originally, the CMIF, as the name implies, was developed as a MIF for multiple-reference cases, one of the first to be developed. It was noticed early on, however, that the CMIF functions may be used for modal parameter estimation. Our presentation will be relatively brief. For more details, see Allemang and Brown (2006).

The CMIF method is not a parameter estimation technique per se, but is used in conjunction with some MPE method, usually an SDOF MPE method. It is closely related to the frequency domain decomposition (FDD) method, which is popular for OMA, see Section 17.3.5. The CMIF method is also related to the FRF enhancement we discussed in Section 16.5.4. In UMPA terms, the CMIF method is neither a low-order nor a high-order method; rather, it is referred to as a zero-order method, as it does not use any coefficient matrix. It is also commonly referred to as a spatial method because, as we will see, it uses the spatial information (the mode shapes). A specific requirement is that it typically requires many references, where the minimum is as many references as there are modes. It should also be noted that the CMIF used for modal parameter estimation leads to approximate mode shapes, as we will see below.

Although originally defined by an EVD, today the CMIF is usually based on an SVD of the FRF matrix. As before, we assume that the FRF matrix is in dynamic flexibility format and of size N_L × N_S, where N_L > N_S. Then, at each frequency, the SVD is

\[
[H(\omega)] = [U(\omega)]\,\lceil \Sigma(\omega)\rfloor\,[V(\omega)]^H , \qquad (16.162)
\]

and the CMIF of order l is defined as the l-th singular value, σ_l, i.e.,

\[
\mathrm{CMIF}_l(\omega) = \sigma_l . \qquad (16.163)
\]

It is common to use the imaginary part of the FRF matrix (if accelerance or dynamic flexibility) in Equation (16.162) instead of the complex FRF matrix. This ensures that the singular vectors become real-valued. The SVD produces as many singular values (and thus CMIFs) as the short dimension of the FRF matrix, N_S, as shown in Figure 16.8. The first CMIF function will have a peak at each frequency where there is one or more modes, the second CMIF will peak only at frequencies where there are two or more modes, and so on. A left singular vector in [U(ω)] corresponding to a singular value at a peak, i.e., at a frequency close to a natural frequency, will approximate, but not be equal to, the mode shape of that mode. Similarly, the right singular vector at that frequency will be similar to the modal participation factor

[Figure 16.8 Plot of the four highest CMIFs for a measurement on a Plexiglas plate. Vertical axis: CMIFs (log scale); horizontal axis: Frequency [Hz], 0–1000 Hz.]

of that mode. The fact that the singular vectors are not equal to the mode shapes may be understood simply from the fact that the singular vectors are unitary (orthonormal if real, i.e., orthogonal and of unity length), whereas actual modes are not necessarily orthogonal, see Section 6.3.2. The singular vectors are usually good approximations of the mode shapes, however.

To explain the CMIF method, we compare the decomposition in Equation (16.162) with the modal decomposition from Equation (16.10), which we repeat here for convenience:

\[
[H(j\omega)] = [\psi]\,\lceil \Lambda^{-1}\rfloor\,[L]^T . \qquad (16.164)
\]

Comparing the two decompositions, we see the similarity of the three matrices in both decompositions. This does, however, not mean that the three matrices are pairwise equal. After computing the SVD in Equation (16.162) and defining the CMIFs, the next step is to find the peaks in the CMIFs that correspond to modes. For each CMIF_l, at the peak frequency ω_r for mode r, the unscaled mode shape is found by using the corresponding left singular vector {u(jω_r)}_l. Thus, in CMIF_1, all peaks are used, and in CMIF_2, only those peaks that correspond to close modes (where the first CMIF also had a peak) are selected, and so on, if there are modes in the higher CMIFs (which is, of course, rare). Similarly, the modal participation factor for each mode may be collected from the right singular vector.

For modal parameter estimation using the CMIF method, a so-called enhanced frequency response function (eFRF) is produced by using a modal filter, i.e., by computing an FRF for modal coordinates. This is obtained by using the mode shape for the mode in question, mode r, which defines the enhanced FRF for mode r by

\[
H_e(j\omega)_r = \{u(j\omega_r)\}_l^H\,[H(j\omega)]\,\{v(j\omega_r)\}_l . \qquad (16.165)
\]

Note that Equation (16.165) is calculated over a larger frequency range around the natural frequency. The enhanced FRF does not always entirely decouple the system into an SDOF system, but in the vicinity of the targeted natural frequency it is usually a good approximation. In that frequency range, the enhanced FRF will be equal to

\[
H_e(\omega)_r = \frac{Q_r}{j\omega - s_r} , \qquad (16.166)
\]

Qr , j𝜔 − sr

(16.166)

479

480

16 Experimental Modal Analysis

where Qr is a modal scaling constant, and sr is the pole, of the r-th mode. Fitting the enhanced FRF to a model as the right hand of Equation (16.166) thus allows to estimate the unknown poles and factors, Qr by a procedure similar to the one presented in Section 16.4.1. Since the enhanced FRF is calculated using several “physical” FRFs, the modal scaling issue is not trivial, and the modal scaling constant cannot be used directly. Although several methods have been suggested to solve this issue, see Allemang and Brown (2006), we suggest the accurate solution obtained by using the least squares method described in Section 16.9.5. This may be used also for cases with closely spaced (coupled) modes. To summarize the CMIF method, the following steps are followed: 1. Calculate the SVD in Equation (16.162) and define the CMIFl for all l from the singular values 2. Plot the CMIFs and let the user select the peaks corresponding to modes (or use some automatic peak finding algorithm) 3. For each identified peak (mode), r (a) Extract the mode shape as the corresponding left singular vector (b) Calculate the enhanced FRF by using Equation (16.166) (c) Fit the enhanced FRF to find the pole sr (the scaling factor may be omitted) For mode shape scaling, use the method suggested in Section 16.9.5.

16.9 Methods for Mode Shape Estimation and Scaling

We will now look at how to find the remaining information to obtain full, scaled modal models. If a high-order method was used for the pole estimation, poles and, for some methods, modal participation factors are known. In this case, we can set up an equation system and solve for the mode shapes and scaling factors either in the frequency domain using FRFs, or in the time domain using IRFs, as described in Sections 16.9.1–16.9.4. If a low-order method or the zero-order CMIF method was used for the pole estimation, then poles and mode shapes are known. In that case, it is relatively straightforward to find the modal scaling factors, see Section 16.9.5.

16.9.1 Least Squares Frequency Domain – Single Reference Case

The most common single-reference method used to estimate mode shapes when poles are known is the LSFD method. The main reason for this is that, in the frequency domain, residual factors can be estimated to account for the effects of modes outside the frequency band used for the mode shape estimation. This means that, in most cases, better mode shapes are estimated using this method than using the least squares time domain (LSTD) method. We start with this method for a single reference. We then have, for an arbitrary measured frequency response, if we use N complex conjugate pairs of poles,

\[
H_{pq}(\omega) = \sum_{r=1}^{N} \frac{A_{pqr}}{j\omega - s_r} + \frac{A^*_{pqr}}{j\omega - s^*_r} . \qquad (16.167)
\]


In most cases, we will only use a limited frequency band for the extraction of a limited number of modes, and we thus include residual terms, as described in Section 16.6. We thus rewrite Equation (16.167) as follows:

\[
H_{pq}(\omega) = \frac{R_{L,pq}}{\omega^2} + \sum_{r=N_1}^{N_2} \frac{A_{pqr}}{j\omega - s_r} + \frac{A^*_{pqr}}{j\omega - s^*_r} + R_{U,pq} . \qquad (16.168)
\]

Note that these residual terms are individual terms for each measured frequency response H_pq(ω). The use of residual terms makes the estimates of the mode shape coefficients more accurate in cases where there is a strong influence from out-of-band modes. To estimate the mode shape coefficients plus the two residual terms for an arbitrary frequency response, we can set up a matrix equation in the case of a single reference. As used previously in this chapter, the poles are renumbered from r = 1, 2, …, 2N by treating each pole as a separate pole number instead of keeping the complex conjugate pairs. We then define an "inverse pole matrix," [Λ^{-1}], by

\[
[\Lambda^{-1}] =
\begin{bmatrix}
\frac{1}{j\omega_1 - s_1} & \frac{1}{j\omega_1 - s_2} & \cdots & \frac{1}{j\omega_1 - s_{2N}} & 1 & \frac{1}{\omega_1^2} \\
\frac{1}{j\omega_2 - s_1} & \frac{1}{j\omega_2 - s_2} & \cdots & \frac{1}{j\omega_2 - s_{2N}} & 1 & \frac{1}{\omega_2^2} \\
\vdots & \vdots & & \vdots & \vdots & \vdots \\
\frac{1}{j\omega_M - s_1} & \frac{1}{j\omega_M - s_2} & \cdots & \frac{1}{j\omega_M - s_{2N}} & 1 & \frac{1}{\omega_M^2}
\end{bmatrix} , \qquad (16.169)
\]

where we have denoted the total of M discrete frequencies by ω_n = n2πΔf for the frequency lines n that are used. Note that there is no inverse calculation; the matrix [Λ^{-1}] is simply called the inverse pole matrix because it includes the reciprocal (inverse) of jω − s_r, analogous to how we have denoted similar matrices before. If no residuals are wanted, the last two columns are left out. We next define the residue coefficient vector for point p by

\[
\{A_p\} =
\begin{Bmatrix}
A_{pq1} \\ A_{pq2} \\ \vdots \\ A_{pq2N} \\ R_U \\ R_L
\end{Bmatrix} , \qquad (16.170)
\]

where, again, the last two coefficients are left out if no residuals are wanted. Using Equations (16.169) and (16.170), we can now formulate the matrix equation corresponding to Equation (16.167), which becomes

\[
[\Lambda^{-1}]\,\{A_p\} =
\begin{Bmatrix}
H_{pq}(\omega_1) \\ H_{pq}(\omega_2) \\ \vdots \\ H_{pq}(\omega_M)
\end{Bmatrix} , \qquad (16.171)
\]

where the vector on the right-hand side is the measured frequency response defined at the same frequencies as were put in the matrix [Λ^{-1}]. Equation (16.171) is an overdetermined system, assuming the number of frequencies M is larger than the number of modes. It can therefore be solved, for each measured location p, by a least squares or pseudoinverse approach.

It remains to scale the mode shapes. We refer to Chapter 6, where it is shown that setting the modal scale constant Q_r to unity scales the mode shapes to unity modal A. To find this scaling, we normalize the residue vector using the coefficient A_qqr, so that the final, scaled mode shape for mode r is

\[
\{\psi\}_r = \{A\}_r / \sqrt{A_{qqr}} . \qquad (16.172)
\]

It should be mentioned that the procedure outlined here produces complex mode shapes, which is usually desirable. At the end of Section 16.9.2, we will show how to obtain real-valued mode shapes, should that be desired. A small code sketch of the single-reference procedure is given below.
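The following MATLAB/Octave fragment is a minimal sketch (not taken from the book or the ABRAVIBE toolbox) of Equations (16.169)–(16.172) for one response DOF; variable names and data layout are assumptions for illustration.

```matlab
% Minimal single-reference LSFD sketch, Eqs. (16.169)-(16.172).
% Assumed inputs: w  - M-by-1 angular frequencies [rad/s]
%                 Hp - M-by-1 measured FRF H_pq(w) for response p, reference q
%                 s  - 2N-by-1 poles (conjugate pairs included)
M   = length(w);
Lam = 1 ./ (1i*w - s.');          % M-by-2N inverse pole matrix (broadcasting)
Lam = [Lam, ones(M,1), 1./w.^2];  % append upper/lower residual columns
Ap  = Lam \ Hp;                   % least squares solution, Eq. (16.171)
A   = Ap(1:end-2);                % residues; Ap(end-1:end) are the residuals

% Scaling to unity modal A, Eq. (16.172): once all response DOFs are solved,
% each mode r is divided by sqrt of its driving point residue Aqq_r, e.g.
% psi_r = A_r / sqrt(Aqq_r);
```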

16.9.2 Least Squares Frequency Domain – Multiple Reference Case

The LSFD method in the case of multiple references is based on the fact that we know the poles and modal participation factors, and that we wish to find the scaled mode shapes. We must thus have used a multiple-reference method, such as the PTD method, to extract the poles and the modal participation factors. If we look at one response location, p, and add residual terms similarly to the single reference case in Section 16.9.1, we have that

\[
\{H_p(\omega)\} = [L]^T \lceil \Lambda\rfloor \{\psi_p\} + \{R_U\} - \frac{1}{\omega^2}\{R_L\} , \qquad (16.173)
\]

where the diagonal pole matrix is defined by Equation (16.12) and the measured column vector {H_p(ω)} contains the frequency responses for response DOF p relative to all the references. To obtain a single matrix equation that we can solve for the mode shapes, we need to define some new matrices. First, we define an extended modal participation matrix, [L'], by

\[
[L'] = \big[\, [L]^T \;\; \lceil I\rfloor \;\; \lceil I\rfloor \,\big] , \qquad (16.174)
\]

and an extended eigenvalue matrix ⌈Λ'⌋ by

\[
\lceil \Lambda'\rfloor =
\begin{bmatrix}
\lceil \Lambda\rfloor & [0] & [0] \\
[0] & \lceil I\rfloor & [0] \\
[0] & [0] & -\frac{1}{\omega^2}\lceil I\rfloor
\end{bmatrix} , \qquad (16.175)
\]

and finally, the vector with mode shape coefficients for response DOF p,

\[
\{\psi_p\} =
\begin{Bmatrix}
\psi_{p1} \\ \psi_{p2} \\ \vdots \\ \psi_{p2N} \\ \{R_U\} \\ \{R_L\}
\end{Bmatrix} , \qquad (16.176)
\]


where each of the residual vectors has size (N_S × 1). Using these definitions, Equation (16.173) can now be rewritten as follows:

\[
\{H_p(\omega)\} = [L']\,\lceil \Lambda'\rfloor\,\{\psi_p\} = [B(\omega)]\,\{\psi_p\} , \qquad (16.177)
\]

which is the base equation for the LSFD method. What remains is to produce an overdetermined system of equations. This is achieved by repeating Equation (16.177) for all frequencies in the range of interest, ω_1, ω_2, …, ω_M. We thus obtain a large equation,

\[
\begin{Bmatrix}
\{H_p(\omega_1)\} \\ \{H_p(\omega_2)\} \\ \vdots \\ \{H_p(\omega_M)\}
\end{Bmatrix}
=
\begin{bmatrix}
[B(\omega_1)] \\ [B(\omega_2)] \\ \vdots \\ [B(\omega_M)]
\end{bmatrix}
\{\psi_p\} , \qquad (16.178)
\]

which is solved either in a least squares fashion, or by pseudoinverse, for each response location p, for which the residues and residuals are saved. Note that the matrix [B] in Equation (16.178) only needs to be built once, as it is independent of the response location p. Consequently, if the pseudoinverse of [B] is used, it only needs to be computed once.

It remains to scale the mode shapes. After solving Equation (16.178), assume we have stored the results in a residue matrix [A]_r for each mode r. For multiple references, we use the modal participation factors relative to one of the references to get the scaling. From Equations (16.98) to (16.100), it follows that a modal participation factor L_qr is

\[
L_{qr} = Q_r\,\psi_{qr} . \qquad (16.179)
\]

We also know from Section 6.4.4 that scaling modes to unity modal A corresponds to using the modal scale constant Q_r = 1. From one of the columns in the stored mode shape matrix obtained by solving Equation (16.178) for all responses p (and, if residuals were used, removing those from the end of the vector), we can form a residue column vector for one of the references, q, and mode r, which is

\[
\{A_q\}_r = L_{qr}\,\{\psi\}_r = \psi_{qr}\,\{\psi\}_r . \qquad (16.180)
\]

From this vector, a mode shape vector scaled to unity modal A is computed by dividing the vector {A_q}_r by the square root of its q-th coefficient, which obviously contains the mode shape coefficient squared, i.e., A_qqr = ψ_qr². It is usually best to choose the largest modal participation factor for each mode for this scaling process. Using this factor avoids dividing by a zero, or close-to-zero, modal participation factor.

As for the single reference case, the solution to Equation (16.178) produces complex mode shapes. In some cases, particularly for model validation, since normal modes from FEMs are real-valued, it may be desired to obtain real-valued mode shapes from EMA. The LSFD method can readily be adjusted for this purpose in the following way. First, the modal participation factor matrix is halved by only including the modal participation vectors corresponding to the poles with positive imaginary part, producing a matrix [L_R] of size N_S × N. Second, the diagonal eigenvalue matrix is replaced by a modified matrix

\[
\lceil \Lambda_R\rfloor =
\begin{bmatrix}
\frac{1}{j\omega - s_1} - \frac{1}{j\omega - s_1^*} & 0 & \cdots & 0 \\
0 & \frac{1}{j\omega - s_2} - \frac{1}{j\omega - s_2^*} & \cdots & 0 \\
\vdots & & \ddots & \vdots \\
0 & \cdots & 0 & \frac{1}{j\omega - s_N} - \frac{1}{j\omega - s_N^*}
\end{bmatrix} , \qquad (16.181)
\]

where s_1, s_2, …, s_N are the poles with positive imaginary part. Substituting these new matrices into Equations (16.174) and (16.175), respectively, and proceeding with the solution described above, will produce real-valued mode shapes. A small code sketch of the multiple-reference LSFD solution is given below.
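As an illustration of Equations (16.174)–(16.178), the following MATLAB/Octave fragment is a minimal sketch (not from the book or ABRAVIBE) of how the stacked matrix [B] can be assembled once and reused for all response DOFs. The data layout (the participation matrix stored with one row per reference) and the variable names are assumptions for illustration.

```matlab
% Minimal multiple-reference LSFD sketch, Eqs. (16.174)-(16.178).
% Assumed inputs: w - M-by-1 angular frequencies; s - 2N-by-1 poles;
%                 L - Ns-by-2N modal participation matrix (row per reference);
%                 H - Ns-by-Np-by-M FRF array (reference, response, frequency).
[Ns, Np, M] = size(H);
N2 = length(s);
B  = zeros(M*Ns, N2 + 2*Ns);                % stacked row blocks [B(w_k)]
for k = 1:M
    Lam = diag(1 ./ (1i*w(k) - s));         % diagonal pole matrix at w_k
    B((k-1)*Ns+1:k*Ns, :) = [L*Lam, eye(Ns), -eye(Ns)/w(k)^2];
end
Bpinv = pinv(B);                            % computed once, reused for all p
Psi = zeros(N2, Np);
for p = 1:Np
    Hp = reshape(H(:, p, :), Ns, M);        % Ns-by-M, one column per frequency
    x  = Bpinv * Hp(:);                     % stacked {H_p(w_k)} vectors
    Psi(:, p) = x(1:N2);                    % unscaled mode shape coefficients
end
```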

16.9.3 Least Squares Frequency Domain – Multiple Reference Without MPFs

In some cases, particularly when estimating poles using the LSCF method, poles are estimated without modal participation factors. In this case, provided that FRF data are available for several references, an alternative technique may be used to obtain multiple-reference estimates of the mode shapes, i.e., to be able to decouple modes that are closely spaced. This method, which is, somewhat confusingly, also referred to as LSFD, was suggested in Guillaume et al. (2003). It is an extension of the single-reference method described in Section 16.9.1, with an added feature separating the mode shapes by use of the SVD, as we will get to below. First, Equation (16.171) is extended for Q references, by adding columns to obtain an equation with all residues for the response p with all references in a matrix [A_p], i.e.,

\[
[\Lambda^{-1}]\,\big[\,\{A_{p1}\}\;\{A_{p2}\}\;\cdots\;\{A_{pQ}\}\,\big]
=
\begin{bmatrix}
H_{p1}(\omega_1) & H_{p2}(\omega_1) & \cdots & H_{pQ}(\omega_1) \\
H_{p1}(\omega_2) & H_{p2}(\omega_2) & \cdots & H_{pQ}(\omega_2) \\
\vdots & \vdots & \ddots & \vdots \\
H_{p1}(\omega_M) & H_{p2}(\omega_M) & \cdots & H_{pQ}(\omega_M)
\end{bmatrix} , \qquad (16.182)
\]

where residual terms may be included if the last two columns of the inverse pole matrix are appropriately defined, see Equation (16.169) and the following equations. Equation (16.182) is solved in a least squares sense, for example by a pseudoinverse, to obtain the residue matrix [A_p], which has size N_m × Q. This is repeated for all responses, and all residue terms are then reshaped into residue matrices [R]_r, of size N_L × N_S, for each mode r. Each residue matrix [R]_r contains linear combinations of the modes, by the modal participation factors, i.e.,

\[
[R]_r = \{\psi\}_r \{L\}^T , \qquad (16.183)
\]

and furthermore, the rank of this matrix is, trivially, one, except for closely spaced modes, for which this method should be avoided. Therefore, the residue matrix may be separated into the mode shape vector and the modal participation vector by an SVD:

\[
[R]_r = [U][S][V]^H , \qquad (16.184)
\]

for which, since the rank is one, only the first singular value has a (significant) value. Thus, the mode shape vector {ψ}_r is taken from the first column of [U], multiplied by the first singular value, and the modal participation factor is taken from the complex conjugate of the first column of [V]. A small code sketch of this separation is given below.
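The rank-one separation in Equations (16.183)–(16.184) can be illustrated with the following minimal MATLAB/Octave sketch (not from the book or ABRAVIBE); the residue matrix for one mode is assumed to be available as R.

```matlab
% Minimal sketch: separate a residue matrix into mode shape and
% modal participation vector, Eqs. (16.183)-(16.184).
% Assumed input: R is the NL-by-NS residue matrix [R]_r for one mode r.
[U, S, V] = svd(R);
psi_r = U(:, 1) * S(1, 1);   % mode shape vector (first left singular vector)
L_r   = conj(V(:, 1));       % modal participation vector
% Sanity check: for a (noise-free) rank-one residue matrix,
% R is well approximated by psi_r * L_r.'
```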

16.9.4 Least Squares Time Domain

We can also compute mode shapes in the time domain. As said before, since we cannot apply residual terms to account for out-of-band modes in the time domain, this method is not as good as the LSFD method. There may be cases, however, when it is desirable to use time domain functions to compute mode shapes, and then this method can be used. One such case may be OMA based on correlation functions, see Chapter 17. In case we have impulse responses for one reference, the calculation of the mode shape coefficients for each response location is simple. We start by defining a matrix for N_t time instances by

\[
[P] =
\begin{bmatrix}
e^{s_1 t_1} & e^{s_2 t_1} & \cdots & e^{s_{2N} t_1} \\
e^{s_1 t_2} & e^{s_2 t_2} & \cdots & e^{s_{2N} t_2} \\
\vdots & \vdots & & \vdots \\
e^{s_1 t_{N_t}} & e^{s_2 t_{N_t}} & \cdots & e^{s_{2N} t_{N_t}}
\end{bmatrix} , \qquad (16.185)
\]

where it is practical to use the poles with positive imaginary part in the first half, and the complex conjugate poles in the second half. A column vector with the impulse response h_pq(t_k) at different times is then defined by

\[
\{h_{pq}\} =
\begin{Bmatrix}
h_{pq}(t_1) \\ h_{pq}(t_2) \\ \vdots \\ h_{pq}(t_{N_t})
\end{Bmatrix} , \qquad (16.186)
\]

by which we can formulate the equation with the mode shape coefficients for DOF p and all modes by

\[
\{h\}_{N_t \times 1} = [P]_{N_t \times 2N}\,\{\psi_p\}_{2N \times 1} , \qquad (16.187)
\]

which is solved in a least squares sense for each response DOF p. If the pole matrix was sorted as suggested above, the upper half of the mode shape coefficient vector contains the mode shapes for the N modes that we wish to calculate.

For multiple references, the approach is slightly different. We use the standard form of the transposed impulse response vector for one response DOF and all references at time t_k:

\[
\{h_p(t_k)\}_{N_S \times 1} = [L]_{N_S \times 2N}\,\lceil e^{s_r t_k}\rfloor_{2N \times 2N}\,\{\psi_p\}_{2N \times 1} , \qquad (16.188)
\]

which is repeated at N_t times in new rows. This matrix equation is then solved for the vector {ψ_p} for each response point p, in a least squares sense. If the matrices [L] and ⌈e^{s_r t_k}⌋ are sorted with the positive vectors and poles first, and then the conjugate terms, the upper half of the mode shape vector contains the desired mode shape coefficients.

If the impulse responses are the results of a frequency range selection, as they usually will be for EMA, the poles used in Equations (16.185) or (16.188) must be adjusted to the changed frequency range as described in Section 16.7.1. This is done by subtracting the low-frequency range value from the natural frequencies of the estimated poles. A small code sketch of the single-reference case is given below.
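The following MATLAB/Octave fragment is a minimal sketch (not from the book or ABRAVIBE) of the single-reference time domain solution, Equations (16.185)–(16.187); variable names and data layout are assumptions for illustration.

```matlab
% Minimal single-reference LSTD sketch, Eqs. (16.185)-(16.187).
% Assumed inputs: t - Nt-by-1 time vector; s - column of N poles with
%                 positive imaginary part; h - Nt-by-1 impulse response h_pq(t).
sAll  = [s; conj(s)];             % 2N poles, positive-imaginary part first
P     = exp(t * sAll.');          % Nt-by-2N matrix of decaying exponentials
psi   = P \ h;                    % least squares solution, Eq. (16.187)
psi_p = psi(1:length(s));         % upper half holds the desired coefficients
```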


16.9.5 Scaling Modal Model When Poles and Mode Shapes Are Known

If poles and mode shapes are known, it is relatively straightforward to find the modal scaling factors, for single- or multireference cases. Since the only thing missing for the modal scaling is the modal scaling constant for each mode, the simplest solution is to use a single frequency response, if such a function may be found that is well defined (not near a node line) for all modes. In this case, the modal scaling constants, Q_r, for all modes and, if desired, residual terms for the FRF at hand, H_pq(ω), may be computed from this single FRF. If residual terms are desired for other FRFs, then the OMAH method, described in Section 17.4.2, may be used for scaling the modes, even though it was developed for OMA. Here we will show the simple method using a single FRF. We start with the expression for the FRF H_pq(ω), which is given with residual terms by

\[
H_{pq}(\omega) = \sum_{r=1}^{2N} \frac{\psi_{pr}\,\psi_{qr}}{j\omega - s_r}\, Q_r + R_U + \frac{R_L}{\omega^2} , \qquad (16.189)
\]

where the only unknowns are the modal scaling constants, Q_r. If we put all these constants in a column vector and the terms in the sum of Equation (16.189) into a row vector, then this equation may be repeated for all desired frequencies, ω_1, ω_2, …, ω_M, in a similar manner to the single-reference LSFD case in Section 16.9.1, but now adding the known mode shape coefficients. We thus obtain a matrix equation:

\[
\begin{Bmatrix}
H_{pq}(\omega_1) \\ H_{pq}(\omega_2) \\ \vdots \\ H_{pq}(\omega_M)
\end{Bmatrix}
=
\begin{bmatrix}
\frac{\psi_{p1}\psi_{q1}}{j\omega_1 - s_1} & \frac{\psi_{p2}\psi_{q2}}{j\omega_1 - s_2} & \cdots & \frac{\psi_{p2N}\psi_{q2N}}{j\omega_1 - s_{2N}} & 1 & \frac{1}{\omega_1^2} \\
\frac{\psi_{p1}\psi_{q1}}{j\omega_2 - s_1} & \frac{\psi_{p2}\psi_{q2}}{j\omega_2 - s_2} & \cdots & \frac{\psi_{p2N}\psi_{q2N}}{j\omega_2 - s_{2N}} & 1 & \frac{1}{\omega_2^2} \\
\vdots & \vdots & \ddots & \vdots & \vdots & \vdots \\
\frac{\psi_{p1}\psi_{q1}}{j\omega_M - s_1} & \frac{\psi_{p2}\psi_{q2}}{j\omega_M - s_2} & \cdots & \frac{\psi_{p2N}\psi_{q2N}}{j\omega_M - s_{2N}} & 1 & \frac{1}{\omega_M^2}
\end{bmatrix}
\begin{Bmatrix}
Q_1 \\ Q_2 \\ \vdots \\ Q_{2N} \\ R_U \\ R_L
\end{Bmatrix} , \qquad (16.190)
\]

which may be solved in a least squares sense for the modal scaling constants, Qr .
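As a concrete illustration of Equation (16.190), the following MATLAB/Octave fragment is a minimal sketch (not from the book or ABRAVIBE) of solving for the modal scaling constants from a single FRF; variable names and data layout are assumptions for illustration.

```matlab
% Minimal modal scaling sketch, Eq. (16.190).
% Assumed inputs: w    - M-by-1 angular frequencies
%                 Hpq  - M-by-1 measured FRF between DOFs p and q
%                 s    - 2N-by-1 poles
%                 psiP, psiQ - 2N-by-1 mode shape coefficients at DOFs p and q
M   = length(w);
num = (psiP .* psiQ).';                    % 1-by-2N numerators psi_pr*psi_qr
A   = num ./ (1i*w - s.');                 % M-by-2N matrix of modal terms
A   = [A, ones(M,1), 1./w.^2];             % residual columns for R_U and R_L
x   = A \ Hpq;                             % least squares solution
Q   = x(1:end-2);                          % modal scaling constants Q_r
```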

16.10 Evaluating the Extracted Parameters

Once a scaled modal model has been obtained, it is important to evaluate how good the obtained model is, i.e., how well it agrees with the experimental data (i.e., the FRFs). There are some standard tools for this purpose that we will describe in the following subsections. A good start may be to animate the estimated mode shapes, to see if the mode shapes look like they are expected to. Since it is usually known, more or less, what the expected mode shapes are, this tool is very useful. One thing that is particularly easy to see by animating the mode shapes is if the mode shapes look real-valued. If not, it is most likely that the poles are not accurately estimated, since it is very rare that mode shapes are very complex (see Chapter 6).

16.10.1 Synthesized FRFs

The first, and most important, tool is to investigate whether the obtained modal model is in agreement with the measured functions, i.e., the FRFs. If this is the case, then the modal model may be regarded as successful. This does not necessarily mean that it is "correct", as the experiment may have been inappropriately performed, but in most cases, and if the measurements were carefully carried out, it is a strong indication of a representative modal model. When the scaled modal model is complete, any FRF between two DOFs may be synthesized, i.e., calculated using Equation (16.1). A comparison between the extracted modal model and the measured FRFs may thus be obtained by synthesizing FRFs between points on the structure for which experimental FRFs were obtained, and comparing these, for example, in overlay plots. To some extent, this can be done already at the stage where the mode shapes are estimated, or where the modal scaling factors are obtained as described in the previous section. It is usual practice to overlay a synthesized FRF with the measured FRF for each excitation and response combination while estimating the mode shapes. This may give a good indication of whether the modal model agrees with the experiments already at this stage. For the LSFD and LSTD methods, this is very straightforward, as they process each response location separately. In the case where only the modal scaling factors are obtained after parameter estimation by a low-order method, it is not as natural to overlay synthesized results with measured ones, since the modal scaling factors are obtained by a least squares solution using all FRFs at the same time. In either case, it may be desired to synthesize some or all of the measured input/output combinations and compare the synthesized FRFs with the measured ones after the scaled modal model is obtained. This will reveal if there are any poles that are erroneously estimated, or if there are other issues with the modal model. When producing overlay plots, it is advisable to use the residual factors if these were calculated in the parameter estimation. The reason for this is that modes outside the frequency band of interest oftentimes affect the synthesized FRF significantly. This is demonstrated in Figure 16.9, where it may be seen that the FRF using the residual terms fits the measured data much better. For both synthesized FRFs, the same mode shape coefficients were used. A small code sketch of such an FRF synthesis is given below.
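The following MATLAB/Octave fragment is a minimal sketch (not from the book or ABRAVIBE) of synthesizing a single FRF from a scaled modal model for an overlay plot; the variable names and the dynamic flexibility format are assumptions for illustration, and the residual terms can be set to zero if they were not estimated.

```matlab
% Minimal FRF synthesis sketch for an overlay plot.
% Assumed inputs: f     - Nf-by-1 frequency axis in Hz
%                 s     - 2N-by-1 poles (conjugate pairs included)
%                 A     - 2N-by-1 residues A_pqr for the selected FRF
%                 RU,RL - upper/lower residual terms (scalars, 0 if unused)
%                 Hmeas - Nf-by-1 measured FRF (dynamic flexibility)
w    = 2*pi*f;
Hsyn = (1 ./ (1i*w - s.')) * A + RU + RL ./ w.^2;   % modal superposition
semilogy(f, abs(Hmeas), f, abs(Hsyn), '--')
xlabel('Frequency [Hz]'); ylabel('|H| [m/N]')
legend('Measured', 'Synthesized')
```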

16.10.2 The MAC Matrix

It is common to want to compare the similarity between different mode shapes. The MAC is the most common tool for this purpose (Allemang and Brown, 1982; Allemang, 2003). The MAC value between two modes {ψ}_r and {ψ}_s is defined by

\[
\mathrm{MAC}_{rs} = \frac{\big| \{\psi\}_r^H \{\psi\}_s \big|^2}{\big(\{\psi\}_r^H \{\psi\}_r\big)\big(\{\psi\}_s^H \{\psi\}_s\big)} , \qquad (16.191)
\]

which can be interpreted as the (squared) normalized correlation coefficient between the two vectors. The MAC value lies between zero and unity and detects the similarity between two modes. The MAC is usually computed as a matrix for all combinations of r and s, of one of two kinds: (i) the MAC between a certain set of modes and the same set of modes, which is called the auto-MAC, or (ii) the MAC between two different sets of modes, the cross-MAC.

For an interpretation of the MAC matrix, we refer to the modal orthogonality relation in Equation (6.41), which is repeated here for convenience:

\[
\{\psi\}_r^T [M] \{\psi\}_s = 0 , \qquad (16.192)
\]

for any two mode vectors where r ≠ s. Since Equation (16.192) is only valid for real mode shapes, the normal transpose is used here instead of the Hermitian transpose in

[Figure 16.9 Synthesized versus measured frequency response. Two panels of accelerance [(m/s²)/N] versus frequency [Hz] (0–1200 Hz) for response DOF 35, with measured and synthesized FRFs overlaid.]

the MAC definition. Comparing Equation (16.192) with the definition of the MAC in Equation (16.191), we note that if we replace the mass matrix in the former equation by the identity matrix (which can then be removed), we obtain the definition of the MAC. This has some important implications:

● the MAC between two different modes is not guaranteed to be zero, and
● if the MAC is zero between two modes, it is similar to replacing the mass matrix by the identity matrix in Equation (16.192). (It is not equivalent, because we are talking about experimental mode shapes, which means that we do not have the degrees of freedom of the mass matrix.)

An important use of the MAC is that the measurement DOFs can be chosen to minimize cross-MAC values. This is a common way of selecting measurement DOFs, particularly when the aim is to use the EMA results to verify an analytical model, where it is important to be able to separate the modes from the analytical model and the EMA. The auto-MAC matrix is also often used to assess the quality of the parameter extraction results. In such cases, high auto-MAC values off the diagonal are taken as indications that the modes have not been properly separated by the mode shape extraction process. This, however, requires that the selection of measurement DOFs has been carefully done.
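A MAC matrix per Equation (16.191) can be computed with a few lines of MATLAB/Octave. The following is a minimal sketch (not from the book or the ABRAVIBE toolbox, which has its own functions for this); the variable names are assumptions.

```matlab
% Minimal MAC sketch, Eq. (16.191).
% Assumed inputs: Psi1, Psi2 - mode shape matrices (one mode per column)
%                 with the same number of rows (measured DOFs).
function MAC = macmatrix(Psi1, Psi2)
  num = abs(Psi1' * Psi2).^2;                    % |psi_r^H psi_s|^2
  d1  = real(sum(conj(Psi1) .* Psi1, 1)).';      % psi_r^H psi_r, column vector
  d2  = real(sum(conj(Psi2) .* Psi2, 1));        % psi_s^H psi_s, row vector
  MAC = num ./ (d1 * d2);                        % element-wise normalization
end
```

Calling macmatrix(Psi, Psi) with the same mode shape matrix twice gives the auto-MAC; two different sets give the cross-MAC.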

16.11 Chapter Summary

To summarize this chapter, some important points are listed here. You should also look at the checklist in Appendix G:

● Begin with a pretest, where the objective of the test, suitable reference and response locations, excitation type, etc., are considered. If a FEM is available, this is very useful for suitable selection of measurement locations.
● Suspend and instrument the structure and make the best possible measurements, utilizing the content of Chapter 13 and, if multiple shakers are used, Chapter 14. Pay attention to reciprocity, driving point quality, and coherence of all measured FRFs. The coherence should be very close to unity for all frequencies. This is where the quality of the modal parameters is determined by the quality of the FRFs. No modal parameter estimation algorithm can compensate for bad measurements.
● After all FRFs are acquired, analyze the quality one more time. The multivariate MIF is often a good indication of the quality of the FRFs.
● Select a suitable method for the modal parameter estimation and execute it. This is usually done in two steps: the first step gives poles and modal participation factors if a high-order method is used (PTD, MMITD, LSCF), or poles and mode shapes if a low-order method is used (MITD, SSI, FDPI); the second step then gives scaled mode shapes or modal scaling constants, respectively.
● When the scaled modal model is obtained, investigate the quality of the obtained model. Are the mode shapes real-valued (do they change animation direction at the same time)? Do synthesized FRFs agree with the corresponding measured FRFs? Is the MAC matrix diagonal, with no modes resembling each other (this assumes that the measured DOFs are appropriately selected)?

Modal parameter estimation algorithms work either in the time domain or in the frequency domain, with very similar equations. We therefore only formulate the frequency domain equations in this summary. For time domain formulations, see Section 16.7. All modal parameter estimation algorithms are based on the modal superposition equation, which may be written as follows:

\[
[H(j\omega)] = \sum_{r=1}^{2N} \frac{[A]_r}{j\omega - s_r} . \qquad (16.193)
\]

This equation may be written in matrix form by either

\[
[H(j\omega)]_{N_L \times N_S} = [\psi]_{N_L \times 2N}\,\lceil \Lambda^{-1}\rfloor_{2N \times 2N}\,[L]^T_{2N \times N_S} \qquad (16.194)
\]

or

\[
[H(j\omega)]^T_{N_S \times N_L} = [L]_{N_S \times 2N}\,\lceil \Lambda^{-1}\rfloor_{2N \times 2N}\,[\psi]^T_{2N \times N_L} , \qquad (16.195)
\]

where Equation (16.194) leads to low-order methods (or the zero-order method CMIF), whereas Equation (16.195) leads to high-order methods.


The modal participation matrix, [L], is a matrix with the modal participation vectors as columns, defined by

\[
\{L\}_r = Q_r
\begin{Bmatrix}
\psi_{q_1 r} \\ \psi_{q_2 r} \\ \vdots \\ \psi_{q_{N_S} r}
\end{Bmatrix} , \qquad (16.196)
\]

and the pole matrix is given by Equation (16.9). For time domain methods, see, for example, Equation (16.13). The mode shape matrix is a matrix with each mode shape in a column. Typically, the modal parameters are extracted in two steps. The first step estimates the two left-hand matrices in Equation (16.194) if it is a low-order method, or in Equation (16.195) if it is a high-order method. In a second step, the modal scaling factors or the mode shapes are then estimated, respectively. In Table 16.1, we summarize the most common MPE methods in terms of the UMPA theory and indicate the characteristics of each method.

Table 16.1 Summary of the common modal parameter estimation methods covered in this chapter. The table is inspired by Allemang and Phillips (2004b).

Algorithm                                              Section   Domain   Matrix polynomial order   Coefficients
Prony                                                  16.7.4    Time     High                      Scalar
Least squares complex exponential (LSCE)               16.7.5    Time     High                      Scalar
Polyreference time domain (PTD)                        16.7.6    Time     High                      Matrix, NS × NS
Modified multi-reference Ibrahim time domain (MMITD)   16.7.7    Time     High                      Matrix, NS × NS
Multi-reference Ibrahim time domain (MITD)             16.7.3    Time     Low                       Matrix, NL × NL
Stochastic subspace identification (SSI)               16.7.3    Time     Low                       Matrix, NL × NL
Eigensystem realization algorithm (ERA)                16.7.3    Time     Low                       Matrix, NL × NL
Least squares complex frequency domain (LSCF)          16.8.1    Freq.    High                      Matrix, NS × NS
Polyreference frequency domain (PFD)                   16.8.2    Freq.    Low                       Matrix, NL × NL
Frequency domain direct parameter (FDPI)               16.8.2    Freq.    Low                       Matrix, NL × NL
Complex mode indicator function (CMIF)                 16.8.4    Freq.    Zero                      N/A

Source: Allemang and Phillips (2004b)/with permission of Elsevier.

16.12 Problems

Many of the following problems are supported by the accompanying ABRAVIBE toolbox for MATLAB/Octave and further examples which can be downloaded with the toolbox. If you have not already done so, please read Section 1.6. ABRAVIBE is completely free and can be downloaded from www.abravibe.com, together with example files and other material complementing this book, including a solutions manual for all book problems.

Problem 16.1 Using MATLAB (or Octave), use the 2DOF system from Problem 6.2 and generate the entire FRF matrix in accelerance format, using a frequency axis from 0 to 25 Hz, with 0.01 Hz increment. Then calculate the "Normal MIF" (also called MIF1) described in Section 16.2.10, and plot it. At which frequencies does it have its minima? Hint: If you have downloaded ABRAVIBE, you can use the command mck2frf to generate the FRFs.

Problem 16.2 Using each of the two FRFs from Problem 16.1 with the first DOF as reference (input), set up and solve for the poles and residues, using the "Least squares local" method in Section 16.4.1. Use all frequencies of the FRF from 5 to 20 Hz. What natural frequencies and damping ratios do you get? Do you get the exact same poles for each FRF?

Problem 16.3 Use the residues from Problem 16.2 and put them into a mode shape vector of size 2-by-1. Calculate the auto-MAC matrix. What are the values of the off-diagonal elements?

Problem 16.4 Similar to Problem 16.2, use all FRFs of the system from Problem 16.1 and solve for the poles using the "Least squares global" method described in Section 16.4.2. Use the same frequency range as in Problem 16.2. What natural frequencies and damping ratios do you get?

Problem 16.5 Assume you wish to apply the "Polyreference time domain" method. You have measured a frequency response matrix with three reference DOFs and 40 response DOFs. Answer the following questions:
(a) Which inputs do you have to give the algorithm?
(b) Which results do you get from the algorithm?
(c) What is the size of the modal participation matrix if you estimate 10 modes?
(d) How can you obtain the mode shapes?

Problem 16.6 Assume you wish to apply the "Multiple-reference Ibrahim time domain" method. Answer the following questions:
(a) Which inputs do you have to give the algorithm?
(b) Which results do you get from the algorithm?
(c) How can you obtain the mode shapes?


Problem 16.7 Assume you wish to apply the "Frequency domain direct parameter z" method. Answer the following questions:
(a) Which inputs do you have to give the algorithm?
(b) Which results do you get from the algorithm?
(c) How can you obtain the mode shapes?

References

Allemang RJ 2003 The modal assurance criterion - twenty years of use and abuse. Sound and Vibration 37(8), 14–23.
Allemang RJ and Brown DL 1982 A correlation coefficient for modal vector analysis Proceedings of 1st International Modal Analysis Conference, Orlando, FL.
Allemang R and Brown D 1987 Experimental modal analysis and dynamic component synthesis – vol 3: Modal parameter estimation. Technical report, USAF – Contract No F33615–83–C–3218, AFWAL–TR–87–3069.
Allemang RJ and Brown DL 1998 A unified matrix polynomial approach to modal identification. Journal of Sound and Vibration 211(3), 301–322.
Allemang RJ and Brown DL 2006 A complete review of the complex mode indicator function (CMIF) with applications Proceedings of the International Conference on Noise and Vibration Engineering (ISMA2006), pp. 3209–3246.
Allemang RJ and Phillips AW 2004a The impact of measurement condensation and modal participation vector normalization on the estimation of modal vectors and scaling Proceedings of 22nd International Modal Analysis Conference (IMAC), Dearborn, MI.
Allemang RJ and Phillips AW 2004b The unified matrix polynomial approach to understanding modal parameter estimation: an update Proceedings, International Conference on Noise and Vibration Engineering (ISMA).
Allemang R, Phillips A and Brown D 2011 Combined state order and model order formulations in the unified matrix polynomial method (UMPA) Proceedings of 29th International Modal Analysis Conference (IMAC), Jacksonville, FL.
Avitabile P 2017 Modal Testing: A Practitioner's Guide 1st edn. Wiley, Hoboken, NJ.
Brincker R and Ventura C 2015 Introduction to Operational Modal Analysis. John Wiley & Sons, Chichester, UK.
Brown DL, Allemang R, Zimmerman R and Mergeay M 1979 Parameter estimation techniques for modal analysis. SAE Tech. Paper 790221.
Carne TG, Griffith DT and Casias ME 2007 Support conditions for experimental modal analysis. Sound and Vibration 41(6), 10–16.
Cauberghe B 2004 Applied Frequency-domain System Identification in the Field of Experimental and Operational Modal Analysis PhD thesis, Vrije University of Brussels, Brussels, Belgium.
Cauberghe B, Guillaume P, Verboven P, Vanlanduit S and Parloo E 2005 On the influence of the parameter constraint on the stability of the poles and the discrimination capabilities of the stabilisation diagrams. Mechanical Systems and Signal Processing 19(5), 989–1014.


de Prony BGR 1795 Essai expérimental et analytique: sur les lois de la dilatabilité de fluides élastiques et sur celles de la force expansive de la vapeur de l'alkool, à différentes températures. Journal de l'école Polytechnique 1(22), 24–76.
Deblauwe F, Brown DL and Allemang RJ 1987 The polyreference time domain technique Proceedings of 5th International Modal Analysis Conference (IMAC), London, England.
Dippery KD, Phillips AW and Allemang RJ 1996 Condensation of the spatial domain in modal parameter estimation. Modal Analysis - the International Journal of Analytical and Experimental Modal Analysis 11(3–4), 216–225.
Ewins DJ 2000a Basics and state-of-the-art of modal testing. Sadhana 25(3), 207–220.
Ewins DJ 2000b Modal Testing: Theory, Practice and Application 2nd edn. Research Studies Press, Baldock, Hertfordshire, England.
Fukuzono K 1986 Investigation of multiple-reference Ibrahim time domain modal parameter estimation technique Master's thesis, Dept. of Mechanical and Industrial Engineering, University of Cincinnati.
Guillaume P, Verboven P, Vanlanduit S, Van der Auweraer H and Peeters B 2003 A poly-reference implementation of the least-squares complex frequency-domain estimator Proceedings of 21st International Modal Analysis Conference, Kissimmee, FL.
Heylen W, Lammens S and Sas P 1997 Modal Analysis Theory and Testing 2nd edn. Catholic University Leuven, Leuven, Belgium.
Ibrahim S and Mikulcik E 1973 A time domain modal vibration test technique. Shock and Vibration Bulletin 43(4), 21–37.
Ibrahim SR and Mikulcik EC 1977 A method for the direct identification of vibration parameters from the free response. The Shock and Vibration Bulletin 47(47), 183–198.
Juang JN and Pappa RS 1985 An eigensystem realization-algorithm for modal parameter-identification and model-reduction. Journal of Guidance, Control, and Dynamics 8(5), 620–627.
Kammer DC 1991 Sensor placement for on-orbit modal identification and correlation of large space structures. Journal of Guidance, Control, and Dynamics 14(2), 251–259.
Lembregts F 1988 Frequency Domain Identification Techniques for Experimental Multiple Input Modal Analysis PhD thesis, Katholieke Universiteit Leuven, Leuven, Belgium.
Lembregts F, Leuridan J and Vanbrussel H 1990 Frequency-domain direct parameter-identification for modal-analysis - state-space formulation. Mechanical Systems and Signal Processing 4(1), 65–75.
Linderholt A and Abrahamsson T 2005 Optimising the informativeness of test data used for computational model updating. Mechanical Systems and Signal Processing 19(4), 736–750.
Maia N and Silva J (eds) 2003 Theoretical and Experimental Modal Analysis. Research Studies Press, Baldock, Hertfordshire, England.
Phillips A and Allemang R 1996 Single degree-of-freedom modal parameter estimation methods Proceedings of 14th International Modal Analysis Conference, Dearborn, MI, Society for Experimental Mechanics.
Phillips AW and Allemang RJ 2005 Data presentation schemes for selection and identification of modal parameters Proceedings, International Modal Analysis Conference (IMAC), p. 10.
Rades M 1994 A comparison of some mode indicator functions. Mechanical Systems and Signal Processing 8(4), 459–474.


Richardson M and Formenti D 1982 Parameter estimation from frequency response measurements using rational fraction polynomials Proceedings of 1st International Modal Analysis Conference, Orlando, Florida, Society for Experimental Mechanics.
Shih C, Tsuei Y, Allemang R and Brown D 1988 Complex mode indication function and its applications to spatial domain parameter estimation. Mechanical Systems and Signal Processing 2(4), 367–377.
Strang G 2005 Linear Algebra and its Applications 4th edn. Brooks Cole, San Diego, CA.
van Overschee P and De Moor B 1996 Subspace Identification for Linear Systems: Theory – Implementation – Applications. Springer.
Viberg M 1995 Subspace-based methods for the identification of linear time-invariant systems. Automatica 31(12), 1835–1851.
Vold H 1990 Numerically robust frequency domain modal parameter estimation. Sound and Vibration 24(1), 38–40.
Vold H, Kundrat J, Rocklin TG and Russell R 1982 A multi-input modal estimation algorithm for mini-computer. SAE Tech. Paper 820194.
Williams R, Crowley J and Vold H 1985 The multivariate mode indicator function in modal analysis Proceedings of 3rd International Modal Analysis Conference, Orlando, FL.
Wright J, Cooper J and Desforges M 1999 Normal-mode force appropriation – theory and application. Mechanical Systems and Signal Processing 13(2), 217–240.
Zhang L, Kanda H, Brown D and Allemang R 1985 A polyreference frequency domain method for modal parameter identification ASME Paper No. 85-DET-106.


17 Operational Modal Analysis (OMA)

Operational modal analysis (OMA) has gained much attention in recent years, and it has become a standard tool for modal analysis of large structures in civil engineering. OMA was originally developed before EMA became a standard tool in mechanical engineering but, due to the limited computational performance of computers in those early days (the 1970s), it did not become popular. Today, OMA is not only used in civil engineering applications such as buildings and bridges, but is increasingly being used for wind turbines, machinery, and other more typical mechanical engineering fields. There are several textbooks dedicated to the area of OMA, for example Rainieri and Fabbrocino (2014) and Brincker and Ventura (2015). There are also several software packages dedicated to the field. In this chapter, we will lay out the basic principles and methods for OMA. Application to real data, however, is found in Chapter 19, where results from several real datasets, obtained with the different methods described in this chapter and in Chapter 16, are presented. As stated already in Chapter 16 on EMA, the methods for modal parameter estimation described there may be directly applied also to OMA, with very few exceptions. We will show in Section 17.1 that correlation functions in the time domain exhibit properties similar to free decay functions. Thus, we can use the same time domain MPE methods that we presented in Chapter 16. In the frequency domain, it is a little more complicated, but we will also show that the frequency domain methods described in Chapter 16, as well as slightly modified versions of them, may be used for estimating modal parameters in an OMA setting. The methods we describe in this chapter are commonly available in both commercial and open-source software. Some methods that will not be covered, mainly because they have not been shown to be superior to the traditional techniques, still deserve mentioning. First, much research has been devoted to using response transmissibility functions instead of response autospectral matrices, see for example Rainieri and Fabbrocino (2014) and Devriendt and Guillaume (2008). Second, blind source separation techniques have been attempted, see for example Rainieri and Fabbrocino (2014), Kerschen et al. (2007), and Antoni and Chauhan (2013). Finally, cepstrum-based methods have also been shown to have some application to OMA, see for example Randall (2009), Randall et al. (2015), Randall and Gao (1994), and Hanson et al. (2007a).


17.1 Principles for OMA

In OMA, the modal parameters, i.e., the poles (natural frequencies and damping factors) and the mode shapes of the structure, are extracted from measurements of only the responses of the structure. This is particularly desirable for large structures that may not be possible to excite by artificial loads with sufficient force levels. Since the loads are not measured, the modal model that is obtained is unscaled, i.e., we cannot know the modal mass if the modes are normal modes, or the modal A scaling factor if the modes are complex. There are several methods available for scaling the modal model after the modal parameters are estimated, two of which we will describe in Section 17.4. The principle of OMA is that a system excited by random loads will exhibit the poles and mode shapes of the system itself, plus the poles of the load, as illustrated in Figure 17.1. It is often stated in the literature that a requirement for OMA is that the loads consist of white noise, i.e., that the loads have flat spectral densities, as we will discuss in connection with Equation (17.6) in Section 17.3.1. Although this statement may be true for obtaining completely unbiased mode shapes, if it were a stringent requirement, OMA would be of very little use, since flat load spectra are very rare. If the load has a spectral density that is not constant for all frequencies, it may be thought of as being produced by white noise passing through a (multichannel) filter, as illustrated in Figure 17.1. This filter has poles that give the loads their frequency characteristics, as described in Chapters 2 and 3. Thus, OMA will lead to all poles and mode shapes, and we need to separate any detected poles into those related to the structure and those related to the loads. This is in most cases relatively simple, as poles related to loads rarely have damping as low as structural modes (i.e., the peaks in the spectra are not as sharp as peaks corresponding to poles of the structure). An exception to this is when there are harmonics in the data, but in such cases, the harmonics may be removed by any of the methods discussed in Section 18.7. It is also a fact that any poles not belonging to the structure will exhibit mode shapes that are different from the structure's mode shapes. Since the mode shapes of the structure are usually known, at least within some limits, it is also possible to discard estimated poles and mode shapes by this criterion.

Figure 17.1 Illustration of the principle of operational modal analysis, OMA: white noise passes through a load-coloring filter (load poles) and through the structure (system poles), producing the responses (all poles).

One may ask if, for example, an operating wind turbine or a power generator is actually excited by random loads. While there certainly are loads of a periodic nature involved


in such cases, experience shows that it is almost always possible to estimate reliable modal parameters from such structures once the harmonic loads have been removed from the responses. This means that there are enough random loads from the environment to allow the OMA methods to extract the modal parameters. A restriction that does occur sometimes in OMA applications, however, is that some modes of the structure may be poorly excited by the natural loads and may therefore not be possible to extract. This may be a serious disadvantage in some cases, but on the other hand, in other cases such modes may be considered to be of less interest, since they are not excited in practice. A fundamental difference between EMA and OMA is that the latter is a result of the vibrations with the boundary conditions of the operating conditions. This usually means that there is more damping than in a typical EMA case, where the structure is investigated under free-free conditions. In some cases, there may also be more variation, especially in damping, e.g., if aerodynamic or hydrodynamic damping is present. When interpreting the OMA results, it is thus important to keep in mind what variation in the modal properties the operating conditions may cause. Similarly, stationarity of the responses may be more difficult to obtain for OMA, and the time invariance assumptions imposed on the modal parameters may not be satisfied; such things should be investigated. With this in mind, it is usually possible to interpret OMA results and to attribute changes in modal parameters to changing operating conditions and boundary conditions.
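The separation of structural poles from load-related poles described above is, in practice, mostly a matter of bookkeeping once the poles have been estimated. As a minimal illustration (not from any particular toolbox; the damping threshold is an arbitrary value chosen only for the example), the following Python sketch flags poles with suspiciously high damping as likely load-related:

```python
import numpy as np

def split_structural_and_load_poles(poles, zeta_max=0.1):
    """Split estimated poles into likely structural and likely load-related
    poles, using the damping ratio as a crude criterion. In practice the
    corresponding mode shapes should also be inspected, as discussed in
    the text."""
    poles = np.asarray(poles, dtype=complex)
    wn = np.abs(poles)                 # undamped natural frequencies (rad/s)
    zeta = -np.real(poles) / wn        # relative damping of each pole
    structural = poles[zeta <= zeta_max]
    load_related = poles[zeta > zeta_max]
    return structural, load_related
```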

17.2 Data Acquisition Principles

Special considerations about the data acquisition often have to be made in OMA testing. For EMA, the measured FRFs are normalized to the force, so as long as the structure and its suspension are not altered during the test (and, of course, temperature and other conditions that may affect the modal parameters are kept constant), all measured FRFs will be consistent and may be processed together for MPE. In an OMA case, there is no such guarantee, so if all desired DOFs cannot be measured simultaneously, then great care needs to be taken to ensure that data are consistent. If, for example, the vibration levels are not constant during the time several batches of time records are collected (so-called multi-setup data acquisition), then the correlation functions (or spectra) will not be consistent, i.e., they will have different levels in the different measurements, leading to erroneous mode shapes. Different ways of ensuring consistent data have been proposed and also compared with data measured simultaneously, see for example Döhler et al. (2011), Döhler et al. (2013), Cara et al. (2014), Orlowitz and Brandt (2014), and Orlowitz et al. (2015), and the books (Brincker and Ventura 2015; Rainieri and Fabbrocino 2014). The best recommendation is to measure all sensors simultaneously, but of course, this may in some situations not be feasible for lack of sensors or measurement channels. A drawback of using multi-setup data acquisition that is rarely mentioned is that it eliminates the possibility of using all DOFs as references. Low-order methods such as the MITD method and SSI, for example, usually perform best when all measurement DOFs are used


as references, as we will see in Chapter 19. If high-order methods are used, they usually perform best with a few references, but these methods do not produce poles and mode shapes in a single step as the low-order methods do, which is somewhat of an inconvenience for OMA (since in OMA there is no need, or even possibility, to compute the modal scaling from the measurements, which is the reason for the second computation step in EMA). Unscaled mode shapes may, however, be computed in a second stage, for example by applying one of the methods in Section 16.9 to the so-called half spectra. Regardless of whether all channels are measured at once, or channels are acquired in batches, time data are almost always recorded in OMA applications today. This is a great advantage, since postprocessing may then be applied, trying out different filters, integration and differentiation, removing harmonics, etc. A good way to process data is by the framework described in Section 10.6. It should also be emphasized that data should always be checked for quality after recording, and before MPE, by the methods proposed in Chapter 4. The right choice of sensors is as important for successful OMA MPE as it is in other applications. It is important that the chosen sensors can measure the appropriate frequency range with high dynamic range so that enough information is available for the modal parameter extraction to succeed. Since many OMA applications deal with structures with low natural frequencies, accelerometers, which are otherwise the "standard" for vibration measurements, are not always the best choice, because there is typically little acceleration at low frequencies. Instead, sensors measuring velocity, such as geophones, have proven superior for many OMA applications. It is impossible to cover all possible sensors for different OMA applications here, but we will demonstrate the different dynamic range properties of an accelerometer versus a geophone as an illustration of the importance of selecting an appropriate sensor. In Figure 17.2, power spectral densities of two signals are shown: one signal from a high-precision so-called force balance accelerometer, which is designed for low-frequency seismic measurements, and one from a geophone. The geophone signal was converted to acceleration before computing the PSD. Both sensors were located next to each other on the top floor of a high-rise building. As can be seen in the figure, the dynamic range of the geophone is significantly better than that of the accelerometer, and this despite the fact that the price of the accelerometer was several thousand Euros, whereas the geophone cost a few tens of Euros. This illustrates that the right choice of sensor is not always (but, of course, sometimes) a matter of money.

17.3 OMA Modal Parameter Extraction for OMA

A difference between EMA and OMA is that the latter method relies on responses only. This is possible because the responses contain all the information about the system, as we will see in this section. Although there are some methods that use the time responses directly, such as data-driven stochastic subspace identification (SSI-data) (van Overschee and De Moor 1996), we will look only at methods based on correlation (sometimes called covariance) functions and methods based on spectra.


Figure 17.2 Illustration of dynamic range. PSDs of signals from a high precision force balance accelerometer and from an inexpensive geophone, converted to acceleration, are compared. The significant difference in dynamic range should be noted. Although difficult to see, the PSDs are very similar at all the resonance peaks below 2 Hz.
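The comparison in Figure 17.2 is straightforward to reproduce for any pair of sensors. The sketch below, written in Python with SciPy purely as an illustration (the function and variable names are made up, and calibration of both signals to SI units is assumed), computes the two PSDs and converts the geophone velocity spectrum to acceleration units before the comparison:

```python
import numpy as np
from scipy.signal import welch

def compare_sensor_psds(acc_signal, geo_velocity, fs, nperseg=2**14):
    """Welch PSDs of an accelerometer signal (m/s^2) and a geophone signal
    (m/s), the latter converted to acceleration units on the PSD level by
    G_aa(f) = (2*pi*f)**2 * G_vv(f), as in Figure 17.2."""
    f, Gaa = welch(acc_signal, fs=fs, nperseg=nperseg)
    _, Gvv = welch(geo_velocity, fs=fs, nperseg=nperseg)
    Gaa_from_geo = (2 * np.pi * f) ** 2 * Gvv
    return f, Gaa, Gaa_from_geo
```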

17.3.1 Spectral Functions for OMA Parameter Extraction

For time domain identification, we may use correlation functions, as these exhibit the same free decays as the impulse responses that we used to explain the time domain methods in Section 16.7. The modal decomposition in OMA terms was first developed by James et al. (1992) and is also found in Heylen et al. (1997). Rainieri and Fabbrocino, as well as Brincker, have also investigated this comprehensively, see Rainieri and Fabbrocino (2014), Brincker and Ventura (2015), and Brincker (2017). We start in the Laplace domain, where we know from Equation (14.18) that the output spectral matrix of a linear system, $[G_{yy}]$, may be written as
$$[G_{yy}(s)] = [H(s)]\,[G_{xx}(s)]\,[H(s)]^H, \qquad (17.1)$$
where $[H]$ is the system transfer function matrix (the Laplace domain version of the frequency response function matrix), and $[G_{xx}(s)]$ is the input autospectral matrix in the Laplace domain, which is the Laplace transform of the input autocorrelation function matrix. From Chapter 6, we know that the transfer functions may be described in modal parameters by
$$[H(s)] = \sum_{r=1}^{N}\left(\frac{[A]_r}{s - s_r} + \frac{[A]_r^*}{s - s_r^*}\right), \qquad (17.2)$$


where the poles are $s_r = -\zeta_r\omega_r + j\omega_r\sqrt{1-\zeta_r^2}$, and the residue matrix $[A]_r$ for mode $r$ is given by the modal scaling factors $Q_r$ and the mode shapes $\{\psi\}_r$ by $[A]_r = Q_r\{\psi\}_r\{\psi\}_r^T$. We use this in Equation (17.1), which gives
$$[G_{yy}] = \sum_{r=1}^{N}\left(\frac{[A]_r}{s-s_r} + \frac{[A]_r^*}{s-s_r^*}\right)[G_{xx}]\sum_{k=1}^{N}\left(\frac{[A]_k^H}{s^*-s_k^*} + \frac{[A]_k^T}{s^*-s_k}\right)$$
$$= \sum_{r=1}^{N}\sum_{k=1}^{N}\frac{[A]_r[G_{xx}][A]_k^H}{(s-s_r)(s^*-s_k^*)} + \frac{[A]_r[G_{xx}][A]_k^T}{(s-s_r)(s^*-s_k)} + \frac{[A]_r^*[G_{xx}][A]_k^H}{(s-s_r^*)(s^*-s_k^*)} + \frac{[A]_r^*[G_{xx}][A]_k^T}{(s-s_r^*)(s^*-s_k)}, \qquad (17.3)$$
which we in the next step wish to expand into residue-pole form by partial fraction expansion. Using the Heaviside cover-up method from Chapter 2, we get the following relationships:
$$\frac{1}{(s-s_r)(s^*-s_k^*)} = \frac{\frac{1}{s_r^*-s_k^*}}{s-s_r} + \frac{\frac{1}{s_k-s_r}}{s^*-s_k^*}$$
$$\frac{1}{(s-s_r)(s^*-s_k)} = \frac{\frac{1}{s_r^*-s_k}}{s-s_r} + \frac{\frac{1}{s_k^*-s_r}}{s^*-s_k} \qquad (17.4)$$
$$\frac{1}{(s-s_r^*)(s^*-s_k^*)} = \frac{\frac{1}{s_r-s_k^*}}{s-s_r^*} + \frac{\frac{1}{s_k-s_r^*}}{s^*-s_k^*}$$
$$\frac{1}{(s-s_r^*)(s^*-s_k)} = \frac{\frac{1}{s_r-s_k}}{s-s_r^*} + \frac{\frac{1}{s_k^*-s_r^*}}{s^*-s_k}.$$
We now use these partial fraction results in Equation (17.3), which gives
$$\begin{aligned}[G_{yy}] = \sum_{r=1}^{N}\sum_{k=1}^{N} &\frac{1}{s-s_r}\left(\frac{[A]_r[G_{xx}][A]_k^H}{s_r^*-s_k^*} + \frac{[A]_r[G_{xx}][A]_k^T}{s_r^*-s_k}\right) + \frac{1}{s-s_r^*}\left(\frac{[A]_r^*[G_{xx}][A]_k^H}{s_r-s_k^*} + \frac{[A]_r^*[G_{xx}][A]_k^T}{s_r-s_k}\right)\\
 + &\frac{1}{s^*-s_k}\left(\frac{[A]_r[G_{xx}][A]_k^T}{s_k^*-s_r} + \frac{[A]_r^*[G_{xx}][A]_k^T}{s_k^*-s_r^*}\right) + \frac{1}{s^*-s_k^*}\left(\frac{[A]_r[G_{xx}][A]_k^H}{s_k-s_r} + \frac{[A]_r^*[G_{xx}][A]_k^H}{s_k-s_r^*}\right),\end{aligned} \qquad (17.5)$$
where the first two groups of terms remind us of the modal superposition of an impulse response function and will yield positive time lags in the inverse transform, whereas the two last groups will yield negative time lags in the inverse transform, as we will see below. To simplify this equation, we define a new matrix $[B]_r$ by
$$[B]_r = \sum_{k=1}^{N}\left(\frac{[A]_k^T}{s_r^*-s_k} + \frac{[A]_k^H}{s_r^*-s_k^*}\right), \qquad (17.6)$$

which is used for the first group of terms in Equation (17.5). For the second group, we use the complex conjugate $[B]_r^*$,
$$[B]_r^* = \sum_{k=1}^{N}\left(\frac{[A]_k^H}{s_r-s_k^*} + \frac{[A]_k^T}{s_r-s_k}\right). \qquad (17.7)$$
For the third and fourth groups, we swap $r$ and $k$ and reverse the summation order. Thus, we get that the transpose $[B]_k^T$ is
$$[B]_k^T = \sum_{r=1}^{N}\left(\frac{[A]_r}{s_k^*-s_r} + \frac{[A]_r^*}{s_k^*-s_r^*}\right), \qquad (17.8)$$
which we use for the third group, and finally we get the Hermitian transpose $[B]_k^H$,
$$[B]_k^H = \sum_{r=1}^{N}\left(\frac{[A]_r}{s_k-s_r^*} + \frac{[A]_r^*}{s_k-s_r}\right). \qquad (17.9)$$
After the application of the last two equations, the variable $k$ is replaced by $r$, which was eliminated. We can now write the autospectrum matrix as
$$[G_{yy}] = \sum_{r=1}^{N}\frac{[A]_r[G_{xx}][B]_r}{s-s_r} + \frac{[A]_r^*[G_{xx}][B]_r^*}{s-s_r^*} + \frac{[B]_r^T[G_{xx}][A]_r^H}{s^*-s_r} + \frac{[B]_r^H[G_{xx}][A]_r^H}{s^*-s_r^*}. \qquad (17.10)$$

The first two terms in Equation (17.10) resemble the decomposition of transfer functions that we have seen before, i.e., sums of terms in $1/(s-s_r)$ and $1/(s-s_r^*)$. The inverse Laplace transform of these two terms will thus look similar to impulse responses, i.e., they will contain the same free decays as those of the impulse responses. The two last terms may easily be observed to consist of the complex conjugates of these poles, i.e., $1/(s^*-s_r^*)$ and $1/(s^*-s_r)$ (reversed in order). Since the inverse Laplace transform of a transfer function of a linear system has the characteristic that $H(-s) = H^*(s)$, and, from Table 2.1, line 7, the inverse Laplace transform of $H(-s)$ is $h(-t)$, the two last terms in Equation (17.10) will result in a response similar to the time reverse of the impulse responses. Since, furthermore, impulse responses are causal, i.e., $h(t) = 0$ for $t < 0$, the sum of these two decays will be "independent" for lags $\tau < 0$ and for $\tau \ge 0$. This means we can extract the positive time lags of the correlation functions to be used for MPE just as if the functions were impulse responses. The negative time lags of the correlation functions are rarely used, although, in principle, they could be used similarly if time is reversed. The fact that the cross-spectral matrix of the loads, $[G_{xx}]$, is included in the expression for the response cross-spectral matrix is the reason many authors state that we have to assume that the loads are white noise, since in that case the matrix $[G_{xx}]$ does not include any poles and is unproblematic. The case where there are poles in the loading functions is not easy to treat stringently, so we will also make this assumption here. It should be mentioned, however, that experience shows that the forcing functions rarely cause issues in the application of OMA. To continue, we observe that the residue matrix $[A]_r = \{\psi\}_r\{\psi\}_r^T$ is composed of the outer product of the mode shape of mode $r$ (we ignore the modal scaling factors, since OMA mode shapes are unscaled anyway due to the lack of measured loads). If we exclude the left-hand


mode shape vector, we can define a modal participation vector $\{L\}_r$ of mode $r$ by transposing the remaining part of the first residue term in Equation (17.10), i.e.,
$$\{L\}_r = [B_r]^T[G_{xx}]\{\psi\}_r. \qquad (17.11)$$
Using the modal participation vector, we can now write Equation (17.10) as
$$[G_{yy}] = \sum_{r=1}^{N}\frac{\{\psi\}_r\{L\}_r^T}{s-s_r} + \frac{\{\psi\}_r^*\{L\}_r^H}{s-s_r^*} + \frac{\{L\}_r^*\{\psi\}_r^T}{s^*-s_r} + \frac{\{L\}_r\{\psi\}_r^H}{s^*-s_r^*}, \qquad (17.12)$$
which may be evaluated in the frequency domain by replacing $s$ by $j\omega$, which gives us the decomposition
$$[G_{yy}] = \sum_{r=1}^{N}\frac{\{\psi\}_r\{L\}_r^T}{j\omega-s_r} + \frac{\{\psi\}_r^*\{L\}_r^H}{j\omega-s_r^*} + \frac{\{L\}_r^*\{\psi\}_r^T}{-j\omega-s_r} + \frac{\{L\}_r\{\psi\}_r^H}{-j\omega-s_r^*}, \qquad (17.13)$$
which is our final result in the frequency domain. For MPE, Equation (17.13) may be expanded into a matrix equation,
$$[G_{yy}] = [\Psi]\lceil\Lambda^{-1}\rfloor[L]^T + [L]\lceil\Lambda^{-1*}\rfloor[\Psi]^T, \qquad (17.14)$$

where the sizes of the pole matrices are $2N \times 2N$, as used in Chapter 16. This decomposition is similar to the decomposition of the frequency responses in Equation (16.10), with the differences that the modal participation matrix in Equation (17.14) includes the weighted load spectral matrix, and, of course, that we have the second term, which includes the complex conjugate of frequency, $1/(-j\omega-s_r)$ and $1/(-j\omega-s_r^*)$, on the diagonal of the conjugate of the pole matrix. It is worth taking a closer look at the structure of Equation (17.13). It may be seen that the denominators of the two terms for positive frequency produce the same magnitude, but opposite phase, compared to the two terms for negative frequency. The sum of the two terms thus has zero phase if the mode shape coefficients are real-valued and of the same sign. If the mode shapes are real-valued but of opposite sign, the phase is 180 degrees. In addition, the poles in the second term in Equation (17.14) are the same as the system poles, but mirrored into the positive real half of the s-plane. This means that the frequency domain methods we discussed in Section 16.8 may be directly used to extract parameters using the autospectral matrix $[G_{yy}]$. The poles that lie in the positive half of the s-plane are simply discarded. As a consequence, higher model orders may typically have to be used for OMA than for EMA, taking into account that twice as many poles are to be computed.
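The pole structure of Equation (17.13) is easy to visualize by synthesizing a spectral matrix from a known modal model. The following Python sketch (an illustration only; the array layout is an assumption, not a prescribed format) evaluates the four terms of the decomposition directly:

```python
import numpy as np

def synthesize_Gyy(omega, psi, L, poles):
    """Synthesize the response spectral matrix [Gyy(jw)] from the modal
    decomposition in Equation (17.13). psi and L are D-by-N arrays with the
    mode shape and modal participation vector of mode r in column r, and
    poles is a length-N array of the poles s_r. Returns an array of size
    len(omega)-by-D-by-D."""
    D, N = psi.shape
    Gyy = np.zeros((len(omega), D, D), dtype=complex)
    for r in range(N):
        T1 = np.outer(psi[:, r], L[:, r])                 # {psi}_r {L}_r^T
        T2 = np.outer(psi[:, r].conj(), L[:, r].conj())   # {psi}_r^* {L}_r^H
        T3 = np.outer(L[:, r].conj(), psi[:, r])          # {L}_r^* {psi}_r^T
        T4 = np.outer(L[:, r], psi[:, r].conj())          # {L}_r {psi}_r^H
        for k, w in enumerate(omega):
            Gyy[k] += (T1 / (1j * w - poles[r])
                       + T2 / (1j * w - poles[r].conj())
                       + T3 / (-1j * w - poles[r])
                       + T4 / (-1j * w - poles[r].conj()))
    return Gyy
```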

17.3.2 Correlation Functions for OMA Parameter Extraction

Time domain modal parameter extraction methods may be more desirable than frequency domain methods, as we will discuss in Section 17.3.4. For this purpose, we take the inverse transform of the autospectral matrix, which results in the correlation function matrix, $[R_{yy}]$. The first term in the decomposition of $[G_{yy}]$ in Equation (17.14) will result in positive time

17.3 OMA Modal Parameter Extraction for OMA

lags of the correlation function matrix and the second term will yield negative time lags. We thus have a modal decomposition for the positive time lags $\tau \ge 0$, which is
$$[R_{yy}(\tau)] = [\Psi]\lceil e^{s_r\tau}\rfloor[L]^T, \qquad (17.15)$$


which is the relationship for time domain OMA modal parameter extraction. Note the similarity between this and the decomposition of the impulse response matrix in Equation (16.13), although the modal participation matrices are not the same. This proves that we can use the positive lags of the correlation function matrix $[R_{yy}(\tau)]$ as free decays, just like we used the impulse responses for EMA. All the time domain methods we described in Section 16.7 may thus be used also for OMA. It should be noted that it is not arbitrary how the correlation function matrix is defined. The fact that we may use the correlation function matrix as in Equation (17.15) is due to our definition of this matrix. How to use other definitions of the correlation function matrix may be found in Brincker (2017). When using correlation functions for MPE, it should also be noted that the first few lags should be discarded, as they may include content from extraneous noise. Since such noise is always broadband, it has a "short" width in the time domain, and usually it is enough to discard, say, the first ten time lags. In Figure 17.3, a typical autocorrelation and a cross-correlation function are shown. The functions were calculated from data of the Plexiglas plate example that we will present in Chapter 19. Random decrement signatures (RDDs) are measurement functions that are closely related to correlation functions. RDDs are commonly used for OMA and may be treated as correlation functions (Brincker and Ventura 2015). They were introduced for use in OMA already by Ibrahim and Mikulcik (1977) and have been popular ever since, mainly because they are easy to calculate. In fact, for a linear system, they are linear combinations of the correlation function and its derivative (Brincker et al. 1992). For a comprehensive discussion of RDDs, see Asmussen (1997). For the purpose of OMA MPE, RDDs may be


Figure 17.3 Example of correlation functions for positive time lags. In (a) an autocorrelation and in (b) a cross-correlation function of data from the Plexiglas plate example. It may be seen that the two functions are similar, except that the autocorrelation exhibits a “spike” at time lag zero, corresponding to the variance of the time function.


treated as if they were correlation functions, and therefore, we will not discuss them in more detail here.
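A sketch of how the positive-lag correlation function matrix can be estimated from recorded responses is given below, in Python for illustration (Chapter 10 discusses the estimators in detail; the FFT-based unbiased estimator and the number of discarded lags used here are choices made for this example only):

```python
import numpy as np

def correlation_matrix_positive_lags(y, max_lag, discard=10):
    """Estimate the positive-lag correlation function matrix [Ryy(tau)],
    tau >= 0, from a (D x Nsamples) array of responses y, and discard the
    first 'discard' lags, which may be dominated by broadband noise."""
    D, Ns = y.shape
    y = y - y.mean(axis=1, keepdims=True)          # remove the means
    Y = np.fft.fft(y, n=2 * Ns, axis=1)            # zero padding avoids circular wrap-around
    lags = np.arange(max_lag)
    R = np.zeros((max_lag, D, D))
    for p in range(D):
        for q in range(D):
            r_pq = np.fft.ifft(Y[p] * np.conj(Y[q])).real[:max_lag]
            R[:, p, q] = r_pq / (Ns - lags)        # unbiased estimate of E[y_p(n+tau) y_q(n)]
    return lags[discard:], R[discard:]
```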

17.3.3 Half Spectra

Knowing that the positive time lags in the correlation functions correspond to impulse responses, as shown in Equation (17.15), it is reasonable to consider computing the Fourier transform of the positive lags of $[R_{yy}(\tau)]$, $\tau \ge 0$, which will resemble frequency responses. These functions are sometimes referred to as positive spectra, but we will use the alternative term half spectra. The half spectra are alternative functions to use for frequency domain MPE, since they only exhibit the actual system poles, and lower model orders thus may be used. They may also be used for computing mode shapes by the LSFD method we described in Section 16.9, if poles and modal participation factors are known from applying a high-order MPE method such as PTD, MMITD, or LSCF. When computing half spectra, some leakage may occur because the (auto)correlation functions, unlike impulse responses, exhibit a value at f = 0 Hz (namely the variance of the time function, see Chapter 4). Nevertheless, half spectra are often used successfully to obtain modal parameters in OMA testing. In Figure 17.4, half spectra calculated from the correlation functions in Figure 17.3 are shown.
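Computing half spectra is then only a matter of Fourier transforming the positive-lag correlation functions. A minimal Python sketch, assuming the correlation matrix is stored with the lag index first, could be:

```python
import numpy as np

def half_spectra(R_pos, fs):
    """Fourier transform positive-lag correlation functions into half spectra.
    R_pos is an (Nlags x D x D) array of correlation functions for tau >= 0,
    sampled with frequency fs; one half spectrum matrix is returned per
    frequency line."""
    Nlags = R_pos.shape[0]
    G_half = np.fft.rfft(R_pos, axis=0) / fs     # dt scaling gives density-like units
    f = np.fft.rfftfreq(Nlags, d=1.0 / fs)
    return f, G_half
```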

17.3.4 Time versus Frequency Domain Parameter Extraction for OMA

Figure 17.4 Example of half spectra for the Plexiglas plate example, calculated from the correlation functions shown in Figure 17.3. The figure shows that the half spectra resemble frequency response functions, although they are affected by truncation of the correlation functions in the FFT and thus are not as clean as FRFs. Yet they exhibit the same modal decomposition as frequency responses.

In Section 16.6, we argued that for EMA it may make sense to use frequency responses for MPE, since these functions may be estimated virtually free of bias error, whereas the inverse transform from FRF to impulse response will often lead to some leakage error. For OMA, the case is somewhat reversed because, as we know from Section 10.4, correlation functions may be estimated without bias. On the other hand, the bias of the exponential slope of the correlation functions results in a bias in the damping value if the measurement length is not sufficient, as was shown in Section 10.4.3. Spectral densities, on the other hand, will always exhibit leakage, although it may be limited by setting the frequency increment small enough (or the smoothing length, if the smoothed periodogram is used). This has led some authors to recommend an alternative way to estimate spectral densities: first estimate the correlation functions, and then apply exponential windows before calculating the FFT. The added damping caused by the exponential window may be calculated similarly to what we described for impact testing in Chapter 13, and thus compensated for. This method of computing spectral densities may easily be used in the framework described in Section 10.6 if so desired, but it should also be realized that the spectra are biased by this way of estimating them as well. As concluded for EMA in Section 16.6, for OMA too, either domain offers its advantages and disadvantages. Regardless of which domain is chosen, it should always be investigated whether the measurement settings (frequency increment, choice of frequency range, lags, etc.) used for the parameter estimation affect the parameters or not. This will be illustrated in Section 19.11. Just as for EMA, residual terms may be used in the frequency domain to account for modes outside the frequency range used for the parameter estimation. Thus, frequency domain methods are preferred by some.
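The exponential-window approach mentioned above is easy to sketch. The snippet below (an illustration, with an arbitrarily chosen window end value) windows the positive-lag correlation functions and returns the artificial decay rate that the window adds, so that the real parts of the estimated poles can be compensated afterwards, in the same spirit as the exponential window compensation in impact testing:

```python
import numpy as np

def exponential_window(R_pos, fs, end_value=0.01):
    """Apply an exponential window w(n) = exp(-n*dt/tau_w) to positive-lag
    correlation functions before the FFT. The window is chosen to have
    decayed to 'end_value' at the last lag. It adds a decay rate 1/tau_w to
    every mode, so a compensated pole is s_comp = s_est + 1/tau_w."""
    Nlags = R_pos.shape[0]
    dt = 1.0 / fs
    tau_w = -(Nlags - 1) * dt / np.log(end_value)     # window time constant
    w = np.exp(-np.arange(Nlags) * dt / tau_w)
    Rw = R_pos * w.reshape((-1,) + (1,) * (R_pos.ndim - 1))
    return Rw, 1.0 / tau_w     # windowed correlations and the added decay rate
```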

17.3.5 Modal Parameter Estimation Methods for OMA

Any of the time domain methods we described in Section 16.7 will work if applied to (the positive time lags of) correlation functions, since these exhibit the same free decays as impulse response functions, as described in Section 17.3.2. In the frequency domain, we may use any of the methods described in Section 16.8, either using the response autospectral matrix or half spectra as measurement functions, as discussed in Section 17.3.4. Because we have no knowledge about the loads acting on the structure, OMA does not lead to scaled modal models. (Models may, however, be scaled by the methods we will describe in Section 17.4.) As we saw in Chapter 16, two stages are usually applied in EMA parameter estimation. The reason for this is that a scaled model is desired, either by finding the mode shapes after the poles and modal participation factors are known, or by finding the modal scaling factors, if poles and mode shapes are known from the first stage, as described in Section 16.9. For OMA, on the other hand, low-order methods such as MITD may seem advantageous, since they will estimate the desired poles and mode shapes in one step. As discussed briefly above, the low-order methods such as MITD and FDPI perform best when using many references, preferably as many as the number of responses, i.e., using the full correlation function matrix or spectral matrix. This is not possible if data are acquired in batches, where only a few sensors are kept as references, creating a "tall" correlation function matrix (with many more rows than columns). For those cases, high-order methods such as PTD, MMITD, or LSCF may perform better. The CMIF method that we described in Section 16.8.4 is similar to the frequency domain decomposition (FDD) method developed by Brincker et al. (2001) and described in detail in Brincker and Ventura (2015). There are small differences between these methods, as the former uses frequency responses, whereas the latter uses the response cross-spectral matrix. However, due to the similarity between frequency responses and the cross-spectral


matrix, described by Equation (17.14), it turns out that the CMIF method may be used also for spectral densities, with a very small modification, related to the fact that the enhanced FRF defined by Equation (16.165), when applied to a cross-spectrum, will have zero phase, but a magnitude which is equal to that of an FRF. It is therefore possible to create the phase by the Hilbert transform as described in Section 18.2, and then the enhanced FRF may be fitted by an SDOF model as described in Section 16.8.4.
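The core step shared by CMIF and FDD, a singular value decomposition of the spectral matrix at every frequency line, is simple to implement. The following Python sketch (only the basic mode-indication step, not a complete FDD implementation) returns the singular values, whose peaks indicate modes, and the first singular vectors, which approximate the unscaled mode shapes at those peaks:

```python
import numpy as np

def fdd_singular_values(Gyy):
    """Singular values and first left singular vectors of the response
    spectral matrix at every frequency line. Gyy is an (Nf x D x D) array;
    peaks in the first singular value indicate modes."""
    Nf, D, _ = Gyy.shape
    sv = np.zeros((Nf, D))
    u1 = np.zeros((Nf, D), dtype=complex)
    for k in range(Nf):
        U, S, Vh = np.linalg.svd(Gyy[k])
        sv[k] = S            # singular values, sorted in decreasing order
        u1[k] = U[:, 0]      # first singular vector (mode shape estimate at peaks)
    return sv, u1
```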

17.3.6 Least Squares Frequency Domain, OMA Versions

If a high-order MPE method is used to obtain poles and modal participation factors, the mode shapes may be obtained in the time domain from the positive lags of the correlation functions with the least squares time domain method described in Section 16.9.4. In the frequency domain, the least squares frequency domain (LSFD) method described in Sections 16.9.1 and 16.9.2 may be used on half spectra, as these have properties similar to FRFs. However, if the response spectral matrix is to be used, a modified version of the LSFD method needs to be applied, which takes into account also the poles in the right half plane, see Equation (17.14). The approach depends on whether the modal participation factors are known or not. We will describe both procedures briefly here, starting with a solution for the case where no modal participation factors are available. From Equation (17.13), we have that a single cross-spectrum $G_{y_p,y_q}(j\omega)$, by letting the sum go from 1 to $2N$ and renumbering the complex conjugate poles, may be written as
$$G_{y_p,y_q}(j\omega) = \sum_{r=1}^{2N}\frac{\psi_{pr}L_{qr}}{j\omega-s_r} + \frac{L_{qr}\psi_{pr}^*}{-j\omega-s_r}. \qquad (17.16)$$

In the case at hand, we will assume the measured spectral densities are accelerations, as this is most common. Since the modal models in OMA are unscaled, we may as well use the measured functions directly, unlike for EMA, where we formulated everything for frequency response functions in receptance form. This is, of course, a matter of taste. If the spectral densities are of accelerations, Equation (17.16) may be written with residual terms as
$$G_{y_p,y_q}(j\omega) = \sum_{r=1}^{2N}\frac{\psi_{pr}L_{qr}}{j\omega-s_r} + \frac{L_{qr}\psi_{pr}^*}{-j\omega-s_r} + R_{pqL} + \omega^4 R_{pqU}. \qquad (17.17)$$
To formulate this equation in matrix form, we introduce a row vector $\lfloor p_{k,l}\rfloor$, of size 1-by-2, defined by
$$\lfloor p_{k,l}\rfloor = \left\lfloor\ \frac{1}{j\omega_k-s_l}\quad \frac{1}{-j\omega_k-s_l}\ \right\rfloor, \qquad (17.18)$$
and a residue vector $\{R\}_{pqr}$, of size 2-by-1, by
$$\{R\}_{pqr} = \begin{Bmatrix}\psi_{pr}L_{qr}\\ L_{qr}\psi_{pr}^*\end{Bmatrix}. \qquad (17.19)$$

We can then write Equation (17.16) for one frequency, $\omega_k$, in matrix form as
$$G_{y_p,y_q}(j\omega_k) = \left\lfloor\ \lfloor p_{k,1}\rfloor\ \ \lfloor p_{k,2}\rfloor\ \cdots\ \lfloor p_{k,2N}\rfloor\ \ 1\ \ \omega_k^4\ \right\rfloor \begin{Bmatrix}\{R\}_{pq1}\\ \{R\}_{pq2}\\ \vdots\\ \{R\}_{pq2N}\\ \{R\}_{pqL}\\ \{R\}_{pqU}\end{Bmatrix}. \qquad (17.20)$$
We now extend this equation by adding columns for response DOF $p$ with all the references, $q = 1, 2, \ldots, Q$, and repeat it for the frequencies $\omega_1, \omega_2, \ldots, \omega_{N_f}$, which gives
$$\begin{bmatrix}\lfloor G_{yy}(\omega_1)\rfloor\\ \lfloor G_{yy}(\omega_2)\rfloor\\ \vdots\\ \lfloor G_{yy}(\omega_{N_f})\rfloor\end{bmatrix} = \begin{bmatrix}\lfloor p_{1,1}\rfloor & \lfloor p_{1,2}\rfloor & \cdots & \lfloor p_{1,2N}\rfloor & 1 & \omega_1^4\\ \lfloor p_{2,1}\rfloor & \lfloor p_{2,2}\rfloor & \cdots & \lfloor p_{2,2N}\rfloor & 1 & \omega_2^4\\ \vdots & \vdots & \ddots & \vdots & \vdots & \vdots\\ \lfloor p_{N_f,1}\rfloor & \lfloor p_{N_f,2}\rfloor & \cdots & \lfloor p_{N_f,2N}\rfloor & 1 & \omega_{N_f}^4\end{bmatrix} \begin{bmatrix}[R]_{p1}\\ [R]_{p2}\\ \vdots\\ [R]_{p2N}\\ [R]_{pL}\\ [R]_{pU}\end{bmatrix}, \qquad (17.21)$$
where the last two elements in the rightmost matrix are residual terms. This equation may be written as
$$[G] = [P]\,[\tilde R_p], \qquad (17.22)$$
which may be solved for the residue matrix for response DOF $p$, $[\tilde R_p]$, in a least squares sense, for example by a pseudoinverse. The mode shape coefficients for DOF $p$ are then extracted from the residue matrices by decomposing each submatrix $[\tilde R_{pr}]$ for mode $r$ by an SVD,
$$[\tilde R_{pr}] = USV^H = \{\psi\}_r\{L\}_r^H, \qquad (17.23)$$
where
$$\{\psi\}_r = [U][S]^{1/2}, \qquad \{L\}_r = [V]^*[S]^{1/2}, \qquad (17.24)$$

where $*$ denotes the complex conjugate part of the Hermitian transpose in Equation (17.23). This is repeated for all response DOFs. Since the rank of the residue matrix is unity, there is only a single nonzero singular value, which means the first columns of $[U]$ and $[V]$ are taken as the mode shape and the modal participation vector, respectively. In most cases, the mode shape scaling is not of interest, so, in practice, the mode shape vector may be extracted from the first column of the left singular matrix, $[U]$, without scaling by the singular value, and the modal participation vector may be discarded. If the modal participation factors are known, then we can modify the approach in the following way. Equation (17.18) is instead defined as
$$\lfloor p_{q,k,l}\rfloor = \left\lfloor\ \frac{L_{ql}}{j\omega_k-s_l}\quad \frac{L_{ql}}{-j\omega_k-s_l}\ \right\rfloor, \qquad (17.25)$$
and the residue vector is simplified into
$$\{R\}_{pr} = \begin{Bmatrix}\psi_{pr}\\ \psi_{pr}^*\end{Bmatrix}, \qquad (17.26)$$


by which all cross-spectra for response $p$ at frequency $\omega_k$ may be put in a column vector, which can be expressed as
$$\begin{Bmatrix}G_{y_p y_1}(\omega_k)\\ G_{y_p y_2}(\omega_k)\\ \vdots\\ G_{y_p y_Q}(\omega_k)\end{Bmatrix} = \begin{bmatrix}\lfloor p_{1,k,1}\rfloor & \lfloor p_{1,k,2}\rfloor & \cdots & \lfloor p_{1,k,2N}\rfloor & 1 & \omega_k^4\\ \lfloor p_{2,k,1}\rfloor & \lfloor p_{2,k,2}\rfloor & \cdots & \lfloor p_{2,k,2N}\rfloor & 1 & \omega_k^4\\ \vdots & \vdots & \ddots & \vdots & \vdots & \vdots\\ \lfloor p_{Q,k,1}\rfloor & \lfloor p_{Q,k,2}\rfloor & \cdots & \lfloor p_{Q,k,2N}\rfloor & 1 & \omega_k^4\end{bmatrix} \begin{Bmatrix}\psi_{p1}\\ \psi_{p2}\\ \vdots\\ \psi_{p2N}\\ R_{pL}\\ R_{pU}\end{Bmatrix}. \qquad (17.27)$$

This equation is repeated for all frequencies k = 1, 2, … , Nf by adding Q rows for every frequency and then solved for the mode shapes and residuals.
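The first of the two procedures above (no modal participation factors available) reduces to a linear least squares problem per response DOF followed by a rank-one SVD per mode. A compact Python sketch of it, for one response DOF and with hypothetical array names, is shown below; it is an illustration of Equations (17.18)-(17.24), not a full implementation:

```python
import numpy as np

def lsfd_oma_one_response(Gp, omega, poles):
    """LSFD-type solution for one response DOF p. Gp is an (Nf x Q) array of
    cross-spectra between response p and the Q references, omega the angular
    frequencies (rad/s) and poles the 2N poles from a high-order MPE method.
    Returns the (unscaled) mode shape coefficients of DOF p, one per pole."""
    Nf, Q = Gp.shape
    n2 = len(poles)                                    # 2N in the text
    P = np.zeros((Nf, 2 * n2 + 2), dtype=complex)
    for l, s in enumerate(poles):                      # columns per Equation (17.18)
        P[:, 2 * l] = 1.0 / (1j * omega - s)
        P[:, 2 * l + 1] = 1.0 / (-1j * omega - s)
    P[:, -2] = 1.0                                     # residual term R_pL
    P[:, -1] = omega ** 4                              # residual term R_pU
    Rp, *_ = np.linalg.lstsq(P, Gp, rcond=None)        # Equation (17.22) in least squares
    psi_p = np.zeros(n2, dtype=complex)
    for r in range(n2):
        block = Rp[2 * r:2 * r + 2, :]                 # 2-by-Q residue block of mode r
        U, S, Vh = np.linalg.svd(block)                # rank-one decomposition, Eq. (17.23)
        psi_p[r] = U[0, 0] * np.sqrt(S[0])             # mode shape coefficient, arbitrary scale
    return psi_p
```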

17.4 Scaling OMA Modal Models

As stated above, since the loads acting on the structure are not known in the OMA context, the mode shapes obtained are unscaled, i.e., the modal mass or modal A (depending on whether the damping is proportional or not, respectively) is unknown. Sometimes, for example for some structural health-monitoring applications, or if the modal model is to be used for structural modification (an analysis of the effects of changes of mass, stiffness, and damping on the modal properties; see, e.g., Heylen et al. (1997), Ewins (2000), or Maia and Silva (2001)), the modal scaling is necessary. There exist at least three different approaches to obtain a scaled modal model from OMA results. The first is based on making several OMA measurements on different configurations of the structure, and using the OMA results from these different configurations to extract the modal scaling. The configurations may be achieved by applying extra mass or stiffness in different degrees of freedom, see, e.g., Parloo et al. (2005), Bernal (2004), Bernal (2011), and Coppotelli (2009), or by adding tuned mass dampers, e.g., Hwang et al. (2006) and Brownjohn and Pavic (2007). There is not much literature where these methods have been attempted in practice, likely because the approaches require very large changes (e.g., masses to be moved around) to the structure, at least on civil engineering structures. The second approach is to use a mass matrix from a computational model, for example a finite element model, and, assuming this mass matrix is correct, obtain the modal mass from it. This technique was presented in Aenlle and Brincker (2013) and is reasonable in many cases where the mass distribution may be considered to be well known. It is briefly described in Section 17.4.1. To use a finite element model for scaling OMA results was also suggested in Hanson et al. (2007b). The third approach is to apply some kind of excitation. This may be done by using the so-called OMAX techniques (OMA with eXogenous inputs), which is a hybrid technique where the signals measured for OMA are partly the response to natural loads, and partly caused by (measured) external loads. Alternatively, the so-called OMAH method, described below, may be used, in which case harmonic loads are applied after the OMA measurements are completed, to add information that may be used to obtain the modal scaling. This technique is described in Section 17.4.2 below.


There is not much literature where any of the methods have been attempted, although the OMAH method was successfully applied to a wood building in Abdeljaber et al. (2021).

17.4.1 Scaling an OMA Model Using the Mass Matrix

The method for scaling the OMA model using a mass matrix as suggested in Aenlle and Brincker (2013) is straightforward. It is based on calculating the modal mass from Equation (6.143), repeated here for convenience:
$$[\Phi]^T[M][\Phi] = \lceil M_r\rfloor, \qquad (17.28)$$

where the mass matrix $[M]$ comes from a finite element (FE) model, or some other model, and $[\Phi]$ is the (real-valued) mode shape matrix. The diagonal of $\lceil M_r\rfloor$ contains the modal mass of each mode. To use Equation (17.28) for scaling, i.e., to calculate the modal masses, the mode shape vectors from the OMA must usually be expanded to the size of the mass matrix, since the OMA mode shape vectors usually lack many of the DOFs in the FE model. This may be accomplished by expanding the OMA mode shape matrix $[\Psi]$ into an expanded matrix $[\Phi_e]$ by the SEREP technique described in Section 6.3.3, and using this matrix in Equation (17.28). As is evident from the discussion here, this method assumes proportional damping and thus real-valued mode shapes, for which modal masses are defined.
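Once the expanded mode shape matrix is available, the scaling itself is a small computation. The sketch below (Python, with hypothetical variable names) computes the modal masses per Equation (17.28) and, as one common way of presenting the result, rescales the mode shapes to unity modal mass:

```python
import numpy as np

def scale_by_mass_matrix(Phi_e, M):
    """Modal masses from an expanded, real-valued mode shape matrix Phi_e
    (Ndof x Nmodes, e.g. from SEREP expansion of the OMA mode shapes) and a
    mass matrix M from an FE model, per Equation (17.28). Also returns the
    mode shapes rescaled to unity modal mass (one possible normalization)."""
    Mr = np.diag(Phi_e.T @ M @ Phi_e)     # modal masses: diagonal of Eq. (17.28)
    Phi_scaled = Phi_e / np.sqrt(Mr)      # column-wise rescaling to unity modal mass
    return Mr, Phi_scaled
```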

17.4.2 The OMAH Method

The OMAH method (OMA scaling by Harmonic excitation) was originally developed for single-reference scaling in Brandt et al. (2017), and it was later extended with a global scaling scheme, allowing excitation in several DOFs, in Brandt et al. (2019). Although the cited papers both assumed damping to be proportional, the method may easily be extended to nonproportional damping, as we will see below. The principal advantage of this technique is that it is based on measured loads, and thus requires no a priori knowledge of, or assumptions about, the structure (or a model of the structure). In addition, using a shaker to excite the structure makes it possible to investigate the linearity by trying different force levels. On the other hand, the structure needs to be excited by loads at several frequencies, which may be a disadvantage due to cost. The idea of using harmonic excitation is that this type of excitation has two important advantages: 1. harmonic signals added to random signals may be detected at very low signal-to-noise ratios (SNR), and thus the amount of force needed may be arbitrarily low (lower SNR only implies longer measurement time), and 2. harmonic excitation can be produced by a simple shaker with a moving mass, and thus requires inexpensive hardware. Indeed, in Abdeljaber et al. (2021), OMAH was used to successfully scale the first mode of a four-story wood building, using only a small electrodynamic shaker with a moving mass of only one kilogram and a displacement of less than 8 mm; the force level was only approximately 2 N RMS. The successful scaling was verified by using the scaled modal


model to estimate the response to the force in another DOF than the one used for the scaling. The result was accurate to within approximately 6 %. The basic principle of the OMAH method is obtained by observing that for a proportionally damped system, Equation (6.113), which is presented in the frequency domain here for convenience, states that a frequency response between two points p and q (in dynamic flexibility, or receptance, form) may be written as
$$H_{pq}(j\omega) = \sum_{r=1}^{N}\frac{\psi_{pr}\psi_{qr}}{m_r(j\omega-s_r)(j\omega-s_r^*)}, \qquad (17.29)$$

where $\psi_{pr}$ is the mode shape coefficient in point $p$ for mode $r$, etc., and $m_r$ is the modal mass of mode $r$. Now, once the OMA analysis is done, all the variables on the right-hand side of Equation (17.29) are known, except the modal masses, $m_r$. The simplest way to apply OMAH is by assuming that only one mode is affecting the FRF value $H_{pq}(\omega_1)$ near, but not necessarily exactly at, a natural frequency $\omega_r$. This is in effect a single-degree-of-freedom (SDOF) approximation, similar to that used for EMA in Chapter 16. If only mode $r$ is contributing to the FRF at this frequency, then Equation (17.29) may be approximated by
$$H_{pq}(j\omega_1) \approx \frac{\psi_{pr}\psi_{qr}}{m_r(j\omega_1-s_r)(j\omega_1-s_r^*)}, \qquad (17.30)$$
where it is assumed that $\omega_1 \approx \omega_r$, so that the FRF is dominated by mode $r$ at this frequency. The idea is thus to estimate a frequency response value $\hat H_{pq}(j\omega_1)$ by exciting DOF $q$ at a single frequency, $\omega_1$, measuring the force, $\hat F_q(\omega_1)$, and the response displacement, $\hat u_p(\omega_1)$, to compute the FRF value $\hat H_{pq}(j\omega_1) = \hat u_p(\omega_1)/\hat F_q(\omega_1)$, and then rearrange the terms so that the modal mass is determined by
$$\hat m_r = \frac{\hat\psi_{pr}\hat\psi_{qr}}{\hat H_{pq}(j\omega_1)\,(j\omega_1-\hat s_r)(j\omega_1-\hat s_r^*)}. \qquad (17.31)$$
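Assuming that the complex amplitudes of the harmonic force and displacement have already been estimated (how to do this is discussed in the following paragraph and in Section 18.6), the single-mode scaling in Equation (17.31) amounts to a one-line computation; the following Python sketch, with illustrative argument names, makes this explicit:

```python
import numpy as np

def omah_modal_mass_sdof(u_p, F_q, omega1, s_r, psi_pr, psi_qr):
    """Modal mass of mode r from one harmonic excitation test, using the
    SDOF approximation in Equation (17.31). u_p and F_q are the complex
    amplitudes of displacement and force at the excitation frequency omega1
    (rad/s); s_r, psi_pr and psi_qr come from the OMA result."""
    H_pq = u_p / F_q        # measured FRF value (receptance) at omega1
    return (psi_pr * psi_qr) / (H_pq * (1j * omega1 - s_r) * (1j * omega1 - np.conj(s_r)))
```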

To estimate the harmonic force and displacement as complex quantities (with both amplitude and phase), the methods described in Section 18.6 may be used. In many cases, with clearly separated modes, this simple equation may be used with sufficient accuracy. In other cases, the more advanced global OMAH method described next may be used. Of course, to obtain a numerically more accurate solution, an equation system based on several frequencies around 𝜔r may be used. In cases with closely spaced modes, the global OMAH version may be used. This method was first described for systems with proportional damping (real-valued mode shapes) in Brandt et al. (2019). Here we will, however, extend it to a case for general damping that has never been published. We start with an approximation of an arbitrary FRF, including residual terms. We assume that we want to scale a number g of modes, starting with mode number h, where g and h are


integer numbers, whereby an arbitrary FRF in receptance format may be written, similarly to before, as
$$H_{pq}(j\omega) = \sum_{r=h}^{h+g-1}\left(\frac{Q_r\psi_{pr}\psi_{qr}}{j\omega-s_r} + \frac{Q_r^*\psi_{pr}^*\psi_{qr}^*}{j\omega-s_r^*}\right) + \frac{C_{pq}}{\omega^2} + D_{pq}. \qquad (17.32)$$

Next, we assume that we have estimated FRFs measured between force DOFs $q = q_1, q_2, \ldots, q_Q$ and response DOFs $p = p_1, p_2, \ldots, p_P$, for some integer numbers $P$, $Q$. We also assume that each FRF is measured at several excitation frequencies $\omega = \omega_{ex,1}, \omega_{ex,2}, \ldots, \omega_{ex,k}$, by exciting the structure at one frequency and in one DOF at a time. All these estimated FRFs are gathered in a column vector,
$$\{\hat H\} = \big[\ \hat H_{p_1 q_1}(j\omega_{ex,1})\ \ \hat H_{p_1 q_1}(j\omega_{ex,2})\ \cdots\ \hat H_{p_2 q_1}(j\omega_{ex,1})\ \ \hat H_{p_2 q_1}(j\omega_{ex,2})\ \cdots\ \hat H_{p_P q_1}(j\omega_{ex,1})\ \ \hat H_{p_P q_1}(j\omega_{ex,2})\ \cdots\ \hat H_{p_1 q_2}(j\omega_{ex,1})\ \ \hat H_{p_1 q_2}(j\omega_{ex,2})\ \cdots\ \hat H_{p_1 q_Q}(j\omega_{ex,1})\ \ \hat H_{p_1 q_Q}(j\omega_{ex,2})\ \cdots\ \hat H_{p_P q_Q}(j\omega_{ex,1})\ \ \hat H_{p_P q_Q}(j\omega_{ex,2})\ \cdots\ \big]^T. \qquad (17.33)$$
We also define a column vector, $\{x\}$, including all the modal scaling constants and residual terms:
$$\{x\} = \big[\ Q_h\ \ Q_h^*\ \cdots\ Q_{h+g-1}\ \ Q_{h+g-1}^*\ \ C_{p_1 q_1}\ \ D_{p_1 q_1}\ \ C_{p_2 q_1}\ \ D_{p_2 q_1}\ \cdots\ C_{p_P q_1}\ \ D_{p_P q_1}\ \ C_{p_1 q_2}\ \ D_{p_1 q_2}\ \cdots\ C_{p_P q_Q}\ \ D_{p_P q_Q}\ \big]^T. \qquad (17.34)$$

∗ 𝜓∗ 𝜓pr qr

(j𝜔ex − sr )

(j𝜔ex − s∗r )

⌋,

(17.35)

where it should be noted that all variables are known from the OMA result. Using the definition of Γ, we formulate a large matrix, [A], containing the contributions from all modes, including coefficients for the residual terms in Equation (17.32), ⎡ Γ(p1 , q1 , h, 𝜔ex,1 ) ⎢ Γ(p , q , h, 𝜔 ) 1 1 ex,2 ⎢ ⋮ ⎢ ⎢ Γ(p , q , h, 𝜔 ) 2 1 ex,1 ⎢ ⎢ Γ(p2 , q1 , h, 𝜔ex,2 ) ⎢ ⋮ ⎢ ⎢ Γ(pP , q1 , h, 𝜔ex,1 ) [A] = ⎢ Γ(pP , q1 , h, 𝜔ex,2 ) ⎢ ⋮ ⎢ ⎢ Γ(p1 , q2 , h, 𝜔ex,1 ) ⎢ Γ(p1 , q2 , h, 𝜔ex,2 ) ⎢ ⋮ ⎢ ⎢ Γ(pP , qQ , h, 𝜔ex,1 ) ⎢ Γ(pP , qQ , h, 𝜔ex,2 ) ⎢ ⋮ ⎣

Γ(p1 , q1 , h + 1, 𝜔ex,1 ) Γ(p1 , q1 , h + 1, 𝜔ex,2 ) ⋮ Γ(p2 , q1 , h + 1, 𝜔ex,1 ) Γ(p2 , q1 , h + 1, 𝜔ex,2 ) ⋮ Γ(pP , q1 , h + 1, 𝜔ex,1 ) Γ(pP , q1 , h + 1, 𝜔ex,2 ) ⋮ Γ(p1 , q2 , h + 1, 𝜔ex,1 ) Γ(p1 , q2 , h + 1, 𝜔ex,2 ) ⋮ Γ(pP , qQ , h + 1, 𝜔ex,1 ) Γ(pP , qQ , h + 1, 𝜔ex,2 ) ⋮

… … ⋮ … … ⋮ … … ⋮ … … ⋮ … … ⋮

Γ(p1 , q1 , h + g − 1, 𝜔ex,1 ) 1∕𝜔2ex,1 Γ(p1 , q1 , h + g − 1, 𝜔ex,2 ) 1∕𝜔2ex,2 ⋮ ⋮ Γ(p2 , q1 , h + g − 1, 𝜔ex,1 ) 0 Γ(p2 , q1 , h + g − 1, 𝜔ex,2 ) 0 ⋮ ⋮ Γ(pP , q1 , h + g − 1, 𝜔ex,1 ) Γ(pP , q1 , h + g − 1, 𝜔ex,2 ) ⋮ ⋮ Γ(p1 , q2 , h + g − 1, 𝜔ex,1 ) Γ(p1 , q2 , h + g − 1, 𝜔ex,2 ) ⋮ ⋮ Γ(pP , qQ , h + g − 1, 𝜔ex,1 ) Γ(pP , qQ , h + g − 1, 𝜔ex,2 ) ⋮ ⋮

1 0 1 0 ⋮ ⋮ 0 1∕𝜔2ex,1 0 1∕𝜔2ex,2 ⋮ ⋮ … … ⋮ ⋮ … … ⋮ ⋮ … … ⋮ ⋮

…⎤ … ⎥⎥ ⋮ ⎥ …⎥ ⎥ …⎥ ⋮ ⎥ ⎥ ⎥ ⎥, ⋮ ⋮ ⎥ ⎥ ⎥ ⎥ ⋮ ⋮ ⎥ ⎥ ⎥ ⎥ ⋮ ⋮ ⎥⎦

0 0 ⋮ 1 1 ⋮

(17.36)


by which Equation (17.32) may now be repeated for all estimated FRFs and for all frequencies, in the equation system
$$[\hat A]\{\hat x\} = \{\hat H\}, \qquad (17.37)$$
which may be solved for all the unknowns in $\{\hat x\}$ by, for example, a least squares solution. We have put hats on all variables in Equation (17.37) to indicate that they are all estimates. It should be noted that not all FRFs need to be measured at all frequencies, although for simplicity we have indicated that. Indeed, any FRF may be measured at any frequency. Typically, for good results, each excited DOF should only be excited at frequencies for which the modes are well excited. Only some DOFs need to be excited to ensure that the equation system in Equation (17.37) is suitably overdetermined. It is important to note that, due to the complex conjugate symmetry, to ensure a proper solution, positive and negative frequencies must be used in the equations. An FRF for a negative frequency is simply the complex conjugate of the FRF for the positive frequency. Finally, it should be noted that once the scaling is done, its success may be investigated. This is a great advantage, and is done by using a measured force and a synthesized FRF to compute what the response should be at one of the measured responses. By comparing this computed response with the measured response in the same DOF, a direct assessment of the accuracy of the modal scaling is obtained. This may be repeated for all measured responses.
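This verification step is straightforward once the scaled modal model is available. As an illustration, assuming proportional damping as in Equation (17.29), the following Python sketch predicts the complex displacement response in a DOF p to a measured harmonic force in DOF q, which can then be compared with the response actually measured in p:

```python
import numpy as np

def predict_harmonic_response(omega, F_q, psi_p, psi_q, poles, m):
    """Predict the complex displacement amplitude in DOF p for a harmonic
    force with complex amplitude F_q applied in DOF q at angular frequency
    omega, using the scaled modal model of Equation (17.29). psi_p, psi_q,
    poles and m are arrays over the scaled modes (mode shape coefficients in
    p and q, poles, and modal masses)."""
    H_pq = np.sum(psi_p * psi_q /
                  (m * (1j * omega - poles) * (1j * omega - np.conj(poles))))
    return H_pq * F_q      # predicted response; compare with the measured one
```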

17.5 Chapter Summary

In this chapter, we have seen that OMA is closely related to experimental modal analysis, EMA, as far as the parameter extraction is concerned. The difference between the two methods is that OMA is based on analyzing responses of the structure due to the in-operation loads, whereas EMA is based on measurements of frequency responses from actively exciting the structure. In theory, the natural loads used for OMA have to be random in nature, but in practice, there is usually enough random variation in loads so that estimation of the modal parameters is possible even in cases where there are also harmonic loads. In some cases, it may be necessary to remove harmonic vibrations prior to the modal parameter estimation, which can be done by one of the methods described in Chapter 18. Data acquisition for OMA is performed by recording sensor signals from the DOFs that are desired. The best results are obtained by recording all responses simultaneously, but if this is not possible, some DOFs may be used as references, whereas other DOFs are roved, as described in Section 17.2. Modal parameter estimation for OMA is almost identical to that for EMA. The same methods that were described in Chapter 16 may be used with none or very small

variations. As we saw in Section 17.3, the autospectral matrix $[G_{yy}]$ is decomposed in the frequency domain into
$$[G_{yy}(j\omega)] = \sum_{r=1}^{N}\frac{\{\psi\}_r\{L\}_r^T}{j\omega-s_r} + \frac{\{\psi\}_r^*\{L\}_r^H}{j\omega-s_r^*} + \frac{\{L\}_r^*\{\psi\}_r^T}{-j\omega-s_r} + \frac{\{L\}_r\{\psi\}_r^H}{-j\omega-s_r^*}, \qquad (17.38)$$

where {𝜓}r is the mode shape vector of mode r, {L}r is the modal participation vector, which is different for OMA than for EMA, and sr is the pole for mode r. The first two terms in Equation (17.38) are recognized from the decomposition of the frequency response matrix in Equation (16.1). The two last terms in the equation represent poles located in the positive s-plane. These terms will thus result in poles having a positive real part, and the same imaginary part as the true poles of the system. Thus, any frequency domain modal parameter estimation algorithm based on decomposing the FRF matrix into Equation (16.1) (i.e., not the CMIF but most other algorithms) can be used without modification, by increasing (doubling) the maximum number of modes in the parameter estimation process. If a method such as the CMIF is desired for OMA, the frequency domain decomposition, FDD, method, may be used. As described in Section 17.3.5, the FDD method corresponds to the CMIF method, with a small modification. Time domain methods for OMA are usually based on correlation functions that exhibit free decays similar to impulse responses used for EMA. As we described in Section 17.3.2, the inverse Fourier transform of Equation (17.38) leads to two parts. The first two terms lead to positive time lags with the same decay as impulse responses, whereas the inverse Fourier transform of the two last terms lead to negative time delays with similar properties. By ignoring the latter and only using the correlation functions for positive time lags, we can use time domain parameter estimation methods for EMA without modifications. Mode shapes from OMA are always unscaled (because no loads are measured) and may conveniently be obtained in the same step as the pole estimation, if using a low-order method such as multireference Ibrahim time domain or frequency direct parameter identification. If a high-order method such as the polyreference time domain or least squares complex frequency domain methods is desired, the mode shapes have to be obtained in a second step, as described in Section 17.3.6, or by using the least squares time domain method described in Section 16.9.4. The unscaled modal model obtained by OMA may be scaled using any of several methods, as described in Section 17.4. A mass matrix from a model of the structure under test may be used to calculate the modal masses as described in Section 17.4.1. Alternatively, the OMAH method described in Section 17.4.2 is based on harmonic excitation of the structure in one or more of the DOFs used for the OMA, after which the modal scaling may be obtained.


17.6 Problems

Problem 17.1 Assume you want to make an OMA test on a structure using 40 response DOFs, which we assume you can measure synchronously. You wish to use the "Multiple-reference Ibrahim time domain" (MITD) method to find the poles and mode shapes. Answer the following questions:
(a) Do you have to calculate the power spectral densities of the responses?
(b) Which input functions should you calculate to input into the MITD algorithm?
(c) How many of the response signals would you choose as references?
(d) How do you obtain the mode shapes?

Problem 17.2 Assume you have the same case as in Problem 17.1, but you wish to use the "Least squares complex frequency" (LSCF) method to find the poles and mode shapes. Answer the following questions:
(a) Do you have to calculate the power spectral densities of the responses?
(b) Which input functions should you calculate to input into the LSCF algorithm?
(c) How many of the responses would you select as references, 1, 4, or 40?
(d) Which are the output results you obtain from LSCF?
(e) What are the sizes of the different output results (vectors and/or matrices) from LSCF?
(f) How do you obtain the mode shapes?

Problem 17.3 You have decided that you want to scale the modal model you have obtained from Problem 17.1 using the OMAH method described in Section 17.4.2. You have estimated 10 modes (poles and mode shapes). Answer the following questions:
(a) What is the size of the mode shape matrix?
(b) How many DOFs do you need to excite by the shaker?
(c) What type of signal do you need to excite the structure with?
(d) How many frequencies do you need to excite the structure with, as a minimum?

References

Abdeljaber O, Dorn M and Brandt A 2021 Scaling an OMA modal model of a wood building using OMAH and a small shaker. Topics in Modal Analysis & Testing, Volume 8, Springer, pp. 151–157.
Aenlle ML and Brincker R 2013 Modal scaling in operational modal analysis using a finite element model. International Journal of Mechanical Sciences 76, 86–101.
Antoni J and Chauhan S 2013 A study and extension of second-order blind source separation to operational modal analysis. 332(4), 1079–1106.
Asmussen J 1997 Modal Analysis Based on the Random Decrement Technique – Application to Civil Engineering Structures. PhD thesis, Dept. of Building Technology and Structural Engineering, University of Aalborg.


Bernal D 2004 Modal scaling from known mass perturbations. Journal of Engineering Mechanics 130(9), 1083–1088.
Bernal D 2011 A receptance based formulation for modal scaling using mass perturbations. Mechanical Systems and Signal Processing 25(2), 621–629.
Brandt A, Berardengo M, Manzoni S and Cigada A 2017 Scaling of mode shapes from operational modal analysis using harmonic forces. Journal of Sound and Vibration 407, 128–143.
Brandt A, Berardengo M, Manzoni S, Vanali M and Cigada A 2019 Global scaling of operational modal analysis modes with the OMAH method. Mechanical Systems and Signal Processing 117, 52–64.
Brincker R 2017 On the application of correlation function matrices in OMA. Mechanical Systems and Signal Processing 87, Part A, 17–22.
Brincker R and Ventura C 2015 Introduction to Operational Modal Analysis. John Wiley & Sons, Chichester, UK.
Brincker R, Krenk S, Kirkegaard PH and Rytter A 1992 Identification of dynamical properties from correlation function estimates. Bygningsstatiske Meddelelser 63(1), 1–38.
Brincker R, Zhang LM and Andersen P 2001 Modal identification of output-only systems using frequency domain decomposition. Smart Materials & Structures 10(3), 441–445.
Brownjohn JMW and Pavic A 2007 Experimental methods for estimating modal mass in footbridges using human-induced dynamic excitation. Engineering Structures 29(11), 2833–2843.
Cara FJ, Juan J and Alarcón E 2014 Estimating the modal parameters from multiple measurement setups using a joint state space model. 43(1–2), 171–191.
Coppotelli G 2009 On the estimate of the FRFs from operational data. Mechanical Systems and Signal Processing 23(2), 288–299.
Devriendt C and Guillaume P 2008 Identification of modal parameters from transmissibility measurements. Journal of Sound and Vibration 314(1–2), 343–356.
Döhler M, Lam XB and Mevel L 2013 Uncertainty quantification for modal parameters from stochastic subspace identification on multi-setup measurements. 36(2), 562–581.
Döhler M, Reynders E, Magalhaes F, Mevel L, Roeck GD and Cunha A 2011 Pre- and post-identification merging for multi-setup OMA with covariance-driven SSI. Dynamics of Bridges, Volume 5, Springer, pp. 57–70.
Ewins DJ 2000 Modal Testing: Theory, Practice and Application 2nd edn. Research Studies Press, Baldock, Hertfordshire, England.
Hanson D, Randall RB, Antoni J, Thompson DJ, Waters TP and Ford RAJ 2007a Cyclostationarity and the cepstrum for operational modal analysis of MIMO systems – Part I: Modal parameter identification. Mechanical Systems and Signal Processing 21(6), 2441–2458.
Hanson D, Randall RB, Antoni J, Waters TP, Thompson DJ and Ford RAJ 2007b Cyclostationarity and the cepstrum for operational modal analysis of MIMO systems – Part II: Obtaining scaled mode shapes through finite element model updating. Mechanical Systems and Signal Processing 21(6), 2459–2473.
Heylen W, Lammens S and Sas P 1997 Modal Analysis Theory and Testing 2nd edn. Catholic University Leuven, Leuven, Belgium.


Hwang JS, Kim H and Kim J 2006 Estimation of the modal mass of a structure with a tuned-mass damper using H-infinity optimal model reduction. Engineering Structures 28(1), 34–42.
Ibrahim SR and Mikulcik EC 1977 A method for the direct identification of vibration parameters from the free response. The Shock and Vibration Bulletin 47(47), 183–198.
James G, Carne TG, Lauffer JP and Nord AR 1992 Modal testing using natural excitation. Proceedings of 10th International Modal Analysis Conference, San Diego, CA.
Kerschen G, Poncelet F and Golinval JC 2007 Physical interpretation of independent component analysis in structural dynamics. Mechanical Systems and Signal Processing 21(4), 1561–1575.
Maia NMM and Silva JMM 2001 Modal analysis identification techniques. Philosophical Transactions of the Royal Society of London Series A-Mathematical Physical and Engineering Sciences 359(1778), 29–40.
Orlowitz E and Brandt A 2014 Effects of simultaneous versus roving sensors measurement in operational modal analysis. Proceedings of the International Conference on Noise and Vibration Engineering (ISMA 2014).
Orlowitz E, Andersen P and Brandt A 2015 Comparison of simultaneous and multi-setup measurement strategies in operational modal analysis. Proceedings of 5th International Operational Modal Analysis Conference (IOMAC), Gijón, Spain.
Parloo E, Cauberghe B, Benedettini F, Alaggio R and Guillaume P 2005 Sensitivity-based operational mode shape normalisation: application to a bridge. Mechanical Systems and Signal Processing 19(1), 43–55.
Rainieri C and Fabbrocino G 2014 Operational Modal Analysis of Civil Engineering Structures. Springer, New York.
Randall RB 2009 Cepstral methods of operational modal analysis. Chapter 24 in Encyclopedia of Structural Health Monitoring.
Randall RB and Gao Y 1994 Extraction of modal parameters from the response power cepstrum. Journal of Sound and Vibration 176(2), 179–193.
Randall RB, Coats MD and Smith WA 2015 OMA in the presence of variable speed harmonic orders. ICEDyn 2015 International Conference on Structural Engineering Dynamics, Lagos, Portugal, pp. 22–24.
van Overschee P and De Moor B 1996 Subspace Identification for Linear Systems: Theory – Implementation – Applications. Springer.


18 Advanced Analysis Methods

In this chapter, we will discuss some signal analysis tools which have not found their right place in the preceding chapters, but which have their applications in noise and vibration analysis. Each method will be briefly discussed and references given for the reader who wants to find more information.

18.1 Shock Response Spectrum

In cases where the damaging effect of a vibration environment is of interest, the various so-called response spectra are often used (Himmelblau et al. 1993; Lalanne 2002). The most common response spectrum is the Shock Response Spectrum, or SRS (Greenfield 1977; ISO 18431-4: 2007; Smallwood 1981). It has been used for a long time in aerospace and military applications, especially for pyroshock signals. In recent years, it has also found growing popularity in civilian applications, such as the automotive industry. The SRS quantifies the damaging potential of a signal, originally a transient signal, as the name implies. This damaging potential is of course a function of both the frequency content of the signal and the resonance frequencies of the structure to which the transient (vibration signal) is applied. The problem of determining the damaging potential of a vibration environment is, of course, an extremely difficult task. The shock response spectrum offers a conservative, approximate method of finding the worst damaging potential of the vibration environment in question. While the SRS was originally developed for transients, it has proven valuable also when comparing vibration environments of very different character, such as periodic, random, and transient. Let us say that we have a sensitive electronics box, e.g., for engine control. We have to mount this box somewhere in the engine compartment of a car. In some possible mounting points, the vibrations are mainly periodic, such as on top of the engine block, whereas if we mount it on the chassis, perhaps there will be more random vibrations caused by the road. How do we compare these entirely different environments, and select the least harmful position to place our electronics box? The SRS can be used for this purpose. Another common application of SRS is in environmental testing (see Section 1.3). It is often the case that a real-life vibration measurement has to be condensed into a test specification such as a random or shock test. SRS is a useful tool in such design (Ahlin 2006; Henderson and Piersol 2003; Lalanne 2002).


The basis for the SRS is an assumption that we are going to mount an object in a location where we know the vibration environment (usually in terms of the acceleration level), and we wish to have a measure of how dangerous this environment is to our object. A simple assumption is then that the object to be mounted will act as a single-degree-of-freedom system, causing a resonance amplification of the vibrations in the mounting position. This assumption does not necessarily mean that the mounted object has only one natural frequency, as each natural frequency (mode) acts as an SDOF system, as we showed in Chapter 6. As we will see, the SRS accounts for any natural frequency our object may have. The shock response is defined as the output response of a single degree-of-freedom (SDOF) mechanical mass-spring-damper system to the transient, which is applied to the base of the mechanical system. For each frequency value in the SRS, the natural frequency of the mechanical SDOF system is tuned to that frequency. The vibration signal to be analyzed can arbitrarily be displacement, velocity, or acceleration; however, acceleration is the most common (ISO 18431-4: 2007). Some authors advocate the use of pseudo-velocity (Gaberson 2003; Gaberson et al. 2000). For this brief introduction, we will focus on acceleration output. An illustration of an SDOF mechanical system is shown in Figure 18.1. The input signal, x (the measured vibration signal), is assumed to be the acceleration of the base of the mechanical system. The output, from which the SRS is derived, is the resulting vibration (acceleration) of the mass, m. Denoting the (undamped) natural frequency of the system by $f_n$, and the (viscous) relative damping by $\zeta_n$, we obtain the transfer function
$$H(s) = \frac{X_{out}}{X_{in}} = \frac{1 + 2\zeta_n s/\omega_n}{1 + 2\zeta_n s/\omega_n + \left(s/\omega_n\right)^2} \qquad (18.1)$$

where $\omega_n = 2\pi f_n$ is the natural angular frequency in rad/s. Using the transfer function in Equation (18.1), the output time signal is calculated, and usually the maximum absolute value of the output signal is taken as the SRS value. This is referred to as the maximax shock response. Furthermore, the output signal is divided into two parts. The part of the output signal during the time when the input signal is present is called the primary part, and the part of the output signal after the input signal is removed (no longer exists) is called the residual part. Depending on which of the two output signal parts is used, the SRS is referred to as primary SRS or residual SRS, respectively.


Figure 18.1 SDOF mechanical system is used for SRS calculation. For each frequency value in the SRS, the mass-spring system is tuned to that frequency. The base of the system is supposed to be excited by the input signal xin (t), and the SRS is defined as the maximum of the (absolute value of) the resulting response of the mass, xout (t). The excitation and response units are usually acceleration, but can be velocity or displacement in certain applications, or the relative displacement, velocity, or acceleration, see ISO 18431-4: (2007).


Finally, the damping of the mechanical system has to be selected. The assumed damping of the structure for which the SRS is applied should be used. If this value is not known, usually $\zeta_n = 0.05$ is used as a standard value. In SRS applications (and generally in environmental testing), it is more common to use the Q-factor, which, as we saw in Equation (5.27), is $Q = 1/(2\zeta)$. It is obvious from the above that the SRS is not a spectrum in general terms. It is called a spectrum only because it has frequency on its x-axis. The interpretation of the SRS value for a particular frequency is the maximum acceleration that will be caused by the analyzed signal, given that the structure has a natural frequency at the particular "SRS frequency," with the damping used for the SRS calculation. To calculate the SRS, the digital filters defined in ISO 18431-4: (2007) should be used. The SRS is usually calculated for frequencies on a logarithmic scale, often 1/6-octave bands, and for a Q-factor of Q = 10. In Figure 18.2, the maximax SRS for absolute acceleration using Q = 10, produced by a transient (half sine), is plotted. When using a standard noise and vibration measurement system to acquire signals for SRS calculations, it is important to consider the oversampling rate, see Chapter 3, as well as the phase linearity of the antialiasing filters, see Section 11.2.2. As the SRS is calculated in the time domain, it is not sufficient to use the oversampling factor of 2.56 that is normally used for frequency analysis. At least a factor of 10 must be used in order for the error in the SRS calculation to be small. In many systems, this can be accomplished by first sampling the data with a sufficient frequency range and a "normal" oversampling ratio of 2.56, and then, in a postprocessing stage, digitally upsampling the data by a factor of 4 using the procedure described in Section 3.2. This will limit the maximum amplitude inaccuracy due to the sampling frequency to less than 10%. A particular characteristic of the SRS should be especially noted. At high frequencies, the SRS reaches the maximum of the absolute value of the input acceleration.


Figure 18.2 SRS of a half sine of 11 ms duration, with a maximum value of 100 g, calculated with Q = 10. The y-axis as well as the frequency axis are logarithmic, and the spectrum is calculated for frequencies spaced 1/6th octave apart.


This occurs at either the highest frequency contained in the analyzed acceleration signal or the bandwidth (highest frequency) of the acquisition system, whichever is lower. This is easily realized by considering the low-frequency part of the transfer function in Equation (18.1), which equals unity. For high SRS frequencies, the natural frequency of the SDOF system is high, and thus the bandwidth of the measured signal is low compared with the natural frequency. This means that the output of the SDOF system will be the same as the measured signal; there is no resonance amplification.
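As a rough illustration of the procedure, the following is a minimal sketch of a maximax SRS computation, assuming the acceleration signal is in a variable x, sampled at fs (with sufficient oversampling as discussed above). It uses a bilinear-transform digitization of the SDOF transmissibility in Equation (18.1) as a simple stand-in for the ramp-invariant filters defined in ISO 18431-4; the frequency range is an arbitrary example.

% Sketch of a maximax SRS; x and fs are assumed to exist in the workspace
Q    = 10;                       % standard Q-factor
zeta = 1/(2*Q);                  % relative damping
fn   = 10*2.^(0:1/6:7.5);        % SRS frequencies, 1/6-octave spacing (example range)
srs  = zeros(size(fn));
for kk = 1:length(fn)
    wn  = 2*pi*fn(kk);
    num = [0 2*zeta*wn wn^2];    % numerator of Eq. (18.1) in s
    den = [1 2*zeta*wn wn^2];    % denominator of Eq. (18.1) in s
    [bd,ad] = bilinear(num,den,fs);   % approximate digital SDOF filter
    y   = filter(bd,ad,x);
    srs(kk) = max(abs(y));       % maximax value (primary and residual together)
end
loglog(fn,srs)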

18.2 The Hilbert Transform

The Hilbert transform (Bendat and Piersol 2000) is a useful transform in some signal analysis applications. It is used for two different purposes: (i) for envelope calculations, for example when studying modulated signals, or (ii) to create so-called analytic signals, for which it relates the real and imaginary parts. The Hilbert transform of a signal is a new signal in the same domain as the original signal. Thus, the Hilbert transform of a time signal is a new time signal, and the Hilbert transform of a frequency domain signal is a new signal in the frequency domain. The Hilbert transform, $\tilde{x}(t)$, of a real time signal x(t), given that this signal exists for times $-\infty \leq t \leq \infty$, is a new, real-valued time signal which is defined by
$$\tilde{x}(t) = \int_{-\infty}^{\infty} \frac{x(u)}{\pi(t-u)}\,du = x(t) * \frac{1}{\pi t} \qquad (18.2)$$

where $*$ denotes convolution. The Hilbert transform is a linear operator, i.e., the Hilbert transform of a sum of two signals equals the sum of the separate Hilbert transforms of each signal. The convolution in Equation (18.2) corresponds to multiplication in the frequency domain by the Fourier transform of $1/\pi t$. It can be shown that this Fourier transform is given by
$$\mathcal{F}\left[\frac{1}{\pi t}\right] = -j\cdot\mathrm{sgn}(f) = \begin{cases} -j, & f > 0 \\ 0, & f = 0 \\ j, & f < 0 \end{cases} \qquad (18.3)$$

where $\mathcal{F}$ denotes the Fourier transform. Thus, the Fourier transform of $\tilde{x}(t)$ is
$$\tilde{X}(f) = \mathcal{F}\left[\tilde{x}(t)\right] = -j\cdot\mathrm{sgn}(f)X(f) \qquad (18.4)$$

Equation (18.4) can alternatively be written as
$$\tilde{X}(f) = \begin{cases} e^{-j\pi/2}X(f), & f > 0 \\ 0, & f = 0 \\ e^{j\pi/2}X(f), & f < 0 \end{cases} \qquad (18.5)$$

Thus, the Hilbert transform of a time domain signal equals a phase shift of ±90∘ in the frequency domain. The Hilbert transform acts as an all-pass filter with uniform (flat) amplitude characteristic and with phase shift of −90∘ for positive frequencies and +90∘ for negative frequencies. The inverse Hilbert transform which calculates the time signal x(t) from


the Hilbert transform $\tilde{x}(t)$, is given by
$$x(t) = -\int_{-\infty}^{\infty} \frac{\tilde{x}(u)}{\pi(t-u)}\,du \qquad (18.6)$$

but can also be written as
$$x(t) = \mathcal{F}^{-1}\left[j\cdot\mathrm{sgn}(f)\tilde{X}(f)\right] \qquad (18.7)$$

It is important to note that the Hilbert transform does not commute with the Fourier transform, i.e.,
$$\mathcal{F}\left[\mathcal{H}\left[x(t)\right]\right] \neq \mathcal{H}\left[\mathcal{F}\left[x(t)\right]\right] \qquad (18.8)$$
where we have denoted the Hilbert transform by $\mathcal{H}$.

18.2.1 Computation of the Hilbert Transform

To compute the Hilbert transform, a new function z(t) is defined by
$$z(t) = x(t) + j\tilde{x}(t) \qquad (18.9)$$
which is called the analytic signal of x(t). In the frequency domain, the Fourier transform of z(t) will be
$$Z(f) = X(f) + j\tilde{X}(f) \qquad (18.10)$$

By using Equation (18.4) with Equation (18.10), we obtain
$$Z(f) = X(f) + j\left(-j\cdot\mathrm{sgn}(f)X(f)\right) = \left(1 + \mathrm{sgn}(f)\right)X(f) \qquad (18.11)$$
If we define Z(0) = X(0), we obtain
$$Z(f) = \begin{cases} 2X(f), & f > 0 \\ X(0), & f = 0 \\ 0, & f < 0 \end{cases} \qquad (18.12)$$

Using the Fourier transform Z(f), we can now obtain the Hilbert transform by the inverse Fourier transform of Z(f), whereby
$$\tilde{x}(t) = \mathrm{Im}\left\{\mathcal{F}^{-1}\left[Z(f)\right]\right\} \qquad (18.13)$$
i.e., the method to compute the Hilbert transform is to inverse Fourier transform the Fourier transform of the analytic signal z(t) and take the imaginary part. Note that the negative half-plane has been zeroed out in Z(f), so that the digital formula to calculate the Hilbert transform becomes
$$\tilde{x}(n) = 2\Delta f\,\mathrm{Im}\left[\sum_{k=0}^{N/2} X_c(k)e^{j\frac{2\pi nk}{N}}\right] \qquad (18.14)$$

Especially note that the sum in Equation (18.14) is only carried out up to the Nyquist frequency, N/2. In Equation (18.14), $X_c(k)$ is the normalized discrete Fourier transform which approximates the continuous Fourier transform. Thus,
$$X_c(k) = \Delta t \sum_{n=0}^{N-1} x(n)e^{-j\frac{2\pi kn}{N}} \qquad (18.15)$$


Since $\Delta f = f_s/N$ and $\Delta t = 1/f_s$, then, using the ordinary DFT X(k), we can simplify Equation (18.14) into
$$\tilde{x}(n) = \frac{2}{N}\,\mathrm{Im}\left[\sum_{k=0}^{N/2} X(k)e^{j\frac{2\pi nk}{N}}\right] \qquad (18.16)$$
which is the algorithm to compute the Hilbert transform $\tilde{x}(t) = \mathcal{H}[x(t)]$.
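The sum in Equation (18.16) can be evaluated directly, as in the following minimal sketch on a random example signal (an even block size N is assumed). The result is compared with the imaginary part of the analytic signal returned by the standard hilbert command; the two agree to within round-off, since the DC and Nyquist terms do not contribute to the imaginary part.

% Direct evaluation of Eq. (18.16) and comparison with hilbert
x  = randn(1024,1);              % example signal
N  = length(x);
X  = fft(x);
n  = (0:N-1)';
k  = (0:N/2)';
xt = (2/N)*imag(exp(1j*2*pi*n*k'/N)*X(1:N/2+1));   % Eq. (18.16)
xt2 = imag(hilbert(x));                            % built-in analytic signal
max(abs(xt-xt2))                                   % should be of machine-precision size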

18.2.2 Envelope Detection by the Hilbert Transform

One of the common applications of the Hilbert transform in noise and vibration analysis is to find the envelope of correlation signals or modulated signals. Recall that for bandwidth-limited noise, the autocorrelation of a signal x(t), $R_{xx}(\tau)$, becomes broadened into a sinc function, see Section 4.2.12. If we have a situation where a band-limited signal passes several paths from one point to another, as depicted in Figure 18.3, it can therefore potentially be hard to find the positions of the maxima of the cross-correlation between the two signals. To use the Hilbert transform to calculate envelopes, the complex variable z(t), called the analytic signal related to x(t) as defined in Equation (18.9), is used. The envelope of the signal x(t) is defined as the magnitude of z(t), i.e., the envelope e(t) is defined as
$$e(t) = |z(t)| = \sqrt{x^2(t) + \tilde{x}^2(t)} \qquad (18.17)$$

Example 18.2.1 We will illustrate the use of the Hilbert transform for envelope computation by an example of a two-path system as shown in Figure 18.3, where each of the two systems, in addition to a time delay, consists of an SDOF system with a natural frequency of 100 Hz and relative damping $\zeta = 0.01$. Both SDOF systems are excited by bandlimited white random noise with a bandwidth of 1000 Hz. In Figure 18.4, the cross-correlation of the two signals, $R_{yx}(\tau)$, is shown. As is seen in Figure 18.4, it is hard to find exactly where the two peaks are located, due to the wide correlation peaks. In Figure 18.5, the envelope calculated by using the Hilbert transform is shown. In this plot, it is easy to see where the two peaks occur. End of example.

Figure 18.3 Two-path problem. The bandwidth-limited noise x(t) is passing through the two paths. Each path has a different time delay, denoted $\tau_1$ and $\tau_2$. By calculating the cross correlation between signals x and y and locating the two peaks in this function, potentially the time delays can be estimated.


Figure 18.4 Plot for Example 18.2.1. Cross correlation $R_{yx}(\tau)$ of two signals with delay as illustrated in Figure 18.3, where the paths have time delays $\tau_1$ = 100 ms and $\tau_2$ = 150 ms. In addition to the different time delays, each path consists of an SDOF system with a natural frequency of 100 Hz, and 1 % relative damping. Due to the band-limited nature of the noise after passing the SDOF systems, each correlation peak is broadened, making it hard to find the location of the two peaks in $R_{yx}(\tau)$.


Figure 18.5 Envelope of the cross correlation function $R_{yx}(\tau)$ in Figure 18.4. In the envelope, it is easy to find the locations of the two peaks at approximately 100 and 150 ms.
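The two-path example can be reproduced approximately with the short sketch below. The parameter values follow Example 18.2.1, but the SDOF filter design (a bilinear-transform discretization) and the signal lengths are assumptions made for the illustration.

% Sketch of Example 18.2.1: two delayed SDOF paths and the Hilbert envelope of Ryx
fs = 2000; N = 2^18;
x  = randn(N,1);                          % white noise, bandwidth fs/2 = 1000 Hz
wn = 2*pi*100; zeta = 0.01;
[bd,ad] = bilinear([0 0 wn^2],[1 2*zeta*wn wn^2],fs);   % simple SDOF filter
h  = filter(bd,ad,x);
d1 = round(0.100*fs); d2 = round(0.150*fs);             % the two path delays
y  = [zeros(d1,1); h(1:end-d1)] + [zeros(d2,1); h(1:end-d2)];
[Ryx,lags] = xcorr(y,x,round(0.2*fs),'unbiased');       % cross correlation
env = abs(hilbert(Ryx));                                % envelope, Eq. (18.17)
plot(lags/fs*1000,env), xlabel('Time [ms]')             % peaks near 100 and 150 ms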


18.2.3 Relating Real and Imaginary Parts of Frequency Response Functions

In the second use of the Hilbert transform, we will use it to find the relationship between the real part and the imaginary part of a frequency response function. We showed in Section 2.7.1 that any signal h(t) can be divided into a sum of an even part $h_e(t)$ and an odd part $h_o(t)$. We are especially interested at the moment in impulse response functions h(t). If the impulse response function represents a physically realizable system, then it is causal, i.e.,
$$h(t) = 0, \quad t < 0 \qquad (18.18)$$

Then, from Equations (2.65) and (2.66), it follows that for times t > 0,
$$h_e(t) = \frac{1}{2}h(t), \qquad h_o(t) = h_e(t) \qquad (18.19)$$
and for times t < 0,
$$h_e(t) = \frac{1}{2}h(-t), \qquad h_o(t) = -h_e(t) \qquad (18.20)$$

From Equations (18.19) and (18.20), we thus obtain
$$h_e(t) = \mathrm{sgn}(t)\,h_o(t) \qquad (18.21)$$

For the frequency response function of any causal system, the real part of the frequency response function comes from the even part of the impulse response, and the imaginary part comes from the odd part of the impulse response, see Section 2.7. This means that if we define the frequency response by a sum of its real and imaginary parts,
$$H(f) = H_R(f) + jH_I(f) \qquad (18.22)$$
then we have
$$H_R(f) = \mathcal{F}\left[h_e(t)\right], \qquad H_I(f) = \mathcal{F}\left[h_o(t)\right] \qquad (18.23)$$

Now, again, multiplication in one domain, for example the time domain as in Equation (18.21), corresponds to convolution in the other domain, here the frequency domain. Using Equation (18.3) together with Equations (18.21) and (18.23), we get that
$$H_R(f) = \mathcal{F}\left[h_e(t)\right] = \mathcal{F}\left[\mathrm{sgn}(t)h_o(t)\right] = H_I(f) * \frac{1}{\pi f} = \int_{-\infty}^{\infty} \frac{H_I(u)}{\pi(f-u)}\,du \qquad (18.24)$$

which is the Hilbert transform of $H_I$. In other words, the real part, $H_R(f)$, of the frequency response function of a causal system equals the Hilbert transform of the imaginary part, $H_I(f)$, of the same FRF, i.e.,
$$H_R(f) = \mathcal{H}\left[H_I(f)\right] \qquad (18.25)$$
Similarly, we can obtain
$$H_I(f) = -\mathcal{H}\left[H_R(f)\right] \qquad (18.26)$$

which, in words, says that the imaginary part of a causal frequency response function equals the Hilbert transform of the real part of the same FRF, with changed sign.


The two statements in Equations (18.25) and (18.26) apply to all causal frequency response functions and are called the Hilbert transform relationships between real and imaginary parts. The main application of Equations (18.25) and (18.26) in noise and vibration analysis is to investigate whether an estimated frequency response function belongs to a causal system. If that is not the case, it is a strong indication that the estimated system is nonlinear (Tomlinson and Kirk 1984). When investigating frequency response functions by means of the Hilbert transform relationships between the real and imaginary parts, it has been suggested that the functions be estimated by use of stepped-sine excitation and not broadband excitation. The reasoning behind this is that the broadband excitation methods "linearize" the FRF of the structure, thus yielding the best linear system between the measured input and output signals. For strong nonlinearities, however, this is not necessary, as they will result in noncausal impulse response functions.


Figure 18.6 Plots for Example 18.2.2. In (a) an FRF estimated from an impact excitation experiment is shown. Its impulse response in (b) shows noncausal behavior at the end of the block, which corresponds to negative time to the left of time zero. In (c) the real part of the FRF in (a) is shown overlaid with the Hilbert transform of the imaginary part, and in (d) the imaginary part of the FRF in (a) is overlaid with the Hilbert transform of the real part (with changed sign).


Example 18.2.2 To illustrate what the Hilbert transform relations between real and imaginary parts of frequency response functions look like for a typical FRF, we will look at an FRF obtained by impact testing as described in Section 13.8. In Figure 18.6(a), an FRF estimated by impact testing is shown. In Figure 18.6(b), the impulse response of the same FRF is shown, and it is apparent that there is some noncausality, which can be seen as a rise at the end of the block. Recall the periodicity of the discrete Fourier transform, which means that the part at the end of the block can be wrapped around to the left of time zero. This is very often seen on experimentally obtained IRFs. In Figure 18.6(c) and (d), the real part and imaginary part of the original FRF and the corresponding real and imaginary parts obtained by the Hilbert transform relationships are plotted. As seen in the figure, in this case there is some discrepancy, particularly between the real part of the measured FRF and the real part created as the Hilbert transform of the imaginary part. End of example.

A potential use of the Hilbert transform, which has, to my best knowledge, never been published, is to "clean up" estimated frequency responses by keeping either the real or the imaginary part and replacing the discarded part by the Hilbert transform of the kept part. This produces the result of a causal system and can potentially be more accurate than using the originally estimated FRF. In Figure 18.7(a), the impulse response from the same FRF used for Example 18.2.2 has been computed after the FRF was processed this way, by discarding the imaginary part and creating a new imaginary part from the real part by using Equation (18.26).


Figure 18.7 Illustration of using Hilbert transform to “clean up” frequency responses. In (a), an impulse response of the FRF used in Example 18.2.2 has been modified by using the real part of the FRF to create the imaginary part by the Hilbert relation in Equation (18.26). In (b), the impulse response using the imaginary part of the original FRF is shown. As apparent from the plots, the result is different depending on which part of the FRF is kept. In this example, keeping the real part of the FRF and creating the imaginary part using the Hilbert transform leads to a more causal impulse response.


In Figure 18.7(b), the impulse response is created by using the same procedure, but instead discarding the real part of the measured FRF. As can be seen in the impulse response in Figure 18.7(a), this can significantly reduce the noncausal behavior at the end of the record. However, as can be seen in Figure 18.7(b), it matters which of the two parts, the real or the imaginary, is discarded. The impulse response in Figure 18.7(b) shows no improvement over the original impulse response in Figure 18.6.
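A minimal sketch of the "clean-up" procedure is given below, assuming the estimated FRF is stored in a variable H as a single-sided spectrum, k = 0...N/2, of an even blocksize N. Keeping the real part and enforcing causality of the corresponding impulse response is equivalent to creating the imaginary part through Equation (18.26); the sketch uses that time-domain route rather than an explicit convolution.

% Keep the real part of H and recreate the imaginary part via causality
N  = 2*(length(H)-1);
Hd = [H(:); conj(H(end-1:-1:2))];     % double-sided FRF
HR = real(Hd);                        % keep the real part only
he = real(ifft(HR));                  % even part of the impulse response
h  = zeros(N,1);                      % rebuild a causal impulse response
h(1)      = he(1);
h(2:N/2)  = 2*he(2:N/2);
h(N/2+1)  = he(N/2+1);
Hclean = fft(h);
Hclean = Hclean(1:N/2+1);             % single-sided again; its imaginary part
                                      % now satisfies Eq. (18.26) by construction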

18.3 Cepstrum Analysis

Cepstrum is an analysis function with many applications in signal processing. Cepstrum analysis is used, for example, for echo removal, for so-called deconvolution (in principle going "backwards" through a filter), and to find harmonics and hidden sidebands in signals; it is also frequently used in speech recognition. The term "cepstrum" is a paraphrase of the word "spectrum" and comes from the original paper (Bogert et al. 1963). Since the cepstrum introduces a time-related domain that is similar, but not equal to, the "delay domain" of the autocorrelation function, it was found necessary to introduce a new terminology in order to avoid confusion. Although most of the suggested terminology has not won common acceptance, a few of the terms are still used. The originally proposed definitions of cepstra (there are several, as we will see below) are usually modified in modern texts, so some care needs to be exercised when going back to the original papers. The most common definition today is to define the cepstrum as the inverse Fourier transform of the logarithm of a spectrum. Several types of cepstra coexist, which are based on various spectra, single-sided and double-sided, etc. The three most common cepstrum functions, at least in noise and vibration analysis (the power cepstrum, the complex cepstrum, and the real cepstrum), will be presented here. In noise and vibration analysis, cepstra are mostly used for diagnostics, particularly on gearboxes (Endo et al. 2009; Randall 1982). Lately, however, the cepstrum has also received some interest for operational modal analysis applications (Gao and Randall 1996a,b; Hanson et al. 2007a,b; Randall and Gao 1994).

18.3.1 Power Cepstrum

The most common cepstrum function used in noise and vibration analysis is the power cepstrum, which is defined as the inverse Fourier transform of the logarithm of an autopower spectrum. In practice, the power cepstrum of a signal x(t), denoted $c_{px}(\tau)$, is calculated as
$$c_{px}(\tau) = \mathcal{F}^{-1}\left[\log\left(S_{xx}\right)\right] \qquad (18.27)$$
where $\mathcal{F}^{-1}$ is the inverse Fourier transform, and $S_{xx}$ is the double-sided autopower spectrum defined by Equation (10.1), or, often, the magnitude squared of the Fourier transform $|X(f)|^2$ of the time signal x(t). Sometimes, the power cepstrum is instead calculated from the single-sided spectrum, which under certain circumstances will give some useful qualities to the cepstrum. Of course, in real life, the Fourier transform is replaced by the FFT.
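On a single data block, Equation (18.27) can be evaluated with a few lines of code, as in the following sketch. It assumes the signal is in x with sampling frequency fs, and takes |X(f)|^2 of one FFT block as the double-sided autopower; the small eps offset is only there to guard against taking the logarithm of an exact zero.

% Power cepstrum of one data block, Eq. (18.27)
X   = fft(x);
Sxx = abs(X).^2;                   % double-sided autopower (single block)
cpx = real(ifft(log(Sxx + eps)));  % power cepstrum; real() removes round-off
tau = (0:length(x)-1)/fs;          % quefrency axis in seconds
plot(tau,abs(cpx))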


The power cepstrum has three important features:
● it is an inverse Fourier transform, which means it finds periodicities in $S_{xx}$;
● it uses $\log(S_{xx})$, which amplifies low levels and compresses high levels. This further enhances the ability of the cepstrum to find periodicities also where the harmonics are low; and
● if the analyzed signal is a composition of an input signal going through a linear system (which many vibration signals are), then the cepstrum can sometimes separate the input spectrum from the linear system frequency response.

(18.28)

(18.29)

which means the cepstrum is a sum of the effect of the linear system and the effect of the input spectrum. If the linear system and the spectrum have different main frequency ranges, then it is possible to filter (lifter) out the effect of the linear system. The time variable of the cepstrum, 𝜏, is called “quefrency,” using the paraphrasing terminology originally proposed by Bogert et al. (1963). This terminology was introduced to stress the fact that, although 𝜏 has the unit of time, it is different than the lag domain of the autocorrelation function Rxx , although often the variable 𝜏 is used for both. In addition to the terms “cepstrum” which is a paraphrase of “spectrum,” and “quefrency” for the time variable which is a paraphrase of “frequency,” in cepstrum analysis it is also common to use the paraphrases “liftering” for “filtering,” and sometimes “short-pass liftering” for “lowpass filtering” and “long-pass liftering” for “highpass filtering,” although the two last paraphrases are rarely used. The cepstrum calculated by Equation (18.27) is, with our definition, a real quantity, with positive and negative values, much like an autocorrelation function. In most cases, however, the magnitude of it is displayed, as there is no information to be gathered from the positive and negative values. The cepstrum is useful for analyzing signals which contain many harmonics, which is often the case in, for example, gearboxes, see Konstantin-Hansen and Herlufsen (2010). We will illustrate its use with an example using data from a milling machine. Example 18.3.1 We will look at an example with an acceleration measured on a milling machine with a four-tooth endmill, running at approximately 1655 RPM, which corresponds to 27.59 Hz. In Figure 18.8(a), the time signal of a small segment of the data is shown. It is clearly seen that the signal is periodic with a period of approximately 9 ms, which corresponds to 1∕(4 ⋅ 27.59) seconds. The reason for this is, of course, that for each revolution, the four teeth grab into the material which causes the fourth harmonic of the RPM to be the main frequency.



Figure 18.8 Plot for Example 18.3.1. In (a) a small part of an acceleration time signal from a measurement on a milling machine during a milling operation is shown. In (b) and (c), the linear spectrum of the time signal is shown with linear and logarithmic amplitude axis, respectively. As can be seen particularly in the latter plot, the vibration spectrum is very complex with many harmonics. In (d), the power cepstrum of the signal is shown in which it can be seen that the highest peak at approximately 0.03625 s corresponds to the RPM of the milling tool. See Example 18.3.1 for a discussion [Courtesy of Prof. Kjell Ahlin].

In Figure 18.8(b) and (c), linear spectra of the time signal are shown with linear y-scale in (b), and with logarithmic y-scale in (c). In the plot with linear y-scale, it is clearly seen that we have harmonics of approximately 110.3 Hz (4 times 27.59), with small peaks at every quarter of these frequencies. The harmonics rise up to approximately 1543 Hz (14th harmonic of 110.3 Hz) and then fall. In Figure 18.8(c), with logarithmic y-scale, the typical complexity of spectra from this type of machines is clearly seen. We have a “forest” of peaks, and the complexity can be startling. This is a good example of a case where the power cepstrum can concentrate the information into a more compressed form, where the interpretation can be easier. In Figure 18.8(d), the power cepstrum of the first 200 ms is shown. The highest peak in the cepstrum is found at approximately 36.25 ms, corresponding to 27.59 Hz, the RPM of the milling tool, which is the fundamental periodic component in the “forest” of peaks seen in Figure 18.8(c). For diagnostic purposes, rather than monitoring the complex spectrum in (c), it might be more efficient to track the level of this peak in the cepstrum. As our purpose here is not to dwell on monitoring and diagnostic methods for milling machines, we leave the example at this. End of example.


18.3.2 Complex Cepstrum

Another common cepstrum function is the complex cepstrum, which is defined as the inverse Fourier transform of the logarithm of the Fourier transform X(f), i.e.,
$$c_c(\tau) = \mathcal{F}^{-1}\left[\log\left(X(f)\right)\right] \qquad (18.30)$$
The complex cepstrum is, despite its name, a real function. This is a result of the fact that if X(f) is the Fourier transform of a real signal, x(t), then if we write
$$X(f) = A(f)e^{j\phi(f)} \qquad (18.31)$$
the logarithm of Equation (18.31) is given by
$$\log X(f) = \log A(f) + j\phi(f) \qquad (18.32)$$

Now, because x(t) is real, we know from Section 2.7.1 that
● A(f) is even,
● log(A(f)) is even,
● $\phi(f)$ is odd.

This proves that log(X(f)) is conjugate symmetric, i.e.,
$$\log X(-f) = \left[\log\left(X(f)\right)\right]^* \qquad (18.33)$$

and thus from basic properties of the Fourier transform, the inverse Fourier transform of Equation (18.31) is a real signal. In order to calculate the complex cepstrum, the phase 𝜙(f ) must be a continuous function, that is the phase has to be unwrapped. This can be accomplished in MATLAB/Octave by the unwrap command. However, because of the requirement of continuous phase, it cannot be used for signals which contain (only) discrete frequency components, or with stationary random signals, for which the phase is random.
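A minimal sketch of the complex cepstrum computation is given below, assuming the signal is in a variable x. The eps offset is only a guard against log of zero; note also that MATLAB/Octave provide a cceps command that in addition removes a linear phase trend before the transform.

% Complex cepstrum, Eq. (18.30), with unwrapped phase
X    = fft(x);
logX = log(abs(X) + eps) + 1j*unwrap(angle(X));   % continuous phase required
cc   = real(ifft(logX));                          % a real-valued sequence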

18.3.3 The Real Cepstrum

The real cepstrum is obtained by setting the phase in Equation (18.32) to zero, i.e., the real cepstrum is
$$c_r(\tau) = \mathcal{F}^{-1}\left[\log\left(|X(f)|\right)\right] \qquad (18.34)$$
which is equal to half the power cepstrum, in cases where the latter is computed on a single Fourier transform. It should be noted that, although the real cepstrum is theoretically real valued (since $\log|X(f)|$ is real and even for a real signal), a direct computation by the inverse FFT may return a complex array with negligible imaginary parts due to round-off.

18.3.4 Inverse Cepstrum

Both the power and the complex cepstrum can be inverse transformed. This is usually done after some alteration of the cepstrum has been performed. Although today this is often referred to as filtering, the originally suggested term, "liftering," is better to avoid confusion. When the inverse cepstrum is to be applied to a power cepstrum as in Equation (18.27), the double-sided autopower spectrum has to be used. The procedure to create an inverse cepstrum is simply to forward Fourier transform the cepstrum, for example
$$\log\left(S_{xx}(f)\right) = \mathrm{FFT}\left[c_p(\tau)\right] \qquad (18.35)$$
As is seen in Equation (18.35), the inverse cepstrum actually produces the logarithm of the autopower spectrum.

18.4 The Envelope Spectrum

The envelope spectrum is a spectrum often used in vibration monitoring, particularly to diagnose rolling element bearings. The idea of the envelope spectrum is to extract the envelope of the time signal before computing the spectrum. This is related to amplitude demodulation and is useful in cases where a vibration phenomenon occurs modulated on top of a constant frequency, e.g., when a ball bearing has a fault which causes a periodic impact with the period of the rotation speed (or related to the rotation speed). To understand the principle of the envelope spectrum, Figure 18.9 shows a zoomed-in part of a time signal which is created by a train of impulses exciting an SDOF system. The envelope of the signal, computed by the Hilbert transform as described in Section 18.2.2, is overlaid on the signal. The envelope spectrum is the spectrum of the envelope signal in Figure 18.9, and clearly this signal has a much lower frequency than the original signal. In fact, the envelope is the demodulated signal, in telecommunication terminology. Incidentally, the time plot in Figure 18.9 is relatively similar to the time plot in Figure 18.8(a). In fact, as we will see in Example 18.4.1, the synthesized signal in Figure 18.9 is a simplified model of the phenomenon behind the vibrations on the milling machine, see also Problem 18.2.


Time [s] Figure 18.9 Example of time signal where amplitude modulation occurs. The signal is synthetically generated by letting a pulse train with approximately 110.1 Hz frequency excite an SDOF system with a natural frequency of 1543 Hz and 10 % relative damping. The time envelope of the signal, computed by the Hilbert transform, is overlaid on the vibration signal (dotted). The envelope is a signal with low frequencies relative to the frequencies of the vibration signal.


To compute envelope spectra, it is generally necessary to bandpass filter the raw vibration signal around the frequencies of interest. Another "trick" often used is to take the square of the envelope prior to computing the spectrum, which makes the envelope spectrum less sensitive to higher-frequency harmonics. We will now illustrate the computation of envelope spectra with an example.

Example 18.4.1 We will look at the envelope spectrum of the milling machine vibration used in Example 18.3.1. Looking at the spectrum in Figure 18.8(b) and (c), we can see that the vibrations peak at approximately 1543 Hz. This is due to a resonance in the machine or workpiece at this frequency. By setting a bandpass filter centered at fc = 1543 Hz and with, say, 200 Hz bandwidth, we will encompass three of the harmonics on each side of the peak at 1543 Hz (because, from Example 18.3.1, we know that the harmonics are spaced approximately 27 Hz apart). We then compute the square of the envelope and compute a spectrum. The following MATLAB/Octave code does all of this.

% Bandpass filter the signal in variable x
fc=1543; B=200;
flo=(fc-B/2)/(fs/2);
fhi=(fc+B/2)/(fs/2);
[b,a]=butter(4,[flo fhi]);
x=filtfilt(b,a,x);
% Compute the envelope squared
e2=abs(hilbert(x)).^2;
[E,f]=alinspec(e2,fs,ahann(8192),1);


200

Frequency [Hz] Figure 18.10 Plot for Example 18.4.1. The plot shows the envelope spectrum of an acceleration signal from a measurement on a milling machine, after applying a bandpass filter with a center frequency of 1543 Hz and 200 Hz bandwidth [Courtesy of Prof. Kjell Ahlin].


In the example code, we have used the command alinspec from the accompanying toolbox, which computes a linear spectrum; remaining commands are standard MATLAB/ Octave (signal processing toolbox) commands. The result of the code above, using the data from the milling machine, is plotted in Figure 18.10, for the first 200 Hz. The higher frequencies will not have any significant frequency content and can be thrown away. The plot clearly shows four peaks, whereof the highest is located at 110.2 Hz, which is the frequency of 4 times the RPM of the milling tool. End of example.

18.5 Creating Random Signals with Known Spectral Density

It is rather common to want to create a random time signal with known properties. It can be for shaker excitation, where the spectrum is requested to compensate for stinger dynamics, for example, or it can be in environmental testing, where a test object should be excited with a particular spectrum. It has already been mentioned, in Section 13.9.4, how to produce pseudo-random noise with a known spectrum. Here, we will limit the method to cases where the signal is short enough so that an inverse FFT can be performed on the entire signal. This is a reasonable limitation today, when an FFT can be performed on several million samples of data in a matter of a second or two. The principle of creating a pseudo-random signal is to create an amplitude spectrum in the frequency domain with the spectrum shape (the square root of the PSD). Then, a random phase is added to each frequency, and an inverse FFT (IFFT) is performed. For the IFFT to work, we must, of course, create a double-sided spectrum prior to the IFFT, using the symmetry properties from Section 2.7. We will illustrate the method with an example using MATLAB/Octave.

Example 18.5.1 Assume we have a PSD in MATLAB/Octave variable Gxx, and a frequency axis in variable f, going from zero Hz to fs/2. This means the length of f and Gxx is N/2 + 1, where N was the blocksize used to compute the PSD. For this example, we let N = 2048 samples. Create a time signal with Gaussian PDF, with the same RMS level as that of the PSD, and with length L > N, in our example 100 · 1024 samples. The first thing we do is to find the sampling frequency, which is fs = 2 max(f), since f has length N/2 + 1. We should also calculate the total RMS level from the PSD. Next, we interpolate the PSD in Gxx up to the length L/2 + 1, since we have a single-sided spectrum so far. The final, double-sided spectrum will then be of length L. Let us show this part of the MATLAB/Octave code before proceeding.

fs=2*f(end);
df=f(2);
% Compute the RMS level
R=sqrt(df*sum(Gxx));
% We need a new freq. axis length
newx=linspace(f(1),f(end),ceil(L/2)+1);


% Interpolate Gxx onto this new x-axis
P=interp1(f,Gxx,newx,'linear','extrap');
P=P(:); % Make sure a column

Here, ceil rounds upwards, in case L would be an odd number. Next, we will compute the amplitudes of the spectrum as the square root of the PSD. We then add a random phase between $-\pi$ and $\pi$ and produce the negative frequencies with the Fourier transform symmetry properties (even amplitude, and odd phase, in this case). Then we compute the IFFT and scale the RMS to our variable R above. This final part is done by the following code. Note that the command rand creates uniformly distributed values between zero and unity, so we make a trick to get values between -0.5 and 0.5.

A=sqrt(P);
phi=2*pi*(rand(ceil(L/2)+1,1)-0.5);
% Create negative frequencies in upper half
% of the block (as FFT/IFFT wants it)
A=[A(1); 0.5*A(2:end); 0.5*A(end-1:-1:2)];
phi=[phi; -phi(end-1:-1:2)];
phi(end)=0; % phi of fs/2 must be real
y=sqrt(2*fs*length(A))*real(ifft((A.*exp(j*phi))));
y=y-mean(y); % Force mean to zero
% Scale y to proper RMS level
y=R/std(y)*y;


–5 0 5 Displacement [m] × 10–4 (b)

Figure 18.11 Plots for Example 18.5.1. In (a), the PSD of a time signal synthesized to provide a known PSD is shown (solid), overlaid by the true PSD (dashed). The difference is only due to the random error in the estimated PSD and the two curves cannot be distinguished from each other on the presented scale. In (b), the probability density of the synthesized time signal is shown (bars), overlaid with a theoretical Gaussian distribution with the same mean and standard deviation (solid). The time signal is obviously Gaussian.


In Figure 18.11, the resulting PSD of the time signal is shown, computed using an original PSD corresponding to the output signal of an SDOF system with a natural frequency of 100 Hz and 1% damping, forced by white noise. It is overlaid by the true PSD, obtained from the RMS level of the input and the magnitude squared of the SDOF frequency response. End of example.

18.6 Identifying Harmonics in Noise

Detecting harmonic signals hidden in random noise is sometimes desired in vibration analysis, as well as in other fields. Two reasons are of special interest: to remove the harmonics prior to some analysis of the random part of the signal, for example for OMA or for bearing diagnostics, or to identify their amplitude and phase, for example to scale an OMA model using the OMAH method, as described in Section 17.4.2. In this section, we will discuss two techniques for detecting harmonics: (i) the three-parameter sine fit method, for a single harmonic in the signal, which is defined in the standard (IEEE 1057: 2017), and (ii) the automated harmonic detection method, able to handle any number of harmonics, which was developed in Berntsen and Brandt (2022). In Section 18.7, we will show how to remove the harmonics once they are known.

18.6.1 The Three-Parameter Sine Fit Method

The three-parameter sine fit method is based on a least squares fit and assumes that there is a single harmonic in the signal with known frequency, so that there are three unknowns: amplitude, phase, and mean value of the harmonic. This may be useful, for example, when analyzing signals after applying harmonic excitation for OMAH, see Section 17.4.2. Here, we will assume that the mean is zero, as we are dealing with vibration signals, which is easily ensured by removing any offset prior to estimating the sines (in the case of OMAH, the sines are those of the excitation force and the responses). If the frequency of the unknown harmonic is not known, the four-parameter sine fit method may be used (Fonseca da Silva et al. 2004; IEEE 1241: 2010). The three-parameter sine fit method has been well investigated lately in, e.g., Händel (2010), Andersson and Händel (2006), Negusse et al. (2014), and Belega and Petri (2016). The method is based on the assumption that the signal is of the general form
$$y(t) = a\cos(\omega t) + b\sin(\omega t) + e(t) \qquad (18.36)$$

where $\omega$ is the frequency of the sine, a and b are the unknown Fourier series coefficients we wish to estimate, and e(t) contains the random part of the signal, and possibly unknown harmonic components. If we want the complex amplitude/phase values, they are readily obtained by $\sqrt{a^2+b^2}\cdot\exp\left(j\arctan(b/a)\right)$. By measuring N samples of the signal $y(n\cdot\Delta t)$ with a sampling frequency of $f_s = 1/\Delta t$, we can define the matrix
$$[B] = \begin{bmatrix} \cos(\omega\cdot 0\Delta t) & \sin(\omega\cdot 0\Delta t) \\ \cos(\omega\cdot 1\Delta t) & \sin(\omega\cdot 1\Delta t) \\ \cos(\omega\cdot 2\Delta t) & \sin(\omega\cdot 2\Delta t) \\ \vdots & \vdots \\ \cos(\omega\cdot (N-1)\Delta t) & \sin(\omega\cdot (N-1)\Delta t) \end{bmatrix} \qquad (18.37)$$


an unknown coefficient vector $\{x\} = \begin{bmatrix} a & b \end{bmatrix}^T$, and the measurement vector
$$\{y\} = \begin{Bmatrix} y(0) \\ y(1) \\ y(2) \\ \vdots \\ y(N-1) \end{Bmatrix} \qquad (18.38)$$

The model in Eq. (18.36) can now be written as
$$[B]\{x\} = \{y\} + \{e\} \qquad (18.39)$$
which can be solved for the estimates $\{\hat{x}\}$ by a least squares solution. Furthermore, the result of the least squares solution can be used to estimate the remaining noise by
$$\{\hat{e}\} = \{y\} - [B]\{\hat{x}\} \qquad (18.40)$$
and the variance of $\{\hat{e}\}$, which we denote $\sigma_{\hat{e}}^2$, can then be readily estimated by
$$\sigma_{\hat{e}}^2 = \frac{\{\hat{e}\}^T\{\hat{e}\}}{N} \qquad (18.41)$$

This variance, and the power of the ideal sine, which is $(\hat{a}^2 + \hat{b}^2)/2$, can now be used to estimate the signal-to-noise ratio, SNR, as
$$\mathrm{SNR} = \frac{\hat{a}^2 + \hat{b}^2}{2\sigma_{\hat{e}}^2} \qquad (18.42)$$

The coefficients $\hat{a}$ and $\hat{b}$ can be shown to be Gaussian with means a and b, respectively, that is, they are unbiased estimates (Händel 2010). Furthermore, the variance of each estimate is
$$\sigma_{\hat{a}}^2 = \sigma_{\hat{b}}^2 = \frac{2\sigma_{\hat{e}}^2}{N} \qquad (18.43)$$
Let us assume that we want to estimate the coefficients a and b with a certain maximum normalized uncertainty, $\varepsilon$. Using a 95 % confidence level, $(\hat{a} + 2\sigma_{\hat{a}})/\hat{a} \leq 1 + \varepsilon$, which means the normalized uncertainty is
$$\varepsilon_a = \frac{2\sigma_{\hat{a}}}{\hat{a}} = \frac{2}{\hat{a}}\sqrt{\frac{2\sigma_{\hat{e}}^2}{N}} \qquad (18.44)$$
by using Eq. (18.43), and similarly for $\varepsilon_b$. It may in some cases be more interesting to look at the amplitude estimate of the sine we apply, which is given by
$$\hat{A} = \sqrt{\hat{a}^2 + \hat{b}^2} \qquad (18.45)$$
This equation is nonlinear, which means that the estimate $\hat{A}$ will be biased. Therefore, it may make more sense to look at the mean square error (MSE) of this estimate, rather than only the variance. In Händel (2010), it is shown that the MSE is given approximately by
$$\mathrm{MSE}\left[\hat{A}\right] \approx \frac{2\sigma_{\hat{e}}^2}{N} \qquad (18.46)$$


that is, the mean square error is approximately equal to the variance of the amplitude estimates given by Eq. (18.43). The approximation in Eq. (18.46) is furthermore a conservative error, so for bad SNR cases, the error is smaller than this value.
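The three-parameter sine fit amounts to a single least squares solution, as in the following minimal sketch. The measured signal y, the sampling frequency fs, and the known harmonic frequency f0 are assumed to exist in the workspace; the phase convention is one common choice.

% Three-parameter sine fit, Eqs. (18.36)-(18.42)
N   = length(y);
t   = (0:N-1)'/fs;
w   = 2*pi*f0;
B   = [cos(w*t) sin(w*t)];          % the matrix in Eq. (18.37)
xh  = B\y(:);                       % least squares solution, {x} = [a; b]
ah  = xh(1); bh = xh(2);
A   = sqrt(ah^2+bh^2);              % amplitude estimate, Eq. (18.45)
phi = atan2(bh,ah);                 % phase estimate (one convention)
eh  = y(:) - B*xh;                  % residual, Eq. (18.40)
SNR = (ah^2+bh^2)/(2*mean(eh.^2));  % Eq. (18.42)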

18.6.2 Periodogram Ratio Detection, PRD

Another method for identifying harmonics in noise is the so-called periodogram ratio detection method, PRD. This method may be used for any signal, with any number of harmonics, and does not need the frequencies to be known a priori. It was presented in Berntsen and Brandt (2022), together with the method for removing the detected harmonics, frequency domain editing, FDE, that we describe in Section 18.7.1. The periodogram ratio detection method utilizes a periodogram in which, for each frequency line, it is determined whether there is a harmonic at that frequency, or if the signal is random. The method may be entirely automated, and once the frequencies of the harmonics are known, they may either be identified using the result of a DFT (see the end of the current section), or removed by the FDE method. We start by assuming that a discrete time signal x(n) of length L samples is a sum of a random part and a harmonic part, i.e.,
$$x(n) = x_r(n) + x_h(n), \quad n = 0, 1, \ldots, L-1 \qquad (18.47)$$

where $x_r(n)$ is a random signal with unknown spectral density, and $x_h$ is a sum of harmonic components with unknown amplitudes, phases, and frequencies. We then define a one-sided periodogram, P(k), by
$$P(k) = \frac{2}{L}|X(k)|^2, \quad k = 1, 2, \ldots, L/2+1 \qquad (18.48)$$
where X(k) is the DFT of x(n). Note that we ignore the zero-frequency value, P(0), since it cannot contain a harmonic. Next, we smooth the periodogram by applying two running averages of length $N_1 < N_2$, for two odd integers, to produce the smoothed periodograms $P_1(k)$ and $P_2(k)$, respectively. Thus, the first smoothed periodogram is
$$P_1(k) = \frac{1}{N_1}\sum_{k-(N_1-1)/2}^{k+(N_1-1)/2} P(k) \qquad (18.49)$$

and the second periodogram, $P_2(k)$, is computed similarly, but using smoothing length $N_2$. We ignore the leftmost and rightmost values of k for which the sum falls outside the definition of P(k). Finally, we define the periodogram ratio, $P_r(k)$, as
$$P_r(k) = \frac{P_1(k)}{P_2(k)} \qquad (18.50)$$

For each frequency line, k, there are two possibilities: either there is a harmonic from $x_h(n)$ at this frequency, or there is not. Let us first assume that there is no harmonic at a discrete frequency $k_r$. Assuming the random part of the signal, $x_r(n)$, is Gaussian, the periodogram as defined by Equation (18.48) has a chi-square distribution, see Bendat and Piersol (2010), with two degrees of freedom, often denoted $\chi_2^2$, since
$$|X(k)|^2 = X_R^2(k) + X_I^2(k) \qquad (18.51)$$


where $X_R$ and $X_I$ are the real and imaginary parts of the DFT of x(n), respectively, which are Gaussian if the time signal is Gaussian. Consequently, the smoothed periodograms will also be chi-square distributed, but with $2N_1$ and $2N_2$ degrees of freedom, respectively. The ratio of two uncorrelated chi-square distributed variables has an F-distribution. But $P_1$ and $P_2$ are not uncorrelated, since $P_2$ includes the values summed in $P_1$, so we have to empirically investigate the probability density of the periodogram ratio, which we will discuss below. Now assume that there is a harmonic at a discrete frequency $k_h$. We also assume that, in the frequency band $[k_h - (N_2-1)/2,\, k_h + (N_2-1)/2]$, the variance of the random signal $x_r$ is negligible compared to the variance of the harmonic signal, and that there is only a single harmonic. These are reasonable assumptions if the time signal is sufficiently long. Then, the first periodogram will be approximately $P_1(k_h) \approx |X(k_h)|^2/N_1$, and the second periodogram will be approximately $P_2(k_h) \approx |X(k_h)|^2/N_2$. Thus, the periodogram ratio, $P_r$, will be
$$P_r(k_h) = \frac{P_1(k_h)}{P_2(k_h)} \approx \frac{N_2}{N_1} \qquad (18.52)$$

The idea behind the PRD method is to select a threshold, $T < N_2/N_1$, so that the probability that $P_r$ exceeds this threshold is small if the signal is random. Thus, if $P_r$ does exceed the threshold, it is likely that there is a harmonic at that frequency. $N_1$ and $N_2$ must be chosen appropriately, and a method for finding a suitable threshold for any values of the smoothing window lengths was described in Berntsen and Brandt (2022). First, a large Gaussian signal x(n) is generated with, say, $10^8$ samples. The periodogram ratio is then calculated with the two smoothing lengths, after which a complementary cumulated probability distribution is computed, i.e., $1 - \mathrm{Prob}[P_r(k) < T]$, as shown in Figure 18.12. From this complementary probability distribution, a suitable value for the threshold T is easily selected. In Berntsen and Brandt (2022), it was recommended to select $N_1 = 1$ and $N_2 = 33$, and by defining a threshold of T = 10, there is only 1 chance in $10^5$ that $P_r(k)$ exceeds the threshold,


Figure 18.12 Complementary cumulated probability distribution $(1 - \mathrm{Prob}[P_r < T])$ of the periodogram ratio $P_r$ of a Gaussian signal, defined by smoothing lengths $N_1 = 1$ and $N_2 = 33$. As may be seen, choosing a threshold of T = 10 will result in less than 1 value in $10^5$ for which $P_r > T$ if the signal is Gaussian. If the signal is harmonic at a frequency, the value will be $P_r = N_2/N_1 = 33$ in this case.


i.e., that $P_r(k) > T$, if the signal is random at the frequency line k. This is easily seen in Figure 18.12, which is calculated for the same smoothing window lengths. The value of 1 chance in $10^5$ was selected based on the assumption that approximately $10^5$ samples is a common data size, for which there will only be a single or a few values that are incorrectly detected as harmonics. For other data sizes, different variable values may be chosen. The PRD method is attractive since it "proves," for each frequency, whether there is a harmonic or the signal is purely random. For the method to be successful, there must not be too much smearing in the DFT result in the calculation of the periodogram, because this will result in a periodogram ratio $P_r < N_2/N_1$. The distance between the threshold, T, and the smoothing ratio $N_2/N_1$ should therefore be as large as possible. On the other hand, too long smoothing windows increase the risk of there being more than a single harmonic within the window. The values recommended above have proven to work in many cases, but in some cases where variation in the harmonic frequencies is expected due to, e.g., variable speed of engines, a larger ratio $N_2/N_1$ may be used, with an appropriately changed threshold. A better alternative, however, is to resample the data with the technique described in Chapter 12, after which the PRD method may be applied to the resampled signal in the angle domain. Finally, it should be mentioned that once the frequencies are known, if the Fourier components or amplitudes and phases of the harmonics are desired, this may easily be accomplished by computing a DFT of the signal and extracting the desired frequencies from the DFT. For best accuracy, the DFT should be computed using a flattop window, as described in Chapter 9. It should also be noted that the imaginary part of the DFT has a minus sign compared to the Fourier components as defined, for example, by Equation (18.36).
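The PRD computation itself is only a few lines, as in the following sketch with the recommended values $N_1$ = 1, $N_2$ = 33 and T = 10; the signal is assumed to be in x with sampling frequency fs, and movmean (or any centered running average) is used for the smoothing.

% Sketch of periodogram ratio detection, Eqs. (18.48)-(18.50)
L   = length(x);
X   = fft(x);
P   = (2/L)*abs(X(2:floor(L/2)+1)).^2;   % one-sided periodogram, k = 1...L/2
N1  = 1; N2 = 33; T = 10;
P1  = movmean(P,N1);                     % with N1 = 1 this equals P
P2  = movmean(P,N2);
Pr  = P1./P2;                            % periodogram ratio
m   = (N2-1)/2;                          % discard edges where the long window
Pr([1:m end-m+1:end]) = 0;               % falls outside the periodogram
kh  = find(Pr > T);                      % detected harmonic lines
fh  = kh*fs/L;                           % corresponding frequencies in Hz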

18.7 Harmonic Removal

In some applications of vibration analysis, harmonics are unwanted, although they may be present in the signals. An example is operational modal analysis, OMA, which, as discussed in Chapter 17, is based on the assumption of random loads. Another example is bearing diagnostics, where the envelope spectrum described in Section 18.4 may be contaminated by harmonic components, so that a peak in the envelope spectrum is not due to a bearing fault but due to a harmonic. In such cases, techniques for removing harmonics prior to the analysis may be applied. There are two types of methods that have proven suitable for this purpose, which we will describe here. The first is the frequency domain editing, FDE, method in Section 18.7.1, which may be combined with the PRD method from Section 18.6.2. The second type comprises cepstrum-based methods, of which there are two variants that we describe in Section 18.7.2.

18.7.1 Frequency Domain Editing, FDE

The frequency domain editing method, FDE, is a relatively simple method, based on linear interpolation of a DFT of the signal in which harmonics should be removed. It was described and applied in Brandt (2019), although it seems to have been briefly mentioned already in Randall et al. (2011), without much elaboration.


The method assumes that the frequencies of all harmonics are known, either a priori, or by using, for example, the PRD method. When combined with the PRD method into an automated procedure, it was named the “automated frequency domain editing,” AFDE, method in Berntsen and Brandt (2022).

To apply the FDE method to a discrete time signal x(n), at a discrete frequency k1, we first compute the DFT of the signal, X(k) = XR(k) + jXI(k), where XR and XI are the real and imaginary parts of the DFT, respectively. We then define a number of frequency lines, Nk, on each side of k1 to use for the interpolation, since the DFT in many cases contains some leakage. For the real part of X(k), a straight line is created from XR(k1 − Nk) to XR(k1 + Nk), and similarly for the imaginary part. Suitable values of Nk may be 1 to 3, but if smearing is expected, the method may be made more robust by increasing Nk, at the expense of interfering more with the signal being edited. It should be noted that for vibration signals from rotating machinery with variable speed, the editing as well as the detection should be made in the angle domain, after resampling the signal as described in Chapter 12, which is readily accomplished provided the instantaneous RPM is known, for example from a tachometer.

Finally, it should be mentioned that the PRD, FDE, and AFDE methods may be readily integrated with the framework for signal processing described and recommended in Section 10.6. A result of applying the AFDE method to a simulated 3DOF system excited by a random force, and with a harmonic close to the first natural frequency, is presented in Example 18.7.1 in Section 18.7.2.
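A minimal sketch of the FDE interpolation step for a single harmonic at the discrete frequency line k1 (1-based MATLAB index) could look as follows. The variable names and the mirroring of the edit to the negative frequency lines are assumptions for this sketch; this is not the ABRAVIBE implementation.

% Minimal sketch of frequency domain editing (FDE) of one harmonic at line k1;
% assumes k1 is not close to DC or to the Nyquist line.
x  = x(:);                      % time signal as a column vector
N  = length(x);
Nk = 3;                         % number of lines on each side used for the interpolation
X  = fft(x);
kL = k1 - Nk; kR = k1 + Nk;     % left and right support lines
k  = (kL:kR)';
w  = (k - kL)/(kR - kL);        % linear interpolation weights, 0...1
% Interpolate the real and imaginary parts separately across the harmonic
X(k) = real(X(kL)) + w*(real(X(kR)) - real(X(kL))) ...
     + 1i*(imag(X(kL)) + w*(imag(X(kR)) - imag(X(kL))));
X(N - k + 2) = conj(X(k));      % mirror the edit onto the negative frequency lines
y  = real(ifft(X));             % edited time signal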

18.7.2 Cepstrum-Based Harmonic Removal Methods

It was recently proposed to remove harmonics in signals for both OMA and bearing diagnostics by editing a real cepstrum (Randall and Sawalhi 2011). Unlike the complex cepstrum which, as mentioned in Section 18.3, may only be applied to signals with continuous phase, the method based on editing the real cepstrum works on any signal. The principle is schematically illustrated in Figure 18.13. The method uses a real cepstrum as defined in Section 18.3.3, which is edited in the quefrency domain, after which it is brought back to the frequency domain as an edited log amplitude spectrum. It is then combined with the phase of the original spectrum, and the exponential is taken, turning it into a complex spectrum, which is finally inverse Fourier transformed into the edited signal. The editing of the real cepstrum may be of two kinds. For bearing diagnostics, Randall recommends removing harmonics by notching the quefrencies that represent harmonic families (setting them to zero or to the average of the surrounding quefrency values). For OMA, an exponential window is instead applied to the real cepstrum. The reason for this is that for most signals, the modal content is located at low quefrency values, whereas the families of harmonics typically occur at high quefrencies.

Example 18.7.1 To illustrate the removal of harmonics, we create a response signal by exciting a 3DOF system by a random force in DOF 1 and create the response in the same DOF, using the forced response algorithm described in Section 19.2.3. We then add a harmonic close to the first natural frequency, with the same RMS level as the random response. The natural frequencies are 1, 2, and 2.5 Hz, and the harmonic has a frequency of 0.98 Hz.



Figure 18.13 Schematic illustration of the cepstral editing principle [Brandt 2019/With permission of Elsevier].


Subsequently, we apply the AFDE method from Section 18.7.1 by combining the PRD method with the FDE method. We also apply the cepstrum editing method described in the current section, using an exponential window with an end value of 0.001. The results of these operations are plotted in Figure 18.14(a) and (b), respectively, using the spectral densities of the original signal and the signals with the harmonic removed.


Figure 18.14 Power spectral densities of a signal before and after harmonic removal with the AFDE method and cepstrum editing, respectively. In (a) the result of applying the AFDE method, and in (b) the result of applying cepstrum editing, is shown with solid lines, whereas the original signal is plotted with dashed lines. See Example 18.7.1 for details.


The cepstrum editing changes the scale of the signal, so for comparison we have scaled the spectral density of the signal after cepstrum editing so that the peak at the first natural frequency is equal to the peak of the original signal. As can be seen in Figure 18.14, both methods remove the harmonic. Whereas the AFDE method does not affect the spectrum of the signal at frequencies other than those close to the harmonic, the cepstrum editing alters the spectrum more, but the modal properties are little affected. If a very strong exponential window is used, it will add damping, in the same way as the exponential window applied to impulse excitation discussed in Section 13.8.4, and it can be compensated for in the same way. See Randall (2021) for details. End of example.
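To make the cepstrum editing principle in Figure 18.13 concrete, a minimal sketch of the exponential window variant could look as follows. The variable names and the way the window decay is set from its end value are assumptions for this sketch; it is a simplified illustration, not the implementation used to produce Figure 18.14.

% Minimal sketch of real cepstrum editing with an exponential window.
x    = x(:);
N    = length(x);
X    = fft(x);
logA = log(abs(X) + eps);             % log amplitude spectrum
phs  = angle(X);                      % phase of the original spectrum (kept unchanged)
c    = real(ifft(logA));              % real cepstrum
endVal = 0.001;                       % window end value, as in Example 18.7.1
tau  = -(N/2 - 1)/log(endVal);        % decay such that the window reaches endVal at N/2
q    = min((0:N-1)', N - (0:N-1)');   % symmetric quefrency index (two-sided cepstrum)
cEd  = c.*exp(-q/tau);                % exponential window applied to the real cepstrum
logAEd = real(fft(cEd));              % edited log amplitude spectrum
y    = real(ifft(exp(logAEd + 1i*phs)));  % recombine with the original phase

As noted above, this operation changes the overall scale of the signal, which is why the spectrum after cepstrum editing in Figure 18.14(b) was rescaled before the comparison.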

18.8 Chapter Summary

In this chapter, we have presented some common signal analysis tools which have not found their proper place in other chapters, but which are important to know about.

The SRS is an important tool for comparing various vibration environments with respect to how harmful they can be. It is also commonly used as a tool in environmental engineering to compare a test specification with a real-life vibration environment. The SRS is based on the assumption that the worst case which can happen is that a structure mounted at the point where the environment is measured has a particular resonance frequency and damping. The SRS then tells how large a maximum acceleration level (if it is an absolute acceleration SRS) the structure will experience.

The Hilbert transform has two main applications in noise and vibration analysis: (i) it is used to compute the envelope of an oscillating function, for example a cross-correlation function; and (ii) it is used to relate the real and imaginary parts of a frequency response function. It was shown how the Hilbert transform in the latter application can also be used to “clean up” an FRF which exhibits noncausal behavior, i.e., which has a corresponding impulse response which is not zero for negative time.

We then presented the cepstrum, which is the inverse FFT of the logarithm of a spectrum. The cepstrum measures the periodicity in the spectrum, which is useful in cases where the spectrum consists of a number of harmonics of some periodic phenomenon. Cepstra are commonly used, for example, for diagnostics of gearbox vibrations.

The envelope spectrum is another common tool, particularly used for bearing diagnostics. The principle of the envelope spectrum is to bandpass filter a time signal around an important “carrier frequency,” for example the rotation speed or a multiple of this frequency. The envelope of the bandpass-filtered signal is then computed using the Hilbert transform, and the spectrum of this envelope is computed. This essentially works as amplitude demodulation, and the resulting envelope spectrum is a cleaner version of the original spectrum, with a limited number of spectral peaks corresponding to the carrier frequency and the spectral peaks nearest to it.


We also showed how to produce random noise with Gaussian probability density and a given PSD. The procedure is to create a pseudo-random signal by setting the amplitudes of each sine in the frequency domain and giving each frequency a random phase between −𝜋 and 𝜋 radians. The resulting time signal was shown by an example to have the correct PSD and PDF.

The new periodogram ratio method for identifying harmonics in a signal was described. The method may also be used for removing harmonics automatically, if combined with frequency domain editing, as we described. We also illustrated a new method for removing harmonics using an exponential window on a real cepstrum.

18.9 Problems

Many of the following problems are supported by the accompanying ABRAVIBE toolbox for MATLAB/Octave and further examples which can be downloaded with the toolbox. If you have not already done so, please read Section 1.6. ABRAVIBE is completely free and can be downloaded from www.abravibe.com, together with example files and other material complementing this book, including a solutions manual for all book problems.

Problem 18.1 Assume a sensitive electronics box which is to sit in the engine compartment of a car. There are two potential locations for mounting the box. The first place, on top of the engine, has its worst vibration levels when the engine is running at 2400 RPM. Then, the dominating vibrations are orders 2, 4, and 6, with vibration levels of 2, 1, and 0.5 g, respectively, in the location where the box can be mounted. The other potential place is on the chassis, where the worst vibrations occur as shocks of 20 ms duration and 20 g peak levels. The shocks can be modeled as half sine pulses. Compute the shock response spectra of both vibrations and decide which location is least harmful to the electronics box, assuming there are no long-term (fatigue) effects, but that damage will occur instantly. (Otherwise, of course, we would have to take the statistics of the number of pulses, etc., into account.)

Problem 18.2 Assume a rotating machine has a resonance at 400 Hz, with 5% damping. The machine is operating at 1200 RPM and a fault occurs in the drive shaft, producing one pulse per revolution. Use MATLAB/Octave to produce a simulated acceleration signal, if each pulse is 100 N and the machine can be modeled as an SDOF system with a mass of 200 kg. Use a sampling frequency of 10 000 Hz and 5 seconds of data. Compute the envelope spectrum, using a bandpass filter with a center frequency of 400 Hz, and try different bandwidths.

Problem 18.3 Use the time signal from Problem 18.2 and compute a power cepstrum.

Problem 18.4 Compute a time signal with a frequency range from 0 to 1000 Hz and a PSD which is constant between 0 and 400 Hz, then 10 times higher from 400 to 600 Hz, and then at the same level as between 0 and 400 Hz up to 1000 Hz. The total RMS of the signal should be 20 g, and the signal should be Gaussian. After generating the signal, verify its properties by computing a PSD and a PDF.


References

Ahlin K 2006 Comparison of test specifications and measured field data. Sound and Vibration 40(9), 22–25.
Andersson T and Händel P 2006 IEEE Standard 1057, Cramer–Rao bound and the parsimony principle. IEEE Transactions on Instrumentation and Measurement 55(1), 44–53.
Belega D and Petri D 2016 Accuracy analysis of the sine-wave parameters estimation by means of the windowed three-parameter sine-fit algorithm. Digital Signal Processing: A Review Journal 50, 12–23.
Bendat J and Piersol AG 2000 Random Data: Analysis and Measurement Procedures 3rd edn. Wiley Interscience.
Bendat J and Piersol AG 2010 Random Data: Analysis and Measurement Procedures 4th edn. Wiley Interscience.
Berntsen J and Brandt A 2022 Periodogram ratio based automatic detection and removal of harmonics in time or angle domain. Mechanical Systems and Signal Processing 165, 108310.
Bogert B, Healy M and Tukey J 1963 The quefrency alanysis of time series for echoes: cepstrum, pseudo-autocovariance, cross-cepstrum and saphe cracking. In Proceedings of Symposium on Time Series Analysis (ed. Rosenblatt M), pp. 209–243.
Brandt A 2019 A signal processing framework for operational modal analysis in time and frequency domain. Mechanical Systems and Signal Processing 115, 380–393.
Endo H, Randall RB and Gosselin C 2009 Differential diagnosis of spall vs. cracks in the gear tooth fillet region: experimental validation. Mechanical Systems and Signal Processing 23(3), 636–651.
Fonseca da Silva M, Ramos PM and Serra A 2004 A new four parameter sine fitting technique. Measurement 35(2), 131–137.
Gaberson HA 2003 Using the velocity shock spectrum to predict shock damage. Sound and Vibration 37(9), 5–6.
Gaberson HA, Pal D and Chapler RS 2000 Classification of violent environments that cause equipment failure. Sound and Vibration 34(5), 16–23.
Gao Y and Randall RB 1996a Determination of frequency response functions from response measurements. 1. Extraction of poles and zeros from response cepstra. Mechanical Systems and Signal Processing 10(3), 293–317.
Gao Y and Randall RB 1996b Determination of frequency response functions from response measurements. 2. Regeneration of frequency response functions from poles and zeros. Mechanical Systems and Signal Processing 10(3), 319–340.
Greenfield J 1977 Dealing with the shock environment using the shock response spectrum analysis. Journal of the Society of Environmental Engineers (9), 3–15.
Händel P 2010 Amplitude estimation using IEEE-STD-1057 three-parameter sine wave fit: statistical distribution, bias and variance. Measurement 43(6), 766–770.
Hanson D, Randall RB, Antoni J, Thompson DJ, Waters TP and Ford RAJ 2007a Cyclostationarity and the cepstrum for operational modal analysis of MIMO systems - Part I: Modal parameter identification. Mechanical Systems and Signal Processing 21(6), 2441–2458.
Hanson D, Randall RB, Antoni J, Waters TP, Thompson DJ and Ford RAJ 2007b Cyclostationarity and the cepstrum for operational modal analysis of MIMO systems - Part II: Obtaining scaled mode shapes through finite element model updating. Mechanical Systems and Signal Processing 21(6), 2459–2473.
Henderson GR and Piersol AG 2003 Evaluating vibration environments using the shock response spectrum. Sound and Vibration 37(4), 18–21.
Himmelblau H, Piersol AG, Wise JH and Grundvig MR 1993 Handbook for Dynamic Data Acquisition and Analysis. Institute of Environmental Sciences and Technology, Mount Prospect, Illinois.
IEEE 1057 2017 Standard for digitizing waveform recorders.
IEEE 1241 2010 Standard for terminology and test methods for analog-to-digital converters.
ISO 18431-4 2007 Mechanical vibration and shock – signal processing – Part 4: Shock spectrum analysis.
Konstantin-Hansen H and Herlufsen H 2010 Envelope and cepstrum analyses for machinery fault identification. Sound and Vibration 44(5), 10–12.
Lalanne C 2002 Mechanical Vibration & Shock – Specification Development, Volume 5. CRC Press.
Negusse S, Händel P and Zetterberg P 2014 IEEE-STD-1057 three parameter sine wave fit for SNR estimation: performance analysis and alternative estimators. IEEE Transactions on Instrumentation and Measurement 63(6), 1514–1523.
Randall RB 1982 Cepstrum analysis and gearbox fault-diagnosis. Maintenance Management International 3(3), 183–208.
Randall RB 2021 Vibration-Based Condition Monitoring 2nd edn. John Wiley and Sons.
Randall RB and Gao Y 1994 Extraction of modal parameters from the response power cepstrum. Journal of Sound and Vibration 176(2), 179–193.
Randall RB and Sawalhi N 2011 A new method for separating discrete components from a signal. Sound and Vibration 45(5), 6–9.
Randall RB, Sawalhi N and Coats M 2011 A comparison of methods for separation of deterministic and random signals. International Journal of Condition Monitoring 1(1), 11–19.
Smallwood D 1981 An improved recursive formula for calculating shock response spectra. Shock and Vibration Bulletin 2(51), 4–10.
Tomlinson G and Kirk N 1984 Modal analysis and identification of structural non-linearity. Proceedings of the 2nd International Conference on Recent Advances in Structural Dynamics, University of Southampton, pp. 495–510.


19 Practical Vibration Measurements and Analysis

In this last chapter, we will apply a number of the tools described in the book to illustrate how they may be used for practical vibration analysis. For this purpose, we will use both synthesized data (i.e., data generated by using some numerical model) and real data. The intention with this chapter is not to provide a deep insight into how vibration problems are solved; this is outside the scope of this book, because such approaches are very application-dependent and the range of areas where the methods in this book may be used is very wide. Instead, the intention is to provide some examples of how the methods presented in the previous chapters may be applied and to show some good practices that may (or even should) be followed when applying these techniques. Hopefully, the reader will find it useful to see how the techniques presented earlier in the book may be applied to both synthesized data and real data.

We start by describing how forced response and operational deflection shapes, ODS, may be applied for simulation and experimental analysis, respectively. These two techniques are closely related; the former is a simulation tool to generate responses, the latter is an experimental tool based on measured forced responses. Later in the chapter, we present some results of spectrum analysis and discuss some common considerations that need to be taken into account when analyzing vibrations in general. Finally, we demonstrate experimental and operational modal analysis on simulated and real cases.

The accompanying toolbox for this book, ABRAVIBE (see Brandt (2013)), provides all the tools to repeat the analyses presented here. The toolbox may be downloaded from www.abravibe.com, where many of the examples in this book are also available.

19.1 Introduction to a Plexiglas Plate

For many of the examples in this chapter, we will use a model of a Plexiglas (PMMA) plate that was originally proposed by Smallwood and Gregory (1986) as a test for modal analysis capabilities. We will also use experimental results from an actual Plexiglas plate. Smallwood and Gregory proposed this plate in the early days of commercial systems for experimental modal analysis as a means to provide a simple example by which the results of different systems could be compared. Also, Smallwood and Gregory pointed out that the plate would “provide new users with a means to evaluate their newly acquired experimental and analytical skills.”


We will use the plate for several reasons. First of all, we will use the model of the plate to produce synthesized data (signals provided through simulation of a known model), data for which we know the true values of, for example, the modal parameters, so that we can evaluate the performance of the numerical methods for modal parameter estimation (MPE). We will also use experimental measurements from an actual plate to illustrate both EMA and OMA. I have for a long time advocated the use of simple examples such as this plate for laboratory experiments for students, for example in Brandt et al. (2014), where I together with coauthors presented a simple example to be used for teaching finite element model validation and calibration. In Orlowitz and Brandt (2017), a carefully designed experiment was presented, using a Plexiglas plate to illustrate that under the same boundary conditions, EMA and OMA produce similar results. In the early days of OMA, it had sometimes been questioned whether the technique actually gave correct results.

The plate we are going to use measures 533.4-by-321.1-by-20 mm. It differs slightly from the one designed by Smallwood, in that the thickness was originally specified as 21.6 mm, but 20 mm is a standard thickness (at least in Europe), making the plate considerably cheaper to produce. A difference caused by the slight change of the dimensions is that the two first natural frequencies, rather than being exactly equal, as was Smallwood's intention, separate slightly in frequency. As we will see, this does not change the challenge of separating the two modes by MPE, and the spectra and FRFs around the first two natural frequencies still exhibit only a single peak, at least with a reasonable frequency increment.

For the MPE experiments, the plate is divided into a 7-by-5 grid of measurement points as illustrated in Figure 19.1, which was also proposed by Smallwood, who showed that using these 35 measurement DOFs (only the direction normal to the plane) produces well-defined modes for the first 10 modes of the plate, which are located between approximately 145 and 1050 Hz. The mode shapes are shown in Figure 19.2.


Figure 19.1 Experimental 7-by-5 grid for measurements on the Plexiglas plate used in many of the examples in this chapter. Note the coordinate system, which is right-handed, i.e., the z-axis is out of plane toward the viewer; the origin is in node 1.



Figure 19.2 The 10 first modes of the Plexiglas plate computed by CALFEM and reduced to the test DOFs. Eigenfrequencies are found in Table 19.1.

In Brandt et al. (2014), a model of the Plexiglas plate was presented using the free MATLAB toolbox CALFEM (see Austrell et al. (2004)). The model uses first-order shell elements, which were presented in Sturesson et al. (2013). A convergence analysis showed that an 11-by-7 mesh of first-order shell elements produces sufficiently accurate results, and the eigenfrequencies using this mesh size are presented in Table 19.1. The measured thickness of the experimental plate is actually 19.65 mm, as mentioned in Brandt et al. (2014).


Table 19.1 Eigenfrequencies of the FE model using an 11-by-7 mesh of first-order shell elements to model the Plexiglas plate.

Mode #    Eigenfrequency [Hz]    Mode description
1         145.3                  First bending, x
2         150.3                  First torsion
3         340.5                  Second torsion
4         401.5                  Second bending, x
5         413.4                  First bending, y
6         522.0                  Higher-order torsion
7         616.4                  Higher-order torsion
8         751.7                  Higher-order torsion
9         827.5                  Third bending, x
10        1005.7                 Higher-order torsion

19.2 Forced Response Simulation

Computing forced response is a common task in simulation software such as finite element analysis, FEA. Forced response may be computed in the frequency domain for any type of input, but as the spectra for periodic, random, and transient signals are different, we may also want to compute the forced response differently depending on the type of input. For periodic or random signals, steady-state forced response can be computed very efficiently in the frequency domain for desired input force spectra, as described in Sections 19.2.1 and 19.2.2, respectively. For transients, the solution most often has to be computed in the time domain, and of course, this is also possible for the periodic and random cases. This will be described in Section 19.2.3.

19.2.1 Frequency Domain Forced Response for Periodic Inputs

For harmonic (periodic) forces, the steady-state forced response may be simply calculated in the frequency domain. The relationship between force and response in the frequency domain, with our definitions, is

[H(f)]{F(f)} = {U(f)},    (19.1)

where [H(f)] is the frequency response matrix, size No × Ni, where Ni is the number of forces (inputs) and No is the number of responses (outputs). Furthermore, the column vector {F(f)} contains the spectra of the forces acting on the structure, and the column vector {U(f)} contains the response spectra. Now, assuming forces defined by amplitudes and phase angles in the force vector {F(f)}, the response at each frequency is simply the result of the multiplication defined by Equation (19.1). The frequency responses in [H(f)] may be easily computed either from the mass, damping, and stiffness matrices as described in Section 6.4.1, or by using the modal model (where undamped natural frequencies, damping ratios, and mode shapes are known) and the procedure in Section 6.4.2.


For a single-input force in DOF q, a very simple solution is, furthermore, to produce linear, RMS-scaled spectra by simply multiplying the RMS level of the force, Fq(f), by the FRF magnitude |Hpq(f)|, to produce the steady-state RMS response Up(f) in the units of the FRF (if the FRF is, for example, an accelerance, the resulting response is in units of acceleration).
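As a small illustration of Equation (19.1), the response vector at each frequency is obtained by a plain matrix multiplication. The sketch below assumes that the FRF matrix H has been computed with size N-by-No-by-Ni (frequencies, outputs, inputs) and that F is an N-by-Ni matrix of complex force spectra; the variable names are assumptions for the sketch only.

% Minimal sketch of Equation (19.1), looping over the frequency lines.
[Nf, No, Ni] = size(H);
U = zeros(Nf, No);
for kk = 1:Nf
    U(kk,:) = (reshape(H(kk,:,:), No, Ni)*F(kk,:).').';   % {U(f)} = [H(f)]{F(f)}
end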

19.2.2 Frequency Domain Forced Response for Random Inputs

For random loads, the procedure is somewhat different from the procedure for periodic inputs in Section 19.2.1, since the spectrum type should be in power spectral density form. We know, however, from Equation (14.18) that the output spectral density matrix of a MIMO system is [Gyy(f)] = [H(f)][Gxx(f)][H(f)]^H. Assuming uncorrelated spectral densities of the inputs in the vector {F(f)} (for each frequency), the input spectral density matrix is easily calculated as [Gxx(f)] = E[{F(f)}{F(f)}^H], as described in Section 10.8. From this, the output spectral density matrix [Gyy(f)] is then easily calculated.
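A sketch of this computation at a single frequency line, with assumed variable names, could be:

% Minimal sketch of the output spectral density matrix at one frequency line.
% H is the No-by-Ni FRF matrix and GFF an Ni-by-1 vector of the (uncorrelated)
% input force spectral densities at this line.
Gxx = diag(GFF);              % input spectral density matrix (diagonal for uncorrelated inputs)
Gyy = H*Gxx*H';               % [Gyy] = [H][Gxx][H]^H; note that ' is the Hermitian transpose
PSDout = real(diag(Gyy));     % autospectral densities of the outputs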

19.2.3 Time Domain Computation of Forced Response for Any Inputs

In order to include the transient part of the solution, a time domain procedure has to be used, although this may be computed in the frequency domain (see Section 19.2.3.1), since convolution in the time domain corresponds to multiplication in the frequency domain. This may also be desired for producing data for trying out signal processing procedures or modal parameter estimation, as we will do later in this chapter. There are many ways of solving the forced response in the time domain, the most common class of methods being the so-called ordinary differential equation (ODE) solvers, for example using the Runge-Kutta method. These methods are attractive because of their generality for solving any type of differential equations, linear or nonlinear. However, there are some drawbacks, the main ones being risk of instability, difficulties choosing a proper time step size, and computational inefficiency. Many techniques are presented in standard textbooks such as (Craig and Kurdila 2006; Inman 2007; Rao 2003).

19.2.3.1 Time Domain Response by Frequency Domain Computation

In Section 9.3.14, we discussed that the time response may be computed in the frequency domain and inverse transformed to the time domain. This process produces an exact convolution with the impulse response in the time domain, provided that zero padding is used before computing the FFT, because otherwise the result of the FFT is a circular convolution. This is a very straightforward and fast method to produce time domain outputs, provided that the time data are short enough so that the FFT may be computed. In today's computers, this can be done for signals several million samples long, so for postprocessing measured signals this is usually the best option.

We illustrate frequency domain time response with an example of how to calculate the forced response of an SDOF system to a sinusoidal input force. We know the result of this from Section 5.2.5, where we discussed the beating phenomenon. Here, we will use an input signal that does not create beating.


It should perhaps be noted that, unlike general frequency domain signal processing such as integration, differentiation, and filtering, as described in Section 9.3.14, which require long signals to perform well, forced response computation by frequency domain multiplication produces the exact convolution and is thus error-free also for short signals. The only limitation is that the sampling theorem needs to be fulfilled.

Example 19.2.1 The following MATLAB code computes the forced response of an SDOF system with a mass m = 1 kg, natural frequency fr = 100 Hz, and damping ratio 𝜁 = 0.01, to the input force F(t) = sin(2𝜋 ⋅ 10.2 ⋅ t), using a sampling frequency of fs = 1000 Hz to have a response with high resolution; we produce one second of data.

% Define SDOF system
z=0.01;                 % Damping
wn=2*pi*100;            % Natural frequency in rad/s
m=1;
k=m*wn^2;
c=2*z*sqrt(m*k);
% ...and parameters for simulation
fs=1e3;                 % Sampling frequency
f0=10.2;                % sine frequency
T=1;                    % Length of data in sec.
L=T*fs;                 % Length of time data in samples; T secs
% Create sine input
t=makexaxis((1:L)',1/fs);
x=sin(2*pi*f0*t);
X=fft(x,2*L);           % FFT with zero padding!
% Create FRF of SDOF system for positive frequencies
w=2*pi*(0:fs/(2*L):fs/2)';
H=1/m./(wn.^2-w.^2+j*2*z*w*wn);
% Create negative frequencies in upper part of H
% Note that all values except first and last are mirrored!
H=[H;conj(H(end-1:-1:2))];
y=real(ifft(X.*H));     % Time domain solution
y=y(1:L);               % Discard zero padding part

The example should be repeated without zero padding. Note that you also have to change the variable w accordingly. Plots of this calculation with and without zero padding are shown in Figure 19.3.

19.2.3.2 Time Domain Response by Digital Filters

Using digital filter theory, Ahlin et al. (2006) have described an accurate and fast time domain method using simple digital filters. The method is not unique; equivalent models can be formulated using state-space techniques or so-called autoregressive moving average (ARMA) models, see, for example, Kozin and Natke (1986). A newer paper (Jelicic et al. 2021) also describes further developments of accurate and fast computation of forced response for all types of signals.


Figure 19.3 Results of forced response calculation for Example 19.2.1. In (a), the forced response is calculated with zero padding, and thus shows the correct response of the SDOF system. In (b), the result without zero padding is shown. The plots illustrate how important it is to use zero padding when calculating forced response in the frequency domain.

The formulation of the method by Ahlin is, however, attractive in its simplicity and transparency, and it has proven to have superior dynamic range and speed compared to, e.g., ODE-based methods (Brandt and Ahlin 2003). It will therefore be presented here as an example of an accurate and fast way of producing forced response time data, for example when data are too long to calculate an FFT as suggested in Section 19.2.3.1. Such data are essential in many method development cases where data from known mechanical systems are needed to verify, for example, a signal processing method or an experimental modal analysis parameter extraction method. The example of beating in the forced response of an SDOF system in Figure 5.4, for example, was computed by this method. Many of the examples in this book have also been produced using data simulated with the method described here.

To keep the presentation here simple, we will restrict the analysis to linear mechanical systems, although Ahlin et al. (2006) have also shown how the method can be extended to nonlinear systems. That is, however, beyond the scope of this book. The digital filter method is based on modal superposition, which we discussed in Section 6.4.2, i.e., the equation for the frequency response between points p and q, expressed as

Hpq(j𝜔) = ∑r=1..N [ Apqr/(j𝜔 − sr) + A*pqr/(j𝜔 − s*r) ],    (19.2)

where the residues, Apqr, if we assume scaling to unity modal A for each mode r, only depend on the mode shape coefficients in the two points p and q,

Apqr = 𝜓pr 𝜓qr,    (19.3)


which should be familiar by now. Using modal superposition for the simulation has some obvious advantages that we can conclude from Chapter 6, namely that

● we can use known mass, damping, and stiffness matrices to obtain the poles and residues;
● we can use known mass and stiffness matrices to obtain undamped poles and mode shapes, and then add modal (viscous) damping, to obtain the complex poles;
● we can use the modal model in Equation (19.2) directly, whether it comes from a FE model, an experimental modal analysis, etc.; and
● we can easily exclude certain modes, if we wish to simulate what would happen if we remove a mode from a particular frequency range (by using a modal model and excluding whichever modes we want to).

To solve the forced response problem, we now formulate digital filters for each mode in the summation in Equation (19.2). This is the key to the method, and Ahlin et al. (2006) show that the best method to transform the analog filters in the modal superposition is, in most cases, a so-called ramp invariant transform. Using this transform, the difference equation (see Section 3.3.2) for the positive pole (the left term in Equation (19.2) for one mode) is

y(n) = e^(srΔt) y(n − 1) + [(−srΔt − 1 + e^(srΔt))/(sr^2 Δt)] Apqr x(n) + [(1 + srΔt e^(srΔt) − e^(srΔt))/(sr^2 Δt)] Apqr x(n − 1),    (19.4)

where sr is the pole for mode number r and Δt = 1/fs is the sampling increment. From Equation (19.4), we can derive the numerator digital filter polynomial

npqr = [Apqr/(sr^2 Δt)] [−srΔt − 1 + e^(srΔt), 1 + srΔt e^(srΔt) − e^(srΔt)],    (19.5)

and the denominator digital filter polynomial

dpqr = [1, −e^(srΔt)],    (19.6)

which are defined using the MATLAB/Octave nomenclature here (i.e., the higher-order coefficient comes first). The total digital filter coefficients, including the complex conjugate term in Equation (19.2), can be conveniently computed from the numerator and denominator polynomials npqr and dpqr using polynomial multiplication, which, as we know, is equivalent to convolution. The total denominator polynomial A is thus

A = Re[dpqr ∗ d*pqr],    (19.7)

and the total numerator polynomial B is

B = 2Re[npqr] ∗ Re[dpqr] + 2Im[npqr] ∗ Im[dpqr],    (19.8)

where in both equations ∗ stands for convolution and the superscript * stands for complex conjugate. The similarity between the symbols should cause no problem. The details of these calculations are found in Ahlin et al. (2006).

Although the formulation presented here has been formulated for displacement output, it can also be extended to velocity output. The modal superposition solution does not exist for acceleration output, because the degree of the numerator polynomial in this case is equal to the degree of the denominator polynomial.


The displacement or velocity output can, however, conveniently be differentiated, e.g., by the methods presented in Section 3.4.3, to produce velocity or acceleration output with very high accuracy.

The method presented here is, of course, not free of error. The error of most concern with the filters presented here is a bias error. The error is, however, relatively easy to calculate, as the filter coefficients, once computed, can be used to synthesize the frequency response using the MATLAB/Octave command freqz, as was explained in Section 3.3.2. This frequency response can then be compared with the true frequency response of the system, based on the synthesis formulas from Chapter 6. The bias error is only dependent on the sampling frequency and is usually negligible if at least 10 times oversampling is used for the filters. For an example of the filters used here, see, e.g., Section 14.5.1, where the presented method is used to calculate input/output data of a known 2DOF system. The method will be used extensively later in this book.
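As an illustration, a sketch of the filter for a single mode, using Equations (19.5) through (19.8) together with the MATLAB/Octave filter command, could look as follows; the variable names are assumptions for the sketch, and a full implementation needs to loop over all modes and input/output DOFs (see the accompanying ABRAVIBE toolbox).

% Minimal sketch of the ramp invariant digital filter for one mode.
% sr is the complex pole, Apqr the residue, fs the sampling frequency, x the force.
dt = 1/fs;
E  = exp(sr*dt);
n  = Apqr/(sr^2*dt)*[-sr*dt - 1 + E, 1 + sr*dt*E - E];    % Equation (19.5)
d  = [1, -E];                                             % Equation (19.6)
A  = real(conv(d, conj(d)));                              % Equation (19.7), filter denominator
B  = 2*conv(real(n), real(d)) + 2*conv(imag(n), imag(d)); % Equation (19.8), filter numerator
y  = filter(B, A, x);    % displacement contribution of this mode

The total displacement response is then obtained by summing the contributions from all modes, after which velocity or acceleration output may be produced by differentiation as mentioned above.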

19.2.4 Plexiglas Plate Forced Response Example

We will now show how to use modes from a FEM model, add modal damping as shown in Section 6.4.3, and then use the FFT processing from Section 19.2.3.1 to produce the response due to uncorrelated forces applied to the four corners (DOFs 1, 7, 29, and 35) of the Plexiglas plate, see Figure 19.1. All forces and responses are normal to the plane. We start by assuming we have the mode shapes (reduced by selecting only the DOFs of the experimental plate) in columns in the variable V, and the eigenfrequencies in the column vector fr. Furthermore, we will assume we have the following functions defined (they are available in the free accompanying ABRAVIBE toolbox):

● H = modal2frf(f,p,V,indofs,outdofs,OutType), which computes the entire FRF matrix, size N-by-D-by-R
● y = fftfresp(x,fs,p,V,indofs,outdof,OutType), which computes the time forced response using FFT convolution as described in Example 19.2.1 (where the FRFs in variable H are computed by modal2frf inside this file)

where N is the number of frequencies, D=length(outdofs), and R=length(indofs). The following code produces the time responses due to the four forces:

indofs=[1 7 29 35];
outdofs=[1:35];
z=0.03;                 % 3 % modal damping
p=-2*pi*fr*z + j*2*pi*fr*sqrt(1-z^2);
fs=5000;                % highest natural frequency is approx. 1000 Hz
N=50*1024;              % Number of samples; 50 blocks of size 1024
Forces=randn(N,4);      % 4 uncorrelated Gaussian forces
for n=1:length(outdofs);
    Data = fftfresp(Forces,fs,p,V,indofs,outdofs(n),OutType);
    Header.Dof=outdofs(n);
    Header.Dir='Z+';
    save(FileName{n},'Data','Header')
end



Figure 19.4 Results of forced response calculation for the Plexiglas plate. In (a) part of the force, and in (b) part of the response in DOF 1.

The example code listed here is not complete but includes the important steps. For a complete example, see the website, www.abravibe.com. A portion of one of the forces and the response in DOF 1 are shown in Figure 19.4. The data from this example will be used for ODS in Section 19.6, and for OMA in Section 19.11.1.

19.3 Spectra of Periodic Signals

We will now look at some examples of spectra for periodic signals. As we showed in Section 10.2, the recommended spectrum for periodic signals is the linear spectrum, RMS scaled. We illustrate spectra of periodic signals using the acceleration measurement of vibrations on a milling machine during operation that we presented in Chapter 18. The time response and linear spectra with both linear and logarithmic y-axes were shown in Figure 18.8, where it could be seen that the linear y-axis limits the visible information, which may be good if one is interested in the dominating frequencies, as the linear scale shows things “like they are.” Sometimes, however, there is more information in the signal that may become visible in logarithmic scale.

We shall now look at another choice of display: should the spectra be plotted in acceleration, velocity, or displacement? This is not an easy question to answer, as it depends on the purpose of the measurement. Many times it does not matter, but as we will see, in the different units, different frequencies often dominate. In Figure 19.5, the acceleration, velocity, and displacement of the milling vibration are plotted with linear y-axes on the left, and with logarithmic y-axes on the right.


Figure 19.5 Linear spectra of vibrations of a milling machine with different plot formats. In (a) and (b) acceleration is shown in linear and logarithmic y-axis, respectively, in (c) and (d) velocity is shown with same scales, and in (e) and (f) displacement. The entire data (approximately 38000 samples) were used for the FFT calculation.

As may be seen in the figure, changing the unit from the measured acceleration (top) by integrating once or twice changes the significance of different frequencies considerably. This is particularly noticeable with a linear y-axis, although the largest peak is of course the same in both linear and logarithmic axes. So in which units should we plot data then? This is sometimes (but not always) determined by the task at hand. Sound is usually related to vibration velocity, as sound pressure is related to velocity (sound intensity is the product of sound pressure and particle velocity).


Figure 19.6 Linear spectra of vibrations of a milling machine. In this figure in (a), (c), and (e) spectra based on one record with blocksize 1024, 2048, and 4096, respectively, are plotted. It may be seen that the best choice of blocksize is the largest, 4096 samples, since with this blocksize the peaks for each harmonic are well separated, and the spectrum goes down to zero in between the peaks. In (b), (d), and (f) are shown spectra with a blocksize of 4096 and 10 averages with 50 % overlap and calculated starting at sample 0, 4096, and 8192, respectively. Here it may be noted that the differences between the spectra are small, with peak values for each harmonic having a maximum of approximately 5 % variation. Data are thus stationary. It may be seen by comparing panels (e) and (f), that using only one block of data in (e) produces a slightly higher peak than for the spectrum calculated using 10 averages in (f). This may be an indication of some noise in the measurement that justifies using some averages to get repeatable values for the peaks.


More often, the unit of display is not so critical, if the task is, for example, to find the cause of a particular frequency.

Next, we investigate appropriate settings for the spectrum estimation, i.e., an appropriate blocksize and whether averaging should be used or not. In Figure 19.6, in the left column, spectra based on blocksizes of 1024, 2048, and 4096 samples are shown. It can be seen that the plots, especially Figure 19.6(a) but to some extent also (c), have some frequency ranges where the spectra do not reach zero between the peaks, whereas in (e), the spectrum does reach zero between the peaks. It should therefore be concluded that the reason for the behavior in (a) and (c) is that the blocksize is not adequately large. The blocksize of 4096 should therefore be preferred.

In Figure 19.6(b), (d), and (f), spectra with a blocksize of 4096, but using 10 averages with 50 % overlap, are shown. The spectrum in (b) is based on samples starting from sample number zero (the first sample) of the time signal, whereas (d) is based on a spectrum calculated from time data starting at sample number 4096, and in (f) the time data start at sample 8192. It may be seen that the spectra are very stable, giving the same peak values independently of the starting point. The data can then be concluded to be stationary and the peak values trusted. By comparing Figure 19.6(e), based on a single time block, and (f), based on 10 averages, it may be seen that the averaging should be preferred, as it gives more stable data.
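A check like the one above is easy to script. The sketch below computes an averaged, RMS-scaled linear spectrum with a Hanning window and 50 % overlap, so that different blocksizes and starting samples can be compared; the variable names, the window, and the scaling are assumptions for this sketch and not the ABRAVIBE spectrum functions.

% Minimal sketch of an averaged, RMS-scaled linear spectrum.
Nb   = 4096;                              % blocksize to investigate
Nblk = 10;                                % number of averages
step = Nb/2;                              % 50 % overlap
w    = 0.5 - 0.5*cos(2*pi*(0:Nb-1)'/Nb);  % Hanning window
Pavg = zeros(Nb/2+1,1);
for b = 1:Nblk
    xb   = x((b-1)*step + (1:Nb));        % shift the start sample to test stationarity
    Xb   = fft(w.*xb(:));
    Pavg = Pavg + abs(Xb(1:Nb/2+1)).^2;   % power (RMS) averaging of the blocks
end
L = sqrt(2*Pavg/Nblk)/sum(w);             % RMS-scaled linear spectrum
f = (0:Nb/2)'*fs/Nb;                      % frequency axis in Hz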

19.4 Spectra of Random Signals

Spectra of random signals are best described by spectral densities, as described in Sections 8.3.1 and 10.3. When analyzing random vibrations, it is important to investigate their stationarity and to determine the best blocksize and the number of averages necessary for a desired random error. In this section, we will look at two different examples of random data: first, we will look at measurements made on the Plexiglas plate mentioned in Section 19.1; and second, we will look at data from a suspension bridge.

The Plexiglas plate was excited by moving a pencil randomly over the plate, while tapping the pencil with random time intervals and random intensity. The plate was instrumented with 35 accelerometers on the opposite side of the plate to where the tapping occurred, and the measurement time was set to 300 seconds. The first investigation tries to establish whether the data are stationary and then to select an appropriate blocksize. In Figure 19.7(a), the time data of one of the accelerometer signals are shown. As can be seen in the figure, the RMS level is relatively stable. In Figure 19.7(b), three different PSDs with 2K, 4K, and 8K blocksize are shown, zoomed in around the peak of the first natural frequency (actually there are two natural frequencies, see Table 19.1, but this does not change the bias of the PSD). The idea, as explained in Section 10.7.3, is to see for which blocksize the bias error vanishes, which is the case when two of the PSD estimates with different blocksizes produce the same peak value. In the figure, it can be seen that this is not entirely the case with the two highest peaks (which are for the 4K and 8K blocksizes), but this is because for the 8K blocksize, the random error begins to increase. This is not seen very clearly in the signal, but zooming out it is obvious. We omit this to save space here. So the result is that we choose 4K as the blocksize.


Figure 19.7 Time response in (a), and PSDs for three different blocksizes zoomed in around the first peak, to see the bias, in (b). See text for discussion.


Figure 19.8 Three power spectral densities of DOFs 1, 10, and 18 of the Plexiglas plate excited by tapping with a pencil, moving randomly over the plate, in (a), and in (b) cross-spectral densities of DOFs 10 and 18 with reference to DOF 1.


We now compute PSDs of all responses and CSDs between all responses and a selection of reference DOFs. In Figure 19.8, we present three PSDs (autospectral densities) of DOFs 1, 10, and 18 (see Figure 19.1), and also the two CSDs between DOFs 1 and 10, and 1 and 18, respectively. Note that DOF 18 is the center point of the plate. Here it may be seen that some peaks are visible only in some of the PSDs, because, for example, DOF 18 is on a node line for many of the modes.

19.5 Data with Random and Periodic Content

We will now look at two examples where the signals contain random vibrations and periodic components. We will look both at how to describe these signals with suitable spectra and at how to separate the periodic and random components with the AFDE method described in Sections 18.6.2 and 18.7.1.

19.5.1 Car Idling Sound

We start by looking at a sound pressure signal measured at the exhaust outlet of a car, as seen in Figure 19.9. This signal was acquired in an engine cell, but the purpose here is not to discuss how to analyze combustion engine noise, but only to use it as an example of, as it will turn out, a signal with a mix of random and harmonic content. This is thus not, typically, the way it would be analyzed in the automotive industry, because the interest there is mainly in the harmonic components.

In Figure 19.9, a time plot is shown in panel (a) that reveals a stationary signal (with seemingly constant RMS level). In panel (b) a zoomed time segment is shown, from which it may be seen that there is some periodicity in the signal. We can thus suspect that the signal contains some periodic components. To investigate this, we compute a PSD of the signal, which is found in Figure 19.9(c), where, again, it may be suspected that some of the peaks are due to harmonics. To investigate whether this is the case, we turn to the cumulated PSD in panel (d). This is an uncommon display function, but it has its merit when analyzing an unknown signal. Of course, in most industries unknown signals are rarely investigated, since the products and their vibrations are well known from experience. Nevertheless, it makes sense to show how one can go about investigating a new signal without any knowledge about its content. In any case, the cumulated mean-square level, which was mentioned in Section 10.7.4, displays the cumulated mean-square level up to the displayed frequency. It is easily computed from the PSD, Gxx(k), by

Cxx(k) = Δf ∑r=0..k Gxx(r),    (19.9)

where Δf is the frequency increment and Cxx(k) is the cumulated mean-square level, which at each discrete frequency k equals the mean-square level summed from zero to frequency k. If so desired, the cumulated mean-square level may, of course, be computed from a linear spectrum instead of from the PSD. In that case, the equivalent noise bandwidth of the time window used for the spectrum estimation has to be taken into account, as described in Section 10.7.6.


Figure 19.9 Car idling sound pressure data. In (a) the whole time signal is plotted, in (b) 0.3 seconds are zoomed in to reveal some periodicity, in (c) a PSD is shown, in (d) the cumulated mean-square level is shown, and in panels (e) and (f) the x-axis of panels (c) and (d) are zoomed, respectively [Courtesy of Volvo Car Corporation].

The equation for calculation of the cumulated mean-square level is then

Cxx(k) = (1/Ben) ∑r=0..k Lxx^2(r),    (19.10)

where Ben is the normalized equivalent noise bandwidth (without the multiplication by Δf ) of the time window used, i.e., approximately 3.7 for a flattop window, depending on which such window has been used, see Table 9.2. For the Hanning window, the value is 1.5.
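Both forms are one-line operations in MATLAB/Octave; a sketch with assumed variable names:

% Cumulated mean-square level from a PSD Gxx with frequency increment df, Equation (19.9)
Cxx = df*cumsum(Gxx);
% or from an RMS-scaled linear spectrum Lxx, Equation (19.10), with Ben the
% normalized equivalent noise bandwidth of the time window used
Cxx = cumsum(Lxx.^2)/Ben;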



Figure 19.10 Car idling sound pressure data. A linear spectrum from zero to 100 Hz is displayed in (a) and the corresponding cumulated mean-square level in (b), the latter revealing two harmonics at approximately 29 and 58 Hz. It is also clear that the signal is a mix of random and harmonic content. See text for details [Courtesy of Volvo Car Corporation].

In Figure 19.9(d), the cumulated PSD reveals that, out of the final mean-square level of approximately 0.11, nearly half comes from the content at very low frequency, for which, on this scale, it cannot be determined whether it is random or harmonic. In panels (e) and (f) of the figure, we therefore show a zoomed frequency axis up to 1000 Hz. On this scale, it is obvious that there is a concentration of energy in the very low-frequency region, but it is still impossible to conclude whether this is due to harmonic or random (resonant) content. However, it is very clear from panel (f) that the rise from 0 to 0.05 is the result of a very narrow frequency range.

In Figure 19.10, we have further zoomed the frequency range, now from zero to 100 Hz. Also, since the suspicion is that there are harmonics in this range, we plot the spectrum in the form of a linear spectrum in panel (a), and the cumulated mean-square plot in panel (b). Here it is very obvious that the signal contains a mix of harmonic and random content. There are obviously two harmonics, at approximately 29 and 58 Hz, that contribute a large portion of the mean-square level. These peaks correspond to the RMS levels of orders two and four of the engine, which is a four-stroke, four-cylinder engine. However, the random portion still contributes approximately 0.8 ⋅ 10^-3 Pa^2 below 29 Hz, and 1.3 ⋅ 10^-3 Pa^2 between 29 and 58 Hz. By taking the square root of these values, we can conclude that the RMS level of the signal in the frequency range from zero to 58 Hz is composed of

● 28 ⋅ 10^-3 Pa RMS from random content between zero and 29 Hz
● 51 ⋅ 10^-3 Pa RMS from the periodic component at 29 Hz
● 36 ⋅ 10^-3 Pa RMS from random content between 29 and 58 Hz
● 19 ⋅ 10^-3 Pa RMS from the periodic component at 58 Hz

Since the signal we are investigating is a sound pressure, it may be tempting to analyze it with third-octave spectra, as is common in acoustics. It may also be considered to A-weight the spectrum, again because it is a sound pressure. Such spectra are shown in Figure 19.11.


Figure 19.11 Third-octave spectrum of car idling sound pressure with linear weighting in (a) and A-weighted in (b) [Courtesy of Volvo Car Corporation].

As can be seen, these spectra do not reveal much information about the nature of the signal. For a signal such as this, with most of the energy concentrated at low frequencies, the third-octave spectra do not contribute much to the understanding of the signal. This does not, of course, in any way suggest that third-octave spectra are not useful; they are often the go-to spectrum type in acoustics because they are closely related to the perception of the sound.

Another possibility for analyzing signals that contain a mix of random and harmonic content is to separate the two contributions. This is common, for example, in sound quality analysis, also called psychoacoustics, see, for example, Lyon (2000), where sounds may be separated into the contributions from different sources. In Figure 19.12, we display results from separating the car idling signal. First, the harmonics of this signal were removed by the automated frequency domain editing method, AFDE, described in Section 18.7.1. Since we are interested in removing the harmonics with as much precision as possible, i.e., without distorting the random part of the signal, we choose to interpolate over only the one frequency line where a harmonic is detected. Next, the difference between the original signal and the signal without the harmonics was calculated. This signal thus only contains the harmonics. The original signal is shown in Figure 19.12(a), whereas the signal with only the harmonics is shown in panel (b).


Figure 19.12 Car idling sound pressure signal separated into random and harmonic content. In (a) only the random part of the original signal is shown, in (b) the extracted harmonic part of it, in (c) a PSD of the random part, and in (d) a linear spectrum of the extracted harmonic part of the signal [Courtesy of Volvo Car Corporation].

The “strange” look of the envelope of the harmonic signal is an effect of the initial phase relationships between the harmonic components; this can be ascertained by summing only the harmonic components, using the Fourier coefficients obtained from the detection of the harmonics with the periodogram ratio detection method, PRD, see Section 18.6.2. This was attempted and produced the same result. In panel (c) of the figure, we see the PSD of the random part of the signal after removal of the harmonics. Less than 1 % of the harmonics remains in the signal and, if desired, this residue can be removed by applying the AFDE method a second time or, alternatively, by choosing to interpolate over more than the single frequency line for each harmonic in the first application of AFDE.
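To illustrate the principle of interpolating over a single frequency line, the following sketch removes one known harmonic from a time record by editing its DFT. This is an illustration of the general idea only, not the book’s AFDE implementation, and the variable names x, fs, and f0 are assumptions.

% Illustration only (not the AFDE implementation): remove one known harmonic at
% frequency f0 [Hz] from signal x sampled at fs by interpolating over a single
% DFT line. Variable names x, fs, f0 are assumptions, not from the book.
N        = length(x);
X        = fft(x);
k        = round(f0*N/fs) + 1;            % DFT line closest to f0 (1-based index)
X(k)     = 0.5*(X(k-1) + X(k+1));         % interpolate over the harmonic line
X(N-k+2) = conj(X(k));                    % keep conjugate symmetry of the DFT
xr = real(ifft(X));                       % random part (harmonic removed)
xh = x - xr;                              % extracted harmonic part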

19.5.2 Container Ship Measurement

We will now look at an acceleration signal from a container ship, see Figure 19.13. In panel (a) of the figure, the PSD of the signal is shown. This spectrum clearly shows the natural frequencies of the ship hull as peaks in the PSD.


Figure 19.13 Spectral densities and cumulated mean-square levels of an acceleration signal from a measurement on a container ship. In (a), the PSD of the original signal is shown, with four peaks due to harmonics marked by asterisks, and in (c), the cumulated mean-square level of the original signal. In (b), the PSD of the signal after automatic removal of harmonics by the AFDE method (see text), and in (d), the cumulated mean-square level of this signal. It can be seen in (b) that the harmonics are efficiently removed, and in (d) that the only harmonic contributing notably to the energy of the original signal was the one at approximately 3 Hz.

Note, however, also the four harmonics marked by asterisks. Three of these frequencies were caused by the propeller rotation speed, and one was an engine frequency. The measurements from this ship will be used for an example of OMA in Section 19.11.4. In Figure 19.13(c), the cumulated mean-square level is plotted. There is evidently very little energy contribution from the harmonics, since there are no clear “jumps” in the cumulated mean-square plot. In panel (b), the PSD of the signal with the harmonics removed by the AFDE method is plotted, and in panel (d), the cumulated mean-square level of this signal is plotted, overlaid by the cumulated mean-square level of the original signal. It is clear that only the harmonic at approximately 3 Hz contributes to the energy of the signal, and then only to a relatively small extent. Remember that harmonics will stick out of the PSD provided only that the frequency increment is small enough; the fact that a harmonic is clearly visible therefore does not necessarily mean that it is important in terms of its energy.
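The last point can be illustrated with a small simulation: a weak sine buried in broadband noise becomes increasingly visible in the PSD as the frequency increment is decreased, although its mean-square contribution is unchanged. The sketch below is an illustration of this effect only; the numbers are arbitrary and not taken from the book.

% Demonstration (not from the book) that a harmonic "sticks out" more in a PSD
% when the frequency increment is decreased, while its energy is unchanged.
fs = 1000; T = 60;
t  = (0:1/fs:T-1/fs)';
x  = 0.05*sqrt(2)*sin(2*pi*3*t) + randn(size(t));    % weak sine (RMS 0.05) in unit-variance noise
for nfft = [1024 16384]                              % frequency increment df = fs/nfft
    [Pxx, f] = pwelch(x, hanning(nfft), [], nfft, fs);
    semilogy(f, Pxx), hold on
end
hold off, xlim([0 10]), xlabel('Frequency [Hz]'), ylabel('PSD')
legend('\Deltaf \approx 1 Hz', '\Deltaf \approx 0.06 Hz')
% The sine peak level grows as 1/\Deltaf, whereas the random noise floor stays
% constant, so a clearly visible peak does not necessarily carry much energy.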

19.6 Operational Deflection Shapes – ODS

One of the most useful tools for solving many vibration problems is operating deflection shapes, ODS. This technique is usually based on animation of the forced response of a structure in operation at a particular frequency, although it is also possible to animate time domain signals, particularly for transient analysis. The frequency domain ODS is the forced response expressed by the displacement vector {U(f )} in Equation (19.1). We know from Chapter 6 that the frequency response matrix [H(f )] has a direct relation to the mode shapes of the structure, as shown by, for example, Equation (6.113). The basis for ODS is that in many (but not all) cases when a structure is forced, the deformation shape at a particular frequency, as described by Equation (19.1), will look very similar to one of the mode shapes. This is particularly true close to the natural frequencies, where structures are very “unwilling” to move in any way other than by the shape of the mode in question. Some vibration problems naturally occur near the natural frequencies because of resonance amplification, and so, if we take a problem frequency, insert it into Equation (19.1), and animate the vector {U(f )}, in many cases we will see a motion resembling one of the modes. Knowing this deformation shape, a solution to the vibration problem may be found without having to perform a more complex measurement such as an experimental modal analysis test. ODS is therefore a very useful trouble-shooting tool.

If we measure accelerations or velocities, Equation (19.1) needs to be multiplied by −𝜔2 or j𝜔, respectively, but, since that is only a scaling of all vector coefficients, the shape is not altered. It therefore does not matter which quantity we measure and animate.

Which spectrum type should we use to extract the vector in Equation (19.1), or derivatives of it if we measure with, say, accelerometers? The answer is that it does not really matter, as long as we follow good practice for spectrum measurements. From Chapters 10 and 13, we know that the only possibility we have to measure the phase relation between two channels is to measure a cross-spectrum between the two points. This is what gives us the phase of a frequency response, but we might as well obtain it directly by a cross-spectrum measurement, of course. When it comes to the amplitudes of the coefficients in {U(f )}, they are best measured with autospectra, usually with some averaging to produce stable values. A common way to perform ODS for periodic vibrations is therefore to use the phase spectrum described in Section 10.2.3. For random vibrations, we may instead choose cross-spectral densities.

The choice of the reference DOF (coefficient in {U(f )}) used for the phase reference may sometimes be important. It should always be chosen as a DOF with as much response as possible at the frequency of interest (which is the same as saying a DOF which has a large mode shape coefficient for the mode of interest). As described for EMA and OMA in Chapters 16 and 17, it is also important to select a reference DOF which is not on a node line of any of the possible modes of interest, as this will cause badly defined operating shapes, see Section 6.4.5.

Frequency response functions (transmissibilities) could also be used for the purpose of extracting ODSs. While perfectly usable per se, there are, however, at least two disadvantages with using FRFs for ODS extraction.
First, there will, in general, not be any peaks in transmissibilities at the frequencies of interest, and it is therefore difficult to find the proper frequencies for extracting the ODSs. Second, if the loads forcing the structure are periodic, the transmissibilities will be badly defined at all frequencies but those of the periodic loads, since at other frequencies the division is by the response to the background noise only (in theory zero, but experimentally there is always some noise, of course). The phase spectrum or cross-spectral density does not suffer from either of these two drawbacks and is most often the better choice.

ODS analysis also requires software that allows us to generate a wire frame model of the measurement points and to animate the operating shapes. Such software is common in commercial applications and can also be implemented in, for example, MATLAB. Making an ODS measurement is rather straightforward. The following points briefly describe the procedure (a small sketch of step 3 is given after the list):
1. After deciding which DOFs should be measured, number each point, decide on a coordinate system, and build a geometry model in the animation software.
2. Make a measurement of vibrations in the selected DOFs on the structure during steady-state conditions, recording which sensor is in which point and direction (most commercial systems have support for this). If all requested DOFs cannot be measured simultaneously, select one or more references which are kept in the same place in every measurement, move the other sensors around, and record the signals of each measurement as a separate set until all points have been measured. The reference DOF or DOFs should be selected at points with clear peaks at the frequency or frequencies of interest.
3. Compute phase spectra of all measured DOFs, with phase reference to the reference channel. If the signals are random, cross-spectral densities may be selected instead.
4. If more than one set of measurements was used, each set should be scaled by the vibration level in the reference DOF of the same set, to even out potential differences in the operating conditions between the sets.
5. Extract amplitude and phase at the frequencies of interest from all measurements, and store the result for each frequency in a shape vector.
6. Import the shape vectors into the animation software and animate the shapes.
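As an illustration of step 3, the following is a minimal MATLAB sketch of forming spectra with amplitude from the autospectrum and phase from the cross-spectrum with the reference channel. The variable names Y and fs, and the blocksize, are assumptions for illustration and are not taken from the book.

% Sketch of phase-referenced spectra for ODS (assumptions: Y is an Ns-by-D matrix
% of simultaneously measured time signals with the reference DOF in column 1,
% and fs is the sampling frequency; names are not from the book).
nfft = 4096;
win  = hanning(nfft);
D    = size(Y,2);
[~, f] = pwelch(Y(:,1), win, [], nfft, fs);          % frequency axis
U = zeros(length(f), D);
for d = 1:D
    Gyy = pwelch(Y(:,d), win, [], nfft, fs);         % autospectrum, gives the amplitude
    Gyx = cpsd(Y(:,d), Y(:,1), win, [], nfft, fs);   % cross-spectrum, gives phase rel. to reference
    U(:,d) = sqrt(Gyy).*exp(1i*angle(Gyx));          % amplitude with relative phase
end
% Each row of U, at a frequency of interest, can then be used as an operating shape.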

19.6.1 Plexiglas Plate ODS Example – Single Reference
We will now look at an example of how to extract ODS shapes in a very simple MATLAB script. We assume that we have the 35 forced responses computed for the Plexiglas plate in Section 19.2.4. We choose DOF 1 (the first coefficient in the vector y) as reference. Since the plate vibrations are random, we compute cross-spectral density functions for all responses with reference to the first signal, stored in the variable Gyx, whereafter we plot these spectra using a frequency axis in the vector f. A plot of the spectra is found in Figure 19.14. We assume the matrix Gyx is of size N-by-D, where N is the number of frequencies and D is the number of responses. The code following Figure 19.14 then creates the ODS at several frequencies picked by the user.

Figure 19.14 Cross-spectral densities between all responses of the Plexiglas plate, with DOF 1 as reference, used for the single-reference ODS example in Section 19.6.1. Each selected frequency is marked by a vertical line.

n=0;
title('Pick frequencies, to end press RETURN')
[xx,yy]=ginput(1);                  % Read one frequency selection
while ~isempty(xx)                  % while the user selected a frequency
    n=n+1;                          % next index into result vectors
    idxs(n)=round(xx/(f(2)-f(1)));  % find index into f vector
    freq(n)=f(idxs(n));             % put value in result vector
    ODS(:,n)=Gyx(idxs(n),:).';      % ODS vector at frequency freq(n)
    [xx,yy]=ginput(1);              % next value, or empty if user pressed RETURN
end

After this code is run, the vector freq contains the picked frequencies, and the matrix ODS contains the ODS shapes in its columns. Of course, the code can be made more advanced by, for example, adding a peak search around each selected frequency, as the user may have selected a frequency slightly beside the peak; a small sketch of such a refinement is shown below. The ODSs for the first two frequencies, selected around the first peak of Figure 19.14, are shown in Figure 19.15(a) and (b). The first peak in the spectrum actually corresponds to two closely spaced modes, with theoretical frequencies of 145.3 and 150.3 Hz, respectively, see Table 19.1. In an attempt to select frequencies of both modes, we therefore select the frequencies 141 and 154.6 Hz (somewhat arbitrarily, but clearly below and above each mode). The first mode is supposed to be the first bending mode along the long axis (x), and the second mode the first torsion mode. We will later see that the two modes in real life are actually reversed, but for now we should note that the two ODSs in Figure 19.15(a) and (b) both look mostly like the first torsion mode. The bending mode is part of the first ODS, but the torsion is dominating. This is very common in cases with two closely spaced modes, because one of the modes dominates the response. In cases with closely spaced modes we may use the technique presented in Section 19.6.2, using multiple references and virtual signals.
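A possible refinement of the frequency picking, replacing the index assignment inside the loop above, is sketched below; the search width of five lines is an arbitrary choice and not taken from the book.

% Sketch of a local peak search around the picked frequency (arbitrary +/-5 lines).
nSearch = 5;
i0      = round(xx/(f(2)-f(1)));                         % index picked by the user
rng_    = max(1, i0-nSearch) : min(length(f), i0+nSearch);
[~, im] = max(abs(Gyx(rng_, 1)));                        % search in the reference column
idxs(n) = rng_(1) + im - 1;                              % index of the local maximum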

Figure 19.15 Operating deflection shapes (ODSs) for the single-reference case in Section 19.6.1 in (a), at 141 Hz, and (b), at 154.6 Hz, and from the multiple-reference case described in Section 19.6.2 in (c), at 149.6 Hz, and (d), at 151.6 Hz. As can be seen, in the case of closely spaced modes, the multiple-reference case is needed to extract both the torsion and bending modes around 150 Hz.

19.6.2 Plexiglas Plate ODS Example – Multiple-Reference
As we saw in Section 19.6.1, in some cases with closely spaced modes, the ODS obtained by using cross-spectra with a single reference may miss one of the modes. Actually, the ODS will be a linear combination of the two modes, but many times, as in our example, one of the modes is dominating. This situation can be solved by using more than one reference during the measurements and computing the input/output cross-spectral matrix using the references as inputs and all other channels as outputs. Then virtual cross-spectra of all outputs in {y} with the r first virtual signals are computed using Equation (15.15). This virtual cross-spectrum matrix has one column for each virtual signal (principal component), and each of these columns is then used to extract operating shapes. Each column will give a virtual ODS. In the case of several independent operating shapes located at a particular frequency, provided at least one of the references is located off the nodal lines of each shape, the virtual shapes will each give one of the independent shapes (Otte et al. 1990; Tucker and Vold 1990). It is also possible to use this technique in the case where all points of interest cannot be measured simultaneously.

It may be worth noting that, as we said initially in this section, ODS is a trouble-shooting tool, and we may be unaware of the actual mode shapes. In such a case, missing one of two modes may cause much confusion if the mode that results from the ODS for some reason cannot describe the vibration problem at hand. This may be a case where one then tries to apply multiple-reference ODS.


We shall now show how to apply multiple-reference ODS for our case with the Plexiglas plate (a minimal sketch of the virtual cross-spectrum computation is given at the end of this subsection). For this case, we use the same data as in Section 19.6.1, but we now calculate the cross-spectra in a 3D matrix Gyx which is N-by-D-by-R, where N is the number of frequencies, D the number of responses, and R the number of references. In our case, the matrix is thus N-by-35-by-4. We select the four corner DOFs as references, although we could select any DOFs as long as at least one DOF is off the node lines of each mode. Using the corner points as references means that we treat these four DOFs as inputs, and then treat all the responses, including the four corner points, of course, as outputs. Next, we compute the virtual input/output cross-spectrum matrix as in Example 15.2.2, which is also N-by-D-by-R (N-by-35-by-4 in our case).

To extract the multiple-reference ODSs, we first plot the virtual cross-spectra with the first reference, as shown in Figure 19.16(a). We select frequencies and extract ODSs for this reference just as in Section 19.6.1. In our case, we only select one frequency at the first peak, as indicated in the figure. You should note that there is a peak in the virtual cross-spectrum with the first reference for every mode of the structure. Next, we plot the virtual cross-spectra with the second reference, as shown in Figure 19.16(b). Here it may be seen that there is only one significant peak, around 150 Hz. The other peaks are significantly lower and do not reflect other modes. This may be hard to see, but with some experience it is usually clear which peaks are to be regarded as significant. If in doubt, there is nothing wrong with trying to extract an ODS; it will usually be either similar to the ODS picked at the same frequency with the first reference, or a linear combination of surrounding modes that will look “strange.”

The two ODSs that were extracted in this way are shown in Figure 19.15(c) and (d). As can be seen, the ODS for the first frequency, picked at 149.6 Hz, is similar to the first torsion mode of the plate, and the second ODS, from the frequency picked at 151.6 Hz, is similar to the first bending mode of the plate. As mentioned above, the fact that the frequency of the torsion mode is lower than the frequency of the bending mode is true for the plate, although


Figure 19.16 Virtual cross-spectral densities between all responses of the Plexiglas plate, and the first virtual reference in (a), and with the second virtual reference in (b), used for the multiple-reference ODS example in Section 19.6.2. Each selected frequency is marked by a vertical line.


the FEM model gave another result. This is common when frequencies are this close to each other. We have thus shown that the virtual cross-power spectra have the ability to separate closely spaced modes. If there is any doubt whether there could be several closely spaced modes, multiple-reference ODS is recommended; as we have shown, the extra effort is minimal.
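A minimal sketch of one way to form the virtual cross-spectra is given below. It assumes that, in addition to the N-by-D-by-R matrix Gyx, an N-by-R-by-R cross-spectral matrix Gxx between the reference DOFs is available, and that the virtual (principal component) references are obtained from a singular value decomposition of Gxx at each frequency; this is a common way to implement the idea, but it is not necessarily identical to the book’s implementation of Equation (15.15).

% Sketch of virtual cross-spectra (assumptions: Gyx is N-by-D-by-R, Gxx is the
% N-by-R-by-R cross-spectral matrix between the references; not the book's code).
[N, D, R] = size(Gyx);
Gyv = zeros(N, D, R);                                % virtual cross-spectra
for k = 1:N
    [U, ~, ~]  = svd(squeeze(Gxx(k,:,:)));           % principal directions of the references
    Gyv(k,:,:) = reshape(squeeze(Gyx(k,:,:))*U, 1, D, R);  % one column per virtual reference
end
% Column r of Gyv is then used exactly as Gyx was used in the single-reference case.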

19.7 Impact Excitation and FRF Estimation
In Section 13.8.6, we described how impact excitation can be improved by implementing it differently than what is usually done in commercial systems. In this section, we will look at an example of how to implement this processing. There are two main reasons that contribute to deteriorating the quality of FRFs from impact testing. The first is that the operator hits in slightly different locations around the desired excitation point, which causes the FRF to be inconsistent, since it is based on FRFs between different force positions and the fixed response location of the sensor. The second is that it is hard for the operator to hit with the same force level for each impact; if the structure is just slightly nonlinear, as many structures are, including impacts with varying force levels in the averaging will produce an erroneous FRF estimate. The method we demonstrate here helps avoid these disadvantages and was first proposed in Brandt and Brincker (2011).

The method is based on postprocessing time data, where the force signal and the response signals are recorded synchronously. We assume several responses are measured, as this is recommended to enhance the results of EMA. During the data acquisition, the measurement system is set to a fixed recording time, and during this time, the structure is hit in one of the DOFs repeatedly, say five to ten times, with a given (approximate) time interval. A sufficient time interval is checked during a pretest, where the optimal blocksize is obtained, so that one knows the minimum time interval between the impacts. This pretest is best performed by setting the measurement time long enough to acquire a large number, say 10, of impacts. After this, the following steps are performed:
1. Select a number of the force impacts with similar peak value to minimize effects of nonlinearities.
2. Set a trigger level and pretrigger condition.
(a) Using these settings, for each trigger condition encountered in the force signal, mark the time block defined by the trigger condition and a blocksize in a plot.
(b) Check that all impacts are defined, which means that the trigger level works. Otherwise, adjust it.
(c) Check each time block thus defined, to see that there is a (small) number of samples prior to the point where the force starts rising, so that the initial part of the force is not cut off. (If it is, increase the pretrigger.)
3. Investigate the best blocksize.
(a) Define an example blocksize, say N samples.
(b) Calculate the average FRF using the selected impacts, with blocksizes of N∕2, N, and 2N.


(c) Plot the three FRFs overlaid, and zoom in around the first natural frequency.
(d) Check that the two largest blocksizes give a similar peak value, and that the smallest blocksize gives a slightly smaller peak value. If this is the case, using the middle blocksize is sufficient to avoid bias at the peaks.
(e) If the criterion in point 3d) is not fulfilled, adjust the blocksize to a larger one if all peaks have different peak values, and to a smaller one if they all have the same peak value. Then repeat from 3a) until the condition in 3d) is fulfilled.
4. Recheck point 2 and the subpoints under it, and establish a minimum time interval between impacts so that one block does not extend into the next block.
5. Now, we need to optimize the time windows.
(a) Process the FRF and coherence using the selected impacts and with the settings from the previous points in this list.
(b) Plot the transient spectrum of the force and check if there is noise in it. If there is, apply a force window and reprocess. Repeat until the force spectrum is smooth.
(c) Plot the FRF and coherence and check the quality; particularly, check that the coherence is close to unity and that the FRF is not oscillating (“wavy”), which may be caused by double impacts. If the FRF and/or coherence are not optimal, adjust the exponential window and reprocess. Repeat until the FRF and coherence are optimal.

We will now look at some of the steps listed above for an impact record from the Plexiglas plate, in the process of obtaining optimal parameters for the impact processing. The trigger and pretrigger conditions are usually not very difficult to set in this type of processing. It is usually sufficient to set the trigger level to, say, 5 % of the maximum force peak, and the pretrigger to 50 samples. This may have to be adjusted if the blocksize is very small or very large. We start the process of finding the optimal settings by plotting the force and one accelerometer response as shown in Figure 19.17 (a small sketch of such trigger-based block selection is given after the figure). We can conclude that the trigger and pretrigger conditions and the blocksize are adequate, and that the blocks defined by these conditions are well separated. The time in between the impacts is adequate for the given blocksize.

The next step is to investigate an adequate blocksize. It should be selected as small as possible, but large enough to avoid bias around the natural frequencies, as explained in Chapter 13. We thus calculate the FRF using three different blocksizes of, say, N∕2, N, and 2N, as shown in Figure 19.18, where the three FRF estimates are plotted overlaid. In (b), we see the plot zoomed in around the first natural frequency, since this peak, assuming all modes have similar damping, is the narrowest, as was explained in Section 5.5.3. In Figure 19.18(b), we see that the smallest blocksize has a lower peak than the other two, but that the two largest peaks are similar. We can then conclude that the middle blocksize of N = 4096 samples is in this case adequate to avoid bias around the natural frequencies, and we select this blocksize for our analysis. Once the optimum blocksize has been determined, the force signal as plotted in Figure 19.17 should be rechecked, and a suitable minimum time between impacts should be established for the later measurements.

After this, it remains to investigate appropriate force and exponential windows to obtain FRFs with as good quality as possible. We thus look at the spectrum of a single force impact, as shown without and with a force window in Figure 19.19(c) and (d), respectively.



Figure 19.17 The entire measured record of the forces for an impact test in (a), and an interval around the fifth impact in (b). The entire force signal is plotted in dotted line, and each of the blocks defined by using the trigger condition (20 % in this case), the pretrigger condition (200 samples), and the selected blocksize (32K samples for this plot) is plotted in solid line. In (b), it may be seen that the force is well defined by a few samples to the left of the force peak. In (c) and (d), the response signal of the first accelerometer of the measurement is displayed similarly to the force signal.
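The trigger-based selection of force blocks described above can be sketched as follows. The variable name F, the 5 % trigger level, the 50-sample pretrigger, and the blocksize are assumptions used only for illustration, not values or code taken from the book.

% Sketch of trigger-based block selection (assumptions: F is the recorded force
% signal as a column vector, in N; trigger/pretrigger/blocksize values are examples).
trigLevel = 0.05*max(F);               % trigger level, e.g. 5 % of max force peak
preTrig   = 50;                        % pretrigger [samples]
N         = 4096;                      % blocksize [samples]
idx = find(F(1:end-1) < trigLevel & F(2:end) >= trigLevel) + 1;  % upward crossings
idx = idx([true; diff(idx) > N]);      % keep only one crossing per impact
idx = idx(idx > preTrig & idx+N-preTrig-1 <= length(F));         % discard edge cases
Fblocks = zeros(N, length(idx));
for m = 1:length(idx)
    i0 = idx(m) - preTrig;             % block start including pretrigger
    Fblocks(:,m) = F(i0:i0+N-1);       % one force block per detected impact
end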


Figure 19.18 Three overlaid frequency responses calculated with blocksizes of N∕2, N, and 2N, where N = 4096. In (a), the entire frequency range is shown, and in (b), an interval around the first natural frequency at approximately 144 Hz is shown, where it may be seen that the smallest blocksize gives a bias error in the FRF, whereas the two largest blocksizes do not, since they show similar peak values. The middle blocksize is thus selected for the analysis.


Figure 19.19 Force of the fourth impact in the measurement shown in Figure 19.17, plotted in dotted line with the force window in solid line, without using a force window in (a) and using a force window with a width of 3 % of the blocksize in (b). The corresponding transient spectra of the force are shown in (c) and (d), where it can be seen how the noise in the spectrum in (c) is removed by the multiplication by the force window, as seen in (d).

To visualize what this means in terms of the entire time block, the corresponding force windows are shown in panels (a) and (b) in solid line, and the force is plotted in dotted line. The plotted levels of the force windows are arbitrary (the windows are equal to unity) and they are displayed above the peak of the force impact for clarity. It may be seen in panel (d) that the force window of 3 % of the blocksize in this case removes the noise visible in Figure 19.19(c). This window is therefore selected for the further analysis.

The only thing that remains now, in the process of obtaining optimal parameters for the analysis of the impact data, is to find an optimal exponential window. We do this by looking at the coherence of the estimate based on several impacts and gradually applying an exponential window with a higher and higher exponent, until the coherence looks appropriate. The results without an exponential window, and with a window ending with a value of 0.01, are shown in Figure 19.20(c) and (d), respectively. In panels (a) and (b), the corresponding FRFs are shown. As can be seen in the figure, the improvement of the coherence is very small, as only the very small dip around the first antiresonance of the FRF is improved. In this case, it may be questioned whether it is worth using the window, since any EMA results will then have to be corrected for damping, as we described in Section 13.8.4.



Figure 19.20 Plots of frequency responses without using an exponential window in (a), and using an exponential window ending with the value 0.01 in (b), with the corresponding coherence functions in (c) and (d). The difference in the coherence functions is marginal in this case, but the dip around 100 Hz is somewhat smaller using the window.

For the further processing in this chapter, we will not apply any exponential window.

Now that we have established the optimal parameters for the signal analysis, we may start measuring the responses for each impact DOF, roving the hammer over the structure. Once all DOFs are excited, we can start using the data to estimate frequency responses. We do this by looking at the data for each force location, and optimizing the FRF estimates by selecting appropriate impacts so that the coherence function and the spectrum of the force look as good as possible. We illustrate this in Figure 19.21, where we show the FRF and coherence estimates for three selections of impacts for the same signal that we have used above, as shown in Figure 19.17.

There are two things we want to achieve by the analysis shown in the plots in Figure 19.21. First, that the coherence is close to unity over all frequencies, with few acceptable exceptions if dips occur at deep antiresonances. Second, that the force spectrum is smooth and does not show significant ripple, because ripple is an indication of double impacts, as illustrated in Figure 13.13. In Figure 19.21, the effect of double impacts is particularly visible in panel (f). In panels (g), (h), and (i), the results of a good selection of impacts are seen.


Figure 19.21 FRF, coherence, and transient force spectrum estimates for three different selections of impacts of the signal shown in Figure 19.17. In (a), (b), and (c), all impacts were chosen, and it is obvious that there are one or more impacts that produce a bad result. In (d), (e), and (f), only impacts 1 through 6 (out of the 11 in all) were chosen, which produces a better result, but still the coherence does not look good. In (g), (h), and (i), only impacts 1, 4, 5, 6, and 9 were chosen (by experimenting with combinations), and this produces a significantly better result.

This approach, selecting only those impacts that produce an acceptable coherence and force spectrum, should be applied to the data of each of the excitation points; once good results are obtained, the FRFs, and, if desired, the coherence and force spectrum, are stored. This procedure should be supported by software that makes it easy to investigate the effect on the estimates of different combinations of impacts. Such software is included in the free ABRAVIBE toolbox for MATLAB, see Section 1.6.2.
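For reference, the following is a minimal sketch of how the H1 FRF and coherence can be averaged over a set of selected impact blocks. The variable names Fblocks and Ablocks (force and acceleration blocks in columns) and fs are assumptions, and the sketch omits the force/exponential windows and spectrum scaling, which cancel in the H1 and coherence estimates.

% Sketch of averaged H1 FRF and coherence from selected impact blocks
% (assumptions: Fblocks and Ablocks are N-by-M with one block per selected impact).
[N, M] = size(Fblocks);
Gff = zeros(N/2+1,1); Gaa = zeros(N/2+1,1); Gaf = zeros(N/2+1,1);
for m = 1:M
    Fm = fft(Fblocks(:,m)); Fm = Fm(1:N/2+1);
    Am = fft(Ablocks(:,m)); Am = Am(1:N/2+1);
    Gff = Gff + abs(Fm).^2;            % force autospectrum (unscaled)
    Gaa = Gaa + abs(Am).^2;            % response autospectrum (unscaled)
    Gaf = Gaf + Am.*conj(Fm);          % cross-spectrum response/force (unscaled)
end
H1  = Gaf./Gff;                        % H1 FRF estimate [(m/s^2)/N]
coh = abs(Gaf).^2./(Gff.*Gaa);         % ordinary coherence
f   = (0:N/2)'*fs/N;                   % frequency axis [Hz]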


19.8 Plexiglas EMA Example
To give an introductory demonstration of experimental modal analysis, EMA, we start with a complete example of modal parameter estimation, MPE, beginning with quality assessment of the FRFs, then pole estimation, followed by mode shape estimation, and ending with an assessment of the quality of the results. This example is intended to give a first overview of the different steps taken in the MPE process without getting lost in too many details. In Section 19.9, we will present more details about the many choices involved and how they affect the results.

19.8.1 FRF Quality Assessment
After all the FRF data are stored, and prior to the parameter estimation, it is advisable to investigate the consistency of the data to get an impression of whether the data are good enough for appropriate parameter estimation. Having looked at the FRFs, coherence functions, and force spectra during the processing, as mentioned in Section 19.7, you may already have a good first impression of the data quality. However, it is still possible that the data contain problems that may be revealed by further analysis. A good tool for quality assessment is the multivariate mode indicator function, MIF, described in Section 16.2.10. In Figure 19.22, the multivariate MIFs based on the 70 FRFs for the Plexiglas plate, with references in the two corner DOFs 1 and 7 (as shown in Figure 19.1), are shown for the entire frequency range in panel (a), and for the frequency interval from 100 to 450 Hz in panel (b).


Figure 19.22 Multivariate MIFs for the Plexiglas plate, based on impact testing data with reference accelerometers in the two corner points, DOFs 1 and 7. The first MIF, plotted in solid line, shows dips for each mode, whereas the second MIF, plotted in dashed line, dips only for the first natural frequency around 150 Hz, indicating that there are two closely spaced modes there. In panel (b), a zoomed frequency range reveals that the first and second MIF both dip to almost zero. The second MIF also dips at approximately 420 Hz, but that dip is not an indication of two modes; it is the so-called eigenvalue crossover effect, where the dip coincides with the peak in MIF 1 between the relatively closely spaced modes for which MIF 1 dips. The only two modes around this frequency range are thus the two modes for which MIF 1 exhibits dips.


It should be noted that the multivariate MIF indicates real normal modes, so in cases where the modes are not expected to be real, other mode indicator functions may have to be used instead, see Section 16.2.10. In Figure 19.22(a), the multivariate mode indicator functions (two, since we are using two references) indicate where there are real-valued normal modes. The first MIF makes a dip at every frequency where there is a natural frequency. The second MIF dips only at frequencies where there are two or more natural frequencies, and so on if there are more references. In the frequency range shown, we thus have a total of 10 modes, since there are two closely spaced modes around 150 Hz, and above this the first MIF exhibits eight additional dips. A special phenomenon is encountered at approximately 415 Hz, which is visible in Figure 19.22(b), where the second MIF dips in between two dips in MIF 1. This characteristic look, where the dip in MIF 2 coincides with the peak in MIF 1 between the two dips, is referred to as the eigenvalue crossover effect and is not an indication of an extra mode where MIF 2 dips; there are only two modes in this frequency range, coinciding with the dips in MIF 1. The MIFs in Figure 19.22(a) show that the FRFs are good, and that the modes are close to normal modes, that is, with real-valued mode shapes. This is seen by the fact that each dip in the first MIF is narrow and reaches close to zero.

Another important assessment, in the case of multiple-reference data, is to validate that the data show reciprocity, as this is very important in order for multiple-reference parameter estimation techniques to perform well. A reason for data to show bad reciprocity may be that the calibration factors used for the sensors are inaccurate. Other reasons may be that the points are badly defined, so that the force impacts have hit a slightly different point than the accelerometer position, which may easily happen on more complicated structures. Yet another reason could, of course, be that the structure exhibits nonlinearity. Reciprocity plots in magnitude and phase of the FRFs H17 (f ) and H71 (f ) overlaid are shown in Figure 19.23. The y-axis is shown in linear scale, which is rare for FRFs, but is done since a logarithmic y-axis tends to hide small differences. In the figure, it may be seen that the reciprocity is within approximately 3 %, which is close to perfect since the accelerometers used have an uncertainty of ±5 %, so we move on to the next test.

It is always advisable to check the driving point FRFs, i.e., the FRFs where force and response are in the same DOF. This is especially important for shaker tests, as described in Section 13.12.2. In Figure 19.23(b), the driving point FRF in DOF 1 of the plate is shown in log magnitude. A driving point FRF must exhibit an antiresonance between each pair of peaks, for reasons we explained in Section 6.4.6. A better plot format is to look at the imaginary part, as shown in Figure 19.23(d), where all peaks should point in the same direction, i.e., they should all be positive peaks as in the figure, or they should all be negative dips. The most common reason for some peaks to point in a different direction is an erroneously attached force sensor when applying shaker excitation. The driving point FRF can also be used to check the orientation of the sensors.
If the force sensor and the response accelerometer both point in the same direction, then the phase of the FRF at resonance for an accelerance is +90 degrees, which means that the imaginary part is positive, as is the case in the figure. If the sensors are directed in opposite directions, then an accelerance should show a phase of –90 degrees, i.e., the imaginary part should be negative. The reason for this is that for an SDOF system (which a mode corresponds to), the receptance has a phase relationship between force and displacement of –90 degrees at resonance.



Figure 19.23 Further quality checks of the impact test data. In (a) and (c), the reciprocity is shown in magnitude and phase by plots of the two FRFs H17 (f ) and H71 (f ) overlaid. This indicates, among other things, that the calibration factors of both response sensors are accurate; the deviation between the two functions is within 5 % which is the accuracy of the sensors. In (b) the magnitude of the driving point FRF H11 (f ) is plotted, and in (d), the imaginary part of the same.

Differentiating the displacement twice to obtain acceleration over force means multiplication by −𝜔2, thus changing the phase to +90 degrees. See Section 5.2 for more on this.

19.8.2 EMA Modal Parameter Extraction, MPE
We now use the data from the impact test described in Section 19.7, assuming we have assessed the quality and consistency of the measured frequency responses as described in Section 19.8.1. We will show how to estimate the poles and modal participation factors, MPFs, by use of the modified multiple-reference Ibrahim time domain method, MMITD, a time domain EMA MPE method which was described in Section 16.7.7. Once the poles and MPFs are known, the frequency domain least squares method, LSFD, is used for estimating the mode shapes. The process is very similar for most commonly used methods for EMA MPE, except perhaps CMIF, which will be demonstrated in Section 19.9.4.

The first step in the MPE process is to select the frequency range to use for the estimation. This may be done with various background functions, although a plot of all FRFs, or, if there are too many, of all FRFs for one of the references, is preferred.


Figure 19.24 Plot for selecting frequency range for modal parameter estimation. This plot is also useful to get a feeling for how consistent data are.

This type of plot, shown in Figure 19.24, is useful not only for selecting the frequency range; it also gives an impression of the consistency of the data. After the frequencies are selected, the impulse responses are calculated as described in Section 16.7.1. These are then used to define the Hankel matrix, after which the poles are extracted by the procedure in Section 16.7.7. This is typically done for every model order from 1 to a selected upper limit; in our case, we choose 40 as the highest model order. All pole estimates are then presented in a stabilization diagram as in Figure 19.25. Unstable pole estimates are plotted as circles, whereas stable estimates are plotted with plus signs.

In the diagram, it may be seen that all modes stabilize well. For some of the modes, after the poles have been stable for a while, they split into several false pole estimates, but the poles may then be selected from the range where there is a single, stable pole estimate for each model order. It is usually not critical where one selects the poles, as long as they are selected where there are no other estimates really close. It should be noted, however, that the pole estimates of the last mode stabilize first at one frequency and, around iteration step 22, shift to a slightly higher frequency. In this case, it is necessary to see where the mode indicator has its minimum, which appears to be at the higher of the two frequencies of stabilization.

In Figure 19.26, the selected poles are marked by the MATLAB “data tip” feature used in the ABRAVIBE toolbox. On the right-hand side of the screen, there is a table of the selected poles, where the undamped frequency and damping estimates are displayed. It is good practice to investigate how much the damping estimates vary for each pole along the stable model orders (in the plot called “Iteration Step”). This may also be evaluated using a pole cluster diagram as shown in Figure 16.4.

Once the poles and the modal participation vectors for all modes are available, the mode shapes may be estimated. This is done by the (in our case multiple-reference) least squares

Figure 19.25 Stabilization diagram from parameter estimation with the MMITD method.

Figure 19.26 Stabilization diagram with selected poles marked and with a pole table visible on the right. The pole table lists undamped frequencies of 146.1, 147.8, 332.8, 409.9, 421.8, 519.0, 608.5, 746.2, 830.0, and 993.6 Hz, with corresponding damping estimates of 3.12, 2.84, 2.59, 2.45, 2.35, 2.28, 2.22, 2.33, 2.19, and 2.08.

frequency domain (LSFD) method, as described in Section 16.9.2. LSFD solves for the best fit of the measured FRFs for each response DOF, and all (both, in our case) references, using the poles and modal participation factors. Most software for EMA allows the user to view this fit for each response DOF as it is processed, or rather, since today’s computers are very fast, each result is displayed for one or a few seconds so the user may observe it. This is time well spent, as it gives good insight into the quality of the modal model.
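For orientation, the FRF synthesis that such a fit is compared against is the standard modal superposition over the estimated poles and residues. The sketch below is a generic illustration for one response/reference pair, not the LSFD or ABRAVIBE implementation, and the variable names p (poles) and Res (residues) are assumptions.

% Sketch of synthesizing an accelerance FRF from poles and residues (assumptions:
% p contains one complex pole per mode and Res the corresponding complex residues
% for one response/reference pair; names are illustrative only).
f  = (0:0.5:1200)';                 % frequency axis [Hz]
jw = 1i*2*pi*f;                     % j*omega
Hd = zeros(size(jw));               % receptance (displacement over force)
for r = 1:length(p)
    Hd = Hd + Res(r)./(jw - p(r)) + conj(Res(r))./(jw - conj(p(r)));
end
Ha = jw.^2.*Hd;                     % accelerance: multiply by (j*omega)^2
semilogy(f, abs(Ha))
xlabel('Frequency [Hz]'), ylabel('Accelerance [(m/s^2)/N]')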


Figure 19.27 Plot of measured frequency response of an arbitrary DOF (29), overlaid by the result of the modal model after obtaining the mode shape coefficients for this DOF. The two plots show the FRFs with both reference DOFs.

In Figure 19.27, the measured frequency responses of an arbitrary DOF (29) are shown in overlay plots with the synthesized FRFs of this response DOF with each of the references. The fact that the model agrees well with the measured data is perhaps the best verification of a reliable result. As can be seen in this figure, there is good agreement around all natural frequencies, which is most important. The slight disagreement at the first antiresonance frequency in the upper plot, and in the valleys between some of the modes in the lower plot, is an almost inevitable effect of the logarithmic scale, and of the fact that very small differences in the residue values create large changes in the location of the antiresonances. The fit in Figure 19.27 is thus very satisfactory.

Once the modal model is complete after obtaining the mode shapes, it is time to investigate whether the results are satisfactory. As mentioned above, this is already to some extent done by observing the fits of the model when calculating the mode shapes. It is, however, also common to plot the auto-MAC matrix of the obtained mode shapes, as this may reveal some issues that could arise. In Figure 19.28, the auto-MAC matrix of the mode shapes obtained by the LSFD fit is shown in a so-called Manhattan display. Usually this is in color, making it a little easier to read the values. The important thing to look for here is that there are no high off-diagonal values, which may be an indication that some mode shapes have been erroneously estimated. This sometimes happens, especially for closely spaced modes.


Figure 19.28 Auto-MAC matrix in a Manhattan display, showing that there is very little similarity between the modes, which is also evidence that the two closely spaced modes are well separated.

In Figure 19.28, this is not the case, as all modes are well separated. The diagonal is, of course, equal to unity, since for an auto-MAC it equals the similarity of each mode with itself.

The final step in the quality assessment is usually to animate the mode shapes, although this does not translate well to a book. But since we have mode shapes from an FE model, we can calculate a cross-MAC matrix and compare the mode shapes to the theoretical ones. This is shown in Figure 19.29, where it may be seen that the first eight mode shapes are very similar, and the last two of the mode shapes are satisfactorily similar, with MAC values above 0.9. In the figure, it is also seen that the two first modes are swapped, as we also noticed for the ODS results in Section 19.6.1. Perhaps it should be noted that the FE mode shapes are by no means “true” (and neither are the experimental ones, of course), so the MAC values here must not be interpreted as any absolute indication of the quality of the EMA results. We will compare the mode shapes for various EMA methods in Section 19.9. Perhaps it should also be mentioned that while the mode shapes from the FE model are typically real-valued, the experimental mode shapes are typically computed as complex-valued. The latter, if computed correctly, are in most cases real enough to produce meaningful MAC values. This is the case for the mode shapes compared in Figure 19.29; a minimal sketch of computing MAC matrices is given at the end of this section.

When animating the mode shapes, there are a couple of things to particularly look out for. We have not mentioned it here, but the mode shapes that were computed are complex mode shapes. This is an option, as mentioned in Section 16.9.2, but is strongly recommended, as it allows some quality assessment. As mentioned in several places in this book, we usually expect mode shapes of most structures to be real-valued.


Figure 19.29 Cross-MAC matrix between the experimental mode shapes and the mode shapes of the finite element model displayed in Figure 19.2, showing that there is very good agreement between modes up to mode 8, and reasonable agreement between the last two mode pairs (> 0.9). It may also be observed that the first two modes are swapped as we also found for the multiple-reference ODS example in Section 19.6.2.

Calculating the mode shapes as complex and observing that they come out as real-valued is therefore a quality measure, especially as small errors in the natural frequency may produce complex mode shapes. There are in the literature a number of suggested quality measures, for example, mode complexity. The interested reader is referred to the more specialized textbooks referenced at the beginning of Chapter 16 for more information.
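The MAC computation itself is compact. The following is a minimal sketch (not the ABRAVIBE implementation), saved as a function file, where Psi1 and Psi2 are assumed to be D-by-m matrices with one mode shape per column; an auto-MAC is obtained by passing the same matrix twice.

% Sketch of a (cross-)MAC matrix computation; works for complex mode shapes.
function M = macmatrix(Psi1, Psi2)
    m1 = size(Psi1,2);
    m2 = size(Psi2,2);
    M  = zeros(m1, m2);
    for i = 1:m1
        for j = 1:m2
            M(i,j) = abs(Psi1(:,i)'*Psi2(:,j))^2 / ...
                     (norm(Psi1(:,i))^2 * norm(Psi2(:,j))^2);
        end
    end
end

With assumed shape matrices PsiEMA and PsiFEM, macmatrix(PsiEMA, PsiFEM) would produce a cross-MAC matrix of the kind shown in Figure 19.29, and macmatrix(PsiEMA, PsiEMA) an auto-MAC matrix of the kind shown in Figure 19.28.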

19.9 Methods for EMA Modal Parameter Estimation, MPE
In this section, we will illustrate the use of different methods for modal parameter estimation, MPE. As described in Chapter 16, the methods may be divided into time domain and frequency domain methods, and within each of these there are high-order and low-order methods, named after the order of the matrix polynomial used. We will start by looking at some special considerations that apply to time domain methods, regardless of whether they are high- or low-order methods. Then we will present the high-order methods in both time and frequency domain, after which we do the same thing for the low-order methods. After this, we present the complex mode indicator function, which is referred to as a zero-order method, as it does not use any coefficient matrix. We refer the reader to Table 16.1 on page 497 for a summary of the classification of different MPE methods.


19.9.1 Time Domain Variable Settings
The time domain MPE algorithms have a number of input parameters. What may be varied is typically
● The frequency range to be used to calculate the impulse responses.
● The maximum model order to be used.
● Whether to use data reduction before calculating impulse responses, or not.
● How many of the first lines (time lags) in the impulse response to discard (because they include transient response, see Section 16.5.1).
● How many lines of the impulse responses to use for the parameter estimation.
● If matrix normalization should be done for the highest-order coefficient, or the lowest.

Of all these variables, we will omit the first two, as the frequency range is selected from approximately 100 to 1050 Hz for all methods to make the comparisons relevant, and the maximum model order is kept constant at 40. These turned out to be reasonable choices, and it would take up too much space to also include variations of these settings. We will exemplify the last four choices in the bullet list using the polyreference time domain method, PTD, although most of the parameters apply to any time domain MPE method. Many of these variables are not immediately available in most commercial software for EMA, but sometimes they may be possible to set. In MATLAB, for example, with the ABRAVIBE toolbox, it is possible, even necessary, to select all parameters. As we will see, the modal parameter estimates may vary depending on these settings, which is why it is often interesting to investigate their effect, as we will discuss in Section 19.10.

Time domain EMA MPE is based on impulse responses, usually calculated after the user selects a frequency band for the estimation. In Figure 19.30, we show a plot of a driving point impulse response function at the first DOF of the Plexiglas plate. In panel (a), the entire time of the function is shown, and although a little hard to see in the plot, at the very right it is possible to see that there is some leakage, i.e., the function starts to rise in an oscillating manner. This is common and due to the cyclicity of the discrete Fourier transform, which means that the first lines of the impulse response are also affected. We do not use the first few values of the impulse responses, however (see Chapter 16), since the impulse response does not exhibit free decay at the first few lags, so the leakage is usually not a significant issue. The leakage, which was in the early days of frequency estimation called wrap-around error, may be avoided by using zero-padding, as recommended by Bendat and Piersol (2010). But for EMA, the effect is small, since the typical signals we use for excitation are either transient, which means they include “natural zero-padding,” or they are periodic signals, as recommended in Chapters 13 and 14, for which zero-padding should not be used since these signals are “perfect” in the cyclic, discrete Fourier transform.

In Figure 19.30(b), we show a smaller portion of the start of the impulse response, with the x-axis scaled in sample number (which we also refer to as ‘lines’). This is done because an important setting for time domain MPE algorithms is to choose which portion of the impulse response is used for the parameter estimation. As we know from Chapter 13, the higher the time values in the impulse response, the more significant the errors will be. On the other hand, typically for parameter estimation, or statistics, the more values we use to calculate the parameter estimates, the smaller the variance of the estimates.



Figure 19.30 Plot of impulse response function for the driving point accelerance in DOF 1 of the Plexiglas plate. In (a) the entire time data, where it may be seen that there is some leakage at the end of the record, due to the periodicity of the discrete Fourier transform (not very apparent, but can be seen at the very end at 0.8 s). In (b) the time axis is zoomed in, and shown in sample number, to show the implication of choosing a certain number of lines to use for parameter estimation, see text for discussion.

So for EMA MPE, there is a trade-off between these two opposing considerations, and therefore there is an optimum number of lines to be used for the parameter estimation. This number is, of course, dependent on the estimated impulse responses, and therefore it is often worth investigating what effect a change in the number of lines has on the modal parameter estimates. We will investigate this below.

As described in Chapter 16, most modal parameter estimation methods include some sort of data reduction. Many algorithms have this built in, but the polyreference time domain method that we are using in this section does not. Therefore, we start by illustrating and applying the type of data reduction described in Section 16.5.4 on the data for the Plexiglas plate. The reduction is done on the frequency response functions prior to calculating the impulse responses. In Figure 19.31, the stabilization diagram using PTD without data reduction is shown, to be compared with the stabilization diagram in Figure 19.32, where data reduction was applied before calculating the modal parameter estimates. In both cases, we chose to use 100 lines, starting from line 11 of the impulse responses to avoid the transient first part. Although the differences between the stabilization diagrams in Figures 19.31 and 19.32 are small, the latter shows fewer computational poles, for example, around the mode at approximately 330 Hz. The data reduction removes some noise and should make the estimates better.

The next thing we investigate is what difference in the pole estimates is obtained by changing the number of lines used for the MPE from 100 to 200. We thus keep the data reduction and apply 200 lines to the parameter estimation, which produces the stabilization diagram in Figure 19.33. The differences do not appear obvious, but if we zoom in on the first two modes around 150 Hz, shown in Figure 19.35(b) with 100 lines and in Figure 19.35(c) with 200 lines, we can see that there are more stable estimates for both modes when 200 lines are used.


Figure 19.31 Stabilization diagram for PTD with 100 lines, starting from line 11, and no FRF enhancement, high matrix normalization.


Figure 19.32 Stabilization diagram for PTD with 100 lines, starting from line 11, and with FRF enhancement, high matrix normalization.


Figure 19.33 Stabilization diagram for PTD with 200 lines, starting from line 11, and FRF enhancement, high matrix normalization.


Figure 19.34 Stabilization diagram for PTD with 200 lines, starting from line 11, and FRF enhancement, low matrix normalization.



Figure 19.35 Stabilization diagram for PTD around the first two modes with the four settings from Figures 19.31–19.34. In (a) 100 lines with no FRF enhancement, in (b) 100 lines with FRF enhancement, in (c) 200 lines, and in (d) 200 lines with low matrix normalization.

with 200 lines, we can see that there are more stable estimates for both modes in the latter case. Thus, for these data, there is a slight improvement using 200 instead of 100 lines. The last parameter we investigate is the matrix polynomial normalization that was described in Section 16.5.3. All the previous examples used high-order normalization. In Figure 19.34, we show the stabilization diagram using 200 lines as in the previous figure, but with low-order normalization. It can be seen that low-order normalization does not improve the results in this case. This is also obvious in Figure 19.35(d), where it is seen that low-order normalization produces many more computational poles.
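To make the stabilization diagrams used throughout this section more concrete, the following is a minimal Python sketch of how poles from successive model orders can be compared and flagged as stable. The tolerances (1 % on frequency, 5 % on damping) and the function name are illustrative assumptions only; they are not the settings of the software used to produce the figures.

```python
import numpy as np

def stabilization_flags(poles_by_order, f_tol=0.01, z_tol=0.05):
    """Flag a pole as stable if a pole at the previous model order matches
    within f_tol (relative frequency) and z_tol (relative damping).
    poles_by_order: list of 1D complex arrays of continuous-time poles,
    one array per model order (iteration step)."""
    flags = []
    for step, poles in enumerate(poles_by_order):
        fn = np.abs(poles) / (2 * np.pi)        # undamped natural frequencies [Hz]
        zeta = -np.real(poles) / np.abs(poles)  # relative damping ratios
        stable = np.zeros(len(poles), dtype=bool)
        if step > 0:
            prev = poles_by_order[step - 1]
            fn_p = np.abs(prev) / (2 * np.pi)
            zeta_p = -np.real(prev) / np.abs(prev)
            for k in range(len(poles)):
                df = np.abs(fn[k] - fn_p) / fn[k]
                dz = np.abs(zeta[k] - zeta_p) / zeta[k]
                stable[k] = np.any((df < f_tol) & (dz < z_tol))
        flags.append(stable)
    return flags
```

In a stabilization diagram plot, the flagged poles are then drawn with different markers ("Stable"/"Unstable") at each iteration step, as in Figures 19.31–19.34.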

19.9.2 High-Order Methods for EMA MPE

Now that we have investigated some variations of the settings for time domain MPE, we will apply some different algorithms and compare the results. In this section, we will compare two high-order methods (not to be confused with the choice of matrix polynomial normalization method), namely one time domain method, the multiple-reference Ibrahim time domain method, MMITD, and one frequency domain method, the least squares complex frequency method, LSCF.



Figure 19.36 Stabilization diagram for MMITD using 50 lines starting at line 11, and maximum model order of 40 modes.

We start with MMITD, which we also used above to introduce EMA MPE. The stabilization diagram of applying MMITD to the Plexiglas data, using 50 lines starting at line 11, and a maximum model order of 40 modes, is shown in Figure 19.36. MMITD usually performs best with fewer lines than are needed by, for example, the PTD method (for which we used 100 lines above). A total of 50 lines was thus selected for MMITD after testing a few different lengths. The MMITD method uses SVD for data reduction, as described in Section 16.7.7, and therefore we did not apply any data reduction prior to calculating the impulse responses. Furthermore, MMITD calculates poles and modal participation factors. We will focus on the pole estimates here and present the mode shape estimation for all methods in Section 19.9.5. As may be seen in the stabilization diagram in Figure 19.36, all modes stabilize nicely with MMITD. The undamped natural frequency estimates are tabulated in Table 19.2, and the relative damping factors are found in Table 19.3. The tables also include the results of the PTD method from Section 19.9.1, for the case of 200 lines, data reduction, and high-order matrix polynomial normalization. Next, we try the least squares complex frequency (LSCF) method. The stabilization diagram for this is found in Figure 19.37, and the frequency and damping estimates are found in Tables 19.2 and 19.3. In Figure 19.37, it may first be seen that there are fewer estimates than for MMITD. This is typical for LSCF, because it only gives estimates for model orders that are multiples of the number of references, so even though we have applied a highest model order of 50, it produces only 25 iteration steps. The stabilization diagram is relatively clean, and all modes stabilize nicely. The frequency and damping estimates agree well with both PTD and MMITD.
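Whichever algorithm produces the poles, the tabulated undamped natural frequencies and relative damping ratios follow from the poles in the same way. The sketch below (a hypothetical Python helper, not part of any particular toolbox) shows the conversion, including the step from a discrete-time pole, as produced by time domain methods operating on sampled impulse responses, to a continuous-time pole.

```python
import numpy as np

def pole_to_modal(z_pole, fs):
    """Convert a discrete-time pole z (from a time domain MPE method run at
    sampling frequency fs) to undamped natural frequency [Hz] and relative
    damping ratio [-]."""
    s = np.log(z_pole) * fs        # continuous-time pole, s = sigma + j*omega_d
    wn = np.abs(s)                 # undamped natural (angular) frequency
    fn = wn / (2 * np.pi)
    zeta = -np.real(s) / wn        # relative damping ratio
    return fn, zeta

# Example: a pole corresponding to fn = 145 Hz, zeta = 3 %, sampled at 5000 Hz
fs = 5000.0
wn = 2 * np.pi * 145.0
s = -0.03 * wn + 1j * wn * np.sqrt(1 - 0.03**2)
print(pole_to_modal(np.exp(s / fs), fs))   # approximately (145.0, 0.03)
```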


Table 19.2 Table with undamped natural frequency estimates from EMA of the Plexiglas plate, in Hz, from parameter estimation using six different methods.

Mode #   PTD     MMITD   LSCF    MITD    FDPIz   CMIF
1        144.9   145.0   146.0   145.7   145.6   145.4
2        146.7   146.7   147.9   147.4   147.2   147.2
3        331.9   332.0   333.1   332.8   332.5   332.7
4        409.2   409.1   410.3   409.8   408.9   409.9
5        421.1   421.1   422.2   421.8   422.1   421.5
6        518.6   518.7   519.7   519.4   519.3   519.3
7        608.3   608.1   609.4   608.9   608.4   608.4
8        745.2   745.3   746.2   746.1   745.0   745.9
9        830.8   830.8   831.7   831.6   831.6   831.7
10       993.6   993.2   994.6   994.1   993.2   993.6

See text for details and Section 19.10 for conclusions.

Table 19.3 Table with relative damping ratio estimates from EMA of the Plexiglas plate, in %, from parameter estimation using six different methods.

Mode #   PTD    MMITD   LSCF   MITD   FDPIz   CMIF
1        3.1    3.0     3.0    3.1    3.1     3.1
2        2.9    2.9     2.9    2.9    2.8     2.9
3        2.6    2.6     2.6    2.6    2.7     2.6
4        2.5    2.4     2.4    2.4    2.7     2.8
5        2.4    2.4     2.4    2.4    2.5     2.5
6        2.3    2.3     2.3    2.3    2.3     2.3
7        2.3    2.4     2.3    2.4    2.2     2.5
8        2.2    2.3     2.2    2.3    2.3     2.4
9        2.1    2.2     2.2    2.2    2.2     2.3
10       2.2    2.1     2.1    2.1    2.1     2.3

See text for details and Section 19.10 for conclusions.

19.9.3 Low-Order Methods for EMA MPE

We now apply two low-order methods for MPE, namely the multiple-reference Ibrahim time domain method (MITD) and the frequency domain direct parameter z-domain (FDPIz) method. As described in Sections 16.7.3 and 16.8.3, low-order methods estimate poles and mode shapes in one step, but we will postpone the comparison of mode shapes until Section 19.9.5.

Figure 19.37 Stabilization diagram for LSCF using a maximum model order of 50.


Figure 19.38 Stabilization diagram for MITD with 100 lines starting with line 11 and a maximum model order of 40 modes.

The stabilization diagram of MITD applied to the Plexiglas plate data is shown in Figure 19.38, and the frequencies and damping ratios in column five of Tables 19.2 and 19.3, respectively. In this case, 100 lines were used and, as for the MMITD method above, no data reduction was applied, since the MITD algorithm includes an SVD for data compression. As seen in Figure 19.38, the stabilization diagram is clean, and it may also be seen that the poles stabilize at


Figure 19.39 Stabilization diagram for FDPIz with a maximum model order of 35.

low model orders. The natural frequency and damping ratio estimates are also consistent with those of the other methods. Finally, the FDPIz method is applied to the data; the stabilization diagram is shown in Figure 19.39, and the frequency and damping ratio estimates are found in column six of Tables 19.2 and 19.3, respectively. This method also performs well, with stable modes, and the frequency and damping ratios agree well with the other methods.

19.9.4 The Complex Mode Indicator Function, CMIF

CMIF is a method which works very differently from the other methods presented here, as was described in Section 16.8.4. It is often referred to as a zero-order method, since it does not utilize any matrix polynomial, but rather uses the spatial information to extract the complex mode indicator functions, CMIFs. In Figure 19.40, some plots that describe the process are presented. For the MPE, we selected 111 frequency values to fit the enhanced FRF (see Section 16.8.4). With the implementation we use here, the CMIF method is more interactive than the other methods, although in principle it could be automated to a level similar to the other methods. The first plot, in Figure 19.40(a), shows an overlay plot of the two CMIFs we obtain since we have two references. We start by using the first of these, as plotted in panel (b). By selecting a peak in the first CMIF, the enhanced FRF is calculated, and from this the frequency, damping, and unscaled mode shape are estimated, and a fit plot as in Figure 19.40(c) is shown together with the frequency and damping estimates. The user can then select each peak in the first CMIF, until all modes with a peak in this CMIF, in this case modes 1, 3, and 5–10, are estimated, as shown by vertical lines in panel (d). Thereafter the second CMIF is displayed as in panel (e), and the user selects



Figure 19.40 Plots for CMIF extraction of poles and mode shapes. In (a), a plot of the two CMIFs; in (b), the first CMIF, which is used for extraction of modal parameters in the first round; in (c), a plot of the curve fit of the first mode, where both pole and mode shape are extracted using an SDOF method (see text); in (d), all selected modes are marked with a vertical line; in (e), the second CMIF is plotted, which is used to extract the two modes that are marked by vertical lines in (f).

peaks in this CMIF in the same manner as for the first CMIF, in this case giving modes 2 and 4, which are close to modes 1 and 5, respectively, as shown in Figure 19.40(f). The frequency and damping estimates that were extracted are shown in column seven of Tables 19.2 and 19.3, respectively. It is clear that CMIF produces results similar to those of the more numerically advanced methods, perhaps with a little higher uncertainty in the damping estimates, as seen in Table 19.3.
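The CMIFs themselves are simple to compute: they are the singular values of the FRF matrix, evaluated frequency line by frequency line. The following minimal Python sketch illustrates that step (the array layout is an assumption, and it presumes at least as many responses as references); the interactive peak selection and the SDOF fit of the enhanced FRF are not included.

```python
import numpy as np

def cmif(H):
    """Compute complex mode indicator functions from an FRF matrix.
    H: complex array with shape (n_freq, n_responses, n_references).
    Returns an array (n_freq, n_references) with the singular values,
    i.e., one CMIF per reference, ordered largest first."""
    n_freq, _, n_ref = H.shape
    c = np.zeros((n_freq, n_ref))
    for k in range(n_freq):
        # Singular values of the (n_responses x n_references) FRF matrix at line k
        c[k, :] = np.linalg.svd(H[k], compute_uv=False)
    return c

# Usage (hypothetical data): cmifs = cmif(H); plot each column versus frequency
```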


19.9.5 Calculating Scaled Mode Shapes

After the poles are calculated with the high- and low-order methods in Sections 19.9.2 and 19.9.3, we now have the following results:

● From PTD and MMITD: poles and modal participation factors.
● From LSCF: poles (and unreliable modal participation factors).
● From MITD: poles and unscaled mode shapes.
● From FDPIz: poles and unscaled mode shapes.
● From CMIF: poles and unscaled mode shapes.

In order to obtain a scaled modal model for each of the methods, we therefore need to apply different methods, as explained in Section 16.9. The methods we will apply here are:

● For the PTD and MMITD results, we will apply the multiple-reference least squares frequency domain (LSFD) method described in Section 16.9.2.
● For the LSCF results, we will apply the LSFD method without modal participation factors, described in Section 16.9.3. This is done because the modal participation factors from the LSCF method are known to be unstable.
● For the MITD, FDPIz, and CMIF results, we will scale the modal models using a single FRF as described in Section 16.9.5.

We start by applying the multiple-reference LSFD method, using the poles and modal participation factors from the PTD method. This gives us scaled mode shapes that are used to plot synthesized FRFs overlaid with the measured ones, with both references, as they are processed. We plot an arbitrary such plot, for response in DOF 29 and force in DOF 1, in Figure 19.41(a), where it may be seen that the modal model fits the measured FRFs very well. During the process of fitting, a plot showing the FRFs of DOF 29 with both references is displayed, as in Figure 19.27, but for space reasons, we only plot the function with respect to the first reference here. The two fits are very similar for all the results we present here. We then do the same with the poles and modal participation factors from MMITD. The synthesized and measured FRFs are plotted overlaid as they are processed. We plot an arbitrary such plot, for response DOF 29, in Figure 19.41(b), where it may be seen that the modal model fits the measured FRFs very well. Next, we apply the alternative LSFD method, without using the modal participation factors, for the poles from LSCF, as this method does not produce reliable MPFs. The synthesized plot of this is found in Figure 19.41(c). Also, this fit is good. In Figure 19.41(d), the synthesized result of the scaled mode shapes using the poles and mode shapes of MITD is shown overlaid with the measured FRF, and again, this fit is good. You should note that to produce this result, the driving point FRF in DOF 1 was used for the modal scaling, but then the FRF for the response in DOF 29 with force in DOF 1 was synthesized, as plotted in the figure. In Figure 19.41(e), we present the synthesized FRF results of the scaled model using the poles and mode shapes from the FDPIz method, which fit the measured FRFs well. Finally, we do the same for the poles and mode shapes from the CMIF method, which are found in Figure 19.41(f). Again, the synthesized FRFs fit the measured ones well, although perhaps a little worse than for the other methods.
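The synthesized FRFs in Figure 19.41 are, in principle, computed from the scaled modal model as a sum of pole/residue terms. The sketch below illustrates that synthesis in Python, under the assumption that the residues have already been formed from the scaled mode shapes and modal scaling constants; it is an illustration of the principle, not the LSFD implementation used for the figures. For accelerance, the synthesized function is additionally multiplied by (jω)².

```python
import numpy as np

def synthesize_frf(f, poles, residues):
    """Synthesize an FRF from a modal model.
    f: frequency axis [Hz]; poles: complex continuous-time poles lambda_r;
    residues: complex residues A_r for the chosen response/reference pair,
    e.g. A_r formed from the scaled mode shape coefficients of mode r."""
    jw = 2j * np.pi * np.asarray(f)
    H = np.zeros(len(f), dtype=complex)
    for lam, A in zip(poles, residues):
        # Each mode contributes one term plus its complex conjugate term
        H += A / (jw - lam) + np.conj(A) / (jw - np.conj(lam))
    return H
```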


Figure 19.41 Example of synthesized FRFs overlaid with the measured for DOF 29, for (a) PTD, (b) MMITD, (c) LSCF, (d) MITD, (e) FDPIz, and (f) CMIF.

Now that we have mode shapes from all methods, we can compare them using the MAC matrix (see Section 16.10.2). Since we established in Section 19.8 that the mode shapes obtained by the MMITD and multiple-reference LSFD methods were very similar for the first eight modes, we may use these as reference mode shapes. An interesting comparison is to use the poles of the MMITD method and compute the mode shapes with the LSFD method without using the modal participation factors, and then compare these mode shapes with the reference mode shapes, which use the same poles but also the modal participation factors obtained with the MMITD method.



Figure 19.42 Cross-MAC matrices between the mode shapes obtained by the MMITD and LSFD methods, using the modal participation factors from MMITD, and in (a) PTD and LSFD with MPFs, (b) MMITD and LSFD without MPFs, (c) LSCF and LSFD without MPFs, (d) MITD (scaling does not affect MAC), (e) FDPIz, and (f) CMIF. Colorbar can be seen in Figure 19.29.

In Figure 19.42, these MAC matrices are shown. For better readability, the diagonal values, i.e., the similarity between each mode shape and the corresponding mode shape of the reference mode set (MMITD plus LSFD with MPFs), are tabulated in Table 19.4. Here, it may be seen that the mode shapes from PTD, MITD, and FDPIz agree very well with those of MMITD. This is despite the fact that the two latter methods produce poles and mode shapes in one step, whereas the mode shapes for PTD, as for MMITD, are obtained by the LSFD method using the MPFs from the execution of PTD. It may also be seen that for the first three modes, the CMIF method produces mode shapes very similar to those of the other methods. It is likely that the CMIF estimates could be better for higher modes with some adjustment of the curve fitting of the enhanced CMIFs.
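As a reminder of what the cross-MAC values in Figure 19.42 and Table 19.4 measure, the following Python sketch computes the MAC matrix between two mode shape sets; the array layout is an assumption.

```python
import numpy as np

def mac_matrix(Phi_a, Phi_b):
    """Cross-MAC between two mode shape sets.
    Phi_a, Phi_b: complex arrays with shape (n_dofs, n_modes)."""
    n_a, n_b = Phi_a.shape[1], Phi_b.shape[1]
    M = np.zeros((n_a, n_b))
    for i in range(n_a):
        for j in range(n_b):
            # np.vdot conjugates its first argument, as required for MAC
            num = np.abs(np.vdot(Phi_a[:, i], Phi_b[:, j]))**2
            den = (np.vdot(Phi_a[:, i], Phi_a[:, i]).real *
                   np.vdot(Phi_b[:, j], Phi_b[:, j]).real)
            M[i, j] = num / den
    return M
```

The same function gives an auto-MAC matrix when called with the same mode shape set for both arguments, as used later for the bridge example.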


Table 19.4 Table with cross-MAC values between mode shapes obtained by the different methods and the reference mode shapes obtained by MMITD with modal participation factors and the multiple-reference LSFD method from Section 16.9.2. MMITD2 denotes the alternative mode shapes obtained using the poles of the MMITD parameter estimation from Table 19.2 and mode shapes obtained by the LSFD method without MPFs, similar to what is used for LSCF.

Mode #   PTD     MMITD2   LSCF    MITD    FDPIz   CMIF
1        1.000   0.974    0.996   0.992   0.997   0.998
2        0.999   0.989    0.964   0.998   0.998   0.989
3        1.000   1.000    1.000   1.000   0.998   0.995
4        0.999   0.991    0.994   0.997   0.993   0.896
5        1.000   0.998    0.996   0.999   0.997   0.943
6        1.000   1.000    1.000   0.999   0.998   0.990
7        0.999   0.999    0.999   0.999   0.999   0.932
8        0.997   0.997    0.996   0.999   0.993   0.952
9        0.996   0.997    0.997   0.998   0.991   0.950
10       0.959   0.986    0.984   0.990   0.972   0.886

Another observation from Table 19.4 is that the mode shapes obtained by the LSFD method without using MPFs are not as accurate as those of the other methods for the first two modes, the closely spaced modes. Although the differences are small, they illustrate that no method stands out as much better than any other. This will be discussed further in Section 19.10.

19.10 Conclusions of EMA MPE

What are the conclusions of the results in Sections 19.8 and 19.9? The data used for the examples here are clean and the structure is simple. We have seen that even in an ideal case like this, the modal parameters have some, albeit small, variations. This is the nature of parameter estimation. Damping, especially, is a difficult property to determine accurately. However, the main conclusion is that despite the different nature of the various algorithms, they all result in similar modal parameters. The variation may, in fact, be attributed to the uncertainties in modal parameter estimation. Since the modal parameters are known to have this uncertainty, it is good practice to always investigate how sensitive the parameters are to changes in the estimation procedure. What about a slightly different frequency range? Other settings (number of lines, model order, etc.)? What about another estimation method? All these things should be investigated to form an understanding of how certain, or uncertain, the estimated parameters are. What can we expect from more difficult structures, where it is not possible to obtain data of the measurement quality we have experienced for the Plexiglas plate? It is sometimes argued that some methods for MPE are better suited for such data than others. However, as stated elsewhere in this book, we have to consider the implication of bad measurement


quality on the basis of our parameter estimation: the frequency response function estimates. In cases where the coherence is not unity, the FRFs are biased; in fact, they are distorted. With frequency domain methods, there is then the possibility to use only those frequencies for which the coherence has high values. But other than that, parameter estimation reliability is limited when data are not of good quality. So the most important factor for obtaining reliable modal parameters is good craftsmanship when it comes to suspending the structure and carrying out the measurements. The checklist in Appendix G is a good starting point. And remember: EMA is considered an art form. Learn to master it!
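The idea of restricting frequency domain parameter estimation to well-measured frequency lines can be illustrated with a small Python sketch; the coherence threshold of 0.95 is an arbitrary illustrative value, not a recommendation from this book.

```python
from scipy.signal import coherence

def good_frequency_lines(x, y, fs, nperseg=2048, threshold=0.95):
    """Return the frequency lines where the ordinary coherence between
    force x and response y exceeds a threshold (illustrative value only)."""
    f, coh = coherence(x, y, fs=fs, nperseg=nperseg)
    return f[coh > threshold]
```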

19.11 OMA Examples

Operational modal analysis (OMA) has seen large growth in recent years and is now a trusted analysis tool, especially for large civil engineering structures, but increasingly also for mechanical engineering applications. In this section, we will first introduce OMA on synthesized data using a model of the Plexiglas plate used above, as this gives an opportunity to assess the accuracy of OMA, since the data are known. On these known data, we will compare results of some of the different OMA MPE methods discussed in Chapters 16 and 17. We will then continue with an example of measurements on the same Plexiglas plate used in Sections 19.8 and 19.9. After this, we will look at two datasets from real structures: first, a suspension bridge measured by geophones, and second, a RO-LO (roll-on/lift-off) ship measured by accelerometers.

19.11.1 OMA Using Synthesized Data for Plexiglas Plate

Our first OMA example uses the finite element model described in Section 19.1. The undamped natural frequencies and mode shapes of the plate are combined with modal damping of 𝜁 = 0.025 (2.5 % damping), see Section 6.4.3, and the FFT-based method described in Section 19.2.4 is then used to calculate forced responses in all DOFs. For OMA, it is usually better to have distributed loads, so for this case, the plate was excited with uncorrelated forces in all DOFs. For the simulation, the sampling frequency was set to fs = 5000 Hz, and L = 1.5 ⋅ 10^6 samples of 35 uncorrelated forces were generated and applied, one in each DOF of the plate, to calculate displacement responses in all DOFs. This number of samples is rather large and was chosen to produce data with small deviation in the exponential decay of the envelope of the correlation function estimates, resulting in small variance in the damping estimates, as explained in Section 10.4.3. The first step after the time data were generated was to calculate the cross-correlation and cross-spectral matrices. This was done using all responses as references, i.e., computing the complete matrices. The unbiased cross-correlation matrix was computed by the long FFT method described in Section 10.4.1. The cross-spectral matrix was computed using Welch's method, a blocksize of 2048 samples, 50 % overlap, and a Hanning window. A typical correlation function (of DOF 1, i.e., one of the corner points) is shown in Figure 19.43(a), with the axis scaled in sample number to make it easier to assess which part of the correlation function is used for parameter estimation below. A typical spectral density of the same DOF is shown in Figure 19.43(b). Here it is seen that there are enough data to produce a very smooth PSD, i.e., one with very small variance.
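The cross-spectral matrix referred to above can be estimated with standard tools. The following Python sketch uses Welch's method with the blocksize, overlap, and window stated in the text; the array layout is an assumption, and the unbiased long-FFT correlation estimator of Section 10.4.1 is not reproduced here.

```python
import numpy as np
from scipy.signal import csd

def cross_spectral_matrix(Y, fs, nperseg=2048):
    """Estimate the full cross-spectral matrix of the responses using
    Welch's method (Hann window and 50 % overlap are the scipy defaults).
    Y: array with shape (n_samples, n_dofs).
    Returns the frequency axis f and G with shape (n_freq, n_dofs, n_dofs)."""
    n_dofs = Y.shape[1]
    f, _ = csd(Y[:, 0], Y[:, 0], fs=fs, nperseg=nperseg)
    G = np.zeros((len(f), n_dofs, n_dofs), dtype=complex)
    for p in range(n_dofs):
        for q in range(n_dofs):
            _, G[:, p, q] = csd(Y[:, p], Y[:, q], fs=fs, nperseg=nperseg)
    return f, G
```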



Figure 19.43 Correlation function for DOF 1 using synthesized data of the Plexiglas plate in (a), and the corresponding PSD in (b).

The cross-correlation matrix is now used with all references and the MITD method (Section 16.7.3), starting with lag 11 and using 30 lags. The maximum model order was set to 30 modes, and the matrix polynomial normalization was set to low (see Section 16.5.3). The number of lags and the maximum order were investigated by trial and error until a good result was found. The resulting stabilization diagram is shown in Figure 19.44.

Figure 19.44 Stabilization diagram for MITD on synthesized data of the Plexiglas plate.


The stabilization diagram is overlaid by the two largest principal components (sometimes referred to as singular values, although for square matrices it is more appropriate to call them principal components). The second principal component peaks around 145 Hz, indicating that there are closely spaced modes around this frequency. The undamped natural frequencies of the model ("true") and from MITD are shown in columns two and three of Table 19.5. Here it may be seen that the frequencies are very accurate for all modes. The true and estimated damping ratios are tabulated in columns two and three of Table 19.6, where it may be seen that they are accurate to one decimal. More accuracy is hardly interesting for

Table 19.5 Table with undamped natural frequencies, in Hz, for the synthesized data for the Plexiglas plate.

Mode #   True       MITD       PTD        LSCF       CMIF
1        145.349    145.383    145.323    146.455    145.318
2        150.313    150.284    150.354    151.408    150.161
3        340.493    340.474    340.688    341.573    340.376
4        401.536    401.482    401.654    402.280    401.536
5        413.436    413.600    413.458    414.149    413.629
6        522.004    522.307    522.236    522.823    521.940
7        616.382    616.438    616.285    617.114    621.130
8        751.709    751.773    752.144    752.317    753.460
9        827.469    827.349    827.417    828.731    851.194
10       1005.693   1005.836   1005.736   1005.592   1008.379

Table 19.6 Table with damping ratios in % for the synthesized data for the Plexiglas plate.

Mode #   True   MITD   PTD    LSCF   CMIF
1        2.5    2.5    2.5    2.5    2.7
2        2.5    2.5    2.5    2.5    2.7
3        2.5    2.5    2.5    2.6    2.6
4        2.5    2.5    2.5    2.5    2.6
5        2.5    2.5    2.5    2.5    2.5
6        2.5    2.5    2.5    2.5    2.4
7        2.5    2.5    2.5    2.5    2.5
8        2.5    2.5    2.5    2.4    2.5
9        2.5    2.5    2.5    2.4    2.5
10       2.5    2.5    2.6    2.5    2.4


Figure 19.45 Stabilization diagram for PTD on synthesized data for the Plexiglas plate.

damping. Although the mode shapes are estimated simultaneously with the poles by the MITD method, we will discuss the mode shape estimates after presenting all pole estimates for the four methods we compare here. We next apply the PTD method (Section 16.7.6) to the same data. This method typically requires a significantly higher number of lags than the low-order methods, and it was found that 150 lags gave a good result in this case. The model order was set to 60, and the stabilization diagram is shown in Figure 19.45. Typical for high-order methods such as PTD (and LSCF) is that they perform best using a small number of references. In this case, it was found that DOFs 1, 7, 15, 29, and 35 worked well as references; these correspond to the four corner points and one point (15) in the middle of the short edge. As can be seen in Figure 19.45, all modes stabilize properly, and the estimates in Tables 19.5 and 19.6 show that frequencies and damping factors are accurate. Mode shapes will be discussed below, after the stabilization diagrams of LSCF and FDD (CMIF) are presented. As mentioned in Section 16.7.6, for high-order methods such as PTD, only model orders that are multiples of the number of references are computed, which explains the relatively high maximum model order compared to the number of iteration steps shown in the stabilization diagram. The third method we apply is the LSCF method. This method requires selection of a frequency range; we used the entire range, zero to 2500 Hz, as this usually gives the best results for this method. Different references were tried, and it was found that using references in DOFs 1, 15, and 29 produced good results. The stabilization diagram for a maximum model order of 120 (again, to yield enough iteration steps, since high-order methods step in multiples of the number of references) is shown in Figure 19.46. The natural frequency estimates in Table 19.5 are accurate, although a little less accurate than for the two time domain methods. The damping ratio estimates in Table 19.6 show that there is a little more uncertainty in the LSCF damping estimates, although they may still be considered


Figure 19.46 Stabilization diagram for LSCF on synthesized data of the Plexiglas plate.

sufficiently accurate. It should be mentioned that the damping bias is not an effect of the frequency increment; decreasing the frequency increment by a factor of four was tried, but it did not improve the damping estimates. The PTD method estimates poles and modal participation factors (MPFs). In a second step, we therefore apply the LSFD method with the poles and MPFs from the PTD estimation to compute the mode shapes. This results in a fit for each response DOF, using all the references that were used for the PTD. During this least squares fit, it is possible to overlay the cross-spectral densities of the estimated model with the measured functions. Such a fit is shown in Figure 19.47 for an arbitrary DOF (29, one of the corner points). The plots show overall good agreement, although there are some regions, where the spectra are low, in which the fit is less good. Considering the logarithmic scale, the differences are acceptable, however. The mode shape estimation using the pole and MPF estimates from the LSCF method is shown in Figure 19.48. As may be seen, the fit is good. However, it should be mentioned that the estimation of the poles and MPFs needed to be retried a couple of times until the fits were good. LSCF produces MPFs of lower quality than, for example, the PTD method, as was shown by Cauberghe (2004), who also showed that a maximum likelihood implementation of the LSCF method performed better. It should also be mentioned that the LSFD method without MPFs does not work in this case for the two closely spaced modes around 145 Hz; the poles are close and similar enough that the rank of the residue matrix is not one, as is required for the SVD to separate the modes. The fourth method we apply is the FDD method, or the CMIF method with the modification mentioned in Section 17.3.5, for which an overview of the steps is shown in Figure 19.49.


Figure 19.47 Results for DOF 29 of the LSFD using data from PTD, including modal participation factors.

Similar to the CMIF method shown in Figure 19.40, the FDD method starts by showing the principal components (PCs). Since the second PC peaks at around 145 and 400 Hz, we choose to use both PCs for the estimation. For each of the peaks, the user selects a frequency, after which the enhanced FRF is computed and shown as in Figure 19.49(c), with the fit of an SDOF system (pole and mode shape) shown by plus signs. After selecting all peaks to use for the first PC, as indicated in panel (d), the user ends this stage. The second PC is then plotted, and the user continues to select peaks in this PC. In our case, the two peaks at approximately 145 and 410 Hz are selected. The frequency and damping estimates are


Figure 19.48 Results for DOF 29 of the LSFD using data from LSCF, including modal participation factors.

found in the last column in Tables 19.5 and 19.6. Here it may be seen that the frequency estimates are accurate, but that the damping estimates are somewhat more uncertain than for the other methods, especially for the first two, closely spaced, modes. Finally, we compare the mode shapes of all four methods applied here, with the true mode shapes of the model used to synthesize the data. In Figure 19.50, it can be seen that the similarity between all mode shapes is high, except for the first two modes with the LSCF and LSFD method, where the MAC values are 0.92 and 0.97. All other MAC values are above 0.99. This is an indication that the MPFs of the LSCF method are somewhat inaccurate and do not allow a correct estimation of the mode shapes of closely spaced modes, which is consistent with results found by Cauberghe (2004). He showed that a maximum likelihood implementation of the LSCF method worked better.



Figure 19.49 Plots of applying FDD (CMIF) to the synthesized data for the Plexiglas plate. In (a), the two highest principal components are shown; in (b), the first PC, used for the first round of peak selections; in (c), the fit of the first mode, where it is seen that 31 frequency values are used for the fit; in (d), all eight peaks selected from the first PC are shown; in (e), the second PC is shown; and in (f), the two peaks selected from the second PC are indicated.

19.11.2 OMA on Measured Data of Plexiglas Plate

In this section, we will look at measured data from the same Plexiglas plate that was used in Section 19.8. However, the plate was suspended differently, and 35 accelerometers were mounted in all points according to the grid in Figure 19.1. This results in considerable mass loading, which means that the undamped natural frequencies are not comparable with those found in the EMA test above. In Orlowitz and Brandt (2017), these data were used to



Figure 19.50 Plots of cross-MAC matrices with the true mode shapes of the model of the plate. In (a), the cross-MAC of the true mode shapes with those of the MITD method is shown; in (b), with PTD and LSFD; in (c), with LSCF and LSFD; and in (d), the true mode shapes are compared with the mode shapes of FDD (CMIF).

show that EMA and OMA produce similar results, provided the boundary conditions are the same. The plate was measured with a sampling frequency fs = 5000 Hz and L = 1.5 ⋅ 10^6 samples, the same as for the synthesized plate in Section 19.11.1, thus adding up to 300 seconds of data. During the measurement, the plate was excited by a pencil lightly tapping the plate in a random fashion, in both the time intervals between taps and the positions of the taps. This produces data of a "random" nature, although the kurtosis is approximately 190, i.e., the data are by no means Gaussian; they nevertheless work very well for OMA purposes. In Figure 19.51(a), a typical autocorrelation function is shown with the x-axis in lag number, and in panel (b), the corresponding PSD. The first method we apply for MPE is the MITD method, using 60 lags starting with lag 11, and a model order of 40. The stabilization diagram is shown in Figure 19.52. The stabilization diagram is very clean, and the estimated frequencies and damping ratios are tabulated in the second column of Tables 19.7 and 19.8. Next, we apply PTD to estimate poles and modal participation factors, and the stabilization diagram of this is found in Figure 19.53. In this case, DOFs 1, 7, and 15 were used as references, 299 lags were used, and the maximum model order was set to 50. The stabilization diagram is very clean, and the estimated parameters may be found in the third columns of Tables 19.7 and 19.8, where it may be seen that the values are very similar to those for MITD.
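The "random" character of the pencil-tapping excitation can be checked by computing the kurtosis of a response channel, as mentioned above. The sketch below uses the non-normalized definition (fourth central moment divided by the squared variance, which equals 3 for Gaussian data); the signal name is hypothetical.

```python
import numpy as np

def kurtosis(x):
    """Kurtosis as the fourth central moment normalized by the squared
    variance; equals 3 for Gaussian data."""
    x = np.asarray(x, dtype=float)
    m = np.mean(x)
    return np.mean((x - m)**4) / np.var(x)**2

# For the tapping excitation described above, a value far above 3 (here
# reported to be around 190) indicates strongly non-Gaussian data.
```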



Figure 19.51 Correlation function of DOF 1 for measured data of Plexiglas plate in (a), and corresponding PSD in (b).

Figure 19.52 Stabilization diagram for MITD applied on measured data of the Plexiglas plate.

The stabilization diagram of LSCF is found in Figure 19.54, and the corresponding modal parameters in column four of Tables 19.7 and 19.8. References in DOFs 1, 7, and 15 were used, and the model order was set to 100. Although the stabilization diagram is not as clean as for MITD and PTD, the estimated natural frequencies and damping ratios are consistent with the other two methods, with the exception of the last mode, for which the damping is a bit higher than for the other methods.


Figure 19.53 Stabilization diagram for PTD for measured data of the Plexiglas plate.

Table 19.7 Table with undamped natural frequencies, in Hz, for the measured data for the Plexiglas plate.

Mode #   MITD      PTD       LSCF      CMIF
1        141.795   141.795   142.964   141.864
2        142.443   142.423   143.596   142.399
3        323.016   322.988   323.929   323.161
4        392.764   392.863   393.637   392.840
5        408.476   408.441   409.373   408.581
6        503.487   503.437   504.214   503.547
7        583.152   583.097   583.695   582.997
8        711.747   711.821   712.195   711.587
9        799.633   799.546   799.543   797.037
10       940.215   940.110   940.085   940.319

Mode shape estimation was done using the LSFD method with the poles and MPFs from PTD. A typical fit plot is shown in Figure 19.55, where it may be seen that the fit is good. Similarly, the poles and MPFs from LSCF were used with LSFD to estimate the mode shapes for this method. The plots look very similar to those for PTD and LSFD, so the plot is omitted here to save some space. All mode shapes will be compared below. The final method to be applied is the FDD (CMIF), which is illustrated in Figure 19.56, similarly to what we described for the synthesized data of the Plexiglas plate. It was found by


Table 19.8 Table with damping ratios in % for the measured data of the Plexiglas plate.

Mode #   MITD   PTD    LSCF   CMIF
1        3.2    3.1    3.2    3.4
2        3.0    3.0    3.0    3.3
3        2.7    2.7    2.7    2.7
4        2.5    2.5    2.6    2.6
5        2.5    2.5    2.5    2.5
6        2.5    2.5    2.5    2.5
7        2.4    2.4    2.4    2.4
8        2.4    2.4    2.4    2.5
9        2.2    2.2    2.2    2.6
10       2.4    2.4    2.8    2.8

Figure 19.54 Stabilization diagram for LSCF for measured data of the Plexiglas plate.

trial and error that using 111 values centered at the peak resulted in the best damping estimates. As can be seen, a total of ten modes were selected. The frequency and damping estimates are found in the rightmost column of Tables 19.7 and 19.8. Again, the frequencies are very close to the estimates of the other methods, with a little more variation in the damping ratios. To compare the mode shapes, we calculate the cross-MAC between the mode shapes of MITD, as a reference, and all the other mode shapes in Figure 19.57. The results show that all methods give very similar MAC values, except for a small deviation of the mode shapes obtained by


Figure 19.55 Synthesized data for LSFD on data from PTD in DOF 29 for measured data of the Plexiglas plate.

the LSCF and LSFD methods. Again, this is an indication of unreliable modal participation factors of the LSCF method as mentioned in Section 19.11.1. The MAC values are larger than 0.94 for all modes for LSCF and LSFD and larger than 0.98 for the other methods. This makes it reasonable to assume that the other methods are closer to the true mode shapes.

19.11.3 OMA of a Suspension Bridge

We will now look at an example of data from a full-scale measurement. The data are from measurements on a suspension bridge, the Little Belt Bridge in Southern Denmark, which spans approximately 600 m between the pylons. The bridge was instrumented with 45 geophone transducers evenly distributed along the main span: 15 in the vertical direction on each side of the bridge, and 15 sensors mounted horizontally on one side to capture lateral modes. For more information about the bridge, see Christensen et al. (2019). In that paper, it was found that the first vertical mode was estimated, to three decimals, at the frequency that was modeled in the 1960s when the bridge was constructed. The geophone signals were acquired in parallel with a sampling frequency of fs = 1000 Hz, and later downsampled to 4 Hz. Three hours of continuous data are used for the parameter estimation here. As mentioned in Section 7.11, geophones output signals that are proportional to velocity well above the natural frequency of the sensor


Figure 19.56 Plots for FDD (CMIF) for measured data of the Plexiglas plate.

but below that frequency the output falls off proportionally to the frequency squared. Although the signals may be converted to velocity, in this case we simply use the voltage outputs, as we are only interested in the relative measurements between all response sensors. Since the sensors are very similar in their characteristics, this is sufficient for OMA purposes. A typical vertical autocorrelation function and the corresponding PSD from a sensor at the center of the bridge are shown in Figure 19.58(a) and (b). It can be seen that there are at least six modes (peaks in the PSD) between 0.1 and 0.8 Hz. For space reasons, we will only apply the MITD method, chosen as it has proven to be one of the best performing algorithms in Sections 19.11.1 and 19.11.2. A total of 60 lags, starting at lag 11, and a maximum model order of 30 were chosen as input parameters


Figure 19.57 Cross-MAC matrices for measured data of the Plexiglas plate, of mode shapes of MITD compared with, in (a), PTD; in (b), LSCF; and in (c), CMIF.

Figure 19.58 Correlation function of the bridge in (a), with the corresponding PSD in (b).

Figure 19.59 Stabilization diagram of the MITD method applied to the bridge data.

to MITD. In addition, low matrix polynomial normalization was chosen, and the whole cross-correlation matrix was used for the estimation, i.e., all responses were used as references. In Figure 19.59, the stabilization diagram, overlaid by the first two principal components, shows that there are indeed nine modes in the frequency interval, despite the fact that there are only six peaks in the PSD previously shown in Figure 19.58(b). This is, of course, because the sensor chosen for the latter plot was on a node line for some of the modes, and it shows how careful one has to be when drawing conclusions from spectra of measurements at single locations. A better choice is to use principal components calculated from all sensors. In Figure 19.59, it is not obvious from the peak in the first PC that there are two modes around the first peak. But the stabilization diagram is clear, and as we will see below, there are indeed closely spaced modes here. The natural frequencies and damping ratios of the first nine modes of the bridge are presented in Table 19.9, together with a description of the modes. The first mode is found at 0.155 Hz, and the second at 0.170 Hz. In the third column of the table, it can be seen that the damping of the first mode is 2.3 %, and that mode 2 has damping as high as 11.3 %. In fact, this type of difference in damping is not uncommon and was also found in, for example, Brownjohn et al. (2010). The high damping of the second mode explains why there is not a clear peak corresponding to it in the principal components. The remaining modes have damping values from 0.5 to 1.8 %. An auto-MAC matrix is shown in Figure 19.60. Here it may be seen that there is some similarity between the first and fourth modes. This may be explained by the choice of measurement locations, as animation of the modes shows that both modes are as expected for the first and third symmetrical bending modes. A better selection of measurement locations would have allowed producing a MAC matrix with better separation between the modes.
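The heavy downsampling mentioned earlier in this section (from 1000 Hz to 4 Hz before the analysis) has to be carried out in stages with proper anti-aliasing filtering. One possible way to do this in Python is sketched below; the particular split of the total factor 250 into stages is an assumption for illustration.

```python
from scipy.signal import decimate

def downsample_in_stages(x, factors=(5, 5, 10)):
    """Downsample a signal by the product of the factors, e.g. 1000 Hz -> 4 Hz
    using factors (5, 5, 10). decimate() applies an anti-aliasing lowpass
    filter before each resampling step, which keeps filter orders reasonable."""
    y = x
    for q in factors:
        y = decimate(y, q, zero_phase=True)
    return y
```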


Table 19.9 Table with undamped natural frequencies and damping ratios of the modes of the bridge.

Mode #   Frequency [Hz]   Damping [%]   Description
1        0.155            2.3           First symmetrical vertical bending
2        0.170            11.3          First antisymmetrical vertical bending
3        0.258            0.6           Second symmetrical vertical bending
4        0.355            1.8           Third symmetrical vertical bending
5        0.402            0.8           Second antisymmetrical vertical bending
6        0.523            0.7           First torsional
7        0.572            0.5           Fourth symmetrical vertical bending
8        0.770            0.7           Third antisymmetrical vertical bending
9        0.808            0.7           Second torsional

Figure 19.60 Auto-MAC matrix of the bridge.

This example serves to show that if the measurement quality is high, as is the case with the highly sensitive geophones, then the modal parameter estimation is uncomplicated. Also, since all sensors were measured synchronously, there is no issue with aligning datasets, as there sometimes is. In the author's view, one should always attempt to acquire as much data as possible simultaneously, to produce data that are as consistent as possible. The cost of equipment may also be kept down by using inexpensive sensors such as geophones, as was done here.



Figure 19.61 The six first modes of the bridge. (a) Mode 1, (b) Mode 2, (c) Mode 3, (d) Mode 4, (e) Mode 5, and (f) Mode 6.

19.11.4 OMA on Container Ship

The last dataset we are going to investigate consists of data from a RO-LO (roll-on/lift-off) ship. This ship was investigated in Orlowitz and Brandt (2014), although the dataset we will be looking at here is a different data set, containing harmonic vibrations that should be removed prior to the OMA parameter estimation. The ship was instrumented with 45 Dytran Instruments model 3097A2 accelerometers with a sensitivity of 500 mV/g, distributed along the deck, the superstructure, and the flume tank in the rear of the ship. All signals were acquired synchronously using a sampling frequency of fs = 1000 Hz, later downsampled to 10 Hz. The signals were acquired during approximately 45 minutes while the ship sailed at high speed, 21 knots. This caused considerable harmonic vibrations, as will be seen.



Figure 19.62 Correlation functions of the ship, in (a) before removing harmonics, and in (b) after removing the harmonics by applying the automatic AFDE method.


Figure 19.63 Principal component (also called singular value) plots of the ship of the original signals in (a) and with the signals after removing harmonics in (b).

In Figure 19.62, an autocorrelation function of a typical vertical sensor is shown, in (a) for the original signal, and in (b) for the signal with harmonics removed. The removal of harmonics is not easily distinguished in the plots of the autocorrelation functions, but is clear from the principal components in Figure 19.63, where the first two principal components of the original signals are shown in (a). It is obvious that there are harmonics at approximately 1.5, 3.0, 3.8, and 4.5 Hz. The first, second, and fourth of these frequencies originate from the propeller shaft, whereas the frequency at 3.8 Hz is caused by the engine. In Figure 19.63(b), the principal components calculated from the signals after automatic removal of the harmonics by the AFDE method are shown (see Section 18.7.1). It is clear that the harmonic frequencies are efficiently removed. In Figure 19.64, we show a stabilization diagram from MITD on the original signals, for the frequency range up to 4 Hz. It is evident that poles stabilize at the harmonic frequencies. It is rather commonly believed that this causes no problem; that data may be

Figure 19.64 Stabilization diagram of MITD on ship data including harmonics.

processed with the harmonics included, and the lightly damped poles due to the harmonics simply ignored. However, in Brandt (2015), it was shown, coincidentally on the same ship signal as used here, that the damping ratios of the modes of the ship were affected by the harmonics even though the harmonics were not obviously close to the natural frequencies of the ship. It is advised to show caution when harmonics are present, and the best practice is to remove any harmonics that are found before applying an MPE algorithm to estimate the modal parameters. The AFDE method was applied, removing two frequency lines on each side of the harmonics. The 45 signals, each containing 26213 samples, were processed in less than 0.7 seconds on a regular laptop to identify and remove the harmonics found in each signal. Since some signals may contain different harmonics than others, it is recommended to apply the AFDE method to each signal and not to try to make the procedure more efficient by finding the frequencies in one signal and removing those frequencies from all other signals. The stabilization diagram obtained by MITD after removing the harmonics is shown in Figure 19.65. A total of 60 lags was used to produce the stabilization diagram, starting from lag 11, with a maximum model order of 30 modes and low matrix polynomial normalization. As may be seen in the figure, six modes clearly stabilize in the frequency range 0.5 to 3 Hz. At higher model orders, the mode at 2.88 Hz splits into two, but from the mode shapes of these two modes, it was concluded that the split is due to computational poles. In Tables 19.10 and 19.11, natural frequencies and damping ratios of the first six modes of the ship are presented. In addition, a second analysis was performed using 120 lags (12 seconds), starting with lag 11 as before. This was done to show a recommended procedure to experimentally verify that the selected analysis parameters lead to reliable estimates. If changing some analysis parameter leads to changes in the modal parameters, the confidence in the parameters should be questioned. In the present case, the frequency estimates


Figure 19.65 Stabilization diagram of MITD on ship data without harmonics.

Table 19.10 Table with undamped natural frequencies for the modes of the RO-LO ship.

Mode #   Frequency [Hz]   Frequency [Hz]   Diff. [%]   Description
         Using 60 lags    Using 120 lags
1        0.892            0.891            0.1         Two-node vertical bending
2        1.762            1.762            -0.0        Three-node vertical bending
3        1.834            1.834            0.0         Two-node horizontal bending
4        2.143            2.141            0.1         One-node torsion
5        2.639            2.639            -0.0        Four-node vertical bending
6        2.875            2.874            0.0         Two-node torsion

Table 19.11 Table with damping ratios in % for the modes of the RO-LO ship.

Mode #   Using 60 lags   Using 120 lags   Diff. [%]
1        1.74            1.66             4.7
2        0.60            0.58             2.9
3        1.06            1.10             -3.5
4        1.00            1.04             -3.7
5        0.70            0.72             -2.7
6        1.04            1.10             -5.9


Figure 19.66 Cross-MAC between results for the ship with 60 and 120 lags, respectively.


Figure 19.67 Mode shapes of the six first modes of the ship obtained by the MITD method. (a) Mode 1, (b) Mode 2, (c) Mode 3, (d) Mode 4, (e) Mode 5, and (f) Mode 6.


from the second estimation were within 0.1 %, whereas the damping factors changed by less than 6 %. Considering the uncertainty in damping estimates, this should be considered acceptable. The best way to assess the estimated mode shapes is to animate the modes and see if they look as expected. In Figure 19.61, we show the first six modes of the bridge, the last three being omitted for space reasons. They all look as expected and are real-valued, which is another common assessment criterion. The higher modes may be found in Christensen et al. (2019). In addition to comparing the natural frequency and damping ratio estimates from the two different numbers of lags used in the estimation, in Figure 19.66 we present a cross-MAC between the mode shapes from the two runs. As can be seen, the cross-MAC values are very close to unity (larger than 0.999). This gives some confidence in the obtained modal parameters. Mode shapes of the first six modes of the ship are shown in Figure 19.67. The mode shapes are all real-valued and show the expected mode shapes, as described in Table 19.10.
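The AFDE harmonic removal used above (Section 18.7.1) is not reproduced here, but the basic idea of deleting a few frequency lines around each detected harmonic and bridging the gap can be illustrated by the simplified frequency domain sketch below. The two lines on each side follow the text; the detection of the harmonic frequencies is assumed to have been done beforehand, and this is not the AFDE algorithm itself.

```python
import numpy as np

def remove_harmonic_lines(x, fs, harmonic_freqs, width=2):
    """Simplified illustration (not the AFDE algorithm): remove spectral
    lines around known harmonic frequencies by interpolating the complex
    spectrum across the affected lines. Edge handling omitted for brevity."""
    X = np.fft.rfft(x)
    df = fs / len(x)
    for f0 in harmonic_freqs:
        k0 = int(round(f0 / df))
        k_lo, k_hi = k0 - width, k0 + width
        for k in range(k_lo, k_hi + 1):
            # Linear interpolation between the lines just outside the gap
            w = (k - (k_lo - 1)) / (k_hi + 1 - (k_lo - 1))
            X[k] = (1 - w) * X[k_lo - 1] + w * X[k_hi + 1]
    return np.fft.irfft(X, n=len(x))
```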

References

Ahlin K, Magnevall M and Josefsson A 2006 Simulation of forced response in linear and nonlinear mechanical systems using digital filters. Proceedings of International Conference on Noise and Vibration Engineering (ISMA), Catholic University, Leuven, Belgium, pp. 3817–3831.
Austrell PE, Dahlblom O, Lindemann J, Olsson A, Olsson KG, Persson K, Petersson H, Ristinmaa M, Sandberg G and Wernberg PA 2004 CALFEM – A finite element toolbox, version 3.4. Technical report, Lund University, The Division of Structural Mechanics.
Bendat J and Piersol AG 2010 Random Data: Analysis and Measurement Procedures, 4th edn. Wiley Interscience.
Brandt A 2013 ABRAVIBE – A toolbox for teaching and learning vibration analysis. Sound and Vibration 47(11), 12–17.
Brandt A 2015 Comparison and assessment of methods to treat harmonics in operational modal analysis. Proceedings of the International Conference on Structural Engineering Dynamics (ICEDyn), Lagos, Portugal.
Brandt A and Ahlin K 2003 A digital filter method for forced response computation. Proceedings of 21st International Modal Analysis Conference, Kissimmee, FL.
Brandt A and Brincker R 2011 Impact excitation processing for improved frequency response quality. Structural Dynamics, Vol. 3, pp. 89–95. Springer, New York.
Brandt A, Sturesson PO and Ristinmaa M 2014 Test analysis verification using open software. Sound and Vibration 48(6), 13–16.
Brownjohn J, Magalhaes F, Caetano E and Cunha A 2010 Ambient vibration re-testing and operational modal analysis of the Humber Bridge. Engineering Structures 32(8), 2003–2018.
Cauberghe B 2004 Applied Frequency-domain System Identification in the Field of Experimental and Operational Modal Analysis. PhD thesis, Vrije University of Brussels, Brussels, Belgium.
Christensen SS, Andersen MS and Brandt A 2019 Dynamic characterization of the Little Belt suspension bridge by operational modal analysis. Dynamics of Civil Structures, Volume 2, Springer, pp. 17–22.


Craig RR and Kurdila AJ 2006 Fundamentals of Structural Dynamics. John Wiley.
Inman D 2007 Engineering Vibration, 3rd edn. Prentice Hall.
Jelicic G, Böswald M and Brandt A 2021 Improved computation in terms of accuracy and speed of LTI system response with arbitrary input. Mechanical Systems and Signal Processing 150, 107252.
Kozin F and Natke HG 1986 System-identification techniques. Structural Safety 3(3–4), 269–316.
Lyon R 2000 Designing for Product Sound Quality. CRC Press.
Orlowitz E and Brandt A 2014 Operational modal analysis for dynamic characterization of a Ro-Lo ship. Journal of Ship Research 58(4), 216–224.
Orlowitz E and Brandt A 2017 Comparison of experimental and operational modal analysis on a laboratory test plate. Measurement 102, 121–130.
Otte D, Van de Ponseele P and Leuridan J 1990 Operational deflection shapes in multisource environments. Proceedings of 8th International Modal Analysis Conference, Kissimmee, FL.
Rao S 2003 Mechanical Vibrations, 4th edn. Pearson Education.
Smallwood D and Gregory D 1986 A rectangular plate is proposed as an IES modal test structure. Proceedings of 4th International Modal Analysis Conference, Los Angeles, Society for Experimental Mechanics.
Sturesson PO, Brandt A and Ristinmaa M 2013 Structural dynamics teaching example – a linear test analysis case using open software. Proceedings of 31st International Modal Analysis Conference (IMAC), Garden Grove, CA.
Tucker S and Vold H 1990 On principal response analysis. Proceedings of ASELAB Conference, Paris, France.


Appendix A Complex Numbers

Complex numbers are frequently used in signal analysis. A complex number, c, is defined as

c = a + jb,  (A.1)

where the real numbers a and b are called the real part and the imaginary part, respectively, of c. The number j, the imaginary number, also sometimes denoted i, is equal to the square root of −1. Of course, this does not (at least immediately) provide any insight into the use of complex numbers, so we shall here show some fundamental uses of complex numbers. First, we define the complex conjugate, c∗, of c, by

c∗ = a − jb.  (A.2)

A useful picture of complex numbers is obtained if we plot the real and imaginary parts of c as x and y coordinates, respectively, in a coordinate system, as in Figure A.1. The representation of complex numbers by Equation (A.1) is often called the rectangular form, or the Euclidean form. From Figure A.1, it directly follows that the complex number, c, may be written using trigonometric functions as

c = A[cos 𝜙 + j sin 𝜙],  (A.3)

from which it follows that

A = √(a² + b²),  (A.4)

and

𝜙 = arctan(b/a).  (A.5)

The expression for the complex number, c, in Equation (A.3) is often called the trigonometric form. The factor A is also the square root of the amplitude squared of the complex number, c, which is obtained by

|c|² = cc∗ = (a + jb)(a − jb) = a² + b².  (A.6)



Figure A.1 The complex plane.

There is a third common notation for expressing c: the Euler form, or polar form. Here, c is written as

c = A e^(j𝜙),  (A.7)

where A and 𝜙 are equal to those in Equation (A.3). The polar form also has a simplified notation commonly used in, for example, electrical engineering. Here, c is written as

c = A∠𝜙,  (A.8)

where the symbol ∠ is read "angle." When we use complex numbers in signal analysis, there are mainly two operations of interest. The first is summation of two complex numbers, say c1 = a1 + jb1 and c2 = a2 + jb2. An example of this case is when we have two sound waves with a certain common frequency, and the two sounds are added together at a certain point. Since the sound information contains both amplitude and phase, it becomes a complex addition; see also below, where we describe how complex numbers are used to describe sinusoids. For the addition of two complex numbers, the rectangular form is most suitable, and the sum, c, of the two numbers is

c = c1 + c2 = (a1 + a2) + j(b1 + b2),  (A.9)

that is, the real and imaginary parts are summed separately. This is equivalent to vector addition. The other important operation is multiplication of two complex numbers. An example of this is if we let a sinusoidal force excite a structure for which we know the frequency response between force and response at a certain point. The response at this point may be obtained by multiplying the complex sinusoid by the (complex) value of the frequency response at the frequency of the sinusoid. When we multiply two complex numbers, we prefer to use the polar form of Equation (A.7), and the product then becomes

c = c1 ⋅ c2 = A1A2 e^(j(𝜙1+𝜙2)),  (A.10)

that is, with multiplication, the amplitudes are multiplied and the phase angles are summed.
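The two operations are easily tried out numerically. The following MATLAB/Octave sketch, with arbitrary example numbers, adds two complex numbers in rectangular form and multiplies them using the polar form of Equation (A.10).

    % Minimal sketch of complex addition and multiplication in MATLAB/Octave.
    c1 = 3 + 4i;                 % rectangular form
    c2 = 1 - 2i;
    % Addition: real and imaginary parts are summed separately, Equation (A.9)
    c_sum = c1 + c2;             % equals (3+1) + j(4-2) = 4 + 2i
    % Multiplication via the polar form, Equation (A.10)
    A1 = abs(c1);  phi1 = angle(c1);
    A2 = abs(c2);  phi2 = angle(c2);
    c_prod_polar = A1*A2 * exp(1i*(phi1 + phi2));
    c_prod_rect  = c1 * c2;      % MATLAB/Octave multiplies complex numbers directly
    disp(c_prod_polar - c_prod_rect)   % difference is zero (within rounding)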

The most important reason for using complex numbers in signal analysis (noise and vibration analysis) is that, when we have sinusoids, it is quite effective to replace them with their complex analogs. Assume first that we have a real, time-dependent signal, x(t), e.g., a measured acceleration signal of a certain frequency,

x(t) = A cos(𝜔t).  (A.11)

A complex sinusoid is now defined as

x̃(t) = Ae^(j(𝜔t+𝜙)) = Ce^(j𝜔t),  (A.12)

where

C = Ae^(j𝜙).  (A.13)

Using this notation, our actual (original) signal can be written as

x(t) = Re[x̃(t)].  (A.14)

By introducing the complex signal, x̃(t), we are able to easily change both the amplitude and the phase of our signal, for example, when passing through a frequency response. The resulting signal is then obtained by taking the real part of the calculated complex signal. We achieve the same result as if we had used the real signal the whole time, but without the complicated trigonometric rules. The imaginary part of the complex signal sometimes also has interpretations, which we shall not delve into here; basically, we can say that it simply follows along as a complement to the calculations.

Example A.0.1 As an example of using complex numbers, assume that we have a sinusoidal force with amplitude 10 N and frequency 100 Hz. The force passes through an SDOF system with a natural frequency of 100 Hz, where we let the frequency response of accelerance type be 0.1∠𝜋/2 [(m/s²)/N]. We let the phase of our force be the reference, that is, 0 radians. What is the resulting acceleration? Our force signal, F(t), can be written in complex form as

F(t) = Ce^(j2𝜋f₀t),  (A.15)

where C = 10e^(j0) = 10 [N], and f₀ = 100 [Hz]. Furthermore, the frequency response at 100 Hz is

H(100) = 0.1e^(j𝜋/2).  (A.16)

We thus obtain from Equation (A.10) that the resulting acceleration is

a(t) = F(t)H(100) = 10 ⋅ 0.1e^(j(2𝜋f₀t+0+𝜋/2)) = e^(j(2𝜋f₀t+𝜋/2)),  (A.17)

or, if we write the actual, real acceleration, that is, the real part of Equation (A.17), then

a(t) = cos(2𝜋f₀t + 𝜋/2).  (A.18)

End of example.
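The example is easily checked numerically. The following MATLAB/Octave sketch, using the values assumed in the example above, evaluates the complex force and frequency response and compares the real part of the result with Equation (A.18).

    % Numerical check of Example A.0.1.
    f0 = 100;                         % frequency in Hz
    t  = 0:1/(50*f0):0.02;            % a short time axis
    C  = 10*exp(1i*0);                % complex force amplitude, 10 N at 0 rad
    H  = 0.1*exp(1i*pi/2);            % accelerance at 100 Hz, 0.1 at angle pi/2
    F  = C*exp(1i*2*pi*f0*t);         % complex force signal
    a  = H*F;                         % complex acceleration, Equation (A.17)
    a_real = real(a);                 % actual acceleration
    a_ref  = cos(2*pi*f0*t + pi/2);   % Equation (A.18)
    max(abs(a_real - a_ref))          % zero to numerical precision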


Appendix B Logarithmic Diagrams

Logarithmic (log) scales are often used when displaying spectra. There are two reasons for this: 1. the compression that occurs when changing to a log scale (usually on the y-axis) reveals details in the curve that are not as obvious using a linear scale, and 2. many curves become straight lines on a log–log scale (where both axes are logarithmic). Logarithms can be defined with an arbitrary base. The logarithm we most often use within noise and vibration is the base 10 logarithm, or "log-base-10." For this logarithm, if

x = 10^y,  (B.1)

then

y = log10(x),  (B.2)

which is read as "the log-base-10 of x is equal to y." For example, the log-base-10 of 1000 is equal to 3. A simple algebraic rule for logarithms when multiplying, which follows from the definition, is that

log10(a ⋅ b) = log10(a) + log10(b).  (B.3)

Especially useful, as we will discover in Appendix C, dealing with decibels, is that

log10(a²) = 2 ⋅ log10(a).  (B.4)

The log-base-10 as a function of x is shown in Figure B.1 (a). A common reason why log scales are used is that many curves become straight lines on a log–log scale (when both the x-scale and y-scale are logarithmic). More specifically, this is valid for curves described by power functions, that is, where y = x^a. In Figure B.1 (b), two such functions are plotted with linear and log–log scales. In this case, we must use log scales on both the x-axis and the y-axis in order to obtain straight lines. Log–log plots are common for plotting filter characteristics and spectra in, e.g., environmental testing. The compression effect of the logarithm is often used, for example, when we plot the frequency response of mechanical structures; otherwise, we lose details because of the large dynamic range of such frequency responses, as illustrated in Figure B.2.
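As a quick numerical check of Equations (B.1) through (B.4), the following MATLAB/Octave lines, with arbitrary numbers, can be used.

    % Checking the logarithm rules numerically.
    log10(1000)                          % equals 3, since 1000 = 10^3
    a = 7; b = 13;
    log10(a*b) - (log10(a) + log10(b))   % Equation (B.3): difference is zero
    log10(a^2) - 2*log10(a)              % Equation (B.4): difference is zero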



Figure B.1 In (a), y = log10(x) is shown as a function of x, for x-values from 0.1 to 1000 with a linear x-axis. As seen in the diagram, taking the log results in a strong compression, that is, a large difference in x-values gives only small differences in y-values ("logged" values). In (b), the functions y = x² (solid) and y = 100/x² = 100x⁻² (dashed) are plotted with log–log scales. We see in (b) how power functions of the form y = x^a become straight lines.


Figure B.2 Frequency response plotted with (a) linear y-scale and (b) logarithmic y-scale. Comparing the two formats shows that many details of the curve are only visible when using a log y-scale.

Some people also prefer to plot frequency responses of mechanical systems on a log–log scale, although the author of this book does not. The reason is shown in Figure B.3. Although there are some good arguments for plotting FRFs on a log–log scale, for example, the fact that resonance bandwidths are constant relative bandwidths, the natural frequencies have a tendency to be "packed" in the upper part of the x-axis when a log–log scale is used. As is apparent throughout this book, I therefore prefer a logarithmic y-axis and linear x-axis format for FRF plots.

Figure B.3 Log–log plot of the same FRF as shown in Figure B.2.
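The plot formats discussed in this appendix are easily compared in MATLAB/Octave. The sketch below synthesizes a simple two-mode accelerance with arbitrary values (not the FRF shown in the figures) and plots its magnitude with a linear y-axis, a logarithmic y-axis, and log–log axes.

    % Comparing FRF plot formats: linear y, log y, and log-log.
    f = (1:0.1:500)';                  % frequency axis in Hz
    w = 2*pi*f;
    % Two SDOF contributions (arbitrary masses, natural frequencies, and damping)
    H = -w.^2 .* ( 1./(1e3*((2*pi*40)^2  - w.^2 + 2i*0.02*(2*pi*40)*w)) + ...
                   1./(2e3*((2*pi*180)^2 - w.^2 + 2i*0.02*(2*pi*180)*w)) );
    subplot(3,1,1); plot(f, abs(H));       % linear y-scale hides most details
    subplot(3,1,2); semilogy(f, abs(H));   % log y-scale, linear frequency axis
    subplot(3,1,3); loglog(f, abs(H));     % log-log; resonances crowd to the right
    xlabel('Frequency [Hz]');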


Appendix C Decibels

The concept of decibels is central to noise and vibration analysis. It is primarily used within acoustics, where the concept is related to the logarithmic sensitivity of the human ear. It is also used for plotting, for example, frequency responses and filter characteristics. Thus, it is essential to understand how the decibel expression is calculated. The decibel was invented (by people at the Bell Telephone Labs) for use in telecommunications, to make expressions independent of the context, that is, of whether amplitude or power is used; or perhaps we should rather say that the unit bel was invented. The bel is so large, however, that in most fields it is most common to use the decibel, a tenth of a bel. To obtain this desired property, the decibel is defined as a relative measure using a power ratio. For a power, P, which is to be converted to decibels relative to a reference power, P0, the resulting power, PdB, in decibels is

PdB = 10 ⋅ log10(P/P0).  (C.1)

If we, for example, have a power of 100 watts and the reference power is 1 watt, then we obtain 20 dB relative to 1 watt. Of course, we often measure entities with linear units, and not powers. We can therefore use an analogy with an electrical circuit, where the power consumed by a circuit component, for example, a resistor, is given by the product of the current through the resistor and the voltage across it. That is, if the resistance is R, then the power consumed by the resistor is

P = UI = U ⋅ U/R = U²/R.  (C.2)

If our reference power is P0, which corresponds to a reference voltage U0, then

P0 = U0²/R.  (C.3)

We now express the power, P, in decibels relative to P0, and obtain

PdB = 10 ⋅ log10(P/P0) = 10 ⋅ log10((U/U0)²).  (C.4)


If we now use the relation log10(a²) = 2 ⋅ log10(a) from Appendix B, we obtain an alternative formula for calculating the decibel expression for linear quantities, namely

PdB = 20 ⋅ log10(U/U0).  (C.5)

Equation (C.5) thus expresses a voltage ratio in decibels. Note that the decibel value is the same whether we use the voltage ratio or the power ratio. That is the point of decibels. Of course, in noise and vibration analysis, we rarely express electrical voltages directly in decibels. For arbitrary units, we use the following rules:
1. If the unit of the entity we wish to convert is linear (not squared), we use Equation (C.5). This is true, for example, for an acceleration in m/s².
2. If the unit is quadratic, we use Equation (C.1) and replace P with our measured entity. This is true, for example, for a PSD of an acceleration in (m/s²)²/Hz.
Finally, we must observe that a decibel value is only meaningful when the reference that has been used is indicated. This may seem confusing if you are used to hearing, for example, sound levels given in dB without any reference. In acoustics, however, standard reference values are commonly used, which are often not reported explicitly. For sound pressure levels (which sound levels usually are), the reference 20 𝜇Pa is used.
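As a computational summary of these rules, the following MATLAB/Octave sketch converts a linear quantity and a quadratic (power-type) quantity to decibels; the numerical values are arbitrary examples, and the PSD reference is an assumed value that must always be stated.

    % Converting measured quantities to decibels.
    % Linear unit (e.g. rms sound pressure in Pa): use 20*log10, Equation (C.5)
    p     = 1;                        % 1 Pa rms, arbitrary example
    p_ref = 20e-6;                    % standard sound pressure reference, 20 uPa
    Lp    = 20*log10(p/p_ref)         % approx. 94 dB re 20 uPa
    % Quadratic unit (e.g. a PSD value in (m/s^2)^2/Hz): use 10*log10, Equation (C.1)
    Gaa     = 0.01;                   % arbitrary PSD value
    Gaa_ref = 1e-12;                  % assumed reference; always state the reference used
    LG      = 10*log10(Gaa/Gaa_ref)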


Appendix D Some Elementary Matrix Algebra

I have assumed the reader to be acquainted with some basic linear algebra in this book. In this appendix, we will summarize some important matrix algebra relations and define the nomenclature for matrices used throughout the book. For a complete coverage, see, for example, Strang (2004). First, let us define the nomenclature used in this book. A column vector is denoted by {x}, by which we mean

{x} = { x1
        x2
        …
        xM },  (D.1)

if we assume the vector has M elements. A row vector is denoted by ⌊y⌋, by which we mean

⌊y⌋ = ⌊ y1  y2  …  yN ⌋,  (D.2)

if we assume the vector has length N. A regular matrix, [A], is denoted by brackets, i.e.,

[A] = [ a11  a12  …  a1N
        a21  a22  …  a2N
        …    …    …   …
        aM1  aM2  …  aMN ],  (D.3)

and we call this matrix an M-by-N matrix, or we say that it has size M × N. A diagonal matrix is denoted by, e.g., ⌈S⌋, and it has no nonzero off-diagonal elements, i.e.,

⌈S⌋ = [ s11  0    …  0
        0    s22  …  0
        …    …    …  …
        0    0    …  sMM ],  (D.4)


and, of course, this matrix has to be square, M × M. The most common diagonal matrix is perhaps the identity matrix, ⌈I⌋, which is

⌈I⌋ = [ 1  0  …  0
        0  1  …  0
        …  …  …  …
        0  0  …  1 ],  (D.5)

and which has the important property that, for any matrix, [A],

[A]⌈I⌋ = [A].  (D.6)

We also have the important property that the inverse of the identity matrix equals itself, i.e.,

⌈I⌋^(-1) = ⌈I⌋.  (D.7)

We denote the transpose of a real vector or matrix by the superscript T. For complex vectors and matrices, we usually replace the transpose by the Hermitian transpose, [A]^H, which is equal to the complex conjugate (see Appendix A) of the transposed matrix, i.e.,

[A]^T = [ a11  a21  …  aM1
          a12  a22  …  aM2
          …    …    …   …
          a1N  a2N  …  aMN ],  (D.8)

and

[A]^H = [ a11*  a21*  …  aM1*
          a12*  a22*  …  aM2*
          …     …     …   …
          a1N*  a2N*  …  aMN* ].  (D.9)
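In MATLAB/Octave, these two operations correspond to the .' and ' operators, as the following short sketch with an arbitrary complex matrix illustrates.

    % Transpose versus Hermitian transpose in MATLAB/Octave.
    A = [1+2i, 3; 4i, 5-1i];
    At = A.';             % plain transpose, Equation (D.8)
    Ah = A';              % complex conjugate (Hermitian) transpose, Equation (D.9)
    disp(Ah - conj(At))   % zero matrix: A' equals conj(A.')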

The "standard" matrix equation [A]{x} = {b} has the solution

{x} = [A]^(-1){b},  (D.10)

where we call [A]^(-1) the inverse of [A]. Of course, the solution to Equation (D.10) may or may not exist. If the matrix [A] is square, we have the situation of an equation system with the same number of unknowns (in {x}) as we have equations. Then, the inverse exists if the determinant of [A], denoted |A|, is nonzero. We leave the details of determinants to the text, see for example Section 6.3.1, where we use the determinant of a system matrix. If, on the other hand, there are more rows than unknowns, i.e., M > N, then we have to find some other solution. The most common solution in that case is the least squares solution,

{x} = ([A]^T [A])^(-1) [A]^T {b}.  (D.11)

The inverse of a matrix is a numerically unstable entity and should be avoided in computations. In MATLAB/Octave, the best way of solving standard, square equation systems is therefore to use the slash and the backslash operators. These work so that if we have an equation involving three matrices [A], [B], and [C] whose solution is

[A] = [B][C]^(-1),  (D.12)

then the solution in MATLAB/Octave is best obtained by the code A = B / C, and if the equation has the solution

[A] = [C]^(-1)[B],  (D.13)

then the solution in MATLAB/Octave is best obtained by the code A = C \ B, where, in both expressions, you should note that the inverted matrix is, so to speak, below the division sign; in "B/C", C is below the sign, indicating that the inverse is taken of C, and in "C \ B", C is also below the division sign. In MATLAB/Octave, these two operators are called "right division" (/) and "left division" (\), respectively, which points to which side of the noninverted matrix ([B]) the inverse matrix ([C]^(-1)) stands.
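The following MATLAB/Octave sketch illustrates the backslash operator and the least squares solution of Equation (D.11) on small, arbitrary example systems.

    % Solving equation systems without forming an explicit inverse.
    A = randn(4);  b = randn(4,1);
    x1 = A \ b;                       % square system, preferred over inv(A)*b
    x2 = inv(A)*b;                    % works, but numerically less sound
    norm(x1 - x2)                     % small difference for a well-conditioned A
    % Overdetermined system, M > N: least squares solution of Equation (D.11)
    C = randn(10,3);  d = randn(10,1);
    x_ls = (C'*C) \ (C'*d);           % normal equations
    x_bs = C \ d;                     % backslash gives the least squares solution directly
    norm(x_ls - x_bs)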


Appendix E Eigenvalues and the SVD

Modern noise and vibration analysis applications involve many advanced linear algebra concepts, and we have used some in this book, particularly in Chapters 6, 14, and 15. Some of the linear algebra theory we have used is not included in most curricula, even at graduate level. We will therefore summarize some of the most important concepts here. More details can be found in textbooks on linear algebra, see, for example, Strang (2004).

E.1 Eigenvalues and Complex Matrices

The concept of eigenvalues and eigenvectors is very important. Generally, the eigenvalue problem is related to the equation

([A] − 𝜆⌈I⌋){x} = {0},  (E.1)

which is also sometimes written as

[A]{x} = 𝜆{x}.  (E.2)

The solutions, 𝜆n, to either of these two equations are called the eigenvalues of [A]. Furthermore, if we put such an eigenvalue, 𝜆n, into Equation (E.1), then there is only a particular vector, {x}n, which satisfies the equation. Such vectors are called the eigenvectors of [A] and correspond to each eigenvalue. Naturally, there are infinitely many eigenvectors for each eigenvalue, because we can scale an eigenvector by a factor and it will still satisfy Equation (E.1), but there is only one unique vector up to a scale factor. The eigenvalues in the solution to Equation (E.1) are found by finding the values, 𝜆n, satisfying

|[A] − 𝜆⌈I⌋| = 0,  (E.3)

where |·| denotes the determinant. Thus, the solution is found by finding the values 𝜆n which make the determinant in Equation (E.3) equal to zero. This is also called the characteristic equation of [A]. Eigenvalues and eigenvectors are closely related to complex matrices, particularly those matrices which are Hermitian (sometimes "Hermitian symmetric"), i.e., complex matrices [A] which are equal to their Hermitian transpose, i.e.,

[A]^H = [A].  (E.4)


Hermitian matrices are the equivalent of real symmetric matrices, for which [A]^T = [A]. There are some fundamental properties of the eigenvalues of symmetric matrices which are of particular interest to us. We will therefore state some of these briefly:
● The eigenvalues of any symmetric matrix are real.
● The eigenvalues of any Hermitian matrix are real.
● If the eigenvalues are all larger than zero, the matrix is called positive definite, and if some eigenvalues are allowed to be equal to zero, the matrix is referred to as positive semidefinite.
● The eigenvectors of any symmetric or Hermitian matrix can be chosen orthonormal, i.e., with unity length, and orthogonal to each other. For an orthonormal matrix, [Q], [Q]^T = [Q]^(-1).

We now come to the important diagonalization properties of symmetric matrices. Any symmetric matrix, [A], can be diagonalized by its eigenvectors into

[A] = [Q]⌈Λ⌋[Q]^T,  (E.5)

where the matrix [Q] has each eigenvector as a column, and ⌈Λ⌋ is a diagonal matrix with the eigenvalue 𝜆n in its (n, n) element, corresponding to the eigenvector in column n of [Q]. If the matrix [Q] is chosen so that it is orthonormal, then

⌈Λ⌋ = [Q]^T[A][Q],  (E.6)

and it should be noted that MATLAB/Octave produces orthonormal eigenvectors by default, using the eig command. For complex matrices, for example, the input cross-spectral matrix [Gxx] discussed in Section 10.8 and in Chapters 14 and 15, the above equations turn into equivalent equations if we replace the transpose with the Hermitian transpose. Thus, for any Hermitian (and therefore complex) matrix, [A],

[A] = [U]⌈Λ⌋[U]^H,  (E.7)

and

⌈Λ⌋ = [U]^H[A][U],  (E.8)

where the matrix [U] has the eigenvector corresponding to the element (n, n) in ⌈Λ⌋ located in its column n. The matrix [U] is called a unitary matrix, in that it is complex and has orthonormal columns. For this matrix, of course, it is true that [U]^H = [U]^(-1).
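The diagonalization properties above are easily verified numerically. The following MATLAB/Octave sketch builds an arbitrary Hermitian matrix and checks Equations (E.7) and (E.8).

    % Eigenvalue decomposition of a Hermitian matrix in MATLAB/Octave.
    B = randn(4) + 1i*randn(4);
    A = B + B';                       % A is Hermitian by construction
    [U, D] = eig(A);                  % columns of U are orthonormal eigenvectors
    norm(A - U*D*U')                  % Equation (E.7): close to zero
    norm(D - U'*A*U)                  % Equation (E.8): close to zero
    norm(U'*U - eye(4))               % U is unitary
    max(abs(imag(diag(D))))           % eigenvalues of a Hermitian matrix are real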

E.2 The Singular Value Decomposition (SVD)

The singular value decomposition, SVD, is perhaps one of the most powerful decompositions used in modern engineering. It resembles the eigenvalue decomposition in Equations (E.6) and (E.7), but it works on any (possibly rectangular) matrix [A], whereas the eigenvalue decompositions, of course, only work for square matrices. The SVD of any matrix [A] is

[A] = [U1]⌈S⌋[U2]^H,  (E.9)

where the columns in [U1] are called the left singular vectors, the values in the diagonal matrix ⌈S⌋ are called the singular values, and the columns in [U2] are the right singular vectors. If [A] is an M × N matrix, then
● [U1] is an M × M matrix, whose columns are the eigenvectors of [A][A]^H,
● the matrix ⌈S⌋ is M × N, and the singular values on the diagonal equal the square roots of the eigenvalues of [A]^H[A] and [A][A]^H,
● the singular values are always real and nonnegative, and sorted in descending order,
● [U2] is an N × N matrix, whose columns are the eigenvectors of [A]^H[A], and
● the columns of both the left and the right singular vector matrices, [U1] and [U2], are orthonormal (orthogonal and of unity length).

The matrix ⌈S⌋ is an M × N matrix, so we need to define how such a matrix can be diagonal. It is simply diagonal in such a way that each diagonal element, starting from element (1, 1), contains a (perhaps) nonzero value, and all remaining values in the matrix are zero. If M > N, then ⌈S⌋ is

⌈S⌋ = [ s11  0    …  0
        0    s22  …  0
        …    …    …  …
        0    0    …  sNN
        0    0    …  0
        …    …    …  …
        0    0    …  0 ],  (E.10)

and if M < N, then ⌈S⌋ instead looks like

⌈S⌋ = [ s11  0    …  0    0  …  0
        0    s22  …  0    0  …  0
        …    …    …  …    …  …  …
        0    0    …  sMM  0  …  0 ].  (E.11)

An important property of the SVD of a Hermitian matrix is that, for a (positive definite or positive semidefinite) Hermitian matrix [A], [U1] = [U] and [U2] = [U], and the singular values are equal to the eigenvalues, ⌈S⌋ = ⌈Λ⌋ (if the matrix is not semidefinite, so that some eigenvalues may be negative, the singular values equal the absolute values of the eigenvalues). A concept closely related to the SVD is the pseudoinverse, [A]^+, of any M × N matrix [A]. The pseudoinverse is the solution to

[A]{x} = {b},  (E.12)

for rectangular matrices, where the pseudoinverse solution is

{x} = [A]^+{b}.  (E.13)

The solution to Equation (E.13) is of particular interest in many measurement situations where typically M > N, i.e., we have an overdetermined set of equations (more equations than unknowns). The pseudoinverse can be shown to be equal to

[A]^+ = [U2]⌈S^+⌋[U1]^H,  (E.14)

where the inverse singular value matrix ⌈S^+⌋ contains the reciprocals of the singular values, i.e., s+nn = 1/snn. The pseudoinverse is closely related to the least squares solution in Equation (D.11), and its main advantage is its very robust numerical performance. At a slight increase in computational effort, the pseudoinverse has proven to perform extremely well in cases where the matrices involved are noisy or ill-conditioned, and it is therefore often preferred over the former. Finally, it should be mentioned that the eigenvalues, the SVD, and the pseudoinverse are all integral parts of MATLAB/Octave. They are computed by the commands eig, svd, and pinv, respectively. For applications of the SVD, see Section 15.1.
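As a brief illustration of the SVD and the pseudoinverse, the following MATLAB/Octave sketch factors an arbitrary overdetermined system and compares the pseudoinverse solution with the least squares solution of Equation (D.11).

    % SVD and pseudoinverse of a rectangular matrix.
    A = randn(10, 3);  b = randn(10, 1);        % overdetermined, M > N
    [U1, S, U2] = svd(A);                       % A = U1*S*U2'
    norm(A - U1*S*U2')                          % close to zero
    % Pseudoinverse solution, Equations (E.13) and (E.14)
    x_pinv = pinv(A) * b;
    x_ls   = (A'*A) \ (A'*b);                   % least squares, Equation (D.11)
    norm(x_pinv - x_ls)                         % the solutions agree for a well-conditioned A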


Appendix F Organizations and Resources

For the newcomer to the field of noise and vibrations, it is desirable to find good resources for information. To facilitate this need, this appendix gives an overview of some useful places to search for more information. It has not been my intention to exclude any particular organization or resource; however, by necessity, the information in this appendix will be an incomplete list. See it as a set of examples, and if you go to these places to look for more information, you will be "in the loop," and you will be able to find your way through the vast range of information available, particularly with the easy access to the Internet today. I should also point out that there are domestic organizations in most countries which are involved in (usually) either acoustics, vibrations, or environmental engineering vibration testing. For scientific journals, the reader is referred to the Bibliography section. The (main) journals published by the organizations below are, however, mentioned in conjunction with each organization.

The Acoustical Society of America, ASA, is a scientific organization, as its name implies, predominantly in acoustics. Apart from publishing the Journal of the Acoustical Society of America, it organizes a number of scientific meetings.

The Catholic University of Leuven, K.U. Leuven, in Belgium, organizes a biannual conference, the International Conference on Noise and Vibration Engineering, also known as ISMA, which is the largest international conference on noise and vibration engineering in Europe. Information about ISMA can be found at http://www.isma-isaac.be.

The Institute of Environmental Sciences and Technology is a professional organization not only for vibrations but also for all kinds of environmental testing (climate, etc.). It organizes the annual ESTECH conference, and its website is http://www.iest.org.

The International Institute of Acoustics and Vibration, IIAV, is a worldwide scientific organization which, among other things, organizes an annual conference, the International Conference on Sound and Vibration, ICSV. The IIAV website is http://www.iiav.org.

The International Institute of Noise Control Engineering, I-INCE, is an international consortium of organizations in this field and organizes the annual INTER-NOISE conference. Its website is http://www.i-ince.org.

The International Operational Modal Analysis Conference, IOMAC, is a relatively new, biannual conference specializing in operational modal analysis, OMA. Information about the conference can be found at http://www.iomac.dk.


The Shock and Vibration Information Analysis Center, SAVIAC, is an American organization which, among other things, organizes the annual Shock and Vibration Symposium in the United States. It also publishes the Shock and Vibration Journal. Its website is http://www.saviac.org.

The Society for Experimental Mechanics, SEM, is a worldwide organization. It organizes, among other things, the annual International Modal Analysis Conference, IMAC, which is, despite its name, a conference with topics presented from many fields of noise and vibration engineering. SEM also publishes Experimental Techniques and has a website at http://www.sem.org.

SAE International is an organization which, among many things, has sections on vibrations in, predominantly, automotive and aerospace applications. SAE organizes a biannual conference on vibrations, and its website is http://www.sae.org.


Appendix G Checklist for Experimental Modal Analysis Testing

To summarize the points that ensure a successful EMA test from Section 16.2, the following checklist can be used to ensure that all the most important checks are remembered during the measurement setup phase. No list like this can, unfortunately, be entirely comprehensive. This is particularly true for EMA, which is, as stated before, often considered an art form more than a technical application. Anyhow, this list should at least serve as some sort of guideline for the inexperienced user, ensuring the most important things are not forgotten.
1. Plan the test by determining suitable measurement DOFs, reference points, etc.
2. Enter the geometry description into your software.
3. Support the structure according to your decision (free-free or alternatively supported).
4. Mark all measurement nodes on the structure. If necessary, measure the exact locations.
5. For a shaker test:
   (a) Mount the accelerometers and possibly dummy masses, and draw cables to the measurement system.
   (b) Enter the sensitivity of each sensor, and other information your measurement system needs, including the DOF of each sensor.
   (c) Mount the force sensor(s), align the stinger/shaker(s) and attach it/them. Enter the force sensor information into your measurement system.
   (d) Select your choice of excitation signal. The preferred excitation signal should be pseudo random (single shaker) or periodic random (multiple shakers). Turn the amplifier's volume knob all the way down and turn the signal generator(s) on. Then slowly turn the amplifier volume up until you (barely) hear your excitation signal if you put your ear against the structure (note: this is of course not always possible, for example, if the frequency range is below the audible range). This is the preferred level; increasing the excitation level increases the risk of nonlinearities.
   (e) Look at the voltage levels of your accelerometer signals and ensure they are close enough to 5 V, assuming you are using IEPE sensors. If you are using some other sensors, you may need to change the input range of your measurement system for each channel.
   (f) Try out measurement settings (frequency range, excitation signal, blocksize, and perhaps other settings such as burst length for burst random excitation) until the quality of each FRF is good (assessed by ensuring the coherence function is near unity).


   (g) When the right measurement parameters are found, check reciprocity. If it is not in order, try disconnecting all shakers, realigning, and connecting again, until the reciprocity is good.
   (h) Look at the imaginary part of each driving point FRF (if accelerances; real part if mobility) and ensure that all peaks point in the correct direction (which depends on the individual directions of the force sensor and accelerometer; if both point in the same direction, the imaginary part should peak in the positive direction).
   (i) Next, investigate the correlation between the force signals to avoid unreliable FRF estimates. There are different ways to do this, depending on your measurement system. I personally prefer to investigate the virtual coherences (see Chapter 15).
6. For impact testing, instead do the following (for simplicity, we assume a roving hammer test!):
   (a) Mount the reference accelerometers, connect them to the measurement system, and enter the sensor information into your measurement system.
   (b) Test out the right hammer tip and measurement settings. Follow the instructions in Chapter 13 and Section 19.7.
   (c) Check another excitation location far from the first one, maybe in another direction, to ensure the settings work also for other impact DOFs.
7. You are now ready to start acquiring data. If you need to do this in several steps, be careful when moving accelerometers so that you do not change the position of the structure on the supporting springs, etc. For each set of data, make sure to check all FRFs and coherence functions before moving on, so that you are sure the data are good. Also, be very careful to ensure that you enter the correct measurement DOFs for all channels in each new measurement.
8. Once your data are acquired, assess the quality by computing and plotting the MIF and see that it looks good.
9. Proceed with the modal parameter extraction as described in Chapter 16 and Section 19.8.2.


Bibliography Abdeljaber O, Dorn M and Brandt A 2021 Scaling an OMA modal model of a wood building using OMAH and a small shaker Topics in Modal Analysis & Testing, Volume 8 Springer pp. 151–157. Aenlle ML and Brincker R 2013 Modal scaling in operational modal analysis using a finite element model. International Journal of Mechanical Sciences 76, 86–101. Ahlin K 2006 Comparison of test specifications and measured field data. Sound and Vibration 40(9), 22–25. Ahlin K, Magnevall M and Josefsson A 2006 Simulation of forced response in linear and nonlinear mechanical systems using digital filters Proceedings of International Conference on Noise and Vibration Engineering (ISMA), Catholic University, Leuven, Belgium, pp. 3817–3831. Allemang RJ 2003 The modal assurance criterion - twenty years of use and abuse. Sound and Vibration 37(8), 14–23. Allemang RJ and Brown DL 1982 A correlation coefficient for modal vector analysis Proceedings of the 1st International Modal Analysis Conference, Orlando, FL. Allemang R and Brown D 1987 Experimental modal analysis and dynamic component synthesis – vol 3: modal parameter estimation. Technical report, USAF – Contract No F33615–83–C–3218, AFWAL–TR–87–3069. Allemang RJ and Brown DL 1998 A unified matrix polynomial approach to modal identification. Journal of Sound and Vibration 211(3), 301–322. Allemang RJ and Brown DL 2006 A complete review of the complex mode indicator function (CMIF) with applications Proceedings of the International Conference on Noise and Vibration Engineering (ISMA2006), pp. 3209–3246. Allemang RJ and Phillips AW 2004a The impact of measurement condensation and modal participation vector normalization on the estimation of modal vectors and scaling Proceedings of the 22nd International Modal Analysis Conference (IMAC), Dearborn, MI. Allemang RJ and Phillips AW 2004b The unified matrix polynomial approach to understanding modal parameter estimation: an update Proceedings of International Conference on Noise and Vibration Engineering (ISMA). Allemang RJ, Rost RW and Brown DL 1984 Multiple input estimation of frequency response functions Proceedings of the 2nd International Modal Analysis Conference, Orlando, FL.




Allemang R, Phillips A and Brown D 2011 Combined state order and model order formulations in the unified matrix polynomial method (UMPA) Proceedings of the 29th International Modal Analysis Conference (IMAC), Jacksonville, FL. Andersson T and Händel P 2006 IEEE Standard 1057, Crame/spl acute/r-Rao bound and the parsimony principle. IEEE Transactions on Instrumentation and Measurement 55(1), 44–53. ANSI S1.11 2004 Specification for Octave-Band and Fractional-Octave-Band Analog and Digital Filters. American National Standards Institute. Antoni J and Chauhan S 2013 A study and extension of second-order blind source separation to operational modal analysis. Journal of Sound and Vibration 332(4), 1079–1106. Antoni J and Schoukens J 2007 A comprehensive study of the bias and variance of frequency-response-function measurements: optimal window selection and overlapping strategies. Automatica 43(10), 1723–1736. Antoni J and Schoukens J 2009 Optimal settings for measuring frequency response functions with weighted overlapped segment averaging. IEEE Transactions on Instrumentation and Measurement 58(9), 3276–3287. Antoni J, Wagstaff P and Henrio JC 2004 Hα -a consistent estimator for frequency response functions with input and output noise. IEEE Transactions on Instrumentation and Measurement 53(2), 457–465. Asmussen J 1997 Modal Analysis Based on the Random Decrement Technique – Application to Civil Engineering Structures PhD thesis Dept. of Building Technology and Structural Engineering, University of Aalborg. Austrell PE, Dahlblom O, Lindemann J, Olsson A, Olsson KG, Persson K, Petersson H, Ristinmaa M, Sandberg G and Wernberg PA 2004 CALFEM - a finite element toolbox version 3.4. Technical report, Lund University, The Division of Structural Mechanics. Avitabile P 2017 Modal Testing: A Practitioner’s Guide 1st edn. Wiley, Hoboken, NJ. Belega D and Petri D 2016 Accuracy analysis of the sine-wave parameters estimation by means of the windowed three-parameter sine-fit algorithm. Digital Signal Processing: A Review Journal 50, 12–23. Bendat J and Piersol A 1993 Engineering Applications of Correlation and Spectral Analysis 2nd edn. Wiley Interscience. Bendat J and Piersol AG 2000 Random Data: Analysis and Measurement Procedures 3rd edn. Wiley Interscience. Bendat J and Piersol AG 2010 Random Data: Analysis and Measurement Procedures 4th edn. Wiley Interscience. Bernal D 2004 Modal scaling from known mass perturbations. Journal of Engineering Mechanics 130(9), 1083–1088. Bernal D 2011 A receptance based formulation for modal scaling using mass perturbations. Mechanical Systems and Signal Processing 25(2), 621–629. Berntsen J and Brandt A 2022 Periodogram ratio based automatic detection and removal of harmonics in time or angle domain. Mechanical Systems and Signal Processing 165, 108310. Blackman RB and Tukey JW 1958a The measurement of power spectra from the point of view of communications engineering .1. Bell System Technical Journal 37(1), 185–282. Blackman RB and Tukey JW 1958b The measurement of power spectra from the point of view of communications engineering .2. Bell System Technical Journal 37(2), 485–569.

Bibliography

Blough J 1998 Improving the Analysis of Operating Data on Rotating Automotive Components PhD thesis University of Cincinnati, College of Engineering. Bogert B, Healy M and Tukey J 1963 The quefrency alanysis of time series for echoes: cepstrum, pseudo-autocovariance, cross-cepstrum and saphe cracking Proceedings of Symposium on time Series Analysis (ed. Rosenblatt M), pp. 209–243. Brandt A 2013 ABRAVIBE – a toolbox for teaching and learning vibration analysis. Sound and Vibration 47(11), 12–17. Brandt A 2015 Comparison and assessment of methods to treat harmonics in operational modal analysis International Conference on Structural Engineering Dynamics (ICEDyn), Lagos, Portugal. Brandt A 2019 A signal processing framework for operational modal analysis in time and frequency domain. Mechanical Systems and Signal Processing 115, 380–393. Brandt A and Ahlin K 2003 A digital filter method for forced response computation Proceedings of the 21st International Modal Analysis Conference, Kissimmee, FL. Brandt A and Brincker R 2011 Impact excitation processing for improved frequency response quality Structural Dynamics, Volume 3 Springer, New York pp. 89–95. Brandt A and Brincker R 2014 Integrating time signals in frequency domain – comparison with time domain integration. Measurement 58, 511–519. Brandt A, Lago T, Ahlin K and Tuma J 2005 Main principles and limitations of current order tracking methods. Sound and Vibration 39(3), 19–22. Brandt A, Sturesson PO and Ristinmaa M 2014 Test analysis verification using open software. Sound and Vibration 48(6), 13–16. Brandt A, Berardengo M, Manzoni S and Cigada A 2017 Scaling of mode shapes from operational modal analysis using harmonic forces. Journal of Sound and Vibration 407, 128–143. Brandt A, Berardengo M, Manzoni S, Vanali M and Cigada A 2019 Global scaling of operational modal analysis modes with the OMAH method. Mechanical Systems and Signal Processing 117, 52–64. Brincker R 2017 On the application of correlation function matrices in OMA. Mechanical Systems and Signal Processing 87, Part A, 17–22. Brincker R and Ventura C 2015 Introduction to Operational Modal Analysis. John Wiley & Sons, Chichester, UK. Brincker R, Krenk S, Kirkegaard PH and Rytter A 1992 Identification of dynamical properties from correlation function estimates. Bygningsstatiske Meddelelser 63(1), 1–38. Brincker R, Zhang LM and Andersen P 2001 Modal identification of output-only systems using frequency domain decomposition. Smart Materials & Structures 10(3), 441–445. Brincker R, Brandt A and Bolton R 2010 Calibration and processing of geophone signals for structural vibration measurements Proceedings of the 28th International Modal Analysis Conference, Jacksonville, FL Society for Experimental Mechanics. Brown DL, Allemang R, Zimmerman R and Mergeay M 1979 Parameter estimation techniques for modal analysis. SAE Tech. Paper 790221. Brownjohn JMW and Pavic A 2007 Experimental methods for estimating modal mass in footbridges using human-induced dynamic excitation. Engineering Structures 29(11), 2833–2843.

649

650

Bibliography

Brownjohn J, Magalhaes F, Caetano E and Cunha A 2010 Ambient vibration re-testing and operational modal analysis of the humber bridge. Engineering Structures 32(8), 2003–2018. Brownlee K 1984 Statistical Theory and Methodology. Krieger Publishing Company. Cara FJ, Juan J and Alarcón E 2014 Estimating the modal parameters from multiple measurement setups using a joint state space model. Mechanical Systems and Signal Processing 43(1–2), 171–191. Carlsson B 1991 Maximum flat digital differentiator. Electronics Letters 27(8), 675–677. Carne TG, Griffith DT and Casias ME 2007 Support conditions for experimental modal analysis. Sound and Vibration 41(6), 10–16. Cauberghe B 2004 Applied Frequency-domain System Identification in the Field of Experimental and Operational Modal Analysis PhD thesis Vrije University of Brussels, Brussels, Belgium. Cauberghe B, Guillaume P, Verboven P, Vanlanduit S and Parloo E 2005 On the influence of the parameter constraint on the stability of the poles and the discrimination capabilities of the stabilisation diagrams. Mechanical Systems and Signal Processing 19(5), 989–1014. Christensen SS, Andersen MS and Brandt A 2019 Dynamic characterization of the little belt suspension bridge by operational modal analysis Dynamics of Civil Structures, Volume 2 Springer pp. 17–22. Clough RW and Penzien J 2003 Dynamics of Structures Computers & Structures Inc., Berkeley, CA. Cooley JW and Tukey JW 1965 An algorithm for machine calculation of complex Fourier series. Mathematics of Computation 19(90), 297–301. Cooley JW and Tukey JW 1993 On the origin and publication of the FFT paper - a citation-classic commentary on an algorithm for the machine calculation of complex Fourier-series - Cooley, J.W., Tukey, J.W. Current Contents/Engineering Technology & Applied Sciences (51–52), 8–9. Cooley J, Lewis P and Welch P 1967 Historical notes on the fast Fourier transform. IEEE Transactions on Audio and Electroacoustics 15(2), 76–79. Cooley JW, Lewis PAW and Welch PD 1970 The application of the fast Fourier transform algorithm to the estimation of spectra and cross-spectra. Journal of Sound and Vibration 12(3), 339–352. Coppotelli G 2009 On the estimate of the FRFs from operational data. Mechanical Systems and Signal Processing 23(2), 288–299. Craig RR and Kurdila AJ 2006 Fundamentals of Structural Dynamics. John Wiley & Sons. Daniell PJ 1946 Discussion of ‘on the theoretical specification and sampling properties of autocorrelated time-series’. Journal of the Royal Statistical Society 8 (Suppl.)(1), 88–90. Deblauwe F, Brown DL and Allemang RJ 1987 The polyreference time domain technique Proceedings of the 5th International Modal Analysis Conference (IMAC), London, England. Den Hartog JP 1985 Mechanical Vibrations. Dover Publications Inc. Devriendt C and Guillaume P 2008 Identification of modal parameters from transmissibility measurements. Journal of Sound and Vibration 314(1–2), 343–356. Dippery KD, Phillips AW and Allemang RJ 1996 Condensation of the spatial domain in modal parameter estimation. Modal Analysis-the International Journal of Analytical and Experimental Modal Analysis 11(3–4), 216–225. Döhler M, Reynders E, Magalhaes F, Mevel L, Roeck GD and Cunha A 2011 Pre-and post-identification merging for multi-setup OMA with covariance-driven SSI Dynamics of Bridges Volume 5 Springer pp. 57–70.

Bibliography

Döhler M, Lam XB and Mevel L 2013 Uncertainty quantification for modal parameters from stochastic subspace identification on multi-setup measurements. Mechanical Systems and Signal Processing 36(2), 562–581. Einstein A 1914 Méthode pour la détermination de valeurs statistiques d’observations concermant des grandeurs soumises à des fluctuations irréguliéres (method for the determinination of the statistical values of observations concerning quantities subject to irregular fluctuations). Archives des Sciences Physiques et Naturelles 37(4), 254–256. Endo H, Randall RB and Gosselin C 2009 Differential diagnosis of spall vs. cracks in the gear tooth fillet region: experimental validation. Mechanical Systems and Signal Processing 23(3), 636–651. Ewins DJ 2000a Basics and state-of-the-art of modal testing. Sadhana 25(3), 207–220. Ewins DJ 2000b Modal Testing: Theory, Practice and Application 2nd edn. Research Studies Press, Baldock, Hertfordshire, England. Fladung W 1994 The Development and Implementation of Multiple Reference Impact Testing Master’s thesis University of Cincinnati. Fladung W and Rost R 1997 Application and correction of the exponential window for frequency response functions. Mechanical Systems and Signal Processing 11(1), 23–36. Fladung W, Zucker A, Phillips A and Allemang R 1999 Using cyclic averaging with impact testing Proceedings of the 17th International Modal Analysis Conference, Kissimmee, FL Society for Experimental Mechanics. Fonseca da Silva M, Ramos PM and Serra A 2004 A new four parameter sine fitting technique. Measurement 35(2), 131–137. Fukuzono K 1986 Investigation of Multiple–Reference Ibrahim Time Domain Modal Parameter Estimation Technique Master’s thesis Dept. of Mechanical and Industrial Engineering, University of Cincinnati. Fyfe KR and Munck EDS 1997 Analysis of computed order tracking. Mechanical Systems and Signal Processing 11(2), 187–205. Gaberson HA 2003 Using the velocity shock spectrum to predict shock damage. Sound and Vibration 37(9), 5–6. Gaberson HA, Pal D and Chapler RS 2000 Classification of violent environments that cause equipment failure. Sound and Vibration 34(5), 16–23. Gao Y and Randall RB 1996a Determination of frequency response functions from response measurements .1. Extraction of poles and zeros from response cepstra. Mechanical Systems and Signal Processing 10(3), 293–317. Gao Y and Randall RB 1996b Determination of frequency response functions from response measurements .2. Regeneration of frequency response functions from poles and zeros. Mechanical Systems and Signal Processing 10(3), 319–340. Goyder H 1984 Foolproof methods for frequency response measurements Proceedings of the 2nd International Conference on Recent Advances in Structural Dynamics, Southampton, UK. Greenfield J 1977 Dealing with the shock environment using the shock response spectrum analysis. Journal of the Society of Environmental Engineers (9), 3–15. Guillaume P, Verboven P, Vanlanduit S, Van der Auweraer H and Peeters B 2003 A poly-reference implementation of the least-squares complex frequency-domain estimator Proceedings of the 21st International Modal Analysis Conference, Kissimmee, FL. Håkansson B and Carlsson P 1987 Bias errors in mechanical impedance data obtained with impedance heads. Journal of Sound and Vibration 113(1), 173–183.

651

652

Bibliography

Halvorsen WG and Brown DL 1977 Impulse technique for structural frequency-response testing. Sound and Vibration 11(11), 8–21. Händel P 2010 Amplitude estimation using IEEE-STD-1057 three-parameter sine wave fit: statistical distribution, bias and variance. Measurement 43(6), 766–770. Hannig J and Lee TCM 2004 Kernel smoothing of periodograms under Kullback–Leibler discrepancy. Signal Processing 84(7), 1255–1266. Hanson D, Randall RB, Antoni J, Thompson DJ, Waters TP and Ford RAJ 2007a Cyclostationarity and the cepstrum for operational modal analysis of MIMO systems - Part I: Modal parameter identification. Mechanical Systems and Signal Processing 21(6), 2441–2458. Hanson D, Randall RB, Antoni J, Waters TP, Thompson DJ and Ford RAJ 2007b Cyclostationarity and the cepstrum for operational modal analysis of MIMO systems Part II: Obtaining scaled mode shapes through finite element model updating. Mechanical Systems and Signal Processing 21(6), 2459–2473. Harris FJ 1978 On the use of windows for harmonic-analysis with the discrete Fourier-transform. Proceedings of the IEEE 66(1), 51–83. Haykin S 2003 Signals and Systems 2nd edn. John Wiley & Sons. Heidemann MT, Johnson DH and Burrus CS 1984 Gauss and the history of the fast Fourier-transform. IEEE ASSP Magazine 34(3), 15–21. Henderson GR and Piersol AG 2003 Evaluating vibration environments using the shock response spectrum. Sound and Vibration 37(4), 18–21. Heylen W, Lammens S and Sas P 1997 Modal Analysis Theory and Testing 2nd edn. Catholic University Leuven, Leuven, Belgium. Higgins RJ 1990 Digital Signal Processing in VLSI. Prentice Hall. Himmelblau H, Piersol AG, Wise JH and Grundvig MR 1993 Handbook for Dynamic Data Acquisition and Analysis. Institute of Environmental Sciences and Technology, Mount Prospect, IL. Hons MS and Stewart RR 2006 Transfer functions of geophones and accelerometers and their effects on frequency content and wavelets. Technical report, CREWES, www.crewes.org. Hotelling H 1933 Analysis of a complex of statistical variables into principal components. Journal of Educational Psychology 24, 417–441, 498–520. Hwang JS, Kim H and Kim J 2006 Estimation of the modal mass of a structure with a tuned-mass damper using H-infinity optimal model reduction. Engineering Structures 28(1), 34–42. Ibrahim S and Mikulcik E 1973 A time domain modal vibration test technique. Shock and Vibration Bulletin 43(4), 21–37. Ibrahim SR and Mikulcik EC 1977 A method for the direct identification of vibration parameters from the free response. The Shock and Vibration Bulletin 47(47), 183–198. IEC 61260 1995 Electroacoustics – Octave-Band and Fractional-Octave-Band Filters. International Electrotechnical Commission. IEC 61672-1 2005 Electroacoustics - Sound Level Meters – Part 1: Specifications. International Electrotechnical Commission. IEEE 1057 2017 Standard for Digitizing Waveform Recorders. IEEE 1241 2010 Standard for Terminology and Test Methods for Analog-to-Digital Converters.

Bibliography

IEEE 1451.4 2004 A Smart Transducer Interface for Sensors and Actuators – Mixed-mode Communication Protocols and Transducer Electronic Data Sheet (TEDS) Formats. IEEE Standards Association. Inman D 2007 Engineering Vibration 3rd edn. Prentice Hall. ISO 18431-1 2005 Mechanical Vibration and Shock – Signal Processing – Part 1: General Introduction. ISO 18431-4 2007 Mechanical Vibration and Shock – Signal Processing – Part 4: Shock Spectrum Analysis. ISO 2631-1 1997 Mechanical Vibration and Shock – Evaluation of Human Exposure to Whole-Body Vibration – Part 1: General Requirements. ISO 2631-5 2004 Mechanical Vibration and Shock – Evaluation of Human Exposure to Whole-Body Vibration – Part 5: Method for Evaluation of Vibration Containing Multiple Shocks. ISO 2641 1990 Vibration and Shock – Vocabulary. ISO 8041 2005 Human Response to Vibration – Measuring Instrumentation. James G, Carne TG, Lauffer JP and Nord AR 1992 Modal testing using natural excitation Proceedings of the 10th International Modal Analysis Conference, San Diego, CA. Jelicic G, Böswald M and Brandt A 2021 Improved computation in terms of accuracy and speed of LTI system response with arbitrary input. Mechanical Systems and Signal Processing 150, 107252. Juang, J-N and Pappa, RS 1985 An eigensystem realization-algorithm for modal parameter-identification and model-reduction. Journal of Guidance Control and Dynamics 8(5), 620–627. Kammer DC 1991 Sensor placement for on-orbit modal identification and correlation of large space structures. Journal of Guidance, Control, and Dynamics 14(2), 251–259. van Kann F and Winterflood J 2005 Simple method for absolute calibration of geophones, seismometers, and other inertial vibration sensors. Review of Scientific Instruments 76(3), 034501. Kay SM and Marple SL 1981 Spectrum analysis - a modern perspective. Proceedings of the IEEE 69(11), 1380–1419. Kennedy C and Pancu C 1947 Use of vectors in vibration measurement and analysis. Journal of the Aeronautical Sciences 14(11), 603–625. Kerschen G, Poncelet F and Golinval JC 2007 Physical interpretation of independent component analysis in structural dynamics. Mechanical Systems and Signal Processing 21(4), 1561–1575. Khintchine A 1934 Korrelationstheorie der stationären stochastischen prozesse. Matematische Annalen 109(1), 604–615. Konstantin-Hansen H and Herlufsen H 2010 Envelope and cepstrum analyses for machinery fault identification. Sound and Vibration 44(5), 10–12. Kozin F and Natke HG 1986 System-identification techniques. Structural Safety 3(3–4), 269–316. Kumar B and Roy SCD 1988 Coefficients of maximally linear, FIR digital differentiators for low-frequencies. Electronics Letters 24(9), 563–565. Lalanne C 2002 Mechanical Vibration & Shock – Specification Development, Volume 5. CRC Press.

653

654

Bibliography

Le Bihan J 1995 Maximally linear FIR digital differentiators. Circuits Systems and Signal Processing 14(5), 633–637. Lembregts F 1988 Frequency Domain Identification Techniques for Experimental Multiple Input Modal Analysis PhD thesis Katholieke Universiteit Leuven, Leuven, Belgium. Lembregts F, Leuridan J and Vanbrussel H 1990 Frequency-domain direct parameteridentification for modal-analysis - state-space formulation. Mechanical Systems and Signal Processing 4(1), 65–75. Linderholt A and Abrahamsson T 2005 Optimising the informativeness of test data used for computational model updating. Mechanical Systems and Signal Processing 19(4), 736–750. Lyon R 2000 Designing for Product Sound Quality. CRC Press. Magalhaes F, Cunha A, Caetano E and Brincker R 2010 Damping estimation using free decays and ambient vibration tests. Mechanical Systems and Signal Processing 24(5), 1274–1290. Maia NMM and Silva JMM 2001 Modal analysis identification techniques. Philosophical Transactions of the Royal Society of London Series A: Mathematical Physical and Engineering Sciences 359(1778), 29–40. (ed. Maia N and Silva J) 2003 Theoretical and Experimental Modal Analysis. Research Studies Press, Baldock, Hertforsdhire, England. Mansfield NJ 2005 Human Response to Vibration. CRC Press. Mitchell L 1982 Improved methods for the FFT calculation of the frequency response function. Journal of Mechanical Design 104(2), 277–279. Mitchell L and Cobb R 1987 An unbiased frequency response function estimator Proceedings of the 5th International Modal Analysis Conference, London, UK, pp. 364–373. Negusse S, Händel P and Zetterberg P 2014 IEEE-STD-1057 three parameter sine wave fit for SNR estimation: performance analysis and alternative estimators. IEEE Transactions on Instrumentation and Measurement 63(6), 1514–1523. Newland DE 2005 An Introduction to Random Vibrations, Spectral, and Wavelet Analysis 3rd edn. Dover Publications Inc. Newton I 1687 Philosophiæ Naturalis Principia Mathematica. London, UK. Norfield D 2006 Practical Balancing of Rotating Machinery. Elsevier Science. Nuttall AH 1981 Some windows with very good sidelobe behavior. IEEE Transactions on Acoustics Speech and Signal Processing 29(1), 84–91. Nuttall A and Carter C 1982 Spectral estimation using combined time and lag weighting. Proceedings of the IEEE 70(9), 1115–1125. Nyquist H 2002 Certain topics in telegraph transmission theory (reprinted from transactions of the A. I. E. E., February, p. 617–644, 1928). Proceedings of the IEEE 90(2), 280–305. O’Callahan J, Avitabile P and Riemer R 1989 System equivalent reduction expansion process (SEREP) Proceedings of the 7th International Modal Analysis Conference, pp. 29–37. Oppenheim AV and Schafer RW 1975 Digital Signal Processing. Prentice Hall. Oppenheim AV, Schafer RW and Buck JR 1999 Discrete-Time Signal Processing. Pearson Education. Orlowitz E and Brandt A 2014a Effects of simultaneous versus roving sensors measurement in operational modal analysis Proceedings of the International Conference on Noise and Vibration Engineering (ISMA 2014). Orlowitz E and Brandt A 2014b Operational modal analysis for dynamic characterization of a Ro-Lo ship. Journal of Ship Research 58(4), 216–224.

Bibliography

Orlowitz E and Brandt A 2017 Comparison of experimental and operational modal analysis on a laboratory test plate. Measurement 102, 121–130. Orlowitz E, Andersen P and Brandt A 2015 Comparison of simultaneous and multi-setup measurement strategies in operational modal analysis Proceedings of the 5th International Operational Modal Analysis Conference (IOMAC), Gijón, Spain. Otnes RK and Enochson L 1972 Digital Time Series Analysis. Wiley Interscience. Otte D 1994 Development and Evaluation of Singular Value Analysis Methodologies for Studying Multivariate Noise and Vibration Problems PhD thesis Catholic University Leuven, Belgium. Otte D, de Ponseele PV and Leuridan J 1990 Operational deflection shapes in multisource environments Proceedings of the 8th International Modal Analysis Conference, Kissimmee, FL. van Overschee P and De Moor B 1996 Subspace Identification for Linear Systems: Theory – Implementation – Applications. Springer. Pan MC and Wu CX 2007 Adaptive Vold–Kalman filtering order tracking. Mechanical Systems and Signal Processing 21(8), 2957–2969. Pan MC, Liao SW and Chiu CC 2007 Improvement on Gabor order tracking and objective comparison with Vold–Kalman filtering order tracking. Mechanical Systems and Signal Processing 21(2), 653–667. Papoulis A 2002 Probability, Random Variables, and Stochastic Processes 4th edn. McGraw-Hill. Parks TW and McClellan J 1972 Chebyshev approximation for nonrecursive digital filters with linear phase. IEEE Transactions on Circuit Theory CT19(2), 189–194. Parloo E, Cauberghe B, Benedettini F, Alaggio R and Guillaume P 2005 Sensitivity-based operational mode shape normalisation: application to a bridge. Mechanical Systems and Signal Processing 19(1), 43–55. Pauwels S, Michel J, Robijns M, Peeters B and Debille J 2006 A new MIMO sine testing technique for accelerated, high quality FRF measurements Proceedings of the 24th International Modal Analysis Conference, St. Louis, MO Society for Experimental Mechanics. Pelant P, Tuma J and Benes T 2004 Vold–Kalman order tracking filtration in car noise and vibration measurements Proceedings of the 33rd International Congress and Exposition on Noise Control Engineering, INTER-NOISE, Prague, Czech Republic. Phillips A and Allemang R 1996 Single degree-of-freedom modal parameter estimation methods Proceedings of the 14th International Modal Analysis Conference, Dearborn, MI Society for Experimental Mechanics. Phillips AW and Allemang RJ 2003 An overview of MIMO-FRF excitation/averaging/ processing techniques. Journal of Sound and Vibration 262(3), 651–675. Phillips AW and Allemang RJ 2005 Data presentation schemes for selection and identification of modal parameters Proceedings of International Modal Analysis Conference (IMAC), p. 10. Pintelon R and Schoukens J 1990 Real-time integration and differentiation of analog-signals by means of digital filtering. IEEE Transactions on Instrumentation and Measurement 39(6), 923–927. Pintelon R and Schoukens J 2012 System Identification: A Frequency Domain Approach. John Wiley & Sons. Pintelon R, Peeters B and Guillaume P 2008 Continuous-time operational modal analysis in the presence of harmonic disturbances. Mechanical Systems and Signal Processing 22(5), 1017–1035.

Potter R 1990a A new order tracking method for rotating machinery. Sound and Vibration 24(9), 30–34.
Potter R 1990b Tracking and resampling method and apparatus for monitoring the performance of rotating machines. US4912661A.
Proakis JG and Manolakis DG 2006 Digital Signal Processing: Principles, Algorithms, and Applications 4th edn. Prentice Hall.
de Prony BGR 1795 Essai expérimental et analytique: sur les lois de la dilatabilité de fluides élastiques et sur celles de la force expansive de la vapeur de l’alkool, à différentes températures. Journal de l’école Polytechnique 1(22), 24–76.
Qian S 2003 Gabor expansion for order tracking. Sound and Vibration 37(6), 18–22.
Rabiner LR and Schafer RW 1974 On the behavior of minimax relative error FIR digital differentiators. Bell System Technical Journal 53(2), 333–361.
Rades M 1994 A comparison of some mode indicator functions. Mechanical Systems and Signal Processing 8(4), 459–474.
Rainieri C and Fabbrocino G 2014 Operational Modal Analysis of Civil Engineering Structures. Springer, New York.
Randall RB 1982 Cepstrum analysis and gearbox fault diagnosis. Maintenance Management International 3(3), 183–208.
Randall RB 2009 Cepstral methods of operational modal analysis. Chapter 24 in Encyclopedia of Structural Health Monitoring.
Randall RB 2021 Vibration-Based Condition Monitoring 2nd edn. John Wiley & Sons.
Randall RB and Gao Y 1994 Extraction of modal parameters from the response power cepstrum. Journal of Sound and Vibration 176(2), 179–193.
Randall RB and Sawalhi N 2011 A new method for separating discrete components from a signal. Sound and Vibration 45(5), 6–9.
Randall RB, Sawalhi N and Coats M 2011 A comparison of methods for separation of deterministic and random signals. International Journal of Condition Monitoring 1(1), 11–19.
Randall RB, Coats MD and Smith WA 2015 OMA in the presence of variable speed harmonic orders ICEDyn 2015 International Conference on Structural Engineering Dynamics, Lagos, Portugal, pp. 22–24.
Rao S 2003 Mechanical Vibrations 4th edn. Pearson Education.
Reljin IS, Reljin BD and Papic VD 2007 Extremely flat-top windows for harmonic analysis. IEEE Transactions on Instrumentation and Measurement 56(3), 1025–1041.
Richardson M and Formenti D 1982 Parameter estimation from frequency response measurements using rational fraction polynomials Proceedings of the 1st International Modal Analysis Conference, Orlando, FL Society for Experimental Mechanics.
Rimell AN and Mansfield NJ 2007 Design of digital filters for frequency weightings required for risk assessments of workers exposed to vibration. Industrial Health 45(4), 512–519.
Rocklin GT, Crowley J and Vold H 1985 A comparison of H1, H2, and Hv frequency response functions Proceedings of the 3rd International Modal Analysis Conference, Orlando, FL.
Saavedra PN and Rodriguez CG 2006 Accurate assessment of computed order tracking. Shock and Vibration 13(1), 13–32.
Schmidt H 1985a Resolution bias errors in spectral density, frequency response and coherence function measurements: errata. Journal of Sound and Vibration 101(3), 377–404.

Schmidt H 1985b Resolution bias errors in spectral density, frequency response and coherence function measurements, I: General theory. Journal of Sound and Vibration 101(3), 347–362.
Schoukens J, Rolain Y and Pintelon R 2006 Analysis of windowing/leakage effects in frequency response function measurements. Automatica 42(1), 27–38.
Serridge M and Licht T 1986 Piezoelectric Accelerometer and Vibration Preamplifier Handbook. Brüel & Kjær, Nærum, Denmark.
Shannon CE 1998 Communication in the presence of noise (reprinted from the proceedings of the IRE, vol 37, p. 10–21, 1949). Proceedings of the IEEE 86(2), 447–457.
Shao H, Jin W and Qian S 2003 Order tracking by discrete Gabor expansion. IEEE Transactions on Instrumentation and Measurement 52(3), 754–761.
Sheskin D 2004 Handbook of Parametric and Nonparametric Statistical Procedures 3rd edn. Chapman & Hall.
Shih C, Tsuei Y, Allemang R and Brown D 1988 Complex mode indication function and its applications to spatial domain parameter estimation. Mechanical Systems and Signal Processing 2(4), 367–377.
Smallwood D 1981 An improved recursive formula for calculating shock response spectra. Shock and Vibration Bulletin 2(51), 4–10.
Smallwood D 1995 Using singular value decomposition to compute the conditioned cross-spectral density matrix and coherence functions Proceedings of the 66th Shock and Vibration Symposium, Volume 1, pp. 109–120.
Smallwood D and Gregory D 1986 A rectangular plate is proposed as an IES modal test structure Proceedings of 4th International Modal Analysis Conference, Los Angeles, CA Society for Experimental Mechanics.
Stoica P and Moses R 2005 Spectral Analysis of Signals. Prentice Hall.
Stoica P and Sundin T 1999 Optimally smoothed periodogram. Signal Processing 78(3), 253–264.
Strang G 2005 Linear Algebra and its Applications 4th edn. Brooks Cole, San Diego, CA.
Sturesson PO, Brandt A and Ristinmaa M 2013 Structural dynamics teaching example – a linear test analysis case using open software Proceedings of the 31st International Modal Analysis Conference (IMAC), Garden Grove, CA.
Tarpø M, Friis T, Georgakis C and Brincker R 2020 The statistical errors in the estimated correlation function matrix for operational modal analysis. Journal of Sound and Vibration 466, 115013.
Thrane N 1979 The discrete Fourier transform and FFT analyzers. Technical Report 1, Brüel & Kjær Technical Review No. 1.
Tomlinson G and Kirk N 1984 Modal analysis and identification of structural non-linearity Proceedings of the 2nd International Conference on Recent Advances in Structural Dynamics, University of Southampton, pp. 495–510.
Tran T, Claesson I and Dahl M 2004 Design and improvement of flattop windows with semi-infinite optimization Proceedings of the 6th International Conference on Optimization: Techniques and Applications, Ballarat, Australia.
Tucker S and Vold H 1990 On principal response analysis Proceedings of ASELAB Conference, Paris, France.
Tuma J 2004 Sound quality assessment using Vold–Kalman tracking filtering Seminar, Instruments and Control, Ostrava, Czech Republic.

Tuma J 2005 Setting the passband width in the Vold–Kalman order tracking filter Proceedings of the 12th ICSV, Lisbon, Portugal.
Viberg M 1995 Subspace-based methods for the identification of linear time-invariant systems. Automatica 31(12), 1835–1851.
Vold H 1986 Estimation of operating shapes and homogeneous constraints with coherent background noise Proceedings of ISMA 1986, International Conference on Noise and Vibration Engineering, Catholic University, Leuven, Belgium.
Vold H 1990 Numerically robust frequency domain modal parameter estimation. Sound and Vibration 24(1), 38–40.
Vold H and Leuridan J 1993 High resolution order tracking at extreme slew rates Proceedings of SAE Noise and Vibration Conference, Traverse City, MI Society of Automotive Engineers.
Vold H, Kundrat J, Rocklin TG and Russell R 1982 A multi-input modal estimation algorithm for mini-computer. SAE Tech. Paper 820194.
Vold H, Crowley J and Nessler J 1988 Tracking sine waves in systems with high slew rates Proceedings of the 6th International Modal Analysis Conference, Kissimmee, FL, pp. 189–193.
Vold H, Mains M and Blough J 1997 Theoretical foundation for high performance order tracking with the Vold–Kalman tracking filter Proceedings of the 1997 Noise and Vibration Conference, SAE, Volume 3, pp. 1083–1088.
Welch PD 1967 The use of fast Fourier transform for the estimation of power spectra: a method based on time averaging over short, modified periodograms. IEEE Transactions on Audio and Electroacoustics AU-15(2), 70–73.
White PR, Tan MH and Hammond JK 2006 Analysis of the maximum likelihood, total least squares and principal component approaches for frequency response function estimation. Journal of Sound and Vibration 290(3–5), 676–689.
Wicks A and Vold H 1986 The Hs frequency response estimator Proceedings of the 4th International Modal Analysis Conference, Los Angeles, CA.
Wiener N 1930 Generalized harmonic analysis. Acta Mathematica 55(1), 117–258.
Williams R and Vold H 1986 Multiphase-step-sine method for experimental modal analysis. International Journal of Analytical and Experimental Modal Analysis 1(2), 25–34.
Williams R, Crowley J and Vold H 1985 The multivariate mode indicator function in modal analysis Proceedings of the 3rd International Modal Analysis Conference, Orlando, FL.
Wirsching PH, Paez TL and Ortiz H 1995 Random Vibrations: Theory and Practice. Wiley Interscience.
Wise J 1983 The effects of digitizing rate and phase distortion errors on the shock response spectrum Proceedings of Institute of Environmental Sciences, Annual Technical Meeting, 29th, April 19–21, Los Angeles, CA.
Wowk V 1991 Machinery Vibration: Measurement and Analysis. McGraw-Hill.
Wright J, Cooper J and Desforges M 1999 Normal-mode force appropriation – theory and application. Mechanical Systems and Signal Processing 13(2), 217–240.
Zhang L, Kanda H, Brown D and Allemang R 1985 A polyreference frequency domain method for modal parameter identification ASME Paper No. 85-DET-106.
(ed. Zwillinger D) 2002 CRC Standard Mathematical Tables and Formulae 31st edn. Chapman & Hall.

Index

H1 estimator of MIMO system 373 of SISO system 330 H2 estimator of MIMO system 376, 397 of SISO system 331 Hc estimator 332 Hv estimator 376–377 2-input/1-output system 370 2DOF system introduction 121–123 matrix equations for 131

a Accelerance 107 Accelerometer 165 base strain sensitivity 169 calibration of 173 mass calibration description of 174 illustration 174 results 179 mass loading 168 mounted resonance frequency 166 mounting of 167 temperature sensitivity 169 transverse sensitivity 169 Acoustic A-weighting frequency domain 271–272 time domain 56 Acoustic C-weighting frequency domain 271–272 time domain 56

Aliasing 42 by circular convolution 221 preventing 43, 288 Analytic signal 522 Angle domain see Order domain Angular frequency 11 Antialias filter 288 Antiresonance 154 Autocorrelation see Correlation function Average ensemble 72 time 72 Averaging exponential 299 frequency domain 229, 298 interrupted 298 linear 299 of rotating machinery signals 318 peak hold 299 stable 299 time domain 253, 298

b Bandwidth of signal 40 resonance 105 Bandwidth-time product 78, 197 discussion for order tracking 304, 320 Bartlett window 216 Beating principle 13 SDOF system 106

Bias error definition of 74 normalized 74 of average estimate 78 of frequency response estimate 335–337 of smoothed periodogram estimator 248–249 of Welch estimator 237–242 Block Hankel matrix 467 Blocksize choosing for impact testing 347, 350 choosing for linear spectrum 265 choosing for shaker excitation 357, 360, 366 choosing for spectral density estimation 267 definition of 198 Burst random see Shaker excitation

c CALFEM 549 Central moment 78 Cepstral editing, see Cepstrum Cepstrum 527–531 complex cepstrum 530 editing 540 inverse 531 liftering 528 power cepstrum definition of 527 example of 528 features 528 quefrency 528 real cepstrum 530 Charge amplifier 160 cable for 161 Chirp signal see Shaker excitation Circular convolution 219–221 avoiding 221, 551 CMIF see Complex mode indicator function Coherence function estimator 334 multiple estimator 374, 375 example 375

random error 340 Coherent output power 339 virtual see Virtual signals Companion matrix 445, 467 Complex number 625 sine wave 12 Complex cepstrum, see Cepstrum Complex mode indication function see Complex mode indicator function Complex mode indicator function 478 enhanced FRF 479 for OMA 605, 609 mode scaling for 480 summary 480 Computational poles 446, 449, 452, 587, 590, 619 Condenser microphone see Microphone Conditioned signals 377–384 L-systems 378 ordering inputs 381–382 partial coherence 380 partial coherent output spectra 382 Confidence interval 74 Consistent estimator 74 Convolution as aliasing 221 by time window 208, 214, 219 circular 219–221 continuous 25 description of 25–29 discrete 40 Fourier transform pair 30 integral 25 Correlation function auto definition of 80 long FFT estimator 252 Welch estimator 253 cross- definition of 80 long FFT estimator 252 Welch estimator 253 effect of noise 256–257 properties of 80

random error 255 variance 254 envelope of 254, 257 Cosine see Sine wave Crest factor 80 Cross-correlation see Correlation function Cross-correlation matrix modal decomposition 503 Cross-spectral matrix modal decomposition 502 CSD see Spectral density, crossCyclic frequency 11

d Damping determining 113–115, 442 effect on frequency response 104 hysteretic 117 modal 142 models of 117 nonproportional 142–145 eigenvalue problem 143 of container ship 619 of Plexiglas plate 592, 608 of suspension bridge 615 proportional 140–142 ratio 99, 113 relative, of SDOF system 99 structural 117 viscous 98 Data compression see modal parameter extraction Data quality assessment 91–94 for EMA 578 for impact testing 573 of modal model 582 Decibel 633 Degree-of-freedom 132 meaning 427 of Plexiglas plate 548 selecting 428 DFT see Discrete Fourier transform Differentiation in frequency domain 106, 272 of time signals 62–65

Dirac unit impulse 25 Discrete Fourier transform 198–223 comparison with continuous 200 definition of 198 fast computation of 200 inverse 198 leakage see Leakage periodicity of 202 periodogram 234 picket-fence effect 209 principle of 202 properties of 205 relation with continuous 206 scaling of 199 smearing 310 symmetry properties 205 table of pairs 205 time windows see Time windows zoom 223 DOF see Degree-of-freedom Driving point see Frequency response Dynamic flexibility 106

e Eigenvalue from char. equation 133 problem, example of 134–136 problem, for nonproportional damping 143 relation to pole 133, 134 theory of 639–640 Eigenvectors of nonproportionally damped MDOF system 143 of proportionally damped MDOF system 140 of undamped MDOF system 134 Energy spectral density definition of 189 estimator 258 Enhanced frequency response 450 for CMIF 594 for PTD on Plexiglas plate 587 Ensemble average of correlation function 253

Ensemble average of spectra 229 statistical 72 Envelope by Hilbert transform 522 spectrum, see Spectrum Equivalent noise bandwidth 210, 214 Equivalent number of averages 244 Equivalent sound pressure 55 Ergodic random process 73 Error function, erf 82 ESD see Energy spectral density Excitation signal see Shaker excitation Expected value definition of 73 estimate of 77 of general function 77 Experimental modal analysis see Modal analysis Exponential window compensating effect of 347–348 for impact excitation 345 for transient 260

f Fast Fourier transform see Fourier transform FDD see Frequency domain decomposition FDPI see Frequency domain direct parameter identification FDPIz see Frequency Z-domain direct parameter identification FFT see Fourier transform FFT Analyzer see Measurement system Filters 1/1-octave 52 1/3-octave 52 1/3-octave, example 54 1/n-octave 54 A-weighting 56 acoustic weighting 56 analog 48 antialiasing 289 Butterworth 50 C-weighting 56 differentiation 62–65

digital 50 first order HP 48 first order LP 48, 49 for analog integration 55 for frequency weighting 55 fractional octave 54 ideal characteristics 47 integration 58–62 linear-phase for recording transients 295 importance of 58 producing 51 octave center frequencies 53 octave time constant 198 phase distortion of 51 smoothing 52 time delay of 51 Finite element model 549 Flattop window 215, 311, 317 Folding see Aliasing Force transducer 170 Force window 344 Forced response of Plexiglas plate 555 of SDOF system 106 relation with ODS 567 time domain by digital filter 552 by FFT 551 time domain simulation of 551–555 Fourier series complex 184 definition of 184 with amplitude and phase 184 Fourier transform definition of 29 description 29–33 discrete see Discrete Fourier Transform fast 200 inverse 29 properties of 31–33 table of 30 Frequency analysis nonparametric 195

nonparametric principle 196 parametric 195 principle of 195 angular 11 cyclic 11 damped natural, of SDOF 100 damped resonance 104 Nyquist 41 undamped see Natural frequency Frequency domain decomposition 478 on measured plate data 610 on synthesized plate data 605 Frequency domain direct parameter identification 474 companion matrix 476 summary 477 Frequency domain editing 539 Frequency resolution 218 Frequency response H1 estimator of MIMO system 373 of SISO system 330 H2 estimator of MIMO system 376, 397 of SISO system 331 Hc estimator 332 Hv estimator 376–377 [M], [C], [K] 147 [M], [K], and 𝜁r 151 bias error 335–336, 339 definition of 33 driving point properties 155 enhanced 450 from poles and mode shapes 149, 150 from transfer function 33 imaginary part of 110 impact excitation see Impact excitation magnitude of 108 multiple-input optimizing measurements 386–396 names of 108 Nyquist plot of 112 of SDOF system 102 phase of 108 plot formats 108–113

properties of 33 random error 337 real part of 110 shaker excitation see Shaker excitation single-input optimizing measurements 357–361 synthesizing 486–487 using modal damping 151 Frequency Z-domain direct parameter identification 477

g Gaussian see Probability distribution Geophone 175 example of dynamic range 498 use of 612 GNU Octave arranging spectra 275 using 7

h Half sine window 216 Half spectra 504 Half-power bandwidth 105 Hankel matrix 455, 458, 459, 464, 469 Hanning window 214 Harmonic removal 249 Hermitian matrix 639 transpose 371, 636 Hilbert transform 520–527 analytic signal 522 computation of 522 definition of 520 for computing envelope 522 for frequency domain decomposition 506 to clean up FRF estimates 526 to compute imaginary part of FRF 525 to compute real part of FRF 524 Histogram 75 Hypothesis test for normality 87 for stationarity 88–91 theory of 83

i Ibrahim time domain 455 modified multiple-reference 468 multiple-reference 459 IEPE transducers current supply 161 electrical model 161 TEDS 165 time constants of 164 Impact excitation 342–351 alternative processing 350–351, 571 compensating effect of exponential window 347–349 default settings 346 double impact 349 exponential window 345 force signal 343–345 force window 344 of Plexiglas plate 569 optimizing settings for 345–347 example 573–577 setting blocksize 347 time domain processing summary 572 triggering 345 when appropriate 342 Impact hammer see Impulse hammer Impedance head 171 Impulse hammer 172, 337 Impulse response computing from FRF 455 definition of 25 from poles and mode shapes 155 of SDOF system 100 Input cross-spectral matrix definition of 275 inversion of 372 Input-output cross-spectral matrix 274 Integration in frequency domain 272 of time signals 58–62 Interpolation for synchronous resampling 314 of time signal 45 Inverse DFT see Discrete Fourier transform

Inverse Fourier transform see Fourier transform Inverse Laplace transform see Laplace transform ITD see Ibrahim time domain

k Kurtosis definition of 79 excess 79 for data quality assessment 92

l Laplace transform 20–24 definition 20 inverse 20 of derivative 21 table of 22 transfer function 23 Leakage 206–210 explanation of 207, 214 illustration of 206 in spectrum of periodic signal 208 in spectrum of random signal 219 Least squares complex exponential 464 Least squares complex frequency domain 471 companion matrix 472 summary 473 Least squares frequency domain 480, 482 multiple reference 482 modal participation matrix 482 without modal participation factors 484 single reference 480 Least squares time domain 485 Liftering, see Cepstrum Linear spectrum see Spectrum Linear system coherent output power 339 definition 19 noise on in- and output 329 Hc estimator 332 noise on input 330 H2 estimator 332

noise on output 329 H1 estimator 330 theory of 19–29 LSCE see Least squares complex exponential LSCF see Least squares complex frequency LSFD see Least squares frequency domain Lumped parameter model 132

m MAC matrix 487 definition 487 example 612, 621 Mass calibration 174 Mass loading 168 MATLAB arranging spectra 275 using 7 Matrix coefficient normalization high 445 low 445 Maxwell’s reciprocity 149 MDOF system eigenvalues of 133 eigenvectors of 134 frequency response 147, 149 impulse response 155 matrix equations for 131 nonproportional damping natural frequency 143 poles 143 normal modes of 134 poles of 133, 134, 140 state-space equations of 143 undamped natural frequency of 133 Measurement system absolute accuracy 292 analog-to-digital conversion 284 sigma-delta ADC 290 antialias filter 288 antialias protection 292 averaging options 298 block processing 296–297 cross-channel match 295 cross-channel talk 294 data scaling 297

dynamic range 284, 287, 293 FFT parameters 299 optimizing input range 285 overload 286, 290 overview 282 quantization 284 real-time bandwidth 296 recording signals 295 recording transients 295 sample-and-hold 293 sampling requirements 286 signal conditioning 283 triggering 297 MEMS-based sensors 176 Microphone calibration of 175 types 174 MIF see Mode indicator function MIMO system H1 estimator 373 H2 estimator 376, 397 Hv estimator 376–377 bias error discussion 384 computation considerations 375 illustration of 370 noise on input and output unbiased estimation 391 noise on output 372 principle of 369 random error discussion 384 MITD see Multiple-reference Ibrahim time domain MMITD see Modified multiple-reference Ibrahim time domain Mobility 106 Modal A definition of 144 relation with modal mass 153 Modal analysis analytical 153 experimental 425 driving point FRF check 393 measurement strategy 430 MIF see Mode indicator function of Plexiglas plate 578–585

Modal analysis overview 427 parameter extraction see Modal parameter extraction reciprocity check 393 sensor considerations 430 suspension 431 UMPA see Unified matrix polynomial approach frequency versus time domain see Time versus frequency domain OMA 490 operational correlation decomposition 503 data acquisition for 497 examples 600–621 half spectra 504 modal decomposition of correlation matrix 503 modal decomposition of spectrum matrix 502 modal participation factors 502 principles 496 spectrum decomposition 502 spectrum functions 499 parameter extraction, see Modal parameter extraction time versus frequency domain for EMA 452 for OMA 504 Modal assurance criterion see MAC matrix Modal B 144 Modal coordinates for undamped system 138–139 of proportionally damped system 140 Modal damping see Damping Modal mass of proportionally damped system 140 of undamped system 137 relation with Modal A 153 Modal parameter extraction block Hankel matrix 467 companion matrix 445, 467 complex mode indicator function 478

enhanced FRF 479 summary 480 converting FRF to impulse response 453 data compression example 452 data reduction for PTD on Plexiglas plate 587 estimating mode shapes 480 experimental 454, 470 evaluating results 486 FRF quality assessment 578 of Plexiglas plate 580–600 for OMA 498 frequency domain 470 frequency domain decomposition example 604, 611 frequency domain direct parameter identification 474 summary 477 frequency Z-domain direct parameter identification 477–478 Hankel matrix 454, 458, 459, 464 least squares complex exponential 464 least squares complex frequency example 603, 610 least squares complex frequency domain companion matrix 472 summary 473 least squares complex frequency domain 471 least squares frequency domain example 604, 610 multiple reference 482 single reference 480 least squares frequency domain for OMA 506 least squares global 441 least squares local 440 least squares polynomial 442–443 least squares time domain 485 MAC matrix 487 example 612, 621 matrix coefficient normalization high 447

modal participation matrix 465 mode shape extraction 482 mode shapes extraction 478 time domain 485 model order 446 modified multiple-reference Ibrahim time domain 468 Hankel matrix 469 summary 470 multiple reference Ibrahim time domain example 600, 608 operational 503 example on container ship 617–621 example on suspension bridge 612–616 on measured data of Plexiglas plate 607–615 on synthesized data of Plexiglas plate 600–607 scaling modal model 508 scaling modal model by mass matrix 509 scaling modal model by OMAH 509 polyreference time domain 464 example 603, 609 summary 468 Prony’s method 462 scaling mode shapes 486 SDOF 440 synthesizing FRFs 487 time domain 454 modal parameter extraction data compression 449 Modal participation factors for EMA 439 for OMA 501 Modal participation matrix 465 Modal scale constant 150 Modal stiffness of proportionally damped system 140 of undamped system 137 Modal superposition 149, 553 Mode from wave equation 130 Mode indicator function 434 modified real 436

multivariate 435 normal (MIF1) 436 Mode shape determining experimentally 442 of Plexiglas model 549 scaling of 137, 152–153 experimental modal analysis 486 operational modal analysis 509 scaling to unity modal A 153 scaling to unity modal mass 153 weighted mass orthogonality 136 weighted orthogonality of 136–138 weighted stiffness orthogonality 136 Modified multiple-reference Ibrahim time domain 468 Hankel matrix 469 summary 470 Multi-channel data spectrum of see Spectrum Multiple coherence see Coherence function Multiple-input/multiple-output system, see MIMO system Multiple-reference Ibrahim time domain 459 summary 462

n Natural frequency damped, of SDOF system 100 determining 113–115, 442 of nonproportionally damped MDOF system 143 undamped, of MDOF system 133 undamped, of SDOF system 99 Newton’s laws 97 equation for SDOF system 98 equation, state-space form 143 equations for MDOF system 131 Node line 153 real example of 561 Noise source identification 417 Nonproportional damping see Damping Normal distribution see Probability distribution

Normal mode 134 Nyquist frequency 41 plot format 112

o Octave (software), see GNU Octave Octave filter see Filters ODS 567–572 definition of 567 multiple reference 570 of Plexiglas plate multiple reference 570 single reference 568 procedure 568 recommended measurement function 567 OMA, see Modal analysis, operational OMAH method 509 global 510 local 510 principle 510 selecting excitation DOFs 512 One-sided spectral density see Spectral density, single-sided Operating deflection shape see ODS Operational modal analysis, see Modal analysis Order see Rotating machinery analysis Order domain 314 Order track fixed sampling frequency 312 synchronous sampling frequency 317 Ordinary coherence see Coherence function Orthogonality between input signals 401 of sines 15 of virtual signals 410 Overlap processing effect on random error with Welch’s estimator 243–246 illustration of 230 Oversampling definition of 43 for shock response 519

p Parseval’s theorem continuous 31 discrete 205, 209 Partial coherence see Conditioned signals Partial coherent output spectra see Conditioned signals Partial fraction expansion definition of 21 of FRF for MDOF system 148 of SDOF system 100 Period 11 Periodic Chirp see Shaker excitation Periodic signal see Signal Periodogram 234 Periodogram ratio detection 537 Phase angle definition of complex 625 of sine wave 11 Phase distortion see Filters Phase spectrum see Spectrum, linear Picket-fence effect 209 Piezoelectric accelerometers, principle of 165 effect 159 sensor, electrical models of 160 Plexiglas plate EMA of 578 forced response of 555 model 547 eigenfrequencies 549 mode shapes 549 Poles computational see Computational poles of nonproportionally damped MDOF system 143 of proportionally damped MDOF system 140 of SDOF system 99 of undamped MDOF system 134 Polynomial multiplication by convolution 29 of filter coefficients 52 Polyreference time domain 464 block Hankel matrix 467

companion matrix 467 modal participation matrix 465 summary 468 Power cepstrum, see Cepstrum Principal components see also Virtual signals finding number of independent sources 403 for multiple-reference ODS 570 in stabilization diagram 601 Principal coordinates see Modal coordinates Probability density calculation of 76 definition of 75 Probability distribution definition of 75 Gaussian 81 normal 81 test of normality 86 Prony’s method 462 Proportional damping see Damping PSD see Spectral density, auto Pseudo random see Shaker excitation PTD see Polyreference time domain Pure random see Shaker excitation

q Q-factor definition of 105 for shock response 517 Quefrency, see Cepstrum

r Random decrement signatures 503 Random error definition of 74 normalized 74 of frequency response 337–339 of smoothed periodogram estimator 250 of standard deviation 78 of variance 78 Welch estimator 242–247 Random signal see Signal RDD, see Random decrement signatures Real cepstrum, see Cepstrum

Receptance see Dynamic flexibility Reciprocity check for 393 definition of 149 Recording signals see Measurement system Rectangular window 208 Relative damping see Damping Removing harmonics 539–542 Resampling for removing harmonics 540 of time signal 47 synchronously with RPM 314–317 Residue definition of 22 from mode shape coefficients 150 Resonance bandwidth see Bandwidth Resonance frequency see Natural frequency Reverse arrangements test 87 Rigid body mode 133 RMS level calculation of 77 computing from spectrum 269–270 definition 18 random error of 197 weighted spectrum 271 Root mean square see RMS level Rotating machinery analysis averaging 317–318 bandwidth-time product 304, 320 color map 310 maximum order 317 nonparametric methods 322 order 303 order domain 314 order resolution 317 order track fixed sampling frequency 312 synchronous sampling frequency 317 RPM 308 RPM map 308 selecting time window 311 smearing 310 synchronous sampling 314–316 DFT parameters 317

Rotating machinery analysis tachometer processing 306–308 time-frequency analysis 304 waterfall plot 309

s Sampling frequency 39 synchronous see Rotating machinery analysis theorem 41 SDOF system forced response 106 frequency response of 102 illustration of 98 impulse response of 100 poles of 99 transfer function of 99 Shaker description 177 Shaker excitation checking shaker attachment 362 multiple-input 384–386 burst random 385, 387 checking input correlation 396, 404 multiphase stepped sine 386 optimizing measurements 386–396 periodic random 385, 387, 391 pure random 385 single-input 352–356 burst random 355, 358 optimizing measurements 357–360 periodic chirp 356, 359 pseudorandom 355, 359 pure random 352, 357 stepped-sine 356 SNR of 352 sources of error 364 Shock response spectrum 517–520 maximax 518 oversampling rate for 519 primary 518 Q-factor 519 residual 518

Signal classes, description of 183 harmonic, see periodic period of 11 periodic 11 detecting 535, 537 example of 556 removing 539 removing automatically 540 removing by cepstral editing 540 random 16 spectra of 559 sine wave 11 transient 17 recording requirements 292 Sine wave 11 complex 12 multiplication of two 15 sum of two 13 Single degree-of-freedom see SDOF system Single-input/single-output system see SISO system Singular value decomposition 640 Sinusoid see Sine wave SISO system illustration of 328 noise on input 330 noise on input and output 332 noise on output 329 Skewness definition of 78 use for data quality assessment 92 Smearing 310 Smoothed periodogram PSD see Spectral density Smoothing filter 52 of periodogram for PSD 248 Sound pressure level 55, 634 Spectral density creating time signal with known PSD 533 double-sided auto definition of 188

double-sided cross- definition of 188 example of 559 guidelines for computing 266–267 of mixed property signal 269 example of 561–566 single-sided auto definition of 188 property of 188 Welch estimator 236 single-sided cross- definition of 188 property of 188 Welch estimator 236 smoothed periodogram 247–250 advantages 248 bias error 249 estimator 248 for harmonic removal 249 random error 250 Welch bias error 237–242 bias error for SDOF resonance 240 normalized random error 245 random error 242–247 random error with Hanning and 50% overlap 246 scaling factor 236 Spectral lines usable 219 vs. blocksize 220 Spectrum autopower estimator 231 envelope 531 computation of 532 estimation of guidelines 262–269 interpretation of 189–191 linear definition of 185 estimator 232 example of 556 guidelines 264–266

phase spectrum 186, 233 of multi-channel data 273–274 of periodic signal see Spectrum, linear of random signal see Spectral density of transient definition of 189 estimator 258 Standard deviation definition of 77 random error 78 State-space 142 Stationarity of random process 72 test for 87, 88, 90 Steady state response definition of 34 Stepped sine see Shaker excitation Stinger checking 362 for force sensor 170 SVD, see Singular value decomposition Synchronous sampling 314–317 DFT parameters 317 Synthesized FRFs 486–487

t Three-parameter sine fit method 535 Time windows amplitude correction 212 Bartlett 216 comparison of 210 equivalent noise bandwidth 214 normalized 210, 214 exponential 345 flattop 215, 311, 317, 539 for periodic signals 211–216 for random signals 218, 219 for transient signals 259 force 343 half sine 216 Hanning 211, 214 rectangular 208 resolution of 218 smearing 310

Time-frequency see also bandwidth-time product analysis limitations 318 analysis principle 304–305 illustration of 186 Time versus frequency domain for EMA 452 for OMA 504 Transfer function definition of 23 of SDOF system 99 Transient signal see Signal Transient spectrum definition of 189 estimator 258 Triboelectric effect 161 True random see Shaker excitation, pure random Tuned damper 123–125 Two-sided spectral density see Spectral density, double-sided

u UMPA see Unified matrix polynomial approach Undamped natural frequency see Natural frequency Unified matrix polynomial approach companion matrix 444 data compression 449 mathematical framework 443 matrix coefficient normalization 445 high 445 low 447

model order 446 Uniform window see Rectangular window

v Variance definition of 77 estimate of 77 random error 78 Vibration isolation 118–120 Virtual signals 411–417 cumulated virtual coherence 411 cumulated virtual coherent output power 414, 418 cumulated virtual input/output coherence 414, 420 definition of 402, 410 for multiple-reference ODS 571 number of independent sources 403 principal components 401 virtual coherence 411 virtual coherent output power 414 virtual input coherence 411, 420 virtual input cross-spectrum 411 Vold-Kalman filter 323

w Wave equation solutions 130 Welch estimator see Spectral density Wiener-Khinchin relations 188 Window see Time windows

z Zero padding 221–222, 551, 553, 586 Zoom FFT 223
