Structural Dynamics Fundamentals and Advanced Applications, Volume I [1 ed.] 012821614X, 9780128216149

The two-volume work, Structural Dynamics Fundamentals and Advanced Applications, is a comprehensive work that encompasses the fundamentals of structural dynamics and vibration analysis, as well as advanced applications used on extremely large and complex systems.


English Pages 928 [916] Year 2020


Table of contents:
Structural Dynamics Fundamentals and Advanced Applications
Copyright
Dedication
About the authors
Preface
1 - Structural dynamics
1. Introduction
1.1 Newton's laws of motion
1.1.1 Newton's First Law
1.1.2 Newton's Second Law
1.1.3 Newton's Third Law
1.2 Reference frames
1.3 Degrees of freedom
1.3.1 Newton's Second Law and rotational motion
1.4 Absolute and relative coordinates
1.5 Constraints
1.6 Distributed coordinates
1.7 Units
1.7.1 International System of Units
1.7.2 US Customary units
Problems
Problem 1.1
Solution 1.1
Problem 1.2
Solution 1.2
Problem 1.3
Solution 1.3
Problem 1.4
Solution 1.4
Problem 1.5
Solution 1.5
Problem 1.6
Solution 1.6
Problem 1.7
Solution 1.7
Problem 1.8
Solution 1.8
Problem 1.9
Solution 1.9
Problem 1.10
Solution 1.10
Problem 1.11
Solution 1.11
Problem 1.12
Solution 1.12
Problem 1.13
Solution 1.13
Problem 1.14
Solution 1.14
Problem 1.15
Solution 1.15
Problem 1.16
Solution 1.16
Problem 1.17
Solution 1.17
Problem 1.18
Solution 1.18
Problem 1.19
Solution 1.19
References
2 - Single-degree-of-freedom systems
2. Introduction
2.1 Vibration
2.2 Rayleigh—energy
2.3 Vibration with viscous damping
2.3.1 Oscillatory damped vibration
2.3.2 Nonoscillatory damped vibration
2.4 Free vibration with Coulomb friction (damping)
2.5 Forced vibration
2.5.1 Harmonic excitation
2.5.1.1 Displacement quadrature and coincident responses
2.5.1.2 Acceleration quadrature and coincident responses
2.5.1.3 Frequency of peak response
2.5.1.4 Relationships between response quantities
2.5.1.5 Magnitude and phase of response
2.5.2 Sudden cessation of harmonic excitation
2.5.3 Beating
2.6 Base excitation
2.6.1 Base excitation equations of motion
2.6.2 Harmonic base excitation
2.6.3 Sudden cessation of harmonic excitation
2.7 Frequency sweep effects
2.7.1 Linear sweep
2.7.2 Octave sweep
2.7.3 Single-degree-of-freedom response
Problems
Problem 2.1
Solution 2.1
Problem 2.2
Solution 2.2
Problem 2.3
Solution 2.3
Problem 2.4
Solution 2.4
Problem 2.5
Solution 2.5
Problem 2.6
Solution 2.6
Problem 2.7
Solution 2.7
Problem 2.8
Solution 2.8
Problem 2.9
Solution 2.9
Problem 2.10
Solution 2.10
Problem 2.11
Solution 2.11
Problem 2.12
Solution 2.12
Problem 2.13
Solution 2.13
Problem 2.14
Solution 2.14
Problem 2.15
Solution 2.15
Problem 2.16
Solution 2.16
Problem 2.17
Solution 2.17
Problem 2.18
Solution 2.18
Problem 2.19
Solution 2.19
Problem 2.20
Solution 2.20
Problem 2.21
Solution 2.21
Appendix 2.1 L’Hôpital's Rule
References
3 - Transfer and frequency response functions
3. Introduction
3.1 Laplace transform
3.1.1 Laplace transform and harmonic excitation
3.2 Fourier transform
3.2.1 Frequency response functions
3.2.2 Base excitation frequency response functions
3.2.3 Fourier transforms of useful functions
3.2.3.1 Boxcar
3.2.3.2 Unit impulse (Dirac delta)
3.2.3.3 Unit impulse sifting property
3.2.3.4 Constant
3.2.3.5 Cosine and sine
3.2.4 Multiplication of Fourier transformed functions and convolution
3.2.5 Convolution and dynamic response
3.2.6 Multiplication of functions and frequency domain convolution
3.2.7 Unit impulse and convolution
3.2.8 Relationship between boxcar function and unit impulse
Problems
Problem 3.1
Solution 3.1
Problem 3.2
Solution 3.2
Problem 3.3
Solution 3.3
Problem 3.4
Solution 3.4
Problem 3.5
Solution 3.5
Problem 3.6
Solution 3.6
Problem 3.7
Solution 3.7
Problem 3.8
Solution 3.8
Problem 3.9
Solution 3.9
Problem 3.10
Solution 3.10
Problem 3.11
Solution 3.11
Problem 3.12
Solution 3.12
Problem 3.13
Solution 3.13
Problem 3.14
Solution 3.14
Problem 3.15
Solution 3.15
Problem 3.16
Solution 3.16
Problem 3.17
Solution 3.17
Appendix 3.1 Integration by parts
Appendix 3.2 Laplace transform
Appendix 3.3 Integration
References
4 - Damping
4. Introduction
4.1 Viscous damping from coincident component of response
4.2 Damping from half-power points of total response
4.3 Logarithmic decrement
4.3.1 Damping from nonsequential cycles
4.3.2 Damping from least squares fit of data
4.4 Work, strain energy, and kinetic energy
4.5 Equivalent viscous damping
4.6 Equivalent viscous damping and Coulomb damping
4.7 Equivalent viscous damping and fluid resistance
4.8 Structural damping and complex stiffness
4.8.1 Quadrature/coincident response with structural damping
4.8.2 Structural damping from coincident response
4.9 Hysteresis
Problems
Problem 4.1
Solution 4.1
Problem 4.2
Solution 4.2
Problem 4.3
Solution 4.3
Problem 4.4
Solution 4.4
Problem 4.5
Solution 4.5
Problem 4.6
Solution 4.6
Problem 4.7
Solution 4.7
Problem 4.8
Solution 4.8
Problem 4.9
Solution 4.9
Problem 4.10
Solution 4.10
Problem 4.11
Solution 4.11
Problem 4.12
Solution 4.12
Problem 4.13
Solution 4.13
Appendix 4.1 Taylor series expansion
Appendix 4.2 Area of an ellipse
References
5 - Transient excitation
5. Introduction
5.1 Ramp, step, and boxcar excitation
5.1.1 Step excitation
5.1.2 Ramp excitation
5.1.3 Ramp excitation and response behavior
5.1.4 Boxcar excitation
5.1.5 Boxcars of short time duration
5.2 Impulse, impulsive forces, and superposition
5.3 Convolution and Duhamel's integrals
5.3.1 Step function response using Duhamel's integral
5.3.2 Duhamel's integral and initial conditions
5.4 Response Spectra and Shock Response Spectra
5.5 Random response analysis
5.5.1 Mean square value and Power Spectral Density
5.5.1.1 Autocorrelation function
5.5.2 Pseudo acceleration response to random base excitation
5.5.3 Absolute acceleration response to random base excitation
5.5.4 Absolute acceleration response to external random forces
5.5.5 Pseudo and absolute acceleration response with frequency limits
5.6 Time domain random response analysis
5.6.1 Time domain root mean square computation
5.7 Swept frequency excitation
5.7.1 Octave sweep rates
5.7.2 Linear sweep rates
5.7.3 Closed-form solutions
5.7.3.1 Octave sweep
5.7.3.2 Linear sweep
Problems
Problem 5.1
Solution 5.1
Problem 5.2
Solution 5.2
Problem 5.3
Solution 5.3
Problem 5.4
Solution 5.4
Problem 5.5
Solution 5.5
Problem 5.6
Solution 5.6
Problem 5.7
Solution 5.7
Problem 5.8
Solution 5.8
Problem 5.9
Solution 5.9
Problem 5.10
Solution 5.10
Problem 5.11
Solution 5.11
Problem 5.12
Solution 5.12
Problem 5.13
Solution 5.13
Problem 5.14
Solution 5.14
Problem 5.15
Solution 5.15
Problem 5.16
Solution 5.16
Problem 5.17
Solution 5.17
Problem 5.18
Solution 5.18
Problem 5.19
Solution 5.19
Appendix 5.1 Derivation of Parseval's theorem
Appendix 5.2 Contour integral
Appendix 5.3 Integrals for pseudo and absolute acceleration response to base excitation, and for absolute acceleration to f ...
Appendix 5.4 atan2(x, y) function
Appendix 5.5 Octave sweep rate attenuation; Hz, octave, minute
Appendix 5.6 Linear sweep rate attenuation; Hz, minute
References
6 - Multi-degree-of-freedom systems
6. Introduction
6.1 Two-degree-of-freedom systems
6.2 Mode shapes
6.2.1 Rigid body modes
6.2.2 Natural frequencies
6.3 Mode shape orthogonality
6.4 Normalization of mode shapes
6.5 Modal coordinates
6.6 Vibration initiated with initial conditions
6.7 Free vibration with viscous damping
6.8 Rotational degrees of freedom
6.9 Mass matrix of a rigid body
6.10 Classical normal modes
6.10.1 Proportional damping
6.10.2 Damping that yields classical normal modes
6.10.2.1 Mode superposition damping
6.10.2.2 Modified Caughey series damping
6.11 Nonclassical, complex modes
6.11.1 First-order systems
6.11.2 Multi-degree-of-freedom systems with complex modes
6.11.3 Left and right eigenvectors
6.11.3.1 Orthogonality of complex mode shapes
6.11.4 First-order solution for systems with classical normal modes
6.11.5 Complex solution for systems with nonclassical modes
6.11.5.1 Approximate classically damped systems
6.11.6 Complex modes response with rigid body modes
6.12 Modes of vibration
6.12.1 Rayleigh's quotient
6.12.2 Stationarity and convexity of Rayleigh's quotient
6.12.3 Rayleigh-Ritz
6.12.4 Modes of vibration
Problems
Problem 6.1
Solution 6.1
Problem 6.2
Solution 6.2
Problem 6.3
Solution 6.3
Problem 6.4
Solution 6.4
Problem 6.5
Solution 6.5
Problem 6.6
Solution 6.6
Problem 6.7
Solution 6.7
Problem 6.8
Solution 6.8
Problem 6.9
Solution 6.9
Problem 6.10
Solution 6.10
Problem 6.11
Solution 6.11
Problem 6.12
Solution 6.12
Problem 6.13
Solution 6.13
Problem 6.14
Solution 6.14
Problem 6.15
Solution 6.15
Problem 6.16
Solution 6.16
Problem 6.17
Solution 6.17
Problem 6.18
Solution 6.18
Appendix 6.1 Rotation of complex vectors
References
7 - Forced vibration of multi-degree-of-freedom systems
7. Introduction
7.1 Modal forces
7.2 Harmonic excitation
7.2.1 Steady-state harmonic response
7.2.2 Quadrature and coincident components of response
7.3 Beating
7.3.1 Superposition of harmonic functions
7.3.2 Multi-degree-of-freedom systems
7.4 Sweep rate effects
7.5 Short transient excitation
7.5.1 Step excitation
7.5.2 Impulse excitation
7.6 Base excitation
7.6.1 Unidirectional motion
7.6.2 Translation plus rotation
7.6.3 Multipoint excitation
7.6.4 Harmonic excitation
7.6.5 Practical considerations
7.6.5.1 Mode participation factors
7.6.5.2 Sweep rate effects
7.6.5.3 Shake table—test article interaction
7.7 Random response analysis
7.7.1 Forced vibration
7.7.1.1 Acceleration response
7.7.1.2 Loads computation
7.7.1.3 Implementation
7.7.2 Base excitation
7.8 Time-domain random response analysis
7.9 Truncated modal coordinates
7.9.1 Mode acceleration
7.9.2 Mode acceleration and unconstrained systems
7.9.2.1 Three-degree-of-freedom example
7.9.3 Computation of loads and stresses
7.9.4 Residual flexibility
7.10 Dynamic behavior as a function of response
7.10.1 Instantaneous displacement-proportional feedback
7.10.2 Gyroscopic moments
7.10.3 Whirl
7.10.3.1 Symmetric systems
7.10.3.2 Slightly nonsymmetric systems
7.10.3.3 Rotating symmetric systems with gyroscopic effects
7.10.3.4 Rotating systems with gyroscopic effects and excitation
7.10.3.5 Complex modal coordinates solution
7.10.3.6 Complex modal forces
7.10.3.7 Nonsymmetric systems
7.10.3.8 Dynamic imbalance
7.10.4 Gyroscopic moments and energy dissipation
7.11 Fluid–structure interaction
7.11.1 Aerodynamic instability
7.11.1.1 Aerodynamic instability and complex modes
7.11.2 Pogo
Problems
Problem 7.1
Solution 7.1
Problem 7.2
Solution 7.2
Problem 7.3
Solution 7.3
Problem 7.4
Solution 7.4
Problem 7.5
Solution 7.5
Problem 7.6
Solution 7.6
Problem 7.7
Solution 7.7
Problem 7.8
Solution 7.8
Problem 7.9
Solution 7.9
Problem 7.10
Solution 7.10
Problem 7.11
Solution 7.11
Problem 7.12
Solution 7.12
Problem 7.13
Solution 7.13
Problem 7.14
Solution 7.14
Problem 7.15
Solution 7.15
Problem 7.16
Solution 7.16
Problem 7.17
Solution 7.17
Problem 7.18
Solution 7.18
Problem 7.19
Solution 7.19
Problem 7.20
Solution 7.20
Problem 7.21
Solution 7.21
Problem 7.22
Solution 7.22
Problem 7.23
Solution 7.23
Problem 7.24
Solution 7.24
Appendix 7.1 Work and coordinate transformations
Appendix 7.2 Beating
Appendix 7.3 Periodicity and Lissajous graphs
References
8 - Numerical methods
8. Introduction
8.1 Numerical solution of differential equations of motion
8.1.1 One-step methods
8.1.1.1 Euler’s method
8.1.1.2 Runge–Kutta methods
8.1.1.3 Analysis of one-step methods
8.1.1.4 First-order formulation for single-degree-of-freedom systems
8.1.2 Duhamel’s method
8.1.3 Newmark’s method
8.1.4 Comparison of methods
8.1.4.1 Stability
8.1.4.2 Frequency response
8.1.4.3 Numerical comparisons
8.1.4.4 Rigid-body response
8.2 Multi-degree-of-freedom system numerical integration
8.2.1 Classically damped systems
8.2.2 Nonclassically damped systems
8.2.3 General methods
8.2.3.1 Complex modal superposition
8.2.3.2 Direct integration using first-order formulation
8.2.3.3 Direct integration using second-order formulation
8.3 Solution of systems of linear equations
8.3.1 Matrix computation preliminaries
8.3.1.1 Vector and matrix norms
8.3.1.2 Floating point representation and arithmetic
8.3.1.3 Problem sensitivity
8.3.2 LU factorization
8.3.2.1 Gaussian elimination
Direct LU factorization
Forward substitution
Backward substitution
8.3.2.2 Gaussian elimination with partial pivoting
LU factorization with partial pivoting
Forward substitution with partial pivoting
8.3.2.3 Error analysis
8.3.3 Factorization for symmetric positive-definite matrices
8.3.3.1 Cholesky factorization
Cholesky factorization
8.3.3.2 Error analysis
8.3.4 Iterative methods
8.3.4.1 Classical iterative methods
8.3.4.2 Convergence of iterative methods
8.4 Linear least-square problems
8.4.1 Normal equation
8.4.2 QR factorization
8.4.2.1 Orthogonal projectors
8.4.2.2 Classical Gram-Schmidt method
Classical Gram-Schmidt algorithm
8.4.2.3 Modified Gram-Schmidt method
Modified Gram-Schmidt algorithm
8.4.2.4 Householder transformation method
Householder QR algorithm
8.4.2.5 Givens transformation method
Givens QR algorithm
8.4.3 Singular value decomposition
8.4.3.1 Singular value decomposition theorem
8.4.3.2 Pseudo-inverse
8.4.4 Error analysis
8.5 Matrix eigenvalue problem
8.5.1 Symmetric eigenvalue problem
8.5.1.1 QR iteration
QR iteration
8.5.1.1.1 Vector iteration methods
Power iteration algorithm
Inverse iteration algorithm
Rayleigh quotient iteration
8.5.1.1.2 Orthogonal iteration
Orthogonal iteration algorithm
8.5.1.1.3 QR iteration convergence
8.5.1.1.4 Relation to power and inverse iterations
8.5.1.1.5 Incorporating shifts
8.5.1.1.6 Tridiagonal reduction
Householder tridiagonalization algorithm
Product of Householder transformations
8.5.1.1.7 QR iteration for tridiagonal matrices
QR iteration on tridiagonal system with Rayleigh shifts
8.5.1.1.8 Implicit shifts
8.5.1.2 Divide-and-conquer method
8.5.1.3 Lanczos method
Basic Lanczos algorithm
8.5.2 Nonsymmetric eigenvalue problem
8.5.3 Error analysis
Problems
Problem 8.1
Solution 8.1
Problem 8.2
Solution 8.2
Problem 8.3
Solution 8.3
Problem 8.4
Solution 8.4
Problem 8.5
Solution 8.5
Problem 8.6
Solution 8.6
Problem 8.7
Solution 8.7
Problem 8.8
Solution 8.8
Problem 8.9
Solution 8.9
Problem 8.10
Solution 8.10
Problem 8.11
Solution 8.11
Problem 8.12
Solution 8.12
Problem 8.13
Solution 8.13
Problem 8.14
Solution 8.14
Problem 8.15
Solution 8.15
Problem 8.16
Solution 8.16
Problem 8.17
Solution 8.17
Problem 8.18
Solution 8.18
Problem 8.19
Solution 8.19
Problem 8.20
Solution 8.20
Problem 8.21
Solution 8.21
Problem 8.22
Solution 8.22
Problem 8.23
Solution 8.23
Problem 8.24
Solution 8.24
Problem 8.25
Solution 8.25
Problem 8.26
Solution 8.26
Problem 8.27
Solution 8.27
Problem 8.28
Solution 8.28
Problem 8.29
Solution 8.29
Problem 8.30
Solution 8.30
Problem 8.31
Solution 8.31
Problem 8.32 (This problem requires access to a numerical software tool)
Solution 8.32
Problem 8.33 (This problem requires access to a numerical software tool)
Solution 8.33
Problem 8.34 (This problem requires access to a numerical software tool)
Solution 8.34
Problem 8.35 (This problem requires access to a numerical software tool)
Solution 8.35
References
Index
A
B
C
D
E
F
G
H
I
J
K
L
M
N
O
P
Q
R
S
T
U
V
W
Z


Structural Dynamics Fundamentals and Advanced Applications Volume I

Alvar M. Kabe Brian H. Sako

Academic Press is an imprint of Elsevier 125 London Wall, London EC2Y 5AS, United Kingdom 525 B Street, Suite 1650, San Diego, CA 92101, United States 50 Hampshire Street, 5th Floor, Cambridge, MA 02139, United States The Boulevard, Langford Lane, Kidlington, Oxford OX5 1GB, United Kingdom Copyright © 2020 Elsevier Inc. All rights reserved. No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, recording, or any information storage and retrieval system, without permission in writing from the publisher. Details on how to seek permission, further information about the Publisher’s permissions policies and our arrangements with organizations such as the Copyright Clearance Center and the Copyright Licensing Agency, can be found at our website: www.elsevier.com/permissions. This book and the individual contributions contained in it are protected under copyright by the Publisher (other than as may be noted herein). Notices Knowledge and best practice in this field are constantly changing. As new research and experience broaden our understanding, changes in research methods, professional practices, or medical treatment may become necessary. Practitioners and researchers must always rely on their own experience and knowledge in evaluating and using any information, methods, compounds, or experiments described herein. In using such information or methods they should be mindful of their own safety and the safety of others, including parties for whom they have a professional responsibility. To the fullest extent of the law, neither the Publisher nor the authors, contributors, or editors, assume any liability for any injury and/or damage to persons or property as a matter of products liability, negligence or otherwise, or from any use or operation of any methods, products, instructions, or ideas contained in the material herein. 
Library of Congress Cataloging-in-Publication Data A catalog record for this book is available from the Library of Congress British Library Cataloguing-in-Publication Data A catalogue record for this book is available from the British Library ISBN: 978-0-12-821614-9 For information on all Academic Press publications visit our website at https://www.elsevier.com/books-and-journals Publisher: Matthew Deans Acquisitions Editor: Carrie Bolger Editorial Project Manager: Mariana Kulh Production Project Manager: Sruthi Satheesh Cover Designer: Matthew Limbert Typeset by TNQ Technologies

The first author dedicates this work to his children, Nickole, Caroline, and Erik, and their mother, Erika, for without them it would not have been possible. The second author dedicates his work to his wife, Lee Anne, and his daughter, Erin, for their endless patience, encouragement, and support. Both authors also express sincere gratitude to their colleagues at The Aerospace Corporation for the privilege of working with them on some of the most challenging engineering problems in aerospace.

About the authors Dr. Alvar M. Kabe is the Principal Director of the Structural Mechanics Subdivision of The Aerospace Corporation. His prior experience includes Director of the Structural Dynamics Department and Manager of the Flight Loads Section at The Aerospace Corporation. Dr. Kabe has made fundamental contributions to the state of the art of launch vehicle and spacecraft structural dynamics. He introduced the concept of multishaker correlated random excitation to better isolate modes for measurement in mode survey tests, and the concept of using the superposition of scaled frequency response functions to isolate modes for identification. He then introduced the concept of using structural connectivity information as additional constraints when optimally adjusting dynamic models to better match test data; this work has been cited over 260 times in other publications. Dr. Kabe developed the atmospheric flight turbulence/gust and time-domain buffet loads analysis methodologies used on several operational launch vehicle programs, and he pioneered the concept of using structural dynamic models to compute atmospheric flight static-aeroelastic loads. Dr. Kabe led the development of a continually evolving integrated dynamics analysis system that has been used for over two decades to compute loads on over a dozen launch vehicle systems and their payloads. The work included independently developing and implementing analysis methodologies, developing loads and stress analysis models, computing loads, and establishing structural margins. This also included independent day-of-launch placards analyses and independent go/no-go launch recommendations. Dr. Kabe has led, co-chaired, or participated on numerous high level reviews and assessment teams that have had significant impact. He was a member of the Defense Science Board’s Aviation Safety Task Force, and he co-chaired four US Air Force Titan IV Independent Readiness Reviews. 
He led the Space Shuttle Radar Topography Mission assessment, and he co-chaired NASA’s Mars Sample Return project review. In addition, Dr. Kabe is on the NASA Engineering Safety Center Structural Dynamics Technical Discipline Team as a subject-matter expert. Dr. Kabe has published 23 technical papers and written over 150 corporate technical reports. He has taught undergraduate and graduate structural dynamics classes, presented invited seminars at major universities, and the Keynote at an AIAA Structural Dynamics Specialist conference.

Dr. Kabe has received numerous awards and over 40 letters of commendation. The awards include the Trustees Distinguished Achievement Award, The Aerospace Corporation’s highest award, The Aerospace Corporation’s President’s Award, Division and Group Achievement Awards, and nine Program Recognition Awards. Dr. Kabe is a Registered Professional Engineer in the state of California; his BS, MS, and PhD degrees are from UCLA. Dr. Brian H. Sako is a Distinguished Scientist in the Structural Mechanics Subdivision of The Aerospace Corporation. Prior to this position, Dr. Sako was an Engineering Specialist, a Senior Engineering Specialist, and an Aerospace Fellow. Dr. Sako has made significant contributions to the fields of structural dynamics, numerical analysis, and time series data analysis. His development of the filtering approach used to separate the more rapidly varying wind features from more slowly varying components is used on several launch vehicle programs to develop turbulence forcing functions for atmospheric flight loads analysis. Dr. Sako also developed an approach to remove tones from wind tunnel buffet test data; the approach was used, for example, on NASA’s Space Launch System program. His developments have also made significant contributions to the assessment of the internal dynamic properties of rocket engines, pogo stability of launch vehicles, and the development of forcing functions for loads analysis. Dr. Sako developed state-of-the-art time series analysis and mode parameter identification tools that are currently used to analyze data and identify structural dynamic parameters on numerous operational systems. The time series data analysis tool is used to assess flight and ground vibration test data. The mode parameter identification tool is used to extract mode parameters from launch vehicle and satellite mode survey test data, as well as flight data. Dr. 
Sako’s developments are used routinely to assess data from operational launch and space systems. Dr. Sako has published 25 technical papers and written 100 corporate technical reports. He has taught graduate classes in numerical analysis, engineering mathematics, and signal processing. Dr. Sako has earned numerous awards and letters of commendation, including The Aerospace Corporation’s President’s Award, Division and Group Achievement Awards, and several Program Recognition Awards. Dr. Sako’s BA and MA degrees are from the University of Hawaii, and his PhD is from UCLA.

Dr. Kabe’s training and expertise are in structural dynamics and Dr. Sako’s are in applied mathematics. They have worked together for three decades on the most complex structural dynamics systems in existence, and their complementary expertise and experience have led to the development of first-of-a-kind methodologies and solutions to complex structural dynamics problems. Dr. Kabe’s and Dr. Sako’s experience and contributions encompass numerous past and currently operational launch and space systems.

Preface

The two-volume work, Structural Dynamics Fundamentals and Advanced Applications, is a comprehensive work that encompasses the fundamentals of structural dynamics and vibration analysis, as well as advanced applications used on extremely large and complex systems. Because of the scope of the work, the material is divided into two volumes. Volume I covers fundamentals, and Volume II covers advanced applications. The derivations are complete and rigorous, and the topics covered are those needed to become a learned member of the structural dynamics community and solve the complex problems encountered.

Volume I covers all the material needed for a first course in structural dynamics, including a thorough discussion of Newton’s laws, single-degree-of-freedom systems, damping, transfer and frequency response functions, transient vibration analysis (frequency and time domain), multi-degree-of-freedom systems, forced vibration of single- and multi-degree-of-freedom systems, numerical methods for solving for the responses of single- and multi-degree-of-freedom systems, and symmetric and nonsymmetric eigenvalue problems. This volume also includes a thorough discussion of real and complex modes, and the conditions that lead to each. Solutions to systems with gyroscopic effects due to spinning rotors, as well as aeroelastic instability in simple systems, are covered as part of the discussion on complex modes. In addition, stochastic methods are covered, including derivation of solutions for the response of single- and multi-degree-of-freedom systems excited by random forces or base motion.

Volume II includes all material needed for graduate-level courses in structural dynamics. This includes d’Alembert’s principle, Hamilton’s principle, and Lagrange’s equations, all of which are derived from fundamental principles.
Development of large complex structural dynamic models is thoroughly covered with derivations and detailed discussion of component mode synthesis and fluid/structure interaction; an introduction to applicable finite element methods is also included. Material needed to solve complex problems, such as the response of launch vehicles and their payloads to turbulence/gust, buffet, and static-aeroelastic loading encountered during atmospheric flight, is addressed from fundamental principles to the final equations and response calculations. The formulations of the equations of
motion include aeroelasticity, and the response calculations include statistical analysis of the response quantities. Volume II also includes a thorough discussion of mode survey testing, mode parameter identification, and model adjustment to improve agreement with test data. Detailed data processing needed for the analysis of time signals, such as digitization, filtering, and transform computation, is also included with thorough derivations and proofs. Since the field of structural dynamics needs to deal with variability in practically all aspects, a comprehensive discussion of probability and statistics is included, with detailed derivations and proofs related to the statistics of time series data, small sample statistics, and the combination of responses whose statistical distributions are different. Volume II concludes with an extensive chapter on continuous systems, which not only includes the classical derivations and solutions for strings, membranes, beams, and plates but also the derivation and closed-form solutions for rotating disks and sloshing of fluids in rectangular and cylindrical tanks.

The two volumes of Structural Dynamics Fundamentals and Advanced Applications were written with both the practicing engineer and students just learning structural dynamics in mind. The derivations are rigorous and comprehensive, thus making understanding the material easier; this also allows more material to be covered in less time. To facilitate learning, detailed solutions to nearly 300 problems are included. This allows students to work the problems and immediately check their solutions; and for practicing engineers these problems serve as additional examples to those provided within the chapters.
As a final note, the material covered in the two volumes can be divided into two primary categories: material that is fundamental to learning and understanding structural dynamics; and material that is being used to solve extremely complex structural dynamics problems by the leading practitioners in the field.

CHAPTER 1

Structural dynamics

1. Introduction

The subject matter of this work is structural dynamics and the Laws of Nature that describe the vibratory behavior of structures. In our endeavor, we will use the language of mathematics to describe these laws and to make engineering predictions of the dynamic behavior of structures. Since we are interested in engineering applications where speeds are considerably less than the speed of light, the Law of Universal Gravitation and the Laws of motion as developed by Sir Isaac Newton, and published in 1686 in Philosophiae Naturalis Principia Mathematica (Newton, 1946), and commonly referred to as The Principia, will be our interest. The Principia is one of the greatest works of science, if not the greatest; and the definitions and laws Newton enunciated in The Principia impact our lives daily. Newton’s laws are used in practically every aspect of mechanical engineering, including the disciplines of orbital mechanics, flight mechanics, fluid dynamics, soil mechanics, environmental sciences, structures, civil engineering, and structural dynamics, the subject matter of this work.

1.1 Newton’s laws of motion

Sir Isaac Newton’s three Laws of motion, as translated in 1729 from Latin into English by Andrew Motte (Newton, 1946), are

(1) Every body continues in its state of rest, or of uniform motion in a right line, unless it is compelled to change that state by forces impressed upon it.
(2) The change of motion is proportional to the motive force impressed and is made in the direction of the right line in which that force is impressed.
(3) To every action there is always opposed an equal reaction; or, the mutual actions of two bodies upon each other are always equal and directed to contrary parts.

1.1.1 Newton’s First Law

To properly understand Newton's First Law, we must first understand what he meant by "body" and "forces impressed." The word "body" in our context means a rigid quantity of matter that has mass, where mass is an inherent property of matter that manifests itself as ". . . a power of resistance . . ." to changing the state of the matter, whether at rest or at constant speed and direction of motion (i.e., constant velocity). Newton referred to this as ". . . inertia . . . or force of inactivity." In his explanation, Newton added, "But a body only exerts this force when another force, impressed upon it, endeavors to change its condition; and the exercise of this force may be considered as both resistance and impulse. . . . Resistance is usually ascribed to bodies at rest, and impulse to those in motion." (Newton, 1946). By "forces impressed" Newton was referring to what we will call external forces: forces that cause a change in the state of rest or constant velocity of an object. This, as we will discuss with respect to the Third Law, excludes internal forces within the object, which always occur in pairs of equal and opposite forces and, therefore, do not affect the overall motion of the object's center of mass. For translational motion, the center of mass is the point where all the mass of a rigid object can be assumed to be concentrated, and for rotational motion it is the point about which the rotations of an unconstrained rigid object would occur. Since mass is defined as the resistance to change of the current state of a quantity of matter (object), and external force is defined as that which causes a change in the current state of the object, we are left with the quandary that each is defined in terms of the other. Hence, we require a reference quantity that will anchor the definitions. Since 1795 (Wikipedia), this reference quantity has been the kilogram, which is the base unit of mass in the International System of Units (SI) (Bureau International des Poids et Mesures, 2006).
From 1889, the kilogram was defined by a reference cylinder of platinum-iridium alloy, which is stored by the International Bureau of Weights and Measures just outside of Paris, France. In May 2019, the definition of the kilogram was changed so that it is defined in terms of the Planck constant (Bureau International des Poids et Mesures, 2006). Hence, for our purposes, the magnitude of a force can be established by applying it to a one-kilogram mass and measuring the change in velocity, i.e., its acceleration. Newton's Second Law of motion, which we will discuss in the next


section, can then be used to establish the magnitude of the force. In the International System of Units, the unit measure of force is the newton, which is the force required to accelerate one kilogram one meter per second squared, i.e.,

$$\text{one newton} = (\text{one kilogram})\left(\text{one meter/second}^2\right)$$

Newton's First Law is intuitively the simplest of the three. If we have in space a rigid mass point, where "mass point" also refers to the center of mass of a rigid finite-size object, and there are no external forces, including forces such as gravity, acting on the object, then it will either remain at rest if it is not moving or continue with its current speed and direction (velocity). This seems simple enough, except for the fundamental question of how one establishes whether the object is stationary, moving at a constant speed and/or direction, or changing its speed and/or direction. To do this we need a reference frame against which we establish the location of the object as a function of time. Newton addressed this by assuming the distant stars could serve as a "stationary" or inertial reference frame (or inertial frame of reference). We will discuss this in detail in Section 1.2, but for our purposes we will use the common definition: an inertial reference frame is a frame of reference in which Newton's laws can be written in their simplest mathematical form; this requires that the reference frame's velocity be zero or constant, i.e., that the reference frame not be undergoing acceleration, where acceleration is defined as the time rate of change of velocity.

1.1.2 Newton's Second Law

Newton's Second Law addresses the momentum of a rigid mass point. If the mass point has a velocity $\vec{v}$, where the superscript arrow indicates a vector, then it has momentum, where momentum is the mass of the object, a scalar quantity, times its velocity, i.e., $m\vec{v}$; and since velocity is a vector quantity, so is momentum. The term "change in motion" in the law is in the context of change in momentum, and for constant mass implies a change in velocity, $\Delta\vec{v}(t)$, over some period of time, $\Delta t$; hence, velocity is a function of time, and we can write $\frac{\Delta\left(m\vec{v}(t)\right)}{\Delta t} = m\frac{\Delta\vec{v}(t)}{\Delta t}$, which in the limit as $\Delta t$ becomes infinitesimally small gives the sought-after expression, $m\frac{d\vec{v}(t)}{dt}$. The law states that this change has to be equal to the "motive force impressed" and is directed in the direction of the force. In this context, the force is also a vector quantity and a function of time, and we will write it as $\vec{f}(t)$.


Newton expressed the sum of forces by the parallelogram law, i.e., the effect of multiple forces is their vector sum. Newton's Second Law of motion, therefore, can be written as

$$\frac{d}{dt}\left(m\vec{v}(t)\right) = m\frac{d}{dt}\vec{v}(t) = \sum_j \vec{f}_j(t) \tag{1.1-1}$$

A consequence of Newton's Third Law is that any internal forces within the mass occur in equal and opposite pairs. Hence, the internal forces sum to zero, and the forces on the right-hand side of Eq. (1.1-1) are all due to sources external to the mass. The mass $m$ is referred to as the inertial mass, and it represents the inertial resistance of the object to a change in its velocity, $\frac{d}{dt}\vec{v}(t)$, due to the external forces $\vec{f}_j(t)$.

1.1.3 Newton's Third Law

Newton's Third Law states that for ". . . every action there is always opposed an equal reaction." We experience this every day: Earth's gravity, along with some other small effects, exerts on us a net force directed toward the center of mass of the Earth, and the ground exerts an equal and opposite force that keeps us from moving toward the Earth's center of mass. Fig. 1.1-1A shows a rigid object of mass $m$ that is restricted to slide in the plane of the page on a frictionless surface fixed in inertial space. A massless spring connects the mass to a wall fixed to the horizontal surface.

FIGURE 1.1-1 (A) A rigid object of mass $m$ connected by a massless spring to a wall in inertial space; $m$ can only slide on the indicated frictionless surface in the plane of the page. (B) System in (A) at an instant of time, $t_i$, where the mass point is a distance $a + d(t_i)$ from the wall. (C) Free-body diagrams for the system shown in (B).

The Earth exerts on the mass a net force of magnitude $f_E$ directed downward, which we will define as negative since force is a vector and, therefore, must have direction in addition to magnitude. The surface exerts an equal and opposite force directed upward, as shown in Fig. 1.1-1C. We refer to this force, whose magnitude is $f_R$, as a reaction force, since it is due to a boundary condition. Because there is no vertical acceleration of the mass, $f_R = f_E$. In Fig. 1.1-1A, the position of the mass is a distance $a$ to the right of the wall. In this position, the neutral position, the spring is neither stretched nor compressed. In Fig. 1.1-1B we observe the mass-spring system at a later instant in time $t_i$, where the mass has moved relative to the neutral position a distance $d(t_i)$ to the right. Here, we define displacement away from the wall as positive. Since $d(t)$ defines position relative to a reference point, i.e., the point a distance $a$ from the wall, it is referred to as a coordinate; and since it takes on different values to describe the position of the mass as it moves, it is a function of time. In Fig. 1.1-1C we show the free-body diagrams of the spring and mass at the instant of time $t_i$. In this position, the mass exerts a force on the spring directed to the right of magnitude $k d(t_i)$, where we assume the spring force is proportional to the displacement between its ends according to Hooke's Law (Crandall et al., 1972). Then, according to Newton's Third Law, the mass must "feel" an equal and opposite force directed to the left, i.e., $-k d(t_i)$. Since the spring is stretched, it exerts a force, $k d(t_i)$, directed to the right on the wall. Then, according to Newton's Third Law, the wall exerts an equal and opposite force on the spring of $-k d(t_i)$. Since the spring is massless, the sum of the forces at each end must cancel, and as can be ascertained from Fig. 1.1-1C, the vector sum of the spring forces is zero.

On the other hand, the spring exerts a force on the mass, and since there is no other applied horizontal force, according to Newton's First Law this force will cause the mass to deviate from rest or constant-velocity motion. In this case we must apply Newton's Second Law of motion, which states that the vector sum of all external forces (here we are only considering horizontal motion) must be equal to the time rate of change of the velocity times the mass, Eq. (1.1-1), i.e.,

$$m\frac{d}{dt}\left(\frac{d}{dt}d(t)\right) = -k d(t)$$
$$m\ddot{d}(t) + k d(t) = 0 \tag{1.1-2}$$

where $\frac{d}{dt}d(t)$ is the velocity of the mass, and the dot superscripts indicate differentiation with respect to time.
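Eq. (1.1-2) can be checked numerically. The following is a minimal sketch (the values of $m$ and $k$ are arbitrary illustrative choices, not taken from the text) that integrates $m\ddot{d}(t) + kd(t) = 0$ with a symplectic (semi-implicit) Euler scheme; the displacement returns to its initial value after one period $2\pi/\omega_n$, where $\omega_n = \sqrt{k/m}$ is the well-known natural frequency of the system.

```python
import math

# Free vibration of the mass-spring system of Eq. (1.1-2): m*d'' + k*d = 0.
# m and k are arbitrary illustrative values, not taken from the text.
m = 2.0      # kg
k = 50.0     # N/m
omega_n = math.sqrt(k / m)       # natural frequency, rad/sec

# Integrate with the semi-implicit (symplectic) Euler scheme.
d, v = 0.01, 0.0                 # initial displacement (m) and velocity (m/sec)
dt, t = 1.0e-5, 0.0
t_end = 2.0 * math.pi / omega_n  # one full period of oscillation
while t < t_end:
    a = -(k / m) * d             # acceleration from the equation of motion
    v += a * dt
    d += v * dt
    t += dt

# Analytic solution: d(t) = d(0)*cos(omega_n*t); after one period the
# displacement returns to its initial value.
print(omega_n)                   # 5.0 rad/sec for these values
print(abs(d - 0.01) < 1e-4)      # True
```

The symplectic scheme is chosen because it preserves the oscillation amplitude over long integrations, which makes the one-period check meaningful.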


1.2 Reference frames

Determining whether an object is stationary or moving can only be done relative to a reference frame, a three-dimensional space of infinite extent that contains the mass points of interest. Newton "measured" motion relative to distant stars, which he considered fixed in the sky. Today, we know this is not the case; however, for most practical engineering purposes, it is a very good assumption. The application of Newton's laws of motion in their simplest mathematical form requires that the reference frame in which the motion is defined be an inertial reference frame (or inertial frame of reference). An inertial reference frame is a frame of reference in which the velocity of a mass point, with no applied external forces, is either zero or constant, i.e., there is no acceleration as measured relative to the reference frame. To illustrate the difference between inertial and non-inertial reference frames, we will compute the acceleration of a mass point undergoing circular motion about a point fixed in inertial space. Fig. 1.2-1A and B show a mass point, $p$, moving counterclockwise from position 1 to position 2 during a time interval $\Delta t$. In position 1 the location of the mass point is given by $\hat{r}(t)$ and at position 2 by $\hat{r}(t+\Delta t)$. In Fig. 1.2-1A we have assumed a reference frame (Cartesian coordinate system) fixed in inertial space with the origin located at the center of the circular motion of the mass point. In Fig. 1.2-1B we specify a polar coordinate system to describe the motion of the mass point.

FIGURE 1.2-1 (A) Position of point $p$ defined in an inertial Cartesian coordinate system, with the orientation of unit vectors $\hat{e}_x$ and $\hat{e}_y$ parallel to the $x$- and $y$-axes, respectively; (B) Position of the same point $p$ as in (A) defined in a polar coordinate system, with the orientation of unit vectors $\hat{e}_r(t)$ and $\hat{e}_\theta(t)$ a function of $\theta(t)$.

The motion of the mass point in inertial space is not affected by the coordinate systems we use, but the mathematical description of this motion will be very different in the two coordinate systems. In Fig. 1.2-1A the location of mass point $p$, at time $t$ (position 1), is given by

$$\hat{r}(t) = x(t)\hat{e}_x + y(t)\hat{e}_y \tag{1.2-1}$$

where the hat designates a vector, and $\hat{e}_x$ and $\hat{e}_y$ are unit vectors parallel to the $x$- and $y$-axes, respectively. When point $p$ moves from position 1 to position 2 over time $\Delta t$, the unit vectors $\hat{e}_x$ and $\hat{e}_y$ remain parallel to the $x$- and $y$-axes. Hence, they are not a function of time. Differentiating $\hat{r}(t)$ once with respect to time yields the velocity,

$$\dot{\hat{r}}(t) = \frac{d}{dt}\hat{r}(t) = \dot{x}(t)\hat{e}_x + \dot{y}(t)\hat{e}_y \tag{1.2-2}$$

and differentiating again yields the time rate of change of velocity, or acceleration,

$$\ddot{\hat{r}}(t) = \frac{d}{dt}\dot{\hat{r}}(t) = \ddot{x}(t)\hat{e}_x + \ddot{y}(t)\hat{e}_y \tag{1.2-3}$$

In Fig. 1.2-1B the position of mass point $p$ is described in the indicated polar coordinate system. $\theta(t)$ denotes the counterclockwise angle from the $x$-axis to vector $\hat{r}(t)$; hence,

$$\hat{r}(t) = r(t)\hat{e}_r(t) \tag{1.2-4}$$

where $r(t)$ is the distance at time $t$ from the origin to point $p$, and $\hat{e}_r(t)$ is the unit vector that points from the origin to point $p$, i.e.,

$$\hat{e}_r(t) = \hat{r}(t)/\left|\hat{r}(t)\right| \tag{1.2-5}$$

The unit vector $\hat{e}_\theta(t)$ is defined to be orthogonal (perpendicular) to $\hat{e}_r(t)$, as shown. Unlike in the rectangular coordinate system, unit vectors $\hat{e}_\theta(t)$ and $\hat{e}_r(t)$ must be functions of time since their orientations relative to an inertial reference frame, in this case the Cartesian $x$-$y$ axes, change as point $p$ moves. This can be seen in Fig. 1.2-1B, where these unit vectors do not remain parallel when point $p$ moves from position 1 to position 2. Because these unit vectors are functions of time, computing the velocity and acceleration in polar coordinates is more involved than if we were in a Cartesian coordinate system.
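The time dependence of the polar unit vectors can be seen numerically. The sketch below (the angle history $\theta(t) = 0.5t$ rad is an arbitrary illustrative choice) writes $\hat{e}_r(t)$ in Cartesian components and shows by a central-difference derivative that it changes with time at the rate $\dot{\theta}(t)\hat{e}_\theta(t)$, the result derived analytically in the text that follows.

```python
import math

# Finite-difference check that the polar unit vector e_r(t) changes with time.
# theta(t) = 0.5*t rad is an arbitrary illustrative angle history.
THETA_DOT = 0.5

def theta(t):
    return THETA_DOT * t

def e_r(t):      # radial unit vector in Cartesian components
    return (math.cos(theta(t)), math.sin(theta(t)))

def e_theta(t):  # tangential unit vector, orthogonal to e_r
    return (-math.sin(theta(t)), math.cos(theta(t)))

t0, h = 1.2, 1.0e-6
# Central-difference time derivative of e_r at t0
de_r = tuple((a - b) / (2.0 * h) for a, b in zip(e_r(t0 + h), e_r(t0 - h)))

# Expected from the analytic derivative: d(e_r)/dt = theta_dot * e_theta
expected = tuple(THETA_DOT * c for c in e_theta(t0))
print(max(abs(a - b) for a, b in zip(de_r, expected)) < 1e-8)  # True
```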


To obtain the velocity in polar coordinates, we differentiate Eq. (1.2-4) with respect to time,

$$\dot{\hat{r}}(t) = \frac{d}{dt}\hat{r}(t) = \dot{r}(t)\hat{e}_r(t) + r(t)\dot{\hat{e}}_r(t) \tag{1.2-6}$$

Having the time derivative of a unit vector is not very convenient. However, we know that $\hat{e}_r(t) = \hat{e}_x\cos\theta(t) + \hat{e}_y\sin\theta(t)$. Differentiating with respect to time yields

$$\dot{\hat{e}}_r(t) = -\hat{e}_x\dot{\theta}(t)\sin\theta(t) + \hat{e}_y\dot{\theta}(t)\cos\theta(t) = \dot{\theta}(t)\left(-\hat{e}_x\sin\theta(t) + \hat{e}_y\cos\theta(t)\right) = \dot{\theta}(t)\hat{e}_\theta(t) \tag{1.2-7}$$

Substituting into Eq. (1.2-6) produces the sought-after velocity,

$$\dot{\hat{r}}(t) = \dot{r}(t)\hat{e}_r(t) + r(t)\dot{\theta}(t)\hat{e}_\theta(t) \tag{1.2-8}$$

Differentiating Eq. (1.2-8) with respect to time produces the acceleration,

$$\begin{aligned}\ddot{\hat{r}}(t) &= \ddot{r}(t)\hat{e}_r(t) + \dot{r}(t)\dot{\hat{e}}_r(t) + \dot{r}(t)\dot{\theta}(t)\hat{e}_\theta(t) + r(t)\frac{d}{dt}\left(\dot{\theta}(t)\hat{e}_\theta(t)\right) \\ &= \ddot{r}(t)\hat{e}_r(t) + 2\dot{r}(t)\dot{\theta}(t)\hat{e}_\theta(t) + r(t)\left(\ddot{\theta}(t)\hat{e}_\theta(t) + \dot{\theta}(t)\dot{\hat{e}}_\theta(t)\right)\end{aligned} \tag{1.2-9}$$

From Eq. (1.2-7) we know that $\hat{e}_\theta(t) = -\hat{e}_x\sin\theta(t) + \hat{e}_y\cos\theta(t)$. Therefore,

$$\dot{\hat{e}}_\theta(t) = \dot{\theta}(t)\left(-\hat{e}_x\cos\theta(t) - \hat{e}_y\sin\theta(t)\right) = -\dot{\theta}(t)\hat{e}_r(t) \tag{1.2-10}$$

Substituting into Eq. (1.2-9) produces the sought-after acceleration,

$$\ddot{\hat{r}}(t) = \left(\ddot{r}(t) - r(t)\dot{\theta}^2(t)\right)\hat{e}_r(t) + \left(2\dot{r}(t)\dot{\theta}(t) + r(t)\ddot{\theta}(t)\right)\hat{e}_\theta(t) \tag{1.2-11}$$

Eqs. (1.2-3) and (1.2-11) describe the same absolute acceleration, but in different reference frames (coordinate systems). In the former, the Cartesian coordinate system is in inertial space and, hence, the acceleration of the frame is zero. In the latter, because of the use of polar coordinates, the


reference frame rotates as the mass point moves. The time-dependent change in the radial and tangential velocities is acceleration and, hence, the reference frame is not inertial. The additional terms in Eq. (1.2-11) are needed to transform the polar-coordinate description of motion into an inertial reference frame in which Newton's Laws of motion are defined. With Eq. (1.2-11) the equations of motion of a mass point in polar coordinates can be written. Applying Newton's Second Law, Eq. (1.1-1), we obtain

$$m\ddot{\hat{r}}(t) = \sum_j \vec{f}_j(t)$$
$$m\left[\left(\ddot{r}(t) - r(t)\dot{\theta}^2(t)\right)\hat{e}_r(t) + \left(2\dot{r}(t)\dot{\theta}(t) + r(t)\ddot{\theta}(t)\right)\hat{e}_\theta(t)\right] = f_r(t)\hat{e}_r(t) + f_t(t)\hat{e}_\theta(t) \tag{1.2-12}$$

where $f_r(t)\hat{e}_r(t)$ and $f_t(t)\hat{e}_\theta(t)$ are the radial and tangential components, respectively, of any external forces acting on the mass. The term $2\dot{r}(t)\dot{\theta}(t)\hat{e}_\theta(t)$ is referred to as the Coriolis acceleration, which is only present if the mass particle has a radial velocity. The term $r(t)\ddot{\theta}(t)\hat{e}_\theta(t)$ is referred to as the Euler acceleration, and it is only present when the rotation rate, $\dot{\theta}(t)$, is not constant. When multiplied by the mass, these two terms are referred to as the Coriolis force and the Euler force, respectively.

The term $-r(t)\dot{\theta}^2(t)$ in Eq. (1.2-12) is best described by computing the force required to cause a mass point to move in a circular orbit about a fixed point. Since $\hat{e}_r(t)$ and $\hat{e}_\theta(t)$ are orthogonal, Eq. (1.2-12) can be written as

$$m\left(\ddot{r}(t) - r(t)\dot{\theta}^2(t)\right)\hat{e}_r(t) = f_r(t)\hat{e}_r(t)$$
$$m\left(\ddot{r}(t) - r(t)\dot{\theta}^2(t)\right) = f_r(t) \tag{1.2-13}$$

and

$$m\left(2\dot{r}(t)\dot{\theta}(t) + r(t)\ddot{\theta}(t)\right)\hat{e}_\theta(t) = f_t(t)\hat{e}_\theta(t)$$
$$m\left(2\dot{r}(t)\dot{\theta}(t) + r(t)\ddot{\theta}(t)\right) = f_t(t) \tag{1.2-14}$$

Let the period of a complete orbit be $T$ seconds; then the angular rate will be $\omega = 2\pi/T$ rad/sec, which gives $\dot{\theta}(t) = \omega$ and $\ddot{\theta}(t) = 0$. Since the orbit is circular, the radius of the orbit will be constant; hence, $r(t) = R$, $\dot{r}(t) = 0$, and $\ddot{r}(t) = 0$. Substituting into Eq. (1.2-14) yields $f_t(t) = 0$, and into Eq. (1.2-13) gives

$$-mR\omega^2 = f_r(t) = f_r \tag{1.2-15}$$


Since $m$, $R$, and $\omega^2$ are all positive quantities, the negative sign defines the direction of $f_r$. Hence, for this problem, the radial force is always directed toward the center of the circular motion. Newton called this force the centripetal force, and it causes the particle to stay in circular motion about the origin. The force of gravity that keeps the moon in orbit about the Earth and the tension in a string that keeps a rock whirling about a point are examples of centripetal force.

For most practical engineering problems on or near the surface of the Earth and of reasonable dimensions, i.e., hundreds of meters (feet), not many kilometers (miles), the assumption is made that the surface of the Earth can serve as an inertial reference frame. However, as we know, the Earth rotates on its axis and orbits the sun. In addition, the solar system rotates about the center of mass of our galaxy. So, before accepting the surface of the Earth as a "good enough" inertial reference frame, we should determine, for example, the impact of not including the centripetal acceleration term in our equations of motion. The Euler acceleration is zero because the rotation rate of the Earth is constant. The radius of the Earth at the equator is 6378 km (3963 miles), and the Earth completes one full rotation every 24 hours, which is a rotation rate of $7.27 \times 10^{-5}$ rad/sec. This yields a centripetal acceleration of 0.03 m/sec² (1.33 in/sec²), as compared to the acceleration due to the force of gravity of 9.81 m/sec² (386.09 in/sec²). Hence, the maximum centripetal acceleration due to Earth's rotation is about one-third of one percent of the acceleration due to the force of gravity. The acceleration due to Earth's orbit about the sun, and that of the solar system about the center of mass of the galaxy, is considerably smaller.
Therefore, it is a reasonable assumption for most practical Earth-bound engineering problems to not include the acceleration due to the rotation of the Earth and to consider the surface of the Earth to be an inertial reference frame. It should be noted, however, that the Coriolis effect needs to be included when analyzing large-scale motion, such as the global movement of air, ocean currents, and trajectories of objects moving relative to the Earth. The interest of this book is small-scale vibratory motion of systems that are small relative to the Earth; hence, we will not need to include the Coriolis term either.
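The magnitude estimates above are easy to reproduce. A short sketch evaluates the centripetal term $R\omega^2$ from Eq. (1.2-15) for the Earth's equatorial radius and daily rotation, recovering the roughly 0.03 m/sec² (about one-third of one percent of $g$) quoted above.

```python
import math

# Centripetal acceleration at the equator due to Earth's rotation,
# using the magnitude R*omega**2 from Eq. (1.2-15).
R = 6378.0e3                      # equatorial radius, m
T = 24.0 * 3600.0                 # rotation period, sec
omega = 2.0 * math.pi / T         # rotation rate, rad/sec
a_centripetal = R * omega**2      # m/sec^2
g = 9.81                          # gravitational acceleration, m/sec^2

print(omega)                      # ~7.27e-05 rad/sec
print(a_centripetal)              # ~0.034 m/sec^2
print(100.0 * a_centripetal / g)  # ~0.34 percent of g
```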

1.3 Degrees of freedom

An infinitely rigid object of mass $m$ and non-zero dimensions has six independent freedoms of motion in three-dimensional space; we refer to these freedoms as degrees of freedom. Fig. 1.3-1 shows such an object in inertial space. The figure also shows the three-dimensional Cartesian coordinate system whose origin, $o$, is embedded in the same inertial space as the object. In this space, the object can translate independently along the three Cartesian coordinate directions. These directions are independent because they are orthogonal to each other and, therefore, information about the position of the object along any one axis is not known to the other two. In addition, since the object has finite size, its mass is not concentrated at a single point at the center of mass. Hence, the object can have rotational momentum. The angular velocities and, therefore, angular rotations will be independent about the three coordinate axes. If the axes coincide with the principal axes, and their origin is at the center of mass, there will be no mathematical (coordinate) coupling in the mass properties of the object; this will be discussed in detail in Chapter 6. However, irrespective of the coordinates chosen, there will be three independent rotational degrees of freedom. In Fig. 1.3-1, coordinates $x(t)$, $y(t)$, and $z(t)$ establish the position of the center of mass of the object relative to the origin of the inertial coordinate system. These coordinates are a function of time since the object has the freedom to translate relative to the origin, and any movement must occur

FIGURE 1.3-1 Three-dimensional Cartesian coordinate system, with origin at point o, in inertial space, and an infinitely rigid object with mass m. The three translational and three rotational degrees of freedom are shown.


over some period of time since the object has mass. The figure shows the location of the object at time $t_i$, where $t_i \geq 0$. The rotations of the object can be established by coordinates $\theta_x(t)$, $\theta_y(t)$, and $\theta_z(t)$, which in the figure are defined to be rotations about the $x$-, $y$-, and $z$-axes, respectively. Note that depending on the mass distribution of the object, defining rotations about the $x$-, $y$-, and $z$-axes shown in the figure may not be optimum, since these might not be principal axes of rotation. This will be covered in detail in Chapter 6 and in Volume II. For the majority of the problems discussed in this book, and unless otherwise noted, the rotation angles will be assumed small and, hence, to first order can be treated as vectors. Since the six coordinates that describe the location and state of the object are functions of time, differentiation with respect to time will yield the corresponding velocities; and differentiation of the velocities with respect to time will yield the corresponding changes in velocity, or accelerations.

1.3.1 Newton's Second Law and rotational motion

Newton's Second Law of motion, Eq. (1.1-1), describes the motion of a rigid mass point undergoing translational motion. So how do we apply this law to a rigid mass system that is not a point mass and has rotational degrees of freedom? Fig. 1.3-2 shows a system consisting of two mass points, each of mass $m$, connected by a rigid bar that we will assume is massless. The two masses are separated by a distance $2r$. The masses are allowed to move only in the plane of the page, and since they are rigidly connected, the system has three degrees of freedom: translation in the $x$- and $y$-coordinate directions, and rotation about the $z$-axis, which is coming out of the page from the origin. We have located the origin of the inertial coordinate

FIGURE 1.3-2 Two equal point masses separated by a distance 2r and connected by a massless infinitely rigid bar.


system such that it coincides with the center of mass of the system at $t = 0$; the figure shows the orientation of the coordinate system and the positions of the mass points at $t = 0$. There are external forces, $\vec{f}(t)$, of equal magnitude but opposite direction, acting on each mass point. These forces remain perpendicular to the bar as the system rotates; hence, they are referred to as follower forces. Newton's Second Law describes the behavior of the center of mass of rigid objects. Therefore, we must establish equivalent forces that, when applied to the center of mass, yield the same behavior as the vector sum of the forces acting on the object. Because the two external forces are equal but directed opposite, the net force at the center of mass in the $x$- and $y$-coordinate directions will be equal to zero. Since there is a moment arm between the two external forces, they form a couple, and this will cause the system to rotate. Hence, we must derive an equivalent torque (moment) to be applied at the center of mass about the $z$-axis, i.e., $T_{\theta_z}(t) = 2rf(t)\hat{e}_z$. The subscript on the torque indicates that it is about the $z$-axis; the torque is, therefore, a vector whose direction is perpendicular to the plane in which it is applied and whose magnitude is that of the torque. Since the net translational forces are zero, the center of mass of the two-mass system will not translate. However, the equivalent external torque about the center of mass will cause the system to rotate, and we must, therefore, extend Newton's Second Law to this case. The inertial force at each mass point is $m\frac{d}{dt}\vec{v}(t)$. These inertial forces will each produce an inertial

torque about the center of mass of $r\left(m\frac{d}{dt}v(t)\right)$, where $v(t)$ is the magnitude of the velocity of the mass, and the direction of the velocity is at ninety degrees to the bar that connects the two masses. Accordingly, $v(t) = r\dot{\theta}_z(t)$, where $\dot{\theta}_z(t)$ is the rotational velocity of the two-mass system about its center of mass, point $o$, and the subscript $z$ indicates the axis about which the rotation takes place. Substituting into the expression for the torque produced by the inertial force yields

$$r\left(m\frac{d}{dt}v(t)\right) = r\left(m\frac{d}{dt}\left(r\dot{\theta}_z(t)\right)\right) = r^2 m\frac{d}{dt}\dot{\theta}_z(t) \tag{1.3-1}$$


Accordingly, the torque produced by the inertial forces must be equal to the torque produced by the external forces acting on the masses; hence,

$$2\left(r^2 m\right)\frac{d}{dt}\dot{\theta}_z(t) = T_{\theta_z}(t) \tag{1.3-2}$$

We can generalize the above expression for any number of arbitrary mass points in the plane of the page that are rigidly connected, and any number of applied force couples (torques), i.e.,

$$\left(\sum_j r_j^2 m_j\right)\frac{d}{dt}\dot{\theta}_z(t) = \sum_j T_{\theta_z j}(t) \tag{1.3-3}$$

The term in the parentheses is referred to as the rotational inertia or mass moment of inertia, and typically the symbol $I_{ij}$ is used to represent this quantity. The mass moment of inertia represents the resistance of the object to changes in rotation, just as translational mass represents the resistance to changes in translational motion. Since $I_{ij}$ depends on the distance, $r_j$, of each mass point from the rotation axis, its value will be a function of the selected axis. For our simple example, Eq. (1.3-3) would be written as

$$I_{zz}\frac{d}{dt}\dot{\theta}_z(t) = T_{\theta_z} \tag{1.3-4}$$

It should be noted, and it will be discussed in significant detail in upcoming chapters, that irrespective of which reference frames and/or coordinate systems are chosen, the equations of motion must still define the motion of the center of mass, including rotations about the center of mass, in inertial space. If a non-inertial coordinate system is used, additional terms will be required in the equations of motion to account for the fact that a non-inertial reference frame is used.

1.4 Absolute and relative coordinates

Previously, it was stated that Newton's laws require that motion be defined in an inertial reference frame; and, if a non-inertial reference frame were used, additional terms would be required to describe the motion relative to an inertial reference frame. Coordinates that define motion in inertial reference frames are typically referred to as absolute coordinates. In developing structural dynamic models, however, numerous coordinate systems


might be used, including relative coordinates that are defined in non-inertial reference frames. In this case, appropriate transformations must be applied so that the ultimate motion is still defined relative to an inertial reference frame. Fig. 1.4-1 shows a two-mass system at time $t_j$, $t_j \geq 0$, that is allowed to translate in the plane of the page in the lateral (right/left) direction only, i.e., there is no vertical or rotational motion. Mass $m$ can move relative to mass $M$ as shown because of the flexibility of the two columns that connect the masses. The position of mass $M$ in inertial space is defined by coordinate $x_B(t)$, which is defined in the $x$-$y$ coordinate system with its origin, point $o$, embedded in the Earth. The position of mass $m$ relative to mass $M$ is defined by coordinate $x_R(t)$, which is defined in the $x_R$-$y_R$ coordinate system with its origin, point $R$, embedded in mass $M$. In this system, $x_B(t)$ is an absolute coordinate because it defines motion in the Earth inertial reference frame. Coordinate $x_R(t)$ defines the motion of mass $m$ relative to mass $M$, and since mass $M$ will accelerate when subjected to force $f(t_j)$, the origin of the $x_R$-$y_R$ coordinate system is in a non-inertial reference frame; hence, $x_R(t)$ is a relative coordinate and by itself cannot be used to define the motion of the mass in inertial space, as required by Newton's laws.

FIGURE 1.4-1 Two-mass system at time $t_j$, $t_j \geq 0$, that is only allowed to translate in the plane of the page in the lateral (right/left) direction. The $x$-$y$ coordinate system is defined in the Earth inertial reference frame with origin at point $o$. The $x_R$-$y_R$ coordinate system is attached to mass $M$ with its origin at point $R$.
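The roles of the absolute coordinate $x_B(t)$ and the relative coordinate $x_R(t)$ can be illustrated with a minimal numerical sketch of the Fig. 1.4-1 system. Here the column flexibility is lumped into a single linear spring of stiffness $k$ acting on $x_R(t) = x(t) - x_B(t)$, and all numeric values are arbitrary illustrative choices, not taken from the text. Because the column forces on the two masses form an internal action-reaction pair, only the external force $f$ changes the total momentum, which is Newton's Third Law in action.

```python
# Two-mass system of Fig. 1.4-1: base mass M driven by force f, mass m coupled
# to M through a lumped column stiffness k acting on the relative coordinate
# x_R = x - x_B.  All numeric values are arbitrary illustrative choices.
M, m, k, f = 10.0, 1.0, 400.0, 5.0   # kg, kg, N/m, N (constant force)

xB, vB = 0.0, 0.0    # absolute position/velocity of M
x, v = 0.0, 0.0      # absolute position/velocity of m
dt, t, t_end = 1.0e-5, 0.0, 1.0
while t < t_end:
    xR = x - xB                  # relative coordinate (column deformation)
    aB = (f + k * xR) / M        # column reaction force acts back on M
    a = -k * xR / m              # column force is the only force on m
    vB += aB * dt
    xB += vB * dt
    v += a * dt
    x += v * dt
    t += dt

# Internal column forces cancel in pairs, so total momentum grows only from
# the external force:  M*vB + m*v = f*t
p_total = M * vB + m * v
print(abs(p_total - f * t) < 1e-3)   # True
```

Note that the inertial force on mass $m$ is formed with the absolute acceleration (the variable `a` updates the absolute velocity `v`), while the column force uses only the relative coordinate `xR`.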


For modeling purposes we may wish to use coordinate $x_R(t)$, which defines the relative motion between the two masses and is, therefore, directly proportional to the forces that the columns exert on the masses when deformed. However, to apply Newton's laws we must define the inertial force associated with mass $m$ in an inertial reference frame and, therefore, the coordinates that are used to define the acceleration must be in an inertial reference frame. For the system shown in Fig. 1.4-1, this coordinate would be $x(t)$. Therefore, for mass $m$ we must define the inertial force term in Newton's Second Law as either $m\ddot{x}(t)$ or $m\left(\ddot{x}_B(t) + \ddot{x}_R(t)\right)$, while only using coordinate $x_R(t)$ to define the forces that the columns exert on the masses. It should be noted, however, that if we were to use coordinate $x_R(t)$ alone, then the computed overall rigid-body displacement (position) would not be correct. Hence, when developing the equations of motion for the two-mass system, $x_R(t)$ will have to be defined as $x(t) - x_B(t)$. This will be discussed in considerable detail in upcoming chapters.

1.5 Constraints

Although every rigid object of finite size has six degrees of freedom in three-dimensional space, for modeling purposes we may wish to constrain some of these freedoms to be the same as those of other mass points, constrain them to some specific geometric condition (kinematic constraints), or have them undergo some prescribed motion. Constraints that specify relations between displacement coordinates, or that specify displacement coordinate values, are referred to as holonomic constraints. These can be functions of time. A boundary condition that specifies zero displacement or slope at a particular point in a structure would be an example of a holonomic constraint. All other constraints are referred to as non-holonomic constraints.
An example of a non-holonomic constraint would be an inequality that specifies that the displacement at a specific point in a structure must be greater than a certain value. If a constraint is not an explicit function of time, it is referred to as a scleronomic constraint; a boundary condition that is independent of time would be an example. A constraint that is an explicit function of time is referred to as a rheonomous constraint. Imposing measured ground motion due to an earthquake as a prescribed motion at the base of a model of a building would be an example of a rheonomous constraint.
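As a sketch of a rheonomous constraint, the snippet below (model and values are illustrative, not from the text) imposes a prescribed base displacement $x_g(t)$ on a spring-mass model, as in the earthquake example. The constraint enters the equation of motion only as an explicit function of time; the base coordinate is not a free unknown. For base motion much slower than the natural frequency, the mass nearly tracks the base.

```python
import math

# Rheonomous (time-dependent) constraint: the base of a spring-mass model is
# forced to follow a prescribed motion x_g(t), as in seismic base excitation.
# The model and all numeric values are arbitrary illustrative choices.
m, k = 1.0, 1000.0            # kg, N/m
omega_n = math.sqrt(k / m)    # natural frequency, ~31.6 rad/sec
Omega = 0.5                   # base-motion frequency, rad/sec (<< omega_n)

def x_g(t):                   # prescribed base displacement, m
    return 0.01 * math.sin(Omega * t)

# Newton's Second Law for the mass: m*x'' = -k*(x - x_g(t)).
x, v = 0.0, 0.0
dt, t = 1.0e-4, 0.0
while t < 2.0:
    a = -k * (x - x_g(t)) / m
    v += a * dt
    x += v * dt
    t += dt

# Because the base motion is much slower than the natural frequency, the
# mass quasi-statically follows the prescribed base displacement.
print(abs(x - x_g(t)) < 0.001)   # True
```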


FIGURE 1.5-1 Cubic object from Fig. 1.3-1 constrained to move between two frictionless surfaces.

If $a$ coordinates describe the displacements of a system, and there are $b$ equations of constraint between the displacements, then only $a - b$ coordinates are independent; these independent coordinates are also referred to as generalized coordinates. The $a - b$ independent, or generalized, coordinates will fully describe the displacement configurations of the system for which the $a$ coordinates were originally selected; this conclusion is also valid for external forces. Fig. 1.5-1 shows the cubic object from Fig. 1.3-1 constrained to slide between two frictionless surfaces that are in contact with the top and bottom faces of the object. The two plates impose several constraints. First, the plates prevent motion in the $y$-coordinate direction; hence, $y(t) = 0$. In addition, the plates prevent the object from rotating about the $x$- and $z$-axes; hence, $\theta_x(t) = 0$ and $\theta_z(t) = 0$. In total, there are three constraints and, therefore, there are only three ($6 - 3$) independent coordinates. These correspond to the three degrees of freedom the object would have after the constraints were imposed, namely $x(t)$, $z(t)$, and $\theta_y(t)$. Fig. 1.5-2 shows the cubic object from Fig. 1.3-1 restricted to move in a square tube with frictionless sides. This adds two more constraints relative to the configuration in Fig. 1.5-1. While in Fig. 1.5-1 the object could rotate about the $y$-axis, the sidewalls in Fig. 1.5-2 preclude this degree of freedom; hence, $\theta_y(t) = 0$. In addition, the two sidewalls preclude translation in the

FIGURE 1.5-2 Cubic object from Fig. 1.3-1 constrained to move in a square tube with frictionless sides.


CHAPTER 1 Structural dynamics

FIGURE 1.5-3 Two circular masses connected by a flexible rod and restricted to slide along the round frictionless tube. The two masses can also rotate about the x-axis.

z-coordinate direction; hence, z(t) = 0. Adding the two new constraints to the three from the previous discussion leaves us with one degree of freedom, which can be described by one coordinate, x(t). The five constraint equations from the preceding discussion are examples of holonomic constraints. Fig. 1.5-3 shows two circular masses that are connected by a round flexible rod. The flexibility of the rod allows for relative motion between the masses along the x-coordinate direction. In addition, because the rod is flexible, the two masses can rotate about the x-axis relative to each other. The tube restricts both masses to slide along its length; hence, the masses neither translate along the y- or z-coordinate directions nor rotate about the y- and z-axes. Because of the constraints imposed by the tube, each mass has only two degrees of freedom and, hence, we need two independent coordinates per mass, for a total of four, to describe the motions of this two-mass system. The four independent coordinates, x1(t), x2(t), θx,1(t), and θx,2(t), are shown in the figure. Now, assume that the rod connecting the two masses becomes infinitely rigid in torsion so that there cannot be any relative rotation about the x-axis between the two masses. This yields an additional constraint equation, θx,1(t) = θx,2(t). Furthermore, assume that the rod is also infinitely rigid along the direction of the length of the tube. In this case we get an additional constraint equation, x1(t) = x2(t). The two additional constraints reduce the number of independent coordinates required to describe the displacements of the two-mass system from four to two. The possible combinations are x1(t) and θx,1(t), or x1(t) and θx,2(t), or x2(t) and θx,1(t), or x2(t) and θx,2(t).
Before leaving this section we will show a simple example of a generalized coordinate. Fig. 1.5-4 shows a planar (allowed to move in the plane of the page only) pendulum where the rod of length l is rigid. The position


FIGURE 1.5-4 Mass m constrained to move in the plane of the page at a distance l from the pivot point. The Cartesian coordinates x(t) and y(t) define motion in an inertial reference frame.

of mass m can be established relative to the origin, o, of the Cartesian coordinate system using coordinates x(t) and y(t). Instinctively we know, however, that x(t) and y(t) are dependent, since the circular motion of the mass about the pivot point relates them. Therefore, this is a one-degree-of-freedom system that only requires one independent coordinate to define its position, either x(t) or y(t). Choosing either one, however, would not be very convenient because of the constraint relationship between the two, i.e., x²(t) + (l − y(t))² = l². A more convenient coordinate would be θz(t), where we can now also assume small angular motion to simplify the equation of motion. Once we have solved for θz(t), we can always use the constraint equations, x(t) = l sin θ(t) and y(t) = l(1 − cos θ(t)), to establish x(t) and y(t). θz(t) is an example of a generalized coordinate.

1.6 Distributed coordinates

In the preceding sections, discrete coordinates were used to describe the degrees of freedom of rigid mass objects. But what if we want to describe the behavior of the distributed mass itself and not just the center of mass? What if we wanted to describe the behavior of the mass in an elastic structure, such as a flexible beam? What if we wanted to describe the behavior of the internal mass as a function of the motion at discrete points at the boundaries of the elastic structure? In these cases we would need to use distributed coordinates and shape and interpolation functions.
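As a quick numerical illustration (a Python sketch with an arbitrary, assumed rod length), the single generalized coordinate θ determines both Cartesian coordinates through the relations above, and the holonomic constraint x² + (l − y)² = l² is satisfied identically:

```python
import math

def pendulum_xy(theta, l):
    """Cartesian position of the pendulum mass obtained from the single
    generalized coordinate theta (radians): x = l sin(theta),
    y = l (1 - cos(theta))."""
    return l * math.sin(theta), l * (1.0 - math.cos(theta))

l = 2.5  # illustrative rod length, not from the text
for theta in (0.0, 0.1, 0.5, 1.2):
    x, y = pendulum_xy(theta, l)
    # the constraint x^2 + (l - y)^2 = l^2 holds for every theta
    assert abs(x**2 + (l - y)**2 - l**2) < 1e-12
```

This is exactly why θz(t) qualifies as a generalized coordinate: one number fixes both dependent Cartesian coordinates.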


FIGURE 1.6-1 (A) A planar rigid beam of length l and mass per unit length ρ pinned at its left end and connected to ground at the right end by a spring with spring constant k. (B) Beam in (A) shown with inertial coordinates x and y whose origin, o, coincides with the center of mass, cm, of the beam when it is perfectly horizontal and the spring is not deformed. Inertial coordinate θz is also shown.

Fig. 1.6-1 shows a uniform rigid beam of length l and mass density per unit length, ρ. The beam is pinned at its left end, and the right end is connected to ground by a spring with spring constant k; the spring is not deformed when the beam is horizontal as shown in Fig. 1.6-1A. The beam can only undergo motion in the plane of the page, and we will assume small angular rotation about the left end. Hence, the beam only has one independent degree of freedom, that of its center of mass moving in the y-coordinate direction or, equivalently, rotation of the center of mass about the pinned end. Any motion in the x-coordinate direction, due to the rotation about the left end, would be negligibly small and not independent of the vertical motion. For this system, we will assume that there is no force due to gravity. To apply Newton's laws in their simplest form, we need to specify coordinates that describe the motion of the center of mass in inertial space. The first choice is to specify coordinates in inertial space whose origin coincides with the center of mass when the beam is as shown in Fig. 1.6-1A; this is shown in Fig. 1.6-1B. Recall that the origin of the coordinate system can be placed anywhere in the same inertial space as the mass, but the coordinates must describe the behavior of the center of mass. In Fig. 1.6-1B we also included a coordinate, θz, that describes the rotation of the beam about the pinned connection.
Note that whether this rotational (angular) coordinate is placed at the origin of the coordinate system or at the pinned end of the beam, the rotation it defines will be the same so long as it is about an axis parallel to the z-axis. The location of the parallel axis relative to the center of mass will, however, affect the mass moment of inertia value.


FIGURE 1.6-2 Rigid beam from Fig. 1.6-1 shown with coordinate w(x, t) and rotated about its left end.

Now consider the case where we wish to define the motion anywhere along the beam, not just at the center of mass. For this we would use a distributed coordinate w(x, t), where w(x, t) defines the vertical motion of the beam at any point along its length. This is shown in Fig. 1.6-2. Since position along the beam is independent of time, we can separate the spatial and time-dependent components of w(x, t), i.e., let

w(x, t) = φ(x)q(t)   (1.6-1)

As defined, q(t) is a generalized coordinate. The function φ(x) has to satisfy the boundary conditions, that is, φ(0) = 0, and must properly define the relationship between each mass point in the beam. Since the beam is rigid,

φ(x) = x/l   (1.6-2)

where we chose to normalize the function to unity at the right end of the beam. Note that the normalization is arbitrary, so long as it is accounted for in q(t). Hence, φ(x) defines a shape of vibration, but not the magnitude. To apply Newton's Second Law we would need to compute the total translational momentum of the beam as it undergoes small angular rotation about the left end. We divide the beam into an infinite number of increments of infinitesimal length dx. The momentum associated with each increment is ẇ(x, t)ρ dx, and the corresponding inertial force is (d/dt)[ẇ(x, t)ρ dx] = ẅ(x, t)ρ dx. Now, the vertical velocity of each mass increment is related through the rigidity of the beam; hence, we can write

ẅ(x, t) = x θ̈z(t)   (1.6-3)

where θ̈z(t) is the rotational acceleration about the left end, and we have assumed small angular rotations. The total moment (torque) about


the left end produced by the sum of the inertial forces of each increment dx is

∫₀^l x ẅ(x, t) ρ dx = ∫₀^l x² θ̈z(t) ρ dx = (ρ ∫₀^l x² dx) θ̈z(t) = ρ (x³/3)|₀^l θ̈z(t) = (l²/3) lρ θ̈z(t)   (1.6-4)

where we recognize the term in parentheses to be the mass moment of inertia of a rigid bar about its end. The inertial force of an increment of length dx is given by ẅ(x, t)ρ dx, and the corresponding moment of the entire beam rotating about its left end by (l²/3) lρ θ̈z(t).

In the preceding example, the beam was assumed rigid and, hence, each increment, dx, along the length of the beam was related (constrained) to the rotational motion of the entire beam about its left end; but what about a beam that is allowed to bend along its length, as shown in Fig. 1.6-3?
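The mass moment of inertia term in Eq. (1.6-4) can be checked numerically. The sketch below (Python, with illustrative values of ρ and l that are not from the text) sums ρx² dx over the beam increments and compares the result with the closed form ρl³/3:

```python
def rod_inertia_numeric(rho, l, n=100_000):
    """Midpoint-rule approximation of rho * integral of x^2 from 0 to l,
    i.e., the mass moment of inertia of a uniform bar about its end."""
    dx = l / n
    return sum(rho * ((i + 0.5) * dx) ** 2 * dx for i in range(n))

rho, l = 2.0, 3.0                 # assumed mass/length and length values
I_exact = rho * l**3 / 3.0        # closed form from Eq. (1.6-4)
assert abs(rod_inertia_numeric(rho, l) - I_exact) / I_exact < 1e-8
```

The agreement simply reflects that the discrete sum over increments converges to the integral in Eq. (1.6-4) as dx shrinks.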

FIGURE 1.6-3 Distributed-coordinate shape functions (B) and (C) for a flexible beam (A). The beam is fixed against translation and rotation at its right end.


Here, depending on the flexibility of the beam, we can imagine a very large (infinite) number of different distorted shapes. Because each segment of the beam is connected to its adjacent segments, there will be constraints based on stress–strain relationships that will, in combination with the mass distribution of the beam, determine the relative motions of adjacent beam increments as the beam vibrates. For the type of systems we will be interested in, the relationships between adjacent mass increments will be smooth and describable by mathematical functions. Fig. 1.6-3B and C show two interpolation functions that could be used to describe the deflection of the beam as a function of unit translation in the y-coordinate direction and unit rotation about the z-axis of the left tip of the beam. In addition to these types of interpolation functions, this beam can also undergo vibratory motion that is independent of the displacements of its ends, but not of its boundary conditions. We will show in Volume II that these vibratory shapes can also be used as generalized coordinates. As a last example, we show in Fig. 1.6-4 a unit square surface that is allowed to displace only normal to the surface, i.e., in the z-coordinate direction. For this more complicated geometry, we can create a function that describes the z-coordinate displacement of the surface as a function of z-direction displacements at each of the four corners, i.e.,

ubn(x, y, u1, u2, u3, u4, t) = {(1 − x)(1 − y)}u1(t) + {x(1 − y)}u2(t) + {xy}u3(t) + {y(1 − x)}u4(t)   (1.6-5)

where ubn(x, y, u1, u2, u3, u4, t) is the normal displacement of the surface at location (x, y), as a function of the displacements, u1(t) through u4(t), at the four

FIGURE 1.6-4 z-coordinate direction interpolation functions for a surface, which are a function of location specified by the x- and y-coordinates. The distributed coordinates allow for rigid body translation in the z-coordinate direction (E) and elastic deformation as a function of unit displacement at each of the four corners (A through D).


corners of the surface. Fig. 1.6-4A shows the shape when u1(t) = 1, u2(t) = 0, u3(t) = 0, and u4(t) = 0. Fig. 1.6-4B shows the shape when u1(t) = 0, u2(t) = 1, u3(t) = 0, and u4(t) = 0. Fig. 1.6-4C shows the shape when u1(t) = 0, u2(t) = 0, u3(t) = 1, and u4(t) = 0. Fig. 1.6-4D shows the shape when u1(t) = 0, u2(t) = 0, u3(t) = 0, and u4(t) = 1. Fig. 1.6-4E shows the resulting surface displacement when u1(t) = u2(t) = u3(t) = u4(t) = 1. Hence, the interpolation function also incorporates the rigid body displacement of the surface. The derivation of this interpolation function is presented in Volume II.

Before leaving this section, it is worthwhile to summarize. Rigid objects have six degrees of freedom in three-dimensional space. In order to apply Newton's laws in their simplest form, we need to define the motion of the mass in an inertial reference frame; hence, we will need six coordinates to describe the six degrees of freedom. On the other hand, structures that have flexibility can be modeled in one of two ways. We either discretize the system by assigning the mass to discrete points, where each is assumed to be rigid, and then apply Newton's laws to these mass points, or we model the continuous properties of the system. The former requires discrete coordinates, and the latter requires continuous coordinates, or discrete coordinates and interpolation functions. All of these approaches, including the derivation of the equations of motion from fundamental principles, are discussed in significant detail in subsequent chapters and in Volume II.

1.7 Units

In the field of structural dynamics, we often have to deal with two systems of units. The most prevalent, used by practically every country in the world, is the International System of Units (SI) (Bureau International des Poids et Mesures, 2006), commonly referred to as the metric system.
In the SI, the meter, the kilogram, and the second are used for length, mass, and time, respectively, and are defined as the base units. All other units of interest to us are derivable from these and are, therefore, referred to as derived units. The other system of units of significance is the US Customary system, which is primarily used in the United States, although its base units are defined in terms of the SI.

1.7.1 International System of Units

As mentioned above, in the SI, the base units of meter, kilogram, and second are used for length, mass, and time, respectively. Since 1983, a meter has been defined as the distance traveled by light in vacuum during a


time interval of 1/299 792 458 of a second (Bureau International des Poids et Mesures, 2006). Until May 2019, the kilogram was defined as being equal to the mass of the international prototype of the kilogram (Bureau International des Poids et Mesures, 2006). The prototype is a cylinder of platinum–iridium alloy stored by the International Bureau of Weights and Measures in France. In May 2019, the definition of the kilogram was changed so that it is defined in terms of the Planck constant (Bureau International des Poids et Mesures, 2006). The second "is defined as the duration of 9,192,631,770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the caesium 133 atom" (Bureau International des Poids et Mesures, 2006). For our purposes we will not need to concern ourselves with these definitions other than to be aware that the definitions are not unique and, thus, serve the purpose of providing consistent relative measures.

1.7.2 US Customary units

The US Customary units system is primarily used in the United States. Although modern engineering is more likely to use SI units, many legacy systems still use US Customary units; therefore, we must be able to work with both systems. The base units in the US Customary system are the foot, the avoirdupois pound-mass, and the second. These, however, are defined in terms of SI units. The foot (ft) is defined as 0.3048 of a meter (m), the pound-mass (lbm) is defined as 0.45359 of a kilogram (kg), and the second (s) is the same in both systems. In the SI, force carries the derived unit of newton (N), where one newton is defined as the force that causes a one-kilogram mass to accelerate one meter per second squared, i.e.,

1 newton (N) = 1 kg × 1 m/sec²   (1.7-1)

In the US Customary system, force carries the derived unit of pound-force (lbf), which is often shortened to just pound (lb). A complication arises in the definition of the pound-force, however, since it is defined as the force that causes a one-slug mass to accelerate one foot per second squared, i.e.,

1 pound-force (lbf or lb) = 1 slug × 1 ft/sec²   (1.7-2)

Recall that the kilogram is the reference for a pound-mass, not a slug. Hence, we must reference the slug back to the kilogram through the pound-mass. To do this we must first address standard gravitational acceleration.


Standard gravitational acceleration is defined as the acceleration due to Earth's gravitational pull at sea level at a latitude of 45 degrees. Given the mass of the Earth, mEarth, the distance, R, from sea level to the center of mass of the Earth, and Newton's Gravitational Constant, G = 6.673 × 10⁻¹¹ N (m/kg)² or 3.436 × 10⁻⁸ lbf (ft/slug)² (Resnick and Halliday, 1966), the standard gravitational acceleration can be derived with Newton's Law of Universal Gravitation (Newton, 1946), i.e.,

f1 = f2 = G (mEarth m)/R² = (G mEarth/R²) m   (1.7-3)

where the quantity in parentheses is the standard gravitational acceleration, and f1 and f2 are the equal and opposite forces of mutual attraction between the Earth and an object of mass m at a distance R from the center of mass of the Earth. The quantity in parentheses in SI units has a value of 9.80665 m/sec² (Butcher et al., 2006) at the reference location. In US Customary units, the value is 32.1740 ft/sec², or 386.0886 in/sec², where the conversion factor between meters and feet of 3.28084 ft/m was used.

Using Newton's Second Law we can establish that an object with a mass of 100 kg will experience, at the reference location, a force due to Earth's gravity of 980.665 newton (100 kg × 9.80665 m/sec²). If we wish to compute the equivalent force in US Customary units, we must first convert the 100 kg to units of slug, where a slug is the mass of an object that will accelerate one foot per second squared when subjected to a one pound-force (lbf) (see Eq. 1.7-2). The pound-mass (lbm) is defined as 0.45359 of a kilogram. To convert this to units of slug, we start with Newton's Second Law,

1 lbf = 1 lbm × 32.1740 ft/sec²   (1.7-4)

where 32.1740 ft/sec² is the standard gravitational acceleration in US Customary units. Eq. (1.7-4) can be written as

1 lbf = (1 lbm × 32.1740) × 1 ft/sec²   (1.7-5)

Comparing Eqs. (1.7-2) and (1.7-5) we conclude that

1 slug = 1 lbm × 32.1740  ⇒  (1 slug)/(1 lbm × 32.1740) = 1   (1.7-6)


Therefore, the mass, m, of the 100 kg object in the US Customary unit of slug is

m = (100 kg ÷ 0.45359 kg/lbm) × (1 slug)/(1 lbm × 32.1740) = 6.8522 slug

From this we can also compute the conversion factor, ℂ, between slugs and kg, i.e.,

(6.8522 slug) ℂ = 100 kg  ⇒  ℂ = 14.5939 kg/slug   (1.7-7)
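The conversion chain in Eqs. (1.7-6) and (1.7-7) is easy to mechanize. The sketch below (Python) uses the constants quoted in the text and reproduces the 6.8522 slug and 14.5939 kg/slug results to within rounding:

```python
# Constants as quoted in the text (Section 1.7.2).
KG_PER_LBM = 0.45359      # 1 lbm in kg
LBM_PER_SLUG = 32.1740    # 1 slug in lbm, from Eq. (1.7-6)

def kg_to_slug(m_kg):
    """Convert a mass in kilograms to slugs via the pound-mass."""
    return (m_kg / KG_PER_LBM) / LBM_PER_SLUG

m_slug = kg_to_slug(100.0)           # the 100 kg example object
c_factor = 100.0 / m_slug            # kg per slug, as in Eq. (1.7-7)
assert abs(m_slug - 6.8522) < 1e-3
assert abs(c_factor - 14.5939) < 1e-3   # within rounding of the text values
```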

Table 1.7-1 shows conversion factors between the SI and US Customary systems taken from Butcher et al. (2006). The conversion factors are multiplication factors to be applied to quantities in US Customary units to obtain the values in SI units. The reciprocals are to be used for conversion from SI to US Customary units. Note that the pound-mass referenced in the table is the avoirdupois pound-mass used in the United States. Other interesting references include works by Judson (1960) and Michaeliss et al. (1995).

Problems

Problem 1.1
You are given the mass of a component of a structural dynamic model that was developed in Europe. You are told that the component weighs 5 newton at a location on Earth where the standard gravitational acceleration value is applicable. What is its mass in SI units? What is its weight in US Customary units of pound-force (lbf) and its mass in units of slug?

Solution 1.1
Per the problem statement, 5 newton (N) is a measure of the force with which gravity pulls the object toward the center of mass of the Earth, at the surface of the Earth, at a location where the standard gravitational


Table 1.7-1 Conversion factors between International System of Units (SI) and US Customary units.

acceleration is applicable. Hence, we need to use Newton's Second Law to compute the mass: ma = f ⇒ m = f/g, where g is the acceleration due to the force of gravity at the referenced location. This acceleration value will vary as a function of the distance from the center of mass of the Earth to our


location on the surface. Using the standard gravitational acceleration value of 9.80665 m/sec², the mass in SI units is

m = 5 newton/9.80665 m/sec² = 5 (kg·m/sec²)/9.80665 m/sec² = 0.5099 kg

Weight is a force; and since we are given the force that the Earth exerts on the mass, i.e., 5 N, all we need to do is convert 5 N to US Customary units of pound-force (lbf). The conversion factor is given in Table 1.7-1; hence,

weight = 5 N/4.44822 (N/lbf) = 1.124 lbf

Mass in US Customary units will be in units of slug, or lbf·sec²/ft, and as with the SI system, we will use ma = f ⇒ m = f/g to compute the mass. For this, however, we need to use the standard gravitational acceleration value of 32.1740 ft/sec², which was obtained by converting the meter in 9.80665 m/sec² to feet using the conversion factor of 0.3048 m/ft (see Table 1.7-1). The mass, therefore, is

m = 1.1240 lbf/32.1740 ft/sec² = 0.0349 lbf·sec²/ft = 0.0349 slug

Alternatively, we could have divided the mass of 0.5099 kg by the conversion factor of 14.5939 kg·ft/(lbf·sec²) from Table 1.7-1 and obtained the same result.

Problem 1.2
You are given the mass of a component of a structural dynamic model that was developed in Asia. You are told that the component weighs 5 newton and standard gravitational acceleration is applicable. What is its mass in US Customary units of pound-mass? How is this related to units of slug?

Solution 1.2
From the solution to Problem 1.1, we know that a weight (force) of 5 newton corresponds to a mass of

m = 5 newton/9.80665 m/sec² = 5 (kg·m/sec²)/9.80665 m/sec² = 0.5099 kg


From the conversion factors in Table 1.7-1, we obtain 0.45359 kg/lbm. Hence, the mass in pound-mass (lbm) units is

m = 0.5099 kg/0.45359 (kg/lbm) = 1.124 lbm

To obtain the mass of an object in units of slug when we have the value in units of lbm, we must multiply by (1 slug)/(1 lbm × 32.1740), which yields

m = 1.1241 lbm/(32.1740 lbm/slug) = 0.0349 slug

Note that the appropriate/consistent units to use in Newton's Second Law in US Customary units are lbf, slug, and ft/sec².

Problem 1.3
Use Newton's Law of Universal Gravitation to derive the standard gravitational acceleration at sea level at 45 degrees latitude. Find the required quantities on the Internet and provide the value in both SI and US Customary units. Show your work and discuss why your values may differ slightly from those in the chapter.

Solution 1.3
Newton's Law of Universal Gravitation is

f1 = f2 = G (mEarth m)/R² = (G mEarth/R²) m

The required values from the Internet are (note that the values may vary slightly depending on the source)

R = 6367.49 km = 6367.49 × 10³ m
mEarth = 5.9722 × 10²⁴ kg
G = 6.673 × 10⁻¹¹ N (m/kg)²

Substituting and solving yields

G mEarth/R² = 6.673 × 10⁻¹¹ N (m/kg)² × (5.9722 × 10²⁴ kg)/(6367.49 × 10³ m)² = 9.8292 N/kg = 9.8292 (kg·m/sec²)/kg = 9.8292 m/sec²
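The arithmetic of Solution 1.3 can be reproduced directly; this Python sketch uses the same quoted values:

```python
# Values quoted in Solution 1.3 (from an Internet source, so they may
# vary slightly depending on where they are obtained).
G = 6.673e-11        # N (m/kg)^2
M_EARTH = 5.9722e24  # kg
R = 6367.49e3        # m

g_si = G * M_EARTH / R**2    # m/sec^2
g_us = g_si / 0.3048         # ft/sec^2, using 0.3048 m/ft from Table 1.7-1

assert abs(g_si - 9.8292) < 1e-3
assert abs(g_us - 32.2480) < 1e-2
```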


To obtain the value in US Customary units, we convert meters to feet using the conversion factor of 0.3048 m/ft from Table 1.7-1, i.e.,

G mEarth/R² = 9.8292 m/sec² × 1/(0.3048 m/ft) = 32.2480 ft/sec²

Note that these values are slightly greater than those presented in Section 1.7. This is due to the differences between the values obtained from the particular Internet site used for this problem and those published by Butcher, Crown, and Gentry (Butcher et al., 2006). The values from the reference should be considered the more precise, and one should always exercise caution when using values from a website.

Problem 1.4
Use the Butcher, Crown, and Gentry reference (see references) and answer the following questions:
(1) In SI units, are the units capitalized? i.e., is it "m" or "M" for the meter?
(2) Are there any instances where the answer to the preceding question is the opposite?
(3) In SI units, if a unit is the name of a person, is it capitalized? What about its letter (symbol) designation, is it capitalized?
(4) If a distance is measured to be five meters, is it written as "5 meter" or "5 meters"?
(5) In SI units, is there a space between the numeric value and its units? Give examples of your response.
(6) Are SI units ever followed by a period?

Solution 1.4
(1) In SI units, are the units capitalized? i.e., is it "m" or "M" for the meter?
SI symbols are unique and are, therefore, always written in lower case except for the liter and those units derived from the name of a person, such as N for the newton. Hence, for 6 newton we have "6 N," but for 4 meter we have "4 m." (Note that the period is only used because it is the end of the sentence.)
(2) Are there any instances where the answer to the preceding question is the opposite?
SI symbols are always written in lower case except for the liter and those units derived from the name of a person, such as N for the newton. Hence, for 6 newton we have "6 N," but for "4 meter" we have "4 m." (Note that the period is only used because it is the end of the sentence.)


(3) In SI units, if a unit is the name of a person, is it capitalized? What about its letter (symbol) designation, is it capitalized?
No, units that are named after a person are not capitalized if the name is written out. If the symbol is used, then it is capitalized. For example, for force, "newton" and "N" are correct, and for cycles per second, "hertz" and "Hz" are correct.
(4) If a distance is measured to be five meters, is it written as "5 meter" or "5 meters"?
Since SI units are unique symbols, not abbreviations, they stand for both the singular and plural. Hence, "5 meter" and "5 m" are correct.
(5) In SI units, is there a space between the numeric value and its units? Give examples of your response.
Yes. One should write "6 N" or "4 newton," not "6N" or "4newton."
(6) Are SI units ever followed by a period?
No, except at the end of a sentence. SI units are not abbreviations and are unique symbols.

Problem 1.5
Use the Butcher, Crown, and Gentry reference (see references) to answer the following questions:
(1) The letter "g" is used to indicate the acceleration due to gravity. When so used, is it italicized or not?
(2) When the symbol "g" is used to indicate the acceleration due to gravity, which designation of 2.75 times the acceleration due to gravity is correct: 2.75g or 2.75 g? The difference is the space between the number and g.
(3) Is it ever appropriate to use "s" after "g" when indicating a multiple of the acceleration due to the force of gravity?

Solution 1.5
(1) The letter "g" is used to indicate the acceleration due to gravity. When so used, is it italicized or not?
It should be italicized, i.e., g.
(2) When the symbol "g" is used to indicate the acceleration due to gravity, which designation of 2.75 times the acceleration due to gravity is correct: 2.75g or 2.75 g?


The correct designation does not include a space between the number and g; hence, 2.75g is correct. This is because g represents a numerical value.
(3) Is it ever appropriate to use "s" after "g" when indicating a multiple of the acceleration due to the force of gravity?
The letter "s" should never be used after "g" to indicate more than one multiple of the acceleration due to the force of gravity. "3.5g" is correct; "3.5gs" is not.

Problem 1.6
A spaceship is stationary in inertial space, and there are no external forces, such as gravity, acting on the ship. At t = 0, the ship expels 1% of its mass at a speed of 10 meter/second out of its engine nozzle as a single unit of mass. As a result, the ship will move in the opposite direction of the velocity of the ejected mass (see Newton's Third Law). Where is the combined center of mass of the ship and the expelled mass after 10 seconds? After one hour?

Solution 1.6
The combined center of mass of the ship and the expelled mass is where it was before the mass was ejected, even though the ship acquired a velocity in the opposite direction of the expelled mass. The state of the overall (combined) center of mass is not affected because no external force acted on the system. Hence, the combined center of mass of the ship plus expelled mass must remain in the same state, i.e., stationary in this case, as before the mass was expelled. So, what about a launch vehicle as it lifts off of its pad and flies to orbit; would the combined center of mass of the launch vehicle and its propellants, including propellants converted into thrust gas, still be at the pad when the vehicle enters orbit?

Problem 1.7
The pendulum shown in the figure consists of a rigid bar that connects the mass m to the frictionless pivot point o. The distance from the pivot point to the center of mass of m is l. Therefore, x² + (l − y)² = l² is a scleronomic constraint. What could you do to the


system to introduce a rheonomous constraint? Hint: Think in terms of imposed motion.

Solution 1.7
A rheonomous constraint is one in which time appears explicitly. Therefore, a prescribed time-dependent motion of the pivot point, o, would be a rheonomous constraint. Note that the motion, X sin ωt, has to be prescribed, i.e., the motion of the pendulum cannot alter the prescribed motion of the pivot point.


Problem 1.8
Derive an equivalent spring, ke, that could be placed at the center of mass of the uniform rigid beam shown in the figure. The beam is pinned at the left end and connected to ground by a spring with spring constant k at the right end. The origin of the inertial coordinates x and y coincides with the center of mass, cm, when the spring is not deformed. Assume small angular rotation about the left end.

Solution 1.8
Taking moments about the left end gives

l(k(lθ)) − (l/2)(ke(l/2)θ) = 0  ⇒  ke = 4k

Problem 1.9
sin θ(t) is defined as (Sokolnikoff and Redheffer, 1958)

sin θ(t) = Σ_{n=1}^{∞} (−1)^{n−1} (θ(t))^{2n−1}/(2n − 1)!

and for small θ(t) we have the following approximation:

sin θ(t) = θ(t) − θ³(t)/3! + Σ_{n=3}^{∞} (−1)^{n−1} (θ(t))^{2n−1}/(2n − 1)! ≈ θ(t)


What is the small angle approximation for cos θ(t)? Hint: Use cos²θ(t) + sin²θ(t) = 1, sin θ(t) ≈ θ(t), and complete the square.

Solution 1.9

cos²θ(t) + sin²θ(t) = 1  ⇒  cos θ(t) = √(1 − sin²θ(t))

Since we are dealing with small angles, cos θ(t) ≈ √(1 − θ²(t)). Completing the square of the term under the radical yields

cos θ(t) ≈ √(1 − θ²(t)) ≈ √(1 − θ²(t) + θ⁴(t)/4 − θ⁴(t)/4) ≈ √((1 − θ²(t)/2)² − θ⁴(t)/4) ≈ 1 − θ²(t)/2

where for small θ(t), θ⁴(t)/4 ≈ 0. For very small θ(t), cos θ(t) ≈ 1.
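The relative accuracy of the two cosine approximations can be compared numerically. The sketch below (Python, using a few arbitrary angles) evaluates the percent error of cos θ ≈ 1 and of the Solution 1.9 result cos θ ≈ 1 − θ²/2:

```python
import math

def pct_error(approx, exact):
    """Percent error of an approximation relative to the exact value."""
    return abs(approx - exact) / abs(exact) * 100.0

for deg in (5, 10, 15, 20):
    t = math.radians(deg)
    err_flat = pct_error(1.0, math.cos(t))                # cos(t) ~ 1
    err_quad = pct_error(1.0 - t * t / 2.0, math.cos(t))  # cos(t) ~ 1 - t^2/2
    # the quadratic form is a much higher-accuracy approximation
    assert err_quad < err_flat
```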

Problem 1.10
In engineering, and in particular vibration analysis, the assumption is often made that for small angular vibrations, θ(t), sin θ(t) ≈ θ(t) and cos θ(t) ≈ 1 (see preceding problem). Develop a table that compares θ(t) (in one-degree increments) to sin θ(t), and cos θ(t) to 1. Note that even though the table will have θ(t) in degrees, θ(t) in sin θ(t) and cos θ(t) must be in units of radian. Compare cos θ(t) to 1 − θ²(t)/2; is this a higher-accuracy approximation? How large can θ(t) be such that the error in the approximations does not exceed one percent?


Solution 1.10
[Table comparing θ(t) in one-degree increments with sin θ(t), cos θ(t), and 1 − θ²(t)/2 not reproduced here.] The shaded cells indicate the one-percent error levels.

Problem 1.11
In engineering, and in particular vibration analysis, the assumption is often made that for small angular vibrations θ(t), sin θ(t) ≈ θ(t) and cos θ(t) ≈ 1. These approximations allow the treatment of small rotations as vector quantities. Consider x(t) = A sin θ(t) as the starting point. Are the small angle approximations also valid for angular velocity? How about angular acceleration?

Solution 1.11
Starting with x(t) = A sin θ(t), the velocity is

ẋ(t) = (d/dt)[A sin θ(t)] = A θ̇(t) cos θ(t)


CHAPTER 1 Structural dynamics

For small angular rotation (see Problem 1.10 for what is meant by "small"), $\cos\theta(t) \approx 1$ and we obtain $\dot{x}(t) \approx A\dot{\theta}(t)$. What about acceleration? Differentiating the velocity yields

$$\ddot{x}(t) = \frac{d}{dt}A\dot{\theta}(t)\cos\theta(t) = A\ddot{\theta}(t)\cos\theta(t) - A\dot{\theta}(t)\dot{\theta}(t)\sin\theta(t)$$

For small angular rotations, $\sin\theta(t) \approx \theta(t)$ and $\cos\theta(t) \approx 1$, and we obtain

$$\ddot{x}(t) \approx A\ddot{\theta}(t) - A\dot{\theta}^2(t)\theta(t)$$

It should be noted that just because $\theta(t)$ is small it does not mean that $\dot{\theta}(t)$ will be small, since the magnitude of a displacement/rotation is independent of its rate of change. However, as $\theta(t) \to 0$, we do obtain $\ddot{x}(t) = A\ddot{\theta}(t)$.

Problem 1.12 Write $\sin\omega t$ and $\cos\omega t$ in terms of the complex exponential function, $e^{i\omega t}$, where $i = \sqrt{-1}$. Hint: Use Euler's formula.

Solution 1.12 Euler's formula is $e^{\pm i\omega t} = \cos\omega t \pm i\sin\omega t$ (Sokolnikoff and Redheffer, 1958). Therefore,

$$\cos\omega t = \frac{e^{i\omega t} + e^{-i\omega t}}{2} \quad\text{and}\quad \sin\omega t = \frac{e^{i\omega t} - e^{-i\omega t}}{2i}$$
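These identities are easy to check numerically. A minimal sketch using Python's standard `cmath` module (the frequency and time values are arbitrary illustrations):

```python
import cmath

# Check the identities of Problem 1.12:
# cos(wt) = (e^{iwt} + e^{-iwt})/2,  sin(wt) = (e^{iwt} - e^{-iwt})/(2i)
w, t = 2.0, 0.37                    # arbitrary frequency and time
z_plus = cmath.exp(1j * w * t)      # e^{iwt}
z_minus = cmath.exp(-1j * w * t)    # e^{-iwt}

cos_wt = (z_plus + z_minus) / 2
sin_wt = (z_plus - z_minus) / (2j)

print(cos_wt.real, cmath.cos(w * t).real)
print(sin_wt.real, cmath.sin(w * t).real)
```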

Problem 1.13 When the dot product of two functions is equal to zero, then the functions are orthogonal to each other. Show that $\sin\omega t$ and $\cos\omega t$ are orthogonal to each other over one cycle, i.e., $\int_0^{2\pi/\omega}\sin\omega t\cos\omega t\,dt = 0$. Hint: Integrate by parts.

Solution 1.13 To integrate by parts, let $u = \sin\omega t$ and $dv = \cos\omega t\,dt$. Then,


$$du = (\omega\cos\omega t)\,dt \quad\text{and}\quad v = \frac{1}{\omega}\sin\omega t$$

and we obtain

$$\int_a^b u\,dv = uv\Big|_a^b - \int_a^b v\,du \;\Rightarrow\; \int_0^{2\pi/\omega}\sin\omega t\cos\omega t\,dt = \frac{1}{\omega}\sin^2\omega t\Big|_0^{2\pi/\omega} - \int_0^{2\pi/\omega}\frac{1}{\omega}\sin\omega t\,(\omega\cos\omega t)\,dt$$

Adding the rightmost integral to both sides yields

$$\int_0^{2\pi/\omega}\sin\omega t\cos\omega t\,dt = \frac{1}{2\omega}\sin^2\omega t\Big|_0^{2\pi/\omega} = 0$$

Problem 1.14 We have two identical, infinitely rigid boxes as shown in the figure; there are no windows. The total mass of each is $M = m_{box} + 2m$, and we have assumed that the mass of the springs is included in $m_{box}$. $R$ is the distance from the surface of the Earth, where the left box is resting, to the center of mass of the Earth; the other lengths are relative to the bottom of the box. Each box has two identical mass-spring systems where the mass is only allowed to move up or down relative to the sidewall of the box. The magnitude of the external force, $F$, acting on the right box is such that the acceleration of the box is equal to the acceleration due to Earth's gravity a distance $R + l_1$ from the Earth's center of mass (see left box). Assume that the only force acting on the left box is that due to Earth's gravity, and on the right box that due to the external force $F$. If you were a massless observer, or your mass was included in the mass of each box, would you be able to tell from the compression of the springs which box you were in? Assume Newton's Laws are exact. Explain your answer.
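The orthogonality result of Problem 1.13 can also be confirmed by direct numerical quadrature. The sketch below is ours (plain Python, midpoint rule over one full period $T = 2\pi/\omega$; the function name is illustrative):

```python
import math

# Numerical check of Problem 1.13: the integral of sin(wt)*cos(wt)
# over one full period T = 2*pi/w vanishes.
def integral_sin_cos(w, n=20000):
    T = 2.0 * math.pi / w
    dt = T / n
    total = 0.0
    for k in range(n):
        t = (k + 0.5) * dt          # midpoint rule sample
        total += math.sin(w * t) * math.cos(w * t) * dt
    return total

print(integral_sin_cos(3.0))        # essentially zero
```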


Solution 1.14 Using Newton's Law of Gravitation, compute the acceleration due to the force of Earth's gravity at a distance $R + l_1$ from the center of mass of the Earth, i.e.,

$$f_{Earth} = f_{lm} = G\frac{m_{Earth}\,m}{(R+l_1)^2} = \left(G\frac{m_{Earth}}{R_b^2}\right)m \;\Rightarrow\; g_{lm} = G\frac{m_{Earth}}{R_b^2}$$

where $R_b = R + l_1$ is the distance between the center of mass of the Earth and the center of mass of the lower mass, $m$, in the left box. Note that for practical engineering problems, $R + l_1 \approx R$. However, for the purposes of this discussion we will need to keep the higher precision value of $R + l_1$. Note that $f_{Earth}$ and $f_{lm}$ are the equal and opposite forces that the Earth and the lower mass in the left box exert on each other. Accordingly, because of the mutual attraction between the Earth and the lower mass, the lower mass will compress the spring it is connected to by an amount

$$x_{lm} = -\frac{f_{lm}}{k} = -\frac{g_{lm}\,m}{k}$$

Likewise, for the upper mass in the left box, we obtain

$$f_{Earth} = f_{um} = G\frac{m_{Earth}\,m}{(R+l_1+l_2)^2} = \left(G\frac{m_{Earth}}{R_e^2}\right)m \;\Rightarrow\; g_{um} = G\frac{m_{Earth}}{R_e^2}$$

where $R_e = R + l_1 + l_2$ is the corresponding distance to the upper mass.


Hence, the upper mass will compress the upper spring it is connected to by an amount

$$x_{um} = -\frac{f_{um}}{k} = -\frac{g_{um}\,m}{k}$$

where we note that $x_{um} \neq x_{lm}$. The total mass of the right box is $M = m_{box} + 2m$, and if we wish this entire box to have a constant acceleration that is equal to that of the lower mass, $m$, of the left box, then $F$ must be

$$F = g_{lm}M$$

Since the box is rigid and the two mass points are attached to the box through their respective springs, they will each undergo acceleration $g_{lm}$. This acceleration, however, will be the same at every point in the right box since it is solely a function of the applied external force. Hence, both springs will be compressed the same as the lower spring in the left box. This compression, however, is different than that of the upper spring in the left box. Hence, by monitoring the compression of each spring, and noting that the compressions are different in one box and equal in the other, we will be able to establish whether we are in a gravitational field or being accelerated by a steady external force.

Problem 1.15 According to Newton, how should the two forces shown in the figure be combined when acting on mass $m$?

Solution 1.15 Forces are vectors, so they should be combined using the parallelogram law as shown below.


Problem 1.16 Gravitational mass is defined by Newton's Law of Universal Gravitation,

$$f_1 = f_2 = G\frac{Mm}{R^2} = \left(G\frac{M}{R^2}\right)m$$

Inertial mass is defined by Newton's Second Law, where a known force is applied to the mass and the acceleration it induces is measured. The mass is then computed as

$$\frac{d}{dt}\left(m\vec{v}(t)\right) = m\frac{d}{dt}\vec{v}(t) = \sum\vec{f}_j(t) \;\Rightarrow\; m = \sum\vec{f}_j(t)\Big/\frac{d}{dt}\vec{v}(t)$$

Read appropriate articles on the Internet, available books, and technical literature and then discuss the relationship between gravitational and inertial mass; are they the same, i.e., identical?

Solution 1.16 To date, all experiments have led to the conclusion that the two are identical.

Problem 1.17 Eq. (1.2-15) defines the centripetal force needed to maintain a mass, $m$, in a circular orbit of radius $R$, when rotating at a rate $\omega$ rad/sec. What is the magnitude of the tangential velocity, and what is the centripetal force as a function of this speed?


Solution 1.17 The tangential velocity, $\vec{v}(t)$, is given by $\vec{v}(t) = R\dot{\theta}(t)\hat{e}_\theta(t)$, and the magnitude is $\|\vec{v}(t)\| = v(t) = \sqrt{R^2\dot{\theta}^2(t)} = R\dot{\theta}(t)$, which yields $\dot{\theta}^2(t) = \dfrac{v^2(t)}{R^2}$. For $\dot{\theta}(t) = \omega$, we obtain

$$f_r(t) = -m\frac{v^2}{R}$$

Problem 1.18 Find on the Internet the origin and meaning of the word "centripetal."

Solution 1.18 The word "centripetal" comes from Latin and means center seeking.

Problem 1.19 The mass density of water in US Customary units is 1.940 slug/ft³. Assume that the volume of water in a person is (50 in) × (8 in) × (8 in). What is the mass of the water in units of slug, pound-mass (lbm), and kilogram (kg)? What would the reading be in pound-force (lbf) if the volume of water were placed on a spring scale in the United States where the standard acceleration due to gravity is applicable? What would the weight be in units of newtons? Do your answers make sense, assuming an average person is 60% water and the density of the other 40% is also that of water? What conclusion can you draw from comparing the weight of the water in US Customary units of lbf and the mass of the water in units of lbm?

Solution 1.19 Mass in units of slug is

$$\left(1.940\ \text{slug/ft}^3\right)\frac{(50 \times 8 \times 8)\ \text{in}^3}{12^3\ \text{in}^3/\text{ft}^3} = 3.6\ \text{slug}$$

To determine the mass in units of pound-mass (lbm), we first need to compute the force that gravity exerts on the mass. For this, we use Newton's Second Law and the standard acceleration due to gravity of 32.1740 ft/sec², i.e.,

$$(3.6\ \text{slug})\left(32.1740\ \text{ft/sec}^2\right) = 115.8\ \text{lbf}$$


So, if we place the volume of water on a spring scale in the United States where the standard acceleration due to gravity is applicable, we would obtain a reading of 115.8 lbf. Since a person is composed of roughly 60% water, the weight of a person with the volume of water in this problem would be approximately 193 lbf (115.8/0.60), which is a reasonable number. We know that by definition one pound-mass subjected to the standard acceleration due to gravity is equivalent to one pound-force, i.e., $1\ \text{lbf} = (1\ \text{lbm}) \times 32.1740\ \text{ft/sec}^2$. Multiplying both sides by 115.8 yields

$$115.8 \times 1\ \text{lbf} = 115.8 \times (1\ \text{lbm}) \times 32.1740\ \text{ft/sec}^2$$

From above, we know that $115.8\ \text{lbf} = (3.6\ \text{slug})(32.1740\ \text{ft/sec}^2)$; hence,

$$(3.6\ \text{slug})\left(32.1740\ \text{ft/sec}^2\right) = 115.8\,(1\ \text{lbm})\,32.1740\ \text{ft/sec}^2 \;\Rightarrow\; 3.6\ \text{slug} = 115.8\ \text{lbm}$$

which is what we would obtain with the conversion factor in Table 1.7-1. Since the pound-mass is defined in terms of the kilogram, we simply need to use the conversion factor to obtain the mass in units of kg, i.e.,

$$(115.8\ \text{lbm})(0.45359\ \text{kg/lbm}) = 52.5\ \text{kg}$$

The weight of the water in SI units would then be

$$(52.5\ \text{kg})\left(9.80665\ \text{m/sec}^2\right) = 514.8\ \text{newtons}$$

Note that the numerical value of one's weight in US Customary units of lbf is the same as one's mass in units of lbm, provided the standard acceleration due to gravity is applicable. This is why we can take the reading from a bathroom scale in the United States, which gives our weight in units of lbf, and divide by 2.2, the conversion factor between lbm and kg, to obtain kilograms, which is a unit of mass and not force (weight) in SI units.
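The unit bookkeeping in Solution 1.19 is easy to script; a minimal sketch using the constants quoted in the text (variable names are ours):

```python
# Unit conversions for Problem 1.19, constants as quoted in the text.
RHO_WATER = 1.940      # slug/ft^3
G_STD = 32.1740        # ft/sec^2, standard acceleration due to gravity
KG_PER_LBM = 0.45359   # kg per pound-mass
G_SI = 9.80665         # m/sec^2

volume_ft3 = (50 * 8 * 8) / 12**3        # in^3 converted to ft^3
mass_slug = RHO_WATER * volume_ft3       # ~3.59 slug (text rounds to 3.6)
weight_lbf = mass_slug * G_STD           # ~115.6 lbf (text's 115.8 uses 3.6 slug)
mass_lbm = weight_lbf                    # numerically equal under standard gravity
mass_kg = mass_lbm * KG_PER_LBM          # ~52.4 kg
weight_N = mass_kg * G_SI                # ~514 newtons

print(round(mass_slug, 2), round(weight_lbf, 1), round(mass_kg, 1), round(weight_N, 1))
```

Carrying the unrounded slug value through gives slightly different trailing digits than the text, which rounds to 3.6 slug before converting; the conclusion is unchanged.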

References Bureau International des Poids et Mesures, March 2006. The International System of Units (SI), eighth ed.


Butcher, K., Crown, L., Gentry, E.J., May 2006. The International System of Units (SI) Conversion Factors for General Use. NIST Special Publication 1038, United States Department of Commerce. Crandall, S.H., Dahl, N.C., Lardner, T.J., 1972. An Introduction to the Mechanics of Solids. McGraw-Hill Book Company, New York, New York. Judson, L.V., December 20, 1960. Units of Weight and Measures (United States Customary and Metric) Definitions and Tables of Equivalents. United States Department of Commerce, National Bureau of Standards, Miscellaneous Publication 233. Michaelis, W., Haars, H., Augustin, R., 1995. A new precise determination of Newton’s gravitational constant. Metrologia 32, 267. Newton, I., 1946. Philosophiae Naturalis Principia Mathematica, 5 July 1686. University of California Press, Berkeley, California. Translated by Andrew Motte, Kessinger Legacy Reprints. Resnick, R., Halliday, D., 1966. Physics Part I. John Wiley & Sons, Inc., New York, New York. Sokolnikoff, I.S., Redheffer, R.M., 1958. Mathematics of Physics and Modern Engineering. McGraw-Hill Book Company, Inc., New York, New York. Wikipedia. https://en.wikipedia.org/wiki/Mass.


CHAPTER 2 Single-degree-of-freedom systems

2. Introduction

Equations of motion can be derived using either force equilibrium or energy methods. The force equilibrium approach applies Newton's three laws of motion (Newton, 1946) as discussed in Chapter 1. The energy methods are based on specifying the kinetic and potential energies, and the work done by forces acting on the system mass; Rayleigh's and Hamilton's Principles, and Lagrange's Equations fall into this category. In this chapter, we will use Newton's laws to derive equations of motion for single-degree-of-freedom systems. We will obtain solutions where the vibrations are initiated by initial conditions or are due to harmonic forces. We will explore damped and undamped systems, frequency swept excitation, sudden cessation of harmonic excitation, and base excitation. An introduction to Rayleigh's approach is also included in this chapter, whereas other energy methods are thoroughly covered in Volume II.

2.1 Vibration

We begin our discussion with the system shown in Fig. 2.1-1. The rigid mass, $m$, is restricted to move horizontally on a frictionless surface, in the plane of the page, and it is not allowed to rotate. Hence, the mass has one degree of freedom. The mass is connected to "ground" by a spring that acts along the x-coordinate direction only. We will define the lateral displacement of the mass by the discrete coordinate $x(t)$, where $x(t)$ is defined in an inertial frame of reference (see Chapter 1) with its origin at the black block. Furthermore, we will define $x(t)$ to be zero at the point where the spring is neither compressed nor stretched. When the mass moves to the right, the spring will be stretched and the mass will "feel" a force pulling it to the left according to Newton's Third Law. Likewise, when the mass


FIGURE 2.1-1 (A) Single-degree-of-freedom system sliding on a frictionless surface, attached to "ground" by a weightless spring; there is no energy dissipation. (B) System at an instant of time, $t_i$, and corresponding position $x(t_i)$ to the right of the equilibrium point.

moves to the left, past the equilibrium point, the spring will be compressed and the mass will "feel" a force pushing it to the right, again, according to Newton's Third Law. The spring is linear and follows Hooke's Law (Crandall et al., 1972), which means that the spring force is directly proportional to the relative displacement between its ends. We will assume that there is no energy dissipation mechanism; hence, once the oscillation starts it will continue forever.

The equation of motion for the system shown in Fig. 2.1-1 can be derived with Newton's Second Law of motion (see Chapter 1), which states that the time rate of change of a mass particle's linear momentum, in an inertial reference frame, is equal to the net force acting on the particle. This can be written in equation form as

$$\frac{d}{dt}\left(m\vec{v}(t)\right) = \sum\vec{f}(t) \tag{2.1-1}$$

Since the mass is restricted to move along the x-coordinate direction, the forces we will consider will also be along the x-coordinate direction. Note that we are excluding the vertical force due to gravity and the corresponding reaction force due to the surface on which the mass slides. Since the mass is constant, Eq. (2.1-1) simplifies to

$$m\frac{d}{dt}\left(\dot{x}(t)\right) = m\ddot{x}(t) = \sum f(t) \tag{2.1-2}$$

where it is understood that $f(t)$ is directed along the x-coordinate direction. The mass times acceleration term for our system is straightforward and is as shown in Eq. (2.1-2). To derive the right-hand side of the equation, we


must sum all the external forces acting on the mass; and as can be ascertained from the figure, the only external force is imparted by the spring when it is either stretched or compressed. The easiest way to derive this term is to deform the system; in our case we "freeze" the motion at a point where the mass has moved to the right as in Fig. 2.1-1B. Since the spring is stretched, the mass will sense a force directed to the left; and according to Hooke's Law $f(t) = -kx(t)$, where $k$ is the stiffness constant of proportionality for the spring. Note that the right end of the spring senses a force that is equal and opposite, i.e., $kx(t)$. Substituting $f(t)$ into Eq. (2.1-2) produces the equation of motion that governs the behavior of the single-degree-of-freedom system shown in Fig. 2.1-1,

$$m\ddot{x}(t) = \sum f(t) = f(t) = -kx(t) \;\Rightarrow\; m\ddot{x}(t) + kx(t) = 0 \tag{2.1-3}$$

Dividing the equation by $m$ gives

$$\ddot{x}(t) + \omega_n^2 x(t) = 0 \tag{2.1-4}$$

where $\omega_n = \sqrt{k/m}$. Eq. (2.1-4) is a homogeneous second-order linear differential equation, whose general solution is

$$x(t) = A\cos\omega_n t + B\sin\omega_n t \tag{2.1-5}$$

This can be verified by substituting the assumed solution and its second time derivative into Eq. (2.1-4). Since there are no external forces acting on the system, any motion must be initiated with initial conditions. That is, at time $t = 0$ the system has to have an initial displacement, $x(0)$, an initial velocity, $\dot{x}(0)$, or both in order for there to be motion. Since the initial conditions represent a known state, at a specific time, they can be used to compute $A$ and $B$:

$$x(0) = A(1) + B(0) \;\Rightarrow\; A = x(0)$$
$$\dot{x}(0) = -A\omega_n(0) + B\omega_n(1) \;\Rightarrow\; B = \frac{\dot{x}(0)}{\omega_n} \tag{2.1-6}$$

Substituting $A$ and $B$ into Eq. (2.1-5) produces the solution

$$x(t) = x(0)\cos\omega_n t + \frac{\dot{x}(0)}{\omega_n}\sin\omega_n t \tag{2.1-7}$$
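The closed-form solution of Eq. (2.1-7) can be evaluated directly. Below is a minimal sketch in plain Python using the example values quoted for Fig. 2.1-2 (the function name is ours):

```python
import math

# Free-vibration response, Eq. (2.1-7), with the Fig. 2.1-2 example
# values: f_n = 0.5 Hz, x(0) = 1.0, xdot(0) = 1.57.
fn = 0.5
wn = 2.0 * math.pi * fn      # natural circular frequency, rad/s
x0, v0 = 1.0, 1.57

def x(t):
    return x0 * math.cos(wn * t) + (v0 / wn) * math.sin(wn * t)

T = 1.0 / fn                 # period of vibration = 2.0 s
print(x(0.0), abs(x(T) - x(0.0)))   # undamped response repeats each period
```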


The frequency of the oscillation will be $\omega_n$ radian/second (rad/s), and if we wish to use cycles/second or hertz (Hz), we must divide $\omega_n$ by $2\pi$, since there are $2\pi$ radians per complete cycle, i.e.,

$$f_n = \frac{1}{2\pi}\omega_n = \frac{1}{2\pi}\sqrt{\frac{k}{m}} \tag{2.1-8}$$

Fig. 2.1-2 shows the response of the system where the values of $k$ and $m$ were set to yield a natural frequency of 0.5 Hz, and the oscillation was initiated with initial conditions $x(0) = 1.0$ and $\dot{x}(0) = 1.57$. The system has a period of vibration of 2.0 s, and since there is no energy dissipation, the oscillation will continue forever.

The single-degree-of-freedom system described above is the simplest mass-spring system that will undergo vibratory motion. Despite its simplicity, many more-complex systems can be modeled using the approach just described. For example, in Fig. 2.1-3 the rigid mass is only allowed to translate horizontally on a frictionless surface. The linkages are infinitely rigid and assumed to be massless. A frictionless pin joint connects the horizontal and vertical linkage bars. The vertical bar pivots about point A through a frictionless pin joint. Since equations of motion describe the behavior of the mass, and since the system has only a single degree of freedom, it can be modeled with a single coordinate, $x(t)$. The forces that

FIGURE 2.1-2 Response of a single-degree-of-freedom system with a natural frequency of 0.5 Hz. Motion was initiated with an initial displacement and initial velocity.


FIGURE 2.1-3 (A) Single-degree-of-freedom system connected to ground by two springs and a rigid linkage that pivots about point A (B).

act on the mass are due to the two springs. Whereas the force produced by deformation of spring $k_2$ acts directly on the mass, the force produced by spring $k_1$ acts through the linkages. Recall that in applying Newton's laws we need to address the net forces that act on the mass. The force due to spring $k_2$ is straightforward and is given by $f_{s,2} = -k_2 x(t)$. The force due to spring $k_1$ can be established with the aid of Fig. 2.1-3B. The relationship between the deformation, $\delta(t)$, and the displacement of the mass, $x(t)$, is

$$\frac{\delta(t)}{b} = -\frac{x(t)}{a} \;\Rightarrow\; \delta(t) = -\frac{b}{a}x(t) \tag{2.1-9}$$

Therefore, the spring force that acts on the linkage at its attach point is $f_{s,1} = -k_1\delta(t) = k_1\dfrac{b}{a}x(t)$. This, however, is not the force that acts on the mass. To obtain this force we can compute the moment about point A,

$$k_1\frac{b}{a}x(t)b + \hat{f}_{s,1}a = 0 \;\Rightarrow\; \hat{f}_{s,1} = -k_1\left(\frac{b}{a}\right)^2 x(t) \tag{2.1-10}$$

Setting the sum of the forces acting on the mass equal to the time rate of change of the momentum of the mass yields

$$\frac{d}{dt}\left(m\dot{x}(t)\right) = \sum f(t) \;\Rightarrow\; m\ddot{x}(t) = -k_2 x(t) - k_1\left(\frac{b}{a}\right)^2 x(t) \tag{2.1-11}$$


and

$$m\ddot{x}(t) + \left(k_2 + k_1\left(\frac{b}{a}\right)^2\right)x(t) = 0 \tag{2.1-12}$$

Now, suppose that we wish to use the coordinate that defines the horizontal motion of the right end of spring $k_1$, where it connects to the bottom of the vertical bar of the linkage. This coordinate, $w(t)$, is shown in Fig. 2.1-3B. It needs to be noted here that irrespective of where we define the origin of our coordinates, they must always define the motion of the mass and the forces that act on that mass. There are several ways to use coordinate $w(t)$, but the simplest is to define a coordinate transformation that is applied to the kinetic and strain energy expressions. The relationship between the two coordinates is

$$x(t) = -\left(\frac{a}{b}\right)w(t) \tag{2.1-13}$$

The kinetic, $T$, and strain, $U$, energies are

$$T = \frac{1}{2}m\dot{x}^2(t) \qquad U = \frac{1}{2}\left(k_2 + k_1\left(\frac{b}{a}\right)^2\right)x^2(t) \tag{2.1-14}$$

Substituting the coordinate transformation in Eq. (2.1-13) yields

$$T = \frac{1}{2}\left\{m\left(\frac{a}{b}\right)^2\right\}\dot{w}^2(t) \qquad U = \frac{1}{2}\left\{\left(k_2 + k_1\left(\frac{b}{a}\right)^2\right)\left(\frac{a}{b}\right)^2\right\}w^2(t) \tag{2.1-15}$$

A review of the kinetic energy expression leads to the conclusion that the expression for the mass in the $w(t)$ coordinate system is $m(a/b)^2$; likewise, the expression for the stiffness is $k_2(a/b)^2 + k_1$. Hence, the equation of motion in the $w(t)$ coordinate system is

$$m(a/b)^2\ddot{w}(t) + \left(k_2(a/b)^2 + k_1\right)w(t) = 0 \tag{2.1-16}$$

where $m(a/b)^2$ is referred to as the effective mass. This problem could also have been solved by substituting the coordinate transformation, Eq. (2.1-13), and its second time derivative, into the


equation of motion, Eq. (2.1-12), and then premultiplying the resulting equation by just the transformation relationship, i.e.,

$$m\left(-\frac{a}{b}\right)\ddot{w}(t) + \left(k_2 + k_1\left(\frac{b}{a}\right)^2\right)\left(-\frac{a}{b}\right)w(t) = 0$$
$$\left(-\frac{a}{b}\right)\left\{m\left(-\frac{a}{b}\right)\ddot{w}(t) + \left(k_2 + k_1\left(\frac{b}{a}\right)^2\right)\left(-\frac{a}{b}\right)w(t)\right\} = 0$$
$$m(a/b)^2\ddot{w}(t) + \left(k_2(a/b)^2 + k_1\right)w(t) = 0 \tag{2.1-17}$$

We will discuss this approach in future chapters, but for now it suffices to state that the premultiplication by the coordinate transformation was necessary to conserve energy, and once external forces are added, to conserve the work done by the external forces.

2.2 Rayleigh - energy

We will mention one of the energy methods here with a brief introduction to Rayleigh's method (Rayleigh, 1877). The detailed energy methods discussion will be presented in Volume II. For the system shown in Fig. 2.1-1 the kinetic and strain energies, respectively, are

$$T = \frac{1}{2}m\dot{x}^2(t) \quad\text{and}\quad U = \frac{1}{2}kx^2(t) \tag{2.2-1}$$

If there are no external forces adding energy to the system, nor any energy dissipation mechanisms such as damping, the sum of the kinetic and strain energies must be a constant,

$$T + U = \frac{1}{2}m\dot{x}^2(t) + \frac{1}{2}kx^2(t) = \text{constant} \tag{2.2-2}$$

Provided that the mass and stiffness are constant, and the velocity is not a function of the displacement, which for our problem it is not, differentiating with respect to time yields

$$\frac{d}{dt}(T+U) = \frac{d}{dt}\left(\frac{1}{2}m\dot{x}^2(t) + \frac{1}{2}kx^2(t)\right) = \left(m\ddot{x}(t) + kx(t)\right)\dot{x}(t) = 0 \tag{2.2-3}$$


Since $\dot{x}(t)$ can take on nonzero values, the above equality can only be satisfied for all values of $\dot{x}(t)$ if the quantity inside the parentheses is equal to zero; hence,

$$m\ddot{x}(t) + kx(t) = 0 \tag{2.2-4}$$

which is the equation of motion obtained in the previous section with Newton's laws.

In Section 2.1, we obtained the solution to Eq. (2.2-4) (see Eq. 2.1-7). For this example we will assume that the response is due solely to an initial displacement, i.e., $\dot{x}(0) = 0$. Therefore, the strain energy as a function of the motion of the mass is

$$U(t) = \frac{1}{2}k[x(0)\cos\omega_n t]^2 \tag{2.2-5}$$

with the maximum strain energy being

$$U_{max} = \frac{1}{2}kx^2(0) \tag{2.2-6}$$

The maximum strain energy occurs when the deflection is a maximum and the mass is reversing its direction of motion. At this point the corresponding velocity and kinetic energy are zero and, hence, $U_{max}$ is the total energy in the system. The kinetic energy as a function of the motion of the mass is

$$T(t) = \frac{1}{2}m\omega_n^2 x^2(0)\sin^2\omega_n t \tag{2.2-7}$$

with the maximum kinetic energy being

$$T_{max} = \frac{1}{2}m\omega_n^2 x^2(0) \tag{2.2-8}$$

The maximum kinetic energy occurs when the deflection is zero and the mass is passing through the static equilibrium point. At this point the corresponding strain energy is zero and, therefore, $T_{max}$ is the total energy in the system. Since no energy is being added nor dissipated,

$$T_{max} = U_{max} \tag{2.2-9}$$

Substituting Eqs. (2.2-6) and (2.2-8) yields

$$\omega_n^2 = \frac{k}{m} \tag{2.2-10}$$

which is as presented in Eq. (2.1-4).
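Rayleigh's equality $T_{max} = U_{max}$ can be checked numerically by sampling the energies over several periods of the undamped response. The values of $m$, $k$, and $x(0)$ below are our own illustrative choices:

```python
import math

# Rayleigh check: for x(t) = x0*cos(wn*t), the sampled maxima of the
# strain and kinetic energies coincide, and both equal 0.5*k*x0^2.
m, k, x0 = 2.0, 50.0, 0.3
wn = math.sqrt(k / m)                      # 5.0 rad/s here

times = [j * 0.0005 for j in range(20000)]   # covers several periods
U_max = max(0.5 * k * (x0 * math.cos(wn * t))**2 for t in times)
T_max = max(0.5 * m * (x0 * wn * math.sin(wn * t))**2 for t in times)

print(U_max, T_max)    # both approach 0.5*k*x0^2 = 2.25
```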


2.3 Vibration with viscous damping

In the previous section, we introduced an idealized system that did not dissipate energy and, hence, its vibrations would continue forever. Our practical experience, however, tells us that a system eventually stops vibrating unless there is an external force that supplies energy to the system. The mechanism that dissipates energy and causes oscillations to decrease in amplitude and eventually stop is called damping. Damping results from different phenomena; examples include friction in joints, heat generation and dissipation in materials being deformed, energy dissipation by producing waves in air and/or fluids, and drag due to movement in air and/or fluids. Experimental observations indicate that for a large class of systems damping can be modeled as a force that is proportional to velocity. An advantage of this, as we will show later, is that the velocity of a mass will always indicate its direction of motion; hence, a damping force proportional to velocity can be made to always oppose the motion of the mass. We refer to this type of damping as viscous damping, and as is customary we will designate this mechanism by a dashpot as shown in Fig. 2.3-1.

We begin the derivation with Eq. (2.1-2), and as in the previous section the acceleration proportional term is straightforward. The force exerted on the mass by the spring is also as in the previous section. Since the damping force will be proportional to velocity and directed in the opposite direction of motion, it will be given by $f_d(t) = -c\dot{x}(t)$, where $c$ is a constant of proportionality. Collecting all terms we obtain

$$m\ddot{x}(t) = \sum f(t) = f_s(t) + f_d(t) = -kx(t) - c\dot{x}(t) \;\Rightarrow\; m\ddot{x}(t) + c\dot{x}(t) + kx(t) = 0 \tag{2.3-1}$$

FIGURE 2.3-1 Single-degree-of-freedom system sliding on a frictionless surface, attached to "ground" by a weightless spring and a weightless velocity-proportional energy dissipation mechanism.


Eq. (2.3-1) is a homogeneous second-order linear differential equation. To begin, we assume a solution of the form

$$x(t) = e^{st}, \qquad \dot{x}(t) = se^{st}, \qquad \ddot{x}(t) = s^2 e^{st} \tag{2.3-2}$$

where $e$ is Euler's number. Substituting into Eq. (2.3-1), factoring out the common term $e^{st}$, which we can eliminate by division since it will never equal zero for positive time, and then dividing through by $m$ we obtain

$$\left(ms^2 + cs + k\right)e^{st} = 0 \;\Rightarrow\; s^2 + \frac{c}{m}s + \frac{k}{m} = 0 \tag{2.3-3}$$

Eq. (2.3-3) is a quadratic equation in $s$ and, therefore, has two roots that can be obtained with the quadratic formula,

$$s_1, s_2 = -\frac{c}{2m} \pm \sqrt{\left(\frac{c}{2m}\right)^2 - \frac{k}{m}} \tag{2.3-4}$$

There are three possible solutions depending on whether the value under the radical is negative, zero, or positive. The critical damping, $c_c$, is defined as that value of damping that reduces the radical to zero, i.e.,

$$\left(\frac{c_c}{2m}\right)^2 - \frac{k}{m} = 0 \;\Rightarrow\; c_c = 2m\sqrt{\frac{k}{m}} = 2m\omega_n \tag{2.3-5}$$

The critical damping ratio, $\zeta$, is defined as

$$\zeta = \frac{c}{c_c} \tag{2.3-6}$$

Substituting into Eq. (2.3-4) produces

$$s_1, s_2 = -\zeta\omega_n \pm \sqrt{\zeta^2 - 1}\,\omega_n = \left(-\zeta \pm \sqrt{\zeta^2 - 1}\right)\omega_n \tag{2.3-7}$$


Since Eq. (2.3-1) is a second-order differential equation, its solution will be

$$x(t) = Ae^{s_1 t} + Be^{s_2 t} = Ae^{\left(-\zeta+\sqrt{\zeta^2-1}\right)\omega_n t} + Be^{\left(-\zeta-\sqrt{\zeta^2-1}\right)\omega_n t} = e^{-\zeta\omega_n t}\left(Ae^{\sqrt{\zeta^2-1}\,\omega_n t} + Be^{-\sqrt{\zeta^2-1}\,\omega_n t}\right) \tag{2.3-8}$$
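The characteristic roots of Eq. (2.3-7) are easy to evaluate with complex arithmetic. A minimal sketch for an underdamped case (the values of $m$, $k$, and $\zeta$ are illustrative only):

```python
import cmath
import math

# Characteristic roots of Eq. (2.3-7) for an underdamped oscillator.
m, k = 1.0, (2.0 * math.pi)**2        # chosen so that wn = 2*pi rad/s
wn = math.sqrt(k / m)
zeta = 0.1
c = zeta * 2.0 * m * wn               # c = zeta * c_c, Eqs. (2.3-5)/(2.3-6)

s1 = -zeta * wn + wn * cmath.sqrt(zeta**2 - 1)   # radical is imaginary here
s2 = -zeta * wn - wn * cmath.sqrt(zeta**2 - 1)

wd = wn * math.sqrt(1.0 - zeta**2)    # damped circular frequency
print(s1, s2)                         # -zeta*wn +/- i*wd, complex conjugates
```

For $\zeta < 1$ the radical is imaginary, so the roots come out as the complex-conjugate pair $-\zeta\omega_n \pm i\omega_d$ used in the next section.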

We will show later that the constants $A$ and $B$ are complex conjugates of each other and are needed so that the solution can satisfy two initial conditions, $x(0)$ and $\dot{x}(0)$.

2.3.1 Oscillatory damped vibration

As indicated above there are three possible solutions depending on whether $\zeta$ is less than one, equal to one, or greater than one. We will start with the case where $\zeta < 1.0$ and the term inside the radical is negative. The radical can be rewritten as

$$\sqrt{\zeta^2 - 1} = \sqrt{(-1)\left(1 - \zeta^2\right)} = \sqrt{-1}\sqrt{1 - \zeta^2} = i\sqrt{1 - \zeta^2} \tag{2.3-9}$$

where $i$ is the imaginary unit and is defined by $i^2 = -1$. Substituting into Eq. (2.3-8) yields the solution

$$x(t) = e^{-\zeta\omega_n t}\left(Ae^{i\sqrt{1-\zeta^2}\,\omega_n t} + Be^{-i\sqrt{1-\zeta^2}\,\omega_n t}\right) = e^{-\zeta\omega_n t}\left(Ae^{i\omega_d t} + Be^{-i\omega_d t}\right) \tag{2.3-10}$$

Note that in Eq. (2.3-10) the damped circular frequency of oscillation, $\omega_d = \omega_n\sqrt{1-\zeta^2}$, was introduced; the reason will become apparent shortly. Since Eq. (2.3-10) contains complex exponentials, we can use Euler's formula, $e^{\pm i\theta} = \cos\theta \pm i\sin\theta$, to obtain

$$x(t) = e^{-\zeta\omega_n t}\left(A\cos\omega_d t + iA\sin\omega_d t + B\cos\omega_d t - iB\sin\omega_d t\right) = e^{-\zeta\omega_n t}\left((A+B)\cos\omega_d t + (iA-iB)\sin\omega_d t\right) = e^{-\zeta\omega_n t}\left(\tilde{A}\cos\omega_d t + \tilde{B}\sin\omega_d t\right) \tag{2.3-11}$$


For a real solution to exist, the constants $\tilde{A}$ and $\tilde{B}$ must be real numbers and, hence, $A$ and $B$ must be complex conjugates of each other. $\tilde{A}$ and $\tilde{B}$ are established with the initial conditions, i.e.,

$$x(0) = \tilde{A}(1) + \tilde{B}(0) \;\Rightarrow\; \tilde{A} = x(0)$$
$$\dot{x}(t) = -\zeta\omega_n e^{-\zeta\omega_n t}\left(\tilde{A}\cos\omega_d t + \tilde{B}\sin\omega_d t\right) + e^{-\zeta\omega_n t}\left(-\tilde{A}\omega_d\sin\omega_d t + \tilde{B}\omega_d\cos\omega_d t\right)$$
$$\dot{x}(0) = -\zeta\omega_n\tilde{A}(1) + \left(-\tilde{A}\omega_d(0) + \tilde{B}\omega_d(1)\right) \;\Rightarrow\; \tilde{B} = \frac{\dot{x}(0) + \zeta\omega_n x(0)}{\omega_d} \tag{2.3-12}$$

Therefore, the solution is

$$x(t) = e^{-\zeta\omega_n t}\left(x(0)\cos\omega_d t + \frac{\dot{x}(0) + \zeta\omega_n x(0)}{\omega_d}\sin\omega_d t\right) = e^{-\zeta\omega_n t}\left\{x(0)\left(\cos\omega_d t + \frac{\zeta}{\sqrt{1-\zeta^2}}\sin\omega_d t\right) + \frac{\dot{x}(0)}{\omega_d}\sin\omega_d t\right\} \tag{2.3-13}$$

As can be ascertained from Eq. (2.3-13), the frequency of oscillation will be at $\omega_d = \omega_n\sqrt{1-\zeta^2}$, whereas for a system without damping, i.e., $\zeta = 0$, the oscillations would be at $\omega_n$. The lower the critical damping ratio, the closer the damped frequency of oscillation will be to the case without damping. Fig. 2.3-2 shows the response time history for an example problem in

which $\omega_n = 2\pi$, $\zeta = 0.1$, $\omega_d = \sqrt{1-(0.1)^2}\,2\pi = 1.99\pi$, $x(0) = 2$, and $\dot{x}(0) = 0$. As can be seen, the oscillation decays. The critical damping ratio used in this example problem is on the high side as shown in Table 2.3-1; hence, the decay rate is relatively high. Also, as indicated in the table, for the majority of structures of interest, $\omega_d$ will be very close to $\omega_n$. There will be some exceptions, and these will be discussed later.

2.3.2 Nonoscillatory damped vibration

In the previous section, we covered the case where the critical damping ratio, $\zeta$, was less than 1.0, and as such the system oscillated when set in motion. In this section, we will first cover the case where $\zeta$ is equal to 1.0, and


FIGURE 2.3-2 Response of a single-degree-of-freedom system with a damped circular natural frequency of $\omega_d = 1.99\pi$, whose motion was initiated with initial conditions $x(0) = 2.0$ and $\dot{x}(0) = 0$.

Table 2.3-1 Comparison of typical critical damping ratios for different structural systems, and the effect of damping on a system with $\omega_n = 5.0$ rad/s. Structure types listed: solid metal, spacecraft, launch vehicle, building, soil structure.

$\geq 0$, and $x(t) = 0$ for $t < 0$. In addition, $x(t)$ must be an integrable function of $t$. In Eq. (3.1-1), $s$ is commonly referred to as the complex frequency. The Laplace transform operator, $\mathcal{L}(\;)$, is linear; hence, $\mathcal{L}(ax) = a\mathcal{L}(x)$, where $a$ is a constant, and $\mathcal{L}(x+y) = \mathcal{L}(x) + \mathcal{L}(y)$. Recall the equation of motion of a single-degree-of-freedom system driven by an external force $f(t)$ (see Chapter 2),

$$\ddot{x}(t) + 2\zeta\omega_n\dot{x}(t) + \omega_n^2 x(t) = \frac{1}{m}f(t) \tag{3.1-2}$$


CHAPTER 3 Transfer and frequency response functions

We begin by multiplying each term in Eq. (3.1-2) by $e^{-st}$; then integrating with respect to $t$ from 0 to $\infty$ produces

$$\int_0^\infty \ddot{x}(t)e^{-st}dt + \int_0^\infty 2\zeta\omega_n\dot{x}(t)e^{-st}dt + \int_0^\infty \omega_n^2 x(t)e^{-st}dt = \int_0^\infty \frac{1}{m}f(t)e^{-st}dt \tag{3.1-3}$$

Integrating the first term by parts twice (see Appendix 3.1) yields

$$\int_0^\infty \ddot{x}(t)e^{-st}dt = e^{-st}\dot{x}(t)\Big|_0^\infty - (-s)\int_0^\infty \dot{x}(t)e^{-st}dt = -\dot{x}(0) + s\left\{e^{-st}x(t)\Big|_0^\infty - (-s)\int_0^\infty x(t)e^{-st}dt\right\} = -\dot{x}(0) - sx(0) + s^2\int_0^\infty x(t)e^{-st}dt \tag{3.1-4}$$

Using the shorthand notation from Eq. (3.1-1) gives

$$\tilde{\ddot{X}}(s) = -\dot{x}(0) - sx(0) + s^2\tilde{X}(s) \tag{3.1-5}$$

Next, integrating the second term in Eq. (3.1-3) by parts produces

$$\int_0^\infty 2\zeta\omega_n\dot{x}(t)e^{-st}dt = 2\zeta\omega_n e^{-st}x(t)\Big|_0^\infty - (-s)\int_0^\infty 2\zeta\omega_n x(t)e^{-st}dt = -2\zeta\omega_n x(0) + s\int_0^\infty 2\zeta\omega_n x(t)e^{-st}dt \tag{3.1-6}$$

Using the shorthand notation from Eq. (3.1-1) yields

$$\tilde{\dot{X}}(s) = -x(0) + s\tilde{X}(s) \tag{3.1-7}$$

Substituting Eqs. (3.1-4) and (3.1-6) into Eq. (3.1-3), and noting that we will be interested in the steady-state solution where the transient response due to the initial conditions has decayed, produces

$$s^2\int_0^\infty x(t)e^{-st}dt + 2\zeta\omega_n s\int_0^\infty x(t)e^{-st}dt + \omega_n^2\int_0^\infty x(t)e^{-st}dt = \int_0^\infty \frac{1}{m}f(t)e^{-st}dt$$
$$s^2\tilde{X}(s) + 2\zeta\omega_n s\tilde{X}(s) + \omega_n^2\tilde{X}(s) = \frac{1}{m}\tilde{F}(s) \;\Rightarrow\; \left(s^2 + 2\zeta\omega_n s + \omega_n^2\right)\tilde{X}(s) = \frac{1}{m}\tilde{F}(s) \tag{3.1-8}$$


The steady-state displacement transfer function, therefore, is

$$\frac{\tilde{X}(s)}{\tilde{F}(s)} = \frac{1}{m}\,\frac{1}{s^2 + 2\zeta\omega_n s + \omega_n^2} \tag{3.1-9}$$

The corresponding transfer function for steady-state acceleration is

$$\frac{\tilde{\ddot{X}}(s)}{\tilde{F}(s)} = \frac{1}{m}\,\frac{s^2}{s^2 + 2\zeta\omega_n s + \omega_n^2} \tag{3.1-10}$$
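As a quick numerical sketch of Eq. (3.1-9), with illustrative parameter values (not from the text): evaluating the displacement transfer function along the imaginary axis, $s = i\omega$, at $\omega = \omega_n$ the denominator reduces to $2i\zeta\omega_n^2$, so the magnitude there should be $1/(2m\zeta\omega_n^2)$.

```python
import numpy as np

def disp_tf(s, m, zeta, wn):
    """Steady-state displacement transfer function of Eq. (3.1-9)."""
    return (1.0/m) / (s**2 + 2*zeta*wn*s + wn**2)

m, zeta, wn = 1.0, 0.05, 2*np.pi          # illustrative parameters
H = disp_tf(1j*wn, m, zeta, wn)           # evaluate at s = i*omega_n
print(abs(H), 1.0/(2*m*zeta*wn**2))       # the two values agree
```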

3.1.1 Laplace transform and harmonic excitation

To obtain the steady-state acceleration response to harmonic excitation, we start by defining the excitation as in Chapter 2, i.e., $f(t) = f_a e^{i\omega t}$. Computing the Laplace transform yields

$$\begin{aligned}
\tilde{F}(s) &= \int_0^\infty f_a e^{i\omega t} e^{-st}\,dt = f_a \int_0^\infty e^{-(s-i\omega)t}\,dt \\
&= -f_a \frac{1}{s-i\omega}\, e^{-(s-i\omega)t}\Big|_0^\infty = -f_a \frac{1}{s-i\omega}\left(\lim_{t\to\infty} e^{-st}e^{i\omega t} - 1\right) \\
&= f_a \frac{1}{s-i\omega}
\end{aligned} \tag{3.1-11}$$

Note that the term $\lim_{t\to\infty} e^{-st}$ will be zero in the limit since the parameter $s$ is a complex number, $a + ib$ with $a > 0$. Recall that by Euler's formula $e^{ibt}$ and $e^{i\omega t}$ will be harmonic functions and, thus, bounded. Substituting into Eq. (3.1-10) gives

$$\tilde{\ddot{X}}(s) = \frac{f_a}{m}\,\frac{s^2}{s^2 + 2\zeta\omega_n s + \omega_n^2}\,\frac{1}{s-i\omega} \tag{3.1-12}$$

We will solve Eq. (3.1-12) using the technique of partial fractions, which requires that we first factor the denominator. In particular, we must factor $s^2 + 2\zeta\omega_n s + \omega_n^2$; this term is commonly referred to as the system impedance. We can factor the impedance by completing the square,

$$s^2 + 2\zeta\omega_n s + \omega_n^2 = (s + \zeta\omega_n)^2 + h, \qquad h = \omega_n^2\left(1-\zeta^2\right) \tag{3.1-13}$$
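This factorization can be sketched numerically (the values of ζ and ωn are illustrative): the roots of the impedance polynomial should come out as $-\zeta\omega_n \pm i\omega_n\sqrt{1-\zeta^2}$.

```python
import numpy as np

zeta, wn = 0.05, 2*np.pi                  # illustrative parameters
wd = wn*np.sqrt(1 - zeta**2)              # damped circular natural frequency

# roots of the impedance polynomial s^2 + 2*zeta*wn*s + wn^2
roots = np.roots([1.0, 2*zeta*wn, wn**2])
expected = np.array([-zeta*wn - 1j*wd, -zeta*wn + 1j*wd])
print(np.sort_complex(roots))             # matches the expected pole pair
```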


Eq. (3.1-13), therefore, has the following two factors:

$$\begin{aligned}
s^2 + 2\zeta\omega_n s + \omega_n^2 &= (s + \zeta\omega_n)^2 + \omega_n^2\left(1-\zeta^2\right) \\
&= \left((s + \zeta\omega_n) + i\omega_n\sqrt{1-\zeta^2}\right)\left((s + \zeta\omega_n) - i\omega_n\sqrt{1-\zeta^2}\right) \\
&= (s + \zeta\omega_n + i\omega_d)(s + \zeta\omega_n - i\omega_d)
\end{aligned} \tag{3.1-14}$$

Note that we introduced the imaginary unit, $i$, into the second terms in the middle equation so that we could assign opposite signs to the terms. This results in the sum of the inner and outer products being zero, while at the same time producing a positive sign on the $\omega_d^2$ term. Also note that we made the substitution $\omega_n\sqrt{1-\zeta^2} = \omega_d$, where $\omega_d$ is the damped circular natural frequency of the system. Substituting Eq. (3.1-14) into Eq. (3.1-12) gives

$$\tilde{\ddot{X}}(s) = \frac{f_a}{m}\,\frac{s^2}{(s-s_1)(s-s_2)(s-i\omega)} \tag{3.1-15}$$

where

$$s_1 = -\zeta\omega_n - i\omega_d \quad \text{and} \quad s_2 = -\zeta\omega_n + i\omega_d \tag{3.1-16}$$

The quantities $s_j$ are referred to as the poles of $\tilde{\ddot{X}}(s)$. Proceeding, we first write the right-hand side of Eq. (3.1-15) as a sum of partial fractions,

$$\frac{s^2}{(s-s_1)(s-s_2)(s-i\omega)} = \frac{P_1}{(s-s_1)} + \frac{P_2}{(s-s_2)} + \frac{P_3}{(s-i\omega)} \tag{3.1-17}$$

For each term in the denominator, the following operations are repeated. First, we multiply Eq. (3.1-17) by $(s-s_1)$,

$$\frac{s^2}{(s-s_2)(s-i\omega)} = P_1 + (s-s_1)\frac{P_2}{(s-s_2)} + (s-s_1)\frac{P_3}{(s-i\omega)} \tag{3.1-18}$$

Then, by setting $s$ equal to $s_1$, we can solve for $P_1$, i.e.,

$$P_1 = \frac{s_1^2}{(s_1-s_2)(s_1-i\omega)} = \frac{(-\zeta\omega_n - i\omega_d)^2}{(-i2\omega_d)(-\zeta\omega_n - i(\omega_d+\omega))} \tag{3.1-19}$$

Repeating for $P_2$ and $P_3$ gives

$$P_2 = \frac{s_2^2}{(s_2-s_1)(s_2-i\omega)} = \frac{(-\zeta\omega_n + i\omega_d)^2}{(i2\omega_d)(-\zeta\omega_n + i(\omega_d-\omega))} \tag{3.1-20}$$


and

$$P_3 = \frac{(i\omega)^2}{(i\omega-s_1)(i\omega-s_2)} = \frac{-\omega^2}{\omega_n^2 - \omega^2 + i2\zeta\omega_n\omega} \tag{3.1-21}$$
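The residues in Eqs. (3.1-19) through (3.1-21) can be checked numerically by confirming that the partial-fraction expansion of Eq. (3.1-17) reproduces the original rational function at an arbitrary test point (the parameter values below are illustrative).

```python
import numpy as np

zeta, wn, w = 0.05, 2*np.pi, 4.0              # damping ratio, natural and forcing frequencies
wd = wn*np.sqrt(1 - zeta**2)
s1, s2 = -zeta*wn - 1j*wd, -zeta*wn + 1j*wd   # poles, Eq. (3.1-16)

P1 = s1**2 / ((s1 - s2)*(s1 - 1j*w))          # Eq. (3.1-19)
P2 = s2**2 / ((s2 - s1)*(s2 - 1j*w))          # Eq. (3.1-20)
P3 = (1j*w)**2 / ((1j*w - s1)*(1j*w - s2))    # Eq. (3.1-21)

s = 0.7 + 1.3j                                # arbitrary test point
lhs = s**2 / ((s - s1)*(s - s2)*(s - 1j*w))
rhs = P1/(s - s1) + P2/(s - s2) + P3/(s - 1j*w)
print(abs(lhs - rhs))                         # essentially zero
```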

Substituting Eq. (3.1-17) into Eq. (3.1-15) produces the sought-after result,

$$\tilde{\ddot{X}}(s) = \frac{f_a}{m}\left\{\frac{P_1}{(s-s_1)} + \frac{P_2}{(s-s_2)} + \frac{P_3}{(s-i\omega)}\right\} \tag{3.1-22}$$

where $P_1$, $P_2$, and $P_3$ are given, respectively, by Eqs. (3.1-19), (3.1-20), and (3.1-21). To obtain the time domain solution, we must inverse transform each of the terms on the right-hand side of Eq. (3.1-22). The inverse transform of $\dfrac{P_j}{s-s_j}$ is $P_j e^{s_j t}$ (see Appendix 3.2). Therefore, the time domain solution is

$$\begin{aligned}
\ddot{x}(t) &= \frac{f_a}{m}\left\{P_1 e^{s_1 t} + P_2 e^{s_2 t} + P_3 e^{i\omega t}\right\} \\
&= \frac{f_a}{m}\left\{P_1 e^{(-\zeta\omega_n - i\omega_d)t} + P_2 e^{(-\zeta\omega_n + i\omega_d)t} + P_3 e^{i\omega t}\right\} \\
&= \frac{f_a}{m}\left\{\left(P_1 e^{-i\omega_d t} + P_2 e^{i\omega_d t}\right)e^{-\zeta\omega_n t} + P_3 e^{i\omega t}\right\}
\end{aligned} \tag{3.1-23}$$

where we recall that $t \geq 0$. Note that $P_1$ and $P_2$ are multiplied by $e^{-\zeta\omega_n t}e^{-i\omega_d t}$ and $e^{-\zeta\omega_n t}e^{i\omega_d t}$, respectively. These terms will reduce to zero as $t$ increases to infinity because, by Euler's formula, the $e^{\pm i\omega_d t}$ terms are oscillatory and bounded. This is the transient portion of the response. The steady-state solution, therefore, is what remains after the transient portion has decayed sufficiently, i.e.,

$$\ddot{x}(t) = \frac{f_a}{m} P_3 e^{i\omega t} = \frac{f_a}{m}\,\frac{-\omega^2}{\omega_n^2 - \omega^2 + i2\zeta\omega_n\omega}\,e^{i\omega t} \tag{3.1-24}$$

We can eliminate the complex number in the denominator by multiplying through by the ratio of the complex conjugate of the denominator divided by itself. Then, normalizing with respect to $\omega_n$ and letting $\omega/\omega_n = \lambda$, we obtain


$$\begin{aligned}
\ddot{x}(t) &= \frac{f_a}{m}\,\frac{-\omega^2\left(\omega_n^2 - \omega^2 - i2\zeta\omega_n\omega\right)\dfrac{1}{\omega_n^4}}{\left(\omega_n^2 - \omega^2 + i2\zeta\omega_n\omega\right)\left(\omega_n^2 - \omega^2 - i2\zeta\omega_n\omega\right)\dfrac{1}{\omega_n^4}}\,e^{i\omega t} \\
&= \frac{f_a}{m}\left\{-\lambda^2\,\frac{1-\lambda^2}{\left(1-\lambda^2\right)^2 + (2\zeta\lambda)^2} + i\lambda^2\,\frac{2\zeta\lambda}{\left(1-\lambda^2\right)^2 + (2\zeta\lambda)^2}\right\} e^{i\omega t}
\end{aligned} \tag{3.1-25}$$

We recognize Eq. (3.1-25) as the solution obtained in Chapter 2. Furthermore, we will show in the next section how the solution presented in Eq. (3.1-25) can be derived by means of the Fourier transform.

3.2 Fourier transform

One of the most powerful tools available for the analysis of vibration data is the Fourier transform pair (Hurty and Rubinstein, 1964):

$$X(\omega) = \int_{-\infty}^{\infty} x(t)e^{-i\omega t}\,dt \tag{3.2-1}$$

and

$$x(t) = \frac{1}{2\pi}\int_{-\infty}^{\infty} X(\omega)e^{i\omega t}\,d\omega \tag{3.2-2}$$

Eq. (3.2-1) decomposes the function $x(t)$ into harmonic components, $X(\omega)$, whereas Eq. (3.2-2) resynthesizes these harmonic components to recreate the time domain function. Differentiating Eq. (3.2-2) with respect to time yields

$$\dot{x}(t) = \frac{1}{2\pi}\int_{-\infty}^{\infty} i\omega X(\omega)e^{i\omega t}\,d\omega \tag{3.2-3}$$

By comparison to Eqs. (3.2-1) and (3.2-2) we obtain

$$i\omega X(\omega) = \int_{-\infty}^{\infty} \dot{x}(t)e^{-i\omega t}\,dt \tag{3.2-4}$$

The right-hand side of Eq. (3.2-4) is the Fourier transform of the velocity; therefore, we can conclude that

$$\dot{X}(\omega) = i\omega X(\omega) \tag{3.2-5}$$


Differentiating a second time, and following the same steps as for the velocity, we obtain

$$\ddot{X}(\omega) = -\omega^2 X(\omega) \tag{3.2-6}$$

and

$$\ddot{X}(\omega) = i\omega\,\dot{X}(\omega) \tag{3.2-7}$$

We, therefore, conclude that differentiation in the time domain is equivalent to multiplication by $i\omega$ in the frequency domain.

3.2.1 Frequency response functions

Recall the equation of motion of a single-degree-of-freedom system driven by an external force $f(t)$,

$$\ddot{x}(t) + 2\zeta\omega_n \dot{x}(t) + \omega_n^2 x(t) = \frac{1}{m} f(t) \tag{3.2-8}$$

Multiplying each term by $e^{-i\omega t}$ and then integrating from $-\infty$ to $\infty$ with respect to $t$ yields

$$\int_{-\infty}^{\infty} \ddot{x}(t)e^{-i\omega t}\,dt + 2\zeta\omega_n\int_{-\infty}^{\infty} \dot{x}(t)e^{-i\omega t}\,dt + \omega_n^2\int_{-\infty}^{\infty} x(t)e^{-i\omega t}\,dt = \frac{1}{m}\int_{-\infty}^{\infty} f(t)e^{-i\omega t}\,dt$$

$$\ddot{X}(\omega) + 2\zeta\omega_n \dot{X}(\omega) + \omega_n^2 X(\omega) = \frac{1}{m}F(\omega) \tag{3.2-9}$$

where $F(\omega)$ designates the Fourier transform of $f(t)$. Substituting Eqs. (3.2-5) and (3.2-6) gives

$$-\omega^2 X(\omega) + i2\zeta\omega_n\omega X(\omega) + \omega_n^2 X(\omega) = \frac{1}{m}F(\omega) \tag{3.2-10}$$

Factoring, and solving for the normalized response, $X(\omega)/F(\omega)$, yields the displacement frequency response function, $H(\omega)$,

$$\frac{X(\omega)}{F(\omega)} = \frac{1}{m}\,\frac{1}{\omega_n^2 - \omega^2 + i2\zeta\omega_n\omega} = H(\omega) \tag{3.2-11}$$

We can eliminate the complex number in the denominator by multiplying through by the complex conjugate of the denominator divided by itself. Then, normalizing with respect to $\omega_n$, letting $\omega/\omega_n = \lambda$, and noting that


$m\omega_n^2 = k$, we obtain the normalized displacement frequency response function,

$$\begin{aligned}
\frac{X(\omega)}{F(\omega)} &= \frac{1}{m}\,\frac{1}{\left(\omega_n^2 - \omega^2 + i2\zeta\omega_n\omega\right)}\,\frac{\left(\omega_n^2 - \omega^2 - i2\zeta\omega_n\omega\right)\dfrac{1}{\omega_n^4}}{\left(\omega_n^2 - \omega^2 - i2\zeta\omega_n\omega\right)\dfrac{1}{\omega_n^4}} \\
&= \frac{1}{k}\left\{\frac{1-\lambda^2}{\left(1-\lambda^2\right)^2 + (2\zeta\lambda)^2} - i\,\frac{2\zeta\lambda}{\left(1-\lambda^2\right)^2 + (2\zeta\lambda)^2}\right\}
\end{aligned} \tag{3.2-12}$$
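A minimal numerical sketch of Eq. (3.2-12), with illustrative values of m, ζ, and ωn: the real (coincident) and imaginary (quadrature) parts computed from the normalized form should match a direct evaluation of Eq. (3.2-11).

```python
import numpy as np

m, zeta, wn = 2.0, 0.05, 10.0            # illustrative parameters
k = m*wn**2
lam = np.linspace(0.1, 3.0, 300)         # frequency ratio omega/omega_n
w = lam*wn

# direct evaluation of Eq. (3.2-11)
H = (1.0/m) / (wn**2 - w**2 + 2j*zeta*wn*w)

# normalized form, Eq. (3.2-12)
D = (1 - lam**2)**2 + (2*zeta*lam)**2
H_norm = ((1 - lam**2) - 1j*2*zeta*lam) / (k*D)

print(np.max(np.abs(H - H_norm)))        # essentially zero
```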

By substituting Eq. (3.2-6) into Eq. (3.2-11), the acceleration frequency response function, $H_{\ddot{x}}(\omega)$, is obtained,

$$\ddot{X}(\omega) = -\omega^2\,\frac{1}{m}\,\frac{1}{\omega_n^2 - \omega^2 + i2\zeta\omega_n\omega}\,F(\omega), \qquad \ddot{X}(\omega) = H_{\ddot{x}}(\omega)F(\omega) \tag{3.2-13}$$

Following the same normalization steps as above yields

$$\frac{\ddot{X}(\omega)}{F(\omega)} = \frac{1}{m}\left\{-\lambda^2\,\frac{1-\lambda^2}{\left(1-\lambda^2\right)^2 + (2\zeta\lambda)^2} + i\lambda^2\,\frac{2\zeta\lambda}{\left(1-\lambda^2\right)^2 + (2\zeta\lambda)^2}\right\} \tag{3.2-14}$$

The terms inside the braces are identical to the acceleration coincident, $Co_{\ddot{x}}$, and quadrature, $Qd_{\ddot{x}}$, components of response derived in Chapter 2. Recall that the coincident response is the component of the steady-state response to harmonic excitation that is collinear with the excitation, and the quadrature component is the component that is at 90 degrees to the excitation. Hence, the Fourier transform of the response divided by the Fourier transform of the excitation yields a complex function that provides the coincident and quadrature components of response. This is significant, since it confirms that the Fourier transform has decomposed the forcing function, and the response to that forcing function, into their respective harmonic components. Inherent in this statement is the requirement that the forcing function contain energy at all frequencies from $0^+$ to the limits of the analysis, i.e., for large $\omega$, which is the same as large $\lambda$. In addition, the forcing function has to be integrable, which, because of the smoothness of physical phenomena, will generally not be an issue. There may be some limitations for extremely short duration pulses, but this is a limitation of the measurement apparatus rather than of the theoretically possible solutions. It is also worth noting that if the forcing function did not contain energy at all frequencies, such that the quotients on the left-hand side of Eqs. (3.2-12) and (3.2-14) did not exist because of division by zero, we could still solve for the response because Eqs. (3.2-11) and (3.2-13) would still be valid.

We cannot overstate the significance of the above result. To illustrate, we will describe an experimental approach for deriving the acceleration complex frequency response function, $H_{\ddot{x}}(\omega)$. We start by exciting a single-degree-of-freedom system with a random force that contains energy over a broad frequency range that encompasses the natural frequency of the system. Next, we measure the acceleration response and compute its Fourier transform. We also measure the corresponding excitation force and compute its Fourier transform. Both transforms will be complex and functions of $\omega$. We then divide the Fourier transform of the response by that of the excitation. This requires that at each $\omega$, or spectral line, we perform a complex division, which will produce a complex number. We plot the real and imaginary components against $\omega$ divided by the natural frequency, $\omega_n$, which we will take as the frequency at which the imaginary component of the response is a maximum. These graphs will be the quadrature and coincident components of response, and they will be identical to those that would be obtained with harmonic excitation at frequencies corresponding to each spectral line.

3.2.2 Base excitation frequency response functions

The equation of motion for base excitation was derived in Chapter 2. Taking the Fourier transform of each term in this equation produces

$$\ddot{Y}_e(\omega) + 2\zeta\omega_n \dot{Y}_e(\omega) + \omega_n^2 Y_e(\omega) = -\ddot{Y}_B(\omega) \tag{3.2-15}$$

Using the relationships defined by Eqs. (3.2-5) and (3.2-7) yields

$$-\omega^2 Y_e(\omega) + i2\zeta\omega_n\omega Y_e(\omega) + \omega_n^2 Y_e(\omega) = -\ddot{Y}_B(\omega)$$
$$\left(-\omega^2 + i2\zeta\omega_n\omega + \omega_n^2\right)Y_e(\omega) = -\ddot{Y}_B(\omega) \tag{3.2-16}$$

which gives the desired frequency response function,

$$\frac{Y_e(\omega)}{\ddot{Y}_B(\omega)} = \frac{-1}{\omega_n^2 - \omega^2 + i2\zeta\omega_n\omega} \tag{3.2-17}$$


Next, we will derive the frequency response function for the absolute acceleration response, $\ddot{y}(t) = \ddot{y}_e(t) + \ddot{y}_B(t)$. We start by taking the Fourier transform of the absolute acceleration,

$$\ddot{Y}(\omega) = \ddot{Y}_e(\omega) + \ddot{Y}_B(\omega) \tag{3.2-18}$$

Dividing by the Fourier transform of the base excitation yields

$$\frac{\ddot{Y}(\omega)}{\ddot{Y}_B(\omega)} = \frac{\ddot{Y}_e(\omega)}{\ddot{Y}_B(\omega)} + 1 \tag{3.2-19}$$

Noting the relationship defined by Eq. (3.2-6) and then substituting from Eq. (3.2-17) gives

$$\frac{\ddot{Y}(\omega)}{\ddot{Y}_B(\omega)} = -\omega^2\,\frac{Y_e(\omega)}{\ddot{Y}_B(\omega)} + 1 = \frac{\omega^2}{\omega_n^2 - \omega^2 + i2\zeta\omega_n\omega} + 1 \tag{3.2-20}$$

Performing the indicated multiplications produces

$$\frac{\ddot{Y}(\omega)}{\ddot{Y}_B(\omega)} = \frac{\omega_n^2 + i2\zeta\omega_n\omega}{\omega_n^2 - \omega^2 + i2\zeta\omega_n\omega} \tag{3.2-21}$$

In Chapter 2, a quantity of practical engineering interest, referred to as the pseudo-acceleration, was defined as $\left|\ddot{y}_{ps}\right| = |y_e|\,\omega_n^2 \approx |\ddot{y}|$. This approximate relationship is valid for lightly damped systems excited through their base such that resonant response is achieved. Since the system is assumed to be excited at its natural circular frequency $\omega_n$, the following relationship holds: $\ddot{y}_{ps}(t) = y_e(t)\,\omega_n^2$ and, therefore, $\ddot{Y}_{ps}(\omega) = \omega_n^2 Y_e(\omega)$. Substituting into Eq. (3.2-17) yields the frequency response function for pseudo-acceleration,

$$\frac{\ddot{Y}_{ps}(\omega)}{\ddot{Y}_B(\omega)} = \frac{-\omega_n^2}{\omega_n^2 - \omega^2 + i2\zeta\omega_n\omega} \tag{3.2-22}$$
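The three base-excitation ratios of Eqs. (3.2-17), (3.2-21), and (3.2-22) can be compared numerically. As a consistency sketch (illustrative ωn and ζ): at resonance with light damping, the absolute and pseudo-acceleration ratios should nearly coincide in magnitude.

```python
import numpy as np

zeta, wn = 0.02, 8.0                     # illustrative parameters
w = wn                                   # evaluate at resonance, omega = omega_n
D = wn**2 - w**2 + 2j*zeta*wn*w          # common denominator

H_rel  = -1.0/D                          # Eq. (3.2-17), relative displacement
H_abs  = (wn**2 + 2j*zeta*wn*w)/D        # Eq. (3.2-21), absolute acceleration
H_pseu = -wn**2/D                        # Eq. (3.2-22), pseudo-acceleration

print(abs(H_abs), abs(H_pseu))           # nearly equal for light damping
```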

3.2.3 Fourier transforms of useful functions

In subsequent chapters, we will make use of the Fourier transform in the computation of the response of systems to random excitation and in the analysis of time series data. As such, we will need the Fourier transforms of several useful functions, including the boxcar, unit impulse, cosine, and sine. In addition, the relationship between multiplication in the frequency domain and its counterpart in the time domain represents a very important relationship in the computation of the vibratory response of structures. We will derive this relationship also.

3.2.3.1 Boxcar

The boxcar function is used to convert a time series of infinite length to one of finite duration, as shown in Fig. 3.2-1. Mathematically, the boxcar function is defined as

$$w_T(t) = \begin{cases} 1 & -T \leq t \leq T \\ 0 & \text{otherwise} \end{cases} \tag{3.2-23}$$

and its Fourier transform is

$$W_T(\omega) = \int_{-\infty}^{\infty} w_T(t)e^{-i\omega t}\,dt = \int_{-T}^{T} (1)e^{-i\omega t}\,dt \tag{3.2-24}$$

Performing the indicated integration yields

$$\begin{aligned}
W_T(\omega) &= -\frac{1}{i\omega}e^{-i\omega t}\Big|_{-T}^{T} = -\frac{1}{i\omega}\left(e^{-i\omega T} - e^{i\omega T}\right) \\
&= -\frac{1}{i\omega}\left\{\cos\omega T - i\sin\omega T - (\cos\omega T + i\sin\omega T)\right\}
\end{aligned} \tag{3.2-25}$$

In Eq. (3.2-25), Euler's formula was used to substitute for the exponential terms. Simplifying, we obtain the sought-after result, which is shown in Fig. 3.2-2B,

$$W_T(\omega) = \frac{2\sin\omega T}{\omega} \tag{3.2-26}$$
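Eq. (3.2-26) can be sketched numerically by approximating the Fourier integral of the boxcar with a Riemann sum (the value of T and the frequency grid below are illustrative).

```python
import numpy as np

T = 1.5
t = np.linspace(-T, T, 20001)            # boxcar support, where w_T(t) = 1
dt = t[1] - t[0]
w = np.linspace(0.1, 20.0, 50)           # avoid omega = 0 in the closed form

# Riemann-sum approximation of W_T(omega) = integral of e^{-i omega t} over [-T, T]
W_num = np.array([np.sum(np.exp(-1j*wk*t))*dt for wk in w])
W_exact = 2*np.sin(w*T)/w                # Eq. (3.2-26)

print(np.max(np.abs(W_num - W_exact)))   # small discretization error
```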

FIGURE 3.2-1 (A) Portion of an infinite-duration random time history. (B) Random time history from (A) truncated by multiplying $x(t)$ by the boxcar function $w_T(t)$, which is defined by Eq. (3.2-23).


FIGURE 3.2-2 (A) Boxcar function defined by Eq. (3.2-23). (B) Fourier transform of the boxcar function in (A).

3.2.3.2 Unit impulse (Dirac delta)

The Dirac delta, or unit impulse, $\delta(t)$, is a generalized function having an area of one, but a base width that in the limit approaches zero (see Fig. 3.2-3A). Mathematically, the unit impulse can be written as

$$\delta(t) = \lim_{\varepsilon\to 0} I(t), \qquad I(t) = \begin{cases} \dfrac{1}{\varepsilon} & -\dfrac{\varepsilon}{2} \leq t \leq \dfrac{\varepsilon}{2} \\ 0 & \text{otherwise} \end{cases} \tag{3.2-27}$$

The Fourier transform of $\delta(t)$ is

$$D(\omega) = \int_{-\infty}^{\infty} \delta(t)e^{-i\omega t}\,dt = \lim_{\varepsilon\to 0}\int_{-\infty}^{\infty} I(t)e^{-i\omega t}\,dt = \lim_{\varepsilon\to 0}\int_{-\varepsilon/2}^{\varepsilon/2} \frac{1}{\varepsilon}e^{-i\omega t}\,dt \tag{3.2-28}$$

where we interchanged the limit and integration operations; in Volume II, we provide a more rigorous approach to generalized functions. Performing the indicated integration yields

$$D(\omega) = \lim_{\varepsilon\to 0}\left(-\frac{1}{i\omega\varepsilon}\,e^{-i\omega t}\Big|_{-\varepsilon/2}^{\varepsilon/2}\right) = \lim_{\varepsilon\to 0}\left(\frac{2\sin(\omega\varepsilon/2)}{\omega\varepsilon}\right) \tag{3.2-29}$$


FIGURE 3.2-3 (A) The function $I(t)$, which as $\varepsilon$ goes to zero becomes the unit impulse $\delta(t)$. (B) Fourier transform of the unit impulse $\delta(t)$.

Since the numerator and denominator both approach zero as $\varepsilon$ approaches zero, we must use L'Hôpital's rule (Crowell and Slesnick, 1968) to establish the value of the quotient as $\varepsilon$ goes to zero:

$$D(\omega) = \lim_{\varepsilon\to 0}\left(\frac{2\sin(\omega\varepsilon/2)}{\omega\varepsilon}\right) = \lim_{\varepsilon\to 0}\left(\frac{\dfrac{\partial}{\partial\varepsilon}\,2\sin(\omega\varepsilon/2)}{\dfrac{\partial}{\partial\varepsilon}\,\omega\varepsilon}\right) = \lim_{\varepsilon\to 0}\left(\frac{2\,\dfrac{\omega}{2}\cos(\omega\varepsilon/2)}{\omega}\right) = 1 \tag{3.2-30}$$

Fig. 3.2-3 shows the function $I(t)$ and the Fourier transform of $\delta(t)$, $D(\omega)$. Note that the Fourier transform of a unit impulse has a value of one at each spectral line. Finally, since the Fourier transform of the unit impulse is one, the inverse Fourier transform of one must be the unit impulse. Hence, the inverse Fourier transform of a constant will be that constant times the unit impulse.

3.2.3.3 Unit impulse sifting property

The sifting property of the unit impulse function is extremely important in the computation of Fourier transforms. The sifting property is defined as

$$\int_{-\infty}^{\infty} f(t)\delta(t-a)\,dt = f(a) \tag{3.2-31}$$

where $\delta(t)$ is the unit impulse function. Since the unit impulse function has a value of zero everywhere except at $t-a = 0$, we only have to consider the behavior of the integral in the vicinity of $a$. Hence, Eq. (3.2-31) can be written as


$$\int_{-\infty}^{\infty} f(t)\delta(t-a)\,dt = \lim_{\varepsilon\to 0}\int_{a-\varepsilon/2}^{a+\varepsilon/2} f(t)\,\frac{1}{\varepsilon}\,dt = \lim_{\varepsilon\to 0}\frac{1}{\varepsilon}\int_{a-\varepsilon/2}^{a+\varepsilon/2} f(t)\,dt \tag{3.2-32}$$

Let $\hat{f}(t)$ be the result of the integration, i.e., $\dfrac{d\hat{f}(t)}{dt} = f(t)$; then

$$\lim_{\varepsilon\to 0}\frac{1}{\varepsilon}\int_{a-\varepsilon/2}^{a+\varepsilon/2} f(t)\,dt = \lim_{\varepsilon\to 0}\frac{1}{\varepsilon}\,\hat{f}(t)\Big|_{a-\varepsilon/2}^{a+\varepsilon/2} = \lim_{\varepsilon\to 0}\frac{\hat{f}(a+\varepsilon/2) - \hat{f}(a-\varepsilon/2)}{\varepsilon} \tag{3.2-33}$$

The above expression is the definition of the slope, or derivative, of the function $\hat{f}(t)$ at $a$; hence,

$$\lim_{\varepsilon\to 0}\frac{\hat{f}(a+\varepsilon/2) - \hat{f}(a-\varepsilon/2)}{\varepsilon} = \frac{d\hat{f}(t)}{dt}\bigg|_{t=a} = f(a) \tag{3.2-34}$$

which, when substituted into Eq. (3.2-32), yields the sought-after result, Eq. (3.2-31).

3.2.3.4 Constant

To compute the Fourier transform of a constant, we begin with the inverse Fourier transform, Eq. (3.2-2), of the shifted frequency domain unit impulse function, $X(\omega) = \delta(\omega - \hat{\omega})$,

$$x(t) = \frac{1}{2\pi}\int_{-\infty}^{\infty}\delta(\omega - \hat{\omega})e^{i\omega t}\,d\omega \tag{3.2-35}$$

where $\delta(\omega - \hat{\omega})$ is the shifted frequency domain unit impulse function. Then, by the sifting property, Eq. (3.2-31), we obtain

$$x(t) = \frac{1}{2\pi}\int_{-\infty}^{\infty}\delta(\omega - \hat{\omega})e^{i\omega t}\,d\omega = \frac{1}{2\pi}e^{i\hat{\omega} t} \tag{3.2-36}$$

and for $\hat{\omega} = 0$, $x(t) = 1/2\pi$, which is a constant. Hence, we can write

$$\frac{1}{2\pi}\int_{-\infty}^{\infty}\delta(\omega)e^{i\omega t}\,d\omega = \frac{1}{2\pi} \tag{3.2-37}$$


Multiplying both sides by a constant, $a$,

$$\frac{1}{2\pi}\int_{-\infty}^{\infty} a\,\delta(\omega)e^{i\omega t}\,d\omega = \frac{1}{2\pi}\,a \tag{3.2-38}$$

and then taking the Fourier transform of both sides yields

$$F(a) = 2\pi a\,\delta(\omega) \tag{3.2-39}$$

3.2.3.5 Cosine and sine

We start by taking the Fourier transform of Eq. (3.2-36),

$$F\left(\frac{1}{2\pi}\int_{-\infty}^{\infty}\delta(\omega - \hat{\omega})e^{i\omega t}\,d\omega\right) = F\left(\frac{1}{2\pi}e^{i\hat{\omega} t}\right) \tag{3.2-40}$$

From which we obtain

$$F\left(e^{i\hat{\omega} t}\right) = 2\pi\delta(\omega - \hat{\omega}) \tag{3.2-41}$$

To compute the Fourier transform of $\cos\hat{\omega} t$, we begin with the equality,

$$\cos\hat{\omega} t = \frac{1}{2}\left(\cos\hat{\omega} t + i\sin\hat{\omega} t + \cos\hat{\omega} t - i\sin\hat{\omega} t\right) = \frac{1}{2}\left(e^{i\hat{\omega} t} + e^{-i\hat{\omega} t}\right) \tag{3.2-42}$$

where Euler's formula, $e^{-i\hat{\omega} t} = \cos\hat{\omega} t - i\sin\hat{\omega} t$, was used to substitute for the terms in the parentheses. Taking the Fourier transform of both sides, while noting Eq. (3.2-41), produces the sought-after result,

$$F(\cos\hat{\omega} t) = \pi\delta(\omega - \hat{\omega}) + \pi\delta(\omega + \hat{\omega}) \tag{3.2-43}$$

Hence, the Fourier transform of $\cos\hat{\omega} t$ consists of unit impulse functions at $\omega = \hat{\omega}$ and at $\omega = -\hat{\omega}$, both scaled by $\pi$. Since cosine is an even function, its Fourier transform is real. In addition, since the result involves the impulse function, it has meaning only in the context of an integral, such as when being transformed back to the time domain. Before leaving this section, we state the Fourier transform of a sine function and leave its derivation for the problems at the end of this chapter:

$$F(\sin\hat{\omega} t) = -i\pi\delta(\omega - \hat{\omega}) + i\pi\delta(\omega + \hat{\omega}) \tag{3.2-44}$$
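The impulsive structure of Eqs. (3.2-43) and (3.2-44) has a discrete analogue that can be sketched with the FFT: for a cosine with an integer number of cycles per record, the DFT is real with equal peaks at the bins corresponding to $\pm\hat{\omega}$, while for a sine it is purely imaginary with opposite-signed peaks (the record length and bin number below are illustrative).

```python
import numpy as np

N, kbin = 256, 5                          # record length and integer bin number
n = np.arange(N)
c = np.fft.fft(np.cos(2*np.pi*kbin*n/N))  # DFT of cosine
s = np.fft.fft(np.sin(2*np.pi*kbin*n/N))  # DFT of sine

# cosine: real peaks of N/2 at bins k and N-k; sine: imaginary peaks -iN/2, +iN/2
print(c[kbin], c[N-kbin])                 # ≈ 128, 128 (real)
print(s[kbin], s[N-kbin])                 # ≈ -128j, +128j (imaginary)
```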


FIGURE 3.2-4 Notional plot of the Fourier transform of the cosine and sine functions.

It should be noted that the Fourier transform of a sine function is complex. This is because, unlike cosine, sine is not symmetric about the ordinate axis. Fig. 3.2-4 shows a notional plot of the Fourier transforms of the cosine and sine functions.

3.2.4 Multiplication of Fourier transformed functions and convolution

An extremely valuable relationship exists between the product of the Fourier transforms of two functions and the corresponding time domain operation. We begin by taking the product of the two Fourier transformed functions defined in Eq. (3.2-11), $H(\omega)$ and $F(\omega)$, and setting this equal to the product of the Fourier transforms of the corresponding time domain functions, $h(\eta)$ and $f(s)$, respectively, i.e.,

$$H(\omega)F(\omega) = \left(\int_{-\infty}^{\infty} h(\eta)e^{-i\omega\eta}\,d\eta\right)\left(\int_{-\infty}^{\infty} f(s)e^{-i\omega s}\,ds\right) \tag{3.2-45}$$

To avoid confusion in subsequent steps, we used $\eta$ as the independent variable in the first integral, and $s$ in the second. Since the two integrals are independent, they can be combined,

$$H(\omega)F(\omega) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} e^{-i\omega\eta}e^{-i\omega s}\,h(\eta)f(s)\,d\eta\,ds = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} e^{-i\omega(\eta+s)}\,h(\eta)f(s)\,d\eta\,ds \tag{3.2-46}$$


Assuming sufficient integrability, we can change the order of integration,

$$H(\omega)F(\omega) = \int_{-\infty}^{\infty}\left(\int_{-\infty}^{\infty} e^{-i\omega(\eta+s)}h(\eta)\,d\eta\right)f(s)\,ds \tag{3.2-47}$$

Next, we define a new integration variable, $t$, such that $t = \eta + s$. Differentiating with respect to $\eta$ yields $d\eta = dt$. Substituting into Eq. (3.2-47) gives

$$H(\omega)F(\omega) = \int_{-\infty}^{\infty}\left(\int_{-\infty}^{\infty} e^{-i\omega t}h(t-s)\,dt\right)f(s)\,ds \tag{3.2-48}$$

Changing the order of integration yields

$$H(\omega)F(\omega) = \int_{-\infty}^{\infty}\left(\int_{-\infty}^{\infty} h(t-s)f(s)\,ds\right)e^{-i\omega t}\,dt \tag{3.2-49}$$

The term inside the parentheses is referred to as the convolution integral; as we will see in subsequent discussion, this is an extremely important relationship in the computation of dynamic responses and the analysis of time series data.

3.2.5 Convolution and dynamic response

Recall Eq. (3.2-11), $X(\omega) = H(\omega)F(\omega)$. Substituting into Eq. (3.2-49) yields

$$X(\omega) = \int_{-\infty}^{\infty}\left(\int_{-\infty}^{\infty} h(t-s)f(s)\,ds\right)e^{-i\omega t}\,dt \tag{3.2-50}$$

The right-hand side of Eq. (3.2-50) is the Fourier transform of the term in the parentheses. The left-hand side is the Fourier transform of the displacement response. Taking the inverse Fourier transform of both sides produces the sought-after result,

$$x(t) = \int_{-\infty}^{\infty} h(t-s)f(s)\,ds \tag{3.2-51}$$
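Eq. (3.2-51) can be sketched in discrete form, using an assumed sampled impulse response (h(t) itself is derived in Chapter 5, so the expression below is an assumption for illustration): convolving a forcing history with h should match multiplying their DFTs and inverse transforming.

```python
import numpy as np

dt = 0.01
t = np.arange(0, 4.0, dt)
m, zeta, wn = 1.0, 0.05, 2*np.pi          # illustrative parameters
wd = wn*np.sqrt(1 - zeta**2)

# assumed SDOF unit impulse response (see Chapter 5)
h = np.exp(-zeta*wn*t)*np.sin(wd*t)/(m*wd)
f = np.where(t < 0.5, 1.0, 0.0)           # short rectangular force pulse

# time domain: discrete convolution approximating Eq. (3.2-51)
x_conv = np.convolve(h, f)[:len(t)]*dt

# frequency domain: multiply DFTs (zero-padded to avoid circular wrap-around)
n = 2*len(t)
x_fft = np.fft.ifft(np.fft.fft(h, n)*np.fft.fft(f, n)).real[:len(t)]*dt

print(np.max(np.abs(x_conv - x_fft)))     # essentially zero
```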


The function $h(t-s)$ is referred to as the unit impulse response function, and it will be derived in Chapter 5. Eq. (3.2-51) provides the dynamic response of a single-degree-of-freedom system to any arbitrary forcing function $f(t)$, once $h(t)$ is defined.

3.2.6 Multiplication of functions and frequency domain convolution

In the preceding discussion, we showed that multiplication in the frequency domain is equivalent to convolution in the time domain. It is reasonable, therefore, to expect that multiplication in the time domain would be equivalent to convolution in the frequency domain. We begin by taking the Fourier transform, $F(\;)$, of a product of two time domain functions, for example, $w_T(t)x(t)$,

$$\begin{aligned}
F(w_T(t)x(t)) &= \int_{-\infty}^{\infty} w_T(t)x(t)e^{-i\omega t}\,dt \\
&= \int_{-\infty}^{\infty}\left(\frac{1}{2\pi}\int_{-\infty}^{\infty} W_T(u)e^{iut}\,du\right)x(t)e^{-i\omega t}\,dt
\end{aligned} \tag{3.2-52}$$

Note that in the second line of Eq. (3.2-52) we replaced $w_T(t)$ with the inverse Fourier transform of $W_T(u)$, where $W_T(u)$ is the Fourier transform of $w_T(t)$. Changing the order of integration,

$$F(w_T(t)x(t)) = \frac{1}{2\pi}\int_{-\infty}^{\infty} W_T(u)\left(\int_{-\infty}^{\infty} x(t)e^{iut}e^{-i\omega t}\,dt\right)du = \frac{1}{2\pi}\int_{-\infty}^{\infty} W_T(u)\left(\int_{-\infty}^{\infty} x(t)e^{-i(\omega-u)t}\,dt\right)du \tag{3.2-53}$$

The term inside the parentheses is the Fourier transform of $x(t)$, where $\omega - u$ is the independent frequency variable. Therefore,

$$F(w_T(t)x(t)) = \frac{1}{2\pi}\int_{-\infty}^{\infty} W_T(u)X(\omega-u)\,du = W_T(\omega) * X(\omega) \tag{3.2-54}$$

where $*$ designates convolution.
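A discrete sketch of Eq. (3.2-54): for length-N DFTs, the analogue is that the DFT of a time-domain product equals the circular convolution of the two spectra divided by N, mirroring the $1/2\pi$ factor in the continuous result (the random sequences below are illustrative stand-ins for $w_T(t)$ and $x(t)$).

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64
w = rng.standard_normal(N)                # stand-in for a window w_T(t)
x = rng.standard_normal(N)                # stand-in for the time history x(t)

lhs = np.fft.fft(w*x)                     # DFT of the time-domain product

W, X = np.fft.fft(w), np.fft.fft(x)
idx = np.arange(N)
# circular convolution of the two spectra, divided by N
rhs = np.array([np.sum(W * X[(k - idx) % N]) for k in idx])/N

print(np.max(np.abs(lhs - rhs)))          # essentially zero
```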

3.2.7 Unit impulse and convolution

We begin with $\int_{-\infty}^{\infty}\delta(u)X(\omega-u)\,du$, where $\delta(u)$ is the unit impulse, and we recognize the integral to be a convolution integral of the form presented in Eq. (3.2-51). Letting $\omega - u = \sigma$, and differentiating with respect to $u$, gives $du = -d\sigma$ and we get

$$\int_{-\infty}^{\infty}\delta(u)X(\omega-u)\,du = -\int_{\infty}^{-\infty}\delta(\omega-\sigma)X(\sigma)\,d\sigma = \int_{-\infty}^{\infty}\delta(\omega-\sigma)X(\sigma)\,d\sigma \tag{3.2-55}$$

From the definition of the unit impulse, we note that the integrand in Eq. (3.2-55) will be zero for all values of $\sigma$ except when $\sigma$ equals $\omega$. Since this occurs over an infinitesimally small interval, the function $X(\sigma)$ can be considered constant when $\sigma = \omega$. Therefore,

$$\int_{-\infty}^{\infty}\delta(\omega-\sigma)X(\sigma)\,d\sigma = X(\omega)\int_{-\infty}^{\infty}\delta(\omega-\sigma)\,d\sigma = X(\omega) \tag{3.2-56}$$

and we obtain the desired result,

$$\int_{-\infty}^{\infty}\delta(u)X(\omega-u)\,du = X(\omega) \tag{3.2-57}$$

Note that we could also have used the sifting property described in Section 3.2.3.3 to arrive at the same result. The relationship in Eq. (3.2-57) will be used in Chapter 5 to solve for the response of a system subjected to random excitation.

3.2.8 Relationship between boxcar function and unit impulse

Earlier we derived the Fourier transform of the boxcar function (Eq. 3.2-26). We also derived the Fourier transform of an impulse, starting with a boxcar that had a base that ran from $-\varepsilon/2$ to $\varepsilon/2$ and a height of $1/\varepsilon$ (see Fig. 3.2-3A). Can we use these facts to establish a relationship between the boxcar function and the unit impulse? We will start with Eq. (3.2-30) and compute the inverse Fourier transform of both sides:

$$\frac{1}{2\pi}\int_{-\infty}^{\infty} D(\omega)e^{i\omega t}\,d\omega = \frac{1}{2\pi}\int_{-\infty}^{\infty} (1)e^{i\omega t}\,d\omega \tag{3.2-58}$$


Performing the indicated integration yields

$$\delta(t) = \frac{1}{2\pi i t}\,e^{i\omega t}\bigg|_{-\infty}^{\infty} = \frac{1}{2\pi i t}\left(\cos\omega t + i\sin\omega t\right)\bigg|_{-\infty}^{\infty} \tag{3.2-59}$$

where we used Euler's formula to substitute for the exponential term. Since cosine is an even function, i.e., $\cos(-\omega t) = \cos(\omega t)$, the $\cos\omega t$ term will be equal to zero for the indicated limits. In addition, since sine is an odd function, i.e., $\sin(-\omega t) = -\sin(\omega t)$, we obtain

$$\delta(t) = \lim_{\omega\to\infty}\frac{2\sin\omega t}{2\pi t} = \lim_{\omega\to\infty}\frac{\sin\omega t}{\pi t} \tag{3.2-60}$$
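This limit can be sketched numerically: for growing Ω, the kernel $\sin(\Omega t)/(\pi t)$ keeps unit area while concentrating at $t = 0$ (the values of Ω and the integration grid below are illustrative).

```python
import numpy as np

t = np.linspace(-100.0, 100.0, 200001)
dt = t[1] - t[0]
for Omega in (5.0, 50.0):
    # sin(Omega*t)/(pi*t), written via np.sinc to handle t = 0 cleanly
    kernel = (Omega/np.pi)*np.sinc(Omega*t/np.pi)
    print(Omega, np.sum(kernel)*dt)       # area stays close to 1
```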

Problems

Problem 3.1 Show that $Ae^{-i\omega t}$ is the complex conjugate of $Ae^{i\omega t}$, where $i$ is the imaginary unit.

Solution 3.1 Recall Euler's formula, $Ae^{\pm i\omega t} = A(\cos\omega t \pm i\sin\omega t)$; hence,

$$Ae^{i\omega t} = A(\cos\omega t + i\sin\omega t) = A\cos\omega t + iA\sin\omega t = x + iy$$
$$Ae^{-i\omega t} = A(\cos\omega t - i\sin\omega t) = A\cos\omega t - iA\sin\omega t = x - iy$$

Problem 3.2 A phasor, $Ae^{i\omega t}$, is a vector that rotates counterclockwise, for positive $\omega$, in the complex plane, completing a full rotation, $2\pi$ rad, every $2\pi/\omega$ sec (see figure). Show that in the complex plane the magnitude of the phasor $Ae^{i\omega t}$ is $A$.


Solution 3.2 From Euler's formula we have $Ae^{i\omega t} = A(\cos\omega t + i\sin\omega t) = A\cos\omega t + iA\sin\omega t$. Hence, the magnitude is

$$\left|Ae^{i\omega t}\right| = \sqrt{A^2\cos^2\omega t + A^2\sin^2\omega t} = A\sqrt{\cos^2\omega t + \sin^2\omega t} = A$$

Problem 3.3 Use Euler's formula to prove Euler's identity, $e^{i\pi} + 1 = 0$.

Solution 3.3 Euler's formula gives

$$e^{i\pi} = \cos\pi + i\sin\pi = -1 + i(0)$$
$$e^{i\pi} + 1 = 0$$


Problem 3.4 Show that $\lim_{t\to\infty} e^{-st}e^{i\omega t} = 0$ when $s$ is a complex number, i.e., $s = a + ib$, with $a > 0$ (see Eq. 3.1-11 for its use).

Solution 3.4

$$\lim_{t\to\infty} e^{-st}e^{i\omega t} = \lim_{t\to\infty} e^{-(a+ib)t}e^{i\omega t} = \lim_{t\to\infty} e^{-at}e^{i(\omega-b)t}$$

Using Euler's formula,

$$\lim_{t\to\infty} e^{-at}e^{i(\omega-b)t} = \lim_{t\to\infty} e^{-at}\left(\cos(\omega-b)t + i\sin(\omega-b)t\right) = 0$$

Problem 3.5 Derive the Laplace transform of a step function, $f(t) = 1$ for $t \geq 0$.

Solution 3.5 From the definition of the Laplace transform, Eq. (3.1-1), we obtain

$$\tilde{F}(s) = \int_0^\infty f(t)e^{-st}\,dt = \int_0^\infty 1\,e^{-st}\,dt = \int_0^\infty e^{-st}\,dt = -\frac{1}{s}e^{-st}\Big|_0^\infty = \frac{1}{s}$$

Problem 3.6 Derive the Laplace transform of the unit impulse, $\delta(t)$, applied at $t = 0$.

Solution 3.6 Begin with the figure: first compute the Laplace transform of the function by integrating from zero to $\varepsilon$, and then let $\varepsilon$ go to zero in the limit. From the definition of the Laplace transform, Eq. (3.1-1), we obtain

$$\int_0^\varepsilon \delta(t)e^{-st}\,dt = \lim_{\varepsilon\to 0}\int_0^\varepsilon \frac{1}{\varepsilon}\,e^{-st}\,dt = \lim_{\varepsilon\to 0}\left(-\frac{1}{\varepsilon s}\,e^{-st}\Big|_0^\varepsilon\right) = \lim_{\varepsilon\to 0}\left(-\frac{1}{\varepsilon s}\,e^{-s\varepsilon} + \frac{1}{\varepsilon s}\right) = \lim_{\varepsilon\to 0}\left(\frac{1-e^{-s\varepsilon}}{\varepsilon s}\right)$$

Since the numerator and denominator both approach zero as $\varepsilon$ approaches zero, we need to use L'Hôpital's rule to establish the value of the quotient as $\varepsilon$ approaches zero:

$$\lim_{\varepsilon\to 0}\left(\frac{1-e^{-s\varepsilon}}{\varepsilon s}\right) = \lim_{\varepsilon\to 0}\left(\frac{\dfrac{\partial}{\partial\varepsilon}\left(1-e^{-s\varepsilon}\right)}{\dfrac{\partial}{\partial\varepsilon}\,\varepsilon s}\right) = \lim_{\varepsilon\to 0}\left(\frac{s\,e^{-s\varepsilon}}{s}\right) = 1$$

Problem 3.7 Use the Laplace transform to compute the displacement response of an undamped single-degree-of-freedom system subjected to a unit impulse, $\delta(t)$, at $t = 0$. Assume that the displacement and velocity before the impulse are both zero. Compare your results to the solution obtained in Chapter 2 for the response of an undamped single-degree-of-freedom system whose motion was initiated with an initial velocity. Discuss the results.


Solution 3.7 The equation of motion for an undamped single-degree-of-freedom system subjected to a unit impulse, $\delta(t)$, is $m\ddot{x}(t) + kx(t) = \delta(t)$. To compute the Laplace transforms, we begin by multiplying each term by $e^{-st}$, and then integrating with respect to $t$ from 0 to $\infty$,

$$\int_0^\infty m\ddot{x}(t)e^{-st}\,dt + \int_0^\infty kx(t)e^{-st}\,dt = \int_0^\infty \delta(t)e^{-st}\,dt$$

The Laplace transform of the first term is

$$\tilde{L}(m\ddot{x}(t)) = m\int_0^\infty \ddot{x}(t)e^{-st}\,dt = m\left(e^{-st}\dot{x}(t)\Big|_0^\infty - (-s)\int_0^\infty \dot{x}(t)e^{-st}\,dt\right)$$

which, with the zero initial conditions, reduces to $ms^2\tilde{X}(s)$ (see Eq. (3.1-5)). With the Laplace transform of the unit impulse equal to one (Problem 3.6), the transformed equation of motion becomes

$$\left(ms^2 + k\right)\tilde{X}(s) = 1 \quad\Rightarrow\quad \tilde{X}(s) = \frac{1}{m}\,\frac{1}{s^2 + \omega_n^2}$$

whose inverse transform is $x(t) = \dfrac{1}{m\omega_n}\sin\omega_n t$. This is identical to the Chapter 2 free-vibration solution for motion initiated with an initial velocity $\dot{x}(0) = 1/m$: the unit impulse imparts a velocity change of $1/m$ while the displacement remains zero.

Solution 3.9 We note that $f(t) = 0$ for $t \leq 0$ and for $t > T$. We begin the solution with Euler's formula,

$$\int_0^T \cos(\omega_0 t)e^{-i\omega t}\,dt = \int_0^T \cos(\omega_0 t)\left[\cos(\omega t) - i\sin(\omega t)\right]dt = \underbrace{\int_0^T \cos(\omega_0 t)\cos(\omega t)\,dt}_{\hat{C}(\omega)} - i\underbrace{\int_0^T \cos(\omega_0 t)\sin(\omega t)\,dt}_{\hat{D}(\omega)}$$

where $\omega_0 = 2\pi/T$. Recall

$$\cos\alpha\cos\beta = \frac{1}{2}\left(\cos(\alpha+\beta) + \cos(\alpha-\beta)\right) \quad\text{and}\quad \cos\alpha\sin\beta = \frac{1}{2}\left(\sin(\alpha+\beta) - \sin(\alpha-\beta)\right)$$

Hence,

$$\begin{aligned}
\hat{C}(\omega) &= \frac{1}{2}\int_0^T \cos((\omega_0+\omega)t) + \cos((\omega_0-\omega)t)\,dt \\
&= \frac{1}{2}\left[\frac{\sin((\omega_0+\omega)t)}{\omega_0+\omega} + \frac{\sin((\omega_0-\omega)t)}{\omega_0-\omega}\right]_0^T = \frac{1}{2}\left[\frac{\sin((\omega_0+\omega)T)}{\omega_0+\omega} + \frac{\sin((\omega_0-\omega)T)}{\omega_0-\omega}\right]
\end{aligned}$$

and

$$\begin{aligned}
\hat{D}(\omega) &= \frac{1}{2}\int_0^T \sin((\omega_0+\omega)t) - \sin((\omega_0-\omega)t)\,dt \\
&= \frac{1}{2}\left[-\frac{\cos((\omega_0+\omega)t)}{\omega_0+\omega} + \frac{\cos((\omega_0-\omega)t)}{\omega_0-\omega}\right]_0^T \\
&= \frac{1}{2}\left[-\frac{\cos((\omega_0+\omega)T)}{\omega_0+\omega} + \frac{\cos((\omega_0-\omega)T)}{\omega_0-\omega} + \frac{2\omega}{\omega^2-\omega_0^2}\right]
\end{aligned}$$

This yields

$$\begin{aligned}
\hat{F}(\omega) = \frac{A}{2}\hat{C}(\omega) - i\frac{A}{2}\hat{D}(\omega) &= \frac{A}{4}\left[\frac{\sin((\omega_0+\omega)T)}{\omega_0+\omega} + \frac{\sin((\omega_0-\omega)T)}{\omega_0-\omega}\right] \\
&\quad - i\frac{A}{4}\left[-\frac{\cos((\omega_0+\omega)T)}{\omega_0+\omega} + \frac{\cos((\omega_0-\omega)T)}{\omega_0-\omega} + \frac{2\omega}{\omega^2-\omega_0^2}\right]
\end{aligned}$$


The requested plots are shown below.

Problem 3.10 Repeat the calculations in Problem 3.9, but now assume that the forcing function acts for 10 cycles, i.e., $10T$. Plot the real and imaginary parts of the Fourier transform, for $A = 1$ and $T = 1.5$, against the circular frequency, $\omega$, and then plot the modulus, i.e., $\sqrt{\mathrm{Re}^2(\omega) + \mathrm{Im}^2(\omega)}$. Discuss your observations relative to the solution and plots obtained in Problem 3.9.

Solution 3.10 The Fourier transform is defined in Eq. (3.2-1); hence,

$$F(\omega) = \int_{-\infty}^{\infty} f(t)e^{-i\omega t}\,dt = \frac{A}{2}\int_0^{10T} \cos\left(\frac{2\pi}{T}t\right)e^{-i\omega t}\,dt$$

where we note that $f(t) = 0$ for $t \leq 0$ and for $t > 10T$. As in the preceding problem, we begin the solution by recalling Euler's formula,

$$\int_0^{10T} \cos(\omega_0 t)e^{-i\omega t}\,dt = \int_0^{10T} \cos(\omega_0 t)\left[\cos(\omega t) - i\sin(\omega t)\right]dt = \underbrace{\int_0^{10T} \cos(\omega_0 t)\cos(\omega t)\,dt}_{\hat{C}(\omega)} - i\underbrace{\int_0^{10T} \cos(\omega_0 t)\sin(\omega t)\,dt}_{\hat{D}(\omega)}$$


where $\omega_0 = 2\pi/T$. Recall

$$\cos\alpha\cos\beta = \frac{1}{2}\left(\cos(\alpha+\beta) + \cos(\alpha-\beta)\right) \quad\text{and}\quad \cos\alpha\sin\beta = \frac{1}{2}\left(\sin(\alpha+\beta) - \sin(\alpha-\beta)\right)$$

Hence,

$$\begin{aligned}
\hat{C}(\omega) &= \frac{1}{2}\int_0^{10T} \cos((\omega_0+\omega)t) + \cos((\omega_0-\omega)t)\,dt \\
&= \frac{1}{2}\left[\frac{\sin((\omega_0+\omega)t)}{\omega_0+\omega} + \frac{\sin((\omega_0-\omega)t)}{\omega_0-\omega}\right]_0^{10T} = \frac{1}{2}\left[\frac{\sin((\omega_0+\omega)10T)}{\omega_0+\omega} + \frac{\sin((\omega_0-\omega)10T)}{\omega_0-\omega}\right]
\end{aligned}$$

and

$$\begin{aligned}
\hat{D}(\omega) &= \frac{1}{2}\int_0^{10T} \sin((\omega_0+\omega)t) - \sin((\omega_0-\omega)t)\,dt \\
&= \frac{1}{2}\left[-\frac{\cos((\omega_0+\omega)t)}{\omega_0+\omega} + \frac{\cos((\omega_0-\omega)t)}{\omega_0-\omega}\right]_0^{10T} \\
&= \frac{1}{2}\left[-\frac{\cos((\omega_0+\omega)10T)}{\omega_0+\omega} + \frac{\cos((\omega_0-\omega)10T)}{\omega_0-\omega} + \frac{2\omega}{\omega^2-\omega_0^2}\right]
\end{aligned}$$

This yields

$$\begin{aligned}
\hat{F}(\omega) = \frac{A}{2}\hat{C}(\omega) - i\frac{A}{2}\hat{D}(\omega) &= \frac{A}{4}\left[\frac{\sin((\omega_0+\omega)10T)}{\omega_0+\omega} + \frac{\sin((\omega_0-\omega)10T)}{\omega_0-\omega}\right] \\
&\quad - i\frac{A}{4}\left[-\frac{\cos((\omega_0+\omega)10T)}{\omega_0+\omega} + \frac{\cos((\omega_0-\omega)10T)}{\omega_0-\omega} + \frac{2\omega}{\omega^2-\omega_0^2}\right]
\end{aligned}$$


The requested plots are shown below.

Comparing the above figure to the figure in Problem 3.9, we observe that the center lobe of the modulus has narrowed and the side lobes have decreased in amplitude relative to the center lobe for the case with the larger number of cycles, i.e., Problem 3.10. Indeed, if the number of cycles were infinite, the center lobe would reduce to an impulse function centered at $\omega_0 = 2\pi/T$.

Problem 3.11 Compute the Fourier transform of a one-minus-cosine forcing function defined by $f(t) = \dfrac{A}{2}\left(1 - \cos\dfrac{2\pi}{T}t\right)$ and shown in the figure. Plot the real and imaginary parts of the Fourier transform, for $A = 1$ and $T = 1.5$, against the circular frequency, $\omega$, and then plot the modulus, i.e., $\sqrt{\mathrm{Re}^2(\omega) + \mathrm{Im}^2(\omega)}$. For additional discussion of this forcing function, see Volume II, the section on atmospheric flight turbulence/gust analysis.


Solution 3.11 The Fourier transform is defined in Eq. (3.2-1); hence,

$$F(\omega) = \int_{-\infty}^{\infty} f(t)e^{-i\omega t}\,dt = \frac{A}{2}\int_0^T \left(1 - \cos\frac{2\pi}{T}t\right)e^{-i\omega t}\,dt = \frac{A}{2}\int_0^T e^{-i\omega t}\,dt - \frac{A}{2}\int_0^T \cos\left(\frac{2\pi}{T}t\right)e^{-i\omega t}\,dt$$

where we note that $f(t) = 0$ for $t \leq 0$ and for $t > T$. The solution to the first integral is

$$\frac{A}{2}\int_0^T e^{-i\omega t}\,dt = -\frac{A}{2i\omega}\,e^{-i\omega t}\Big|_0^T = \frac{A}{2i\omega}\left(1 - e^{-i\omega T}\right)$$

We begin the solution to the second integral by recalling Euler's formula,

$$\int_0^T \cos(\omega_0 t)e^{-i\omega t}\,dt = \int_0^T \cos(\omega_0 t)\left[\cos(\omega t) - i\sin(\omega t)\right]dt = \underbrace{\int_0^T \cos(\omega_0 t)\cos(\omega t)\,dt}_{\hat{C}(\omega)} - i\underbrace{\int_0^T \cos(\omega_0 t)\sin(\omega t)\,dt}_{\hat{D}(\omega)}$$

0

b CðuÞ

b DðuÞ

where $\omega_0 = 2\pi/T$. Recall

$$\cos\alpha\cos\beta = \frac{1}{2}\left(\cos(\alpha+\beta) + \cos(\alpha-\beta)\right) \quad\text{and}\quad \cos\alpha\sin\beta = \frac{1}{2}\left(\sin(\alpha+\beta) - \sin(\alpha-\beta)\right)$$

Hence, for $\omega \neq \omega_0$,

$$\begin{aligned}
\hat{C}(\omega) &= \frac{1}{2}\int_0^T \cos((\omega_0+\omega)t) + \cos((\omega_0-\omega)t)\,dt \\
&= \frac{1}{2}\left[\frac{\sin((\omega_0+\omega)t)}{\omega_0+\omega} + \frac{\sin((\omega_0-\omega)t)}{\omega_0-\omega}\right]_0^T = \frac{1}{2}\left[\frac{\sin((\omega_0+\omega)T)}{\omega_0+\omega} + \frac{\sin((\omega_0-\omega)T)}{\omega_0-\omega}\right]
\end{aligned}$$

155

156

CHAPTER 3 Transfer and frequency response functions

and 1 b DðuÞ ¼ 2 1 ¼ 2 ¼

1 2

Z

T

sinððu0 þ uÞtÞ  sinððu0  uÞtÞdt

0



 cosððu0 þ uÞtÞ cosððu0  uÞtÞ T þ   u0 þ u u0  u 0 

cosððu0 þ uÞTÞ cosððu0  uÞTÞ 2u þ þ 2 u0 þ u u0  u u  u20

!

This yields

A sinððu þ uÞTÞ sinððu  uÞTÞ A b 0 0 b CðuÞ  i DðuÞ ¼ þ 2 4 u0 þ u u0  u A i 4

cosððu0 þ uÞTÞ cosððu0  uÞTÞ 2u  þ þ 2 u0 þ u u0  u u  u20

For u ¼ u0,   T  Z T  1 1 sinð2u tÞ 1 sinð2u TÞ 0 0 b 0Þ ¼ cosð2u0 tÞ þ 1dt ¼ þ t  ¼ þT Cðu 2 0 2 2u0 2 2u0 0 and b 0Þ ¼ 1 Dðu 2

Z 0

T

 1 cosð2u0 tÞT 1 sinð2u0 tÞdt ¼  ¼ ð1  cosð2u0 TÞÞ  2 2u0 4u0 0

b b 0 Þ and lim DðuÞ b ¼ ¼ Cðu where limit arguments show that lim CðuÞ u/u0

u/u0

b 0 Þ. Collecting the above terms yields the sought-after solution, Dðu

!

Problem 3.12

157

   A sinððu0 þ uÞTÞ sinððu0  uÞTÞ A  iuT 1e þ  FðuÞ ¼ 2iu 4 u0 þ u u0  u þi

A 4

A ¼ 4



A þi 4



cosððu0 þ uÞTÞ cosððu0  uÞTÞ 2u þ þ 2 u0 þ u u0  u u  u20

!

 sinððu0 þ uÞTÞ sinððu0  uÞTÞ 2  þ ðsin uTÞ  u0 þ u u0  u u ! cosððu0 þ uÞTÞ cosððu0  uÞTÞ 2u 2  þ ðcos uT 1Þ þ þ 2 u0 þ u u0  u u  u20 u

The requested plot is shown below.
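The closed-form result above can be spot-checked by evaluating Eq. (3.2-1) numerically. Below is a minimal Python sketch (the function names `one_minus_cosine_ft` and `numeric_ft` are ours, not the text's); it assumes $\omega \ne 0$ and $\omega \ne \omega_0$:

```python
import cmath
import math

def one_minus_cosine_ft(w, A=1.0, T=1.5):
    # Closed-form F(w) from the solution above, valid for w != 0 and w != w0
    w0 = 2.0 * math.pi / T
    C = 0.5 * (math.sin((w0 + w) * T) / (w0 + w)
               + math.sin((w0 - w) * T) / (w0 - w))
    D = 0.5 * (-math.cos((w0 + w) * T) / (w0 + w)
               + math.cos((w0 - w) * T) / (w0 - w)
               + 2.0 * w / (w**2 - w0**2))
    return (A / (2j * w)) * (1.0 - cmath.exp(-1j * w * T)) - (A / 2.0) * (C - 1j * D)

def numeric_ft(f, w, t0, t1, n=20000):
    # Midpoint-rule approximation of the Fourier integral over [t0, t1]
    dt = (t1 - t0) / n
    return sum(f(t0 + (k + 0.5) * dt) * cmath.exp(-1j * w * (t0 + (k + 0.5) * dt))
               for k in range(n)) * dt
```

Evaluating both at several frequencies for $A = 1$, $T = 1.5$ shows agreement to within the quadrature error.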

Problem 3.12
Compute the Fourier transform of the shifted one-minus-cosine forcing function shown in the figure and defined by
$$f(t) = \begin{cases} \dfrac{A}{2}\left(1 + \cos\dfrac{2\pi}{T}t\right) & |t| \le \dfrac{T}{2} \\[1ex] 0 & |t| > \dfrac{T}{2} \end{cases}$$


Plot the Fourier transform, for $A = 1$ and $T = 1.5$, against the circular frequency, $\omega$, and then plot the modulus, i.e., $\sqrt{\mathrm{Re}^2(\omega) + \mathrm{Im}^2(\omega)}$. Compare your results to those of Problem 3.11 and discuss your observations. For additional discussion of this forcing function, see Volume II, the section on atmospheric flight turbulence/gust analysis.

Solution 3.12
Computing the Fourier transform,
$$F(\omega) = \int_{-\infty}^{\infty} f(t)e^{-i\omega t}\,dt = \int_{-T/2}^{T/2}\frac{A}{2}\left(1+\cos\frac{2\pi}{T}t\right)e^{-i\omega t}\,dt = \frac{A}{2}\int_{-T/2}^{T/2}e^{-i\omega t}\,dt + \frac{A}{2}\int_{-T/2}^{T/2}\cos\left(\frac{2\pi}{T}t\right)e^{-i\omega t}\,dt$$
Proceeding with the first integral,
$$\frac{A}{2}\int_{-T/2}^{T/2}e^{-i\omega t}\,dt = \frac{A}{2}\frac{e^{-i\omega t}}{-i\omega}\Big|_{-T/2}^{T/2} = \frac{A}{2}\frac{e^{i\omega T/2} - e^{-i\omega T/2}}{i\omega} = A\frac{\sin(\omega T/2)}{\omega}$$
Solving the second integral, where $\omega_0 = 2\pi/T$,
$$\frac{A}{2}\int_{-T/2}^{T/2}\cos\left(\frac{2\pi}{T}t\right)e^{-i\omega t}\,dt = \frac{A}{2}\int_{-T/2}^{T/2}\frac{e^{i\omega_0 t}+e^{-i\omega_0 t}}{2}e^{-i\omega t}\,dt = \frac{A}{2}\int_{-T/2}^{T/2}\frac{e^{i(\omega_0-\omega)t}+e^{-i(\omega_0+\omega)t}}{2}\,dt$$
$$= \frac{A}{2}\left[\frac{e^{i(\omega_0-\omega)t}}{2i(\omega_0-\omega)} - \frac{e^{-i(\omega_0+\omega)t}}{2i(\omega_0+\omega)}\right]_{-T/2}^{T/2} = \frac{A}{2}\left(\frac{\sin\dfrac{(\omega_0-\omega)T}{2}}{\omega_0-\omega} + \frac{\sin\dfrac{(\omega_0+\omega)T}{2}}{\omega_0+\omega}\right)$$
Combining produces the sought-after result,
$$F(\omega) = A\frac{\sin(\omega T/2)}{\omega} + \frac{A}{2}\left(\frac{\sin\dfrac{(\omega_0-\omega)T}{2}}{\omega_0-\omega} + \frac{\sin\dfrac{(\omega_0+\omega)T}{2}}{\omega_0+\omega}\right)$$
Note, the solution could also be written in terms of the $\mathrm{sinc}(x)$ function, where $\mathrm{sinc}(x) = \sin x / x$. In this case, the solution is
$$F(\omega) = \frac{AT}{2}\left[\mathrm{sinc}(\omega T/2) + \frac{1}{2}\mathrm{sinc}\left(\frac{(\omega-\omega_0)T}{2}\right) + \frac{1}{2}\mathrm{sinc}\left(\frac{(\omega+\omega_0)T}{2}\right)\right]$$
Below is plotted the modulus of the above Fourier transform (solid line); since the transform is real due to the symmetry of the forcing function about the ordinate axis, the modulus is simply the absolute value of the Fourier transform. Also plotted is the modulus obtained in Problem 3.11 (dashed line), where the Fourier transform was a complex quantity due to the lack of symmetry. The moduli of the two functions are nevertheless identical, as shown by the graph: shifting the pulse in time changes only the phase of the transform, not its modulus.
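The sinc form can be cross-checked against direct numerical integration, which also confirms that the imaginary part vanishes. A minimal sketch (the helper names are ours):

```python
import cmath
import math

def sinc(x):
    # sinc(x) = sin(x)/x with the removable singularity filled in
    return 1.0 if x == 0.0 else math.sin(x) / x

def shifted_one_minus_cosine_ft(w, A=1.0, T=1.5):
    # Sinc form of F(w) from the solution above; the result is purely real
    w0 = 2.0 * math.pi / T
    return (A * T / 2.0) * (sinc(w * T / 2.0)
                            + 0.5 * sinc((w - w0) * T / 2.0)
                            + 0.5 * sinc((w + w0) * T / 2.0))
```

Because the forcing function is even, the numerically integrated transform has an imaginary part at the level of rounding error only.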


Problem 3.13
In Section 3.2.3, we showed that the Fourier transform of a boxcar function defined by Eq. (3.2-23) is
$$W_T(\omega) = \frac{2\sin\omega T}{\omega}$$
Show that $W_T(\omega)$ is bounded at $\omega = 0$ and its magnitude is $2T$.

Solution 3.13
Since the numerator and denominator both approach zero as $\omega$ approaches zero, we need to use L'H&ocirc;pital's rule to establish the value of the quotient as $\omega$ goes to zero:
$$\lim_{\omega\to 0} W_T(\omega) = \lim_{\omega\to 0}\frac{2\sin\omega T}{\omega} = \lim_{\omega\to 0}\frac{\dfrac{\partial}{\partial\omega}\left(2\sin(\omega T)\right)}{\dfrac{\partial}{\partial\omega}\omega} = \lim_{\omega\to 0}\frac{2T\cos(\omega T)}{1} = 2T$$
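The removable singularity is exactly the case a plotting routine must handle explicitly. A minimal sketch (our own helper name), filling in the L'H&ocirc;pital limit at $\omega = 0$:

```python
import math

def W_T(w, T):
    # Boxcar transform 2*sin(w*T)/w, with the w -> 0 limit 2*T from L'Hopital's rule
    return 2.0 * T if w == 0.0 else 2.0 * math.sin(w * T) / w
```

Values just off $\omega = 0$ approach $2T$ smoothly from either side, consistent with boundedness.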

Problem 3.14
Compute the Fourier transform of the two functions shown in the figure. Note that the left function is nonzero for $0 \le t \le 1$ and the right function is nonzero for $-1 \le t \le 1$. Discuss the results, with particular emphasis on whether either or both are complex-valued functions. If one is not, why not?

Solution 3.14
The first function is given by $f(t) = t$ for $0 \le t \le 1$; otherwise it is zero. The second function is given by $f(t) = -t$ for $-1 \le t \le 0$ and $f(t) = t$ for $0 \le t \le 1$; otherwise it is zero. We will compute the Fourier transform of the first function. The Fourier transform is defined in Eq. (3.2-1), i.e.,
$$F(\omega) = \int_{-\infty}^{\infty} f(t)e^{-i\omega t}\,dt = \int_0^1 te^{-i\omega t}\,dt$$
This lends itself to integration by parts; let $u = t$ and $\dfrac{dv}{dt} = e^{-i\omega t}$; hence,
$$F(\omega) = uv\Big|_0^1 - \int_0^1 v\frac{du}{dt}\,dt = -t\frac{e^{-i\omega t}}{i\omega}\Big|_0^1 + \frac{1}{i\omega}\int_0^1 e^{-i\omega t}\,dt = -\frac{e^{-i\omega}}{i\omega} - \frac{1}{i\omega}\frac{e^{-i\omega t}}{i\omega}\Big|_0^1 = \frac{1}{\omega^2}\left(i\omega e^{-i\omega} + e^{-i\omega} - 1\right)$$
$$= \frac{1}{\omega^2}\left((\cos\omega - i\sin\omega)(i\omega + 1) - 1\right) = \frac{1}{\omega^2}\left(i(\omega\cos\omega - \sin\omega) + \omega\sin\omega + \cos\omega - 1\right)$$
Note that the Fourier transform is a complex quantity. Proceeding to the second function, we have
$$F(\omega) = \int_{-\infty}^{\infty} f(t)e^{-i\omega t}\,dt = \int_{-1}^{0}(-t)e^{-i\omega t}\,dt + \int_0^1 te^{-i\omega t}\,dt$$
Substituting $t = -\tau$ in the first integral gives
$$\int_{-1}^{0}(-t)e^{-i\omega t}\,dt = \int_0^1 \tau e^{i\omega\tau}\,d\tau = \frac{1}{\omega^2}\left(-i(\omega\cos\omega - \sin\omega) + \omega\sin\omega + \cos\omega - 1\right)$$
which is the complex conjugate of the transform of the first function. Combining the two integrals, the imaginary parts cancel and we obtain the sought-after solution,
$$F(\omega) = \frac{2}{\omega^2}\left(\omega\sin\omega + \cos\omega - 1\right)$$
The item to note is that this transform is real. This is because the second function is symmetric about the ordinate axis.
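Both closed forms can be verified numerically; the check also makes the symmetry argument concrete, since the even function's transform comes out real. A minimal sketch (function names are ours):

```python
import cmath
import math

def ramp_ft(w):
    # Transform of f(t) = t on [0, 1]: complex, since f is not symmetric
    re = (w * math.sin(w) + math.cos(w) - 1.0) / w**2
    im = (w * math.cos(w) - math.sin(w)) / w**2
    return complex(re, im)

def vee_ft(w):
    # Transform of f(t) = |t| on [-1, 1]: real, since f is even
    return 2.0 * (w * math.sin(w) + math.cos(w) - 1.0) / w**2
```

Comparing against midpoint-rule integration of Eq. (3.2-1) confirms both expressions.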

Problem 3.15
Show that the Fourier transform of $\sin\hat{\omega}t$ is
$$\mathcal{F}(\sin\hat{\omega}t) = -i\pi\delta(\omega - \hat{\omega}) + i\pi\delta(\omega + \hat{\omega})$$
Plot the results and compare to those for the Fourier transform of the cosine function.


Solution 3.15
Following the procedure in Section 3.2.3.5 we note that
$$\sin\hat{\omega}t = -\frac{i}{2}\left(\cos\hat{\omega}t + i\sin\hat{\omega}t - \left(\cos\hat{\omega}t - i\sin\hat{\omega}t\right)\right) = -\frac{i}{2}\left(e^{i\hat{\omega}t} - e^{-i\hat{\omega}t}\right)$$
Hence, substituting Eq. (3.2-41), we obtain
$$\mathcal{F}(\sin\hat{\omega}t) = -i\pi\delta(\omega - \hat{\omega}) + i\pi\delta(\omega + \hat{\omega})$$
The requested plots are below.

Problem 3.16
In Chapter 2, the response of a damped single-degree-of-freedom system whose motion was initiated by an initial velocity was derived, i.e.,
$$x(t) = \dot{x}(0)e^{-\zeta\omega_n t}\left(\frac{\sin\omega_d t}{\omega_d}\right)$$
In Problems 3.7 and 3.8, the response of an undamped single-degree-of-freedom system subjected to an impulse of magnitude $I\delta(0)$ at $t = 0$ was derived, i.e., $x(t) = \dfrac{I}{m\omega_n}\sin\omega_n t$; and when compared to the response caused by an initial velocity, i.e., $x(t) = \dfrac{\dot{x}(0)}{\omega_n}\sin\omega_n t$, we concluded that an impulsive force was equivalent to imparting an initial velocity of magnitude $\dot{x}(0) = I/m$. We can then conclude for this problem that the response of a damped single-degree-of-freedom system to a unit impulse at $t = 0$ is
$$x(t) = \dot{x}(0)e^{-\zeta\omega_n t}\left(\frac{\sin\omega_d t}{\omega_d}\right) = \frac{I}{m}e^{-\zeta\omega_n t}\left(\frac{\sin\omega_d t}{\omega_d}\right) = \frac{1}{m}e^{-\zeta\omega_n t}\left(\frac{\sin\omega_d t}{\omega_d}\right)$$

Given the above response, compute the Fourier transform of the response of a damped single-degree-of-freedom system to a unit impulse at $t = 0$. Plot the imaginary and real components of the transform. Compare to the results presented in Eq. (3.2-12) and explain your results. Assume $\omega_n = \pi$ and $\zeta = 0.02$.

Solution 3.16
The Fourier transform is defined by Eq. (3.2-1); hence,
$$X(\omega) = \int_{-\infty}^{\infty} x(t)e^{-i\omega t}\,dt = \frac{1}{m}\int_0^{\infty} e^{-\zeta\omega_n t}\left(\frac{\sin\omega_d t}{\omega_d}\right)e^{-i\omega t}\,dt = \frac{1}{m\omega_d}\int_0^{\infty}\sin(\omega_d t)e^{-(i\omega+\zeta\omega_n)t}\,dt$$
where the lower limit of integration was changed to zero since there is no response prior to the impulse at $t = 0$. Integrating by parts twice yields
$$\int_0^{\infty}\sin(\omega_d t)e^{-(i\omega+\zeta\omega_n)t}\,dt = \frac{e^{-(i\omega+\zeta\omega_n)t}}{(i\omega+\zeta\omega_n)^2+\omega_d^2}\left(-(i\omega+\zeta\omega_n)\sin(\omega_d t) - \omega_d\cos(\omega_d t)\right)\Big|_0^{\infty} = \frac{\omega_d}{(i\omega+\zeta\omega_n)^2+\omega_d^2}$$
Hence,
$$X(\omega) = \frac{1}{m\omega_d}\frac{\omega_d}{(i\omega+\zeta\omega_n)^2+\omega_d^2} = \frac{1}{m}\frac{1}{(i\omega+\zeta\omega_n)^2+\omega_d^2}$$
Performing the indicated multiplications in the denominator and collecting the real and imaginary parts yields
$$X(\omega) = \frac{1}{m}\frac{1}{\omega_d^2-\omega^2+(\zeta\omega_n)^2 + i2\zeta\omega\omega_n}$$
Multiplying by the complex conjugate of the denominator divided by itself gives
$$X(\omega) = \frac{1}{m}\frac{\omega_d^2-\omega^2+(\zeta\omega_n)^2 - i2\zeta\omega\omega_n}{\left(\omega_d^2-\omega^2+(\zeta\omega_n)^2\right)^2 + (2\zeta\omega\omega_n)^2} = \frac{1}{m}\frac{\omega_d^2-\omega^2+(\zeta\omega_n)^2}{\left(\omega_d^2-\omega^2+(\zeta\omega_n)^2\right)^2 + (2\zeta\omega\omega_n)^2} - i\frac{1}{m}\frac{2\zeta\omega\omega_n}{\left(\omega_d^2-\omega^2+(\zeta\omega_n)^2\right)^2 + (2\zeta\omega\omega_n)^2}$$
Eq. (3.2-12) presents the frequency response function for harmonic excitation, i.e.,
$$\frac{X(\omega)}{F(\omega)} = \frac{1}{k}\left\{\frac{1-\lambda^2}{\left(1-\lambda^2\right)^2+(2\zeta\lambda)^2} - i\frac{2\zeta\lambda}{\left(1-\lambda^2\right)^2+(2\zeta\lambda)^2}\right\}$$
Comparing the above to our solution we observe that the impulse response is scaled by $1/m$, whereas the transfer function is scaled by $1/k$. Since $\omega_n^2 = k/m$, we can substitute for $1/k$ and obtain
$$\frac{X(\omega)}{F(\omega)} = \frac{1}{m\omega_n^2}\left\{\frac{1-\lambda^2}{\left(1-\lambda^2\right)^2+(2\zeta\lambda)^2} - i\frac{2\zeta\lambda}{\left(1-\lambda^2\right)^2+(2\zeta\lambda)^2}\right\}$$
Plotting the Fourier transform of the impulse response (solid line) and the transfer function to harmonic excitation (dashed line), we observe that they are identical. Hence, harmonic excitation and impulse excitation yield the same frequency domain information. These excitation/testing techniques will be discussed in considerable detail in Volume II.
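The claimed equivalence of the two forms can be checked numerically. A minimal sketch (helper names are ours, not the text's), comparing the compact impulse-response transform against the Eq. (3.2-12) form scaled by $1/(m\omega_n^2)$:

```python
import math

def impulse_response_ft(w, m=1.0, wn=math.pi, z=0.02):
    # X(w) = 1 / (m * ((i*w + z*wn)**2 + wd**2)) from the solution above
    wd = wn * math.sqrt(1.0 - z**2)
    return 1.0 / (m * ((1j * w + z * wn)**2 + wd**2))

def harmonic_frf(w, m=1.0, wn=math.pi, z=0.02):
    # Eq. (3.2-12) with 1/k replaced by 1/(m*wn**2), lam = w/wn
    lam = w / wn
    den = (1.0 - lam**2)**2 + (2.0 * z * lam)**2
    return (1.0 / (m * wn**2)) * complex((1.0 - lam**2) / den, -2.0 * z * lam / den)
```

The two agree to rounding error at every frequency, which is the statement that impulse and harmonic excitation carry the same frequency-domain information.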


Problem 3.17
In Problem 3.16, the Fourier transform of the displacement response of a single-degree-of-freedom system whose motion was caused by a unit impulse was computed. Compute the Fourier transform of the acceleration response of the same system. Plot the imaginary and real components and compare to the values obtained with Eq. (3.2-14). Explain your results.

Solution 3.17
From Eq. (3.2-6) we obtain $\ddot{X}(\omega) = -\omega^2 X(\omega)$. Hence, the acceleration response Fourier transform is
$$\ddot{X}(\omega) = -\omega^2 X(\omega) = -\frac{1}{m}\frac{\omega^2\left(\omega_d^2-\omega^2+(\zeta\omega_n)^2\right)}{\left(\omega_d^2-\omega^2+(\zeta\omega_n)^2\right)^2+(2\zeta\omega\omega_n)^2} + i\frac{1}{m}\frac{2\zeta\omega^3\omega_n}{\left(\omega_d^2-\omega^2+(\zeta\omega_n)^2\right)^2+(2\zeta\omega\omega_n)^2}$$
where $X(\omega)$ is from the solution to Problem 3.16. The Fourier transform of the response to harmonic excitation is given by Eq. (3.2-14) and is repeated here to facilitate the discussion,
$$\frac{\ddot{X}(\omega)}{F(\omega)} = \frac{1}{m}\left\{-\frac{\lambda^2\left(1-\lambda^2\right)}{\left(1-\lambda^2\right)^2+(2\zeta\lambda)^2} + i\frac{2\zeta\lambda^3}{\left(1-\lambda^2\right)^2+(2\zeta\lambda)^2}\right\}$$


Plotting the Fourier transform of the impulse acceleration response (solid line) and the acceleration response to harmonic excitation transfer function (dashed line) we observe that they are identical.
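The multiplication by $-\omega^2$ and the rationalized real/imaginary split above can be checked against each other numerically. A minimal sketch (our own helper name):

```python
import math

def accel_ft(w, m=1.0, wn=math.pi, z=0.02):
    # Acceleration transform: multiply the displacement transform by -w**2
    wd = wn * math.sqrt(1.0 - z**2)
    X = 1.0 / (m * ((1j * w + z * wn)**2 + wd**2))
    return -w**2 * X
```

Its real and imaginary parts match the explicit expressions in the solution term by term.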

Appendix 3.1 Integration by parts
If $\dot{x}(t)$ and $e^{-st}$ are differentiable functions of $t$, we can apply the product rule of differentiation to $\dot{x}(t)e^{-st}$, i.e.,
$$\frac{d}{dt}\left(\dot{x}(t)e^{-st}\right) = \dot{x}(t)\frac{d}{dt}e^{-st} + e^{-st}\frac{d}{dt}\dot{x}(t)$$
$$e^{-st}\frac{d}{dt}\dot{x}(t) = \frac{d}{dt}\left(\dot{x}(t)e^{-st}\right) - \dot{x}(t)\frac{d}{dt}e^{-st}$$
Multiplying each term by $dt$ and integrating from $0$ to $\infty$ yields
$$\int_0^{\infty} e^{-st}\frac{d}{dt}\dot{x}(t)\,dt = \int_0^{\infty}\frac{d}{dt}\left(\dot{x}(t)e^{-st}\right)dt - \int_0^{\infty}\dot{x}(t)\frac{d}{dt}e^{-st}\,dt$$
$$\int_0^{\infty}\ddot{x}(t)e^{-st}\,dt = \dot{x}(t)e^{-st}\Big|_0^{\infty} - \int_0^{\infty}\dot{x}(t)\left(-se^{-st}\right)dt = -\dot{x}(0) + s\int_0^{\infty}\dot{x}(t)e^{-st}\,dt$$


The above result can be stated for two differentiable functions, $u$ and $v$, as follows:
$$\int_a^b u\frac{dv}{dt}\,dt = uv\Big|_a^b - \int_a^b v\frac{du}{dt}\,dt$$

Appendix 3.2 Laplace transform
The Laplace transform is a useful tool for solving ordinary differential equations. If $x(t)$ is an integrable function of $t$, and $t \ge 0$, then the Laplace transform of $x(t)$ is defined as follows:
$$\tilde{L}(x(t)) = \tilde{X}(s) = \int_0^{\infty} x(t)e^{-st}\,dt$$
where $s$ is a complex number, $a + ib$, and $a$ has to be greater than zero in order for the integral to converge. The reason for this will become apparent below.
The Laplace transform operator $\tilde{L}(\,)$ is linear and therefore
$$\tilde{L}(\alpha x(t)) = \alpha\tilde{L}(x(t))$$
and
$$\tilde{L}(x_1(t) + x_2(t)) = \tilde{L}(x_1(t)) + \tilde{L}(x_2(t))$$
In addition,
$$\tilde{L}(\dot{x}(t)) = s\tilde{L}(x(t)) - x(0)$$
and
$$\tilde{L}(\ddot{x}(t)) = s^2\tilde{L}(x(t)) - sx(0) - \dot{x}(0)$$
The usefulness of the Laplace transform is due to the unique correspondence that exists between a function and its Laplace transform. This allows the construction of tables that can be used to expedite deriving the solutions to ordinary differential equations. As an example, we will derive one such pair:
$$\tilde{L}\left(e^{-\zeta\omega_n t}\frac{\sin\omega_d t}{\omega_d}\right) = \int_0^{\infty} e^{-st}e^{-\zeta\omega_n t}\frac{\sin\omega_d t}{\omega_d}\,dt = \frac{1}{\omega_d}\int_0^{\infty} e^{-(\zeta\omega_n+s)t}\sin\omega_d t\,dt$$


where $\omega_d = \omega_n\sqrt{1-\zeta^2}$. Substituting $\omega_d = a$ and $-(\zeta\omega_n + s) = b$ will facilitate applying the relationship derived in Appendix 3.3, i.e.,
$$\int e^{bt}\sin(at)\,dt = \frac{e^{bt}}{a^2+b^2}\left(b\sin at - a\cos at\right)$$
Dividing both sides by $a$ and applying the integration limits yields
$$\frac{1}{a}\int_0^{\infty} e^{bt}\sin at\,dt = \frac{1}{a}\left[\frac{e^{bt}}{a^2+b^2}\left(b\sin at - a\cos at\right)\right]_0^{\infty}$$
Recall that $s$ is a complex number of the form $\sigma + i\nu$, where $\sigma > 0$. Therefore, $e^{bt} = e^{-\zeta\omega_n t}e^{-st} = e^{-\zeta\omega_n t}e^{-\sigma t}e^{-i\nu t}$. This is critical, since when we apply the limit $t \to \infty$, $e^{-\zeta\omega_n t}$ and $e^{-\sigma t}$ will yield zero; and since $e^{-i\nu t}$ is bounded by Euler's formula irrespective of the sign of $\nu$, and since $(b\sin at - a\cos at)$ is also bounded, the entire term will go to zero. Applying the limit $t = 0$ then yields
$$\frac{1}{a}\left[\frac{e^{bt}}{a^2+b^2}\left(b\sin at - a\cos at\right)\right]_0^{\infty} = \frac{1}{a}\left(0 + \frac{a}{a^2+b^2}\right) = \frac{1}{a^2+b^2}$$
Substituting for $a$ and $b$ produces
$$\frac{1}{a^2+b^2} = \frac{1}{\omega_d^2 + (\zeta\omega_n+s)^2} = \frac{1}{\omega_n^2\left(1-\zeta^2\right) + \zeta^2\omega_n^2 + s^2 + 2\zeta\omega_n s} = \frac{1}{s^2 + 2\zeta\omega_n s + \omega_n^2}$$
The Laplace transform pair, therefore, is
$$\tilde{L}\left(e^{-\zeta\omega_n t}\frac{\sin\omega_d t}{\omega_d}\right) = \frac{1}{s^2 + 2\zeta\omega_n s + \omega_n^2}$$
and
$$\tilde{L}^{-1}\left(\frac{1}{s^2 + 2\zeta\omega_n s + \omega_n^2}\right) = e^{-\zeta\omega_n t}\frac{\sin\omega_d t}{\omega_d}$$


Below are Laplace transform pairs that were derived in the same manner as above and are useful in solving structural dynamics problems:

$x(t) = \tilde{L}^{-1}\left(\tilde{X}(s)\right)$  |  $\tilde{X}(s) = \tilde{L}(x(t))$
$1$  |  $1/s$
$\delta(t)$  |  $1$
$\sin\omega t$  |  $\omega/\left(s^2+\omega^2\right)$
$\sinh\omega t$  |  $\omega/\left(s^2-\omega^2\right)$
$\cos\omega t$  |  $s/\left(s^2+\omega^2\right)$
$\cosh\omega t$  |  $s/\left(s^2-\omega^2\right)$
$e^{\omega t}$  |  $1/(s-\omega)$
$ae^{\omega t}$  |  $a/(s-\omega)$
$e^{bt}\sin at$  |  $a/\left((s-b)^2+a^2\right)$
$e^{bt}\cos at$  |  $(s-b)/\left((s-b)^2+a^2\right)$
$e^{-\zeta\omega_n t}\dfrac{\sin\omega_d t}{\omega_d}$  |  $1/\left(s^2+2\zeta\omega_n s+\omega_n^2\right)$
$\int_0^t f(\tau)h(t-\tau)\,d\tau$  |  $F(s)H(s)$

where $\delta(t)$ is the unit impulse at $t = 0$, and $\omega_d = \omega_n\sqrt{1-\zeta^2}$.
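Any of these pairs can be spot-checked by evaluating the defining Laplace integral numerically for a real $s > 0$. A minimal sketch (the helper names are ours), applied to the damped-sine pair derived above:

```python
import math

def laplace_numeric(f, s, t_max=40.0, n=200000):
    # Midpoint-rule approximation of the Laplace integral for real s > 0;
    # t_max must be large enough that the integrand has decayed to ~0
    dt = t_max / n
    return sum(f((k + 0.5) * dt) * math.exp(-s * (k + 0.5) * dt) for k in range(n)) * dt

def damped_sine_pair(s, wn, z):
    # Right-hand side of the derived pair: 1 / (s**2 + 2*z*wn*s + wn**2)
    return 1.0 / (s**2 + 2.0 * z * wn * s + wn**2)
```

Truncating the upper limit is legitimate here because the damped sine and the $e^{-st}$ kernel both decay exponentially.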


Appendix 3.3 Integration

To solve the integral $\int e^{bt}\sin(at)\,dt$ we begin by integrating by parts (see Appendix 3.1). Let $u = e^{bt}$ and $dv = \sin(at)\,dt$; then $du = be^{bt}\,dt$ and $v = -\dfrac{\cos at}{a}$, which yields
$$\int e^{bt}\sin(at)\,dt = -e^{bt}\frac{\cos at}{a} + \frac{b}{a}\int e^{bt}\cos(at)\,dt$$
where we have not included the integration constant because when applying this result we will have integration limits. Integrating by parts again, we let $u = e^{bt}$ and $dv = \cos(at)\,dt$; then $du = be^{bt}\,dt$ and $v = \dfrac{\sin at}{a}$, which produces
$$\int e^{bt}\sin(at)\,dt = -e^{bt}\frac{\cos at}{a} + \frac{b}{a}\left(e^{bt}\frac{\sin at}{a} - \frac{b}{a}\int e^{bt}\sin(at)\,dt\right) = -e^{bt}\frac{\cos at}{a} + e^{bt}\frac{b\sin at}{a^2} - \frac{b^2}{a^2}\int e^{bt}\sin(at)\,dt$$
Collecting the integrals on the left-hand side produces
$$\left(1+\frac{b^2}{a^2}\right)\int e^{bt}\sin(at)\,dt = -e^{bt}\frac{\cos at}{a} + e^{bt}\frac{b\sin at}{a^2}$$
$$\int e^{bt}\sin(at)\,dt = \frac{a^2}{a^2+b^2}\left(-e^{bt}\frac{\cos at}{a} + e^{bt}\frac{b\sin at}{a^2}\right)$$
By simplifying the right-hand side we obtain the sought-after result
$$\int e^{bt}\sin(at)\,dt = \frac{e^{bt}}{a^2+b^2}\left(b\sin at - a\cos at\right)$$
Following the same steps for the integral involving a cosine term produces
$$\int e^{bt}\cos(at)\,dt = \frac{e^{bt}}{a^2+b^2}\left(a\sin at + b\cos at\right)$$
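Both antiderivatives are easy to verify against numerical quadrature over a finite interval. A minimal sketch (function names are ours):

```python
import math

def exp_sine_antideriv(t, a, b):
    # Antiderivative of e^(b*t) * sin(a*t) from the result above
    return math.exp(b * t) * (b * math.sin(a * t) - a * math.cos(a * t)) / (a**2 + b**2)

def exp_cosine_antideriv(t, a, b):
    # Antiderivative of e^(b*t) * cos(a*t)
    return math.exp(b * t) * (a * math.sin(a * t) + b * math.cos(a * t)) / (a**2 + b**2)
```

The difference of the antiderivative at the endpoints matches a midpoint-rule integral to high accuracy.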


References
Crowell, R.H., Slesnick, W.E., 1968. Calculus with Analytic Geometry. W. W. Norton & Company Inc., New York, New York.
Hurty, W.C., Rubinstein, M.F., 1964. Dynamics of Structures. Prentice-Hall, Inc., Englewood Cliffs, New Jersey.
Sokolnikoff, I.S., Redheffer, R.M., 1958. Mathematics of Physics and Modern Engineering. McGraw-Hill Book Company, New York, New York.

CHAPTER 4 Damping

Structural Dynamics. https://doi.org/10.1016/B978-0-12-821614-9.00004-5 Copyright © 2020 Elsevier Inc. All rights reserved.

4. Introduction
Analytical derivation of damping is extremely difficult and in most cases not possible. This leaves us with establishing the damping properties of structures by test. If testing is not feasible because the system has not been built yet, one must use estimated values based on experience with similar systems until data become available. In previous chapters, we concentrated on the viscous damping model because it matches empirical data the best for a broad range of systems. Accordingly, we will present several procedures for deriving viscous damping from empirical data. This will be followed by a discussion of equivalent viscous damping and hysteresis caused by damping. Because we will eventually want to deal with multi-degree-of-freedom systems, it will be beneficial to establish the critical damping ratio, $\zeta$, instead of the proportionality constant, $c$. We will first derive $\zeta$ from the coincident component of response, then from the half-power points of the total response, and finally from the free-decay response. In addition, we will discuss structural damping and complex stiffness.

4.1 Viscous damping from coincident component of response
In Chapter 2, we introduced the concept of coincident and quadrature components of response to harmonic excitation. We showed that the steady-state acceleration (or displacement) coincident and quadrature responses were the components of the total response that were collinear and at 90 degrees to the harmonic excitation force, respectively. In Chapter 3, we introduced the concept of transfer functions and showed that they are complex quantities, with the real part corresponding to the coincident component of response and the imaginary part to the quadrature component. Furthermore, it was shown that these components were identical whether they were


derived with harmonic excitation or via the Fourier transform as frequency response functions relative to broadband random excitation. In this section, we will show how the critical damping ratio, $\zeta$, can be computed from the coincident component of response.
Fig. 4.1-1 shows the acceleration coincident and quadrature components of response of a single-degree-of-freedom system (see Chapter 2 for derivation). An item to note is the pair of peaks in the coincident component, $Co_{\ddot{x}}$, that correspond to $\lambda_1$ and $\lambda_2$. We will show that the critical damping ratio, $\zeta$, can be derived with only knowledge of the two frequencies at which these peaks occur.
We begin by establishing the frequencies that correspond to the two peaks of the coincident component of response, which are the extrema of the function $Co_{\ddot{x}}(\lambda)$. We can accomplish this by differentiating the coincident component with respect to $\lambda$, setting the result equal to zero, and solving for the corresponding values of $\lambda$. Differentiating yields
$$\frac{\partial}{\partial\lambda}Co_{\ddot{x}}(\lambda) = \frac{\partial}{\partial\lambda}\left(-\lambda^2\frac{1-\lambda^2}{\left(1-\lambda^2\right)^2+(2\zeta\lambda)^2}\right) = 0$$
$$\frac{2\lambda-4\lambda^3}{\left(1-\lambda^2\right)^2+(2\zeta\lambda)^2} + \frac{\left(\lambda^4-\lambda^2\right)\left(4\lambda^3+8\zeta^2\lambda-4\lambda\right)}{\left[\left(1-\lambda^2\right)^2+(2\zeta\lambda)^2\right]^2} = 0 \tag{4.1-1}$$

FIGURE 4.1-1 Quadrature (solid line) and coincident (dashed line) components of acceleration response of a single-degree-of-freedom system with z ¼ 0:02 and li ¼ ui =un .


By multiplying the second equation in (4.1-1) by the denominator of the second term we obtain
$$\left(2\lambda-4\lambda^3\right)\left(1-\lambda^2\right)^2 + \left(2\lambda-4\lambda^3\right)(2\zeta\lambda)^2 + \left(\lambda^4-\lambda^2\right)\left(4\lambda^3-4\lambda+8\zeta^2\lambda\right) = 0$$
$$\left(2-8\zeta^2\right)\lambda^4 - 4\lambda^2 + 2 = 0$$
$$\left(1-4\zeta^2\right)\beta^2 - 2\beta + 1 = 0 \tag{4.1-2}$$
The last equation in (4.1-2), where we substituted $\lambda^2 = \beta$, is a second-order polynomial in $\beta$ and, therefore, has two roots that can be obtained with the quadratic formula
$$\beta_{1,2} = \frac{2 \mp \sqrt{4 - 4\left(1-4\zeta^2\right)(1)}}{2\left(1-4\zeta^2\right)} = \frac{1 \mp 2\zeta}{1-4\zeta^2} \tag{4.1-3}$$
Since $\lambda_i = \omega_i/\omega_n$, we obtain
$$\lambda_1^2 = \left(\frac{\omega_1}{\omega_n}\right)^2 = \beta_1 = \frac{1-2\zeta}{(1-2\zeta)(1+2\zeta)} = \frac{1}{1+2\zeta}$$
$$\lambda_2^2 = \left(\frac{\omega_2}{\omega_n}\right)^2 = \beta_2 = \frac{1+2\zeta}{(1-2\zeta)(1+2\zeta)} = \frac{1}{1-2\zeta} \tag{4.1-4}$$
Subtracting the first equation in (4.1-4) from the second yields
$$\frac{\omega_2^2 - \omega_1^2}{\omega_n^2} = \frac{4\zeta}{1-4\zeta^2} \tag{4.1-5}$$
Note that Eq. (4.1-5) defines $\zeta$ in terms of circular frequencies that are easily obtained from a coincident response graph, such as in Fig. 4.1-1; recall that $\omega_1$ and $\omega_2$ are the frequencies associated with the coincident component peaks, and $\omega_n$ is the frequency that coincides with the peak in the quadrature response plot, which will be the same as the frequency at which the coincident response is zero between $\omega_1$ and $\omega_2$. For lightly damped systems, $\zeta^2$ will be a small number and, thus, Eq. (4.1-5) simplifies to
$$\zeta \approx \frac{1}{4}\frac{\omega_2^2 - \omega_1^2}{\omega_n^2} \tag{4.1-6}$$


Furthermore, for lightly damped systems it is a good assumption that $\omega_n = \frac{1}{2}(\omega_1 + \omega_2)$, that is, $\omega_n$ is the average of the two frequencies associated with the coincident component peaks. Therefore, we obtain
$$\zeta \approx \frac{\omega_2 - \omega_1}{\omega_2 + \omega_1} \tag{4.1-7}$$
or
$$\zeta \approx \frac{1}{2}\frac{\omega_2 - \omega_1}{\omega_n} \tag{4.1-8}$$

4.2 Damping from half-power points of total response
In this section, we will derive the critical damping ratio, $\zeta$, from the frequencies associated with the half-power points of the total response. We will also show that these frequencies are nearly the same as the frequencies that correspond to the peaks in the coincident component of response, and that the average of the coincident component of response frequencies corresponds to the half-power points. Fig. 4.2-1 shows the total response (solid line) as well as the quadrature (short dashes) and coincident (short/long dashes) components. The half-power point frequencies are labeled as $\lambda_1$ and $\lambda_2$, and they correspond to the values, on either side of the natural

FIGURE 4.2-1 Quadrature (short dash line), coincident (long/short dashed line), and total (solid line) components of the acceleration response of a single-degree-offreedom system with z ¼ 0:02 and li ¼ ui =un .


frequency, of the total response curve that are $1/\sqrt{2}$, or 0.707, of the peak response value.
We begin the derivation by seeking the fraction, $a_i$, of the peak total response that corresponds to the frequencies associated with each of the peaks in the coincident component of response, which we have labeled as $\lambda_1$ and $\lambda_2$. We will start with $\lambda_1$; and note that we have introduced functional notation for the quadrature and coincident components to explicitly indicate that they are both functions of $\lambda$, i.e.,
$$a_1 Qd_{\ddot{x}}(\lambda_n) = \sqrt{\left\{Qd_{\ddot{x}}(\lambda_1)\right\}^2 + \left\{Co_{\ddot{x}}(\lambda_1)\right\}^2}, \qquad a_1^2\left\{Qd_{\ddot{x}}(\lambda_n)\right\}^2 = \left\{Qd_{\ddot{x}}(\lambda_1)\right\}^2 + \left\{Co_{\ddot{x}}(\lambda_1)\right\}^2 \tag{4.2-1}$$
For lightly damped systems, the peak total response can be computed with $\lambda = \lambda_n = 1$, and the coincident component of response will, therefore, be zero. The left-hand side of Eq. (4.2-1), which is the peak of the total response, only involves the quadrature component evaluated at $\lambda_n = 1$. The right-hand side is the total response at $\lambda_1$. Substituting from Chapter 2,
$$a_1^2\frac{1}{4\zeta^2} = \left\{\lambda_1^2\frac{2\zeta\lambda_1}{\left(1-\lambda_1^2\right)^2+(2\zeta\lambda_1)^2}\right\}^2 + \left\{\lambda_1^2\frac{1-\lambda_1^2}{\left(1-\lambda_1^2\right)^2+(2\zeta\lambda_1)^2}\right\}^2 \tag{4.2-2}$$
Performing the indicated algebraic operations yields
$$a_1^2\frac{1}{4\zeta^2} = \lambda_1^4\frac{1}{\left(1-\lambda_1^2\right)^2+(2\zeta\lambda_1)^2} \tag{4.2-3}$$
Next, we substitute for $\lambda_1^2$ the value from Eq. (4.1-4) that corresponds to the first peak in the coincident component of response,
$$a_1^2\frac{1}{4\zeta^2} = \frac{\left(\dfrac{1}{1+2\zeta}\right)^2}{\left(1-\dfrac{1}{1+2\zeta}\right)^2 + \dfrac{4\zeta^2}{1+2\zeta}} = \frac{1}{4\zeta^2 + 4\zeta^2(1+2\zeta)} \tag{4.2-4}$$


Solving for $a_1$ we obtain
$$a_1 = \frac{1}{\sqrt{2}}\sqrt{\frac{1}{1+\zeta}} \tag{4.2-5}$$
Following the same procedure for $\lambda_2$, we get
$$a_2 = \frac{1}{\sqrt{2}}\sqrt{\frac{1}{1-\zeta}} \tag{4.2-6}$$
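The fractions $a_1$ and $a_2$ can be verified directly from the normalized total acceleration response. A minimal sketch (our own helper name), evaluating the total response at the coincident-peak frequency ratios of Eq. (4.1-4):

```python
import math

def accel_modulus(lam, z):
    # Normalized total acceleration response (quadrature and coincident combined)
    return lam**2 / math.sqrt((1.0 - lam**2)**2 + (2.0 * z * lam)**2)
```

Evaluating at $\lambda_1$ and $\lambda_2$ and dividing by the peak value $1/(2\zeta)$ reproduces the closed forms above, and their average is close to $1/\sqrt{2}$ for light damping.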

$a_1$ and $a_2$ are the fractions of the peak response at the frequencies where the coincident component of response peaks. For lightly damped systems, $a_1 \approx a_2 \approx \dfrac{1}{\sqrt{2}}$; however, it is worth noting that $a_2$ will be slightly greater than $a_1$. Consider, then, using the average of the two, i.e.,
$$a = \frac{1}{2}(a_1 + a_2) = \frac{1}{2\sqrt{2}}\left(\sqrt{\frac{1}{1+\zeta}} + \sqrt{\frac{1}{1-\zeta}}\right) \tag{4.2-7}$$
Taking the Taylor series expansion (see Appendix 4.1) of the terms in the parentheses, and noting that for lightly damped systems we can neglect the higher-order terms associated with $\zeta$, we obtain
$$a = \frac{1}{\sqrt{2}}\left(1 + \frac{3}{8}\zeta^2 + \frac{35}{128}\zeta^4 + \cdots\right) \approx \frac{1}{\sqrt{2}} \tag{4.2-8}$$
Therefore, given a total response function we can take the frequencies corresponding to $1/\sqrt{2}$ times the peak response, and use these frequencies in either Eqs. (4.1-6), (4.1-7), or (4.1-8) to compute the critical damping ratio, $\zeta$. In Volume II, we will show how we can use the above derivations to establish damping for large, complex, multi-degree-of-freedom systems, such as buildings, airplanes, launch vehicles, and satellites.

4.3 Logarithmic decrement
As noted in Chapter 2, once oscillation has been started with initial conditions, and if there is no external excitation, the oscillation will decay (see Fig. 2.3-2). Also, in a system that is being driven by an external force that goes to zero, the oscillations after the force stops will also decay (see Fig. 2.5-10). Since the rate of decay for a system with viscous damping


will be a function of the critical damping ratio, $\zeta$, we should be able to extract from the decaying time history the damping that is causing the reduction in vibration amplitude. In Chapter 2, we derived the unforced response of a viscously damped single-degree-of-freedom system, i.e.,
$$x(t) = e^{-\zeta\omega_n t}\left(\tilde{A}\cos\omega_d t + \tilde{B}\sin\omega_d t\right) = e^{-\zeta\omega_n t}X\sin(\omega_d t + \theta) \tag{4.3-1}$$
Taking the ratio of two consecutive response values one cycle apart gives
$$\frac{x(t)}{x(t+T_d)} = \frac{e^{-\zeta\omega_n t}X\sin(\omega_d t + \theta)}{e^{-\zeta\omega_n(t+T_d)}X\sin(\omega_d(t+T_d) + \theta)} \tag{4.3-2}$$
Since $T_d = 2\pi/\omega_d$, $\sin(\omega_d(t+T_d)+\theta) = \sin(\omega_d t + \theta)$, and Eq. (4.3-2) reduces to
$$\frac{x(t)}{x(t+T_d)} = \frac{e^{-\zeta\omega_n t}}{e^{-\zeta\omega_n(t+T_d)}} = e^{-\zeta\omega_n t}e^{\zeta\omega_n(t+T_d)} = e^{\zeta\omega_n T_d} \tag{4.3-3}$$
Starting with Eq. (4.3-3), define the logarithmic decrement, $\delta$, as the natural log of the ratio of the magnitudes of two successive cycles, i.e.,
$$\delta = \ln\left(\frac{x(t)}{x(t+T_d)}\right) = \ln\left(e^{\zeta\omega_n T_d}\right) = \zeta\omega_n T_d \tag{4.3-4}$$
Since $T_d = 2\pi/\omega_d = 2\pi/\left(\omega_n\sqrt{1-\zeta^2}\right)$, Eq. (4.3-4) can be written as
$$\delta = \frac{2\pi\zeta}{\sqrt{1-\zeta^2}} \tag{4.3-5}$$
For lightly damped systems $\sqrt{1-\zeta^2} \approx 1$, and we obtain
$$\zeta = \frac{\delta}{2\pi} \tag{4.3-6}$$
It should be noted that we would get equivalent results whether we use the displacement or acceleration response time histories. This is important, since the most common method for obtaining vibration response data is by measuring acceleration.


4.3.1 Damping from nonsequential cycles
For lightly damped systems, the reduction in magnitude of oscillation from one cycle to the next can be relatively small, and if the data contain noise, the damping estimate could have significant error. Therefore, it would be beneficial to be able to use the logarithmic decrement method with data points that are several cycles apart. We can accomplish this by noting that
$$\frac{x(t)}{x(t+T_d)} = \frac{x(t+T_d)}{x(t+2T_d)} = \frac{x(t+2T_d)}{x(t+3T_d)} = \cdots = \frac{x(t+(n-1)T_d)}{x(t+nT_d)} = e^{\delta} \tag{4.3-7}$$
Therefore,
$$\frac{x(t)}{x(t+nT_d)} = \left(\frac{x(t)}{x(t+T_d)}\right)\left(\frac{x(t+T_d)}{x(t+2T_d)}\right)\left(\frac{x(t+2T_d)}{x(t+3T_d)}\right)\cdots\left(\frac{x(t+(n-1)T_d)}{x(t+nT_d)}\right) = \left(e^{\delta}\right)\left(e^{\delta}\right)\left(e^{\delta}\right)\cdots\left(e^{\delta}\right) = \left(e^{\delta}\right)^n = e^{n\delta} \tag{4.3-8}$$
Taking the natural log of both sides gives
$$\ln\left(\frac{x(t)}{x(t+nT_d)}\right) = \ln\left(e^{n\delta}\right) = n\delta \tag{4.3-9}$$
Substituting Eq. (4.3-6) produces the desired expression,
$$\zeta = \left(\frac{1}{2\pi}\right)\left(\frac{1}{n}\right)\ln\left(\frac{x(t)}{x(t+nT_d)}\right) \tag{4.3-10}$$
Fig. 4.3-1 shows the displacement response of a single-degree-of-freedom system whose motion was initiated with an initial displacement of 2 and no initial velocity. The system has a critical damping ratio of 0.05, and $\omega_n = 2\pi$. Also shown are the magnitudes that correspond to the peaks at cycle number one and cycle number five; $n$ will, therefore, be equal to 4. Substituting these values into Eq. (4.3-10) yields
$$\zeta = \left(\frac{1}{2\pi}\right)\left(\frac{1}{4}\right)\ln\left(\frac{1.4602}{0.4149}\right) = 0.05 \tag{4.3-11}$$


FIGURE 4.3-1 Free-decay response of single-degree-of-freedom system with $\zeta = 0.05$ and $\omega_n = 2\pi$; motion was initiated with an initial displacement of 2 and no initial velocity.

The displacement response shown in Fig. 4.3-1 was computed and, therefore, does not contain errors expected from real-world data. A way to increase the accuracy of the damping estimated from noisy data is to compute the damping numerous times using different peak pairs, and then take the average. If the errors are random, averaging will reduce their impact. Another method is to graph $\ln\left(\dfrac{x(t)}{x(t+nT_d)}\right)$ against $n$, for various values of $n$, and then draw, or optimally fit (see next section), a best-fit straight line through the plotted points. The slope of the line, divided by $2\pi$, will provide an improved estimate of $\zeta$. Fig. 4.3-2 shows the acceleration response that corresponds to the displacement time history shown in Fig. 4.3-1. The short horizontal lines correspond to the acceleration amplitude values that one might extract if there were noise in the data. Fig. 4.3-3 shows the natural log of the ratios of the first data point divided by subsequent data points plotted against $n$, where $n$ is the cycle number from the first data point. If the data were "noise" free, then all points would fall along a straight line. However, if the data contain random errors, some points will be above and others below the line that represents the true damping of the system. The straight line shown in the figure was derived by an optimum fit of the data, which will be described in the next section. It should be noted that the average damping obtained from the data points shown in Fig. 4.3-3 is 0.049, as compared to the exact value of 0.05.


FIGURE 4.3-2 Acceleration time history corresponding to the displacement response shown in Fig. 4.3-1. The short horizontal lines correspond to the acceleration amplitude values that one might extract if there were noise in the data.

FIGURE 4.3-3 Plot of the natural log of the ratios of the first data point shown in Fig. 4.3-2 divided by subsequent data points plotted against $n$, where $n$ is the cycle number from the first data point.

4.3.2 Damping from least squares fit of data

Real response data will contain “noise,” and the calculation of damping from these data should account for this source of error. As indicated in the previous section, if the data were error-free, the extracted data points when plotted as in Fig. 4.3-3 would fall on a straight line. As can be seen in the figure this is not the case. Therefore, when fitting the line to the data, we must either eyeball it or use a more rigorous approach that minimizes the difference between the line and the data points; for this we can


use the method of least squares developed by Gauss (Sokolnikoff and Redheffer, 1958). The equation for a straight line that goes through the origin is y ¼ bx, where b is the slope of the line. Let Dyi ¼ bxi  yi be the deviation between the best-fit line and the data for the ith data point. Next, define the error function, ε, as the sum of the squares of the deviations between the optimum line and the six data points in the set: 6 X ðDyi Þ2 ε¼ i¼1

¼

6 X

ðbxi  yi Þ2 ¼

(4.3-12)

6  X

i¼1

b2 x2i  2bxi yi þ y2i



i¼1

What we are looking for is that value of the slope, b, that minimizes the error function, ε. To achieve this we must compute the value of b that dε ¼ 0, i.e., satisfies db 6   dε d X ¼ b2 x2i  2bxi yi þ y2i db db i¼1 (4.3-13) ¼ 2b

6 X

x2i  2

i¼1

6 X

xi yi ¼ 0

i¼1

Solving for b yields the desired expression, 6 P xi yi i¼1 b¼ 6 P 2 xi

(4.3-14)

i¼1

Substituting the values of n ¼ 1; 2; 3; .; 6 for xi , and the six data values of 0.2545, 0.6358, 0.8107, 1.4056, 1.30, and 2.1063 for yi gives 6 P xi yi 28:7184 i¼1 ¼ 0:3156 (4.3-15) ¼ b¼ 6 91 P 2 xi i¼1

Dividing b by 2π results in the least squares estimate of the critical damping ratio, ζ = 0.0502.
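The least squares estimate above can be reproduced with a short script (a sketch; the six y values are the log-amplitude ratios read from Fig. 4.3-3):

```python
import math

# Cycle numbers n = 1..6 (the x_i) and the measured log-amplitude
# ratios from Fig. 4.3-3 (the y_i).
x = [1, 2, 3, 4, 5, 6]
y = [0.2545, 0.6358, 0.8107, 1.4056, 1.30, 2.1063]

# Least squares slope for a line through the origin, Eq. (4.3-14):
# b = sum(x_i * y_i) / sum(x_i^2)
b = sum(xi * yi for xi, yi in zip(x, y)) / sum(xi * xi for xi in x)

# For light damping the logarithmic decrement is 2*pi*zeta,
# so zeta = b / (2*pi).
zeta = b / (2 * math.pi)

print(round(b, 4), round(zeta, 4))  # 0.3156 0.0502
```

Any standard linear regression routine constrained to pass through the origin returns the same slope.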


CHAPTER 4 Damping

4.4 Work, strain energy, and kinetic energy

The work, W, done by a constant force, f, acting through displacement x is defined as W = fx (Resnick and Halliday, 1966). The work done by a force that varies as a function of displacement, acting from x_1 to x_2 along the direction of motion, is defined as

W = \int_{x_1}^{x_2} f(x)\,dx   (4.4-1)

Note that work is a scalar quantity and can be either positive or negative, and only the component of the net force that undergoes a displacement will do work. The integral in Eq. (4.4-1) computes the area under the force–displacement curve defined by f(x). We will address the damping force and the forces that change the momentum of a mass; however, we will first deal with the spring force. We will continue to assume that the spring is weightless such that there are no inertial forces due to the acceleration of the spring itself; recall that in our modeling of single-degree-of-freedom systems the mass of the system was concentrated at a single point that was connected to weightless springs. Letting the force exerted on the spring be in the direction of motion as the spring stretches, we obtain f(x) = f_s(x) = kx. Substituting into Eq. (4.4-1) and performing the indicated integration yields

W_s = \int_{x_1}^{x_2} kx\,dx = \frac{1}{2}kx^2\Big|_{x_1}^{x_2} = \frac{1}{2}k\left(x_2^2 - x_1^2\right)   (4.4-2)

If x_1 = 0 corresponds to the position where the spring is not deformed, then the work computed in Eq. (4.4-2) will be equal to the strain energy, U, that is stored in the spring by the work done by the force that deformed the spring from 0 to x_2. In this case, the strain energy imparted to the spring would simply be \frac{1}{2}kx^2, where x is the relative deformation between the ends of the spring. Next, we will compute the work done by a force that changes the momentum of a rigid mass, in other words, a force that causes a mass to accelerate according to Newton's Second Law of motion. Since we are dealing with single-degree-of-freedom systems in this chapter, we will assume


that the applied force acts in the direction of motion, x(t), and, therefore, f(x) = m\ddot{x}. Substituting into Eq. (4.4-1) and noting that \ddot{x} = \frac{d\dot{x}}{dt} = \frac{d\dot{x}}{dx}\frac{dx}{dt} = \frac{d\dot{x}}{dx}\dot{x}, we obtain

W_{\ddot{x}} = \int_{x_1}^{x_2} m\ddot{x}\,dx = \int_{x_1}^{x_2} m\dot{x}\,\frac{d\dot{x}}{dx}\,dx = \int_{\dot{x}_1}^{\dot{x}_2} m\dot{x}\,d\dot{x}   (4.4-3)

Note that the limits of integration were also changed to the velocities that the mass has at positions x_1 and x_2, since we are now integrating with respect to d\dot{x}. Performing the integration, we obtain

W_{\ddot{x}} = \int_{\dot{x}_1}^{\dot{x}_2} m\dot{x}\,d\dot{x} = \frac{1}{2}m\dot{x}^2\Big|_{\dot{x}_1}^{\dot{x}_2} = \frac{1}{2}m\left(\dot{x}_2^2 - \dot{x}_1^2\right)   (4.4-4)

The expression in Eq. (4.4-4) is the kinetic energy, T. If we assume the mass was initially at rest, then the kinetic energy imparted by the force(s) along the path of travel would be \frac{1}{2}m\dot{x}^2.

The Work–Energy theorem (Resnick and Halliday, 1966) states that the work done on a mass particle by the resultant force is equal to the change in the kinetic energy of the mass. Therefore, for our single-degree-of-freedom systems the resultant force would be the sum of the external, spring, and damping forces, and the work done by these forces must be equal to the change in kinetic energy of the mass. A single-degree-of-freedom system with no energy dissipation mechanism, and no external force acting on it, will neither increase nor decrease its energy as it vibrates. Therefore, the sum of the kinetic energy and the strain energy stored in the spring has to be constant at every instant of time as the system oscillates, i.e.,

T + U = \frac{1}{2}m\dot{x}^2 + \frac{1}{2}kx^2 = \text{constant}   (4.4-5)

Recall the solution for a single-degree-of-freedom system with no energy dissipation mechanism, no external forces, and whose motion was initiated by an initial displacement only, i.e., \dot{x}(0) = 0 (see Chapter 2),


x(t) = x(0)\cos\omega_n t, \qquad \dot{x}(t) = \frac{dx(t)}{dt} = -x(0)\,\omega_n\sin\omega_n t   (4.4-6)

We note from the equations in (4.4-6) that when the displacement is a maximum, the velocity will be zero, and vice versa. This is also understandable from the physics of the system. When the mass reaches its peak displacement, it will come to a stop in order to reverse its direction of motion. This stop corresponds to zero velocity and, hence, all the energy of the system is stored in the spring as strain energy. When the mass passes through its equilibrium point, its displacement will be zero, but its velocity will be at its maximum; hence, all the system's energy will be kinetic and equal to the peak strain energy. From the preceding discussion, for a system that has no energy dissipation mechanism (e.g., no damping) and no external forces adding energy to the system, the peak strain energy must be equal to the peak kinetic energy. The peak strain and kinetic energies are

T_{peak} = \frac{1}{2}m\big({-x(0)}\,\omega_n(\sin\omega_n t = 1)\big)^2 = \frac{1}{2}m\big({-x(0)}\,\omega_n(1)\big)^2
U_{peak} = \frac{1}{2}k\big(x(0)(\cos\omega_n t = 1)\big)^2 = \frac{1}{2}k\big(x(0)(1)\big)^2   (4.4-7)

Setting the two equal to each other, we obtain

T_{peak} = U_{peak} \quad\Rightarrow\quad \frac{1}{2}m\big({-x(0)}\,\omega_n(1)\big)^2 = \frac{1}{2}k\big(x(0)(1)\big)^2   (4.4-8)

and

\omega_n^2 = \frac{\tfrac{1}{2}kx^2}{\tfrac{1}{2}mx^2} = \frac{kx^2}{mx^2}   (4.4-9)

The quotient on the right-hand side of Eq. (4.4-9) is referred to as Rayleigh's quotient (Rayleigh, 1945). The value of Rayleigh's quotient will become apparent when we discuss multi-degree-of-freedom systems starting in Chapter 6. For a single-degree-of-freedom system we obtain the expected result, ω_n² = k/m.
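The energy balance in Eqs. (4.4-5) through (4.4-9) can be checked numerically; the sketch below assumes m = 1 and k = (2π)², values chosen only for illustration:

```python
import math

m = 1.0                 # mass (assumed example value)
k = (2 * math.pi) ** 2  # stiffness (assumed), so omega_n = 2*pi
wn = math.sqrt(k / m)   # Rayleigh's quotient for an SDOF reduces to k/m
x0 = 0.5                # initial displacement; released from rest

# Sample the total energy T + U over one period, Eq. (4.4-5)
energies = []
for i in range(100):
    t = i * (2 * math.pi / wn) / 100
    x = x0 * math.cos(wn * t)          # Eq. (4.4-6)
    v = -x0 * wn * math.sin(wn * t)
    energies.append(0.5 * m * v**2 + 0.5 * k * x**2)

# The constant total equals the peak strain energy (and peak kinetic energy)
U_peak = 0.5 * k * x0**2
assert all(abs(e - U_peak) < 1e-9 for e in energies)
```

The assertion passing at every sampled instant is exactly the statement that T + U is constant for undamped free vibration.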


4.5 Equivalent viscous damping

We begin by computing the work done by the viscous damping force used in our single-degree-of-freedom systems to model energy dissipation. Recall that the viscous damping force that acts on the mass is given by f_d(t) = −cẋ(t). Therefore, the force that does work on the damping mechanism is f(t) = −f_d(t) = cẋ(t). Substituting into Eq. (4.4-1), and integrating over one full cycle of oscillation, produces

W_d = \oint c\dot{x}(t)\,dx   (4.5-1)

Since dx = \frac{dx}{dt}\,dt = \dot{x}(t)\,dt, Eq. (4.5-1) can be written as

W_d = c\int_0^{2\pi/\omega} \dot{x}^2(t)\,dt   (4.5-2)

The steady-state solution for a single-degree-of-freedom system driven by a harmonic force, f_a sin(ωt), is (see Chapter 2)

x(t) = \frac{f_a}{m\omega_n^2}\sqrt{Co^2 + Qd^2}\,\sin(\omega t + \theta) = \frac{f_a}{k}\sqrt{Co^2 + Qd^2}\,\sin(\omega t + \theta) = X\sin(\omega t + \theta)   (4.5-3)

Recall that for steady-state response, the part of the solution associated with the initial conditions has decayed to a negligible level; hence, the response is given by the particular solution only. Differentiating the third equation in (4.5-3) with respect to time to obtain the velocity response, and substituting into Eq. (4.5-2), yields

W_d = cX^2\omega^2\int_0^{2\pi/\omega} \cos^2(\omega t + \theta)\,dt   (4.5-4)

Letting φ = ωt + θ results in dt = dφ/ω, and for the integration limits, when t = 0, φ = θ, and when t = 2π/ω, φ = 2π + θ. Substituting into Eq. (4.5-4) and performing the integration produces


W_d = cX^2\omega^2\,\frac{1}{\omega}\int_{\theta}^{2\pi+\theta} \cos^2(\phi)\,d\phi = cX^2\omega\left[\frac{\phi}{2} + \frac{\sin(2\phi)}{4}\right]_{\theta}^{2\pi+\theta} = cX^2\omega\left(\frac{2\pi+\theta}{2} + \frac{\sin(4\pi+2\theta)}{4} - \frac{\theta}{2} - \frac{\sin(2\theta)}{4}\right) = cX^2\omega\pi   (4.5-5)

Substituting c = 2\zeta m\omega_n (see Chapter 2) yields

W_d = 2\zeta m\omega_n\omega\pi X^2   (4.5-6)

For harmonic excitation at the resonant frequency we have ω = ω_n, and Eq. (4.5-6) reduces to

W_d = 2\zeta k\pi X^2   (4.5-7)

Finally, substituting X from Eq. (4.5-3), and recalling that at resonance \sqrt{Co^2 + Qd^2} = \frac{1}{2\zeta}, we obtain the sought-after solution,

W_d = 2\zeta k\pi\left(\frac{f_a}{k}\,\frac{1}{2\zeta}\right)^2 = \frac{f_a^2\pi}{2\zeta k}   (4.5-8)

The usefulness of this expression is that we can now compute the work done over one cycle of oscillation by another damping mechanism, equate the result to the expression in Eq. (4.5-8), and solve for an "equivalent" value of ζ. This then allows us to use the viscous damping model in the response calculations. Note that a number of assumptions were made in the derivation; therefore, equivalent viscous damping should be used with caution, and with an understanding of its assumptions and limitations. This will become more apparent when we discuss Coulomb friction and its equivalent viscous damping model in the next section. Another item of interest is the fact that the critical damping ratio is in the denominator, which results in an apparent contradiction: for example, if we reduce the damping by half, the work done by the damping force doubles. This is because in Eq. (4.5-2) the velocity term is squared, and


the damping force is proportional to the velocity. By reducing damping, the response of the system will increase proportionally for the case of resonant response. Hence, the work done by the viscous damping force will increase for a reduction in damping. Before leaving this section we will compute the work done by the external force used to derive the solution in Eq. (4.5-8) for the resonant case ω = ω_n, i.e.,

W_f = \oint f_a\sin(\omega_n t)\,dx = \int_0^{2\pi/\omega_n} f_a\sin(\omega_n t)\,\dot{x}(t)\,dt   (4.5-9)

Differentiating the displacement solution in Eq. (4.5-3) with respect to time, and noting that the excitation is at the natural frequency, ω_n, and therefore θ = −π/2, we obtain

W_f = f_a X\omega_n\int_0^{2\pi/\omega_n} \sin(\omega_n t)\cos(\omega_n t - \pi/2)\,dt = f_a X\omega_n\int_0^{2\pi/\omega_n} \sin^2(\omega_n t)\,dt   (4.5-10)

Letting s = ω_n t results in dt = ds/ω_n; and when t = 0, s = 0, and when t = 2π/ω_n, s = 2π. Substituting and performing the indicated integration yields

W_f = f_a X\int_0^{2\pi} \sin^2(s)\,ds = f_a X\left[\frac{s}{2} - \frac{\sin(2s)}{4}\right]_0^{2\pi} = f_a X\pi   (4.5-11)

Finally, substituting for X while noting that ω = ω_n, we obtain

W_f = \frac{f_a^2\pi}{2\zeta k}   (4.5-12)
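The equality of Eqs. (4.5-8) and (4.5-12) can be verified by numerically integrating both work expressions over one steady-state cycle at resonance (a sketch; the values m = 1, ζ = 0.05, and f_a = 5 are assumed for illustration):

```python
import math

m, zeta, fa = 1.0, 0.05, 5.0      # assumed example values
wn = 2 * math.pi
k = m * wn**2
c = 2 * zeta * m * wn
X = fa / (2 * zeta * k)           # resonant amplitude from Eq. (4.5-3)

# Midpoint-rule integration over one period; at resonance the response
# lags the force by 90 degrees, so xdot(t) = X*wn*sin(wn*t).
N = 20000
dt = (2 * math.pi / wn) / N
Wd = Wf = 0.0
for i in range(N):
    t = (i + 0.5) * dt
    v = X * wn * math.sin(wn * t)
    Wd += c * v * v * dt                      # damper work, Eq. (4.5-2)
    Wf += fa * math.sin(wn * t) * v * dt      # external-force work, Eq. (4.5-9)

W_exact = math.pi * fa**2 / (2 * zeta * k)    # Eqs. (4.5-8) and (4.5-12)
assert abs(Wd - W_exact) < 1e-8 * W_exact
assert abs(Wf - W_exact) < 1e-8 * W_exact
```

Both integrals converge to the same value, confirming that at resonance the energy fed in by the external force equals the energy dissipated by the damper.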


Comparing Eq. (4.5-12) to (4.5-8) we note that they are identical. This means that the work done by the damping force is equal to the work done by the external force, as it should be for this problem. In Chapter 2, Section 2.5, Fig. 2.5-7 shows the vector relationship between the inertial, damping, spring, and external forces. The middle diagram shows the relationship for the condition where the excitation frequency, ω, is equal to the circular natural frequency, ω_n. Note that the damping force and the external force are equal in magnitude, and both are at 90 degrees to the inertial and spring forces. Since we are at steady state, the energy put into the system by the external force has to be equal to the energy dissipated by the damping force; therefore, the work done by these two forces must be equal since they both undergo the same displacement. This then implies that the work done by the inertial force will be equal to the work done by the spring force for steady-state harmonic excitation at the natural frequency of a system.

4.6 Equivalent viscous damping and Coulomb damping

In the previous section, we computed the work done by the viscous damping force during one cycle of steady-state oscillation. We also compared this to the work done by the external force when the system was driven harmonically at its natural frequency. In this section, we will determine whether we can use the concept of equivalent viscous damping for Coulomb friction energy dissipation, and establish applicable restrictions. The force that Coulomb damping (friction) applies to a mass is f_f = ∓μf_N; the friction force acting on the mass is always opposite to the direction of motion. The sign of the force acting on the friction mechanism is, therefore, the opposite of this. Accordingly, the work performed on the Coulomb damping mechanism over one cycle is

W_f = \int_0^{X} \mu f_N\,dx + \int_X^{0} (-\mu f_N)\,dx + \int_0^{-X} (-\mu f_N)\,dx + \int_{-X}^{0} \mu f_N\,dx   (4.6-1)

Note that the integration is divided into four segments. The first starts at zero and since the mass moves in the positive displacement direction, the friction force will be negative, and the friction mechanism force will have the opposite sign. Once the mass reaches the peak positive displacement after one-quarter of a cycle, it will reverse direction and move in the negative displacement direction and, hence, the friction force will


oppose this and be positive until the mass reaches zero displacement; therefore, the mechanism force doing the work will be negative. From here it will continue until it reaches the peak negative displacement, and the friction force and the friction mechanism force doing the work will have the same signs as in the previous quarter cycle. Finally, the mass will be pulled back to zero displacement and the friction force will be negative since the mass will be moving in the positive direction, and the friction mechanism force doing the work will be positive. Performing the indicated integrations yields

W_f = \mu f_N\left\{ x\Big|_0^X - x\Big|_X^0 - x\Big|_0^{-X} + x\Big|_{-X}^0 \right\} = 4\mu f_N X   (4.6-2)

Setting this equal to the work done by viscous damping (see Eq. 4.5-6), where we denote the equivalent critical damping ratio by ζ_f, gives

W_d = W_f \quad\Rightarrow\quad 2\zeta_f m\omega_n\omega\pi X^2 = 4\mu f_N X \quad\Rightarrow\quad \zeta_f = \frac{2\mu f_N}{m\omega_n\omega\pi X}   (4.6-3)
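Eq. (4.6-3) is easy to sketch in code; note that, unlike viscous damping, the equivalent ratio depends on the response amplitude X (all numerical values below are assumed for illustration):

```python
import math

m, wn = 1.0, 2 * math.pi   # mass and natural frequency (assumed)
mu_fN = 0.5                # Coulomb friction force mu*f_N (assumed)
w = wn                     # excitation at resonance

def zeta_f(X):
    """Equivalent viscous critical damping ratio, Eq. (4.6-3)."""
    return 2 * mu_fN / (m * wn * w * math.pi * X)

# Energy balance check: viscous work (Eq. 4.5-6) with zeta_f equals
# the Coulomb friction work 4*mu_fN*X (Eq. 4.6-2).
X = 1.2
Wd = 2 * zeta_f(X) * m * wn * w * math.pi * X**2
Wf = 4 * mu_fN * X
assert abs(Wd - Wf) < 1e-12

# Doubling the amplitude halves the equivalent damping ratio.
assert abs(zeta_f(2 * X) - zeta_f(X) / 2) < 1e-12
```

The amplitude dependence is the warning flag: the "equivalent" viscous model is only valid for the amplitude at which it was computed.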

The steady-state response of a viscously damped single-degree-of-freedom system to harmonic excitation was derived in Chapter 2 and is

x(t) = \frac{f_a}{k}\sqrt{Co^2 + Qd^2}\,\cos(\omega t + \theta)   (4.6-4)

The peak magnitude of response occurs when the cosine term is equal to one. We proceed by replacing ζ with ζ_f and noting that

\sqrt{Co^2 + Qd^2} = \sqrt{\frac{\left(1-\lambda^2\right)^2 + \left(2\zeta_f\lambda\right)^2}{\left[\left(1-\lambda^2\right)^2 + \left(2\zeta_f\lambda\right)^2\right]^2}} = \frac{1}{\sqrt{\left(1-\lambda^2\right)^2 + \left(2\zeta_f\lambda\right)^2}}   (4.6-5)


Therefore,

X = \frac{f_a}{k}\,\frac{1}{\sqrt{\left(1-\lambda^2\right)^2 + \left(2\zeta_f\lambda\right)^2}}   (4.6-6)

Squaring both sides and then multiplying both sides by the term in the radical produces

X^2\left(1-\lambda^2\right)^2 + X^2\left(2\zeta_f\lambda\right)^2 = \left(\frac{f_a}{k}\right)^2   (4.6-7)

Substituting Eq. (4.6-3) yields

X^2\left(1-\lambda^2\right)^2 + X^2\left(2\,\frac{2\mu f_N}{m\omega_n\omega\pi X}\,\lambda\right)^2 = \left(\frac{f_a}{k}\right)^2
\quad\Rightarrow\quad
X^2\left(1-\lambda^2\right)^2 + \left(\frac{4\mu f_N}{m\omega_n\omega\pi}\,\lambda\right)^2 = \left(\frac{f_a}{k}\right)^2   (4.6-8)

Noting that λ = ω/ω_n and solving for X, we obtain the sought-after solution

X = \frac{f_a}{k}\,\frac{\sqrt{1 - \left(\frac{4\mu f_N}{\pi f_a}\right)^2}}{1-\lambda^2}   (4.6-9)

The most important item to note in Eq. (4.6-9) is that when ω = ω_n, which yields λ = 1, the amplitude is unbounded. This implies that as the frequency of excitation approaches the natural frequency and the amplitude of response increases, the energy dissipated by friction will be insufficient to keep the response bounded. Why is this? Coulomb friction is a constant force, irrespective of the amplitude of oscillation, whereas a viscous damping force would grow with increasing amplitude of response because it is a function of the system's velocity.

4.7 Equivalent viscous damping and fluid resistance

Structures that vibrate in fluids will dissipate energy due to the motion of the structure through the fluid. This energy dissipation is in addition to


the energy dissipated internally by deformation of the structure. Let ρ be the density of the fluid; then the dynamic pressure, Q(t), is defined as

Q(t) = \frac{1}{2}\rho\dot{x}^2(t)   (4.7-1)

Multiplying the dynamic pressure by the projection of the surface area, a, normal to the direction of motion produces the fluid force resisting the motion. Therefore, letting n = \frac{1}{2}a\rho, the resisting force will be

f_{fd}(t) = \mp n\dot{x}^2(t)   (4.7-2)

Note that in Eq. (4.7-2) the sign must be selected to always oppose the motion of the structure in the fluid. We start by computing the work done by the fluid resisting force over one cycle of oscillation. Since we have to account for the sign, it will be easier to compute the work over one-quarter cycle; because we are at steady state, the total work done over one full cycle will be four times as great. We will select the portion of the cycle where the mass moves from zero displacement to its peak positive excursion; hence, the resistive force will be negative,

W_f = 4\int f_{fd}(t)\,dx   (4.7-3)

Substituting Eq. (4.7-2) and recalling that dx = \frac{dx}{dt}\,dt = \dot{x}(t)\,dt, we obtain

W_f = -4n\int_0^{\pi/2\omega} \dot{x}^3(t)\,dt   (4.7-4)

For steady-state harmonic vibration the displacement solution is given by Eq. (4.5-3). Differentiating with respect to time and substituting into Eq. (4.7-4) yields

W_f = -4nX^3\omega^3\int_0^{\pi/2\omega} \cos^3(\omega t + \theta)\,dt   (4.7-5)

We begin by letting ωt + θ = φ, which results in dt = dφ/ω, and the limits of integration become as shown below,

W_f = -4nX^3\omega^2\int_{\theta}^{\pi/2+\theta} \cos^3(\phi)\,d\phi   (4.7-6)


Performing the integration,

W_f = -4nX^3\omega^2\int_{\theta}^{\pi/2+\theta} \cos^2(\phi)\cos(\phi)\,d\phi = -4nX^3\omega^2\int_{\theta}^{\pi/2+\theta} \left(1-\sin^2(\phi)\right)\cos(\phi)\,d\phi = -4nX^3\omega^2\left\{ \sin(\phi)\Big|_{\theta}^{\pi/2+\theta} - \frac{1}{3}\sin^3(\phi)\Big|_{\theta}^{\pi/2+\theta} \right\}   (4.7-7)

Applying the integration limits produces

W_f = -4nX^3\omega^2\left[\cos(\theta) - \sin(\theta) - \frac{1}{3}\cos^3(\theta) + \frac{1}{3}\sin^3(\theta)\right]   (4.7-8)

For the case where the excitation is at the natural frequency, ω = ω_n and θ = −π/2, we obtain

W_f = -\frac{8}{3}nX^3\omega_n^2   (4.7-9)

Equating the magnitude of the work done during one cycle at resonance by viscous damping (see Eq. 4.5-7), where we replace ζ with ζ_fd, to the work done by the fluid, and then solving for the equivalent viscous critical damping ratio yields

\zeta_{fd} = \frac{4nX}{3\pi m}   (4.7-10)
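The quarter-cycle integral behind Eqs. (4.7-9) and (4.7-10) can be verified numerically (a sketch; n, X, and m are assumed example values):

```python
import math

n, X, m = 0.8, 0.6, 1.0     # fluid constant n = a*rho/2, amplitude, mass (assumed)
wn = 2 * math.pi

# |W_f| = 4*n * integral of xdot^3 over a quarter cycle at resonance,
# with xdot = X*wn*sin(wn*t); the closed form is (8/3)*n*X^3*wn^2.
N = 50000
dt = (math.pi / (2 * wn)) / N
Wf = 0.0
for i in range(N):
    t = (i + 0.5) * dt
    Wf += 4 * n * (X * wn * math.sin(wn * t)) ** 3 * dt

Wf_exact = (8.0 / 3.0) * n * X**3 * wn**2     # magnitude of Eq. (4.7-9)
assert abs(Wf - Wf_exact) < 1e-6 * Wf_exact

# Equivalent viscous ratio, Eq. (4.7-10): 2*zeta_fd*k*pi*X^2 recovers Wf
zeta_fd = 4 * n * X / (3 * math.pi * m)
k = m * wn**2
assert abs(2 * zeta_fd * k * math.pi * X**2 - Wf_exact) < 1e-9
```

As with Coulomb friction, ζ_fd depends on the amplitude X, so the equivalent ratio must be recomputed whenever the response level changes.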

4.8 Structural damping and complex stiffness

The concept of equivalent viscous damping will be used to introduce structural damping into the equations of motion of single-degree-of-freedom


systems. Structural damping is a mechanism that dissipates energy in proportion to the square of the displacement amplitude of vibration and, to first order, is independent of frequency. Hence, the work done by structural damping is

W_{sd} = \tilde{m}X^2   (4.8-1)

where \tilde{m} is a constant that needs to be established by test, or be based on historical data for similar materials and structural systems for which test data exist. Setting Eq. (4.8-1) equal to Eq. (4.5-6) and solving for the equivalent viscous critical damping ratio (note that we replaced ζ with ζ_s to denote that we are computing equivalent structural damping) produces

W_d = W_{sd} \quad\Rightarrow\quad 2k\zeta_s\pi X^2 = \tilde{m}X^2 \quad\Rightarrow\quad \zeta_s = \frac{\tilde{m}}{2k\pi}   (4.8-2)

Recall that the energy dissipated by viscous damping was computed with the assumption that the system was vibrating harmonically and at steady state, and the result used in Eq. (4.8-2) for W_d also carries the assumption that the system is being driven at its natural frequency (see Eq. 4.5-7). The equation of motion that governs this state is Eq. (2.5-2) in Chapter 2, once we set ω = ω_n in the external force term, i.e.,

\ddot{x}(t) + 2\zeta\omega_n\dot{x}(t) + \omega_n^2 x(t) = \frac{f_a}{m}\,e^{i\omega_n t}   (4.8-3)

Replacing ζ with ζ_s, and noting that at resonance ẋ(t) = iω_n x(t) for steady-state response, we obtain

\ddot{x}(t) + 2\zeta_s\omega_n\dot{x}(t) + \omega_n^2 x(t) = \frac{f_a}{m}\,e^{i\omega_n t}
\ddot{x}(t) + 2\,\frac{\tilde{m}}{2k\pi}\,\omega_n\,i\omega_n x(t) + \omega_n^2 x(t) = \frac{f_a}{m}\,e^{i\omega_n t}
\ddot{x}(t) + i\,\frac{\tilde{m}}{\pi m}\,x(t) + \omega_n^2 x(t) = \frac{f_a}{m}\,e^{i\omega_n t}
\ddot{x}(t) + \left(i\,\frac{\tilde{m}}{\pi m} + \omega_n^2\right)x(t) = \frac{f_a}{m}\,e^{i\omega_n t}   (4.8-4)


Multiplying the last equation in (4.8-4) by m, and letting \gamma = \frac{\tilde{m}}{\pi k}, we obtain

m\ddot{x}(t) + k(1 + i\gamma)x(t) = f_a e^{i\omega_n t}   (4.8-5)

where γ is referred to as the structural damping factor, and k(1 + iγ) is referred to as the complex stiffness. To compare the steady-state response at resonance of a system with viscous damping to one with structural damping, we need to derive the particular solution for Eq. (4.8-5). Assume the following solution:

x(t) = \psi e^{i\omega_n t} \quad\Rightarrow\quad \ddot{x}(t) = -\psi\omega_n^2 e^{i\omega_n t}   (4.8-6)

Substituting into Eq. (4.8-5) yields

-m\psi\omega_n^2 e^{i\omega_n t} + k(1 + i\gamma)\psi e^{i\omega_n t} = f_a e^{i\omega_n t} \quad\Rightarrow\quad \psi\left(-m\omega_n^2 + k(1 + i\gamma)\right) = f_a   (4.8-7)

Solving for ψ gives

\psi = \frac{f_a}{-m\omega_n^2 + k(1 + i\gamma)} = \frac{f_a}{i\gamma k}   (4.8-8)

Therefore, the peak amplitude at resonance of a single-degree-of-freedom system with structural damping is

|x_{peak}| = \frac{f_a}{\gamma k}   (4.8-9)

Likewise, the peak amplitude at resonance of a system with viscous damping is (see Eq. 4.5-3)

|x_{peak}| = \frac{f_a}{2\zeta k}   (4.8-10)
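Eqs. (4.8-9) and (4.8-10) can be compared directly; with γ set to 2ζ the two damping models predict the same resonant amplitude (a sketch with assumed example values):

```python
import math

m, zeta, fa = 1.0, 0.02, 1.0    # assumed example values
wn = 2 * math.pi
k = m * wn**2
gamma = 2 * zeta                 # structural damping factor

# Structural damping: psi = fa / (-m*wn^2 + k*(1 + i*gamma)).
# At resonance the real part of the denominator vanishes,
# leaving psi = fa / (i*gamma*k), Eq. (4.8-8).
psi = fa / complex(-m * wn**2 + k, k * gamma)
x_peak_structural = abs(psi)           # Eq. (4.8-9): fa/(gamma*k)

x_peak_viscous = fa / (2 * zeta * k)   # Eq. (4.8-10)

assert abs(x_peak_structural - fa / (gamma * k)) < 1e-12
assert abs(x_peak_structural - x_peak_viscous) < 1e-12
```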

For the same amplitude of response, comparing the two equations, we conclude that γ = 2ζ.

4.8.1 Quadrature/coincident response with structural damping

In Chapter 2, we solved for the steady-state response of a viscously damped single-degree-of-freedom system to harmonic excitation. The dynamic amplification was presented in the form of response components that were collinear (coincident) and at 90 degrees (quadrature) to the harmonic


excitation force. In Section 4.1 of this chapter, we showed how the critical damping ratio, ζ, could be computed from the coincident component of response. In this section, we will first derive the coincident and quadrature components of response of a single-degree-of-freedom system that has structural damping; in the following section we will then derive the structural damping factor, γ, from the coincident component of response. The equation of motion of a system with structural damping and subjected to harmonic excitation is

m\ddot{x}(t) + k(i\gamma + 1)x(t) = f_a e^{i\omega t} \quad\Rightarrow\quad \ddot{x}(t) + \omega_n^2(i\gamma + 1)x(t) = \frac{f_a}{m}\,e^{i\omega t}   (4.8-11)

Letting x_p(t) = \psi e^{i\omega t}, differentiating twice, and substituting into the equation of motion gives

-\omega^2\psi e^{i\omega t} + \omega_n^2(i\gamma + 1)\psi e^{i\omega t} = \frac{f_a}{m}\,e^{i\omega t} \quad\Rightarrow\quad \left(\omega_n^2 - \omega^2 + i\gamma\omega_n^2\right)\psi = \frac{f_a}{m} \quad\Rightarrow\quad \psi = \frac{f_a}{m}\,\frac{1}{\omega_n^2 - \omega^2 + i\gamma\omega_n^2}   (4.8-12)

Multiplying by the complex conjugate of the denominator divided by itself, and normalizing with the identity \frac{1/\omega_n^4}{1/\omega_n^4}, gives

\psi = \frac{f_a}{m}\,\frac{1}{\omega_n^2 - \omega^2 + i\gamma\omega_n^2}\cdot\frac{\omega_n^2 - \omega^2 - i\gamma\omega_n^2}{\omega_n^2 - \omega^2 - i\gamma\omega_n^2} = \frac{f_a}{m\omega_n^2}\left\{\frac{1-\lambda^2}{\left(1-\lambda^2\right)^2 + \gamma^2} - i\,\frac{\gamma}{\left(1-\lambda^2\right)^2 + \gamma^2}\right\}   (4.8-13)

where λ = ω/ω_n. Therefore, the particular solution is

x_p(t) = \psi e^{i\omega t} = \frac{f_a}{m\omega_n^2}\{Co + i\,Qd\}e^{i\omega t} = \frac{f_a}{k}\{Co + i\,Qd\}e^{i\omega t}   (4.8-14)


where

Co = \frac{1-\lambda^2}{\left(1-\lambda^2\right)^2 + \gamma^2} \quad\text{and}\quad Qd = -\frac{\gamma}{\left(1-\lambda^2\right)^2 + \gamma^2}   (4.8-15)

To obtain the acceleration response we differentiate Eq. (4.8-14) twice with respect to time,

\ddot{x}_p(t) = \frac{d^2}{dt^2}x_p(t) = -\frac{\omega^2 f_a}{m\omega_n^2}\{Co + i\,Qd\}e^{i\omega t} = -\frac{f_a}{m}\left(\lambda^2 Co + i\lambda^2 Qd\right)e^{i\omega t}   (4.8-16)

which produces the sought-after result,

Co_{\ddot{x}} = -\lambda^2\,\frac{1-\lambda^2}{\left(1-\lambda^2\right)^2 + \gamma^2} \quad\text{and}\quad Qd_{\ddot{x}} = \lambda^2\,\frac{\gamma}{\left(1-\lambda^2\right)^2 + \gamma^2}   (4.8-17)

When comparing the above solution to that obtained for viscous damping (see Chapter 2) we note that if the frequency of excitation is equal to the circular natural frequency, ω = ω_n, then λ = 1 and γ = 2ζ, as expected (see Section 4.8). If, however, the excitation frequency is not equal to the circular natural frequency, then the relationship between viscous and structural damping is a function of the frequency of excitation. However, as can be ascertained from Fig. 4.8-1, the two are very close.

FIGURE 4.8-1 Quadrature and coincident components of response of a single-degree-of-freedom system with viscous damping, ζ = 0.02 (solid line), and structural damping, γ = 2ζ (dashed line).


4.8.2 Structural damping from coincident response

As with systems with a viscous damping model, the coincident response of a system with structural damping has two extreme points, as shown in Fig. 4.8-2. Note that both the structural damping (solid line) and viscous damping (dashed line) models are shown. The circular natural frequency for the system shown in the figure is ω_n = 2π, with γ = 2ζ at ω = ω_n and ζ = 0.02. Since the peaks in the coincident component of response represent two unique extreme points, we can solve for the associated λ:

\frac{\partial}{\partial\lambda}Co_{\ddot{x}}(\lambda) = \frac{\partial}{\partial\lambda}\left(\frac{\lambda^4 - \lambda^2}{\left(1-\lambda^2\right)^2 + \gamma^2}\right) = \frac{4\lambda^3 - 2\lambda}{\left(1-\lambda^2\right)^2 + \gamma^2} + \frac{4\left(\lambda^4 - \lambda^2\right)\left(\lambda - \lambda^3\right)}{\left[\left(1-\lambda^2\right)^2 + \gamma^2\right]^2} = 0   (4.8-18)

Multiplying by the denominator of the second term produces

\left(4\lambda^3 - 2\lambda\right)\left[\left(1-\lambda^2\right)^2 + \gamma^2\right] + 4\left(\lambda^4 - \lambda^2\right)\left(\lambda - \lambda^3\right) = 0   (4.8-19)

FIGURE 4.8-2 Quadrature and coincident components of response for a single-degree-of-freedom system with viscous damping, ζ = 0.02 (solid line), and structural damping, γ = 2ζ (dashed line). λ₁ and λ₂ designate the frequency ratios at which the coincident component of response of the system with structural damping peaks.


After some algebraic manipulation (see Problem 4.12) we obtain

\frac{1}{1+\gamma^2}\,a^2 - 2a + 1 = 0   (4.8-20)

The above equation, where we substituted a for λ², is a second-order polynomial in a and, therefore, has two roots that can be obtained with the quadratic formula:

a_{1,2} = \frac{2 \mp \sqrt{4 - \dfrac{4}{1+\gamma^2}}}{\dfrac{2}{1+\gamma^2}} = \left(1+\gamma^2\right)\left(1 \mp \gamma\sqrt{\frac{1}{1+\gamma^2}}\right) = 1 + \gamma^2 \mp \gamma\sqrt{1+\gamma^2}   (4.8-21)

which gives

\left(\frac{\omega_1}{\omega_n}\right)^2 = \lambda_1^2 = a_1 = 1 + \gamma^2 - \gamma\sqrt{1+\gamma^2}, \qquad \left(\frac{\omega_2}{\omega_n}\right)^2 = \lambda_2^2 = a_2 = 1 + \gamma^2 + \gamma\sqrt{1+\gamma^2}   (4.8-22)

Subtracting the first from the second gives \frac{\omega_2^2 - \omega_1^2}{\omega_n^2} = 2\gamma\sqrt{1+\gamma^2}, and for small γ we obtain the sought-after solution,

\gamma \approx \frac{1}{2}\,\frac{\omega_2^2 - \omega_1^2}{\omega_n^2}   (4.8-23)
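Eqs. (4.8-22) and (4.8-23) can be exercised numerically: generate the two peak frequency ratios from an assumed γ, then recover γ with the approximate formula:

```python
import math

gamma = 0.04  # assumed structural damping factor (gamma = 2*zeta, zeta = 0.02)

# Exact peak locations of the coincident response, Eq. (4.8-22)
lam1_sq = 1 + gamma**2 - gamma * math.sqrt(1 + gamma**2)
lam2_sq = 1 + gamma**2 + gamma * math.sqrt(1 + gamma**2)

# Small-gamma estimate, Eq. (4.8-23): gamma ~ (w2^2 - w1^2) / (2*wn^2)
gamma_est = 0.5 * (lam2_sq - lam1_sq)

# The exact difference is 2*gamma*sqrt(1+gamma^2), so the estimate is
# high only by the factor sqrt(1+gamma^2); negligible for small gamma.
assert abs(gamma_est - gamma) < gamma**3
```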

Comparing to the viscous damping solution, Eq. (4.1-6), we note again that γ = 2ζ.

4.9 Hysteresis

Another phenomenon of interest is the steady-state behavior of the combined damping and spring forces, f_{ds}(t) = cẋ(t) + kx(t), under harmonic excitation at the system's natural frequency. In Chapter 2, we derived the response of a single-degree-of-freedom system excited by a harmonic force f_a sin ωt; the steady-state response is the particular solution, which is repeated below,

x(t) = \frac{f_a}{m\omega_n^2}\sqrt{Co^2 + Qd^2}\,\sin(\omega t + \theta)   (4.9-1)


Since we are interested in the response when the system is being driven at its natural frequency, Eq. (4.9-1) becomes

x(t) = \frac{f_a}{m\omega_n^2}\,\frac{1}{2\zeta}\,\sin\!\left(\omega_n t - \frac{\pi}{2}\right) = -\frac{f_a}{2\zeta k}\cos\omega_n t = -X\cos\omega_n t   (4.9-2)

The corresponding velocity is

\dot{x}(t) = \omega_n X\sin\omega_n t   (4.9-3)

Substituting Eqs. (4.9-2) and (4.9-3) into the expression for the combined damping and spring force gives

f_{ds}(t) = kx(t) + c\omega_n X\sin\omega_n t \quad\Rightarrow\quad f_{ds}(t) - kx(t) = c\omega_n X\sin\omega_n t   (4.9-4)

Squaring both sides, and noting from Eq. (4.9-2) that X^2\sin^2\omega_n t = X^2 - X^2\cos^2\omega_n t = X^2 - x^2(t), yields

\left(f_{ds}(t) - kx(t)\right)^2 = \left(c\omega_n\right)^2\left(X^2 - x^2(t)\right) \quad\Rightarrow\quad f_{ds}^2(t) - 2f_{ds}(t)\,kx(t) + k^2x^2(t) = \left(c\omega_n\right)^2\left(X^2 - x^2(t)\right)   (4.9-5)

Performing the indicated algebraic manipulations produces

\left(\frac{f_{ds}(t)}{c\omega_n X}\right)^2 - \frac{2k}{\left(c\omega_n X\right)^2}\,f_{ds}(t)\,x(t) + \left(k^2 + c^2\omega_n^2\right)\left(\frac{x(t)}{c\omega_n X}\right)^2 = 1   (4.9-6)


We recognize Eq. (4.9-6) to be that of an ellipse that has been rotated counterclockwise from the x axis toward the f_{ds}(t) axis. The third axis is the time axis; however, since each cycle is identical to the previous one, we can collapse all the cycles onto the f_{ds}–x plane. We will show this graphically, but first we will simplify Eq. (4.9-6). Recall that c = 2mω_nζ. Therefore, cω_nX = f_a, and c²ω_n² = 4k²ζ². Substituting into Eq. (4.9-6) produces the desired form,

\left(\frac{f_{ds}(t)}{f_a}\right)^2 - \frac{2k}{f_a^2}\,f_{ds}(t)\,x(t) + k^2\left(1 + 4\zeta^2\right)\left(\frac{x(t)}{f_a}\right)^2 = 1   (4.9-7)

We will illustrate Eq. (4.9-7) by means of an example problem. Fig. 4.9-1 shows the hysteresis loop of a single-degree-of-freedom system undergoing steady-state vibration, at its natural frequency, in response to a harmonic force of magnitude f_a = 5. For this system, ω_n = 2π, k = ω_n²m, m = 1, ζ = 0.05, and ω = ω_n. The first item to check is the magnitude of f_{ds}(t) when x(t) is zero, that is, f_{ds}(t) = cẋ(t) = 2mω_nζ ẋ(t). Recall that since we are at resonance, the peak velocity occurs when the displacement is zero; hence, ẋ_{peak}(t) = Xω_n|sin ω_nt| = Xω_n. Substituting yields

FIGURE 4.9-1 Hysteresis loop for a single-degree-of-freedom system in steady-state vibration, excited by a harmonic force at the system's natural frequency. The dotted line shows the peak displacement of 1.267. The hysteresis loop crosses the vertical axis at −5 and 5. The time axis comes straight out of the page.


f_{ds}(t) = cẋ(t) = 2mω_nζXω_n = 5. Therefore, we conclude that f_{ds}(t) = f_a(t) when x(t) = 0, which is consistent with our understanding that at resonance the damping force will be equal to the external force and 180 degrees out of phase. We also note that since we are at resonance the acceleration will be ẍ(t) = Xω_n²cos ω_nt and the peak will be Xω_n². Comparing this value to the peak spring force, we note that they are both equal to 50, as they should be, since at resonance the spring and inertial forces are 180 degrees out of phase, and at 90 degrees to the damping and external forces.

A final note: the area inside the hysteresis loop is equal to the energy dissipated during each cycle of oscillation, and according to Eq. (4.5-8) this should be W_d = \frac{f_a^2\pi}{2\zeta k}. This can be verified by computing the area inside the ellipse defined by Eq. (4.9-7). In Appendix 4.2, we show that the area of an ellipse whose semimajor axis, B, lies along the x(t) axis, and whose semiminor axis, A, lies along the f_{ds}(t) axis, is area = πAB. Therefore, if we can establish the semimajor and semiminor axes of the ellipse described by Eq. (4.9-7), we will be able to compute its area. We begin by noting that Eq. (4.9-7) is of the form

\hat{a}f^2 - 2\hat{b}fx + \hat{c}x^2 = 1   (4.9-8)

where \hat{a} = \frac{1}{f_a^2}, \hat{b} = \frac{k}{f_a^2}, and \hat{c} = \frac{k^2\left(1 + 4\zeta^2\right)}{f_a^2}. Eq. (4.9-8) can be written as

\begin{Bmatrix} f & x \end{Bmatrix} \begin{bmatrix} \hat{a} & -\hat{b} \\ -\hat{b} & \hat{c} \end{bmatrix} \begin{Bmatrix} f \\ x \end{Bmatrix} = 1   (4.9-9)

What we now seek is a coordinate transformation, [\tilde{\psi}], that diagonalizes the matrix in Eq. (4.9-9). This is equivalent to rotating the ellipse clockwise until the semimajor axis aligns with the x(t) axis, i.e.,

\begin{Bmatrix} \tilde{f} & \tilde{x} \end{Bmatrix} [\tilde{\psi}]^T \begin{bmatrix} \hat{a} & -\hat{b} \\ -\hat{b} & \hat{c} \end{bmatrix} [\tilde{\psi}] \begin{Bmatrix} \tilde{f} \\ \tilde{x} \end{Bmatrix} = \begin{Bmatrix} \tilde{f} & \tilde{x} \end{Bmatrix} \begin{bmatrix} \Omega_1^2 & 0 \\ 0 & \Omega_2^2 \end{bmatrix} \begin{Bmatrix} \tilde{f} \\ \tilde{x} \end{Bmatrix} = 1, \qquad [\tilde{\psi}] = \begin{bmatrix} \tilde{\psi}_{11} & \tilde{\psi}_{12} \\ \tilde{\psi}_{21} & \tilde{\psi}_{22} \end{bmatrix}   (4.9-10)


Performing the indicated multiplication, we obtain

\Omega_1^2\tilde{f}^2 + \Omega_2^2\tilde{x}^2 = 1   (4.9-11)

Therefore, the area of the ellipse is \pi\,\frac{1}{\Omega_1}\,\frac{1}{\Omega_2}. \Omega_1^2 and \Omega_2^2 are the eigenvalues of the matrix in Eq. (4.9-9), and the columns of [\tilde{\psi}] are the corresponding eigenvectors. The topic of eigenvalues and eigenvectors will be discussed extensively starting in Chapter 6; here, we will only introduce the solution technique for a two-coordinate system, since it is needed to solve this problem. The eigenvalues and eigenvectors are obtained by solving the corresponding eigenvalue problem:

\left(\begin{bmatrix} \hat{a} & -\hat{b} \\ -\hat{b} & \hat{c} \end{bmatrix} - \Omega_i^2\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}\right)\begin{Bmatrix} \tilde{\psi}_1 \\ \tilde{\psi}_2 \end{Bmatrix}_i = \begin{Bmatrix} 0 \\ 0 \end{Bmatrix}   (4.9-12)

Since the right-hand side is a null vector, a nontrivial solution (nonzero eigenvectors) exists only if the determinant of the matrix is equal to zero (Cramer's rule). Therefore,

\begin{vmatrix} \hat{a} - \Omega_i^2 & -\hat{b} \\ -\hat{b} & \hat{c} - \Omega_i^2 \end{vmatrix} = \left(\hat{a} - \Omega_i^2\right)\left(\hat{c} - \Omega_i^2\right) - \hat{b}^2 = 0   (4.9-13)

Performing the indicated multiplications yields

\Omega_i^4 - (\hat{a} + \hat{c})\Omega_i^2 + \hat{a}\hat{c} - \hat{b}^2 = 0   (4.9-14)

Letting y = \Omega_i^2 produces a second-order algebraic equation in y,

y_i^2 - (\hat{a} + \hat{c})y_i + \hat{a}\hat{c} - \hat{b}^2 = 0   (4.9-15)

Eq. (4.9-15) has two roots, which can be obtained with the quadratic formula,

y_{1,2} = \frac{(\hat{a} + \hat{c}) \mp \sqrt{(\hat{a} + \hat{c})^2 - 4\left(\hat{a}\hat{c} - \hat{b}^2\right)}}{2}   (4.9-16)

Therefore, the area of the ellipse is

\text{area} = \pi\,\frac{1}{\sqrt{y_1}}\,\frac{1}{\sqrt{y_2}} = \pi\,\frac{1}{\sqrt{y_1 y_2}}   (4.9-17)


The product y_1 y_2 is

y_1 y_2 = \left\{\frac{(\hat{a}+\hat{c}) - \sqrt{(\hat{a}+\hat{c})^2 - 4\left(\hat{a}\hat{c}-\hat{b}^2\right)}}{2}\right\}\left\{\frac{(\hat{a}+\hat{c}) + \sqrt{(\hat{a}+\hat{c})^2 - 4\left(\hat{a}\hat{c}-\hat{b}^2\right)}}{2}\right\} = \frac{(\hat{a}+\hat{c})^2 - (\hat{a}+\hat{c})^2 + 4\left(\hat{a}\hat{c}-\hat{b}^2\right)}{4} = \hat{a}\hat{c} - \hat{b}^2   (4.9-18)

It is interesting to note that the above result is equal to the determinant of the matrix in Eq. (4.9-9), which is then also equal to the product of its eigenvalues, i.e., Eq. (4.9-18). Substituting Eq. (4.9-18) into (4.9-17), while substituting the values for \hat{a}, \hat{b}, and \hat{c} defined above, produces the sought-after result,

\text{area} = \pi\,\frac{1}{\sqrt{\hat{a}\hat{c} - \hat{b}^2}} = \pi\left\{\frac{1}{f_a^2}\,\frac{k^2\left(1+4\zeta^2\right)}{f_a^2} - \left(\frac{k}{f_a^2}\right)^2\right\}^{-1/2} = \frac{\pi f_a^2}{2\zeta k}   (4.9-19)

The area computed in Eq. (4.9-19) is equal to Wd , as computed by Eqs. (4.5-1) through (4.5-8). Therefore, the work done in one cycle of oscillation by the _ is equal to the area of the ellipse defined by damping force, fd ðtÞ ¼ cxðtÞ, Eq. (4.9-7).
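This equivalence can be sketched numerically. The short check below is illustrative only: the parameter values ($\omega_n = 2\pi$, $m = 1$, $\zeta = 0.05$, $f_a = 3$) are assumed, not taken from the text. It integrates the work done by the viscous damping force over one cycle of resonant steady-state response and compares it with the closed-form ellipse area of Eq. (4.9-19).

```python
import math

# Illustrative parameters (assumed): natural frequency, mass, damping, force amplitude
wn, m, zeta, fa = 2 * math.pi, 1.0, 0.05, 3.0
k = wn**2 * m          # stiffness
c = 2 * zeta * wn * m  # viscous damping coefficient

# Steady-state amplitude at resonance: fa/(2*zeta*k), with a 90-degree phase lag
X = fa / (2 * zeta * k)

# Work done by the damping force over one cycle, Wd = integral of c*xdot^2 dt,
# evaluated with a midpoint rule over one period
T = 2 * math.pi / wn
n = 100_000
dt = T / n
Wd = sum(c * (X * wn * math.sin(wn * (i + 0.5) * dt))**2 * dt for i in range(n))

ellipse_area = math.pi * fa**2 / (2 * zeta * k)  # Eq. (4.9-19)
print(Wd, ellipse_area)  # the two values agree
```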


CHAPTER 4 Damping

Problems

Problem 4.1
Using the coincident component of response, compute the critical damping ratio of the system that produced the coincident and quadrature components of acceleration response shown in the figure. Then explain why this makes sense relative to the quadrature component.

Solution 4.1
The peaks of the coincident component of response occur at $\lambda_1 = 0.9806$ and $\lambda_2 = 1.0206$. Hence,

$$\zeta = \frac{1.0206 - 0.9806}{1.0206 + 0.9806} = 0.020$$

The peak amplitude at resonance is given by $1/2\zeta$. Hence, $1/2(0.02) = 25$, which is the magnitude of the quadrature component in the figure.

Problem 4.2
The figure shows the peak acceleration response of a single-degree-of-freedom system driven by harmonic excitation. The points corresponding to the peaks of the coincident component of response are indicated. The corresponding values for $\lambda$ are left peak $\lambda_1 = 0.9806$ and right peak $\lambda_2 = 1.0206$. Compute the critical damping ratio for each $\lambda$. If we wanted to use the average of these two points, what percentage of the peak total response would we use?


Solution 4.2
The left and right solutions are obtained from Eqs. (4.2-5) and (4.2-6), i.e.,

$$a_1 = \frac{1}{\sqrt{2}}\sqrt{\frac{1}{1+\zeta}} \;\Rightarrow\; \zeta = \frac{1}{2a_1^2} - 1 = \frac{1}{2(0.7001)^2} - 1 = 0.020$$

$$a_2 = \frac{1}{\sqrt{2}}\sqrt{\frac{1}{1-\zeta}} \;\Rightarrow\; \zeta = 1 - \frac{1}{2a_2^2} = 1 - \frac{1}{2(0.7143)^2} = 0.020$$

If we wanted to use the average of the two half-power points, we would use $1/\sqrt{2} = 0.707$ of the peak response.

Problem 4.3
If the frequency difference between the half-power points of two single-degree-of-freedom systems is the same, but one has twice the damping of the other, which one has the higher frequency?

Solution 4.3
Since $\zeta = \frac{1}{2}\frac{\omega_2 - \omega_1}{\omega_n}$ and the half-power point bandwidth is the same, i.e., $(\omega_2 - \omega_1)_1 = (\omega_2 - \omega_1)_2$, we have $\zeta_1 2\omega_{n,1} = \zeta_2 2\omega_{n,2}$. Let $\zeta_1 = 2\zeta_2$; then

$$2\zeta_2 2\omega_{n,1} = \zeta_2 2\omega_{n,2} \;\Rightarrow\; 2\omega_{n,1} = \omega_{n,2}$$

The system with the higher damping has a natural frequency half that of the one with the lower damping; that is, the lower-damped system has the higher frequency.
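The half-power arithmetic above is easy to script. The following minimal sketch uses the values quoted in Solutions 4.1 and 4.2 and confirms that all three estimates land on $\zeta \approx 0.020$.

```python
lam1, lam2 = 0.9806, 1.0206      # frequency ratios at the coincident peaks
zeta_peaks = (lam2 - lam1) / (lam2 + lam1)

a1, a2 = 0.7001, 0.7143          # fractions of peak response at those frequencies
zeta_left = 1 / (2 * a1**2) - 1  # from a1 = (1/sqrt(2))*sqrt(1/(1 + zeta))
zeta_right = 1 - 1 / (2 * a2**2) # from a2 = (1/sqrt(2))*sqrt(1/(1 - zeta))

print(round(zeta_peaks, 3), round(zeta_left, 3), round(zeta_right, 3))  # 0.02 0.02 0.02
```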


Problem 4.4
List the formulas that can be used to compute damping from frequency response functions (coincident and quadrature components) of single-degree-of-freedom systems, or from total response functions. Indicate the primary assumptions used to arrive at each formula.

Solution 4.4
From the peak response points of the coincident component of response we have

$$\frac{\omega_2^2 - \omega_1^2}{\omega_n^2} = \frac{4\zeta}{1 - 4\zeta^2}$$

(1) If we assume that $\zeta^2$ is small, we obtain $\zeta \approx \frac{1}{4}\frac{\omega_2^2 - \omega_1^2}{\omega_n^2}$.
(2) If we assume that the natural frequency is halfway between the frequencies of the coincident component peaks, i.e., $\omega_n = \frac{1}{2}(\omega_1 + \omega_2)$, we obtain $\zeta \approx \frac{1}{2}\frac{\omega_2 - \omega_1}{\omega_n}$ and $\zeta \approx \frac{\omega_2 - \omega_1}{\omega_2 + \omega_1}$.

From the total response we have

$$a_1 = \frac{1}{\sqrt{2}}\sqrt{\frac{1}{1+\zeta}} \quad \text{and} \quad a_2 = \frac{1}{\sqrt{2}}\sqrt{\frac{1}{1-\zeta}}$$

where $a_1$ and $a_2$ are the fractions of the peak response at the frequencies where the total response has inflection points on either side of the peak response. These points correspond to the frequencies where the coincident component of response peaks.

(1) If we assume that $\zeta^2$ is small, we obtain $a_1 \approx a_2 \approx \frac{1}{\sqrt{2}}$ and $\zeta \approx \frac{1}{2}\frac{\omega_2 - \omega_1}{\omega_n}$, where $\omega_1$ and $\omega_2$ correspond to the frequencies at which the total response magnitude is $1/\sqrt{2} = 0.707$ of the peak magnitude.

Problem 4.5
The figure shows the quadrature and coincident components of response of two single-degree-of-freedom systems where the excitation forces were not of the same magnitude. Which one has the higher damping value (system 1, solid line; or system 2, dashed line)? What are the critical damping ratios of the two systems? Explain what causes the differences in quadrature magnitude.


Solution 4.5
System 1, shown with the solid line, has the higher damping value because the bandwidth between the peaks of its coincident component of response is twice that of system 2, shown with the dashed line. By comparing the coincident component frequencies where the peaks occur, we can conclude that system 1 has twice the damping of system 2, provided they have the same natural frequency. System 1 has $\zeta = 0.04$ and system 2 has $\zeta = 0.02$. The system 1 excitation force magnitude is 2.2 times higher than the system 2 excitation, which is why the more highly damped system has a higher response level.

Problem 4.6
The figure shows the free-vibration acceleration time history of a single-degree-of-freedom system. Use the single- and multi-cycle logarithmic decrement methods to compute the critical damping ratio of the system. The peak values for each cycle of oscillation are indicated in the figure.


Solution 4.6
The single-cycle logarithmic decrement gives $\zeta = \frac{\delta}{2\pi}$ (see Eq. 4.3-6), where $\delta = \ln\left( \frac{\ddot{x}(t)}{\ddot{x}(t+T_d)} \right)$. Hence, using the first two peaks we obtain

$$\delta = \ln\left( \frac{71.76}{59.44} \right) = 0.1884, \qquad \zeta = \frac{\delta}{2\pi} = \frac{0.1884}{2\pi} = 0.03$$

which is the critical damping ratio used to compute the time history. For the multi-cycle calculation we will use the first and fifth cycles (hence $n = 4$) in Eq. (4.3-10):

$$\zeta = \frac{1}{2\pi}\frac{1}{n}\ln\left( \frac{\ddot{x}(t)}{\ddot{x}(t+nT_d)} \right) = \frac{1}{2\pi}\frac{1}{4}\ln\left( \frac{71.76}{33.77} \right) = 0.03$$
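Both decrement estimates can be reproduced in a few lines; the sketch below uses the first, second, and fifth peak values quoted from the figure.

```python
import math

p1, p2, p5 = 71.76, 59.44, 33.77   # first, second, and fifth peaks from the figure

# Single-cycle logarithmic decrement (Eq. 4.3-6)
delta = math.log(p1 / p2)
zeta_single = delta / (2 * math.pi)

# Multi-cycle version (Eq. 4.3-10), first and fifth peaks, n = 4 cycles
zeta_multi = math.log(p1 / p5) / (2 * math.pi * 4)

print(round(zeta_single, 2), round(zeta_multi, 2))  # 0.03 0.03
```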


Solution 4.7
The requested plot follows, where the line was drawn by judgment. The slope, therefore, will depend on the individual drawing the "best-fit" line. Note that the line always has to go through the origin.

Substituting the slope of $\ln\left( \frac{x(t)}{x(t + nT_d)} \right)$ versus $n$, approximately 0.22, into Eq. (4.3-10) yields

$$\zeta = \frac{1}{2\pi}(0.22) = 0.035$$

as compared to the value used to compute the decay time history of $\zeta = 0.04$.

Problem 4.8
The figure in Problem 4.7 shows the decay time history of a single-degree-of-freedom system where the measured response (solid line) is contaminated. The dashed line is the uncontaminated response. The horizontal markers indicate the amplitude at the times that correspond to each complete period of oscillation. Compute the critical damping ratio by the least squares approach described in Section 4.3.2.


Solution 4.8

As can be ascertained, the least-squares solution is, to three decimal places, the same as the value obtained in Problem 4.7 by drawing a "best-fit" line by eye. But because of the corruption in the data, the computed damping value is lower than the actual value of the system, $\zeta = 0.04$.

Problem 4.9
Provide a physics-based argument as to why the peak strain energy must be equal to the peak kinetic energy in an undamped single-degree-of-freedom system in free vibration, i.e., there are no external forces and there is no internal dissipation of energy.

Solution 4.9
When the mass reaches its peak displacement it will come to a stop in order to reverse its direction of motion. This stop corresponds to zero velocity, and all the energy of the system must be stored in the spring as strain energy. When the mass passes through its equilibrium point, its displacement will be zero, but its velocity will be at its peak; hence, all the system's energy will be kinetic and must be equal to the peak strain energy, since energy is conserved.

Problem 4.10
The unforced motion of an undamped single-degree-of-freedom system is initiated with an initial displacement $x(0)$. If at 0.25 sec, which is within


the first cycle of oscillation, the kinetic energy is three times the potential energy, what is the natural period of vibration? What is the circular natural frequency of vibration? Discuss your results.

Solution 4.10
Since at $t = 0$ there is only an initial displacement, all the energy available for vibration is in the form of strain energy, i.e., $\frac{1}{2}kx^2(0)$. The total energy during vibration will be a combination of strain energy, $U(t)$, plus kinetic energy, $T(t)$; hence,

$$T(t) + U(t) = \frac{1}{2}kx^2(0)$$

At $t = 0.25$ sec we have $T(0.25) = 3U(0.25)$. Substituting gives

$$3U(0.25) + U(0.25) = 4\{U(0.25)\} = 4\left\{ \frac{1}{2}kx^2(0.25) \right\} = \frac{1}{2}kx^2(0)$$

The displacement response of an undamped single-degree-of-freedom system to an initial displacement (see Chapter 2) is

$$x(t) = x(0)\cos \omega_n t$$

Substituting gives

$$4\left\{ \frac{1}{2}k\big( x(0)\cos(\omega_n 0.25) \big)^2 \right\} = \frac{1}{2}kx^2(0)$$

The first thing we note is that the initial displacement can be divided out. This is as it should be, since the natural frequency of a linear system is not a function of the amplitude of oscillation, nor of how the vibrations were initiated. Proceeding,

$$(\cos(\omega_n 0.25))^2 = \frac{1}{4} \;\Rightarrow\; \cos(\omega_n 0.25) = \frac{1}{2} \;\Rightarrow\; \omega_n = 4\cos^{-1} 0.5 = 4.1888 \text{ rad/sec}$$

As a check, we can substitute the solution into the energy equation, $T(t) + U(t) = \frac{1}{2}m\dot{x}^2(t) + \frac{1}{2}kx^2(t) = \frac{1}{2}kx^2(0)$. Substituting the displacement solution and its derivative yields

$$\frac{1}{2}m(-\omega_n x(0)\sin \omega_n t)^2 + \frac{1}{2}k(x(0)\cos \omega_n t)^2 = \frac{1}{2}kx^2(0)$$


Substituting the computed circular frequency and time of interest, and dividing through by $\frac{1}{2}kx^2(0)$ (recalling $k = m\omega_n^2$), gives

$$(\sin((4.1888)0.25))^2 + (\cos((4.1888)0.25))^2 = 0.75 + 0.25 = 1$$

The natural period of vibration is

$$\text{period} = 1/\text{frequency} = \frac{1}{\omega_n/2\pi} = \frac{2\pi}{4.1888} = 1.5 \text{ sec}$$

Problem 4.11
The left figure shows the quadrature and coincident responses of a single-degree-of-freedom system. The right figure shows an expanded view of the coincident component. Assume that the system has structural damping. Compute from the coincident component the structural damping factor, $\gamma$. What if the system had viscous damping; what would the critical damping ratio, $\zeta$, be?

Solution 4.11
From the expanded-view right figure we obtain the frequency ratio values at the two peaks,

$$\lambda_1 = 6.1028 \quad \text{and} \quad \lambda_2 = 6.4805$$

The structural damping factor is

$$\gamma \approx \frac{1}{2}\frac{\lambda_2^2 - \lambda_1^2}{\{(\lambda_2 + \lambda_1)/2\}^2} = \frac{1}{2}\frac{(6.4805)^2 - (6.1028)^2}{\{(6.4805 + 6.1028)/2\}^2} = \frac{1}{2}\frac{4.7527}{39.5849} = 0.06$$

If the system instead had viscous damping, the critical damping ratio would be $\zeta = \gamma/2 = 0.03$.
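The same arithmetic, scripted as a minimal sketch using the peak frequency ratios read from the figure:

```python
lam1, lam2 = 6.1028, 6.4805   # frequency-ratio values at the two coincident peaks
gamma = 0.5 * (lam2**2 - lam1**2) / (((lam2 + lam1) / 2) ** 2)
zeta = gamma / 2              # equivalent viscous critical damping ratio
print(round(gamma, 2), round(zeta, 2))  # 0.06 0.03
```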


Problem 4.12
In Section 4.8.2, it was shown that the structural damping factor $\gamma$ could be derived with knowledge of the frequencies at which the corresponding acceleration coincident response peaked. The derivation involved computing the derivative of the coincident response function with respect to $\lambda$, i.e.,

$$\frac{\partial}{\partial \lambda}Co_{\ddot{x}}(\lambda) = \frac{\partial}{\partial \lambda}\left( \frac{\lambda^4 - \lambda^2}{\left( 1 - \lambda^2 \right)^2 + (\gamma)^2} \right) = \frac{4\lambda^3 - 2\lambda}{\left( 1 - \lambda^2 \right)^2 + (\gamma)^2} + \frac{4\left( \lambda^4 - \lambda^2 \right)\left( \lambda - \lambda^3 \right)}{\left\{ \left( 1 - \lambda^2 \right)^2 + (\gamma)^2 \right\}^2} = 0$$

Show that this leads to Eq. (4.8-20),

$$\frac{1}{\left( 1 + \gamma^2 \right)}a^2 - 2a + 1 = 0$$

Solution 4.12
Starting with

$$\frac{4\lambda^3 - 2\lambda}{\left( 1 - \lambda^2 \right)^2 + (\gamma)^2} + \frac{4\left( \lambda^4 - \lambda^2 \right)\left( \lambda - \lambda^3 \right)}{\left\{ \left( 1 - \lambda^2 \right)^2 + (\gamma)^2 \right\}^2} = 0$$

multiply by the denominator of the second term and then perform the required algebraic manipulations:

$$\left( 4\lambda^3 - 2\lambda \right)\left\{ \left( 1 - \lambda^2 \right)^2 + (\gamma)^2 \right\} + 4\left( \lambda^4 - \lambda^2 \right)\left( \lambda - \lambda^3 \right) = 0$$

$$\left( 4\lambda^3 - 2\lambda \right)\left( 1 - 2\lambda^2 + \lambda^4 \right) + 4\gamma^2\lambda^3 - 2\gamma^2\lambda + 4\lambda^5 - 4\lambda^3 - 4\lambda^7 + 4\lambda^5 = 0$$

$$-2\lambda + 4\lambda^3 - 2\lambda^5 + 4\lambda^3 - 8\lambda^5 + 4\lambda^7 + \left( 4\gamma^2 - 4 \right)\lambda^3 - 2\gamma^2\lambda + 8\lambda^5 - 4\lambda^7 = 0$$

$$-\left( 2 + 2\gamma^2 \right)\lambda + \left( 4 + 4\gamma^2 \right)\lambda^3 - 2\lambda^5 = 0$$

$$\lambda^4 - 2\left( 1 + \gamma^2 \right)\lambda^2 + \left( 1 + \gamma^2 \right) = 0$$

Dividing by $\left( 1 + \gamma^2 \right)$ and letting $a = \lambda^2$ yields the sought-after result,

$$\frac{1}{\left( 1 + \gamma^2 \right)}a^2 - 2a + 1 = 0$$
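A numerical spot-check of this algebra (not a proof) is straightforward: the expanded numerator should equal the factored form $-2\lambda\left\{ \lambda^4 - 2(1+\gamma^2)\lambda^2 + (1+\gamma^2) \right\}$ for any $\lambda$ and $\gamma$. The sample values below are arbitrary.

```python
def numerator(lam, g):
    """Derivative numerator after multiplying through by the squared denominator."""
    return ((4 * lam**3 - 2 * lam) * ((1 - lam**2) ** 2 + g**2)
            + 4 * (lam**4 - lam**2) * (lam - lam**3))

def factored(lam, g):
    """The claimed factored form, -2*lam*(lam^4 - 2(1+g^2)lam^2 + (1+g^2))."""
    return -2 * lam * (lam**4 - 2 * (1 + g**2) * lam**2 + (1 + g**2))

for lam in (0.5, 0.9, 1.1, 2.0):
    for g in (0.01, 0.06, 0.3):
        assert abs(numerator(lam, g) - factored(lam, g)) < 1e-10
print("identity holds at all sample points")
```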


Problem 4.13
The left figure shows a plot of $f_{ds}(t)$ versus $x(t)$ for a single-degree-of-freedom system being excited harmonically at its undamped circular natural frequency, $\omega_n$ (see Section 4.9 for a detailed discussion). If the circular natural frequency is $\omega_n = 2\pi$ and the mass is $m = 1$, what is the critical damping ratio, $\zeta$, of this system?

Solution 4.13
We start with Eq. (4.9-19), which relates the critical damping ratio, excitation force amplitude, and system stiffness to the area of the hysteresis ellipse obtained by plotting $f_{ds}(t)$ against $x(t)$, i.e.,

$$\text{area of ellipse} = \frac{\pi f_a^2}{2\zeta k}$$

From Appendix 4.2, we know that the area of an ellipse is $\pi AB$, where $A$ and $B$ are the semiminor and semimajor axes of the ellipse, respectively. We also know from the discussion in Section 4.9 that $f_a$ is the $f_{ds}(t)$-axis crossing value of the ellipse. Hence, from the figure we obtain $f_a = 5$. In addition, since the natural circular frequency and mass are given, we can compute the stiffness, i.e., $k = \omega_n^2 m = (2\pi)^2 1 = 39.4784$. What remains is the computation of $A$ and $B$.

The values of $B$ and $\theta$ are straightforward: $B = \sqrt{13.20^2 + 0.311^2} = 13.2037$ and $\theta = \tan^{-1}(13.20/0.311) = 1.5472$. Knowing $\theta$ we can compute $A$: $A = 5\cos\theta = 0.1180$. The area of the ellipse, therefore, is


$$\pi AB = \pi(13.2037)(0.1180) = 4.8947$$

We now have all values required to compute the critical damping ratio:

$$\zeta = \frac{\pi f_a^2}{2(\pi AB)k} = \frac{3.1416(25)}{2(4.8947)39.4784} = 0.20$$
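The same computation as a short script, using the ellipse-axis coordinates 13.20 and 0.311 as read in the solution:

```python
import math

wn, m = 2 * math.pi, 1.0
k = wn**2 * m                      # stiffness, 39.4784
fa = 5.0                           # f_ds-axis crossing value of the ellipse
B = math.hypot(13.20, 0.311)       # semimajor axis, 13.2037
theta = math.atan2(13.20, 0.311)   # 1.5472 rad
A = fa * math.cos(theta)           # semiminor axis, ~0.118
area = math.pi * A * B             # ~4.89
zeta = math.pi * fa**2 / (2 * area * k)
print(round(zeta, 2))  # 0.2
```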

Appendix 4.1 Taylor series expansion
A function $f(z)$ can be expressed by a Taylor power series expansion provided it has an $n$th continuous derivative at $a$ (Crowell and Slesnick, 1968), i.e.,

$$f(z) = f(a) + f'(a)(z - a) + \cdots + \frac{1}{n!}f^{(n)}(a)(z - a)^n + R_n$$

We seek such an expansion, about $a = 0$, for the following function:

$$f(z) = (1 + z)^{-1/2} + (1 - z)^{-1/2}$$

The first five derivative terms in the series are

$$f(0) = (1 + 0)^{-1/2} + (1 - 0)^{-1/2} = 2$$

$$f'(0) = -\frac{1}{2}\left\{ (1 + 0)^{-3/2} - (1 - 0)^{-3/2} \right\} = 0$$

$$f''(0) = \frac{3}{4}\left\{ (1 + 0)^{-5/2} + (1 - 0)^{-5/2} \right\} = \frac{3}{2}$$

$$f^{(3)}(0) = -\frac{15}{8}\left\{ (1 + 0)^{-7/2} - (1 - 0)^{-7/2} \right\} = 0$$

$$f^{(4)}(0) = \frac{105}{16}\left\{ (1 + 0)^{-9/2} + (1 - 0)^{-9/2} \right\} = \frac{105}{8}$$

Substituting into the Taylor series expansion we obtain

$$f(z) = 2 + 0 + \frac{1}{2!}\left( \frac{3}{2} \right)z^2 + 0 + \frac{1}{4!}\left( \frac{105}{8} \right)z^4 + R_n = 2\left( 1 + \frac{3}{8}z^2 + \frac{35}{128}z^4 \right) + R_n$$


For lightly damped structures, $\zeta$ will be small and we can neglect its higher-order terms, which then yields

$$f(\zeta) = (1 + \zeta)^{-1/2} + (1 - \zeta)^{-1/2} \approx 2$$
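A quick numerical sketch confirms both the truncated series and the light-damping approximation (the sample $z$ values are assumed to represent "light damping"):

```python
def f(z):
    """f(z) = (1+z)^(-1/2) + (1-z)^(-1/2)"""
    return (1 + z) ** -0.5 + (1 - z) ** -0.5

def series(z):
    """Truncated Taylor expansion, 2*(1 + (3/8)z^2 + (35/128)z^4)"""
    return 2 * (1 + (3 / 8) * z**2 + (35 / 128) * z**4)

for z in (0.01, 0.05, 0.1):
    assert abs(f(z) - series(z)) < 1e-5   # the expansion tracks the function
    assert abs(f(z) - 2) < 0.01           # and f(z) ~ 2 for light damping
print("series and light-damping approximation verified")
```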

Appendix 4.2 Area of an ellipse

The equation of the ellipse shown in the figure is $\left( \frac{f}{A} \right)^2 + \left( \frac{x}{B} \right)^2 = 1$. This can be rewritten as $f = A\left\{ 1 - \left( \frac{x}{B} \right)^2 \right\}^{1/2}$. Since the ellipse is symmetric about both the $f$ and $x$ axes, to obtain the total area we can compute the area of one quadrant and then multiply by four, i.e.,

$$\text{area} = 4\int_0^B A\left\{ 1 - \left( \frac{x}{B} \right)^2 \right\}^{1/2} dx$$

Letting $\frac{x}{B} = \sin\theta$ gives $dx = B\cos\theta \, d\theta$. Therefore, when $x = 0$, $\theta$ will equal zero, and when $x = B$, $\theta$ will be equal to $\frac{\pi}{2}$. Substituting and computing the integral produces

$$\text{area} = 4AB\int_0^{\pi/2} \sqrt{1 - \sin^2\theta}\,\cos\theta \, d\theta = 4AB\int_0^{\pi/2} \cos^2\theta \, d\theta = 4AB\int_0^{\pi/2} \frac{1}{2}(1 + \cos 2\theta)\, d\theta = 2AB\left[ \theta + \frac{\sin 2\theta}{2} \right]_0^{\pi/2} = \pi AB$$


References
Crowell, R.H., Slesnick, W.E., 1968. Calculus with Analytic Geometry. W. W. Norton & Company Inc., New York, New York.
Rayleigh, J.W.S., 1945. Theory of Sound, vol. I. Dover Publications, New York, New York [1877].
Resnick, R., Halliday, D., 1966. Physics Part I. John Wiley & Sons, Inc., New York, New York.
Sokolnikoff, I.S., Redheffer, R.M., 1958. Mathematics of Physics and Modern Engineering. McGraw-Hill Book Company, New York, New York.


CHAPTER 5 Transient excitation

5. Introduction
In Chapter 2, we established closed-form solutions for single-degree-of-freedom systems whose motion was initiated with an initial displacement and/or velocity, and/or driven by constant-frequency periodic forces that could be modeled with cosine and sine functions. In this chapter, we will provide tools to compute the response of single-degree-of-freedom systems when the forcing functions are transient in nature. Often, the word transient is associated with short-duration forces; herein, however, long-duration random forces will also be referred to as transient forces. While doing this, we will discuss the concepts of impulse and impulsive forces, the principles of superposition and convolution, and Duhamel's integral. We will also derive the tools necessary to solve for the response of systems that are excited by nondeterministic forces; that is, forces that are described by their statistical properties. We will begin by first solving for the response of damped single-degree-of-freedom systems subjected to sudden forces so that we can get a feel for the difference in behavior between systems excited by sinusoidal and transient forces.

5.1 Ramp, step, and boxcar excitation
Ramp forcing functions with "flattops" are forces that increase linearly from zero to a specified value, $f_r$, over a period, $\tau$, and then stay at that value (flattop) without changing (see Fig. 5.1-1A). If $\tau$ becomes infinitesimally small, we obtain what is referred to as a step forcing function (see Fig. 5.1-1B). We will solve for the response to both types of forcing functions. The step function will be addressed first, since this will make it easier to solve for the response to a ramp function with a flattop.

Structural Dynamics. https://doi.org/10.1016/B978-0-12-821614-9.00005-7 Copyright © 2020 Elsevier Inc. All rights reserved.


FIGURE 5.1-1 (A) Ramp forcing function with a flattop of magnitude $f_r$, and buildup duration $\tau$; (B) Step forcing function of magnitude $f_s$.

5.1.1 Step excitation
In Chapter 2, we derived the equation of motion for a single-degree-of-freedom system subjected to an external force, $f(t)$. For a step excitation, $f(t)$ will be equal to $f_s$ for all values of $t \geq 0$; and since $f_s$ does not vary with time, the equation of motion becomes

$$m\ddot{x}(t) + c\dot{x}(t) + kx(t) = f(t) = f_s \;\Rightarrow\; \ddot{x}(t) + 2\zeta\omega_n\dot{x}(t) + \omega_n^2 x(t) = \frac{f_s}{m} \tag{5.1-1}$$

Note that we used relationships from Chapter 2 to substitute for $c/m$ and $k/m$. Eq. (5.1-1) is a second-order, linear differential equation whose solution will consist of the sum of two parts: the solution to the homogeneous equation (right-hand side set equal to zero), and a particular solution that satisfies the equation with the forcing function term included. The solution to the homogeneous equation, $x_h(t)$, was derived in Chapter 2, and for the case $0 < \zeta < 1.0$ we have

$$x_h(t) = e^{-\zeta\omega_n t}\left( \tilde{A}\cos \omega_d t + \tilde{B}\sin \omega_d t \right) \tag{5.1-2}$$

The particular solution, $x_p(t)$, will depend on the form of the term on the right-hand side; and since this is a constant, the solution must be a constant. Hence, by letting $x_p(t) = \mathbb{N}$, substituting it and its time derivatives, which will be zero, into Eq. (5.1-1), and solving for $\mathbb{N}$, we obtain

$$x_p(t) = \frac{f_s}{m\omega_n^2} \tag{5.1-3}$$


The complete solution, therefore, is

$$x(t) = x_h(t) + x_p(t) = e^{-\zeta\omega_n t}\left( \tilde{A}\cos \omega_d t + \tilde{B}\sin \omega_d t \right) + \frac{f_s}{m\omega_n^2} \tag{5.1-4}$$

We will assume that the system is at rest prior to the application of the step function; hence, $x(0) = 0$ and $\dot{x}(0) = 0$. Using these initial conditions to solve for $\tilde{A}$ and $\tilde{B}$, we obtain

$$x(t) = \frac{f_s}{m\omega_n^2}\left\{ 1 - e^{-\zeta\omega_n t}\left( \cos \omega_d t + \frac{\zeta}{\sqrt{1 - \zeta^2}}\sin \omega_d t \right) \right\} \tag{5.1-5}$$

By multiplying both sides of the equation by the reciprocal of the term in front of the braces, and then substituting $\omega_n^2 = k/m$, we obtain the response normalized relative to the deflection one would obtain if $f_s$ were applied as a static load. Fig. 5.1-2 shows the normalized response for three different values of damping. The first item to note is that the dynamic response of the system with light damping ($\zeta = 0.01$) is almost twice the static response. Indeed, if we set the damping to zero, the dynamic response would be twice that of the static solution. Therefore, the first conclusion we can reach is that when a force is applied suddenly the system will overshoot the static deflection, and the maximum deflection will exceed that which would occur if the force were applied more slowly, with the limit being a static force. We will show

FIGURE 5.1-2 Normalized response of a single-degree-of-freedom system to a step function force of magnitude $f_s$ for three different values of damping.


later that "applied suddenly" is a relative term. In other words, if a system has a long natural period of vibration, the force could be applied more slowly, but still fast relative to the natural period, and the dynamic response would still overshoot relative to the static solution.

The second significant observation is that when the damping was increased from 1% of critical ($\zeta = 0.01$) to a relatively high 10% ($\zeta = 0.10$), the peak dynamic response did not decrease appreciably: 1.97 versus 1.73. Had this been harmonic excitation at the natural frequency, the dynamic amplification would have been considerably higher (50 for $\zeta = 0.01$), but the decrease would have been directly proportional to the increase in damping. For example, a doubling of the damping to $\zeta = 0.02$ would cut the resonant response in half. Therefore, it seems reasonable to conclude that the peak transient response is less sensitive to damping when compared to harmonic excitation.

The final item to note is that irrespective of damping or the magnitude of the force, the response to a step function force decays over time to the static solution. The more lightly damped the system, the more cycles it will take, but eventually the response will reach the static response, which for this type of forcing function could be considered the steady-state solution. Therefore, the steady-state solution will be less than the initial startup transient response. Recall that for sinusoidal excitation at the natural frequency, the steady-state solution was oscillatory, and considerably higher than the static response; and higher than the startup transient, provided $x(0) = 0$ and $\dot{x}(0) = 0$, which over time approached the steady-state value from below.

5.1.2 Ramp excitation

Next, we will derive the response to the ramp forcing function (with a flattop) shown in Fig. 5.1-1A. To accomplish this we will need to solve the problem in two parts. The first will provide the solution for $0 \leq t < \tau$, and the second for the period $t \geq \tau$. For the second period we will need to superimpose, starting at $\tau$, a solution that will cancel the continually increasing ramp that started at $t = 0$. This is shown in Fig. 5.1-3, where the bottom dashed line is the force that, when superimposed, will cause the initial ramp function to flatten to $f_r$ starting at $\tau$. For the initial period, $0 \leq t < \tau$, the equation of motion is

$$\ddot{x}(t) + 2\zeta\omega_n\dot{x}(t) + \omega_n^2 x(t) = \frac{f_r t}{m\tau} \tag{5.1-6}$$


FIGURE 5.1-3 Ramp function with a flattop (solid line) obtained by superimposing the two ramp functions shown by the dashed lines.

The solution to the homogeneous equation is as above, Eq. (5.1-2). To establish the particular solution we note that a ramp function is the time integral of a step function, i.e.,

$$\frac{f_r}{m}t = \int \frac{f_r}{m}\,dt \tag{5.1-7}$$

where we substituted $f_r$ for $f_s$ to indicate we are dealing with the magnitude of the ramp function with a flattop. Dividing both sides by $\tau$ to normalize with respect to the ramp duration, we obtain

$$\frac{f_r t}{m\tau} = \frac{1}{\tau}\int \frac{f_r}{m}\,dt \tag{5.1-8}$$

The left-hand side of Eq. (5.1-8) is the same as the right-hand term in Eq. (5.1-6). The right-hand side of Eq. (5.1-8) is the time integral of the right-hand side of Eq. (5.1-1), scaled by $1/\tau$. Therefore, the time integral of the particular solution to Eq. (5.1-1), i.e., Eq. (5.1-3), scaled by $1/\tau$, must be the particular solution to Eq. (5.1-6), i.e.,

$$\frac{1}{\tau}\int \frac{f_r}{m\omega_n^2}\,dt = \frac{f_r t}{m\omega_n^2 \tau} + \hat{c} \tag{5.1-9}$$

Substituting the solution into Eq. (5.1-6) yields the integration constant, $\hat{c}$, and the particular solution,

$$x_p(t) = \frac{f_r t}{m\omega_n^2 \tau} - \frac{2\zeta f_r}{m\omega_n^3 \tau} \tag{5.1-10}$$


The solution for the period $0 \leq t < \tau$, therefore, is

$$x_1(t) = x_h(t) + x_p(t) = e^{-\zeta\omega_n t}\left( \tilde{A}\cos \omega_d t + \tilde{B}\sin \omega_d t \right) + \frac{f_r t}{m\omega_n^2 \tau} - \frac{2\zeta f_r}{m\omega_n^3 \tau} \tag{5.1-11}$$

Note that we introduced the subscript 1 on $x_1(t)$ to indicate that the solution corresponds to the period $0 \leq t < \tau$.

We will assume that the system is at rest prior to the application of the ramp force; therefore, $x(0) = 0$ and $\dot{x}(0) = 0$. Using these initial conditions to solve for $\tilde{A}$ and $\tilde{B}$, we obtain

$$x_1(t) = \frac{f_r}{m\omega_n^2 \tau}\left\{ e^{-\zeta\omega_n t}\left( \frac{2\zeta}{\omega_n}\cos \omega_d t + \frac{2\zeta^2 - 1}{\omega_d}\sin \omega_d t \right) + t - \frac{2\zeta}{\omega_n} \right\} \tag{5.1-12}$$

The response, $x_2(t)$, to the second ramp starting at $\tau$ will be identical to the solution obtained for the first ramp, except that the force will be negative and the start of the solution needs to be delayed in time by $\tau$, which is accomplished by replacing $t$ with $(t - \tau)$. Making these changes in Eq. (5.1-12) we obtain the solution for the second ramp for $t \geq \tau$,

$$x_2(t) = -\frac{f_r}{m\omega_n^2 \tau}\left\{ e^{-\zeta\omega_n (t-\tau)}\left( \frac{2\zeta}{\omega_n}\cos \omega_d (t - \tau) + \frac{2\zeta^2 - 1}{\omega_d}\sin \omega_d (t - \tau) \right) + (t - \tau) - \frac{2\zeta}{\omega_n} \right\} \tag{5.1-13}$$

Note that if $\tau$ is set to zero, i.e., the ramp is moved back to the origin, Eq. (5.1-13) becomes Eq. (5.1-12), except for the minus sign that indicates it is sloping down. Also, when $t = \tau$, $x_2(t)$ will be equal to zero, as it should be. Summarizing, the response of the system during the period $0 \leq t < \tau$ will be $x_1(t)$, and for the period $t \geq \tau$ it will be the sum of $x_1(t)$ and $x_2(t)$.

Fig. 5.1-4 shows the response of the system to four forcing functions with different ramp durations: $\tau = 0.2$, $0.5$, $1.0$, and $0.5\pi$. The first item to note is that as the ramp duration becomes shorter, the peak response, normalized relative to the static deflection, approaches that of the step function, which would be 2.0 for a system with no damping. Another item to note is that the peak response for $\tau = 0.5\pi = 1.57$ is larger than for $\tau = 1$ (see value at $t \approx 1.7$). So shortening the duration of the ramp does not necessarily lead to a monotonically increasing response level. This is a critical observation that will be discussed in detail in the next section.

FIGURE 5.1-4 Response of a single-degree-of-freedom system to the ramp excitation shown in Fig. 5.1-3 for four different ramp durations, $\tau$. The system has a circular natural frequency $\omega_n = 2\pi$, and a critical damping ratio of $\zeta = 0.05$.

5.1.3 Ramp excitation and response behavior

The response of a system with zero damping to sinusoidal excitation at its natural frequency will be infinite; recall that when the frequency of excitation equals the natural frequency the dynamic amplification factor is $1/2\zeta$. However, a review of Eqs. (5.1-5) and (5.1-13) indicates that, at least for these cases, with any reasonable selection of force magnitude, the response to transient excitation is bounded, even if damping is set to zero. By setting the damping to zero we will be able to explore more easily the response of single-degree-of-freedom systems to transient excitation. Setting $\zeta$ equal to zero in Eqs. (5.1-12) and (5.1-13) yields the zero-damped response to a ramp (with a flattop) forcing function, i.e.,

$$0 \leq t < \tau \;\Rightarrow\; x_1(t) = \frac{f_r}{m\omega_n^2}\left( \frac{t}{\tau} - \frac{\sin \omega_n t}{\omega_n \tau} \right) \tag{5.1-14}$$

$$t \geq \tau \;\Rightarrow\; x_1(t) + x_2(t) = \frac{f_r}{m\omega_n^2}\left( 1 - \frac{\sin \omega_n t}{\omega_n \tau} + \frac{\sin \omega_n (t - \tau)}{\omega_n \tau} \right) \tag{5.1-15}$$
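The two branches of the zero-damped solution can be checked numerically. The sketch below (with assumed values $\omega_n = 2\pi$, $\tau = 0.5$, and $f_r = m = 1$) verifies the at-rest start, the continuity of the two branches at $t = \tau$, and that the undamped response oscillates about the static deflection $f_r/k$.

```python
import math

def ramp_response(t, tau, wn=2 * math.pi, fr=1.0, m=1.0):
    """Zero-damped response to a ramp with a flattop, Eqs. (5.1-14) and (5.1-15)."""
    c = fr / (m * wn**2)
    if t < tau:
        return c * (t / tau - math.sin(wn * t) / (wn * tau))
    return c * (1 - math.sin(wn * t) / (wn * tau)
                + math.sin(wn * (t - tau)) / (wn * tau))

tau, k = 0.5, (2 * math.pi) ** 2

# At-rest start and continuity of the two branches at t = tau
assert abs(ramp_response(0.0, tau)) < 1e-12
assert abs(ramp_response(tau - 1e-9, tau) - ramp_response(tau + 1e-9, tau)) < 1e-6

# With no damping the response oscillates about the static solution fr/k:
# its average over one natural period (Tn = 1) equals the static deflection
n = 10_000
avg = sum(ramp_response(5.0 + (i + 0.5) / n, tau) for i in range(n)) / n
print(round(avg * k, 6))  # 1.0
```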


FIGURE 5.1-5 Response of a single-degree-of-freedom system to a ramp (with flattop) excitation ($\tau = 0.2$), $\omega_n = 2\pi$, and three different values of damping: $\zeta = 0$, $0.01$, and $0.05$.

Fig. 5.1-5 compares the responses of systems with different levels of damping to a ramp (with a flattop) forcing function; two have low values of damping and the third has no damping. As can be ascertained, the peak responses are comparable; and, as it should be, the two cases with damping decay, whereas the zero-damped system continues to oscillate without any reduction in amplitude.

We observed above that shortening the ramp period, $\tau$, did not necessarily lead to a monotonically increasing response level and, indeed, it could result in a lower response (see Fig. 5.1-4). To understand this behavior, we need to establish the peak response as a function of $\tau$. We will do this for the period $t \geq \tau$ to demonstrate the approach, and then just state the results for the period $0 \leq t < \tau$. Since we are seeking a maximum, we will start by differentiating Eq. (5.1-15) with respect to $t$, and then set the result equal to zero:

$$\frac{d}{dt}\left\{ \frac{f_r}{m\omega_n^2}\left( 1 - \frac{\sin \omega_n t}{\omega_n \tau} + \frac{\sin \omega_n (t - \tau)}{\omega_n \tau} \right) \right\} = 0 \;\Rightarrow\; \cos \omega_n t(\cos \omega_n \tau - 1) + \sin \omega_n t(\sin \omega_n \tau) = 0 \tag{5.1-16}$$

Eq. (5.1-16) can be written as

$$\frac{\sin \omega_n t}{\cos \omega_n t} = \frac{1 - \cos \omega_n \tau}{\sin \omega_n \tau} \tag{5.1-17}$$

Since $\frac{\sin \omega_n t}{\cos \omega_n t} = \frac{\sin(\omega_n t + n\pi)}{\cos(\omega_n t + n\pi)}$, the peak response will correspond to either $\omega_n t$, which will be in quadrant I, or $\omega_n t + \pi$, which will be in quadrant III (see Fig. 5.1-6). We could differentiate Eq. (5.1-15) with respect to time to determine which result corresponds to the peak positive value, and which corresponds to the minimum value; or we could simply substitute each and see which one yields the larger result, which in this case will be the quadrant III solution.

FIGURE 5.1-6 Relationship of terms in Eq. (5.1-17).

As shown in Fig. 5.1-6, the hypotenuse of the triangle whose sides are defined by Eq. (5.1-17) is $\sqrt{(1 - \cos \omega_n \tau)^2 + (\sin \omega_n \tau)^2}$; therefore,

$$\sin \omega_n t = \frac{(1 - \cos \omega_n \tau)}{\sqrt{(1 - \cos \omega_n \tau)^2 + (\sin \omega_n \tau)^2}}, \qquad \cos \omega_n t = \frac{\sin \omega_n \tau}{\sqrt{(1 - \cos \omega_n \tau)^2 + (\sin \omega_n \tau)^2}} \tag{5.1-18}$$

Eq. (5.1-15) can be written as

$$\frac{x(t)k}{f_r} = 1 - \frac{\sin \omega_n t}{\omega_n \tau} + \frac{\sin \omega_n t \cos \omega_n \tau - \cos \omega_n t \sin \omega_n \tau}{\omega_n \tau} \tag{5.1-19}$$

Substituting the two equations from (5.1-18) yields the sought-after result,

$$\left. \frac{x(t)k}{f_r} \right|_{\text{peak}} = 1 + \frac{2 - 2\cos \omega_n \tau}{\omega_n \tau \sqrt{(1 - \cos \omega_n \tau)^2 + (\sin \omega_n \tau)^2}} = 1 + \frac{2 - 2\cos \omega_n \tau}{\omega_n \tau \sqrt{2 - 2\cos \omega_n \tau}} = 1 + \frac{\sqrt{2 - 2\cos \omega_n \tau}}{\omega_n \tau} \tag{5.1-20}$$
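Eq. (5.1-20) is easy to explore numerically. The sketch below ($\omega_n = 2\pi$ assumed, so that $T_n = 1$) confirms the dips to the static solution when $\tau$ is a multiple of the natural period, and the limits for very short and very long ramps.

```python
import math

def peak_norm(wn_tau):
    """Normalized peak response to a ramp with a flattop, Eq. (5.1-20)."""
    return 1 + math.sqrt(2 - 2 * math.cos(wn_tau)) / wn_tau

wn = 2 * math.pi   # natural period Tn = 1

# Dips to the static solution when tau is a multiple of the natural period
print(round(peak_norm(wn * 1.0), 6), round(peak_norm(wn * 2.0), 6))  # 1.0 1.0
# A very short ramp approaches the step-function amplification factor of 2
print(round(peak_norm(wn * 1e-4), 3))  # 2.0
# A long ramp (not a multiple of Tn) stays close to the static solution
print(peak_norm(wn * 50.3))
```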


FIGURE 5.1-7 Peak responses to ramp excitation (with a flattop) of a single-degree-of-freedom system ($\zeta = 0$) plotted against the ratio of the ramp duration to the natural period of vibration of the system. The peak response has been normalized with respect to the static response.

Fig. 5.1-7 shows the peak response to a ramp function (Eq. 5.1-20) plotted against the ratio of the ramp period to the natural period of vibration; recall that $T_n = 1/f_n$ and $f_n = \omega_n/2\pi$. The first item to note is the dips in the value of the peak response. The magnitudes at the bottom of the dips are equal to the static solution, i.e., no dynamic amplification. These occur at values of $\tau$ that coincide with the natural period of vibration and its multiples. This is a critical observation because it implies that for ramp periods that exceed the natural period of vibration, the peak response does not always increase with decreasing ramp period. Once the ramp period is less than the natural period of vibration, the peak response increases with decreasing ramp period. Also, as the ramp period approaches zero, which would correspond to a step function, the expected amplification factor of two is reached. In addition, as the ramp period increases relative to the natural period of vibration, the peak response approaches the static solution.

5.1.4 Boxcar excitation

Fig. 5.1-8 shows a boxcar forcing function (solid line). As with the ramp function with a flattop, the problem needs to be solved in two parts. The first will provide the solution for 0  t < s, and the second for the period t  s.

5.1 Ramp, step, and boxcar excitation

FIGURE 5.1-8 Boxcar forcing function (solid line) with a period τ. The dashed lines are the forcing functions for the period t ≥ τ; note that the top dashed line is the continuation of the step function that started at t = 0, and the bottom dashed line is the step function that starts at t = τ.

For the second period we will need to subtract, starting at τ, a solution that will cancel the step function that started at t = 0. This is shown in Fig. 5.1-8, where the bottom dashed line is the forcing function that will cause the initial step function to reduce to a value of zero starting at τ. For the period t < τ, the solution, x₁(t), is given by Eq. (5.1-5); note that we will substitute f_b for f_s to differentiate between the boxcar and step excitations. For the period t ≥ τ, the solution, x₂(t), will be for the negative step function delayed in time by τ, i.e., −x₁(t − τ); hence,

$$
\begin{aligned}
x(t) = x_1(t) + x_2(t) &= \frac{f_b}{m\omega_n^2}\left\{1 - e^{-\zeta\omega_n t}\left(\cos\omega_d t + \frac{\zeta}{\sqrt{1-\zeta^2}}\sin\omega_d t\right)\right\} \\
&\quad - \frac{f_b}{m\omega_n^2}\left\{1 - e^{-\zeta\omega_n (t-\tau)}\left(\cos\omega_d (t-\tau) + \frac{\zeta}{\sqrt{1-\zeta^2}}\sin\omega_d (t-\tau)\right)\right\}
\end{aligned} \tag{5.1-21}
$$

Fig. 5.1-9 shows the responses of a single-degree-of-freedom system to boxcar functions of different period, τ. Fig. 5.1-9A shows the response to a boxcar whose period was selected to be less than the natural period of vibration of the system (τ = 0.5T_n), and to a boxcar whose period was set to be longer than the natural period of vibration (τ = 3.5T_n).




FIGURE 5.1-9 Response of a single-degree-of-freedom system to boxcar forcing functions of magnitude f_b, and durations: (A) τ = 0.5, 3.5, and (B) τ = 15. The natural period of vibration of the system is 1, and the critical damping ratio is ζ = 0.05.

Note that for the shorter boxcar the response appears to be similar to that of a system whose motion was initiated by an initial velocity. For the longer boxcar, the response until the end of the boxcar period is the same as it would be for a step function of the same magnitude; that is, an initial overshoot and then a decreasing response toward the static solution. The response after the boxcar ends will depend on where in the response cycle the force stops. From this point forward there is no external force and the solution corresponds to a system whose motion was initiated by the displacement and velocity at the instant of time the force stopped. Therefore, we can obtain the response for t ≥ τ by computing the displacement and velocity at the instant the boxcar force ends, i.e., x₁(τ) and ẋ₁(τ), and then using these as the initial conditions to an unforced problem. If the boxcar were sufficiently long compared to the natural period of vibration of the system, such that the number of oscillations was enough for the dynamic response to decay to nearly the static solution, then the response after the boxcar ends would correspond to an initial displacement solution (see Fig. 5.1-9B).
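The two-part solution of Eq. (5.1-21) can be sketched in code as the difference of two step responses. A minimal illustration (the function names are ours; the defaults T_n = 1 and ζ = 0.05 match the values quoted for the figure):

```python
import numpy as np

def boxcar_response(t, tau, fb=1.0, m=1.0, Tn=1.0, zeta=0.05):
    """Displacement response of a damped SDOF to a boxcar force of magnitude
    fb and duration tau (Eq. 5.1-21): the step response started at t = 0
    minus the same step response delayed by tau."""
    wn = 2.0 * np.pi / Tn
    wd = wn * np.sqrt(1.0 - zeta**2)

    def step(t):  # step response, Eq. (5.1-5); zero for t < 0
        s = 1.0 - np.exp(-zeta * wn * t) * (
            np.cos(wd * t) + zeta / np.sqrt(1.0 - zeta**2) * np.sin(wd * t))
        return np.where(t >= 0.0, fb / (m * wn**2) * s, 0.0)

    return step(t) - step(t - tau)

t = np.linspace(0.0, 10.0, 2001)
x_short = boxcar_response(t, tau=0.5)   # tau = 0.5 Tn
x_long = boxcar_response(t, tau=3.5)    # tau = 3.5 Tn
```

For the longer boxcar, the response during the forced phase overshoots toward roughly twice the static deflection, as the discussion above describes.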


FIGURE 5.1-10 Response of a single-degree-of-freedom system to boxcar forcing functions of varying duration τ. The magnitude of the force is normalized so that the area under the boxcar remains constant as the duration is decreased. The system has a natural period of vibration of 1, and a critical damping ratio of ζ = 0.05.

5.1.5 Boxcars of short time duration

We noted above that as the duration of the boxcar became shorter, the resulting response appeared to be similar to that of a system whose motion was initiated by an initial velocity. Fig. 5.1-10 shows the response of the single-degree-of-freedom system to boxcar forcing functions of ever decreasing duration τ. Note that we maintained the area under the boxcar constant by increasing the magnitude f_b as τ was reduced. In other words, we replaced f_b by f_b/|τ|. In the next section, we will show that a system subjected to a very short-duration impulsive force will respond as though it had an initial velocity, the value of which is the magnitude of the force times the short time it acts, divided by the mass of the system. For the shortest duration boxcar, τ = 0.01, the velocity would be ẋ(0) = (f_b/|τ|)τ/m = (3.948/0.01)(0.01)/0.1 = 39.48. In Fig. 5.1-10, the straight line from the origin has this slope; and as we can ascertain, the slope of x(t), for τ = 0.01, at the origin is very close to this value. The next section will provide the theoretical foundation for why this should be the case.

5.2 Impulse, impulsive forces, and superposition

Consider the damped, single-degree-of-freedom system in Fig. 5.2-1A. The force, f(t), acts on the mass starting at t = 0 for an extremely short time, ε.




FIGURE 5.2-1 (A) Single-degree-of-freedom system. (B) A very short-duration impulsive force of magnitude F_I/|ε| that acts for a very short time, ε.

Indeed, ε is so short that the force will stop before the mass will have had a chance to displace appreciably because of its inertia. These very short-duration forces are referred to as impulsive forces. Impulsive forces change the momentum of a system without appreciably changing the displacement during the period they act. Experience tells us that when a force acts on a spring-mass system, the mass will move and continue to oscillate after the force is no longer acting. If the force is sufficiently short, the instant the force ceases to act the mass will not have displaced appreciably and the spring will, therefore, not be deformed much. Hence, for engineering purposes we can assume that the amount of strain energy stored in the spring is negligible. Since there is negligible strain energy in the spring and the force is no longer acting, but the mass proceeds to oscillate, we must conclude that the force imparted kinetic energy to the mass. In other words, it changed the velocity of the mass from zero to a nonzero value, i.e., it changed the momentum of the mass. Thus, for the purposes of vibration analysis, we can treat this as a system whose motion is due to an initial velocity, $\lim_{\varepsilon\to 0}\dot{x}(0+\varepsilon)$, the magnitude of which has to be proportional to the short-duration force.

Starting with Newton's Second Law we can derive the equation of motion for the system in Fig. 5.2-1A for the period during which the impulsive force acts. The mass is assumed constant. For practical purposes, this is prior to any appreciable motion of the mass since ε will approach zero in the limit; hence, the spring and damper forces can be neglected during this short time period and we obtain


$$
\frac{d}{dt}\big(m\dot{x}(t)\big) = f(t), \qquad m\,\frac{d}{dt}\dot{x}(t) = f(t), \qquad m\,d\dot{x}(t) = f(t)\,dt \tag{5.2-1}
$$

Integrating Eq. (5.2-1) with respect to time will yield the customary definition of impulse, I,

$$
I = \int_{t_1}^{t_2} m\,d\dot{x}(t) = \int_{t_1}^{t_2} f(t)\,dt, \qquad m\big[\dot{x}(t_2) - \dot{x}(t_1)\big] = \int_{t_1}^{t_2} f(t)\,dt \tag{5.2-2}
$$

From Eq. (5.2-2) we can conclude that the impulse is the change in momentum due to a force acting over a given time. Note that we can effect the same change in momentum by applying a larger force over a shorter period of time, or a smaller force over a longer period. We will be interested in the former, since the latter could also induce a change in the displacement of the mass while the force is acting. Substituting for f(t) in Eq. (5.2-2), integrating over the time period that the impulsive force, F_I/|ε|, acts (see Fig. 5.2-1B), and then letting ε approach zero in the limit yields

$$
I = \lim_{\varepsilon\to 0}\int_0^\varepsilon m\,d\dot{x}(t) = \lim_{\varepsilon\to 0}\int_0^\varepsilon \frac{F_I}{|\varepsilon|}\,dt \tag{5.2-3}
$$

and

$$
\begin{aligned}
\lim_{\varepsilon\to 0}\big[m\dot{x}(t)\big]_0^\varepsilon &= \lim_{\varepsilon\to 0}\left[\frac{F_I}{|\varepsilon|}\,t\right]_0^\varepsilon \\
m\lim_{\varepsilon\to 0}\big[\dot{x}(\varepsilon) - \dot{x}(0)\big] &= \lim_{\varepsilon\to 0}\big[F_I - 0\big] \\
m\lim_{\varepsilon\to 0}\dot{x}(\varepsilon) &= F_I \\
\dot{x}(0^+) &= \frac{F_I}{m}
\end{aligned} \tag{5.2-4}
$$




_ Since the system is at rest until the impulsive force, FI =jεj, acts, xð0Þ in the second equation of (5.2-4) will be zero at time t ¼ 0. Hence, we designated _ þÞ to minimize confusion the velocity after the impulsive force acted as xð0 in the above derivation. However, since ε will approach zero in the limit, we can consider 0þ close enough to 0 so that in subsequent discussions we will _ return to the notation xð0Þ to indicate the velocity due to an impulsive force at time t ¼ 0. Also, note that the units of FI will be force times time, since the normalization factor jεj has units of time. This will result in the area under the force versus time curve having a magnitude of FI when integrated over a period ε. In Chapter 2, we derived the solution for the vibration response of a damped, single-degree-of-freedom system whose motion was initiated by an initial displacement and an initial velocity. Since an impulsive force produces a change in momentum, which for a stationary system with constant _ mass results in a velocity, xð0Þ ¼ FI =m, at time t ¼ 0, but negligible displacement, i.e., xð0Þ ¼ 0, we can use the solution derived in Chapter 2 to define the response of a damped, single-degree-of-freedom system subjected to an impulse FI at time t ¼ 0, i.e.,   sin ud t zun t _ xð0Þ xðtÞ ¼ e ud (5.2-5)   zun t FI sin ud t ¼e m ud It is customary, and it will facilitate future discussion, to write Eq. (5.2-5) as xðtÞ ¼ FI hðtÞ

(5.2-6)

where hðtÞ is referred to as the unit impulse response function and is given by   ezun t sin ud t hðtÞ ¼ (5.2-7) m ud Fig. 5.2-2 shows the response of a single-degree-of-freedom system acted on by two impulsive forces, the durations of which are considerably shorter that the natural period of vibration of the system. One force acts at 1.0 s and the other at 3.0 s. The first force has a magnitude of 1.0 (units of force) and the second has a magnitude of 0.5 (units of force); both forces act for 0.1 s and the mass of the system is 0.1 (units of mass). Therefore, the



FIGURE 5.2-2 Response of a damped, single-degree-of-freedom system excited by two 0.1 s duration forces acting at 1.0 and 3.0 s. The forces have magnitudes of 1.0 and 0.5, respectively; the mass is 0.1.

first impulse has a magnitude of 0.1 (units of force times units of time) and the second a magnitude of 0.05 (units of force times units of time). Dividing these by the mass yields the initial velocities. The dotted lines show the response due to each impulsive force as though the other did not exist. Note that at the start of motion the slopes of the response curves are equal to the initial velocities, (F_I)_j/m. The solid line is the total response after the second impulse acts, which because we are dealing with a linear system is the superposition (sum) of the responses due to each impulsive force. The responses in Fig. 5.2-2, in equation form, are

$$
\begin{aligned}
x_1(t) &= e^{-\zeta\omega_n (t-\tau_1)}\,\frac{(F_I)_1}{m}\left(\frac{\sin\omega_d (t-\tau_1)}{\omega_d}\right) \\
x_2(t) &= e^{-\zeta\omega_n (t-\tau_2)}\,\frac{(F_I)_2}{m}\left(\frac{\sin\omega_d (t-\tau_2)}{\omega_d}\right)
\end{aligned} \tag{5.2-8}
$$

and

$$
x(t) = x_1(t) + x_2(t) = e^{-\zeta\omega_n (t-\tau_1)}\,\frac{(F_I)_1}{m}\left(\frac{\sin\omega_d (t-\tau_1)}{\omega_d}\right) + e^{-\zeta\omega_n (t-\tau_2)}\,\frac{(F_I)_2}{m}\left(\frac{\sin\omega_d (t-\tau_2)}{\omega_d}\right) \quad \text{for } t \geq 3 \tag{5.2-9}
$$
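Eqs. (5.2-8) and (5.2-9) are straightforward to evaluate. A short sketch reproducing the setup described for Fig. 5.2-2 (impulse magnitudes, times, and mass taken from the text; the helper name `h` is ours, implementing Eq. (5.2-7)):

```python
import numpy as np

def h(t, m=0.1, Tn=1.0, zeta=0.05):
    """Unit impulse response function, Eq. (5.2-7); zero before t = 0.
    Tn and zeta are illustrative assumptions for this sketch."""
    wn = 2.0 * np.pi / Tn
    wd = wn * np.sqrt(1.0 - zeta**2)
    return np.where(t >= 0.0,
                    np.exp(-zeta * wn * t) * np.sin(wd * t) / (m * wd),
                    0.0)

# Two impulses as in Fig. 5.2-2: forces of 1.0 and 0.5 acting for 0.1 s each,
# i.e., impulses of 0.1 and 0.05, applied at tau1 = 1.0 s and tau2 = 3.0 s.
FI1, FI2, tau1, tau2 = 0.1, 0.05, 1.0, 3.0
t = np.linspace(0.0, 6.0, 3001)
x = FI1 * h(t - tau1) + FI2 * h(t - tau2)   # superposition, Eq. (5.2-9)
```

The slope of the first dotted curve just after τ₁ should equal the initial velocity (F_I)₁/m = 0.1/0.1 = 1.0, as the text notes.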



Note that each response is applicable for the period after the corresponding impulsive force acts, and the total response given by Eq. (5.2-9) is applicable only after the second force acts. In Eq. (5.2-9), τ₁ and τ₂ correspond to the times of each impulsive force, 1 and 3 s, respectively. For a system subjected to a large number, N, of impulsive forces, we can write Eq. (5.2-9) in a more generic form, i.e.,

$$
x(t) = \sum_{j=1}^{N} e^{-\zeta\omega_n (t-\tau_j)}\,\frac{(F_I)_j}{m}\left(\frac{\sin\omega_d (t-\tau_j)}{\omega_d}\right) = \sum_{j=1}^{N} (F_I)_j\,h(t-\tau_j) \tag{5.2-10}
$$

Eq. (5.2-10) provides the response for the periods after the Nth impulsive force.

5.3 Convolution and Duhamel's integrals

Fig. 5.3-1 shows a forcing function (solid line) that cannot be described by analytic functions and, therefore, a closed-form solution for the response cannot be derived. If we represent the forcing function, f(t), by a collection of impulsive forces, one of which is shown in the figure, we could use Eq. (5.2-10) to obtain a good approximation of the response. If we then let the width of each impulsive force become infinitesimally small, and

FIGURE 5.3-1 One of an infinite number of impulsive forces (bar at τ_j) used to describe the transient forcing function f(t) (solid line). Dashed line shows the impulse response function scaled by the magnitude of the impulse, f(τ_j)Δτ.


then sum the contribution from each by integrating, we will obtain the sought-after result. We start by defining the impulsive force f(τ_j) that corresponds to time τ_j. Multiplying this force by Δτ yields the associated impulse,

$$
(F_I)_j = f(\tau_j)\,\Delta\tau \tag{5.3-1}
$$

(5.3-1)

Substituting into Eq. (5.2-10) gives

$$
x(t) = \sum_{j=1}^{N}\big(f(\tau_j)\,\Delta\tau\big)\,h(t-\tau_j) \tag{5.3-2}
$$

In the limit, Δτ → dτ and the summation becomes an integral,

$$
x(t) = \int_0^t f(\tau)\,h(t-\tau)\,d\tau \tag{5.3-3}
$$

Eq. (5.3-3) is referred to as the convolution integral, since the function f(τ) acts to modify the function h(t − τ). If we substitute Eq. (5.2-7) for h(t − τ), we obtain Duhamel's integral,

$$
x(t) = \int_0^t f(\tau)\,\frac{e^{-\zeta\omega_n (t-\tau)}}{m}\left(\frac{\sin\omega_d (t-\tau)}{\omega_d}\right)d\tau \tag{5.3-4}
$$

Eq. (5.3-4) provides the response of a damped, single-degree-of-freedom system subjected to a forcing function f(t). If an analytic solution is not possible, Eq. (5.3-4) must be solved numerically; this is discussed extensively in Chapter 8.

5.3.1 Step function response using Duhamel's integral
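Before carrying out the integration analytically, it is worth noting that Duhamel's integral can be evaluated numerically on a uniform time grid. A minimal sketch (direct trapezoidal rule, O(N²) for clarity; all parameter values are illustrative), checked against the closed-form step response of Eq. (5.1-5):

```python
import numpy as np

def duhamel(f, t, m=1.0, Tn=1.0, zeta=0.05):
    """Trapezoidal-rule evaluation of Duhamel's integral, Eq. (5.3-4),
    for force samples f on a uniform time grid t."""
    wn = 2.0 * np.pi / Tn
    wd = wn * np.sqrt(1.0 - zeta**2)
    dt = t[1] - t[0]
    x = np.zeros_like(t)
    for i in range(1, len(t)):
        g = f[:i + 1] * np.exp(-zeta * wn * (t[i] - t[:i + 1])) \
            * np.sin(wd * (t[i] - t[:i + 1])) / (m * wd)
        x[i] = dt * (0.5 * g[0] + g[1:i].sum() + 0.5 * g[i])
    return x

# Check against the closed-form step response, Eq. (5.1-5)
fs, m, Tn, zeta = 1.0, 1.0, 1.0, 0.05
wn = 2.0 * np.pi / Tn
wd = wn * np.sqrt(1.0 - zeta**2)
t = np.linspace(0.0, 5.0, 2001)
x_num = duhamel(np.full_like(t, fs), t, m, Tn, zeta)
x_exact = fs / (m * wn**2) * (1.0 - np.exp(-zeta * wn * t)
          * (np.cos(wd * t) + zeta / np.sqrt(1.0 - zeta**2) * np.sin(wd * t)))
print(np.max(np.abs(x_num - x_exact)))   # small discretization error
```

More efficient recursive schemes exist (and are discussed in Chapter 8); the direct form above simply mirrors the integral term by term.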

Recall that a step forcing function is given by f(t) = f_s (see Fig. 5.1-1). Substituting into Eq. (5.3-4) gives

$$
x(t) = \int_0^t f_s\,\frac{e^{-\zeta\omega_n (t-\tau)}}{m}\left(\frac{\sin\omega_d (t-\tau)}{\omega_d}\right)d\tau = \frac{f_s}{m\omega_d}\,e^{-\zeta\omega_n t}\int_0^t e^{\zeta\omega_n \tau}\sin\omega_d (t-\tau)\,d\tau \tag{5.3-5}
$$




Using the standard trigonometric relationship for a sine function we obtain

$$
x(t) = \frac{f_s}{m\omega_d}\,e^{-\zeta\omega_n t}\left\{\sin\omega_d t\int_0^t e^{\zeta\omega_n \tau}\cos\omega_d \tau\,d\tau - \cos\omega_d t\int_0^t e^{\zeta\omega_n \tau}\sin\omega_d \tau\,d\tau\right\} \tag{5.3-6}
$$

Solving the integrals yields

$$
x(t) = \frac{f_s}{m\omega_d}\,\frac{e^{-\zeta\omega_n t}}{\zeta^2\omega_n^2 + \omega_d^2}\left\{\Big[e^{\zeta\omega_n \tau}\sin\omega_d t\,(\omega_d\sin\omega_d \tau + \zeta\omega_n\cos\omega_d \tau)\Big]_0^t - \Big[e^{\zeta\omega_n \tau}\cos\omega_d t\,(\zeta\omega_n\sin\omega_d \tau - \omega_d\cos\omega_d \tau)\Big]_0^t\right\} \tag{5.3-7}
$$

Applying the integration limits, recalling that $\omega_d = \omega_n\sqrt{1-\zeta^2}$, and performing the indicated algebraic manipulations produces

$$
\begin{aligned}
x(t) &= \frac{f_s\,e^{-\zeta\omega_n t}}{m\omega_d\big(\zeta^2\omega_n^2 + \omega_n^2(1-\zeta^2)\big)}\left\{\omega_d\,e^{\zeta\omega_n t}\big((\sin\omega_d t)^2 + (\cos\omega_d t)^2\big) - (\zeta\omega_n\sin\omega_d t + \omega_d\cos\omega_d t)\right\} \\
&= \frac{f_s\,e^{-\zeta\omega_n t}}{m\omega_d\,\omega_n^2}\left\{\omega_d\,e^{\zeta\omega_n t} - (\zeta\omega_n\sin\omega_d t + \omega_d\cos\omega_d t)\right\} \\
&= \frac{f_s}{m\omega_d\,\omega_n^2}\left\{\omega_d - e^{-\zeta\omega_n t}(\zeta\omega_n\sin\omega_d t + \omega_d\cos\omega_d t)\right\}
\end{aligned} \tag{5.3-8}
$$

Finally, taking ω_d inside the braces gives

$$
x(t) = \frac{f_s}{m\omega_n^2}\left\{1 - e^{-\zeta\omega_n t}\left(\cos\omega_d t + \frac{\zeta}{\sqrt{1-\zeta^2}}\sin\omega_d t\right)\right\} \tag{5.3-9}
$$

Eq. (5.3-9) is identical to Eq. (5.1-5). It is reasonable to extrapolate this result for the step function transient excitation to other excitation functions. Accordingly, we conclude that Duhamel's integral provides the



response to excitation f(t), for initial conditions of zero for both the displacement and velocity. If we have initial conditions, we would need to solve for the response due to the initial conditions separately (solution to the homogeneous equation) and add the result to that obtained with Duhamel's integral. This is discussed in the next section.

5.3.2 Duhamel's integral and initial conditions

Duhamel’s integral corresponds to zero initial conditions, i.e., xð0Þ ¼ 0 and _ xð0Þ ¼ 0. This can be seen, for example, by computing the displacement and velocity at t ¼ 0 for the response computed in the preceding section. For the general case we must apply the initial conditions to the total response, which is xðtÞ ¼ xh ðtÞ þ xp ðtÞ zun t

¼e



 Z t zun ðtsÞ  e sin u ðt  sÞ d e ud t þ Bsin e ud t þ f ðsÞ ds Acos ud m 0 (5.3-10) 

Applying the initial displacement, while noting the integration limits, produces xð0Þ ¼ xh ð0Þ þ xp ð0Þ zun t

¼e

  Ae þ

Z

0 0

    ezun s sin ud ð sÞ f ðsÞ ds ¼ ezun t Ae þ 0 ud m (5.3-11)

Proceeding to compute the velocity we obtain _ ¼ x_h ðtÞ þ x_ p ðtÞ xðtÞ   d e ud t þ Bsin e ud t þ ¼ ezun t Acos dt

Z 0

t

  d ezun ðtsÞ sin ud ðt  sÞ f ðsÞ ds dt ud m (5.3-12)



Computing the second term gives

$$
\int_0^t \frac{d}{dt}\left[f(\tau)\,\frac{e^{-\zeta\omega_n (t-\tau)}}{m}\left(\frac{\sin\omega_d (t-\tau)}{\omega_d}\right)\right]d\tau = \int_0^t f(\tau)\,\frac{e^{-\zeta\omega_n (t-\tau)}}{m}\cos\omega_d (t-\tau)\,d\tau - \zeta\omega_n\int_0^t f(\tau)\,\frac{e^{-\zeta\omega_n (t-\tau)}}{m}\left(\frac{\sin\omega_d (t-\tau)}{\omega_d}\right)d\tau \tag{5.3-13}
$$

and

$$
\dot{x}(0) = \frac{d}{dt}\Big[e^{-\zeta\omega_n t}\big(\tilde{A}\cos\omega_d t + \tilde{B}\sin\omega_d t\big)\Big]\bigg|_{t=0} - \zeta\omega_n\int_0^0 f(\tau)\,\frac{e^{\zeta\omega_n \tau}}{m}\left(\frac{\sin\omega_d (-\tau)}{\omega_d}\right)d\tau + \frac{1}{m}\int_0^0 f(\tau)\,e^{\zeta\omega_n \tau}\cos(\omega_d \tau)\,d\tau = \frac{d}{dt}\Big[e^{-\zeta\omega_n t}\big(\tilde{A}\cos\omega_d t + \tilde{B}\sin\omega_d t\big)\Big]\bigg|_{t=0} \tag{5.3-14}
$$

Since Duhamel's integral and its derivative at t = 0 will be equal to zero, the homogeneous solution coefficients are independent of Duhamel's integral and correspond to the unforced free-vibration problem worked in Chapter 2. Accordingly, the complete solution is

$$
x(t) = e^{-\zeta\omega_n t}\left\{x(0)\left(\cos\omega_d t + \frac{\zeta}{\sqrt{1-\zeta^2}}\sin\omega_d t\right) + \dot{x}(0)\left(\frac{\sin\omega_d t}{\omega_d}\right)\right\} + \int_0^t f(\tau)\,\frac{e^{-\zeta\omega_n (t-\tau)}}{m}\left(\frac{\sin\omega_d (t-\tau)}{\omega_d}\right)d\tau \tag{5.3-15}
$$

5.4 Response Spectra and Shock Response Spectra

Response Spectra and Shock Response Spectra are useful tools for the analysis of vibration data, and in particular the analysis of very short-duration transients. In addition, Response Spectra can be used in the vibration


analysis of structures where approximate and conservative solutions are acceptable. In this section, we will describe how Response Spectra and Shock Response Spectra are developed, and how they can be used to understand the behavior of vibrating systems; in Volume II we will describe how Response Spectra can be used to analyze the response of buildings subjected to earthquake ground motion. It should be noted that "Response Spectra" and "Shock Response Spectra" are terms that refer to the same tool. "Response Spectra" is the term more commonly used for the analysis of vibration data where the modes of vibration of a system are excited, whereas "Shock Response Spectra" refers to the analysis of response data associated with short-duration stress wave propagation in a structure. Hence, Response Spectra are used, for example, to analyze the relatively low frequency portion of launch vehicle liftoff and atmospheric flight vibration responses, or the response of buildings to earthquakes. Shock Response Spectra would be used in the analysis of responses to high frequency shocks caused by explosive ordnance devices, for example. There are also differences in the practical manner in which these spectra are calculated and, therefore, the techniques used to compute the spectra need to account for the numerical issues associated with each.

Response Spectra (RS) and Shock Response Spectra (SRS) provide the peak response of a single-degree-of-freedom system as a function of its natural frequency and critical damping ratio. RS and SRS of acceleration time histories are computed as a base excitation problem, whereas RS and SRS of force time histories are computed as a base-fixed forced vibration problem. We will first derive the RS/SRS for a prescribed base acceleration, ÿ_B(t), where ÿ_B(t) could be, for example, the acceleration time history measured within a vibrating system, such as an airplane in flight, or ground acceleration measured during an earthquake.

In Chapter 2, we derived the equation of motion of a single-degree-of-freedom system excited by base motion, ÿ_B(t):

$$
\ddot{y}_e(t) + 2\zeta\omega_n\dot{y}_e(t) + \omega_n^2 y_e(t) = -\ddot{y}_B(t) \tag{5.4-1}
$$

In Eq. (5.4-1), y_e(t) defines the motion of the mass relative to a base that can undergo acceleration and, therefore, is not a coordinate in an inertial reference frame (see Chapter 1). The absolute acceleration response, however, would be in an inertial reference frame provided it is computed




as ÿ(t) = ÿ_B(t) + ÿ_e(t). Since ÿ_B(t) will usually be a time history for which closed-form solutions will not exist, Eq. (5.4-1) must be solved by means such as Duhamel's integral (Eq. 5.3-4) and numerical integration,

$$
y_e(t) = -\int_0^t \ddot{y}_B(\tau)\,e^{-\zeta\omega_n (t-\tau)}\left(\frac{\sin\omega_d (t-\tau)}{\omega_d}\right)d\tau \tag{5.4-2}
$$

Differentiating Eq. (5.4-2) with respect to time yields the relative velocity response, ẏ_e(t),

$$
\dot{y}_e(t) = -\int_0^t \ddot{y}_B(\tau)\,e^{-\zeta\omega_n (t-\tau)}\left(-\zeta\omega_n\frac{\sin\omega_d (t-\tau)}{\omega_d} + \cos\omega_d (t-\tau)\right)d\tau \tag{5.4-3}
$$

For a given critical damping ratio, ζ, the above process is repeated for a range of natural frequencies, ω_n; recall that $\omega_d = \omega_n\sqrt{1-\zeta^2}$. Let S_d(f_n, ζ) be the displacement RS/SRS, where f_n = ω_n/2π. Then

$$
S_d(f_n, \zeta) = \max|y_e(t)| \tag{5.4-4}
$$

Likewise, we define the RS/SRS for the computed relative velocity, S_v(f_n, ζ), as

$$
S_v(f_n, \zeta) = \max|\dot{y}_e(t)| \tag{5.4-5}
$$

To obtain the absolute acceleration RS/SRS, S_a(f_n, ζ), we substitute the computed relative displacement, Eq. (5.4-2), and the computed relative velocity, Eq. (5.4-3), into Eq. (5.4-1),

$$
S_a(f_n, \zeta) = \max|\ddot{y}(t)| = \max\big|\ddot{y}_e(t) + \ddot{y}_B(t)\big| = \max\big|2\zeta\omega_n\dot{y}_e(t) + \omega_n^2 y_e(t)\big| \tag{5.4-6}
$$

Fig. 5.4-1A shows an acceleration time history measured during the 1940 El Centro earthquake (U.S. Geological Survey). An inspection of this time history reveals important information: the peak values were less than 0.4g, and the ground motion lasted for at least 30 s. However, we are not able to discern information that may tell us how a structure with a particular natural frequency might have responded. Fig. 5.4-1B shows the RS we computed for this earthquake time history. Recall that an RS provides the peak responses of single-degree-of-freedom systems as a function of natural frequency and damping. Hence, we are able to ascertain from the RS that the earthquake had more energy in the low frequency range than


FIGURE 5.4-1 (A) Ground acceleration time history measured during the 1940 El Centro earthquake. (B) Response Spectrum (RS) of time history in (A), computed with ζ = 0.05.

above roughly 10 Hz, since the single-degree-of-freedom systems with low natural frequencies had the higher responses.

Fig. 5.4-2 shows an acceleration time history measured during a pyrotechnic event, and the corresponding SRS. Ordnance or pyrotechnic events are extremely short-duration events, note the time axis, but produce very high accelerations at high frequencies; several thousand g would not be unusual. These events generally produce stress waves that propagate through a system and, therefore, the measured accelerations are representative of local rather than global responses. An SRS of the acceleration time history will provide information on the energy content of the vibration at the accelerometer location. For our example, the peak energy is between 10 and

FIGURE 5.4-2 (A) Pyrotechnic shock response acceleration time history. (B) Shock Response Spectrum (SRS) for time history in (A), computed with ζ = 0.05.




15,000 Hz. Although the acceleration magnitudes are extremely high, the corresponding displacements will be very small because the vibrations are at high frequencies.

The preceding discussion introduced the Response Spectra and Shock Response Spectra for acceleration base motion. There is also value in computing Response Spectra for force time histories. To accomplish this we solve for the vibration response of a single-degree-of-freedom system fixed to "ground" and driven by a force, f(t), whose Response Spectra we seek. The equation of motion for this type of system was derived in Chapter 2:

$$
\ddot{x}(t) + 2\zeta\omega_n\dot{x}(t) + \omega_n^2 x(t) = \frac{f(t)}{m} \tag{5.4-7}
$$

Eq. (5.4-7) is best solved numerically with Duhamel's integral, and since we are now dealing with a system fixed to "ground" the computed responses, x(t), ẋ(t), and ẍ(t), will be in absolute coordinates and in an inertial reference frame; the displacement response is

$$
x(t) = \int_0^t f(\tau)\,\frac{e^{-\zeta\omega_n (t-\tau)}}{m}\left(\frac{\sin\omega_d (t-\tau)}{\omega_d}\right)d\tau \tag{5.4-8}
$$

Using the same definitions as for the base excitation case, the displacement Response Spectrum for a force is

$$
S_d(f_n, \zeta) = \max\left|\int_0^t f(\tau)\,\frac{e^{-\zeta\omega_n (t-\tau)}}{m}\left(\frac{\sin\omega_d (t-\tau)}{\omega_d}\right)d\tau\right| \tag{5.4-9}
$$

The corresponding velocity spectrum, therefore, is

$$
S_v(f_n, \zeta) = \max\left|\int_0^t f(\tau)\,\frac{e^{-\zeta\omega_n (t-\tau)}}{m}\left(-\zeta\omega_n\frac{\sin\omega_d (t-\tau)}{\omega_d} + \cos\omega_d (t-\tau)\right)d\tau\right| \tag{5.4-10}
$$

Substituting the computed displacement, x(t), and velocity, ẋ(t), into Eq. (5.4-7) yields the acceleration response

$$
S_a(f_n, \zeta) = \max|\ddot{x}(t)| = \max\left|\frac{f(t)}{m} - 2\zeta\omega_n\dot{x}(t) - \omega_n^2 x(t)\right| \tag{5.4-11}
$$
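The force Response Spectrum of Eq. (5.4-9) can be sketched by sweeping Eq. (5.4-8) over a set of natural frequencies. A deliberately simple, O(N²)-per-frequency evaluation (the boxcar test force and all parameter values are illustrative assumptions):

```python
import numpy as np

def force_rs_displacement(f, t, freqs, m=1.0, zeta=0.05):
    """Displacement Response Spectrum of a force time history, Eq. (5.4-9):
    peak |x(t)| of a base-fixed SDOF at each natural frequency fn, with x(t)
    from a trapezoidal-rule evaluation of Duhamel's integral, Eq. (5.4-8)."""
    dt = t[1] - t[0]
    Sd = []
    for fn in freqs:
        wn = 2.0 * np.pi * fn
        wd = wn * np.sqrt(1.0 - zeta**2)
        peak = 0.0
        for i in range(1, len(t)):
            g = f[:i + 1] * np.exp(-zeta * wn * (t[i] - t[:i + 1])) \
                * np.sin(wd * (t[i] - t[:i + 1])) / (m * wd)
            xi = dt * (0.5 * g[0] + g[1:i].sum() + 0.5 * g[i])
            peak = max(peak, abs(xi))
        Sd.append(peak)
    return np.array(Sd)

# Illustrative force: a 1 s boxcar of unit magnitude
t = np.linspace(0.0, 3.0, 601)
f = np.where(t < 1.0, 1.0, 0.0)
freqs = np.array([0.5, 1.0, 2.0, 5.0])
Sd = force_rs_displacement(f, t, freqs)
```

Multiplying each spectral value by m·ω_n² converts it to a dynamic amplification factor, which for a boxcar should lie between the static value and the step-function factor of two.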


Fig. 5.4-3A shows a thrust time history derived from a pressure measurement. Fig. 5.4-3B shows the corresponding Response Spectrum. The large responses at the very low frequencies are due to the steady-state thrust prior to the shutdown. One item that can be ascertained from the RS is that there are narrow frequency bands that contain higher energy. One would, therefore, expect that any components with natural frequencies within these frequency bands could experience higher vibration levels than if the natural frequencies coincided with the valleys. This is a simplistic assessment and once we get to multi-degree-of-freedom systems we will explore the limitations of this type of assessment. Nevertheless, there is value in understanding where excitation forces have higher energy content as a function of frequency. Fig. 5.4-3C shows an acceleration time history that is due to the force transient. As can be ascertained, the response acceleration contains higher

FIGURE 5.4-3 (A) Thrust transient; (B) Response Spectrum of time history in (A); (C) Acceleration measured during transient; (D) Relative and absolute acceleration Response Spectra of time history in (C). Response Spectra were computed with ζ = 0.01.




oscillatory components relative to the steady-state condition. These are due to the dynamic properties of the system. Fig. 5.4-3D shows the corresponding Response Spectrum, where we plotted both the relative, S_a,rel(f_n, ζ) = max|ÿ_e(t)|, and absolute, S_a(f_n, ζ) = max|ÿ_B(t) + ÿ_e(t)|, acceleration spectra. One reason for looking at both is that the relative spectra will not contain, for example, the steady-state component and, therefore, may provide more insight into the vibration properties of the system from which the measurements were taken. However, one should not lose sight of the fact that the relative spectra correspond to noninertial reference frames.

The displacement and velocity results presented in the preceding discussion were obtained by numerical integration of Duhamel's integral; these were then used in the equation of motion to compute the acceleration responses. For lightly damped systems, once the displacement SRS/RS has been computed, we can derive good approximations of the velocity and absolute acceleration SRS/RS. As discussed in Chapter 2, these approximations are referred to as pseudo velocity and pseudo acceleration. The corresponding spectra are defined as

$$
\tilde{S}_a(f_n, \zeta) = \omega_n^2\,S_d(f_n, \zeta), \qquad \tilde{S}_v(f_n, \zeta) = \omega_n\,S_d(f_n, \zeta) \tag{5.4-12}
$$

We can take advantage of the two equations in (5.4-12) to plot S_d(f_n, ζ), S̃_v(f_n, ζ), and S̃_a(f_n, ζ) in one graph, which is typically referred to as a tripartite plot. Fig. 5.4-4 shows the tripartite plot of the Response Spectrum shown in Fig. 5.4-1B. Observe that all four scales are logarithmic. The vertical scale provides the pseudo velocity values, whereas the scale sloping up to the left provides the pseudo acceleration values. Both of these were derived with the equations in (5.4-12) using the displacement scale that slopes to the right. The dashed lines show how this tool can be used. First, the natural period of oscillation of the system is selected and a vertical line is drawn to the spectrum. From that point lines are drawn (dashed lines) that are parallel to the constant pseudo acceleration and displacement scales, and perpendicular to the pseudo velocity scale. In Fig. 5.4-4, the lines parallel to the pseudo acceleration scale correspond to constant displacement, whereas lines parallel to the displacement scale correspond to constant pseudo acceleration. Horizontal lines correspond to constant pseudo velocity. Therefore, since the pseudo velocity axis is fixed to be perpendicular to the frequency/period axis, the pseudo


FIGURE 5.4-4 Tripartite plot of the El Centro earthquake Response Spectrum computed with ζ = 0.05.

acceleration and displacement axes have to be such that the equations in (5.4-12) are satisfied; hence, when ascertaining pseudo acceleration and displacement values, we must use the constant pseudo acceleration and displacement lines, which may not be perpendicular to each other.

5.5 Random response analysis

In Chapter 2, we dealt with deterministic problems; that is, problems where given a forcing function we could establish the value of the response, x(t), at any time t_i. In practice, although we may have numerous measurements of an excitation, the actual forces the system will experience in operation will most likely be different from those measured in the past. Examples include the buffet environments launch vehicles and airplanes experience during transonic flight; the vibration from rocket and jet engines; and the acoustic environment produced by a car driving at high speeds. These types of forces are referred to as nondeterministic, and they must be dealt with statistically. In other words, we may not be able to compute the exact response value, but we will be able to make a statement about the probability of a particular




value being exceeded provided the excitation is in family with the existing dataset of forcing functions. Assume that the excitation, f ðtÞ, comes from a family (ensemble) of forcing functions that were produced by a stationary and ergodic process. Fig. 5.5-1 shows what several of the time histories might look like. Volume II provides formal definitions of stationary and ergodic processes. However, for the purposes of the discussion in this chapter, we will use the term stationary to imply stationary in mean and autocorrelation. In addition, if the statistical properties at any time point across the ensemble (vertical lines) are equivalent to those of any one record, then the process is said to be ergodic. Although expected value or mean, mean square value, variance, and standard deviation are defined formally in Volume II, we will summarize

FIGURE 5.5-1 Family (ensemble) of an infinite number of forcing functions whose expected value at any time point (vertical lines are examples) across the family is the same as for any one infinite-length forcing function. μ and σ are the mean and standard deviation of the time history points.


the definitions here to facilitate the discussion in this chapter. The expected value or mean, E[x(t)], of a time function, x(t), is given by

$$
E[x(t)] = \bar{x} = \lim_{T\to\infty}\frac{1}{2T}\int_{-T}^{T} x(t)\,dt \tag{5.5-1}
$$

The corresponding variance, σ², and standard deviation, σ, are, respectively,

$$
\sigma^2 = \lim_{T\to\infty}\frac{1}{2T}\int_{-T}^{T}\big(x(t) - \bar{x}\big)^2\,dt \tag{5.5-2}
$$

and

$$
\sigma = \sqrt{\sigma^2}
$$

The mean square value, $\overline{x^2}$, is defined as

$$
E\big[x^2(t)\big] = \overline{x^2} = \lim_{T\to\infty}\frac{1}{2T}\int_{-T}^{T} x^2(t)\,dt \tag{5.5-3}
$$
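Discrete analogues of Eqs. (5.5-1) through (5.5-3) for a finite, sampled record are simple averages. A minimal sketch (the Gaussian test signal and its parameters are purely illustrative):

```python
import numpy as np

# Discrete counterparts of Eqs. (5.5-1)-(5.5-3) for a finite sample record.
rng = np.random.default_rng(0)
x = rng.normal(loc=0.0, scale=2.0, size=200_000)  # near-zero-mean, sigma = 2

mean = x.mean()                   # estimate of E[x(t)], Eq. (5.5-1)
var = ((x - mean)**2).mean()      # sigma^2, Eq. (5.5-2)
mean_square = (x**2).mean()       # mean square value, Eq. (5.5-3)

# For a zero-mean record, sigma^2 equals the mean square value:
print(abs(var - mean_square))     # differs only by mean**2, essentially zero here
```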

Comparing Eqs. (5.5-2) and (5.5-3) we note that for $\bar{x} = 0$, $\sigma^2 = \overline{x^2}$. In other words, the standard deviation is equal to the square root of the mean square value for a time history with a zero mean. For a linear system, if the mean of the excitation time history is zero, then the mean of the response time history will also be zero. Therefore, from the mean square value of the response, we will be able to make statements about the probability of exceeding specific response levels.

5.5.1 Mean square value and Power Spectral Density

We begin with Parseval’s theorem, the derivation of which is in Appendix 5.1, Z ∞ Z ∞  1 x1 ðtÞx2 ðtÞdt ¼ X2 ðuÞX1 ðu du (5.5-4) 2p ∞ ∞ In Eq. (5.5-4), X2 ðuÞ is the Fourier transform of x2 ðtÞ, and X1 ðuÞ is the complex conjugate of the Fourier transform of x1 ðtÞ. Parseval’s theorem relates integration in the time domain to integration in the frequency domain. This is of interest since the left integral in Eq. (5.5-4) can be related to the mean square value, and both integrals can be related to the response of a system. Let x1 ðtÞ ¼ x2 ðtÞ ¼ xðtÞ, then Eq. (5.5-4) becomes Z ∞ Z ∞ 1 2 x ðtÞdt ¼ XðuÞX  ðuÞdu (5.5-5) 2p ∞ ∞


CHAPTER 5 Transient excitation

In practice, time histories will be finite. Therefore, we will use the boxcar function, $w_T(t)$, introduced in Chapter 3, to define the time history as

$$x_T(t) = w_T(t)\,x(t)$$ (5.5-6)

where

$$w_T(t) = \begin{cases} 1 & -T \le t \le T \\ 0 & \text{otherwise} \end{cases}$$ (5.5-7)

From Chapter 3, the Fourier transform of $w_T(t)$ is

$$W_T(\omega) = \frac{2\sin \omega T}{\omega}$$ (5.5-8)

In Chapter 3, it was also shown that multiplication in the time domain is equivalent to convolution in the frequency domain, which we will designate by $\circledast$ when used as an operator between functions. Therefore, the Fourier transform of $x_T(t)$ is

$$X_T(\omega) = W_T(\omega) \circledast X(\omega) = \frac{1}{2\pi}\int_{-\infty}^{\infty} W_T(\nu)\,X(\omega - \nu)\,d\nu = \int_{-\infty}^{\infty} \frac{1}{\pi}\frac{\sin \nu T}{\nu}\,X(\omega - \nu)\,d\nu$$ (5.5-9)

In Chapter 3, accounting for the differences in variables, it is also shown that $\lim_{T\to\infty} \frac{1}{\pi}\frac{\sin \nu T}{\nu} = \delta(\nu)$, where $\delta(\nu)$ is the Dirac unit impulse function. Therefore,

$$\lim_{T\to\infty} X_T(\omega) = \lim_{T\to\infty}\int_{-\infty}^{\infty} \frac{1}{\pi}\frac{\sin \nu T}{\nu}\,X(\omega - \nu)\,d\nu = \int_{-\infty}^{\infty} \delta(\nu)\,X(\omega - \nu)\,d\nu$$ (5.5-10)


Letting $\omega - \nu = s$, and then differentiating with respect to $\nu$, yields $d\nu = -ds$. Observing that $\lim_{\nu\to\infty}(\omega - \nu) = \lim_{\nu\to\infty}(-\nu) = \lim_{s\to-\infty} s$ and $\lim_{\nu\to-\infty}(\omega - \nu) = \lim_{\nu\to-\infty}(-\nu) = \lim_{s\to\infty} s$ produces

$$\lim_{T\to\infty} X_T(\omega) = \int_{-\infty}^{\infty} \delta(\nu)\,X(\omega - \nu)\,d\nu = -\int_{\infty}^{-\infty} \delta(\omega - s)\,X(s)\,ds = \int_{-\infty}^{\infty} \delta(\omega - s)\,X(s)\,ds$$ (5.5-11)

Noting the change in the limits of integration, and recalling the sifting property of the unit impulse function (see Chapter 3), we obtain

$$\lim_{T\to\infty} X_T(\omega) = X(\omega)$$ (5.5-12)

Another way to understand Eq. (5.5-12) is to note that as $T$ increases, the boxcar will eventually encompass the entire time history, and $x_T(t) = x(t)$ as $T$ goes to infinity. It follows that the Fourier transforms will also be equal in the limit, as indicated by Eq. (5.5-12). Substituting $x_T(t)$ and $X_T(\omega)$ into Eq. (5.5-5) yields

$$\int_{-\infty}^{\infty} x_T^2(t)\,dt = \frac{1}{2\pi}\int_{-\infty}^{\infty} X_T(\omega)\,X_T^*(\omega)\,d\omega$$ (5.5-13)

Noting that $\int_{-\infty}^{\infty} x_T^2(t)\,dt = \int_{-T}^{T} x^2(t)\,dt$, and recalling that a complex number multiplied by its complex conjugate produces the modulus squared of the complex number, i.e., $(a + ib)(a - ib) = \left(\sqrt{a^2 + b^2}\right)^2$, Eq. (5.5-13) becomes

$$\int_{-T}^{T} x^2(t)\,dt = \frac{1}{2\pi}\int_{-\infty}^{\infty} \left|X_T(\omega)\right|^2 d\omega$$ (5.5-14)

Since $\left|X_T(\omega)\right|^2$ is an even function, i.e., $\left|X_T(\omega)\right|^2 = \left|X_T(-\omega)\right|^2$, we can change the limits of integration to run from zero to $\infty$, which then requires that the integrand on the right-hand side of Eq. (5.5-14) be multiplied by two,

$$\int_{-T}^{T} x^2(t)\,dt = \frac{1}{2\pi}\int_{0}^{\infty} 2\left|X_T(\omega)\right|^2 d\omega$$ (5.5-15)


Recalling the definition of mean square value in Eq. (5.5-3), we divide both sides of Eq. (5.5-15) by $2T$; then applying the limit of $T$ going to infinity to both sides of the equality yields

$$\overline{x^2} = \lim_{T\to\infty}\left(\frac{1}{2T}\int_{-T}^{T} x^2(t)\,dt\right) = \lim_{T\to\infty}\left(\frac{1}{2\pi}\int_{0}^{\infty} \frac{\left|X_T(\omega)\right|^2}{T}\,d\omega\right)$$ (5.5-16)

$\left|X_T(\omega)\right|^2 / T$ will be bounded; therefore, we can move the $T$ limit operation inside the integral, i.e.,

$$\overline{x^2} = \lim_{T\to\infty}\left(\frac{1}{2T}\int_{-T}^{T} x^2(t)\,dt\right) = \frac{1}{2\pi}\int_{0}^{\infty} \lim_{T\to\infty}\left(\frac{\left|X_T(\omega)\right|^2}{T}\right)d\omega$$ (5.5-17)

Letting $\lim_{T\to\infty} \left|X_T(\omega)\right|^2 / T = G_{xx}(\omega)$, where $G_{xx}(\omega)$ is referred to as the one-sided Power Spectral Density (PSD) function of $x(t)$, we obtain the sought-after result,

$$\overline{x^2} = \frac{1}{2\pi}\int_{0}^{\infty} G_{xx}(\omega)\,d\omega$$ (5.5-18)

For physical systems, and for sufficiently large $T$,

$$G_{xx}(\omega) \approx \frac{\left|X_T(\omega)\right|^2}{T} \approx \frac{\left|X(\omega)\right|^2}{T}$$ (5.5-19)

For the above formulation, $G_{xx}(\omega)$ will have units of displacement squared divided by radians per second. If we multiply $G_{xx}(\omega)$ by $2\pi$, we obtain units of displacement squared divided by hertz, and the frequency axis would need to be changed accordingly. Eq. (5.5-18) states that the area under the displacement PSD function will be the mean square value of the displacement; and for a zero-mean process the square root of this area will be the standard deviation, or the root mean square, of the response. In practice, the lengths of time histories will be finite. However, if they are sufficiently long, then the area under the resulting PSD function will be a good approximation of the mean square value. We will later discuss what is meant by sufficiently long, but a practical test would be to compute the PSD with increasing-length time


histories, and judge the adequacy by the differences in sequential PSDs. Section 5.6 will address this topic more formally. As a final note, we need to state that, although we can compute the variance and standard deviation from a PSD, the statistical distribution of the response is not defined by the PSD. Although it is often assumed that the distribution is normal, or Gaussian, and, therefore, the number of standard deviations required for a particular statistical enclosure level is well known, the data from which the PSD was derived may not be normally distributed. In this case, the number of standard deviations required for a particular statistical enclosure would be different than for a normal distribution. In addition, we may not be interested in the statistics of the response values, but rather the statistics of the peaks; and in this case the distribution would not be normal for the large majority of system responses. This is covered in detail in Volume II.

5.5.1.1 Autocorrelation function

The autocorrelation function, $R_{xx}(\tau)$, of a time history, $x(t)$, is defined as

$$R_{xx}(\tau) = E[x(t + \tau)\,x(t)]$$ (5.5-20)

Substituting the definition of the expected value, $E[x(t + \tau)x(t)]$, of a time function (Eq. 5.5-1) gives

$$R_{xx}(\tau) = \lim_{T\to\infty} \frac{1}{2T}\int_{-T}^{T} x(t + \tau)\,x(t)\,dt$$ (5.5-21)

Taking the Fourier transform of both sides yields

$$\int_{-\infty}^{\infty} R_{xx}(\tau)\,e^{-i\omega\tau}\,d\tau = \int_{-\infty}^{\infty} \left\{\lim_{T\to\infty} \frac{1}{2T}\int_{-T}^{T} x(t + \tau)\,x(t)\,dt\right\} e^{-i\omega\tau}\,d\tau = \lim_{T\to\infty} \frac{1}{2T}\int_{-T}^{T} x(t)\left\{\int_{-\infty}^{\infty} x(t + \tau)\,e^{-i\omega\tau}\,d\tau\right\} dt$$

$$\eta(T) = \frac{\pi f_n Q G_0}{2}\left\{1 - \frac{Q}{2\pi f_n T \zeta_d}\left(1 - e^{-2\zeta\omega_n T}\right) - \frac{1}{8\pi f_n Q T \zeta_d^2}\left[e^{-2\zeta\omega_n T}\left(\left(\zeta_d^2 - \zeta^2\right)\cos 2\omega_d T + 2\zeta\zeta_d \sin 2\omega_d T\right) - \left(\zeta_d^2 - \zeta^2\right)\right]\right\}$$ (5.6-24)

For lightly damped systems $\zeta \ll 1$ and, hence, $\zeta^2 \approx 0$ and $\zeta_d \approx 1$; and we obtain

$$\eta(T) \approx \frac{\pi f_n Q G_0}{2}\left\{1 - \frac{Q}{2\pi f_n T}\left(1 - e^{-2\zeta\omega_n T}\right) - \frac{1}{8\pi f_n Q T}\left[e^{-2\zeta\omega_n T}\left(\cos 2\omega_d T + 2\zeta \sin 2\omega_d T\right) - 1\right]\right\}$$ (5.6-25)

Furthermore, for $\zeta \ll 1$ and $Q \gg 1$, we obtain the following approximation for lightly damped systems,

$$\eta(T) \approx \frac{\pi f_n Q G_0}{2}\left[1 - \frac{Q}{2\pi f_n T}\left(1 - e^{-2\zeta\omega_n T}\right)\right]$$ (5.6-26)

Introducing a normalized cycle count, $n = T f_n / Q$, produces the sought-after result,

$$\eta(T) \approx \frac{\pi f_n Q G_0}{2}\,m_2(n)$$ (5.6-27)

where $m_2(n) = 1 - \frac{1}{2\pi n}\left(1 - e^{-2\pi n}\right)$. Note that if we divide Eq. (5.6-27) by Miles' equation (Eq. 5.6-23), we obtain the normalized mean square, i.e., $m_2(n)$. The normalized mean square will approach one as the duration of the response time history increases.
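The normalized mean square, and the record length it implies, can be computed in a few lines. The sketch below reproduces the 2 Hz, Q = 25 example discussed with Fig. 5.6-3 (a normalized cycle count of 1.6 for a 90% mean square level):

```python
import numpy as np

def m2(n):
    """Normalized mean square of Eq. (5.6-27), n = T * fn / Q."""
    return 1.0 - (1.0 - np.exp(-2.0 * np.pi * n)) / (2.0 * np.pi * n)

n = 1.6                      # normalized cycle count for ~90% (Fig. 5.6-3)
fn, Q = 2.0, 25.0            # natural frequency (Hz) and amplification factor
T = n * Q / fn               # required forcing-function length, s

print(m2(n))                 # about 0.90
print(T)                     # 20.0 s
```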


FIGURE 5.6-3 Normalized mean square, $m_2(n) = 1 - \frac{1}{2\pi n}\left(1 - e^{-2\pi n}\right)$, plotted against the normalized cycle count, $n = T f_n / Q$.

Fig. 5.6-3 shows $m_2(n)$ plotted against the normalized cycle count, $n$. As expected, the normalized mean square approaches one as the number of cycles, or time duration, of the time history increases. This relationship can now be used to establish the length of forcing function required so that, on average, the mean square is within a specified tolerance of the infinite-length solution. For example, we would be within 10% of the average infinite-duration solution if the normalized cycle count were greater than 1.6 (see dashed lines in Fig. 5.6-3). This implies, for example, that a system with a natural frequency of 2 Hz and a $Q = 25$, i.e., $\zeta = 0.02$, will require a forcing function of at least 20 s, i.e., $T \ge nQ/f_n = (1.6)(25)/2 = 20$ s, to achieve the average 90% mean square level that would result from an infinite-length forcing function. Also, since the standard deviation is the square root of the mean square value, the corresponding error in the standard deviation would be roughly 5% for this example.

5.7 Swept frequency excitation

In Chapter 2, we described the effect on the response of single-degree-of-freedom systems of sweeping the frequency of harmonic excitation. At that time we postponed the more general discussion to this chapter since


we would need additional tools to compute responses. Historical references on this topic include publications by Lewis, Hawkes, Cronin, and Lollock (Lewis, 1932; Hawkes, 1964; Cronin, 1965; Lollock, 2002). The most common types of swept frequency excitation are linear and octave frequency sweeps. Recall the equation of motion for a single-degree-of-freedom system subjected to excitation $f(t)$,

$$\ddot{x}(t) + 2\zeta\omega_n \dot{x}(t) + \omega_n^2 x(t) = \frac{1}{m} f(t)$$ (5.7-1)

where $m$, $\omega_n$, and $\zeta$ are the mass, circular natural frequency, and critical damping ratio, respectively. The equation of motion for harmonic excitation with a linear frequency sweep is (see Chapter 2)

$$\ddot{x}(t) + 2\zeta\omega_n \dot{x}(t) + \omega_n^2 x(t) = \frac{A}{m}\sin\left(2\pi f_s t + \pi \frac{R_l}{60} t^2 + \theta\right)$$ (5.7-2)

where $f_s$ is the starting frequency in Hz, and $R_l$ is the sweep rate in Hz per minute. If zero excitation corresponds to $t = 0$, then $\theta = 0$ and the equation to be solved is

$$\ddot{x}(t) + 2\zeta\omega_n \dot{x}(t) + \omega_n^2 x(t) = \sin\left(2\pi f_s t + \pi \frac{R_l}{60} t^2\right)$$ (5.7-3)

where, without loss of generality, we assumed that $A/m = 1$. For an octave sweep rate (see Chapter 2) the equation of motion is

$$\ddot{x}(t) + 2\zeta\omega_n \dot{x}(t) + \omega_n^2 x(t) = \sin\left(\frac{120\pi f_s}{R_o \ln 2}\left(2^{R_o t/60} - 1\right)\right)$$ (5.7-4)

where we assumed again that $A/m = 1$, $R_o$ is the sweep rate in octave per minute, $f_s$ is the nonzero start frequency, and zero excitation corresponds to $t = 0$ and, therefore, $\theta = 0$. Closed-form solutions to Eqs. (5.7-3) and (5.7-4) are difficult to obtain; and they involve functions whose numerical computation requires special handling because of numerical precision requirements. Reed and Kabe presented closed-form solutions for both octave and linear sweep rates and successfully computed the closed-form peak responses of single-degree-of-freedom systems (Reed and Kabe, 2019). The closed-form solutions were compared to those obtained by direct numerical integration of the equations of motion, with very close agreement. These will be discussed in detail in Section 5.7.3; but first we will present some results and discuss


the interesting behavior of systems subjected to swept frequency excitation. The results presented herein were computed by C. C. Reed using Wolfram's Mathematica 10 (Wolfram Research, 2015) and the methodologies described in the above referenced paper.

5.7.1 Octave sweep rates

Fig. 5.7-1 shows the peak response of a single-degree-of-freedom system with $\zeta = 0.01$ subjected to octave frequency-sweep excitation. The figure shows the peak response normalized by the steady-state response to sinusoidal excitation at the natural frequency, i.e., $1/2\zeta$, plotted against the natural frequency of the system. Each curve corresponds to a different octave sweep rate, $R_o$, which ranged from 0.5 to 4 octave per minute. Each sweep was started at 0.125 Hz, and the lowest system natural frequency considered was 0.25 Hz. The ripples at the lower frequencies are due to the transient nature of the excitation and are the same for both the closed-form solutions and the direct numerical integration responses. An important item to note in Fig. 5.7-1 is that the sweep rate has a significant impact on the peak response. For example, the peak response of a

FIGURE 5.7-1 Normalized peak responses of a single-degree-of-freedom system with $\zeta = 0.01$ and excited by sine-swept excitation of various rates, $R_o$ octave per minute, plotted against the system's natural frequency in Hz.


10-Hz system is 44% higher at a 0.5 octave per minute sweep rate than at 4 octave per minute. The second item to note is the significant reduction in peak response compared to the resonant steady-state response. For example, for a 2 octave per minute sweep rate, which is commonly used in sine-swept base shake testing, a 10-Hz system would only achieve 75% of the steady-state peak response, and a 50-Hz system would only achieve 93%. This becomes problematic when trying to identify the system damping by computing the ratio of the peak response to the base excitation acceleration, for example. Fig. 5.7-2 shows the responses of systems excited by a 2 octave per minute swept sine force for various damping values, as a function of the natural frequency of the system. As can be ascertained, at a given natural frequency the higher critical damping ratios yield higher responses relative to the steady-state values. For example, if the damping value of a 10-Hz system doubled from $\zeta = 0.01$ to $0.02$, the response level would increase from 75% of steady state to 92%, although the steady-state response levels for higher damping would be lower due to the increased damping. This points to an important consideration. Since damping cannot be derived

FIGURE 5.7-2 Normalized peak responses of single-degree-of-freedom systems for various critical damping ratios, $\zeta$, excited by sine-swept excitation of 2 octave per minute, plotted against the system's natural frequency in Hz.


analytically, and the response of systems to sine-swept excitation is highly dependent on both the sweep rate and damping, it is important that any pretest analysis model be updated with measured data before relying on the analytical results to establish the ability of a test article to survive the test. However, because of the swept nature of the test, any damping-value estimates that do not account for the sweeping effects would be in error. Another observation, which will be discussed in more detail later, is that, due to the transient nature of the swept excitation, the peak responses can exceed the resonant steady-state values, as can be seen for the $\zeta = 0.08$ system. Fig. 5.7-3 shows the normalized peak responses of a system, with $\zeta = 0.01$, excited by the 0.5, 1, 2, and 4 octave per minute sweep rate excitations plotted against the natural frequency of the system divided by the sweep rate; this normalization is due to Hawkes (Hawkes, 1964). As can be ascertained, the normalization with respect to sweep rate has produced a single function that describes the peak response for a fixed value of critical damping ratio. Fig. 5.7-4 shows the same results, but for systems with different critical damping ratios (Appendix 5.5 contains tabulated values).

FIGURE 5.7-3 Normalized peak responses of the $\zeta = 0.01$ system excited by 0.5, 1, 2, and 4 octave per minute sweep rate excitations, plotted against the natural frequency of the system divided by the sweep rate.


It was noted in Fig. 5.7-4 that as the damping value increases, the peak sweep rate results can exceed the steady-state resonance values. This is shown more clearly in Fig. 5.7-5, where the ordinate axis scale has been expanded and the data for $\zeta = 0.06$ and $0.08$ are included. These results are counterintuitive since the usual assumption, that steady-state harmonic excitation at a system's natural frequency yields the highest possible response of any forcing function, assuming their magnitudes are consistent, does not hold. The reason we get higher responses, relative to steady state, at the higher damping values and slower sweep rates is that the number of cycles between the half-power points is sufficient to yield responses closer to steady-state results. In addition, because the excitation is sweeping, there is a transient component. The superposition of the almost steady-state response and the transient response yields response levels above the steady-state values. As the sweep rate increases there will be fewer cycles between the half-power points and, hence, relative to steady state the response levels will be lower, even though the transient component may be higher.

FIGURE 5.7-4 Normalized peak responses, for various levels of damping, of systems excited by 0.5, 1, 2, and 4 octave per minute sweep rate excitations plotted against the natural frequency of the system divided by the sweep rate.


FIGURE 5.7-5 Same as Fig. 5.7-4 except the ordinate axis has been expanded and the data for $\zeta = 0.06$ and $0.08$ have been added.

For the results described through Fig. 5.7-5, the starting frequency of the sweep was a fair amount below the natural frequencies of the systems analyzed (see discussion associated with Fig. 5.7-1). If, however, we start the sweep frequency near or above the natural frequency of a system, then the attenuation in the response could be significant. Fig. 5.7-6 presents the results for the same computations as in Fig. 5.7-1, except the starting frequency of the sweeps was set to 1 Hz (dashed response curves) instead of 0.125 Hz (solid curves). The lowest natural frequency of the single-degree-of-freedom systems analyzed was 0.25 Hz. As can be seen, the responses of systems with natural frequencies near and below 1 Hz are significantly attenuated when the sweep starts at 1 Hz.

5.7.2 Linear sweep rates

In the previous section, results were presented for responses to octave sweep rate excitation. Typically, qualification and acceptance sinusoidal vibration tests are performed with octave sweep rates, where 2 or 4 octave per minute are typical. Swept excitation tests, such as mode survey tests (see Volume II), however, are generally performed using linear sweep rate excitation (see Eq. 5.7-3). This allows for slower sweep rates and higher responses through wider frequency ranges.
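The two sweep laws can be sketched directly from their phase functions, Eqs. (5.7-3) and (5.7-4); differentiating the phase numerically recovers the instantaneous frequency, which grows linearly in time for a linear sweep and doubles every $60/R_o$ seconds for an octave sweep. The parameter values below are arbitrary illustration choices, not values from the text.

```python
import numpy as np

def linear_phase(t, fs, Rl):
    """Phase of the linear sweep, Eq. (5.7-3); Rl in Hz per minute."""
    return 2.0 * np.pi * fs * t + np.pi * (Rl / 60.0) * t**2

def octave_phase(t, fs, Ro):
    """Phase of the octave sweep, Eq. (5.7-4); Ro in octave per minute."""
    return (120.0 * np.pi * fs / (Ro * np.log(2.0))) * (2.0 ** (Ro * t / 60.0) - 1.0)

def inst_freq(phase, t, eps=1e-6, **kw):
    """Instantaneous frequency in Hz: d(phase)/dt divided by 2*pi."""
    return (phase(t + eps, **kw) - phase(t - eps, **kw)) / (2.0 * eps * 2.0 * np.pi)

# After one minute, a 10 Hz/min linear sweep from 1 Hz reaches 11 Hz,
# while a 1 octave/min sweep from 1 Hz reaches 2 Hz.
print(inst_freq(linear_phase, 60.0, fs=1.0, Rl=10.0))
print(inst_freq(octave_phase, 60.0, fs=1.0, Ro=1.0))
```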


FIGURE 5.7-6 Solid curves are the same as in Fig. 5.7-1. Dashed curves were computed in the same manner as the solid curves except the sweep starting frequency was set to 1 Hz instead of 0.125 Hz.

Fig. 5.7-7 shows the peak responses of a system, with $\zeta = 0.01$, excited by sinusoidal forces with 10, 20, 150, and 200 Hz per minute sweep rates; the excitation frequency was started at zero Hz in all cases. The ripples in the peak responses at the lower natural frequencies are caused by the transient nature of the excitation in relation to the relatively longer natural periods of vibration. This phenomenon is present in both the numerical integration results and closed-form solutions, which will be discussed in more detail in the next section. Fig. 5.7-8 shows the same peak responses, but this time they are plotted against the natural frequency squared divided by the sweep rate. As can be ascertained, the four graphs overlay. For the linear sweep rate excitation, the frequency normalization factor is the linear sweep rate divided by the natural frequency, rather than just the sweep rate, as it is for an octave sweep. The results are consistent in that the response functions merge into a single function. The normalized linear sweep rate results for different damping levels are shown in Fig. 5.7-9 (see Appendix 5.6 for tabulated values). Note that the


FIGURE 5.7-7 Normalized peak responses of a single-degree-of-freedom system with $\zeta = 0.01$ for four different linear sweep rates, $R_l$ Hz/min.

FIGURE 5.7-8 Normalized peak responses of a single-degree-of-freedom system, with $\zeta = 0.01$, plotted against the natural frequency (Hz) squared divided by the linear sweep rate (Hz/min), for four different sweep rates.


FIGURE 5.7-9 Normalized peak responses, for various levels of damping, plotted against the natural frequency (Hz) of the system squared divided by the sweep rate, $R_l$ (Hz/min).

$\zeta = 0.01$ and $0.08$ curves were computed with a finer frequency increment; hence, the expected ripples in the peak responses can be seen. As with the octave sweep rates, as the damping value increases the peak sweep rate results exceed the steady-state values. This is shown more clearly in Fig. 5.7-10, where the ordinate axis scale is expanded. As mentioned above, these results are counterintuitive since the usual assumption is that steady-state harmonic excitation at a system's natural frequency yields the highest possible response of any forcing function of consistent magnitude. At the higher damping values, and slower sweep rates, the number of cycles between the half-power points is sufficient to yield responses close to steady-state vibration. In addition, because the excitation is sweeping, there is a transient component. The superposition of these two can yield response levels above the resonant steady-state values. As the sweep rate increases there will be fewer cycles between the half-power points and, hence, relative to steady state the response levels will be lower, even though the transient component may be higher.


FIGURE 5.7-10 Same as Fig. 5.7-9 except the ordinate axis has been expanded.

As an example, assume that we wish to obtain at least 90% of the steady-state amplitude while subjecting the system to linear swept harmonic excitation. The system has a natural frequency of 20 Hz and a critical damping ratio of 0.01. What is the maximum sweep rate that can be used? From Fig. 5.7-9 we observe that, for the $\zeta = 0.01$ curve, $f_n^2 / R_l$ must be greater than 24.8 for the response to be at least 90% of the steady-state value. Hence,

$$\frac{f_n^2}{R_l} = \frac{400}{R_l} > 24.8 \quad \Rightarrow \quad R_l < 16.1 \text{ Hz/min}$$ (5.7-5)

Now, suppose we wish to achieve 99% of the steady-state value. In this case,

$$\frac{f_n^2}{R_l} = \frac{400}{R_l} > 143 \quad \Rightarrow \quad R_l < 2.8 \text{ Hz/min}$$ (5.7-6)

Hence, to achieve a 10% increase (from 90% to 99%) in the amplitude of the response, the sweep rate needs to slow down by a factor of nearly six.
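The arithmetic of Eqs. (5.7-5) and (5.7-6) condenses to a few lines; the thresholds 24.8 and 143 are the values read from Fig. 5.7-9 in the text.

```python
# Maximum linear sweep rates for a 20 Hz, zeta = 0.01 system.
fn = 20.0
Rl_90 = fn**2 / 24.8     # fastest sweep for >= 90% of steady state
Rl_99 = fn**2 / 143.0    # fastest sweep for >= 99% of steady state

print(round(Rl_90, 1), "Hz/min")   # 16.1
print(round(Rl_99, 1), "Hz/min")   # 2.8
print(round(Rl_90 / Rl_99, 2))     # slow-down factor, about 5.77
```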


5.7.3 Closed-form solutions

The results presented in the preceding sections were a combination of closed-form solutions, for $\zeta = 0.01$ and $0.08$, and values obtained for the other critical damping ratios by numerically integrating the equations of motion directly. The closed-form solutions presented herein were obtained from Reed and Kabe (2019). We will summarize the results in this section; for additional discussion the reader is encouraged to consult the reference.

5.7.3.1 Octave sweep

The closed-form solution presented by Reed and Kabe for octave swept excitation was derived by first introducing into Eq. (5.7-4) the change of independent variable $u = 2^{R_o t/60}$, which led to

$$\mu^2 u^2 x''(u) + \mu u\left(\mu + 2\zeta\omega_n\right) x'(u) + \omega_n^2 x(u) - \sin\left(\frac{2\pi (u - 1) f_s}{\mu}\right) = 0$$ (5.7-7)

where $\mu = R_o \ln(2)/60$, $f_s$ is the start frequency in Hz, and $R_o$ is the sweep rate in octave per minute. Using variation of parameters, the following expression for $x(u)$ is obtained:

$$x(u) = \frac{1}{2\mu\omega_n\sqrt{\zeta^2 - 1}}\left(u^{-\left(\zeta + \sqrt{\zeta^2 - 1}\right)\omega_n/\mu}\int_1^u \psi(x)\,x^{\left(\zeta + \sqrt{\zeta^2 - 1}\right)\omega_n/\mu - 1}\,dx \;-\; u^{\left(\sqrt{\zeta^2 - 1} - \zeta\right)\omega_n/\mu}\int_1^u \psi(x)\,x^{\left(\zeta - \sqrt{\zeta^2 - 1}\right)\omega_n/\mu - 1}\,dx\right)$$ (5.7-8)

where $\psi(u) = \sin\left(\frac{2\pi (u - 1) f_s}{\mu}\right)$. The solution involves expanding the sine expressions in terms of complex exponentials, which ultimately yields integrals that can be expressed in terms of incomplete gamma functions. The resulting closed-form solution, Eq. (5.7-9) in the reference, expresses $x(t)$ as the real part of a weighted combination of differences of incomplete gamma functions of the form $\Gamma\left(Z^{\mp}, i\hat{f}_s\right) - \Gamma\left(Z^{\mp}, i\,2^{R_o t/60}\hat{f}_s\right)$, where $Z^{+} = \left(\zeta + i\sqrt{1 - \zeta^2}\right)\omega_n/\mu$, $Z^{-} = \left(\zeta - i\sqrt{1 - \zeta^2}\right)\omega_n/\mu$, $\hat{f}_s = 2\pi f_s/\mu$, $d = \zeta\omega_n/\mu$, $f = \omega_n\sqrt{1 - \zeta^2}\big/\mu$, $\mu = R_o \ln(2)/60$, and $\Gamma(a, b) = \int_b^{\infty} t^{a-1} e^{-t}\,dt$. The solution is the real part of Eq. (5.7-9), with the imaginary part being exactly equal to zero, or, if computed, zero to the numerical precision of the calculations. Computation of the incomplete gamma function with complex arguments has numerical challenges and requires the use of "infinite-precision" arithmetic, which is how the closed-form solutions were established (Reed and Kabe, 2019; Wolfram Research, 2015).

Fig. 5.7-11 shows two response time histories for a single-degree-of-freedom system with a natural frequency of 2 Hz and a critical damping ratio of $\zeta = 0.02$. The excitation sweep rate was 2 octave per minute and the starting frequency was 0.125 Hz. The solid line is the closed-form solution and the dots are the values obtained by numerically integrating the equation of motion directly. As can be ascertained, the agreement is extremely close.

FIGURE 5.7-11 Response time histories of a 2 Hz single-degree-of-freedom system with $\zeta = 0.02$ obtained with the closed-form solution (solid line) and by direct numerical integration (dots) of the equation of motion. Excitation was 2 octave per minute with a sweep starting frequency of 0.125 Hz.

Table 5.7-1 shows a comparison of selected peak values, normalized by $1/2\zeta$, for systems with various natural frequencies obtained from the closed-form solutions, Eq. (5.7-9), and corresponding values obtained from solutions established by numerical integration. The peak response values shown in the table were extracted from each response time history. The excitation sweep rate was 2 octave per minute and the starting frequency was 0.125 Hz. As can be seen, the agreement is very good.

Table 5.7-1 Comparison of selected peak values, normalized by $1/2\zeta$, obtained with the closed-form solution and results obtained by numerically integrating the equation of motion directly. The sweep rate was 2 octave per minute, with a starting sweep frequency of 0.125 Hz.

Natural            zeta = 0.01                                    zeta = 0.08
frequency   Closed    Numerical     Closed form/      Closed    Numerical     Closed form/
(Hz)        form      integration   numerical integ.  form      integration   numerical integ.
0.25        0.22124   0.22124       0.99999           0.88404   0.88404       1.00000
1           0.38135   0.38135       0.99999           0.99001   0.99001       1.00000
1.5         0.43923   0.43923       1.00000           1.00259   1.00259       1.00000
4           0.59655   0.59655       1.00000           1.01101   1.01101       1.00000
8.5         0.71982   0.71982       1.00000           1.00900   1.00900       1.00000
10          0.74503   0.74503       1.00000           1.00839   1.00839       1.00000
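The numerical-integration column can be reproduced approximately with a simple fixed-step RK4 scheme; the sketch below treats the 10 Hz, zeta = 0.01 entry (tabulated as 0.74503). The time step and the decision to stop the sweep at 20 Hz are integration choices, and the result is only expected to land near the tabulated value.

```python
import math

# RK4 integration of Eq. (5.7-4) with A/m = 1: 2 octave/min sweep from
# 0.125 Hz applied to a 10 Hz, zeta = 0.01 system.
zeta, fn = 0.01, 10.0
wn = 2.0 * math.pi * fn
f_start, Ro = 0.125, 2.0
mu = Ro * math.log(2.0) / 60.0

def force(t):
    # Octave-sweep excitation phase, Eq. (5.7-4)
    return math.sin((2.0 * math.pi * f_start / mu) * (2.0 ** (Ro * t / 60.0) - 1.0))

def acc(t, x, v):
    return force(t) - 2.0 * zeta * wn * v - wn * wn * x

t_end = 60.0 / Ro * math.log2(20.0 / f_start)   # time to sweep up to 20 Hz
dt, t, x, v, peak = 1.0e-3, 0.0, 0.0, 0.0, 0.0
while t < t_end:
    k1x, k1v = v, acc(t, x, v)
    k2x, k2v = v + 0.5 * dt * k1v, acc(t + 0.5 * dt, x + 0.5 * dt * k1x, v + 0.5 * dt * k1v)
    k3x, k3v = v + 0.5 * dt * k2v, acc(t + 0.5 * dt, x + 0.5 * dt * k2x, v + 0.5 * dt * k2v)
    k4x, k4v = v + dt * k3v, acc(t + dt, x + dt * k3x, v + dt * k3v)
    x += dt / 6.0 * (k1x + 2.0 * k2x + 2.0 * k3x + k4x)
    v += dt / 6.0 * (k1v + 2.0 * k2v + 2.0 * k3v + k4v)
    t += dt
    peak = max(peak, abs(x))

normalized = peak * wn * wn * 2.0 * zeta   # normalize by steady-state 1/(2*zeta*wn^2)
print(normalized)                          # near 0.745
```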


5.7.3.2 Linear sweep

The closed-form solution presented by Reed and Kabe for linear swept excitation was derived starting with Duhamel's integral (see Section 5.3, Eq. 5.3-4), i.e.,

$$x(t) = \int_0^t A \sin\left(2\pi f_s \tau + \pi \frac{R_l}{60} \tau^2\right) \frac{e^{-\zeta\omega_n (t - \tau)} \sin \omega_d (t - \tau)}{m\,\omega_d}\,d\tau$$ (5.7-10)

where the excitation is defined by $f(t) = A \sin\left(2\pi f_s t + \pi \frac{R_l}{60} t^2\right)$, and $R_l$ is the sweep rate in Hz per minute. For the purposes of this discussion we let $A/m = 1$ without any loss of generality, and the start frequency, $f_s$, was set to zero. The solution involves expanding the sine expressions in terms of complex exponentials, which ultimately yields integrals that can be computed in terms of error functions, $\operatorname{erf}(z)$, and imaginary error functions with complex arguments, $\operatorname{erfi}(z)$, where $\operatorname{erfi}(z) = -i\operatorname{erf}(iz)$. The resulting closed-form solution, Eq. (5.7-11) in the reference, expresses $x(t)$ as the real part of a sum of $\operatorname{erf}$ and $\operatorname{erfi}$ terms with complex arguments of the form $\frac{1 + i}{2}\left(\pm t\sqrt{\omega_s} + U^{\pm} + iU^{\mp}\right)$, weighted by complex exponentials involving $a$, $b$, and $\gamma t$, where

$$U^{+} = \frac{\left(\sqrt{1 - \zeta^2} + \zeta\right)\omega_n}{\sqrt{2\omega_s}}, \quad U^{-} = \frac{\left(\sqrt{1 - \zeta^2} - \zeta\right)\omega_n}{\sqrt{2\omega_s}}, \quad a = \frac{\omega_n^2\,\zeta\sqrt{1 - \zeta^2}}{\omega_s}, \quad b = \frac{\omega_n^2}{\omega_s}, \quad \gamma = \omega_n\sqrt{1 - \zeta^2},$$

and $\omega_s$ is


the sweep rate in radians per second, i.e., $\omega_s = 2\pi R_l$. The solution is the real part of Eq. (5.7-11), with the imaginary part being exactly equal to zero. As with the octave sweep rate solution and the computation of the incomplete gamma function with complex arguments, the computation of the error function, $\operatorname{erf}(z)$, and the imaginary error function with complex arguments, $\operatorname{erfi}(z)$, has numerical challenges and, therefore, requires the use of "infinite-precision" arithmetic. Fig. 5.7-12 shows two response time histories for a single-degree-of-freedom system with a natural frequency of 2 Hz and a critical damping ratio of $\zeta = 0.02$. The excitation sweep rate was 0.5 Hz per minute and the starting frequency was zero Hz. The solid line is the closed-form solution and the dots are the values obtained by numerically integrating the equation of motion. As can be ascertained, the agreement is extremely close. Table 5.7-2 shows comparisons of selected peak values obtained with the closed-form solutions, Eq. (5.7-11), and peak values obtained from solutions established by direct numerical integration of the equation of motion. The peak response values shown in the table were extracted from each response time history. The excitation sweep rate was 10 Hz per minute and the starting frequency was zero Hz. As can be seen, the agreement is very good.

FIGURE 5.7-12 Response time histories of a 2 Hz single-degree-of-freedom system with $\zeta = 0.02$, obtained with the closed-form solution (solid line) and by direct numerical integration (dots) of the equation of motion. Excitation was 0.5 Hz per minute with a starting frequency of zero Hz.


Table 5.7-2 Comparison of selected peak values, normalized by $1/2\zeta$, obtained with the closed-form solution and results obtained by numerically integrating the equation of motion directly for linear swept excitation. Sweep rate was 10 Hz per minute, with a start frequency of zero Hz.

Natural            zeta = 0.01                                    zeta = 0.08
frequency   Closed    Numerical     Closed form/      Closed    Numerical     Closed form/
(Hz)        form      integration   numerical integ.  form      integration   numerical integ.
0.25        0.05135   0.05135       1.00000           0.37384   0.37384       1.00000
1           0.17545   0.17545       1.00000           0.80599   0.80599       1.00000
1.5         0.24525   0.24525       1.00000           0.90454   0.90454       1.00000
4           0.50099   0.50099       1.00000           1.00815   1.00815       1.00000
8.5         0.74533   0.74531       1.00002           1.00842   1.00840       1.00002
10          0.79335   0.79333       1.00003           1.00729   1.00718       1.00010

Problems

Problem 5.1
Define graphically (draw) the constituents of the forcing function shown in the figure such that closed-form solutions can be established. Assume that the initial displacement and velocity are equal to zero. Plot the functions and explain in words your solution. Hint: Use ramp and step functions.


Solution 5.1
The forcing function can be composed of two ramp functions and one step function, as shown in the figure below:

The second ramp begins at $t = 1$ and will produce a response equal and opposite to that of the continually increasing first ramp; this then yields the response to a constant force after $t = 1$, plus any residual response from $t < 1$. This can be seen in the two figures below.

Problem 5.2
Define the equations of motion for a single-degree-of-freedom system with damping and the appropriate forcing function for each period in Problem 5.1. Hint: Define forcing functions for the periods $0 \le t < 1$, $1 \le t < 4$, and $t > 4$.


Solution 5.2
For the initial period, 0 ≤ t < 1, the equation of motion is
ẍ(t) + 2ζω_n ẋ(t) + ω_n² x(t) = (f_r/(mτ)) t
and its solution is given by Eq. (5.1-12), where τ = 1 and f_r = 1,
x₁(t) = (f_r/(mω_n²τ)) [ e^(−ζω_n t) ( (2ζ/ω_n) cos ω_d t + ((2ζ² − 1)/ω_d) sin ω_d t ) + t − 2ζ/ω_n ]
For the period 1 ≤ t < 4, the equation of motion, where τ = 1 and f_r = 1, is
ẍ(t) + 2ζω_n ẋ(t) + ω_n² x(t) = −(f_r/(mτ)) (t − τ)
and the solution is given by Eq. (5.1-13),
x₂(t) = −(f_r/(mω_n²τ)) [ e^(−ζω_n(t−τ)) ( (2ζ/ω_n) cos ω_d(t−τ) + ((2ζ² − 1)/ω_d) sin ω_d(t−τ) ) + (t − τ) − 2ζ/ω_n ]
For 1 ≤ t < 4 we must add the two responses, x₁(t) and x₂(t). For the period t > 4, the equation of motion for the step function excitation, where τ = 4, is
ẍ(t) + 2ζω_n ẋ(t) + ω_n² x(t) = f_s/m
and the solution is given by Eq. (5.1-5), where we adjust for the delay of the forcing function by τ,
x₃(t) = (f_s/(mω_n²)) { 1 − e^(−ζω_n(t−τ)) [ cos ω_d(t−τ) + (ζ/√(1−ζ²)) sin ω_d(t−τ) ] }
The solution for the period t > 4 is x₁(t) + x₂(t) + x₃(t), with the appropriate values of τ used in the two later responses.
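The superposition in Solution 5.2 can be spot-checked numerically. The sketch below assumes illustrative values not given in the text — ζ = 0.05, ω_n = 2π, m = 1, and a step of f_s = −1 at t = 4 so that the force returns to zero — and compares the closed-form superposition to a direct Runge–Kutta integration of the equation of motion.

```python
import numpy as np

# Assumed (illustrative) system parameters
z, wn, m = 0.05, 2*np.pi, 1.0
wd = wn*np.sqrt(1 - z**2)

def ramp(t, tau, fr):
    # Eq. (5.1-12) shifted by tau: response to the ramp (fr/tau)*(t - tau), zero before tau
    t = np.maximum(t - tau, 0.0)
    e = np.exp(-z*wn*t)
    return fr/(m*wn**2)*(e*((2*z/wn)*np.cos(wd*t) + ((2*z**2 - 1)/wd)*np.sin(wd*t)) + t - 2*z/wn)

def step(t, tau, fs):
    # Eq. (5.1-5) shifted by tau, zero before tau
    t = np.maximum(t - tau, 0.0)
    e = np.exp(-z*wn*t)
    out = fs/(m*wn**2)*(1 - e*(np.cos(wd*t) + z/np.sqrt(1 - z**2)*np.sin(wd*t)))
    return np.where(t > 0, out, 0.0)

def f(t):
    # unit ramp to t = 1, hold at 1 until t = 4, then zero (assumed fs = -1 step)
    return np.clip(t, 0, 1)*(t < 4)

# RK4 integration of m*x'' + 2*z*wn*m*x' + wn^2*m*x = f(t)
dt = 1e-3
ts = np.arange(0, 6 + dt, dt)
x = v = 0.0
xs = []
for t in ts:
    xs.append(x)
    def acc(ti, xi, vi):
        return (f(ti) - 2*z*wn*m*vi - wn**2*m*xi)/m
    k1x, k1v = v, acc(t, x, v)
    k2x, k2v = v + 0.5*dt*k1v, acc(t + 0.5*dt, x + 0.5*dt*k1x, v + 0.5*dt*k1v)
    k3x, k3v = v + 0.5*dt*k2v, acc(t + 0.5*dt, x + 0.5*dt*k2x, v + 0.5*dt*k2v)
    k4x, k4v = v + dt*k3v, acc(t + dt, x + dt*k3x, v + dt*k3v)
    x += dt/6*(k1x + 2*k2x + 2*k3x + k4x)
    v += dt/6*(k1v + 2*k2v + 2*k3v + k4v)
x_num = np.array(xs)

# closed-form superposition: x1 + x2 + x3
x_closed = ramp(ts, 0, 1) + ramp(ts, 1, -1) + step(ts, 4, -1)
err = np.max(np.abs(x_num - x_closed))
```

The maximum discrepancy between the two is at the level of the integrator's truncation error, confirming the superposition.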


Problem 5.3
You are told that a small structure that can be modeled as a single-degree-of-freedom system has a natural frequency of 2 Hz. You are also told that the system will be subjected to a force that has an initial ramp that flattens out to a constant value of f_r after a period of τ. You are told that the initial ramp rise period is 0.25 s. Is this a good choice? Is there a rise period that would reduce the dynamic response relative to the 0.25-second period?

Solution 5.3
A natural frequency of 2 Hz corresponds to a natural period of vibration of 0.5 s. This yields τ/T_n = 0.25/0.5 = 0.5, where τ is the ramp period in Fig. 5.1-7. We observe from the figure that the amplification would be about 1.6 times the static deflection. Doubling the ramp period would eliminate the dynamic amplification.

Problem 5.4
Other than the response spectra shown in Section 5.4, are there any other response spectra in this chapter? If so, explain why it is a response spectrum.

Solution 5.4
Fig. 5.1-7 is a response spectrum because it shows, for a fixed value of ramp period, the peak response of a single-degree-of-freedom system as a function of its natural frequency.

Problem 5.5
A system with no damping is subjected to a step function force of f_s. What is the magnitude of the peak displacement relative to the static deflection obtained with the same force? Show all your work.

Solution 5.5
The static deflection is x_s = f_s/k. The dynamic deflection (Eq. 5.1-5) is
x(t) = (f_s/(mω_n²)) { 1 − e^(−ζω_n t) [ cos ω_d t + (ζ/√(1−ζ²)) sin ω_d t ] }
Since ω_n² = k/m, f_s/(mω_n²) = f_s/k = x_s, and
x(t) = x_s { 1 − e^(−ζω_n t) [ cos ω_d t + (ζ/√(1−ζ²)) sin ω_d t ] }
For a system with no damping, ζ = 0, so
x(t)/x_s = 1 − cos ω_n t
and therefore
x(t)_peak/x_s = 2
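The factor-of-two result of Solution 5.5 can be confirmed numerically. The mass, stiffness, and force values in this sketch are illustrative choices, not from the text.

```python
import numpy as np

# Illustrative (assumed) values
m, k, fs = 1.0, 250.0, 3.0
wn = np.sqrt(k/m)
xs = fs/k                               # static deflection
t = np.linspace(0, 2*np.pi/wn, 10001)   # one natural period
x = (fs/(m*wn**2))*(1 - np.cos(wn*t))   # undamped step response, Eq. (5.1-5) with zeta = 0
peak_ratio = x.max()/xs
```

The peak dynamic deflection is twice the static deflection, independent of the particular m, k, and f_s chosen.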

Problem 5.6
A system with no damping and a natural period of vibration T_n = 2 s is subjected to ramp forces that reach the peak value, f_r, and then stay at that value. The first ramp reaches its plateau in 2 s, whereas the second ramp takes 3 s. Which ramp produces the larger deflection in the system?

Solution 5.6
From Fig. 5.1-7 we can determine that for the first case τ/T_n = 2/2 = 1, and there is no amplification over the static deflection. For the second ramp, τ/T_n = 3/2 = 1.5, and from the figure the ratio of dynamic to static deflection is approximately 1.2. Therefore, the longer ramp period produces the larger deflection. However, if the ramp period were increased further, say to 4 s, then the amplification would be down to that of the 2-second ramp.

Problem 5.7
What is the response of a single-degree-of-freedom system initially at rest, i.e., x(0) = 0 and ẋ(0) = 0, and subjected to an impulse of 10 (units of force) times 0.1 (units of time)? Assume that the period of the impulse is considerably shorter than the natural period of vibration. The mass of the system is 0.4 (units of mass). All units are consistent.


Solution 5.7
Impulsive forces, provided the duration of the forces are considerably shorter than the natural period of the system, will manifest themselves as initial velocities with no external force. The initial velocity is
ẋ(0) = (f)(time)/mass = (10)(0.1)/0.4 = 2.5
The response of a single-degree-of-freedom system whose motion is due solely to initial conditions is (see Chapter 2)
x(t) = e^(−ζω_n t) [ x(0) cos ω_d t + ((ẋ(0) + ζω_n x(0))/ω_d) sin ω_d t ]
For this problem there is only the equivalent velocity due to the impulse at t = 0, hence,
x(t) = e^(−ζω_n t) (2.5/ω_d) sin ω_d t

Problem 5.8
The response to a step function of magnitude f_s was computed using Duhamel's integral in Section 5.3.1, Eq. (5.3-9), which is shown below,
x(t) = (f_s/(mω_n²)) { 1 − e^(−ζω_n t) [ cos ω_d t + (ζ/√(1−ζ²)) sin ω_d t ] }
Show that the displacement and velocity at t = 0 are both zero, i.e., x(0) = 0 and ẋ(0) = 0.

Solution 5.8
For the displacement at t = 0,
x(0) = (f_s/(mω_n²)) { 1 − 1·(1 + 0) } = 0
For the velocity at t = 0,
(d/dt)x(t) = (f_s/(mω_n²)) { ζω_n e^(−ζω_n t) [ cos ω_d t + (ζ/√(1−ζ²)) sin ω_d t ] − e^(−ζω_n t) [ −ω_d sin ω_d t + (ζω_d/√(1−ζ²)) cos ω_d t ] }
and
ẋ(0) = (f_s/(mω_n²)) { ζω_n·1·(1 + 0) − 1·(0 + ζω_d/√(1−ζ²)) } = (f_s/(mω_n²)) { ζω_n − ζ ω_n√(1−ζ²)/√(1−ζ²) } = 0
since ω_d = ω_n√(1−ζ²).
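Solutions 5.7 and 5.8 can both be checked numerically. The damping ratio and natural frequency below are illustrative assumptions (Problem 5.7 leaves them unspecified); the 2-s natural period makes the 0.1-s pulse "short" as required.

```python
import numpy as np

# Assumed (illustrative) system: Tn = 2 s, light damping
z, wn, m = 0.02, np.pi, 0.4
zd = np.sqrt(1 - z**2)
wd = wn*zd
F, tau = 10.0, 0.1
v0 = F*tau/m                          # Solution 5.7: impulse/mass = 2.5

t = np.linspace(tau, 6.0, 6001)       # compare after the pulse ends

# impulse idealization (Solution 5.7)
x_imp = np.exp(-z*wn*t)*(v0/wd)*np.sin(wd*t)

# exact response to the rectangular pulse: step response up to tau (Eq. 5.1-5),
# then free vibration from the state at tau (Chapter 2 result)
xa = F/(m*wn**2)*(1 - np.exp(-z*wn*tau)*(np.cos(wd*tau) + (z/zd)*np.sin(wd*tau)))
va = F/(m*wd)*np.exp(-z*wn*tau)*np.sin(wd*tau)
s = t - tau
x_exact = np.exp(-z*wn*s)*(xa*np.cos(wd*s) + (va + z*wn*xa)/wd*np.sin(wd*s))

peak_err = abs(np.max(np.abs(x_imp)) - np.max(np.abs(x_exact)))/np.max(np.abs(x_exact))

# Solution 5.8: the step response starts from rest
fs = 1.0
def x_step(tt):
    return fs/(m*wn**2)*(1 - np.exp(-z*wn*tt)*(np.cos(wd*tt) + (z/zd)*np.sin(wd*tt)))
h = 1e-6
v_step0 = (x_step(h) - x_step(-h))/(2*h)   # central difference at t = 0
```

The impulse idealization matches the exact pulse response to within about one percent here, and the step response indeed starts with zero displacement and velocity.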

Problem 5.9
The spring-mass system shown in the left figure drops a distance L so that its base impacts the floor. The base remains attached to the floor after impact as the mass, m, oscillates. Assume the spring and dashpot are weightless. Also, assume the coordinate y(t) has its origin where the system will settle after the vibration decays to zero, i.e., the static equilibrium point. What is the equation of motion that describes the displacement response of the system after first contact with the floor?

Solution 5.9
We will formulate the elastic response problem starting at the instant the base of the spring-mass system hits the floor (right figure). At that point the mass, m, has a downward (negative) velocity due to the mass having been subjected to the force of gravity as it traveled through the distance L. In addition, it will have a positive displacement relative to the static equilibrium point where the origin of y(t) is defined. Hence, we have a negative initial velocity and positive initial displacement.


The initial displacement follows from static equilibrium, y_s k = mg, so
y(0) = y_s = mg/k
The velocity at impact is obtained from the free fall, m ÿ(t) = −mg, so that
ẏ(t) = ∫ ÿ(t) dt = −∫ g dt = −gt + a
Since at the time the system starts to drop the velocity is zero, a = 0. Solving for the displacement,
y(t) = ∫ ẏ(t) dt = −∫ gt dt = −(1/2)gt² + b
For this calculation, the displacement is zero at the time the system starts to drop, hence, b = 0. Solving for the time it takes the system to drop a distance L, where downward displacement is negative,
−L = −(1/2)gt²  →  t = √(2L/g)
Hence, the velocity at first contact, which we will consider the start of the elastic response, is
ẏ(0) = −g√(2L/g) = −√(2Lg)
The response of the single-degree-of-freedom system to initial conditions is
y(t) = e^(−ζω_n t) [ y(0) cos ω_d t + ((ẏ(0) + ζω_n y(0))/ω_d) sin ω_d t ]
where y(0) and ẏ(0) are given above.

Problem 5.10
Show that if Q = 1/(2ζ), then substituting a = 2ω_d and b = 2ζω_n into Eq. (5.6-20),
h(T) = (G₀ω_n²/(4Tζ_d²)) { (1/b)[ (e^(−bT) − 1)/b + Ta²/(a² + b²) ] − (1/(a² + b²)²)[ (a² − b²)(1 − e^(−bT) cos aT) − 2ab e^(−bT) sin aT ] }

we obtain Eq. (5.6-21), i.e.,
h(T) = (ω_n²G₀/(4Tζ_d²)) { Q²(e^(−2ζω_nT) − 1)/ω_n² + TQζ_d²/ω_n − (ζ_d² − ζ²)/(4ω_n²) + (e^(−2ζω_nT)/(4ω_n²)) [ (ζ_d² − ζ²) cos 2ω_dT + 2ζζ_d sin 2ω_dT ] }

Solution 5.10
Substituting a = 2ω_d and b = 2ζω_n into Eq. (5.6-20) gives
h(T) = (G₀ω_n²/(4Tζ_d²)) { (1/(2ζω_n))[ (e^(−2ζω_nT) − 1)/(2ζω_n) + T(2ω_d)²/((2ω_d)² + (2ζω_n)²) ] − (1/((2ω_d)² + (2ζω_n)²)²)[ ((2ω_d)² − (2ζω_n)²)(1 − e^(−2ζω_nT) cos 2ω_dT) − 8ω_dζω_n e^(−2ζω_nT) sin 2ω_dT ] }
Since (2ω_d)² + (2ζω_n)² = 4ω_n², ω_d = ζ_dω_n, and Q = 1/(2ζ), this becomes
h(T) = (G₀ω_n²/(4Tζ_d²)) { Q²(e^(−2ζω_nT) − 1)/ω_n² + TQζ_d²/ω_n − (ω_d² − ζ²ω_n²)/(4ω_n⁴) + (e^(−2ζω_nT)/(4ω_n²)) [ ((ω_d² − ζ²ω_n²)/ω_n²) cos 2ω_dT + 2ζζ_d sin 2ω_dT ] }
and, with ω_d² − ζ²ω_n² = (ζ_d² − ζ²)ω_n², this is Eq. (5.6-21).
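The algebra in Solution 5.10 can be spot-checked numerically: evaluated at the same parameters, Eqs. (5.6-20), (5.6-21), and (5.6-22) agree. In this sketch, ζ, f_n, and G₀ are illustrative choices.

```python
import numpy as np

# Illustrative (assumed) parameters
z, fn, G0 = 0.05, 10.0, 0.01
wn = 2*np.pi*fn
zd = np.sqrt(1 - z**2)
wd = zd*wn
Q = 1/(2*z)
a, b = 2*wd, 2*z*wn

def h20(T):   # Eq. (5.6-20) with a = 2*wd, b = 2*z*wn
    return G0*wn**2/(4*T*zd**2)*((1/b)*((np.exp(-b*T) - 1)/b + T*a**2/(a**2 + b**2))
        - ((a**2 - b**2)*(1 - np.exp(-b*T)*np.cos(a*T))
           - 2*a*b*np.exp(-b*T)*np.sin(a*T))/(a**2 + b**2)**2)

def h21(T):   # Eq. (5.6-21)
    e = np.exp(-2*z*wn*T)
    return wn**2*G0/(4*T*zd**2)*(Q**2*(e - 1)/wn**2 + T*Q*zd**2/wn
        - (zd**2 - z**2)/(4*wn**2)
        + e/(4*wn**2)*((zd**2 - z**2)*np.cos(2*wd*T) + 2*z*zd*np.sin(2*wd*T)))

def h22(T):   # Eq. (5.6-22)
    e = np.exp(-2*z*wn*T)
    return (wn*Q*G0/4 + Q**2*G0/(4*T*zd**2)*(e - 1)
        + G0/(16*T*zd**2)*(e*((zd**2 - z**2)*np.cos(2*wd*T)
                              + 2*z*zd*np.sin(2*wd*T)) - (zd**2 - z**2)))

T = np.array([0.01, 0.1, 1.0, 10.0])
d1 = np.max(np.abs(h20(T) - h21(T)))
d2 = np.max(np.abs(h21(T) - h22(T)))
```

All three forms agree to machine precision over several decades of T.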


Problem 5.11
Show that if we let T approach infinity, Eq. (5.6-22),
h(T) = ω_nQG₀/4 + (Q²G₀/(4Tζ_d²))(e^(−2ζω_nT) − 1) + (G₀/(16Tζ_d²)) { e^(−2ζω_nT) [ (ζ_d² − ζ²) cos 2ω_dT + 2ζζ_d sin 2ω_dT ] − (ζ_d² − ζ²) }
reduces to Miles' equation,
ẍ²_pa = h = ω_nQG₀/4 = πf_nQG₀/2

Solution 5.11
lim(T→∞) h(T) = lim(T→∞) [ ω_nQG₀/4 + (Q²G₀/(4ζ_d²)) (1/T)(e^(−2ζω_nT) − 1) ] + lim(T→∞) (G₀/(16ζ_d²)) (1/T) { e^(−2ζω_nT) [ (ζ_d² − ζ²) cos 2ω_dT + 2ζζ_d sin 2ω_dT ] − (ζ_d² − ζ²) }
As T → ∞, e^(−2ζω_nT) → 0, and the 1/T factors drive the second and third terms to zero, since the bracketed quantities remain bounded. Hence,
lim(T→∞) h(T) = ω_nQG₀/4 = 2πf_nQG₀/4 = πf_nQG₀/2
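The limit in Solution 5.11 can be checked numerically: evaluated at a large T, Eq. (5.6-22) is already within a fraction of a percent of Miles' equation. The parameter values here are illustrative.

```python
import numpy as np

# Illustrative (assumed) parameters
z, fn, G0 = 0.05, 10.0, 0.01
wn = 2*np.pi*fn
zd = np.sqrt(1 - z**2)
wd = zd*wn
Q = 1/(2*z)

def h(T):   # Eq. (5.6-22)
    e = np.exp(-2*z*wn*T)
    return (wn*Q*G0/4 + Q**2*G0/(4*T*zd**2)*(e - 1)
        + G0/(16*T*zd**2)*(e*((zd**2 - z**2)*np.cos(2*wd*T)
                              + 2*z*zd*np.sin(2*wd*T)) - (zd**2 - z**2)))

miles = wn*Q*G0/4          # = pi*fn*Q*G0/2
rel_err = abs(h(1000.0) - miles)/miles
```

At T = 1000 s the relative difference is on the order of 10⁻⁴, and it shrinks as 1/T.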


Problem 5.12
A single-degree-of-freedom system with a natural frequency of 30 Hz is subjected to broadband random base excitation that has a zero mean. The Power Spectral Density function of the base acceleration is constant from 10 to 200 Hz and has a value of 0.001 g²/Hz. If the critical damping ratio of the system is 0.01, what are the pseudo acceleration mean square value, root mean square value, and standard deviation?

Solution 5.12
Miles' equation, Eq. (5.5-44), provides the root mean square value, which for a zero-mean response is also the standard deviation,
σ_pa = √( (1/(2ζ)) ω_n G₀ ) = √( (1/ζ) πf_n G₀ ) = √( π(30)(0.001)/0.01 ) = 3.07 g
The mean square value is σ_pa² = 9.42 g².

Problem 5.13
A measured random, zero-mean forcing function is 1000 s long. We wish to compute the mean square response of a single-degree-of-freedom system to this forcing function. The natural frequency of the system is 5 Hz, and its critical damping ratio is 0.02. How many seconds of the 1000-s forcing function need to be used in order to achieve a result that on average is within 5% of the infinite-length solution? What if the natural frequency were 1 Hz?

Solution 5.13
We can use Fig. 5.6-3, or solve (see Eq. 5.6-26)
m²(n) = 1 − (1 − e^(−2πn))/(2πn) = 0.95
to compute the value of the normalized cycle count, n. For this problem the value of n was obtained by iteration and is 3.185; Fig. 5.6-3 was used to provide a starting point. From the discussion in Section 5.6, T must be equal to or greater than nQ/f_n. Substituting produces the desired result:
nQ/f_n = 3.185 · (1/(2(0.02))) / 5 = 15.92
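The iteration for n can be sketched as a simple bisection on Eq. (5.6-26), using only f_n and ζ from the problem:

```python
import math

# Solve m^2(n) = 1 - (1 - exp(-2*pi*n))/(2*pi*n) = 0.95 by bisection
target = 0.95
f = lambda n: 1 - (1 - math.exp(-2*math.pi*n))/(2*math.pi*n) - target

lo, hi = 0.1, 50.0          # f(lo) < 0 < f(hi)
for _ in range(100):
    mid = 0.5*(lo + hi)
    if f(lo)*f(mid) <= 0:
        hi = mid
    else:
        lo = mid
n = 0.5*(lo + hi)           # ~ 3.18

fn, zeta = 5.0, 0.02
Q = 1/(2*zeta)
T_min = n*Q/fn              # ~ 15.9 s
```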



Hence, T ≥ 15.92 s. For the 1 Hz system, T ≥ (15.92)(5) = 79.6 s.

Problem 5.14
The figure shows the Response Spectrum for the El Centro earthquake for a damping value of ζ = 0.05. Assume that you have a one-story building with a natural period of vibration of 3.2 s; this is shown in the figure by the dashed line. Assume that we can model this building as a single-degree-of-freedom system. Define from the Response Spectrum the following quantities: (a) Pseudo acceleration; (b) Pseudo velocity; and (c) displacement. What is the relationship between these quantities and the circular natural frequency?


Solution 5.14
The dashed lines in the figure show where to read the requested quantities.

(a) Pseudo acceleration: 0.10 g
(b) Pseudo velocity: 19.6 in/s
(c) Displacement: 10 in
The relationships between the circular natural frequency and the pseudo acceleration, pseudo velocity, and displacement are
S_a(f_n, ζ) = ω_n² S_d(f_n, ζ)  and  S_v(f_n, ζ) = ω_n S_d(f_n, ζ)
Note that the pseudo acceleration from the above formula will have units of in/s², whereas the spectrum is defined in terms of g.

Problem 5.15
Derive the equation for the one-minus-cosine forcing function whose first cycle is shown in the figure. Using Duhamel's integral, compute the response of an undamped single-degree-of-freedom system subjected to


this forcing function. Assume m = 1, A = 10, and T = 0.4 s, and plot 5 s of the response for systems that have the following natural periods of vibration, T_n: 0.3, 0.41, 1, and 2 s. Plot the response time history for the 0.41 s system out to 50 s and explain the results.

Solution 5.15
The forcing function is
f(t) = (A/2)(1 − cos(2πt/T))
Duhamel's integral is
x(t) = (1/(mω_d)) ∫₀ᵗ f(s) e^(−ζω_n(t−s)) sin ω_d(t−s) ds
and for zero damping we get
x(t) = (1/(mω_n)) ∫₀ᵗ f(s) sin ω_n(t−s) ds
Substituting the forcing function and performing the integration yields
x(t) = (A/(2mω_n)) ∫₀ᵗ (1 − cos(2πs/T)) sin ω_n(t−s) ds = (A/(2mω_n)) { (1 − cos ω_n t)/ω_n − (ω_n/(ω_n² − λ²))(cos λt − cos ω_n t) }
where λ = 2π/T.

Problem 5.17
What is the response if the one-minus-cosine force is applied for only one cycle, i.e., f(t) = 0 for t > T (see figure)? Assume m = 1, A = 10, the natural period of vibration, T_n, of the system is 0.41 s, and the period of the forcing function is T = 0.4 s. Plot your response time history to 2 s and compare to the solution to the periodic forcing function defined in Problem 5.15. Hint: the solution for the period zero to the end of the forcing function at T = 0.4 s was computed in Problem 5.15. For t > T the system motion is due to the displacement and velocity it has at t = T.

Solution 5.17
The solution for the period zero to the end of the forcing function at T = 0.4 s is as derived in Problem 5.15,
x(t) = (A/(2mω_n)) { (1 − cos ω_n t)/ω_n − (ω_n/(ω_n² − λ²))(cos λt − cos ω_n t) }
where for this problem λ = 2π/T = 15.70796 and ω_n = 2π(1/T_n) = 15.32484. The displacement at T = 0.4 s, therefore, is
x(0.4) = 0.32627 { (1 − cos((15.32484)0.4))/15.32484 − (15.32484/(15.32484² − 15.70796²))(cos((15.70796)0.4) − cos((15.32484)0.4)) }
= 0.32627 { 0.000765 − (−1.28896)(1.0000 − 0.98828) } = 0.0052
and the velocity follows from
ẋ(t) = (d/dt)(A/(2mω_n)) { (1 − cos ω_n t)/ω_n − (ω_n/(ω_n² − λ²))(cos λt − cos ω_n t) } = (A/(2mω_n)) { sin ω_n t − (ω_n/(ω_n² − λ²))(−λ sin λt + ω_n sin ω_n t) }
so that
ẋ(0.4) = 0.32627 { sin((15.32484)0.4) − (−1.28896)(−15.70796 sin((15.70796)0.4) + 15.32484 sin((15.32484)0.4)) }
= 0.32627 { −0.15265 + (1.28896)(0 − 2.33934) } = −1.03
The response of an undamped single-degree-of-freedom system to initial conditions was derived in Chapter 2 and is
x(t) = x(0) cos ω_n t + (ẋ(0)/ω_n) sin ω_n t
which, with the values at t = 0.4 s as initial conditions, gives 0.0052 cos 15.3248t − 0.0674 sin 15.3248t measured from t = 0.4 s. Hence, the solution is
t ≤ 0.4:
x(t) = 0.3263 { (1 − cos((15.3248)t))/15.3248 + 1.2887(cos((15.7080)t) − cos((15.3248)t)) }
= 0.0213 − 0.0213 cos((15.3248)t) + 0.4205(cos((15.7080)t) − cos((15.3248)t))
t > 0.4:
x(t) = 0.0052 cos(15.3248(t − 0.4)) − 0.0674 sin(15.3248(t − 0.4))
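A quick numerical check of Solution 5.17: the forced branch and the free-vibration branch meet continuously at t = T, and the initial conditions for the second branch match the values above. All parameters are from the problem statement.

```python
import numpy as np

A, m = 10.0, 1.0
T, Tn = 0.4, 0.41
lam = 2*np.pi/T                     # 15.70796
wn = 2*np.pi/Tn                     # 15.32484

def x_forced(t):                    # forced response from Problem 5.15
    return A/(2*m*wn)*((1 - np.cos(wn*t))/wn
                       - wn/(wn**2 - lam**2)*(np.cos(lam*t) - np.cos(wn*t)))

def v_forced(t):                    # its time derivative
    return A/(2*m*wn)*(np.sin(wn*t)
                       - wn/(wn**2 - lam**2)*(-lam*np.sin(lam*t) + wn*np.sin(wn*t)))

x0, v0 = x_forced(T), v_forced(T)   # ~ 0.0052 and ~ -1.03

def x_free(t):                      # free vibration for t > T
    return x0*np.cos(wn*(t - T)) + v0/wn*np.sin(wn*(t - T))
```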


Because of the repetitive (periodic) nature of the forcing function in Problem 5.15, the response continued to grow. For this problem, where only one cycle is imposed, the response continues after the force stops without attenuation because there is no damping and no additional energy is being supplied by an external force.

Problem 5.18
Assume that the period, T, of a one-minus-cosine forcing function is one-tenth of the natural period of vibration of a single-degree-of-freedom system. Assume that the natural period of vibration is 4 s, the system has no damping, m = 1, and the peak amplitude of the forcing function is A = 10. Compare the response amplitude to that obtained by assuming the forcing function is an impulse of duration T. Discuss your results.

Solution 5.18
The response of a single-degree-of-freedom system to a one-minus-cosine forcing function was developed in Problem 5.15 and is
x(t) = (A/(2mω_n)) { (1 − cos ω_n t)/ω_n − (ω_n/(ω_n² − λ²))(cos λt − cos ω_n t) }
where λ = 2π/T = 2π/0.4 = 15.70796 and ω_n = 2π(1/T_n) = 1.5708. The displacement and velocity at T = 0.4 s, therefore, are


x(0.4) = (10/(2(1.5708))) { (1 − cos((1.5708)0.4))/1.5708 − (1.5708/(1.5708² − 15.70796²))(cos((15.70796)0.4) − cos((1.5708)0.4)) }
= 3.1828 { 0.1216 − (−0.00643)(1.0000 − 0.8090) } = 0.3909
and
ẋ(t) = (d/dt)(A/(2mω_n)) { (1 − cos ω_n t)/ω_n − (ω_n/(ω_n² − λ²))(cos λt − cos ω_n t) } = (A/(2mω_n)) { sin ω_n t − (ω_n/(ω_n² − λ²))(−λ sin λt + ω_n sin ω_n t) }
where
ẋ(0.4) = 3.1828 { sin((1.5708)0.4) − (−0.00643)(−15.70796 sin((15.70796)0.4) + 1.5708 sin((1.5708)0.4)) }
= 3.1828 { 0.58783 + (0.00643)(0 + 0.92347) } = 1.8898
The response of an undamped single-degree-of-freedom system to initial conditions was derived in Chapter 2 and is
x(t) = x(0) cos ω_n t + (ẋ(0)/ω_n) sin ω_n t
Hence, the solution is
t ≤ 0.4:
x(t) = 3.1828 { (1 − cos((1.5708)t))/1.5708 + 0.00643(cos((15.70796)t) − cos((1.5708)t)) }
t > 0.4:
x(t) = 0.3909 cos(1.5708(t − 0.4)) + 1.2030 sin(1.5708(t − 0.4))


These are plotted below:

Impulse is computed with Eq. (5.2-2),
I = ∫(t₁ to t₂) f(t) dt = ∫(0 to 0.4) 10 · (1/2)(1 − cos(2πt/0.4)) dt = 5 ∫(0 to 0.4) (1 − cos 5πt) dt = 5 [ t − (1/(5π)) sin 5πt ](0 to 0.4) = 5(0.4) = 2
and the initial velocity is ẋ(0) = I/m = 2/1 = 2 in/s. The response of the single-degree-of-freedom system, therefore, is
x(t) = (ẋ(0)/ω_n) sin ω_n t = 1.273 sin 1.57t
Hence, the peak response occurs when sin ω_n t = 1, i.e., x_peak = 1.273. This should be compared to the peak response from the closed-form solution, which is 1.26 (see preceding figure). The reason these values are close is because the forcing function has a period that is relatively short compared to the fundamental period of the system and, therefore, acts nearly as an impulse. Below is a plot of the closed-form solution (solid line) versus the one


computed with the assumption that the force is an impulse (dashed line). We started the "impulse" response in the middle of the forcing function, x(t) = 1.273 sin 1.57(t − 0.2), since this is the average time the "impulse" acts.
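The comparison in Solution 5.18 can be reproduced numerically; all parameters are from the problem statement.

```python
import numpy as np

A, m = 10.0, 1.0
T, Tn = 0.4, 4.0
lam = 2*np.pi/T
wn = 2*np.pi/Tn

# state at the end of the one-minus-cosine pulse (Problem 5.15 result)
x0 = A/(2*m*wn)*((1 - np.cos(wn*T))/wn
                 - wn/(wn**2 - lam**2)*(np.cos(lam*T) - np.cos(wn*T)))
v0 = A/(2*m*wn)*(np.sin(wn*T)
                 - wn/(wn**2 - lam**2)*(-lam*np.sin(lam*T) + wn*np.sin(wn*T)))
amp_exact = np.hypot(x0, v0/wn)     # free-vibration amplitude after the pulse, ~1.26

I = A/2*T                           # integral of (A/2)(1 - cos(2*pi*t/T)) over one cycle = 2
amp_impulse = (I/m)/wn              # amplitude from the impulse idealization, ~1.273
rel_diff = abs(amp_impulse - amp_exact)/amp_exact
```

With T = T_n/10 the impulse idealization overestimates the exact amplitude by well under 1%.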

Problem 5.19
A test is to be performed on a system whose natural frequency of vibration is 10 Hz. The system has a critical damping ratio ζ = 0.01. The excitation will be sinusoidal and the frequency of excitation will start at zero Hz. At what linear rate can the frequency increase so that the excitation will generate at least 95% of the steady-state response? If the excitation starts at 0.125 Hz and increases at an octave sweep rate, what rate must it be limited to in order to achieve the same results as for the linear sweep test?

Solution 5.19
From Fig. 5.7-9, the ζ = 0.01 curve for a response of 95% of steady-state gives a value of 46.88 for f_n²/R_l. For our system, f_n = 10 Hz and, hence, R_l = 2.13 Hz per minute. For the octave sweep rate we need to use Fig. 5.7-4 (we could also use Fig. 5.7-3), from which we obtain a value of 37.5 for f_n/R_o. Solving for the sweep rate we obtain R_o = 0.27 octave per minute.


Appendix 5.1 Derivation of Parseval's theorem
Let x₁(t) and x₂(t) be two stationary and ergodic time histories, with Fourier transforms,
X₁(ω) = ∫(−∞ to ∞) x₁(t) e^(−iωt) dt  and  X₂(ω) = ∫(−∞ to ∞) x₂(t) e^(−iωt) dt
The product of x₁(t) and x₂(t) can be expressed as

{ x_cm ; y_cm ; z_cm ; θx_cm ; θy_cm ; θz_cm } =
[ 1  0  0   0   z  −y
  0  1  0  −z   0   x
  0  0  1   y  −x   0
  0  0  0   1   0   0
  0  0  0   0   1   0
  0  0  0   0   0   1 ] { x_p ; y_p ; z_p ; θx_p ; θy_p ; θz_p }   (6.9-2)
{w_cm(t)} = [R]{w_p(t)}
The mass matrix for a rigid mass, defined in a coordinate system whose origin is at the center of mass and whose coordinate axes x_cm, y_cm, and z_cm are the principal axes, is
[m_cm] = diag[ m_x, m_y, m_z, I_xx, I_yy, I_zz ]   (6.9-3)
The kinetic energy, T, of the mass is

6.9 Mass matrix of a rigid body

T = (1/2) {ẇ_cm(t)}ᵀ [m_cm] {ẇ_cm(t)}   (6.9-4)
Substituting the time derivative of the coordinate transformation in Eq. (6.9-2) yields
T = (1/2) {ẇ_cm(t)}ᵀ [m_cm] {ẇ_cm(t)} = (1/2) {ẇ_p(t)}ᵀ [R]ᵀ[m_cm][R] {ẇ_p(t)}   (6.9-5)
Energy is a scalar and invariant under coordinate transformation. Hence, the mass matrix referenced to the coordinates defining the motion of point p is
[m_p] = [R]ᵀ[m_cm][R] =
[  m_x     0       0       0                      z·m_x                  −y·m_x
   0       m_y     0      −z·m_y                  0                       x·m_y
   0       0       m_z     y·m_z                 −x·m_z                   0
   0      −z·m_y   y·m_z   I_xx + z²m_y + y²m_z  −xy·m_z                 −xz·m_y
   z·m_x   0      −x·m_z  −xy·m_z                 I_yy + z²m_x + x²m_z   −yz·m_x
  −y·m_x   x·m_y   0      −xz·m_y                −yz·m_x                  I_zz + y²m_x + x²m_y ]   (6.9-6)
The mass matrix in Eq. (6.9-6) has several useful properties. The terms in the upper right and lower left three-by-three sub-matrices are referred to as the first moment terms. These terms "tell" the system where the center of mass is located relative to point p. Dividing these terms by the mass, and accounting for the sign as defined in the matrix, will yield the distances along the coordinate axes from point p to the center of mass. The lower right three-by-three partition contains the mass moments of inertia referenced to point p. And of course, no matter which reference point we use, the total mass in translation has to be the same, which is what the upper left three-by-three partition indicates.
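Eq. (6.9-6) can be generated directly as [R]ᵀ[m_cm][R]. In this sketch the mass, inertias, and center-of-mass offsets are illustrative values; the checks confirm symmetry and that the first moment terms recover the offsets.

```python
import numpy as np

# Illustrative (assumed) rigid body properties; (x, y, z) locates the cm relative to point p
mx = my = mz = 2.0
Ixx, Iyy, Izz = 0.5, 0.7, 0.9
x, y, z = 0.3, -0.2, 0.4

R = np.array([[1, 0, 0,  0,  z, -y],
              [0, 1, 0, -z,  0,  x],
              [0, 0, 1,  y, -x,  0],
              [0, 0, 0,  1,  0,  0],
              [0, 0, 0,  0,  1,  0],
              [0, 0, 0,  0,  0,  1]], dtype=float)
m_cm = np.diag([mx, my, mz, Ixx, Iyy, Izz])
m_p = R.T @ m_cm @ R                # Eq. (6.9-6)

z_rec = m_p[0, 4]/mx                # first moment term / mass recovers the offset z
Ixx_p = m_p[3, 3]                   # = Ixx + z^2*my + y^2*mz (parallel-axis form)
```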


CHAPTER 6 Multi-degree-of-freedom systems

If we extract the second (translation in y) and sixth (rotation about the z axis) rows and columns from the mass matrix in Eq. (6.9-3) we obtain the mass matrix for the two-degree-of-freedom system in Fig. 6.8-1 (see Eq. 6.8-5). Now, if we set x = −l/2, and y = z = 0 (note that the center of mass is to the left of point p along the x axis, and in the y-z plane in the example problem), and extract the same rows and columns from the mass matrix in Eq. (6.9-6), we obtain the mass matrix in Eq. (6.8-7).

6.10 Classical normal modes
In Section 6.7, we assumed that in modal coordinates the damping matrix would be uncoupled like the mass and stiffness matrices and, hence, we bypassed the need to derive the elements of the physical coordinate damping matrix, [c], by simply assigning to each mode an appropriate modal critical damping ratio. This approach assumes, however, that the damping properties of the structure are such that the resulting modes, in addition to producing diagonal modal mass and stiffness matrices, will also yield a diagonal modal damping matrix. If the modes of the system are real (vs. complex), and in the modal domain the damped equations of motion are uncoupled, then the modes are referred to as classical normal modes. Classical normal mode shapes will have stationary node points. If the damped system does not have classical normal modes, then the modes will be complex and the mode shapes will not have stationary node points. This will be discussed further in Section 6.11. In 1965, Caughey and O'Kelly derived a necessary and sufficient condition for classical normal modes to exist, namely the damping matrix had to satisfy the following equality:
[c][m]⁻¹[k] = ([c][m]⁻¹[k])ᵀ = [k][m]⁻¹[c]   (6.10-1)
where [m], [c], and [k] are the mass, damping, and stiffness matrices, respectively. It should be noted, however, that even if a damping matrix yields classical normal modes, and satisfies Eq. (6.10-1), it does not mean that it is a valid damping formulation and represents the actual physics of the system. A valid damping matrix, for example, needs to satisfy certain rigid body constraints, which we will discuss next.


The second-order matrix differential equation of motion for an unconstrained (free-free) multi-degree-of-freedom system with no applied external forces and a viscous damping model is
[m]{ẅ(t)} + [c]{ẇ(t)} + [k]{w(t)} = {0}   (6.10-2)
where [m], [c], and [k] are as defined above, and {w(t)} is the vector of displacement coordinates. For a physically realizable, unconstrained system with rigid body modes, [m] will be positive definite, and [c] and [k] must be positive semidefinite. Depending on the formulation of the damping matrix, the system described by Eq. (6.10-2) will either have classical (real) normal modes or nonclassical (complex) modes. In either case, however, the rigid body modes will be real. Assume that the damping matrix will yield classical (real) normal modes. The coordinate transformation from physical to modal coordinates (see Eq. 6.5-4) is
{w(t)} = [φ_r]{q_r(t)} + [φ_e]{q_e(t)} = [ [φ_r] [φ_e] ] { {q_r(t)} ; {q_e(t)} } = [φ]{q(t)}   (6.10-3)
where [φ_r] and [φ_e] are the rigid body and elastic modes of the system, respectively, and {q_r(t)} and {q_e(t)} are the corresponding modal coordinates. We will assume, without any loss of generality, that the mode shapes are normalized such that [φ]ᵀ[m][φ] = [I]. Substituting Eq. (6.10-3) and its first and second time derivatives into Eq. (6.10-2), and then premultiplying the entire equation by [φ]ᵀ produces
[φ]ᵀ[m][φ]{q̈(t)} + [φ]ᵀ[c][φ]{q̇(t)} + [φ]ᵀ[k][φ]{q(t)} = {0}   (6.10-4)
Partitioning the equation into the rigid body and elastic modes, we obtain
[ [I] [0] ; [0] [I] ] { {q̈_r(t)} ; {q̈_e(t)} } + [ [c̄] [0] ; [0] [2ζω_n] ] { {q̇_r(t)} ; {q̇_e(t)} } + [ [0] [0] ; [0] [ω_n²] ] { {q_r(t)} ; {q_e(t)} } = { {0} ; {0} }   (6.10-5)


where [c̄] = [φ_r]ᵀ[c][φ_r]. The modal mass and stiffness matrices are diagonal because of mode shape orthogonality with respect to the mass and stiffness matrices. The elastic-mode modal damping matrix, [2ζω_n], is also diagonal because by definition for this problem the damping properties yield classical normal modes, which will produce a diagonal modal damping matrix. In order for a system to dissipate energy in the absence of external forces, it must deform elastically. Hence, rigid body motion, in the absence of external forces, cannot dissipate energy. Therefore, [c̄]{q̇_r(t)} = {0}, and since {q̇_r(t)} is arbitrary,
[c̄] = [0]   (6.10-6)
Since [c̄] = [φ_r]ᵀ[c][φ_r], we have
[φ_r]ᵀ[c][φ_r] = [0]   (6.10-7)
Furthermore, since [c] is positive semidefinite, in order for Eq. (6.10-7) to be true,
[c][φ_r] = [0]   (6.10-8)
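The rigid body constraint can be illustrated numerically. The sketch below uses a free-free 3-DOF spring-mass chain with illustrative values; the damping matrix is assembled from the elastic modes (anticipating the construction of Eq. 6.10-16 below), which satisfies Eqs. (6.10-7) and (6.10-8), whereas a mass proportional term does not.

```python
import numpy as np

# Free-free chain (illustrative values); with M = I the eigenvectors of K
# are already mass-normalized
M = np.eye(3)
K = np.array([[ 1.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  1.0]])
lam, phi = np.linalg.eigh(K)            # ascending: rigid body mode first (lam[0] ~ 0)
wn = np.sqrt(np.clip(lam, 0.0, None))
zeta = np.array([0.0, 0.02, 0.05])      # zero damping assigned to the rigid body mode
C = M @ phi @ np.diag(2*zeta*wn) @ phi.T @ M   # mode superposition damping

phi_r = phi[:, :1]                      # rigid body mode
modal_C = phi.T @ C @ phi               # diagonal modal damping

# a mass proportional term, by contrast, "grounds" the rigid body mode
a = 0.1
C_R = a*M
rb_damp = float(phi_r.T @ C_R @ phi_r)  # = a, nonzero: violates Eq. (6.10-7)
```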

6.10.1 Proportional damping

We now turn our attention to what has historically been referred to as proportional, or Rayleigh, damping (Rayleigh, 1877), defined as
[c_R] = a[m] + b[k]   (6.10-9)
where a and b are constants to be determined. Applying the coordinate transformation defined by Eq. (6.10-3) produces
[φ]ᵀ[c_R][φ] = a[φ]ᵀ[m][φ] + b[φ]ᵀ[k][φ] = a[ [I] [0] ; [0] [I] ] + b[ [0] [0] ; [0] [ω_n²] ]   (6.10-10)
Because there are only two constants of proportionality, modal damping can only be specified independently for two modes. Partitioning out the rigid body portion we obtain
[φ_r]ᵀ[c_R][φ_r] = a[I] + b[0]   (6.10-11)


Since modal damping in rigid body modes must equal zero (Eq. 6.10-7), we have to conclude that a must be equal to zero. Hence, for an unconstrained system the damping matrix cannot be proportional to the mass matrix. Models are developed as unconstrained systems to which boundary conditions are applied. Therefore, before boundary conditions are applied, the mass, damping, and stiffness matrices must be valid. This leads to the conclusion that damping matrices cannot include an additive term that is proportional to the mass matrix. Including the mass proportional term, as in Eq. (6.10-9), would result in a system that is grounded by velocity proportional terms. Setting a to zero, as would happen if one of the modes used to compute the two parameters were a rigid body mode, would result in stiffness proportional damping. If two elastic modes were used to compute a and b, the resulting damping matrix could produce an unstable system, one in which the damping term adds energy to the system and the response grows unbounded (Kabe and Sako, 2016). Hence, we conclude that mass proportional damping should not be used, and only the stiffness proportional term might provide an approximation.

6.10.2 Damping that yields classical normal modes
In the previous section, we showed that a physical damping matrix formulation that includes an additive mass proportional term may not be valid. In this section, we will describe physical damping matrix formulations that result in classical normal modes and produce specified modal damping in all modes, including rigid body modes. In Volume II, we will describe checks that can be performed on damping matrices to ascertain the type of modes they will yield and whether the matrices are properly formulated.

6.10.2.1 Mode superposition damping
We seek a damping matrix, [c], such that
[φ]ᵀ[c][φ] = [2ζω_n]   (6.10-12)
where [2ζω_n] is a diagonal matrix, and the mode shapes, [φ], have been normalized such that
[φ]ᵀ[m][φ] = [I]   (6.10-13)
Premultiplying Eq. (6.10-12) by ([φ]ᵀ)⁻¹ and postmultiplying by [φ]⁻¹ yields


[c] = ([φ]ᵀ)⁻¹[2ζω_n][φ]⁻¹   (6.10-14)
Note that since mode shapes are linearly independent and orthogonal to each other with respect to the mass and stiffness matrices, the inverse of the modal matrix exists. From Eq. (6.10-13) we note that
([φ]ᵀ)⁻¹ = [m][φ]  and  [φ]⁻¹ = [φ]ᵀ[m]   (6.10-15)
Substituting into Eq. (6.10-14) yields
[c] = [m][φ][2ζω_n][φ]ᵀ[m]   (6.10-16)
which is the result presented by Timoshenko, Young, and Weaver (Timoshenko et al., 1974). Eq. (6.10-16) can be used with a truncated set of modes. In addition, since the right-hand side of Eq. (6.10-16) is quadratic, and [2ζω_n] is a symmetric positive or positive semidefinite matrix, [c] will also be symmetric and positive or positive semidefinite. Although not discussed as such by Timoshenko et al., the formulation shown in Eq. (6.10-16) is applicable to systems with rigid body modes, i.e.,
[c] = [m][ [φ_r] [φ_e] ] [ [0] [0] ; [0] [2ζω_n] ] [ [φ_r]ᵀ ; [φ_e]ᵀ ][m] = [m][φ_e][2ζω_n][φ_e]ᵀ[m]   (6.10-17)
In addition, the constraint on a well-formulated damping matrix specified by Eq. (6.10-8) is also satisfied since elastic modes are orthogonal to rigid body modes with respect to the mass matrix, i.e.,
[c][φ_r] = [m][φ_e][2ζω_n][φ_e]ᵀ[m][φ_r] = [m][φ_e][2ζω_n][0] = [0]   (6.10-18)
Eq. (6.10-16) can also be written as
[c] = [m] ( Σ(j=1 to N) {φ}_j 2ζ_j ω_nj {φ}_jᵀ ) [m]   (6.10-19)
where we can see the contributions from each mode to the total physical damping matrix. It follows that if ζ_j and ω_nj are zero, as would be the case for rigid body modes, there would not be any contribution from these modes to the system damping matrix. Care, however, should be exercised in interpreting the individual terms within [c] in relation to the actual physical structure. In addition, there is no guarantee that the structural connectivity defined by the elements of [c] corresponds to the actual load paths (physical structural elements) defined by the stiffness matrix. In order to preserve structural connectivity, the damping matrix could be formulated as described in Volume II. For systems that are constrained, that is, they do not possess rigid body modes, we can repeat the above derivation, but take advantage of the orthogonality property associated with the stiffness matrix. Again, assuming that the mode shapes have been normalized as in Eq. (6.10-13), which yields [φ]ᵀ[k][φ] = [ω_n²], we obtain
[c] = [k][φ][ω_n²]⁻¹[2ζω_n][ω_n²]⁻¹[φ]ᵀ[k]   (6.10-20)
Since rigid body modes do not contribute to the elastic damping properties, we can apply the above formulation to unconstrained systems by noting that we can exclude the rigid body frequencies and associated mode shapes from the corresponding matrices in Eq. (6.10-20). By equating Eq. (6.10-16) to Eq. (6.10-20), premultiplying the entire equation by [φ]ᵀ, and then postmultiplying by [φ], we obtain the fact that the damping matrices generated by the two formulations are the same. If the damping matrices are generated as in Eq. (6.10-16), or Eq. (6.10-20), the system will possess classical normal modes; i.e., the mode shapes of the undamped system will uncouple the damped equations of motion. In addition, the conditions specified by Caughey and O'Kelly (1965) for damped systems that possess classical normal modes are satisfied by the damping matrices in Eqs. (6.10-16) and (6.10-20), i.e.,
[C̄][K̄] = [φ]ᵀ[c][φ][φ]ᵀ[k][φ] = [2ζω_n][ω_n²] = [ω_n²][2ζω_n] = [φ]ᵀ[k][φ][φ]ᵀ[c][φ] = [K̄][C̄]   (6.10-21)
where [K̄] and [C̄] are from Eq. (6.7-6), the modes have been normalized such that [M̄] = [I], and the second equality in (6.10-21) is true because the matrices are diagonal. Also, as mentioned above, Caughey and O'Kelly (1965) showed that a necessary and sufficient condition for classical normal modes to exist was for the damping matrix to satisfy


[c][m]⁻¹[k] = [k][m]⁻¹[c] = ([c][m]⁻¹[k])ᵀ   (6.10-22)
We can verify Eq. (6.10-22) for damping matrices defined by Eq. (6.10-16). We start by substituting Eq. (6.10-16) into the left-hand side,
[c][m]⁻¹[k] = ([m][φ][2ζω_n][φ]ᵀ[m])[m]⁻¹[k] = [m][φ][2ζω_n][φ]ᵀ[k]   (6.10-23)
Recall the eigenvalue problem, [m][φ][ω_n²] = [k][φ]. Postmultiplying this equation by [ω_n²]⁻¹ gives [m][φ] = [k][φ][ω_n²]⁻¹; its transpose gives [φ]ᵀ[k] = [ω_n²][φ]ᵀ[m]. Substituting both yields
[c][m]⁻¹[k] = [k][φ][ω_n²]⁻¹[2ζω_n][ω_n²][φ]ᵀ[m]
= [k][φ][2ζω_n][ω_n²]⁻¹[ω_n²][φ]ᵀ[m] = [k][φ][2ζω_n][φ]ᵀ[m]
= [k][m]⁻¹[m][φ][2ζω_n][φ]ᵀ[m] = [k][m]⁻¹[c]   (6.10-24)
where the diagonal matrices [ω_n²]⁻¹ and [2ζω_n] commute.

6.10.2.2 Modified Caughey series damping

In 1965 Caughey and O'Kelly presented a power series damping matrix formulation for constrained systems (i.e., systems with no rigid body modes) in which the first two terms in the series are identical to those of Rayleigh proportional damping (Caughey and O'Kelly, 1965). As discussed in Section 6.10.1, a mass proportional term is problematic. In this section, we will modify the Caughey and O'Kelly power series formulation to be valid for a system with rigid body modes and show its relationship to the modal superposition formulations described in Section 6.10.2.1.

Let $[m]$, $[c]$, and $[k]$ be $N \times N$ symmetric mass, damping, and stiffness matrices, respectively. Let $[\phi] = \left[\{\phi\}_1 \cdots \{\phi\}_N\right]$ be the matrix of mode shape vectors with the standard mass normalization,

$$\{\phi\}_i^T[m]\{\phi\}_j = \delta_{ij} \qquad (6.10\text{-}25)$$

where $\delta_{ij}$ is the Kronecker delta function. Also, let $\lambda_1, \dots, \lambda_N$ denote the circular natural frequencies squared, so that

$$[\phi]^T[k][\phi] = [\Lambda] = \mathrm{diag}\{\lambda_1, \dots, \lambda_N\} \qquad (6.10\text{-}26)$$

6.10 Classical normal modes

Caughey and O'Kelly showed that a necessary and sufficient condition for a system to be classically damped is that $[c]$ and $[k]$ commute with respect to $[m]^{-1}$, i.e.,

$$[c][m]^{-1}[k] = [k][m]^{-1}[c] \qquad (6.10\text{-}27)$$
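This commutation condition is easy to check numerically. The sketch below (ours, not the book's) builds a damping matrix with the modal superposition formula of Eq. (6.10-16) for the unconstrained three-degree-of-freedom system used in the problems at the end of this chapter, and verifies both the commutation and the symmetry of $[c][m]^{-1}[k]$:

```python
import numpy as np

# Build [c] = [m][phi][2 zeta omega_n][phi]^T [m] (Eq. 6.10-16) and check the
# Caughey-O'Kelly condition [c][m]^-1[k] = [k][m]^-1[c] (Eq. 6.10-27).
m = np.diag([2.0, 2.0, 1.0])
k = np.array([[ 2.0, -2.0,  0.0],
              [-2.0,  4.0, -2.0],
              [ 0.0, -2.0,  2.0]])

# Mass-normalized modes via the Cholesky reduction to a standard eigenproblem
L = np.linalg.cholesky(m)
Li = np.linalg.inv(L)
lam, psi = np.linalg.eigh(Li @ k @ Li.T)
phi = Li.T @ psi                        # phi.T @ m @ phi = I

wn = np.sqrt(np.clip(lam, 0.0, None))   # clip guards the ~0 rigid body eigenvalue
zeta = np.array([0.0, 0.01, 0.05])      # zero damping assigned to the rigid body mode
c = m @ phi @ np.diag(2.0 * zeta * wn) @ phi.T @ m

cmk = c @ np.linalg.inv(m) @ k
print(np.allclose(cmk, cmk.T), np.allclose(cmk, k @ np.linalg.inv(m) @ c))  # -> True True
```

Both checks pass because the modal superposition construction satisfies Eq. (6.10-22) analytically, as shown below.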

In this case $[c]$ can be represented in terms of $[m]$ and $[k]$ via the series,

$$[c] = [m]\sum_{l=0}^{N-1} a_l\left([m]^{-1}[k]\right)^l \qquad (6.10\text{-}28)$$

Note that for $l = 0$ and $l = 1$ the series yields the two terms of Rayleigh proportional damping and, hence, we would have the issues discussed in Section 6.10.1. Transforming Eq. (6.10-28) to modal coordinates yields

$$[\Gamma] = \sum_{l=0}^{N-1} a_l[\Lambda]^l, \qquad [\Gamma] = \mathrm{diag}\{g_1, \dots, g_N\} \qquad (6.10\text{-}29)$$

where $g_j = 2\zeta_j\sqrt{\lambda_j} = 2\zeta_j\omega_{nj}$. Suppose the terms $g_j$ are known and we wish to solve for the series coefficients, $a_l$, in Eq. (6.10-29). Since the series is linear in the $a_l$, the following linear system of equations results:

$$\begin{bmatrix} 1 & \lambda_1 & \lambda_1^2 & \cdots & \lambda_1^{N-1} \\ 1 & \lambda_2 & \lambda_2^2 & \cdots & \lambda_2^{N-1} \\ \vdots & \vdots & \vdots & & \vdots \\ 1 & \lambda_{N-1} & \lambda_{N-1}^2 & \cdots & \lambda_{N-1}^{N-1} \\ 1 & \lambda_N & \lambda_N^2 & \cdots & \lambda_N^{N-1} \end{bmatrix}\begin{Bmatrix} a_0 \\ a_1 \\ a_2 \\ \vdots \\ a_{N-1} \end{Bmatrix} = \begin{Bmatrix} g_1 \\ g_2 \\ \vdots \\ g_{N-1} \\ g_N \end{Bmatrix} \qquad (6.10\text{-}30)$$

The coefficient matrix is of the Vandermonde type and is nonsingular if the natural frequencies are distinct, i.e., $\lambda_i \neq \lambda_j$ for $i \neq j$. If this is the case, $[c]$ has a unique Caughey series representation, but because the first term would be mass proportional it would not be a valid formulation. This can be remedied by developing the damping matrix for the unconstrained system, i.e., by introducing rigid body modes. Assume we have an unconstrained system with six rigid body modes so that $\lambda_1 = \lambda_2 = \cdots = \lambda_6 = 0$ and $g_1 = g_2 = \cdots = g_6 = 0$. For this system, Eq. (6.10-30) becomes


$$\begin{bmatrix} 1 & 0 & 0 & \cdots & 0 \\ \vdots & \vdots & \vdots & & \vdots \\ 1 & 0 & 0 & \cdots & 0 \\ 1 & \lambda_7 & \lambda_7^2 & \cdots & \lambda_7^{N-1} \\ \vdots & \vdots & \vdots & & \vdots \\ 1 & \lambda_N & \lambda_N^2 & \cdots & \lambda_N^{N-1} \end{bmatrix}\begin{Bmatrix} a_0 \\ a_1 \\ \vdots \\ a_{N-1} \end{Bmatrix} = \begin{Bmatrix} 0 \\ \vdots \\ 0 \\ g_7 \\ \vdots \\ g_N \end{Bmatrix} \qquad (6.10\text{-}31)$$

Clearly, $a_0 = 0$, and we obtain the under-determined $(N-6) \times (N-1)$ system,

$$\begin{bmatrix} \lambda_7 & \lambda_7^2 & \cdots & \lambda_7^{N-1} \\ \vdots & \vdots & & \vdots \\ \lambda_N & \lambda_N^2 & \cdots & \lambda_N^{N-1} \end{bmatrix}\begin{Bmatrix} a_1 \\ \vdots \\ a_{N-1} \end{Bmatrix} = \begin{Bmatrix} g_7 \\ \vdots \\ g_N \end{Bmatrix}, \qquad \text{or} \qquad [\widehat{\Lambda}]\{\widehat{a}\} = \{\widehat{g}\} \qquad (6.10\text{-}32)$$

Assuming that the $\lambda_j$, $j = 7, \dots, N$, are distinct, the kernel, $\mathcal{K}$, of $[\widehat{\Lambda}]$ is a five-dimensional subspace of $\mathbb{R}^{N-1}$. Suppose $\{\widehat{z}\} = \{z_1 \cdots z_{N-1}\}^T \in \mathcal{K}$; then $[\widehat{\Lambda}]\{\widehat{z}\} = \{0\}$ and,

$$\sum_{l=0}^{N-1} z_l[\Lambda]^l = [0] \qquad (6.10\text{-}33)$$

where $z_0 = 0$. Moreover, the physical coordinate damping matrix is also the zero matrix,

$$[m]\sum_{l=0}^{N-1} z_l\left([m]^{-1}[k]\right)^l = [0] \qquad (6.10\text{-}34)$$

Therefore, for a particular solution, $\{\widehat{a}\} = \{a_1 \cdots a_{N-1}\}^T$, a general representation of $[c]$ is given by


$$[c] = [m]\sum_{l=0}^{N-1}\left(a_l + z_l\right)\left([m]^{-1}[k]\right)^l, \qquad a_0 = z_0 = 0 \qquad (6.10\text{-}35)$$
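The coefficient solve of Eqs. (6.10-30) through (6.10-32) can be sketched numerically. The eigenvalues and damping ratios below are hypothetical (one rigid body mode and three distinct elastic eigenvalues), chosen so the reduced system is square and has a unique solution:

```python
import numpy as np

# Hypothetical sketch of Eqs. (6.10-30)-(6.10-32): a rigid body mode
# (lambda = 0, g = 0) forces a_0 = 0, leaving a square Vandermonde-type
# system for the remaining coefficients. Numbers are illustrative only.
lam_e = np.array([1.382, 3.618, 6.0])       # distinct elastic eigenvalues (omega_n^2)
zeta = np.array([0.01, 0.05, 0.02])
g = 2.0 * zeta * np.sqrt(lam_e)             # g_j = 2 zeta_j omega_nj

# Rows [lambda_j, lambda_j^2, lambda_j^3] -- the reduced system of Eq. (6.10-32)
V = np.column_stack([lam_e ** l for l in (1, 2, 3)])
a = np.linalg.solve(V, g)                   # a_1, a_2, a_3 (a_0 = 0)

# p(lambda) = a_1*lam + a_2*lam^2 + a_3*lam^3 interpolates g at the elastic
# eigenvalues and vanishes at lambda = 0, as required for rigid body modes.
p = lambda x: a[0] * x + a[1] * x**2 + a[2] * x**3
print(np.allclose(p(lam_e), g), p(0.0))     # -> True 0.0
```

With more rigid body modes than one, the reduced system becomes rectangular and the kernel vectors discussed above enter the general representation of Eq. (6.10-35).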

Recall the modal superposition formulation (Eq. 6.10-16) of a damping matrix,

$$[c_T] = [m][\phi][2\zeta\omega_n][\phi]^T[m] = [m][\phi][\Gamma][\phi]^T[m] \qquad (6.10\text{-}36)$$

where $g_j = 0$ for $j = 1, \dots, 6$, and we have chosen to designate this formulation of the damping matrix by the subscript $T$ so as to distinguish it from the formulation of Eq. (6.10-35). Note that the matrix product in Eq. (6.10-36) automatically ensures that

$$[c_T]\{\phi\}_j = \{0\}, \qquad j = 1, \dots, 6 \qquad (6.10\text{-}37)$$

which is required of the damping matrix of an unconstrained system. We will now show how the Caughey series expansion, Eq. (6.10-35), relates to this formulation.

For simplicity, we first consider a transformation that allows us to use an identity mass matrix. Since $[m]$ is symmetric and positive-definite, it possesses the Cholesky factorization $[m] = [L][L]^T$, where $[L]$ is a lower triangular matrix. Consider the undamped eigenvalue problem,

$$[k]\{\phi\}_j = \lambda_j[m]\{\phi\}_j, \qquad j = 1, \dots, M \qquad (6.10\text{-}38)$$

As before, we will assume that the first six modes are rigid body modes. In practice, we may not be able to experimentally obtain damping estimates for all $N$ modes. Therefore, we can either assume a conservative value for the missing data or derive the damping matrix for the case $M < N$. Premultiplying Eq. (6.10-38) by $[L]^{-1}$ yields the standard symmetric eigenvalue problem,

$$[\widetilde{k}]\{\widetilde{\phi}\}_j = \lambda_j\{\widetilde{\phi}\}_j, \qquad \{\widetilde{\phi}\}_j^T\{\widetilde{\phi}\}_k = \delta_{jk} \qquad (6.10\text{-}39)$$

where $[\widetilde{k}] = [L]^{-1}[k][L]^{-T}$ and $\{\widetilde{\phi}\}_j = [L]^T\{\phi\}_j$. Similarly, transform $[c]$ symmetrically via

$$[\widetilde{c}] = [L]^{-1}[c][L]^{-T} \qquad (6.10\text{-}40)$$

The resulting homogeneous equation of motion is

$$[I]\{\ddot{\widetilde{w}}(t)\} + [\widetilde{c}]\{\dot{\widetilde{w}}(t)\} + [\widetilde{k}]\{\widetilde{w}(t)\} = \{0\} \qquad (6.10\text{-}41)$$


where $\{\widetilde{w}(t)\} = [L]^T\{w(t)\}$. The corresponding Caughey series becomes

$$[\widetilde{c}] = \sum_{l=0}^{N-1} a_l[\widetilde{k}]^l \qquad (6.10\text{-}42)$$

and the mode superposition solution for $[\widetilde{c}_T]$ using the first $M$ modes is

$$[\widetilde{c}_T] = [\widetilde{\phi}][\Gamma][\widetilde{\phi}]^T, \qquad [\widetilde{\phi}] = \left[\{\widetilde{\phi}\}_1 \cdots \{\widetilde{\phi}\}_M\right] \qquad (6.10\text{-}43)$$

Since the modes, $\{\widetilde{\phi}\}_j$, are orthonormal,

$$[\widetilde{c}_T] = \sum_{j=1}^{M} g_j\{\widetilde{\phi}\}_j\{\widetilde{\phi}\}_j^T = \sum_{j=1}^{M} g_j[\widetilde{P}_j] = \sum_{j=7}^{M} g_j[\widetilde{P}_j] \qquad (6.10\text{-}44)$$

where $[\widetilde{P}_j] = \{\widetilde{\phi}\}_j\{\widetilde{\phi}\}_j^T$ is a rank-one projection matrix onto $\{\widetilde{\phi}\}_j$, and the last summation results from $g_1 = g_2 = \cdots = g_6 = 0$. Similarly, $[\widetilde{k}]$ has the decomposition,

$$[\widetilde{k}] = \sum_{j=7}^{N} \lambda_j[\widetilde{P}_j] \qquad (6.10\text{-}45)$$

where we note that $\lambda_1 = \lambda_2 = \cdots = \lambda_6 = 0$. Consider now the stiffness matrix that is truncated to its first $M$ modes, i.e.,

$$[\widetilde{k}_M] = \sum_{j=7}^{M} \lambda_j[\widetilde{P}_j] \qquad (6.10\text{-}46)$$

Since $[\widetilde{P}_j]^l = [\widetilde{P}_j]$, we obtain

$$[\widetilde{k}_M]^l = \sum_{j=7}^{M} \lambda_j^l[\widetilde{P}_j] \qquad (6.10\text{-}47)$$

The Caughey series considers the following linear combination of powers of $[\widetilde{k}_M]$,


$$\sum_{l=1}^{H} a_l[\widetilde{k}_M]^l = \sum_{j=7}^{M} p(\lambda_j)[\widetilde{P}_j], \qquad H \leq N-1 \qquad (6.10\text{-}48)$$

where $p(\lambda) = a_1\lambda + a_2\lambda^2 + \cdots + a_H\lambda^H$, and we have constrained $a_0 = 0$ to be consistent with the required rigid body behavior. Equating the Caughey series to $[\widetilde{c}_T]$ yields

$$\sum_{l=1}^{H} a_l[\widetilde{k}_M]^l = \sum_{j=7}^{M} p(\lambda_j)[\widetilde{P}_j] = \sum_{j=7}^{M} g_j[\widetilde{P}_j] = [\widetilde{c}_T] \qquad (6.10\text{-}49)$$

Hence, the coefficients corresponding to the projections, $[\widetilde{P}_j]$, satisfy

$$p(\lambda_j) = g_j, \qquad j = 7, \dots, M \qquad (6.10\text{-}50)$$

This leads to an $(M-6) \times H$ Vandermonde-type system of equations,

$$\begin{bmatrix} \lambda_7 & \lambda_7^2 & \cdots & \lambda_7^H \\ \lambda_8 & \lambda_8^2 & \cdots & \lambda_8^H \\ \vdots & \vdots & & \vdots \\ \lambda_M & \lambda_M^2 & \cdots & \lambda_M^H \end{bmatrix}\begin{Bmatrix} a_1 \\ a_2 \\ \vdots \\ a_H \end{Bmatrix} = \begin{Bmatrix} g_7 \\ g_8 \\ \vdots \\ g_M \end{Bmatrix} \qquad (6.10\text{-}51)$$

If $M = N$ and $H = N-1$, we obtain the Vandermonde system as before and,

$$\sum_{l=1}^{N-1} a_l[\widetilde{k}]^l = [\widetilde{c}_T] \qquad (6.10\text{-}52)$$

with $\{0\ a_1 \cdots a_{N-1}\}^T$ being unique, except for additive vectors in the kernel, $\mathcal{K}$. If we set $H = M-6$, then we obtain a unique solution, $\{0\ a_1 \cdots a_{M-6}\}^T$, and hence, $[\widetilde{c}_T]$ has a unique expansion in terms of $[\widetilde{k}_M]^l$, i.e.,

$$[\widetilde{c}_T] = \sum_{l=1}^{M-6} a_l[\widetilde{k}_M]^l \qquad (6.10\text{-}53)$$


Furthermore, if $M < N$, then the diagonal damping terms for the rigid body and higher modes are zero, i.e.,

$$\{\widetilde{\phi}\}_j^T[\widetilde{c}_T]\{\widetilde{\phi}\}_j = \begin{cases} g_j, & 7 \leq j \leq M \\ 0, & j \leq 6 \ \text{or} \ j > M \end{cases} \qquad (6.10\text{-}54)$$

Solution 6.2 (eigenvalue problem and resulting modes):

$$\left(-\omega_j^2\begin{bmatrix} 2 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 1 \end{bmatrix} + \begin{bmatrix} 2 & -2 & 0 \\ -2 & 4 & -2 \\ 0 & -2 & 2 \end{bmatrix}\right)\begin{Bmatrix} \phi_{1j} \\ \phi_{2j} \\ \phi_{3j} \end{Bmatrix} = \begin{Bmatrix} 0 \\ 0 \\ 0 \end{Bmatrix}$$

$$\left[\omega_n^2\right] = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 1.3820 & 0 \\ 0 & 0 & 3.6180 \end{bmatrix} \qquad [\phi] = \begin{bmatrix} 0.4472 & 0.5117 & 0.1954 \\ 0.4472 & -0.1954 & -0.5117 \\ 0.4472 & -0.6325 & 0.6325 \end{bmatrix}$$
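The eigensolution above can be reproduced with a few lines of numerical linear algebra (a sketch, not the book's code; the Cholesky reduction mirrors Eq. (6.10-39) and returns mass-normalized modes automatically):

```python
import numpy as np

# Modes of the unconstrained 3-DOF system of Problem 6.2.
m = np.diag([2.0, 2.0, 1.0])
k = np.array([[ 2.0, -2.0,  0.0],
              [-2.0,  4.0, -2.0],
              [ 0.0, -2.0,  2.0]])

L = np.linalg.cholesky(m)
Li = np.linalg.inv(L)
w2, psi = np.linalg.eigh(Li @ k @ Li.T)   # omega_n^2 = 0, 1.3820, 3.6180
phi = Li.T @ psi                           # mass-normalized mode shapes

print(np.round(w2, 4))                     # eigenvalues (rigid body mode first)
print(np.allclose(phi.T @ m @ phi, np.eye(3)))         # mass orthonormality
print(np.allclose(phi.T @ k @ phi, np.diag(w2)))       # stiffness orthogonality
```

The signs of individual mode shape columns are arbitrary (any column may be negated), which is why printed mode shapes can differ from a solver's output while representing the same modes.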


Problem 6.3
Show that the mode shapes computed in Problem 6.2 are orthogonal with respect to the mass and stiffness matrices.

Solution 6.3

$$[\phi]^T[m][\phi] = \begin{bmatrix} 0.4472 & 0.5117 & 0.1954 \\ 0.4472 & -0.1954 & -0.5117 \\ 0.4472 & -0.6325 & 0.6325 \end{bmatrix}^T\begin{bmatrix} 2 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} 0.4472 & 0.5117 & 0.1954 \\ 0.4472 & -0.1954 & -0.5117 \\ 0.4472 & -0.6325 & 0.6325 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$$

$$[\phi]^T[k][\phi] = \begin{bmatrix} 0.4472 & 0.5117 & 0.1954 \\ 0.4472 & -0.1954 & -0.5117 \\ 0.4472 & -0.6325 & 0.6325 \end{bmatrix}^T\begin{bmatrix} 2 & -2 & 0 \\ -2 & 4 & -2 \\ 0 & -2 & 2 \end{bmatrix}\begin{bmatrix} 0.4472 & 0.5117 & 0.1954 \\ 0.4472 & -0.1954 & -0.5117 \\ 0.4472 & -0.6325 & 0.6325 \end{bmatrix} = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 1.3820 & 0 \\ 0 & 0 & 3.6180 \end{bmatrix}$$

The resultant matrices are diagonal, indicating that the cross-orthogonality terms (i.e., orthogonality between unlike mode shapes) have values of zero; hence, the mode shapes are orthogonal to each other with respect to the mass and stiffness matrices.

Problem 6.4
Postmultiply the stiffness matrix in Problem 6.3 by the rigid body mode shape normalized to a peak value of 1.0. Explain the results.


Solution 6.4

$$[k]\{x\} = [k]\{\phi_{RB}\} = \{f\}, \qquad \begin{bmatrix} 2 & -2 & 0 \\ -2 & 4 & -2 \\ 0 & -2 & 2 \end{bmatrix}\begin{Bmatrix} 1.0 \\ 1.0 \\ 1.0 \end{Bmatrix} = \begin{Bmatrix} 0 \\ 0 \\ 0 \end{Bmatrix}$$

Mode shapes are displacement patterns. Since rigid body mode shapes do not elastically deform a structure, multiplying the stiffness matrix by this displacement shape will not produce any forces. Note that any normalization of the rigid body mode shape could have been used.

Problem 6.5
Perform the operation $\{\phi_r\}^T[m]\{\phi_r\}$ with the mass matrix from Problem 6.2 and the unit-normalized rigid body mode shape from Problem 6.3. Explain your results.

Solution 6.5
The rigid body mode shape accumulates the mass associated with each coordinate to produce the total mass of the system, i.e.,

$$\begin{Bmatrix} 1.0 \\ 1.0 \\ 1.0 \end{Bmatrix}^T\begin{bmatrix} 2 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 1 \end{bmatrix}\begin{Bmatrix} 1.0 \\ 1.0 \\ 1.0 \end{Bmatrix} = 5$$

Problem 6.6
For the system whose mass and stiffness matrices are given below, compute the natural frequencies and mode shapes, establish the equations of motion in modal coordinates, and then compute the physical coordinate response for the initial displacements, $x_1(0) = 1$ and $x_2(0) = -2$.

$$\begin{bmatrix} 3 & 0 \\ 0 & 1 \end{bmatrix}\begin{Bmatrix} \ddot{x}_1(t) \\ \ddot{x}_2(t) \end{Bmatrix} + \begin{bmatrix} 4 & -2 \\ -2 & 4 \end{bmatrix}\begin{Bmatrix} x_1(t) \\ x_2(t) \end{Bmatrix} = \begin{Bmatrix} 0 \\ 0 \end{Bmatrix}$$


Solution 6.6
Eigenvalue problem:

$$\left(-\omega_j^2\begin{bmatrix} 3 & 0 \\ 0 & 1 \end{bmatrix} + \begin{bmatrix} 4 & -2 \\ -2 & 4 \end{bmatrix}\right)\begin{Bmatrix} \phi_1 \\ \phi_2 \end{Bmatrix}_j = \begin{Bmatrix} 0 \\ 0 \end{Bmatrix}$$

Eigenvalues and mode shapes (eigenvectors):

$$\left[\omega_n^2\right] = \begin{bmatrix} 0.9028 & 0 \\ 0 & 4.4305 \end{bmatrix} \qquad [\phi] = \begin{bmatrix} 0.5410 & -0.2017 \\ 0.3493 & 0.9370 \end{bmatrix}$$

Coordinate transformation to modal coordinates:

$$\begin{Bmatrix} x_1(t) \\ x_2(t) \end{Bmatrix} = \begin{bmatrix} 0.5410 & -0.2017 \\ 0.3493 & 0.9370 \end{bmatrix}\begin{Bmatrix} q_1(t) \\ q_2(t) \end{Bmatrix}$$

Equations of motion in modal coordinates:

$$[\phi]^T[m][\phi]\{\ddot{q}(t)\} + [\phi]^T[k][\phi]\{q(t)\} = \{0\}$$

$$\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}\begin{Bmatrix} \ddot{q}_1(t) \\ \ddot{q}_2(t) \end{Bmatrix} + \begin{bmatrix} 0.9028 & 0 \\ 0 & 4.4305 \end{bmatrix}\begin{Bmatrix} q_1(t) \\ q_2(t) \end{Bmatrix} = \begin{Bmatrix} 0 \\ 0 \end{Bmatrix}$$

$$\ddot{q}_1(t) + 0.9028\,q_1(t) = 0, \qquad \ddot{q}_2(t) + 4.4305\,q_2(t) = 0$$

The solution for each modal coordinate is that of a single-degree-of-freedom system whose motion is initiated with initial conditions. Transforming the initial displacements to modal coordinates:

$$\{q(0)\} = [\phi]^T[m]\{x(0)\}$$

$$\begin{Bmatrix} q_1(0) \\ q_2(0) \end{Bmatrix} = \begin{bmatrix} 0.5410 & -0.2017 \\ 0.3493 & 0.9370 \end{bmatrix}^T\begin{bmatrix} 3 & 0 \\ 0 & 1 \end{bmatrix}\begin{Bmatrix} 1 \\ -2 \end{Bmatrix} = \begin{Bmatrix} 0.9243 \\ -2.4791 \end{Bmatrix}$$


The vibration response of an undamped single-degree-of-freedom system to initial conditions, from Chapter 2, is $q_j(t) = q_j(0)\cos\omega_j t + \frac{\dot{q}_j(0)}{\omega_j}\sin\omega_j t$, where $\dot{q}_j(0) = 0$ and the $q_j(0)$ values are given above; hence,

$$q_1(t) = 0.9243\cos\sqrt{0.9028}\,t \qquad q_2(t) = -2.4791\cos\sqrt{4.4305}\,t$$

The solution in the x-coordinate system is obtained by transforming the modal coordinate solutions back to the x-coordinate system:

$$\{x(t)\} = [\phi]\{q(t)\}$$

$$\begin{Bmatrix} x_1(t) \\ x_2(t) \end{Bmatrix} = \begin{bmatrix} 0.5410 & -0.2017 \\ 0.3493 & 0.9370 \end{bmatrix}\begin{Bmatrix} 0.9243\cos\sqrt{0.9028}\,t \\ -2.4791\cos\sqrt{4.4305}\,t \end{Bmatrix}$$

$$x_1(t) = 0.5\cos\sqrt{0.9028}\,t + 0.5\cos\sqrt{4.4305}\,t$$
$$x_2(t) = 0.3228\cos\sqrt{0.9028}\,t - 2.3229\cos\sqrt{4.4305}\,t$$

Problem 6.7
The first-order solution of the response of a single-degree-of-freedom system is

$$\begin{Bmatrix} \dot{x}(t) \\ x(t) \end{Bmatrix} = \begin{Bmatrix} -\zeta\omega_n + i\omega_d \\ 1 \end{Bmatrix} a_1 e^{(-\zeta\omega_n + i\omega_d)t} + \begin{Bmatrix} f_v \\ 1 \end{Bmatrix} a_2 e^{\lambda_2 t}$$

Compute $f_v$ and $\lambda_2$ and show the derivation of the velocity solution, $\dot{x}(t)$, in terms of cosine and sine functions. Verify your solution by differentiating the displacement solution given in Eq. (6.11-21).

Solution 6.7
The two eigenvectors and eigenvalues have to be complex conjugates of those of the first term, i.e.,

$$f_v = -\zeta\omega_n - i\omega_d \qquad \text{and} \qquad \lambda_2 = -\zeta\omega_n - i\omega_d$$

The velocity response is


$$\begin{aligned}
\dot{x}(t) &= (-\zeta\omega_n + i\omega_d)a_1 e^{(-\zeta\omega_n + i\omega_d)t} + (-\zeta\omega_n - i\omega_d)a_2 e^{(-\zeta\omega_n - i\omega_d)t} \\
&= e^{-\zeta\omega_n t}\left[(-\zeta\omega_n a_1 + i\omega_d a_1)e^{i\omega_d t} + (-\zeta\omega_n a_2 - i\omega_d a_2)e^{-i\omega_d t}\right] \\
&= e^{-\zeta\omega_n t}\left[(-\zeta\omega_n a_1 + i\omega_d a_1)(\cos\omega_d t + i\sin\omega_d t) + (-\zeta\omega_n a_2 - i\omega_d a_2)(\cos\omega_d t - i\sin\omega_d t)\right] \\
&= e^{-\zeta\omega_n t}\left[\left(-\zeta\omega_n(a_1 + a_2) + i\omega_d(a_1 - a_2)\right)\cos\omega_d t + \left(-i\zeta\omega_n(a_1 - a_2) - \omega_d(a_1 + a_2)\right)\sin\omega_d t\right] \\
&= e^{-\zeta\omega_n t}\left[\left(-\zeta\omega_n A_e + \omega_d B_e\right)\cos\omega_d t - \left(\zeta\omega_n B_e + \omega_d A_e\right)\sin\omega_d t\right]
\end{aligned}$$

where $A_e = a_1 + a_2$ and $B_e = i(a_1 - a_2)$.

Problem 6.8
Show that $[c] = [m][\phi][2\zeta\omega_n][\phi]^T[m]$ yields the same damping matrix as $[c] = [k][\phi]\left[\omega_n^2\right]^{-1}[2\zeta\omega_n]\left[\omega_n^2\right]^{-1}[\phi]^T[k]$.

Solution 6.8
From the eigenvalue problem, $[m][\phi]\left[\omega_n^2\right] = [k][\phi]$, we obtain

$$[m][\phi] = [k][\phi]\left[\omega_n^2\right]^{-1} \qquad \text{and} \qquad [\phi]^T[m] = \left[\omega_n^2\right]^{-1}[\phi]^T[k]$$

Substituting yields the desired result,

$$[m][\phi][2\zeta\omega_n][\phi]^T[m] = [k][\phi]\left[\omega_n^2\right]^{-1}[2\zeta\omega_n]\left[\omega_n^2\right]^{-1}[\phi]^T[k]$$

Problem 6.9
An unconstrained three-degree-of-freedom system has the mass matrix, stiffness matrix, circular natural frequencies, and mode shapes given below.


Compute the corresponding damping matrix that will result in a system with classical normal modes. Assume the first elastic mode has $\zeta = 0.01$ and the second has $\zeta = 0.05$.

$$[m] = \begin{bmatrix} 2 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 1 \end{bmatrix} \qquad [k] = \begin{bmatrix} 2 & -2 & 0 \\ -2 & 4 & -2 \\ 0 & -2 & 2 \end{bmatrix}$$

$$[\omega_n] = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 1.1756 & 0 \\ 0 & 0 & 1.9021 \end{bmatrix} \qquad [\phi] = \begin{bmatrix} 0.4472 & 0.5117 & 0.1954 \\ 0.4472 & -0.1954 & -0.5117 \\ 0.4472 & -0.6325 & 0.6325 \end{bmatrix}$$

Solution 6.9

$$[c] = [m][\phi][2\zeta\omega_n][\phi]^T[m], \qquad [2\zeta\omega_n] = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 2(0.01)(1.1756) & 0 \\ 0 & 0 & 2(0.05)(1.9021) \end{bmatrix}$$

$$[c] = \begin{bmatrix} 0.0537 & -0.0855 & 0.0318 \\ -0.0855 & 0.2028 & -0.1173 \\ 0.0318 & -0.1173 & 0.0855 \end{bmatrix}$$


Problem 6.10
For the system in Problem 6.2, compute the classical damping matrix using Eq. (6.10-20), $[c] = [k][\phi]\left[\omega_n^2\right]^{-1}[2\zeta\omega_n]\left[\omega_n^2\right]^{-1}[\phi]^T[k]$.

Solution 6.10
Using only the two elastic modes (the rigid body mode is excluded),

$$[c] = [k]\begin{bmatrix} 0.5117 & 0.1954 \\ -0.1954 & -0.5117 \\ -0.6325 & 0.6325 \end{bmatrix}\begin{bmatrix} 1.3820 & 0 \\ 0 & 3.6180 \end{bmatrix}^{-1}\begin{bmatrix} 2(0.01)(1.1756) & 0 \\ 0 & 2(0.05)(1.9021) \end{bmatrix}\begin{bmatrix} 1.3820 & 0 \\ 0 & 3.6180 \end{bmatrix}^{-1}\begin{bmatrix} 0.5117 & 0.1954 \\ -0.1954 & -0.5117 \\ -0.6325 & 0.6325 \end{bmatrix}^T[k] = \begin{bmatrix} 0.0537 & -0.0855 & 0.0318 \\ -0.0855 & 0.2028 & -0.1173 \\ 0.0318 & -0.1173 & 0.0855 \end{bmatrix}$$
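A numerical cross-check (our sketch, not the book's code) that the mass formulation of Eq. (6.10-16), with zero damping assigned to the rigid body mode, and the stiffness formulation of Eq. (6.10-20), restricted to the elastic modes, generate the same damping matrix:

```python
import numpy as np

m = np.diag([2.0, 2.0, 1.0])
k = np.array([[ 2.0, -2.0,  0.0],
              [-2.0,  4.0, -2.0],
              [ 0.0, -2.0,  2.0]])
Li = np.linalg.inv(np.linalg.cholesky(m))
w2, psi = np.linalg.eigh(Li @ k @ Li.T)
phi = Li.T @ psi                                   # rigid body mode in column 0

zeta = np.array([0.0, 0.01, 0.05])
wn = np.sqrt(np.clip(w2, 0.0, None))
c_mass = m @ phi @ np.diag(2 * zeta * wn) @ phi.T @ m           # Eq. (6.10-16)

phi_e, wn_e, zeta_e = phi[:, 1:], wn[1:], zeta[1:]              # elastic modes only
inv_w2 = np.diag(1.0 / wn_e**2)
c_stiff = (k @ phi_e @ inv_w2 @ np.diag(2 * zeta_e * wn_e)      # Eq. (6.10-20)
           @ inv_w2 @ phi_e.T @ k)

print(np.allclose(c_mass, c_stiff))                             # -> True
print(np.round(c_mass, 4))                                      # matches Solution 6.9
```

The equality follows from $[m][\phi] = [k][\phi]\left[\omega_n^2\right]^{-1}$, exactly as derived in Problem 6.8.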

Problem 6.11 Verify that the damping matrices computed in Problems 6.9 and 6.10 yield uncoupled modal damping matrices with the proper diagonal terms, including zero damping for the rigid body mode.


Solution 6.11

$$[2\omega_n\zeta] = [\phi]^T[c][\phi] = \begin{bmatrix} 0.4472 & 0.5117 & 0.1954 \\ 0.4472 & -0.1954 & -0.5117 \\ 0.4472 & -0.6325 & 0.6325 \end{bmatrix}^T\begin{bmatrix} 0.0537 & -0.0855 & 0.0318 \\ -0.0855 & 0.2028 & -0.1173 \\ 0.0318 & -0.1173 & 0.0855 \end{bmatrix}\begin{bmatrix} 0.4472 & 0.5117 & 0.1954 \\ 0.4472 & -0.1954 & -0.5117 \\ 0.4472 & -0.6325 & 0.6325 \end{bmatrix}$$

$$= \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0.0235 & 0 \\ 0 & 0 & 0.1902 \end{bmatrix} = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 2(0.01)(1.1756) & 0 \\ 0 & 0 & 2(0.05)(1.9021) \end{bmatrix}$$

Problem 6.12
Verify that the damping matrices computed in Problems 6.9 and 6.10 satisfy the necessary and sufficient condition for a damping matrix that will yield classical normal modes, specified in the following reference: Caughey, T.K., O'Kelly, M.E.J., September 1965. Classical Normal Modes in Damped Linear Dynamic Systems. J. Appl. Mech. Trans. ASME.

Solution 6.12
The necessary and sufficient condition is

$$[c][m]^{-1}[k] = [k][m]^{-1}[c] = \left([c][m]^{-1}[k]\right)^T$$


$$[c][m]^{-1}[k] = \begin{bmatrix} 0.0537 & -0.0855 & 0.0318 \\ -0.0855 & 0.2028 & -0.1173 \\ 0.0318 & -0.1173 & 0.0855 \end{bmatrix}\begin{bmatrix} 2 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 1 \end{bmatrix}^{-1}\begin{bmatrix} 2 & -2 & 0 \\ -2 & 4 & -2 \\ 0 & -2 & 2 \end{bmatrix} = \begin{bmatrix} 0.1392 & -0.2883 & 0.1491 \\ -0.2883 & 0.7256 & -0.4374 \\ 0.1491 & -0.4374 & 0.2883 \end{bmatrix}$$

The resulting matrix is symmetric, as it should be.

Problem 6.13
Below are the complex eigenvalues and eigenvectors of one mode of an unconstrained system. (1) Is the mode a rigid body or elastic mode? Explain. (2) What are the undamped and damped circular frequencies of the mode? (3) What is the critical damping ratio of the mode? (4) If all the modes of the system had the properties of this mode, would the system have classical modes or complex modes? [Hint: compute the scaling (rotation) for one of the displacement coordinates (the last three rows are displacements) that converts the complex value to pure real, then scale (rotate) the other two coordinates.] Discuss your results.

$$\lambda_j = \begin{bmatrix} -0.0951 + i1.8997 & -0.0951 - i1.8997 \end{bmatrix}$$

$$\{w\}_j = \begin{bmatrix} 0.1938 + i0.1152 & 0.1938 - i0.1152 \\ -0.5074 - i0.3017 & -0.5074 + i0.3017 \\ 0.6271 + i0.3729 & 0.6271 - i0.3729 \\ 0.0554 - i0.1048 & 0.0554 + i0.1048 \\ -0.1451 + i0.2743 & -0.1451 - i0.2743 \\ 0.1793 - i0.3391 & 0.1793 + i0.3391 \end{bmatrix}$$

Solution 6.13
(1) The mode is an elastic mode, since the eigenvalue is nonzero.

(2) $\omega_{nj} = \sqrt{\mathrm{Re}(\lambda_j)^2 + \mathrm{Im}(\lambda_j)^2} = \sqrt{(-0.0951)^2 + (1.8997)^2} = 1.9021$ and $\omega_{dj} = \left|\mathrm{Im}(\lambda_j)\right| = 1.8997$


(3) $\zeta_j = \dfrac{-\mathrm{Re}(\lambda_j)}{\omega_{nj}} = \dfrac{0.0951}{1.9021} = 0.05$

(4) We first compute the scalar (rotation) that will rotate the third displacement coordinate (row 6) to a pure real number (we could have also used either of the other two displacement coordinates):

$$q_{6,2} = \frac{\mathrm{conj}(0.1793 + i0.3391)}{|0.1793 + i0.3391|} = \frac{0.1793 - i0.3391}{\sqrt{(0.1793)^2 + (0.3391)^2}} = 0.4674 - i0.8840$$

We then rotate (scale) all coordinates. If the three displacement coordinates become pure real numbers, then the three displacements are collinear, with each coordinate having a zero or 180-degree phase relative to the other two coordinates. This would verify that the mode shape is for a system with real classical normal modes.

$$\begin{Bmatrix} 0.1938 - i0.1152 \\ -0.5074 + i0.3017 \\ 0.6271 - i0.3729 \\ 0.0554 + i0.1048 \\ -0.1451 - i0.2743 \\ 0.1793 + i0.3391 \end{Bmatrix}(0.4674 - i0.8840) = \begin{Bmatrix} -0.0113 - i0.2252 \\ 0.0295 + i0.5895 \\ -0.0365 - i0.7287 \\ 0.1185 + i0.0000 \\ -0.3103 + i0.0000 \\ 0.3836 + i0.0000 \end{Bmatrix}$$
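The rotation check of part (4) can be sketched numerically; the displacement components below are the second-column values as reconstructed above:

```python
import numpy as np

# Rotate the eigenvector's displacement partition (last three rows) so that
# the third displacement becomes purely real, then inspect the other two.
disp = np.array([0.0554 + 0.1048j, -0.1451 - 0.2743j, 0.1793 + 0.3391j])
rot = np.conj(disp[2]) / np.abs(disp[2])       # unit-magnitude rotation
rotated = disp * rot

print(np.round(rot, 4))                         # 0.4674 - 0.8840i
print(np.round(rotated, 4))                     # real parts 0.1185, -0.3103, 0.3836
print(np.all(np.abs(rotated.imag) < 1e-3))      # collinear -> classical normal mode
```

If the imaginary parts had not vanished, the displacement components would not be collinear and the mode would be a genuinely complex mode.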

Conversely, we could have computed the angle between the real axis and the displacement components as $\tan^{-1}(\mathrm{Im}/\mathrm{Re})$ and verified that all three components were collinear.

Problem 6.14
For the two-degree-of-freedom system below, where $h = 2\zeta\omega_n$, compute the complex eigenvalues for $c < 0$, $c = 0$, and $c > 0$.

$$\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}\begin{Bmatrix} \ddot{q}_r \\ \ddot{q}_e \end{Bmatrix} + \begin{bmatrix} c & 0 \\ 0 & h \end{bmatrix}\begin{Bmatrix} \dot{q}_r \\ \dot{q}_e \end{Bmatrix} + \begin{bmatrix} 0 & 0 \\ 0 & \omega_n^2 \end{bmatrix}\begin{Bmatrix} q_r \\ q_e \end{Bmatrix} = \begin{Bmatrix} 0 \\ 0 \end{Bmatrix}$$


Solution 6.14
Recasting as a first-order system, $\{\dot{y}(t)\} = [A]\{y(t)\}$ with

$$[A] = \begin{bmatrix} [0] & [I] \\ -\begin{bmatrix} 0 & 0 \\ 0 & \omega_n^2 \end{bmatrix} & -\begin{bmatrix} c & 0 \\ 0 & h \end{bmatrix} \end{bmatrix}$$

yields an eigenvalue problem whose characteristic polynomial is

$$p(\lambda) = \lambda^4 + (h + c)\lambda^3 + \left(\omega_n^2 + ch\right)\lambda^2 + c\omega_n^2\lambda = \lambda(\lambda + c)\left(\lambda^2 + h\lambda + \omega_n^2\right)$$

Hence, the rigid body eigenvalues are

$$c = 0: \quad \lambda_r = 0,\ 0 \qquad c > 0: \quad \lambda_r = 0,\ -c \qquad c < 0: \quad \lambda_r = 0,\ -c > 0$$

and in all three cases the elastic pair is $\lambda_e = -\frac{h}{2} \pm i\sqrt{\omega_n^2 - \frac{h^2}{4}} = -\zeta\omega_n \pm i\omega_d$. For $c < 0$ the nonzero rigid body root is positive and the rigid body motion diverges; for $c > 0$ it decays.
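These roots can be confirmed numerically; the values of $\omega_n$, $\zeta$, and $c$ below are our own illustrative choices:

```python
import numpy as np

# State matrix of the 2-DOF system with m = I: A = [[0, I], [-K, -C]];
# its eigenvalues should be 0, -c, and the elastic pair -zeta*wn +/- i*wd.
wn, zeta = 2.0, 0.05
h = 2.0 * zeta * wn
c = 0.3                                       # try c < 0, c = 0, c > 0
C = np.diag([c, h])
K = np.diag([0.0, wn**2])
A = np.block([[np.zeros((2, 2)), np.eye(2)],
              [-K,               -C      ]])
lam = np.linalg.eigvals(A)

wd = wn * np.sqrt(1.0 - zeta**2)
expected = [0.0, -c, complex(-zeta * wn, wd), complex(-zeta * wn, -wd)]
print(all(any(abs(l - e) < 1e-8 for l in lam) for e in expected))  # -> True
```

Rerunning with a negative `c` places the root `-c` in the right half-plane, confirming the divergent rigid body behavior noted above.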

$$\begin{Bmatrix} x_1(t) \\ x_2(t) \end{Bmatrix} = \begin{bmatrix} \phi_{11} & \phi_{12} \\ \phi_{21} & \phi_{22} \end{bmatrix}\begin{Bmatrix} \dfrac{\dot{q}_1(0)}{\omega_1}\sin\omega_1 t \\ \dfrac{\dot{q}_2(0)}{\omega_2}\sin\omega_2 t \end{Bmatrix} \qquad (7.3\text{-}17)$$

Performing the indicated multiplications yields the physical coordinate responses,

$$\begin{aligned} x_1(t) &= \phi_{11}\frac{\dot{q}_1(0)}{\omega_1}\sin\omega_1 t + \phi_{12}\frac{\dot{q}_2(0)}{\omega_2}\sin\omega_2 t = A_1\sin\omega_1 t + B_1\sin\omega_2 t \\ x_2(t) &= \phi_{21}\frac{\dot{q}_1(0)}{\omega_1}\sin\omega_1 t + \phi_{22}\frac{\dot{q}_2(0)}{\omega_2}\sin\omega_2 t = A_2\sin\omega_1 t + B_2\sin\omega_2 t \end{aligned} \qquad (7.3\text{-}18)$$

As can be ascertained, the physical coordinate responses consist of the sum of two sinusoidal functions of different frequencies and different amplitudes; hence, the responses will have envelope functions, with beat frequencies, that will modulate the response time histories, whose oscillatory frequencies will be the average of the two natural frequencies.

By way of an example, let $m_1 = m_2 = 1$, $k_1 = k_2 = 1000$, $K = 100$, $\dot{x}_1(0) = 100$, and $\dot{x}_2(0) = 0$. Substituting and solving the eigenvalue problem (Eq. 7.3-10) produces the modes of the system,

$$\begin{bmatrix} \omega_1^2 & 0 \\ 0 & \omega_2^2 \end{bmatrix} = \begin{bmatrix} 1000 & 0 \\ 0 & 1200 \end{bmatrix} \qquad \begin{bmatrix} \phi_{11} & \phi_{12} \\ \phi_{21} & \phi_{22} \end{bmatrix} = \begin{bmatrix} 0.7071 & 0.7071 \\ 0.7071 & -0.7071 \end{bmatrix} \qquad (7.3\text{-}19)$$

7.3 Beating

where the mode shapes have been normalized such that $[\phi]^T[m][\phi] = [I]$. The initial velocities in modal coordinates are computed with Eq. (7.3-15), i.e.,

$$\begin{Bmatrix} \dot{q}_1(0) \\ \dot{q}_2(0) \end{Bmatrix} = \begin{bmatrix} 0.7071 & 0.7071 \\ 0.7071 & -0.7071 \end{bmatrix}^T\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}\begin{Bmatrix} 100 \\ 0 \end{Bmatrix} = \begin{Bmatrix} 70.71 \\ 70.71 \end{Bmatrix} \qquad (7.3\text{-}20)$$

Substituting into Eq. (7.3-18) produces the sought-after response equations,

$$\begin{aligned} x_1(t) &= 1.58\sin 31.62t + 1.44\sin 34.64t \\ x_2(t) &= 1.58\sin 31.62t - 1.44\sin 34.64t \end{aligned} \qquad (7.3\text{-}21)$$
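Equations (7.3-19) through (7.3-21) can be reproduced with a short modal superposition script (a sketch; variable names are ours):

```python
import numpy as np

# Two-DOF beating example: m1 = m2 = 1, grounded springs k1 = k2 = 1000,
# coupling spring K = 100, initial velocity applied to mass 1 only.
m = np.eye(2)
k = np.array([[1100.0, -100.0],
              [-100.0, 1100.0]])
w2, phi = np.linalg.eigh(k)           # with m = I the modes are mass-normalized
wn = np.sqrt(w2)                       # 31.62 and 34.64 rad/s

qdot0 = phi.T @ m @ np.array([100.0, 0.0])   # modal initial velocities, Eq. (7.3-20)
A = qdot0 / wn                                # modal sine amplitudes

t = np.linspace(0.0, 4.0, 8001)
q = A[:, None] * np.sin(wn[:, None] * t)
x = phi @ q                                   # physical responses, Eq. (7.3-18)

print(np.round(wn, 2))                        # natural frequencies 31.62, 34.64 rad/s
print(np.round(np.abs(phi @ np.diag(A)), 2))  # response amplitudes 1.58 and 1.44
```

Plotting the rows of `x` against `t` reproduces the beating envelopes discussed next.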

Fig. 7.3-4 shows the two time histories, and as can be ascertained both responses have the same beat frequency, and the same response frequency that is the average of the two natural frequencies. However, neither time history repeats itself within the period of the envelope function (beat period). This is because the ratio of the natural frequencies is not a rational number. In addition, neither time history has a value of zero at the time points where the envelope function would be zero. This is due to the fact that the

FIGURE 7.3-4 Physical coordinate response time histories of a two-degree-of-freedom system whose motion was initiated with an initial velocity.


CHAPTER 7 Forced vibration of multi-degree-of-freedom systems

harmonic functions being combined (modal responses) do not have the same amplitude, and Eq. (7.3-5) cannot be applied as derived. Furthermore, one must be cautious not to look at a short segment of the time histories and conclude that there is only one mode responding. In Volume II, spectral analysis techniques, such as Fourier transforms and power spectral densities, are discussed; these would reveal that indeed there are two modes responding, even when a cursory review of a short segment of the time history would seem to indicate otherwise. Just remember: if there is beating, then at least two response time histories at different frequencies must be involved.

Before leaving this section we will add viscous damping to the system shown in Fig. 7.3-3 and explore the effect it has on the beating behavior of the system. The equations of motion for this system, with damping and in modal coordinates, were derived in Chapter 6, i.e.,

$$[I]\{\ddot{q}(t)\} + [2\zeta\omega_n]\{\dot{q}(t)\} + \left[\omega_n^2\right]\{q(t)\} = \{0\} \qquad (7.3\text{-}22)$$

Hence, for this two-degree-of-freedom system we have

$$\begin{aligned} \ddot{q}_1(t) + 2\zeta_1\omega_1\dot{q}_1(t) + \omega_1^2 q_1(t) &= 0 \\ \ddot{q}_2(t) + 2\zeta_2\omega_2\dot{q}_2(t) + \omega_2^2 q_2(t) &= 0 \end{aligned} \qquad (7.3\text{-}23)$$

In Chapter 2, we solved for the response of a single-degree-of-freedom system with viscous damping whose motion was initiated with initial conditions. Making the coordinate change $x(t) \rightarrow q(t)$, the solution for the jth mode is

$$q_j(t) = e^{-\zeta_j\omega_j t}\left(q_j(0)\cos\omega_{d,j}t + \frac{\dot{q}_j(0) + \zeta_j\omega_j q_j(0)}{\omega_{d,j}}\sin\omega_{d,j}t\right) \qquad (7.3\text{-}24)$$

For our problem, $q_1(0) = q_2(0) = 0$, $\dot{q}_1(0)$ and $\dot{q}_2(0)$ are given by Eq. (7.3-20), and we will assume that $\zeta_1 = \zeta_2 = 0.02$. Substituting yields

$$q_1(t) = e^{-0.02(31.62)t}\left(\frac{70.71}{\sqrt{1-(0.02)^2}\,(31.62)}\sin\sqrt{1-(0.02)^2}\,(31.62)t\right) = e^{-0.63t}\left(2.24\sin 31.61t\right) \qquad (7.3\text{-}25)$$


and

$$q_2(t) = e^{-0.02(34.64)t}\left(\frac{70.71}{\sqrt{1-(0.02)^2}\,(34.64)}\sin\sqrt{1-(0.02)^2}\,(34.64)t\right) = e^{-0.69t}\left(2.04\sin 34.63t\right) \qquad (7.3\text{-}26)$$

The sought-after responses, therefore, are

$$\begin{aligned} x_1(t) &= 0.7071e^{-0.63t}\left(2.24\sin 31.61t\right) + 0.7071e^{-0.69t}\left(2.04\sin 34.63t\right) = e^{-0.63t}\left(1.58\sin 31.61t\right) + e^{-0.69t}\left(1.44\sin 34.63t\right) \\ x_2(t) &= 0.7071e^{-0.63t}\left(2.24\sin 31.61t\right) - 0.7071e^{-0.69t}\left(2.04\sin 34.63t\right) = e^{-0.63t}\left(1.58\sin 31.61t\right) - e^{-0.69t}\left(1.44\sin 34.63t\right) \end{aligned} \qquad (7.3\text{-}27)$$

Fig. 7.3-5 shows $x_1(t)$ and $x_2(t)$, and as can be ascertained the responses decay because of the damping now present in the system; compare this to the response of the same system without damping shown in Fig. 7.3-4. Also, if one were to use the decay rate from the second to the third peak in the $x_1(t)$ time history, for example, a significantly erroneous (more

FIGURE 7.3-5 Damped response time histories of a two-degree-of-freedom system whose motion was initiated with initial velocities.


than twice) critical damping ratio would be obtained with the logarithmic decrement method (see Chapter 4). It is extremely important that single-degree-of-freedom (single mode) assumptions not be used with response time histories of multi-mode systems unless one can verify that the system is responding in a single mode. The difficulty with closely spaced modes is that the response time histories can easily be confused for those of a single mode, unless one understands that the appearance of beating typically indicates that more than one mode is involved.

7.4 Sweep rate effects

In Chapter 2, we briefly discussed the response of single-degree-of-freedom systems to sinusoidal excitation where the frequency of excitation changed with time. In that chapter, an example problem was presented in which it was shown that the peak response amplitude was lower than the steady-state response because of the changing excitation frequency. In Chapter 5, we extended the discussion and provided data that could be used to obtain the attenuation due to sweep effects for a large class of single-degree-of-freedom systems. In most instances for lightly damped systems, the peak response amplitude, relative to the steady-state peak response, will be lower when the frequency of excitation changes with time. Generally, the slower the change in excitation frequency, the closer the peak response will be to the steady-state value. For fast sweep rates, the reduction can be significant relative to the steady-state values. However, it was also shown that, because of the transient nature of the sweep, for more highly damped systems and slower sweep rates the peak response can actually be higher than the steady-state value. In this section, we will explore the effect of excitation sweep rate on the response of multi-degree-of-freedom systems. We begin with the three-degree-of-freedom system shown in Fig. 7.4-1.

FIGURE 7.4-1 Three-degree-of-freedom system subjected to excitation fs ðtÞ.


The second-order matrix differential equation of motion for the system in Fig. 7.4-1 is

$$\begin{bmatrix} m_1 & 0 & 0 \\ 0 & m_2 & 0 \\ 0 & 0 & m_3 \end{bmatrix}\begin{Bmatrix} \ddot{x}_1(t) \\ \ddot{x}_2(t) \\ \ddot{x}_3(t) \end{Bmatrix} + \begin{bmatrix} 2c & -c & 0 \\ -c & 2c & -c \\ 0 & -c & 2c \end{bmatrix}\begin{Bmatrix} \dot{x}_1(t) \\ \dot{x}_2(t) \\ \dot{x}_3(t) \end{Bmatrix} + \begin{bmatrix} k_1+k_2 & -k_2 & 0 \\ -k_2 & k_2+k_3 & -k_3 \\ 0 & -k_3 & k_3+k_4 \end{bmatrix}\begin{Bmatrix} x_1(t) \\ x_2(t) \\ x_3(t) \end{Bmatrix} = \begin{Bmatrix} 0 \\ 0 \\ f_s(t) \end{Bmatrix} \qquad (7.4\text{-}1)$$

or

$$[m]\{\ddot{w}(t)\} + [c]\{\dot{w}(t)\} + [k]\{w(t)\} = \{f(t)\} \qquad (7.4\text{-}2)$$

We will assume that the damping properties yield classical normal modes and, therefore, that we will be able to assign damping in the modal domain, mode by mode. Let

$$\{w(t)\} = [\phi]\{q(t)\} \qquad (7.4\text{-}3)$$

where the columns of $[\phi]$ are the mode shapes obtained from the eigenvalue problem, $\left(-\omega_j^2[m] + [k]\right)\{\phi\}_j = \{0\}$. Let the mode shapes be normalized such that $[\phi]^T[m][\phi] = [I]$. Substituting the transformation defined by Eq. (7.4-3), and its first and second time derivatives, into Eq. (7.4-2), and then premultiplying both sides of the equation by $[\phi]^T$ yields

$$[\phi]^T[m][\phi]\{\ddot{q}(t)\} + [\phi]^T[c][\phi]\{\dot{q}(t)\} + [\phi]^T[k][\phi]\{q(t)\} = [\phi]^T\{f(t)\} \qquad (7.4\text{-}4)$$

As discussed above and in Chapter 6, Eq. (7.4-4) reduces to

$$[I]\{\ddot{q}(t)\} + [2\zeta\omega_n]\{\dot{q}(t)\} + \left[\omega_n^2\right]\{q(t)\} = [\phi]^T\{f(t)\} \qquad (7.4\text{-}5)$$

where the three matrices on the left-hand side are diagonal matrices. The dynamic behavior of the three-degree-of-freedom system shown in Fig. 7.4-1 will be described by three uncoupled second-order differential equations, i.e.,

$$\begin{aligned} \ddot{q}_1(t) + 2\zeta_1\omega_1\dot{q}_1(t) + \omega_1^2 q_1(t) &= \{\phi\}_1^T\{f(t)\} = \phi_{31}f_s(t) \\ \ddot{q}_2(t) + 2\zeta_2\omega_2\dot{q}_2(t) + \omega_2^2 q_2(t) &= \{\phi\}_2^T\{f(t)\} = \phi_{32}f_s(t) \\ \ddot{q}_3(t) + 2\zeta_3\omega_3\dot{q}_3(t) + \omega_3^2 q_3(t) &= \{\phi\}_3^T\{f(t)\} = \phi_{33}f_s(t) \end{aligned} \qquad (7.4\text{-}6)$$


To explore the effect of sweep rate on the response of this simple multi-degree-of-freedom system, we let $m_1 = 4$, $m_2 = 16$, $m_3 = 4$, $k_1 = 10{,}000$, $k_2 = 2{,}500$, $k_3 = 5{,}000$, and $k_4 = 7{,}500$. In addition, we let the critical damping ratio, $\zeta_j$, in each mode be equal to 0.02. Solving the undamped eigenvalue problem yields the system modes:

$$[\omega_n] = \begin{bmatrix} 17.2088 & 0 & 0 \\ 0 & 55.9017 & 0 \\ 0 & 0 & 57.4248 \end{bmatrix} \qquad [\phi] = \begin{bmatrix} 0.0536 & 0.4472 & -0.2171 \\ 0.2427 & 0.0000 & 0.0600 \\ 0.1072 & -0.2236 & -0.4342 \end{bmatrix} \qquad (7.4\text{-}7)$$

where the modes have been normalized such that $[\phi]^T[m][\phi] = [I]$. We will compute the response to a 2-octave per minute sweep rate and then to a 4-octave per minute rate; these rates are commonly used in testing. The sweeps will start at one hertz and the maximum amplitude of excitation will be assumed to be one. Hence, the forcing function for the 2-octave sweep (see Chapter 2) is

$$f_s(t) = \sin\left(\frac{120\pi(1)}{R_o\ln 2}\left(2^{\frac{R_o t}{60}} - 1\right)\right) = \sin\left(272\left(2^{0.033t} - 1\right)\right) \qquad (7.4\text{-}8)$$

where $R_o = 2$ is the sweep rate in octaves per minute. For the 4-octave sweep, $R_o = 4$ octaves per minute, and we have

$$f_s(t) = \sin\left(\frac{120\pi(1)}{R_o\ln 2}\left(2^{\frac{R_o t}{60}} - 1\right)\right) = \sin\left(136\left(2^{0.067t} - 1\right)\right) \qquad (7.4\text{-}9)$$

Substituting the modal parameters and the 2-octave per minute forcing function into Eq. (7.4-6) yields

$$\begin{aligned} \ddot{q}_1(t) + 0.688\dot{q}_1(t) + 296.143\,q_1(t) &= 0.1072\sin\left(272\left(2^{0.033t} - 1\right)\right) \\ \ddot{q}_2(t) + 2.236\dot{q}_2(t) + 3125.000\,q_2(t) &= -0.2236\sin\left(272\left(2^{0.033t} - 1\right)\right) \\ \ddot{q}_3(t) + 2.297\dot{q}_3(t) + 3297.608\,q_3(t) &= -0.4342\sin\left(272\left(2^{0.033t} - 1\right)\right) \end{aligned} \qquad (7.4\text{-}10)$$
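The text solves Eq. (7.4-10) with Duhamel's method; as an informal alternative sketch, the first modal equation can be integrated directly with a fixed-step fourth-order Runge–Kutta loop (step size and duration are our own choices):

```python
import numpy as np

Ro = 2.0                                      # sweep rate, octaves per minute
def fs(t):                                    # Eq. (7.4-8): unit-amplitude swept sine
    return np.sin((120.0 * np.pi / (Ro * np.log(2.0))) * (2.0 ** (Ro * t / 60.0) - 1.0))

wn, zeta, gamma = 17.2088, 0.02, 0.1072       # first-mode values from Eq. (7.4-7)

def deriv(t, y):                              # y = [q1, q1_dot]
    q, qd = y
    return np.array([qd, gamma * fs(t) - 2.0 * zeta * wn * qd - wn**2 * q])

dt, y, peak = 2.0e-3, np.zeros(2), 0.0
for i in range(30000):                        # the first 60 s covers the first blossom
    t = i * dt
    k1 = deriv(t, y)
    k2 = deriv(t + dt / 2, y + dt / 2 * k1)
    k3 = deriv(t + dt / 2, y + dt / 2 * k2)
    k4 = deriv(t + dt, y + dt * k3)
    y = y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    peak = max(peak, abs(y[0]))

print(round(peak, 5))     # peak modal response q1; its x3 contribution is 0.1072*peak
```

The peak falls below the steady-state resonance value $\gamma/(2\zeta\omega_n^2) \approx 0.009$ because the excitation frequency keeps moving, which is the sweep-rate attenuation discussed below.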


To obtain the response to the 4-octave sweep rate we would simply replace the sine-term argument with that from Eq. (7.4-9). Substituting the responses obtained with Eq. (7.4-10) into Eq. (7.4-3) yields the desired results, i.e.,

$$\begin{Bmatrix} x_1(t) \\ x_2(t) \\ x_3(t) \end{Bmatrix} = \begin{bmatrix} 0.0536 & 0.4472 & -0.2171 \\ 0.2427 & 0.0000 & 0.0600 \\ 0.1072 & -0.2236 & -0.4342 \end{bmatrix}\begin{Bmatrix} q_1(t) \\ q_2(t) \\ q_3(t) \end{Bmatrix} = \begin{Bmatrix} 0.0536\,q_1(t) + 0.4472\,q_2(t) - 0.2171\,q_3(t) \\ 0.2427\,q_1(t) + 0.0000\,q_2(t) + 0.0600\,q_3(t) \\ 0.1072\,q_1(t) - 0.2236\,q_2(t) - 0.4342\,q_3(t) \end{Bmatrix} \qquad (7.4\text{-}11)$$

Since the closed-form solution is extremely complex (see Chapter 5), the equations in (7.4-10) were solved numerically using Duhamel's method (see Chapter 8). Fig. 7.4-2A and B show plots of $x_3(t)$ for the 2- and 4-octave per minute sweep rates, respectively. As expected, the higher sweep rate reaches the natural frequencies of the modes sooner; hence, the elevated responses associated with passing through a natural frequency occur earlier. An item of note is the beating that occurs once the instantaneous excitation frequency is past a natural frequency. This is due to the

FIGURE 7.4-2 Response of coordinate x3 ðtÞ to swept frequency excitation: (A) 2-octave per minute and (B) 4-octave per minute.


superposition of two components: once excited, the mode continues to vibrate at its natural frequency while its amplitude decays due to damping, and the system also responds at the instantaneous excitation frequency, which is still close to the natural frequency as it sweeps by. Once the instantaneous excitation frequency is sufficiently past the natural frequency, and the mode response has decayed sufficiently, the beating is no longer noticeable. The beating after the second blossom is more complex since it involves two modes close in frequency plus the response to the excitation past the natural frequencies. With multi-degree-of-freedom systems, there is the added complexity of multiple modes responding, in particular when the modes are close in frequency, as are the second and third modes in our example problem.

With single-degree-of-freedom systems, the amplitude attenuation due to sweeping of the excitation frequency was greater for the faster sweep rates (see Chapter 5). This can also be seen in the response time histories in Fig. 7.4-2 during the first mode response. Since this mode is significantly separated in frequency from the other two modes, the response, for practical purposes, is that of a single-degree-of-freedom system. The peak responses associated with the first blossom are 0.000681 and 0.000564 for the 2- and 4-octave per minute sweeps, respectively. If the response were only in a single (first) mode, the corresponding values would be 0.000720 and 0.000610. The small differences are due to the two higher frequency modes contributing to the total response. Since the peak occurs at a frequency past the first mode natural frequency, but below the natural frequencies of the other two modes, the higher frequency mode responses will be phased such that they reduce the total response at the first blossom, past the first mode natural frequency. This can be seen in Fig. 7.4-3, where we show the first and third mode responses on the same time axis.
FIGURE 7.4-3 First (dashed line) and third (solid line) mode responses to 2-octave per minute swept excitation.

As can be ascertained, for instantaneous sweep frequencies below the natural frequencies the two modes have practically the same phase. As the excitation approaches the first mode natural frequency, the phase of the first mode response transitions through 90°, and once past the natural frequency it will be, for all practical purposes, 180° out of phase relative to the response at frequencies below the natural frequency. This is the expected behavior of a single-degree-of-freedom system as the excitation frequency sweeps through its natural frequency (see Chapter 2). During the first blossom, the response magnitude ratios for the faster-to-slower sweep rates are 0.85, 0.85, and 0.83 for coordinates $x_1$, $x_2$, and $x_3$, respectively. For a single-degree-of-freedom system the value would be 0.84. During the second blossom, the response magnitude ratios for the faster-to-slower sweep rates are 0.86, 0.90, and 0.95 for coordinates $x_1$, $x_2$, and $x_3$, respectively. For a single-degree-of-freedom system the value would be 0.90 or 0.91, depending on whether we considered the second or third mode natural frequency. The reason for the disparity is that the second blossom involves two closely spaced modes. If one were to use a single-degree-of-freedom reduction criterion, we would have overpredicted the expected reduction when increasing the sweep rate from 2- to 4-octave per minute for coordinate $x_3$, and underpredicted it for coordinate $x_1$. Hence, one must be careful when using single-degree-of-freedom results to predict the behavior of multi-degree-of-freedom systems, especially when they have many modes and/or modes with closely spaced natural frequencies.

Figs. 7.4-4 and 7.4-5 show the contribution from each of the three modes to the total responses shown in Fig. 7.4-2. That is, if we sum the three time histories in Fig. 7.4-4, we obtain the time history shown in Fig. 7.4-2A, and likewise for the time histories in Fig. 7.4-5 and the resultant in Fig. 7.4-2B. Each of the time histories in Figs.
7.4-4 and 7.4-5 exhibits the classic response behavior of a single-degree-of-freedom system excited by swept sinusoidal excitation, with a blossom followed by beating. We can also see how the responses in the second and third modes would combine to produce a response different from that of a single-degree-of-freedom system; the closer the natural frequencies, the greater the deviation. Tables 7.4-1 and 7.4-2 show in the third column the times at which the instantaneous excitation frequencies were equal to the natural frequencies of each mode whose responses are shown in Figs. 7.4-4 and 7.4-5, respectively. The fourth column shows the time at which the peak responses


FIGURE 7.4-4 2-octave per minute sweep rate response in each of the three modes of the system shown in Fig. 7.4-1.

FIGURE 7.4-5 4-octave per minute sweep rate response in each of the three modes of the system shown in Fig. 7.4-1.


Table 7.4-1 Times when excitation frequencies equal natural frequencies, and times of peak responses in each mode, for 2-octave per minute swept excitation.

Mode | Mode frequency (rad/sec) | Time (sec) sweep frequency same as mode frequency | Time (sec) of peak response | Ratio of column 4 to column 3 | Sweep frequency (rad/sec) at peak response | Ratio of column 6 to column 2
1    | 17.21 | 43.61 | 46.02 | 1.06 | 18.20 | 1.06
2    | 55.90 | 94.56 | 95.70 | 1.01 | 57.33 | 1.03
3    | 57.42 | 95.76 | 96.84 | 1.01 | 58.87 | 1.03

Table 7.4-2 Times when excitation frequencies equal natural frequencies, and times of peak responses in each mode, for 4-octave per minute swept excitation.

Mode | Mode frequency (rad/sec) | Time (sec) sweep frequency same as mode frequency | Time (sec) of peak response | Ratio of column 4 to column 3 | Sweep frequency (rad/sec) at peak response | Ratio of column 6 to column 2
1    | 17.21 | 21.80 | 23.66 | 1.08 | 18.75 | 1.09
2    | 55.90 | 47.30 | 48.18 | 1.02 | 58.23 | 1.04
3    | 57.42 | 47.88 | 48.75 | 1.02 | 59.77 | 1.04

occurred. As can be ascertained, the peaks occur when the instantaneous frequency of excitation is greater than the natural frequency. The sixth column shows the instantaneous excitation frequency when the response is at a maximum. As expected, these are greater than the corresponding natural frequencies. Hence, using the instantaneous excitation frequency corresponding to a peak response as an indication of a natural frequency could produce significant errors. For our example problem, the errors are shown in the seventh column, and for the faster sweep rate (Table 7.4-2) the error is 9% for the lowest mode, which is considerably greater than accepted test practice allows (see Volume II). Hence, one should not use the instantaneous excitation frequency corresponding to a peak response as the natural frequency of a mode unless the sweep rate is extremely slow.
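For an exponential (octave) sweep, the time at which the instantaneous excitation frequency reaches a given natural frequency follows directly from the sweep law $f(t) = f_0\,2^{Rt/60}$, with $R$ in octaves per minute. The small sketch below reproduces the third column of Tables 7.4-1 and 7.4-2; the 1 Hz start frequency $f_0$ is inferred from the tabulated times and is not stated in the text.

```python
import math

def time_to_reach(omega_n, rate_oct_per_min, f0=1.0):
    """Time (sec) for an exponential sweep starting at f0 Hz to reach the
    frequency omega_n (rad/sec).  f0 = 1 Hz is an inferred assumption."""
    return (60.0 / rate_oct_per_min) * math.log2(omega_n / (2.0 * math.pi * f0))

for wn in (17.21, 55.90, 57.42):
    print(round(time_to_reach(wn, 2.0), 2), round(time_to_reach(wn, 4.0), 2))
```

The computed times agree with the tabulated values to within rounding of the quoted natural frequencies.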


Table 7.4-3 Peak response in each mode for 2-octave per minute swept unit excitation compared to peak steady-state resonant response.

Mode | Mode frequency (rad/sec) | Time (sec) of peak response | Peak response | Steady-state response at natural frequency | Ratio of column 4 to column 5
1    | 17.21 | 46.02 | 0.0627 | 0.0844 | 0.74
2    | 55.90 | 95.70 | 0.0072 | 0.0080 | 0.89
3    | 57.42 | 96.84 | 0.0068 | 0.0076 | 0.90

Table 7.4-4 Peak response in each mode for 4-octave per minute swept unit excitation compared to peak steady-state resonant responses.

Mode | Mode frequency (rad/sec) | Time (sec) of peak response | Peak response | Steady-state response at natural frequency | Ratio of column 4 to column 5
1    | 17.21 | 23.66 | 0.0531 | 0.0844 | 0.63
2    | 55.90 | 48.18 | 0.0065 | 0.0080 | 0.81
3    | 57.42 | 48.75 | 0.0062 | 0.0076 | 0.82
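The steady-state entries in the fifth column of Tables 7.4-3 and 7.4-4 can be checked against the single-degree-of-freedom resonant-response magnitude for a unit harmonic force, $1/(2\zeta\omega_n^2)$. The damping ratio $\zeta = 0.02$ used below is inferred from those tabulated values rather than stated in this section.

```python
# Peak steady-state modal response at resonance for a unit harmonic modal
# force: |q| = 1 / (2 * zeta * omega_n**2).  zeta = 0.02 is inferred from
# the tabulated steady-state values, not given explicitly in the text.
zeta = 0.02
for omega_n in (17.21, 55.90, 57.42):
    print(round(1.0 / (2.0 * zeta * omega_n ** 2), 4))
```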

Tables 7.4-3 and 7.4-4 show in the third column the times of the peak responses of the time histories shown in Figs. 7.4-4 and 7.4-5, respectively. In addition, the tables show in the fourth column the corresponding peak values obtained with unit excitation. To obtain the physical coordinate response amplitudes in each mode, we would need to multiply these values by the appropriate modal gains. For example, to obtain the first mode peak response of coordinate $x_3$, due to unit excitation at that coordinate, we would multiply the response values for the first mode in the tables by $\phi_{31}\phi_{31}$, where $\phi_{31}$ is the mode shape value of the first mode at coordinate $x_3$ (see Eq. 7.4-7), i.e., $(0.1072)(0.1072)(0.0627) = 0.00072$. The fifth column shows the peak steady-state responses at the natural frequencies of each mode. As shown in Chapter 5, and confirmed by the results in the fifth column of the tables, the sweep effects produce significantly attenuated responses compared to the steady-state values. The attenuation is significantly greater for the faster sweep rates.

7.5 Short transient excitation

In Chapter 5, we solved for the response of single-degree-of-freedom systems subjected to transient excitation. In the preceding sections of this


chapter, we solved for the response of multi-degree-of-freedom systems subjected to harmonic excitation. We showed in Chapter 6 that if the system damping yields classical normal modes, then the equations of motion can be transformed from physical coordinates to modal coordinates, where they are uncoupled and identical in form to those of single-degree-of-freedom systems. Hence, all solutions derived for single-degree-of-freedom systems are applicable, including the closed-form solutions derived in Chapter 5 for step excitation, ramp excitation, and base excitation.

7.5.1 Step excitation

To facilitate subsequent discussion we will summarize previously presented material. In Section 7.1, we showed that the matrix differential equation of motion for a multi-degree-of-freedom system is

$$[m]\{\ddot w(t)\} + [c]\{\dot w(t)\} + [k]\{w(t)\} = \{f(t)\} \qquad (7.5\text{-}1)$$

Let

$$\{w(t)\} = [\phi]\{q(t)\} \qquad (7.5\text{-}2)$$

where the matrix of mode shapes, $[\phi]$, is normalized to unit modal mass, i.e., $[\phi]^T[m][\phi] = [I]$. Substituting the coordinate transformation and its time derivatives into Eq. (7.5-1), and then premultiplying the entire equation by $[\phi]^T$, yields

$$[I]\{\ddot q(t)\} + [2\zeta\omega_n]\{\dot q(t)\} + \left[\omega_n^2\right]\{q(t)\} = [\phi]^T\{f(t)\} \qquad (7.5\text{-}3)$$

For every mode shape retained in the coordinate transformation in Eq. (7.5-2) we obtain an uncoupled equation of the form

$$\ddot q_j(t) + 2\zeta_j\omega_{n,j}\dot q_j(t) + \omega_{n,j}^2 q_j(t) = \{\phi\}_j^T\{f(t)\} = \phi_{1j}f_1(t) + \phi_{2j}f_2(t) + \cdots + \phi_{Nj}f_N(t) \qquad (7.5\text{-}4)$$

where we recognize the equation to be that of a single-degree-of-freedom system. Once each modal domain equation has been solved, the physical coordinate responses are obtained with Eq. (7.5-2). Note that the time history consistency between the modal responses needs to be retained when computing the physical coordinate time histories. It would not be appropriate, for example, to extract the peak values of the modal responses and


transform these to physical coordinates. This may not produce the peak physical coordinate response, since the peak might occur when none of the modal responses are at their peak. On the other hand, if one were to take the peak values of the physical coordinate responses on a mode-by-mode basis, then the absolute sum would be the largest value possible.

As our first example, we will solve for the response of a two-degree-of-freedom system subjected to a unit step function force. Assume that the force is applied to the mass corresponding to the first coordinate at $t = 0$, i.e., $f_1(t) = 1$ and $f_2(t) = 0$. Accordingly, the uncoupled modal domain equations of motion are

$$\ddot q_1(t) + 2\zeta_1\omega_{n,1}\dot q_1(t) + \omega_{n,1}^2 q_1(t) = \{\phi\}_1^T \begin{Bmatrix} 1 \\ 0 \end{Bmatrix} = Q_1$$
$$\ddot q_2(t) + 2\zeta_2\omega_{n,2}\dot q_2(t) + \omega_{n,2}^2 q_2(t) = \{\phi\}_2^T \begin{Bmatrix} 1 \\ 0 \end{Bmatrix} = Q_2 \qquad (7.5\text{-}5)$$

In Chapter 5, we obtained the response of a single-degree-of-freedom system subjected to a step function force, $f_s$, applied at $t = 0$, with initial conditions of zero for both the displacement and velocity. Making the notation substitutions gives

$$q_j(t) = \frac{Q_j}{\omega_{n,j}^2}\left[1 - e^{-\zeta_j\omega_{n,j}t}\left(\cos\omega_{d,j}t + \frac{\zeta_j}{\sqrt{1-\zeta_j^2}}\,\sin\omega_{d,j}t\right)\right] \qquad (7.5\text{-}6)$$

Hence, the solution is

$$\begin{Bmatrix} w_1(t) \\ w_2(t) \end{Bmatrix} = \begin{Bmatrix} \phi_{11} \\ \phi_{21} \end{Bmatrix} q_1(t) + \begin{Bmatrix} \phi_{12} \\ \phi_{22} \end{Bmatrix} q_2(t) \qquad (7.5\text{-}7)$$

where $q_1(t)$ and $q_2(t)$ are given by Eq. (7.5-6) with the appropriate frequency and damping substitutions for each mode.

7.5.2 Impulse excitation

In Chapter 5, we introduced the concept of impulse excitation. At that time we noted that the definition of an impulse is the time integral of a force that acts over a given time interval. If the time interval is extremely small relative to the natural period of vibration, the impulse will produce a change in the system’s momentum without altering its displacement while it acts.


Hence, we could solve for the response of the system by replacing the short duration force with the change in velocity it would produce; for a stationary system this corresponds to an initial velocity. The solution (see Chapter 2) is then that of a system with an initial velocity and no initial displacement, i.e.,

$$x(t) = e^{-\zeta\omega_n t}\left(\frac{\sin\omega_d t}{\omega_d}\right)\dot x(0) \qquad (7.5\text{-}8)$$

where $\dot x(0) = F_I/m$ and $F_I$ is the magnitude of the impulse, which is equal to the magnitude of the force integrated with respect to time over the duration of the force. In a multi-degree-of-freedom system, where the mass has been discretized into rigid mass points, any impulsive force acting on a particular mass would produce a velocity in that mass in the direction of the impulse. The impulse, however, would not change the velocity of any of the other masses in the system. We will show this with the two-degree-of-freedom system in Fig. 7.3-3, whose matrix differential equation of motion is given in Eq. (7.3-9). If we assume that a boxcar force of magnitude $F_b$ acts on mass $m_2$ for a very short time, $\Delta t$, then the impulse, $F_I$, will be $F_I = F_b\Delta t$, and the initial velocity it produces will be

$$\dot x_2(0) = F_I/m_2 = F_b\Delta t/m_2 \qquad (7.5\text{-}9)$$

The initial velocity of mass $m_1$ will be zero, since the duration of the impulse acting on mass $m_2$ is too short to produce any displacement; hence, the spring connecting the two masses is not deformed during $\Delta t$, and mass $m_1$ is not "aware" that a velocity has been imparted to mass $m_2$. Accordingly, the initial velocities are

$$\begin{Bmatrix} \dot x_1(0) \\ \dot x_2(0) \end{Bmatrix} = \begin{Bmatrix} 0 \\ F_b\Delta t/m_2 \end{Bmatrix} \qquad (7.5\text{-}10)$$

Transforming the equations of motion into the modal domain using the mode shapes of the system (see Eqs. (7.3-12) and (7.3-13)) yields

$$\ddot q_1(t) + \omega_1^2 q_1(t) = 0, \qquad \ddot q_2(t) + \omega_2^2 q_2(t) = 0 \qquad (7.5\text{-}11)$$

469

470

CHAPTER 7 Forced vibration of multi-degree-of-freedom systems

The corresponding initial displacements are zero for both modes, and the initial velocities are (see Eq. 7.3-15)

$$\{\dot q(0)\} = [\phi]^T[m]\{\dot x(0)\} = \begin{bmatrix} \phi_{11} & \phi_{21} \\ \phi_{12} & \phi_{22} \end{bmatrix}\begin{bmatrix} m_1 & 0 \\ 0 & m_2 \end{bmatrix}\begin{Bmatrix} 0 \\ F_b\Delta t/m_2 \end{Bmatrix} = \begin{bmatrix} \phi_{11} & \phi_{21} \\ \phi_{12} & \phi_{22} \end{bmatrix}\begin{Bmatrix} 0 \\ F_b\Delta t \end{Bmatrix} = \begin{Bmatrix} \phi_{21}F_b\Delta t \\ \phi_{22}F_b\Delta t \end{Bmatrix} \qquad (7.5\text{-}12)$$

The solutions to the equations in (7.5-11) are given in (7.3-16), with $\dot q_1(0)$ and $\dot q_2(0)$ given in (7.5-12); hence,

$$\begin{Bmatrix} x_1(t) \\ x_2(t) \end{Bmatrix} = \begin{bmatrix} \phi_{11} & \phi_{12} \\ \phi_{21} & \phi_{22} \end{bmatrix}\begin{Bmatrix} \dfrac{\phi_{21}F_b\Delta t}{\omega_1}\sin\omega_1 t \\[2ex] \dfrac{\phi_{22}F_b\Delta t}{\omega_2}\sin\omega_2 t \end{Bmatrix} \qquad (7.5\text{-}13)$$

Now, suppose that instead of computing the initial velocities in the physical coordinate set as above, we transform the impulsive force into the modal domain, where we will then compute the initial velocities. In this case the force vector is

$$\begin{Bmatrix} f_1(t) \\ f_2(t) \end{Bmatrix} = \begin{Bmatrix} 0 \\ F_b \end{Bmatrix} \;\text{for } 0 \le t \le \Delta t, \qquad \begin{Bmatrix} f_1(t) \\ f_2(t) \end{Bmatrix} = \begin{Bmatrix} 0 \\ 0 \end{Bmatrix} \;\text{for } t > \Delta t \qquad (7.5\text{-}14)$$

For $0 \le t \le \Delta t$, where $\Delta t$ is very small, the equations of motion in the modal domain are

$$\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}\begin{Bmatrix} \ddot q_1(t) \\ \ddot q_2(t) \end{Bmatrix} + \begin{bmatrix} \omega_1^2 & 0 \\ 0 & \omega_2^2 \end{bmatrix}\begin{Bmatrix} q_1(t) \\ q_2(t) \end{Bmatrix} = \begin{bmatrix} \phi_{11} & \phi_{12} \\ \phi_{21} & \phi_{22} \end{bmatrix}^T\begin{Bmatrix} 0 \\ F_b \end{Bmatrix} = \begin{Bmatrix} \phi_{21}F_b \\ \phi_{22}F_b \end{Bmatrix} \qquad (7.5\text{-}15)$$

Since the forces act for a very short time, they will produce a change in momentum, i.e., yield an initial velocity; hence,

$$\dot q_1(0) = F_I/m = \phi_{21}F_b\Delta t/1 = \phi_{21}F_b\Delta t, \qquad \dot q_2(0) = F_I/m = \phi_{22}F_b\Delta t/1 = \phi_{22}F_b\Delta t \qquad (7.5\text{-}16)$$

Comparing these to the initial velocities in Eq. (7.5-12) we observe that they are the same.
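The equivalence of the two routes, initial velocities computed in physical coordinates and then transformed (Eq. 7.5-12) versus the impulsive force transformed to the modal domain first (Eq. 7.5-16), can be checked numerically. All numbers below are illustrative assumptions, not values from the text; the identity holds for any transformation matrix.

```python
# Hypothetical two-degree-of-freedom data (assumed values); phi would be a
# mass-normalized mode shape matrix in a real problem.
m1, m2 = 2.0, 0.5          # masses
Fb, dt = 10.0, 1e-3        # boxcar force magnitude and (short) duration
phi = [[0.4, 0.7],         # columns are mode shapes (illustrative numbers)
       [0.6, -0.3]]

# Route 1: physical initial velocities, Eq. (7.5-10), then transform per
# Eq. (7.5-12): {qdot(0)} = [phi]^T [m] {xdot(0)}
xdot0 = [0.0, Fb * dt / m2]
q_route1 = [phi[0][0] * m1 * xdot0[0] + phi[1][0] * m2 * xdot0[1],
            phi[0][1] * m1 * xdot0[0] + phi[1][1] * m2 * xdot0[1]]

# Route 2: transform the impulsive force to the modal domain first,
# Eqs. (7.5-15)/(7.5-16): qdot_j(0) = phi_2j * Fb * dt (unit modal mass)
q_route2 = [phi[1][0] * Fb * dt, phi[1][1] * Fb * dt]

print(q_route1, q_route2)
```

Both routes yield the same modal initial velocities, as the text observes.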


7.6 Base excitation

There is a class of excitation that is often approximated by base motion of the system. Examples include buildings excited by earthquakes, or small components attached to a larger system undergoing vibration. It is important to note that inherent in the formulation of the typical base-shake problem is the assumption that the system being excited does not interact with, or influence, the base motion. For example, if one wishes to analyze the base shake of an electronic unit in a spacecraft, then the assumption is that the vibration of the unit will not alter the motion of the points where the unit attaches to the spacecraft. If this assumption cannot be justified, then a base-shake analysis/test is at best an approximate solution.

To start, we will derive the equations of motion of a simple system being forced by a prescribed base acceleration in one translational direction. We will then add rotational excitation and rotational degrees of freedom, and finally we will present the general formulation for a three-dimensional structure, with six degrees of freedom at each mass, with independent excitation at multiple interface points.

7.6.1 Unidirectional motion

We will start with the four-mass system defined in Fig. 7.6-1 and derive the equations of motion about the static equilibrium point so that we do not need to include the effects of gravity. However, when computing internal loads and stresses, we would have to add the static internal loads due to the force of gravity. Note that this system is a one-dimensional structure, i.e., we are only allowing unidirectional motion in the vertical direction.

FIGURE 7.6-1 Four-degree-of-freedom system driven at its base, unidirectional motion.


Since masses $m_3$ and $m_4$ are attached to the base, they will undergo the same motion as the base. Therefore, this four-mass system has only two independent degrees of freedom, described by coordinates $y_1(t)$ and $y_2(t)$, which are defined in an inertial coordinate system. In addition, we will assume that the damping of the system leads to classical normal modes. In this manner we will not need to define the physical damping mechanism, and we can introduce modal damping after we have transformed to modal coordinates. The equations of motion for masses $m_1$ and $m_2$ are

$$m_1\ddot y_1(t) + k_1\left(y_1(t) - y_2(t)\right) + k_3\left(y_1(t) - y_B(t)\right) = 0$$
$$m_2\ddot y_2(t) - k_1\left(y_1(t) - y_2(t)\right) + k_2\left(y_2(t) - y_B(t)\right) = 0 \qquad (7.6\text{-}1)$$

Introducing new coordinates, $y_{1e}(t)$ and $y_{2e}(t)$, to describe the motion of masses $m_1$ and $m_2$ relative to the motion of the base, $y_B(t)$, gives

$$y_{1e}(t) = y_1(t) - y_B(t) \quad\text{and}\quad y_{2e}(t) = y_2(t) - y_B(t)$$
$$\ddot y_{1e}(t) = \ddot y_1(t) - \ddot y_B(t) \quad\text{and}\quad \ddot y_{2e}(t) = \ddot y_2(t) - \ddot y_B(t) \qquad (7.6\text{-}2)$$

Substituting into Eq. (7.6-1) yields

$$m_1\left(\ddot y_{1e}(t) + \ddot y_B(t)\right) + k_1\left(y_{1e}(t) - y_{2e}(t)\right) + k_3 y_{1e}(t) = 0$$
$$m_2\left(\ddot y_{2e}(t) + \ddot y_B(t)\right) - k_1\left(y_{1e}(t) - y_{2e}(t)\right) + k_2 y_{2e}(t) = 0 \qquad (7.6\text{-}3)$$

Rewriting (7.6-3) as a matrix equation and moving the term that is proportional to the base acceleration to the right-hand side produces

$$\begin{bmatrix} m_1 & 0 \\ 0 & m_2 \end{bmatrix}\begin{Bmatrix} \ddot y_{1e}(t) \\ \ddot y_{2e}(t) \end{Bmatrix} + \begin{bmatrix} k_1 + k_3 & -k_1 \\ -k_1 & k_1 + k_2 \end{bmatrix}\begin{Bmatrix} y_{1e}(t) \\ y_{2e}(t) \end{Bmatrix} = -\begin{bmatrix} m_1 & 0 \\ 0 & m_2 \end{bmatrix}\begin{Bmatrix} 1 \\ 1 \end{Bmatrix}\ddot y_B(t) \qquad (7.6\text{-}4)$$

Note that Eq. (7.6-4) is a second-order, matrix differential equation of motion written in terms of relative coordinates $y_{1e}$ and $y_{2e}$. Hence, the accelerations in inertial coordinates are obtained by adding the base acceleration to the computed relative accelerations. To facilitate the solution, we will write Eq. (7.6-4) in a more compact form, i.e.,

$$[m]\{\ddot y_e(t)\} + [k]\{y_e(t)\} = -[m]\{\phi_{RB}\}\ddot y_B(t) \qquad (7.6\text{-}5)$$

Solving the eigenvalue problem $\left(-\omega_{n,j}^2[m] + [k]\right)\{\phi\}_j = \{0\}$ yields the circular natural frequencies, $\omega_{n,j}$, and mode shapes, $\{\phi\}_j$, of the system


fixed at its base. Using the computed mode shapes to transform Eq. (7.6-5) into modal coordinates yields

$$[I]\{\ddot q_e(t)\} + \left[\omega_n^2\right]\{q_e(t)\} = -[\phi]^T[m]\{\phi_{RB}\}\ddot y_B(t) \qquad (7.6\text{-}6)$$

where we substituted $\{y_e(t)\} = [\phi]\{q_e(t)\}$, and its second time derivative, into Eq. (7.6-5), and then premultiplied each term in the equation by $[\phi]^T$. Note that the mode shapes were normalized to yield unit modal mass, which then also yields the diagonal modal stiffness matrix, $\left[\omega_n^2\right]$, with the circular natural frequencies squared on the diagonal. Introducing modal damping, Eq. (7.6-6) becomes

$$[I]\{\ddot q_e(t)\} + [2\zeta\omega_n]\{\dot q_e(t)\} + \left[\omega_n^2\right]\{q_e(t)\} = -[\phi]^T[m]\{\phi_{RB}\}\ddot y_B(t) \qquad (7.6\text{-}7)$$

where $[2\zeta\omega_n]$ is a diagonal matrix, and $\zeta_j$ is the critical damping ratio for the $j$th mode. The term $[\phi]^T[m]\{\phi_{RB}\}$ on the right-hand side of Eq. (7.6-7) is referred to as the mode participation factor. As will be seen later, this term makes it difficult to excite many modes by driving a system at its base. $\{\phi_{RB}\}$ is the rigid body vector of the system referenced to its base, which for the above example is a vector with ones corresponding to the coordinates that define motion in the direction of the base motion. To obtain the response of the system we could numerically integrate Eq. (7.6-7) (see Chapter 8), or obtain a closed-form solution if $\ddot y_B(t)$ is a function for which closed-form solutions exist. Once $q_e(t)$ and its derivatives are computed, the physical responses are computed as

$$\{\ddot y(t)\} = \{\ddot y_e(t)\} + \{\phi_{RB}\}\ddot y_B(t) = [\phi]\{\ddot q_e(t)\} + \{\phi_{RB}\}\ddot y_B(t)$$
$$\{y(t)\} = \{y_e(t)\} + \{\phi_{RB}\}y_B(t) = [\phi]\{q_e(t)\} + \{\phi_{RB}\}y_B(t), \qquad \{y_e(t)\} = [\phi]\{q_e(t)\} \qquad (7.6\text{-}8)$$

7.6.2 Translation plus rotation

In this section, the equations of motion will be derived for a system (Fig. 7.6-2) that has both translational and rotational degrees of freedom. We will assume that the base motion is imposed on mass $m_3$; in other words, the structure is attached to "ground" at mass $m_3$. Hence, $m_3$ will undergo translational and rotational acceleration defined by $\ddot x_B(t)$ and $\ddot\theta_B(t)$. The accelerations at the other two masses will be composed of motion due to motion of the base plus motion relative to the base. These components are depicted to the right of the equal sign in Fig. 7.6-2.


FIGURE 7.6-2 Three-mass system where each mass is allowed to translate and rotate. Motion of mass $m_3$ is prescribed.

If for the moment we assume the structure is infinitely rigid, then, when the base undergoes a translational acceleration, $\ddot x_B(t)$, each mass will experience the same acceleration. This is the first diagram to the right of the equal sign in Fig. 7.6-2. If we impose on the same rigid structure a base rotational acceleration, $\ddot\theta_B(t)$, then mass $m_1$ will undergo a translational acceleration of $-L_1\ddot\theta_B(t)$ and a rotational acceleration of $\ddot\theta_B(t)$, where the positive coordinate directions are defined in the leftmost diagram of the figure and we have assumed small angular rotation such that $\sin\theta = \theta$. Likewise, mass $m_2$ will undergo a translational acceleration of $-L_2\ddot\theta_B(t)$ and a rotational acceleration of $\ddot\theta_B(t)$. These are depicted in the third diagram in the figure. In addition to the masses undergoing rigid body motion, masses $m_1$ and $m_2$ will also translate and rotate relative to the base, since the actual system could deform elastically. We will define these accelerations as $\ddot x_{1e}(t)$, $\ddot x_{2e}(t)$, $\ddot\theta_{1e}(t)$, and $\ddot\theta_{2e}(t)$. Note that in the rightmost diagram we have shown a deformed structure; the actual motion, however, will be defined by the properties of the structure and the motion of the base. Note that $\ddot x_{3e}(t) = 0$ and $\ddot\theta_{3e}(t) = 0$, since mass $m_3$ is constrained to move with the base; hence $m_3$ will not have any translation or rotation relative to the base. We can now write the total acceleration, in an inertial reference frame, of each mass, i.e.,

$$\ddot x_1(t) = \ddot x_B(t) - L_1\ddot\theta_B(t) + \ddot x_{1e}(t), \qquad \ddot\theta_1(t) = \ddot\theta_B(t) + \ddot\theta_{1e}(t)$$
$$\ddot x_2(t) = \ddot x_B(t) - L_2\ddot\theta_B(t) + \ddot x_{2e}(t), \qquad \ddot\theta_2(t) = \ddot\theta_B(t) + \ddot\theta_{2e}(t) \qquad (7.6\text{-}9)$$


and the equations of motion for this system are

$$\begin{bmatrix} m_1 & & & \\ & I_1 & & \\ & & m_2 & \\ & & & I_2 \end{bmatrix}\left(\begin{Bmatrix} \ddot x_{1e}(t) \\ \ddot\theta_{1e}(t) \\ \ddot x_{2e}(t) \\ \ddot\theta_{2e}(t) \end{Bmatrix} + \begin{bmatrix} 1 & -L_1 \\ 0 & 1 \\ 1 & -L_2 \\ 0 & 1 \end{bmatrix}\begin{Bmatrix} \ddot x_B(t) \\ \ddot\theta_B(t) \end{Bmatrix}\right) + [k]\begin{Bmatrix} x_{1e}(t) \\ \theta_{1e}(t) \\ x_{2e}(t) \\ \theta_{2e}(t) \end{Bmatrix} = \{0\} \qquad (7.6\text{-}10)$$

Note that we will add the damping once we have transformed to modal coordinates, and that we only defined the acceleration proportional matrix explicitly. Also, note that we are representing the stiffness of the system by the generic term $[k]$, and it should not be confused with the stiffness matrix defined in the previous section. Transforming Eq. (7.6-10) into modal coordinates and adding the modal damping term yields

$$[I]\{\ddot q_e(t)\} + [2\zeta\omega_n]\{\dot q_e(t)\} + \left[\omega_n^2\right]\{q_e(t)\} = -[\Gamma]\begin{Bmatrix} \ddot x_B(t) \\ \ddot\theta_B(t) \end{Bmatrix} \qquad (7.6\text{-}11)$$

where $[\Gamma] = [\phi]^T[m][\phi_{RB}]$, and as indicated previously is referred to as the mode participation factor. For rigid base excitation, we can have up to six terms in the base acceleration vector: three translations and three rotations. In the next section, we will deal with the generic case where motion is independently prescribed at multiple interface points.

7.6.3 Multipoint excitation

We begin with the matrix differential equation of motion of an unconstrained, undamped, multi-degree-of-freedom system, where we have partitioned the equation into noninterface, $\{w(t)\}_N$, and interface, $\{w(t)\}_I$, coordinates,

$$\begin{bmatrix} [m_{NN}] & [m_{NI}] \\ [m_{IN}] & [m_{II}] \end{bmatrix}\begin{Bmatrix} \{\ddot w(t)\}_N \\ \{\ddot w(t)\}_I \end{Bmatrix} + \begin{bmatrix} [k_{NN}] & [k_{NI}] \\ [k_{IN}] & [k_{II}] \end{bmatrix}\begin{Bmatrix} \{w(t)\}_N \\ \{w(t)\}_I \end{Bmatrix} = \begin{Bmatrix} \{0\} \\ \{0\} \end{Bmatrix} \qquad (7.6\text{-}12)$$

The vector $\{w(t)\}_N$ contains the $x_i$, $y_i$, $z_i$, $\theta_{x,i}$, $\theta_{y,i}$, and $\theta_{z,i}$ physical coordinates for each noninterface mass point in the model, and the vector $\{w(t)\}_I$ contains the $x_i$, $y_i$, $z_i$, $\theta_{x,i}$, $\theta_{y,i}$, and $\theta_{z,i}$ physical coordinates for each interface point where the motion will be imposed, or prescribed.
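The N/I partitioning in Eq. (7.6-12) is just index bookkeeping. A minimal sketch follows; the 4×4 stiffness matrix and the choice of interface coordinates are invented for illustration, not taken from the text.

```python
# Sketch: partitioning a system matrix, as in Eq. (7.6-12), into
# noninterface (N) and interface (I) sets.  The matrix values and the
# interface set below are illustrative assumptions.
def partition(matrix, n_set, i_set):
    sub = lambda rows, cols: [[matrix[r][c] for c in cols] for r in rows]
    return (sub(n_set, n_set), sub(n_set, i_set),
            sub(i_set, n_set), sub(i_set, i_set))

k = [[ 2, -1,  0,  0],
     [-1,  3, -1, -1],
     [ 0, -1,  2, -1],
     [ 0, -1, -1,  2]]
noninterface = [0, 1]      # coordinates free to respond
interface = [2, 3]         # coordinates with prescribed motion
kNN, kNI, kIN, kII = partition(k, noninterface, interface)
print(kNN, kNI, kIN, kII)
```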


We next define a transformation matrix that relates the coordinates in Eq. (7.6-12) to coordinates that define motion relative to the interface and coordinates that define motion due to the distortion or motion of the interface, i.e.,

$$\begin{Bmatrix} \{w(t)\}_N \\ \{w(t)\}_I \end{Bmatrix} = \begin{bmatrix} [\phi] & [\phi_C] \\ [0] & [I] \end{bmatrix}\begin{Bmatrix} \{q_e(t)\} \\ \{w(t)\}_I \end{Bmatrix} \qquad (7.6\text{-}13)$$

In Eq. (7.6-13), we define the motion relative to the interface in terms of modal coordinates, i.e., interface-fixed mode shapes, $[\phi]$, scaled by modal coordinates $\{q_e(t)\}$. These mode shapes are obtained from the interface-fixed eigenvalue problem, $\left(-\omega_{n,j}^2[m_{NN}] + [k_{NN}]\right)\{\phi\}_j = \{0\}$. For convenience, we will normalize the mode shapes to unit modal mass, i.e., $[\phi]^T[m_{NN}][\phi] = [I]$. Since we will want to excite the structure through multiple independent points, the interface to the base ("ground") will be indeterminate and, therefore, we will need to use the system's constraint modes, $[\phi_C] = -[k_{NN}]^{-1}[k_{NI}]$, to define the motion away from the interface due to displacements of the interface (see Volume II and Hurty/Craig-Bampton models). Note that the constraint modes contain both rigid body vectors referenced to the interface, and displacement vectors that relate noninterface motion to the nonrigid body distortions of the interface. For a statically determinate interface, the constraint modes are simply the rigid body vectors of the system referenced to the interface.

By substituting Eq. (7.6-13) and its second time derivative into Eq. (7.6-12), and then premultiplying the resulting equation by the transpose of the transformation matrix, we obtain

$$\begin{bmatrix} [I]_{NN} & [\bar m_{NI}] \\ [\bar m_{IN}] & [\bar m_{II}] \end{bmatrix}\begin{Bmatrix} \{\ddot q_e(t)\} \\ \{\ddot w(t)\}_I \end{Bmatrix} + \begin{bmatrix} \left[\omega_n^2\right]_{NN} & [0]_{NI} \\ [0]_{IN} & [\bar k_{II}] \end{bmatrix}\begin{Bmatrix} \{q_e(t)\} \\ \{w(t)\}_I \end{Bmatrix} = \begin{Bmatrix} \{0\} \\ \{0\} \end{Bmatrix} \qquad (7.6\text{-}14)$$

Performing the multiplications associated with the upper partition matrices produces

$$[I]_{NN}\{\ddot q_e(t)\} + \left[\omega_n^2\right]_{NN}\{q_e(t)\} = -[\bar m_{NI}]\{\ddot w(t)\}_I \qquad (7.6\text{-}15)$$

where $[\bar m_{NI}] = [\phi]^T[m_{NN}][\phi_C] + [\phi]^T[m_{NI}]$. Note that if there is no mass coupling between the interface and noninterface coordinates, then $[m_{NI}]$


would be a null matrix, and Eq. (7.6-15) would have the same form as Eq. (7.6-11) once the damping term is included. Also, since the system is attached to "ground" at the interface coordinates, the interface mass has no impact on the solution, since the interface motions are prescribed. For a statically determinate interface, $[\phi_C]$ would reduce to $[\phi_{RB}]$, the six rigid body vectors referenced to the interface. The final matrix differential equation of motion, with damping, for a generic base excitation problem where the various interface points can undergo independently prescribed acceleration is

$$[I]_{NN}\{\ddot q_e(t)\} + [2\zeta\omega_n]_{NN}\{\dot q_e(t)\} + \left[\omega_n^2\right]_{NN}\{q_e(t)\} = -[\bar m_{NI}]\{\ddot w(t)\}_I \qquad (7.6\text{-}16)$$

Once Eq. (7.6-16) is solved, the displacement response is recovered with the transformation defined by Eq. (7.6-13). Note that the dynamic response is obtained by defining the excitation (interface motion) in terms of the acceleration of the interface coordinates. The acceleration response is recovered using the second time derivative of Eq. (7.6-13).

7.6.4 Harmonic excitation

A common base excitation test involves unidirectional, sinusoidal excitation whose frequency sweeps through a range of interest, with all interface points constrained to undergo the same motion. The response levels will depend on the dynamic properties of the shake table and test article system, the direction and magnitude of the excitation, and the sweep rate. For the purposes of this discussion we will assume that the response of the test article will not impact the motion of the table, which we will define for this discussion as $A\cos\omega t$; see Section 7.4 in this chapter, and Chapter 5, for detailed discussion of sweep rate effects. For shake tables that drive the test article in one direction only, Eq. (7.6-16) reduces to

$$[I]_{NN}\{\ddot q_e(t)\} + [2\zeta\omega_n]_{NN}\{\dot q_e(t)\} + \left[\omega_n^2\right]_{NN}\{q_e(t)\} = \{\Gamma\}Ae^{i\omega t} \qquad (7.6\text{-}17)$$

where $\{\Gamma\} = -[\bar m_{NI}]\{\bar I\}$ and

$$\{\ddot w(t)\}_I = \{\bar I\}Ae^{i\omega t} \qquad (7.6\text{-}18)$$

The vector $\{\bar I\}$ defines the components of the interface coordinates in the direction of excitation such that $[\phi_C]\{\bar I\} = \{\phi_{RB}\}$, where $\{\phi_{RB}\}$ is the rigid body vector referenced to the base of the system and in the direction of base


motion. Also, note that we have defined the excitation as $Ae^{i\omega t}$, which is equivalent to $A\cos\omega t + iA\sin\omega t$ by Euler's formula. Since we are interested in the steady-state solution, we will only need to solve for the particular solution, since the complementary solution (i.e., the solution to the homogeneous equation) will decay for a system with damping. The solution will be complex; the real part will correspond to the $A\cos\omega t$ excitation, and the imaginary part will correspond to the $iA\sin\omega t$ term. Once we have the solution, we can retain the real part as the response to the $A\cos\omega t$ excitation. We begin by assuming the following solution:

$$\{q_e(t)\} = \{\psi\}e^{i\omega t}, \qquad \{\dot q_e(t)\} = i\omega\{\psi\}e^{i\omega t}, \qquad \{\ddot q_e(t)\} = -\omega^2\{\psi\}e^{i\omega t} \qquad (7.6\text{-}19)$$

Substituting into Eq. (7.6-17) yields

$$-\omega^2[I]\{\psi\}e^{i\omega t} + i\omega[2\zeta\omega_n]\{\psi\}e^{i\omega t} + \left[\omega_n^2\right]\{\psi\}e^{i\omega t} = \{\Gamma\}Ae^{i\omega t} \qquad (7.6\text{-}20)$$

Dividing by $e^{i\omega t}$ and collecting terms produces

$$\left(-\omega^2[I] + i\omega[2\zeta\omega_n] + \left[\omega_n^2\right]\right)\{\psi\} = \{\Gamma\}A$$
$$\left(\left[\omega_n^2 - \omega^2\right] + i[2\zeta\omega_n\omega]\right)\{\psi\} = \{\Gamma\}A \qquad (7.6\text{-}21)$$

Recall that each matrix to the left of the equal sign in Eq. (7.6-17) is a diagonal matrix. Therefore, each matrix in Eq. (7.6-21) is also diagonal. Premultiplying Eq. (7.6-21) by the complex conjugate of the term inside the parentheses gives

$$\left(\left[\omega_n^2 - \omega^2\right]^2 + [2\zeta\omega_n\omega]^2\right)\{\psi\} = \left(\left[\omega_n^2 - \omega^2\right] - i[2\zeta\omega_n\omega]\right)\{\Gamma\}A \qquad (7.6\text{-}22)$$

Finally, we solve for $\{\psi\}$ by inverting the diagonal matrix on the left-hand side, i.e.,

$$\{\psi\} = \left(\left[\omega_n^2 - \omega^2\right]^2 + [2\zeta\omega_n\omega]^2\right)^{-1}\left(\left[\omega_n^2 - \omega^2\right] - i[2\zeta\omega_n\omega]\right)\{\Gamma\}A \qquad (7.6\text{-}23)$$

Substituting into our assumed solution yields

$$\{\ddot q_e(t)\} = -\omega^2\left(\{\psi\}_R + i\{\psi\}_I\right)e^{i\omega t} \qquad (7.6\text{-}24)$$


where $\{\psi\}_R$ and $\{\psi\}_I$ are the real and imaginary parts, respectively, of $\{\psi\}$ (see Section 7.2.2 for discussion of quadrature and coincident components of response). Differentiating Eq. (7.6-13) with respect to time twice, and then substituting Eqs. (7.6-18) and (7.6-24), yields the physical coordinate acceleration response,

$$\begin{Bmatrix} \{\ddot w(t)\}_N \\ \{\ddot w(t)\}_I \end{Bmatrix} = \begin{bmatrix} [\phi] & [\phi_C] \\ [0] & [I] \end{bmatrix}\begin{Bmatrix} \{\ddot q_e(t)\} \\ \{\ddot w(t)\}_I \end{Bmatrix} = \begin{bmatrix} [\phi] & [\phi_C] \\ [0] & [I] \end{bmatrix}\begin{Bmatrix} -\omega^2\left(\{\psi\}_R + i\{\psi\}_I\right) \\ \{\bar I\}A \end{Bmatrix}e^{i\omega t} \qquad (7.6\text{-}25)$$

Performing the indicated multiplications and separating the real and imaginary parts, we obtain

$$\{\ddot w(t)\}_N = \left(-\omega^2[\phi]\left(\{\psi\}_R + i\{\psi\}_I\right) + [\phi_C]\{\bar I\}A\right)(\cos\omega t + i\sin\omega t)$$
$$= \left(-\omega^2[\phi]\left(\{\psi\}_R\cos\omega t - \{\psi\}_I\sin\omega t\right) + [\phi_C]\{\bar I\}A\cos\omega t\right) + i\left(-\omega^2[\phi]\left(\{\psi\}_R\sin\omega t + \{\psi\}_I\cos\omega t\right) + [\phi_C]\{\bar I\}A\sin\omega t\right) \qquad (7.6\text{-}26)$$

The sought-after solution is the real part of the preceding equation, i.e.,

$$\{\ddot w(t)\}_N = -\omega^2[\phi]\left(\{\psi\}_R\cos\omega t - \{\psi\}_I\sin\omega t\right) + [\phi_C]\{\bar I\}A\cos\omega t \qquad (7.6\text{-}27)$$

7.6.5 Practical considerations

Issues associated with base-shake analysis and testing include the inability to properly excite the desired dynamic behavior, sweep rate effects, the interaction that occurs between a shake table and the test article, and the fact that the dynamic properties of the test article will not be the same as when it is coupled into the rest of the system in operation. This last point is particularly limiting for base-shake tests of fully configured spacecraft, and less critical for subsystems whose modes of vibration do not interact significantly with the rest of the system.

7.6.5.1 Mode participation factors

We begin by looking at why it is difficult to excite higher-order modes through unidirectional base excitation. This is best demonstrated with the


FIGURE 7.6-3 Three-degree-of-freedom system, its rigid body vector, and its three elastic normal mode shapes.

three-mass example problem shown in Fig. 7.6-3. Let the springs have equal stiffness values, and let the masses also be equal. We first solve the base-fixed eigenvalue problem to obtain the elastic modes of the system, which are normalized to yield a unity modal mass matrix. The mode participation factors, i.e., elements of $\{\Gamma\}$, are

$$\{\Gamma\} = [\phi]^T[m]\{\phi_{RB}\} = \left(\frac{1}{\sqrt m}\begin{bmatrix} 0.737 & 0.591 & 0.328 \\ 0.591 & -0.328 & -0.737 \\ 0.328 & -0.737 & 0.591 \end{bmatrix}\right)^T\begin{bmatrix} m & & \\ & m & \\ & & m \end{bmatrix}\begin{Bmatrix} 1 \\ 1 \\ 1 \end{Bmatrix} = \frac{m}{\sqrt m}\begin{Bmatrix} 1.656 \\ -0.474 \\ 0.182 \end{Bmatrix} \qquad (7.6\text{-}28)$$

where the $1/\sqrt m$ term multiplying the mode shape matrix is the mode shape normalization factor. The first mode participation factor is $1.656m/\sqrt m$, and when multiplied by the base motion it corresponds to the modal force for the first mode. As can be ascertained, the magnitude of the value corresponding to the second mode is considerably smaller than that of the first, and that of the third is considerably less than that of the second. If all else is equal, and each mode


participation factor is multiplied by a base excitation with equal energy at all frequencies, then the excitation for the second mode will be considerably lower than that of the first; and likewise for the third relative to the second and first modes. Note that the actual response of each mode will depend not just on the excitation magnitude but also on its frequency content. However, if the excitation frequency were at the natural frequency of each mode, which would happen if one swept the frequency of excitation, and we assumed that all modes had the same damping, then generally the response levels of the higher-order modes would be considerably lower than those of the lower-order modes. This, therefore, can limit the value of base-shake tests to characterize the higher-order dynamic properties of a system. The decrease in the magnitude of the mode participation factors with increasing mode number is due to the phase reversals that are introduced in each mode as one increases in mode number; recall that it is these phase reversals that make mode shapes orthogonal to each other and linearly independent. The rigid body vector, ffRB g, in the mode participation factor maps the base acceleration to each mass point in the system. Thus, one can think of this as fully correlated external forces acting at each mass point. Therefore, for any one mode the modal force is obtained by summing the products of these forces and the corresponding mode shape values (gains) at each mass point. It is the fully correlated nature of these equivalent forces and the phase reversals that are introduced as one increases in mode number that yield lower values for the mode participation factor as one increases in mode number. This mode-participation-factor property is part of the reason why it is possible to compute the response of buildings subjected to earthquakes by using only the lower-order modes of the system. 
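The participation factors of Eq. (7.6-28) can be reproduced numerically. The following is a minimal sketch (unit masses and stiffnesses are assumed; any consistent values give the same ratios) that solves the base-fixed eigenvalue problem and forms $\{\Gamma\}$:

```python
import numpy as np

# Three equal masses m on three equal springs k, fixed at the base
# (unit values are assumed; only the ratios matter for the factors).
m, k = 1.0, 1.0
M = m * np.eye(3)
K = k * np.array([[ 2.0, -1.0,  0.0],
                  [-1.0,  2.0, -1.0],
                  [ 0.0, -1.0,  1.0]])

# Base-fixed eigenvalue problem; M is a scaled identity, so a standard
# symmetric eigensolve suffices
w2, phi = np.linalg.eigh(K / m)

# Normalize to a unity modal mass matrix: [phi]^T [M] [phi] = [I]
for j in range(3):
    phi[:, j] /= np.sqrt(phi[:, j] @ M @ phi[:, j])

phi_RB = np.ones(3)            # rigid body vector: base motion maps equally
Gamma = phi.T @ M @ phi_RB     # mode participation factors, Eq. (7.6-28)
print(np.abs(Gamma))           # magnitudes ~ [1.656, 0.474, 0.182]
```

The magnitudes decrease with mode number because of the phase reversals within the higher mode shapes, as discussed above.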
Likewise, this is the reason why it is difficult to excite the higher-order modes of complex systems, such as spacecraft, through base excitation.

7.6.5.2 Sweep rate effects

As discussed above, the response of a system to harmonic excitation is composed of two parts, the solution to the homogeneous equation and the particular solutions, one for each forcing function term. The superposition of these solutions yields a response that increases from zero, assuming the initial conditions are zero, until the steady-state response is achieved (see Chapter 2). In other words, it takes numerous cycles for the response to grow to its steady-state limit value. Therefore, if the excitation frequency



is increasing with time (sweeping), the system cannot reach steady-state oscillation, since there are not enough cycles at any given frequency. The reduction relative to the steady-state response depends on the sweep rate, the sweep's start frequency relative to the natural frequency, and the damping; the faster the sweep rate, the larger the deficit, whereas the slower the rate, the closer one gets to the steady-state response. It should be noted, however, that for more heavily damped modes and slow sweep rates, the response of a system to swept base excitation can exceed the steady-state resonant response; this is discussed in detail in Chapter 5. Chapter 5 also contains an extensive discussion of the general case of response attenuation. Figures presented in Chapter 5, Section 5.7, show attenuation levels for single-degree-of-freedom systems excited by harmonic forces with linear and octave sweep rates, as a function of the natural frequency of the oscillator and its critical damping ratio. The total response of a multi-degree-of-freedom system to swept excitation is considerably more complex; this is discussed in Section 7.4 of this chapter. The response in any one mode, however, would be as described in Chapter 5. The complexity is due to how multiple modal responses add to produce a total response. The only way to predict this is to compute the response of the multi-degree-of-freedom system to the swept excitation. It can be concluded, however, that sweeping the base excitation will generally lead to lower responses relative to steady-state excitation. It is possible, however, for combinations of closely spaced modes to yield higher responses during a sweep than in steady-state excitation. This occurs because swept excitation will excite a system over a broader frequency range.
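The sweep-rate deficit can be illustrated with a single-degree-of-freedom simulation. This is a sketch under assumed values (a 10 Hz oscillator with 1% damping, unit force amplitude), not a general-purpose sweep analysis:

```python
import numpy as np

# Swept-sine response of a single-degree-of-freedom oscillator (a sketch;
# the oscillator and sweep values below are assumed for illustration).
fn, zeta = 10.0, 0.01                  # natural frequency [Hz], damping ratio
wn = 2.0 * np.pi * fn
Q = 1.0 / (2.0 * zeta)                 # steady-state resonant amplification

def peak_response(rate_hz_per_s, f0=5.0, f1=15.0):
    """Integrate x'' + 2*zeta*wn*x' + wn^2*x = sin(theta(t)) through resonance."""
    T = (f1 - f0) / rate_hz_per_s
    dt = 1.0 / (fn * 200.0)            # ~200 steps per natural period
    t = np.arange(0.0, T, dt)
    force = np.sin(2.0 * np.pi * (f0 * t + 0.5 * rate_hz_per_s * t**2))
    x = v = peak = 0.0
    for f in force:                    # semi-implicit Euler time marching
        v += (f - 2.0 * zeta * wn * v - wn**2 * x) * dt
        x += v * dt
        peak = max(peak, abs(x))
    return peak * wn**2                # normalize by the static response 1/wn^2

slow = peak_response(0.1)              # slow sweep: approaches steady state
fast = peak_response(10.0)             # fast sweep: large response deficit
print(slow / Q, fast / Q)              # fractions of the steady-state peak Q
```

The faster sweep spends too few cycles near resonance for the response to build up, so its peak is a much smaller fraction of the steady-state amplification Q.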
Hence, caution should be exercised when using single-degree-of-freedom responses to draw conclusions about complex multi-degree-of-freedom systems.

7.6.5.3 Shake table-test article interaction

Shake table motion is achieved with either hydraulic actuators or electromagnetic coils. In either case, applying a force to the table produces the motion and, if the actuation system cannot produce sufficient force, the table will not be able to achieve the desired acceleration levels. When the swept excitation frequency approaches the fundamental natural frequency of the system under test, the response will grow significantly and reach a peak at a frequency slightly past the natural frequency. The response then


decreases as the excitation frequency continues to increase. This increase in response will appear to the table as a significant increase in the reaction force it has to drive against. Therefore, the table actuation system should be sized not to the mass times the desired peak base acceleration, but to the mass times the resonant response of the system (plus margin), which can be significantly higher. In order to protect the test article against an overtest, modern shake tables are operated under closed-loop control, such that the desired table acceleration is achieved within the tolerances of the table actuation system and controller. However, if the table actuation system is undersized, the desired levels may not be achievable near the natural frequencies of the test article, even with a closed-loop controller. It is also common to monitor the acceleration at numerous locations within the test article (strain gauges are also used), with redlines based on analytical predictions. If a redline is reached at any location, the table is shut down automatically. It should be noted, however, that how a table is shut down is critical, since sudden cessation of the base motion is itself a transient imparted to the test article. Another consideration is the accuracy of the redline predictions. Since these are based on analytical models that most likely will not be accurate in the higher-order modes, the associated predictions of internal loads will also be suspect, and so will the protection that the redlines offer. In summary, base-shake testing of complex structures, such as large spacecraft, is a risky proposition, whereas for components, such as electronic boxes, where design conservatism does not cause undue weight penalties, a base-shake test is a good option for vibration testing and for screening components for defects and design quality.
7.7 Random response analysis

In Chapter 5, we developed the tools to solve for the response of single-degree-of-freedom systems excited by nondeterministic forces described by their statistical properties. This was accomplished by solving for responses in the frequency domain, where forcing functions were described by Power Spectral Density functions. The resulting response quantities were then also described by Power Spectral Density functions, from which the variance and response standard deviation could be computed.



In addition, we introduced the concept of obtaining statistical descriptions of response quantities by solving for the responses in the time domain, and computing the mean square values directly from the response time histories. In the next two sections, we will extend these concepts to multi-degree-of-freedom systems by using a two-degree-of-freedom system for illustration purposes, while simultaneously deriving the equations for systems with a larger number of degrees of freedom. We will start by deriving the frequency domain solution for the multi-degree-of-freedom forced vibration problem, and then we will address the base excitation problem. The time domain solutions will be addressed in Section 7.8.

7.7.1 Forced vibration

We will derive the forced vibration response to random excitation by starting with the definition of the mean square value of the displacement response. Recall that the square root of the mean square value of a response to zero-mean excitation is the standard deviation of the response (see Chapter 5 and Volume II). Once we have the standard deviation, we can compute the probability of exceeding specific response levels. The mean square value of the $j$th displacement response, $\overline{x_j^2}$, is

$$E\left[x_j^2(t)\right] = \overline{x_j^2} = \lim_{T\to\infty}\frac{1}{2T}\int_{-T}^{T} x_j^2(t)\,dt \qquad (7.7\text{-}1)$$

We start by expressing Eq. (7.7-1) in matrix notation for a two-degree-of-freedom system:

$$\begin{Bmatrix} \overline{x_1^2} \\ \overline{x_2^2} \end{Bmatrix} = \begin{Bmatrix} \displaystyle\lim_{T\to\infty}\frac{1}{2T}\int_{-T}^{T} x_1^2(t)\,dt \\ \displaystyle\lim_{T\to\infty}\frac{1}{2T}\int_{-T}^{T} x_2^2(t)\,dt \end{Bmatrix} = \lim_{T\to\infty}\frac{1}{2T}\int_{-T}^{T}\begin{Bmatrix} x_1^2(t) \\ x_2^2(t) \end{Bmatrix} dt \qquad (7.7\text{-}2)$$

We can express the rightmost vector as

$$\begin{Bmatrix} x_1^2(t) \\ x_2^2(t) \end{Bmatrix} = \mathrm{diag}\left(\begin{Bmatrix} x_1(t) \\ x_2(t) \end{Bmatrix}\begin{Bmatrix} x_1(t) \\ x_2(t) \end{Bmatrix}^T\right) = \mathrm{diag}\begin{bmatrix} x_1^2(t) & x_1(t)x_2(t) \\ x_2(t)x_1(t) & x_2^2(t) \end{bmatrix} \qquad (7.7\text{-}3)$$


where the operator diag selects the diagonal terms only. Substituting into Eq. (7.7-2) gives

$$\begin{Bmatrix} \overline{x_1^2} \\ \overline{x_2^2} \end{Bmatrix} = \mathrm{diag}\left[\lim_{T\to\infty}\frac{1}{2T}\int_{-T}^{T}\begin{bmatrix} x_1^2(t) & x_1(t)x_2(t) \\ x_2(t)x_1(t) & x_2^2(t) \end{bmatrix} dt\right] \qquad (7.7\text{-}4)$$

Recall Parseval's theorem (see Appendix 5.1):

$$\int_{-\infty}^{\infty} x_1(t)x_2(t)\,dt = \frac{1}{2\pi}\int_{-\infty}^{\infty} X_1^*(\omega)X_2(\omega)\,d\omega \qquad (7.7\text{-}5)$$

where $*$ designates the complex conjugate. Substituting Eq. (7.7-5) into Eq. (7.7-4), while noting that $T\to\infty$, gives

$$\begin{Bmatrix} \overline{x_1^2} \\ \overline{x_2^2} \end{Bmatrix} = \mathrm{diag}\left[\lim_{T\to\infty}\frac{1}{2T}\,\frac{1}{2\pi}\int_{-T}^{T}\begin{bmatrix} X_1^*(\omega)X_1(\omega) & X_1^*(\omega)X_2(\omega) \\ X_2^*(\omega)X_1(\omega) & X_2^*(\omega)X_2(\omega) \end{bmatrix} d\omega\right] \qquad (7.7\text{-}6)$$

We can write the matrix within the integral as

$$\begin{bmatrix} X_1^*(\omega)X_1(\omega) & X_1^*(\omega)X_2(\omega) \\ X_2^*(\omega)X_1(\omega) & X_2^*(\omega)X_2(\omega) \end{bmatrix} = \begin{Bmatrix} X_1^*(\omega) \\ X_2^*(\omega) \end{Bmatrix}\begin{Bmatrix} X_1(\omega) \\ X_2(\omega) \end{Bmatrix}^T = \{X^*(\omega)\}\{X(\omega)\}^T \qquad (7.7\text{-}7)$$

Substituting Eq. (7.7-7) into Eq. (7.7-6), and noting that although we derived these equations for a two-degree-of-freedom system they are applicable to systems with any number of degrees of freedom, we obtain

$$\left\{\overline{x^2}\right\} = \mathrm{diag}\left[\lim_{T\to\infty}\frac{1}{2T}\,\frac{1}{2\pi}\int_{-T}^{T}\{X^*(\omega)\}\{X(\omega)\}^T\,d\omega\right] \qquad (7.7\text{-}8)$$

The sought-after solution will be available once we obtain $X_1(\omega)$, $X_2(\omega)$, and the corresponding complex conjugates $X_1^*(\omega)$ and $X_2^*(\omega)$, which will be available by inspection.



The matrix differential equation of motion for a multi-degree-of-freedom system in modal coordinates (see Section 7.1) is

$$[I]\{\ddot{q}(t)\} + [2\zeta\omega_n]\{\dot{q}(t)\} + \left[\omega_n^2\right]\{q(t)\} = [\phi]^T\{f(t)\} \qquad (7.7\text{-}9)$$

Note that the modes, $[\phi]$, are normalized such that $[\phi]^T[m][\phi] = [I]$, and that we used the transformation

$$\{x(t)\} = [\phi]\{q(t)\} \qquad (7.7\text{-}10)$$

to go from the $\{x(t)\}$ coordinate system to modal coordinates, $\{q(t)\}$. Also note that we are using the vector $\{x(t)\}$ to designate all coordinates used to describe the behavior of the system, not just those in the x-coordinate direction. In addition, recall that because of the orthogonality property of mode shapes, and the form of the damping, the matrices on the left-hand side of Eq. (7.7-9) are diagonal. Proceeding, we take the Fourier transform of the terms in Eq. (7.7-9) by multiplying each by $e^{-i\omega t}$ and then integrating with respect to time from $-\infty$ to $\infty$:

$$[I]\int_{-\infty}^{\infty}\{\ddot{q}(t)\}e^{-i\omega t}dt + [2\zeta\omega_n]\int_{-\infty}^{\infty}\{\dot{q}(t)\}e^{-i\omega t}dt + \left[\omega_n^2\right]\int_{-\infty}^{\infty}\{q(t)\}e^{-i\omega t}dt = [\phi]^T\int_{-\infty}^{\infty}\{f(t)\}e^{-i\omega t}dt \qquad (7.7\text{-}11)$$

Using our standard shorthand notation, Eq. (7.7-11) reduces to

$$[I]\{\ddot{Q}(\omega)\} + [2\zeta\omega_n]\{\dot{Q}(\omega)\} + \left[\omega_n^2\right]\{Q(\omega)\} = [\phi]^T\{F(\omega)\} \qquad (7.7\text{-}12)$$

From Chapter 3 we know that $\{\dot{Q}(\omega)\} = i\omega\{Q(\omega)\}$ and $\{\ddot{Q}(\omega)\} = -\omega^2\{Q(\omega)\}$. Substituting into Eq. (7.7-12) we obtain

$$\left(-\omega^2[I] + i\omega[2\zeta\omega_n] + \left[\omega_n^2\right]\right)\{Q(\omega)\} = \left[\omega_n^2 - \omega^2 + i[2\zeta\omega_n]\omega\right]\{Q(\omega)\} = [\phi]^T\{F(\omega)\} \qquad (7.7\text{-}13)$$

Solving for $\{Q(\omega)\}$ produces

$$\{Q(\omega)\} = \left[\omega_n^2 - \omega^2 + i[2\zeta\omega_n]\omega\right]^{-1}[\phi]^T\{F(\omega)\} = [H(\omega)][\phi]^T\{F(\omega)\} \qquad (7.7\text{-}14)$$

As previously discussed, the matrices in Eq. (7.7-12) are all diagonal due to the orthogonality property of mode shapes and the assumed form of damping. Therefore, $[H(\omega)]$ is a diagonal matrix, with diagonal terms $1\big/\left(\omega_{n,j}^2 - \omega^2 + i2\zeta_j\omega_{n,j}\omega\right)$. Recall that the matrix $[H(\omega)]$ is referred to as the system's frequency response (or admittance) function. Next, we take the Fourier transform of both sides of Eq. (7.7-10):

$$\{X(\omega)\} = [\phi]\{Q(\omega)\} \qquad (7.7\text{-}15)$$

Substituting Eq. (7.7-14) into Eq. (7.7-15) yields

$$\{X(\omega)\} = [\phi][H(\omega)][\phi]^T\{F(\omega)\} \qquad (7.7\text{-}16)$$

Noting that $[H(\omega)]$ and $\{F(\omega)\}$ are complex and, therefore, that their complex conjugates (designated by $*$) can be established by reversing the sign of the imaginary portion of the complex numbers, we can write

$$\{X^*(\omega)\} = [\phi][H^*(\omega)][\phi]^T\{F^*(\omega)\} \qquad (7.7\text{-}17)$$

Substituting Eqs. (7.7-16) and (7.7-17) into Eq. (7.7-7), while recalling that $[H(\omega)]$ is a diagonal matrix and, therefore, $[H(\omega)]^T = [H(\omega)]$, yields

$$\begin{aligned} \{X^*(\omega)\}\{X(\omega)\}^T &= [\phi][H^*(\omega)][\phi]^T\{F^*(\omega)\}\left([\phi][H(\omega)][\phi]^T\{F(\omega)\}\right)^T \\ &= [\phi][H^*(\omega)][\phi]^T\{F^*(\omega)\}\{F(\omega)\}^T[\phi][H(\omega)][\phi]^T \end{aligned} \qquad (7.7\text{-}18)$$

Finally, by substituting Eq. (7.7-18) into Eq. (7.7-8) we obtain

$$\left\{\overline{x^2}\right\} = \mathrm{diag}\left[\lim_{T\to\infty}\frac{1}{2T}\,\frac{1}{2\pi}\int_{-T}^{T}[\phi][H^*(\omega)][\phi]^T\{F^*(\omega)\}\{F(\omega)\}^T[\phi][H(\omega)][\phi]^T\,d\omega\right] \qquad (7.7\text{-}19)$$

We can change the limits of integration to run from zero to $\infty$ by doubling the integral. For physical systems, the innermost product in Eq. (7.7-19), divided by $T$, will be bounded. Therefore, moving the limit operation inside the integral gives

$$\left\{\overline{x^2}\right\} = \mathrm{diag}\left[\frac{1}{2\pi}\int_{0}^{\infty}[\phi][H^*(\omega)][\phi]^T\left(\lim_{T\to\infty}\frac{1}{T}\{F^*(\omega)\}\{F(\omega)\}^T\right)[\phi][H(\omega)][\phi]^T\,d\omega\right] \qquad (7.7\text{-}20)$$


Expanding the innermost matrix for our two-degree-of-freedom example problem yields

$$\lim_{T\to\infty}\frac{1}{T}\{F^*(\omega)\}\{F(\omega)\}^T = \begin{bmatrix} \displaystyle\lim_{T\to\infty}\frac{F_1^*(\omega)F_1(\omega)}{T} & \displaystyle\lim_{T\to\infty}\frac{F_1^*(\omega)F_2(\omega)}{T} \\ \displaystyle\lim_{T\to\infty}\frac{F_2^*(\omega)F_1(\omega)}{T} & \displaystyle\lim_{T\to\infty}\frac{F_2^*(\omega)F_2(\omega)}{T} \end{bmatrix} = \begin{bmatrix} G_{f_1f_1}(\omega) & G_{f_1f_2}(\omega) \\ G_{f_2f_1}(\omega) & G_{f_2f_2}(\omega) \end{bmatrix} = [G_f(\omega)] \qquad (7.7\text{-}21)$$

We recognize the diagonal terms in $[G_f(\omega)]$ as the Power Spectral Density functions of the two forces. The off-diagonal terms are referred to as the cross Power Spectral Density functions and, whereas the Power Spectral Densities will be real-valued functions, the cross Power Spectral Density functions will be complex. The cross-spectra provide the degree of correlation between the applied forces. If the forces are uncorrelated, these terms will be zero. If the forces are fully correlated, that is, they are identical, then these terms will be real-valued and equal to the Power Spectral Density functions. Substituting Eq. (7.7-21) into Eq. (7.7-20), and recognizing that the solution is applicable to any number of degrees of freedom and applied forces, we obtain the sought-after solution:

$$\left\{\overline{x^2}\right\} = \mathrm{diag}\left[\frac{1}{2\pi}\int_{0}^{\infty}[\phi][H^*(\omega)][\phi]^T[G_f(\omega)][\phi][H(\omega)][\phi]^T\,d\omega\right] \qquad (7.7\text{-}22)$$

If we wish to work in units of hertz rather than radians/second, we can define a change of variable $\omega = 2\pi f$ using the standard relationship between frequency, $f$, which has units of cycles/second or hertz, and circular frequency, $\omega$, which has units of radians/second. Differentiating with respect to $f$ yields $d\omega = 2\pi\,df$. Substituting into Eq. (7.7-22) yields

$$\left\{\overline{x^2}\right\} = \mathrm{diag}\left[\int_{0}^{\infty}[\phi][H^*(f)][\phi]^T[G(f)][\phi][H(f)][\phi]^T\,df\right] \qquad (7.7\text{-}23)$$
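Eq. (7.7-22) can be evaluated numerically. The sketch below assumes a two-mass chain with uncorrelated white-noise forces and a single damping ratio applied to all modes; all numeric values are illustrative:

```python
import numpy as np

def mean_square_disp(M, K, zeta, Gf, n_per_band=4, span=8.0):
    """Numerically evaluate Eq. (7.7-22): mean square displacements for mass
    matrix M (assumed a scaled identity here), stiffness K, one damping ratio
    zeta for all modes, and a force PSD matrix function Gf(w)."""
    w2, phi = np.linalg.eigh(np.linalg.solve(M, K))
    for j in range(phi.shape[1]):
        phi[:, j] /= np.sqrt(phi[:, j] @ M @ phi[:, j])   # unit modal mass
    wn = np.sqrt(w2)
    dw = 2.0 * zeta * wn.min() / n_per_band   # four lines per half-power band
    msq = np.zeros(M.shape[0])
    for w in np.arange(dw / 2.0, span * wn.max(), dw):
        H = np.diag(1.0 / (wn**2 - w**2 + 2j * zeta * wn * w))
        A = phi @ H @ phi.T                   # [phi][H][phi]^T (symmetric)
        msq += np.real(np.diag(A.conj() @ Gf(w) @ A)) * dw
    return msq / (2.0 * np.pi)

# Two masses, uncorrelated unit white-noise forces at both (assumed values)
M = np.eye(2)
K = np.array([[2.0, -1.0], [-1.0, 1.0]])
msq = mean_square_disp(M, K, 0.02, lambda w: np.eye(2))
print(np.sqrt(msq))    # response standard deviations (zero-mean excitation)
```

Collapsed to one degree of freedom with a white force PSD, the same routine reproduces the classical closed-form result $G_0/(8 m^2 \zeta \omega_n^3)$, which provides a convenient check.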


where the change of variable has also been introduced into the quantities defined by the matrices. One way to envision Eq. (7.7-22) is to think of the matrices, for any given value of $\omega$, as customary two-dimensional objects as defined above. However, since they are functions of $\omega$, there will be as many of these matrix products as there are values of $\omega$. The integral in Eq. (7.7-22) implies infinite resolution in the values of $\omega$. In practice, however, Eq. (7.7-22) must be solved numerically. Therefore, one selects a value of $\omega = \omega_1$, starting sufficiently below the frequency of the lowest elastic mode, and computes the matrix products indicated in the equation. This then corresponds to the response values at $\omega_1$. The value of $\omega$ is then increased by a small amount, $\Delta\omega_j$, and the process repeated until one is sufficiently past the highest natural frequency in the model, which should correspond to where the forcing function energy content becomes negligible. Note that $\Delta\omega_j$ does not have to be a constant; this will be discussed below. The values obtained for each $\omega_j$ are the spectral lines of the Power and cross Power Spectral Density functions of each response quantity. The integral, which computes the areas under these functions, must then be computed numerically to obtain the desired results.

We indicated above that Eq. (7.7-22) had to be solved at discrete values of $\omega$. So, how small must the increment $\Delta\omega_j$ be? Experience with numerous lightly damped systems indicates that four spectral lines between the half-power points of the modal response function should be adequate for most situations. We showed in Chapter 4 that the critical damping ratio, $\zeta$, can be computed with the half-power points of the total response function as

$$\zeta = \frac{1}{2}\frac{\Delta\omega}{\omega_n}$$

where $\Delta\omega$ is the frequency separation between the half-power points. Therefore, knowing the critical damping ratio and the natural frequency of a mode, we can compute the frequency separation between the half-power points as

$$\Delta\omega = 2\zeta\omega_n \qquad (7.7\text{-}24)$$

Taking one-fourth provides the frequency increment to be used, i.e.,

$$\Delta\omega_j = \frac{\zeta\omega_n}{2} \qquad (7.7\text{-}25)$$
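In code, the constant-increment choice reduces to taking the smallest of the per-mode increments of Eq. (7.7-25); the modal frequencies and damping values below are assumed for illustration:

```python
import numpy as np

# Frequency increments per Eq. (7.7-25): four spectral lines between the
# half-power points of each mode (all values below are assumed).
fn   = np.array([12.0, 47.0, 110.0])        # natural frequencies [Hz]
zeta = np.array([0.01, 0.02, 0.05])         # critical damping ratios

wn = 2.0 * np.pi * fn
dw_per_mode = zeta * wn / 2.0               # Eq. (7.7-25) for each mode
dw = dw_per_mode.min()                      # constant-step choice: smallest
print(dw_per_mode, dw)                      # lightly damped low mode governs
```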



Note that this increment is a function of the damping and frequency of each mode; therefore, the smallest increment associated with any mode should be used if one wishes to use a constant increment. If higher accuracy is desired, then the increment can be reduced, or a variable step can be incorporated such that the peaks in the Power Spectral Density functions are always included. Another item to note is that the displacement frequency response function defined in Eq. (7.7-14) cannot be used with rigid body modes when $\omega = 0$. Recall that the displacement frequency response function for a mode is

$$\frac{1}{\omega_{n,j}^2 - \omega^2 + i2\zeta_j\omega_{n,j}\omega} \qquad (7.7\text{-}26)$$

Since $\omega_n = 0$ for rigid body modes, the quantity in Eq. (7.7-26) does not exist at $\omega = 0$. Furthermore, we know that rigid body modes do not contribute to the elastic displacements of a system. Therefore, excluding the rigid body modes from the displacement calculations will not introduce errors when loads and relative/internal displacement differences are sought. This will not be the case for the acceleration response, which we address next.

7.7.1.1 Acceleration response

To compute acceleration responses we start by differentiating Eq. (7.7-10) twice with respect to time:

$$\{\ddot{x}(t)\} = [\phi]\{\ddot{q}(t)\} \qquad (7.7\text{-}27)$$

Taking the Fourier transform of each side yields

$$\{\ddot{X}(\omega)\} = [\phi]\{\ddot{Q}(\omega)\} \qquad (7.7\text{-}28)$$

Next, starting with Eq. (7.7-13), and recalling that $\{\ddot{Q}(\omega)\} = -\omega^2\{Q(\omega)\}$, we obtain

$$-\frac{1}{\omega^2}\left[\omega_n^2 - \omega^2 + i[2\zeta\omega_n]\omega\right]\{\ddot{Q}(\omega)\} = [\phi]^T\{F(\omega)\} \qquad (7.7\text{-}29)$$

Solving for $\{\ddot{Q}(\omega)\}$ produces

$$\{\ddot{Q}(\omega)\} = -\omega^2\left[\omega_n^2 - \omega^2 + i[2\zeta\omega_n]\omega\right]^{-1}[\phi]^T\{F(\omega)\} = [H_{\ddot{x}}(\omega)][\phi]^T\{F(\omega)\} \qquad (7.7\text{-}30)$$


where the diagonal terms of the diagonal matrix $[H_{\ddot{x}}(\omega)]$ are as follows:

$$H_{\ddot{x},jj}(\omega) = \frac{-\omega^2}{\omega_{n,j}^2 - \omega^2 + i2\zeta_j\omega_{n,j}\omega} \qquad (7.7\text{-}31)$$

Substituting Eq. (7.7-30) into Eq. (7.7-28) yields

$$\{\ddot{X}(\omega)\} = [\phi][H_{\ddot{x}}(\omega)][\phi]^T\{F(\omega)\} \qquad (7.7\text{-}32)$$

and its corresponding complex conjugate,

$$\{\ddot{X}^*(\omega)\} = [\phi][H_{\ddot{x}}^*(\omega)][\phi]^T\{F^*(\omega)\} \qquad (7.7\text{-}33)$$

Following the same steps as in the previous section for the displacement mean square response, we obtain the acceleration mean square response:

$$\begin{aligned} \left\{\overline{\ddot{x}^2}\right\} &= \mathrm{diag}\left[\lim_{T\to\infty}\frac{1}{2T}\,\frac{1}{2\pi}\int_{-\infty}^{\infty}\{\ddot{X}^*(\omega)\}\{\ddot{X}(\omega)\}^T\,d\omega\right] \\ &= \mathrm{diag}\left[\frac{1}{2\pi}\int_{0}^{b\omega_N}[\phi][H_{\ddot{x}}^*(\omega)][\phi]^T\left(\lim_{T\to\infty}\frac{1}{T}\{F^*(\omega)\}\{F(\omega)\}^T\right)[\phi][H_{\ddot{x}}(\omega)][\phi]^T\,d\omega\right] \end{aligned} \qquad (7.7\text{-}34)$$

The first item to note is the change in the limits of integration. The change in the lower limit results in the introduction of a factor of two. The upper limit was changed from ∞ to buN , where b is a constant and uN is the frequency of the highest mode that needs to be considered. As can be ascertained from Eq. (7.7-31), as the frequency of excitation, u, increases the frequency response function approaches a constant nonzero value and, hence, the response will continue to increase so long as the excitation has energy at ever-increasing frequencies. Therefore, the mean square response becomes unbounded as the frequency limit increases to infinity. Since physical excitation sources are limited in their frequency content, past a certain frequency the forcing function will contain very little, if any, energy. Thus, the corresponding Power Spectral Density functions will drop to a negligible level at a finite frequency. Accordingly, the upper limit on the integral used to compute the mean square response does not have to be infinity, but the frequency after which the excitation energy becomes negligible and, thus, its contribution to the integral can be ignored.
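The limiting behavior of the acceleration frequency response function of Eq. (7.7-31) can be checked directly; it approaches unit magnitude at high frequency (the flat tail discussed above), and it equals one for a rigid body mode ($\omega_n = 0$). The frequency values below are assumed:

```python
import numpy as np

# Behavior of the acceleration FRF, Eq. (7.7-31): a flat high-frequency tail,
# which is why a white-noise acceleration mean square would be unbounded
# without a finite upper frequency limit. Values are assumed for illustration.
def H_acc(w, wn, zeta):
    return -w**2 / (wn**2 - w**2 + 2j * zeta * wn * w)

wn, zeta = 2.0 * np.pi * 20.0, 0.02
print(abs(H_acc(10.0 * wn, wn, zeta)))   # ~1: flat tail well above resonance
print(H_acc(5.0, 0.0, zeta))             # rigid body mode (wn = 0): exactly 1
```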


Finally, noting that the innermost matrix in Eq. (7.7-34) is as defined by Eq. (7.7-21), we obtain the sought-after solution:

$$\left\{\overline{\ddot{x}^2}\right\} = \mathrm{diag}\left[\frac{1}{2\pi}\int_{0}^{b\omega_N}[\phi][H_{\ddot{x}}^*(\omega)][\phi]^T[G_f(\omega)][\phi][H_{\ddot{x}}(\omega)][\phi]^T\,d\omega\right] \qquad (7.7\text{-}35)$$

In the previous section, we noted that the displacement response could not be computed for rigid body modes at $\omega = 0$, and that rigid body displacements did not contribute to the elastic distortion or internal loads of a system. What about the acceleration response? From Eq. (7.7-31) we note that for $\omega_n = 0$, $H_{\ddot{x}}(\omega) = 1$ for all values of $\omega$. Therefore, an acceleration response does exist at $\omega = 0$, and from Eq. (7.7-32) we can compute the acceleration response due to the rigid body modes:

$$\left\{\ddot{X}(\omega)\right\}_r = [\phi_r][I][\phi_r]^T\{F(\omega)\} \qquad (7.7\text{-}36)$$

Substituting into Eq. (7.7-34) we obtain the rigid body contribution to the mean square value:

$$\left\{\overline{\ddot{x}^2}\right\}_r = \mathrm{diag}\left[\frac{1}{2\pi}\int_{0}^{b\omega_N}[\phi_r][I][\phi_r]^T[G_f(\omega)][\phi_r][I][\phi_r]^T\,d\omega\right] \qquad (7.7\text{-}37)$$

Conversely, Eq. (7.7-35) can be used for systems with both rigid body and elastic modes. As a final note, inherent in the above derivation was the assumption that we have a zero-mean process. This requires that the forcing functions have a zero mean, which in turn requires that they be high-pass filtered before their Power Spectral and cross Power Spectral Densities are computed.

7.7.1.2 Loads computation

For subsystems that have statically determinate interfaces to the rest of the system, internal loads will be proportional to the accelerations of the subsystem masses. This will be discussed in considerable detail in Volume II. For the current discussion, however, we will define internal loads, $\{L(t)\}$, as

$$\{L(t)\} = [LTM_{\ddot{x}}]\{\ddot{x}(t)\} \qquad (7.7\text{-}38)$$

where the matrix $[LTM_{\ddot{x}}]$ is referred to as an acceleration-based loads transformation matrix; it relates internal loads to system accelerations


(Volume II has considerable discussion of load transformation matrices). Substituting Eq. (7.7-27) and taking the Fourier transform of both sides yields

$$\{L(\omega)\} = [LTM_{\ddot{x}}][\phi]\{\ddot{Q}(\omega)\} \qquad (7.7\text{-}39)$$

Following the same procedure as discussed above, the mean square loads response is

$$\left\{\overline{L^2}\right\} = \mathrm{diag}\left[\lim_{T\to\infty}\frac{1}{2T}\,\frac{1}{2\pi}\int_{-\infty}^{\infty}\{L^*(\omega)\}\{L(\omega)\}^T\,d\omega\right] \qquad (7.7\text{-}40)$$

Substituting Eq. (7.7-39) and its complex conjugate produces

$$\left\{\overline{L^2}\right\} = \mathrm{diag}\left[\lim_{T\to\infty}\frac{1}{2T}\,\frac{1}{2\pi}\int_{-\infty}^{\infty}[LTM_{\ddot{x}}][\phi]\{\ddot{Q}^*(\omega)\}\{\ddot{Q}(\omega)\}^T[\phi]^T[LTM_{\ddot{x}}]^T\,d\omega\right] \qquad (7.7\text{-}41)$$

Substituting Eq. (7.7-30) and its complex conjugate, and Eq. (7.7-21), modified to an arbitrary number of coordinates, produces the desired result,

$$\left\{\overline{L^2}\right\} = \mathrm{diag}\left[\frac{1}{2\pi}\int_{0}^{b\omega_N}[LTM_{\ddot{x}}][\phi][H_{\ddot{x}}^*(\omega)][\phi]^T[G_f(\omega)][\phi][H_{\ddot{x}}(\omega)][\phi]^T[LTM_{\ddot{x}}]^T\,d\omega\right] \qquad (7.7\text{-}42)$$

Note that since we are computing loads with the acceleration response, we changed the limits on the integral to reflect the reality that past a certain frequency, $b\omega_N$, the excitation energy will be negligible, and for practical purposes a converged solution is achieved. If loads are defined in terms of displacement-proportional equations, i.e.,

$$\{L(t)\} = [LTM]\{x(t)\} \qquad (7.7\text{-}43)$$

then

$$\left\{\overline{L^2}\right\} = \mathrm{diag}\left[\frac{1}{2\pi}\int_{0}^{b\omega_N}[LTM][\phi][H^*(\omega)][\phi]^T[G_f(\omega)][\phi][H(\omega)][\phi]^T[LTM]^T\,d\omega\right] \qquad (7.7\text{-}44)$$
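A sketch of the displacement-based loads computation of Eqs. (7.7-43)-(7.7-44) follows; the spring values, damping, PSD level, and load transformation matrix are all assumed for illustration:

```python
import numpy as np

def mean_square_loads(M, K, zeta, Gf, LTM, n_per_band=4, span=8.0):
    """Numerically evaluate Eq. (7.7-44): mean square internal loads from a
    displacement-based load transformation matrix (M assumed a scaled
    identity; one damping ratio zeta for all modes)."""
    w2, phi = np.linalg.eigh(np.linalg.solve(M, K))
    for j in range(phi.shape[1]):
        phi[:, j] /= np.sqrt(phi[:, j] @ M @ phi[:, j])   # unit modal mass
    wn = np.sqrt(w2)
    dw = 2.0 * zeta * wn.min() / n_per_band
    msq = np.zeros(LTM.shape[0])
    for w in np.arange(dw / 2.0, span * wn.max(), dw):
        H = np.diag(1.0 / (wn**2 - w**2 + 2j * zeta * wn * w))
        B = LTM @ phi @ H @ phi.T             # [LTM][phi][H][phi]^T
        msq += np.real(np.diag(B.conj() @ Gf(w) @ B.T)) * dw
    return msq / (2.0 * np.pi)

# Two-mass chain; the loads are the two spring forces (assumed values)
k = 1.0e2
M = np.eye(2)
K = k * np.array([[2.0, -1.0], [-1.0, 1.0]])
LTM = k * np.array([[ 1.0, 0.0],     # base spring force   k*x1
                    [-1.0, 1.0]])    # middle spring force k*(x2 - x1)
print(mean_square_loads(M, K, 0.02, lambda w: np.eye(2), LTM))
```

For a single degree of freedom with $L = kx$, the routine reproduces $k^2$ times the white-noise displacement mean square, which provides a check on the implementation.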


7.7.1.3 Implementation

From the previous three sections it should be apparent that the computation of the response to random excitation involves computing a matrix product of the form

$$[LTM_{\ddot{x}}][\phi][H_{\ddot{x}}^*(\omega)][\phi]^T[G_f(\omega)][\phi][H_{\ddot{x}}(\omega)][\phi]^T[LTM_{\ddot{x}}]^T \qquad (7.7\text{-}45)$$

If we seek the acceleration response only, we set $[LTM_{\ddot{x}}] = [I]$, and if we seek displacement responses, we use the displacement frequency response function $[H(\omega)]$ instead of the acceleration frequency response function $[H_{\ddot{x}}(\omega)]$. For illustration purposes we will solve for the acceleration response. From Eq. (7.7-31) we note that the acceleration frequency response function is a complex quantity, where a typical term can be expressed as

$$H_{\ddot{x},jj}(\omega) = \frac{-\omega^2}{\omega_{n,j}^2 - \omega^2 + i2\zeta_j\omega_{n,j}\omega}\cdot\frac{\omega_{n,j}^2 - \omega^2 - i2\zeta_j\omega_{n,j}\omega}{\omega_{n,j}^2 - \omega^2 - i2\zeta_j\omega_{n,j}\omega} = \frac{-\omega^2\left(\omega_{n,j}^2 - \omega^2\right)}{\left(\omega_{n,j}^2 - \omega^2\right)^2 + \left(2\zeta_j\omega_{n,j}\omega\right)^2} + i\,\frac{\omega^2\left(2\zeta_j\omega_{n,j}\omega\right)}{\left(\omega_{n,j}^2 - \omega^2\right)^2 + \left(2\zeta_j\omega_{n,j}\omega\right)^2} \qquad (7.7\text{-}46)$$

We recognize the real part of Eq. (7.7-46) as the coincident component, $Co_{\ddot{x}}$, of the response, and the imaginary part as the quadrature component, $Qd_{\ddot{x}}$; note that in Chapter 2 we had normalized these quantities such that they were functions of $\lambda$, where $\lambda = \omega/\omega_n$. For this discussion we will retain the form in Eq. (7.7-46). Therefore, we can write

$$[H_{\ddot{x}}(\omega)] = [Co_{\ddot{x}}(\omega)] + i[Qd_{\ddot{x}}(\omega)] \qquad (7.7\text{-}47)$$

and

$$[H_{\ddot{x}}^*(\omega)] = [Co_{\ddot{x}}(\omega)] - i[Qd_{\ddot{x}}(\omega)] \qquad (7.7\text{-}48)$$

Recall that the diagonal elements of $[G_f(\omega)]$ are the Power Spectral Density functions, which are real quantities, whereas the off-diagonal elements are the cross Power Spectral Densities, which are complex quantities,


unless there is full correlation. Therefore, we can separate the elements of $[G_f(\omega)]$ as follows:

$$[G_f(\omega)] = [G_f(\omega)]_R + i[G_f(\omega)]_I \qquad (7.7\text{-}49)$$

where $[G_f(\omega)]_R$ contains the real-valued Power Spectral Density values on the diagonal and the real parts of the cross Power Spectral Density values in the off-diagonal elements; $i[G_f(\omega)]_I$ contains the imaginary portions of the cross Power Spectral Densities as its off-diagonal terms, and its diagonal elements are zero. Substituting Eqs. (7.7-47)-(7.7-49) into Eq. (7.7-45), with $[LTM_{\ddot{x}}] = [I]$, we obtain

$$[\phi]\left([Co_{\ddot{x}}(\omega)] - i[Qd_{\ddot{x}}(\omega)]\right)[\phi]^T\left([G_f(\omega)]_R + i[G_f(\omega)]_I\right)[\phi]\left([Co_{\ddot{x}}(\omega)] + i[Qd_{\ddot{x}}(\omega)]\right)[\phi]^T \qquad (7.7\text{-}50)$$

Performing the indicated multiplications yields

$$\left([R][G_f(\omega)]_R[R] + [T][G_f(\omega)]_I[R] - [R][G_f(\omega)]_I[T] + [T][G_f(\omega)]_R[T]\right) + i\left([R][G_f(\omega)]_I[R] - [T][G_f(\omega)]_R[R] + [R][G_f(\omega)]_R[T] + [T][G_f(\omega)]_I[T]\right) \qquad (7.7\text{-}51)$$

where $[R] = [\phi][Co_{\ddot{x}}(\omega)][\phi]^T$ and $[T] = [\phi][Qd_{\ddot{x}}(\omega)][\phi]^T$; note that both $[R]$ and $[T]$ are functions of $\omega$. Let $[U(\omega)]$ and $[V(\omega)]$ represent the real and imaginary parts of Eq. (7.7-51); then substituting into Eq. (7.7-35) produces

$$\left\{\overline{\ddot{x}^2}\right\} = \mathrm{diag}\left[\frac{1}{2\pi}\int_{0}^{b\omega_N}[U(\omega)]\,d\omega + i\,\frac{1}{2\pi}\int_{0}^{b\omega_N}[V(\omega)]\,d\omega\right] \qquad (7.7\text{-}52)$$

The diagonal elements of $[U(\omega)]$ are the Power Spectral Densities of the acceleration response. The off-diagonal terms of $[U(\omega)]$ and $[V(\omega)]$ are the real and imaginary parts of the cross Power Spectral Density functions, respectively. The diagonal terms of $[V(\omega)]$ will be zero. Therefore, we can obtain the desired mean square values from

$$\left\{\overline{\ddot{x}^2}\right\} = \mathrm{diag}\left[\frac{1}{2\pi}\int_{0}^{b\omega_N}[U(\omega)]\,d\omega\right] \qquad (7.7\text{-}53)$$

The above is consistent with the results presented in 1990 by Broussinos and Kabe (Broussinos and Kabe, 1990).
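The coincident/quadrature split of Eqs. (7.7-46)-(7.7-48) can be verified numerically; the oscillator values below are assumed:

```python
import numpy as np

# Check that the acceleration FRF equals its coincident (real) component
# plus i times its quadrature (imaginary) component, per Eq. (7.7-46).
wn, zeta = 2.0 * np.pi * 10.0, 0.05       # assumed frequency and damping
w = np.linspace(0.1, 5.0 * wn, 1000)

den = (wn**2 - w**2)**2 + (2.0 * zeta * wn * w)**2
Co = -w**2 * (wn**2 - w**2) / den          # coincident component
Qd = w**2 * (2.0 * zeta * wn * w) / den    # quadrature component

H = -w**2 / (wn**2 - w**2 + 2j * zeta * wn * w)
print(np.allclose(Co + 1j * Qd, H))        # True
```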



As a check, we will collapse Eq. (7.7-53) to that of a single-degree-of-freedom system and compare the result to our previously derived solution in Chapter 5, Section 5.5. Since we are dealing with a system with a single force, $[G_f(\omega)]_R$ becomes a one-by-one matrix with the Power Spectral Density of the excitation, $G_{ff}(\omega)$, as its only element. Likewise, since there is only one force, the cross Spectral Densities by definition will be zero and, therefore, the $[G_f(\omega)]_I$-proportional terms will be zero. In addition, since we are dealing with a single-degree-of-freedom system, the other matrices in Eq. (7.7-51) become scalars. This then leaves us with

$$U(\omega) = \phi\,Co_{\ddot{x}}(\omega)\,\phi^T G_{ff}(\omega)\,\phi\,Co_{\ddot{x}}(\omega)\,\phi^T + \phi\,Qd_{\ddot{x}}(\omega)\,\phi^T G_{ff}(\omega)\,\phi\,Qd_{\ddot{x}}(\omega)\,\phi^T \qquad (7.7\text{-}54)$$

First we note that the transpose of a scalar is a scalar, so $\phi^T = \phi$. Second, we normalize the mode shape coefficient, $\phi$, to be consistent with the derivation of the equations. Recall that we normalize mode shapes such that $[\phi]^T[m][\phi] = [I]$. Therefore, $\phi m \phi$ must equal one, where $m$ is the mass of our single-degree-of-freedom system, and this leads to $\phi = 1/\sqrt{m}$. Substituting the normalized mode shape coefficient, and the expressions for $Qd_{\ddot{x}}(\omega)$ and $Co_{\ddot{x}}(\omega)$, into Eq. (7.7-54) yields

$$U(\omega) = \frac{1}{m^2}G_{ff}(\omega)\left\{Co_{\ddot{x}}^2(\omega) + Qd_{\ddot{x}}^2(\omega)\right\} = \frac{1}{m^2}G_{ff}(\omega)\frac{\omega^4}{\left(\omega_n^2 - \omega^2\right)^2 + \left(2\zeta\omega_n\omega\right)^2} \qquad (7.7\text{-}55)$$

Substituting into Eq. (7.7-53) produces the sought-after result, which for a constant Power Spectral Density function is the same as Eq. 5.5-61 in Chapter 5, i.e.,

$$\overline{\ddot{x}^2} = \frac{1}{m^2}\,\frac{1}{2\pi}\int_{0}^{b\omega_n}G_{ff}(\omega)\frac{\omega^4}{\left(\omega_n^2 - \omega^2\right)^2 + \left(2\zeta\omega_n\omega\right)^2}\,d\omega \qquad (7.7\text{-}56)$$

7.7.2 Base excitation

The matrix equation of motion of a system driven at its base was derived in Section 7.6. To facilitate the discussion we will repeat the equation here:

$$[I]_{NN}\{\ddot{q}_e(t)\} + [2\zeta\omega_n]_{NN}\{\dot{q}_e(t)\} + \left[\omega_n^2\right]_{NN}\{q_e(t)\} = -[\phi]^T[m]_{NN}[\phi_C]\{\ddot{w}(t)\}_I = -[\Gamma]\{\ddot{w}(t)\}_I \qquad (7.7\text{-}57)$$


where we have assumed that there is no mass coupling between the interface and noninterface coordinates. The vector $\{\ddot{w}(t)\}_I$ contains the prescribed base accelerations, the subscript $I$ designates the interface coordinates where the motion is prescribed, and the subscript $N$ designates the noninterface coordinates whose response we seek. Eq. (7.7-57) was obtained by transforming the system from physical coordinates, $\{w(t)\}$, to a mixed set of modal and physical coordinates:

$$\begin{Bmatrix} \{w(t)\}_N \\ \{w(t)\}_I \end{Bmatrix} = \begin{bmatrix} [\phi] & [\phi_c] \\ [0] & [I] \end{bmatrix} \begin{Bmatrix} \{q_e(t)\} \\ \{w(t)\}_I \end{Bmatrix} \qquad (7.7\text{-}58)$$

Note that the modal coordinates define motion relative to a fixed interface. The base-fixed mode shapes are normalized such that $[\phi]^T[m]_{NN}[\phi] = [I]$, and $[\Gamma]$ contains the mode participation factors. The absolute accelerations of the noninterface coordinates are given by $\{\ddot{w}(t)\}_N = [\phi]\{\ddot{q}_e(t)\} + [\phi_c]\{\ddot{w}(t)\}_I$, where the columns of $[\phi_c]$ are the constraint modes (see Volume II) and the vector $\{\ddot{q}_e(t)\}$ defines accelerations relative to the fixed interface. Note that for a determinate interface the constraint modes become the rigid body vectors referenced to the base of the system. By differentiating Eq. (7.7-58) twice with respect to time we can compute the noninterface acceleration response, $\{\ddot{w}(t)\}_N$:

$$\{\ddot{w}(t)\}_N = [\phi]\{\ddot{q}_e(t)\} + [\phi_c]\{\ddot{w}(t)\}_I = \begin{bmatrix} [\phi] & [\phi_c] \end{bmatrix} \begin{Bmatrix} \{\ddot{q}_e(t)\} \\ \{\ddot{w}(t)\}_I \end{Bmatrix} = [\phi_a]\{\ddot{u}(t)\} \qquad (7.7\text{-}59)$$

Next, we augment Eq. (7.7-57) with the identity $[I]\{\ddot{w}(t)\}_I = [I]\{\ddot{w}(t)\}_I$ to incorporate the coordinate vector defined in Eq. (7.7-59):

$$\begin{bmatrix} [I]_{NN} & [0] \\ [0] & [I] \end{bmatrix} \begin{Bmatrix} \{\ddot{q}_e(t)\} \\ \{\ddot{w}(t)\}_I \end{Bmatrix} + \begin{bmatrix} [2\zeta\omega_n]_{NN} & [0] \\ [0] & [0] \end{bmatrix} \begin{Bmatrix} \{\dot{q}_e(t)\} \\ \{\dot{w}(t)\}_I \end{Bmatrix} + \begin{bmatrix} \left[\omega_n^2\right]_{NN} & [0] \\ [0] & [0] \end{bmatrix} \begin{Bmatrix} \{q_e(t)\} \\ \{w(t)\}_I \end{Bmatrix} = \begin{bmatrix} -[\Gamma] \\ [I] \end{bmatrix} \{\ddot{w}(t)\}_I \qquad (7.7\text{-}60)$$

We can write Eq. (7.7-60) as

$$[I]_a\{\ddot{u}(t)\} + [2\zeta\omega_n]_a\{\dot{u}(t)\} + \left[\omega_n^2\right]_a\{u(t)\} = [\Gamma]_a\{\ddot{w}(t)\}_I \qquad (7.7\text{-}61)$$


The subscript $a$ indicates that the matrices have been augmented as in Eq. (7.7-60). Computing the Fourier transform of each term in Eq. (7.7-61), and noting that $\{\dot{U}(\omega)\} = i\omega\{U(\omega)\}$ and $\{\ddot{U}(\omega)\} = -\omega^2\{U(\omega)\}$, we obtain the acceleration response,

$$\{\ddot{U}(\omega)\} = [H_{\ddot{w}}(\omega)]_a[\Gamma]_a\{\ddot{W}(\omega)\}_I \qquad (7.7\text{-}62)$$

where

$$[H_{\ddot{w}}(\omega)]_a = \begin{bmatrix} [H_{\ddot{w}}(\omega)] & [0] \\ [0] & [I] \end{bmatrix} \qquad (7.7\text{-}63)$$

and the diagonal elements of the diagonal matrix $[H_{\ddot{w}}(\omega)]$ are

$$H_{\ddot{w},jj}(\omega) = \frac{-\omega^2}{\omega_{n,j}^2 - \omega^2 + i2\zeta_j\omega_{n,j}\omega} \qquad (7.7\text{-}64)$$

Taking the Fourier transform of Eq. (7.7-59) we obtain

$$\left\{\ddot{W}(\omega)\right\}_N = [\phi_a]\left\{\ddot{U}(\omega)\right\} \qquad (7.7\text{-}65)$$

Substituting Eq. (7.7-62) yields

$$\left\{\ddot{W}(\omega)\right\}_N = [\phi_a][H_{\ddot{w}}(\omega)]_a[\Gamma]_a\left\{\ddot{W}(\omega)\right\}_I \qquad (7.7\text{-}66)$$

Proceeding as discussed in previous sections, the mean square of the acceleration response is computed as

$$\begin{aligned} \left\{\overline{\ddot{w}^2}\right\}_N &= \mathrm{diag}\left[\lim_{T\to\infty}\frac{1}{2T}\,\frac{1}{2\pi}\int_{-\infty}^{\infty}\left\{\ddot{W}^*(\omega)\right\}_N\left\{\ddot{W}(\omega)\right\}_N^T\,d\omega\right] \\ &= \mathrm{diag}\left[\frac{1}{2\pi}\int_{0}^{b\omega_N}[\phi_a]\left[H_{\ddot{w}}^*(\omega)\right]_a[\Gamma]_a\left[G_{\ddot{w}_I}(\omega)\right][\Gamma]_a^T[H_{\ddot{w}}(\omega)]_a[\phi_a]^T\,d\omega\right] \end{aligned} \qquad (7.7\text{-}67)$$

where

$$\left[G_{\ddot{w}_I}(\omega)\right] = \lim_{T\to\infty}\frac{1}{T}\left\{\ddot{W}^*(\omega)\right\}_I\left\{\ddot{W}(\omega)\right\}_I^T \qquad (7.7\text{-}68)$$

7.7 Random response analysis

As discussed, the diagonal terms of $[G_{\ddot{w}_I}(\omega)]$ are the Power Spectral Density functions and the off-diagonal terms are the cross Power Spectral Density functions of the prescribed interface accelerations. And as noted, physical excitation sources are limited in their frequency content, and past a certain frequency the forcing function will not contain energy. In other words, the corresponding Power Spectral Density will drop to a negligible level at a finite frequency. Accordingly, the upper limit on the integral used to compute the mean square response does not have to extend to infinity, but only to a finite frequency, $\hat{\omega}_N$, after which the excitation energy can be considered to be zero and its contribution to the integral can be ignored.

Next, we will compute the displacement response. We start by taking the Fourier transform of Eq. (7.7-57),

$$[I]_{NN}\{\ddot{Q}_e(\omega)\} + [2\zeta\omega_n]_{NN}\{\dot{Q}_e(\omega)\} + \big[\omega_n^2\big]_{NN}\{Q_e(\omega)\} = -[\phi]^T[m]_{NN}[\phi_c]\{\ddot{W}(\omega)\}_I = -[\Gamma]\{\ddot{W}(\omega)\}_I \tag{7.7-69}$$

Recall that $\{\ddot{W}(\omega)\} = -\omega^2\{W(\omega)\}$; therefore, we can augment Eq. (7.7-69) as follows:

$$\begin{bmatrix} [I]_{NN} & [0] \\ [0] & [0] \end{bmatrix}\begin{Bmatrix} \{\ddot{Q}_e(\omega)\} \\ \{\ddot{W}(\omega)\}_I \end{Bmatrix} + \begin{bmatrix} [2\zeta\omega_n]_{NN} & [0] \\ [0] & [0] \end{bmatrix}\begin{Bmatrix} \{\dot{Q}_e(\omega)\} \\ \{\dot{W}(\omega)\}_I \end{Bmatrix} + \begin{bmatrix} \big[\omega_n^2\big]_{NN} & [0] \\ [0] & -\omega^2[I] \end{bmatrix}\begin{Bmatrix} \{Q_e(\omega)\} \\ \{W(\omega)\}_I \end{Bmatrix} = \begin{bmatrix} -[\Gamma] \\ [I] \end{bmatrix}\{\ddot{W}(\omega)\}_I \tag{7.7-70}$$

Let

$$\begin{Bmatrix} \{Q_e(\omega)\} \\ \{W(\omega)\}_I \end{Bmatrix} = \{U(\omega)\} \tag{7.7-71}$$

Since $\dot{U}(\omega) = i\omega U(\omega)$ and $\ddot{U}(\omega) = -\omega^2 U(\omega)$, it can be shown that the displacement response is

$$\{U(\omega)\} = [H_w(\omega)]_d[\Gamma]_a\{\ddot{W}(\omega)\}_I \tag{7.7-72}$$

where

$$[H_w(\omega)]_d = \begin{bmatrix} [H_w(\omega)] & [0] \\ [0] & -\omega^{-2}[I] \end{bmatrix} \tag{7.7-73}$$


and

$$H_{w,jj}(\omega) = \frac{1}{\omega_{n,j}^2 - \omega^2 + i2\zeta_j\omega_{n,j}\omega} \tag{7.7-74}$$

Taking the Fourier transform of Eq. (7.7-58), and then substituting Eq. (7.7-72), yields

$$\{W(\omega)\} = [\phi]_d\{U(\omega)\} = [\phi]_d[H_w(\omega)]_d[\Gamma]_a\{\ddot{W}(\omega)\}_I \tag{7.7-75}$$

We can now compute the displacement mean square response as

$$\left\{\overline{w^2}\right\} = \mathrm{diag}\!\left[\frac{1}{2\pi}\int_{-\infty}^{\infty}\lim_{T\to\infty}\frac{1}{2T}\{W^{*}(\omega)\}\{W(\omega)\}^T\,d\omega\right] = \mathrm{diag}\!\left[\frac{1}{2\pi}\int_{0}^{\hat{\omega}_N}[\phi]_d[H_w^{*}(\omega)]_d[\Gamma]_a\big[G_{\ddot{w}_I}(\omega)\big][\Gamma]_a^T[H_w(\omega)]_d^T[\phi]_d^T\,d\omega\right] \tag{7.7-76}$$

where $[G_{\ddot{w}_I}(\omega)]$ is defined by Eq. (7.7-68). Note that again we kept the upper integration limit at $\hat{\omega}_N$, since for practical purposes the excitation energy will decrease to a negligible level at a finite frequency, after which we will not need to include any additional modes.

In Volume II, the equations for computing loads and other response quantities of interest will be derived. For the purposes of the present discussion we will assume that the loads of interest are defined by the following relationship:

$$\{L(t)\} = [LTM]\{w(t)\} \tag{7.7-77}$$

where $\{L(t)\}$ are the loads, and other response quantities of interest, and $[LTM]$ contains the response recovery equation coefficients and is our loads transformation matrix. Following the same derivation steps as above, we obtain for the mean square response of $\{L(t)\}$,

$$\left\{\overline{L^2}\right\} = \mathrm{diag}\!\left[\frac{1}{2\pi}\int_{-\infty}^{\infty}\lim_{T\to\infty}\frac{1}{2T}\{L^{*}(\omega)\}\{L(\omega)\}^T\,d\omega\right] = \mathrm{diag}\!\left[\frac{1}{2\pi}\int_{0}^{\hat{\omega}_N}[LTM][\phi]_d[H_w^{*}(\omega)]_d[\Gamma]_a\big[G_{\ddot{w}_I}(\omega)\big][\Gamma]_a^T[H_w(\omega)]_d^T[\phi]_d^T[LTM]^T\,d\omega\right] \tag{7.7-78}$$
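For a single base-driven mode the mean-square integral of Eq. (7.7-76) can be evaluated with a simple quadrature and checked against the closed-form white-noise result $\gamma^2 G/(8\zeta\omega_n^3)$, which follows from the standard identity $\int_0^\infty |H_w(\omega)|^2\,d\omega = \pi/(4\zeta\omega_n^3)$ under the conventions used here. The sketch below is illustrative only; the modal data and input PSD level are assumed, not taken from the text.

```python
import numpy as np

# Sketch: quadrature of the displacement mean-square integral, Eq. (7.7-76),
# for one base-fixed mode; wn, zeta, gamma, and G_in are assumed values.
wn, zeta, gamma = 15.0, 0.02, 1.0   # modal frequency, damping, participation
G_in = 0.02                         # flat interface acceleration PSD

dw = 0.001
w = np.arange(0.0, 60.0, dw)        # cutoff frequency well above resonance
Hw = 1.0 / (wn**2 - w**2 + 2j * zeta * wn * w)   # Eq. (7.7-74)
psd_q = np.abs(gamma * Hw)**2 * G_in             # modal displacement PSD
msq_q = psd_q.sum() * dw / (2.0 * np.pi)         # quadrature of Eq. (7.7-76)

msq_exact = gamma**2 * G_in / (8.0 * zeta * wn**3)
print(msq_q, msq_exact)
```

The agreement also illustrates why the upper limit $\hat{\omega}_N$ need not extend to infinity: the integrand decays like $1/\omega^4$ above resonance, so the truncated tail contributes a negligible fraction of the total.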

7.8 Time-domain random response analysis

In the preceding sections, we described approaches for computing the mean square response of multi-degree-of-freedom systems when the forcing functions, or base excitation, were given in terms of Power Spectral and cross Power Spectral Density functions. In Chapter 5, Section 5.6, we described an approach that allows the computation of the mean square responses in the time domain when the excitation is given as a time history. Recall that the mean square value, $\overline{w_j^2}$, of a time history, $w_j(t)$, is given by

$$\overline{w_j^2} = \lim_{T\to\infty}\frac{1}{2T}\int_{-T}^{T} w_j^2(t)\,dt = \lim_{T\to\infty}\frac{1}{T}\int_{0}^{T} w_j^2(t)\,dt \tag{7.8-1}$$

If $T$ is sufficiently large, we can write Eq. (7.8-1) as

$$\overline{w_j^2} \approx \frac{1}{T}\int_{0}^{T} w_j^2(t)\,dt \tag{7.8-2}$$

We begin with the matrix differential equation of motion for a multi-degree-of-freedom system,

$$[m]\{\ddot{w}(t)\} + [c]\{\dot{w}(t)\} + [k]\{w(t)\} = \{f(t)\} \tag{7.8-3}$$

Transforming to modal coordinates yields

$$[I]\{\ddot{q}(t)\} + [2\zeta\omega_n]\{\dot{q}(t)\} + \big[\omega_n^2\big]\{q(t)\} = [\phi]^T\{f(t)\} \tag{7.8-4}$$

where

$$\{w(t)\} = [\phi]\{q(t)\} \tag{7.8-5}$$
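The recipe in Eqs. (7.8-2), (7.8-5), and (7.8-6) can be sketched in a few lines. Here the modal time histories are two synthetic sinusoids rather than numerically integrated responses, so the converged answer is known in closed form; the mode shapes and amplitudes are illustrative values, not data from the text.

```python
import numpy as np

# Sketch of Eqs. (7.8-2), (7.8-5), (7.8-6): mean square response from
# modal time histories (hypothetical 2-DOF data).
phi = np.array([[0.6, -0.4],
                [0.5,  0.7]])           # assumed mass-normalized mode shapes
t = np.linspace(0.0, 200.0, 400001)     # long record; Eq. (7.8-2) needs T large
q = np.vstack([np.sin(2.0 * t),         # q_1(t), amplitude 1
               0.5 * np.sin(5.0 * t)])  # q_2(t), amplitude 0.5

w = phi @ q                             # physical responses, Eq. (7.8-5)
msq = (w**2).mean(axis=1)               # time-average mean squares, Eq. (7.8-6)

# For incommensurate sinusoids the cross terms average out, so as T grows
# msq_j -> sum_i phi_ji^2 * (amplitude_i^2 / 2).
expected = phi**2 @ np.array([0.5, 0.125])
print(msq, expected)
```

With a finite record the computed values differ from the converged ones by a small amount, which is exactly the statistical-stability issue Section 5.6.1 and Eq. (7.8-7) quantify.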


The length of the time histories, $T$, requires further discussion. An assumption in the time domain approach is that the time histories of the forcing functions are stationary random and come from an ergodic process. This, therefore, requires that the duration of the forcing functions be sufficiently long to yield root mean square response values that are statistically stable, i.e., they have converged to within some acceptable error bounds. This was discussed in detail in Chapter 5, Section 5.6.1, and we will, therefore, only summarize the results here. It was shown that if we define a normalized cycle count, $n = Tf_n/Q$, where $T$ is the length of the time history, $f_n$ is the natural frequency of vibration in hertz, and $Q = 1/2\zeta$, where $\zeta$ is the critical damping ratio, then the length of the forcing function required so that, on average, the mean square is within a specified tolerance of the infinite-length solution is given in Fig. 7.8-1, where

$$\overline{\mu^2}(n) = 1 - \frac{1 - e^{-2\pi n}}{2\pi n} \tag{7.8-7}$$

is the average normalized mean square value. As can be ascertained from the preceding discussion, the lower the frequency of a mode, the longer the time history of the forcing function must be to achieve a specified level of convergence. This is consistent with our understanding that convergence to a given mean square value depends on the number of cycles in the time history; the more cycles there are, the quicker the convergence. Hence, in a multi-degree-of-freedom system it is the

FIGURE 7.8-1 Normalized mean square, $\overline{\mu^2}(n)$, versus normalized cycle count, $n = Tf_n/Q$.

natural frequency of the lowest mode that establishes the required duration of the forcing function. Once a desired level of convergence is attained for the lowest mode, all other modes with higher frequencies will, on average, have as good or better mean square response predictions.

7.9 Truncated modal coordinates

In Chapter 6 and in the preceding sections of this chapter, we derived solutions by transforming the equations of motion from physical coordinates to modal (normal) coordinates. This offers a significant advantage for a large class of problems since the resulting equations are uncoupled. Another significant advantage, which we have not yet discussed in detail, is the fact that in the modal domain we can reduce the size of the problem by considering the energy content of the excitation forces and the natural frequencies associated with each mode of vibration.

Physical forces that act on systems of interest have limited frequency content. For example, atmospheric turbulence/gusts have very little energy above 10 Hz when considering the speeds at which airplanes and launch vehicles fly through the atmosphere (in Volume II, we will discuss in detail the conversion of atmospheric turbulence/gusts into time domain forcing functions, which will be dependent on the speed of the vehicle flying through the atmospheric wind features). Therefore, it is reasonable to assume that, for practical purposes, the responses of modes above 10 Hz in a turbulence/gust analysis will be negligible and do not need to be computed. We refer to the frequency past which responses are not computed as the analysis cut-off frequency.

Limiting the analysis to those modes with frequencies less than the cut-off frequency is accomplished by retaining only the mode shapes with natural frequencies below the cut-off frequency in the coordinate transformation,

$$\underset{p\times 1}{\{w(t)\}} = \underset{p\times l}{[\phi]}\,\underset{l\times 1}{\{q(t)\}} \tag{7.9-1}$$

where $l < p$ and $l$ is the number of retained modes. Applying this transformation to the equations of motion yields

$$\underset{l\times p}{[\phi]^T}\underset{p\times p}{[m]}\underset{p\times l}{[\phi]}\underset{l\times 1}{\{\ddot{q}(t)\}} + \underset{l\times p}{[\phi]^T}\underset{p\times p}{[c]}\underset{p\times l}{[\phi]}\underset{l\times 1}{\{\dot{q}(t)\}} + \underset{l\times p}{[\phi]^T}\underset{p\times p}{[k]}\underset{p\times l}{[\phi]}\underset{l\times 1}{\{q(t)\}} = \underset{l\times p}{[\phi]^T}\underset{p\times 1}{\{f(t)\}} \tag{7.9-2}$$
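The size reduction implied by Eq. (7.9-2) can be sketched numerically. The chain model and dimensions below are illustrative assumptions, not data from the text; the point is that projecting a $p$-coordinate model onto its $l$ lowest modes leaves $l\times l$ modal matrices, with the modal mass equal to identity and the modal stiffness diagonal.

```python
import numpy as np

# Sketch of the truncated modal projection (hypothetical p-DOF chain model).
p, l = 200, 10
m = np.eye(p)
k = 2.0 * np.eye(p) - np.eye(p, k=1) - np.eye(p, k=-1)  # spring chain
k[-1, -1] = 1.0                                          # free right end

lam, phi_full = np.linalg.eigh(k)   # lam = omega_n^2 in ascending order
phi = phi_full[:, :l]               # keep the l lowest modes: p x l

Mq = phi.T @ m @ phi                # modal mass -> identity (m = I here)
Kq = phi.T @ k @ phi                # modal stiffness -> diag(omega_n^2)
print(Mq.shape, np.allclose(Mq, np.eye(l)), np.allclose(Kq, np.diag(lam[:l])))
```

Two hundred coupled equations have been replaced by ten uncoupled ones; in production models the ratio is typically far more dramatic.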


where we have included the dimensions of each matrix and vector. Performing the indicated multiplications, and noting that the mode shapes have been normalized such that $[\phi]^T[m][\phi] = [I]$, we obtain

$$\underset{l\times l}{[I]}\underset{l\times 1}{\{\ddot{q}(t)\}} + \underset{l\times l}{[2\zeta\omega_n]}\underset{l\times 1}{\{\dot{q}(t)\}} + \underset{l\times l}{\big[\omega_n^2\big]}\underset{l\times 1}{\{q(t)\}} = \underset{l\times p}{[\phi]^T}\underset{p\times 1}{\{f(t)\}} \tag{7.9-3}$$

Eq. (7.9-3) contains the $l$ uncoupled modal coordinate equations of motion with natural frequencies less than the cut-off frequency. In practice, the physical coordinate model can have hundreds of thousands to millions of coordinates, whereas the truncated modal model could be as small as a few dozen to a few thousand modal equations of motion; this is a significant reduction in the number of equations. In the next section, we will discuss the accuracy of truncated modal models.

7.9.1 Mode acceleration

Fig. 7.9-1 shows, as a function of frequency of excitation $\omega$, the peak displacement response to harmonic excitation of a two-degree-of-freedom system. The natural circular frequencies of the system are $\omega_{n1} = 0.95$ and $\omega_{n2} = 1.10$. The solid curve is the peak response. The dashed lines in Fig. 7.9-1A are the components of response in each mode that are at the same phase angle relative to the excitation as the plotted total peak response. Hence, the sum of the dashed lines in Fig. 7.9-1A will yield the solid line. In Fig. 7.9-1B the dashed line is the peak response of the first mode only. Hence, if we were to truncate the modal model such that the

FIGURE 7.9-1 Peak displacement response of two-degree-of-freedom system as a function of frequency of excitation, $\omega$. (A) True response (solid line) and first and second mode contributions (dashed lines) to total response. (B) True response (solid line) and first mode only response (dashed line).


second mode was not included in the response calculation, we would obtain the dashed line in Fig. 7.9-1B, whereas the solid line would be the response of the actual system without mode truncation. Assume that the excitation force has no energy past $\omega = 1.0$, and we used this as the justification for not including the second mode in the calculations. The difference between the solid and dashed lines in Fig. 7.9-1B below $\omega = 1.0$ would still be the error between the truth (solid line) and the computed response obtained with the truncated model (dashed line). Even though there is no energy in the excitation at or near the natural frequency of a mode, the mode will still respond at off-resonance frequencies if the excitation contains energy at those frequencies. This is a more critical consideration for computed displacements than accelerations, which we will discuss next.

Fig. 7.9-2 shows the displacement, $q$, and acceleration, $\ddot{q}$, dynamic amplification factors of a single-degree-of-freedom system driven by a harmonic force of magnitude $Q$, with $\lambda = \omega/\omega_n$. We can think of this amplification factor as the response of a single mode with a natural frequency of $\omega_n$. Assume for illustration purposes that the energy content of the forcing function is zero for $\lambda \geq 0.75$. One might then conclude that since the natural frequency is considerably past the point where there is energy in the

FIGURE 7.9-2 Displacement and acceleration modal dynamic amplification factors for a single mode response, for four different critical damping ratios.


excitation that this mode does not need to be included in the analysis. However, as can be ascertained from the figure, there will be a contribution from this mode at $\lambda = 0.75$ and below; the contribution to the displacement response is greater than that to the acceleration response. We also note that the contribution from the acceleration response decays considerably faster than that of the displacement, and that the displacement response becomes asymptotic to the static response as $\lambda$ decreases. If this mode were truncated from the model, the computed total response would be deficient by the indicated amounts, assuming uniform excitation.

The fact that the acceleration response approaches zero as one moves lower in frequency relative to the mode's natural frequency suggests that the computed accelerations will be less sensitive to modal truncation than displacements. This implies that if we could derive the displacements from the computed accelerations, we would not have to retain as many modes in the analysis past where the excitation energy becomes negligible. This concept is referred to as the mode acceleration approach of response recovery (Thomson, 1981). To demonstrate the concept and the convergence properties we will solve the applicable equations for an undamped, constrained system (i.e., no rigid body modes). We will then repeat the derivation for a system with rigid body modes.

The matrix equation of motion for an undamped, constrained system in physical coordinates, $\{w(t)\}$, is

$$[m]\{\ddot{w}(t)\} + [k]\{w(t)\} = \{f(t)\} \tag{7.9-4}$$

Solving for $\{w(t)\}$ yields

$$\{w(t)\} = [k]^{-1}\big(\{f(t)\} - [m]\{\ddot{w}(t)\}\big) \tag{7.9-5}$$

where $[k]^{-1}$ exists because the system is constrained. Note that we could have retained the damping term, but for present purposes we will leave it out. Recall that $\{\ddot{w}(t)\} = [\phi]\{\ddot{q}(t)\}$; therefore,

$$\{w(t)\} = [k]^{-1}\Big(\{f(t)\} - [m]\underset{n\times l}{[\phi]}\{\ddot{q}(t)\}\Big) \tag{7.9-6}$$

where $\underset{n\times l}{[\phi]}$ is the truncated set of normal modes. Eq. (7.9-6) can be written as

$$\{w(t)\} = [k]^{-1}\{f(t)\} - \sum_{j=1}^{l}[k]^{-1}[m]\{\phi\}_j\,\ddot{q}_j(t) \tag{7.9-7}$$

Recall the eigenvalue problem,

$$\big(-\omega_{n,j}^2[m] + [k]\big)\{\phi\}_j = \{0\} \tag{7.9-8}$$

Solving for $[k]^{-1}[m]\{\phi\}_j$ yields

$$[m]\{\phi\}_j = \frac{1}{\omega_{n,j}^2}[k]\{\phi\}_j, \qquad [k]^{-1}[m]\{\phi\}_j = \frac{1}{\omega_{n,j}^2}\{\phi\}_j \tag{7.9-9}$$

Substituting into Eq. (7.9-7) produces the sought-after solution,

$$\{w(t)\} = [k]^{-1}\{f(t)\} - \sum_{j=1}^{l}\frac{1}{\omega_{n,j}^2}\{\phi\}_j\,\ddot{q}_j(t) \tag{7.9-10}$$
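The advantage of Eq. (7.9-10) is easiest to see in the static limit, where the modal accelerations vanish and the mode acceleration recovery reduces to the exact static solution $[k]^{-1}\{f(t)\}$, while the truncated mode-displacement sum retains an error. A minimal sketch, using an assumed 2-DOF constrained system (not an example from the text):

```python
import numpy as np

# Mode-displacement vs. mode-acceleration recovery under a static force,
# per Eq. (7.9-10); system matrices are illustrative assumptions.
m = np.diag([1.0, 1.0])
k = np.array([[ 300.0, -100.0],
              [-100.0,  100.0]])
f = np.array([0.0, 1.0])

w_exact = np.linalg.solve(k, f)      # static truth, [k]^{-1}{f}

lam, phi = np.linalg.eigh(k)         # lam = omega_n^2 (m = I, so mass-normalized)

# Static loading => modal accelerations q''_j = 0.  Keep only mode 1:
q_static = (phi.T @ f) / lam         # [omega_n^2]^{-1} [phi]^T {f}
w_md = phi[:, :1] @ q_static[:1]     # truncated mode-displacement recovery
w_ma = np.linalg.solve(k, f)         # Eq. (7.9-10) with q''_j = 0

err_md = np.abs(w_md - w_exact).max()
err_ma = np.abs(w_ma - w_exact).max()
print(err_md, err_ma)                # truncation error vs. zero error
```

The mode-displacement recovery carries a truncation error from the discarded mode, while the mode acceleration recovery is exact here because the full static flexibility is supplied by $[k]^{-1}\{f(t)\}$.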

Since the frequencies of modes increase progressively with mode number, $j$, and since $\omega_{n,j}^2$ is in the denominator, the contribution to the displacement response from higher-order modes generally decreases with increasing frequency. Note that in Eq. (7.9-10) the term $[k]^{-1}\{f(t)\}$ accounts for the complete static response.

7.9.2 Mode acceleration and unconstrained systems

In the preceding discussion we showed that the mode acceleration approach of computing displacement responses offers superior convergence when dealing with truncated mode sets. However, the approach requires that we invert the stiffness matrix of the system. The purpose of this section is to show how to deal with unconstrained systems, such as airplanes and launch vehicles, where the stiffness matrices are singular.

We begin as before by transforming the matrix equation of motion into modal coordinates. Assume the modal coordinate transformation matrix, $[\phi]$, is partitioned such that the rigid body modes are the leftmost columns, and the remaining rightmost columns are the elastic modes, i.e.,

$$\{w(t)\} = [\,[\phi_r]\;\;[\phi_e]\,]\begin{Bmatrix} \{q_r(t)\} \\ \{q_e(t)\} \end{Bmatrix} \tag{7.9-11}$$

Assume that the rigid body and elastic mode shapes have been normalized such that $[\phi]^T[m][\phi] = [I]$. To simplify the presentation we will neglect the


damping term, which can be added back in at the end, if desired. Applying the coordinate transformation in Eq. (7.9-11) to the equations of motion

$$[m]\{\ddot{w}(t)\} + [k]\{w(t)\} = \{f(t)\} \tag{7.9-12}$$

produces the following uncoupled equations in modal coordinates:

$$\begin{bmatrix} [I] & [0] \\ [0] & [I] \end{bmatrix}\begin{Bmatrix} \{\ddot{q}_r(t)\} \\ \{\ddot{q}_e(t)\} \end{Bmatrix} + \begin{bmatrix} [0] & [0] \\ [0] & \big[\omega_n^2\big] \end{bmatrix}\begin{Bmatrix} \{q_r(t)\} \\ \{q_e(t)\} \end{Bmatrix} = \begin{Bmatrix} [\phi_r]^T\{f(t)\} \\ [\phi_e]^T\{f(t)\} \end{Bmatrix} \tag{7.9-13}$$

The equations associated with $\{q_r(t)\}$ represent the rigid body behavior of the system, whereas those associated with $\{q_e(t)\}$ represent the elastic behavior. We can use the upper partition of Eq. (7.9-13) to solve for the rigid body acceleration response,

$$\{\ddot{q}_r(t)\} = [I]^{-1}[\phi_r]^T\{f(t)\} = [\phi_r]^T\{f(t)\} \tag{7.9-14}$$

Likewise, assuming the system is not vibrating, i.e., $\{\ddot{q}_e(t)\} = \{0\}$, we can use the lower partition to solve for the static portion of the elastic response,

$$\{q_{e\,static}(t)\} = \big[\omega_n^2\big]^{-1}[\phi_e]^T\{f(t)\} \tag{7.9-15}$$

Note that since $\big[\omega_n^2\big]$ is a diagonal matrix, its inverse, $\big[\omega_n^2\big]^{-1}$, will be a diagonal matrix with diagonal terms $1/\omega_{n_j}^2$. Transforming back to physical coordinates we obtain

$$\{w(t)\} = \{w_{rigid\;body}(t)\} + \{w_{e\,static}(t)\} = [\phi_r]\{q_r(t)\} + [\phi_e]\big[\omega_n^2\big]^{-1}[\phi_e]^T\{f(t)\}$$
Consider the unconstrained, three-degree-of-freedom system shown in Fig. 7.9-3. Its matrix equation of motion for free vibration is

$$\begin{bmatrix} 2 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 1 \end{bmatrix}\begin{Bmatrix} \ddot{x}_1(t) \\ \ddot{x}_2(t) \\ \ddot{x}_3(t) \end{Bmatrix} + \begin{bmatrix} 2 & -2 & 0 \\ -2 & 4 & -2 \\ 0 & -2 & 2 \end{bmatrix}\begin{Bmatrix} x_1(t) \\ x_2(t) \\ x_3(t) \end{Bmatrix} = \begin{Bmatrix} 0 \\ 0 \\ 0 \end{Bmatrix} \tag{7.9-30}$$

The associated eigenvalues and eigenvectors are

$$\big[\omega_n^2\big] = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 1.3820 & 0 \\ 0 & 0 & 3.6180 \end{bmatrix} \qquad [\phi] = \begin{bmatrix} 0.4472 & -0.5117 & 0.1954 \\ 0.4472 & 0.1954 & -0.5117 \\ 0.4472 & 0.6325 & 0.6325 \end{bmatrix} \tag{7.9-31}$$

where we have normalized the eigenvectors such that $[\phi]^T[m][\phi] = [I]$. The inertia relief matrix is

$$[I] - [m][\phi_r][\phi_r]^T = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} - \begin{bmatrix} 2 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 1 \end{bmatrix}\begin{Bmatrix} 0.4472 \\ 0.4472 \\ 0.4472 \end{Bmatrix}\begin{Bmatrix} 0.4472 \\ 0.4472 \\ 0.4472 \end{Bmatrix}^T = \begin{bmatrix} 0.60 & -0.40 & -0.40 \\ -0.40 & 0.60 & -0.40 \\ -0.20 & -0.20 & 0.80 \end{bmatrix} \tag{7.9-32}$$

FIGURE 7.9-3 Unconstrained, three-degree-of-freedom system: masses $m_1 = 2$, $m_2 = 2$, $m_3 = 1$ with coordinates $x_1(t)$, $x_2(t)$, $x_3(t)$, connected in a chain by springs of stiffness $k = 2$.

Next, we constrain the stiffness matrix in a statically determinate manner, which for this one-dimensional system requires that we constrain (fix) one of the three coordinates. Fixing the first coordinate and inverting the remaining partition gives

$$[a] = \begin{bmatrix} 4 & -2 \\ -2 & 2 \end{bmatrix}^{-1} = \begin{bmatrix} 0.50 & 0.50 \\ 0.50 & 1.00 \end{bmatrix} \tag{7.9-33}$$

which yields

$$[G_a] = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0.50 & 0.50 \\ 0 & 0.50 & 1.00 \end{bmatrix} \tag{7.9-34}$$

Premultiplying the matrix in Eq. (7.9-34) by the transpose of the matrix in Eq. (7.9-32), and then postmultiplying the result with the matrix in Eq. (7.9-32), produces the sought-after result,

$$[G_e] = \begin{bmatrix} 0.60 & -0.40 & -0.20 \\ -0.40 & 0.60 & -0.20 \\ -0.40 & -0.40 & 0.80 \end{bmatrix}\begin{bmatrix} 0 & 0 & 0 \\ 0 & 0.50 & 0.50 \\ 0 & 0.50 & 1.00 \end{bmatrix}\begin{bmatrix} 0.60 & -0.40 & -0.40 \\ -0.40 & 0.60 & -0.40 \\ -0.20 & -0.20 & 0.80 \end{bmatrix} = \begin{bmatrix} 0.20 & -0.10 & -0.20 \\ -0.10 & 0.10 & 0.00 \\ -0.20 & 0.00 & 0.40 \end{bmatrix} \tag{7.9-35}$$

We can check this result in several ways. First, we can verify that it matches the result obtained with Eq. (7.9-27),

$$[G_e] = [\phi_e]\big[\omega_n^2\big]^{-1}[\phi_e]^T = \begin{bmatrix} -0.5117 & 0.1954 \\ 0.1954 & -0.5117 \\ 0.6325 & 0.6325 \end{bmatrix}\begin{bmatrix} \frac{1}{1.3820} & 0 \\ 0 & \frac{1}{3.6180} \end{bmatrix}\begin{bmatrix} -0.5117 & 0.1954 & 0.6325 \\ 0.1954 & -0.5117 & 0.6325 \end{bmatrix} = \begin{bmatrix} 0.20 & -0.10 & -0.20 \\ -0.10 & 0.10 & 0.00 \\ -0.20 & 0.00 & 0.40 \end{bmatrix} \tag{7.9-36}$$

We can also check the results by making sure that they are consistent with a physics-based argument. Assume that a constant positive force of magnitude 5 is acting on the leftmost mass, and the system is not oscillating. This force will produce an overall rigid body acceleration of 1, i.e., $(2+2+1)\ddot{x} = 5 \Rightarrow \ddot{x} = 1$. Hence, each mass will experience an inertial force equal to the mass times this acceleration. Now, each mass can undergo an elastic displacement that is a function of the system's flexibility. It is this deformed shape that we are interested in, since these deflections produce the internal loads and stresses. Recall that internal loads must sum to zero since they cannot produce rigid body motion. Using the flexibility matrix, $[G_e]$, we can compute the deformed shape of the system,

$$\{x_{e\,static}(t)\} = [G_e]\{f(t)\} = \begin{bmatrix} 0.20 & -0.10 & -0.20 \\ -0.10 & 0.10 & 0.00 \\ -0.20 & 0.00 & 0.40 \end{bmatrix}\begin{Bmatrix} 5.00 \\ 0.00 \\ 0.00 \end{Bmatrix} = \begin{Bmatrix} 1.00 \\ -0.50 \\ -1.00 \end{Bmatrix} \tag{7.9-37}$$
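The numbers in this example are easy to reproduce; the sketch below follows the inertia relief route of Eqs. (7.9-32) through (7.9-37) using the system data from the text (the variable names are ours).

```python
import numpy as np

# Reproducing the three-DOF example of Fig. 7.9-3.
m = np.diag([2.0, 2.0, 1.0])
k = np.array([[ 2.0, -2.0,  0.0],
              [-2.0,  4.0, -2.0],
              [ 0.0, -2.0,  2.0]])

phi_r = np.full((3, 1), 0.4472)       # rigid body mode, Eq. (7.9-31)
P = np.eye(3) - m @ phi_r @ phi_r.T   # inertia relief matrix, Eq. (7.9-32)

Ga = np.zeros((3, 3))                 # Eq. (7.9-34): coordinate 1 fixed
Ga[1:, 1:] = np.linalg.inv(k[1:, 1:])

Ge = P.T @ Ga @ P                     # elastic flexibility, Eq. (7.9-35)

f = np.array([5.0, 0.0, 0.0])         # force of 5 on the leftmost mass
xe = Ge @ f                           # deformed shape, Eq. (7.9-37)
spring_forces = k @ xe                # net spring forces, Eq. (7.9-38)
print(Ge.round(2), xe, spring_forces)
```

Note that the spring forces sum to zero, as internal loads must; the tiny residuals relative to the printed text come only from the four-digit rounding of the rigid body mode.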


Multiplying the stiffness matrix by these deflections yields

$$\begin{bmatrix} 2 & -2 & 0 \\ -2 & 4 & -2 \\ 0 & -2 & 2 \end{bmatrix}\begin{Bmatrix} 1.00 \\ -0.50 \\ -1.00 \end{Bmatrix} = \begin{Bmatrix} 3 \\ -2 \\ -1 \end{Bmatrix} \tag{7.9-38}$$

These are the net forces the springs exert on the masses due to the system being deformed by the rigid body acceleration-induced inertial forces. Note that they sum to zero, as expected. Writing the equations of motion,

$$[m]\{\ddot{x}\} + [k]\{x\} = \{f\}, \qquad \begin{bmatrix} 2 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 1 \end{bmatrix}\begin{Bmatrix} 1 \\ 1 \\ 1 \end{Bmatrix} + \begin{Bmatrix} 3 \\ -2 \\ -1 \end{Bmatrix} = \begin{Bmatrix} 5 \\ 0 \\ 0 \end{Bmatrix} \tag{7.9-39}$$

and, as required, the system is in equilibrium.

7.9.3 Computation of loads and stresses

Let the loads and stresses of interest be defined as

$$\{L(t)\} = [LTM]\{w(t)\} = [LTM]\big(\{w_r(t)\} + \{w_e(t)\}\big) \tag{7.9-40}$$

where $[LTM]$ is referred to as a loads transformation matrix, and it relates deflections of the system to internal loads and stresses, $\{L(t)\}$ (see Volume II). Note that the total displacement, $\{w(t)\}$, is defined as the sum of the rigid body displacements, $\{w_r(t)\}$, and the elastic deformations, $\{w_e(t)\}$. A characteristic of loads transformation matrices is that when multiplied by rigid body vectors or rigid body mode shapes they produce values of zero, since rigid body displacements do not deform a system and, hence, cannot produce internal loads and stresses. To compute Eq. (7.9-40) with the mode acceleration approach we start with the unconstrained, undamped equations of motion,

$$[m]\{\ddot{w}(t)\} + [k]\{w(t)\} = \{f(t)\}, \qquad [k]\{w(t)\} = \{f(t)\} - [m]\{\ddot{w}(t)\} \tag{7.9-41}$$

Applying the coordinate transformation defined by Eq. (7.9-11) and its second time derivative,

$$\{w(t)\} = [\,[\phi_r]\;\;[\phi_e]\,]\begin{Bmatrix} \{q_r(t)\} \\ \{q_e(t)\} \end{Bmatrix} \quad\text{and}\quad \{\ddot{w}(t)\} = [\,[\phi_r]\;\;[\phi_e]\,]\begin{Bmatrix} \{\ddot{q}_r(t)\} \\ \{\ddot{q}_e(t)\} \end{Bmatrix} \tag{7.9-42}$$


and then premultiplying the result by the transpose of the transformation produces

$$[\phi]^T[k][\phi]\{q(t)\} = [\phi]^T\{f(t)\} - [\phi]^T[m][\phi]\{\ddot{q}(t)\}$$
$$\begin{bmatrix} [0] & [0] \\ [0] & \big[\omega_n^2\big] \end{bmatrix}\begin{Bmatrix} \{q_r(t)\} \\ \{q_e(t)\} \end{Bmatrix} = \begin{Bmatrix} [\phi_r]^T\{f(t)\} \\ [\phi_e]^T\{f(t)\} \end{Bmatrix} - \begin{bmatrix} [I] & [0] \\ [0] & [I] \end{bmatrix}\begin{Bmatrix} \{\ddot{q}_r(t)\} \\ \{\ddot{q}_e(t)\} \end{Bmatrix} \tag{7.9-43}$$

From the upper partition we obtain

$$\{\ddot{q}_r(t)\} = [\phi_r]^T\{f(t)\} \tag{7.9-44}$$

and from the lower partition we obtain

$$\{q_e(t)\} = \big[\omega_n^2\big]^{-1}\big([\phi_e]^T\{f(t)\} - \{\ddot{q}_e(t)\}\big) \tag{7.9-45}$$

Transforming back to physical coordinates yields

$$\{w(t)\} = \{w_r(t)\} + \{w_e(t)\} = [\,[\phi_r]\;\;[\phi_e]\,]\begin{Bmatrix} \{0\} \\ \{q_e(t)\} \end{Bmatrix} = \{0\} + [\phi_e]\{q_e(t)\} = [\phi_e]\Big(\big[\omega_n^2\big]^{-1}[\phi_e]^T\{f(t)\} - \big[\omega_n^2\big]^{-1}\{\ddot{q}_e(t)\}\Big) \tag{7.9-46}$$

where we note that $\{w(t)\}$ now only defines the elastic deformation, $\{w_e(t)\}$; i.e., the rigid body displacements are arbitrary, and chosen to be zero here, since they will not contribute to internal loads or stresses. Substituting into Eq. (7.9-40) yields

$$\{L(t)\} = [LTM]\Big([\phi_e]\big[\omega_n^2\big]^{-1}[\phi_e]^T\{f(t)\} - [\phi_e]\big[\omega_n^2\big]^{-1}\{\ddot{q}_e(t)\}\Big) = [LTM]\Bigg([\phi_e]\big[\omega_n^2\big]^{-1}[\phi_e]^T\{f(t)\} - \sum_{j=1}^{N}\frac{1}{\omega_{n,j}^2}\{\phi_e\}_j\,\ddot{q}_{e_j}(t)\Bigg) \tag{7.9-47}$$

where $N$ is the total number of modes retained in the analysis. The rightmost term will converge to a desired level of accuracy with only a subset of the modes being included because the natural frequencies increase with mode number and are in the denominator of the term. The leftmost term in the

where N is the total number of modes retained in the analysis. The rightmost term will converge to a desired level of accuracy with only a subset of the modes being included because the natural frequencies increase with mode number and are in the denominator of the term. The leftmost term in the


parenthesis will provide the exact static solution irrespective of how many modes are retained in the response calculations, since all elastic modes are retained in this expression. Finally, instead of computing all modes to establish $[\phi_e]\big[\omega_n^2\big]^{-1}[\phi_e]^T$, we could use $[G_e]$ as defined by Eq. (7.9-26).

7.9.4 Residual flexibility

In developing complex structural dynamic models the concept of residual flexibility is often used. The quantities $[\phi_e]\big[\omega_n^2\big]^{-1}[\phi_e]^T$ and $[G_e]$ represent the total flexibility of the system. If we subtract from the total flexibility the flexibility associated with the modes retained in the truncated modal model, we will be left with the residual flexibility, $[G_{residual}]$, i.e., the flexibility associated with the modes that were not included in the model,

$$[G_{residual}] = [G_e] - [\phi_e]_k\big[\omega_n^2\big]_k^{-1}[\phi_e]_k^T \tag{7.9-48}$$

or

$$[G_{residual}] = [\phi_e]\big[\omega_n^2\big]^{-1}[\phi_e]^T - [\phi_e]_k\big[\omega_n^2\big]_k^{-1}[\phi_e]_k^T \tag{7.9-49}$$

where the subscript $k$ indicates the kept modes, i.e., the modes that were retained in the truncated modal model; $[G_{residual}]$ and $[G_e]$ are $p\times p$, and the kept-mode factors have dimensions $p\times l$, $l\times l$, and $l\times p$. As discussed before, the advantage of using Eq. (7.9-48) instead of Eq. (7.9-49) is that one does not have to compute all the modes of the system; only the modes one intends to keep in the truncated dynamic model.

7.10 Dynamic behavior as a function of response

There is an important class of problems in structural dynamics that involve excitation of the system where the excitation is a function of the response of the system. Examples include (1) aeroelasticity, where the aerodynamic forces acting on a launch vehicle or aircraft depend on the speed of the vehicle and its rigid body plus local elastic angles of attack, which change in response to the excitation; (2) control forces, as produced by gimbaling of rocket engines to maintain vehicle stability in flight; (3) engine thrust oscillations that vary as a function of the elastic vibrations of the system; and (4) gyroscopic moments produced by rotating wheels, such as reaction and momentum wheels, or gears in machinery. Feedback mechanisms can lead to degradation of performance and in some instances to the loss of the system. In this section, we will derive the equations of motion of systems


where the excitation forces are a function of the dynamic response of the system; in solving these equations we will have to deal with complex modes.

7.10.1 Instantaneous displacement-proportional feedback

Fig. 7.10-1 shows an unconstrained, two-degree-of-freedom system where the excitation force acting on mass $m_2$ is a function of the relative displacement between the two masses. This could correspond to, for example, relative deformations between propulsion system elements that lead to oscillation in the fuel, which then manifests as thrust oscillations that excite the system. In a real system, there would be time delays and frequency content effects between the distortion of the system and the manifestation of the oscillations in the thrust. However, for the purposes of introducing the concept we will deal with the simplest case, where the effect is instantaneous and solely dependent on the oscillatory responses of the masses. Let $m_1 = m_2 = 1$ and $k = 50$; therefore, the matrix equation of motion for this system is

$$\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}\begin{Bmatrix} \ddot{y}_1(t) \\ \ddot{y}_2(t) \end{Bmatrix} + [D]\{\dot{y}(t)\} + \begin{bmatrix} 50 & -50 \\ -50 & 50 \end{bmatrix}\begin{Bmatrix} y_1(t) \\ y_2(t) \end{Bmatrix} = \{f(t)\} \tag{7.10-1}$$

where

$$\{f(t)\} = f_a\begin{bmatrix} 0 & 0 \\ 1 & -1 \end{bmatrix}\begin{Bmatrix} y_1(t) \\ y_2(t) \end{Bmatrix} + \begin{Bmatrix} 0 \\ 1 \end{Bmatrix} f_b(t) \tag{7.10-2}$$

FIGURE 7.10-1 Unconstrained two-degree-of-freedom system, with excitation $f(t)$ a function of the relative displacement between the two masses.


Solving the undamped eigenvalue problem gives

$$\big[\omega_n^2\big] = \begin{bmatrix} 0 & 0 \\ 0 & 100 \end{bmatrix} \qquad [\phi] = \frac{1}{\sqrt{2}}\begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} \tag{7.10-3}$$

where we normalized the mode shapes such that $[\phi]^T[m][\phi] = [I]$. As expected, the first mode is the rigid body mode of the unconstrained system, and the second one is the elastic mode, where the masses move out of phase relative to each other. We will start by transforming the equations of motion into modal coordinates. Let

$$\begin{Bmatrix} y_1(t) \\ y_2(t) \end{Bmatrix} = \frac{1}{\sqrt{2}}\begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}\begin{Bmatrix} q_1(t) \\ q_2(t) \end{Bmatrix} \tag{7.10-4}$$

Substituting Eq. (7.10-4) and its first and second time derivatives into Eq. (7.10-1), and then premultiplying the entire equation by $[\phi]^T$, produces

$$\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}\begin{Bmatrix} \ddot{q}_1(t) \\ \ddot{q}_2(t) \end{Bmatrix} + \begin{bmatrix} 0 & 0 \\ 0 & 2\zeta(10) \end{bmatrix}\begin{Bmatrix} \dot{q}_1(t) \\ \dot{q}_2(t) \end{Bmatrix} + \begin{bmatrix} 0 & 0 \\ 0 & 100 \end{bmatrix}\begin{Bmatrix} q_1(t) \\ q_2(t) \end{Bmatrix} = \frac{f_a}{2}\begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}\begin{bmatrix} 0 & 0 \\ 1 & -1 \end{bmatrix}\begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}\begin{Bmatrix} q_1(t) \\ q_2(t) \end{Bmatrix} + \frac{1}{\sqrt{2}}\begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}\begin{Bmatrix} 0 \\ 1 \end{Bmatrix} f_b(t) \tag{7.10-5}$$

Note that we have assumed damping that yields classical normal modes; hence, the modal coordinate damping matrix is also diagonal. Performing the multiplications yields

$$\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}\begin{Bmatrix} \ddot{q}_1(t) \\ \ddot{q}_2(t) \end{Bmatrix} + \begin{bmatrix} 0 & 0 \\ 0 & 20\zeta \end{bmatrix}\begin{Bmatrix} \dot{q}_1(t) \\ \dot{q}_2(t) \end{Bmatrix} + \begin{bmatrix} 0 & 0 \\ 0 & 100 \end{bmatrix}\begin{Bmatrix} q_1(t) \\ q_2(t) \end{Bmatrix} = \begin{bmatrix} 0 & f_a \\ 0 & -f_a \end{bmatrix}\begin{Bmatrix} q_1(t) \\ q_2(t) \end{Bmatrix} + \frac{1}{\sqrt{2}}\begin{Bmatrix} 1 \\ -1 \end{Bmatrix} f_b(t) \tag{7.10-6}$$

where we normalized the mode shapes such that ½fT ½m½f ¼ ½I. As expected, the first mode is the rigid body mode of the unconstrained system, and the second one is the elastic mode, where the masses move out of phase relative to each other. We will start by transforming the equations of motion into modal coordinates. Let      q1 ðtÞ y1 ðtÞ 1 1 1 ¼ pffiffiffi (7.10-4) y2 ðtÞ q2 ðtÞ 2 1 1 Substituting Eq. (7.10-4) and its first and second time derivatives into Eq. (7.10-1), and then premultiplying the entire equation by ½fT produces ) " #( ) " #( " #( ) 0 0 0 0 q1 ðtÞ 1 0 q_ 1 ðtÞ q€1 ðtÞ þ þ ¼ q€2 ðtÞ q_ 2 ðtÞ 0 2zð10Þ 0 100 0 1 q2 ðtÞ ) " #" #" #( " #( ) 0 0 1 1 q1 ðtÞ 0 1 1 1 1 1 1 fa þ pffiffiffi fb ðtÞ 2 1 1 1 1 1 1 2 1 1 1 q2 ðtÞ (7.10-5) Note that we have assumed damping that yields classical normal modes, hence, the modal coordinate damping matrix is also diagonal. Performing the multiplications yields ) ) " #( ) " #( #( " 0 0 0 0 q1 ðtÞ 1 0 q_ 1 ðtÞ q€1 ðtÞ ¼ þ þ € _ q q ðtÞ ðtÞ q2 ðtÞ 0 20z 0 100 0 1 2 2 ( " #( ) ) 1 0 fa q1 ðtÞ 1 þ pffiffiffi fb ðtÞ 2 1 0 fa q2 ðtÞ (7.10-6)

519

520

CHAPTER 7 Forced vibration of multi-degree-of-freedom systems

Combining the modal coordinate displacement terms yields the sought-after equation, ) #( ) " #( ) " " #( q1 ðtÞ 0 0 0 fa 1 0 q_ 1 ðtÞ q€1 ðtÞ þ þ q€2 ðtÞ q_ 2 ðtÞ 0 100 þ fa q2 ðtÞ 0 20z 0 1 8 9 1 > > > p ffiffiffi > > > > < 2 > = ¼ fb ðtÞ > > > > 1 > pffiffiffi > > > : ; 2 (7.10-7) The first item to note is that the modal coordinate stiffness matrix is no longer symmetric or diagonal, and the response in the rigid body mode is a function of the response of the elastic mode. For this problem, however, we can solve for the second mode response independent of the rigid body mode, and having obtained q2 ðtÞ we can compute the rigid body response. The reason the rigid body behavior does not affect the applied force, whereas the elastic mode response does, is because the force was defined as a function of the relative displacement between the two masses, and in the rigid body mode this displacement is zero. Because the fa proportional component of the force is a function of y1 ðtÞ  y2 ðtÞ, the net effect is to make the system appear stiffer, i.e., the effective circular frequency squared is u2effective ¼ 100 þ fa . Conversely, if the force were proportional to y2 ðtÞ  y1 ðtÞ, the net effect would be to make the system softer, i.e., u2effective ¼ 100  fa . In this case, if fa were sufficiently large, the system would be unstable and not oscillate. 7.10.2 Gyroscopic moments

Reaction and momentum wheels and rotating gears in turbo machinery are examples where self-generated disturbances may cause undesirable effects. Rotating wheels have static and dynamic imbalances that produce periodic forces and moments at the spin rate of the wheel. In addition, because of imperfections in ball bearings and other components, they will also produce periodic disturbances at other frequencies. The resulting forces and moments act through the wheel shaft and wheel/machinery support structure and are reacted at the interface to the rest of the system. Hence, they

7.10 Dynamic behavior as a function of response

represent not just excitation to the local wheel and its support but excitation to the entire system. Fig. 7.10-2A and B shows a wheel rotating at a constant spin rate, U, about the z-axis, which results in rotational/angular momentum Izz U; the momentum vector is aligned with the z-axis as shown in the figure. If the wheel undergoes a rotation qy about the y-axis during a time interval dt, the momentum vector will change from ðIzz UÞ to ðIzz UÞ (Fig. 7.10-2A). The change in momentum during this time interval will be ðDIzz UÞ, which is also shown in the figure. This vector is directed along the x-axis and, thus, there is momentum about the x-axis. According to Euler’s Second Law, which is derived from Newton’s laws of motion, the time rate of change of angular momentum is equal to the applied torque (moment), hence, d ðDIzz UÞ ¼ Mx (7.10-8) dt From the figure we note that for small angular rotations ðDIzz UÞ ¼ ðIzz UÞqy . Substituting into Eq. (7.10-8) yields Mx ¼ ðIzz UÞ

dqy dt

(7.10-9)

FIGURE 7.10-2 Wheel with mass moment of inertia about the z-axis of Izz spinning about the z-axis at a rate U. (A) Momentum vectors corresponding to rotation about y-axis. (B) Momentum vectors corresponding to rotation about x-axis.

521

522

CHAPTER 7 Forced vibration of multi-degree-of-freedom systems

Eq. (7.10-9) indicates that a rotational velocity about the y-axis produces a positive moment about the x-axis. Therefore, if the system containing the spinning wheel is vibrating, any rotational vibration about the y-axis at the location of the wheel will produce a vibratory moment at that location about the x-axis. The moment defined by Eq. (7.10-9) is referred to as a gyroscopic moment, or a moment produced by gyroscopic effects. Fig. 7.10-2B shows the movement of the momentum vector when the wheel undergoes a positive rotation about the x-axis. What is important to note here is that the resulting change in momentum produces a negative moment about the y-axis. Hence, repeating the derivation for positive rotation about the x-axis produces dqx (7.10-10) My ¼  ðIzz UÞ dt Let ffg ðtÞg contain the moments produced by the gyroscopic effects defined by Eqs. (7.10-9) and (7.10-10) for each spinning wheel in a system. Then, _ ffg ðtÞg ¼ ½TfwðtÞg

(7.10-11)

where all elements of ½T are zero except those corresponding to the centers of the wheels and associated with the rotational coordinates about the axes _ contains the time derivatives, i.e., perpendicular to the spin axes, and fwðtÞg velocities, of all the coordinates used to define the behavior of the entire system. As an example, assume we have a system with one spinning wheel; then all elements of ½T will be zero except for two. Let the center of the wheel correspond to the nth grid point in a finite element model with N grid points. Also assume that the wheel spins about the z-axis and w_ qx ; n and w_ qy ; n correspond to the nth grid point velocities perpendicular to the spin axis; then Eq. (7.10-11) for this system is 9 2 9 8 38 > > > > _ w > > > > x;1 Fx; 1 > > > 0 / 0 0 / 0 7> 6 > > > > > > > > 7 6 > > > > > > > « 1 « « 0 « 7> « > « > > > 6 > > > > > > > > 7 6 = < M = 60 / < 7 U / 0 0 I zz x;n _ w qx ;n 7 ¼6 (7.10-12) 7> 6 0 / I U > > > 0 / 0 M zz y;n > > > > _ 7 6 w > > > 6 > > > qy ;n > 7> > > 6« 0 > > « > > « > 7> « « 1 « > > > > > > > > 5 4 > > > > > > > > : M qz ; N ; : _ w 0 0 0 0 0 0 qz ;N ; Noted that ½T is a skew symmetric matrix. It is this characteristic of ½T that will cause the system to have complex modes.
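The assembly of $[T]$ described by Eq. (7.10-12) can be sketched numerically. The helper below is not from the text; it simply places $\pm I_{zz}\Omega$ at the two rotational coordinates of a chosen grid point and confirms the skew symmetry noted above.

```python
import numpy as np

def gyroscopic_matrix(n_dof, theta_x_idx, theta_y_idx, izz, omega):
    """Assemble the gyroscopic coupling matrix [T] of Eqs. (7.10-11)/(7.10-12).

    Only the two rotational coordinates perpendicular to the spin axis
    receive nonzero entries: the theta_x moment couples to theta_y-dot with
    +Izz*Omega, and the theta_y moment couples to theta_x-dot with -Izz*Omega.
    """
    T = np.zeros((n_dof, n_dof))
    T[theta_x_idx, theta_y_idx] = izz * omega
    T[theta_y_idx, theta_x_idx] = -izz * omega
    return T

# One wheel spinning about z; its theta_x, theta_y velocities occupy
# rows 3 and 4 of a 6-DOF model (illustrative indices, not from the text).
T = gyroscopic_matrix(6, 3, 4, izz=0.7628, omega=40.0)

# [T] is skew-symmetric, the property that produces complex modes.
assert np.allclose(T, -T.T)
```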

7.10 Dynamic behavior as a function of response

The matrix differential equation of motion of a multi-degree-of-freedom system with gyroscopic moments can now be written as

$$[m]\{\ddot{w}(t)\} + [c]\{\dot{w}(t)\} + [k]\{w(t)\} = \{f(t)\} + \{f_g(t)\} = \{f(t)\} + [T]\{\dot{w}(t)\}$$
$$[m]\{\ddot{w}(t)\} + ([c] - [T])\{\dot{w}(t)\} + [k]\{w(t)\} = \{f(t)\} \quad (7.10\text{-}13)$$

where $\{f(t)\}$ contains all forces other than the gyroscopic moments, and we used Eq. (7.10-11) to substitute for $\{f_g(t)\}$. All forces and moments due to wheel imbalances and other imperfections are included in $\{f(t)\}$. If the analysis were being done for an engine shutdown, for example, then $\{f(t)\}$ would also contain the shutdown transients in addition to the turbomachinery gear excitation. In all cases, however, the gyroscopic moments would be included in $\{f_g(t)\}$. We will solve Eq. (7.10-13) for specific systems in the next sections. However, we will show the generic solution first. An inherent assumption in the subsequent discussion is that the imbalances in the rotating components do not alter the mass matrix of the system as they rotate, and that the induced forces are not a function of the displacements (deformations) of the system, but solely of the velocities. Hence, this formulation would not be appropriate if one were analyzing the stability of wheels attached to a flexible rotating shaft, but it would be appropriate for a wheel that rotates about a flexible shaft. We start by solving the eigenvalue problem,

$$\left(-\omega_n^2[m] + [k]\right)\{\phi\} = \{0\} \quad (7.10\text{-}14)$$

where the mass of the wheel is included in the model, except that the mass moment of inertia about the spin axis is set to zero, since, other than very low friction effects, the wheel is allowed to rotate freely relative to the rest of the system and would otherwise contribute a rigid body mode. As a result, this coordinate should be collapsed out of the model, since the mass moment of inertia associated with it has been set to zero and the momentum effects due to spinning will be accounted for elsewhere, i.e., in $\{f_g(t)\}$.
Note that the eigenvalues and eigenvectors correspond to the undamped system with a nonspinning wheel and, therefore, are not the modes of the system when the wheel(s) are spinning. We normalize the computed eigenvectors such that $[\phi]^T[m][\phi] = [I]$ and, therefore, $[\phi]^T[k][\phi] = [\omega_n^2]$.
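The eigenvalue problem of Eq. (7.10-14) and the mass normalization that follows can be sketched with a generic mass/stiffness pair; the 2-DOF numbers below are illustrative, not from the text.

```python
import numpy as np
from scipy.linalg import eigh

# Illustrative symmetric mass and stiffness matrices (not from the text).
m = np.array([[2.0, 0.0],
              [0.0, 1.0]])
k = np.array([[600.0, -200.0],
              [-200.0, 400.0]])

# Solve (-wn^2 [m] + [k]){phi} = {0} as the generalized symmetric
# eigenproblem k phi = wn^2 m phi.
wn2, phi = eigh(k, m)        # eigh returns mass-orthonormal eigenvectors
wn = np.sqrt(wn2)            # circular natural frequencies, rad/s

# scipy.linalg.eigh normalizes so that phi.T @ m @ phi = I,
# which implies phi.T @ k @ phi = diag(wn^2).
assert np.allclose(phi.T @ m @ phi, np.eye(2), atol=1e-10)
assert np.allclose(phi.T @ k @ phi, np.diag(wn2), atol=1e-8)
```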


Substituting $\{w(t)\} = [\phi]\{q(t)\}$ and its first and second time derivatives into Eq. (7.10-13) and then premultiplying the entire equation by $[\phi]^T$ produces

$$[I]\{\ddot{q}(t)\} + [\Lambda]\{\dot{q}(t)\} + [\omega_n^2]\{q(t)\} = [\phi]^T\{f(t)\} \quad (7.10\text{-}15)$$

where $[\Lambda] = [2\zeta\omega_n] - [\phi]^T[T][\phi]$; and if, absent gyroscopic effects, the damping properties yield classical normal modes, then $[2\zeta\omega_n]$ will be a diagonal matrix. If the damping properties yield nonclassical normal modes, then

$$[\Lambda] = [\phi]^T[c][\phi] - [\phi]^T[T][\phi] \quad (7.10\text{-}16)$$

The solution to Eq. (7.10-15) will depend on the form of the excitation, and this will be discussed in significant detail in subsequent sections. One item of note here is that we are using the eigenvectors of the undamped system, without gyroscopic effects, as a vector basis. An advantage to this approach is that the damping properties can be specified on a mode-by-mode basis without the complexity of having to derive a physical coordinate damping matrix. In the transformed coordinate system, Eq. (7.10-15), the equations of motion will yield the expected complex modes that result when gyroscopic effects are included. Before leaving this section we will cast Eq. (7.10-13) in first-order form. Let

$$\{W(t)\} = \begin{Bmatrix} \dot{w}(t) \\ w(t) \end{Bmatrix} \quad (7.10\text{-}17)$$

Then using the identity, $[m]\{\dot{w}(t)\} - [m]\{\dot{w}(t)\} = \{0\}$, Eq. (7.10-13) can be written as

$$[\tilde{M}]\{\dot{W}(t)\} + [\tilde{K}]\{W(t)\} = \{\tilde{f}(t)\} \quad (7.10\text{-}18)$$

where

$$[\tilde{M}] = \begin{bmatrix} [0] & [m] \\ [m] & ([c]-[T]) \end{bmatrix}, \qquad [\tilde{K}] = \begin{bmatrix} -[m] & [0] \\ [0] & [k] \end{bmatrix}, \qquad \{\tilde{f}(t)\} = \begin{Bmatrix} \{0\} \\ \{f(t)\} \end{Bmatrix} \quad (7.10\text{-}19)$$

Note that $[\tilde{M}]$ is invertible, $[\tilde{K}]$ is symmetric, and, because $[T]$ is skew-symmetric, $([c]-[T])$ is not symmetric. This latter fact adds complexity to the solution of the eigenvalue problem, which will be discussed in detail in subsequent sections, and in Chapter 8. For now, it suffices to state that if damping is set to zero, the resulting eigenvalues will be pure imaginary and the eigenvectors will be
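The first-order form of Eqs. (7.10-17) through (7.10-19) can be checked numerically. The sketch below uses illustrative 2-DOF matrices (not the rod/disk system of the following sections); it assembles $[\tilde{M}]$ and $[\tilde{K}]$ and confirms that with $[c] = [0]$ the eigenvalues are pure imaginary.

```python
import numpy as np

# Illustrative 2-DOF system with a skew-symmetric gyroscopic matrix.
m = np.diag([2.0, 1.0])
k = np.array([[600.0, -200.0], [-200.0, 400.0]])
c = np.zeros((2, 2))                      # undamped
T = np.array([[0.0, 5.0], [-5.0, 0.0]])   # skew-symmetric

Z = np.zeros((2, 2))
# Eq. (7.10-19): Mt = [[0, m], [m, c - T]], Kt = [[-m, 0], [0, k]]
Mt = np.block([[Z, m], [m, c - T]])
Kt = np.block([[-m, Z], [Z, k]])

assert np.allclose(Kt, Kt.T)              # [K~] is symmetric

# Mt Wdot + Kt W = 0  ->  Wdot = -inv(Mt) Kt W
lam = np.linalg.eigvals(-np.linalg.solve(Mt, Kt))

# With zero damping the gyroscopic terms shift frequencies but add no decay:
assert np.max(np.abs(lam.real)) < 1e-8
```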


complex (Mirsky, 1982). Hence, gyroscopic moments will not add damping since the real parts of the eigenvalues are zero. However, if the system has damping, gyroscopic moments will alter the damping characteristics of the system (see Section 7.10.4).

7.10.3 Whirl

In the previous section, we derived the equations of motion for systems that contain wheels/disks rotating at sufficiently high speeds that their momentum due to spinning has to be considered in the formulation of the equations of motion. It was observed that the modes of such systems would be nonclassical, i.e., complex. In this section, we will discuss in detail the impact of a spinning wheel/disk on the modes of a shaft/disk system, and in the process we will describe the whirling motion that can occur, whether the disk is spinning or stationary. We will first address a perfectly symmetric system with a stationary disk and show that whirling motion does not occur until the system deviates from perfect symmetry. We will then show that whirling motion can occur in systems that are perfectly symmetric if the disk is spinning. Finally, we will address both perfectly symmetric and then nonsymmetric systems with excitation caused by imbalances in the spinning disk.

7.10.3.1 Symmetric systems

We begin the discussion of symmetric systems with Fig. 7.10-3, where we have a rigid disk (top) that, if spinning, would spin at U rad/sec counterclockwise on a flexible round rod of length L that is fixed at its base. The connection between the rod and disk is through a frictionless bearing that

FIGURE 7.10-3 Rigid disk spinning at U rad/sec counterclockwise about a rod of length L that is fixed at its base.


transmits lateral forces and moments between the disk and rod. We will assume that the disk cannot slide along the z-axis, and that the rod does not deform axially, along the z-axis, or torsion/twist about the z-axis. We will model this system with a single, four-degree-of-freedom point at the centerline of the rod, so that it describes the motion of the rod at the plane corresponding to the location of the center of mass of the disk. The point will be able to move laterally, with translational coordinates defining motion along the x- and y-axes. In addition, coordinates qx and qy will define rotation of the point about the x- and y-axes, respectively. This means that for a completely symmetric disk, the center of mass aligns with the centerline of the rod. We will assume that the radius of the rod is 0.2 in, its length, L, to the mid-plane of the disk is 12 in, and it is made of steel. In modeling there is always a question as to whether the length of the rod should be to the bottom or mid-plane of the disk. We will assume that L corresponds to the mid-plane. The disk is also made of steel, has a radius of 6 in and a thickness of 0.5 in. We will assume that the mass of the bearing and the length of the embedded rod are equal to that of the material removed from the disk to accommodate the bearing and rod. In Volume II, we will derive the stiffness and mass matrices for a beam/ rod. For this discussion, we will simply state what these matrices are. A general, three-dimensional beam element has 12 coordinates that describe the six degrees of freedom at each end. For the rod in Fig. 7.10-3 the six degrees of freedom at the bottom are fixed, and the axial and torsional degrees of freedom at the free end can be excluded from the model. 
Since there is no coupling between the axial and torsional degrees of freedom and the lateral translational and rotational degrees of freedom, we can simply delete the associated rows and columns from the stiffness and mass matrices; this then yields

$$[k] = \begin{bmatrix} \dfrac{12EI_x}{L^3} & 0 & 0 & -\dfrac{6EI_x}{L^2} \\[2mm] 0 & \dfrac{12EI_y}{L^3} & \dfrac{6EI_y}{L^2} & 0 \\[2mm] 0 & \dfrac{6EI_y}{L^2} & \dfrac{4EI_y}{L} & 0 \\[2mm] -\dfrac{6EI_x}{L^2} & 0 & 0 & \dfrac{4EI_x}{L} \end{bmatrix} \begin{matrix} \leftarrow x \\ \leftarrow y \\ \leftarrow \theta_x \\ \leftarrow \theta_y \end{matrix} \quad (7.10\text{-}20)$$

7.10 Dynamic behavior as a function of response

$$[m_R] = \frac{\rho L}{420}\begin{bmatrix} 156 & 0 & 0 & -22L \\ 0 & 156 & 22L & 0 \\ 0 & 22L & 4L^2 & 0 \\ -22L & 0 & 0 & 4L^2 \end{bmatrix} \begin{matrix} \leftarrow x \\ \leftarrow y \\ \leftarrow \theta_x \\ \leftarrow \theta_y \end{matrix} \quad (7.10\text{-}21)$$

where $E$ is Young's modulus, $I_x = I_y = \pi r^4/4 = 0.001257$ are the area moments of inertia of the cross-sectional area, $r$ is the radius of the rod, $\rho$ is the mass density per unit length, and $L$ is the length of the rod from the fixed boundary to the mid-plane of the disk, where it is assumed connected to the disk through the bearing. The stiffness matrix was derived using Bernoulli-Euler beam theory, and the mass matrix is the associated consistent mass matrix. Since the disk is assumed rigid, only its mass properties need to be included. The origin of the coordinate system is at the center of mass of the disk; and since the disk is uniform, the x- and y-axes can be considered principal axes irrespective of the rotational angle of the disk. This yields a diagonal mass matrix,

$$[m_D] = \begin{bmatrix} \rho_V \pi R^2 d & 0 & 0 & 0 \\ 0 & \rho_V \pi R^2 d & 0 & 0 \\ 0 & 0 & \rho_V \pi R^2 d\,\dfrac{3R^2+d^2}{12} & 0 \\ 0 & 0 & 0 & \rho_V \pi R^2 d\,\dfrac{3R^2+d^2}{12} \end{bmatrix} \quad (7.10\text{-}22)$$

where $R$ is the radius of the disk, $\rho_V$ is the volume mass density, and $d$ is the thickness. Note that we approximated the mass of the bearing and half of the embedded rod by computing the mass of the disk as though it did not have a hole. In Section 7.10.2, we derived the moments produced by gyroscopic effects (see Eqs. (7.10-9) through (7.10-13)). For our system, the gyroscopic effects matrix is

$$[T] = \begin{bmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & I_{zz}\Omega \\ 0 & 0 & -I_{zz}\Omega & 0 \end{bmatrix} \begin{matrix} \leftarrow \dot{x} \\ \leftarrow \dot{y} \\ \leftarrow \dot\theta_x \\ \leftarrow \dot\theta_y \end{matrix} \quad (7.10\text{-}23)$$
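Eqs. (7.10-20) through (7.10-22) can be assembled directly. The sketch below uses the stated dimensions with assumed steel properties (E = 29e6 psi, weight density 0.284 lb/in^3, g = 386.1 in/s^2; these numerical values are assumptions, since the text does not list them) and reproduces the repeated-frequency pairs of the perfectly symmetric system.

```python
import numpy as np
from scipy.linalg import eigh

# Assumed steel properties (not specified numerically in the text):
E = 29.0e6                      # psi
rho_w = 0.284 / 386.1           # mass density, lbf*s^2/in^4
r, L = 0.2, 12.0                # rod radius and length, in
R, d = 6.0, 0.5                 # disk radius and thickness, in

I = np.pi * r**4 / 4.0          # area moment of inertia (text value 0.001257)
rhoL = rho_w * np.pi * r**2     # rod mass per unit length

# Eq. (7.10-20): cantilever tip stiffness, coordinates [x, y, theta_x, theta_y]
k = np.array([[12*E*I/L**3, 0, 0, -6*E*I/L**2],
              [0, 12*E*I/L**3, 6*E*I/L**2, 0],
              [0, 6*E*I/L**2, 4*E*I/L, 0],
              [-6*E*I/L**2, 0, 0, 4*E*I/L]])

# Eq. (7.10-21): consistent mass of the rod
mR = (rhoL*L/420.0) * np.array([[156, 0, 0, -22*L],
                                [0, 156, 22*L, 0],
                                [0, 22*L, 4*L**2, 0],
                                [-22*L, 0, 0, 4*L**2]])

# Eq. (7.10-22): rigid-disk mass matrix (diametral inertia on the rotations)
mdisk = rho_w * np.pi * R**2 * d
Idia = mdisk * (3*R**2 + d**2) / 12.0
mD = np.diag([mdisk, mdisk, Idia, Idia])

m = mR + mD
wn = np.sqrt(eigh(k, m, eigvals_only=True))   # rad/s: two repeated pairs
```

With the assumed material values the frequencies come out near the text's 36 and 190 rad/s, each appearing twice because of the perfect symmetry.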


where $I_{zz} = \rho_V(\pi R^2 d)R^2/2 = 0.7628$ is the mass moment of inertia of the disk about the z-axis. We will later also solve this problem by assigning damping in the modal domain. Assembling the above matrices into the equations of motion (see Eq. 7.10-13) we obtain

$$([m_R]+[m_D])\{\ddot{w}(t)\} + ([c]-[T])\{\dot{w}(t)\} + [k]\{w(t)\} = \{0\} \quad (7.10\text{-}24)$$

where $\{w(t)\} = \{\,x \ \ y \ \ \theta_x \ \ \theta_y\,\}^T$. To solve for the modes we first need to recast Eq. (7.10-24) in first-order form (see previous section, or Chapter 6, Section 6.11),

$$\begin{bmatrix} [0] & [m] \\ [m] & [c]-[T] \end{bmatrix}\begin{Bmatrix} \ddot{w}(t) \\ \dot{w}(t) \end{Bmatrix} + \begin{bmatrix} -[m] & [0] \\ [0] & [k] \end{bmatrix}\begin{Bmatrix} \dot{w}(t) \\ w(t) \end{Bmatrix} = \begin{Bmatrix} \{0\} \\ \{0\} \end{Bmatrix}, \qquad [\tilde{M}]\{\dot{W}(t)\} + [\tilde{K}]\{W(t)\} = \{0\} \quad (7.10\text{-}25)$$

where $[m] = [m_R] + [m_D]$ and we used the identity, $[m]\{\dot{w}(t)\} - [m]\{\dot{w}(t)\} = \{0\}$. For the initial solution we will assume that the system does not have damping, i.e., $[c] = [0]$, and $\Omega = 0$. The eigenvalue problem produces the real (classical) normal modes of the system,

$$[\omega_n] = \begin{bmatrix} 35.982 & & & \\ & 35.982 & & \\ & & 190.09 & \\ & & & 190.09 \end{bmatrix}$$
$$[\phi] = \begin{bmatrix} 0 & 4.512 & 0 & -1.735 \\ 4.512 & 0 & 1.735 & 0 \\ -0.588 & 0 & 1.503 & 0 \\ 0 & 0.588 & 0 & 1.503 \end{bmatrix} \quad (7.10\text{-}26)$$

where the mode shapes have been normalized such that ½fT ½m½f ¼ ½I. It should be noted that we have repeated roots, i.e., the first two modes have identical frequencies as well as the last two. Therefore, any linear combination of the first two mode shapes and of the last two are mode shapes of the system. The repeated roots exist because the system is perfectly symmetric and there is no “communication” between the x-z and y-z planes. Once the


gyroscopic effects are added the two planes will be coupled through the gyroscopic moments. However, before working that problem, let us explore the behavior of this system. We will initiate vibration of the system with the following initial velocities:

$$\{\dot{w}(0)\} = \begin{Bmatrix} \dot{x}(0) \\ \dot{y}(0) \\ \dot\theta_x(0) \\ \dot\theta_y(0) \end{Bmatrix} = \begin{Bmatrix} 100 \\ 100 \\ 0 \\ 0 \end{Bmatrix} \quad (7.10\text{-}27)$$

Transforming the initial conditions into the modal domain gives

$$\{\dot{q}(0)\} = [\phi]^T[m]\{\dot{w}(0)\} = \begin{bmatrix} 0 & 4.512 & -0.588 & 0 \\ 4.512 & 0 & 0 & 0.588 \\ 0 & 1.735 & 1.503 & 0 \\ -1.735 & 0 & 0 & 1.503 \end{bmatrix}\begin{bmatrix} 0.0428 & 0 & 0 & -0.0007 \\ 0 & 0.0428 & 0.0007 & 0 \\ 0 & 0.0007 & 0.3839 & 0 \\ -0.0007 & 0 & 0 & 0.3839 \end{bmatrix}\begin{Bmatrix} 100 \\ 100 \\ 0 \\ 0 \end{Bmatrix} = \begin{Bmatrix} 19.268 \\ 19.268 \\ 7.532 \\ -7.532 \end{Bmatrix} \quad (7.10\text{-}28)$$

Transforming the equations of motion into the modal domain yields

$$\begin{aligned} \ddot{q}_1(t) + (35.982)^2 q_1(t) &= 0, & q_1(0) &= 0, & \dot{q}_1(0) &= 19.268 \\ \ddot{q}_2(t) + (35.982)^2 q_2(t) &= 0, & q_2(0) &= 0, & \dot{q}_2(0) &= 19.268 \\ \ddot{q}_3(t) + (190.09)^2 q_3(t) &= 0, & q_3(0) &= 0, & \dot{q}_3(0) &= 7.532 \\ \ddot{q}_4(t) + (190.09)^2 q_4(t) &= 0, & q_4(0) &= 0, & \dot{q}_4(0) &= -7.532 \end{aligned} \quad (7.10\text{-}29)$$
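The modal transformation of Eq. (7.10-28) can be checked numerically. The sketch below uses the rounded $[\phi]$ and $[m]$ values as printed, so the results match the text only to that rounding.

```python
import numpy as np

# Rounded mode shapes and mass matrix as printed in Eq. (7.10-28);
# coordinates ordered [x, y, theta_x, theta_y].
phi = np.array([[0.0, 4.512, 0.0, -1.735],
                [4.512, 0.0, 1.735, 0.0],
                [-0.588, 0.0, 1.503, 0.0],
                [0.0, 0.588, 0.0, 1.503]])
m = np.array([[0.0428, 0.0, 0.0, -0.0007],
              [0.0, 0.0428, 0.0007, 0.0],
              [0.0, 0.0007, 0.3839, 0.0],
              [-0.0007, 0.0, 0.0, 0.3839]])
wdot0 = np.array([100.0, 100.0, 0.0, 0.0])   # Eq. (7.10-27)

# Eq. (7.10-28): modal initial velocities
qdot0 = phi.T @ m @ wdot0

# The printed values are mass-normalized to within rounding:
assert np.allclose(phi.T @ m @ phi, np.eye(4), atol=0.01)
```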


Each equation provides the response in a mode and accordingly corresponds to the equation of a single-degree-of-freedom system. Therefore, the solutions are (see Chapter 2)

$$\begin{aligned} q_1(t) &= \frac{\dot{q}_1(0)}{\omega_{n1}}\sin\omega_{n1}t = \frac{19.268}{35.982}\sin(35.982t) \\ q_2(t) &= \frac{\dot{q}_2(0)}{\omega_{n2}}\sin\omega_{n2}t = \frac{19.268}{35.982}\sin(35.982t) \\ q_3(t) &= \frac{\dot{q}_3(0)}{\omega_{n3}}\sin\omega_{n3}t = \frac{7.532}{190.09}\sin(190.09t) \\ q_4(t) &= \frac{\dot{q}_4(0)}{\omega_{n4}}\sin\omega_{n4}t = -\frac{7.532}{190.09}\sin(190.09t) \end{aligned} \quad (7.10\text{-}30)$$

The physical response will, therefore, be $\{w(t)\} = [\phi]\{q(t)\}$, and of particular interest to us will be the time-phased displacement response of the center of the disk in the x-y plane, which will be the same as the point on the rod selected for the equations of motion. For this we need to compute $x(t)$ and $y(t)$,

$$\begin{Bmatrix} x(t) \\ y(t) \end{Bmatrix} = \begin{bmatrix} 0 & 4.512 & 0 & -1.735 \\ 4.512 & 0 & 1.735 & 0 \end{bmatrix}\begin{Bmatrix} 0.536\sin(35.982t) \\ 0.536\sin(35.982t) \\ 0.040\sin(190.09t) \\ -0.040\sin(190.09t) \end{Bmatrix} = \begin{Bmatrix} 2.418\sin(35.982t) + 0.069\sin(190.09t) \\ 2.418\sin(35.982t) + 0.069\sin(190.09t) \end{Bmatrix} \quad (7.10\text{-}31)$$

Fig. 7.10-4 shows the motion of the center of the disk in the x-y plane as a function of time, which in the figures are the vertical axes. The motion occurs in a plane, 45° counterclockwise from the x-axis. This is because the initial velocities were equal in the x- and y-coordinate directions and we have a perfectly symmetric system. Had the initial velocities been something other than equal, the vibration would still occur in a plane, but the azimuth of the plane in which the vibration occurs would be different. This can be seen in Fig. 7.10-5, where we have rotated the axes to provide a


FIGURE 7.10-4 Trajectory of the center of the disk of system in Fig. 7.10-3; motion was initiated with equal initial velocities in the x- and y-coordinate directions. Vibration is in a plane 45 counterclockwise from the x-axis. (A) Responses for modes 1 and 2 are shown separately from those of modes 3 and 4. (B) Combined modes 1 through 4.

FIGURE 7.10-5 View along the time axis of the response shown in Fig. 7.10-4, and that due to initial velocities of $\dot{x}(0) = 200$ and $\dot{y}(0) = 100$.

view along the z-axis for the response in Fig. 7.10-4, and have added the response corresponding to initial velocities of $\dot{x}(0) = 200$ and $\dot{y}(0) = 100$. As can be seen, the response for the latter initial velocities has rotated toward the x-axis, and the magnitude is proportionally greater because of the larger initial velocity in the x-coordinate direction.


The preceding examples illustrate why, even though we have only two distinct natural frequencies, four independent and orthogonal mode shapes are required to produce the physical motion allowed by the four degrees of freedom of the system. It is important to note that for highly symmetric structures, such as rotating disks, all modes must be identified, even those that may have, for all practical purposes, identical frequencies. This is particularly true for experimental determination of natural frequencies and mode shapes, where the only way to separate modes close in frequency is to measure the mode shapes and determine the phase reversals necessary for orthogonality between the mode shapes. Irrespective of how close in frequency, and even for identical frequencies, the corresponding mode shapes will be orthogonal and, therefore, identifiable as distinct modes from the phase reversals.

7.10.3.2 Slightly nonsymmetric systems

To introduce a slight nonsymmetry we will increase the stiffness terms in the y direction by a factor of 1.2. This yields the following modes:

$$[\omega_n] = \begin{bmatrix} 35.982 & & & \\ & 39.416 & & \\ & & 190.09 & \\ & & & 208.233 \end{bmatrix}$$
$$[\phi] = \begin{bmatrix} 4.512 & 0 & -1.735 & 0 \\ 0 & 4.512 & 0 & 1.735 \\ 0 & -0.588 & 0 & 1.503 \\ 0.588 & 0 & 1.503 & 0 \end{bmatrix} \quad (7.10\text{-}32)$$

As expected, the frequencies of the modes associated with translation along the y-axis and rotation about the x-axis increased, whereas the other two modes did not change, since we did not alter the stiffness associated with bending in the x-z plane and the two planes are not coupled in this system. The other item to note is that the four mode shapes are the same as for the problem in the preceding section. This is because the mass matrices are the same, and the change in stiffness that produced the results in (7.10-32) affected both the bending and rotation by the same amount.
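Because the 1.2 factor scales both the bending and rotation terms of the affected plane together, the shapes are unchanged and each affected frequency grows by exactly $\sqrt{1.2} \approx 1.0954$. A one-line numerical check of that claim:

```python
import numpy as np

# Scaling an entire stiffness (sub)matrix by a factor s leaves the mode
# shapes of k phi = wn^2 m phi unchanged and multiplies each frequency by
# sqrt(s), since (s k) phi = (s wn^2) m phi.
s = 1.2
wn_sym = np.array([35.982, 190.09])          # y-plane pair, Eq. (7.10-26)
wn_scaled = np.sqrt(s) * wn_sym

# Matches the y-plane frequencies reported in Eq. (7.10-32).
assert np.allclose(wn_scaled, [39.416, 208.233], atol=0.01)
```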


We will initiate the motion with the same initial velocities as in (7.10-27), i.e.,

$$\{\dot{q}(0)\} = [\phi]^T[m]\{\dot{w}(0)\} = \begin{bmatrix} 4.512 & 0 & 0 & 0.588 \\ 0 & 4.512 & -0.588 & 0 \\ -1.735 & 0 & 0 & 1.503 \\ 0 & 1.735 & 1.503 & 0 \end{bmatrix}\begin{bmatrix} 0.0428 & 0 & 0 & -0.0007 \\ 0 & 0.0428 & 0.0007 & 0 \\ 0 & 0.0007 & 0.3839 & 0 \\ -0.0007 & 0 & 0 & 0.3839 \end{bmatrix}\begin{Bmatrix} 100 \\ 100 \\ 0 \\ 0 \end{Bmatrix} = \begin{Bmatrix} 19.268 \\ 19.268 \\ -7.532 \\ 7.532 \end{Bmatrix} \quad (7.10\text{-}33)$$

The modal responses are

$$\begin{aligned} q_1(t) &= \frac{\dot{q}_1(0)}{\omega_{n1}}\sin\omega_{n1}t = \frac{19.268}{35.982}\sin(35.982t) \\ q_2(t) &= \frac{\dot{q}_2(0)}{\omega_{n2}}\sin\omega_{n2}t = \frac{19.268}{39.416}\sin(39.416t) \\ q_3(t) &= \frac{\dot{q}_3(0)}{\omega_{n3}}\sin\omega_{n3}t = -\frac{7.532}{190.09}\sin(190.09t) \\ q_4(t) &= \frac{\dot{q}_4(0)}{\omega_{n4}}\sin\omega_{n4}t = \frac{7.532}{208.233}\sin(208.233t) \end{aligned} \quad (7.10\text{-}34)$$


The physical coordinate responses are

$$\begin{Bmatrix} x(t) \\ y(t) \end{Bmatrix} = \begin{bmatrix} 4.512 & 0 & -1.735 & 0 \\ 0 & 4.512 & 0 & 1.735 \end{bmatrix}\begin{Bmatrix} 0.535\sin(35.982t) \\ 0.489\sin(39.416t) \\ -0.040\sin(190.09t) \\ 0.036\sin(208.233t) \end{Bmatrix} = \underbrace{\begin{Bmatrix} 2.414\sin(35.982t) \\ 2.206\sin(39.416t) \end{Bmatrix}}_{\text{modes }1,2} + \underbrace{\begin{Bmatrix} 0.069\sin(190.09t) \\ 0.062\sin(208.233t) \end{Bmatrix}}_{\text{modes }3,4} \quad (7.10\text{-}35)$$

Fig. 7.10-6A shows the trace $(x(t), y(t))$ of the first cycle of motion of the center of the disk, in the x-y plane, as a function of time; this type of plot is referred to as a Lissajous graph. We have chosen to plot only the contribution from the first two modes, since the third and fourth modes contribute very little; this will be shown later. The displayed period of vibration, 0.167, is the average period of the two modes, i.e., $2(2\pi)/(35.982+39.416)$; we will discuss later why this choice was made. The first item to note is that, unlike the system in the previous section where the frequencies of the first two modes were identical, the motion of the center of the disk does not occur in a plane; rather, the motion traces a whirling trajectory. This is due solely to the difference in the natural

FIGURE 7.10-6 Lissajous graphs of Eq. (7.10-35), response of first two modes: (A) First cycle, t ¼ 0 to t ¼ 0:167; (B) First two cycles, t ¼ 0 to t ¼ 0:333, dashed line corresponds to first cycle.


FIGURE 7.10-7 Lissajous graphs of Eq. (7.10-35), response of first two modes, t = 0 to t = 0.667: (A) Third cycle, t = 0.333 to t = 0.500, shown as solid line; (B) Fourth cycle, t = 0.500 to t = 0.667, shown as solid line.

frequencies of the two modes, since all other parameters are the same as for the system in the previous section. Fig. 7.10-6B shows the first two cycles, with the solid line being the second cycle. The item to note is that the amplitude along the initial trajectory has decreased, while at 90 degrees it increased. Fig. 7.10-7 shows the next two cycles, and we note the continuing decrease in the amplitude of motion along the initial trajectory while there is a corresponding increase along the perpendicular direction. Fig. 7.10-8 shows the fifth and sixth cycles. Of interest here is the sixth cycle that is shown in Fig. 7.10-8B. Whereas the motion until this cycle had

FIGURE 7.10-8 Lissajous graphs of Eq. (7.10-35), response of first two modes, t ¼ 0 to t ¼ 1:000: (A) Fifth cycle, t ¼ 0:667 to t ¼ 0:833, shown as solid line; (B) Sixth cycle, t ¼ 0:833 to t ¼ 1:000, shown as solid line.


FIGURE 7.10-9 Lissajous graph of Eq. (7.10-35), response of first two modes, t ¼ 0 to t ¼ 1:333: (A) Seventh cycle, t ¼ 1:000 to t ¼ 1:167, shown as solid line; (B) Eighth cycle, t ¼ 1:167 to t ¼ 1:333, shown as solid line. been clockwise, with this cycle the motion reverses and begins whirling in the counterclockwise direction. Fig. 7.10-9 shows the next two cycle, where the reversed direction of motion can be seen clearly. As indicated, during the sixth cycle of oscillation the whirling motion reversed direction. This reversal in direction will occur three more times, and then the oscillation pattern repeats itself. This is akin to a beating phenomenon. However, since the ratio of the two natural frequencies is not a rational number (see Section 7.3 and Appendix 7.3), there does not exist a common period for the high frequency oscillation, i.e., the average frequency of the two modes, and the long period envelope function, whose frequency is half of the difference between the two natural frequencies. Since a common period does not exist, the beating phenomenon will occur, but the high frequency oscillations will be progressively shifted relative to the longer envelope function oscillation. Hence, the oscillation will eventually fill the entire Lissajous space. This can be seen in Fig. 7.10-10, where each plot shows an increasing number of cycles. If on the other hand we plot the Lissajous graph where we assume that the second mode frequency is 1:1ð35:982Þ ¼ 39:580, we would obtain the plot in Fig. 7.10-11. As can be ascertained, the oscillations, at a frequency of ð35:982 þ39:580Þ=2, are now periodic with the envelope function frequency of, ð39:580 35:982Þ=2, and, therefore, repeat exactly during each period of the envelope function. This occurs because the two frequencies are integer multiples of each other, and their ratio is a rational number (see Appendix 7.3).


FIGURE 7.10-10 Lissajous graphs of Eq. (7.10-35), first two modes: (A) t = 0 to t = 3.659; (B) t = 0 to t = 2(3.659); (C) t = 0 to t = 5(3.659); and (D) t = 0 to t = 10(3.659).

FIGURE 7.10-11 Lissajous graph of Eq. (7.10-35), response of first two modes for t = 0 to t = 10(3.493). Natural frequency of the second mode was changed from 39.416 to 1.1 times the first mode frequency, i.e., 1.1(35.982) = 39.580.


FIGURE 7.10-12 Lissajous graphs of Eq. (7.10-35), response of all four modes: (A) First cycle, t = 0 to t = 0.167; (B) First two cycles, t = 0 to t = 0.333, dashed line corresponds to first cycle.

The preceding examples considered responses in the first two modes. Fig. 7.10-12 shows a repeat of Fig. 7.10-6, except the responses of the third and fourth modes, which are at considerably higher frequency than the first two modes, have been included. As can be seen, the higher frequency responses "ride" on the longer period first-two-mode responses. Fig. 7.10-13 shows the first two cycles of the response of the higher frequency modes only; the period of a cycle was computed as the average period of the two modes, i.e., $2(2\pi)/(208.233+190.09)$. Fig. 7.10-14A shows the response

FIGURE 7.10-13 Lissajous graphs of Eq. (7.10-35), third and fourth mode responses only: (A) First cycle, t = 0 to t = 0.032; (B) First two cycles, t = 0 to t = 0.063, dashed line corresponds to first cycle.


FIGURE 7.10-14 Lissajous graphs of Eq. (7.10-35), third and fourth mode responses only: (A) Oscillations within first cycle of envelope function, t = 0 to t = 0.693; (B) Oscillations within first 10 cycles of envelope function, t = 0 to t = 6.926.

for a period of $2(2\pi)/(208.233-190.09)$, which is the envelope-function period of the third and fourth modes. Fig. 7.10-14B shows the response for 10 times the duration of that in (A). As was the case for the first two modes, since the ratio of the two higher mode frequencies is not a rational number, the high-frequency oscillation drifts relative to the low-frequency envelope function, which over time causes the Lissajous graph to be filled by the trace. Before leaving this section it is important to summarize. For the system discussed herein, where the motion was initiated with initial velocities, whirling motion occurred solely because two modes had their primary motion in noncollinear planes and their natural frequencies were different. This produced motion of the center of the disk that followed a whirling trajectory, which reversed direction at each "quarter cycle" point of the envelope function. Finally, it should be noted that if the initial velocity were orthogonal to all modes except one, the oscillations would occur only in that mode, without the whirling motion discussed above, since whirling requires two modes. In the next section, we will discuss the motion of our symmetric system, but with orthogonal degrees of freedom coupled by gyroscopic effects.

7.10.3.3 Rotating symmetric systems with gyroscopic effects

In the preceding section, it was shown how a lack of perfect symmetry in a nonspinning rod/disk system leads to whirling motion. In this section, we


will solve for the response of the perfectly symmetric system described in Section 7.10.3.1 (see Fig. 7.10-3), but include the gyroscopic effects due to disk rotation. In Section 7.10.3.1, we presented the equations of motion (see Eq. 7.10-25), which included the moments caused by gyroscopic effects. These effects will couple the vibration in the x-z and y-z coordinate planes and eliminate the symmetry needed for modes with identical frequencies to exist. We will not include damping in this discussion, to better explore the effects of gyroscopic moments on modes. Once we include excitation forces, which will be done in the next section, damping will also be included. The equations of motion for our four-degree-of-freedom system, with gyroscopic effects and no damping, written in first-order form are (see Sections 7.10.2 and 7.10.3.1)

$$\begin{bmatrix} [0] & [m] \\ [m] & -[T] \end{bmatrix}\begin{Bmatrix} \ddot{w}(t) \\ \dot{w}(t) \end{Bmatrix} + \begin{bmatrix} -[m] & [0] \\ [0] & [k] \end{bmatrix}\begin{Bmatrix} \dot{w}(t) \\ w(t) \end{Bmatrix} = \begin{Bmatrix} \{0\} \\ \{0\} \end{Bmatrix}, \qquad [\tilde{M}]\{\dot{W}(t)\} + [\tilde{K}]\{W(t)\} = \{0\} \quad (7.10\text{-}36)$$

where $[\tilde{M}]$ is invertible, $[\tilde{K}]$ is symmetric, $[T]$ is given by Eq. (7.10-23), $[m] = [m_R] + [m_D]$ [see Eqs. (7.10-21) and (7.10-22)], and $[k]$ is given by Eq. (7.10-20). Letting $\{W(t)\}_j = \{{}^{R}w\}_j e^{\lambda_j t}$ produces the eigenproblem,

$$\left(\lambda_j[I] + [A]\right)\{{}^{R}w\}_j = \{0\} \quad (7.10\text{-}37)$$

where

$$[A] = [\tilde{M}]^{-1}[\tilde{K}] = \begin{bmatrix} -[m]^{-1}[T] & [m]^{-1}[k] \\ -[I] & [0] \end{bmatrix} \quad (7.10\text{-}38)$$

and

$$[\tilde{M}]^{-1} = \begin{bmatrix} [m]^{-1}[T][m]^{-1} & [m]^{-1} \\ [m]^{-1} & [0] \end{bmatrix} \quad (7.10\text{-}39)$$

The superscript, R, on the eigenvector in Eq. (7.10-37) indicates that we are computing the right eigenvectors and will need to distinguish these from the left eigenvectors, which will be different because of the skew-symmetry introduced by the gyroscopic effects (see Chapter 6, Section 6.11.3, and Section 7.10.3.5 in this chapter).
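The distinction between right and left eigenvectors can be sketched with scipy.linalg.eig, which returns both. For distinct eigenvalues the left and right sets are biorthogonal; the matrix below is illustrative, not the rod/disk system.

```python
import numpy as np
from scipy.linalg import eig

# A nonsymmetric matrix, such as [A] in Eq. (7.10-38); illustrative values.
A = np.array([[0.0, 2.0, 1.0],
              [-2.0, 0.0, 0.5],
              [0.3, -0.5, 1.0]])

lam, vl, vr = eig(A, left=True, right=True)

# scipy's convention: vl[:, i].conj().T @ A = lam[i] * vl[:, i].conj().T.
# For distinct eigenvalues, left and right eigenvectors are biorthogonal,
# so vl^H vr is diagonal.
B = vl.conj().T @ vr
off_diag = B - np.diag(np.diag(B))
assert np.max(np.abs(off_diag)) < 1e-8
```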


We will first solve for the modes of the system for increasing values of the disk spin rate, U. Because of the gyroscopic effects, the modes will be nonclassical (complex); see Chapter 6, Section 6.11, for discussion of nonclassical modes, and Chapter 8 for eigenproblem solution methods. Also, since the damping of the system was set to zero, the eigenvalues will be pure imaginary. Hence, the circular natural frequencies will simply be the imaginary portions of the computed eigenvalues. Later we will discuss the mode shapes, but first we will compare the natural frequencies. Figure 7.10-15 shows the computed circular natural frequencies plotted against the disk’s spin rate, U; this type of plot is referred to as a Campbell diagram (Campbell, 1924). The first item to note is that for a spin rate of zero the natural frequencies are as computed in Section 7.10.3.1, Eq. (7.10-26), and occur in two pairs of identical values, i.e., 35.982 rad/s and 190.09 rad/s. For nonzero values of the spin rate, however, the mode pairs cannot have identical frequencies since the gyroscopic effects couple the previously uncoupled degrees of freedom. Because the coupling effect increases with increasing spin rate, the frequencies of the mode pairs move further apart, with one increasing and the other decreasing in each pair.
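The Campbell-diagram computation can be sketched by sweeping the spin rate. The sketch below reconstructs $[k]$ from the rounded $[m]$, $[\phi]$, and frequencies printed in Section 7.10.3.1 (so results match the text only to that rounding) and confirms that the repeated pairs at zero spin split once the disk rotates.

```python
import numpy as np

# Rounded [m] and [phi] printed in Section 7.10.3.1; [k] is recovered from
# the spectral expansion k = m phi diag(wn^2) phi^T m (phi mass-normalized).
m = np.array([[0.0428, 0, 0, -0.0007],
              [0, 0.0428, 0.0007, 0],
              [0, 0.0007, 0.3839, 0],
              [-0.0007, 0, 0, 0.3839]])
phi = np.array([[0.0, 4.512, 0.0, -1.735],
                [4.512, 0.0, 1.735, 0.0],
                [-0.588, 0.0, 1.503, 0.0],
                [0.0, 0.588, 0.0, 1.503]])
wn = np.array([35.982, 35.982, 190.09, 190.09])
k = m @ phi @ np.diag(wn**2) @ phi.T @ m
izz = 0.7628

def campbell_freqs(spin):
    """Natural frequencies (rad/s) of the undamped system at a given spin rate."""
    T = np.zeros((4, 4))
    T[2, 3], T[3, 2] = izz * spin, -izz * spin
    Z, I = np.zeros((4, 4)), np.eye(4)
    # State-space: d/dt [w; wdot] = [[0, I], [-m^-1 k, m^-1 T]] [w; wdot]
    A = np.block([[Z, I], [-np.linalg.solve(m, k), np.linalg.solve(m, T)]])
    lam = np.linalg.eigvals(A)
    return np.sort(np.abs(lam.imag))[::2]   # each frequency appears twice

f0 = campbell_freqs(0.0)     # repeated pairs at zero spin
f40 = campbell_freqs(40.0)   # pairs split once the disk spins
```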

FIGURE 7.10-15 Circular natural frequencies, un rad/sec, of system shown in Fig. 7.10-3 as a function of the disk spin rate, U rad/sec. Sloping straight line corresponds to un ¼ U.


This causes the lower frequency mode in the higher frequency pair to approach asymptotically the frequency of the lower-frequency-pair mode that is increasing in frequency; see $\omega_{n2}$ and $\omega_{n3}$. The sloping straight line in Fig. 7.10-15 corresponds to $\omega_n = \Omega$. The intersections of this line with the natural frequency lines are the disk rotation frequencies that would coincide with the natural frequencies and, therefore, indicate disk spin frequencies that, without any additional insight, should be avoided. Here, "avoided" means not just the frequencies but also some conservative frequency band around each frequency to protect against errors in modeling, test measurements, and/or changes in the system over time. This figure emphasizes the critical need to include gyroscopic effects when analyzing the dynamics of a system with spinning disks. Had the gyroscopic effects not been included, we would have concluded erroneously that the "stay-out" frequency ranges were centered at 36 and 190 rad/s. As can be ascertained from Fig. 7.10-15, the "stay-out" frequency ranges should actually be centered near 32, 41, and 124 rad/s. The fourth-mode natural frequency is not a concern since it is increasing with increasing spin rate, and there is sufficient separation between the two. The mode shapes for the above system are complex. This means that node points are not stationary, i.e., the phase relationship between the coordinates is neither 0° nor 180°, as it is with classical mode shapes. In addition, as indicated, because $[\tilde{M}]$ is not symmetric, the eigensolution will have right and left eigenvectors; the eigenvalues, however, will be the same. Since the natural frequencies and mode shapes are a function of the disk's spin rate, we will select the systems where $\Omega = 40$ and $\Omega = 200$ for further discussion. The eigenvalues, and the right eigenvectors corresponding to the displacement coordinates, i.e., the lower partition of $\{{}^R w\}_j$, for $\Omega = 40$ are

$$\{\lambda_1 \ \ \lambda_2 \ \ \lambda_3 \ \ \lambda_4\}_{\Omega=40} = \{0.0+i30.81, \ \ 0.0+i41.18, \ \ 0.0+i160.56, \ \ 0.0+i229.68\}$$

½ fR wd g1 fR wd g2 fR wd g3 fR wd g4 U¼40 ¼ 0:0000 þ i0:0325

6 60:0325 þ i0:0000 6 6 6 6 0:0045 þ i0:0000 4 0:0000 þ i0:0045

0:0243 þ i0:0000 0:0062 þ i0:0000 0:0000 þ i0:0032 0:0000 þ i0:0243

0:0000  i0:0062

0:0000 þ i0:0029

0:0000  i0:0035

0:0029 þ i0:0000

0:0035 þ i0:0000

3

7 0:0032 þ i0:0000 7 7 7 7 0:0044 þ i0:0000 7 5

0:0000  i0:0044 (7.10-40)

7.10 Dynamic behavior as a function of response


Note that we did not show the corresponding complex conjugate eigenvalues and eigenvectors. The eigenvalues and right eigenvectors for Ω = 200 are

$$
\{\lambda_1 \;\; \lambda_2 \;\; \lambda_3 \;\; \lambda_4\}_{\Omega=200} = \{\,0.0 + i16.4 \;\; 0.0 + i56.21 \;\; 0.0 + i108.79 \;\; 0.0 + i466.46\,\}
$$

$$
\big[\{^R w_d\}_1 \; \{^R w_d\}_2 \; \{^R w_d\}_3 \; \{^R w_d\}_4\big]_{\Omega=200} =
\begin{bmatrix}
0.0000 + i0.0610 & 0.0000 + i0.0178 & 0.0092 - i0.0000 & 0.0000 - i0.0003 \\
-0.0610 + i0.0000 & 0.0178 + i0.0000 & 0.0000 + i0.0092 & 0.0003 + i0.0000 \\
-0.0097 - i0.0000 & -0.0014 - i0.0000 & 0.0000 + i0.0015 & 0.0021 + i0.0000 \\
0.0000 + i0.0097 & 0.0000 + i0.0014 & 0.0015 + i0.0000 & 0.0000 + i0.0021
\end{bmatrix}
\tag{7.10-41}
$$
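The purely imaginary eigenvalues above can be checked numerically. The sketch below, a best-effort transcription of the example (the modal frequencies are the book's; the sign pattern of the skew-symmetric gyroscopic matrix is an assumption inferred from the surrounding results), casts the undamped gyroscopic system in first-order form and verifies that the eigenvalue real parts are zero:

```python
import numpy as np

# Numerical check of Eq. (7.10-40): with gyroscopic coupling but no damping,
# the eigenvalues remain purely imaginary (no energy added or removed).
# [wn^2] uses the book's modal frequencies; the skew-symmetric matrix is a
# best-effort transcription of -[phi]^T [T] [phi] at Omega = 40 rad/s.
W = 40.0
wn2 = np.diag([1294.66, 1294.66, 36134.58, 36134.58])
G = W * np.array([[0., -0.2634, 0., -0.6739],
                  [0.2634, 0., 0.6739, 0.],
                  [0., -0.6739, 0., -1.7240],
                  [0.6739, 0., 1.7240, 0.]])   # assumed sign pattern

# First-order (state-space) form of I q'' + G q' + wn2 q = 0, state z = [q'; q]
A = np.block([[-G, -wn2], [np.eye(4), np.zeros((4, 4))]])
lam = np.linalg.eigvals(A)

assert np.max(np.abs(lam.real)) < 1e-8          # gyroscopics add no damping
print(np.sort(lam.imag[lam.imag > 0]))          # compare with Eq. (7.10-40)
```

The printed positive imaginary parts should fall near the 30.81, 41.18, 160.56, and 229.68 rad/s values quoted above if the transcription is faithful.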

As indicated in the earlier discussion, gyroscopic effects do not add damping to the system; this is indicated by the fact that the real parts of the eigenvalues are zero for our undamped system. However, we will show later that, when damping is included, gyroscopic effects will alter the response decay characteristics. As discussed in Chapter 6, all complex modes of a system without rigid body modes will occur in complex conjugate pairs. This was the case for our problem; however, in (7.10-40) and (7.10-41) we chose to display only one set. As can be determined by inspection of the eigenvectors, they are complex, since there does not exist any single rotation that would align all the mode shape values in any one mode along the real axis.

Next, we will add damping such that the resulting modes for the nonrotating system will still be classical. In Chapter 6, Section 6.10.2.1, we described an approach for deriving physical-coordinate damping matrices that produce classical normal modes and diagonal damping matrices in modal coordinates, i.e.,

$$[c] = [m][\phi][2\zeta\omega_n][\phi]^T[m] \tag{7.10-42}$$

We will assume ζ = 0.01; then, substituting the sum of the mass matrices in Eqs. (7.10-21) and (7.10-22), and the mode shapes and circular natural frequencies from Eq. (7.10-26), we obtain

$$
[c] = \begin{bmatrix}
0.0483 & 0 & 0 & 0.1348 \\
0 & 0.0483 & 0.1348 & 0 \\
0 & 0.1348 & 1.3069 & 0 \\
0.1348 & 0 & 0 & 1.3069
\end{bmatrix}
\tag{7.10-43}
$$
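The construction in Eq. (7.10-42) can be sketched numerically. The mass and stiffness values below are illustrative stand-ins (not the book's rotor model); the point is that the resulting [c] is exactly diagonalized by the undamped modes, and that the damped eigenvalues are −ζωn ± iωd, so their moduli equal the undamped natural frequencies:

```python
import numpy as np

# Sketch of Eq. (7.10-42): a physical damping matrix
# [c] = [m][phi][2 zeta wn][phi]^T [m] that yields classical normal modes.
# Mass and stiffness values are illustrative, not the book's rotor model.
m = np.diag([1.0, 1.0, 0.1, 0.1])
k = np.array([[1300., 0., 0., 120.],
              [0., 1300., 120., 0.],
              [0., 120., 3600., 0.],
              [120., 0., 0., 3600.]])

# Mass-normalized undamped modes: phi^T m phi = I, phi^T k phi = diag(wn^2)
mih = np.diag(1.0 / np.sqrt(np.diag(m)))
wn2, v = np.linalg.eigh(mih @ k @ mih)
phi = mih @ v
wn = np.sqrt(wn2)

zeta = 0.01
c = m @ phi @ np.diag(2 * zeta * wn) @ phi.T @ m     # Eq. (7.10-42)

# In modal coordinates the damping is diagonal: phi^T c phi = [2 zeta wn]
assert np.allclose(phi.T @ c @ phi, np.diag(2 * zeta * wn), atol=1e-10)

# Damped eigenvalues are -zeta*wn +/- i*wd, so |lambda| equals wn
A = np.block([[-np.linalg.solve(m, c), -np.linalg.solve(m, k)],
              [np.eye(4), np.zeros((4, 4))]])
lam = np.linalg.eigvals(A)
assert np.allclose(np.sort(np.abs(lam)), np.sort(np.repeat(wn, 2)), rtol=1e-8)
```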

Substituting into Eq. (7.10-25), setting [T] = [0], and solving the eigenvalue problem produces the same mode shapes and undamped circular natural frequencies as in Eq. (7.10-26). We will repeat the two cases presented above, i.e., Ω = 40 and Ω = 200, but this time we will include the damping from Eq. (7.10-43). The eigenvalues and right eigenvectors corresponding to the displacement coordinates, i.e., the lower partition of $\{^R w\}_j$, for Ω = 40 are

$$
\{\lambda_1 \;\; \lambda_2 \;\; \lambda_3 \;\; \lambda_4\}_{\Omega=40} = \{-0.3023 + i30.8066 \;\; -0.4054 + i41.1749 \;\; -1.5723 + i160.5479 \;\; -2.2414 + i229.6755\}
$$

$$
\big[\{^R w_d\}_1 \; \{^R w_d\}_2 \; \{^R w_d\}_3 \; \{^R w_d\}_4\big]_{\Omega=40} =
\begin{bmatrix}
0.0000 - i0.0321 & 0.0236 + i0.0004 & 0.0056 + i0.0007 & 0.0001 + i0.0031 \\
0.0321 - i0.0000 & 0.0004 + i0.0236 & 0.0007 + i0.0056 & 0.0031 - i0.0001 \\
-0.0045 + i0.0000 & 0.0000 - i0.0028 & 0.0004 + i0.0032 & 0.0042 - i0.0001 \\
0.0000 - i0.0045 & 0.0028 + i0.0000 & 0.0032 - i0.0004 & 0.0001 - i0.0042
\end{bmatrix}
\tag{7.10-44}
$$

The eigenvalues and right eigenvectors for Ω = 200 are

$$
\{\lambda_1 \;\; \lambda_2 \;\; \lambda_3 \;\; \lambda_4\}_{\Omega=200} = \{-0.1132 + i16.3983 \;\; -0.4786 + i56.2127 \;\; -0.8854 + i108.7894 \;\; -3.0443 + i466.4549\}
$$

$$
\big[\{^R w_d\}_1 \; \{^R w_d\}_2 \; \{^R w_d\}_3 \; \{^R w_d\}_4\big]_{\Omega=200} =
\begin{bmatrix}
0.0001 - i0.0605 & 0.0002 + i0.0174 & 0.0091 + i0.0000 & 0.0000 - i0.0003 \\
0.0605 - i0.0001 & 0.0174 - i0.0002 & 0.0000 + i0.0091 & 0.0003 + i0.0000 \\
-0.0096 + i0.0000 & 0.0014 + i0.0000 & 0.0000 + i0.0015 & 0.0021 + i0.0000 \\
0.0000 - i0.0096 & 0.0000 + i0.0014 & 0.0015 + i0.0000 & 0.0000 + i0.0021
\end{bmatrix}
\tag{7.10-45}
$$

The first item to note is that the imaginary parts of the eigenvalues for the systems with damping are close to those without. However, the real parts of the eigenvalues are not related to the imaginary parts as they would be


for systems without gyroscopic effects. If we remove the gyroscopic effects, i.e., set [T] = [0], we obtain the following eigenvalues:

$$
\{\lambda_1 \;\; \lambda_2 \;\; \lambda_3 \;\; \lambda_4\}_{\Omega=0} = \{-0.3598 + i35.98 \;\; -0.3598 + i35.98 \;\; -1.9009 + i190.08 \;\; -1.9009 + i190.08\}
\tag{7.10-46}
$$

Recall that the damping matrix was derived to yield classical normal modes. It therefore yields uncoupled, single-coordinate (single-degree-of-freedom) equations of motion in modal coordinates. In Chapter 6, Section 6.11.1, we showed that when the equation of motion of a single-degree-of-freedom system is cast in first-order form, we obtain two eigenvalues, λ1 and λ2, that are complex conjugates of each other and are of the form

$$\lambda_1, \lambda_2 = -\zeta\omega_n \pm i\omega_d \tag{7.10-47}$$

From Eq. (7.10-46) we observe that the above relationship holds to machine precision, since $\big(|\mathrm{Re}(\lambda_i)|/0.01\big)\sqrt{1 - 0.01^2} = |\mathrm{Im}(\lambda_i)|$. However, this relationship does not hold when gyroscopic effects are included, as can be ascertained from the eigenvalues in Eqs. (7.10-44) and (7.10-45). This implies that gyroscopic effects alter the energy decay mechanism of the system, provided the system has damping. We will discuss this in more detail in Section 7.10.4.

7.10.3.4 Rotating systems with gyroscopic effects and excitation

In this section, we will solve for the steady-state response of the four-degree-of-freedom system discussed in the previous section, but we will include an imbalance in the rotating disk. One critical assumption is that the imbalance is caused by an imperfection that is sufficiently small that it can be ignored when deriving the overall mass matrix of the system; in other words, it has negligible effect on the stationary total mass and mass moments of inertia. For example, assume that the disk is slightly heavier on one side of the spin axis than on the other. This produces a static unbalance, i.e., a slight offset of the center of mass from the spin axis, and, when the disk is spinning, a radially outward force that is a function of the mass imbalance and the spin rate of the rotating disk. By assuming that the imbalance has negligible effect on the static mass properties, we will be able to assume that the rotating


mass properties, as far as the dynamics of the system are concerned, are the same as those of the static system; the disk rotation can then be included as gyroscopic moments, and the imbalance can be treated as a force whose magnitude does not depend on the deformation of the system. If the forces due to the imbalance were functions of the distortion of the system, e.g., bending of the rotating shaft, then the associated whirling oscillations could result in instability. That whirling phenomenon is different from what is being discussed herein.

In Section 7.10.2, we derived the equations of motion for a system with a spinning disk that was subjected to "external" forces (see Eq. 7.10-15); we repeat the equation here to facilitate the discussion,

$$[I]\{\ddot q(t)\} + [\Lambda]\{\dot q(t)\} + [\omega_n^2]\{q(t)\} = [\phi]^T\{f(t)\} \tag{7.10-48}$$

where $[\Lambda] = [2\zeta\omega_n] - [\phi]^T[T][\phi]$ and, for our example problem,

$$
[T] = \begin{bmatrix}
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 \\
0 & 0 & 0 & I_{zz}\Omega \\
0 & 0 & -I_{zz}\Omega & 0
\end{bmatrix}
= \begin{bmatrix}
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0.7628 \\
0 & 0 & -0.7628 & 0
\end{bmatrix}\Omega
\tag{7.10-49}
$$

where Ω is the counterclockwise rotation rate of the disk in rad/s. The undamped modes without gyroscopic effects are as shown in (7.10-26). In addition, we will assume that, absent gyroscopic effects, the damping properties yield classical normal modes and, hence, [2ζω_n] will be a diagonal matrix; we will assume that ζ = 0.01.

Assume that the force due to the imbalance is aligned with the positive x-axis coordinate direction at t = 0; hence,

$$
\{f(t)\} = \begin{Bmatrix} f_x \\ f_y \\ M_{\theta x} \\ M_{\theta y} \end{Bmatrix}
= \begin{Bmatrix} A_o\Omega^2 \cos\Omega t \\ A_o\Omega^2 \sin\Omega t \\ 0 \\ 0 \end{Bmatrix}
\tag{7.10-50}
$$

where, for our example problem, we let $A_o = (\Delta m)(e) = 0.00001$.
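The force in Eq. (7.10-50) is a constant-magnitude vector rotating with the disk. A minimal numerical check (values taken from the example above) confirms both properties:

```python
import numpy as np

# Sketch of Eq. (7.10-50): the static-imbalance force has constant magnitude
# A_o*Omega^2 and rotates counterclockwise with the disk; A_o = (dm)(e) is
# the imbalance mass times its eccentricity.
Ao, W = 1e-5, 40.0
t = np.linspace(0.0, 2 * np.pi / W, 9)          # one revolution
fx = Ao * W**2 * np.cos(W * t)
fy = Ao * W**2 * np.sin(W * t)

mag = np.hypot(fx, fy)
assert np.allclose(mag, Ao * W**2)              # constant magnitude

# Counterclockwise rotation: the angle atan2(fy, fx) increases with time
ang = np.unwrap(np.arctan2(fy, fx))
assert np.all(np.diff(ang) > 0)
```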


Performing the indicated matrix operations produces the following system matrices for Eq. (7.10-48):

$$
[\Lambda] = [2\zeta\omega_n] - [\phi]^T[T][\phi]
= \begin{bmatrix}
0.7196 & 0 & 0 & 0 \\
0 & 0.7196 & 0 & 0 \\
0 & 0 & 3.8018 & 0 \\
0 & 0 & 0 & 3.8018
\end{bmatrix}
+ \begin{bmatrix}
0 & -0.2634 & 0 & -0.6739 \\
0.2634 & 0 & 0.6739 & 0 \\
0 & -0.6739 & 0 & -1.7240 \\
0.6739 & 0 & 1.7240 & 0
\end{bmatrix}\Omega
\tag{7.10-51}
$$

$$
[\omega_n^2] = \begin{bmatrix}
1294.66 & 0 & 0 & 0 \\
0 & 1294.66 & 0 & 0 \\
0 & 0 & 36134.58 & 0 \\
0 & 0 & 0 & 36134.58
\end{bmatrix}
\tag{7.10-52}
$$

and

$$
[\phi]^T\{f(t)\} = A_o\Omega^2\begin{Bmatrix} -4.5117\sin\Omega t \\ 4.5117\cos\Omega t \\ -1.7349\sin\Omega t \\ 1.7349\cos\Omega t \end{Bmatrix}
= A_o\Omega^2\begin{Bmatrix} 0 \\ 4.5117 \\ 0 \\ 1.7349 \end{Bmatrix}\cos\Omega t
+ A_o\Omega^2\begin{Bmatrix} -4.5117 \\ 0 \\ -1.7349 \\ 0 \end{Bmatrix}\sin\Omega t
= \{L\}\cos\Omega t + \{P\}\sin\Omega t
\tag{7.10-53}
$$


Hence, the problem to be solved is

$$[I]\{\ddot q(t)\} + [\Lambda]\{\dot q(t)\} + [\omega_n^2]\{q(t)\} = \{L\}\cos\Omega t + \{P\}\sin\Omega t \tag{7.10-54}$$

where the quantities in the equation are defined above. Let

$$\{G\} = \{L\} - i\{P\} \tag{7.10-55}$$

Then, using Euler's formula, $e^{i\Omega t} = \cos\Omega t + i\sin\Omega t$, we obtain

$$\{L\}\cos\Omega t + \{P\}\sin\Omega t = \mathrm{Re}\big(\{G\}e^{i\Omega t}\big) \tag{7.10-56}$$

Next, we analytically extend the solution by considering the complex differential equation,

$$[I]\{\ddot q(t)\}_G + [\Lambda]\{\dot q(t)\}_G + [\omega_n^2]\{q(t)\}_G = \{G\}e^{i\Omega t} \tag{7.10-57}$$

The solution we seek will then be

$$\{q(t)\} = \mathrm{Re}\big(\{q(t)\}_G\big) \tag{7.10-58}$$

The solution to Eq. (7.10-57) will consist of the sum of the solution to the homogeneous equation and a particular solution for the term on the right-hand side. Since we seek the steady-state response, and for a stable system the solution to the homogeneous equation will decay to a negligible value because of damping, we only need to solve for the particular solution. Assume a solution $\{q(t)\}_G = \{\psi\}_G e^{i\Omega t}$; substituting it and its time derivatives produces

$$
\big(-\Omega^2[I] + i\Omega[\Lambda] + [\omega_n^2]\big)\{\psi\}_G e^{i\Omega t} = \{G\}e^{i\Omega t}
$$
$$
\big([\omega_n^2] - \Omega^2[I] + i\Omega[\Lambda]\big)\{\psi\}_G = \{G\}
\tag{7.10-59}
$$

Solving for $\{\psi\}_G$ we obtain

$$\{\psi\}_G = \big([\omega_n^2] - \Omega^2[I] + i\Omega[\Lambda]\big)^{-1}\{G\} \tag{7.10-60}$$

Substituting into our assumed solution, and then solving for $\{w(t)\}$, produces

$$
\{w(t)\} = [\phi]\{q(t)\} = [\phi]\,\mathrm{Re}\big(\{\psi\}_G e^{i\Omega t}\big)
= [\phi]\,\mathrm{Re}\Big(\big([\omega_n^2] - \Omega^2[I] + i\Omega[\Lambda]\big)^{-1}\{G\}e^{i\Omega t}\Big)
= [\phi]\,\mathrm{Re}\big(\big([A(\Omega)]_G + i[B(\Omega)]_G\big)\{G\}e^{i\Omega t}\big)
\tag{7.10-61}
$$
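The complex-extension steps in Eqs. (7.10-55)–(7.10-60) can be sketched directly. The matrices below use the diagonal values quoted above; the gyroscopic sign pattern and modal-force signs are best-effort transcriptions. Regardless of those assumptions, the code verifies that the particular solution q(t) = Re(ψ_G e^{iΩt}) satisfies Eq. (7.10-54) exactly:

```python
import numpy as np

# Sketch of Eqs. (7.10-54)-(7.10-60): steady-state response of
# I q'' + Lam q' + wn2 q = {L} cos(Wt) + {P} sin(Wt)
# via the complex extension {G} = {L} - i{P}.
wn2 = np.diag([1294.66, 1294.66, 36134.58, 36134.58])
D = np.diag([0.7196, 0.7196, 3.8018, 3.8018])

def Lam(W):
    # [Lambda] = [2 zeta wn] - [phi]^T [T] [phi]; skew part transcribed best-effort
    S = np.array([[0., -0.2634, 0., -0.6739],
                  [0.2634, 0., 0.6739, 0.],
                  [0., -0.6739, 0., -1.7240],
                  [0.6739, 0., 1.7240, 0.]])
    return D + W * S

Ao = 1e-5
W = 60.0                                         # sample spin rate, rad/s
L = Ao * W**2 * np.array([0., 4.5117, 0., 1.7349])
P = Ao * W**2 * np.array([-4.5117, 0., -1.7349, 0.])

G = L - 1j * P                                   # Eq. (7.10-55)
psi = np.linalg.solve(wn2 - W**2 * np.eye(4) + 1j * W * Lam(W), G)   # Eq. (7.10-60)

# The particular solution q(t) = Re(psi e^{iWt}) must satisfy Eq. (7.10-54)
for t in np.linspace(0.0, 0.2, 7):
    e = np.exp(1j * W * t)
    q, qd, qdd = (psi * e).real, (1j * W * psi * e).real, (-W**2 * psi * e).real
    rhs = L * np.cos(W * t) + P * np.sin(W * t)
    assert np.allclose(qdd + Lam(W) @ qd + wn2 @ q, rhs, atol=1e-9)
print("steady-state particular solution verified")
```

Sweeping `W` with this same solve is how the response magnitudes plotted against spin rate (Fig. 7.10-16) would be generated.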


$[A(\Omega)]_G$ is the real part and $[B(\Omega)]_G$ is the imaginary part of $\big([\omega_n^2] - \Omega^2[I] + i\Omega[\Lambda]\big)^{-1}$. Substituting Eq. (7.10-55) and applying Euler's formula yields

$$
\{w(t)\} = [\phi]\,\mathrm{Re}\big(\big([A(\Omega)]_G + i[B(\Omega)]_G\big)(\{L\} - i\{P\})(\cos\Omega t + i\sin\Omega t)\big)
$$
$$
= [\phi]\Big(\big([A(\Omega)]_G\{L\} + [B(\Omega)]_G\{P\}\big)\cos\Omega t - \big([B(\Omega)]_G\{L\} - [A(\Omega)]_G\{P\}\big)\sin\Omega t\Big)
\tag{7.10-62}
$$

Fig. 7.10-16 shows the first two rows of $[\phi]\big([A(\Omega)]_G\{L\} + [B(\Omega)]_G\{P\}\big)$ and $[\phi]\big([B(\Omega)]_G\{L\} - [A(\Omega)]_G\{P\}\big)$ plotted against the spin rate of the disk, Ω. Recall that these two rows correspond to the x- and y-coordinates, respectively; therefore, the plots indicate the spin rates at which we will have elevated translational response levels. The first observation is that at 31.85 rad/s and 124.25 rad/s the response is significantly higher than away from these frequencies. Recall that in Fig. 7.10-15 we plotted the natural frequencies, ω_n, of the modes with gyroscopic effects versus the disk spin rate, Ω. In addition, the line ω_n = Ω was plotted. The intersections between this line and the natural frequency curves indicate frequencies where the excitation frequency coincides with a natural frequency. Three such frequencies were identified, namely 31.8, 41.3, and 124.2 rad/s. In Fig. 7.10-16 we observe that the elevated responses coincide with two of the three frequencies, i.e., 31.8 and 124.2 rad/s. However, there is no increased response level in the vicinity of 41 rad/s, as one might expect. So why is the mode in the vicinity of 41 rad/s not responding when the spin rate coincides with its natural frequency? The answer will be provided in Section 7.10.3.6, Complex Modal Forces.

Fig. 7.10-17 shows the Lissajous plots of the x-coordinate versus y-coordinate steady-state responses, i.e., the first two rows of Eq. (7.10-62), for selected values of Ω. Fig. 7.10-17A covers the range across the intersection of Ω with the corresponding first-mode natural frequency, and Fig. 7.10-17B covers the range across the intersection of Ω with the corresponding third-mode natural frequency. As can be seen, the center of the disk undergoes


FIGURE 7.10-16 First two rows of the cosine and sine proportional terms in Eq. (7.10-62). (A) x coordinate; (B) y coordinate.

a counterclockwise whirling motion that increases in amplitude as Ω approaches either natural frequency and then decreases once past that frequency. Solid circles correspond to spin rates below or at the natural frequencies, and the dashed lines correspond to frequencies past the natural frequencies. The responses shown in Fig. 7.10-17A correspond to spin rates, Ω, of 30.0, 31.0, 31.55, 31.85, 32.15, 32.5, and 33.5 rad/s. The responses shown in Fig. 7.10-17B correspond to spin rates, Ω, of 121.0, 122.0, 123.45, 124.3, 125.3, 127.0, and 128.0 rad/s.


FIGURE 7.10-17 Lissajous graphs of the x-coordinate versus y-coordinate steady-state response at various values of spin rate, Ω, in the vicinities of the first, (A), and third, (B), mode natural frequencies. Solid lines correspond to spin rates below or at the natural frequencies; dashed lines correspond to frequencies past the natural frequencies.

As a final note in this section, if the disk were to spin in the opposite direction of the system described above (i.e., clockwise), the force term would be

$$
[\phi]^T\{f(t)\} = A_o(-\Omega)^2\begin{Bmatrix} -4.5117\sin(-\Omega t) \\ 4.5117\cos(-\Omega t) \\ -1.7349\sin(-\Omega t) \\ 1.7349\cos(-\Omega t) \end{Bmatrix}
= A_o\Omega^2\begin{Bmatrix} 0 \\ 4.5117 \\ 0 \\ 1.7349 \end{Bmatrix}\cos(-\Omega t)
+ A_o\Omega^2\begin{Bmatrix} -4.5117 \\ 0 \\ -1.7349 \\ 0 \end{Bmatrix}\sin(-\Omega t)
= \{L\}\cos(-\Omega t) + \{P\}\sin(-\Omega t)
\tag{7.10-63}
$$


and the solution would be

$$
\{w(t)\} = [\phi]\Big(\big([\hat A(\Omega)]_G\{L\} + [\hat B(\Omega)]_G\{P\}\big)\cos(-\Omega t) - \big([\hat B(\Omega)]_G\{L\} - [\hat A(\Omega)]_G\{P\}\big)\sin(-\Omega t)\Big)
$$
$$
= [\phi]\Big(\big([\hat A(\Omega)]_G\{L\} + [\hat B(\Omega)]_G\{P\}\big)\cos\Omega t + \big([\hat B(\Omega)]_G\{L\} - [\hat A(\Omega)]_G\{P\}\big)\sin\Omega t\Big)
\tag{7.10-64}
$$

where all terms are as defined for counterclockwise rotation, with the exception that $[\hat A(\Omega)]_G$ is the real part and $[\hat B(\Omega)]_G$ is the imaginary part of $\big([\omega_n^2] - \Omega^2[I] - i\Omega[\hat\Lambda]\big)^{-1}$, and $[\hat\Lambda] = [2\zeta\omega_n] + [\phi]^T[T][\phi]$. The graphs of the first two rows of $[\phi]\big([\hat A(\Omega)]_G\{L\} + [\hat B(\Omega)]_G\{P\}\big)$ and $[\phi]\big([\hat B(\Omega)]_G\{L\} - [\hat A(\Omega)]_G\{P\}\big)$ will be identical to those shown in Fig. 7.10-16. In addition, the response Lissajous plots will be the same as those for counterclockwise rotation (Fig. 7.10-17), except that the center of the disk will be whirling in the clockwise direction. Hence, other than the direction of the whirling motion, the response of the system will be identical irrespective of the direction of disk rotation, which is as expected. Also, note that irrespective of the direction of disk rotation, the second mode is not excited; Section 7.10.3.6, Complex Modal Forces, will provide the reason for this.

7.10.3.5 Complex modal coordinates solution

We begin by casting Eq. (7.10-57) in first-order form,

$$
\begin{bmatrix} [0] & [I] \\ [I] & [\Lambda] \end{bmatrix}
\begin{Bmatrix} \{\ddot q(t)\}_G \\ \{\dot q(t)\}_G \end{Bmatrix}
+ \begin{bmatrix} -[I] & [0] \\ [0] & [\omega_n^2] \end{bmatrix}
\begin{Bmatrix} \{\dot q(t)\}_G \\ \{q(t)\}_G \end{Bmatrix}
= \begin{Bmatrix} \{0\} \\ \{G\} \end{Bmatrix} e^{i\Omega t}
$$
$$
[\hat M]\{\dot Q(t)\}_G + [\hat K]\{Q(t)\}_G = \{\hat G\}e^{i\Omega t}
\tag{7.10-65}
$$


where $\{G\} = \{L\} - i\{P\}$ (see Eqs. 7.10-55 and 7.10-56), $\{L\}$ and $\{P\}$ are given by Eq. (7.10-53), and $[\Lambda] = [2\zeta\omega_n] - [\phi]^T[T][\phi]$. With these definitions, the solution we seek is

$$\{Q(t)\} = \mathrm{Re}\big(\{Q(t)\}_G\big) \tag{7.10-66}$$

We will solve the homogeneous equation first by letting $\{Q(t)\}_G = \{^R w\}_j e^{\lambda_j t}$; this produces the eigenvalue problem,

$$\big(\lambda_j[\hat M] + [\hat K]\big)\{^R w\}_j = \{0\} \tag{7.10-67}$$

where $\{^R w\}_j$ is the jth right eigenvector. Since $[\Lambda]$ is skew symmetric, $[\hat M]$ will not be symmetric, and we must also compute the left eigenvectors, which satisfy

$$\{^L \bar w\}_j^T\big(\lambda_j[\hat M] + [\hat K]\big) = \{0\}^T \tag{7.10-68}$$

where $\{^L \bar w\}_j$ is the complex conjugate of the left eigenvector, $\{^L w\}_j$. This can be a point of confusion, and one should always verify that the conjugate transposes of the computed left eigenvectors satisfy Eq. (7.10-68) before using them. The eigenvalues and eigenvectors computed in (7.10-67) and (7.10-68) will be complex, and the eigenvalues will be identical. The left and right eigenvectors form a biorthogonal basis with respect to $[\hat M]$ (see Chapter 6, Section 6.11.3) in that

$$\{^L \bar w\}_p^T[\hat M]\{^R w\}_j = \begin{cases} 0, & p \ne j \\ m_p \ne 0, & p = j \end{cases} \tag{7.10-69}$$

Since the eigenvectors are unique to within a scalar, we will adopt the following normalization,

$$\big\|\{^R w\}_p\big\|_{\max} = 1 \tag{7.10-70}$$

where we will scale $\{^L \bar w\}_p$ by $m_p^{-1}$ so that

$$\{^L \bar w\}_p^T[\hat M]\{^R w\}_p = 1 \tag{7.10-71}$$

With this convention,

$$\{^L \bar w\}_p^T[\hat K]\{^R w\}_j = \begin{cases} 0, & p \ne j \\ -\lambda_p \ne 0, & p = j \end{cases} \tag{7.10-72}$$
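The right/left eigenvector machinery above can be sketched numerically. The pencil below uses the transcribed [Λ] at Ω = 40 (the gyroscopic sign pattern is a best-effort assumption); the conjugated left eigenvectors are obtained as right eigenvectors of the transposed pencil, and the biorthogonality of Eq. (7.10-69) is then checked directly:

```python
import numpy as np

# Sketch of Eqs. (7.10-67)-(7.10-69): eigenvectors of the nonsymmetric
# first-order pencil (lam*Mhat + Khat){w} = {0} and their biorthogonality.
W = 40.0
D = np.diag([0.7196, 0.7196, 3.8018, 3.8018])
S = np.array([[0., -0.2634, 0., -0.6739],
              [0.2634, 0., 0.6739, 0.],
              [0., -0.6739, 0., -1.7240],
              [0.6739, 0., 1.7240, 0.]])       # assumed sign pattern
Lam = D + W * S
wn2 = np.diag([1294.66, 1294.66, 36134.58, 36134.58])
I4, Z4 = np.eye(4), np.zeros((4, 4))

Mhat = np.block([[Z4, I4], [I4, Lam]])         # nonsymmetric: Lam has a skew part
Khat = np.block([[-I4, Z4], [Z4, wn2]])

# Right eigenvectors: (lam Mhat + Khat) r = 0  ->  A r = lam r
A = -np.linalg.solve(Mhat, Khat)
lam, R = np.linalg.eig(A)

# Conjugated left eigenvectors are right eigenvectors of the transposed pencil
lamL, Lbar = np.linalg.eig(-np.linalg.solve(Mhat.T, Khat.T))
order = [int(np.argmin(np.abs(lamL - l))) for l in lam]   # match eigenvalue order
Lbar = Lbar[:, order]

# Biorthogonality, Eq. (7.10-69): Lbar^T Mhat R is diagonal
B = Lbar.T @ Mhat @ R
off = B - np.diag(np.diag(B))
assert np.max(np.abs(off)) < 1e-6 * np.max(np.abs(np.diag(B)))
print(np.sort(lam.imag[lam.imag > 0]))         # compare with Eq. (7.10-44)
```

If the transcription is faithful, the printed positive imaginary parts should match the damped Ω = 40 eigenvalues of Eq. (7.10-44).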

Define the following transformation,

$$\{Q(t)\}_G = [^R w]\{U(t)\} \tag{7.10-73}$$

and then substitute it and its time derivative into Eq. (7.10-65). Premultiplying the resulting equation by the conjugate transposes of the left eigenvectors, $[^L \bar w]^T$, yields

$$
[^L \bar w]^T[\hat M][^R w]\{\dot U(t)\} + [^L \bar w]^T[\hat K][^R w]\{U(t)\} = [^L \bar w]^T\{\hat G\}e^{i\Omega t}
$$
$$
[I]\{\dot U(t)\} - [\lambda]\{U(t)\} = [^L \bar w]^T\{\hat G\}e^{i\Omega t}
\tag{7.10-74}
$$

where [I] is the identity matrix (see Eqs. 7.10-69 and 7.10-71), and [λ] is a diagonal matrix whose diagonal elements are defined in Eq. (7.10-72). Assuming a solution $\{U(t)\} = \{Y\}e^{i\Omega t}$ yields

$$\big(i\Omega[I] - [\lambda]\big)\{Y\}e^{i\Omega t} = [^L \bar w]^T\{\hat G\}e^{i\Omega t} \tag{7.10-75}$$

$$\{Y\} = [i\Omega - \lambda]^{-1}[^L \bar w]^T\{\hat G\} \tag{7.10-76}$$

Since $[i\Omega - \lambda]^{-1}$ is a diagonal matrix, because $[i\Omega - \lambda]$ is diagonal, we obtain

$$
\{Y\} = \begin{Bmatrix}
\dfrac{\{^L \bar w\}_1^T\{\hat G\}}{i\Omega - \lambda_1} \\[1ex] \vdots \\[1ex]
\dfrac{\{^L \bar w\}_N^T\{\hat G\}}{i\Omega - \lambda_N} \\[1ex]
\dfrac{\{^L \bar w\}_{N+1}^T\{\hat G\}}{i\Omega - \bar\lambda_1} \\[1ex] \vdots \\[1ex]
\dfrac{\{^L \bar w\}_{2N}^T\{\hat G\}}{i\Omega - \bar\lambda_N}
\end{Bmatrix}
= \begin{Bmatrix}
\dfrac{\{^L \bar w\}_1^T\{\hat G\}}{\zeta_1\omega_{n1} + i(\Omega - \omega_{d1})} \\[1ex] \vdots \\[1ex]
\dfrac{\{^L \bar w\}_N^T\{\hat G\}}{\zeta_N\omega_{nN} + i(\Omega - \omega_{dN})} \\[1ex]
\dfrac{\{^L \bar w\}_{N+1}^T\{\hat G\}}{\zeta_1\omega_{n1} + i(\Omega + \omega_{d1})} \\[1ex] \vdots \\[1ex]
\dfrac{\{^L \bar w\}_{2N}^T\{\hat G\}}{\zeta_N\omega_{nN} + i(\Omega + \omega_{dN})}
\end{Bmatrix}
\tag{7.10-77}
$$

Note that we have ordered the elements of {Y} such that the first N elements are associated with the eigenvalues that have positive imaginary parts, $\lambda_1, \ldots, \lambda_N$, and their corresponding eigenvectors, $\{^L \bar w\}_1, \ldots, \{^L \bar w\}_N$. The second N elements are associated with the complex conjugate eigenvalues, $\bar\lambda_1, \ldots, \bar\lambda_N$, and their corresponding eigenvectors, $\{^L \bar w\}_{N+1}, \ldots, \{^L \bar w\}_{2N}$. The complex eigenvectors, right and left, must be ordered consistent with the eigenvalue order in Eq. (7.10-77). Multiplying the numerator and denominator of each term in Eq. (7.10-77) by the complex conjugate of the denominator yields

$$
\{Y\} = \begin{Bmatrix}
\dfrac{\zeta_1\omega_{n1} - i(\Omega - \omega_{d1})}{(\zeta_1\omega_{n1})^2 + (\Omega - \omega_{d1})^2}\,\{^L \bar w\}_1^T\{\hat G\} \\[1ex] \vdots \\[1ex]
\dfrac{\zeta_N\omega_{nN} - i(\Omega - \omega_{dN})}{(\zeta_N\omega_{nN})^2 + (\Omega - \omega_{dN})^2}\,\{^L \bar w\}_N^T\{\hat G\} \\[1ex]
\dfrac{\zeta_1\omega_{n1} - i(\Omega + \omega_{d1})}{(\zeta_1\omega_{n1})^2 + (\Omega + \omega_{d1})^2}\,\{^L \bar w\}_{N+1}^T\{\hat G\} \\[1ex] \vdots \\[1ex]
\dfrac{\zeta_N\omega_{nN} - i(\Omega + \omega_{dN})}{(\zeta_N\omega_{nN})^2 + (\Omega + \omega_{dN})^2}\,\{^L \bar w\}_{2N}^T\{\hat G\}
\end{Bmatrix}
\tag{7.10-78}
$$

It should be noted that the largest value of each of the first N elements occurs when the spin rate, Ω, coincides with the corresponding damped natural frequency, $\omega_{dp}$. In this case,

$$Y_p = \frac{\{^L \bar w\}_p^T\{\hat G\}}{\zeta_p\omega_{np}} \tag{7.10-79}$$

These values will be significantly greater than the corresponding complex conjugate values, provided the corresponding modal forces are comparable. Substituting Eqs. (7.10-76) and (7.10-78) into Eq. (7.10-75), and then into Eq. (7.10-73); and recalling the definitions of $\{\hat G\}$ and $\{G\}$ from


Eqs. (7.10-65) and (7.10-55), respectively, and then using Euler's formula, produces

$$
\{Q(t)\}_G = [^R w]\{U(t)\} = [^R w]\{Y\}e^{i\Omega t}
= \big([\tilde A(\Omega)]_G + i[\tilde B(\Omega)]_G\big)\big(\{\hat L\} - i\{\hat P\}\big)(\cos\Omega t + i\sin\Omega t)
\tag{7.10-80}
$$

where, consistent with Eqs. (7.10-65) and (7.10-76), $[\tilde A(\Omega)]_G + i[\tilde B(\Omega)]_G = [^R w][i\Omega - \lambda]^{-1}[^L \bar w]^T$ and $\{\hat G\} = \{\hat L\} - i\{\hat P\}$. Performing the indicated multiplications yields

$$
\{Q(t)\}_G = \big([\tilde A(\Omega)]_G\{\hat L\} + [\tilde B(\Omega)]_G\{\hat P\}\big)\cos\Omega t - \big([\tilde B(\Omega)]_G\{\hat L\} - [\tilde A(\Omega)]_G\{\hat P\}\big)\sin\Omega t
$$
$$
+\, i\big([\tilde B(\Omega)]_G\{\hat L\} - [\tilde A(\Omega)]_G\{\hat P\}\big)\cos\Omega t + i\big([\tilde A(\Omega)]_G\{\hat L\} + [\tilde B(\Omega)]_G\{\hat P\}\big)\sin\Omega t
\tag{7.10-81}
$$

Substituting into Eq. (7.10-66) produces

$$
\{Q(t)\} = \mathrm{Re}\big(\{Q(t)\}_G\big)
= \big([\tilde A(\Omega)]_G\{\hat L\} + [\tilde B(\Omega)]_G\{\hat P\}\big)\cos\Omega t - \big([\tilde B(\Omega)]_G\{\hat L\} - [\tilde A(\Omega)]_G\{\hat P\}\big)\sin\Omega t
\tag{7.10-82}
$$

which leads to the sought-after solution,

$$
\begin{Bmatrix} \{\dot w(t)\} \\ \{w(t)\} \end{Bmatrix}
= \begin{bmatrix} [\phi] & [0] \\ [0] & [\phi] \end{bmatrix}
\begin{Bmatrix} \{\dot q(t)\} \\ \{q(t)\} \end{Bmatrix}
= \begin{bmatrix} [\phi] & [0] \\ [0] & [\phi] \end{bmatrix}
\Big(\big([\tilde A(\Omega)]_G\{\hat L\} + [\tilde B(\Omega)]_G\{\hat P\}\big)\cos\Omega t
- \big([\tilde B(\Omega)]_G\{\hat L\} - [\tilde A(\Omega)]_G\{\hat P\}\big)\sin\Omega t\Big)
\tag{7.10-83}
$$

Fig. 7.10-18 shows the imaginary parts, as a function of the spin rate, Ω, of the eight complex eigenvalues computed for our four-degree-of-freedom system. As can be ascertained, four have negative values, since they form complex conjugate pairs with the other four eigenvalues. Fig. 7.10-19


FIGURE 7.10-18 Imaginary parts of the complex eigenvalues, as a function of spin rate, Ω.

FIGURE 7.10-19 Undamped circular natural frequencies as a function of spin rate, Ω.

shows the undamped circular natural frequencies as a function of the spin rate, Ω. The frequencies were computed as the moduli of the eigenvalues from Eq. (7.10-74), as used in Eq. (7.10-77), i.e., $\omega_{np} = |\lambda_p| = \sqrt{(\mathrm{Re}\,\lambda_p)^2 + (\mathrm{Im}\,\lambda_p)^2}$. There are eight such frequencies; however, as can be ascertained from the figure, only four unique frequencies exist, which is consistent with the fact that a four-degree-of-freedom system can only have four real modes and associated natural frequencies. Finally, the results obtained with Eq. (7.10-83) are identical to those presented in Section 7.10.3.4.

7.10.3.6 Complex modal forces

The matrices on the left-hand side of Eq. (7.10-74) are diagonal; hence, Eq. (7.10-74) contains 2N uncoupled first-order equations, where N is the number of coordinates in the physical model. The eigenvalues in the first N equations are complex conjugates of those in the second set because of how we chose to order the equations. The right-hand term contains the complex modal forces, computed with the conjugate transposes of the left eigenvectors that correspond to the eigenvalues. The magnitude of the response obtained with each equation is a function of the magnitude of the corresponding modal force and of the associated amplification term in Eq. (7.10-77), or (7.10-78). The magnitude of the amplification term depends on the separation of the natural frequency from the disk spin rate, Ω.

In Fig. 7.10-15, Section 7.10.3.3, the natural frequencies of the modes of the four-degree-of-freedom system discussed above were plotted against the spin rate, Ω. In addition, the line ω_n = Ω was included in the figure to identify the disk spin rates that would coincide with the system's natural frequencies. These intersections, of which there were three, are typically frequencies that one would avoid in operation, because a coincidence implies the potential for significant dynamic amplification of the vibration response. In Section 7.10.3.4, we added to the disk an imbalance that was modeled as a radially outward force rotating at the disk spin rate. The response of the system was computed as a function of the disk spin rate, and the magnitude functions (from Eq. 7.10-62) were plotted in Fig. 7.10-16. As indicated in Fig. 7.10-16, the system response is significantly elevated when the disk spin rate coincides with the natural frequencies of the first and third modes, i.e., 31.85 and 124.3 rad/s. However, there was no elevated response when the spin rate coincided with the second-mode natural frequency near 41 rad/s.

Since the amplification factor for the second mode would be no different than for the other two modes when the spin rate coincides with the natural frequency (they have the same critical damping ratio), the lack of modal response has to be due to a low, or zero, modal force. We can see this from the two applicable equations in (7.10-78),


$$
Y_2 = \frac{\zeta_2\omega_{n2} - i(\Omega - \omega_{d2})}{(\zeta_2\omega_{n2})^2 + (\Omega - \omega_{d2})^2}\,\{^L \bar w\}_2^T\{\hat G\}
$$
$$
Y_6 = \frac{\zeta_2\omega_{n2} - i(\Omega + \omega_{d2})}{(\zeta_2\omega_{n2})^2 + (\Omega + \omega_{d2})^2}\,\{^L \bar w\}_6^T\{\hat G\}
\tag{7.10-84}
$$

The maximum value of the amplification term in the first equation is obtained when Ω = ω_d2. However, for Ω = ω_d2, the second equation has a much larger denominator and, therefore, a much smaller response level, which is no different than for the first and third modes. So the lack of elevated resonant response from the first equation must be due to the modal force term, $\{^L \bar w\}_2^T\{\hat G\}$. The lack of resonant response from the second equation is due to the fact that its denominator is a large quantity when Ω = ω_d2.

Fig. 7.10-20 shows the modal force moduli, $\big|\{^L \bar w\}_p^T\{\hat G\}\big|$, p = 1, …, 8, plotted against the spin rate, Ω, for the eight modal forces in Eq. (7.10-78). As can be ascertained, the modal forces not equal to a machine precision of zero are associated with the first and third modes, and with the complex conjugates of the second and fourth modes.

FIGURE 7.10-20 Modal force moduli, $\big|\{^L \bar w\}_p^T\{\hat G\}\big|$, p = 1, …, 8, as a function of spin rate, Ω. Moduli values not shown are zero (~10⁻¹⁵) for all practical purposes.
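The vanishing modal forces can be demonstrated numerically. The sketch below reuses the transcribed matrices of the preceding sections (the gyroscopic sign pattern and modal-force signs are best-effort assumptions); for the perfectly symmetric rotor, four of the eight modal force moduli should be zero for all practical purposes, which is why the corresponding modes never show resonant response:

```python
import numpy as np

# Sketch of the modal forces {^L wbar}_p^T {Ghat} appearing in Eq. (7.10-78).
# Matrices transcribed best-effort from the example at Omega = 40 rad/s.
W = 40.0
D = np.diag([0.7196, 0.7196, 3.8018, 3.8018])
S = np.array([[0., -0.2634, 0., -0.6739],
              [0.2634, 0., 0.6739, 0.],
              [0., -0.6739, 0., -1.7240],
              [0.6739, 0., 1.7240, 0.]])       # assumed sign pattern
Lam = D + W * S
wn2 = np.diag([1294.66, 1294.66, 36134.58, 36134.58])
I4, Z4 = np.eye(4), np.zeros((4, 4))
Mhat = np.block([[Z4, I4], [I4, Lam]])
Khat = np.block([[-I4, Z4], [Z4, wn2]])

Ao = 1e-5
L = Ao * W**2 * np.array([0., 4.5117, 0., 1.7349])
P = Ao * W**2 * np.array([-4.5117, 0., -1.7349, 0.])
Ghat = np.concatenate([np.zeros(4), L - 1j * P])     # {Ghat} = [{0}; {G}]

# Conjugated left eigenvectors = right eigenvectors of the transposed pencil
lamL, Lbar = np.linalg.eig(-np.linalg.solve(Mhat.T, Khat.T))
mf = np.abs(Lbar.T @ Ghat)                   # modal force moduli, one per mode

# For the symmetric rotor, half the modal forces vanish to machine precision
n_zero = int(np.sum(mf < 1e-8 * mf.max()))
print(np.sort(mf))
assert n_zero == 4
```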


FIGURE 7.10-21 Products of amplification factor and modal force for the eight response equations in (7.10-77). Quantities not shown are zero (~10⁻¹⁵) for all practical purposes.

However, the amplification factors for those complex conjugate equations are small and, thus, there are no elevated responses. Fig. 7.10-21 shows the product of the amplification factor and the modal force for each of the eight response equations. As can be seen, the elevated responses are only associated with the first and third modes, which is consistent with what was shown in Section 7.10.3.4, and not what would be expected from the analysis associated with Fig. 7.10-15.

7.10.3.7 Nonsymmetric systems

The results presented in the preceding section were for a perfectly symmetric system. It was this symmetry, in combination with the gyroscopic effects, that produced a modal force of zero for the second mode. If we introduce nonsymmetry into the stiffness and/or mass of the system, the whirling motion will no longer be circular but elliptical, and there will be elevated response associated with the second mode. Furthermore, the whirling rotation corresponding to the second mode will be clockwise, opposite to that of the first and third modes, and counter to the rotation direction of the disk.

In Section 7.10.3.2, we introduced a slight nonsymmetry into the four-degree-of-freedom system by increasing the rod stiffness associated with the y and θx coordinates by a factor of 1.2 over that of the x and θy


coordinates. This produced, for the nonspinning system, the modes shown in Eq. (7.10-32). Because of the increased stiffness in one direction, we no longer had pairs of modes with identical frequencies. Instead, we had two pairs of modes with close frequencies, and mode shapes that no longer defined symmetric behavior. Fig. 7.10-22 shows the first two rows, which correspond to the x- and y-coordinates, of the cosine and sine proportional terms in Eq. (7.10-62),

FIGURE 7.10-22 First two rows of the cosine and sine proportional terms in Eq. (7.10-62) for the four-degree-of-freedom rotor system with the rod stiffness associated with the y and θx coordinates 1.2 times that of the x and θy coordinates. (A) x coordinate; (B) y coordinate.


plotted against the spin rate of the disk, Ω. This figure should be compared to Fig. 7.10-16, which shows the results for the symmetric system. The item to note in Fig. 7.10-22 is the presence of an elevated response associated with the second mode of the system, which is not present for the symmetric system. Recall that for the symmetric system the modal force was, for all practical purposes, zero for this mode. This is not the case for the nonsymmetric system. Fig. 7.10-23 shows the Lissajous plot of the x-coordinate versus y-coordinate steady-state response at the three spin rates corresponding to the peak elevated responses in Fig. 7.10-22. The two solid trajectories run counterclockwise and are associated with the first and third modes. The dashed trajectory is associated with the second mode, and it runs clockwise, opposite to the other two. It is for this reason that this phenomenon is referred to as backward whirl, whereas the other two are referred to as forward whirl, since their motion is in the same direction as the disk rotation. Because the direction of whirl changes from the first mode to the second, and from

FIGURE 7.10-23 Lissajous graph of the x-coordinate versus y-coordinate steady-state response at the spin rates, Ω, corresponding to elevated responses due to coincidence of the spin rate with the natural frequencies of the lowest three modes. Solid lines indicate counterclockwise rotation; dashed line indicates clockwise rotation.


the second to the third, there have to be spin rates between the modes where the amplitude of the whirling motion is zero. These "sweet spots" offer the possibility of running the disk at an appropriate spin rate between two natural frequencies and reducing the whirling vibration, for all practical purposes, to zero; this holds irrespective of the magnitude of the forces due to disk imbalances. For our example problem, these "sweet spots" occur in the vicinity of 40 and 115 rad/s spin rates. Had we relied solely on Fig. 7.10-15, we would likely have selected values halfway between the modes, approximately 36 and 82 rad/s; these would still be reduced-vibration spin rates, but not the optimum ones.

7.10.3.8 Dynamic imbalance

In the preceding sections, the excitation was caused by what is referred to as static imbalance, since orienting the disk perpendicular to a gravity field causes the heaviest point/side to rotate to the bottom. On the other hand, if there were extra mass on the top surface of the disk, for example, and an equal amount on the bottom surface, but directly on the other side of the shaft, the disk would be balanced in a gravity field as far as rotation about the shaft is concerned. Once spinning, however, a couple would be formed, and this couple produces a rotating moment. This is referred to as dynamic imbalance, since it can only be detected when the disk is spinning. Computing the vibration response of a system experiencing dynamic imbalance is straightforward. Eq. (7.10-50) defined the force due to a static imbalance. A dynamic imbalance produces a moment and, therefore, (7.10-50) becomes

$$
\{f(t)\} = \begin{Bmatrix} f_x \\ f_y \\ M_{\theta x} \\ M_{\theta y} \end{Bmatrix}
= \begin{Bmatrix} 0 \\ 0 \\ -A_o\Omega^2\sin\Omega t \\ A_o\Omega^2\cos\Omega t \end{Bmatrix}
\tag{7.10-85}
$$

where we have assumed that at time t = 0 the imbalances are in the x-z plane, with the positive z-coordinate (top) imbalance being in the positive x-axis direction, and the negative z-coordinate (bottom) imbalance being in the negative x-axis direction. These imbalances will produce a positive moment about the y-axis and zero moment about the x-axis at time t = 0. As the disk rotates counterclockwise 90 degrees, the imbalance will produce

7.10 Dynamic behavior as a function of response

an increasing negative moment about the x-axis as the moment about the y-axis decreases. The solution obtained for the static problem is directly applicable here, provided we compute the appropriate modal forces, i.e.,

$$\begin{aligned}
[\phi]^T\{f(t)\} &= \begin{bmatrix} 0 & -4.512 & 0.588 & 0 \\ -4.512 & 0 & 0 & 0.588 \\ 0 & 1.735 & -1.503 & 0 \\ 1.735 & 0 & 0 & -1.503 \end{bmatrix}\begin{Bmatrix} 0 \\ 0 \\ -A_o\Omega^2\sin\Omega t \\ A_o\Omega^2\cos\Omega t \end{Bmatrix} \\[4pt]
&= A_o\Omega^2\begin{Bmatrix} -0.588\sin\Omega t \\ 0.588\cos\Omega t \\ 1.503\sin\Omega t \\ -1.503\cos\Omega t \end{Bmatrix}
= A_o\Omega^2\begin{Bmatrix} 0 \\ 0.588 \\ 0 \\ -1.503 \end{Bmatrix}\cos\Omega t + A_o\Omega^2\begin{Bmatrix} -0.588 \\ 0 \\ 1.503 \\ 0 \end{Bmatrix}\sin\Omega t \\[4pt]
&= \{L\}\cos\Omega t + \{P\}\sin\Omega t
\end{aligned} \tag{7.10-86}$$

Hence, the solution presented in Eq. (7.10-62) is applicable provided we use the definitions of $\{L\}$ and $\{P\}$ from Eq. (7.10-86). In addition, the conclusions reached regarding the static imbalance excitation, for perfectly symmetric and nonsymmetric systems, and for forward and backward whirl, are also applicable to dynamic imbalances.

7.10.4 Gyroscopic moments and energy dissipation
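The decomposition of the modal force into a cosine and a sine vector can be verified numerically. In the sketch below, the mode-shape matrix entries are those tabulated in Eq. (7.10-86), while the spin rate and imbalance amplitude are arbitrary illustrative values:

```python
import numpy as np

# [phi]^T and the dynamic-imbalance force of Eq. (7.10-85); Ao and Om
# (spin rate) are arbitrary illustrative values, not from the text
phiT = np.array([[ 0.0,   -4.512,  0.588,  0.0  ],
                 [-4.512,  0.0,    0.0,    0.588],
                 [ 0.0,    1.735, -1.503,  0.0  ],
                 [ 1.735,  0.0,    0.0,   -1.503]])
Ao, Om = 1.0, 10.0

def modal_force(t):
    # {f(t)} of Eq. (7.10-85): pure moments about the x- and y-axes
    f = np.array([0.0, 0.0, -Ao*Om**2*np.sin(Om*t), Ao*Om**2*np.cos(Om*t)])
    return phiT @ f

# {L} and {P} of Eq. (7.10-86)
L = Ao*Om**2*np.array([ 0.0,   0.588, 0.0,   -1.503])
P = Ao*Om**2*np.array([-0.588, 0.0,   1.503,  0.0  ])

t = 0.37   # arbitrary check time
assert np.allclose(modal_force(t), L*np.cos(Om*t) + P*np.sin(Om*t))
```

At $t = 0$ the modal force reduces to $\{L\}$ alone, consistent with the imbalance couple initially acting about the y-axis.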

In Section 7.10.3.3, we solved for the modes of a system with gyroscopic moments, but no damping, and showed that the real part of the eigenvalues remained zero, indicating that energy dissipation was not added [see Eqs. (7.10-40) and (7.10-41)]. However, we also showed that if a system had damping, then the gyroscopic effects could alter the vibration-reduction mechanism of the system.


CHAPTER 7 Forced vibration of multi-degree-of-freedom systems

We will start with the equations of motion, in modal coordinates, of a system with gyroscopic moments and damping that yields classical normal modes when gyroscopic effects are not included, i.e.,

$$[I]\{\ddot q(t)\} + \big([2\zeta\omega_n] + [\phi]^T[T][\phi]\big)\{\dot q(t)\} + [\omega_n^2]\{q(t)\} = \{0\}$$
$$[I]\{\ddot q(t)\} + ([D] + [G])\{\dot q(t)\} + [\omega_n^2]\{q(t)\} = \{0\} \tag{7.10-87}$$

where $[D] = [2\zeta\omega_n]$ and is diagonal, and $[G] = [\phi]^T[T][\phi]$ and is a skew symmetric matrix with zeros on the diagonal [see Eq. (7.10-49) for an example, and Section 7.10.2 for the derivation]. Also, the mode shapes from the undamped eigenvalue problem without gyroscopic effects are normalized such that $[\phi]^T[m][\phi] = [I]$. Taking the Laplace transform of Eq. (7.10-87) yields the quadratic eigenvalue problem,

$$\big(s^2[I] + s([D] + [G]) + [\omega_n^2]\big)\{\Psi\} = \{0\} \tag{7.10-88}$$

Normalizing the eigenvectors such that $\|\Psi\| = 1$, and then premultiplying by the transpose of the complex conjugate of $\{\Psi\}$, yields

$$\{\bar\Psi\}^T\big(s^2[I] + s([D] + [G]) + [\omega_n^2]\big)\{\Psi\} = \{0\}$$
$$s^2\|\Psi\|^2 + s\big\langle([D] + [G])\{\Psi\},\{\Psi\}\big\rangle + \big\langle[\omega_n^2]\{\Psi\},\{\Psi\}\big\rangle = 0 \tag{7.10-89}$$

$[D]$ and $[\omega_n^2]$ are diagonal and positive definite, hence Hermitian. Therefore, for any one of the equations in (7.10-89) we can write

$$\big\langle[\omega_n^2]\{\Psi\},\{\Psi\}\big\rangle = \bar\omega_n^2 \quad\text{and}\quad \big\langle[D]\{\Psi\},\{\Psi\}\big\rangle = 2d \tag{7.10-90}$$

both real numbers. Since $[G]$ is real and skew symmetric we obtain

$$\big\langle[G]\{\Psi\},\{\Psi\}\big\rangle = \big\langle\{\Psi\},[G]^T\{\Psi\}\big\rangle = -\big\langle\{\Psi\},[G]\{\Psi\}\big\rangle = -\overline{\big\langle[G]\{\Psi\},\{\Psi\}\big\rangle} \tag{7.10-91}$$

and therefore

$$\big\langle[G]\{\Psi\},\{\Psi\}\big\rangle = i2g \tag{7.10-92}$$

a pure imaginary number. To study the effect of the gyroscopic term on system damping we will approximate each equation in (7.10-87) as

$$s^2 + 2(d + ig)s + \bar\omega_n^2 = 0 \tag{7.10-93}$$
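The two inner-product properties invoked above, a real value for the diagonal positive matrix and a pure imaginary value for the skew symmetric matrix, are easy to confirm numerically. The sketch below uses a random skew-symmetric matrix and a random complex unit vector (illustrative values only, not the system matrices of the text):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))
G = A - A.T                              # real skew-symmetric: G^T = -G
D = np.diag(rng.uniform(0.1, 1.0, n))    # diagonal, positive entries

Psi = rng.standard_normal(n) + 1j*rng.standard_normal(n)
Psi /= np.linalg.norm(Psi)               # normalize so ||Psi|| = 1

def inner(x, y):
    return np.vdot(y, x)                 # <x, y> = y^H x

gG = inner(G @ Psi, Psi)                 # should be i*2g, pure imaginary
gD = inner(D @ Psi, Psi)                 # should be 2d, real and positive

assert abs(gG.real) < 1e-12
assert abs(gD.imag) < 1e-12 and gD.real > 0
```

This mirrors Eqs. (7.10-90) through (7.10-92): the quadratic for each mode picks up a real damping term $2d$ and a pure imaginary gyroscopic term $i2g$.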


Let $\lambda = s/\bar\omega_n$, $\bar d = d/\bar\omega_n$, and $\bar g = g/\bar\omega_n$; then Eq. (7.10-93) becomes

$$\lambda^2 + 2(\bar d + i\bar g)\lambda + 1 = 0 \tag{7.10-94}$$

We will look at three different cases. First, let $\bar g = 0$; then Eq. (7.10-94) becomes

$$\lambda^2 + 2\bar d\lambda + 1 = 0 \tag{7.10-95}$$

and the solution for $\bar d < 1$ is

$$\lambda_{1,2} = \frac{-2\bar d \pm \sqrt{4\bar d^2 - 4}}{2} = -\bar d \pm i\sqrt{1 - \bar d^2} \tag{7.10-96}$$

Substituting for $\bar d$ gives

$$\lambda_{1,2} = -\frac{\zeta\omega_n}{\omega_n} \pm i\sqrt{1 - \left(\frac{\zeta\omega_n}{\omega_n}\right)^2} = -\zeta \pm i\sqrt{1 - \zeta^2} \tag{7.10-97}$$

Since $\lambda = s/\bar\omega_n$, we obtain

$$s_{1,2} = -\zeta\omega_n \pm i\omega_n\sqrt{1 - \zeta^2} \tag{7.10-98}$$

which is as expected for a system where the gyroscopic effects have been set to zero. For the next case, setting the damping to zero, i.e., $\bar d = 0$, we obtain

$$\lambda^2 + i2\bar g\lambda + 1 = 0 \tag{7.10-99}$$

The solution is

$$\lambda_{1,2} = \frac{-i2\bar g \pm \sqrt{-4\bar g^2 - 4}}{2} = -i\bar g \pm i\sqrt{\bar g^2 + 1} \tag{7.10-100}$$

Since the solution is pure imaginary, and since energy dissipation only exists if the eigenvalue has a negative real part, we can conclude that the gyroscopic effect has not added damping. For the third case we will keep both the damping term and the gyroscopic effects [see Eq. (7.10-94)]; therefore, we obtain a solution of

$$\lambda_{1,2} = \frac{-2(\bar d + i\bar g) \pm \sqrt{4(\bar d + i\bar g)^2 - 4}}{2} = -(\bar d + i\bar g) \pm \sqrt{(\bar d + i\bar g)^2 - 1} \tag{7.10-101}$$


With Eq. (7.10-101) we have to compute the square root of a complex number. We begin by rewriting the term inside the radical as

$$\sqrt{(\bar d + i\bar g)^2 - 1} = \sqrt{\bar d^2 - \bar g^2 - 1 + i2\bar d\bar g} = \sqrt{a + ib} = \alpha + i\beta \tag{7.10-102}$$

where $a = \bar d^2 - \bar g^2 - 1$ and $b = 2\bar d\bar g$. Squaring both sides and equating the real and imaginary parts yields two equations that can be used to solve for $\alpha$ and $\beta$, i.e.,

$$\alpha = \sqrt{\frac{\sqrt{a^2 + b^2} + a}{2}} \quad\text{and}\quad \beta = \mathrm{sign}\big(\mathrm{Im}(a + ib)\big)\sqrt{\frac{\sqrt{a^2 + b^2} - a}{2}} \tag{7.10-103}$$

Substituting into Eq. (7.10-102) and then into (7.10-101) yields

$$\lambda_{1,2} = -\bar d \pm \alpha + i(-\bar g \pm \beta) \tag{7.10-104}$$

Fig. 7.10-24 shows the moduli of Eq. (7.10-104) as a function of $\bar g$, which is normalized by $\omega_{n_j}$. The moduli were computed as

$$|\lambda_1| = |\lambda(+)| = \sqrt{(-\bar d + \alpha)^2 + (-\bar g + \beta)^2}$$
$$|\lambda_2| = |\lambda(-)| = \sqrt{(-\bar d - \alpha)^2 + (-\bar g - \beta)^2} \tag{7.10-105}$$
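The closed-form roots of Eqs. (7.10-101) through (7.10-104) can be checked against a direct polynomial root solve. The normalized damping and gyroscopic values below are illustrative, not taken from the text:

```python
import numpy as np

def roots_closed_form(d, g):
    """Roots of lam^2 + 2(d + i*g)*lam + 1 = 0 via Eqs. (7.10-102)-(7.10-104)."""
    a = d**2 - g**2 - 1.0                  # real part under the radical
    b = 2.0*d*g                            # imaginary part
    r = np.hypot(a, b)                     # |a + ib|
    alpha = np.sqrt((r + a)/2.0)
    beta = (np.sign(b) if b != 0.0 else 1.0)*np.sqrt((r - a)/2.0)
    root = alpha + 1j*beta                 # sqrt((d + i*g)^2 - 1)
    return -(d + 1j*g) + root, -(d + 1j*g) - root

d, g = 0.05, 0.3                           # illustrative normalized values
l1, l2 = roots_closed_form(d, g)
assert np.allclose(sorted([l1, l2], key=lambda z: z.imag),
                   sorted(np.roots([1.0, 2.0*(d + 1j*g), 1.0]),
                          key=lambda z: z.imag))
```

By Vieta's formulas the two roots must satisfy $\lambda_1\lambda_2 = 1$ and $\lambda_1 + \lambda_2 = -2(\bar d + i\bar g)$, which gives an independent sanity check on the complex square root.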

FIGURE 7.10-24 Undamped circular frequencies of oscillation and damping values as a function of gyroscopic effects.


In the figure these were divided by the undamped circular frequency. In addition, the figure shows the damping values extracted from the complex eigenvalues normalized by $2\zeta_j\omega_{n_j}$, i.e.,

$$\frac{-\mathrm{Re}\big(\lambda(+)\big)\big/|\lambda(+)|}{2\zeta_j\omega_{n_j}} = \frac{(\bar d - \alpha)\Big/\sqrt{(-\bar d + \alpha)^2 + (-\bar g + \beta)^2}}{2\zeta_j\omega_{n_j}}$$
$$\frac{-\mathrm{Re}\big(\lambda(-)\big)\big/|\lambda(-)|}{2\zeta_j\omega_{n_j}} = \frac{(\bar d + \alpha)\Big/\sqrt{(-\bar d - \alpha)^2 + (-\bar g - \beta)^2}}{2\zeta_j\omega_{n_j}} \tag{7.10-106}$$

The first observation is that for the case without gyroscopic effects ($\bar g = 0$) both eigenvalues, i.e., $\lambda_1$ and $\lambda_2$, yield the same frequency (modulus) and damping value; however, once gyroscopic effects are included ($\bar g > 0$) the two eigenvalues yield distinct frequencies, with one increasing in value and the other decreasing as the gyroscopic effect increases. This is consistent with the problems solved in the preceding sections. Without gyroscopic effects, the damping values extracted from either eigenvalue are the same. However, as indicated in the figure, these values are a function of the gyroscopic effects: as $\bar g$ increases, the damping values decrease. This is consistent with the observations noted in Section 7.10.3.3, where the damping values one would extract from the eigenvalues in Eqs. (7.10-44) and (7.10-45) are different, with that of the system with the higher spin rate being lower, even though the assigned critical damping ratios for both systems were identical.

7.11 Fluid-structure interaction

Fluid-structure interaction problems represent their own expansive fields of study and would require several books and many technical papers to cover properly (e.g., Bisplinghoff et al., 1955). In this section, we will briefly describe two important problems in this area.
First, we will discuss the dynamic interaction between structural systems and fluids (we include gases in the term “fluids”), which if not properly addressed can lead to catastrophic failure. Second, we will describe the interaction between structural vibrations and liquid fuels in launch vehicles. Both of these problems have a common thread in that the excitation that causes the vibration is a function of the vibration itself. This feedback mechanism leads to nonclassical


(complex) modes and the potential for the system oscillations to become unbounded, which of course would lead to structural failure and loss of the system, or other anomalies such as the unintended shutdown of launch vehicle engines, for example (Larsen, 2008; Blair et al., 2011).

7.11.1 Aerodynamic instability

A body moving through a fluid (e.g., air) produces aerodynamic (hydrodynamic, if a liquid or incompressible gas) forces. The force component in line with the body motion is referred to as drag, and the component perpendicular to the body motion is referred to as lift. Fig. 7.11-1 shows a rigid airfoil in a wind tunnel where the gas flows horizontally from right to left at a speed $\dot x_R$; this is equivalent to the airfoil moving from left to right at that speed. We have represented the attachment of the airfoil to the wind tunnel wall by the two springs shown in the figure. The indicated stiffness values of the springs are the components aligned with the z-axis. The springs are both a distance $l$ from the center of mass. In addition, acting in parallel with the spring elements, but not shown in the figure, are viscous damping elements with constants of proportionality $c_1$ and $c_2$, for the left and right elements, respectively. The airfoil is placed in the wind tunnel with its nose pitched up at an angle $\alpha_R$ relative to the horizontal airflow. This angle of attack and the contour of the airfoil will produce a force that will cause the angle of attack to increase to $\alpha_{RE}$ and the airfoil to translate vertically up, such that there is a state of static equilibrium between the aerodynamic force and the elastic stiffness elements that connect the airfoil to the wind tunnel wall.

FIGURE 7.11-1 Airfoil in a wind tunnel with the longitudinal location of the center of pressure, cp, indicated by the forward dot. The coordinate system is at the center of mass (aft dot) and aligned with the static equilibrium position.


We seek to describe the dynamic behavior of the airfoil about this static equilibrium state; hence, we will place the inertial coordinate system at the location of the center of mass in the deformed configuration, as shown in the figure. $V$ is the component of flow aligned with the x-coordinate direction, and since for this problem we will be dealing with small angles, $|V| \approx |\dot x_R|$. For the purposes of this example, we will assume that the aerodynamic loading that acts in the vertical direction over the entire area of the airfoil can be described as a single equivalent force, $f_p$, that acts in the z-coordinate direction at a distance $\varepsilon$ from the center of mass along the x-axis. This point is referred to as the center of pressure. $f_p$ and its location are computed such that they produce the equivalent z-direction force and moment about the center of mass as the distributed aerodynamic loading. Furthermore, we will assume that the location of the center of pressure does not change as the airfoil oscillates; in Volume II, equations of motion where the aerodynamic loading is distributed over the entire structure and the location of the center of pressure can vary as a function of the rigid body and elastic vibration of the system are derived. Finally, we will also ignore any work done by the vibrating airfoil on the fluid. Assuming that the aerodynamic force is a function of the total angle of attack at the center of pressure gives

$$f_p(\alpha_{total}) = -QS\,C_{N_z}(\alpha_{total}) \tag{7.11-1}$$

where $Q$ is the dynamic pressure and is given by $Q = \frac{1}{2}\rho V^2$, $\rho$ is the density of the fluid, $C_{N_z}(\alpha_{total})$ is the force coefficient associated with the aerodynamic force along the z-axis, and $S$ is the reference area defined for $C_{N_z}(\alpha_{total})$; see Volume II for details on how these terms are derived. Since we are interested in the vibration behavior about the static equilibrium point, we can write the incremental force as

$$f_p(\alpha_{incremental}) = -QS\,\frac{\partial C_{N_z}}{\partial\alpha}\,\alpha_{incremental} \tag{7.11-2}$$

where $\partial C_{N_z}/\partial\alpha$ is the slope of the aerodynamic force coefficient at $\alpha = \alpha_R + \alpha_{RE}$, and $\alpha_{total} = \alpha_R + \alpha_{RE} + \alpha_{incremental}$. This assumes that the aerodynamic force varies linearly about the static equilibrium angle $\alpha_R + \alpha_{RE}$. Note that a positive $\alpha_{incremental}$ corresponds to the leading edge of the airfoil pitching up relative to the direction of flow and for our

571

572

CHAPTER 7 Forced vibration of multi-degree-of-freedom systems

problem this corresponds to positive $\theta_y$. Also, because of the definition of the coordinate system, a positive angle of attack produces a negative z-direction force at the center of pressure; hence, the minus sign in Eqs. (7.11-1) and (7.11-2). For our example problem, the angle of attack at the center of pressure, about the static equilibrium angle, is

$$\alpha_{incremental}(t) = \theta_y(t) + \frac{\dot z(t)}{V} - \frac{\varepsilon\dot\theta_y(t)}{V} \tag{7.11-3}$$

where we have now explicitly noted the time dependencies. The first term on the right-hand side is straightforward. A positive rotation about the center of mass will pitch the leading edge of the airfoil up and increase the angle of attack. This would cause a force in the negative z-direction. For the second term, a positive velocity of the airfoil in the z-direction will produce a flow across the airfoil in the negative z-direction. Dividing by $V$ then yields the change in the angle of attack due to this translational velocity. For the last term, a positive rotational velocity about the center of mass will cause a negative translational velocity of the airfoil in the z-direction at the center of pressure, of magnitude $\varepsilon\dot\theta_y$. This produces a flow across the airfoil in the positive direction, which then yields a reduction in the angle of attack. Substituting Eq. (7.11-3) into Eq. (7.11-2) yields the aerodynamic force at the center of pressure,

$$f_p\big(\alpha_{incremental}(t)\big) = -QS\,\frac{\partial C_{N_z}}{\partial\alpha}\left(\theta_y(t) + \frac{\dot z(t)}{V} - \frac{\varepsilon\dot\theta_y(t)}{V}\right) = -N_z\theta_y(t) - \dot N_z\dot z(t) + \varepsilon\dot N_z\dot\theta_y(t) \tag{7.11-4}$$

where $N_z = QS\,\partial C_{N_z}/\partial\alpha$ and $\dot N_z = N_z/V$. We can now derive the equations of motion. Letting $k_1 = 2k$, $k_2 = k$, $c_1 = 2c$, and $c_2 = c$, we proceed by summing the forces at the center of mass in the z-coordinate direction and setting the sum equal to the mass times the acceleration,

$$\begin{aligned}
m\ddot z(t) &= -2kz(t) - kz(t) - 2kl\theta_y(t) + kl\theta_y(t) - 2c\dot z(t) - c\dot z(t) - 2cl\dot\theta_y(t) + cl\dot\theta_y(t) + f_p\big(\alpha_{incremental}(t)\big) \\
&= -3kz(t) - kl\theta_y(t) - 3c\dot z(t) - cl\dot\theta_y(t) - N_z\theta_y(t) - \dot N_z\dot z(t) + \varepsilon\dot N_z\dot\theta_y(t) \\
&= -(3k)z(t) - (kl + N_z)\theta_y(t) - \big(3c + \dot N_z\big)\dot z(t) - \big(cl - \varepsilon\dot N_z\big)\dot\theta_y(t)
\end{aligned} \tag{7.11-5}$$


Summing the moments about the center of mass and setting the result equal to the mass moment of inertia about the y-axis, $I_{yy}$, times the rotational acceleration about the y-axis produces

$$\begin{aligned}
I_{yy}\ddot\theta_y(t) &= -2klz(t) + klz(t) - 2kl^2\theta_y(t) - kl^2\theta_y(t) - 2cl\dot z(t) + cl\dot z(t) - 2cl^2\dot\theta_y(t) - cl^2\dot\theta_y(t) - \varepsilon f_p\big(\alpha_{incremental}(t)\big) \\
&= -klz(t) - 3kl^2\theta_y(t) - cl\dot z(t) - 3cl^2\dot\theta_y(t) + \varepsilon N_z\theta_y(t) + \varepsilon\dot N_z\dot z(t) - \varepsilon^2\dot N_z\dot\theta_y(t) \\
&= -klz(t) - \big(3kl^2 - \varepsilon N_z\big)\theta_y(t) - \big(cl - \varepsilon\dot N_z\big)\dot z(t) - \big(3cl^2 + \varepsilon^2\dot N_z\big)\dot\theta_y(t)
\end{aligned} \tag{7.11-6}$$

Collecting the two equations into a matrix equation yields

$$\begin{bmatrix} m & 0 \\ 0 & I_{yy} \end{bmatrix}\begin{Bmatrix} \ddot z(t) \\ \ddot\theta_y(t) \end{Bmatrix} + \begin{bmatrix} 3c + \dot N_z & cl - \varepsilon\dot N_z \\ cl - \varepsilon\dot N_z & 3cl^2 + \varepsilon^2\dot N_z \end{bmatrix}\begin{Bmatrix} \dot z(t) \\ \dot\theta_y(t) \end{Bmatrix} + \begin{bmatrix} 3k & kl + N_z \\ kl & 3kl^2 - \varepsilon N_z \end{bmatrix}\begin{Bmatrix} z(t) \\ \theta_y(t) \end{Bmatrix} = \begin{Bmatrix} 0 \\ 0 \end{Bmatrix} \tag{7.11-7}$$

$$[m]\{\ddot w(t)\} + [c]\{\dot w(t)\} + [k]\{w(t)\} = \{0\} \tag{7.11-8}$$

The first item to note about Eq. (7.11-7) is the 2,2 term in the stiffness matrix. The term $\varepsilon N_z$ has the effect of making the system "softer"; and it is possible for the system to become so "soft" that its displacement response could grow unbounded. Note that $N_z$ is a function of the aerodynamic coefficient and the dynamic pressure, $Q = \frac{1}{2}\rho V^2$, which in turn is a function of the square of the speed of the fluid flow past the airfoil. Therefore, the higher the speed, the greater the "softening" effect. Should the center of pressure move aft of the center of mass, i.e., should $\varepsilon$ become negative, then $\varepsilon N_z$ will increase the overall stiffness of the system and provide a stabilizing effect. It should also be noted that, because of this effect, natural frequencies measured in flight will differ from those measured in stationary ground vibration tests, such as a mode survey test. Another item of note is that the stiffness matrix is no longer symmetric. This is because the aerodynamic force is a function of rotation, rotational velocity, and translational velocity, but not translational displacement.
In addition, note the sign of the off-diagonal terms in the damping matrix, which depends on the location of the center of pressure and on $\dot N_z$, which in turn depends on the aerodynamic coefficient and dynamic pressure. Since it is


obvious by inspection that the damping matrix will not yield classical normal modes, the modes of the system described by Eq. (7.11-7) will be complex. When the airfoil in our example is perturbed, its oscillations will either decay or grow unbounded. If the oscillations grow, then the phenomenon is referred to as unstable aeroelastic flutter. Determining the type of oscillation requires that we solve for the complex modes of the system. This is accomplished by casting Eq. (7.11-8) in first-order form using the identity $[m]\{\dot w(t)\} - [m]\{\dot w(t)\} = \{0\}$ and the coordinate transformation

$$\{W(t)\} = \begin{Bmatrix} \dot w(t) \\ w(t) \end{Bmatrix} \tag{7.11-9}$$

This yields

$$[\widetilde M]\{\dot W(t)\} + [\widetilde K]\{W(t)\} = \{0\} \tag{7.11-10}$$

where

$$[\widetilde M] = \begin{bmatrix} [0] & [m] \\ [m] & [c] \end{bmatrix} \quad\text{and}\quad [\widetilde K] = \begin{bmatrix} -[m] & [0] \\ [0] & [k] \end{bmatrix} \tag{7.11-11}$$

We will discuss the solution of this type of problem in the next section and in Chapter 8.

7.11.1.1 Aerodynamic instability and complex modes

We begin by first solving for the modes of the system described by Eq. (7.11-7) without aerodynamic effects (i.e., with $Q$ set equal to zero). Letting $m = 25$, $I_{yy} = 175$, $k = 10000$, $c = 20$, and $l = 1$, we obtain the following first-order matrices,

$$[\widetilde M] = \begin{bmatrix} [0] & [m] \\ [m] & [c] \end{bmatrix} = \begin{bmatrix} 0 & 0 & 25 & 0 \\ 0 & 0 & 0 & 175 \\ 25 & 0 & 60 & 20 \\ 0 & 175 & 20 & 60 \end{bmatrix} \tag{7.11-12}$$

and

$$[\widetilde K] = \begin{bmatrix} -[m] & [0] \\ [0] & [k] \end{bmatrix} = \begin{bmatrix} -25 & 0 & 0 & 0 \\ 0 & -175 & 0 & 0 \\ 0 & 0 & 30000 & 10000 \\ 0 & 0 & 10000 & 30000 \end{bmatrix} \tag{7.11-13}$$


Solving the corresponding eigenvalue problem, $\big(\lambda_j[\widetilde M] + [\widetilde K]\big)\{\psi\}_j = \{0\}$, produces the following eigenvalues and eigenvectors:

$$[\lambda] = \begin{bmatrix} \lambda_1 & \bar\lambda_1 & \lambda_2 & \bar\lambda_2 \end{bmatrix} = \begin{bmatrix} -0.1497 + i12.2329 & -0.1497 - i12.2329 & -1.2218 + i34.9324 & -1.2218 - i34.9324 \end{bmatrix} \tag{7.11-14}$$

$$[\psi] = \begin{bmatrix} \{\psi\}_1 & \{\bar\psi\}_1 & \{\psi\}_2 & \{\bar\psi\}_2 \end{bmatrix} = \begin{bmatrix} 0.0044 + i0.3567 & 0.0044 - i0.3567 & 0.0346 - i0.9893 & 0.0346 + i0.9893 \\ 0.0115 - i0.9365 & 0.0115 + i0.9365 & 0.0019 - i0.0538 & 0.0019 + i0.0538 \\ 0.0292 - i0.0000 & 0.0292 + i0.0000 & 0.0283 + i0.0000 & 0.0283 - i0.0000 \\ 0.0766 + i0.0000 & 0.0766 - i0.0000 & 0.0015 + i0.0000 & 0.0015 - i0.0000 \end{bmatrix} \tag{7.11-15}$$

As described in Chapter 6, Section 6.11, the eigenvalues and eigenvectors occur in complex conjugate pairs. The eigenvectors as shown in Eq. (7.11-15) have been scaled (rotated) as described in Chapter 6, Section 6.11; and as can be seen, the displacement components of the eigenvectors, the bottom two rows, are real numbers. This is as expected, since the damping matrix is proportional to the stiffness matrix and we would, therefore, expect classical normal modes. Furthermore, computing the critical damping ratios from the real parts of the eigenvalues produces

$$\zeta_1 = \frac{-\mathrm{Re}(\lambda_1)}{\omega_{n_1}} = \frac{0.1497}{\sqrt{0.1497^2 + 12.2329^2}} = 0.0122$$
$$\zeta_2 = \frac{-\mathrm{Re}(\lambda_2)}{\omega_{n_2}} = \frac{1.2218}{\sqrt{1.2218^2 + 34.9324^2}} = 0.0350 \tag{7.11-16}$$

In addition, the damped circular frequencies are the imaginary parts of the complex eigenvalues, i.e.,

$$\omega_{d_1} = 12.2329 \qquad \omega_{d_2} = 34.9324 \tag{7.11-17}$$

The undamped circular frequencies are the moduli of the complex eigenvalues, i.e.,

$$\omega_{n_1} = \sqrt{0.1497^2 + 12.2329^2} = 12.2338$$
$$\omega_{n_2} = \sqrt{1.2218^2 + 34.9324^2} = 34.9538 \tag{7.11-18}$$

which are the same values as the square roots of the eigenvalues that we would obtain from the undamped eigenvalue problem.
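These numbers can be checked with a few lines of NumPy. The sketch below assembles the first-order matrices of Eqs. (7.11-12) and (7.11-13) from the given parameters and recovers the frequencies and damping ratios of Eqs. (7.11-16) and (7.11-18):

```python
import numpy as np

# Parameters from the text: m = 25, Iyy = 175, k = 10000, c = 20, l = 1
m_, Iyy, k, c, l = 25.0, 175.0, 10000.0, 20.0, 1.0
M = np.diag([m_, Iyy])
C = np.array([[3*c,      c*l],
              [c*l, 3*c*l**2]])
K = np.array([[3*k,      k*l],
              [k*l, 3*k*l**2]])

# First-order form, Eqs. (7.11-10) through (7.11-13)
Z = np.zeros((2, 2))
Mt = np.block([[Z,  M], [M, C]])
Kt = np.block([[-M, Z], [Z, K]])

# lam*[Mt]{W} + [Kt]{W} = {0}  ->  standard eigenvalue problem for -inv(Mt)*Kt
lam = np.linalg.eigvals(np.linalg.solve(Mt, -Kt))

wn = np.sort(np.abs(lam))[::2]                 # moduli: undamped frequencies
zeta = np.sort(-lam.real / np.abs(lam))[::2]   # Eq. (7.11-16)
```

With the values above this recovers $\omega_n \approx (12.2338,\ 34.9538)$ and $\zeta \approx (0.0122,\ 0.0350)$ to within rounding of the printed digits.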


Next, we will introduce the aerodynamic effects. Letting $V = 800$, $Q = 0.5(0.001)V^2 = 320$, $\partial C_{N_z}/\partial\alpha = 4$, $l = 1$, and $S = 1$ yields $N_z = 1280$ and $\dot N_z = 1.6$. Substituting into Eq. (7.11-7), and solving the first-order eigenvalue problem for increasing values of $\varepsilon$, yields the results shown in Fig. 7.11-2. As can be ascertained, as the center of pressure moves forward and away from the center of mass, the natural frequency of the first mode decreases until the period of oscillation becomes infinite. In other words, any perturbation that causes the airfoil to move will produce an ever-increasing displacement until the system breaks up.

FIGURE 7.11-2 Damped circular frequency of first mode of airfoil in Fig. 7.11-1 as a function of location of the center of pressure relative to the center of mass.

Next, we let $k_1 = k$, $k_2 = 2k$, $c_1 = c$, $c_2 = 2c$, $k = 10000$, $m = 235$, $I_{yy} = 200$, $c = 20$, $l = 1$, $\partial C_{N_z}/\partial\alpha = 4$, $S = 1$, and $\rho = 0.001$. For this problem we will vary $V$, starting at $V = 800$ and then increasing its value until the coupled system yields a negative critical damping ratio, which would indicate the onset of dynamic instability, or flutter. At each value of $V$ we solve the first-order complex eigenvalue problem. The resulting eigenvalues are plotted in the complex plane in Fig. 7.11-3. Recall that if the system has positive damping, then the real part of the eigenvalue will be negative. As can be ascertained from the figure, the real part of the first mode eigenvalue becomes positive between $V = 2360$ and $V = 2365$, indicating the onset of exponentially increasing oscillation and unstable flutter. The second mode remains stable throughout this speed range. Another item to note is that the damped circular natural frequencies (imaginary parts of the eigenvalues) of the two modes approach each other as $V$ increases, until the modes are close enough to exchange sufficient energy and cause the dynamic instability.

FIGURE 7.11-3 Eigenvalues, $\lambda_1$ and $\lambda_2$, plotted on the complex plane for the speed range $V = 800$ to $V = 2365$.

The corresponding damping values, as a function of $V$, are shown in Fig. 7.11-4. These were computed from the real parts of the eigenvalues as shown in Eq. (7.11-16).

FIGURE 7.11-4 Critical damping ratios, as a function of speed, $V$, for the modes whose eigenvalues are shown in Fig. 7.11-3.
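The parameter sweeps above can be sketched numerically. The block below reproduces the first example's setup (Eq. (7.11-7) with $N_z = 1280$ and $\dot N_z = 1.6$ at $V = 800$) and sweeps the center-of-pressure offset $\varepsilon$; the specific $\varepsilon$ values are illustrative choices, used only to show the frequency drop of Fig. 7.11-2:

```python
import numpy as np

# First aeroelastic example: V = 800, Q = 320, N_z = 1280, Ndot_z = N_z/V = 1.6
m_, Iyy, k, c, l = 25.0, 175.0, 10000.0, 20.0, 1.0
N, Nd = 1280.0, 1.6

def eigvals_at(eps):
    """Complex eigenvalues of Eq. (7.11-7) for a center-of-pressure offset eps."""
    M = np.diag([m_, Iyy])
    C = np.array([[3*c + Nd,      c*l - eps*Nd],
                  [c*l - eps*Nd,  3*c*l**2 + eps**2*Nd]])
    K = np.array([[3*k,  k*l + N],
                  [k*l,  3*k*l**2 - eps*N]])
    Z = np.zeros((2, 2))
    Mt = np.block([[Z,  M], [M, C]])
    Kt = np.block([[-M, Z], [Z, K]])
    return np.linalg.eigvals(np.linalg.solve(Mt, -Kt))

# First-mode damped frequency falls as the center of pressure moves forward
wd1 = []
for eps in (0.0, 5.0, 10.0):                 # illustrative offsets
    im = np.abs(eigvals_at(eps).imag)
    wd1.append(np.min(im[im > 1e-9]))

assert wd1[0] > wd1[1] > wd1[2]              # the trend shown in Fig. 7.11-2
```

The speed sweep of Figs. 7.11-3 and 7.11-4 follows the same pattern, rebuilding the matrices at each $V$ and monitoring the sign of the largest eigenvalue real part; the text does not restate every parameter (e.g., $\varepsilon$) for that case, so its flutter speed is not reproduced here.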


7.11.2 Pogo

Liquid rocket engines operate by transferring fuel and oxidizer through pipes (feed lines) that run from the bottom of tanks to the inlets of the engine turbo pumps. The turbo pumps raise the pressure before the fuel and oxidizer enter the combustion chamber. Because the combustion is not perfectly uniform, the resulting thrust will contain fluctuations. In addition, any flow oscillations in the fuel and/or oxidizer can also cause thrust fluctuations. If the thrust fluctuations are near or coincide in frequency with that of a structural mode, the thrust excitation can cause an increase in the structural vibrations. These generally tend to be self-limiting because of structural damping. On the other hand, if the structural oscillations coincide with a mode involving either the fuel or oxidizer in the feed lines, then the flow oscillations could increase. This will in turn lead to increases in the thrust oscillations, which will cause an increase in the structural oscillations, and so forth. This generally occurs when the primary axial mode frequency of a launch vehicle, which increases as propellants are used, approaches and passes that of the fuel or oxidizer in the feed lines. This feedback loop can cause structural/propulsion system vibrations to increase to the point where the consequences could be catastrophic (Larsen, 2008; Blair et al., 2011). This phenomenon is referred to as pogo.

It is required launch vehicle design practice to eliminate the possibility of pogo from the design (NASA SP-8055, 1970). The most common practice is to separate the structural axial mode of the system from the primary modes that involve the feed lines and propellants. The structural axial mode increases in frequency with flight time, whereas the feed line/propellant modes tend to remain within some frequency bounds.
Hence, if the axial structural mode (frequency) starts below the feed system modes, then it could approach and possibly cross the feed line modes as the propellants are used and the vehicle becomes lighter. On some vehicles this point has occurred near the stage engine shutdowns and the mitigating action involved monitoring the axial acceleration and commanding the engines to shut down once certain acceleration limits were reached. However, on most launch vehicles the mitigating action has been the introduction of pogo accumulators, which have the effect of lowering the critical feed line/propellant mode frequencies below the axial mode of the system. Since the axial mode increases in frequency with flight time, the separation in frequency increases and the possibility of pogo is mitigated.


The analytical prediction of pogo involves coupling the structural dynamic model of the launch vehicle to models of the engines and fuel/oxidizer line liquids, including the tank bottom pressures and structural motions. The feedback between the thrust and structural vibrations, because of the phasing, can become such that the oscillatory forces reduce or negate the structural damping and cause the oscillations to grow until: nonlinearities in the system increase the damping; or the structural mode frequencies and modal gains change and decouple the structural and propulsion system dynamics; or the structural capability is exceeded; or the propulsion system performance is negatively impacted, including shutting down the engines. Therefore, a pogo stability analysis involves computing the coupled system damping values, which are obtained from the complex eigenvalues computed in the coupled system complex modes solution (Oppenheim and Rubin, 1993; Rubin, 1966; Dotson et al., 2005; Sekita et al., 2001; Brennen, 1994). Pogo stability analysis model development and analysis methodology have, for all practical purposes, not evolved significantly in the last several decades. However, today there are signs of potentially significant advances being made. For example, the use of finite element models with full hydroelastic modeling of the propellants is being considered. These could possibly replace the historical stand-alone models of the feed line liquids. Because of these changes, any description of the current pogo stability analysis approaches is likely to become quickly dated. Hence, instead of providing such a description herein, the reader is encouraged to obtain the latest developments from the to-be-published technical literature.

Problems

Problem 7.1

The system shown below is unconstrained (free-free) in the y-coordinate direction, which is the only direction in which the masses are allowed to move. The y-coordinates are in an inertial coordinate system (black blocks). Considering the constraints, how many rigid body modes does the system have? Derive the rigid body mode without solving the eigenvalue problem; note that this can be done by inspection. Normalize the rigid body mode to unit modal mass.


Solution 7.1

Since the system is allowed to move only in the y-coordinate direction, it has one rigid body mode, provided it is not connected to "ground" in the y-coordinate direction, which it is not. In a rigid body mode a system does not deform elastically; therefore, there is no relative motion between the mass points, which means that all have to undergo the same displacement. Hence, the rigid body mode shape is

$$\begin{bmatrix} \phi_{11} & \phi_{21} & \phi_{31} & \phi_{41} & \phi_{51} & \phi_{61} \end{bmatrix}^T = \begin{bmatrix} 1 & 1 & 1 & 1 & 1 & 1 \end{bmatrix}^T$$

To normalize a mode shape, elastic or rigid body, to unit modal mass, we first must compute the modal mass with the as-computed normalization, i.e.,

$$\{\phi\}^T[m]\{\phi\} = \begin{Bmatrix} 1 & 1 & 1 & 1 & 1 & 1 \end{Bmatrix}\,\mathrm{diag}(1.0,\ 2.0,\ 3.0,\ 0.5,\ 1.0,\ 1.5)\begin{Bmatrix} 1 \\ 1 \\ 1 \\ 1 \\ 1 \\ 1 \end{Bmatrix} = 9$$

The rigid body mode shape, $\{\phi_r\}$, normalized to unit modal mass is

$$\{\phi_r\} = \frac{1}{\sqrt 9}\begin{bmatrix} 1 & 1 & 1 & 1 & 1 & 1 \end{bmatrix}^T = \begin{bmatrix} \tfrac{1}{3} & \tfrac{1}{3} & \tfrac{1}{3} & \tfrac{1}{3} & \tfrac{1}{3} & \tfrac{1}{3} \end{bmatrix}^T$$

Problem 7.2

The system shown below is unconstrained (free-free) in the y-coordinate direction, which is the only direction in which the masses are allowed to move. The y-coordinates establish position in an inertial coordinate system (black blocks are origins). The rigid body mode shape is the same as in the solution to Problem 7.1. Using the normalized rigid body mode shape from Problem 7.1, compute the rigid body inertial acceleration of each mass due to the externally applied forces, $f_1 = 45$ and $f_2 = 45$. Solve the problem by transforming the equations of motion into the modal coordinate domain. The problem can be solved without knowledge of the elastic modes of the system because of mode shape orthogonality. Discuss why your solution makes sense.

Solution 7.2

The matrix differential equation of motion for the system is

$$[m]\{\ddot y(t)\} + [k]\{y(t)\} = \{f(t)\}$$

Solving the eigenvalue problem yields the six modes of the system, one rigid body mode and five elastic modes. The modal transformation between the six physical coordinates,

$$\{y(t)\} = \begin{bmatrix} y_1(t) & \cdots & y_6(t) \end{bmatrix}^T$$

and the six modal coordinates,

$$\{q(t)\} = \begin{bmatrix} q_1(t) & \cdots & q_6(t) \end{bmatrix}^T$$

is

$$\{y(t)\} = \begin{bmatrix} \{\phi_r\} & [\phi_e] \end{bmatrix}\begin{Bmatrix} q_r(t) \\ \{q_e(t)\} \end{Bmatrix}$$

where we explicitly show the single rigid body mode shape separate from the five elastic mode shapes. Applying the coordinate transformation and its second time derivative to the equation of motion, and then premultiplying by the transpose of the transformation, yields

$$\begin{bmatrix} 1 & \{0\}^T \\ \{0\} & [I] \end{bmatrix}\begin{Bmatrix} \ddot q_r(t) \\ \{\ddot q_e(t)\} \end{Bmatrix} + \begin{bmatrix} 0 & \{0\}^T \\ \{0\} & [\omega_n^2] \end{bmatrix}\begin{Bmatrix} q_r(t) \\ \{q_e(t)\} \end{Bmatrix} = \begin{Bmatrix} \{\phi_r\}^T\{f(t)\} \\ [\phi_e]^T\{f(t)\} \end{Bmatrix}$$

The equation associated with $q_r(t)$ represents the rigid body behavior of the system, whereas those associated with $\{q_e(t)\}$ represent the elastic behavior. We can use the upper partition to solve for the rigid body acceleration response, i.e.,

$$\ddot q_r(t) = \{\phi_r\}^T\{f(t)\} = \begin{Bmatrix} \tfrac{1}{3} & \tfrac{1}{3} & \tfrac{1}{3} & \tfrac{1}{3} & \tfrac{1}{3} & \tfrac{1}{3} \end{Bmatrix}\begin{Bmatrix} 0 \\ 0 \\ 0 \\ 0 \\ 45 \\ 45 \end{Bmatrix} = 30$$

Transforming back to physical coordinates yields the sought-after result,

$$\begin{Bmatrix} \ddot y_1(t) \\ \ddot y_2(t) \\ \ddot y_3(t) \\ \ddot y_4(t) \\ \ddot y_5(t) \\ \ddot y_6(t) \end{Bmatrix} = \{\phi_r\}\ddot q_r(t) = \begin{Bmatrix} \tfrac{1}{3} \\ \tfrac{1}{3} \\ \tfrac{1}{3} \\ \tfrac{1}{3} \\ \tfrac{1}{3} \\ \tfrac{1}{3} \end{Bmatrix} 30 = \begin{Bmatrix} 10 \\ 10 \\ 10 \\ 10 \\ 10 \\ 10 \end{Bmatrix}$$
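A numerical restatement of the computation (the force vector and mode shape are those of the solution above):

```python
import numpy as np

m = np.diag([1.0, 2.0, 3.0, 0.5, 1.0, 1.5])
phi_r = np.ones(6) / 3.0                           # rigid body mode at unit modal mass
f = np.array([0.0, 0.0, 0.0, 0.0, 45.0, 45.0])     # applied forces on the last two masses

qddot_r = phi_r @ f            # rigid body modal acceleration
yddot = phi_r * qddot_r        # back to physical coordinates

# Sanity check against Newton's Second Law for the total system
assert np.isclose(f.sum() / m.diagonal().sum(), yddot[0])
```

Here `qddot_r` evaluates to 30 and every entry of `yddot` to 10, matching both the modal solution and the total-force/total-mass check that follows.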


We can establish the reasonableness of the above result by writing Newton's Second Law for the overall center of mass of the system. Since the total force acting on the system is 90, and the total mass of the system is 9, we get
$$9\ddot{y}(t) = 90 \quad\Rightarrow\quad \ddot{y}(t) = 10$$
which is the acceleration of the center of mass and, hence, that of each mass point, since there is no relative motion between the masses in a rigid body mode. Note that this is the rigid body acceleration. Superimposed on this will be the elastic mode accelerations, but because of the orthogonality between the rigid body and elastic mode shapes, the elastic mode vibrations cannot affect the overall rigid body acceleration of the system unless there is a feedback mechanism, such as aeroelasticity.

Problem 7.3 Derive the equations of motion for the system shown in the figure, and then collect the equations into a matrix differential equation of motion. The wheel is pinned (allowed to rotate about point o) to the same frame that spring $k_3$ is attached to (black bar). Point o, the black bar, and the coordinates are in the same inertial reference frame with their origins as shown. How would the matrix differential equation of motion change if instead of force $f(t)$ there were a torque, $T(t)$, acting on the wheel about the z-axis?


Solution 7.3 Applying Newton's laws to mass $m_1$:
$$m_1\ddot{y}_1(t) = -k_1\left(y_1(t) + r\theta_z(t)\right) + f(t) \quad\Rightarrow\quad m_1\ddot{y}_1(t) + k_1 y_1(t) + k_1 r\theta_z(t) = f(t)$$
Applying Newton's laws to the wheel:
$$I_{zz}\ddot{\theta}_z(t) = -k_1\left(y_1(t) + r\theta_z(t)\right)r + k_2\left(y_2(t) - r\theta_z(t)\right)r \quad\Rightarrow\quad I_{zz}\ddot{\theta}_z(t) + k_1 r y_1(t) - k_2 r y_2(t) + (k_1 + k_2)r^2\theta_z(t) = 0$$
Applying Newton's laws to mass $m_2$:
$$m_2\ddot{y}_2(t) = -k_2\left(y_2(t) - r\theta_z(t)\right) - k_3 y_2(t) \quad\Rightarrow\quad m_2\ddot{y}_2(t) - k_2 r\theta_z(t) + (k_2 + k_3)y_2(t) = 0$$
Collecting the three equations into a matrix differential equation we obtain
$$\begin{bmatrix} m_1 & 0 & 0 \\ 0 & I_{zz} & 0 \\ 0 & 0 & m_2 \end{bmatrix}\begin{Bmatrix} \ddot{y}_1(t) \\ \ddot{\theta}_z(t) \\ \ddot{y}_2(t) \end{Bmatrix} + \begin{bmatrix} k_1 & k_1 r & 0 \\ k_1 r & (k_1 + k_2)r^2 & -k_2 r \\ 0 & -k_2 r & k_2 + k_3 \end{bmatrix}\begin{Bmatrix} y_1(t) \\ \theta_z(t) \\ y_2(t) \end{Bmatrix} = \begin{Bmatrix} f(t) \\ 0 \\ 0 \end{Bmatrix}$$
If instead we had a torque, $T(t)$, acting on the wheel about the z-axis, the mass and stiffness matrices would be unchanged and the right-hand side would become $\{\,0\ \ T(t)\ \ 0\,\}^T$.
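The assembled matrices can be sanity-checked numerically. The parameter values below are illustrative placeholders (the figure's values are not given here); the checks verify properties any correct assembly must have: a symmetric stiffness matrix, and positive definiteness since spring $k_3$ grounds the chain.

```python
import numpy as np

# Illustrative (assumed) parameter values
m1, m2, Izz, r = 2.0, 3.0, 0.5, 0.25
k1, k2, k3 = 100.0, 150.0, 200.0

M = np.diag([m1, Izz, m2])
K = np.array([[k1,      k1 * r,            0.0    ],
              [k1 * r,  (k1 + k2) * r**2, -k2 * r ],
              [0.0,    -k2 * r,            k2 + k3]])

# Stiffness must be symmetric; with k3 to ground there is no rigid body mode,
# so all eigenvalues of K should be strictly positive.
assert np.allclose(K, K.T)
assert np.all(np.linalg.eigvalsh(K) > 0.0)

# Force f(t) enters only the y1 equation; a torque T(t) would instead enter
# only the theta_z equation:
f_pattern = np.array([1.0, 0.0, 0.0])
T_pattern = np.array([0.0, 1.0, 0.0])
```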

Problem 7.4 Derive the equations of motion for the double pendulum shown in the figure. Assume small angular motion, i.e., $\cos\theta(t) \approx 1$ and $\sin\theta(t) \approx \theta(t)$. Also, coordinates $x_1(t)$ and $x_2(t)$ are in an inertial reference frame, and $g$ is the acceleration due to the force of gravity. Write the equations of motion as a matrix equation of motion. Set $m_2 = 0$ and discuss your results.


Solution 7.4 Since coordinates $x_1(t)$ and $x_2(t)$ are in an inertial reference frame, we can apply Newton's Second Law directly to each mass point's lateral, x-coordinate direction motion. But first, since we are dealing with small angular motion, $\theta_1(t) = x_1(t)/l_1$ and $\theta_2(t) = (x_2(t) - x_1(t))/l_2$. The vertical component of $T_2$ acting on mass $m_2$, which must be equal to $m_2 g$, but directed opposite, yields
$$T_2\cos\theta_2 = m_2 g \quad\Rightarrow\quad T_2 = m_2 g$$
Note that because the vertical motion will be small compared to the lateral, we are not including any vertical inertial loads. The lateral component, $T_{2x}$, of $T_2$ is
$$T_{2x} = -T_2\sin\theta_2(t) = -m_2 g\,\frac{x_2(t) - x_1(t)}{l_2}$$
and applying Newton's Second Law yields
$$m_2\ddot{x}_2(t) = -m_2 g\,\frac{x_2(t) - x_1(t)}{l_2} = \frac{m_2 g}{l_2}x_1(t) - \frac{m_2 g}{l_2}x_2(t)$$


Note that the mass can be divided out, but for now we will leave it as is. Proceeding to mass $m_1$, the lateral force component due to $T_2$ will be the same as for mass $m_2$, but directed in the opposite direction, according to Newton's Third Law. To compute the lateral component of $T_1$, we must first establish $T_1$ so that we have equilibrium in the vertical direction,
$$T_1\cos\theta_1 = m_1 g + T_2\cos\theta_2(t) \quad\Rightarrow\quad T_1 = m_1 g + T_2 = (m_1 + m_2)g$$
Applying Newton's Second Law to the lateral motion of mass $m_1$ produces
$$m_1\ddot{x}_1(t) = -T_1\sin\theta_1(t) + T_2\sin\theta_2(t) = -\left((m_1 + m_2)g\right)\theta_1(t) + (m_2 g)\theta_2(t)$$
$$= -(m_1 + m_2)g\,\frac{x_1(t)}{l_1} + m_2 g\,\frac{x_2(t) - x_1(t)}{l_2} = -\left(\frac{(m_1 + m_2)g}{l_1} + \frac{m_2 g}{l_2}\right)x_1(t) + \frac{m_2 g}{l_2}x_2(t)$$
Collecting the two equations of motion into matrices yields
$$\begin{bmatrix} m_1 & 0 \\ 0 & m_2 \end{bmatrix}\begin{Bmatrix} \ddot{x}_1(t) \\ \ddot{x}_2(t) \end{Bmatrix} + \begin{bmatrix} \dfrac{(m_1 + m_2)g}{l_1} + \dfrac{m_2 g}{l_2} & -\dfrac{m_2 g}{l_2} \\[2ex] -\dfrac{m_2 g}{l_2} & \dfrac{m_2 g}{l_2} \end{bmatrix}\begin{Bmatrix} x_1(t) \\ x_2(t) \end{Bmatrix} = \begin{Bmatrix} 0 \\ 0 \end{Bmatrix}$$
Note that if we set $m_2 = 0$, we obtain a single equation, which yields $\omega_n = \sqrt{g/l_1}$; this is the expected result for a single mass pendulum oscillating at small amplitudes.

Problem 7.5 A two-degree-of-freedom system with classical normal modes has the following circular natural frequencies squared and mode shapes:
$$\left[\omega_n^2\right] = \begin{bmatrix} 900 & 0 \\ 0 & 1100 \end{bmatrix} \quad\text{and}\quad [\phi] = \frac{1}{\sqrt{2}}\begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}$$


There are two forces acting on the system,
$$\begin{Bmatrix} f_1(t) \\ f_2(t) \end{Bmatrix} = \begin{Bmatrix} A_1 \\ A_2 \end{Bmatrix}\sin\omega t$$
What can be done to cause the system to only vibrate in its first mode, irrespective of the frequency of excitation, $\omega$? What can be done so that it vibrates only in its second mode?

Solution 7.5 The response of a system is composed of a linear superposition of responses in each of its modes. Because of mode shape orthogonality, the response of each mode can be computed independent of the others. Hence,
$$\{\ddot{w}(t)\} = [\phi]\{\ddot{q}(t)\} = \{\phi\}_1\ddot{q}_1(t) + \{\phi\}_2\ddot{q}_2(t)$$
where $\{\ddot{w}(t)\}$ are the physical coordinates, and $\ddot{q}_1(t)$ and $\ddot{q}_2(t)$ are obtained as the solutions to the following uncoupled equations of motion:
$$\ddot{q}_1(t) + 2\zeta_1\omega_{n,1}\dot{q}_1(t) + \omega_{n,1}^2 q_1(t) = \{\phi\}_1^T\{A\}\sin\omega t = (\phi_{11}A_1 + \phi_{21}A_2)\sin\omega t$$
$$\ddot{q}_2(t) + 2\zeta_2\omega_{n,2}\dot{q}_2(t) + \omega_{n,2}^2 q_2(t) = \{\phi\}_2^T\{A\}\sin\omega t = (\phi_{12}A_1 + \phi_{22}A_2)\sin\omega t$$
In order for the system to vibrate solely in its first mode, irrespective of the frequency of excitation, $\ddot{q}_2(t)$ must be equal to zero. The only way this can happen is if the modal excitation force, $(\phi_{12}A_1 + \phi_{22}A_2)\sin\omega t$, is zero. Since we control the force magnitudes, we can solve for $A_1$ and $A_2$ such that $\phi_{12}A_1 + \phi_{22}A_2 = 0$. Hence, for our problem,
$$\frac{1}{\sqrt{2}}A_1 - \frac{1}{\sqrt{2}}A_2 = 0 \quad\Rightarrow\quad A_1 = A_2$$
To set the first mode modal force to zero instead, we just set $A_1 = -A_2$, and the system vibrates only in its second mode.

Problem 7.6 The figure shows the time history responses of two coordinates of a structure whose motion was initiated with initial conditions. Is the structure a single- or multi-degree-of-freedom system? How many modes are most likely responding? What are estimates of their natural frequencies? Does the system have damping?


Solution 7.6 Since the vibrations were initiated with initial conditions, the response is unforced. Because of the beating, the system must have at least two modes. Because of the character of the vibration time histories, the responses are most likely due to two modes whose natural frequencies are close, so we will proceed with that assumption. We know that the beat frequency (envelope function frequency) is given by $(f_2 - f_1)/2$ and the carrier frequency (high frequency vibration) is given by $(f_2 + f_1)/2$. Hence, from the figure we can estimate each, and compute $f_1$ and $f_2$, i.e., $(f_2 - f_1)/2 = 0.25$ Hz and $(f_2 + f_1)/2 = 4.8$ Hz. These yield $f_1 = 4.55$ Hz and $f_2 = 5.05$ Hz. The system has damping because the response decays.

Problem 7.7 For the following time history, $x(t) = x_1(t) + x_2(t) = \sin(2\pi 2t) + \sin(2\pi 3t)$, establish the envelope function and compute the unit response. Plot $x_1(t) + x_2(t)$ and the envelope and unit response time histories. See the section on beating for how to proceed.

Solution 7.7 The envelope function is
$$2\cos\left(2\pi\frac{f_2 - f_1}{2}t\right) = 2\cos(\pi t)$$
and the unit response time history is
$$\sin\left(2\pi\frac{f_2 + f_1}{2}t\right) = \sin(5\pi t)$$
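The envelope/carrier decomposition in Solution 7.7 can be verified numerically; the product of the two functions reproduces the original sum at every time point.

```python
import numpy as np

# Check sin(2*pi*2*t) + sin(2*pi*3*t) == 2*cos(pi*t) * sin(5*pi*t)
t = np.linspace(0.0, 6.0, 2001)
x = np.sin(2 * np.pi * 2 * t) + np.sin(2 * np.pi * 3 * t)

envelope = 2.0 * np.cos(np.pi * t)   # beat frequency (f2 - f1)/2 = 0.5 Hz
carrier = np.sin(5 * np.pi * t)      # carrier frequency (f2 + f1)/2 = 2.5 Hz

assert np.allclose(x, envelope * carrier)
```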


Problem 7.8 The time history below is the free-vibration response of a system. How many modes are responding? What are the natural frequencies of the modes in hertz and in rad/s?


Solution 7.8 Since it is free-vibration response and we have beating, at least two modes must be involved. The natural frequencies are related to the response and the beat (envelope) frequencies by
$$f_{beat/envelope} = \frac{f_2 - f_1}{2} \quad\text{and}\quad f_{response} = \frac{f_2 + f_1}{2}$$
From the time history we obtain a beat period of 1 s (1.25 − 0.25), which is a beat frequency of 1 Hz; and within the beat period we have 10 complete cycles of the response time history. Hence,
$$1 = \frac{f_2 - f_1}{2} \quad\text{and}\quad 10 = \frac{f_2 + f_1}{2}$$
Solving for the two frequencies we obtain $f_1 = 9$ Hz and $f_2 = 11$ Hz. Since $\omega_n = 2\pi f_n$ we also have $\omega_1 = 2\pi(9) = 56.55$ rad/sec and $\omega_2 = 2\pi(11) = 69.12$ rad/sec. The below graph shows the beat (envelope) time history superimposed on the time history in the problem statement.
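The two-equation frequency recovery above is simple enough to script; the sketch below reproduces the numbers from the measured beat period and cycle count.

```python
import math

beat_period = 1.0                       # seconds (1.25 - 0.25)
cycles_per_beat = 10                    # complete response cycles per beat

f_beat = 1.0 / beat_period              # (f2 - f1)/2 = 1 Hz
f_resp = cycles_per_beat / beat_period  # (f2 + f1)/2 = 10 Hz

f1 = f_resp - f_beat                    # 9 Hz
f2 = f_resp + f_beat                    # 11 Hz
w1, w2 = 2 * math.pi * f1, 2 * math.pi * f2
print(f1, f2, round(w1, 2), round(w2, 2))   # 9.0 11.0 56.55 69.12
```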

Problem 7.9 For the system in Fig. 7.9-3, show that inverting the stiffness matrix using singular value decomposition and then transforming the resulting matrix with the inertia relief matrix (Eq. 7.9-32) produces the same flexibility matrix, $[G_e]$, as in the example of Section 7.9, Eq. (7.9-35).


Solution 7.9 The eigenvalue problem $\left(-\omega_{kn}^2[I] + [k]\right)\{\phi_k\} = \{0\}$ gives
$$\left[\omega_{kn}^2\right] = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 6 \end{bmatrix} \quad\text{and}\quad [\phi_k] = \begin{bmatrix} 0.5774 & -0.7071 & 0.4082 \\ 0.5774 & 0.0000 & -0.8165 \\ 0.5774 & 0.7071 & 0.4082 \end{bmatrix}$$
The singular value decomposition inverse (with the zero eigenvalue excluded) is
$$[\phi_{ke}]\left[\omega_{kn}^2\right]_e^{-1}[\phi_{ke}]^T = \begin{bmatrix} -0.7071 & 0.4082 \\ 0.0000 & -0.8165 \\ 0.7071 & 0.4082 \end{bmatrix}\begin{bmatrix} \tfrac{1}{2} & 0 \\ 0 & \tfrac{1}{6} \end{bmatrix}\begin{bmatrix} -0.7071 & 0.0000 & 0.7071 \\ 0.4082 & -0.8165 & 0.4082 \end{bmatrix} = \begin{bmatrix} 0.2778 & -0.0556 & -0.2222 \\ -0.0556 & 0.1111 & -0.0556 \\ -0.2222 & -0.0556 & 0.2778 \end{bmatrix}$$
and the resulting flexibility matrix is
$$[G_e] = \left([I] - [m]\{\phi_r\}\{\phi_r\}^T\right)^T [\phi_{ke}]\left[\omega_{kn}^2\right]_e^{-1}[\phi_{ke}]^T \left([I] - [m]\{\phi_r\}\{\phi_r\}^T\right) = \begin{bmatrix} 0.20 & -0.10 & -0.20 \\ -0.10 & 0.10 & 0.00 \\ -0.20 & 0.00 & 0.40 \end{bmatrix}$$
This is the same flexibility matrix as obtained in the example problem in Section 7.9. It should be noted that $\{\phi_r\}$ in the last equation contains the rigid body eigenvectors of the system, and not just of the stiffness matrix.

Problem 7.10 Show that the displacement computed in Eq. (7.9-46), when substituted into the equation of motion, will yield equilibrium.
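The flexibility computation in Solution 7.9 can be reproduced numerically. The stiffness matrix and masses below are inferred from the printed eigendata ($[k]$ is the spring chain whose eigenvalues are 0, 2, 6; masses of 2, 2, 1 are assumed as consistent with the printed result), so treat this as a sketch of the procedure rather than the book's figure data.

```python
import numpy as np

# Inferred stiffness (eigenvalues 0, 2, 6) and assumed masses (2, 2, 1)
k = np.array([[ 2.0, -2.0,  0.0],
              [-2.0,  4.0, -2.0],
              [ 0.0, -2.0,  2.0]])
m = np.diag([2.0, 2.0, 1.0])

# Pseudo-inverse of the singular stiffness matrix (zero eigenvalue dropped)
kinv = np.linalg.pinv(k)

# Mass-normalized rigid body mode of the system: phi_r^T [m] phi_r = 1
phi_r = np.ones(3) / np.sqrt(np.trace(m))

# Inertia relief projection of the pseudo-inverse
P = np.eye(3) - m @ np.outer(phi_r, phi_r)
Ge = P.T @ kinv @ P

Ge_expected = np.array([[ 0.20, -0.10, -0.20],
                        [-0.10,  0.10,  0.00],
                        [-0.20,  0.00,  0.40]])
assert np.allclose(Ge, Ge_expected)
```

Note that the projector uses the rigid body mode of the *system* (mass-weighted), not the stiffness matrix's null vector alone, which is the point made at the end of the solution.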


Solution 7.10 The equation of motion is
$$[m]\{\ddot{w}(t)\} + [k]\{w(t)\} = \{f(t)\}$$
Substituting the displacement computed in Eq. (7.9-46),
$$[m]\{\ddot{w}(t)\} + [k][\phi_e]\left[\omega_n^2\right]^{-1}\left([\phi_e]^T\{f(t)\} - \{\ddot{q}_e(t)\}\right) = \{f(t)\}$$
Substituting $\{\ddot{w}(t)\} = [\,[\phi_r]\ [\phi_e]\,]\begin{Bmatrix} \ddot{q}_r(t) \\ \ddot{q}_e(t) \end{Bmatrix}$ gives
$$[m][\phi_r]\{\ddot{q}_r(t)\} + [m][\phi_e]\{\ddot{q}_e(t)\} + [k][\phi_e]\left[\omega_n^2\right]^{-1}\left([\phi_e]^T\{f(t)\} - \{\ddot{q}_e(t)\}\right) = \{f(t)\}$$
Premultiplying by $[\phi_e]^T$ produces
$$[\phi_e]^T[m][\phi_r]\{\ddot{q}_r(t)\} + [\phi_e]^T[m][\phi_e]\{\ddot{q}_e(t)\} + [\phi_e]^T[k][\phi_e]\left[\omega_n^2\right]^{-1}\left([\phi_e]^T\{f(t)\} - \{\ddot{q}_e(t)\}\right) = [\phi_e]^T\{f(t)\}$$
Performing the indicated multiplications,
$$[0]\{\ddot{q}_r(t)\} + [I]\{\ddot{q}_e(t)\} + \left[\omega_n^2\right]\left[\omega_n^2\right]^{-1}\left([\phi_e]^T\{f(t)\} - \{\ddot{q}_e(t)\}\right) = [\phi_e]^T\{f(t)\}$$
$$[\phi_e]^T\{f(t)\} = [\phi_e]^T\{f(t)\}$$
which is the required equilibrium.

Problem 7.11 Compute the value of $\omega t$ that maximizes the quantity $w_j(t)$ in the following equation:
$$w_j(t) = a_j\cos\omega t - b_j\sin\omega t$$
Solution 7.11 Differentiating with respect to $\omega t$, and then setting the result equal to zero, produces
$$\frac{dw_j(t)}{d(\omega t)} = -a_j\sin\omega t - b_j\cos\omega t = 0$$
Solving for $\omega t$ gives the desired result, $\omega t = \tan^{-1}\left(-b_j/a_j\right)$.
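The stationary-point result of Solution 7.11 can be checked against a dense numerical search; the $a$, $b$ values below are arbitrary. Note that $\tan^{-1}(-b/a)$ has two branches per cycle; the maximum corresponds to the branch given by `arctan2(-b, a)`, the other branch being the minimum.

```python
import numpy as np

a, b = 3.0, 4.0
wt = np.linspace(0.0, 2.0 * np.pi, 200001)
w = a * np.cos(wt) - b * np.sin(wt)

wt_peak = wt[np.argmax(w)]                 # numerical location of the maximum
wt_analytic = np.arctan2(-b, a)            # branch of arctan(-b/a) for the max
if wt_analytic < 0.0:
    wt_analytic += 2.0 * np.pi

assert abs(wt_peak - wt_analytic) < 1e-3
# Peak value is the amplitude sqrt(a^2 + b^2)
assert abs(w.max() - np.hypot(a, b)) < 1e-6
```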


Problem 7.12 For the below first-order system use a complex eigensolver and compute the complex eigenvalues and eigenvectors. If a solver is not available, use the results in Eqs. (7.11-14) and (7.11-15). Note that the first two coordinates in the eigenvector correspond to the velocities and the last two to the displacements. Next, solve the undamped eigenvalue problem, $\left(-\omega_{nj}^2[m] + [k]\right)\{\phi\}_j = \{0\}$, and compare the eigenvectors to the displacement portion of the complex eigenvectors obtained in the first part of this problem. Also, compare the undamped circular natural frequencies. Explain your results. Then extract from the complex eigenvalues the critical damping ratios for each mode.
$$\left[\widetilde{M}\right] = \begin{bmatrix} [0] & [m] \\ [m] & [c] \end{bmatrix} = \begin{bmatrix} 0 & 0 & 25 & 0 \\ 0 & 0 & 0 & 175 \\ 25 & 0 & 60 & -20 \\ 0 & 175 & -20 & 60 \end{bmatrix}$$
$$\left[\widetilde{K}\right] = \begin{bmatrix} -[m] & [0] \\ [0] & [k] \end{bmatrix} = \begin{bmatrix} -25 & 0 & 0 & 0 \\ 0 & -175 & 0 & 0 \\ 0 & 0 & 30000 & -10000 \\ 0 & 0 & -10000 & 30000 \end{bmatrix}$$

Solution 7.12 The complex eigenvalues and eigenvectors are given, respectively, in Eqs. (7.11-14) and (7.11-15). The mass and stiffness matrices are given in the problem definition, and the eigenvalues and eigenvectors of the undamped eigenvalue problem
$$\left(-\omega_{nj}^2\begin{bmatrix} 25 & 0 \\ 0 & 175 \end{bmatrix} + \begin{bmatrix} 30000 & -10000 \\ -10000 & 30000 \end{bmatrix}\right)\begin{Bmatrix} \phi_1 \\ \phi_2 \end{Bmatrix}_j = \begin{Bmatrix} 0 \\ 0 \end{Bmatrix}$$
are
$$\left[\omega_n^2\right] = \begin{bmatrix} 149.7 & 0 \\ 0 & 1221.8 \end{bmatrix} \quad\text{and}\quad [\phi] = \begin{bmatrix} 0.0285 & 0.1980 \\ 0.0748 & -0.0108 \end{bmatrix}$$


The undamped circular frequencies from the complex eigensolution are given in Eq. (7.11-18), and they are the same as the square roots of the eigenvalues from the undamped eigenvalue problem shown above. Normalizing the real part of the displacement components of the eigenvectors in Eq. (7.11-15) to yield unit modal mass produces the same eigenvectors as the undamped eigensolution shown above. The undamped and complex eigenvectors are the same because we have classical normal modes, the damping matrix being proportional to the stiffness matrix. The critical damping ratios extracted from the complex eigenvalues are given in Eq. (7.11-16).

Problem 7.13 Scale (rotate) the complex velocity components of the lowest frequency eigenvector computed in Problem 7.12 such that the first coordinate aligns with the real axis. Explain the result you obtain by rotating the second coordinate. Scale both components by the ratio of the first coordinate undamped eigenvector value from Problem 7.12 divided by the rotated result at that coordinate.

Solution 7.13 The scaling (rotation) factor is (see Chapter 6, Section 6.11.4)
$$q_{1,1} = \frac{\mathrm{conj}(0.0044 - i\,0.3567)}{|0.0044 - i\,0.3567|} = 0.0122 + i\,0.9999$$
Applying it to the first eigenvector velocity coordinates yields
$$q_{1,1}\begin{Bmatrix} 0.0044 - i\,0.3567 \\ 0.0115 - i\,0.9365 \end{Bmatrix} = \begin{Bmatrix} 0.3567 + i\,0.0000 \\ 0.9366 - i\,0.0000 \end{Bmatrix}$$
Since the rotation also aligned the second coordinate along the real axis, the velocity mode shape is real, which it should be for a system with classical damping. Normalizing the vector to the same value as the real normal mode at the first coordinate produces
$$\frac{0.0285}{0.3567}\begin{Bmatrix} 0.3567 \\ 0.9366 \end{Bmatrix} = \begin{Bmatrix} 0.0285 \\ 0.0748 \end{Bmatrix}$$

Problem 7.14 Show that the plots in Fig. 7.10-5 are correct.


Solution 7.14 The solution shown in Fig. 7.10-5 that corresponds to $\dot{x}(0) = 100$ and $\dot{y}(0) = 100$ was derived in the section where the figure is presented. The figure also shows the solution for initial velocities of $\dot{x}(0) = 200$ and $\dot{y}(0) = 100$. For this latter case the modal initial velocities are
$$\{\dot{q}(0)\} = [\phi]^T[m]\{\dot{w}(0)\}, \qquad \{\dot{w}(0)\} = \{\,200\ \ 100\ \ 0\ \ 0\,\}^T$$
which evaluates to
$$\{\dot{q}(0)\} = \{\,19.268\ \ 38.536\ \ 7.532\ \ 15.064\,\}^T$$
The corresponding modal responses are
$$q_1(t) = \frac{\dot{q}_1(0)}{\omega_{n1}}\sin\omega_{n1}t = \frac{19.268}{35.982}\sin(35.982t)$$
$$q_2(t) = \frac{\dot{q}_2(0)}{\omega_{n2}}\sin\omega_{n2}t = \frac{38.536}{35.982}\sin(35.982t)$$
$$q_3(t) = \frac{\dot{q}_3(0)}{\omega_{n3}}\sin\omega_{n3}t = \frac{7.532}{190.09}\sin(190.09t)$$
$$q_4(t) = \frac{\dot{q}_4(0)}{\omega_{n4}}\sin\omega_{n4}t = \frac{15.064}{190.09}\sin(190.09t)$$
and transforming back to physical coordinates,
$$\begin{Bmatrix} x(t) \\ y(t) \end{Bmatrix} = \begin{bmatrix} 0 & 4.512 & 0 & 1.735 \\ 4.512 & 0 & 1.735 & 0 \end{bmatrix}\begin{Bmatrix} 0.535\sin(35.982t) \\ 1.071\sin(35.982t) \\ 0.040\sin(190.09t) \\ 0.080\sin(190.09t) \end{Bmatrix} = \begin{Bmatrix} 4.832\sin(35.982t) + 0.139\sin(190.09t) \\ 2.414\sin(35.982t) + 0.069\sin(190.09t) \end{Bmatrix}$$
The plot is shown in Fig. 7.10-5.
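The modal superposition above is easy to script. The mode-shape rows mapping modal to the plotted x, y coordinates are as inferred from the printed result; the frequencies and modal initial velocities come from the solution.

```python
import numpy as np

wn = np.array([35.982, 35.982, 190.09, 190.09])       # rad/s, from the solution
qdot0 = np.array([19.268, 38.536, 7.532, 15.064])     # modal initial velocities

t = np.linspace(0.0, 0.5, 2001)
# Undamped free response from initial modal velocities:
# q_i(t) = (qdot_i(0)/w_i) * sin(w_i t)
q = (qdot0 / wn)[:, None] * np.sin(wn[:, None] * t)   # (4, nt) modal histories

# Rows mapping modal coordinates to x(t) and y(t) (inferred from the result)
phi_xy = np.array([[0.0,   4.512, 0.0,   1.735],
                   [4.512, 0.0,   1.735, 0.0  ]])
xy = phi_xy @ q                                       # (2, nt) physical response

# Modal amplitudes match the coefficients quoted in the solution
assert np.allclose(qdot0 / wn, [0.535, 1.071, 0.040, 0.080], atol=1e-3)
```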


Problem 7.15 Let $x_1(t) = \sin(2\pi f_1 t)$ and $x_2(t) = \sin(2\pi f_2 t)$, where $f_1 = 5/2$ and $f_2 = 11/3$. Do these functions have common periodicity, and if so what is it? Plot the trace, $(x_1(t), x_2(t))$, as a Lissajous graph. How many tangent points are there along each axis of the plot?

Solution 7.15 The ratio of the natural frequencies is $f_1/f_2 = 15/22$, which is a rational number with $n_1 = 15$ and $n_2 = 22$; hence, $T = n_1/f_1 = 15/(5/2) = 6$, which is the shortest common period (see Appendix 7.3). Below is the Lissajous plot. There are 15 tangent points along $x = \pm 1$, and 22 along $y = \pm 1$.
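The common-period computation generalizes to any rational frequency pair; exact rational arithmetic avoids floating-point issues in reducing the ratio.

```python
from fractions import Fraction

def common_period(f1: Fraction, f2: Fraction) -> Fraction:
    """Smallest T > 0 with f1*T and f2*T both integers (see Appendix 7.3)."""
    ratio = f1 / f2                      # Fraction reduces to lowest terms
    n1, n2 = ratio.numerator, ratio.denominator
    T = Fraction(n1) / f1
    assert f2 * T == n2                  # sanity check: f2*T is the integer n2
    return T

T = common_period(Fraction(5, 2), Fraction(11, 3))
print(T)   # 6
```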

Problem 7.16 Which of the following pairs of natural frequencies will yield Lissajous graphs that are space filling (see Appendix 7.3): (a) $(f_1, f_2) = (\pi, 2)$, (b) $(f_1, f_2) = (3, 6)$, (c) $(f_1, f_2) = (\pi, \sqrt{2})$, (d) $(f_1, f_2) = (1, \sqrt{4})$, (e) $(f_1, f_2) = (2, \sqrt{8})$?


Solution 7.16 (a), (c), and (e) will be space filling because the ratios of the corresponding natural frequencies are not rational numbers.

Problem 7.17 Show that
$$2\sin\left(\frac{\omega_1 - \omega_2}{2}t\right)\sin\left(\frac{\omega_1 + \omega_2}{2}t\right) = \cos\omega_2 t - \cos\omega_1 t$$
Solution 7.17 Let $2a = \omega_1 t$ and $2b = \omega_2 t$; then
$$2\sin\left(\frac{\omega_1 - \omega_2}{2}t\right)\sin\left(\frac{\omega_1 + \omega_2}{2}t\right) = 2\sin(a - b)\sin(a + b)$$
$$= 2(\sin a\cos b - \cos a\sin b)(\sin a\cos b + \cos a\sin b)$$
$$= 2\sin^2 a\cos^2 b - 2\cos^2 a\sin^2 b$$
$$= 2\sin^2 a\left(1 - \sin^2 b\right) - 2\left(1 - \sin^2 a\right)\sin^2 b$$
$$= 2\sin^2 a - 2\sin^2 b = (1 - \cos 2a) - (1 - \cos 2b)$$
$$= \cos 2b - \cos 2a = \cos\omega_2 t - \cos\omega_1 t$$

Problem 7.18 Verify the following equality:
$$\{Q\}\cos\Omega t + \{P\}\sin\Omega t = \{G\}e^{i\Omega t} + \{H\}e^{-i\Omega t}$$
where


$$\{G\} = \frac{1}{2}(\{Q\} - i\{P\}) \quad\text{and}\quad \{H\} = \frac{1}{2}(\{Q\} + i\{P\})$$

Solution 7.18 Using Euler's formula,
$$\{G\}e^{i\Omega t} + \{H\}e^{-i\Omega t} = \frac{1}{2}(\{Q\} - i\{P\})e^{i\Omega t} + \frac{1}{2}(\{Q\} + i\{P\})e^{-i\Omega t}$$
$$= \frac{1}{2}(\{Q\} - i\{P\})(\cos\Omega t + i\sin\Omega t) + \frac{1}{2}(\{Q\} + i\{P\})(\cos\Omega t - i\sin\Omega t)$$
$$= \frac{1}{2}\left(\{Q\}\cos\Omega t + i\{Q\}\sin\Omega t - i\{P\}\cos\Omega t + \{P\}\sin\Omega t + \{Q\}\cos\Omega t - i\{Q\}\sin\Omega t + i\{P\}\cos\Omega t + \{P\}\sin\Omega t\right)$$
$$= \{Q\}\cos\Omega t + \{P\}\sin\Omega t$$

Problem 7.19 The matrix equation of motion of the four-degree-of-freedom system discussed in Section 7.10.3.4 is given by Eq. (7.10-57). The equation describes a disk rotating counterclockwise on a flexible nonrotating shaft. The solution is given by Eq. (7.10-62). Fig. 7.10-2 shows the coordinate system. Derive the forcing function term, i.e., $[\phi]^T\{f(t)\}$, for clockwise rotation of the disk. Show the complete matrix equation of motion.

Solution 7.19
$$[\phi]^T\{f(t)\} = A_o(-\Omega)^2\begin{Bmatrix} -4.5117\sin(\Omega t) \\ 4.5117\cos(\Omega t) \\ -1.7349\sin(\Omega t) \\ 1.7349\cos(\Omega t) \end{Bmatrix} = A_o\Omega^2\begin{Bmatrix} 0 \\ 4.5117 \\ 0 \\ 1.7349 \end{Bmatrix}\cos(\Omega t) - A_o\Omega^2\begin{Bmatrix} 4.5117 \\ 0 \\ 1.7349 \\ 0 \end{Bmatrix}\sin(\Omega t) = \{L\}\cos(-\Omega t) + \{P\}\sin(-\Omega t)$$


Furthermore, recall that for counterclockwise rotation we have
$$[\Lambda] = [2\zeta\omega_n] - [\phi]^T[T][\phi]$$
where
$$[T] = \begin{bmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & I_{zz}\Omega \\ 0 & 0 & -I_{zz}\Omega & 0 \end{bmatrix} = \begin{bmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0.7628 \\ 0 & 0 & -0.7628 & 0 \end{bmatrix}\Omega$$
For clockwise rotation, $\Omega$ must be replaced with $-\Omega$, and we obtain
$$\left[\widehat{\Lambda}\right] = [2\zeta\omega_n] - [\phi]^T(-[T])[\phi] = [2\zeta\omega_n] + [\phi]^T[T][\phi]$$
Hence, Eq. (7.10-54) becomes
$$[I]\{\ddot{q}(t)\} + \left[\widehat{\Lambda}\right]\{\dot{q}(t)\} + \left[\omega_n^2\right]\{q(t)\} = \{L\}\cos(-\Omega t) + \{P\}\sin(-\Omega t)$$

Problem 7.20 Show that the solution to
$$\left[I_e\right]\{\dot{U}(t)\}_{\widehat{H}} + [\lambda]\{U(t)\}_{\widehat{H}} = [J]_{\widehat{H}b}\,e^{-i\Omega t}$$
is $\{U(t)\}_{\widehat{H}} = \{Y\}_{\widehat{H}}\,e^{-i\Omega t}$, where
$$\{Y\}_{\widehat{H}} = \left[(\mathrm{Re}[\lambda])^2 + (\mathrm{Im}[\lambda] - [\Omega])^2\right]^{-1}\left(\mathrm{Re}[\lambda] - i(\mathrm{Im}[\lambda] - [\Omega])\right)[J]_{\widehat{H}b}$$
Note that $[I_e]$ is a diagonal matrix with values of one for the real part and zero for the imaginary part of each diagonal term (see Section 7.10.3.5).

Solution 7.20 Substituting the assumed solution $\{U(t)\}_{\widehat{H}} = \{Y\}_{\widehat{H}}\,e^{-i\Omega t}$ and its first time derivative into the differential equation yields
$$\left(-i\Omega\left[I_e\right] + [\lambda]\right)\{Y\}_{\widehat{H}}\,e^{-i\Omega t} = [J]_{\widehat{H}b}\,e^{-i\Omega t}$$
$$\left(\mathrm{Re}[\lambda] + i(\mathrm{Im}[\lambda] - [\Omega])\right)\{Y\}_{\widehat{H}} = [J]_{\widehat{H}b}$$


Multiplying the equation by the complex conjugate of the left-hand side operator gives
$$\left(\mathrm{Re}[\lambda] - i(\mathrm{Im}[\lambda] - [\Omega])\right)\left(\mathrm{Re}[\lambda] + i(\mathrm{Im}[\lambda] - [\Omega])\right)\{Y\}_{\widehat{H}} = \left[(\mathrm{Re}[\lambda])^2 + (\mathrm{Im}[\lambda] - [\Omega])^2\right]\{Y\}_{\widehat{H}} = \left(\mathrm{Re}[\lambda] - i(\mathrm{Im}[\lambda] - [\Omega])\right)[J]_{\widehat{H}b}$$
Solving for $\{Y\}_{\widehat{H}}$ produces
$$\{Y\}_{\widehat{H}} = \left[(\mathrm{Re}[\lambda])^2 + (\mathrm{Im}[\lambda] - [\Omega])^2\right]^{-1}\left(\mathrm{Re}[\lambda] - i(\mathrm{Im}[\lambda] - [\Omega])\right)[J]_{\widehat{H}b}$$

Problem 7.21 In Section 7.10.3.4, the solution for a disk that rotated counterclockwise on a rod was derived [see Eq. 7.10-62], and it is shown below to facilitate the discussion:
$$\{w(t)\} = [\phi]\left(\left([A(\Omega)]_G\{L\} + [B(\Omega)]_G\{P\}\right)\cos\Omega t - \left([B(\Omega)]_G\{L\} - [A(\Omega)]_G\{P\}\right)\sin\Omega t\right)$$
Show that for the same disk, when rotating clockwise, the solution is
$$\{w(t)\} = [\phi]\left(\left([\widehat{A}(\Omega)]_G\{L\} + [\widehat{B}(\Omega)]_G\{P\}\right)\cos(-\Omega t) - \left([\widehat{B}(\Omega)]_G\{L\} - [\widehat{A}(\Omega)]_G\{P\}\right)\sin(-\Omega t)\right)$$
$$= [\phi]\left(\left([\widehat{A}(\Omega)]_G\{L\} + [\widehat{B}(\Omega)]_G\{P\}\right)\cos(\Omega t) + \left([\widehat{B}(\Omega)]_G\{L\} - [\widehat{A}(\Omega)]_G\{P\}\right)\sin(\Omega t)\right)$$
where $[\widehat{A}(\Omega)]_G$ is the real part and $[\widehat{B}(\Omega)]_G$ is the imaginary part of $\left(\left[\omega_n^2 - \Omega^2\right] - i\Omega\left[\widehat{\Lambda}\right]\right)^{-1}$, respectively, and $\left[\widehat{\Lambda}\right] = [2\zeta\omega_n] + [\phi]^T[T][\phi]$.


Solution 7.21
$$[\phi]^T\{f(t)\} = A_o(-\Omega)^2\begin{Bmatrix} -4.5117\sin(\Omega t) \\ 4.5117\cos(\Omega t) \\ -1.7349\sin(\Omega t) \\ 1.7349\cos(\Omega t) \end{Bmatrix} = A_o\Omega^2\begin{Bmatrix} 0 \\ 4.5117 \\ 0 \\ 1.7349 \end{Bmatrix}\cos(\Omega t) - A_o\Omega^2\begin{Bmatrix} 4.5117 \\ 0 \\ 1.7349 \\ 0 \end{Bmatrix}\sin(\Omega t) = \{L\}\cos(-\Omega t) + \{P\}\sin(-\Omega t)$$
Recall that for counterclockwise rotation we have $[\Lambda] = [2\zeta\omega_n] - [\phi]^T[T][\phi]$, where
$$[T] = \begin{bmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & I_{zz}\Omega \\ 0 & 0 & -I_{zz}\Omega & 0 \end{bmatrix} = \begin{bmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0.7628 \\ 0 & 0 & -0.7628 & 0 \end{bmatrix}\Omega$$
For clockwise rotation, $\Omega$ must be replaced with $-\Omega$, and we obtain
$$\left[\widehat{\Lambda}\right] = [2\zeta\omega_n] - [\phi]^T(-[T])[\phi] = [2\zeta\omega_n] + [\phi]^T[T][\phi]$$
Hence, Eq. (7.10-54) becomes
$$[I]\{\ddot{q}(t)\} + \left[\widehat{\Lambda}\right]\{\dot{q}(t)\} + \left[\omega_n^2\right]\{q(t)\} = \{L\}\cos(-\Omega t) + \{P\}\sin(-\Omega t)$$
Let $\{G\} = \{L\} - i\{P\}$; then $\{L\}\cos(-\Omega t) + \{P\}\sin(-\Omega t) = \mathrm{Re}\left(\{G\}e^{i(-\Omega t)}\right)$. Analytically extend the solution by considering the complex differential equation,
$$[I]\{\ddot{q}(t)\}_G + \left[\widehat{\Lambda}\right]\{\dot{q}(t)\}_G + \left[\omega_n^2\right]\{q(t)\}_G = \{G\}e^{i(-\Omega t)}$$
The solution we seek will then be $\{q(t)\} = \mathrm{Re}\left(\{q(t)\}_G\right)$.


Assume a solution $\{q(t)\}_G = \{\psi\}_G\,e^{i(-\Omega t)}$; substituting it and its time derivatives produces
$$\left(-\Omega^2[I]\{\psi\}_G - i\Omega\left[\widehat{\Lambda}\right]\{\psi\}_G + \left[\omega_n^2\right]\{\psi\}_G\right)e^{i(-\Omega t)} = \{G\}e^{i(-\Omega t)}$$
$$\left(\left[\omega_n^2 - \Omega^2\right] - i\Omega\left[\widehat{\Lambda}\right]\right)\{\psi\}_G = \{G\} \quad\Rightarrow\quad \{\psi\}_G = \left(\left[\omega_n^2 - \Omega^2\right] - i\Omega\left[\widehat{\Lambda}\right]\right)^{-1}\{G\}$$
Substituting into our assumed solution, and then solving for $\{w(t)\}$, produces the sought-after result,
$$\{w(t)\} = [\phi]\{q(t)\} = [\phi]\,\mathrm{Re}\left(\{\psi\}_G\,e^{i(-\Omega t)}\right) = [\phi]\,\mathrm{Re}\left(\left[\widehat{A}(\Omega) + i\widehat{B}(\Omega)\right]_G\{G\}e^{i(-\Omega t)}\right)$$
where $[\widehat{A}(\Omega)]_G$ is the real part and $[\widehat{B}(\Omega)]_G$ is the imaginary part of $\left(\left[\omega_n^2 - \Omega^2\right] - i\Omega\left[\widehat{\Lambda}\right]\right)^{-1}$, respectively. Substituting Eq. (7.10-55) and applying Euler's formula yields
$$\{w(t)\} = [\phi]\,\mathrm{Re}\left(\left[\widehat{A} + i\widehat{B}\right]_G(\{L\} - i\{P\})(\cos(-\Omega t) + i\sin(-\Omega t))\right)$$
$$= [\phi]\left(\left([\widehat{A}(\Omega)]_G\{L\} + [\widehat{B}(\Omega)]_G\{P\}\right)\cos(-\Omega t) - \left([\widehat{B}(\Omega)]_G\{L\} - [\widehat{A}(\Omega)]_G\{P\}\right)\sin(-\Omega t)\right)$$
$$= [\phi]\left(\left([\widehat{A}(\Omega)]_G\{L\} + [\widehat{B}(\Omega)]_G\{P\}\right)\cos(\Omega t) + \left([\widehat{B}(\Omega)]_G\{L\} - [\widehat{A}(\Omega)]_G\{P\}\right)\sin(\Omega t)\right)$$


Problem 7.22 Show for a $2\times 2$ system that if the matrix $[T]$ is skew-symmetric with zero values on the diagonal, then $[G]$ will also be skew-symmetric with zero diagonal terms, where
$$[G] = -[\phi]^T[T][\phi]$$
Solution 7.22 Let $[T] = \begin{bmatrix} 0 & l \\ -l & 0 \end{bmatrix}$ and $[\phi] = \begin{bmatrix} a & b \\ c & d \end{bmatrix}$; then
$$[T][\phi] = \begin{bmatrix} 0 & l \\ -l & 0 \end{bmatrix}\begin{bmatrix} a & b \\ c & d \end{bmatrix} = \begin{bmatrix} lc & ld \\ -la & -lb \end{bmatrix}$$
and
$$[\phi]^T[T][\phi] = \begin{bmatrix} a & c \\ b & d \end{bmatrix}\begin{bmatrix} lc & ld \\ -la & -lb \end{bmatrix} = \begin{bmatrix} 0 & lad - lcb \\ lbc - lad & 0 \end{bmatrix}$$
which is skew-symmetric with zero diagonal terms; hence, so is $[G] = -[\phi]^T[T][\phi]$.

Problem 7.23 Show that if $A = a + ib$ and $B = c + id$, then $(AB)^* = A^*B^*$; the superscript * designates the complex conjugate.

Solution 7.23
$$AB = (a + ib)(c + id) = (ac - bd) + i(ad + bc)$$
Therefore,
$$(AB)^* = (ac - bd) - i(ad + bc)$$
and
$$A^*B^* = (a - ib)(c - id) = (ac - bd) - i(ad + bc)$$
Hence, we conclude that $(AB)^* = A^*B^*$.

Problem 7.24 A measured random, zero-mean forcing function is 1000 s long. We wish to compute the mean square response of a multi-degree-of-freedom system


subjected to this forcing function. The natural frequency of the fundamental mode of the system is 5 Hz, and its critical damping ratio is 0.02. The highest mode natural frequency is 50 Hz. How many seconds of the 1000-sec forcing function need to be used in order to achieve a result that on average is within 5% of the infinite-length solution? What if the natural frequency were 1 Hz?

Solution 7.24 We can use Fig. 7.8-1 to obtain a good estimate, or use Eq. (7.8-7),
$$m^2(n) = 0.95 = 1 - \frac{1 - e^{-2\pi n}}{2\pi n}$$
to compute the value of the normalized cycle count, $n$. For this problem the value of $n$ was obtained by iteration and is 3.185. Fig. 7.8-1 was used to provide a starting point. From the discussion, $T$ must be equal to or greater than $nQ/f_n$. Substituting produces the desired result:
$$nQ/f_n = 3.185\left(\frac{1}{2(0.02)}\right)\Big/5 = 15.92$$
Hence, $T \geq 15.92$ sec. For the 1 Hz system, $T \geq (15.92)(5) = 79.6$ sec.
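The iteration for $n$ can be done with a simple root find; $m^2(n)$ is increasing in $n$, so bisection suffices. The sketch below reproduces the quoted averaging times.

```python
import math

def m2(n: float) -> float:
    # Convergence relation of Eq. (7.8-7)
    return 1.0 - (1.0 - math.exp(-2.0 * math.pi * n)) / (2.0 * math.pi * n)

# Bisection for m2(n) = 0.95 (m2 increases with n over this range)
lo, hi = 0.1, 100.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if m2(mid) < 0.95:
        lo = mid
    else:
        hi = mid
n = 0.5 * (lo + hi)
print(round(n, 3))                    # 3.183 (the book quotes 3.185)

Q = 1.0 / (2.0 * 0.02)                # = 25
T5 = n * Q / 5.0                      # fundamental mode at 5 Hz
T1 = n * Q / 1.0                      # fundamental mode at 1 Hz
print(round(T5, 2), round(T1, 1))     # 15.92 79.6
```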

Appendix 7.1 Work and coordinate transformations
Virtual displacements are arbitrarily small changes in the displacement configuration of a system. Therefore, a force that acts on a system that undergoes a virtual displacement will do virtual work. Let $\{x(t)\} = [\phi]\{q(t)\}$ be a coordinate transformation where $\{x(t)\}$ and $\{q(t)\}$ describe the same displacement configuration of the system, but in different coordinate systems. Then the virtual work done by the $\{x(t)\}$ coordinate system forces, $\{f(t)\}$, undergoing a virtual displacement $\{\delta x(t)\}$ is $\{f(t)\}^T\{\delta x(t)\}$. The virtual work done by the $\{q(t)\}$ coordinate system forces, $\{Q(t)\}$, is $\{Q(t)\}^T\{\delta q(t)\}$. Work is a scalar quantity, and from the physics of the problem we understand that the amount of work done has to be the same


irrespective of the coordinate systems used to describe the forces and displacements. Therefore,
$$\{Q(t)\}^T\{\delta q(t)\} = \{f(t)\}^T\{\delta x(t)\}$$
Applying the coordinate transformation to the virtual displacements we obtain $\{\delta x(t)\} = [\phi]\{\delta q(t)\}$. Substituting gives
$$\{Q(t)\}^T\{\delta q(t)\} = \{f(t)\}^T[\phi]\{\delta q(t)\}$$
Since the coordinates were selected to be independent, we conclude that $\{Q(t)\}^T = \{f(t)\}^T[\phi]$. Transposing the equation we obtain the sought-after relationship,
$$\{Q(t)\} = [\phi]^T\{f(t)\}$$

Appendix 7.2 Beating
We start with the equality
$$\sin\omega_1 t + \sin\omega_2 t = \sin 2a + \sin 2b$$
where $2a = \omega_1 t$ and $2b = \omega_2 t$. The right-hand side of the preceding equation can be written as
$$\sin 2a + \sin 2b = 2\sin a\cos a + 2\sin b\cos b = 2\sin a\cos a\left(\cos^2 b + \sin^2 b\right) + 2\sin b\cos b\left(\cos^2 a + \sin^2 a\right)$$
Performing the indicated multiplications yields
$$\sin 2a + \sin 2b = 2\sin a\cos a\cos^2 b + 2\sin a\cos a\sin^2 b + 2\sin b\cos b\cos^2 a + 2\sin b\cos b\sin^2 a = 2(\cos a\cos b + \sin a\sin b)(\sin a\cos b + \cos a\sin b)$$
We can simplify the right-hand side by using the cosine and sine sum and difference formulas,
$$\sin 2a + \sin 2b = 2\cos(a - b)\sin(a + b)$$
Substituting the values for $a$ and $b$ yields the sought-after relationship,
$$\sin\omega_1 t + \sin\omega_2 t = 2\cos\left(\frac{\omega_1 - \omega_2}{2}t\right)\sin\left(\frac{\omega_1 + \omega_2}{2}t\right)$$


As an example, assume we have the sum of two sinusoidal functions, i.e.,
$$x_1(t) + x_2(t) = \sin(2\pi 2t) + \sin(2\pi 3t)$$
The figure shows a plot of this sum, and as can be ascertained, the sum repeats every 1 s.

Above we indicated that the response, $x_1(t) + x_2(t)$, can be written as the product of a cosine function, with a frequency of $(\omega_2 - \omega_1)/2$, and a sine function with a frequency of $(\omega_2 + \omega_1)/2$. For our example problem these two functions, which are shown in the below figure, are
$$2\cos\left(2\pi\frac{f_2 - f_1}{2}t\right) = 2\cos(\pi t) \quad\text{and}\quad \sin\left(2\pi\frac{f_2 + f_1}{2}t\right) = \sin(5\pi t)$$
The factor of 2 multiplying the cosine term is due to the unit amplitudes of the two sinusoidal functions, $x_1(t)$ and $x_2(t)$.


Multiplying the two functions shown above at each time point will produce the time history shown in the first figure; hence,
$$x_1(t) + x_2(t) = \sin(2\pi 2t) + \sin(2\pi 3t) = 2\cos(\pi t)\sin(5\pi t)$$

Appendix 7.3 Periodicity and Lissajous graphs
Let $\{x(t)\} = \begin{Bmatrix} x_1(t) \\ x_2(t) \end{Bmatrix}$. Then $x(t)$ is periodic with period $T$ if $x(t + T) = x(t)$ for all $t$. Let $x_1(t) = \sin(2\pi f_1 t)$ and $x_2(t) = \sin(2\pi f_2 t)$. $x(t)$ will have period $T$ if there exist integers $n_1$ and $n_2$ such that $f_1 T = n_1$ and $f_2 T = n_2$. This will be the case if and only if $T = \dfrac{n_1}{f_1} = \dfrac{n_2}{f_2}$, i.e., if and only if $\dfrac{f_1}{f_2} = \dfrac{n_1}{n_2}$ is a rational number.

For the first example, let $f_1 = 2$ and $f_2 = 3$. Since $2/3$ is rational with $n_1 = 2$ and $n_2 = 3$, the period will be $T = \dfrac{n_1}{f_1} = 1$. Lissajous plots graph time-varying functions against each other and are, therefore, useful in visualizing relationships such as the trace $(x_1(t), x_2(t))$. The Lissajous graph of $\sin(2\pi 2t)$ versus $\sin(2\pi 3t)$ is shown below.

Since $n_1 = 2$, the curve has two tangency points at $x = \pm 1$, and since $n_2 = 3$, the curve has three tangency points at $y = \pm 1$. Since $x_1(t) = \sin(2\pi 2t)$ and $x_2(t) = \sin(2\pi 3t)$ have a common period, the Lissajous curve shown above will repeat forever for increasing values of $t$.


For this example, let $f_1 = \sqrt{2}$ and $f_2 = 2$. This yields $\dfrac{f_1}{f_2} = \dfrac{\sqrt{2}}{2}$, which is an irrational number and, hence, the functions will not have a common period. In this case, the trace, $(x_1(t), x_2(t))$, will be space filling. Below are Lissajous graphs for four increasing time spans, i.e., $t = 0$–$5$, $t = 0$–$10$, $t = 0$–$50$, and $t = 0$–$100$. As can be seen, since no common periodicity exists, the trace will eventually cover the entire space.

If we instead approximate $f_1 = 1.4$, then $f_1/f_2 = 7/10$ is rational with $n_1 = 7$ and $n_2 = 10$, giving $T = n_1/f_1 = 5$, and we obtain a periodic trace as an approximation to the actual response; this is shown below.


CHAPTER 8 Numerical methods

8. Introduction
In addition to the analysis of time series data, which is discussed in Chapter 5 of Volume II, there are many areas where closed-form solutions to practical problems are not feasible and solutions must be obtained with numerical methods. The first involves numerically integrating the differential equations of motion of single- and multi-degree-of-freedom systems to obtain solutions for general forcing functions. The second involves generating structural dynamic models of complex systems and then solving the associated eigenvalue problem to establish the system's dynamic properties. This chapter will cover both topics, the numerical solution of differential equations of motion and the numerical solution of eigenvalue problems. In addition, we will cover the least-squares method, which will be used in the experimental structural dynamics chapter in Volume II to develop test-based structural dynamic models.

8.1 Numerical solution of differential equations of motion
In Chapters 2 through 7, analytic response solutions were derived for single- and multi-degree-of-freedom systems subjected to simple and idealized forcing functions. The computation of structural responses subjected to general forcing functions, such as measured launch vehicle engine thrust transients, atmospheric turbulence, random vibration due to acoustic pressure loading, or earthquakes, must be accomplished numerically. This section will present several methods that can be used to numerically integrate differential equations and have been shown to work very well for most vibration problems of practical interest. All the methods that will be presented approximate the derivatives using finite differences, and are explicit one-step methods that calculate the responses in terms of the state at a previous time step. First, we will discuss one-step methods for general, first-order, scalar initial-value problems, since they are the simplest time marching schemes available. We will start with Euler's method as a way to introduce one-step methods and show that these methods follow naturally from approximations to the integral of the time derivative over each time step. This leads to the second- and fourth-order Runge-Kutta methods. A detailed discussion of convergence concepts for one-step methods will also be presented. Although one-step methods can be used to numerically integrate general initial-value problems, these methods require that the differential equation be reformulated as a first-order system. On the other hand, the Newmark and Duhamel methods discretize the equations of motion directly and, therefore, provide a more direct approach for numerical integration of single-degree-of-freedom systems. Lastly, a practical assessment of these methods in terms of their applicability and performance for structural dynamic response calculations will be presented.

8.1.1 One-step methods

We begin with the class of explicit, one-step methods that numerically integrate the following first-order, scalar, initial-value problem:

$$\dot{x}(t) = A(t, x), \qquad x(0) = x_0$$
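As a minimal illustration of the simplest such scheme, the sketch below applies Euler's method, $x_{n+1} = x_n + \Delta t\,A(t_n, x_n)$, to this scalar initial-value problem (the function names are ours, chosen for illustration, not from the text):

```python
import numpy as np

def euler_integrate(A, x0, dt, n_steps):
    """Forward Euler for the scalar IVP  x'(t) = A(t, x),  x(0) = x0.

    One-step update: x_{n+1} = x_n + dt * A(t_n, x_n).
    """
    t = np.arange(n_steps + 1) * dt
    x = np.empty(n_steps + 1)
    x[0] = x0
    for n in range(n_steps):
        x[n + 1] = x[n] + dt * A(t[n], x[n])
    return t, x

# Check against x' = -2x, x(0) = 1, whose exact solution is exp(-2t)
t, x = euler_integrate(lambda t, x: -2.0 * x, 1.0, 0.001, 1000)
err = abs(x[-1] - np.exp(-2.0 * t[-1]))
```

Halving the step size roughly halves the final error, reflecting the first-order convergence of the method.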

$$\begin{aligned}
a_1 &= A\,\mathbf{x}_n + \mathbf{f}_n = \begin{Bmatrix}\dot{x}_n\\ f_n\end{Bmatrix} \\
a_2 &= A\left(\mathbf{x}_n + \frac{\Delta t}{2}\,a_1\right) + \mathbf{f}_{n+\frac{1}{2}} = \begin{Bmatrix}\dot{x}_n + \dfrac{\Delta t}{2}f_n\\[4pt] f_{n+\frac{1}{2}}\end{Bmatrix} \\
a_3 &= A\left(\mathbf{x}_n + \frac{\Delta t}{2}\,a_2\right) + \mathbf{f}_{n+\frac{1}{2}} = \begin{Bmatrix}\dot{x}_n + \dfrac{\Delta t}{2}f_{n+\frac{1}{2}}\\[4pt] f_{n+\frac{1}{2}}\end{Bmatrix} \\
a_4 &= A\left(\mathbf{x}_n + \Delta t\,a_3\right) + \mathbf{f}_{n+1} = \begin{Bmatrix}\dot{x}_n + \Delta t\,f_{n+\frac{1}{2}}\\[4pt] f_{n+1}\end{Bmatrix}
\end{aligned} \tag{8.1-118}$$


Substituting the expressions for $a_k$ into the final RK-4 step yields

$$\begin{Bmatrix} x_{n+1} \\ \dot{x}_{n+1} \end{Bmatrix} = \begin{Bmatrix} x_n \\ \dot{x}_n \end{Bmatrix} + \frac{\Delta t}{6}\left(a_1 + 2a_2 + 2a_3 + a_4\right) = \begin{Bmatrix} x_n \\ \dot{x}_n \end{Bmatrix} + \frac{\Delta t}{6}\begin{Bmatrix} 6\dot{x}_n + \Delta t\left(f_n + 2f_{n+\frac{1}{2}}\right) \\[4pt] f_n + 4f_{n+\frac{1}{2}} + f_{n+1} \end{Bmatrix} \tag{8.1-119}$$

As in most implementations of RK-4, we use the linear approximation

$$f_{n+\frac{1}{2}} = \frac{1}{2}f_n + \frac{1}{2}f_{n+1} \tag{8.1-120}$$

Substituting the above into Eq. (8.1-119) produces

$$\begin{Bmatrix} x_{n+1} \\ \dot{x}_{n+1} \end{Bmatrix} = \begin{Bmatrix} x_n \\ \dot{x}_n \end{Bmatrix} + \frac{\Delta t}{6}\begin{Bmatrix} 6\dot{x}_n + \Delta t\left(2f_n + f_{n+1}\right) \\ 3f_n + 3f_{n+1} \end{Bmatrix} \tag{8.1-121}$$

which, after simplification, shows that RK-4 is equivalent to Eq. (8.1-112). The discrete formulation of Newmark's method for solving Eq. (8.1-111) is also straightforward; i.e., substituting the differential equation, $\ddot{x}_n = f_n$, into Eq. (8.1-76) yields

$$\begin{aligned}
x_{n+1} &= x_n + \Delta t\,\dot{x}_n + \frac{\Delta t^2}{2}\left[(1-\alpha)f_n + \alpha f_{n+1}\right] \\
\dot{x}_{n+1} &= \dot{x}_n + \Delta t\left[(1-\gamma)f_n + \gamma f_{n+1}\right]
\end{aligned} \tag{8.1-122}$$

Hence, for the linear acceleration approximation, $\alpha = 1/3$ and $\gamma = 1/2$, Newmark's method is equivalent to the RK-4 and Duhamel methods.

8.2 Multi-degree-of-freedom system numerical integration

We will discuss several numerical integration methods for computing solutions of initial value problems for linear, second-order, multi-degree-of-freedom systems. In particular, we will consider the following initial value problem:


$$M\ddot{\mathbf{x}}(t) + C\dot{\mathbf{x}}(t) + K\mathbf{x}(t) = \mathbf{f}(t), \qquad t > 0$$
$$\mathbf{x}(0) = \mathbf{x}_0 \quad \text{and} \quad \dot{\mathbf{x}}(0) = \dot{\mathbf{x}}_0 \tag{8.2-1}$$

where $M$, $C$, and $K$ are $N \times N$ mass, damping, and stiffness matrices, respectively. We will assume that $M$ is a nonsingular matrix and, therefore, has a well-defined inverse. The stiffness and damping matrices can be singular if, for example, the system includes rigid-body dynamics. The N-dimensional displacement vector at time $t$ is represented as $\mathbf{x}(t) = \{x_1(t)\ \ x_2(t)\ \cdots\ x_N(t)\}^T$, with $\mathbf{x}_0$ and $\dot{\mathbf{x}}_0$ denoting the initial displacement and velocity vectors, respectively. Similarly, the N-dimensional time-varying force vector is $\mathbf{f}(t) = \{f_1(t)\ \ f_2(t)\ \cdots\ f_N(t)\}^T$.

We will extend the single-degree-of-freedom system one-step methods discussed in the previous sections to solving the initial value problem in Eq. (8.2-1). We first discuss the application of these methods to classically damped systems, since they can be completely decoupled as independent single-degree-of-freedom systems in terms of their modal coordinates. Then, we will consider the use of single-degree-of-freedom system numerical integration methods for symmetric, but nonclassically damped, multi-degree-of-freedom systems. The approach basically involves moving the damping-related terms to the right-hand side as an additional force. Lastly, we address the numerical integration of general multi-degree-of-freedom systems using RK-4 and Newmark's methods.

8.2.1 Classically damped systems

In this section, we will restrict Eq. (8.2-1) to passive systems where the matrices are symmetric, $M$ is positive definite, and $C$ and $K$ are positive semidefinite. We showed in Chapter 6 that undamped systems can be decoupled in terms of their real normal modes, i.e., $M$ and $K$ can be diagonalized. For a passively damped system, Caughey (1960) labeled the system classically damped if it decoupled via a coordinate transformation to its real normal modes (see Chapters 6 and 7 for detailed discussion). He and O'Kelly (Caughey and O'Kelly, 1965) showed that a necessary and sufficient condition for Eq. (8.2-1) to be classically damped is that $C$ and $K$ commute with respect to $M^{-1}$,

$$CM^{-1}K = KM^{-1}C \tag{8.2-2}$$


Let us assume that the above holds, so that the passive system is classically damped and, therefore, diagonalizable through the real normal modes, $\boldsymbol{\phi}_m$, of the undamped system,

$$K\boldsymbol{\phi}_m = \omega_m^2 M\boldsymbol{\phi}_m, \qquad m = 1, \ldots, N \tag{8.2-3}$$

We will adopt the usual mass normalization so that the modes represent an orthonormal basis with respect to the mass matrix,

$$\boldsymbol{\phi}_i^T M \boldsymbol{\phi}_j = \begin{cases} 1 & i = j \\ 0 & i \neq j \end{cases} \tag{8.2-4}$$

Representing the response in terms of the first R modal coordinates, $q_m$,

$$\mathbf{x}(t) = \sum_{m=1}^{R} q_m(t)\,\boldsymbol{\phi}_m \tag{8.2-5}$$

transforms Eq. (8.2-1) to the diagonal system,

$$\begin{Bmatrix} \ddot{q}_1(t) \\ \vdots \\ \ddot{q}_R(t) \end{Bmatrix} + \begin{bmatrix} 2\zeta_1\omega_1 & & \\ & \ddots & \\ & & 2\zeta_R\omega_R \end{bmatrix} \begin{Bmatrix} \dot{q}_1(t) \\ \vdots \\ \dot{q}_R(t) \end{Bmatrix} + \begin{bmatrix} \omega_1^2 & & \\ & \ddots & \\ & & \omega_R^2 \end{bmatrix} \begin{Bmatrix} q_1(t) \\ \vdots \\ q_R(t) \end{Bmatrix} = \begin{Bmatrix} g_1(t) \\ \vdots \\ g_R(t) \end{Bmatrix} \tag{8.2-6}$$

where $\omega_1 \le \omega_2 \le \cdots \le \omega_R$, and

$$2\zeta_m\omega_m = \boldsymbol{\phi}_m^T C\,\boldsymbol{\phi}_m \quad \text{and} \quad g_m(t) = \boldsymbol{\phi}_m^T\,\mathbf{f}(t), \qquad m = 1, \ldots, R \tag{8.2-7}$$

In addition, the initial conditions transform under the coordinate change via

$$q_m(0) = \boldsymbol{\phi}_m^T M\,\mathbf{x}_0 \quad \text{and} \quad \dot{q}_m(0) = \boldsymbol{\phi}_m^T M\,\dot{\mathbf{x}}_0 \tag{8.2-8}$$
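The transformation in Eqs. (8.2-3) through (8.2-8) can be sketched in a few lines. The following is an illustrative implementation of ours (not from the text) that uses SciPy's generalized symmetric eigensolver, which returns mass-normalized modes:

```python
import numpy as np
from scipy.linalg import eigh

def decouple_classical(M, C, K, f, x0, v0, R):
    """Transform M x'' + C x' + K x = f(t) into R modal SDOF equations.

    Assumes the system is classically damped, i.e., Phi^T C Phi is
    (numerically) diagonal.  eigh(K, M) solves K phi = omega^2 M phi and
    returns eigenvectors normalized so that Phi^T M Phi = I.
    """
    lam, Phi = eigh(K, M)                 # lam = omega^2, ascending
    omega = np.sqrt(lam[:R])
    Phi = Phi[:, :R]                      # keep the first R modes
    zeta = np.diag(Phi.T @ C @ Phi) / (2.0 * omega)   # Eq. (8.2-7)
    g = lambda t: Phi.T @ f(t)            # modal forces, Eq. (8.2-7)
    q0 = Phi.T @ (M @ np.asarray(x0, float))          # Eq. (8.2-8)
    qd0 = Phi.T @ (M @ np.asarray(v0, float))
    return omega, zeta, g, q0, qd0
```

Each returned modal equation, $\ddot{q}_m + 2\zeta_m\omega_m\dot{q}_m + \omega_m^2 q_m = g_m(t)$, can then be advanced with any of the single-degree-of-freedom integrators of Section 8.1.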

The numerical integration methods from Section 8.1 can now be applied to each of the R single-degree-of-freedom initial value problems. Since the single-degree-of-freedom systems are solved individually, one could specify a unique step size for each equation to ensure at least 40 samples per cycle. Generally, however, Eq. (8.2-6) is solved using a single step size, and in that case the step size should provide at least 40 samples per cycle at the highest natural frequency, $\omega_R/2\pi$ Hz. Recall that the 40-samples-per-cycle guideline is to ensure that the peak response can be accurately estimated. Additionally, since the numerical integration involves digital samples of the generalized forces, $g_m(t)$, with sampling period equal to the step size, care should be exercised to prevent aliasing of the force time histories. Note that using a step size equal to $\Delta t$ implies a sampling rate equal to $f_{\text{samp}} = 1/\Delta t$ Hz. Let $\omega_{\text{force}}$ denote the maximum bandwidth of the force time histories. As a general rule, the step size should be defined so that

$$\Delta t < \min\left(\frac{\pi}{20\,\omega_R},\ \frac{\pi}{\omega_{\text{force}}}\right) \tag{8.2-9}$$

For wideband forcing functions where $\omega_{\text{force}} \gg 20\,\omega_R$, use of Eq. (8.2-9) could lead to very small step sizes that would require prohibitively long computational times. In these instances, low-pass filtering the forcing functions is advised to reduce their maximum bandwidth.

8.2.2 Nonclassically damped systems

Passive systems with real-valued symmetric mass, damping, and stiffness matrices that do not satisfy the Caughey-O'Kelly criterion, Eq. (8.2-2), will be discussed next. These systems are referred to as nonclassically damped systems and are often encountered in large-scale dynamic systems that are comprised of different subcomponents. For example, when component mode synthesis techniques (see Volume II) are used to couple substructures, the resulting system modal damping matrices will generally contain off-diagonal elements that lead to complex modes (Hsiao and Kim, 1993). There have been numerous studies on solution methods for determining the response of nonclassically damped systems. Basically, these methods can be categorized as either approximate or exact.

We start by transforming Eq. (8.2-1) in terms of the modal coordinates, $q_m(t)$, using the change of coordinates defined in Eq. (8.2-5),

$$I\ddot{\mathbf{q}}(t) + \Gamma\dot{\mathbf{q}}(t) + \Omega^2\mathbf{q}(t) = \mathbf{g}(t) \tag{8.2-10}$$


where

$$\begin{aligned}
\mathbf{q}(t) &= \{q_1(t)\ \cdots\ q_R(t)\}^T & \mathbf{g}(t) &= \{g_1(t)\ \cdots\ g_R(t)\}^T \\
\Omega &= \mathrm{diag}(\omega_1, \ldots, \omega_R) & \Gamma &= \left[\gamma_{ij}\right]_{R\times R}, \quad \gamma_{ij} = \boldsymbol{\phi}_i^T C\,\boldsymbol{\phi}_j
\end{aligned} \tag{8.2-11}$$

For nonclassically damped systems, the modal damping matrix, $\Gamma$, although symmetric, is no longer diagonal. Approximate methods uncouple the differential equation by essentially replacing the damping matrix with an approximate diagonal matrix. Some of the diagonalization procedures that have been proposed are to replace $\Gamma$ by (1) its diagonal terms, $\gamma_{ii}$; (2) the row sums of $\Gamma$; and (3) a weighted average of the substructure damping ratios based on either their strain energy or normalized mass participation. Generally, these approaches produce reasonable results only if the degree of modal coupling is extremely weak. Claret and Venancio-Filho (1991) defined the coupling index for the ith and jth modes as

$$\alpha_{ij} = \sqrt{\frac{\gamma_{ij}^2}{\gamma_{ii}\,\gamma_{jj}}} \qquad \text{for } i \neq j \tag{8.2-12}$$

with weak coupling occurring if $\alpha_{ij} \ll 1$. Still another criterion was provided by Hasselman (1976), who observed that modal coupling becomes more significant as the system frequencies, $\omega_i$, move closer together. Hsiao and Kim (1993) compared spacecraft response time histories during launch vehicle engine shutdown and transonic buffet events for systems with approximate diagonal matrices to those possessing full damping matrices. It was concluded that significant discrepancies were introduced by the approximate decoupling methods and, therefore, the time-domain integration methods should employ the full damping matrix. The authors have also observed significant differences (factors of three) in response calculations when the fully coupled damping matrix is replaced by one of the diagonal approximations described above.

Exact modal superposition methods that incorporate the full damping matrix have also been investigated. For damping matrices that are symmetric, Veletsos and Ventura (1986) presented a detailed review of the generalized modal superposition approach. The approach begins with examining the complex-valued modes of the equivalent first-order system and then


expressing the response as a linear combination of the displacement and velocity components. They too noted substantial errors in the dynamic characteristics and responses when using approximate diagonal damping matrices. We will postpone further discussion of this approach until the next section, where we investigate the solution of Eq. (8.2-1) by recasting it as a 2N first-order system.

As an alternative to direct integration approaches, several investigators have proposed iterative methods that recursively solve the R related single-degree-of-freedom systems subjected to pseudo-forces that are comprised of $\mathbf{g}(t)$ and velocity contributions from the off-diagonal damping terms (Ibrahimbegovic and Wilson, 1989; Claret and Venancio-Filho, 1991; Udwadia and Esfandiari, 1990; Udwadia, 1993; Udwadia and Kumar, 1994a,b; Fromme and Golberg, 1998). The iteration is similar to Jacobi's method for iteratively solving large systems of linear equations whose matrices cannot be stored entirely in memory (see Section 8.3.4.1). First, denote the matrices with the diagonal and off-diagonal damping matrix terms by

$$\Gamma_{\text{diag}} = \mathrm{diag}(\gamma_{11}, \ldots, \gamma_{RR}) \quad \text{and} \quad \Gamma_{\text{off}} = \Gamma - \Gamma_{\text{diag}} \tag{8.2-13}$$

Then the differential equation, Eq. (8.2-10), can be written as

$$I\ddot{\mathbf{q}}(t) + \Gamma_{\text{diag}}\dot{\mathbf{q}}(t) + \Omega^2\mathbf{q}(t) = \mathbf{g}(t) - \Gamma_{\text{off}}\,\dot{\mathbf{q}}(t) \tag{8.2-14}$$

With respect to rigid-body modes, the above system is decoupled since the related modal damping and stiffness terms are zero. Hence, without loss of generality, we will assume for convenience that $\omega_m > 0$ and that

$$\gamma_{mm} = 2\zeta_m\omega_m \quad \text{and} \quad \zeta_m > 0 \tag{8.2-15}$$

Suppose we have an initial "guess" of the velocity, $\dot{\mathbf{q}}^{(0)}$; then the above equation suggests the iterative scheme,

$$I\ddot{\mathbf{q}}^{(k)}(t) + \Gamma_{\text{diag}}\dot{\mathbf{q}}^{(k)}(t) + \Omega^2\mathbf{q}^{(k)}(t) = \tilde{\mathbf{g}}^{(k-1)}(t) \tag{8.2-16}$$

where $\tilde{\mathbf{g}}^{(k)}(t) = \left\{\tilde{g}_1^{(k)}(t), \ldots, \tilde{g}_R^{(k)}(t)\right\}^T$ is the kth pseudo-force,

$$\tilde{\mathbf{g}}^{(k)}(t) = \mathbf{g}(t) - \Gamma_{\text{off}}\,\dot{\mathbf{q}}^{(k)}(t) \tag{8.2-17}$$

Since the left side of Eq. (8.2-16) represents an uncoupled system, we can iteratively solve R single-degree-of-freedom systems, i.e.,


$$\begin{aligned}
&\text{initialize: } \dot{\mathbf{q}}^{(0)}(t) = \dot{\mathbf{q}}_0 \\
&\text{for } k = 1, 2, \ldots \\
&\quad \text{for } m = 1, \ldots, R \\
&\qquad \text{solve: } \ddot{q}_m^{(k)}(t) + 2\zeta_m\omega_m\,\dot{q}_m^{(k)}(t) + \omega_m^2\,q_m^{(k)}(t) = \tilde{g}_m^{(k-1)}(t), \\
&\qquad q_m^{(k)}(0) = q_{m,0} \ \text{ and } \ \dot{q}_m^{(k)}(0) = \dot{q}_{m,0}
\end{aligned}$$
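A sketch of this pseudo-force iteration follows. This is an illustrative implementation of ours; the inner single-degree-of-freedom solve here uses the average-acceleration Newmark scheme, though any integrator from Section 8.1 could be substituted:

```python
import numpy as np

def pseudo_force_iteration(Gamma, omega, g, q0, qd0, dt, n_steps,
                           tol=1e-8, max_iter=100):
    """Iteratively solve I q'' + Gamma q' + Omega^2 q = g(t), Eq. (8.2-16),
    when the modal damping matrix Gamma has off-diagonal coupling terms.

    The off-diagonal terms are moved to the right side as a pseudo-force
    (Eq. 8.2-14); each iteration advances the R uncoupled SDOF equations
    with the average-acceleration Newmark scheme, using velocities from
    the previous iterate in the pseudo-force.
    """
    q0, qd0 = np.asarray(q0, float), np.asarray(qd0, float)
    R = len(q0)
    c = np.diag(Gamma)                     # gamma_mm = 2 zeta_m omega_m
    G_off = Gamma - np.diag(c)             # Eq. (8.2-13)
    w2 = np.asarray(omega, float)**2
    t = np.arange(n_steps + 1) * dt
    qd_prev = np.tile(qd0, (n_steps + 1, 1))   # initial velocity "guess"
    for _ in range(max_iter):
        q = np.zeros((n_steps + 1, R)); qd = np.zeros((n_steps + 1, R))
        q[0], qd[0] = q0, qd0
        a = g(t[0]) - G_off @ qd_prev[0] - c * qd[0] - w2 * q[0]
        for n in range(n_steps):
            rhs = g(t[n + 1]) - G_off @ qd_prev[n + 1]  # pseudo-force
            qs = q[n] + dt * qd[n] + 0.25 * dt**2 * a   # predictors
            vs = qd[n] + 0.5 * dt * a
            a = (rhs - c * vs - w2 * qs) / (1.0 + 0.5 * dt * c
                                            + 0.25 * dt**2 * w2)
            q[n + 1] = qs + 0.25 * dt**2 * a
            qd[n + 1] = vs + 0.5 * dt * a
        if np.max(np.abs(qd - qd_prev)) < tol:
            break
        qd_prev = qd
    return t, q, qd
```

For weak coupling (small $\alpha_{ij}$), only a handful of outer iterations are typically needed before the velocity histories stop changing.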

$$\dot{\xi}_m(t) = \lambda_m\,\xi_m(t) + g_m(t), \qquad t > 0 \tag{8.2-47}$$

Note that another set of R first-order initial value problems exists that corresponds to the conjugate of Eq. (8.2-47). However, it suffices to consider just Eq. (8.2-47), or its conjugate, since we seek only real-valued responses.


To obtain the corresponding initial conditions, we premultiply Eq. (8.2-45) at $t = 0$ by $\mathbf{z}_m^T\widetilde{M}$ and then apply Eq. (8.2-40), which yields

(8.2-48)

The solution of the first-order complex-valued initial value problem, Eq. (8.2-47), with $\xi_m(0) = \xi_{m,0}$, is obtained by straightforward use of the integrating factor, $e^{-\lambda_m t}$, which yields

$$\xi_m(t) = \xi_{m,0}\,e^{\lambda_m t} + \int_0^t e^{\lambda_m(t-s)}\,g_m(s)\,ds \tag{8.2-49}$$

Therefore, the displacement and velocity responses are given by

$$\mathbf{x}(t) = 2\,\mathrm{Re}\left(\sum_{m=1}^{R}\xi_m(t)\,\mathbf{v}_m\right) \qquad \dot{\mathbf{x}}(t) = 2\,\mathrm{Re}\left(\sum_{m=1}^{R}\lambda_m\,\xi_m(t)\,\mathbf{v}_m\right) \tag{8.2-50}$$

We now suggest a simple procedure for numerically integrating Eq. (8.2-47) by using Eq. (8.2-49) and complex arithmetic; refer to Veletsos and Ventura (1986) for calculating the response to a base input using single-degree-of-freedom integrators. We take an approach that is similar to the one used to derive Duhamel's method. First observe that, by the time-stepping nature of the solution, if we know the solution at $t = t_n$, then Eq. (8.2-49) implies

$$\xi_m(t_{n+1}) = \xi_m(t_n)\,e^{\lambda_m\Delta t} + \int_{t_n}^{t_{n+1}} e^{\lambda_m(t_{n+1}-s)}\,g_m(s)\,ds \tag{8.2-51}$$

For a general forcing function, the above integral cannot be evaluated exactly and, therefore, must be approximated. First substitute its linear interpolant,

$$\hat{g}_m(s) = \left(\frac{t_{n+1}-s}{\Delta t}\right)g_m(t_n) + \left(\frac{s-t_n}{\Delta t}\right)g_m(t_{n+1}) \tag{8.2-52}$$


and then evaluating the integral produces the one-step method,

$$\xi_m(t_{n+1}) \approx a_m\,\xi_m(t_n) + b_m\,g_m(t_n) + b'_m\,g_m(t_{n+1})$$

$$a_m = e^{\lambda_m\Delta t} \qquad b_m = \frac{1}{\lambda_m}\left(a_m - \frac{a_m - 1}{\lambda_m\Delta t}\right) \qquad b'_m = \frac{1}{\lambda_m}\left(\frac{a_m - 1}{\lambda_m\Delta t} - 1\right) \tag{8.2-53}$$
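The coefficients of Eq. (8.2-53) translate directly into code. A sketch of ours that advances one complex modal coordinate:

```python
import numpy as np

def complex_modal_step_coeffs(lam, dt):
    """One-step coefficients of Eq. (8.2-53) for  xi' = lam*xi + g(t),
    with g(t) linearly interpolated over each step (Eq. 8.2-52)."""
    a = np.exp(lam * dt)
    b = (a - (a - 1.0) / (lam * dt)) / lam
    bp = ((a - 1.0) / (lam * dt) - 1.0) / lam
    return a, b, bp

def integrate_mode(lam, xi0, g, dt, n_steps):
    """Advance  xi_{n+1} = a*xi_n + b*g(t_n) + b'*g(t_{n+1})."""
    a, b, bp = complex_modal_step_coeffs(lam, dt)
    t = np.arange(n_steps + 1) * dt
    xi = np.empty(n_steps + 1, dtype=complex)
    xi[0] = xi0
    for n in range(n_steps):
        xi[n + 1] = a * xi[n] + b * g(t[n]) + bp * g(t[n + 1])
    return t, xi
```

For a forcing function that is exactly piecewise linear over the steps (for example, a constant), the interpolant of Eq. (8.2-52) introduces no error, and the recursion reproduces the exact solution of Eq. (8.2-47) to round-off.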

The above numerical integration method is simple and easy to implement since complex arithmetic is supported in programming languages used for scientific computing. The method is stable for positively damped systems and is globally second-order accurate. The primary disadvantage of the complex modal superposition approach is that it requires accurate computation of eigensolutions of Eq. (8.2-38). For small- to medium-sized problems, the QZ algorithm (Golub and Van Loan, 2013) is the method of choice for solving the general eigenvalue problem. However, for large systems, the memory requirements may be prohibitive and significant round-off errors may degrade the accuracy of the eigensolution.

8.2.3.2 Direct integration using first-order formulation

The complex modal superposition method recasts the multi-degree-of-freedom system as a first-order system that takes advantage of symmetry in the mass, damping, and stiffness matrices. We will consider a different first-order formulation that lends itself to one-step and multistep integration methods for solving first-order initial value problems. Premultiplying Eq. (8.2-1) by $M^{-1}$ yields

$$\ddot{\mathbf{x}}(t) + M^{-1}C\dot{\mathbf{x}}(t) + M^{-1}K\mathbf{x}(t) = M^{-1}\mathbf{f}(t), \qquad t > 0$$
$$\mathbf{x}(0) = \mathbf{x}_0 \quad \text{and} \quad \dot{\mathbf{x}}(0) = \dot{\mathbf{x}}_0 \tag{8.2-54}$$

Defining $\mathbf{y}(t)$ as in Eq. (8.2-45), we obtain the following equivalent first-order system,

$$\dot{\mathbf{y}}(t) = A\,\mathbf{y}(t) + \mathbf{F}(t), \qquad A = \begin{bmatrix} -M^{-1}C & -M^{-1}K \\ I & 0 \end{bmatrix}, \qquad \mathbf{F}(t) = \begin{Bmatrix} M^{-1}\mathbf{f}(t) \\ \mathbf{0} \end{Bmatrix} \tag{8.2-55}$$


The discrete solution, $\mathbf{y}_n \approx \mathbf{y}(t_n)$, can now be computed using standard one-step or multistep integration methods. As an example, we list the steps for solving Eq. (8.2-55) by the RK-4 method, which are similar to the steps in Eq. (8.1-53) that numerically integrate the single-degree-of-freedom initial value problem over discrete times, $t_n = n\Delta t$, $n = 0, \ldots, N_T - 1$:

$$\begin{aligned}
&\text{for } n = 0, \ldots, N_T - 1 \\
&\quad a_1 = A\mathbf{y}_n + \mathbf{F}(t_n) \\
&\quad a_2 = A\left(\mathbf{y}_n + \tfrac{1}{2}\Delta t\,a_1\right) + \mathbf{F}\big(t_{n+\frac{1}{2}}\big) \\
&\quad a_3 = A\left(\mathbf{y}_n + \tfrac{1}{2}\Delta t\,a_2\right) + \mathbf{F}\big(t_{n+\frac{1}{2}}\big) \\
&\quad a_4 = A\left(\mathbf{y}_n + \Delta t\,a_3\right) + \mathbf{F}(t_{n+1}) \\
&\quad \mathbf{y}_{n+1} = \mathbf{y}_n + \frac{\Delta t}{6}\left(a_1 + 2a_2 + 2a_3 + a_4\right)
\end{aligned} \tag{8.2-56}$$

As discussed in Section 8.1.4, although RK-4 is fourth-order accurate,

most implementations approximate $\mathbf{F}(t_{n+1/2})$ by linear interpolation of the discrete force time histories. Consequently, the overall accuracy reduces to $O(\Delta t^2)$. Furthermore, using the results of Section 8.1.4 as a guide, we can expect the RK-4 method to be more accurate than Newmark's method with $\alpha = 1/2$, but less accurate compared to Duhamel's or Newmark's method with $\alpha = 1/6$. As mentioned previously, extensive experience indicates that a step size small enough to provide at least 40 samples per cycle at the highest natural frequency should be used to obtain accurate results. This ensures adequate numerical precision and resolution of response peaks to permit accurate estimates of peak responses. Recall that for general multi-degree-of-freedom systems, the eigensolutions of $A$ are complex-valued. Let $\lambda_k$ denote the system's eigenvalues; then

$$\Delta t \le \frac{\pi}{20\,\max_k|\lambda_k|} \tag{8.2-57}$$
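A sketch of ours (with illustrative names) that assembles the state matrix of Eq. (8.2-55), checks the step-size guideline of Eq. (8.2-57), and applies the RK-4 steps of Eq. (8.2-56):

```python
import numpy as np

def rk4_first_order(M, C, K, f, x0, v0, dt, n_steps):
    """RK-4 for M x'' + C x' + K x = f(t) via the first-order form,
    Eqs. (8.2-55) and (8.2-56).  State ordering y = {xdot; x}, so the
    lower row partition of A is [I 0]."""
    N = len(x0)
    Minv = np.linalg.inv(M)
    A = np.zeros((2 * N, 2 * N))
    A[:N, :N] = -Minv @ C
    A[:N, N:] = -Minv @ K
    A[N:, :N] = np.eye(N)

    # Step-size guideline of Eq. (8.2-57) based on the eigenvalues of A
    lam_max = np.max(np.abs(np.linalg.eigvals(A)))
    if dt > np.pi / (20.0 * lam_max):
        raise ValueError("step size violates Eq. (8.2-57)")

    def F(t):                        # F(t) = {Minv f(t); 0}
        Ft = np.zeros(2 * N)
        Ft[:N] = Minv @ f(t)
        return Ft

    y = np.concatenate([np.asarray(v0, float), np.asarray(x0, float)])
    ys = [y.copy()]
    for n in range(n_steps):
        t = n * dt
        a1 = A @ y + F(t)
        a2 = A @ (y + 0.5 * dt * a1) + F(t + 0.5 * dt)
        a3 = A @ (y + 0.5 * dt * a2) + F(t + 0.5 * dt)
        a4 = A @ (y + dt * a3) + F(t + dt)
        y = y + dt / 6.0 * (a1 + 2.0 * a2 + 2.0 * a3 + a4)
        ys.append(y.copy())
    return np.array(ys)              # row n holds [xdot_n, x_n]
```

For an undamped single-degree-of-freedom test case with $\omega = 2$ rad/s, the eigenvalues of $A$ are $\pm 2i$, so Eq. (8.2-57) requires $\Delta t \le \pi/40 \approx 0.0785$ s.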


Furthermore, the step size should also be small enough to prevent aliasing during the discretization and interpolation of the force time histories. One drawback of the direct integration approach is that it can require large in-core memory and very small step sizes. For example, with today's finite element modeling and computational capabilities, it is not uncommon to encounter models with millions of coordinates that have natural frequencies exceeding thousands of hertz. Whereas the modal superposition approach, through modal uncoupling and truncation, can limit the memory requirements and the maximum natural frequency, direct integration of Eq. (8.2-55) cannot. Use of large mass, damping, and stiffness matrices directly in Eq. (8.2-55) will incur significant computational penalties associated with excessive memory usage and long integration times due to the small step sizes. To avoid this, model reduction techniques should be used to limit the size and modal content of the system matrices prior to solving Eq. (8.2-55).

8.2.3.3 Direct integration using second-order formulation

The main advantage of the first-order formulation of a multi-degree-of-freedom system is that it permits the application of general methods used to numerically integrate first-order differential equations. One of its drawbacks is that it generally requires approximately $4N^2$ words of in-core memory, which may cause difficulties for large systems. Methods for numerically integrating the equations of motion of a multi-degree-of-freedom system in its second-order form have been investigated by numerous authors (Newmark, 1959; Bathe and Wilson, 1973; Bathe, 1982; Hilber et al., 1977; Wood et al., 1981; Zienkiewicz et al., 1984; Belytschko and Hughes, 1983; Hilber, 1976; Ibrahimbegovic and Wilson, 1989). As an example, we will first present Newmark's method, which requires about $N^2$ fewer memory words than the comparable first-order formulations. This approach has been used successfully for calculating dynamic responses of large aerospace structures for several decades. We will also propose an implementation of the RK-4 method that only requires $2N^2$ words of memory. This approach is straightforward and can be generalized to other numerical methods for integrating first-order linear systems.


The explicit Newmark method for solving the multi-degree-of-freedom initial value problem, Eq. (8.2-1), is a generalization of Eq. (8.1-86), i.e.,

$$\begin{aligned}
K_{\text{inv}} &= \left[\frac{2}{\alpha\Delta t^2}M + \frac{2\gamma}{\alpha\Delta t}C + K\right]^{-1} \\
M^* &= K_{\text{inv}}\cdot\left[\frac{1}{\alpha}M + \frac{(\gamma-\alpha)\Delta t}{\alpha}C\right] \qquad
C^* = K_{\text{inv}}\cdot\left[\frac{2}{\alpha\Delta t}M + \frac{2\gamma}{\alpha}C\right] \\
\ddot{\mathbf{x}}_0 &= M^{-1}\left(\mathbf{f}(0) - C\dot{\mathbf{x}}_0 - K\mathbf{x}_0\right)
\end{aligned} \tag{8.2-58}$$

and for

$$\begin{aligned}
&n = 0, \ldots, N_T - 1 \\
&\quad \Delta\mathbf{f}_{n+1} = \mathbf{f}(t_{n+1}) - \mathbf{f}(t_n) \\
&\quad \Delta\mathbf{x}_{n+1} = K_{\text{inv}}\,\Delta\mathbf{f}_{n+1} + M^*\,\ddot{\mathbf{x}}_n + C^*\,\dot{\mathbf{x}}_n \\
&\quad \Delta\ddot{\mathbf{x}}_{n+1} = \frac{2}{\alpha\Delta t^2}\Delta\mathbf{x}_{n+1} - \frac{2}{\alpha\Delta t}\dot{\mathbf{x}}_n - \frac{1}{\alpha}\ddot{\mathbf{x}}_n \\
&\quad \Delta\dot{\mathbf{x}}_{n+1} = \Delta t\,\ddot{\mathbf{x}}_n + \gamma\Delta t\,\Delta\ddot{\mathbf{x}}_{n+1} \\
&\quad \mathbf{x}_{n+1} = \mathbf{x}_n + \Delta\mathbf{x}_{n+1} \qquad \dot{\mathbf{x}}_{n+1} = \dot{\mathbf{x}}_n + \Delta\dot{\mathbf{x}}_{n+1} \qquad \ddot{\mathbf{x}}_{n+1} = \ddot{\mathbf{x}}_n + \Delta\ddot{\mathbf{x}}_{n+1}
\end{aligned} \tag{8.2-59}$$
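The recursions of Eqs. (8.2-58) and (8.2-59) can be sketched as follows. This is an illustrative implementation of ours, shown with the unconditionally stable choice $\alpha = 1/2$, $\gamma = 1/2$:

```python
import numpy as np

def newmark_mdof(M, C, K, f, x0, v0, dt, n_steps, alpha=0.5, gamma=0.5):
    """Incremental Newmark integration of M x'' + C x' + K x = f(t),
    following Eqs. (8.2-58) and (8.2-59)."""
    # Precomputed matrices of Eq. (8.2-58)
    Kinv = np.linalg.inv(2.0 / (alpha * dt**2) * M
                         + 2.0 * gamma / (alpha * dt) * C + K)
    Ms = Kinv @ (M / alpha + (gamma - alpha) * dt / alpha * C)
    Cs = Kinv @ (2.0 / (alpha * dt) * M + 2.0 * gamma / alpha * C)

    x, v = np.array(x0, float), np.array(v0, float)
    a = np.linalg.solve(M, f(0.0) - C @ v - K @ x)   # initial acceleration
    xs = [x.copy()]
    for n in range(n_steps):                          # Eq. (8.2-59)
        df = f((n + 1) * dt) - f(n * dt)
        dx = Kinv @ df + Ms @ a + Cs @ v
        da = (2.0 / (alpha * dt**2) * dx
              - 2.0 / (alpha * dt) * v - a / alpha)
        dv = dt * a + gamma * dt * da
        x, v, a = x + dx, v + dv, a + da
        xs.append(x.copy())
    return np.array(xs)
```

For an undamped oscillator with $\omega = 2$ rad/s and a small step size, the computed displacement tracks the exact solution $\cos(2t)$ closely.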

Once $K_{\text{inv}}$, $M^*$, and $C^*$ have been calculated, Newmark's method requires about $3N^2$ words of memory to calculate the response quantities within the for loop. Compared to the first-order formulation of the RK-4 method, this implies a savings of about $N^2$ words of memory, which can be significant for very large structural dynamic models. Newmark's method is second-order accurate if $\gamma = 1/2$; otherwise, it is only first-order accurate. The choice of $\alpha$ will vary among analysts. Some prefer unconditional


stability and, therefore, set $\alpha = 1/2$. Based on the results presented in Section 8.1.4, one should, however, use $\alpha = 1/6$ since it improves the accuracy by providing a better approximation of the exact continuous transfer function. Also, as discussed previously, the step size should satisfy Eq. (8.2-57); and as noted, this may require the use of very small step sizes that will significantly increase the integration times. If the higher modal responses are not necessary, then larger step sizes can be chosen if these modes are removed from the dynamic model prior to implementation.

The implementation of the RK-4 method that was presented earlier is based on recasting the second-order multi-degree-of-freedom initial value problem as a first-order system. This requires $4N^2$ words of memory to store the state matrix, $A$, defined in Eq. (8.2-55). Since the lower row partition consists of the $N \times N$ identity and zero matrices, a savings of about $2N^2$ words of memory can be achieved by expanding the matrix-vector products algebraically. This can be accomplished by computing

$$\hat{C} = -M^{-1}C \quad \text{and} \quad \hat{K} = -M^{-1}K \tag{8.2-60}$$

and then implementing

$$\begin{aligned}
&\text{for } n = 0, \ldots, N_T - 1 \\
&\quad \mathbf{z} = \hat{C}\dot{\mathbf{x}}_n + \hat{K}\mathbf{x}_n \\
&\quad \dot{a}_1 = \mathbf{z} + \mathbf{f}_n; && a_1 = \dot{\mathbf{x}}_n \\
&\quad \dot{a}_2 = \mathbf{z} + \frac{\Delta t}{2}\left(\hat{C}\dot{a}_1 + \hat{K}a_1\right) + \mathbf{f}_{n+\frac{1}{2}}; && a_2 = \dot{\mathbf{x}}_n + \frac{\Delta t}{2}\dot{a}_1 \\
&\quad \dot{a}_3 = \mathbf{z} + \frac{\Delta t}{2}\left(\hat{C}\dot{a}_2 + \hat{K}a_2\right) + \mathbf{f}_{n+\frac{1}{2}}; && a_3 = \dot{\mathbf{x}}_n + \frac{\Delta t}{2}\dot{a}_2 \\
&\quad \dot{a}_4 = \mathbf{z} + \Delta t\left(\hat{C}\dot{a}_3 + \hat{K}a_3\right) + \mathbf{f}_{n+1}; && a_4 = \dot{\mathbf{x}}_n + \Delta t\,\dot{a}_3 \\
&\quad \dot{\mathbf{x}}_{n+1} = \dot{\mathbf{x}}_n + \frac{\Delta t}{6}\left(\dot{a}_1 + 2\dot{a}_2 + 2\dot{a}_3 + \dot{a}_4\right) \\
&\quad \mathbf{x}_{n+1} = \mathbf{x}_n + \frac{\Delta t}{6}\left(a_1 + 2a_2 + 2a_3 + a_4\right)
\end{aligned} \tag{8.2-61}$$
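A sketch of ours of the reduced-storage loop of Eq. (8.2-61), which stores only $\hat{C}$ and $\hat{K}$ rather than the full $2N \times 2N$ state matrix:

```python
import numpy as np

def rk4_second_order(M, C, K, f, x0, v0, dt, n_steps):
    """RK-4 for M x'' + C x' + K x = f(t) in second-order form, using
    only Chat = -Minv C and Khat = -Minv K (Eqs. 8.2-60 and 8.2-61)."""
    Minv = np.linalg.inv(M)
    Chat, Khat = -Minv @ C, -Minv @ K             # Eq. (8.2-60)
    x, v = np.array(x0, float), np.array(v0, float)
    xs = [x.copy()]
    for n in range(n_steps):
        fn = Minv @ f(n * dt)                      # Minv-scaled forces
        fh = Minv @ f((n + 0.5) * dt)
        f1 = Minv @ f((n + 1) * dt)
        z = Chat @ v + Khat @ x
        ad1 = z + fn
        a1 = v
        ad2 = z + 0.5 * dt * (Chat @ ad1 + Khat @ a1) + fh
        a2 = v + 0.5 * dt * ad1
        ad3 = z + 0.5 * dt * (Chat @ ad2 + Khat @ a2) + fh
        a3 = v + 0.5 * dt * ad2
        ad4 = z + dt * (Chat @ ad3 + Khat @ a3) + f1
        a4 = v + dt * ad3
        v = v + dt / 6.0 * (ad1 + 2 * ad2 + 2 * ad3 + ad4)
        x = x + dt / 6.0 * (a1 + 2 * a2 + 2 * a3 + a4)
        xs.append(x.copy())
    return np.array(xs)
```

The stage values reproduce those of the first-order RK-4 loop exactly; only the storage pattern differs.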

There are some practical considerations that experience indicates should be weighed when selecting the integration method and step size for general multi-degree-of-freedom systems. There is the risk of treating numerical integrators as "black boxes" and selecting step sizes that do not yield accurate results. This problem is further exacerbated if it goes undetected, which can occur when unconditionally stable integrators are used. For this reason, one should use integrators that are conditionally stable so that the calculated response will "blow up" if the step size is too large. Although a step size that is adequate for stability does not necessarily ensure accuracy, it will at least force some consideration of the appropriate step size. As an alternative to the step-size criterion, Eq. (8.2-57), one can determine a suitable step size by recalculating the responses with decreasing step sizes until the differences in responses are negligible. For large systems, this approach may not be feasible, especially if the integration is performed over long time durations. Since accuracy depends mainly on approximating the higher frequency content, a possible remedy would be to perform the step-size reduction by integrating over a short period where the higher frequency modes will be excited.

Finally, there are integrators that try to improve efficiency by implementing variable step sizes. In practice, the calculated time-domain responses are often postprocessed using spectral analysis and signal processing methods that require that they be uniformly sampled. If variable step-size integrators were used, $\Delta t$ would not be constant and the time responses would have to be interpolated to a constant sampling rate. This extra calculation effectively cancels the gains in efficiency obtained during integration. Moreover, interpolation will reduce accuracy since it attenuates the response and can introduce aliasing at the higher frequencies. Hence, variable step-size integrators should be avoided, except when needed to solve a nonlinear system as piecewise linear.
Nonlinear behavior in complex structural dynamics systems is more common than one would suspect. Examples of nonlinear behavior include launch vehicles separating from their launch pads, joints where friction can prevent relative motion until the joint forces are sufficient to overcome the friction forces, and large-deflection geometric effects. The only feasible way to solve large complex problems with nonlinearities is to treat the system as piecewise linear. In this case, the point within a time step at which a nonlinear change occurs, such as when a friction joint begins to slide or becomes stuck, would be identified, and the integration step would be repeated to this


time point. The appropriate change in state would then be introduced, and the integration would proceed with the altered system from this point forward until the next change needs to be addressed.

8.3 Solution of systems of linear equations

In this section, we address several methods for solving the following system of linear algebraic equations,

$$A\mathbf{x} = \mathbf{b} \tag{8.3-1}$$

where $A = \left[a_{i,j}\right]$ is an $N \times N$ matrix, and $\mathbf{x} = \{x_1\ \cdots\ x_N\}^T$ and $\mathbf{b} = \{b_1\ \cdots\ b_N\}^T$ are $N \times 1$ vectors. We will restrict our discussion to real-valued matrices; however, most of the concepts and results presented here easily extend to complex-valued matrices and vectors.

Central to matrix algebra is the decomposition of a matrix as a product of special matrices that reveal its properties and facilitate numerical computations. We will present two such decompositions of the matrix, $A$, that are used to solve Eq. (8.3-1). The first is known as the LU factorization and is based on Gaussian elimination. The second is known as the Cholesky factorization, which essentially is the LU factorization restricted to symmetric positive-definite matrices. The LU and Cholesky factorizations provide direct methods for solving Eq. (8.3-1). In Section 8.3.4, we will discuss the Jacobi, Gauss-Seidel, and successive over-relaxation iterative methods and examine conditions under which convergence is guaranteed. These iterative methods, when applicable, are useful for solving large sparse systems efficiently. This section also serves as a reference for our earlier discussion on iterative methods for integrating nonclassically damped systems. In addition, the material presented in this and the next section will be used when describing the solution techniques for the matrix eigenvalue problem (see Section 8.5).

Before discussing the above topics, we briefly review some introductory material that is usually covered at the beginning of a computational linear algebra course. For in-depth treatments of these topics and matrix computations in general, the interested reader is referred to Golub and Van Loan (2013), Demmel (1997), Higham (2002), Stewart (1998, 2001a,b), and Trefethen and Bau (1997).


8.3.1 Matrix computation preliminaries

Any discussion of numerical methods should be accompanied by an analysis of the errors that can be expected. Generally, these errors depend on the sensitivity of the problem to perturbations in its input data, the accuracy of the algorithm, and the precision errors that result from limitations of the computer's digital representation. Section 8.3.1.1 reviews the definition and properties of matrix and vector norms. Norms will be used to quantify the perturbations and the differences in the resulting solutions. In Section 8.3.1.2, we will present a short discussion of the finite precision errors that occur on digital computers. We will then introduce a constant known as machine precision, $\varepsilon_{\text{mach}}$, that pervades all error estimates related to digital computation. The sensitivity of solutions to perturbations in the input can be examined by viewing the solution procedure as a map from the input to the solution. Section 8.3.1.3 will present the main ideas of this approach and lead to the useful characterization of sensitivity known as the condition number.

8.3.1.1 Vector and matrix norms

Before discussing the sensitivity of the problem, $A\mathbf{x} = \mathbf{b}$, we need a metric that quantifies the perturbations of the input and the resulting changes in the output, or solution. Since the input and output consist of vectors and matrices, we can use well-established vector and matrix norms. For an N-dimensional vector, $\mathbf{x} = \{x_1\ \cdots\ x_N\}^T$, three different vector norms are common:

$$\text{1-norm:}\ \ \|\mathbf{x}\|_1 = \sum_{n=1}^{N}|x_n| \qquad \text{2-norm:}\ \ \|\mathbf{x}\|_2 = \sqrt{\sum_{n=1}^{N}|x_n|^2} \qquad \text{max-norm:}\ \ \|\mathbf{x}\|_\infty = \max_{1\le n\le N}|x_n| \tag{8.3-2}$$
We will also use the generic notation, $\|\mathbf{x}\|$, when the result does not depend on which norm is used. Accordingly, we can state the following theorem:

Theorem 8.3-1 Let $\mathbf{x}$ and $\mathbf{y}$ be N-dimensional vectors. Then the vector norms defined in Eq. (8.3-2) satisfy the following properties:


1. $\|\mathbf{x}\| \ge 0$ for all $\mathbf{x}$
2. $\|\mathbf{x}\| = 0$ if and only if $\mathbf{x} = \mathbf{0}$
3. $\|\alpha\mathbf{x}\| = |\alpha|\cdot\|\mathbf{x}\|$ for any scalar $\alpha$
4. $\|\mathbf{x} + \mathbf{y}\| \le \|\mathbf{x}\| + \|\mathbf{y}\|$  (8.3-3)

It should be noted that the fourth property is a generalization of the triangle inequality. Also, a mathematically rigorous approach to general normed linear spaces would use the above theorem as the definition of a norm.

All vector norms on an N-dimensional vector space are equivalent in that they all define the same topology. To see this, first note that the following chain of inequalities holds:

$$\|\mathbf{x}\|_\infty \le \|\mathbf{x}\|_2 \le \|\mathbf{x}\|_1 \le \sqrt{N}\,\|\mathbf{x}\|_2 \le N\,\|\mathbf{x}\|_\infty \tag{8.3-4}$$

Since the topology defines when two vectors are "near" each other, the above inequalities imply that if $\mathbf{x}$ is close to $\mathbf{y}$ (i.e., $\|\mathbf{x} - \mathbf{y}\|$ is small) in one of the norms, then it will also be close with respect to the other norms. Also, if $\mathbf{x}$ is "large" (i.e., $\|\mathbf{x}\| \gg 1$) in one of the norms, then it will also be large with respect to the other norms. To visualize the relations among the norms as stated by the first two inequalities in Eq. (8.3-4), Fig. 8.3-1 compares the "unit circle" with respect to these norms in $\mathbb{R}^2$.

For an $M \times N$ matrix, $A$, the matrix-vector product, $A\mathbf{x}$, can be viewed as a mapping from the N-dimensional vector space, $\mathbb{R}^N$, to the M-dimensional

FIGURE 8.3-1 Unit circles in $\mathbb{R}^2$ for the norms defined in Eq. (8.3-2).


vector space, ℝM . It will be useful at this point to define two subspaces associated with A. Let aj represent columns of A and x ¼ f x1 / xN gT , then y ¼ Ax ¼ x1 a1 þ x2 a2 þ / þ xN aN , i.e., y is equal to a linear combination of the columns of A. This leads to the definition of the range of A, which we denote by RA, as the subspace of ℝM that is equal to the linear span of the columns of A. Therefore, if y˛RA , then there exists a vector, x, such that Ax ¼ y. The null space of A, which we denote by N A, is the subspace in ℝN that consists of all vectors, v, such that Av ¼ 0. Note that if Ax ¼ y, then for any v˛N A, Aðx þvÞ ¼ y. The 2-norm is associated with and gives rise to the standard (Euclidean) inner product. Let y ¼ Ax for a M  N matrix, A, and let b y ˛ℝM . Consider T y ; yiℝM hb y y. Substituting for y leads to the inner product in ℝM , hb  T  y ; AxiℝM ¼ A b x ; xiℝN (8.3-5) y ; x ℝN ¼ hb y ; yiℝM ¼ hb hb where b x ¼ AT b y . Therefore, AT naturally defines a map from ℝM to ℝN that relates the inner product of y˛RA to a corresponding inner product involving b x ¼ AT b y ˛RAT . If b y is orthogonal to RA , we have for all  T  N x˛ℝ , 0 ¼ hb y ; AxiℝM ¼ A b y ; x ℝN . Since x is arbitrary, AT b y ¼ 0, y ˛N AT , then hb y ; yiℝM ¼ 0 which implies that b y ˛N AT . Conversely, if b and, therefore, b y is orthogonal to RA . To summarize, the null space of T A is the orthogonal complement of the range of A, i.e., N AT ¼ Rt A. Reversing the roles of A and its transpose, we also conclude that the null space of A is the orthogonal complement of the range of AT , i.e., . These results will be useful when we discuss the properties N A ¼ Rt AT of the pseudo-inverse. We conclude this section with a discussion of matrix norms. An upper bound of kyk ¼ kAxk in terms of A and x leads to the following definition: Definition Let A be a matrix in ℝMN and x a vector in ℝN . Define kAk by kAk ¼ maxkAxk=kxk. Then, kAk defines a norm on ℝMN xs0

and for y = Ax, ‖y‖ = ‖Ax‖ ≤ ‖A‖·‖x‖. We say that the matrix norm is subordinate to the vector norms since it depends on the vector norms, ‖x‖ and ‖y‖, used to define it. For example, let y = Ax; if the norms ‖x‖₂ and ‖y‖₂ are used, then ‖A‖₂ is subordinate to the 2-norm. Furthermore, the matrix and vector norms are consistent, since ‖Ax‖ ≤ ‖A‖‖x‖, i.e., the norm of the product is less than or equal to the product of the norms. Note that by linearity, the maximum in the definition could have been specified over the unit sphere,

‖A‖ = max_{‖x‖=1} ‖Ax‖        (8.3-6)

The next theorem, whose proof can be found in Horn and Johnson (1990), shows how to calculate the matrix norms:

Theorem 8.3-2. Let A = [a_{i,j}] be a matrix in ℝ^(M×N). Then

1. ‖A‖₁ = max_{1≤j≤N} Σ_{i=1}^{M} |a_{i,j}|
2. ‖A‖₂ = max_{1≤n≤N} √λ_n,   λ_n = nth eigenvalue of A^T A        (8.3-7)
3. ‖A‖_∞ = max_{1≤i≤M} Σ_{j=1}^{N} |a_{i,j}|

As an example, the norms for a matrix A, where

A = [ 2  1    0    1 ]
    [ 0  0   12   −2 ]        (8.3-8)
    [ 4  4    7    1 ]
    [ 2  4  −10   10 ]

are

‖A‖₁ = max_j Σ_{i=1}^{4} |a_{i,j}| = max{8, 9, 29, 14} = 29

‖A‖₂ = max_n √λ_n = max{0.6621, 2.8010, 9.7664, 18.7705} = 18.7705        (8.3-9)

‖A‖_∞ = max_i Σ_{j=1}^{4} |a_{i,j}| = max{4, 14, 16, 26} = 26

It should be noted that ‖A‖₂ also corresponds to the maximum singular value. As we will discuss in Section 8.4.3, the singular values, σ_n, of A are defined as the square roots of the eigenvalues of A^T A.
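The norm formulas of Theorem 8.3-2 and the values in Eq. (8.3-9) can be checked numerically. The following is a minimal NumPy sketch; the signs of the entries of A follow the reconstruction used in this text, so they are an assumption here (the 1-, ∞-, and Frobenius norms depend only on the magnitudes):

```python
import numpy as np

# Example matrix of Eq. (8.3-8); entry signs as reconstructed in this text.
A = np.array([[2., 1.,   0.,  1.],
              [0., 0.,  12., -2.],
              [4., 4.,   7.,  1.],
              [2., 4., -10., 10.]])

n1 = np.linalg.norm(A, 1)          # maximum absolute column sum -> 29
ninf = np.linalg.norm(A, np.inf)   # maximum absolute row sum    -> 26
nF = np.linalg.norm(A, 'fro')      # sqrt(trace(A^T A))
n2 = np.linalg.norm(A, 2)          # largest singular value

assert n1 == 29.0 and ninf == 26.0
assert abs(nF - np.sqrt(np.trace(A.T @ A))) < 1e-12
assert n2 <= nF                    # the Frobenius norm bounds the 2-norm
```

The sign-independent norms reproduce Eq. (8.3-9) exactly; the printed 2-norm can be compared against the maximum singular value listed there.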


The Frobenius norm, defined by

‖A‖_F = ( Σ_{i=1}^{M} Σ_{j=1}^{N} a_{i,j}² )^{1/2} = √(trace(A^T A))        (8.3-10)

is a useful matrix norm that is analogous to the vector 2-norm and can be easily calculated. Recall that the trace of a matrix is the sum of its diagonal elements. Furthermore, since ‖A‖₂ ≤ ‖A‖_F, it provides a simple upper bound for ‖A‖₂. For the matrix A defined in Eq. (8.3-8), ‖A‖_F = 21.3542. It is easy to show that matrix norms obey the properties listed in Theorem 8.3-1, with an additional property related to consistency. We state these here for future reference.

Theorem 8.3-3. Let A and B be M × N matrices. Also, let C be an L × M matrix. The matrix norms subordinate to vector norms and the Frobenius norm satisfy the following properties:

1. ‖A‖ ≥ 0 for all A
2. ‖A‖ = 0 if and only if A = 0
3. ‖αA‖ = |α|·‖A‖ for any scalar, α        (8.3-11)
4. ‖A + B‖ ≤ ‖A‖ + ‖B‖
5. ‖CA‖ ≤ ‖C‖·‖A‖

As with vector norms, these matrix norms are equivalent in that they induce the same topology. For an N × N matrix, A, the following sequence of inequalities is analogous to Eq. (8.3-4) and implies the topological equivalence of matrix norms:

(1/N)‖A‖_∞ ≤ (1/√N)‖A‖₂ ≤ ‖A‖₁ ≤ √N ‖A‖₂ ≤ √N ‖A‖_F ≤ N ‖A‖_∞        (8.3-12)

For square matrices, another useful metric is related to the largest eigenvalue. Let A be a matrix in ℝ^(N×N) where λ_n, n = 1, …, N, denote its eigenvalues. Then the spectral radius of A is defined as

ρ(A) = max_{1≤n≤N} |λ_n|        (8.3-13)
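The spectral radius is easily computed from the eigenvalues, and its relationship to the matrix norms discussed above can be illustrated numerically. The 2 × 2 matrices below are hypothetical examples, not taken from the text:

```python
import numpy as np

# rho(A) never exceeds a subordinate matrix norm.
A = np.array([[1., 2.],
              [0., 1.]])
rho_A = max(abs(np.linalg.eigvals(A)))        # eigenvalues are 1, 1 -> rho = 1
assert rho_A <= np.linalg.norm(A, 1)
assert rho_A <= np.linalg.norm(A, 2)
assert rho_A <= np.linalg.norm(A, np.inf)

# For a symmetric matrix, the spectral radius equals the 2-norm.
B = np.array([[2., 1.],
              [1., 2.]])
rho_B = max(abs(np.linalg.eigvals(B)))        # eigenvalues are 1, 3 -> rho = 3
assert np.isclose(rho_B, np.linalg.norm(B, 2))
```

Note that for the nonsymmetric A above the inequality is strict: ρ(A) = 1 while ‖A‖₂ = 1 + √2.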


The next theorem lists some results related to the spectral radius.

Theorem 8.3-4. Let A be an N × N matrix. Then

1. ρ(A) ≤ ‖A‖ for all matrix norms
2. If A is symmetric (or Hermitian), ρ(A) = ‖A‖₂        (8.3-14)
3. For any ε > 0, there exists a matrix norm, ‖·‖_ε (which depends on A), such that ‖A‖_ε ≤ ρ(A) + ε

To show the first statement, let x_k be the eigenvector of A associated with the largest eigenvalue, i.e., ρ(A) = |λ_k|. Then

ρ(A) = |λ_k| = ‖λ_k x_k‖/‖x_k‖ = ‖Ax_k‖/‖x_k‖ ≤ ‖A‖        (8.3-15)

If A is symmetric, then it has the spectral decomposition

A = QΛQ^T        (8.3-16)

where Q is an N × N orthonormal matrix and Λ = diag(λ₁, …, λ_N). Hence A^T A = QΛ²Q^T, which implies that the largest eigenvalue of A^T A is equal to the square of ρ(A). The rest follows from Theorem 8.3-2. Note that this statement also holds if A is a normal matrix. A matrix is normal if A^T A = AA^T. Normal matrices generalize symmetric (or Hermitian) matrices in that they are diagonalizable under unitary transformations. For the proof of the last statement, we refer the reader to Horn and Johnson (1990) or Demmel (1997).

The spectral radius will be useful when we investigate the convergence properties of iterative methods for solving systems of linear equations. The norms discussed above will allow us to quantify the errors of the computed solutions of linear systems. We will see that these errors depend on errors introduced by the digital representation and arithmetic on computers, and on the sensitivity of the problem that is to be solved. This requires discussion of precision, stability, and conditioning.

8.3.1.2 Floating point representation and arithmetic

Since calculations are performed on digital computers that have a finite number of bits, there are limits to the precision that is available for representing real numbers. The errors that are introduced due to finite precision calculations must, therefore, be examined. Most computers today adhere to the


IEEE 754 floating-point representation that provides single precision (32-bit) and double precision (64-bit) binary representations, shown in Fig. 8.3-2. The fractional part implicitly includes a leading one that should be added, and the exponent part has an offset (bias) so that it is always positive. This leads to the representation

x = (−1)^sign × (1 + fraction) × 2^(exponent − 127)    for single precision
x = (−1)^sign × (1 + fraction) × 2^(exponent − 1023)   for double precision        (8.3-17)

For example, x = −3.125 will be represented by the 32-bit single precision word

1 | 10000000 | 10010000000000000000000

where (−1)¹ × (1.5625) × 2^(128−127) = −3.125, and the conversion of the exponent and fractional parts to decimal representation is given by

exponent = Σ_{i=0}^{7} b_{23+i} 2^i   and   fraction = Σ_{i=1}^{23} b_{23−i} 2^{−i}        (8.3-18)

A similar conversion rule holds for the double precision representation. Let us first consider all single precision floating point numbers that lie between one and two, inclusive,

1, 1 + 1/2²³, 1 + 2/2²³, 1 + 3/2²³, …, 1 + (2²³ − 1)/2²³, 2

FIGURE 8.3-2 IEEE floating-point format.


Observe that the gap between consecutive numbers is equal to 2⁻²³. Multiplying the above sequence by two yields

2, 2 + 1/2²², 2 + 2/2²², 2 + 3/2²², …, 2 + (2²³ − 1)/2²², 4

Although the gaps between consecutive numbers have doubled, we see that in a relative sense the gaps are no greater than 2⁻²³ ≈ 1.19 × 10⁻⁷. Hence, for any real number lying within a gap, the relative error to its nearest digital neighbor will be less than or equal to the machine precision error, defined as

ε_mach = (1/2) × 2⁻²³ ≈ 5.96 × 10⁻⁸        (8.3-19)

Similarly, for double precision floating-point representation,

ε_mach = (1/2) × 2⁻⁵² ≈ 1.11 × 10⁻¹⁶        (8.3-20)

Most of the bounds that address floating-point errors include constants that slightly increase ε_mach. While a finite subset of the real numbers can be represented exactly, most of them will be in error due to rounding. Introduce the notation fl(x) to denote the floating-point representation of x. Then the error due to rounding satisfies

fl(x) = x(1 + ε),   for some ε, with |ε| ≤ ε_mach        (8.3-21)

A useful interpretation of ε_mach and the quantization due to round-off is that the computer cannot distinguish between 1 and 1 + u, for |u| ≤ ε_mach. Hence, for x = 1 + ε_mach, Eq. (8.3-21) becomes 1 = fl(1 + ε_mach) = (1 + ε_mach)(1 + ε), for ε = −ε_mach/(1 + ε_mach). There is also a simple and concise result for floating point arithmetic. Let ∘ denote any of the four arithmetic operations, +, −, ×, and ÷. Then the floating-point representation of x ∘ y satisfies

fl(x ∘ y) = (x ∘ y)(1 + ε),   for some ε, with |ε| ≤ ε_mach        (8.3-22)

Many linear algebraic computations involve the dot product of two N-vectors, x^T y. It can be shown (Higham, 2002) that the floating-point error due to the accumulation of finite precision errors is

|x^T y − fl(x^T y)| ≤ γ_N |x|^T |y|,   γ_N = Nε_mach/(1 − Nε_mach)        (8.3-23)
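The bit layout and the machine precision behavior described above can be inspected directly in code. The following Python sketch uses the standard struct module to expose the bit fields of the single-precision example x = −3.125:

```python
import struct

# Bit layout of the single-precision example x = -3.125 (Eq. 8.3-17):
# 3.125 = 1.5625 x 2^1, so fraction = 0.5625 = 2^-1 + 2^-4, biased exponent = 128.
bits = struct.unpack(">I", struct.pack(">f", -3.125))[0]
word = f"{bits:032b}"
sign, exponent, fraction = word[0], word[1:9], word[9:]
assert sign == "1"
assert int(exponent, 2) == 128                     # 128 - 127 = 1
assert fraction == "10010000000000000000000"       # leading 1 is implicit
assert (-1) ** int(sign) * 1.5625 * 2.0 ** (128 - 127) == -3.125

# Double-precision machine precision, Eq. (8.3-20): eps_mach = (1/2) * 2^-52.
# The computer cannot distinguish 1 from 1 + u when |u| <= eps_mach.
eps_mach = 0.5 * 2.0 ** -52
assert 1.0 + eps_mach == 1.0       # rounds back to 1
assert 1.0 + 2.0 ** -52 > 1.0      # the gap above 1 is 2^-52
```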


where |x| denotes the vector x with its elements replaced by their absolute values. Also, Eq. (8.3-23) assumes that Nε_mach ≪ 1, which then suggests a simpler bound,

|x^T y − fl(x^T y)| ≤ N ε′_mach |x|^T |y|        (8.3-24)

where ε′_mach = O(1)ε_mach represents an adjusted machine precision. Since our goal is to present basic and simple error bounds, we will use Nε′_mach rather than γ_N.

8.3.1.3 Problem sensitivity

For analyzing the sensitivity of a problem, it will be useful to consider a function, F, that maps the input data, X, to its solution, Y, as shown in Fig. 8.3-3. The algorithm used to calculate the solution, Ŷ, is represented as the map, F̂. X̂ denotes a perturbation of X and corresponds to the solution F(X̂). The errors in Ŷ and X̂ are known as the forward and backward errors, respectively. For example, in the context of solving the linear system, Ax = b, we have the input, X = (A, b), and the solution, Y = x. The calculated solution by LU decomposition, which will be discussed in the next section, with forward and backward substitution would be represented by Ŷ = x̂. An algorithm is accurate if the relative forward error is within machine precision,

FIGURE 8.3-3 Mapping of input data to solution and associated errors.


‖Ŷ − Y‖/‖Y‖ = ‖F̂(X) − F(X)‖/‖F(X)‖ = O(ε_mach)        (8.3-25)

In practice, errors are always present in the input. For example, errors occur due to finite precision round-off, measurement errors, or errors from preprocessing calculations. A “good” algorithm should be insensitive to small errors in the input. We say that the algorithm is stable if, for all input, X, there exists a perturbation, X̂, such that

‖X̂ − X‖/‖X‖ = O(ε_mach)   and   ‖F̂(X) − F(X̂)‖/‖F(X̂)‖ = O(ε_mach)        (8.3-26)

As stated by Trefethen and Bau (1997), “A stable algorithm gives nearly the right answer to nearly the right question.” A useful and practical notion of stability, which applies to many of the algorithms for solving linear systems, is that of backward stability. An algorithm is said to be backward stable if there exists a close perturbation, X̂, such that F̂(X) = F(X̂), i.e., for all input, X, there exists X̂ such that

‖X̂ − X‖/‖X‖ = O(ε_mach)   and   F̂(X) = F(X̂)        (8.3-27)

In other words, a backward-stable algorithm provides the exact solution to a nearby problem that is within the uncertainty of the input data. As discussed, the notion of a stable algorithm involves the relative errors of the solution and input. A limiting upper bound of the ratio of these relative errors leads to a useful characterization of the problem’s sensitivity known as the condition number. We will use the definition from Trefethen and Bau (1997). For a given problem, let F(X) represent the solution for input data X. Let X̂ = X + ΔX denote perturbations of X. The (relative) condition number, κ, at X is defined as

κ(X) = lim_{δ→0} max_{‖ΔX‖≤δ} ( (‖F(X̂) − F(X)‖/‖F(X)‖) / (‖ΔX‖/‖X‖) )        (8.3-28)


If F is differentiable, then by using the Jacobian of the map the condition number simplifies to

κ(X) = ‖J(X)‖ / (‖F(X)‖/‖X‖)        (8.3-29)

By definition, we see that the condition number provides an amplification factor, in a relative sense, to small changes in the input, i.e.,

‖F(X + ΔX) − F(X)‖/‖F(X)‖ ≈ κ(X) ‖ΔX‖/‖X‖,   ‖ΔX‖/‖X‖ ≪ 1        (8.3-30)

Hence, problems with very large condition numbers are called ill conditioned and are very sensitive to slight changes in their input. On the other hand, problems whose condition numbers are “reasonably bounded” are well conditioned and are insensitive to small perturbations in the input. Let us look at two examples. Consider the map, F₁(x₁, x₂) = x₁² + x₂², with Jacobian equal to J₁(x₁, x₂) = [2x₁ 2x₂]. Using the 2-norm, the condition number becomes

κ₁(x₁, x₂) = ‖J₁(x₁, x₂)‖₂ / ( |F₁(x₁, x₂)| / ‖{x₁ x₂}^T‖₂ ) = 2√(x₁² + x₂²) / ( (x₁² + x₂²)/√(x₁² + x₂²) ) = 2        (8.3-31)

Now consider the map F₂(x₁, x₂) = x₁² − x₂², with J₂(x₁, x₂) = [2x₁ −2x₂]. The condition number is given by

κ₂(x₁, x₂) = ‖J₂(x₁, x₂)‖₂ / ( |F₂(x₁, x₂)| / ‖{x₁ x₂}^T‖₂ ) = 2(x₁² + x₂²) / |x₁² − x₂²|        (8.3-32)

Consider the perturbed input, x̂₁ = x₁(1 + ε₁) and x̂₂ = x₂(1 + ε₂), for |ε₁| ≈ |ε₂| ≈ ε ≪ 1. Then, a lower bound for the relative input error is

‖{(1 + ε₁)x₁ (1 + ε₂)x₂} − {x₁ x₂}‖₂ / ‖{x₁ x₂}‖₂ = [ ((ε₁x₁)² + (ε₂x₂)²) / (x₁² + x₂²) ]^{1/2} ≈ ε        (8.3-33)


An upper bound for the relative error in the solution for F₁ is given by

|F₁(x̂₁, x̂₂) − F₁(x₁, x₂)| / |F₁(x₁, x₂)| = |2(ε₁x₁² + ε₂x₂²) + ε₁²x₁² + ε₂²x₂²| / (x₁² + x₂²)
    ≤ ( 2(|ε₁|x₁² + |ε₂|x₂²) + ε₁²x₁² + ε₂²x₂² ) / (x₁² + x₂²) ≈ 2ε + ε²        (8.3-34)

Clearly, Eqs. (8.3-31), (8.3-33), and (8.3-34) imply that Eq. (8.3-30) holds. Furthermore, since κ₁(x₁, x₂) = 2, F₁ is well conditioned. In a similar fashion, an upper bound for the relative error of the solutions of F₂ is given by

|F₂(x̂₁, x̂₂) − F₂(x₁, x₂)| / |F₂(x₁, x₂)| = |2(ε₁x₁² − ε₂x₂²) + ε₁²x₁² − ε₂²x₂²| / |x₁² − x₂²|
    ≤ ( 2(|ε₁|x₁² + |ε₂|x₂²) + |ε₁²x₁² − ε₂²x₂²| ) / |x₁² − x₂²| ≈ (2ε + ε²)(x₁² + x₂²)/|x₁² − x₂²|        (8.3-35)

Observe that for |x₁| ≠ |x₂|, Eqs. (8.3-32), (8.3-33), and (8.3-35) satisfy Eq. (8.3-30). However, κ₂(x₁, x₂) is unbounded when |x₁| ≈ |x₂|; that is, small relative changes in the input can produce very large relative changes in the solutions; hence, F₂ is ill conditioned. This illustrates how cancellation can lead to forward errors that are significant in a relative sense. In the next section, we will discuss the sensitivity of the linear system, Ax = b. In particular, we will show that its condition number is given by

κ(A) = ‖A‖·‖A⁻¹‖        (8.3-36)
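The ill conditioning of F₂ is easy to observe numerically. The following sketch uses illustrative values with |x₁| ≈ |x₂| (the specific numbers are not from the text):

```python
# F2 = x1^2 - x2^2 near |x1| = |x2|: kappa_2 of Eq. (8.3-32) is large, so a
# relative input change of order 1e-6 produces an O(1) relative change in
# the solution. Values here are illustrative.
x1, x2 = 1.0, 0.999999
kappa2 = 2.0 * (x1**2 + x2**2) / abs(x1**2 - x2**2)   # roughly 2e6

eps = 1.0e-6                                # relative input perturbation
f_exact = x1**2 - x2**2
f_pert = (x1 * (1.0 + eps))**2 - x2**2
rel_error = abs(f_pert - f_exact) / abs(f_exact)

assert kappa2 > 1.0e6
assert rel_error > 0.5    # ~100% relative change from a 1e-6 input change
```

In agreement with Eq. (8.3-30), the relative forward error is roughly κ₂ times the relative input error.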


8.3.2 LU factorization

The main idea behind Gaussian elimination is the transformation of a matrix, A, to an upper triangular matrix so that the resulting system can be solved efficiently via backward substitution. By representing the procedure as a sequence of Gauss transformations, we will arrive at the LU factorization of A as a product of a unit lower triangular matrix and an upper triangular matrix. Since not all nonsingular matrices can be factorized directly, partial row pivoting will be introduced to yield an LU decomposition of an equivalent matrix with its rows permuted. Lastly, we will examine the errors in the solutions of Eq. (8.3-1) that can arise from Gaussian elimination.

8.3.2.1 Gaussian elimination

The transformation of A to an upper triangular matrix by Gaussian elimination is accomplished by applying a sequence of elementary row operations as follows:

1. Swap the ith row with the kth row (R_i ↔ R_k)
2. Replace the ith row by a nonzero scalar times itself (R_i ← αR_i)
3. Replace the ith row by the sum of itself and a scalar times the kth row (R_i ← R_i + αR_k)

where R_i denotes the ith row. If we interpret these row operations in terms of their corresponding equation manipulations, it is clear that these row transformations result in an equivalent system of equations and, therefore, yield the same solutions. We illustrate the row transformations in the Gaussian elimination process with the following example:

[  2  −2    0    1 ] {x₁}   {  3 }
[ −2   3    3   −2 ] {x₂} = { −5 }        (8.3-37)
[  4  −1    7    1 ] {x₃}   {  0 }
[  2  −4  −10   10 ] {x₄}   {  4 }

Starting with the first column, we delete the nonzero elements below the first row by applying the following row operations: R₂ ← R₂ + R₁, R₃ ← R₃ + (−2)R₁, and R₄ ← R₄ + (−1)R₁. Representing the system as the augmented matrix, [A | b], these row operations yield

[ 2  −2    0    1 |  3 ]
[ 0   1    3   −1 | −2 ]        (8.3-38)
[ 0   3    7   −1 | −6 ]
[ 0  −2  −10    9 |  1 ]

Continuing with the second column, we see that the row operations, R₃ ← R₃ + (−3)R₂ and R₄ ← R₄ + 2R₂, will delete the nonzero entries in the third and fourth rows, respectively:

[ 2  −2   0    1 |  3 ]
[ 0   1   3   −1 | −2 ]        (8.3-39)
[ 0   0  −2    2 |  0 ]
[ 0   0  −4    7 | −3 ]

Note that these transformations still retain the zeros in the first column. Moving to the third column, we delete the entry in the fourth row by applying the operation R₄ ← R₄ + (−2)R₃ to obtain

[ 2  −2   0   1 |  3 ]
[ 0   1   3  −1 | −2 ]        (8.3-40)
[ 0   0  −2   2 |  0 ]
[ 0   0   0   3 | −3 ]

Note that the final augmented matrix represents an equivalent system in upper triangular form, i.e.,

2x₁ − 2x₂ + 0x₃ +  x₄ =  3
       x₂ + 3x₃ −  x₄ = −2        (8.3-41)
           −2x₃ + 2x₄ =  0
                  3x₄ = −3

and the coefficient matrix, A, has been transformed to an upper triangular matrix, U,

U = [ 2  −2   0   1 ]
    [ 0   1   3  −1 ]        (8.3-42)
    [ 0   0  −2   2 ]
    [ 0   0   0   3 ]


The solution is obtained by backward substitution, which starts at the fourth row and leads to

x₄ = −3/3 = −1
x₃ = (0 − 2x₄)/(−2) = −1
x₂ = (−2 − (3x₃ − x₄))/1 = 0        (8.3-43)
x₁ = (3 − (−2x₂ + x₄))/2 = 2

In vector form, the solution can be expressed as x = {2 0 −1 −1}^T.

We will formally describe the Gaussian elimination process using matrices to represent the transformations that delete the nonzero entries below the diagonal for each column. This leads to rank-one perturbations of the identity matrix known as Gauss transformations. As an example, let x = {x₁ x₂ x₃ x₄ x₅ x₆ x₇}^T be a vector of dimension seven with x₃ ≠ 0. Suppose we want a transformation that deletes all of the elements below x₃ while leaving x₁, x₂, and x₃ unchanged. First, define a vector l₃ as

l₃ = {0 0 0 l₄ l₅ l₆ l₇}^T,   l_k = x_k/x₃,   k = 4, …, 7        (8.3-44)

Next, consider the 7 × 7 matrix L₃:

L₃ = [ 1  0   0   0  0  0  0 ]
     [ 0  1   0   0  0  0  0 ]
     [ 0  0   1   0  0  0  0 ]
     [ 0  0  −l₄  1  0  0  0 ] = I − l₃e₃^T,   e₃ = {0 0 1 0 0 0 0}^T        (8.3-45)
     [ 0  0  −l₅  0  1  0  0 ]
     [ 0  0  −l₆  0  0  1  0 ]
     [ 0  0  −l₇  0  0  0  1 ]

Then

L₃x = (I − l₃e₃^T)x = x − l₃(e₃^T x) = x − l₃x₃
    = { x₁, x₂, x₃, x₄ − (x₄/x₃)x₃, x₅ − (x₅/x₃)x₃, x₆ − (x₆/x₃)x₃, x₇ − (x₇/x₃)x₃ }^T
    = { x₁, x₂, x₃, 0, 0, 0, 0 }^T        (8.3-46)

Also, note that for a vector y = {y₁ y₂ 0 0 … 0}^T, L₃y = y. Clearly, this property generalizes so that if a matrix has zeros below the diagonal in the first k − 1 columns, then premultiplying it by L_k will preserve those zeros. It is this property that permits the efficient introduction of zeros below the diagonal in the LU decomposition. Let us revisit the Gaussian elimination steps in Eqs. (8.3-38) through (8.3-40) using Gauss transformations. The row transformations in Eq. (8.3-38) that were applied to the first column lead to the Gauss transformation L₁,

L₁ = I − l₁e₁^T = [  1  0  0  0 ]
                  [  1  1  0  0 ]   where   l₁ = { 0, l₂₁, l₃₁, l₄₁ }^T = { 0, −2/2, 4/2, 2/2 }^T = { 0, −1, 2, 1 }^T        (8.3-47)
                  [ −2  0  1  0 ]
                  [ −1  0  0  1 ]

Similarly, we find that the Gauss transformations used for the second and third columns are, respectively,


L₂ = I − l₂e₂^T = [ 1   0  0  0 ]
                  [ 0   1  0  0 ]   where   l₂ = { 0, 0, l₃₂, l₄₂ }^T = { 0, 0, 3/1, −2/1 }^T = { 0, 0, 3, −2 }^T
                  [ 0  −3  1  0 ]
                  [ 0   2  0  1 ]

L₃ = I − l₃e₃^T = [ 1  0   0  0 ]
                  [ 0  1   0  0 ]   where   l₃ = { 0, 0, 0, l₄₃ }^T = { 0, 0, 0, −4/(−2) }^T = { 0, 0, 0, 2 }^T        (8.3-48)
                  [ 0  0   1  0 ]
                  [ 0  0  −2  1 ]
Hence, the Gaussian elimination process that transformed A to an upper triangular matrix, U, can be expressed as the product

U = L₃L₂L₁A        (8.3-49)

To solve for A, we must premultiply Eq. (8.3-49) by (L₃L₂L₁)⁻¹ = L₁⁻¹L₂⁻¹L₃⁻¹. This requires that we calculate the inverses of Gauss transformations. Given a Gauss transformation, L_k = I − l_k e_k^T, it can be shown that its inverse is L_k⁻¹ = I + l_k e_k^T, since

L_k⁻¹L_k = (I + l_k e_k^T)(I − l_k e_k^T) = I − (l_k e_k^T)(l_k e_k^T) = I − l_k (e_k^T l_k) e_k^T = I − l_k · 0 · e_k^T = I        (8.3-50)

Premultiplying Eq. (8.3-49) by L₁⁻¹L₂⁻¹L₃⁻¹ yields

A = (L₁⁻¹L₂⁻¹L₃⁻¹)U        (8.3-51)

Before evaluating the product L₁⁻¹L₂⁻¹L₃⁻¹, we first note that for k < m,

(l_k e_k^T)(l_m e_m^T) = l_k (e_k^T l_m) e_m^T = l_k · 0 · e_m^T = 0        (8.3-52)

It is this property that allows us to efficiently compute the product L₁⁻¹L₂⁻¹L₃⁻¹ by simply augmenting l₁, l₂, and l₃ column-wise. Let us see how this occurs by expanding the product L₁⁻¹L₂⁻¹L₃⁻¹ and applying Eqs. (8.3-50) and (8.3-52),


L₁⁻¹L₂⁻¹L₃⁻¹ = (I + l₁e₁^T)(I + l₂e₂^T)(I + l₃e₃^T)
             = [I + l₁e₁^T + l₂e₂^T + (l₁e₁^T)(l₂e₂^T)](I + l₃e₃^T)
             = (I + l₁e₁^T + l₂e₂^T)(I + l₃e₃^T)
             = I + l₁e₁^T + l₂e₂^T + l₃e₃^T + (l₁e₁^T)(l₃e₃^T) + (l₂e₂^T)(l₃e₃^T)
             = I + l₁e₁^T + l₂e₂^T + l₃e₃^T

             = [ 1    0    0    0 ]   [  1   0  0  0 ]
               [ l21  1    0    0 ] = [ −1   1  0  0 ]        (8.3-53)
               [ l31  l32  1    0 ]   [  2   3  1  0 ]
               [ l41  l42  l43  1 ]   [  1  −2  2  1 ]

Observe that L₁⁻¹L₂⁻¹L₃⁻¹ is a unit lower triangular matrix, which we will denote by L = [l_{i,j}]. Therefore, Eqs. (8.3-51) and (8.3-53) yield the LU decomposition of A,

A = L U        (8.3-54)

where the elements of L are directly obtained from the Gauss transformations. The direct version of the LU factorization algorithm is summarized below.

Direct LU factorization

Let A = [a_{i,j}] be an N × N nonsingular matrix that admits an LU factorization. Then the following algorithm calculates the lower, L = [l_{i,j}], and upper, U = [u_{i,j}], triangular factors. For i > j, l_{i,j} overwrites the (i, j) entry of A. The resulting upper triangular entries of A correspond to the elements u_{i,j}, for i ≤ j:

for j = 1, …, N − 1                          (loop over columns)
    for i = j + 1, …, N                      (delete elements below a_{j,j})
        a_{i,j} ← a_{i,j}/a_{j,j}            (store l_{i,j} by overwriting a_{i,j})
        a_{i,j+1:N} ← a_{i,j+1:N} − a_{i,j} a_{j,j+1:N}
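The algorithm above can be sketched in a few lines of NumPy. The function name is ours, and the test matrix follows the reconstruction of Eq. (8.3-37) in this text, so treat this as an illustrative implementation rather than the authors' code:

```python
import numpy as np

def lu_direct(A):
    """Direct LU factorization without pivoting: the multipliers l_ij
    overwrite the strict lower triangle of a working copy of A."""
    A = A.astype(float).copy()
    N = A.shape[0]
    for j in range(N - 1):                  # loop over columns
        for i in range(j + 1, N):           # delete elements below A[j, j]
            A[i, j] /= A[j, j]              # store l_ij by overwriting a_ij
            A[i, j + 1:] -= A[i, j] * A[j, j + 1:]
    return np.tril(A, -1) + np.eye(N), np.triu(A)

# The 4x4 matrix of Eq. (8.3-37), entries as reconstructed in this text.
A = np.array([[ 2., -2.,   0.,  1.],
              [-2.,  3.,   3., -2.],
              [ 4., -1.,   7.,  1.],
              [ 2., -4., -10., 10.]])
L, U = lu_direct(A)
assert np.allclose(L @ U, A)
assert np.allclose(np.diag(U), [2., 1., -2., 3.])   # pivots of Eq. (8.3-42)
```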


Below, the algorithm is illustrated for a 5 × 5 matrix as we loop over the columns, j = 1, …, 4. The jth column and row of A are replaced by the corresponding column and row elements of L and U, respectively. The superscript in a_{i,k}^(j) denotes that the (i, k) entry of A has been modified:

[ a11 a12 a13 a14 a15 ]       [ u11  u12     u13     u14     u15    ]
[ a21 a22 a23 a24 a25 ]  j=1  [ l21  a22^(1) a23^(1) a24^(1) a25^(1) ]
[ a31 a32 a33 a34 a35 ]  -->  [ l31  a32^(1) a33^(1) a34^(1) a35^(1) ]
[ a41 a42 a43 a44 a45 ]       [ l41  a42^(1) a43^(1) a44^(1) a45^(1) ]
[ a51 a52 a53 a54 a55 ]       [ l51  a52^(1) a53^(1) a54^(1) a55^(1) ]

      [ u11 u12  u13     u14     u15    ]       [ u11 u12 u13  u14     u15    ]
 j=2  [ l21 u22  u23     u24     u25    ]  j=3  [ l21 u22 u23  u24     u25    ]
 -->  [ l31 l32  a33^(2) a34^(2) a35^(2) ]  -->  [ l31 l32 u33  u34     u35    ]
      [ l41 l42  a43^(2) a44^(2) a45^(2) ]       [ l41 l42 l43  a44^(3) a45^(3) ]
      [ l51 l52  a53^(2) a54^(2) a55^(2) ]       [ l51 l52 l53  a54^(3) a55^(3) ]

      [ u11 u12 u13 u14 u15 ]
 j=4  [ l21 u22 u23 u24 u25 ]
 -->  [ l31 l32 u33 u34 u35 ]
      [ l41 l42 l43 u44 u45 ]
      [ l51 l52 l53 l54 u55 ]

The overwritten array thus stores both factors, with

A = [ 1   0   0   0   0 ] [ u11 u12 u13 u14 u15 ]
    [ l21 1   0   0   0 ] [ 0   u22 u23 u24 u25 ]
    [ l31 l32 1   0   0 ]·[ 0   0   u33 u34 u35 ]
    [ l41 l42 l43 1   0 ] [ 0   0   0   u44 u45 ]
    [ l51 l52 l53 l54 1 ] [ 0   0   0   0   u55 ]

Earlier we saw that an upper triangular system can be solved efficiently using backward substitution. Let us see how the LU factors provide an efficient scheme for solving Eq. (8.3-37). Substituting Eq. (8.3-54) into Eq. (8.3-1) and letting y = Ux, we obtain

Ax = L Ux = Ly = b        (8.3-55)

From Eq. (8.3-53), the lower triangular system, Ly = b, represents the following system of linear equations:

  y₁                     =  3
 −y₁ +  y₂               = −5        (8.3-56)
 2y₁ + 3y₂ +  y₃         =  0
  y₁ − 2y₂ + 2y₃ + y₄    =  4

Eq. (8.3-56) can be readily solved using forward substitution,

y₁ = 3
y₂ = −5 − (−y₁) = −2
y₃ = 0 − (2y₁ + 3y₂) = 0        (8.3-57)
y₄ = 4 − (y₁ − 2y₂ + 2y₃) = −3

Observe that y equals the right-hand side of Eq. (8.3-41). Since y is known, we can calculate the solution, x, by solving the upper triangular system, Ux = y, using backward substitution as was done in Eq. (8.3-43). The algorithms for forward and backward substitution are summarized below.

Forward substitution

Let L = [l_{i,j}] be an N × N unit lower triangular matrix. Then the solution to Ly = b is calculated as

y₁ = b₁
for i = 2, …, N
    y_i = b_i − Σ_{j=1}^{i−1} l_{i,j} y_j
Backward substitution  

Let U ¼ ui;j be a N  N nonsingular upper triangular matrix. Then the solution to Ux ¼ y is calculated as

689

690

CHAPTER 8 Numerical methods

xN ¼ yN =uNN for

i ¼ N  1; :::; 1 1, 0 N X xi ¼ @yi  ui; j xj A ui;i j¼iþ1

8.3.2.2 Gaussian elimination with partial pivoting

There are two weaknesses with the LU algorithm in its present form. First, not all nonsingular matrices possess an LU decomposition. This is because the Gaussian elimination step requires that the pivot elements along the diagonal, a_{j,j}, be nonzero. A simple example is the matrix A = [0 1; 1 0], which does not admit a direct LU factorization. However, swapping the rows of A allows it to have the trivial LU factorization as a product of identity matrices. In fact, this observation extends to all nonsingular matrices: we can always permute the rows of A so that the resulting matrix is LU factorable. Before restating this fact formally, recall that a row permutation matrix is the identity matrix with its rows rearranged. For example, the permutation matrix corresponding to swapping rows i and k is the identity matrix with its ith and kth rows swapped. This matrix will be denoted by P_{i↔k}. We can now state the result (the proof can be found in Demmel, 1997 or Horn and Johnson, 1990):

Theorem 8.3-5. Let A be an N × N nonsingular matrix. Then there exists a row permutation matrix P such that PA possesses an LU factorization.

The second weakness of the direct LU factorization is related to small pivot elements, a_{j,j}. Recall that the Gauss transformation factors, l_{i,j}, which are the entries of L, are equal to the ratio a_{i,j}/a_{j,j}. These factors can be very large if a small pivot element is encountered, and they can lead to significant round-off errors during the elimination process and the forward and backward substitution computations. A simple remedy for this numerical issue is to maximize the pivot elements by row permutations. The above discussion suggests a row permutation strategy that maximizes the pivot element at each Gaussian elimination stage to ensure numerical stability and accuracy. This leads to the algorithm known as LU factorization with partial pivoting. We will illustrate this approach for the 4 × 4 matrix defined in Eq. (8.3-8), which is repeated below for convenience,

A = [ 2  1    0    1 ]
    [ 0  0   12   −2 ]        (8.3-58)
    [ 4  4    7    1 ]
    [ 2  4  −10   10 ]

Starting with the first column, we swap the first and third rows before applying the Gauss transformation, L₁, to delete the entries below the first row in column one:

[ 2  1    0    1 ]        [ 4  4    7    1 ]       [ 4   4     7     1   ]
[ 0  0   12   −2 ]  P₁↔₃  [ 0  0   12   −2 ]  L₁   [ 0   0    12    −2   ]        (8.3-59)
[ 4  4    7    1 ]  -->   [ 2  1    0    1 ]  -->  [ 0  −1   −7/2   1/2  ]
[ 2  4  −10   10 ]        [ 2  4  −10   10 ]       [ 0   2  −27/2  19/2  ]

where

L₁ = [  1    0  0  0 ]
     [  0    1  0  0 ]        (8.3-60)
     [ −1/2  0  1  0 ]
     [ −1/2  0  0  1 ]

Continuing to the second column, we note that the pivot element in the (2, 2) position is zero; hence, a direct LU factorization is not possible. Partial row pivoting allows us to circumvent this by swapping the second and fourth rows before applying the Gauss transformation, L₂, i.e.,

[ 4   4     7     1   ]        [ 4   4     7     1   ]       [ 4   4     7     1   ]
[ 0   0    12    −2   ]  P₂↔₄  [ 0   2  −27/2  19/2  ]  L₂   [ 0   2  −27/2  19/2  ]        (8.3-61)
[ 0  −1   −7/2   1/2  ]  -->   [ 0  −1   −7/2   1/2  ]  -->  [ 0   0  −41/4  21/4  ]
[ 0   2  −27/2  19/2  ]        [ 0   0    12    −2   ]       [ 0   0    12    −2   ]


where

L₂ = [ 1   0   0  0 ]
     [ 0   1   0  0 ]        (8.3-62)
     [ 0  1/2  1  0 ]
     [ 0   0   0  1 ]

A common mistake is to search the entire column for the maximum pivot element. For this example, this would lead to swapping the first and second rows, which destroys the zeros in the first column below the first row that resulted from the previous Gauss transformation. Therefore, for the kth column, the search for the maximum pivot element must be restricted to rows k through N. Continuing to the third column, since |−41/4| < 12, we swap the third and fourth rows. Applying the Gauss transformation, L₃, to delete the entry in the fourth row and third column produces the upper triangular matrix, U, i.e.,

[ 4   4     7     1   ]        [ 4   4     7     1   ]       [ 4   4     7     1    ]
[ 0   2  −27/2  19/2  ]  P₃↔₄  [ 0   2  −27/2  19/2  ]  L₃   [ 0   2  −27/2  19/2   ]        (8.3-63)
[ 0   0  −41/4  21/4  ]  -->   [ 0   0    12    −2   ]  -->  [ 0   0    12    −2    ] = U
[ 0   0    12    −2   ]        [ 0   0  −41/4  21/4  ]       [ 0   0     0   85/24  ]

where

L₃ = [ 1  0    0    0 ]
     [ 0  1    0    0 ]        (8.3-64)
     [ 0  0    1    0 ]
     [ 0  0  41/48  1 ]

Summarizing, we have shown that

(L₃P₃↔₄L₂P₂↔₄L₁P₁↔₃)A = U        (8.3-65)

Inverting the product in parentheses yields

A = (L₃P₃↔₄L₂P₂↔₄L₁P₁↔₃)⁻¹U = (P₁↔₃⁻¹L₁⁻¹P₂↔₄⁻¹L₂⁻¹P₃↔₄⁻¹L₃⁻¹)U        (8.3-66)

Denote the matrix product within the parentheses on the right by L̃. Then A = L̃U. The computation of L̃ is straightforward since the row permutations, P_{i↔k}, are equal to their own inverses, i.e., P_{i↔k}⁻¹ = P_{i↔k}, and inverses of Gauss transformations are easy to calculate. Therefore, it can be shown that

L̃ = P₁↔₃L₁⁻¹P₂↔₄L₂⁻¹P₃↔₄L₃⁻¹ = [ 1/2  −1/2  −41/48  1 ]
                                [  0     0      1    0 ]        (8.3-67)
                                [  1     0      0    0 ]
                                [ 1/2    1      0    0 ]

Unfortunately, L̃ is not a unit lower triangular matrix. In fact, it can be shown that A does not possess an LU factorization (see Theorem 8.3-5). Consider the permutation matrix, P = P₃↔₄P₂↔₄P₁↔₃, where

P = [ 1 0 0 0 ] [ 1 0 0 0 ] [ 0 0 1 0 ]   [ 0 0 1 0 ]
    [ 0 1 0 0 ] [ 0 0 0 1 ] [ 0 1 0 0 ] = [ 0 0 0 1 ]        (8.3-68)
    [ 0 0 0 1 ] [ 0 0 1 0 ] [ 1 0 0 0 ]   [ 0 1 0 0 ]
    [ 0 0 1 0 ] [ 0 1 0 0 ] [ 0 0 0 1 ]   [ 1 0 0 0 ]

Premultiplying Eq. (8.3-66) by P yields the LU factorization of PA,

PA = P(P₁↔₃L₁⁻¹P₂↔₄L₂⁻¹P₃↔₄L₃⁻¹)U = (PL̃)U

   = [  1     0      0    0 ]   [ 4   4     7     1    ]
     [ 1/2    1      0    0 ] · [ 0   2  −27/2  19/2   ]        (8.3-69)
     [  0     0      1    0 ]   [ 0   0    12    −2    ]
     [ 1/2  −1/2  −41/48  1 ]   [ 0   0     0   85/24  ]
              PL̃                          U

The preceding discussion illustrates Theorem 8.3-5 in that, although A did not possess an LU factorization, a row permutation of it, namely PA, did. The proof of Theorem 8.3-5 boils down to showing that the product, PL̃, is unit lower triangular. Let us take a closer look as to why this happens. Let L_k be a Gauss transformation for the kth column. Consider permutation triple products of the form, P_{m↔n}L_kP_{m↔n}, where k < m, and P_{m↔n} is a row permutation matrix that swaps rows m and n, with m ≤ n. Since premultiplying and postmultiplying by P_{m↔n} swaps the mth and nth rows and columns, respectively, we obtain

8.3 Solution of systems of linear equations

$$
P_{m\leftrightarrow n}\, L_k\, P_{m\leftrightarrow n}
= P_{m\leftrightarrow n}
\begin{bmatrix}
1 &        &         &   &        &   &        &   \\
  & \ddots &         &   &        &   &        &   \\
  &        & 1       &   &        &   &        &   \\
  &        & l_{k+1} & 1 &        &   &        &   \\
  &        & \vdots  &   & \ddots &   &        &   \\
  &        & l_m     &   &        & 1 &        &   \\
  &        & \vdots  &   &        &   & \ddots &   \\
  &        & l_n     &   &        &   &        & 1
\end{bmatrix}
P_{m\leftrightarrow n}
$$

$$
= \begin{bmatrix}
1 &        &         &   &        &   &        &   \\
  & \ddots &         &   &        &   &        &   \\
  &        & 1       &   &        &   &        &   \\
  &        & l_{k+1} & 1 &        &   &        &   \\
  &        & \vdots  &   & \ddots &   &        &   \\
  &        & l_n     &   &        & 0 & \cdots & 1 \\
  &        & \vdots  &   &        &   & \ddots &   \\
  &        & l_m     &   &        & 1 & \cdots & 0
\end{bmatrix}
P_{m\leftrightarrow n}
$$

$$
= \begin{bmatrix}
1 &        &         &   &        &   &        &   \\
  & \ddots &         &   &        &   &        &   \\
  &        & 1       &   &        &   &        &   \\
  &        & l_{k+1} & 1 &        &   &        &   \\
  &        & \vdots  &   & \ddots &   &        &   \\
  &        & l_n     &   &        & 1 &        &   \\
  &        & \vdots  &   &        &   & \ddots &   \\
  &        & l_m     &   &        &   &        & 1
\end{bmatrix}
\equiv L_{k,\,m\leftrightarrow n} \tag{8.3-70}
$$


where $L_{k,m\leftrightarrow n}$ denotes the Gauss transformation $L_k$ with the mth and nth entries in the kth column swapped. Therefore, under these types of permutation triple products, the Gauss transformation form of the matrix is retained. Recall that inverses of Gauss transformations are also Gauss transformations. Therefore, expanding the product $P\tilde{L}$ in Eq. (8.3-69), while noting the invariance of the form of Gauss transformations under these permutation triple products, yields

$$
\begin{aligned}
P\tilde{L} &= \left( P_{3\leftrightarrow 4} P_{2\leftrightarrow 4} P_{1\leftrightarrow 3} \right)\left( P_{1\leftrightarrow 3} L_1^{-1} P_{2\leftrightarrow 4} L_2^{-1} P_{3\leftrightarrow 4} L_3^{-1} \right) \\
&= P_{3\leftrightarrow 4} P_{2\leftrightarrow 4} \cdot I \cdot L_1^{-1} P_{2\leftrightarrow 4} L_2^{-1} P_{3\leftrightarrow 4} L_3^{-1} \\
&= P_{3\leftrightarrow 4} \left( P_{2\leftrightarrow 4} L_1^{-1} P_{2\leftrightarrow 4} \right) L_2^{-1} P_{3\leftrightarrow 4} L_3^{-1} = P_{3\leftrightarrow 4} L_{1,2\leftrightarrow 4}^{-1} L_2^{-1} P_{3\leftrightarrow 4} L_3^{-1} \\
&= P_{3\leftrightarrow 4} L_{1,2\leftrightarrow 4}^{-1} \cdot I \cdot L_2^{-1} P_{3\leftrightarrow 4} L_3^{-1} = P_{3\leftrightarrow 4} L_{1,2\leftrightarrow 4}^{-1} P_{3\leftrightarrow 4} P_{3\leftrightarrow 4} L_2^{-1} P_{3\leftrightarrow 4} L_3^{-1} \\
&= \left( P_{3\leftrightarrow 4} L_{1,2\leftrightarrow 4}^{-1} P_{3\leftrightarrow 4} \right)\left( P_{3\leftrightarrow 4} L_2^{-1} P_{3\leftrightarrow 4} \right) L_3^{-1} \\
&= L_{1,2\leftrightarrow 4,3\leftrightarrow 4}^{-1}\, L_{2,3\leftrightarrow 4}^{-1}\, L_3^{-1} = L \\
&= \begin{bmatrix} 1 & 0 & 0 & 0 \\ l_{4,1} & 1 & 0 & 0 \\ l_{2,1} & l_{4,2} & 1 & 0 \\ l_{3,1} & l_{3,2} & l_{4,3} & 1 \end{bmatrix}
= \begin{bmatrix} 1 & 0 & 0 & 0 \\ 1/2 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 1/2 & 1/2 & 41/48 & 1 \end{bmatrix}
\end{aligned} \tag{8.3-71}
$$
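The invariance of the Gauss-transformation form under these triple products can be spot-checked numerically. The following is a minimal sketch (pure Python; the multiplier values are arbitrary illustrations, not taken from the example above) verifying that $P_{2\leftrightarrow 4} L_1 P_{2\leftrightarrow 4}$ is again a Gauss transformation with the second and fourth entries of column 1 swapped:

```python
# Numerical check of the permutation triple product P L1 P = L_{1,2<->4}
# for a 4x4 Gauss transformation L1 (column k = 1) and the swap P_{2<->4}.
# The multiplier values are illustrative, not from the text's example.

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def perm(n, m, r):
    # permutation matrix that swaps rows m and r (0-based indices)
    P = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    P[m], P[r] = P[r], P[m]
    return P

# Gauss transformation for column 1 with multipliers l2, l3, l4
l2, l3, l4 = 0.5, -0.25, 0.75
L1 = [[1, 0, 0, 0], [l2, 1, 0, 0], [l3, 0, 1, 0], [l4, 0, 0, 1]]

P24 = perm(4, 1, 3)                  # swaps rows 2 and 4 (1-based)
T = matmul(matmul(P24, L1), P24)     # triple product P L1 P

# The result retains the Gauss transformation form, with the entries
# in rows 2 and 4 of column 1 swapped:
expected = [[1, 0, 0, 0], [l4, 1, 0, 0], [l3, 0, 1, 0], [l2, 0, 0, 1]]
assert T == expected
```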

The swapping of the elements, $l_{i,j}$, of the Gauss transformations as a result of the partial row pivoting leads to the following modification of the Direct LU factorization.

LU factorization with partial pivoting

Let $A = [a_{i,j}]$ be an $N \times N$ nonsingular matrix. Then the following algorithm calculates the lower, $L = [l_{i,j}]$, and upper, $U = [u_{i,j}]$, triangular factors of a row permutation, PA, of A. For $i > j$, the elements $l_{i,j}$ overwrite $a_{i,j}$. The resulting upper triangular entries of A correspond to the elements,


$u_{i,j}$, for $i \le j$. The integer-valued vector $p = \{p_1\ p_2\ \cdots\ p_N\}$ represents the row permutation matrix, P, such that the ith row of P has a one in the $p_i$ position and zeros elsewhere.

    p = {1, 2, ..., N}                                      initialize p
    for j = 1, ..., N-1                                     loop over columns 1, ..., N-1
        find k such that |a(k,j)| = max |a(n,j)|, j<=n<=N   locate maximum pivot element
        a(j,1:N) <-> a(k,1:N)                               swap jth and kth rows
        p(j) <-> p(k)                                       record row permutation
        for i = j+1, ..., N                                 delete elements below a(j,j)
            a(i,j) = a(i,j)/a(j,j)                          store l(i,j) by overwriting a(i,j)
            a(i,j+1:N) = a(i,j+1:N) - a(i,j)*a(j,j+1:N)

The above algorithm outlines the main steps in the LU factorization with partial pivoting. It is meant to provide a basic understanding of the algorithm when using LAPACK or commercial software packages. As such, it does not incorporate the necessary checks to guard against small pivot terms, nor is it the most efficient in terms of speed and memory usage. Additional numerical stability may be gained by searching for the maximum pivot element over both rows and columns j through N. This searching strategy leads to what is known as LU decomposition with complete pivoting. It is seldom used since partial pivoting suffices for most applications. Finally, the use of partial pivoting requires modifying the forward substitution to account for the permutation of the elements of b.

Forward substitution with partial pivoting

Let $L = [l_{i,j}]$ be the $N \times N$ unit lower triangular factor of PA, and let $p = [p_1\ \cdots\ p_N]^T$ represent the row permutations of P. Then the solution of $Ly = Pb$ is obtained via

$$y_1 = b_{p_1}, \qquad y_i = b_{p_i} - \sum_{j=1}^{i-1} l_{i,j}\, y_j, \quad i = 2, \ldots, N$$
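The two algorithms above can be sketched together in a few lines of Python. This is a minimal illustration, not a production implementation: the factors overwrite a copy of A, the permutation is recorded in the index vector p, and no small-pivot checks are made.

```python
# Sketch of LU factorization with partial pivoting, followed by the
# permuted forward substitution (Ly = Pb) and backward substitution (Ux = y).

def lu_partial_pivot(A):
    N = len(A)
    A = [row[:] for row in A]              # work on a copy of A
    p = list(range(N))                     # p encodes the permutation matrix P
    for j in range(N - 1):                 # loop over columns 1, ..., N-1
        # locate the maximum pivot element in column j
        k = max(range(j, N), key=lambda n: abs(A[n][j]))
        A[j], A[k] = A[k], A[j]            # swap jth and kth rows
        p[j], p[k] = p[k], p[j]            # record the row permutation
        for i in range(j + 1, N):          # delete elements below a(j,j)
            A[i][j] /= A[j][j]             # store l(i,j) by overwriting a(i,j)
            for m in range(j + 1, N):
                A[i][m] -= A[i][j] * A[j][m]
    return A, p                            # L and U packed into one array

def lu_solve(LU, p, b):
    N = len(LU)
    y = [0.0] * N                          # forward substitution, Ly = Pb
    for i in range(N):
        y[i] = b[p[i]] - sum(LU[i][j] * y[j] for j in range(i))
    x = [0.0] * N                          # backward substitution, Ux = y
    for i in reversed(range(N)):
        x[i] = (y[i] - sum(LU[i][j] * x[j] for j in range(i + 1, N))) / LU[i][i]
    return x

# Small-pivot example: in double precision, pivoting keeps the result accurate.
A = [[0.001, 1.0], [1.0, 2.0]]
b = [1.0, 3.0]
LU, p = lu_partial_pivot(A)
x = lu_solve(LU, p, b)
```

The small leading entry of A forces a row swap (p becomes [1, 0]), after which the computed x agrees with the exact solution to machine precision.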


8.3.2.3 Error analysis

We conclude this section with a discussion about LU factors and error bounds for the solution. We begin with the following definition: Let $A = [a_{i,j}]$ be an $N \times N$ matrix. Then for $m \le N$, the $m \times m$ submatrix $A_m = [a_{1:m,1:m}]$ is a leading principal submatrix. The next theorem, whose proof can be found in Horn and Johnson (1990), Golub and Van Loan (2013), or Demmel (1997), provides a necessary and sufficient condition for a matrix to possess a unique LU factorization.

Theorem 8.3-6 Let A be an $N \times N$ nonsingular matrix. Then A possesses an LU factorization, $A = LU$, if and only if all leading principal submatrices are nonsingular. Furthermore, if we require that L be unit lower triangular, then the LU factors are unique.

One of the central themes in numerical analysis is that if a problem is to be solved numerically, it should be well posed. This means that a unique solution exists and that the solutions remain "close" under small changes to the input data. The second part is related to the sensitivity of linear systems, which will be discussed later. Theorem 8.3-6 establishes the first part in that, under certain conditions, we can be assured that the LU factors exist and, furthermore, are unique. We saw earlier that even if the conditions of Theorem 8.3-6 were not met, as long as A is nonsingular, we can reorder its rows so that the resulting permuted matrix has an LU factorization. The reordering was performed implicitly during the LU decomposition with partial pivoting. For example, consider the matrix A defined in Eq. (8.3-8). Because its leading $2 \times 2$ submatrix is singular, it does not have an LU factorization. On the other hand, we saw that permuting the rows of A during Gaussian elimination via partial pivoting led to the permuted matrix, PA, that is LU factorable.

Section 8.3.3 described how the sensitivity of a problem can be characterized by its condition number. Before establishing Eq. (8.3-36), we stated the following result, which is a direct consequence of the consistency of matrix and vector norms. Let x be a solution to the problem, $Ax = b$; then

$$\|x\| \ge \|A\|^{-1}\|b\| \tag{8.3-72}$$

For $0 \le \delta \le 1$, consider the following perturbations of the input,

$$\hat{A}(\delta) = A + \delta F \quad \text{and} \quad \hat{b}(\delta) = b + \delta f \tag{8.3-73}$$


Also, let $\hat{x}(\delta)$ denote the solution to the perturbed problem,

$$\hat{A}(\delta)\,\hat{x}(\delta) = \hat{b}(\delta) \tag{8.3-74}$$

Differentiating (8.3-74) with respect to $\delta$ and then letting $\delta = 0$, we obtain an expression for $\hat{x}'(0)$:

$$\frac{d}{d\delta}\Big[ (A + \delta F)\,\hat{x}(\delta) \Big]_{\delta=0} = \frac{d}{d\delta}\Big[ b + \delta f \Big]_{\delta=0}$$
$$F x + A\,\hat{x}'(0) = f \tag{8.3-75}$$
$$\hat{x}'(0) = A^{-1}(f - Fx)$$

Taylor expansion of $\hat{x}(\delta)$ yields

$$\hat{x}(\delta) = x + \delta A^{-1}(f - Fx) + O\!\left(\delta^2\right) \tag{8.3-76}$$

Omitting the second-order term, we obtain a bound for the forward error,

$$\|\hat{x}(\delta) - x\| = \left\|\delta A^{-1}(f - Fx)\right\| \le \|A^{-1}\|\,\|\delta f - \delta F x\| \le \|A^{-1}\|\left( \|\delta f\| + \|\delta F x\| \right) \le \|A^{-1}\|\left( \|\delta f\| + \|\delta F\|\cdot\|x\| \right) \tag{8.3-77}$$

Dividing the above inequality by $\|x\|$ and then using Eq. (8.3-72) leads to

$$\frac{\|\hat{x}(\delta) - x\|}{\|x\|} \le \|A^{-1}\|\left( \frac{\|\delta f\|}{\|x\|} + \|\delta F\| \right) \le \|A^{-1}\|\left( \frac{\|\delta f\|}{\|A\|^{-1}\|b\|} + \|\delta F\| \right) = \|A\|\,\|A^{-1}\|\left( \frac{\|\delta f\|}{\|b\|} + \frac{\|\delta F\|}{\|A\|} \right) \tag{8.3-78}$$

This suggests defining the condition number as in Eq. (8.3-36), which leads to the following bound of the relative forward error in terms of the relative backward errors:

$$\frac{\|\hat{x}(\delta) - x\|}{\|x\|} \le \kappa(A)\left( \frac{\|\delta f\|}{\|b\|} + \frac{\|\delta F\|}{\|A\|} \right), \qquad \kappa(A) = \|A\|\cdot\|A^{-1}\| \tag{8.3-79}$$
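The bound in Eq. (8.3-79) can be observed numerically. Here is a small sketch with an ill-conditioned 2×2 system (the matrix and the perturbation of b are illustrative choices, not from the text), using the infinity norm:

```python
# Illustration of Eq. (8.3-79): an ill-conditioned system can amplify a small
# relative perturbation of b by up to kappa(A). Here dF = 0 (only b perturbed).

A = [[1.0, 1.0], [1.0, 1.0001]]
b = [2.0, 2.0001]          # exact solution x = {1, 1}
db = [0.0, 0.0001]         # small perturbation df of the right-hand side

def solve2(A, b):
    # direct solution of a 2x2 system via the explicit inverse
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(A[1][1] * b[0] - A[0][1] * b[1]) / det,
            (A[0][0] * b[1] - A[1][0] * b[0]) / det]

def norm_inf(v):
    return max(abs(e) for e in v)

x = solve2(A, b)
x_hat = solve2(A, [b[i] + db[i] for i in range(2)])

# condition number in the infinity norm: kappa = ||A|| * ||A^-1||
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
Ainv = [[A[1][1] / det, -A[0][1] / det], [-A[1][0] / det, A[0][0] / det]]
row_sum = lambda M: max(abs(r[0]) + abs(r[1]) for r in M)
kappa = row_sum(A) * row_sum(Ainv)

forward = norm_inf([x_hat[i] - x[i] for i in range(2)]) / norm_inf(x)
backward = norm_inf(db) / norm_inf(b)
assert forward <= kappa * backward    # Eq. (8.3-79) with dF = 0
```

A relative input change of about 5e-5 produces a 100% change in the solution; the amplification stays below kappa, which is roughly 4e4 here.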


Therefore, linear systems with large condition numbers are ill conditioned and can drastically amplify small changes in the input. Note that the qualifiers "large" and "small" depend on the problem and the precision available. Although $\kappa(A)$ depends on the matrix norm used, it is equivalent by virtue of the inequalities in Eq. (8.3-12). If we were to use the 2-norm, $\|A\|_2 = \sqrt{\lambda_{max}}$ and $\|A^{-1}\|_2 = 1/\sqrt{\lambda_{min}}$, where $\lambda_{max}$ and $\lambda_{min}$ are the maximum and minimum eigenvalues of $A^T A$. Therefore, the condition number with respect to the 2-norm is equal to

$$\kappa_2(A) = \sqrt{\frac{\lambda_{max}}{\lambda_{min}}} \tag{8.3-80}$$

Let us consider solving the following $3 \times 3$ unit lower triangular system, $Ly = b$, where L is given by

$$L = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & \lambda & 1 \end{bmatrix}, \qquad \lambda \ne 0 \tag{8.3-81}$$

This problem could occur as part of the forward substitution phase after the LU factors of A have been computed. Calculating the eigenvalues of $L^T L$, we find that the condition number of L using Eq. (8.3-80) is given by

$$\kappa_2(L) = \left( \frac{\lambda^2 + 2 + \lambda\sqrt{\lambda^2 + 4}}{\lambda^2 + 2 - \lambda\sqrt{\lambda^2 + 4}} \right)^{1/2} = \frac{\lambda^2 + 2 + \lambda\sqrt{\lambda^2 + 4}}{2} \tag{8.3-82}$$

If a small pivot element was encountered while performing Gaussian elimination without pivoting, $\lambda$ could be very large. Hence, for $\lambda \gg 1$, $\kappa(L) \approx \lambda^2$ and the lower triangular system can be ill conditioned. On the other hand, if partial pivoting was employed, then $|\lambda| \le 1$ and $\kappa(L) \le 2.62$, which implies that L is well conditioned. We can generalize the above example to further illustrate the effect of partial pivoting on the condition number of the Gauss transformations. Recall that the Gaussian elimination procedure and LU decomposition were based on the Gauss transformations, $L_k = I - \mathbf{l}_k e_k^T$, $k = 1, \ldots, N-1$, where $\mathbf{l}_k$ is a vector whose first k elements are zero. In order to calculate the condition number of $L_k$ using the 2-norm, we


need to compute $\lambda_{max}$ and $\lambda_{min}$. The singular values of $L_k$ are the square roots of the eigenvalues of $L_k^T L_k$, which has the form

$$L_k^T L_k = \begin{bmatrix}
1 & & & & & \\
& \ddots & & & & \\
& & 1 + \|\mathbf{l}_k\|_2^2 & -l_{k+1} & \cdots & -l_N \\
& & -l_{k+1} & 1 & & \\
& & \vdots & & \ddots & \\
& & -l_N & & & 1
\end{bmatrix} \tag{8.3-83}$$

Clearly, $v_m = e_m$, $m = 1, \ldots, k-1$, are eigenvectors of $L_k^T L_k$ with eigenvalues $\lambda_m = 1$. For the vector $\hat{\mathbf{l}}_k = \{ l_{k+1}\ l_{k+2}\ \cdots\ l_N \}^T$, there are $N - k - 1$ linearly independent vectors, $\hat{u}_p = \{ u_{p,k+1}\ u_{p,k+2}\ \cdots\ u_{p,N} \}^T$, that are orthogonal to $\hat{\mathbf{l}}_k$. Note that the vectors $u_p = \{ 0\ \cdots\ 0\ u_{p,k+1}\ u_{p,k+2}\ \cdots\ u_{p,N} \}^T$, $p = 1, \ldots, N-k-1$, are also eigenvectors with eigenvalues equal to one. We need to find two more eigen-solutions. Consider the vector

$$x = \{ 0\ \cdots\ 0\ 1\ x_{k+1}\ x_{k+2}\ \cdots\ x_N \}^T \tag{8.3-84}$$

Then x will be an eigenvector of $L_k^T L_k$ with eigenvalue $\lambda$ if

$$\begin{aligned}
1 + \|\mathbf{l}_k\|_2^2 - (l_{k+1} x_{k+1} + \cdots + l_N x_N) &= \lambda \\
-l_{k+1} + x_{k+1} &= \lambda x_{k+1} \\
&\ \,\vdots \\
-l_N + x_N &= \lambda x_N
\end{aligned} \tag{8.3-85}$$

Solving for $x_m$, $m = k+1, \ldots, N$, gives

$$x_m = \frac{l_m}{1 - \lambda} \tag{8.3-86}$$

Substituting the above into the first equation of (8.3-85), and noting that $\|\mathbf{l}_k\|_2^2 = l_{k+1}^2 + \cdots + l_N^2$, leads to the characteristic equation,

$$\lambda^2 - \left( \|\mathbf{l}_k\|_2^2 + 2 \right)\lambda + 1 = 0 \tag{8.3-87}$$

The quadratic formula yields the two remaining eigenvalues of $L_k^T L_k$,

$$\lambda_{\pm} = \frac{\|\mathbf{l}_k\|_2^2 + 2 \pm \|\mathbf{l}_k\|_2\sqrt{\|\mathbf{l}_k\|_2^2 + 4}}{2} \tag{8.3-88}$$

Since $0 < \lambda_- \le 1 \le \lambda_+$, we conclude that $\lambda_{min} = \lambda_-$ and $\lambda_{max} = \lambda_+$, which leads to

$$\kappa_2(L_k) = \sqrt{\frac{\lambda_{max}}{\lambda_{min}}} = \left( \frac{\|\mathbf{l}_k\|_2^2 + 2 + \|\mathbf{l}_k\|_2\sqrt{\|\mathbf{l}_k\|_2^2 + 4}}{\|\mathbf{l}_k\|_2^2 + 2 - \|\mathbf{l}_k\|_2\sqrt{\|\mathbf{l}_k\|_2^2 + 4}} \right)^{1/2} \tag{8.3-89}$$

Rationalizing the denominator yields the condition number for $L_k$,

$$\kappa_2(L_k) = \left( \frac{\|\mathbf{l}_k\|_2^4 + 4\|\mathbf{l}_k\|_2^2 + \|\mathbf{l}_k\|_2\left( \|\mathbf{l}_k\|_2^2 + 2 \right)\sqrt{\|\mathbf{l}_k\|_2^2 + 4} + 2}{2} \right)^{1/2} = \frac{\|\mathbf{l}_k\|_2^2 + 2 + \|\mathbf{l}_k\|_2\sqrt{\|\mathbf{l}_k\|_2^2 + 4}}{2} \tag{8.3-90}$$

Observe that this is similar to Eq. (8.3-82). Again, we note that partial pivoting will produce factors, $l_m$, whose absolute values are bounded by one. Therefore, $\kappa_2(L_k) = O\!\left( (N-k)^2 \right)$, which is generally orders of magnitude less than the condition numbers that result when pivoting is not used. Additionally, linear systems in structural dynamics that result from finite element modeling generally possess sparse matrices. Therefore, these matrices have sparse lower unit triangular factors with condition numbers much less than $N^2$.


In a later section, we will discuss the QR factorization, which uses elementary orthonormal matrices, $U_k$, to factor a matrix as a product of orthonormal and upper triangular matrices. Observe that $\kappa_2(U_k) = 1$ since $U_k^T U_k$ equals the identity matrix, which has eigenvalues equal to one. This implies that algorithms using orthonormal transformations are numerically more stable than those using Gauss transformations.

The next result bounds the floating-point errors of the computed LU factors of A under Gaussian elimination. If A does not have an LU factorization, then without loss of generality, we can instead consider a row permutation of A that does. In fact, the row permutation can be chosen and applied a priori as the one that would have resulted under partial pivoting. Let $\hat{L}$ and $\hat{U}$ denote the LU factors under a floating-point implementation of the LU decomposition algorithm. Then

$$\hat{L}\hat{U} = A + E, \qquad |E| \le N\varepsilon'_{mach}\,|\hat{L}|\cdot|\hat{U}| \tag{8.3-91}$$

The above result uses the notation $|A| = \left[ |a_{i,j}| \right]$, which implies that the inequality holds element-wise in an absolute value sense. Observe that the bound applies to the product of the computed LU factors and implies nothing about the accuracy of the factors themselves. According to Stewart (1998, 2001a,b), the factor N usually overestimates the errors that occur. Therefore, we will later replace it by O(1) to obtain a practical upper bound. Details of the above and the following discussion can be found in Demmel (1997) or Stewart (1998, 2001a,b).

Let us look at the errors that can arise during the forward and backward substitutions with the computed LU factors. It can be shown that the computed solution, $\hat{y}$, of the lower triangular system, $\hat{L}y = b$, satisfies the perturbed problem,

$$\left( \hat{L} + \Delta\hat{L} \right)\hat{y} = b, \qquad |\Delta\hat{L}| \le N\varepsilon'_{mach}\,|\hat{L}| \tag{8.3-92}$$

Similarly, the backward substitution solution, $\hat{x}$, of $\hat{U}x = \hat{y}$, satisfies

$$\left( \hat{U} + \Delta\hat{U} \right)\hat{x} = \hat{y}, \qquad |\Delta\hat{U}| \le N\varepsilon'_{mach}\,|\hat{U}| \tag{8.3-93}$$


Substituting $\hat{U}\hat{x} = \hat{y}$ into Eq. (8.3-92) and applying Eq. (8.3-91) leads to

$$(A + \Delta A)\hat{x} = b, \qquad \Delta A = E + \hat{L}\cdot\Delta\hat{U} + \Delta\hat{L}\cdot\hat{U} + \Delta\hat{L}\cdot\Delta\hat{U} \tag{8.3-94}$$

Combining the bounds of the perturbations in Eqs. (8.3-91)–(8.3-93), and then using the triangle inequality, yields the element-wise bound,

$$|\Delta A| \le N\varepsilon'_{mach}\left( 3 + N\varepsilon'_{mach} \right)|\hat{L}|\cdot|\hat{U}| \approx O(N\varepsilon_{mach})\,|\hat{L}|\cdot|\hat{U}| \tag{8.3-95}$$

For a matrix, H, note that for any matrix norm except the 2-norm, $\|\,|H|\,\| = \|H\|$. Let us take the ∞-norm, for example. Then Eq. (8.3-91) and division by $\|A\|_\infty$ imply

$$\frac{\|E\|_\infty}{\|A\|_\infty} \le N\varepsilon'_{mach}\,\frac{\|\hat{L}\|_\infty\,\|\hat{U}\|_\infty}{\|A\|_\infty} \tag{8.3-96}$$

As we noted previously, if Gauss elimination was applied without pivoting, $\|\hat{L}\|_\infty$ could be very large. On the other hand, partial row pivoting will produce Gauss transformation factors, $l_{i,j}$, that are less than one. This implies that $\|\hat{L}\|_\infty \le N$, which leads to

$$\frac{\|E\|_\infty}{\|A\|_\infty} \le N^2\varepsilon'_{mach}\,\frac{\|\hat{U}\|_\infty}{\|A\|_\infty} \tag{8.3-97}$$

Note that in practice the $N^2$ factor is an overestimate and can be replaced by a factor, $C_N$, that modestly grows with N. In particular, most of the matrices that are encountered in structural dynamics are symmetric and sparse and, therefore, have factors $C_N \sim O(1)$. Hence, for practical applications, we have the relative error bound,

$$\frac{\|E\|_\infty}{\|A\|_\infty} \le C_N\,\varepsilon'_{mach}\,\frac{\|\hat{U}\|_\infty}{\|A\|_\infty} \tag{8.3-98}$$

Additionally, if $\|\hat{U}\|_\infty \approx \|A\|_\infty$, then $\|E\|_\infty/\|A\|_\infty \approx O(\varepsilon_{mach})$. Under these conditions, Eqs. (8.3-95), (8.3-94), and (8.3-79) imply

$$\frac{\|\hat{x} - x\|_\infty}{\|x\|_\infty} \le \kappa_\infty(A)\,\frac{\|E\|_\infty}{\|A\|_\infty} \approx \kappa_\infty(A)\,O(\varepsilon_{mach}) \tag{8.3-99}$$


If A is well conditioned, then Gaussian elimination with partial pivoting is a backward stable algorithm. The LAPACK Users' Guide (Dongarra, 1999) suggests using Eq. (8.3-99) to provide an approximate error bound by letting $O(\varepsilon_{mach}) \approx \varepsilon_{mach}$ and using an estimate of the condition number. Therefore, the product $\kappa_\infty(A)\,O(\varepsilon_{mach})$ is better interpreted as an indicator of the number of correct decimal digits rather than an actual error bound.

The subtleties of the errors in LU decomposition can be illustrated with an example from Golub and Van Loan (2013). Consider solving the following system using 3-digit floating-point arithmetic:

$$\begin{bmatrix} .001 & 1.00 \\ 1.00 & 2.00 \end{bmatrix}\begin{Bmatrix} x_1 \\ x_2 \end{Bmatrix} = \begin{Bmatrix} 1.00 \\ 3.00 \end{Bmatrix} \tag{8.3-100}$$

where the exact solution is $x = \{ x_1\ x_2 \}^T = \{ 1.002004\ldots\ \ 0.998997\ldots \}^T$. Also, note that $\|A\| \approx \|A^{-1}\| \approx 2$ and, hence, $\kappa(A) \approx 4$. This says that A is well conditioned. Applying the LU decomposition without pivoting yields the following floating-point LU factors,

$$\hat{L}_{wp} \approx \begin{bmatrix} 1 & 0 \\ 1000 & 1 \end{bmatrix} \quad \text{and} \quad \hat{U}_{wp} \approx \begin{bmatrix} .001 & 1 \\ 0 & -1000 \end{bmatrix} \tag{8.3-101}$$

Observe that without pivoting $\|\hat{L}_{wp}\| \gg 1$ and $\|\hat{U}_{wp}\| \gg \|A\|$. These conditions do not satisfy the assumptions that led to Eq. (8.3-99). Continuing with the forward and backward substitutions, we obtain the solution, $\hat{x}_{wp} = \{ x_1\ x_2 \}^T \approx \{ 0\ \ 1.00 \}^T$, which differs significantly from the exact solution. This was expected since the estimate in Eq. (8.3-99) was not applicable. Let's now examine the solution when applying partial pivoting. Swapping the first and second rows prior to calculating the LU factors leads to the system

$$\begin{bmatrix} 1.00 & 2.00 \\ .001 & 1.00 \end{bmatrix}\begin{Bmatrix} x_1 \\ x_2 \end{Bmatrix} = \begin{Bmatrix} 3.00 \\ 1.00 \end{Bmatrix} \tag{8.3-102}$$

Note that the norm and condition number of A have not changed. However, the floating-point LU factors become

$$\hat{L}_{pp} \approx \begin{bmatrix} 1 & 0 \\ .001 & 1 \end{bmatrix} \quad \text{and} \quad \hat{U}_{pp} \approx \begin{bmatrix} 1.00 & 2.00 \\ 0 & 1.00 \end{bmatrix} \tag{8.3-103}$$


Clearly, partial pivoting resulted in LU factors that satisfy the conditions that led to Eq. (8.3-99), i.e., $\|\hat{L}_{pp}\| \approx O(1)$ and $\|\hat{U}_{pp}\| \approx \|A\| \approx 2$. Hence, we expect that this solution scheme is backward stable. Applying the forward and backward substitutions leads to $\hat{x}_{pp} = \{ x_1\ x_2 \}^T \approx \{ 1.00\ \ 1.00 \}^T$, which approximates the exact solution to within 3 digits. This simple example illustrates how row permutations that maximize the pivot elements lead to stability and improved numerical accuracy. It also shows how accuracy depends on the numerical method used, even for a well-conditioned system.

The preceding discussion suggests that Gaussian elimination with partial pivoting is backward stable. Indeed, for all practical problems this is the case. The relative error bound, Eq. (8.3-99), which implied backward stability, rested on the assumption that the norm of $\hat{U}$ was similar to or bounded by the norm of A. If this was not the case, then we would expect some elements of $\hat{U}$ to be much larger than those of A. This amplification in $\hat{u}_{i,j}$ with respect to $a_{i,j}$ is known as the growth factor,

$$\rho = \frac{\max_{i,j} |\hat{u}_{i,j}|}{\max_{i,j} |a_{i,j}|} \tag{8.3-104}$$

Since $\|\hat{U}\|_\infty \le N\rho\|A\|_\infty$, Eq. (8.3-97) yields the bound on the relative error,

$$\frac{\|E\|_\infty}{\|A\|_\infty} \le N^3 \rho\,\varepsilon'_{mach} \tag{8.3-105}$$

In practice, the factor $N^3$ overestimates the relative error and can be replaced by a constant, $C_N$, that slowly grows with N. If $\rho = O(1)$, then the Gaussian elimination process is stable. On the other hand, there are cases where large growth factors are possible that indicate potential numerical instabilities. For example, consider the following $N \times N$ matrix, A, which is presented in Trefethen and Bau (1997):

$$A = \begin{bmatrix}
1 & & & & 1 \\
-1 & 1 & & & 1 \\
\vdots & \vdots & \ddots & & \vdots \\
-1 & -1 & \cdots & 1 & 1 \\
-1 & -1 & \cdots & -1 & 1
\end{bmatrix} \tag{8.3-106}$$


It turns out that the LU decomposition with partial pivoting is the same as direct Gaussian elimination and leads to the factors

$$L = \begin{bmatrix}
1 & 0 & \cdots & 0 & 0 \\
-1 & 1 & \cdots & 0 & 0 \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
-1 & -1 & \cdots & 1 & 0 \\
-1 & -1 & \cdots & -1 & 1
\end{bmatrix}
\quad \text{and} \quad
U = \begin{bmatrix}
1 & 0 & \cdots & 0 & 1 \\
0 & 1 & \cdots & 0 & 2 \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
0 & 0 & \cdots & 1 & 2^{N-2} \\
0 & 0 & \cdots & 0 & 2^{N-1}
\end{bmatrix} \tag{8.3-107}$$

Clearly, $\rho = 2^{N-1}$, which implies that for "moderate size" systems with a few hundred coordinates, Gaussian elimination is likely unstable. In particular, Eq. (8.3-91) indicates that the element, $e_{N,N}$, of E has the bound

$$|e_{N,N}| \le N\varepsilon'_{mach}\left[ |\hat{L}|\cdot|\hat{U}| \right]_{N,N} \approx N 2^N \varepsilon'_{mach} \tag{8.3-108}$$

The factor $2^N$ implies that we should expect to lose about N bits of precision. Clearly, this problem is not backward stable. Admittedly, this example of catastrophic loss of accuracy is pathological, and problems like these are rarely encountered in practice. In particular, for structural dynamic problems, Gaussian elimination with partial pivoting is a numerically stable method for solving systems of linear equations.

We saw earlier that Eq. (8.3-99) provides a way to determine the number of correct digits in the solution by using an estimate of the condition number. Generally, this approach is applicable for practical problems that are solved using Gaussian elimination with partial pivoting and when $\|\hat{U}\| \approx \|A\|$. Another method for estimating an error bound is based on the residual, r, which is defined as

$$r = A\hat{x} - b \tag{8.3-109}$$
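The growth factor $\rho = 2^{N-1}$ for the matrix of Eq. (8.3-106) can be confirmed with a short sketch (pure Python; direct elimination suffices, since partial pivoting performs no swaps for this matrix):

```python
# Check of the growth factor for the matrix in Eq. (8.3-106): elimination
# doubles the last column at every step, so max|u_ij| = 2^(N-1).

def growth_factor(N):
    # build the matrix: 1 on the diagonal and in the last column,
    # -1 strictly below the diagonal, 0 elsewhere
    A = [[1.0 if (i == j or j == N - 1) else (-1.0 if j < i else 0.0)
          for j in range(N)] for i in range(N)]
    max_a = max(abs(e) for row in A for e in row)
    for j in range(N - 1):                 # Gaussian elimination; the pivot
        for i in range(j + 1, N):          # is already maximal in each
            m = A[i][j] / A[j][j]          # column, so pivoting swaps nothing
            for k in range(j, N):
                A[i][k] -= m * A[j][k]
    max_u = max(abs(e) for row in A for e in row)
    return max_u / max_a

assert growth_factor(10) == 2.0 ** 9       # rho = 2^(N-1)
```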

For problems, such as interpolation, where we are more interested in finding a solution so that $A\hat{x} \approx b$, $\|r\|$ offers a direct and practical way to calculate this residual error. If an estimate of the norm of $A^{-1}$ is available, then the residual also provides an approximate bound of the relative forward error with respect to $\hat{x}$. Substituting $Ax = b$ into Eq. (8.3-109) and dividing by $\|\hat{x}\|$ produces

$$\frac{\|\hat{x} - x\|}{\|\hat{x}\|} = \frac{\|A^{-1} r\|}{\|\hat{x}\|} \le \frac{\left\|\, |A^{-1}|\cdot|r| \,\right\|}{\|\hat{x}\|} \le \|A^{-1}\|\cdot\frac{\|r\|}{\|\hat{x}\|} \tag{8.3-110}$$
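A minimal numerical sketch of the residual-based bound follows (the 2×2 system and the artificially perturbed "computed" solution are illustrative values, not from the text):

```python
# Sketch of Eq. (8.3-110): the relative forward error is bounded by
# ||A^-1|| times the residual norm, relative to ||x_hat|| (infinity norm).

A = [[4.0, 1.0], [1.0, 3.0]]
x = [1.0, 2.0]
b = [sum(A[i][j] * x[j] for j in range(2)) for i in range(2)]   # b = Ax
x_hat = [1.0 + 1e-6, 2.0 - 2e-6]       # an approximate "computed" solution

# residual r = A x_hat - b
r = [sum(A[i][j] * x_hat[j] for j in range(2)) - b[i] for i in range(2)]

# explicit inverse of the 2x2 matrix for the ||A^-1|| estimate
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
Ainv = [[A[1][1] / det, -A[0][1] / det], [-A[1][0] / det, A[0][0] / det]]
norm_m = lambda M: max(abs(row[0]) + abs(row[1]) for row in M)  # inf norm
norm_v = lambda v: max(abs(e) for e in v)

lhs = norm_v([x_hat[i] - x[i] for i in range(2)]) / norm_v(x_hat)
rhs = norm_m(Ainv) * norm_v(r) / norm_v(x_hat)
assert lhs <= rhs + 1e-15              # Eq. (8.3-110)
```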

As with Eq. (8.3-99), the above estimate indicates the number of significant digits where x and $\hat{x}$ are equal. In addition, Eq. (8.3-110) holds regardless of the method used to calculate $\hat{x}$. Also, the LAPACK routine, SGESVX, provides a "tighter" estimate of the relative forward error based on component-wise estimates of the residual that involve the product $|A^{-1}|\cdot|r|$ in Eq. (8.3-110). The residual also provides a useful indicator of backward stability. We have the following result, which can be found in Björck (1996).

Theorem 8.3-7 An algorithm for solving $Ax = b$ is backward stable if and only if the residual, $r = A\hat{x} - b$, satisfies the estimate, $\|r\| \le C\|A\|\|\hat{x}\|\varepsilon_{mach}$.

To establish sufficiency, we use the following result by Wilkinson (1965), who showed that the residual can be used to calculate a backward error of A that is associated with $\hat{x}$. In particular, let

$$E = -\frac{r\,\hat{x}^T}{\|\hat{x}\|_2^2} \tag{8.3-111}$$

Then the algorithm is backward stable since

$$(A + E)\hat{x} = b + f, \qquad \|E\|_2 = \frac{\|r\|_2}{\|\hat{x}\|_2} \le C\|A\|_2\varepsilon_{mach}, \qquad f = 0 \tag{8.3-112}$$
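Wilkinson's rank-one construction can be verified directly: for any approximate solution, the perturbed system with $E$ from Eq. (8.3-111) is satisfied exactly, up to roundoff. A small sketch (the system and the approximate solution are illustrative values):

```python
# Check of Eq. (8.3-111): E = -r x_hat^T / ||x_hat||_2^2 gives
# (A + E) x_hat = b exactly, since E x_hat = -r.

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x_hat = [0.09, 0.63]                        # some approximate solution

# residual and squared 2-norm of x_hat
r = [sum(A[i][j] * x_hat[j] for j in range(2)) - b[i] for i in range(2)]
nx2 = sum(e * e for e in x_hat)

# rank-one backward error E = -r x_hat^T / ||x_hat||_2^2
E = [[-r[i] * x_hat[j] / nx2 for j in range(2)] for i in range(2)]

# (A + E) x_hat reproduces b to machine precision
for i in range(2):
    lhs = sum((A[i][j] + E[i][j]) * x_hat[j] for j in range(2))
    assert abs(lhs - b[i]) < 1e-12
```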

To prove necessity, suppose that the algorithm is backward stable. That is, there exist E and f such that $(A + E)\hat{x} = b + f$, $\|E\| \le C_1\|A\|\varepsilon_{mach}$, and $\|f\| \le C_2\|b\|\varepsilon_{mach}$. Hence,

$$\begin{aligned}
\|r\| = \|A\hat{x} - b\| &= \|-E\hat{x} + (A + E)\hat{x} - b\| \le \|E\hat{x}\| + \|(b + f) - b\| \\
&\le \|E\|\|\hat{x}\| + \|f\| \le C_1\varepsilon_{mach}\|A\|\|\hat{x}\| + C_2\varepsilon_{mach}\|b\|
\end{aligned} \tag{8.3-113}$$

We can bound $\|b\|$ in terms of $\|A\|$ and $\|\hat{x}\|$ by using the following chain of inequalities,

$$(1 - C_2\varepsilon_{mach})\|b\| \le \|b\| - \|f\| \le \|b + f\| = \|(A + E)\hat{x}\| \le (1 + C_1\varepsilon_{mach})\|A\|\|\hat{x}\|$$
$$\Rightarrow\quad \|b\| \le \frac{1 + C_1\varepsilon_{mach}}{1 - C_2\varepsilon_{mach}}\,\|A\|\|\hat{x}\| \tag{8.3-114}$$

Substituting the above into Eq. (8.3-113) leads to a bound of the form in Theorem 8.3-7,

$$\|r\| \le \left( C_1 + C_2\,\frac{1 + C_1\varepsilon_{mach}}{1 - C_2\varepsilon_{mach}} \right)\|A\|\|\hat{x}\|\varepsilon_{mach} \tag{8.3-115}$$

It is interesting to note that any perturbation, $E'$, other than E defined in Eq. (8.3-111), will have a norm larger than $\|r\|_2 / \|\hat{x}\|_2$. Hence, we see that the 2-norm of the residual, relative to $\|\hat{x}\|_2$, quantifies the smallest perturbation in A for which $\hat{x}$ is a solution of the perturbed system. From the definition of r in Eq. (8.3-109), it is also natural to consider the following normalized residual error,

$$\eta_{A,b}(\hat{x}) = \frac{\|r\|}{\|A\|\cdot\|\hat{x}\| + \|b\|} \tag{8.3-116}$$

Rigal and Gaches (Higham, 2002) extended Wilkinson's result to bound the backward errors in both A and b using $\eta_{A,b}(\hat{x})$. Consider perturbations $\Delta A$ and $\Delta b$ so that $\hat{x}$ exactly solves the perturbed system,

$$(A + \Delta A)\hat{x} = b + \Delta b \tag{8.3-117}$$

Clearly, there are infinitely many perturbations that satisfy Eq. (8.3-117). However, it can be shown that $\eta_{A,b}(\hat{x})$ quantifies the smallest perturbations in both A and b that are possible, i.e.,

$$\eta_{A,b}(\hat{x}) = \min\left\{ \varepsilon > 0:\ (A + \Delta A)\hat{x} = b + \Delta b,\ \|\Delta A\| \le \varepsilon\|A\|\ \text{and}\ \|\Delta b\| \le \varepsilon\|b\| \right\} \tag{8.3-118}$$

Both $\|r\|_2 / \|\hat{x}\|_2$ and $\eta_{A,b}(\hat{x})$ are normalized residual norms that provide practical lower bounds on the backward errors associated with the computed solution. If any of these bounds are unacceptably large, then


alternate methods should be considered, or the problem itself may require a different mathematical formulation.

8.3.3 Factorization for symmetric positive-definite matrices

The linearized equations of motion in structural dynamics involve mass, damping, and stiffness matrices that are symmetric. Additionally, these matrices define quadratic forms that correspond to various forms of energy and, therefore, are non-negative. A matrix, A, is said to be symmetric positive-definite if it is symmetric and defines a positive quadratic form, i.e., for any nonzero vector x,

$$x^T A x > 0 \tag{8.3-119}$$

For example, a mass matrix, M, is generally a symmetric positive-definite matrix and defines the kinetic energy as

$$T = \frac{1}{2}\dot{x}^T M \dot{x} \tag{8.3-120}$$

Also, the potential energy of a system, where K is the symmetric stiffness matrix, is computed as

$$U = \frac{1}{2} x^T K x \tag{8.3-121}$$
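As a small illustration of the positive quadratic forms above (the 2-DOF mass and stiffness matrices below are hypothetical values chosen for the sketch, not from the text):

```python
# Both energy quadratic forms, Eqs. (8.3-120) and (8.3-121), evaluate
# positive for any nonzero vector when M and K are symmetric
# positive-definite. Illustrative 2-DOF matrices:

M = [[2.0, 0.0], [0.0, 1.0]]            # mass matrix (diagonal, positive)
K = [[3.0, -1.0], [-1.0, 2.0]]          # stiffness matrix (SPD: det = 5 > 0)

def quad(Q, v):
    # quadratic form (1/2) v^T Q v
    return 0.5 * sum(v[i] * Q[i][j] * v[j] for i in range(2) for j in range(2))

for v in ([1.0, 0.0], [0.0, -1.0], [0.7, -0.3]):
    assert quad(M, v) > 0.0             # kinetic energy with velocity v
    assert quad(K, v) > 0.0             # potential energy with displacement v
```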

8.3.3.1 Cholesky factorization

We discussed earlier that not all nonsingular matrices have an LU factorization. However, it can be shown that all symmetric positive-definite matrices do. Theorem 8.3-8 lists the properties of symmetric positive-definite matrices; their proofs can be found in Demmel (1997).

Theorem 8.3-8 Let $A = [a_{i,j}]$ be an $N \times N$ symmetric positive-definite matrix. Then the following properties hold:
1. A is nonsingular
2. Any principal submatrix of A is symmetric positive-definite
3. A is symmetric positive-definite if and only if A is symmetric and all eigenvalues of A are positive
4. For all i, $a_{i,i} > 0$, and the largest element in magnitude, $\max_{i,j}|a_{i,j}|$, lies on the diagonal
5. If T is an $N \times M$ matrix of full rank, then $H = T^T A T$ is also a symmetric positive-definite matrix
6. There exists a unique nonsingular lower triangular matrix, $L_C = [l_{i,j}]$, called the Cholesky factor, such that $A = L_C L_C^T$ and $l_{i,i} > 0$

Observe that by virtue of Theorem 8.3-8, Properties 1 and 2, any symmetric positive-definite matrix will possess an LU factorization. In fact, the Cholesky factorization can be obtained from the LU factorization. For example, it can be shown that the matrix, A, defined below, is symmetric positive-definite and, therefore, is LU factorable. Using the Direct LU factorization algorithm we obtain

$$A = \begin{bmatrix} 16 & 20 & 24 \\ 20 & 89 & 110 \\ 24 & 110 & 280 \end{bmatrix}
= \begin{bmatrix} 1 & 0 & 0 \\ 5/4 & 1 & 0 \\ 3/2 & 5/4 & 1 \end{bmatrix}
\begin{bmatrix} 16 & 20 & 24 \\ 0 & 64 & 80 \\ 0 & 0 & 144 \end{bmatrix} \tag{8.3-122}$$

If we factor out the diagonal of the upper triangular factor, we arrive at what is known as the LDU factorization, where L and U are unit lower and upper triangular matrices, respectively. Note that $U = L^T$, which is a consequence of the symmetry of A and the uniqueness of the LDU factors. Moreover, the positive-definiteness implies that the elements of D are positive. Expressing the diagonal as a product of its square roots, and associating each with L and U, we obtain the Cholesky factors of A:

$$\begin{aligned}
A &= \underbrace{\begin{bmatrix} 1 & 0 & 0 \\ 5/4 & 1 & 0 \\ 3/2 & 5/4 & 1 \end{bmatrix}}_{L}
\underbrace{\begin{bmatrix} 16 & 0 & 0 \\ 0 & 64 & 0 \\ 0 & 0 & 144 \end{bmatrix}}_{D}
\underbrace{\begin{bmatrix} 1 & 5/4 & 3/2 \\ 0 & 1 & 5/4 \\ 0 & 0 & 1 \end{bmatrix}}_{U}
= \left( L D^{1/2} \right)\left( L D^{1/2} \right)^T \\
&= \underbrace{\begin{bmatrix} 4 & 0 & 0 \\ 5 & 8 & 0 \\ 6 & 10 & 12 \end{bmatrix}}_{L_C}
\underbrace{\begin{bmatrix} 4 & 0 & 0 \\ 5 & 8 & 0 \\ 6 & 10 & 12 \end{bmatrix}^T}_{L_C^T}
\end{aligned} \tag{8.3-123}$$
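A quick numerical check of this construction (the factor values follow the example above, as reconstructed here):

```python
# Check of Eq. (8.3-123): the Cholesky factor equals the unit lower
# triangular factor L scaled column-wise by the square roots of D.

L = [[1.0, 0.0, 0.0], [1.25, 1.0, 0.0], [1.5, 1.25, 1.0]]   # 5/4 = 1.25, etc.
D = [16.0, 64.0, 144.0]

Lc = [[L[i][j] * D[j] ** 0.5 for j in range(3)] for i in range(3)]
assert Lc == [[4.0, 0.0, 0.0], [5.0, 8.0, 0.0], [6.0, 10.0, 12.0]]

# and Lc Lc^T reproduces the symmetric positive-definite matrix A
A = [[sum(Lc[i][k] * Lc[j][k] for k in range(3)) for j in range(3)]
     for i in range(3)]
assert A == [[16.0, 20.0, 24.0], [20.0, 89.0, 110.0], [24.0, 110.0, 280.0]]
```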


Hence, given a symmetric positive-definite matrix, its Cholesky factor can be calculated by the Direct LU decomposition. Informally, we showed by example and Theorem 8.3-8 that a symmetric positive-definite matrix can be expressed as a product of its Cholesky factors. We now prove by induction that a symmetric positive-definite matrix possesses a Cholesky factorization. The statement is clearly true for a $1 \times 1$ symmetric positive-definite matrix. Suppose now that any $(N-1) \times (N-1)$ symmetric positive-definite matrix has a Cholesky factorization. Let $A = [a_{i,j}]$ be an $N \times N$ symmetric positive-definite matrix. We can express A as the following product that involves the lower $(N-1) \times (N-1)$ principal submatrix:

$$A = \begin{bmatrix} l_1 & 0 \\ \mathbf{l}_1 & I \end{bmatrix}
\begin{bmatrix} 1 & 0 \\ 0 & A^{(2)} \end{bmatrix}
\begin{bmatrix} l_1 & \mathbf{l}_1^T \\ 0 & I \end{bmatrix} \tag{8.3-124}$$

where $l_1 = \sqrt{a_{1,1}}$, $\mathbf{l}_1 = a_{2:N,1}/\sqrt{a_{1,1}}$, and $A^{(2)} = a_{2:N,2:N} - \mathbf{l}_1\mathbf{l}_1^T$. Since A is a symmetric positive-definite matrix, $a_{1,1} > 0$ and the lower triangular matrix factor is nonsingular. Hence, we can solve for the block diagonal matrix in the triple product in Eq. (8.3-124), i.e.,

$$\begin{bmatrix} 1 & 0 \\ 0 & A^{(2)} \end{bmatrix} = \begin{bmatrix} l_1 & 0 \\ \mathbf{l}_1 & I \end{bmatrix}^{-1} A \begin{bmatrix} l_1 & \mathbf{l}_1^T \\ 0 & I \end{bmatrix}^{-1} \tag{8.3-125}$$

By the fifth property in Theorem 8.3-8, we conclude that $A^{(2)}$ is also an $(N-1) \times (N-1)$ symmetric positive-definite matrix and, therefore, by the inductive hypothesis, has a Cholesky factorization,

$$A^{(2)} = L_2 L_2^T \tag{8.3-126}$$

Substituting the above into Eq. (8.3-124), we obtain the Cholesky factorization for A,

$$A = \begin{bmatrix} l_1 & 0 \\ \mathbf{l}_1 & L_2 \end{bmatrix}
\begin{bmatrix} l_1 & \mathbf{l}_1^T \\ 0 & L_2^T \end{bmatrix} = L_C L_C^T \tag{8.3-127}$$

Therefore, by induction, any symmetric positive-definite matrix possesses a Cholesky factorization. The inductive argument can be used to derive the


Cholesky factorization algorithm. However, it is simpler to solve for the factors directly. First note that Eqs. (8.3-124) and (8.3-127) show us how to obtain the first column of the Cholesky factor, $L_C$. Now consider the product, $L_C L_C^T = A$, and suppose we know columns 1 to $j-1$ of $L_C$, which also equal rows 1 to $j-1$ of $L_C^T$:

$$L_C L_C^T = \begin{bmatrix} l_{1,1} & & \\ \vdots & \ddots & \\ l_{N,1} & \cdots & l_{N,N} \end{bmatrix}
\begin{bmatrix} l_{1,1} & \cdots & l_{N,1} \\ & \ddots & \vdots \\ & & l_{N,N} \end{bmatrix} = A \tag{8.3-128}$$

For $j \le k \le N$, the kth row of the matrix product that corresponds to the jth column of A satisfies

$$l_{k,1} l_{j,1} + \cdots + l_{k,j-1} l_{j,j-1} + l_{k,j} l_{j,j} = a_{k,j} \tag{8.3-129}$$

Since we know the first $j-1$ terms and $a_{k,j}$, we can solve for the term $l_{k,j} l_{j,j}$, i.e.,

$$l_{k,j} l_{j,j} = a_{k,j} - \left( l_{k,1} l_{j,1} + \cdots + l_{k,j-1} l_{j,j-1} \right) \tag{8.3-130}$$

Storing and overwriting the result in the (k, j) location of the lower triangular part of A leads to the following Cholesky factorization algorithm, which can be found in Golub and Van Loan (2013).

Cholesky factorization

Let $A = [a_{i,j}]$ be an $N \times N$ symmetric positive-definite matrix. Then the following algorithm computes the lower triangular Cholesky factor, $L_C = [l_{i,j}]$, and overwrites the lower triangular part of A:


    for j = 1, ..., N
        for k = j, ..., N
            a(k,j) = a(k,j) - a(k,1:j-1).a(j,1:j-1)     dot product of a(k,1:j-1) and a(j,1:j-1)
        a(j:N,j) = a(j:N,j)/sqrt(a(j,j))

8.3.3.2 Error analysis

Because the Cholesky factorization is similar to the LU factorization, most of the error bounds from Section 8.3.2 hold. For example, let A be a symmetric positive-definite matrix; if the Cholesky factorization runs to completion (i.e., A is not nearly singular), then the following result, which is analogous to Eq. (8.3-91), holds:

$$\hat{L}_C \hat{L}_C^T = A + E, \qquad |E| \le N\varepsilon'_{mach}\,|\hat{L}_C|\cdot|\hat{L}_C^T| \tag{8.3-131}$$
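A minimal Python rendering of the Cholesky factorization algorithm above, checked on a small symmetric positive-definite matrix; the residual of the product of the computed factors stays at roundoff level, consistent with Eq. (8.3-131):

```python
# Sketch of the Cholesky factorization: factors overwrite a copy of A,
# column by column, following the algorithm above.

import math

def cholesky(A):
    N = len(A)
    A = [row[:] for row in A]                  # work on a copy of A
    for j in range(N):
        for k in range(j, N):                  # subtract the dot product of
            A[k][j] -= sum(A[k][m] * A[j][m]   # rows k and j over columns
                           for m in range(j))  # 1, ..., j-1
        d = math.sqrt(A[j][j])
        for k in range(j, N):                  # scale column j by sqrt(a_jj)
            A[k][j] /= d
    # return only the lower triangle (the upper part is unused)
    return [[A[i][j] if j <= i else 0.0 for j in range(N)] for i in range(N)]

A = [[16.0, 20.0, 24.0], [20.0, 89.0, 110.0], [24.0, 110.0, 280.0]]
Lc = cholesky(A)

# product of the computed factors reproduces A to within roundoff
for i in range(3):
    for j in range(3):
        prod = sum(Lc[i][k] * Lc[j][k] for k in range(3))
        assert abs(prod - A[i][j]) < 1e-9
```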

Observe that, as before, we can bound the error of the product of the factors, but not the error in the Cholesky factor itself. Similar arguments, which lead to Eq. (8.3-96) and use the Frobenius norm, yield

$$\frac{\|E\|_F}{\|A\|_F} \le N\varepsilon'_{mach}\,\frac{\|\hat{L}_C\|_F\,\|\hat{L}_C^T\|_F}{\|A\|_F} = N\varepsilon'_{mach}\,\frac{\|\hat{L}_C\|_F^2}{\|A\|_F} \tag{8.3-132}$$

The key difference between Eqs. (8.3-96) and (8.3-132) is that the symmetry in the Cholesky factorization leads to factors having the same norm. Recall that Gaussian elimination transforms a matrix, A, to a product of its unit lower triangular factor, $\hat{L}$, and its upper triangular factor, $\hat{U}$. Under


for even modest size systems. This problem does not occur for the Cholesky factorization because of symmetry. Essentially, symmetry causes the amplib C to be distributed equally to its transpose. Furthermore, fication in L positive-definiteness implies that the diagonal elements tend to be larger than the off-diagonal elements. In fact, shown that if A is a sym it canpbe ffiffiffiffiffiffiffiffiffiffiffiffi metric positive-definite matrix, then ai;j  ai;i aj;j . Positive-definiteness ensures that the pivot elements, b l i;i , are not too small. For example, it is known (Higham, 2002) that for any positive-definite, diagonally dominant matrix the growth factor is bounded by two.  2 b C  . By Eqs. (8.3-129) and Next, we derive an upper bound for  L F b C with itself leads to (8.3-131), the dot product of the ith row of L 2 2 2 b l i;2 þ / þ b l i;i ¼ ai;i þ ei;i  ai;i þ ei;i (8.3-133) l i;1 þ b Applying the bound in Eq. (8.3-131) to ei;i produces    0 ei;i  Nε0 mach ai;i þ ei;i  Nεmach ðai;i þ ei;i Solving for ei;i leads to 0 ei;i  Nεmach ai;i 1  Nε0mach

(8.3-134)

(8.3-135)

Substituting into Eq. (8.3-133) yields the upper bound

$$\hat{l}_{i,1}^2 + \hat{l}_{i,2}^2 + \cdots + \hat{l}_{i,i}^2 \le \frac{1}{1 - N\varepsilon'_{mach}}\, a_{i,i} \qquad (8.3\text{-}136)$$

Summing the above over $i = 1, \ldots, N$ gives

$$\left\|\hat{L}_C\right\|_F^2 = \sum_{i=1}^{N}\left(\hat{l}_{i,1}^2 + \cdots + \hat{l}_{i,i}^2\right) \le \frac{1}{1 - N\varepsilon'_{mach}} \sum_{i=1}^{N} a_{i,i} \qquad (8.3\text{-}137)$$

Before bounding the rightmost expression in Eq. (8.3-137), recall the following inequality for the sum of non-negative quantities, $x_i$:

$$\sum_{i=1}^{N} x_i \le \sqrt{N}\left(\sum_{i=1}^{N} x_i^2\right)^{1/2}, \qquad x_i \ge 0 \qquad (8.3\text{-}138)$$


CHAPTER 8 Numerical methods

Applying the above inequality to Eq. (8.3-137) produces

$$\frac{1}{1 - N\varepsilon'_{mach}} \sum_{i=1}^{N} a_{i,i} \le \frac{\sqrt{N}}{1 - N\varepsilon'_{mach}}\left(\sum_{i=1}^{N} a_{i,i}^2\right)^{1/2} \qquad (8.3\text{-}139)$$

The definition of the Frobenius norm implies

$$\left(\sum_{i=1}^{N} a_{i,i}^2\right)^{1/2} \le \left(\sum_{i=1}^{N}\sum_{j=1}^{N} a_{i,j}^2\right)^{1/2} = \|A\|_F \qquad (8.3\text{-}140)$$

Finally, the inequalities in Eqs. (8.3-137), (8.3-139), and (8.3-140) lead to an upper bound on the norm-squared of the Cholesky factor,

$$\left\|\hat{L}_C\right\|_F^2 \le \frac{\sqrt{N}}{1 - N\varepsilon'_{mach}}\, \|A\|_F \qquad (8.3\text{-}141)$$

Substituting the above into Eq. (8.3-132) yields

$$\frac{\|E\|_F}{\|A\|_F} \le \frac{\left\|\hat{L}_C\right\|_F^2}{\|A\|_F}\, N\varepsilon'_{mach} \le \frac{N^{3/2}\varepsilon'_{mach}}{1 - N\varepsilon'_{mach}} \approx O\!\left(N^{3/2}\varepsilon_{mach}\right) \qquad (8.3\text{-}142)$$

Again, we note that in practice, the $N^{3/2}$ factor can be replaced by a constant, $C_N$, that has moderate growth with $N$. For example, consider matrices produced by finite element methods whose coordinates are reordered to produce banded matrices with minimum bandwidth. The bandwidth, $B$, can be orders of magnitude less than $N$. Since the Cholesky factorization produces factors of the same bandwidth, the relative error in Eq. (8.3-142) is bounded by $O\!\left(B^{3/2}\varepsilon_{mach}\right) \approx O(\varepsilon_{mach})$. Similar to Eq. (8.3-99), we find that under these conditions, the computed solution, $\hat{x}$, of $Ax = b$, by forward/backward substitution with the Cholesky factors, has a relative error bound

$$\frac{\|\hat{x} - x\|_F}{\|x\|_F} \le \kappa_F(A)\, \frac{\|E\|_F}{\|A\|_F} \approx \kappa_F(A)\, O(\varepsilon_{mach}) \qquad (8.3\text{-}143)$$

This shows that solving $Ax = b$ by Cholesky factorization is backward stable. Additionally, we note that results similar to Eqs. (8.3-142) and (8.3-143) exist with the 2-norm replacing the Frobenius norm.

As a final remark, we note that for structural dynamics applications, the Cholesky factorization is most often used to factor the symmetric positive-definite mass matrix and transform the generalized eigenvalue problem for the undamped equations of motion,

$$K\phi = \lambda M\phi \qquad (8.3\text{-}144)$$

to the standard symmetric eigenvalue problem,

$$A\psi = \lambda\psi \qquad (8.3\text{-}145)$$

where $\hat{L}_C$ is the computed Cholesky factor of $M$ and

$$\psi = \hat{L}_C^T \phi \qquad \text{and} \qquad A = \hat{L}_C^{-1} K\, \hat{L}_C^{-T} \qquad (8.3\text{-}146)$$
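The transformation of Eqs. (8.3-144) through (8.3-146) can be sketched in a few lines of NumPy. The 3-DOF mass and stiffness matrices below are hypothetical values chosen only for illustration, and the construction of $A$ uses solves rather than explicit inverses:

```python
import numpy as np

# Hypothetical 3-DOF mass and stiffness matrices (illustrative values only).
M = np.array([[2.0, 0.1, 0.0],
              [0.1, 1.5, 0.2],
              [0.0, 0.2, 1.0]])
K = np.array([[ 400.0, -200.0,    0.0],
              [-200.0,  500.0, -300.0],
              [   0.0, -300.0,  300.0]])

M = 0.5 * (M + M.T)             # symmetrize against round-off asymmetry
L = np.linalg.cholesky(M)       # M = L L^T; raises if M is not positive-definite

# A = L^{-1} K L^{-T}, formed by two solves (K symmetric), Eq. (8.3-146).
A = np.linalg.solve(L, np.linalg.solve(L, K).T).T
A = 0.5 * (A + A.T)             # enforce symmetry of the computed result

lam, Psi = np.linalg.eigh(A)    # standard symmetric eigenproblem, Eq. (8.3-145)
Phi = np.linalg.solve(L.T, Psi) # recover the physical modes: phi = L^{-T} psi
```

The recovered pairs $(\lambda, \phi)$ satisfy the original generalized problem $K\phi = \lambda M\phi$ to machine precision.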

Generally, the Cholesky factorization will run to completion. From Eq. (8.3-131), we know that $\hat{L}_C$ is the Cholesky factor of the perturbed mass matrix, $\hat{M} = M + \Delta M$. If the mass matrix is nearly singular and $\Delta M$ is large enough, then $\hat{M}$ might not be positive-definite. In these cases, the factorization process will encounter a negative pivot term and abort as it attempts to calculate its square root. Modeling errors aside, occurrences of this type indicate that the mass matrix is nearly singular. If a negative pivot term, $\hat{m}_{j,j}$, is encountered, then the component-wise estimate, Eq. (8.3-131), indicates that numerical round-off errors have accumulated so that

$$\hat{l}_{j,1}^2 + \hat{l}_{j,2}^2 + \cdots + \hat{l}_{j,j}^2 = \hat{m}_{j,j} = m_{j,j} + e_{j,j} < 0 \qquad (8.3\text{-}147)$$

Wilkinson (1965) showed that if $M$ is symmetric positive-definite, then the Cholesky factorization runs to completion if

$$20\, N^{3/2}\varepsilon_{mach} \le \frac{\lambda_{min}}{\lambda_{max}} = \kappa_2(M)^{-1} \qquad (8.3\text{-}148)$$

where $\lambda_{min}$ and $\lambda_{max}$ are, respectively, the minimum and maximum eigenvalues of $M$. Geometrically, the inverse of the condition number, $\kappa_2(M)^{-1}$, measures the minimum relative distance from $M$ to the nearest singular matrix. Therefore, the inequality in Eq. (8.3-148) provides a lower bound of this distance that is sufficient (but not necessary) for completing the Cholesky factorization.

We end this section with some practical considerations. The above discussion assumed that no modeling errors were committed during the development of the mass matrix. In addition to model checks (see Volume II), there are simple tests that can be performed to check if $M$ is a symmetric


positive-definite matrix. Symmetry checks should always be performed, since numerical round-off errors, say in Guyan (1965) reduction, can produce matrices that are no longer symmetric. Although the Cholesky factorization and symmetric eigensolver procedures will generally use only the lower (or upper) triangular part of the matrices involved, we suggest that the matrices be symmetrized prior to performing these computations. One simple approach is to replace $M$ with $\frac{1}{2}\left(M + M^T\right)$. We will discuss this further in the eigenvalue problem section, Section 8.5.

To check that the mass matrix is sufficiently positive-definite, one would have to calculate its smallest eigenvalue. This could be computationally expensive if the dimension of $M$ is large. A definitive, yet less expensive, approach would be to perform the Cholesky factorization. If the process runs to completion, then, within numerical precision, the matrix is positive-definite. A simpler check, that has worked with some success, is to calculate the normalized absolute-value matrix, $H = D^{-1/2}|M|D^{-1/2}$, where $D$ equals the diagonal part of $M$. If $M$ is positive-definite, then $|m_{i,j}| \le \sqrt{m_{i,i}\, m_{j,j}}$. This implies that all off-diagonal elements of $H$ must be less than or equal to unity. If any element violates this condition, then $M$ is not positive-definite. This approach also helps to identify elements of the matrix that may be in error and can aid the analyst in model correction. Keep in mind that this condition is only necessary, and that there are indefinite matrices that pass this test.

Lastly, we comment on the use of the matrix inverse. Generally, numerical analysts do not recommend that the matrix inverse be computed. Higham (2002) wrote an entire chapter devoted to matrix inversion; he begins the chapter with:

"To most numerical analysts, matrix inversion is a sin. Forsythe, Malcolm and Moler put it well when they say ... 'In the vast majority of practical computational problems, it is unnecessary and inadvisable to actually compute $A^{-1}$.'"

There are two major reasons that support this view. First, calculating $A^{-1}$ and then multiplying by $b$ requires about three times more floating-point operations than computing the solution by Gauss elimination with partial pivoting. Second, the use of the matrix inverse is less stable and tends to produce larger residual errors. Here, some judgment, keeping in mind the shortcomings of the matrix inversion approach, should be exercised. If the problem at hand is to solve $Ax = b$ with well-conditioned matrices, then from a practical standpoint, both approaches may be used. On the other hand, if many systems of equations need to be solved, say in a Monte-Carlo simulation or Newmark's integration, then the Gauss-elimination-with-partial-pivoting method is more efficient. Model reduction methods in structural dynamics are typically formulated by partitioning the stiffness and mass matrices and applying the inverse of one or more principal submatrices. Although the reduced model can be computed without matrix inverses, it is probably more convenient to use inverses directly as specified in the formulation. Again, if the matrices are well conditioned, then such an approach will most likely be adequate. However, the necessary model checks to ensure the validity of the reduced model should still be performed.

8.3.4 Iterative methods

The LU and Cholesky factorization methods are known as "direct" methods and are well suited for solving systems of linear equations where the coefficient matrices are dense and can be stored within the computer's internal memory. In general, the mass and stiffness matrices associated with structural dynamic models fall into this category since they characterize the dynamics using a reduced set of coordinates. For example, high-resolution finite element models of large and complex structures could possess tens of millions of coordinates. Model reduction techniques used in structural dynamics can often reduce the size of these finite element models by several orders of magnitude. However, the reduction process, say by Guyan reduction, will require solving a very large system of equations that may not be feasible by direct methods. Large systems of equations can also result from finite difference discretization of boundary-value partial differential equations. For example, the discretization of the Poisson equation over a large plate with a very fine mesh can yield matrices having dimensions that easily exceed tens of millions of rows and columns. Often, the memory requirements that are needed for storing and manipulating these matrices in their entirety can exceed a computer's storage capacity. Fortunately, most of the large systems that arise from finite element modeling and finite difference discretization tend to be sparsely populated, i.e., the number of nonzero elements in the matrix is a small percentage of the total number of elements. In general, these large and sparse systems of equations lend themselves to iterative methods for computing the solution.
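The storage savings for sparse systems can be made concrete with the tridiagonal finite difference matrix that appears later in this section. A minimal sketch (pure NumPy, storing only the three nonzero diagonals; the variable names are ours):

```python
import numpy as np

# Tridiagonal 1-D finite difference matrix: only 3N - 2 of the N^2
# entries are nonzero, so we store just the diagonals.
N = 1_000_000
main = np.full(N, 2.0)          # main diagonal
off = np.full(N - 1, -1.0)      # sub- and super-diagonals (equal here)

stored = main.size + 2 * off.size    # 3N - 2 numbers kept in memory
dense = N * N                        # entries a dense format would need

# Matrix-vector product y = A x using only the stored diagonals.
rng = np.random.default_rng(0)
x = rng.standard_normal(N)
y = main * x
y[:-1] += off * x[1:]   # super-diagonal contribution
y[1:] += off * x[:-1]   # sub-diagonal contribution
```

For $N = 10^6$, the banded representation keeps roughly $3\times 10^6$ numbers instead of the $10^{12}$ a dense store would require, and the matrix-vector product costs $O(N)$ operations.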


In this section we will describe iterative methods for solving systems of linear equations. For this purpose, we will only discuss the "classical" iterative methods, which include the Jacobi, Gauss-Seidel, and Successive Over-Relaxation (SOR) methods. For extensive and in-depth treatments of iterative methods, the reader can consult Varga (1962) or Saad (2003). Demmel's text (Demmel, 1997) provides a concise yet complete chapter on iterative methods that motivates and illustrates the discussion using the discretized Poisson's equation over rectangular domains.

8.3.4.1 Classical iterative methods

We start with a simple example to introduce the classical iterative methods. Consider the following second-order differential equation on the unit interval with zero boundary conditions,

$$-\frac{d^2 x(u)}{du^2} = f(u), \qquad 0 < u < 1, \qquad x(0) = x(1) = 0 \qquad (8.3\text{-}149)$$

Discretizing the interval with grid points $u_n = n\,\Delta u$, $\Delta u = 1/(N+1)$, and approximating the second derivative by central differences yields

$$\frac{-x_{n-1} + 2x_n - x_{n+1}}{\Delta u^2} = f_n, \qquad n = 1, \ldots, N \qquad (8.3\text{-}150)$$

where $x_n \approx x(u_n)$, $f_n = f(u_n)$, and $x_0 = x_{N+1} = 0$. Collecting the $N$ equations produces the system of linear equations

$$\frac{1}{\Delta u^2}\underbrace{\begin{bmatrix} 2 & -1 & 0 & \cdots & 0 & 0\\ -1 & 2 & -1 & & & 0\\ 0 & -1 & 2 & -1 & & \vdots\\ \vdots & & \ddots & \ddots & \ddots & \\ 0 & & & -1 & 2 & -1\\ 0 & 0 & \cdots & 0 & -1 & 2 \end{bmatrix}}_{A_N}\underbrace{\begin{Bmatrix} x_1\\ x_2\\ \vdots\\ \vdots\\ x_{N-1}\\ x_N \end{Bmatrix}}_{x} = \underbrace{\begin{Bmatrix} f_1\\ f_2\\ \vdots\\ \vdots\\ f_{N-1}\\ f_N \end{Bmatrix}}_{f} \qquad (8.3\text{-}151)$$


Observe that the coefficient matrix, $A_N$, is symmetric and that for $\Delta u \ll 1$, the system will be large and sparse. Furthermore, it can be shown that $A_N$ is positive-definite since it has positive eigenvalues,

$$\lambda_k = 2\left(1 - \cos\frac{k\pi}{N+1}\right), \qquad k = 1, \ldots, N$$

The Jacobi iteration method for solving Eq. (8.3-151) starts by solving for $x_n$ in the $n$th row,

$$x_n = \frac{1}{2}\left(x_{n-1} + x_{n+1}\right) + \frac{\Delta u^2}{2}\, f_n \qquad (8.3\text{-}152)$$

Let us suppose that we have an $m$th approximation, $x^{(m)} = \left\{x_1^{(m)} \;\cdots\; x_N^{(m)}\right\}^T$, of $x$. Then Eq. (8.3-152) suggests the recursion for the Jacobi iteration,

$$x_n^{(m+1)} = \frac{1}{2}\left(x_{n-1}^{(m)} + x_{n+1}^{(m)}\right) + \frac{\Delta u^2}{2}\, f_n \qquad (8.3\text{-}153)$$

The criterion for stopping the iteration can be based on determining when the change between consecutive iterates is sufficiently small, or when the residual is below a specified threshold.

Observe that as the Jacobi recursion loops through the coordinates, say from $n = 1, \ldots, N$, the updated term, $x_{n-1}^{(m+1)}$, is available and could be used instead of $x_{n-1}^{(m)}$ in Eq. (8.3-153). This modification of the Jacobi recursion leads to the Gauss-Seidel method,

$$x_n^{(m+1)} = \frac{1}{2}\left(x_{n-1}^{(m+1)} + x_{n+1}^{(m)}\right) + \frac{\Delta u^2}{2}\, f_n \qquad (8.3\text{-}154)$$

In general, the Gauss-Seidel method has a faster convergence rate; however, there are cases where the Gauss-Seidel iteration diverges while the Jacobi iteration converges. We will discuss the convergence properties of these methods later. Heuristically, it seems plausible that the convergence of the Gauss-Seidel method can be improved by averaging consecutive iterates. Specifically, consider the weighted average

$$x_n^{(m+1)} = \omega\, x_n^{(m+1)} + (1 - \omega)\, x_n^{(m)} \qquad (8.3\text{-}155)$$


where $\omega$ is known as the relaxation parameter. Substituting the Gauss-Seidel recursion for $x_n^{(m+1)}$ into the right-hand side of Eq. (8.3-155) leads to one step of the successive overrelaxation method (SOR),

$$x_n^{(m+1)} = \omega\left[\frac{1}{2}\left(x_{n-1}^{(m+1)} + x_{n+1}^{(m)}\right) + \frac{\Delta u^2}{2}\, f_n\right] + (1 - \omega)\, x_n^{(m)} \qquad (8.3\text{-}156)$$

Note that setting $\omega = 1$ reduces to the Gauss-Seidel method. It can be shown that $0 < \omega < 2$ is a necessary condition for convergence. In general, choosing the optimal relaxation parameter can be difficult, and results are known for only certain classes of systems.

As an example, let's consider the boundary value problem in Eq. (8.3-149), with $f(u) = 2$. One can easily verify that the solution of Eq. (8.3-149) is $x(u) = u(1 - u)$. We will discretize the problem using $N = 8$ and perform the Jacobi, Gauss-Seidel, and SOR iterations for the resulting $8 \times 8$ system of linear equations. For the SOR iterations, we will use two relaxation parameters, $\omega = 0.5$ and $\omega = 1.5$. Figs. 8.3-4A through D compare the iterates, $x_n^{(m)}$, for $m = 1, 2, 4, 8, 16, 32,$ and $64$. They indicate that the Gauss-Seidel method converges faster than the Jacobi method, and that SOR with $\omega = 1.5$ was the fastest, with the SOR method with $\omega = 0.5$ having the slowest convergence rate. For each iteration number, $m$, define the error,

$$\left\|x^{(m)} - x\right\|_\infty = \max_{1 \le n \le N}\left|x_n^{(m)} - x_n\right|, \qquad x_n = x(u_n) \qquad (8.3\text{-}157)$$

Fig. 8.3-5A compares the errors among the four methods used to solve the $8 \times 8$ example problem. The plot quantifies the earlier remarks on convergence. It clearly shows that the Gauss-Seidel method converges faster than the Jacobi method. Also, the plot shows that SOR with $\omega = 1.5$ has the fastest convergence rate, while SOR with $\omega = 0.5$ is the slowest to converge. Note that after 56 iterations, the errors from the SOR method with $\omega = 1.5$ have reached an "error floor" equal to $O\!\left(10^{-16}\right)$ due to machine precision limits. The linearity of the errors in Fig. 8.3-5A as a function of iteration number, $m$, on a log-linear plot indicates that the errors decrease geometrically according to the power law,

$$\left\|x^{(m)} - x\right\|_\infty \approx C\, r^m$$

FIGURE 8.3-4 Iterates $x_n^{(m)}$ for $m = 1, 2, 4, 8, 16, 32,$ and $64$: (A) Jacobi iteration; (B) Gauss-Seidel iteration; (C) SOR ($\omega_1 = 0.5$); (D) SOR ($\omega_2 = 1.5$). The exact solution $x(u) = u(1 - u)$ is shown in black.

FIGURE 8.3-5 Error and spectral radius for example problem. (A) Comparison of the error, $\|x^{(m)} - x\|_\infty$, versus the iteration number, $m$; (B) Spectral radius for SOR for $0 < \omega < 2$.
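The three recursions, Eqs. (8.3-153), (8.3-154), and (8.3-156), take only a few lines to implement. The sketch below reproduces the $N = 8$ example with $f(u) = 2$ (the function and variable names are ours, not the book's):

```python
import numpy as np

def solve_iteratively(method, omega=1.0, iters=64, N=8):
    """Jacobi ('jacobi') or Gauss-Seidel/SOR ('sor') sweeps for -x'' = f."""
    du = 1.0 / (N + 1)
    u = du * np.arange(1, N + 1)
    f = 2.0 * np.ones(N)               # f(u) = 2
    exact = u * (1.0 - u)              # exact solution x(u) = u(1 - u)
    x = np.zeros(N + 2)                # padded with the zero boundary values
    for _ in range(iters):
        if method == 'jacobi':         # Eq. (8.3-153): simultaneous update
            x[1:-1] = 0.5 * (x[:-2] + x[2:]) + 0.5 * du**2 * f
        else:                          # Eq. (8.3-156); omega = 1 is Gauss-Seidel
            for n in range(1, N + 1):
                gs = 0.5 * (x[n - 1] + x[n + 1]) + 0.5 * du**2 * f[n - 1]
                x[n] = omega * gs + (1.0 - omega) * x[n]
    return np.max(np.abs(x[1:-1] - exact))   # the error of Eq. (8.3-157)

err_jacobi = solve_iteratively('jacobi')
err_gs = solve_iteratively('sor', omega=1.0)
err_sor15 = solve_iteratively('sor', omega=1.5)
```

After 64 sweeps the errors reproduce the ordering seen in Fig. 8.3-5A: SOR with $\omega = 1.5$ is the most accurate, followed by Gauss-Seidel and then Jacobi.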

Definition An $N \times N$ matrix, $A$, is strictly diagonally dominant if, for all $n$, $|a_{n,n}| > \sum_{j \ne n} |a_{n,j}|$.

An immediate consequence is the following convergence result for the Jacobi method:

Theorem 8.3-11 If $A$ is strictly diagonally dominant, then the Jacobi iteration converges.

From Eq. (8.3-165), we see that the elements of the Jacobi iteration matrix are given by

$$\left[G_J\right]_{n,j} = g^J_{n,j} = \begin{cases} -a_{n,j}/a_{n,n}, & j < n\\ 0, & j = n\\ -a_{n,j}/a_{n,n}, & j > n \end{cases} \qquad (8.3\text{-}177)$$

Hence, by definition of the ∞-norm, if $A$ is strictly diagonally dominant, we get

$$\|G_J\|_\infty = \max_{1 \le n \le N} \sum_j \left|g^J_{n,j}\right| = \max_{1 \le n \le N} \frac{1}{|a_{n,n}|}\sum_{j \ne n}\left|a_{n,j}\right| < 1 \qquad (8.3\text{-}178)$$

Therefore, by Theorem 8.3-9, the Jacobi iteration converges. This result also holds true for the Gauss-Seidel method. Before proving this, it will be convenient to introduce the normalized lower- and upper-triangular matrices of $A$,

$$\tilde{L} = -D^{-1}L \qquad \text{and} \qquad \tilde{U} = -D^{-1}U \qquad (8.3\text{-}179)$$

With these notations, the Jacobi and Gauss-Seidel iteration matrices become

$$G_J = \tilde{L} + \tilde{U} \qquad \text{and} \qquad G_{G\text{-}S} = \left(I - \tilde{L}\right)^{-1}\tilde{U} \qquad (8.3\text{-}180)$$

Also, note that the ∞-norm of a matrix is equal to the largest 1-norm of its rows. So suppose the $k$th row of $G_J$ is maximized in the 1-norm sense; then for the unit vector, $e_k$, with one in the $k$th position and zero elsewhere, Eq. (8.3-178) implies

$$\|G_J\|_\infty = \left\|G_J^T e_k\right\|_1 = \left\|\tilde{L}^T e_k + \tilde{U}^T e_k\right\|_1 = \left\|\tilde{L}^T e_k\right\|_1 + \left\|\tilde{U}^T e_k\right\|_1 < 1 \qquad (8.3\text{-}181)$$

We are now ready to state and prove the next result:

Theorem 8.3-12 If $A$ is strictly diagonally dominant, then the Gauss-Seidel iteration converges.

First note that by Eq. (8.3-180),

$$\left(I - \tilde{L}\right)G_{G\text{-}S} = \tilde{U} \quad \Rightarrow \quad G_{G\text{-}S} = \tilde{U} + \tilde{L}\, G_{G\text{-}S} \qquad (8.3\text{-}182)$$

Suppose the $m$th row of $G_{G\text{-}S}$ has the largest 1-norm. Then by Eq. (8.3-182) and norm inequalities,


$$\|G_{G\text{-}S}\|_\infty = \left\|G_{G\text{-}S}^T e_m\right\|_1 = \left\|\tilde{U}^T e_m + G_{G\text{-}S}^T \tilde{L}^T e_m\right\|_1 \le \left\|\tilde{U}^T e_m\right\|_1 + \|G_{G\text{-}S}\|_\infty \left\|\tilde{L}^T e_m\right\|_1 \qquad (8.3\text{-}183)$$

Solving for $\|G_{G\text{-}S}\|_\infty$, we obtain the inequality

$$\|G_{G\text{-}S}\|_\infty \le \frac{\left\|\tilde{U}^T e_m\right\|_1}{1 - \left\|\tilde{L}^T e_m\right\|_1} \qquad (8.3\text{-}184)$$

Hence,

$$\frac{\left\|\tilde{U}^T e_m\right\|_1}{1 - \left\|\tilde{L}^T e_m\right\|_1} \le \left\|\tilde{L}^T e_m\right\|_1 + \left\|\tilde{U}^T e_m\right\|_1 \le \left\|\tilde{L}^T e_k\right\|_1 + \left\|\tilde{U}^T e_k\right\|_1 = \|G_J\|_\infty \qquad (8.3\text{-}185)$$

The right inequality follows from Eq. (8.3-181), where the $k$th row of $G_J$ had the largest 1-norm. Hence, $0 \le \left\|\tilde{L}^T e_m\right\|_1 + \left\|\tilde{U}^T e_m\right\|_1 < 1$. This then establishes the first inequality in Eq. (8.3-185) from simple algebraic manipulations. Finally, Eqs. (8.3-185), (8.3-184), and (8.3-181) imply

$$\|G_{G\text{-}S}\|_\infty \le \|G_J\|_\infty < 1 \qquad (8.3\text{-}186)$$

and, therefore, by Theorem 8.3-9 the Gauss-Seidel iteration converges.

The strict diagonal dominance condition does not always hold in practice. For example, the system of equations resulting from our finite difference discretization, Eq. (8.3-150), is not strictly diagonally dominant, but still converges under the Jacobi and Gauss-Seidel iterations. Observe that "weak" diagonal dominance does hold in a greater-than-or-equal-to sense, with strict diagonal dominance occurring only in the first and last rows. Weak diagonal dominance also occurs frequently in mass-spring systems. As an example for his proposed stiffness matrix adjustment procedure, Kabe (1985) introduced the eight-degree-of-freedom system shown in Fig. 8.3-6. The system's mass and stiffness matrices are

FIGURE 8.3-6 Kabe's one-dimensional 8 DOF mass-spring system (Kabe, 1985).

$$M = \mathrm{diag}(0.001,\ 1.0,\ \ldots,\ 1.0,\ 0.002)$$

and the $8 \times 8$ symmetric stiffness matrix, $K$, of Eq. (8.3-187). The diagonal of $K$ is $(1.5,\ 1011.5,\ 1110,\ 1100,\ 1100,\ 1112,\ 1011.5,\ 3.5)$, and its nonzero off-diagonal terms are the spring couplings, of magnitudes 1.5, 10, 100, 10, and 2, between the connected masses; the complete matrix is given in Kabe (1985). (8.3-187)

Observe that $K$ is strictly diagonally dominant in rows two through seven, but only weakly dominant in the first and last rows. Also, based on physics, $K$ is symmetric and positive-definite. It can be shown that the spectral radii of the iteration matrices associated with $K$ are $\rho(G_J) \approx 0.148$ and $\rho(G_{G\text{-}S}) \approx 0.022$ for the Jacobi and Gauss-Seidel methods, respectively.


Therefore, by Theorem 8.3-10, the iteration from both of these methods will converge to the solution. In light of these observations, we introduce the following definition:

Definition An $N \times N$ matrix, $A$, is weakly diagonally dominant if, for all $n$, $|a_{n,n}| \ge \sum_{j \ne n} |a_{n,j}|$, with strict inequality holding for at least one row.
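Both dominance classifications, and the spectral radii that govern convergence, are easy to check numerically. A sketch follows; since Kabe's full stiffness matrix is only abbreviated above, the $8 \times 8$ matrix of Eq. (8.3-151) is used as the demonstration case, and the helper names are ours:

```python
import numpy as np

def dominance(A):
    """Classify a matrix as 'strict', 'weak', or 'none' diagonally dominant."""
    d = np.abs(np.diag(A))
    off = np.sum(np.abs(A), axis=1) - d
    if np.all(d > off):
        return 'strict'
    if np.all(d >= off) and np.any(d > off):
        return 'weak'
    return 'none'

def iteration_radii(A):
    """Spectral radii of the Jacobi and Gauss-Seidel iteration matrices."""
    D = np.diag(np.diag(A))
    L = np.tril(A, k=-1)
    U = np.triu(A, k=1)
    rho = lambda G: np.max(np.abs(np.linalg.eigvals(G)))
    return rho(-np.linalg.solve(D, L + U)), rho(-np.linalg.solve(D + L, U))

# The 8x8 central difference matrix of Eq. (8.3-151): weakly diagonally
# dominant (strict only in the first and last rows) and irreducible.
N = 8
A = 2.0 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)
rho_J, rho_GS = iteration_radii(A)
```

For this matrix, `dominance(A)` returns `'weak'` and the computed radii satisfy $\rho(G_{G\text{-}S}) \le \rho(G_J) < 1$, consistent with the convergence observed earlier.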

The next examples show that we cannot relax the strict diagonal dominance condition and still ensure convergence. Consider the following $4 \times 4$ system, which is weakly diagonally dominant:

(8.3-188)

The associated Jacobi iteration matrix is

(8.3-189)

Straightforward algebra leads to $G_J$ having the characteristic polynomial

$$p(\lambda) = \left(\lambda^2 - \frac{1}{4}\right)\left(\lambda^2 - (1 - \delta)\right) - \frac{\delta}{2}\,\lambda^2 \qquad (8.3\text{-}190)$$

Solution of Eq. (8.3-190) by the quadratic formula (in $\lambda^2$) yields a closed-form expression for $\rho(G_J)$ (Eq. 8.3-191). A plot of this expression shows that $\rho(G_J)$ is a decreasing function that is less than one for $0 < \delta \le 1$ and equal to unity for $\delta = 0$. First, note that the upper-left $2 \times 2$ and lower-right $2 \times 2$ matrices of $G_J$ have characteristic polynomials $p_{\text{upper-left}}(\lambda) = \lambda^2 - 1/4$ and $p_{\text{lower-right}}(\lambda) = \lambda^2 - (1 - \delta)$, respectively. Hence, for $\delta = 0$, $\rho(G_J) = 1$, and the Jacobi iteration does


not converge. On the other hand, a nonzero $\delta$ produces a coupling of the upper-left and lower-right matrices that affects the overall eigenvalues of $G_J$ so that its spectral radius is less than one. Therefore, by Theorem 8.3-10, for $\delta > 0$, the Jacobi iteration converges. This suggests that for convergence, the "coupling" of principal submatrices is required in addition to weak diagonal dominance. We can make this notion of coupling precise with the following definition:

Definition An $N \times N$ matrix, $A$, is reducible if there exists a permutation matrix, $P$, such that

$$PAP^T = \begin{bmatrix} \tilde{A}_{11} & \tilde{A}_{12}\\ 0 & \tilde{A}_{22} \end{bmatrix}$$

where $\tilde{A}_{11}$ and $\tilde{A}_{22}$ are square. $A$ is irreducible if it is not reducible.

Recall that a permutation matrix is the identity matrix with its rows interchanged. Therefore, permutation matrices are nonsingular and have inverses equal to their transposes. Hence, the triple product $PAP^T$ is a similarity transformation that does not change the eigenvalues and the spectral radius. Moreover, it is easily shown that the eigenvalues of the iteration matrix, $G$, also remain invariant. The term reducible refers to the fact that the system can be "reduced" to a smaller set of equations. Consider a nonsingular and reducible matrix, $A$. Then the system $Ax = b$ is equivalent to

$$\tilde{A}_{11}\tilde{x}_1 + \tilde{A}_{12}\tilde{x}_2 = \tilde{b}_1 \qquad \text{and} \qquad \tilde{A}_{22}\tilde{x}_2 = \tilde{b}_2 \qquad (8.3\text{-}192)$$

where $\left\{\tilde{x}_1^T \;\; \tilde{x}_2^T\right\}^T = Px$ and $\left\{\tilde{b}_1^T \;\; \tilde{b}_2^T\right\}^T = Pb$. Since $\tilde{A}_{22}$ is invertible, we can solve for $\tilde{x}_2$ in the second equation of (8.3-192). Subsequent substitution into the first equation yields the reduced set of equations,

$$\tilde{A}_{11}\tilde{x}_1 = \tilde{b}_1 - \tilde{A}_{12}\tilde{A}_{22}^{-1}\tilde{b}_2 \qquad (8.3\text{-}193)$$

Referring back to our $4 \times 4$ example, the weakly diagonally dominant system is irreducible for $\delta > 0$ and reducible for $\delta = 0$. We are ready to state the next result.

Theorem 8.3-13 If a matrix $A$ is weakly diagonally dominant and irreducible, then the Jacobi and Gauss-Seidel iterations converge. Moreover, $\rho(G_{G\text{-}S}) \le \rho(G_J) < 1$.


We will omit the proof, which can be found in Varga (1962). Our two examples, Eqs. (8.3-151) and (8.3-188), are weakly diagonally dominant and irreducible. Therefore, Theorem 8.3-13 implies that the Jacobi and Gauss-Seidel iterations, applied to these systems, will converge to the solution, regardless of the initial guess. We should also note that for large systems of equations, diagonal dominance can be easily determined. Furthermore, components of structural systems are coupled and, therefore, have stiffness matrices that are irreducible. Hence, Theorem 8.3-13 is practical for structural mechanical systems in the sense that its conditions can be quickly ascertained.

We will end the discussion of Jacobi and Gauss-Seidel convergence properties with an example from Varga (1962). Previous examples and discussions indicated that the Gauss-Seidel method converges faster than the Jacobi method. This is not true in general. Consider the following nonsingular matrix, $A$,

$$A = \begin{bmatrix} 1 & 2 & -2\\ 1 & 1 & 1\\ 2 & 2 & 1 \end{bmatrix} \qquad (8.3\text{-}194)$$

Straightforward calculations lead to the Jacobi and Gauss-Seidel iteration matrices,

$$G_J = \begin{bmatrix} 0 & -2 & 2\\ -1 & 0 & -1\\ -2 & -2 & 0 \end{bmatrix} \qquad \text{and} \qquad G_{G\text{-}S} = \begin{bmatrix} 0 & -2 & 2\\ 0 & 2 & -3\\ 0 & 0 & 2 \end{bmatrix} \qquad (8.3\text{-}195)$$

Calculation of the characteristic equations yields

$$p_{\text{Jacobi}}(\lambda) = \lambda^3 \;\Rightarrow\; \rho(G_J) = 0 \qquad \text{and} \qquad p_{G\text{-}S}(\lambda) = \lambda(\lambda - 2)^2 \;\Rightarrow\; \rho(G_{G\text{-}S}) = 2 \qquad (8.3\text{-}196)$$

Therefore, the Jacobi iteration converges, but the Gauss-Seidel method diverges. In fact, the Jacobi method converges in three steps since $G_J^3 = 0$.

From the preceding discussion, we know that diagonal dominance and irreducibility constrain the eigenvalues of the iteration matrix so that its spectral radius is less than one. Many of the matrices encountered in structural dynamics are symmetric and positive definite. This condition also


restricts the eigenvalues and leads to a convergent SOR iteration matrix. We start by determining the possible relaxation factors that are necessary for convergence. First, recall three properties of determinants:

1. $\det[AB] = \det[A]\cdot\det[B]$
2. $\det[A] = \prod_{n=1}^{N} \lambda_n$, where $\lambda_n$ are the eigenvalues of $A$
3. $\det[T] = \prod_{n=1}^{N} t_{n,n}$, where $T$ is a lower- or upper-triangular matrix.

We are now ready to prove the following theorem:

Theorem 8.3-14 If the SOR iteration with relaxation factor $\omega$ converges, then $0 < \omega < 2$.

We begin by noting that Eqs. (8.3-172) and (8.3-179) yield

$$G_{SOR} = (D + \omega L)^{-1}\left((1 - \omega)D - \omega U\right) = \left(I - \omega\tilde{L}\right)^{-1}\left((1 - \omega)I + \omega\tilde{U}\right) \qquad (8.3\text{-}197)$$

Since $I - \omega\tilde{L}$ is lower triangular with unit diagonal, $\det\left[I - \omega\tilde{L}\right] = 1$. Hence, by determinant Properties 2 and 3 (see above),

$$\prod_{n=1}^{N} \lambda_n = \det\left[G_{SOR}\right] = \det\left[\left(I - \omega\tilde{L}\right)^{-1}\right]\cdot\det\left[(1 - \omega)I + \omega\tilde{U}\right] = (1 - \omega)^N \qquad (8.3\text{-}198)$$

Taking the modulus of each side leads to the lower bound for the spectral radius,

$$\prod_{n=1}^{N} |\lambda_n| = |1 - \omega|^N \;\Rightarrow\; \rho(G_{SOR}) = \max_{1 \le n \le N} |\lambda_n| \ge |1 - \omega| \qquad (8.3\text{-}199)$$

Hence, by virtue of Theorem 8.3-10, if the SOR iteration converges, then $\rho(G_{SOR}) < 1$, which requires $|1 - \omega| < 1$, i.e., $0 < \omega < 2$.

Theorem 8.3-14 is a necessary condition for the SOR iteration to converge. As we alluded to earlier, additional constraints are needed for sufficiency. We have the following result, whose proof can be found in Demmel (1997):


Theorem 8.3-15 If $A$ is symmetric positive-definite and $0 < \omega < 2$, the SOR iteration converges.

Observe that the coefficient matrices from our two example problems, Eqs. (8.3-151) and (8.3-187), are symmetric positive-definite and, therefore, will possess SOR iteration matrices that are convergent for $0 < \omega < 2$. Determining an optimal relaxation factor that minimizes $\rho(G_{SOR})$ is a difficult problem for general matrices; however, results exist for matrices having specialized structures. We will briefly discuss one example. The matrix, $A$, resulting from our central difference problem, Eq. (8.3-151), and the Poisson problem in general, belongs to a class of matrices possessing a trait known as property A. A matrix has this property if there exists a permutation matrix, $P$, such that

$$PAP^T = \begin{bmatrix} \tilde{D}_1 & B\\ C & \tilde{D}_2 \end{bmatrix} \qquad (8.3\text{-}200)$$

where $\tilde{D}_1$ and $\tilde{D}_2$ are diagonal matrices. It is easy to show that reordering the rows and columns of Eq. (8.3-151) to separate the odd and even nodes will produce a matrix of the form in Eq. (8.3-200). On the other hand, because of the connectivity among masses six, seven, and eight, Kabe's stiffness matrix (see Eq. 8.3-187) does not have property A. Recall that the eigenvalues and spectral radii of $A$ and its iteration matrices remain invariant under permutation similarity transformations. Therefore, any result associated with the spectral radius for the permuted system will also hold for the original system. So without loss of generality, let us assume that we have re-ordered the matrix so Eq. (8.3-200) holds, i.e.,

$$A = \begin{bmatrix} \tilde{D}_1 & B\\ C & \tilde{D}_2 \end{bmatrix} \qquad (8.3\text{-}201)$$

For $a \ne 0$, consider the modified Jacobi iteration matrix,

$$G_J(a) = \begin{bmatrix} 0 & -a^{-1}\tilde{D}_1^{-1}B\\ -a\,\tilde{D}_2^{-1}C & 0 \end{bmatrix} \qquad (8.3\text{-}202)$$


Then, Eq. (8.3-202) shows that $G_J(a)$ and $G_J(1) = G_J$ have the same eigenvalues since they are related by a similarity transformation, i.e.,

$$G_J(a) = \begin{bmatrix} I & 0\\ 0 & aI \end{bmatrix} G_J \begin{bmatrix} I & 0\\ 0 & aI \end{bmatrix}^{-1} \qquad (8.3\text{-}203)$$

Matrices whose modified Jacobi iteration matrix possesses eigenvalues that are independent of $a$ are said to have a consistent ordering property. For this class of matrices, the spectral radii of the classical iteration matrices are related as stated in Theorem 8.3-16 (Varga (1962); Demmel (1997)):

Theorem 8.3-16 If $A$ is consistently ordered, then 1) $\rho(G_{G\text{-}S}) = \rho(G_J)^2$, and 2) $\omega_{opt} = \dfrac{2}{1 + \sqrt{1 - \rho(G_{G\text{-}S})}}$ is the optimal SOR relaxation parameter, with $\rho(G_{SOR}) = \omega_{opt} - 1$.

Observe that the spectral radii of the Jacobi and Gauss-Seidel methods in Table 8.3-1 satisfy the first property. Substituting $\rho(G_{G\text{-}S})$ into the expression for $\omega_{opt}$ yields the optimal SOR relaxation parameter of 1.49029, which produces an iteration matrix with the minimum spectral radius equal to 0.49029. This optimal relaxation factor is consistent with Fig. 8.3-5B.

8.4 Linear least-square problems

The previous section presented direct and iterative methods for solving systems of linear equations where the coefficient matrix, $A$, is an $N \times N$ nonsingular matrix. Many problems that are related to data analysis and approximation require the solution of systems of equations where the number of constraints (rows) is not equal to the number of variables (columns). In this section, we will discuss several methods for solving the following system of linear algebraic equations:

$$Ax \approx b, \qquad A \in \mathbb{R}^{M \times N}, \quad x \in \mathbb{R}^N, \quad b \in \mathbb{R}^M \qquad (8.4\text{-}1)$$

Note that we have used the approximation symbol, $\approx$, since a solution satisfying Eq. (8.4-1) exactly may not exist. We will mainly consider overdetermined systems where $M \ge N$. Also, for simplicity we will assume


that $A$ has full column rank, i.e., the $N$ columns of $A$ are linearly independent.

We will start by formally defining the least-square approximation to Eq. (8.4-1) as a minimization problem. This leads to the normal equation, whose solution is the unique minimizer of the cost function associated with the least-square problem. By virtue of the minimization property, the solution to the normal equation can be viewed geometrically in terms of orthogonal projections onto the appropriate subspaces. Therefore, we will first review orthogonal projectors that are useful in the study of the least-square problem. Direct solution of the normal equations by the methods discussed in the previous sections usually suffers from ill conditioning. An approach that is numerically more stable relies on the QR factorization, $A = QR$, where $Q$ is an $M \times N$ orthonormal matrix and $R$ is an $N \times N$ upper-triangular matrix. We will present several methods that compute the QR factorization. We will also see in the next section that the QR factorization is the basis of the QR algorithm that calculates the eigenvalues of matrices. In particular, we will see that the Householder and Givens methods have superior numerical properties and, therefore, are central to the QR algorithm. As a final method for solving Eq. (8.4-1), we will introduce the singular value decomposition (SVD). The singular value decomposition leads to a natural definition of the pseudoinverse of a rectangular matrix, which is often used to solve the linear least-squares problem. In addition, we will examine and compare the numerical properties of these methods and their sensitivities via perturbation analysis.

As a final note, the goal of our discussion is to introduce the reader to the linear least-square problem and some of the standard approaches for solving it. The algorithms presented herein are intended to outline the computational steps and, therefore, do not detail the most accurate, efficient, and robust implementations that are available in LAPACK. Demmel (1997) provides a concise introduction to the subject. For details on numerical properties and error bounds, the reader should consult Stewart (1998, 2001a,b) or Higham (2002). Björck's text, Numerical Methods for Least Squares Problems (Björck, 1996), is an authoritative reference on the subject.

8.4.1 Normal equation

A solution, $x$, to Eq. (8.4-1) should be a vector such that $Ax$ is "close" to $b$. Since norms provide a way to quantify the distance between vectors, it is reasonable that we should attempt to minimize the cost function, $\|Ax - b\|$. Selecting a norm that is differentiable, e.g., the 2-norm, allows the minimization problem to be examined analytically using straightforward calculus. In view of these considerations, we make the following definition:

Definition Let $A$ be a matrix in $\mathbb{R}^{M \times N}$, $M \ge N$, with full column rank, and let $b \in \mathbb{R}^M$. Then $x$ is a least-square solution to $Ax \approx b$ if

$$\|Ax - b\|_2 = \min_y \|Ay - b\|_2 \qquad (8.4\text{-}2)$$

Before analyzing (8.4-2), we will first introduce an example problem to illustrate how the normal equation arises naturally using a nonrigorous approach for solving Eq. (8.4-1). Consider the problem of using polynomials to approximate the following discrete function,

$$f(t_m) = e^{-a t_m}\cos(2\pi b\, t_m), \qquad a = b = 1/5, \qquad t_m = m/2, \quad m = 0, \ldots, 20 \qquad (8.4\text{-}3)$$

Decaying sinusoids are ubiquitous in structural dynamics since they represent single-degree-of-freedom system responses as well as single mode responses. Expressing the $N - 1$ degree polynomial as $p(t_m) = c_0 + c_1 t_m + c_2 t_m^2 + \cdots + c_{N-1} t_m^{N-1}$, we obtain the following system of linear equations, $Ax \approx b$:

$$\underbrace{\begin{bmatrix} 1 & t_0 & t_0^2 & t_0^3 & \cdots & t_0^{N-1}\\ 1 & t_1 & t_1^2 & t_1^3 & \cdots & t_1^{N-1}\\ 1 & t_2 & t_2^2 & t_2^3 & \cdots & t_2^{N-1}\\ \vdots & \vdots & \vdots & \vdots & & \vdots\\ 1 & t_{20} & t_{20}^2 & t_{20}^3 & \cdots & t_{20}^{N-1} \end{bmatrix}}_{A}\underbrace{\begin{Bmatrix} c_0\\ c_1\\ c_2\\ \vdots\\ c_{N-1} \end{Bmatrix}}_{x} \approx \underbrace{\begin{Bmatrix} f(t_0)\\ f(t_1)\\ f(t_2)\\ \vdots\\ f(t_{20}) \end{Bmatrix}}_{b} \qquad (8.4\text{-}4)$$

The $M \times N$ matrix $A$ is a Vandermonde matrix and, therefore, has full column rank if $N \le 21$. For polynomial degrees less than 20, $A$ is a rectangular matrix with more rows than columns. In order to use the algorithms from the previous section, we first need to transform Eq. (8.4-4) to an equivalent system of linear equations with a square coefficient matrix. This can be accomplished by premultiplying Eq. (8.4-4) by $A^T$ to produce the following $N \times N$ system of linear equations,


CHAPTER 8 Numerical methods

    A^T A x = A^T b        (8.4-5)

Eq. (8.4-5) is known as the normal equation and its solution solves the least-square problem. Observe that the coefficient matrix, A^T A, is a symmetric positive-definite matrix. Hence, Eq. (8.4-5) can be solved by first decomposing the coefficient matrix using the Cholesky factorization and then applying forward and backward substitutions. Fig. 8.4-1 compares the polynomial approximations that were computed for degrees equal to three, five, seven, and nine in double precision. The relative errors, equal to ||f(t_m) − p(t_m)||_2 / ||f(t_m)||_2, were calculated and are indicated in parentheses. As expected, the error decreases as the degree of the polynomial increases, with a ninth-degree polynomial providing close to an exact fit. However, as we will discover later, the normal equations are poorly conditioned for high-order polynomials.

FIGURE 8.4-1 Least-square polynomial approximations of f(t) = e^(−at) cos(2πbt). The normal equations were solved in double precision using the Cholesky factorization and forward and backward substitution. The relative 2-norm errors are indicated in parentheses.

8.4 Linear least-square problems

The normal Eq. (8.4-5) can be rigorously derived from the least-square definition. First, note that ||Ax − b||_2² defines a positive quadratic form on ℝ^N and, therefore, has a minimum. Let x denote a solution that minimizes ||Ax − b||_2². For an arbitrary nonzero vector, u, and δ > 0, define the function C(δ) as

    C(δ) = ||A(x + δu) − b||_2²

(8.4-6)

Expanding the norm-squared, we obtain

    C(δ) = (A(x + δu) − b)^T (A(x + δu) − b)
         = x^T A^T A x − 2x^T A^T b + b^T b + δ² u^T A^T A u + 2δ u^T (A^T A x − A^T b)        (8.4-7)

Observe that C(δ) is a positive quadratic function in δ and, therefore, x will be a minimizer if and only if the derivative of C vanishes at δ = 0. Evaluating C′(0) yields

    C′(0) = [2δ u^T A^T A u + 2u^T (A^T A x − A^T b)]_(δ=0)
          = 2u^T (A^T A x − A^T b)        (8.4-8)

Thus, we see that C′(0) = 0 if and only if

    u^T (A^T A x − A^T b) = 0        (8.4-9)

Since u is arbitrary, we conclude that the normal Eq. (8.4-5) is necessary and sufficient for x to be a least-square solution.

That the least-square solution is unique follows from the fact that we have assumed that A is of full rank. To prove this, consider any y ≠ x. Then there is a nonzero vector, u, such that y = x + u. Therefore, Eq. (8.4-7), with δ = 1, and Eq. (8.4-5) lead to

    ||Ay − b||_2² = (A(x + u) − b)^T (A(x + u) − b)
                  = x^T A^T A x − 2x^T A^T b + b^T b + u^T A^T A u + 2u^T (A^T A x − A^T b)
                  = x^T A^T A x − 2x^T A^T b + b^T b + u^T A^T A u
                  = (Ax − b)^T (Ax − b) + (Au)^T (Au)
                  = ||Ax − b||_2² + ||Au||_2²        (8.4-10)


Since A is of full rank, Au ≠ 0, which implies that ||Au||_2² > 0. Therefore, from Eq. (8.4-10), we conclude that

    ||Ay − b||_2² > ||Ax − b||_2²        (8.4-11)

Hence, there does not exist a y ≠ x such that the distance of Ay to b is a minimum.

Because the minimization is defined using the 2-norm, which is associated with the Euclidean inner product, the least-square solution has a natural geometric interpretation. Given the solution, x, which satisfies the normal equation, let us define the residual, r, by

    r = Ax − b        (8.4-12)

Then, b = Ax + (−r). Furthermore, by virtue of Eq. (8.4-5), r is orthogonal to Ax since

    (Ax)^T r = (Ax)^T (Ax − b) = x^T (A^T A x − A^T b) = x^T · 0 = 0        (8.4-13)

Recall that the range of A, denoted by R_A, is defined as the linear span of the column vectors of A. Hence, the normal equation defines a solution, x, such that Ax is the orthogonal projection of b onto R_A with the smallest residual, as shown in Fig. 8.4-2. Clearly, if b lies in the column space of A, then the normal equation provides the exact solution (in the sense of equality) to Eq. (8.4-1). Otherwise, x is an exact solution to the projection of b onto R_A. This observation suggests an approach for solving the least-square problem using orthogonal projections, which leads us to the QR factorization.

FIGURE 8.4-2 Geometric interpretation of the least-square solution where Ax is the orthogonal projection of b onto the column space of A.
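Before turning to the QR factorization, the normal-equation route of Eqs. (8.4-3)–(8.4-5) can be sketched numerically. The NumPy code below is our illustration, not the text's: it builds the Vandermonde matrix for a degree-5 fit of the decaying sinusoid and solves the normal equation with a Cholesky factorization followed by forward and backward substitution.

```python
import numpy as np

# Illustrative sketch (not from the text): least-square polynomial fit of
# f(t) = exp(-t/5) cos(2*pi*t/5) at t_m = m/2, m = 0..20, via the normal
# equation A^T A x = A^T b solved with a Cholesky factorization.
t = np.arange(21) / 2.0
f = np.exp(-t / 5.0) * np.cos(2.0 * np.pi * t / 5.0)

A = np.vander(t, 6, increasing=True)     # 21 x 6 Vandermonde matrix (degree 5)

G = A.T @ A                              # symmetric positive-definite
c = A.T @ f
L = np.linalg.cholesky(G)                # G = L L^T, L lower triangular
y = np.linalg.solve(L, c)                # forward substitution
x = np.linalg.solve(L.T, y)              # backward substitution

rel_err = np.linalg.norm(f - A @ x) / np.linalg.norm(f)
```

For high polynomial degrees the same code degrades: forming A^T A squares the condition number of A, which is exactly the poor conditioning noted above.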


8.4.2 QR factorization

8.4.2.1 Orthogonal projectors

Central to the QR factorization is the concept of orthogonal projections. Therefore, we will start with a review of projectors on ℝ^M. First observe that the simplest linear mappings from ℝ^M to ℝ^M are the rank-one transformations,

\[
uv^T = \begin{bmatrix}
u_1 v_1 & \cdots & u_1 v_M \\
\vdots & \ddots & \vdots \\
u_M v_1 & \cdots & u_M v_M
\end{bmatrix}
\tag{8.4-14}
\]

where u and v are M-dimensional vectors. Note that a vector, x ∈ ℝ^M, will be mapped to the vector (v^T x)u. In general, any linear transformation, P, can be expressed as a sum of rank-one transformations,

\[
P = u_1 v_1^T + u_2 v_2^T + \cdots + u_r v_r^T
  = [\,u_1 \;\; u_2 \;\; \cdots \;\; u_r\,][\,v_1 \;\; v_2 \;\; \cdots \;\; v_r\,]^T
  = UV^T
\tag{8.4-15}
\]

Therefore, P maps a vector, x, to Px, which lies in the subspace, S, that is equal to the span of u_1, …, u_r. For P to be a projector onto S, it should map any vector in S to itself, i.e., P(Px) = Px. This leads us to the definition:

Definition. A matrix P in ℝ^(M×M) defines a projector if P² = P.

Projectors are also known in the literature as idempotent matrices. Clearly P is a projector if and only if V^T U = I_r, where I_r is the r × r identity matrix, since

    P² = (UV^T)(UV^T) = U(V^T U)V^T = U · I_r · V^T = UV^T = P        (8.4-16)

This is equivalent to {u_1, …, u_r} and {v_1, …, v_r} forming a biorthonormal set where

    v_i^T u_j = δ_(i,j)        (8.4-17)

For example, consider the biorthonormal set of vectors u_1 = {1 −1 0}^T, u_2 = {1 0 −1}^T, v_1 = {1 0 1}^T, and v_2 = {1 1 0}^T. Then, P = UV^T defines a projector, i.e.,

\[
P = UV^T = \begin{bmatrix} 1 & 1 \\ -1 & 0 \\ 0 & -1 \end{bmatrix}
\begin{bmatrix} 1 & 0 & 1 \\ 1 & 1 & 0 \end{bmatrix}
= \begin{bmatrix} 2 & 1 & 1 \\ -1 & 0 & -1 \\ -1 & -1 & 0 \end{bmatrix}
\tag{8.4-18}
\]


The reader should verify that P² = P. The residual, x − Px, defines the complementary projector,

    P_c = I_M − P        (8.4-19)

where P_c is also a projector since

    P_c² = (I_M − P)² = I_M − 2P + P² = I_M − 2P + P = I_M − P = P_c        (8.4-20)

For example, the projector defined in Eq. (8.4-18) has the complementary projector,

\[
P_c = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}
- \begin{bmatrix} 2 & 1 & 1 \\ -1 & 0 & -1 \\ -1 & -1 & 0 \end{bmatrix}
= \begin{bmatrix} -1 & -1 & -1 \\ 1 & 1 & 1 \\ 1 & 1 & 1 \end{bmatrix}
\tag{8.4-21}
\]

The complementary projector, P_c, projects any vector, x, onto a subspace, S_c, such that its direct sum with S equals all of ℝ^M, i.e., S ⊕ S_c = ℝ^M. This implies that P and P_c provide a decomposition of x into a unique sum of two complementary components in S and S_c, respectively. Let y = Px and y_c = P_c x; then

    x = I_M x = (P + I_M − P)x = Px + P_c x = y + y_c        (8.4-22)

As an example, let x = {1 2 3}^T. Then, for P and P_c defined by Eqs. (8.4-18) and (8.4-21), respectively, y = {7 −4 −3}^T and y_c = {−6 6 6}^T.

We will now determine the condition under which a projector is orthogonal. If x can be expressed as an orthogonal sum of y and y_c, then they must be orthogonal to each other, i.e.,

    0 = y_c^T y = (P_c x)^T (Px) = x^T (I_M − P)^T P x = x^T (P − P^T P) x        (8.4-23)

Thus, if P is symmetric (or Hermitian, if P is complex), then P − P^T P = P − P² = 0, and P and P_c will resolve a vector into its orthogonal components. Hence, we have the following result:

Theorem 8.4-1. A projector P is orthogonal if P^T = P.

Observe that the projector defined in Eq. (8.4-18) is not orthogonal since the matrix is not symmetric. A general projector can be represented as the matrix product, P = UV^T, and P will be symmetric if U = V = Q, where Q = [q_1, …, q_r]. Furthermore, since the columns of U and V are


biorthonormal, Q must be an orthonormal matrix. In fact, any orthogonal projector can be represented as the product QQ^T for some orthonormal matrix, Q. Let us consider the example

\[
Q = [\,q_1 \;\; q_2\,] = \begin{bmatrix}
1/\sqrt{2} & 1/\sqrt{3} \\
-1/\sqrt{2} & 1/\sqrt{3} \\
0 & 1/\sqrt{3}
\end{bmatrix}
\;\Rightarrow\;
P = QQ^T = \begin{bmatrix}
5/6 & -1/6 & 1/3 \\
-1/6 & 5/6 & 1/3 \\
1/3 & 1/3 & 1/3
\end{bmatrix}
\tag{8.4-24}
\]

Then P is an orthogonal projector since Q is an orthonormal matrix. Moreover, since P is symmetric, its complementary projector, P_c = I_3 − P, is also orthogonal. For clarity, we will also use the notation, P^t = P_c, to denote complementary orthogonal projectors. Observe that for q_3 = {−1/√6  −1/√6  2/√6}^T, the set, {q_1, q_2, q_3}, is an orthonormal basis for ℝ³, which provides the resolution of the identity,

    I_3 = q_1 q_1^T + q_2 q_2^T + q_3 q_3^T = P + q_3 q_3^T        (8.4-25)

and leads to P^t = I_3 − P = q_3 q_3^T. Therefore, for our example, we obtain

\[
P^t = I_3 - P = \begin{bmatrix}
1/6 & 1/6 & -1/3 \\
1/6 & 1/6 & -1/3 \\
-1/3 & -1/3 & 2/3
\end{bmatrix} = q_3 q_3^T
\tag{8.4-26}
\]

Recall that the normal equation yields a solution, x, where Ax is the orthogonal projection of b onto R_A. The next result allows us to calculate the least-square solution by means of orthogonal projectors.

Theorem 8.4-2 (QR Factorization). Let A be an M × N matrix of full column rank. Then A = QR, where Q is an orthonormal M × N matrix and R is an N × N upper-triangular and nonsingular matrix. Furthermore, if we require that the diagonal elements of R be positive, then the factorization is unique.

Observe that since the column span of Q equals the column span of A, P_A = QQ^T will define the orthogonal projection onto R_A. From our earlier discussion, we established that Eq. (8.4-1) can be solved exactly with P_A b replacing b. Therefore, by the QR factorization, we are led to solving the equation,

    QRx = QQ^T b        (8.4-27)

Premultiplying the above by Q^T yields


    Rx = Q^T b        (8.4-28)

Since R is nonsingular and upper triangular, x can be efficiently computed by backward substitution.

The equivalence of Eq. (8.4-28) and the normal equation is a direct consequence of the QR factorization. Substituting A = QR into Eq. (8.4-5) leads to

    R^T R x = R^T Q^T b        (8.4-29)

Eq. (8.4-28) follows since R is nonsingular and, therefore, we can premultiply Eq. (8.4-29) by R^(−T). The reader should recognize that R^T is the lower-triangular Cholesky factor of A^T A. Also, note that the residual can be expressed in terms of the complementary projection,

    r = Ax − b = −(b − P_A b) = −P_A^t b        (8.4-30)
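Eqs. (8.4-28)–(8.4-30) translate directly into a least-square solver. The NumPy sketch below is ours (the data are arbitrary illustrations): it uses the reduced QR factorization and explicit backward substitution, then exposes the residual, which should be orthogonal to the range of A.

```python
import numpy as np

# Solve min ||Ax - b||_2 through R x = Q^T b (Eq. 8.4-28); illustrative data.
rng = np.random.default_rng(1)
A = rng.standard_normal((8, 4))          # full column rank with probability 1
b = rng.standard_normal(8)

Q, R = np.linalg.qr(A)                   # reduced QR: Q is 8x4, R is 4x4
c = Q.T @ b

x = np.zeros(4)
for k in range(3, -1, -1):               # backward substitution
    x[k] = (c[k] - R[k, k + 1:] @ x[k + 1:]) / R[k, k]

r = A @ x - b                            # residual, Eq. (8.4-30)
```

The check A^T r = 0 is precisely the normal equation, confirming the equivalence discussed above.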

8.4.2.2 Classical Gram-Schmidt method

We now proceed with a constructive proof of Theorem 8.4-2 known as the classical Gram-Schmidt (CGS) procedure. Denote the columns of A by a_1, …, a_N and define q_1 = a_1/||a_1||_2 and P_1 = q_1 q_1^T. Extract from a_2 the orthogonal complement with respect to q_1 to obtain a_2^t = P_1^t a_2 = (I_M − q_1 q_1^T)a_2. Then, normalize to obtain q_2 = a_2^t/||a_2^t||_2. Next, extract from a_3 the complementary orthogonal projection of P_2 = q_1 q_1^T + q_2 q_2^T. This leads to q_3 = a_3^t/||a_3^t||_2, where

    a_3^t = P_2^t a_3 = (I_M − P_2)a_3 = (I_M − q_1 q_1^T − q_2 q_2^T)a_3        (8.4-31)

Continuing inductively, suppose that we have computed q_1, …, q_(k−1); then q_k is calculated via

    a_k^t = (I_M − q_1 q_1^T − q_2 q_2^T − ⋯ − q_(k−1) q_(k−1)^T)a_k
    q_k = a_k^t/||a_k^t||_2        (8.4-32)

To express the above in factored form, let r_(i,k) = q_i^T a_k, i = 1, …, k−1, and r_(k,k) = ||a_k^t||_2. Then

\[
a_k = r_{1,k} q_1 + \cdots + r_{k-1,k} q_{k-1} + r_{k,k} q_k
    = [\,q_1 \;\; \cdots \;\; q_{k-1} \;\; q_k\,]
\begin{Bmatrix} r_{1,k} \\ \vdots \\ r_{k-1,k} \\ r_{k,k} \end{Bmatrix}
\tag{8.4-33}
\]

Applying Eq. (8.4-33) to all the columns of A, and letting r_(1,1) = ||a_1||_2, leads to the QR factorization,

\[
A = [\,a_1 \;\; a_2 \;\; \cdots \;\; a_N\,]
  = [\,q_1 \;\; q_2 \;\; \cdots \;\; q_N\,]
\begin{bmatrix}
r_{1,1} & r_{1,2} & \cdots & r_{1,N} \\
0 & r_{2,2} & \cdots & r_{2,N} \\
\vdots & & \ddots & \vdots \\
0 & \cdots & 0 & r_{N,N}
\end{bmatrix}
= QR
\tag{8.4-34}
\]

Since A has full column rank, the procedure can be performed to completion. Observe that r_(i,i) > 0 and Q = [q_1 ⋯ q_N] is an orthonormal matrix whose span is R_A. Finally, we note that the factorization is not unique, since any of the column vectors of Q can be modified by sign changes with the corresponding sign changes along the rows of R. However, if we insist on a factorization such that r_(i,i) > 0, then the QR factors are unique. The reader can refer to Stewart (1998, 2001a,b) for a proof.

Classical Gram-Schmidt algorithm. Let A = [a_(i,j)] be an M × N matrix with full column rank. The following algorithm computes the QR factors such that the elements of Q = [q_(i,j)] overwrite the entries of A, and R = [r_(i,j)] is an upper-triangular matrix:

    for k = 1, …, N
        q_(:,k) = a_(:,k)
        r_(1:k−1,k) = (a_(:,1:k−1))^T q_(:,k)
        q_(:,k) = q_(:,k) − a_(:,1:k−1) r_(1:k−1,k)
        r_(k,k) = ||q_(:,k)||_2
        a_(:,k) = q_(:,k)/r_(k,k)
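The algorithm can be transcribed directly. The NumPy sketch below is ours; it returns Q and R separately instead of overwriting A, and is exercised on the 5 × 4 matrix that reappears later in Eq. (8.4-46).

```python
import numpy as np

# Classical Gram-Schmidt QR, a direct transcription of the algorithm above.
def cgs_qr(A):
    A = np.asarray(A, dtype=float)
    M, N = A.shape
    Q = np.zeros((M, N))
    R = np.zeros((N, N))
    for k in range(N):
        q = A[:, k].copy()
        R[:k, k] = Q[:, :k].T @ A[:, k]   # r_{i,k} = q_i^T a_k, i < k
        q -= Q[:, :k] @ R[:k, k]          # remove components along q_1..q_{k-1}
        R[k, k] = np.linalg.norm(q)
        Q[:, k] = q / R[k, k]
    return Q, R

A = np.array([[2.0, 1, 1, 0], [-1, -3, 1, 2], [0, 2, 0, -1],
              [1, -1, 1, 0], [1, 0, -1, 0]])
Q, R = cgs_qr(A)                          # r_{1,1} = ||a_1||_2 = sqrt(7)
```

On a well-conditioned matrix such as this one, the computed Q is orthonormal to machine precision; the loss of orthogonality discussed next appears only near rank deficiency.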


The classical Gram-Schmidt algorithm is not recommended for calculating the QR factorization. When A is nearly rank deficient, the algorithm can produce Q-factors whose columns are not orthogonal. We will return to this issue later when we discuss the numerical properties of the least-square solution methods.

8.4.2.3 Modified Gram-Schmidt method

The extraction of orthogonal vectors from the columns of A in the classical Gram-Schmidt algorithm relies on the accurate calculation of a_k^t in Eq. (8.4-32). Denote the orthogonal projector onto span{q_1, …, q_(k−1)} by P_(k−1) = Q_(k−1) Q_(k−1)^T, where Q_(k−1) = [q_1 ⋯ q_(k−1)]. Then a_k^t is calculated from the complementary projector, P_(k−1)^t = I_M − P_(k−1), which is implemented in the classical Gram-Schmidt algorithm via

    a_k^t = P_(k−1)^t a_k = a_k − (q_1 q_1^T + q_2 q_2^T + ⋯ + q_(k−1) q_(k−1)^T)a_k        (8.4-35)

If A is nearly rank deficient so that a_k ≈ P_(k−1) a_k, their difference will have many fewer significant digits. Moreover, the normalization by ||a_k^t||_2 ≈ 0 will amplify the round-off errors and can progressively lead to nonorthogonality of the vectors, q_k. The modified Gram-Schmidt procedure tries to remedy this by expressing P_(k−1)^t as a product of complementary projections. Specifically, the orthogonality of q_1, …, q_(k−1) implies

    P_(k−1)^t = I_M − (q_1 q_1^T + q_2 q_2^T + ⋯ + q_(k−1) q_(k−1)^T)
              = (I_M − q_1 q_1^T)(I_M − q_2 q_2^T) ⋯ (I_M − q_(k−1) q_(k−1)^T)        (8.4-36)

This change in representation of P_(k−1)^t leads to the modified Gram-Schmidt method, which is numerically superior to the classical Gram-Schmidt method.

Modified Gram-Schmidt algorithm. Let A = [a_(i,j)] be an M × N matrix with full column rank. The following algorithm computes the QR factors such that the elements of Q = [q_(i,j)] overwrite the entries of A and R = [r_(i,j)] is an upper-triangular matrix:


    for k = 1, …, N
        q_(:,k) = a_(:,k)
        for i = 1, …, k−1
            r_(i,k) = (a_(:,i))^T q_(:,k)
            q_(:,k) = q_(:,k) − r_(i,k) a_(:,i)
        r_(k,k) = ||q_(:,k)||_2
        a_(:,k) = q_(:,k)/r_(k,k)

8.4.2.4 Householder transformation method

Let Q = [q_1 ⋯ q_N] be the orthonormal matrix in the QR factorization of A. Premultiplying the QR factorization of A by Q^T leads to

    Q^T A = R        (8.4-37)

Recall that q_1, q_2, …, q_N are orthonormal M-dimensional vectors whose span equals R_A. We can always find M − N remaining orthonormal vectors, w_1, w_2, …, w_(M−N), so that {q_1, …, q_N, w_1, …, w_(M−N)} is an orthonormal basis for ℝ^M. Note that the w_k are orthogonal to q_1, …, q_N and, hence, are orthogonal to the columns of A. Therefore, the M × (M−N) orthonormal matrix, W = [w_1 ⋯ w_(M−N)], is "orthogonal" to A in that W^T A = 0. Consider the M × M orthonormal matrix, Q̃ = [Q | W]; then by Eq. (8.4-37) and the orthogonality of W to A,

\[
\tilde{Q}^T A = \begin{bmatrix} Q^T A \\ W^T A \end{bmatrix}
= \begin{bmatrix} R \\ 0 \end{bmatrix} \equiv \tilde{R}
\tag{8.4-38}
\]

Eq. (8.4-38) says that there exists an M × M orthonormal matrix that transforms A into an "extended" upper-triangular matrix, R̃. Premultiplying Eq. (8.4-38) by Q̃ leads to the extended QR factorization,

    A = Q̃R̃        (8.4-39)

In this and the following section, we will present two methods that calculate Q̃^T. Let us discuss the first of these methods, which is based on the Householder transformation.


Section 8.3 showed how to compute the LU factorization by applying Gauss transformations in a column-by-column manner to transform a square matrix to upper-triangular form. Similarly, Householder transformations, which are defined by orthonormal reflector matrices, can be applied sequentially to the columns of A to transform it to an extended upper-triangular matrix. We start by defining the Householder reflector matrices via an example.

Let x = {x_1 x_2 ⋯ x_M}^T be a vector in ℝ^M as shown in Fig. 8.4-3. Consider a unit vector u, and the subspace, S^t, that is perpendicular to u. Let x̂ equal the reflection of x about S^t. Observe that since reflections are length preserving, ||x̂||_2 = ||x||_2. We want to define u so that x̂ = ±||x||_2 e_1. Recall that e_1 is the unit vector with one in the first position and zero elsewhere. Note that (uu^T)x is the orthogonal projection of x in the direction of u. The figure suggests that the reflection transformation can be defined as

    H = I_M − 2uu^T        (8.4-40)

Requiring that Hx = ±||x||_2 e_1, and algebraic manipulation, leads to

    (I_M − 2uu^T)x = ±||x||_2 e_1
    2(u^T x)u = x ∓ ||x||_2 e_1        (8.4-41)

Therefore, u is a unit vector parallel to ũ = x ∓ ||x||_2 e_1, which can be normalized to yield u, i.e.,

FIGURE 8.4-3 Householder reflection of x about the plane orthogonal to u so that x̂ = ||x||_2 e_1.




    u = ũ/||ũ||_2 = (x ∓ ||x||_2 e_1) / ||x ∓ ||x||_2 e_1||_2        (8.4-42)

Clearly H is symmetric and, moreover, H is orthonormal since

    H^T H = HH = (I_M − 2uu^T)(I_M − 2uu^T)
          = I_M − 4uu^T + 4(uu^T)(uu^T)
          = I_M − 4uu^T + 4u(u^T u)u^T
          = I_M − 4uu^T + 4u(1)u^T = I_M        (8.4-43)

In order to uniquely define H, we need to adopt a sign convention. To avoid round-off errors if x_1 ≈ ∓||x||_2, we choose the sign to equal sgn(x_1). This modification of Eq. (8.4-42) leads to

    u = (x + sgn(x_1)||x||_2 e_1) / ||x + sgn(x_1)||x||_2 e_1||_2        (8.4-44)

With this sign convention,

    ũ = x + sgn(x_1)||x||_2 e_1,        ||ũ||_2² = 2(||x||_2² + ||x||_2 |x_1|),
    ũ^T x = ||x||_2² + ||x||_2 |x_1|,    and    Hx = −sgn(x_1)||x||_2 e_1        (8.4-45)
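As a numerical check (ours, not the text's), the fragment below builds the unit reflection vector of Eq. (8.4-44) for x = {2 −1 0 1 1}^T, the first column of the example matrix in Eq. (8.4-46), and confirms Hx = −sgn(x_1)||x||_2 e_1 without ever forming H.

```python
import numpy as np

# Householder reflection vector, Eq. (8.4-44); sgn(0) is taken as +1 here.
def householder_unit(x):
    u = np.array(x, dtype=float)
    s = np.linalg.norm(u)
    if u[0] < 0.0:
        s = -s                       # s = sgn(x1) ||x||_2
    u[0] += s                        # u~ = x + sgn(x1) ||x||_2 e1
    return u / np.linalg.norm(u)

x = np.array([2.0, -1.0, 0.0, 1.0, 1.0])
u = householder_unit(x)              # ~ {0.937, -0.2017, 0, 0.2017, 0.2017}
Hx = x - 2.0 * u * (u @ x)           # H x = (I - 2 u u^T) x, Eq. (8.4-45)
```

Avoiding the explicit M × M matrix H is also how the reflections are applied in the algorithm below.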

We now illustrate how to apply the Householder reflectors to transform to upper-triangular form the matrix, A, defined in Eq. (8.4-46):

\[
A = \begin{bmatrix}
 2 &  1 &  1 &  0 \\
-1 & -3 &  1 &  2 \\
 0 &  2 &  0 & -1 \\
 1 & -1 &  1 &  0 \\
 1 &  0 & -1 &  0
\end{bmatrix}
\tag{8.4-46}
\]

Starting with the first column, let x = {2 −1 0 1 1}^T. We want to calculate the Householder transformation to introduce zero elements below the first-row element. From Eqs. (8.4-44) and (8.4-40), we obtain


    u_1 = {0.936998  −0.201689  0.000000  0.201689  0.201689}^T
    H_1 = I_5 − 2u_1 u_1^T        (8.4-47)

Applying H_1 to A yields

\[
A^{(1)} = H_1 A = \begin{bmatrix}
-2.64575 & -1.51186 & -0.37796 &  0.75593 \\
 0       & -2.45932 &  1.29661 &  1.83729 \\
 0       &  2.00000 &  0.00000 & -1.00000 \\
 0       & -1.54068 &  0.70339 &  0.16271 \\
 0       & -0.54068 & -1.29661 &  0.16271
\end{bmatrix}
\tag{8.4-48}
\]

Next, we want to reflect the vector elements in the second column below the first row. So let x = {−2.45932  2.00000  −1.54068  −0.54068}^T. Then Eqs. (8.4-44) and (8.4-40) lead to

(8.4-49)

Premultiplying H_2 to A^(1) produces

\[
A^{(2)} = H_2 A^{(1)} = \begin{bmatrix}
-2.64575 & -1.51186 & -0.37796 &  0.75593 \\
 0       &  3.56571 & -1.00160 & -1.92308 \\
 0       &  0       &  0.76289 &  0.24825 \\
 0       &  0       &  0.11571 & -0.79886 \\
 0       &  0       & -1.50285 & -0.17474
\end{bmatrix}
\tag{8.4-50}
\]

Observe that the first row and column are unchanged. Continuing on to the third column with x = {0.76289  0.11571  −1.50285}^T, we obtain

(8.4-51)

and

\[
A^{(3)} = H_3 A^{(2)} = \begin{bmatrix}
-2.64575 & -1.51186 & -0.37796 &  0.75593 \\
 0       &  3.56571 & -1.00160 & -1.92308 \\
 0       &  0       & -1.68936 & -0.21283 \\
 0       &  0       &  0       & -0.82062 \\
 0       &  0       &  0       &  0.10783
\end{bmatrix}
\tag{8.4-52}
\]

Again, note that the first two columns and rows are unchanged. Finally, the last column entries lead to the Householder transformation,

(8.4-53)

Applying H_4 to A^(3) yields the extended upper-triangular matrix, R̃,

(8.4-54)


Expressing the overall transformation as the product of the H_k, we obtain

    (H_4 H_3 H_2 H_1)A = R̃        (8.4-55)

Therefore, Eq. (8.4-38) implies that Q̃^T = H_4 H_3 H_2 H_1. Transposing the product, while noting that each H_k is symmetric, leads to Q̃ = H_1 H_2 H_3 H_4, i.e.,

(8.4-56)

Recall that the QR factorization, A = QR, is unique if we require that the diagonal elements of R be positive. The diagonal entries of R in Eq. (8.4-54) imply that we must change the sign of the first, third, and fourth rows of R. This will require that we change the signs of the corresponding columns of Q.

We have shown how to compute the extended QR factorization, A = Q̃R̃, using Householder transformations. To solve the least-square problem, it is not necessary to compute Q̃ first and then the product Q̃^T b. Instead, it is more efficient to apply the H_k to b = {b_1 ⋯ b_M}^T during the transformation of A by calculating the inner product of u_k with the elements of b_(k:M), i.e.,

    for k = 1, …, N
        α = 2u_k^T b_(k:M)
        b_(k:M) = b_(k:M) − αu_k

Similarly, for computational efficiency, the transformation, H_k A^(k−1), is not calculated as a matrix product, but rather by the inner products of u_k with the columns of A^(k−1) restricted to rows k through M. Specifically, we can calculate the jth column via


    α = u_k^T a^(k−1)_(k:M, j)
    a^(k)_(k:M, j) = a^(k−1)_(k:M, j) − (2α)u_k        (8.4-57)

Also, since R̃ is equal to zero in the lower-triangular part, the reflection vectors, u_k = {u_(k,1) ⋯ u_(k,M−k+1)}^T, can be stored there. Since the r_(k,k) occupy the diagonal entries, the first elements of the u_k should be saved in a separate N-dimensional vector. This leads us to the below QR factorization algorithm using Householder transformations.

Householder QR algorithm. Let A = [a_(i,j)] be an M × N matrix with full column rank. The following algorithm transforms A so that it contains the N × N upper-triangular factor, R = [r_(i,j)]. The reflection vectors, u_k, are stored with u_(k,2), …, u_(k,M−k+1) overwriting a_(k+1,k), …, a_(M,k). The elements, u_(k,1), are stored as the N-dimensional vector u = {u_(1,1) ⋯ u_(N,1)}^T.

    for k = 1, …, N                                  (loop over columns of A)
        s = sgn(a_(k,k)) ||a_(k:M,k)||_2             (calculate sgn(x_1)||x||_2)
        m = sqrt(2s(s + a_(k,k)))                    (calculate ||ũ_k||_2)
        a_(k,k) = a_(k,k) + s                        (x_1 = x_1 + sgn(x_1)||x||_2)
        a_(k:M,k) = a_(k:M,k)/m                      (normalize ũ_k)
        for j = k+1, …, N                            (calculate columns of H_k A^(k−1))
            α = (a_(k:M,k))^T a_(k:M,j)
            a_(k:M,j) = a_(k:M,j) − (2α)a_(k:M,k)
        u_k = a_(k,k)                                (save u_(k,1))
        a_(k,k) = −s                                 (overwrite a_(k,k) with r_(k,k))
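A compact NumPy transcription of the algorithm (ours) is given below. For clarity it returns the unit reflection vectors in a list instead of packing them into the lower triangle of A; the diagonal magnitudes of R reproduce those of the worked example.

```python
import numpy as np

# Householder QR: reduce A to the N x N upper-triangular factor R.
def householder_qr(A):
    A = np.array(A, dtype=float)
    M, N = A.shape
    us = []
    for k in range(N):
        x = A[k:, k]
        s = np.linalg.norm(x)
        if x[0] < 0.0:
            s = -s                    # s = sgn(x1) ||x||_2, sgn(0) taken as +1
        u = x.copy()
        u[0] += s                     # u~ = x + sgn(x1) ||x||_2 e1
        u /= np.sqrt(2.0 * s * u[0])  # ||u~||_2^2 = 2 s (s + x1)
        A[k:, k:] -= 2.0 * np.outer(u, u @ A[k:, k:])   # apply H_k, Eq. (8.4-57)
        us.append(u)
    return us, np.triu(A[:N, :])      # diagonal holds r_{k,k} = -s_k

A = np.array([[2.0, 1, 1, 0], [-1, -3, 1, 2], [0, 2, 0, -1],
              [1, -1, 1, 0], [1, 0, -1, 0]])
us, R = householder_qr(A)
```

Note that each reflection is applied as a rank-one update, never as an explicit M × M matrix product.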

8.4.2.5 Givens transformation method

The Householder transformation provides an efficient and numerically stable approach to introduce a block of zeros into a matrix column. Often, however, there is a need to zero out particular locations in a matrix. The Givens transformation, which is based on planar rotation, allows us to locally introduce zeros into a matrix. Consider the vector x = {x_1 ⋯ x_i ⋯ x_j ⋯ x_M}^T, and say we want to zero out x_j using x_i.


Define the M × M Givens rotation matrix, G_(i,j)(θ), that rotates the ith and jth coordinates by an angle θ in a clockwise sense, i.e.,

\[
G_{i,j}(\theta) = \begin{bmatrix}
1 & & & & & \\
& \ddots & & & & \\
& & c & \cdots & s & \\
& & \vdots & \ddots & \vdots & \\
& & -s & \cdots & c & \\
& & & & & \ddots
\end{bmatrix},
\qquad c = \cos\theta,\;\; s = \sin\theta
\tag{8.4-58}
\]

where c appears in the (i,i) and (j,j) positions, s in the (i,j) position, and −s in the (j,i) position. The angle, θ, is defined implicitly by letting

    cos θ = x_i / sqrt(x_i² + x_j²)    and    sin θ = x_j / sqrt(x_i² + x_j²)        (8.4-59)

Straightforward calculation leads to

    G_(i,j)(θ)x = {x_1 ⋯ sqrt(x_i² + x_j²) ⋯ 0 ⋯ x_M}^T        (8.4-60)

where sqrt(x_i² + x_j²) appears in the ith position and 0 in the jth. Observe that G_(i,j)(θ) is orthonormal and only modifies the ith and jth elements of x, while also preserving its vector length. Clearly, by applying the appropriate sequence of Givens transformations, say from the (i+1)st to the Mth positions, we can zero out all the elements of x below x_i. Therefore, similar to the Householder algorithm, we can transform a matrix to upper-triangular form by deleting the elements below the diagonal, column by column.

Let us transform the first column of the matrix A defined in Eq. (8.4-46) so that the elements below the first row are zeros. The Givens transformation that rotates the first and second rows to zero out a_(2,1) will be


    c = a_(1,1) / sqrt(a_(1,1)² + a_(2,1)²) = 0.894427
    s = a_(2,1) / sqrt(a_(1,1)² + a_(2,1)²) = −0.447214
    θ_(1,2) = −0.463648        (8.4-61)

Premultiplying G_(1,2)(θ_(1,2)) to A leads to A^(1,2) = [a^(1,2)_(i,j)],

\[
A^{(1,2)} = G_{1,2}(\theta_{1,2})A = \begin{bmatrix}
2.23607 &  2.23607 &  0.44721 & -0.89443 \\
0       & -2.23607 &  1.34164 &  1.78885 \\
0       &  2.00000 &  0       & -1.00000 \\
1.00000 & -1.00000 &  1.00000 &  0 \\
1.00000 &  0       & -1.00000 &  0
\end{bmatrix}
\tag{8.4-62}
\]

Since a^(1,2)_(3,1) = 0, the Givens transformation, G_(1,3)(θ), is the identity, so that A^(1,3) = A^(1,2). Moving to the fourth row, G_(1,4)(θ) is defined by

    c = a^(1,3)_(1,1) / sqrt((a^(1,3)_(1,1))² + (a^(1,3)_(4,1))²) = 0.912871
    s = a^(1,3)_(4,1) / sqrt((a^(1,3)_(1,1))² + (a^(1,3)_(4,1))²) = 0.408248
    θ_(1,4) = 0.420534        (8.4-63)

Applying G_(1,4)(θ_(1,4)) yields A^(1,4) = [a^(1,4)_(i,j)] and

\[
A^{(1,4)} = G_{1,4}(\theta_{1,4})A^{(1,3)} = \begin{bmatrix}
2.44949 &  1.63299 &  0.81650 & -0.81650 \\
0       & -2.23607 &  1.34164 &  1.78885 \\
0       &  2.00000 &  0       & -1.00000 \\
0       & -1.82574 &  0.73030 &  0.36515 \\
1.00000 &  0       & -1.00000 &  0
\end{bmatrix}
\tag{8.4-64}
\]


Finally, the fifth-row element is removed by G_(1,5)(θ), defined by

    c = a^(1,4)_(1,1) / sqrt((a^(1,4)_(1,1))² + (a^(1,4)_(5,1))²) = 0.925820
    s = a^(1,4)_(5,1) / sqrt((a^(1,4)_(1,1))² + (a^(1,4)_(5,1))²) = 0.377964
    θ_(1,5) = 0.387597        (8.4-65)

which leads to

\[
A^{(1,5)} = G_{1,5}(\theta_{1,5})A^{(1,4)} = \begin{bmatrix}
2.64575 &  1.51186 &  0.37796 & -0.75593 \\
0       & -2.23607 &  1.34164 &  1.78885 \\
0       &  2.00000 &  0       & -1.00000 \\
0       & -1.82574 &  0.73030 &  0.36515 \\
0       & -0.61721 & -1.23443 &  0.30861
\end{bmatrix}
\tag{8.4-66}
\]

We then proceed in a similar fashion to zero-out the elements in the remaining columns below the diagonal. This yields the upper-triangular matrix,

(8.4-67)

Additionally, if we accumulate the Givens transformations, we obtain

(8.4-68)


Observe that the Givens transformations produce positive diagonal elements and, therefore, yield a unique QR factorization. Note that R in Eq. (8.4-67) equals R in Eq. (8.4-54) except for sign differences in the first, third, and fourth rows. These sign differences also extend to the corresponding columns of Q in Eqs. (8.4-68) and (8.4-56). The sign of the last column of Q̃ is arbitrary since it corresponds to a vector in the orthogonal complement of R_A, which is unique up to a sign.

Just as we were able to store the Householder vectors, u_k, in the lower-triangular part of A, we could also record each Givens transformation by overwriting the zero it introduced with a scalar that represents the transformation. One way is to overwrite a_(j,i) by θ_(i,j), which we will adopt here for simplicity. Stewart (1998, 2001a,b) prefers to record the smaller of cos θ_(i,j) or sin θ_(i,j) to mitigate round-off errors when recovering the complementary value. The following algorithm describes the extended QR factorization algorithm using the Givens transformations:

Givens QR algorithm. Let A = [a_(i,j)] be an M × N matrix with full column rank. The following algorithm transforms A so that it contains the N × N triangular factor R = [r_(i,j)] with positive diagonal elements. The Givens rotation angles, θ_(i,k), overwrite a_(i,k).

    for k = 1, …, N                                  (loop over columns of A)
        for i = k+1, …, M                            (loop over rows)
            if a_(i,k) ≠ 0 then
                r = sqrt(a_(k,k)² + a_(i,k)²)
                θ = atan2(a_(i,k), a_(k,k))          (rotation angle)
                c = a_(k,k)/r
                s = a_(i,k)/r
                t = a_(k,k+1:N)                      (temporary copy of the kth row)
                a_(k,k+1:N) = c·t + s·a_(i,k+1:N)    (update the kth row)
                a_(i,k+1:N) = −s·t + c·a_(i,k+1:N)   (update the ith row)
                a_(k,k) = r                          (update a_(k,k))
                a_(i,k) = θ                          (record rotation angle)
            endif
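A direct NumPy transcription of the algorithm (ours) follows, with a temporary copy of the pivot row so the two row updates use consistent data, and with each rotation angle stored over the zero it introduces.

```python
import numpy as np

# Givens QR: triangularize A; angles overwrite the zeros they create.
def givens_qr(A):
    A = np.array(A, dtype=float)
    M, N = A.shape
    for k in range(N):
        for i in range(k + 1, M):
            if A[i, k] != 0.0:
                r = np.hypot(A[k, k], A[i, k])
                theta = np.arctan2(A[i, k], A[k, k])
                c, s = A[k, k] / r, A[i, k] / r
                row_k = A[k, k + 1:].copy()            # pre-rotation kth row
                A[k, k + 1:] = c * row_k + s * A[i, k + 1:]
                A[i, k + 1:] = -s * row_k + c * A[i, k + 1:]
                A[k, k] = r                            # positive diagonal entry
                A[i, k] = theta                        # record the angle
    return A

A = np.array([[2.0, 1, 1, 0], [-1, -3, 1, 2], [0, 2, 0, -1],
              [1, -1, 1, 0], [1, 0, -1, 0]])
G = givens_qr(A)
R = np.triu(G[:4, :])                                  # strip the stored angles
```

Because each rotation is chosen with r > 0, the diagonal of R is positive and the factorization is the unique one of Theorem 8.4-2.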


8.4.3 Singular value decomposition

Our previous discussions on the QR factorization in Section 8.4.2 and the LU and Cholesky factorizations in Section 8.3.3 highlight the importance of matrix decomposition. In this section, we will present the singular value decomposition (SVD), which is a powerful matrix factorization. In addition to providing an effective tool for analyzing and solving the least-square problem, the singular value decomposition has been used extensively in principal component analysis, model reduction, data approximation, and image compression.

8.4.3.1 Singular value decomposition theorem

Before stating the singular value decomposition theorem, we will show why the form of the decomposition is possible. Let A be an M × N matrix, with M ≥ N, and full column rank. Recall from Eq. (8.4-15) that there exist an M × N matrix, Ũ, and an N × N matrix, Ṽ, such that

    A = ŨṼ^T = ũ_1 ṽ_1^T + ⋯ + ũ_N ṽ_N^T        (8.4-69)

Now suppose we chose ṽ_k = v_k so that they define an orthonormal basis for ℝ^N. Note that since the ũ_k depend on the ṽ_k, changing the basis for ℝ^N requires a corresponding change in the vectors ũ_k that span R_A. By orthonormality of the v_k we obtain

    Av_k = Σ_(n=1)^N ũ_n (ṽ_n^T v_k) = ũ_k = ||ũ_k||_2 u_k        (8.4-70)

where u_k is the unit vector in the direction of ũ_k. Let σ_k = ||ũ_k||_2; then Eqs. (8.4-69) and (8.4-70) yield the factorization,

    A = σ_1 u_1 v_1^T + ⋯ + σ_N u_N v_N^T        (8.4-71)


The significant result that lies at the heart of the singular value decomposition theorem is that the orthonormal basis, v_1, …, v_N, can be chosen so that u_1, …, u_N are also orthonormal. This is typically stated as:

Theorem 8.4-3 (Singular Value Decomposition). Let A be an M × N matrix, M ≥ N, and rank(A) = N. Then there exist an M × N orthonormal matrix, U, an N × N orthonormal matrix, V, and an N × N diagonal matrix, Σ = diag(σ_1, …, σ_N), such that A = UΣV^T. The columns of U and V contain the left singular vectors and the right singular vectors, respectively.

The proof by induction can be found in many linear algebra texts, for example, Demmel (1997), Golub and Van Loan (2013), and Stewart (1998, 2001a,b). Instead, we will present a proof that is based on our earlier heuristic arguments. First, we recall the spectral theorem for symmetric matrices, whose proof can be found in Horn and Johnson (1990):

Theorem 8.4-4 (Spectral Theorem for Symmetric Matrices). Let A be a symmetric N × N matrix. Then A has a complete set of orthonormal eigenvectors, v_1, …, v_N, corresponding to real eigenvalues, λ_1, …, λ_N. The orthonormal N × N matrix of eigenvectors, V, diagonalizes A so that V^T A V = Λ = diag(λ_1, …, λ_N). Equivalently, A has the representation as a sum of rank-one orthogonal projectors,

    A = Σ_(n=1)^N λ_n v_n v_n^T = VΛV^T

Since A^T A is symmetric, we have by the spectral theorem,

    A^T A = Σ_(n=1)^N λ_n v_n v_n^T        (8.4-72)

where the eigenvectors, v_1, …, v_N, define an orthonormal basis for ℝ^N. Furthermore, since A^T A is positive-definite, all of the eigenvalues are positive, i.e., λ_n > 0. Herein, we will order the eigenvalues in descending order. As in Eq. (8.4-70), let ũ_n = Av_n, σ_n = ||ũ_n||_2, and u_n = ũ_n/σ_n. Then arguments similar to those leading to Eq. (8.4-71) yield

    A = σ_1 u_1 v_1^T + ⋯ + σ_N u_N v_N^T        (8.4-73)

where u_1, …, u_N are linearly independent unit vectors. The linear independence follows directly from the linear independence of v_1, …, v_N.


The proof is complete once we show that u_1, …, u_N are orthogonal. Since v_1, …, v_N are orthonormal eigenvectors of A^T A,

    u_k^T u_n = (ũ_k^T ũ_n)/(σ_k σ_n) = ((Av_k)^T (Av_n))/(σ_k σ_n)
              = (v_k^T (A^T A) v_n)/(σ_k σ_n) = (v_k^T (λ_n v_n))/(σ_k σ_n)
              = { 1, k = n;  0, k ≠ n }        (8.4-74)

If A is rank deficient, with rank(A) = R < N, then

    λ_(R+1) = ⋯ = λ_N = 0,    with σ_k = sqrt(λ_k) and k = 1, …, R        (8.4-75)

Since Av_n = 0, for n = R+1, …, N, the vectors v_(R+1), …, v_N are orthonormal vectors that span the null space of A. Observe that u_1, …, u_R define an orthonormal basis for R_A. Pick N − R orthonormal vectors, u_(R+1), …, u_N, that lie in the orthogonal complement of R_A. Then the singular value decomposition generalizes, with singular values σ_(R+1) = ⋯ = σ_N = 0, via

\[
\begin{aligned}
A &= \sigma_1 u_1 v_1^T + \cdots + \sigma_R u_R v_R^T \\
  &= \sigma_1 u_1 v_1^T + \cdots + \sigma_R u_R v_R^T
   + \underbrace{\sigma_{R+1} u_{R+1} v_{R+1}^T + \cdots + \sigma_N u_N v_N^T}_{0} \\
  &= U_{1:R}\,\Sigma_{1:R}\,V_{1:R}^T + U_{R+1:N}\,\Sigma_{R+1:N}\,V_{R+1:N}^T \\
  &= [\,U_{1:R} \mid U_{R+1:N}\,]
     \begin{bmatrix} \Sigma_{1:R} & 0_{R \times N-R} \\ 0_{N-R \times R} & \Sigma_{R+1:N} \end{bmatrix}
     [\,V_{1:R} \mid V_{R+1:N}\,]^T
\end{aligned}
\tag{8.4-76}
\]

where we have used the notation, $U_{k:m}$, to denote the matrix with columns $u_k, \ldots, u_m$. Similarly for $V_{k:m}$ and $S_{k:m}$.

As an example, consider the following $4 \times 3$ matrix:

$$A = \begin{bmatrix} 1 & 0 & -1 \\ 1 & 1 & 0 \\ 0 & 1 & 1 \\ 2 & 1 & -1 \end{bmatrix} \tag{8.4-77}$$

Observe that $\mathrm{rank}(A) = 2$, since the second column is the sum of the first and third columns. Let us compute the singular value decomposition by first computing the eigenvectors of $A^T A$, where

$$A^T A = \begin{bmatrix} 6 & 3 & -3 \\ 3 & 3 & 0 \\ -3 & 0 & 3 \end{bmatrix} \implies p(\lambda) = \det\left(\lambda I - A^T A\right) = \lambda(\lambda - 3)(\lambda - 9) \tag{8.4-78}$$
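These hand computations are easy to cross-check numerically. The sketch below assumes the entry signs of the reconstructed $4 \times 3$ matrix above.

```python
import numpy as np

# Cross-check of the 4x3 example: rank(A) = 2 and the eigenvalues of A^T A
# are 9, 3, and 0.
A = np.array([[1.0, 0.0, -1.0],
              [1.0, 1.0,  0.0],
              [0.0, 1.0,  1.0],
              [2.0, 1.0, -1.0]])

# column 2 equals column 1 plus column 3, hence the rank deficiency
assert np.allclose(A[:, 1], A[:, 0] + A[:, 2])

eigvals = np.sort(np.linalg.eigvalsh(A.T @ A))[::-1]   # descending order
rank = np.linalg.matrix_rank(A)
```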


Ordering the eigenvalues in descending order, and calculating the eigenvectors leads to

$$\lambda_1 = 9,\ \tilde{v}_1 = \begin{Bmatrix} 2 \\ 1 \\ -1 \end{Bmatrix}; \quad \lambda_2 = 3,\ \tilde{v}_2 = \begin{Bmatrix} 0 \\ 1 \\ 1 \end{Bmatrix}; \quad \lambda_3 = 0,\ \tilde{v}_3 = \begin{Bmatrix} 1 \\ -1 \\ 1 \end{Bmatrix} \tag{8.4-79}$$

Note that the eigenvectors are orthogonal. Calculating $s_k = \sqrt{\lambda_k}$ and normalizing $\tilde{v}_k$ yields the singular values and right singular vectors, respectively,

$$s_1 = 3,\ v_1 = \frac{1}{\sqrt{6}} \begin{Bmatrix} 2 \\ 1 \\ -1 \end{Bmatrix}; \quad s_2 = \sqrt{3},\ v_2 = \frac{1}{\sqrt{2}} \begin{Bmatrix} 0 \\ 1 \\ 1 \end{Bmatrix}; \quad s_3 = 0,\ v_3 = \frac{1}{\sqrt{3}} \begin{Bmatrix} 1 \\ -1 \\ 1 \end{Bmatrix} \tag{8.4-80}$$

The left singular vectors, $u_1$ and $u_2$, can be calculated by normalizing the products, $\tilde{u}_1 = A v_1$ and $\tilde{u}_2 = A v_2$, to yield

$$u_1 = \frac{1}{\sqrt{6}} \begin{Bmatrix} 1 \\ 1 \\ 0 \\ 2 \end{Bmatrix} \quad \text{and} \quad u_2 = \frac{1}{\sqrt{6}} \begin{Bmatrix} -1 \\ 1 \\ 2 \\ 0 \end{Bmatrix} \tag{8.4-81}$$

To obtain $u_3$, we need to find a vector that is orthogonal to $u_1$ and $u_2$. Let $\tilde{u}_3 = \{u_1\ u_2\ u_3\ u_4\}^T$. Then the orthogonality conditions lead to the following system of linear equations,

$$\begin{Bmatrix} u_1^T \tilde{u}_3 = 0 \\ u_2^T \tilde{u}_3 = 0 \end{Bmatrix} \implies \begin{bmatrix} 1 & 1 & 0 & 2 \\ -1 & 1 & 2 & 0 \end{bmatrix} \begin{Bmatrix} u_1 \\ u_2 \\ u_3 \\ u_4 \end{Bmatrix} = \begin{Bmatrix} 0 \\ 0 \end{Bmatrix} \tag{8.4-82}$$

8.4 Linear least-square problems


There are infinitely many solutions to Eq. (8.4-82). For this example, we elect $\tilde{u}_3 = \{2\ \ 0\ \ 1\ \ {-1}\}^T$. Normalizing $\tilde{u}_3$ yields $u_3 = \left\{2/\sqrt{6}\ \ 0\ \ 1/\sqrt{6}\ \ {-1}/\sqrt{6}\right\}^T$. Therefore, the extended singular value decomposition of $A$ is

$$A = \begin{bmatrix} u_1 & u_2 & u_3 \end{bmatrix} \begin{bmatrix} 3 & 0 & 0 \\ 0 & \sqrt{3} & 0 \\ 0 & 0 & 0 \end{bmatrix} \begin{bmatrix} v_1 & v_2 & v_3 \end{bmatrix}^T \tag{8.4-83}$$

The extension of the singular value decomposition to rank deficient matrices produced a left singular factor, $U$, that is not square if $M > N$. By extending $U$ to include the remaining $M - N$ orthonormal basis vectors for the orthogonal complement of $\mathcal{R}_A$, and augmenting $S$ with $M - N$ rows of zeros and $N - R$ columns of zeros, we obtain the following generalization of Theorem 8.4-3:

Theorem 8.4-5 (General Singular Value Decomposition) Let $A$ be a $M \times N$ matrix, $M \geq N$ and $\mathrm{rank}(A) = R$. Then there exists an $M \times M$ orthonormal matrix, $U = [\,U_{1:R}\ |\ U_{R+1:M}\,]$, an $N \times N$ orthonormal matrix, $V = [\,V_{1:R}\ |\ V_{R+1:N}\,]$, and an $R \times R$ diagonal matrix, $S = \mathrm{diag}(s_1, \ldots, s_R)$ with $s_1 \geq \cdots \geq s_R > 0$, such that

$$A = \begin{bmatrix} U_{1:R} & U_{R+1:M} \end{bmatrix} \begin{bmatrix} S & 0_{R \times N-R} \\ 0_{M-R \times R} & 0_{M-R \times N-R} \end{bmatrix} \begin{bmatrix} V_{1:R} & V_{R+1:N} \end{bmatrix}^T = U_{1:R}\, S\, V_{1:R}^T \tag{8.4-84}$$

In practice, the computed singular value decomposition of a rank deficient matrix will possess very small positive singular values instead of zeros. This raises the question of how to detect the near-zero singular values. Truncating the SVD for singular values less than a specified tolerance defines what is known as the matrix's numerical rank. The determination of a matrix's numerical rank is straightforward if there exists a significant gap between a pair of consecutive singular values that clearly separates the "small" $s_n$. We refer the reader to Björck (1996) for further


discussion. As a final remark, Theorem 8.4-5 easily extends to matrices with $M < N$ by simply transposing the singular value decomposition of $A^T$.

To illustrate how the singular value decomposition can be used to solve the least-square problem, Eq. (8.4-1), let us consider the $5 \times 4$ matrix, $A$, defined in (8.4-46), and let $b = \{2\ {-1}\ {-2}\ 3\ 3\}^T$. The singular value decomposition of $A$ was computed using the single-precision singular value decomposition routine, SGESVD, in LAPACK (Dongarra, 1999). The factors are shown below rounded to five decimal digits.

$$U = \begin{bmatrix} 0.26047 & 0.80132 & 0.15494 & 0.49300 \\ 0.83110 & 0.08916 & 0.17861 & 0.35461 \\ 0.45520 & 0.13092 & 0.44336 & 0.06611 \\ 0.15635 & 0.56205 & 0.05360 & 0.77785 \\ 0.09891 & 0.12998 & 0.86293 & 0.14757 \end{bmatrix}$$

$$S = \begin{bmatrix} 4.63008 & 0 & 0 & 0 \\ 0 & 2.60637 & 0 & 0 \\ 0 & 0 & 1.49479 & 0 \\ 0 & 0 & 0 & 0.73125 \end{bmatrix}$$

$$V = \begin{bmatrix} 0.27960 & 0.84620 & 0.45362 & 0.00150 \\ 0.82515 & 0.11129 & 0.30255 & 0.46391 \\ 0.17838 & 0.50743 & 0.83628 & 0.10641 \\ 0.45731 & 0.11865 & 0.05763 & 0.87947 \end{bmatrix} \tag{8.4-85}$$

Recall that the least-square solution, $x$, solves for the projection of $b$ onto $\mathcal{R}_A$. Since the orthonormal columns of $U$ span $\mathcal{R}_A$, the projection of $b$ onto the range of $A$ is equal to $\left(U U^T\right) b$. Substituting the singular value decomposition of $A$ into Eq. (8.4-1) implies that $x$ will be the solution of

$$U S V^T x = U U^T b \tag{8.4-86}$$

Premultiplying the above by $U^T$ and noting that $U$ is orthonormal yields

$$S V^T x = U^T b \tag{8.4-87}$$


Since $S$ is diagonal and nonsingular, we obtain

$$V^T x = S^{-1} U^T b \tag{8.4-88}$$

Note that $V$ is a $4 \times 4$ orthonormal matrix with inverse $V^T$. Hence, premultiplying Eq. (8.4-88) by $V$, and substituting the computed singular value decomposition factors in Eq. (8.4-85), leads to the least-square solution,

$$x = V S^{-1} U^T b = \begin{Bmatrix} 2.22988 \\ 1.64368 \\ 0.83908 \\ 1.40230 \end{Bmatrix} \tag{8.4-89}$$

That $x$ provides a "good" approximation is checked by calculating the residual,

$$r = A x - b = \begin{Bmatrix} 1.97701 \\ -0.94253 \\ -1.88506 \\ 3.03448 \\ 3.06897 \end{Bmatrix} - \begin{Bmatrix} 2 \\ -1 \\ -2 \\ 3 \\ 3 \end{Bmatrix} = \begin{Bmatrix} -0.02299 \\ 0.05747 \\ 0.11494 \\ 0.03448 \\ 0.06897 \end{Bmatrix} \tag{8.4-90}$$

In terms of the 2-norm, the residual error is $\|r\|_2 / \|b\|_2 \approx 2.9\%$. The small residual error indicates that $b$ is approximated reasonably well by its projection, $\left(U U^T\right) b = A x$. Let us look at another example where $b = \{{-1}\ \ 2\ \ 4\ \ 1\ \ 3\}^T$ differs significantly from its projection onto $\mathcal{R}_A$,

$$\left(U U^T\right) b = \begin{Bmatrix} -0.16092 \\ -0.09770 \\ -0.19540 \\ -0.25862 \\ 0.48276 \end{Bmatrix} \ll b \tag{8.4-91}$$
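The SVD route of Eqs. (8.4-86) through (8.4-89) takes only a few lines in code. The matrix below is a random stand-in, not the matrix of Eq. (8.4-46), which is not repeated here; the recipe is the same for any full-column-rank $A$.

```python
import numpy as np

# Least squares via the SVD: x = V S^{-1} U^T b.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 4))                   # stand-in 5x4 matrix
b = np.array([2.0, -1.0, -2.0, 3.0, 3.0])

U, s, Vt = np.linalg.svd(A, full_matrices=False)  # thin SVD: U is 5x4
x = Vt.T @ ((U.T @ b) / s)                        # x = V S^-1 U^T b
r = A @ x - b                                     # residual, orthogonal to R_A
rel_err = np.linalg.norm(r) / np.linalg.norm(b)
```

The residual of the least-square solution satisfies $A^T r = 0$, which is a convenient sanity check.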


Computing the least-square solution and residual as before, we find that

$$x = V S^{-1} U^T b = \begin{Bmatrix} 0.10920 \\ 0.00575 \\ 0.37356 \\ 0.18391 \end{Bmatrix} \quad \text{and} \quad r = \begin{Bmatrix} 0.83908 \\ -2.09770 \\ -4.19540 \\ -1.25862 \\ -2.51724 \end{Bmatrix} \tag{8.4-92}$$

For this problem, the 2-norm residual error is equal to $\|r\|_2 / \|b\|_2 \approx 99.4\%$, which clearly indicates that the least-square approach provides a poor approximation to $b$.

8.4.3.2 Pseudo-inverse

In Section 8.3, we examined algorithms for solving the linear system of equations, $A x = b$, where $A$ is an $N \times N$ nonsingular matrix. Although we never explicitly calculated $A^{-1}$, the algorithms in effect calculated the solution, $x = A^{-1} b$. This raises the following question: is there a generalization of $A^{-1}$ for $M \times N$ rectangular matrices that is applicable to least-square problems? Inspection of Eq. (8.4-89) suggests that the $N \times M$ matrix, $V S^{-1} U^T$, acts like the inverse of $A$. This observation leads to the concept of the pseudo-inverse or Moore-Penrose inverse. Earlier we noted that if $A$ is rank-deficient, then some of its singular values will be zero and, therefore, $S^{-1}$ is not defined. This requires that we broaden the definition of the inverse of $S$. We start by defining the pseudo-inverse of a diagonal matrix:

Definition Let $D = \mathrm{diag}(d_1, d_2, \ldots, d_N)$. Then $D^{\dagger}$ denotes the pseudo-inverse of $D$, and is defined by $D^{\dagger} = \mathrm{diag}\left(d_1^{\dagger}, d_2^{\dagger}, \ldots, d_N^{\dagger}\right)$, where

$$d_n^{\dagger} = \begin{cases} d_n^{-1} & \text{if } d_n \neq 0 \\ 0 & \text{if } d_n = 0 \end{cases}$$

If $D$ is nonsingular, then $D^{\dagger}$ equals the standard inverse, $D^{-1}$. The pseudo-inverse for arbitrary matrices is, therefore, defined as

Definition Let $A$ be an $M \times N$ matrix with singular value decomposition, $A = U S V^T$. Then $A^{\dagger}$ denotes the pseudo-inverse of $A$ and is defined by


$A^{\dagger} = V S^{\dagger} U^T$. In view of this definition, the least-square solution in Eq. (8.4-89) can be expressed in terms of the pseudo-inverse, i.e.,

$$x = A^{\dagger} b$$

(8.4-93)
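A sketch of $A^{\dagger} = V S^{\dagger} U^T$ in code follows directly from the diagonal definition above: invert the singular values above a tolerance and leave the rest at zero. The tolerance value here is an illustrative choice.

```python
import numpy as np

# Pseudo-inverse via the SVD, A+ = V S+ U^T, with the diagonal rule
# s_n -> 1/s_n if s_n > tol, else 0.
def pinv_svd(A, tol=1e-12):
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s_dag = np.array([1.0 / sn if sn > tol else 0.0 for sn in s])
    return Vt.T @ (s_dag[:, None] * U.T)

A = np.array([[1.0, 0.0, -1.0],    # rank-deficient: rank(A) = 2
              [1.0, 1.0,  0.0],
              [0.0, 1.0,  1.0],
              [2.0, 1.0, -1.0]])
A_dag = pinv_svd(A)
```

For this rank-2 matrix the result matches `np.linalg.pinv`, which applies the same truncation rule with a relative cutoff.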

Let us examine some of the properties of the pseudo-inverse. We will consider the case when $A$ has full column rank and $M \geq N$. Then, $S^{\dagger} = S^{-1}$ and $A^{\dagger} = V S^{-1} U^T$. Calculating the $N \times N$ product, $A^{\dagger} A$, we find that

$$A^{\dagger} A = \left(V S^{-1} U^T\right)\left(U S V^T\right) = V S^{-1} \left(U^T U\right) S V^T = V S^{-1} \cdot I_N \cdot S V^T = V \left(S^{-1} S\right) V^T = V \cdot I_N \cdot V^T = V V^T = I_N \tag{8.4-94}$$

where the second and last equalities follow from orthonormality of $U$ and $V$. Eq. (8.4-94) implies that $A^{\dagger}$ is the left inverse of $A$. Reversing the order, we find that the $M \times M$ product, $A A^{\dagger}$, is given by

$$A A^{\dagger} = \left(U S V^T\right)\left(V S^{-1} U^T\right) = U S \left(V^T V\right) S^{-1} U^T = U S \cdot I_N \cdot S^{-1} U^T = U \left(S S^{-1}\right) U^T = U \cdot I_N \cdot U^T = U U^T \tag{8.4-95}$$

Therefore, $A^{\dagger}$ is not exactly a right inverse since $A A^{\dagger} \neq I_M$ for $M > N$. This is to be expected since the product of $A$ with any other matrix will have a range that is contained in $\mathcal{R}_A$, which is not all of $\mathbb{R}^M$ if $M > N$. However, Eq. (8.4-95) implies that $A A^{\dagger}$ is equal to the orthogonal projector onto $\mathcal{R}_A$ and, therefore, is equal to the identity matrix $I_M$ on $\mathcal{R}_A$, since for all $x \in \mathcal{R}_A$, $A A^{\dagger} x = U U^T x = x = I_M x$.

Substituting the singular value decomposition into the normal Eq. (8.4-5) provides further insight into the pseudo-inverse. If $A$ has full column rank, then $A^T A$ is a symmetric positive-definite matrix and, therefore, is invertible. Premultiplying Eq. (8.4-5) by $\left(A^T A\right)^{-1}$ yields

$$x = \left(A^T A\right)^{-1} A^T b \tag{8.4-96}$$

Comparing the above with Eq. (8.4-93) suggests that $A^{\dagger} = \left(A^T A\right)^{-1} A^T$. Straightforward algebra shows why this is true. Substituting the singular value decomposition for $A$ leads to


$$\begin{aligned} \left(A^T A\right)^{-1} A^T &= \left[\left(U S V^T\right)^T \left(U S V^T\right)\right]^{-1} \left(U S V^T\right)^T = \left[V S U^T U S V^T\right]^{-1} V S U^T \\ &= \left(V S^2 V^T\right)^{-1} V S U^T = V S^{-2} \left(V^T V\right) S U^T = V S^{-2} S\, U^T = V S^{-1} U^T = V S^{\dagger} U^T = A^{\dagger} \end{aligned} \tag{8.4-97}$$

The following theorem summarizes the properties of the pseudo-inverse.

Theorem 8.4-6 (Properties of Pseudo-inverse) Let $A$ be an $M \times N$ matrix with $M \geq N$ and $\mathrm{rank}(A) = R$, and singular value decomposition given in Eq. (8.4-84). If $R = N$, let $A = QR$ be its QR factorization. Then $A$ has the pseudo-inverse,

$$A^{\dagger} = V_{1:R}\, S^{-1}\, U_{1:R}^T$$

Furthermore, the following properties hold:

1. (a) $\|A\|_2 = s_1$  (b) $\left\|A^{\dagger}\right\|_2 = s_R^{-1}$
2. (a) $A^{\dagger} = \left(A^T A\right)^{-1} A^T$  (b) $A^{\dagger} = R^{-1} Q^T$, if $R = N$
3. $\left(A^{\dagger}\right)^{\dagger} = A$
4. $\left(A^T\right)^{\dagger} = \left(A^{\dagger}\right)^T$
5. (a) $A A^{\dagger} A = A$  (b) $A^{\dagger} A A^{\dagger} = A^{\dagger}$  (c) $\left(A A^{\dagger}\right)^T = A A^{\dagger}$  (d) $\left(A^{\dagger} A\right)^T = A^{\dagger} A$
6. (a) $A^{\dagger} A = V_{1:R} V_{1:R}^T$ = orthogonal projector onto $\mathcal{R}_{A^T}$
   (b) $A A^{\dagger} = U_{1:R} U_{1:R}^T$ = orthogonal projector onto $\mathcal{R}_A$
   (c) $I_N - A^{\dagger} A = V_{R+1:N} V_{R+1:N}^T$ = orthogonal projector onto $\mathcal{N}_A$
   (d) $I_M - A A^{\dagger} = U_{R+1:M} U_{R+1:M}^T$ = orthogonal projector onto $\mathcal{N}_{A^T}$
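The Penrose conditions (Property 5) and the projector identities are easy to verify numerically; the $5 \times 3$ full-column-rank matrix below is a random stand-in for illustration.

```python
import numpy as np

# Numerical check of Property 5 (Penrose conditions) and Property 6
# (projector identities) for a full-column-rank matrix.
rng = np.random.default_rng(1)
A = rng.standard_normal((5, 3))
A_dag = np.linalg.pinv(A)

penrose_5a = np.allclose(A @ A_dag @ A, A)
penrose_5b = np.allclose(A_dag @ A @ A_dag, A_dag)
penrose_5c = np.allclose((A @ A_dag).T, A @ A_dag)
penrose_5d = np.allclose((A_dag @ A).T, A_dag @ A)

left_inverse = A_dag @ A     # equals I_N when rank(A) = N
P = A @ A_dag                # orthogonal projector onto R_A (Property 6b)
```

Note that $P$ is idempotent but is not the identity on all of $\mathbb{R}^5$, exactly as Eq. (8.4-95) predicts.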


We have already proven Property 2a. Property 2b follows by substituting $A = QR$ into 2a and algebraic manipulation. The proofs of 6a and 6b are simple extensions of the arguments we presented for when $A$ has full column rank. Properties 6c and 6d follow directly from our discussion on the complementary orthogonal projectors and the results from Section 8.3.1.1 that showed $\mathcal{N}_A = \mathcal{R}_{A^T}^{\perp}$ and $\mathcal{N}_{A^T} = \mathcal{R}_A^{\perp}$. Properties 3, 4, and 5 can be shown by direct substitution of the singular value decomposition of $A$ and $A^{\dagger}$. Properties 5a-5d are known as the Penrose conditions that algebraically characterize the pseudo-inverse.

To prove Property 1a, first recall from Eq. (8.3-6) that, in terms of the 2-norm,

$$\|A\|_2 = \max_{\|x\|_2 = 1} \|A x\|_2 \tag{8.4-98}$$

The right singular vectors, $v_1, \ldots, v_R, \ldots, v_N$, from the singular value decomposition of $A$, define an orthonormal basis for $\mathbb{R}^N$. Hence, for a unit vector, $x$,

$$x = \sum_{n=1}^{N} a_n v_n, \qquad a_n = v_n^T x, \qquad \|x\|_2^2 = \sum_{n=1}^{N} a_n^2 = 1 \tag{8.4-99}$$

By the singular value decomposition, $A = \sum_{n=1}^{R} s_n u_n v_n^T$ and, therefore,

$$A x = \sum_{n=1}^{R} s_n u_n \left(v_n^T x\right) = \sum_{n=1}^{R} s_n u_n \left(a_n v_n^T v_n\right) = \sum_{n=1}^{R} a_n s_n u_n \tag{8.4-100}$$

Since $u_1, \ldots, u_R$ are orthonormal,

$$\|A x\|_2^2 = \sum_{n=1}^{R} a_n^2 s_n^2 \leq s_1^2 \sum_{n=1}^{R} a_n^2 \leq s_1^2 \sum_{n=1}^{N} a_n^2 = s_1^2 \tag{8.4-101}$$

where the inequality follows from the ordering of the singular values, i.e., $s_1 \geq s_2 \geq \cdots \geq s_R > 0$, and Eq. (8.4-99). Therefore, by Eqs. (8.4-98) and (8.4-101), we conclude that $\|A\|_2 \leq s_1$. That the upper bound, $s_1$, can be achieved follows from letting $x = v_1 \implies A x = s_1 u_1$. Hence, $\|A\|_2 = s_1$.
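Property 1a is simple to demonstrate numerically: the 2-norm equals the largest singular value and is attained at the first right singular vector. The matrix below is a random stand-in.

```python
import numpy as np

# ||A||_2 = s1, with the maximum of ||Ax||_2 over unit vectors attained at v1.
rng = np.random.default_rng(2)
A = rng.standard_normal((6, 4))

U, s, Vt = np.linalg.svd(A)
v1 = Vt[0]                    # first row of V^T is v1^T, a unit vector
```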


To show Property 1b, substitute $A^{\dagger}$ for $A$ in Property 1a and note that $s_R^{-1}$ equals the largest singular value of $A^{\dagger}$. Additional details of the properties in Theorem 8.4-6 can be found in Demmel (1997) and Björck (1996).

8.4.4 Error analysis

In this section, we will briefly examine the numerical aspects of the methods that were presented to solve the least-square problem. Specifically, we will consider the methods based on the normal equation and various implementations of the QR factorization. We refer the reader to Björck (1996), Golub and Van Loan (2013), Stewart (1998, 2001a,b), and Demmel (1997) for detailed error analysis discussions of SVD-based solutions. We end this section with some error estimates for the least-square problem.

Let us start with the example that we introduced earlier, where we sought to approximate the decaying sinusoid defined in Eq. (8.4-3) by polynomial functions. Four methods will be considered: the solution of the normal equation, and the solution by QR factorization via the classical Gram-Schmidt (CGS), modified Gram-Schmidt (MGS), and the Householder transformation. In order to demonstrate the sensitivity and errors that arise from these various methods, we will perform all computations in single precision. It is worth noting that the Vandermonde matrix in Eq. (8.4-4) is highly ill conditioned and, therefore, provides an example that stresses the numerical robustness of the methods. The condition number, $\kappa(A)$, for an $M \times N$ matrix, $A$, with singular values, $s_1 \geq \cdots \geq s_R > s_{R+1} = \cdots = s_N = 0$, is defined by

$$\kappa(A) = \|A\|_2 \left\|A^{\dagger}\right\|_2 = \frac{s_1}{s_R} \tag{8.4-102}$$

Hence, $\kappa(A)$ can be very large if $A$ possesses extremely large and small singular values. Fig. 8.4-4 plots $\kappa(A)$ versus polynomial degrees 3-14 for the Vandermonde matrix defined in Eq. (8.4-4). It indicates that for high-order polynomials, significant amplification of numerical errors can occur. Solving the normal equation by application of the Cholesky factorization requires that $A^T A$ be positive-definite, which is true analytically. However, using single-precision arithmetic, the factorization failed for polynomial degrees greater than eight. This is not unexpected since the condition number of $A^T A$ is equal to the square of $\kappa(A)$.
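The growth of $\kappa(A)$ with polynomial degree is easy to reproduce. The 51 equally spaced sample points on $[0, 1]$ below are an assumption of this sketch; the text's sample points for $f(t)$ may differ, but the qualitative growth is the same.

```python
import numpy as np

# kappa(A) = s1/sR for Vandermonde matrices of increasing polynomial degree.
t = np.linspace(0.0, 1.0, 51)
kappa = {}
for degree in range(3, 15):
    A = np.vander(t, degree + 1, increasing=True)
    s = np.linalg.svd(A, compute_uv=False)
    kappa[degree] = s[0] / s[-1]
```

The condition number grows by many orders of magnitude across this degree range, consistent with the trend shown in Fig. 8.4-4.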
Therefore, to examine the solution for higher polynomial degrees, the normal equation using the LU factorization was used instead. Fig. 8.4-5 compares the polynomial


FIGURE 8.4-4 Condition number of Vandermonde matrix versus polynomial degree.

FIGURE 8.4-5 Polynomial approximations by solving the normal equation.

approximations for degrees equal to 6, 10, 11, 12, 18, and 19. The polynomial fits for degrees 6, 10, 12, and 18 provide reasonable approximations to $f(t)$. However, the poor fit of the eleven-degree polynomial indicates an incipient numerical instability. This example illustrates the drawback of


FIGURE 8.4-6 Polynomial approximations by the CGS QR factorization.

solving the least-square problem via the normal equation, and also using high-order polynomials for approximating functions. The least-square polynomial approximations were obtained by computing the QR factorization of $A$, in Eq. (8.4-4), and then solving for $x$ via backward substitution in Eq. (8.4-28). Figs. 8.4-6, 8.4-7, and 8.4-8 compare the polynomial fits that resulted using the CGS, MGS, and Householder QR factorizations, respectively. The CGS QR factorization provided poor approximations for all degrees shown. The MGS QR-based method produced reasonable fits up to degree 10 and stable fits for degrees 11 and 12. The QR factorization by Householder transformation produced the "best" and most stable approximations among all the methods considered, especially at the higher degrees where the other methods failed. As we indicated earlier, this was to be expected since the Householder transformations are orthonormal and, hence, produce a Q-factor that is orthonormal with respect to machine precision. It is worth noting that orthonormal transformations are stable and do not change the condition number. To see this, first express $A$ via its singular value decomposition, $A = U S V^T$. Let $Q^T = H_N H_{N-1} \cdots H_2 H_1$ represent the product of the Householder transformations. Since each $H_k$ is


FIGURE 8.4-7 Polynomial approximations by the MGS QR factorization.

orthonormal, $Q^T$ is also orthonormal. Hence, the Householder QR method yields the singular value decomposition of $R$ via

$$Q^T A = Q^T \left(U S V^T\right) = \left(Q^T U\right) S V^T = \widetilde{U}\, S V^T = R \tag{8.4-103}$$

where $\widetilde{U} = Q^T U$ equals the orthonormal matrix consisting of the left singular vectors of $R$. Therefore, the singular values of $R$ are equal to the singular values of $A$. By the definition of condition number, Eq. (8.4-102), we conclude that $\kappa(A) = \kappa\left(Q^T A\right) = \kappa(R)$.

Fig. 8.4-9 compares the 2-norm of the residuals among all four methods. All methods yield similar polynomial approximations for degrees less than 6. The QR factorization via Householder transformation provided the most stable results over all polynomial degrees considered. The plot of its residual error suggests a "reasonable" degree range from 3 to 9 and that polynomials having degrees greater than 9 do not improve the least-square fit. The approximations via the normal equation produced inconsistent fits that appear unstable for degrees 10 and 11. The QR factorization by the CGS method had the worst performance, with increasing residual errors for polynomial degrees greater than 6. This is primarily due to errors that progressively degrade the orthogonality of $Q$. On the other hand, the residual from


FIGURE 8.4-8 Polynomial approximations by the Householder QR factorization.

FIGURE 8.4-9 2-norm residual errors of least-square polynomial approximates. QR refers to Householder QR.


the MGS factorization is significantly less than that of the CGS method. It can be shown (Björck, 1996; Stewart, 1998, 2001a,b) that numerically the MGS and Householder QR factorizations are similar. There will be slight differences in the QR factors due to implementation and round-off errors. Fig. 8.4-9 shows that the MGS QR factorization method gave comparable results to the Householder method over its reasonable degree range.

In order to gain some insight into the round-off errors that can occur from the least-square solution methods, let us consider a simple $4 \times 3$ example that can be found in Björck (1996),

$$A = \begin{bmatrix} 1 & 1 & 1 \\ \delta & 0 & 0 \\ 0 & \delta & 0 \\ 0 & 0 & \delta \end{bmatrix}, \qquad \varepsilon_{mach} < \delta \leq \sqrt{\varepsilon_{mach}} \tag{8.4-104}$$

Observe that $A$ has full column rank and, therefore, will possess a unique least-square solution. The solution of the normal equation requires the calculation of $A^T A$. Under floating point arithmetic, we find that

$$fl\left(A^T A\right) = fl\left(\begin{bmatrix} 1 + \delta^2 & 1 & 1 \\ 1 & 1 + \delta^2 & 1 \\ 1 & 1 & 1 + \delta^2 \end{bmatrix}\right) = \begin{bmatrix} 1 & 1 & 1 \\ 1 & 1 & 1 \\ 1 & 1 & 1 \end{bmatrix} \tag{8.4-105}$$

Hence, due to round-off errors, $fl\left(A^T A\right)$ is singular with rank equal to one. Therefore, unless $A^T b$ is proportional to $\{1\ 1\ 1\}^T$, a solution to the normal equation will not exist. It can be shown that the exact singular values of $A$ are equal to $\sqrt{3 + \delta^2}$, $\delta$, and $\delta$, which imply that $\kappa_2(A) = \sqrt{3 + \delta^2}\,\big/\,\delta$. Since $\kappa_2\left(A^T A\right) = \left(3 + \delta^2\right)\big/\,\delta^2$, $A^T A$ will be poorly conditioned for $\delta \ll 1$. This illustrates the problem with solving the normal equation for nearly rank-deficient matrices. The round-off errors can be significant and lead to $fl\left(A^T A\right)$ being ill conditioned and possibly singular. It is worth noting that $A$ has the pseudo-inverse,

$$A^{\dagger} = \frac{1}{\delta \left(3 + \delta^2\right)} \begin{bmatrix} \delta & 2 + \delta^2 & -1 & -1 \\ \delta & -1 & 2 + \delta^2 & -1 \\ \delta & -1 & -1 & 2 + \delta^2 \end{bmatrix} \tag{8.4-106}$$


with floating point representation given by

$$fl\left(A^{\dagger}\right) = \frac{1}{3\delta} \begin{bmatrix} \delta & 2 & -1 & -1 \\ \delta & -1 & 2 & -1 \\ \delta & -1 & -1 & 2 \end{bmatrix} \tag{8.4-107}$$
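The rank collapse of $fl\left(A^T A\right)$ is easy to reproduce in single precision; $\delta = 10^{-4}$ satisfies $\varepsilon_{mach} < \delta \leq \sqrt{\varepsilon_{mach}}$ for float32.

```python
import numpy as np

# Forming A^T A in float32 rounds 1 + delta^2 to 1, so fl(A^T A) is exactly
# the rank-one matrix of ones, while the SVD of A still resolves all three
# singular values: sqrt(3 + delta^2), delta, delta.
delta = np.float32(1e-4)
A = np.zeros((4, 3), dtype=np.float32)
A[0, :] = 1.0
A[1, 0] = A[2, 1] = A[3, 2] = delta

AtA = A.T @ A                          # every entry rounds to exactly 1
rank_AtA = np.linalg.matrix_rank(AtA)  # 1, not 3

s = np.linalg.svd(A, compute_uv=False)
```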

Use of the LAPACK singular value decomposition algorithm produces a pseudo-inverse that is given by Eq. (8.4-107) and is accurate up to machine precision. However, if $A^{\dagger}$ were calculated algebraically by $\left(A^T A\right)^{-1} A^T$, then, since $fl\left(A^T A\right)$ is singular, computing its inverse would lead to a fatal numerical error. Again, for rank-deficient matrices, computing the pseudo-inverse via $\left(A^T A\right)^{-1} A^T$ can lead to significant errors.

Continuing with our $4 \times 3$ example, let us examine the QR factorizations. Applying the CGS algorithm using floating-point precision, it can be shown that

$$fl\left(Q_{CGS}\right) = \begin{bmatrix} 1 & 0 & 0 \\ \delta & -1/\sqrt{2} & -1/\sqrt{2} \\ 0 & 1/\sqrt{2} & 0 \\ 0 & 0 & 1/\sqrt{2} \end{bmatrix} \quad \text{and} \quad fl\left(R_{CGS}\right) = \begin{bmatrix} 1 & 1 & 1 \\ 0 & \sqrt{2}\,\delta & 0 \\ 0 & 0 & \sqrt{2}\,\delta \end{bmatrix} \tag{8.4-108}$$

The first column of $fl\left(Q_{CGS}\right)$ is nearly orthogonal to the second and third columns. However, the second and third columns are not orthogonal. This clearly shows how the CGS method can lead to nonorthogonal Q-factors. Floating-point computation of the MGS QR factorization produces the factors

$$fl\left(Q_{MGS}\right) = \begin{bmatrix} 1 & 0 & 0 \\ \delta & -1/\sqrt{2} & -1/\sqrt{6} \\ 0 & 1/\sqrt{2} & -1/\sqrt{6} \\ 0 & 0 & \sqrt{2/3} \end{bmatrix} \quad \text{and} \quad fl\left(R_{MGS}\right) = \begin{bmatrix} 1 & 1 & 1 \\ 0 & \sqrt{2}\,\delta & \delta/\sqrt{2} \\ 0 & 0 & \sqrt{3/2}\,\delta \end{bmatrix} \tag{8.4-109}$$

Similar to the CGS method, the first column of $fl\left(Q_{MGS}\right)$ is nearly orthogonal to the second and third columns. However, unlike the CGS method, the


MGS method results in columns two and three being orthogonal. Previously, we mentioned that the MGS and the Householder QR factorization methods are numerically similar. It can be shown that Householder transformations lead to the following factors:

(8.4-110)

Observe that the upper $3 \times 3$ submatrix of $fl\left(R_{House}\right)$ is identical to $fl\left(R_{MGS}\right)$, except for the sign difference in the first row. As discussed earlier, the rows of $R$ and the corresponding columns of $Q$ are unique up to a sign. Also, note that the first three columns of $fl\left(Q_{House}\right)$ are similar to $fl\left(Q_{MGS}\right)$, except for the sign difference in the first column and the "small" $\delta/\sqrt{2}$ entries in the first row. This illustrates the numerical similarity between the MGS and Householder factorizations.

Next, we will examine the error bounds that characterize the accuracy of the normal equation and QR factorization methods. Derivations of these bounds can be found in Stewart (1998, 2001a,b), Björck (1996), Demmel (1997), and Higham (2002). It can be shown that the computed solution, $\hat{x}_{NE}$, of the normal Eq. (8.4-5) is the exact solution of the perturbed normal equation,

$$\left(A^T A + G\right) \hat{x}_{NE} = A^T b, \qquad \|G\|_2 \leq \gamma_{NE}\left(1 + \frac{\|b\|_2}{\|A\|_2 \|x\|_2}\right) \varepsilon_{mach} \left\|A^T A\right\|_2 \tag{8.4-111}$$

where $\gamma_{NE}$ is a constant that depends on the dimensions, $M$ and $N$. The matrix $G$ accounts for the round-off errors from computing $A^T A$ and solving


the resulting normal equation. Substituting Eq. (8.4-111) into the perturbation result, Eq. (8.3-79), leads to

$$\frac{\left\|\hat{x}_{NE} - x\right\|_2}{\|x\|_2} \leq \kappa_2\left(A^T A\right) \cdot \gamma_{NE}\left(1 + \frac{\|b\|_2}{\|A\|_2 \|x\|_2}\right) \varepsilon_{mach} \tag{8.4-112}$$

Decomposing $b$ into its projection onto $\mathcal{R}_A$ and its orthogonal complement gives

$$b = P_A b + P_A^{\perp} b = b_A + b^{\perp} \tag{8.4-113}$$

Since $x$ solves the system of equations $A x = b_A$, we have the inequality

$$\|b_A\|_2 \leq \|A\|_2 \|x\|_2 \tag{8.4-114}$$

Note that by definition of the residual, $r = A x - b = -b^{\perp}$. Substituting Eq. (8.4-114) and $\kappa_2\left(A^T A\right) = \kappa_2^2(A)$ into Eq. (8.4-112) yields the inequality,

$$\frac{\left\|\hat{x}_{NE} - x\right\|_2}{\|x\|_2} \leq \kappa_2^2(A) \cdot \gamma_{NE}\left(2 \sec\theta\right) \varepsilon_{mach}, \qquad \sec\theta = \|b\|_2 \big/ \|b_A\|_2 \tag{8.4-115}$$

Observe that if $A$ is ill conditioned, then the above bound indicates that the errors will be drastically amplified by the square of $\kappa_2(A)$. This condition is aggravated if $b$ is effectively orthogonal to $\mathcal{R}_A$, which implies that $\sec\theta \gg 1$, and could, therefore, result in even greater amplification of round-off errors.

We now turn our attention to the error bounds for the least-square solution by the QR methods. We will omit discussion of the CGS method because of the significant errors it tends to introduce. Let $\hat{x}_{QR}$ represent the computed solution by the MGS, Householder, or Givens QR factorization methods. The effect of round-off errors is such that $\hat{x}_{QR}$ solves the following perturbed least-square problem,

$$(A + E)\,\hat{x}_{QR} \cong b + f, \qquad \|f\|_2 \leq \gamma_{QR}\, \varepsilon_{mach} \|b\|_2 \quad \text{and} \quad \|E\|_2 \leq \gamma_{QR}\, \varepsilon_{mach} \|A\|_2 \tag{8.4-116}$$

where $\gamma_{QR}$ depends on $M$ and $N$. The perturbations, $E$ and $f$, account for the round-off errors introduced during the QR factorization and solution of the


resulting upper-triangular system. A first-order perturbation analysis of the above least-square problem leads to the relative error bound for $\hat{x}_{QR}$, i.e.,

$$\frac{\left\|\hat{x}_{QR} - x\right\|_2}{\|x\|_2} \leq \kappa_2(A)\left(\frac{\|E\|_2}{\|A\|_2} + \frac{\|f\|_2}{\|A\|_2 \|x\|_2}\right) + \kappa_2^2(A) \cdot \frac{\left\|b^{\perp}\right\|_2}{\|A\|_2 \|x\|_2} \cdot \frac{\|E\|_2}{\|A\|_2} \tag{8.4-117}$$

Substituting Eqs. (8.4-114) and (8.4-116) into the above inequality and simplifying leads to

$$\frac{\left\|\hat{x}_{QR} - x\right\|_2}{\|x\|_2} \leq \left(2 \sec\theta \cdot \kappa_2(A) + \tan\theta \cdot \kappa_2^2(A)\right) \gamma_{QR}\, \varepsilon_{mach}, \qquad \tan\theta = \left\|b^{\perp}\right\|_2 \big/ \|b_A\|_2 \tag{8.4-118}$$

The relative error bound indicates that the conditioning of the QR factorization method depends on whether or not $b$ can be approximated by the columns of $A$. If $b_A \approx b$, then $\tan\theta \approx 0$ and the condition number is proportional to $\kappa_2(A)$. Compared to the normal equation, which has a condition number proportional to the square of $\kappa_2(A)$, the QR-based methods for $b_A \approx b$ are less sensitive to round-off errors. On the other hand, if $b$ is essentially orthogonal to the column range of $A$, then $\tan\theta \gg 1$ and the condition number is on the order of $\tan\theta \cdot \kappa_2^2(A)$. For this case, the sensitivity to round-off errors is comparable to the normal equation method. Moreover, if $A$ is ill conditioned, then we can expect very large relative errors.

We summarize the comparison between the normal equation and QR methods. In terms of speed, Stewart (1998, 2001a,b) notes that the normal equation approach is about twice as fast, with the added advantage of being able to exploit sparseness for efficient computation of $A^T A$. Both approaches produce solutions that are exact solutions to a perturbed problem. The use of orthonormal transformations in the QR methods tends to introduce small perturbations and, therefore, is more stable. This is generally not the case for the normal equation method. The $4 \times 3$ example showed that computation of $A^T A$ in the normal equation can produce round-off errors that, although bounded, can lead to a perturbed problem that is significantly different. Regarding accuracy, Stewart claims that, "... the QR approach has the edge, but not a large one." As we discussed previously, the conditioning for the QR method depends on how well $b$ can be approximated by the columns of $A$. If the residual is small, then the relative error is


bounded by $\kappa_2(A)\, \varepsilon_{mach}$. However, as the residual error increases, the conditioning approaches that of the normal equation, which is proportional to $\kappa_2^2(A)$.

8.5 Matrix eigenvalue problem

Many structural dynamics problems require the solution to the linearized equations of motion,

$$M \ddot{x}(t) + C \dot{x}(t) + K x(t) = f(t)$$

(8.5-1)

where $M$, $C$, and $K$ are the $N \times N$ real-valued mass, damping, and stiffness matrices, respectively. The time-dependent displacement responses and forces are defined by the $N \times 1$ vectors, $x(t)$ and $f(t)$, respectively. For problems without feedback, gyroscopic moments, or aerodynamic stiffness and damping, for example, the matrices in Eq. (8.5-1) are symmetric. Generally, $M$ and $K$ are positive-definite; however, if the system possesses rigid-body modes, then $K$ and $C$ will be positive-semidefinite. The computation, analysis, and interpretation of the responses are greatly facilitated if they are represented in terms of the natural modes of vibration of the undamped system. Application of the Laplace transform to Eq. (8.5-1), with $C = 0$ and $f(t) = 0$, leads to the generalized eigenvalue problem (GEVP) for the undamped system,

$$K \varphi_m = \lambda_m M \varphi_m, \qquad m = 1, \ldots, M$$

(8.5-2)

where $\varphi_m$ represents the $m$th modal vector and $\lambda_m$ is the corresponding eigenvalue, which equals the square of the $m$th natural circular frequency, $\omega_m$. The first step toward computing the solution of Eq. (8.5-2) is to perform a change of variables that eliminates the mass matrix. Since $M$ is symmetric and positive definite, it can be expressed as a product of its Cholesky factors, $M = L L^T$. Let $v_m = L^T \varphi_m$; then Eq. (8.5-2) reduces to the "standard" symmetric eigenvalue problem,

$$A v_m = \lambda_m v_m$$

(8.5-3)

where $A = L^{-1} K L^{-T}$. Herein, we will refer to the pair $(\lambda_m, v_m)$ as an eigenpair. In this section, we will review some of the methods that are used to compute the eigenpairs of a matrix. Let us assume that $A$ has a complete


set of eigenvectors, $\{v_1, v_2, \ldots, v_N\}$, and corresponding eigenvalues, $\{\lambda_1, \lambda_2, \ldots, \lambda_N\}$; then $A$ has the decomposition,

$$A = V \Lambda V^{-1} \tag{8.5-4}$$

where $V = [\,v_1\,|\,v_2\,|\,\cdots\,|\,v_N\,]$ is a nonsingular matrix of eigenvectors and $\Lambda = \mathrm{diag}(\lambda_1, \lambda_2, \ldots, \lambda_N)$ is the diagonal matrix of eigenvalues arranged in descending order, i.e., $|\lambda_1| \geq |\lambda_2| \geq \cdots \geq |\lambda_N|$. From the Spectral Theorem (Theorem 8.4-4), if $A$ is symmetric, then its eigenvalues are real-valued and the matrix of eigenvectors can be chosen to be orthonormal. For nonsymmetric real-valued matrices, the eigenpairs can be complex-valued and they will occur in conjugate pairs. By Eq. (8.5-4), we note that the roots of the characteristic polynomial of $A$ are equal to its eigenvalues, since

$$p_A(\lambda) = \det(\lambda I - A) = \det\left(V\,[\lambda I - \Lambda]\,V^{-1}\right) = \det(\lambda I - \Lambda) = \prod_{n=1}^{N} (\lambda - \lambda_n) \tag{8.5-5}$$

Therefore, calculating the eigenvalues of a matrix is equivalent to finding the roots of its characteristic polynomial. If $p_A(\lambda)$ is a quadratic, cubic, or quartic polynomial, then there are algebraic expressions for its roots. However, in 1824, Niels Henrik Abel proved that there is no general formula for the roots of polynomials of degrees exceeding four. In other words, there is no algorithm that can provide the roots of a polynomial of degree greater than four in a finite number of algebraic steps. The implication, therefore, is that for matrices with dimensions exceeding four, all algorithms that compute the eigenvalues of a general matrix must be iterative. This is a departure from our previous experience with matrix decompositions such as the LU, Cholesky, and QR factorizations, which could be accomplished in a finite number of steps. It is easily shown by example that the roots of a polynomial can be extremely sensitive to slight changes in its coefficients. Therefore, the computed eigenvalues and eigenvectors can be sensitive to small perturbations in $A$ that arise from round-off errors. An additional complexity in solving the eigenvalue problem is related to defective matrices. These are matrices that do not possess a complete set of eigenvectors and, hence, cannot be factorized as in Eq. (8.5-4). In the context of the eigenvalue


computations, these represent the ill-conditioned matrices, and we will address some of the computational issues they pose.

The matrix eigenvalue problem has a rich history and continues to be an active area of research in computational linear algebra. Its importance and computational challenges have led to numerous investigations and extensive literature. An exhaustive and rigorous treatment of this topic is beyond the scope of this section and is not necessary given the excellent books, papers, and software that are available. Therefore, we will mainly focus our discussion on the symmetric QR algorithm. This will cover the main eigensolution method that is used to calculate the modes of a reduced order structural dynamic model. Our goal will be to provide the analysts with a semirigorous understanding of the QR algorithm and its convergence properties. We also include in this section brief synopses of the Divide and Conquer (DC) algorithm and the "iterative" approach known as the Lanczos method. The DC algorithm is currently the fastest method for computing all the eigenpairs of a symmetric matrix. The Lanczos algorithm is the method of choice in current finite element applications for calculating eigenpairs of large symmetric matrices at the extreme ends of the spectrum. The main concepts of the symmetric QR algorithm have analogous counterparts for solving the nonsymmetric eigenvalue problem. We will also briefly examine the modifications needed to address the numerical issues arising from the nonsymmetry. Following this discussion, a brief analysis of the stability and errors of the computed eigensolutions will be presented.

8.5.1 Symmetric eigenvalue problem

The QR algorithm was developed by John Francis almost six decades ago. At that time, computers were slow, their memory capacity was limited, and they were cumbersome to use. The existing methods for calculating the eigenpairs of matrices with dimensions greater than ten were problematic. Francis's QR algorithm was based on H. Rutishauser's LR algorithm, which iteratively computes the LU factorization of a matrix and then calculates the product of the factors in reverse order. As we discussed earlier, the LU factorization employs Gauss transformations, which can compromise numerical stability. Francis realized this and instead decided to use orthonormal transformations to minimize the effect of numerical round-off errors. The QR algorithm transformed eigenvalue computation from a nearly impossible task into a routine one. For a fascinating historical account, we refer the reader to works by Golub and Uhlig (2009) and Watkins (2011).

8.5 Matrix eigenvalue problem

Francis's algorithm addressed the computation of the eigensolutions of general matrices, which are usually complex-valued and can be less stable numerically. For symmetric matrices, these issues are absent, which greatly simplifies the algorithm's theory and implementation. Therefore, symmetric matrices provide an ideal starting point for introducing the QR algorithm. Additionally, the use of orthonormal transformations to diagonalize a symmetric matrix is "natural" given that its eigenvector matrix is orthonormal. Hence, if we are able to generate a sequence of orthonormal transformations that diagonalize a matrix, their product should yield the eigenvector matrix. We begin by using the algorithm to compute the modes of vibration of the five-degree-of-freedom spring-mass system shown in Fig. 8.5-1. The mass and stiffness matrices are

M = diag(1, 2, 10, 10, 2)

K = [   40     -20     -20       0       0
       -20    4220   -2000       0   -2000
       -20   -2000    8020   -2000       0
         0       0   -2000    8000   -2000
         0   -2000       0   -2000    4200 ]      (8.5-6)

FIGURE 8.5-1 Five-degree-of-freedom spring-mass system.


The natural frequencies and mode shapes were computed using the QR algorithm in LAPACK (Anderson, 1999) and are listed below:

ω_m² = 3196.149, 1366.347, 909.938, 339.932, 39.634,   m = 1, …, 5

Φ = [  0.00283    0.00480    0.00735    0.02959   0.99952
      -0.49116   -0.43062   -0.10003   -0.25136   0.01163
       0.04475    0.11251   -0.21968   -0.19245   0.00665
      -0.04453    0.11310    0.21852   -0.19355   0.00371
       0.48870   -0.43279    0.09956   -0.25278   0.00745 ]      (8.5-7)

where we adopted the convention of normalizing the mode shapes with respect to the mass matrix so that Φ^T M Φ = I. Since M is a diagonal matrix, its Cholesky factor is diagonal and is given by the square roots of its entries, i.e., L = M^(1/2). Use of L reduces the general eigenvalue problem associated with Eq. (8.5-6) to the standard eigenvalue problem with the symmetric matrix A = L^(-1) K L^(-T), i.e.,

A = [   40       -14.142     -6.325       0          0
       -14.142  2110       -447.214       0      -1000
        -6.325  -447.214    802        -200          0
         0         0       -200        800       -447.214
         0     -1000          0       -447.214   2100     ]      (8.5-8)

8.5.1.1 QR iteration

The QR algorithm is based on the QR factorization that we examined in Section 8.4.2. The algorithm is iterative and generates a sequence of matrices, A^(k), that are similar to A and that converge to a diagonal matrix of its eigenvalues.


QR iteration

A^(1) = A
for k = 1, 2, … until done
    QR factor: A^(k) = Q^(k) R^(k)
    A^(k+1) = R^(k) Q^(k)      (8.5-9)
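The reduction of Eqs. (8.5-6)–(8.5-8) and the loop above can be sketched together in NumPy (an illustrative sketch, not the text's implementation); the iterates settle into a diagonal matrix holding the squared natural frequencies:

```python
import numpy as np

# Build the five-DOF matrices of Eq. (8.5-6), reduce to the standard symmetric
# problem A = L^-1 K L^-T of Eq. (8.5-8), and run the QR iteration (8.5-9).
M = np.diag([1.0, 2.0, 10.0, 10.0, 2.0])
K = np.array([[  40.,   -20.,   -20.,     0.,     0.],
              [ -20.,  4220., -2000.,     0., -2000.],
              [ -20., -2000.,  8020., -2000.,     0.],
              [   0.,     0., -2000.,  8000., -2000.],
              [   0., -2000.,     0., -2000.,  4200.]])
Linv = np.diag(1.0 / np.sqrt(np.diag(M)))   # inverse of the diagonal Cholesky factor
A = Linv @ K @ Linv.T
Ak = A.copy()
for _ in range(200):
    Q, R = np.linalg.qr(Ak)                 # A(k) = Q R
    Ak = R @ Q                              # A(k+1) = R Q, similar to A(k)
off_diag = np.max(np.abs(Ak - np.diag(np.diag(Ak))))
w2 = np.sort(np.diag(Ak))[::-1]             # squared natural frequencies
```

The iteration count of 200 is generous; as discussed below, the unshifted iteration converges only linearly, reaching machine precision after roughly 85 sweeps for this matrix.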

Applying the above algorithm, using the Householder transformation to calculate the QR factorization, produces after the first iteration

A^(2) = [  248.522   671.704   110.967     0         0
           671.704  2794.347    84.949   516.344   363.084
           110.967    84.949   934.081   154.153   271.308
             0       516.344   154.153  1305.847   401.529
             0       363.084   271.308   401.529   569.203 ]      (8.5-10)

Observe that A^(2) is still symmetric. The matrices, A^(k), for subsequent iterations, k = 3, 5, 9, 17, 20, and 38, are listed below:

A^(3) = [ 2914.094   708.590   143.499     0        0
           708.590  1166.203   241.196   351.482   73.966
           143.499   241.196  1206.581   152.338   63.584
             0       351.482   152.338   469.561   88.594
             0        73.966    63.584    88.594   95.561 ]      (8.5-11)

A^(5) = [ 3189.632   108.967    17.816     0        0
           108.967  1257.528   199.534    30.340    0.103
            17.816   199.534  1023.577    14.411    0.002
             0        30.340    14.411   341.617    1.882
             0         0.103     0.002     1.882   39.646 ]      (8.5-12)


A^(9) =  [ 3196.142     3.468     0.134    0       0
              3.468  1360.585    50.984    0.133   0
              0.134    50.984   915.706    0.528   0
              0         0.133     0.528  339.932   0
              0         0         0        0      39.634 ]      (8.5-13)

A^(17) = [ 3196.149     0.004     0        0       0
              0.004  1366.338     1.998    0       0
              0         1.998   909.947    0       0
              0         0         0      339.932   0
              0         0         0        0      39.634 ]      (8.5-14)

A^(20) = [ 3196.149     0         0        0       0
              0      1366.346     0.590    0       0
              0         0.590   909.939    0       0
              0         0         0      339.932   0
              0         0         0        0      39.634 ]      (8.5-15)

A^(38) = [ 3196.149     0         0        0       0
              0      1366.347     0        0       0
              0         0       909.938    0       0
              0         0         0      339.932   0
              0         0         0        0      39.634 ]      (8.5-16)

The matrices in Eqs. (8.5-10) through (8.5-16) illustrate how the QR algorithm decreases the off-diagonal terms of the iterates, A^(k), and eventually converges to a diagonal matrix of eigenvalues. The iterations in Eqs. (8.5-10) through (8.5-16) were selected to show the progressive decoupling (to three decimal places) that occurs at the elements indicated in bold font. For example, the ninth iteration decoupled A^(9) into 4 × 4 and 1 × 1 matrices. By the 17th iteration, we have a decoupling into a 3 × 3 and two 1 × 1 matrices. Three iterations later, A^(20), we obtain a 2 × 2 matrix and three 1 × 1 matrices. Finally, after 38 iterations, we obtain a diagonal matrix that contains the eigenvalues, ω_m², listed in (8.5-7). The calculated 2-norm error relative to Λ = diag(ω_1², …, ω_5²) is


ε^(k) = ‖A^(k) − Λ‖₂ / ‖Λ‖₂ = ‖A^(k) − Λ‖₂ / 3196.149 ≈ 58.77 · r^k,   r = 0.6670      (8.5-17)

The relative errors are plotted in Fig. 8.5-2. A fit to the error curve shows that convergence is linear and that, relative to machine precision limits, convergence is reached after 85 iterations. As we noted earlier with A^(2), the iterates, A^(k), are also symmetric. Although this is not obvious from the algorithm, it can be shown by simple algebraic manipulation. From the QR factorization of A^(k), we can solve for the upper-triangular matrix, R^(k) = [Q^(k)]^T A^(k). Substitution into the second step of the algorithm leads to the matrix triple product

A^(k+1) = [Q^(k)]^T A^(k) Q^(k)      (8.5-18)

which shows that A^(k+1) is symmetric if A^(k) is symmetric. That this holds for all k follows from induction. Additionally, Eq. (8.5-18) implies that A^(k+1) is similar to A^(k) since [Q^(k)]^(-1) = [Q^(k)]^T, which follows from the orthonormality of Q^(k). Therefore, the eigenvalues will remain invariant under each QR iteration. Applying Eq. (8.5-18) recursively leads to

FIGURE 8.5-2 Relative 2-norm errors of A^(k) to Λ.


A^(k+1) = [Q̂^(k)]^T A Q̂^(k),   Q̂^(k) = Q^(1) Q^(2) ⋯ Q^(k)      (8.5-19)

Observe that Q̂^(k) is orthonormal since it is a product of orthonormal matrices. The example shows that

A^(38) = [Q̂^(37)]^T A Q̂^(37) ≈ Λ      (8.5-20)

Hence, Q̂^(37) is an approximation to the matrix of eigenvectors of A, i.e., V = [v₁ | v₂ | ⋯ | v₅]. The corresponding modal matrix is, therefore, approximated by Φ^(37) = L^(-1) Q̂^(37), which is listed below:

Φ^(37) = [  0.00283    0.00480   -0.00735   -0.02959   0.99952
           -0.49116   -0.43062    0.10003    0.25136   0.01163
            0.04475    0.11251    0.21968    0.19245   0.00665
           -0.04453    0.11310   -0.21852    0.19355   0.00371
            0.48870   -0.43279   -0.09956    0.25278   0.00745 ]      (8.5-21)

Note that Φ^(37) is equal to Φ to five decimal places, modulo sign differences for the second and third modes in columns four and three, respectively. Recall that the mass-normalized modes are unique up to sign differences. Our example illustrates the simplicity of the QR algorithm, which leads us to the question: why does the QR algorithm converge? At first glance, it is not apparent why the algorithm should generate a sequence of similar matrices that converge to a diagonal matrix. We will answer this question after we have introduced and examined vector and subspace iteration methods. Recall that the example indicated that convergence is linear and that 18 additional iterations were needed just to delete the off-diagonal terms in the (2, 3) and (3, 2) positions. Therefore, convergence can be slow and require numerous iterations. For large dense matrices, the QR factorizations and matrix products can be computationally expensive and time-consuming, which brings us to the second question: how do we modify Eq. (8.5-9) to reduce the number of iterations and avoid QR factorizations and multiplications of full matrices?


Addressing these issues will lead us to an efficient implementation of the QR algorithm that converges so rapidly that it is often referred to as a "direct" method. Our discussion follows Lecture 27 of Trefethen and Bau (1997). For more comprehensive and detailed discussions, we refer the reader to Golub and Van Loan (2013), Demmel (1997), Watkins (2007, 2010), and Stewart (1998, 2001a,b). The classic references are Wilkinson's The Algebraic Eigenvalue Problem (Wilkinson, 1965) and Parlett's The Symmetric Eigenvalue Problem (Parlett, 1998).

8.5.1.1.1 Vector iteration methods

Suppose that the dominant eigenvalue is strictly greater in magnitude than the rest of the eigenvalues, i.e., |λ₁| > |λ₂| ≥ ⋯ ≥ |λ_N|. Since A is symmetric, the set of its eigenvectors, {v₁, v₂, …, v_N}, defines an orthonormal basis. Hence, we can represent any vector, x ∈ ℝ^N, as a unique linear combination of the eigenvectors, i.e.,

x = a₁v₁ + a₂v₂ + a₃v₃ + ⋯ + a_N v_N,   a_n = v_n^T x   for n = 1, 2, …, N      (8.5-22)

Let us assume that x is not deficient in v₁, i.e., a₁ ≠ 0. Applying A^k to x, we obtain

A^k x = λ₁^k a₁v₁ + λ₂^k a₂v₂ + λ₃^k a₃v₃ + ⋯ + λ_N^k a_N v_N
      = λ₁^k (a₁v₁ + r₂^k a₂v₂ + r₃^k a₃v₃ + ⋯ + r_N^k a_N v_N),   r_n = λ_n/λ₁
      ≈ λ₁^k a₁v₁   as k → ∞      (8.5-23)

where the last approximation occurs because 1 > |r₂| ≥ |r₃| ≥ ⋯ ≥ |r_N| and implies that

‖A^k x‖₂ = |λ₁|^k (|a₁| + O(|r₂|^k)) ≈ |λ₁|^k |a₁|      (8.5-24)

Therefore, the normalized iterates converge to the first eigenvector, that is,


A^k x / ‖A^k x‖₂ → sgn(λ₁)^k v₁   as k → ∞      (8.5-25)

Eq. (8.5-24) implies that the convergence rate is dictated by |r₂| = |λ₂|/|λ₁|. Therefore, the closer |λ₂| is to |λ₁|, the slower the iterates will converge to the first eigenvector. Rather than applying A^k to x and normalizing, it is more efficient to recursively apply A and normalize. This leads us to the following iteration:

x^(0) = x / ‖x‖₂
for k = 1, … until done
    x̂^(k) = A x^(k−1)
    x^(k) = x̂^(k) / ‖x̂^(k)‖₂      (8.5-26)

To complete the above iteration, we need estimates of λ₁. First, observe that for any eigenpair, (λ_n, v_n), the quadratic polynomial p_{A,v_n}(a) = ‖A v_n − a v_n‖₂² will have a minimum of zero at a = λ_n. This suggests that, for an approximate eigenvector, x, the corresponding eigenvalue estimate will be the one that minimizes p_{A,x}(a) = ‖A x − a x‖₂². Differentiating the quadratic and setting it to zero produces

0 = d/da p_{A,x}(a) = d/da [(Ax − ax)^T (Ax − ax)]
  = d/da [a²(x^T x) − 2a(x^T A x) + x^T A² x] = 2a(x^T x) − 2(x^T A x)      (8.5-27)

Solving for a leads to the Rayleigh quotient of x with respect to A,

ρ_A(x) = (x^T A x) / (x^T x)      (8.5-28)

which provides the optimal eigenvalue estimate. Note that for an eigenpair (λ_n, v_n), ρ_A(v_n) = λ_n. Including the Rayleigh quotient in (8.5-26) leads to the power iteration method.


Power iteration algorithm

x^(0) = x / ‖x‖₂
for k = 1, … until done
    x̂^(k) = A x^(k−1)
    x^(k) = x̂^(k) / ‖x̂^(k)‖₂
    λ^(k) = [x^(k)]^T A x^(k)      (8.5-29)
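As a sketch (not the text's code), the power iteration can be written in a few lines of NumPy; run on the matrix A of Eq. (8.5-8), the Rayleigh estimate converges to the dominant eigenvalue 3196.149:

```python
import numpy as np

# Power iteration (8.5-29): repeatedly apply A and normalize; the Rayleigh
# quotient of the iterate estimates the dominant eigenvalue.
A = np.array([[   40.   ,  -14.142,   -6.325,    0.   ,     0.   ],
              [  -14.142, 2110.   , -447.214,    0.   , -1000.   ],
              [   -6.325, -447.214,  802.   , -200.   ,     0.   ],
              [    0.   ,    0.   , -200.   ,  800.   ,  -447.214],
              [    0.   ,-1000.   ,    0.   , -447.214,  2100.   ]])
rng = np.random.default_rng(0)
x = rng.standard_normal(5)          # generic start, not deficient in v_1
x /= np.linalg.norm(x)
for _ in range(100):
    x = A @ x
    x /= np.linalg.norm(x)
lam = x @ A @ x                     # Rayleigh quotient estimate of lambda_1
```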

From (8.5-24) and (8.5-25), we see that if the initial guess, x, is not deficient in v₁, then x^(k) → sgn(λ₁)^k v₁ at the linear rate

‖x^(k) − sgn(λ₁)^k v₁‖₂ = O(|r₂|^k)      (8.5-30)

Let us examine how fast λ^(k) converges to λ₁. In view of (8.5-30), consider an approximation to the eigenvector v_m, x = v_m + δu, where ‖u‖₂ = 1. Without loss of generality, we can assume that u is perpendicular to v_m. Calculation of ρ_A(x) leads to

ρ_A(x) = [(v_m + δu)^T A (v_m + δu)] / [(v_m + δu)^T (v_m + δu)]
       = [v_m^T A v_m + 2δ u^T A v_m + δ² u^T A u] / [v_m^T v_m + 2δ u^T v_m + δ² u^T u]
       = [λ_m v_m^T v_m + 2δ λ_m u^T v_m + δ² u^T A u] / [v_m^T v_m + 2δ u^T v_m + δ² u^T u]
       = [λ_m + δ² u^T A u] / [1 + δ² u^T u]
       = λ_m + O(δ²)      (8.5-31)


Therefore, if the approximate eigenvector is O(δ) close to an eigenvector, its Rayleigh quotient will be O(δ²) close to its eigenvalue. This and Eq. (8.5-30) imply the error bound

|λ^(k) − λ₁| = O(r₂^2k)      (8.5-32)

which shows that the eigenvalue estimates converge quadratically to λ₁, i.e., |λ^(k+1) − λ₁| = O(|λ^(k) − λ₁|²). We applied the power iteration to A, defined in (8.5-8), to obtain estimates (λ^(k), x^(k)) that converged to the first eigenpair, (λ₁, v₁). The errors versus the iteration number are shown in Fig. 8.5-3. Calculating r₂ from the eigenvalues listed in (8.5-7) produces r₂ = λ₂/λ₁ = 1366.347/3196.149 ≈ 0.4275. Therefore, by Eqs. (8.5-30) and (8.5-32), x^(k) and λ^(k) should converge at the rates O(0.4275^k) and O(0.4275^2k), respectively. This is verified by the fits to the error curves. The power iteration as stated above is almost never used in practice since it only converges to the "dominant" eigenpair and convergence can be slow if |λ₂| ≈ |λ₁|. To obtain the other eigenpairs, earlier methods extended the power iteration by employing deflation techniques that "swept" out the previously computed eigenvectors. For example, after calculating the first eigenpair, (λ̂₁, v̂₁), define the deflated matrix,

FIGURE 8.5-3 Errors of x^(k) and λ^(k) from the power iteration.


A₁ = A − λ̂₁ v̂₁ v̂₁^T. The dominant eigenpair of A₁ will now be equal to (λ₂, v₂). Once the second eigenpair is computed, the power iteration would be applied to A₂ = A₁ − λ̂₂ v̂₂ v̂₂^T, and so on. Another variation of this would be to remove from x̂^(k) the components in the direction of the computed eigenvectors via orthogonal projections. Like the classical Gram-Schmidt method, deflation techniques can lead to inaccuracies due to nonorthogonality resulting from errors in the computed eigenvectors and numerical round-off. A variation of the power iteration that allows one to "tune in" to a particular eigenpair uses the inverse of a shifted matrix. For simplicity, let us assume that the eigenvalues are simple (i.e., no multiplicities). For any scalar, μ, the eigenpairs of A − μI will be (λ_n − μ, v_n). Therefore, if μ ≠ λ_n for all n, ((λ_n − μ)^(-1), v_n) will be the eigenpairs of (A − μI)^(-1). Suppose we want to calculate the eigenpair (λ_m, v_m), and μ ≈ λ_m; then (λ_m − μ)^(-1) will be the dominant eigenvalue, i.e.,

|λ_m − μ|^(-1) > max_{n≠m} |λ_n − μ|^(-1)      (8.5-33)

Therefore, applying the power iteration to (A − μI)^(-1), instead of A, will yield eigenpair estimates, (η^(k), x^(k)), that converge to ((λ_m − μ)^(-1), v_m). The approximation to the eigenvalue, λ_m, can then be recovered from [η^(k)]^(-1) + μ. However, since the x^(k) are estimates of v_m, we can approximate λ_m directly from the Rayleigh quotient, ρ_A(x^(k)). This leads to the inverse iteration method:

Inverse iteration algorithm

x^(0) = x / ‖x‖₂
for k = 1, … until done
    Solve (A − μI) x̂^(k) = x^(k−1)
    x^(k) = x̂^(k) / ‖x̂^(k)‖₂
    λ^(k) = [x^(k)]^T A x^(k)      (8.5-34)
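A minimal NumPy sketch of the iteration (not the text's code): with shift μ = 1000 it tunes in to the eigenpair nearest the shift, λ₃ = 909.938. For brevity the linear system is re-solved on each pass; a practical implementation would factor A − μI once and reuse the LU factors.

```python
import numpy as np

# Inverse iteration (8.5-34) with shift mu = 1000 on the matrix A of Eq. (8.5-8).
A = np.array([[   40.   ,  -14.142,   -6.325,    0.   ,     0.   ],
              [  -14.142, 2110.   , -447.214,    0.   , -1000.   ],
              [   -6.325, -447.214,  802.   , -200.   ,     0.   ],
              [    0.   ,    0.   , -200.   ,  800.   ,  -447.214],
              [    0.   ,-1000.   ,    0.   , -447.214,  2100.   ]])
mu = 1000.0
B = A - mu * np.eye(5)              # would be LU-factored once in practice
rng = np.random.default_rng(1)
x = rng.standard_normal(5)
x /= np.linalg.norm(x)
for _ in range(60):
    x = np.linalg.solve(B, x)
    x /= np.linalg.norm(x)
lam = x @ A @ x                     # converges to the eigenvalue nearest mu
```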


For efficiency, the LU factorization of A − μI is computed beforehand, and x̂^(k) is computed by the standard forward and backward substitutions. The inverse iteration method was applied to A in (8.5-8) with μ = 1000 to target the third eigenpair, with λ₃ = 909.938. The eigenvector estimates, x^(k), and v₃ were scaled by ±1 so that their maximum elements were positive. This was done so that the vectors had consistent orientation when calculating the errors, ‖sgn(x_max^(k)) x^(k) − v₃‖₂. These errors and the relative errors of the eigenvalue estimates are plotted in Fig. 8.5-4. The eigenvalues of A and (A − μI)^(-1) are listed below:

λ_n:              3196.149    1366.347     909.938     339.932      39.634
(λ_n − μ)^(-1):   4.553e-4    2.730e-3   -1.110e-2   -1.515e-3   -1.041e-3      (8.5-35)

From the above table, we note that (λ₂ − μ)^(-1) is the nearest eigenvalue to (λ₃ − μ)^(-1). Therefore, the eigenvector and eigenvalue estimates should converge at the following rates:

FIGURE 8.5-4 Errors of x^(k) and λ^(k) from the inverse power iteration.


‖sgn(x_max^(k)) x^(k) − v₃‖₂ = O(r^k),   |λ^(k) − λ₃| / |λ₃| = O(r^2k),   r = |(λ₃ − μ)/(λ₂ − μ)| ≈ 0.2458      (8.5-36)

Fig. 8.5-4 also shows the least-squares fits to the errors, which verify the convergence rates in (8.5-36). A concern that frequently arises when using the inverse iteration is the potential for ill-conditioning when A − μI is nearly singular. In practice, this is not an issue since very rarely (almost never) would μ equal λ_m exactly. When μ ≈ λ_m, the spectral decomposition of (A − μI)^(-1) implies

x̂^(k) = Σ_{n=1}^{N} [v_n^T x^(k−1) / (λ_n − μ)] v_n ≈ [v_m^T x^(k−1) / (λ_m − μ)] v_m      (8.5-37)

Therefore, although the magnitude of the computed x̂^(k) will be subject to round-off errors, it will essentially have the same direction as the eigenvector, v_m. This is all that is required for the inverse iteration to converge. This observation suggests that faster convergence can be achieved if we modify the shifts to approach λ_m by using the most recent Rayleigh estimates. This modification is the basis of the Rayleigh quotient iteration.

Rayleigh quotient iteration

x^(0) = x / ‖x‖₂
λ^(0) = μ
for k = 1, … until done
    Solve (A − λ^(k−1) I) x̂^(k) = x^(k−1)
    x^(k) = x̂^(k) / ‖x̂^(k)‖₂
    λ^(k) = [x^(k)]^T A x^(k)      (8.5-38)
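The following NumPy sketch (an illustration, not the text's code) adds one practical safeguard: a few fixed-shift warm-up passes are taken first so that the iterate locks onto the eigenvector nearest μ = 1000 before the shift is allowed to move; afterward the Rayleigh shifts drive the estimate to λ₃ = 909.938 in a handful of steps.

```python
import numpy as np

# Rayleigh quotient iteration (8.5-38) on the matrix A of Eq. (8.5-8),
# preceded by a few fixed-shift inverse-iteration steps (an added safeguard,
# since a pure Rayleigh shift from a generic start can lock onto a
# neighboring eigenpair instead of the one nearest mu).
A = np.array([[   40.   ,  -14.142,   -6.325,    0.   ,     0.   ],
              [  -14.142, 2110.   , -447.214,    0.   , -1000.   ],
              [   -6.325, -447.214,  802.   , -200.   ,     0.   ],
              [    0.   ,    0.   , -200.   ,  800.   ,  -447.214],
              [    0.   ,-1000.   ,    0.   , -447.214,  2100.   ]])
mu = 1000.0
lam = mu
x = np.ones(5) / np.sqrt(5.0)
for k in range(14):
    shift = mu if k < 6 else lam    # 6 fixed-shift warm-up steps, then Rayleigh shifts
    x = np.linalg.solve(A - shift * np.eye(5), x)
    x /= np.linalg.norm(x)
    lam = x @ A @ x                 # Rayleigh quotient estimate
```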

The Rayleigh quotient method was applied to A in (8.5-8) with μ = 1000 to tune to the third eigenpair. The eigenvector and eigenvalue errors are plotted in Fig. 8.5-5, and they clearly illustrate the "fast" convergence rates that can be achieved by the inverse iterations using shifts that approach the targeted eigenvalue. This technique is one of the main improvements to the QR algorithm that allows it to converge within a few iterations to each eigenvalue.

FIGURE 8.5-5 Errors of x^(k) and λ^(k) from the Rayleigh quotient iteration.

Before leaving this subsection, let us take a closer look at the convergence properties of the Rayleigh quotient method. Suppose we are targeting the eigenpair (λ_m, v_m), and denote by λ_p the eigenvalue that is nearest to λ_m. After k iterations, the inverse iteration with shift μ ≈ λ_m will produce the eigenpair estimate (λ^(k), x^(k)), with ‖x^(k) − v_m‖₂ = δ = O(r^k), where r = |λ_m − μ|/|λ_p − μ|. By (8.5-31), the Rayleigh estimate then satisfies |λ^(k) − λ_m| = O(δ²), so using it as the next shift contracts the eigenvector error to O(δ³). Each Rayleigh quotient iteration therefore roughly cubes the error, which is the cubic convergence observed in Fig. 8.5-5.

8.5.1.1.2 Subspace iteration methods

Let us now generalize the power iteration to iterate on several vectors simultaneously, and assume that the eigenvalues are distinct in magnitude, i.e., |λ₁| > |λ₂| > ⋯ > |λ_N|. This assumption is often satisfied in practice since no two elastic modes in the same structure can possess the same frequency, although they could be very close.

Consider M < N linearly independent unit vectors, x₁^(0), x₂^(0), …, x_M^(0).


Denote the matrix of vectors by X_M^(0) = [x₁^(0) | x₂^(0) | ⋯ | x_M^(0)]. Then, the power iteration applied to X_M^(0) leads to the iterates

X_M^(k) = A X_M^(k−1) = A^k X_M^(0)      (8.5-42)

Substituting the expansion of A^k in terms of its eigenvectors and eigenvalues and generalizing (8.5-23) yields

X_M^(k) = (v₁ λ₁^k v₁^T + ⋯ + v_M λ_M^k v_M^T) X_M^(0) + (v_{M+1} λ_{M+1}^k v_{M+1}^T + ⋯ + v_N λ_N^k v_N^T) X_M^(0)
        = λ_M^k [(r₁^k v₁ v₁^T + ⋯ + r_M^k v_M v_M^T) X_M^(0) + O(|r_{M+1}|^k)],   r_n = λ_n/λ_M
        ≈ (λ₁^k v₁ v₁^T + ⋯ + λ_M^k v_M v_M^T) X_M^(0)   as k → ∞      (8.5-43)

since |r_N| < ⋯ < |r_{M+1}| < 1. As in the power iteration, we compute X_M^(k) recursively, via X_M^(k) = A X_M^(k−1), rather than calculating A^k X_M^(0). Additionally, we need a way to normalize the iterates to prevent them from growing exponentially. A natural way to accomplish this is by calculating the QR factorization, which leads to the following modification of Eq. (8.5-42):

Orthogonal iteration algorithm

QR factor: Z_M^(0) = Q̃_M^(0) R_M^(0)
for k = 1, … until done
    Z_M^(k) = A Q̃_M^(k−1)
    QR factor: Z_M^(k) = Q̃_M^(k) R_M^(k)      (8.5-44)

For the moment, we will use the notation Q̃_M^(k) to denote the orthonormal QR factor in the orthogonal iteration, to distinguish it from the orthonormal factors, Q^(k), in the QR iteration and their product, Q̂^(k). Since the column space of Z_M^(k) equals the column space of Q̃_M^(k), i.e., R_{Z_M^(k)} = R_{Q̃_M^(k)}, we can think of algorithm (8.5-44) as an approach for generating converging subspaces associated with the column space of Q̃_M^(k). Note that the matrices, Q̃_M^(k), may not strictly converge because of sign differences in their columns that result from powers of negative eigenvalues. This is not a problem for subspaces, however. Let us denote the matrices of the first M eigenvalues and eigenvectors by Λ_M = diag(λ₁, …, λ_M) and V_M = [v₁ | ⋯ | v_M], respectively. Then,

A V_M = V_M Λ_M      (8.5-45)

The above equation implies that the subspace spanned by the eigenvectors, R_{V_M}, is invariant under A. This invariance under mappings is the basis of convergence for almost every iterative method. Before discussing how the subspaces R_{Q̃_M^(k)} converge to R_{V_M}, we need the following definition:

Definition. Let Q and V be N × M orthonormal matrices and let R_Q and R_V denote their column spaces, respectively. Then, the distance between R_Q and R_V is defined by the 2-norm of the difference of their orthogonal projectors,

dist(R_Q, R_V) = ‖P_Q − P_V‖₂      (8.5-46)

where P_Q = Q Q^T and P_V = V V^T. It can be shown that ‖P_Q − P_V‖₂ = ‖Q^T V^t‖₂ = ‖V^T Q^t‖₂, where Q^t and V^t are N × (N − M) orthonormal matrices that are orthogonal to Q and V, respectively. Since Q, V, Q^t, and V^t are orthonormal, 0 ≤ dist(R_Q, R_V) ≤ 1. Clearly, if R_Q = R_V, then dist(R_Q, R_V) = 0. On the other hand, if the intersection with the orthogonal complement is nontrivial, i.e., R_Q ∩ R_{V^t} ≠ {0}, then dist(R_Q, R_V) = 1. To see this, suppose x ∈ R_Q ∩ R_{V^t} and ‖x‖₂ = 1; then

1 ≥ ‖P_Q − P_V‖₂ ≥ ‖(P_Q − P_V) x‖₂ = ‖P_Q x − P_V x‖₂ = ‖x − 0‖₂ = 1      (8.5-47)

In this case, we will say that Q is deficient with respect to V. We have the following convergence theorem for the orthogonal iteration [see Golub and Van Loan (2013) for proof]:


Theorem 8.5-1. If Q̃_M^(0) is not deficient with respect to V_M, then the subspaces, R_{Q̃_M^(k)}, with Q̃_M^(k) defined by Eq. (8.5-44), converge to R_{V_M}. Furthermore, the convergence rate is linear and is given by

dist(R_{Q̃_M^(k)}, R_{V_M}) = O(|λ_{M+1}/λ_M|^k)      (8.5-48)

Observe that the theorem states that, if the column space of the initial matrix, Z_M^(0), intersects R_{V_M^t} only trivially, then the column space of Q̃_M^(k) = [q̃₁^(k) | ⋯ | q̃_M^(k)] will converge to the subspace spanned by the first M eigenvectors of A, arranged in descending order of their eigenvalue magnitudes. The convergence rate in (8.5-48) represents an overall convergence of the column space of Q̃_M^(k). Note that, by the QR factorization algorithm, the orthogonal iteration using the first L < M columns of Z^(0) would produce iterates, Q̃_L^(k), that are identical to the first L columns of Q̃_M^(k), except for possible sign differences. This observation allows us to determine the convergence rate of q̃_L^(k) to v_L. First, define the following matrices orthogonal to v_L: V_{L−} = [v₁ | ⋯ | v_{L−1}] and V_{L+} = [v_{L+1} | ⋯ | v_N]. Also, denote the corresponding column spaces by R_{L−} = range(V_{L−}) and R_{L+} = range(V_{L+}). Then, we can represent q̃_L^(k) by a linear combination of unit orthogonal vectors, v_L, v_{L−}, and v_{L+}, i.e.,

q̃_L^(k) = α^(k) v_{L−} + β^(k) v_L + γ^(k) v_{L+}      (8.5-49)

where v_{L−} ∈ R_{L−} and v_{L+} ∈ R_{L+}. For M = L, Theorem 8.5-1 gives

dist(R_{Q̃_L^(k)}, R_{V_L}) = ‖[Q̃_L^(k)]^T V_{L+}‖₂ = O(|λ_{L+1}/λ_L|^k)      (8.5-50)

which implies that

γ^(k) = ⟨q̃_L^(k), v_{L+}⟩ ≤ ‖[Q̃_L^(k)]^T V_{L+}‖₂ = O(|λ_{L+1}/λ_L|^k)      (8.5-51)

Hence, by Theorem 8.5-1, for M = L − 1,




dist(R_{Q̃_{L−1}^(k)}, R_{V_{L−1}}) = ‖V_{L−1}^T Q̃_{L−1}^(k)t‖₂ = O(|λ_L/λ_{L−1}|^k)      (8.5-52)

where Q̃_{L−1}^(k)t = [q̃_L^(k) | ⋯ | q̃_N^(k)] is an orthonormal matrix that is orthogonal to Q̃_{L−1}^(k). Accordingly, Eqs. (8.5-49) and (8.5-52) lead to

α^(k) = ⟨q̃_L^(k), v_{L−}⟩ ≤ ‖V_{L−1}^T Q̃_{L−1}^(k)t‖₂ = O(|λ_L/λ_{L−1}|^k)      (8.5-53)

Furthermore, since q̃_L^(k) and v_L have unit norms, β^(k) → 1. Therefore, q̃_L^(k) converges to v_L at the rate r̂_L^k, which is defined by the larger of the two ratios, i.e.,

‖q̃_L^(k) − v_L‖₂ = O(r̂_L^k),   r̂_L = max{ |λ_L/λ_{L−1}|, |λ_{L+1}/λ_L| }      (8.5-54)

Since q̃_L^(k) converges to v_L with error bound O(r̂_L^k), its Rayleigh quotient, ρ_A(q̃_L^(k)) = [q̃_L^(k)]^T A q̃_L^(k), should converge quadratically to the eigenvalue, λ_L, and

|ρ_A(q̃_L^(k)) − λ_L| = O(r̂_L^2k)      (8.5-55)

Let us generalize the Rayleigh quotient over the columns q̃_m^(k), m = 1, …, M, by defining the matrix iterates

A_M^(k) = [Q̃_M^(k−1)]^T A Q̃_M^(k−1)      (8.5-56)

Since q̃_m^(k) converges to v_m, for m = 1, …, M,

A_M^(k) → V_M^T A V_M = Λ_M = diag(λ₁, …, λ_M)      (8.5-57)

In fact, by Eq. (8.5-55), the diagonals of A_M^(k) converge quadratically to Λ_M. We can also include in algorithm (8.5-44) estimates of the eigenvalues corresponding to Q̃_M^(k−1) using the Rayleigh quotient to obtain

Λ_M^(k) = diag([Q̃_M^(k−1)]^T A Q̃_M^(k−1)) = diag(A_M^(k))      (8.5-58)

The orthogonal iteration method with the Rayleigh quotient was applied to the matrix A in (8.5-8), for M = 3. By (8.5-54) and (8.5-7), we obtain


r̂₁ = λ₂/λ₁ ≈ 0.4275
r̂₂ = max{λ₂/λ₁, λ₃/λ₂} ≈ max{0.4275, 0.6670} = 0.6670      (8.5-59)
r̂₃ = max{λ₃/λ₂, λ₄/λ₃} ≈ max{0.6670, 0.3736} = 0.6670

Fig. 8.5-6 plots the errors, ‖q̃_L^(k) − v_L‖₂ and |λ_L^(k) − λ_L|/|λ_L|, for L = 1, 2, and 3, versus the iteration number, k. The power-law fits to the errors verify our convergence rate estimates of ‖q̃_L^(k) − v_L‖₂ = O(r̂_L^k) and |λ_L^(k) − λ_L|/|λ_L| = O(r̂_L^2k).
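The orthogonal iteration just described can be sketched in NumPy (illustrative, not the text's code); with M = 3, the diagonal Rayleigh estimates converge to the three largest eigenvalues:

```python
import numpy as np

# Orthogonal (subspace) iteration (8.5-44) with M = 3: multiply the current
# orthonormal basis by A, re-orthonormalize with a thin QR factorization,
# and read Rayleigh eigenvalue estimates off the diagonal of Q^T A Q.
A = np.array([[   40.   ,  -14.142,   -6.325,    0.   ,     0.   ],
              [  -14.142, 2110.   , -447.214,    0.   , -1000.   ],
              [   -6.325, -447.214,  802.   , -200.   ,     0.   ],
              [    0.   ,    0.   , -200.   ,  800.   ,  -447.214],
              [    0.   ,-1000.   ,    0.   , -447.214,  2100.   ]])
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((5, 3)))   # generic starting subspace
for _ in range(150):
    Z = A @ Q
    Q, _ = np.linalg.qr(Z)
ritz = np.diag(Q.T @ A @ Q)   # estimates of the three largest eigenvalues
```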

FIGURE 8.5-6 Errors of q̃_m^(k) and λ_m^(k) from the orthogonal iteration: (A) first eigenpair, with error fits 16.23(0.4275)^k and 140.35(0.4275)^2k; (B) second eigenpair, with fits 0.49(0.6670)^k and 0.08(0.6670)^2k; (C) third eigenpair, with fits 0.49(0.6670)^k and 0.12(0.6670)^2k. Note that the eigenvalue log-errors decrease twice as fast as the eigenvector log-errors, indicating quadratic convergence.


8.5.1.1.3 QR iteration convergence

The convergence proof of the QR algorithm is based on the convergence of the "full" orthogonal iteration using a starting matrix, Z^(0), with M = N independent columns. Since we are attempting to compute all N eigenpairs, we will omit the subscripts, M. The connection between the full orthogonal iteration and the QR iteration is the matrix iterate, A^(k), that is defined in Eq. (8.5-56). First, note that A^(k) has the following QR factorization:

A^(k) = [Q̃^(k−1)]^T A Q̃^(k−1) = [Q̃^(k−1)]^T Z^(k) = [Q̃^(k−1)]^T Q̃^(k) R^(k) = Q^(k) R^(k)      (8.5-60)

where Q^(k) = [Q̃^(k−1)]^T Q̃^(k) is the orthonormal QR factor. Reversing the order of the product of the QR factors yields

R^(k) Q^(k) = [Q̃^(k)]^T Z^(k) [Q̃^(k−1)]^T Q̃^(k) = [Q̃^(k)]^T A Q̃^(k−1) [Q̃^(k−1)]^T Q̃^(k)
            = [Q̃^(k)]^T A Q̃^(k) = A^(k+1)      (8.5-61)

Observe that Eqs. (8.5-60) and (8.5-61) define the QR iteration with iterates, A^(k), that converge to Λ by virtue of the convergence of the orthogonal iteration. Also, if Z^(0) = I_N, then the orthonormal QR factors from the QR iteration and the orthogonal iteration are related by Q^(k) = [Q̃^(k−1)]^T Q̃^(k) and Q̂^(k) = Q̃^(k). Therefore, the iterates, A^(k), from the full orthogonal iteration will equal those generated by the QR iteration algorithm. Henceforth, we will denote the orthonormal factors in the orthogonal iteration by Q̂^(k) = [q̂₁^(k) | ⋯ | q̂_N^(k)].

8.5.1.1.4 Relation to power and inverse iterations

It is worth noting that Q̂^(k) is also the QR factor of A^k. Let R̂^(k) = R^(k) R^(k−1) ⋯ R^(1) denote the product of the upper-triangular factors from the orthogonal iteration. Then, for k = 4,


A⁴ = A³(A) = A³(Q̂^(1) R^(1))
   = A²(A Q̂^(1)) R^(1) = A²(Z^(2)) R^(1) = A²(Q̂^(2) R^(2)) R^(1)
   = A(A Q̂^(2)) R^(2) R^(1) = A(Z^(3)) R^(2) R^(1) = A(Q̂^(3) R^(3)) R^(2) R^(1)
   = (A Q̂^(3)) R^(3) R^(2) R^(1) = Z^(4) R^(3) R^(2) R^(1)
   = Q̂^(4) R^(4) R^(3) R^(2) R^(1) = Q̂^(4) R̂^(4)      (8.5-62)

By induction, it can be shown that, for general k,

A^k = Q̂^(k) R̂^(k)      (8.5-63)

In other words, the full orthogonal and the QR iterations produce the QR factorization of powers of A. This is expected since the orthogonal iteration is an extension of the power method. Let us apply the power method to a starting vector, e₁. Then Eq. (8.5-63) yields

A^k e₁ = Q̂^(k) R̂^(k) e₁ = r̂_{1,1}^(k) Q̂^(k) e₁ = r̂_{1,1}^(k) q̂₁^(k)      (8.5-64)

where r̂_{1,1}^(k) is the element of R̂^(k) in the first row and first column. Therefore, the first column of Q̂^(k) is proportional to the kth iterate from the power method applied to the starting vector, e₁. Surprisingly, Eq. (8.5-63) also implies that the inverse power iteration is occurring simultaneously. To see this, let us take the inverse of Eq. (8.5-63),

A^(−k) = [Q̂^(k) R̂^(k)]^(-1) = [R̂^(k)]^(-1) [Q̂^(k)]^T
A^(−k) = [A^(−k)]^T = Q̂^(k) [R̂^(k)]^(-T)      (8.5-65)

where the second line results from the symmetry of A. Since [R̂^(k)]^(-T) is lower triangular, Eq. (8.5-65) shows that the orthogonal and QR iterations also calculate the QL (orthonormal and lower-triangular) factorization of the inverse power of A. Applying A^(−k) to e_N in (8.5-65) yields

A^(−k) e_N = Q̂^(k) [R̂^(k)]^(-T) e_N = [r̂_{N,N}^(k)]^(-1) Q̂^(k) e_N = [r̂_{N,N}^(k)]^(-1) q̂_N^(k)      (8.5-66)


where r̂_{N,N}^(k) is the element of R̂^(k) in the Nth row and Nth column. This shows that the normalized iterates of the inverse power method, with a starting vector e_N, are proportional to q̂_N^(k).

8.5.1.1.5 Incorporating shifts

We have shown that, with respect to the last column, q̂_N^(k), the QR iteration is performing the inverse iteration. This allows us to speed up convergence by incorporating shifts. Since q̂_N^(k) converges to the eigenvector, v_N, that is associated with the eigenvalue, λ_N, consider a shift μ ≈ λ_N. Shifts can be included in the orthogonal and QR iterations as follows:

Full orthogonal iteration with constant shift

Q̂^(0) = I_N
for k = 1, … until done
    Z^(k) = (A − μI_N) Q̂^(k−1)
    QR factor: Z^(k) = Q̂^(k) R^(k)      (8.5-67)

QR algorithm with constant shift

A^(1) = A
for k = 1, … until done
    QR factor: A^(k) − μI = Q^(k) R^(k)
    A^(k+1) = R^(k) Q^(k) + μI      (8.5-68)

Similar arguments as before show that the orthogonal and QR iterations are related by

Q̂^(k) = Q^(1) Q^(2) ⋯ Q^(k)
A^(k+1) = [Q̂^(k)]^T A Q̂^(k)      (8.5-69)

Moreover, Eq. (8.5-63) generalizes to

(A − μI_N)^k = Q̂^(k) R̂^(k)      (8.5-70)

Therefore, the last column, q̂_N^(k), converges to v_N; and these are the exact iterates that would have been produced by the inverse iteration with a shift


equal to μ. The corresponding eigenvalue estimate is given by the Rayleigh quotient, λ_N^(k) = [q̂_N^(k)]^T A q̂_N^(k). The closer μ is to λ_N, the faster the estimates (λ_N^(k), q̂_N^(k)) converge to (λ_N, v_N). However, the convergence rate will still be linear. The Rayleigh quotient iteration suggests that if we instead use the current eigenvalue estimate, λ_N^(k), as our shift, then cubic convergence rates may be achieved. This is indeed what occurs. In fact, (8.5-56) and (8.5-58) imply that we do not have to calculate the Rayleigh quotient since λ_N^(k) = a_{N,N}^(k+1), where a_{N,N}^(k+1) is the element of A^(k+1) in the last row and column. This leads to the QR iteration that uses the Rayleigh shifts.

QR iteration with Rayleigh shifts

A^(1) = A
μ^(0) = a_{N,N}
for k = 1, … until done
    QR factor: A^(k) − μ^(k−1) I = Q^(k) R^(k)
    A^(k+1) = R^(k) Q^(k) + μ^(k−1) I
    μ^(k) = a_{N,N}^(k+1)      (8.5-71)

We will postpone further discussion of how to incorporate these shifts until after the next section, which discusses the transformation of A to a similar tridiagonal matrix. 8.5.1.1.6 Tridiagonal reduction

We have shown that the QR iteration converges and that shifting strategies can improve convergence rates that are similar to the Rayleigh quotient iteration. We now discuss ways to reduce the computations in the QR iteration. In general, A will be a fully populated matrix. Therefore, the obvious improvement would be to convert A, via similarity transformation, to a symmetric matrix with most of its off-diagonal elements equal to zero. In fact, A can be reduced to a tridiagonal matrix, T, by Householder transformations that eliminate elements below and above the subdiagonal and superdiagonal, respectively. The use of orthonormal congruence transformations in the reduction ensures that the resulting tridiagonal matrix is symmetric and similar to A.

8.5 Matrix eigenvalue problem

To illustrate, we will tridiagonalize the following 5  5 matrix A: 2 3 51 17 3 11 5 6 7 1 6 16 7 6 L17 18 6 7 A¼6 1 16 1 9 7 (8.5-72) 6 L3 7 6 7 6 1 17 10 5 4 11 L5 16 9 10 30 We start with column one. The QR factorization of A required us to introduce zeros below the first element. However, for tridiagonalization, we instead want to zero-out elements below the second row. So let x ¼ ½ 17 3 11 5 T represent the elements that are highlighted in bold in Eq. (8.5-72). The Householder reflection vector, which eliminates the second, third, and fourth elements of x is given by u1 ¼

x þ sgnðx1 Þkxk2 e1 kx þ sgnðx1 Þkxk2 e1 k2

¼ ½ 0:95047

(8.5-73)

0:07490 0:27462 0:12483 

T

The resulting Householder transformation is given by

(8.5-74)

Premultiplying H1 2 51 6 6 21:07131 6 H1 A ¼ 6 0 6 6 0 4 0

to A yields 17 3 15:32890 0:42712 1:62630 15:88754 15:62977 11:62283

11 1:51865 0:64687

1:41234 18:29480 9:18743 9:41145

3 5 7 13:52550 7 7 11:32659 7 7 7 18:53085 5 26:12234 (8.5-75)

809

810

CHAPTER 8 Numerical methods

Observe, the Householder transformation introduced zeros in the first column as desired, but H1 A is no longer symmetric. To retain symmetry, we should postmultiply H1 A by HT1 . Since the Householder transformation matrices are symmetric, we can calculate the congruence transformation, A1 ¼ H1 AHT1 ¼ H1 AH1 , producing 2 3 51 21:07131 0 0 0 6 7 2:07548 7:65755 9:35450 7 6 21:07131 16:43018 6 7 A1 ¼ 6 0 2.07548 16:17924 0:42269 10:84043 7 6 7 6 7 0 L7.65755 0:42269 25:02324 15:47247 5 4 0 L9.35450 10:84043 15:47247 23:36734 (8.5-76) Note that H1 was defined to zero out the elements in rows three to five in the first column of A and also leave its first row unchanged. As can be seen in Eq. (8.5-75), the first row of H1 A is unchanged, hence the exact same transformation, postmultiplied to H1 A will also remove the elements in columns three to five of the first row. Continuing the tridiagonal reduction to the second column, we will represent the elements in bold by x ¼ ½ 2:07548 7:65755 9:35450 T . Then the Householder reflection vector is u2 ¼

x þ sgnðx1 Þkxk2 e1 ¼ ½ 0:76459 kx þ sgnðx1 Þkxk2 e1 k2

0:40825 0:49872 T (8.5-77)

with the transformation matrix,

(8.5-78)

Computing the triple product, A2 ¼ H2 A1 H2 , yields

8.5 Matrix eigenvalue problem

2

51

21:07131

0

0

0

3

6 7 0 0 6 21:07131 16:43018 12:26590 7 6 7 7 A2 ¼ 6 0 12:26590 41:42734 0:64366 5:05776 6 7 6 7 0 0 L0.64366 18:06097 3:73555 5 4 0 0 L5.05776 3:73555 5:08150 (8.5-79) The final reduction in the third column is based on the vector x ¼ ½ 0:64366 5:05776 T , which represents the elements that are highlighted in bold. The Householder reflection vector is given by  0:75041 x þ sgnðx1 Þkxk2 e1 ¼ (8.5-80) u3 ¼ kx þ sgnðx1 Þkxk2 e1 k2 0:66097 with the transformation matrix defined by

(8.5-81)

Calculating the triple product, T ¼ H3 A2 H3 , yields the symmetric tridiagonal matrix,

811

812

CHAPTER 8 Numerical methods

2

51

6 6 21:07131 6 T¼6 0 6 6 0 4 0

21:07131 16:43018 12:26590 0 0

0

0

0

3

7 12:26590 0 0 7 7 7 41:42734 5:09856 0 7 7 5:09856 6:22398 5:24193 5 0 5:24193 16:91849 (8.5-82)

b ¼ H1 H2 H3 denote the product of the Householder To summarize, let H b reduces transformations, which is also orthonormal. We have shown that H A, via conjugate transformation, to a similar and symmetric tridiagonal matrix, T, T

b ¼ ðH3 H2 H1 ÞAðH1 H2 H3 Þ b AH T¼H

(8.5-83)

The above example easily extends to general N  N symmetric matrices. Before stating the general algorithm, let us first see how we can use symmetry to efficiently calculate the triple product, Hk Ak1 Hk . Let u denote a Householder reflection vector, with the corresponding transformation matrix, H ¼ I  2uuT . The calculation of HAH can be rearranged algebraically as follows:     HAH ¼ I  2uuT A I  2uuT   ¼ A þ 2u 2uT Au uT  ð2AuÞuT  uð2AuÞT ¼ A þ 2buuT  puT  upT ; p ¼ 2Au; b ¼ uT p

(8.5-84)

¼ A þ ðbu  pÞuT þ uðbu  pÞT ¼ A þ wuT þ uwT ; w ¼ bu  p Householder tridiagonalization algorithm

The following algorithm transforms an N  N real symmetric matrix,   A ¼ ai;j , to a symmetric tridiagonal matrix, T. The diagonal elements, ti;i , and off-diagonal elements, tiþ1;i ¼ ti;iþ1 , overwrite the ai;i and ai;iþ1 T  elements, respectively. The reflection vectors, uk ¼ ukþ1;k ; .; uN;k , are stored in the lower-triangular part, akþ1:N;k . The algorithm is

8.5 Matrix eigenvalue problem

for k ¼ 1; /; N  2    s ¼ sgn akþ1;k akþ1: N; k 2 akþ1;k ¼ akþ1;k þ s pffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi m ¼ 2s akþ1;k akþ1: N; k ¼ akþ1: N; k =m

813

loop over columns of A calculate sgnðx1 Þkxk2 x1 ¼ x1 þ sgnðx1 Þkxk2 calculate norm of e uk ¼ x þ sgnðx1 Þkxk2 e1 normalize e uk

pkþ1: N ¼ 2akþ1: N;kþ1: N akþ1: N;k

calculate p ¼ 2Akþ1: N;kþ1;N u

b ¼ aTkþ1: N;k pkþ1: N wkþ1: N ¼ bakþ1: N;k  pkþ1: N

calculate b ¼ uT p calculate w ¼ bu  p

akþ1: N;kþ1: N ¼ akþ1: N;kþ1: N þ

calculate Akþ1: N;kþ1: N þ wuT þ uwT

wkþ1: N aTkþ1: N þ akþ1: N wTkþ1: N ak;kþ1 ¼ s

store off -diagonal term; tk;kþ1

If we want to compute the eigenvectors, we need to calculate the product of the Householder transformations. We can take advantage of the form of transformations to reduce the number of floating point operations. Let b k ¼ H1 /Hk denote the product of the first k Householder matrices. H Observe that Hk has the form

(8.5-85)

where by the above halgorithm, uk is stored in the elements akþ1:N;k . If we i b k1 ¼ H b k1;1 is N  k and b k1;1 H b k1;2 , where H partition H b k1;2 is N  ðN kÞ, then H

814

CHAPTER 8 Numerical methods

(8.5-86)

This leads to the following algorithm for calculating the product of the Householder transformations whose reflection vectors are stored in the lower-triangular part of A. Product of householder transformations

Let the Householder reflection vectors, uk , be stored in akþ1:N;k . Then the following algorithm calculates the product of the Householder transformations that are required to tridiagonalize A: for i ¼ 1; /; N

hi;i ¼ 1

h2: N;2:N ¼ h2:N;2:N  2a2:N;1 aT2:N;1 for k ¼ 2; /; N  2 p1:N ¼ 2h1:N;kþ1:N akþ1:N;k

define N  N identity matrix calculate IN1  2u1 uT1 submatrix of H1 loop over remaining transformations calculate pk ¼ 2H1:N;kþ1:N uk

h1:N;kþ1:N ¼ h1:N;kþ1:N  p1:N aTkþ1:N;k update H1:N;kþ1:N  pk uTk 8.5.1.1.7 QR iteration for tridiagonal matrices

Applying the tridiagonalization process to the matrix A in (8.5-8) yields the reduced tridiagonal matrix, 2 3 40 15.49193 0 0 0 6 7 0 0 6 15.49193 1558.66667 1207:13527 7 6 7 7 T¼6 0 1207:13527 2117.82598 622:90476 0 6 7 6 7 0 0 622:90476 1104.86359 262.06241 5 4 0 0 0 262.06241 1030.64376 (8.5-87) and the accumulated Householder transformations,

8.5 Matrix eigenvalue problem

2

1

6 60 6 b ¼60 H 6 6 40 0

0

0

0

0

0.912871 0.408248 0 0

0.265689 0.594098 0.067639 0.756229

0.131834 0.294791 0.876783 0.356329

815

3

7 0.280528 7 7 0.627280 7 7 7 0.476106 5 0.548769 (8.5-88)

b b T A H. so that T ¼ H Each QR iteration involves a QR factorization that eliminates only the subdiagonal elements of a tridiagonal matrix. This can be accomplished more efficiently using the Givens rotation, which was presented in Section 8.4. Recall that the kth QR iteration is given by TðkÞ ¼ QðkÞ RðkÞ Tðkþ1Þ ¼ RðkÞ QðkÞ

(8.5-89)

where Tð1Þ ¼ T. Let us complete the first  QR iteration. First, we have to compute the QR factorization of T ¼ ti;j that is defined in (8.5-87). To eliminate t2;1 , we apply the Givens transformation, G2;1 , where 2 3 c s 0 0 0 40 6 7 ¼ 0:93250 6 s c 0 0 0 7 c ¼ pffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi 6 7 402 þ 15:491932 7 G2;1 ¼ 6 6 0 0 1 0 0 7; 15:49193 6 7 ¼ 0:36116 4 0 0 0 1 0 5 s ¼ pffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi 402 þ 15:491932 0 0 0 0 1 (8.5-90) Calculating R2;1 ¼ G2;1 T yields 2 42.89522 577.37055 435.96603 6 0 1447.86913 1125.65944 6 6 R2;1 ¼ 6 0 L1207.13527 2117.82598 6 6 0 0 622.90476 4 0

0

0

0

0

0 622.90476 1104.86359

0 0 262.06241

262.06241

1030:64376 (8.5-91)

Continuing to column two, we need to eliminate the element in the third row (in bold). This leads to

3 7 7 7 7 7 7 5

816

CHAPTER 8 Numerical methods

2

1

6 60 6 G3;2 ¼ 6 60 6 40 0

0

0

0 0

c s s c 0 0 0 0

0 0 1 0

3

1447:86913 7 ¼ 0:76807 0 7 c ¼ pffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi 7 1447:869132 þ 1207:135272 07 7; 1207:13527 7 0 5 s ¼ pffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi ¼ 0:64037 1447:869132 þ 1207:135272 1 (8.5-92)

where R3;2 ¼ G3;2 R2;1 and, 2 42:89522 577:37055 435:96603 0 6 0 1885:07310 2220:76799 398:88655 6 6 R3;2 ¼ 6 0 0 905:80660 478:43481 6 6 0 0 L622.90476 1104:86359 4 0 Eliminating produces 2 1 6 60 6 G4;3 ¼ 6 60 6 40 0

0

0

262:06241

0 0 0 262:06241

3 7 7 7 7 7 7 5

1030:64376 (8.5-93)

the element in the fourth row and third column (in bold) 3 0 0 905:80660 7 ¼ 0:82397 0 0 7 c ¼ pffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi 2 2 7 905:80660 þ 622:90476 s 07 7; 622:90476 7 0 s c 0 5 s ¼ pffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi ¼ 0:56663 905:806602 þ 622:904762 0 0 0 1 (8.5-94) 0 1 0

0 0 c

and R4;3 ¼ G4;3 R3;2 , where 2 3 42:89522 577:37055 435:96603 0 0 6 7 0 1885:07310 2220:76799 398:88655 0 6 7 6 7 6 R4;3 ¼ 6 0 0 1099:31613 1020:26539 148:49225 7 7 6 7 0 0 0 639:28237 215:93230 5 4 0 0 0 262.06241 1030:64376 (8.5-95)

8.5 Matrix eigenvalue problem

817

The final Givens transformation, which eliminates the element in the fifth row and fourth column (in bold), is 2 3 1 0 0 0 0 639:28237 6 7 ¼ 0:92527 6 0 1 0 0 0 7 c ¼ pffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi 6 7 639:282372 þ 262:062412 7 G5;4 ¼ 6 6 0 0 1 0 0 7; 262:06241 6 7 ¼ 0:37930 4 0 0 0 c s 5 s ¼ pffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi 639:282372 þ 262:062412 0 0 0 s c (8.5-96) Applying G5;4 to R4;3 yields the upper triangular matrix, Rð1Þ ¼ G5;4 R4;3 , where 2 3 42:89522 577:37055 435:96603 0 0 6 7 0 1885:07310 2220:76799 398:88655 0 6 7 6 7 ð1Þ R ¼6 0 0 1099:31613 1020:26539 148:49225 7 6 7 6 7 0 0 0 690:91147 590:71925 5 4 0 0 0 0 871:72478 (8.5-97) Eqs. (8.5-90) through (8.5-97) imply that   G5;4 G4;3 G3;2 G2;1 T ¼ Rð1Þ

(8.5-98)

Premultiplying (8.5-98) by the transpose of the product of the Givens matrices, we obtain the QR factorization, T ¼ Qð1Þ Rð1Þ , where the orthonormal factor, Qð1Þ , is given by

818

CHAPTER 8 Numerical methods

T  Qð1Þ ¼ G5;4 G4;3 G3;2 G2;1 2 0:93250 0:27739 6 6 6 0:36116 0:71623 6 6 ¼6 0 0:64037 6 6 6 0 0 6 4 0 0

0:19056 0:12125 0:49203

0:31307

0:63287

0:40269

0:56663

0:76240

0

0:37930

0:04971

3

7 7 0:12834 7 7 7 0:16508 7 7 7 7 0:31253 7 5 0:92527 (8.5-99)

To complete the QR iteration step, we reverse the product of the QR factors so that Tð2Þ ¼ Rð1Þ Qð1Þ , which yields 2 3 248:522 680:808 0 0 0 6 7 0 0 6 680:808 2772:248 703:964 7 6 7 ð2Þ 6 7 T ¼6 0 703:964 1273:836 391:491 0 7 6 7 0 0 391:491 750:811 330:645 5 4 0 0 0 330:645 806:584 (8.5-100) Note that the tridiagonal structure is preserved under the congruence transformations, Tð2Þ ¼ Rð1Þ Qð1Þ ¼ Qð1Þ TQð1Þ . We list below the iterates, TðkÞ , for the indices 3, 5, 10, 18, 20, and 37. The later four iterates were chosen to illustrate when decoupling occurs (to three decimal places). The numbers in bold indicate the off-diagonal elements that decouple the system: 2 3 2914:094 722:975 0 0 0 6 7 0 0 6 722:975 1261:636 447:867 7 6 7 ð3Þ 7 T ¼6 0 447:867 674:822 370:369 0 6 7 6 7 0 0 370:369 705:080 302:041 5 4 0 0 0 302:041 296:368 (8.5-101) T

8.5 Matrix eigenvalue problem

2

3189:632

6 6 110:413 6 ð5Þ T ¼6 0 6 6 0 4 0

110:413

0

0

0

3

7 0 0 7 7 127:814 0 7 7 7 368:288 4:190 5 4:190 39:691 (8.5-102) 2 3 3196:148 1:481 0 0 0 6 7 1365:240 22:461 0 0 7 6 1:481 6 7 7 0 22:461 911:045 0:915 0 Tð10Þ ¼ 6 6 7 6 7 0 0 0:915 339:933 0 4 5 0 0 0 0 39:634 (8.5-103) 2 3 3196:149 0:002 0 0 0 6 7 1366:345 0:871 0 0 7 6 0:002 6 7 0 0:871 909:940 0 0 7 Tð18Þ ¼ 6 6 7 6 7 0 0 0 339:932 0 5 4 0 0 0 0 39:634 (8.5-104) 2 3 3196:149 0 0 0 0 6 7 0 1366:347 0:386 0 0 7 6 6 7 7 0 0:386 909:939 0 0 Tð20Þ ¼ 6 6 7 6 7 0 0 0 339:932 0 5 4 0 0 0 0 39:634 (8.5-105) 2 3 3196:149 0 0 0 0 6 7 0 1366:347 0 0 0 7 6 6 7 7 0 0 909:938 0 0 Tð37Þ ¼ 6 6 7 6 7 0 0 0 339:932 0 5 4 0 0 0 0 39:634 (8.5-106) 1314:985 154:380 154:380 939:405 0 127:814 0 0

819

820

CHAPTER 8 Numerical methods

Similar to Eqs. (8.5-19) and (8.5-20), we can calculate the product of the b ð36Þ ¼ Qð1Þ Qð2Þ /Qð36Þ , which yields orthogonal factors, Q 2 3 0:00283 0:00480 0:00735 0:02959 0:99952 6 7 6 0:57632 0:41068 0:41275 0:57295 0:02360 7 6 7 b ð36Þ ¼ 6 0:78175 0:06549 0:22191 0:57884 0:01687 7 Q 6 7 6 7 0:71685 0:36956 0:54179 0:01055 5 4 0:23643 0:02861 0:55960 0:80235 0:20556 0:00279 (8.5-107) ð36Þ b , will essentially diagonalize T and give diThe orthonormal matrix, Q agonal elements that approximate the eigenvalues, b ð36Þ T Q b ð36Þ z L Tð37Þ ¼ Q T

(8.5-108)

Comparing the tridiagonal iterates to the “full” matrix iterates in Eqs. (8.5-10) through (8.5-16), we note that the number of iterations needed for each decoupling and the order of the decoupling are similar. Although the tridiagonalization does not improve the convergence rate, it does provide significant computational savings. For reference, Table 8.5-1 lists the off-diagonal elements to five decimal places. The elements in bold denote the “small” off-diagonal terms that decouple the system to three decimal places and correspond to the indices 10, 18, 20, and 37. Observe that about 50 iterations are needed for convergence to five decimal places. As discussed in the previous section, to increase the convergence rate shifting needs to be introduced. We incorporated the Rayleigh shifts using algorithm (8.5-71). The following criterion was used to zero-out the small ðkÞ

ðkÞ

off-diagonal terms, tn;n1 ¼ tn1;n , if   ðkÞ ðkÞ ðkÞ tn;n1  tol tn1;n1 þ tn;n

(8.5-109)

where tol is larger than machine precision, εmach . If 2 < n < N, then the zeroing of the off-diagonal elements will split the eigenvalue problem into two smaller unreduced tridiagonal systems. Therefore, the actual implementation will have to perform the necessary “bookkeeping” to locate and apply the QR iteration to these unreduced matrices. However, since the Rayleigh shift is equal to the lowest and rightmost diagonal element, the

8.5 Matrix eigenvalue problem

Table 8.5-1 Off-diagonal elements of QR iteration without shifts. ðkÞ

ðkÞ

ðkÞ

ðkÞ

k

t2;1

t3;2

t4;3

t5;4

1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 ... 36 37 ... 46 47 48

15.49193 680.80840 722.97466 274.96908 110.41338 45.64686 19.19882 8.14595 3.47057 1.48141 0.63287 0.27047 0.11561 0.04942 0.02113 0.00903 0.00386 0.00165 0.00071 0.00030 0.00013 0.00006 0.00002 0.00001 0 0 ... 0 0 ... 0 0 0

1207.13527 703.96382 447.86727 228.66415 154.38027 108.08687 74.19985 50.14960 33.62436 22.46062 14.97819 9.98093 6.64871 4.42833 2.94926 1.96415 1.30807 0.87113 0.58014 0.38635 0.25730 0.17135 0.11411 0.07600 0.05061 0.03370 ... 0.00058 L0.00039 ... 0.00001 0.00001 0

622.90476 391.49070 370.36882 283.30019 127.81398 48.03943 17.75269 6.58710 2.45268 0.91489 0.34155 0.12756 0.04765 0.01780 0.00665 0.00248 0.00093 L0.00035 0.00013 0.00005 0.00002 0.00001 0 0 0 0 ... 0 0 ... 0 0 0

262.06241 330.64482 302.04063 40.67636 4.18954 0.47828 0.05560 0.00648 0.00076 0.00009 0.00001 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ... 0 0 ... 0 0 0

deflation will generally start at n ¼ N and decrease to n ¼ 2. This is certainly true for our example. For this reason, we present a simplified QR iteration that incorporates the Rayleigh shift and deflates the system starting from n ¼ N and decrement n by one whenever Eq. (8.5-109) holds.

821

822

CHAPTER 8 Numerical methods

We will use the subscript, 1 : n, to denote the principal submatrix consisting of the first n rows and columns. QR iteration on tridiagonal system with Rayleigh shifts

Tð1Þ ¼ T n¼N ð1Þ mð1Þ ¼ tn;n

for k ¼ 1; /; until n ¼ 1 ðkÞ

ðkÞ

ðkÞ

QR factor T1:n  mðkÞ I1:n ¼ Q1:n R1:n ðkþ1Þ

ðkÞ

ðkÞ

¼ R1:n Q1:n þ mðkÞ I1:n   ðkþ1Þ ðkþ1Þ ðkþ1Þ if tn;n1  tol tn1;n1 þ tn;n

T1:n

Factor }unreduced} system

ðkþ1Þ

ðkþ1Þ

Deflate if tn;n1 is small

ðkþ1Þ

tn;n1 ¼ tn1;n ¼ 0 n¼N1 ðkþ1Þ mðkþ1Þ ¼ tn;n

(8.5-110) For our example problem, the off-diagonal elements are listed in Table 8.52. Note that the convergence is fast and is reached by the 12th iteration to within five decimal places. Table 8.5-2 Off-diagonal elements of QR iteration with Rayleigh shifts. ðkÞ

ðkÞ

ðkÞ

ðkÞ

k

t2;1

t3;2

t4;3

t5;4

1 2 3 4 5 6 7 8 9 10 11 12

15.49193 20.60304 49.40105 129.12309 335.81178 464.63436 628.26595 835.3618 1021.29389 121.67155 0.18159 0

1207.13527 838.07029 240.35257 60.12589 15.02706 8.2928 4.69809 2.68405 0 0 0 0

622.90476 234.9202 161.77993 130.20881 104.85057 1.12941 0 0 0 0 0 0

262.06241 110.73741 6.97017 0.00155 0 0 0 0 0 0 0 0

8.5 Matrix eigenvalue problem

ðkÞ

Below we also list the iterates, TRay , for k ¼ 2; 3; 5; 7; 9; and 12: 2 3 39:887 20:603 0 0 0 6 7 0 0 6 20:603 2914:523 838:070 7 6 7 ð2Þ 7 TRay ¼ 6 838:070 671:974 234:920 0 6 0 7 6 7 0 234:920 1289:788 110:737 5 4 0 0

0

2

40:459 49:401 6 6 49:401 3174:711 6 ð3Þ 240:353 TRay ¼ 6 6 0 6 0 4 0 0 0 2

75:783 6 6 335:812 6 ð5Þ 0 TRay ¼ 6 6 6 0 4 0

110:737

0 0 240:353 386:415

0 0 161:780

161:780 0

1340:374 6:970

335:812 3159:930

0 15:027

0 0

15:027 0 0

350:826 104:851 0

104:851 1355:523 0

2

170:077 628:266 0 0 6 0 6 628:266 3065:702 4:698 6 ð7Þ 6 TRay ¼ 6 0 4:698 339:936 0 6 0 0 0 1366:347 4 0

0

0

0

935:827 (8.5-111) 3 0 7 0 7 7 7 0 7 7 6:970 5 910:040 (8.5-112) 3 0 7 0 7 7 7 0 7 7 0 5 909:938 (8.5-113) 3 0 7 0 7 7 7 0 7 7 0 5 909:938 (8.5-114)

823

824

CHAPTER 8 Numerical methods

2

2821:160 1021:294

6 6 1021:294 6 ð9Þ TRay ¼ 6 0 6 6 0 4 0 2 6 6 6 ð12Þ TRay ¼ 6 6 6 4

414:632 0 0 0

3196:149 0 0 39:634 0 0 0 0 0

0

0 0 339:932 0 0 0 0 339:932 0 0

0

0

3

7 0 0 7 7 7 0 0 7 7 1366:347 0 5 0 909:938 (8.5-115) 3 0 0 7 0 0 7 7 7 0 0 7 7 1366:347 0 5 0

909:938 (8.5-116)

Observe that the ordering of the eigenvalues has changed. This is expected since the iterates will converge to the eigenvalue that is nearest to the shift. Earlier we showed that the Rayleigh quotient iteration converges cubically to the targeted eigenpair. For most matrices, this convergence rate will also apply for the QR iteration with the Rayleigh shift. However, there are some cases where convergence fails. As an alternative, Wilkinson proposed using the eigenvalue of the 2  2 matrix at the lower right corner of ðkÞ the matrix that is closest to the Rayleigh shift, tn;n . The shift using this approach is given by rffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi 2ffi  ðkÞ ðkÞ (8.5-117) þ s  sgnðsÞ s2 þ tn;n1 mðkÞ ¼ tn;n  . ðkÞ ðkÞ where s ¼ tn1;n1  tn;n 2. Wilkinson showed that this shift strategy provides cubic convergence for almost all matrices and at least quadratic convergence in the worst cases. Modifying the QR algorithm to use the Wilkinson shifts and repeating the calculations produced the results (off-diagonal elements) listed in Table 8.5-3. Note that the convergence rate is slightly improved compared to the convergence of the QR iteration with Rayleigh shifts.

8.5 Matrix eigenvalue problem

Table 8.5-3 Off-diagonal elements of QR iteration with Wilkinson shifts. ðkÞ

ðkÞ

ðkÞ

ðkÞ

k

t2;1

t3;2

t4;3

t5;4

1 2 3 4 5 6 7 8 9 10

15.49193 28.90796 75.42018 197.1603 506.19124 681.88761 900.58731 951.99772 111.22298 0

1207.13527 595.61867 146.3894 35.44541 8.73624 4.74448 2.71967 0.00001 0 0

622.90476 448.40922 397.84463 340.5061 285.51829 0.001 0 0 0 0

262.06241 68.2832 2.12712 0.00005 0 0 0 0 0 0

The iterates for k ¼ 2; 5; 7; 9; and 10 at which deflation occurs are listed below. Observe that the final diagonalized matrix has the same ordering of the eigenvalues as obtained with the Rayleigh shifts. 2 3 39:997 28:908 0 0 0 6 7 0 0 6 28:908 3046:026 595:619 7 6 7 ð2Þ 6 7 TWilk ¼ 6 0 595:619 731:971 448:409 0 7 6 7 0 448:409 1118:186 68:283 5 4 0 0 0 0 68:283 915:820 (8.5-118) 2 3 123:019 506:191 0 0 0 6 7 8:736 0 0 6 506:191 3112:744 7 6 7 ð5Þ 7 0 8:736 426:706 285:518 0 TWilk ¼ 6 6 7 6 7 0 0 285:518 1279:593 0 4 5 0 0 0 0 909:938 (8.5-119)

825

826

CHAPTER 8 Numerical methods

2

321:808

6 6 900:587 6 ð7Þ TWilk ¼ 6 0 6 6 0 4 0

900:587

0

0

2913:975 2:720 0 2:720 339:932 0 0 0 1366:347 0 0 0

2

3192:225 111:223 0 0 6 43:558 0 0 6 111:223 6 ð9Þ 0 0 339:932 0 TWilk ¼ 6 6 6 0 0 0 1366:347 4 0 0 0 0 2 6 6 6 ð10Þ TWilk ¼ 6 6 6 4

3196:149 0 0 39:634 0 0 0 0

0 0 339:932

0 0

0 0

0

3

7 0 7 7 7 0 7 7 0 5 909:938 (8.5-120) 3 0 7 0 7 7 7 0 7 7 0 5

909:938 (8.5-121) 3 0 0 7 0 0 7 7 7 0 0 7 7 1366:347 0 5 0 909:938 (8.5-122)

8.5.1.1.8 Implicit shifts

In the previous section, we discussed how to incorporate shifts explicitly into the QR algorithms. For a shift, m, the kth QR iteration consists of the QR factorization of the shifted tridiagonal matrix, TðkÞ  mI ¼ QðkÞ RðkÞ (8.5-123) Tðkþ1Þ ¼ RðkÞ QðkÞ þ mI  T Substituting RðkÞ ¼ QðkÞ TðkÞ mI into the second equation leads to the congruence transformation, QR factor

Tðkþ1Þ ¼ QðkÞ TðkÞ QðkÞ T

(8.5-124)

8.5 Matrix eigenvalue problem

827

Recall that the factor, QðkÞ , is equal to the product of the Givens transformations,   ðkÞ ðkÞ ðkÞ T QðkÞ ¼ Gn;n1 /G3;2 G2;1 (8.5-125) ðkÞ

where Gm;m1 is the Givens transformation matrix that eliminates the off-diagonal element in the mth row and m  1 column of  ðkÞ

ðkÞ

ðkÞ

Gm1;m2 /G3;2 G2;1 TðkÞ . Substituting Eq. (8.5-125) into Eq. (8.5-124)

shows that Tðkþ1Þ can be calculated by a sequence of nested triple products involving the Givens transformations, i.e.,     ðkÞ ðkÞ ðkÞ ðkÞ ðkÞ ðkÞ T ðkþ1Þ ðkÞ T ¼ Gn;n1 /G3;2 G2;1 T Gn;n1 /G3;2 G2;1 (8.5-126)     ðkÞ

ðkÞ

ðkÞ

ðkÞT

ðkÞT

ðkÞT

¼ Gn;n1 / G3;2 G2;1 TðkÞ G2;1 G3;2 /Gn;n1 ðkÞ

Let us denote each of the nested triple products by Tm;m1, where     ðkÞ ðkÞ ðkÞ ðkÞ ðkÞT ðkÞT Tm;m1 ¼ Gm;m1 / G2;1 T G2;1 / Gm;m1 (8.5-127) ðkÞ

ðkÞ

ðkÞT

¼ Gm;m1 Tm1;m2 Gm;m1 To illustrate the effect of these nested products, we will calculate Eq. (8.5-126) for k ¼ 1 and Tð1Þ ¼ T as defined in Eq. (8.5-87). For reference, we list the four 2  2 Givens rotation matrices and the resulting Qð1Þ that were calculated during the first iteration of the QR iteration with the Wilkinson shift, mð1Þ ¼ 803:077: " # " # h h i i 0:99979 0:02030 0:53065 0:84759 ð1Þ ð1Þ ¼ ; G3;2 ¼ G2;1 1:2;1:2 2:3;2:3 0:02030 0:99979 0:84759 0:53065 h i ð1Þ G4;3

" 3:4;3:4

¼

0:46288 0:88642 0:88642

0:46288

# ;

h

ð1Þ

G5;4

"

i 4:5;4:5

¼

0:85535 0:51805

0:51805 0:85535 (8.5-128)

#

828

CHAPTER 8 Numerical methods

By Eqs. (8.5-125) and (8.5-128), the orthonormal factor, Qð1Þ , for the first QR iteration with the Wilkinson shift is given by   ð1Þ ð1Þ ð1Þ ð1Þ T Qð1Þ ¼ G5;4 G4;3 G3;2 G2;1 2 3 0:99979 0:01077 0:00796 0:01304 0:00790 6 7 6 7 0:53054 0:39225 0:64251 0:38914 7 6 0:02030 6 7 6 7 ¼6 0 0:84759 0:24563 0:40234 0:24368 7 6 7 6 7 6 7 0 0 0:88642 0:39593 0:23979 7 6 4 5 0 0 0 0:51805 0:85535 (8.5-129) ð1Þ

ð1Þ

ð1ÞT

Computing the triple product, T2;1 ¼ G2;1 Tð1Þ G2;1 , yields 2 3 39:997 15:340 L24.502 0 0 6 7 0 0 6 15:340 1558:670 1206:887 7 6 7 ð1Þ 6 7 T2;1 ¼ 6 L24.502 1206:887 2117:826 622:905 0 7 6 7 0 0 622:905 1104:864 262:062 4 5 0 0 0 262:062 1030:644 (8.5-130) Observe that the triple product destroyed the tridiagonal structure by introducing a “bulge” in the (3,1) and (1,3) positions. Computing the next triple ð1Þ

ð1Þ ð1Þ

ð1ÞT

product, T3;2 ¼ G3;2 T2;1 G3;2 , produces 2 39:997 28:908 0 0 6 6 28:908 3046:026 275:700 527.969 6 ð1Þ T3;2 ¼ 6 275:700 630:470 330:544 6 0 6 527.969 330:544 1104:864 4 0 0

0

0

262:062

0 0 0 262:062

3 7 7 7 7 7 7 5

1030:644 (8.5-131)

8.5 Matrix eigenvalue problem

Note that the bulge moved to the (4,2) and (2,4) positions. Next, we ð1Þ

ð1Þ ð1Þ

ð1ÞT

compute T4;3 ¼ G4;3 T3;2 G4;3 and find that 2 39:997 28:908 0 0 6 0 6 28:908 3046:026 595:619 6 ð1Þ T4;3 ¼ 6 595:619 731:971 383:548 6 0 6 0 383:548 1003:362 4 0 0 0 L232.298 121:304

3 0 7 0 7 7 L232.298 7 7 7 121:304 5 1030:644 (8.5-132)

The bulge moved to the (5, 3) and (3, 5) positions and appears to move toward the lower right corner of the matrix. The last triple product using the ð1Þ

ð1Þ ð1Þ

ð1ÞT

fourth Givens transformation, T5;4 ¼ G5;4 T4;3 G5;4 , yields a matrix equal ð2Þ

to TWilk , i.e., 2

39:997

28:908

6 6 28:908 3046:026 6 ð1Þ T5;4 ¼ 6 595:619 6 0 6 0 4 0 0

0

0

0

595:619 0 731:971 448:409 448:409 1118:186 0

68:283

0 0 0 68:283

3 7 7 7 7 7 7 5

915:820 (8.5-133)

The transformation essentially moved the bulge toward the lower right corner of the matrix and “pushed” it out of the matrix, thereby restoring the tridiagonal structure. Eqs. (8.5-130) through (8.5-133) illustrate the “bulge-chasing” feature of the Givens transformations that were defined using explicit shifts. The “bulge” movement holds in general for all iterations and symmetric tridiagonal matrices. It suggests an alternate approach for implementing the QR that implicitly incorporates shifts. We will discuss how to accomplish this using our previous example. First, recall that for k ¼ 1 and n ¼ 5, (8.5117) yields the Wilkinson shift, mð1Þ ¼ 803:077. We start by defining the 2  2 Givens rotation matrix that eliminates the second element of the vector,

829

830

CHAPTER 8 Numerical methods

8 ð1Þ 9 < t1;1  mð1Þ = :

ð1Þ t2;1

;

( ¼

763:077 15:492

) (8.5-134)

which corresponds to the first two rows of the first column of Tð1Þ  mð1Þ I. This leads to the Givens transformation whose 2  2 rotation matrix is  h ð1Þ i 0:99979 0:02030 e ¼ (8.5-135) G 2;1 1: 2;1: 2 0:02030 0:99979 We will use the tilde accent to denote the Givens transformations and the resulting triple products that we develop from the bulge-chasing scheme. e ð1Þ ¼ Gð1Þ . CalcuClearly, Eq. (8.5-134) implies that for the first step, G 2;1

e ð1Þ T 2;1

¼ lating the triple product, 2 39:997 15:340 6 6 15:340 1558:670 6 ð1Þ e ¼ 6 L24.502 1206:887 T 2;1 6 6 0 0 4 0 0

T e ð1Þ Tð1Þ G e ð1Þ , G 2;1 2;1

produces

L24.502 1206:887

0 0

2;1

0 0

3

7 7 7 7 2117:826 622:905 0 7 7 622:905 1104:864 262:062 5 0 262:062 1030:644 (8.5-136)

ð1Þ

which is equal to T2;1 and is defined in (8.5-130). We now define the second e 3;2 , using the vector, f 15:340 24:502 gT to Givens transformation, G eliminate the bulge element in the ð3; 1Þ position. This yields the 2  2 Givens rotation matrix,  h ð1Þ i 0:53065 0:84759 e ¼ (8.5-137) G3;2 2:3;2:3 0:84759 0:53065 e ð1Þ ¼ Gð1Þ , which implies that From (8.5-128), we note that G 3;2 3;2 ð1Þ e ð1Þ e ð1Þ e ð1Þ e ð1Þ ¼ Gð1Þ T T 3;2 3;2 2;1 G3;2 ¼ T3;2 . For reference, we list T3;2 below T

8.5 Matrix eigenvalue problem

2

39:997

6 6 28:908 6 ð1Þ e ¼6 0 T 3;2 6 6 4 0 0

28:908

0

0

3046:026 275:700 527.969 275:700 630:470 330:544 527.969 330:544 1104:864 0 0 262:062

0

3

7 0 7 7 7 0 7 7 262:062 5 1030:644 (8.5-138)

We continue to chase the bulge by eliminating the elements in the (4, 2) and (2, 4) positions that are in bold font in (8.5-138). The Givens transformation based on the vector f 275:700 527:969 gT results in the rotation matrix,  h ð1Þ i 0:46288 0:88642 e ¼ (8.5-139) G4;3 3:4;3:4 0:88642 0:46288 From (8.5-128), we note that except for a sign change, the 2  2 rotation e ð1Þ and Gð1Þ are identical. The corresponding triple product, matrices in G e ð1Þ T 4;3

¼

4;3 4;3 ð1Þ ð1Þ ð1ÞT e T e e G 4;3 3;2 G4;3 , becomes

2

39:997

28:908

0

0

6 6 28:908 3046:026 6 ð1Þ e ¼6 0 T 595:619 4;3 6 6 0 4 0

0

0

232.298

121:304

0

3

7 595:619 0 0 7 7 731:971 383:548 232.298 7 7 7 383:548 1003:362 121:304 5 1030:644 (8.5-140)

Referring to (8.5-132), observe that except for sign changes in the offe ð1Þ is equal to Tð1Þ . The last Givens transformation diagonal elements, T 4;3

4;3

is defined to remove the bulge at positions (5,3) and (3,5). Based on the vector f 383:548 232:298 gT , the Givens rotation matrix is  h ð1Þ i 0:85535 0:51805 e ¼ (8.5-141) G5;4 4:5;4:5 0:51805 0:85535

831

832

CHAPTER 8 Numerical methods

e ð1Þ and Gð1Þ are Again, comparing (8.5-141) and (8.5-128), we note that G 5;4 5;4 equal except for sign differences. The corresponding triple product, e ð1Þ T e ð1Þ e ð1Þ e ð1Þ ¼ G T 5;4 5;4 4;3 G5;4 , is 2 3 39:997 28:908 0 0 0 6 7 0 0 6 28:908 3046:026 595:619 7 6 7 ð1Þ e 6 7 T5;4 ¼ 6 0 595:619 731:971 448:409 0 7 6 7 0 448:409 1118:186 68:283 5 4 0 T

0

0 ð1Þ

0

68:283

915:820 (8.5-142)

which is equal to T5;4 in Eq. (8.5-133), except for sign differences in the offdiagonal elements. Eqs. (8.5-134) through (8.5-142) represent one complete sequence of the bulge-chasing process using Givens transformations during a single QR iteration. The product of these transformations yields the ortho ð1Þ ð1Þ ð1Þ ð1Þ T ð1Þ e G e e e e , i.e., normal matrix, Q ¼ G 5;4 4;3 G3;2 G2;1 2 3 0:99979 0:01077 0:00796 0:01304 0:00790 6 7 0:53054 0:39225 0:64251 0:38914 7 6 0:02030 6 7 e ð1Þ ¼ 6 7 Q 0 0:84759 0:24563 0:40234 0:24368 6 7 6 7 0 0 0:88642 0:39593 0:23979 5 4 0 0 0 0:51805 0:85535 (8.5-143) ð1Þ e and Qð1Þ are Comparing (8.5-143) to (8.5-129), we conclude that Q equal except for the differences in the signs of the third and last columns. The differences in signs are unimportant when considering the orthonormal QR factors. It implies that the bulge-chasing process is equivalent to the two-step iteration (8.5-123) that involves the QR factors of the explicitly shifted matrix. Additionally, it should be noted that by construction, the first e ð1Þ and Qð1Þ are equal since they are determined by the first columns of Q

8.5 Matrix eigenvalue problem

ðkÞ e ðkÞ . Our example illustrates the equivaGivens transformation, G2;1 ¼ G 2;1 lence of the orthonormal factors from the direct QR factorization and the bulge-chasing scheme. That this holds in general follows from the Implicit Q Theorem whose proof can be found in Golub and Van Loan (2013), Stewart (1998, 2001a,b), or Demmel (1997).

Theorem 8.5-2 (Implicit Q Theorem) Let T be an N × N unreduced tridiagonal matrix. Suppose Q = [q_1 | ⋯ | q_N] and Q̃ = [q̃_1 | ⋯ | q̃_N] are orthonormal matrices such that Q^T T Q and Q̃^T T Q̃ are tridiagonal matrices. If q_1 = q̃_1, then the columns of Q and Q̃ are equal except for sign differences, i.e., q_n = ±q̃_n, n = 2, …, N.

First, note that if an off-diagonal element of T is equal to zero, we can split the matrix into smaller matrices. Hence, without loss of generality, we can assume that the tridiagonal system is unreduced. Suppose that at the kth step we have an unreduced tridiagonal matrix, T^(k). Performing one step of the QR iteration, which directly calculates the QR factorization of an explicitly shifted T^(k), produces an orthonormal matrix, Q^(k), and the similar tridiagonal matrix, T^(k+1) = Q^(k)T T^(k) Q^(k). Also, the implicit QR iteration, which uses the bulge-chasing Givens transformations on T^(k), yields an orthonormal matrix, Q̃^(k), and the tridiagonal matrix, T̃^(k+1) = Q̃^(k)T T^(k) Q̃^(k). We add that both methods use the Wilkinson shift, λ^(k), as defined by (8.5-117). Also, both methods define the first Givens transformation so that it eliminates the second element of {t^(k)_{1,1} − λ^(k)   t^(k)_{2,1}}^T. Therefore, G^(k)_{2,1} = G̃^(k)_{2,1} and, consequently, the first columns of Q^(k) and Q̃^(k) are equal, i.e., q^(k)_1 = q̃^(k)_1. To show this, let us calculate q^(k)_1 from Q^(k), which can be expressed as a product of the Givens transformations,

$$q_1^{(k)} = Q^{(k)} e_1 = \left(G_{n,n-1}^{(k)} \cdots G_{3,2}^{(k)} G_{2,1}^{(k)}\right)^T e_1 = G_{2,1}^{(k)T} \left(G_{n,n-1}^{(k)} \cdots G_{3,2}^{(k)}\right)^T e_1 = G_{2,1}^{(k)T} e_1 = \{c \;\; s \;\; 0 \;\; \cdots \;\; 0\}^T \tag{8.5-144}$$


where c and s are the cosine and sine terms of the Givens rotation matrix, G^(k)_{2,1}. The fourth equality in Eq. (8.5-144) holds since the first column of G^(k)_{n,n-1} ⋯ G^(k)_{3,2} is equal to e_1. Likewise, we can show that q̃^(k)_1 = {c  s  0  ⋯  0}^T, since G^(k)_{2,1} = G̃^(k)_{2,1}. Therefore, by the Implicit Q Theorem, Q^(k) and Q̃^(k) are equal except for sign differences in their columns. Hence, for each QR iteration the orthonormal QR factor can be computed directly using explicit shifts or indirectly using implicit shifts and the bulge-chasing scheme. The latter approach, which is referred to as the implicit QR iteration, is numerically more stable since it avoids round-off errors that occur when disproportionate shifts are added explicitly to the diagonal elements of T^(k). For completeness, we list in Table 8.5-4 the off-diagonal elements of T^(k) using the implicit QR iteration with the Wilkinson shifts. Comparing Tables 8.5-3 and 8.5-4, we note that, except for the differences in signs, the off-diagonal elements of the implicit QR iteration equal those from the QR iteration with explicit Wilkinson shifts. Below we also list the iterates, Eqs. (8.5-145) through (8.5-149), for k = 2, 5, 7, 9, and 10 to indicate where the deflations occur.

Table 8.5-4 Off-diagonal elements of Implicit QR iteration with Wilkinson shifts.

| k  | t_{2,1}^{(k)} | t_{3,2}^{(k)} | t_{4,3}^{(k)} | t_{5,4}^{(k)} |
|----|---------------|---------------|---------------|---------------|
| 1  | 15.49193      | 1207.13527    | 622.90476     | 262.06241     |
| 2  | 28.90796      | 595.61867     | 448.40922     | 68.2832       |
| 3  | 75.42018      | 146.3894      | 397.84463     | 2.12712       |
| 4  | 197.1603      | 35.44541      | 340.5061      | 0.00005       |
| 5  | 506.19124     | 8.73624       | 285.51829     | 0             |
| 6  | 681.88761     | 4.74448       | 0.001         | 0             |
| 7  | 900.58731     | 2.71967       | 0             | 0             |
| 8  | 951.99772     | 0.00001       | 0             | 0             |
| 9  | 111.22298     | 0             | 0             | 0             |
| 10 | 0             | 0             | 0             | 0             |


$$T^{(2)}_{\mathrm{Impl}} = \begin{bmatrix}
39.997 & 28.908 & 0 & 0 & 0 \\
28.908 & 3046.026 & 595.619 & 0 & 0 \\
0 & 595.619 & 731.971 & 448.409 & 0 \\
0 & 0 & 448.409 & 1118.186 & 68.283 \\
0 & 0 & 0 & 68.283 & 915.820
\end{bmatrix} \tag{8.5-145}$$

$$T^{(5)}_{\mathrm{Impl}} = \begin{bmatrix}
123.019 & 506.191 & 0 & 0 & 0 \\
506.191 & 3112.744 & 8.736 & 0 & 0 \\
0 & 8.736 & 426.706 & 285.518 & 0 \\
0 & 0 & 285.518 & 1279.593 & 0 \\
0 & 0 & 0 & 0 & 909.938
\end{bmatrix} \tag{8.5-146}$$

$$T^{(7)}_{\mathrm{Impl}} = \begin{bmatrix}
321.808 & 900.587 & 0 & 0 & 0 \\
900.587 & 2913.975 & 2.720 & 0 & 0 \\
0 & 2.720 & 339.932 & 0 & 0 \\
0 & 0 & 0 & 1366.347 & 0 \\
0 & 0 & 0 & 0 & 909.938
\end{bmatrix} \tag{8.5-147}$$

$$T^{(9)}_{\mathrm{Impl}} = \begin{bmatrix}
3192.225 & 111.223 & 0 & 0 & 0 \\
111.223 & 43.558 & 0 & 0 & 0 \\
0 & 0 & 339.932 & 0 & 0 \\
0 & 0 & 0 & 1366.347 & 0 \\
0 & 0 & 0 & 0 & 909.938
\end{bmatrix} \tag{8.5-148}$$


$$T^{(10)}_{\mathrm{Impl}} = \begin{bmatrix}
3196.149 & 0 & 0 & 0 & 0 \\
0 & 39.634 & 0 & 0 & 0 \\
0 & 0 & 339.932 & 0 & 0 \\
0 & 0 & 0 & 1366.347 & 0 \\
0 & 0 & 0 & 0 & 909.938
\end{bmatrix} \tag{8.5-149}$$
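The deflation behavior illustrated by the iterates above can be reproduced with a compact driver. The following sketch is our own code: it uses explicit Wilkinson shifts for brevity (a production implementation would chase the bulge implicitly, as described in the text) and a small hypothetical tridiagonal matrix.

```python
import numpy as np

def qr_iteration_wilkinson(T, tol=1e-12, max_iter=200):
    """Shifted QR iteration with bottom deflation on a symmetric tridiagonal matrix.

    Explicit Wilkinson shifts are used here for clarity; the implicit
    bulge-chasing form described in the text is the numerically preferred variant."""
    T = T.copy()
    n = T.shape[0]
    iters = 0
    while n > 1 and iters < max_iter:
        a, c, beta = T[n - 2, n - 2], T[n - 1, n - 1], T[n - 1, n - 2]
        delta = (a - c) / 2.0
        sign = 1.0 if delta >= 0 else -1.0
        mu = c - sign * beta**2 / (abs(delta) + np.hypot(delta, beta))
        Q, R = np.linalg.qr(T[:n, :n] - mu * np.eye(n))
        T[:n, :n] = R @ Q + mu * np.eye(n)
        if abs(T[n - 1, n - 2]) < tol * (abs(a) + abs(c)):
            T[n - 1, n - 2] = T[n - 2, n - 1] = 0.0   # deflate the converged eigenvalue
            n -= 1
        iters += 1
    return np.sort(np.diag(T))

# Hypothetical 5x5 symmetric tridiagonal test matrix (not the chapter's example)
d = np.array([4.0, 3.0, 2.0, 1.0, 0.5])
b = np.array([1.0, 0.8, 0.6, 0.4])
T = np.diag(d) + np.diag(b, 1) + np.diag(b, -1)
assert np.allclose(qr_iteration_wilkinson(T), np.linalg.eigvalsh(T))
print("shifted QR iteration with deflation verified")
```

The rapid decay of the last off-diagonal element mirrors the cubic convergence visible in Table 8.5-4.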

In summary, the implicit QR iteration is implemented in most eigensolver software packages. In LAPACK, it is often used in combination with the implicit QL iteration (orthonormal and lower-triangular factorization) to mitigate round-off errors for graded matrices that have elements with increasing (or decreasing) magnitudes along the diagonal. As discussed, the general symmetric matrix is first reduced to tridiagonal form using Householder transformations. This significantly lowers the computational cost by avoiding the QR factorizations of full matrices. Furthermore, the QR factorization can be efficiently calculated using Givens transformations. The QR iteration converges by virtue of its equivalence to the orthogonal iteration. An immediate consequence is that the QR iteration actually performs the power and inverse iterations simultaneously. Since the inverse iteration with the Rayleigh shifts converges cubically, it leads to the shifted QR iteration, which possesses similar convergence rates. The Wilkinson shifts improve on the Rayleigh shifts by guaranteeing convergence and providing cubic convergence for almost all matrices. The QR iteration using explicitly shifted matrices can incur round-off errors if the shifts are significantly disproportionate to the diagonal elements. Implicit shifting is attractive since it avoids the round-off errors from explicit shifting and replaces the two-step QR iteration by a simple bulge-chasing process. Hence, an efficient QR algorithm for symmetric matrices consists of the reduction to tridiagonal form, good shifts, and the implicit bulge-chasing scheme. We will see in a subsequent section that these are also the main ingredients for an effective QR algorithm for nonsymmetric matrices.

8.5.1.2 Divide-and-conquer method

Until recently, the implicit QR algorithm was the most efficient method for solving moderate-sized eigenproblems. We will next describe two other methods for solving the symmetric eigenvalue problem. The first method is referred to as the Divide-and-Conquer algorithm, and the second is the Lanczos method, which we will describe in the next section. The Divide-and-Conquer algorithm was originally proposed by J. Cuppen in 1981 (Cuppen, 1981) and is currently the fastest method for computing all the eigenpairs of a symmetric dense matrix. Furthermore, it permits a parallel implementation and, therefore, is suited for multiprocessor computing architectures. Similar to the QR algorithm, the first step is to reduce the matrix to tridiagonal form. Note that if an off-diagonal element is zero, then the eigenproblem decouples into two smaller problems. This is the basis for the "tearing" process of the Divide-and-Conquer algorithm. Suppose that T is an N × N unreduced symmetric tridiagonal matrix and N = 2M. Also, let d_n = t_{n,n} and b_n = t_{n+1,n} represent the diagonal and off-diagonal elements, respectively. Consider tearing T into two M × M tridiagonal matrices, T_1 and T_2, via the rank-one perturbation,

$$T = \begin{bmatrix} T_1 & 0 \\ 0 & T_2 \end{bmatrix} + b_M\, u u^T \tag{8.5-150}$$

where

$$T_1 = \begin{bmatrix}
d_1 & b_1 & & \\
b_1 & d_2 & \ddots & \\
& \ddots & \ddots & b_{M-1} \\
& & b_{M-1} & d_M - b_M
\end{bmatrix} \qquad
T_2 = \begin{bmatrix}
d_{M+1} - b_M & b_{M+1} & & \\
b_{M+1} & d_{M+2} & \ddots & \\
& \ddots & \ddots & b_{N-1} \\
& & b_{N-1} & d_N
\end{bmatrix} \tag{8.5-151}$$

and u = e_M + e_{M+1} is the N-dimensional vector of zeros except for ones in the M and M + 1 positions. Now, suppose that we have calculated the eigensolutions of T_1 and T_2 so that we have the spectral decompositions, T_1 = Q_1 Λ_1 Q_1^T and T_2 = Q_2 Λ_2 Q_2^T. For convenience, index the eigenvalues so that Λ_1 = diag(λ_1, …, λ_M), Λ_2 = diag(λ_{M+1}, …, λ_N), and Λ = diag(λ_1, …, λ_N). We can synthesize the eigenvalues of T from the eigenvalues of T_1 and T_2. Factoring out the orthonormal factors yields

$$T = \begin{bmatrix} Q_1 & 0 \\ 0 & Q_2 \end{bmatrix} \left( \Lambda + b_M\, p p^T \right) \begin{bmatrix} Q_1 & 0 \\ 0 & Q_2 \end{bmatrix}^T \tag{8.5-152}$$

where

$$p = \begin{bmatrix} Q_1^T & 0 \\ 0 & Q_2^T \end{bmatrix} u \tag{8.5-153}$$
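The tearing and synthesis of Eqs. (8.5-150) through (8.5-153) can be verified directly. A minimal sketch, using a small hypothetical tridiagonal matrix of our own (N = 6, M = 3) rather than a matrix from the text:

```python
import numpy as np

# Hypothetical N = 6 symmetric tridiagonal matrix
d = np.array([5.0, 4.0, 3.0, 2.0, 1.0, 0.5])   # diagonal entries d_n
b = np.array([1.0, 0.9, 0.8, 0.7, 0.6])        # off-diagonal entries b_n
T = np.diag(d) + np.diag(b, 1) + np.diag(b, -1)
N, M = 6, 3
bM = b[M - 1]

# Tear T into T1 and T2 with the adjacent diagonal entries modified as in (8.5-151)
T1 = T[:M, :M].copy();  T1[-1, -1] -= bM
T2 = T[M:, M:].copy();  T2[0, 0] -= bM

# Rank-one synthesis (8.5-150): T = blockdiag(T1, T2) + bM * u u^T
u = np.zeros(N); u[M - 1] = u[M] = 1.0
T_synth = np.block([[T1, np.zeros((M, M))],
                    [np.zeros((M, M)), T2]]) + bM * np.outer(u, u)
assert np.allclose(T_synth, T)

# Eigenvalues of T equal those of Lambda + bM * p p^T, as in (8.5-152)/(8.5-153)
lam1, Q1 = np.linalg.eigh(T1)
lam2, Q2 = np.linalg.eigh(T2)
Lam = np.diag(np.concatenate([lam1, lam2]))
p = np.concatenate([Q1.T @ u[:M], Q2.T @ u[M:]])
assert np.allclose(np.sort(np.linalg.eigvalsh(Lam + bM * np.outer(p, p))),
                   np.sort(np.linalg.eigvalsh(T)))
print("tearing and rank-one synthesis verified")
```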

Therefore, μ is an eigenvalue of T if, and only if, it is an eigenvalue of Λ + b_M p p^T, i.e.,

$$\det(T - \mu I_N) = \det\left(\Lambda + b_M\, p p^T - \mu I_N\right) = 0 \tag{8.5-154}$$

It can be shown that Eq. (8.5-154) holds if μ is a root of the secular equation,

$$1 + b_M \sum_{n=1}^{N} \frac{p_n^2}{\lambda_n - \mu} = 0 \tag{8.5-155}$$

which can be solved by a variation of Newton's method. Having computed an eigenvalue, μ, of Λ + b_M p p^T, the corresponding eigenvector, up to a scale factor, can then be calculated by

$$v = \left[\Lambda - \mu I_N\right]^{-1} p \tag{8.5-156}$$

which is computationally fast since Λ − μI_N is a diagonal matrix. To summarize, the eigensolutions of the whole problem can be computed by combining the eigensolutions of its parts. The path is now clear. There is no reason to stop with a single partitioning, since one can recursively apply dyadic rank-one tearings and syntheses to solve for all the eigenpairs. This naturally leads to parallel implementations. A word of caution, however: the simplicity of this approach disguises the subtle numerical issues that need to be considered for a practical and stable algorithm. In fact, it took over a decade after J. Cuppen's publication before a numerically stable implementation was developed and implemented in LAPACK. Additional details can be found in Demmel (1997).

8.5.1.3 Lanczos method

The QR iteration and DC methods efficiently compute all the eigenpairs for moderately sized matrices that can be stored in memory. On the other hand, large complex structures often yield finite element models with very large mass and stiffness matrices. The resulting large symmetric eigenvalue problems, although sparse, may not be suitable for the QR and DC methods, since the Householder tridiagonalization and eigenvector computation steps would require storage that could exceed a computer's internal memory. Fortunately, for most applications only the lower and upper extreme eigenpairs are required. For example, in most structural dynamic problems, only modes below a specified frequency are needed. Many of the finite element software packages compute these extreme eigenpairs using the Lanczos method, which iteratively approximates the eigenvalues and eigenvectors at the lower and upper ends of the spectrum. We will introduce this method and refer the reader to Cullum and Willoughby (2002), Saad (2003), Parlett (1998), and Watkins (2007, 2010) for additional details.

Consider the problem of approximating the eigenvalues of an N × N matrix, A, by its principal M × M submatrix, A_M = A_{1:M,1:M}, i.e.,

$$A = \begin{bmatrix} A_M & A_{M'M}^T \\ A_{M'M} & A_{M'} \end{bmatrix} \tag{8.5-157}$$

where M' = N − M. Let λ_1 ≥ λ_2 ≥ ⋯ ≥ λ_N and λ̂_1^(M) ≥ λ̂_2^(M) ≥ ⋯ ≥ λ̂_M^(M) denote the eigenvalues of A and A_M, respectively, in descending order. We will need the following theorem, whose proof can be found in Parlett (1998):

Theorem 8.5-3 (Cauchy Interlacing Theorem)

$$\lambda_m \ge \hat{\lambda}_m^{(M)} \ge \lambda_{N-M+m} \qquad m = 1, \ldots, M \tag{8.5-158}$$


An immediate consequence of Theorem 8.5-3 reveals how the eigenvalues of consecutive submatrices, A_M and A_{M+1}, are interlaced. In particular, we have

$$\lambda_1 \ge \hat{\lambda}_1^{(M+1)} \ge \hat{\lambda}_1^{(M)} \ge \hat{\lambda}_2^{(M+1)} \ge \hat{\lambda}_2^{(M)} \ge \cdots \ge \hat{\lambda}_M^{(M+1)} \ge \hat{\lambda}_M^{(M)} \ge \hat{\lambda}_{M+1}^{(M+1)} \ge \lambda_N \tag{8.5-159}$$

Clearly, as M → N, the eigenvalues converge monotonically and, in particular,

$$\lim_{M \to N} \hat{\lambda}_1^{(M)} = \lambda_1 \qquad \text{and} \qquad \lim_{M \to N} \hat{\lambda}_M^{(M)} = \lambda_N \tag{8.5-160}$$
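The interlacing inequalities can be confirmed numerically. A sketch using a 20 × 20 random symmetric matrix (a smaller stand-in for the 100 × 100 example that follows; the tolerance guards against round-off at near-ties):

```python
import numpy as np

rng = np.random.default_rng(0)
S = rng.standard_normal((20, 20))
A = 0.5 * (S + S.T)                 # random symmetric test matrix, N = 20

tol = 1e-10
lam = np.sort(np.linalg.eigvalsh(A))[::-1]      # eigenvalues of A, descending
for M in range(1, 20):
    lam_M  = np.sort(np.linalg.eigvalsh(A[:M, :M]))[::-1]
    lam_M1 = np.sort(np.linalg.eigvalsh(A[:M + 1, :M + 1]))[::-1]
    # Cauchy interlacing (8.5-158): lam_m >= lam_hat_m^(M) >= lam_{N-M+m}
    assert all(lam[m] + tol >= lam_M[m] >= lam[20 - M + m] - tol for m in range(M))
    # Consecutive submatrices interlace as in (8.5-159)
    assert all(lam_M1[m] + tol >= lam_M[m] >= lam_M1[m + 1] - tol for m in range(M))
print("interlacing verified")
```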

We illustrate the convergence of λ̂_m^(M) as M increases with a randomly generated symmetric tridiagonal matrix. We start with a 100 × 100 matrix, S, whose elements were randomly selected from a standard normal distribution. The symmetric matrix, A, was then defined by

$$A = \frac{1}{2}\left(S + S^T\right) + \mu \tag{8.5-161}$$

where the shift, μ, was chosen so that the minimum eigenvalue of A was equal to one. The tridiagonal matrix, T, was computed by applying the Householder transformations to A. The eigenvalues of T (and A) were computed using the implicit QR iteration, and they are indicated in Fig. 8.5-7 by the vertical dashed lines. The eigenvalues of the M × M principal submatrices, T_M, were also computed by the QR iteration, and these eigenvalues are plotted horizontally with × at y-values equal to M. We will refer to this plot as an eigenvalue stabilization diagram; it clearly shows the interlacing and convergence of the eigenvalues of T_M as M increases. Furthermore, we note that the convergence is faster at the extreme ends of the spectrum.

We can generalize the notion of the principal submatrices by introducing coordinate changes. First, note that A_M can be expressed as A_M = E_M^T A E_M, where E_M = [e_1 | e_2 | ⋯ | e_M] is the N × M orthonormal matrix consisting of the first M columns of the identity matrix, I_N. Consider now a general N × M orthonormal matrix, Q_M = [q_1 | q_2 | ⋯ | q_M], and its N × M' orthonormal complement, Q_{M'}. Then, Q = [Q_M | Q_{M'}] is an N × N orthonormal matrix and, therefore, the eigenvalues are invariant under the


FIGURE 8.5-7 Eigenvalue stabilization diagram for principal submatrices of a 100 × 100 tridiagonal matrix.

similarity transformation, H = Q^T A Q. Eq. (8.5-162) defines the matrix partitions of H:

$$H = Q^T A Q = \begin{bmatrix} H_M & H_{M'M}^T \\ H_{M'M} & H_{M'} \end{bmatrix} \tag{8.5-162}$$

The Cauchy interlacing inequality, Eq. (8.5-158), applies to H_M if we also denote the eigenvalues of H_M by λ̂_m^(M). Furthermore, as M increases, λ̂_m^(M) will converge monotonically to the eigenvalues of A. This was demonstrated in the example of Eq. (8.5-161), which showed how the eigenvalues of the submatrices, H_M = Q_M^T A Q_M, were interlaced and converged to the eigenvalues of A.


The method of approximating the eigenvalues of A by the eigenvalues of H_M is known as the Rayleigh-Ritz procedure. Let us see how we can obtain the corresponding approximate eigenvectors. First, let U_M = [u_1^(M) | u_2^(M) | ⋯ | u_M^(M)] denote the M × M orthonormal eigenvector matrix of H_M. Then the Ritz vectors, v̂_m^(M) = Q_M u_m^(M), m = 1, …, M, are the optimal eigenvector approximations corresponding to the Ritz values, λ̂_m^(M). We have already encountered this optimal property in our discussion of the power iteration. There, we showed that for an approximate eigenvector of unit norm, q, the Rayleigh quotient, ρ = q^T A q, is the optimal eigenvalue approximation that minimizes ‖Aq − ρq‖₂.

Before proceeding, we need to review some concepts of invariant subspaces. Recall that the range of Q_M, R(Q_M), is an invariant subspace of A if there exists an M × M matrix, R_M, such that

$$A Q_M = Q_M R_M \tag{8.5-163}$$

Eq. (8.5-163) implies that R(Q_M) has a basis consisting of the eigenvectors of A. Let us take a closer look at R_M. Without loss of generality, suppose

$$\mathrm{span}\{v_1, v_2, \ldots, v_M\} = \mathrm{span}\{q_1, q_2, \ldots, q_M\} \tag{8.5-164}$$

where v_1, v_2, …, v_M are the orthonormal eigenvectors of A that correspond to λ_1, λ_2, …, λ_M, respectively. Let V_M = [v_1 | v_2 | ⋯ | v_M]; then

$$A V_M = V_M \Lambda_M \qquad \Lambda_M = \mathrm{diag}(\lambda_1, \ldots, \lambda_M) \tag{8.5-165}$$

which trivially shows that R(V_M) is an invariant subspace of A. Furthermore, Eq. (8.5-164) implies that there exists an M × M orthonormal matrix, P_M, such that Q_M = V_M P_M. Substituting this into Eq. (8.5-163) yields

$$A V_M = V_M \left(P_M R_M P_M^T\right) \tag{8.5-166}$$

Comparing the above to Eq. (8.5-165) implies that R_M = P_M^T Λ_M P_M and, hence, R_M is similar to Λ_M. Therefore, if Eq. (8.5-163) holds, then the eigenvalues of R_M are a subset of the eigenvalues of A. For a general orthonormal matrix, Q_M, its column space, R(Q_M), will not be an invariant subspace and, therefore, Eq. (8.5-163) will be in error.
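These invariant-subspace relations can be checked in a few lines; the sketch below uses a small random symmetric matrix of our own:

```python
import numpy as np

rng = np.random.default_rng(3)
S = rng.standard_normal((8, 8))
A = 0.5 * (S + S.T)                      # hypothetical symmetric test matrix
M = 3

# V_M: orthonormal eigenvectors for M eigenvalues, so range(V_M) is invariant
lam, V = np.linalg.eigh(A)
VM, LamM = V[:, :M], np.diag(lam[:M])
assert np.allclose(A @ VM, VM @ LamM)

# Any Q_M = V_M P_M spanning the same subspace gives R_M similar to Lambda_M
PM, _ = np.linalg.qr(rng.standard_normal((M, M)))   # random orthonormal P_M
QM = VM @ PM
RM = QM.T @ A @ QM
assert np.allclose(A @ QM, QM @ RM)      # A Q_M = Q_M R_M holds exactly here
assert np.allclose(np.sort(np.linalg.eigvalsh(RM)), lam[:M])
print("invariant subspace relations verified")
```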


However, it is shown in Demmel (1997) that selecting R_M = H_M minimizes the error, so that

$$\min_{R_M} \|A Q_M - Q_M R_M\|_2 = \|A Q_M - Q_M H_M\|_2 = \|H_{M'M}\|_2 \tag{8.5-167}$$

where H_M and H_{M'M} are defined in Eq. (8.5-162). As before, let Λ̂_M = diag(λ̂_1^(M), …, λ̂_M^(M)) and U_M = [u_1^(M) | ⋯ | u_M^(M)] equal the matrix of eigenvalues and matrix of eigenvectors of H_M, respectively. Then H_M has the spectral decomposition,

$$H_M = U_M \hat{\Lambda}_M U_M^T \tag{8.5-168}$$

Substitution into (8.5-167) and letting V̂_M = Q_M U_M yields

$$\|A Q_M - Q_M H_M\|_2 = \left\|A Q_M - Q_M U_M \hat{\Lambda}_M U_M^T\right\|_2 = \left\|A (Q_M U_M) - (Q_M U_M)\, \hat{\Lambda}_M\right\|_2 = \left\|A \hat{V}_M - \hat{V}_M \hat{\Lambda}_M\right\|_2 \tag{8.5-169}$$

which shows that V̂_M = [v̂_1^(M) | v̂_2^(M) | ⋯ | v̂_M^(M)] is the optimal eigenvector matrix for the Rayleigh-Ritz method that yields the minimum error,

$$\left\|A \hat{V}_M - \hat{V}_M \hat{\Lambda}_M\right\|_2 = \|H_{M'M}\|_2 \tag{8.5-170}$$

Moreover, it can be shown (Demmel, 1997) that each Ritz pair, (λ̂_m^(M), v̂_m^(M)), satisfies

$$\left|\hat{\lambda}_m^{(M)} - \lambda_k\right| \le \|H_{M'M}\|_2, \quad \text{for some eigenvalue, } \lambda_k, \text{ of } A$$
$$\left\|A \hat{v}_m^{(M)} - \hat{\lambda}_m^{(M)} \hat{v}_m^{(M)}\right\|_2 = \left\|H_{M'M}\, u_m^{(M)}\right\|_2 \tag{8.5-171}$$

Hence, if ‖H_{M'M}‖₂ is small, (λ̂_m^(M), v̂_m^(M)) will be good estimates of the eigenpairs of A. Given an orthonormal matrix, Q_M, the Ritz pairs provide optimal approximate eigenpairs of A from the eigenpairs of a smaller matrix, H_M. Note that H_M can be viewed as a projection of A onto R(Q_M), as illustrated in Fig. 8.5-8, where ℝ^M; {q_1, …, q_M} denotes ℝ^M with basis {q_1, …, q_M}.
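The optimality and residual identities (8.5-167) through (8.5-171) can be verified numerically. A sketch with a random orthonormal Q_M on a small hypothetical symmetric matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
S = rng.standard_normal((12, 12))
A = 0.5 * (S + S.T)                    # hypothetical symmetric test matrix
N, M = 12, 4

Q, _ = np.linalg.qr(rng.standard_normal((N, N)))   # random orthonormal basis
QM, QMc = Q[:, :M], Q[:, M:]

HM   = QM.T  @ A @ QM                  # Rayleigh-Ritz projection H_M
HMcM = QMc.T @ A @ QM                  # coupling block H_{M'M}

lam_hat, UM = np.linalg.eigh(HM)       # Ritz values and eigenvectors of H_M
V_hat = QM @ UM                        # Ritz vectors

# (8.5-167)/(8.5-170): both residual norms equal the 2-norm of H_{M'M}
r1 = np.linalg.norm(A @ QM - QM @ HM, 2)
r2 = np.linalg.norm(A @ V_hat - V_hat @ np.diag(lam_hat), 2)
assert np.isclose(r1, r2) and np.isclose(r1, np.linalg.norm(HMcM, 2))

# (8.5-171): each Ritz pair's residual equals ||H_{M'M} u_m||_2
for m in range(M):
    res = np.linalg.norm(A @ V_hat[:, m] - lam_hat[m] * V_hat[:, m])
    assert np.isclose(res, np.linalg.norm(HMcM @ UM[:, m]))
print("Rayleigh-Ritz residual identities verified")
```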


FIGURE 8.5-8 H_M viewed as a projection of A onto range of Q_M.

As M increases to N, the Ritz pairs, (λ̂_m^(M), v̂_m^(M)), will converge to eigenpairs of A. When N is very large and M ≪ N, the Lanczos method provides an efficient approach for computing the orthonormal Q_M, so that its projection, H_M, will be a tridiagonal matrix. The eigenpairs of H_M can then be calculated by the implicit QR or DC methods. Since H_M is in tridiagonal form, significant computational savings are attained by avoiding the costly Householder tridiagonal reduction steps as M increases sequentially. Basically, the Lanczos method calculates Q_M from the QR factorization of the N × M Krylov matrix,

$$K(A, x, M) = \left[x \,|\, Ax \,|\, A^2 x \,|\, \cdots \,|\, A^{M-1} x\right] \tag{8.5-172}$$

where x is a random initial vector. Without loss of generality, we will assume that ‖x‖₂ = 1. For simplicity, let us assume also that K(A, x, N) is of full rank and that it has the QR factorization,

$$K(A, x, N) = \left[x \,|\, Ax \,|\, \cdots \,|\, A^{N-1} x\right] = QR \tag{8.5-173}$$

Observe that r_{1,1} = 1 and that the first column of Q = [q_1 | ⋯ | q_N] is equal to x. Premultiplying Eq. (8.5-173) by Q^T, and then substituting x = Q e_1, yields

$$R = \left[e_1 \,|\, Q^T A Q e_1 \,|\, \cdots \,|\, Q^T A^{N-1} Q e_1\right] = \left[e_1 \,|\, T e_1 \,|\, \cdots \,|\, T^{N-1} e_1\right] \tag{8.5-174}$$

where T = Q^T A Q. Since R is upper-triangular, T must be upper Hessenberg. A matrix is said to be upper Hessenberg if its elements below the subdiagonal are equal to zero. Since A is symmetric, T is also symmetric and, hence, it must be a tridiagonal matrix. An immediate consequence is that for M < N, the QR factorization of K(A, x, M) = Q_M R_M leads to

$$T_M = Q_M^T A Q_M \tag{8.5-175}$$

where T_M is the principal M × M submatrix of T. When N is large, computing Q_M directly by Householder transformations on A can be computationally prohibitive and, furthermore, will tend to destroy sparsity. Typically, for large sparse matrices there are very efficient procedures to evaluate the matrix-vector product, Ax, which facilitates the computation of the Krylov vectors and the columns of Q_M. As before, let Q = [Q_M | Q_{M'}]; then T = Q^T A Q has the matrix partitions,

$$T = Q^T A Q = \begin{bmatrix} T_M & T_{M'M}^T \\ T_{M'M} & T_{M'} \end{bmatrix} \tag{8.5-176}$$

Denote the diagonal and off-diagonal elements of T by

$$d_n = t_{n,n} \qquad b_n = t_{n+1,n} \tag{8.5-177}$$

Since AQ = QT, we obtain, after equating the Mth columns on both sides,

$$A q_M = b_{M-1} q_{M-1} + d_M q_M + b_M q_{M+1} \tag{8.5-178}$$

Solving for b_M q_{M+1}, we obtain the Lanczos three-term recursion,

$$b_M q_{M+1} = A q_M - d_M q_M - b_{M-1} q_{M-1} \tag{8.5-179}$$

Observe that Eq. (8.5-179) is computationally economical if an efficient matrix-vector multiply procedure is available. The basic Lanczos method, which can be found in the references cited earlier, is:

Basic Lanczos algorithm. Let A be an N × N symmetric matrix. For an N-dimensional vector, x, the following algorithm approximates the extreme eigenpairs of A:

    q_1 = x/‖x‖₂;  b_0 = 0;  q_0 = 0
    for M = 1, 2, …
        y = A q_M
        d_M = q_M^T y
        y = y − d_M q_M − b_{M−1} q_{M−1}
        b_M = ‖y‖₂
        if b_M ≈ 0 then exit
        q_{M+1} = y / b_M
    end
    Compute eigenpairs of T_M

Observe that, by definition, b_M will always be nonnegative. The exit criterion checks if the off-diagonal element, b_M, is nearly zero. If this condition is met, then the tridiagonal system decouples into two smaller tridiagonal systems. Note that, in theory, the Lanczos method is finite and stops when M = N. However, in practice, it is often referred to as an "iterative" method, since N ≫ 1 and the process stops for M much less than N. Referring to Eqs. (8.5-167) and (8.5-176), we obtain

$$\|A Q_M - Q_M T_M\|_2 = \|T_{M'M}\|_2 = b_M \tag{8.5-180}$$
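A dense-matrix sketch of the basic Lanczos algorithm is given below. It is our own implementation, and it already includes a full-reorthogonalization pass (anticipating the loss-of-orthogonality issue discussed next); the matrix size, seed, and tolerances are illustrative.

```python
import numpy as np

def lanczos(A, x, M):
    """Basic Lanczos recursion with a full-reorthogonalization cleanup pass.

    Returns Q_M (Lanczos vectors) and the diagonal/off-diagonal entries of
    the tridiagonal projection T_M = Q_M^T A Q_M."""
    N = A.shape[0]
    Q = np.zeros((N, M + 1))
    d = np.zeros(M)
    b = np.zeros(M)
    Q[:, 0] = x / np.linalg.norm(x)
    for k in range(M):
        y = A @ Q[:, k]
        d[k] = Q[:, k] @ y
        y -= d[k] * Q[:, k]
        if k > 0:
            y -= b[k - 1] * Q[:, k - 1]
        for j in range(k + 1):            # modified Gram-Schmidt cleanup
            y -= (Q[:, j] @ y) * Q[:, j]
        b[k] = np.linalg.norm(y)
        if b[k] < 1e-12:                  # breakdown: the subspace is invariant
            return Q[:, :k + 1], d[:k + 1], b[:k]
        Q[:, k + 1] = y / b[k]
    return Q[:, :M], d, b[:M - 1]

rng = np.random.default_rng(2)
S = rng.standard_normal((100, 100))
A = 0.5 * (S + S.T)                       # random 100x100 symmetric test matrix

QM, d, b = lanczos(A, rng.standard_normal(100), 50)
TM = np.diag(d) + np.diag(b, 1) + np.diag(b, -1)
ritz = np.linalg.eigvalsh(TM)
lam = np.linalg.eigvalsh(A)

# With reorthogonalization the Lanczos vectors stay orthonormal ...
err_orth = np.max(np.abs(QM.T @ QM - np.eye(QM.shape[1])))
assert err_orth < 1e-10
# ... and the extreme Ritz values approximate the extreme eigenvalues of A
assert abs(ritz[0] - lam[0]) < 1e-3 and abs(ritz[-1] - lam[-1]) < 1e-3
print("Lanczos extreme eigenvalue estimates verified")
```

As in the text's example, the interior Ritz values converge much more slowly than the extreme ones.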

Therefore, when b_M is small, the range of Q_M is almost an invariant subspace of A, and the eigenpairs, (λ̂_m^(M), u_m^(M)), of T_M will lead to good approximations of the eigenpairs of A. Specifically, the Ritz pairs, (λ̂_m^(M), v̂_m^(M)), with v̂_m^(M) = Q_M u_m^(M), are the optimal eigenpair estimates of A. Then (8.5-171) and the form of T_{M'M} imply

$$\left|\hat{\lambda}_m^{(M)} - \lambda_k\right| \le b_M, \quad \text{for some eigenvalue, } \lambda_k, \text{ of } A$$
$$\left\|A \hat{v}_m^{(M)} - \hat{\lambda}_m^{(M)} \hat{v}_m^{(M)}\right\|_2 = b_M \left|u_{M,m}^{(M)}\right| \tag{8.5-181}$$

where u_m^(M) = {u_{1,m}^(M)  u_{2,m}^(M)  ⋯  u_{M,m}^(M)}^T. Therefore, when b_M |u_{M,m}^(M)| ≪ 1, the Ritz pair, (λ̂_m^(M), v̂_m^(M)), has converged to an eigenpair of A.

To illustrate the convergence of the eigenvalue approximations for the Lanczos method, we applied the basic Lanczos algorithm to the 100 × 100 random symmetric matrix, A, that we defined in our earlier example, (8.5-161). The initial vector, x, was randomly chosen. To observe the errors, the computations were performed in single precision. The Ritz values from the Lanczos iteration were calculated and are shown in the eigenvalue stabilization diagram in Fig. 8.5-9. In Fig. 8.5-9, we can observe that, as expected, the Lanczos estimates converge faster to the extreme eigenvalues of A. However, the estimates begin to break down after about 47 iterations. This destabilization also occurs for the other eigenvalue approximations and is attributed to the nonorthogonality in the computed q_M. The progressive loss of orthogonality is similar to the nonorthogonality that we noted in the classical Gram-Schmidt method. For each dimension, M, the maximum orthogonality error of q_{M+1} with respect to the previous Lanczos vectors was calculated via

$$\varepsilon_{\mathrm{Ortho}}(M) = \max_{1 \le m \le M} \left|q_m^T q_{M+1}\right| \tag{8.5-182}$$

The orthogonality errors are plotted in Fig. 8.5-10. Note that the errors attain their maximum near M ≈ 47, where the Ritz values begin to diverge. A simple, but expensive, remedy is to reorthogonalize each q_{M+1} with respect to the Lanczos vectors that were previously calculated. We implemented the full reorthogonalization by including the modified Gram-Schmidt method in the basic Lanczos algorithm. The orthogonality errors from the Lanczos algorithm with full reorthogonalization are shown in Fig. 8.5-10 and indicate that the Lanczos vectors are orthonormal, relative to single-precision arithmetic. The eigenvalue stabilization diagram for the


FIGURE 8.5-9 Eigenvalue stabilization diagram for the example 100 × 100 random matrix using the basic Lanczos algorithm.

FIGURE 8.5-10 Lanczos orthogonality error versus dimension M.


FIGURE 8.5-11 Eigenvalue stabilization diagram for the example 100 × 100 random matrix using the basic Lanczos algorithm with full reorthogonalization.

resulting Ritz values are shown in Fig. 8.5-11. The figure illustrates that by requiring orthogonality among the Lanczos vectors, we obtain stable Ritz estimates that converge to the eigenpairs of A. The computational overhead from the full reorthogonalization essentially cancels the computational efficiency of the basic Lanczos method. Fortunately, the nonorthogonality can be detected cheaply, which permits the reorthogonalization to be applied selectively. Observe that the orthogonality of q_{M+1} with respect to the previous Lanczos vectors is related to its orthogonality to the Ritz vectors via an orthonormal matrix,

$$\hat{V}_M^T q_{M+1} = (Q_M U_M)^T q_{M+1} = U_M^T Q_M^T q_{M+1} \tag{8.5-183}$$

where U_M = [u_1^(M) | ⋯ | u_M^(M)] is the M × M orthonormal eigenvector matrix of T_M = Q_M^T A Q_M, and V̂_M = [v̂_1^(M) | ⋯ | v̂_M^(M)] is the N × M matrix that contains the Ritz vectors. Therefore, the lack of orthogonality of q_{M+1} relative to the previous Lanczos vectors can be detected by its inner products with the Ritz vectors. We have the following result, due to C. C. Paige, whose proof can be found in Demmel (1997):

$$\hat{v}_m^{(M)T} q_{M+1} = \frac{O\left(\varepsilon_{\mathrm{mach}} \|A\|_2\right)}{b_M\, u_{M,m}^{(M)}} \tag{8.5-184}$$

Paige's result implies that if the Ritz pair, (λ̂_m^(M), v̂_m^(M)), has converged, then b_M |u_{M,m}^(M)| ≪ 1 and q_{M+1} will contain a significant component in the direction of v̂_m^(M). Since v̂_m^(M) is a linear combination of q_m, m = 1, …, M, q_{M+1} will possess a component that is linearly dependent with the previous q_m. A possible solution, known as selective orthogonalization, monitors when b_M |u_{M,m}^(M)| is less than a specified threshold and then removes from y (in the Lanczos algorithm) its projection onto v̂_m^(M). Demmel suggests using the criterion

$$b_M \left|u_{M,m}^{(M)}\right| \le \sqrt{\varepsilon_{\mathrm{mach}}}\, \|T_M\| \tag{8.5-185}$$

to decide when to perform the orthogonalization. Since M is much smaller than N, the computational cost of calculating the eigenvectors, u_m^(M), and, hence, evaluating b_M |u_{M,m}^(M)|, requires significantly fewer floating-point operations than computing the orthogonality vector, Q_M^T q_{M+1}. The selective orthogonalization was implemented in the basic Lanczos algorithm using (8.5-185) and then applied to the example random symmetric matrix. The resulting eigenvalue stabilization diagram is shown in Fig. 8.5-12; it indicates that selective orthogonalization ensured that the Ritz estimates were stable and convergent.

8.5.2 Nonsymmetric eigenvalue problem

In structural dynamics, the eigenvalue problem for nonsymmetric matrices can occur, for example, if gyroscopic moments or aerodynamic forces are included, or if the second-order equations are recast as a first-order system. For most applications, where the matrices are of moderate size, the eigensolutions are typically computed with the implicit QR algorithm. The


FIGURE 8.5-12 Eigenvalue stabilization diagram for the example 100 × 100 random matrix using the basic Lanczos algorithm with selective orthogonalization.

implicit QR algorithm for nonsymmetric matrices is based on the original approach that was developed by John Francis in 1961. Recall that the efficiency of the implicit QR algorithm for symmetric matrices relied on a reduction to tridiagonal form, effective shifts, and a bulge-chasing scheme. These three elements, with the appropriate adjustments, are also the main features of the implicit QR algorithm for nonsymmetric matrices. Having presented the QR algorithm for the symmetric eigenvalue problem in detail, we will only outline the changes of the algorithm that are needed to address the nonsymmetry. For a complete discussion of the QR iteration, the reader can consult Golub and Van Loan (2013), Stewart (1998, 2001a,b), Demmel (1997), and Watkins (2007, 2010).

We start with two decomposition theorems for nonsymmetric matrices. First, recall that the Spectral Theorem, Theorem 8.4-4, provided the theoretical basis for computing the eigensolution of a symmetric matrix. For a real symmetric matrix, A, it stated that there exists an orthonormal matrix,


Q, such that Q^T A Q = Λ, where Λ is a real diagonal matrix consisting of the eigenvalues, and the columns of Q are the corresponding orthonormal eigenvectors. Clearly, it is essential that an orthonormal matrix that diagonalizes A exists if we are to develop a method to calculate it. This is exactly what the QR iteration accomplishes. On the other hand, for nonsymmetric matrices, such an approach is not practical, since not all matrices can be diagonalized. Recall that A is diagonalizable if and only if it has a complete set of eigenvectors. Then, the similarity transformation, (8.5-4), by its matrix of eigenvectors, V, will diagonalize A. We add that V will often be complex-valued and nonunitary, which can pose numerical issues. If A possesses eigenvalues with multiplicities greater than one, it may not have a complete set of eigenvectors. These are known as defective matrices. For example, the matrix

$$A = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} \tag{8.5-186}$$

has eigenvalues equal to zero with multiplicity two. However, it only possesses a single linearly independent eigenvector, which is a scalar multiple of {1  0}^T. Generally, numerical round-off errors will perturb A so that it is almost defective. For example, consider the perturbed matrix,

$$A_\varepsilon = \begin{bmatrix} 0 & 1 \\ \varepsilon^2 & 0 \end{bmatrix} \tag{8.5-187}$$

Then A_ε is not defective and has two distinct eigenvalues, λ_1 = ε and λ_2 = −ε, with corresponding eigenvectors, v_1 = {1  ε}^T and v_2 = {1  −ε}^T. If ε ≈ ε_mach, then V = [v_1 | v_2] will be nearly singular and ill-conditioned. Hence, an approach to diagonalize A_ε via its eigenvector matrix could lead to significant numerical errors. This simple example cautions us against pursuing a diagonalization approach.

The Schur Decomposition Theorem offers a practical basis for solving the general eigenvalue problem. First, recall that the Hermitian, Q^H, is defined as the complex-conjugate transpose of Q. Also, generalizing orthogonality for real-valued matrices, we define a complex-valued matrix, Q, as unitary if Q^H Q = I.
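The cautionary near-defective example can be reproduced directly; a sketch with an illustrative ε = 10⁻⁴ (chosen larger than ε_mach so the computed eigenvalues are reliable):

```python
import numpy as np

eps = 1e-4                       # illustrative small perturbation
A_eps = np.array([[0.0, 1.0],
                  [eps**2, 0.0]])

# The perturbed matrix has distinct eigenvalues +/- eps ...
lam, V = np.linalg.eig(A_eps)
assert np.allclose(np.sort(lam), [-eps, eps])

# ... but its eigenvector matrix is nearly singular: cond(V) ~ 1/eps
assert np.linalg.cond(V) > 1e3
print("near-defective conditioning verified")
```

This is why a diagonalization-based approach is avoided in favor of the unitary (Schur) route that follows.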


Theorem 8.5-4 (Schur Decomposition) Let A be an N × N matrix. Then there exists a unitary matrix, Q, such that Q^H A Q = R and R is an upper triangular matrix. The diagonal elements of R are the eigenvalues of A.

Theorem 8.5-4 states that any matrix is similar to an upper triangular matrix via a unitary transformation. Note that the eigenpairs of a triangular matrix can be easily determined, since its eigenvalues lie on its diagonal and the eigenvectors can be calculated via backward (or forward) substitution. Moreover, because unitary transformations are numerically stable, the Schur decomposition provides a numerically viable approach for computing the eigenpairs of a general matrix. Note that the Spectral Theorem is a special case of Theorem 8.5-4 when A is a real-valued symmetric matrix. The QR iteration can also yield the Schur decomposition. However, if A is a real-valued matrix, then its computed QR factors will be real-valued, and the iterates, A^(k+1) = Q^(k)T A^(k) Q^(k), will also be real-valued matrices. Since nonsymmetric real matrices often have complex-valued eigenvalues, the diagonals of A^(k) will not converge to the eigenvalues. Let us consider the following 5 × 5 nonsymmetric matrix,

$$A = \begin{bmatrix}
3 & 1 & 0 & 1 & 1 \\
1 & 1 & 2 & 2 & 1 \\
2 & 3 & 3 & 1 & 2 \\
2 & 1 & 4 & 4 & 5 \\
1 & 4 & 1 & 6 & 6
\end{bmatrix} \tag{8.5-188}$$

The eigenvalues of A were calculated using the implicit QR algorithm in LAPACK and are

$$\begin{aligned}
\lambda_1 &= 6.1238 + 1.0322i \\
\lambda_2 &= 6.1238 - 1.0322i \\
\lambda_3 &= -2.3275 + 3.2787i \\
\lambda_4 &= -2.3275 - 3.2787i \\
\lambda_5 &= 2.4074
\end{aligned} \tag{8.5-189}$$

Applying the basic QR iteration, with real-valued QR factorizations, yields after 40 iterations the block upper-triangular matrix, to four decimal places,


$$A^{(41)} = \begin{bmatrix}
\mathbf{7.8966} & \mathbf{2.6671} & 2.2552 & 4.8363 & 1.8954 \\
\mathbf{1.5779} & \mathbf{4.3510} & 2.5453 & 2.0635 & 1.0449 \\
0 & 0 & \mathbf{2.0869} & \mathbf{3.2308} & 0.3183 \\
0 & 0 & \mathbf{3.3453} & \mathbf{2.5680} & 2.1767 \\
0 & 0 & 0 & 0 & \mathbf{2.4074}
\end{bmatrix} \tag{8.5-190}$$

The uncoupled blocks are indicated in bold. Observe that the real-valued eigenvalue, λ_5, lies on the diagonal, while the complex-conjugate pairs of eigenvalues are represented by the 2 × 2 matrices on the diagonal. The first 2 × 2 block yields λ_1 and λ_2, while the middle 2 × 2 block yields λ_3 and λ_4. As in (8.5-19), define Q̂^(40) to be the product of the orthonormal factors from the QR iteration. Then the example shows that A is similar to a block upper-triangular matrix via A^(41) = Q̂^(40)T A Q̂^(40). This illustrates the next required theorem, which is known as the Real Schur Decomposition Theorem.
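The emergence of this real block upper-triangular form can be reproduced in a few lines. A sketch with a small hypothetical real matrix of our own (eigenvalues 4 and ±i√2), scrambled by a random orthonormal similarity so that the iteration has work to do:

```python
import numpy as np

# Hypothetical real 3x3 example: eigenvalues 4 and the conjugate pair +/- i*sqrt(2)
B = np.array([[0.0, -2.0, 1.0],
              [1.0,  0.0, 3.0],
              [0.0,  0.0, 4.0]])
rng = np.random.default_rng(4)
Q0, _ = np.linalg.qr(rng.standard_normal((3, 3)))
A = Q0 @ B @ Q0.T                  # scramble with a random orthonormal similarity

# Basic real QR iteration: A_{k+1} = R_k Q_k = Q_k^T A_k Q_k
Ak = A.copy()
for _ in range(200):
    Q, R = np.linalg.qr(Ak)
    Ak = R @ Q

# The iterates stay real, so the conjugate pair cannot appear on the diagonal;
# instead Ak approaches a real block upper-triangular (real Schur) form:
# a 1x1 block with the real eigenvalue and a 2x2 block holding the complex pair
assert abs(Ak[1, 0]) < 1e-8 and abs(Ak[2, 0]) < 1e-8
assert np.isclose(Ak[0, 0], 4.0)
block = Ak[1:, 1:]
assert np.isclose(np.trace(block), 0.0) and np.isclose(np.linalg.det(block), 2.0)
print("real Schur block structure verified")
```

The trailing 2 × 2 block keeps rotating from iteration to iteration, but its trace and determinant (and hence the conjugate eigenvalue pair) are fixed.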

Theorem 8.5-5 (Real Schur Decomposition) Let A be an N  N real-valued matrix. Then there exists an orthonormal matrix, Q, such that QT AQ ¼ R and R is a real-valued block upper-triangular matrix. The diagonal block matrices of R are either 2  2 or 1  1 matrices. The 2  2 matrices yield the complex-conjugate eigenvalue pairs, and the 1  1 matrices are associated with the real-valued eigenvalues of A. Since most structural dynamic models are real-valued, the real Schur decomposition provides the theoretical basis for a practical QR approach that computes the eigenpairs using real arithmetic. For early computers, complex arithmetic was expensive and was, therefore, avoided in many of the early software packages such as EISPACK and LINPACK. Today, for moderate size matrices, the added cost of complex arithmetic is not a significant drawback. An obvious advantage for computing the real Schur decomposition is that it provides pairs of complex eigenvalues that are exact conjugates of each other. On the other hand, if the Schur decomposition were computed using the QR iteration with complex arithmetic, numerical errors could lead to eigenpairs that are not exact conjugates of each other. To illustrate this, we “complexified” the matrix A that is defined in (8.5188) by adding a complex shift, m ¼ 1 þ i, to the diagonal, and then

8.5 Matrix eigenvalue problem

performing the complex QR iteration. The Householder transformations that are used in the QR factorization generalize easily to complex-valued matrices. In order to eliminate the lower-triangular elements to four decimal places, 180 iterations were needed. The complex shift, μ, was then removed from the diagonal of the final iterate, A^(180). The complex modulus is shown in (8.5-191) to indicate that the Schur decomposition of A was achieved:

                     [ 6.2101  3.7091  1.5670  0.5000  2.6561 ]
                     [ 0       6.2102  4.0135  0.8759  4.1546 ]
    |A^(180) − μI| = [ 0       0       4.0208  1.3986  0.8353 ]      (8.5-191)
                     [ 0       0       0       2.4074  1.5588 ]
                     [ 0       0       0       0       4.0208 ]

The computed eigenvalues from the diagonal of A^(180) − μI are listed below. Note that λ1 and λ2 are almost conjugates of each other. We add that if we performed 20 more iterations, then λ1 would equal the conjugate of λ2 to four decimal places, i.e.,

    λ1 = 6.1237 + 1.0322i
    λ2 = 6.1238 − 1.0322i
    λ3 = 2.3275 + 3.2787i                                            (8.5-192)
    λ4 = 2.4074
    λ5 = 2.3275 − 3.2787i

Since most of the eigenvalue problems in structural dynamics involve real-valued matrices, we will only discuss how to efficiently compute the real Schur decomposition by the QR iteration. As in the symmetric case, the first step is to reduce the matrix using an orthonormal similarity transformation so that it is "close" to the desired form. For symmetric matrices, the Householder procedure efficiently reduced them to tridiagonal form, which is as close to diagonal form as possible in a finite number of steps. Likewise, the real Schur decomposition indicates that we should reduce nonsymmetric matrices so that they are nearly block upper triangular. For a nonsymmetric matrix, A, it is possible to efficiently eliminate elements


CHAPTER 8 Numerical methods

below the subdiagonal by Householder transformations. Recall that such a matrix, H = [h_{i,j}], where h_{i,j} = 0 for i > j + 1, is said to be upper Hessenberg. The algorithm for upper Hessenberg reduction is a straightforward modification of what was presented earlier for symmetric matrices, and we refer the reader to Golub and Van Loan (2013) for details. Reducing A in (8.5-188) to upper Hessenberg form leads to H = U^T A U, i.e.,

        [ 3.000   0.4264  0.2052  1.6628  0.1050 ]
        [ 4.6904  3.8636  2.3507  2.8465  7.6952 ]
    H = [ 0       4.4394  1.8701  0.7230  0.6603 ]                   (8.5-193)
        [ 0       0       4.8296  0.1862  1.7479 ]
        [ 0       0       0       1.8810  1.4525 ]

where U = U1 U2 U3 and Uj is the Householder transformation that eliminates the elements below row j + 1 in the jth column:

        [ 1  0       0       0       0      ]
        [ 0  0.2132  0.5217  0.6257  0.5393 ]
    U = [ 0  0.4264  0.5632  0.0048  0.7078 ]                        (8.5-194)
        [ 0  0.2132  0.5828  0.7065  0.3402 ]
        [ 0  0.8528  0.2663  0.3306  0.3041 ]

The QR factorization of upper Hessenberg matrices can be efficiently calculated by bulge-chasing Givens transformations, as was done for tridiagonal symmetric matrices. For example, let G1 represent the first Givens transformation that eliminates the (2,1) element in H. Then H1 = G1 H G1^T will be almost upper Hessenberg, except for a bulge in the (3,1) position, which is indicated below in boldface font:

         [ 5.9355  2.8129  2.0909  1.5020  6.4261 ]
         [ 1.4511  0.9282  1.0938  2.9346  4.2347 ]
    H1 = [ 3.7399  2.3920  1.8701  0.7230  0.6603 ]                  (8.5-195)
         [ 0       0       4.8296  0.1862  1.7479 ]
         [ 0       0       0       1.8810  1.4525 ]

The similarity transformation, H2 = G2 H1 G2^T, by the second Givens transformation, G2, will move the bulge to the (4,2) position, i.e.,


         [ 5.9355   0.9318  3.3787  1.5020  6.4261 ]
         [ 4.0116   2.9224  1.6182  1.7355  0.9163 ]
    H2 = [ 0        0.3200  0.1241  2.4743  4.1868 ]                 (8.5-196)
         [ 0       -4.5025  1.7471  0.1862  1.7479 ]
         [ 0        0       0       1.8810  1.4525 ]

This bulge chasing via Givens transformations continues until the upper Hessenberg form is restored. It is analogous to the bulge chasing that was implemented in the QR algorithm for symmetric matrices and allows us to incorporate real-valued shifts implicitly. Unfortunately, the Givens transformations cannot efficiently accommodate complex-valued shifts if we restrict computations to real arithmetic. Recall from (8.5-68) that to speed up the QR iteration we have to incorporate shifts that are good approximations of the eigenvalues. Since the eigenvalues of nonsymmetric matrices are generally complex, we need to include complex-valued shifts. This is a problem if we are only performing real arithmetic. Fortunately, complex eigenvalues of real-valued matrices occur in conjugate pairs. Suppose μ1 is an approximate complex eigenvalue of H, and consider the following double-shift strategy that also uses its conjugate, μ2 = μ̄1:

    H − μ1 I = Q1 R1,        H^(1) = R1 Q1 + μ1 I
                                                                     (8.5-197)
    H^(1) − μ2 I = Q2 R2,    H^(2) = R2 Q2 + μ2 I

Straightforward arguments show that the upper Hessenberg form is preserved in H^(1) and H^(2), and that the similarity transformation,

    H^(2) = (Q1 Q2)^H H (Q1 Q2)

(8.5-198)

involves the unitary matrix, Q1 Q2, that is likely complex-valued. Consider the product, G = (H − μ1 I)(H − μ2 I), which is a real-valued matrix, since

    G = (H − μ1 I)(H − μ2 I) = H² − sH + dI,    s = μ1 + μ2  and  d = μ1 μ2

(8.5-199)


and both s and d are real-valued. Let

    Q = Q1 Q2  and  R = R2 R1

(8.5-200)

Then, as shown below, Q and R are QR factors of G:

    QR = (Q1 Q2)(R2 R1) = Q1 (Q2 R2) R1 = Q1 (H^(1) − μ2 I) R1
       = Q1 (R1 Q1 + μ1 I − μ2 I) R1 = (Q1 R1)(Q1 R1 + μ1 I − μ2 I)     (8.5-201)
       = (H − μ1 I)(H − μ2 I) = G

Since G is a real-valued matrix, Q and R can also be chosen as real-valued matrices. Therefore, instead of performing the double-shift QR factorization using complex arithmetic to compute H^(2), we can compute Q as the real-valued QR factor of G and calculate the next QR iterate by

    Q^T H Q = H^(2)

(8.5-202)

Calculating G and then its QR factorization explicitly can be computationally expensive when it is performed numerous times within the QR iteration. However, as in the symmetric case, we can compute Q implicitly by defining its first column and then the subsequent columns by a bulge-chasing scheme. We need the following result, which generalizes Theorem 8.5-2 to upper Hessenberg matrices. Note that it suffices to consider unreduced matrices, since the eigenvalue problem decouples into two smaller problems if a subdiagonal element is equal to zero.

Theorem 8.5-6 (Implicit Q Theorem for upper Hessenberg matrices). Let H be an N×N unreduced upper Hessenberg matrix. Suppose Q = [q1 | ⋯ | qN] and Q̃ = [q̃1 | ⋯ | q̃N] are orthonormal matrices such that Q^T H Q and Q̃^T H Q̃ are upper Hessenberg matrices. If q1 = q̃1, then the columns of Q and Q̃ are equal except for sign differences, i.e., qn = ±q̃n, n = 2, …, N.

Let us examine the QR factorization of G. Let h_{i,j} denote the elements of H. Then the first column, g1, of G is given by


         [ h1,1² + h1,2 h2,1 − s h1,1 + d ]   [ g1,1 ]
         [ h2,1 h1,1 + h2,2 h2,1 − s h2,1 ]   [ g2,1 ]
    g1 = [ h3,2 h2,1                      ] = [ g3,1 ]               (8.5-203)
         [ 0                              ]   [ 0    ]
         [ ⋮                              ]   [ ⋮    ]
         [ 0                              ]   [ 0    ]
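Both the double-shift identity (8.5-201) and the first-column formula (8.5-203) lend themselves to a quick numerical check. A minimal sketch (the 6×6 random Hessenberg test matrix and the Francis-type shift data from the trailing 2×2 block are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 6
# Random unreduced upper Hessenberg test matrix (assumption)
H = np.triu(rng.standard_normal((N, N)), k=-1)

# Shift data from the trailing 2x2 block: s = mu1 + mu2, d = mu1 * mu2
s = H[-2, -2] + H[-1, -1]
d = H[-2, -2] * H[-1, -1] - H[-2, -1] * H[-1, -2]

# G = (H - mu1 I)(H - mu2 I) = H^2 - s H + d I is real even for complex shifts
G = H @ H - s * H + d * np.eye(N)

# First column of G agrees with the closed-form expression (8.5-203)
g1 = np.zeros(N)
g1[0] = H[0, 0]**2 + H[0, 1] * H[1, 0] - s * H[0, 0] + d
g1[1] = H[1, 0] * H[0, 0] + H[1, 1] * H[1, 0] - s * H[1, 0]
g1[2] = H[2, 1] * H[1, 0]
assert np.allclose(G[:, 0], g1)

# The real QR factor of G performs the double shift: Q^T H Q stays Hessenberg
Q, R = np.linalg.qr(G)
H2 = Q.T @ H @ Q
assert np.abs(np.tril(H2, -2)).max() < 1e-8
assert np.allclose(np.sort_complex(np.linalg.eigvals(H2)),
                   np.sort_complex(np.linalg.eigvals(H)))
```

Forming G explicitly, as done here, is exactly what the implicit bulge-chasing scheme described next avoids; the sketch only confirms that real arithmetic suffices.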

Recall that the QR factorization of G begins with the initial Householder reflector matrix, U1, that eliminates the elements below the first row of g1. In particular, we obtain

    U1 = I − 2 u1 u1^T
    u1 = (g1 + sgn(g1,1) ‖g1‖2 e1) / ‖g1 + sgn(g1,1) ‖g1‖2 e1‖2     (8.5-204)
    G1 = U1 G

The next Householder reflector matrix, U2, deletes in G1 the elements in column two below the second row. Continuing in this manner reduces G to upper-triangular form,

    (U_{N−1} ⋯ U2 U1) G = R

(8.5-205)

from which we obtain the orthonormal QR factor,

    Q = (U_{N−1} ⋯ U2 U1)^T = U1 U2 ⋯ U_{N−1}

(8.5-206)

To apply the Implicit Q Theorem we need to determine the first column, q1, of Q. Observe that, by construction, the first columns of U2, …, U_{N−1} are equal to e1. Therefore, the first column of Q will equal the first column of U1. Moreover, the first column of U1 is equal to the unit vector in the direction of g1; hence, q1 = g1/‖g1‖2. To initiate the bulge-chasing process, we apply the similarity transformation, H1 = U1 H U1^T = U1 H U1. The second equality follows from the symmetry of Householder reflector matrices. Also, note that U1 is equal to the identity matrix except for its principal 3×3 submatrix. Following Demmel (1997) and Golub and Van Loan (2013), we will use Wilkinson diagrams to illustrate the effects of the similarity transformations on a 6×6 example. The form of H1 is shown below:


    H1 =
    [u u u      ][h h h h h h][u u u      ]   [h h h h h h]
    [u u u      ][h h h h h h][u u u      ]   [h h h h h h]
    [u u u      ][  h h h h h][u u u      ]   [+ h h h h h]
    [      1    ][    h h h h][      1    ] = [+ + h h h h]
    [        1  ][      h h h][        1  ]   [      h h h]
    [          1][        h h][          1]   [        h h]
                                                     (8.5-207)

Observe that the initial similarity transformation disrupts the upper Hessenberg form by introducing a bulge that consists of three elements, which are indicated by crosses. The subsequent Householder transformations are defined to remove the bulge by chasing it toward the lower right corner of the matrix. The next Householder matrix, Ũ2, is defined to delete the elements in positions (3,1) and (4,1). The corresponding Wilkinson diagram for H2 = Ũ2 H1 Ũ2 becomes


    H2 =
    [1          ][h h h h h h][1          ]   [h h h h h h]
    [  u u u    ][h h h h h h][  u u u    ]   [h h h h h h]
    [  u u u    ][+ h h h h h][  u u u    ]   [  h h h h h]
    [  u u u    ][+ + h h h h][  u u u    ] = [  + h h h h]
    [        1  ][      h h h][        1  ]   [  + + h h h]
    [          1][        h h][          1]   [        h h]

(8.5-208)

Observe that the bulge has moved to positions (4,2), (5,2), and (5,3). The Householder matrix, Ũ3, is now defined to eliminate the bulge elements in the second column and leads to H3 = Ũ3 H2 Ũ3, which has the form

    H3 =
    [1          ][h h h h h h][1          ]   [h h h h h h]
    [  1        ][h h h h h h][  1        ]   [h h h h h h]
    [    u u u  ][  h h h h h][    u u u  ]   [  h h h h h]
    [    u u u  ][  + h h h h][    u u u  ] = [    h h h h]
    [    u u u  ][  + + h h h][    u u u  ]   [    + h h h]
    [          1][        h h][          1]   [    + + h h]
                                                     (8.5-209)


To remove the bulge that occupies positions (5,3), (6,3), and (6,4), we construct the Householder matrix, Ũ4, to delete the elements in rows five and six of the third column. The resulting similarity transformation, H4 = Ũ4 H3 Ũ4, yields

    H4 =
    [1          ][h h h h h h][1          ]   [h h h h h h]
    [  1        ][h h h h h h][  1        ]   [h h h h h h]
    [    1      ][  h h h h h][    1      ]   [  h h h h h]
    [      u u u][    h h h h][      u u u] = [    h h h h]
    [      u u u][    + h h h][      u u u]   [      h h h]
    [      u u u][    + + h h][      u u u]   [      + h h]
                                                     (8.5-210)

The bulge has now been reduced to one element in position (6,4). Removal of the bulge can be accomplished using a Givens transformation, Ũ5. The resulting similarity transformation, H5 = Ũ5 H4 Ũ5^T, restores the upper Hessenberg form,


    H5 =
    [1          ][h h h h h h][1          ]   [h h h h h h]
    [  1        ][h h h h h h][  1        ]   [h h h h h h]
    [    1      ][  h h h h h][    1      ]   [  h h h h h]
    [      1    ][    h h h h][      1    ] = [    h h h h]
    [        u u][      h h h][        u u]   [      h h h]
    [        u u][      + h h][        u u]   [        h h]
                                                     (8.5-211)

Let Q̃ = U1 Ũ2 ⋯ Ũ5^T; then the transformations in (8.5-207) through (8.5-211) imply

    Q̃^T H Q̃ = H5                                    (8.5-212)

Since the first columns of Ũj, j = 2, …, 5, are equal to e1, the first column of Q̃ equals the first column of U1, which in turn is equal to the first column of Q. Furthermore, both similarity transformations, (8.5-202) and (8.5-212), are upper Hessenberg. Therefore, by Theorem 8.5-6, Q and Q̃ are equal, except for sign differences in their columns. We have shown how to implicitly compute the real-valued QR factors while incorporating complex-conjugate shifts by a bulge-chasing scheme. The final component for improving convergence is choosing good shifts. Similar to the Wilkinson shift for symmetric matrices, we calculate the Francis shifts based on the eigenvalues of the lower right 2×2 matrix, whose characteristic equation is given by

    s² − (h_{N′,N′} + h_{N,N}) s + (h_{N′,N′} h_{N,N} − h_{N′,N} h_{N,N′}) = 0,    N′ = N − 1      (8.5-213)


Denote the roots of (8.5-213) by μ1 and μ2. It is easy to show that

    s = μ1 + μ2 = h_{N′,N′} + h_{N,N}
    d = μ1 μ2 = h_{N′,N′} h_{N,N} − h_{N′,N} h_{N,N′}

(8.5-214)

Therefore, we can easily incorporate the shifts, μ1 and μ2, by computing g1 in (8.5-203) without having to solve for them. Observe that this also holds when μ1 and μ2 are real-valued, which means that we are performing two real-valued shifts in a single QR step. In summary, the implicit QR algorithm for real-valued nonsymmetric matrices involves a reduction to upper Hessenberg form, use of the Francis shifts, and a bulge-chasing procedure to implicitly compute the QR factorization. Stewart (2001a,b) has shown that for most matrices, the QR algorithm with Francis shifts will converge quadratically. Demmel (1997) noted that the following nondefective matrix,

        [ 0  1  0 ]
    H = [ 0  0  1 ]                                  (8.5-215)
        [ 1  0  0 ]

fails to converge under the implicit QR algorithm. Unlike the Wilkinson shift for the symmetric QR algorithm, there is no general shifting strategy that will guarantee convergence for the nonsymmetric case. Therefore, current algorithms introduce "exceptional" or "ad hoc" shifts if they determine that convergence is slow.

8.5.3 Error analysis

In this section, we will briefly examine the sensitivity of the eigenproblem and the error bounds of the computed solutions from the QR algorithm. We refer the reader to Stewart (1998, 2001a,b), Golub and Van Loan (2013), and Demmel (1997) for additional details and proofs. We begin with some results related to the symmetric eigenvalue problem. Let A be a real symmetric matrix, and let Λ̂ = diag(λ̂1, …, λ̂N) denote the computed diagonal matrix of eigenvalues from the implicit QR algorithm. Then it can be shown (Golub and Van Loan, 2013) that there exists a perturbation, E, and an orthonormal matrix, Q, such that

    Λ̂ = Q^T (A + E) Q,    ‖E‖2 ≈ ε_mach ‖A‖2

(8.5-216)


Observe that Q is an orthonormal matrix whose columns contain the eigenvectors of A + E. Eq. (8.5-216) implies that the QR algorithm is backward stable; i.e., the computed eigenpairs belong to the perturbed matrix, A + E. The next result bounds the difference of the eigenvalues under these perturbations. Let λn and λ̂n denote the eigenvalues of A and A + E, respectively, and arrange them in descending order. Then, it can be shown that

    |λn − λ̂n| ≤ ‖E‖2                                (8.5-217)

Since ‖E‖2 ≈ ε_mach ‖A‖2, the eigenvalues of symmetric matrices are well conditioned. Hence, the QR algorithm is a numerically stable method for computing the eigenvalues of symmetric matrices. While the eigenvalues are stable and can be computed accurately, the eigenvectors can be more sensitive and depend on their distances to neighboring eigenvalues. For an eigenvalue, λn, define its gap as

    gap(λn) = min_{k≠n} |λk − λn|                    (8.5-218)

Let (λn, qn) and (λ̂n, q̂n) represent the nth eigenpairs of A and A + E, respectively. The similarity of the eigenvectors can be measured by the acute angle between them; specifically, let θn = cos⁻¹(qn^T q̂n). Then, if gap(λn) > 0 (Demmel, 1997),

    (1/2) sin 2θn ≤ ‖E‖2 / gap(λn) ≈ ε_mach ‖A‖2 / gap(λn)

(8.5-219)
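The two bounds above can be observed numerically. In the sketch below (the matrix, the seed, and the perturbation size are arbitrary choices, not data from the text), a symmetric matrix with a pair of closely spaced eigenvalues is perturbed: every eigenvalue moves by no more than ‖E‖2, while only the well-separated eigenvector is guaranteed to stay aligned:

```python
import numpy as np

rng = np.random.default_rng(3)
# Symmetric test matrix with eigenvalues {1, 1 + 1e-9, 3}: the close pair has a
# tiny gap, the third eigenvalue is well separated (assumption)
Q0, _ = np.linalg.qr(rng.standard_normal((3, 3)))
A = Q0 @ np.diag([1.0, 1.0 + 1e-9, 3.0]) @ Q0.T
A = 0.5 * (A + A.T)                       # enforce exact symmetry

# Small symmetric perturbation with ||E||_2 of order 1e-8
E = 1e-8 * rng.standard_normal((3, 3))
E = 0.5 * (E + E.T)
normE = np.linalg.norm(E, 2)

w, V = np.linalg.eigh(A)
wp, Vp = np.linalg.eigh(A + E)

# (8.5-217): each eigenvalue moves by at most ||E||_2 (plus roundoff)
assert np.all(np.abs(w - wp) <= normE + 1e-12)

# (8.5-219): the well-separated eigenvector (gap ~ 2) stays aligned ...
cos3 = abs(V[:, 2] @ Vp[:, 2])
assert cos3 > 1 - 1e-6
# ... while the bound ||E||_2/gap ~ 10 for the close pair is vacuous, so no
# alignment can be guaranteed for the first two eigenvectors
```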

We add that if gap(λn) ≤ 2‖E‖2, then (8.5-219) provides no information on θn, since sin 2θn ≤ 1 is always true for any θn. On the other hand, if gap(λn) is much larger than ε_mach ‖A‖2, we can expect good correlation between the actual and computed eigenvectors. In practice, since a linearized structural dynamic system cannot possess elastic modes with identical frequencies, their gaps will always be positive. However, for systems with symmetry, the resulting closely spaced modes might yield gaps that are small. Since ‖A‖2 = λmax and λmax can be very large, the computed closely spaced modes may differ significantly from the exact modes or another independently computed set of modes. While the gaps corresponding to elastic modes can never vanish, the gaps related to rigid-body modes will equal zero. For example, a typical free-free system


will have six rigid-body modes whose corresponding eigenvalues are all equal to zero. In this case, (8.5-219) does not provide any information on θn. However, recall that orthonormal eigenvectors belonging to multiple eigenvalues are unique only up to an orthonormal transformation. In practice, if the computed eigenvalues of the rigid-body modes are small, then we can expect these computed modes to approximate the exact rigid-body modes via an orthonormal transformation. In view of the sensitivity of computed eigenvectors whose gaps are small, as part of any modal analysis, the analyst should perform standard model checks to determine the reasonableness of closely spaced and rigid-body modes. We will now examine the errors and sensitivities for the nonsymmetric eigenvalue problem. Let A be a real-valued nonsymmetric matrix and T̂ be the real Schur form that is computed from the implicit QR algorithm. Then there exists an orthonormal matrix, Q, and an error matrix, E, such that (Golub and Van Loan, 2013)

    T̂ = Q^T (A + E) Q,    ‖E‖2 ≈ ε_mach ‖A‖2

(8.5-220)

Therefore, the QR algorithm is backward stable and computes the real Schur form of a slightly perturbed matrix, A + E. Before presenting some results that indicate the sensitivity of the eigensolutions to perturbations, let us consider the following example, which can be found in Stewart (1998, 2001a,b) and in Demmel (1997). Consider the ε-perturbation of the N×N Jordan block,

         [ 0  1  0  ⋯  0 ]
         [ 0  0  1  ⋯  0 ]
    Jε = [ ⋮  ⋮  ⋮  ⋱  ⋮ ]                           (8.5-221)
         [ 0  0  0  ⋯  1 ]
         [ ε  0  0  ⋯  0 ]

When ε = 0, J0 is a defective matrix with eigenvalue equal to zero with multiplicity equal to N. For ε > 0, Jε has the eigenvalues λk = ε^(1/N) ζ^k, where ζ = exp(2πi/N) is the Nth root of unity. Therefore, even for moderate size N, a small perturbation will produce large changes in the eigenvalues. This of course is an extreme case, but it does warn us that nonsymmetric problems can be very sensitive to small perturbations that can produce


significant computational errors. Fortunately, most problems in structural dynamics are not so pathological. Let λn be a simple eigenvalue of A, with left and right eigenvectors, yn and xn, respectively, having unit norms. Consider a slightly perturbed matrix, A + E, and the resulting eigenvalue, λ̂n, that is closest to λn. We have the following result (Demmel, 1997):

    |λ̂n − λn| ≤ ‖E‖2 / |yn^H xn| + O(‖E‖2²)          (8.5-222)

To a first-order approximation, κ(λ) = 1/|y^H x| characterizes the change in the eigenvalue under a perturbation, E. Therefore, κ(λ) defines the condition number of the eigenvalue λ. The Jordan matrix, J0, in (8.5-221) illustrates an extreme example of ill conditioning. Since the right and left eigenvectors of J0 in (8.5-221) are equal to e1 and eN, respectively, the condition number becomes κ(0) = 1/|eN^T e1| = ∞. On the other hand, the eigenvalues of symmetric matrices are well conditioned, since the left and right eigenvectors are equal, which implies that κ(λ) = 1. This is consistent with (8.5-217). Wilkinson (Golub and Van Loan, 2013) showed that if |y^H x| is small, then A is near a matrix with multiple eigenvalues equal to λ. Hence, matrices with well-separated eigenvalues will not have small inner products, |y^H x|, and, therefore, will be insensitive to small perturbations. For these problems, (8.5-220) and (8.5-222) imply that the implicit QR algorithm will produce eigenvalues that are accurate to within O(ε_mach ‖A‖2). The sensitivity of the eigenvectors under small perturbations is similar to the result for symmetric matrices, but requires a generalization of gap(λ). Let λn be a simple eigenvalue of A, with unit eigenvector, qn. Consider the unitary matrix, Q = [qn | Q2], and the similarity transformation,

    Q^H A Q = [ λn  ∗  ]
              [ 0   T22 ]                            (8.5-223)

Define the separation of λn I and T22 by the minimum singular value,

    sep(λn, T22) = σ_min(T22 − λn I)

(8.5-224)


Note that if A is symmetric, then sep(λn, T22) = gap(λn). However, for general A,

    sep(λn, T22) ≤ min_{μ ∈ λ(T22)} |μ − λn| = gap(λn)

(8.5-225)

Consider the perturbed matrix, A + E, and the partitioning of Q^H E Q, conformal with (8.5-223), whose (2,1) block is denoted by δ:

    Q^H E Q = [ ∗  ∗ ]
              [ δ  ∗ ]                               (8.5-226)

Then, if ‖E‖F is sufficiently small, there is an eigenpair, (λ̂n, q̂n), of A + E such that

    sin θn ≤ 4 ‖δ‖2 / sep(λn, T22) ≤ 4 ‖E‖2 / sep(λn, T22)      (8.5-227)

where θn is the acute angle between qn and q̂n. Golub and Van Loan (2013) provide additional details and proof. The above result implies that the eigenvector of the perturbed system is sensitive to the separation of λn and T22. In particular, as λn approaches an eigenvalue of T22, (8.5-225) implies that sep(λn, T22) → 0 and the correlation between qn and q̂n will diminish. This lack of correlation is not surprising if we consider the limiting case where we have multiple eigenvalues. If the matrix is nondefective, then the eigenvectors are only unique up to linear combinations that are not likely to "persist" under perturbations. Note that we have observed similar eigenvector sensitivity in the symmetric case. If, however, sep(λn, T22) is not small, then, since the QR algorithm introduces only a small perturbation, with ‖E‖2 ≈ ε_mach ‖A‖2, we can expect good correlation between qn and q̂n. As a final remark, there is an important theme here that is worth mentioning. As we have noted in this and previous sections, orthonormal transformations are preferred in computations because they provide numerical stability and lead to solutions of slightly perturbed problems. The sensitivity of these computed solutions depends on the conditioning of the problems being solved. The backward stable QR algorithm, which is based on orthonormal transformations, and the sensitivity results for the eigenvalue problem illustrate this theme.

Problems

Problem 8.1
Compute the three matrix norms of [A], where [A] = [m]⁻¹[k] and


          [ 2  0  0 ]          [  2  −2   0 ]
    [m] = [ 0  2  0 ]    [k] = [ −2   4  −2 ]
          [ 0  0  1 ]          [  0  −2   2 ]

Solution 8.1

                     [ 2  0  0 ]⁻¹ [  2  −2   0 ]   [  1  −1   0 ]
    [A] = [m]⁻¹[k] = [ 0  2  0 ]   [ −2   4  −2 ] = [ −1   2  −1 ]
                     [ 0  0  1 ]   [  0  −2   2 ]   [  0  −2   2 ]

The eigenvalues of [A]^T [A] are λ1 = 0, λ2 = 1.9172, and λ3 = 14.0828. The three norms are

    ‖A‖1 = max_j Σ_{i=1}^{3} |a_{i,j}| = max{ 2  5  3 } = 5

    ‖A‖2 = max_n √λn = max{ 0  1.3846  3.7527 } = 3.7527

    ‖A‖∞ = max_i Σ_{j=1}^{3} |a_{i,j}| = max{ 2  4  4 } = 4

Problem 8.2
What is the Frobenius norm of

          [  1  −1   0 ]
    [A] = [ −1   2  −1 ]
          [  0  −2   2 ]

Solution 8.2
The Frobenius norm is defined as ‖A‖F = √(Σ_{i=1}^{M} Σ_{j=1}^{N} a_{i,j}²) = √(trace(A^T A)); hence,

            [  2  −3   1 ]
    A^T A = [ −3   9  −6 ]
            [  1  −6   5 ]

and

    √(trace(A^T A)) = √(sum of the diagonal terms of A^T A) = √16 = 4
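Solutions 8.1 and 8.2 can be confirmed with numpy's norm routines (a sketch; the use of numpy is an assumption, not part of the text):

```python
import numpy as np

m = np.diag([2.0, 2.0, 1.0])
k = np.array([[ 2.0, -2.0,  0.0],
              [-2.0,  4.0, -2.0],
              [ 0.0, -2.0,  2.0]])
A = np.linalg.inv(m) @ k

assert np.isclose(np.linalg.norm(A, 1), 5.0)        # max absolute column sum
assert np.isclose(np.linalg.norm(A, np.inf), 4.0)   # max absolute row sum
assert abs(np.linalg.norm(A, 2) - 3.7527) < 1e-4    # sqrt of max eig of A^T A
assert np.isclose(np.linalg.norm(A, 'fro'), 4.0)    # sqrt(trace(A^T A))
```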

Problem 8.3
Show that P is a projector, where

        [ 2  −1  −1 ]
    P = [ 1   0  −1 ]
        [ 1  −1   0 ]

Is P an orthogonal projector?

Solution 8.3
In order for P to be a projector, it must satisfy P² = P. Performing the multiplication, we obtain

         [ 2  −1  −1 ] [ 2  −1  −1 ]   [ 2  −1  −1 ]
    P² = [ 1   0  −1 ] [ 1   0  −1 ] = [ 1   0  −1 ] = P
         [ 1  −1   0 ] [ 1  −1   0 ]   [ 1  −1   0 ]

For a projector to be an orthogonal projector, it must also be symmetric, i.e., P^T = P. This projector does not satisfy that requirement (see Theorem 8.4-1).

Problem 8.4
Use an integrating factor to solve the initial value problem,

    ẋ(t) = 10[(1 − e^(−t)) − x(t)],   t > 0
    x(0) = 0

Solution 8.4
Rewrite the differential equation as ẋ(t) + p(t)x(t) = f(t). Then the integrating factor is I(t) = exp(∫p(t)dt) and İ(t) = p(t)I(t). Hence,

    ẋ(t) + 10x(t) = 10(1 − e^(−t))

and p(t) = 10, so I(t) = e^(10t). Multiplying both sides by I(t) produces

    I(t)ẋ(t) + 10I(t)x(t) = 10I(t)(1 − e^(−t))

By the product rule for derivatives, the left side can be expressed as

    I(t)ẋ(t) + 10I(t)x(t) = d/dt [I(t)x(t)]

Hence,

    d/dt [I(t)x(t)] = 10I(t)(1 − e^(−t))  ⟹  I(t)x(t) = 10 ∫ I(t)(1 − e^(−t)) dt
                    = 10 ∫ (e^(10t) − e^(9t)) dt = e^(10t) − (10/9)e^(9t) + C

Dividing both sides by I(t) = e^(10t) leads to x(t) = 1 − (10/9)e^(−t) + Ce^(−10t). The initial condition, x(0) = 0, yields C = 1/9. Hence,

    x(t) = 1 + [e^(−10t) − 10e^(−t)]/9
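The closed-form solution can be spot-checked numerically. A small sketch that verifies the initial condition and the differential equation via a central-difference derivative (the sample points and step size are arbitrary choices):

```python
import numpy as np

def x(t):
    # Closed-form solution from Solution 8.4
    return 1.0 + (np.exp(-10.0 * t) - 10.0 * np.exp(-t)) / 9.0

assert abs(x(0.0)) < 1e-14                      # x(0) = 0

# Residual of x' + 10 x - 10 (1 - e^-t) should vanish along the trajectory
t = np.linspace(0.05, 2.0, 50)
h = 1e-6
xdot = (x(t + h) - x(t - h)) / (2.0 * h)        # central difference
residual = xdot + 10.0 * x(t) - 10.0 * (1.0 - np.exp(-t))
assert np.abs(residual).max() < 1e-6
```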

Problem 8.5
Using a step size of Δt = 0.1, use the RK-2 method to compute the solution to Eq. (8.1-8) for the first three steps, tn = 0.1, 0.2, and 0.3. Assume the following initial conditions:

    ẋ(t) = 10[(1 − e^(−t)) − x(t)],   t > 0
    x(0) = 0

Solution 8.5

    A(t, x) = 10[(1 − e^(−t)) − x]

    a1 = A(tn, xn)
    a2 = A(t_{n+1}, xn + Δt a1)

    x_{n+1} = xn + (Δt/2)(a1 + a2)
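The three RK-2 steps can be carried out in a few lines of code; the sketch below compares the iterates against the closed-form solution of Solution 8.4, which is used only as a reference:

```python
import numpy as np

def A(t, x):                      # A(t, x) = 10[(1 - e^-t) - x]
    return 10.0 * ((1.0 - np.exp(-t)) - x)

def x_exact(t):                   # closed-form solution from Solution 8.4
    return 1.0 + (np.exp(-10.0 * t) - 10.0 * np.exp(-t)) / 9.0

dt, t, x = 0.1, 0.0, 0.0
steps = []
for _ in range(3):                # advance to t = 0.1, 0.2, 0.3
    a1 = A(t, x)
    a2 = A(t + dt, x + dt * a1)
    x = x + 0.5 * dt * (a1 + a2)
    t = t + dt
    steps.append(x)

# With dt = 0.1 the RK-2 iterates track the exact solution to a few percent
for tn, xn in zip((0.1, 0.2, 0.3), steps):
    assert abs(xn - x_exact(tn)) < 0.05
```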


Problem 8.6
Using a step size of Δt = 0.1, use the RK-4 method to compute the solution to Eq. (8.1-8) for the first three steps, tn = 0.1, 0.2, and 0.3. Assume the following initial conditions:

    ẋ(t) = 10[(1 − e^(−t)) − x(t)],   t > 0
    x(0) = 0

Solution 8.6

    A(t, x) = 10[(1 − e^(−t)) − x]

    a1 = A(tn, xn)
    a2 = A(t_{n+1/2}, xn + (Δt/2) a1)
    a3 = A(t_{n+1/2}, xn + (Δt/2) a2)
    a4 = A(t_{n+1}, xn + Δt a3)

    x_{n+1} = xn + (Δt/6)(a1 + 2a2 + 2a3 + a4)
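The corresponding RK-4 sketch; comparing against the same reference solution shows the much smaller error of the fourth-order scheme:

```python
import numpy as np

def A(t, x):                      # A(t, x) = 10[(1 - e^-t) - x]
    return 10.0 * ((1.0 - np.exp(-t)) - x)

def x_exact(t):                   # closed-form solution from Solution 8.4
    return 1.0 + (np.exp(-10.0 * t) - 10.0 * np.exp(-t)) / 9.0

dt, t, x = 0.1, 0.0, 0.0
for _ in range(3):                # advance to t = 0.3
    a1 = A(t, x)
    a2 = A(t + 0.5 * dt, x + 0.5 * dt * a1)
    a3 = A(t + 0.5 * dt, x + 0.5 * dt * a2)
    a4 = A(t + dt, x + dt * a3)
    x = x + dt * (a1 + 2.0 * a2 + 2.0 * a3 + a4) / 6.0
    t = t + dt

# RK-4 error at t = 0.3 is on the order of 1e-4, vs a few 1e-3 for RK-2
assert abs(x - x_exact(0.3)) < 1e-3
```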

Problem 8.7
Show that the local truncation error of Euler's method is O(Δt²).

Solution 8.7
The differential equation gives ẋ(tn) = A(tn, x(tn)). Euler's method approximates x(t_{n+1}) via x(t_{n+1}) ≈ x(tn) + Δt A(tn, x(tn)). The Taylor expansion for x(t_{n+1}) is x(t_{n+1}) = x(tn) + Δt ẋ(tn) + O(Δt²). The local truncation error is defined by τ(tn, Δt) = x(t_{n+1}) − [x(tn) + Δt A(tn, x(tn))]. Substituting ẋ(tn) for A(tn, x(tn)), the Taylor expansion yields

    τ(tn, Δt) = [x(tn) + Δt ẋ(tn) + O(Δt²)] − [x(tn) + Δt ẋ(tn)] = O(Δt²)

Problem 8.8
Duhamel's integral is often referred to as the "exact" or "closed-form" solution scheme for single-degree-of-freedom systems. Explain why this could be misleading.

Solution 8.8
In order to evaluate the convolution integral, Duhamel's method approximates the force, or prescribed base acceleration, as piecewise linear over each time interval of integration. For input forces that are piecewise linear, Duhamel's method is exact. However, forcing functions and base-acceleration excitations are generally derived from measured time histories or simulations and are, therefore, not piecewise linear. For these excitations, Duhamel's method is not exact.

Problem 8.9
Prove that Eq. (8.2-49),

    x(t) = x0 e^(λt) + ∫₀ᵗ e^(λ(t−s)) g(s) ds

is the solution to the initial value problem in Eq. (8.2-47),

    ẋ(t) − λx(t) = g(t),   t > 0
    x(0) = x0

Solution 8.9
We can derive the solution using the integrating factor e^(−λt). Another way is to substitute x(t) and ẋ(t) and show that they satisfy the initial value problem. This approach lets us review how to differentiate the integral. The initial condition is satisfied since

    x(0) = x0 e^(λ·0) + ∫₀⁰ e^(λ(0−s)) g(s) ds = x0 + 0 = x0

Differentiating x(t) yields


    ẋ(t) = d/dt [x0 e^(λt)] + d/dt [∫₀ᵗ e^(λ(t−s)) g(s) ds]
         = λx0 e^(λt) + [ e^(λ(t−s)) g(s)|_{s=t} + ∫₀ᵗ ∂/∂t (e^(λ(t−s))) g(s) ds ]
         = λx0 e^(λt) + g(t) + λ ∫₀ᵗ e^(λ(t−s)) g(s) ds

Substituting into the left side of the differential equation leads to

    ẋ(t) − λx(t) = [λx0 e^(λt) + g(t) + λ∫₀ᵗ e^(λ(t−s)) g(s) ds] − λ[x0 e^(λt) + ∫₀ᵗ e^(λ(t−s)) g(s) ds] = g(t)

Hence, x(t) is a particular solution of the differential equation that satisfies the initial condition.

Problem 8.10
Consider the initial value problem,

    ẋ(t) = √(x(t)),   t > 0
    x(0) = 0

Find two solutions. Why is there nonuniqueness of solutions?

Solution 8.10
Clearly, x1(t) ≡ 0 is a trivial solution. It can be shown that x2(t) = t²/4 is also a solution. The nonuniqueness occurs because √x is not Lipschitz continuous at x = 0 (its derivative is unbounded there). Therefore, before solving a differential equation numerically, it is important to check that the problem is well posed, that is: 1) a solution exists; 2) the solution is unique; and 3) the solution is not sensitive to small changes in the initial conditions.

Problem 8.11
Let Q be an M×M orthonormal matrix and let A be an M×N real-valued matrix. Show that ‖QA‖F = ‖A‖F.


Solution 8.11

    ‖QA‖F² = trace((QA)^T (QA)) = trace(A^T Q^T Q A) = trace(A^T · I · A) = trace(A^T A) = ‖A‖F²
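A quick numerical confirmation of this invariance (the matrix sizes and seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(4)
M, N = 6, 4
A = rng.standard_normal((M, N))
Q, _ = np.linalg.qr(rng.standard_normal((M, M)))   # random orthonormal Q

assert np.allclose(Q.T @ Q, np.eye(M))
assert np.isclose(np.linalg.norm(Q @ A, 'fro'), np.linalg.norm(A, 'fro'))
```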

Problem 8.12
An inner product, ⟨x, y⟩, on ℝ^N is a bilinear function that maps two vectors, x and y, to a scalar and satisfies the following properties:

a. ⟨x, y⟩ = ⟨y, x⟩   (symmetry)
b. ⟨x, αy + βz⟩ = α⟨x, y⟩ + β⟨x, z⟩   (linearity)
c. ⟨x, x⟩ ≥ 0, and ⟨x, x⟩ = 0 if and only if x = 0   (positivity)

For example, the standard Euclidean inner product is ⟨x, y⟩2 = x^T y. If wn, n = 1, …, N, are positive scalars, we can define the weighted inner product ⟨x, y⟩_W = Σ_{n=1}^{N} wn xn yn. It can be shown that any inner product satisfies the Cauchy–Schwarz inequality,

    ⟨x, y⟩² ≤ ⟨x, x⟩⟨y, y⟩

where equality holds if and only if y = αx for some scalar, α. Given an inner product, ⟨x, y⟩, on ℝ^N, show that ‖x‖ = √⟨x, x⟩ satisfies the four properties in Theorem 8.3-1 and, therefore, is a norm.

Solution 8.12
Properties 1 and 2 are satisfied by property c of inner products. Property 3 is satisfied by linearity (property b), since

    ‖αx‖ = √⟨αx, αx⟩ = √(α²⟨x, x⟩) = |α| √⟨x, x⟩ = |α|·‖x‖

Property 4 relies on the Cauchy–Schwarz inequality. Expanding the inner product using linearity and symmetry gives

    ‖x + y‖² = ⟨x + y, x + y⟩ = ⟨x, x⟩ + 2⟨x, y⟩ + ⟨y, y⟩ = ‖x‖² + 2⟨x, y⟩ + ‖y‖²
             ≤ ‖x‖² + 2√(⟨x, x⟩⟨y, y⟩) + ‖y‖² = ‖x‖² + 2‖x‖‖y‖ + ‖y‖² = (‖x‖ + ‖y‖)²

Taking square roots gives Property 4.


Problem 8.13
Real normal modes are typically normalized using the positive-definite mass matrix, M. Define the inner product with respect to M via

    ⟨x, y⟩_M = x^T M y

1) Prove that ⟨x, y⟩_M defines an inner product.
2) Let φ1, φ2, …, φM denote the real normal modes that are mass normalized, i.e.,

    ⟨φm, φn⟩_M = 1 if m = n, and 0 if m ≠ n

Express a displacement response vector, x(t), in terms of modal coordinates, qm(t), i.e.,

    x(t) = Σ_{m=1}^{M} qm(t) φm

Show that qk(t) = ⟨φk, x(t)⟩_M.

Solution 8.13
Symmetry for inner products is satisfied since M is symmetric and

    ⟨x, y⟩_M = x^T M y = y^T M x = ⟨y, x⟩_M

Linearity follows from the linearity of matrix–vector products,

    ⟨x, αy + βz⟩_M = x^T M(αy + βz) = α x^T M y + β x^T M z = α⟨x, y⟩_M + β⟨x, z⟩_M

Positivity follows from the positive-definiteness of M,

    ⟨x, x⟩_M = x^T M x > 0 if x ≠ 0, and ⟨x, x⟩_M = 0 if and only if x = 0

Since x(t) = Σ_{m=1}^{M} qm(t) φm,

    ⟨φk, x(t)⟩_M = φk^T M x(t) = φk^T M Σ_{m=1}^{M} qm(t) φm = Σ_{m=1}^{M} qm(t) φk^T M φm = Σ_{m=1}^{M} qm(t) δ_{k,m} = qk(t)
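The projection qk(t) = ⟨φk, x(t)⟩_M can be demonstrated with the mass and stiffness matrices of Problem 8.1. A sketch (scipy's generalized symmetric eigensolver happens to return mass-normalized modes, which fits this setting; the response vector is an arbitrary choice):

```python
import numpy as np
from scipy.linalg import eigh

M = np.diag([2.0, 2.0, 1.0])
K = np.array([[ 2.0, -2.0,  0.0],
              [-2.0,  4.0, -2.0],
              [ 0.0, -2.0,  2.0]])

# Generalized symmetric eigenproblem K phi = lambda M phi;
# eigh normalizes the eigenvectors so that Phi^T M Phi = I
lam, Phi = eigh(K, M)
assert np.allclose(Phi.T @ M @ Phi, np.eye(3))

# A response vector built from known modal coordinates is recovered exactly
# by the M-weighted inner products <phi_k, x>_M = phi_k^T M x
q = np.array([0.3, -1.2, 0.7])          # arbitrary modal coordinates (assumption)
x = Phi @ q
assert np.allclose(Phi.T @ M @ x, q)
```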


Solution 8.14
j = 1:
  i = 2:  a2,1 ← a2,1/a1,1 = 4/2 = 2
          a2,2:3 ← a2,2:3 − 2 a1,2:3 = [4  1] − 2[3  1] = [−2  −1]
  i = 3:  a3,1 ← a3,1/a1,1 = −2/2 = −1
          a3,2:3 ← a3,2:3 − (−1) a1,2:3 = [3  −3] + [3  1] = [6  −2]

           [  2   3   1 ]
    A(1) = [  2  −2  −1 ]
           [ −1   6  −2 ]

j = 2:
  i = 3:  a3,2 ← a3,2/a2,2 = 6/(−2) = −3
          a3,3 ← a3,3 − (−3) a2,3 = [−2] + 3[−1] = [−5]

           [  2   3   1 ]
    A(2) = [  2  −2  −1 ]
           [ −1  −3  −5 ]

Therefore,

        [  1   0  0 ]        [ 2   3   1 ]
    L = [  2   1  0 ]    U = [ 0  −2  −1 ]
        [ −1  −3  1 ]        [ 0   0  −5 ]

Problem 8.15
Let K denote the N×N stiffness matrix of a free-free system. What is the null space, N_K, of K?



Solution 8.15
N_K = span{φ_{R,n}}, where φ_{R,n} equals the nth rigid-body mode.

Problem 8.16
Show that the inverse of a nonsingular lower (or upper) triangular matrix is also a lower (or upper) triangular matrix.

Solution 8.16
Let L = [l_ij] be a nonsingular lower-triangular matrix. Since L is nonsingular, its diagonal elements are nonzero; hence, we can assume without loss of generality that the diagonal elements are equal to one. Denote the kth column of L⁻¹ by x_k = {x₁ₖ x₂ₖ … x_Nₖ}^T. Then L x_k = e_k, where e_k is the kth column of the identity matrix. Solving for x_ik by forward substitution yields x_ik = 0 for i < k. Hence, L⁻¹ = [x_ik] has lower-triangular form. If U is a nonsingular upper-triangular matrix, then U^T is lower triangular. Therefore, (U^T)⁻¹ = (U⁻¹)^T is lower triangular. Transposing shows that U⁻¹ is upper triangular.

Problem 8.17
Show that a "small" perturbation of the identity matrix is still nonsingular. Let B be an N × N matrix such that ‖B‖ < 1, and consider the infinite sum,
C = I − B + B² − B³ + … = Σ_{n=0}^{∞} (−1)ⁿ Bⁿ
Show that (I + B)C = I, and that I + B is nonsingular with inverse equal to Σ_{n=0}^{∞} (−1)ⁿ Bⁿ.

Solution 8.17
First, we need to show that the infinite sum converges. Note that since ‖B‖ < 1, ‖B^N‖ ≤ ‖B‖^N → 0, so B^N → 0. Denote the Nth partial sum as S_N = Σ_{n=0}^{N} (−1)ⁿ Bⁿ. It suffices to show that S_N converges in a norm-absolute sense. By the triangle inequality and consistency property of norms,
‖S_N‖ = ‖Σ_{n=0}^{N} (−1)ⁿ Bⁿ‖ ≤ Σ_{n=0}^{N} ‖Bⁿ‖ ≤ Σ_{n=0}^{N} ‖B‖ⁿ = (1 − ‖B‖^{N+1})/(1 − ‖B‖) → 1/(1 − ‖B‖) as N → ∞
where the limits follow from the infinite geometric series; hence, S_N converges. The solution is obtained by considering the product
(I + B)S_N = (I + B)(I − B + B² − B³ + … + (−1)^N B^N)
= (I − B + B² − … + (−1)^N B^N) + (B − B² + B³ − … + (−1)^N B^{N+1})
= I + (−1)^N B^{N+1} → I as N → ∞

Problem 8.18
Let A be a nonsingular matrix. Consider the perturbation, A + E, where ‖E‖ < 1/‖A⁻¹‖. Use the results of Problem 8.17 to show that A + E is also nonsingular.

Solution 8.18
A + E = A(I + A⁻¹E) = A(I + B), where B = A⁻¹E. Since ‖E‖ < ‖A⁻¹‖⁻¹, we obtain
‖B‖ = ‖A⁻¹E‖ ≤ ‖A⁻¹‖ · ‖E‖ < 1
Hence, by the results in Problem 8.17, I + B is nonsingular and, therefore, A(I + B) is nonsingular.

Problem 8.19
Compute the Cholesky factor, L, for A = [4 −2 2; −2 2 −2; 2 −2 6].



Solution 8.19
j = 1, k = 1:
a₁₁ = 4
a₁:₃,₁ ← a₁:₃,₁/√a₁₁ = [4 −2 2]^T/√4 = [2 −1 1]^T
A⁽¹⁾ = [2 −2 2; −1 2 −2; 1 −2 6]
j = 2, k = 2:
a₂₂ ← a₂₂ − a₂,₁:₁ · a₂,₁:₁ = 2 − [−1][−1] = 1
k = 3:
a₃₂ ← a₃₂ − a₃,₁:₁ · a₂,₁:₁ = −2 − [1][−1] = −1
a₂:₃,₂ ← a₂:₃,₂/√a₂₂ = [1 −1]^T/√1 = [1 −1]^T
A⁽²⁾ = [2 −2 2; −1 1 −2; 1 −1 6]
j = 3, k = 3:
a₃₃ ← a₃₃ − a₃,₁:₂ · a₃,₁:₂ = 6 − [1 −1][1 −1]^T = 4
a₃:₃,₃ ← a₃:₃,₃/√a₃₃ = 4/√4 = 2
A⁽³⁾ = [2 −2 2; −1 1 −2; 1 −1 2]  ⇒  L = [2 0 0; −1 1 0; 1 −1 2]
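The column-by-column recurrence above can be sketched directly in code; the function name is mine, and the loop follows the same order of operations as the hand computation:

```python
import numpy as np

# The matrix from Problem 8.19
A = np.array([[ 4.0, -2.0,  2.0],
              [-2.0,  2.0, -2.0],
              [ 2.0, -2.0,  6.0]])

def cholesky_lower(A):
    """Column-by-column Cholesky factorization; returns lower-triangular L with A = L L^T."""
    n = A.shape[0]
    L = np.zeros_like(A)
    for j in range(n):
        # diagonal entry: subtract the squares of the already-computed row
        d = A[j, j] - L[j, :j] @ L[j, :j]
        L[j, j] = np.sqrt(d)
        for i in range(j + 1, n):
            L[i, j] = (A[i, j] - L[i, :j] @ L[j, :j]) / L[j, j]
    return L

L = cholesky_lower(A)
# Matches the hand computation: L = [2 0 0; -1 1 0; 1 -1 2]
```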

Problem 8.20
Show that if A = [a_ij] is a symmetric positive-definite matrix, then
|a_ij| < (a_ii + a_jj)/2

Solution 8.20
From Property 4 of Theorem 8.3-8:
|a_ij| ≤ max_{j≠i} |a_ij| < a_ii
Switching i and j and by symmetry:
|a_ij| = |a_ji| ≤ max_{i≠j} |a_ji| < a_jj
Adding the two above inequalities and dividing by two yields the desired result,
|a_ij| < (a_ii + a_jj)/2

Problem 8.21
Let A = [1 a; a 1]. Show that A is positive definite if and only if |a| < 1.

Solution 8.21
If A is positive definite, then by the previous problem, |a| < (1 + 1)/2 = 1. Now, suppose |a| < 1. The eigenvalues of A are 1 + a and 1 − a, so the minimum eigenvalue is λ_min = 1 − |a|. If |a| < 1, then λ_min > 0 and, therefore, A is positive definite.

Problem 8.22
Use the result from Problem 8.21 to show that if M = [m_ij] is positive definite, then |m_ij| < √(m_ii m_jj).

Solution 8.22
If M is positive definite, then m_ii > 0 and m_jj > 0. Consider the N × 2 matrix
T = [e_i/√m_ii  e_j/√m_jj]
and define M̂ = T^T M T = [1 a; a 1], where a = m_ij/√(m_ii m_jj). Then from Theorem 8.3-8, M̂ is positive definite, and from Problem 8.21, |m_ij|/√(m_ii m_jj) < 1, which proves the result. Note that since the geometric mean is less than or equal to the arithmetic mean, i.e., √(m_ii m_jj) ≤ (m_ii + m_jj)/2, the result in this problem is a sharper inequality than in Problem 8.20.



Problem 8.23
Let A = M − N be a splitting of A. Let P be a permutation matrix and let B = PAP^T. Show that the iteration matrices, G_A and G_B, have the same spectral radius.

Solution 8.23
We know that G_A = M⁻¹N, and B = PAP^T = P(M − N)P^T = PMP^T − PNP^T is the corresponding splitting of B. Hence, since P⁻¹ = P^T, we get
G_B = (PMP^T)⁻¹(PNP^T) = P M⁻¹ P^T P N P^T = P M⁻¹ N P^T = P G_A P^T
Therefore, G_B is similar to G_A and, hence, they have the same eigenvalues and spectral radius.

Problem 8.24
Consider the three-mass system below, which is the left subsystem shown in Fig. 8.3-6, and is isolated from the rest of the system. Note that the springs represent load paths and the system is a one-dimensional system. The stiffness matrix for the subsystem is
K = [1.5 −1.5 0; −1.5 1011.5 −10; 0 −10 1110]
1) Compute the Jacobi, Gauss–Seidel, and SOR (ω₁ = 0.5, ω₂ = 1.5) iteration matrices.
2) Calculate the spectral radius of the iteration matrices.
3) Show that K satisfies property A (see iterative methods) and, hence, is consistently ordered.
4) What is the optimal relaxation factor, ω_opt, and the corresponding spectral radius?



Solution 8.24
1) Write K = D + L + U, where D is the diagonal of K, and L and U are the strictly lower and strictly upper triangular parts. The iteration matrices are
G_J = −D⁻¹(L + U) = [0 1 0; 0.001483 0 0.009886; 0 0.009009 0]
G_G-S = −(D + L)⁻¹U = [0 1 0; 0 0.001483 0.009886; 0 1.336E−5 8.907E−5]
Recall that G_SOR(ω) = (D + ωL)⁻¹((1 − ω)D − ωU); therefore,
G_SOR(ω₁) = [0.5 0.5 0; 0.000371 0.5 0.004943; 1.67E−6 0.002254 0.500022]
and
G_SOR(ω₂) = [−0.5 1.5 0; −0.001112 −0.496663 0.014829; −1.503E−5 −0.006712 −0.499800]
2) Calculating ρ(G) = max|λ_G| with LAPACK gives
ρ(G_J) = 0.03965, ρ(G_G-S) = 0.00157, ρ(G_SOR(ω₁)) = 0.51422, and ρ(G_SOR(ω₂)) = 0.5
3) Let P be the 3 × 3 permutation matrix that swaps the second and third rows. Then
PKP^T = [1.5 0 −1.5; 0 1110 −10; −1.5 −10 1011.5]
has property A. By Theorem 8.3-16,
ω_opt = 2/(1 + √(1 − ρ(G_G-S))) = 2/(1 + √(1 − 0.00157)) ≈ 1.000393
and
ρ(G_SOR,ω_opt) = ω_opt − 1 ≈ 0.000393
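The iteration matrices and their spectral radii can be reproduced with a few lines of NumPy (the helper names are mine); the values match those reported in the solution:

```python
import numpy as np

K = np.array([[ 1.5,   -1.5,    0.0],
              [-1.5, 1011.5,  -10.0],
              [ 0.0,  -10.0, 1110.0]])

D = np.diag(np.diag(K))
L = np.tril(K, -1)
U = np.triu(K, 1)

def spectral_radius(G):
    return max(abs(np.linalg.eigvals(G)))

G_J  = -np.linalg.solve(D, L + U)      # Jacobi
G_GS = -np.linalg.solve(D + L, U)      # Gauss-Seidel

def G_SOR(w):
    # G_SOR(w) = (D + wL)^-1 ((1 - w)D - wU)
    return np.linalg.solve(D + w * L, (1 - w) * D - w * U)

rho_J, rho_GS = spectral_radius(G_J), spectral_radius(G_GS)
w_opt = 2.0 / (1.0 + np.sqrt(1.0 - rho_GS))   # optimal SOR factor for a consistently ordered matrix
```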

Problem 8.25
Compute the QR factorization of A defined in (8.4-46) using the classical Gram–Schmidt (CGS) method and compare with the QR factorization via Householder transformations in (8.4-54) and (8.4-56).

Solution 8.25
A = [2 1 1 0; −1 −3 1 −2; 0 2 0 1; 1 −1 1 0; 1 0 −1 0]
Applying the CGS (after each step k, the normalized column a_k overwrites the kth column of A and r₁:ₖ,ₖ is stored in R):
k = 1:
q₁ = {2 −1 0 1 1}^T
r₁₁ = ‖q₁‖₂ = √(4 + 1 + 0 + 1 + 1) = 2.64575
a₁ = q₁/2.64575 = {0.75593 −0.37796 0 0.37796 0.37796}^T
k = 2:
q₂ = {1 −3 2 −1 0}^T
r₁,₂ = a₁^T q₂ = 1.51186
q₂ ← q₂ − a₁ r₁,₂ = {−0.14286 −2.42857 2.0 −1.57143 −0.57143}^T
r₂₂ = ‖q₂‖₂ = 3.56571
a₂ = q₂/3.56571 = {−0.04006 −0.68109 0.56090 −0.44071 −0.16026}^T
k = 3:
q₃ = {1 1 0 1 −1}^T
r₁:₂,₃ = [a₁ a₂]^T q₃ = {0.37796 −1.00160}^T
q₃ ← q₃ − [a₁ a₂] r₁:₂,₃ = {0.67416 0.46067 0.56180 0.41573 −1.30337}^T
r₃₃ = ‖q₃‖₂ = 1.68936
a₃ = q₃/1.68936 = {0.39906 0.27269 0.33255 0.24609 −0.77152}^T



k = 4:
q₄ = {0 −2 1 0 0}^T
r₁:₃,₄ = [a₁ a₂ a₃]^T q₄ = {0.75593 1.92308 −0.21283}^T
q₄ ← q₄ − [a₁ a₂ a₃] r₁:₃,₄ = {−0.40945 −0.34646 −0.00787 0.61417 −0.14173}^T
r₄₄ = ‖q₄‖₂ = 0.82767
a₄ = q₄/0.82767 = {−0.49470 −0.41859 −0.00951 0.74205 −0.17124}^T
Hence,
Q = [0.75593 −0.04006 0.39906 −0.49470; −0.37796 −0.68109 0.27269 −0.41859; 0 0.56090 0.33255 −0.00951; 0.37796 −0.44071 0.24609 0.74205; 0.37796 −0.16026 −0.77152 −0.17124]
R = [2.64575 1.51186 0.37796 0.75593; 0 3.56571 −1.00160 1.92308; 0 0 1.68936 −0.21283; 0 0 0 0.82767]
Comparing the QR factors from the CGS with the QR factors from Householder's method (transformations in (8.4-54) and (8.4-56)), we note that the first four columns of Q and the first four rows of R for the two methods are equal except for sign differences.

Problem 8.26
Let x = {2 −1 3 1 1}^T. Calculate the Householder transformation, H, that deletes the elements below the first row, and Hx.



Solution 8.26
‖x‖₂ = √(2² + 1² + 3² + 1² + 1²) = 4
u = x + sgn(2)‖x‖₂ e₁ = x + 4e₁ = {6 −1 3 1 1}^T
‖u‖₂ = √(6² + 1² + 3² + 1² + 1²) = √48 = 4√3
û = u/‖u‖₂ = (√3/12){6 −1 3 1 1}^T = {0.86603 −0.14434 0.43301 0.14434 0.14434}^T
H = I − 2ûû^T = I − (1/24)uu^T
= [−0.5 0.25 −0.75 −0.25 −0.25; 0.25 0.95833 0.125 0.04167 0.04167; −0.75 0.125 0.625 −0.125 −0.125; −0.25 0.04167 −0.125 0.95833 −0.04167; −0.25 0.04167 −0.125 −0.04167 0.95833]
We can calculate Hx directly from (8.4-45), i.e.,
Hx = −sgn(x₁)‖x‖₂ e₁ = {−4 0 0 0 0}^T
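A short sketch of the same construction (the function name is mine; the sign convention is the one of Eq. (8.4-45), which maps x to −sgn(x₁)‖x‖₂ e₁):

```python
import numpy as np

def householder(x):
    """Householder reflector H with H x = -sgn(x[0]) * ||x|| * e1."""
    x = np.asarray(x, dtype=float)
    u = x.copy()
    u[0] += np.sign(x[0]) * np.linalg.norm(x)   # u = x + sgn(x1) ||x|| e1
    return np.eye(len(x)) - 2.0 * np.outer(u, u) / (u @ u)

x = np.array([2.0, -1.0, 3.0, 1.0, 1.0])
H = householder(x)
# H x = {-4, 0, 0, 0, 0}^T, as computed above
```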


Problem 8.27
Define x as in Problem 8.26. Calculate the Givens transformation, G(θ), that rotates the first and third coordinates to delete the element in the third row. Also, calculate G(θ)x.

Solution 8.27
By (8.4-59),
cos θ = x₁/√(x₁² + x₃²) = 2/√13 = 0.55470 and sin θ = x₃/√(x₁² + x₃²) = 3/√13 = 0.83205
Hence,
G₁,₃(θ) = [0.55470 0 0.83205 0 0; 0 1 0 0 0; −0.83205 0 0.55470 0 0; 0 0 0 1 0; 0 0 0 0 1]
G₁,₃(θ)x = {√13 −1 0 1 1}^T
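A sketch of the same rotation in code (the function name and indexing are mine; indices are zero-based, so coordinates 1 and 3 of the book become 0 and 2):

```python
import numpy as np

def givens(x, i, j):
    """Plane rotation G acting on coordinates (i, j) that zeroes x[j], per the cos/sin of Eq. (8.4-59)."""
    r = np.hypot(x[i], x[j])
    c, s = x[i] / r, x[j] / r
    G = np.eye(len(x))
    G[i, i] = G[j, j] = c
    G[i, j] = s
    G[j, i] = -s
    return G

x = np.array([2.0, -1.0, 3.0, 1.0, 1.0])
G = givens(x, 0, 2)        # rotate the first and third coordinates
# G x = {sqrt(13), -1, 0, 1, 1}^T
```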

Problem 8.28
Let x̂_n, n = 1, …, N, represent a discrete signal that was measured during a test at 200 samples per second; hence, Δt = t_n − t_{n−1} = 0.005 sec. Suppose the measured signal was corrupted by a dominant sinusoidal 60 Hz electrical noise. Formulate a least-squares problem that attempts to fit the 60 Hz noise component so that it can be removed from x̂_n to obtain the underlying signal, x_n. Also, compute the normal equation (see Section 8.4). For simplicity, let us assume that x_n does not have 60 Hz spectral content.

Solution 8.28
Let ω = 2π(60). Consider modeling the measured signal, x̂_n, as follows:




x̂_n = A cos(ωt_n − φ) + x_n = A cos φ cos ωt_n + A sin φ sin ωt_n + x_n = a cos ωt_n + b sin ωt_n + x_n
where a = A cos φ and b = A sin φ. Since the noise, a cos ωt_n + b sin ωt_n, dominates, we can consider the following least-squares fit:
[cos ωt₁ sin ωt₁; cos ωt₂ sin ωt₂; … ; cos ωt_N sin ωt_N] {a; b} ≈ {x̂₁; x̂₂; … ; x̂_N}
Premultiplying both sides by the transpose of the coefficient matrix, we obtain the sought-after normal equation,
[Σ cos² ωt_n  Σ cos ωt_n sin ωt_n; Σ cos ωt_n sin ωt_n  Σ sin² ωt_n] {a; b} = {Σ x̂_n cos ωt_n; Σ x̂_n sin ωt_n}
where all sums run over n = 1, …, N.
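The fit can be sketched on synthetic data; the underlying 3 Hz signal and the amplitude/phase of the 60 Hz noise below are made-up values for the demonstration, not from the book:

```python
import numpy as np

# Synthetic test: a slow signal plus dominant 60 Hz noise, sampled at 200 Hz for 2 s.
dt = 0.005
t = np.arange(400) * dt
x_true = 0.3 * np.sin(2 * np.pi * 3.0 * t)      # underlying signal (no 60 Hz content)
a_true, b_true = 1.2, -0.8                      # assumed noise coefficients
w = 2 * np.pi * 60.0
x_meas = x_true + a_true * np.cos(w * t) + b_true * np.sin(w * t)

# Least-squares fit of the 60 Hz component via the normal equation (C^T C){a,b} = C^T x
C = np.column_stack([np.cos(w * t), np.sin(w * t)])
coef = np.linalg.solve(C.T @ C, C.T @ x_meas)
x_clean = x_meas - C @ coef
```

Because 3 Hz and 60 Hz both complete an integer number of cycles over the 2 s record, the basis columns are exactly orthogonal to the underlying signal here, and the fit recovers a and b essentially to machine precision.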

Problem 8.29
Let ω_m and φ_m, m = 1, …, M, represent the natural frequencies and real normal modes, respectively, of the undamped system with N × N mass matrix, M, and stiffness matrix, K. Assume that the mode shapes are mass normalized. Define the generalized momentum by ψ_m = Mφ_m. Show that the transformation, P_m = φ_m ψ_m^T, is a projector that projects a vector onto the space spanned by the mth mode.


Solution 8.29
Since the mode shapes are mass normalized,
δ_{k,m} = φ_k^T M φ_m = (Mφ_k)^T φ_m = ψ_k^T φ_m
Hence, the generalized momenta and mode shapes form a biorthonormal set. Therefore, P_m = φ_m ψ_m^T is a projector. In fact, calculating P_m² yields
P_m² = (φ_m ψ_m^T)(φ_m ψ_m^T) = φ_m (ψ_m^T φ_m) ψ_m^T = φ_m · 1 · ψ_m^T = φ_m ψ_m^T = P_m
Also, given any N-dimensional vector, x,
P_m x = φ_m ψ_m^T x = (ψ_m^T x) φ_m = projection onto span(φ_m)

Problem 8.30
Consider the following N × N Jordan matrix,
J = [a 1 0 … 0; 0 a 1 … 0; ⋮ ⋮ ⋱ ⋱ ⋮; 0 0 0 … 1; 0 0 0 … a]
Show that J has an eigenvalue equal to a with multiplicity equal to N. Also show that J is defective and has only one linearly independent eigenvector.

Solution 8.30
p(λ) = det(J − λI) = (a − λ)^N
Hence, a is an eigenvalue with multiplicity N. Let x = {x₁ x₂ … x_N}^T be an eigenvector. Then [J − aI]x = 0, i.e.,



[0 1 0 … 0; 0 0 1 … 0; ⋮ ⋮ ⋮ ⋱ ⋮; 0 0 0 … 1; 0 0 0 … 0] {x₁; x₂; ⋮; x_N} = {x₂; x₃; ⋮; x_N; 0} = {0; 0; ⋮; 0}
⇒ x₂ = 0, x₃ = 0, …, x_N = 0
Hence, any eigenvector has to have the form, x = {x₁ 0 … 0}^T, which indicates that there is only one linearly independent eigenvector.

Problem 8.31
A real-valued square matrix, S, is skew-symmetric if S^T = −S. Note that S^H = S^T, since S is real-valued.
1) Show that any square matrix, A, can be written as a sum of symmetric and skew-symmetric matrices.
2) Show that skew-symmetric matrices have pure imaginary (or zero) eigenvalues.
3) If (λ_n, x_n) and (λ_k, x_k) are eigenpairs with λ_n ≠ λ_k, then x_n and x_k are orthogonal, i.e., x_n^H x_k = 0.

Solution 8.31
1) Let A_sym = (1/2)(A + A^T) and A_skew = (1/2)(A − A^T). As can be ascertained, A_sym and A_skew are symmetric and skew-symmetric, respectively, and A = A_sym + A_skew.
2) Let S be skew-symmetric. Since the eigenpairs can be complex-valued, we need to consider the complex inner product, x^H y. Let (λ_n, x_n) be an eigenpair. Then
λ̄_n x_n^H x_n = (λ_n x_n)^H x_n = (Sx_n)^H x_n = x_n^H S^H x_n = −x_n^H S x_n = −λ_n x_n^H x_n
Therefore, λ̄_n = −λ_n, which is true only if λ_n is pure imaginary or zero.
3) λ_k x_n^H x_k = x_n^H (Sx_k) = (S^H x_n)^H x_k = −(Sx_n)^H x_k = −(λ_n x_n)^H x_k = −λ̄_n x_n^H x_k = λ_n x_n^H x_k



where the last equality results from λ_n being imaginary. Therefore, λ_k x_n^H x_k = λ_n x_n^H x_k and since λ_k ≠ λ_n, we conclude that x_n^H x_k = 0.

Problem 8.32 (This problem requires access to a numerical software tool)
Perform one step of the QR iteration starting with A⁽²⁾ in (8.5-10) to derive A⁽³⁾ in (8.5-11).

Solution 8.32
Applying Householder transformations to factor A⁽²⁾ = Q⁽²⁾R⁽²⁾, we obtain
Q⁽²⁾ = [0.34291 0.37522 0.06018 0.43964 0.73805; 0.92681 0.21095 0.11932 0.16365 0.23561; 0.15311 0.43659 0.85702 0.00597 0.22678; 0.00000 0.64623 0.30832 0.69545 0.06059; 0.00000 0.45442 0.39063 0.54429 0.58708]
R⁽²⁾ = [724.750 2833.160 259.800 502.154 378.049; 0 799.014 208.620 1067.956 476.277; 0 0 937.225 629.977 535.344; 0 0 0 606.033 88.361; 0 0 0 0 162.772]
and
A⁽³⁾ = R⁽²⁾Q⁽²⁾ = [2914.094 708.590 143.499 0 0; 708.590 1166.203 241.196 351.482 73.966; 143.499 241.196 1206.581 152.338 63.584; 0 351.482 152.338 469.561 88.594; 0 73.966 63.584 88.594 95.561]
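The general mechanics of one QR-iteration step (factor, then multiply in reverse order) can be sketched as follows; the test matrix below is an arbitrary symmetric stand-in, not the A⁽²⁾ of Eq. (8.5-10):

```python
import numpy as np

def qr_step(A):
    """One unshifted QR iteration: factor A = Q R, return R Q (orthogonally similar to A)."""
    Q, R = np.linalg.qr(A)
    return R @ Q

# Arbitrary symmetric test matrix
rng = np.random.default_rng(1)
B = rng.standard_normal((5, 5))
A = B + B.T

A_next = qr_step(A)
# R Q = Q^T A Q, so the eigenvalues (and symmetry) are preserved at every step
```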



Problem 8.33 (This problem requires access to a numerical software tool)
Consider the GEVP corresponding to the three-degree-of-freedom system in Problem 8.24.
1) Reduce the problem to the standard eigenvalue problem and calculate all the eigenpairs.
2) Perform five iterations of the power iteration method with initial vector, x⁽⁰⁾ = {1 0.5 0}^T.
3) Perform three iterations of the Rayleigh quotient iteration starting with initial vector x⁽⁰⁾ and initial shift, μ = 1000.

Solution 8.33
1) From Problem 8.24, we have
M = [0.001 0 0; 0 1 0; 0 0 1] and K = [1.5 −1.5 0; −1.5 1011.5 −10; 0 −10 1110]
The Cholesky factor of M is L = diag(√0.001, 1, 1);
A = L⁻¹KL⁻¹ = [1500 −47.4342 0; −47.4342 1011.5 −10; 0 −10 1110]
Calculating the eigenvalues in descending order, we obtain
λ₁ = 1504.566, λ₂ = 1110.950, λ₃ = 1005.984
The corresponding normalized eigenvectors are
V = [0.99540 −0.01153 0.09514; −0.09581 −0.09459 0.99089; 0.00243 0.99545 0.09526]
2) The power iteration method produces for the first five iterations:
x⁽¹⁾ = {0.95503 0.29649 −0.00324}^T, λ⁽¹⁾ = 1430.210
x⁽²⁾ = {0.98426 0.17669 −0.00455}^T, λ⁽²⁾ = 1468.260
x⁽³⁾ = {0.99597 0.08961 −0.00462}^T, λ⁽³⁾ = 1487.611
x⁽⁴⁾ = {0.99957 0.02915 −0.00405}^T, λ⁽⁴⁾ = 1496.817
x⁽⁵⁾ = {0.99992 −0.01194 −0.00319}^T, λ⁽⁵⁾ = 1501.059
Observe that the iterations are converging to the largest eigenvalue and corresponding eigenvector.
3) The Rayleigh quotient iteration yields for the first three iterations:
x⁽¹⁾ = {0.36122 0.93142 0.04435}^T, λ⁽¹⁾ = 1042.689
x⁽²⁾ = {0.07254 0.98989 0.12192}^T, λ⁽²⁾ = 1006.309
x⁽³⁾ = {0.09516 0.99090 0.09518}^T, λ⁽³⁾ = 1005.984
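Both iterations for this system can be sketched compactly (the function names are mine; A carries the negative off-diagonal couplings of K through the congruence, and λ estimates are Rayleigh quotients of the normalized iterates):

```python
import numpy as np

A = np.array([[1500.0,   -47.4342,    0.0],
              [ -47.4342, 1011.5,   -10.0],
              [   0.0,     -10.0,  1110.0]])

def power_iteration(A, x0, steps):
    x = x0 / np.linalg.norm(x0)
    for _ in range(steps):
        y = A @ x
        x = y / np.linalg.norm(y)
    return x, float(x @ A @ x)          # Rayleigh quotient estimate of the eigenvalue

def rayleigh_quotient_iteration(A, x0, mu, steps):
    x = x0 / np.linalg.norm(x0)
    for _ in range(steps):
        x = np.linalg.solve(A - mu * np.eye(len(x0)), x)   # shifted inverse iteration
        x /= np.linalg.norm(x)
        mu = float(x @ A @ x)                              # update the shift
    return x, mu

x0 = np.array([1.0, 0.5, 0.0])
x_p, lam_p = power_iteration(A, x0, 5)                  # converges toward lambda_1
x_r, lam_r = rayleigh_quotient_iteration(A, x0, 1000.0, 3)   # converges to lambda_3
```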

Observe that the Rayleigh quotient iterates converge to the 3rd eigenpair.

Problem 8.34 (This problem requires access to a numerical software tool)
Tridiagonalize the following 5 × 5 symmetric matrix using Householder transformations:



A = A⁽⁰⁾ = [1 3 2 0 2; 3 3 2 1 1; 2 2 2 3 2; 0 1 3 2 1; 2 1 2 1 4]

Solution 8.34
Applying Householder transformations to tridiagonalize A column-wise, A_k = H_k A_{k−1} H_k, k = 1, 2, 3:
k = 1: H₁ acts on coordinates 2, 3, and 5, and is the identity on coordinates 1 and 4 (the (4,1) entry of A is already zero, and the reflector leaves it untouched):
H₁ = [1 0 0 0 0; 0 −0.72761 −0.48507 0 −0.48507; 0 −0.48507 0.86380 0 −0.13620; 0 0 0 1 0; 0 −0.48507 −0.13620 0 0.86380]
A₁ = [1 4.1231 0 0 0; 4.1231 1.3529 2.3118 2.6679 0.8412; 0 2.3118 4.5511 1.9701 1.1765; 0 2.6679 1.9701 2 0.0299; 0 0.8412 1.1765 0.0299 3.0959]
k = 2: H₂ acts on coordinates 3, 4, and 5 to zero the entries below the subdiagonal of column 2:
H₂ = [1 0 0 0 0; 0 1 0 0 0; 0 0 −0.63703 −0.73516 −0.23180; 0 0 −0.73516 0.66985 −0.10410; 0 0 −0.23180 −0.10410 0.96718]
A₂ = [1 4.1231 0 0 0; 4.1231 1.3529 3.6290 0 0; 0 3.6290 1.5861 1.2779 0.9765; 0 0 1.2779 5.5153 0.3483; 0 0 0.9765 0.3483 2.5457]
k = 3: H₃ acts on coordinates 4 and 5:
H₃ = [1 0 0 0 0; 0 1 0 0 0; 0 0 1 0 0; 0 0 0 −0.79459 −0.60715; 0 0 0 −0.60715 0.79459]
A₃ = [1 4.1231 0 0 0; 4.1231 1.3529 3.6290 0 0; 0 3.6290 1.5861 1.6083 0; 0 0 1.6083 4.7566 1.3411; 0 0 0 1.3411 3.3043]
A₃ is tridiagonal, completing the reduction.
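The column-wise reduction A_k = H_k A_{k−1} H_k can be sketched generically; the implementation and test matrix below are mine (an arbitrary symmetric matrix, not the book's A), intended only to illustrate the algorithm:

```python
import numpy as np

def tridiagonalize(A):
    """Householder tridiagonalization; returns (T, Q) with T = Q^T A Q tridiagonal."""
    T = np.asarray(A, dtype=float).copy()
    n = T.shape[0]
    Q = np.eye(n)
    for k in range(n - 2):
        x = T[k + 1:, k]                    # entries below the subdiagonal target
        u = x.copy()
        u[0] += np.sign(x[0] if x[0] != 0 else 1.0) * np.linalg.norm(x)
        if np.linalg.norm(u) == 0.0:        # column already reduced
            continue
        H = np.eye(n)
        H[k + 1:, k + 1:] -= 2.0 * np.outer(u, u) / (u @ u)
        T = H @ T @ H                       # similarity transform preserves eigenvalues
        Q = Q @ H
    return T, Q

rng = np.random.default_rng(2)
B = rng.standard_normal((5, 5))
A = B + B.T
T, Q = tridiagonalize(A)
```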

Problem 8.35 (This problem requires access to a numerical software tool)
Let A_N be the symmetric tridiagonal matrix defined in Eq. (8.3-151), for N = 5. Perform the first iteration of the implicit QR algorithm with the Wilkinson shift.



Solution 8.35
A = [2 −1 0 0 0; −1 2 −1 0 0; 0 −1 2 −1 0; 0 0 −1 2 −1; 0 0 0 −1 2]
Note that λ_k = 2(1 − cos(kπ/6)), k = 1, …, 5, i.e., λ = 2 − √3, 1, 2, 3, 2 + √3.
The Wilkinson shift using the lower 2 × 2 block yields the initial shift, μ⁽¹⁾ = 2. Define G₂,₁ to remove the second element of
{a₁₁ − μ⁽¹⁾; a₂₁} = {0; −1}  ⇒  Givens rotation [0 −1; 1 0]
Hence,
G₂,₁ = [0 −1 0 0 0; 1 0 0 0 0; 0 0 1 0 0; 0 0 0 1 0; 0 0 0 0 1]
and
A₂,₁ = G₂,₁ A G₂,₁^T = [2 1 1 0 0; 1 2 0 0 0; 1 0 2 −1 0; 0 0 −1 2 −1; 0 0 0 −1 2]
Define G₃,₂ to remove the bulge in the (3,1) and (1,3) positions:
G₃,₂ = [1 0 0 0 0; 0 0.70711 0.70711 0 0; 0 −0.70711 0.70711 0 0; 0 0 0 1 0; 0 0 0 0 1]
Hence,
A₃,₂ = G₃,₂ A₂,₁ G₃,₂^T = [2 1.41421 0 0 0; 1.41421 2 0 −0.70711 0; 0 0 2 −0.70711 0; 0 −0.70711 −0.70711 2 −1; 0 0 0 −1 2]
Define G₄,₃ to remove the bulge in the (4,2) and (2,4) positions:
G₄,₃ = [1 0 0 0 0; 0 1 0 0 0; 0 0 0 −1 0; 0 0 1 0 0; 0 0 0 0 1]
Hence,
A₄,₃ = G₄,₃ A₃,₂ G₄,₃^T = [2 1.41421 0 0 0; 1.41421 2 0.70711 0 0; 0 0.70711 2 0.70711 1; 0 0 0.70711 2 0; 0 0 1 0 2]
Define G₅,₄ to remove the bulge in the (5,3) and (3,5) positions:
G₅,₄ = [1 0 0 0 0; 0 1 0 0 0; 0 0 1 0 0; 0 0 0 0.57735 0.81650; 0 0 0 −0.81650 0.57735]
Hence,
A₅,₄ = G₅,₄ A₄,₃ G₅,₄^T = [2 1.41421 0 0 0; 1.41421 2 0.70711 0 0; 0 0.70711 2 1.22474 0; 0 0 1.22474 2 0; 0 0 0 0 2]
Note that λ₃ = 2 = μ⁽¹⁾, and the iteration yields a decoupled matrix with λ₃ on the diagonal in one iteration.
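The closed-form eigenvalues quoted at the start of the solution can be checked numerically; the sketch below assumes the tridiag(−1, 2, −1) form of A_N implied by λ_k = 2(1 − cos(kπ/(N+1))):

```python
import numpy as np

N = 5
A = 2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)   # tridiag(-1, 2, -1)

# Closed form: lambda_k = 2 (1 - cos(k pi / (N+1))), k = 1..N
k = np.arange(1, N + 1)
lam_closed = 2.0 * (1.0 - np.cos(k * np.pi / (N + 1)))

lam_num = np.sort(np.linalg.eigvalsh(A))
# For N = 5: 2 - sqrt(3), 1, 2, 3, 2 + sqrt(3)
```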

References
Ahlin, K., "Shock Response Spectrum Calculation – An Improvement of the Smallwood Algorithm," 70th Shock & Vibration Symposium, Albuquerque, New Mexico, November 15–19, 1999.
Anderson, E., et al., 1999. LAPACK Users' Guide, third ed. Society for Industrial and Applied Mathematics, Philadelphia.
Bathe, K.J., Wilson, E.L., 1973. Stability and accuracy analysis of direct integration methods. Earthq. Eng. Struct. Dyn. 1, 283–291.
Bathe, K.J., 1982. Finite Element Procedures in Engineering Analysis. Prentice-Hall, Englewood Cliffs, New Jersey.
Belytschko, T., Hughes, T.J.R. (Eds.), 1983. Computational Methods in Transient Analysis. North Holland Publishing Co., New York, New York, pp. 67–155.
Björck, A., 1996. Numerical Methods for Least Squares Problems. Society for Industrial and Applied Mathematics, Philadelphia.
Caughey, T.K., June 1960. Classical normal modes in damped linear systems. J. Appl. Mech. Trans. ASME.
Caughey, T.K., O'Kelly, M.E.J., September 1965. Classical normal modes in damped linear systems. J. Appl. Mech. Trans. ASME.
Claret, A.M., Venancio-Filho, F., 1991. A modal superposition pseudoforce method for dynamic analysis of structural systems with nonproportional damping. Earthq. Eng. Struct. Dyn. 20, 303–315.
Cronin, D.L., 1973. Numerical integration of uncoupled equations of motion using recursive digital filtering. Int. J. Numer. Method. 6, 137–140.




Cuppen, J.J.M., 1991. A divide and conquer method for the symmetric tridiagonal eigenproblem. Numer. Math. 36, 177–195. Springer-Verlag.
Cullum, J.K., Willoughby, R.A., 2002. Lanczos Algorithms for Large Symmetric Eigenvalue Computations, Vol. I: Theory. Society for Industrial and Applied Mathematics, Philadelphia.
Dahlquist, G., Björck, A., 1974. Numerical Methods. Prentice-Hall, Inc., Englewood Cliffs, New Jersey.
Dempsey, K.M., Irvine, H.M., 1978. A note on the numerical evaluation of Duhamel's integral. Earthq. Eng. Struct. Dyn. 6, 511–515.
Demmel, J.W., 1997. Applied Numerical Linear Algebra. Society for Industrial and Applied Mathematics, Philadelphia.
Dongarra, J.J. (Ed.), 1999. LAPACK Users' Guide, third ed. Society for Industrial and Applied Mathematics, Philadelphia.
Francis, J.G.F., 1961. The QR transformation: a unitary analogue to the LR transformation – parts I and II. Comput. J. 4, pp. 265–271 and 332–345.
Fromme, J.A., Golberg, M.A., August 1998. Transient response analysis of nonlinear modally coupled structural dynamics equations of motion. AIAA J. 36 (8).
Gear, C.W., 1971. Numerical Initial Value Problems in Ordinary Differential Equations. Prentice-Hall Inc., Englewood Cliffs, New Jersey.
Golub, G., Uhlig, F., 2009. The QR algorithm: 50 years later – its genesis by John Francis and Vera Kublanovskaya and subsequent developments. IMA J. Numer. Anal. 29, 467–485. Oxford University Press.
Golub, G.H., Van Loan, C.F., 2013. Matrix Computations, fourth ed. Johns Hopkins University Press, Baltimore.
Guyan, R.J., 1965. Reduction of stiffness and mass matrices. AIAA J. 3 (2).
Hasselman, T.K., November 1976. Modal coupling in lightly damped structures. AIAA J. 14 (11), 1627–1628.
Higham, N.J., 2002. Accuracy and Stability of Numerical Algorithms, second ed. Society for Industrial and Applied Mathematics, Philadelphia.


Hilber, H.M., Hughes, T.J.R., Taylor, R.L., 1977. Improved numerical dissipation for time integration algorithms in structural dynamics. Earthq. Eng. Struct. Dyn. 5, 283–292.
Hilber, H., 1976. Analysis and Design of Numerical Integration Methods in Structural Dynamics. Report No. EERC 76-29, Dissertation. College of Engineering, University of California, Berkeley.
Horn, R.A., Johnson, C.R., 1990. Matrix Analysis. Cambridge University Press, Cambridge, United Kingdom.
Hsiao, C., Kim, M.C., 1993. Dynamic loads analysis of non-classically damped structures. ASME Winter Ann. Mtg. 33, 95–103. New Orleans, LA.
Hughes, T.J.R., Belytschko, T., December 1983. A precis of developments in computational methods for transient analysis. Trans. ASME J. Appl. Mech. 50, 1033–1041.
Ibrahimbegovic, A., Wilson, E.L., 1989. Simple numerical algorithms for the mode superposition analysis of linear structural systems with nonproportional damping. Comput. Struct. 33 (2), 523–531.
ISO Standard 18431-4, 2007. "Mechanical Vibration and Shock – Signal Processing," Part 4: Shock Response Spectrum Analysis.
Kabe, A.M., September 1985. Stiffness matrix adjustment using mode data. AIAA J. 23 (9).
Kim, M.C., Kabe, A.M., Lee, S.S., 2000. Atmospheric flight gust loads analysis. J. Spacecraft Rockets 37 (4), 446–452.
Newmark, N.M., 1959. A method of computation for structural dynamics. J. Eng. Mech. Div. ASCE 85 (EM3), 67–94.
Nigam, N.C., Jennings, P.C., 1968. Digital Calculation of Response Spectra from Strong-Motion Earthquake Records. EERL Report No. 700-00. California Institute of Technology, Pasadena, California.
Parlett, B.N., 1998. The Symmetric Eigenvalue Problem. Society for Industrial and Applied Mathematics, Philadelphia.
Piersol, A.G., Paez, T.L., 2010. Harris' Shock and Vibration Handbook, sixth ed. McGraw-Hill Companies, Inc., New York, New York.
Richtmyer, R.D., Morton, K.W., 1967. Difference Methods for Initial-Value Problems, second ed. John Wiley & Sons, Inc., New York, New York.




Saad, Y., 2003. Iterative Methods for Sparse Linear Systems, second ed. Society for Industrial and Applied Mathematics, Philadelphia.
Saad, Y., 2011. Numerical Methods for Large Eigenvalue Problems, second ed. Society for Industrial and Applied Mathematics, Philadelphia.
Smallwood, D.O., May 1981. An improved recursive formula for calculating shock response spectra. Shock Vib. Bull. 51 (2), 211–217.
Stewart, G.W., 1998. Matrix Algorithms Volume I: Basic Decompositions. Society for Industrial and Applied Mathematics, Philadelphia.
Stewart, G.W., 2001. Matrix Algorithms Volume II: Eigensystems. Society for Industrial and Applied Mathematics, Philadelphia.
Trefethen, L.N., Bau III, D., 1997. Numerical Linear Algebra. Society for Industrial and Applied Mathematics, Philadelphia.
Udwadia, F.E., Esfandiari, R.S., June 1990. Nonclassically damped dynamic systems: an iterative approach. J. Appl. Mech. 57, 423–433.
Udwadia, F., March 1993. Further results on iterative approaches to the determination of the response of nonclassically damped systems. J. Appl. Mech. 60, 235–239.
Udwadia, F.E., Kumar, R., 1994a. Convergence of iterative methods for nonclassically damped systems. Appl. Math. Comput. 61, 61–97.
Udwadia, F.E., Kumar, R., 1994b. Iterative methods for non-classically damped dynamic systems. Earthq. Eng. Struct. Dyn. 23, 137–152.
Varga, R.S., 1962. Matrix Iterative Analysis. Prentice-Hall, Englewood Cliffs, New Jersey.
Veletsos, A.S., Ventura, C.E., 1986. Modal analysis of non-classically damped linear systems. Earthq. Eng. Struct. Dyn. 14, 217–243.
Watkins, D.S., 2007. The Matrix Eigenvalue Problem: GR and Krylov Subspace Methods. Society for Industrial and Applied Mathematics, Philadelphia.
Watkins, D.S., 2010. Fundamentals of Matrix Computations, third ed. John Wiley & Sons, Inc., New York, New York.
Watkins, D.S., May 2011. Francis's Algorithm. The American Mathematical Monthly, Vol. 118, No. 5. Mathematical Association of America.


Wilkinson, J.H., 1965. The Algebraic Eigenvalue Problem. Oxford University Press, New York, New York.
Wood, W.L., Bossak, M., Zienkiewicz, O.C., 1981. An Alpha Modification of Newmark’s Method. Int. J. Numer. Method. Eng. 15, 1562–1566.
Zienkiewicz, O.C., Wood, W.L., Hine, N.W., 1984. A Unified Set of Single Step Algorithms Part 1: General Formulation and Applications. Int. J. Numer. Method. Eng. 20, 1529–1552.


Index
Note: Page numbers followed by “f” indicate figures, “t” indicate tables, and “b” indicate boxes.

A
Absolute acceleration response, 260–264
Acceleration, 2–3
Admittance, 486–487
Aerodynamic instability, 570–577
Airfoil, 570
Algebraic equation
  second order, 338
Amplification factor
  displacement, 82–84
Angle of attack, 570–571
Angular momentum, 521
Angular rate, 9
Angular rotation, 20
Atan2, 322
Autocorrelation, 250, 255–256
Avoirdupois pound-mass, 27
Axes
  Cartesian, 6–7
  polar, 6–7
  principal, 10

B
Backward stability, 678–679
Backward substitution, 687–689
Base motion, 88
  independent points, 476
  multi-dof, harmonic, 477–479
  single dof, 88–89
  single dof, harmonic, 90–94
  translation plus rotation, 473–475
  unidirectional, 471–473
Beat frequency, 450, 452
Beating, 86–87, 448–458
  multi-dof, 452
Beat period, 455–456
Body, 2
Boundary condition, 4–5
Boxcar, 134–135
Boxcar and unit impulse, 143–144
Bureau International des Poids et Mesures, 2–3, 24

C
Campbell diagram, 541–542
Carrier signal, 450
Caughey, 366, 371
Caughey-O’Kelly criterion, 652
Caughey series, 373

Center of mass, 2, 359–360, 362
Center of pressure, 571
Centripetal force, 10
Change in motion, momentum, 3–4
Characteristic values, 337–338
Characteristic vectors, 337–338
Cholesky factorization, 669, 710–714
Circular frequency, 57
Classical, approximate, 400–401
Classical normal modes, 340–341, 366–380
Coefficient of friction, 62
Coincident
  acceleration, 75–77
  displacement, 73–75
Coincident component, 69–70, 73–76
Complementary projector, 744
Complex arguments, 293–294
Complex conjugate, 485, 487
Complex error functions, 293–294
Complex exponential, 68
Complex frequency, 125
Complex modal, 659–663
Complex modes, aerodynamic instability, 574–577
Complex plane, 73–74, 74f, 82, 83f, 117, 382–383
Complex stiffness, 194–200
Condition number, 679–680
Conjugate transpose, 390
Constant of proportionality, 368–369
Constraint
  holonomic, 16
  kinematic, 16
  non-holonomic, 16
  rheonomous, 16
  scleronomic, 16
Constraint modes, 476, 497
Contour integral, 259
Contrary parts, 2
Convolution, 221
  frequency domain, 142
  time domain, 142
  unit impulse, 143
Convolution integral, 141, 239
Coordinates, 5, 333
  absolute, 14–16
  boundary, 509–510




Coordinates (Continued)
  discrete, 47–48
  distributed, 19–24
  generalized, 17, 21
  independent, 17
  modal, 348–350, 357, 366, 439
  relative, 14–16, 90
  spatial, 20
  time-dependent, 21
  truncated modal, 503–517
  unconstrained, 509–510
Coordinate system
  Cartesian, 6–7
  polar, 6–7
Coriolis acceleration, 9
Coriolis force, 9
Coulomb, 62–104
Coulomb damping, 62–104, 190–192
Coulomb friction, 190
Cramer’s rule, 204, 338
Critical, 56, 58
Critical damping, 56
Critical damping ratio, 176–177, 394–395
  modal, 366
Critical ratio, 56
Cross Power Spectral Density, 488
Cut-off frequency, 503

D
D’Alembert’s Principle, 509
Damping, 55, 62–104, 173
  classical mode, 369–380
  constant of proportionality, 55
  Coulomb, 190–192
  equivalent viscous, 187–190
  fluid resistance, 192–194
  grounded, 369
  gyroscopic effects, 567
  half-power points, 76
  hysteresis, 200–205
  least-squares, 182–183
  logarithmic decrement, 178–183
  logarithmic decrement, non-sequential cycles, 180–181
  modal, 366, 439–440
  mode superposition, 369–380
  modified Caughey, 372–380
  proportional, 368–369
  proportional damping, 372
  Rayleigh, 368–369, 372–373
  structural, 194–200
  structural, coincident response, 199–200

  structural, quadrature response, 196–198
  viscous, 55–62, 355–358
  viscous, coincident response, 173–176
  viscous, half-power points, 176–178
Dashpot, 55
Degrees of freedom, 11
  rotational, 11, 358–362
Diagonal dominant, 728
Dirac delta, 136–137
Direction of motion, 2
Direct LU Factorization, 687–689
Displacement shape, 338–339
Distribution
  Gaussian, 255
  normal, 255
Domain
  modal, 439, 468
Drag, 570
Duhamel, 612
  method, 627–630
Duhamel Integral, 239
  initial conditions, 241–242
  step function, 239–241
Dynamic coupling, 362
Dynamic imbalance, 564–565
Dynamic pressure, 192–193

E
Earth, 10
Effective mass, 52
Eigenvalue, 204, 337–338
  complex, 569
  complex conjugate, 569
Eigenvalue problem, 782–868
  defective, 394
  Divide-and-Conquer method, 836–839
  gyroscopic effects, 523–524
  Inverse Iteration, 795–797
  Lanczos algorithm, 846–850
  Lanczos method, 836–837
  nonsymmetric, 850–864
  QR iteration convergence, 805
  Rayleigh Quotient iteration, 797–800
  single argument, 389–390
  symmetric, 782
Eigenvector, 204, 337–338
  complex, 383
  left, 389–392, 553, 867
  right, 337–338, 389–392, 553, 867
El Centro earthquake, 244–245
Energy
  kinetic, 52, 184–186, 439


  strain, 52, 184–186, 439
Ensemble, 250
Envelope function
  beating, 450
  whirling, 539
Ergodic, 250
Euclidean inner product, 672
Euler
  acceleration, 9
  explicit method, 617
  force, 9
  formula, 68, 127, 385–386
  method, 612, 622
  number, 56
  Second Law, 360, 521
Euler’s formula, 598
Excitation, 67
  base, 88–97
  base, harmonic, 90–94
  base, multi-dof, 471–483
  base, random, 256–260
  boxcar, 230–232
  cessation, 84–86, 95–97
  frequency sweep, 98–104
  harmonic, 68–84, 127–130, 439–440
  impulse, multi-dof, 468–470
  linear sweep, 99, 285–289
  octave sweep, 99, 279–285
  ramp, 224–227
  random, 249–250
  short transient, 466–470
  step, 221–224
  step, multi-dof, 467–468
  swept frequency, 279–294
  transient, 221
Expected value, 250–251

F
Factorization, 710–719
First-order formulation
  single dof systems, 624–626
First-order systems, 381–386
Floating point arithmetic, 677–678
Floating point representation, 676–677
Force
  centripetal, 10
  complex modal, 549–550, 559–564
  Coriolis, 9
  couple, 13–14
  damping, 439
  ergodic, 250
  Euler, 9
  external, 2

  follower, 12–13
  gravity, 10
  harmonic, 68, 437
  impulse, 437
  impulsive, 221, 233
  internal, 2
  modal, 437
  nondeterministic, 249–250
  random, multi-dof, 483–484
  reaction, 4–5
  transient, 437
Force of inactivity, 2
Forces
  impressed, 2
Forces, opposite, equal, 2
Forcing function, 67
Forward substitution, 696–697
Fourier transform, 125, 130–144, 498
  boxcar, 134–135
  constant, 138–139
  convolution, 140–142
  cosine, 139–140
  Dirac delta, 136–137
  impulse, 136–137, 137f
  multi-dof, 486
  product, 140
  sine, 139–140
  unit impulse sifting property, 137–138
Fourier transform pair, 130
Frame of reference
  inertial, 3, 6, 333–334
Francis shifts, 863–864
Free-body diagram, 334f
Freedoms, 11
Free vibration, 62–104
Frequency, 50
  circular, 57, 339
  complex, 125
  excitation, 98
  instantaneous, 99
  modulation, 86
  natural, 339
Frequency response
  acceleration, 132
  base excitation, 133–134
  displacement, 131
  multi-dof, 486–487
Frequency response functions, 69–70, 125, 131–133
Frequency sweep, 98–104
Friction, 190
  dynamic (kinetic), 62
  static, 62
Frobenius norm, 674, 714




G
Gain, modal, 440
Gaussian elimination, 669, 682, 700
Gauss, least squares, 182–183
Gauss-Seidel method, 669, 720
Generalized eigenvalue problem, 782
Generalized function, 136
Gershgorin Circle theorem, 656–657
Givens
  QR algorithm, 759
  transformation method, 755–759
Gram-Schmidt method
  classical, 746–748
  modified, 748–749
Gravitational Constant, 26
Gravity, 10
Growth factor, 706
Guyan reduction, 719
Gyroscopic
  complex modal coordinates solution, 552–559
  effects, 523–524
  moment, 524–525
Gyroscopic effects
  dynamic imbalance, 564–565
  energy dissipation, 567
  matrix, 567
Gyroscopic moments, 522

H
Half-power points, 76, 176–178
Hamilton’s Principles, 47
Harmonic force, 68
Harmonic functions, 455–456
Hermitian, 852
Hermitian operator, 390
Hertz, 50
Hessenberg, upper, 844–845, 855–856
Hooke’s Law, 5, 47–48
Householder
  transformation method, 749–755
  tridiagonalization algorithm, 812–814
Householder QR algorithm, 755
Hysteresis, 200–205
Hysteresis loop, 202–203
Hz, 50

I
Idempotent, 743
IEEE 754, 675–676
Ill conditioned, 680

Imaginary unit, 57
Impedance, 127
Impulse, 2, 221, 233–238
Impulse response function, 236
Impulsive force, 221, 233–238
Inertia, 2
  mass moment, 14
  rotational, 14
Inertial frame of reference, 3, 6
Inertial resistance, 4
Inertial, torque, 13–14
Inertia relief matrix, 509
In family, 249–250
Infinite-precision arithmetic, 293–294
Initial conditions, 50
  modal coordinates, 351
Instability, 570–577
  aerodynamic, 570–577
International Bureau of Weights and Measures, 2–3
International prototype, kilogram, 24–25
International System of Units, 2–3, 24–25
Interpolation functions, 23
Iterative methods, 719–737
  classical, 720–724
  convergence, 724–737

J
Jacobi iteration, 721
Jacobi method, 722

K
Kilogram, 2–3, 24
Kinetic energy, 52, 184–186
Kronecker delta, 372
Krylov matrix, 844

L
Lagrange’s Equations, 47
LAPACK, 696–697
  SGESVX, 708
  Users Guide, 705
Laplace transform, 125–130
  operator, 125
Launch vehicles
  pogo, 578–579
Laws of Motion, 1–5
Leading principal submatrix, 698
Least-squares, 182–183, 737–782
  normal equation, 738
Left eigenvectors, 389–392
Left singular vector, 761–762


L’Hôpital’s Rule, 60
Lift, 570
Linear sweep excitation, 99–101
Linkages, 50–51
Lipschitz continuous, 622–624
Lissajous graph, 538–539, 596, 607
Lissajous space, 536
Load path, 343–344
Loads transformation matrix (LTM), 492–493, 515
Local truncation error, 621
Logarithmic decrement, 178–183, 457–458
Lower triangular matrix, 682
LU factorization, 682–710, 750
  partial pivoting, 690–691

M
Machine precision error, 677
MacNeal, 511
Mass, 2
  base unit, 2–3
  effective, 52
  inertial, 4
  modal, 346–347, 351–352, 366, 439
  moment of inertia, 359–360
  rigid body, 362–366
  rotational inertia, 359–360
Mass matrix
  first moment terms, 365
  moments of inertia, 365
Mass moment of inertia, 14
Mass point, 3–4
Matrix
  damping, diagonal, 357
  determinate, 381–382
Matrix algebra, 335
Matrix differential equation, 335
Matrix norm, 672–673
Matrix splitting, 725–726
Matter, 2
Mean, 250
Mean square, normalized, 502
Mean square value, 250–251
  multi-dof, 483–484
Meter, 24
Miles’ equation, 260, 271–273
Modal coordinates, 348–350
Modal damping matrix, 366
Modal gain, 440
Modal mass, 351–352
Modal truncation, 351
Mode acceleration, 437, 504–507

  unconstrained systems, 507–515
Mode participation factors, 473, 479–481
Modes
  classical, 357–358, 366–380
  complex, 366, 380–402, 383, 559–561
  complex, gyroscopic effects, 524
  complex with rigid body, 401–402
  elastic, 342–343, 510
  fundamental, 342–343
  non-classical, 367
  rigid body, 342–343
Mode shape, 337–344
  linearly independent, 344
  normalization, 347–348
  normalized, 346
  orthogonal, 344–347
  rigid body, 342–343
  scale, 347
Modes of vibration, 414–415
Mode superposition, 369–372
Modulation frequency, 86
Momentum, 3–4
Momentum wheels, 517–518
Moore-Penrose inverse, 768
Motive force
  impressed, 3–4
Motte, Andrew, 1–2
Multi-degree-of-freedom
  forced vibration, 437
  systems, 333

N
Neutral position, 5
Newmark, 612
  average acceleration, 631
  central difference, 631
  comparison of methods, 636–649
  effective force, 632–633
  effective stiffness, 632–633
  Fox-Goodwin, 631
  linear acceleration, 631
  method, 630–636
  stability criterion, 635–636
Newton, 1–5
  First Law, 2–3
  Second Law, 3–4
  Third Law, 4–5
  Universal Gravitation Law, 26
Node point, 340–341
Non-deterministic, 249–250
Normal equation, 738
Normalized cycle count, 278, 502
Normal matrix, 675




Normal mode, 340–341
Null space, 672
Numerical method, 616
Numerical precision, 718
Numerical rank, 765–766
Numerical solution
  backward Euler method, 614–615
  classically damped, 650–652
  comparisons, 641–646
  differential equations of motion, 624–625
  Euler method, 612
  explicit Euler method, 617
  first-order, 663–665
  frequency response, 638–641
  general methods, 658–669
  multi-dof system, 649–669
  non-classically damped, 652–658
  numerical quadrature, 615
  one-step methods, 612–626
  rigid body response, 646–649
  Runge–Kutta, 612
  stability, 636–637
Numerical solution linear equations, 669–737

O
Object, 2
Octave sweep excitation, 99, 101–102
O’Kelly, 366, 371
Orthogonal iteration, 799–800
  method, 799–800
Orthogonality
  complex mode shapes, 390–392
  mode shapes, 344–347
Orthogonal, perpendicular, 7
Orthogonal projectors, 738

P
Parallel axis theorem, 362
Parallelogram law, 3–4
Parseval’s theorem, 251, 485
Partial fractions, 127
Partial pivoting, 690–697
  LU factorization, 690–691
Participation factor, modal, 479–481
Penrose conditions, 771–772
Permutation matrix, 690
Phase angle, 76, 81
Phasor, 73–74, 144–145
Philosophiae Naturalis Principia Mathematica, 1
Planck constant, 2–3
Pogo, 578–579
Poles, 128
Power Spectral Density (PSD), 251–256, 488
  cross, 488
  function, 256
Principal axes, 10, 363
Principia, 1
Problem sensitivity, 678–681
PSD, Power Spectral Density, 251–256, 488
Pseudo acceleration, 94, 134, 248, 256–260
Pseudo-inverse, 511, 768–772
Pseudo velocity, 94, 248

Q
QR iteration, 786–836
Quadratic equation, 56
Quadratic formula, 56, 175
Quadratic interpolant, 617–618
Quadrature
  acceleration, 75–77
  displacement, 73–75
Quadrature component, 69–70
“Quarter cycle” point, 539
QZ algorithm, 663

R
Radian/second, 50
Random response analysis, 483–500
  multi-dof, 483–484
  multi-dof acceleration, 490–492
  multi-dof base excitation, 496–500
  multi-dof implementation, 494–496
  multi-dof loads, 490–492
Range, 671–672
Rank-one transformations, 743
Rational number, beating, 451–452
Rayleigh, 47
  quotient, 186, 402–406, 791–792
Rayleigh-Ritz, 412–414
Rayleigh-Ritz procedure, 842
Rayleigh’s method, 53
Rayleigh’s Principle, 405–406
Reaction, 4–5
Reaction force, 4–5
  inertial, 6
  non-inertial, 6–7
Reaction wheels, 517–518
Reference frame, 3, 6–10
  inertial, 3, 6
  non-inertial, 6–7
Relative coordinate, 90


Relaxation parameter, 721–722
Residual, 708
Residual flexibility, 437, 517
Residue theorem, 259, 262
Resistance, 2
  inertial, 4
Resonant component, 341
Response
  beating, 86–87
  coincident, 444–447
  feedback, 517–518
  frequency functions, 131–133
  mean square, multi-dof, 498
  modal, 447
  phase angle, 81
  quadrature, 444–447
  ramp excitation, 224–227
  random base excitation, 260–264
  random excitation, time domain, 260, 270–279
  single-degree-of-freedom, 102–104
  steady-state, 91–92, 442–444
  time domain, 501–503
Response backbone, 76–77
Response function
  unit impulse, 142, 236
Response recovery
  loads, 500
  LTM, 500
  mode acceleration, 516–517
Response spectra, 242–249
Rest, 2
Retained modes, 503–504
Right line, 1
Right singular vector, 761–762
Rigid body displacement, 16
Rigid-body modes, 509–510
Rigid body vector, 473, 497
Rigid object, 2
Ritz values, 842
Ritz vectors, 842
Root mean square, 271
Roots, 381–382
Rotating systems, gyroscopic effects and excitation, 567
Rotational momentum, 11
Rotational motion, 2
RS, Response Spectra, 242–249
Runge–Kutta, 612

S
Schwendler, 511
Second, 24–25

Second Law, 12–14
Second order, 665–669
Shake table, 482–483
Shape of vibration, 21
Shock Response Spectra, 242–249
SI, 2–3, 24–25
Sifting property, 137–138, 253
Simpson’s quadrature rule, 618–619
Simultaneous iteration method, 799–800
Single-degree-of-freedom system, 47
Singular value, 673
  decomposition, 511, 760–772
  theorem, 760–768
Skew-symmetric matrix, 566
Spectral line, 133
Speed, 2
SRS, Shock Response Spectra, 242–249, 245f
Stable, algorithm, 679
Standard deviation, 250–251
Standard gravitational acceleration, 26
Static coupling, 362
Static imbalance, 564
Stationary, 250
Stiffness
  complex, 194–200
  constant of proportionality, 48–49
  modal, 346–347, 366, 439
  shear center, 361
Strain energy, 52, 184–186
Structural damping, 194–200
Structural damping factor, 194–200
Subspace iteration method, 799–800
Successive over-relaxation method, 669
Superposition, 221, 233–238
  modal responses, 352
Sweep rate
  effects, 458–466, 479
  linear, 99
  octave, 99
Swept frequency
  closed form solutions, 290–294
Symmetric positive-definite, 710–719

T
Taylor series, 178
  expansion, 621
Time domain random response analysis, 270–279
Time domain root mean square computation, 273–279
Timoshenko, 370
Torque, 21–22
Torque, gyroscopic moments, 521
Trace, 416, 596, 674




Transfer function, 125
Trapezoidal, 631
Triangle inequality, 704
Tridiagonal reduction, 808–812
Tripartite plot, 248
Truncated modes, mode acceleration, 504–507
Turbo machinery, 523

U
Unit impulse, 136
Unit impulse and boxcar, 143–144
Units, 24–27
  base, 24
  derived, 24
  foot (ft), 25
  hertz (Hz), 50
  kilogram (kg), 2–3, 25
  meter (m), 2–3, 24
  newton (N), 2–3, 25
  pound (lb), 25
  pound-force (lbf), 25
  pound-mass (lbm), 25
  second (s), 2–3, 24
  slug, 25
Units, conversion, 27
Universal Gravitation Law, 26
Unstable
  aeroelastic flutter, 574
Upper triangular matrix, 682
US Customary Units, 25–27

V
Vandermonde, 373

Variance, 250–251
Vector iteration, 791–792
Velocity, 2
Vibration, 47–53
  damped, 57–58
  forced, 67–87
  free, 62–104
  modes, 402–415
  non-oscillatory, 58–62
  oscillatory, 57–58
  shape, 21
  viscous damping, 355–358
Viscous damping, 55–62

W
Weighting matrices, 344
Well conditioned, 680
Whirl, 565
  backward, 563–564
  forward, 563–564
Whirling trajectory, 534–536
Wiener-Khinchin theorem, 256
Work, 184–186
Work–Energy theorem, 185

Z
z-transform, 638